# [Official] NVIDIA RTX 3090 Owner's Club



## zhrooms

_Last Updated: October 23, 2022_

*Note: This content is licensed under Creative Commons 3.0 (effectively Attribution-NonCommercial-NoDerivatives). You are free to copy and redistribute this material only if the following criteria are met: 1) You must give appropriate credit by linking back to this thread. 2) You may not use this material for commercial purposes or place it on a for-profit website with ads. 3) You may not create derivative works based on this material.*

*NVIDIA GeForce® RTX 3090*

*RTX 3070 / Ti Owner's Club*
*RTX 3080 Owner's Club*
*RTX 3080 Ti Owner's Club*
*→ RTX 3090 Owner's Club*
*RTX 3090 Ti Owner's Club*

*Click here to join the discussion on Discord* or join directly through the *Discord app* with the code *kkuFR3d*









Source: *NVIDIA*

*SPECS (Click Spoiler)*



Spoiler






Rich (BB code):


 
   *Architecture* Ampere
   *Chip* GA102-300-A1
   *Transistors* 28,300 million
   *Die Size* 628 mm²
   *Manufacturing Process* Samsung 8nm

   *CUDA Cores* 5248 (10496)
   *TMUs* 328
   *ROPs* 112
   *SM Count* 82
   *Tensor Cores* 328
   *GigaRays* -- GR/s

   *Core Clock* 1395 MHz
   *Boost Clock* 1695 MHz
   *Memory* 24GB GDDR6X
   *Memory Bus* 384-bit
   *Memory Clock* 1219 MHz / 19504 MHz
   *Memory Bandwidth* 936 GB/s
   *External Power Supply* 12-Pin
   *TDP* 350W

   *DirectX* 12 Ultimate (Feature Level 12_2)
   *OpenGL* 4.6
   *OpenCL* 2.0
   *Vulkan* 1.2
   *CUDA Compute Capability* 8.6

   *Interface* PCIe 4.0 x16
   *Connectors* 1x HDMI 2.1, 3x DisplayPort 1.4a
   *Dimensions* 313 x 138mm (3-Slot)

   *Price* $1499 US

   *Release Date* September 24, 2020




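The headline memory-bandwidth figure in the spec table can be cross-checked with simple arithmetic. A minimal sketch (my own helper, not an official formula; the function name is made up):

```python
# Peak GDDR bandwidth = effective transfer rate x bus width in bytes.
# Sanity-checks the "936 GB/s" figure in the spec table above.
def gddr_bandwidth_gbs(effective_mtps: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s given effective rate (MT/s) and bus width (bits)."""
    return effective_mtps * 1e6 * (bus_bits // 8) / 1e9

print(round(gddr_bandwidth_gbs(19504, 384)))  # RTX 3090: 19504 MT/s, 384-bit → 936
print(round(gddr_bandwidth_gbs(19000, 320)))  # RTX 3080: 19000 MT/s, 320-bit → 760
```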



Rich (BB code):


RTX 4090    | AD102-300 |  5nm | 608mm² | 76.3 BT | 16384 CCs | 512 TMUs | 176 ROPs | 128 SMs | 2520 MHz |  24GB | 2048MB x 12 | GDDR6X | 384-bit | 1008 GB/s | 450W⠀⠀
RTX 4080    | AD103-300 |  5nm | 379mm² | 45.9 BT |⠀ 9728 CCs | 304 TMUs | 112 ROPs | ⠀76 SMs | 2505 MHz |  16GB | 2048MB x 8  | GDDR6X | 256-bit | ⠀716 GB/s | 320W⠀⠀
RTX 3090 Ti | GA102-350 |  8nm | 628mm² | 28.3 BT | 10752 CCs | 336 TMUs | 112 ROPs | ⠀84 SMs | 1865 MHz |  24GB | 2048MB x 12 | GDDR6X | 384-bit | 1008 GB/s | 450W⠀⠀
RTX 3090    | GA102-300 |  8nm | 628mm² | 28.3 BT | 10496 CCs | 328 TMUs | 112 ROPs | ⠀82 SMs | 1695 MHz |  24GB | 1024MB x 24 | GDDR6X | 384-bit | ⠀936 GB/s | 350W⠀⠀
RTX 3080 Ti | GA102-250 |  8nm | 628mm² | 28.3 BT | 10240 CCs | 320 TMUs | 112 ROPs | ⠀80 SMs | 1665 MHz |  12GB | 1024MB x 12 | GDDR6X | 384-bit | ⠀912 GB/s | 320W⠀⠀
RTX 3080    | GA102-200 |  8nm | 628mm² | 28.3 BT |⠀ 8704 CCs | 272 TMUs |  96 ROPs | ⠀68 SMs | 1710 MHz |  10GB | 1024MB x 10 | GDDR6X | 320-bit | ⠀760 GB/s | 320W

*Note:* Gaming performance on Ampere and later does not scale linearly with CUDA core count when compared with previous generations.
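The note above can be put in numbers. A sketch based on the commonly cited per-SM FP32 figures (my own summary, not NVIDIA's official accounting):

```python
# Ampere exposes 128 FP32 lanes per SM versus Turing's 64, so the marketed
# "CUDA core" counts doubled without game performance doubling.
FP32_PER_SM = {"Turing": 64, "Ampere": 128}

def cuda_cores(arch: str, sm_count: int) -> int:
    return FP32_PER_SM[arch] * sm_count

print(cuda_cores("Ampere", 82))  # RTX 3090, 82 SMs → 10496
print(cuda_cores("Turing", 68))  # RTX 2080 Ti, 68 SMs → 4352
```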


Rich (BB code):


RTX 2080 Ti | TU102-300 | 12nm | 754mm² | 18.6 BT |⠀ 4352 CCs  | 272 TMUs |  88 ROPs | ⠀68 SMs | 1635 MHz |  11GB | 1024MB x 11 | GDDR6  | 352-bit | ⠀616 GB/s | 250W
RTX 2080 S  | TU104-450 | 12nm | 545mm² | 13.6 BT |⠀ 3072 CCs  | 192 TMUs |  64 ROPs | ⠀48 SMs | 1815 MHz |   8GB | 1024MB x 8  | GDDR6  | 256-bit | ⠀496 GB/s | 250W
RTX 2080    | TU104-400 | 12nm | 545mm² | 13.6 BT |⠀ 2944 CCs  | 184 TMUs |  64 ROPs | ⠀46 SMs | 1710 MHz |   8GB | 1024MB x 8  | GDDR6  | 256-bit | ⠀448 GB/s | 215W
GTX 1080 Ti | GP102-350 | 16nm | 471mm² | 12.0 BT |⠀ 3584 CCs  | 224 TMUs |  88 ROPs | ⠀28 SMs | 1582 MHz |  11GB | 1024MB x 11 | GDDR5X | 352-bit | ⠀484 GB/s | 250W
GTX 1080    | GP104-400 | 16nm | 314mm² |  7.2 BT | ⠀2560 CCs  | 160 TMUs |  64 ROPs | ⠀20 SMs | 1733 MHz |   8GB | 1024MB x 8  | GDDR5X | 256-bit | ⠀320 GB/s | 180W
GTX 980 Ti  | GM200-310 | 28nm | 601mm² |  8.0 BT |⠀ 2816 CCs  | 172 TMUs |  96 ROPs | ⠀22 SMs | 1076 MHz |   6GB |  512MB x 12 | GDDR5  | 384-bit | ⠀336 GB/s | 250W
GTX 980     | GM204-400 | 28nm | 398mm² |  5.2 BT |⠀ 2048 CCs  | 128 TMUs |  64 ROPs | ⠀16 SMs | 1216 MHz |   4GB |  512MB x 8  | GDDR5  | 256-bit | ⠀224 GB/s | 165W
GTX 780 Ti  | GK110-425 | 28nm | 551mm² |  7.1 BT |⠀ 2880 CCs  | 240 TMUs |  48 ROPs | ⠀15 SMs |  928 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit |⠀ 336 GB/s | 250W
GTX 780     | GK110-300 | 28nm | 551mm² |  7.1 BT | ⠀2304 CCs  | 192 TMUs |  48 ROPs | ⠀12 SMs |  900 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit |⠀ 288 GB/s | 250W
GTX 680     | GK104-400 | 28nm | 294mm² |  3.5 BT |⠀ 1536 CCs  | 128 TMUs |  32 ROPs |  ⠀8 SMs | 1058 MHz |   2GB |  256MB x 8  | GDDR5  | 256-bit | ⠀192 GB/s | 200W
GTX 580     | GF110-375 | 40nm | 520mm² |  3.0 BT |  ⠀512 CCs  |  64 TMUs |  48 ROPs | ⠀16 SMs |  772 MHz | 1.5GB |  128MB x 12 | GDDR5  | 384-bit | ⠀192 GB/s | 250W


*ASUS*
AsusTek Computer (stylised as ASUS) was founded in Taipei, Taiwan in 1989, currently headquartered in Taipei, Taiwan.


Model    | Length | Slot | Fan    | HDMI | BIOS | Connector | Power Limit | PCB       | PWM     | GPU Stage | VRAM Stage | MPN
Strix OC | 319mm  | 2.90 | 3      | 2    | 2    | 3x8-Pin   | 390/480W    | Custom    | MP2888A | 22        |            | 90YV0F93-M0NM00
TUF OC   | 300mm  | 2.60 | 3      | 2    | 2    | 2x8-Pin   | 350/375W    | Custom    | uP9512R | 20        |            | 90YV0FD1-M0NM00
EKWB     | 216mm  | 2.00 | Water  | 1    | 1    | 2x8-Pin   | 350/375W    | Reference |         | 20        |            | 90YV0F80-M0NM00
Turbo    | 268mm  | 2.00 | Blower | 1    | 1    | 2x8-Pin   | 350/350W    | Custom    |         | 18        |            | 90YV0FK0-M0NB00

*EVGA*
EVGA Corporation was founded in California, United States in 1999, currently headquartered in California, United States.


Model      | Length | Slot | Fan    | HDMI | BIOS | Connector | Power Limit | PCB    | PWM              | GPU Stage | VRAM Stage | MPN
Kingpin    | 289mm  | 2.00 | Hybrid | 1    | 3    | 3x8-Pin   | 430/520W    | Custom | MP2888A          | 23        |            | 24G-P5-3998-KR
FTW3 Ultra | 289mm  | 2.00 | Water  | 1    | 2    | 3x8-Pin   | 420/500W    | Custom | NCP81610/uP9511R | 23        |            | 24G-P5-3989-KR
FTW3 Ultra | 289mm  | 2.00 | Hybrid | 1    | 2    | 3x8-Pin   | 420/500W    | Custom | NCP81610/uP9511R | 23        |            | 24G-P5-3988-KR
FTW3 Ultra | 300mm  | 2.75 | 3      | 1    | 2    | 3x8-Pin   | 420/500W    | Custom | NCP81610/uP9511R | 23        |            | 24G-P5-3987-KR
XC3 Ultra  | 263mm  | 2.00 | Water  | 1    | 1    | 2x8-Pin   | 350/366W    | Custom | NCP81610/uP9511R | 18        |            | 24G-P5-3979-KR
XC3 Ultra  | 263mm  | 2.00 | Hybrid | 1    | 1    | 2x8-Pin   | 350/366W    | Custom | NCP81610/uP9511R | 18        |            | 24G-P5-3978-KR
XC3 Ultra  | 285mm  | 2.20 | 3      | 1    | 1    | 2x8-Pin   | 350/366W    | Custom | NCP81610/uP9511R | 18        |            | 24G-P5-3975-KR
XC3        | 285mm  | 2.20 | 3      | 1    | 1    | 2x8-Pin   | 350/366W    | Custom | NCP81610/uP9511R | 18        |            | 24G-P5-3973-KR
XC3 Black  | 285mm  | 2.20 | 3      | 1    | 1    | 2x8-Pin   | 350/366W    | Custom | NCP81610/uP9511R | 18        |            | 24G-P5-3971-KR
*Note:* EVGA started shipping cards with NCP81610 instead of uP9511R in early 2021.

*GALAX | KFA2* - Not available in North America
GALAXY was founded in Hong Kong, China in 1994. GALAXY and its European brand KFA2 (Kick Friggin Ass) merged in 2014 to form GALAX as a single unified brand; the KFA2 name still exists for the European market, but all designs are GALAX. Currently headquartered in Hong Kong, China.


Model | Length | Slot | Fan | HDMI | BIOS | Connector | Power Limit | PCB       | PWM     | GPU Stage       | VRAM Stage       | MPN
HOF   | 339mm  | 3.40 | 3   | 1    | 2    | 3x8-Pin   |             | Custom    |         | 22×70A TDA21470 | 4×50A AOZ5311NQI | 39NXM5MD3BNO
SG    | 317mm  | 3.05 | 3   | 1    | 1    | 2x8-Pin   | 350/350W    | Reference | uP9511R | 18              |                  | 39NSM5MD1GNA
EX    |        |      |     |      |      | 2x8-Pin   | 350/350W    | Reference |         |                 |                  | 39NXM5MD1JNA

*GIGABYTE*
GIGA-BYTE Technology (stylised as GIGABYTE) was founded in Taipei, Taiwan in 1986, currently headquartered in Taipei, Taiwan and California, United States.


Model      | Length | Slot | Fan    | HDMI | BIOS | Connector | Power Limit | PCB    | PWM     | GPU Stage | VRAM Stage | MPN
Xtreme     | 319mm  | 3.50 | 3      | 3    | 2    | 3x8-Pin   | 420/450W    | Custom | uP9512R | 22        |            | GV-N3090AORUS X-24GD
Waterforce | 252mm  | 2.00 | Water  | 3    | 2    | 2x8-Pin   | 370/390W    | Custom | uP9512R | 22        |            | GV-N3090AORUSX WB-24GD
Waterforce | 252mm  | 2.00 | Hybrid | 3    | 2    | 2x8-Pin   | 370/390W    | Custom | uP9512R | 22        |            | GV-N3090AORUSX W-24GD
Master 2.0 | 319mm  | 3.50 | 3      | 3    | 2    | 3x8-Pin   | 370/390W    | Custom | uP9512R | 22        |            | GV-N3090AORUS M-24GD
Master     | 319mm  | 3.50 | 3      | 3    | 2    | 2x8-Pin   | 370/390W    | Custom | uP9512R | 22        |            | GV-N3090AORUS M-24GD
Gaming OC  | 320mm  | 2.75 | 3      | 2    | 2    | 2x8-Pin   | 370/390W    | Custom | uP9512R | 19        |            | GV-N3090GAMING OC-24GD
Vision OC  | 320mm  | 2.75 | 3      | 2    | 1    | 2x8-Pin   | 370/390W    | Custom | uP9512R | 19        |            | GV-N3090VISION OC-24GD
Eagle OC   | 320mm  | 2.80 | 3      | 2    | 1    | 2x8-Pin   | 350/385W    | Custom | uP9512R | 19        |            | GV-N3090EAGLE OC-24GD
Turbo      | 267mm  | 2.00 | Blower | 2    | 1    | 2x8-Pin   | 350/350W    | Custom | uP9512R | 19        |            | GV-N3090TURBO-24GD

*INNO3D*
InnoVISION Multimedia was founded in Hong Kong, China in 1989, and is primarily recognized for its graphics cards marketed under the Inno3D brand; acquired by PC Partner in 2008, currently headquartered in Hong Kong, China.


Model            | Length | Slot | Fan   | HDMI | BIOS | Connector | Power Limit | PCB    | PWM      | GPU Stage | VRAM Stage | MPN
iChill Frostbite | 226mm  | 2.00 | Water | 1    | 1    | 2x8-Pin   | 370/370W    | Custom | NCP81610 | 18        |            | C3090-246XX-1880FB
iChill X4        | 300mm  | 3.00 | 3     | 1    | 1    | 2x8-Pin   | 370/370W    | Custom | NCP81610 | 18        |            | C30904-246XX-1880VA36
iChill X3        | 300mm  | 3.00 | 3     | 1    | 1    | 2x8-Pin   | 370/370W    | Custom | NCP81610 | 18        |            | C30903-246XX-1880VA37
Gaming X3        | 300mm  | 3.00 | 3     | 1    | 1    | 2x8-Pin   | 350/370W    | Custom | NCP81610 | 18        |            | N30903-246X-1880VA37N
X3               | 300mm  | 2.00 | 3     | 1    | 1    | 2x8-Pin   | 350/___W    | Custom | NCP81610 |           |            | N30903-246X-1880VA44

*MSI*
Micro-Star International (stylised MSI) was founded in Taipei, Taiwan in 1986, currently headquartered in Taipei, Taiwan.


Model         | Length | Slot | Fan    | HDMI | BIOS | Connector | Power Limit | PCB    | PWM     | GPU Stage | VRAM Stage | MPN
Suprim X      | 336mm  | 3.05 | 3      | 1    | 2    | 3x8-Pin   | 420/450W    | Custom | uP9512R | 20        |            | V388-010
Gaming X Trio | 323mm  | 2.80 | 3      | 1    | 1    | 3x8-Pin   | 370/380W    | Custom | uP9512R | 18        |            | V388-011R
Ventus OC     | 305mm  | 2.85 | 3      | 1    | 1    | 2x8-Pin   | 350/350W    | Custom | uP9512R | 18        |            | V388-002R
Aero          | 300mm  | 2.00 | Blower | 1    | 1    | 2x8-Pin   | 350/350W    | Custom |         | 18        |            |

*NVIDIA*
Nvidia Corporation (stylised nVIDIA) was founded in California, United States in 1993, currently headquartered in California, United States.


Model            | Length | Slot | Fan | HDMI | BIOS | Connector | Power Limit | PCB    | PWM     | GPU Stage | VRAM Stage | MPN
Founders Edition | 313mm  | 3.00 | 2   | 1    | 1    | 12-Pin    | 350/400W    | Custom | MP2888B | 20        |            | 900-1G136-2510-000

*PALIT | GAINWARD* - Not available in North America
Palit Microsystems (stylised PaLiT) was founded in Taipei, Taiwan in 1988, acquired the Gainward brand and company in 2005, currently headquartered in Taipei, Taiwan.


Model        | Length | Slot | Fan | HDMI | BIOS | Connector | Power Limit | PCB       | PWM      | GPU Stage | VRAM Stage | MPN
GameRock OC  | 304mm  | 3.00 | 3   | 1    | 2    | 3x8-Pin   | 420/470W    | Custom    | NCP81610 | 22        |            | NED3090H19SB-1021G
Phantom+ GS  | 304mm  | 2.70 | 3   | 1    | 2    | 3x8-Pin   | 420/470W    | Custom    | NCP81610 | 22        |            | NED3090H19SB-1021M
Phantom GS   | 304mm  | 2.70 | 3   | 1    | 2    | 3x8-Pin   | 420/470W    | Custom    | NCP81610 | 22        |            | NED3090H19SB-1021P
GamingPro OC | 294mm  | 2.70 | 3   | 1    | 1    | 2x8-Pin   | 350/365W    | Reference | NCP81610 | 18        |            | NED3090S19SB-132BA
Phoenix GS   | 294mm  | 2.70 | 3   | 1    | 1    | 2x8-Pin   | 350/365W    | Reference | NCP81610 | 18        |            | NED3090S19SB-132BX

*PNY*
PNY Technologies was founded in New York, United States in 1985, currently headquartered in New Jersey, United States.


Model         | Length | Slot | Fan | HDMI | BIOS | Connector | Power Limit | PCB       | PWM      | GPU Stage | VRAM Stage | MPN
XLR8 Revel    | 294mm  | 2.80 | 3   | 1    | 1    | 2x8-Pin   | 350/365W    | Reference | NCP81610 | 18        |            | VCG309024TFXPPB
XLR8 Uprising | 317mm  | 3.00 | 3   | 1    | 1    | 2x8-Pin   | 350/365W    | Reference | NCP81610 | 18        |            | VCG309024TFXMPB

*ZOTAC*
ZOTAC is under the umbrella of PC Partner, and was founded in Hong Kong, China in 2006, currently headquartered in Hong Kong, China.


Model       | Length | Slot | Fan   | HDMI | BIOS | Connector | Power Limit | PCB    | PWM     | GPU Stage | VRAM Stage | MPN
ArcticStorm | 302mm  | 2.00 | Water | 1    | 1    | 3x8-Pin   |             | Custom |         | 20        |            | ZT-A30900Q-30P
AMP Extreme | 356mm  | 3.00 | 3     | 1    | 2    | 3x8-Pin   |             | Custom |         |           |            | ZT-A30900B-10P
AMP Core    | 328mm  | 2.95 | 3     | 1    | 1    | 3x8-Pin   |             | Custom |         |           |            | ZT-A30900C-10P
Trinity OC  | 318mm  | 2.90 | 3     | 1    | 1    | 2x8-Pin   | 350/350W    | Custom | uP9511R | 18        |            | ZT-A30900J-10P

*TECHPOWERUP | GPU-Z*

Download TechPowerUp GPU-Z

*NVIDIA | NVFLASH*

Download NVIDIA NVFlash

*BIOS | ROM*

TechPowerUp BIOS Collection < Verified

TechPowerUp BIOS Collection < Unverified

*FLASH | GUIDE (Click Spoiler)*



Spoiler













*└ Step 01 of 27 - Download NVFlash ┘ *










*└ Step 02 of 27 - Downloads Folder ┘ *










*└ Step 03 of 27 - Open Zip File ┘ *










*└ Step 04 of 27 - Copy Files ┘ *










*└ Step 05 of 27 - Create New Folder ┘ *










*└ Step 06 of 27 - Name Folder ┘ *










*└ Step 07 of 27 - Paste Files ┘ *










*└ Step 08 of 27 - Installation Successful ┘ *










*└ Step 09 of 27 - Find BIOS ┘ *










*└ Step 10 of 27 - Download BIOS ┘ *










*└ Step 11 of 27 - Name BIOS ┘ *










*└ Step 12 of 27 - Copy or Cut BIOS ┘ *










*└ Step 13 of 27 - Paste BIOS ┘ *










*└ Step 14 of 27 - Download Successful ┘ *










*└ Step 15 of 27 - Before Flash ┘ *










*└ Step 16 of 27 - Maximum Power Limit (330W) ┘ *



















*└ Step 17 of 27 - Starting Command Prompt (Administrator) ┘ *










*chdir C:\nvflash

└ Step 18 of 27 - Changing Directory ┘ *










*nvflash64 --protectoff

└ Step 19 of 27 - Disable Flash Protection ┘ *










*nvflash64 --save Partner2080TiModel.rom

└ Step 20 of 27 - (Optional) BIOS Backup ┘ *










*└ Step 21 of 27 - BIOS Saved ┘ *










*nvflash64 -6 Partner2080TiModel.rom

Y

└ Step 22 of 27 - Flash BIOS ┘ *










*Y

└ Step 23 of 27 - Confirm Update ┘ *










*exit

└ Step 24 of 27 - Flash Completed ┘ *










*└ Step 25 of 27 - After Flash ┘ *










*└ Step 26 of 27 - Maximum Power Limit (380W) ┘ *
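For a sense of what this flash buys, here is the arithmetic on the example figures from steps 16 and 26 (330 W before, 380 W after; your card's limits will differ):

```python
# Power-limit headroom gained by the example flash in this guide.
before_w, after_w = 330, 380          # figures from steps 16 and 26
uplift_pct = (after_w / before_w - 1) * 100
print(f"+{uplift_pct:.1f}% maximum power limit")  # → +15.2% maximum power limit
```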










*chdir C:\nvflash

nvflash64 --protecton

exit

└ Step 27 of 27 - (Optional) Enable Flash Protection ┘ *



*OVERCLOCKING | TOOLS*

Download ASUS GPUTweak III

Download Colorful iGame Center

Download EVGA Precision X1

Download Gainward EXPERTool

Download Galax/KFA2 Xtreme Tuner Plus

Download Gigabyte AORUS Engine

Download Inno3D TuneIT

Download MSI Afterburner

Download Palit ThunderMaster

Download PNY Velocity X

Download Zotac FireStorm*


----------



## MrTOOSHORT

In for a 3090 on the 24th...


----------



## Shawnb99

In for one at some point. Need Heatkiller to come out with a block first, refuse to buy EK


----------



## domenic

So is the Founders Edition “compact” board layout considered reference? Are the AIBs going to have an “AIB reference” or is it up to each of them to cook up their own custom board layouts? Also wonder how this will impact water blocks if there isn't a real reference layout?


----------



## Forks

My body is ready


----------



## keeph8n

Ready for two 3090s


----------



## iamjanco

Lol, the forum might not be back up and running by then. If they're still using the same schedule, they're about a week off still from going dark.


----------



## roccale

Here I am.


----------



## Foxrun

I'm in. I was throwing money at the screen but big daddy Jensen wouldn't take it yet.


----------



## kx11

purchase link ?


----------



## roccale

September 24, not now, on the Nvidia site.


----------



## HeadlessKnight

In for one, then if not enough two.


----------



## Bastiaan_NL

Shawnb99 said:


> In for one at some point. Need Heatkiller to come out with a block first, refuse to buy EK


This for sure!
I know I'm in at some point, but not without watercooling it with a Heatkiller block.


----------



## nasmith2000

I'm in, probably a hybrid....evga.


----------



## kx11

Bastiaan_NL said:


> This for sure!
> I know I'm in at some point, but not without watercooling it with a Heatkiller block.



i bet Bitspower got them for next week already


----------



## Sonac

3090 ftw!


----------



## roccale

Really ugly connector and a bad position. I hope an adapter will be included in the package.


----------



## Zurv

ugh.. i should have sold my RTX Titans before today 

I'll likely go SLI 3090 in both my gaming systems.

The big question for me is.. where are the screws mr nvidia? I hope it isn't a glue-fest on the FE models?
I'll 100% be watercooling (as always)


----------



## Nineball_Seraph

In for one. Not sure if I'm going to get a Founders card and wait for Heatkiller blocks, or get a Hydro Copper card. Anyone got any stories about the Hydro Copper series? Are they decent blocks?


----------



## Zurv

kx11 said:


> i bet Bitspower got them for next week already


I already talked to them. 
(this was from monday)



> Hello Jordan,
> 
> Glad to get your email.
> We will have the NVidia 3090 water block in about 2 weeks once the card releases.
> And we will offer you the pre-order link to the water block this week.
> Any question, feel free to contact us.
> 
> 
> Best regards!
> 
> Lily Wong
> Skype: shopservice.bitspowertw
> Tel: +886-4-8819330-122
> VOIP: 07010189337
> https://shop.bitspower.com/


----------



## kx11

now Kingpin 3090 sounds like a damn good deal


----------



## kx11

Zurv said:


> I already talked to them.
> (this was from monday)



That's my friend Lily, from the horrendous 2080 Ti Lightning Z glitch that locks the core clock to 1350MHz when the fans are removed. She helped me a little and sent me shots from their lab proving their blocks work very well, and they did.


----------



## originxt

Based on the dimensions of my heatkiller block for 1080ti width and adding 6mm to the 3090, I can't even fit this in my 011d case. Big sad.


----------



## Nineball_Seraph

originxt said:


> Based on the dimensions of my heatkiller block for 1080ti width and adding 6mm to the 3090, I can't even fit this in my 011d case. Big sad.


Wait what?


----------



## originxt

Nineball_Seraph said:


> Wait what?


At least as measured on my hardware, the 1080 Ti Heatkiller IV block is 118mm, ~6mm wider than the 1080 Ti itself. The 3090 is 138mm wide, so adding 6mm (estimated from the previous Heatkiller block) puts it at 144mm, which is a bit too wide for my case.

Sorry if it seemed confusing.
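His estimate works out as a quick arithmetic check (all figures are his rough measurements, not vendor specs; the helper name is made up):

```python
# Estimated width of a full-cover-blocked 3090, per originxt's numbers:
# card width plus the overhang his 1080 Ti Heatkiller IV block added.
def estimated_block_width(card_width_mm: int, overhang_mm: int) -> int:
    return card_width_mm + overhang_mm

print(estimated_block_width(138, 6))  # 3090 width + ~6mm overhang → 144
```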


----------



## Zurv

originxt said:


> At least as measured on my hardware, the 1080 Ti Heatkiller IV block is 118mm, ~6mm wider than the 1080 Ti itself. The 3090 is 138mm wide, so adding 6mm (estimated from the previous Heatkiller block) puts it at 144mm, which is a bit too wide for my case.
> 
> Sorry if it seemed confusing.


The FE PCBs are really short, and a very different setup. Other than using a GPU-only block (i.e., not cooling anything else), old blocks are likely worthless for this new card.


----------



## Tobiman

I might take the plunge if this card is worth it compared to the RTX 3080.


----------



## originxt

Zurv said:


> The FE PCBs are really short, and a very different setup. Other than using a GPU-only block (i.e., not cooling anything else), old blocks are likely worthless for this new card.


Don't plan on reusing the block. I'm just making the assumption that a full 3090 block will also add maybe 6mm of extra width to the card. You say the FE PCB is really short? Any chance you have the dimensions on hand? My case is pretty cramped for space lol.

Edit: Nvm, just saw the supposedly leaked PCB. Holy **** that thing is small.


----------



## J7SC

...still a bit confused about the 3090 'actual' cuda core count (ie tables on p1 here, some vendors vs Nvidia page on 3090 below). Also, w-cooling these makes even more sense than previous gens not only re. heat, but also re. length


----------



## originxt

J7SC said:


> ...still a bit confused about the 3090 'actual' cuda core count (ie tables on p1 here, some vendors vs Nvidia page on 3090 below). Also, w-cooling these makes even more sense than previous gens not only re. heat, but also re. length


Just saw leaked pictures of the pcb... It's small... Like very small. It almost looks comical compared to the sizes of previous gen cards.


----------



## JBTown

J7SC said:


> ...still a bit confused about the 3090 'actual' cuda core count (ie tables on p1 here, some vendors vs Nvidia page on 3090 below). Also, w-cooling these makes even more sense than previous gens not only re. heat, but also re. length


They doubled the FP32 units per SM, so the "new" counts are effectively double-counted versus the old method. The actual performance difference will be far less than double.


----------



## mouacyk

J7SC said:


> ...still a bit confused about the 3090 'actual' cuda core count (ie tables on p1 here, some vendors vs Nvidia page on 3090 below). Also, w-cooling these makes even more sense than previous gens not only re. heat, but also re. length


NVidia is doubling the core count for the 2x FP32 throughput. Was scratching my head at first as well, when I went to NVidia's site and saw over 9000 cores!


----------



## J7SC

mouacyk said:


> NVidia is doubling the core count for the 2x FP32 throughput. Was scratching my head at first as well, when I went to NVidia's site and saw over 9000 cores!





JBTown said:


> They doubled the FP32 units per SM, so the "new" counts are effectively double-counted versus the old method. The actual performance difference will be far less than double.



Thanks to both of you  ...GN's first vid just out on this also helped explain some of it (2 calcs per cycle, other)










originxt said:


> Just saw leaked pictures of the pcb... It's small... Like very small. It almost looks comical compared to the sizes of previous gen cards.



...yeah, must be the air-cooooler (opposed to the actual PCB), reference design sizes below (from GN video)


----------



## sblantipodi

But how much faster should this card be than a 3080?


----------



## skupples

I just wanna say...


I sure hope the outsiders are nicer to the owners than they normally are.


in days of old, we'd have already locked this thread twice due to personal attacks & trolling.


----------



## iamjanco

skupples said:


> I just wanna say...
> 
> I sure hope the outsiders are nicer to the owners than they normally are.
> 
> in days of old, we'd have already locked this thread twice due to personal attacks & trolling.


We don't actually have any owners yet, do we...?


----------



## mouacyk

Therefore, we can't technically attack them... unless Jensen has already brought us all to the future, but forgot the cards.


----------



## J7SC

mouacyk said:


> Therefore, we can't technically attack them... unless Jensen has already brought us all to the future, but forgot the cards.


 
...you mean Jensen's stove which also popped up for the G/A100 launch and now baked up the 3090 isn't really a stove, but a time machine :thinking:


----------



## NYU87

In for one.


----------



## ttnuagmada

I'm probably going to spring for one of these, but I can't help but think we'll see a 3080 Ti with a 352 bit bus and 22gigs as soon as Big Navi gets here for like 500 dollars less.


----------



## GoldCartGamer

I am left wondering do I get the 3090 or wait on the inevitable 3080 Ti.


----------



## geriatricpollywog

Does anybody know which XOC bios will work on my reference 3090?


----------



## J7SC

0451 said:


> Does anybody know which XOC bios will work on my reference 3090?


----------



## TK421

but there will be a bigger die right? 3090 is not fully unlocked die




also do we confirm that the vram is exclusively micron?


----------



## inedenimadam

Cash in hand...was looking for a buy it now button...but only notify lists for pre-orders.


color me disappointed, but not surprised.


----------



## ttnuagmada

TK421 said:


> but there will be a bigger die right? 3090 is not fully unlocked die
> 
> 
> 
> 
> also do we confirm that the vram is exclusively micron?


The fully unlocked one is 10752 shaders. I could see Nvidia putting the full die on the Titan and giving it 48 gigs or something. There's not enough there for a Ti part.


----------



## HyperMatrix

ttnuagmada said:


> I'm probably going to spring for one of these, but I can't help but think we'll see a 3080 Ti with a 352 bit bus and 22gigs as soon as Big Navi gets here for like 500 dollars less.


20GB at 21Gbps once the new memory is ready from Micron. It will be a 3080 Super, if required after the AMD Big Navi launch. And note that the price will increase for those models. They won't be an in-place replacement upgrade.



TK421 said:


> but there will be a bigger die right? 3090 is not fully unlocked die
> 
> also do we confirm that the vram is exclusively micron?


The most they could do is a 3090 Ti or Super, similar to the Titan Xp versus the Titan X (Pascal): slightly faster RAM (full-speed 21Gbps GDDR6X) with slightly more cores, giving 5-10% more performance. 

They'd have to create an entirely new die for anything more than that, since the structure of the HPC Ampere die is completely different from the gaming one. And that's highly unlikely to happen, since there's no way AMD can compete against the 3090 as is, despite easier access to HBM and a better 7nm TSMC fab.
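For reference, the bandwidth that full-speed 21 Gbps GDDR6X would give on the 3090's 384-bit bus works out like this (simple arithmetic, not a product spec):

```python
# Peak bandwidth for 21 Gbps-per-pin GDDR6X on a 384-bit bus.
gbps_per_pin = 21
bus_bytes = 384 // 8
bandwidth_gbs = gbps_per_pin * bus_bytes
print(bandwidth_gbs, "GB/s")  # → 1008 GB/s
```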


----------



## chispy

Only one 3090 for me , do not like the FE 12 pin new cable :/ , might go Asus or evga this time.


----------



## TK421

HyperMatrix said:


> 20GB at 21Gbps once the new memory is ready from Micron. It will be a 3080 Super, if required after the AMD Big Navi launch. And note that the price will increase for those models. They won't be an in-place replacement upgrade.
> 
> 
> 
> *The most they could do is a 3090 Ti or Super, similar to the Titan Xp versus the Titan X (Pascal): slightly faster RAM (full-speed 21Gbps GDDR6X) with slightly more cores, giving 5-10% more performance.*
> 
> They'd have to create an entirely new die for anything more than that, since the structure of the HPC Ampere die is completely different from the gaming one. And that's highly unlikely to happen, since there's no way AMD can compete against the 3090 as is, despite easier access to HBM and a better 7nm TSMC fab.









hopefully AIBs are allowed to customize this...


----------



## GraphicsWhore

Going with EVGA again, I'm pretty sure. Depending on timing and price, I may wait for a HydroCopper. If it's going to be like the 2080 Ti variant that took forever, I'm fine with the "Gaming" variant and blocking it - that worked well for my 2080 Ti.

This is going to be a part of a mostly-new custom loop going from 7700k, 2080Ti to 10850K and the 3090. Got the Maximus XII Formula on the way already. I'm hyped.


----------



## HyperMatrix

Aquacomputer block with active cooled backplate for me, whenever it arrives. But I might be one of the only people excited about trying out the new 3090 fan design. Apparently it's 10x quieter, and it keeps the card 30 degrees cooler than the RTX Titan. That's pretty huge.


----------



## TK421

HyperMatrix said:


> Aquacomputer block with active cooled backplate for me, whenever it arrives. But I might be one of the only people excited about trying out the new 3090 fan design. Apparently it's 10x quieter, and it keeps the card 30 degrees cooler than the RTX Titan. That's pretty huge.


don't eat the marketing too much


----------



## Shawnb99

Nice to see EVGA will have the FTW3/XC3 cards on release. Other models to follow; the Kingpin is already announced, just with no date. Now to decide which one to get and which blocks Optimus comes out with. Also, Kingpins will have 360 radiators this year; all other hybrids get 240s, I think.


----------



## kot0005

Yess Finally a gud GPU


----------



## doggymad

I'm thinking maybe the EVGA Hybrid. Will wait for benchmarks though - hoping to still get about £450-500 for my 2080 Ti as well but that might be wishful thinking now :-/


----------



## kx11

to me this is huge


----------



## HyperMatrix

kx11 said:


> to me this is huge


I’m hoping this means HDR screenshots as well. Only way I’ve been able to take them so far is with the Windows 10 Xbox Game Bar. And it saves them in a JXR format that I can’t do anything with. As soon as I open it in any converter or editor, even with the proper plugin from Microsoft installed, it shows up completely overexposed and broken. 

But I mean...1st world problems...haha


----------



## Duskfall

Any opinions on the 3x8 Pin connectors regarding the OC capability of AIB cards?


----------



## ThrashZone

iamjanco said:


> We don't actually have any owners yet, do we...?


Hi,
Nope, seems this thread is premature.
The stats list is absent of any real data; this could all be in the existing news or rumor threads.


----------



## ZealotKi11er

I really want to get the FE but hate Nvidia's non-transferable warranty. What other good options are there not too far off MSRP? I liked the EVGA Hybrid in the past, but with a 240mm AIO, how much will that be? Don't really want to pay $100 more, even if the FE cooler is really that much better than the Turing FE cooler.


----------



## ThrashZone

ZealotKi11er said:


> I really want to get FE but hate Nvidia's non-tranferable warranty. What other good options are there not too far off MSRP. I liked eVGA Hybrid in the past but with 240mm AIO, how much will that be. Don't really want to pay $100 more if FE cooler is really that much better than the Turing FE cooler.


Hi,
Yeah, all these cards are long as heck and blow straight through the card, so case top exhaust will have to be done.
Water block for sure; not sure I'd waste time with an AIO.


----------



## Thoth420

Count me in for a Reference 3090 on the 24th if they don't sell out before I can get an order in. I am very interested in this new cooler design.
I also will be ordering a 3080 for a buddy's build. He hasn't decided which model he wants to go with yet.


----------



## ZealotKi11er

ThrashZone said:


> Hi,
> Yeah, all these cards are long as heck and blow straight through the card, so case top exhaust will have to be done.
> Water block for sure; not sure I'd waste time with an AIO.


Don't want to bother with Water Block in case I want to flip for something better.


----------



## kx11

HyperMatrix said:


> I’m hoping this means HDR screenshots as well. Only way I’ve been able to take them so far is with the Windows 10 Xbox Game Bar. And it saves them in a JXR format that I can’t do anything with. As soon as I open it in any converter or editor, even with the proper plugin from Microsoft installed, it shows up completely overexposed and broken.
> 
> But I mean...1st world problems...haha



i could be wrong but i think Shadowplay can already capture HDR screenshots


check out this one i captured a while ago


https://www.flickr.com/photos/[email protected]/46389032955/in/dateposted/


----------



## kx11

is it true 3090 got two SM locked? does that mean 3090ti/super is slated for late 2021?!


----------



## Shawnb99

kx11 said:


> is it true 3090 got two SM locked? does that mean 3090ti/super is slated for late 2021?!


There's no 3090 Ti, so quit falling for fake news.


----------



## Zed03

I don't know why I'm asking, but, do you think there's any chance we'll see a 3090 Waterforce WB? Aorus only had a 2080 Ti Waterforce WB, not the Titan, so I'm worried they won't make one for 3090 

I really don't want to get a FE + manually install waterblock. Just want to get something off-the-shelf with fittings already on it. Especially since FE will also require flashing custom bios to get higher power limits, which is just more BS I don't want to go through.


----------



## Shawnb99

Zed03 said:


> I don't know why I'm asking, but, do you think there's any chance we'll see a 3090 Waterforce WB? Aorus only had a 2080 Ti Waterforce WB, not the Titan, so I'm worried they won't make one for 3090
> 
> I really don't want to get a FE + manually install waterblock. Just want to get something off-the-shelf with fittings already on it. Especially since FE will also require flashing custom bios to get higher power limits, which is just more BS I don't want to go through.


Considering they are marketing the 3090 as a consumer card and not a titan I think you can expect them to make one.


----------



## Nineball_Seraph

kx11 said:


> is it true 3090 got two SM locked? does that mean 3090ti/super is slated for late 2021?!


My understanding is that the 3090 is technically the Titan card and that a 3080 Ti is possibly still in the works, but as for a 3090 Ti, I doubt it.

I expect the 3080 Ti will fall in the $1200 price category.

But anything is possible.


----------



## Shawnb99

Nineball_Seraph said:


> My understanding is that the 3090 is technically the Titan card and that a 3080 Ti is possibly still in the works, but as for a 3090 Ti, I doubt it.
> 
> I expect the 3080 Ti will fall in the $1200 price category.
> 
> But anything is possible.


Also, with EVGA making a 3090 Kingpin, it's "safe" to say there's no Ti model.


----------



## J7SC

Duskfall said:


> Any opinions on the 3x8 Pin connectors regarding the OC capability of AIB cards?


 
...well, there are folks @ HWBot who have taken a 3x 8-pin 2080 Ti card to ~2970MHz (on LN2 and a custom BIOS, of course), so I would not expect any issues for a 3x 8-pin equipped 3090...




Shawnb99 said:


> Also with EVGA making a 3090 kingpin it's "safe" to say there's no ti model


 
...right now, it looks like the 3090 is 'the Ti', but that doesn't mean NVidia couldn't introduce a 3080 Ti or even a 3090 Ti card later... in addition, there is still talk and some 'leaks (>salt)' about a 48GB Ampere Titan. As stated before, NVidia is expanding the top-end model range.


----------



## Zed03

3090 ti 0% chance
3080 ti 100% chance (gddr 6x, 16gb)
3070 ti already leaked
Titan VI 50% chance. Depends if yields on Samsung 8nm get good enough to enable 100% of SM on GA104. In the meantime, they'll keep getting cut down to GA102.


----------



## J7SC

FYI, Galax showed some pics of their 30 series 'SG' per below, with 1-clip fan for the back plate...no pics of the HOF as of yet, though


----------



## marc0053


Planning for an RTX 3090 as well on the 24th. Mostly interested in the KPE. I hope everyone is well.


----------



## Thanh Nguyen

Where is the government check? I need to upgrade my 2080 Ti to a 3090.


----------



## bl4ckdot

I'm in. Mostly interested in Galax HOF / KPE


----------



## Nineball_Seraph

I'm wondering about dimensions though. The 3090 looks like it will be massive with the fan, but might actually be smaller than a 2080 Ti once you install a waterblock. Have there been any leaks of the PCB dimensions?


----------



## GraphicsWhore

After thinking about it I have a hard time imagining a performance gap between 80 and 90 to justify the gap in price; seems like it will just come down to the much larger VRAM.

If so, it honestly sounds like a 3080Ti is what I'd actually want - a 3080 with, I don't know, 16GBs? And around the price of the launch 2080Ti? 

That would involve waiting but I've got so many complicated things to work out on this new build before I can even start putting it together (trying to go metal tubes), it may be just as well.

Need those benchmarks...


----------



## J7SC

GraphicsWhore said:


> After thinking about it I have a hard time imagining a performance gap between 80 and 90 to justify the gap in price; seems like it will just come down to the much larger VRAM.
> 
> If so, it honestly sounds like a 3080Ti is what I'd actually want - a 3080 with, I don't know, 16GBs? And around the price of the launch 2080Ti?
> 
> That would involve waiting but I've got so many complicated things to work out on this new build before I can even start putting it together (trying to go metal tubes), it may be just as well.
> 
> Need those benchmarks...


 
...you could also consider 8-way NVlink 2090s :


----------



## GraphicsWhore

J7SC said:


> ...you could also consider 8-way NVlink 2090s :
> 
> https://www.youtube.com/watch?v=iQIFF1OVic4


Amazing. I can only dream of a MonkeyMark score that high.


----------



## HyperMatrix

kx11 said:


> HyperMatrix said:
> 
> 
> 
> I'm hoping this means HDR screenshots as well. The only way I've been able to take them so far is with the Windows 10 Xbox Game Bar. And it saves them in a JXR format that I can't do anything with. As soon as I open it in any converter or editor, even with the proper plugin from Microsoft installed, it shows up completely overexposed and broken.
> 
> But I mean...1st world problems...haha
> 
> 
> 
> 
> i could be wrong but i think Shadowplay can already capture HDR screenshots
> 
> 
> check out this one i captured a while ago
> 
> 
> https://www.flickr.com/photos/[email protected]/46389032955/in/dateposted/

It’s not that you can’t take screenshots in HDR games. It’s that the screenshots aren’t saved in HDR. Also, you can’t upload HDR photos to Flickr, so even if the pic really was in an HDR format on your PC, I wouldn’t be able to see it from the link you sent.

Here's a direct download link for an actual HDR screenshot in Windows 10/Xbox JXR format: https://120hz.org/porn/hdr.jxr (31.5MB)
Here's a link to an SDR version which is a tonemapped version of the HDR shot: https://120hz.org/porn/sdr.jxr (29.1MB)

If you have an HDR display and Windows 10 with HDR enabled, you'll immediately be able to see the beautiful retina burning brightness on that HDR pic.
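For anyone curious why the raw HDR capture looks blown out in a normal viewer: scene-referred HDR pixel values can exceed 1.0, so an SDR export has to compress them first rather than just clip. A minimal sketch (this is NOT the actual JXR pipeline — the simple Reinhard operator is used here purely as an example of a tonemap):

```python
# Minimal illustration of tonemapping vs. clipping HDR pixel data.
# The Reinhard operator x / (1 + x) maps [0, inf) into [0, 1).

def reinhard(x):
    """Compress an HDR luminance value into displayable SDR range."""
    return x / (1.0 + x)

hdr_pixels = [0.05, 0.5, 1.0, 4.0, 16.0]   # hypothetical scene luminances
sdr_pixels = [round(reinhard(x), 3) for x in hdr_pixels]

# Naively clipping instead of tonemapping flattens every bright pixel:
clipped = [min(x, 1.0) for x in hdr_pixels]

print(sdr_pixels)  # [0.048, 0.333, 0.5, 0.8, 0.941] - highlights compressed
print(clipped)     # [0.05, 0.5, 1.0, 1.0, 1.0] - highlights blown to white
```

That's roughly why a viewer without the right tonemapping step shows the shot "completely overexposed".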


----------



## Jason Marshall

Soon, my precious... soon!!


----------



## zhrooms

_There is no Ti, it's a discontinued name as far as we know, at least on mid to high end cards, replaced by SUPER. And yes, we are definitely getting a 3070 and 3080 SUPER._

I added this to the FAQ, everyone should be aware of it;

*Question:* How much faster is the RTX 3090 than the RTX 3080?
*Answer:* We don't know for sure yet, but everything points to the 3090 being around 20% faster overall. Both the 3080 and 3090 are based on the GA102 die ("8N" custom Samsung process, 628mm²). The RTX 3090 features a nearly full die (82 of 84 SMs) with 20.6% more CUDA cores, SMs, TMUs, Tensor cores, and RT cores. It also offers 23.2% higher memory bandwidth thanks to its 384-bit memory bus. Target boost clocks are close, with a slight edge to the 3080. All of this amounts to the 3090 being around 20% faster under ideal conditions, but at a 114% higher cost.
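If you want to sanity-check those percentages yourself, here's a quick back-of-the-envelope script (CUDA core counts and bandwidth are from the public spec sheets; MSRPs are the US launch prices):

```python
# Sanity check of the 3090-vs-3080 FAQ percentages from spec-sheet figures.
specs = {
    "RTX 3080": {"cuda": 8704,  "bandwidth_gbs": 760, "msrp": 699},
    "RTX 3090": {"cuda": 10496, "bandwidth_gbs": 936, "msrp": 1499},
}

def uplift(metric):
    """Percent increase of the 3090 over the 3080 for a given metric."""
    a, b = specs["RTX 3080"][metric], specs["RTX 3090"][metric]
    return round((b / a - 1) * 100, 1)

print(uplift("cuda"))           # 20.6 -> ~20.6% more CUDA cores
print(uplift("bandwidth_gbs"))  # 23.2 -> ~23.2% more memory bandwidth
print(uplift("msrp"))           # 114.4 -> ~114% higher price
```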


----------



## ddarko

Hmm, Nvidia has already confirmed that there won't be preorders for the Founders Edition cards, but I had assumed custom cards would be available to preorder before launch day. Guess not; per MSI, Nvidia isn't allowing any retailer to take preorders:

https://www.reddit.com/r/nvidia/comments/il8e0o/according_to_msi_insider_stream_nobody_is_allowed/


----------



## Nizzen

ddarko said:


> Hmm, Nvidia has already confirmed that there won't be preorders for the Founder Edition cards but I had assumed custom cards would be available to preorder before launch days. Guess not, per MSI, Nvidia isn't allowing any retailer to take preorders:
> 
> https://www.reddit.com/r/nvidia/comments/il8e0o/according_to_msi_insider_stream_nobody_is_allowed/


Too bad


----------



## skupples

inedenimadam said:


> Cash in hand...was looking for a buy it now button...but only notify lists for pre-orders.
> 
> 
> color me disappointed, but not surprised.



I mean, they did say the 17th at the show, and those notify lists. At least NV's.


EVGA hasn't put up pre-orders yet, and those or NV Founders are all I care about for now. They'll flip all the same.


----------



## Thanh Nguyen

$2000 before tax and shipping on overclock.uk for 3090 strix.


----------



## originxt

Sorry if it's been answered: will all the cards be available to order on the 17th and ship on the respective date (the 24th), or do we order on day of release for each card?


----------



## Jpmboy

Zurv said:


> ugh.. i should have sold my RTX Titans before today
> 
> I'll likely go SLI 3090 in both my gaming systems.
> 
> The big question for me is.. where are the screws mr nvidia? I hope it isn't a glue-fest on the FE models?
> I'll 100% be watercooling (as always)


yeah, 2 with NV-Link is where I'm at too. EVERY waterblock kit really needs to come with a 1 or 2 slot plate. Slot spacing on many MBs can't handle 2 AFAIK.


----------



## BigMack70

Here's hoping I score one of the cards on the 24th. Shame about the 2080 Ti resale lol.

Hoping this will do it for me for 3-4 years. Looks impressive.


----------



## MrTOOSHORT

BigMack70 said:


> Here's hoping I score one of the cards on the 24th. Shame about the 2080 Ti resale lol.
> 
> Hoping this will do it for me for 3-4 years. Looks impressive.



Funny that some aren’t impressed and then say they’ll wait some more while rocking a gtx 980.


----------



## BigMack70

MrTOOSHORT said:


> Funny that some aren’t impressed and then say they’ll wait some more while rocking a gtx 980.


Just depends on what the specific needs are. I've been chasing a "finalized" 4k living room gaming setup ever since I bought one of the first 4k TVs with HDMI 2.0 and a pair of Titan X Maxwell cards to run in SLI to drive it back in 2015.

The display is "final" now with my LG C9... can't imagine swapping this screen out until it gets distracting burn-in or otherwise fails. And I'm thinking that the 3090 will do it on the GPU side. I don't necessarily care about running the most demanding games at full 4k, since I don't mind sitting back a couple feet and dropping to 1440p, but I want better than console fidelity with full 4k 120 FPS on titles that are not crazy demanding, and the 3090 looks like the card for the job.

I'm not impressed with the price/performance on offer here, but I am ready to stop upgrading every cycle, and I don't trust 10GB vram to last 4 years.


----------



## J7SC

I have been really happy with dual 2080 Ti Aorus Xtr WB factory full water-block cards purchased in December '18 (which btw I'm keeping, including for productivity). I definitely will look for Aorus again or similar custom PCB / factory full water-block 3090s...thinking that final decision time will be December-to-February. Most custom PCBs should be out by then, along with enough binned GPUs flow-through for the bigger vendors...

FYI, some 'early' custom PCB descriptions by GN


Spoiler


----------



## Mooncheese

Anyone care to guesstimate how much faster 3090 will be over 3080 at 1440p and 2160p and VR?


----------



## Mooncheese

RT TFLOPS: 58 vs 69 = 19%
Tensor TOPS: 238 vs 285 = 20%
Bandwidth: 760 vs 936 = 23%
CUDA cores: 8704 vs 10496 = 21%

Maybe a 20% difference overall? Higher at 4K and 8K due to the wider memory bus and faster video memory?

Leaning towards the 3090, but that's a huge price difference if the performance difference is only 20%. This is just a little better than Titan Xp vs 1080 Ti all over again (10%).

It just sucks because 10GB is already obsolete. I see 10.5GB of video memory usage in Half-Life: Alyx (Index; it will be a little worse with the Reverb G2) and 9GB in a few other games at 3440x1440.


----------



## DokoBG

Hope it's not less than 20%, at least. But from what I'm reading, availability is going to be very limited on these.


----------



## cT.

I plan on getting a waterblock for the 3090. I also plan on doing some light overclocking. Is there a particular 3rd party card manufacturer I should lean towards?


Also, is there a particular water block manufacturer that I should be looking at? I heard EKWB is decent, but dunno if they offer the best water cooling temps.


----------



## Shawnb99

cT. said:


> I plan on getting a waterblock for the 3090. I also plan on doing some light overclocking. Is there a particular 3rd party card manufacturer I should lean towards?
> 
> 
> Also, is there a particular water block manufacturer that I should be looking at? I heard EKWB is decent, but dunno if they offer the best water cooling temps.



Heatkiller will be the best option, and/or Optimus if they ever release one.


----------



## cT.

Shawnb99 said:


> Heatkiller will be the best option and or Optimus if they ever release one


Thanks! Any idea about card manufacturers to choose for overclocking? I've been hearing good things about EVGA FTW3s.


----------



## Shawnb99

cT. said:


> Thanks! Any idea about card manufacturers to choose for overclocking? I've been hearing good things about EVGA FTW3s.



FTW3s are the better-binned cards, but they require a different block than the normal cards. For the 2080 Ti that meant the only blocks were the Hydro Copper or EK's. I'm actually about to grab the EK one to swap with my Hydro Copper.

Hopefully Heatkiller comes out with a block for the FTW3.


----------



## doggymad

I don't know whether to try and flip the 2080 Ti now. I do have an old Radeon 7850 I can chuck in - but am I going to get a meaningful amount more than when it drops? :/


----------



## HyperMatrix

doggymad said:


> I don't know whether to try and flip the 2080 Ti now. I do have an old Radeon 7850 I can chuck in - but am I going to get a meaningful amount more than when it drops? :/


Just my opinion... but it depends on what you can get for it. The eBay market was flooded heavily yesterday, with units selling even lower than $500. Right now it seems to have stabilized around $700.

There will still be a market from people who already have 2080 Ti cards and want to run SLI/NVLink for professional work; for them, the only alternative is the 3090, a $1500 purchase, versus whatever the 2080 Ti happens to sell for. The other big factor is the reports of RTX 3000 series shortages. If there is plentiful stock, then as soon as the cards are released the 2080 Ti will drop to around $500-$550, because as far as gaming goes, the RTX 3070 can generally outperform the 2080 Ti, including superior ray tracing performance. But if there is no stock, then any card is more valuable than no card. So if it were me, I'd just wait rather than gimp myself for weeks, potentially not get one of the new RTX cards, and potentially not get any more money for my card now than I could later, given the supply shortages.


----------



## kot0005

EK Blocks https://www.ekwb.com/shop/ek-quantu...2WPDrVeHawOGkc55btGSJi8zL1matav6mmBqosfzQL8gM

No active or even hybrid cooling for the 3090's backside? So the 3090 doesn't have VRAM on the back?


----------



## HyperMatrix

*FYI Guys. This answers most of the questions we still had about Ampere:​*

https://www.reddit.com/r/nvidia/comments/ilhao8/nvidia_rtx_30series_you_asked_we_answered/​


----------



## Shawnb99

The big question is stock: will it be a paper launch until the end of the year, or will there be enough stock for everyone?


----------



## BigMack70

Shawnb99 said:


> The big question is stock: will it be a paper launch until the end of the year, or will there be enough stock for everyone?


The fact that Nvidia banned pre-orders SCREAMS paper launch. They don't have enough stock to meet demand and don't know when they will, so it's just first come first served. I doubt these will see wide availability until early next year.


----------



## GAN77

kot0005 said:


> EK Blocks https://www.ekwb.com/shop/ek-quantu...2WPDrVeHawOGkc55btGSJi8zL1matav6mmBqosfzQL8gM
> 
> No active or even hybrid cooling for the 3090's backside? So the 3090 doesn't have VRAM on the back?


A waterblock with a design from the last decade )
Scary waterblock. I see many flaws in it. The backplate is announced, but the link does not work.


----------



## kx11

3090 is BIG


----------



## Nizzen

kx11 said:


> 3090 is BIG



No, the MB is small 

It's not that much bigger than the MSI 2080 Ti Trio X. The MSI 2080 Ti Trio X is actually LONGER! L = 32.7cm vs 31.9cm


----------



## straha20

kot0005 said:


> EK Blocks https://www.ekwb.com/shop/ek-quantu...2WPDrVeHawOGkc55btGSJi8zL1matav6mmBqosfzQL8gM
> 
> No active or even hybrid cooling for the 3090's backside? So the 3090 doesn't have VRAM on the back?


Hmmm...for reference PCB only, not the FE. I wonder how long before a Founders block becomes available...if it ever does.


----------



## Zed03

There appears to be a Vietnamese listing for a 3090 Waterforce WB... don't know how legit it is, but hurray! I can avoid a manual waterblock install and don't need to flash a BIOS for high power limits.

https://sg.h2gaming.vn/gigabyte-aorus-rtx-3090-xtreme-waterforce-wb


----------



## Baasha

Zurv said:


> ugh.. i should have sold my RTX Titans before today
> 
> I'll likely go SLI 3090 in both my gaming systems.
> 
> The big question for me is.. where are the screws mr nvidia? I hope it isn't a glue-fest on the FE models?
> I'll 100% be watercooling (as always)


hehe.. me too!

looks like I will be forced to watercool to fit both GPUs in the rigs.


----------



## LiquidHaus

So any idea on why pre-orders aren't a thing this time around?

Gonna be tough fighting it out with bots and scripts.


----------



## Shawnb99

LiquidHaus said:


> So any idea on why pre-orders aren't a thing this time around?
> 
> Gonna be tough fighting it out with bots and scripts.



Paper launch, or so it's said to be. The rumours are very, very limited stock until the new year.


----------



## J7SC

Zed03 said:


> There appears to be a Vietnamese listing for a 3090 Waterforce WB... don't know how legit it is, but hurray! I can avoid a manual waterblock install and don't need to flash a BIOS for high power limits.
> 
> https://sg.h2gaming.vn/gigabyte-aorus-rtx-3090-xtreme-waterforce-wb


 
...not sure about that exact URL, but I think & hope that there will be a "GIGABYTE AORUS RTX 3090 XTREME WATERFORCE WB" as their flagship, maybe as a bit of a later release again. The Asus Strix also looks really good (3x 8-pin...), and it will likely get enough early attention from w-block vendors. 

Also, there are some really thin 120mm fans out there, between 9mm and 15mm. The one below (red arrow) is a Scythe I believe, and I use it as a 'helper' to cool VRM and DRAM - works great. That could also be a good solution for passive-cooling-only back plates / back GDDR6X on 3090 w-blocks ?!


----------



## originxt

Would the smaller, cut down board of the 3090 fe impact how well it can overclock compared to the larger boards, reference or otherwise?

I plan to put a block on my card once I get it but it looks like the fe board would fit best in my case.


----------



## Zed03

originxt said:


> Would the smaller, cut down board of the 3090 fe impact how well it can overclock compared to the larger boards, reference or otherwise?
> 
> I plan to put a block on my card once I get it but it looks like the fe board would fit best in my case.


There is like a <0.01% difference. Shorter traces are better for stability and have less heat waste due to reduced resistance.
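The trace-length claim can be sized with the standard resistance formula R = ρL/A. A quick illustrative calculation (the trace dimensions below are made up but typical for a PCB power trace in 1 oz copper; real board power planes are far more complex):

```python
# Illustrative only: DC resistance of a PCB copper trace, R = rho * L / A.
RHO_CU = 1.68e-8  # ohm*m, resistivity of copper

def trace_resistance(length_mm, width_mm, thickness_um=35):
    """Resistance in ohms of a rectangular copper trace (1 oz Cu ~ 35 um)."""
    area = (width_mm * 1e-3) * (thickness_um * 1e-6)  # cross-section, m^2
    return RHO_CU * (length_mm * 1e-3) / area

long_trace = trace_resistance(100, 0.2)   # hypothetical 100 mm trace
short_trace = trace_resistance(60, 0.2)   # 40 mm shorter on a compact PCB

print(round(long_trace * 1000, 1))   # 240.0 mOhm
print(round(short_trace * 1000, 1))  # 144.0 mOhm
```

Tens of milliohms either way — which is why, as the post says, the practical difference for overclocking is negligible.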


----------



## Baasha

Shawnb99 said:


> Paper launch or said to be. Very very limited stock until the new year are the rumours.


That is disturbing to say the least. I really hope that that is not the case.


----------



## GunnzAkimbo

3090's too slow.

We're gonna have to go to 4090!


----------



## Mooncheese

straha20 said:


> Hmmm...for reference PCB only, not the FE. I wonder how long before a Founders block becomes available...if it ever does.


I don't understand what EK are referring to. Nvidia are not calling the 3070, 3080, and 3090 Founders Edition cards, and I see no other "reference PCB" cards, so what are they talking about? Is there going to be a standard GPU layout and size shared by the other manufacturers this time around, i.e. the EVGA XC3 and whatever "reference PCB" cards Asus, Gigabyte, MSI, and others have on offer?


----------



## Mooncheese

cT. said:


> I plan on getting a waterblock for the 3090. I also plan on doing some light overclocking. Is there a particular 3rd party card manufacturer I should lean towards?
> 
> 
> Also, is there a particular water block manufacturer that I should be looking at? I heard EKWB is decent, but dunno if they offer the best water cooling temps.


I'm partial to EVGA because of the 3-year warranty and the fact that mounting a water-block isn't grounds for denying a warranty claim. Also, I love the iCX sensors. I had memory fail with my first 2080 Ti (Alienware Aurora, Micron, no overclock), and being able to see the memory temps on its replacement, my current card (EVGA XC2), really added to peace of mind and is worth the $50 that they demand. 

Leaning towards the XC3 this time around because the power delivery of the 2080 Ti was over-built and there wasn't really any advantage in the additional power phases of the FTW3; you also may have to wait a while for block availability with a non-reference-layout PCB. I've been running the 373W FTW3 vbios on my XC2 without any issues (load temps of 42C).


----------



## Mooncheese

Shawnb99 said:


> FTW3s are the better-binned cards, but they require a different block than the normal cards. For the 2080 Ti that meant the only blocks were the Hydro Copper or EK's. I'm actually about to grab the EK one to swap with my Hydro Copper.
> 
> Hopefully Heatkiller comes out with a block for the FTW3.


Are you sure about this? I heard that there was no binning going on for any cards other than the Kingpin with the 20 series, and that the 300A core of the FTW3 was in the same silicon lottery as any other 300A.


----------



## Mooncheese

GAN77 said:


> Waterblock with a design from the last decade)
> Scary waterblock.
> I see many flaws in it. The backplate is announced, but the link does not work.


Are you referring to the cross channels and the fact that it's flow optimized? 

How are block manufacturers going to address cooling the video memory on the top side of the PCB?


----------



## nick name

kx11 said:


> 3090 is BIG


Lol. That put a smile on my face.


----------



## Nizzen

Mooncheese said:


> Are you referring to the cross channels and the fact that it's flow optimized?
> 
> How are block manufacturers going to address cooling the video memory on the top side of the PCB?


I think my old Titan Xs had memory on the backside. Had EK watercooling with an EK backplate. The backplate did the job, but I had a fan over the top of the cards. 3-way SLI 

https://www.overclock.net/forum/69-...tx-titan-x-owners-club-1275.html#post24217789


----------



## keikei

BigMack70 said:


> The fact that Nvidia banned pre-orders SCREAMS paper launch. They don't have enough stock to meet demand and don't know when they will, so it's just first come first served. I doubt these will see wide availability until early next year.



Gud deduction.


----------



## Foxrun

Shawnb99 said:


> Paper launch or said to be. Very very limited stock until the new year are the rumours.


Could you link those rumors?


----------



## keikei

Foxrun said:


> Could you link those rumors?


https://wccftech.com/report-nvidia-geforce-rtx-30-gpus-to-be-in-short-supply-until-2021/


----------



## Krzych04650

Getting two of these, though the coolers look ridiculously massive 😛 I am still on X99 with no real upgrade available so I am hesitant to do full watercooling loop before some good new platform arrives, but looking at all of those 3-slot monsters it won't be so easy to just get air cooled cards temporarily, it is going to be a total sandwich. I don't really expect Hybrids to be available anytime soon. Maybe FE could be fine due to how it is designed, but then you get this weirdly placed 12-pin and probably incompatibility with other BIOSes.


----------



## Mad Pistol

I'm actually very surprised at all of the interest in the 3090. I thought that the crazy price and minimal performance increase would drive some people away.

I was very wrong.


----------



## changboy

Mad Pistol said:


> I'm actually very surprised at all of the interest in the 3090. I thought that the crazy price and minimal performance increase would drive some people away.
> 
> I was very wrong.


Many dream of it, but not everyone will buy it. And about limited stock: it's a marketing way to push the price to the highest level. Same as they did with the 2080 Ti; it was sold at $200 higher at the beginning and increased till the end. A good way to rip off customers, in my point of view.

Like you, I will close my mouth and pay the asking price, and I will be happy hehehe.


----------



## HyperMatrix

Mad Pistol said:


> I'm actually very surprised at all of the interest in the 3090. I thought that the crazy price and minimal performance increase would drive some people away.
> 
> I was very wrong.


Think of it this way: ~20% more RT performance is the difference between playing at 100Hz or at 120Hz, or between playing at 120Hz or at 144Hz. So for people who are playing on 4K 144Hz monitors that were $2000 at launch, paying $1500 to actually be able to play at that resolution/refresh rate isn't all that crazy. And for those who've been running previous-generation Titan cards or the 1080 Ti, we've been on 11/12GB VRAM for so long that we need an upgrade. Playing some games recently, I've had that full 12GB filled up. So having double the VRAM, at a higher speed/bus, with more RT/Tensor cores, it just makes sense. 

And I'm not saying that as someone who will spend any amount on anything. I bought 2x Pascal Titan X cards when they launched, but I refused to buy a single RTX 2080 Ti because it wasn't worth the money. The 3090 is a solid upgrade, not just from Pascal but from Turing as well, especially with DLSS 2.0 being such a huge game-changing feature. Personally I was hoping for the rumors of 4x RT performance to be true, since even the RT performance of the 3090 is insufficient without DLSS. But I really can't complain about the performance of the 3090 as a whole package.
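The refresh-rate framing above is just a fixed-percentage uplift applied to common targets — a quick check of the arithmetic:

```python
# A ~20% performance uplift mapped onto common refresh-rate targets.
def with_uplift(fps, pct=20):
    """Frame rate after applying a pct% performance uplift, rounded."""
    return round(fps * (1 + pct / 100))

print(with_uplift(100))  # 120 -> 100Hz becomes 120Hz
print(with_uplift(120))  # 144 -> 120Hz becomes 144Hz
```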


----------



## Mad Pistol

changboy said:


> Many dream of it but not everyone will buy it, and about limited stuff its a marketing way to increase the price at the most higher level. Same they do with the 2080 ti, it was sold at 200$ higher at the begening and incresase till the end. A good way to rip customer, this is my point of view.
> 
> Like you i will close my mough and pay the asking price and i will be happy hehehe.


Unfortunately, I will not be joining this club. I have an LG CX 55 OLED, and I was convinced that I needed an RTX 3090 in order to achieve my dream of 4k120.

The 3080 has changed my mind. It actually performs far better than I think anyone anticipated. Because of this, I will be getting a 3080 first, and if that's not enough to satisfy me, I will sell it and spring for the 3090. I hope I don't have to do that though... double the price for such a small gain seems crazy.


----------



## J7SC

Mad Pistol said:


> Unfortunately, I will not be joining this club. I have an LG CX 55 OLED, and I was convinced that I needed an RTX 3090 in order to achieve my dream of 4k120.
> 
> The 3080 has changed my mind. It actually performs far better than I think anyone anticipated. Because of this, I will be getting a 3080 first, and if that's not enough to satisfy me, I will sell it and spring for the 3090. I hope I don't have to do that though... double the price for such a small gain seems crazy.


 
...I'm a bit mystified, even if I agree with some of the comments. Have there actually been any wide-ranging 'real world' benchmarks by 'trusted 3rd parties' for both the 3080 and 3090?


----------



## Mad Pistol

J7SC said:


> ...I'm a bit mystified, even if I agree with some of the comments. Have there actually been any wide-ranging 'real world' benchmarks by 'trusted 3rd parties' for both the 3080 and 3090?


Except for the DigitalFoundry video comparing 2080 v 3080, and the Doom Eternal video comparing the 2080 Ti v 3080, no. However, considering that I'm currently on a 2070 Super, that's all I need to see to be convinced that the 3080 is a massive upgrade... at least for me.

The 3090 is definitely faster, but based on core counts alone, it's not going to be that much faster. The 3090 will excel greatly in one area that the 3080 cannot: 8K. I just got into 4K, so the 3090 seems like complete and utter overkill.


----------



## skupples

Mooncheese said:


> Are you sure about this? I heard that there was no binning going on for any cards other than Kingpin with the 20 series and that the core of 300a core of FTW3 was in the same silicon lottery as any other 300a.


Yet somehow NV tends to always release the best silicon themselves 


But yes, that's typically correct. The amount of binning that goes into each individual chip is rather low until you get into HOF & KP tier. 





The #1 limiting factor is NV themselves, and heat. Always has been, always will be. They even taunt us now, with their over-engineered power sections.


----------



## changboy

Mad Pistol said:


> Unfortunately, I will not be joining this club. I have an LG CX 55 OLED, and I was convinced that I needed an RTX 3090 in order to achieve my dream of 4k120.
> 
> The 3080 has changed my mind. It actually performs far better than I think anyone anticipated. Because of this, I will be getting a 3080 first, and if that's not enough to satisfy me, I will sell it and spring for the 3090. I hope I don't have to do that though... double the price for such a small gain seems crazy.


I think like you: my head tells me to buy the 3080, and something inside me says buy the 3090, because you will always look at it after paying for the 3080 lol. It's really hard for me to look at the better one and buy lower, and afterwards have the feeling of regretting it.

I also own an OLED, but a C8. Even on this I play at 4K higher than 60 fps because of its 4:2:0 setting, so fps can go high even at 4K. What I know is that in the near future, if you want to play some games at 4K and pump up all the settings, you will need more than 10GB of memory on your GPU. That's why I was disappointed when I saw the amount of memory on the RTX 3080!

Tell me now, do you want to buy an RTX 3090? lol.


----------



## Mad Pistol

changboy said:


> I think like you: my head tells me to buy the 3080, and something inside me says buy the 3090, because you will always look at it after paying for the 3080 lol. It's really hard for me to look at the better one and buy lower, and afterwards have the feeling of regretting it.
> 
> I also own an OLED, but a C8. Even on this I play at 4K higher than 60 fps because of its 4:2:0 setting, so fps can go high even at 4K. What I know is that in the near future, if you want to play some games at 4K and pump up all the settings, you will need more than 10GB of memory on your GPU. That's why I was disappointed when I saw the amount of memory on the RTX 3080!
> 
> Tell me now, do you want to buy an RTX 3090? lol.


I actually thought this originally, but DLSS and AI upscaling have changed the game. They allow games to be rendered at lower resolutions and then upscaled using AI. The game changer is that this uses less VRAM, because the game is being rendered at a lower res.

If I am wrong and games very quickly start saturating 10GB VRAM, I will gladly dump the 3080 for a 3090 (or the equivalent model at the time). I have no problem upgrading, but my mentality has changed since I bought my 1080 Ti; I no longer need the absolute highest end to be happy, but I do know I will kick myself if I spend too much on a product.
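A back-of-the-envelope look at the render-target side of that argument (figures are illustrative only — a DLSS "Quality"-style internal resolution of ~1440p for 4K output is assumed, and a real game's VRAM use is dominated by textures and buffers, not a single render target):

```python
# Rough framebuffer sizes: rendering internally at 1440p instead of
# native 4K shrinks each render target by more than half.
def target_mib(width, height, bytes_per_pixel=8):
    """Size in MiB of one render target (8 B/px, e.g. an RGBA16F HDR target)."""
    return width * height * bytes_per_pixel / 2**20

native_4k = target_mib(3840, 2160)      # native 4K output
dlss_internal = target_mib(2560, 1440)  # assumed ~1440p internal res

print(native_4k)      # 63.28125 -> ~63 MiB per 4K RGBA16F target
print(dlss_internal)  # 28.125   -> ~28 MiB at the internal resolution
```

Multiply that saving across the G-buffer and intermediate targets and the lower internal resolution adds up — which is the VRAM point being made above.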


----------



## changboy

Mad Pistol said:


> I actually thought this originally, but DLSS and AI upscaling has changed the game. It allows games to be rendered at lower resolutions and then upscaled using AI. The game changer is that it uses less VRAM because it is being rendered at a lower res.
> 
> If I am wrong and games very quickly start saturating 10GB VRAM, I will gladly dump the 3080 for a 3090 (or equivalent model at the time). I have no problem upgrading, but my mentality has changed since I bought my 1080 Ti; I no longer need the absolute highest end to be happy, but I do know I will kick myself if I spend too much on a product.


Oh OK. I sold my 1080 Ti 5 months ago and I was waiting for a new card, but I never tried DLSS. I remember playing Red Dead Redemption 2 and using 10GB of memory; my fps was 29 and a little more sometimes.

The fact is, an old RX 580 from AMD had 8GB, and they bring out a new high-end card with only 10GB; this really disappoints me. At this point I clearly see they will come out with a new model in a short time, maybe 6 months or so, with a 3080 Ti, but this forces me to buy a 3090. The truth is, after the announcement from Nvidia I was a bit angry lol.


----------



## BigMack70

changboy said:


> Oh OK. I sold my 1080 Ti 5 months ago and I was waiting for a new card, but I never tried DLSS. I remember playing Red Dead Redemption 2 and using 10GB of memory; my fps was 29 and a little more sometimes.
> 
> The fact is, an old RX 580 from AMD had 8GB, and they bring out a new high-end card with only 10GB; this really disappoints me. At this point I clearly see they will come out with a new model in a short time, maybe 6 months or so, with a 3080 Ti, but this forces me to buy a 3090. The truth is, after the announcement from Nvidia I was a bit angry lol.


If you're willing to consider upgrading again in 2 years, I'd say there's zero chance that 10GB of vram will be a limiting factor. While I know several games will fill up a full 10-12GB at 4k, I'm unaware of any game that's limited in performance at 4k on even 8GB VRAM. I'm going 3090 because I'm gambling on either it lasting until 2 upgrade cycles away, or that if I do "need" to upgrade next cycle that it will hold its resale value well to the prosumer market in need of the 24GB VRAM. Doubt we're going to see cards with more than 24GB for a while so hoping this will remain attractive to that crowd on resale after it's more or less done for gamers.


----------



## J7SC

BigMack70 said:


> If you're willing to consider upgrading again in 2 years, I'd say there's zero chance that 10GB of vram will be a limiting factor. While I know several games will fill up a full 10-12GB at 4k, I'm unaware of any game that's limited in performance at 4k on even 8GB VRAM. I'm going 3090 because I'm gambling on either it lasting until 2 upgrade cycles away, or that if I do "need" to upgrade next cycle that it will hold its resale value well to the prosumer market in need of the 24GB VRAM. Doubt we're going to see cards with more than 24GB for a while so hoping this will remain attractive to that crowd on resale after it's more or less done for gamers.


 
..w/o real and varied benchmarks, hard to say for sure, but I also think that 10 GB of GDDR6X is "more" than 10 GB of GDDR6....and DARE I say, compared to the price of a 24GB (GDDR6) Titan RTX, the 3090 with 24 GB of GDDR6X is, ahem, a steal. As an aside, I got a kick out of all the Micron haters (re. 'test escapes' 2080 Ti issues) swooning over the 3090 w/ Micron memory. Anyway...

...anyway, there's still some persistent rumour that there will also be a 48GB GDDR6X Ampere Titan 'later' (for those folk with the 8k 'wall tv' ?). I do think 'Titan' is a valuable brand, and see no reason why NVidia would just give it up, instead of using it to expand the big-$ range upwards :2cents:


----------



## BigMack70

J7SC said:


> ..w/o real and varied benchmarks, hard to say for sure, but I also think that 10 GB of GDDR6X is "more" than 10 GB of GDDR6....and DARE I say, compared to the price of a 24GB (GDDR6) Titan RTX, the 3090 with 24 GB of GDDR6X is, ahem, a steal. As an aside, I got a kick out of all the Micron haters (re. 'test escapes' 2080 Ti issues) swooning over the 3090 w/ Micron memory. Anyway...
> 
> ...anyway, there's still some persistent rumour that there will also be a 48GB GDDR6X Ampere Titan 'later' (for those folk with the 8k 'wall tv' ?). I do think 'Titan' is a valuable brand, and see no reason why NVidia would just give it up, instead of using it to expand the big-$ range upwards :2cents:


Yeah. Well, I already bought a 2080 Ti when that launched, and that was a FAR worse value proposition than what's on offer here with the 3090. If my math checks out, for the 3090 to be as bad a value proposition as the 2080 Ti was, it would need to be priced around or above $2400.


----------



## J7SC

BigMack70 said:


> Yeah. Well, I already bought a 2080 Ti when that launched, and that was a FAR worse value proposition than what's on offer here with the 3090. If my math checks out, for the 3090 to be as bad a value proposition as the 2080 Ti was, it would need to be priced around or above $2400.


 
:headscrat...I think I just posted that, comparatively speaking, the 3090 is a great value - which is why I mentioned earlier that I plan to get two once my fav custom PCB model and w-cooling lines are out. I was also adding that (re. 3080) 10GB of GDDR6X > 10GB of GDDR6


----------



## BigMack70

J7SC said:


> :headscrat...I think I just posted that, comparatively speaking, the 3090 is a great value - which is why I mentioned earlier that I plan to get two once my fav custom PCB model and w-cooling lines are out. I was also adding that (re. 3080) 10GB of GDDR6X > 10GB of GDDR6


I'm simply expanding your comparison. It's not only a great value compared to the Titan RTX, but also compared to what the 2080 Ti offered. 2080 Ti offered, relative to previous flagship, +35% performance for +70% price. 3090 is offering, relative to previous flagship, at least +50% performance for +25% price.
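Those two comparisons boil down to simple perf-per-dollar ratios. A quick back-of-the-envelope sketch, using the rough figures from this thread rather than official benchmark numbers:

```python
# Rough perf-per-dollar comparison using the approximate figures
# quoted in this thread (not official benchmark numbers).
def value_ratio(perf_gain, price_gain):
    """Performance-per-dollar relative to the previous flagship."""
    return (1 + perf_gain) / (1 + price_gain)

r2080ti = value_ratio(0.35, 0.70)  # +35% perf for +70% price vs 1080 Ti
r3090 = value_ratio(0.50, 0.25)    # +50% perf for +25% price vs 2080 Ti

print(f"2080 Ti vs 1080 Ti: {r2080ti:.2f}x perf per dollar")  # 0.79x
print(f"3090 vs 2080 Ti:    {r3090:.2f}x perf per dollar")    # 1.20x
```

By that crude measure the 2080 Ti was a perf-per-dollar regression, while the 3090 actually comes out ahead of the card it replaces.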


----------



## Jpmboy

J7SC said:


> ..w/o real and varied benchmarks, hard to say for sure, but I also think that 10 GB of GDDR6X is "more" than 10 GB of GDDR6....and DARE I say, compared to the price of a 24GB (GDDR6) Titan RTX, the 3090 with 24 GB of GDDR6X is, ahem, a steal. As an aside, I got a kick out of all the Micron haters (re. 'test escapes' 2080 Ti issues) swooning over the 3090 w/ Micron memory. Anyway...
> 
> ...anyway, there's still some persistent rumour that there will also be a *48GB GDDR6X Ampere Titan* 'later' (for those folk with the 8k 'wall tv' ?). I do think 'Titan' is a valuable brand, and see no reason why NVidia would just give it up, instead of using it to expand the big-$ range upwards :2cents:


I certainly hope so, and that it is not DP-crippled vs the Tesla. So far, only the OG Titan and the Titan V had access to the complete FMU/FPU available in the Tesla class. Volta was a 50% increase vs Pascal; Ampere is only 24% over Volta (at the Tesla level... and the Titan V is 97% of the Tesla V). I know this is not important for AI, and of only limited benefit for gaming physics, but the Titan RTX was, frankly, not really significantly better than a good 2080 Ti (at anything). Hopefully they don't botch the market segmentation again.


----------



## Mooncheese

BigMack70 said:


> I'm simply expanding your comparison. It's not only a great value compared to the Titan RTX, but also compared to what the 2080 Ti offered. 2080 Ti offered, relative to previous flagship, +35% performance for +70% price. 3090 is offering, relative to previous flagship, at least +50% performance for +25% price.


Probably closer to +70% performance, given the extrapolation between TFLOPs and CUDA core count (20%), not counting how much more can be extracted via memory overclocking on the 384-bit bus, given the recent performance demonstration between the 2080 Ti and the 3080 in Doom Eternal. That comparison is instructive because the Vulkan API nearly completely removes display driver involvement, and hence any inefficiency in that regard. 

It's probably safe to bet that the 3090 is at minimum 20% faster than the 3080 and that the 3080 is 50% faster than the 2080 Ti best-case; all numbers can go down 10% or so depending on the API and engine being used, as Vulkan truly is a best-case scenario. 

https://www.pcgamer.com/watch-nvidias-rtx-3080-rip-and-tear-the-2080-ti-at-4k-in-doom-eternal/

But yeah, they dropped the node size 50% and increased the core count from 4352 to 8704 (3080), so 50% solely in rasterization doesn't surprise me. Ampere is shaping up to be much faster than any of us had anticipated. 

I was anticipating the 3080 to be 25-30% faster than 2080 Ti and the 3090 to be 50% faster with all Ampere cards doing RT 2-3x faster.


----------



## Mooncheese

J7SC said:


> :headscrat...I think I just posted that, comparatively speaking, the 3090 is a great value - which is why I mentioned earlier that I plan to get two once my fav custom PCB model and w-cooling lines are out. I was also adding that (re. 3080) 10GB of GDDR6X > 10GB of GDDR6


Gigabyte Aorus Extreme WB? 

The only thing I don't like about this model is Gigabyte's non-existent customer service should I need to RMA and the fact that every previous WB design was not serviceable, meaning, you could not take the block off the GPU to replace the TIM.


----------



## BigMack70

Mooncheese said:


> Probably closer to +70% performance, given the extrapolation between TFLOPs and CUDA core count (20%), not counting how much more can be extracted via memory overclocking on the 384-bit bus, given the recent performance demonstration between the 2080 Ti and the 3080 in Doom Eternal. That comparison is instructive because the Vulkan API nearly completely removes display driver involvement, and hence any inefficiency in that regard.
> 
> It's probably safe to bet that the 3090 is at minimum 20% faster than the 3080 and that the 3080 is 50% faster than the 2080 Ti best-case; all numbers can go down 10% or so depending on the API and engine being used, as Vulkan truly is a best-case scenario.
> 
> https://www.pcgamer.com/watch-nvidias-rtx-3080-rip-and-tear-the-2080-ti-at-4k-in-doom-eternal/
> 
> But yeah, they dropped the node size 50% and increased the core count from 4352 to 8704 (3080), so 50% solely in rasterization doesn't surprise me. Ampere is shaping up to be much faster than any of us had anticipated.
> 
> I was anticipating the 3080 to be 25-30% faster than 2080 Ti and the 3090 to be 50% faster with all Ampere cards doing RT 2-3x faster.


Yeah, my 50% number is very conservative. If, per the Digital Foundry video, the 3080 averages about 80% faster than the 2080, then the 3080 will be about 40% faster than the 2080 Ti. By the specs, that means the 3090 should be about 1.2*1.4 = 1.7x the performance of the 2080 Ti. The only thing I'm not sure of is how badly that might end up limited by power restrictions, since the 3090 TDP is only 10% higher than the 3080's. So my expectations are set around 1.5x 2080 Ti performance, and anything above that is a nice bonus IMO.
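The chained estimate is just multiplying the two uplifts together. As a sketch, using this thread's rough figures rather than measured benchmarks:

```python
# Chained estimate: (3080 vs 2080 Ti) x (3090 vs 3080),
# using this thread's rough uplift figures, not measured data.
uplift_3080 = 1.4   # 3080 ~40% over 2080 Ti (from the ~80% vs 2080 figure)
uplift_3090 = 1.2   # 3090 ~20% over 3080 (from the spec sheet)

estimate = uplift_3080 * uplift_3090
print(f"3090 vs 2080 Ti: ~{estimate:.2f}x")  # ~1.68x
```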


----------



## ref

This is going to be a nice upgrade from my SLI 1080's. Just need to know which cards will use the reference PCB so I can buy a card to watercool ASAP. Also really excited to pay over $2,000 for this shipped to Canada, zzz.


----------



## iamjanco

Jpmboy said:


> I certainly hope so, and that it is not DP crippled vs the Tesla. So far, only the OG and Titan V had access to to the complete FMU/FPU available in the Tesla class. Volta was a 50% increase vs Pascal. Ampere is only 24% over Volta (at the tesla level... and the Titan V is 97% of the Tesla V). I know this is not important for AI, and only limited benefit for gaming physics, but the RTX titan was, frankly not really significantly better than a good 2080Ti (at anything). *Hopefully they don't botch the market segmentation again.*


Given the overall topic at hand (another gpu series release all dressed up), I'm reminded of one of Hollyweird's more memorable lines: _you can polish a tud, but it's still a tud._ 

That's paraphrased, of course; but without being overly preachy:

Some seem to understand this well enough that it helps keep them out of trouble (like JP). Others cave more easily to what amounts to the glitz and glamour that years of studying them as a potential market have helped hone into a relatively easy, if not immediate, sale. Then there's also the selfie factor, the epeen, the _mine is bigger than yours_ motivation that seems to drive some to poise with their index fingers over imaginary _buy-it-now_ buttons for weeks on end from the beginning of every new release; only to be frustrated once those buttons start showing up for real AND disappearing just as fast. 

Enough on all that though. Me, I'm just trying to figure out why a middle-aged guy who's made beaucoup bucks doing what he does is slaving over a hot stove in his kitchen, dressed in a leather jacket. I'd be sweating like a stuck pig myself. 


----
Good luck getting the cards you want, each and every one of you. That comes from the heart.


----------



## HyperMatrix

FOUNDERS EDITION CARDS ARE NOT REFERENCE DESIGN. 

https://www.tweaktown.com/news/7495...ders-not-reference-they-are-custom/index.html

I don’t know if water block manufacturers are going to release founders edition blocks in the future or not. It could be they got access to the reference PCB design just as AIB partners did, and will make Founders Edition ones once they get a hold of one after launch. But at present, founders edition = no water cooling.

I’ll probably fire off an email to Aquacomputer to see if they have any plans for it or not. This could make a big difference for people trying to choose what to buy.


----------



## Nammi

HyperMatrix said:


> FOUNDERS EDITION CARDS ARE NOT REFERENCE DESIGN.
> 
> https://www.tweaktown.com/news/7495...ders-not-reference-they-are-custom/index.html
> 
> I don’t know if water block manufacturers are going to release founders edition blocks in the future or not. It could be they got access to the reference PCB design just as AIB partners did, and will make Founders Edition ones once they get a hold of one after launch. But at present, founders edition = no water cooling.
> 
> I’ll probably fire off an email to Aquacomputer to see if they have any plans for it or not. This could make a big difference for people trying to choose what to buy.


If google translate isn't completely off, there will be FE blocks from Aquacomputer.
https://forum.aquacomputer.de/wasserk-hlung/p1447754-k-hlblock-f-r-die-3090-von-nv/#post1447754

I was planning to slap a block on the FE but considering it's custom pcb and the only one with the 12pin, I feel like the chances for a decent plimit bios are slim...


----------



## MrTOOSHORT

Might have to wait to see what's going on with the waterblock FE and reference nonsense.


----------



## Chuckclc

Ahh! It is actually happening!


----------



## ThrashZone

Hi,
Someone forgot FE stands for Founders Edition lol or vanilla :doh:


----------



## keikei

How much faster is the 3090 vs the 3080, as claimed by Nvidia?


----------



## DooRules

MrTOOSHORT said:


> Might have to wait to see what's going on with the waterblock FE and reference nonsense.


assuming we are lucky enough to snag one on launch


----------



## HyperMatrix

Nammi said:


> HyperMatrix said:
> 
> 
> 
> FOUNDERS EDITION CARDS ARE NOT REFERENCE DESIGN.
> 
> https://www.tweaktown.com/news/7495...ders-not-reference-they-are-custom/index.html
> 
> I don't know if water block manufacturers are going to release founders edition blocks in the future or not. It could be they got access to the reference PCB design just as AIB partners did, and will make Founders Edition ones once they get a hold of one after launch. But at present, founders edition = no water cooling.
> 
> I'll probably fire off an email to Aquacomputer to see if they have any plans for it or not. This could make a big difference for people trying to choose what to buy.
> 
> 
> 
> If google translate isn't completely off, there will be FE blocks from Aquacomputer.
> https://forum.aquacomputer.de/wasserk-hlung/p1447754-k-hlblock-f-r-die-3090-von-nv/#post1447754
> 
> I was planning to slap a block on the FE but considering it's custom pcb and the only one with the 12pin, I feel like the chances for a decent plimit bios are slim...
Click to expand...

That’s perfect! Thanks for the forum link with confirmation of FE board blocks from Aquacomputer. Really going to need one of theirs with the active cooled backplate for the double sided memory placement and super compact board design. 

I feel like chances of picking up a founders edition model will be better. Otherwise I’d consider an EVGA model for going under water.


----------



## J7SC

Nammi said:


> If google translate isn't completely off, there will be FE blocks from Aquacomputer.
> https://forum.aquacomputer.de/wasserk-hlung/p1447754-k-hlblock-f-r-die-3090-von-nv/#post1447754
> 
> I was planning to slap a block on the FE but considering it's custom pcb and the only one with the 12pin, I feel like the chances for a decent plimit bios are slim...


 
The Aquacomputer quote does indeed state that they're going to offer blocks for FE 3070 - 3090 (the question is 'when'). He also added that they are planning blocks for all 'popular' custom PCBs...that could bode well for Asus Strix and Aorus Xtr (if those don't bring out factory full water-block versions of their own). Your point about the 12 pin connector on FE and potential dearth of custom bios is a good one...


----------



## Shawnb99

Overclockers.uk has pricing up and when we can expect to be able to order it

https://www.overclockers.co.uk/foru...s-3090-3080-3070-now-online-at-ocuk.18897450/


----------



## BigMack70

Shawnb99 said:


> Overclockers.uk has pricing up and when we can expect to be able to order it
> 
> https://www.overclockers.co.uk/foru...s-3090-3080-3070-now-online-at-ocuk.18897450/


Not much help when trying to determine what time of day we need to have our F5 keys ready. I hope they're correct about having good supply.

The news about Witcher 3 getting RTX confirmed I'll be going 3090. Can't wait to play the upgraded version.


----------



## Shawnb99

BigMack70 said:


> Not much help when trying to determine what time of day we need to have our F5 keys ready. I hope they're correct about having good supply.
> 
> The news about Witcher 3 getting RTX confirmed I'll be going 3090. Can't wait to play the upgraded version.



Nope, no help at all. 2pm my time is a deal breaker for me. I work graveyard, so I'll be sleeping then; I'm not taking a day off to buy a GPU.
Hopefully there's lots of stock, as I'm waiting to see what blocks come out before I dive in.


----------



## Glerox

I also have confirmation from EKWB support that there will be blocks for FE pcb but don't know when.


----------



## pewpewlazer

HyperMatrix said:


> FOUNDERS EDITION CARDS ARE NOT REFERENCE DESIGN.
> 
> https://www.tweaktown.com/news/7495...ders-not-reference-they-are-custom/index.html
> 
> I don’t know if water block manufacturers are going to release founders edition blocks in the future or not. It could be they got access to the reference PCB design just as AIB partners did, and will make Founders Edition ones once they get a hold of one after launch. But at present, founders edition = no water cooling.
> 
> I’ll probably fire off an email to Aquacomputer to see if they have any plans for it or not. This could make a big difference for people trying to choose what to buy.


So the reference PCB for the 350w RTX 3090 is 2x 8 pin? Unless the AIBs disregard the whole "8 pin rated for 150w" nonsense, get ready to slide that power limit alllllll the way up to a whopping 107%.

I should probably brush up on my non-existent soldering skills while waiting for the 24th.


----------



## Mooncheese

BigMack70 said:


> Yeah, my 50% number is very conservative. If, per the Digital Foundry video, the 3080 averages about 80% faster than the 2080, then the 3080 will be about 40% faster than the 2080 Ti. By the specs, that means the 3090 should be about 1.2*1.4 = 1.7x the performance of the 2080 Ti. The only thing I'm not sure of is how badly that might end up limited by power restrictions, since the 3090 TDP is only 10% higher than the 3080's. So my expectations are set around 1.5x 2080 Ti performance, and anything above that is a nice bonus IMO.


10% higher with reference TDP and 12 pin power connector (300w + 75w over PCI-E slot). Non-reference, i.e. FTW3 etc with 3x8 pin power, is not going to be power limited and if they over-engineer the power delivery again this time around like they did with TU-102 then 450W+ should be par for the course with non-reference PCB. 

For me though, this is a tough decision. I'm most definitely going with the 3090, but I want to put it under water as quickly as possible. Nvidia is most certainly out of the equation because water blocks won't be available early on; also, I'm hearing nothing but horror stories about Digital River in terms of customer service, and I hear that Nvidia can technically deny an RMA if they realize you disassembled the card, say for TIM replacement or putting a water block on it. Also, 3-year transferable warranty. 

Right now I'm eyeing the XC3 but I believe it may only be 2x8 pin power so now I'm thinking FTW3 but that's going to be approaching $1700 U.S. before taxes and who knows when a water-block will be developed for it.


----------



## DooRules

Mooncheese said:


> 10% higher with reference TDP and 12 pin power connector (300w + 75w over PCI-E slot). Non-reference, i.e. FTW3 etc with 3x8 pin power, is not going to be power limited and if they over-engineer the power delivery again this time around like they did with TU-102 then 450W+ should be par for the course with non-reference PCB.
> 
> For me though, this is a tough decision. I'm most definitely going with the 3090, but I want to put it under water as quickly as possible. Nvidia is most certainly out of the equation because water blocks won't be available early on; also, I'm hearing nothing but horror stories about Digital River in terms of customer service, and I hear that Nvidia can technically deny an RMA if they realize you disassembled the card, say for TIM replacement or putting a water block on it. Also, 3-year transferable warranty.
> 
> Right now I'm eyeing the XC3 but I believe it may only be 2x8 pin power so now I'm thinking FTW3 but that's going to be approaching $1700 U.S. before taxes and who knows when a water-block will be developed for it.


I have had occasion to rma a Titan back to Digital River, absolutely the easiest rma I ever had. And I bought a Titan RTX, used it for 3 weeks and figured that the price in no way warranted the performance so sent it back for a full refund, no questions asked. That was my own personal experience with them.


----------



## Mooncheese

How much power can be conveyed over 2x8 pin? I'm hearing that it's 150w per 8 pin and another 75w via PCI-E slot but I'm also hearing that the PSU can deliver much more than that over 8 pin. 

Overclocking the 3090 is going to require at least 450-500w. There's no way 375w is ample. So now I'm looking at a 3x8-pin card, which means waiting forever for a water block to come out. No way in hell am I sticking EVGA's Hydro Copper block on a FTW3 if I go that route; whoever was in charge of the aesthetic change with the 3000 series needs to be fired. I'm sorry, it's not growing on me. The red stripe, the hexagonal hole pattern - I don't like anything about this.



DooRules said:


> I have had occasion to rma a Titan back to Digital River, absolutely the easiest rma I ever had. And I bought a Titan RTX, used it for 3 weeks and figured that the price in no way warranted the performance so sent it back for a full refund, no questions asked. That was my own personal experience with them.


You're the first person I've heard from who has had a positive experience.


----------



## originxt

Supposedly 6 AM PST is the release time for each card on their respective dates. https://www.reddit.com/r/nvidia/comments/imjqby/6am_pst_release_time_confirmed_by_nv_tim/


----------



## PharmingInStyle

May have been brought up already, but there was the surprise release of the 2080 Ti, which was faster than both the 2080 and the Titan V, and it angered many who bought the latter two cards. So would anyone be ticked off if the same thing happens, assuming a rumored 3080 Ti is faster than the 3090? It's very unlikely to be a repeat of that, so probably nothing to even bother thinking about.

Now, money isn't the issue with me, and it wasn't back then; I forked over the cash just to have the fastest card, even if it was just a couple or a few fps faster. So I got a Titan V to replace my 1080. Then the 2080 Ti was released, my pride was somewhat wounded, and I got a Titan RTX. 

edit - If the same thing happens like I mentioned above, I'll shell out the money again and not even have to go into debt. But not everyone here is that fortunate, and they will be quite angered in the unlikely event Nvidia pulls that stunt again and the rumored "Ti" is noticeably faster than the 3090.


----------



## straha20

So from Reddit regarding EKWB FE water blocks...

https://www.reddit.com/r/nvidia/comments/imn4o0/ek_support_response_regarding_30803090_fe/

I think it may be a good thing to try and start compiling a list of AIB cards that will be using the Reference Board.


----------



## skupples

When did the 3080 SKU become high end? I thought that stopped after Titan 1.0... Being the highest-end unit out on day one doesn't = high-end product when it's only a piece in a super slow release cycle. 


y'all really do tell yourselves some strange things.


Cost is relative; if it seems high end to you, maybe it's something you don't want to afford?




straha20 said:


> So from Reddit regarding EKWB FE water blocks...
> 
> https://www.reddit.com/r/nvidia/comments/imn4o0/ek_support_response_regarding_30803090_fe/
> 
> I think it may be a good thing to try and start compiling a list of AIB cards that will be using the Reference Board.





Weird... wasn't it confirmed that FE is not reference this time around, via EK saying they're making both units?


----------



## straha20

skupples said:


> When did the 3080 SKU become high end? I thought that stopped after Titan 1.0... Being the highest-end unit out on day one doesn't = high-end product when it's only a piece in a super slow release cycle.
> 
> 
> y'all really do tell yourselves some strange things.
> 
> 
> Cost is relative; if it seems high end to you, maybe it's something you don't want to afford?
> 
> 
> Weird... wasn't it confirmed that FE is not reference this time around, via EK saying they're making both units?


I was only aware of EK saying they would have blocks available at or close to launch, but not specifying for which boards. They do have blocks for reference boards listed, but nothing for FE. This is a different situation than normal, because the FE has pretty much always been the reference board - but not this time. This time, the FE board and reference boards are different.


----------



## BigMack70

PharmingInStyle said:


> May have been brought up already but there was the surprise release of the 2080ti which is faster than the 2080 and the Titan V. And it angered many who bought the latter two cards. So would anyone be ticked off if the same thing happens assuming a rumored 3080ti is faster than the 3090? It's very unlikely for a repeat of that so probably nothing to even bother thinking about.
> 
> Now money isn't the issue with me and it wasn't back then, I forked over the cash just to have the fastest card even if it was just a couple or few fps faster. So I got a Titan V to replace my 1080. The 2080ti was released and I was somewhat annoyed in my pride so I got a Titan RTX.
> 
> edit - If the same thing happens like I mentioned above I'll shell out the money again and not even have to go in debt. But not everyone here is that fortunate and will be quite angered in the unlikely event Nvidia pulls that stunt again if the rumored "ti" is noticeably faster than the 3090.


I don't have any idea if they'll make a $3k+ Titan card at some point, but it's very unlikely they'll make a Ti version above the 3090. Three things strongly suggest that: first, $1500 is already by far the most expensive GPU to be included in their gaming marketing/lineup; second, top-end SKUs like the EVGA Kingpin are coming on the 3090; and third, since they are already at a 350W TDP, there's not much headroom to push out a faster GPU. 

I fully expect something to release between the 3080 and 3090, probably a 20GB version.


----------



## Baasha

Isn't the RoG Strix 3090 supposedly a 400W card (3x 8-pin)?

I am hoping there will be an Ampere Titan ~ Dec. 2020 for the prosumer market. I don't see why they would avoid that segment altogether; especially if it has 48GB of VRAM.

Secondly, a 3080 Ti w/ 20GB VRAM sounds like it will slot in nicely between the 3080 & 3090 around $999 - $1199. I doubt it will be faster (at least on paper) than the 3090, but it may end up being gamers' favorite card - plenty of VRAM (20GB), faster than the vanilla 3080 and not as expensive as the 3090.

I just hope the Ampere Titan is also not a 3-slot card like the 3090 FE.


----------



## sakete

originxt said:


> Supposedly 6 AM PST is the release time for each card on their respective dates. https://www.reddit.com/r/nvidia/comments/imjqby/6am_pst_release_time_confirmed_by_nv_tim/


I saw that. But is that just for the Nvidia FE cards, or for all AIBs as well?


----------



## HyperMatrix

pewpewlazer said:


> So the reference PCB for the 350w RTX 3090 is 2x 8 pin? Unless the AIBs disregard the whole "8 pin rated for 150w" nonsense, get ready to slide that power limit alllllll the way up to a whopping 107%.
> 
> I should probably brush up on my non-existent soldering skills while waiting for the 24th.


This actually has me very concerned. My Pascal Titan X is rated at 250W but under OC and heavy load it can hit up to 310W. That's 24% higher than the rated power use. With the 3090 you're looking at 350W. If as you mentioned we go by the 150W per 8 pin standard, which is lower than what it can actually deliver, that's 300W from the 12-pin connector, and 50W from the PCIE slot. That leaves 25W extra (remainder of PCIE slot power). But if we expect overclocked power consumption to go up similar to the Titan X, you'll need 430W-450W. 

Now there are 3 problems with that. 

1) You fall short of the "in-spec" wattage by approximately 50W-75W.
2) Even though you can theoretically exceed 150W per 8-pin connector, the card's VRMs may not be designed with higher than 375W power draw in mind.
3) On top of all that, due to said limitations, the bios could be severely restricted.

I really hope there are reviews, including OC tests, prior to launch. Because even though I was initially sure I wanted a founders edition card, I'm really doubting it'll be able to handle any substantial overclocks. I just checked and the Titan X (Pascal) had an 8+6 connector. So it had 75W from PCIE, 75W from 6-pin cable and 150W from 8-pin cable. Which explains my max of around 300W. Because the design allowed for an additional 20% power draw above TDP. The 3090 doesn't. It only allows for an extra 7.6% as you mentioned.

It's back to the drawing board for me. Only reason to get a 3090 Founder's Edition with 7.6% headroom is if you plan to keep it on air.

edit: Anyone know of any cards coming out with 3x 8-pin power connectors? Looking at pictures, I see the Asus 3090 Strix has it. A lot of others haven't provided full 360 pics though.

edit2: Second question. If you get a board with 3x 8-pin connectors, that would fall outside of "reference board design." So that would mean no water blocks? Unless you buy a model that already has a factory block installed like EVGA Hydro Copper? 

update: So far found 2 Asus cards (https://videocardz.com/press-release/asus-announces-geforce-rtx-30-series) and 2 MSI cards (https://videocardz.com/press-releas...rtx-3090-rtx-3080-and-rtx-3070-graphics-cards) that have it.

update2: NDA for FE card reviews is lifted on the 14th. For AIB cards, on the 17th. So hopefully we'll get to see more details prior to the release of the 3090.

update 3: Looking at a video of an RTX 2080ti with 295W TDP, OC'd to 2145MHz, stable at 34-35C at full load, and it pushes even past 380W. So nearly 30% higher than stock TDP. Since the 2080ti and 3090 both have the same power limit of 375W, this is going to be very very bad for overclocks....
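The in-spec budget vs. overclocked-draw gap discussed above is simple arithmetic. A sketch, where the connector ratings are the per-spec figures and the ~24% OC factor is the Pascal Titan X extrapolation from this post, not a measured 3090 number:

```python
# In-spec board power vs. a rough overclocked-draw estimate.
PCIE_SLOT_W = 75   # PCIe x16 slot, per spec
PIN8_W = 150       # per 8-pin PCIe power connector, per spec

def board_limit(num_8pin):
    """Maximum in-spec draw for a board with the given 8-pin count."""
    return num_8pin * PIN8_W + PCIE_SLOT_W

TDP_3090 = 350
OC_FACTOR = 1.24   # ~24% over TDP, extrapolated from a Pascal Titan X

oc_draw = TDP_3090 * OC_FACTOR  # ~434 W estimated under heavy OC
for pins in (2, 3):
    limit = board_limit(pins)
    print(f"{pins}x8-pin: {limit} W in-spec, {limit - oc_draw:+.0f} W vs OC estimate")
```

By those assumptions a 2x8-pin (or 12-pin) board is roughly 60W short of the estimated OC draw, while a 3x8-pin board has ~90W of headroom to spare.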


----------



## straha20

UPDATE: Links have been pulled.

Hmmm...I thought preorders weren't allowed...

Amazon has the Zotac 3090 and 3080 listed for preorder...$2400

https://www.amazon.com/dp/B08HJQ182D/ref=sr_1_1?dchild=1&keywords="+RTX+3090"&qid=1599270155&sr=8-1

I would think this was a scam, but the listing is Sold and Shipped by Amazon.


----------



## Mooncheese

From what I have gathered Aquacomputer is making blocks for FE right off the bat.

How much power can safely be conveyed over 2x8 pin?


----------



## straha20

Mooncheese said:


> From what I have gathered Aquacomputer is making blocks for FE right off the bat.
> 
> How much power can safely be conveyed over 2x8 pin?


Looks like Aquacomputer plans to make FE blocks, but they will not be available at launch.

https://forum.aquacomputer.de/weite...376-roadmap-for-nvidia-30-series-waterblocks/


Each 8-pin is rated at 150 watts, so two 8-pins at 150 watts plus the 75 watts from the PCIe slot is 375 watts.


----------



## changboy

straha20 said:


> Looks like Aquacomputer plans to make FE blocks, but they will not be available at launch.
> 
> https://forum.aquacomputer.de/weite...376-roadmap-for-nvidia-30-series-waterblocks/
> 
> 
> Each 8-pin is rated at 150 watts, so two 8-pins at 150 watts plus the 75 watts from the PCIe slot is 375 watts.


No, 2x8-pins can provide 500 watts if your PSU can provide it!

Just look at an old test of the AMD HD 7990, a dual-GPU card with 2x8-pin connectors, and you will see that card draw 450 watts.

Maybe long ago, when PSUs were 500-600 watts, they could have a hard time providing this, but with today's PSUs it's not a problem.


----------



## Nizzen

changboy said:


> No, 2x8-pin can provide 500 watts if your PSU can supply it!
> 
> Just look at an old test of the AMD HD 7990, a dual-GPU card with 2x8-pin connectors, and you will see it draw 450 watts.
> Maybe long ago, when PSUs were 500-600 watts, they would have had a hard time providing this, but with today's PSUs it's not a problem.


Yes!

We saw the same on the good old EVGA 780 Ti Classified. I drew about 650W with ONE 780 Ti Classified using EVBot. The card had only 2x8-pin connectors.


----------



## HyperMatrix

Nizzen said:


> Yes!
> 
> We saw the same on the good old EVGA 780 Ti Classified. I drew about 650W with ONE 780 Ti Classified using EVBot. The card had only 2x8-pin connectors.


You had a custom/unlocked bios though, if I'm not mistaken from that generation. I think we had like +400% power bios for the cards. Issue I have here is that if there is no way to get a custom bios installed, there's a possibility that the cards will be heavily power gated since they're already so close to the in-spec max. Not sure how the bios situation would work with the 3x 8-pin cards. Are manufacturers allowed to set custom power limits based on their board design? Or are the limits all set by Nvidia?

edit: also evbot. lol. not sure how much room the components as well as bios will allow if it's designed around a 375W in-spec max.


----------



## zhrooms

NVIDIA and its partners don't go above spec: 375W for 2x8-pin and 525W for 3x8-pin. It doesn't matter that you can physically run 600W over a single 8-pin.

We saw some partners allow a 380W limit, up from 375W, but EVGA did it correctly on the FTW3 by allowing 373W.

The EVGA Kingpin had 3x8-pin and a full 520W, like the FTW3 just under the actual limit of 525W.

The GALAX Hall of Fame was also 3x8-pin, but at 450W instead, so in between the 2x8-pin and 3x8-pin limits.

The MSI Lightning Z was the worst one out there, with 3x8-pin at just 380W. Every single 2080 Ti card by MSI was a disaster in some way.


----------



## changboy

In normal use inside a normal system, you won't see any difference between 2x8-pin and 3x8-pin unless you go LN2 with those cards.

For example, the 2080 Ti Hall of Fame scores around the same as an EVGA 2080 Ti XC.

Now, the 3090 and 3080 are on 8nm, so they should need less power than the 2080 Ti. And yes, I know the core count has doubled, which is why the TDP is 350W. NVIDIA also always limits the power to the card to protect the GPU.

So all in all, I doubt a 3x8-pin card will score much more than one with 2x8-pin; the BIOS at the final stage will make more of a difference, at least that's what I think.

Sure, a Kingpin, at the price they sell this beast with all the custom parts on board, will give you a little bit more, but again only outside normal usage, if you go LN2 and push crazy voltage.

And yes, I understand someone can feel better knowing they have 3x8-pin, but if NVIDIA uses 2x8-pin, they know what they're doing.


----------



## zhrooms

changboy said:


> In normal use inside a normal system, you won't see any difference between 2x8-pin and 3x8-pin unless you go LN2 with those cards.
> 
> For example, the 2080 Ti Hall of Fame scores around the same as an EVGA 2080 Ti XC.
> 
> Now, the 3090 and 3080 are on 8nm, so they should need less power than the 2080 Ti. And yes, I know the core count has doubled, which is why the TDP is 350W. NVIDIA also always limits the power to the card to protect the GPU.
> 
> So all in all, I doubt a 3x8-pin card will score much more than one with 2x8-pin; the BIOS at the final stage will make more of a difference, at least that's what I think.


Well you are wrong, https://www.overclock.net/forum/69-...tered-rtx-2080-ti-intrusive-power-limits.html.

150W higher power limit allows for a way higher overclock without throttling.

This applies to air cooling (as well as AIO and water): there are plenty of larger triple-fan coolers that can keep the GPU at 60°C or below, yet you are still going to run into severe throttling if you run Metro Exodus in 4K on an FE card (the game consumes up to 500W at max overclock with no power limit).

The Hall of Fame at 450W scores a hell of a lot higher than the XC at 338W; Time Spy Extreme on air consumes 600W, and the XC is capped at almost half that.


----------



## Glottis

Anyone else worried about power throttling on 3090 FE? I always bought cards that have ample power headroom in the past, but I'm now considering Founder's Edition. FE cooler seems very capable, but I think this GPU really needs 3x 8pin to spring to life and a single 12pin will be a bottleneck for this 350W TGP monster.


----------



## changboy

zhrooms said:


> Well you are wrong, https://www.overclock.net/forum/69-...tered-rtx-2080-ti-intrusive-power-limits.html.
> 
> 150W higher power limit allows for a way higher overclock without throttling.
> 
> This applies to air cooling (as well as AIO and water): there are plenty of larger triple-fan coolers that can keep the GPU at 60°C or below, yet you are still going to run into severe throttling if you run Metro Exodus in 4K on an FE card (the game consumes up to 500W at max overclock with no power limit).
> 
> The Hall of Fame at 450W scores a hell of a lot higher than the XC at 338W; Time Spy Extreme on air consumes 600W, and the XC is capped at almost half that.


It's BIOS related. Even if you have 10x8-pin and your BIOS is set to 125 watts, you will fail; if you have a 400-watt BIOS and 2x8-pin, it will work. Otherwise, why would people flash those cards? For fun?

You need the right BIOS for your card, but yeah, they don't provide many for lower-end cards, or the flash fails.

Then I'd better buy a Kingpin 3090 lol


----------



## changboy

This is not worth $400 or $600+ for this. Look:


----------



## Krzych04650

NVIDIA released the first performance graph for the 3090 along with architecture details. Looks good. I'm not sure how the Titan RTX performance shown here relates to the 2080 Ti in their numbers, though.



Glottis said:


> Anyone else worried about power throttling on 3090 FE? I always bought cards that have ample power headroom in the past, but I'm now considering Founder's Edition. FE cooler seems very capable, but I think this GPU really needs 3x 8pin to spring to life and a single 12pin will be a bottleneck for this 350W TGP monster.


I think the 12-pin power connector is now confirmed to be capable of only 300W, so suspicions about the whopping 107% power limit will most likely come true. That will not be enough to prevent even stock clocks from power throttling, let alone any overclocking.

It will probably make all initial reviews irrelevant, since they are going to be run at stock settings with the stock power limit, and the 3080 is probably going to have a much more adequate power limit than the 3090, given the mere 30W TDP difference between them. Only OC numbers on 3x8-pin cards will show the actual performance difference between the two.


----------



## changboy

Lol, the performance of the 3090 isn't really better than the 3080 from what I see in the graph. Am I wrong, or what the hell is that?


----------



## zhrooms

changboy said:


> It's BIOS related. Even if you have 10x8-pin and your BIOS is set to 125 watts, you will fail; if you have a 400-watt BIOS and 2x8-pin, it will work. Otherwise, why would people flash those cards? For fun? You need the right BIOS for your card, but yeah, they don't provide many for lower-end cards, or the flash fails. Then I'd better buy a Kingpin 3090 lol


 
I have no idea what you just said.



changboy said:


> This is not worth $400 or $600+ for this. Look:


 
The chart is just showing some light overclocking. Also, GTA V is five and a half years old, and at 1440p it doesn't require much graphical power, typically around 280-330W depending on resolution, so there is no power limitation going on there. If he had upped the resolution to 4K and locked 1.093V, the results would have been very different.


----------



## changboy

I just mean that at those really high card prices, it's near the price of two cards. Who thinks that's a good choice?


----------



## liang333

I'm struggling. I feel like the 3080 is good enough for most cases. Can't afford an 8K TV anyway. (Would be nice to have, though.)


----------



## Krzych04650

changboy said:


> Lol, the performance of the 3090 isn't really better than the 3080 from what I see in the graph. Am I wrong, or what the hell is that?


You are wrong, in fact. If you translate the performance from all of these graphs using framerate numbers from game reviews on sites like TPU, they show the 3090 as 20% faster than the 3080.

Quick example

According to this graph, 3080 is 1.88x of 2070S in Control

A quick look at the TPU review shows the following numbers at 4K:
2070 Super - 25.5 FPS
2080 Ti - 35.6 FPS
So the 3080 is 25.5 x 1.88 = 47.94 FPS, or 1.34x the 2080 Ti

Now, according to this graph, the 3090 is 1.55x the Titan RTX in the same game.
Again, 2080 Ti - 35.6 FPS
The Titan RTX is generally 5% faster than the 2080 Ti
Titan RTX - 37.38 FPS
So the 3090 is 37.38 x 1.55 = 57.94 FPS, or 1.21x the 3080
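For anyone who wants to check the arithmetic, the cross-referencing above can be reproduced in a few lines (the FPS figures and multipliers are the ones quoted in this post; the 5% Titan RTX uplift is the assumption stated above, not a measured number):

```python
# Translating NVIDIA's relative-performance graph into FPS using
# TPU's Control 4K numbers, as done in the post above.
fps_2070s = 25.5                  # TPU: 2070 Super, Control 4K
fps_2080ti = 35.6                 # TPU: 2080 Ti, Control 4K

fps_3080 = fps_2070s * 1.88       # graph: 3080 = 1.88x 2070 Super
fps_titan = fps_2080ti * 1.05     # assumption: Titan RTX ~5% over 2080 Ti
fps_3090 = fps_titan * 1.55       # graph: 3090 = 1.55x Titan RTX

print(f"3080: {fps_3080:.2f} FPS, {fps_3080 / fps_2080ti:.2f}x the 2080 Ti")
print(f"3090: {fps_3090:.2f} FPS, {fps_3090 / fps_3080:.2f}x the 3080")
```

The ~1.2x gap between the projected 3090 and 3080 numbers is where the "20% faster" claim comes from.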

The misconception that there is no performance difference between the 3080 and 3090 comes from people thinking the 3080 is somehow 60% faster than the 2080 Ti because of that DOOM video NVIDIA released, where the 3080 is actually slightly less than 40% faster on average. Some people and media outlets still made it 60-70%. These days you can show people performance in two videos side by side, black on white, and they will still manage to argue about what the difference was; media outlets take full advantage of that.



liang333 said:


> I'm struggling. Feel like 3080 is good enough for most cases. Can't afford a 8K TV anyways. (Would be nice to have though.)


Even for crazies like me who are going to buy the 3090 because it supports SLI, this doesn't go without reflection, so it is perfectly understandable that someone less enthusiastic about all of this struggles to pay 2.15x the price for a 20% performance gain. You all really should just get a 3080 and leave the 3090 stock to us.


----------



## Wihglah

The only reason I am getting the 3090 is for the VRAM. 10GB just isn't going to cut it in VR.


----------



## BigMack70

Krzych04650 said:


> You are wrong in fact, if you translate the performance from all of these graphs using framerate numbers from games reviews from sites like TPU they are showing 3090 as 20% faster than 3080.
> 
> Quick example
> 
> According to this graph, 3080 is 1.88x of 2070S in Control
> 
> Quick look at TPU review shows following numbers at 4K:
> 2070 Super - 25.5 FPS
> 2080 Ti - 35.6 FPS
> So 3080 is 25.5 x 1.88 = 47.94, or 1.34x of 2080 Ti
> 
> Now according this graph, 3090 is 1.55x faster than Titan RTX in the same game
> Again, 2080 Ti - 35.6 FPS
> Titan RTX is generally 5% faster than 2080 Ti
> Titan RTX - 37.38 FPS
> So 3090 is 37.38 x 1.55 = 57.94 FPS, or 1.21x of 3080
> 
> The misconception that there is no performance difference between the 3080 and 3090 comes from people thinking the 3080 is somehow 60% faster than the 2080 Ti because of that DOOM video NVIDIA released, where the 3080 is actually slightly less than 40% faster on average. Some people and media outlets still made it 60-70%. These days you can show people performance in two videos side by side, black on white, and they will still manage to argue about what the difference was; media outlets take full advantage of that.
> 
> 
> 
> Even for crazies like me who are going to buy the 3090 because it supports SLI, this doesn't go without reflection, so it is perfectly understandable that someone less enthusiastic about all of this struggles to pay 2.15x the price for a 20% performance gain. You all really should just get a 3080 and leave the 3090 stock to us.


This. 3080 is going to be about 2080 Ti + 35-40% performance. 3090 should be about 2080 Ti + 60-65%.


----------



## HyperMatrix

BigMack70 said:


> This. 3080 is going to be about 2080 Ti + 35-40% performance. 3090 should be about 2080 Ti + 60-65%.


Don't forget the 2x RT performance. Games that were held back by RT could see a 100% increase. Then on top of that, there's the new and more powerful DLSS 2.0. Looking at some of the benchmarks, an overclocked 3090 with RTX and DLSS 2.0 should be able to deliver 4K 144Hz gameplay. The only problem I have is the lack of DLSS support in games.


----------



## BigMack70

HyperMatrix said:


> Don't forget the 2x RT performance. Games that were held back by RT could see a 100% increase. Then on top of that, there's the new and more powerful DLSS 2.0. Looking at some of the benchmarks, an overclocked 3090 with RTX and DLSS 2.0 should be able to deliver 4K 144Hz gameplay. The only problem I have is the lack of DLSS support in games.


I stand by my numbers with RTX on. I think anyone expecting an average performance of more than 2080 Ti + 60-65% from the 3090 is going to be extremely disappointed. 2080 Ti + 75-80% performance is probably best case scenario in specific titles favoring Ampere.


----------



## HyperMatrix

BigMack70 said:


> I stand by my numbers with RTX on. I think anyone expecting an average performance of more than 2080 Ti + 60-65% from the 3090 is going to be extremely disappointed. 2080 Ti + 75-80% performance is probably best case scenario in specific titles favoring Ampere.


If a game was limited by RT performance, then depending on how limited it was, doubling RT performance allows for a potential doubling of FPS, assuming the shaders don't become the limit.

I'll give you an example: Metro Exodus without DLSS. The 2080 Ti got 71.1 FPS at 4K Ultra settings. With RTX Ultra turned on, it dropped to 34.6 FPS. So simply because of the lack of ray tracing performance, the frame rate dropped by more than half. In this example, if you simply double the RT core performance, you'd go from 34.6 FPS to 69.2 FPS, or almost back to the RTX-off performance.

edit: benchmark reference added Metro Exodus Benchmark Performance, RTX & DLSS


----------



## changboy

I want to buy the RTX 3090, but after checking all the cards coming, I really don't know which is the better choice. The FE edition will have the smallest PCB and the smallest waterblock too, so it would be very nice to see that small card also be the most powerful! But I've checked the EVGA Hydro Copper, ready to put in the loop, and I've also checked the Asus Strix OC, the beast. Just 5 minutes ago I began pre-ordering the waterblock for the Founders Edition, then stopped at checkout lol. I'm not sure which one is the better buy; I'm thinking of power, overclocking, and watercooling it.

I really think that 15 minutes after release they will all be sold out lol. I need to decide before the date, but it's hard when you haven't seen a review of these products.


----------



## BigMack70

HyperMatrix said:


> If a game was limited by RT performance, then depending on how limited it was, doubling RT performance allows for a potential doubling of FPS, assuming the shaders don't become the limit.
> 
> I'll give you an example: Metro Exodus without DLSS. The 2080 Ti got 71.1 FPS at 4K Ultra settings. With RTX Ultra turned on, it dropped to 34.6 FPS. So simply because of the lack of ray tracing performance, the frame rate dropped by more than half. In this example, if you simply double the RT core performance, you'd go from 34.6 FPS to 69.2 FPS, or almost back to the RTX-off performance.
> 
> edit: benchmark reference added Metro Exodus Benchmark Performance, RTX & DLSS


I think you're just setting yourself up for disappointment. We already saw in the Digital Foundry video that Ampere's performance relative to Turing does increase in heavy RTX workloads (e.g. path tracing); however, the relative increase is not dramatic or earth-shattering.


----------



## HyperMatrix

BigMack70 said:


> I think you're just getting set for disappointment. We already saw in the digital foundry video that Ampere's relative performance to Turing does increase in heavy RTX workloads (e.g. Path tracing), however, the relative increase in performance is not dramatic or earth shattering.


Literally RT TFLOPS jumps 2x from 2080 Ti to 3090. From 34 RT TFLOPS to 69 RT TFLOPS. I don't know how to more clearly state what I've already stated. If RT performance is limiting your fps by at least half, then doubling RT performance will double your frame rate. 

If RT usage wasn't very heavy in a game, for example you were normally getting 60fps and with RTX on you drop to 45fps, doubling the RT cores isn't going to get you to 90fps. It'll allow you to get up to 90fps if shader performance has enough headroom for it. And since the actual shader performance of the 3090 will be around 50-75% higher than the 2080 Ti's, in this example, if you previously got 60fps with RTX off, you'd now get 90-105fps. So....let's try this again. If you have the shader performance of an RTX 3090, giving you 90-105fps, and you enable RTX with the RT performance of a 2080 Ti, you'd still only get 45fps. But since the 3090 has exactly 2x the RT performance, you'd get 90fps.

So to recap....in RT workloads, you're limited by both those factors. Shaders, and RT. Either one can be a limiting factor. Whichever is the lower performing one, will limit the other. So your max FPS will be the max FPS of the lower of the 2. If you can hit 100 fps with RTX off, and 50 fps with RTX on, then your max FPS is 50. If you increase RT performance by 2x, your FPS will go up to 100. If you increase RT performance by 1000x, your FPS will still stay at 100. If your RTX on performance is 50fps, and you increase shader performance by 10x, your max FPS will still be 50fps (unless RTX can simultaneously utilize shader cores and rt cores for ray tracing, which is theoretically possible, even if inefficient).
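The limiting-factor argument above boils down to a one-line model (a toy sketch using the hypothetical 60/45 FPS numbers from this post; it ignores the shaders-assisting-RT caveat mentioned at the end):

```python
# Toy model: with RTX on, the frame rate is capped by whichever of the
# shader path or the RT path is the slower of the two each frame.
def fps_with_rt(shader_fps: float, rt_fps: float) -> float:
    """FPS when both shader and RT work must finish every frame."""
    return min(shader_fps, rt_fps)

print(fps_with_rt(60.0, 45.0))  # 2080 Ti-class RT: RT-bound -> 45.0
print(fps_with_rt(60.0, 90.0))  # 2x the RT throughput: shader-bound -> 60.0
```

Doubling the RT term only helps while RT is the minimum; past that point, extra RT throughput changes nothing.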


----------



## BigMack70

Real-world performance is never as simple as comparing TFLOPS. If you don't believe me, go watch the DF video, or just wait for the reviews. They show the 3080 in its absolute best-case scenario, path tracing, and it gets a minor relative performance bump (from ~80% better than the 2080 to ~95% better). 3090 vs 2080 Ti will be the same sort of thing.


----------



## pewpewlazer

BigMack70 said:


> I think you're just getting set for disappointment. We already saw in the digital foundry video that Ampere's relative performance to Turing does increase in heavy RTX workloads (e.g. Path tracing), however, the relative increase in performance is not dramatic or earth shattering.


So you're telling me that Doom Eternal, which does NOT support ray tracing, is utilizing the RT cores?


----------



## HyperMatrix

BigMack70 said:


> Real world performance is never as simple as comparing TFlops. If you don't believe me, go watch the DF video, or just wait for the reviews. They show the 3080 in its absolute best possible case scenario - path tracing. And it gets a minor relative performance bump (from ~80% better than 2080 to ~95% better). 3090 vs 2080Ti will be the same sort of thing.


I did. You should rewatch it. In another RT-heavy game, Control, the RTX 3080 got 178% of the performance of an RTX 2080 Ti with DLSS disabled. Accounting for the RTX 3090's 20% performance boost, that means we can expect the RTX 3090 to have 213% of the performance of the RTX 2080 Ti.

I'm done debating this with you. Cheers.


----------



## BigMack70

HyperMatrix said:


> I did. You should rewatch it. In another RT heavy game like Control, the RTX 3080 got 178% of the performance of an RTX 2080ti with DLSS disabled. Accounting for the RTX 3090's 20% performance boost, that means we can expect the RTX 3090 to have 213% of the performance of the RTX 2080Ti.
> 
> I'm done debating this with you. Cheers.


I don't know if you realize this, but the DF video is 3080 vs 2080. Not 2080Ti. But seeing as how you're just becoming more wrong with more posts, probably best to stop debating.



pewpewlazer said:


> So you're telling me that Doom Eternal, which does NOT support ray tracing, is utilizing the RT cores?


That's obviously not what I'm telling you. But nice bait.


----------



## HyperMatrix

BigMack70 said:


> I don't know if you realize this, but the DF video is 3080 vs 2080. Not 2080Ti. But seeing as how you're just becoming more wrong with more posts, probably best to stop debating.
> 
> 
> 
> That's obviously not what I'm telling you. But nice bait.


Except for around the 11 minute mark when he compares the performance to an RTX 2080 Ti. Like I said, rewatch the video. I'm done.


----------



## BigMack70

HyperMatrix said:


> Except for around the 11 minute mark when he compares the performance to an RTX 2080 Ti. Like I said, rewatch the video. I'm done.


So you are basing your argument on a part of the comparison where he presents no hard data to analyze either way. Got it.

I'll stick to the numbers.

Fun fact edit: I'm going to ignore the part where you misrepresent his data as being vs the 2080 Ti when it's actually vs the 2080, and simply point out that 213% of the 2080's performance is about exactly what I'd expect from the 3090. That makes it about 65% faster than the 2080 Ti, which is exactly what I'm telling you is going to be the case. We have more than enough data to draw this conclusion. There will be variance from game to game, but nothing so wild as to enable games where the 3090 is fully twice as fast as the 2080 Ti. To create that kind of performance delta, you'd need a situation where the 11GB VRAM buffer on the 2080 Ti is insufficient, such as 8K gaming. In any condition where the VRAM buffer is not limiting, the performance delta is likely to vary between 1.4x and 1.8x the 2080 Ti.


----------



## changboy

I want to say Overclock.net has done a really great job with the new thread and all the features. Great job!


----------



## Thoth420

changboy said:


> I want to say Overclock.net has done a really great job with the new thread and all the features. Great job!


Agreed. The new layout far exceeded my expectations. Kudos to the staff if you see this by chance.

Now, 3090s... anyone planning on camping a Microcenter? They said on Twitter they would have cards for release day. In-store only, and I'm guessing 1 per customer.


----------



## changboy

I'm already there in the parking lot with the Winnebago lol.


----------



## changboy

Newegg has begun to show the 3000 series: Are you a human?


----------



## HyperMatrix

changboy said:


> Newegg has begun to show the 3000 series: Are you a human?


I'm still wondering what availability's going to look like. I want one of the 22/23 power stage models with 3x 8-pin connectors (Asus ROG Strix and EVGA FTW3 and Kingpin, currently) for proper overclocking. But would hate to miss out on getting a 3090 at all.


----------



## changboy

Ah, I see you're in Canada too. Me too, I want one of these, and I'm not sure I'll be able to get one in the first day or hour of release. I'm not sure, but the Asus Strix OC 3090 could cost $2600 here, from what I see of other countries' prices converted to CAD plus tax.


----------



## HyperMatrix

changboy said:


> Ah, I see you're in Canada too. Me too, I want one of these, and I'm not sure I'll be able to get one in the first day or hour of release. I'm not sure, but the Asus Strix OC 3090 could cost $2600 here, from what I see of other countries' prices converted to CAD plus tax.


Yeah I wouldn't pay $2600. I think that's with 20% VAT as well as additional ridiculous import fees in Europe since both the retailers with prices are european. Looking at the price difference between the 2080Ti FTW3 models vs. the basic models here would indicate a roughly $1650 USD price point. Heck for $1800 USD if I could get an FTW3 HydroCopper with a waterblock on it already, I'd be ecstatic.

The FE/Reference design models with 2x 8-pin connectors aren't going to clock higher than 2GHz. The 3x 8-pin models with 22/23 power stages and 400W+ TDP are likely able to hit 2.2GHz. So it'd be a shame to spend $1500 on a card, then handicap it by 10% potential performance to save 10% on the price.

Then again if you're out in one of those silly provinces with 13-14% sales tax, I feel your pain. I hate even paying 5% here in Alberta. Damn Trudeau and his federal sales tax. 

update: Just checked EVGA's website. Went to checkout with an RTX 2080 Super and UPS Worldwide Expedited shipping is free. Not sure if it's a promo to get rid of these cards or if it'll apply to all purchases. But something to keep in mind to save on shipping at least.


----------



## Thoth420

changboy said:


> I'm already there in the parking lot with the Winnebago lol.


Nice! I plan on hitting the Cambridge location as it is closest to me. I have to build for a friend too so I plan on camping for both 3080 and 3090 release. I am totally happy with a reference 3090 for myself to start out. He said he doesn't care at all and just wants whatever 3080 I can get.



changboy said:


> Newegg has begun to show the 3000 series: Are you a human?


Cheers! 🍻


----------



## dentnu

HyperMatrix said:


> I'm still wondering what availability's going to look like. I want one of the 22/23 power stage models with 3x 8-pin connectors (Asus ROG Strix and EVGA FTW3 and Kingpin, currently) for proper overclocking. But would hate to miss out on getting a 3090 at all.


Where did you see or read about the power stages? I've been trying to find this info for a while now. Can you link your source, please? Thanks


----------



## HyperMatrix

dentnu said:


> Where did you see or read about the power stages? I've been trying to find this info for a while now. Can you link your source, please? Thanks


Manufacturer release notes. It's also documented on the first post of this thread.


----------



## Shawnb99

HyperMatrix said:


> Yeah I wouldn't pay $2600. I think that's with 20% VAT as well as additional ridiculous import fees in Europe since both the retailers with prices are european. Looking at the price difference between the 2080Ti FTW3 models vs. the basic models here would indicate a roughly $1650 USD price point. Heck for $1800 USD if I could get an FTW3 HydroCopper with a waterblock on it already, I'd be ecstatic.
> 
> The FE/Reference design models with 2x 8-pin connectors aren't going to clock higher than 2GHz. The 3x 8-pin models with 22/23 power stages and 400W+ TDP are likely able to hit 2.2GHz. So it'd be a shame to spend $1500 on a card, then handicap it by 10% potential performance to save 10% on the price.
> 
> Then again if you're out in one of those silly provinces with 13-14% sales tax, I feel your pain. I hate even paying 5% here in Alberta. Damn Trudeau and his federal sales tax.
> 
> update: Just checked EVGA's website. Went to checkout with an RTX 2080 Super and UPS Worldwide Expedited shipping is free. Not sure if it's a promo to get rid of these cards or if it'll apply to all purchases. But something to keep in mind to save on shipping at least.


Considering the 2080 Ti Hydro Copper cost $1600 USD, I doubt the 3090 will be only $1800. We're looking at at least a $300-400 increase for the Hydro Copper model, so I'm expecting around $2000.
Free shipping is nice, but UPS will still gouge us on brokerage fees.


----------



## J7SC

Igor's Lab has a decent write-up on the new SM tech and related changes NVIDIA implemented for the 3090 and other 3000-series cards. It seems some game devs _might_ have to come up with patches for 'max exploitation' of that tech, depending on their mix of floating point and integer in use. The article (in English) is > here. With that in mind, I am particularly interested in some MS Flight Sim 2020 4K Ultra benchmarks.

Hardware Canucks has a nice if incomplete vid on 3090, 3080 and 3070 Custom PCB > here 

Subject to factory full-waterblock 3090 cards that may come out later from other vendors, the Asus 3090 Strix OC (w/ Heatkiller block) and the Kingpin 3090 (both 3x8-pin, and both from vendors with a strong history of custom XOC BIOSes) currently look most interesting to me re. 3090s. In any event, I'll wait a few months before I make any purchase choices. As with every gen, I'll probably also stare longingly at pics and vids of the Galax HoF [OCL] 3090 when it comes out, but it seems to be 'unobtanium' via regular retail channels (w/ warranty etc.) where I live.

Speaking of Kingpin 3090, this was leaked by Vince earlier today...


----------



## HyperMatrix

Shawnb99 said:


> Considering the 2080 Ti Hydro Copper cost $1600 USD, I doubt the 3090 will be only $1800. We're looking at at least a $300-400 increase for the Hydro Copper model, so I'm expecting around $2000.
> Free shipping is nice, but UPS will still gouge us on brokerage fees.


UPS brokerage is about $12.50 or $15 or something. They charge an arm and a leg for UPS Standard (ground) shipping brokerage where it's like $60 or something not including taxes. And yeah I never looked at the Hydrocopper price. That's pretty high if that was the official MSRP direct from EVGA. At this point, I'd rather have a 3x 8-pin model on air than a 2x 8-pin/FE model under water. Since there would be no power headroom for overclocking on it anyway. Here's hoping the standard FTW3 model is reasonably priced.


----------



## BigMack70

If the 3090 is anything like the 2080 Ti, none of the cards available at or immediately after launch will be worth anything for serious overclocking.


----------



## Shawnb99

HyperMatrix said:


> UPS brokerage is about $12.50 or $15 or something. They charge an arm and a leg for UPS Standard (ground) shipping brokerage where it's like $60 or something not including taxes. And yeah I never looked at the Hydrocopper price. That's pretty high if that was the official MSRP direct from EVGA. At this point, I'd rather have a 3x 8-pin model on air than a 2x 8-pin/FE model under water. Since there would be no power headroom for overclocking on it anyway. Here's hoping the standard FTW3 model is reasonably priced.


Yeah, I made the mistake of picking Standard once; total rip-off. And yeah, it was $1600 direct from EVGA, so I expect just as high a markup this time. Better off getting the air-cooled FTW3 and slapping an EK block on it; that was cheaper and apparently cooled better as well. That's likely what I'll be doing. I'm waiting to see what blocks come out, or at least for EK to make one, and then I'll grab one or two.
Hopefully we'll have more options than just EK or the EVGA Hydro Copper.
I'll be bugging Optimus to make one.


----------



## J7SC

BigMack70 said:


> If the 3090 is anything like the 2080 Ti, none of the cards available at or immediately after launch will be worth anything for serious overclocking.


I am wondering if the Samsung 8nm node will show 'improvements' over time, never mind additional custom PCB offerings from vendors after the initial release. I'd also like to see what AMD releases in late October. In any case, I picked up two 2080 Ti Aorus XTR WB (factory full waterblock) three-plus months after their initial release here in W. Canada for just over USD 1,300 each (before tax). Both are excellent clockers and have been trouble-free, and they work great in NVLink SLI CFR for MetroEx, MS FS 2020, etc.

BTW, MS FS 2020 at 4K/Ultra is a real system (CPU + GPU) torture instrument, in case new 3090 owners want to find out how well the cooling and PSU work ;-)


----------



## BigMack70

J7SC said:


> I am wondering if the Samsung 8nm node will show 'improvements' over time, never mind additional custom PCB offerings by vendors after the initial release. I also like to see what AMD releases in late October. In any case, I picked up two 2080 Ti Aorus XTR WB (factory full w-block) three months + after their initial release here in W.Canada for just over USD 1,300 each (before tax). Both are excellent clockers and have been trouble-free - and work great in NVL-SLI-CFR for MetroEx, MS FS 2020, etc.
> 
> BTW, MS FS 2020 4K / ultra is a real system (CPU + GPU) torture instrument, in case new 3090 owners want to find out how well the cooling, and PSU, work ;-)


I'm more concerned about Nvidia's artificial BIOS power limitations than anything else. Most 2080Ti models were artificially locked down to a ridiculous degree.


----------



## HyperMatrix

BigMack70 said:


> I'm more concerned about Nvidia's artificial BIOS power limitations than anything else. Most 2080Ti models were artificially locked down to a ridiculous degree.


You don't have to worry about artificial BIOS power limits. Max headroom you have is +7% power anyway, haha. So even with a fully unlocked BIOS you have access to a whopping extra 25W. The good news is that the Asus ROG Strix card has a listed 400W TDP, so we know at least custom cards can have more power fed to them. Still waiting to see what info EVGA releases.









Check out all of our buffed-up GeForce RTX 3070, RTX 3080, and RTX 3090 graphics cards | ROG - Republic of Gamers Global (rog.asus.com)


----------



## BigMack70

HyperMatrix said:


> You don't have to worry about artificial bios power limits. Max headroom you have is +7% power anyway, haha. So even with a fully unlocked bios you have access to a whopping 25W. The good news is that the Asus ROG Strix card has a listed 400W TDP. So we know at least custom cards can have more power going to them. Still waiting to see what info EVGA releases.
> 
> Check out all of our buffed-up GeForce RTX 3070, RTX 3080, and RTX 3090 graphics cards | ROG - Republic of Gamers Global (rog.asus.com)


You can easily pull more than 150W from an 8-pin power connector safely. It's all down to what the BIOS will allow.


----------



## J7SC

BigMack70 said:


> I'm more concerned about Nvidia's artificial BIOS power limitations than anything else. Most 2080Ti models were artificially locked down to a ridiculous degree.


...which is why I'm looking forward to seeing the 3090 battles play out between Galax, EVGA KP, and Asus Strix - those seemed to get around the limitations the best w/ Turing and will likely do the same w/ Ampere, with at least some version of an XOC BIOS making it out into the wild. FYI, my 2080 Tis pull 380W each on stock BIOS and max PL, for a combined 760W... add in an OC'ed TR and a lot of peripherals and I never even bothered to try any XOC, nor really needed to. 

Maybe I'll get that 2000W Super Flower Leadex PSU for my next project


----------



## Thoth420

J7SC said:


> ...which is why I'm looking forward to see the 3900 battles play out between Galax, EVGA KP and Asus Strix - those seemed to get around the limitations the best w/ Turing and likely will do the same w/Ampere, with at least some version of XOC bios making it out to the wild. FYI, my 2080 Tis pull 380W each on stock bios and max PL, for a combined 760W...add in an oc'ed TR and a lot of peripherals and I never even bothered to try any XOC., nor really needed to.
> 
> May be I'll get that that 2000W SuperFlower Leadex PSU  for my next project


Speaking of: what are others' plans for a PSU? I have a Leadex Special Edition 1000W sealed in box at the moment, as it was all I could really find that seemed decent. I don't mind massive overkill at all either; it just seems nothing is available in the States. Seasonic's top-tier units are all sold out and have been for a while. Are new units coming out to coincide with these new cards?


----------



## changboy

I bought an EVGA 1600 P2 last week but it hasn't arrived yet. I'm currently on a 1000 P2.


----------



## shiokarai

FE and reference edition blocks already out by Bykski, can get them here:









BYKSKI Nvidia RTX 3080 (Ti) FE incl. Backplate (N-RTX3080FE-X), shipping from EU (ezmodding.com)













Bykski Water Block for RTX 3090 / RTX 3080 Maxsun, Palit, Founders Edition, Full Cover (N-RTX3090-X), $32.39 (www.aliexpress.com)





On AliExpress shipping is expected in 10 days or so plus actual shipping to your destination (about a month, I guess?). On ezmodding shipping is expected on the 24th plus actual shipping to your destination (about a month too, I guess?). AliExpress with backplate + nickel acrylic + RGB comes to about $130 with shipping. I've ordered one and confirmed with the seller it's for the 3080/3090 FE edition; we'll see how well it's made and how well it performs... it should be OK until Aqua Computer/Watercool release their blocks. Don't want the EK one: way too pricey for the performance/looks they offer (basically a recycled design over and over).

Watercool (Heatkiller): no info whatsoever. EKWB: they'll make FE blocks sometime soon (a month? Two months?). Alphacool: FE blocks sometime soon (a month? Two months?).


----------



## dentnu

Anyone know how many power stages the MSI Trio has ?


----------



## Nizzen

dentnu said:


> Anyone know how many power stages the MSI Trio has ?


Enough 

It says an 18-phase power supply


----------



## Krzych04650

Looks great. I'm not a fan of the 3-slot design though; wider and longer I understand, but why make it so thick? 2.4-slot would be enough.


----------



## changboy

I just ordered the Bykski block for the FE


----------



## BigMack70

Krzych04650 said:


> Looks great. I am not a fan of 3-slot design though, wider and longer I understand, but why make it so thick. 2.4 slot would be enough.


I love the massive 3 slot design. Gives me hope that it might actually keep the card quiet without needing to put it on water.


----------






## Sylencer90

Will definitely get one right on release, considering all the parts for the new PC I want are in stock.


----------



## Krzych04650

BigMack70 said:


> I love the massive 3 slot design. Gives me hope that it might actually keep the card quiet without needing to put it on water.


I have a problem with this because of SLI. Depending on the power limit on the FE I may use them temporarily before I do full watercooling, and I have 3-slot spacing on my motherboard. The spacing would be so tight that even side intake wouldn't help. But for just one card I can see why this is a good thing, and it looks really good for a 3-slot card.

But it probably won't be a problem, since the power limit is going to be very restrictive, which is a bit nonsensical for something called a BFGPU... No power limit should be one of its main features. But if this 12-pin really is only 300W then there is no way around it: a 375W power limit... Also, shouldn't a different power connector mean that you won't be able to flash any other BIOS on the FE?


----------



## Krzych04650

double post


----------



## shiokarai

I wonder how hard it will be to actually disassemble the card and put a block on it - it seems REALLY hard, especially with no visible damage to the cooler or apparent signs of tampering... no screws to be seen anywhere (covered), Torx(?) screws on the slot bracket, etc. A tough one, I'm afraid.


----------



## changboy

I think the 12-pin connector can provide 600W, and all the PSU companies will make a 2x8-pin-to-12-pin cable running direct from the PSU, so it will look nice. EVGA already has one.


----------



## BigMack70

Krzych04650 said:


> But it probably won't be a problem since the power limit is going to be very restrictive, which is a bit of a nonsense for something called BFGPU... No power limit should be one of it's main features. But if this 12-pin is really only 300W then there is no way around it, 375W power limit... Also shouldn't a different power connector mean that you won't be able to flash any other BIOS on FE?


Spec on an 8-pin PCI-e power connector is 150W. That does NOT mean that there's some magical limitation in place that prevents that connector from drawing more than 150W. You can easily pull 200+ W through an 8-pin PCI-e connector without causing any problems on a decent high end power supply.

While not strictly accurate, it's closer to reality to think of the 150W spec as a minimum requirement for what the PSU must be able to deliver than as a maximum possible, or even maximum advised, power delivery.

If there is a limitation in place on power draw to 375W, it is artificially imposed by Nvidia.
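The point about the 150W figure can be sanity-checked with simple pin arithmetic. A rough Python sketch, assuming an 8-pin PCIe plug carries three live 12 V supply pins and a common Mini-Fit Jr terminal rating of about 8 A per pin (assumed figures; real ratings vary by terminal series and wire gauge):

```python
# Back-of-envelope check: why an 8-pin PCIe connector can deliver well
# over its 150 W spec. An 8-pin PCIe plug has 3 live 12 V supply pins;
# Mini-Fit Jr terminals are commonly rated around 8-9 A each (assumed;
# the real rating varies by terminal series and wire gauge).

RAIL_VOLTAGE = 12.0   # volts on the PCIe power rail
LIVE_PINS_8PIN = 3    # 12 V supply pins in an 8-pin PCIe connector
AMPS_PER_PIN = 8.0    # conservative per-pin current rating (assumption)

def connector_capacity_watts(live_pins: int, amps_per_pin: float = AMPS_PER_PIN) -> float:
    """Electrical capacity implied by pin count and per-pin current."""
    return live_pins * amps_per_pin * RAIL_VOLTAGE

print(f"spec limit: 150 W, copper capacity: ~{connector_capacity_watts(LIVE_PINS_8PIN):.0f} W")
```

With these assumed ratings the copper alone is good for roughly 288 W, which is why pulling 200+ W through a single 8-pin on a quality PSU is unremarkable: the 150 W figure is a specification floor, not an electrical ceiling.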


----------



## J7SC

Thoth420 said:


> Speaking of: What are others plans in regard to a PSU? I have a Leadex Special Edition 1000W sealed in box atm as it was all I could really find that seemed decent. I do not mind massive overkill at all either it just seems nothing is available in the states. Seasonic top tier units are all sold out and have been. Are new units coming out or something to coincide with these new cards?


I'm currently running a solid Antec HCP Platinum 1300W (innards by Delta) for my dual 2080 Tis and the TR, and have a second one which can be connected to the first via their 'OC Link' cable - that would be 2600W (and I've used that before on other OC builds), but it also becomes an issue of your specific circuits at 110V. I have my main system connected via an industrial-grade extension cord to the dryer circuit...

The shortage of higher-end PSUs will only get worse from what I've read, at least for another few months. For dual custom-PCB, water-cooled 3090s, I would not want anything less than a quality 1400W PSU. Also, I mentioned MS FS 2020 4K/ultra before... I actually dialed my system's CPU and GPU overclocks down, as it makes little difference w/ NVL-CFR enabled in that sim while staying at 60fps+, but it reduced overall system power draw from a consistent 1070+ W to the mid-to-high 800W range.



BigMack70 said:


> Spec on an 8-pin PCI-e power connector is 150W. That does NOT mean that there's some magical limitation in place that prevents that connector from drawing more than 150W. You can easily pull 200+ W through an 8-pin PCI-e connector without causing any problems on a decent high end power supply.
> 
> While not strictly accurate, it's closer to reality to think of 150W spec as a minimum requirement for the PSU to be able to deliver than to think of it as a maximum possible or even maximum advised power delivery.
> 
> If there is a limitation in place on power draw to 375W, it is artificially imposed by Nvidia.


You're right, and der8auer has done some tests w/ disabling / 'reducing' 8-pin PCIe power (below). That said, NVidia and vendors cannot endorse anything beyond the official rated 'spec' because of insurance, warranty, etc. Still, a dual 8-pin should be plenty (and safe, IMO) for a lot more than the rated capacity.


----------



## RobotDevil666

I was ready to buy the RTX 3090 FE as soon as they went up for sale, but I'm not sure it's a good idea. I will be putting a waterblock on first thing, and as nice as this cooler is, it will be a pain to take off; the power connector is also in a super awkward position. 
A 3rd-party card may make more sense this time around. I wonder when those will be available; hopefully we won't have to wait weeks.


----------



## Krzych04650

changboy said:


> I think the 12 pins connector can provide 600watt and all company of psu will make a 2 x 8 pins direct from the psu to this 12 pins, so it will look nice and evga already have it.
> View attachment 2458552


I was under the same impression: the 12-pin cable, the one provided by PSU manufacturers, connects to two 8-pin sockets that normally power 2x8-pin each, so it could theoretically provide 600W. But everyone seems to be under the impression that it is only 300W, and I think I even read that a few times in articles about the Ampere architecture once the embargo lifted. The fact that NVIDIA's own adapter goes from 2x8-pin to 12-pin also suggests that it is only 300W.
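For what it's worth, the same back-of-envelope pin arithmetic supports the 600 W reading of the connector itself: the 12-pin carries six live 12 V pins, and if we assume a Micro-Fit-style rating of about 8.5 A per pin (an assumption, not a datasheet value), the copper alone works out to roughly 600 W:

```python
# Pin arithmetic for the 12-pin connector: 6 of its 12 pins are 12 V
# supply pins. Assuming a Micro-Fit-style rating of ~8.5 A per pin
# (an assumption, not a datasheet value), the copper works out to ~600 W,
# even if NVIDIA's 2x8-pin adapter implies a 300 W-class input.

RAIL_VOLTAGE = 12.0
LIVE_PINS_12PIN = 6
AMPS_PER_PIN = 8.5  # assumed per-pin current rating

capacity_watts = LIVE_PINS_12PIN * AMPS_PER_PIN * RAIL_VOLTAGE
print(f"12-pin copper capacity: ~{capacity_watts:.0f} W")
```

So the 300W figure, if real, would be a spec/BIOS decision rather than a physical limit of the connector.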



BigMack70 said:


> Spec on an 8-pin PCI-e power connector is 150W. That does NOT mean that there's some magical limitation in place that prevents that connector from drawing more than 150W. You can easily pull 200+ W through an 8-pin PCI-e connector without causing any problems on a decent high end power supply.
> 
> While not strictly accurate, it's closer to reality to think of 150W spec as a minimum requirement for the PSU to be able to deliver than to think of it as a maximum possible or even maximum advised power delivery.
> 
> If there is a limitation in place on power draw to 375W, it is artificially imposed by Nvidia.


Yes, but this is exactly the problem: the power limits are going to go by the standard spec, so the card won't be able to pull more than a regular 2x8-pin one does. This is really worrying, because the 3090 already has a very low TDP compared to the 3080 considering the spec difference between them; the alleged 375W power limit is not even going to be enough to stabilize stock clocks, let alone any overclock. We don't know how these cards are going to behave in terms of power or how restrictive stock power limits are, but if they are anything like Turing, 375W for the 3090 is nothing.


----------



## BigMack70

Krzych04650 said:


> Yes but this is exactly the problem, the power limits are going to go by the standard spec


Source?



J7SC said:


> That said, NVidia and vendors cannot endorse anything beyond official rated 'spec' because of insurance, warranty etc. That said, a dual 8-pin should be plenty (and safe, IMO) for a lot more than the rated capacity.


I'm hoping that they consider overclocking to not be officially "endorsed" and therefore they are willing to allow the power delivery to exceed 375W. But I guess we'll see. If they have gimped the founders cards to 375W but allowed partner cards to exceed 375W, I'll probably sell off my FE card and buy one of the higher end models later on. But as-is, none of the partner cards releasing at launch look particularly appealing to me over founders.

I don't think it will make any difference on air or water cooling whether you have two or three 8-pin power connectors. On a high-end PSU, purely from a PCIe power connector perspective, you should be able to push 450-500W through these cards on 2x 8-pin without any trouble. In other words, the limiting factor on overclocking and performance is not going to be the number of physical power connectors; the limitations will come from elsewhere: BIOS limits, shunt mods, cooling problems, etc. will be the problems that need solving.


----------



## J7SC

...judging from my experience w/ Turing, I think temp control will be critical for 3090s, whether it is 2x 8 pin FE and the like, or if you opt for 3x 8pin in order for maximum PL and oc later. *The sheer size of the 3090 *as part of a huge and expensive air-cooling system tells me that 'NVidia boost algorithm' (or rather boost reduction algorithm ; -) ) is alive and well in Ampere, if not more so.

Before shunt mods and custom XOC bios (for custom PCB), I would/will do everything possible to get sustained low temps...my current Turing setup contains a separate GPU loop with a total of 1080x55 mm rad space and dual D5s. This helps to keep them below 38C (the first step-down in MHz, give or take a few degrees) most of the time, and it probably is enough for dual 3090s, but we'll see if/when I eventually get there. This doesn't mean that power bios, 3x 8 pin and/or perhaps shunt mods for the more courageous aren't also important, but temp control should be the first line of defense against NVBoost and its evil ways, IMO


----------



## Foxrun

J7SC said:


> ...judging from my experience w/ Turing, I think temp control will be critical for 3090s, whether it is 2x 8 pin FE and the like, or if you opt for 3x 8pin in order for maximum PL and oc later. *The sheer size of the 3090 *as part of a huge and expensive air-cooling system tells me that 'NVidia boost algorithm' (or rather boost reduction algorithm ; -) ) is alive and well in Ampere, if not more so.
> 
> Before shunt mods and custom XOC bios (for custom PCB), I would/will do everything possible to get sustained low temps...my current Turing setup contains a separate GPU loop with a total of 1080x55 mm rad space and dual D5s. This helps to keep them below 38C (the first step-down in MHz, give or take a few degrees) most of the time, and it probably is enough for dual 3090s, but we'll see if/when I eventually get there. This doesn't mean that power bios, 3x 8 pin and/or perhaps shunt mods for the more courageous aren't also important, but temp control should be the first line of defense against NVBoost and its evil ways, IMO


What fan speed are you running? I have an RTX Titan with a 9900K, and under heavy loads I'll see 48C on the Titan. That's with two thick 480mm rads and a total of 8 fans running at 1400 RPM.


----------



## Mooncheese

BigMack70 said:


> I stand by my numbers with RTX on. I think anyone expecting an average performance of more than 2080 Ti + 60-65% from the 3090 is going to be extremely disappointed. 2080 Ti + 75-80% performance is probably best case scenario in specific titles favoring Ampere.


What do we make of the OpenCL benchmarks showing the 3080 at 168% of the 2080's performance and at 138-141% of the 2080 Ti's? There is no RT in this benchmark. 









NVIDIA GeForce RTX 3080: 168% of RTX 2080 SUPER performance in CUDA and OpenCL benchmarks (videocardz.com)


----------



## Mooncheese

BigMack70 said:


> Real world performance is never as simple as comparing TFlops. If you don't believe me, go watch the DF video, or just wait for the reviews. They show the 3080 in its absolute best possible case scenario - path tracing. And it gets a minor relative performance bump (from ~80% better than 2080 to ~95% better). 3090 vs 2080Ti will be the same sort of thing.


Correct me if I'm wrong, but didn't Digital Foundry show the 3080 running at around 190% of the 2080's performance with RT on and around 170% with RT off? Considering the 3090 is shaping up to be 25% faster going by RT TFLOPS and core count, how does this not translate into 1.9x the 2080 Ti? 

I understand the need to be conservative with expected performance, especially given what we saw with Turing, but to me it looks as though the 3090 is shaping up to be 50% faster in rasterization and ~1.9x faster in RT compared to the 2080 Ti.


----------



## HyperMatrix

FYI, some prices listed for RTX 3090s on BHPhotoVideo.com. First I've seen listed here in North America without all the additional foreign voodoo import/tax upcharges. 

All 3 are standard 2x 8-pin power connector designs, although the Gigabyte models are listed as having 19 power phases instead of 18. Which doesn't make sense to me... because why would you go with a custom board design instead of the ready-to-go reference design just to add 1 extra phase and still end up on 2x 8-pin connectors? So take that with a grain of salt.

$1499 ZOTAC GAMING GeForce RTX 3090 Trinity Graphics Card








ZOTAC GeForce RTX 3090 GAMING Trinity OC Gaming Graphics Card (www.bhphotovideo.com)





$1549 Gigabyte GeForce RTX 3090 EAGLE OC 24G Graphics Card








Gigabyte GeForce RTX 3090 EAGLE OC 24G Graphics Card (www.bhphotovideo.com)





$1579 Gigabyte GeForce RTX3090 GAMING OC 24G Graphics Card








Gigabyte GeForce RTX 3090 GAMING OC 24G Graphics Card (www.bhphotovideo.com)


----------



## Mooncheese

changboy said:


> I want to buy the RTX 3090, but after checking all the cards coming I really don't know what the better choice is. The FE edition will have the smallest PCB and the smallest waterblock too, so it will be very nice to see that small card also be the most powerful! But I checked the EVGA Hydro Copper, ready to put in the loop, and I also checked the Asus Strix OC, the beast. Just 5 minutes ago I began pre-ordering the waterblock for the Founders Edition, then stopped at checkout lol. I'm not sure which one is the better buy; I'm thinking of power, overclocking, and watercooling it.
> 
> I really think they will all be sold out 15 minutes after release lol. I need to decide by the date, but it's hard to decide when you haven't seen a review of these products.


If you intend to put the card under water, the dilemma is:

A. "Reference" PCB, and being limited by 2x8-pin or 1x12-pin power caps (BIOS-side; yes, I understand that 2x8-pin can physically convey more than 300W).

B. "Non-reference" PCB, and having to wait, I don't know, a year for a quality block to be made, but having 3x8-pin power and no power limitations?

I'm faced with the same dilemma and am currently leaning heavily towards going non-reference and waiting. I've settled on EVGA's 3090 FTW3; I anticipate it being another $200 up, over $1500, but I already know that wattage starvation is going to be THE limiting factor of GA102 at 375W. You may be able to flash a non-reference BIOS (e.g., I'm running the FTW3 BIOS on my 2080 Ti XC2, a 2x8-pin card); the question is whether flashing will be more difficult this time around, and whether you run the risk of burning something up over 2x8-pin. I doubt it. This is what I'm mulling over right now. Although I'm leaning towards the FTW3 today (XC3 yesterday), have a look at how long it took EKWB to make a block for this variant:

EVGA GeForce RTX 2080 Ti FTW3 Ultra 11 GB Review (FTW3 Review, Nov 2018)

EK Water Blocks Announces EK-Quantum Vector FTW3 Water Block (EKWB FTW3 block announced, Nov 2019)

I mean, EVGA will undoubtedly release their Hydro Copper block much sooner (I'm hearing October this year), but that block doesn't look good; EVGA, in their infinite wisdom, has decided to put permanent red plastic trim on all of their cards this time around. And to top everything off, I've read that the 20-series HC blocks had performance issues (possibly due to inadequate contact with the GPU die/VRM/memory, etc.). I've seen a thread over on the EVGA forums where someone stated that their HC-blocked 2080 Ti was running at 75C!

If I go with a "reference" PCB, meaning the XC3, I could put EKWB's block on it immediately but would be limited to 375W, assuming the FTW3 BIOS cannot be safely flashed to the XC3 (the 2080 Ti Kingpin BIOS could not be flashed to anything other than the Kingpin, for example).

But the FTW3 variant has 3x8-pin power and a higher TDP (450W?), and that is of value to me. With the 3090, I need to know how hot the bank of VRAM cooled by the backplate runs, for the sake of the card's health once it's under a water block. I have the iCX3 sensors on my XC2 and I love being able to see how hot my VRM and VRAM are. It adds tremendous peace of mind after experiencing memory failure with a Micron-memory-equipped 300A 2080 Ti pulled from an Alienware Aurora, where there may have been inadequate contact between the VRAM and the water block. Knowing how hot the top bank of VRAM is under a water block is of utmost importance, at least to me. Additionally, there is a BIOS switch (useful after a bad flash) and a power fuse. I couldn't care less about the cooler, but ultimately, if I have to wait a year for a decent water block to be developed, I may be running this card on air until then.

I'm very worried that the 3090 FE and "reference"-PCB variants are going to be bottlenecked BIG TIME by the 375W limit. If a higher-wattage BIOS cannot be flashed to them, you're going to be limited to a 'healthy' 7% overclock at 375W.

A 25-30% overclock, assuming that's even possible with the 3090, will require another 25-30% power unless you undervolt. Additionally, I can't imagine how power hungry this card will be with a memory overclock and 12GB of memory modules.

If anyone reading this is contemplating putting their FE or "reference"-PCB variant under water and hoping for a good overclock, I highly advise you to reconsider, in light of how power hungry Ampere is, the limitations of a 2x8-pin / 1x12-pin connector, and the chance that higher-TDP BIOSes will be incompatible.

Having written all of this, I've decided to get a 3x8-pin variant and wait for a block.




HyperMatrix said:


> Literally RT TFLOPS jumps 2x from 2080 Ti to 3090. From 34 RT TFLOPS to 69 RT TFLOPS. I don't know how to more clearly state what I've already stated. If RT performance is limiting your fps by at least half, then doubling RT performance will double your frame rate.
> 
> If RT usage wasn't very heavy in a game, for example you were normally getting 60fps, and with RTX on you drop to 45fps, doubling RT cores isn't going to get you to 90fps. It'll allow you to get up to 90fps, if shader performance has enough headroom for it. And since the actual shader performance of the 3090 will be around 50-75% higher than the 2080Ti, in this example, if you previously got 60fps with RTX off, you'd now get 90-105fps. So....let's try this again. If you have the shader performance of an RTX 3090, giving you 90-105fps, and you enable RTX with the RT performance of a 2080ti, you'd still only get 45fps. But since the the 3090 has exactly 2x the RT performance, you'd get 90fps.
> 
> So to recap....in RT workloads, you're limited by both those factors. Shaders, and RT. Either one can be a limiting factor. Whichever is the lower performing one, will limit the other. So your max FPS will be the max FPS of the lower of the 2. If you can hit 100 fps with RTX off, and 50 fps with RTX on, then your max FPS is 50. If you increase RT performance by 2x, your FPS will go up to 100. If you increase RT performance by 1000x, your FPS will still stay at 100. If your RTX on performance is 50fps, and you increase shader performance by 10x, your max FPS will still be 50fps (unless RTX can simultaneously utilize shader cores and rt cores for ray tracing, which is theoretically possible, even if inefficient).


Fantastic explanation; I hadn't thought of it that way, and you're right: if the weakest link is rasterization performance, it doesn't matter if RT is 2x better. Anyhow, on a positive note, at least those of us with a 3090 will see no performance hit turning RT on (and maxed)!
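The bottleneck argument above can be condensed into a toy model: with RT enabled, the frame rate is capped by whichever of shader or RT throughput is slower. The numbers below are purely illustrative, not measurements of any card:

```python
# Toy model of the bottleneck argument: with RT enabled, the frame rate
# is capped by the slower of the shader and RT pipelines. All numbers
# here are illustrative, not measured.

def fps_with_rt(shader_fps: float, rt_fps: float) -> float:
    """Effective FPS when both shader and RT work must finish each frame."""
    return min(shader_fps, rt_fps)

baseline = fps_with_rt(100, 50)    # RT-limited case: RT caps the frame rate
double_rt = fps_with_rt(100, 100)  # 2x RT throughput: raster now caps it
huge_rt = fps_with_rt(100, 500)    # 10x RT changes nothing past that point
print(baseline, double_rt, huge_rt)
```

Once RT throughput exceeds shader throughput, further RT gains are invisible, which is exactly why "no performance hit with RT on" is plausible without RT being infinitely fast.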



HyperMatrix said:


> Yeah I wouldn't pay $2600. I think that's with 20% VAT as well as additional ridiculous import fees in Europe since both the retailers with prices are european. Looking at the price difference between the 2080Ti FTW3 models vs. the basic models here would indicate a roughly $1650 USD price point. Heck for $1800 USD if I could get an FTW3 HydroCopper with a waterblock on it already, I'd be ecstatic.
> 
> The FE/Reference design models with 2x 8-pin connectors aren't going to clock higher than 2GHz. The 3x 8-pin models with 22/23 power stages and 400W+ TDP are likely able to hit 2.2GHz. So it'd be a shame to spend $1500 on a card, then handicap it by 10% potential performance to save 10% on the price.
> 
> Then again if you're out in one of those silly provinces with 13-14% sales tax, I feel your pain. I hate even paying 5% here in Alberta. Damn Trudeau and his federal sales tax.
> 
> update: Just checked EVGA's website. Went to checkout with an RTX 2080 Super and UPS Worldwide Expedited shipping is free. Not sure if it's a promo to get rid of these cards or if it'll apply to all purchases. But something to keep in mind to save on shipping at least.


100% agreed that the power limitations of the "reference" and FE 3090 will be the limiting factor for any kind of overclock. Anyone reading this who is contemplating FE or "reference" and intending to put it under a waterblock and overclock, I highly recommend you reconsider.



shiokarai said:


> I wonder how hard it will be to actually disassemble the card and put a block on it - seems like REALLY hard, esp. with no visible damage to the cooler or apparent signs of tampering with... no screws to be seen anywhere (covered), torx(?) screws on the slot bracket etc. Tough one I'm afraid.


I know, right?! I remember with the 20-series FE, GamersNexus had to take a heat gun and practically destroy the cooler to take it off.



changboy said:


> I think the 12 pins connector can provide 600watt and all company of psu will make a 2 x 8 pins direct from the psu to this 12 pins, so it will look nice and evga already have it.
> View attachment 2458552


From what I've gathered, the 1x12-pin is actually only rated for 300W; it's the equivalent of 2x8-pin. The real question is: unlike 2x8-pin, which CAN safely convey more than 300W, will the new compact design have thermal issues attempting to do the same?



RobotDevil666 said:


> I was ready to buy RTX3090FE as soon as they are up for sale but I'm not sure if it's a good Idea. I will be putting waterblock first thing and as nice as this cooler is it will be a pain to take off, also this power connector is in a super awkward position.
> 3rd party card may make more sense this time around, I wonder when will those be available, hopefully we won't have to wait weeks.


I don't recommend the FE if you're going to put it under water, not just because the FE may be limited to 375W, but because you may end up having to literally destroy the cooler to take it apart. See my comments above.



J7SC said:


> ...judging from my experience w/ Turing, I think temp control will be critical for 3090s, whether it is 2x 8 pin FE and the like, or if you opt for 3x 8pin in order for maximum PL and oc later. *The sheer size of the 3090 *as part of a huge and expensive air-cooling system tells me that 'NVidia boost algorithm' (or rather boost reduction algorithm ; -) ) is alive and well in Ampere, if not more so.
> 
> Before shunt mods and custom XOC bios (for custom PCB), I would/will do everything possible to get sustained low temps...my current Turing setup contains a separate GPU loop with a total of 1080x55 mm rad space and dual D5s. This helps to keep them below 38C (the first step-down in MHz, give or take a few degrees) most of the time, and it probably is enough for dual 3090s, but we'll see if/when I eventually get there. This doesn't mean that power bios, 3x 8 pin and/or perhaps shunt mods for the more courageous aren't also important, but temp control should be the first line of defense against NVBoost and its evil ways, IMO


Yes, temp control will be of utmost importance, because with Ampere we are going to learn very quickly that undervolting is vital, ESPECIALLY with an FE or "reference" card limited to 375W. Less voltage at a given frequency = less wattage drawn at that frequency, but this requires temps in the low to mid 40s.
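The undervolting argument follows from rough CMOS scaling: dynamic power goes roughly as V² x f, so dropping voltage at fixed clocks saves power quadratically. A quick illustrative sketch (the scaling law is approximate and ignores static/leakage power):

```python
# Rough CMOS scaling behind the undervolting argument: dynamic power goes
# roughly as V^2 * f, so dropping voltage at fixed clocks saves power
# quadratically. Illustrative only; ignores leakage/static power.

def relative_power(v: float, f: float, v0: float = 1.0, f0: float = 1.0) -> float:
    """Dynamic power relative to a (v0, f0) baseline, P ~ V^2 * f."""
    return (v / v0) ** 2 * (f / f0)

# Same clocks at 0.90 V instead of 1.00 V: about 19% less dynamic power,
# which is real headroom when the BIOS caps the card at 375 W.
savings = 1 - relative_power(0.90, 1.0)
print(f"~{savings:.0%} dynamic power saved")
```

That saved wattage is exactly what a power-limited card reinvests in sustaining higher boost clocks.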


----------



## Mooncheese

HyperMatrix said:


> FYI, some prices listed for RTX 3090s on BHPhotoVideo.com. First I've seen listed here in North America without all the additional foreign voodoo import/taxes upcharges.
> 
> All 3 are standard 2x 8-pin power connector designs, although the Gigabyte models are listed as having 19 power phases instead of 18. Which doesn't make sense to me....cuz why would you go with a custom design board instead of the ready to go reference design, in order to add 1 extra phase and still be on 2x 8-pin connecors? So take that with a grain of salt.
> 
> $1499 ZOTAC GeForce RTX 3090 GAMING Trinity Graphics Card (boost up to 1710 MHz) - www.bhphotovideo.com
> 
> $1549 Gigabyte GeForce RTX 3090 EAGLE OC 24G Graphics Card (1725 MHz core) - www.bhphotovideo.com
> 
> $1579 Gigabyte GeForce RTX 3090 GAMING OC 24G Graphics Card (1755 MHz core) - www.bhphotovideo.com
> 
> (All three: 10496 CUDA cores, 24GB GDDR6X at 19.5 Gb/s over a 384-bit bus, HDMI 2.1 / DisplayPort 1.4a, PCIe 4.0.)


The additional phase may make power delivery more efficient, meaning more of the wattage drawn actually reaches the components. Also, cleaner power. Same concept as a CPU VRM.

Edit: 

Having more phases do the same amount of work means less current, and therefore less I²R heat, per phase, which improves efficiency.
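A toy sketch of that argument (the per-phase resistance and total current are illustrative assumptions, not measured VRM values):

```python
# Conduction loss per phase is I^2 * R; adding a phase shrinks each
# phase's share of the current, so heat per phase drops faster than 1/N.
R_PHASE = 0.004    # ohms per phase (illustrative)
TOTAL_AMPS = 300   # total core current (illustrative)

def loss_per_phase(phases):
    amps = TOTAL_AMPS / phases
    return amps * amps * R_PHASE

for n in (18, 19):
    print(n, "phases:", round(loss_per_phase(n), 2), "W per phase")
# 18 phases: 1.11 W per phase
# 19 phases: 1.0 W per phase
```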


----------



## changboy

@* Mooncheese watch this :*


----------



## Mooncheese

changboy said:


> @* Mooncheese watch this :*


Yes, I understand that, and I stated as much in my last comment (that more than 300W can be conveyed over 2x 8-pin). 

What I did say is that, as of right now, it's unclear whether 1x 12-pin will be able to safely do the same. My understanding, having researched this, is that 1x 12-pin is NOT rated at 600W, but only at 300W. It's a more compact design, so unless they are using different and better materials, more than 300W over 1x 12-pin might not be safely possible. 

We just don't know yet. I'm sure Der8auer will do testing after the fact, but if you're basing a purchasing decision around the intent to overclock, I would reconsider settling on the FE variant for this reason.


----------



## J7SC

Foxrun said:


> What fan speed are you running? I have an rtx titan with a 9900k and under heavy loads Ill see 48c with the titan. That's with two 480mm thick rads and a total of 8 fans running at 1400


...a total of 12x 120mm fans on three XSPC RX360s for that GPU-only loop... six of the fans are Corsair ML120s set to around 2K rpm, the other six Gentle Typhoon 3K rpm, most with shrouding to help sound and airflow... which is very good in an open TT Core P5. 

As mentioned, these particular w-cooled 2080 Tis pull a steady 760W on max PL (2x 380 W) so might be comparable to 3090 FE at around 375 W. Bottom graph in pic is a 3DM DLSS feature test for 2x GPU which time-wise is essentially 2x Port Royal in a row. Ambient temp was about 20 C for that run (unlike today :-( one of the hottest days of the year here...)


----------



## HyperMatrix

changboy said:


> @* Mooncheese watch this :*


Yeah, but the issue is a bit different here. The RTX 2080 Ti FE had 13 phases. The Asus ROG Strix 2080 Ti he uses, while still only having 2x 8-pin connectors, has 16 phases. It doesn't come down strictly to the number of power connectors, but to the accompanying modifications made to support the expected increase in power. Your 8-pin cables can put out more than 150W. But would the board be designed to handle that well? Would the BIOS even allow it? 

We don't have power phase/TDP info on the EVGA FTW3 cards yet, but we know the Asus ROG Strix with 3x 8-pin has a 400W TDP and 22 power phases, and the EVGA Kingpin has 23 power phases. Basically you'd be fighting a lot of elements at the same time here, for a card that is far more power hungry than anything we've had in the past. If you can flash a custom BIOS, that'll take care of the artificial software limitation. But what if it's like the Pascal Titan X, which couldn't be flashed? And what if you do flash a new BIOS, and the components have trouble dealing with 450W+ of power delivery for overclocking?
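The claim that an 8-pin cable can deliver more than its 150W rating checks out with simple pin math (a sketch; the per-pin current figures are assumptions that depend on the terminals used, not an official spec):

```python
# Spec rating vs rough physical capacity of a PCIe 8-pin plug.
# Per-pin current ratings are assumptions (Mini-Fit style terminals are
# commonly rated somewhere in the 6-9 A range, depending on the terminal).
SPEC_WATTS = 150.0   # PCIe CEM spec allocation per 8-pin connector
PINS_12V = 3         # an 8-pin PCIe plug carries +12 V on three pins
VOLTS = 12.0

for amps_per_pin in (6.0, 8.0):
    physical = PINS_12V * amps_per_pin * VOLTS
    print(f"{amps_per_pin:.0f} A/pin -> {physical:.0f} W vs {SPEC_WATTS:.0f} W spec")
# 6 A/pin -> 216 W vs 150 W spec
# 8 A/pin -> 288 W vs 150 W spec
```

So the cable has margin; the question, as above, is whether the board and BIOS are built to use it.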

Honestly I wish you were right. I'd love to pick up a Founders Edition of the 3090, especially because I'd really wanted to put an Aquacomputer block with active cooled backplate on it. But I have a feeling we're going to see a significant difference (more than 100MHz) in OC performance between FE/Reference design and the ROG Strix/FTW3/etc...

Now we play the waiting game and hope the card reviewers do proper overclock testing. Because I guess that's the only way we'd know for sure.


----------



## domenic

Mooncheese said:


> If you intend to put the card under water the dilemma is:
> 
> A. "Reference" PCB and being limited to 2x8 or 1x12 pin power limitations (BIOS side, yes I understand that 2x8 pin power can convey more than 300w).
> 
> B. "Non-Reference" PCB and having to wait, I don't know, a year for a quality block to be made, but having 3x 8-pin power and no power limitations?
> 
> I'm faced with the same dilemma, and am currently leaning heavily towards going non-reference and waiting. I've settled on EVGA's 3090 FTW, I anticipated it being another $200 up and over $1500 but I already know that wattage starvation is going to be THE limiting factor of GA-102 with 375w. You may be able to flash a non-reference BIOS (i.e., I'm running FTW3 bios on my 2080 Ti XC2, 2x8 pin) the question is, will flashing be more difficult this time around and do you run the risk of burning something up over 2x8 pin? I doubt it. This is what I'm mulling over in my head right now. Although I'm leaning towards FTW today (XC3 yesterday) having a look at how long it took EKWB to make a block for this variant:
> 
> EVGA GeForce RTX 2080 Ti FTW3 Ultra 11 GB Review (FTW3 Review, Nov 2018)
> 
> EK Water Blocks Announces EK-Quantum Vector FTW3 Water Block (EKWB FTW3 block announced, Nov 2019)
> 
> I mean, EVGA will undoubtedly release their Hydro Copper block much sooner (I'm hearing October this year), but that block doesn't look good: EVGA, in their infinite wisdom, has decided to put permanent red plastic trim on all of their cards this time around. To top it off, I've read that the 20-series HC blocks had performance issues (possibly due to inadequate contact with the GPU die / VRM / memory etc.). I've seen a thread over in the EVGA forums where someone stated that their HC-blocked 2080 Ti was running at 75C!
> 
> If I go with "reference" PCB, meaning, the XC3, I could put EKWB's block on it immediately but would be limited to 375w, assuming the FTW bios cannot be safely flashed to the XC3. (2080 Ti Kingpin bios could not be flashed to anything other than the Kingpin for example).
> 
> But the FTW variant has 3x 8-pin power and a higher TDP (450W?), which is of value to me. With the 3090, I need to know how hot the bank of VRAM cooled by the back-plate runs, for the sake of the card's health once it's under a water block. I have the iCX3 sensors on my XC2 and I love being able to see how hot my VRM and VRAM are. It adds tremendous peace of mind after experiencing memory failure with a Micron-memory-equipped 300A 2080 Ti pulled from an Alienware Aurora, where there may have been inadequate contact between the VRAM and the water block. Knowing how hot the top bank of VRAM is under a water block is of utmost importance, at least to me. Additionally there is a BIOS switch (useful if you have a bad flash) and a power fuse. I couldn't care less about the cooler, but ultimately if I have to wait a year for a decent water block to be developed, I may be running this card on air until then.
> 
> I'm very worried that 3090 FE and "reference" PCB variants are going to be bottlenecked BIG TIME by the 375W limit. If a higher-wattage BIOS cannot be flashed to them, you're going to be limited to maybe a 7% overclock at 375W.
> 
> A 25-30% overclock, assuming possible with the 3090, unless undervolting, will require another 25-30% power. Additionally, I can't imagine how power hungry this card will be with a memory overclock and 12GB of memory modules.
> 
> If anyone reading this is contemplating putting their FE or "reference" PCB variant under water and hoping to get a good overclock, I highly advise you to reconsider in light of how power hungry Ampere is, the limitations of a 2x 8-pin / 1x 12-pin connector, and the potential incompatibility of higher-TDP BIOSes.
> 
> Having written all of this I've decided to get a 3x8 pin power variant and wait for a block.
> 
> 
> 
> 
> Fantastic explanation, I hadn't thought of it this way, and you're right: if the weakest link is rasterization performance, it doesn't matter if RT is 2x better. Anyhow, on a positive note, at least those of us with a 3090 will see no performance hit turning RT on (and maxed)!
> 
> 
> 
> 100% agreed that the power limitations of "reference" and FE 3090 will be the limiting factor of any kind of overclock. Anyone reading this contemplating FE or "reference" and intending to put it under a WB and overclock, I highly recommend you reconsider.
> 
> 
> 
> I know right?! I remember with the 20 series FE GamersNexus had to take a heat gun and literally destroy the cooler to take it off.
> 
> 
> 
> From what I've gathered on this, 1x12 pin is actually only rated to 300w, is the equivalent of 2x8 pin. The real question is, unlike 2x8 pin, which CAN safely convey more than 300w, will the new compact design have thermal issues attempting to do the same?
> 
> 
> 
> I don't recommend the FE if you're going to put it under water, not just because the FE may be limited to 375W, but because you may end up having to literally destroy the cooler to take it apart. See my comments above.
> 
> 
> 
> Yes, temp control will be of utmost importance, because with Ampere we're going to learn early on that undervolting is vital, ESPECIALLY with an FE or "reference" card limited to 375W. Less voltage at a given frequency = less wattage drawn at that frequency, but this requires temps in the low to mid 40s.


I am in the exact same position as you and my thinking is also identical in terms of strategy. At the 3090 & water cooling custom loop level we are certainly in the Big Leagues and no way would we want to box ourselves in by not having enough power to maximize the significant investment we have in our rigs (other PC parts, plumbing, etc) before you even get to a $1500-$1800 graphics card. 

With that said, just as you have concluded, not being able to get a water block on or even near day one is a huge letdown, but we must adapt. Your compromise of staying with the stock cooler until a viable, tested block (i.e. one covering the backside memory chips) is available is the right call, even if that takes an extended period of time. I have already ordered some additional PETG tubes and plan on doing some "bypass surgery" until a decent block is available.

My next question (or just general point of discussion) is around which 3090 to get. I like EVGA because they don't play the "you voided your warranty" game when disassembling it to install the block. The FTW3 besides having the 3rd power connector also has the dual bios & fuse feature so we get some additional protection from ourselves when we tinker.

The last decision point is the FTW3 or FTW3 Ultra. Several discussions have been floating around binning or at least sorting of the GPU by the vendors. From what I have been reading since the 3090 is literally just the big brother of the 3080 the better GPUs coming out of the wafers are being designated as 3090s so the "rejects" are getting neutered a bit and downshifted into 3080s. So far so good for us since ultimately we want the best silicon we can get.

With that being said, I am trying to figure out whether the FTW3 Ultra is going to be consciously tested or handpicked in some way, or whether all 3090s (just by virtue of being 3090s) are basically the same and the only differences are the price and the clock set at the factory, i.e. nothing we couldn't or wouldn't want to do ourselves manually. 

Thoughts?


----------



## HyperMatrix

domenic said:


> I am in the exact same position as you and my thinking is also identical in terms of strategy. At the 3090 & water cooling custom loop level we are certainly in the Big Leagues and no way would we want to box ourselves in by not having enough power to maximize the significant investment we have in our rigs (other PC parts, plumbing, etc) before you even get to a $1500-$1800 graphics card.
> 
> With that said just as you have concluded not being able to get a water block on or even near day one is a huge letdown but we must adapt. Your compromise of just staying with the stock cooler until a viable and tested block (i.e. backside memory chips) is mandatory despite the fact it may be an extended period of time. I have already ordered some additional PETG tubes and plan on doing some "bypass surgery" until a decent block is available.
> 
> My next question (or just general point of discussion) is around which 3090 to get. I like EVGA because they don't play the "you voided your warranty" game when disassembling it to install the block. The FTW3 besides having the 3rd power connector also has the dual bios & fuse feature so we get some additional protection from ourselves when we tinker.
> 
> The last decision point is the FTW3 or FTW3 Ultra. Several discussions have been floating around binning or at least sorting of the GPU by the vendors. From what I have been reading since the 3090 is literally just the big brother of the 3080 the better GPUs coming out of the wafers are being designated as 3090s so the "rejects" are getting neutered a bit and downshifted into 3080s. So far so good for us since ultimately we want the best silicon we can get.
> 
> With that being said I am trying to figure out if the FTW3 Ultra is going to be consciously tested or handpicked in some way as such or are all of the 3090s (just due to the fact they are a 3090) basically the same and the only difference is the price and factory clock set at the factory - nothing we couldn't or wouldn't want to do ourselves manually.
> 
> Thoughts?



This had me curious, so I looked up notes on the RTX 2080 Ti FTW3 Ultra and found this exchange on EVGA's forums (the original thread used colors to distinguish posters; the replies marked EVGA below are official EVGA responses):

> Hello!
> The Ultra cards are shipped out with a higher guaranteed clock speed than the non-Ultra cards. You can look at it in the way that the non-Ultra variant of the FTW3 is basically the "DT" version for the 10-series cards. The card would ship at stock guaranteed clock speeds rather than the higher guaranteed clock speed.
> -Jacob B. (EVGA)

> Are you suggesting that the Ultra is "binned"? How does it fit with the previous-gen KP edition "quality range" (prices are higher for better bins), if you don't mind answering? In addition, how about the memory? Are Ultras more likely to be using Samsung GDDR6? Samsung performs better overall from what I can see.

> We do not have any binned RTX cards. Also, the VRAM can come from any of the manufacturers. We do not specify which RAM is on the cards, and there is no guarantee of getting a certain manufacturer's RAM on your card. (EVGA)

So with the exception of the Kingpins, which are binned, there are no binned GPUs, only a guarantee of advertised speeds. A Black Edition GPU might clock as well as an FTW3, but its lower power limits make that challenging; otherwise I think you'd see more Blacks up there with the other, more expensive SKUs.


----------



## animeowns

Going for two 3090s, but I'm not sure on the Founders Edition or the EVGA models yet.


----------



## BigMack70

HyperMatrix said:


> The RTX 2080Ti FE had 13 phases.


Yes, and it was more than sufficient for any kind of overclocking done on air or water. The 2080 Ti FE was limited compared to higher-end AIB cards by its BIOS power limits and its poor stock cooler, not its VRM, unless you're talking hardcore overclocking with sub-ambient cooling. 



Spoiler: 2080 Ti VRM analysis











Judging by the 2080 Ti FE, there's no reason to expect that the VRMs on the 3090 FE will limit its OC potential compared to AIB cards on air or water. Its cooler or its BIOS power limits might, however; we'll need to wait and see.


----------



## changboy

About the 12-pin connector, I found this:

Nvidia’s tiny 12-pin connector is also expected to supply a beefy dose of wattage—648 watts, according to Tom’s Hardware. If true, that means a single cable plus the power from the PCIe slot should be plenty for Nvidia’s new “Ampere” GPU-based GeForce RTX 30-series graphics cards.
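That 648W figure is consistent with Micro-Fit-style pins rated around 9A each, with six of the twelve positions carrying +12V and the rest being ground returns (the per-pin rating here is an assumption, not something NVIDIA has published):

```python
# Back-of-envelope check on the 648 W figure quoted above.
PINS_12V = 6            # half of the 12 positions carry +12 V
AMPS_PER_PIN = 9.0      # assumed Micro-Fit-class per-pin rating
VOLTS = 12.0
PCIE_SLOT_WATTS = 75.0  # the slot itself is specified for up to 75 W

connector = PINS_12V * AMPS_PER_PIN * VOLTS
print(connector)                    # 648.0
print(connector + PCIE_SLOT_WATTS)  # 723.0
```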


----------



## BigMack70

changboy said:


> About the 12 pin connector i found this :
> 
> Nvidia’s tiny 12-pin connector is also expected to supply a beefy dose of wattage—648 watts, according to Tom’s Hardware. If true, that means a single cable plus the power from the PCIe slot should be plenty for Nvidia’s new “Ampere” GPU-based GeForce RTX 30-series graphics cards.


Yup. The hardware should all be more than fine. Actual performance will be all down to what they are going to do with BIOS power limits. Unless we get reliable leaks, we won't know for sure until reviews post.


----------



## changboy

I think the BIOS on the FE can be flashed, like some have done with the 2080 Ti FE.


----------



## Mooncheese

J7SC said:


> ...a total of 12x 120mm fans on three XSPC RX360s for that GPU-only loop... six of the fans are Corsair ML120s set to around 2K rpm, the other six Gentle Typhoon 3K rpm, most with shrouding to help sound and airflow... which is very good in an open TT Core P5.
> 
> As mentioned, these particular w-cooled 2080 Tis pull a steady 760W on max PL (2x 380 W) so might be comparable to 3090 FE at around 375 W. Bottom graph in pic is a 3DM DLSS feature test for 2x GPU which time-wise is essentially 2x Port Royal in a row. Ambient temp was about 20 C for that run (unlike today :-( one of the hottest days of the year here...)
> 
> View attachment 2458576


350W + only 25% = ~438W.

Assuming a 30% overclock is possible, you're looking at 455W+. 

And with 24GB of video memory, any kind of memory overclock is going to require more power than ever before. 
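The arithmetic above, spelled out against the rumored 375W FE cap (assuming power scales roughly linearly with a same-voltage overclock, which is a simplification):

```python
# Power budget needed at various overclock levels vs the FE power cap.
BASE_TDP = 350.0   # 3090 stock board power, W
FE_LIMIT = 375.0   # reported FE/reference power-limit ceiling, W

for oc_pct in (7, 25, 30):
    needed = BASE_TDP * (1 + oc_pct / 100)
    verdict = "fits" if needed <= FE_LIMIT else "over the 375 W cap"
    print(f"+{oc_pct}% -> {needed:.1f} W ({verdict})")
# +7% -> 374.5 W (fits)
# +25% -> 437.5 W (over the 375 W cap)
# +30% -> 455.0 W (over the 375 W cap)
```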




HyperMatrix said:


> Yeah but the issue is a bit different here. The RTX 2080Ti FE had 13 phases. The Asus ROG Strix 2080Ti he uses, while still only having 2x 8-pin connectors, has 16 phases. It doesn't come down to strictly the number of power connectors, but the accompanying modifications made to support the expected increase in power. Your 8-pin cables can put out more than 150W. But would the board be designed to handle that well? Would the bios even allow it?
> 
> We don't have Power Phases/TDP info on the EVGA FTW3 cards yet, but we know the Asus ROG Strix with 3x 8-pin has a 400W TDP, and 22 power phases, and the EVGA Kingpin has 23 power phases. Basically you'd be fighting a lot of elements at the same time here, for a card that is far more power hungry than anything we've had in the past. If you can flash a custom bios, that'll take care of the artificial software limitation. But what if it's like the Pascal Titan X that can't be flashed? What if you do flash a new bios, and the components have trouble dealing with 450W+ of power delivery for overclocking?
> 
> Honestly I wish you were right. I'd love to pick up a Founders Edition of the 3090, especially because I'd really wanted to put an Aquacomputer block with active cooled backplate on it. But I have a feeling we're going to see a significant difference (more than 100MHz) in OC performance between FE/Reference design and the ROG Strix/FTW3/etc...
> 
> Now we play the waiting game and hope the card reviewers do proper overclock testing. Because I guess that's the only way we'd know for sure.


If a non-FE BIOS cannot be flashed to the FE, then it's going to be much more than 100 MHz. Assuming we can get away with a 25-30% overclock again this time around, like we did with Turing, you're potentially looking at leaving ~20% on the table due to the 375W TDP limit. 

These cards boost to around where Turing boosts (~1900 MHz or so). Another 7% via the PT slider will get you around 2000 MHz, give or take, and that's it. From what I gather Ampere can actually clock about 100 MHz higher, but you're never going to see 2.2GHz limited to 375W. 

A lot of assumptions here on my part; for all we know the 3090 will have an overbuilt VRM like the 2080 Ti, and we may be able to flash a higher-TDP BIOS to it as well. 

My issue is that there seems to be some confusion about the rated power limit of the single 12-pin. It's rated for 300W, but apparently, just like 2x 8-pin, it can convey much more than that. The question is, will NVIDIA engineer Ampere's BIOS in a way that locks us out of raising the limits, using the safety concerns of the single 12-pin as a rationale? This is pure speculation on my part. But if it were me, and I wanted to overclock, I would just bypass the single 12-pin and FE with a 3x 8-pin variant, say the Strix or FTW. 



domenic said:


> I am in the exact same position as you and my thinking is also identical in terms of strategy. At the 3090 & water cooling custom loop level we are certainly in the Big Leagues and no way would we want to box ourselves in by not having enough power to maximize the significant investment we have in our rigs (other PC parts, plumbing, etc) before you even get to a $1500-$1800 graphics card.
> 
> With that said just as you have concluded not being able to get a water block on or even near day one is a huge letdown but we must adapt. Your compromise of just staying with the stock cooler until a viable and tested block (i.e. backside memory chips) is mandatory despite the fact it may be an extended period of time. I have already ordered some additional PETG tubes and plan on doing some "bypass surgery" until a decent block is available.
> 
> My next question (or just general point of discussion) is around which 3090 to get. I like EVGA because they don't play the "you voided your warranty" game when disassembling it to install the block. The FTW3 besides having the 3rd power connector also has the dual bios & fuse feature so we get some additional protection from ourselves when we tinker.
> 
> The last decision point is the FTW3 or FTW3 Ultra. Several discussions have been floating around binning or at least sorting of the GPU by the vendors. From what I have been reading since the 3090 is literally just the big brother of the 3080 the better GPUs coming out of the wafers are being designated as 3090s so the "rejects" are getting neutered a bit and downshifted into 3080s. So far so good for us since ultimately we want the best silicon we can get.
> 
> With that being said I am trying to figure out if the FTW3 Ultra is going to be consciously tested or handpicked in some way as such or are all of the 3090s (just due to the fact they are a 3090) basically the same and the only difference is the price and factory clock set at the factory - nothing we could't or wouldn't want to do ourselves manually.
> 
> Thoughts?


I had this same question over on EVGA's forum, and someone stated that although EVGA aren't calling it binning, they are running the various chips through their paces somehow and selling them accordingly, with Ultra variants going through some kind of binning process: 

ty_ger07 stated: 

_When they are assembled, they are tested. Ones which don't pass the Ultra clocks get the non-Ultra BIOS, don't get the Ultra shroud stickers, and get a non-Ultra part number and serial number sticker.
That is technically a simplistic binning process.
Since they use the same PCB, the process is pretty straight-forward, and the end user MAY be able to realize a slight overclocking difference between the two versions of the cards.
If you happen to be one of the unlucky ones who gets one which was specifically sold as a non-Ultra version because it didn't pass the Ultra clock test, you will definitely be one of the people who will notice a difference when you try to overclock it. From your perspective, it will definitely feel like you lost due to a binning process.
If it turns out that the majority are capable of the Ultra clocks, many of the non-Ultra versions may just end up being mostly a random spattering of same-performing chips just to satisfy a slightly different price-point market. In that case, if you receive one of those cards, it will feel to you that there was no binning process.
No matter what, one thing definitely makes the Ultra version different than the non-Ultra version: the Ultra version is guaranteed by warranty to meet or exceed the higher advertised clocks, while the non-Ultra version is only guaranteed by warranty to meet or exceed the lower advertised clocks.

Don't know why we need three separate threads on this subject._

Now, it could be that this person doesn't know what they're talking about, or this actually is going on. We have yet to get confirmation from anyone representing EVGA on their official forum; in fact the only statement we have is "no, no binning is going on". 





changboy said:


> About the 12 pin connector i found this :
> 
> Nvidia’s tiny 12-pin connector is also expected to supply a beefy dose of wattage—648 watts, according to Tom’s Hardware. If true, that means a single cable plus the power from the PCIe slot should be plenty for Nvidia’s new “Ampere” GPU-based GeForce RTX 30-series graphics cards.


Interesting, but technically it's only rated for 300W, correct? This is ultimately the same story as 2x 8-pin. I'm worried that NVIDIA will use this new connector type as a rationale for locking down Ampere's BIOS, but this is speculation on my part at this point.


----------



## J7SC

As much as I prefer 3x8 pin 3090s w/ full waterblock, I do think that the aircooler on the 3090 FE is kind of - cool. It reminds me of the cylinder head of my first motorcycles. A TT Core P5/7 (open case) with dual air-cooled 3090 FEs and a 10,000 BTU portable AC on a separate electrical circuit might actually be a killer setup...would also help with the problems of one card exhausting on the other in a dual GPU setup, and/or the CPU and VRM...3090 open case ghetto rigging might become a thing


----------



## changboy

To me this PCB looks like the most advanced consumer PCB I've ever seen; it's derived directly from the professional AGX module, and those cost a lot of money.


----------



## Mooncheese

J7SC said:


> As much as I prefer 3x8 pin 3090s w/ full waterblock, I do think that the aircooler on the 3090 FE is kind of - cool. It reminds me of the cylinder head of my first motorcycles. A TT Core P5/7 (open case) with dual air-cooled 3090 FEs and a 10,000 BTU portable AC on a separate electrical circuit might actually be a killer setup...would also help with the problems of one card exhausting on the other in a dual GPU setup, and/or the CPU and VRM...3090 open case ghetto rigging might become a thing


Yes, but look at it: there are no screws. Anywhere. Remember when GamersNexus tried to disassemble the 2080 and had to take a heat gun to it, ultimately destroying the cooler to get it off? No one is going to swap to a WB, then take that off and revert to the FE cooler and act like nothing happened, hoping they don't void their warranty this time around. 






Don't get me wrong, the design is out of this world, I absolutely love it, but it's going under a water block and I need a card that will retain its warranty after the fact, and I don't want to be limited to 375W. Hence my going with a FTW over the FE. 



changboy said:


> To me this PCB looks like the most advanced consumer PCB I've ever seen; it's derived directly from the professional AGX module, and those cost a lot of money.
> View attachment 2458588


Yes, we've never seen anything like it, but once it's under a water block, who cares? The only advantage of this design is to accommodate the dual heat-sinks and fans of the FE cooler.


----------



## changboy

The waterblock is very small and looks beautiful, and Jay2C made a video unboxing the RTX 3080 showing plugs all around the card with the screws mostly hidden; I think it will be very easy to watercool, not like the 2080 Ti.
We'll see next week when they test the 3080; it's built like the 3090, and after seeing that we'll have a better idea about everything. Then I'll decide whether to keep or cancel my order on the FE waterblock, since I have 14 days to cancel it hehehe. But for sure none of those cards will disappoint.


----------



## J7SC

FYI, over at the Heatkiller thread, there are posts suggesting that their custom PCB focus is on a w-block for the Asus Strix, presumably reasonably soon


----------



## Mooncheese

J7SC said:


> FYI, over at the Heatkiller thread, there are posts suggesting that their custom PCB focus is on a w-block for the Asus Strix, presumably reasonably soon


Link please! Thinking about this dilemma further, I may postpone picking up a 3090 at launch; neither a waterblocked "reference" 3090 limited to 375W nor a 3x 8-pin AIB card that isn't power limited but won't have a decent water block for a year is an attractive option. 

There is no point in picking up a FTW card right now if I'm just going to keep it in the box and wait for a waterblock, because for all we know a 3080 Ti could be only 4 months away (not likely, but they've done this in the past: the 980 Ti was released in June 2015, only about three months after the Maxwell Titan X launched in March 2015).


----------



## kx11

holding out for KP, the 360rad AiO is all that matters


----------



## ref

I'm still torn between the Strix and a reference model at this point. It's nice to know Heatkiller will be making a block for the Strix. I don't mind waiting a few months for a waterblock, as I would gladly take the quality of Heatkiller/Aquacomputer over EK and others.

Power limits and how they affect overclocking will certainly be interesting; can't wait for some benchmarks.


----------



## Larkonian

I am sure that the FE will be BIOS-locked to 350W. The 12-pin connector might be able to handle a lot of current, but they are shipping the cards with a 2x 8-pin adapter, so they won't ever allow more than 300W over it.
You would need to be able to flash a BIOS from another vendor to increase that limit, which might be impossible since it won't be the same PCB this time around (so probably a different VRM setup). They might also lock it down further.

As for the questions on binning, I very much doubt that they test the clocks on cards after they are assembled; that would be a huge effort. It is probably much cheaper to pick a boost clock that all the chips can hit and then load the normal or Ultra BIOS according to market demand. They might do a quick "does it POST" test, but that's it.


----------



## skline00

I've played around for a number of years custom water-cooling GPUs. The most expensive GPU I've purchased so far is my EVGA RTX 2080 Ti XC, which is NOT overclocked because I didn't think it would really help much.

Looking at the entry price to get into an RTX 3090, I think I would save up and get a Kingpin when it comes out. Probably worth the extra premium considering the base price is so high.


----------



## Nammi

Seems like the Strix is the custom board set to receive the most blocks.
WB info summary


----------



## shiokarai

Mooncheese said:


> Yes but look at it, there are no screws. Anywhere. Remember when GamersNexus tried to disassemble the 2080 and had to take a heat-gun and ultimately had to destroy the cooler to get it off? No-one is going to swap to WB and then take that off and revert to the FE cooler and act like nothing happened hoping they don't void their warranty this time around.
> 
> 
> 
> 
> 
> 
> Don't get me wrong, the design is out of this world, I absolutely love it, but it's going under a water block and I need a card that will retain its warranty after the fact, and I don't want to be limited to 375w. Hence my going with a FTW over the FE.
> 
> 
> 
> Yes we've never seen anything like it, once it's under a water block who cares. The only advantage of this design is to accommodate the dual heat-sink and fan of the FE cooler.


Actually, in the case of the RTX 2080 Ti FE it was easy to remove the FE cooler... the video linked in your post is about a cooler teardown, not about what matters when you install the block (cooler removal). It's this video:






But this time I'm afraid it will be hard to even remove the stock cooler...


----------



## BigMack70

skline00 said:


> I've played around for a number of years custom water-cooling GPUs. The most expensive GPU I've purchased so far is my EVGA RTX 2080 Ti XC, which is NOT OC'd because I didn't think it would really help much.
> 
> Looking at the entry price to get into an RTX 3090, I think I would save up and get a KingPin when it comes out. Probably worth the extra premium considering the base price is so high.


If the 2080Ti is any indication, the 3090 KPE will probably cost $2200-2400 and will only be worth that premium to people who value the experience of playing with the card at low level and/or doing sub-ambient overclocking, or for people who want the best version of the card and don't care about price.



Larkonian said:


> I am sure that the FE will be BIOS locked to 350w. The 12-pin connector might be able to handle a lot of current but they are shipping the cards with a 2x8-pin adaptor so they won't ever allow more than 300w.
> You would need to be able to flash a BIOS from another vendor in order to increase that limit which might be impossible since it won't be the same PCB this time around (so probably different VRM setup). Also they might lock it down further.


That's quite a lot of speculation there. Makes no sense to say they'll lock to 350W. Spec is 375W so that's probably the worst case. 

You guys do realize that there have been 2x 8-pin cards in the past easily capable of pulling 400+W, right? I don't think it's a given that Nvidia will lock to 375W. I'd say odds are 50/50.


----------



## RagingCain

Has anyone established what SLI is looking like on air?


----------



## Shawnb99

RagingCain said:


> Has anyone established what SLI is looking like on air?


The inside of your PC melting from the heat


----------



## Sheyster

Shawnb99 said:


> The inside of your PC melting from the heat


Open bench might be a good option for air-cooled dual 3090 cards if they're truly quiet at higher fan speeds as is being speculated.


----------



## RagingCain

Shawnb99 said:


> The inside of your PC melting from the heat


Heat is an issue that needs solving, but the physical constraints are the bigger problem.



Sheyster said:


> Open bench might be a good option for air-cooled dual 3090 cards if they're truly quiet at higher fan speeds as is being speculated.


That might be a good route, or that ridiculously sized Antec transformer case with an E-ATX board.


----------



## Sheyster

RagingCain said:


> That might be a good route, or ridiculous sized Antec transformer case with an e-atx.


That thing is... Interesting?


----------



## Mooncheese

ref said:


> I'm still torn between the Strix or a reference model at this point. It's nice to know Heatkiller will be making a block for the Strix. I don't mind having to wait a few months for a waterblock as I would gladly prefer the quality of Heatkiller/Aquacomputer vs EK and others.
> 
> Power limits and how they effect overclocking will certainly be interesting, can't wait for some benchmarks.


Any particular reason you're settling on the Strix over other variants? Do you have any idea how much it will be? (There was a 3090 Strix listing on a German site, caseking.de I think, for $1750 or something crazy.) Can you point to where Heatkiller said they are making a 3090 block?

The problem I have with these statements about people making blocks is that it can take them forever to get around to it. In a recent post I linked to a TechPowerUp EVGA 2080 Ti FTW3 review dated Nov 2018 and then a product release announcement from EKWB for the FTW3 block dated Nov 2019, one year later.



Nammi said:


> Seems like Strix is the custom board to receive most blocks.
> WB info summary


Thanks for this summary! I'm actually leaning back towards the FTW3 again, as Bykski is releasing a block for it, and apparently they are not playing around; they were one of the first to have a 3080/3090 block out. Half of my loop contains Bykski parts (fittings, tubing, temp sensors), and I find their products actually EXCEED EK's for nearly half the price on AliExpress. In fact, the 3080/3090 FE block is going for $100 WITH the back-plate; that's half the price of EK's block with the back-plate.

The concern I have is that we just have announcements of intent to make blocks, and going by past history, it took EKWB one year from time of release to make a FTW3 block (FTW3 release, Nov 2018, EKWB FTW3 block release, Nov 2019).

I need more than announcements, I need some kind of timeframe. How quickly were blocks made the last time around for the Strix variant?



shiokarai said:


> Actually, in the case of RTX 2080 ti FE it was easy to remove FE cooler.... the video linked in your post is about cooler teardown, not about what matters when you install the block - cooler removal, it's this video:
> 
> 
> 
> 
> 
> 
> But this time I'm afraid it will be hard to even remove the stock cooler...


Correction noted, thanks for the clarification! 



BigMack70 said:


> If the 2080Ti is any indication, the 3090 KPE will probably cost $2200-2400 and will only be worth that premium to people who value the experience of playing with the card at low level and/or doing sub-ambient overclocking, or for people who want the best version of the card and don't care about price.
> 
> 
> 
> That's quite a lot of speculation there. Makes no sense to say they'll lock to 350W. Spec is 375W so that's probably the worst case.
> 
> You guys do realize that there have been 2x 8-pin cards in the past easily capable of pulling 400+W, right? I don't think it's a given that Nvidia will lock to 375W. I'd say odds are 50/50.


Agreed, the KP is probably going to run $2400 this time around. I understand it's a marvelous card and the very best of the best, but good lord, it always seems to be around 50% over the price of the card in question. Not sure who this is for other than people for whom money is really no object. Agreed, it's too early to tell how locked down Ampere will be, but I'm not taking any chances with the FE.

50/50 sounds about right, and I don't like those odds, not for $1500+


----------



## RagingCain

Sheyster said:


> That thing is... Interesting?


I do like it. I am internally having the debate upgrade (😇 You have a family now. You should be responsible.) .... or new build (👿 pick me! pick me! then we can sit around browsing the steam library without playing anything!).


----------



## J7SC

RagingCain said:


> Heat is an issue needing solved, but more the physical constraints.
> 
> 
> 
> That might be a good route, or ridiculous sized Antec transformer case with an e-atx.


Antec Torque is very interesting, but I'm not sure it is e-ATX (before Dremel). The TT Core P5 I use for dual GPU wasn't technically an e-ATX either, but it is now (after Dremel). In addition to the Antec and Core P5 etc, there's also the Core P7 - probably a good e-ATX candidate for dual 3090s demanding lots of airflow...


----------



## BigMack70

RagingCain said:


> Has anyone established what SLI is looking like on air?


Large, loud, and hot.


----------



## changboy

You realize the 3090's cooler is around twice the size of the 3080's, and this design blows most of the heat directly out the back of the case, while the fan at the other end of the card helps keep dropping the temp.

The air coming out of the inside fan won't be as hot as the air exhausted out the back. At least that's what I think.


----------



## ref

Mooncheese said:


> Any particular reason you're settling on the Strix over other variants? Do you have any idea how much it will be? (There was a 3090 Strix listing on a German site, caseking.de I think, for $1750 or something crazy.) Can you point to where Heatkiller said they are making a 3090 block?
> 
> The problem I have with these statements about people making blocks is that it can take them forever to get around to it. In a recent post I linked to a TechPowerUp EVGA 2080 Ti FTW3 review dated Nov 2018 and then a product release announcement from EKWB for the FTW3 block dated Nov 2019, one year later.
> 
> 
> 
> Thanks for this summary! I'm actually leaning back towards FTW3 again as Bykski is releasing a block for it and they are not playing around apparently as they were one of the first to have a 3080 - 3090 block out. Half of my loop contains Bykski parts (fittings, tubing, temp sensors) I find their products actually EXCEED that of EK for nearly half the price from Ali-Express. In fact, the 3080 - 3090 FE block is going for $100 WITH the back-plate, that's half the price of EK's block with the back-plate.
> 
> The concern I have is that we just have announcements of intent to make blocks, and going by past history, it took EKWB one year from time of release to make a FTW3 block (FTW3 release, Nov 2018, EKWB FTW3 block release, Nov 2019).
> 
> I need more than announcements, I need some kind of timeframe. How quickly were blocks made the last time around for the Strix variant?
> 
> 
> 
> Correction: Yes thanks for the clarification!
> 
> 
> 
> Agreed, KP is probably going to run $2400 this time around. I understand it's a marvelous card and the very best of the best but good lord, it always seems to be around 50% up and over the price of the card in question. Not sure who this is for other than people for whom money is really no object. Agreed, it's too early to tell how locked down Ampere will be, but I'm not taking any chances with FE.
> 
> 50/50 sounds about right, and I don't like those odds, not for $1500+












I will absolutely consider the FTW3 as soon as there is some confirmation that blocks are being made for it in a reasonable timeframe by a good company. Like I said, I don't mind waiting a few months for a block, but I won't wait a year. As for cost, I'm not concerned about money, as it's going to cost a **** ton anyways to get one in Canada.


----------



## J7SC

ref said:


> (...) Like I said, I don't mind waiting a few months for a block but I won't wait a year. In regards to cost, I'm not concerned about money as it's *going to cost a **** ton anyways to get one in Canada*


Yeah, Canada isn't exactly 'cheap' for PC parts. But I have ordered several items directly from Watercool's online shop in Germany, and the exchange rate (via credit card) was very favourable. Delivery (pre-Covid, mind you) was speedy and painless from Germany to W.Canada


----------



## Shawnb99

J7SC said:


> Yeah, Canada isn't exactly 'cheap' for PC parts. But I have ordered several items directly from Watercool's online shop in Germany, and the exchange rate (via credit card) was very favourable. Delivery (pre-Covid, mind you) was speedy and painless from Germany to W.Canada


Think it's limited to DHL Express only now. Think that's true for most of the EU and UK. Ordering from Aquacomputer for other stuff, I'm paying €45 for shipping. Should be speedy though.


----------



## domenic

Mooncheese said:


> Any particular reason you're settling on the Strix over other variants? Do you have any idea how much it will be? (There was a 3090 Strix listing on a German site, caseking.de I think, for $1750 or something crazy.) Can you point to where Heatkiller said they are making a 3090 block?
> 
> The problem I have with these statements about people making blocks is that it can take them forever to get around to it. In a recent post I linked to a TechPowerUp EVGA 2080 Ti FTW3 review dated Nov 2018 and then a product release announcement from EKWB for the FTW3 block dated Nov 2019, one year later.
> 
> 
> 
> Thanks for this summary! I'm actually leaning back towards FTW3 again as Bykski is releasing a block for it and they are not playing around apparently as they were one of the first to have a 3080 - 3090 block out. Half of my loop contains Bykski parts (fittings, tubing, temp sensors) I find their products actually EXCEED that of EK for nearly half the price from Ali-Express. In fact, the 3080 - 3090 FE block is going for $100 WITH the back-plate, that's half the price of EK's block with the back-plate.
> 
> The concern I have is that we just have announcements of intent to make blocks, and going by past history, it took EKWB one year from time of release to make a FTW3 block (FTW3 release, Nov 2018, EKWB FTW3 block release, Nov 2019).
> 
> I need more than announcements, I need some kind of timeframe. How quickly were blocks made the last time around for the Strix variant?
> 
> 
> 
> Correction: Yes thanks for the clarification!
> 
> 
> 
> Agreed, KP is probably going to run $2400 this time around. I understand it's a marvelous card and the very best of the best but good lord, it always seems to be around 50% up and over the price of the card in question. Not sure who this is for other than people for whom money is really no object. Agreed, it's too early to tell how locked down Ampere will be, but I'm not taking any chances with FE.
> 
> 50/50 sounds about right, and I don't like those odds, not for $1500+


We seem to be in the same boat as many people here. I have had good experience with EKWB and EVGA over the last two cycles of Nvidia flagship cards, putting the two together with zero issues. I have no experience with ASUS in terms of video cards, and until this thread I had never heard of Bykski.

Although I feel most comfortable with the EVGA/EKWB combo, I am open to, say, an ASUS/Bykski match-up. The ASUS Strix is appealing over the FTW3 because it has a second HDMI 2.1 port. 

So from what I am reading in the recent posts: (1) EKWB has listed the Strix as "coming soon" but not the FTW3, and (2) Bykski has announced (not seeing any link to verify, however) that they will be making blocks for BOTH the FTW3 and Strix 3090s, which based on history leads us to believe it won't be delayed for months. Am I getting that right? What is the thinking in general (historically) about FTW3 vs Strix cards? Are they roughly the same tier in terms of features? In the next couple of weeks it seems obvious we need to see some kind of solid evidence that a block manufacturer will have product available that fits the custom 3090 we are looking for. Three power connectors is a must at this level of investment and risk. 

So on the 24th it's most likely going to come down to a bunch of us sitting in front of a bunch of screens with several pre-loaded pages, waiting to see if we can snag anything before the sites crash or stock goes to zero for weeks or months. Hopefully by then we will have our 1-2-3 picks in mind and can just go down the line, seeing if we can grab anything; pausing to think about any of the above will be nothing but hesitation, forfeiting any chance of grabbing a card.


----------



## Foxrun

domenic said:


> So on the 24th it's most likely going to come down to a bunch of us sitting in front of a bunch of screens with several pre-loaded pages, waiting to see if we can snag anything before the sites crash or stock goes to zero for weeks or months. Hopefully by then we will have our 1-2-3 picks in mind and can just go down the line, seeing if we can grab anything; pausing to think about any of the above will be nothing but hesitation, forfeiting any chance of grabbing a card.


I think most of us, if not all, will get a 3090 on the 24th. It'll come down to being ready at 9am EST. I managed a 2080 Ti preorder at launch (first wave) after having to wait 10 minutes on a slow webpage through my phone. You need to be ready to buy and try to narrow your choice down to one or two. If you show up 30 min late, then yes, you're SOL.


----------



## Slaughtahouse

J7SC said:


> Yeah, Canada isn't exactly 'cheap' for PC parts. But I have ordered several items directly from Watercool's online shop in Germany, and the exchange rate (via credit card) was very favourable. Delivery (pre-Covid, mind you) was speedy and painless from Germany to W.Canada


Any challenges with "duties"? I've had this issue before with orders from our neighbour to the south many times.


----------



## HyperMatrix

Slaughtahouse said:


> Any challenges with "duties"? I've had this issue before with orders from our neighbour to the south many times.


If the shop you’re buying from charges you tax (that you can see on the checkout page) there won’t be any duty/brokerage fees. Also avoid UPS Standard (ground) and FedEx Ground at all costs. I shipped a $400 NAS to California for $15 CAD to get warranty repairs done. When they shipped it back with UPS Standard, UPS sent me a bill for $88 CAD for duty/brokerage. Lol. NAS company paid the bill for me but still....UPS Standard is cancer.


----------



## J7SC

Slaughtahouse said:


> Any challenges with "duties"? I've had this issue before with orders from our neighbour to the south many times.


The items from Watercool from Germany had no duty because I think there is a big database somewhere determining duties, taxes and the like (ie if the exact product is also available in Canada, or not). Also, more advanced online shops have that built-in as well when you order and fill in your country, address etc.

I have bought a phase cooler and w-cooling items from retailers in the US many years back, and the duties / taxes, while not outrageous, had to be paid to Canada Post (the delivery agent in my area) before they would 'hand over' the goods...


----------



## Shawnb99

Installing a block on the Strix voids the warranty correct?


----------



## Mooncheese

Shawnb99 said:


> Installing a block on the Strix voids the warranty correct?


I could be mistaken but I believe only EVGA allows putting the card under a water block without voiding the warranty.


----------



## Shawnb99

Mooncheese said:


> I could be mistaken but I believe only EVGA allows putting the card under a water block without voiding the warranty.


I figured. EVGA is the only option for me then. That makes the FTW3 or Kingpin the only options, and I'm not into extreme OC, so I can't justify the extra cost for the Kingpin. I think I'm going to wait for the Hydro Copper model, though, rather than join the mad rush on launch and then have to wait for a block.


----------



## domenic

Mooncheese said:


> I could be mistaken but I believe only EVGA allows putting the card under a water block without voiding the warranty.


According to an ASUS Forum Moderator within this thread disassembly of the card in order to install a water block does NOT void the warranty unless you inadvertently damage it in the process.


----------



## HyperMatrix

For those who are curious, ASUS does make some very high quality products. But their support is absolute dog poop. If you're buying their products, make sure you buy some 3rd party coverage for it. I cracked the screen on a monitor and they refused to either sell me a replacement screen or even tell me roughly how much a new screen would cost if I wanted them to service it. 

They said they won't even tell me if a screen replacement cost is closer to $500 or $1500. And I'm not in the habit of spending $80 for shipping with shipping insurance just to find out if it's economical to replace my broken screen or not when it's something they clearly know. I even told them I wouldn't hold them to the estimate they gave and just want a rough idea of what it would cost, if absolutely nothing else is wrong, and a cracked screen needs to be fully replaced. But they wouldn't budge. 

Honestly absolutely garbage service and I wouldn't buy anything expensive from them again. EVGA, on the other hand, I've heard nothing but good things about over the years and from talking to a few of the guys who work there back during the 120Hz.net days, I know they really do care about their customers as well as about gaming and the community as a whole. They even went out of their way to contact Nvidia several times on our behalf regarding some driver limitations that were causing problems for us when overclocking refresh rates.


----------



## kot0005

NVIDIA GeForce RTX 3080 synthetic and gaming performance leaked - VideoCardz.com







Well, I dunno how much faster the 3090 will be; the 3080's performance is quite bad at 1440p unless the CPUs are just big bottlenecks, though that would also mean AMD's cards will be even worse off.


----------



## changboy

I'm in Canada and have never had a problem with ASUS service; they've always been great to me, and fast. I've bought ASUS products since 2008, and whenever I had a problem they were always there. But I always registered my products like you should. When I had an issue I just followed the steps and had no problem here.


----------



## changboy

kot0005 said:


> NVIDIA GeForce RTX 3080 synthetic and gaming performance leaked - VideoCardz.com
> 
> 
> 
> 
> 
> 
> 
> Well I dunno how much faster the 3090 will be, the 3080 performance is quite bad at 1440p unless the CPU's are just big bottlenecks, then that also means that AMD's cards will be even worse.


From what I see, the 3080 is made to perform at high resolution; just like the 2080 Ti is not so good at 1080p, I'm asking myself whether the 3080 isn't enough for me with my 4K OLED. The 3090 will shine at 8K, but I play at 4K.
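The resolution-scaling point is easy to see with a toy model: the frame rate you observe is capped by whichever of the CPU or GPU stage is slower at a given resolution, which is why a faster GPU barely moves 1080p/1440p numbers. All figures below are invented for illustration, not benchmarks:

```python
# Toy model of CPU-vs-GPU bottlenecking: reported FPS is the minimum of the
# CPU-bound ceiling (roughly resolution-independent) and the GPU-bound rate
# (which falls as resolution rises). Numbers are hypothetical.

def observed_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Frame rate is limited by the slower of the CPU and GPU stages."""
    return min(cpu_fps, gpu_fps)

cpu_limit = 160.0  # made-up CPU-bound ceiling for some game
for res, gpu_fps in [("1080p", 240.0), ("1440p", 180.0), ("4K", 95.0)]:
    print(res, observed_fps(cpu_limit, gpu_fps))
```

In this sketch the 1080p and 1440p results both sit at the CPU ceiling, so a faster GPU only shows its gains once the resolution pushes the GPU rate below that ceiling (here, at 4K).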


----------



## J7SC

...I opted for


Shawnb99 said:


> Think now it's limited to DHL Express only. Think that's true for most of the EU and UK. Ordering from Aquacomputer for other stuff I'm paying $45 euros for shipping. Should be speedy though.


...below is one of my Watercool Germany orders from December '18. I used the FedEx option Watercool's site offered, and it arrived w/o any issues, a day early, and w/o taxes or duties...also note bottom print on taxes etc


----------



## Section31

J7SC said:


> ...I opted for
> 
> 
> ...below is one of my Watercool Germany orders from December '18. I used the FedEx option Watercool's site offered, and it arrived w/o any issues, a day early, and w/o taxes or duties...also note bottom print on taxes etc
> 
> View attachment 2458686


You are one of the lucky ones. I have ordered from them using FedEx and never gotten away from taxes. It's not too bad though: a recent order involving a Mo-Ra3 (330 CAD) only cost 36 CAD in taxes and processing fees. Only when I used USPS did I get lucky. I used to buy cellphones from clove.uk and always used Royal Mail when possible; every time I did, I was never taxed on 1000 CAD mobile phones.

As long as you don't use UPS, you are perfectly fine. DHL is pretty OK, but you can't get away from taxes. Though I have one order I haven't received a bill for, because they can't find the tax agents handling my shipment. My only beef with FedEx is I have to tell them to bill my account and not bother the person who answers the door.


----------



## Section31

Shawnb99 said:


> I figured. Evga is the only option for me then. That makes the FTW3 or Kingpin the only options and not into extreme OC so can't justify the extra cost for the Kingpin. Going to wait for the Hydrocopper model though I think rather then join the mad rush on launch and then have to wait for a block


Being patient looks like the right move. By the time they come back into stock, you'll have your choice of Aquacomputer and Heatkiller for sure, and who knows, maybe Optimus too (but doubtful); you end up saving cash by getting the waterblock you want. I think you and I are in the same boat; I'm also looking out on the Heatkiller radiator front.


----------



## J7SC

Section31 said:


> You are one of the lucky ones. I have ordered from them using Fedex and never gotten away from taxes. It's not too bad though. Recent order involving Mo-Ra3 (330cad) and only paid 36cad in taxes and processing fees. Only when i used USPS did i get lucky. I used to buy cellphones from clove.uk and always used royal mail when possible, everytime i did, i never was taxed on 1000cad mobile phones.
> 
> As long as you don't use UPS, you are perfectly fine. So DHL is pretty ok but you can't get away from taxes. Though I have one order i haven't received an bill because they can't find the tax agents handling my shipment.


...back in the days after university in Canada, I worked for the feds here as an 'international economist' (don't ask). That plus subsequent work in business means that I should know the tax and duty codes (note 'should'). In this displayed case, when FedEx received the order from Watercool Germany and informed the Canada Customs and Revenue agency about the incoming shipment, someone there would have looked up the duty and tax status; if it wasn't on their (often outdated) list, it means there was no directly competing Canadian product and/or it was also exempt per the trade Inco db code.

...on the other hand, may be I just really got lucky as you say  

In any event, my experiences ordering directly from Watercool in Germany (and for that matter other outfits and countries in the EU, such as France) have been, well, just nuthin' negative to report. I figured Watercool would do a Strix w-block again, coz they did one for the 2080 Ti which was very well received. Also, if you check their site, they are very proud of their capacity expansion and latest production toys > here


----------



## changboy

Some laws have changed in Canada, at least for me (Québec), because in 2018 when I ordered from Newegg I paid 5% tax, and now it's the same as in store here, meaning 15% tax. They don't want you ordering from another province to save tax, so they fixed that, very funny. Ordering from the USA also brings high import fees when the delivery comes; it can be 25%. 

I just ordered an Optimus Foundation CPU block from Chicago and paid around 35 USD for shipping, and when it arrived here DHL asked me for 46 CAD, and the block cost 119 USD! So for a CPU block at 119 USD I paid around 250 CAD! Funny too.


----------



## Section31

changboy said:


> Some law have change in Canada, at least for me(québec), coz in 2018 when i order from newegg i pay 5% tax and now its same thing then in store here, its mean 15% tax. They dont want you to order in other provine to save tax so they fix that, verry funny. Ordering from usa also have high import fees when delivery comes, it can be 25%.
> 
> I just order optimus fondation cpu block from Chicago and i pay around 35 usd for shipping and when it arrive here dhl ask me 46$ cad and the block cost 119 usd ! So for a cpu at 119 usd i pay around 250 cad ! Funny too.


Optimus no longer has a USPS shipping option, and with DHL you're guaranteed to get charged sales tax. When Optimus had the USPS option, I got 3 of 4 waterblock orders through without any additional charges. Again, it's total luck.


----------



## changboy

I'm always sooo lucky then


----------



## Mooncheese

domenic said:


> According to an ASUS Forum Moderator within this thread disassembly of the card in order to install a water block does NOT void the warranty unless you inadvertently damage it in the process.


That's great news, the Strix is definitely an option to consider, especially if EKWB intends to get a WB out for that variant before FTW3. But how expensive is the Strix? There was a caseking.de product page that had the Strix at $1750 or so. 



HyperMatrix said:


> For those who are curious, ASUS does make some very high quality products. But their support is absolute dog poop. If you're buying their products, make sure you buy some 3rd party coverage for it. I cracked the screen on a monitor and they refused to either sell me a replacement screen or even tell me roughly how much a new screen would cost if I wanted them to service it.
> 
> They said they won't even tell me if a screen replacement cost is closer to $500 or $1500. And I'm not in the habit of spending $80 for shipping with shipping insurance just to find out if it's economical to replace my broken screen or not when it's something they clearly know. I even told them I wouldn't hold them to the estimate they gave and just want a rough idea of what it would cost, if absolutely nothing else is wrong, and a cracked screen needs to be fully replaced. But they wouldn't budge.
> 
> Honestly absolutely garbage service and I wouldn't buy anything expensive from them again. EVGA, on the other hand, I've heard nothing but good things about over the years and from talking to a few of the guys who work there back during the 120Hz.net days, I know they really do care about their customers as well as about gaming and the community as a whole. They even went out of their way to contact Nvidia several times on our behalf regarding some driver limitations that were causing problems for us when overclocking refresh rates.


Asus does make quality stuff, and their customer service is OK (I had damaged a PG278Q monitor whilst moving years ago, and when I sent it to them they gave me a quote of only like $200, if I remember correctly, to repair it); they are definitely an option to consider alongside EVGA. 

Gigabyte, however, I've heard nothing but bad things about (non-existent customer service; they never respond to inquiries). 

They are all mostly comparable in terms of build quality and reliability, but if I had to rate them in order of customer service it would be: 

1. EVGA
2. Asus
3. MSI 
4. Gigabyte

In terms of features it would be: 

1. EVGA
2. Asus
3. MSI 
4. Gigabyte 

In my opinion, unless proven otherwise, the FTW3 has the Strix beat in terms of features. I believe they both have a power fuse and BIOS switch, but the FTW3, with its iCX3 temp sensors on the VRM and VRAM, will be invaluable this time around for those of us putting the card under a WB, as we will be able to see whether a passive back-plate is adequate to cool the top bank of VRAM (or whether we need to add additional heat-sinks, etc.).
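On the monitoring point: even without iCX3, core temps can be polled during a load test (e.g. `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits` in a loop) and summarized to sanity-check a cooling setup, though memory-junction temps generally need a tool like HWiNFO rather than nvidia-smi. A rough sketch, with canned readings standing in for the polling loop (all numbers hypothetical):

```python
# Summarize a series of polled GPU core temperatures (°C) from a load test.
# The readings below are invented samples; nvidia-smi itself does not report
# GDDR6X junction temperature on these cards, only the core sensor.

def summarize_temps(samples_c):
    """Return (peak, mean) of a series of temperature readings in °C."""
    return max(samples_c), sum(samples_c) / len(samples_c)

readings = [38, 41, 44, 46, 47, 47, 48, 48]  # hypothetical water-cooled run
peak, avg = summarize_temps(readings)
print(f"peak {peak}°C, avg {avg:.1f}°C")
```

Comparing peak against a known throttle point is the crude version of what the iCX3 per-rail sensors give you directly on the VRM and VRAM.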


----------



## dentnu

You don’t void your warranty with MSI cards either. I had a 2080 Ti Trio last gen and was very happy with it. Very quiet card for the brief time I used its stock cooler. I am leaning towards the ASUS Strix this gen; I have never had an ASUS card, and the Strix looks the best out of all of them in my opinion. Let's hope it's not priced too crazy... 

If not, I will go with the MSI Trio again. I used to only buy EVGA cards, but I have not liked the look of their cards for a while now. The FTW looks horrible with all that RGB, and the red accents killed it for me. I agree when it comes to customer service EVGA is the best. That alone is reason enough to buy their products.


----------



## HyperMatrix

dentnu said:


> You don't void your warranty with MSI cards either. I had a 2080 Ti Trio last gen and was very happy with it. Very quiet card for the brief time I used its stock cooler. I am leaning towards the Asus Strix this gen; I have never had an Asus card, and the Strix looks the best out of all of them in my opinion. Let's hope it's not priced too crazy...
> 
> If not I will go with the MSI Trio again. I used to only buy EVGA cards, but I have not liked the look of their cards for a while now. The FTW looks horrible with all that RGB, and the red accents killed it for me. I agree when it comes to customer service EVGA is the best. That alone is reason enough to buy their products.


Also, one reason I want an EVGA card (and this could be silly on my part, but I really want it for this) is the EVGA Precision overclocking app built into the Xbox Game Bar. It only works with EVGA cards.

EVGA - Articles - EVGA Precision for Game Bar (www.evga.com):
EVGA Precision for Game Bar is here. This widget for the Xbox Game Bar on Windows 10 devices gives you instant access to monitoring your EVGA graphics card, built right into the Game Bar. Simply press Windows logo key + G to open Game Bar over your game, application or even desktop.


----------



## Mooncheese

dentnu said:


> You don't void your warranty with MSI cards either. I had a 2080 Ti Trio last gen and was very happy with it. Very quiet card for the brief time I used its stock cooler. I am leaning towards the Asus Strix this gen; I have never had an Asus card, and the Strix looks the best out of all of them in my opinion. Let's hope it's not priced too crazy...
> 
> If not I will go with the MSI Trio again. I used to only buy EVGA cards, but I have not liked the look of their cards for a while now. The FTW looks horrible with all that RGB, and the red accents killed it for me. I agree when it comes to customer service EVGA is the best. That alone is reason enough to buy their products.


Agreed, the Strix looks the best. I don't know what EVGA was thinking this time around; I don't like the red accent, but everything else is OK. The FTW3 is slowly growing on me to be honest, but I couldn't care less, the card is going under water. 



HyperMatrix said:


> Also, one reason I want an EVGA card (and this could be silly on my part, but I really want it for this) is the EVGA Precision overclocking app built into the Xbox Game Bar. It only works with EVGA cards.
> 
> EVGA - Articles - EVGA Precision for Game Bar (www.evga.com):
> EVGA Precision for Game Bar is here. This widget for the Xbox Game Bar on Windows 10 devices gives you instant access to monitoring your EVGA graphics card, built right into the Game Bar. Simply press Windows logo key + G to open Game Bar over your game, application or even desktop.


Wow I didn't even know this was a thing! This is very cool, thanks for sharing! I've been using MSI AB over EVGA PX because I undervolt and I don't like the volt+freq curve in PX.


----------



## J7SC

Asus may not have the friendliest / most compliant customer service, but among the 30-odd 'recent and live' GPUs, I have 4x Asus and 9x EVGA, the latter all being Classified or KP cards... While I've had bad Asus customer service (only on initial-bios-release-gen X99 mobos), none was necessary for any of the Asus GPUs. On the other hand, I have/had 4 of 8 EVGA cards (7th and 9th gen) develop issues around PCIe x. I decided to RMA one of those EVGA cards and there was no dispute by EVGA, even after water-cooling... but what I had sent in was a pristine 91%+ ASIC CL card (less than 4 weeks old), and what I got back was a beat-up 68% ASIC, shroud-chipped and otherwise questionable GPU.

The early versions of the 780 Ti KPE seemed like they had the same weak PCB w/ a different-bin GPU and shroud. The more recent KPEs seem genuinely different from the other EVGAs, even re. the PCB, and in any case, the Vince-TiN bios, EVBot and their other engineering and support always was/is above reproach, no matter what version I dealt with.

So IMO, while I will have no problem opting for a KPE 3090, depending on its price competitiveness and also timeliness to market, I will not touch any other EVGA card in the Ampere series. For now, I am leaning towards 2x Asus 3090 Strix OC w/ Heatkiller or Aquacomputer blocks, but I will wait and see if other preferred vendors (in my experience, such as Aorus) will release some additional factory water-blocked cards in the 3090 series in a few months, preferably with 3x 8-pin. No hurry at this stage, though.


----------



## domenic

Mooncheese said:


> That's great news, the Strix is definitely an option to consider, especially if EKWB intends to get a WB out for that variant before FTW3. But how expensive is the Strix? There was a caseking.de product page that had the Strix at $1750 or so.
> 
> 
> 
> Asus does make quality stuff, and their customer service is OK (I damaged a PG278Q monitor whilst moving years ago, sent it to them, and they quoted me only around $200, if I remember correctly, to repair it), so they are definitely an option to consider alongside EVGA.
> 
> Gigabyte, however, I've heard nothing but bad things about (non-existent customer service, they never respond to inquiry).
> 
> They are all mostly comparable in terms of build quality and reliability, but if I had to rate them in order of customer service it would be:
> 
> 1. EVGA
> 2. Asus
> 3. MSI
> 4. Gigabyte
> 
> In terms of features it would be:
> 
> 1. EVGA
> 2. Asus
> 3. MSI
> 4. Gigabyte
> 
> In my opinion, unless proven otherwise, the FTW3 has the Strix beat in terms of features. I believe they both have a power fuse and bios switch, but the FTW3, with its iCX3 temp sensors on the VRM and VRAM, will be invaluable this time around for those of us putting the card under a WB, as we will be able to see whether a passive back-plate is adequate to cool the top bank of VRAM (or whether we need to add additional heat-sinks, etc.).


Good points. I currently run three "regular monitors" + a new LG CX 55" OLED sitting in a box ready to go. The monitoring features of the FTW3 would trump a second HDMI 2.1 port I probably won't use.

Speaking of the memory on the back of the 3090, I can't find any information floating around regarding how the various waterblock vendors are going to address this, which has me somewhat concerned. Obviously AIBs are designing their stock 3090 coolers to also cool these chips, so slapping a waterblock on only the front side feels like it's going to be a problem. I suppose a backplate could help, but it's an optional extra with EKWB at least. I can't imagine it's possible to create some sort of "wrap-around block", but just placing a passive backplate on a bunch of super hot chips that were originally designed to be cooled with a fan of some sort doesn't feel right either.

Any ideas?


----------



## HyperMatrix

domenic said:


> Speaking of the memory on the back of the 3090, I can't find any information floating around regarding how the various waterblock vendors are going to address this, which has me somewhat concerned. Obviously AIBs are designing their stock 3090 coolers to also cool these chips, so slapping a waterblock on only the front side feels like it's going to be a problem. I suppose a backplate could help, but it's an optional extra with EKWB at least. I can't imagine it's possible to create some sort of "wrap-around block", but just placing a passive backplate on a bunch of super hot chips that were originally designed to be cooled with a fan of some sort doesn't feel right either.
> 
> Any ideas?


I don't know about the other block makers, but for my last 2 cards I've used Aquacomputer, which has the option for an actively cooled backplate. It basically works by placing a heatpipe on the backplate, which gets plugged into your liquid cooling connector block thingy. Sorry for the overuse of technical terms.


----------



## Mooncheese

J7SC said:


> Asus may not have the friendliest / most compliant customer service, but among the 30-odd 'recent and live' GPUs, I have 4x Asus and 9x EVGA, the latter all being Classified or KP cards... While I've had bad Asus customer service (only on initial-bios-release-gen X99 mobos), none was necessary for any of the Asus GPUs. On the other hand, I have/had 4 of 8 EVGA cards (7th and 9th gen) develop issues around PCIe x. I decided to RMA one of those EVGA cards and there was no dispute by EVGA, even after water-cooling... but what I had sent in was a pristine 91%+ ASIC CL card (less than 4 weeks old), and what I got back was a beat-up 68% ASIC, shroud-chipped and otherwise questionable GPU.
> 
> The early versions of the 780 Ti KPE seemed like they had the same weak PCB w/ a different-bin GPU and shroud. The more recent KPEs seem genuinely different from the other EVGAs, even re. the PCB, and in any case, the Vince-TiN bios, EVBot and their other engineering and support always was/is above reproach, no matter what version I dealt with.
> 
> So IMO, while I will have no problem opting for a KPE 3090, depending on its price competitiveness and also timeliness to market, I will not touch any other EVGA card in the Ampere series. For now, I am leaning towards 2x Asus 3090 Strix OC w/ Heatkiller or Aquacomputer blocks, but I will wait and see if other preferred vendors (in my experience, such as Aorus) will release some additional factory water-blocked cards in the 3090 series in a few months, preferably with 3x 8-pin. No hurry at this stage, though.


Wow, you have an extensive GPU ownership history! That's the thing with repair, though: manufacturers typically don't do repairs, they simply send you a different card. I suppose this is a way to lose the silicon lottery, ultimately. 

Do you have any information regarding the Strix Heatkiller and Aquacomputer blocks in terms of availability? Also, considering no-one is using Samsung 2GB modules, the following reply to Domenic may interest you.



domenic said:


> Good points. I currently run three "regular monitors" + a new LG CX 55" OLED sitting in a box ready to go. The monitoring features of the FTW3 would trump a second HDMI 2.1 port I probably wont use.
> 
> Speaking of the memory on the back of the 3090, I can't find any information floating around regarding how the various waterblock vendors are going to address this, which has me somewhat concerned. Obviously AIBs are designing their stock 3090 coolers to also cool these chips, so slapping a waterblock on only the front side feels like it's going to be a problem. I suppose a backplate could help, but it's an optional extra with EKWB at least. I can't imagine it's possible to create some sort of "wrap-around block", but just placing a passive backplate on a bunch of super hot chips that were originally designed to be cooled with a fan of some sort doesn't feel right either.
> 
> Any ideas?


No ideas, we are all in new territory here with the 3090, one reason I'm leaning towards the FTW3 is that being able to gauge memory temps will be of vital importance. I just viewed this video, and it gets into exactly why the iCX3 sensors may be useful and also the cooling approach that EVGA is using with the 3090 FTW3: 






I had a feeling those top bank modules were connected to a heatsink via heatpipe, and this all but confirms it. I'm not sure a passive back-plate from the likes of EKWB will be sufficient for this task. Even more worrisome is the way the back-plates are designed (I have one on my XC2): they connect directly to the backside of the VRM on the card. So you will have the VRM conveying heat into the back-plate and into your top bank of modules. 
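For a rough sense of the numbers involved, here is a back-of-envelope sketch of the steady-state temperature rise across a passive back-plate. Every figure in it (per-module power, pad/plate/plate-to-air thermal resistances) is an assumed illustrative value, not a measurement from any actual card:

```python
# Back-of-envelope: can a passive back-plate sink the rear GDDR6X bank?
# ALL numbers here are illustrative assumptions, not measured values.

def temp_rise_c(power_w: float, r_total_c_per_w: float) -> float:
    """Steady-state temperature rise above case air for a heat load pushed
    through one lumped thermal resistance (pad + plate + plate-to-air)."""
    return power_w * r_total_c_per_w

heat_w = 12 * 2.0  # assumed: 12 rear modules at ~2 W each into one plate

# Lumped series resistance (all assumed): thermal pads ~0.2, spreading
# through the aluminium plate ~0.1, passive plate to still case air ~1.5
r_lumped = 0.2 + 0.1 + 1.5  # degC per watt

print(f"~{temp_rise_c(heat_w, r_lumped):.0f} degC above case air")  # ~43 degC
```

With case air around 40 °C, those assumed figures would put the rear modules in the 80 °C+ range, and the plate-to-air term dominates, which is why airflow over the back-plate (or an actively cooled back-plate) matters far more than the pad or the plate itself.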

It may be that the only feasible solution is Aquacomputer's active back-plate, and there is no telling when they will make a block for the FTW3, but if I remember correctly, they've stated they intend to make a block for most 30 series cards, including the FTW3. 



HyperMatrix said:


> I don't know about the other block makers. But my last 2 cards I've used Aquacomputer, which has the option for an active cooled backplate. It basically works by placing a heatpipe on the backplate, which gets plugged in to your liquid cooling connector block thingy. Sorry for the overuse of technical terms.


Yes, see the GN vid above; it may be that Aquacomputer's back-plate is one of the only effective solutions with the 3090, the other being just buying the 3090 FTW3 HC directly from EVGA, but there is no telling when that will be due out. 

I think I'm going to hold off buying on the 24th of September because I won't be running the card on air and I want to see how cooling the top bank of VRAM will be accomplished by the various block manufacturers before getting one.


----------



## J7SC

Mooncheese said:


> Wow, you have an extensive GPU ownership history! (...)


Well, I sign some of the purchase orders I write @ work..


----------



## Mooncheese

J7SC said:


> Well, I sign some of the purchase orders I write @ work..


Do you have any concerns with how cooling the top bank of VRAM on the 3090 will be accomplished in conjunction with a water block, irrespective of the manufacturer?


----------



## ref

Aquacomputer creating a Strix/FTW3 block would be the best-case scenario for me, but AFAIK they have only confirmed FE and reference. Heatkiller is a close second; they are confirmed to be making a Strix block. However, I like Aquacomputer's designs a bit more based on previous-gen blocks. I hope most of the waterblock companies will announce their full plans before launch, as it will help me decide on which card to get.


----------



## HyperMatrix

Nvidia just announced they're delaying the release date for reviews of the cards until 1 day before they go on sale. They're claiming COVID and reviewers needing more time to get the reviews done... despite most reviewers mentioning they finished their reviews days ago. Sounds to me like they don't want a lot of time to pass from the release of the reviews until the release of the cards, especially since the leaked reviews have shown the cards in a less favorable light than what Nvidia themselves announced on September 1st. Fortunately it won't affect us 3090 folk as much since we gotta wait until the 24th anyway.


----------



## Glerox

I agree with most of the people here that the FE seems not to be the best choice for watercooling because of a possible software power limit like on old FE cards.

For the first time I'll get an AIB at launch, either ROG with bitspower block or FTW with EVGA block. I just hope there will be some in stock for more than one minute...


----------



## Mooncheese




----------



## domenic

This site / page is keeping track of all RTX 30 series water block manufacturers and products with daily updates.


----------



## domenic

Mooncheese said:


> Wow, you have an extensive GPU ownership history! That's the thing with repair though, manufacturers typically don't do repair, they simply send you a different card. I suppose this is a way to lose the silicon lottery ultimately.
> 
> Do you have any information regarding the Strix Heatkiller and Aquacomputer blocks in terms of availability? Also, considering no-one is using Samsung 2GB modules, the following reply to Domenic may interest you.
> 
> 
> 
> No ideas, we are all in new territory here with the 3090, one reason I'm leaning towards the FTW3 is that being able to gauge memory temps will be of vital importance. I just viewed this video, and it gets into exactly why the iCX3 sensors may be useful and also the cooling approach that EVGA is using with the 3090 FTW3:
> 
> 
> 
> 
> 
> 
> I had a feeling those top bank modules were connected to a heatsink via heatpipe and this pretty much all but confirms that. I'm not sure a passive back-plate from the likes of EKWB will be sufficient for this task. Even more worrisome is that the way the back-plates are designed (I have one on my XC2) is they connect directly to the backside of the VRM on the card. So you will have VRM conveying heat into the back-plate and into your top bank of modules.
> 
> It may be that the only feasible solution is Aquacomputer's active back-plate, and there is no telling when they will make a block for the FTW3, but if I remember correctly, they've stated they intend to make a block for most 30 series cards, including the FTW3.
> 
> 
> 
> Yes, see the GN vid above, it may be the case that Aquacomputer's back-plate may be one of the only effective solutions with the 3090, the other being just buying the 3090 FTW3 HC directly from EVGA, but there is no telling when that will be due out.
> 
> I think I'm going to hold off buying on the 24th of September because I won't be running the card on air and I want to see how cooling the top bank of VRAM will be accomplished by the various block manufacturers before getting one.


This type of backplate solution is what was mentioned above and seems to be what we need for the 3090. I have no experience with AquaComputer and don't see any non-reference blocks made for custom AIB cards in the past. Am I missing something?


----------



## dentnu

I don't see any listings on US sites like Amazon or Newegg for the Asus Strix 3080 or 3090. I know we are still about two weeks out from the 3090 launch, but the 3080 launch is less than a week away and it's not been listed yet. Something tells me it's not going to be ready for launch day. Hope I am wrong.


----------



## changboy

Arggh, I changed my mind about my order of the *Bykski RTX 3080 / 3090 Founders Edition GPU Cooler.*
I sent an email to get a refund lol; I will wait and see which card I can actually get, FTW3 or Strix.

I just don't want to end up not getting one and having to wait again. It's been 5 months of waiting since I sold my 1080 Ti, and now I'm running an old R9 290 PCS+ lol, and that on my 4K OLED: full of bugs, can't get more than 30 fps on the desktop, and I've stopped playing all my latest games because the card is too old. So boring to wait lol.


----------



## CallsignVega

Pretty funny that NVIDIA delayed the NDA lifting. I think this talk about a 3080 being 70% faster than a 2080 Ti, including normal rasterization, is about to vaporize.


----------



## BigMack70

CallsignVega said:


> Pretty funny that NVIDIA delayed the NDA lifting. I think this talk about a 3080 being 70% faster than a 2080 Ti, including normal rasterization, is about to vaporize.


That talk has never existed outside of people who don't know how to read or listen and who imagine the letters "Ti" where they weren't written or spoken.

3080 should be 70% faster than a 2080, give or take; maybe a bit better if the DF benchmark games are representative and not cherry-picked (doubtful). Which means 33% faster than 2080 Ti, give or take. Which means 3090 should be around 60% faster than 2080 Ti, give or take.
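The chained-ratio arithmetic behind those estimates can be sketched in a few lines; the input ratios below are the rough figures being thrown around in this thread, not benchmark results:

```python
# Convert "X% faster than the 2080" claims into "faster than the 2080 Ti".
# Both input ratios are rough thread estimates, not benchmark data.

def gain_pct(a_vs_base: float, b_vs_base: float) -> float:
    """Given A and B each as a multiple of the same baseline card,
    return how much faster A is than B, as a percentage."""
    return (a_vs_base / b_vs_base - 1.0) * 100.0

r3080   = 1.70   # "3080 ~70% faster than the 2080" (assumed figure)
r2080ti = 1.28   # 2080 Ti vs 2080 at 4K (assumed rough figure)

print(f"3080 vs 2080 Ti: ~+{gain_pct(r3080, r2080ti):.0f}%")  # ~+33%
```

Plugging 1.90 in for Nvidia's "up to twice as fast" claim instead gives ~+48%, which is where the "almost 50% faster than a 2080 Ti" reading comes from; the gap between +33% and +48% is exactly the gap between typical and best-case results.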


----------



## originxt

CallsignVega said:


> Pretty funny that NVIDIA delayed the NDA lifting. I think this talk about a 3080 being 70% faster than a 2080 Ti, including normal rasterization, is about to vaporize.


Wasn't the embargo delay done to help reviewers who got their samples late get their testing done and released at the same time as everyone else? The embargo is still being lifted before sales actually start, so I'm unsure what the issue is here. Unless 24 hours isn't enough for people to make a decision based on the results released?


----------



## J7SC

I wouldn't be totally surprised if NVidia is delaying a bit to get an updated patch / driver out to deal with some of the specific titles, such as referenced here (usual pinch of salt)


----------



## BigMack70

Mooncheese said:


> I mean this is still fantastic but it's not 50% faster


How on earth could anyone legitimately have thought the 3080 would be 50% faster than the 2080 Ti? Did some of you really take Nvidia's marketing claim of "_the 3080 will be up to twice as fast as the 2080_" and conclude "Sweet! _The 3080 is going to be twice as fast as the 2080_"!? Because that's what 50% faster than the 2080 Ti means. Did you think perhaps the 3080 was going to perform even better than Nvidia's marketing department claimed? Is that really the kindergarten level of critical thinking being used among a group of people with $1500 of disposable income for a graphics card? Really? Come on.

If you are disappointed that the 3080 is not 50% faster than the 2080 Ti based on Nvidia's information about the card, you need to work on your critical thinking skills. And frankly, how anyone gets a job that pays enough to provide $1500 of disposable income on a GPU without those skills is beyond my comprehension. 50% better than 2080 Ti will be in controlled best case scenarios only and Nvidia never claimed otherwise. Most of what we've seen firmly plants the 3080 around the 2080 Ti + 33% category of performance at 4k, with less of a jump at lower resolutions, and none of that is out of line or surprising based on Nvidia's presentation of the card. 

Now, if it turns out that the average performance of the 3080 is something like 2080 Ti + 15% at 4k, then go ahead and get the disappointment pitchforks out. But so far, nothing we've seen suggests that.


----------



## domenic

Check out this completely inadequate answer from EKWB in response to how they intend to cool the RAM chips on the back of the 3090 with their blocks.


----------



## Mooncheese

domenic said:


> Check out this completely inadequate answer from EKWB in response to how they intend to cool the RAM chips on the back of the 3090 with their blocks.


Yes, that's my Reddit post; apparently I'm the only one asking this question on the internet at present. I was also banned on r/watercooling without explanation for posting the same question and EKWB's non-existent response there. I also posted the same question on r/nvidia and only received comments amounting to insults about entitlement.


----------



## pewpewlazer

We've seen benchmarks showing the 3080 being 60-80% faster than the 2080. So realistically, we can expect the 3080 to be somewhere around 30-35% faster than the 2080 Ti. 

No, Nvidia's vague marketing claims of "up to twice as fast as a 2080" or some cherry picked benchmarks from DF do not mean the 3080 should be 50% faster than a 2080 Ti across the board.

The level of completely unrealistic expectations is just baffling. This is similar to Pascal level of performance improvement, which everyone looked back on fondly when Turing came out. Now it's disappointing?


----------



## dentnu

I don't see what the big problem is. NVIDIA is a corporation, and its main goal is to make money... If they think they need to delay the reviews by a few days, so be it. Who knows what the truth is behind the reviews being delayed. What I do know is that their main goal is to make as much money as possible. They do not care about you, me or anyone else, and that is perfectly fine. They are not in the business of caring. It's called capitalism, and if I were in charge of NVIDIA I would do exactly what they are doing right now. If you have a problem with their business strategy then don't buy their products, or any other product from any other company, because they are all the same. Please let's not ***** and cry that what they are doing is not acceptable. Life is driven by money; that is what makes the world spin, and everyone is trying to make as much as possible.


----------



## Mooncheese

pewpewlazer said:


> We've seen benchmarks showing the 3080 being 60-80% faster than the 2080. So realistically, we can expect the 3080 to be somewhere around 30-35% faster than the 2080 Ti.
> 
> No, Nvidia's vague marketing claims of "up to twice as fast as a 2080" or some cherry picked benchmarks from DF do not mean the 3080 should be 50% faster than a 2080 Ti across the board.
> 
> The level of completely unrealistic expectations is just baffling. This is similar to Pascal level of performance improvement, which everyone looked back on fondly when Turing came out. Now it's disappointing?


That's not what I said. I said that the 3080 went from being ~50% faster than the 2080 Ti (curated Digital Foundry benchmarks: SOTTR, Control and Doom Eternal) to only ~25% faster (recent 3DMark suite benchmark leak).

In the early exclusive DF video the 3080 is clearly 75-80% faster than the 2080 in Control and SOTTR and upwards of 90% faster in Doom Eternal with RT and DLSS off.

There is also the Doom Eternal 4K video with the 3080 running that game at ~150 FPS vs ~100 FPS of the 2080 Ti.

Considering the 2080 Ti is 25% faster on average than the 2080, some extrapolation puts the 3080 at ~50% faster than the 2080 Ti (backed up by the direct comparison of the 3080 to the 2080 Ti in Doom Eternal @ 4K).

Big Mack basically called me an idiot "HOw someoONe can affords $1500 for gRaphics and have jobs to pAY wage = YOUR DUMB" in response.

I never stated that the 3080 is 50% faster in light of the recent leaked 3DMark bench suite leak.

I CLEARLY questioned Nvidia's statement that they are pushing the review NDA up to the 16th, one day before the 3080 is available for purchase, as likely due to the fact that they've gauged consumer sentiment in light of the 3DMark bench suite leak to be negative overall (that it contradicts their slick marketing showing a 50% performance disparity over the 2080 Ti), and that they are using the "pandemic" as an excuse.

Is everyone here illiterate? I'm the moron?


----------



## kx11

3080 is faster than 2080ti, it's a fact

Why are people starting a controversy?! Even 1% faster is great given the 3080 pricing compared to the 2080 Ti.


----------



## HyperMatrix

I’ve honestly never understood people who blindly side with a business/manufacturer instead of the consumer. There’s literally nothing in it for them unless they’re getting paid by those companies or have significant stock in them. As some have said, they’re businesses. They’re greedy. They only care about making more money.

So knowing that, why would you make it easy for them to screw you? You need to call them out and bring attention to their shady practices if you’re to have any hope of influencing the behavior of the behemoth that is Nvidia or whatever other company you’re talking about. If everyone had been on board with Turing like Tom’s Hardware and their “Just Buy It” article, they would have had more sales and there would be less of a push for them to make a change that would entice reviewers/consumers today.


----------



## domenic

*EDIT* - GN covered this here in a video. I guess we will just have to wait for Steve & Co. to do their testing and tell us if a passive backplate is going to cut it or not. 

----------------------------------------

Hmmm... Got this response over on the EVGA forum after I asked about the 3090 and whether the warranty would be voided by removing their heat pipe / cold plate on the FTW3 and replacing it with a 3rd-party water block with only a passive backplate. The comment came back not from a moderator, but it's an interesting possibility.

_"*EVGA will not be using the same PCB as Nvidia is using on the FE cards. EVGA will probably have all the memory chips on the same side of the board*." _

Sound plausible? The FE is a custom design with that wacky cut-in-half PCB to accommodate their new cooling system, so perhaps a "normal" design would give more space on the board for the chips?

Below is the FE 3080 PCB. Are the memory chips the vertical stack of four on either side of the GPU? If so, do you think EVGA could fit (or has the in-house engineering expertise to fit) another eight of these on a single side of a more traditional / larger custom PCB?


----------



## dentnu

HyperMatrix said:


> I’ve honestly never understood people who blindly side with a business/manufacturer instead of the consumer. There’s literally nothing in it for them unless they’re getting paid by those companies or have significant stock in them. As some have said, they’re businesses. They’re greedy. They only care about making more money.
> 
> So knowing that, why would you make it easy for them to screw you? You need to call them out and bring attention to their shady practices if you’re to have any hope of influencing the behavior of the behemoth that is Nvidia or whatever other company you’re talking about. If everyone had been on board with Turing like Tom’s Hardware and their “Just Buy It” article, they would have had more sales and there would be less of a push for them to make a change that would entice reviewers/consumers today.


Hey HyperMatrix, I agree with most of what you said; note this is not directed at you or anyone specifically. In my opinion there is no hope of influencing the behavior of a billion-dollar company by calling them out. They will run their marketing spin and act like they care but, in the end, they are going to do what they deem best for the future of the company. We are nothing to them except dollar bills. I know it's messed up, but this is how the world works. The sooner everyone realizes this, the sooner they can learn how not to let them screw you. The only way to truly get their attention is to not buy their products. If you want a company to change their ways, vote with your wallet. I don't want to sound like I have an issue with NVIDIA, because I don't. I see what they have become and it's an impressive company. I am just laying it out for everyone that might have an issue with how they operate or get mad at business decisions they make. Don't get mad because they delayed the review date by a few days, or because that might mean they lied about their cards' performance. Everyone should know all those slides are marketing fluff and are not 100% accurate. All I read from those NVIDIA slides is that it's going to be faster, and that's all I need to know. I understand not everyone thinks like me, but come on, have some common sense.


----------



## HyperMatrix

dentnu said:


> Hey HyperMatrix, I agree with most of what you said; note this is not directed at you or anyone specifically. In my opinion there is no hope of influencing the behavior of a billion-dollar company by calling them out. They will run their marketing spin and act like they care but, in the end, they are going to do what they deem best for the future of the company. We are nothing to them except dollar bills. I know it's messed up, but this is how the world works. The sooner everyone realizes this, the sooner they can learn how not to let them screw you. The only way to truly get their attention is to not buy their products. If you want a company to change their ways, vote with your wallet. I don't want to sound like I have an issue with NVIDIA, because I don't. I see what they have become and it's an impressive company. I am just laying it out for everyone that might have an issue with how they operate or get mad at business decisions they make. Don't get mad because they delayed the review date by a few days, or because that might mean they lied about their cards' performance. Everyone should know all those slides are marketing fluff and are not 100% accurate. All I read from those NVIDIA slides is that it's going to be faster, and that's all I need to know. I understand not everyone thinks like me, but come on, have some common sense.


Bringing attention to it can help other people vote with their wallet as well though.


----------



## Wihglah

domenic said:


> *EDIT* - GN covered this here in a video. I guess we will just have to wait for Steve & Co. to do their testing and tell us if a passive backplate is going to cut it or not.
> 
> ----------------------------------------
> 
> Hmmm... Got this response over on the EVGA forum after I asked about the 3090 and whether the warranty would be voided by removing their heat pipe / cold plate on the FTW3 and replacing it with a 3rd-party water block with only a passive backplate. The comment came back not from a moderator, but it's an interesting possibility.
> 
> _"*EVGA will not be using the same PCB as Nvidia is using on the FE cards. EVGA will probably have all the memory chips on the same side of the board*." _
> 
> Sound plausible? The FE is a custom design with that wacky cut in half PCB to accommodate their new cooling system so perhaps the extra space on the board with a "normal" design would give more space for the chips?
> 
> Below is the FE 3080 PCB. Are the memory chips the vertical stack of four on either side of the GPU? If so do you think EVGA could fit (or has the in-house engineering expertise to do so) another eight of these on a single side of a more traditional / larger custom PCB?
> 
> View attachment 2458748


It's my understanding that 2GB GDDR6X chips don't exist yet, and the distance they are positioned from the core is critical. It's probable that the FTW card won't be a reference design, and it definitely won't be an FE.


----------



## ribosome

domenic said:


> *EDIT* - GN covered this here in a video. I guess we will just have to wait for Steve & Co. to do their testing and tell us if a passive backplate is going to cut it or not.
> 
> ----------------------------------------
> 
> Hmmm... Got this response over on the EVGA forum after I asked about the 3090 and if the warranty would be voided by removing their heat pipe / cold plate on the FTW3 and replacing with a 3rd party water block with only a passive backplate. Comment came back not from a moderator but its an interesting possibility.
> 
> _"*EVGA will not be using the same PCB as Nvidia is using on the FE cards. EVGA will probably have all the memory chips on the same side of the board*." _
> 
> Sound plausible? The FE is a custom design with that wacky cut in half PCB to accommodate their new cooling system so perhaps the extra space on the board with a "normal" design would give more space for the chips?
> 
> Below is the FE 3080 PCB. Are the memory chips the vertical stack of four on either side of the GPU? If so do you think EVGA could fit (or has the in-house engineering expertise to do so) another eight of these on a single side of a more traditional / larger custom PCB?
> 
> View attachment 2458748


There are another two memory chips on that PCB, one above and one below the GPU. And I somehow doubt that EVGA found a way to fit 20 memory chips all on the same side.


----------



## dentnu

I believe they said yesterday in the EVGA live stream that it was not a reference PCB. If I remember correctly, Jacob said the XC3 was not exactly reference, it was a bit longer, and they still did not know whether it would be compatible with the EKWB reference water block.

Here is the video if anyone is interested. I don't have a timestamp, but he did say it. There were a lot of questions answered about the FTW3 and XC3.


----------



## treetops422

domenic said:


> Check out this completely inadequate answer from EKWB in response to how they intend to cool the RAM chips on the back of the 3090 with their blocks.


I don't think it matters which side of the RAM you place the thermal pads on. I want to say Buildzoid mentioned that while tearing down a 5700 XT. They would simply have to add "bumps" plus thermal pads; simple stuff. Or they could make a dual front-and-back cooler. Or ignore the issue entirely and let the chips cool passively, since the rest of the card is under water.


----------



## HyperMatrix

treetops422 said:


> I don't think it matters which side of the ram you place the thermal pads. I want to say Buildzoid mentioned that while tearing down a 5700 xt. They would simply have to add "bumps" + thermal pads. simple stuff. Or they could make a duo back and front cooler. Or ignore the issue entirely and let them passively cool since the rest of the card is under water.


Direct memory chip cooling is the best solution. That's why Aqua Computer makes their blocks with direct contact and doesn't use thermal pads, just a small layer of thermal paste, same as with the GPU die. Indirect cooling will still cool, but not to the same level. If these are like the 12GB of VRAM on the Titan X Pascal, then they're going to get incredibly toasty. Memory on those cards got hotter than the GPU die itself on air, especially if you tried to OC it.


----------



## Krzych04650

BigMack70 said:


> That talk has never existed outside of people who don't know how to read or listen and who imagine the letters "Ti" where they weren't written or spoken.
> 
> 3080 should be 70% faster than a 2080, give or take; maybe a bit better if the DF benchmark games are representative and not cherry-picked (doubtful). Which means 33% faster than 2080 Ti, give or take. Which means 3090 should be around 60% faster than 2080 Ti, give or take.





pewpewlazer said:


> We've seen benchmarks showing the 3080 being 60-80% faster than the 2080. So realistically, we can expect the 3080 to be somewhere around 30-35% faster than the 2080 Ti.
> 
> No, Nvidia's vague marketing claims of "up to twice as fast as a 2080" or some cherry picked benchmarks from DF do not mean the 3080 should be 50% faster than a 2080 Ti across the board.
> 
> The level of completely unrealistic expectations is just baffling. This is similar to Pascal level of performance improvement, which everyone looked back on fondly when Turing came out. Now it's disappointing?


People are just actively looking for reasons to complain; it is standard at this point. All the data is out there, so you really need to want to have a problem to have any issues here.

We have been shown a lot of numbers already, especially with the recent 3DMark leak that looks legit, so it is pretty clear that compared to the 2080 Ti, the 3080 is 25% faster at the absolute lowest end and above 40% at the high end. So a Pascal-like generational leap is the worst-case scenario, and it will be much more than that in many cases. The "up to two times faster than 2080" cases have also been shown by DF in Doom Eternal and Quake II RTX; these were short bursts and not sustained performance, but still, that is enough for "up to".

The reviews are only needed to show overclocking and power limits at this point, and maybe the exact performance leap in the individual games you look forward to; overall performance is already known.


----------



## ref

Hmm, it seems full custom PCBs may not be available at launch, at least for the 3080 (rumor): Overclocking NVIDIA GeForce RTX 3080 memory to 20 Gbps is easy - VideoCardz.com

Assuming the situation is the same for the 3090, it seems the FE will have a higher power limit than the reference AIB cards. In the case of the 3080, the FE is 370W and the AIBs are 320W.

This lack of information so close to launch is incredibly frustrating. I'm really hoping we will know more by the week of the 24th.


----------



## zhrooms

ref said:


> Assuming the situation is the same for the 3090, it may seem like the FE will have the highest TDP over the reference AIBs. In the case of the 3080, the FE is 370w and the AIB is 320w.
> 
> This lack of information so close to launch is incredibly frustrating. I'm really hoping we will know more by the week of the 24th.


No. The NVIDIA custom PCB with 1x 12-pin has a 320W TDP and a 370W max power limit; the NVIDIA reference PCB with 2x 8-pin has a 320W TDP and a 350W max power limit. As for the 3090, we know the TDP is 350W, and the reference 2x 8-pin boards won't exceed spec, which is 375W max, so you'll only be able to raise the power limit by 20-25W on the 3090: basically zero OC headroom. ASUS has also published Strix OC numbers. They did not say for which card, but possibly both, as the PCB they use is the same: a power limit of "up to 400W". That sounds incredibly conservative for a 3x 8-pin PCB capable of 525W. Not surprising, though, since the Strix OC 2080 Ti ran just 325W, which made it one of the worst-performing 2080 Tis; this time they sadly seem to be going the same route.

So there isn't really any information lacking; we know basically everything already. The main unknowns are RT/Tensor performance, plus small things such as the custom cards' power limits.
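The spec arithmetic behind those 375W and 525W figures is just the slot plus the auxiliary connectors: per the PCIe spec, the slot delivers up to 75W, a 6-pin up to 75W, and an 8-pin up to 150W. A quick illustrative sketch (the helper is mine, not anything official):

```python
# Per the PCIe spec: slot = 75 W, 6-pin = 75 W, 8-pin = 150 W.
CONNECTOR_WATTS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def max_spec_power(aux_connectors):
    """Maximum in-spec board power: the slot plus each auxiliary connector."""
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c] for c in aux_connectors)

print(max_spec_power(["8-pin", "8-pin"]))  # reference 2x 8-pin board -> 375
print(max_spec_power(["8-pin"] * 3))       # Strix-style 3x 8-pin board -> 525
```

Which is exactly why a 350W-TDP 3090 on the reference board has only ~25W of in-spec headroom.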


----------



## Krzych04650

zhrooms said:


> No, NVIDIA Custom PCB 1x12-Pin is 320W TDP and 370W Max Power Limit, NVIDIA Reference PCB 2x8-Pin is 320W TDP and 350W Max Power Limit. As for the 3090, we know TDP is 350W, and for the Reference PCB 2x8-Pin we know they won't exceed specs, which is 375W max, so you'll be able to increase power limit by 20-25W on 3090, basically zero OC headroom. ASUS has also gone out with Strix OC numbers, they did not mention which card but possibly both, as the PCB they use is the same, it's going to feature a power limit of "up to 400W", which sounds incredibly conservative since it's 3x8-Pin PCB capable of 525W. But not surprising since Strix OC 2080 Ti ran just 325W, made it one of the worst performing cards of the Ti's, this time they sadly seem to go the same route.
> 
> So there isn't really any lacking information, we know basically everything already, main things we don't know are RT/Tensor performance, then small things such as custom cards power limits.


If this is true, then the power limit is really going to be the deciding factor for OC, even more so than with Turing. If you're looking for the equivalent of a 380W 2080 Ti BIOS, then you really need those 525 watts; it would be basically the same, a 50% increase over stock. 400W for a 3090 is like 290W for a 2080 Ti; that's undervolting territory, not overclocking. The selection of cards is really going to be narrow. Not much hope for good ones at launch.
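That "400W for a 3090 is like 290W for a 2080 Ti" comparison is just scaling the limit by the ratio of stock TDPs (350W for the 3090, 250W for the 2080 Ti FE). A sketch of the arithmetic, with the function name my own:

```python
def equivalent_limit(limit_w, stock_w, other_stock_w):
    """Map a power limit on one card to the same relative headroom on another,
    scaling by the ratio of the two stock TDPs."""
    return other_stock_w * limit_w / stock_w

# A 400 W cap on a 350 W-TDP 3090 is the same ~14% headroom as
# roughly 286 W on a 250 W-TDP 2080 Ti:
print(round(equivalent_limit(400, 350, 250)))  # -> 286
```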


----------



## Mooncheese

Well, it looks like I'm waiting until variants arrive with a greater power limit. EVGA is still mum about their cards; they don't even have product pages up. I'm still concerned about how the top bank of VRAM will be cooled under a water block.

The 2080 Ti is capable of a 30% overclock. If they don't raise the power limit, will the 3090 only be capable of a 7% overclock at 375W?

Am I also reading this correctly that the FE 3080 will have 370W on tap and "reference" cards only 350W? That doesn't make any sense to me.

Also, here's a bench of an overclocked 3080 doing 17,300 in Time Spy. Golf clap. My 2080 Ti can do 16,560 or so.









Overclocking NVIDIA GeForce RTX 3080 memory to 20 Gbps is easy - VideoCardz.com







I suppose I will be waiting for a card with 500w. Hopefully Gigabyte have powerful WB cards again this time around with their Aorus line-up. 

So far what I've seen isn't hope inspiring.


----------



## J7SC

domenic said:


> *EDIT* - GN covered this here in a video. I guess we will just have to wait for Steve & Co. to do their testing and tell us if a passive backplate is going to cut it or not.
> 
> ----------------------------------------
> 
> Hmmm... Got this response over on the EVGA forum after I asked about the 3090 and if the warranty would be voided by removing their heat pipe / cold plate on the FTW3 and replacing with a 3rd party water block with only a passive backplate. Comment came back not from a moderator but its an interesting possibility.
> 
> _"*EVGA will not be using the same PCB as Nvidia is using on the FE cards. EVGA will probably have all the memory chips on the same side of the board*." _
> 
> Sound plausible? The FE is a custom design with that wacky cut in half PCB to accommodate their new cooling system so perhaps the extra space on the board with a "normal" design would give more space for the chips?
> 
> Below is the FE 3080 PCB. Are the memory chips the vertical stack of four on either side of the GPU? If so do you think EVGA could fit (or has the in-house engineering expertise to do so) another eight of these on a single side of a more traditional / larger custom PCB?
> 
> View attachment 2458748


...every time I see that 3080 FE PCB image with those two extra VRAM slots, I'm thinking 3080 Ti... this thread being about the '3090', though, it will have a 'full house' front and back.

As for discussions about 'how much faster' than a 2080 Ti: those seem irrelevant, since the 2080 Ti is apparently out of production anyway. I do think companies like NVIDIA, AMD, and Intel regularly get themselves into a bit of a trap when they announce new things before NDA expiry, with graphs and tables from their 'marketing dep't', often without appropriate scales, test parameters, and footnotes. But it's not long now before we get a bunch of real-world tests from trusted 3rd parties...


----------



## Mooncheese

domenic said:


> *EDIT* - GN covered this here in a video. I guess we will just have to wait for Steve & Co. to do their testing and tell us if a passive backplate is going to cut it or not.
> 
> ----------------------------------------
> 
> Hmmm... Got this response over on the EVGA forum after I asked about the 3090 and if the warranty would be voided by removing their heat pipe / cold plate on the FTW3 and replacing with a 3rd party water block with only a passive backplate. Comment came back not from a moderator but its an interesting possibility.
> 
> _"*EVGA will not be using the same PCB as Nvidia is using on the FE cards. EVGA will probably have all the memory chips on the same side of the board*." _
> 
> Sound plausible? The FE is a custom design with that wacky cut in half PCB to accommodate their new cooling system so perhaps the extra space on the board with a "normal" design would give more space for the chips?
> 
> Below is the FE 3080 PCB. Are the memory chips the vertical stack of four on either side of the GPU? If so do you think EVGA could fit (or has the in-house engineering expertise to do so) another eight of these on a single side of a more traditional / larger custom PCB?
> 
> View attachment 2458748


Not plausible. You have people who don't know what they're talking about stating that the 3090 will have twelve 2GB modules on one side only. That module design, shared with the Titan V, is rumored to enter production next year, and that's just a rumor.

If 2GB modules existed, NVIDIA would have used them on the FE. There aren't any. Additionally, EVGA already showed how the FTW3 cooler will cool the top bank of VRAM, via a cold plate connected to the heatsink by a heat pipe, in one of the videos I recently shared here.

No one is using 2GB modules; the people making these statements don't know what they are talking about.


----------



## Mooncheese

I think I'm going to wait until November to see what NV has up their sleeve in response to Big Navi. By then we will all have a better idea of which 3090 variant is best for water cooling (and cooling the top bank of VRAM) and for overclocking in terms of power delivery.

Hell, if I wait long enough, who knows, maybe NV will do a repeat of Maxwell, where they released the 80 Ti card only a few months after the Titan (Titan X: March 2015, 980 Ti: June 2015).

I posted this over at EVGA forum: 




AHowes said:


> I'm sure one would have no problem selling said card for retail or more if they want to jump to another model later on.


_Even after NV releases 20GB 3080 and 16GB 3070 variants within a quarter in response to Big Navi (which will have a 16GB card sitting between the 3070 and 3080 in rasterization at $550, and a 24GB card sitting between the 3080 and 3090 at $1K)? Remember the 1060? They sold 3GB and 6GB variants simultaneously, and the 6GB variants had considerably higher resale value.






See the gaping chasm in price and performance between the 3080 and the 3090? Yeah, NGreedia has something up their sleeve for Big Navi; whether it will be a 20GB 3080 or a 3080S is the question.

Everyone buying first-wave Ampere expecting it to stay at the top of the stack and hold value goes against what Nvidia has done with their product releases historically.

First-wave cards NEVER hold their value.

See: 

RTX 2070

GTX 1080

GTX 1060

GTX 980_


----------



## Jordel

@zhrooms how come the change to the table in regards to power limits on the STRIX? Do we not have any official word on a 400W power limit?


----------



## J7SC

FYI, Igor's Lab has an article on early 3080 (w/ implications for 3090) binning > here ...as always, pinch of salt at the ready, but Igor has had some pretty good advanced info on Ampere so far


----------



## dentnu

Mooncheese said:


> I think I'm going to wait until November to see what NV have up their sleeve in response to Big Navi, by then we will all have a better idea as to what is the best 3090 variant in regards to water-cooling (and cooling the top bank of VRAM) and overclocking in regards to power delivery.
> 
> Hell if I wait long enough, who knows, maybe NV will do a repeat of the Maxwell where they released the 80 Ti card only 4 months after the Titan (Titan: Nov 2013, 980 Ti: Feb, 2014)
> 
> I posted this over at EVGA forum:
> 
> 
> 
> 
> _Even after NV release 20GB 3080 and 16GB 3070 variants within a quarter in response to Big Navi (who will have a 16GB card that will sit in between the 3070 and 3080 in rasterization @ $550 and 24GB card that will sit in between the 3080 and 3090 @ $1k? Remember the 1060? They sold 3GB and 6GB variants simultaneously and the 6GB variants had considerably higher resale value.
> 
> 
> 
> 
> 
> 
> See the gaping chasm in price and performance between the 3080 and the 3090? Yeah, NGreedia have something up their sleeve for Big Navi, whether that will be a 20GB 3080 or 3080S is the question.
> 
> Everyone buying first wave ampere expecting them to be at the top of the stack and to hold value goes against what Nvdia has done with their product releases historically.
> 
> First wave refresh NEVER hold their value.
> 
> See:
> 
> RTX 2070
> 
> GTX 1080
> 
> GTX 1060
> 
> GTX 980_


I think the 3090 is the top of the stack. Jensen said during the keynote that the 3090 is the Titan of this generation. Will there be a 3080 Ti at around $1K? Sure, I can see that, but it's not going to be top of the stack. I would guess everyone in this thread is getting a 3090 because it's the best you can get. Big Navi is not going to get anywhere close to the performance level of the 3090. NVIDIA is not dumb; they definitely have a 3080 Ti ready to go if needed. The question is whether they will need it. My guess is that if Big Navi does not beat the 3080, the Ti version will be shelved for 6 months or more, and will then come out as the 3080 Super along with 3070 and 3060 Supers.

Anyone holding out for Big Navi is going to have to wait until next year to get their hands on a 3080 or 3090. The world is going through a pandemic; no one is operating at 100% and probably won't be for a very long time. If you are not getting a card in this first wave, it's going to be hard to find one in the next 3 months. While there might be small batches released over the next few months, don't expect a shipment anywhere close to what is coming on the 17th or 24th.

I sold my 2080 Ti on 8/27 and currently have no card, so I can't wait to start gaming again and cannot afford to wait. It sucks that the power limits will be low, but XOC BIOSes should start coming out for the MSI, EVGA, and ASUS cards soon after, like they always do.

Here is an article I read a while back. No idea if it's true, but I am not taking any chances.

 Video Cards Exclusive: GeForce RTX 30 series cards tight supply until end of year


----------



## Krzych04650

Whether I get my 3090s first wave or not really depends on which models are available; I am certainly not buying anything with a low power limit. I am also aware of the 3080 Ti/S possibility, but it won't be out until late Q1 and I just cannot wait that long, especially since I take 2-3 months off work during winter, which is when I need these cards the most. There won't be anything much better than the 3090, since the full die is 10752 CUDA cores, but a $999 3080 Ti/S that is only 5% slower is a possibility. A Pascal-like 3080 Ti at $699 with the 3080 dropping to $500 is not happening. Waiting to get a better deal later in the generation, only for the card to get replaced sooner, isn't that great a deal; not really worth the wait. I'd rather buy right away and use it for the full 2 years; it gets amortized over such a long time anyway, and there is no waiting for who knows how many months. Time has a cost too, at least to me.


----------



## Wihglah

IMO the 3080 is a good-value card, but the huge gap to the 3090 means there must be plans for something in between.
Having something in between means everyone who gets a 3080 at launch will have buyer's remorse as soon as whatever card comes out at $1000. Also, a 12GB 3090 makes no sense to me. It would have to be a 3080 with 20GB, or maybe one with an extra SM unlocked and 22GB.

Also, for everyone waiting for a 3x 8-pin 500W card: bear in mind the difference between a top overclocked FE and a top overclocked FTW will be about 100MHz. That's a 5% difference. Anywhere it matters, it won't be enough.
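That 5% is simply the clock delta over a typical sustained boost clock; making the arithmetic explicit (the ~2000 MHz baseline is an assumption, not a measured figure):

```python
def pct_gain(base_mhz, delta_mhz):
    """Percent clock increase from a delta over a base boost clock."""
    return 100 * delta_mhz / base_mhz

# A 100 MHz edge over a ~2000 MHz sustained boost clock:
print(pct_gain(2000, 100))  # -> 5.0
```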


----------



## dentnu

Wihglah said:


> IMO the 3080 is a good value card, but the huge gap to the 3090 means there must be plans for something in between.
> Having something in between means everyone who gets a 3080 at launch will have buyers remorse as soon as whatever card comes out at $1000. Also - a 12GB 3090 makes no sense to me. It would have to be a 3080 with 20Gb or maybe with an extra SM unlocked and 22Gb.


I agree, there are going to be some very angry people when the 3080 Ti is announced at $1K.


----------



## HyperMatrix

Jordel said:


> @zhrooms how come the change to the table in regards to power limits on the STRIX? Do we not have any official word on a 400W power limit?



Asus says up to 400W of total board power on their own website. Here's the quote:

“We take full advantage of our three-connector design by giving ROG Strix GeForce RTX 30-series cards up to 400W of total board power to play with.”









Check out all of our buffed-up GeForce RTX 3070, RTX 3080, and RTX 3090 graphics cards | ROG - Republic of Gamers Global







I haven’t seen any new information come out that contradicts that.


----------



## sakete

dentnu said:


> I agree there are going to be some very angry people when the 3080 TI is announced at 1k.


Perhaps, but I know I will not be angry. I'm going from a 980 Ti to a 3080, so a huge jump in performance, and I will be paying about the same as I paid 5 years ago. A potential 3080 Ti at $1K would perform better, but you also pay more for it, so I'd hope it would perform better.

I also game at 1440p and am not planning on going 4K anytime soon. That 3080 will have me set for 2-3 years, and I'll see what's available at that point.

The 3080 is a lot of bang for your buck.


----------



## domenic



Wihglah said:


> IMO the 3080 is a good value card, but the huge gap to the 3090 means there must be plans for something in between.
> Having something in between means everyone who gets a 3080 at launch will have buyers remorse as soon as whatever card comes out at $1000. Also - a 12GB 3090 makes no sense to me. It would have to be a 3080 with 20Gb or maybe with on extra SMU unlocked and 22Gb.
> 
> Also - for everyone waiting for a 3x8pin 500W card. Bare in mind the difference between a top overclocked FE and a top overclocked FTW, will be 100MHz. That's a 5% difference. Anywhere that matters, it won't be enough.


Has anyone made something like an overclocking calculator spreadsheet that guesstimates the difference in performance (in expected FPS) as you go up in clock speed, as best we know at the moment? I for one have been eyeing the FTW3, primarily because it has the extra power connector and might be binned higher, and secondarily because of the sensors. But at what cost versus the performance increase? If the FTW3 costs $300 to $400 more, is it worth it for that small a performance increase versus just going with a "regular" reference 3090? Heck, the stock 3090 is more than 2x the cost of the 3080 for only ~20% more performance, if I am reading the articles correctly.

Even more complicated and confusing is the schedule: the gap between when real reviews and teardowns are allowed to come out and when we can order (even tighter for AIB cards?). Not much time to make an informed decision. Based on the 3080 schedule, it's looking like 24 hours or less, and then the race to hit the refresh button trying to grab a card. If you hesitate, it could be a long winter without any gaming...
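The core of the calculator I'm imagining would only be a couple of lines anyway. A sketch with entirely hypothetical numbers and a made-up scaling factor, since FPS rarely scales 1:1 with core clock:

```python
def est_fps(base_fps, base_mhz, oc_mhz, scaling=0.75):
    """Estimate FPS at an overclocked core clock, assuming only a fraction
    of the clock gain (`scaling`) shows up as frame rate."""
    return base_fps * (1 + scaling * (oc_mhz - base_mhz) / base_mhz)

def dollars_per_fps(price_usd, fps):
    """Simple value metric for comparing cards."""
    return price_usd / fps

# Hypothetical: a 100 FPS baseline at 1900 MHz pushed to 2100 MHz:
oc = est_fps(100, 1900, 2100)
print(round(oc, 1))  # -> 107.9
print(dollars_per_fps(1499, oc))  # value at a 3090-like price
```

Plugging in real review numbers once they exist would show quickly whether a $300-400 premium buys more than a couple of FPS.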


----------



## Mooncheese

dentnu said:


> I think the 3090 is the top of the stack. Jensen said during the keynote that the 3090 is the Titan this generation. Will there be a 3080 Ti at around 1K sure I can see that but it’s not going to be top of the stack. I would guess everyone in this thread is getting a 3090 cause it’s the best you can get. BIG Navi is not going to get anywhere close to performance level of the 3090. NVIDIA is not dumb they definitely have a 3080 TI ready to go if needed. The question is will they need it. My guess is if BIG Navi does not beat the 3080 the TI version will be shelved for 6 months or more. It will then come out as the 3080 super along with the 3070, 3060 supers. Anyone holding out till BIG Navi is going to have to wait till next year to get their hands on a 3080 or 3090. The world is going through a pandemic no one is operating at 100% and probably won't be for a very long time. If you are not getting a card in this first wave it’s going to be hard finding one in the next 3 months. While there might be small batches released over the next few months don't expect a shipment anywhere close to what is coming on the 17th or 24th. I sold my 2080 TI on 8/27 and I currently have no card right now so I can't wait to start gaming again and cannot afford to wait. It sucks that the power limits will be low but XOC bios should start coming out for the MSI, EVGA, and Asus cards soon after like they always do.
> 
> Here is an article I read a while back no idea if its true but I am not taking any chances.
> 
> Video Cards Exclusive: GeForce RTX 30 series cards tight supply until end of year


You've bought into the scarcity narrative hook, line, and sinker. Industrial production in China, where all of this is made, is actually back online at full speed and has been for a few months now.

The actual situation is not one of supply chain limitation but of artificial scarcity on the part of NV, and your sentiment reflects the fact that it's effective.

See, this is how it works: NV releases Ampere in September, two months ahead of Big Navi, and tells the consuming public that there will be no pre-orders and that we can expect limited supply due to the "pandemic" (which is nonsense; again, China and Taiwan are back online).

The end consumer processes this as "WELL, I BETTER BUY NOW. I CAN'T WAIT FOR WHAT NV DROPS IN RESPONSE TO AMD, I.E. A 20GB 3080S OR EVEN A $1K 3080 TI. I MUST BUY, AND I MUST BUY RIGHT NOW, OR I WON'T GET THIS CARD UNTIL SOMETIME NEXT YEAR!"

From what I gather AMD's 6800XT is shaping up to sit in between the 3070 and 3080 in rasterization with 16GB of video memory for $550 and the 6900XT will sit in between the 3080 and 3090 with 24GB of video memory for $1000.

In November.

You don't think NV will drop the card that is missing in the gaping chasm of the lineup between the $700 3080 and the $1500 3090 in response?

Come on, dude, the Asian supply chain is no longer disrupted; this is masterful abuse of the "pandemic" narrative on the part of Huang and co.

Please think critically.

Also, please don't take this as a personal attack or insult. I'm asking for critical reflection on this launch. Remember, this is the same NV that brought us the Turing marketing debacle; nothing has changed.


----------



## dentnu

Mooncheese said:


> You've bought into the scarcity narrative hook, line and sinker. Industrial production in China, where all of this is made, is actually back on line at full speed and has been for a few months now.
> 
> The actual situation is not one of supply chain limitation but artificial scarcity on the part of NV, and your sentiment reflects the fact that it's effective.
> 
> See this is how it works, NV releases Ampere in September, two months ahead of Big Navi and tells the consuming public that there will be no pre-orders and that we can expect a limited supply due to the "pandemic" (which is nonsense, again, China and Taiwan are back online).
> 
> End consumer processes this as "WELL I BETTER BUY NOW, CAN'T WAIT FOR WHAT NV DROPS IN RESPONSE TO AMD, I.E. 20GB 3080S OR EVEN A $1K 3080 TI, I MUST BUY AND I MUST BUY RIGHT NOW OR I WONT GET THIS CARD UNTIL SOMETIME NEXT YEAR!"
> 
> From what I gather AMD's 6800XT is shaping up to sit in between the 3070 and 3080 in rasterization with 16GB of video memory for $550 and the 6900XT will sit in between the 3080 and 3090 with 24GB of video memory for $1000.
> 
> In November.
> 
> You don't think NV will drop the card that is missing in the gaping chasm of the lineup between the $700 3080 and the $1500 3090 in response?
> 
> Come on dude, Asian supply chain is no longer disrupted, this is masterful abusing of the "pandemic"  narrative on the part of Juang and co.
> 
> Please think critically.
> 
> Also, please don't confuse this with a personal attack or insult. I'm asking for critical reflection with this launch. Remember, this is the same NV that brought us the Turing Marketing debacle, nothing has changed.


Your narrative might be right or wrong, who knows, and honestly I do not care. I am not buying a 3080, and I'm not interested in a 3080 Ti, as it will never be faster than a 3090. All I know is that I sold my 2080 Ti and need a new GPU. I knew well before the 3090 was announced that I would get the best card NVIDIA releases in the 3000 series. I do not buy AMD cards; never have and never will. AMD makes great hardware, but their software/driver teams are pretty bad. I do not care what happens in a month when AMD launches their cards; I will already have the best card you can get for the next two years. If you look at the power supply industry, you will see they are in short supply, have been for a few months, and are expected to be for a few more. Just because China and Taiwan are back online does not mean anything. It's not just about factories making a product; there are other things involved.

If I remember correctly, a Gamers Nexus video a few months ago stated that the biggest issue power supply manufacturers were having was finding space on a ship for their containers, as PPE was the priority and took most of the space, and there was nothing they could do. I am not saying this is still happening or that it will happen to NVIDIA; I'm just explaining that there is more to it than factories being up and running. If you read my full post, which you quoted, I said that NVIDIA has a 3080 Ti at around $1K ready to go; I never said it was not true. I also couldn't care less whether Jensen and co. are abusing the pandemic narrative. NVIDIA is the top dog in the graphics space, and AMD has not been a player for the last 10 years, if not more. NVIDIA is going to do whatever they want because they have no competition, and it will probably stay that way for a long time. I am not rooting for NVIDIA; I want there to be competition, as that is always good for us consumers. I am just saying it like I see it.


----------



## Mooncheese

dentnu said:


> Your narrative might be right or wrong who knows and honestly, I do not care I am not buying a 3080 and not interested in a 3080 TI as it will never be faster than a 3090. All I know is that I sold my 2080 TI and need a new GPU. I knew well before the 3090 was announced I was going to get the best card NVIDIA would release in the 3000 series. I do not buy AMD cards never have and never will. AMD makes great hardware but their software/driver teams are pretty bad. I do not care about what will happen in a month when AMD launches their cards. I will already have the best card you can get for the next two years. If you look at the power supply industry you will see they are in short supply. They have been this way for a few months and it’s supposed to be that way for a few more months. Just cause China and Taiwan are back online does not mean anything. It's not just about factory's making a product, there are other things involved.
> 
> If I remember correctly, a Gamers Nexus video a few months ago stated that the biggest issue power supply manufacturers were having was finding space on a ship for their containers, as PPE was a priority, was taking most of the space, and there was nothing they could do. I am not saying this is still happening or that it will happen to NVIDIA, just explaining that there is more to it than factories being up and running. If you read my full post which you quoted, I said that NVIDIA has a 3080 Ti at around $1k ready to go; I never said it was not true. I also couldn't care less if Jensen and co. are abusing the pandemic narrative. NVIDIA is the top dog in the graphics space and AMD has not been a player for the last 10 years, if not more. NVIDIA are going to do whatever they want because they have no competition, and it will probably be that way for a long time. I am not rooting for NVIDIA; I want there to be competition, as that is always good for us consumers. I am just saying it like I see it.


I agree. I hate to be a fanboy, but Nvidia has amazing software and drivers: G-Sync, DLSS, RT, and now RTX IO. I also have a 3D Vision panel, and I will be sad to see that go if we can't patch the newer drivers (members of the 3D Vision community have been keeping 3D Vision alive from 425.31 onward, including Paul Dusler with his program 3D Fix Manager). GeForce Experience is even awesome; I have been using it for recording gameplay video and I love the Freestyle filters, particularly Sharpen (I was a big fan of LumaSharpen via ReShade). Also, RTX Voice is very cool.

Anyhow, MLID has stated recently that the FE variants this time around are binned and the AIBs will be getting whatever is left, which I find hard to square with Igor's Lab stating that 30% of Ampere dies are standard, 60% are good, and 10% are excellent. Does this mean the FE is among the 10% excellent bins and will clock higher?

Anyone care to comment on this statement?






If so, that means I have to choose between a potentially locked-down BIOS at 375W or a 3x 8-pin AIB card (FTW3 or Strix) with a less stellar chip.


----------



## pewpewlazer

Mooncheese said:


> I agree. I hate to be a fanboy, but Nvidia has amazing software and drivers: G-Sync, DLSS, RT, and now RTX IO. I also have a 3D Vision panel, and I will be sad to see that go if we can't patch the newer drivers (members of the 3D Vision community have been keeping 3D Vision alive from 425.31 onward, including Paul Dusler with his program 3D Fix Manager). GeForce Experience is even awesome; I have been using it for recording gameplay video and I love the Freestyle filters, particularly Sharpen (I was a big fan of LumaSharpen via ReShade). Also, RTX Voice is very cool.
> 
> Anyhow, MLID has stated recently that the FE variants this time around are binned and the AIBs will be getting whatever is left, which I find hard to square with Igor's Lab stating that 30% of Ampere dies are standard, 60% are good, and 10% are excellent. Does this mean the FE is among the 10% excellent bins and will clock higher?
> 
> Anyone care to comment on this statement?
> 
> 
> 
> 
> 
> 
> If so, that means I have to choose between a potentially locked-down BIOS at 375W or a 3x 8-pin AIB card (FTW3 or Strix) with a less stellar chip.


I recommend you find a more credible news source than a proven clown and AMD Fanboi on YouTube.


----------



## Krzych04650

Mooncheese said:


> I agree. I hate to be a fanboy, but Nvidia has amazing software and drivers: G-Sync, DLSS, RT, and now RTX IO. I also have a 3D Vision panel, and I will be sad to see that go if we can't patch the newer drivers (members of the 3D Vision community have been keeping 3D Vision alive from 425.31 onward, including Paul Dusler with his program 3D Fix Manager). GeForce Experience is even awesome; I have been using it for recording gameplay video and I love the Freestyle filters, particularly Sharpen (I was a big fan of LumaSharpen via ReShade). Also, RTX Voice is very cool.


NVIDIA's feature set goes far beyond just modern RTX-oriented software. They basically have a decade's worth of rich features, both in drivers and in-game integration. Going AMD is a massive compromise even before you consider performance, a compromise that is not acceptable at the high end.



pewpewlazer said:


> I recommend you find a more credible news source than a proven clown and AMD Fanboi on YouTube.


Yeah, I would recommend that too; it would be best not to go below a certain level here.


----------



## t1337dude

I'm excited to see how the FE stacks up against the AIB cards, and whether I can slap my Morpheus II on either of them. I wish Nvidia hadn't delayed the reviews!


----------



## Thoth420

Mooncheese said:


> You've bought into the scarcity narrative hook, line and sinker. Industrial production in China, where all of this is made, is actually back online at full speed and has been for a few months now. The actual situation is not one of supply chain limitation but artificial scarcity on the part of NV, and your sentiment reflects the fact that it's effective. See, this is how it works: NV releases Ampere in September, two months ahead of Big Navi, and tells the consuming public that there will be no pre-orders and that we can expect a limited supply due to the "pandemic" (which is nonsense; again, China and Taiwan are back online). The end consumer processes this as "WELL I BETTER BUY NOW, CAN'T WAIT FOR WHAT NV DROPS IN RESPONSE TO AMD, I.E. 20GB 3080S OR EVEN A $1K 3080 TI, I MUST BUY AND I MUST BUY RIGHT NOW OR I WON'T GET THIS CARD UNTIL SOMETIME NEXT YEAR!" From what I gather, AMD's 6800 XT is shaping up to sit between the 3070 and 3080 in rasterization with 16GB of video memory for $550, and the 6900 XT will sit between the 3080 and 3090 with 24GB of video memory for $1000. In November. You don't think NV will drop the card that is missing in the gaping chasm between the $700 3080 and the $1500 3090 in response? Come on, dude, the Asian supply chain is no longer disrupted; this is masterful abuse of the "pandemic" narrative on the part of Jensen and co. Please think critically. Also, please don't confuse this with a personal attack or insult; I'm asking for critical reflection with this launch. Remember, this is the same NV that brought us the Turing marketing debacle. Nothing has changed.



I agree with everything you said but I want the best card in the heap(even if the performance gap is slight) so I am just going for the 3090 day 1. That said for most people it would be smarter to wait as you said.


----------



## CallsignVega

Wihglah said:


> IMO the 3080 is a good value card, but the huge gap to the 3090 means there must be plans for something in between.
> Having something in between means everyone who gets a 3080 at launch will have buyer's remorse as soon as whatever card comes out at $1000. Also, a 12GB 3090 makes no sense to me. It would have to be a 3080 with 20GB, or maybe with one extra SM unlocked and 22GB.
> 
> Also, for everyone waiting for a 3x 8-pin 500W card: bear in mind the difference between a top overclocked FE and a top overclocked FTW will be 100MHz. That's a 5% difference. Anywhere that matters, it won't be enough.


Huge gap? The projected difference between 3090 and 3080 performance is only 20%. That is smaller than the gap between the 2080 Ti and 2080.

With my shunted RTX Titan, I've got the OEM board running 2160 MHz at ~450 watts, which is higher than most 2080 Ti water overclocks. IMO, if you shunt an FE/reference board, the difference between it and something like a FTW3/Kingpin could virtually disappear, largely due to NVIDIA using quality components on the FE. Just use a really good power supply to feed the 2x 8-pin and you won't have a problem with power.
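For anyone unfamiliar with what a shunt mod does: stacking a second resistor in parallel with the board's current-sense shunt lowers the resistance the controller measures across, so the card under-reports its own power draw and stops throttling at the BIOS limit. A minimal sketch of the arithmetic; the 5 mOhm values are illustrative, not taken from any specific board:

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def reported_power(actual_watts: float, r_stock: float, r_mod: float) -> float:
    """Current sensing reads V = I * R across the shunt, so the controller
    sees actual power scaled by r_mod / r_stock."""
    return actual_watts * (r_mod / r_stock)

# Illustrative values: a 5 mOhm stock shunt with an identical 5 mOhm
# resistor stacked on top halves the effective resistance.
r_stock = 0.005
r_mod = parallel(r_stock, 0.005)   # ~0.0025 ohm

# A card actually drawing 450 W now reports roughly half that, so a
# 350 W power limit no longer throttles it.
print(reported_power(450, r_stock, r_mod))
```

The usual caveat applies: the VRM and connectors then carry more current than the firmware accounts for, which is why people pair this with water cooling and a strong power supply.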


----------



## BigMack70

CallsignVega said:


> Huge gap? The projected difference between 3090 and 3080 performance is only 20%. That is smaller than the gap between the 2080 Ti and 2080.
> 
> With my shunted RTX Titan, I've got the OEM board running 2160 MHz at ~450 watts, which is higher than most 2080 Ti water overclocks. IMO, if you shunt an FE/reference board, the difference between it and something like a FTW3/Kingpin could virtually disappear, largely due to NVIDIA using quality components on the FE. Just use a really good power supply to feed the 2x 8-pin and you won't have a problem with power.


The 2080 Ti is 25% faster than the 2080, and they put a 2080 Super in between them. I think it's a given that they will put a card somewhere between the $700 and $1500 marks that is faster than the 3080 with more VRAM, but slower than the 3090 and probably with less VRAM.
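The ~20% projection quoted above can be sanity-checked against on-paper FP32 throughput, using core counts and boost clocks from the public spec sheets; real-world gaps at 4K tend to track this only loosely, usually landing a bit below the theoretical ratio:

```python
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    """Peak FP32 throughput: cores * clock * 2 FLOPs (an FMA counts as two)."""
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

rtx_3090 = fp32_tflops(10496, 1695)  # ~35.6 TFLOPS
rtx_3080 = fp32_tflops(8704, 1710)   # ~29.8 TFLOPS

# Theoretical gap of roughly 20%, matching the projection above.
print(f"3090 over 3080: {rtx_3090 / rtx_3080 - 1:.1%}")
```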


----------



## CallsignVega

Yeah, but I doubt many people went "I gotta sell my 2080 to get a 2080 Super," since there was such a small difference.

But it is true, I think giving the 3080 only 10GB of VRAM was a marketing move, leaving room for a card between the 3080 and 3090. Hopefully they got rid of that stupid Super name.


----------



## Avacado

Krzych04650 said:


> Looks great. I am not a fan of 3-slot design though, wider and longer I understand, but why make it so thick. 2.4 slot would be enough.


I think the performance increases are great, but honestly the matte grey isn't appealing to me. It just looks like a grey brick.


----------



## Wihglah

CallsignVega said:


> Huge gap? The projected difference between 3090 and 3080 performance is only 20%. That is smaller than the gap between the 2080 Ti and 2080.
> 
> With my shunted RTX Titan, I've got the OEM board running 2160 MHz at ~450 watts, which is higher than most 2080 Ti water overclocks. IMO, if you shunt an FE/reference board, the difference between it and something like a FTW3/Kingpin could virtually disappear, largely due to NVIDIA using quality components on the FE. Just use a really good power supply to feed the 2x 8-pin and you won't have a problem with power.



Gap in cost...


----------



## Mooncheese

pewpewlazer said:


> I recommend you find a more credible news source than a proven clown and AMD Fanboi on YouTube.



Care to back up your assertions? Here's a leak synopsis from June; notice everything on this page came to fruition except the 2.3 GHz part.

If anyone says something critical that you don't like about your pet brand, they're a "proven clown".

Speculating is hard; no one does it with 100% accuracy.






 
GA102 60-90% faster in some instances (3090 vs 2080 Ti and 3080 vs 2080)? Check

USB C no longer on the card? Check

GDDR6X used? Check

1TB/s bandwidth? Nearly check (936 GB/s on the 3090)

Leaked shroud legitimate? Check

Sept release? Check
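On the bandwidth item above: the published figure falls straight out of the memory spec, per-pin data rate times bus width. A quick sketch with the numbers from NVIDIA's 3090 spec sheet:

```python
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth: per-pin data rate (Gbit/s) times the number
    of bus pins, divided by 8 to convert bits to bytes."""
    return data_rate_gbps * bus_width_bits / 8

# RTX 3090: 19.5 Gbps GDDR6X on a 384-bit bus -> 936 GB/s,
# just shy of the leaked "1 TB/s" figure.
print(bandwidth_gb_s(19.5, 384))
```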

In fact, Moore's Law is Dead leaked NVCache in the previous video, and that made it to the final product as RTX IO!

He was also one of the first to point out that NV would call it the 3090 and not the 3080 Ti (8:19 mark).

Smearing people with baseless statements on the internet isn't cool. Moore's Law is Dead is not an AMD fanboy; he's putting a lot on the line by being critical of NV and their release strategy (scarcity economics to pump up hype and demand). He is equally critical of everyone, including Intel and AMD. If it weren't for people engaging in tech journalism of this sort, Gamers Nexus wouldn't exist (this is how they got started too).

I suppose we should all just be mindless cheerleaders? That's journalism today? Mindless cheerleading? 

"Oooooh it's so great that NGreedia have decided to limit the supply of their Ampere cards, for no reason other than to pump up demand, and it's also wonderful that they've decided to gimp the 70 and 80 cards with only 8 and 10GB of VRAM, because that costs a whopping $8 per GB in bulk from Samsung! Can't put another 2GB on each card, that would increase the price by $16! 20GB, that's a whopping $80! NO, better to have a card that will struggle with next gen console ports whose consoles have more unified memory and therefore said games have more textures and assets and will require more than 8 or 10GB of video memory at 2560x1440 to say nothing of 4K! We can just upgrade again to the 3080 Ti and 3070 Ti again next year! I don't mind needlessly parting with $800 a year for a GPU! I'm a rich moron who needlessly upgrades my smart phone every year! Iphone 11? It's trash now because the Iphone 12 came out with an additional camera! I will wait in line and pay $1000 for the new one! 

I mean, Jesus Christ, people are so willfully dumb. It's disgusting.

You have one journalist speaking out against this insane logic, and he's castigated willy-nilly by the very consumer demographic his information would benefit.


----------



## changboy

So you mean we'd better buy 2x RTX 3090s?


----------



## dentnu

Which AIB cards are you guys planning on getting? My first choice was the Strix, but it looks like that is not going to be available on day one. I am now leaning towards the Gaming X Trio; I had a 2080 Ti Trio and it was a great card, never had any issues with it in the 2+ years of owning it. I keep looking at the FTW3 to see if I find something I like about it, but man, is that an ugly card. I know EVGA is the best of the manufacturers; I just don't understand why they thought a giant RGB dashboard with red accents was great-looking design. I guess if I am not quick enough to get the Trio, the FTW3 will have to do. I don't know why I care so much about how it looks, since I am putting a water block on it. Guess first impressions mean something...


----------



## Shawnb99

dentnu said:


> Which AIB cards are you guys planning on getting? My first choice was the Strix, but it looks like that is not going to be available on day one. I am now leaning towards the Gaming X Trio; I had a 2080 Ti Trio and it was a great card, never had any issues with it in the 2+ years of owning it. I keep looking at the FTW3 to see if I find something I like about it, but man, is that an ugly card. I know EVGA is the best of the manufacturers; I just don't understand why they thought a giant RGB dashboard with red accents was great-looking design. I guess if I am not quick enough to get the Trio, the FTW3 will have to do. I don't know why I care so much about how it looks, since I am putting a water block on it. Guess first impressions mean something...


Yeah, the red accent is really ugly, but I don't care since I'll be removing all of that for a block. An ugly accent that will be removed anyway isn't enough to make me choose anyone but EVGA.


----------



## BigMack70

dentnu said:


> Which AIB cards are you guys planning on getting? My first choice was the Strix, but it looks like that is not going to be available on day one. I am now leaning towards the Gaming X Trio; I had a 2080 Ti Trio and it was a great card, never had any issues with it in the 2+ years of owning it. I keep looking at the FTW3 to see if I find something I like about it, but man, is that an ugly card. I know EVGA is the best of the manufacturers; I just don't understand why they thought a giant RGB dashboard with red accents was great-looking design. I guess if I am not quick enough to get the Trio, the FTW3 will have to do. I don't know why I care so much about how it looks, since I am putting a water block on it. Guess first impressions mean something...


If the Founders winds up being bad, I'm going to try to replace it with an EVGA Hybrid. All the AIB cards releasing at launch look fugly to me. I don't know what EVGA was thinking with the red accents on their cards.


----------



## rares495

BigMack70 said:


> For suggesting people are pre-ordering a product that cannot be pre-ordered. Yup. Salty. Jealous. Uninformed.
> 
> And now ignored. Go back to the youtube comments section, troll.


You people still TRIED to preorder and would have done so if it had been possible.


----------



## domenic

At least here's a pic of the FTW3 3090 and a couple of tidbits of info. The article says the FTW3 PCB is the same up the product stack, including the Kingpin.









EVGA shows off its GeForce RTX 3090 XC3 and FTW3 GPUs


Related news: RTX 3070 cards will be available from 15th Oct. RTX 3080 reviews are delayed.




hexus.net


----------



## BigMack70

domenic said:


> At least here's a pic of the FTW3 3090 and a couple of tidbits of info. The article says the FTW3 PCB is the same up the product stack, including the Kingpin.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA shows off its GeForce RTX 3090 XC3 and FTW3 GPUs
> 
> 
> Related news: RTX 3070 cards will be available from 15th Oct. RTX 3080 reviews are delayed.
> 
> 
> 
> 
> hexus.net


I can't imagine the Kingpin is using the same PCB as the lower-end cards. If true, that's extremely disappointing for anyone looking to get one of those cards.


----------



## dentnu

BigMack70 said:


> I can't imagine the Kingpin is using the same PCB as the lower end cards. If true, extremely disappointing for anyone looking to get one of those cards.


TiN designed a new custom PCB for the 2080 Ti. I did read on the EVGA forums that Gamers Nexus confirmed TiN is no longer working with EVGA. If that is the case, then chances are the PCB will in fact be the same as the FTW3. That sucks if true.

Thank you TiN for all you have contributed! - EVGA Forums


----------



## BigMack70

dentnu said:


> TiN designed a new custom PCB for the 2080 Ti. I did read on the EVGA forums that Gamers Nexus confirmed TiN is no longer working with EVGA. If that is the case, then chances are the PCB will in fact be the same as the FTW3. That sucks if true.
> 
> Thank you TiN for all you have contributed! - EVGA Forums


Guess that explains why the Kingpin is going to be available so close to launch then... rip


----------



## changboy

This is not the same PCB; 3x 8-pins on the back for the Kingpin model:


----------



## changboy

__ https://www.facebook.com/vince.lucido/posts/2982994971805924


----------



## BigMack70

changboy said:


> This is not the same PCB; 3x 8-pins on the back for the Kingpin model:
> View attachment 2458935


Can't comment on the rest of the PCB, but the FTW3 is 3x 8-pin.


----------



## J7SC

BigMack70 said:


> Can't comment on the rest of the pcb, but the ftw3 is 3x8 pin


There are some differences visible in the KPE partial pic, such as the EVBot port, but until both FTW3 and KPE are out in the wild along with PCB descriptions (i.e. number of layers), the rest is 'wait and see'...


----------



## changboy

Yeah, I saw this on the web, but I don't think it's the new PCB, hehehe.
If we want, we can buy a 3080 soon... 2 days, lol. If you look at the FTW3 3080 and 3090, the cards are the same... just the price is twice as high.


----------



## kx11

dentnu said:


> TIN designed a new custom PCB for the 2080 TI. I did read on the EVGA forums that gamer nexus confirmed TIN is no longer working with EVGA. If that is the case then chances are the PCB will in fact be the same as the FTW3. That sucks if true.
> 
> Thank you TiN for all you have contributed! - EVGA Forums


Most likely he worked on the 3090 KP before leaving. A good man and a big loss to PC hardware enthusiasts,

that is, until he finds a new GPU maker to work for.


----------



## J7SC

kx11 said:


> Most likely he worked on the 3090 KP before leaving. A good man and a big loss to PC hardware enthusiasts,
> 
> that is, until he finds a new GPU maker to work for.


Yeah, TiN will be missed. I wish he could set up my mobos and GPUs; at least he's on the same continent now ;-) This also reminds me of Elmor leaving Asus ROG.


----------



## originxt

Is the NDA for all RTX 3000 cards or just the 3080? If it's just the 3080, when can we expect the NDA for the 3090 to lift?


----------



## Sheyster

originxt said:


> Is the NDA for all RTX 3000 cards or just the 3080? If it's just the 3080, when can we expect the NDA for the 3090 to lift?


As far as I know it's for the 3080 FE only and lifts tomorrow (Sept 16). The 3090 lift will probably be 3 days before release, unless they push it back a day or two like they did for the 3080 FE.


----------



## dentnu

Alphacool is now listing which cards are compatible with its Eisblock for reference boards. Guess now we know which ones are 100% reference cards.



https://www.alphacool.com/detail/index/sArticle/27822/sCategory/20538


----------



## Niju

dentnu said:


> Alphacool is now listing which cards are compatible with its Eisblock for reference boards. Guess now we know which ones are 100% reference cards.
> 
> 
> 
> https://www.alphacool.com/detail/index/sArticle/27822/sCategory/20538


None of the major brands (Asus, EVGA, Gigabyte, MSI) are on that list so far. Hopefully that changes.


----------



## originxt

I'm really hoping the 3080 reviews will give us a glimpse of how much power draw is allowed through that 12-pin connector. Hopefully it gives us an idea of whether the 3090 FE cards will be power-limited.

Anyone know if EVGA and the like will also sell FE-branded cards, or is it just Nvidia with the FE edition?


----------



## BigMack70

originxt said:


> I'm really hoping the 3080 reviews will give us a glimpse of how much power draw is allowed through that 12-pin connector. Hopefully it gives us an idea of whether the 3090 FE cards will be power-limited.
> 
> Anyone know if EVGA and the like will also sell FE-branded cards, or is it just Nvidia with the FE edition?


I don't expect much helpful info on that. The info we need is whether Nvidia is willing to push the 3090 past 375W; I believe 370W is the OC limit on the 3080.

I do think we'll get some info as to the temps/noise of Nvidia's design, though.


----------



## dentnu

originxt said:


> I'm really hoping the 3080 reviews will give us a glimpse of how much power draw is allowed through that 12-pin connector. Hopefully it gives us an idea of whether the 3090 FE cards will be power-limited.
> 
> Anyone know if EVGA and the like will also sell FE-branded cards, or is it just Nvidia with the FE edition?


From my understanding, FE cards will only be sold by NVIDIA. All other AIBs were given the reference design so they can build cards from it if they choose to. The reference design is not the same as what NVIDIA is using in the FE cards; the FE uses a custom PCB made by NVIDIA.


----------



## Thoth420

dentnu said:


> What AIO's you guys planning on getting? My first choice was the Strix but it looks like that is not going to be available on day one. I am now leaning towards the Gaming X Trio. I had a 2080 TI Trio and it was a great card never had any issues with it in the 2+ years of owning it. I keep looking at the FTW3 to see if I see something I like about it but man is that an ugly card. I know EVGA is the best out of the manufactures. I just don't understand why they thought putting a giant RGB dashboard and then added red accents was great looking design. I guess if I am not quick enough and am able to get the Trio the FTW3 will have to do. I don't know why I am caring so much about how it looks since I am putting a water block on it. Guess first impressions mean something...


I was eyeing either the Gigabyte Xtreme or the Asus Strix, in that order. I'll take a Founders Edition if it is all I can find, however. I'm not much of a max-OC guy, just looking for something aesthetic.


----------



## sakete

FTW3 cards not available on Thursday:

__ https://twitter.com/i/web/status/1305894136159940608


----------



## BigMack70

sakete said:


> FTW3 cards not available on Thursday:
> 
> __ https://twitter.com/i/web/status/1305894136159940608


Not really relevant to the 3090.


----------



## sakete

BigMack70 said:


> Not really relevant to the 3090.


Not yet, anyway; who knows how this could spiral. With COVID, supply chains are a real mess everywhere.


----------



## BigMack70

sakete said:


> Not yet anyway, who knows how this could spiral. With COVID, supply chains are a real mess everywhere.




__ https://twitter.com/i/web/status/1305912990189772802


----------






## ttnuagmada

BigMack70 said:


> Not really relevant to the 3090.


It's an indication that the better 3090 boards might also be late.

edit: nm, saw the other tweet


----------



## domenic

Yet another non-response from EKWB regarding the question of how they are addressing the chips on both sides of the 3090 PCB. I understand they are trying to say they can't say anything until release, but they aren't selling anything licensed from NVIDIA, and I don't see how they are bound by an NDA unless NVIDIA made them sign something in return for specs ahead of time.



https://forums.evga.com/download.axd?file=0;3079753&where=&f=2020-09-15%2017_14_39-Capture%20EKWB.jpg


----------



## ttnuagmada

Anyone have any bets on which cards end up with the highest power limit? I'm thinking I'll probably jump on a FTW3.


----------



## HyperMatrix

ttnuagmada said:


> Anyone have any bets on which card/cards end up with the highest power limit? I'm thinking I'll probably jump on a FTW3.


So far only the ROG Strix and EVGA FTW3/Kingpin are listed with over 20 power phases. Asus says on their website that the Strix will have a 400W TGP. EVGA says the FTW3 will have a higher power limit than reference/FE, but no one knows how much yet. Personally, I'll be going with the FTW3. The Kingpin board has 23 power phases; if the information about the FTW3 and Kingpin using the same board is correct, that'll mean 23 phases for the FTW3 as well, as opposed to the ROG Strix and its 22 phases. Generally speaking, FTW3 base models are cheaper than the ROG Strix as well, and EVGA's warranty and Step-Up program are superior to anything Asus offers, along with access to EVGA's Precision X1 Xbox Game Bar app. So as long as the TDPs are the same, the EVGA will be the one to get for me. The ROG Strix looks better, but I'm going to put the card under water later anyway.
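For context on why phase counts get discussed at all: each phase shares the core rail's output current, so more phases means less current (and heat) per phase. A rough back-of-the-envelope sketch; the ~1.0 V core voltage and 90% conversion efficiency are illustrative assumptions, not measured values for any of these boards:

```python
def amps_per_phase(board_watts: float, core_voltage: float, phases: int,
                   vrm_efficiency: float = 0.9) -> float:
    """Rough output current each VRM phase carries, assuming the whole
    board power goes through the core rail (an overestimate, since
    memory and fans draw separately)."""
    core_watts = board_watts * vrm_efficiency
    return core_watts / core_voltage / phases

# Illustrative: a 400 W card at ~1.0 V core.
print(round(amps_per_phase(400, 1.0, 23), 1))  # 23 phases (Kingpin/FTW3 claim)
print(round(amps_per_phase(400, 1.0, 22), 1))  # 22 phases (Strix)
```

The one-phase difference works out to well under an amp per phase, which supports the view expressed later in the thread that 22 vs 23 phases is not a meaningful distinction.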


----------



## ttnuagmada

HyperMatrix said:


> ROG Strix looks better. But I'm going to put the card under water later anyway.


Yeah that's my plan, don't want the power limit being an issue. Hopefully someone comes out with a block for it pretty quickly, but I'll probably just bypass it in my loop for the time being.


----------



## HyperMatrix

ttnuagmada said:


> Yeah that's my plan, don't want the power limit being an issue. Hopefully someone comes out with a block for it pretty quickly, but I'll probably just bypass it in my loop for the time being.


Based on the FTW3 and ROG Strix 2080 Ti series, there were 3rd-party blocks made for them, and likely the same thing will happen with the 3090. But yeah, the question of when will be important. I'd still rather wait a few months with a 400W+ FTW3 card than be stuck with a 375W FE card that couldn't do much better even if I did put it under water.

That's actually a plus for the 3090 FE: it's a really nice air cooler. Like, really nice. Especially if your CPU is already water cooled, so you don't have to worry about its exhaust air. It will do an amazing job within its 375W power envelope. But for me, I need that extra 10% OC headroom. As more games turn on additional RTX features, we're going to need it.


----------



## BigMack70

ttnuagmada said:


> Anyone have any bets on which card/cards end up with the highest power limit? I'm thinking I'll probably jump on a FTW3.


We don't have any info on this yet, and there's no correlation between the number of power phases and BIOS power-limit restrictions. All these cards have overbuilt power delivery for air or water.

If I had to bet, you'll need to flash a custom XOC BIOS onto a card like the Kingpin for absolute maximum power headroom. I'm guessing that all the cards by default will be pretty close together.


----------



## domenic

BigMack70 said:


> We don't have any info on this yet, and there's no correlation between the number of power phases and BIOS power-limit restrictions. All these cards have overbuilt power delivery for air or water.
> 
> If I had to bet, you'll need to flash a custom XOC BIOS onto a card like the Kingpin for absolute maximum power headroom. I'm guessing that all the cards by default will be pretty close together.


I flashed my 2080 Ti XC Ultra with a BIOS that had a slightly higher power limit, but I was sweating initially because I was worried I would brick it. Can anyone with more experience comment on how well the FTW3 dual BIOS works? Is it basically foolproof from a risk standpoint, in that you can just switch over to the other BIOS to recover the one you may have hosed? By experimenting with different BIOSes, do you still run the risk of damaging the card in any way (anything is possible, but speaking practically)? And is EVGA's warranty not voided by flashing BIOSes, since they give you the ability to switch between them in the first place?


----------



## J7SC

I would think 3x 8-pin cards like the Asus Strix OC and KPE will get to 400W+ on the stock BIOS, and much higher with their respective XOC custom BIOSes and strong cooling. Not that 2x 8-pin couldn't handle much more than 400W, but it is also a matter of specs for sales certification. 22 phases vs 23 phases is not an issue, IMO. One thing which could get in the way of cross-flashing a 3090 BIOS (assuming it is still similar to the 2080 Ti) is the different display-output arrangements vendors have shown.
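The certification ceiling follows from the PCIe power ratings: 75 W from the slot plus 150 W per 8-pin auxiliary connector. A sketch of the budgets being discussed; the connectors themselves have safety margin well beyond these nominal ratings, which is why shunted 2x 8-pin cards can exceed them in practice:

```python
PCIE_SLOT_W = 75      # PCIe CEM slot power rating
EIGHT_PIN_W = 150     # 8-pin PCIe auxiliary connector rating

def board_budget(num_8pin: int) -> int:
    """Nominal in-spec power budget: slot power plus aux connectors."""
    return PCIE_SLOT_W + num_8pin * EIGHT_PIN_W

print(board_budget(2))  # 375 W: the FE-class ceiling discussed in this thread
print(board_budget(3))  # 525 W: headroom for 3x 8-pin AIB cards
```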


----------



## dentnu

Looks like Best Buy has pricing up for the ROG Strix 3090. No idea if the pricing is final, but it looks to be: the ROG Strix is $1799. Yikes... No idea if it's the OC version.



https://www.bestbuy.com/site/asus-geforce-rtx-3090-24gb-gddr6x-pci-express-4-0-graphics-card/6432447.p?skuId=6432447


----------



## t1337dude

I have a feeling that one will need a bit of luck to score a 3090 FE at launch. Most people will probably end up having to buy the $1600+ AIB variants.


----------



## dentnu

t1337dude said:


> I have a feeling that one will need a bit of luck to score a 3090 FE at launch. Most people will probably end up having to buy the $1600+ AIB variants.


I am not buying an FE; it's either Strix, Trio, or FTW3 for me. I do not want to spend $1800 on a card. Anyone here have (or had) a 2080 Ti Strix? I know they were one of the most expensive cards; is the extra cost justified?


----------



## ref

t1337dude said:


> I have a feeling that one will need a bit of luck to score a 3090 FE at launch. Most people will probably end up having to buy the $1600+ AIB variants.


I was thinking the opposite, you would think NVIDIA themselves would have a "decent" amount of stock, but who knows.


----------



## HyperMatrix

dentnu said:


> Looks like best buy has pricing up for ROG Strix 3090. No idea if the pricing is final but it looks to be. ROG Strix is $1799 Yikes... No idea if its the OC version
> 
> 
> 
> https://www.bestbuy.com/site/asus-geforce-rtx-3090-24gb-gddr6x-pci-express-4-0-graphics-card/6432447.p?skuId=6432447


The MSI Gaming X Trio is listed at $1901 on B&H Photo Video. I'll take that ROG Strix at $1799, please.


----------



## Mooncheese

This is insane. What features does the Strix have that set it above the other AIB cards, other than a pretty, effective cooler and a BIOS switch?

The FTW3 has thermistors on the PCB (iCX3 sensors), dual BIOS, a similar heatsink, and similar build quality, but will likely be priced at no more than $1700, which works out to around $1600 after the 5% associate-code discount.

Genuinely curious as to why the Strix is so expensive and sought after compared to the FTW3.


----------



## pewpewlazer

Mooncheese said:


> This is insane, what features does the Strix have that set it up and above the other AIB's other than a pretty and effective cooler and bios switch?
> 
> FTW3 has thermistors on the PCB (iCX3 sensors), dual bios, a similar heat-sink, similar build quality but will likely be priced at nor more than $1700 which works out to around 1600 after the 5% associate code discount.
> 
> *Genuinely curious as to why Strix is so expensive and sought after compared to FTW3.*


Do we have official pricing for both? The 2080 Ti STRIX was cheaper than the 2080 Ti FTW3...

Asus also seems to have consistently had the best coolers. Personally, I think water cooling is absolutely mandatory for high-end GPUs. But if you're going to use the stock air cooler, the STRIX is a no-brainer. Just look at how ridiculously loud the 2080 Ti FTW3 was compared to the STRIX:









EVGA GeForce RTX 2080 Ti FTW3 Ultra 11 GB Review


The RTX 2080 Ti FTW3 Ultra is EVGA's flagship RTX 2080 Ti. It comes with a large triple-slot triple-fan cooler that delivers amazing temperatures. Power limits have also been increased, and the VRM is stronger than ever with 16-phases for the GPU and three phases for the GDDR6 memory.




www.techpowerup.com





If I "had" to use the stock air cooler, I'd pay an extra $100+ for the STRIX just based on noise.


----------



## Mooncheese

pewpewlazer said:


> Do we have official pricing for both? The 2080 Ti STRIX was cheaper than the 2080 Ti FTW3...
> 
> Asus also seems to have consistently had the best coolers as well. Personally, I think water cooling is absolutely mandatory for high end GPUs. But if you're going to use the stock air cooler, the STRIX is a no-brainer. Just look at how ridiculously loud the 2080 Ti FTW3 was compared to the STRIX:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA GeForce RTX 2080 Ti FTW3 Ultra 11 GB Review
> 
> 
> The RTX 2080 Ti FTW3 Ultra is EVGA's flagship RTX 2080 Ti. It comes with a large triple-slot triple-fan cooler that delivers amazing temperatures. Power limits have also been increased, and the VRM is stronger than ever with 16-phases for the GPU and three phases for the GDDR6 memory.
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> If I "had" to use the stock air cooler, I'd pay an extra $100+ for the STRIX just based on noise.


I'm putting my 3090 under a full water block, the question is, who will have a block out first, Strix or FTW3?


----------



## MacMus

Will all reference boards be the same? I'm thinking of going with the EVGA FTW3 3080 to replace my 2080 Ti.


----------



## EconomyFishFinger

Has anyone seen anything regarding full-coverage waterblock compatibility on cards with 3x8-pin power connectors rather than the standard 2x8-pin on AIB cards?

I'm hoping to get an FTW3 with 3x8-pin power connectors and put a waterblock on it. I'd have gone with the Hydro Copper offering, but that apparently only has 2x8-pin connectors.

If I'm throwing a not-insignificant amount of cash at this card, I at least want to be able to unload as much power through it as possible. Already prepped a radiator setup for this with the Alphacool X45 1080 Nova radiator and a single 360 rad in the loop.


----------



## Krzych04650

BigMack70 said:


> We don't have any info on this yet, and there's no correlation between number of power phases and bios power limit restrictions. All these cards have overbuilt power delivery for air or water.
> 
> If I had to bet, you'll need to do a custom xoc bios flash onto a card like the kingpin for absolute maximum power headroom. I am guessing that all the cards by default will be pretty close together.


Yea, there were some massively overbuilt 3x8-pin 2080 Ti models that had only a 330W power limit, a whopping 10W more than the FE.

Manufacturers probably aren't going to list their power limits in specs, or only base ones, so only reviews and user reports will tell. And this will basically be the only major factor, assuming you have your own cooling.


----------



## pewpewlazer

Mooncheese said:


> I'm putting my 3090 under a full water block, the question is, who will have a block out first, Strix or FTW3?


STRIX, but I heard it's overpriced, so I would suggest waiting for the FTW3.


----------



## J7SC

...Crysis just released an 8K trailer video... something for 3090s to have fun with when the time comes?


----------



## dentnu

HyperMatrix said:


> MSI Gaming X Trio listed at $1901 on BH Photo Video. I'll take that ROG Strix at $1799 plz.


That's nuts if that is the final price of the Trio. I think B&H have that price as a placeholder. I had a 2080 Ti Trio last gen and it had the lowest price between the Strix and FTW. I preordered the 2080 Ti Trio and got it on day one for $1249.99. I just can't believe MSI would price it that high.


----------



## dentnu

If you filter prices on Newegg, it will show you the cost of each card.

I can confirm newegg listed the Strix OC at $1800.
Newegg,GeForce RTX 3090,$1799 - $1800 Desktop Graphics Cards | Newegg.com

TUF OC @ $1600
Newegg,GeForce RTX 3090,$1599 - $1600 Desktop Graphics Cards | Newegg.com

TUF @ $1500
Newegg,GeForce RTX 3090,$1499 - $1500 Desktop Graphics Cards | Newegg.com


----------



## dentnu

If anyone wants to compare 3080 prices to what 3090 prices could possibly be, here are the prices for 3080 cards listed on Newegg.

3080 Pricing as of now based on Newegg system workaround listed above

Subject to change:
$850 Asus strix
$810 Evga FTW3 ULTRA
$790 Evga FTW3
$770 Evga XC ULTRA
$760 MSI Gaming X Trio
$750 Evga XC Gaming | Asus Tuf OC | Gigabyte Gaming OC
$740 MSI Ventus 3x OC
$730 Evga XC Black | Gigabyte Eagle OC
$720 Zotac Trinity
$700 Asus Tuf | MSI Ventus 3x


----------



## Dagamus NM

Founders Edition for me. EK waterblock and call it good there.

Question: since HDMI 2.1 is finally on a GPU, what cable is recommended? I see a lot of cheap cables with reviews showing that they don't meet spec or deliver the rated performance. Best price on a 10' cable?


----------



## BigMack70

Looking like exactly what I said about performance for the 3080... It's a hair over 30% better than the 2080 Ti at 4K. The 3090 should be around 60% better if it's not horribly power throttled at stock.

Overclocking looks to be a real dud on the 3080 FE, and I bet even more so on the 3090. Looks like Ampere is going to depend on custom BIOS flashing even more than Turing did.


----------



## Mooncheese

pewpewlazer said:


> STRIX, but I heard it's overpriced, so I would suggest waiting for the FTW3.


Yes, if the Strix is really going to be $1800, the FTW3 will be around $1600 after the 5% associate code is applied at checkout.

The question is, will it take EKWB a year to make a quality block for it?

Bykski has a block planned, but right now the product page is just a placeholder.

Worst-case scenario, one could go with EVGA's own overpriced Hydro Copper block when they release them in October (per Jacob @ EVGA).

FTW3 + $200 HC block is still the same price as the Strix without a block, and for all we know the HC will be out before EKWB releases the Strix WB!

With a 3-year transferable warranty and thermistors on the PCB, I see no upside to going with Asus over EVGA, only downsides.

The Strix may be the better option if you're keeping the card on air, but that remains to be seen, as both cards have different cooler designs this time around.


----------



## CallsignVega

Igor says the new FE board can't be shunt modded due to the way the new chips monitor the voltages. So that puts FE out for me. Will have to try and get a Kingpin/FTW3/Strix/MSI Trio then.


----------



## kx11

The MSI Gaming X Trio 3080 seems to be the fastest of all the 3080s, according to 3DMark leaks.


----------



## dentnu

kx11 said:


> MSi X Trio 3080 seems to be the fastest GPU out of all 3080s according to 3dmark leaks


Link ?


----------



## kx11

dentnu said:


> Link ?




__ https://twitter.com/i/web/status/1306253575538880512


----------



## Krzych04650

Everything falls within expectations with the 3080: 30% faster than the 2080 Ti on average. Power characteristics are a huge concern, as we expected; power limits on FE cards are only really enough to stabilize the stock curve. I haven't seen any proper OC video with manual V/F curve tuning and undervolting, but you are not getting anywhere at a 76C temperature anyway.

The 3090 is certainly going to be even more power constrained. Looking at how going to 370W on the 3080 does next to nothing, it seems that the same +50% power limit as on Turing will be required, which, given the already high TDPs of Ampere, may not really work all that well. I am starting to think that an undervolted 2025 MHz may be the way to go, and that clocking will be more similar to Pascal, with most cards topping out around 2075 MHz, rather than Turing, where any card can do well past 2100 and you will much sooner trip over the power limit and temps before hitting the limit of the silicon.

Again, not much chance for good custom models at launch. Unless the 3090 FE somehow has a less restrictive power limit than the 3080, it looks like quite a bit of waiting.



kx11 said:


> __ https://twitter.com/i/web/status/1306253575538880512


The differences look really extreme. But considering that something like TUF is reference board without any headroom, it could make sense if Trio really makes use of those 3x8pin connectors. Still it looks like too much.


----------



## HyperMatrix

Regarding temps on the FE, a manual fan curve would probably help. Tom's Hardware listed fan RPM for the cards, and the 3080 FE was substantially lower than other high-end cards. I wonder what that could mean for OC potential, since apparently these cards are tuned for noise.


----------



## HyperMatrix

kx11 said:


> __ https://twitter.com/i/web/status/1306253575538880512


AIB reviews tomorrow. God I hope you're right about that tweet. Lol.


----------



## originxt

Was looking to get the 3090 FE and waterblock it due to size constraints, but it's looking like I have to find a good 3x8-pin card now. I'm not even sure I can find a card that'll fit lol.

Any thoughts on whether a 3x8-pin partner model will fit in an O11D with a full-cover block?


----------



## kx11

HyperMatrix said:


> AIB reviews tomorrow. God I hope you're right about that tweet. Lol.


Paul's HW reported his 3080 drawing as much as 500W, and that was an FE card


----------



## BigMack70

It's pretty concerning to see 3080 FE cards gain almost nothing going from a 320W to a 370W power limit. They've got Ampere pushed well past its point of peak efficiency.

Probably means that to overclock a 3090 in any meaningful way, you're gonna need a 450-500W+ BIOS. Ouch.


----------



## HyperMatrix

kx11 said:


> Paul's HW reported his 3080 drawing as much as 500w, and that was a FE card


That's total system power draw though. Not just GPU. When your GPU is putting out more frames, other system components are also being utilized at a higher rate.


----------



## J7SC

...watched Igor's Lab and Gamers Nexus vids on the 3080 FE (which allow for some early conclusions re. the 3090 FE). While the new FE stock cooler is an improvement, temps (and with them, down-clocking) are still a concern, as are power mods. Overall, GN summed it up nicely (paraphrasing): those who panic-sold their 2080 Ti for 500 bucks will probably regret it. Depending on what game/setting you are looking at, improvements are in the 20% to low-30% range.

...apart from Igor's heat pics, his other recurring refrain is that the Ampere cards are held back by "the rest of the system." As GN, with similar game scores to what Igor was reporting, was running a 10700K at 5.1, it kind of makes you wonder what you need for a 3090 (or two)


----------



## kx11

so Linus posted these slides




















Very interesting, since my 2080 Ti beats them by 10fps avg. I have mine on an M.2 (Corsair MP510).


----------



## HyperMatrix

Just noticed in the nexus gamer video that they were able to hit 2085MHz on the gpu with +900 to memory on 100% fan speed. But it wasn’t sustainable. That actually gives some good hopes for watercooling performance with AIB cards with higher power limits. So 2.1GHz+ may not be that crazy for the 3090.


----------



## chanmanx2k

@zhrooms


zhrooms said:


> _Last Updated: September 15, 2020_
> 
> *NVIDIA GeForce® RTX 3090*
> 
> *RTX 3070 Owner's Club*
> *RTX 3080 Owner's Club*
> *→ RTX 3090 Owner's Club*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *SPECS (Click SHOW)*
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> Rich (BB code):
> 
> 
> *Architecture* Ampere
> *Chip* GA102-300-A1
> *Transistors* 28,000 million
> *Die Size* 627 mm²
> *Manufacturing Process* 8nm
> 
> *CUDA Cores* 5248 (10496)
> *TMUs* 328
> *ROPs* 112
> *SM Count* 82
> *Tensor Cores* 328
> *GigaRays* -- GR/s
> 
> *Core Clock* 1410 MHz
> *Boost Clock* 1695 MHz
> *Memory* 24GB GDDR6X
> *Memory Bus* 384-bit
> *Memory Clock* 2437.5 MHz / 19500 MHz
> *Memory Bandwidth* 936 GB/s
> *External Power Supply* 12-Pin
> *TDP* 350W
> 
> *DirectX* 12.2 Ultimate
> *OpenGL* 4.6
> *OpenCL* 2.0
> *Vulkan* 1.2.140
> *CUDA* 8.5
> 
> *Interface* PCIe 4.0 x16
> *Connectors* 1x HDMI 2.1, 3x DisplayPort 1.4a
> *Dimensions* 313 x 138mm (3-Slot)
> 
> *Price* $1499 US
> 
> *Release Date* September 24, 2020
> 
> 
> 
> 
> 
> 
> 
> Code:
> 
> 
> RTX 3090    | GA102-300 |  8nm | 627mm² | 28.0 BT | 5248 CCs* | 328 TMUs | 96 ROPs | 82 SMs | 1695 MHz |  24GB | 1024MB x 24 | GDDR6X | 384-bit | 936 GB/s
> RTX 3080    | GA102-200 |  8nm | 627mm² | 28.0 BT | 4352 CCs* | 272 TMUs | 88 ROPs | 68 SMs | 1710 MHz |  10GB | 1024MB x 10 | GDDR6X | 320-bit | 760 GB/s
> RTX 2080 Ti | TU102-300 | 12nm | 754mm² | 18.6 BT | 4352 CCs  | 272 TMUs | 88 ROPs | 68 SMs | 1635 MHz |  11GB | 1024MB x 11 | GDDR6  | 352-bit | 616 GB/s
> RTX 2080 S  | TU104-450 | 12nm | 545mm² | 13.6 BT | 3072 CCs  | 192 TMUs | 64 ROPs | 48 SMs | 1815 MHz |   8GB | 1024MB x 8  | GDDR6  | 256-bit | 496 GB/s
> RTX 2080    | TU104-400 | 12nm | 545mm² | 13.6 BT | 2944 CCs  | 184 TMUs | 64 ROPs | 46 SMs | 1710 MHz |   8GB | 1024MB x 8  | GDDR6  | 256-bit | 448 GB/s
> GTX 1080 Ti | GP102-350 | 16nm | 471mm² | 12.0 BT | 3584 CCs  | 224 TMUs | 88 ROPs | 28 SMs | 1582 MHz |  11GB | 1024MB x 11 | GDDR5X | 352-bit | 484 GB/s
> GTX 1080    | GP104-400 | 16nm | 314mm² |  7.2 BT | 2560 CCs  | 160 TMUs | 64 ROPs | 20 SMs | 1733 MHz |   8GB | 1024MB x 8  | GDDR5X | 256-bit | 320 GB/s
> GTX 980 Ti  | GM200-310 | 28nm | 601mm² |  8.0 BT | 2816 CCs  | 172 TMUs | 96 ROPs | 22 SMs | 1076 MHz |   6GB |  512MB x 12 | GDDR5  | 384-bit | 336 GB/s
> GTX 980     | GM204-400 | 28nm | 398mm² |  5.2 BT | 2048 CCs  | 128 TMUs | 64 ROPs | 16 SMs | 1216 MHz |   4GB |  512MB x 8  | GDDR5  | 256-bit | 224 GB/s
> GTX 780 Ti  | GK110-425 | 28nm | 551mm² |  7.1 BT | 2880 CCs  | 240 TMUs | 48 ROPs | 15 SMs |  928 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit | 336 GB/s
> GTX 780     | GK110-300 | 28nm | 551mm² |  7.1 BT | 2304 CCs  | 192 TMUs | 48 ROPs | 12 SMs |  900 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit | 288 GB/s
> GTX 680     | GK104-400 | 28nm | 294mm² |  3.5 BT | 1536 CCs  | 128 TMUs | 32 ROPs |  8 SMs | 1058 MHz |   2GB |  256MB x 8  | GDDR5  | 256-bit | 192 GB/s
> GTX 580     | GF110-375 | 40nm | 520mm² |  3.0 BT |  512 CCs  |  64 TMUs | 48 ROPs | 16 SMs |  772 MHz | 1.5GB |  128MB x 12 | GDDR5  | 384-bit | 192 GB/s
> * Can execute twice as many FP32 calculations per clock compared to previous generation, thus marketed as 10496 and 8704 CUDA cores
> 
> 
> *ASUS*
> 
> 
> Rich (BB code):
> 
> 
> *TUF*                 | 3 Fan | 2.7  Slot | 300mm | -- Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN 4718017909297 | PN 90YV0FD0-M0NM00
> *TUF OC*              | 3 Fan | 2.7  Slot | 300mm | -- Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN 4718017922814 | PN 90YV0FD1-M0NM00
> *Strix*               | 3 Fan | 2.9  Slot | 319mm | 22 Power Stages | 1695 MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN 4718017908955 | PN 90YV0F90-M0NM00
> *Strix OC*            | 3 Fan | 2.9  Slot | 319mm | 22 Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN 4718017928618 | PN 90YV0F93-M0NM00
> 
> *EVGA*
> 
> 
> Rich (BB code):
> 
> 
> XC3                 | 3 Fan | 2.2  Slot | ---mm | 19 Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN 24G-P5-3973-KR
> *XC3 Black*           | 3 Fan | 2.2  Slot | ---mm | 19 Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN 24G-P5-3973-KR
> *XC3 Ultra*           | 3 Fan | 2.2  Slot | ---mm | 19 Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN 24G-P5-3975-KR
> *XC3* Hybrid          | 1 Fan | 2    Slot | ---mm | 19 Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN
> *XC3* Water           | Water | 2    Slot | ---mm | 19 Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN
> *FTW3*                | 3 Fan | 2.75 Slot | ---mm | 23 Power Stages | 1695 MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN 24G-P5-3985-KR
> *FTW3 Ultra*          | 3 Fan | 2.75 Slot | ---mm | 23 Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN 24G-P5-3987-KR
> *FTW3* Hybrid         | 1 Fan | 2    Slot | ---mm | 23 Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN
> *FTW3* Water          | Water | 2    Slot | ---mm | 23 Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN
> *Kingpin* Hybrid      | 1 Fan | 2    Slot | ---mm | 23 Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN
> *Kingpin* Water       | Water | 2    Slot | ---mm | 23 Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN
> 
> *GAINWARD*
> 
> 
> Rich (BB code):
> 
> 
> *Phoenix*             | 3 Fan | 2.7  Slot | 294mm | 19 Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN 4710562241976 | PN 471056224-1976
> *Phoenix GS*          | 3 Fan | 2.7  Slot | 294mm | 19 Power Stages | *1725* MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN 4710562242034 | PN 471056224-2034
> 
> *GALAX | KFA2*
> 
> 
> Rich (BB code):
> 
> 
> *SG*                  | 3 Fan | 2.9  Slot | 317mm | 19 Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN 39NSM5MD1GNK
> *EX*                  | 3 Fan | 2.9  Slot | 316mm | -- Power Stages | *1740* MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN               | PN 39NXM5MD1JNK
> *EX Pink*             | 3 Fan | 2.9  Slot | 316mm | -- Power Stages | *1740* MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN               | PN
> *EX White*            | 3 Fan | 2.9  Slot | 316mm | -- Power Stages | *1740* MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN               | PN
> 
> *GIGABYTE*
> 
> 
> Rich (BB code):
> 
> 
> *Eagle OC*            | 3 Fan | -.-  Slot | ---mm | -- Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN 4719331307578 | PN GV-N3090EAGLE OC-24GD
> *Gaming OC*           | 3 Fan | -.-  Slot | ---mm | -- Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN 4719331307547 | PN GV-N3090GAMING OC-24GD
> *AORUS Master*        | 3 Fan | -.-  Slot | ---mm | -- Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN               | PN GV-N3090AORUS M-24GD
> *AORUS Xtreme*        | 3 Fan | -.-  Slot | ---mm | -- Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN GV-N3090AORUS X-24GD
> 
> *INNO3D*
> 
> 
> Rich (BB code):
> 
> 
> *Gaming X3*           | 3 Fan | -.-  Slot | ---mm | 19 Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN 0835168001558 | PN N30903-246X-1880VA37N
> *iChill X3*           | 3 Fan | -.-  Slot | ---mm | 19 Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN 0835168001541 | PN C30903-246XX-1880VA37
> *iChill X4*           | 3 Fan | -.-  Slot | ---mm | 19 Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN 0835168001534 | PN C30904-246XX-1880VA36
> 
> *MSI*
> 
> 
> Rich (BB code):
> 
> 
> *Ventus 3X*           | 3 Fan | -.-  Slot | 305mm | -- Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN               | PN
> *Ventus 3X OC*        | 3 Fan | -.-  Slot | 305mm | -- Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W |    *Custom PCB* | EAN 4719072762476 | PN V388-002R
> *Gaming Trio*         | 3 Fan | -.-  Slot | 335mm | -- Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN
> *Gaming X Trio*       | 3 Fan | -.-  Slot | 335mm | -- Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN 4719072762506 | PN V388-011R
> 
> *NVIDIA*
> 
> 
> Rich (BB code):
> 
> 
> *Founders Edition*    | 2 Fan | 3    Slot | 313mm | 19 Power Stages | 1695 MHz Boost | *1x12-Pin* | 350/375 W |    *Custom PCB* | EAN               | PN
> 
> *PALIT*
> 
> 
> Rich (BB code):
> 
> 
> *GamingPro*           | 3 Fan | 2.7  Slot | 294mm | 19 Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN NED3090019SB-132BA
> *GamingPro OC*        | 3 Fan | 2.7  Slot | 294mm | 19 Power Stages | *1725* MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN NED3090S19SB-132BA
> 
> *PNY*
> 
> 
> Rich (BB code):
> 
> 
> *XLR8 M*              | 3 Fan | 2.7  Slot | 294mm | 19 Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN 0751492639536 | PN VCG309024TFXMPB
> *XLR8*                | 3 Fan | 2.7  Slot | 294mm | 19 Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN 0751492639543 | PN VCG309024TFXPPB
> 
> *ZOTAC*
> 
> 
> Rich (BB code):
> 
> 
> *Trinity*             | 3 Fan | 2.5  Slot | 318mm | 19 Power Stages | 1695 MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN 4895173622427 | PN ZT-A30900D-10P
> *Trinity HoLo*        | 3 Fan | 2.5  Slot | 318mm | 19 Power Stages | ---- MHz Boost | 2x8-Pin | 350/375 W | Reference PCB | EAN               | PN
> *AMP Extreme*         | 3 Fan | -.-  Slot | ---mm | -- Power Stages | ---- MHz Boost | *3x8-Pin* | ---/--- W |    *Custom PCB* | EAN               | PN
> 
> *TECHPOWERUP | GPU-Z
> 
> Download TechPowerUp GPU-Z
> 
> 
> NVIDIA | NVFLASH
> 
> Download NVIDIA NVFlash
> 
> 
> BIOS | ROM
> 
> 
> 
> 
> TechPowerUp BIOS Collection < Verified
> 
> 
> 
> 
> 
> 
> TechPowerUp BIOS Collection < Unverified
> 
> 
> 
> OVERCLOCKING | TOOLS
> 
> Download ASUS GPUTweak II
> 
> Download EVGA Precision X1
> 
> Download Gainward EXPERTool
> 
> Download Galax/KFA2 Xtreme Tuner Plus
> 
> Download Gigabyte AORUS Engine
> 
> Download Inno3D TuneIT
> 
> Download MSI Afterburner
> 
> Download Palit ThunderMaster
> 
> Download PNY Velocity X
> 
> Download Zotac FireStorm
> 
> 
> QUESTIONS | FAQ *
> 
> _Last Updated: September 1, 2020_​
> *Question:* How much faster is the RTX 3090 than the RTX 3080?
> *Answer:* We don't know for sure yet, but everything points to the 3090 being around 20% faster overall. Both the 3080 and 3090 are based on the GA102 ("8N" custom Samsung process, 627mm² die size). The RTX 3090 features a full-fat die with 20.6% more CUDA cores, SMs, TMUs, tensor cores, and RT cores. It also offers 23.2% higher memory bandwidth thanks to its 384-bit memory bus. Target boost clocks are close, but with a slight edge to the 3080. All this amounts to the 3090 being around 20% faster under ideal conditions, but at a 114% higher cost.


Could you include Width as well please?


----------



## BigMack70

kx11 said:


> so Linus posted these slides
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> very interesting since my 2080ti beats them by 10fps avg, i have mine on an M.2 (corsair mp510)


Yeah, something's wrong with their slide regarding the settings. I just benched my 2080 Ti on this with everything set to maximum @ 4K with Ultra ray tracing and DLSS, and got 46fps average. Maybe they forgot to turn on DLSS? I dunno.


----------



## ref

Just pre-ordered a Strix waterblock from EK. Not really a fan of their stuff; I find their quality pretty meh compared to Aquacomputer/Watercool, but seeing a custom-PCB block already made was enough to put an order in.

Now I just need to hope they will have Strix 3090s in stock on release, or if not, shortly after.


----------



## Mooncheese

I found Hardware Unboxed to have the most comprehensive review: they showed that the performance difference between the 3080 and 2080 Ti is only 20% at 1440p (30% at 4K), and they showed why this is the case (27:28 mark: at 4K, the portion of the render time per frame is heavier on FP32 shaders than at lower resolutions). They also showed that 8GB of video memory is not enough for 4K.






Also, EKWB has Strix 3080/3090 blocks and backplates up for pre-order. Can anyone confirm whether or not the Strix will be priced at $1800?

Considering I am currently at 3440x1440, which is closer to 2560x1440 than to 3840x2160 (roughly 34% and 67% more pixels, respectively), and considering that the 3080 is only ~10-15% faster than the 2080 Ti when both cards are at the same 330W power draw, I would be looking at maybe a 23% uplift at 3440x1440 over an overclocked 2080 Ti at the same power draw. (I believe the 10-15% cited by Steve means a 10% difference at 1440p and a 15% difference at 4K with the 2080 Ti overclocked at the same 330W draw, but I could be mistaken.)

This means the 3080 is not a viable upgrade path for those of us with a 2080 Ti, and if the estimated 20% performance uplift of the 3090 is accurate, the 3090 may only be ~35% faster than the 2080 Ti at 1440p. For $1500.

The only way this would make any sense economically is that I have also pre-ordered HP's Reverb G2, whose resolution (2160x2160 per eye) would see something more like a ~45-50% increase in performance, but even then it's still a tough pill to swallow.

For anyone not at 4K with a 2080 Ti, I would advise saving your money. Basically, an overclocked 2080 Ti is roughly 10% slower than the 3080 at 1440p at the same power draw.

This is Turing all over again: if you are a 2080 Ti owner, the 3080 and 3090 might as well be the 2080 and 2080 Ti renamed.
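For anyone double-checking the resolution comparison above, here is a quick Python sketch. These are raw pixel counts only; actual performance scaling with resolution is the thread's estimate, not something this arithmetic proves.

```python
# Pixel counts for the resolutions discussed above.
def pixels(w: int, h: int) -> int:
    return w * h

qhd = pixels(2560, 1440)      # 16:9 1440p
uw  = pixels(3440, 1440)      # 21:9 ultrawide 1440p
uhd = pixels(3840, 2160)      # 4K
g2  = pixels(2160, 2160) * 2  # HP Reverb G2, one panel per eye

print(f"ultrawide vs 1440p:     +{uw / qhd - 1:.1%} pixels")  # +34.4%
print(f"4K vs ultrawide:        +{uhd / uw - 1:.1%} pixels")  # +67.4%
print(f"Reverb G2 vs ultrawide: +{g2 / uw - 1:.1%} pixels")   # +88.4%
```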


----------



## HyperMatrix

Asus RTX 3090 official prices:

Asus TUF RTX 3090 $1499 ASUS TUF Gaming GeForce RTX 3090 DirectX 12 TUF-RTX3090-24G-GAMING Video Card - Newegg.com

Asus TUF RTX 3090 (Overclocked) $1599 ASUS TUF Gaming GeForce RTX 3090 TUF-RTX3090-O24G-GAMING Video Card - Newegg.com

Asus ROG Strix RTX 3090 (Overclocked) $1799 ASUS ROG Strix GeForce RTX 3090 DirectX 12 ROG-STRIX-RTX3090-O24G-GAMING Video Card - Newegg.com


Update:

$1499 ZOTAC GAMING GeForce RTX 3090 Trinity Graphics Card ZOTAC GAMING GeForce RTX 3090 Trinity Graphics Card

$1549 Gigabyte GeForce RTX 3090 EAGLE OC 24G Graphics Card Gigabyte GeForce RTX 3090 EAGLE OC 24G Graphics Card

$1579 Gigabyte GeForce RTX3090 GAMING OC 24G Graphics Card Gigabyte GeForce RTX3090 GAMING OC 24G Graphics Card

$1842 MSI GeForce RTX 3090 VENTUS 3X 24G Oc Graphic Card MSI GeForce RTX 3090 VENTUS 3X 24G Oc Graphic Card

$1901 MSI GeForce RTX 3090 GAMING X TRIO 24G Graphic Card MSI GeForce RTX 3090 GAMING X TRIO 24G Graphic Card


Regarding the MSI Gaming X Trio card, a leaked benchmark for the 3080 variant showed it scoring 18% higher than the 3080 FE. So if that turns out to be true (which we'll know tomorrow) and holds true for the RTX 3090 as well, then the price would make sense. I wouldn't expect any less overclocking headroom/performance from the Asus ROG Strix or EVGA FTW3 Ultra, though. And if I had to guess, the FTW3 will probably be priced at around $1699, with ~$1750 for the Ultra, which is the overclocked version.


----------



## Mooncheese

Uggh, forget it. Add the WB and backplate ($230 before taxes) and I'm looking at:

$1800 
+$230

= 

$2030; add 8% sales tax, which brings that up to about $2190, then add shipping, and it comes to around $2240 or so.

So $2240 for a 35% bump in frame-rate on my 3440x1440 ultrawide before the G2 arrives. 

Yeah, no thanks. I think I'm going to sit this one out. 

$900 for the 2080 Ti XC2 + waterblock was a lot of money for a ~50% increase over the 1080 Ti.

Another $2200 for another 35-50%, this is insane. 

And bear in mind, this is the only upgrade path if you have a 2080 Ti. 

How do people cheerlead for this crap?
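The total above works out as below; a rough sketch where the 8% tax rate is the poster's local rate and the $40 shipping figure is a ballpark assumption, not a quote.

```python
# Rough landed-cost arithmetic for the Strix-plus-block option above.
card      = 1800.00  # rumored Strix 3090 price
block_kit = 230.00   # EK block + backplate, pre-tax
tax_rate  = 0.08     # poster's local sales tax
shipping  = 40.00    # rough guess, not a quoted rate

subtotal = card + block_kit  # 2030.00
total = subtotal * (1 + tax_rate) + shipping
print(f"${total:,.2f}")  # $2,232.40
```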


----------



## Mooncheese

BigMack70 said:


> It's pretty concerning to see 3080 FE cards gain almost nothing going from 320W to 370W power limit. They've got Ampere pushed well past its point of peak efficiency, which is somewhat concerning.
> 
> Probably means that to overclock a 3090 in any meaningful way, you're gonna need a 450-500W+ BIOS. Ouch.


The key to unlocking Ampere's performance will be undervolting, which, this time around, is really only practical under water.


----------



## MikeGR7

Yes. All persistent 2080 Ti owners, please stick to your cards if you can't justify the cost of the 3000 series.

We have low Ampere stock for those who really need it, plus 2080 Tis hold their ground really nicely, being the 4th fastest GPU in the world.

Well, you'll face a struggle with the 3060s for that 4th place around Xmas, but that's months ahead, so don't overthink it.


----------



## HyperMatrix

Mooncheese said:


> Uggh, forget it, add the WB and back-plate ($230 before taxes) and I'm looking at:
> 
> $1800
> +$230
> 
> =
> 
> $2030, add 8% sales tax for brings that up to $2200, add shipping, brings that up to around $2240 or so.
> 
> So $2240 for a 35% bump in frame-rate on my 3440x1440 ultrawide before the G2 arrives.
> 
> Yeah, no thanks. I think I'm going to sit this one out.
> 
> $900 for the 2080 Ti XC2 + waterblock was a lot of money for a ~50% increase up and over 1080 Ti.
> 
> Another $2200 for another 35-50%, this is insane.
> 
> And bear in mind, this is the only upgrade path if you have a 2080 Ti.
> 
> How do people cheerlead for this crap?



I'm not surprised that you're not buying it. I was surprised, however, when you previously said that you *were* going to buy it, considering how much time you've spent hating on it. I think the cards could have been better than they are. However, as long as the AIB performance of 2.1GHz+ under water is possible, which has been indicated, this is still a solid upgrade. I've been on a Pascal Titan X at 2.1GHz for 4 years now. I refused to buy the RTX 2080 Ti because it was a garbage upgrade. But a 2.1GHz RTX 3090 would give me a 2.5-3x performance boost, enable RTX/DLSS, and provide 24GB of VRAM, which is great for gaming as well as for speeding up some of my professional workflow, including AI upscaling/interpolation and 3D rendering.

But as to your question of why I like the card, support it, and am excited for it, the answer is simple: because there is nothing else. And in a world where Nvidia is competing against only itself at the top end, I'd much rather support a $1500 Titan-class card than their previous $2500 Titan pricing.


----------



## changboy

What about: $1800 USD to CAD = $2372, + 15% tax (Quebec) = $2730. WOW!


----------



## HyperMatrix

changboy said:


> What about : 1800 usd to cad = 2372 + 15% tax (quebec) = 2730$ WOW !


Yeah, I'm not sure how people put up with 15% tax. Maybe look at setting up a forwarding/billing address in Alberta to make your purchases with 5% tax and save $237. Shipping from AB to QC would cost you like $30, so huge savings.
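The Quebec-vs-Alberta comparison is just the 10-point tax gap on the converted price; a sketch using the CAD figure quoted earlier in the thread, not a live exchange rate.

```python
# Quebec (15%) vs Alberta (5%) sales tax on the same card.
price_cad = 2372.00          # $1800 USD converted, per the earlier post
quebec  = price_cad * 1.15   # total with Quebec tax
alberta = price_cad * 1.05   # total with Alberta tax
print(f"tax saved: ${quebec - alberta:,.2f}")  # tax saved: $237.20
```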


----------



## changboy

Ya, I already thought about that. Will you give me your address without keeping my card? lol


----------



## HyperMatrix

changboy said:


> Ya i already think about that, will you give me your adress without keeping my card ? lol


I can only promise one of those things.  Nothing like getting reparations for you guys opposing our pipelines after taking our equalization payment money for decades. Haha.


----------



## changboy

And if I bought 2 RTX 3090s and you can keep 1??


----------



## Mooncheese

MikeGR7 said:


> Yes. All persistent 2080ti owners pls stick to your cards if you can't justify the cost of 3000 series.
> 
> We have low Ampere stock for those who really need it plus 2080tis hold their ground really nice being the 4th fastest gpu on the world.
> 
> Well, you'll face a struggle with 3060s for that 4th place around Xmas but that's even more months ahead so don't overthink it.


Hmmm, the 2080 Ti is 10-15% slower than the 3080 when both cards are at the same power draw? 

And the 3060 is going to be faster? 

Dude the 3070 might not even be faster. 

Basically all day I've been getting nothing but hate-filled nonsense from uninformed morons. I'm done. Have fun with your purchasing decisions, but here are the facts: 

3080 = 10-15% faster than 2080 Ti at the same power draw
3090 = 30-35% faster than 2080 Ti at the same power draw (estimated)

If the 3070 is only 20% slower than the 3080 @ 220w (unlikely, it will probably be 30% slower given the huge loss of TDP) then it might be as fast as the 2080 Ti in some instances. 

Saying the 3060 is going to be faster than the 2080 Ti, you literally don't know what you're talking about.


----------



## BigMack70

HyperMatrix said:


> But as to your question of why I like the card, support it, and am excited for it, the answer is simple. Because there is nothing else. And in a world where Nvidia is competing against only itself at the top end, I'd much rather support a $1500 Titan class card as opposed to their previous $2500 Titan card pricing.


I'm expecting a Titan card above this at $3k+, and view this as a 2080Ti replacement for a price hike, but am content buying for the same reasons - nvidia is competing against itself and the 3090 is highly likely to define top end performance and visuals for the next two years at least. I'm in.


----------



## kx11

BigMack70 said:


> Yeah something's wrong with their slide as concerns the settings. I just benched my 2080 Ti on this with everything set to maximum @ 4k with Ultra ray tracing and DLSS, and got 46fps average. Maybe they forgot to turn on DLSS? I dunno.


i agree, but someone else also got the same results on 3080


----------



## changboy

The RTX 3080 is a lot faster than the 2080 Ti; Red Dead Redemption 2 shows the worst performance increase of all games.
A lot of the time the 2080 Ti is at 60 fps when the 3080 is at 85 fps, which is huge at 4K ultra.

That means future games will run fine on the RTX 3080 while the 2080 Ti will struggle for sure.


----------



## Dagamus NM

Mooncheese said:


> Uggh, forget it, add the WB and back-plate ($230 before taxes) and I'm looking at:
> 
> $1800
> +$230
> 
> =
> 
> $2030, add 8% sales tax for brings that up to $2200, add shipping, brings that up to around $2240 or so.
> 
> So $2240 for a 35% bump in frame-rate on my 3440x1440 ultrawide before the G2 arrives.
> 
> Yeah, no thanks. I think I'm going to sit this one out.
> 
> $900 for the 2080 Ti XC2 + waterblock was a lot of money for a ~50% increase up and over 1080 Ti.
> 
> Another $2200 for another 35-50%, this is insane.
> 
> And bear in mind, this is the only upgrade path if you have a 2080 Ti.
> 
> How do people cheerlead for this crap?


If this cost is too much for the increase in performance, then just stick with your 2080 Ti for another gen or two. There are two kinds of people that upgrade every generation: those that enjoy having the latest and greatest, and those that are mad because they paid a bunch and didn't get what they perceive as the value.


----------



## HyperMatrix

BigMack70 said:


> I'm expecting a Titan card above this at $3k+, and view this as a 2080Ti replacement for a price hike, but am content buying for the same reasons - nvidia is competing against itself and the 3090 is highly likely to define top end performance and visuals for the next two years at least. I'm in.


As I've written a few times now, I'm not sure how they could make a 'Titan Card' unless they used a completely different die design. These are the only improvement points possible:

1) Enable full 10752 cores (2.4% increase only compared to 3090)
2) Card could use newer 21-22Gbps memory once Micron releases it (7.6-12.8% memory bandwidth boost; not bad)
3) Card could also be bumped up to 48GB VRAM at that point
4) Card could also be made on either TSMC 7nm FinFet like A100 card, or Samsung 5nm EUV or other advanced node that becomes available

Of those things, here are the problems:

1) Minimal boost. Very insignificant. No one that held out buying a 3090 would now buy one because of this
2) This would actually help, and is one of the most likely things to happen (along with the 2.4% core boost above)
3) This is unlikely. Possible, but unlikely. Because the Ampere Quadro is already doing this. With the full 10752 cores enabled, and 48GB VRAM. Although at GDDR6, because micron GDDR6x chips are not yet available. Going this route would put the Titan above the Ampere RTX Quadro card, whereas the Titan was designed to be a middleground between gaming and professional work. So basically, it would invalidate the Quadro card, which wouldn't make sense
4) This is theoretically possible. And while it'll reduce power usage, and may increase clocks a bit, it's unlikely due to yields. What would they do with all the chips with disabled cores? This could only happen if there were a full Ampere refresh on a new node where chips could be designated for different uses.

So realistically, the best that could happen for a 3090 Ti is the full +2.4% cores and a ~10% memory bandwidth boost. Nothing more. However, the 3080 Ti will happen, probably priced at around $1000 with 20GB VRAM, with little to no increase in the number of cores, unless it were needed due to competition from AMD.


----------



## BigMack70

HyperMatrix said:


> As I've written a few times now, I'm not sure how they could make a 'Titan Card' unless they used a completely different die design. These are the only improvement points possible:
> 
> 1) Enable full 10752 cores (2.4% increase only compared to 3090)
> 2) Card could use newer 21-22Gbps memory once Micron releases it (7.6-12.8% memory bandwidth boost; not bad)
> 3) Card could also be bumped up to 48GB VRAM at that point
> 4) Card could also be made on either TSMC 7nm FinFet like A100 card, or Samsung 5nm EUV or other advanced node that becomes available
> 
> Of those things, here are the problems:
> 
> 1) Minimal boost. Very insignificant. No one that held out buying a 3090 would now buy one because of this
> 2) This would actually help, and is one of the most likely things to happen (along with the 2.4% core boost above)
> 3) This is unlikely. Possible, but unlikely. Because the Ampere Quadro is already doing this. With the full 10752 cores enabled, and 48GB VRAM. Although at GDDR6, because micron GDDR6x chips are not yet available. Going this route would put the Titan above the Ampere RTX Quadro card, whereas the Titan was designed to be a middleground between gaming and professional work. So basically, it would invalidate the Quadro card, which wouldn't make sense
> 4) This is theoretically possible. And while it'll reduce power usage, and may increase clocks a bit, it's unlikely due to yields. What would they do with all the chips with disabled cores? This could only happen if there were a full Ampere refresh on a new node where chips could be designated for different uses.
> 
> So realistically, the best that could happen for a 3090 Ti is the full +2.4% cores and a ~10% memory bandwidth boost. Nothing more. However, the 3080 Ti will happen, probably priced at around $1000 with 20GB VRAM, with little to no increase in the number of cores, unless it were needed due to competition from AMD.


A Titan with +5-10% performance and more VRAM at a giant price premium makes sense to me based on the history of the Titan lineup. That's what most of the Titan cards have been, essentially.


----------



## zhrooms

Mooncheese said:


> So $2240 for a 35% bump in frame-rate on my 3440x1440 ultrawide before the G2 arrives.
> 
> Yeah, no thanks. I think I'm going to sit this one out.
> 
> And bear in mind, this is the only upgrade path if you have a 2080 Ti.


2175MHz overclocked 2080 Ti on a Water block and XOC BIOS makes 3090 2x8-Pin realistically only 25-35% faster. If I sell my card that's roughly $700 with cooler, new 3090 will be at least $1499 no block, so that's $800 for as little as 25% increase in 4K, 60 to 75 FPS in intensive single player games, best case 80 FPS. You're paying just over $30 per % of performance.

I sold my GTX 1080 Ti for $700 and bought 2080 Ti for $1150, that was $450, which I thought was a lot, had to pay 65% more of what I already had to get the upgrade, but I also got a 50% performance increase, so $9 per % of performance. 2080 Ti to 3090 upgrade is again, $30, more than 3x as expensive, for as little as half the performance boost I got from 1080 Ti to 2080 Ti.

And let's just humor the idea that 1080 Ti actually should've sold for $499 used when 2080 Ti dropped, because of the inflated prices at the time, $650 to upgrade, that's still just $13 per % of performance, closer to half of what it costs to upgrade to 3090, still not even comparable.

And to get any meaningful performance increase from the 3090, since it comes stock with a 350W TDP (375W max) and 2x8-Pin on the reference PCB, you have to buy a 3x8-Pin card with a much higher power limit, 450W at the very least, to get any kind of OC headroom, and this brings prices up to $1799 or above. That would bring the performance up by at least 10%, from 25-35 to 35-45, and now we're actually talking some real numbers (40+), but the price.. to upgrade my Ti to a 3x8-Pin 3090 would cost me $1100, for let's say 40% performance, and that's still $27.5 per % of performance.

Truly bonkers, the 3090 is an awful, awful card, and the 3080 is barely an upgrade from an overclocked 2080 Ti, 0-15% best case. And let's do the same math here: if I sell my Ti for $650 and want a meaningful performance increase, in other words a 3x8-Pin card with a high power limit such as the FTW3 Ultra, which is around $950 from what I've seen, that's $300 (about 45%) more than my Ti is worth right now, for 10% performance (very generous), which also comes out to $30 per % of performance; in some instances it will be 5%, so up to $60 per %.

The worst part is that we desperately need higher performance for the upcoming RTX games such as Cyberpunk, to drive that in 4K with maxed out Ray Tracing, you're absolutely screwed, even with a 3090 FTW3 Ultra (450-520W power limit), but that's the best that we got, and it'll set you back $1800-1900, not including proper cooling for the overclock that you will need to squeeze every last MHz out of it, so with a block it's safe to say around $2000. That's really the only way to get a really meaningful performance increase, not that far off 1080 Ti to 2080 Ti.

Another thing to keep in mind is that NVIDIA is definitely preparing SUPER cards, which could appear in 3, 6, 9, or 12 months, with up to 20GB VRAM, along with an increased CUDA core count, likely 350W TDP, for possibly as little as $999, making the 3090 much less enticing. But if you want to play Cyberpunk on release you don't have that option. $2000 ($1300 if upgrading from a Ti) to get up to 50% higher framerate in Cyberpunk at 4K with RT enabled (best case), which still isn't enough, since 30 FPS + 50% is still just 45; DLSS 2.0 should solve the issue and bump it up another 33% to 60 FPS.

The only option I see that I wouldn't be pissed about would be finding a 3090 3x8-Pin on sale, used (with full store warranty) for a few hundred less, then selling it possibly 6 months before the RTX 40 series is announced, buying a used 2080 Ti for $499, and running that until the new flagship drops some months later; this would cause you to lose the least amount of money. But that's still a shaky plan because of the 3080 SUPER 20GB, which could seriously threaten the value of the 3090 and demand on the used market. I'd also point out that $1300 to upgrade isn't really that big of a deal, but it's not about the money, it's the principle; the fact that they charge these prices for *gaming* computer parts is insane. I've built and overclocked PCs for almost two decades now, and this is just ridiculous: with 25% tax in Scandinavia the Strix OC costs $2318.. absurd, for that amount of money you can build a complete high-end full-RGB gaming PC with AIO cooling for both CPU and GPU. I really want no part in supporting these cards, but at the same time it's the only way for us enthusiasts to get the meaningful performance increase we want, and need.

So many choices!
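The dollars-per-percent figures above are easy to sanity-check. A quick sketch, using the rounded resale and retail figures from this post (which are the poster's estimates, not benchmarks; the helper function name is mine):

```python
# Rough cost-per-performance-percent math for each upgrade path discussed above.
# All figures are the rounded estimates from the post, not measured benchmarks.

def cost_per_percent(new_price, resale_value, perf_gain_pct):
    """Net upgrade cost divided by the percentage of performance gained."""
    net_cost = new_price - resale_value
    return net_cost / perf_gain_pct

# 1080 Ti ($700 resale) -> 2080 Ti ($1150), ~50% faster
print(cost_per_percent(1150, 700, 50))   # 9.0 ($ per %)

# 2080 Ti ($700 resale) -> 3090 FE ($1499), ~25% faster (pessimistic case)
print(cost_per_percent(1499, 700, 25))   # 31.96 ($ per %)

# 2080 Ti ($700 resale) -> 3x8-Pin 3090 ($1799), ~40% faster
print(cost_per_percent(1799, 700, 40))   # 27.475 ($ per %)
```

Plug in your own resale value and expected uplift to see where your card lands.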


----------



## MikeGR7

All this talk about power draw makes me laugh..
This is a graphics card, not some kind of room heater/air conditioner, for us to have its energy efficiency as a top priority.
Of course it could bother people with a 550W PSU, but please, this is the 3090 club; get yourself a 1kW PSU if you're here.

I get that having to use efficiency mumbo jumbo to prove your $1200 purchase will barely outrun the $350 3060 must be annoying, but name-calling other users for having different views is low.


----------



## HyperMatrix

BigMack70 said:


> A titan with +5-10% performance and more vram at a giant performance premium makes sense to me based on the history of the Titan lineup. That's what most of the Titan cards have been, essentially.


None of the Titan "Plus" models have had more RAM than the base Titan version. The Kepler Titan was replaced with the Titan Black with 7.1% more cores activated, 16.6% higher memory bandwidth, and 12% higher boost clock. The Titan X Pascal to Titan Xp was 7.1% higher core count, 14% higher memory bandwidth, and 3.3% higher boost clock. So while the 3090 could be boosted a bit, you'd be seeing a 2.4% vs 7.1% core count boost, and a 7.6% memory bandwidth boost vs 14-16%. It'd be the smallest Titan class boost ever, making it overall somewhat insignificant. It won't increase sales of the card. And as I mentioned before....there's pretty much no chance of the Titan going with 48GB VRAM because that card *already exists*. It's a Quadro card with the same die, and fully unlocked cores.


----------



## BigMack70

HyperMatrix said:


> None of the Titan "Plus" models have had more RAM than the base Titan version. The Kepler Titan was replaced with the Titan Black with 7.1% more cores activated, 16.6% higher memory bandwidth, and 12% higher boost clock. The Titan X Pascal to Titan Xp was 7.1% higher core count, 14% higher memory bandwidth, and 3.3% higher boost clock. So while the 3090 could be boosted a bit, you'd be seeing a 2.4% vs 7.1% core count boost, and a 7.6% memory bandwidth boost vs 14-16%. It'd be the smallest Titan class boost ever, making it overall somewhat insignificant. It won't increase sales of the card. And as I mentioned before....there's pretty much no chance of the Titan going with 48GB VRAM because that card *already exists*. It's a Quadro card with the same die, and fully unlocked cores.


3090 isn't a Titan. 

As for the rest, we'll see. It's definitely possible to put a GA102 Titan between the 3090 and the GA100 quadro.


----------



## HyperMatrix

BigMack70 said:


> 3090 isn't a Titan.
> 
> As for the rest, we'll see.


If the 3090 isn't a Titan, then a Titan card won't exist. Because there is 0 room for improvement based on the 3080 using the same die. Look at the 3090 as the Titan and the 3080 as a 3080Ti. They're using the same die which in previous generations was done only on Titan/xx80Ti. The card spec differences are the same. Only the naming has changed. 

3090 is as Titan as Ampere has room for.


----------



## Dagamus NM

HyperMatrix said:


> If the 3090 isn't a Titan, then a Titan card won't exist. Because there is 0 room for improvement based on the 3080 using the same die. Look at the 3090 as the Titan and the 3080 as a 3080Ti. They're using the same die which in previous generations was done only on Titan/xx80Ti. The card spec differences are the same. Only the naming has changed.
> 
> 3090 is as Titan as Ampere has room for.


But there is room for improvement. It will be small on the cores, but double the memory. If the 3090 is the replacement for the 2080 Ti then it makes perfect sense. The 780 Ti compared to the Kepler Titan: a very small increase in cores, and 6GB VRAM vs 3GB. The 980 Ti had the same relationship to the Titan of the Maxwell line, 12GB vs 6GB and not a lot more cores. The 1080 Ti changed it up; the Titan X P had 12GB vs 11GB and slightly more cores, then they released a slightly larger Titan Xp after that. Then came the Titan Volta, which was its own weird release, then the newest Titan, which compared to the 2080 Ti seemed further away than ever in price. 

So yes, there is still space on the die for a few more cores. Doubling the memory will be the big thing though. There are people that will pay that, even gamers who are at 8K and don't like the frame rate with everything turned up.

I fully expect a Titan card and it will be quite niche. AI people that don't have the budget for a Tesla card will find it acceptable.


----------



## HyperMatrix

Dagamus NM said:


> But there is room for improvement. It will be small on the cores, but double the memory. If the 3090 is the replacement for the 2080 Ti then it makes perfect sense. The 780 Ti compared to the Kepler Titan: a very small increase in cores, and 6GB VRAM vs 3GB. The 980 Ti had the same relationship to the Titan of the Maxwell line, 12GB vs 6GB and not a lot more cores. The 1080 Ti changed it up; the Titan X P had 12GB vs 11GB and slightly more cores, then they released a slightly larger Titan Xp after that. Then came the Titan Volta, which was its own weird release, then the newest Titan, which compared to the 2080 Ti seemed further away than ever in price.
> 
> So yes, there is still space on the die for a few more cores. Doubling the memory will be the big thing though. There are people that will pay that, even gamers who are at 8K and don't like the frame rate with everything turned up.
> 
> I fully expect a Titan card and it will be quite niche. AI people that don't have the budget for a Tesla card will find it acceptable.


As I mentioned...the card you’re dreaming of already exists. It’s the Ampere RTX Quadro. Full cores. 48GB VRAM. But currently GDDR6, because Micron GDDR6X isn’t ready yet. Why would they release duplicate cards?


----------



## BigMack70

zhrooms said:


> 2175MHz overclocked 2080 Ti on a Water block and XOC BIOS makes 3090 2x8-Pin realistically only 25-35% faster.


This seems like an unwarranted "glass half empty" take on the situation. Stock vs stock, the 3080 is 32% faster than 2080Ti. If the 3090 is 20% faster than the 3080, and you somehow miraculously managed to get stock+20% performance out of your 2080Ti in games, which I doubt by the way, and you can only get a 5% overclock performance boost out of the 3090, that's still a 38-39% average performance uplift for 3090 vs OC'd 2080Ti. 

The price is another matter. Obviously nvidia has been a wallet destroying monopoly for years at this point and the 3090 is no exception. But the performance uplift is there. 3090 will be a very solid upgrade to the 2080Ti.


----------



## zhrooms

HyperMatrix said:


> If the 3090 isn't a Titan, then a Titan card won't exist. Because there is 0 room for improvement based on the 3080 using the same die. Look at the 3090 as the Titan and the 3080 as a 3080Ti. They're using the same die which in previous generations was done only on Titan/xx80Ti. The card spec differences are the same. Only the naming has changed.
> 
> 3090 is as Titan as Ampere has room for.


Quadro and Titan coming soon with 48GB (2GB modules) and the last TPC enabled (2 SMs), which adds 128 CUDA cores per SM (256 total).

3090 is a gaming card, advertised for 8K gaming.

It is absolutely not meant for work, as proven by NVIDIA themselves,


> GeForce cards are still going to be artificially capped in tensor performance for market segmentation reasons. As with RTX 20 cards, FP16 tensor ops with FP32 accumulate is running at half the native rate. This leaves the door open to an Ampere Titan.


----------



## J7SC

A while back I speculated that NVidia might be expanding the top tier segment with an additional GPU offering, so 3080 Ti, 3090, and Ampere Titan. The 3080 is clearly well-priced to compete with RDNA2 (at least with one of the two rumoured RDNA2 versions, big & bigger), and Nvidia doesn't have to put all their cards (pun intended) on the table just yet, not until the RDNA2 flavours (and perhaps the Intel Xe tile) have been fully revealed.

The 3080 was a bit over-hyped by their own marketing dep't with cherry-picked somewhat nebulous performance graphs (and Quake II RTX etc). Nevertheless, the 3080 is a great card at a palatable price (if actually available at that), just not worth an immediate upgrade from a 2080 Ti, which is out of production now, anyway. If the 3080 results we saw today by various reviewers scale, then 3090 should be, on average, between 40% and 50%+ faster than 2080 Ti, depending on application. I also expect driver updates and patches for Ampere to help.


----------



## zhrooms

BigMack70 said:


> This seems like an unwarranted "glass half empty" take on the situation. Stock vs stock, the 3080 is 32% faster than 2080Ti. If the 3090 is 20% faster than the 3080, and you somehow miraculously managed to get stock+20% performance out of your 2080Ti in games, which I doubt by the way, and you can only get a 5% overclock performance boost out of the 3090, that's still a 38-39% average performance uplift for 3090 vs OC'd 2080Ti.
> 
> The price is another matter. Obviously nvidia has been a wallet destroying monopoly for years at this point and the 3090 is no exception. But the performance uplift is there. 3090 will be a very solid upgrade to the 2080Ti.


2175MHz is 30% above stock.. explained this in previous posts.

My Ti performs near identical (within 5%) of an overclocked 3080 FE (370W).

3090 is 20% faster than 3080, and even more severely power limited, 350W stock and 370W max.

The conclusion is that 25-35% is *very* realistic, being generous with 35% because there are a few high-res games that show slightly higher numbers. The performance uplift is certainly not there. The 3090 will not be a solid upgrade if it's only 25% faster, which it most certainly will be in many, though not all, scenarios.


----------



## HyperMatrix

zhrooms said:


> Quadro and Titan coming soon with 48GB (2GB modules), last TPC enabled (2 SMs) which adds 128 CUDA Cores (256).
> 
> 3090 is a gaming card, advertised for 8K gaming.
> 
> It is absolutely not meant for work, as proven by NVIDIA themselves,


Yes, but if it's a card with the same die, all cores enabled, 48gb VRAM, and it already exists, without any artificial gimping, called the Quadro, why would they need to make a second version that does the exact same thing? Tell me what I'm missing here.


----------



## dentnu

Mooncheese said:


> Uggh, forget it, add the WB and back-plate ($230 before taxes) and I'm looking at:
> 
> $1800
> +$230
> 
> =
> 
> $2030, add 8% sales tax for brings that up to $2200, add shipping, brings that up to around $2240 or so.
> 
> So $2240 for a 35% bump in frame-rate on my 3440x1440 ultrawide before the G2 arrives.
> 
> Yeah, no thanks. I think I'm going to sit this one out.
> 
> $900 for the 2080 Ti XC2 + waterblock was a lot of money for a ~50% increase up and over 1080 Ti.
> 
> Another $2200 for another 35-50%, this is insane.
> 
> And bear in mind, this is the only upgrade path if you have a 2080 Ti.
> 
> How do people cheerlead for this crap?


It's not about cheerleading. I would assume everyone here buying a graphics card with a starting MSRP of $1500 just does not care about cost. Everyone here just wants the best, no matter what. It's been known since day one that the price-to-performance of the 3090 is not worth the cost. Yet people like the ones here, myself included, only want the best. Honestly, after seeing the benchmarks of the 3080: if you don't have a 4K monitor and aren't a professional esports player, you should not be buying a 3090.


----------



## zhrooms

HyperMatrix said:


> Yes, but if it's a card with the same die, all cores enabled, 48gb VRAM, and it already exists, without any artificial gimping, called the Quadro, why would they need to make a second version that does the exact same thing? Tell me what I'm missing here.


Simple answer, they're aimed at different types of work.

*What’s the Difference Between GeForce and Quadro Graphics Cards?* (A look into NVIDIA’s two most prominent graphics cards lineups, www.engineering.com)


----------



## kaydubbed

How do we get the associates code for EVGA?

Sent from my SM-N975U1 using Tapatalk


----------



## HyperMatrix

zhrooms said:


> Simple answer, they're aimed at different types of work.
> 
> *What’s the Difference Between GeForce and Quadro Graphics Cards?* (A look into NVIDIA’s two most prominent graphics cards lineups, www.engineering.com)


That answers nothing. This is the Ampere Quadro. Once 2GB GDDR6X memory is available from Micron, what could an Ampere Titan offer that this card doesn't already offer that would necessitate its creation, without destroying the need for the Quadro card itself?


----------



## BigMack70

zhrooms said:


> 2175MHz is 30% above stock


No. What 2080 Ti runs at 1675 MHz? Even the Founders Edition in a crappy case will run faster than that. The 2080 Ti FE runs at around 1800 MHz on average. 2175 is a 20% overclock.



zhrooms said:


> Conclusion is that 25-35% is *very* realistic


This is only realistic in one of three scenarios:
1) The 3090 is not actually 20% faster than the 3080, but instead something like 10% faster.
2) You are talking about sub-4k resolutions
3) You are talking about a cherry picked game suite that Ampere performs relatively poorly in compared to Turing and does not represent average performance.

Apart from those scenarios, your thinking about the performance math is simply wrong.

2080 Ti stock = 0.758* 3080 stock (or, alternatively, 3080 stock = 1.32 * 2080 Ti stock) _<-- this is known (__source__)_
2080 Ti OC = 1.2 * 2080 Ti stock = 0.91 * 3080 stock_ <-- this is generous, given that OC performance scaling is not perfectly 1:1_
3090 stock = 1.2 * 3080 stock _<-- this is unknown and an assumption_
3090 OC = 1.05 * 3090 stock _<-- very reasonable assumption_

If you run the math on this, you get 3090 OC = (1.05 * 1.2) / 0.91 * 2080 Ti OC = 1.385 * 2080 Ti OC

If you want a realistic but pessimistic outlook on the 3090, assuming it is in fact about 20% faster than the 3080, that's as pessimistic as the math allows at a 38% average performance improvement vs a heavily OC'd 2080 Ti.

If you want to argue that the 3090 is going to be only 25-35% faster than 2080 Ti OC, you need to add some pessimistic assumptions, such as "3090 will only be 15% faster than 3080".

And you're ignoring all sorts of possible optimistic assumptions, such as:
1) The newer the game/engine, and the more RTX features are implemented, the greater Ampere's relative performance jump over Turing
2) It's very likely that proper custom 3090's, with custom BIOS and under water, will far exceed a meager 5% performance bump when overclocked
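For anyone who wants to plug in their own assumptions, the chain above is just multiplication of relative-performance ratios. A minimal sketch; only the 32% 3080-vs-2080 Ti figure is benchmarked, the other three numbers are the assumptions labeled in the post:

```python
# Chained relative-performance estimate: 3090 OC vs. overclocked 2080 Ti.
# Only ti_stock_vs_3080 comes from benchmarks; the rest are assumptions.

ti_stock_vs_3080 = 1 / 1.32   # 2080 Ti stock = ~0.758 * 3080 stock (known)
ti_oc_gain       = 1.20       # assumed 20% OC uplift on the 2080 Ti
r3090_vs_3080    = 1.20       # assumed: 3090 is 20% faster than 3080
r3090_oc_gain    = 1.05       # assumed: modest 5% OC headroom on the 3090

ti_oc_vs_3080  = ti_stock_vs_3080 * ti_oc_gain            # ~0.91 * 3080 stock
r3090_oc_vs_ti = (r3090_vs_3080 * r3090_oc_gain) / ti_oc_vs_3080

print(f"3090 OC vs 2080 Ti OC: +{(r3090_oc_vs_ti - 1) * 100:.1f}%")  # +38.6%
```

Swap any of the assumed ratios (e.g. 1.15 for the 3090-vs-3080 gap) to see how the conclusion moves.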


----------



## changboy

You're comparing a super model of the 2080 Ti to the 3080 FE. Wait and see what the Strix will do when OC'd to the max.


----------



## zhrooms

BigMack70 said:


> No. What 2080 Ti runs at 1675 MHz? Even the Founders Edition in a crappy case will run faster than that. 2080 Ti FE runs at like 1800 Mhz average. 2175 is a 20% overclock.


33 of the 2080 Ti's run slower than the FE, which is a factory-overclocked card, and stock is around 1650 actually; I'm just being generous saying 30%, it's slightly higher. No idea how you have what seems like hundreds of posts in the 2080 Ti thread without ever picking up this information.



BigMack70 said:


> Your thinking about the performance math is simply wrong.
> 
> It's very likely that proper custom 3090's, with custom BIOS and under water, will far exceed a meager 5% performance bump when overclocked


You have no idea what you're talking about. Go back a few pages and look at my Time Spy score that is near identical to a 3080, and I didn't even push it to the limit, I can still go 30MHz higher; then go look at all the reviews where they barely reach a 5% OC. So take that 5% plus 20% (generous, since the 3090 has a higher TDP and lower OC headroom), and we can safely say it's around 25% worst case, best case 35%, since some titles show higher in 4K + RT.

I'm not talking about $1849 or above cards with 3x8-Pin and high power limit such as FTW3 Ultra, there's also no guarantee on flashable XOC BIOSes if that's what you mean by "custom". To get all of the performance that the high power limit allows you want to preferably cool it with water, then we're at $1999 including the block cost. I very clearly explained this a few posts ago.

This is the only way you're going to get above the 25% and take it up to possibly 45%, but it's also going to cost another $500. So yes, it will be around 25% up to 35% on every 2x8-Pin card no matter the cost, as the power limit will be capped; only when you find the right card, such as the FTW3 Ultra, will you unlock the true power of the 3090, and very few will do that, i.e. be willing to spend $500 above MSRP. So I'm obviously not comparing that (although I did add it in my previous post, for obvious reasons, such as it being important to know all the options).


----------



## BigMack70

zhrooms said:


> 33x 2080 Ti's run slower than FE which is a factory overclocked card


That's nice. But benchmarks compare vs the FE and treat it as the "stock" result, which runs at ~1800 MHz give or take. Your card is 20% faster than an average FE and the 3080 is 32% faster on average than the FE, not some mythical trashcan 2080 Ti that happens to run at 1670 MHz. 

Nothing you say affects the math I showed you (for example, I'm assuming a 5% OC on the 3090, far from requiring a $1950 XOC card). It's accurate and you just don't like it for some reason. There are assumptions built into it, namely the 3090's relative performance to the 3080, and those assumptions may prove false, but your expectations are unrealistically pessimistic, as I've shown.

Anyway, I'm out on this one. The math is there, with my assumptions clearly labeled. There's no point debating the assumptions further, because all we can do is wait, and debating the rest is like debating if 2+2=4.


----------



## zhrooms

changboy said:


> You're comparing a super model of the 2080 Ti to the 3080 FE. Wait and see what the Strix will do when OC'd to the max.


The Strix was the worst 2080 Ti card out of the 83 listed in the Owner's Club if we look at price/performance: 39 cards had a higher power limit than the Strix, and 29 of the cards had no factory OC (most of them Non-A), meaning the Strix was 40th out of 49 on power limit, incredibly bad. There were more than a dozen cards that cost $150 less than the Strix with up to 40W higher power limits. The Strix was an absolutely awful card; the cooler looked fantastic but performed worse than many of the other, larger triple-fan coolers.

And they're repeating the same mistake this time; they've already gone out saying just "up to 400W", which is an absolute joke considering that with 3x8-Pin they could set the power limit to 525W if they wanted, like the Kingpin which ran 520W, or the Galax Hall of Fame at 450W on an air cooler. 
Meanwhile the EVGA FTW3 Ultra will very likely be closer to 500W, with one more power stage than the Strix, the same as the Kingpin (23), for a very similar price.

Basically avoid Strix at all cost, it has no OC headroom.


----------



## changboy

Oh, I didn't know this lol


----------



## HyperMatrix

I recommend people take a review of the 3080 in rendering applications that are designed to take advantage of the hardware. In certain cases, it's outperforming the RTX Titan by 70%: NVIDIA GeForce RTX 3080 Performance In Blender, Octane, V-Ray, & More

Substantial improvements. And the 3090 will be 20% faster than that. Theoretically a bit more than 20%, since there are 20.5% more cores active and a 23.1% increase in memory bandwidth.
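The uplift figures quoted above can be sanity-checked from the publicly listed spec-sheet numbers; a quick sketch:

```python
# Spec-sheet numbers: RTX 3080 vs RTX 3090 (publicly listed values).
cuda_3080, cuda_3090 = 8704, 10496      # CUDA cores
bw_3080, bw_3090 = 760.3, 936.2         # memory bandwidth, GB/s

core_uplift = cuda_3090 / cuda_3080 - 1  # ~20.6% more cores
bw_uplift = bw_3090 / bw_3080 - 1        # ~23.1% more bandwidth

print(f"cores: +{core_uplift:.1%}, bandwidth: +{bw_uplift:.1%}")
```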

As long as we get some proper power unlocked cards, we'll get the performance we want. Even if efficiency will be thrown out the window.


----------



## zhrooms

BigMack70 said:


> That's nice. But benchmarks compare vs the FE and treat it as the "stock" result, which runs at ~1800 MHz give or take. Your card is 20% faster than an average FE and the 3080 is 32% faster on average than the FE, not some mythical trashcan 2080 Ti that happens to run at 1670 MHz.


Holy **** how do you not understand this. Ignore everything you've seen and listen to me: my Ti performs the *same* in Time Spy as every 3080 FE in reviews without OC, and they all overclock around 5% on top of that. How much faster my 2080 Ti is than stock is irrelevant. If my 2080 Ti is just as fast as an FE without OC, then a 3080 FE OC is only 5% faster than my 2080 Ti.



BigMack70 said:


> Anyway, I'm out on this one.


Sounds like a good idea



BigMack70 said:


> There's no point debating the assumptions further, because all we can do is wait, and debating the rest is like debating if 2+2=4.


There are no _assumptions_; we're talking accuracy within a few percent. That's far more than a vague assumption. And we also know for a fact that the 3090 will be within a few percent of 20% faster. So even if you twist the numbers as much as possible, you're not going to be off by more than 5% one way or the other. If I say the lowest difference is 25%, it's arguable that it could be 30%, but not more than that. And you are currently throwing around a 15% spread (my 25% to your 38%).


----------



## pewpewlazer

zhrooms said:


> Holy **** how do you not understand this, ignore everything you've seen and listen to me, my Ti performs the *same* in Time Spy as every 3080 FE in reviews without OC, and they all overclock around 5% on top of that, how much faster my 2080 Ti is from stock is irrelevant, meaning a 3080 FE OC is 5% faster than my 2080 Ti if my 2080 Ti is just as fast as an FE without OC.


How does your 2080 Ti compare to the 3080 (stock) in Time Spy Extreme? Or Fire Strike Extreme/Ultra? Or Port Royal? Or better yet, how about in games?

Referencing regular Time Spy as a performance comparison is cherry picking to make your 2080 Ti look better than it is. Kind of like DF/Nvidia did with Doom Eternal for the 3080.


----------



## BigMack70

pewpewlazer said:


> How does your 2080 Ti compare to the 3080 (stock) in Time Spy Extreme? Or Fire Strike Extreme/Ultra? Or Port Royal? Or better yet, how about in games?
> 
> Referencing regular Time Spy as a performance comparison is cherry picking to make your 2080 Ti look better than it is. Kind of like DF/Nvidia did with Doom Eternal for the 3080.


He's posting as if his card and benchmark scores are somehow being insulted. If someone doesn't want to hear math, leave them alone while they pick their cherries.


----------



## HyperMatrix

BigMack70 said:


> He's posting as if his card and benchmark scores are somehow being insulted. If someone doesn't want to hear math, let them alone while they pick their cherries.


He’s not wrong though. His $1500 card under water is really almost as good as a base model $699 card on stock clocks in some cases.


----------



## BigMack70

HyperMatrix said:


> He’s not wrong though. His $1500 card under water is really almost as good as a base model $699 card on stock clocks in some cases.


lol

Jokes aside, I'll just point out that I've not disputed that. It's actually built in to the math I put forth that he's trying to find ways to ignore, which makes it even more amusing. My post is literally an explanation of "_if your 2080 Ti OC's average performance is 91% of the 3080 and the 3090 is 120% of the 3080, the 3090 is 38% faster than your 2080 TI on average_" and it's apparently ruffled some feathers


----------



## CallsignVega

Hmm, so far not too impressed. At this rate, a water cooled 3090 will only be ~20% faster than my 2175 MHz shunt modded RTX Titan on water.

Almost 2 year wait for 20% isn't exactly blowing my skirt up.


----------



## BigMack70

CallsignVega said:


> Hmm, so far not too impressed. At this rate, a water cooled 3090 will only be ~20% faster than my 2175 MHz shunt modded RTX Titan on water.
> 
> Almost 2 year wait for 20% isn't exactly blowing my skirt up.


Depends how the 3090 winds up overclocking on water, I suppose.


----------



## J7SC

Regarding the Asus Strix OC: the 2080 Ti Strix was a 2x 8-pin card (nothing wrong with that), while the 3090 is 3x 8-pin. The tests I have seen of the 2080 Ti Strix OC showed it to be a decent card, but since my 2080 Tis are a different make, I can't really comment further. In the end, everyone has their favorite vendor, based on their experience. As mentioned before, I won't touch EVGA anymore after 3 of 8 of their top-dollar cards developed PCB issues (2 separate gens). That doesn't mean EVGA is bad, but I personally won't buy them (the exception is the KPE if it has a different PCB). KPE and Strix also offer good XOC BIOS support.

One result in the various 3080 FE tests I saw today that interested me the most was MS FS 2020... the gains didn't seem that impressive over a 2080 Ti. But by next Thursday, actual benchmarks for the 3090 should be out, including for MS FS 2020. I am looking forward to those and also other benches.


----------



## sultanofswing

I am only interested to see if Nvidia still does the whole A-chip vs. non-A-chip thing.
Coming from a Kingpin 2080 Ti, my plan would probably be to just get the cheapest A-chip 3090 that Heatkiller will support, if the chip-binning scheme is carried over from Turing.


----------



## domenic

LOL - I am not interested in a 3080 but I wanted to do a test run this morning to see what to expect a week from now with the 3090 launch.

At exactly 9:00am EST I had the EVGA store, Newegg, and B&H open in different browser windows. 

At Newegg before 9 I had an EVGA XC3 on the screen as it said "Notify me". As soon as the clock struck 9 it said available and I was able to add it to my cart and I made it to checkout which I abandoned by closing the window (this was all within the first fifteen seconds).

At 30 seconds after 9 Newegg crashed as well as BH & the EVGA store.

I HOPE that with the custom 3090 AIB cards (~$1800) there will be less demand due to the price. But supply could be far lower, too.


----------



## BigMack70

domenic said:


> LOL - I am not interested in a 3080 but I wanted to do a test run this morning to see what to expect a week from now with the 3090 launch.
> 
> At exactly 9:00am EST I had the EVGA store, Newegg, and B&H open in different browser windows.
> 
> At Newegg before 9 I had an EVGA XC3 on the screen as it said "Notify me". As soon as the clock struck 9 it said available and I was able to add it to my cart and I made it to checkout which I abandoned by closing the window (this was all within the first fifteen seconds).
> 
> At 30 seconds after 9 Newegg crashed as well as BH & the EVGA store.
> 
> I HOPE that with the custom 3090 AIB cards (~$1800) there will be less demand due to the price. But supply could be far less however.


I was doing the same thing. Even thought of going through with the 3080 purchase just to play with it for a week before upgrading.

EVGA crashed completely on me and was unusable. By the time I could reload, everything was out of stock. This is the same experience I've had with their store website during past GPU launches as well - it's almost not even worth bothering.

I was too slow to see anything in stock on Amazon.

Newegg still had a couple of the lesser-known brands at about 9:03 AM; I think I could have bought one but chose not to.

I've never seen anything but "Notify Me" on Nvidia's website... which is disappointing, because I was hoping to pick up a 3090 FE from their store directly and it looks like they either had about 1 card in stock, or are making this a paper launch from their own store.


----------



## ref

Oof, the 3080 launch looked rough. At this point I'm totally expecting to have to wait until December or the new year for a 3090. Will do all that I can on the 24th, but I'm not expecting to get one.


----------



## sakete

ref said:


> Oof, 3080 launch looked rough. Totally expecting at this point having to wait until December or the new year for a 3090. Will do all that I can do on the 24th, but not expecting to get one.


It took me an hour to get through placing an order on the EVGA site. But I got one. Man, what a pain. It's just the XC3 board, but it'll do as I don't do much OC'ing anyway (I know, blasphemy here on OCN).


----------



## BigMack70

sakete said:


> It took me an hour to get through placing an order on the EVGA site. But I got one. Man, what a pain. It's just the XC3 board, but it'll do as I don't do much OC'ing anyway (I know, blasphemy here on OCN).


Lost mine. Got it in cart but then took about 20 minutes to get through the login process and never made it to checkout.

I don't think sales ever went live on Nvidia's site for FE cards. Or if they did, it was over so fast that they were gone within milliseconds after 9:00. 

At this point, I'm doubtful of the ability to get a 3090 next week. Hopefully by the time Cyberpunk is out, I guess.


----------



## sakete

Oops, wrong thread.


----------



## Sheyster

The EVGA 12-pin FE cable is available for ordering now. Price is a bit high but given how hideous that adapter in the box is, I'm sure they'll sell out soon.


----------



## Krzych04650

NVIDIA SLI Support Transitioning to Native Game Integrations - VideoCardz.com

With the emergence of low level graphics APIs such as DirectX 12 and Vulkan, game developers are able to implement SLI support natively within the game itself instead of relying upon a SLI driver profile. The expertise of the game developer within their own code allows them to achieve the best...

This is really weird. Not the fact that they're moving to in-game implementations, but this article seems to imply that currently SLI-supported games won't work on the 3090 even though it has an NVLink bridge. That's just insane. So many games could be pushed to insane levels of performance with 3090 SLI... What a tremendous waste.

This, combined with the fact that so far no models are able to get any overclock even out of the 3080, really breaks a lot of the hype for me. Looking at this and at the sheer number of SLI-supported games I have scheduled, getting a second 2080 Ti on the cheap for the next year or so is an option now.

People saying they can match a 3080 with an overclocked 2080 Ti are not very good at maths, but if the 3090 is 1.5x faster stock vs. stock yet doesn't overclock, then the real difference shrinks to below 1.4x.
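The shrink from 1.5x to below 1.4x is just the stock ratio divided by the older card's OC gain; a quick sketch (the ~8% 2080 Ti OC gain is an assumed illustrative figure, not a number from the thread):

```python
stock_ratio = 1.50      # 3090 vs 2080 Ti, stock vs stock (the premise above)
oc_2080ti = 1.08        # assumed ~8% OC gain on the 2080 Ti (illustrative)
oc_3090 = 1.00          # "doesn't overclock"

real_ratio = stock_ratio * oc_3090 / oc_2080ti
print(f"OC vs OC: {real_ratio:.2f}x")  # below 1.4x
```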


----------



## BigMack70

Krzych04650 said:


> NVIDIA SLI Support Transitioning to Native Game Integrations - VideoCardz.com
> 
> With the emergence of low level graphics APIs such as DirectX 12 and Vulkan, game developers are able to implement SLI support natively within the game itself instead of relying upon a SLI driver profile. The expertise of the game developer within their own code allows them to achieve the best...
> 
> This is really weird. Not the fact that they move to in-game implementations, but this article seems to imply that current SLI supported games won't work on 3090 even though it has NVLink bridge. That's just insane.


It just means they aren't updating or developing new SLI profiles anymore. 

SLI is incredibly dead tech. This is just another nail in the coffin.


----------



## Krzych04650

BigMack70 said:


> It just means they aren't updating or developing new SLI profiles anymore.
> 
> SLI is incredibly dead tech. This is just another nail in the coffin.


That's not what I mean; it implies that the 3090 is incompatible with existing SLI profiles, like, at all. Nobody expects support in future games, but cutting support in existing ones despite the NVLink bridge being present is unexpected.


----------



## dentnu

The MSI 3080 Gaming X Trio is a joke. It has 15 power phases compared to the FE's 18, and its max power limit is 350 watts. What is the point of having three 8-pin power connectors? Looks like it's either a 3090 Strix or FTW3 for me, as the 3090 Trio will also be a joke.


----------



## Krzych04650

dentnu said:


> The MSI 3080 Gaming X trio is a joke. It has 15 power phases compared to FE which has 18 and its max power limit is 350 watts. What is the point of having three 8 pins power connectors. Looks like its either a 3090 Strix or FTW3 for me as the 3090 trio will also be a joke.


MSI haven't been in very good shape since the 1000 series. The 2080 Ti Trio was absolutely massive, and with 3x 8-pin as well, just to have a 10W higher power limit than the FE. A lot of flashy design and no real functionality. Too bad, though that's true for most of them at this point. This time especially, they're almost all coming out with cards with lower PLs than the FE; one Zotac card has a freakin' 336W max limit. There were regular 2080 models with almost that much.


----------



## BigMack70

So... new plan... cross fingers and just try and grab any possible 3090 card on launch day and then replace it with the model I actually want later on after the supply chain resolves itself.


----------



## BigMack70

Krzych04650 said:


> Thats not what I mean, it implies that 3090 is incompatible with existing SLI profiles, like at all. Nobody expects support in future games but to cut support in existing ones despite NVLink bridge present is unexpected


Looks like it. From the driver notes:


> Implicit SLI Disabled on NVIDIA Ampere GPUs


Rip.


----------



## Krzych04650

BigMack70 said:


> Rip.


Yea. I was really set on two 3090s, but of course I assumed they would work in existing SLI titles. What a sad day: SLI dead, custom 3080 models failing super hard, clearly indicating that you won't be able to get good 3090 models anywhere close to launch, if at all, assuming you will need a 500W+ PL. Some serious first-world problems here


----------



## vmanuelgm

Owner here:



https://www.3dmark.com/spy/13917177



This puppy needs a block and a shunt, really!!!


----------



## Mooncheese

vmanuelgm said:


> Owner here:
> 
> 
> 
> https://www.3dmark.com/spy/13917177
> 
> 
> 
> This puppy needs block and shunt, really!!!


3090? Your result is hidden. 

Yes it's apparent that GA102 needs lower temps and more power to get more than a 5% overclock out of it. 

Undervolting while under water may be the best route in lieu of a shunt. 

Still waiting to see what kind of power the FTW3 3080 is limited to; MSI's Trio is limited to 350W, and that's not even "reference"!


----------



## Mooncheese

BigMack70 said:


> So... new plan... cross fingers and just try and grab any possible 3090 card on launch day and then replace it with the model I actually want later on after the supply chain resolves itself.


Why do you NEED to get one right now? 

Getting whatever random one only to get another later on doesn't sound like a prudent plan. Why not simply wait? This will resolve itself in November when RDNA2 releases and there is a 24GB card that sits between the 3080 and 3090 for $1000.

Watch NV release that 20GB GA102 real fast in response.


----------



## vmanuelgm

Mooncheese said:


> 3090? Your result is hidden.
> 
> Yes it's apparent that GA102 needs lower temps and more power to get more than a 5% overclock out of it.
> 
> Undervolting while under water may be the best route in lieu of a shunt.
> 
> Still waiting to see what kind of power the FTW3 3080 is limited to, MSI's Trio is limited to 350w and that's not even "reference"!


Big Voltaire at the moment!!! xDDDD


----------



## J7SC

Krzych04650 said:


> NVIDIA SLI Support Transitioning to Native Game Integrations - VideoCardz.com
> 
> With the emergence of low level graphics APIs such as DirectX 12 and Vulkan, game developers are able to implement SLI support natively within the game itself instead of relying upon a SLI driver profile. The expertise of the game developer within their own code allows them to achieve the best...
> 
> This is really weird. Not the fact that they move to in-game implementations, but this article seems to imply that current SLI supported games won't work on 3090 even though it has NVLink bridge. That's just insane. So many games that could be pushed to insane levels of performance with 3090 SLI... What a tremendous waste.
> 
> This combined with the fact that so far no models are able to get any overclock even out of 3080 really breaks a lot of hype for me. Looking at this and at sheer amount of SLI supported games I have scheduled, getting second 2080 Ti on the cheap for next year or so is an option now.
> 
> People who are saying that they can match 3080 with an overclocked 2080 Ti are not very good at maths, but if 3090 is 1.5x faster stock vs stock but doesn't overclock then the real difference shrinks to below 1.4x.





Krzych04650 said:


> Yea. I was really set for two 3090s but of course I assumed they will work in existing SLI titles. What a sad day, SLI dead, custom 3080 models failing super hard, clearly indicating that you won't be able to get good 3090 models anywhere close to launch, if at all assuming that you will need 500W+ PL. Some serious first world problems here


As an owner of 2x 2080 Ti and quite a few other multi GPU setups (work and play), I was at least pleased about this: _"Existing SLI driver profiles will continue to be tested and maintained for SLI-ready RTX 20 Series and earlier GPUs"_ in the article. 

Also, I wouldn't necessarily jump to the conclusion that all mGPU is dead or declining fast, just traditional SLI such as AFR (there are several different methods of doing SLI). On the contrary, I expect mGPU, tiles and so forth, and with them CFR, to gain in importance, with traditional SLI (AFR etc.) declining further. Per the article: _"For GeForce RTX 3090 and future SLI-capable GPUs, SLI will only be supported when implemented natively within the game."_ Even then, knowing a bit about the underlying game engine will help re. NV Inspector profiles that may be substituted. The 3090 / Ampere gen is likely a transitional GPU gen, given what has leaked out about mGPU and Hopper, Intel Xe and even post-RDNA2. Traditional SLI (AFR etc.) has several problems, such as micro-stutter and asynchronous loads, that mGPU CFR for example does not, which is why the focus will shift away from traditional SLI and towards CFR. BTW, I already run quite a few apps on my 2x 2080 Ti system in CFR; it doesn't work everywhere, though it does on _my fav_ apps, fortunately.

All that said, this news does impact the purchase decision re. 'one' vs 'two' 3090s (or even Ampere Titan) when the time comes


----------



## Krzych04650

J7SC said:


> As an owner of 2x 2080 Ti and quite a few other multi GPU setups (work and play), I was at least pleased about this: _"Existing SLI driver profiles will continue to be tested and maintained for SLI-ready RTX 20 Series and earlier GPUs"_ in the article.
> 
> Also, I wouldn't necessarily jump to conclusions that all mGPU is dead/declining fast, just traditional SLI such as AFR (there are several different methods to do SLI). On the contrary, I expect mGPUs, tiles and so forth -.and with it CFR - to gain in importance, with traditional SLI (AFR etc) declining further ...per article _ "For GeForce RTX 3090 and future SLI-capable GPUs, SLI will only be supported when implemented natively within the game._". Even then though, knowing a bit about the underlying game engine will help re. NV Inspector profiles that may be substituted. The 3090 / Ampere gen is likely a transitional GPU gen, given what has leaked out about mGPU and Hopper, Intel Xe and even post-RDNA2. Traditional SLI (AFR etc) does have several problems such as micro-stutter and asynchronous loads that for example mGPU CFR does not which is why the focus will shift away from traditional SLI and towards CFR,, BTW, I already run on quite a few apps on my 2x 2080 Tis system in CFR - doesn't work everywhere, though it does on_ my fav_ apps, fortunately
> 
> All that said, this news does impact the purchase decision re. 'one' vs 'two' 3090s (or even Ampere Titan) when the time comes


I agree, but the fact that traditional SLI is completely dropped from new cards without any backwards compatibility is really sad; there are so many great games that could be pushed now. Ehh... I will probably get a second 2080 Ti and extend the life of my X99 platform a bit longer, and then build a whole new system around March once Alder Lake (or whatever it's called) and high-PL 3090 models arrive. I've been following 2080 Ti SLI closely, since it was my potential way to go if Ampere wasn't good, so I am aware of CFR. I used 1080 SLI before and had a great time with it.

I have a lot of SLI-supported games scheduled, and in such games 2080 Ti SLI will be way faster than a 3090; I will leave single-card games like Control for the 3090 to play later, in Q2 2021. Everything about the 3090 should be resolved by that time. I just hope the LG 38WN95C gets out of the vaporware stage soon


----------



## HyperMatrix

dentnu said:


> The MSI 3080 Gaming X trio is a joke. It has 15 power phases compared to FE which has 18 and its max power limit is 350 watts. What is the point of having three 8 pins power connectors. Looks like its either a 3090 Strix or FTW3 for me as the 3090 trio will also be a joke.


15 (or was it 16?) power phases for the GPU, 3 for the memory/controller. Same as how the 2080 Ti was structured. Some list the total phases; some break them down by which phases feed which components.


----------



## HyperMatrix

Anyone know if there would be more overclocking headroom from removing the fans from the GPU and powering them separately? I assume their power consumption counts against the total card power available? Not sure how much power these fans draw, but some of my case fans are over 10W each. Would be nice to see the results with fans running at 100% and some extra wattage available, to get an idea of how well the cards would react under water.


----------



## BigMack70

HyperMatrix said:


> Anyone know if there would be more overclocking headroom by removing the fans off the GPU and powering them separately? I assume their power consumption deducts from the total card power available? Not sure how much power these fans take but some of my case fans are over 10W each. Would be nice to see the results with fans running at 100% with some extra wattage available to get an idea of how well the cards would react under water.


Given that a jump from 320 to 370W seems to yield next to nothing as far as overclocking performance, I highly doubt a few watts for the fans makes any difference.


----------



## HyperMatrix

BigMack70 said:


> Given that a jump from 320 to 370W seems to yield next to nothing as far as overclocking performance, I highly doubt a few watts for the fans makes any difference.


Gamers Nexus had the FE up at +900MHz on memory and a GPU clock of 2085MHz with the fans at 100% speed. The GPU was at 60C at these clocks. It wasn't stable for long-term use, but it shows that you can get decent clocks with enough cooling. Perhaps with a tad more wattage available, these clocks could have been stable.


----------



## Sheyster

Krzych04650 said:


> MSI haven't been in a very good shape since after 1000 series. 2080 Ti Trio was absolutely massive and with 3x8pin as well just to have 10W higher power limit than FE. A lot of flashy design and no real functionality. Too bad, though that is true for most of them at this point. This time especially, they are almost all coming out with cards with lower PL than FE, one Zotac card has a freakin 336W max limit. There were regular 2080 models with almost that amount


The MSI Duke OC 2080 Ti (my card) with the 380W Galax BIOS is a beast, even on air. The Trio was actually slower in benchmarks. I would not write off MSI for the 3090 Trio quite yet. Let's wait for reviews.


----------



## J7SC

HyperMatrix said:


> Gamer Nexus had the FE up to with +900MHz on memory, and GPU clock at 2085MHz with the fans at 100% speed. GPU was at 60C at these clocks. It wasn't stable for long term use, but shows that you can get decent clocks with enough cooling. Perhaps with a tad more wattage available, these clocks could have been stable.
> 
> View attachment 2459216


That's interesting, though we probably need more tests on more cards, including of course the 3090s... Still, an 8C difference resulted in an extra 30MHz if I read that table correctly, so the NV Boost algo seems even more temperature-aggressive.

Still early days, but I am wondering about the low number of announcements / teasers for factory water-cooled cards. Then again, there are custom bundles for the 3090 (note "all parts in stock"; prices include taxes):


----------



## lokran88

Any idea what this EK version from Asus could be at the top? Is this EKWB, i.e. a waterblock like the MSI Sea Hawk EK? But it's just 2x 8-pin; I would have expected 3 for a watercooled card. Still, I don't know what else it could be.


----------



## vmanuelgm

Big Voltaire asking for shunt mod:


----------



## HyperMatrix

vmanuelgm said:


> Big Voltaire asking for shunt mod:


Someone had stated that the FE cards can't be shunted because the boards are designed to check how much power is actually coming in and gate it.

I noticed that card was pulling up to 396W. Do we have any idea if it's an FE or AIB? I'm getting the impression that Ampere can clock very well if you have an unlocked power limit and can keep the card cool. I built my rig/cooling setup to previously power 3x Titan cards, so I would have no problem with a single card pulling 500W+ if it were an option.


----------



## vmanuelgm

It is a Big Voltaire Gigabyte Gaming OC...

Length is 32cm and the heatsink is working quite nicely. I saw der8auer's YouTube video in which he shunts the two resistors near the power plugs, and that seems to do the trick.

The bad thing with this Gigabyte is the power cable mess, which doesn't let you use the waterblocks made for reference boards unless you mod the waterblock (cutting some things here and there).


----------



## Glerox

Krzych04650 said:


> MSI haven't been in a very good shape since after 1000 series. 2080 Ti Trio was absolutely massive and with 3x8pin as well just to have 10W higher power limit than FE. A lot of flashy design and no real functionality. Too bad, though that is true for most of them at this point. This time especially, they are almost all coming out with cards with lower PL than FE, one Zotac card has a freakin 336W max limit. There were regular 2080 models with almost that amount


The 2080 Ti Lightning Z was OK, except for the bug at launch where you couldn't unplug the stock cooler fan lol


----------



## HyperMatrix

vmanuelgm said:


> It is a Big Voltaire Gigabyte Gaming OC...


Sorry I'm not familiar with this card. I thought the video said 3090?


----------



## Jpmboy

HyperMatrix said:


> Anyone know if there would be more overclocking headroom by removing the fans off the GPU and powering them separately? I assume their power consumption deducts from the total card power available? Not sure how much power these fans take but some of my case fans are over 10W each. Would be nice to see the results with fans running at 100% with some extra wattage available to get an idea of how well the cards would react under water.


AFAIK, the fans and lights do not count against the card's TDP rating. The TDP rating is based on the core and a BIOS-set limitation.


HyperMatrix said:


> Someone had stated that the FE cards can't be shunted because the boards are designed to check how much power is actually coming in and gate it.
> 
> I noticed that card was pulling up to 396W. Do we have any idea if it's an FE or AIB? I'm getting the impression that Ampere can clock very well if you had unlocked power limits and could keep the card cool. I built my rig/cooling setup to previously power 3x Titan cards so would have no problem with a single card pulling 500W+ if it were an option.


The shunt mods done on previous-gen cards were done specifically to overcome this power-induced clock-bin drop. If you know which resistors to mod on an Ampere, it works the same (i.e., it lowers the signal the vBIOS compares against). If we get a BIOS editor, no hardware mod is needed.
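For anyone new to shunt mods: the mod parallels a second resistor across the current-sense shunt, so the controller sees a smaller voltage drop and under-reports power draw; that's the "lowers the signal" effect described above. A toy calculation (the resistor values are illustrative, not the actual board values):

```python
def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

r_stock = 0.005   # a 5 mOhm current-sense shunt (illustrative value)
r_mod = 0.005     # identical 5 mOhm resistor soldered on top

r_eff = parallel(r_stock, r_mod)    # 2.5 mOhm
scale = r_eff / r_stock             # sensed voltage drop is halved

true_draw = 500.0                   # watts the card actually pulls
reported = true_draw * scale        # what the vBIOS thinks it pulls
print(f"reports {reported:.0f} W while really pulling {true_draw:.0f} W")
```

In other words, halving the sense resistance makes the vBIOS think the card is at half its limit, which is why the clock bins stop dropping.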


----------



## vmanuelgm

Big Voltaire=3090 in my words... Not launched yet, that's why I use that expression.

xDDD





Jpmboy said:


> AFAIK, the fans and lights do not come off the card's TDP rating. The TDP rating is based on the core and a bios-set limitation.
> 
> The shunt mods done on previous gen cards were done specifically to overcome this power-induced clock bin drop. If you know which resistors to mod, on an Ampere, it works the same. (eg, lowers the signal vBIOS compares against). If we get a bios editor, no hardware mod is needed.


The 3090 is very strangled in terms of power; it doesn't make sense...


----------



## domenic

It seems that many of us here will (eventually) be buying a ~$1,700-$2K 3090. I live in Wilmington Delaware where there is no sales tax. If you are local to the PA / MD / NJ area and wanted to ship to my place to save the tax and meet to pick it up I wouldn’t mind helping someone out…


----------



## HyperMatrix

Gamers Nexus just said on video that he has a custom VBIOS for the RTX 3080 with an unlocked power limit. So that's a good sign.

Update: FTW3 hitting 2055MHz while temps are cool -- at STOCK clock settings!

Update 2: They just asked Jacob from EVGA, live on YouTube, if they can flash an unlocked VBIOS live on air. They didn't answer that, but he did say that their 100% power target is substantially higher than Nvidia's, which explains the 2055MHz boost peak.

Update 3: With +30 on the GPU clock, the FTW3 is peaking at 2070MHz at 400W atm.


----------



## BigMack70

domenic said:


> It seems that many of us here will (eventually) be buying a ~$1,700-$2K 3090. I live in Wilmington Delaware where there is no sales tax. If you are local to the PA / MD / NJ area and wanted to ship to my place to save the tax and meet to pick it up I wouldn’t mind helping someone out…


I also volunteer for someone to ship me a $1500 graphics card. I'm a selfless kind of guy so I'd even accept two.


----------



## domenic

HyperMatrix said:


> Gamer Nexus just said on video that he has custom vbios for the rtx3080 with unlocked power limit. So that's a good sign.
> 
> update: FTW3 hitting 2055MHz while temps are cool -- at STOCK clock settings!


But isn't that a "private" BIOS coming from the AIBs, provided to guys like Steve from GN under NDA for their liquid nitrogen testing? Man, I wish we could get our hands on these custom BIOSes...

I am watching the same live stream. If the 3080 FTW is hitting these clocks, then the 3090 version looks like the one I want. The only problem is still water block availability, and then the issue of the backside memory chips. Argh.


----------



## HyperMatrix

domenic said:


> But isn't that a "private" BIOS coming from the AIBs provided to guys like Steve from GN under NDA for their liquid nitrogen testing? Man I wish we could get our hands on these custom BIOSs...
> 
> I am watching the same live stream. If the 3080 FTW is hitting these clocks then the 3090 version looks like the one I at least want. Only problem still is a water block being available and then the issue of the backside memory chips. Argh.


I think the fact you can flash a new bios at all is the important thing. Someone had dumped the Pascal Titan X bios before, but the card itself couldn't be written to, so no point there.


----------



## BigMack70

__ https://twitter.com/i/web/status/1306760396829634560
This is regarding the FTW3. Too bad they'll have about 5 of them and none of us even have a chance. But I think this confirms that eventually I'm going to try and get an FTW3 Hybrid. Even if it takes a few months.


----------



## HyperMatrix

BigMack70 said:


> __ https://twitter.com/i/web/status/1306760396829634560
> This is regarding the FTW3. Too bad they'll have about 5 of them and none of us even have a chance. But I think this confirms that eventually I'm going to try and get an FTW3 Hybrid. Even if it takes a few months.


440W isn't bad for stock TDP. And apparently there is a custom unlocked bios as well that will likely leak out one way or another. I was hoping for higher than this since the 3080 has a 420W limit and couldn't keep a stable 2070MHz at 55-60C. So what would that mean for 1.2x the core count and its power draw? Either way, guess I shouldn't complain. That's 65W above the FE card so with proper cooling, could do 2.1GHz just fine.


----------



## Glerox

Anyone still buying two rtx 3090s despite that SLI news? (hoping built-in mGPU support will be available in future games)


----------



## HyperMatrix

Glerox said:


> Anyone still buying two rtx 3090s despite that SLI news? (hoping built-in mGPU support will be available in future games)


only way I see mgpu coming back is along with chiplet designed gpus. I had 2 Titan x pascal cards in SLI. Prior to that I had 3x Maxwell Titans. And prior to that 4x Kepler Titans if I’m not mistaken. Though may have been 3x on Kepler as well.

Anyway, after a year of owning the 2x Pascal Titan setup, I sold one of the cards. Not because I needed the money, but because I had it disabled most of the time. The number of games that supported it was low and getting lower. Some games performed worse with it on. Those that did support it often didn't get it working right until months after launch, when I had already finished the game. Plus additional issues with stutter, poor scaling, alt-tabbing, and more that I thank god I can't remember anymore.

And if I remember correctly, you can't even SLI ray tracing. It's really not worth it unless you also use the NVLink for professional applications/rendering. If money is literally no issue and you wouldn't mind paying another $2k+ for extra performance in just a few games, go for it. But SLI itself has been dying for years, and died even faster once SLI was removed from lower-tier cards, leaving developers with an even smaller pool of players who would benefit from any development time dedicated to it.


----------



## pewpewlazer

domenic said:


> But isn't that a "private" BIOS coming from the AIBs provided to guys like Steve from GN under NDA for their liquid nitrogen testing? Man I wish we could get our hands on these custom BIOSs...


Most likely. Can't post BIOS because "under NDA", but they parade it around and dangle it in your face on the forums _haha look at what I have that you can't have because I'm special_. Typical for a while now.


----------



## J7SC

HyperMatrix said:


> only way I see mgpu coming back is along with chiplet designed gpus.(...)*And if I remember correctly, you can’t even SLI ray tracing. *It’s really not worth it unless you also use the NVLINK for professional applications/rendering. (...)


...per below you can do SLI ray tracing, at least in CFR (= my fav term of the week, it seems, probably because I'm having a good time with it in MS FS 2020 4K ultra). That said, for 2x 3090s and the earlier-mentioned SLI profile changes, the use case really should include productivity as well.


----------



## Mooncheese

BigMack70 said:


> __ https://twitter.com/i/web/status/1306760396829634560
> This is regarding the FTW3. Too bad they'll have about 5 of them and none of us even have a chance. But I think this confirms that eventually I'm going to try and get an FTW3 Hybrid. Even if it takes a few months.


Finally we get the TDP figure for the FTW3.

If the 3090 is limited to 440W, that's somewhat of an improvement over the FE (which has been seen hitting 395W), but nowhere near where it should be given that the 3090 has 20% more CUDA cores and correspondingly higher performance than the 3080.

This almost makes me want a 3080 unless we can get a higher power 3090.

Or, maybe wait for a 20GB 3080 at 420W.

I'm curious whether all of the video memory draws power even when half of it isn't in use, or whether there's some kind of intelligent power management where the 3090 (ideally) doesn't even power the modules on the top of the card unless they're needed.

The 3090 looks to be potentially 20% faster than the 3080 but only has another 20W on tap, whether you go with the FE or the FTW3?

If the 3080 FTW3 has a 420W limit then the 3090 should be at 500W. Not sure if it's capped at 440W because of VRM limitations, limits of the air cooler, safety concerns, or what.

Like I'm almost excited that FTW3 is at 440W but when you think about it, that's still not enough for this card.


----------



## HyperMatrix

J7SC said:


> ...per below you can do SLI ray tracing, at least in CFR (= my fav term of the week, it seems, probably because I'm having a good time with it in MS FS 2020 4K ultra). That said, for 2x 3090s and the earlier-mentioned SLI profile changes, the use case really should include productivity as well.


I’d say the average 50% scaling for 100% more money is bad. But that’d be stupid when I’m willing to pay 114% more money for the 3090 over the 3080 to get an extra 20%. Haha. If only there was more/better support.


----------



## J7SC

The other thing to consider with 2x 3090s would be power consumption...more than 900 W combined just for the GPUs is certainly possible, and folks w/ dual 3090s won't be running it on a 486/66...so throw in an oc'ed HEDT and you better go PSU shopping first
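For rough ballparking, that PSU math can be sketched as below. The 450W-per-GPU figure comes from the ~900W combined estimate above; the 300W CPU, 150W platform, and 30% headroom numbers are illustrative assumptions, not official specs:

```python
# Rough PSU sizing sketch for a multi-GPU build: sum worst-case component
# draw, then add headroom for transients and efficiency. The 30% margin is
# a common rule of thumb, not a vendor recommendation.
def recommended_psu_watts(gpu_w, n_gpus, cpu_w, platform_w=150, margin=0.30):
    """Return a suggested PSU rating rounded up to the next 50 W."""
    load = gpu_w * n_gpus + cpu_w + platform_w  # steady-state estimate
    sized = load * (1 + margin)                 # headroom for spikes
    return int(-(-sized // 50) * 50)            # round up to a 50 W step

# Two ~450 W 3090s plus an overclocked HEDT CPU (~300 W assumed):
print(recommended_psu_watts(450, 2, 300))  # 1800
```

In other words, a dual-3090 HEDT rig plausibly lands in 1600W+ PSU territory once headroom is included.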


----------



## pewpewlazer

HyperMatrix said:


> I’d say the average 50% scaling for 100% more money is bad. But that’d be stupid when I’m willing to pay 114% more money for the 3090 over the 3080 to get an extra 20%. Haha. If only there was more/better support.


50% scaling with driver level CFR seems EXTREMELY optimistic from that I've seen. If we could get 50% scaling with CFR, used 2080 Ti pricing would be way higher.


----------



## Mooncheese

HyperMatrix said:


> I’d say the average 50% scaling for 100% more money is bad. But that’d be stupid when I’m willing to pay 114% more money for the 3090 over the 3080 to get an extra 20%. Haha. If only there was more/better support.


Although SLI is no longer supported one way to view the 3090 coming from 2080 Ti is as such:

$700 = 20% bump at 4K with both cards at same power draw
$1500 = 40% bump at 4K with both cards at same power draw

This all hinges on whether or not the 3090 is actually 20% faster than the 3080.

Either way, this is actually worse than Turing as the prices have gone up.

Now it's $1500 for the 80 Ti instead of $1200.

Golf clap, I suppose we should remain enthusiastic.

Then there's the bots, I don't know if this is incompetence or intentional.

Couldn't NV and partners have prevented this by allowing pre-orders?

Will they rectify the situation before the 3090 release?


----------



## J7SC

pewpewlazer said:


> 50% scaling with driver level CFR seems EXTREMELY optimistic from that I've seen. If we could get 50% scaling with CFR, used 2080 Ti pricing would be way higher.


Scaling varies by application, yet if it means getting to playable vs. non-playable at 4K Ultra everything, it is worth it, IMO - if you already have the dual-GPU hardware anyway. But more importantly, CFR was never publicized by NVIDIA when it (re)introduced it around last November...you really had to know the right 3-4 NV Inspector settings. That said, the Crytek and 4A Games engines seem to really like it, in addition to quite a few other apps I mentioned before.

CFR was allegedly reintroduced (RTX-gen only) for game and software developers looking down the line to future mGPU setups with chiplets / 'tiles' etc., since CFR has inherent advantages over traditional SLI, like no micro-stutter. However, with the driver releases before Ampere, CFR seems to have disappeared again (hmm, ?!), or at least is more hidden. As next-gen mGPU setups come closer, I'm sure it will receive more attention again, though. Finally, I already have 2x w-cooled 2080 Tis for work and play, so with CFR working with the right set of drivers, and myself always doing SLI-type builds over the last five GPU gens, it makes sense to get the most out of the current setup...it allows me to sit back and choose the right upgrade path (i.e. which custom-PCB, w-cooled 3090s) w/o feeling rushed by a lack of 4K Ultra performance in my fav apps.


----------



## originxt

Managed to order one: PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB Triple Fan Edition. Unsure what PCB this is. Reference?

Edit: PNY canceled as it wasn't meant to be put on sale yet. Rip.


----------



## lester007

dual 8pin for that pny?


----------



## originxt

lester007 said:


> dual 8pin for that pny?


Yeah, not too enthused based on the results of dual 8 pin 3080 OC results. Honestly unsure if I want this particular version. Might cancel. Debating even waiting longer for the ftw3 honestly if I don't get it the first run. If I'm spending this much, might as well get the one I want.


----------



## vmanuelgm

originxt said:


> Managed to snag one. PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB Triple Fan Edition Unsure what pcb this is. Reference?


Yep, pure reference...

Dual 8-pin is enough to run these 3090s under water; a good power supply can deliver 350-400W per connector. You of course need an unlocked BIOS or a shunt mod.

My Gaming OC only goes to 390W with the OC BIOS.


----------



## psychrage

Got my order in on the PNY too. Between working through the 404s, BofA flagging my first "success" as fraudulent, okaying the amount, and another round of 404ing, the card had been on sale for probably 3.5 to 4 hours by the time I got through. Hoping my order sticks.

I've always viewed PNY as a low-quality brand, admittedly without fully knowing why or having any real reason to think that. Is there any reason for me to feel this way? In terms of the 3080/3090s, they're not ugly imo.

Also what is the difference between the two PNY cards?


----------



## kx11

GALAX US store should have stock soon for 3080



GeForce RTX™ 3080 Series


----------



## Krzych04650

HyperMatrix said:


> 440W isn't bad for stock TDP. And apparently there is a custom unlocked bios as well that will likely leak out one way or another. I was hoping for higher than this since the 3080 has a 420W limit and couldn't keep a stable 2070MHz at 55-60C. So what would that mean for 1.2x the core count and its power draw? Either way, guess I shouldn't complain. That's 65W above the FE card so with proper cooling, could do 2.1GHz just fine.


If this 440W is indeed a stock TDP, not max, then this is quite promising. The 3080 really shows that we are going to need way more power than the FE and reference models provide. The fact that the 3090 only has a 10% higher TDP despite having 20% more spec and more than twice the VRAM really doesn't bode well. It will dip hard on clocks like the 2080 Ti FE did with its 260W stock limit.

That's mostly the reason why the 3080 looks like it doesn't overclock: the 2080 Ti FE throttles down to the 1700s under heavy loads, while the 3080 is configured to maintain 1900+ at all times. Probably won't be the case with the 3090 FE and reference cards.


----------



## vmanuelgm

Can 3090 run Crysis Remastered???

Not so much...






But optimization is horrendous, we are in 2020 and this remaster is saturating one thread. I will try HT Off later...


----------



## BigMack70

vmanuelgm said:


> Can 3090 run Crysis Remastered???
> 
> Not so much...
> 
> 
> 
> 
> 
> 
> But optimization is horrendous, we are in 2020 and this remaster is saturating one thread. I will try HT Off later...


Everything about this remaster has looked awful to me. I'm glad I didn't buy it. The visuals don't look particularly remarkable by modern standards, and their way of hammering performance was the laziest thing ever - just completely turn off LOD. And now we know it has the same dumpster fire CPU optimization as the original. 

So very disappointing. If they really wanted to properly remaster Crysis and achieve the same effect as the original back in 2007, they should have fixed the CPU issues and the "can it run crysis?" setting should have been something like fully path traced lighting.


----------



## vmanuelgm

RDR2 at 3440x1440p Ultra


----------



## Mooncheese

originxt said:


> Managed to snag one. PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB Triple Fan Edition Unsure what pcb this is. Reference?


How? 



vmanuelgm said:


> RDR2 at 3440x1440p Ultra


Not very impressive, do you have any 1440p benchmarks? Timespy normal and extreme? More benchmarks please!


----------



## changboy

The card is not out yet, how can he do benchmarks??


----------



## Mooncheese

changboy said:


> The card is not out , how he can do benchmark ??


You do understand that all of the gameplay videos and benchmarks vmanuelgm has shown above are the 3090? "OriginXT" stated a page ago that they also have one in their possession and will also be testing it.


----------



## changboy

Mooncheese said:


> You do understand that all of the gameplay videos and benchmarks vmanuelgm has shown above are the 3090? "OriginXT" stated a page ago that they also have one in their possession and will also be testing it.


Ya, I saw it, and missed the other page where they were talking about testing that card, it's ok.
But the FPS doesn't seem much higher in Red Dead 2 vs. the 3080?


----------



## originxt

changboy said:


> The card is not out , how he can do benchmark ??


PNY canceled it, as the card wasn't meant to be put on sale lol. "Snag" meaning ordered, but I can see where people could get confused by the term. I'll go and edit my post above accordingly.


----------



## changboy

originxt said:


> Pny canceled as the card wasn't meant to be put on sale lol. Snag meaning ordered but I could see where people could get confused by the term. I'll go and edit my post above accordingly.


Was this with a 3080 or a 3090?


----------



## originxt

changboy said:


> This with 3080 or 3090 ?


3090.


----------



## chibi

vmanuelgm said:


> Owner here:
> 
> 
> 
> https://www.3dmark.com/spy/13917177
> 
> 
> 
> This puppy needs block and shunt, really!!!


Wow, how did you get one so soon? Any more available?


----------



## Mooncheese

Shunt modding GA102 doesn't look promising.









----------



## J7SC

What's the latest ETA and pricing info (if any at this stage) on the Kingpin 3090?

The KPE (and probably the Galax/KFA2 HOF OC Lab edition when it comes out) can likely do this (on a 3080, mind you) and more later on w/o shunt mods, 'this' being 2340 MHz on LN2.


----------



## HyperMatrix

vmanuelgm said:


> Can 3090 run Crysis Remastered???
> 
> Not so much...
> 
> 
> 
> 
> 
> 
> But optimization is horrendous, we are in 2020 and this remaster is saturating one thread. I will try HT Off later...


You kids are spoiled. When the original Crysis came out I don’t think I could get higher than 12fps maxed out. lol.


----------



## changboy

HyperMatrix said:


> You kids are spoiled. When the original crisis came out I don’t think I could get higher than 12fps maxed out. lol.


I had Crysis back then and it ran well on my AMD HD 4870 + Intel E8500 dual core lol!


----------



## HyperMatrix

Krzych04650 said:


> If this 440W is indeed a stock TDP, not max, then this is quite promising. 3080 really shows that we are going to need way more power than FE and reference models. The fact that 3090 has only has 10% higher TDP despite having 20% more spec and more than twice the VRAM really doesn't bode well. It will dip hard on clocks like 2080 Ti FE did with 260W stock limit.
> 
> Thats mostly the reason why 3080 looks like it doesn't overclock, 2080 Ti FE throttles down to 1700s under heavy loads, while 3080 is configured to maintain 1900+ at all times. Probably won't be the case with 3090 FE and reference


It’s 440W max. Gamers Nexus' livestream OC testing yesterday, with the 3080 model that has a 420W max TDP, couldn't get it higher than 406W, at least according to GPU-Z. And that was only good for 2025-2070MHz with 20Gbps memory.

These cards would be infinitely better with an unlocked power limit and cooler temps, though. Question is, if the FTW3 and KPE really use the same board design, why would EVGA want an unlocked BIOS to go out for the FTW3 when the biggest marketing difference for their cards is likely to be an even higher power limit on the KPE?

Currently waiting on 2 sets of info: der8auer's next shunt mod video, which already showed improvement with a slight bump, and Gamers Nexus' OC/LN2 test videos this weekend, which are likely to also be done with the unlocked-power BIOS he confirmed he has. That'll give us far more information than any speculation about performance.


----------



## Krzych04650

HyperMatrix said:


> It’s 440W Max. Gamer nexus livestream OC testing yesterday with the 3080 model that has 420W Max TDP couldn’t get it up higher than 406W at least according to GPU-Z. And that was only good for 2025-2070MHz with 20Gbps memory.
> 
> these cards would be infinitely better with an unlocked power limit though and cooler temps. Question is, if the FTW3 and KPE are really using the same board design, why would EVGA want an unlocked bios to go out for the FTW3 when the biggest marketing difference for their cards is likely to be an even higher power limit on the KPE?
> 
> Currently waiting on 2 sets of info. Derbauers next shunt mod video which already showed improvement with a slight bump. And Gamer Nexus OC/LN2 test videos this weekend, which is likely to also be done with an unlocked power bios that he confirmed he has. That’ll give us far more information than any speculation about performance.


Yea, I am also waiting for der8auer's video. I am not shunt modding a $1500 card, but it should show us where the clock ceiling is and how much the power limits are really holding these cards back. But if the 3080 FTW cannot do 2100 with 420W then it looks very much like Turing, where even a 50% PL increase over stock FE could power throttle in games. Like I said in a previous post, to get a PL on the 3090 equivalent to the 380W BIOS on the 2080 Ti you will need 510W, and it still won't be enough. At this point you may just as well hook up a chiller or cheap AC to your case and get those additional clocks through temp reduction; that's about the same level of efficiency.


----------



## changboy

So does this mean any card will perform around the same once in game, and the best OC will only give 1 or 2 FPS more at 4K ultra than a stock one? So it means going with the best cheap one, right?


----------



## BigMack70

changboy said:


> So does this mean any card will perform around the same once in game, and the best OC will only give 1 or 2 FPS more at 4K ultra than a stock one? So it means going with the best cheap one, right?


It probably means going with whatever card you want (cheap / aesthetic / etc) unless you plan to flash a custom/unlocked BIOS onto your card with water cooling - in which case you probably want to be more specific about your card choice.

It looks like the performance delta of the launch round of cards is all within +/- about 4% of each other, without much the end user can do to push past that. At least, if the 3090 is like the 3080.


----------



## J7SC

...I also look forward to DerBauer's 'part 2' vid on the Asus Tuf 3080 mods...never mind the shunt mods, the extra voltage measure points and especially the Elmor IC2 connection point are very interesting. I would also assume that the Asus Strix is equipped with similar features.


----------



## Krzych04650

changboy said:


> So does this mean any card will perform around the same once in game, and the best OC will only give 1 or 2 FPS more at 4K ultra than a stock one? So it means going with the best cheap one, right?


Well first of all, don't use an FPS difference out of context without providing the baseline performance, because it doesn't make any sense and makes you look bad. Use percentages.

As an example on an architecture we already know well, Turing: a good 2080 Ti overclock could hold a constant 2100 while a stock FE 2080 Ti was dipping into the 1700s under heavy load. This translated to around a 15% performance increase, so by your metric one would get 60 FPS instead of 52.


----------



## changboy

For me, if I want to buy one of the best models + waterblock, it's around $3100 CAD, not cheap.
A cheap one like the ASUS TUF is around $2375 CAD. The difference is huge.


----------



## ThrashZone

Hi,
Probably not a big deal to most, but NVIDIA finally shows a Win7 driver.


----------



## changboy

ThrashZone said:


> Hi,
> Probably not a big deal to most but nvidia shows a win-7 driver finally


You are running on windows 7 ?


----------



## domenic

*Not all "reference" 30 Series RTX cards are built the same!*

Until watching this Buildzoid PCB breakdown of the Zotac RTX 3080 Trinity I thought at least all reference cards would be basically the same. Not so much and I am now convinced you get what you pay for and the premium for quality custom cards like the FTW3 & Strix are well worth it.

Buildzoid commented this is the first of many such videos in which he will be analyzing many of the AIB and FE cards. The only "good thing" he could say about this particular card was that it was the first he looked at, so he couldn't yet say any others are worse. If you watch the first and last few minutes of the video, he basically concludes that Zotac cut corners in the power delivery and other components, and that the low power cap in the BIOS is probably the only thing preventing it from burning up. This weekend, when GN loads the uncapped private BIOSes and puts a bunch of cards under LN2, it will validate "don't try this at home on a crap card".

It may be a blessing in disguise if we can't snag a 3090 on the 24th, as it will give time for more deep dives like this to be completed, guiding our decision on which card is actually the best, and let the water block situation shake out.


----------



## changboy

Ya, and when Buildzoid did his video on all the 2080 Ti PCBs, at the end he said: if you want to watercool it, just buy the FE lol!


----------



## ThrashZone

changboy said:


> You are running on windows 7 ?


Hi,
Yes 
10 is way too weird for everyday use.


----------



## Mooncheese

I've said it before and I will say it again here, I believe the key to unlocking GA102 performance is bringing the core down to ~40C under a lot of water in conjunction with an undervolt.

Less voltage = less wattage drawn at a given freq.

~2100-2200 MHz @ .950v @ 440w ~40C may be the magic number.
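The intuition behind "less voltage = less wattage" can be sketched with the first-order CMOS dynamic-power relation, P ∝ V² × f. This is a rough approximation that ignores leakage and memory power, and the voltage/clock points below are hypothetical, not measured GA102 numbers:

```python
# First-order dynamic-power estimate: power scales with voltage squared
# times frequency. Illustrative only; real GPU draw includes leakage,
# memory, and VRM losses.
def scaled_power(p0, v0, f0, v1, f1):
    """Estimate power at (v1 volts, f1 MHz) from a measured point p0 W at (v0, f0)."""
    return p0 * (v1 / v0) ** 2 * (f1 / f0)

# Hypothetical: 440 W at 1.05 V / 2000 MHz, undervolted to 0.95 V
# while pushing 2100 MHz:
print(round(scaled_power(440, 1.05, 2000, 0.95, 2100)))  # 378
```

Which is why a ~0.1 V undervolt can buy back enough wattage headroom to hold higher clocks inside the same power limit.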


----------



## Mooncheese

NVIDIA GeForce RTX 3090 3DMark Time Spy scores leaked - VideoCardz.com


We have the first benchmarks results for the upcoming flagship TITAN-class graphics card, the GeForce RTX 3090. NVIDIA GeForce RTX 3090 is ~19% faster than RTX 3080 The GeForce RTX 3090 scores 20387 ‘graphics’ points in Time Spy 1440p preset. The same card scores 10328 points in Time Spy Extreme...




videocardz.com





Edit: 

That's a 24% bump over an overclocked 2080 Ti @ 375W (~16,500 Time Spy GPU) @ 1440p.

Assuming the 3090 is capable of a 7% overclock, that's around 30%; if 10%, around 35%, right in line with my predictions here:

Nvidia's Dumbest Decision (AdoredTV)

$1500 for 30% at 1440p and this is the only upgrade path for someone with a 2080 Ti. 

This is EXACTLY the proposition 1080 Ti owners faced in 2018, except now it's worse: now it's $1500+ before taxes.
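For anyone checking the arithmetic, here's a quick sketch using the leaked Time Spy graphics score (20387) against the ~16,500 overclocked 2080 Ti figure above; the 7% and 10% overclock scenarios are the hypotheticals from the post, not measured results:

```python
# Relative uplift between two benchmark scores, in percent.
def uplift(new, old):
    return (new / old - 1) * 100

print(round(uplift(20387, 16500), 1))         # 23.6  (stock 3090 vs OC'd 2080 Ti)
print(round(uplift(20387 * 1.07, 16500), 1))  # 32.2  (hypothetical +7% OC)
print(round(uplift(20387 * 1.10, 16500), 1))  # 35.9  (hypothetical +10% OC)
```

So the "around 30% / around 35%" ballpark above holds up, landing at roughly 32% and 36%.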


----------



## vmanuelgm

Hi guys.

I shunt modded my Giga 3090, touching only the closest resistors, just like der8auer did. I still see fluctuation, not as heavy as non-shunted, but fluctuation. I will probably have to touch the other 3 resistors; gonna wait for der8auer's advice in his next videos.

Some benchs after beta shunt mod:



















Power consumption has increased from 390W (the power-throttling point on my Giga) to 550W.
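For context on why the draw jumps like this: the card senses current as a voltage drop across shunt resistors of a known value, so soldering a resistor in parallel lowers the effective resistance and makes the card under-read, raising the real power ceiling. A minimal sketch of that math, with illustrative resistor values rather than the actual GA102 board values:

```python
# Effective resistance of two resistors in parallel (ohms).
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

# Actual draw when the controller still assumes the original shunt value
# but a resistor has been stacked across it.
def true_power(reported_w, r_shunt, r_added):
    r_eff = parallel(r_shunt, r_added)
    return reported_w * r_shunt / r_eff

# Hypothetical: a 5 mOhm stock shunt with a 15 mOhm resistor on top gives
# r_eff = 3.75 mOhm, so a reported 390 W is really about 520 W.
print(round(true_power(390, 0.005, 0.015)))  # 520
```

The same ratio explains why software power readings on a shunt-modded card can no longer be trusted at face value.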


----------



## HyperMatrix

vmanuelgm said:


> Hi guys.
> 
> I shunt modded my Giga 3090 touching only the closest resistors, just like der8auer did, I still see fluctuation, not so heavy as non shunted, but fluctuation. I will probably have to touch the other 3 resistors, gonna wait for der8auer's advice in next videos.
> 
> Some benchs after beta shunt mod:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Power consumption has increased from 390w (powerthrottling point in my Giga) to 550w.


Hope you have better luck with the additional shunt mods. Looking at your Port Royal score right now, it's only 13% higher than the EVGA FTW3 RTX 3080 that Gamer Nexus OC'd yesterday. I can't believe Nvidia power restricted these cards so much. 

edit: is that 550w total system power or just your 3090??

edit2: did you perhaps overclock the memory too high, causing a lot of errors/error correction which results in performance regression? Did you try lower clock speeds as well? Your score should be higher than what it is atm.


----------



## domenic

vmanuelgm said:


> Hi guys.
> 
> I shunt modded my Giga 3090 touching only the closest resistors, just like der8auer did, I still see fluctuation, not so heavy as non shunted, but fluctuation. I will probably have to touch the other 3 resistors, gonna wait for der8auer's advice in next videos.
> 
> Some benchs after beta shunt mod:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Power consumption has increased from 390w (powerthrottling point in my Giga) to 550w.


What are your temps? Thermal throttling maybe? Man we need to get these under water....


----------



## vmanuelgm

HyperMatrix said:


> Hope you have better luck with the additional shunt mods. Looking at your Port Royal score right now, it's only 13% higher than the EVGA FTW3 RTX 3080 that Gamer Nexus OC'd yesterday. I can't believe Nvidia power restricted these cards so much.
> 
> edit: is that 550w total system power or just your 3090??
> 
> edit2: did you perhaps overclock the memory too high, causing a lot of errors/error correction which results in performance regression? Did you try lower clock speeds as well? Your score should be higher than what it is atm.


550W for the card only.

What was Gamers Nexus' score???

70 degrees at load.


----------



## J7SC

vmanuelgm said:


> Hi guys.
> 
> I shunt modded my Giga 3090 touching only the closest resistors, just like der8auer did, I still see fluctuation, not so heavy as non shunted, but fluctuation. I will probably have to touch the other 3 resistors, gonna wait for der8auer's advice in next videos.
> 
> Some benchs after beta shunt mod:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Power consumption has increased from 390w (powerthrottling point in my Giga) to 550w.


Good stuff. That PortRoyal score in particular is superb, even when compared to a highly modded 3080 > here


----------



## HyperMatrix

vmanuelgm said:


> 550w the card only.
> 
> Which was the score of GamersNexus???
> 
> 70 degrees at load.


Gamer Nexus, OC settings/temps with Port Royal score:










If I remember correctly, they also got over 10,000 on time spy.


----------



## vmanuelgm

HyperMatrix said:


> Gamer Nexus, OC settings/temps with Port Royal score:
> 
> View attachment 2459334
> 
> 
> If I remember correctly, they also got over 10,000 on time spy.


This is rbuass at 2310...



https://www.3dmark.com/spy/13929835




Don't think Gamers Nexus got above 10k when this guy only did 10500 in graphics score...


----------



## changboy

Umm, if you need to mod the 3090 to get higher than a 3080, I don't think it's worth buying this card. Maybe there's something I don't understand, or the 3090 at this price is a complete fail.


----------



## vmanuelgm

changboy said:


> Umm, if you need to mod the 3090 to get higher than a 3080, I don't think it's worth buying this card. Maybe there's something I don't understand, or the 3090 at this price is a complete fail.


I had to mod the 2080Ti too to get the best of it...


----------



## changboy

What is your 3090? A Gigabyte?


----------



## vmanuelgm

changboy said:


> what is ur 3090 ? gigabyte ?


Yep, Gaming OC


----------



## domenic

vmanuelgm said:


> Hi guys.
> 
> I shunt modded my Giga 3090 touching only the closest resistors, just like der8auer did, I still see fluctuation, not so heavy as non shunted, but fluctuation. I will probably have to touch the other 3 resistors, gonna wait for der8auer's advice in next videos.
> 
> Power consumption has increased from 390w (powerthrottling point in my Giga) to 550w.


What resolution were the benchmarks - 4K? 1440?


----------



## changboy

vmanuelgm said:


> Yep, Gaming OC


This was the worst of the three 3080 cards in Gamers Nexus' testing the other day.
Maybe the Strix will shine a lot more.


----------



## vmanuelgm

3440x1440, talking about the gaming videos. Benchmarks like Time Spy or Time Spy Extreme run at their own fixed resolutions.

TimeSpy:


----------



## Mooncheese

domenic said:


> What resolution were the benchmarks - 4K? 1440?


I believe Timespy and Port Royal are 2560x1440 and Timespy Extreme is 3840x2160

14k in Port Royal is good, that's about 40% higher than my overclocked 2080 Ti @ 2100 MHz core, 7900 MHz memory @ 1.025v @ 375w.

I don't think the 3090 needs 550w to do that as the 3080 can do 12,400 or so @ 420w (Steve at GN had a 3080 FTW3 up to 12,400 in Port Royal) but I could be mistaken.


----------



## HyperMatrix

vmanuelgm said:


> 550w the card only.
> 
> Which was the score of GamersNexus???
> 
> 70 degrees at load.


@vmanuelgm you're famous now.

Here's some more leaked benchmarks on the NVIDIA GeForce RTX 3090









NVIDIA GeForce RTX 3090 Ultra-Enthusiast Graphics Card Benchmarks Leak Out, Up To 50% Faster Than The RTX 2080 Ti in 3DMark


The first benchmarks of NVIDIA's ultra-enthusiast GeForce RTX 3090 graphics card have leaked out, showing a 50% jump over RTX 2080 Ti.




wccftech.com













Even more NVIDIA GeForce RTX 3090 benchmarks leak - VideoCardz.com


NVIDIA GeForce RTX 3090 already in the wild It appears that while some people still struggle to find RTX 3080 in stores, others had no issues finding GeForce RTX 3090 ahead of its launch. It is unclear if the samples were obtained through official channels (review samples) or bought from...




videocardz.com


----------



## TheScarecow

Post a photo of the card please, otherwise you're just faking it


----------



## HyperMatrix

TheScarecow said:


> Post a photo of the card please otherwise your just faking it


He's not faking. He's been doing videos for a couple of days now.


----------



## TheScarecow

Then it will take 2 seconds to post a photo of the actual card instead of just trolling with edited videos


----------



## TwinParadox

@vmanuelgm 

Hi, could you please dump your RTX 3090 BIOS? I need it in order to update the GOP Updater tool for the new Ampere cards.

If you can't or don't want to share it, thanks anyway.


----------



## vmanuelgm

Incredible after these years u question me... Really...

















Your comments are very pathetic, @TheScarecow 37 total posts in this forum and one to question me... Very very pathetic, u should apologize!!!


----------



## Silent Scone

vmanuelgm said:


> Incredible that after all these years you question me... Really...
> 
> Your comments are very pathetic, @TheScarecow. 37 total posts in this forum and one of them to question me... Very, very pathetic; you should apologize!!!


And you broke NDA. Good luck getting sampled again.


----------



## HyperMatrix

Silent Scone said:


> And you broke NDA. Good luck getting sampled again.


He never signed an NDA because he's not a reviewer... man, you're dense... get off it, mate.


----------



## Silent Scone

HyperMatrix said:


> He never signed an NDA because he's not a reviewer....man you're dense...get off it mate.


You don't have to sign anything not to be involved in breaching a nondisclosure. Do you know where the card was sourced? Probably not...

No need to get offensive, far too many people getting irate over a commodity. Shows some serious underlying issues.


----------



## vmanuelgm

Silent Scone said:


> And you broke NDA. Good luck getting sampled again.


You are really a friend of mine...

Never going to understand some people around here, really...

Translate it and be serious; you are a joke and need to grow up!!!









Acuerdo de confidencialidad - Wikipedia, la enciclopedia libre (es.wikipedia.org)





Summing up: no signature, no NDA for me...


----------



## HyperMatrix

Silent Scone said:


> You don't have to sign anything not to be involved in breaching a nondisclosure. No need to get offensive, far too many people getting irate over a commodity. Shows some serious underlying issues.


Lol. He was not involved in any way with an NDA. Nvidia/AIBs didn't send this card to him. What's wrong with you?



vmanuelgm said:


> You are really a friend of mine...
> 
> Never going to understand some people around here, really...


Ignore him. He's clearly jealous that you have a card already and he doesn't.


----------



## Silent Scone

vmanuelgm said:


> You are really a friend of mine...
> 
> Never going to understand some people around here, really...
> 
> Translate it and be serious; you are a joke...
> 
> Acuerdo de confidencialidad - Wikipedia, la enciclopedia libre (es.wikipedia.org)


If you're going to flaunt results for something that's unreleased, expect to be butchered. I'm not attacking you, but if the card was sourced from GB directly, you've breached someone's trust. Morally, I don't find that OK.

The moral of the story is that you should have waited if you didn't want to be lambasted. I'm sure some appreciate the insight, though.



HyperMatrix said:


> Lol. He was not involved at all in any way with an NDA. Nvidia/AIBs didn't sent this card to him. What's wrong with you?
> 
> 
> 
> Ignore him. He's clearly jealous that you have a card already and he doesn't.


Sorry, I don't know who you are.


----------



## vmanuelgm

Silent Scone said:


> If you're going to flaunt results for something that's unreleased, expect to be butchered. I'm not attacking you, but if the card was sourced from GB directly, you've breached someone's trust. Morally, I don't find that OK.
> 
> The moral of the story is that you should have waited if you didn't want to be lambasted. I'm sure some appreciate the insight, though.


You confirmed you are a kiddo full of envy, really...

Grow up; you need it!!!

By the way, I am a lawyer here in Spain. You are going to question this too, but it is the truth...


----------



## Silent Scone

vmanuelgm said:


> You confirmed you are a kiddo full of envy, really...
> 
> Grow up; you need it!!!
> 
> By the way, I am a lawyer here in Spain. You are going to question this too, but it is the truth...


I'm not concerned with your profession, it makes no odds. Neither does the fact you have a card already, maybe I do too. The point I was making is you're becoming defensive when you really should have expected the reaction you've had. Regardless of where you obtained the card, it's most likely you've breached someone's trust. That can't be overlooked, but I'm sure many people are also enjoying the insight, so thanks for that.


----------



## vmanuelgm

Silent Scone said:


> I'm not concerned with your profession, it makes no odds. Neither does the fact you have a card already, maybe I do too. The point I was making is you're becoming defensive when you really should have expected the reaction you've had. Regardless of where you obtained the card, it's most likely you've breached someone's trust. That can't be overlooked, but I'm sure many people are also enjoying the insight, so thanks for that.


You should apologize. If you have the card and didn't sign an NDA, you can show some benchmarks. Please do!!!

Quit it, please; you should be embarrassed...


----------



## Silent Scone

vmanuelgm said:


> You should apologize. If you have the card and didn't sign an NDA, you can show some benchmarks. Please do!!!
> 
> Quit it, please; you should be embarrassed...


The only thing I'm embarrassed about is the lack of etiquette some enthusiasts show when they've been extended an olive branch. Enjoy the card.


----------



## vmanuelgm

Thanks, Mr Kiddo.

Don't worry, nobody will know who gave me the card...


----------



## Silent Scone

vmanuelgm said:


> Thanks, Mr Kiddo.
> 
> Don't worry, nobody will know who gave me the card...


There's a chance nobody will, no (yes, I know you didn't sign anything). They take this stuff very seriously.

Moving on, enjoy the card... it's a beast!


----------



## vmanuelgm

It is quite easy to avoid that: just remove the serials and keep them for myself, nothing else...


----------



## Spiriva

vmanuelgm said:


> It is quite easy to avoid that, just remove serials, and keep em for myself, nothing else...


I'm looking to buy a 3090 myself on the 24th of September. Thank you for sharing your benchmarks!
+rep!


----------



## MikeGR7

vmanuelgm said:


> It is quite easy to avoid that, just remove serials, and keep em for myself, nothing else...


Thank you for sharing!
I did the same thing years ago with the GTX 670; I was one of the first people to get that card, one day before public release.
I too faced distrust, and back then I didn't have a camera, lol. I used GPU-Z screenshots and managed to scan the very box in my computer's scanner, lolol.

Guess what: I went to the store and, voilà, found one piece on the shelf!
No idea how it ended up there, but I took it, went to the counter, and paid for it, so that made the "betrayal of trust" grumbling useless against me.

Many were on the verge of buying a GTX 580 for MORE money at that time, thinking the 670 had no chance of surpassing its performance... needless to say how many thanked me for posting my 3DMark scores...

There are people who look only out for themselves and people who try to actually help this community.
I'm glad you made the right choice.


----------



## Krzych04650

Silent Scone said:


> far too many people getting irate over a commodity. Shows some serious underlying issues.


Wow, that's really deep, you really must be a philosopher... Or maybe you are just parroting what was said in GN's video 😂🤣



vmanuelgm said:


> Hi guys.
> 
> I shunt modded my Giga 3090, touching only the closest resistors, just like der8auer did. I still see fluctuation, not as heavy as unshunted, but fluctuation. I will probably have to touch the other 3 resistors; I'm going to wait for der8auer's advice in his next videos.
> 
> Some benchmarks after the beta shunt mod:
> 
> Power consumption has increased from 390w (the power-throttling point on my Giga) to 550w.


An 11186 graphics score in Time Spy Extreme is around 41-42% over good 2080 Tis with a 380W BIOS. That is in line with our expectations, which take Ampere's much weaker OC potential into account. FE vs FE it is 58%, which really shows the gap shrinking after OC.
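As a sanity check on that 41-42% figure, here is the uplift arithmetic as a minimal sketch, using the shunt-modded Time Spy Extreme graphics score above and an overclocked 2080 Ti score reported elsewhere in this thread (substitute your own numbers):

```python
# Relative-uplift arithmetic for comparing graphics scores.
def uplift(score, baseline):
    """Percentage gain of `score` over `baseline`."""
    return (score / baseline - 1) * 100

shunted_3090_tse = 11186  # shunt-modded 3090 TSE graphics score
oced_2080ti_tse = 7933    # an overclocked 2080 Ti score from this thread

print(f"{uplift(shunted_3090_tse, oced_2080ti_tse):.1f}%")  # 41.0%
```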


----------



## HyperMatrix

Krzych04650 said:


> Wow, that's really deep, you really must be a philosopher... Or maybe you are just parroting what was said in GN's video 😂🤣


Well GN is honestly kind of a moron when he makes those comments. First off, he’s not a gamer. It’s obvious because he says even if you have a 980, hang on to it. Second, he’s sitting there with 5 rtx 3080 cards in front of him. I agree that a lot of entitled people were getting ridiculously upset beyond reason. But really scolding people who care enough to watch your videos about the rtx 3080 for wanting a 3080 while you have several of them is just asinine.


----------



## changboy

*** with people today; some would kill you because you have what they don't have. I can't believe some are this jealous. Weird mindset, really.

vmanuelgm has the card, and I am happy for him; he took the time to show us what the card can do before the launch.

Thank you vmanuelgm for this.


----------



## Silent Scone

Krzych04650 said:


> Wow, that's really deep, you really must be a philosopher... Or maybe you are just parroting what was said in GN's video 😂🤣


Haven't seen it, but if someone else is thinking the same thing, that tells you everything you need to know, lol. I've seen comments on e-tailer forums from people claiming to have called the police over lack of availability.


----------



## Krzych04650

HyperMatrix said:


> Well GN is honestly kind of a moron when he makes those comments. First off, he’s not a gamer. It’s obvious because he says even if you have a 980, hang on to it. Second, he’s sitting there with 5 rtx 3080 cards in front of him. I agree that a lot of entitled people were getting ridiculously upset beyond reason. But really scolding people who care enough to watch your videos about the rtx 3080 for wanting a 3080 while you have several of them is just asinine.


I also don't like it when he makes comments like that, bordering on ideology. Especially since nobody at GN is an actual gamer; they have said so themselves multiple times, and even if they hadn't, you can clearly see it. That's my biggest problem with the channel: the testing is good, but they don't have any real perspective on gaming, especially enthusiast gaming. This is generally the problem with all of these channels; I am not aware of anyone who has good testing and is an actual enthusiast gamer at the same time. Most of them probably haven't played a game seriously for a decade.


----------



## TheScarecow

Sorry for doubting you, vmanuelgm! Think you could try doing some benching with an A/C unit blasting the card?


----------



## ThrashZone

vmanuelgm said:


> Thanks, Mr Kiddo.
> 
> Don't worry, nobody will know who gave me the card...


Hi,
Nice, two days after launch and you got one; that is impressive.
What's not impressive is not sharing photos of it, lol 🤣


----------



## changboy

On the day of the 3080 launch, I got the chance to buy an ASUS TUF RTX 3080, but I didn't buy it; I'm waiting for the 3090. And this even though I could have bought it and sold it at a higher price like some do, because I don't find that a good way to make money in life. Some would sell their mother for money, lol.


----------



## HyperMatrix

ThrashZone said:


> Hi,
> Nice, two days after launch and you got one; that is impressive.
> What's not impressive is not sharing photos of it, lol 🤣


He has a 3090. The 3090 doesn't launch for another 5 days; he's had his for a few days now. He also shared pictures when asked, and attempted a BIOS dump on video while GPU-Z was running, as requested by another user.


----------



## ThrashZone

vmanuelgm said:


> Incredible that after all these years you question me... Really...
> 
> Your comments are very pathetic, @TheScarecow. 37 total posts in this forum and one of them to question me... Very, very pathetic; you should apologize!!!


Hi,
There it is


----------



## marc0053

Thanks for posting these early 3090 results and modding info, vmanuelgm. Appreciate it! Keep them coming.
If you have a chance, can you show the Ethereum mining hashrate? Cheers, Marc


----------



## dentnu

NVIDIA GeForce RTX 3090 gaming performance review leaks out (videocardz.com)

It seems that TecLab is back with yet another leaked review: NVIDIA GeForce RTX 3090 reviewed, only 10% faster than the RTX 3080? The guys who were the first to publish an RTX 3080 review are back with another piece, this time covering the flagship model, the GeForce RTX 3090.

Summary: the 3090 is about 10% faster than the 3080 at best. No idea if it's true.


----------



## elmosn

HyperMatrix said:


> Well GN is honestly kind of a moron when he makes those comments. First off, he’s not a gamer. It’s obvious because he says even if you have a 980, hang on to it. Second, he’s sitting there with 5 rtx 3080 cards in front of him. I agree that a lot of entitled people were getting ridiculously upset beyond reason. But really scolding people who care enough to watch your videos about the rtx 3080 for wanting a 3080 while you have several of them is just asinine.


The circlejerk is strong here.
I've come straight from Reddit just for the laughs.
Steve is not a gamer, but he is strictly objective and accounts for every personal opinion.
Now, on the other hand, here we have a lucky user with early access to a COMMODITY being cocky and an attention whore, and envious people trying to channel their hatred and frustration.
Always head of this site, but never imagined the toxicity of its community.

By the way, hello to everyone.


----------



## MikeGR7

elmosn said:


> The circlejerk is strong here.
> I've come straight from reddit just for the laughs.
> Steve is not a gamer, but is strictly objective and account for every personal opinion.
> Now, on the other hand, here we have a lucky user with early access on a COMMODITY being cocky and attwhore, and envy people trying to channel their hatred and frustration.
> Always head of this site, but never imagined the toxicity of its community.
> 
> By the way, hello to everyone.


I too "head" of Reddit and how it's full of uninformed kids commenting on things unknown to them.

Nice to have you here; be prepared to actually learn something about technology.


----------



## nick name

elmosn said:


> The circlejerk is strong here.
> I've come straight from reddit just for the laughs.
> Steve is not a gamer, but is strictly objective and account for every personal opinion.
> Now, on the other hand, here we have a lucky user with early access on a COMMODITY being cocky and attwhore, and envy people trying to channel their hatred and frustration.
> Always head of this site, but never imagined the toxicity of its community.
> 
> By the way, hello to everyone.


Ayyyyyy welcome!


----------



## Krzych04650

dentnu said:


> NVIDIA GeForce RTX 3090 gaming performance review leaks out - VideoCardz.com
> 
> 
> It seems that TecLab is back with yet another leaked review. NVIDIA GeForce RTX 3090 reviewed, only 10% faster than RTX 3080? The guys who were the first to publish RTX 3080 review are now back with another piece, but this time covering the flagship model — GeForce RTX 3090. The video by the...
> 
> 
> 
> 
> videocardz.com
> 
> 
> 
> 
> 
> Summary 3090 is about 10% at best faster than 3080 . No idea if its true.


There are many other leaks, for example 3DMark ones showing 20%. There are a few separate ones showing a 10300 GPU score in Time Spy Extreme, compared to 8600 for the 3080. Given that the difference between the 3080 and 2080 Ti in this benchmark is exactly the same as the average difference in games (30%), I'd rather believe these scores. But we will find out soon enough.

All of these 4-7% results would be terrible, though. Not because of the price (you don't have to buy it if it turns out bad), but because the 3090 is where GPU performance is going to cap for the next 2 years, so this has much wider implications for the future in general, at least at the high end.


----------



## nick name

Do people not know @Silent Scone's credentials?

And the latest 3090 bench leaks really aren't flattering when you factor in the price of the card. I saw them and headed over here to see the opinion of others. 

I wonder if there is going to be more headroom for overclocks with the 3090.


----------



## carlhil2

My OCed 2080 Ti scores 7933 in TSE, and I am going to be all over the 3090 when it hits MC...


----------



## SuprUsrStan

Has anyone noticed the boost clocks and memory clocks are significantly higher than the 3080's? It's almost 100 MHz higher on the core, and that reference card is hitting 370W.
It's said the 3090 FE should pull 390W. I thought 75W + 150W + 150W = 375W, and since the 12-pin is powered off of the 8-pins, how is it able to pull 390W?
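For the connector arithmetic being discussed, a quick sketch (the per-source figures are the nominal PCIe/ATX spec allocations; the FE's 12-pin is assumed here to be fed by an adapter from two 8-pins):

```python
# Nominal power budget for a card fed by the slot plus two 8-pin
# auxiliary connectors: 75 W from the PCIe slot, 150 W per 8-pin.
NOMINAL_LIMITS_W = {
    "PCIe slot": 75,
    "8-pin #1": 150,
    "8-pin #2": 150,
}

total = sum(NOMINAL_LIMITS_W.values())
print(total)  # 375
```

The 150 W per connector is a spec allocation rather than a hard electrical ceiling; the wiring and connectors can deliver more, which is presumably how a 390 W board power reading is possible without tripping anything.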


----------



## nick name

SuprUsrStan said:


> Anyone notice the boost clocks and the memory clocks significantly higher than the 3080 clocks? It's almost 100Mhz higher on core and that reference card is hitting 370W.
> It's said the 3090 FE should pull 390W. I thought 75W + 150W + 150W = 375W and since the 12 pin is powered off of the 8 pin, how is it able to pull 390W?
> 
> 


Shunt mod?


----------



## Silent Scone

Gigabyte RTX 3090 Gaming OC | eBay (www.ebay.co.uk)

"RTX 3090 IS RIGHT NOW THE FASTEST GAMING CARD IN THE WORLD, FASTER THAN THE RECENTLY RELEASED NVIDIA RTX 3080."

That was quick...


----------



## Mooncheese

dentnu said:


> NVIDIA GeForce RTX 3090 gaming performance review leaks out - VideoCardz.com
> 
> 
> It seems that TecLab is back with yet another leaked review. NVIDIA GeForce RTX 3090 reviewed, only 10% faster than RTX 3080? The guys who were the first to publish RTX 3080 review are now back with another piece, but this time covering the flagship model — GeForce RTX 3090. The video by the...
> 
> 
> 
> 
> videocardz.com
> 
> 
> 
> 
> 
> Summary 3090 is about 10% at best faster than 3080 . No idea if its true.


There's an extremely high likelihood that GA102-300 is being held back by its power limit. Although it has 20% more CUDA cores and so should theoretically be up to 20% faster, it's limited to 375W, whereas the 3080 is limited to 370W. To see a 20% difference, an unconstrained GA102-300 needs more power.
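The core-count gap is easy to check against the published spec-sheet shader counts (a quick sketch; real-world scaling is sublinear unless power and bandwidth scale with it):

```python
# Theoretical shader-count advantage of the 3090 over the 3080,
# using the public spec-sheet CUDA core counts.
CUDA_CORES = {"RTX 3090": 10496, "RTX 3080": 8704}

ratio = CUDA_CORES["RTX 3090"] / CUDA_CORES["RTX 3080"]
print(f"+{(ratio - 1) * 100:.1f}% cores")  # +20.6% cores
```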


----------



## skline00

Mooncheese, agreed on the power, AND the 3090 is really targeted at content creators and gamers with VERY high-end monitors (8K?).


----------



## Mooncheese

Krzych04650 said:


> There are many other leaks, for example 3DMark ones showing 20%. There are few separate ones showing 10300 GPU score in Time Spy Extreme, compared to 8600 of 3080. Given how the difference between 3080 and 2080 Ti in this benchmark is exactly the same as average difference in games (30%), I'd rather believe these scores. But we will find out soon enough.
> 
> All of these 4-7% scores would be terrible though. Not because of the price, you don't have to buy it if it turns out bad, but 3090 is where GPU performance is going to cap for next 2 years, so this has much wider implications for the future in general, at least on the high-end


That's not the average difference; it's 20% at 1440p and 30% at 4K, and you need to deduct 10% when you run the 2080 Ti at the same power draw, making for 10% at 1440p and 20% at 4K.

According to the Steam Hardware Survey, only 2% of us are actually at 3840x2160, with 3x more at 2560x1440, so for most of us the 3080 is 10% faster than the 2080 Ti (3440x1440 here). But hey, if you want to believe it's an upgrade coming from a 2080 Ti, more power to you.


----------



## Mooncheese

skline00 said:


> Mooncheese agree on the power AND the 3090 is really tageted to content creators and gamers with VERY high end monitors (8k?)


8K LMAO. NGreedia are out of touch. 

Ultrawide 1440p is where it's at, not 8K flat 16:9.


----------



## nick name

Silent Scone said:


> Gigabyte rtx 3090 gaming oc | eBay
> 
> 
> RTX 3090 IS RIGHT NOW THE FASTEST GAMING CARD IN THE WORLD, FASTER THAN THE RECENTLY RELEASED NVIDIA RTX 3080.
> 
> 
> 
> www.ebay.co.uk
> 
> 
> 
> 
> 
> That was quick...


Ba ha ha ha ha ha.

Edit:
Wait, does he have more than one 3090, or is he selling the 3090 that he's posted about on YouTube? The one he has a shunt mod video about. The one he is selling as "Opened, never used."


----------



## Mooncheese

Theory:

GA102 is so far into the power-efficiency curve that NV and partners have limited it to 370/375W and 420/440W respectively.

With TU102 there comes a point where throwing more power and voltage at the core yields diminishing returns; in fact, this is the case with any piece of silicon.

My TU102-300A will do 2040-2055 MHz core with an undervolt of 1.025V at ~300-330W, occasionally requiring 373W.

It can do 2100 MHz at 1.055-1.063V but consumes an additional 35W on average, making for a whopping 3% speed increase for 10% more power.

I believe GA102 is right at the edge of this curve, but worse!

It may be that getting another 5% out of GA102 requires another 20% more power, and this is why NV and partners are restricting it to around 400W. Another 5% would require around 500W or more, and they don't want people burning up their cards chasing numbers.
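The diminishing returns described above follow from first-order CMOS dynamic-power scaling, where power goes roughly as V² × f. A back-of-the-envelope sketch using the TU102 voltage/clock figures quoted (leakage and memory power are ignored, so treat it as a rough model):

```python
# First-order dynamic power model: P ~ V^2 * f.
def power_ratio(v0, f0, v1, f1):
    """Relative dynamic power of operating point (v1, f1) vs (v0, f0)."""
    return (v1 / v0) ** 2 * (f1 / f0)

perf_gain = 2100 / 2040 - 1                              # ~2.9% more clock
power_gain = power_ratio(1.025, 2040, 1.063, 2100) - 1   # ~10.7% more power

print(f"+{perf_gain:.1%} perf for +{power_gain:.1%} power")
```

Which lands close to the observed "3% speed for 10% more power".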

Anyhow, not impressed with Ampere.

NV played brinkmanship with TSMC and lost; TSMC didn't need their business, with RDNA 2 and Zen 3 filling its lines. NV saves a whopping $26 per yield and passed the operating cost onto the consumer in the form of a ~50% higher electricity bill than there would be with a 7nm TSMC die 50% of its size at the same speed (425mm² 7nm TSMC ≈ 628mm² 8nm Samsung).

And the mindless masses eat it up. They love it; they think they are getting a card that is 30%+ faster on average over the 2080 Ti when in actuality it's only 10% faster when running both cards at the same power draw at the resolution most people are at (only 2% are at 3840x2160). And even if you're at 4K, it's only 20% faster at the same power draw.

How much is one saving when the electricity bill goes up $20 a month on average compared to a card with the same performance on TSMC 7nm?

But NV saved $26 per yield!

"Ooooh, look at that cooler. Just focus hypnotically on the cooler; don't worry about any of this."

"You're getting sleepy. Focus on the cooler; we will count backwards from 10."

"Just buy it"

"Just buy it"

"Just buy it"

Enter Digital Foundry, now owned by NV: "Yeah, and the 3080 is 50% faster than the 2080 Ti in Doom Eternal! Never mind that the 2080 Ti is running at 260W and the 3080 at 350W! We can't compare them at the same power draw; that's beyond our capability!"

What a farce!

Basically NV pulled an Intel.

If AMD ever had an opportunity to supplant NV it's right now, ESPECIALLY in the laptop market.

Also, everyone is talking about NV turning some kind of corner with the pricing shenanigans. No, they didn't. In fact it's worse now when you factor in how they passed the cost of their business failings onto you in the form of a 50% higher electricity bill, all so they could save $26 per yield.

And again, just like in 2018 for someone with a 1080 Ti, the only upgrade path is the $1500 card!

Will it actually be 20% faster than GA102-200 under an ocean of water at 500W, though?

Assuming it is, that makes for a card that is 30% faster at 1440p and 40% faster at 4K at the same power draw as TU102!

For $1500-$1800 before adding the cost of a water block and taxes, or around $2000-2200.

Oh yeah, that's a compelling value proposition.

But don't worry, this is really for people with money flowing out of their ears, the 8K enthusiasts!

Golf clap?

OUT OF TOUCH WITH REALITY.


----------



## nick name

Mooncheese said:


> Theory:
> 
> GA102 is so far into the power-efficiency curve that that's why NV and partners have it limited to 370/375 and 420/440w respectively.
> 
> With TU-102 there comes a point where throwing more power and voltage at the core yields diminishing returns, in fact this is the case with any piece of silicon.
> 
> My TU-102-300a will do 2040-2055 MHz core with an undervolt of 1.025v @ ~300-330w, occasionally requiring 373w.
> 
> It can do 2100 MHz @ 1.055-1.063v but consumes an additional 35w on average, making for a whopping 3% speed increase for 10% more power.
> 
> I believe GA102 is right at the edge of this curve, but worse!
> 
> It may be the case that getting another 5% out of GA102 requires another 20% more power and this is why NV and partners are restricting it to around 400w. Another 5% requires around 500w or more and they don't want people burning up their cards chasing numbers.
> 
> Anyhow, not impressed with Ampere.
> 
> NV played brinkmanship with TSMC and lost, TSMC didn't need their business with RDNA 2 and Zen 3. NV saves a whopping $26 per yield and passed the operating cost onto the consumer in the form of a ~50% higher electricity bill than would otherwise be with a 7nm TSMC die 50% it's size at the same speed (425mm2 7nm TSMC = 628mm2 8nm EUV).
> 
> And the mindless masses eat it up, they love it, they think they are getting a card that is 30%+ faster on average over 2080 Ti when in actuality it's only 10% faster when running both cards at the same power draw at the resolution that most people are at (2% at 3840x216). And even if youre at 4K, it's only 20% faster at the same power draw.
> 
> How much is one saving when the electricity bill goes up $20 a month on average compared to a card with the same performance on TSMC 7nm?
> 
> But NV saved $26 per yield!
> 
> "Ooooh, look at that cooler, just focus hypnotically on the cooler, don't worry about any of this"
> 
> "Youre getting sleep, focus on the cooler, we will count backwards from 10"
> 
> "Just buy it"
> 
> "Just buy it"
> 
> "Just buy it"
> 
> Enter Digital Foundry, now owned by NV "Yeah and the 3080 is 50% faster than the 2080 Ti in Doom Eternal!" Nevermind that the 2080 Ti is running at 260w and the 3080 @ 350w! We can't compare them at the same power draw, that's beyond our capability!"
> 
> What a farce!
> 
> Basically NV pulled an Intel.
> 
> If AMD ever had an opportunity to supplant NV it's right now, ESPECIALLY in the laptop market.


So do you watch a lot of AdoredTV?


----------



## Mooncheese

nick name said:


> So do you watch a lot of AdoredTV?


Is that an attempt at a jab or a genuine question?

His analysis is top-notch; the prognostication business is always a gamble for anyone, however.


----------



## kx11

der8auer talking about the PCB and voltages of the ASUS RTX 3080 TUF, plus the shunt mod.


----------



## nick name

Mooncheese said:


> Is that an attempt at a jab or a genuine question?
> 
> His analysis is top notch, the prognostication business is always a gamble for anyone however.


A jab, but a friendly jab.


----------



## Mooncheese

nick name said:


> A jab, but a friendly jab.


----------



## Zurv

More faster, more better.

Cards power limited? _yawn_ It is what it is. Nothing new.
No one cares about conspiracy stuff here. Nvidia has huge margins (and? Good for them; that is what happens when they only have to compete against themselves), and they will limit card power so they don't have to worry about RMAs (or to lock in card tiers).
Also, performance per watt is better for the 3000 series, but no one is limiting themselves to checking power for the same output. Dumb marketing that doesn't matter (unless you are a miner or a data center, maybe?).

To me the 3090 is a bargain compared to the cost of the RTX Titan (of which I have two).
Clearly the shunt mods work, but it doesn't look like the extra power helps much toward higher clocks. That said, I'll likely mod mine, because the mod does make the OC more stable, which is a big plus.
Maybe the point where the cards are power limited is the sweet spot (it sort of looks like that from the 3080), which wasn't the case for older series.

I didn't get much extra from power on the 2080 Ti (I had 2 Kingpin 2080 Tis and 2 2080 Ti FEs). Yes, the Kingpin was better, but not by much, and that was more about the binning than the XOC-no-limits BIOS (to be fair, I did shunt mod the FEs; all my systems are also running 5-6 360 rads and multiple pumps). The KPE could hold about 50-75 MHz more than the launch FE, at a billion times the cost. Holy cow, the KPE + blocks was costly. I buy a lot of cards, CPUs, etc. and almost never feel like I wasted money; I did with the KPE. I could have picked up more RTX Titans. I don't do LN2, nor did I use the AIO; people using either of those likely got more out of the KPE. But so is life.

I'm bummed about SLI.
The 2080 Ti is slow... the 3080 is slow... the 3090 will be slow!
I want the power for native 4K and full ray tracing! It makes me sad (no matter how good DLSS 2.x is, which oddly is pretty good) that I'll have to play Cyberpunk at 1080p to use ray tracing!

As I pour one out for SLI... this pic is still fitting (and still true, IMO... well, maybe; I don't have the 3090s yet).


----------



## Mooncheese

Monkey business:

Now imagine this situation with another 100-150W on top of that on air.

And that's why NV and partners are limiting GA102 to 370/375W and 420/440W.

Sure, you can extract another 10% out of GA102-300 by increasing TDP from 390W to 550W, but how long will the card last when its memory controller is already at 105°C at only 370W on air?
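For anyone following the shunt-mod talk: the power limiter only sees the voltage drop across a known shunt resistor, so stacking a second resistor in parallel makes the controller under-read current by the resistance ratio. A minimal sketch with illustrative values (the 5 mΩ figure is an assumption, not from any specific PCB):

```python
# Effect of stacking a second shunt resistor in parallel on the power
# the card *reports*, and therefore on where the limiter kicks in.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005                    # stock shunt, ohms (illustrative)
R_MOD = parallel(R_STOCK, 0.005)   # identical resistor stacked on top

reported_limit_w = 390             # where the stock limiter engages
actual_at_limit_w = reported_limit_w * R_STOCK / R_MOD

print(round(actual_at_limit_w))  # 780
```

In practice, voltage and thermal headroom run out well before the doubled ceiling, which squares with the ~550w figure observed earlier in the thread.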


----------



## doom26464

nick name said:


> So do you watch a lot of AdoredTV?


He only mass-spammed that video in what feels like every other thread on this forum.

Once again, ignoring capacity. Sure, the Samsung node is obviously nowhere near as good as TSMC's, but what good would it do to produce faster, better cards if there were next to none available through the product life cycle? I know right now this is pretty much a paper launch and availability is zip, but at least with Samsung they can ramp up to meet demand; no way that would happen with TSMC and its available capacity for Nvidia.


----------



## Esenel

Silent Scone said:


> Gigabyte rtx 3090 gaming oc | eBay
> 
> 
> RTX 3090 IS RIGHT NOW THE FASTEST GAMING CARD IN THE WORLD, FASTER THAN THE RECENTLY RELEASED NVIDIA RTX 3080.
> 
> 
> 
> www.ebay.co.uk
> 
> 
> 
> 
> 
> That was quick...


Very cheap move by our Spanish fellow here.
"Unused," with a shunt mod :-x
Of course.


----------



## kiriakos

Mooncheese said:


> Now imagine this situation with another 100-150w on top of that on air.
> 
> And that's why NV and partners are limiting GA102 to 370/375 and 420/440w.
> 
> Sure you can extract another 10% out of GA102-300 increasing TDP from 390 to 550w but how long will the component last when this memory controller is at 105C at only 370w on air?


NVIDIA invented another hot pan that wastes too much energy to operate.
The best question for now is when NVIDIA will move to 6nm lithography instead of 8nm.
If they follow that road, the RTX 4000 series will be a much better performer, running cooler and drawing 30% less energy.


----------



## smonkie

Esenel said:


> Very cheap move of our spanish fellow here.
> Unused with a shunt mod :-x
> Of course.


Do you mean vmanuelgm? How do you know he's the same user?


----------



## Mooncheese

doom26464 said:


> He only mass *spamed* that video like every other thread on this forum.
> 
> Once again ignoring capacity. Sure there is obviously the point *samsung* node is nowhere near as good as TSMC, but what good would it do to produce cards that would be faster and better if there was like next to none *avaible* through the product life cycle. I know right now this is pretty much a paper launch and *avaiblty* is zip but at least with *samsung* they can ramp up to meet demand, no way it would happen with TSMC and *there* *avaible* capacity for Nvidia.


Ignoring capacity? TSMC could produce these wafers for NV, but not at the price NV wants. Looking into this, apparently Apple was saturating the 7nm queue, but they've switched their demand to 5nm, just as was predicted.

TSMC may no longer be at capacity for 7nm. The fact is that NV didn't want to pay $8k per wafer when they can pay $3k per wafer and pass the cost onto the consumer in the form of higher electricity prices, because at this point the mindless masses will buy whatever they make absent competition.

"30% increase at 1440p for $1500-1700 before taxes at the same power draw compared to my 2080 Ti? Thank you sir, may I have another?"

This isn't a paper launch.

I've also highlighted the spelling errors in your comment in bold.

My comments are filled with relevant information; yours amount to baseless insults, probably because you don't like the FACT that GA102-200 isn't more than 10% faster than TU102-300A at 1440p at the same power draw.


----------



## Mooncheese

kiriakos said:


> NVIDIA has built another hot pan that wastes too much energy to operate.
> The real question now is when NVIDIA will move to 6nm lithography instead of 8nm.
> If they follow that road, the RTX 4000 series will be a much better performer, running cooler and drawing around 30% less energy.


Speculation: NVIDIA's Hopper Architecture Will Be Made On TSMC's 5nm Process, Launching 2021

This depends on how successful Ampere is. 

If we don't buy it, then they will be forced to release a product that the consumers will buy. 

Sadly the mindless masses are definitely buying this mediocrity.


----------



## originxt

Do we have a rough idea of PCB sizes for the 3090? Or should we assume the PCB is slightly smaller than the cooler length/width?


----------



## nick name

smonkie said:


> Do you mean vmanuelg? How do you know he's the same user?


Same picture displayed here as in the eBay listing.


----------



## kiriakos

Mooncheese said:


> Speculation: NVIDIA's Hopper Architecture Will Be Made On TSMC's 5nm Process, Launching 2021
> 
> This depends on how successful Ampere is.
> 
> If we don't buy it, then they will be forced to release a product that the consumers will buy.
> 
> Sadly the mindless masses are definitely buying this mediocrity.


There are a few mindless masses, but most posts screaming I WILL BUY ONE!! are made not by consumers but by sellers, a form of influence... by the end of October the dust will settle back to the floor where it belongs.
A 5nm GPU? That is good news.
It's worth the wait.


----------



## newroo

Will the PCB length be added once that information becomes available? Guessing many of us will overclock and put waterblocks on the 3090, so the custom PCB length might be of interest. It is to me at least.


----------



## HyperMatrix

Is there any way to block a user here so you don’t even have to see their posts anymore? Really getting tired of the cheesy non stop negative conspiracy theory essay spam that some keep posting about Nvidia and these cards when I’m just trying to browse for real information.


----------



## pantsoftime

Do we have any sense of what the partner boards power limit will be for a reference style board? I'm wondering in relation to the popular 380W Galax BIOS that many people use on the 2080Ti. I've been trying to parse the numbers posted so far and I haven't found a good sense of how fast a 3090 would be in relation to a 380W 2080Ti. Sounds like maybe 20-30% in a raster app?


----------



## originxt

HyperMatrix said:


> Is there any way to block a user here so you don’t even have to see their posts anymore? Really getting tired of the cheesy non stop negative conspiracy theory essay spam that some keep posting about Nvidia and these cards when I’m just trying to browse for real information.


Go into their profile and click ignore. Blocks all the posts it seems. Would like to have some real news and discussions on the actual product instead of rumor mills, what-ifs, and what aboutisms.


----------



## HyperMatrix

pantsoftime said:


> Do we have any sense of what the partner boards power limit will be for a reference style board? I'm wondering in relation to the popular 380W Galax BIOS that many people use on the 2080Ti. I've been trying to parse the numbers posted so far and I haven't found a good sense of how fast a 3090 would be in relation to a 380W 2080Ti. Sounds like maybe 20-30% in a raster app?


Reference boards are using 2x 8-pin. To stay within spec, they have to keep a max of 375W. Highest I’ve seen one pull is a peak of about 390W. Some of the cards actually have a TDP/power limit lower than 375W and have a board design that couldn’t handle much more power than that for long term use.

The 3090 uses the same chip as the 3080, with 20.5% more cores enabled and 23% more bandwidth. The only factor that will determine the performance gap between them is the power limit. A 375W-limited 3090 would probably not perform more than 5-10% faster than a 420W 3080.

To get a 3090 overclocked to the same level as a 3080 with a 420W power limit, you'd need a 500W card, whether through a BIOS flash or shunt mods. But if you get a reference design card... it's most likely not going to be made with components that can handle that level of output.
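The in-spec budget math above can be sketched in a few lines (a minimal illustration; the 75W slot and 150W-per-8-pin figures are the nominal PCIe limits being referenced):

```python
# Nominal PCIe power budget: 75 W from the x16 slot
# plus 150 W per 8-pin auxiliary connector.

PCIE_SLOT_W = 75
EIGHT_PIN_W = 150

def board_power_budget(num_8pin: int) -> int:
    """Maximum in-spec board power for a card with num_8pin 8-pin connectors."""
    return PCIE_SLOT_W + num_8pin * EIGHT_PIN_W

print(board_power_budget(2))  # reference 2x 8-pin design -> 375
print(board_power_budget(3))  # typical 3x 8-pin custom card -> 525
```

Cards can still spike briefly past these numbers (like the ~390W peak mentioned above), since the limits describe sustained draw rather than transients.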



originxt said:


> Go into their profile and click ignore. Blocks all the posts it seems.


OMG it worked! No more Mooncheese spam! Thank you!


----------



## J7SC

Mooncheese said:


> Speculation: NVIDIA's Hopper Architecture Will Be Made On TSMC's 5nm Process, Launching 2021
> [...]


No doubt the 3090 is an interesting GPU I still look forward to playing with, but the MCM approach for Hopper (or later?) gens has been mentioned on multiple occasions since at least '18. Multi-chiplets / tiles just make way too much sense not to pursue (also witness Intel's Xe tile GPU). MCM obviously needs corresponding drivers, which is why NVL-SLI-CFR made that 'undocumented' appearance in November '19. While CFR has been disabled (or at least hidden even more) again in the latest NVIDIA drivers (right before the Ampere launch, btw), I think it will make a comeback.

In the end, I don't know which Nvidia (and AMD) generation will introduce MCM, but it is a question of when, not if, IMO. At least I hope so, as I am really enjoying CFR on dual 2080 Tis at 4K with apps that support it. That on MCM would be fantastic. The huge power consumption of 3080/3090 Ampere, even with baked-in limits holding it back and before shunt mods, intrinsically also makes the case for MCM in future gens. Igor's Lab had a nice video on PSU needs for Ampere yesterday, showing brief spikes of a stock-BIOS max-PL 3080 at over 500W. Finally, I'm kind of wondering what AMD has up its sleeve re. MCM... comparing Ryzen/TR 1st gen to current, and what look like reasonable leaks (still with salt) on the upcoming Ryzen iterations, they are making huge progress in tying the chiplets together in hardware via IF with improved latency... while a CPU is not a GPU, that is still a good head start re. knowledge of the engineering and manufacturing implications.

But in the meantime, a custom-PCB w-cooled high-PL 3090 looks tempting to me while I wait for MCM... the separate GPU loop I built for 2x 380W 2080 Tis should be able to handle it comfortably w/o too much fuss.


----------



## MonnieRock

J7SC said:


> But in the meantime, a custom-PCB w-cooled high-PL 3090 looks tempting to me while I wait for MCM


This is my plan too. Something like an EVGA RTX 3090 FTW3 Ultra under water, upgrading from SLI Titan Xp's. Skipped Turing.


----------



## originxt

Supposedly the 3090 FTW3 is on the same board as the 3080? Do we have PCB dimensions of the FTW3 3080 by chance?


----------



## padman

smonkie said:


> Do you mean vmanuelg? How do you know he's the same user?


The eBay user selling this card is called "vitolodebamio" and I figured it must be some kind of name+surname combination, but since I'm not Spanish I googled "vitolo debamio" and Google quickly corrected me to "Vitolo de Bamio". Searching for this name yielded the thread ASUS Rampage IV Extreme socket 2011 - Página 202 on a Spanish forum, where a user named "vmanuelgm" says "I'm not Billy, I'm Vitolo de Bamio, hehehe" (via Google Translate).
This should be enough confirmation.


----------



## doom26464

Mooncheese said:


> *Ignoring capacity. TSMC could produce these wafers for NV but not at a price that NV wants. Looking into this, apparently Apple was saturating the 7nm queue but they've switched their demand to 5nm, just as was predicted.
> 
> They may no longer be at capacity for 7nm. The fact is that NV didn't want to pay $8k per wafer when they can pay $3k per wafer and pass the cost onto the consumer in the form of higher electricity prices because at this point the mindless masses will buy whatever they make absent competition.
> 
> "30% increase at 1440p for $1500-1700 before taxes at the same power draw compared to my 2080 Ti? Thank you sir, may I have another?"
> 
> This isn't a paper launch.
> 
> I've also highlighted the spelling errors in your comment in bold.
> 
> My comments are filled with relevant information, yours amount to baseless insults, probably because you don't like the FACT that GA102-200 isn't more than 10% faster than TU-102-300a at 1440p at the same power draw.


I don't own a 2080 Ti so I couldn't care less; besides, nobody with a 2080 Ti should really be rushing to buy a 3080.

2080s and below, or 10-series owners, would see a larger benefit.

No baseless insults, merely pointing out the flaw in your spammed AdoredTV video.

You seem very angry or disturbed over this launch; may I suggest some rest and some soothing tea?


----------



## ThrashZone

doom26464 said:


> I don't own a 2080 Ti so I couldn't care less; besides, nobody with a 2080 Ti should really be rushing to buy a 3080.
> 
> 2080s and below, or 10-series owners, would see a larger benefit.
> 
> No baseless insults, merely pointing out the flaw in your spammed AdoredTV video.
> 
> You seem very angry or disturbed over this launch; may I suggest some rest and some soothing tea?


Hi,
lol yeah youtube spammer is spot on


----------



## nick name

padman said:


> The eBay user selling this card is called "vitolodebamio" and I figured it must be some kind of name+surname combination, but since I'm not Spanish I googled "vitolo debamio" and Google quickly corrected me to "Vitolo de Bamio". Searching for this name yielded the thread ASUS Rampage IV Extreme socket 2011 - Página 202 on a Spanish forum, where a user named "vmanuelgm" says "I'm not Billy, I'm Vitolo de Bamio, hehehe" (via Google Translate).
> This should be enough confirmation.


I know this is the 3090 thread, but how are people not more bothered by this?


----------



## domenic

Noticed this afternoon the EKWB configurator is now listing all EVGA 3080 models (including the custom FTW3) in the drop down selector box & then "coming soon" on the next page. If you try to pick the 3090 in the configurator EVGA isn't listed at all however.

This seems like a good sign as I would assume the 3090 FTW3 PCB is most likely identical with the difference being memory chips on the backside. What are the chances their accompanying backplate will be enough to cool those additional chips / heat load?


----------



## Kalm_Traveler

domenic said:


> Noticed this afternoon the EKWB configurator is now listing all EVGA 3080 models (including the custom FTW3) in the drop down selector box & then "coming soon" on the next page. If you try to pick the 3090 in the configurator EVGA isn't listed at all however.
> 
> This seems like a good sign as I would assume the 3090 FTW3 PCB is most likely identical with the difference being memory chips on the backside. What are the chances their accompanying backplate will be enough to cool those additional chips / heat load?


I'm curious to see an actual 3090 PCB because, looking at the FE 3080 teardown GamersNexus did, I don't see any unoccupied memory pad spots on the back of the PCB. There are 2 unoccupied pad spots on the front though, which if populated would give 12GB (with the 3080's 1GB chips). That makes me think the 24GB will likely come from simply populating all 12 memory spots on the 'front' with 2GB chips, rather than only 10 of them with 1GB chips.
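The capacity arithmetic behind that guess is easy to check (a throwaway sketch using only the pad counts and module sizes from the post):

```python
# Capacity per configuration: populated memory pads x module size.

def vram_gb(populated_pads: int, module_gb: int) -> int:
    return populated_pads * module_gb

print(vram_gb(10, 1))  # 3080 as shipped: 10 of 12 pads, 1 GB chips -> 10
print(vram_gb(12, 1))  # all front pads with 1 GB chips -> 12
print(vram_gb(12, 2))  # all front pads with 2 GB chips -> 24, the 3090's capacity
```

Only the 12x 2GB case lands on the 3090's advertised 24GB without resorting to chips on the back of the board, which is the point being made.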


----------



## domenic

Kalm_Traveler said:


> I'm curious to see an actual 3090 PCB because looking at the FE 3080 teardown Gamersnexus did, I don't see any unoccupied memory pad spots on the back of the PCB. There are 2 unoccupied pad spots on the front though, which if populated would be 12gb (with the 3080's 1gb chips), which makes me think that 24gb is likely instead going to simply populate all 12 memory spots on the 'front' with 2gb chips rather than only 10 of them with 1gb chips.


Earlier in this thread it was confirmed that double-capacity chips don't exist yet... With all the evidence, however, that a 20GB 3080 is incoming, that will bring more pressure on the water block manufacturers to come up with a solution for a two-sided memory cooling requirement.


----------



## J7SC

MonnieRock said:


> This is my plan too. Something like an EVGA RTX 3090 FTW3 Ultra under water, upgrading from SLI Titan Xp's. Skipped Turing.


...haven't built a non-SLI system since '12 - I usually opt for HEDT mobos and dual GPUs (also re. work / productivity), and building a single GPU system will feel a bit like a big Harley V Twin with one cylinder missing...at the end of the day though after all the new GPU chest beating has died down, I figure there will be some really nice custom PCB 3090s (factory full w-block) cards hitting with plenty enough for full-phat 4K.... the way NVidia [apparently] distributes GPU chips, you as a vendor have to buy and move a given amount of lower-bin chips before you get the higher-binned ones....

I skipped the 1080s and related Titan series altogether and went straight from dual 980 Classies to dual Aorus Xtr full-water 2080 Tis, though only four-plus months after the 2080 Ti was first released... the dual 980 Cls are still doing great at 1080p and are still in use every day, but for 4K/60+ FPS I couldn't fault the 2080 Tis for anything right now... I only got into 4K in late 2018, per the (4K) thumbnail below. CFR and drivers work for now, and sooner or later (per above) MCM / mGPU will rule.

But there will be that 'dry spell' in between....So in a few months, I will very seriously pursue the likes of the KingPin 3090 (have EVbots  ), or even yet again try in vain to get a Galax / KF2 HoF 3090 full water-block card (not gonna happen in my region)...main thing is custom PCB,-top phase 3090 w/ full waterblocks - after everything I've read and seen about power consumption (= heat) w/ Ampere


----------



## Glerox

nick name said:


> Same picture displayed here as in the eBay listing.


yup, really cheap move... can you give a -1 rep point lol


----------



## Glerox

MonnieRock said:


> This is my plan too. Something like an EVGA RTX 3090 FTW3 Ultra under water, upgrading from SLI Titan Xp's. Skipped Turing.


same here, will try to get two 3090 FTW3s. Bought the Corsair 1600W and two 60mm 480 rads for this. It would be overkill for only one card, so no choice but to buy two haha.


----------



## dentnu

Glerox said:


> same here, will try to get two 3090 FTW3s. Bought the Corsair 1600W and two 60mm 480 rads for this. It would be overkill for only one card, so no choice but to buy two haha.


I don’t know if you read the news, but NVIDIA will no longer update SLI game profiles in their drivers. SLI for games is dead. It’s posted in this thread a few pages back.


----------



## HyperMatrix

nick name said:


> I know this is the 3090 thread, but how are people not more bothered by this?


I’m bothered about the doxxing. 



Glerox said:


> yup, really cheap move... can you give a -1 rep point lol


If we could, I’d give you one. What business is it of yours what someone does with their 3090? The card was obtained for a large sum of money, tested, and is being sold again. It has value to any publication that would want to do a review and release it before the NDA lifts for actual reviewers. The question again is, what business is it of yours if he decides to sell his card? He owes you nothing, yet he contributed very useful and welcome information for our benefit. And now he’s being attacked. You guys aren’t exactly encouraging other people with early access to hardware to share their findings with us.


----------



## Mooncheese

dentnu said:


> I don’t know if you read the news, but NVIDIA will no longer update SLI game profiles in their drivers. SLI for games is dead. It’s posted in this thread a few pages back.


Not just that, but the 3D Vision community, of which I'm casually a member (still using it on my PG278Q; my AW3418DW is my primary display), is having difficulty getting 3D Vision to work in DX11 titles with the latest display driver. Up until recently, 3D Fix Manager has been able to add the 3D Vision bits from 425.31 to the newer drivers. For example, I'm on 443.24 (and get prompted to update to a newer one, but this driver works better with Cemu, as Vulkan works without a shader cache from Cemu 1.19 forward).

3D Vision is awesome when it works correctly and you don't have a CPU bottleneck introduced by its inefficiency (the dreaded "3 core bug").



HyperMatrix said:


> I’m bothered about the doxxing.
> 
> If we could, I’d give you one. What business is it of yours what someone does with their 3090? The card was obtained for a large sum of money, tested, and is being sold again. It has value to any publication that would want to do a review and release it before the NDA lifts for actual reviewers. The question again is, what business is it of yours if he decides to sell his card? He owes you nothing, yet he contributed very useful and welcome information for our benefit. And now he’s being attacked. You guys aren’t exactly encouraging other people with early access to hardware to share their findings with us.


Doxxing? What exactly happened? Is "vmanuelgm" in some kind of trouble? 

He's doing the right thing, to hell with all this secrecy following the announcement. 

What he's doing is allowing those of us still on the fence with a 3090 purchase to have a better understanding of the price-performance proposition. 

Did NV push back the review NDA to one day before the 3090 goes up for sale as well? Are we still on for live reviews on the 21st? 

What's the problem? And he wants to sell it? That's his right.

I am curious as to how he obtained said 3090 and whether he is legally in some kind of trouble.

From what I can tell he's very intelligent about leading people on a goose chase. He's probably not a lawyer from Spain. He's probably using a VPN, his identity is protected, and so he's technically got nothing to lose. Setting up an eBay account with any email address is easy. You can even keep the balance in the PayPal account, and PayPal doesn't do any verification of identity.

NV basically can't do anything about it. 

Someone leaked the data, and we should all, everyone one of us here, thank "vmanuelgm" for doing so. 

I do wish the leaker were capable of more than running synthetic benches though. I for one would like to see FLIR imagery of the video memory. Igor's Lab just pointed out that the 3080 hits 106°C on a certain module at 370W under the FE cooler.

I would love to see what the VRAM looks like on this Gigabyte 3090 @ 550w under FLIR. 

He also showed that just like the GA-102-200, GA-102-300 efficiency drops off a cliff around 400w. 

The 3090 does 20,150 Time Spy GPU at 390W and then needs an additional 160W for another ~10% at 550W (21,900)!
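Working those leaked numbers into efficiency terms (a quick back-of-envelope sketch, using only the figures quoted above):

```python
# Perf-per-watt check on the leaked Time Spy GPU scores quoted above.

def pts_per_watt(score: float, watts: float) -> float:
    return score / watts

low  = pts_per_watt(20150, 390)   # ~51.7 pts/W at 390 W
high = pts_per_watt(21900, 550)   # ~39.8 pts/W at 550 W

gain_score = 21900 / 20150 - 1    # ~8.7% more score
gain_power = 550 / 390 - 1        # ~41% more power

print(f"{low:.1f} pts/W vs {high:.1f} pts/W")
print(f"+{gain_score:.1%} score for +{gain_power:.1%} power")
```

If the leaked figures are accurate, that is the "efficiency cliff": roughly a fifth of the perf-per-watt is given up for the last single-digit percentage of score.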

Still mulling over the 3090. Too bad Nvidia threw all of us 3D Vision enthusiasts under the bus when they discontinued 3D Vision support after 425.31; had Ampere supported 3D Vision, there would be something other than the G-Sync module in my panel to justify staying with them.

AMD will have AI supersampling and path tracing, and the RDNA 2-based Big Navi will render console-port engine content more efficiently than Nvidia.

I'm hearing AMD's 6900XT will sit in between the 3080 and 3090 rasterization-wise, but at 300W (not 400W), on TSMC's 7nm node, with 24GB of video memory, for $1,000.

10GB isn't future proof. It's already obsolete. 


https://www.reddit.com/r/nvidia/comments/itx0pm

Anyhow, we owe a lot to "vmanuelgm" for his leaks. 

Also, GN's benchmark round-up showed marginal gains on air for the 3080 FTW3, TUF, and Gigabyte Eagle. I believe Port Royal increased from 11,500 to 12,400 running the 3080 @ 420W (FTW3), and Time Spy GPU only increased to 18,500.


----------



## Intoxicus

HyperMatrix said:


> Is there any way to block a user here so you don’t even have to see their posts anymore? Really getting tired of the cheesy non stop negative conspiracy theory essay spam that some keep posting about Nvidia and these cards when I’m just trying to browse for real information.


Agreed.

Mooncheese keeps posting links to his nonsense in the EVGA forums, not realizing that anyone who follows the links finds a bunch of people calling him out.

The confirmation bias from him is tiring and ridiculous.


----------



## Intoxicus

VRAM allocation is not VRAM usage.
This VRAM-limited myth is tiresome and not based in fact.
Steve from GN called it out in a recent video.
Just because memory is allocated doesn't mean it gets used.


----------



## kot0005

3090 Strix WB.

So no water cooling on the VRAM chips on the back?? I am concerned...









EK Water Blocks for ROG Strix NVIDIA RTX 30 Series Graphics Cards Are Now Available - ekwb.com


EK®, the leading computer cooling solutions provider, is ready to offer its premium high-performance GPU water block for ASUS® ROG® Strix edition NVIDIA® GeForce® RTX™ 30 Series graphics cards. This huge new water block is named EK-Quantum Vector Strix RTX 3080/3090 D-RGB, and it is very...




www.ekwb.com


----------



## Mooncheese

Intoxicus said:


> Agreed.
> 
> Mooncheese keeps posting links to his nonsense in the EVGA forums not realizing anyone that follows the links finds a bunch of people calling him out.
> 
> The confirmation bias from him is tiring and ridiculous.





Intoxicus said:


> VRAM allocation is not VRAM usage.
> This VRAM-limited myth is tiresome and not based in fact.
> Steve from GN called it out in a recent video.
> Just because memory is allocated doesn't mean it gets used.


"People are calling me out"

LMAO.

NO ONE has won an argument with me yet. Prove otherwise. No one is calling me out. I've handily defeated both you and your moron buddy HyperMatrix, who is such a sore loser that he's responded childishly with some stupid quote of mine as his signature.

Sorry, are you doing VR? I am, and I'm seeing 10.5GB used in Half-Life: Alyx on the Index.


https://www.reddit.com/r/nvidia/comments/itx0pm

"Confirmation bias" LMAO.

Please point to my "bias".

Again, you're confusing stating facts with bias.

You and your buddy Hypermatrix are tools. I am running circles around you.

Look here, look at who accurately predicted how the 3080 would perform a day before the reviews went live: Nvidia's Dumbest Decision (AdoredTV)

Let's compare systems, here's mine:











Can't wait to see your AIO cooled card.

See my system? Try 14/10 acrylic, and that's actually my first hard-tubing loop. You still on AIOs, buddy?


----------



## pewpewlazer

HyperMatrix said:


> Is there any way to block a user here so you don’t even have to see their posts anymore? Really getting tired of the cheesy non stop negative conspiracy theory essay spam that some keep posting about Nvidia and these cards when I’m just trying to browse for real information.


Could you imagine employing said person and paying them enough to be able to afford a $1000++ graphics card? Yikes!


----------



## Esenel

@Mooncheese 
Your build looks quite cheap.
From Walmart?

And now please stop annoying everybody here.


----------



## pewpewlazer

Esenel said:


> @Mooncheese
> Your build looks quite cheap.
> From Walmart?
> 
> And now please stop annoying everybody here.


I run my build on Walmart Great Value Distilled Water.

Please do not insult me like that again.


----------



## Esenel

@pewpewlazer 
You are right.
His build could be from the German Kik.
Because a person bragging about having superior skills by posting some build is cheap...



Mooncheese said:


> You and your buddy Hypermatrix are tools. I am running circles around you.
> ...
> Let's compare systems, here's mine:
> ...
> Can't wait to see your AIO cooled card.
> See my system? Try 14/10 acrylic, and that's actually my first hard-tubing loop. You still on AIO's buddy?


----------



## Glerox

dentnu said:


> I don’t know if you read the news, but NVIDIA will no longer update SLI game profiles in their drivers. SLI for games is dead. It’s posted in this thread a few pages back.


Yeah, I just hope that some games will have built-in mGPU support in DX12 and Vulkan.
Worst case, I use my second 3090 for my other PC.


----------



## Glerox

HyperMatrix said:


> I’m bothered about the doxxing.
> 
> 
> 
> If we could, I’d give you one. What business is it of yours what someone does with their 3090? The card was obtained for a large sum of money, tested, and is being sold again. It has value to any publication that would want to do a review and release it before the NDA lifts for actual reviewers. The question again is, what business is it of yours if he decides to sell his card? He owes you nothing, yet he contributed very useful and welcome information for our benefit. And now he’s being attacked. You guys aren’t exactly encouraging other people with early access to hardware to share their findings with us.


I don't mind that he is selling the card at whatever price. I do mind that he's saying the card was unused although we know he did a freakin' shunt mod on it lol. Anyways, I don't want to pollute this thread, so that's all I'm gonna say about that.


----------



## J7SC

"...meanwhile back at the farm" [which I just flew over in MS FlightSim 2020  ]

...have a look at the VRAM 'consumption' per MSI AB (same session, three different scenes, including mountains as well as a heavily built-up and populated metro area). This is for 4K / Ultra / dense settings. It isn't so much the overall VRAM usage number (though over 10,400 MB at one point) but that it rose from the initial allocations during the sim session... now, 10GB of GDDR6X is faster than 11GB of GDDR6 (even with a decent VRAM OC on the latter, per GPU-Z bandwidth in the 2nd pic), but I can see why some folks think 10GB of GDDR6X might be a bit tight for the 3080... and in the meantime, Gigabyte *might* have inadvertently confirmed several variants of the 3080 w/ 20GB of VRAM per the TweakTown table also below. I was kind of hoping that they would also throw in a 48GB Xtr version of the 3090, but with 24GB already in the 'base' configuration, the 3090s probably don't have the same urgency to deal with at least the perception of a tight VRAM budget.


----------



## Mooncheese

J7SC said:


> "...meanwhile back at the farm" [which I just flew over in MS FlightSim 2020  ]
> 
> ...have a look at the VRAM 'consumption' per MSI AB (same session, three different scenes, including mountains as well as a heavily built-up and populated metro area). This is for 4K / Ultra / dense settings. It isn't so much the overall VRAM usage number (though over 10,400 MB at one point) but that it rose from the initial allocations during the sim session... now, 10GB of GDDR6X is faster than 11GB of GDDR6 (even with a decent VRAM OC on the latter, per GPU-Z bandwidth in the 2nd pic), but I can see why some folks think 10GB of GDDR6X might be a bit tight for the 3080... and in the meantime, Gigabyte *might* have inadvertently confirmed several variants of the 3080 w/ 20GB of VRAM per the TweakTown table also below. I was kind of hoping that they would also throw in a 48GB Xtr version of the 3090, but with 24GB already in the 'base' configuration, the 3090s probably don't have the same urgency to deal with at least the perception of a tight VRAM budget.
> 
> View attachment 2459474
> View attachment 2459475
> View attachment 2459477


Yes, I've been saying 10GB isn't future-proof; it's engineered obsolescence, as I point out here.


----------



## BigMack70

Looks like LG's C9 and CX TVs are having issues with HDMI 2.1... chroma subsampling problems on the CX and busted G-Sync on the C9.

Disappointing... this whole launch just seems half-baked. I may hold off a month or two on a purchase to see if the issues get ironed out.


----------



## pantsoftime

BigMack70 said:


> Looks like LG's C9 and CX TVs are having issues with HDMI 2.1... chroma subsampling problems on the CX and busted G-Sync on the C9.
> 
> Disappointing... this whole launch just seems half-baked. I may hold off a month or two on a purchase to see if the issues get ironed out.


Is there a thread tracking those issues? That's so disappointing. HDMI 2.1 was the killer feature of these cards for me that had me considering upgrading from 2080Ti.


----------



## J7SC

BigMack70 said:


> Looks like LG's C9 and CX TVs are having issues with HDMI 2.1... chroma subsampling problems on the CX and busted G-Sync on the C9.
> 
> Disappointing... this whole launch just seems half-baked. I may hold off a month or two on a purchase to see if the issues get ironed out.





pantsoftime said:


> Is there a thread tracking those issues? That's so disappointing. HDMI 2.1 was the killer feature of these cards for me that had me considering upgrading from 2080Ti.


...trying to clarify whether this is an issue with the LG C9 / CX, or with the Ampere card?


----------



## BigMack70

pantsoftime said:


> Is there a thread tracking those issues? That's so disappointing. HDMI 2.1 was the killer feature of these cards for me that had me considering upgrading from 2080Ti.


I found out about it here... Several other sources/links in the post and comments 



Spoiler





https://www.reddit.com/r/nvidia/comments/iwem9k




I am not unhappy with the performance I get from my 2080 Ti under water, and HDMI 2.1 for 4K 120fps with G-Sync is the only reason I'm excited to buy this card. But now I'm thinking maybe I'll just wait until this is all fixed.


----------



## BigMack70

J7SC said:


> ...trying to clarify whether this is an issue with the LG C9 / CX, or with the Ampere card?


I don't think it's clear yet. Looks like some people are trying to get a response from LG. I could see it being a driver bug, but maybe it's an LG firmware issue.


----------



## J7SC

BigMack70 said:


> I don't think it's clear yet. Looks like some people are trying to get a response from LG. I could see it being a driver bug, but maybe it's an LG firmware issue.


Tx. Hopefully, this gets sorted soon as I'm getting ready to purchase a new TV / monitor within a few months


----------



## digiadventures

Mooncheese said:


> 10GB isn't future proof. It's already obsolete.


I thought Hardware Unboxed's testing proved otherwise. We only lose 20% performance in these rare cases, instead of a 100% performance loss and severe stuttering, which is what usually happens when there is not enough VRAM. Isn't this good news for 3080 owners? I mean, sure, nobody wants to lose 20% performance, but it's not the end of the world. The 3080 will still be faster than a 16GB 3070, for example.


----------



## Glerox

Can someone repost the website link with all the updates on upcoming waterblocks?
Thanks!


----------



## inutile

There is a teardown of the Gigabyte Eagle OC version up for anyone curious. 






Also shows the back of the PCB.


----------



## Zakiel

inutile said:


> There is a teardown of the Gigabyte Eagle OC version up for anyone curious.


Wow, big premium over there. Around $2,100 US converted from VTD. Also, around the 7-minute mark, he stated that this is not a reference PCB.


----------



## domenic

Zakiel said:


> Wow, big premium over there. Around $2,100 US converted from VTD. Also, around the 7-minute mark, he stated that this is not a reference PCB.


First pics I have seen confirming the RAM chips on the back. A couple of questions: does anyone know how hot they are supposed to get? Can a metal backplate handle the load via passive cooling with a water block on the front? And is memory temperature determined by overall memory usage, or is it roughly constant? Need the likes of Buildzoid...


----------



## Esenel

Glerox said:


> Can someone repost the website link with all the updates on upcoming waterblocks?
> Thanks!











RTX 3000 water coolers: state of play (Update 1.1.22)

Current information and overview of water coolers / water blocks for the NVIDIA RTX 3000 series (RTX 3090, 3080 (Ti) and 3070) and custom designs.

hardware-helden.de





This one?


----------



## J7SC

domenic said:


> First pics I have seen confirming the RAM chips on the back. Couple questions; does anyone know how hot they are supposed to get, can a metal backplate handle the load via passive cooling with a water block on the front, is memory temp determined by memory used overall or just constant? Need the likes of Buildzoid....


Aqua Computer had already introduced active backplate cooling for the 2080 Ti (and possibly Titan RTX), so it shouldn't be too difficult for the 3090..


----------



## Mooncheese

Good news on the 3090 front:

Watching GN's LN2 3080 overclocking stream, the FTW3 they are using has a modded vBIOS. That means there will most likely be a higher-TDP BIOS floating around somewhere for this variant, and given how similar the card is to the 3090, there will most likely be a modded BIOS for the 3090 FTW3 as well:








Also, and I don't know if this terrifies me or inspires confidence, but here's a teardown video of vmanuelgm's Gigabyte 3090 variant showing that the top bank of memory is only being cooled by the backplate. Bear in mind that the design of this backplate is such that it's being actively cooled by the heatsink fan.

I'm really curious what the top bank of VRAM on this card looks like under FLIR at 400W and at 500W, and I'm hoping Igor's Lab gets their hands on this variant for that purpose.

The good news is that cooling the top bank of video memory on GA102-300 may be possible with only a water-block backplate with airflow running across it at 400W. Not sure about 500W, though.

Here's that Igors Lab video showing 3080 FE hitting 106C near or on a video memory module in FLIR:

Do note how the mods at r/Nvidia keep taking this down:


https://www.reddit.com/r/nvidia/comments/iwk9p1

They also refused to post AdoredTV's "Nvidia's Dumbest Decision" video and didn't respond when I asked their reasoning for doing so.


----------



## kaydubbed

Has anyone addressed the fact that these "leaked" 3090 benchmarks look like they were run at 1080p rather than 1440p/4K? I'm a little new to 3DMark scores.










Even more NVIDIA GeForce RTX 3090 benchmarks leak - VideoCardz.com


NVIDIA GeForce RTX 3090 already in the wild It appears that while some people still struggle to find RTX 3080 in stores, others had no issues finding GeForce RTX 3090 ahead of its launch. It is unclear if the samples were obtained through official channels (review samples) or bought from...




videocardz.com


----------



## ribosome

kaydubbed said:


> Has anyone addressed the fact that these "leaked" 3090 benchmarks look benched in 1080p and not 1440p/4k? I'm a little new to 3DMark scores.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Even more NVIDIA GeForce RTX 3090 benchmarks leak - VideoCardz.com
> 
> 
> NVIDIA GeForce RTX 3090 already in the wild It appears that while some people still struggle to find RTX 3080 in stores, others had no issues finding GeForce RTX 3090 ahead of its launch. It is unclear if the samples were obtained through official channels (review samples) or bought from...
> 
> 
> 
> 
> videocardz.com


Only Fire Strike is 1080p. Fire Strike Extreme, Time Spy, and Port Royal are all 1440p. Fire Strike Ultra and Time Spy Extreme are both 2160p. Fire Strike is DX11, Port Royal and Time Spy are DX12. Port Royal adds raytracing for some shadows and reflections.


----------



## vmanuelgm

RTX 3090 Battlefield V, Metro Exodus Benchmark and Blender BMW:

Battlefield V video:






Metro Exodus Bench video:






Blender BMW:


----------



## J7SC

...someone else here had mentioned Igor's vid on GDDR6X hitting 104C (via internal probe) on a 3080 FE, while the external view via infrared thermometer, taken through other materials, showed 84C...I just watched the vid (in German), but there's also an accompanying story (in English) -- *here* ...at 110C (internal), the memory will start to throttle to protect itself, and apparently at 120C+, per the specs, it may go on to memory heaven

...It is worth highlighting this again here, as 3090 owners will have to deal with double-sided modules, so backplate cooling seems even more imperative. On the other hand, the memory most affected per Igor's piece on the 3080 FE was closest to the VRM, meaning custom PCB cards might have different hot-spot regions, especially if the custom PCB locates the VRM further away from the VRAM


----------



## Mooncheese

vmanuelgm said:


> RTX 3090 Battlefield V, Metro Exodus Benchmark and Blender BMW:
> 
> 
> Metro Exodus Bench video:


Fantastic data, exactly what I was looking for, this is extremely impressive honestly. 

I've included my 3440x1440 benchmark run and data for comparison: 








http://imgur.com/a/sEpPeTe


Please do note that this is with RT on HIGH not on Ultra as you have in the 3840x2160 run. 

The 3090 is roughly 2 FPS higher Average, Max and Min. 

Now here's the interesting part, 3840x2160 is 67% more pixels than 3440x1440. 

Meaning, if performance scaled linearly with pixel count, the 3090 is roughly 67% faster than my overclocked 2080 Ti @ 2070 MHz core / 8000 MHz memory @ 40C @ 340W limit (XC2 bios).

Going by the wattage reading in OSD I presume this is at 550W however? 

Still impressive, but I would deduct 10% for how the 3090 would perform around 400W, which is still more than a 57% gain over the 2080 Ti in Metro Exodus at 3840x2160. I would deduct another 10% comparing the two cards at 1440p, so around ~47% faster, BUT we need to remember that you're running this bench with RT on Ultra.
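The pixel-count arithmetic in this post can be sketched in a few lines (figures taken from the posts above; the linear-scaling step and the flat 10-percentage-point deduction are this thread's rough assumptions, not measurements):

```python
# Rough scaling math from the post above; inputs are the thread's figures.
uw_pixels = 3440 * 1440    # ultrawide resolution
uhd_pixels = 3840 * 2160   # 4K resolution

extra_pixels = uhd_pixels / uw_pixels - 1
print(f"4K has {extra_pixels:.0%} more pixels than 3440x1440")   # 67%

# Same FPS at ~67% more pixels is read as ~67% higher throughput;
# the post then knocks off ~10 percentage points for 550W -> ~400W.
gain_at_400w = extra_pixels - 0.10
print(f"Estimated 3090-vs-2080 Ti gain at ~400W: {gain_at_400w:.0%}")  # 57%
```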

I will do another run tonight with the FTW3 bios (373w) with RT on Ultra so we can have a better idea. 

Very impressive. 



J7SC said:


> ...someone else here had mentioned Igor's vid on GDDR6X hitting 104C (via internal probe) on a 3080 FE while the external view via infrared thermometer and through other materials showed 84C...I just watched the vid (in German) but there`s also an accompanying story (in English) -- *here* ...at 110C (internal), the memory will start to throttle to protect itself and apparently at 120C +, specs say it may go on to memory heaven
> 
> ...It is worth highlighting this again here as 3090 owners will have to deal with double sided modules, so back plate cooling seems even more imperative. On the other hand, the memory most affected per Igor`s piece on the 3080 FE was closest to the VRM, meaning that custom PCB cards might have different host-spot regions, especially if the custom PCB locates the VRM further away from the VRAM


Fantastic observation! Yes, the VRM in that area is definitely compounding the problem, and this may be reduced somewhat with a larger PCB. Definitely another point in the AIB vs. FE debate.

I've asked vmanuelgm if he can attach a thermocouple to the side of the top bank of VRAM under the backplate so we can get an idea of how hot it gets at 400W and 550W, and he said he would get around to it.

I'm thinking that a backplate with airflow across it may be adequate at 400W, but at 500W+ I'm not sure; we need temp testing in this area. I'm fairly certain Igor's Lab will do this when they get their hands on one; the problem is these cards will be going up for sale before then.


----------



## domenic

So where / how exactly do we obtain the power-unlocked BIOS? On the GN livestream today Steve actually referred people to this forum as a place to look. I'm only into the first part of the video, and it looks like the FTW card is a winner.


----------



## J7SC

vmanuelgm said:


> RTX 3090 Battlefield V, Metro Exodus Benchmark and Blender BMW...)


Do you have any way to measure GDDR6X VRAM temps with your Gigabyte 3090 in a longish bench, at least at the back (i.e. via an infrared temp tool)? If so, can you also indicate whether the card is on a test bench or in a closed case, plus helper fans etc.? Tx


----------



## dentnu

Very good info in this video I found. No idea who this guy is or if what he says is true but very interesting info nonetheless.


----------



## Mooncheese

dentnu said:


> Very good info in this video I found. No idea who this guy is or if what he says is true but very interesting info nonetheless.


Great video, very interesting. He mentions the Unreal engine being used in next gen consoles. I'm curious as to whether or not AMD's AI super-sampling will be more efficient with console ports seeing as how the underlying architecture (RDNA 2) is the same. 

I'm kind of on the fence right now with the 3090. Part of me wants it because the 4K gains will be meaningful in VR (HP Reverb G2, 2160x2160x2) and I have some extremely demanding titles such as Assetto Corsa Competizione and SkyrimVR with UVRE (via Wabbajack). These games are so demanding that I stopped playing them and am saving them for a GPU upgrade.

I'm also interested in Ray Tracing and DLSS, but I'm curious as to whether or not AMD will be able to offer similar technologies for less and whether or not RDNA 2 will be better at running console ports seeing as how both next-gen consoles are RDNA 2. 

Part of me wants to wait until November to see how RDNA 2 shapes up. I heard a rumor today that there's already a benchmark of full die RDNA 2 that is 10% faster than the 3080. I also heard that this card, if it is the 6900xt, will be 10% faster than the 3080 at much less than 370w, have 24GB of video memory and an MSRP of $1000. 

If AMD's AI super-sampling and path tracing can compete with Nvidia's, and RDNA 2 runs the console ports better, then even if the 6900 XT is slower than the 3090 in hardware terms, I would say Nvidia is indeed in trouble.


----------



## inutile

Mooncheese said:


> Fantastic data, exactly what I was looking for, this is extremely impressive honestly.
> 
> I've included my 3440x1440 benchmark run and data for comparison:
> 
> 
> 
> 
> 
> 
> 
> 
> http://imgur.com/a/sEpPeTe
> 
> 
> Please do note that this is with RT on HIGH not on Ultra as you have in the 3840x2160 run.
> 
> The 3090 is roughly 2 FPS higher Average, Max and Min.
> 
> Now here's the interesting part, 3840x2160 is 67% more pixels than 3440x1440.
> 
> Meaning, the 3090 is 67% faster than overclocked 2080 Ti @ 2070 MHz core / 8000 MHz memory @ 40C @ 340 limit (XC2 bios).
> 
> Going by the wattage reading in OSD I presume this is at 550W however?
> 
> Still impressive, but I would say deduct 10% for how the 3090 would perform around 400w, but that's still more than 57% gain over 2080 Ti in Metro Exodus at 3840x2160, I would say deduct another 10% comparing the two cards at 1440p, or around ~47% faster BUT it we need to remember that youre running this bench with RT on Ultra.
> 
> I will do another run tonight with the FTW3 bios (373w) with RT on Ultra so we can have a better idea.
> 
> Very impressive.
> 
> 
> 
> Fantastic observation! Yes the VRM in that area is definitely compounding the problem and this may be reduced somewhat with a larger PCB. Definitely another point for the AIB vs FE debate.
> 
> I've asked vmanuelgm if he can attach a thermocoupler to the side of the top bank of VRM under the back-plate so we can have an idea as to how hot it gets at 400w and 550w and he said he would get around to it.
> 
> I'm kind of thinking that a back-plate with airflow across it may be adequate at 400w, but 500w+, not sure, we need temp testing in this area. I'm fairly certain Igor's Lab will do this when they get their hands on one, problem is these cards will be going up for sale before then.


I ran this on my 2080ti (FTW3 with HydroCopper block, 2100mhz core) at the exact same settings and averaged 32.29fps, which would put it almost exactly 45% slower than vmanuelgm's bench. There would likely be other system differences to consider as well, I'm on a 9900k at 5ghz all core and 32GB 3200mhz CL14 RAM.


----------



## Mooncheese

inutile said:


> I ran this on my 2080ti (FTW3 with HydroCopper block, 2100mhz core) at the exact same settings and averaged 32.29fps, which would put it almost exactly 45% slower than vmanuelgm's bench. There would likely be other system differences to consider as well, I'm on a 9900k at 5ghz all core and 32GB 3200mhz CL14 RAM.
> View attachment 2459539


Nice data, thanks. I believe vmanuelgm's run was done shunted @ 550W, so we need to deduct around 10%. Extrapolating from the 3DMark Time Spy GPU score difference, 20,200 @ 400W (he says it was 20,900, but the original score was 20,200, though I could be mistaken) versus 21,900 shunted @ 550W is ~10%, which puts the difference at around 35%.

This is about what I was expecting honestly (40% at 4K and 30% at 1440p).

As for the CPU bottleneck, there is no CPU bottleneck present whatsoever in Metro Exodus at 1440p and 2160p. Maybe if you run the game at 1080p with RT off you might run into a bottleneck at 200 FPS.

Although the game is a DX12 title and can use the additional cores there isn't a whole lot going on in terms of emergent gameplay events (it's not Far Cry / Fallout with random events happening, it's almost as bad as a corridor shooter).

I have no CPU bottlenecking problems outside of a very small number of titles with my 8700K @ 5.1 GHz actual. There's been next-to-no IPC gain between the 8700K, 9900K and 10900K, just an increase in core count. My 8700K does 5.1 GHz @ 1.38V 0 AVX (5.1 GHz actual across all 6 cores) with load temps in the high 50s, and I have the Spectre and Meltdown mitigations disabled, which are worth about 100 MHz. You would need to run a 9900K / 10900K @ 5.2 GHz 0 AVX for the same single-core speed.

I also have 32GB of 3600 MHz DDR4 (17-20-20-38). It's as fast as the memory controller of this board will allow (Gigabyte Z370 Gaming 7, something you learn after you own the board). The nice thing about Z390 is that the memory controllers are much better, so there's much better memory overclocking (4200-4500 MHz etc.), and that DOES actually make a difference in CPU-bound scenarios.

Edit: 

Having looked at your result again, I noticed you're comparing non-DLSS runs. I wonder if there has been some Ampere-specific improvement in how the new architecture handles DLSS, or if it can simply do it better than Turing, because we essentially have the same GPU (my Time Spy run is in my signature; that was done with GeForce Experience and MSI AB enabled, which cost around 200-300 points, so that run should really be closer to 16,700-16,800). I suppose I will just do a DLSS run to compare.

Going by the pixel-count increase alone, the 3090 should theoretically be 67% faster, considering it had the same score at 3840x2160 with RT on Ultra, yet you're somehow only 45% slower with DLSS disabled and RT on Ultra, and we essentially have the same GPU (XC2 with FTW3 vBIOS under a full water block here, load temps of 42C).

Are you on the latest driver?

The question is, have there been Ampere-specific improvements to DLSS performance?

Seeing as DLSS is accomplished with Tensor Cores and GA102 has more of them, I think a more thorough comparison would be a DLSS-to-DLSS run, correct?


----------



## nievz

vmanuelgm said:


> RTX 3090 Battlefield V, Metro Exodus Benchmark and Blender BMW:
> 
> Battlefield V video:
> 
> 
> 
> 
> 
> 
> Metro Exodus Bench video:
> 
> 
> 
> 
> 
> 
> Blender BMW:


I have been planning to buy a 3090. This tells me that the Gigabyte Gaming OC card has great cooling; even with the shunt mod, it doesn't reach 70C in games. However, at the stock 400W limit, and for those who do not want to mod their card, I think the 3090 is not worth the price.

What fan profile are you using?


----------



## kaydubbed

Seeing that the power-to-speed curve plateaus so much above 400W, my question now is whether a higher-end AIB card like the FTW3 3090 is worth it when I'm planning on keeping it on air.


----------



## HyperMatrix

kaydubbed said:


> Seeing that > 400w power to speed plateaus so much, now my question is whether a higher end AIB like the FTW3 3090 is worth it when I'm planning on keeping it on air.


I think if you're spending $1800 on a 3090 FTW3 over $800 for a 3080 FTW3 to get an extra 15% performance (due to power limit issues), you should just consider a 3080 FTW3, or wait for the 20GB model, which will probably go for around $1000. Or better yet, put the 3090 under water. If you don't have a loop already to add it to, consider the Hybrid/KPE editions with a built-in/attached radiator. At this price range we're far past the point of reasonable price/performance comparisons. You're paying 125% more to get an extra 15% performance (3080 FTW3 vs. 3090 FTW3). At that point, you may as well pay another 16.6%-22% and get the KPE edition for an extra 5-8% performance with more reliability and less noise.


----------



## Mooncheese

inutile said:


> I ran this on my 2080ti (FTW3 with HydroCopper block, 2100mhz core) at the exact same settings and averaged 32.29fps, which would put it almost exactly 45% slower than vmanuelgm's bench. There would likely be other system differences to consider as well, I'm on a 9900k at 5ghz all core and 32GB 3200mhz CL14 RAM.
> View attachment 2459539


I just did another run at 2100 MHz core, 7900 MHz memory with RT on Ultra and DLSS off, and it did 52 FPS average @ 3440x1440. Rough math puts that right around your average at 4K with the same settings (67% of 32 ≈ 21; 32 + 21 ≈ 53), so if I could do the run at 3840x2160 we would probably have a very similar score.

I also did another run with DLSS enabled and RT on Ultra and had the same average as before, but my max FPS dipped 17 FPS below both RT on High and vmanuelgm's run. I suppose there isn't a big difference between RT High and Ultra in the Metro Exodus benchmark.

Rough math suggests Ampere's Tensor Cores are being put to good use, handling DLSS much better than Turing: scaling your 4K average up by the 67% pixel-count difference between 3440x1440 and 3840x2160 lands almost exactly on my average. That implies the 3090 is about 67% faster than the 2080 Ti at the same frequency (all of our runs done at 2100 MHz) @ 3840x2160 with DLSS enabled, and 45% faster with DLSS disabled.

Given that GA102-200 (320W) shows a 30% gap at 3840x2160 and 20% at 2560x1440 compared to TU102 (260W), at least 10% of the 67% gap between TU102 and GA102-300 comes from GA102 handling higher resolutions much better. Going by the leaked benches, however, GA102-300 doesn't appear to have an advantage greater than 20% at 4K vs. GA102-200. Deduct another 10% because vmanuelgm's run was at 550W, and we have a 57% gap at 4K with DLSS enabled (35% with it disabled, going by your 45%-difference run and deducting 10% for the 550W GA102-300), and a theoretical 37% gap at 1440p in Metro Exodus with DLSS enabled, possibly shrinking to 30% with DLSS disabled.
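As a sanity check, the chain of deductions above works out numerically like this (all inputs are figures quoted in this thread, so treat the outputs as rough estimates, not measurements):

```python
# Numbers quoted in this thread; the ~10-point shunt discount is the post's rounding.
timespy_400w = 20200        # reported 3090 GPU score at ~400W
timespy_550w = 21900        # reported shunted score at ~550W
shunt_uplift = timespy_550w / timespy_400w - 1
print(f"550W vs 400W uplift: {shunt_uplift:.1%}")   # 8.4%, rounded to ~10% above

dlss_gap_4k = 0.67          # same-FPS-at-67%-more-pixels argument, DLSS on
no_dlss_gap_4k = 0.45       # inutile's measured 4K gap, DLSS off
print(f"Estimated gaps at ~400W: DLSS on {dlss_gap_4k - 0.10:.0%}, "
      f"DLSS off {no_dlss_gap_4k - 0.10:.0%}")      # 57% and 35%
```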

In sum:

The 3090 should be 57% faster than the 2080 Ti in Metro Exodus with RT on Ultra and DLSS enabled at the same power draw (390W), and 35% faster with DLSS disabled.

Conclusion:

The problem with current synthetics like Time Spy is that they don't show the generational uplift in AI super-sampling between Turing and Ampere. Going only by Time Spy, the 3090 should be around 30% faster at 1440p and 40% faster at 4K. What we see here is that it's actually 35% faster at 4K with DLSS disabled, but a staggering 57% faster with DLSS enabled (on both cards compared).

Ampere is definitely making use of its additional Tensor Cores. Benchmarks like this reinforce the recent video on the importance of AI super-sampling in the future of 3D rendering.


----------



## Thoth420

So the Eagle looks like a winner? That is good to hear tbh. 
I am going to MS for release so I will have to settle for whatever they have. I have a feeling I am gonna be a Zotac or EVGA boy unfortunately. 
What I really need for my build is a white 90. In the states there are zero options atm.


----------



## Mooncheese

kaydubbed said:


> Seeing that > 400w power to speed plateaus so much, now my question is whether a higher end AIB like the FTW3 3090 is worth it when I'm planning on keeping it on air.


HyperMatrix is correct, you're better off going with a high-end AIB 3080 variant, putting it under a water block, and closing the gap to the 3090 with overclocking for $1000 or so.

You could probably get a really good AIB 3080 @ 2100 MHz core under a full water block @ 420W (or more via a vBIOS swap) within 5% of an air-cooled 3090 at default clocks. With little TDP headroom, the 3090 can't really see its ~20% potential uplift because of power starvation, and it falls off an efficiency cliff beyond 400-450W (only 10% gained on a shunted 3090 between 390W and 550W).

3080 FTW3 = $850?
Water block and backplate = $200
Pump-res combo, soft tubing, decent 360mm radiator = $300

For $1350 you're within 5% of an air-cooled 3090 with a 10-12% overclock, and your GPU can be dead silent if you want, to top it off. Instead of sunk cost in a single component, the cooling parts carry forward should you be in a position to upgrade to a 3090 FTW3 down the road, and the rest of the cooling system is forward compatible with any other expansion, be it another radiator or a CPU block etc.

I don't think the FTW3 will be $1800, though; that's the Strix price. If I remember correctly, the FTW3 is going to go for $1675-$1700, and that's before the 5% Associate Code promo, assuming you can enter it and check out before the bots buy them all again.

If you're in a position where you already have a loop, it's a different story. But I wouldn't dump $1600 on a GPU; I would focus on a cooling loop first. Water is the best, nothing beats the absence of noise and insanely low temps (my 2080 Ti is nearly inaudible at 45C across the entire card @ 373W).


----------



## MacMus

Is it worth getting a 3090?

How much faster is it than the 3080?

I have a 2080 Ti right now and am considering switching. Do you know if a 3080 Ti is planned?


----------



## Esenel

MacMus said:


> is it worth to get 3090 ?
> how fast is it from 3080 ?


Official benchmarks on Thursday.


----------



## t1337dude

kaydubbed said:


> Seeing that > 400w power to speed plateaus so much, now my question is whether a higher end AIB like the FTW3 3090 is worth it when I'm planning on keeping it on air.


It's hard to know which AIBs are worth it at this point, but they do seem to be a bit on the pricey side. If the 3080s are anything to go on, the Asus TUF 3090 might be a good pick for not spending too much; that's what I'm scoping out. Definitely not going with water, that's for sure. The 3090 plus whatever the AIBs squeeze out of the box is likely going to be the sweet spot. Putting on a custom loop for a few extra percent isn't worth the headache or the money.


----------



## Beardcutter

HyperMatrix said:


> if vmanuelgm gives out information as to how many cards he does or doesn’t have, it makes it easier for him and the source of the cards to be found. If you feel the eBay listing is for a card that is different from what is described, report it to eBay. Don’t ask for information that could potentially incriminate him and then be upset that he doesn’t give it to you. I don’t think any of you who are questioning him right now would have any interest in buying a 3090 at that price regardless of whether it was shunt modded or not.


Thank goodness you are here to continuously defend the character of someone who's clearly being dishonest and attempting to scam potential buyers on eBay.


----------



## HyperMatrix

Beardcutter said:


> Thank goodness you are here to continuously defend the character of someone who's clearly being dishonest and attempting to scam potential buyers on Ebay.


I’m here for information on the 3090. Not to white knight based on assumptions with no proof. And all you’re doing is interfering with that. 

As I said earlier... if you feel that the listing is his, and you feel that the card listed is the one that's shunted and not a second card, please report it to eBay. There's no point in hijacking this thread over assumptions that don't benefit anyone. No one here is genuinely considering buying a subpar version of the 3090 for 3-4x its MSRP three days before its sale date, nor would I recommend that anyone do so.


----------



## andrvas

Which of the 3090 cards have dual BIOS? Is there a list somewhere? The only one I've seen so far is the Gigabyte Gaming OC.


----------



## vmanuelgm

Yep, the Gigabyte 3090 Gaming OC has dual BIOS. There is a switch to choose between the OC BIOS and the silent BIOS. The OC BIOS allows 5% more power (100% vs. 105%).


----------



## Wihglah

vmanuelgm said:


> Yep, Gigabyte 3090 gaming oc has dual bios. There is a switch to choose between oc bios and silent bios. OC bios allows 5% more power (100 vs 105).


Do you have VR?

If so - can you check if there are any scenarios that utilize more than 10GB of VRAM?

Thinking specifically of DCS


----------



## HyperMatrix

For anyone curious, Gamers Nexus did an LN2 OC on the EVGA RTX 3080 FTW3 during a live stream. They scored 14069 in Port Royal with a 2370MHz GPU clock and +1050MHz on the memory. Temperatures were -155 to -190C. They did this with the help of a custom BIOS with a 900W power limit. They currently hold the single-card world record. Second place is rbuass, who you may remember from articles a few days ago; he was 581 points behind at 13488 on the Galax RTX 3080 card, also under LN2, at 2.34GHz. This is substantially faster.

For comparison, the highest-clocked RTX 2080 Ti under LN2 at 2.7GHz (14% higher clock) scored 13090 (7% lower). That 2080 Ti was a KingPin Edition card, so once a KPE 3080 comes out, we can expect an even bigger bump. Let's just hope we get access to an unlocked power BIOS.
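The percentage comparisons in the post above check out as follows (clock and score figures as quoted in the post):

```python
# LN2 results quoted above: EVGA 3080 FTW3 vs the highest-clocked 2080 Ti KPE.
ftw3_clock_mhz, ftw3_score = 2370, 14069
kpe_clock_mhz, kpe_score = 2700, 13090

clock_delta = kpe_clock_mhz / ftw3_clock_mhz - 1
score_delta = kpe_score / ftw3_score - 1
print(f"2080 Ti KPE clock vs FTW3: {clock_delta:+.0%}")  # +14%
print(f"2080 Ti KPE score vs FTW3: {score_delta:+.0%}")  # -7%
```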









3DMark Port Royal Hall of Fame


The 3DMark.com Overclocking Hall of Fame is the official home of 3DMark world record scores.




www.3dmark.com


----------



## vmanuelgm

Wihglah said:


> Do you have VR?
> 
> If so - can you check if there are any scenarios that utilize more than 10Gb of VRAM?
> 
> Thinking specifically of DCS


Nope, no VR here.



HyperMatrix said:


> For anyone curious, Gamer Nexus did an LN2 OC on the EVGA RTX 3080 FTW3 during a live stream. They scored 14069 in port royal with 2370MHz GPU clock and +1050MHz on the memory. Temperatures were -155 to -190c. They did this with the help of a custom bios with 900W power limit. They currently hold the single card world record. Second place is rbuass, who you may remember from articles a few days ago. He was 581 points behind at 13488 on the Galax RTX 3080 card, also under LN2 at 2.34GHz. This is substantially faster.
> 
> For comparison, the highest clocked RTX 2080ti under LN2 at 2.7GHz (14% higher) scored 13090 (7% lower). The 2080ti was a KingPin Edition card. So once a KPE 3080 comes out, we can expect an even bigger bump. Let's just hope we get access to an unlocked power bios.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 3DMark Port Royal Hall of Fame
> 
> 
> The 3DMark.com Overclocking Hall of Fame is the official home of 3DMark world record scores.
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2459554


I watched the live stream and told them several times to go above 14120…


----------



## HyperMatrix

vmanuelgm said:


> I saw the live and told them several times to go above 14120…


Haha I know they tried an extra +25MHz on the core but it kept crashing. I wish they had hooked up the Nvidia PCAT kit to show power draw by the card at all those settings. Would have been interesting to see.


----------



## vmanuelgm

HyperMatrix said:


> Haha I know they tried an extra +25MHz on the core but it kept crashing. I wish they had hooked up the Nvidia PCAT kit to show power draw by the card at all those settings. Would have been interesting to see.


Yep it was a nice watch, they are two pros!!!


----------



## andrews2547

Please don't have off topic arguments. If this is something you want to do, take it to PMs.

Thanks.


----------



## shiokarai

So it seems the 3090 needs a BIOS with at least a 550W PL to stretch its legs, and the 3080 about 450W? Also, so far only the EVGA FTW3 has a confirmed higher PL than stock (420W on the 3080, 440W on the 3090; ASUS is "only" 400W)?


----------



## zhrooms

Mooncheese said:


> Good news on the 3090 front, watching GN's LN2 3080 overclocking stream, the FTW3 they are using has a modded vbios, meaning, there will most likely be a higher TDP bios floating around somewhere for this variant, and seeing as to how similar the card is to the 3090 there will most likely be a modded bios for the 3090 FTW3 as well


No, that's not how it works at all; do not get your hopes up for a custom BIOS ever leaking, especially from Steve. They had multiple custom BIOSes during the Turing launch that never saw the light of day; they're, sadly, actually professional and don't leak things.

BIOSes typically leak from "sort of" trusted LN2 overclockers: they share with their close friends, who share with their friends, and suddenly some idiot uploads one publicly. This is completely RNG (unpredictable). There are still multiple XOC BIOSes from the 2080 Ti that were never publicly released; I have three of them, for example: 2000, 1000 and 600. The fact that the GALAX and Strix XOC are out now for the Ti is 100% luck; the Kingpin XOC was released officially, while the GALAX/Strix ones were leaked. And there are still multiple ones neither leaked nor officially released.

So, you should be prepared that we might never get XOC BIOSes for Ampere, especially for the 3080, since that's not the card overclockers will bench; a 3080 XOC BIOS is therefore very likely never happening. For the 3090, the best bet is to hope the Kingpin BIOS is released officially again and still works on other cards, which is certainly no guarantee, especially this time when Ampere cards have multiple PWM controllers. I have no faith that there will be a GALAX/Strix XOC again. The safest bet is always to just buy the card with the highest power limit, which is so far the FTW3/Ultra (both): a 420W maximum power limit on the 3080 and 440W on the 3090. In case XOC doesn't happen, that's the most you're going to get.


----------



## vmanuelgm

The 3090 needs even more power to stabilize clocks. Guess Nvidia put in that many resistors to make things harder...


----------



## BigMack70

vmanuelgm said:


> The 3090 needs even more power to stabilize clocks. Guess Nvidia put so many resistors to make things harder...


3090 really seems like it should have been a 3 x 8-pin 450W card stock.


----------



## vmanuelgm

BigMack70 said:


> 3090 really seems like it should have been a 3 x 8-pin 450W card stock.


2x 8-pin is enough for watercooling; each 8-pin can handle 350-400W, which is enough power under that cooling.


Alphacool block arrived...

I have to check whether it fits the special PCB of the Giga, and I will probably have to shunt it under the block too.

Alphacool messed up the case picture: an Nvidia block with a Radeon sticker...


----------



## BigMack70

vmanuelgm said:


> 2x8pin are enough for watercooling, every 8pin can handle 350-400w, enough power under that cooling.


Obviously. But Nvidia would never release a stock card with 450W of power draw on 2x8-pin.


----------



## ThrashZone

vmanuelgm said:


> 2x8pin are enough for watercooling, every 8pin can handle 350-400w, enough power under that cooling.
> 
> 
> Alphacool block arrived...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have to check if it fits the special pcb of the Giga. I will probably have to shunt the block too,
> 
> Alphacool messed up the case picture, Nvidia block with Radeon sticker...


Hi,
Did you order some QDC's?


----------



## Mooncheese

HyperMatrix said:


> For anyone curious, Gamer Nexus did an LN2 OC on the EVGA RTX 3080 FTW3 during a live stream. They scored 14069 in port royal with 2370MHz GPU clock and +1050MHz on the memory. Temperatures were -155 to -190c. They did this with the help of a custom bios with 900W power limit. They currently hold the single card world record. Second place is rbuass, who you may remember from articles a few days ago. He was 581 points behind at 13488 on the Galax RTX 3080 card, also under LN2 at 2.34GHz. This is substantially faster.
> 
> For comparison, the highest clocked RTX 2080ti under LN2 at 2.7GHz (14% higher) scored 13090 (7% lower). The 2080ti was a KingPin Edition card. So once a KPE 3080 comes out, we can expect an even bigger bump. Let's just hope we get access to an unlocked power bios.
> 
> 3DMark Port Royal Hall of Fame
> 
> 
> The 3DMark.com Overclocking Hall of Fame is the official home of 3DMark world record scores.
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2459554


Yeah but LN2 bears no relation to reality. 

My best Port Royal score is 10,400 @ like 2100 MHz core and 8100 MHz memory under a full water block @ 42C with a 2080 Ti, about 27% lower than a 2080 Ti @ 2700 MHz under LN2.


----------



## Mooncheese

zhrooms said:


> No, that's not how it works at all, do not get your hopes up for a custom BIOS ever leaking, especially from Steve, they had multiple custom BIOSes during Turing launch that never saw the light of day, they're sadly actually professional and don't leak things.
> 
> BIOSes typically leak from "sort of" trusted LN2 overclockers, they share with their close friends that share with their friends and suddenly some idiot uploaded it publicly. This is completely RNG (unpredictable), there are still multiple XOC BIOSes from 2080 Ti that are not publicly released, I have three of them as an example, 2000, 1000 and 600. The fact that GALAX and Strix XOC are out now for Ti is 100% luck, Kingpin XOC was released officially while the GALAX/Strix was leaked. And there are still multiple ones not leaked or officially released.
> 
> So you should be prepared that we might never get XOC BIOSes for Ampere, especially for the 3080, since that's not the card overclockers will bench; a 3080 XOC BIOS is therefore very likely never happening. For the 3090 the best bet is to hope the Kingpin BIOS is released officially again, and that it still works on other cards, which is certainly no guarantee, especially this time when Ampere cards have multiple PWM controllers. I have no faith that there will be a GALAX/Strix XOC again. The safest bet is always to just buy the card with the highest power limit, which so far is the FTW3/Ultra (both): a 420W maximum power limit on the 3080 and 440W on the 3090. In case XOC doesn't happen, that's the most you're gonna get.


Great reply. Yeah, I was already leaning towards the FTW3, as 440W is plenty (550W for a 10% gain is ridiculous). I intend to put the card under a full water block and undervolt; this combination has worked nicely with both the 1080 Ti and 2080 Ti. 440W should be plenty with a slight undervolt. I imagine 550W-class scores like we've seen from vmanuelgm should be possible with 450W and an undervolt with the card at ~42-45C.

This is what I did to circumvent the 300w limit of 1080 Ti FE. In demanding games and benches it would drop down to 1935-1911 MHz with default voltage. Undervolting to 1.025v it would hold 2000 MHz with no additional wattage. 

Speaking of benchmarking, I just ran my best Timespy @ 2130 MHz core, 8000 MHz memory @ 42C (378W according to HWiNFO64) @ 1.055v, and now it's refusing to validate the result:

"Error code : 2

(problem id: 112664351)"

I did just recently upgrade Timespy during the recent sale and I'm wondering if doing that has corrupted the install or something because I don't have any changes made in Nvidia Control Panel to cheat the benchmark and make the score higher. This score is my personal best but is ballpark for 2080 Ti at this freq under water. 

Any ideas?


----------



## vmanuelgm

ThrashZone said:


> Hi,
> You oder some QDC's


Hi mate.

I already have QDC's...


----------



## ThrashZone

vmanuelgm said:


> Hi mate.
> 
> I already have QDC's...


Hi,
Oops, it was Changboy who was inquiring about QDC's, my bad


----------



## kaydubbed

Mooncheese said:


> HyperMatrix is correct, youre better off going with a high end AIB 3080 variant and putting it under a water block and closing the gap to the 3090 with overclocking for $1000 or so.
> 
> You could probably get a really good AIB 3080 @ 2100 MHz core under full water block @ 420w (or more via vbios swap) within 5% of an air cooled 3090 at default clocks / with little TDP headroom that can't really see it's 20% potential uplift because of power starvation and falling off of an efficiency cliff beyond 400-450w (10% gained on shunted 3090 between 390 and 550w).
> 
> 3080 FTW3 = $850?
> Water block and backplate = $200
> Pump-res combo, soft tubing, decent 360mm radiator = $300
> 
> $1350 and youre within 5% of an air cooled 3090 with a 10-12% overclock and your GPU can be dead silent if you want it to top it off and instead of sunken cost into a single component the cooling parts are forward compatible should you be in a position to upgrade to 3090 FTW3 down the road and the rest of the cooling system is universally forward compatible with any other expansion, be it another radiator or CPU block etc.
> 
> I don't think the FTW3 will be $1800 though, that's the Strix price, if I remember correctly FTW3 is going to go for $1675-$1700 and that's before the 5% Associate Code promo, assuming you can put that in and checkout before the bots buy them all again though.
> 
> If youre in the position where you already have a loop it's a different story. But I wouldn't dump $1600 on a GPU, I would focus on a cooling loop first. Water is the best, nothing beats the absence of noise and insanely low temps (my 2080 Ti is nearly inaudible at 45C about the entire card @ 373w).



Thanks for the tips. I think I'll get the FTW3 3090 in anticipation of future water cooling. Maybe just the GPU for the time being since my Arctic Freeze II 360 AIO is doing fine with my 3950x. I can fit another rad in my CoolerMaster H500M; even a 420mm since my case is not exactly 'spec' since it's been modded [i.e. gutted]. Any recommendations for a pump-reservoir, rad, and block vendor?


----------



## Shawnb99

Kingpin


kaydubbed said:


> Thanks for the tips. I think I'll get the FTW3 3090 in anticipation of future water cooling. Maybe just the GPU for the time being since my Arctic Freeze II 360 AIO is doing fine with my 3950x. I can fit another rad in my CoolerMaster H500M; even a 420mm since my case is not exactly 'spec' since it's been modded [i.e. gutted]. Any recommendations for a pump-reservoir, rad, and block vendor?



For the FTW3 we are likely limited in blocks to EK, EVGA or Optimus. We'll see if others make a block, but for the 2080 Ti the only options were the Hydro Copper or EK.
I suggest Hardware Labs for radiators and Optimus/Aquacomputer for a pump/reservoir combo.


----------



## Mooncheese

Great news, reinstalling 3DMark resolved the error problem: 



https://www.3dmark.com/3dm/50650351?



2130 MHz core, 7950 MHz memory, 1.055v, 378w, @ 42C. 




vmanuelgm said:


> 2x8pin are enough for watercooling, every 8pin can handle 350-400w, enough power under that cooling.
> 
> 
> Alphacool block arrived...
> 
> I have to check if it fits the special pcb of the Giga. I will probably have to shunt the block too,
> 
> Alphacool messed up the case picture, Nvidia block with Radeon sticker...


Does that block only cover the GPU core and bottom bank of VRAM? That looks way different than the block on the box. 



kaydubbed said:


> Thanks for the tips. I think I'll get the FTW3 3090 in anticipation of future water cooling. Maybe just the GPU for the time being since my Arctic Freeze II 360 AIO is doing fine with my 3950x. I can fit another rad in my CoolerMaster H500M; even a 420mm since my case is not exactly 'spec' since it's been modded [i.e. gutted]. Any recommendations for a pump-reservoir, rad, and block vendor?


Declining marginal returns: a 3090 on air will only be 5-10% faster than a 3080 overclocked to the limit under water at 45C, and you can go with a 3080 for $300 less by my estimation, with everything but the block universally forward compatible (and even the block should be forward compatible with a 3090, 3080S or 3080 Ti upgrade).

It's just a big waste of money. 

You could go 3080 and when 3080 Ti / 3080S drops you could upgrade to that and sell your current 3080 for $500. 

As far as recommending cooling parts, I'm not partial to EK per se, like their stuff is overpriced, but they do make good radiators and their blocks are of good quality. For fittings I would go Bykski via AliExpress, all of my Bykski fittings exceed EK in quality at half the price. Half my loop is Bykski (fittings, tubing, Barrow distro plate) and the other half EK (monoblock, GPU block, rads, pumps). 

Hardware Labs makes good radiators as well, but those are pricey. 

EK's XE 360 is still at the top of the chart in terms of 360 performance: EK CoolStream XE 360mm Radiator Review - ExtremeRigs.net

I have that in the front and a CE 420 in the ceiling. 

CE 420 is a solid 420 rad but it's 45mm thick, make sure you have the space for push pull. 

Some advice from the outset: if you can afford to do so, go with 2 D5's in serial; you will nearly double the head pressure of a single D5 and the flow rate is better than a DDC. I run my D5's at 40% RPM and they go to 100% RPM when water temp gets up to 29C (load) via a 1/4" water temp sensor that you can place where a plug would go in the GPU block, etc.

Here's my build:


----------



## Mooncheese

Shawnb99 said:


> Kingpin
> 
> 
> 
> For the FTW3 we are likely limited in blocks to EK, EVGA or Optimus. We'll see if others make a block but for the 2080TI the only option was the Hydropcopper or EK.
> I suggest Hardwarelabs for radiators and Optimus/Aquacomputer for pump/reservoir combo


The problem I have with EK is that last time it took them an entire year to make the block for the FTW3. They stated they have one planned, but we have no real ETA. Bykski also have one planned, and if I pick up the 3090 I may go with them this time around, as I find the quality of their products exceeds that of EK at nearly half the price (fittings and tubing; their block is a little over $100 including the backplate, whereas EK wants $185 for the block and another $50 for the backplate, or almost double the price).

2080 Ti FTW3 release: Nov 2018
EKWB release: Nov 2019


----------



## J7SC

Re. custom XOC bios, I am pretty sure that eventually, some of the hi-po LN2 ones will leak into the wild (even if they shouldn't, at least not w/o a pw-protected folder that contains the warnings). I didn't load custom bios for my 2080 Tis as they are already at 380W max each and good clockers, not to mention are also used for work apps, but for a single 3090, I might. Re. a 3090 purchase, I still have to figure out the work related aspects re. '2x / NVL'...but especially with 'only' one card, I will opt for a card/s that have dual bios, factory full w-blocks and a very good backplate cooling solution. I also think that in a few weeks / months, we will see several additional 3090 custom PCB models from the major manufacturers, once they have enough GPUs for binning.

What I've been missing so far is any leak / info / pics on the Galax 3090 HoF...but maybe that will change in a few days. Usually, the HoF (/OcL) hits early and with massive clocks and scores, though this time, the race with KPE and perhaps Asus could be tighter. Speaking of scores such as Port Royal, I haven't run a single-GPU one for over a year (10556 was my last score)...but for dual GPU, it's been well over a year in the 3DM HoF PrtRyl top-30 range...though I think that is about to change 'rapidly' when dual 3090s show up in a few days time ;-). Per the thumbnail below, I am looking forward to learning about the % difference of DLSS on vs off for the 3090s and comparing that to the 2080 Ti's delta -- it should be telling re. AI capability changes with this new gen. Finally, on systems still limited to PCIe 3.0 compared to PCIe 4.0, 3090(s) should really be able to show the advantages of PCIe 4.0, especially at 4K....hope we see more on that (beyond initial tests w/ single 3080s) as it determines other aspects of the upgrade planning.


----------



## Mooncheese

J7SC said:


> Re. custom XOC bios, I am pretty sure that eventually, some of the hi-po LN2 ones will leak into the wild (even if they shouldn't, at least not w/o a pw-protected folder that contains the warnings). I didn't load custom bios for my 2080 Tis as they are already at 380W max each and good clockers, not to mention are also used for work apps, but for a single 3090, I might. Re. a 3090 purchase, I still have to figure out the work related aspects re. '2x / NVL'...but especially with 'only' one card, I will opt for a card/s that have dual bios, factory full w-blocks and a very good backplate cooling solution. I also think that in a few weeks / months, we will see several additional 3090 custom PCB models from the major manufacturers, once they have enough GPUs for binning.
> 
> What I've been missing so far is any leak / info / pics on the Galax 3090 HoF...but maybe that will change in a few days. Usually, the HoF (/OcL) hits early and with massive clocks and scores, though this time, the race with KPE and perhaps Asus could be tighter. Speaking of scores such as Port Royal, I haven't run a single-GPU one for over a year (10556 was my last score)...but for dual GPU, it's been well over a year in the 3DM HoF PrtRyl top-30 range...though I think that is about to change 'rapidly' when dual 3090s show up in a few days time ;-). Per the thumbnail below, I am looking forward to learning about the % difference of DLSS on vs off for the 3090s and comparing that to the 2080 Ti's delta -- it should be telling re. AI capability changes with this new gen. Finally, on systems still limited to PCIe 3.0 compared to PCIe 4.0, 3090(s) should really be able to show the advantages of PCIe 4.0, especially at 4K....hope we see more on that (beyond initial tests w/ single 3080s) as it determines other aspects of the upgrade planning.
> 
> View attachment 2459582


10,556 is really good for a 2080 Ti. I just ran Port Royal at 2130 MHz core, 7950 MHz memory @ 373W (FTW3 BIOS) and 1.063v, topping out at 43C (70F ambient, a little warmer than usual), and it did 10,326, roughly 10% slower than a 3080 at the same power draw.

Speaking of Ampere-specific DLSS improvement, if you read my recent post pertaining to the Metro Exodus benchmark, vmanuelgm's shunted 3090 @ 550W was 45% faster with DLSS off and a staggering 67% faster with DLSS on.

I'm not talking about the 3090 having DLSS on compared to the 2080 Ti with DLSS off. I'm talking about 67% faster with DLSS on for both cards.

That's at 550W, however. Going by the difference between vmanuelgm's Timespy and Port Royal runs at 390W, that's worth about 10%, so we need to deduct 10 points from the 45% and 67% figures if we are to compare a 2080 Ti @ 2100 MHz @ 373W to a 3090 @ 390W, making for a 35% and 57% difference.

That run was also done at 4K, and I only have a 1440p run; Ampere's lead over the 2080 Ti is roughly 10% larger at 4K, so to be fair, deduct another 10 points from those figures if we are to look at 1440p performance.

Or roughly 25% without DLSS and 47% with DLSS at 1440p.
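The normalization above is just straight point subtraction; a minimal sketch of it (all inputs are this thread's rough estimates, not measurements):

```python
# Normalize the 550W 4K Metro Exodus deltas (3090 vs 2080 Ti) down to
# ~390W and 1440p by knocking off ~10 percentage points per adjustment.
# The 45/67 deltas and the 10-point deductions are rough figures quoted
# in this thread, not measured data.
def normalize(delta_pts, deductions):
    """Straight point subtraction, as done in the posts above."""
    return delta_pts - sum(deductions)

for label, delta in {"DLSS off": 45.0, "DLSS on": 67.0}.items():
    adjusted = normalize(delta, [10.0, 10.0])  # power step, then resolution step
    print(f"{label}: {adjusted:.0f}%")  # DLSS off: 25%, DLSS on: 47%
```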

This is right in line with my estimation of 3090 performance vis a vis 2080 Ti at 1440p at the same power draw.

The DLSS performance kind of explains why a few games really show a higher performance disparity between Turing and Ampere (Control and Shadow of the Tomb Raider).

Ampere has more Tensor Cores and is making good use of them.

And yes, AI Super-Sampling will be extremely relevant going forward. Both consoles will employ the technology, the question is, will RDNA 2 based Navi GPU's be able to compete with DLSS 2.0 and 3.0?


----------



## J7SC

...the DLSS on vs off in my thumbnail pic above equated to just about a 40% gain - though that run was done in May, and I'm not sure what DLSS 2.x version was employed then. Once 2x 3090 results are published for PortRoyal DLSS, I'll re-run my 2x 2080 Tis with the latest 3DM patches and drivers to afford a cleaner comparison...


----------



## Mooncheese

Dude, I didn't even know that DLSS was employed in Port Royal and that you could compare it on and off! 

40% is huge. 

And here's some early evidence backing up my hypothesis that undervolting Ampere may be the way to go. Imagine this but with ~40-45C about the entire card: 


https://www.reddit.com/r/nvidia/comments/iwt953

I'm confident that GA-102-300 @ 0.90v @ 440W under loads of water will be comparable to vmanuelgm's performance @ 550W @ 71C on air.

This 3080 is at factory performance with an undervolt and -50W TDP on air. 



https://www.3dmark.com/compare/spy/14012733/spy/14012641/spy/14011894/spy/14011774#



Imagine this under water.


----------



## domenic

Take a look at this article - it talks about the memory chips on the 3080 & heat, suggesting the addition of thermal pads on the backplate reduces memory temps. With the 3090 actually having memory mounted on the back, some type of thermal pad is going to be essential for those of us going down the water block route + custom backplate (EK, for example).

Question - where does one get thermal pads that would work here and how do you determine the proper thickness?

Report: Why The GeForce RTX 3080's GDDR6X Memory Is Clocked at 19 Gbps


Blame the heat.




www.tomshardware.com


----------



## J7SC

Mooncheese said:


> Dude, I didn't even know that DLSS was employed in Port Royal and that you could compare it on and off!
> (...)


...it's the 3DM 'DLSS Feature Test' that automatically runs Port Royal sequentially, once with and once without DLSS...also a great way to test how good GPU cooling is


----------



## Mooncheese

J7SC said:


> ...it's the 3DM 'DLSS Feature Test' that automatically runs Port Royal sequentially, once with and once without DLSS...also a great way to test how good GPU cooling is


I have amazing thermals, my GPU doesn't exceed 42-43C during 30-40 min stability test. Thanks for this info though, I'm curious as to how the 3080 looks here with DLSS on and off, that will give us an idea as to what to expect with the 3090. 

Kind of leaning towards picking up a 3090 again, I thought today was the review NDA or did they push that back to the 23rd? 

What card are you leaning towards?


----------



## Mooncheese

domenic said:


> Take a look at this article - Talks about the memory chips on the 3080 & heat suggesting the addition of thermal pads on the backplate reduces memory temps. With the 3090 actually having memory mounted on the back some type of thermal pad is going to be essential for those of us going down the water block route + custom backplate (i.e. EK for example)
> 
> Question - where does one get thermal pads that would work here and how do you determine the proper thickness?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Report: Why The GeForce RTX 3080's GDDR6X Memory Is Clocked at 19 Gbps
> 
> 
> Blame the heat.
> 
> 
> 
> 
> www.tomshardware.com


I have 1, 2 and 3mm Gelid GC Extreme pads on hand, but 2mm is probably a safe bet. These are pricey, but good to have hanging around:

GP-EXTREME 80 x 40 THERMAL PAD – Gelid Solutions

gelidsolutions.com


----------



## Nammi

Mooncheese said:


> Ampere has more Tensor Cores and is making good use of them.


Thought I'd throw some info out there as I've seen this stated multiple times now.

Amperes tensor cores have been beefed up so much that nvidia even decided to gimp them for the geforce lineup.
Tensor core counts:
2080ti - 544
3080 - 272
3090 - 328
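Those counts follow directly from the SM counts on the spec sheets (68 SMs on the 2080 Ti and 3080, 82 on the 3090): Turing carries 8 tensor cores per SM, Ampere 4. A quick sanity check:

```python
# Tensor core count = SM count x tensor cores per SM.
# Turing (TU102): 8 per SM; Ampere (GA102): 4 per SM.
cards = {
    "2080 Ti": (68, 8),  # (SMs, tensor cores per SM)
    "3080":    (68, 4),
    "3090":    (82, 4),
}
for name, (sms, per_sm) in cards.items():
    print(f"{name}: {sms * per_sm}")  # 544, 272, 328
```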


----------



## Mooncheese

Nammi said:


> Thought I'd throw some info out there as I've seen this stated multiple times now.
> 
> Amperes tensor cores have been beefed up so much that nvidia even decided to gimp them for the geforce lineup.
> Tensor core counts:
> 2080ti - 544
> 3080 - 272
> 3090 - 328
> View attachment 2459590


Great info, is this accounting for the way Nvidia is counting cores going by concurrent FP32 instruction pipelines, i.e., 2x whatever their core count is? If so then this should read:

2080 Ti = 544
3080 = 544
3090 = 656

Edit: I believe this is the case because the relative difference between 544 and 656, about 21%, matches the extra performance increase we are seeing in the Metro Exodus benchmark (DLSS 1.0):

At 4K, normalized for 390w, 3090 is 35% faster than 2080 Ti with DLSS disabled on both cards and 57% faster with DLSS enabled on both cards at the same power draw.

57 - 35 = 22 points

(656 - 544) / 544 ≈ 21%

This is speculative on my part but if Nvidia is counting Tensor Cores the way they are counting CUDA cores then this is what's going on. 

Given their dense count of Total FP16 Operations Per SM with half the Tensor Cores per SM (3080) I believe this may be the case.
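A quick arithmetic check of this (admittedly speculative) 2x-counting idea; the 656 figure is the hypothesis above, not an official count:

```python
# If Ampere tensor cores were double-counted the way the FP32 CUDA-core
# figure is, the 3090's relative core advantage over the 2080 Ti would be:
tu102_tensor = 544       # 2080 Ti, official count
ga102_doubled = 2 * 328  # 3090, hypothetically counted 2x -> 656

core_advantage = (ga102_doubled - tu102_tensor) / tu102_tensor
print(f"{core_advantage:.1%}")  # 20.6%

# ...which is in the same ballpark as the extra DLSS-on scaling in the
# normalized Metro Exodus comparison (57% vs 35%, i.e. ~22 points).
```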


----------



## inutile

EVGA 3090 prices leaked through Newegg price sorting filter:

XC3 Black: 1529.99 USD
XC3 Gaming: 1549.99 USD
XC3 Ultra: TBD
FTW3 Gaming: 1749.99 USD
FTW3 Ultra: 1799.99 USD 



EVGA RTX 3090 Pricing leaked on NewEgg (US)



Personally $1800 for the FTW3 is a bit over what I would consider paying for a bit more wattage that's going to result in a few percent perf increase.


----------



## Mooncheese

inutile said:


> EVGA 3090 prices leaked through Newegg price sorting filter:
> 
> XC3 Black: 1529.99 USD
> XC3 Gaming: 1549.99 USD
> XC3 Ultra: TBD
> FTW3 Gaming: 1749.99 USD
> FTW3 Ultra: 1799.99 USD
> 
> 
> 
> EVGA RTX 3090 Pricing leaked on NewEgg (US)
> 
> 
> 
> Personally $1800 for the FTW3 is a bit over what I would consider paying for a bit more wattage that's going to result in a few percent perf increase.


 Wow that's a lot higher than I was expecting (3080 FTW3 is $789, same PCB). At that rate you might as well go with the $1800 Strix and be guaranteed to be able to put it under a water block right away. 

It took EKWB one year to make a block for 2080 Ti FTW3.

2080 Ti FTW3 release date: Nov 2018

EKWB released: Nov 2019

iCX3 sensors vs. being able to put it under a block right away. No other real differences, other than the Asus board may be more reliable since they've automated their manufacturing process. We've yet to see the TDP of the Asus Strix 3090. This time around, the TDP of whichever card you go with will be of utmost importance; the boards vary so wildly from each other that there's no guarantee a vBIOS from one card will successfully flash to another.


----------



## Nammi

Mooncheese said:


> Great info, is this accounting for the way Nvidia is counting cores going by concurrent FP32 instruction pipelines, i.e., 2x whatever their core count is? If so then this should read:
> 
> 2080 Ti = 544
> 3080 = 544
> 3090 = 656
> 
> Edit: I believe this is the case because the relative difference between 544 and 656, about 21%, matches the extra performance increase we are seeing in the Metro Exodus benchmark (DLSS 1.0):
> 
> At 4K, normalized for 390w, 3090 is 35% faster than 2080 Ti with DLSS disabled on both cards and 57% faster with DLSS enabled on both cards at the same power draw.


The new tensor cores can do more operations each, hence why the lower number of cores can keep up, so no doubling here. Check the attachment on my previous post.

The thing with NV CUDA core counting is that with Turing they split INT and FP into different paths while only counting the FP units as CUDA cores, whereas before a CUDA core was composed of both INT and FP units. And with Ampere the definition of CUDA core has changed yet again; the 3090, as an example, has 5248 FP + 5248 FP/INT (the typical old CUDA core).


----------



## Mooncheese

$1800, I'm going with Strix, screw it. 

EVGA have lost touch with reality. 

3090 FTW3 @ $1500 would probably be a fair deal, trading the cost of the $150 FE cooler for the various value-added technology: the thermistors, the dual BIOS and the fuse. EVGA's cooler doesn't cost $150 to make. Probably $50.

So basically they are asking for $400 up and over FE. 

They've lost their damn minds. 

At least with the Strix I am guaranteed to not have to wait a year to put it under a water block as those blocks are already available for pre-order.


----------



## dentnu

inutile said:


> EVGA 3090 prices leaked through Newegg price sorting filter:
> 
> XC3 Black: 1529.99 USD
> XC3 Gaming: 1549.99 USD
> XC3 Ultra: TBD
> FTW3 Gaming: 1749.99 USD
> FTW3 Ultra: 1799.99 USD
> 
> 
> 
> EVGA RTX 3090 Pricing leaked on NewEgg (US)
> 
> 
> 
> Personally $1800 for the FTW3 is a bit over what I would consider paying for a bit more wattage that's going to result in a few percent perf increase.


I was planning on getting the 3090 Trio at first, then I saw what a joke the 3080 Trio was and how MSI is a **** company that likes to deceive their customers. I mean, come on, they have 3 8-pin power connectors on that 3080 Trio for no reason whatsoever. I am now left with either getting a 3090 FTW3 or Strix, and with that said, the price on these is pretty crazy. They're each $1800 + tax + overnight shipping, which comes out to about $1964 if I buy through EVGA's site. I keep second-guessing whether I should really spend that much. I would have no problem paying for it if I knew the price was justified. The question is whether the extra wattage on these is going to make a difference. I know it will if we get an XOC BIOS, but that is a big unknown at this point.

What cards are you guys going for? Are any of you going to be paying for those $1800 cards? If so, mind sharing why? Make me a believer...


----------



## Mooncheese

Nammi said:


> The new tensor cores can now do more operations hence why the lower amount of cores can keep up, so no doubling here. Check the attachment on my previous post.
> 
> The thing with nv Cuda core counting is that with Turing they split INT and FP into different paths while only counting the FP units as Cuda cores. Whereas before a Cuda core composed of both INT and FP units. And with Ampere the definition of Cuda core has changed yet again, 3090 as an example has 5248 FP + 5248 FP/INT(typical old cuda core).


You didn't quote my entire post, at the end I stated this:

_*This is speculative on my part but if Nvidia is counting Tensor Cores the way they are counting CUDA cores then this is what's going on.

Given their dense count of Total FP16 Operations Per SM with half the Tensor Cores per SM (3080) I believe this may be the case.*_

Going by the Metro Exodus benchmarks we have on hand, the only way the ~20% increase in DLSS performance can be accounted for is if you count Ampere's Tensor Cores the same way NV is counting their CUDA cores (2x the concurrent FP32 pipelines, 1x INT32; roughly 74% of the average rendering stream is FP32, 26% INT32).

Edit: 

Per Table 4: 

Total FP16 FMA operations per SM: 

TU-102 = 512 with 8 Tensor cores per SM
GA-102 (3080) = 512 with 4 Tensor Cores per SM and "1024" sparse. 

FP16 FMA operations per Tensor Core 

TU-102 = 64
GA-102 (3080) = 128 Dense and 256 Sparse

That's 2x FP16 FMA Operations per SM and 4x FP16 FMA Ops per Tensor Core accounting for the fact that GA-102 has 4 Tensor Cores per SM vs TU-102 @ 8 cores.
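Multiplying those whitepaper figures out shows the per-SM parity:

```python
# Per-SM FP16 FMA throughput from the figures quoted above:
# Turing: 8 tensor cores/SM x 64 FMA ops each; GA102: 4 tensor cores/SM
# at 128 dense (256 with sparsity). Half the cores, same dense rate per SM.
tu102_per_sm = 8 * 64           # 512
ga102_dense_per_sm = 4 * 128    # 512 (parity with Turing)
ga102_sparse_per_sm = 4 * 256   # 1024 (the "sparse" figure)

print(tu102_per_sm, ga102_dense_per_sm, ga102_sparse_per_sm)  # 512 512 1024
```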


----------



## Mooncheese

dentnu said:


> I was planning on getting the 3090 Trio at first, then I saw what a joke the 3080 Trio was and how MSI is a **** company that likes to deceive their customers. I mean, come on, they have 3 8-pin power connectors on that 3080 Trio for no reason whatsoever. I am now left with either getting a 3090 FTW3 or Strix, and with that said, the price on these is pretty crazy. They're each $1800 + tax + overnight shipping, which comes out to about $1964 if I buy through EVGA's site. I keep second-guessing whether I should really spend that much. I would have no problem paying for it if I knew the price was justified. The question is whether the extra wattage on these is going to make a difference. I know it will if we get an XOC BIOS, but that is a big unknown at this point.
> 
> What cards are you guys going for? Are any of you going to be paying for those $1800 cards? If so, mind sharing why? Make me a believer...


I'm going with Strix, to hell with EVGA, they are out of their god damn minds. 

3080 FTW3 is only $90 up and over FE but the 3090 FTW3 is $300 up and over FE? 

Huh? 

It's basically the same card; you're not buying more of anything with the 3090 FTW3 PCB. They are simply price gouging because they anticipate the market will bear these prices.

FTW3 is NOT on par with Strix, I don't care if it has iCX3 sensors. 

Fair price for FTW3 is $1600 for their Gaming variant and $1650 for Ultra. 

Also, I'm not waiting around for a year for a quality block to be made for it, and no, the Hydro Copper is not a quality block.


----------



## MonnieRock

Hummm,

I was expecting about $1649.99 for the RTX 3090 FTW3 Ultra, basically around $150-$160 over the FE.

The FE 3080 is $699.99 and the 3080 FTW3 Ultra is $809.99, a $110 difference.

Strange that they want $300 for the difference between the FE 3090 and the FTW3 Ultra 3090.


----------



## J7SC

MonnieRock said:


> Hummm,
> 
> I was expecting about 1649.99 for the RTX 3090 FTW3 Ultra. Basically around $150-$160 over the FE
> 
> The FE 3080 is $699.99 and the 3080 FTW3 Ultra is $809.99.....$110 difference
> 
> 
> *Strange they want $300 for the difference between the FE 3090 and the FTW3 Ultra 3090*


...likely because it is a smaller, higher-profit-margin market segment? Also, waiting a few months (January?) might pay off.


----------



## Mooncheese

MonnieRock said:


> Hummm,
> 
> I was expecting about 1649.99 for the RTX 3090 FTW3 Ultra. Basically around $150-$160 over the FE
> 
> The FE 3080 is $699.99 and the 3080 FTW3 Ultra is $809.99.....$110 difference
> 
> 
> Strange they want $300 for the difference between the FE 3090 and the FTW3 Ultra 3090


It's even stranger when you account for the fact that the 3090 FE cooler costs about $155 to manufacture according to Igor's Lab and the FTW3 cooler is probably $50. 

So just add $400 on top of FE price. 

For thermistors, dual bios, 10% more TDP and a 3 year warranty that isn't voided if you put it under water. 

I don't know, $400, that's approaching the cost of an entire mid tier GPU. 

$1600 (Gaming) and $1650 (Ultra) sounds about fair, that's about $250 higher than FE accounting for the cheaper cooler. 

$1800, they've lost their minds.


----------



## MonnieRock

J7SC said:


> likely because it is a smaller


From what I understand, the PCB is the same size between the FTW3 3080 Ultra and the FTW3 3090 Ultra. 

Yes, it has more RAM and VRMs etc., but that should be reflected in the price difference between, for example, the $699 and $1499 FE cards.


----------



## Nammi

Mooncheese said:


> You didn't quote my entire post, at the end I stated this:
> 
> _*This is speculative on my part but if Nvidia is counting Tensor Cores the way they are counting CUDA cores then this is what's going on.
> 
> Given their dense count of Total FP16 Operations Per SM with half the Tensor Cores per SM (3080) I believe this may be the case.*_
> 
> Going by the Metro Exodus benchmarks we have on hand, the only way the ~20% increase in DLSS performance can be accounted for is if you count Ampere's Tensor Cores the same way NV is counting their CUDA cores (2x the concurrent FP32 pipelines, 1x INT32; roughly 74% of the average rendering stream is FP32, 26% INT32).
> 
> Edit:
> 
> Per Table 4:
> 
> Total FP16 FMA operations per SM:
> 
> TU-102 = 512 with 8 Tensor cores per SM
> GA-102 (3080) = 512 with 4 Tensor Cores per SM and "1024" sparse.
> 
> FP16 FMA operations per Tensor Core
> 
> TU-102 = 64
> GA-102 (3080) = 128 Dense and 256 Sparse
> 
> That's 2x FP16 FMA Operations per SM and 4x FP16 FMA Ops per Tensor Core accounting for the fact that GA-102 has 4 Tensor Cores per SM vs TU-102 @ 8 cores.


Ah, you must've edited after I quoted...
I took a better look at the NVIDIA whitepaper, and it's actually not 5248 FP + 5248 FP/INT as I stated earlier; it's 5248 FP + (2624 FP + 2624 FP/INT).
Anyway, I'm just going by NVIDIA's definition of a tensor core; I don't see what there is to speculate about.
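A quick cross-check of the Table 4 numbers quoted above (these are just the whitepaper figures restated, nothing new):

```python
# FP16 FMA throughput per SM = tensor cores per SM x FMA ops per core.
def fma_per_sm(cores_per_sm, fma_per_core):
    return cores_per_sm * fma_per_core

tu102_sm = fma_per_sm(8, 64)        # Turing: 8 tensor cores x 64 ops
ga102_dense = fma_per_sm(4, 128)    # Ampere: 4 tensor cores x 128 ops
ga102_sparse = 2 * ga102_dense      # 2x with structured sparsity

print(tu102_sm, ga102_dense, ga102_sparse)  # 512 512 1024

# 3090 CUDA core count: 82 SMs x (64 FP32 + 64 FP32/INT32) datapaths
print(82 * (64 + 64))  # 10496
```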

Still unsure whether to try to snag a Strix/FTW3 at launch or wait...


----------



## domenic

*Bots are still at it.* Been keeping an eye on the EVGA site - I happened to hit refresh and this appeared with Add to Cart in green. Site immediately crashed the second I clicked on anything. Not just the product page but evga.com.


----------



## inutile

dentnu said:


> I was planning on getting the 3090 Trio at first, then I saw what a joke the 3080 Trio was and how MSI is a **** company that likes to deceive their customers. I mean, come on, they have 3 8-pin power connectors on that 3080 Trio for no reason whatsoever. I am now left with either getting a 3090 FTW or Strix, and with that said, the price on these is pretty crazy. They're each $1800; with tax and overnight shipping it comes out to about $1964 if I buy through EVGA's site. I keep second-guessing myself about whether I really should spend that much. I would have no problem paying for it if I knew that the price is justified. The question is whether the extra wattage on these is going to make a difference. I know it will if we get an XOC BIOS, but that is a big unknown at this point.
> 
> What cards are you guys going for? Any of you going to be paying for those $1800 cards? If so, mind sharing why? Make me a believer...



Truthfully at this point I think I'm just going to wait a few weeks and let the dust settle a bit. I want to see how the AIB boards stack up to each other, see how the waterblock market sorts out (is anyone going to do blocks with active cooling backplates for the hot GDDR6X modules on back of PCB?), and avoid fighting bots and dealing with the F5 stress of trying to get a card right now in the first place. I really just want an upgrade before Cyberpunk so I have a bit of time to make a better informed decision before rushing into a variant that may have poor block selection or other downsides compared to others.


----------



## ttnuagmada

Mooncheese said:


> I'm going with Strix, to hell with EVGA, they are out of their god damn minds.
> 
> The 3080 FTW3 is only $90 over FE, but the 3090 FTW3 is $300 over FE?
> 
> Huh?
> 
> It's basically the same card; you're not buying more of anything with the 3090 FTW PCB. They are simply price gouging because they anticipate the market will bear these prices.
> 
> The FTW3 is NOT on par with the Strix, I don't care if it has iCX3 sensors.
> 
> A fair price for the FTW3 is $1600 for the Gaming variant and $1650 for the Ultra.
> 
> Also, I'm not waiting around for a year for a quality block to be made for it, and no, a HydroCopper is not a quality block.



Sorry to disappoint, but the Strix is going for $1800 as well. The price is already up on Best Buy's website.


----------



## domenic

ttnuagmada said:


> Sorry to disappoint, but the Strix is going for $1800 as well. The price is already up on Best Buy's website.


Is the "BIOS switch" operationally the same on all of these cards across vendors? If you want to play around with other BIOSes, do you just first make a backup of the two originals, then flash one, see what happens, and if there's a problem flip to the other, restore, etc.?

The only problem I see with the Strix is that the power limit is only 400W vs. 440W on the FTW3. The extra temp sensors also seem like a plus given the back-side memory / water block situation.


----------



## Nizzen

vmanuelgm said:


> Nope, no VR here.
> 
> 
> 
> I saw the live and told them several times to go above 14120…


I only got 12257 points in Port Royal with 3080 
Nzz1

3DMark Port Royal Hall of Fame - www.3dmark.com


----------



## lokran88

inutile said:


> is anyone going to do blocks with active cooling backplates for the hot GDDR6X modules on back of PCB?


I can tell you that in the official German Watercool thread, a Watercool employee stated that a "watercooled backplate had been considered already for Turing - maybe we will realize it with this generation".

And Aqua Computer already offered active backplates for Turing, even though those were basically just heatpipes connected to the water terminal, so they didn't have much contact with the coolant.

I think it is very likely that both of these brands will actually offer actively cooled backplates. But they are still hesitant to communicate it: they say this release has been confusing, and they want to study things well and decide once they really get their hands on the cards, rather than rush out a block that doesn't meet the quality they are aiming for.

But from what I see, EK's already-announced Strix block just has a normal backplate, which is not really what I am looking for, considering how hot GDDR6X seems to run.


----------



## inutile

A few more early benches from someone with a Gaming X Trio:


https://www.reddit.com/r/nvidia/comments/ix9zcg


----------



## domenic

delete


----------



## Mooncheese

MonnieRock said:


> Hummm,
> 
> I was expecting about 1649.99 for the RTX 3090 FTW3 Ultra. Basically around $150-$160 over the FE
> 
> The FE 3080 is $699.99 and the 3080 FTW3 Ultra is $809.99.....$110 difference
> 
> 
> Strange they want $300 for the difference between the FE 3090 and the FTW3 Ultra 3090


It's $400 when you understand that the FE cooler costs $155 to make and the FTW3 probably $50.



domenic said:


> Is the "BIOS switch" operationally the same on all of these cards across vendors? If you want to play around with other BIOSs do you just first make a backup of the two original and then flash to one, see what happens, if there is a problem flip to the other, restore, etc.?
> 
> Only problem I see with the Strix is the power limit is only 400 vs 440 with the FTW3. Extra temp sensors also seems like a plus with the back side memory / water block situation also.


Yes, if you have a bad flash, there is the safety measure of switching to the backup BIOS via the physical switch on the board.



ttnuagmada said:


> Sorry to disappoint, but the Strix is going for 1800 as well. The price is already up on Best Buys website.


Yes, I know that. I thought I made it clear that for $1800 I was going with the Strix, because EKWB will have a water block out for that card in October, whereas with the FTW3, who knows; last time it took EKWB a year to produce a water block for the FTW3.

2080 Ti FTW3 release: Nov 2018
EKWB 2080 Ti FTW3 water block release: Nov 2019

Bykski has stated they will make a WB for the FTW3, but as of right now they just have a placeholder product page, as they are likely awaiting the card from EVGA so that they can manufacture a block for it.

Asus is clearly ahead of the curve, getting their card to EKWB early so that a block can be ready in October.

EVGA, not so much. They probably want their HydroCrapper block @ $300 to be first on the market for everyone sitting around waiting to put their FTW3 under one, and yes, it has the silly lipstick-red line on it along with steel plugs and mediocre performance (someone was at 71°C on a 2080 Ti with one over on the EVGA forum, but I can't find the thread).


----------



## domenic

W*T*F - This just showed as in-stock on Amazon. $1999.95??????????

I thought the Ebay scalpers were bad....


----------



## Mooncheese

domenic said:


> W*T*F - This just showed as in-stock on Amazon. $1999.95??????????


Scalper, report it as having misleading information, reason: price.


----------



## dentnu

domenic said:


> W*T*F - This just showed as in-stock on Amazon. $1999.95??????????


It's sold by a reseller, not Amazon. That's why the price is so high...


----------



## Mooncheese

I've just received some bad news that my finances may not allow for this upgrade so I may truly be sitting this one out.

Either card, including a ~$235 WB + backplate combo, and I'm looking at around $2250 after taxes and shipping for a 35% bump at 1440p, a 45% bump in VR (2160x2160 per eye), and potentially 55% at 1440p in titles with DLSS. Although they've announced DLSS for VR, no titles currently support it.

I can live without a 35% bump for $2250.

These prices are absolutely insane.

And bear in mind, this isn't the Titan card, this is the 80 Ti.

Looking forward to seeing how Big Navi shapes up and whether NV will respond with a 3080S / 3080 Ti to try to squeeze in between the 3080 and 3090, but honestly, there isn't much of a gap there.

I can't believe how things have spiraled out of control with NV's pricing shenanigans.

$1200 for a 50% bump over the 1080 Ti was pushing it. $1800 for a 35% bump over the 2080 Ti (at the same power draw)? They are straight smoking crack.


----------



## HyperMatrix

lokran88 said:


> I can tell you from an official German Watercool thread where an employee of Watercool stated that a "watercooled backplate had been considered already for Turing - maybe we will realize it with this generation ".
> 
> And Aquacomputer offered already active backplates for Turing even though those were basically only heatpipes connected to the water terminal, so it did not have much contact with the coolant.
> 
> I think it is very likely that both of these brands will actually offer active cooling backplates. But they are still hesitant with communicating it as they say that this release has been confusing and they want to study things well and decide when they really got hand on the cards and not rush out with a block that does not meet the quality that they are aiming for.
> 
> But when I see EK is already offering a Strix block there is just a normal backplate, which is not really what I am looking for, considering how hot GDDR6X seems to run.


The problem is not knowing whether Aquacomputer will build a block that works with the good cards like the Strix or FTW3. I know others have. But Aquacomputer has generally had very limited selection of cards it covers. That’s why there’s still a bit of hesitation around FTW3 Ultra at $1800. Because even at the stock 440W, it won’t be enough of a performance boost over the FE. And if we get an unlocked bios, what difference will there really be between the FE and it?

I’m just hoping there’s a better chance of an unlocked BIOS leak/release for the FTW3, since it’s the same board design as the KPE. And with the FTW3 Ultra, FTW3 Gaming, Hybrid, and KPE editions all using the same board design, I’m hoping it’ll be the most popular 3rd-party board and will get a good block made by Aqua Computer.


----------



## HyperMatrix

Bit of info on the GDDR6X temp issues. Seems like an actively cooled backplate is the only way to go. Even if block makers don’t use one, you should probably stick a couple of fans right on top of the backplate.









Report: Why The GeForce RTX 3080's GDDR6X Memory Is Clocked at 19 Gbps - www.tomshardware.com


----------



## Twintale

Hello everyone. After seeing 3090 results here in the Metro Exodus benchmark, I decided to reinstall it and try as well. My system first: [email protected] 4.5 GHz, 64 GB 4000 MHz RAM, Gigabyte RTX 2080 Ti Xtreme WaterForce with OC (1770 factory + 100 manual = 1870 MHz boost clock, from GPU-Z). Secondly, the results:








~20% less than 3090 is actually pretty sad to see... but I guess it was expected?


----------



## HyperMatrix

Twintale said:


> Hello everyone. After seeing 3090 results here in the Metro Exodus benchmark, I decided to reinstall it and try as well. My system first: [email protected] 4.5 GHz, 64 GB 4000 MHz RAM, Gigabyte RTX 2080 Ti Xtreme WaterForce with OC (1770 factory + 100 manual = 1870 MHz boost clock, from GPU-Z). Secondly, the results:
> 
> ~20% less than 3090 is actually pretty sad to see... but I guess it was expected?


What I’ve noticed in several games is that DLSS can shrink the FPS gap between the 2080 Ti and the 3080/3090, and DLSS implementations can have wildly different levels of performance gains, from as little as 30% to over 100%. Since those can differ on a game-by-game basis, if you want a pure power comparison between the cards, I’d recommend running with DLSS off. It’s also worth noting that Metro Exodus still uses DLSS 1.0, which doesn’t use your tensor cores for acceleration.


----------



## Twintale

Thanks for explaining, and yes, I did a no-RT, no-DLSS benchmark, but I can’t compare it to the 3090 since the person here didn’t test that:








The reason why I compare my card to 3090 is because I wanted to upgrade to 3090.


----------



## kaydubbed

Mooncheese said:


> I'm going with Strix, to hell with EVGA, they are out of their god damn minds.
> 
> 3080 FTW3 is only $90 up and over FE but the 3090 FTW3 is $300 up and over FE?
> 
> Huh?
> 
> It's basically the same card, youre not buying more of anything with the 3090 FTW PCB, they are simply price gouging because they anticipate the market will bear these prices.
> 
> FTW3 is NOT on par with Strix, I don't care if it has iCX3 sensors.
> 
> Fair price for FTW3 is $1600 for their Gaming variant and $1650 for Ultra.
> 
> Also, not waiting around for a year for a quality block to be made for it and no HydroCopper is not a quality block.


Besides the better QA and reputation than other companies (great motherboards) what does the Strix have going for it over the EVGA FTW3 gaming besides WB availability?


----------



## J7SC

HyperMatrix said:


> Bit of info on gddr6x temp issues. Seems like active cooled backplate is the only way to go. Even if block makers don’t use one, should probably stick a couple fans right on top of the backplate.
> 
> Report: Why The GeForce RTX 3080's GDDR6X Memory Is Clocked at 19 Gbps - www.tomshardware.com


...Galax has the clip-on fan option for the RTX 30XX backplate (below), but I will see what Watercool or Aqua Computer bring to market, and for which vendor / model, re. active backplate cooling.


----------



## domenic

HyperMatrix said:


> Bit of info on gddr6x temp issues. Seems like active cooled backplate is the only way to go. Even if block makers don’t use one, should probably stick a couple fans right on top of the backplate.
> 
> Report: Why The GeForce RTX 3080's GDDR6X Memory Is Clocked at 19 Gbps - www.tomshardware.com


So I am thinking an FTW3 3090: grab whatever single-sided block comes out first plus a backplate, then go ghetto with some of these thermal pads and, like you said, slap a fan on top of the backplate. That will buy some time for maybe something better to emerge. By then they will either do an interim refresh or the two-year cycle will be upon us again. Argh.


----------



## HyperMatrix

Twintale said:


> Thanks for explaining and yes - I did no rt and no dlss benchmark but I can’t compare it to 3090 since a person here didn’t test it:
> 
> The reason why I compare my card to 3090 is because I wanted to upgrade to 3090.


vmanuelgm did a video with DLSS on/off using a shunted Gigabyte 3090 on air. He got 46.83 fps without DLSS and 66.03 fps with DLSS. Despite being shunted, though, his card isn't performing very well; I'm not sure why. It was only scoring 13% higher than an EVGA 3080 FTW3 OC (on air, no shunt, normal BIOS) in Port Royal. Could be driver issues or other variables, but it's at least a frame of reference.
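For what it's worth, the DLSS uplift in that run works out like this:

```python
# DLSS gain from vmanuelgm's Metro Exodus numbers quoted above.
fps_off, fps_on = 46.83, 66.03
gain = fps_on / fps_off - 1
print(f"{gain:.0%}")  # about a 41% uplift
```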


----------



## BigMack70

$1800 for the FTW3 is a bit steep, but I guess unsurprising. Probably means the Hybrid FTW3 will be $1900-2k. Almost makes me more interested in just water-blocking an FE.


----------



## HyperMatrix

BigMack70 said:


> $1800 for the ftw3 is a bit steep but I guess unsurprising. Probably means the hybrid ftw3 will be $1900-2k. Almost makes me more interested in just water blocking a FE


Haha, if there were a guarantee of an unlocked BIOS, and given the limited general OC potential of these cards, I'd say the FE would be a great choice. But not knowing about the BIOS is what makes it hard, especially since the FE is a custom design, so it's unknown/unlikely that another card's BIOS will work on it.


----------



## BigMack70

HyperMatrix said:


> Haha if there was a guarantee of an unlocked bios, and given the limited general OC potential of these cards, I'd say the FE would be a great choice. But not knowing about the bios is what makes it hard. Especially since the FE is a custom design so unknown/unlikely that another card's bios can work on it.


I'm getting less and less confident about the OC potential of these cards as more info comes out on the 3080. Saw one guy who managed a successful power mod to let it draw 520+W and got a 5% performance boost.

Frankly, I'm not bothering to flash my card, regardless of what model it is, if 5% performance is what's on offer. I'll never notice a 5% performance delta in games.


----------



## J7SC

BigMack70 said:


> I'm getting less and less confident about the OC potential of these cards as more info comes out on the 3080. Saw one guy who managed to successfully power mod to let it draw 520+W and got 5% performance boost.
> 
> Frankly, I'm not bothering flashing my card, regardless of what model it is, if 5% performance is what's on offer. I'll never notice a 5% performance delta in games.


...yeah, per several early RTX 30 OC results, the single best thing an owner can do seems to be to throw as much cooling at these cards as is possible / reasonable for daily use (with the added cooling requirement for the back).


----------



## vmanuelgm

Nizzen said:


> I only got 12257 points in Port Royal with 3080
> Nzz1
> 
> 3DMark Port Royal Hall of Fame - www.3dmark.com


It is a nice result; you are almost in the top 10...

I got 14390 today...











Installed the Aurora block on my Gigabyte after cutting some methacrylate to accommodate the extra DisplayPort the Giga has. Had to remove the backplate since it was interfering with the memory slots on my Omega.


----------



## vmanuelgm

Mooncheese said:


> Great news, reinstalling 3DMark resolved the error problem:
> 
> 
> 
> https://www.3dmark.com/3dm/50650351?
> 
> 
> 
> 2130 MHz core, 7950 MHz memory, 1.055v, 378w, @ 42C.
> 
> 
> 
> 
> Does that block only cover the GPU core and bottom bank of VRAM? That looks way different than the block on the box.
> 
> 
> 
> Declining marginal returns, the 3090 on air will only be 5-10% faster than overclocked to the limit 3080 under water at 45C and you can go with a 3080 for $300 less by my estimation and everything but the block is forward compatible (the block will be forward compatible with a 3090, 3080S or 3080 Ti upgrade).
> 
> It's just a big waste of money.
> 
> You could go 3080 and when 3080 Ti / 3080S drops you could upgrade to that and sell your current 3080 for $500.
> 
> As far as recommending cooling parts, I'm not partial to EK per se, like their stuff is overpriced, but they do make good radiators and their blocks are of good quality. For fittings I would go Bykski via AliExpress, all of my Bykski fittings exceed EK in quality at half the price. Half my loop is Bykski (fittings, tubing, Barrow distro plate) and the other half EK (monoblock, GPU block, rads, pumps).
> 
> Hardware Labs makes good radiators as well, but those are pricey.
> 
> EK's XE 360 is still at the top of the chart in terms of 360 performance: EK CoolStream XE 360mm Radiator Review - ExtremeRigs.net
> 
> I have that in the front and a CE 420 in the ceiling.
> 
> CE 420 is a solid 420 rad but it's 45mm thick, make sure you have the space for push pull.
> 
> Some advice from the outset, if you can afford to do so go 2 D5's in serial, you will nearly double the head pressure of a single D5 and the flow rate is better than DDC. I run my D5's at 40% RPM and they go to 100% RPM when water temp gets up to 29C (load) via 1/4 water temp sensor than you can place where a plug would go in the GPU block etc.
> 
> Here's my build:



The block covers the core, the memory on the core side, and the VRMs. As I said in another post, I had to cut the methacrylate to accommodate the extra DisplayPort the Giga has. I also had to remove the backplate since it was interfering with the memory slots on my Omega. Temps are quite good; not as good as with an EK block, but good enough.


----------



## BigMack70

vmanuelgm said:


> It is a nice result, u are almost at top 10...
> 
> I got today 14390...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Installed the Aurora block on my Gigabyte after cutting some methacrylate to accommodate the extra DisplayPort the Giga has. Had to remove the backplate since it was interfering with the memory slots on my Omega.


What's your sense of how the temps are on the back side memory modules? Do you think they need those heatsinks?


----------



## vmanuelgm

BigMack70 said:


> What's your sense of how the temps are on the back side memory modules? Do you think they need those heatsinks?


Forgot to add the probes, tomorrow will add them and report temperatures.

The hottest parts are memory modules and back side of VRM's.


----------



## CallsignVega

I haven't even been paying attention to this over-hyped crappy gen. What is the TL;DR version? Something like:

1. Ampere is already maxed out and is a terrible overclocker.
2. An overclocked 3080 barely beats an overclocked 2080 Ti.
3. The 3090 is only 9% faster than the 3080.
4. The only 3090 cards at 400W or greater, and thus worthwhile to put under water, are the Strix (400W) and FTW3 (440W).

Anything I am missing or that needs correction?


----------



## kx11

Jacob Freeman just confirmed the 3090 Kingpin should be out by late October.


https://twitter.com/i/web/status/1308217026469482496


----------



## MrTOOSHORT

I can wait for kingpin!


----------



## BigMack70

CallsignVega said:


> I haven't even been paying attention to this over-hyped crappy gen. What is the TL;DR version? Something like:
> 
> 1. Ampere is already maxed out and is a terrible overclocker.
> 2. An overclocked 3080 barely beats an overclocked 2080 Ti.
> 3. The 3090 is only 9% faster than the 3080.
> 4. The only 3090 cards at 400W or greater, and thus worthwhile to put under water, are the Strix (400W) and FTW3 (440W).
> 
> Anything I am missing or that needs correction?


The 3090 under water is still going to be 35%+ better than a water-cooled 2080 Ti.

At the top end, this gen is a lot like last gen. Last gen, 1080 Ti to 2080 Ti was +30% performance for +$500. 2080 Ti to 3090 is about +35-40% performance for +$300.

In the end, this gen is only exciting for 1080 Ti (or older) owners who noped out of the $1200 asking price for the upgrade last gen. They got a good upgrade with the 3080. But for anyone who bought in on the 2080 Ti, this gen feels kind of bad, because the 3080 is definitely not worth it over the 2080 Ti, and the 3090 is a bare-minimum performance upgrade for another $300 on top of what was already a bad price point.
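Putting rough numbers on that (MSRP deltas, with the uplift figures from this thread; the 35-40% is taken at its midpoint):

```python
# Dollars paid per percentage point of performance gained, gen over gen.
gens = {
    "1080 Ti -> 2080 Ti": (500, 30.0),   # +$500 MSRP for ~+30%
    "2080 Ti -> 3090":    (300, 37.5),   # +$300 MSRP for ~+35-40%
}
for name, (extra_dollars, pct_gain) in gens.items():
    print(f"{name}: ${extra_dollars / pct_gain:.2f} per % of uplift")
```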


----------



## vmanuelgm




----------



## J7SC

I stumbled upon this page > here (22 AIB 3090s) for some 'window shopping'...first time I've seen a pic of the Aorus Xtr (successor to what I have now). Still no pics of the Galax HoF though...when it finally comes out, I look forward to Buildzoid doing the PCB analysis for it and also the KPE


----------



## Emmanuel

BigMack70 said:


> The 3090 under water is still going to be 35%+ better than a water-cooled 2080 Ti.
> 
> At top end, this gen is a lot like last gen. Last gen, 1080 Ti - 2080 Ti was +30% performance for +$500. 2080 Ti - 3090 is like +35-40% performance for +$300.
> 
> In the end, this gen is only exciting for 1080 Ti (or older) owners who nope-d out of the $1200 asking price for the upgrade last gen. They got a good upgrade then with the 3080. But for anyone who bought in on the 2080 Ti, this gen feels kinda bad because 3080 is definitely not worth it over the 2080 Ti and 3090 is a bare minimum performance upgrade for another $300 on top of what was already a bad price point.


Yep, that's me. Pulling out and shipping my 1080 Ti tomorrow and wondering which RTX 3090 I'm upgrading to. I still feel like it's terrible value compared to the 3080, but I'll need the extra VRAM and I'm not willing to wait for the 20GB 3080s.


----------



## Glerox

CallsignVega said:


> I haven't even been paying attention to this over-hyped crappy gen. What is the TLR version? Something like:
> 
> 1. Ampere is already maxed out and is a terrible overclocker.
> 2. A 3080 overclocked barely beats a overlocked 2080Ti.
> 3. 3090 is only 9% faster than 3080.
> 4. Only 3090 cards 400w are greater are the Strix (400w) and FTW3 (440w) that are worthwhile to be put under water.
> 
> Anything that I am missing or needs correction?


Yup, that's pretty much it. Also, SLI no longer gets driver support, and future native SLI support in games is a complete unknown.

soooo, you'll buy one or two? 😂


----------



## BigMack70

Emmanuel said:


> Yep that's me. Pulling out and shipping my 1080ti tomorrow and wondering which RTX 3090 I'm upgrading to. I still feel like it's a terrible value compared to the 3080, but I'll need the extra VRAM and I'm not willing to wait for the 20GB 3080s.


The VRAM thing really baffles me. I don't know why they didn't just sell a 20GB 3080 model at launch for $1000.


----------



## Emmanuel

I drafted my opinion on this earlier, so here it is.

-They release a powerful card at an exciting price but neuter it down to 10GB, which pretty soon won't be 4K future-proof. Even the 4-year-old 1080 Ti had more VRAM (11GB).

-The solution to this intentional problem: the RTX 3090. Real-world performance gains appear to be approximately 10% over the 3080. What this really translates to is that, until you hit the stuttering hell of the 10GB VRAM wall, you're paying an extra $800 for 10% more performance. Correct me if I'm wrong, but I think this is a new cost/performance record for NVIDIA consumer cards.

-Knowing that people are hyped up and eager to upgrade to the 3000 series, they conveniently hold off the release of the most sensible option, the 20GB 3080. That should have been the standard 3080, period. These cards are intended for, and truly outperform previous generations at, higher resolutions only. And what comes with higher resolutions? Greater VRAM usage.

If you are currently in the market for an upgrade, you now fall into one of these categories:

1- You wait for the 20GB 3080 release, whenever that is. If you intend to sell your current GPU to finance your next one, waiting unfortunately translates into further devaluation = less money in your wallet.
2- You pony up the extra $800 and are just done with it = less money in your wallet.
3- You buy the 10GB 3080 now, knowing that you might have to upgrade to the 20GB version when VRAM starts becoming a problem. This makes you a two-time buyer, more money for NVIDIA = less money in your wallet.

So yeah, NVIDIA is milking the crap out of us. I might put up with it considering that I haven't given them a dime in 3.5 years and the performance gain in my case is around 100%.


----------



## J7SC

...I agree that a 3080 w/ 20GB of VRAM makes A LOT of sense (Gigabyte has already all but confirmed it via a certification label, per a recent post here)... but maybe NVIDIA is holding it back as an RDNA2-release fighter?


----------



## BigMack70

Emmanuel said:


> Correct me if I'm wrong, but I think this is a new cost/performance record for nVidia consumer cards.


As crazy as it sounds, in a generational comparison it's still a better deal than the 2080 Ti was vs 1080 Ti when it launched. Only Titan cards have been worse value propositions than the 2080 Ti.


----------



## Mooncheese

BigMack70 said:


> The 3090 under water is still going to be 35%+ better than a water-cooled 2080 Ti.
> 
> At top end, this gen is a lot like last gen. Last gen, 1080 Ti - 2080 Ti was +30% performance for +$500. 2080 Ti - 3090 is like +35-40% performance for +$300.
> 
> In the end, this gen is only exciting for 1080 Ti (or older) owners who nope-d out of the $1200 asking price for the upgrade last gen. They got a good upgrade then with the 3080. But for anyone who bought in on the 2080 Ti, this gen feels kinda bad because 3080 is definitely not worth it over the 2080 Ti and 3090 is a bare minimum performance upgrade for another $300 on top of what was already a bad price point.



I made an error in the Metro Exodus benchmark analysis comparing a 2080 Ti FTW3 under a full water block @ 2100 MHz core, ~7900 MHz memory, to vmanuelgm's shunted 3090 run @ 550W.

Comparing the 4K data without DLSS, the 3090 is 45% faster. Seeing as vmanuelgm's 3090 is 10% faster at 550W vs. 390W, we need to subtract 10% to get the difference at the same power draw: 35%.

So it's 35% faster at the same power draw @ 4K without DLSS, and 57% faster with DLSS enabled on both cards, because Ampere has the equivalent of 20% more tensor core performance (see previous analysis).

And seeing as the 3080 (320W) is 30% faster at 4K but only 20% faster than the 2080 Ti (260W) at 1440p, and the 3090 is shaping up to be 20% faster than the 3080 at both resolutions, we can reasonably estimate the 3090's 1440p advantage to be about 10% lower than at 4K.

So 25% without DLSS on both cards and ~45% with DLSS on both cards @ 1440p.
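For transparency, here's that estimate computed the simple way (subtracting percentage points, as above) and by compounding the ratios, which gives a slightly smaller number:

```python
# Shunted 3090 @ 550W is ~45% faster than a 2080 Ti at 4K (no DLSS),
# and the shunt itself is worth ~10%. Both figures are from this thread.
shunted_gain = 45   # percent
shunt_worth = 10    # percent

print(shunted_gain - shunt_worth)          # 35 (simple subtraction)
print(round((1.45 / 1.10 - 1) * 100))      # 32 (compounding the ratios)
```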

GA102 falls off a performance-efficiency cliff beyond 400w.

The 3080 under LN2, with virtually unlimited power, only beat the 2080 Ti by 1k points in Port Royal, a 2.5k-point increase over factory clocks (11.5k), making for a 21.7% overclock.

The 2080 Ti does 8738 in Port Royal at factory clocks, and the world record is 13090 under LN2, making for a staggering 49.8% overclock.

So much for the theory that Ampere would similarly overclock well once thermal and power constraints were removed.

Ampere is already 20% into its 30% overclock.

At a point with Turing, say beyond 2150 MHz and 400W, more frequency requires an inordinate amount of extra power. But 2150 MHz core is a 30% overclock!

GA102 is clocked 10% under this threshold from the factory!

You MIGHT be able to squeeze out another 10%, but you're going to need to run GA102 at roughly 41% more TDP (160W on top of 390W)!

It takes 160W more over 390W for just a 10% return with GA102-300! Basically, 1 additional MHz beyond, say, 1950 MHz requires 1 watt of power!

The first 1950 MHz only requires ~390W! And that last 160 MHz requires another 160W!
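The overclock percentages check out against the Port Royal scores quoted above, and the power math is straightforward:

```python
# LN2 overclock headroom implied by the Port Royal scores quoted above.
print(round((13090 / 8738 - 1) * 100, 1))   # 2080 Ti: 49.8% over stock
print(round((14000 / 11500 - 1) * 100, 1))  # 3080:    21.7% over stock

# Power cost of the last ~10% on a shunted GA102 (390W -> 550W):
print(round(160 / 390 * 100))               # ~41% more power
```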

Basically, this is the only upgrade path for someone with a 2080 Ti. The 3080 is maybe 10% faster at 1440p at the same power draw, and that's honestly pushing it.

If you're at 4K or in VR, then Ampere is 10% faster than at 1440p.

I went for a walk this afternoon and thought about this situation deeply. This is such a ridiculous value proposition that it truly borders on the absurd.

A quality AIB 3090 (Strix or FTW3) would run me $1800 before taxes, or around $2k after taxes. Add a water block and backplate from EKWB and you're looking at $2300 including tax and shipping. For a 25% gain (45% in DLSS titles) at 1440p and a 35% gain (55% in DLSS titles) in VR / at 4K?

For $2500 this thing is going to need to be as fast as 2080 Ti SLI and transform into a sandwich maker and make me a sandwich on command.

"Introducing Transformer 3090, 2x faster than 2080 Ti, can transform into a humanoid robot and make you a sandwich = $2500!"

1080 Ti to 2080 Ti was not 30%; it was around 50% in actuality.



https://www.3dmark.com/compare/spy/14026055/spy/2949106



That's with no DLSS or RT core involvement! The 2080 Ti is @ 373W and that 1080 Ti is only at 300W, though. I have another run around 16k @ 320W, which still shows a 50% gain in this bench.

It's not simply $300 more than the $1200 2080 Ti for 35-45% more performance.

1. 2080 Ti was more than 25% faster than 1080 Ti in actuality
2. You could buy a 2080 Ti used, as I did: an XC2 local from Craigslist for $900, making for $1050 with the water block, for a solid 50% gain.

Now it's more than 2x the price for half the gain ($2300 for 25-45%, DLSS dependent, at 1440p, whereas the 2080 Ti took my 1440p performance up 50% across the board in the vast majority of my demanding titles, with a few outliers, and no DLSS).

I don't know who this product is marketed to but I've had enough of this nonsense.

I know Nvidia reads this sub-forum, so please don't interpret this as a diatribe directed at you; it's more for them.

They have lost touch with reality.

Also, 8K? No one wants 8K. Ultrawide 1440p is where it's at, and for that my 2080 Ti is plenty fast and will have to tide me over while we wait for Hopper, or for Big Navi, whose 6900 XT is shaping up to sit between the 3080 and 3090 with 24GB of video memory (no engineered obsolescence here). Its underlying architecture, RDNA 2, is shared with both next-gen consoles, which will have path tracing, AI super-sampling, and a rapid GPU data-retrieval pipeline ("RTX IO" equivalent), so it will most likely run those console ports natively better. Oh, and for $1000. In November.

I'm waiting; this thing is a joke. The sad thing is, seeing as how the 3090 is only 20% faster than the 3080 best case, there's no room to squeeze a 3080S or 3080 Ti in there, UNLESS they intend to shaft the 3090 crowd with a "3080 Ti" that amounts to the same GA102-300 die with, say, 16GB of video memory for $1000. I mean, they've basically done this before, and they did just release a card that they claim will be faster than the 2080 Ti (conveniently left at 260W in their "comparisons") for $500, less than two years after releasing the $1200 2080 Ti.

Either way, I'm not buying this, NV have completely lost their marbles.

$1800 before taxes for a 25% bump at 1440p over an overclocked 2080 Ti @ the same power draw, with next-to-zero overclocking headroom, LMAO.

No, away with you and your nonsense.

Here's that 3090 @ 375w vs 2080 Ti @ 375w

16,750 vs 20,569 Time Spy GPU = 22% increase for $1700 before taxes

Metro Exodus = 25% and 45% (DLSS) at 1440p and 35% and 55% (DLSS) at 4K for $1700 before taxes

Here's 2080 Ti vs 1080 Ti = 57% increase for $1200 new (~$700-900 used)

Here's 1080 Ti vs 980 Ti = 47% increase for $700

Here's 980 Ti vs 780 Ti = 59% increase for $700
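To put those generational numbers side by side, here's a quick sketch (all percentages and launch prices are the ones quoted above, taken at face value):

```python
# Gen-over-gen uplift vs. launch price, using the figures quoted above.
gens = [
    ("980 Ti  vs 780 Ti ", 59, 700),
    ("1080 Ti vs 980 Ti ", 47, 700),
    ("2080 Ti vs 1080 Ti", 57, 1200),
    ("3090    vs 2080 Ti", 22, 1700),   # Time Spy GPU, both at 375 W
]

for name, gain_pct, price in gens:
    # How much uplift each $100 of launch price bought that generation.
    print(f"{name}: +{gain_pct}% for ${price} "
          f"({gain_pct / (price / 100):.2f}% per $100)")
```

Every generation in that list bought less uplift per dollar than the one before it, and the 3090 is the steepest drop yet.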

Anyhow, goodbye everyone, I don't recommend anyone buy this at this price point. I recommend waiting to see what AMD has in store for us in November.

6900XT rumors:

10% faster than the 3080, 24GB of video memory, under 300W (7nm TSMC), for $1000, and it will most assuredly run the next-gen console ports that are built on the same architecture better.

Unless you're tied to Nvidia hardware, i.e. a G-Sync panel (the whole point of the module, in retrospect, was to get you stuck in their ecosystem), I recommend considering supporting the competition.

Don't let the pretty cooler fool you: Nvidia is in deep kaka, with both next-gen consoles running the same architecture as their only competitor in the discrete gaming GPU market. They rushed Ampere out the door in an attempt to beat AMD to the punch and have mostly succeeded at that. But the fight isn't over. If Big Navi GPUs handle the next-gen console ports' native path tracing, AI super-sampling, and rapid GPU data-retrieval pipeline (I don't even know what AMD is calling this, but it's their version of RTX IO, and they were the first to invent it) more efficiently, Nvidia is in trouble. Think about it: Nvidia is going to have to render DLSS and RT on top of engines built around RDNA 2 architecture, and that's not going to be as efficient as simply rendering those features natively on RDNA 2, i.e. Big Navi.

Anyhow, even if you aren't keen on considering AMD: if you wait, NGreedia is likely to drop whatever card they intend to shoehorn in between the 3080 and 3090 in response to a $1000 24GB card that is faster than the 3080 at the same price point.

Buying this mediocrity, and I do mean mediocrity, right now is not very prudent.

This is what a monopoly looks like, a $1700 before taxes 80 Ti card that is the only upgrade path to the last 80 Ti card you probably bought for $1200 before taxes.

No empire lasts forever, here's to hoping AMD upsets the GPU market just as they did the CPU market.

*Addendum: 

If you're an Nvidia goon reading this, the final nail in the coffin of my loyal support is not just your pricing but your killing off of 3D Vision as of driver 425.31.

How much did that save you in the grand scheme of things? You had to pay a programmer for the extra 5 minutes required to copy the previous 3D Vision bits forward to the next driver release? Gotta get rid of that to save what, 5 minutes of labor per driver update?

Ok, well I still use 3D Vision, and upgrading to Ampere means I say goodbye to that? From what we can tell thus far, 3D Vision no longer works with DX11 titles. 











NVIDIA GeForce 3D Vision Driver Forums - Meant to be Seen (www.mtbs3d.com)





So yeah, no, you DO NOT deserve $1700 before taxes for your rebadged 80 Ti card. Sorry. *


----------



## doom26464

Would people even notice 10%? 

I mean if you put me in a chair without an FPS counter and one machine had a 3080 and the other a 3090, there's probably no way to tell the difference.

I would notice an extra $800 from my bank account however.


----------



## J7SC

doom26464 said:


> Would people even notice 10%?
> 
> I mean if you put me in a chair without an FPS counter and one machine had a 3080 and the other a 3090, there's probably no way to tell the difference.
> 
> I would notice an extra $800 from my bank account however.


...valid point, but the answer is: it depends. If you're playing on a nice 4K monitor, the extra FPS can get you from barely playable to a 'decent' experience in 4K ultra, especially in certain sims. A good example would be MS FlightSim 2020 (all hail CFR...)


----------



## inutile

I don't think a 3080 with 20GB makes any sense unless they increase the SM count as well. Going from 10GB to 20GB alone would be a technical spec win and look better on the box but isn't going to make any difference whatsoever in 99.99% of gaming applications for quite a while. This GDDR6X is also not cheap so you're increasing the price a good chunk for something that's going to produce virtually identical gaming benchmarks. I don't think it would be a good sell for anyone who actually looks at reviews and benchmarks.

We're obviously all speculating, but what makes most sense to me is that in 6-12 months they do a 3080Ti/S with more SMs and 20GB that more or less matches the 3090, then do a 3090Ti/S/Titan with the extra 2 possible SMs and 48GB once Micron makes 2GB mem modules. This would follow the typical pattern of an 80Ti coming 6-12 months later that rivals that gen's Titan (the 3090 in this case, obviously) for less money, and avoid royally pissing off all of the folks frantically trying to acquire a 3080 10GB right now.


----------



## Mooncheese

inutile said:


> I don't think a 3080 with 20GB makes any sense unless they increase the SM count as well. Going from 10GB to 20GB alone would be a technical spec win and look better on the box but isn't going to make any difference whatsoever in 99.99% of gaming applications for quite a while. This GDDR6X is also not cheap so you're increasing the price a good chunk for something that's going to produce virtually identical gaming benchmarks. I don't think it would be a good sell for anyone who actually looks at reviews and benchmarks.
> 
> We're obviously all speculating, but what makes most sense to me is that in 6-12 months they do a 3080Ti/S with more SMs and 20GB that more or less matches the 3090, then do a 3090Ti/S/Titan with the extra 2 possible SMs and 48GB once Micron makes 2GB mem modules. This would follow the typical pattern of an 80Ti coming 6-12 months later that rivals that gen's Titan (the 3090 in this case, obviously) for less money, and avoid royally pissing off all of the folks frantically trying to acquire a 3080 10GB right now.


That's what I was thinking; they are going to have to release a card that actually matches the 3090. I'm thinking they will take 4-8GB off the 3090, drop the price down to $1000 or $1200, and call it the 3080 Ti. Same GA102-300 die with no disabled SMs.

Also, last I heard NV buys GDDR6X in bulk at $10 per GB.

How many here would pay another $60 for a 16GB 3080? I bet 90% of you reading this would.
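Quick math on that, using the $10/GB bulk figure (which, again, is just what I last heard, not an official number):

```python
# Back-of-envelope VRAM cost, using the claimed ~$10/GB bulk GDDR6X price.
price_per_gb = 10              # unverified rumor, not an official figure
current_gb, upgraded_gb = 10, 16

extra_cost = (upgraded_gb - current_gb) * price_per_gb
print(f"10GB -> 16GB adds ~${extra_cost} in memory cost")
```

Even if the real bulk price were double that, the bill-of-materials bump would still be tiny next to the card's price.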

They didn't limit the 3070 and 3080 to 8 and 10GB for cost or logistical reasons, but because they took another page out of crApple's playbook and needed to engineer obsolescence somewhere into the product.

"See the Iphone 12, one new camera! Your Iphone 11 from just last year? Only 3 cameras! You need to upgrade!"

They want to increase the consumer replacement rate of discrete GPUs, $155 cooler and all, hence the 8 and 10GB intentional gimping of Ampere.

It's wasteful and anti-consumer behavior for sure.

Don't support it.

If you've had enough of all of this nonsense wait and see what AMD has in store in November.

$1700 80 Ti cards are what a monopoly looks like.

NGreedia is and has become the Intel of the GPU industry, here's to hoping AMD forces some of us to re-examine our relationship with them this year.


----------



## HyperMatrix

I can’t imagine a Titan with all cores enabled and 48GB vram. Because that card already exists. It’s the ampere quadro. Although with gddr6 not gddr6x memory due to unavailability of 2GB micron chips at this time. Best that’ll happen is the extra 2.4% cores enabled and move to 21 or 22 Gbps memory. 

Beyond that, the next upgrade is going to require a node shrink - which is theoretically possible, although an expensive option, which depends entirely on how competitive AMD is.


----------



## Emmanuel

In my opinion 20GB, 24GB etc is overkill but 10GB is cutting it too close for 4K gaming. I think even 14GB would have been acceptable and it wouldn't be that much more expensive to manufacture. I agree, most of us **** about 8k gaming, that's all pure marketing in an attempt to wow the consumer into thinking $1500 is justified. I'm actually looking at upgrading my current 4K monitor to a 4K OLED. I have 0 interest in moving to 8K and multiplying the processing requirement by 4 which would only ensure that I'll never max out most modern games for years.

nVidia strategically neutered the 3080 just enough that it remains good in many situations, yet enough that the 3090 becomes the only viable long-term solution (for those who don't upgrade every generation or don't want to be 2-time buyers within a single generation).

Part of me is getting impatient for an upgrade, having kept the same card for almost 4 years, but another part of me doesn't want to support nVidia's behavior that you guys described so well. This engineered obsolescence, the marketing buzzwords, and the modified naming conventions aren't fooling us. Plus, shame on them for being so greedy when instead they should be thankful that we're still giving them money during uncertain financial times. I might just hold off upgrading until closer to Christmas and give my CPU's IGP a spin to run my desktop lol.


----------



## zhrooms

Just got confirmation from a reddit user who got an *MSI RTX 3090 Gaming X Trio* early: *1785MHz Default Boost Clock* and *370W Default Power Limit*, with a *Maximum Power Limit of 380W (+3%)*. This is 20W lower than the Strix (unconfirmed) and 60W lower than the FTW3 (confirmed).
_BIOS Version 94.02.26.80.C8 / PG132 SKU 10 VGA BIOS MSINV388MH.122_

And we have PCB pictures from the 3080 Trio, so we know it's using the full *20 power stages*, since the 3080 board only populates 16 of the 20.


----------



## Esenel

380W is just a joke for these GPUs :-X


----------



## BigMack70

doom26464 said:


> Would people even notice 10%?
> 
> I mean if you put me in a chair without an FPS counter and one machine had a 3080 and the other a 3090, there's probably no way to tell the difference.
> 
> I would notice an extra $800 from my bank account however.


I think 10% can be noticeable in certain cases where your performance is on the borderline between acceptable and unacceptable. 

But I wouldn't expect anyone to be swapping a 3080 out for a 3090 anytime soon. 3080 is the value card, and most people want the value card. 

I don't care what anyone says about 20GB vram being stupid on the 3080 and 10GB being enough... Lots of games are already allocating 8-10GB of vram at 4k, and I don't trust that capacity to work well for very long in next gen games. 

And in any case, I'm not spending money to swap out a card with 11GB vram for a card with 10. Really hard for me to see a 3080 as anything but a side-grade for 2080Ti owners.


----------



## dentnu

Anyone here made up their mind and planning to get either the FTW3 or Strix @ $1800?


----------



## HyperMatrix

BigMack70 said:


> I think 10% can be noticeable in certain cases where your performance is on the borderline between acceptable and unacceptable.
> 
> But I wouldn't expect anyone to be swapping a 3080 out for a 3090 anytime soon. 3080 is the value card, and most people want the value card.
> 
> I don't care what anyone says about 20GB vram being stupid on the 3080 and 10GB being enough... Lots of games are already allocating 8-10GB of vram at 4k, and I don't trust that capacity to work well for very long in next gen games.
> 
> And in any case, I'm not spending money to swap out a card with 11GB vram for a card with 10. Really hard for me to see a 3080 as anything but a side-grade for 2080Ti owners.


Yeah, I'm in the same boat. The 3080 is enough of a performance upgrade over the Pascal Titan X, but there's no way on earth I'm going to go to 10GB VRAM when I've been on 12GB for 5 years. And I think Nvidia knew this too. If you have a 2080 Ti on air, then it's probably worth it to move to a 3080 for very little additional cost. But if you're under water and have solid overclocks going, it's really going to be such a minor upgrade that it's not worth the hassle.

And despite what people say about VRAM usage, consider that 4K textures actually suck. They're good at distance but lose detail close up. That's why a lot of texture mods and remasters like Crysis are going with 8K textures. Then add in higher levels of AA and supersampling. You'll find that even older games under those conditions will need more than 10GB.

10GB is basically the borderline “it’s ok for right now but you’re SOL when next gen games come out” amount. If you’re playing at sub-4K you’ll be fine. But at 4K, you’ll need more.


----------



## HyperMatrix

dentnu said:


> Anyone here made up their mind and planning to get either the FTW3 or Strix @ $1800?


The EVGA FTW3 is better in many ways: better board, higher power, better warranty/support. On the flip side, ROG Strix cards have had better water block support and availability. I'll still be going FTW3 at the risk of there never being an unlocked power BIOS. At least I'll have 440W to play with, guaranteed.


----------



## kot0005

MSI really dropped the ball on this series; they use a graphene backplate and it's flexible... like whyyy


----------



## Shawnb99

The latest news is the NDAs expire at the same time the 3090s go on sale, so there's no time for bad reviews or poor benchmarks to come out.

This is definitely shady by Nvidia


https://twitter.com/i/web/status/1308358228376461314


----------



## HyperMatrix

Shawnb99 said:


> The latest news is the NDAs expire at the same time the 3090s go on sale, so there's no time for bad reviews or poor benchmarks to come out.
> 
> This is definitely shady by Nvidia
> 
> 
> __ https://twitter.com/i/web/status/1308358228376461314


Same reason they pushed back the NDA on the 3080 reviews. They knew the reviews wouldn't be as nice as their marketing pitch. Fortunately we've had some 3090 leaks and know which ones to avoid. Especially at these ridiculous prices.


----------



## BigMack70

Shawnb99 said:


> The latest news is the NDAs expire at the same time the 3090s go on sale, so there's no time for bad reviews or poor benchmarks to come out.
> 
> This is definitely shady by Nvidia
> 
> 
> __ https://twitter.com/i/web/status/1308358228376461314


They know that the 3090 is going to be bought by people who just want the best and don't care. The market for this card is not buyers who need a review first. I'm not saying it isn't shady; it would be much more pro-consumer to give people reviews prior to making the purchase available, but their behavior makes perfect sense. It definitely suggests that the performance improvement is meaningfully less than the paper specs imply, so I'm inclined to expect a 10-15% performance boost over the 3080, with zero OC headroom on air.

They'll be gone before 9:05 AM EST, reviews or no.


----------



## zhrooms

dentnu said:


> Anyone here made up their mind and planning to get either the FTW3 or Strix @ $1800?


*FTW3*
24 Power Stages (*20*+4)
*440W* Power Limit (Confirmed)
Dual BIOS
1x HDMI 2.1
*Note:* It's been confirmed that both the FTW3 Gaming and FTW3 Ultra have a power limit of 440W; it does not matter which one you buy if manually overclocking, just a different factory boost clock, no binning.

*Strix*
22 Power Stages (18+4)
400W Power Limit (Unconfirmed)
Dual BIOS
*2x* HDMI 2.1
*Note:* The power limit was stated by ASUS on September 1 but might have changed since then; they said "up to 400W". Also, we don't know if that power limit applies to the non-OC model too; the Advanced model on the 2080 Ti had a lower power limit of 313W compared to the OC model's 325W.

Unless you need two HDMI 2.1 ports, the FTW3 is obviously much better. Better yet, just get the regular FTW3, not the Ultra; it's up to $70 less on Newegg ($1729.99).

*Edit:* Gigabyte Gaming OC has been confirmed

*Gaming OC*
19 Power Stages (15+4)
390W Power Limit (Confirmed)
Dual BIOS
*2x* HDMI 2.1
*Note:* Same size as Strix, at 320mm and 2.75 Slot (Strix is 319mm and 2.8 Slot). + Extra year of Warranty.

Price in the UK is £70 lower (or $90/€75). The Gaming OC is legitimately a valid alternative for people currently interested in the Strix: the 10W power difference is nothing, you save almost $100, and 3 fewer power stages don't do anything except raise VRM temps by a few degrees, which isn't worth paying to avoid.

FTW3 is still the champion, but I'd very likely choose Gaming OC over Strix based on this new information.

*Edit2:* Gigabyte AORUS Xtreme details surfaced a few days ago

*AORUS Xtreme*
20 Power Stages (16+4)
---W Power Limit (Unknown)
Dual BIOS
*3x* HDMI 2.1
*Note:* Massive cooler, the biggest one yet at 3.5 Slot (70mm), plus a year of extra warranty like all Gigabyte cards. The news of the Gaming OC featuring a power limit of 390W, exceeding the 2x8-Pin spec, gives me hope that Gigabyte won't hold the AORUS Xtreme's 3x8-Pin power limit back. I'm expecting it to at least match the FTW3 at 440W, possibly going higher; that would be a massive win for Gigabyte. It's also the same price as the Strix OC and FTW3 Ultra, but with two extra HDMI ports and a much larger cooler. It looks the best too, in my opinion: gold accents, an absolutely massive heatsink, and an LCD you can customize.


----------



## inutile

zhrooms said:


> *FTW3
> 24* Power Stages (*20*+4)
> *440W* Power Limit (*Confirmed*)
> Dual BIOS
> 1x HDMI 2.1
> *Note:* It's been confirmed both FTW3 Gaming and FTW3 Ultra has a power limit of 440W, it does not matter which one you buy if manually overclocking, just different factory boost clock, no binning.
> 
> *Strix*
> 22 Power Stages (18+4)
> 400W Power Limit (Unconfirmed)
> Dual BIOS
> *2x* HDMI 2.1
> *Note:* Power limit was stated by ASUS on September 1 but might have changed since then, they said "up to 400W". Also we don't know if that power limit also applies to non OC model, Advanced model on 2080 Ti had a lower power limit of 313W compared to OC model 325W.
> 
> Unless you need two HDMI 2.1, the FTW3 is obviously much better, and even better yet, just get the regular FTW3, not the Ultra, it's up to $70 less on Newegg ($1729.99).


I've been really happy with my 2080ti FTW3 and HydroCopper block but honestly I absolutely hate the design of the new HC block and knowing how limited and late other FTW compatible blocks are I'll probably wait and see if anything else shakes out in the power limit department.


----------



## zhrooms

Holy crap, the Gigabyte RTX 3090 Gaming OC: 19 power stages, 2x8-Pin, custom PCB (modified reference PCB)... 370W default power limit and a maximum power limit of *390W*!

They are exceeding the 375W 2x8-Pin spec by 15W. On Turing (2080 Ti) there was only one card that exceeded it, and that was the GALAX/KFA2 NVIDIA reference PCB at 380W (5W above); the FTW3 stayed just under it at 373W. And now on Ampere we've got the TUF OC for the 3080 running exactly 375W.

So it's very interesting that Gigabyte decided to go 15W above spec. If the ASUS claim of "up to 400W" holds true, the Gaming OC is just 10W below, for a considerably cheaper price ($90/€75/£70 less), and *yes*, the Gaming OC also features *Dual BIOS* and an *extra HDMI 2.1* port, just like the Strix. Just 3 fewer power stages for the GPU, so the VRM will run slightly warmer on the Gaming OC (by a few degrees; doesn't matter). The coolers are also the same size, both ~320mm (319 & 320), Strix 2.8 Slot and Gaming OC 2.75 Slot. 4-year warranty on the Gigabyte as well.
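For reference, the ceilings being discussed here fall out of the PCIe power-delivery spec: 75W from the x16 slot plus 150W per 8-pin connector. A tiny sketch:

```python
# In-spec PCIe board power: 75 W from the x16 slot + 150 W per 8-pin plug.
SLOT_W = 75
EIGHT_PIN_W = 150

def max_board_power(eight_pin_count: int) -> int:
    """In-spec power ceiling for a card with the given number of 8-pin connectors."""
    return SLOT_W + eight_pin_count * EIGHT_PIN_W

print(max_board_power(2))  # 375 -> the 2x8-Pin ceiling the Gaming OC exceeds
print(max_board_power(3))  # 525 -> the headroom a 3x8-Pin card has to play with
```

So a 390W limit on a 2x8-Pin card really is running the connectors past their rated budget, which is why it stands out.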


----------



## nievz

zhrooms said:


> *FTW3
> 24* Power Stages (*20*+4)
> *440W* Power Limit (Confirmed)
> Dual BIOS
> 1x HDMI 2.1
> *Note:* It's been confirmed both FTW3 Gaming and FTW3 Ultra has a power limit of 440W, it does not matter which one you buy if manually overclocking, just different factory boost clock, no binning.
> 
> *Strix*
> 22 Power Stages (18+4)
> 400W Power Limit (Unconfirmed)
> Dual BIOS
> *2x* HDMI 2.1
> *Note:* Power limit was stated by ASUS on September 1 but might have changed since then, they said "up to 400W". Also we don't know if that power limit also applies to non OC model, Advanced model on 2080 Ti had a lower power limit of 313W compared to OC model 325W.
> 
> Unless you need two HDMI 2.1, the FTW3 is obviously much better, and even better yet, just get the regular FTW3, not the Ultra, it's up to $70 less on Newegg ($1729.99).
> 
> *Edit:* Gigabyte Gaming OC has been confirmed
> 
> *Gaming OC*
> 19 Power Stages (15+4)
> 390W Power Limit (Confirmed)
> Dual BIOS
> *2x* HDMI 2.1
> *Note:* Same size as Strix, at 320mm and 2.75 Slot (Strix is 319mm and 2.8 Slot). + Extra year of Warranty.
> 
> Price in the UK is £70 lower (or $90/€75). GamingOC is legit a very valid alternative for people currently interested in the Strix, 10W power difference is nothing, and you save almost $100, 3 fewer power stages also don't do anything except raise VRM temps by a few degrees, not worth any money to keep them down by that amount.
> 
> FTW3 is still the champion, but I'd very likely choose Gaming OC over Strix based on this new information.
> 
> *Edit2:* Gigabyte AORUS Xtreme details surfaced a few days ago
> 
> *AORUS Xtreme*
> 20 Power Stages (16+4)
> ---W Power Limit (Unknown)
> Dual BIOS
> *3x* HDMI 2.1
> *Note:* Massive cooler, biggest one yet at 3.5 Slot (70mm), also a year extra warranty like all Gigabyte cards. The news of the Gaming OC featuring a power limit of 390W, exceeding the 2x8-Pin spec, gives me hope that Gigabyte won't hold the AORUS Xtreme 3x8-Pin power limit back, I'm expecting it to at least match FTW3 at 440W, possibly even going higher, that would be a massive win for Gigabyte, it's also the same price as Strix OC and FTW3 Ultra, but two extra HDMI ports, much larger cooler, looks the best too in my opinion, with gold accents, absolutely massive heatsink, LCD that you can customize, and lastly the front.


any info on the ASUS TUF Gaming GeForce RTX 3090?


----------



## zhrooms

nievz said:


> any info on the ASUS TUF Gaming GeForce RTX 3090?


No, but it should be identical to the 3080 TUF, so 350/375W on 2x8-Pin, 20 power stages, Dual BIOS, and the extra HDMI. I doubt they'd go over the 2x8-Pin spec (375W), but since Gigabyte went 15W over, who the hell knows; we'll just have to wait and confirm on release day.


----------



## nycgtr

zhrooms said:


> Holy crap, Gigabyte RTX 3090 GamingOC 19 Power Stages 2x8-Pin Custom PCB (Modified Reference PCB).. 370W Default Power Limit and Maximum Power Limit *390W*!
> 
> They are exceeding 375W 2x8-Pin spec by 15W, on Turing (2080 Ti) there were only 1 card that exceeded it, and that was GALAX/KFA2 NVIDIA Reference PCB at 380W (5W above), FTW3 stayed just under it at 373W and now on Ampere we got TUF OC for 3080 running exactly 375W.
> 
> So it's very interesting that Gigabyte decided to go 15W above spec, if ASUS claim of "up to 400W" holds true, the GamingOC is just 10W below, for a considerably cheaper price ($90/€75/£70), and *yes* GamingOC also features *DualBIOS* and an *extra HDMI 2.1* port just like Strix. Just 3 fewer power stages for the GPU, so VRM will run slightly warmer on GamingOC (by a few degrees, doesn't matter). Coolers are also same size, both are ~320mm (319 & 320), Strix 2.8 Slot and GamingOC 2.75 Slot. 4 Year warranty on Gigabyte as well.


Until you see the 3090 Gigabyte PCB shot and realize it's never getting a waterblock


----------



## zhrooms

nycgtr said:


> Until you see the 3090 Gigabyte PCB shot and realize it's never getting a waterblock
> View attachment 2459716


It's reference based, so it's super easy to fit water blocks to it; we even have a user (@vmanuelgm) two pages back who already runs one on the 3090 Gaming OC. It just required slight modification for the larger size of the HDMI module; the rest of the board fits just fine, because it's still 90% reference design. The flat connectors going out the side are not in the way for any block except possibly the EK one, which has an RGB piece at the end, but you can remove that if you want, and then it'd fit too (with the exception of the HDMI module). Example: removing the black part. Also, just looking at the screw holes, you'd only have to cut out maybe 5mm in the bottom left corner for the HDMI module; it almost fits already. We're talking 3-5mm that needs to be removed, completely safe if you don't care about warranty, and you can always buy a new plexi through EK support to restore it to factory new.


----------



## vmanuelgm

Just adding to zhrooms' comments: the Alphacool backplate that comes with the Aurora block is poorly designed. Some of the pads are so wide that they increase the total width of the card, causing issues with close memory slots, like in my Omega (you can't install the card with the backplate on). Besides, the proportions among the pads are bad, so some components like memory modules don't make contact with the backplate unless you take others off or make them thinner.


----------



## J7SC

I also find the 10 GB limit to be an 'artificial limitation' --- not least as the PCBs all seem to be set up for 12 GB anyway. As demonstrated before, I have seen 4K ultra apps 'allocate/use' 9 GB, and then mid-play increase that to well over 10 GB in certain scenes. It is that increase I find telling. On the other hand, GDDR6X is faster than GDDR6 on throughput. I repeat that IMO, they're holding the 20 GB 3080 version back to deal with the RDNA2 release...from what I've read, at least some of the RDNA2 models will come out with 16 GB VRAM and a price in the RTX 3080 range.

As to 8K, I also watched Jensen's presentation and the slide about 8K/60 FPS... I have no interest in 8K at this stage (maybe in a few years from now, depending on content developments). Still, I don't get why NVidia made the 'SLI will shift to devs' announcement if they want to push folks to buy 3090s for, among other things, 8K. Transposing the custom-profile burden onto individual game devs was how DLSS 1.0 ran into problems, before DLSS 2.0's different, more centralized approach came to save the day. Granted, traditional SLI's (AFR) days are likely numbered re. future games, but there are CFR and other mGPU techniques they could have used for 2x 3090 marketing, also in preparation for future mGPU / tile-type GPUs.

*EDIT:* On the Gigabyte / Aorus 3090 w-block questions: they offered a factory full w-block version in the previous RTX gen, i.e. for the top-line Aorus XTR (in addition to aftermarket blocks being available for it). That was even with a unique PCB design... hopefully they'll do the same with the 3090.


----------



## zhrooms

J7SC said:


> I also find the 10 GB limit to be an 'artificial limitation' --- not least as the PCBs all seem to be set up for 12 GB anyway. As demonstrated before, I have seen 4K ultra apps 'allocate/use' 9 GB, and then mid-play increase that to well over 10 GB in certain scenes. It is that increase I find telling. On the other hand, GDDR6X is faster than GDDR6 on throughput. I repeat that IMO, they're holding the 20 GB 3080 version back to deal with the RDNA2 release...from what I've read, at least some of the RDNA2 models will come out with 16 GB VRAM and a price in the RTX 3080 range.


Just market segmentation as always; the 3080 at 320-bit (GA102-200) is not a successor to the 2080 Ti at 352-bit (TU102-300). 10GB is most definitely an increase from 8GB.

2GB > 4GB > 8GB > 8GB > 10GB (Non-Ti equivalent)
1.5GB > 3GB > 6GB > 11GB > 11GB (Ti equivalent)

Don't buy the 3080 if 10GB isn't enough. It's clearly not the Ti successor, and if you want a Ti... there is no Ti. The best we can hope for is a 3080 SUPER 20GB with roughly 10% more CUDA cores, but it'll cost, probably $999-1199. It still won't be a Ti replacement as it's still 320-bit; we just have to accept that Ti flagships are a thing of the past. Now we get 3080, 3080 SUPER, and 3090 instead.

And yes, it's very obvious that they're withholding the 20GB model to compete with the 16GB AMD cards; 12 and 16GB variants are confirmed. Both beat out the 3080 at 10GB, meaning NVIDIA has to release a 20GB model or they'll lose a ton of sales, because very, very few will bother getting the 3090. The question is whether the 3080 20GB variant will be a SUPER card or not; the gap between the 3080 and 3090 is already so small (20%) that either they don't increase CUDA cores at all, or it's somewhere in the middle, 5-10%, who knows.

This is without question the worst NVIDIA launch/series ever. The 3090 is just about 25% faster than a 2080 Ti on water cooling with an XOC BIOS, and 25% is nothing: 980 Ti to 1080 Ti was 70%, 1080 Ti to 2080 Ti was 50% (OC to OC), and now we're seeing under 30% (max), OC to OC, from 2080 Ti to 3090. It's like a joke, NVIDIA screwing with us (the consumers). The only thing that can save Ampere right now is an RTX 3080 SUPER 20GB with a 10% CUDA core increase priced at $999; that's it, I'd buy that **** right away and be satisfied. A 3080 FTW3 Ultra at $820/420W and a 3090 FTW3 Ultra at $1800/440W are not going to satisfy me; the 3080 doesn't have enough performance and the 3090 is super expensive for very little gain. 3080 SUPER 20GB performance still wouldn't be enough, but it'd still be a lot better value than the current 3080/3090 models.


----------



## mirkendargen

I noticed the KPE is listing a 520W power limit on the first page now, but Googling isn't showing me any announcement of that. Anyone know where that's coming from? Given that the KPE would supposedly be the same PCB as the FTW3... that would be pretty awesome (assuming a version of nvflash surfaces that can dump/flash 30-series cards).


----------



## vmanuelgm

520w are nice...


----------



## J7SC

...in addition to the 'stated' power rating, there is usually also a bit of extra wiggle room: both my Aorus 2080 Ti XTR WBs are on the stock BIOS, rated at 366W, but both consistently hit 380W each. Also good to see that the Aorus XTR 3090 gets 3x8-pin power...

..wondering about power spikes as well, which can often (very briefly) add another big chunk over and above what slower monitoring software measures. Assuming that custom-PCB high-watt 3090 owners have a system to match, including a big-watt CPU and plenty of RAM, I think a high-quality 850W unit is the bare minimum for one card, IMO.


----------



## zhrooms

mirkendargen said:


> I noticed the KPE is listing a 520W power limit on the first page now, but Googling isn't showing me any announcement of that. Anyone know where that's coming from? Given that The KPE would supposedly be the same PCB as the FTW3....that would be pretty awesome (assuming a version of nvflash surfaces that can dump/flash 30-series cards).


I added it even though we have no confirmation, because it's capped by 3x8-pin; they can't go higher without going outside of spec, which they won't do. That's why EVGA set the FTW3 to 373W on the 2080 Ti, which was 2x8-pin (375W max): 2W shy of the limit. On the Kingpin with 3x8-pin the max limit is 525W, and they set it to 520W. So even without confirmation we know the Kingpin 3090 is definitely going to be 520 (or 525) W, but I put 520 because that's what they set the 2080 Ti Kingpin to; we know it's either 520 or 525, so might as well put it in now. The Kingpin is by far the craziest card, but it'll also most likely cost $2499 or more. The 2080 Ti Kingpin had an MSRP of $1899, just $900 above non-A chips and $700 above FE, so guessing $1499 FE 3090 + $700-900 puts us almost at $2499, and this time they'll feature a bigger AIO (360mm), so a little more wouldn't be surprising.
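The connector arithmetic in the post above can be sketched in a few lines; a rough sanity check only, where 150W per 8-pin and 75W from the slot are the PCI-SIG ratings, while the actual limit is whatever the vendor writes into the VBIOS, typically a few watts under the ceiling:

```python
# PCI-SIG ratings: 150 W per 8-pin PCIe connector, 75 W from the x16 slot.
PIN8_W = 150
SLOT_W = 75

def spec_power_ceiling(num_8pin: int) -> int:
    """Max in-spec board power for a card with num_8pin 8-pin connectors."""
    return num_8pin * PIN8_W + SLOT_W

print(spec_power_ceiling(2))  # 375 -> 2080 Ti FTW3 (2x8-pin) was set to 373 W
print(spec_power_ceiling(3))  # 525 -> Kingpin (3x8-pin) was set to 520 W
```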


----------



## kot0005

I hope we can flash the FTW3 BIOS to the Strix OC, because the EVGA FTW cards look ugly this time. The Strix looks much more industrial and neater.


----------



## Sheyster

Shawnb99 said:


> The latest news is the NDA’s expire at the same time the 3090’s go on sale so no time for bad reviews or poor benchmarks to come out.
> 
> This is definitely shady by Nvida
> 
> 
> __ https://twitter.com/i/web/status/1308358228376461314


Not surprised by this. Order the card, then cancel the order if you feel the reviews are bad.


----------



## nyk20z3

Any rumors on a Lightning, Matrix or HOF 3090?


----------



## BigMack70

zhrooms said:


> This is without question the worst NVIDIA launch/series ever, 3090 is just about 25% faster than 2080 Ti on water cooling with XOC BIOS, 25% is nothing, 980 Ti to 1080 Ti was 70%, 1080 TI to 2080 Ti was 50% (OC to OC), and now we're seeing under 30% (max) OC to OC 2080 Ti to 3090, it's like a joke.


The 1080 Ti to 2080 Ti jump wasn't anywhere near 50% unless you had a bad/low clock on the 1080 Ti or are thinking of specific benches where the Turing architecture simply did much better than Pascal. Pascal OC'd a little better than Turing; both wound up topping out around similar clocks (2.1 GHz, or a bit higher on a lucky chip), but the "stock" clocks on the 1080 Ti were slightly lower. It was the same 30-35% overall performance jump that we're seeing here.

Even if the 3090 is only 10% better than 3080, it's still going to be 30% faster than 2080 Ti OC vs OC when under water.

Some of you guys are way too Eeyore about this launch. No, it's not anything close to what Nvidia's marketing claimed. But did you really expect it to be?

I'd say this launch is easily better than the Kepler launch, where you couldn't even get a high end card for a year and a half after the architecture went live. I'd even take this over the initial Fermi 400 series launch, which was also (for the time) very power hungry and which didn't offer a great performance leap over what was already on the market.


----------



## vmanuelgm

RDR2 Vulkan 3440x1440p Ultra


----------



## Mooncheese

BigMack70 said:


> The 1080 Ti to 2080 Ti jump wasn't anywhere near 50% unless you had a bad/low clock on the 1080 Ti or are thinking specific benches where the Turing architecture simply did much better than Pascal. Pascal OC'd a little better than Turing; they both wound up topping out around similar clocks (2.1 GHz or a bit higher if on a lucky chip) but the "stock" clocks on 1080 Ti were slightly lower. It was the same 30-35% performance jump overall that we're seeing here.
> 
> Even if the 3090 is only 10% better than 3080, it's still going to be 30% faster than 2080 Ti OC vs OC when under water.
> 
> Some of you guys are way too Eeyore about this launch. No, it's not anything close to what Nvidia's marketing claimed. But did you really expect it to be?
> 
> I'd say this launch is easily better than the Kepler launch, where you couldn't even get a high end card for a year and a half after the architecture went live. I'd even take this over the initial Fermi 400 series launch, which was also (for the time) very power hungry and which didn't offer a great performance leap over what was already on the market.


Nonsense: 






See also: 

(Starting here, carrying into Rainbow Six Siege, 2080 Ti is 50% faster than 1080 Ti)








https://www.3dmark.com/compare/spy/14026055/spy/1723784



All of my demanding titles went up 50% at 3440x1440 at the same power draw. 

Middle Earth: Shadow of War
No Man's Sky
Shadow of the Tomb Raider
The Witcher 3
Watch Dogs 2

Only to name a few, there are a few outliers to be sure, such as AC: Odyssey and Far Cry 5, where there weren't real gains (25%). 

None of these are DLSS titles with exception of SOTTR and the 50% gain cited there was with it disabled.


----------



## Mooncheese

vmanuelgm said:


> RDR2 Vulkan 3440x1440p Ultra


I'm still looking for an actual benchmark of RDR2 @ 3440x1440 via Vulkan, and all I can find are DX12 runs or gameplay videos. Here's a gameplay vid with a 2080 Ti @ 2085 MHz core averaging 60 FPS on Vulkan @ 3440x1440, same settings, to hold us over until then.






So 33% slower.

Subtract 10%, because your shunted 3090 @ 550W is ~10% faster than at 390W.

Oh hey, that 23% lines right up with the 25% value derived from comparing the Metro Exodus benchmarks at 1440p!

Not impressed, not for $2300 (WB + backplate, tax and shipping).

For $2300 it's going to need to be 200% faster than 2080 Ti and be able to transform into a robot and make me a sandwich on command.

Not 25% without DLSS and 45% with DLSS @ 1440p.

This is what a monopoly looks like. A $1500-1800 80 Ti card.

NGreedia pushing the NDA date back to the morning of release LMAO.

The word is out: Ampere's Samsung 8nm node is underwhelming, and NO, this is not the Titan card. This is 100% the 80 Ti card and the only upgrade path for someone with last gen's 80 Ti card, except unlike last gen's 80 Ti card, which offered a 50% non-DLSS uplift for less at 1440p, this one is half the uplift for another 50% more money!

Golf clap NGreedia! Masterful marketing. And the mindless masses queue up for it!

"OH BOY I SURE HOPE I CAN F5 FAST ENOUGH ON THE 24TH FOR THIS!"


----------



## Nizzen

kot0005 said:


> I hope we can Flash FTW3 BIOS to strix OC because EVGA ftw cards looks ugly this time. Strix looks much more industrial and neater.


We want Asus XOC bios for Strix 3090


----------



## domenic

vmanuelgm said:


> RDR2 Vulkan 3440x1440p Ultra


Sorry for the noob question but what utility are you using to display all of the stats?


----------



## inutile

__ https://twitter.com/i/web/status/1308492945495150599


----------



## AngryLobster

Mooncheese said:


> I'm still looking for an actual benchmark of RDR2 @ 3440x1440 via Vulkan and all I can find are DX12 or gameplay videos. Here's a gameplay vid with 2080 Ti @ 2085 MHz core at 60 FPS avg on Vulkan @ 3440x1440 same settings to hold us over until then.
> 
> 
> 
> 
> 
> 
> So 33% slower.
> 
> Subtract 10% because your shunted 3090 @ 550w is ~10% faster than at 390w.
> 
> Oh hey, that 23% lines right up with the 25% value derived from comparing the Metro Exodus benchmarks at 1440p!
> 
> Not impressed, not for $2300 (WB + backplate, tax and shipping).
> 
> For $2300 it's going to need to be 200% faster than 2080 Ti and be able to transform into a robot and make me a sandwich on command.
> 
> Not 25% without DLSS and 45% with DLSS @ 1440p.
> 
> This is what a monopoly looks like. An $1500-1800 80 Ti card.
> 
> NGreedia pushing the NDA date back to the morning of release LMAO.
> 
> The word is out, Ampere's 8nm EUV is underwhelming and NO this is not the Titan card, this is 100% the 80 Ti card and the only upgrade path for someone with last gens 80 Ti card except unlike last gen's 80 Ti card, which offered a 50% non-DLSS uplift for less at 1440p, this one is half the uplift for another 50% more money!
> 
> Golf clap NGreedia! Masterful marketing. And the mindless masses queue up for it!
> 
> "OH BOY I SURE HOPE I CAN F5 FAST ENOUGH ON THE 24TH FOR THIS!"


I just showed up in this thread as a potential 3090 owner, and the first thing I'm faced with is your non-stop whining and crying. If the 3090 is a poor value proposition for you, that's great; now carry on with your life.


----------



## ttnuagmada

CallsignVega said:


> I haven't even been paying attention to this over-hyped crappy gen. What is the TL;DR version? Something like:
> 
> 1. Ampere is already maxed out and is a terrible overclocker.
> 2. A 3080 overclocked barely beats an overclocked 2080 Ti.
> 3. The 3090 is only 9% faster than the 3080.
> 4. The only 3090 cards at 400W or greater worth putting under water are the Strix (400W) and FTW3 (440W).
> 
> Anything that I am missing or needs correction?



where did you see a max TDP for the Strix?


----------



## ttnuagmada

Emmanuel said:


> I think even 14GB would have been acceptable and it wouldn't be that much more expensive to manufacture.


GPU VRAM amounts aren't arbitrary; they're tied to the bus width. 14GB is not a possible memory config for the 3080.
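The bus-width point reduces to simple arithmetic. A rough sketch, assuming 32-bit channels and the 1GB GDDR6X chips shipping at the time, with clamshell mounting doubling the count:

```python
def vram_options_gb(bus_width_bits: int) -> list:
    """Possible capacities: one 32-bit channel per chip, 1 GB chips,
    optionally doubled by mounting chips on both PCB sides (clamshell)."""
    channels = bus_width_bits // 32
    return [channels, channels * 2]

print(vram_options_gb(320))  # [10, 20] -> the 3080's 10 GB / rumored 20 GB
print(vram_options_gb(384))  # [12, 24] -> the 3090's 24 GB
# 14 GB would mean 1.4 GB per chip on a 320-bit bus; no such density exists.
```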


----------



## vmanuelgm

domenic said:


> Sorry for the noob question but what utility are you using to display all of the stats?


Tuned MSI Afterburner with RivaTuner OSD


----------



## padman

Mooncheese said:


> (Starting here, carrying into Rainbow Six Siege, 2080 Ti is 50% faster than 1080 Ti)


And these are the 2080 Ti launch results from 35 games (timestamp 22:32 in the video)




23% faster on average @ 1440p.
I guess 2 years of driver optimizations make a lot of difference.
Will the gap between Ampere and Turing also widen that much in 2 years?

Edit:
23 2080 Ti *launch review* results combined, 1440p & 4K; 1080 Ti used as the baseline (100% performance)




__





Launch-Analyse nVidia GeForce RTX 2080 & 2080 Ti (Seite 3) | 3DCenter.org






www.3dcenter.org




2080 Ti avg. perf:
@ 1440p: 128.4%
@ 4K: 134.8%


----------



## changboy

If you could buy an MSI RTX 3080 Trio, would you buy it?


----------



## dentnu

changboy said:


> If you can buy an msi rtx-3080 trio will you buy it ?


Nope, waiting on the 3090. But I'm not you, so you should do whatever works for you.


----------



## zhrooms

BigMack70 said:


> The 1080 Ti to 2080 Ti jump wasn't anywhere near 50% unless you had a bad/low clock on the 1080 Ti or are thinking specific benches where the Turing architecture simply did much better than Pascal. Pascal OC'd a little better than Turing; they both wound up topping out around similar clocks (2.1 GHz or a bit higher if on a lucky chip) but the "stock" clocks on 1080 Ti were slightly lower. It was the same 30-35% performance jump overall that we're seeing here.
> 
> Even if the 3090 is only 10% better than 3080, it's still going to be 30% faster than 2080 Ti OC vs OC when under water.


Blows my mind how people still don't know that the 2080 Ti is 50% faster than the 1080 Ti; seriously mind-boggling. We've known about the performance difference for 2 full years now and you still don't have a clue..

GTX 1080 Ti from 980 Ti: ~30% CUDA core increase and ~50% higher clock speed = ~70% faster
RTX 2080 Ti from 1080 Ti: ~20% more FP CUDA cores plus the introduction of another 4352 INT CUDA cores, for a total of 8704 CUDA cores up from 3584, thus a ~60% larger die even though it went from 16nm to 12nm, with a $300 higher MSRP = ~50% faster










Sure, there are a few games out there that don't get the full benefit of those 4352 INT CUDA cores, but *generally in modern titles* they provide around maybe 25% on top of the already faster card. Memory also overclocks +2.5 Gbps, up from +1.5 Gbps (18% vs 13.5%), ~30% faster memory OC to OC.

You can cherry-pick old garbage games (engines) that no one cares about and make it look like the 2080 Ti is only 15% faster; very simple and effective, which is what a lot of reviewers did at the Turing launch.

These images (below) have been featured in the 2080 Ti Owner's Club since launch; they are undeniable and everyone should have seen them by now.

_Time Spy 49%, Superposition 53%, BF1 46%, Dragon Quest 48%, F1 51%, Hellblade 50%, Hitman 46%, Kingdom Come 47%, Middle-earth 45%, Monster Hunter World 47%, Metro 51%, Prey 45%, Rainbow Six 50%, Witcher 3 44%, Wolfenstein 55% (Vulkan)_

45-50%, and this data was gathered from Guru3D, HardwareCanucks, Hardware.Info, HEXUS, KitGuru, PC Games Hardware, PC Perspective, SweClockers, TechPowerUp and TweakTown. This also doesn't include the even higher performance of the 2080 Ti when overclocking; we know it overclocks better (2175MHz is a breeze, while you typically struggled for 2100MHz on a 1080 Ti), along with the memory as previously mentioned, about 6.5% higher memory OC on the 2080 Ti, taking those 45-50% figures to 50% and above. Below, all 15 results (13 games, 2 benchmarks) averaged, coming out at about 49%.
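For what it's worth, averaging the fifteen figures quoted above does land right around the claimed number:

```python
# Per-title gains (2080 Ti over 1080 Ti, percent) as quoted in the post.
gains = [49, 53, 46, 48, 51, 50, 46, 47, 45, 47, 51, 45, 50, 44, 55]
avg = sum(gains) / len(gains)
print(round(avg, 1))  # 48.5 -> roughly the ~49% average cited
```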



















So stop embarrassing yourself, thinking the 2080 Ti is only 30% faster than the 1080 Ti in 2020. I'd excuse it a few months after the Turing release, possibly, but now? 😆


----------



## geriatricpollywog

Is anybody else gonna try to snag an EVGA 3090 tomorrow night?


----------



## dentnu

0451 said:


> Is anybody else gonna try to snag an EVGA 3090 at midnight?


Midnight? All cards go up for sale on the 24th @ 9 AM EST.


----------



## Dagamus NM

BigMack70 said:


> The 1080 Ti to 2080 Ti jump wasn't anywhere near 50% unless you had a bad/low clock on the 1080 Ti or are thinking specific benches where the Turing architecture simply did much better than Pascal. Pascal OC'd a little better than Turing; they both wound up topping out around similar clocks (2.1 GHz or a bit higher if on a lucky chip) but the "stock" clocks on 1080 Ti were slightly lower. It was the same 30-35% performance jump overall that we're seeing here.
> 
> Even if the 3090 is only 10% better than 3080, it's still going to be 30% faster than 2080 Ti OC vs OC when under water.
> 
> Some of you guys are way too Eeyore about this launch. No, it's not anything close to what Nvidia's marketing claimed. But did you really expect it to be?
> 
> I'd say this launch is easily better than the Kepler launch, where you couldn't even get a high end card for a year and a half after the architecture went live. I'd even take this over the initial Fermi 400 series launch, which was also (for the time) very power hungry and which didn't offer a great performance leap over what was already on the market.


Yep, very much Eeyore in this thread. Not sure why people with SLI 2080 Tis would bother upgrading unless they have more money than they need or a panel their cards can't drive. Most of the complaining comes from people to whom neither seems to apply. COVID boredom, I suppose.



AngryLobster said:


> I just showed up in this thread as a potential 3090 owner and the first thing I'm faced with is your non stop whining and crying. If the 3090 is a poor value proposition for you, that's great now carry on with your life.


Amen.

The 3090 will be obsolete soon enough and none of this will matter. Need to go above 4K/60? Get this card. If not, then what is the limitation of your existing hardware, or of the 3080, in your situation?

If buying for the sake of buying then cool, enjoy. 

If your current hardware meets your needs then why not enjoy it for another generation? Upgrading every generation without a true need just sets one up for disappointment. 

520W lol. I am in. Sounds fun.


----------



## doom26464

I don't mean to argue with the more-money-than-sense elitists here, but I don't buy the "2080 Ti is 50% faster than a 1080 Ti" nonsense.

Launch reviews had it at like 23-26% depending on the reviewer and games tested.

I do understand Turing had some driver maturing, so I'll be fair and say that at best it lands around 35% faster. 50% is a stretch, and one I won't bite on.


----------



## dentnu

doom26464 said:


> I don't mean to argue with the more-money-than-sense elitists here, but I don't buy the "2080 Ti is 50% faster than a 1080 Ti" nonsense.
> 
> Launch reviews had it at like 23-26% depending on the reviewer and games tested.
> 
> I do understand Turing had some driver maturing, so I'll be fair and say that at best it lands around 35% faster. 50% is a stretch, and one I won't bite on.


OK, cool, can you please go away now and drop this 2080 vs 1080 BS? This is the 3090 thread, not the 2080 vs 1080 thread.


----------



## HyperMatrix

doom26464 said:


> I don't mean to argue with the more-money-than-sense elitists here, but I don't buy the "2080 Ti is 50% faster than a 1080 Ti" nonsense.
> 
> Launch reviews had it at like 23-26% depending on the reviewer and games tested.
> 
> I do understand Turing had some driver maturing, so I'll be fair and say that at best it lands around 35% faster. 50% is a stretch, and one I won't bite on.


From what I've seen, comparing a 2.1GHz 2080 Ti to my 2.1GHz Pascal Titan X, the performance difference in games has been between 25-45%, but by far most often around 30-35% faster. The guy arguing that the 2080 Ti is 50% faster than Pascal also argued with me in the RTX 3080 thread, claiming the poorly built Zotac version of the card was better than or equal to the following cards: XC3, Phoenix, SG, Twin X2, iChill X3, iChill X4, Ventus, Gaming X Trio, Founders Edition, GamingPro, XLR8. For reference, TweakTown reviewed the Zotac Trinity and was only able to get a +25MHz OC on it, by far the lowest of any RTX card reviewed anywhere.

I'm not sure why but he likes to disagree and fight with people for no reason at all. You'll have to pick and choose which posts you read and which ones you ignore.




dentnu said:


> ok cool can you please go away now and drop this 2080 vs 1080 BS. This is the 3090 thread not the 2080 vs 1080 thread.


The problem is that the RTX 3090 Owner's Club thread starter made this claim in a post on the previous page.


----------



## BigMack70

zhrooms said:


> Blows my mind how people still don't know that 2080 Ti is 50% faster than 1080 Ti, seriously mind boggling, we've know about the performance difference for 2 full years now and you still don't have a clue..
> 
> GTX 1080 Ti from 980 Ti, ~30% CUDA core increase and ~50% higher clock speed = ~70% Faster
> RTX 2080 Ti from 1080 Ti, ~20% FP CUDA cores and introduction of another 4352x INT CUDA Cores, total of 8704 CUDA cores up from 3584, thus ~60% larger die even though it went from 16nm to 12nm, $300 higher MSRP = ~50% Faster
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sure there are a few games out there that don't get the full benefit of these 4352x INT CUDA Cores, but *generally in modern titles* it provides around maybe 25%, on top of the already faster card, memory also overclocks 2.5 Gbps up from 1.5 Gbps, 18% vs 13.5%, ~30% faster memory OC to OC.
> 
> You can cherrypick old garbage games (engines) that no one cares about and make it look like 2080 Ti is 15% faster, very simple and effective, which is what a lot of reviewers did at Turing launch.
> 
> These images (below) has been featured in the 2080 Ti Owner's Club since launch, they are undeniable and everyone should have seen them by now.
> 
> _Time Spy 49%, Superposition 53%, BF1 46%, Dragon Quest 48%, F1 51%, Hellblade 50%, Hitman 46%, Kingdom Come 47%, Middle-earth 45%, Monster Hunter World 47%, Metro 51%, Prey 45%, Rainbow Six 50%, Witcher 3 44%, Wolfenstein 55% (Vulkan)_
> 
> 45-50%, and this data was gathered from Guru3D, HardwareCanucks, Hardware.Info, HEXUS, Kitguru, PC Games Hardware, PC Perspective, SweClockers, TechPowerUp and TweakTown. This also does not include the even higher performance on 2080 Ti when overclocking, we know it overclocks better, 2175MHz is a breeze while you typically struggled for 2100MHz on 1080 Ti, along with memory as previously mentioned, about 6.5% higher memory OC on 2080 Ti. Taking these 45-50% to 50% and above. Below, all 15 (13 games, 2 benchmarks) averaged, coming out at 49%.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So, stop embarrassing yourself, thinking 2080 Ti is only 30% faster than 1080 Ti in 2020, I'd excuse it a few months after Turing release possibly, but now? 😆


Nice cherries you picked there. I ran both cards XOC under water @ ~2.1 GHz. Saying the 2080 Ti is 50% faster is about as accurate as saying the 3080 is twice as fast as the 2080. Pure BS.


----------



## psychrage

What's with the FE on Best Buy getting pushed back from Friday, Sept 24th, to the day AMD conveniently launches its RDNA2 GPUs?


----------



## doom26464

I'm glad some people who have legitimately owned both cards and tested them themselves don't try to skew the numbers for the pure sake of argument.


Maybe there is hope in OCN after all.


----------



## dentnu

psychrage said:


> What's with the FE on Best Buy getting pushed from Friday Sept 24th, to the day AMD conveniently launches RDNA2 GPU's?
> 
> View attachment 2459755


No stock, so the date gets pushed back until they know when stock will arrive.


----------



## D-EJ915

zhrooms said:


> So, stop embarrassing yourself, thinking 2080 Ti is only 30% faster than 1080 Ti in 2020, I'd excuse it a few months after Turing release possibly, but now? 😆


Hardware Unboxed just did a comparison with 14 games, and it was 33% faster on average at 1440p now and 35% at 4K. He says their review 2 years ago used some older titles.






As for the actual topic, I think I'll pick up a 3090 to replace my SLI 1080 Tis, but I'm leaning toward waiting for availability and reviews rather than just buying whatever I manage to click on fastest. I mainly play ancient games, and the 2080 Ti wasn't worth the money since my SLI scaling is good.


----------



## Krzych04650

BigMack70 said:


> Some of you guys are way too Eeyore about this launch. No, it's not anything close to what Nvidia's marketing claimed. But did you really expect it to be?


It is exactly what NVIDIA was marketing: gains in specific games are in line with what they showed, and any generalized numbers were "up to". The problem is that those numbers are compared against a Turing FE thermal- and power-throttling into the 1700s, while Ampere is really squeezed out of the box, so any margin it has over Turing diminishes massively after OC. This is the deciding factor and the reason we are not so impressed here: Ampere is almost maxed out of the box and, as embarrassing as it is to say, it behaves like an AMD card.


----------



## Antsu

Some data from me for the 1080 Ti vs 2080 Ti:

1080 Ti with really bad memory and a meh core: 2114MHz @ 1.2V and +300 on memory. No power limit.
Firestrike: 32650
Timespy: 11000

2080 Ti with OK memory and a pretty nice core: 2235MHz @ 1.125V and +1300 on memory. No power limit.
Firestrike: 42500
Timespy: 18000

My 2 cents: the gain was most likely about 50% at 4K and around 25-30% at lower resolutions, if the power limit was removed and both cards were average bins.

Now let's forget this and focus on getting those last percentages out of the 3090s. This whole launch sucks for 2080 Ti owners, but there's no use crying about it. Focus on pulling 1000W through your card, pushing for that +30% over the 2080 Ti =D
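Reducing the raw scores above to percentage uplifts, just the arithmetic, with no assumptions beyond the numbers as posted:

```python
def uplift_pct(new_score: float, old_score: float) -> float:
    """Percent by which new_score exceeds old_score."""
    return (new_score / old_score - 1) * 100

print(round(uplift_pct(42500, 32650), 1))  # Firestrike: 30.2 (% faster)
print(round(uplift_pct(18000, 11000), 1))  # Timespy:    63.6 (% faster)
```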


----------



## BigMack70

Krzych04650 said:


> It is exactly what NVIDIA was marketing, gains in specific games are in line with what they have shown and any generalized numbers were "up to". Problem is that these numbers are comapred to Turing FE thermal and power throttling into 1700s, while Ampere is really squeezed out of the box, so any margins that it has vs Turing diminish massively after OC. This is the deciding factor and reason why we are not so impressed here, Ampere is almost maxed out of the box, and as embarrassing as it is to say this, it behaves like AMD card.


I don't think it's what Nvidia marketed, in the specific sense that Nvidia marketed this launch as their largest generational jump in performance ever. It's not really even close to that. They didn't really mislead in benchmarks, just cherry-picked.


----------



## Jpmboy

I'm sitting here with a 1070Ti, a 1080 w/WC, two 2080Tis w/WC, a Radeon VII w/WC, three Titan Vs w/WC, a 580, a 295x2, a 290, a 980 w/WC, a 960, and a box full of older cards... time for a garage sale! 😄

3090 when they come available.


----------



## dentnu

Deleted


----------



## changboy

Jpmboy said:


> I'm sitting here with a 1070Ti, a 1080 w/WC, two 2080Tis w/WC, a Radeon VII w/WC, three Titan Vs w/WC, a 580, a 295x2, a 290, a 980 w/WC, a 960, and a box full of older cards... time for a garage sale! 😄
> 
> 3090 when they come available.


I actually play on my R9 290 PowerColor PCS+ with 4GB VRAM, and I can tell you some games play well, but I've stopped playing newer games because it's really bad. Better to stop playing my favorite games than play them at 1080p low settings. I'm really waiting for the big one, and I think I'll grab any card I can manage to click Buy on.

If you wait 10 seconds you can't get another card anymore, lol. So bad, and after that you wait and wait, sometimes for months.

I really think the best cards won't be available on the 24th but later, like with the 3080: no EVGA Ultra and no Strix.

So what are the 3rd and 4th choices? I don't like the models much this time.


----------



## BigMack70

changboy said:


> So whats the 3rd and 4th choice ? I dont like much the models this time.


Only the FE and FTW3 cards look appealing to me at launch, and the FTW3 only if you can get past how ugly it looks. Hopefully October/November will see the release of better models and more water block options.

I think I'm just going to try for an FE and if I can't get one, I'll just wait until something better comes in stock somewhere.


----------



## changboy

*@vmanuelgm*
Did you show somewhere how to mod the Gigabyte 3090?


----------



## CallsignVega

This thread is turning to crap lately. Who cares how old cards perform versus even older cards?

On topic: has anyone seen any info on whether the 3090 FTW3 will be available at launch?


----------



## changboy

CallsignVega said:


> This thread turning to crap lately. Who cares about how old cards perform versus even older cards.
> 
> On topic: anyone seen any info if the 3090 FTW3 will be available at launch?


Don't expect to get one.


----------



## Sheyster

AngryLobster said:


> I just showed up in this thread as a potential 3090 owner and the first thing I'm faced with is your non stop whining and crying. If the 3090 is a poor value proposition for you, that's great now carry on with your life.


Anytime there's a new card launch from Nvidia, there's always "that guy" in the owner's thread acting up about price, performance, you name it. He is the most recent manifestation of "that guy". Ignore him; haters gonna hate. I've seen this behavior since the GTX 580 days here on OCN, and now I just ignore them. Decide for yourself what you want to do and whether the price vs. performance is worth it to you.


----------



## domenic

changboy said:


> Dont expect to get one.


The only silver lining is that we'll have time to see all of these 3090s torn down, with the PCBs analyzed by Buildzoid. If I'm going to spend $1800 before even the water block, I want to know what's what.


----------



## changboy

I think performance will be similar across all the cards, like with the 3080s; once in game you won't see any difference from ±75-125MHz, maybe 1 FPS at 4K.

The only thing I think is important is a dual-BIOS card, for future BIOS updates on power... that's it.


----------



## J7SC

I inherited a few S3 ViRGE cards; they work a lot better in Win NT4 than an RTX 3090 ever would (this thread got a bit too serious...)

As for 3090s, I already mentioned my focus on 3x8-pin custom-PCB cards with factory-mounted full water blocks. Apart from the unobtainable-here Galax/KFA2 3090 HOF, that leaves the Kingpin 3090 with full water block and the just-confirmed Aorus 3090 Xtr WB. But I don't expect any of those to hit before late October / early November, due to binning and volume requirements, unless you're on a first-name basis with a senior vendor rep..


----------



## Shawnb99

J7SC said:


> I inherited a few S3 Virge cards; they work a lot better in Win NT4 than a RTX 3090 ever would (this thread got a bit too serious...)
> 
> As to 3090s, I already mentioned my focus on 3x 8 pin custom-PCB cards w/ factory mounted full w-blocks. Apart from the unobtainable-here Galax / KFA2 3090 HoF, that leaves the KingPin 3090 full w-block and the just confirmed Aorus 3090 Xtr WB. But I don't expect any of those to hit before late October / early November, due to also binning volume requirements, unless you're on a first-name basis with a senior vendor rep..


The FTW3 is also a 3x8-pin custom-PCB card with a factory-mounted full water block variant. The Hydro Copper should be out before the Kingpin is.


----------



## changboy

I think the EVGA Hydro Copper has 2x8-pin connectors.


----------



## Shawnb99

changboy said:


> I think evga hydro copper have 2 x 8 pins connector.


Depends on the model. If the normal FTW3 has 3x8-pin, then the Hydro Copper model should as well.


----------



## changboy

I just saw that the ASUS Strix comes in 4 different models:
- ASUS Strix
- ASUS Strix OC
- ASUS Strix TOP
- ASUS Strix Advanced


----------



## Shawnb99

4 models, that's not confusing at all...


----------



## changboy

Shawnb99 said:


> 4 models that's not confusing at all...


Ya, they look the same, lol. Just the price is updated, hehehe.


----------



## J7SC

Shawnb99 said:


> 4 models that's not confusing at all...


Yeah, I thought the Asus Strix OC was the top, but the Top is the top, before you Advance ?


----------



## Thoth420

psychrage said:


> What's with the FE on Best Buy getting pushed from Friday Sept 24th, to the day AMD conveniently launches RDNA2 GPU's?
> 
> View attachment 2459755


Because Nvidia would rather you go through an AIB partner, and would rather not pay to manufacture very many of these.


----------



## changboy

J7SC said:


> Yeah, I thought the Asus Strix OC was the top, but the Top is the top, before you Advance ?


LOL! Now I'm lost, with my limited English, hehehe.

At least with all those Strix models, I'll get one on the 24th.


----------



## changboy

Actually, there are 59 models of the RTX 3090!






GIGABYTE GeForce RTX 3090 24GB AORUS XTREME WATERFORCE WB | VideoCardz.net


VideoCardz.net Graphics Cards Database




videocardz.net


----------



## J7SC

changboy said:


> Actually they have 59 models of the rtx-3090 !
> 
> 
> 
> 
> 
> 
> GIGABYTE GeForce RTX 3090 24GB AORUS XTREME WATERFORCE WB | VideoCardz.net
> 
> 
> VideoCardz.net Graphics Cards Database
> 
> 
> 
> 
> videocardz.net


Haha, I fell for that


Spoiler


----------



## HyperMatrix

J7SC said:


> Haha, I fell for that
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2459766


Gotta scroll down a bit more.


----------



## changboy

I was thinking the Gigabyte looked weird with that lower left side, since it's the only one like that, but then I thought: if you add the SLI bridge, the cards will all be equal.


----------



## J7SC

HyperMatrix said:


> Gotta scroll down a bit more.
> (...)


...I wanted to see the pics of the 3090 Xtr *WB* -- unless my scrolling has gone to the dogs, no luck. I have two of the previous-gen Aorus Xtr WB 2080 Tis since Dec '18, and as much as I love them, I am a bit concerned that they'll pack on even more RGB and 'gamer messages' this time around with the 3090 Xtr WB top model. There's also the trend in this RTX 3K release for most vendors to go visually overboard...in my humble opinion


----------



## changboy

J7SC said:


> ...I wanted to see the pics of the 3090 Xtr *WB* -- unless my scrolling has gone to the dogs, no luck. I have two of the previous-gen Aorus Xtr WB 2080 Tis since Dec '18, and as much as I love them, I am a bit concerned that they'll pack on even more RGB and 'gamer messages' this time around with the 3090 Xtr WB top model. There's also the trend in this RTX 3K release for most vendors to go visually overboard...in my humble opinion


Ya they are great. I sold my Aorus 1080 Ti WB 5 months ago and never had a problem with it. The picture, as you can see, is not available for now, so I don't think the card will be available before 2021, maybe even April 2021 lol. We'll see, but I don't think I can wait until they release it.


----------



## changboy

*Mooncheese*
What if I hate the AMD drivers? A long time ago, after running CrossFire R9 290s and hitting every bug you could dream of, I told myself to never buy another AMD card in my life. Is this clear to you? NEVER AGAIN.

After selling my 1080 Ti 5 months ago I plugged an R9 290 back in, and you know what? I get no sound over HDMI to my OLED panel, only through my headphones from the DAC plugged into my PC.

After all those years they never resolved that bug, and you think I will buy an AMD card... NEVER AGAIN.
Most of the time when a new title releases, the AMD drivers suck and have many bugs, while Nvidia is perfect 99% of the time. Ya, you can read that the new drivers are really good, but it's a lie. Just look at the 5700 XT, which had so many bugs from what I read on forums.

Also, I didn't buy an AMD Ryzen 3950X; I bought an i9-10980XE and I don't regret it. And each time AMD releases a new driver, old bugs you had with previous drivers can come back; it's always the same.


----------



## mirkendargen

Mooncheese said:


> TL;DR I THOUGHT AMPERE CARDS WERE BARELY BETTER THEN MY 2080TI AND WORTHLESS AND THEN I WAS GOING TO BUY A 3090 AND THEN I LOOKED AT MY BANK ACCOUNT AND COULDN'T BUY A 3090 AND NOW I HATE 3090'S!!!


Cool story bro, tell it again.

Just kidding, that's like the third time you told it and I'm sure there will be a fourth...and fifth...and sixth...and...

Anyway, is there any solid info on whether the Strix or FTW3 will be available on the 24th yet? Given that they seem to be the same price and there's barely any info about the Strix, I'm considering being a terrible person: buying one of each, waiting a week to see how the power limits shake out (whatever I get is ultimately going under water), and then returning (not scalping...) the other.


----------



## nievz

8K gameplay


----------



## changboy

mirkendargen said:


> Cool story bro, tell it again.
> 
> Just kidding, that's like the third time you told it and I'm sure there will be a fourth...and fifth...and sixth...and...
> 
> Anyway, is there any solid info on whether the Strix or FTW3 will be available on the 24th yet?


On the EVGA community forums I read they will have 5 EVGA RTX 3090 Ultras on the 24th. I don't know if it's true, and I don't expect to get one, but if you can get 2, I will pay you back for mine.


----------



## mirkendargen

changboy said:


> On the EVGA community forums I read they will have 5 EVGA RTX 3090 Ultras on the 24th. I don't know if it's true, and I don't expect to get one, but if you can get 2, I will pay you back for mine.


Lol...5 total....for the entire EVGA store...? Maybe they did Newegg a solid and shipped them 2 then? It seems like if anything they'd prioritize manufacturing to their biggest markup cards and mysteriously have none of the "cheaper" ones...but who knows.


----------



## Chamidorix

Alright boys, we've got some decidedly mediocre data from Linus:





We get a nice shot of the exact settings Nvidia stipulates Forza 4 uses in these promo vids (no DLSS or dynamic res!). Now someone with one of those godlike Galax 500w XOC watercooled 2080 Tis, go bench Forza with those settings at 8K via DSR or custom resolutions and report back.

We've had a depressing lack of 4K+ benchmarks on the 3080 so far, which is a true pity given that the double FP32 datapath should continue to scale better as resolution increases. 8K is 4x the pixels of 4K, so it should be a shining example of the best-case scaling we can expect out of the Ampere CUDA architecture changes. (With normalized power draw at 4K, it's looking like we are only getting an approx. 15% increase per CUDA core on average with GA102 vs TU102, even with the double FP32 ALUs.)

Anyways, before everyone screams that 8K is pointless and 4K 120 Hz is the second coming of salvation: I get it. I just want to reiterate that 8K-class rendering is quite relevant for enthusiast VR users, where rendering 4K+ vertical pixels is a hell of a lot more of a visual improvement on an HP Reverb G1 4K panel strapped to your face than on a 27" 4K monitor sitting over a foot away.
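
A quick sanity check of the pixel math above (a sketch, assuming the standard 16:9 resolution definitions):

```python
# Pixel counts for common 16:9 resolutions (width, height).
RESOLUTIONS = {
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

def pixels(name: str) -> int:
    """Total pixel count for a named resolution."""
    w, h = RESOLUTIONS[name]
    return w * h

# 8K doubles both dimensions of 4K, so it carries 4x the pixels.
ratio = pixels("8K") / pixels("4K")
print(f"8K/4K pixel ratio: {ratio:.0f}x")  # -> 4x
```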


----------



## kot0005

I can't believe Nvidia is gimping the 3090 with 2x 8-pins!!! The FE card looks so good, just watched Linus's vid. Not to mention it probably has better silicon and is $300 cheaper!


----------



## Mooncheese

Spoiler: off topic



Well if you must have one I recommend FE with a Bykski block @ $1600 ($130 for the block including backplate) with 390w limit.

GA102 falls off a cliff in terms of efficiency beyond 400w, see vmanuelgm's 3090 going from +20% faster than 3080 to +30% faster but at 550w!

My comment earlier was a bit harsh; we may see real gains in DLSS titles in the future, especially when NV rolls out DLSS 3.0, which will be Ampere-specific. I just don't like the fact that I'm priced out of an upgrade path, but it's mostly my fault for needing an upgrade from a 1080 Ti @ 3440x1440 120 Hz. Had I waited, it would have been a 50% jump in FPS.

All of my games see more than a 33% jump. All of my games where I was at 60 FPS I'm now at 90. Like, across the board. It's the same story in The Witcher 3 in 3D Vision (2560x1440x2), where 60 FPS is actually 120. Many areas would dip down to 40 FPS (Helen, at the Bloody Baron's fort), all GPU-side. Since upgrading, not only are these areas at 60 FPS now, but I'm also running the Phoenix Lighting Engine on top of it, which is very demanding. It's easily over 50% faster than the 1080 Ti. No Man's Sky, same story. I upgraded before acquiring the Index, but from what I gather from others the 2080 Ti is WAY faster than the 1080 Ti in VR. People with a 1080 Ti or a 2080 Max-Q (mobile, roughly 20% slower than a desktop 2080) were complaining about dipping down to Fidelity Level 3 on out-of-the-box settings, whereas my 2080 Ti ran this at Fidelity Level 8 with the overclock. I've seen the difference between Fidelity Levels 3 and 8 (you can force a desired fidelity level via console commands, per that Medium article).

Having a look at the 3080's performance in Red Dead Redemption 2: it's 14% faster than an overclocked 2080 Ti at the same power draw at 1440p, and 18% faster with an overclock (showing closer to a 20% difference between the 3080 and 2080 Ti at the same power draw in this particular title). Which means the 3090, assuming it is closer to 20% faster than the 3080 in demanding titles such as SOTTR, will probably be closer to 40% faster than the 2080 Ti; let's say 36% faster in this title. That's nothing to sneeze at. It's just a ridiculous value proposition given that the 2080 Ti was easily 33% faster than the 1080 Ti for less money. I mean, the only way you all are justifying this purchase is by pretending the 2080 Ti was only 33% faster than the 1080 Ti.

I think the fact is that my personal gain was higher than 33% because I didn't have the fastest 1080 Ti to start with. I think it did 30,700 in Firestrike and 10,600 Timespy GPU @ 300w. It was mostly held back by the power though. I think it would do more with more voltage but increasing voltage would cause the card to power throttle and actually reduce the score (like what we are seeing with GA102-200) so the best way was to undervolt, thereby being able to secure a higher freq with less power.

10,600 to 16,750 Timespy GPU, that's more than 50%.

And I see that gain, in fact, in all of my games. I don't know what games you guys are playing where it's only a 25% gain; please sound off in the comment section. And the gain may be less for some of you because you went from a 1080 Ti Kingpin to a 2080 Ti FE, and instead of a 56% overclock you have -10% on both sides (the Kingpin 10% faster and the FE 10% slower because of its 320w TDP), making your 2080 Ti seem only 35% faster than your outgoing 1080 Ti.

I came from 1080 Ti that could do 2000 MHz with an undervolt because 300w FE limit.

The gain is more than 35%.

Add in the fact that my 2080 Ti will do 2130 MHz core and 1950 MHz memory @ 1.055-1.063v @ 43C @ 373w. That's not a mild overclock. In fact my 2080 Ti is 27% faster than an FE at default clocks, whereas my 1080 Ti was only 15% faster.

MY 2080 Ti is easily 50% faster than MY 1080 Ti. I don't care what some joe blow technical reviewer got comparing an unoverclocked 2080 Ti FE @ 260w to an unoverclocked 1080 Ti FE @ 300w (smaller difference; my 2080 Ti has an INSANE overclock).

Look at the 3080 LN2 session Tech Jesus did this past weekend.

The 3080 only did 14k Port Royal at -165C with 900w on tap.

11.5k to 14k = 22% overclock

2080 Ti went from 8738 to 13090 Port Royal, a staggering 49% overclock.

And my particular 2080 Ti overclocks better than the 1080 Ti that it replaced to top it off.
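
The percentage arithmetic above can be sanity-checked in a few lines (a sketch; the scores are the ones quoted in this post, rounded):

```python
def pct_gain(before: float, after: float) -> float:
    """Percentage improvement going from `before` to `after`."""
    return (after / before - 1) * 100

# Port Royal under LN2, scores as quoted above:
print(f"3080:    {pct_gain(11_500, 14_000):.0f}%")  # ~22%
print(f"2080 Ti: {pct_gain(8_738, 13_090):.0f}%")   # ~50%

# Time Spy GPU, 1080 Ti -> 2080 Ti upgrade quoted earlier:
print(f"Upgrade: {pct_gain(10_600, 16_750):.0f}%")  # ~58%, i.e. "more than 50%"
```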

Anyhow, going by the Gamers Nexus 3080 review (where an overclocked 2080 Ti is 50% faster than an overclocked 1080 Ti), in the Red Dead Redemption 2 benchmark the 3080 is 14% faster than an overclocked 2080 Ti (a Strix at +160 core, +400 memory if I remember correctly, which is a pretty lame memory overclock; mine does +950 MHz), and an overclocked 3080 is 18% faster than an overclocked 2080 Ti. The 3090 will probably be ~20% faster than the 3080 in this title (it seems the more going on on screen, the bigger the gains, i.e. the 3090 at 18.8% faster than the 3080 in SOTTR).

Meaning, there's no way the 3090 can be less than ~36% faster than an overclocked 2080 Ti at the same power draw (390w). Either my math is off, or vmanuelgm's 3090 is not getting any performance out of the shunt @ 550w, or he's overclocked the memory too high and it's "error correcting" and actually degrading performance, because it's fighting with the core for TDP and then spending all that current on error correction.

Not sure, but if his RDR2 bench avg of 81 FPS @ 3440x1440 is @ 550w, it should be ~10% more than the 36% difference between the 2080 Ti and 3090 at the same power draw. Going by the 2080 Ti @ 2100 MHz DX12 benches I've seen (55 FPS), and adding 10% because that's the gain an overclocked 2080 Ti sees in RDR2 going from DX12 to Vulkan, we have 60 FPS; 60 vs 81 is only ~33%.

I think something is off with vmanuelgm's card.

It's either an improper overclock (too high a memory OC; the memory is "error correcting" and taking wattage away from the GPU core to do so), or the memory is not properly cooled and the card is dying (it should be capable of a higher overclock but can't, because it's held back by power: with increased heat the memory modules become more resistive (heat causes metal expansion, which changes the resistance of the metal, decreasing efficiency and requiring more voltage to stabilize; the voltage controller obliges, but the increased voltage heats the memory modules further, and so on in a vicious cycle)).

Something is off with that card. It should be doing more than 10% faster at 550w. The core isn't even hot enough to thermal throttle (60C now under a WB), so it's gotta be the shunt itself. The shunt is overriding the hard limit imposed at the BIOS level, but Samsung 8nm falls off a performance-efficiency cliff beyond 400w. It literally cannot clock higher, and I can't quite put my finger on why. If you've seen the videos, Ampere's clocks swing wildly all over the place, even under a water block (from 1850 to 2050 MHz).

The 3080's 22% increase under LN2 speaks volumes about this. It's not power and it's not thermals. Ampere simply refuses to clock higher; that's part of the problem (they hit 2350 MHz under LN2, while the 2080 Ti did 2700 MHz). But something else is going on, and I wonder if, like the memory, the core is error correcting, with more and more power implicated in this beyond a certain frequency. The newer coolers are designed to mitigate most of this pretty well, so you don't see insane temps. Like the 3080 FE topping out at 75C: that's amazing considering it's 370w. The coolers are hiding the fact that the core is probably error correcting and requiring more power beyond 400w.

I'm not sure; it's bizarre. It just falls right on its face after 400w, irrespective of power limit and thermals.

Anyhow, yeah the 3090 should run RDR2 36% faster than 2080 Ti at the same power draw at 1440p on Vulkan. That's my prediction. It will be one of those outliers like SOTTR that sees larger gains because the complex geometry in those two titles, among others, induces more of an FP32 load as more render time is spent in the FP32 pipeline.

Other less demanding titles, say Overwatch or Apex Legends at 4K/8K (whatever resolution is required to make them demanding), would see much less frame-time spent in FP32 because there are fewer geometric calculations going on.

Ok, I believe I've again rambled on for longer than I ought to.

The 3090 is still not worth it, though parting with $1500 for an FE and putting it under a Bykski block for ~$1,630 total is definitely the route I would go.

1. You would have a block right away, the Bykski FE block is already being manufactured.
2. Not much gain going from 390w to the 440w of the FTW3, because GA102 falls off a performance-efficiency cliff beyond 400w anyway, and NV always overbuilds their VRMs (the 2080 Ti VRM is good for over 600w according to Buildzoid's analysis).

My only concern with the FE is that the source of the 106C hotspot Igor's Lab found is that they crammed the VRM in with the memory on the cramped PCB (to make room for the cooler fan, increasing the thermal capacity of the heatsink by 90W to deal with the higher TDP of the less efficient Samsung 8nm node). A water block will most definitely help here, but you still have the VRM and the memory in close proximity, and some of that residual heat will be present on the traces etc., even under a water block.

But ultimately it looks like this:

3090 FE = $1500
Bykski block + backplate = $130
$1,630 for a blocked 3090; after taxes (8% here), around $1,760 total.

FTW3 Ultra = $1800
EKWB + Backplate (not even available yet, last time it took them 1 year!) = $230

$2030 total, another $165 with taxes = $2200, shipping = $2250.

$1,760 vs $2,250, and there will be maybe a 5% performance difference.

The FTW3 will have a 3 year warranty however.

But yeah, I would probably opt for the FE and Bykski route, that's nearly $500 cheaper.
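
A quick tally of the two routes using the figures above (a sketch only: 8% tax is assumed to apply to the hardware, and the $50 shipping is inferred from the quoted totals):

```python
TAX = 0.08  # local sales tax rate quoted above

def route_total(parts: dict[str, float], shipping: float = 0.0) -> float:
    """Hardware subtotal plus tax, plus (untaxed) shipping."""
    return sum(parts.values()) * (1 + TAX) + shipping

fe_route = route_total({"3090 FE": 1500, "Bykski block + backplate": 130})
ftw3_route = route_total({"FTW3 Ultra": 1800, "EK block + backplate": 230},
                         shipping=50)

print(f"FE + Bykski: ${fe_route:,.0f}")               # ~ $1,760
print(f"FTW3 + EK:   ${ftw3_route:,.0f}")             # ~ $2,242
print(f"Difference:  ${ftw3_route - fe_route:,.0f}")  # ~ $482, i.e. nearly $500
```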

It's still not palatable at ~$1,760, and you're most definitely not going to get one on the 24th fighting the bots, but it's better than $2,250.

Ok, good night everyone.


----------



## Spiriva

Tomorrow I'll try to pick up an ASUS Strix 3090 from Inet/Komplett/Webhallen (Swedish shops). Hopefully I'm in luck and manage to get one.


----------



## Thoth420

Spiriva said:


> Tomorrow I'll try to pick up an ASUS Strix 3090 from Inet/Komplett/Webhallen (Swedish shops). Hopefully I'm in luck and manage to get one.


In the US (at least) the Strix 3090 is not showing as releasing until October. Just tossing you a heads up; hopefully I am wrong and you can get one. The Strix looks amazing and would be my top choice.

The only cards I can confirm so far will be available on the 24th are as follows:
Gigabyte Gaming OC
Gigabyte Eagle
MSI Ventus


----------



## Spiriva

Thoth420 said:


> In the US (at least) the Strix 3090 is not showing as releasing until October. Just tossing you a heads up; hopefully I am wrong and you can get one. The Strix looks amazing and would be my top choice.
> 
> The only cards I can confirm so far will be available on the 24th are as follows:
> Gigabyte Gaming OC
> Gigabyte Eagle
> MSI Ventus


I talked to Komplett yesterday, and according to their customer service they should get all the RTX 3090s they list on their page tomorrow (24th Sep). I guess I will see tomorrow if their customer service was right or not =)










The prices are in SEK (Swedish kronor) and include Swedish tax. $1 is 8.94 SEK and €1 is 10.45 SEK.


----------



## inutile

Mooncheese said:


> Well if you must have one I recommend FE with a Bykski block @ $1600 ($130 for the block including backplate) with 390w limit.
> 
> (...)


Uh, okay.


----------



## ribosome

inutile said:


> Uh, okay.


Is it wrong that I'm kind of impressed with his ability to churn out these walls of text?


----------



## kot0005

@mods Can you guys delete posts that are off topic please.


Just stop comparing the 3090 with the 3080 and 2080 Ti. It's an expensive card; everyone knows it. Don't buy it if you think it's not for you. No need for 10-paragraph justification posts... seriously. This card is not going to get you the performance/price sweet spot. You should know that by now.

I wonder if EK's FE block for the 3090 will have some way to directly or indirectly make contact with the backside VRAM modules. They have water ports on the side, so it could be doable. But then the FE card is 2x 8-pin... like, why, Nvidia.


----------



## Shawnb99

kot0005 said:


> @mods Can you guys delete posts that are off topic please.
> 
> 
> Just stop comparing the 3090 with the 3080 and 2080 Ti. It's an expensive card; everyone knows it. Don't buy it if you think it's not for you. No need for 10-paragraph justification posts... seriously. This card is not going to get you the performance/price sweet spot. You should know that by now.


How are we to know if it’s for us if we don’t compare it to the 3080 or 2080ti?


----------



## Thoth420

Spiriva said:


> I talked to Komplett yesterday, and according to their customer service they should get all the RTX 3090s they list on their page tomorrow (24th Sep). I guess I will see tomorrow if their customer service was right or not =)
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The prices are in SEK (Swedish kronor) and include Swedish tax. $1 is 8.94 SEK and €1 is 10.45 SEK.


That is awesome news mate! I hope you get the one you want <3
I am hoping for either of the Gigabyte models.


----------



## inutile

kot0005 said:


> @mods Can you guys delete posts that are off topic please.
> 
> 
> Just stop comparing the 3090 with the 3080 and 2080 Ti. It's an expensive card; everyone knows it. Don't buy it if you think it's not for you. No need for 10-paragraph justification posts... seriously. This card is not going to get you the performance/price sweet spot. You should know that by now.
> 
> I wonder if EK's FE block for the 3090 will have some way to directly or indirectly make contact with the backside VRAM modules. They have water ports on the side, so it could be doable. But then the FE card is 2x 8-pin... like, why, Nvidia.


I know EK is saying they want to do something special for the FE. If you go to the product pages for their current lineup of reference and Strix blocks, they go out of their way to mention that they cool the front-side VRAM only, so they seem to have backside cooling in mind. No idea if or how exactly they'll end up addressing it, but their teaser of the FE block makes it look like the backplate might be designed as a sort of pseudo heatsink. Who knows though.


----------



## TwinTurbo

After scrolling for 15 minutes, I finally made it past two dissertation-length personal rants (incredibly pathetic). I can't wait to do the same thing on the EVGA forums, right after watching the Linus 3090 vid.


----------



## ryan92084

It should be obvious but this is an owner's thread, not an amd, 2080ti, or price discussion thread. Any further offtopic posts will be removed.


----------



## shiokarai

Gewichtige RTX-Wochen, gestromter Spitzensport und kommende Wasserspiele | Labor Inside | igor´sLAB
www.igorslab.de

RTX 3090 FE weight..... 2187g whoa!


----------



## Nizzen

kot0005 said:


> I cant Believe Nvidia are gimping the 3090 with 2x 8 pins!!! THe FE card looks so good, just watched Linu's vid. Not to mention it prolly has better silicon and $300 cheaper !


Gimping? 2x 8-pin can draw 650w easy.
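
For context, the 650w figure is about what the connectors and wiring physically tolerate, which is a different question from the official budgets; a minimal sketch of the on-spec PCIe numbers (75w from the slot, 150w per 8-pin):

```python
# Official PCIe power budgets per source, in watts.
SLOT_W = 75        # PCIe x16 slot
EIGHT_PIN_W = 150  # each 8-pin PCIe connector

def spec_budget(n_eight_pin: int) -> int:
    """Total on-spec board power for a card with n 8-pin connectors."""
    return SLOT_W + n_eight_pin * EIGHT_PIN_W

print(spec_budget(2))  # 375w on spec for the FE's 2x 8-pin plus the slot
print(spec_budget(3))  # 525w for 3x 8-pin cards like the FTW3
```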


----------



## changboy

How can the FE do it if it's limited by its BIOS and can't be flashed?


----------



## changboy

Here you can see the Strix disassembly by der8auer:


----------



## RobotDevil666

So the launch is tomorrow and no official benchmarks in sight?
Anyone know when the NDA is up?
I wanted to buy one, but after the recent leaks I'm not so sure; it would be good to see benchmarks from a trusted source.


----------



## Shawnb99

RobotDevil666 said:


> So the launch is tomorrow and no official benchmarks in sight?
> Anyone know when the NDA is up?
> I wanted to buy one, but after the recent leaks I'm not so sure; it would be good to see benchmarks from a trusted source.


The NDA expires at 6am PST, the same time they get released for sale.
They know the benchmarks will be an utter disaster, hence the lack of reviews.
If the card was anything like they promised, they would have released them by now.


----------



## BigMack70

Shawnb99 said:


> The NDA expires at 6 AM PST, the same time the cards go on sale.
> They know the benchmarks will be an utter disaster, hence the lack of reviews.
> If the card were anything like they promised, they would have released them by now.


To be fair, they haven't promised much about the 3090. They've mostly talked about things like niche 8k DLSS uses. But yeah, they know that reviews are all going to conclude "_The 3090 is a terrible buy and you should just get the 3080 unless you do non-gaming work that needs the 24GB vram, or don't care at all about performance per dollar_".


----------



## GanMenglin

this is my test result. MSI ventus with default frequency, 100% TDP


----------



## carlhil2

GanMenglin said:


> this is my test result. MSI ventus with default frequency, 100% TDP
> 
> View attachment 2459801
> 
> 
> View attachment 2459800
> 
> 
> View attachment 2459799


About 50% faster than the 2080 Ti FE in those 3DMark scores, and about 23% faster than my card's runs in that bench OCed at 2145 MHz. I want the 3090 FE...


----------



## nycgtr

The Strix tax this time around is absurd. At a $300 premium they still want to play that micro-segmentation game with OC, Advanced, etc. Same PCB, same cooler as the 3080, yet suddenly it commands an additional $150 premium over the 3080 Strix? I'll stick with the regular FTW3 and add the extra 30-50 MHz of the Ultra on my own. Ideally, I'm just after the Founders. Stupid connector aside, I don't see it offering much less than these other options will on ambient.


----------



## changboy

The ASUS TUF is one of the cheaper cards, its temperatures are among the best around, and it includes a dual BIOS. I just don't much like the look of the card, but performance is really among the best.


----------



## Jpmboy

carlhil2 said:


> About 50% faster than the 2080 Ti FE in those 3DMark scores, and about 23% faster than my card's runs in that bench OCed at 2145 MHz. I want the 3090 FE...


I see only the 3090 on the NV website. Is this the "FE" you are talking about?

NVM - yeah it's the FE listed

Say, is it me, or are the "double-time" CUDA cores on the 3090 another NV gimmick?


----------



## nycgtr

changboy said:


> The ASUS TUF is one of the cheaper cards, its temperatures are among the best around, and it includes a dual BIOS. I just don't much like the look of the card, but performance is really among the best.


Once blocked it will look just like any other card, just shorter or longer.


----------



## dante`afk

Aquacomputer cooler:


----------



## changboy

dante`afk said:


> Aquacomputer cooler:
> 
> View attachment 2459804
> View attachment 2459805


For which one? The FE?


----------



## dante`afk

changboy said:


> For which one? The FE?


ref


----------



## dante`afk

carlhil2 said:


> About 50% faster than the 2080 Ti FE in those 3DMark scores, and about 23% faster than my card's runs in that bench OCed at 2145 MHz. I want the 3090 FE...


and it runs at only 1755 MHz; imagine 2000+ under water


----------



## GanMenglin

dante`afk said:


> and it runs at only 1755 MHz; imagine 2000+ under water


The main problem, I think, is the TDP of this card: the power limit maxes out at 100%, so no adjustment is possible right now.

The best 3DMark results I can reach are TSE 10400 and FSE 25000.

I will wait for a higher-TDP BIOS and the EK water block.


----------



## lokran88

dante`afk said:


> ref


Aquacomputer says it's for the short reference PCB, and one is also planned for the Strix.

Watercool has updated its compatibility list: http://gpu.watercool.de/WATERCOOL_HEATKILLER_GPU_Compatibility.pdf

They also want to support a custom design and are still undecided between the TUF, Strix, and FTW3. They will run a survey after the release of the reference block to see which ones customers prefer.



nycgtr said:


> The Strix tax this time around is absurd. At a $300 premium they still want to play that micro-segmentation game with OC, Advanced, etc. Same PCB, same cooler as the 3080, yet suddenly it commands an additional $150 premium over the 3080 Strix? I'll stick with the regular FTW3 and add the extra 30-50 MHz of the Ultra on my own. Ideally, I'm just after the Founders. Stupid connector aside, I don't see it offering much less than these other options will on ambient.


I am a little undecided whether I should go for a two 8-pin card like the TUF or a three 8-pin card like the Strix, as I am going to watercool it anyway. I'm not sure whether the 375 W of two connectors versus the 400 W+ of a Strix is worth the price difference for me, since I'm not doing extreme LN2 overclocking, and der8auer says in his video that three connectors are only really needed for extreme OC.
The only reason I see to go for the Strix is that its GPU is probably a better bin and is likely to clock even higher. What would you do?


----------



## J7SC

Jpmboy said:


> I see only the 3090 on the NV website. Is this the "FE" you are talking about?
> 
> NVM - yeah it's the FE listed
> 
> Say, is it me, or is the "double time" cuda cores on the 3090 another NV gimmick?


...from what I've read, 'double time' is the ideal scenario - it comes down to what mix of floating point and integer the app / game has...the more integer, the less double-time - but it's still early days, with real info thin on the ground


----------



## Glerox

videocardz.com :

"When it comes to 4K gaming, NVIDIA made a surprising declaration ahead of the 3rd party reviews, the card will (only) be 10 to 15% faster in 4K gaming. We have confirmed this claim with at least four reviewers, although most of them are reporting a 10-12% performance increase over RTX 3080. The performance of the card is reportedly much better in synthetic and productivity workloads."

Disappointing when you think of the difference there was between the 2080 and 2080 Ti. You do get 24GB of VRAM, but most people don't need that much.


----------



## Mooncheese

GanMenglin said:


> The main problem, I think, is the TDP of this card: the power limit maxes out at 100%, so no adjustment is possible right now.
> 
> The best 3DMark results I can reach are TSE 10400 and FSE 25000.
> 
> I will wait for a higher-TDP BIOS and the EK water block.


Max fan speed + undervolt on the curve (Ctrl+F in MSI AB).

Try it!

I bet your score will go up. Drop it down to 900 mV; go by this guide:


__
https://www.reddit.com/r/nvidia/comments/iwt953

This is probably the most important thread in all of r/nvidia; I'm surprised it only has 322 upvotes.


__ https://twitter.com/i/web/status/1308492945495150599
I've said this from day one, well before this guide: if you go back, I said that the key to Ampere would be bringing component temps down with an undervolt, the day before the 3080 release if not sooner. Over two weeks ago.

I thought of that because it's apparent Ampere falls off a power-efficiency cliff beyond 400 W, and the best way to increase performance in this situation (as with a 1080 Ti FE @ 300 W) is to bring component temps down to 45C and undervolt.


----------



## RobotDevil666

This is very disappointing. I didn't even try to get a 3080 as I was waiting for the 3090, but 10% for 60% more money...... no thanks, Nvidia.
I guess I'll just wait for the 3080 to be in stock and grab one to hold me over till they release something else worth attention.


----------



## nievz

I'm gutted by the 3090 performance. I prepared for the release; I even got a Lian Li Air just for the 3090. Now I'll end up waiting for a 3080 to become available.


----------



## WaXmAn

WoW straight from NV:









GeForce RTX 3090, A “BFGPU” For Creators, Researchers and Extreme Gamers, Launches Thursday


Enables extreme gamers to play at 8K, and creators to explore a new world of cinematics.



www.nvidia.com





For 4K gaming, the GeForce RTX 3090 is about 10-15% faster on average than the GeForce RTX 3080, and up to 50% faster than the TITAN RTX. Since we built GeForce RTX 3090 for a unique group of users, like the TITAN RTX before it, we want to apologise upfront that this will be in limited supply on launch day. We know this is frustrating, and we’re working with our partners to increase the supply in the weeks to come.


----------



## dante`afk

RobotDevil666 said:


> This is very disappointing. I didn't even try to get a 3080 as I was waiting for the 3090, but 10% for 60% more money...... no thanks, Nvidia.
> I guess I'll just wait for the 3080 to be in stock and grab one to hold me over till they release something else worth attention.


I agree, very disappointed.

secretly I hope now RDNA2 is the dark horse with more than 10% ......


----------



## Jpmboy

J7SC said:


> ...from what I've read, 'double time' is the ideal scenario - it comes down to what mix of floating point and integer the app / game has...the more integer, the less double-time - *but it's still early days, with real info thin on the ground*


^^ that's for sure.


----------



## HyperMatrix

The 10-15% better remark is simply due to power limit restrictions. The KPE with 520W will be more than 20% faster. Same with the FTW3 or Strix under water with an unlocked BIOS, if one ever comes. And if it doesn't come...at least you'll have 90 days to initiate EVGA's Step-Up program and grab a KPE later if you get an FTW3 now.

edit: what is concerning is the reports of the GDDR6X on some 3080 cards hitting 105C and crashing without any overclocking. If you're getting a 3090, an actively cooled backplate will be mandatory in order to have any sustained benefit over the 3080. At this rate, and with this level of availability for the models I want, I may end up having to wait so long to get a card that the 20GB 3080s come out. Lol. Which...might actually be the smarter play here...


----------



## carlhil2

I went to the Boston area MC to be reimbursed for the price reduction of the 10980xe and there is already a line for the 3090....


----------



## carlhil2

I don't care how much faster it is compared to the 3080; I own the 2080 Ti. It's as much of an upgrade as my TXP to 2080 Ti was, with double the RAM...


----------



## Mooncheese

Chamidorix said:


> Alright boys, we've got some decidedly mediocre data from linus:
> 
> 
> 
> 
> 
> We get a nice shot of the exact settings Nvidia is stipulating Forza 4 uses in these promo vids (no DLSS or dynamic res!). Now someone with one of those godlike galax 500w XOC watercooled 2080tis go and bench Forza with those settings at 8k via DSR or custom resolutions and report back.
> 
> We've had a depressing lack of 4k+ benchmarks on the 3080 so far, which is a true pity given that the double fp32 datapath should continue to scale better as the resolution increases. 8k is 4x the pixels of 4k, so it should be a shining example of the max best-case scaling we can expect out of the Ampere CUDA architecture changes. (With normalized power draw at 4k, it's looking like we are only getting approx a 15% or so increase per CUDA core on average with GA102 vs TU102, even with the double fp32 ALUs.)
> 
> Anyways, before everyone screams that 8k is pointless and 4k 120 Hz is the second coming of salvation, I get it, just want to reiterate that 8k-class rendering is quite relevant for enthusiast VR users, where rendering 4k+ vertical pixels is a hell of a lot more of a visual improvement on an HP Reverb G1 4k panel strapped to your face than on a 27" 4k monitor sitting over a foot away.


Fantastic point! As a VR enthusiast the 8K metric was lost on me until reading your comment (HP Reverb G2 pre-ordered, currently with the entire Index kit + HTC Vive trackers for Natural Locomotion, minus the HMD).

It's absolutely amazing that they managed to squeeze 2160x2160x2 onto two 2.89-inch LCDs; thinking of double this resolution at 3 inches boggles the mind. Not sure if that level of pixel density is even technically possible.


----------



## mirkendargen

So....about that moderation of the owners thread to get rid of people that don't have anything to say other than why they aren't going to be owners...


----------



## carlhil2

"Thanks for pricing everyone out of the hobby..." You are welcome... I bought the OG Titan also, the GPU that started the crying off...


----------



## dentnu

I am so happy to hear most of you are not getting a 3090... thank you!


----------



## carlhil2

dentnu said:


> I am so happy to hear most of you are not getting a 3090... thank you!


True, because I need one sooner rather than later and can use any help I can get...


----------



## Dagamus NM

Same, I need what is offered to get above 60Hz. If the 3080 will give me 110fps, then the extra % of the 3090 is going to be what I need to optimize my hardware. If I could get there without it I would, the benchmarks for the 3080 get it close and I could probably drop some settings to get the rest of the way but that is not what I want to do. 

It is perfectly ok for people to not buy this card. Particularly if your existing hardware runs your equipment. If it were not for HDMI2.1 and the ability to drive the panel at its rated specs I wouldn't do it. Not going to buy a card that doesn't get me where I want to be. 

You don't have to upgrade every release. That is perfectly ok. This isn't a playground where you are going to get made fun of for not having the newest sneakers.

For those with the means and reasons this card makes sense. A $3K+ Titan does not make sense for me, but for others it will.


----------



## psychrage

GeForce RTX 3090, A “BFGPU” For Creators, Researchers and Extreme Gamers, Launches Thursday


Enables extreme gamers to play at 8K, and creators to explore a new world of cinematics.



www.nvidia.com





"...we want to apologize upfront that this will be in limited supply on launch day."


----------



## inutile

Dagamus NM said:


> Same, I need what is offered to get above 60Hz. If the 3080 will give me 110fps, then the extra % of the 3090 is going to be what I need to optimize my hardware. If I could get there without it I would, the benchmarks for the 3080 get it close and I could probably drop some settings to get the rest of the way but that is not what I want to do.
> 
> It is perfectly ok for people to not buy this card. Particularly if your existing hardware runs your equipment. If it were not for HDMI2.1 and the ability to drive the panel at its rated specs I wouldn't do it. Not going to buy a card that doesn't get me where I want to be.
> 
> You don't have to upgrade every release. That is perfectly ok. This isn't a playground where you are going to get made fun of for not having the newest sneakers.
> 
> For those with the means and reasons this card makes sense. A $3K+ Titan does not make sense for me, but for others it will.


Yeah, this pricing tier is fine for me, but the Titans have always been a little too much money for too little gaming perf for me personally. Acceptable price-to-perf is entirely subjective. I'm looking at it as $1500 + block to get a solid increase over my 2080 Ti that will get me to a stable 60fps+ in basically everything maxed at 4K, get me closer to hitting the 1440p 240Hz target for monitor gaming, allow me to supersample way beyond what I could in VR, and give me HDMI 2.1 so I can enjoy 4K 120Hz VRR on my Big Picture setup. That's acceptable for me, subjectively, and $2500+ is not.

Everyone has different budgets and different preferences and desires.

This whole launch has been more tense and just _angry_ than I think I've ever seen before. I know that has a lot to do with the general state of the world, and with the eagerness of Pascal folks who waited through the 20 Series to upgrade increasing demand by an order of magnitude, coupled with low initial stock, but damn. People need to relax. I'm personally going to chill out a bit, let the AIB and waterblock market solidify a little more, and look at buying in November-ish if I can.


----------



## dentnu

psychrage said:


> GeForce RTX 3090, A “BFGPU” For Creators, Researchers and Extreme Gamers, Launches Thursday
> 
> 
> Enables extreme gamers to play at 8K, and creators to explore a new world of cinematics.
> 
> 
> 
> www.nvidia.com
> 
> 
> 
> 
> 
> "...we want to apologize upfront that this will be in limited supply on launch day."


I really did not need them to say it. It's always like this; every launch for the past few years has been limited. This is not surprising news to me.


----------



## psychrage

dentnu said:


> I really did not need them to say it. It's always like this; every launch for the past few years has been limited. This is not surprising news to me.


I'm not surprised either; it just sucks that they've been mum about the limited stock of the 3080, but came out ahead of the 3090 launch to basically let people know: "look, realistically, you won't be able to get it."


----------



## psychrage

Dude... You're going to wear out your keyboard. We get it. You don't want a 3090.


----------



## stefxyz

Oh, I will vote with my wallet and buy it. It's not that the 3090 is bad, just that the 3080 is excellently priced. For me, 10 to 20% is more than worth it, and compared to the 2080 Ti there are huge gains. Compared to the latest Titans it's even cheap.
This is the owners' club, btw, not the rant-about-why-someone-doesn't-buy-it club...


----------



## dentnu

psychrage said:


> I'm not surprised either; it just sucks that they've been mum about the limited stock of the 3080, but came out ahead of the 3090 launch to basically let people know: "look, realistically, you won't be able to get it."


Yeah, I get it; the 3080 launch was also a disaster. I tried to see if I could get a card in my cart on Newegg, just to get a feel for what the 3090 launch would be like. All I saw was the page go from notify, to crashing for 5 minutes, to out of stock. It was brutal to watch; what a **** show. While I am hoping it will be a bit better tomorrow, I know deep down it's going to be the same or worse. I just hope I can get a 3090.


----------



## Diverge

Huh? The super rich aren't the people who buy Titans... they're bought by enthusiasts with decent jobs...


----------



## doom26464

Lots of golf clapping in here. 

There a golf game on?


----------



## BigMack70

Well... With Nvidia apologizing in advance about supply, I'm fully prepared to be disappointed tomorrow morning. Oh well. Maybe some of us will get lucky.


----------



## domenic

Just what we needed to hear - what's up with this all of a sudden?









GeForce RTX 3080 sees increasing reports of crashes in games - VideoCardz.com


An increasing number of users are reporting the crash to desktop (CTD) issues with factory-overclocked NVIDIA GeForce RTX 3080 graphics cards. The issues were reported on various forums and social media platforms. Users are reporting a crash to desktop issues with custom GeForce RTX 3080 models...




videocardz.com


----------



## Dagamus NM

BigMack70 said:


> I'm not sure why you don't all have mooncheese on ignore. The site immediately gets better with that simple click.


Lol, the rants are still amusing. Oh, how the tables have turned over the past week for our friend here. Stages of grief, live in action.

Yet he has a good setup already. Covid bored I guess.

There are a lot of things I opt out of spending money on, this card will not be one of them. I have a decent job but am not rich. I have two builds these cards will be going into. I will sell my older titans (6 total) and recoup some of the cost. No big deal. Definitely not class war stuff.

Also, if you haven't noticed, the cost of everything is up dramatically since the last round of Nvidia cards released. I mean, look at your grocery bill. Fuel is about the only thing that isn't 25-50% higher than it used to be. I was bored and going to build a house to sell, and the costs of raw materials are up astronomically since the last time we did it. Not just imports, but stuff we produce here in the US. The cost of lumber is way up. Concrete, copper wire, pretty much everything.

Seems that the 3090 is right in that price increase zone if it really is intended to take up the upper tier of the lineup. I still think there will be another top card released. Similar specs to the quadro, but with gaming drivers instead. Isn't that how Ti cards started in the first place? Quadros that failed some specs but met enough to be a gaming card.


----------



## vmanuelgm

In regards to performance difference between 2080Ti and 3090.

Tried today Jedi Fallen Order again in the same initial passage and compared it to my previous video of the 2080Ti, both cards under water and power unlocked.

These are the captures:


















26%


----------



## HyperMatrix

vmanuelgm said:


> In regards to performance difference between 2080Ti and 3090.
> 
> Tried today Jedi Fallen Order again in the same initial passage and compared it to my previous video of the 2080Ti, both cards under water and power unlocked.
> 
> These are the captures:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 26%


Yeah but on the 3090 a couple of your CPU cores are maxed out and your GPU is sitting at 93% load. To get a more fair estimate, you'd have to try with a higher GPU workload, perhaps through supersampling. Also noticed that you're only getting 2025MHz on the 3090 with your shunt mod. Power difference between cards is going to be important here I think. But assuming under water, we're able to hit at least 2.1GHz as EVGA hinted at with their FTW3 series, and with a shunt mod that means we would need to increase 2 things on your 138fps. First off, have to add 5.3% for GPU load difference (98% on 2080ti vs 93% on 3090) and account for another 3.7% for 2.1GHz clock. That would put us up to 151FPS. Meaning a potential 37% increase over the 2080ti. Still not great, but at least it's better.
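The scaling estimate above can be sketched as a few lines of arithmetic. The 138 fps, load percentages, and 2025 MHz clock come from the posts in this thread; the 2.1 GHz figure is the hoped-for under-water clock, not a measurement:

```python
# Back-of-envelope projection of 3090-over-2080 Ti uplift, using figures
# quoted in this thread. Purely illustrative; not a benchmark.
observed_fps = 138.0            # 3090 result at 93% GPU load / 2025 MHz

load_scaling = 98 / 93          # ~5.3% headroom vs the 2080 Ti's 98% load
clock_scaling = 2100 / 2025     # ~3.7% from a hypothetical 2.1 GHz clock

projected_fps = observed_fps * load_scaling * clock_scaling   # ~151 fps

# vmanuelgm measured the 3090 about 26% ahead of his 2080 Ti, implying
# a 2080 Ti baseline of roughly 138 / 1.26 ~ 110 fps.
baseline_fps = observed_fps / 1.26
uplift = projected_fps / baseline_fps - 1                     # ~37-38%

print(round(projected_fps), f"{uplift:.1%}")
```

Multiplying the two ratios reproduces the ~151 fps and roughly 37% figures in the post above.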




domenic said:


> Just what we needed to hear - what's up with this all of a sudden?


Did you add me to ignore?  I wrote about this on the previous page I think. There's an issue with the cooling on some of the 3080s. There's inadequate cooling on the GDDR6x memory that's causing it to hit 105C which then results in it crashing. Tj max on GDDR6x is 110C I believe. But obviously with how unstable GDDR6x is in general, requiring a form of error correction already, where higher clocks can be stable but actually reduce performance, adding that amount of heat to the mix can surely break things.

All of this has me even more worried for the 3090. But Aquacomputer said they're considering building a block for either the ROG Strix or the FTW3 depending on which one is more popular. With an active cooled backplate, we'll have nothing to worry about.


----------



## kaydubbed

Correct me if I'm wrong, but Jedi Fallen Order uses DX11, not DX12.

So it might not be the best game for comparing the two generations.


----------



## mirkendargen

I recall reading somewhere that the GDDR6X uses ~60W of power. I don't remember if that was for the 10GB of it on a 3080 or the 24GB on a 3090, but let's say it's for the 24GB. That would put 30W of heat (before any overclocking, VRM heat soaking through, etc.) to dissipate with just a backplate. Seems awfully sketchy without AT LEAST some fins to increase surface area. Depending on the block designs, I might repurpose the big finned backplate from an Arctic Accelero IV I have lying around and mount a quiet fan on it to cool the back of the card. I vertical-mount and have plenty of room behind the card.
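As a sanity check on that reasoning, here is a minimal sketch, assuming (as the post does) that the ~60 W figure covers the full 24 GB, which the 3090 carries as 24 x 1 GB GDDR6X modules with half of them on the back of the PCB:

```python
# Heat-budget sketch for the backplate, under the post's assumption that
# ~60 W is the total for all 24 GB of GDDR6X. Module count and placement
# (12 of 24 behind the PCB) match 3090 board shots; figures are rough.
total_mem_power_w = 60.0
modules_total = 24        # 24 x 1 GB GDDR6X modules
modules_backside = 12     # mounted behind the PCB, cooled only by the backplate

per_module_w = total_mem_power_w / modules_total    # 2.5 W per module
backside_heat_w = per_module_w * modules_backside   # 30.0 W through the backplate

print(per_module_w, backside_heat_w)
```

That 30 W is why a bare flat backplate looks marginal and some finning or airflow helps.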


----------



## vmanuelgm

HyperMatrix said:


> Yeah but on the 3090 a couple of your CPU cores are maxed out and your GPU is sitting at 93% load. To get a more fair estimate, you'd have to try with a higher GPU workload, perhaps through supersampling. Also noticed that you're only getting 2025MHz on the 3090 with your shunt mod. Power difference between cards is going to be important here I think. But assuming under water, we're able to hit at least 2.1GHz as EVGA hinted at with their FTW3 series, and with a shunt mod that means we would need to increase 2 things on your 138fps. First off, have to add 5.3% for GPU load difference (98% on 2080ti vs 93% on 3090) and account for another 3.7% for 2.1GHz clock. That would put us up to 151FPS. Meaning a potential 37% increase over the 2080ti. Still not great, but at least it's better.
> 
> 
> 
> 
> Did you add me to ignore?  I wrote about this on the previous page I think. There's an issue with the cooling on some of the 3080s. There's inadequate cooling on the GDDR6x memory that's causing it to hit 105C which then results in it crashing. Tj max on GDDR6x is 110C I believe. But obviously with how unstable GDDR6x is in general, requiring a form of error correction already, where higher clocks can be stable but actually reduce performance, adding that amount of heat to the mix can surely break things.
> 
> All of this has me even more worried for the 3090. But Aquacomputer said they're considering building a block for either the ROG Strix or the FTW3 depending on which one is more popular. With an active cooled backplate, we'll have nothing to worry about.


The FTW3 is going to be a hell of a watt-eater then; with clock fluctuation I'm easily reaching 550W...


----------



## saltedham

i am once again asking for frames per second. who else is gonna stay up and hope a site puts up the 3090s early?


----------



## carlhil2

I want to see some water cooled sub 40c results....


----------



## dante`afk

good luck


----------



## carlhil2

Glad I held on to my 2080Ti, might be a while. LOL....


----------



## Mooncheese

*Related to 3090, same GPU core, same architecture:*






*A negative offset of at least 100 MHz in the GPU clock can provide a remedy for the time being*


__
https://www.reddit.com/r/nvidia/comments/iydwp0


----------



## HyperMatrix

dante`afk said:


> good luck
> 
> View attachment 2459841


DE/AT as in Deutschland & Austria?


----------



## dante`afk

I guess. Maybe also 250 for other regions? Or worldwide... who knows.


----------



## HyperMatrix

dante`afk said:


> I guess, maybe also 250 for other regions? or worldwide..who knows


If there's 250 for just those 2 countries, then there should be at least 1000 for USA/Canada. The question is, will people who missed out on the 3080 try to buy these? Will the 10-15% faster than 3080 claims dissuade people? Either way it'll be tight. Here's hoping people have more sense than me and decide not to buy one.  Although I am going for the FTW3, not FE.


----------



## dentnu

HyperMatrix said:


> If there's 250 for just those 2 countries, then there should be at least 1000 for USA/Canada. The question is, will people who missed out on the 3080 try to buy these? Will the 10-15% faster than 3080 claims dissuade people? Either way it'll be tight. Here's hoping people have more sense than me and decide not to buy one.  Although I am going for the FTW3, not FE.


Where were you trying to snag the FTW3 from? That's the same card I am trying to get.


----------



## Thoth420

Microcenter Cambridge only got 12 3090s. I was the 14th to arrive so I opted to just go home.


----------



## HyperMatrix

dentnu said:


> Where were you trying to snag the FTW3 from? That's the same card I am trying to get.


Anywhere that has them. Haha. EVGA’s store, Newegg.ca, canadacomputers if they list it. Can’t really think of anywhere else in Canada. Bestbuy only has the xc3 model from EVGA.


----------



## dentnu

HyperMatrix said:


> Anywhere that has them. Haha. EVGA’s store, Newegg.ca, canadacomputers if they list it. Can’t really think of anywhere else in Canada.


Well, good luck! I am trying Newegg and EVGA. I keep thinking not a lot of people are willing to spend $1800+ on a video card. Hopefully that will be enough to let us snag one.


----------



## originxt

Supposedly a line has already started forming at the Tustin Microcenter. No EVGA, but other AIBs like MSI. They do have stock, though; unsure of quantity.


----------



## HyperMatrix

dentnu said:


> Well, good luck! I am trying Newegg and EVGA. I keep thinking not a lot of people are willing to spend $1800+ on a video card. Hopefully that will be enough to let us snag one.


If I knew for sure Aquacomputer was building an ROG Strix block as well, I'd consider that card too, since the Strix and FTW3 Ultra are the same price. Though I'd definitely prefer the FTW3 for the higher TDP and the bigger chance of a KPE BIOS leak, since it's the same board. Speaking of which, is there any information on how Ampere can be flashed? I know Gamers Nexus did it, but others haven't even been able to dump their own BIOS through nvflash/GPU-Z.



originxt said:


> Supposedly a line has already started forming at the Tustin Microcenter. No EVGA, but other AIBs like MSI. They do have stock, though; unsure of quantity.


I'm gonna get a lot of hate here...but honestly, I wouldn't recommend the Gigabyte cards. Or Zotac. Look at vmanuelgm's Gigabyte 3090: shunt-modded, pulling 550W, and still only a 2025MHz clock. Although under load I'm expecting FE models to struggle to maintain 1950MHz, unless the FE cooler design on the 3090 is substantially more efficient/effective than on the 3080. If you're paying this much for a card, at least get the right one, or you'll get major buyer's remorse.


----------



## BigMack70

Good luck everyone as we enter the f5 ddos army lottery tomorrow...


----------



## HyperMatrix

BigMack70 said:


> Good luck everyone as we enter the f5 ddos army lottery tomorrow...


Just gotta hack EVGA's Cloudflare account and redirect all requests to the AMD store's mountain bike page.


----------



## Jpmboy

1000 posts on the 3090 owners thread... and the card has yet to launch.


----------



## lokran88

HyperMatrix said:


> If I knew for sure Aquacomputer was building an ROG Strix block as well, I'd consider that card too. Since the Strix and FTW3 Ultra are the same price. Though I'd definitely prefer the FTW3 for higher TDP and bigger chance of KPE bios leak since it's the same board.


As I wrote a few pages back, Aquacomputer stated today that they are planning one for the Strix too. Not 100% decided yet, but I am pretty confident there will be one.


----------



## domenic

lokran88 said:


> As I wrote a few pages before, Aquacomputer today stated that they are planning for Strix too. Not 100% decided yet, but I am pretty confident there will be one.


Did they also announce plans for the FTW3 3090, or just the Strix in terms of custom cards?


----------



## inutile

Just got my notification email.


----------



## J7SC

Thoth420 said:


> Microcenter Cambridge only got 12 3090s. I was the 14th to arrive so I opted to just go home.


...ouch

...also, it's too early to speculate, but I wonder what Samsung's actual yield currently is on 8nm 3090 dies, and/or whether the whole supply chain was just compressed and rushed for other reasons


----------



## kx11

Hopefully the failure rate isn't as bad as the 3080's.


----------



## carlhil2

Thoth420 said:


> Microcenter Cambridge only got 12 3090s. I was the 14th to arrive so I opted to just go home.


I was there this afternoon. I saw them lining up....


----------



## HyperMatrix

J7SC said:


> ...ouch
> 
> ...also, too early to speculate, but I wonder what Samsung's actual yield is currently on 8nm 3090s, and / or whether the whole supply chain was just compressed and rushed due to other reasons


It also doesn't help that the Quadro, 3090, and 3080 all use the same die. So if the chip is better, it's a Quadro; if it's worse, it's a 3080. There's a very limited range in which a die can qualify as a 3090. On previous cards, 7-8% of SMs were disabled, so anything between 92-99% usable would qualify for the top card. But here, it literally has to be 97.6%-99.9%, because only 2 SMs are disabled from a full-fat die. So it all depends on yields...but also, if hypothetically the yields are amazing, on whether Nvidia is willing to sacrifice the perfect 100% dies that would normally go to the Quadros for the 3090, which I really doubt.

So just to recap on this:

0%-80.9% = Fail - no usable card as of yet.
81%-97.5% = RTX 3080
97.6%-99.9% = RTX 3090
100% = Quadro (and possible future Titan)
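The binning recap above can be sketched as a toy classifier. This is a sketch using the rough thresholds from this post (82 of 84 SMs for a 3090, 68 of 84 for a 3080), not official NVIDIA binning rules:

```python
def bin_die(usable_sm_fraction: float) -> str:
    """Classify a GA102 die by its fraction of usable SMs.

    Thresholds are the rough guesses from the post above,
    not official NVIDIA binning rules.
    """
    if usable_sm_fraction >= 1.0:
        return "Quadro"                # full-fat die (and possible future Titan)
    if usable_sm_fraction >= 82 / 84:  # ~97.6%: at most 2 SMs disabled
        return "RTX 3090"
    if usable_sm_fraction >= 68 / 84:  # ~81%: 3080's 68-SM config or better
        return "RTX 3080"
    return "Fail"                      # no usable consumer card as of yet

print(bin_die(84 / 84))  # Quadro
print(bin_die(82 / 84))  # RTX 3090
print(bin_die(70 / 84))  # RTX 3080
print(bin_die(0.60))     # Fail
```

With 84 SMs on the full die, only the 83/84 and 82/84 outcomes land in the 3090 bucket, which is why the window is so razor thin.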

Based on a screenshot someone shared earlier, Germany and Austria are getting a combined 250 FE models available at launch. So if I had to guess total global availability? 2,500-5,000 FE cards. Who knows how many cards the AIBs have, though. And who knows how they're being allocated per region. But they weren't lying about limited supply of the 3090.

So my advice? Grab a tub of vaseline and get that F5 finger primed.


----------



## Falkentyne

Ok that's more like it.


----------



## inutile

kx11 said:


> hopefully the failure rate isn't as bad as 3080s



I'm not so sure 3080s are failing. What I'm seeing people report with regard to black screen crashes seems to be something driver/software related rather than hardware failures.


----------



## J7SC

...while waiting for 3090 reviews / order forms and madly pushing F5, one can marvel at this thing in the meantime with its 8x Ampere A100 based compute-heavy GPUs and appreciate how much cheaper that 3090 will be...


----------



## Jpmboy

J7SC said:


> ...while waiting for 3090 reviews / order forms and madly pushing F5, one can marvel at this thing in the meantime with its 8x Ampere A100 based compute-heavy GPUs and appreciate how much cheaper that 3090 will be...


gotta buy more lottery tickets. 🙂


----------



## maltamonk

Jpmboy said:


> gotta buy more lottery tickets. 🙂


It's not a dumb person tax!.........lol


----------



## J7SC

Jpmboy said:


> gotta buy more lottery tickets. 🙂


...ironically, that setup is supposedly 'half price' compared to the previous version, so only 1 instead of 2 fortunes...that said, I find the NVidia A100 chip itself very interesting...also available as PCIe, it is built on TSMC's 7nm process, has 40GB of HBM2 and 'only' 6912 shaders, but without the floating point / compute changeover per earlier posts

...not a desktop / gaming card like the 3080/90, though


----------



## CallsignVega

vmanuelgm said:


> In regards to performance difference between 2080Ti and 3090.
> 
> Tried today Jedi Fallen Order again in the same initial passage and compared it to my previous video of the 2080Ti, both cards under water and power unlocked.
> 
> These are the captures:
> 
> 26%


Oh my god, that is pathetic. In my testing, my 2175 MHz shunt modded water RTX Titan was 10% faster than my 2140 MHz water 2080 Ti. That means a 550W water 3090 is a measly ~16% faster in rasterization. Ampere is garbage.

I'll still try and fail for a 3090 tomorrow since I need HDMI 2.1 for my 48CX; but Ampere is over-hyped garbage.


----------



## BigMack70

CallsignVega said:


> Oh my god, that is pathetic. In my testing, my 2175 MHz shunt modded water RTX Titan was 10% faster than my 2140 MHz water 2080 Ti. That means a 550W water 3090 is a measly ~16% faster in rasterization. Ampere is garbage.
> 
> I'll still try and fail for a 3090 tomorrow since I need HDMI 2.1 for my 48CX; but Ampere is over-hyped garbage.


Nvidia owes LG money I think. Hdmi 2.1 is the only reason I have any interest in Ampere.


----------



## HyperMatrix

CallsignVega said:


> Oh my god, that is pathetic. In my testing, my 2175 MHz shunt modded water RTX Titan was 10% faster than my 2140 MHz water 2080 Ti. That means a 550W water 3090 is a measly ~16% faster in rasterization. Ampere is garbage.
> 
> I'll still try and fail for a 3090 tomorrow since I need HDMI 2.1 for my 48CX; but Ampere is over-hyped garbage.


A few things. His Gigabyte 3090 performance and power usage is defying the laws of physics, so don't assume it to be typical. You don't need 550W under water for 2025MHz. The FTW3 3080 was able to hit 2070MHz, and was stable-ish around 2040MHz, with under 410W on air. An extra 20% of cores shouldn't cause a lower clock rate when you have an extra 34% power and a 20C+ reduction in temps.

Secondly, we don’t know how stable his memory clock is. PAM4 can be “stable” at a higher clock rate, but with an actual reduction in performance because it’s only stable due to the error correction that’s happening.

Lastly...you're right that Ampere is overhyped. But the mistake people keep making is comparing it to the next card up in the old lineup: a $1,200 2080 Ti against a $700 3080, or in this case, a $2,500 Titan against a $1,500 3090. The fact that a $700 card is actually able to beat a $2,500 RTX Titan is great. The Samsung node is crap. Power usage is crap. The delay on the 20GB variants of the 3080 is crap. The marketing is crap. All of that is true. But we're still talking about substantially lower priced cards beating out the kings of the previous generation.

So yes, Ampere isn’t as good as Nvidia claimed. It’s also not as good as it could have been. But what is there, at the current price, is still a great card. And the only thing that can undeniably prove that statement to be right or wrong is the release of AMD’s cards. And I’ve been burned one too many times by AMD to have any real faith in them at this point.


----------



## CallsignVega

The average price for a 3090 for the rest of the year will be closer to the $2,500 RTX Titan once all the bots and eBay sales are taken into consideration. I can't help that NVIDIA priced the fastest card at "only" $1,500 MSRP. I always buy the best, and a bit over 15% performance increase from the previous best card to the next gen's best is awful.

At the end of Ampere's life-cycle in two years, I will still have been within ~15% of the performance I was at *four years* in the past. Any way you slice it, that is terrible.

To put in perspective how awful Ampere is, this 3090 "Titan" is less than half the performance increase of going from the Pascal Titan to the Turing Titan.
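For what it's worth, the ~15-16% figure in these posts falls out of simple ratio math from the numbers quoted in this thread (a 3090 ~26% over a 2080 Ti, an RTX Titan ~10% over it). These inputs are the posters' own estimates, not formal benchmarks:

```python
# Posters' estimates from this thread (assumptions, not formal benchmarks):
gain_3090_over_2080ti = 1.26   # 3090 ~26% faster than a 2080 Ti
gain_titan_over_2080ti = 1.10  # RTX Titan ~10% faster than a 2080 Ti

# Relative uplift of the 3090 over the RTX Titan
gain_3090_over_titan = gain_3090_over_2080ti / gain_titan_over_2080ti - 1
print(f"{gain_3090_over_titan:.1%}")  # -> 14.5%
```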


----------



## J7SC

I would first want to see results of a 3090 with a little bit more mature drivers, but yeah, so far, it is not a game changer (specifically) for me, as I will be at 4K / 72 Hz (Dsp) for a while longer. 

I do think though that NVidia still has lots of room to come up with more (for example Ampere Titan) variations...the 3090 has 112 ROPs and that number can be pushed much higher per other Ampere data sets - all of course subject to improvements in yields.


----------



## nycgtr

HyperMatrix said:


> A few things. His gigabyte 3090 performance and power usage is defying the laws of physics so don’t assume it to be typical. You don’t need 550W under water for 2025MHz. The FTW3 3080 was able to hit 2070MHz, and stable ish around 2040MHz with under 410W on air. An extra 20% cores shouldn’t cause a lower clock rate when you have an extra 34% power, with a 20C+ reduction in temps.
> 
> Secondly, we don’t know how stable his memory clock is. PAM4 can be “stable” at a higher clock rate, but with an actual reduction in performance because it’s only stable due to the error correction that’s happening.
> 
> Lastly...you’re right that ampere is overhyped. But the problem people keep making is comparing the next card up in the chain to it. All the people comparing a $1200 2080ti to a $700 3080, or in this case, a $2500 Titan to a $1500 3090. The fact that a $700 card is actually able to beat a $2500 RTX Titan is great. Samsung node is crap. Power usage is crap. Delay on 20GB variants of the 3080 is crap. Marketing is crap. All of that is true. But we’re still talking about substantially lower priced cards beating out the kings of the previous generation.
> 
> So yes, Ampere isn’t as good as Nvidia claimed. It’s also not as good as it could have been. But what is there, at the current price, is still a great card. And the only thing that can undeniably prove that statement to be right or wrong is the release of AMD’s cards. And I’ve been burned one too many times by AMD to have any real faith in them at this point.


TBH the 2080 Ti was priced horribly at $1,200. It was an $800-900 card at best, with zero competition. The 2016 Titan X Pascal just got tapped out by the mid-range 3070 four years later. We've been eating sandbags for years.


----------



## Shawnb99

Damn newegg canada. $2499.99 for the FTW3 Ultra


----------



## HyperMatrix

nycgtr said:


> TBH the 2080 Ti was priced horribly at $1,200. It was an $800-900 card at best, with zero competition. The 2016 Titan X Pascal just got tapped out by the mid-range 3070 four years later. We've been eating sandbags for years.


I agree. I'm a Pascal Titan X owner, originally 2x SLI. And I skipped Turing because of how much of a rip-off the 2080 Ti was, in terms of high price and crap performance improvement. I don't agree with that pricing. The 3080 at $700 is solid though, TBH. If I could grab a 20GB 3080 FTW3 model for ~$950 I'd even do that over the 3090.


----------



## nycgtr

HyperMatrix said:


> I agree. I’m a pascal Titan x owner. Originally 2x SLI. And I skipped Turing because of how much of a rip off the 2080ti was, in terms of high price and crap performance improvement. I don’t agree with that pricing. The 3080 at $700 is solid though tbh. If I could grab a 20GB 3080 FTW3 model for $950~ I’d even do that over the 3090.


Unless the 6900 XT is solidly ahead of the 3080, I think $950 on an FE with AIBs going to $1,100 is highly possible. Even if the 6900 XT is on par, Nvidia will still market their selling points of RTX/DLSS etc. If I'm looking at the prospect of a $1,000-1,100 20GB 3080, I might as well just spend the extra $400-500 now rather than wait around for a 10% slower card with 20GB of RAM. $500 for 10% isn't a great value point either, but there are many other hobbies where 10% is going to run you a lot more than $500.


----------



## vmanuelgm

HyperMatrix said:


> Yeah but on the 3090 a couple of your CPU cores are maxed out and your GPU is sitting at 93% load. To get a more fair estimate, you'd have to try with a higher GPU workload, perhaps through supersampling. Also noticed that you're only getting 2025MHz on the 3090 with your shunt mod. Power difference between cards is going to be important here I think. But assuming under water, we're able to hit at least 2.1GHz as EVGA hinted at with their FTW3 series, and with a shunt mod that means we would need to increase 2 things on your 138fps. First off, have to add 5.3% for GPU load difference (98% on 2080ti vs 93% on 3090) and account for another 3.7% for 2.1GHz clock. That would put us up to 151FPS. Meaning a potential 37% increase over the 2080ti. Still not great, but at least it's better


Jedi is not a very well optimized game; it saturates one thread.

I don't see an FTW3 totally unlocked and pulling 700W; that is more on the Kingpin level.

What did EVGA say specifically?


----------



## Shaded War

Are they for sure releasing a 3080 20GB model? I was going to buy a 3090 over the 3080 because of the extra VRAM for 4k gaming.

But with the confirmation that it's only 10-15% better for $800-1,000 more, I'm on the fence. I was under the impression it would be 20% or so.


----------



## vmanuelgm

CallsignVega said:


> Oh my god that is pathetic. In my testing; my 2175 MHz shunt modded water RTX Titan was 10% faster than my 2140 MHz water 2080 Ti. That means a 550w watt water 3090 is a measly ~16% faster in rasterization. Ampere is garbage.
> 
> I'll still try and fail for a 3090 tomorrow since I need HDMI 2.1 for my 48CX; but Ampere is over-hyped garbage.


Can u record a video and link it to see your fps???


----------



## Esenel

Shaded War said:


> But the confirmation that it's only 10-15% better for $800-1000 more, I'm on the fence. I was under the impression it would have been 20% or so.


Of course it cannot be much more than 10-15%.
Only 375W max for the FE.
And the difference in default power limit will already be consumed by the extra memory.

So a 3090 only makes sense if you can pull more than 375W.


----------



## HyperMatrix

Shaded War said:


> Are they for sure releasing a 3080 20GB model? I was going to buy a 3090 over the 3080 because of the extra VRAM for 4k gaming.
> 
> But the confirmation that it's only 10-15% better for $800-1000 more, I'm on the fence. I was under the impression it would have been 20% or so.


20GB models have been 100% confirmed by more than one AIB partner now. The question of when and for how much is unknown.





vmanuelgm said:


> jedi is not a very optimized game saturating one thread.
> 
> I don't see a ftw3 totally unlocked and pulling 700w, that is more on the Kingpin level.
> 
> What did EVGA say specifically?


They didn't specifically say anything. They released a picture of the FTW3 card overclocked to 2.1GHz. The boost numbers and other details didn't add up, and they didn't talk about it afterwards. Honestly, with your card I think you're at the limit of your chip. Either that or the board isn't able to provide reliable power to the die. Is GPU-Z just showing "power" as the reason your card is throttling? Your card is a mystery to me. Either you got unlucky with a bad overclocker, or we're all in trouble with the 3090. Haha.


----------



## shiokarai

dante`afk said:


> good luck
> 
> View attachment 2459841


What's this and how do you get it? Is it an inventory check script, app, or page?


----------



## greg_p

dante`afk said:


> good luck
> 
> View attachment 2459841


Subscribed, as I could be a customer for a 3090, but actually I am not that sure... 250 is a paper launch....


----------



## HyperMatrix

greg_p said:


> Subscribed as I could be a customer for 3090, but actually I am not that sure... 250 is a paperlaunch....


250 is just FE cards. Not AIB. And only in DE/AT. Germany and Austria.


----------



## greg_p

HyperMatrix said:


> 250 is just FE cards. Not AIB. And only in DE/AT. Germany and Austria.


Thx, that was not that obvious, sorry for my noobitude on OC.net


----------



## GanMenglin

MSi Ventus 3X OC

@zhrooms


----------



## GanMenglin

GanMenglin said:


> MSi Ventus 3X OC
> 
> @zhrooms
> 
> 
> View attachment 2459906


PG132 SKU10, 2x 8-pin, so it's a reference PCB?


----------



## greg_p

Does GPU-Z read the card's embedded power limit, or does it read a database somewhere on the internet?


----------



## TwinParadox

Did anyone find a way to dump the BIOS? I need to update the GOPUpdater tool.


----------



## asdkj1740

GanMenglin said:


> PG132 SKU10, 2x 8-pin, so it's a reference PCB?


Seems to be a custom PCB, just like the 3080 Ventus/Gaming X Trio.
This type of custom PCB is designed based on the reference PCB.


----------



## GanMenglin

asdkj1740 said:


> Seems to be a custom PCB, just like the 3080 Ventus/Gaming X Trio.
> This type of custom PCB is designed based on the reference PCB.


Yes, MSI said the Ventus is a "customized PCB", but their support department told me it's compatible with the water block designed for the reference PCB.


----------



## asdkj1740

GanMenglin said:


> Yes, MSI said the Ventus is a "customized PCB", but their support department told me it's compatible with the water block designed for the reference PCB.


No, that seems impossible to me, although I have not seen the water block.
Look at the gap between the right-side VRAM and the right-side VRM.


----------



## mattxx88

greg_p said:


> Does GPU-Z read the card's embedded power limit, or does it read a database somewhere on the internet?


It should read the BIOS info from the GPU itself.


----------



## GanMenglin

asdkj1740 said:


> No, that seems impossible to me, although I have not seen the water block.
> Look at the gap between the right-side VRAM and the right-side VRM.
> View attachment 2459908


There are 3x 8-pin connectors on the first card; it's definitely not the reference PCB.


----------



## GanMenglin

asdkj1740 said:


> No, that seems impossible to me, although I have not seen the water block.
> Look at the gap between the right-side VRAM and the right-side VRM.
> View attachment 2459908


By the way, what's the first card?


----------



## HyperMatrix

So currently the second highest TDP card we know of is the Palit Game Rock 3090 at 420W. Look at it in all its...glory....Palit Game Rock. For when you want to spend $1700 for a card not only made by an 8 year old girl, but also designed by one. And people thought the EVGA red lip was bad.


----------



## Avacado

53 pages of "owners" who don't own anything yet, lol! This thread makes me happy. Your motivation motivates me.


----------



## BigMack70

HyperMatrix said:


> So currently the second highest TDP card we know of is the Palit Game Rock 3090 at 420W. Look at it in all its...glory....Palit Game Rock. For when you want to spend $1700 for a card not only made by an 8 year old girl, but also designed by one. And people thought the EVGA red lip was bad.
> 
> View attachment 2459909


RGB is so last year.


----------



## CallsignVega

vmanuelgm said:


> Can u record a video and link it to see your fps???


Computer still in shipping. Just moved back to the US from Saudi Arabia.


----------



## HyperMatrix

BigMack70 said:


> RGB is so last year.


It's apparently a really popular design. So much so that Emtek is using the exact same shroud. This is why I don't understand foreign cultures.


----------



## greg_p

Hopefully it will improve in the coming minutes.... or not


----------



## Ferreal

Nvidia website is starting to lag lol...


----------



## greg_p

At your mark, robots on!


----------



## BigMack70

greg_p said:


> At your mark, robots on!


Get your disappointment set!


----------



## HyperMatrix

BigMack70 said:


> Get your disappointment set!


The fear of being flagged as a bot by the system for refreshing too much and having to do a minute long recaptcha request...


----------



## greg_p

EVGA is down, and a national retailer (Topachat - somewhat specialized) says €2,000 for a FTW3 Ultra... Half a surprise, but still.


----------



## Shawnb99

Duplicate


----------



## dante`afk

newegg, nvidia, amazon, evga,

all jumped from "notify me" to "out of stock".

nice


----------



## Jpmboy

note the time: 9:01. "Out of Stock"


----------



## Shawnb99

EVGA shows it out of stock already


----------



## dante`afk

it already showed out of stock at 8:58am



Jpmboy said:


> note the time: 9:01. "Out of Stock"
> View attachment 2459913


----------



## Ferreal

lol it never went in stock for me.


----------



## Jpmboy

eh, I'll check back once the toilet paper rush subsides.


----------



## HyperMatrix

EVGA says for elite members only? Seriously?


----------



## inutile




----------



## Shawnb99

Damn EVGA wouldn’t let me use Optmius as a shipping option so I lost my card


----------



## Shawnb99

**** this I’m done with this series


----------



## Ferreal

And the bots win again.


----------



## greg_p




----------



## Spiriva

Komplett.se never got the Strix today; I managed to order:

one ASUS GeForce RTX 3090 TUF GAMING and one Gigabyte GeForce RTX 3090 GAMING OC.


----------



## HyperMatrix

Doesn't say out of stock. But says for elite members only. Total bull. They should have said so ahead of launch.


----------



## Shawnb99

What total bullcrap.


----------



## Jpmboy

Yeah well, I can't even log into my elite account there.


----------



## inutile

The RTX 3090 Founders Edition Performance Revealed and Benchmarked


The RTX 3090 Founders Edition Arrives at $1499 - Ampere Performance Revealed - 35+ Games, SPEC, Pro App, GPGPU, & Workstation benchmarked




babeltechreviews.com


----------



## Imprezzion

I was going to order one but there's like 1 or 2 models in stock across the entire country and shops sell them for €1800-2000 lol. Not going to do that.. that's a bit crazy even for me.. and I don't really care about money lol..


----------



## Ferreal

wow... I was able to add to cart but now it's stuck.


----------



## HyperMatrix

If you're hitting that elite member bullcrap on EVGA, keep refreshing. The out of stock keeps going away. They might be trickling it in. Still pissed though, cuz nothing in their announcements said sales would be restricted.


----------



## Jpmboy

i got this far and it is still sitting there...


----------



## Ferreal

Jpmboy said:


> i got this far and it is still sitting there...
> View attachment 2459918


Anyone got past this step? Mine is still checking out.


----------



## DooRules

I have a pending transaction for EVGA on my credit card online; no idea if it will go through.


----------



## Shawnb99

I got all the way to paying for it, and then EVGA decided to forget my info so I had to enter it again; by that time the page just timed out.


----------



## Jpmboy

never got added to my cart. hopefully you have better luck.


----------



## DooRules

and yeah, stuck on the checking out page on nvidia site, lol


----------



## tiefox

I managed to order a 3090 STRIX OC from Komplett; now I need to wait until the 17th of October to get it.


----------



## Ferreal

and now it's sold out...


----------



## vmanuelgm

First in Time Spy for now (until Kingpin wants it...)


----------



## HyperMatrix

Apparently no 3090s either. Lol.


----------



## Jpmboy

DooRules said:


> and yeah, stuck on the checking out page on nvidia site, lol


I think it links to Digital River for check out?


----------



## Wihglah

Got a FTW3 Ultra pre-ordered at Scan.co.uk. 30th October apparently - not even sure I will get one from the first batch. The website says they have 126 pre-orders for that card...


----------



## DooRules

Jpmboy said:


> I think it links to Digital River for check out?


yeah, I think that is how they do it, just gonna let it sit and see what happens, nothing to lose


----------



## Jpmboy

post back. I lost patience and... lost out.


----------



## Ferreal

DooRules said:


> yeah, I think that is how they do it, just gonna let it sit and see what happens, nothing to lose


I refreshed and my cart went to 0


----------



## DooRules

Hoping that transaction for EVGA goes through; I was even able to add my EVGA Bucks to the purchase, lol... not much faith in it though, to be honest.


----------



## nizda

Jpmboy said:


> I think it links to Digital River for check out?


Got stuck here too. If you inspect the packets using developer tools in Chrome, for example, you can see it sending a GET request and a block as the reply. Unreal. Newegg sold out before I could check out for any of the few 3090s they had in stock, and EVGA is pulling that elite member move for the 3090 and can barely handle the traffic. Have a nice day everyone, hopefully someone gets one.


----------



## Sheyster

Jpmboy said:


> i got this far and it is still sitting there...
> View attachment 2459918


Yep, same here for a while now:


----------



## DooRules

my cart at nvidia just zeroed out as well, see how evga goes


----------



## DooRules

online cc acct just showed up a refund for the pending transaction so thats down the tubes as well now


----------



## Jpmboy

Another fine Nvidia launch.


----------



## clipse84

we had no chance


----------



## DooRules

yeah, about what we suspected though right, lol,


----------



## kot0005

Looks like the Strix is 480W according to TPU's review.


----------



## Stephen.

Gotta love those preventative bot measures Nvidia implemented. My gigabit connection, with my Glock 18 middle finger pushing F5 wasn't fast enough.


----------



## inutile

ASUS GeForce RTX 3090 STRIX OC Review


The ASUS GeForce RTX 3090 STRIX OC is the fastest RTX 3090 we have tested today by quite the big margin. It also has a huge power limit adjustment range that maxes out at 480 W! We hence added a whole test run at 480 W to our review to see how much extra headroom RTX 3090 Ampere has left and...




www.techpowerup.com





Strix OC apparently has a 480W power limit and performs better than the other AIBs, but unfortunately it doesn't seem to make a substantial difference.


----------



## HyperMatrix

Effing EVGA site. Got to the last page after putting in all details and payment/etc. When I clicked the checkbox for "confirm you accept the terms" it decided to crash. -_-


----------



## greg_p

The funny part will be when Nvidia refuses to refund the cards bought by scalpers that didn't find a customer. There are already tons of them on eBay.


----------



## inutile

inutile said:


> ASUS GeForce RTX 3090 STRIX OC Review
> 
> 
> The ASUS GeForce RTX 3090 STRIX OC is the fastest RTX 3090 we have tested today by quite the big margin. It also has a huge power limit adjustment range that maxes out at 480 W! We hence added a whole test run at 480 W to our review to see how much extra headroom RTX 3090 Ampere has left and...
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> Strix OC apparently has a 480W power limit and performs better than the other AIBs, but unfortunately it doesn't seem to make a substantial difference.


Good news is that ASUS is saying they haven't decided on a price yet, so maybe it will come in cheaper than the $1,800 listed on Newegg. This seems like the one to get IMO: high power limit, ASUS quality, and it will assuredly get a decent selection of water blocks.


----------



## tiefox

inutile said:


> Good news is that ASUS i saying they haven't decided on a price yet so maybe it will come in cheaper than the $1800 listed on Newegg. This seems like the one to get imo, high power limit, ASUS quality, and will assuredly get a decent selection of waterblocks.


They have a price; I just pre-ordered a 3090 Strix OC for the equivalent of 2,180 USD here in Denmark (25% VAT).


----------



## ThrashZone

tiefox said:


> They have a price; I just pre-ordered a 3090 Strix OC for the equivalent of 2,180 USD here in Denmark (25% VAT).


Hi,
Ouch


----------



## inutile

tiefox said:


> They have a price; I just pre-ordered a 3090 Strix OC for the equivalent of 2,180 USD here in Denmark (25% VAT).


"ASUS told us "we haven't decided on a price yet", the card is currently listed on Newegg for $1799 which is a lot of money."

Seems like it's just a placeholder price and they have room to adjust it before the card actually comes out next month.


----------



## Niju

Wihglah said:


> Got a FTW3 Ultra pre-ordered at Scan.co.uk. 30th October apparently - not even sure i will get one from the first batch. Website says they have 126 pre-orders for that card...


Where can you see the amount of pre-orders? I also got a XC3 Ultra from Scan, said 5th of October under the pre order button.


----------



## vmanuelgm

tiefox said:


> They have a price; I just pre-ordered a 3090 Strix OC for the equivalent of 2,180 USD here in Denmark (25% VAT).


Strix can be a nice choice, seeing this result... No clock variation...



https://www.3dmark.com/spy/14071929


----------



## kot0005

vmanuelgm said:


> Strix can be a nice choice, seeing this result... No clock variation...
> 
> 
> 
> https://www.3dmark.com/spy/14071929


2190MHz? Is that LN2?


----------



## HyperMatrix

Well F@#$ this. Went to the end of the checkout process, including putting in my credit card info and getting it verified, only for it to crash/freeze/restart, and finally, after 40 minutes of re-attempting it over and over again, the item was removed from my cart due to insufficient stock. Biggest god damn tease ever. -_-


----------



## vmanuelgm

kot0005 said:


> 2190MHz? Is that LN2?


Nope, water is enough; the 3090s clock quite nicely, the problem is power limits. My Gigabyte Gaming OC can go up to 2220, but the PCB cuts at 600W max = downclock, even with a light shunt mod. The Chinese guy's Asus had a stable clock of 2190, so no power cut and probably pulling around 650-700W. I'll have to improve the shunt mod, touching all the resistors (and see the result), or get a card like the Strix, Kingpin, HOF...

This is superb at 20 degrees (if true). It's Kingpin at 2550, so...



https://www.3dmark.com/pr/328191


----------



## Stephen.

I'm happy just sitting back, and waiting to see which models Aquacomputer and Heatkiller will be focusing on. Then I'll go from there and choose on availability.


----------



## BigMack70

Not gonna lie... this hurt


----------



## HyperMatrix

BigMack70 said:


> Not gonna lie... this hurt
> View attachment 2459922


Exact same god damn thing happened to me. As soon as I clicked that "I have read and agree to the EVGA Store Terms and Conditions." This was more of a piss off than if it were just out of stock...


----------



## BigMack70

HyperMatrix said:


> Exact same god damn thing happened to me. As soon as I clicked that "I have read and agree to the EVGA Store Terms and Conditions." This was more of a piss off than if it were just out of stock...


I don't understand why NONE of these companies actually put the item in your cart when you add it to cart. Microsoft did that with the Xbox Series X pre-orders on their website, and it worked 10000% more smoothly, because even if the site operates at the speed of a three-legged turtle on Nyquil, people can still complete their purchases.

But Nvidia/EVGA/etc. don't do that. You don't actually have the item in your cart, really, until the order is placed and confirmed. Stupid.
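BigMack70's complaint boils down to whether add-to-cart actually reserves a unit. A reserve-on-add model with an expiring hold can be sketched like this (a toy in-memory model for illustration, not any retailer's actual system; all names here are hypothetical):

```python
import time

class Store:
    """Toy cart that reserves stock on add-to-cart with an expiring hold."""

    def __init__(self, stock: int, hold_seconds: float = 600):
        self.stock = stock
        self.hold_seconds = hold_seconds
        self.holds = {}  # user -> hold expiry timestamp

    def _expire_holds(self):
        now = time.monotonic()
        for user, expiry in list(self.holds.items()):
            if expiry <= now:          # hold lapsed unpaid: return unit to stock
                del self.holds[user]
                self.stock += 1

    def add_to_cart(self, user: str) -> bool:
        self._expire_holds()
        if self.stock <= 0 or user in self.holds:
            return False
        self.stock -= 1                # unit is reserved immediately
        self.holds[user] = time.monotonic() + self.hold_seconds
        return True

    def checkout(self, user: str) -> bool:
        self._expire_holds()
        if user not in self.holds:     # hold expired: slow checkout loses the unit
            return False
        del self.holds[user]           # sale is final; stock stays decremented
        return True

store = Store(stock=1)
assert store.add_to_cart("alice")      # alice reserves the only unit
assert not store.add_to_cart("bob")    # bob sees "out of stock", not a race
assert store.checkout("alice")         # alice completes checkout at her own pace
```

The design trade-off is the hold duration: too short and slow checkouts fail exactly as described in this thread; too long and bots can tie up inventory without paying.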


----------



## lester007

The place order button didn't work for me, and the site finally crashed.


----------



## originxt

Refreshed at 6:58:58, sold out at nvidia on load lol. Try newegg as it pulled up, add to cart, sold out. Same with best buy, evga. What a ride lol.


----------



## mirkendargen

HyperMatrix said:


> Well F@#$ this. Went to the end of the checkout process, including putting in my credit card info and getting it verified, only for it to crash/freeze/restart, and finally, after 40 minutes of re-attempting it over and over again, the item was removed from my cart due to insufficient stock. Biggest god damn tease ever. -_-


Literal exact same thing happened to me. Oh well, the Strix review surfaced while I was in the middle of all that and sent the message that I was better off waiting anyway.


----------



## BigMack70

originxt said:


> Refreshed at 6:58:58, sold out at nvidia on load lol. Try newegg as it pulled up, add to cart, sold out. Same with best buy, evga. What a ride lol.


Yeah. I'm done wasting my time on all these garbage websites that don't know how a shopping cart works. I'll just wait a month or two when stock isn't a problem, there's more high end cards to choose from, and we know more about all the possible water block and AIO options. 

FWIW, the performance looks fine from reviews if you get one of the nicer models. Strix is ~20% better than the 3080 at 4k.


----------



## Wihglah

Niju said:


> Where can you see the amount of pre-orders? I also got a XC3 Ultra from Scan, said 5th of October under the pre order button.


 Right hand side - above the Pre-order button "There are 124 people buying this product" or similar


----------



## vmanuelgm

Jayz got 14633 in Port Royal only applying AC to his Founders 3090...


----------



## ThrashZone

Hi,
Yep line up at micro center lol no online reserves


----------



## Glerox

Anybody bought one on the EVGA store?
I had one in my cart, but after the address part it says "Part number has less than 0" *** I lost an hour of trying.


----------



## HyperMatrix

Glerox said:


> Anybody bought one on the EVGA store?
> I had one in my cart, but after the address part it says "Part number has less than 0" *** I lost an hour of trying.


A few of us had that issue and the item being removed from cart when we were at the final process order page. Checking twitter, it's happening to lots of other people as well. Total waste of time. 

Update: ANNNNDDD....it happened again when I tried to order the FTW3 Ultra that got opened to the public. Go to last page...remove from cart cuz sold out. -_-


----------



## Thoth420

Well I won... kinda. Decided to throw money at the problem. As soon as I know what model is shipping I will update you all. Proceed to laugh now. Cost (after tax): $3,100 and change.

PS: I needed hardware for a buddy's build anyway, so all the rest of this will be used for that, and the 3090 will be swapped out for either a 6900 XT or a 3080 (Super).


----------



## BigMack70

HyperMatrix said:


> A few of us had that issue and the item being removed from cart when we were at the final process order page. Checking twitter, it's happening to lots of other people as well. Total waste of time.
> 
> Update: ANNNNDDD....it happened again when I tried to order the FTW3 Ultra that got opened to the public. Go to last page...remove from cart cuz sold out. -_-


lol I also tried again, and that time my credit card blocked it for fraud and within the 2 mins it took me to deal with that and confirm the transaction, I got notified that it was removed from my cart

oh well...


----------



## Chamidorix

I just got an FTW3 Ultra from the US EVGA store. It was pure hell. I had 3 tabs each across 4 browsers and bookmarked each incremental step in the checkout process. As soon as one tab would advance to adding to cart, I'd start refreshing the cart link on all 12 tabs. Repeat for shipping confirmation. Repeat for payment confirmation. At least 5 minutes of nonstop F5 on all tabs at each step. Then, the icing on the cake: at the final place-order page I literally got it to load on 5 tabs, and it was AJAX so you could hit the button over and over. I kid you not, I mashed the place-order button hundreds of times over a few minutes on all the tabs before one of them finally went through.

To be clear, they didn't open up FTW3 models until about 7:45, to Elite members. Then they opened the FTW3 Gaming to the public; didn't get it. Then 15 min later they opened the FTW3 Ultra to the public. I used that 15-min interval to set up my browser with bookmarks and autofill after learning from the FTW3 Gaming failure, and then the 12-tab F5 mashing started.

Absolute clusterfuck of a website.


----------



## zhrooms

GanMenglin said:


> MSi Ventus 3X OC, 350/350 W


And default boost clock?


----------



## Gofspar

Got one from NVIDIA.


----------



## Thoth420

Gofspar said:


> Got one from NVIDIA.


Damn now that is impressive! Congrats! I have never in my life had luck with Nvidia direct.


----------



## BigMack70

Gofspar said:


> Got one from NVIDIA.


How? I had it on "add to cart" several times and never could get it to go to a checkout page or register that the card was actually in my cart.


----------



## Chamidorix

BigMack70 said:


> How? I had it on "add to cart" several times and never could get it to go to a checkout page or register that the card was actually in my cart.


Pretty much everyone who got one used a direct add to cart link that was being spammed in the discord.


----------



## nievz

Got an MSI Gaming X Trio here in Philippines for $2,102.06


----------



## Thoth420

Chamidorix said:


> Pretty much everyone who got one used a direct add to cart link that was being spammed in the discord.


Ah nice! Glad to hear there was some success. I knew about a minute or two in with my potato of the moment that I wasn't getting through.


----------



## originxt

Well, that's disappointing. Granted, it's not a full waterblock, but it's still a good look at how it might perform under water in general.


----------



## psychrage

Got my FE ordered. 5:58AM PST showed out of stock. In stock at 6:03AM. The entire checkout process was SLOOOOWW, but no fails. Checked out by 6:05 or 6:06AM. First email received at 6:08AM, second email received at 6:50AM.
I was wanting the FTW3, but EVGA's server is apparently hosted on a 486 or something. Didn't really even try anything other than nvidia.com, despite having tabs open for most of the stores.


----------



## Wihglah

Woohoo! got an order confirmation from Scan.


----------



## ThrashZone

originxt said:


> Well, that's disappointing. Granted, it's not a full waterblock, but it's still a good look at how it might perform under water in general.


Hi,
Besides looking a hell of a lot better, water can cool 15°C better silently, which is most of the point of water cooling.
Then you get into water chillers and even just air chillers; the last is universal.


----------



## originxt

ThrashZone said:


> Hi,
> Besides looking a hell of a lot better, water can cool 15°C better silently, which is most of the point of water cooling.
> Then you get into water chillers and even just air chillers; the last is universal.


Fair enough. The XC3 looks the best for me in terms of warranty, since I'm watercooling, and size, since my O11D definitely can't fit the FTW3 unless I do some side-panel modding. It looks like these cards are pretty much maxed out based on reviews; the 3x 8-pin cards don't seem to net you much. At least for now.


----------



## ThrashZone

originxt said:


> Fair enough. The XC3 looks the best for me in terms of warranty, since I'm watercooling, and size, since my O11D definitely can't fit the FTW3 unless I do some side-panel modding. It looks like these cards are pretty much maxed out based on reviews; the 3x 8-pin cards don't seem to net you much. At least for now.


Hi,
Yeah, in a mid tower (which I use) a water block on any of these 30-series cards shortens the length a lot lol
So yeah, going water is a space saver too.


----------



## asdkj1740

"Stacking good stuff! ASUS ROG Strix GeForce RTX 3090 24GB graphics card power delivery analysis" - BenchLife.info ("What sparks will the GeForce RTX 3090 GPU strike with the ROG Strix this time?") - benchlife.info

Strix PCB breakdown by R.F. on BenchLife:
two MP2888A 10-phase controllers; one is configured for 10 phases, the other for 6.


----------



## GanMenglin

zhrooms said:


> And default boost clock?


@zhrooms


----------



## Shawnb99

Is it only the FE that has the new 12-pin connector, or do the non-FE ones have it too? If it's only the FE and everyone else is using 2x or 3x 8-pin, what was the point of it?


----------



## ThrashZone

Shawnb99 said:


> Is it only the FE that has the new 12-pin connector, or do the non-FE ones have it too? If it's only the FE and everyone else is using 2x or 3x 8-pin, what was the point of it?


Hi,
Think it's just an NVIDIA card thing.
Other vendors did the same old 2x 8-pin power.


----------



## Shawnb99

ThrashZone said:


> Hi,
> Think it's just an nvidia card thing
> Other venders did same old 2x8... pin power.


Maybe I'm too stoned but that seems really stupid to me. Why?


----------



## Thoth420

Shawnb99 said:


> Maybe I'm too stoned but that seems really stupid to me. Why?


I am also stoned(always am) and I never could figure it out either. It was 'explained' somewhere but nothing said made any sense.


----------



## ThrashZone

Shawnb99 said:


> Maybe I'm too stoned but that seems really stupid to me. Why?


Hi,
I haven't seen all the FE or reference cards, since it's clouded as to which is which nowadays lol, so it seems to me NVIDIA is just stupid lol


----------



## Thoth420

ThrashZone said:


> Hi,
> I haven't seen all fe cards or reference since these have been clouded as to which is which now days lol so it seems to me nvidia is just stupid lol


You're probably right based on everything they have done lately. I am also stupid however... thankfully not in control of a huge corp. I am a simple farmer.


----------



## GanMenglin

Well, is there any chance of a general high-TDP BIOS now? Since almost no one has the card...


----------



## CallsignVega

Looking at the reviews, the chip is so terrible you hardly get any more performance with more power. The Strix at 480W is only 2% faster than stock at 4K. Now that's pathetic.


----------



## Diverge

I must be in the top 1% of internet shoppers, in terms of success rate of broken launch day products. Got 3080 FE on launch day, and also a 3090 FE on launch day. 

I came across the direct link for the 3080 last night, so I copied and pasted it into a notepad. Then this morning I saw someone post the 3090 SKU in Discord (before people were posting the full links). So I copied the SKU over the 3080's SKU in the link and gave it a try. Got an error, so I went back to manual F5ing. Then I figured I'd try the direct link again, and to my surprise it loaded the checkout page, asking me for credit card info. Got tripped up by the new captchas trying to rush through to complete checkout. Then the NVIDIA store pages were taking so long to load I thought the site was gonna go down. Looked over in Discord and the full direct link was being pasted in chat. Web page still not loading correctly. I got a ding on my phone about a purchase over $500. Opened the credit card app and saw the charge pending (no fraud warnings. Yay!). Looked in my email and saw the order-processing email.


----------



## nyk20z3

lmao at that Strix tax


----------



## Shawnb99

Diverge said:


> I must be in the top 1% of internet shoppers, in terms of success rate of broken launch day products. Got 3080 FE on launch day, and also a 3090 FE on launch day.
> 
> I came across the direct link for the 3080 last night, so I copied and pasted it into a notepad. Then this morning I saw someone post the 3090 sku in discord (before people were posting the full links). So I copied the sku over the 3080's sku in the link, and gave it a try. got an error, so went back to manual F5ing. Then I figured I'd try the direct link again, and to my surprise it loaded the checkout page, asking me for credit card info's. Got tripped up by the new captcha's trying to rush through to complete checkout. Then the nvidia store pages were taking so long to load I thought the site was gonna go down. Look over in discord and the full direct link is being pasted in chat. Web page still not loading correctly. I get a ding on my phone about a purchase over $500. Open creditcard app, and see charge pending (no fraud warnings. Yay!) Look in my email, and see order processing email.



I'd say go buy a lottery ticket but you'd probably win and I'd hate you even more


----------



## BigMack70

CallsignVega said:


> Looking at the reviews, chip is so terrible you hardly get any more performance with more power. Strix at 480w only 2% faster than stock at 4K. Now that's pathetic.


It seems pretty clear that there's no meaningful performance to be gained from XOC/unlocked power this time around. A shunt-modded 520W 3080 was performing +5% over its 320W performance... and the 3090 looks like the same.


----------



## dante`afk

giggles


----------



## Z0eff

Looks like der8auer wasn't using a custom card with a decent power limit ceiling? Is that an accurate statement?
I'm very curious to see what watercooling a 3090 FTW3 Ultra or 3090 STRIX OC can do. Stable 2.1 GHz?


----------



## inutile

The 3DMark scoreboards are always fun to watch after a new GPU comes out. The current world record in Port Royal is Kingpin with a pretty insane 16,673, clearly on LN2 at 2550MHz core. That's more than 3,600 points above the top 2080 Ti at 2700MHz and ~2,500 over the top 3080 at 2430MHz.


----------



## ThrashZone

Thoth420 said:


> You're probably right based on everything they have done lately. I am also stupid however... thankfully not in control of a huge corp. I am a simple farmer.


Hi,
Yeah, NVIDIA is spoiled; every release, no matter how bad, people soak it up.


----------



## nycgtr

Had some wife nag while I was trying to snag one from multiple sites this morning. No dice. Had my EVGA FTW3 sitting in cart, unable to check out, till it let me through with 0 items in the cart lol. If I can get one within the next 2 weeks, I will just go 3080 Strix/FTW till the 20GB 3080 drops, and then move the 10GB card to the wife's box. Was hoping for more of a bump, but saw the reviews after trying to spam-buy. First gen that I'm not going to do SLI/NVLink, I guess, if I go the 80 route. Savings found lol.


----------



## Kalm_Traveler

nycgtr said:


> Had some wife nag while I was trying to snag one from multiple sites this morning. No dice. Had my evga ftw3 sitting in cart unable to check out, till it let me with 0 items in cart lol. If I can get one within the next 2 weeks, I will just go 3080 strix/ftw till the 20gb 3080 drops and then move the 10gb to the wife box. Was hoping for more of a bump but seeing reviews after trying to spam buy. First gen that I am not going to do sli/nvlink I guess if I go the 80 route. Savings found lol.


Eh, I am sitting tight for a bit - the overkill rig is running two shunt-modded Titan RTXs and does more than everything I need it to do... the gaming rig is running one Kingpin 2080 Ti and I never really use it now that COVID is going on (mainly built it for friends to use, but social distancing put the kibosh on that). 

I like the overall performance increase over the Titan RTX, but something tells me we'll see a new Titan in some form closer to Christmas. I can wait.


----------



## CallsignVega

BigMack70 said:


> It seems pretty clear that there's no meaningful performance to be gained from XOC/unlocked power this time around. Shunt modded 520W 3080 was performing +5% over the 320W performance... and 3090 looks like the same.


Ya, you might as well buy the cheapest 3090, slap a water block on it, and call it a day.. 



Kalm_Traveler said:


> eh I am sitting tight for a bit - overkill rig is running two shunt-modded Titan RTX and does more than everything I need it to do... gaming rig is running one Kingpin 2080 Ti and I never really use it now that COVID is going (mainly built it for friends to use but social distancing put a kibosh on that).
> 
> I like the overall performance increase over Titan RTX, but something tells me that we'll see a new Titan in some form closer to Christmas time. I can wait.


15% increase doesn't leave much to like.


----------



## GanMenglin

Here is the PCB layout of the Ventus; is it compatible with the reference design?


----------



## domenic

Hmmm... The Strix now has a confirmed 480W power limit, with at least some pre-orders being taken on blocks. Same price as the FTW3, which has a 40W lower limit and no block availability / pre-orders anywhere as of yet. Yesterday I was in the EVGA camp, but that seems to be changing.


----------



## kot0005

Ordered a Strix OC 3090, hoping it won't arrive in time; I might just get the 3080 20GB or whatever. Paid $2,400 USD for it lol. With a water block it's gonna be a $2,800 card, probably not worth it because the gains aren't really Titan-like compared to an xx80-class card. I think this card should have been the Ti, with 12GB or 20GB of VRAM.


----------



## asdkj1740

GanMenglin said:


> Here is the PCB layout of the Ventus; is it compatible with the reference design?


I think it won't even fit lmao.


----------



## ThrashZone

kot0005 said:


> Ordered a Strix OC 3090, hoping it won't arrive in time; I might just get the 3080 20GB or whatever. Paid $2,400 USD for it lol. With a water block it's gonna be a $2,800 card, probably not worth it because the gains aren't really Titan-like compared to an xx80-class card. I think this card should have been the Ti, with 12GB or 20GB of VRAM.


Hi,
The Titan RTX wasn't a good deal compared to the 2080 Ti either.


----------



## asdkj1740

inutile said:


> The 3DMark scoreboards are always fun to watch after a new GPU comes out. The current world record in Port Royal is Kingpin with a pretty insane 16,673, clearly on LN2 at 2550MHz core. That's more than 3,600 points above the top 2080 Ti at 2700MHz and ~2,500 over the top 3080 at 2430MHz.


Is he using a boost-off BIOS again?


----------



## Kalm_Traveler

CallsignVega said:


> 15% increase doesn't leave much to like.


yeah, I get you - I mean, that's enough to be noticeable at higher resolutions, but I'm old and prefer 3840x1600 @ 144Hz over 4K @ 240Hz. 120-144Hz is about my limit; I can't see added smoothness above that point, and I've been spoiled by ultrawides for too long. 

At this resolution and with the few newish games I play from time to time... can't really see a benefit to swapping to 3090's.


----------



## Jpmboy

Kalm_Traveler said:


> yeah I get you - I mean that's enough to be noticeable at higher resolutions but I'm old and prefer 3840 x 1600 @ 144Hz over 4k @ 240Hz. 120-144Hz is about where I can't see smoothness above that point, and I've been spoiled with ultrawides for too long.
> 
> At this resolution and with the few newish games I play from time to time... can't really see a benefit to swapping to 3090's.


No doubt, it is difficult to "see" a jump in performance (absent nutech like RTX) between generations... outside of benchmarks. You'd need at least 2 gens for SLI RTX Titans to "feel" slow.


----------



## ThrashZone

Jpmboy said:


> No doubt, it is difficult to "see" a jump in performance (absent nutech like RTX) between generations... outside of benchmarks. You'd need at least 2 gens for SLI RTX Titans to "feel" slow.


Hi,
Can't really believe you kept your RTX Titans, seeing doorules returned his pretty quick after seeing the juice just wasn't there for the price jump.


----------



## padman

The Strix OC stock power limit is 390W; the maxed-out slider is 480W.
From: ASUS GeForce RTX 3090 STRIX OC Review
@zhrooms


----------



## Chamidorix

I'm planning a good ol' direct-die phase-change cooling setup for this bad boy. I'm sure sub-0°C and 800+W with some hard voltage mods will get this 3090 up to the 60%+ uplift over the 2080 Ti I desire.

Also, everyone I've seen so far is woefully neglecting cooling the memory.


----------



## Krzych04650

Sold out completely, even here where good models cost like 3x the average monthly salary. Not getting any for a few months, it seems. 

Performance isn't that great though. Even the more favorable reviews like TPU's show a 45% average gain vs the 2080 Ti before considering the zero OC potential of Ampere, so the real-world average OC vs OC is likely less than 35%. In practice this is often enough to let you hit a performance target instead of falling far short; a performance target is kind of a 0-or-1 situation, especially in the most demanding scenarios, so these percentages don't tell the full story. Still, one would hope for more than another ~35% gen-on-gen leap.


----------



## GanMenglin

asdkj1740 said:


> i think it wont even fit lmao.


haha, yeah, now I got a headache


----------



## asdkj1740

GanMenglin said:


> haha, yeah, now I got a headache


i can help, send me your address.


----------



## GanMenglin

asdkj1740 said:


> i can help, send me your address.


----------



## Thoth420

Don't give up hope, fellas. I haven't yet on my friend in Hong Kong scoring me a HOF 3080 Super (or any of their Lego-design models) when they drop, and that is way more of a pipe dream.


----------



## J7SC

Heck... even the 3DMark HoF pages are slow now... I thought my Port Royal score (overall, 2x GPU) in the top 30 would have already been wiped out... but so far, only one Port Royal result with dual 3090s (#1). Compared to my 2x 2080 Ti (stock BIOS, water-cooled), the dual 3090s (looks like stock cooling) are 28.8% faster -> source. Then again, it's just one result for 2x 3090s so far, so not really 'scientific'.

Checked my local retailer (w/ national online)... EVERY iteration of 3090 is out of stock... good thing I decided to wait a bit, because the 'ordering attempt' frustration dripping from these pages is massive


----------



## GanMenglin

asdkj1740 said:


> i think it wont even fit lmao.


It seems the water block for the Trio may fit the Ventus; they're quite similar?


----------



## asdkj1740

GanMenglin said:


> It seems the water block for the Trio may fit the Ventus; they're quite similar?


Forget about that.

Btw, is it just me here using an integrated GPU (HD 630)? And as a real 3090 owner, how dare you ask me about further custom water cooling.


----------



## Sheyster

At this point I will just wait and get either a FTW3 Ultra, an MSI Trio X (apparently very quiet this time around) or the Strix, whichever one I can find first. I do not intend to water cool it.


----------



## ThrashZone

Sheyster said:


> At this point I will just wait and get either a FTW3 Ultra, an MSI Trio X (apparently very quiet this time around) or the Strix, whichever one I can find first. I do not intend to water cool it.


Hi,
Will be watching to see what KPE and ftw3 do seeing they lost one of their best engineers.


----------



## Sheyster

If anyone needs an EVGA FE cable, PM me. I was able to get one before they went out of stock, have it in hand:









EVGA PerFE 12 Cable, Individually Sleeved Cable, Built for NVIDIA GeForce RTX 30 Series Founders Edition, Included Cable Combs


www.evga.com





I'm not gonna buy an FE card.


----------



## J7SC

ThrashZone said:


> Hi,
> Will be watching to see what KPE and ftw3 do seeing they lost one of their best engineers.


...speaking of which, do you know what TiN is doing now? 

...saw this @ Jay2C (and elsewhere) re. VRAM limits on the 10GB 3080... choosing a 3080 10GB or a 3090 24GB NOW, or waiting for a 3080 20GB, is academic at this stage though; (almost) nothing seems available for order


----------



## ThrashZone

J7SC said:


> ...speaking of which, do you know what TiN is doing now ?
> 
> ...saw this @ Jay2C (and elsewhere) re. VRAM limits on 10GB 3080....choosing a 3080 10GB or 3090 24GB NOW or waiting for a 3080 20GB is academic at this stage though; (almost) nothing seems available for order
> 


Hi,
Only thing i read was moved closer to family.


----------



## asdkj1740

EVGA's TiN has been missing for months.

No Z490 Dark article, no RTX 3080 FE article, no RTX 3090 article.


https://xdevs.com/guide/


This guy is insane, probably one of the very few R&D engineers who masters motherboards and GPUs at the same time...


----------



## GanMenglin

asdkj1740 said:


> forget about that.
> 
> View attachment 2459945
> 
> 
> btw is that just me here using integrated gpu (hd630) ? and as a real 3090 owner how dare you asking me about further custom water cooling.


hahaha


----------



## bl4ckdot

Got myself a Strix OC. The 480W will be nice.


----------



## Mooncheese

Krzych04650 said:


> Sold out completely, even here where good models are like 3x average monthly salary  Not getting any for few months it seems.
> 
> Performance isn't that great though, even more favorable reviews like TPU have 45% average gain vs 2080 Ti before considering zero OC potential of Ampere, so real world average OC vs OC is likely less than 35%. In practice this is very enough to make you able to hit performance target instead of being far from it, performance target is kind of a 0-1 situation especially in most demanding scenarios, so all of these percentages are not telling full story, but still one would hope for more than another 35% gen on gen leap.


Try 20-25%. 

Do the math: compare the 2080 Ti at the same power draw (the OC'd Strix on the chart) to both the 3080 and 3090. 

Here, let me help you: 

RDR2 4K
3090 FE Stock: 92 FPS, 3080 FE OC: 90 FPS, 2080 Ti OC: 72 FPS, Percentage change: 27% and 25% 

RDR2 1440p
3090 FE Stock: 134, 3080 FE OC: 131, 2080 Ti OC: 118, Percentage change: 14% and 11%

Rainbow Six Siege 4K
3090 FE Stock: 203, 3080 FE OC: 184, 2080 Ti OC: 161, Percentage change: 26% and 14%

Rainbow Six Siege 1440p
3090 FE stock: 358, 3080 FE OC: 337, 2080 Ti OC: 299, Percentage Change: 20% and 13%

SOTTR 4K
3090 FE Stock: 104, 3080 FE OC: 94, 2080 Ti OC: 82, Percentage Change: 27% and 15%

SOTTR 1440p
3090 FE Stock: 169, 3080 FE OC: 158, 2080 Ti OC: 138, Percentage Change: 22% and 14%


Now, while we're at it, let's look at overclocked 2080 Ti vs overclocked 1080 Ti performance for comparison, including synthetics. Because I'm tired, and because there isn't a meaningful percentage change between the 1080 Ti and 2080 Ti at 4K, I will only look at 1440p below. You're more than welcome to look at 4K on your own. 

RDR2 1440p
2080 Ti OC: 118, 1080 Ti OC: 74, Percentage Change: 59%

Rainbow Six Siege 1440p
2080 Ti OC: 299, 1080 Ti OC: 198, Percentage Change: 51%

SOTTR 1440p
2080 Ti OC: 138, 1080 Ti OC: 95, Percentage Change: 45%
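For anyone who wants to sanity-check these figures, every percentage change above is just (new / old - 1) x 100. A quick sketch using the RDR2 4K numbers from the list (the helper name is mine, not from any review):

```python
# Sanity check of the percentage-change math above (helper name is hypothetical).
def uplift(new_fps: float, old_fps: float) -> float:
    """Percentage gain of new_fps over old_fps: (new / old - 1) * 100."""
    return (new_fps / old_fps - 1.0) * 100.0

# RDR2 4K figures from the list: 3090 FE stock = 92, 3080 FE OC = 90, 2080 Ti OC = 72
print(f"3090 FE vs 2080 Ti OC: {uplift(92, 72):.0f}%")   # ~28%
print(f"3080 OC vs 2080 Ti OC: {uplift(90, 72):.0f}%")  # 25%
```

Small rounding differences from the quoted percentages just come from how each figure was rounded.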

Before going forward, I need to emphasize that, as I've stated repeatedly, GA102 is right at the very edge of the performance-efficiency curve, and more power does not translate linearly into performance. We see this in someone here commenting that the Strix @ 480W yields a 3% improvement over the FE @ 390W, in the fact that GA102-200 manages a whopping 7% overclock, and in all of the figures above comparing an overclocked 2080 Ti to an overclocked 3080: the 20% at 1440p and 30% at 4K averages shrink to 11-14% at 1440p and 14-25% at 4K. I will include both stock and overclocked performance gains between architectures in the following synthetics. We will look at both Time Spy and Port Royal, to be fair to the meaningful RT gains of the new architecture, except for the 980 Ti to 1080 Ti comparison, where Fire Strike is used instead: 

3DMark Timespy GPU
3090 FE: 20,144 (390w) Overclocked: (no Timespy GPU data here, but it seems 3090 FE overclocked is good for another 5%)

3080 FE: 17,832 (320w)

2080 Ti FE: 13,610 (260w)

2080 Ti FE OC on air at same power draw as 3080: 15,500 (320w)

1080 Ti FE: 9521 (260w) (earlier benches have this card at 8660, they may have re-benched this with newer drivers)

Percentage change normalizing for power output: 

1080 Ti FE to 2080 Ti FE: 43% (with newer bench, 57% with original bench of 8660) 

2080 Ti FE to 3080 FE @ 320w: 15%

2080 Ti FE to 3090 FE @ comparable power draw (373w vs 390w): 20% and 25% (3090 overclocked) 


GTX 1080 Ti vs 980 Ti @ 300w (3DMark site is slow right now, but it's 20k vs 30k, or 50%)

Port Royal: 2080 Ti @ 373w = 10,500, 3080 @ 320w = 11,450, 3090 @ 390w = 13,500, percentage change: 9% and 30% 

The 3090 shows a meaningful gain in RT, but imagine this metric with a 628mm² 7nm TSMC die: about 30-40% higher for the same amount of money. But don't worry, NGreedia saved $26 per yield by going with Samsung 8nm EUV. And no, the 7nm queue wasn't full and wasn't forecast to be full (current 7nm TSMC capacity is at 30%), because crApple was switching its demand to 5nm and freeing up that much capacity. 

This is a good one, I think HyperMatrix made this statement: 

"The 3080 is unrivaled in efficiency, we have never seen a gain of 25% over the outgoing 80 Ti card at the same power draw". 

GTX 1080 = 23% faster than 980 Ti @ 36% less power draw. 

GTX 1080 Ti = 50% faster than 980 Ti @ the same power draw. 

AND THEY ONLY DROPPED THE NODE SIZE 43%. 

12nm FFT to 8nm EUV = 50%. 

And we are celebrating this mediocrity? 

Additionally, not only is this a mediocre performance uplift considering the node size halved, but imagine the uplift between a 425mm² 3080 (same performance as now) and a 628mm² 3090 on 7nm TSMC. 

10%. And most of you will buy this crap, telling NV: "hey, that's great, keep making garbage and slap whatever price tag on it; as long as it's 10% faster than the next fastest card I will buy it at any price, because I have more money than sense and don't understand that I've created this very situation by buying your overpriced Titan Xp and your overpriced 2080 Ti at full price and then some."

It's just disgusting honestly. 

And please save your "you can't afford it, so you're complaining" nonsense. I could buy this if I wanted to, but with the water block bringing the total to $1,850 after taxes (FE + Bykski block), I would probably have to cancel my HP Reverb G2 pre-order. And for what, a 25% gain at 1440p? Maybe 45% with DLSS on? 

This is garbage. 

I will wait to see what AMD has in store. 

Ampere is basically Nvidia's Comet Lake moment, where the value proposition becomes absurd. Like Comet Lake, Ampere is expensive, on an inferior node, and needs an incredible amount of power for marginal gains over what came before it. Meanwhile, AMD is taking over as the CPU industry leader, if they haven't already. 

$26 per yield for this mediocrity. That's all NV is saving by going with 8nm EUV over 7nm TSMC, and they knew that with slick marketing, a dumbed-down consumer base, and a growing niche "boutique" market demographic (the 0.01%, the only demographic that can buy this crap), they could make whatever and sell it. 

They literally made you whatever, and you're buying it and cheering all along the way.

If AMD releases a card (full 80-CU Navi 21) that runs faster than the 3080 with 16GB of video memory @ $1000, NV is done. 

With any console port, which 90% of PC games are, NV is going to have to implement its exclusive technologies on top of an engine that has been heavily optimized around RDNA 2, the architecture of Big Navi. The 3080 may still be faster on paper, in 3DMark Time Spy and Port Royal, or it may be slower, but watch Big Navi run the console games, I don't know, 25% faster, because console-based path tracing, AI super-sampling, and "RTX IO" will be done natively and therefore with less driver and API involvement. 

See, this is how it works: when you keep buying whatever a manufacturer makes, you invariably end up in a situation where they rest on their laurels and don't push the envelope, because they know they have a captive market that will buy whatever is shoveled their way. 

See: Intel

Jim from AdoredTV is right. This is Nvidia's dumbest decision, because now AMD is going to do to Nvidia what they did to Intel. We are going to see a HUGE upset in the discrete GPU industry this year. 

And guess what? 

See Intel? They are getting off the 14nm node. 

They have to. 

Nvidia? 

They will be forced to develop Hopper early.

I don't bear any allegiance to any brand or flag. As a combat veteran thrown under the bus by my country long ago I learned that lesson the hard way. 

Stop being a fan boy. 

Don't buy this garbage. 

Compel NV to produce a quality product. 

Remember, NV is saving $26 per yield with 8nm EUV. 

Imagine the 3090 with a 628mm² 7nm TSMC die that is 30-40% faster than the 425mm² die (which is as fast as the current GA102-200 628mm² 8nm EUV die). 

Same price. 

40% faster, easily. 

Now THAT is a card worthy of a moniker denoting dual-GPU; THAT would be a 3090.


----------



## BigMack70

bl4ckdot said:


> Got myself a Strix OC  The 480W will be nice


How/where? Everywhere I saw, they're not releasing until October.


----------



## J7SC

bl4ckdot said:


> Got myself a Strix OC  The 480W will be nice


toutes nos félicitations (which hopefully means Congrats !)


----------



## shiokarai

bl4ckdot said:


> Got myself a Strix OC  The 480W will be nice


It's not released yet, though...


----------



## bl4ckdot

BigMack70 said:


> How/where? Everywhere I saw, they're not releasing until October.


LDLC, a French retailer.








ASUS ROG STRIX GeForce RTX 3090 O24G GAMING (90YV0F93-M0NM00) - www.ldlc.com






J7SC said:


> toutes nos félicitations (which hopefully means Congrats !)


Thanks ! It does mean that ^^


shiokarai said:


> It's not released yet, though...


It will be their first shipment; I have to wait until early October. Not many were available, it was only up for 5 minutes.


----------



## zhrooms

padman said:


> Strix OC Stock Power limit is 390W, maxed out slider is 480W.


If anyone can find proof of the Strix OC's 480W limit, that'd be nice. The *TechPowerUp article is wrong* (they got the Gaming X Trio wrong by 5W), so it's likely not exactly 480. They calculate it from the percentage slider, which is unreliable (a displayed 10% can really be 9.71%); they need to read the NVIDIA BIOS section of GPU-Z for the correct BIOS value.

Strix OC is between 470 and 490. So, if you find actual proof please share it in this thread.
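To illustrate the rounding problem, here is a quick sketch with hypothetical numbers, assuming the tool displays the power-limit ratio rounded to the nearest whole percent:

```python
# Sketch of why "slider % x default wattage" misreports the BIOS power limit.
# Assumption: the overclocking tool shows the ratio rounded to the nearest
# whole percent, so a displayed value only pins the true limit to a range.

def true_limit_range(default_w, shown_pct):
    """Return the (low, high) wattage range consistent with a rounded
    whole-percent slider reading."""
    lo = default_w * (shown_pct - 0.5) / 100.0
    hi = default_w * (shown_pct + 0.5) / 100.0
    return lo, hi

# Hypothetical card: 350W default, slider maxing out at a displayed 109%.
lo, hi = true_limit_range(350, 109)
print(f"true limit is anywhere in {lo:.2f}-{hi:.2f} W")  # 379.75-383.25 W
# A BIOS limit of 380W sits inside that range, which is why reading the
# NVIDIA BIOS section of GPU-Z beats back-calculating from the slider.
```

The point: multiplying a rounded percentage back out can land 5W or more away from the exact limit stored in the BIOS.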

*Edit:* Got proof, 94.02.26.C0.16 @ 390/480W Strix OC


----------



## Chamidorix

J7SC said:


> ...speaking of which, do you know what TiN is doing now ?


Here is his last post on the EVGA forums ("Thank you TiN for all you have contributed!" - EVGA Forums):
"
Thanks all for best wishes. I left for personal reasons, also switching the industry altogether to learn new things and explore different ideas. It was great fun to spend 8.5 years in lab with Vince, running thru countless tons of LN2.

Sadly I wouldn't be able to continue guides and teardowns of PC hardware on xDevs.com, which I enjoyed a great deal, as there is no budget for this. Like some famous people say "Want to learn something well - try to explain and teach it"... But its one thing to use some ES lab samples to make a content, and completely different to spend thousands for one article.

I have just noticed that my fancy forum rights are revoked, so I'm just average user for PC world. My PC is aircooled and not even overclocked 
"

ALSO

Frame Chasers just put out a great video with more or less normalized temperature and voltage between a 2080ti and 3080. Pretty good data showing the poor scaling of GA-102:






TL;DW


----------



## mirkendargen

zhrooms said:


> If anyone can find proof of Strix OC 480W that'd be nice, TechPowerUp article is wrong, it's very likely NOT 480, because they count in % which is wrong, they have to look at NVIDIA BIOS section of GPU-Z for the correct BIOS reading.
> 
> Strix OC is between 470 and 490. So, if you find actual proof please share it in this thread.


Whatever its power limit is, you can see it holding 125MHz above the Gaming X Trio pretty solidly under the same review methodology, and when we're talking about 10-20% above a 3080, another 6% sure isn't nothing.


----------



## changboy

So it seems I'll keep playing with my old R9 290 for now  soooo tiredddd.
All day I never once saw a buy option on any site in Canada, so boring.
But all I see when I look outside is trees and trees and trees lol.


----------



## kx11

I hate Gigabyte so much that when I found one 3090 on Newegg I ignored it; now it's gone


----------



## Dickenud3L

Will there be a modded BIOS for the Gaming X Trio 3090?
A waterblock?

It only has a 380W power limit


----------



## J7SC

Chamidorix said:


> Here is his last post on evga forums Thank you TiN for all you have contributed! - EVGA Forums:
> "
> Thanks all for best wishes. I left for personal reasons, also switching the industry altogether to learn new things and explore different ideas. It was great fun to spend 8.5 years in lab with Vince, running thru countless tons of LN2.
> 
> Sadly I wouldn't be able to continue guides and teardowns of PC hardware on xDevs.com, which I enjoyed a great deal, as there is no budget for this. Like some famous people say "Want to learn something well - try to explain and teach it"... But its one thing to use some ES lab samples to make a content, and completely different to spend thousands for one article.
> 
> I have just noticed that my fancy forum rights are revoked, so I'm just average user for PC world. My PC is aircooled and not even overclocked
> "
> (...)


Thanks ! A few messages 'between the lines', perhaps, in TiN's comment


----------



## asdkj1740

Chamidorix said:


> Here is his last post on evga forums Thank you TiN for all you have contributed! - EVGA Forums:
> "
> Thanks all for best wishes. I left for personal reasons, also switching the industry altogether to learn new things and explore different ideas. It was great fun to spend 8.5 years in lab with Vince, running thru countless tons of LN2.
> 
> Sadly I wouldn't be able to continue guides and teardowns of PC hardware on xDevs.com, which I enjoyed a great deal, as there is no budget for this. Like some famous people say "Want to learn something well - try to explain and teach it"... But its one thing to use some ES lab samples to make a content, and completely different to spend thousands for one article.
> 
> I have just noticed that my fancy forum rights are revoked, so I'm just average user for PC world. My PC is aircooled and not even overclocked
> "


i want to cry................
i have been pressing F5 every day since the launch of Z490 back in April, first for the Z490 Dark and later for the 3080 FE and 3090 FE..........................

if TiN is just an average user in the PC world, then I am just dust.


----------



## dante`afk

one can only hope RDNA2 is the dark horse in this division now.

embarrassing.


----------



## inutile

EVGA GeForce RTX 3090 KINGPIN reaches 2.58 GHz clock - VideoCardz.com

Vince "K|NGP|N" Lucido now holds the world record in 3DMark Port Royal with 16673 graphics points, achieved on EVGA's fully-custom RTX 3090 KINGPIN card.

videocardz.com


----------



## Mooncheese

Hoggle said:


> It’s a lot for the card, but it’s also a card that will probably last over a generation. In four to six years a 3090 would still be a great card.



Prediction:

NV will be forced to develop 5nm Hopper next year after AMD upsets the GPU market this year.

What upsetting the GPU market looks like:

A $1000 GPU (Navi 21, 80 CUs, 16GB of video memory, ~300W) that is faster than the 3080 and runs all the next-gen console ports anywhere from 25-35% (or more) faster, because those engines were built from the ground up around RDNA 2. Nvidia would then have to come in after the fact with its own versions of path tracing, AI super-sampling, and a rapid storage retrieval GPU pipeline (RTX IO), all facilitated through the driver, and we know how efficient that looks right now (compare anything to Vulkan, e.g. Red Dead Redemption 2, where Vulkan is ~10% faster than DX12).

Also, said consoles will have path tracing and AI supersampling and will be native to the engines developed around RDNA 2.

Basically what I'm talking about is something as fast or faster than the 3090 for $1000 (in console ports) in November.

If that happens, and Nvidia has their Comet Lake moment where people just stop buying iterative garbage because the competitor has a better product, well, we will see Hopper next year and Hopper on 5nm will completely blow 8nm EUV Ampere out of the water.

It will be Nvidia's come back moment.

I don't recommend anyone buy a 30 series card until we see what Big Navi looks like.

If you absolutely must buy an Nvidia card (say you have a G-Sync panel, like my AW3418DW), then wait until November; Nvidia may actually release a 16-20GB full GA102-300 (non-cut-down) card for $1000 in response to a 6900 XT that is faster than the 3080.

They will be forced to.

Think about it.

They can't squeeze anything in between GA-102-200 and GA-102-300 because there is only a ~15% difference there.
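That "~15%" can be sanity-checked against the published core counts; a quick back-of-the-envelope (spec-sheet numbers, not a benchmark) shows the paper gap is a bit larger, with power limits eating part of it:

```python
# Back-of-the-envelope: paper gap between GA102-200 (3080) and GA102-300
# (3090) from published CUDA core counts. Real-world gaming gaps land lower
# (~10-15%) because both parts run into their power limits.

cuda_3080 = 8704
cuda_3090 = 10496

paper_gap_pct = (cuda_3090 - cuda_3080) / cuda_3080 * 100
print(f"3090 has {paper_gap_pct:.1f}% more CUDA cores than the 3080")
```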

They will be forced to put a "3080 Super" or "3080 Ti" up on offer in response and that's fine BECAUSE THE 3090 IS RIDICULOUSLY OVER-PRICED.

The profit margin on this must be astronomical, especially for the AIB variants that don't have a $180 cooler (Igor's Lab estimates the 3080 FE cooler costs $155; how much do you think the 25% larger 3090 cooler costs?).

This is price-gouging iterative garbage at its finest.

DO NOT BUY THIS GARBAGE.

EVEN IF YOU'RE IN THE MARKET FOR THE 3080, WAIT UNTIL NOVEMBER OR YOU WILL REGRET IT. 

NV can't squeeze a card in between the 3080 and 3090, so they will have to release a card that performs as fast as the 6900 XT if the 6900 XT winds up being faster than the 3080.

Here's how Navi 21 is shaping up: 




Relevant comparison to Quadro 6000.


----------



## Jpmboy

Tin retires from the business, time to frame my OG EVBOT!


----------



## HyperMatrix

@andrews2547 Can we please get a mod to remove Mooncheese from this thread? He's not an owner, nor does he have any intention of becoming an owner. This is an owner's thread. I have him on ignore but sometimes his post still pops up and it's just pure garbage every single time talking about why people shouldn't buy the card. Doesn't belong in this thread.


----------



## Sheyster

zhrooms said:


> If anyone can find proof of Strix OC 480W that'd be nice, *TechPowerUp article is wrong*, it's very likely NOT 480, because they count in % which is wrong (it can be 9.71% which makes it not 480), they have to look at NVIDIA BIOS section of GPU-Z for the correct BIOS reading.
> 
> Strix OC is between 470 and 490. So, if you find actual proof please share it in this thread.


You know W1zzard personally wrote that TPU review, right? I can't imagine he'd make a mistake like that.


----------



## Sheyster

HyperMatrix said:


> @andrews2547 Can we please get a mod to remove Mooncheese from this thread? He's not an owner, nor does he have any intention of becoming an owner. This is an owner's thread. I have him on ignore but sometimes his post still pops up and it's just pure garbage every single time talking about why people shouldn't buy the card. Doesn't belong in this thread.


+1

Although it will disrupt the anti-Nvidia dissertation he's working on, please ban him from here ASAP. He is bringing zero value to this thread. Thank you in advance!


----------



## inutile

Sheyster said:


> +1
> 
> Although it will disrupt the anti-Nvidia doctorate he's working on, please ban him from here ASAP. He is bringing zero value to this thread. Thank you in advance!


It wouldn't be _quite_ as annoying if his posts weren't so incredibly long. I mean I strongly disagree with the content of his posts as well and have no interest in reading about unrelated conspiracy theories in a GPU thread, but I put him on ignore mostly because it was making scrolling through the thread a real chore on mobile.


----------



## Stephen.

My condolences to everyone who wanted one today, and couldn't get one including myself.

Looks like Vince was able to clock it to oblivion with a little liquid nitrogen soup. Curious to see what the supposed (not released yet) top-tier Strix will do next to this.

RTX 3090 2.58ghz Clock


----------



## doom26464

I'm not going to be a 3090 owner, but I will grab a 3080. Nonetheless, I still like seeing info on the 3090; it's the same die, so a lot of the info in here carries over to the 3080. 

That said, +1 on getting rid of Mooncheese already. He's like the guy in a bathrobe with no pants on screaming in the streets: at first it's funny, now it's just sad. His wall-of-text conspiracy theories don't help.


----------



## mirkendargen

Stephen. said:


> My condolences to everyone who wanted one today, and couldn't get one including myself.
> 
> Looks like Vince was able to clock it to oblivion with a little liquid nitrogen soup. Curious to see what the supposed ( not released yet ) top tier Strix will do next to this.
> 
> RTX 3090 2.58ghz Clock





https://www.3dmark.com/spy/14071929



Seems to be looking pretty good: 2190MHz average clock across the run.


----------



## HyperMatrix

mirkendargen said:


> https://www.3dmark.com/spy/14071929
> 
> 
> 
> Seems to be looking pretty good, 2190Mhz average clock across the run.


That has to be on LN2. You can't do 2190MHz with the 3090 under load unless you have a golden chip, unlocked power bios, and chilled water loop. Do we have any information on who did the run and their setup?


----------



## originxt

Curious, is port royal the new standard we go off now or still TSE?


----------



## Stephen.

mirkendargen said:


> https://www.3dmark.com/spy/14071929
> 
> 
> 
> Seems to be looking pretty good, 2190Mhz average clock across the run.


Very nice. Still, in a way I'm glad I didn't get one today, mostly because I want to see what Heatkiller or Aqua Computer are going to focus on in regards to EVGA or ASUS water-cooling options; then I'll go from there. I know Heatkiller is doing an XC3 block to start, after the reference blocks are out, and the rest is to be determined. I think the same goes for Aqua Computer too.


----------



## zhrooms

Sheyster said:


> You know W1zzard personally wrote that TPU review, right? I can't imagine he'd make a mistake like that.


😂 What makes you think he has any credibility? 95% of reviewers have no idea what they're doing half the time, and that includes W1zzard.

I know way more than any of these reviewers (like it or not that's just fact), there's bullshit everywhere, the difference is I can spot it while you can't, you are part of the problem, *no one calls them out on their mistakes because no one knows a mistake has been made*, and they (reviewers) never learn, and someone like me never visits those sites because I know they're low tier effort. So now there's a circlejerk going on spewing incorrect numbers, and when I call that **** out (like I just did now, asking for actual proof, not a guess) I basically get attacked (which you just did, you implied I must be wrong and W1zzard is.. a wizzard, and can do no wrong).

(Some further rambling) I prefer to do all testing myself or rely only on data that can't be manipulated in any way (like various benchmark software) so it's not influenced by any party. Or when I can't, I look at 20+ reviews and try to spot the inaccuracies between them, I recently called out some shady numbers on SweClockers, their Time Spy Graphics Score was almost 500 points above what other reviewers got, even with the same CPU (and clock), Memory and OS (none of which matters for Graphics Score), but when I did that, I was basically told "who cares?", can still find the word for word response if needed, but essentially they already pushed the review out, it got the clicks and now nothing matters, they got "paid" regardless if their testing was done correctly or not (the only variance in testing was Legacy vs DCH drivers, I pointed that out but they never bothered checking it out because why would they?).

And there are serious consequences to not bend over backwards for partners, most recent example is MSI shunning Gamers Nexus completely, they're not getting any review samples, because he wouldn't push their bullshit marketing. Basically can't trust anyone in this business, way too much money involved and reliance on brands. (Steve is one of the very very few legit dudes, no one is perfect though so there's been some debatable decisions but even then, mostly amazing work).

Great example from today: five reviewers (so far) from around the world pushed TUF reviews out, and *none* of them mentioned the power limit, which is the most important factor for the 3090; it's incredibly power starved stock. So I can't fill in the front-page TUF information yet, even though we have 5 reviews. The amount of incompetence is mind blowing; it takes 5 seconds to open GPU-Z and read the actual data from the BIOS. Instead, someone like *W1zzard just guesses by taking the % power slider directly from Afterburner and multiplying it out*, which produces the wrong number, because a 10% slider does not have to mean 10.0%; it can be 9.71%, rounded up or down. So the TechPowerUp power limit sections in the reviews are wrong; if that's not embarrassing I don't know what is, truly basic **** that should be impossible to get wrong, yet here we are. The Gaming X Trio has a *380W* maximum power limit, not 385W as W1zzard put it. I have three sources showing GPU-Z images and board load confirming 380W (±3W).


----------



## inutile

HyperMatrix said:


> That has to be on LN2. You can't do 2190MHz with the 3090 under load unless you have a golden chip, unlocked power bios, and chilled water loop. Do we have any information on who did the run and their setup?


JayzTwoCents did a 2130MHz Port Royal run on an FE just by using cardboard to duct AC into a cardboard box covering his test bench. Sub-ambient air blowing over his bench, obviously, but it got to 38°C under load, which doesn't seem too crazy far off what you could achieve on water.



https://www.3dmark.com/pr/328192



Also, that 2190MHz run was supposedly on a Strix, but I agree that it was probably on a chilled loop or similar.


----------



## Jpmboy

originxt said:


> Curious, is port royal the new standard we go off now or still TSE?


depends on whether you want to test RTX (PR) or not (TSE)


----------



## HyperMatrix

inutile said:


> JayzTwoCents did a 2130mhz run of Port Royal on an FE just by using cardboard to duct AC into a cardboard box covering his test bench. Sub ambient air blowing over his bench obviously, but it got to 38C underload which doesn't seem too crazy far off what you could achieve on water.
> 
> 
> 
> https://www.3dmark.com/pr/328192


That's actually impressive and gives me renewed hope in these cards. If EVGA ever gets stock and, more importantly, ever gets their website working. I was one click away from a confirmed order this morning before it crashed. 

Also screw water cooling. If it doesn't work, I'll just duct tape an AC unit to the side of my case. Haha.


----------



## dewkethp

zhrooms said:


> If anyone can find proof of Strix OC 480W that'd be nice, *TechPowerUp article is wrong*, it's very likely NOT 480, because they count in % which is wrong (it can be 9.71% which makes it not 480), they have to look at NVIDIA BIOS section of GPU-Z for the correct BIOS reading.
> 
> Strix OC is between 470 and 490. So, if you find actual proof please share it in this thread.


----------



## Jpmboy

HyperMatrix said:


> That's actually impressive and gives me renewed hope in these cards. If EVGA ever gets stock and more importantly, every gets their website working. I was one click away from a confirmed order this morning before it crashed.
> 
> Also screw water cooling. If it doesn't work, I'll just duct tape an AC unit to the side of my case. Haha.


Yeah the FTW3 looks very tempting!


----------



## mirkendargen

HyperMatrix said:


> That's actually impressive and gives me renewed hope in these cards. If EVGA ever gets stock and more importantly, every gets their website working. I was one click away from a confirmed order this morning before it crashed.
> 
> Also screw water cooling. If it doesn't work, I'll just duct tape an AC unit to the side of my case. Haha.


Just use a PCIe ribbon cable and mount the card to the front of a window unit!


----------



## zhrooms

dewkethp said:


> View attachment 2459961


Thanks, perfect. Added 390/480W.


----------



## inutile

originxt said:


> Curious, is port royal the new standard we go off now or still TSE?


Port Royal is good (for Nvidia now, and AMD eventually) because it's more or less a pure GPU bench where a CPU with a bajillion cores isn't going to massively increase your score.


----------



## asdkj1740

zhrooms said:


> 😂 What makes you think he has any credibility? 95% of reviewers have no idea what they're doing half the time, that includes W1zzard.
> 
> I know way more than any of these reviewers (like it or not that's just fact), there's bullshit everywhere, the difference is I can spot it while you can't, you are part of the problem, *no one calls them out on their mistakes because no one knows a mistake has been made*, and they (reviewers) never learn, and someone like me never visits those sites because I know they're low tier effort. So now there's a circlejerk going on spewing incorrect numbers, and when I call that **** out (like I just did now, asking for actual proof, not a guess) I basically get attacked (which you just did, you implied I must be wrong and W1zzard is.. a wizzard, and can do no wrong).
> 
> (Some further rambling) I prefer to do all testing myself or rely only on data that can't be manipulated in any way (like various benchmark software) so it's not influenced by any party. Or when I can't, I look at 20+ reviews and try to spot the inaccuracies between them, I recently called out some shady numbers on SweClockers, their Time Spy Graphics Score was almost 500 points above what other reviewers got, even with the same CPU (and clock), Memory and OS (none of which matters for Graphics Score), but when I did that, I was basically told "who cares?", can still find the word for word response if needed, but essentially they already pushed the review out, it got the clicks and now nothing matters, they got "paid" regardless if their testing was done correctly or not (the only variance in testing was Legacy vs DCH drivers, I pointed that out but they never bothered checking it out because why would they?).
> 
> And there are serious consequences to not bend over backwards for partners, most recent example is MSI shunning Gamers Nexus completely, they're not getting any review samples, because he wouldn't push their bullshit marketing. Basically can't trust anyone in this business, way too much money involved and reliance on brands. (Steve is one of the very very few legit dudes, no one is perfect though so there's been some debatable decisions but even then, mostly amazing work).
> 
> Great example from today, five reviewers (so far) from around the world pushed TUF reviews out, *none* of them mentioned the power limit, which is the most important factor for the 3090, it's incredibly power starved stock. So I can't fill in the front page TUF information yet, even though we have 5 reviews, the amount of incompetence is mind blowing, it takes 5 seconds to open GPU-Z and read the actual data from the BIOS. Instead someone like *W1zzard just guesses by taking the % power slider directly from Afterburner, and multiplying it*, which produces the wrong number, because 10% slider does not have to mean 10.0%, it can be 9.71%, then averaged up or down. So the TechPowerUp power limit section on the reviews are wrong.. if that's not embarrassing I don't know what is, truly basic **** that should be impossible to get wrong, yet here we are. Gaming X Trio has a *380W* maximum power limit, not 385W as W1zzard put it. I have three sources showing GPU-Z images and board load confirming 380W (~+-3).


You don't have to be that mean. It's not hard to ask for proof (a GPU-Z screenshot or a command-line readout), even from TPU's W1zzard.


----------



## HyperMatrix

Jpmboy said:


> Yeah the FTW3 looks very tempting!


At this point, I'm torn between the FTW3 at 440W, with the hope that a 520W KPE or unlocked power BIOS flash becomes possible, and the ROG Strix for the same price but with a guaranteed 480W power limit. Really, anything that can give me a minimum 2.1GHz sustained under load and under water is fine with me. But there have been wildly different OC numbers coming out now. Look at der8auer and his water-cooled Gigabyte 3090 not even able to hit 2GHz.


----------



## zhrooms

asdkj1740 said:


> dont have to be that mean. it is not hard to ask for a proof (gpuz screenshot//cmd command) even from TPU W1zzard.


It actually is; it's next to impossible to get hold of him. I tried 6 months ago about the GALAX XOC leak and had to go through email. It was not fun; I'm not doing that again.


----------



## asdkj1740

HyperMatrix said:


> At this point, I'm torn between the FTW3 at 440W with the hopes of a 520W KPE or unlocked power bios flash being a possibility, or the ROG Strix for the same price but with a guaranteed 480W power limit. Really anything that can give me a minimum 2.1GHz sustained under load and under water is fine with me. But there have been wildly different OC numbers coming out now. Look at Derbauer and his water cooled Gigabyte 3090 not even able to hit 2GHz.


After TiN left, I'm afraid there won't be an unlocked BIOS publicly available for the Kingpin card on xdevs.com, where TiN used to post them.

On stock air cooling you won't be able to get 2100MHz stable at 80°C.

You'll hit both the power limit and the temp limit, not just one.


----------



## domenic

I dropped a post on the Aqua Computer English forum trying to probe them about their product plans for the 3090, specifically asking about their history of building active cooling into their blocks for the back side of PCBs and their plans for doing the same with the 3090. Based on our numbers within this forum, I think we have an opportunity to shape their plans. I urge you to jump into that forum and make your suggestions and opinions known, so perhaps we can help influence their product with our feedback.



Roadmap for Nvidia 30 Series waterblocks? - English forum - Aqua Computer Forum


----------



## asdkj1740

zhrooms said:


> It actually is, next to impossible to get hold of him, as I did that 6 months ago about GALAX XOC leak, had to go through email, was not fun, not doing that again.


No offense, but I have been reaching him directly on the TPU forum without any difficulty.
I don't know what the GALAX XOC leak 6 months ago was, so I can't comment on that.

As long as he still has the cards on hand, I don't think it's impossible to ask for proof of the power limit.


----------



## Sheyster

zhrooms said:


> Thanks, perfect. Added 390/480W.


So W1zzard was right eh? LMAO. Too bad you wrote that thesis about... what? I don't even remember now since I read like 2 sentences of it.


----------



## HyperMatrix

Sheyster said:


> So W1zzard was right eh? LMAO. Too bad you wrote that thesis about... what? I don't even remember now since I read like 2 sentences of it.


At least he was humble about it.



zhrooms said:


> I know way more than any of these reviewers (like it or not that's just fact)


----------



## asdkj1740

Sheyster said:


> So W1zzard was right eh? LMAO. Too bad you wrote that thesis about... what? I don't even remember now since I read like 2 sentences of it.


Take it easy, we just want accurate info.

He's not 100% wrong; I have seen so much reviewer bull, and some reviews are crazy enough to contradict the very photos the reviewer posted.


----------



## asdkj1740

NVIDIA GeForce RTX 3090 and RTX 3080 reference board designs pictured - VideoCardz.com

Both NVIDIA Ampere reference boards have been pictured. NVIDIA is introducing two graphics card models based on the PG132 reference board design: the RTX 3090 and the RTX 3080. Both cards will also be available with NVIDIA Founders Edition-only board designs.

videocardz.com





i think the so-called 3090 reference PCB posted on VideoCardz is wrong, given the many PCB breakdowns we have now;
the VCZ version should have one more phase on the main vcore rail.


----------



## originxt

Has an XC card from EVGA ever received an unlocked BIOS, or has that only been available for the higher-end cards? I can't fit any of the higher-end cards with a waterblock in my case unless I mod the side panel to let it hang out 2-3cm, or go full NOSIDEPANELGANG, which is definitely not optimal for dust prevention lol.


----------






## changboy

kx11 said:


> i hate Gigabyte so bad i found 1 3090 in Newegg and ignored it, now it's gone


Sometimes I think that too, but once I pay with PayPal on Newegg they remove it from my cart, and the message is: "the item is no longer in your cart." It seems some people have a kind of reservation and can make it to the end. I tried around 10 times, and after I paid, no more item lol. Anyway, I've decided to wait for the one I really want and not buy the first card I can. It may take 6 months, but at this price I'd rather have a card I'll like than a card I'll regret.


----------



## J7SC

Stephen. said:


> My condolences to everyone who wanted one today, and couldn't get one including myself.
> 
> Looks like Vince was able to clock it to oblivion with a little liquid nitrogen soup. Curious to see what the supposed ( not released yet ) top tier Strix will do next to this.
> 
> RTX 3090 2.58ghz Clock


Yeah, 'not a bad' ;-) starting salvo by KingPin... The Galax HOF OC Lab card still hasn't made an appearance anywhere, afaik, though someone like 'Rauf' must be busy with it for a return volley. 

Also still hoping to see a 3x8-pin Aorus XTR 3090 WB tested (non-LN2), as I would consider those, but that may take a while longer


----------



## Avacado

mirkendargen said:


> https://www.3dmark.com/spy/14071929
> 
> 
> 
> Seems to be looking pretty good, 2190Mhz average clock across the run.


Very nice results. Still not enough for me to drop the dough on them though. I'll be waiting for the 4 series. I hope you all get one and overclock the ever living **** out of them. 








I did the math.
Your 3090 ASUS SLI yielded a 45% better score, and you paid 39% more to achieve it. You paid $5.89 per point; I paid $5.28 per point. The money is not worth it to jump from the 98th percentile to the 99th. Was fun doing the comparison.
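For anyone who wants to redo this kind of comparison with their own numbers, the cost-per-point arithmetic is trivial (the prices and scores below are made-up placeholders, not the actual results from this exchange):

```python
# Minimal cost-per-point comparison helpers. All figures here are
# hypothetical placeholders for illustration.

def cost_per_point(price_usd, score):
    """Dollars paid per benchmark point."""
    return price_usd / score

def pct_better(a, b):
    """How much better (in percent) score a is than score b."""
    return (a - b) / b * 100

single_price, single_score = 1500, 10_000   # hypothetical single card
sli_price, sli_score = 3300, 14_500         # hypothetical SLI pair

print(f"single: ${cost_per_point(single_price, single_score):.3f}/pt")
print(f"SLI:    ${cost_per_point(sli_price, sli_score):.3f}/pt")
print(f"SLI scores {pct_better(sli_score, single_score):.0f}% higher")
```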


----------



## mirkendargen

Avacado said:


> Very nice results. Still not enough for me to drop the dough on them though. I'll be waiting for the 4 series. I hope you all get one and overclock the ever living **** out of them.
> View attachment 2459974
> 
> I did the math.
> Your 3090 Asus SLI yielded a 45% better score, You paid 39% more to achieve that score than me. You paid 5.89$ per point, I paid 5.28$ per point. The money is not worth it to jump from the 98th% to the 99th%. Was fun doing the comparison.


Just to clarify, that isn't me; it's just a result from ASUS (assumed Strix) cards for the purpose of comparison with the other cards that are already available (and out of stock). I'm not in the business of arguing whether one should or shouldn't buy a 3080/3090; I'm interested in discussing, among people who have decided to buy a 3090, which 3090 to buy (the only thing this thread should actually be about at this point).


----------



## Avacado

mirkendargen said:


> Just to clarify, that isn't me, it's just a result from Asus (assumed Strix) cards for the purpose of comparison with the other cards that are already (out of stock) available. I'm not in the business of arguing whether one should/shouldn't buy a 3080/3090, I'm interested in discussing amongst people who have decided to buy a 3090 which 3090 to buy (the only thing this thread should actually be about at this point).


Yea, just to clarify, I said that I hope you all get one, overclock the hell out of them and did it for fun just to compare.


----------



## Thoth420

kx11 said:


> i hate Gigabyte so bad i found 1 3090 in Newegg and ignored it, now it's gone


I feel so attac!! 

Oh, update on iBuyPower 3090s in their builds: the first few orders processed got the couple of FEs they had. They are working with a pool of Gigabyte Eagle, Gigabyte OC, and MSI Ventus cards at the moment. I was informed that my build was marked to get the Gigabyte OC. Aside from the Strix, there wasn't anything from the day-1 batch I wanted more. I am a bit sour on EVGA; nothing they did wrong, the aesthetic just doesn't work for me lately.

This card is a placeholder until I either block it or find a white shroud to match the build theme.


----------



## J7SC

originxt said:


> Curious, is port royal the new standard we go off now or still TSE?


Port Royal has a fundamental advantage in that it is almost independent of the CPU (i.e. Intel vs AMD), or at least it is the 3DMark bench where processor differences have the least impact. I did this Port Royal run with a TR 2950X; while I love that CPU for a variety of other reasons, it is not the world's best overclocker.


----------



## stefxyz

I really don't know what the problem is. The 3090 is a great card: roughly 50% more performance than the 2080 Ti. That's more than I personally hoped for this gen. The only difference is that this time around the 3080 is really good, while the 2080 was really bad, which is why the gap is much smaller. We all benefit from that, because high performance is unlocked to a much broader audience this way.

The first wave of custom cards are the super low bins, mostly chips that didn't make it into the higher ranges, and the FE is limited by power and thermals.

TechPowerUp's Strix OC review shows what will actually be possible soon. With a normal high-end custom card, another 10% performance compared to a stock 3090 seems like no problem if you're willing to pay the power price.

For myself, I'm super happy that Nvidia also releases a card for us ultra-enthusiasts, with a $2k custom water loop built into the desk with four 480mm radiators, who enjoy squeezing the last bit of performance out of the components just for fun. No one is forced to buy.

In 2 months' time all segments will be filled with cards, like a "3080 Ti Super whatever" with 20GB for $900 and so on, so there will be something for everyone.


----------



## asdkj1740

This is the Colorful 3080 Vulcan PCB. I think the 3090 Vulcan shares the same PCB.












----------



## J7SC

asdkj1740 said:


> this is colorful 3080 vulcan pcb. i think the 3090 vulcan shares the same pcb.
> View attachment 2459976
> 
> 
> 


That's a nice looking PCB, almost how I imagine the Galax 3090 HOF PCB will look (but in white, and with the extra VRAM...). Also, I know this gen is different re. shunt mods, but it certainly looks easily accessible and isolated on this PCB.


----------



## asdkj1740

J7SC said:


> That's a nice looking PCB, almost how I imagine the Galax 3090 HOF PCB will look (but in white, and with the extra VRAM...). Also, I know this gen is different re. shunt mods, but it certainly looks easily accessible and isolated on this PCB.


Super high phase count, full SP-CAPs/POSCAPs instead of solid caps.


----------



## nievz

Can anyone post a screenshot of their Gaming X Trio 3090's GPU-Z > Advanced tab showing the power limit section?


----------



## J7SC

asdkj1740 said:


> Super high phase count, full SP-CAPs/POSCAPs instead of solid caps.


...yes, I counted the phases (bad habit of mine) :- ) ...would be nice to see a (3090) PCB analysis by Buildzoid of this; I think he would like it.

On a more general 3090 note re. all vendors, the thing I haven't quite wrapped my head around is why there aren't more AIO or water-blocked cards out for review yet (unless vendors reserve those for the higher-end bins later). It seems to me that power draw (thus heat) will be a major issue with max PLs that can approach 450-500W+, and, I am guessing here, it could actually be cheaper than those giant air-cooled solutions... cheaper, smaller and possibly lighter. I realize that full waterblocks are not for everyone, but I haven't even seen a 3080/3090 AIO test yet.


----------



## asdkj1740

colorful igame advanced oc
It fully fills all the empty phase positions on the reference PCB, with a 3x 8-pin setup, a super rare design.










https://www.bilibili.com/video/BV1tZ4y1N7j3?from=search&seid=12225529208045945125


----------



## Jordel

Sheyster said:


> So W1zzard was right eh? LMAO. Too bad you wrote that thesis about... what? I don't even remember now since I read like 2 sentences of it.


He's not wrong. In this instance the slider would've been close enough that we ended up just shy of 480 by multiplying the values, but that's not a guarantee. The table on the front page should ideally be a source of truth. I agree with zhrooms. I'd rather see a dump from GPU-Z than someone winging it by multiplying the max slider value with the base TBP.
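For anyone curious where that "just shy of 480" comes from, here is a minimal sketch of the back-of-the-envelope math being criticized, assuming the commonly quoted Strix OC figures of a 390 W base TBP and a 123% maximum slider (assumed values, not a GPU-Z dump; GPU-Z reads the actual limit out of the BIOS):

```python
# Back-of-the-envelope max power estimate: base TBP scaled by the maximum
# power-limit slider value. GPU-Z instead reports the real cap from the
# BIOS, which is why a GPU-Z dump is the better source of truth.
def estimated_max_power(base_tbp_w: float, max_slider_pct: float) -> float:
    """Estimate max board power in watts from base TBP and slider cap."""
    return base_tbp_w * max_slider_pct / 100.0

# Assumed (not GPU-Z-verified) Strix OC values: 390 W base, 123% slider.
print(f"{estimated_max_power(390, 123):.1f} W")  # 479.7 W, just shy of 480
```

The point stands that this only happens to land near the real cap here; a vendor can set the BIOS limit independently, so the multiplication is not a guarantee.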


----------



## HyperMatrix

Jordel said:


> He's not wrong. In this instance the slider would've been close enough that we ended up just shy of 480 by multiplying the values, but that's not a guarantee. The table on the front page should ideally be a source of truth. I agree with zhrooms. I'd rather see a dump from GPU-Z than someone winging it by multiplying the max slider value with the base TBP.


The problem wasn't asking for verification. The problem was that he started attacking and making fun of the other guy, while literally saying, and I quote, "I know way more than any of these reviewers (like it or not that's just fact)." He's also engaged in stupid fights with me over things in other threads that he was proven wrong on, at which point he wouldn't acknowledge that he was wrong. He simply didn't respond anymore. He's just not a very pleasant guy, and he likes to fight for no reason, and I can see why that would tick some people off. I've personally been avoiding engaging with him directly because of this. 

Anyway this is all getting out of hand. Let's stick to 3090 chat please.


----------



## Jordel

HyperMatrix said:


> The problem wasn't asking for verification. The problem was that he started attacking and making fun of the other guy, while literally saying, and I quote, "I know way more than any of these reviewers (like it or not that's just fact)." He's also engaged in stupid fights with me over things in other threads that he was proven wrong on, at which point he wouldn't acknowledge that he was wrong. He simply didn't respond anymore. He's just not a very pleasant guy, and he likes to fight for no reason, and I can see why that would tick some people off. I've personally been avoiding engaging with him directly because of this.
> 
> Anyway this is all getting out of hand. Let's stick to 3090 chat please.


I can't speak on the other matters, and I don't agree with everything that's said by all parties involved. However, on the base issue, I agree.

I've got Trio X, FTW3 Ultra, and a STRIX on the way. Super keen to see which one I enjoy the most.


----------



## HyperMatrix

Jordel said:


> I can't speak on the other matters, and I don't agree with everything that's said by all parties involved. However, on the base issue, I agree.
> 
> I've got Trio X, FTW3 Ultra, and a STRIX on the way. Super keen to see which one I enjoy the most.


You Scandinavians have been having way more luck than us in getting cards. I’m definitely jealous. Haha. I’m hoping you get some good results out of the FTW3 since it’s my preferred card. But I’m also leaning towards the Strix for that 480W TDP.


----------



## kot0005

domenic said:


> I dropped a post on the AquaComputer English forum trying to probe them regarding their product plans for the 3090 specifically asking about their history of building active cooling with their blocks for the back side of PCBs & their plans for doing the same with the 3090. Based on our numbers within this forum I think we have the opportunity to shape their plans. I urge you to jump into that forum and make your suggestions & opinions known so perhaps we can help shape / influence their product based on our feedback.
> 
> 
> 
> Roadmap for Nvidia 30 Series waterblocks? - English forum - Aqua Computer Forum


I already emailed Sven about the 3090 Strix waterblock, but I might cancel my order and wait for a 3080 20GB or 3080 Ti; the markups are insane here.


----------



## Jordel

HyperMatrix said:


> You Scandinavians have been having way more luck than us in getting cards. I’m definitely jealous. Haha. I’m hoping you get some good results out of the FTW3 since it’s my preferred card. But I’m also leaning towards the Strix for that 480W TDP.


Only some of us! I've seen people with confirmed dates in December.


----------



## bl4ckdot

zhrooms said:


> 😂 What makes you think he has any credibility? 95% of reviewers have no idea what they're doing half the time, that includes W1zzard.
> 
> I know way more than any of these reviewers (like it or not that's just fact), there's bullshit everywhere, the difference is I can spot it while you can't, you are part of the problem, *no one calls them out on their mistakes because no one knows a mistake has been made*, and they (reviewers) never learn, and someone like me never visits those


----------



## inutile

Info on upcoming Heatkiller blocks for Ampere:

*WATERCOOL HEATKILLER® Release Plan for Nvidia RTX 30*
We plan to support three different PCB layouts for Nvidia GeForce RTX 30 cards:

- Step 1: a water block for the so-called "partner reference layout". With this block, we will support as many different cards from as many different manufacturers as possible. You can now find a list of supported 3080 cards at http://gpu.watercool.de . This list is constantly updated as new information becomes available and we learn more about cards, layouts, and variations. This block will be available from mid to end of October.
- Step 2: a customized water block for the EVGA XC3 layout. This layout is heavily based on the partner reference layout and only requires some minor adaptations. It will become available by early to mid November.
- Step 3: a fully custom block for one of the "true" custom cards. We are currently undecided between the Asus TUF, Asus Strix and EVGA FTW. We will run a poll after the first block is released to learn more about your preferences.






News - Watercool







watercool.de


----------



## Thoth420

Am I reading that right? No blocks for FE 3090 planned. It's small and I am tired, stoned and on my phone.


----------



## tiefox

Thoth420 said:


> Am I reading that right? No blocks for FE 3090 planned. It's small and I am tired, stoned and on my phone.


They will skip the FE this time.


----------



## Thoth420

tiefox said:


> They will skip the FE this time.


Do you think this will be a trend overall for waterblocks in regard to the FE? I know my card is not one but I am curious as I was still going to try and get one and will be putting a block on whatever I keep. I tend to go with EK or Barrow but the Bykski ones look really nice these days too so I am game to try one of those. Looks are big with me.


----------



## Spiriva

Inet (a Swedish shop) had a few "PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB" cards and I managed to get one of them.
The card is kinda ugly, but never mind; at least it's on its way.


----------



## shiokarai

Maybe they'll skip the FE cards because of their poor availability?


----------



## inutile

Thoth420 said:


> Do you think this will be a trend overall for waterblocks in regard to the FE? I know my card is not one but I am curious as I was still going to try and get one and will be putting a block on whatever I keep. I tend to go with EK or Barrow but the Bykski ones look really nice these days too so I am game to try one of those. Looks are big with me.


I know EK is working on a pretty unique looking block for FE that takes advantage of the V cutout on the PCB by moving the ports there, looking forward to seeing how this one turns out myself.


----------



## Gofspar

Anyone know whos gonna be the first on the scene with a FC waterblock for founders?
I wish we had some chonkier blocks for the cards since the PCB looks so small lol.


----------



## inutile

Igor’s theory on some of the OC instability of some of the 3080 AIBs, presumably the same thing would apply to 3090 AIBs. Definitely something to keep an eye on.









The possible reason for crashes and instabilities of the NVIDIA GeForce RTX 3080 and RTX 3090 | Investigative | igor'sLAB


Not only the editors and testers were surprised by sudden instabilities of the new GeForce RTX 3080 and RTX 3090, but also the first customers who were able to get board partner cards from the first…




www.igorslab.de


----------



## GTANY

I have just ordered an EVGA RTX 3090 FTW3 Ultra at €1585: EVGA GeForce RTX 3090 FTW3 Ultra Gaming ab € 1585,88 (2020) | Preisvergleich geizhals.eu EU

One of the cheapest prices for one of the best 3090s. I hesitated because of the huge price gap with the 3080, but the EVGA price will probably increase.


----------



## GanMenglin

@asdkj1740 how about this PCB? For the reference PCB water block?


----------



## jabski

GTANY said:


> I have just ordered 1 EVGA RTX 3090 FTW3 Ultra at 1585 € : EVGA GeForce RTX 3090 FTW3 Ultra Gaming ab € 1585,88 (2020) | Preisvergleich geizhals.eu EU
> 
> One of the cheapest prices for one of the best 3090. I hesitated because of the huge price gap with the 3080 but the EVGA price will probably increase.


You did well, as it is now 1699 euros.


----------



## asdkj1740

GanMenglin said:


> @asdkj1740 how about this PCB? For the reference PCB water block?
> 
> View attachment 2459998


This custom PCB simply extends the power connector area and has not changed the spacing from the I/O shield to the right-hand solid caps.
So there is a chance it fits.


----------



## GTANY

I noticed differences in results between 3090 reviews: TechPowerUp gives +20% at 4K compared to the 3080, Hardware Unboxed only +10%. The tested games and scenes are different, and card frequencies may be different, which might explain the discrepancies. Do you see any other reasons?

I would prefer to see a 20% difference, because 10% would be hardly noticeable.


----------



## changboy

The Asus Strix OC 3090 is powerful.


----------



## Xeq54

Just received an Asus TUF 3090. Solid card; I wanted a Strix, but this one was about 150 EUR cheaper and actually available. Will be keeping this one.

Max PL is 107%, which is 380 W.

Some pics and benchmark scores with 100% pl stock and 85% pl in the link below:


http://imgur.com/a/qlyAjy1


----------



## tiefox

Thoth420 said:


> Do you think this will be a trend overall for waterblocks in regard to the FE? I know my card is not one but I am curious as I was still going to try and get one and will be putting a block on whatever I keep. I tend to go with EK or Barrow but the Bykski ones look really nice these days too so I am game to try one of those. Looks are big with me.


At least EK and Bitspower have announced some kind of FE support so far.


----------



## changboy

Xeq54 said:


> Just received Asus TUF 3090. Solid card, wanted a Strix but this one was about 150eur cheaper and actually available. Will be keeping this one.
> 
> Max PL is 107% which is 380Watts
> 
> Some pics and benchmark scores with 100% pl stock and 85% pl in the link below:
> 
> 
> http://imgur.com/a/qlyAjy1


The Asus TUF is also really good, nice temps and better perf than the FE; good choice. Where is it available?


----------



## asdkj1740

Xeq54 said:


> Just received Asus TUF 3090. Solid card, wanted a Strix but this one was about 150eur cheaper and actually available. Will be keeping this one.
> 
> Max PL is 107% which is 380Watts
> 
> Some pics and benchmark scores with 100% pl stock and 85% pl in the link below:
> 
> 
> http://imgur.com/a/qlyAjy1


Can you take a screenshot of the GPU-Z power limit page under Advanced?


----------



## asdkj1740

inutile said:


> Igor’s theory on some of the OC instability of some of the 3080 AIBs, presumably the same thing would apply to 3090 AIBs. Definitely something to keep an eye on.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The possible reason for crashes and instabilities of the NVIDIA GeForce RTX 3080 and RTX 3090 | Investigative | igor'sLAB
> 
> 
> Not only the editors and testers were surprised by sudden instabilities of the new GeForce RTX 3080 and RTX 3090, but also the first customers who were able to get board partner cards from the first…
> 
> 
> 
> 
> www.igorslab.de


Damn this is another masterpiece by Igor.

We used to think POSCAP > SP-CAP > MLCC, but this time, on the back of the GPU socket, MLCCs should be playing a much more important role than SP-CAPs.


----------



## Avacado

Interesting, so basically buy the TUF.


----------



## dante`afk

Avacado said:


> Very nice results. Still not enough for me to drop the dough on them though. I'll be waiting for the 4 series. I hope you all get one and overclock the ever living **** out of them.
> View attachment 2459974
> 
> I did the math.
> Your 3090 Asus SLI yielded a 45% better score, You paid 39% more to achieve that score than me. You paid 5.89$ per point, I paid 5.28$ per point. The money is not worth it to jump from the 98th% to the 99th%. Was fun doing the comparison.


That's cool and stuff, justification for yourself.

You have microstutters
You have SLI profile ****show
You have microstutters


----------



## Avacado

dante`afk said:


> That's cool and stuff, justification for yourself.
> 
> You have microstutters
> You have SLI profile ****show
> You have microstutters


What's your deal brosef. You seem hostile for no apparent reason. P.S. Everyone pushes their hardware harder for TSE than they would most likely use everyday.


----------



## dante`afk

stefxyz said:


> I really don't know what the problem is. The 3090 is a great card, roughly 50% more performance than the 2080 Ti. That's more than I personally hoped for this gen. The only difference is that this time around the 3080 is really good, while the 2080 was really bad, which is why the gap is much smaller. We all benefit from that, because high performance is unlocked to a much broader audience this way.
> 
> The first wave of custom cards are the super low bins, mostly those that didn't make it into the higher ranges, and the FE is limited by power and thermals.
> 
> TechPowerUp's Strix OC review shows what will actually be possible soon. With a normal high-end custom card, another 10% performance compared to a stock 3090 seems to be no problem if you are willing to pay the power price.
> 
> I for myself am super happy that Nvidia also releases the card for us ultra enthusiasts with a $2k custom waterloop built into the desk with four 480mm radiators, who enjoy squeezing the last bit of performance out of the components just for fun. No one is forced to buy.
> 
> In two months' time all segments will be filled with cards like a 3080 Ti Super whatever with 20GB for $900 and so on. So there will be something for everyone.


50% more than the 2080Ti, which reviews were you watching lol? It's barely 10-20%.

1080ti to 2080Ti was roughly 45%


----------



## Spiriva

dante`afk said:


> 50% more than the 2080Ti, which reviews were you watching lol? It's barely 10-20%.
> 
> 1080ti to 2080Ti was roughly 45%


I haven't watched the video, so I have no clue what his results are.

[video]




I will also change from a 2080 Ti to a 3090, mostly because new hardware is fun. I don't care about performance/dollar. I like my Audi A5 2020, but I understand that a Kia Rio 2020 would be much cheaper and get me from A to B too.


----------



## Avacado

Spiriva said:


> I haven't watched the video, so I have no clue what his results are.
> 
> [video]
> 
> 
> 
> 
> I will also change from a 2080 Ti to a 3090, mostly because new hardware is fun. I don't care about performance/dollar. I like my Audi A5 2020, but I understand that a Kia Rio 2020 would be much cheaper and get me from A to B too.


Excited to see your results.


----------



## dante`afk

TUF and FE seems to be the way to go: The possible reason for crashes and instabilities of the NVIDIA GeForce RTX 3080 and RTX 3090 | Investigative | igor´sLAB


----------



## inutile

Spiriva said:


> I haven't watched the video, so I have no clue what his results are.
> 
> [video]
> 
> 
> 
> 
> I will also change from a 2080 Ti to a 3090, mostly because new hardware is fun. I don't care about performance/dollar. I like my Audi A5 2020, but I understand that a Kia Rio 2020 would be much cheaper and get me from A to B too.


For anyone that doesn’t want to watch and just wants the numbers, in this guy’s test, in the sample of games he tested, the 3090 Gaming X Trio beat the 2080Ti Aorus by 36% at 1440p and almost 47% at 4K.


----------



## dante`afk

Spiriva said:


> I haven't watched the video, so I have no clue what his results are.
> 
> [video]
> 
> 
> 
> 
> I will also change from a 2080 Ti to a 3090, mostly because new hardware is fun. I don't care about performance/dollar. I like my Audi A5 2020, but I understand that a Kia Rio 2020 would be much cheaper and get me from A to B too.


Not bad. GamersNexus has slightly different numbers; nonetheless, we can see where this is going.


----------



## nievz

Seems like the Gaming X Trio 3090 that was released in the Philippines resembles the FE design, which uses a combination of POSCAPs and MLCCs.















Here's the FE 3090 PCB:


----------



## Sheyster

HyperMatrix said:


> But I’m also leaning towards the Strix for that 480W TDP.


479.7W ... Be careful here, you might ruffle some feathers.


----------



## plazmic

Regarding POSCAP vs MLCC, it appears EVGA went with 6 POSCAPs on their FTW3 series (at least on the 3080s I've seen). Meanwhile, the ASUS TUF and Strix on both models are using pure MLCC. I'd say that's another point in the Strix's favor when torn between it and the FTW3.


----------



## BigMack70

plazmic said:


> Regarding POSCAP vs MLCC, it appears EVGA went with 6 POSCAPs on their FTW3 series (at least from 3080s I've seen). Meanwhile, ASUS TUF and Strix on both models are using pure MLCC. I'd say that's another point in STRIX favor when torn between it and the FTW3.


Ouch. Can anyone confirm? 

I may wind up avoiding the FTW3 Hybrid if this is the case.


----------



## ThrashZone

BigMack70 said:


> Ouch. Can anyone confirm?
> 
> I may wind up avoiding the FTW3 Hybrid if this is the case.


Hi,


----------



## rush2049

@ThrashZone Need a picture of the back of socket.


----------



## plazmic

BigMack70 said:


> Ouch. Can anyone confirm?
> 
> I may wind up avoiding the FTW3 Hybrid if this is the case.


Confirmed for the 3080, though I don't know which specific POSCAPs they're using or whether the 3090 uses the same.


----------



## BigMack70

ThrashZone said:


> Hi,


Gonna have to translate that pic for me.


----------



## ThrashZone

rush2049 said:


> @ThrashZone Need a picture of the back of socket.


Hi,
Got that off optimus water cooling thread maybe summon him for an image @Optimus WC 








Optimus Waterblock


Now that I look at your pics, are the KP's fins just copper-colored aluminum? Hmmm OMG please don't be so. KP can do no wrong...




www.overclock.net


----------



## BigMack70

So the launch cards that are actually worth getting are just the TUF and the FE? Am I understanding this correctly? I know the Strix is good too but not released yet. Does anyone know the date Strix will go on sale? 

Man... was really hoping for the FTW3 to be good because I like the idea of the hybrid version of it. I'm sure the Kingpin will be great but it will probably be $2200+ and impossible to find.


----------



## changboy

After I buy the Strix, I will tell you it's time to buy yours.


----------



## BigMack70

changboy said:


> After i bought the strix i will tell you its time to buy yours


lol

I've never tried to buy a Strix model of anything before - are they a mass production run card like the FTW3 that's usually available for a while, or are they a limited run like the Kingpin / Lightning / HOF cards often are?


----------



## GTANY

What are the advantages of the MLCC versus POSCAP components ?


----------



## changboy

BigMack70 said:


> lol
> 
> I've never tried to buy a Strix model of anything before - are they a mass production run card like the FTW3 that's usually available for a while, or are they a limited run like the Kingpin / Lightning / HOF cards often are?


The Strix is a mass-production card like the Strix 2080 Ti before it; it was not announced as a limited edition or anything like that.


----------



## BigMack70

Found this pic of FTW3 3090 backplate... can someone more knowledgeable about these capacitors comment if this is good or not? 












HyperMatrix said:


> The RTX Titan does not beat the 3080....not to mention the $2500 for the small performance gains it had over the 2080ti. The RTX 3080 FE, not some super fancy model, is faster than the RTX Titan. This is not even debated anywhere. I'm not sure how you even came to make such a statement. Every single benchmark out there shows the 3080 is clearly ahead.
> 
> As for your comments on RT...you're really starting to talk out your rear mate.


I have an RTX Titan; I sold a 3080, which is cheaper but was not stable in games.



Vapochilled said:


> Never tested the default.....
> 
> I had a custom offset curve. It's was hitting 2030mhz or more depending the power draw and the game. I play at 5k ultra wide, so my power draw is higher.
> 
> *Curious is: all bench's would go ok.
> But in real gaming, because the scenario changes a lot, explosions, buildings, etc , the game would cras*h.
> 
> Changing the curve to avoid 2000mhz, I have no crash.
> It's doing 1980 and 1995 stable no matter the game or bench (including firestrike ultra). I am using only 0.975.. 1v.. 0.956 for 1960mhz.
> 
> That's avoiding constantly power limit hits. I think that's the problem igorlabs stated. Those crap capacitors can deliver instant power changes
> 
> I am happy with these results for now


Don't be a fckboy, guys; I didn't make this up. The 3080 is largely flawed. If it wasn't, Nvidia would lower the 2080 Ti price below the 3080.



pewpewlazer said:


> At least there was SOMETHING factual in your post I guess...


Why would I stand to gain by warning people not to rush in and buy flawed cards? The 2080 Ti is 99% of the value, with 0% of the crashing crap.
Go ahead and wear the invisible clothes; I'm just saying your dong is going to be showing as you lick the stamps on your RMA box.



Nizzen said:


> In what game does Titan RTX beat 3080? I want to try that game with my new Palit 3080  Can't wait to try that game


csgo
valorant
flightsim 2020
It was a tie in Google Chrome dinosaur run


----------



## changboy

Maybe you will see that a bit later, after some sales have happened, but I doubt EVGA would do a custom board with bad parts; even then, they are under warranty. EVGA is proud of their products.


----------



## rush2049

That mimics the FE part choices, so I think it is adequate.

I believe the capacitor banks on the sides are for auxiliary voltage rails (a guess), which are not as susceptible to ripple.


----------



## GanMenglin

asdkj1740 said:


> This custom PCB simply extends the power connector area and has not changed the spacing from the I/O shield to the right-hand solid caps.
> So there is a chance it fits.


Well, EKWB has replied to me that there's no chance...

I've canceled my pre-order of the EK water block for the reference card and ordered a Bykski water block that's just for the Ventus. I can use the Bykski for now and replace it with the EK one when EK gets a water block out for the Ventus.


----------



## CallsignVega

Love how Europeans are getting cards no problem and like the entire US inventory is locked behind Ebay scalper warehouse. Fing pathetic. SCAMERICA.


----------



## ThrashZone

CallsignVega said:


> Love how Europeans are getting cards no problem and like the entire US inventory is locked behind Ebay scalper warehouse. Fing pathetic. SCAMERICA.


Hi,
Yeah China mad.


----------



## Chomuco

RTX 3000 Wasserkühler: Stand der Dinge (Update 25.09) - Hardware-Helden


----------



## stefxyz

BigMack70 said:


> Found this pic of FTW3 3090 backplate... can someone more knowledgeable about these capacitors comment if this is good or not?


This is good. Asus has all six like the ones in the middle, but as long as at least the top middle one is not just the plastic cap it's fine, according to Igor'sLAB. The ****ty ones are those that have all six like the outside ones.


----------



## padman

CallsignVega said:


> Love how Europeans are getting cards no problem and like the entire US inventory is locked behind Ebay scalper warehouse. Fing pathetic. SCAMERICA.





http://imgur.com/bSH6a0W

Indeed. One of the bigger retailers in Scandinavia yesterday. Site was working very smooth, no lag at all even at 15:00 (when it launched). No problem at all getting a card. Sadly they didn't have Strix OC so I'm still waiting. At least I get to read the reviews properly.


----------



## JackCY

CallsignVega said:


> Love how Europeans are getting cards no problem and like the entire US inventory is locked behind Ebay scalper warehouse. Fing pathetic. SCAMERICA.


LOL what? I haven't seen a single 3080/90 for sale in EU be it the central/eastern distribution or western distribution channel. Some big shops don't even bother listing the 3000 series at all on their eshops, why would they when they have no stock to sell.

Ah the northern/Scandinavian distribution channel has some? Neat, well almost no one lives there compared to other parts of Europe so go figure how come they do have them right.


----------



## originxt

ASUS GeForce RTX 3090 STRIX OC Review


The ASUS GeForce RTX 3090 STRIX OC is the fastest RTX 3090 we have tested today by quite the big margin. It also has a huge power limit adjustment range that maxes out at 480 W! We hence added a whole test run at 480 W to our review to see how much extra headroom RTX 3090 Ampere has left and...




www.techpowerup.com





Strix with MLCC


















Stacking up the good stuff! ASUS ROG Strix GeForce RTX 3090 24GB graphics card power delivery analysis - BenchLife.info


What sparks will the GeForce RTX 3090 GPU and the ROG Strix strike up this time?




benchlife.info





Strix with a mix of both.


----------



## zlatanselvic

Snagged an order last night for a Zotac 3080 Trinity OC for a buddy and a Zotac 3090 Trinity. Any news on BIOS updates/flashing other bios on the cards for a bit more power?


----------



## tiefox

padman said:


> http://imgur.com/bSH6a0W
> 
> Indeed. One of the bigger retailers in Scandinavia yesterday. Site was working very smooth, no lag at all even at 15:00 (when it launched). No problem at all getting a card. Sadly they didn't have Strix OC so I'm still waiting. At least I get to read the reviews properly.


Komplett had like 39 units of the 3090 Strix OC for 17 October delivery. I was quick and finalized my order by 15:04 (local launch time). Now I just wait.


----------



## mirkendargen

Well, I'm Strix preorder #4 on good ol' ShopBLT. We'll see when they actually get them. Their first order of 25 TUF's isn't sold out, but also isn't scheduled to arrive for 3 weeks.


----------



## AlKappaccino

CallsignVega said:


> Love how Europeans are getting cards no problem and like the entire US inventory is locked behind Ebay scalper warehouse. Fing pathetic. SCAMERICA.


I wish I could confirm this 

So far no luck at all, neither 3080 nor 3090. Even when I actually bought a card with "in stock" status, it just turned back to "not available" minutes after my order went through. I had one store, where I instantly called them after my order was placed and they confirmed me "yes, it is in stock and will be shipped tomorrow" and it turned out exactly the same. Well, I have some pre-orders going. I hope one of them gets through within the next 50 days or so. And none of those are FTW3 unfortunately, the only card I truly want :/ Guess I have to wait a little longer.


----------



## JackCY

AlKappaccino said:


> I wish I could confirm this
> 
> So far no luck at all, neither 3080 nor 3090. Even when I actually bought a card with "in stock" status, it just turned back to "not available" minutes after my order went through. I had one store, where I instantly called them after my order was placed and they confirmed me "yes, it is in stock and will be shipped tomorrow" and it turned out exactly the same. Well, I have some pre-orders going. I hope one of them gets through within the next 50 days or so. And none of those are FTW3 unfortunately, the only card I truly want :/ Guess I have to wait a little longer.


Just checked Proshop when looking for Swedish shops... here is what I found:
All Nordic distribution countries on this shop seem to have stock? And one can order. Denmark, Finland, Sweden, Norway. Nope my bad, it says "ordered, delivery date unknown" when one has it in cart. So no luck here either.
Now when you select central/eastern or western distribution countries: Poland, Germany, Austria... they don't even list the 3000 cards at all, none for sale, no stock.

So to me it really seems that either they did not ship anything as usual to central/eastern, plus the little that they did ship to western is sold out thanks to their massive population and high purchasing power, while Nordic got plenty of cards and the low population while having high purchasing power did not manage to deplete the supply yet.

I have notifications on a major shop for both 3080 and 3090 for lulz, to see how long, how many months it will take them to have a single card for sale. I suspect it may be 2021 and the notification timer will run out (120 days only). Pascal launch was 3 months before cards started to appear in stock for sale.

At least some shops list the 3000 series, some do not at all and continue to sell overpriced Turing stock since they have no Ampere stock to compete with it.


----------



## AlKappaccino

JackCY said:


> Just checked Proshop when looking for Swedish shops... here is what I found:
> All Nordic distribution countries on this shop seem to have stock? And one can order. Denmark, Finland, Sweden, Norway. Nope my bad, it says "ordered, delivery date unknown" when one has it in cart. So no luck here either.
> Now when you select central/eastern or western distribution countries: Poland, Germany, Austria... they don't even list the 3000 cards at all, none for sale, no stock.
> 
> So to me it really seems that either they did not ship anything as usual to central/eastern, plus the little that they did ship to western is sold out thanks to their massive population and high purchasing power, while Nordic got plenty of cards and the low population while having high purchasing power did not manage to deplete the supply yet.
> 
> I have notifications on a major shop for both 3080 and 3090 for lulz, to see how long, how many months it will take them to have a single card for sale. I suspect it may be 2021 and the notification timer will run out (120 days only). Pascal launch was 3 months before cards started to appear in stock for sale.
> 
> At least some shops list the 3000 series, some do not at all and continue to sell overpriced Turing stock since they have no Ampere stock to compete with it.


Sad thing is, you could be right. Atm it looks like we get little to no stock here, and those tidbits that are occasionally available are purchased within seconds by bots or people with scripts etc. So until the supply drastically increases, I see no chance in getting a card.


----------



## Thoth420

inutile said:


> I know EK is working on a pretty unique looking block for FE that takes advantage of the V cutout on the PCB by moving the ports there, looking forward to seeing how this one turns out myself.





tiefox said:


> at least EK and Bitspower have announced some kind of FE support so far.


Thanks boys! So busy securing a litany of parts that I haven't had time to check on blocks. I have secured orders for, or already have, the following:
Hardware for 3 builds (2x Ryzen B550 and 1x Intel Z490). Outcome: still need one more PSU and a few fans. Also waiting on delivery of the build mentioned below so I can snag the card for myself and pretty it up, etc.
Xbox Series X for myself and a couple of friends. Outcome: 3/3, perfection.
2x 3080s for friends' builds. Outcome: 0/2, utter failure.
1x 3090 for myself in a whole new build. Outcome: 1/1, success! Caveat: I had to order a B550 3700X system via iBuyPower, but the build (minus the card, with a few modifications) will become one of my friends' builds. After pricing it out I only got gouged approximately 300 bucks, and the lead time is roughly 3-4 weeks. I kept the build as basic as possible so it would get through the process as fast as possible. When I was a young boy I bought my first real gaming-grade system from them and it was really well done. Oh, and the sales rep informed me the card is a Gigabyte OC edition, or failing that it would be the Gigabyte Eagle.

P.S. This is pretty much what I wanted out of the first day batch. Not remotely the best but my end goal is to try and score a Galax HoF or any white card that may come out like the Strix 2080Ti did. My build is all white. Barring that I will block it and use a white backplate to get the aesthetic to fit. If this card blows hard then I may swap it out for something else. I made the mistake of selling my 2080Ti and 2070 Super like an idiot.

No botting used, just text-only browser pages and a gigabit internet connection.
Time for a respite for the day and a build starts this weekend. I wish you all the best of luck on securing your orders etc. <3


----------



## JackCY

AlKappaccino said:


> Sad thing is, you could be right. Atm it looks like we get little to no stock here, and those tidbits that are occasionally available are purchased within seconds by bots or people with scripts etc. So until the supply drastically increases, I see no chance in getting a card.


Proshop also seemed to have a "10 minute reservation" when the product is in a cart? That looks unusual to me and is very bot friendly: in my experience shops normally only reserve a product once the order is paid for, and that part is hard to automate. A bot has to deal with all the shop's menus/pages one goes through for an order (except, I guess, Amazon 1-click and the like), so a custom "bot" has to be made for a specific shop. And then you have to pay with the bot, which means entering card details, 2FA over a mobile phone, or even harder steps. It's probably far easier to talk a shop into a preorder instead, or to find a shop that accepts preorders.

If a shop has a very simple buying process, with publicly exploitable API etc. no 2FA, pay on delivery etc. ... R.I.P. it's gonna get botted to death.

eBay to this day has not changed its seller-favorable bidding model to prevent sniping/bots. Wanna bid on eBay? It's been sniping/bots since pretty much day 1.
Regular shops are going to have to learn to deal with high-demand, low-supply situations. Fast.


----------



## J7SC

Spiriva said:


> Inet (swedish shop) had a few "PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB" I managed to get one of them.
> The card is kinda ugly but nvm, atleast its on its way


...one advantage though is that there _could_ be a water-block for it already, at least if the PCB is similar / same as for this 3080 version, per caseking.de offer > here ...worth checking out anyways


----------



## domenic

mirkendargen said:


> Well, I'm Strix preorder #4 on good ol' ShopBLT. We'll see when they actually get them. Their first order of 25 TUF's isn't sold out, but also isn't scheduled to arrive for 3 weeks.


The positive TechPowerUp review of the Strix 3090 OC, the 480W power limit, & the pre-order availability & delivery by 10/16 of the Alphacool Eisblock Aurora water block has convinced me to go with the Strix & the Alphacool block. Just ordered it for $202 delivered (no sales tax in my state). Now I just need to wait until I can find a Strix once released next month. The FTW3 was my second choice and I was very much interested in an actively cooled backplate with an AquaComputer block but they don't have either of these cards in their first run plans for a block release. Too many ghosts to chase...

Can anyone comment on the quality of the Alphacool blocks? My only experience is with EKWB blocks both on my 1080ti & 2080ti cards. 

*Alphacool Eisblock Aurora Plexi GPX-N RTX 3090/3080 ROG Strix with Backplate* - https://www.aquatuning.us/detail/index/sArticle/27989


----------



## mirkendargen

Looks like there's a Bykski Strix block available too: https://www.formulamod.com/bykski-g...-3080-3090-3070-n-as3090strix-x-p2916400.html


----------



## tiefox

mirkendargen said:


> Looks like there's a Bykski Strix block available too: Bykski GPU Water Cooling Block For ASUS RTX3080 3090 STRIX, Graphics Card Liquid Cooler System, RTX 3080 3090 3070, N-AS3090STRIX-X


Bitspower also has one on preorder: Bitspower Classic VGA Water Block for ASUS ROG Strix GeForce RTX 3080 (BP-VG3080AST), on shop.bitspower.com.


----------



## dante`afk

mirkendargen said:


> Looks like there's a Bykski Strix block available too: Bykski GPU Water Cooling Block For ASUS RTX3080 3090 STRIX, Graphics Card Liquid Cooler System, RTX 3080 3090 3070, N-AS3090STRIX-X


oh wow thanks, cheap price there. I paid 150 on amazon


----------



## Thoth420

With all this talk of water cooling I should go through my whole bin of random unused WC parts and get listing. Offset the cost of some of these new toys. I overbought stuff when I was learning to do custom loops. It's a mini warehouse and so many small items to sort though. Tons of fittings and adapters.


----------



## lokran88

domenic said:


> Can anyone comment on the quality of the Alphacool blocks? My only experience is with EKWB blocks both on my 1080ti & 2080ti cards.
> 
> *Alphacool Eisblock Aurora Plexi GPX-N RTX 3090/3080 ROG Strix with Backplate* - https://www.aquatuning.us/detail/index/sArticle/27989


In general Alphacool is good quality, but among the German manufacturers Aquacomputer and Watercool are usually the top, quality- and performance-wise. Those are rather small companies, though, and it takes them longer to design and produce at the high quality they aim for.
But as every generation is different, I guess we need to see reviews first anyway. Alphacool's design this time looks quite interesting, but we have to see how it performs.

If I personally needed a block right now I would probably go for the EK one, but that is just my personal taste.

I did order a 3090 Strix OC myself, but I will definitely wait for reviews and then decide, most likely between Aquacomputer and Watercool, as I really like both of them and my whole setup is composed of both.


----------



## domenic

dante`afk said:


> oh wow thanks, cheap price there. I paid 150 on amazon


Like I said my only previous experience has been with EKWB and I have no way of knowing which of these emerging blocks will be the best of the bunch. Had to pick one. Hope the Alphacool works & if not and a superior one emerges I can always swap (its just $ ). Everything is pointing towards the Strix being one of the top two at least. Would be great if someday we can actually buy any of these cards...


----------



## J7SC

Thoth420 said:


> With all this talk of water cooling I should go through my whole bin of random unused WC parts and get listing. Offset the cost of some of these new toys. I overbought stuff when I was learning to do custom loops. It's a mini warehouse and so many small items to sort though. Tons of fittings and adapters.


...maybe make a special project out of all that extra w-cooling stuff? I built a dual-loop, 5x 360/55 rad, 4x D5 pump system (plus related fittings and tubes for it all) out of the leftovers of several years of building w-cooling systems for work and play... I only had to pay a few bucks for a bit of copper tubing and new coolant. The only real drawback? The thing must weigh more than 100 lbs :-(. Then again, the GPU loop (3 of the 5 rads) has had no problem keeping temps low and steady for a combined 760W of GPUs, so a w-cooled 3090 upgrade shouldn't require much effort when the time comes.


----------



## Avacado

dante`afk said:


> oh wow thanks, cheap price there. I paid 150 on amazon


Don't worry, you haven't seen the price they want for shipping. You did better.


----------



## Thoth420

J7SC said:


> ...may be make a special project out of all that extra w-cooling stuff ? I built a dual loop, 5x 360/55 rad, 4x D5 pump system (plus related fittings and tubes for it all) out of the left-overs of several years of building w-cooling systems for work and play...I only had to pay a few bucks for a bit of copper tubing and new cooling liquids - the only real drawback ? The thing must way more than 100 pds :-(. Then again, the GPU loop (3 of the 5 rads) has had no problems keeping temps low and steady for a combined 760W of GPUs, so a w-cooled 3090s upgrade shouldn't require much effort when the time comes.


Sadly it's all pretty much black, and my new theme is all white with black Mayhems premix, the Black Goo concept from The X-Files. Long time coming. I have my pump set aside; need the rest. I kinda want the wood blocks from EK if they restock them again.


----------



## mirkendargen

Avacado said:


> Don't worry, you haven't seen the price they want for shipping. You did better.


Yeah shipping is bananas on that site, I was just googling to see if it existed. The official Bykski store on Aliexpress has it for $105 free shipping.

I've used a Bykski Threadripper cooler for a few years now and it's been great, but I'm not as sure about GPU blocks. There's a lot more room for precision to matter on GPUs than CPUs.


----------



## Pepillo

Has anyone tried the new Low Temperature BIOS for MSI's GeForce RTX 3090 Gaming X?

MSI GeForce RTX 3090 GAMING X TRIO 24G (www.msi.com)

I get mine on Monday. I guess all it does is increase the fan RPM, but I haven't lost hope that they will put out a BIOS with a higher TDP...


----------



## Spiriva

J7SC said:


> ...one advantage though is that there _could_ be a water-block for it already, at least if the PCB is similar / same as for this 3080 version, per caseking.de offer > here ...worth checking out anyways


Thanks for the heads up, ill check that gpu-block out.


----------



## DirtyScrubz

Yikes, Gigabyte is trash if you plan to watercool: 





I got overnight shipping on my 3090 FE but sadly NVIDIA hasn’t shipped it out yet so looks like I’ll be waiting until Tuesday for it to get here. I still don’t think any 3090 will be worth water cooling until there’s a solid modded vbios that unlocks the power limit.


----------



## vmanuelgm

domenic said:


> The positive TechPowerUp review of the Strix 3090 OC, the 480W power limit, & the pre-order availability & delivery by 10/16 of the Alphacool Eisblock Aurora water block has convinced me to go with the Strix & the Alphacool block. Just ordered it for $202 delivered (no sales tax in my state). Now I just need to wait until I can find a Strix once released next month. The FTW3 was my second choice and I was very much interested in an actively cooled backplate with an AquaComputer block but they don't have either of these cards in their first run plans for a block release. Too many ghosts to chase...
> 
> Can anyone comment on the quality of the Alphacool blocks? My only experience is with EKWB blocks both on my 1080ti & 2080ti cards.
> 
> *Alphacool Eisblock Aurora Plexi GPX-N RTX 3090/3080 ROG Strix with Backplate* - https://www.aquatuning.us/detail/index/sArticle/27989


I always used EK blocks until now; I just bought the Aurora for reference cards, and my impression is that EK is a bit superior quality-wise. I bought the Aurora because I wanted the block immediately and it was not expensive, including the backplate. I had to cut the methacrylate to accommodate the extra DisplayPort on my Giga Gaming OC. The backplate is not very well designed: when you put the pads on, some parts, such as some memory modules, don't make contact (at least on my non-perfect reference Giga Gaming OC).


----------



## J7SC

Thoth420 said:


> Sadly it all pretty much black and my new theme is all white with black mayhem premix with the Black Goo Concept from X-Files. Long time coming. I have my pump set aside. Need the rest. I kinda want the wood blocks from EK if they restock them again.


...I just do black, white and gun-metal grey _combo builds_ these days, after investing heavily in Rustoleum black, white and gun-metal grey paints


----------



## HyperMatrix

Well, finally decided to do a preorder on the Asus ROG Strix. Should have done it sooner: 10th in line at my local retailer, and based on stock availability of other units at launch, it's likely I'll miss out on the first batch. They said the FTW3 was their most requested card, with the Strix coming in second. But with the higher build quality and higher TDP I think it'll be the best overclocking option outside of the HOF and KPE.


----------



## zlatanselvic

We need some custom BIOS for reference PCB's! For water, we need juice!


----------



## Foxrun

Got my confirmation email early this morning, but haven't received any shipping update. I'm hoping it'll ship out late night EST and be here Monday morning. I want to say that's what happened with my RTX Titan.


----------



## LordGurciullo

Have my evga 3090 ftw ultra arriving today. I'm doing 1080p with the xl2746s and a 9900k with 4100 17 17 17 ram.
When they make a 1440p 240hz Dyac or great anti motion blur I'll get that.
... I'm probably going to be getting a vr set soon (one of the new good ones).

Should I keep this card?


----------



## originxt

LordGurciullo said:


> Have my evga 3090 ftw ultra arriving today. I'm doing 1080p with the xl2746s and a 9900k with 4100 17 17 17 ram.
> When they make a 1440p 240hz Dyac or great anti motion blur I'll get that.
> ... I'm probably going to be getting a vr set soon (one of the new good ones).
> 
> Should I keep this card?


I personally think you've overspent on the card if you only plan on sticking with 1080p for the foreseeable future unless you plan on actually getting VR soon. I don't know what your use case is, what games you play, how competitively you play, or if you just want an overpowered card for your current build for the sake of saying you have a 3090. I could be entirely wrong depending on what you use the card for but you do you, the world's your oyster. If you want to keep the 3090 and say **** everything else, just keep it. It'll be a good card for a good long while but who knows what the next iteration brings.


----------



## HyperMatrix

LordGurciullo said:


> Have my evga 3090 ftw ultra arriving today. I'm doing 1080p with the xl2746s and a 9900k with 4100 17 17 17 ram.
> When they make a 1440p 240hz Dyac or great anti motion blur I'll get that.
> ... I'm probably going to be getting a vr set soon (one of the new good ones).
> 
> Should I keep this card?


1080p and 1440p? Unless you're doing some insane levels of supersampling, you don't need that much VRAM; 10GB is far more than enough. Even at 4K I haven't seen a game that uses up 12GB. I've seen games that allocate the entire 12GB, but not actually use it. 1440p and 1080p would be a joke. Since you're targeting high-frame-rate gameplay, you're not going to be supersampling to an extent that would require more than 10GB of VRAM anyway. You have a great card there. But unless you're using it for professional work that requires that amount of VRAM, you've greatly overspent.

That having been said... if money isn't an issue, keep it. Under water it'll still net you an extra 10-15% performance over the FTW3 3080 and gives you a bit of futureproofing. It's a great card. But personally, I wouldn't spend an extra $1000 for an extra 10% fps if I didn't also need the 24GB of VRAM on the 3090.


----------



## DirtyScrubz

LordGurciullo said:


> Have my evga 3090 ftw ultra arriving today. I'm doing 1080p with the xl2746s and a 9900k with 4100 17 17 17 ram.
> When they make a 1440p 240hz Dyac or great anti motion blur I'll get that.
> ... I'm probably going to be getting a vr set soon (one of the new good ones).
> 
> Should I keep this card?


Oh look another XL2746S owner! Not many of us around and this thing is probably the best gaming monitor on the market that very few people know about. Anyway, my 3090 FE shipped out today and gets here on Monday, gotta close my CPU loop now and see if I'm going to eventually get the EK blocks (depends if there's a vbios modded for it eventually).


----------



## changboy

HyperMatrix said:


> Well. Finally decided to do a preorder on the Asus ROG Strix. Should have done it sooner. 10th in line at my local retailer and based on stock availability of other units at launch, it’s likely I’ll miss out on the first batch. They said the FTW3 was their most requested card with the Strix coming in second. But with the higher build quality and higher TDP I think it’ll be the best overclocking option outside of the HOF AND KPE.


Where you order ur strix dude ?


----------



## HyperMatrix

changboy said:


> Where you order ur strix dude ?


Memory Express. Called a second location and got 8th place on the wait list. $2459 so not too bad compared to USD price after conversion fees.


----------



## jura11

GTANY said:


> What are the advantages of the MLCC versus POSCAP components ?


GPUs which follow the FE's MLCC cap configuration OC better and can be overclocked beyond 2.0 GHz without getting any CTDs, while GPUs with only POSCAPs seem to be unstable beyond 2.0 GHz.

Hope this helps.

Thanks, Jura
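For context on why the cap mix might matter: a real capacitor behaves like C in series with parasitic resistance (ESR) and inductance (ESL), and a bank of many small MLCCs in parallel divides those parasitics, keeping impedance low at the high frequencies where GPU load transients live. A rough sketch with illustrative (not measured) part values:

```python
import math

def cap_impedance(c_farads, esr_ohms, esl_henries, freq_hz):
    """Magnitude of a real capacitor's impedance: ESR + j(wL - 1/(wC))."""
    w = 2 * math.pi * freq_hz
    reactance = w * esl_henries - 1.0 / (w * c_farads)
    return math.hypot(esr_ohms, reactance)

# Illustrative (not datasheet) values: one bulk polymer cap (POSCAP-style)
# vs. a bank of ten small MLCCs in parallel (parallel parts divide ESR/ESL).
poscap = dict(c_farads=330e-6, esr_ohms=6e-3, esl_henries=1e-9)
mlcc_bank = dict(c_farads=10 * 47e-6, esr_ohms=2e-3 / 10, esl_henries=0.3e-9 / 10)

for f in (1e5, 1e6, 1e7, 1e8):  # 100 kHz .. 100 MHz
    zp = cap_impedance(freq_hz=f, **poscap)
    zm = cap_impedance(freq_hz=f, **mlcc_bank)
    print(f"{f/1e6:7.1f} MHz  POSCAP {zp*1e3:9.3f} mOhm   MLCC bank {zm*1e3:9.3f} mOhm")
```

With these assumed values the two are comparable at low frequency, but the MLCC bank's much lower ESL gives it far lower impedance in the tens-of-MHz range, which is the usual hand-wavy explanation for the clock-transient stability difference being discussed here.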


----------



## Badass1982

Definitely getting one of these. Can't stretch my budget to two, but I just sold my 2080 Ti for 800 bucks on fleaBay. Need to decide on a water block; I'm thinking either the Asus Strix or the FTW3 Ultra. I really wanna get the one with the best power delivery and overclocking potential.


----------



## domenic

vmanuelgm said:


> I always used EK blocks until right now I bought the Aurora for reference cards and my impression is EK is a bit superior quality wise. I bought the Aurora cos I wanted the block immediately and it was not expensive including the backplate. I had to cut the methacrylate to accommodate the extra display port in my Giga Gaming OC. Backplate is not very well designed since when u put the pads some parts don't get contact such as some memory modules (at least in my non perfect reference Giga Gaming OC).


I didn't realize EK was taking pre-orders for their Strix block. Based on good experience with them in the past, and your feedback, I cancelled my Alphacool block and pre-ordered the EKWB block & backplate. More expensive ($256.85 delivered), but at least I know better what I am getting. I also replied to my order confirmation email asking them to confirm they are addressing the memory chips on the back by including thermal pads.

Also the MLCC vs POSCAP debacle has sealed my decision to go with the Strix because Asus is using ALL MLCC. This thread is keeping score for all brands / models.









EK-Quantum Vector Strix RTX 3080/3090 D-RGB - Nickel + Plexi (www.ekwb.com): the 2nd-generation Vector GPU water block from the EK Quantum line, designed for graphics cards based on the NVIDIA Ampere architecture.

EK-Quantum Vector Strix RTX 3070/3080/3090 Backplate - Black (www.ekwb.com): a CNC-machined retention backplate made from black anodized aluminum that fits all EK-Quantum Vector RTX 3080/3090 water blocks.


----------



## Thoth420

J7SC said:


> ...I just do black, white and gun-metal grey _combo builds_ these days, after investing heavily in Rustoleum black, white and gun-metal grey paints
> 


I was considering something like that but I have such shaky hands from anxiety. I am a terrible painter.

Good news though! It's official! Gigabyte Eagle, for the curious. Approx 3-4 weeks till I receive it. Only catch: I had to buy a whole system around it that I don't need, lel. Good prices though, can't complain: it was virtually at cost versus sourcing the parts individually, even with free Prime shipping on Amazon. The build, minus the card, is going to a buddy.













HyperMatrix said:


> Memory Express. Called a second location and got 8th place on the wait list. $2459 so not too bad compared to USD price after conversion fees.


Grats!!! That was the one I wanted most but had zero hope. Happy for you my dude. I also would have probably paid that tbh. I am foolish when a new game I am big on is coming.


----------



## vmanuelgm

Take it as a joke, but the reality here is my Gigabyte is rock stable after the shunt mod!!! At first I also had some crashes that I thought were due to drivers...


----------



## Thoth420

Think it was power supply pre shunt somehow? What PSU are you running (if you don't mind)? Only thing I could source was the new 12 series BeQuiet Dark Power Pro 1200w. It came with sleeved cables for its insane price tag at least. It is also the prettiest psu I have ever laid eyes on. Nobody will ever see it either...


----------



## LordGurciullo

DirtyScrubz said:


> Oh look another XL2746S owner! Not many of us around and this thing is probably the best gaming monitor on the market that very few people know about. Anyway, my 3090 FE shipped out today and gets here on Monday, gotta close my CPU loop now and see if I'm going to eventually get the EK blocks (depends if there's a vbios modded for it eventually).


This monitor is the very first that makes me feel ok about not having my CRT any more.

And to all of you guys above... you're all right. A 3080 would be fine (although now I'm hearing about the issues with the crashing).
It's 1100 bucks more for 12 percent more... but the 3090 FTW3 Ultra is the best...

I'm at a loss to decide if I want to sell it or not. I'm not rich.


----------



## HyperMatrix

Thoth420 said:


> I was considering something like that but I have such shaky hands from anxiety. I am a terrible painter.
> 
> Good news though! It's official! Gigabyte Eagle for the curious. Approx 3-4 weeks til receive. Only needed to buy a whole system around that I do not need lel. Good prices though cannot complain. It was virtually at cost if parts were sourced individually even if they all came free ship on amazon prime. Build less the card is going to a buddy.
> 
> 
> 
> 
> 
> 
> Grats!!! That was the one I wanted most but had zero hope. Happy for you my dude. I also would have probably paid that tbh. I am foolish when a new game I am big on is coming.


To be clear, it's $2459 Canadian, so about $1840 in real money. I thought the $1800 pricing on the ROG Strix was a tad high originally, but when I saw the FTW3 going for the same price, it made it easier to choose. Especially after the 480W TDP and the whole POSCAP/MLCC thing came up.

I wouldn’t have paid this much if I had an OC’d 2080Ti under water. But my Pascal Titan X has been needing a proper upgrade for 3 years ever since I bought one of those 4K 144Hz monitors and realized I can’t play anything on it.


----------



## changboy

HyperMatrix said:


> Memory Express. Called a second location and got 8th place on the wait list. $2459 so not too bad compared to USD price after conversion fees.


I just sent them an email and we'll see the answer, but I'm in Québec and they have no store here in my area.


----------



## J7SC

HyperMatrix said:


> to be clear it’s $2459 Canadian. So about $1840 in real money. I thought the $1800 pricing on ROG Strix was a tad high originally. But when I saw the FTW3 going for the same price, it made it easier to choose. Especially after 480w TDP and the whole poscaps/mlcc thing came up.
> 
> I wouldn’t have paid this much if I had an OC’d 2080Ti under water. But my Pascal Titan X has been needing a proper upgrade for 3 years ever since I bought one of those 4K 144Hz monitors and realized I can’t play anything on it.


Congrats ! 
FYI, I've bought a fair amount of stuff at Memory Express (i.e. hard parts for the Orca, link below) and got treated very well. They even did a price match vs. Newegg.ca on some major components. Given the current 3090 availability situation and the already good price (relatively speaking, in 3090 terms...), it's time for you to sit back and 'anticipate'...


----------



## Thoth420

HyperMatrix said:


> to be clear it’s $2459 Canadian. So about $1840 in real money. I thought the $1800 pricing on ROG Strix was a tad high originally. But when I saw the FTW3 going for the same price, it made it easier to choose. Especially after 480w TDP and the whole poscaps/mlcc thing came up.
> 
> I wouldn’t have paid this much if I had an OC’d 2080Ti under water. But my Pascal Titan X has been needing a proper upgrade for 3 years ever since I bought one of those 4K 144Hz monitors and realized I can’t play anything on it.


Ah gotcha. Same boat kinda. I sold my 2070 Super and my 2080 Ti so I am on a 1070 Ti atm. It's not driving my panel well at all. That card got you great mileage and yep the 4k high refresh is changing the game in regard to reqs.


----------



## dante`afk

MFW getting an email alert that asus TUF 3090 is available to order, but I'm about to bring my daughter to bed.


----------



## HyperMatrix

Apparently there's no telling when the 3090 is going back up on EVGA's site. By the looks of things, they should be giving a heads-up on Twitter.


----------



## DirtyScrubz

LordGurciullo said:


> This monitor is the very first that makes me feel ok about not having my CRT any more.
> 
> And to all of you guys above... You all are right. A 3080 Would be fine. (although now I'm hearing about the issues with the crashing).
> its 1100 bucks more for 12 percent more... But the 3090 FTW ULTRA is the best...
> 
> I'm at a loss to decide if I want to sell it or not. I'm not rich.


I was facing a similar dilemma because I could use the extra money from selling this thing for $2000+ and buy a brand new Zen 3 system and/or 4K display as a secondary monitor. Guess I'll see when it arrives if it is tempting enough to keep.


----------



## kot0005

EVGA response about the caps:

Message about EVGA GeForce RTX 3080 POSCAPs - EVGA Forums (forums.evga.com)

"Hi all, Recently there has been some discussion about the EVGA GeForce RTX 3080 series. During our mass production QC testing we discovered a full 6 POSCAPs solution cannot pass the real world applications testing. It took almost a week of R&D effort to find the cause and reduce the PO..."


----------



## originxt

Buildzoid comes in. What a Fiesta, love it.


----------



## JackCY

DirtyScrubz said:


> Yikes, Gigabyte is trash if you plan to watercool:
> 
> 
> 
> 
> 
> I got overnight shipping on my 3090 FE but sadly NVIDIA hasn’t shipped it out yet so looks like I’ll be waiting until Tuesday for it to get here. I still don’t think any 3090 will be worth water cooling until there’s a solid modded vbios that unlocks the power limit.


Those caps on the back of the GPU, behind the die, suck at filtering, and it may actually matter on newer GPUs.


----------



## HyperMatrix

EVGA FTW3, stock BIOS, air cooled (with 2 additional case fans blowing on it). 22 Gbps memory, +80 MHz offset on the GPU clock; don't remember the actual boost clock. Fans at 100%, of course. Honestly not too bad. Gamers Nexus video.


----------



## domenic

originxt said:


> Buildzoid comes in. What a Fiesta, love it.


Not only did Buildzoid confirm that the exclusive use of the cheaper POSCAPs is most likely the issue and that the more MLCC capacitors used on a board the better, he also pointed out seeing various quality levels of MLCC capacitors being used among the boards. In particular, he said the ones used on the Asus Strix shown in the TechPowerUp review (below) are the much higher-quality / most expensive kind. Yet another example of you get what you pay for: Asus built these cards right by using much higher-quality components, so I don't feel bad paying a $300 premium.

EVGA also officially announced tonight that the reason the FTW3 3080s were delayed was because EVGA didn't catch this issue, and they had to swap in two MLCC capacitors after their initial all-POSCAP design failed testing. That's OK, but even on their official FTW3 3080s & 3090s they only went with two MLCC banks instead of all six as Asus did, and it's not clear whether even those are the better ones. Again, hats off to Asus for delivering a top-notch product to match the premium price.


----------



## HyperMatrix

domenic said:


> Not only did Buildzoid confirm that the exclusive use of the cheaper POSCAPs is most likely the issue and the more MLCC capacitors used on a board the better, he also pointed out seeing various quality levels of MLCC capacitors being used amongst the boards. In particular he said the ones used on the Asus Strix shown in in the TechPowerUp review (below) are the much higher quality / most expensive. Yet another example of you get what you pay for and Asus built these cards right by using much higher components so I don't feel bad paying a $300 premium.
> 
> EVGA also officially announced tonight that the reason the FTW3 3080s were delayed was because EVGA didn't catch this issue and had to swap in two MLCC capacitors after their initial design of all POSCAPP failed testing. That's OK but even with their official FTW 3080s & 90s they only went with two MLCC instead of all six as Asus did and not clear if even then they are using the better ones. Again hat off to Asus for delivering a top notch product to match the premium price.
> 
> View attachment 2460066


Right now there's a lot of conjecture. No one knows if 2 MLCC + 4 POSCAP is worse than 6 MLCC. Some are saying it's worse (which I doubt) and some are saying it's better. I think we'll know more depending on how the KPE is designed. We don't know at what sort of temperature and power levels the difference ends up mattering. It's like arguing over the difference between having 2000 torque or 1000 torque on a 1 hp car. Sure, more torque is good. But is the difference something that's going to matter based on the expected usage?

I've reserved an ROG Strix myself over the FTW3 because of the TDP. The MLCC layout along with the higher TDP does give me a higher indication of quality. But... I've also seen amazing overclocking on both the 3080 and 3090 FTW3 cards. On air with the stock BIOS, a 3080 doing 12,460 in Port Royal and a 3090 doing 14,391. That's just 1.9% lower than vmanuelgm and his shunt-modded 550W water-cooled Gigabyte card.
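For anyone wondering why the MLCC groups are supposed to matter: a real capacitor is its ESR in series with the capacitance and a parasitic inductance (ESL), and a bank of small MLCCs in parallel has far lower effective inductance than one bulk polymer cap, so it holds a lower impedance at the multi-MHz frequencies where boost-clock load transients live. Here's a rough sketch of that math; all component values below are illustrative guesses, not actual board specs:

```python
import math

def cap_impedance(f_hz, c_f, esr_ohm, esl_h):
    """|Z| of a real capacitor: ESR in series with C and parasitic ESL."""
    x_c = 1.0 / (2 * math.pi * f_hz * c_f)   # capacitive reactance
    x_l = 2 * math.pi * f_hz * esl_h         # parasitic inductive reactance
    return math.sqrt(esr_ohm**2 + (x_l - x_c)**2)

f = 10e6  # ~10 MHz, the kind of fast transient a clock spike produces

# One bulk 330 uF polymer cap (POSCAP-style) vs. ten 4.7 uF MLCCs in
# parallel (parallel parts divide both ESR and ESL by the part count).
z_poscap = cap_impedance(f, 330e-6, 0.007, 2e-9)
z_mlcc_bank = cap_impedance(f, 10 * 4.7e-6, 0.002 / 10, 0.5e-9 / 10)

print(f"POSCAP:    {z_poscap * 1000:.1f} mOhm")
print(f"MLCC bank: {z_mlcc_bank * 1000:.2f} mOhm")
```

At DC and low frequency the bulk cap wins on sheer capacitance; it's only in this high-frequency region that the MLCC bank's lower ESL pays off, which is part of why a mixed 2 + 4 layout isn't obviously worse than 6 MLCC groups.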


----------



## dante`afk

Insane. Scalpers are selling 3090s for $2,500-5,000 on eBay, and they're being bought.


----------



## Thoth420

dante`afk said:


> Insane. Scalpers are selling 3090s for $2,500-5,000 on eBay, and they're being bought.


All they had to do was buy a system from an SI. Probably get it two weeks later, maybe three, with a better driver out by then. Whole system with the card for the same price or less. Pop the card out, swap it into your existing rig in place of its GPU, done. That's what I did. Buying from a scalper instead seems hella dumb; smooth-brain tier.


----------



## Chamidorix

Welp, we've got both the quiet and performance Asus Strix OC 3090 480W vBioses up on TPU:









Asus RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory


www.techpowerup.com





So I suppose someone with actual flashing hardware could get in on testing it on other cards.

In the meantime, we wait for the nebulous developers behind nvflash to update.


----------



## HyperMatrix




----------



## Chamidorix

Missed this. W1zzard is saying it's coming "soon". Wonder exactly how he snags it from Nvidia's cold dead hands.


----------



## J7SC

The MLCC plot thickens a bit...

I was checking Asus 3090 Strix prices in Europe and over here and noticed that early press pics - some still in use on Asus' own site - showed a different GPU back-cap arrangement... at first it seemed related to the basic 3090 Strix vs. the upscale 3090 Strix OC ('O' model), but after further checking, that is not the case... more likely that some vendors such as Asus had the MLCC issue already figured out before it became a hot potato...

Below are two Strix images taken directly from Asus' site just now... the detailed gallery shows updated MLCCs for both variants...


----------



## Amdux

So I wanted an FE card, but didn't get through the checkout; thanks, Nvidia store.
I then snagged whatever I could get ahold of, which turned out to be a Gigabyte RTX 3090 Gaming OC.

It's sitting right here on my desk... and I'm thinking about just returning it? I haven't broken a seal, to avoid any further hassle, since we have a two-week return period for online sales in Germany if the goods are undamaged.

This whole capacitor thing is a mess and no one seems to know for sure what's up, but this card has only the SP-CAPs from what I could figure out from pictures. Since the backplate is solid without a window, I can't confirm without breaking the box open, and that's currently not something I'm sure I want to do.

I wanted to watercool this thing, and found out after buying (review embargoes until preorder time are hilarious...) that it is a custom PCB with non-standard power connectors, so yikes here again...

So, is anyone with a Gigabyte 3090 Gaming OC getting good performance so I shouldn't worry, or should I just ditch it and go for another AIB?


----------



## Chamidorix

[deleted]


----------



## BlackyMeow

Hello everyone. I wanted to get a reference-PCB card for water cooling, so I ordered the PNY non-metal one. Does anyone have one of those? Any crashes? Is it possible to check the caps?
Of course, I will report here as soon as I have mine (it's backordered for now).


----------



## Chamidorix

The Rog Strix 3090 on the Asus livestream yesterday only has 2 MLCCs..... what a clusterfuck lol.


----------



## HyperMatrix

Chamidorix said:


> The Rog Strix 3090 on the Asus livestream yesterday only has 2 MLCCs..... what a clusterfuck lol.
> View attachment 2460086


Well, I am thoroughly effing confused at this point. Did they push out 6x MLCC cards to reviewers so the cards would outperform in reviews and drive more sales? Did they go back to a 2 + 4 design because they realized it worked out better?

So far we've seen the Asus ROG Strix with 6x POSCAPs, 6x MLCC, and also 2x MLCC + 4x POSCAPs. Which one am I going to get? -_-

If we get BIOS flashing and the Strix is also going to be a 2 + 4 design, I may just go back to the FTW3.


----------



## J7SC

Amdux said:


> So I wanted a FE Card, but didn't get through the checkout thanks nvidia store.
> I then snagged the whatever I could get ahold of and so I got a Gigabyte RTX 3090 Gaming OC
> 
> It's sitting right here on my desk.. and I'm thinking about just returning it? Haven't broken a seal just to prevent any further hassle as we have a two week return period for online sales in germany if the goods are undamaged.
> 
> This whole capacitor thing is a mess and no one seems to knows for sure whats up but this card has only the SP-Caps from what Icould figure out by pictures. Due to the backplate beeing solid without a window I can't confirm without breaking the box open and thats currently not a thing i'm sure i wanna go with.
> 
> I wanted to watercool this thing, and found out after buying (review embargos until preorder time is hilarious..) that it is custom PCB with non standard power connectors so yikes here again..
> 
> So anyone with a Gigabyte 3090 Gaming OC who is getting good performance and I shouldn't worry, or should i just ditch it and go for another AIB?


...there's someone who posts in this thread who has that Gigabyte card... albeit shunt-modded and w-cooled... he's (currently) #2 on the single-card Port Royal HoF > his score here


----------



## Xeq54

I think those Asus photos are from pre-production/review samples. 

Here is a photo of my TUF 3090 non-OC edition. It's full MLCC:









I would consider Asus safe.

Sorry for the lousy focus; it's mounted vertically, so I couldn't really see properly when taking it. Should be enough, though.


----------



## Wihglah

Actual in-the-wild photo of the MLCCs on a FTW3 Ultra (not mine).










Source: Review EVGA GeForce RTX 3090 FTW3 ULTRA 24GB


----------



## kx11

Chamidorix said:


> The Rog Strix 3090 on the Asus livestream yesterday only has 2 MLCCs..... what a clusterfuck lol.
> View attachment 2460086




__ https://twitter.com/i/web/status/1309617232201175040


----------



## HyperMatrix

Anyone see the TechPowerUp chart for the ROG Strix? At max power level, without overclocking, it boosts to 2070MHz while below 60C. Seems like an easy 2.1GHz under water. Also, unrelated note... memory OC appears to make a rather huge difference with the 3090, at least in synthetic benchmarks. Here's hoping Aquacomputer picks the ROG Strix to build a cooler for. Need that actively cooled backplate.










Compare that to the MSI Gaming X Trio that was close behind:









And way behind is the Gigabyte Eagle OC:








And Zhroom's favorite card, which he says is as good as or better than almost all other cards because power stages and generally poor build quality supposedly don't matter, the Zotac Trinity:










See how stable the clocks are on the Strix? It simply clocks down as it heats up. The other cards fluctuate constantly, regardless of where in the curve you look during full load.


----------



## changboy

Another investigation :


----------



## iamjanco

It's really just another rehash of what Igor took note of. Everyone else is just parroting what Igor pointed out (even Buildzoid).

What really needs to be done at this point is to actually prove the caps in question are causing the issue. Even Igor implied his claims are speculative. Strong claims, of course, but the proof is in the pudding.

Someone like Igor has the equipment and technical know-how to help make that happen. Tin could have done it too, if he weren't sequestered by his employer (EVGA) from doing so. In any event, it's a fair bet the manufacturers (including leather jacket's team) are busy following up.


----------



## changboy

I just bought an EVGA GeForce RTX 3080 XC3 ULTRA for $710 CAD on eBay hehehe. I really think it's a scam; if the listing is bad, they will refund me hahahaha.


----------



## HyperMatrix

changboy said:


> I just bought an EVGA GeForce RTX 3080 XC3 ULTRA for $710 CAD on eBay hehehe. I really think it's a scam; if the listing is bad, they will refund me hahahaha.


Why would it be a scam? Seems legit to me...


----------



## changboy

Heheheh, I checked and it's a coffee place in Belfast hahaha


----------



## HyperMatrix

changboy said:


> Heheheh, I checked and it's a coffee place in Belfast hahaha


Makes sense. Stocking up on 3080s to help brew their coffee faster. Nothing brings out the flavor and aroma of coffee like RT cores.


----------



## asdkj1740

GanMenglin said:


> Well, EKWB has replied to me that there's no chance...
> 
> I've canceled my pre-order of ek water block for reference card, and ordered a bykski water block which just for ventus. I can use bykski at this moment, can replace it with EK when EK got the water block for ventus.


You are ****ed, do you know that? I told you to send me your address so I could handle your panic.


----------



## changboy

Already 9 cards sold... get yours, guys hahahahaha


----------



## changboy

But under my transaction the info shows "Your payment is pending", so it looks like the listing is under investigation.
It's possible the coffee doesn't taste so good hehehehe.


----------



## Stephen.

HyperMatrix said:


> Why would it be a scam? Seems legit to me...
> View attachment 2460106


----------



## GanMenglin

Can we talk about the high-tdp bios now?

I'm planning to flash a high-TDP BIOS onto my Ventus. But I still don't know which one will work... Trio? Strix?


----------



## changboy

Sold out, guys (19 cards)! Waiting on my card hehehehe!


----------



## HyperMatrix

GanMenglin said:


> Can we talk about the high-tdp bios now?
> 
> I'm planning to flash a high-tdp bios into my ventus. But still don't know which one will work... trio? strix?


Well if you're able to flash one at all, I'd stick to same-manufacturer first. Also risky to flash an unconfirmed bios unless your card has dual-bios. Has nvflash been updated? Or do you have hardware for direct flashing?


----------



## GanMenglin

HyperMatrix said:


> Well if you're able to flash one at all, I'd stick to same-manufacturer first. Also risky to flash an unconfirmed bios unless your card has dual-bios. Has nvflash been updated? Or do you have hardware for direct flashing?


Oh... nvflash hasn't been updated.

By the way, even if flashing a different BIOS fails, we can still use a second card to flash the right one back, right?


----------



## changboy

GanMenglin said:


> Oh... nvflash hasn't been updated.
> 
> By the way, even if flashing a different BIOS fails, we can still use a second card to flash the right one back, right?


If I were you, I wouldn't flash anything before others have done it. You could brick your card, and you're not in a hurry; wait and be patient.


----------



## changboy

Lol, under my transaction there's a message about the seller: "This user is no longer registered on eBay."
My order has been cancelled hehehe, but I never paid since PayPal was on hold.
Arggg, I will need to spend $2800 for a Strix  lol.


----------



## asdkj1740

GanMenglin said:


> Oh... nvflash hasn't been updated.
> 
> By the way, even if flashing a different BIOS fails, we can still use a second card to flash the right one back, right?


In the past, yes; even an integrated GPU can do that.

Although your warranty is voided, there's a chance of a recall, so hope is there, don't give up dude.
Let me help.


----------



## HyperMatrix

changboy said:


> Lol, under my transaction there's a message about the seller: "This user is no longer registered on eBay."
> My order has been cancelled hehehe, but I never paid since PayPal was on hold.
> Arggg, I will need to spend $2800 for a Strix  lol.


Asus Rog Strix. Cheaper than USD price if you can actually grab one:


https://www.bestbuy.ca/en-ca/product/asus-rog-strix-nvidia-geforce-rtx-3090-24gb-gddr6x-video-card/14954117


----------



## GanMenglin

asdkj1740 said:


> in the past, yes. even integrated gpu can do that.
> 
> although your warranty is voiled, there is chances of recall, hopes are there, dont give up dude.
> let me help.


I think they won't recall the cards


----------



## changboy

HyperMatrix said:


> Asus Rog Strix. Cheaper than USD price if you can actually grab one:
> 
> 
> https://www.bestbuy.ca/en-ca/product/asus-rog-strix-nvidia-geforce-rtx-3090-24gb-gddr6x-video-card/14954117


This is not the OC one.


----------



## DirtyScrubz

Anyone know if there’s a waterblock for 3090 FE that actively cools the rear memory modules?


----------



## DirtyScrubz

Amdux said:


> So I wanted an FE card, but didn't get through the checkout; thanks, Nvidia store.
> I then snagged whatever I could get ahold of, which turned out to be a Gigabyte RTX 3090 Gaming OC.
> 
> It's sitting right here on my desk... and I'm thinking about just returning it? I haven't broken a seal, to avoid any further hassle, since we have a two-week return period for online sales in Germany if the goods are undamaged.
> 
> This whole capacitor thing is a mess and no one seems to know for sure what's up, but this card has only the SP-CAPs from what I could figure out from pictures. Since the backplate is solid without a window, I can't confirm without breaking the box open, and that's currently not something I'm sure I want to do.
> 
> I wanted to watercool this thing, and found out after buying (review embargoes until preorder time are hilarious...) that it is a custom PCB with non-standard power connectors, so yikes here again...
> 
> So, is anyone with a Gigabyte 3090 Gaming OC getting good performance so I shouldn't worry, or should I just ditch it and go for another AIB?


Ditch it; the power connectors guarantee nobody will make a block for that card. Watch Der8auer's video on that card, it sucks.


----------



## J7SC

iamjanco said:


> It's really just another rehash of what Igor took note of. Everyone else is just parroting what Igor pointed out (even buildzoid).
> 
> What really needs to be done at this point is actually prove the caps in question are causing the issue. Even Igor implied his claims are speculative. Strong claims, of course, but the proof is always in the pudding.
> 
> Someone like Igor has got the equipment and technical know how to help make that happen. Tin could have done it too, if he wasn't sequestered by his employer (EVGA) to prevent him from doing so. In any event, it's a fair bet the manufacturers (including leather jacket's team) are busy following up.


^^This! I thought Igor did a good job of pointing out a potential, if not likely, issue with crashing to the desktop once past a certain boost + OC; but now there's a wave of YouTubers and other tech 'influencers' just rehashing everything Igor said and wrote... some with a quick review of an Asus TUF vs. TUF OC, which isn't exactly apples-to-apples either.

Many of those YouTubers / 'influencers' were also at it when the 2080 Ti 'test escapes' fiasco happened with early release cards (Iamjanco can explain 'test escapes'). In any event, someone started the rumour that it was all down to Micron VRAM... since almost all early top-level Turing cards had Micron VRAM, people figured it was a slam-dunk. Few noticed that the 2080 non-Ti used the same VRAM and didn't have the same problems... Nvidia eventually came clean re. 'test escapes', but there are still a ton of YouTube clips out there by the usual 'secondary' rehashers that never fixed their original (also rehashed from others) vids about that issue.

With RTX 3K Ampere, it looks like another somewhat mangled launch by Nvidia (from availability, to 8K/60Hz, to potential tech/cap issues now). Why, oh why? It also seems that Nvidia withheld full-spec drivers from AIBs until the last possible moment as leak insurance, but that put AIBs in a difficult position as they could not fully test... and the damage is done, whether it turns out to be the MLCC cap issue, something else in the VRM, or a combination thereof. I am sure Ampere RTX 3K can and will recover, but a certain degree of 'tainted' will remain. And who among you/us won't check the caps on the back of their GPU when the card arrives...


----------



## DirtyScrubz

J7SC said:


> ^^This! I thought Igor did a good job of pointing out a potential, if not likely, issue with crashing to the desktop once past a certain boost + OC; but now there's a wave of YouTubers and other tech 'influencers' just rehashing everything Igor said and wrote... some with a quick review of an Asus TUF vs. TUF OC, which isn't exactly apples-to-apples either.
> 
> Many of those YouTubers / 'influencers' were also at it when the 2080 Ti 'test escapes' fiasco happened with early release cards (Iamjanco can explain 'test escapes'). In any event, someone started the rumour that it was all down to Micron VRAM... since almost all early top-level Turing cards had Micron VRAM, people figured it was a slam-dunk. Few noticed that the 2080 non-Ti used the same VRAM and didn't have the same problems... Nvidia eventually came clean re. 'test escapes', but there are still a ton of YouTube clips out there by the usual 'secondary' rehashers that never fixed their original (also rehashed from others) vids about that issue.
> 
> With RTX 3K Ampere, it looks like another somewhat mangled launch by Nvidia (from availability, to 8K/60Hz, to potential tech/cap issues now). Why, oh why? It also seems that Nvidia withheld full-spec drivers from AIBs until the last possible moment as leak insurance, but that put AIBs in a difficult position as they could not fully test... and the damage is done, whether it turns out to be the MLCC cap issue, something else in the VRM, or a combination thereof. I am sure Ampere RTX 3K can and will recover, but a certain degree of 'tainted' will remain. And who among you/us won't check the caps on the back of their GPU when the card arrives...


I won't need to check since mine's an FE.


----------



## vmanuelgm

Microsoft Flight Simulator 2020 @RTX 3090


----------



## J7SC

vmanuelgm said:


> Microsoft Flight Simulator 2020 @RTX 3090


Thanks for that!!
MS FS 2020 is my 'main 4K obsession' right now and I wondered how a high-clock single 3090 would perform. Fortunately, per below, I got it working for now on 2x Aorus 2080 Ti (w-cooled) in NVL-CFR, though Nvidia disabled that never-documented feature (added for upcoming mGPU developers?) with the latest drivers :-(

Below are a few screenies (some w/ fps / clocks) at 4K / Ultra / Dense objects. This slightly older driver should last me until the custom w-cooled 3090s are out and available and the MLCC stuff is sorted. I ended up taking some of the OC out of the CPU and GPUs as the system was getting to over 1000W combined... even without any OC/PL raise on the GPUs, that's still around 600W just for the cards, so a single 3090 can do the same or slightly better at much lower power consumption.


----------



## mirkendargen

Chamidorix said:


> Welp, we've got both the quiet and performance Asus Strix OC 3090 480W vBioses up on TPU:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Asus RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> So I suppose someone with actual flashing hardware could get in on testing it on other cards.
> 
> In the meantime, we wait for the nebulous developers behind nvflash to update.


For everyone interested in doing this, keep in mind that the Strix has 3 power connectors and 2 HDMI ports. In 2080 Ti flashing, flashing a card with a different power connector setup never worked that I can remember (it never bricked the card; you just ended up with an absurdly low power limit), and flashing cards with different output configs would sometimes disable one or more ports on the card. I can't think of any other card announced so far that has both 3 power connectors and 2 HDMI ports.


----------



## vmanuelgm

J7SC said:


> Thanks for that !!
> MS FS 2020 is my 'main 4K obsession' right now and I wondered how a high-clock single 3090 would perform. Fortunately, per below, I got it working for now on 2x Aorus 2080 Ti (w-cooled) in NVL-CFR, though Nvidia disabled that never-documented feature (added for upcoming mGPU developers?) with the latest drivers :-(.
> 
> Below are a few screenies (some w/ fps / clocks) for 4K / Ultra / Dense objects. This slightly older driver should last me until the custom w-cooled 3090s are out and available, and the MLCC stuff is sorted. I ended up taking some of the oc out of the CPU and GPUs as it was getting to over 1000W combined...even w/o any oc/PL on the GPUs alone, that's still around 600W just for the cards, so the 3090 can do the same/slightly better, but at lower pwr consumption.
> 
> View attachment 2460123
> View attachment 2460124
> View attachment 2460125
> View attachment 2460126



Welcome mate.
Sorry for being a noob player!!!
U are gonna enjoy the game a lot with a SLI of 3090's


----------



## J7SC

vmanuelgm said:


> Welcome mate.
> Sorry for being a noob player!!!
> U are gonna* enjoy the game a lot with a SLI of 3090's*


...That's what I was thinking, but what about Nvidia stating that NVL/SLI support for RTX 3K will be up to the developers from now on? 'Regular' SLI (i.e. AFR, AFR2) does not really work well with MS FS 2020, and the slightly older CFR driver I use won't work with 3090s.


----------



## vmanuelgm

J7SC said:


> ...That's what I was thinking, but what about NVidia stating that NVL/SLI for RTX 3K will be up to the developers from now on ? 'Regular' SLI (ie AFR, AFR2) does not really work / work well with MS FS 2020, and the slightly older CFR driver I use won't work with 3090s


Let's wait and see what happens...


----------



## ThrashZone

vmanuelgm said:


> Welcome mate.
> Sorry for being a noob player!!!
> U are gonna enjoy the game a lot with a SLI of 3090's


Hi,
Have you posted a Time Spy yet? Sorry if I missed it, but do you have a link?
Never mind, found it


----------



## domenic

I don't understand something. I just noticed that on Newegg the Strix 3090 OC product pictures still show all POSCAPs, which shows that at some point Asus realized the issue and replaced them with all MLCC before launch. Seems they came to the same conclusion, and took the same action, as EVGA.

So which of the following scenarios seems most likely?

*Scenario 1* - All of the AIBs were left on their own to discover & correct this problem in a vacuum? (i.e. some, like Asus & EVGA, figured it out & corrected it before launch, while others, like Gigabyte & Zotac, just didn't know about it and shipped product not up to reference spec)

*Scenario 2* - One or more AIBs discovered the problem and alerted NVIDIA, who did not pass it on to the other AIB partners that were unaware?

*Scenario 3* - Nvidia discovered the problem & alerted the AIBs, but some just decided to do nothing?

In any event, this was a debacle that could have and should have been avoided.


----------



## ThrashZone

Hi,
Have to see what the FTW3 does, but Asus cards, at least the TUF, look to be the best out there so far.


----------



## HyperMatrix

changboy said:


> This is not the OC one.


The model number says it is. Notice the 3090-O24G at the end. 

Bestbuy:









Newegg:


----------



## vmanuelgm

ThrashZone said:


> Hi,
> You posted a time spy yet sorry if I missed it but you got a link ?
> Never mind found it


U got plenty of them in Hall of Fame...









3DMark Time Spy Extreme Hall of Fame


The 3DMark.com Overclocking Hall of Fame is the official home of 3DMark world record scores.




www.3dmark.com





The AMD 64 cores are good in physics...


----------



## changboy

HyperMatrix said:


> The model number says it is. Notice the 3090-O24G at the end.
> 
> Bestbuy:
> View attachment 2460138
> 
> 
> Newegg:
> View attachment 2460139


Oh yeah, the number does say it.


----------



## ThrashZone

vmanuelgm said:


> U got plenty of them in Hall of Fame...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 3DMark Time Spy Extreme Hall of Fame
> 
> 
> The 3DMark.com Overclocking Hall of Fame is the official home of 3DMark world record scores.
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> The AMD 64 cores are good in physics...


Hi,
I always forget about that :doh:


----------



## changboy

I contacted many places and stores, and no one will take a pre-order on the Strix 3090. Today I also chatted with Newegg and they answered: if you don't see the pre-order option, then you can't.
Memory Express also answered my mail and wrote this:
As it turns out, these RTX 3090 graphics cards are currently in-store only. We do not have any way of getting these products to customers who are not physically near a Memory Express location at this time. If we get sufficient stock company-wide to fulfill all outstanding orders, then we may allow customers to start purchasing online and getting it shipped to their home. We already have thousands of orders so it will be awhile before that is the case.
So I need to wait for online stock. The only store I can go to in person is Best Buy; I haven't tried it yet.


----------



## HyperMatrix

changboy said:


> I contacted many places and stores, and no one will take a pre-order on the Strix 3090. Today I also chatted with Newegg and they answered: if you don't see the pre-order option, then you can't.
> Memory Express also answered my mail and wrote this:
> As it turns out, these RTX 3090 graphics cards are currently in-store only. We do not have any way of getting these products to customers who are not physically near a Memory Express location at this time. If we get sufficient stock company-wide to fulfill all outstanding orders, then we may allow customers to start purchasing online and getting it shipped to their home. We already have thousands of orders so it will be awhile before that is the case.
> So I need to wait for online stock. The only store I can go to in person is Best Buy; I haven't tried it yet.


Need to make a refresh/order bot for BestBuy.  Also, yeah, I'm 8th in line at one Memory Express location, but they have 13 locations, so even if every location has only 7 preorders ahead of me, this one retailer would need to get at least about 100 ROG Strix cards for me to have a chance of picking one up. And as far as I'm aware, no individual card model ships in that kind of volume to a single retailer. I may end up picking up an FTW3 with an associate's discount if BIOS flashing is possible. Pretty much the same results but $200 less, with an unlocked power limit.


----------



## changboy

I just chatted with Best Buy and they won't take a pre-order on this either, so I need to wait lol.


----------



## Meyer

tiefox said:


> Komplett had like 39 units of the 3090 Strix OC for 17th October delivery. I was quick and finalized my order by 15:04 (local launch time). Now just wait.


Me too, mate. They had around 30 3090 Strix OC to order and they went within minutes; they had 5 Palit 3090s left in stock after my order went through (15:02). Estimated shipping for the Asus ROG Strix OC RTX 3090 is the 17th of October. As long as I have it before 2021, I'm pleased.

Best Regards


----------



## Meyer




----------



## vmanuelgm

As you all know, I have shunt modded my Gigabyte 3090, which is one of the blacklisted cards for using SP-CAPs (470 microfarads on the Giga instead of the 330 on Zotac or Colorful). Well, if the SP-CAPs were so bad, then at 1.1V core voltage pulling 550W (at the moment, because of my beta shunt mod), my card should be crashing continuously. I left the card at 2025-2055MHz for 3 hours in Unigine Heaven (520-550W at 1.062-1.1V), and I have now been playing Battlefield V for another 2 hours: not a single crash!!!

Long live the shunt mod!!!
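For anyone unfamiliar with what the mod actually does: the card infers its current draw from the tiny voltage drop across a sense shunt, and soldering a resistor in parallel shrinks that drop, so the firmware under-reports power and stops throttling at the limit. A quick sketch of the arithmetic (the resistor values here are purely illustrative, not the actual board values):

```python
def reported_power(actual_watts, shunt_mohm, parallel_mohm=None):
    """Power the card *thinks* it is drawing. The controller measures the
    voltage drop across the sense shunt and divides by the stock shunt
    value; a parallel resistor lowers the effective resistance, so the
    sensed drop (and the reported power) shrinks by the same ratio."""
    if parallel_mohm is None:
        return actual_watts
    r_eff = shunt_mohm * parallel_mohm / (shunt_mohm + parallel_mohm)
    return actual_watts * r_eff / shunt_mohm

# Stacking an identical resistor on a 5 mOhm shunt halves the sensed drop:
# 550 W of real draw is reported as 275 W, comfortably under a 350 W cap.
print(reported_power(550, 5, 5))   # -> 275.0
print(reported_power(550, 5, 15))  # -> 412.5 (a milder mod)
```

The flip side, of course, is that the card really is pulling the full wattage, so the cooling and the power connectors still have to cope with it.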


----------



## stefxyz

Not really surprising. The shunt mod stops the frequency from fluctuating so much, and it's those fluctuations that cause the voltage transients the capacitors are needed for.


----------



## vmanuelgm

stefxyz said:


> Not really surprising. The shunt mod stops the frequency from fluctuating so much, and it's those fluctuations that cause the voltage transients the capacitors are needed for.


Next Gigabyte press release: buyers, shunt mod your cards!!!


----------



## Meyer

vmanuelgm said:


> As you all know, I have shunt modded my Gigabyte 3090, which is one of the blacklisted cards for using SP-CAPs (470 microfarads on the Giga instead of the 330 on Zotac or Colorful). Well, if the SP-CAPs were so bad, then at 1.1V core voltage pulling 550W (at the moment, because of my beta shunt mod), my card should be crashing continuously. I left the card at 2025-2055MHz for 3 hours in Unigine Heaven (520-550W at 1.062-1.1V), and I have now been playing Battlefield V for another 2 hours: not a single crash!!!
> 
> Long live the shunt mod!!!


That's really good to hear; keep up the shunt fight, mate, and keep us updated!!!!!

Best Regards


----------



## skline00

Thank you for the MSFS post. I'm happily running it on a [email protected] with a 2080 Ti on a 43" 4K monitor.


----------



## TheScarecow

Just to chime in on the MSFS posts: it's super CPU-capped, and that is the biggest limiter on frame rates in the game atm. Until it gets moved to DX12 sometime in the future, that's always going to be the issue with that game.

Unless you're rocking a super-overclocked 10900K, please don't waste your money buying a 3090 thinking it will solve your problems. You'd be far better off getting a 10900K, a chunky CPU cooler, a high-quality mobo and a 3080 instead.


----------



## changboy

Is the 10980XE better than the 10900K for playing Flight Simulator?


----------



## TheScarecow

changboy said:


> Is the 10980xe better then 10900k to play flight simulator ?


No, as it's currently a DX11 title and can't scale with cores; single-thread performance is king.


----------



## Antsu

TheScarecow said:


> No, as it's currently a DX11 title and can't scale with cores; single-thread performance is king.


This so much, and because of this I'd add that RAM has a significant impact on performance.


----------



## J7SC

... I'm not sure I agree with all those comments, though certainly some of them. For one, this is a sim, not an FPS game... and GPU performance DOES matter in MS FS 2020, as do the CPU, RAM and internet connection speed, even what time of day you play, since their servers push a lot of the visual information to you. FPS varies depending on how much detail your flight involves (i.e. high without clouds, or low with clouds over mountains, terrain, cities and such).

I'm using my workstation TR 2950X for MS FS 2020, with fast/tight 32 GB of RAM (tried 64, no real difference). Not the most outrageously OC'd CPU setup, but it works fine with a 100 Megabit/s internet connection. With one 2080 Ti, at 4K / Ultra / Dense, I typically get an average of 35-40 FPS; with the second 2080 Ti added (via CFR, per above), that scales up to 55-65 FPS for the same scenes. The few single 3090s I have seen in MS FS 2020 are in a similar 55-65 FPS range. My 4K monitor does not do Vsync, and it is grayed out in the MS FS 2020 options screen. Without a doubt, MS FS 2020 is the trickiest beast I have ever set up; its performance comes down to a multitude of CPU, GPU, RAM and connection factors all at once. Still, a single 3090 would currently be a great card for it (2x 3090 even more so, depending on what Nvidia and MS do with SLI / AFR / CFR profiles, and if you want 8K). Here is a screenie I posted earlier, the one with the most CPU and GPU detail. It's 4K, so you might have to click on it a few times to fully enlarge, noting the info on the left and right.


----------



## TheScarecow

Can you do a screenshot using the in-game dev mode that shows the FPS and what is limiting the frame rate?


----------



## J7SC

...I already posted this one earlier today as a thumbnail...usually, it says GPU limited (top right)...then again, my 4K monitor is a Philips workstation 40 inch model (decent game mode, though) on DP, and OC'ed up to 72 Hz. With 3090(s), I would get a new monitor, just not sure about 'future proofing' and 8K etc. at this stage, as I tend to keep stuff a long time.

Another thing to keep in mind with MS FS 2020 is that this is an HEDT system, with system RAM read, write and copy all above 100 GB/s, which probably helps.


----------



## HyperMatrix

J7SC said:


> ...I already posted this one earlier today as a thumbnail...usually, it says GPU limited (top right)...then again, my 4K monitor is a Phillips workstation 40 inch model (decent game mode, though) on DSP, and oc'ed up to 72 Hz. With 3090(s), I would get a new monitor, just not sure about 'future proofing' and 8K etc at this stage as I tend to keep stuff a long time
> View attachment 2460171


Honestly, I don't think there are any good monitors out there with the same quality you would get from a TV. The Asus PG27UQ, the 144Hz 4K panel with G-SYNC Ultimate, FALD, Quantum Dot, and HDR1000, is an example of that. Despite having a FALD system, you get far more haloing around bright objects than you do even on an edge-lit Samsung TV. So in any kind of FPS game, when you're in a dark area and have even a tiny dot for your crosshair, there will be a big glowy ball right in the center of the screen. Even the newer ones, like the new 4K 144Hz with DSC and Mini-LED with I think 3x the FALD zones, exhibit the same problem. It's also not helped that they're all using really ****ty matte screens. On white areas you can even see the matte film splitting the white, to the point you can actually see little rainbow colored pixels. It also spreads the FALD bloom, as it's diffused when hitting the film. They really need a Samsung ultra black coating.

Anyway, really upset with my purchase, so giving you a heads up. I almost feel like I need 2 separate monitors: one for work, which this one is good for, and then something like a 32-34" 4K/120Hz OLED with VRR, if they ever make one, for gaming/media consumption. The picture below has been brightness-adjusted to compensate for the overexposure of a phone camera in dark areas. As you can see, the black areas are fully black, but areas with light become incredibly distracting and actually make it hard to see what's happening, especially if the game you're playing happens to use an overly bright center dot crosshair, because then you'll have these light circles right in the middle of your screen. This is the tunnel in Metro Exodus right at the start of the game.


----------



## originxt

HyperMatrix said:


> Honestly don't think there are any good monitors out there with the same quality that you would get from a TV. The Asus PG27UQ which was the 144Hz 4K with GSYNC Ultimate, FALD, Quantum Dot, and HDR1000, is an example of that. Despite having a FALD system, you get far more haloing around bright objects than you do even on an edge lit samsung TV. So in any kind of FPS game, when you're in a dark area, and have even a tiny dot for your crosshair, there will be a big glowy ball right in the center of the screen. Even the newer ones like the new 4K 144Hz with DSC and Mini-LED with I think 3x the FALD zones exhibit the same problem. It's also not helped that they're all using really ****ty matte screens. On white areas you can even see the matte film splitting the white to the point you can actually see little rainbow colored pixels. It also spreads the FALD bloom as it's diffused when hitting the film. They really need a samsung ultra black coating.
> 
> Anyway, really upset with my purchase, so giving you a heads up. I almost feel like I need 2 separate monitors. One for work, which this one is good for, and then something like a 32-34" 4K/120Hz OLED with VRR if they ever make one for gaming/media consumption. I'll attach a couple photos I just took in a second.


I have the PG27UQ, and the halo effect definitely does come in, but I tend to zone it out after a few seconds. The PG27UQX is supposed to fix the issue somewhat with more dimming zones, but it has probably been pushed back quite a bit, probably to include HDMI 2.1 so they can do 120Hz 4K without chroma subsampling. Unsure if it does 144Hz 4K w/o chroma subsampling. Honestly, I like my monitor but definitely wish the halo effect wasn't there. Also, I've been having issues where the monitor essentially loses signal and kind of melts, prompting me to turn the monitor off and on. Unsure if it's the cable (I'm using the one supplied in the box) or my GPU. In either case, I would like to game at a reasonable frame rate when maxed soon with the 3090.

I'm curious as to how the 3090 will scale with an Intel HEDT; no one does those tests or benchmarks since the userbase is comparatively so small.


----------



## HyperMatrix

originxt said:


> I have the pg27uq and the halo effect definitely does come in but I tend to zone it out after a few seconds. The PG27UQX is supposed to fix the issue somewhat with more dimming zones but probably has been pushed back quite a bit, probably to include hdmi 2.1 so they can do 120hz 4k without chroma sampling. Honestly, I like my monitor but definitely wish the halo effect wasn't there. Also I've been having issues where the monitor essentially loses signal and kind of melts, prompting me to turn off and on the monitor. Unsure if its the cable (I'm using the one supplied by the box) or my gpu. In either case, would like to game at a reasonable frame rate when maxed soon with the 3090.
> 
> I'm curious as to how the 3090 will scale with an intel hedt, no one does those tests or benchmarks since the userbase is so small comparatively.


Halo effect isn't bad if the game isn't using a pitch black environment. Some go dark, but not pure black. And if there's little bits of light around the screen it's not as bad since the whole screen is kind of lit up. But when you're in an area which is fully black, like the metro tunnel area, in a section with no lights/brights, it becomes an incredibly poor experience you can't get away from because the bloom becomes the most visible thing on the screen. Haha. If you're using a lower brightness setting, it might not be as bad. But if you're using properly bright, but not overburned HDR settings, check that Metro scene I'm talking about. I don't think you'll be able to zone it out. I've tried. But it literally makes it hard to see other details around the bloom.

I had been looking forward to the new models with more zones and dsc, but I saw video of them in action and it's still the same problem. They're monitor manufacturers so they are missing 3 things that TV manufacturers have learned:


- No matte film on premium displays
- Use an extra black/darkening layer on the screen (like Samsung pure black or ultra black)
- Lower the brightness output in hotspot zones when the scene around them is dark, because it's better to have less-bright areas than extra lit-up areas.

About that last point, if you want to know what I mean, look at the starfield demo that LG uses to show the difference between their OLED and Samsung QLED. The QLED mutes the brightness. So you lose some details in the smaller stars, but it doesn't result in an entire screen being completely and fully lit, destroying the blacks, just to show bright star dots.


----------



## J7SC

With a 3090, its 24GB of GDDR6X, and HDMI 2.1, a new OLED / QLED makes a lot of sense, though I'm not sure what the latest is re. the LG / NVidia Ampere HDMI 2.1 issues (> here)...Apart from wanting a certain custom w-cooled 3x8-pin PCB model with all the kinks worked out, which may take a few weeks/months, I also really want to take the time to come up with the right monitor to match...as this is in a separate workstation area here, going much bigger than 43-48 inches for the 'monitor' will not work seating-distance-wise.


----------



## originxt

HyperMatrix said:


> Halo effect isn't bad if the game isn't using a pitch black environment. Some go dark, but not pure black. And if there's little bits of light around the screen it's not as bad since the whole screen is kind of lit up. But when you're in an area which is fully black, like the metro tunnel area, in a section with no lights/brights, it becomes an incredibly poor experience you can't get away from because the bloom becomes the most visible thing on the screen. Haha. If you're using a lower brightness setting, it might not be as bad. But if you're using properly bright, but not overburned HDR settings, check that Metro scene I'm talking about. I don't think you'll be able to zone it out. I've tried. But it literally makes it hard to see other details around the bloom.
> 
> I had been looking forward to the new models with more zones and dsc, but I saw video of them in action and it's still the same problem. They're monitor manufacturers so they are missing 3 things that TV manufacturers have learned:
> 
> 
> No Matte Film on premium displays
> Use an extra black/darkening layer on the screen (like samsung pure black or ultra black)
> Lower brightness output in hotspot zones when the scene around it is dark because it's better to have less bright areas than to have extra lit up areas.
> About that last point, if you want to know what I mean, look at the starfield demo that LG uses to show the difference between their OLED and Samsung QLED. The QLED mutes the brightness. So you lose some details in the smaller stars, but it doesn't result in an entire screen being completely and fully lit, destroying the blacks, just to show bright star dots.


I debated getting a colorimeter for the monitor so it can adjust brightness accordingly, but I've heard it's pointless since games and NVIDIA just override whatever settings you put in and apply whatever the game decides to apply. Currently running it at 17 brightness in SDR since I don't want to burn out my eyes; haven't really ventured into HDR, honestly, lol. I'll check out a Metro tunnel area with HDR on and see if it changes anything. How many nits are you running in HDR?

I've definitely noticed the bloom in a black loading screen with a cursor, and it was pretty bad even in SDR. Curious how bad it can be in a very dark level. Think I should change out the DP cable to aid in my weird disconnect issue? Unsure of the quality of the cable they give us in the box.


----------



## HyperMatrix

originxt said:


> I debated getting a colorimeter for the monitor so it can adjust brightness accordingly but I've heard its pointless since games and nvidia just override w/e settings you put and apply whatever the game decides to apply. Currently running it at 17 brightness sd since I don't want to burn out my eyes, haven't really ventured in hdr honestly lol. I'll check out a metro tunnel area with hdr on and see if it changes anything. How many nits are you running in hdr?
> 
> I've definitely noticed the bloom in a black loading screen with cursor and it was pretty bad even in sd. Curious how bad it can be in a very dark level. Think I should change out the dp cable to aid in my weird disconnect issue? Unsure of the quality of cable they gives us in the box.


You bought what was supposed to be the best non-OLED HDR monitor and haven't been playing in HDR? I don't think we can be friends...haha. I wouldn't worry about the cable. It's a digital signal; if there was a cable quality issue you'd get the picture cutting in and out or loss of signal.

As for the brightness, HDR mode uses a white point reference rather than the brightness setting you get in SDR. There's a setting for it in your monitor OSD, but games often have their own built in as well. I normally do 60-100 but have gone up to 200 depending on the game; in these pics I was down to 60. If there's calibration in the game, I always set it to the maximum accurate HDR peak brightness: the point where it's bright without overexposing and killing detail or burning the image.
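For context on where those white-point numbers sit, HDR10 signals use the SMPTE ST 2084 "PQ" curve, which maps absolute luminance in nits to a 0-1 signal value. Here is a minimal sketch of the inverse EOTF (constants are from the ST 2084 definition; this is illustrative only, not a calibration tool):

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> signal level 0..1.
# Constants per the ST 2084 spec; illustrative sketch, not a calibration tool.
M1 = 2610 / 16384          # ~0.1593
M2 = 2523 / 4096 * 128     # ~78.84
C1 = 3424 / 4096           # ~0.8359
C2 = 2413 / 4096 * 32      # ~18.85
C3 = 2392 / 4096 * 32      # ~18.69

def pq_encode(nits: float) -> float:
    """Map display luminance (0..10000 nits) to a PQ signal value in [0, 1]."""
    n = (nits / 10000.0) ** M1
    return ((C1 + C2 * n) / (1 + C3 * n)) ** M2

# SDR-reference white (~100 nits) sits around half of the PQ code range,
# leaving the top half of the signal for highlights up to the display's peak.
for nits in (100, 600, 1000):
    print(f"{nits:5d} nits -> PQ {pq_encode(nits):.3f}")
```

This is why a white point of 60-100 nits still leaves plenty of headroom for HDR1000 highlights.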


----------



## originxt

HyperMatrix said:


> You bought what was supposed to be the best non-oled HDR monitor and haven't been playing in HDR? I don't think we can be friends...haha. I wouldn't worry about the cable. It's a digital signal. If there was a cable quality issue you'd get cutting in and out of the picture/loss of signal.
> 
> As for the brightness, HDR mode uses white point reference rather than the brightness you get in SDR. There's a setting for it in your monitor OSD, but games often have their own one built in as well. I normally do 60-100, but have gone up to 200 depending on the game. In these pics I was down to 60. If there was calibration in the game, I always set it to the max accurate hdr peak brightness. That’s the point where it’s bright, without overexposing and killing detail or burning the image.


I think when I bought it, Windows 10 was having a fit every time I turned HDR on, so I just gave up on it xD. Think it also didn't do well with some games for some odd reason. Turned it on now and it's not crashing the monitor, so I guess it's fine now. And yeah, that's why I was asking about what nits you were running at; think I had it set to 52 originally. I'll try out some games with darker settings/lighting to see what's up.


----------



## HyperMatrix

originxt said:


> I think when I bought it, windows 10 was a having a fit everytime I turned HDR on so I just gave up on it xD. Think it also didn't do well with some games for some odd reason. Turned it on now and its not crashing the monitor, so I guess its fine now. And yeah, thats why I was asking about what nits you were running at, think I had it set to 52 originally. I'll try out some games with darker settings/lighting to see what's up.


First mission of Metro Exodus. You go down to the sewers right at the start. Test it out there. Lots of darkness. Also, yeah, Windows and HDR are still problematic. Better than before, but still lots of issues. At least games work properly with it now.


----------



## Raggou

Well, I managed to snag an MSI Ventus 3090 from Microcenter on launch day, one of 12.
Current 3DMark score: https://www.3dmark.com/pr/340738


----------



## LordGurciullo

Just installed the card. Ran Port Royal on my XL2746S at 1080p, and I got 13010! I feel like I should be in the 13800 realm with this EVGA FTW3 Ultra 3090...
So I go to install Precision X1, and it says "your firmware needs to be updated"..

Should I do that?! Could this be the firmware update that lowers clock speeds to prevent crashing? What do you guys think?


----------



## HyperMatrix

LordGurciullo said:


> Just installed the card. Ran Port Royal on my xl2746s 1080p, and i got 13010! I feel like I should be in the 13800 realm with this evga ftw ultra 3090...
> So I go to install precision x1 and it says "your firmware needs to be updated"..
> 
> Should i do that?!? Could this be the firmware update that lowers clock speeds to not crash? what do you guys think?


Well...there's no way you'll score less than 13010...so I'd do it. Haha. It could also be the one that increases your power limit. Does your max power limit currently show 440W in GPU-Z?


----------



## CDMAN

Hey everyone. Had a little time to spare, so I tried my hand at overclocking my EVGA 3090. I scored 14370 in Port Royal. That score put me at number 5 in the Port Royal "Hall of Fame". My placement will not last long, but nice to see nonetheless. Posting my 3090 scores on my blog for now; still trying to get a feel for all the changes on the site. Akihabara Arcade Games


----------



## HyperMatrix

CDMAN said:


> Hey everyone. Had a little time to spare, so I tried my hand at overclocking my evga 3090. I scored 14370 in Port Royal. That score put me number 5 on the Port Royal "Hall of fame". My placement will not last long, but nice to see none the less.  Posting a my 3090 scores on my blog for now, still trying to get a feel for all the changes on the site. Akihabara Arcade Games
> 
> View attachment 2460186


Grats man. That's a solid score. What were your clocks? Anything special for cooling?


----------



## LordGurciullo

Congrats! Did you update the firmware? ...

GPU-Z under load? Where do I see that?


----------



## LordGurciullo

Nevermind... it's for the fans.

Why the hell is my score so low??


----------



## J7SC

LordGurciullo said:


> Just installed the card. Ran Port Royal on my xl2746s 1080p, and i got 13010! I feel like I should be in the 13800 realm with this evga ftw ultra 3090...
> So I go to install precision x1 and it says "your firmware needs to be updated"..
> 
> Should i do that?!? Could this be the firmware update that lowers clock speeds to not crash? what do you guys think?





HyperMatrix said:


> Well...there's no way you'll score less than 13010...so I'd do it. Haha. It could also be the one that increases your power limit. Does your max power limit currently show 440W in GPU-Z?


With the PL slider on max in PrecX or MSI AB and the GPU-Z 'sensors' window open, you can also run Unigine Superposition 4K or 8K Optimized...those really give your card a tough workout. Afterwards, check the max values in GPU-Z for power consumption, clocks, temps, etc.


----------



## LordGurciullo

OK, did tests again. I gotta say I'm not thrilled for a 2000 dollar card. I'm in the very lows for 3090s, and definitely not where an EVGA FTW3 Ultra should be.
i9-9900K at 5.0 GHz and 4100 MHz CL17-17-17 RAM, 1080p monitor

Port Royal: 13100
Time Spy: 18670
Time Spy Extreme: 9078

Max power draw: 390W

Stock

What do you guys think?


----------



## J7SC

PL maxed out? ...shouldn't it be closer to 440W?


----------



## Esenel

That is a poor result for a 3090 in Time Spy.
My 2080 Ti could do
TS 17186
TSE 8060


----------



## HyperMatrix

LordGurciullo said:


> Ok did tests again. I Gotta say I'm not thrilled for a 2000 dollar card. I'm in the very lows for 3090s and definitely not for an EVGA ULTRA FTW
> i9900k at 5.0 and 4100 17 17 17 ram 1080p monitor
> 
> Port Royal 13100
> Timespy 18670
> Timespy extreme 9078
> 
> max power draw 390
> 
> Stock
> 
> What ya guys think?


There's something wrong here. Did you use Afterburner or Precision X1 to max out your power limit? You should be pulling 440W. While you're running your benches, open up the sensors tab in GPU-Z and check what the performance limiting factor is. You sure you're not running the card on the Quiet BIOS?
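As a side note, the power-limit slider in Afterburner/Precision X1 is just a percentage of the card's default power target, so the expected wattage is simple arithmetic. A quick sketch (the default-wattage figures in the example are illustrative; check your own card's actual limits in GPU-Z):

```python
def effective_power_limit(default_watts: float, slider_percent: float) -> float:
    """Power target in watts for a given slider position (slider is % of default)."""
    return default_watts * slider_percent / 100.0

# Illustrative defaults, not specs for any particular card:
# a card with a 420 W default and a 107% slider cap tops out near 450 W.
print(effective_power_limit(420, 107))   # 449.4
print(effective_power_limit(350, 114))   # 399.0
```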


----------



## J7SC

@LordGurciullo ...in addition to the BIOS switch check, are you running stock clocks? If not, what OC? That too can have an impact.


----------



## LordGurciullo

Moved the power limit to 107 in EVGA. Power limit went up to 457 watts.
New Port Royal score is 13312... still not great.

Stock clocks.


----------



## HyperMatrix

LordGurciullo said:


> Moved power limit to 107 in evga. Power limit went up to 457 watts.
> New Port Royal score is 13312... still not great.
> 
> Stock Clocks.


Oh...well, bump up those clocks then. Lol. All those 14k+ scores you see are heavy memory OCs and GPU OCs with fans going at 100%.


----------



## pewpewlazer

LordGurciullo said:


> Moved power limit to 107 in evga. Power limit went up to 457 watts.
> New Port Royal score is 13312... still not great.
> 
> Stock Clocks.


Try underclocking the ram a bit and see what happens? Not sure what else it could be.


----------



## LordGurciullo

But a stock 2000 dollar card should be doing better than 13100, no? I also changed my settings to prefer maximum performance in the NVIDIA Control Panel.


----------



## LordGurciullo

Also, what's a good idle temp?


----------



## HyperMatrix

pewpewlazer said:


> Try underclocking the ram a bit and see what happens? Not sure what else it could be.


You're giving him opposite advice. Stop confusing the poor boy. 



LordGurciullo said:


> but a stock 2000 dollar card should be doing better than 13100 no? I also changed my settings to prefer max performance in nvidia control panel.


I mean if you wanted to keep stock clocks, the non-$2000 FE model would have been a better choice. FTW3 is designed to enable additional overclocking/performance. Going up just 7% from 13100 would mean you'd be at over 14k which is a solid score, with clocks that you should be able to maintain easily during long gaming sessions.



LordGurciullo said:


> also whats a good idle temp?


Most cards are designed to sit around the low to mid 70s under load. Some, like the ROG Strix, have a more aggressive fan profile designed to keep it around 65-70°C if possible. The cooler the card, the better the performance. Ah, idle temp. Depends on your ambient. You'll probably be in the mid 30s to mid 40s at idle, but that's just based on how the manufacturer set the fan curve/profile. Some models even turn the fans off completely until you get over a certain temp (like 50°C), so it's not a good indicator of anything. You can set a custom fan curve to change what your idle temps will be.
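For anyone curious what a custom fan curve actually does: tools like Afterburner and Precision X1 interpolate linearly between the temperature/speed points you set. A pure-Python sketch of that interpolation (the curve points below are made up for illustration, not a recommendation):

```python
# A custom fan curve is just a temperature -> fan-speed mapping; the tool
# interpolates linearly between the points you set. Points here are invented.
CURVE = [(30, 0), (50, 30), (65, 55), (75, 80), (85, 100)]  # (°C, fan %)

def fan_percent(temp_c: float) -> float:
    """Fan speed in percent for a given GPU temperature, per the curve above."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            # linear interpolation between the two surrounding curve points
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # past the last point, pin to max

print(fan_percent(40))   # 15.0 — halfway between the 30° and 50° points
print(fan_percent(70))   # 67.5
```

A curve whose first point is above your idle temperature is what lets fans stop completely at idle.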


----------



## LordGurciullo

OK, here's what I got... anything over 130 crashes. I got a 13927 with this... but I can't seem to go any higher, and I really would like 14200 and stable..

Any thoughts? Also, my idle with this aggressive curve is 57°C!!

Thoughts?


----------



## HyperMatrix

LordGurciullo said:


> Ok heres what I got ... anything over 130 crashes. I got a 13927 with this... But cant seem to go any higher and I really would like 14200 and stable..
> 
> Any thoughts? Also my idle with this aggressive curve is 57c!!
> 
> Thoughts?


You shouldn't "expect" much more than 14k stable on air. You may be able to do it by adjusting how much of your OC goes to memory clocks and how much to GPU. I know some people have gotten higher scores by lowering the GPU clock a bit and pushing the memory up higher. You can play around with that. But honestly, a 14k score is good on air. JayzTwoCents got 14.6k (4.2% higher) by literally sticking an AC unit on the side of his computer/bench. Gamers Nexus got 14.3k, I think, but that was on an open-air bench with an extra high-powered fan above and below the card to cool it. So even if you were able to hit 14.2k, I wouldn't call it stable for extended play.

Looking at your screenshot, is your memory clocked up to 22 Gbps? That's really good. Just to be safe, try pushing that down a bit to see if it helps at all. The error correction on the memory can reduce performance if you overclock a little too aggressively, even though you won't see a crash.
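That error-correction behaviour is why memory overclocks should be tuned by measured score rather than by the highest offset that doesn't crash. A tiny sketch of the idea (the offsets and scores below are invented for illustration):

```python
# GDDR6X retries corrupted transfers instead of crashing, so past a point a
# higher memory offset *lowers* your benchmark score. Sweep offsets, run the
# benchmark at each, and keep the offset with the best measured score — not
# the highest clock that survives. Scores here are made-up illustrations.
measured = {
    500:  13650,
    800:  13890,
    1000: 13990,
    1200: 13920,   # error correction starts eating into effective bandwidth
    1400: 13710,
}

best_offset, best_score = max(measured.items(), key=lambda kv: kv[1])
print(f"best memory offset: +{best_offset} MHz -> {best_score}")
```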


----------



## LordGurciullo

Does the voltage do anything?

I'm able to do 130 and 1200 stable, it seems... at least in Port Royal... getting an average of 13950 to 14150... Not sure how stable... but also having 100 percent fans... I'll keep playing with it.
Also, anyone doing MSI mode yet?


----------



## HyperMatrix

LordGurciullo said:


> does the voltage do anything?
> 
> I'm able to do 130 and 1200 stable it seems... atleast on port royal... getting avreage of 13950 to 14150... Not sure how stable... but also having 100 percent fans... I'll keep playing with it
> Also anyone doing msi mode yet?


If you want to see what's limiting your card, install GPU-Z, open the sensors tab, and look at the PerfCap Reason bar. It'll tell you whether you're limited by power limit, voltage, voltage reliability, or temperature.
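For the curious, GPU-Z's PerfCap Reason corresponds to NVIDIA's NVML "clocks throttle reasons" bitmask, which you can decode yourself. A sketch (the bit values follow the NVML documentation; treat them as an assumption and verify against nvml.h or pynvml on your system):

```python
# GPU-Z's "PerfCap Reason" maps onto NVML's clocks-throttle-reasons bitmask.
# Bit values below follow the NVML docs — treat them as an assumption and
# double-check against nvml.h / pynvml on your own system.
THROTTLE_REASONS = {
    0x0001: "GPU idle",
    0x0002: "Applications clocks setting",
    0x0004: "SW power cap",
    0x0008: "HW slowdown",
    0x0010: "Sync boost",
    0x0020: "SW thermal slowdown",
    0x0040: "HW thermal slowdown",
    0x0080: "HW power brake slowdown",
    0x0100: "Display clock setting",
}

def decode_throttle(mask: int) -> list:
    """Turn an NVML throttle-reason bitmask into human-readable reasons."""
    return [name for bit, name in THROTTLE_REASONS.items() if mask & bit]

# e.g. a card held back by both the power limit and thermals:
print(decode_throttle(0x0004 | 0x0020))  # ['SW power cap', 'SW thermal slowdown']
```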


----------



## psychrage

Anyone know if Igor's Lab's 3080 FE thermal pad mod is also recommended on the 3090 FE?


----------



## Amdux

psychrage said:


> Anyone know if igorslab's 3080FE thermal pad mod is also recommended on the 3090FE?


In his second video about the 3090 (a German one), he compared the temps to the 3080, each with and without the mod, which was again in the same ballpark: about 5-6 Kelvin lower with the pad mod. So he modded it, and it had benefits.


----------



## vmanuelgm

Flight Simulator 2020, 3440x1440 Ultra, New York (give it some time to watch in 4K):


----------



## shiokarai

Guys, please, enough of FS2020... it's turning into an FS2020 lovers' club.


----------



## Asc3ns|on

Is it possible to flash a BIOS with a higher power limit on a Gigabyte RTX 3090?


----------



## psychrage

Amdux said:


> In his second video about the 3090 (a german one) he compared the temps to the 3080 each with and without mod which was again in the same ballpark about 5-6 Kelvin lower with pad mod. So he modded and it had benefits..


Cool, thanks. Ordered a 3mm thermal pad from Amazon that should be here later today. The 3090 gets here tomorrow/Monday. Hope it's not too long before some waterblocks come along.


----------



## Nizzen

shiokarai said:


> Guys, please, enough of FS2020... it's turning into FS2020 lovers club


*3090 Owner's Club*
Do you have the card? If you don't like that people have a 3090, you're free to leave.


----------



## InHartWeTrust

I was able to pick up a 3090 FE on launch. In reality, I wanted a 3090 Strix OC or FTW3 Ultra, but I snagged the FE while I could because I wanted to make sure I wasn't without a card for months and months (I had, stupidly, sold my 1080 Ti).

I'm okay not having a card to use until late October, so now I'm wondering if I should just sell my FE and then target a Strix when they release? The FE actually seems to run very cool, so it should be able to OC well, but you all understand that stuff much better than I do. Is the Strix going to destroy the FE in terms of ability to OC?

For reference: my gaming will be in VR only, mostly DCS and Assetto Corsa Competizione. Is the incremental boost I might get out of a Strix going to be meaningful?


Sent from my iPhone using Tapatalk


----------



## changboy




----------



## cstkl1

InHartWeTrust said:


> I was able to pick up a 3090 FE on launch. In reality, I wanted a 3090 Strix OC or FTW3 Ultra, but I snagged the FE while I could bc I wanted to make sure I wasn’t without a card for months and months (I had, stupidly, sold my 1080Ti).
> 
> I am okay to not have a card to use until late October, so now I’m wondering if I should just sell my FE and then target a Strix when they release? The FE actually seems to run very cool, so should be able to OC well, but you all understand that stuff much better than I do. Is the Strix going to destroy the FE in terms of ability to OC?
> 
> For reference: My gaming will be in VR only, mostly DCS and Assetto Corsa Competizione. Is the incremental boost I might get out of a Strix going to be meaningful?
> 
> 
> Sent from my iPhone using Tapatalk


Nope.
At high resolutions it's all about horsepower.

A higher power limit GPU is only worth it if the ASIC bin for that GPU boosts high. There's no guarantee with the Strix, as ASUS doesn't bin.

Kingpin, however, is a different story.


----------



## HyperMatrix

changboy said:


>


Just 2010MHz with shunt mods. Not impressed. I'm sure the FTW3 and Strix can hit that on air with fans maxed out as is. But a lot of what we've been seeing is without the proper high-wattage cards under water. I'm positive 2.1GHz+ is possible under water with the right card. Now if only I could convince EVGA or ASUS to send me a card so I can prove it.


----------



## lokran88

Has anyone seen reviews of 3090 custom cards mentioning coil whine so far? I think I've heard about it on the FE.

Just curious, as the coil whine of my 2080 Ti was the loudest part of my PC. Preordered a Strix and hoping that it might be quieter than my previous card.


----------



## ZealotKi11er

HyperMatrix said:


> Just 2010MHz with shunt mods. Not impressed. I'm sure FTW3 and STRIX can hit that on air with fans maxed out as is. But a lot of what we've been seeing is without the proper high wattage cards under water. I'm positive 2.1GHz+ is possible under water with the right card. Now if only I could convince evga or asus to send me a card so I can prove it.


It's cool for some benchmark runs, but not to game at 450W+ power draw.


----------



## sultanofswing

LordGurciullo said:


> Ok heres what I got ... anything over 130 crashes. I got a 13927 with this... But cant seem to go any higher and I really would like 14200 and stable..
> 
> Any thoughts? Also my idle with this aggressive curve is 57c!!
> 
> Thoughts?


Your scores are really close, and it almost sounds like you do not have the NVIDIA Control Panel tweaks set that everyone uses for benchmarks.
Go to your NVIDIA Control Panel, go to "Adjust image settings with preview", check "Use my preference emphasizing", and set that to Performance.
Now go to "Manage 3D settings", scroll down to "Texture filtering", and set that to High performance.

Also make sure you are not running any overlays such as Afterburner/RivaTuner while running the benchmark, as these will lower your score slightly.


----------



## Jordel

ZealotKi11er said:


> Its cool for some benchmark runs but not to game at 450w+ power draw.


I have no problem with gaming at 600+ W TBP; looking forward to seeing how far I can push a 3090 under water.


----------



## TR1PL3D

In my excitement on launch day, I ended up ordering two of the ASUS 3090 ROG Strix OC cards. It looked like one order didn't go through, but I got emails about 10 mins later confirming orders from both sites!

Anyway, I've been reading about the capacitor issues with some cards. Does anyone know if the Strix OC card is affected by this capacitor issue? Thanks


----------



## dante`afk

Look at this beast, though. Not mine.


----------



## LordGurciullo

Ahh, OK... I can try some NVIDIA tweaks. I am getting temperature as the reason it won't go any higher.. Not sure why it's running so damn hot. I'm going to try Afterburner as opposed to Precision X1.

Great score there, Dante, and CDMAN as well! Wonder what settings you guys are using, cause I'm not quite there.

Also, I'm idling at 59°C... any ideas why? On the default fan curve.


----------



## originxt

LordGurciullo said:


> AHH ok... I can try some nvidia tweaks. I am getting temperature as the reason it won't go any higher.. Not sure why its running so damn hot. I'm going to try afterburner as opposed to Precisionx1
> 
> Great score there Dante, and Cdman as well! Wonder what settings you guys are using cause I'm not quite there.
> 
> Also I'm idling at 59... any ideas why? on default fan.


What's your ambient temperature?


----------



## J7SC

vmanuelgm said:


> Flight Simulator 2020 3440x1440p Ultra New York (let some time to watch at 4k):


Nice! For further system testing & integration of your 3090 ;-), try the Icon A5 plane (amphibious)


Spoiler

shiokarai said:


> Guys, please, enough of FS2020... it's turning into FS2020 lovers club


MS FS 2020 loves the 3090, and the 3090 loves MS FS 2020...when they go out together, they dine at the Port Royal and watch Firestrike Ultra light up the night sky.


----------



## HyperMatrix

ZealotKi11er said:


> Its cool for some benchmark runs but not to game at 450w+ power draw.


I used to run quad-SLI with 2 PSUs, so I wouldn't care if the card used 1000W, as long as the performance could justify the power draw. Give me a hypothetical 3090 Ti at 1500W that does 50% better than the 3090, so I could hit my 144Hz at 4K in more games, and I'll buy it its own 1600W PSU. Haha. People are making too big a deal out of the power usage. Yes, it's higher, but it's really not going to make such a huge difference if you're going from a 380-400W OC'd 2080 Ti to a 480W 3090.


----------



## BigMack70

HyperMatrix said:


> I used to run quad-SLI with 2 PSUs. So I wouldnt care if the card used 1000W, as long as the performance could justify the power draw. Like give me a hypothetical 3090Ti that does 50% better than the 3090 so I could hit my 144Hz 4K on more games at 1500W and I’ll buy it it’s own 1600W PSU. Haha. People are making too big a deal out of the power usage. Yes it’s higher but it’s really not going to make such a huge difference if you’re going from a 380-400W OC’d 2080Ti to a 480W 3090.


IMO power draw is only a problem if you can't keep your system cool and quiet; nobody likes having a jet engine next to their monitor. But for most users on this site, that's not a problem. GPUs in particular respond incredibly well to water cooling.


----------



## sultanofswing

Was looking at the Strix 3090 OC, and it appears to be using MP2888A voltage controllers for what look like the core and the memory. These are the same controllers as on the Kingpin 2080 Ti and should be controllable via MSI Afterburner, since they are I2C controllers.
Probably going to be the card I get.


----------



## HyperMatrix

BigMack70 said:


> nobody likes having a jet engine next to their monitor.


You don't want an immersive next-gen FS 2020 experience?


----------



## JackCY

dante`afk said:


> look at this beast tho, not mine.
> 
> View attachment 2460224


Double the score, triple the price. Such an "improvement".



HyperMatrix said:


> You don't want an immersive next-gen FS 2020 experience?


Sure but where are the fighter jets in FS2020? Hmmm... ???


----------



## J7SC

...what? I can't hear you? Speak up please...

FYI, the grills for the GentleTyphoons & rad next to the monitor are being painted... GentleTyphoons are quieter than one would think, and together with two interlinked PSUs they're ready for two w-cooled 3090s... still have a lot of useful power and cooling stuff left over from HWBot back in the day











@JackCY - yes, there's a F18 pack now (external view only)


----------



## GraphicsWhore

vmanuelgm said:


> Flight Simulator 2020 3440x1440p Ultra New York (let some time to watch at 4k):


Thanks for uploading. I went to a bunch of different points in the video and see your total power still only goes up to ~260W? Is that right, or did I miss higher numbers? That's with a shunt?


----------



## HyperMatrix

JackCY said:


> Sure but where are the fighter jets in FS2020? Hmmm... ???


=D









GraphicsWhore said:


> Thanks for uploading. I went to a bunch of different points in the video and see your total power still only goes up to like ~260W? Is that right or did I miss higher numbers? That's with a shunt?


Yeah, he's shunt modded, so it'll report a lower power usage. He said his card alone would peak at 550W.
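The math behind that underreporting is simple; a minimal sketch, assuming the common mod of stacking an identical resistor on top of the stock shunt (all resistor values here are hypothetical):

```python
# Sketch of shunt-mod power underreporting (hypothetical resistor values).
# The controller computes current from the voltage drop across a shunt
# resistor (I = V_drop / R). Stacking an identical shunt in parallel
# halves the effective R, so the controller sees half the real current
# and reports roughly half the real power.

def actual_power(reported_w: float, r_stock_ohm: float, r_modded_ohm: float) -> float:
    """Scale the reported power back up by the shunt resistance ratio."""
    return reported_w * (r_stock_ohm / r_modded_ohm)

# A 5 mOhm stock shunt with another 5 mOhm stacked on top -> 2.5 mOhm:
print(actual_power(260, 0.005, 0.0025))  # 520.0 -> close to the quoted ~550 W peak
```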


----------



## vmanuelgm

Another day with the 3090, benching and playing, not a single crash due to POSCAPs...


----------



## HyperMatrix

J7SC said:


> ...what ? I can't hear you ? Speak up please...
> 
> fyi, the grills for the GentleTyphoons & rad next to the monitor are being painted...GentleTyphoons are quieter than one would think and together with two interlinked PSUs are ready for two w-cooled 3090s,...still have a lot of useful power and cooling stuff left over from HWBot back in the day
> 
> View attachment 2460239
> 
> 
> 
> @JackCY - yes, there's a F18 pack now (external view only)


Yeah... I'm running 6x Delta FFB1212EH fans on my rads. 150 CFM. 56 dBA. 17.4W each. Haha. So... I think I got you beat. 









Delta FFB1212EH-PWM 120mm Extreme High Speed Fan (150 CFM, 56 dBA) - Sleeved


The Delta FFB1212EH-PWM features the highest CFM for a 120x25 mm PWM case fan readily available. With a whopping 150 CFM, the 120mm EH features the same directional fin design characteristic of FFB series.




www.performance-pcs.com


----------



## J7SC

HyperMatrix said:


> Yeah...I'm running 6x Delta ffb1212eh fans on my rads. 150CFM. 56 DBA. 17.4W each. Haha. So...I think I got you beat.
> 
> (...)


Ok, I take my 12x GentleTyphoon 3K marbles home and sulk :-(


----------



## keng

Did anyone experience crashes with the 3090? Be honest.


----------



## keng

ZealotKi11er said:


> Its cool for some benchmark runs but not to game at 450w+ power draw.


This is sensible advice.
The silicon, especially this dense an array, is likely going to start to fail, much like how the 2080 Tis started to die a month into their usage. If you have a money-back guarantee, by all means game for 12+ hours a day for a week and monitor your current draw, core utilization, and error logs.


----------



## JackCY

HyperMatrix said:


> =D


Sadly it's only a broken mod conversion from an older version.


----------



## Thoth420

keng said:


> This is sensible advice.
> The silicon, especially this dense of an array is likely going to start to fail, very much how the 2080tis started to die a month into their usage. If you have moneyback guarantee, by all means, game for 12hours plus a day for a week and monitor your current draw, core util and error logs


^^^
This is quality advice. An adult in a world of impatient children. The need to max OC (on a new architecture, day one, with a day-one driver, using existing known-good methods as if nothing changes) for a gain that is not perceivable in the real world is foolish. Let's see how these cards hold up under their factory parameters.

Let the LN2 guys find the silicon's limits. That is not your job.


----------



## Chamidorix

I'm fairly confident most people are seriously neglecting overclocking the 2nd core rail introduced with Ampere. Previously, in Turing etc., there was only one rail for the entire core, so overclocking would overclock the CUDA cores, cache, everything. Now the cache has at least partly been split off from the main core rail onto a secondary ~1V rail. Everyone is pushing Vcore really high and then leaving the cache at stock settings.

I think if Afterburner and other software update to allow easy "uncore" tweaking, we will see much better results from overclocks. Based on der8auer's latest video, it looks for sure like bumping up the core frequency in Afterburner only bumps the frequency on the main Vcore rail. This is also supported by much better overclocking results in pure RT workloads like Quake (as opposed to hybrid raster/ray tracing in everything else); with only one render path there is far less cache contention.

For Ampere we are going to need to overclock Vcore, Vuncore, and Vmem: 3 things instead of 2 like previous generations. With hard mods we have voltage access to all 3; the question is how we are going to get frequency adjustment for Vuncore. It would be a true shame if only custom Kingpin etc. firmware let us adjust this (just like how custom XOC firmware was the only way to change memory subtimings on Turing).


----------



## J7SC

Chamidorix said:


> I'm fairly confident most people are seriously neglecting overclocking the 2nd new core rail introduced with Ampere. Previously, in Turing etc, there was only one rail for the entire core so overclocking would overclock cuda cores, cache, everything. Now cache is at least in somepart been split off from the main core rail into a secondary ~1V rail. Everyone is pushing Vcore really high and then leaving the cache at stock settings.
> 
> I think if afterburner/software etc updates to allow easy "uncore" tweaking we will see a lot better results from overclocks. Based on Der8aers latest video it looks for sure like bumping up core frequency in afterbuner is only bumping the frequency of the main VCore rail. Also supported by much better overclocking results in pure RT workloads like Quake (as opposed to hybrid raster/raytracing on everything else); with only one render path there is far less cache contention.
> 
> For Ampere we are gonna need to overclock Vcore, Vuncore, and Vmem. 3 things instead of 2 like previous generations. With hard mods we have voltage access to all 3, the question just remains how are we going to have frequency adjustment for Vuncore. It would be a true shame if only custom Kingpin etc firmware let us adjust this (just like how custom XOC firmware was the only way to change memory subtimings in Turing)


Like with every new gen, it won't take long until those secrets come out... oh wait, it's already happening, per the earlier post by @changboy re. der8auer's latest vid






Whether you should do such a mod depends on your long-term use case for the card, your approach to risk, and your knowledge base. I have cards here that were modded 8+ years ago, now 'out to pasture' doing mundane daily work things...


----------



## kx11

i think this is a stupid design decision, that extra fan on the back will touch the ram sticks for most people






1-Clip Booster Installation Guide


Galaxy Microsystems Ltd.




www.galax.com


----------



## J7SC

kx11 said:


> i think this is a stupid design decision, that extra fan on the back will touch the ram sticks for most people
> 
> 
> 
> 
> 
> 
> 1-Clip Booster Installation Guide
> 
> 
> Galaxy Microsystems Ltd.
> 
> 
> 
> 
> www.galax.com


...and still no info / news / specs / pics for the 3090 HoF ?


----------



## bmgjet

What are people doing with the rear ram modules when going watercooled?


----------



## Kleimo

bmgjet said:


> What are people doing with the rear ram modules when going watercooled?


I am going to attach the Alphacool D-RAM block to back of the backplate.


https://www.aquatuning.us/water-cooling/hdd-ram-cooler/ram-water-blocks/19802/alphacool-d-ram-cooler-x4-universal-acetal-black-nickel?c=6480


----------



## Nizzen

bmgjet said:


> What are people doing with the rear ram modules when going watercooled?


Backplate and airflow is good enough. It was good enough for overclocked Titans too.


----------



## vmanuelgm

We should make a survey to know which models are being affected by crashes...


----------



## nievz

Can AMD Ryzen 3700X handle the RTX 3090? Will it bottleneck? Watch my Warzone performance stats!

Warzone BR Solos victory featuring GeForce RTX™ 3090 GAMING X TRIO 24G.

1440p gaming resolution
All settings MAXED out with RTX turned ON.

My build:
AMD Ryzen 7 3700X @ 4.25GHz all-core OC @ 1.325V
GeForce RTX™ 3090 GAMING X TRIO 24G, +40 core, +500 mem, +50 vcore
32GB G.Skill Trident Z CL16 (tweaked using Ryzen DRAM Calc)
Legion Y27GQ-25 240hz Gsync V2 Monitor


----------



## dante`afk

game looks like garbage tho


----------



## Jared Pace

Some 8K RTX 3090 gaming here:


----------



## HyperMatrix

Jared Pace said:


> Some 8K RTX 3090 gaming here:


I'm actually surprised at how well native 8K is working with his crappy 1815MHz clock. With a proper 2.1GHz+ card you'd be looking at:

RDR2 - 38 FPS
AC Odyssey - 50 FPS
HZD - 43 FPS
FS 2020 - 37 FPS
BF V - 54 FPS
Crysis Remastered - 34 FPS

If those games got DLSS support, we'd be looking at quite a few games that could do 8K60. But I think we need another generation or two before we can get proper 8K60 with ray tracing, and at that point I'd prefer 4K/240Hz over 8K/60Hz.
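For what it's worth, estimates like those amount to naive linear clock scaling; a sketch (the 33 FPS base value is hypothetical, and real scaling is usually somewhat worse, since memory bandwidth stays fixed):

```python
def scaled_fps(fps_at_base: float, base_mhz: float, target_mhz: float) -> float:
    # Naive assumption: frame rate scales linearly with core clock.
    return fps_at_base * (target_mhz / base_mhz)

# e.g. a hypothetical 33 FPS at the video's 1815 MHz, projected to 2100 MHz:
print(round(scaled_fps(33, 1815, 2100)))  # 38
```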


----------



## Spiriva

New NVIDIA drivers are out with improvements for the RTX 30 series



https://us.download.nvidia.com/Windows/456.55/456.55-desktop-win10-64bit-international-whql.exe



456.55 WHQL drivers. These drivers provide support for NVIDIA Reflex in the blockbuster titles Call of Duty: Modern Warfare and Call of Duty: Warzone, and offer the best experience in Star Wars: Squadrons. The new Game Ready Driver also improves stability in certain games on RTX 30 Series GPUs.


----------



## BigMack70

This thing looks epic... too bad it's gonna probably cost like $2500


----------



## stefxyz

Problem is, it looks like you can't change the little 360 radiator. I have all my watercooling installed in my desk, with much more cooling capacity to run fans below audible levels (two 480mm radiators for the GPU alone) with the new 12-inch Noctuas.


----------



## Zurv

BigMack70 said:


> This thing looks epic... too bad it's gonna probably cost like $2500


Likely more. The 2080 Ti version was $2500, and the 2080 Ti cost less than the 3090.
(Then a costly waterblock for another $300, which sucked on the 2080 Ti version.)
One costly card. I had 2 Kingpins in SLI that I thankfully sold 2 weeks before the launch of the 3080.

EDIT: I'm wrong. I just remembered I paid more in all (with waterblocks) than for my RTX Titan - which was still faster.


----------



## BigMack70

Zurv said:


> Likely more. The 2080 ti version was $2500 and the 2080ti cost less than the 3090.
> (then.. costly waterblock for another $300 (which sucked on the 2080ti version.)
> one costly card. I had 2 kingpins in SLI that i thankfully sold 2 weeks before the launch of the 3080.


2080 Ti version was $1900


----------



## Shawnb99

BigMack70 said:


> This thing looks epic... too bad it's gonna probably cost like $2500



Not a fan of the placement of that LCD panel. I guess they were limited in where to put it, but I do not like that at all. Everything else on it looks nice, like the hint of copper behind the fan, but nope, that's a hard no with that screen for me.



stefxyz said:


> Problem is it looks like you cant change the little 360 radiator. I got all my watercooling installed in my desk and much more cooling capacity to run fans below audible levels (2 480mm radiators for gpu alone) with the new 12 inch noctuas.


They say there will be a Hydro Copper model this time, though they promised that last time and instead forced you to buy the block separately, then took it off the market immediately, so everyone who missed out was SOL.
You should still be able to remove it. I think making it an AIO is a stupid concept in the first place; if you can afford the card, then you can afford a custom loop to cool it.


----------



## originxt

Shawnb99 said:


> They say there will be a hydrocopper model this time, though they promised that last time and instead forced you to buy the block separate then took it immediately off the market so everyone who missed out was SOL.
> Should still be able to remove it. I think making it an AIO is a stupid concept in the first place, if you can afford the card then you can buy a custom loop to cool it.


Easy way to upsell for an additional $100-200. No one buying the card would realistically leave it on an AIO for cooling, but that's the pessimist in me talking. Creates e-waste, if you care about that sort of thing.

Maybe the AIO is usable on another GPU if you have a fan for the VRMs? I've heard the mounting holes are a bit different on Ampere, but I'm unsure.


----------



## Thanh Nguyen

Anyone know if more cards are coming? Waited for a week and have nothing.


----------



## Wihglah

Thanh Nguyen said:


> Anyone know more card coming? Waited for a week and have nothing.


I have an ETA for October 30th


----------



## originxt

Secured an FTW3 for $2100. Factoring in tax etc., I didn't overpay too much. Whether or not it fits in my case lengthwise? Good chance it won't, but I'll move the reservoir if it's a problem lol. I'll have it tomorrow or so.


----------



## J7SC

Shawnb99 said:


> (...) They say there will be a hydrocopper model this time, though they promised that last time and instead forced you to buy the block separate then took it immediately off the market so everyone who missed out was SOL.
> Should still be able to remove it. I think making it an AIO is a stupid concept in the first place, if you can afford the card then you can buy a custom loop to cool it.


...yeah, with a full GPU 1080x55 custom loop already installed, I hope they'll actually deliver the already-announced w-block for the 3090 KPE this time around. The Galax/KFA2 3090 HoF should also come with a full w-block option but is rarely available here in the Great White North. That leaves the KPE and Aorus Xtreme WB as my primary 3090 considerations


----------



## dante`afk

originxt said:


> Secured a ftw3 for $2100. Factoring tax and etc, I didn't overpay too much. Whether or not it fits in my case length wise? Good chance it won't but I'll move the rad if its a problem lol. I'll have it tomorrow or so.


did you order it today or last week?


----------



## originxt

dante`afk said:


> did you order it today or last week?


Bought it locally from someone about an hour away. Don't have time to pick it up today, so going tomorrow.


----------



## MonnieRock

BigMack70 said:


> This thing looks epic... too bad it's gonna probably cost like $2500



Are there any pictures of the back of the PCB to show what layout the capacitors are?


----------



## domenic

Over the weekend I ordered the EKWB Strix block & backplate from their site. I replied to my order confirmation asking the question below but received this nonsensical reply. Argh.









EK Water Blocks for ROG Strix NVIDIA RTX 30 Series Graphics Cards Are Now Available - ekwb.com


EK®, the leading computer cooling solutions provider, is ready to offer its premium high-performance GPU water block for ASUS® ROG® Strix edition NVIDIA® GeForce® RTX™ 30 Series graphics cards. This huge new water block is named EK-Quantum Vector Strix RTX 3080/3090 D-RGB, and it is very...




www.ekwb.com












EK-Quantum Vector Strix RTX 3080/3090 D-RGB - Nickel + Plexi


This is the 2nd generation Vector GPU water block from the EK® Quantum Line, designed for graphics cards based on the latest NVIDIA® Ampere™ architecture. For a precise compatibility match of this water block, we recommend you refer to the EK Cooling Configurator.




www.ekwb.com












EK-Quantum Vector Strix RTX 3070/3080/3090 Backplate - Black


EK-Quantum Vector Strix RTX 3080/3090 Backplate - Black is a CNC machined retention backplate made from black anodized aluminum, that fits all EK-Quantum Vector RTX 3080/3090 water blocks. EK® Quantum - Design & Performance Vector series backplates are part of the EK Quantum Product Line that...




www.ekwb.com


----------



## originxt

domenic said:


> Over the weekend I ordered the EKWB Strix block & backplate from their site. I replied to my order confirmation asking the below question but received this nonsensical reply. Argh.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK Water Blocks for ROG Strix NVIDIA RTX 30 Series Graphics Cards Are Now Available - ekwb.com
> 
> 
> EK®, the leading computer cooling solutions provider, is ready to offer its premium high-performance GPU water block for ASUS® ROG® Strix edition NVIDIA® GeForce® RTX™ 30 Series graphics cards. This huge new water block is named EK-Quantum Vector Strix RTX 3080/3090 D-RGB, and it is very...
> 
> 
> 
> 
> www.ekwb.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK-Quantum Vector Strix RTX 3080/3090 D-RGB - Nickel + Plexi
> 
> 
> This is the 2nd generation Vector GPU water block from the EK® Quantum Line, designed for graphics cards based on the latest NVIDIA® Ampere™ architecture. For a precise compatibility match of this water block, we recommend you refer to the EK Cooling Configurator.
> 
> 
> 
> 
> www.ekwb.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK-Quantum Vector Strix RTX 3070/3080/3090 Backplate - Black
> 
> 
> EK-Quantum Vector Strix RTX 3080/3090 Backplate - Black is a CNC machined retention backplate made from black anodized aluminum, that fits all EK-Quantum Vector RTX 3080/3090 water blocks. EK® Quantum - Design & Performance Vector series backplates are part of the EK Quantum Product Line that...
> 
> 
> 
> 
> www.ekwb.com
> 
> 
> 
> 
> View attachment 2460302


Well, at minimum it looks like they'll provide thermal pads. Just unsure if they'll be for the memory.










I really like their generic emails since the reveal on Sep 2nd.


----------



## DrunknFoo

Shawnb99 said:


> Not a fan of the placement of that LCD panel. I guess there were limited in where to put it but I do not like that at all. Everything else on it looks nice, like the hint of copper behind the fan but nope that's a hard no with that screen for me.
> 
> 
> 
> They say there will be a hydrocopper model this time, though they promised that last time and instead forced you to buy the block separate then took it immediately off the market so everyone who missed out was SOL.
> Should still be able to remove it. I think making it an AIO is a stupid concept in the first place, if you can afford the card then you can buy a custom loop to cool it.


They need to give us the option to buy the bare board, especially since it is a specialized product for enthusiasts.


----------



## changboy




----------



## J7SC

I'm wondering whether rigging up a universal GPU cooler on the backplate of a 3090 might be worth it for additional VRAM cooling, i.e. flush-mounting something like the Swiftech MCW82 to the backplate and using the GPU outlet tube as its input, etc. We have quite a few of those universal GPU coolers in storage from older server builds, and I think I might try that.


----------



## changboy

J7SC said:


> I'm wondering whether rigging up a universal GPU cooler on the back plate of a 3090 might be worth it regarding additional VRAM cooling, i.e. flush-mount s.th. like the Swiftech MCW82 to the back plate and using the GPU outlet tube as input, etc. We got quite a few of those universal GPU coolers in storage from older server builds, and I think I might try that.
> 
> View attachment 2460318


It will work for sure, at least if your back plate is made of metal.


----------



## J7SC

changboy said:


> It will work for sure, at least if your back plate is made of metal.


Yeah, I think it would work... so obviously take the paint off at that spot of the (metal) backplate, as well as off the stand-offs of the MCW82, and drill 4 corresponding holes in the backplate where they won't interfere when mounted.


----------



## LordGurciullo

Well, using Afterburner has given me much better results. I did some things and apparently got the best-in-America score, even beating JayzTwoCents!
This card is a beast! Clearly totally not playable in real-world scenarios (that's more like 14000), which is still fine. 

Also, you guys are totally next-level crazy elite! Shunt mods, custom water cooling, custom software to change rail voltages...
I don't think I can get that nuts (maybe when I have more time and money), but for just plugging this baby in and doing some things, I'm very impressed. 
I'm sure you guys will destroy my score later but... for now... Here it is!


----------



## DrunknFoo

LordGurciullo said:


> Well. Using afterburner has given me much better results. I did some things and apparently got best in america score even beating Jayz2cents!
> This card is a beast! Clearly totally not playable in real world scenarios (that's more like 14000) which is still fine.
> 
> Also you guys are totally next level crazy elite! Shunt mods, custom water cooling, custom software to change rail voltages...
> I don't think I can get that nuts (maybe when I have more time and money) but for just plugging this baby in and doing some things I'm very impressed.
> I'm sure you guys will destroy my score later but... for now... Here it is!
> View attachment 2460321


very nice!


----------



## J7SC

LordGurciullo said:


> Well. Using afterburner has given me much better results. I did some things and apparently got best in america score even beating Jayz2cents!
> This card is a beast! Clearly totally not playable in real world scenarios (that's more like 14000) which is still fine.
> 
> Also you guys are totally next level crazy elite! Shunt mods, custom water cooling, custom software to change rail voltages...
> I don't think I can get that nuts (maybe when I have more time and money) but for just plugging this baby in and doing some things I'm very impressed.
> I'm sure you guys will destroy my score later but... for now... Here it is!
> View attachment 2460321


Monsta !


----------



## nievz

LordGurciullo said:


> Well. Using afterburner has given me much better results. I did some things and apparently got best in america score even beating Jayz2cents!
> This card is a beast! Clearly totally not playable in real world scenarios (that's more like 14000) which is still fine.
> 
> Also you guys are totally next level crazy elite! Shunt mods, custom water cooling, custom software to change rail voltages...
> I don't think I can get that nuts (maybe when I have more time and money) but for just plugging this baby in and doing some things I'm very impressed.
> I'm sure you guys will destroy my score later but... for now... Here it is!
> View attachment 2460321


What card are you using bro and what are your afterburner settings?


----------



## LordGurciullo

EVGA FTW3 Ultra 3090, +170 +1300, max everything else


----------



## HyperMatrix

LordGurciullo said:


> evga ftw ultra 3090 +170 +1300 max everything else


+1300 on mem is amazing man. Have you tried the run again with the new drivers to see if it changes the performance?


----------



## cstkl1

HyperMatrix said:


> Just 2010MHz with shunt mods. Not impressed. I'm sure FTW3 and STRIX can hit that on air with fans maxed out as is. But a lot of what we've been seeing is without the proper high wattage cards under water. I'm positive 2.1GHz+ is possible under water with the right card. Now if only I could convince evga or asus to send me a card so I can prove it.


Don't bother with him.

There's a user on YouTube who has a shunt-modded, watercooled Giga 3090 running a fixed 2100. He had it way before the NDA release date, like a week before.
@vmanuelgm


----------



## Krzych04650

Not a 3090 but still... Stumbled upon it by accident


----------



## HyperMatrix

Derbauer modifies a 6 SP-CAP board to 4 SP-CAPS and 20 MLCCs:


----------



## ring0r

Does anyone know when it will be possible to flash the BIOS via NVFlash?


----------



## GanMenglin

ring0r said:


> Does anyone know when it will be possible to flash the BIOS via NVFlash?


I've got NVFlash 5.660 and flashed my MSI Ventus OC with the ROG Strix 480W BIOS.


----------



## HyperMatrix

GanMenglin said:


> I've got the nvflash 5.660, and flashed my MSI ventus OC with ROG strix 480W bios.


Did that improve your OC at all?


----------



## GanMenglin

HyperMatrix said:


> Did that improve your OC at all?


Actually it's messed up right now...

I've tried the Strix and Trio BIOSes; neither works properly on the Ventus. But the Strix BIOS seems to work on the iGame Advanced.

I'm still trying to figure out what's wrong with my card... I believe it's about the C-states or P-states


----------



## ring0r

Okay, I'd rather not try yet :|


----------



## domenic

Below is an update on my correspondence with EKWB after I paid for my Strix 3090 block & backplate pre-order. I simply asked for more detail on how they are cooling the back-side memory chips.

Apparently, letting a customer who purchased the product know whether they are including thermal pads with the backplate specifically to cover the memory chips is some sort of state secret...


----------



## GanMenglin

For those of you who want the NVFlash utility for RTX 30, you may download it here:



http://www.inno3d.com.cn/support_download_detail_get.php?refid=91



it's included in Inno3D's overclocking tool; after installation, you can find it in the install directory 

@zhrooms


----------



## GanMenglin

I think I might have found out what the problem is:

The Strix BIOS is for a 3x8-pin PCB. When I use it on my 2x8-pin Ventus, it only "shows" 450W, but it actually budgets the extra 150W for a "3rd 8-pin" that doesn't exist.

My power meter shows my PC using 470W in total, so there's no way the 3090 is drawing 450W.

That's why the Strix BIOS can be used on my friend's iGame 3090 Advanced: it uses the 3x8-pin layout.

So does anyone know of / have a high-TDP BIOS for 2x8-pin cards?
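The arithmetic checks out against the nominal PCIe power ratings; a quick sketch (the per-input budgets below are the spec numbers, not what any particular BIOS actually assigns per rail):

```python
# Nominal PCIe power budget per board layout.
PIN8_W = 150   # each 8-pin PCIe connector is rated for 150 W
SLOT_W = 75    # the PCIe slot itself supplies up to 75 W

def board_budget_w(n_8pin: int) -> int:
    """Total nominal input budget for a board with n 8-pin connectors."""
    return n_8pin * PIN8_W + SLOT_W

print(board_budget_w(3))  # 525 -> a 3x8-pin Strix can back a 450-480 W BIOS
print(board_budget_w(2))  # 375 -> a 2x8-pin Ventus can't; the BIOS budgets
                          #        power for an input that isn't there
```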


----------



## Johneey

GanMenglin said:


> I think I might find out what the problem is:
> 
> strix's bios is for 3x8pin PCB, when I use it on my 2x8pin ventus, it only "shows" there was 450W, but actually it take the extra 150W from the "3rd 8pin" which actually didn't exist.
> 
> My power meter shows my PC use 470W in total, there's no way for the 3090 use 450W.
> 
> That's why the strix bios can be use on my friend's igame 3090 advanced because it use the 3x8pin layout.
> 
> So does anyone know/has a high-TDP bios for 2x8pin cards?


I have a Palit 3090 Gaming with a 365W power limit


----------



## Dickenud3L

You flashed the Strix BIOS on your Ventus?


----------



## zhrooms

GanMenglin said:


> For your guys who want to have the nvflash utility for RTX30, you may download it here:
> 
> 
> 
> http://www.inno3d.com.cn/support_download_detail_get.php?refid=91
> 
> 
> 
> it's included in the inno3d's overclocking tool, after installation, you can find it in the directory
> 
> @zhrooms


Yep, holy crap. The latest NVFlash is 5.620 on TPU, updated June 8th, 2020.

This is 5.660. Inno3D messed up; this was clearly not meant to be out yet.

*Uploaded nvflash64 directly here* nvflash64_5.660.0_ampere.zip

*Password:* Ampere


----------



## GanMenglin

zhrooms said:


> Yep holy crap, latest NVFlash is 5.620 on TPU, updated June 8th, 2020.
> 
> This is 5.660, Inno3D ****ed up, this was clearly not meant to be out yet.
> 
> *Uploaded nvflash64 directly here* nvflash64_5.660.0_ampere.zip
> 
> *Password:* Ampere


Have you got your 3090 yet? Which one will you take? I got some info that MSI will release a real flagship card later.

By the way, for 2x8-pin cards, I think Gigabyte's Gaming OC has the highest TDP at the moment, but I can't find the BIOS.


----------



## GanMenglin

Dickenud3L said:


> You flash the Strix Bios on your Ventus?


Yeah, haha, but it's not working properly.

The Strix BIOS only works properly on cards that have 3x8-pin power.


----------



## zhrooms

GanMenglin said:


> Have you got your 3090 yet? Which one will you take? I got some info that MSI will release a real flag-ship card later.
> 
> by the way, for 2x8pin cards, I think gigabyte's gaming oc has the highest-tdp at this moment. But can't find the bios.


Yeah, GPU-Z is not updated yet for the latest cards, so it can't save BIOSes and upload them to the TPU database. We have to save them manually, upload them to file-hosting services, and share them here.

And no, I have not decided yet which card I'm going for. It'll probably come down to BIOS flashing: "if" you can flash a cheap reference-PCB 18-power-stage card, that's probably what I'll go for.


----------



## mirkendargen

GanMenglin said:


> I think I might find out what the problem is:
> 
> strix's bios is for 3x8pin PCB, when I use it on my 2x8pin ventus, it only "shows" there was 450W, but actually it take the extra 150W from the "3rd 8pin" which actually didn't exist.
> 
> My power meter shows my PC use 470W in total, there's no way for the 3090 use 450W.
> 
> That's why the strix bios can be use on my friend's igame 3090 advanced because it use the 3x8pin layout.
> 
> So does anyone know/has a high-TDP bios for 2x8pin cards?


Yes, I said this earlier in the thread. Flashing BIOSes across cards with different power-connector configs has never worked. Someone with a Gaming X Trio should try flashing the Strix BIOS, though.


----------



## vmanuelgm

We could upload our own BIOSes here while we wait for the new GPU-Z to upload them to the TPU database.

Here is mine, from the Giga Gaming OC 3090:









377.3 KB file on MEGA







mega.nz


----------



## GanMenglin

vmanuelgm said:


> We could upload here our own bios, while we wait for new gpu-z to upload them to their database..
> 
> Here is mine, Giga Gaming OC 3090:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 377.3 KB file on MEGA
> 
> 
> 
> 
> 
> 
> 
> mega.nz


Thank you! I'm trying the Eagle OC 385W BIOS now; later I'll try yours.


----------



## Johneey

How did you dump your BIOS? GPU-Z shows a "not possible" error


----------



## GanMenglin

Johneey said:


> how u get ur bios down? gpu z shows not possible error


Use NVFlash:

nvflash64.exe --save xxx.rom
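For the full dump-and-flash sequence people in this thread are attempting, here is a sketch driven from Python. The filenames are placeholders, `--save`, `--protectoff`, and `-6` are the commonly cited NVFlash options, and the whole thing is guarded with `shutil.which` so it's a harmless no-op on machines without `nvflash64`. Cross-flashing a board with a different power-connector layout is at your own risk, per the warnings earlier in the thread.

```python
# Sketch of the usual "back up first, then flash" NVFlash sequence.
import shutil
import subprocess

def flash_sequence(target_rom: str) -> list[str]:
    cmds = [
        ["nvflash64", "--save", "backup.rom"],  # dump the current BIOS before anything else
        ["nvflash64", "--protectoff"],          # disable the EEPROM write protection
        ["nvflash64", "-6", target_rom],        # flash; -6 overrides the PCI subsystem ID mismatch
    ]
    if shutil.which("nvflash64"):               # only run where the tool actually exists
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return [" ".join(c) for c in cmds]

print(flash_sequence("strix_480w.rom"))
```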


----------



## Johneey

Here is the Palit GamingPro 3090 BIOS, but with a low 365W power limit


Palit GamingPro3090.rom beim Filehorst - filehorst.de


----------



## Benni231990

My question is:

Will the RTX 3090 Strix BIOS with 480W work on the TUF Gaming 3090 with 2 power connectors?

Or is it possible for somebody to edit the TUF BIOS with a higher power offset, from 7% to 25%?


----------



## mirkendargen

Benni231990 said:


> My question is:
> 
> will the RTX 3090 Strix BIOS with 480W work on the TUF Gaming 3090 with 2 power connectors?
> 
> Or is it possible for somebody to edit the TUF BIOS with a higher power offset, from 7% to 25%?


Extremely unlikely; this never worked on any 2080 Tis. Only XOC BIOSes with no power limits at all worked across different power connector configs.


----------



## Benni231990

Ok xD

So I hope somebody edits the TUF BIOS.


----------



## vmanuelgm

Benni231990 said:


> My question is:
> 
> will the RTX 3090 Strix BIOS with 480W work on the TUF Gaming 3090 with 2 power connectors?
> 
> Or is it possible for somebody to edit the TUF BIOS with a higher power offset, from 7% to 25%?


Can you please upload the TUF 3090 BIOS?

What is the maximum power limit on the TUF 3090?


----------



## zhrooms

I've started to gather a few here until GPU-Z is updated with BIOS uploading to the TPU VGA BIOS Collection

(Just under the original post of this thread) [Official] NVIDIA RTX 3090 Owner's Club

If your card's BIOS isn't there, feel free to upload yours and tag me (@zhrooms) and I'll add it.


----------



## chispy

Thank you guys for all the information and for sharing the new nvflash utility. I'll share the BIOS of my Asus TUF OC RTX 3090 once it gets here, hopefully before the weekend. I appreciate the wealth of information, guys; again, thanks a lot, we needed this. Let the flashing games begin...


----------



## Dickenud3L

The Asus TUF OC has only 375W.
The Strix BIOS doesn't work: it shows 440-460W, but the card gets stuck at 1650-1700MHz core clock, and if you OC the card it crashes.
Flashed back to the TUF OC BIOS.

Has anybody tried the Gigabyte Gaming OC BIOS with 390W?


----------



## GanMenglin

@vmanuelgm I saw you in first place in the 3DMark FSE single-graphics-card ranking. 

Did you get that score with your Gaming OC? LN2 + shunt mod?


----------



## zhrooms

Dickenud3L said:


> The Strix BIOS doesn't work: it shows 440-460W, but the card gets stuck at 1650-1700MHz core clock, and if you OC the card it crashes.
> Flashed back to the TUF OC BIOS.


Same exact issue another user reported: flashed a 3x8-pin BIOS onto a 2x8-pin card, and the core clock was stuck around 1700MHz.
_(XC3 to FTW3)_


----------



## GanMenglin

Dickenud3L said:


> The Asus TUF OC has only 375W.
> The Strix BIOS doesn't work: it shows 440-460W, but the card gets stuck at 1650-1700MHz core clock, and if you OC the card it crashes.
> Flashed back to the TUF OC BIOS.
> 
> Has anybody tried the Gigabyte Gaming OC BIOS with 390W?


I'm using the gaming oc 390w bios now, it's perfect for my ventus (2x8pin)


----------



## Benni231990

**** xD so we have to hope somebody edits the TUF BIOS, or flash the Gaming OC BIOS.

Can somebody write a step-by-step manual for the flash process?


----------



## Johneey

GanMenglin said:


> I'm using the gaming oc 390w bios now, it's perfect for my ventus (2x8pin)


Do you think I can flash the Gigabyte 3090 390W BIOS to my Palit GamingPro 3090?


----------



## GanMenglin

Johneey said:


> Do you think I can flash the Gigabyte 3090 390W BIOS to my Palit GamingPro 3090?


I would flash it if I had one, haha


----------



## GanMenglin

Benni231990 said:


> **** xD so we must hope somebody edit the tuf bios or we must flash the gaming OC bios
> 
> can somebody write a manual step by step the flash process


You can find the full steps in the 2080 Ti Owner's Club thread.


----------



## Johneey

GanMenglin said:


> I will flash it if I got one, haha


I'm worried because I got a good chip: https://www.3dmark.com/3dm/51013397 ...


----------



## Benni231990

It's even better then; you can hold a higher boost for longer.


----------



## zhrooms

GanMenglin said:


> You can find the full steps in the 2080 Ti Owner's Club thread.


Yeah, I added it to the 3080/3090 threads too; it still shows 2080 Ti pics, but the process works exactly the same.


----------



## JThaGray

My 3090 FTW3 Ultra comes in today. Has anyone flashed one with the Asus 3090 Strix OC BIOS from TechPowerUp to get 480W vs. the stock 440W?


----------



## Benni231990

Thanks for adding the tutorial. 

When my TUF arrives I will flash the Gaming OC BIOS. 

Is it difficult to edit the BIOS?


----------



## JThaGray

Benni231990 said:


> thanks for add the tutorial
> 
> when my tuf is coming i will flash the gaming oc bios
> 
> is it difficult to edit the bios?


It was not too bad on the 1080 Ti cards I had, but things change, and not every edit resulted in a working BIOS. Also, the security checksum was a pain to get correct.


----------



## Benni231990

Ok, this doesn't sound easy xD

Why can't we back up a BIOS, open a program, load the BIOS, edit the power target to maybe 420-450W, save it, and flash the card? xDDD


----------



## vmanuelgm

GanMenglin said:


> @vmanuelgm I saw you on the 1st place of 3Dmark FSE 1x grahpic card score.
> 
> did you get that score with your gaming oc? LN2 + shunt mod?


Nope, it's just shunt modded and watercooled.

There are some LN2 results in Port Royal, with Kingpin and rbuass leading in first and second, then me.


----------



## martin28bln

Hey guys. I also have a Gainward Phoenix 3080 and have it watercooled. Which BIOS has the maximum PT for a 3080 with 2x8-pin?


----------



## Johneey

martin28bln said:


> Hey guys. I also have a Gainward Phoenix 3080 and have it watercooled. Which Bios is the one with maximum PT for 3080 and 2x8Pin?


So far the TUF, but you need the BIOS; go search for one from the TUF.


----------



## martin28bln

But the TUF is not a reference PCB. Has anybody flashed this BIOS to a reference-PCB 3080?


----------



## GanMenglin

vmanuelgm said:


> Nope, its just shunt modded and watercooled.
> 
> U got some LN2 results in PortRoyal, with Kingpin and rbuass leading in the first and second, then me.


I'm planning to shunt mod my card while I wait for my water block. I'm also thinking of changing my Ventus to a Trio, so I can use the Strix 480W BIOS...


----------



## vmanuelgm

GanMenglin said:


> I’m planning to shunt mod my card and wait for my water block. And also thinking change my ventus to trio, then can use the strix 480w bios...


Nice choice, Strix and Trio are good cards...


----------



## J7SC

...watched that video by der8auer exchanging caps on the back of the GPU... per the thumbnail below, it makes me cringe for some reason, perhaps because my own soldering skills are 'rudimentary' at best.

Also, KingPin just took the overall Port Royal record with 2x 3090s (2415 MHz, 1319 MHz). Fortunately, there's only one other 2x 3090 entry in the Port Royal HoF (#2 rank), so I'm still in the top 30 for a tiny bit longer. Also still waiting for the Galax 3090 HOF to show up, or even to see a pic of it, hmm.


----------



## GanMenglin

vmanuelgm said:


> Nice choice, Strix and Trio are good cards...


by the way, do you know how to shunt mod the trio?


----------



## GanMenglin

deleted


----------



## vmanuelgm

GanMenglin said:


> by the way, do you know how to shunt mod the trio?





https://www.techpowerup.com/review/msi-geforce-rtx-3090-gaming-x-trio/images/front_full.jpg



You can start by shunting the three R005s closest to the PCIe power connectors.


----------



## J7SC

vmanuelgm said:


> https://www.techpowerup.com/review/msi-geforce-rtx-3090-gaming-x-trio/images/front_full.jpg
> 
> 
> 
> U can start shunting the 3 r005's closest to the pciexpress power connectors.


What's your take on the other two shunts near the 3x 8-pin PCIe connectors (five total)? der8auer's shunt mod (on 2x PCIe, but also with a total of five) was a bit confusing. 


Spoiler


----------



## JThaGray

Tone out the shunts from each power connector, then add one at a time and see what GPU-Z shows, and make notes.


----------



## Benni231990

But Igor's Lab says it's not the best solution, because the GPU itself also has a power limit that caps the card, so you can't boost higher.

Is this true or not?


----------



## Pepillo

3090 MSI Gaming X Trio with Strix Bios, works really well:


----------



## Nizzen

Pepillo said:


> 3090 MSI Gaming X Trio with Strix Bios, works really well:


How did you flash? Care to share? <3


----------



## bmgjet

Is there a members list up yet?
What's considered a good chip?
Just plugged mine in.
3DMark holds 2050MHz without touching anything, with a few spots dropping to 1920 when it hits the power limit.

One odd thing is it never downclocks.
Idle is 1725MHz @ 0.9V, 26C.


----------



## NCC-1701-A

Which card should I prefer, the 3090 FE or the 3090 Trio?


----------



## Pepillo

Nizzen said:


> How did you flash? Care to share? <3


It's all in the first post of this thread: the BIOS, nvflash, and how to do it.


----------



## Hulk1988

Pepillo said:


> 3090 MSI Gaming X Trio with Strix Bios, works really well:


Is that real? Please be so kind and be honest. I have a Trio and need a higher PT. I'm afraid of getting a black screen.


----------



## Pepillo

Hulk1988 said:


> Is that real? Please be so kind and be honest. I have a Trio and need higher PT. I am afraid to get the blackscreen.


Of course it's real, don't you see the screenshot? There are a couple of minor problems: the fans only spin up to 3,000 RPM (down from 3,200), temperatures are higher (inevitable, with higher power), and one DP port seems not to work, but the other two and HDMI run smoothly (I imagine that since the Strix has one more port, one fails). For everything else, as far as I could test today, it's a great BIOS, highly recommended. And you can tell more from gaming than from benchmarks; the clock stability is amazing.

But please remember that there is always some risk in flashing a BIOS; do it at your own risk.


----------



## NCC-1701-A

Which card should I prefer, the 3090 FE or the 3090 Trio?

Please help me


----------



## changboy

deleted media.


----------



## Hulk1988

Pepillo said:


> Of course it's real, don't you see the screenshot? A couple of minor problems, the fans only rotate up to 3.000 RPM (up from 3.200), higher temperatures (inevitable, higher power), and one DP port seems not to work, but the other two and HDMI smoothly (I imagine that having the Srtrix one more port fails). For everything else, and as far as I could try it today, great bios, highly recommended. And you can tell more about playing than with benchs, the clock stability is amazing.
> 
> But please remember that there is always some risk in flashing the bios, doing it at your own risk.


Thank you! Which of the two Asus Strix BIOSes on TechPowerUp did you use?


----------



## HyperMatrix

changboy said:


>


I almost got super excited, thinking it was the 3090, when he said he was running 2145MHz at the start of the video before the crash. Lol. Warn a brother before linking 3080 videos in the 3090 thread.


----------



## changboy

HyperMatrix said:


> I almost got super excited thinking it was the 3090 when he said he was running 2145MHz at the start of the video before the crash. Lol. Warn a brother before linking 3080 videos in the 3090 thread.


It was to give an idea of what is coming; sorry for showing this video.
I will never make that kind of mistake again, and I'm so sorry you got so sad after you saw that video.
Next time, if you see a video with "3080 Strix" written in big letters, just don't look at it.


----------



## bmgjet

bmgjet said:


> Is there a members list up yet?
> What's considered a good chip?
> Just plugged mine in.
> 3DMark holds 2050MHz without touching anything, with a few spots dropping to 1920 when it hits the power limit.
> 
> One odd thing is it never downclocks.
> Idle is 1725MHz @ 0.9V, 26C.



Quick overclock, since the power limit and temperature behavior are too aggressive on this card (XC3 Ultra):
+130 core, then into the curve editor to change the 0.9V bin to 2095MHz, which stops it boosting to 2175MHz and crashing on the last couple of frames.
+500 memory. (It probably has more in it, since frames were still going up with each change, but the backplate is getting hot as F, so I don't want to push memory too hard when it's only got a slow fan blowing on the back.)

Time Spy hangs out around 2095MHz with drops to 1995MHz.
Port Royal hangs out around 1995MHz with drops to 1915MHz.

Temps 58C, fans 100%.

Time to play some games and validate this as stable, then wait for tonight when the temps drop to do some more overclocking.


----------



## HyperMatrix

changboy said:


> It's for giving idea of what is coming, sorry for showing this vidéo.
> I will never do that kind of mistake again, and so sorry you come so sad after you seen that vidéo.
> Next time if you see on a vidéo its write in big letter 3080 strix just dont look at it.


Haha, you don't know how much I'd love to be able to do 2200MHz on a 3090. So seeing 2145MHz on air, I thought for sure that with water cooling and shunt mods I could do it. If we can actually hit those clocks... then it means it really will be a golden generational performance increase over the 2080 Ti, without question. It's ok to share the video, just put a little warning text for people like me who blindly click play without looking. Haha.


----------



## Mad Pistol

Welp... I just broke down and bought a 3090 TUF from Amazon. Was going to get a 3080, but no dice. Yay for upselling!


----------



## mirkendargen

HyperMatrix said:


> Haha you don't know how much I'd love to be able to do 2200MHz on a 3090. So seeing 2145MHz on Air, I thought for sure with water cooling and shunt mods I could do it. If we can actually hit those clocks....then it means it really will be a golden generational performance increase over the 2080Ti without question. It's ok to share the video, just put a little warning text for people who blindly click play without looking like me. Haha.


Even without hitting 2.2GHz, there's some missing performance hiding somewhere, based on the massive difference between render performance (V-Ray, Blender, etc.) and games on 30-series cards vs. 2080 Tis. It probably isn't drivers, since those are used in both situations, but something in DirectX/Vulkan and/or game engines isn't utilizing the Ampere architecture fully... and this will hopefully change in time.


----------



## HyperMatrix

mirkendargen said:


> Even without hitting 2.2ghz there's some missing performance hiding somewhere, based on the massive difference between render performance (VRAY, Blender, etc) and games on 30series cards vs. 2080Ti's. Seems like it probably isn't drivers since those are used in both situations, but something with Directx/Vulcan and/or game engines aren't utilizing Ampere architecture fully...and this will hopefully change in time.


Well, given the either/or FP32/INT32 core switch, it'd make sense that if a game were designed with that in mind, it could get more performance out of it. But one thing I like to do when I get a substantial GPU upgrade is to go back and replay old games at better quality and higher FPS, with supersampling added if there is room for it. So traditional performance improvements are welcome. Also, renderers use the RT cores, which were boosted substantially more on Ampere than rasterization performance.


----------



## J7SC

Checked Memory Express again... the Asus 3090 Strix is listed as 'in store only', but out of stock at all stores (btw, prices are in CDN $)... that's $10 less expensive than the EVGA FTW3 Ultra


----------



## bmgjet




----------



## HyperMatrix

J7SC said:


> Checked Memory Express again..> .Asus 3090 Strix listed as 'in store only', but out of stock at all stores (btw, prices are in CDN $s) ...that's $10 cheaper less expensive than the EVGA FTW3 Ultra


The Strix is $10 more than the FTW3 Ultra. All the 3090s are In Store Only. They have quite a massive preorder/reservation list for RTX 3090s atm. Not sure what the breakdown is per card, but I believe there are over 100 reservations for just the FTW3 Ultra and ROG Strix in Calgary alone at the moment.

But keep in mind Best Buy has the ROG Strix at $2379 ($80 less than Mem Ex), if you're ever able to grab one from them. There's no reservation system there, so you have to order it right as it goes on sale.

For reference, ROG Strix 3090 in Canada:

$2379 BestBuy https://www.bestbuy.ca/en-ca/produc...orce-rtx-3090-24gb-gddr6x-video-card/14954117

$2399 Canada Computers Welcome - Canada Computers & Electronics

$2459 Memory Express Asus ROG STRIX GeForce RTX 3090 O24G Gaming 24GB PCI-E w/ Dual HDMI, Triple DP - PCI-E Video Cards - Memory Express Inc.

$2499 Newegg ASUS ROG Strix GeForce RTX 3090 DirectX 12 ROG-STRIX-RTX3090-O24G-GAMING Video Card - Newegg.ca


----------



## changboy

I got a mail from amazon.ca for an Asus RTX 3090 TUF too, but I didn't buy it; with tax it asked me $3300, which was over $600 above the real price here, so I passed lol. Then another mail for the 3080, asking around $2800 I think, so I didn't buy that either.
That means many cards are coming soon. But I'm starting to get bored of checking stock all day for these cards lol.

About the video, I thought it was the Strix 3090 too, because at the beginning he said "The Big One" hehehe; maybe he never saw a big one hehehe.

We haven't seen any tests of the Strix 3090, but the EVGA Ultra seems to be a beast; not sure if Asus can do better than that.


----------



## HyperMatrix

changboy said:


> I got mail to from amazon.ca for an asus rtx-3090 tuf but i not bought it, with tax it ask me 3300$ and this was over 600$ over the real price here so i not bought it lol. After another mail and for the 3080 and asking 2800$ i think so i not bought it too.
> That's mean many card are coming soon. But i begin to be boring of checking stock all day for those cards lol.
> 
> About the vidéo, i was thinking its the stick 3090 too coz at begening he said The Big One hehehe, maybe he never saw a big one hehheeh.
> 
> We dont see any test of the strix 3090 but the evga ultra seam to be a beast, not sure if asus can do better then this.


Well, now that BIOS flashing is available... even the FTW3 Gaming could be flashed to the Strix 480W limit, I believe. The question is, at those power levels and higher, will the FTW3's 4x SP-CAP + 20x MLCC be ok? Or is there a reason why Asus went with a 480W TDP and EVGA stuck to 440W?


----------



## changboy

bmgjet said:


>


What! The card crashed before the end lol...


----------



## originxt

Well, it came. But I can't put it in my computer yet, because I think I accidentally threw out my watercooling extras, which were in an unmarked box. Whoops lol. Had to order a fitting and some extra tubing.

FTW3 3090










My ugly but functional build lol. Ignore the res; I unscrewed it in prep for the card, but there's no way to drain the loop.










Why no color coordination? Why on earth red fluid? What is cable management? Why 45 degree adapter fittings on the cpu block? Why white compression fittings? Well the reason is...











Benchmarks tomorrow when I get some watercooling parts...


----------



## Thoth420

originxt said:


> View attachment 2460410
> 
> 
> Why no color coordination? Why on earth red fluid? What is cable management? Well the reason is...
> 
> View attachment 2460409


...and none of that is relevant because EVGA Dark. 
Grats on your card buddy! Mine comes sometime end of October.


----------



## bmgjet

changboy said:


> What ! the card crash before the end lol...


Max length of a Snapchat video.
Time Spy clocks are good when the power limit isn't limiting it.
Port Royal not so much, since it never gets off the power limit.


----------



## J7SC

HyperMatrix said:


> The Strix is $10 more than FTW3 Ultra. All the 3090s are In Store Only. They have quite a massive preorder/reservation list on RTX 3090s atm. Not sure what the breakdown is per card but I believe there are over 100 reservations for just the FTW3 Ultra and ROG Strix in Calgary alone at the moment.
> 
> But keep in mind BestBuy has the ROG Strix at $2379 ($80 less than mem ex), if you're ever able to grab one from them. There's no reservation system there so you gotta order it right as it goes on sale.
> 
> For reference, ROG Strix 3090 in Canada:
> 
> 
> Spoiler
> 
> 
> 
> $2379 BestBuy https://www.bestbuy.ca/en-ca/produc...orce-rtx-3090-24gb-gddr6x-video-card/14954117
> 
> $2399 Canada Computers Welcome - Canada Computers & Electronics
> 
> $2459 Memory Express Asus ROG STRIX GeForce RTX 3090 O24G Gaming 24GB PCI-E w/ Dual HDMI, Triple DP - PCI-E Video Cards - Memory Express Inc.
> 
> $2499 Newegg ASUS ROG Strix GeForce RTX 3090 DirectX 12 ROG-STRIX-RTX3090-O24G-GAMING Video Card - Newegg.ca


Thanks... 'in stock' is the real problem, but I have plenty of time... I hope to also see the Aorus Xtreme W/WB tested soon...

btw, Memory Express and Canada Computers are kitty-corner to each other here - maybe I should just hang out at the corner of Broadway & Cypress and watch for delivery trucks coming in from the Richmond airport / warehouse district... casually offer my help with unloading...


----------



## bmgjet

Latest Afterburner beta is 4.6.3 Beta 2, since their download links point to 4.6.2 and their beta links to 4.6.2 Beta 3.

download.msi.com/uti_exe/vga/MSIAfterburnerSetup463Beta2.zip


----------



## cdnGhost

My 3090 arrived this morning; I'm still at work... I scored a Founders Edition on the 24th, it shipped last night and arrived at 10:30 this morning... can't wait to try it out


----------



## changboy

After some other tests, those cards boost to 2100MHz and crash on Windows while never crashing on Linux. So maybe all this is not related to the caps but a driver issue. Then maybe later, when the drivers are more mature, they will overclock better than what we see now.


----------



## kx11

kx11 said:


> I think this is a stupid design decision; that extra fan on the back will touch the RAM sticks for most people
> 
> 
> 
> 
> 
> 
> 1-Clip Booster Installation Guide
> 
> 
> Galaxy Microsystems Ltd.
> 
> 
> 
> 
> www.galax.com



looks like this ugly 3090 is gonna be the one i'm getting after all


----------



## mirkendargen

kx11 said:


> looks like this ugly 3090 is gonna be the one i'm getting after all


I don't see how that would particularly change the back side temperatures, it's just creating a push-pull config on the heatsink that only cools the front side...

My plan if backplate temperatures are a problem with a waterblock on is to slap on the bigass backplate heatsink from an Arctic Accelero IV I have lying around (and point a fan at it if needed).


----------



## kx11

mirkendargen said:


> I don't see how that would particularly change the back side temperatures, it's just creating a push-pull config on the heatsink that only cools the front side...
> 
> My plan if backplate temperatures are a problem with a waterblock on is to slap on the bigass backplate heatsink from an Arctic Accelero IV I have lying around (and point a fan at it if needed).
> 
> View attachment 2460431



Maybe their idea is "more fans = better cooling". The good thing is that I have a mobo that runs x16 on all PCIe slots (Rampage VI Omega), so I can use the bottom slot, since the hard-line tubes on my CPU WB would not allow me to use that extra fan.


----------



## mirkendargen

kx11 said:


> maybe their idea is " more fans=better cooling" good thing is that i have a mobo that run x16 on all PCI slots ( rampage 6 omega) so i can use the bottom slot since i got hard line tubes on the CPU WB which would not allow me to use that extra fan


Yeah, my case has a vertical mount with a decent amount of room behind the GPU before hitting the top of other cards, so I have options. Someone with nothing but an x16 in the first slot would have a tough time with this stuff.


----------



## Shaded War

I was just checking out Newegg and I missed an Asus TUF OC 3090. By the time I logged into Newegg, it was OOS. They even had a $70-off coupon.


----------



## GanMenglin

J7SC said:


> What's your take about the other two shunts by the 3x 8pin PCIe (total of 5) ? DerBauer's shunt (on 2x PCIe but also w/ a total of five) was a bit confusing.
> 
> 
> Spoiler


I think you only need to mod two of the 8-pin shunts; that's enough, you can get almost 150W more.
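For anyone curious about the arithmetic behind a stacked-shunt mod (a sketch with assumed values: R005 = 5 mΩ is the stock shunt on these boards, and the stacked resistor is whatever you solder on top): the controller derives current from the voltage drop across the shunt, so lowering the effective resistance makes it under-read power by the same ratio.

```python
def parallel_r(r1: float, r2: float) -> float:
    """Effective resistance of a resistor stacked in parallel on the shunt."""
    return r1 * r2 / (r1 + r2)

def reported_power_w(true_w: float, r_stock: float, r_effective: float) -> float:
    """Power the controller thinks the card draws after the mod."""
    return true_w * (r_effective / r_stock)

# Stacking a second 5 mOhm resistor on a stock R005 halves the reading:
r_mod = parallel_r(0.005, 0.005)                   # ~2.5 mOhm
print(round(reported_power_w(500, 0.005, r_mod)))  # a true 500W reads as ~250W
```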


----------



## J7SC

GanMenglin said:


> I think you only need to mod two of the 8-pin shunts; that's enough, you can get almost 150W more.


Thanks! 

...an unrelated question: has anyone with a 3090 run Vulkan? Linux? Any hiccups, or business as usual?


----------



## Jordel

kx11 said:


> maybe their idea is " more fans=better cooling" good thing is that i have a mobo that run x16 on all PCI slots ( rampage 6 omega) so i can use the bottom slot since i got hard line tubes on the CPU WB which would not allow me to use that extra fan


The Rampage 6 Omega can't do x16 on all of its slots. It can do 16x on the top two, or it can do 16x on the top slot and 8x on the remaining two, otherwise the last slot is inactive. Keep that in mind!


----------



## Spiriva

I flashed the "Gigabyte RTX 3090 Gaming OC (370/390W)" BIOS to my "PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB (350/365W)" and it worked fine. It seems like a very good BIOS for the 2x8-pin 3090s. The clocks became both more stable and higher.

The Nvidia 3090 FE has that weird 12-pin connector, but I think those cards have a 400W limit? Would it be possible to flash the 3090 FE BIOS to our 3090s with 2x8-pin connectors?


----------



## Niju

bmgjet said:


> Max length of snapchat video.
> Time spy clocks are good when power limit isnt limiting it.
> Port royal not so much since it never gets off power limit.


You have the 3090 XC3 Ultra, right? What's the power limit?


----------



## psychrage

Is anyone having HDMI issues? I have to unplug and replug the HDMI cable every time I turn my display off and on; the display shows no signal unless I do this. I even bought a brand-new cable today, to also get the full HDMI 2.1 benefits.

I didn't have this issue with my 2080 Ti.


----------



## asdkj1740

Pepillo said:


> 3090 MSI Gaming X Trio with Strix Bios, works really well:


Show us the god damn frequency during benchmark please


----------



## bmgjet

Niju said:


> You have the 3090 XC3 Ultra, right? What's the power limit?


350/366, which is way too low.




Spiriva said:


> I flashed the "Gigabyte RTX 3090 Gaming OC (370/390W)" to my "PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB (350/365W)" and it worked fine. It seems like a very good bios for the 2x8 pin 3090´s. The clocks became both more stable and higher.
> 
> The Nvidia 3090FE have that weird 12pin connector, but i think those cards have a 400w limit? Would it be possible to flash the 3090FE bios to our 3090´s with 2x8pin connectors?


I'd say no, since the power layout is too different.
I've been looking at flashing that Gigabyte one myself.


----------



## Spiriva

bmgjet said:


> 350/366
> Which is way too low.
> 
> Id say no since the power layout is too different.
> Iv been looking at flashing that gigabyte one myself.


Try the Gigabyte 390W one, it's really good. While playing games my core is around 2050-2100MHz. It's way better than the PNY original 365W one.


----------



## bmgjet

Spiriva said:


> Try the Gigabyte 390w one, its really good. While playing games my core is around 2050-2100mhz. Its way better then the PNY original 365w one.


That's what I'm getting already, with it bouncing off the power limit of 366W.
I don't have a spare card, though, to flash back with if it does some weird stuff.

---- Edit ----
Just went full send and flashed it, since I have a bootable USB I could have blind-flashed it back with.
Seems to be working.

----
Max fan speed is 400rpm less than the original BIOS and the card runs 7C hotter.
But I can make it through a run without it just sitting on the power limit.


----------



## asdkj1740

der8auer's shunt mod can't lock the frequency; there's still a little fluctuation even with the temperature unchanged. It seems the shunt mod on the RTX 3000 series is not worth it this time.


----------



## Spiriva

bmgjet said:


> ----
> Fans max speed is 400rpm less then orignal bios and card running 7C hotter.
> But I can make it though a run with out it just sitting on the powerlimit.


That was my experience too. 390W seems to "fit" the 3090 a lot better than 365W.


----------



## bmgjet

I haven't been able to gain any more overclock, but it's gotten rid of the drops in places. 390W should have been the default limit for the 2-plug cards.
Memory timings must be different on the GB card, since 400-450MHz is the point where FPS doesn't change, with 500MHz being the point it goes down. On the EVGA BIOS, 500-550 was the point it didn't change anymore.

Basically temperature limited now.
In short benchmarks and light-load games it's happy at 2100MHz at 55C.
In longer benchmarks and heavy-load games it drops to 2050 at 60C, then 2GHz at 63C, and bounces between that and 1975 @ 65C.
Fan is set to max at 40C.
Going to see what undervolting does now.

----------



## slopokdave

I flashed the Gigabyte 3090 Gaming OC BIOS to an Asus 3090 TUF (non-OC); I flashed to the "performance" switch position, since this card has dual BIOS. It worked, but gave me a heart attack when it rebooted after flashing. 

I cannot get "*nvflash64 --protecton*" to work; anyone else able to do this? I get "Setting of EEPROM protect failed." I see this is optional, but surely I should turn protection back on at some point, right? 

I am hitting 390W now, but I still cannot get clocks above 2GHz. I see spikes to 2050, 2070MHz, but then I'm right back down in the 1900s.


----------



## LordGurciullo

bmgjet, are you using the curve in Afterburner? I saw up to 2080 and 2100 only once, and that was under 50C. Yeah, I drop down to the 1900s a lot... Also, I'm having pretty bad stuttering in games.


----------



## ThrashZone

Hi,
Yeah, a shunt mod is about the only way to stop the throttling.


----------



## smonkie

So is the 3090 TUF any good then? It seems like those 2x8-pin cards are a bummer.


----------



## Thoth420

Buildzoid weighs in again


----------



## Spiriva

slopokdave said:


> I flashed Gigabyte 3090 Gaming OC to Asus 3090 TUF (non-oc), I flashed to the "performance switch" since this card has dual bios. It worked, but gave me a heart attack when it must have rebooted after flashing.
> 
> I cannot get "*nvflash64 --protecton"* to work, anyone else able to do this? I get "Setting of EEPROM protect failed." I see this is optional, but surely I should turn protect back on at some point right?
> 
> I am hitting 390W now, but I still cannot get clocks above 2ghz. I see spikes to 2050, 2070mhz but then I'm right back down to 1900s.












This is how it looks in World of Warcraft for me; in other games it boosts to about the same MHz but uses more W.


----------



## Benni231990

2.1GHz with a 390W BIOS, very good



slopokdave said:


> I flashed Gigabyte 3090 Gaming OC to Asus 3090 TUF (non-oc), I flashed to the "performance switch" since this card has dual bios. It worked, but gave me a heart attack when it must have rebooted after flashing.
> 
> I cannot get "*nvflash64 --protecton"* to work, anyone else able to do this? I get "Setting of EEPROM protect failed." I see this is optional, but surely I should turn protect back on at some point right?
> 
> I am hitting 390W now, but I still cannot get clocks above 2ghz. I see spikes to 2050, 2070mhz but then I'm right back down to 1900s.


maybe you got a bad gpu (potato)


----------



## slopokdave

Benni231990 said:


> 2.1 ghz with a 390 watt bios very good
> 
> maybe you got a bad gpu (potato)


What is your core offset? Any other changes/tweaks?


----------



## chispy

slopokdave said:


> I flashed Gigabyte 3090 Gaming OC to Asus 3090 TUF (non-oc), I flashed to the "performance switch" since this card has dual bios. It worked, but gave me a heart attack when it must have rebooted after flashing.
> 
> I cannot get "*nvflash64 --protecton"* to work, anyone else able to do this? I get "Setting of EEPROM protect failed." I see this is optional, but surely I should turn protect back on at some point right?
> 
> I am hitting 390W now, but I still cannot get clocks above 2ghz. I see spikes to 2050, 2070mhz but then I'm right back down to 1900s.


 At what temps ? I have an Asus 3090TUF OC on the way and should be here by friday , maybe we can help each other figure this out since we will have the same video card. Thanks for reporting your findings + rep .


----------



## bmgjet

slopokdave said:


> I flashed Gigabyte 3090 Gaming OC to Asus 3090 TUF (non-oc), I flashed to the "performance switch" since this card has dual bios. It worked, but gave me a heart attack when it must have rebooted after flashing.
> 
> I cannot get "*nvflash64 --protecton"* to work, anyone else able to do this? I get "Setting of EEPROM protect failed." I see this is optional, but surely I should turn protect back on at some point right?
> 
> I am hitting 390W now, but I still cannot get clocks above 2ghz. I see spikes to 2050, 2070mhz but then I'm right back down to 1900s.


What are the temps? That's the most important thing after power.
I've never bothered to turn protection back on on any of my cards until the day I returned them to the stock BIOS.





LordGurciullo said:


> BMGJET Are you using the curve on afterburner? I saw up to 2080 and 2100 only once and that was under 50c. Yah I drop down to 1900s a lot.. Also I'm having stuttering in games pretty bad.


Yup. If I just use the offset, some things are stable at +175MHz with this Gigabyte 1755MHz base BIOS in heavily loaded stuff, since it sits around 2050MHz, but it crashes in lightly loaded stuff since it flies up to 2255MHz.
So I have the curve flattened at 1V @ 2055MHz with everything under that at +175MHz. And there are three points after that, at 2075, 2100 and 2145MHz, that I can't flatten since they just jump back to those spots once you close the curve editor.

Here is full test 1 of time spy before temps start dropping the clocks.





Also just found that using a bat file to set the power limit goes higher than Afterburner's slider:
Afterburner takes it to 387W @ 5%, the bat file gets it to 390W.
And if you unlink the fans in the fan speed settings, you can set the speed higher. It seems that linking limits the faster outer fans to match the slower one in the middle when they are linked at 100%.

All the benchmarks are stable so now going to do some game testing on this setting.
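The bat-file trick above presumably wraps NVIDIA's own command-line tool; here is a minimal sketch, assuming it just calls nvidia-smi's power-limit switch (this is not bmgjet's actual script). Run it from an elevated prompt, and 390 is only valid if the flashed vBIOS actually allows it:

```shell
:: Hypothetical power-limit bat file (requires admin rights)

:: Show the default/min/max limits the current vBIOS exposes
nvidia-smi -q -d POWER

:: Set the software power limit to the vBIOS maximum (here 390 W)
nvidia-smi -pl 390
```

Unlike Afterburner's percentage slider, `-pl` takes an absolute wattage, which may explain why the slider rounds off at 387 W while the bat file reaches 390 W.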


----------



## slopokdave

chispy said:


> At what temps ? I have an Asus 3090TUF OC on the way and should be here by friday , maybe we can help each other figure this out since we will have the same video card. Thanks for reporting your findings + rep .


Mid 50s to high 60s; I'm running fans 100% though with my side panel off. I have a custom loop, but no gpu water block yet so I'm not worried about noise, just want temps as low as possible right now for testing



bmgjet said:


> Whats the temps thats the most important thing after power.
> Iv never bother to turn protection on again on any of my card until the day I returned them to stock bios.


Interesting, maybe that is it. Maybe I need to flash the stock vBIOS back first before I can turn protect on again. Guess it doesn't matter for the time being.



bmgjet said:


> Yup, If I just use the offset then somethings are stable at 175mhz+ with this gigabyte 1755mhz base bios in heavy loaded stuff since it sits around 2050mhz but crashes in light load stuff since it flys up to 2255mhz.
> So have curve flattened 1V @ 2055 with everything under that 175mhz+ And there three places after that which are 2075, 2100mhz and 2145mhz that I cant flatten since they just jump back to those spots once you close the curve editor.
> 
> Here is full test 1 of time spy before temps start dropping the clocks.
> 
> 
> 
> 
> 
> Also just found that using bat file to set powerlimit is higher then afterburners slider.
> Afterburner takes it to 387W @ 5%, Bat file gets it to 390W.
> And if you unlink the fans in the fan speed. You can set the speed higher, It seems that it limits the faster outer fans to match the slower one in the middle when they are linked on 100%.
> 
> All the benchmarks are stable so now going to do some game testing on this setting.


I was using 3DMark for testing, but clocks are higher/more stable in game. In Doom Eternal it's more consistently at 1950MHz, and sometimes 2050MHz; this is with a +125 core offset and +700 memory.

I notice when I pause, I'm sitting at ~80% GPU usage and GPU power is down to the low 300s; THEN it's steady at 2085MHz. I settled on +100 core for now for stability.

I capped the framerate to 165, and now I'm steady at 2GHz+. Not ideal, I guess.


----------



## smonkie

smonkie said:


> So is the 3090 TUF any good then? It seems like those 2x8 are a boomer.


Anyone?


----------



## slopokdave

smonkie said:


> Anyone?


I think it's as good as you are going to get for the 2x8pin cards. Strix is of course probably better, but its not $300 better. But it depends on what you want to do and if you are watercooling....


----------



## hallowen

HyperMatrix said:


> Well now that bios flashing is available...even the FTW3 Gaming could be flashed to the STRIG 480W limit I believe. The question is at those power levels and higher, will the FTW3 4x SP-CAP + 20x MLCC be ok? Or is there a reason why Asus went with a 480W TDP and EVGA stuck to 440W?


Has anyone actually successfully flashed the Strix 480W bios to an EVGA RTX 3090 FTW3 Ultra yet?


----------



## HyperMatrix

slopokdave said:


> I think it's as good as you are going to get for the 2x8pin cards. Strix is of course probably better, but its not $300 better. But it depends on what you want to do and if you are watercooling....


I mean, if you can get 2200MHz stable on a shunted Strix under water compared to 2025-2100 on the TUF, that's about 5-8% more performance. If you're willing to pay an extra $800 for 15-20%, wouldn't it be a better deal to pay another $150-$200 for 5-8% on top of that?

There’s definitely a solid argument to be made for the 3x 8 pin cards. Even if not the Strix, then the FTW3 Gaming for $1730, and an additional 5% off that price with an associate code.


----------



## slopokdave

HyperMatrix said:


> If you're willing to pay an extra $800 for 15-20%, wouldn't it be a better deal to pay another $150-$200 for 5-8% on top of that?


Maybe. I'd wait until more overclocking info comes out though, for regular Joes like me on air or water (non-LN2, etc). But to each his own.


----------



## dante`afk

Wihglah said:


> I have an ETA for October 30th


from whom?



LordGurciullo said:


> evga ftw ultra 3090 +170 +1300 max everything else


you might wanna try with lower memory clock. I've seen people reporting lower scores the higher they went with the memory



Mad Pistol said:


> Welp... I just broke down and bought a 3090 TUF from Amazon. Was going to get a 3080, but no dice. Yay for upselling!


Damn, did you get the one for $1599 which was posted at 6:51pm EST? I got the notification but wasn't at my desk -_-





does anyone know which shunts to mod on the FE? I have a couple of 5ohm shunts here, will they work?


----------



## Jordel

I've got an ETA stating next week for 3090 FTW3 Ultra from my distributors


----------



## slopokdave

dante`afk said:


> does anyone know which shunts to mod on the FE? I have a couple of 5ohm shunts here, will they work?


Buildzoid's breakdown said you have to mod ALL of them. It's on the Gamers Nexus YouTube channel....


----------



## slopokdave

bmgjet said:


> Yup, If I just use the offset then somethings are stable at 175mhz+ with this gigabyte 1755mhz base bios in heavy loaded stuff since it sits around 2050mhz but crashes in light load stuff since it flys up to 2255mhz.
> So have curve flattened 1V @ 2055 with everything under that 175mhz+ And there three places after that which are 2075, 2100mhz and 2145mhz that I cant flatten since they just jump back to those spots once you close the curve editor.


I tried doing this but I get crashes. I've never messed with the curve before, but I think I am doing it right. So far I can only keep it steady at 2GHz+ if I cap the frame rate, which therefore keeps it under the power limit.


----------



## bmgjet

390W still isn't enough.
Played a round of PUBG and then some Rust with no crashes, so this will be my daily profile at the moment.

Average clock speed 2055MHz, with some drops when it hits against that 390W. Max temp of 67C.
Loading screens and the first minute of a game still boost up to 2145MHz while the card is in the 30C range.
Once I get a waterblock it should hopefully boost up to 2100MHz and stay in the low 40s.


I've been using the curve since the 1080 Ti; that's what I got it for, overclocking.
The method is: find the voltage you want to test, select it, and lock the card to it with Ctrl+L.
Find what's stable for it and write it down, then unlock it by pressing the shortcut again.
Then hold Shift and click-drag in the blank space above the points, and you can select multiple points at once.
You can then move the whole selection up and down. If you hold Ctrl instead, it pivots the curve from the left-hand side.

Here is what mine looks like.
When you click Apply, or close the window and reopen it, some points will automatically move to maintain valid settings.









The rear of my card definitely needs better cooling. That's probably why +400MHz on the VRAM is where it stops gaining FPS.
The IR temp gun is showing 89C on that.
It's got a fan blowing on the backplate already, so I'm going to have to do something to watercool that side when I get a waterblock.


----------



## HyperMatrix

slopokdave said:


> Buildzoid's breakdown said you have to mod ALL of them. It's on gamers nexus youtube....


Did you watch Derbauer's shunt mod test? He shunted them one at a time and then benched. After the first 3 were shunted, he didn't see any gains by shunting the last 2. It didn't hurt either so I guess no harm no foul. But did buildzoid notice a difference in overclocking between 3 and 5 shunts or did he just make a theoretical statement?


----------



## slopokdave

HyperMatrix said:


> Did you watch Derbauer's shunt mod test? He shunted them one at a time and then benched. After the first 3 were shunted, he didn't see any gains by shunting the last 2. It didn't hurt either so I guess no harm no foul. But did buildzoid notice a difference in overclocking between 3 and 5 shunts or did he just make a theoretical statement?


We are talking about two different GPUs. The previous question was about the Founders Edition.



bmgjet said:


> Iv been using curve since 1080ti got it for overclocking with.
> The method to it is, Find the voltage you want to test. Select it and lock the card to that with ctrl+l
> Find whats stable for it and write it down then unlock it by pressing shortcut again.
> Then hold shift and in the blank space above the points click and drag and you can select multipal points at once.
> You can move the whole selection up and down then. If you hold ctrl instead it pivoits the curve from the left hand side.


Pretty sure I'm doing the same, but I can never get a steady 2GHz+ in Doom unless I cap the framerate, which brings me under the power limit.

This would be great, less power and higher clock speeds, but I'm not sure how to set that up in Afterburner.


----------



## chispy

Here is the 5 milliohm resistor needed for the power limit shunt mod. (Normally only 2 are needed for the 2x 8-pin cards and 3 for the 3x 8-pin cards; check how many shunts are near the 8-pin power connectors on your card and buy accordingly.) In the USA: ERJ-M1WSF5M0U Panasonic Electronic Components | Resistors | DigiKey
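For anyone doing the math on why this works: soldering an identical-value resistor on top of a stock shunt halves the resistance the power controller measures across, so it under-reports current (and thus power) by the same factor. A quick sketch of that arithmetic; the 5 mΩ values are the common case discussed here, not taken from any specific card's schematic:

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

stock_shunt = 0.005   # 5 milliohm stock shunt on the card
added_shunt = 0.005   # 5 milliohm ERJ-M1WSF5M0U stacked on top

effective = parallel(stock_shunt, added_shunt)   # ~2.5 milliohm

# The controller still computes current as V_sense / R_stock, so with
# half the real resistance it only sees half of the true current:
reported_fraction = effective / stock_shunt      # ~0.5

# A 390 W software limit therefore allows roughly twice the real draw
# through the modded rails before the limiter kicks in.
real_power_at_limit = 390 / reported_fraction

print(effective, reported_fraction, real_power_at_limit)
```

This only applies to the rails whose shunts were modded, which is why the discussion above about *which* shunts to stack (and whether to include the PCIe slot one) matters so much.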


----------



## Johneey

https://www.3dmark.com/spy/14253756 Flashed the Gigabyte Gaming Pro OC BIOS, and watercooling. Looks fine, right?


----------



## chispy

Looks great to me, awesome score @Johneey!


----------



## theytookourjerb

I'm sorry if this has been answered, and I know everything is relatively new, but is the FE locked to their bios or has some one been able to flash a higher power limit bios to the FE edition? The FE cooler just looks so _cool_ and I've been wanting to choose that one, but a locked bios may steer me differently.


----------



## Johneey

theytookourjerb said:


> I'm sorry if this has been answered, and I know everything is relatively new, but is the FE locked to their bios or has some one been able to flash a higher power limit bios to the FE edition? The FE cooler just looks so _cool_ and I've been wanting to choose that one, but a locked bios may steer me differently.


Yes, you can flash the Gigabyte Gaming BIOS like me, 395 watts, but I don't know if it works with the 12-pin. Try it; if not, flash back, and give us your feedback.


----------



## Alelau18

So I got a 3090 TUF and I also flashed the GB's Gaming OC vBIOS into the 3090 TUF (non OC). I've been playing Horizon Zero Dawn for some time and it hits 2.1GHz with ease there with +110MHz to the core. On TimeSpy I can do +120MHz, but I've got a crash at +120MHz on Horizon Zero Dawn so overall happy enough for keeping the card on air and I'm planning to keep it that way.


----------



## Johneey

Alelau18 said:


> So I got a 3090 TUF and I also flashed the GB's Gaming OC vBIOS into the 3090 TUF (non OC). I've been playing Horizon Zero Dawn for some time and it hits 2.1GHz with ease there with +110MHz to the core. On TimeSpy I can do +120MHz, but I've got a crash at +120MHz on Horizon Zero Dawn so overall happy enough for keeping the card on air and I'm planning to keep it that way.


Please send us the bios !


----------



## Johneey

Johneey said:


> Please send us the bios !


Oh ****, is the Gigabyte BIOS a higher PL than the TUF one?


----------



## Alelau18

The TUF BIOS is limited to 375W, yeah.

976 KB file on MEGA. Here's the link to the TUF 3090 BIOS; nothing really special since, yeah, 375W PL.


----------



## Benni231990

yes pls upload the gaming OC bios with 390 power limit


----------



## Thoth420

psychrage said:


> Is anyone having HDMI issues? I have to unplug and replug the HDMI cable every time I turn off/on my display. Display shows no signal unless I do this. Even bought a brand new cable today, to also get full HDMI 2.1 benefits.
> 
> Didn't have this issue with my 2080ti.


Does your system have sleep and/or display shutdown turned on in the OS power settings? If so, try disabling both completely, and manually power down your display if you want the system up long-term but will be stepping away.


----------



## nievz

Alelau18 said:


> So I got a 3090 TUF and I also flashed the GB's Gaming OC vBIOS into the 3090 TUF (non OC). I've been playing Horizon Zero Dawn for some time and it hits 2.1GHz with ease there with +110MHz to the core. On TimeSpy I can do +120MHz, but I've got a crash at +120MHz on Horizon Zero Dawn so overall happy enough for keeping the card on air and I'm planning to keep it that way.


What did you set your voltage slider to, to reach this core clock? +100?


----------



## Jordel

Alelau18 said:


> the TUF BIOS is limited to 375W yeah.
> 
> 976 KB file on MEGA Here's the link to the TUF 3090 BIOS, nothing really special since yeah, 375W PL


@zhrooms


----------



## GanMenglin

vmanuelgm said:


> https://www.techpowerup.com/review/msi-geforce-rtx-3090-gaming-x-trio/images/front_full.jpg
> 
> 
> 
> U can start shunting the 3 r005's closest to the pciexpress power connectors.


The MSI Trio doesn't seem to have the R005 at the 3rd 8-pin connector (J5012)..


----------



## vmanuelgm

GanMenglin said:


> msi trio seems dont' have the R005 at 3rd 8pin connector (J5012)..
> 
> View attachment 2460491


You can try to shunt the five...


----------



## Johneey

Benni231990 said:


> yes pls upload the gaming OC bios with 390 power limit


already uploaded dude


----------



## Benni231990

oh yes thx


----------



## kx11

Jordel said:


> The Rampage 6 Omega can't do x16 on all of its slots. It can do 16x on the top two, or it can do 16x on the top slot and 8x on the remaining two, otherwise the last slot is inactive. Keep that in mind!


the other slots will run at x2/x4 speeds, so the GPU can be used at x16 in any slot


----------



## Jordel

kx11 said:


> the other slots will be used @ x2/x4 speeds so the gpu can used in any slot @ 16x


Are you certain? The manual states that you need to manually set link speed on each slot, and slot three only supports 4x or 8x, slot two supports 16x or 8x, which then affects what slot three can be set to. It would seem slot three isn't electrically wired to 16 lanes.


----------



## kx11

Jordel said:


> Are you certain? The manual states that you need to manually set link speed on each slot, and slot three only supports 4x or 8x, slot two supports 16x or 8x, which then affects what slot three can be set to. It would seem slot three isn't electrically wired to 16 lanes.


pretty sure this covers what i mean


----------



## Jordel

kx11 said:


> pretty sure this covers what i mean
> 
> View attachment 2460495


It sure does! Check the breakdown table below!
If it's documented incorrectly, then I'd contact ASUS so they fix the entries, but that states that slot 3 is physically 16x, but not electrically as it won't allow past 8x link speed to be set.
It'll do up to 16x on slot 1, up to 16x on slot 2, and up to 8x on slot 3. However, if you set slot 2 to 16x, then slot 3 is disabled. Pretty standard setup for ASUS motherboards for Intel HEDT. Slot 3 could also have been made as 8x open-ended, but sometimes they use a 16x slot (8x electrical) for mechanical stability.
We encounter these issues at work all the time, and they're quite annoying with some setups, but it's understandable why they're designed this way.


----------



## dante`afk

I called Nvidia,

"Due to the high demand, we have currently no ETA for the 3090 to be back in stock. It could be 1 or 2 weeks, or longer"

Alright.


----------



## Jordel

dante`afk said:


> I called Nvidia,
> 
> "Due to the high demand, we have currently no ETA for the 3090 to be back in stock. It could be 1 or 2 weeks, or longer"
> 
> Alright.


Yeah, I've gotten the same word from NVIDIA and our distributors. There won't be any big batches, it'll simply drip over time.


----------



## Nitemare3219

Has anyone flashed a different VBIOS on a ZOTAC Trinity yet? I have an order in for that card as well as a TUF OC. If the ZOTAC performs similarly with a proper VBIOS, I’d rather keep it since it’s $200 cheaper (ordered it from somewhere without tax). I imagine going up from 350W would help the card a lot (unless the lower quality PCB becomes a factor).


----------



## cookiesowns

Looks like I got a potato 3090 FE.
1920-1959MHz average clocks. Anything higher than +175 on core artifacts or crashes. Memory max is +600 before getting random instability. This is with fans at 90-100% and active fan cooling on the backplate.


----------



## originxt

Running some stress tests on my FTW3. Still power limited at 450-455ish power draw. I'll preface by saying the card isn't really in the most optimal air-cooling setup, since it just has rads blowing warm air into it and ambient is 27C/81F. Side panels off though. 70-75C in MSI Afterburner. Might try the EVGA version instead? Unsure. No memory OC yet, working on core first. Think I'm also thermally limited.

Currently looping TSE tests 1-2. Maxed fans, maxed power limit.

+175 on core clock, getting about 2000MHz. It can clock higher, hitting 2085-2100MHz when starting at about 57C, but it quickly downclocks as temps get higher.


----------



## HyperMatrix

cookiesowns said:


> Looks like I got a potato 3090 FE.
> 1920-1959 average clocks. Anything higher than +175 on core is artifact or crash. Memory MAX is 600 before getting random instability. This is with fans at 90-100% and active fan cooling on backplate.


+175 on core should be much higher than 1959MHz. You're crashing because you're trying to clock too high when your card is already telling you it's throttling, likely due to a powerlimit issue. But to be sure of the reason, check the PerfCap reason under GPU-Z's sensors tab to see what your limiting factor is.


----------



## originxt

Currently 2010MHz with 0.937V locked voltage. Honestly, I can't wait for a waterblock for this card. This card needs some serious cooling lol. Still hitting power limits.


----------



## NapsterAU

Anyone flashed the Zotac 3090 yet? Mine arrives later today so might flash a higher power limit BIOS onto it if i can.


----------



## originxt

https://www.3dmark.com/pr/352503



Not bad for a quick and dirty. Need to get some work done or I might get in trouble lol.

1-2 fans partially blocked but probably not too bad.


----------



## Nitemare3219

NapsterAU said:


> Anyone flashed the Zotac 3090 yet? Mine arrives later today so might flash a higher power limit BIOS onto it if i can.


Been looking for the same thing. Someone on Reddit flashed a 360W BIOS but that’s useless. Try the Gigabyte OC BIOS which gives 390W.


----------



## Chamidorix

I will try flashing the Strix BIOS on my 3090 FTW3 Ultra later tonight.

Everyone is messing up the shunt mod; there are around 9 shunts you have to add another resistor on. Der8auer didn't shunt the resistor on the PCIe slot power draw. Framechasers seems to be the only guy who has done it correctly (perfectly stable clocks at a set voltage, no fluctuation of any sort).

Every shunt is connected, and they all power balance in reference to each other.

I've decided not to shunt, since according to Igor there's a good chance "clean" binning is going on, so I'm certainly going to be selling and upgrading to an actually binned AIB card down the road.


----------



## NapsterAU

Nitemare3219 said:


> Been looking for the same thing. Someone on Reddit flashed a 360W BIOS but that’s useless. Try the Gigabyte OC BIOS which gives 390W.


True, yeah i will try the 390w BIOS and see how it goes


----------



## HyperMatrix

Chamidorix said:


> Der8auer didn't shunt the resistor on the pci power draw.


It's not a great idea to increase power draw through your motherboard. It's not that Derbauer didn't know about it or think about it. He addressed the resistor there, and intentionally skipped it. That's a lot of excess power draw that would now have to go through your motherboard's VRMs as opposed to directly through cables to the GPU. Unless there's a specific reason that's been tested and confirmed to show that Ampere only overclocks well if there's an increase from all power sources including your PCIE port, this is just not a good idea.

So unless you can reference an article that shows there's an advantage or benefit to pumping additional wattage through your motherboard as opposed to through the 8 pin power connectors, I would recommend against this.


----------



## Duskfall

Has anyone flashed the strix bios on a gigabyte gaming oc?


----------



## Chamidorix

Der8auer himself has shown you can pull 250+W through PCI-Express on most motherboards. Sure, you avoid it if you can, but the fact is that if you leave that shunt stock on this generation, the voltage regulator (which is digital on all models this time round) will recognize the discrepancy between that shunt and all the others and power limit.

This, in addition to the aforementioned inability to overclock the new core cache rail, is why no one has gotten truly impressive overclocking results yet outside of users with XOC BIOSes.

If you must avoid drawing over 75W from the PCIe slot, you can take the PCIe shunt off and manually route the circuit through the 8-pins. That way the voltage regulator sees all shunts in their correct ratios and the slot draw is limited.


----------



## bmgjet

I can't remember which video it was, since I've watched so many on YouTube now.
The guy was measuring computer power draw at the wall and modded one shunt at a time.
Doing just the PCIe power connectors, he got 20W extra total system draw, but the card was then hitting the power limit while GPU-Z said it was at 90% of the power limiter.
Once he did the PCIe slot one, he got an extra 100W from the wall.

What I took from that is there's some sort of power load balancing going on which ignores the values reported from the shunts' resistance.

He also said in that video that it should be safe, since not long ago there was an AMD card pulling 150W from the PCIe slot only, as it had no power plugs.
He still recommended bypassing the shunt on that particular model to tie it into a PCIe power connector, so you have no draw from the slot.




Duskfall said:


> Has anyone flashed the strix bios on a gigabyte gaming oc?


Don't use a 3-plug BIOS on a 2-plug card, since it will freak out when it doesn't see that 3rd plug getting any power.


----------



## mirkendargen

Got an ETA for my Strix from ShopBLT, I believe this is the ETA for it to get to their warehouse, then 2-5 days shipping for it to get into my hands.










Looks like they have some of the potentially ****ty Zotacs showing up on Friday too that aren't all claimed yet, if anyone is interested: Zotac ZT-A30900D-10P Geforce Rtx 3090 Trinity 24gb Gddr6x 384bit 1695 19500 3dp Hdmi.


----------



## originxt

mirkendargen said:


> Got an ETA for my Strix from ShopBLT, I believe this is the ETA for it to get to their warehouse, then 2-5 days shipping for it to get into my hands.
> 
> View attachment 2460518
> 
> 
> Looks like they have some of the potentially ****ty Zotac's showing up on Friday too that aren't all claimed yet if anyone is interested: Zotac ZT-A30900D-10P Geforce Rtx 3090 Trinity 24gb Gddr6x 384bit 1695 19500 3dp Hdmi.


YMMV, but the last time I ordered from ShopBLT (10980XE), they kept moving the ETA back as it got close to the date. Granted, it may not happen this time, but it's something to keep in mind. Hope it goes through.


----------



## mirkendargen

originxt said:


> Ymmv but the last time I ordered from shopblt (10980xe), they kept moving the eta back as it got close to the date. Granted, it maybe not happen this time but it's something to keep in mind. Hope it goes through.


Yeah we'll see. I had good luck with them on a 3960X when they were new and scarce.


----------



## dude120

I'm currently interested in getting the Zotac 3090 as well.

Just like mirkendargen I was able to get a 3960x from shopblt when they were almost impossible to find (so i know that website is legit).

However, I'm very hesitant to get one for 1500 bucks if its performance is gonna be significantly gimped by that stock BIOS with the low power limit.

@mirkendargen when did you order the 3090 Strix? I can't seem to find any on ShopBLT's website


----------



## NapsterAU

dude120 said:


> I'm currently interested in getting the Zotac 3090 as well.
> 
> Just like mirkendargen I was able to get a 3960x from shopblt when they were almost impossible to find (so i know that website is legit).
> 
> However, I'm very hesitant to get one for 1500 bucks if its performance is gonna be significant gimped by that stock bios with the low power limit.
> 
> @mirkendargen when did you order the 3090 strix? I cant seem to find any on shopblt's website


Should know how the Zotac 3090 performs soon with one of the higher power limit BIOSes.

These cards can still be recovered if bricked, right?
Going to try some BIOSes tonight when the card arrives.


----------



## mirkendargen

dude120 said:


> I'm currently interested in getting the Zotac 3090 as well.
> 
> Just like mirkendargen I was able to get a 3960x from shopblt when they were almost impossible to find (so i know that website is legit).
> 
> However, I'm very hesitant to get one for 1500 bucks if its performance is gonna be significant gimped by that stock bios with the low power limit.
> 
> @mirkendargen when did you order the 3090 strix? I cant seem to find any on shopblt's website


Monday, they were only there for a day.


----------



## mirkendargen

NapsterAU said:


> Should know how the Zotac 3090 performs soon with one of the higher power limit BIOS's.
> 
> These cards can still be recovered if bricked right?
> Going to try some BIOS's tonight when card arrives.


I can't confirm on 30series, but on previous cards you could flash the bios using integrated video or another video card if you messed something up.


----------



## NapsterAU

mirkendargen said:


> I can't confirm on 30series, but on previous cards you could flash the bios using integrated video or another video card if you messed something up.


Ah okay, yeah that's all i know as well.


----------



## bmgjet

NapsterAU said:


> Should know how the Zotac 3090 performs soon with one of the higher power limit BIOS's.
> 
> These cards can still be recovered if bricked right?
> Going to try some BIOS's tonight when card arrives.


NVFlash allows blind forced flashing in case you brick it with an incompatible BIOS.
Or you can boot with a 2nd card to get into the OS, then use the index selection setting in NVFlash to flash the disabled card.
But if worse comes to worst and you somehow manage to erase the BIOS and not write one on at all, you can always solder some wires onto the BIOS chip of the card and re-write it from a blank.

So make sure you save your original ROM somewhere you can get to it.
I have mine on my C drive, Mega.co.nz, and a USB stick.

One thing to note that's different this gen is it will auto-restart after the flash is done. So don't panic when your screen stays black for a minute; just leave it until it restarts.

The main issue I foresee with people cross-flashing is if they use a BIOS with a different power layout, like a 3-plug on a 2-plug.
They will get stuck at a boot screen saying to check the PCIe cables until they flash it back.
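For reference, the backup/flash/recover workflow described above maps to a handful of nvflash64 commands; a sketch from an elevated Windows prompt (filenames are placeholders, and exact flag behaviour can vary between nvflash builds):

```shell
:: 1) Back up the original vBIOS before anything else
nvflash64 --save original.rom

:: 2) Disable the EEPROM write protect, then flash.
::    -6 overrides the PCI subsystem ID mismatch on cross-flashes.
nvflash64 --protectoff
nvflash64 -6 newbios.rom

:: 3) Recovery with a 2nd GPU in the system: list adapters,
::    then target the bricked card by its index
nvflash64 --list
nvflash64 --index=1 -6 original.rom
```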


----------



## dude120

Well, I just purchased a 3090. God, I hope the BIOS flash works on the Zotac units.
They looked like they were somewhere between the same performance and ~3% slower than the other 3090 units.


----------



## NapsterAU

dude120 said:


> Well I just purchased a 3090. God I hope the bios flash works on the zotac units.
> They looked like they were somewhere in between the same performance and ~3% slower than the other 3090 units/


Congratz  
You and me both mate haha


----------



## ring0r

Thank you for the Gaming OC Bios  https://www.3dmark.com/spy/14253773


----------



## CallsignVega

What's the highest-wattage BIOS you can flash onto a 3090 TUF 2x 8-pin card?

I heard one of the two HDMI 2.1 ports or a DP ceases to function, eh?


----------



## gimkim

Just switched from a 2080 Ti.

Noticed that the 3090's idle power consumption is around 120W and the core/mem clocks always stay at maximum speed,
while the 2080 Ti idled at only 20W and lowered its clock to around 100MHz at idle.

Is there a way to make the 3090 lower its core/mem clock speeds during idle to save power, like the 2080 Ti?


----------



## J7SC

gimkim said:


> Just Switched from 2080ti.
> 
> Noticed that 3090 idle power consumption is around 120W and core/mem clock always stay at maximum speed,
> while 2080ti idling at only 20W and lowered it clock to around100mhz on idle.
> 
> Is there a way to make 3090 lower it's core/mem clock speed during idle to save power like 2080ti?


...one reason would be a side effect of having the NV control panel's Manage 3D Settings set to 'Prefer maximum performance' instead of 'Optimal power.'

On another (decidedly not power-saving) note, GN did some 2x 3090 NVLink/SLI testing


----------



## mrv153

I own a Zotac Trinity 3090.
I tried flashing the Gigabyte BIOS, but one DP port stopped working. This is because Gigabyte has 2x DP and 3x HDMI ports.

I ended up with the flashed Palit BIOS (350/365W), with core clock +100MHz and mem clock +550.
Time Spy 1440p graphics score: 20500, with fans at 100% almost 20600.

I'm fine with this result, since the Zotac is one of the cheaper cards.


----------



## bmgjet

gimkim said:


> Just Switched from 2080ti.
> 
> Noticed that 3090 idle power consumption is around 120W and core/mem clock always stay at maximum speed,
> while 2080ti idling at only 20W and lowered it clock to around100mhz on idle.
> 
> Is there a way to make 3090 lower it's core/mem clock speed during idle to save power like 2080ti?


It also depends on your screens and their refresh rate.
If I have all 3 screens plugged in, my card always idles at 1755MHz at 0.74V.
Unplug one of them so it's only running two screens, and it drops to 330MHz at 0.74V.
Unplug the other one so it's only running one screen, and it drops down to 220MHz.




mrv153 said:


> I own a Zotac Trinity 3090.
> I tryd flashing the gigabyte bios, but one DP Port stopped working. This is because Gigabyte has 2x DP and 3x HDMI Ports.
> 
> I ended um with the flashed Plait Bios (350/365w) with a core clock +100mhz and mem clock +550.
> Timespy 1440p graphic score: 20500, with fan 100% almost 20600.
> 
> Im fine with this result, since zotac is one of the cheaper cards.


I'm using 3 DP on the GB BIOS with my EVGA card.
The GB card has 3 DP and 2 HDMI; the 3rd DP is above the 2 HDMI.
Did you do a complete driver uninstall and reinstall between switching the BIOS so it can remap the outputs?


----------



## mrv153

bmgjet said:


> I'm using 3 DP on the GB BIOS with my EVGA card.
> The GB card has 3 DP and 2 HDMI; the 3rd DP is above the 2 HDMI.
> Did you do a complete driver uninstall and reinstall between switching the BIOS so it can remap the outputs?


No, no reinstall.
I'm gonna try this; I thought you couldn't use a BIOS from a card with different ports.
So you use this BIOS?
*Gigabyte RTX 3090 Gaming OC *(370/390W) https://mega.nz/file/giIU0Z6Q (Retail)


With a card with 3x DP and 1x HDMI, and all ports work?
I have 3 DP monitors and 1 TV with HDMI 2.1.


----------



## NapsterAU

Got home, ran 1 benchmark, and then flashed the Zotac 3090 with the Gigabyte Gaming OC BIOS.
I can confirm that it works, but without one of the DPs; that's not an issue for me.


Could see spikes to 400W power draw during benchmarking, so it's definitely working.


----------



## mrv153

NapsterAU said:


> Got home, ran 1 benchmark, and then flashed the Zotac 3090 with the Gigabyte Gaming OC BIOS.
> I can confirm that it works, but without one of the DPs; that's not an issue for me.
> 
> 
> Could see spikes to 400W power draw during benchmarking, so it's definitely working.


Can you try to DDU nvidia drivers and install it back?


----------



## bmgjet

mrv153 said:


> No, no reinstall.
> I'm gonna try this; I thought you couldn't use a BIOS from a card with different ports.
> So you use this BIOS?
> *Gigabyte RTX 3090 Gaming OC *(370/390W) https://mega.nz/file/giIU0Z6Q (Retail)
> 
> 
> With a card with 3x DP and 1x HDMI, and all ports work?
> I have 3 DP monitors and 1 TV with HDMI 2.1.


Yup, so basically the same setup as me: using 3 DP and the HDMI.

I have:
2x PG27UQ on DP.
1x QX2710 on a DP to DVI adapter.
And the TV on HDMI.


----------



## mrv153

bmgjet said:


> Yup, so basically the same setup as me: using 3 DP and the HDMI.
> 
> I have:
> 2x PG27UQ on DP.
> 1x QX2710 on a DP to DVI adapter.
> And the TV on HDMI.


What results do you get with the Gigabyte BIOS?
So reinstalling the driver with DDU fixed the one DP port?


----------



## Nammi

gimkim said:


> Just switched from a 2080 Ti.
> 
> Noticed that the 3090's idle power consumption is around 120W and the core/mem clocks always stay at maximum speed, while the 2080 Ti idled at only 20W and lowered its clock to around 100 MHz at idle.
> 
> Is there a way to make the 3090 lower its core/mem clock speeds at idle to save power like the 2080 Ti?


Get Nvidia Inspector and use the Multi Display Power Saver.


----------



## torqueroll

Got my FE card yesterday and spent the night benching it. Seems I got a decent chip; got into the top 10 on Port Royal. In my limited experience so far, the card has really been set at its limit, as the power draw increases steeply past the reference 350W. There aren't many real-world gains after that. For regular gaming sessions, a power draw close to or above 400W really isn't worth it imho, although it's fun for benchmarking. Maybe water would make it scale better.

I'm also still deciding on which waterblock to get. I've seen the Aqua Computer one, which has a heatpipe from the backplate going into the water block. Has anyone tried this on previous GPUs? The mem chips on the back do get hot, so this might be beneficial.


----------



## cookiesowns

HyperMatrix said:


> +175 on core should be much higher than 1959MHz. You're crashing because you're trying to clock too high when your card is already telling you it's throttling, likely due to a power limit issue. But to be sure of the reason, check the PerfCap Reason under GPU-Z's Sensors tab to see what your limiting factor is.


As you probably know, the core offset applies across the entire frequency/voltage curve. I've been playing with manual V/F curve overclocking and it's helping me dial it in, though I'm still struggling to get more than 1920-1950 core clock around the power limit wall.
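For anyone new to why an offset and the actual boost clock differ, here's a minimal Python sketch of the idea. The curve points below are made up purely for illustration; they are not real Ampere V/F values.

```python
# Toy voltage/frequency curve: voltage (mV) -> stock frequency (MHz).
# These points are invented for illustration; not real 3090 curve data.
stock_curve = {700: 1400, 800: 1600, 900: 1800, 1000: 1950, 1081: 2050}

def apply_offset(curve, offset_mhz):
    """A core "offset" shifts every point of the V/F curve by the same amount."""
    return {mv: mhz + offset_mhz for mv, mhz in curve.items()}

offset_curve = apply_offset(stock_curve, 175)

# Under a tight power limit the card may only sustain a lower voltage point
# under load, so the effective clock is the offset frequency at that voltage,
# not the peak of the curve.
sustained_mv = 800
print(offset_curve[sustained_mv])  # 1775
```

That is why a large positive offset can still land well under the curve's peak: the power limiter pins the card at a lower-voltage point of the (shifted) curve.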




torqueroll said:


> Got my FE card yesterday and spent the night benching it. Seems I got a decent chip; got into the top 10 on Port Royal. In my limited experience so far, the card has really been set at its limit, as the power draw increases steeply past the reference 350W. There aren't many real-world gains after that. For regular gaming sessions, a power draw close to or above 400W really isn't worth it imho, although it's fun for benchmarking. Maybe water would make it scale better.
> 
> I'm also still deciding on which waterblock to get. I've seen the Aqua Computer one, which has a heatpipe from the backplate going into the water block. Has anyone tried this on previous GPUs? The mem chips on the back do get hot, so this might be beneficial.



Got more details of your clocks?


----------



## torqueroll

cookiesowns said:


> As you probably know, the core offset applies across the entire frequency/voltage curve. I've been playing with manual V/F curve overclocking and it's helping me dial it in, though I'm still struggling to get more than 1920-1950 core clock around the power limit wall.
> 
> Got more details of your clocks?


Ran through Port Royal with +190 core and +600 mem on the FE with the fans at 100%. I had to open the windows to get the room cold enough as well.


----------



## cookiesowns

torqueroll said:


> Ran through Port Royal with +190 core and +600 mem on the FE with the fans at 100%. I had to open the windows to get the room cold enough as well.


That's insane. Congrats on the silicon lottery!

Can you post a large-res screenshot of the voltage/frequency curve in either Afterburner (Ctrl+F) or Precision X? Stock & also your +190 offset.

Gonna try repasting the GPU tomorrow. Seems like mine is really on the low end; with a +120 offset it'll boost to 2050 and say CYA in Time Spy.


----------



## NapsterAU

How's this looking compared to others?
Zotac 3090 with Gigabyte OC BIOS, +100 core / +200 mem, 390W power limit



https://www.3dmark.com/3dm/51130986?


----------



## mrv153

NapsterAU said:


> How's this looking compared to others?
> Zotac 3090 with Gigabyte OC BIOS, +100 core / +200 mem, 390W power limit
> 
> 
> 
> https://www.3dmark.com/3dm/51130986?


This is mine with the Palit BIOS at +100 core / +550 mem:


https://www.3dmark.com/spy/14253496



Your result scaled with the higher wattage.
300 points for 35 more watts; I don't think it's worth it.


----------



## tiefox

torqueroll said:


> I've seen the Aqua Computer one, which has a heatpipe from the backplate going into the water block. Has anyone tried this on previous GPUs? The mem chips on the back do get hot, so this might be beneficial.


That one was for the reference card; the FE is not reference. I'm not sure they will actually make a block for the FE this time. I think only Bitspower and EKWB announced FE blocks, with no details about active backplate cooling.


----------



## Tias

Hello guys, I wanted to check with you before I make a buy.

I can either get the "PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB" or the "Gigabyte GeForce RTX 3090 GAMING OC".
My friend got them both and he said I could buy one of them; it didn't matter to him which one. He is not a gamer and only needed the card for the VRAM.

I understand that the PNY 3090 has a 365W limit BIOS and the Gigabyte Gaming OC has a 390W limit BIOS, but I could flash the Gigabyte BIOS onto the PNY. So the question really is: which of these two cards will probably be the best pick?

All help and thoughts are welcome.


----------



## nievz

Hi guys, can someone post a GPU-Z sensors screenshot of an MSI Gaming X Trio 3090 flashed to the Strix BIOS? How much gain does it give when the power limit is set to 480W?


----------



## Thoth420

Tias said:


> Hello guys, I wanted to check with you before I make a buy.
> 
> I can either get the "PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB" or the "Gigabyte GeForce RTX 3090 GAMING OC".
> My friend got them both and he said I could buy one of them; it didn't matter to him which one. He is not a gamer and only needed the card for the VRAM.
> 
> I understand that the PNY 3090 has a 365W limit BIOS and the Gigabyte Gaming OC has a 390W limit BIOS, but I could flash the Gigabyte BIOS onto the PNY. So the question really is: which of these two cards will probably be the best pick?
> 
> All help and thoughts are welcome.


I think the PNY looks fantastic tbh. No clue how it does thermally. I know the Gigabyte does fairly well there. At the end of the day, all else being equal, I would go with the one with the better cooling solution.


----------



## twisted0ne

Anyone flashed an MSI Ventus 3X OC? Bit paranoid about flashing the wrong one.


----------



## mrv153

twisted0ne said:


> Anyone flashed an MSI Ventus 3X OC? Bit paranoid about flashing the wrong one.


No, but can you share your BIOS please? To dump it:

*nvflash64 --protectoff
nvflash64 --save Partner2080TiModel.rom* 









nvflash64_5.660.0_ampere.zip (1drv.ms)

PW: Ampere


----------



## twisted0ne

mrv153 said:


> No, but can you share your bios please?
> 
> *nvflash64 --protectoff
> nvflash64 --save Partner2080TiModel.rom*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> nvflash64_5.660.0_ampere.zip (1drv.ms)
> 
> PW: Ampere











976 KB file on MEGA (mega.nz)


----------



## mrv153

twisted0ne said:


> 976 KB file on MEGA (mega.nz)


Thanks.
Is it your retail BIOS? (MSI Ventus 3X OC 3090)


----------



## twisted0ne

mrv153 said:


> Thanks.
> Is it your retail BIOS? (MSI Ventus 3X OC 3090)


Yes, got it yesterday from cclonline.


----------



## mrv153

Does anyone know the power limit of the MSI Ventus 3X OC 3090?


----------



## ring0r

mrv153 said:


> Does anyone know the power limit of the MSI Ventus 3X OC 3090?


350 watts.


----------



## Foxrun

torqueroll said:


> Ran through Port Royal with +190 core and +600 mem on the FE with the fans at 100%. I had to open the windows to get the room cold enough as well.


Offsets on core can vary. What is your core clock while benching? >2100 or <2100, etc.?


----------



## slopokdave

bmgjet said:


> One thing to note which is different this gen is it will auto restart after the flash is done. So dont panic when your screen stays black for 1min, just leave it until it restarts.


Please ADD THIS TO THE FIRST POST! I had a heart attack when this happened. I also continued to get no video until I re-plugged my DisplayPort cable. Again, mini heart attack. 😁


----------



## Nitemare3219

NapsterAU said:


> Got home, ran 1 benchmark, and then flashed the Zotac 3090 with the Gigabyte Gaming OC BIOS.
> I can confirm that it works, but without one of the DPs; that's not an issue for me.
> 
> 
> Could see spikes to 400W power draw during benchmarking, so it's definitely working.


Need more info on your results! I have an order for a Zotac with ETA tomorrow and am not sure if I want to keep it, since stock performance sucks. Cooling isn't great either, but for the price I paid ($1,487), I may live with it.


----------



## torqueroll

tiefox said:


> That one was for the reference card, FE is not reference. I'm not sure they will actually make a block for FE this time. I think only Bitspower and and EKWB announced FE blocks with no details about active backplate cooling.


Thanks! Of course! I keep thinking of the FE as reference even though I know it's not. Then it's probably going to be the EK one, as usual.


----------



## torqueroll

cookiesowns said:


> Thats insane. Congrats on the silicon lottery!
> 
> Can you post a large res screenshot of the voltage frequency curve in either Afterburner (ctrl f) or Precision X? Stock & also your 190 offset
> 
> gonna try repasting the GPU tomorrow. seems like mine is really on the low end. with a +120 offset it'll boost to 2050 and say CYA in time spy..


Sure, here are the screenshots.

Stock:

+190 Offset:


----------



## Foxrun

torqueroll said:


> Sure, here are the screenshots.
> 
> Stock:
> View attachment 2460569
> 
> 
> 
> +190 Offset:
> View attachment 2460570


That's wildly different from what I get if I put in +190. Crazy.


----------



## NapsterAU

Nitemare3219 said:


> Need more info on your results! I have an order for a Zotac with ETA tomorrow and am not sure if I want to keep it, since stock performance sucks. Cooling isn't great either, but for the price I paid ($1,487), I may live with it.


It's actually not too bad; the stock run managed a 19200 GPU score in Time Spy with the factory BIOS (350W).
The Gigabyte OC BIOS (390W) managed to get a 20200 just from the increased power limit, and that's it.
Tweaking the clocks and bumping the fan to 70% got me up to 20900.

Wouldn't mind seeing the Gigabyte Aorus BIOS, as that looks like it might be in the 400s power-limit-wise on a 2x 8-pin setup.


----------



## warbucks

I lucked out and managed to pick up an MSI Ventus 3X OC yesterday. Now I just gotta wait for a compatible waterblock so I can add it to my loop. Has anyone flashed this card with a BIOS that has a higher power limit yet, and if so, which one?


----------



## torqueroll

Foxrun said:


> Offsets on core can vary. What is your core clock while benching? >2100 or <2100 etc


Really hard to answer. It depends on what I'm benching and can vary a lot. I have seen it go far above 2100 in Port Royal at one point, but it is mostly just below 2100 during the benchmark.


Foxrun said:


> That's wildly different from if I put in 190. Crazy.


Now I'm curious what your curve looks like.


----------



## Nitemare3219

NapsterAU said:


> It's actually not too bad; the stock run managed a 19200 GPU score in Time Spy with the factory BIOS (350W).
> The Gigabyte OC BIOS (390W) managed to get a 20200 just from the increased power limit, and that's it.
> Tweaking the clocks and bumping the fan to 70% got me up to 20900.
> 
> Wouldn't mind seeing the Gigabyte Aorus BIOS, as that looks like it might be in the 400s power-limit-wise on a 2x 8-pin setup.


Very cool! One of the issues I saw people complain about was core clock hitching causing stuttering in games. Hopefully the increased power limit fixes that. Is the cooling not too bad? Reviews seem to peg it 5-10 degrees warmer than most of the other cards. The heatsink seems to be shorter, so maybe that's why. My 2080 Ti hits 84 degrees. As long as the Zotac fans aren't obnoxiously loud, I can deal with temps in the 70s.

Have you tried undervolting yet? Might get you even more performance if you have a good chip.


----------



## mrv153

Just tried the Gigabyte BIOS on my Trinity again.
One DP won't work, even with the driver reinstalled via DDU.

Has someone tried one of these BIOSes on a Trinity, or on a 3090 generally?

*ASUS ROG RTX 3090 Strix OC *(390/480W) Asus RTX 3090 VBIOS (Review Sample)
*MSI RTX 3090 Gaming X Trio *(370/380W) MSI RTX 3090 VBIOS (Review Sample)


----------



## Johneey

Okay, now I've tested my Palit 3090 GamingPro fully. Specs: i9 10900K @ 5.1 GHz 1.25V, 32 GB RAM @ 3600 MHz.

I flashed the Gigabyte 3090 Gaming OC BIOS (390W) and fitted a waterblock from Alphacool.

Time Spy: https://www.3dmark.com/spy/14253756 Graphics Score: 22073
Time Spy Extreme: https://www.3dmark.com/spy/14254801 Graphics Score: 11330
Port Royal: https://www.3dmark.com/pr/354088 Graphics Score: 14467
Superposition:

Average temp on the card is 46 degrees Celsius.

If anyone wants to compare with an air-cooled RTX 3090: I got around 5-7% higher scores with water.


----------



## The-Real-Link

Finally got mine in from the launch day. Normally I'd consider waterblocking but really preferring the look of the FE card barring that power connector. Sadly need to spend the weekend rerunning my pipes to accommodate the card changes from my old TXP so can't bench it quite yet. But man, excited!


----------



## twisted0ne

> *Ventus 3X OC* | 3 Fan | 2.85 Slot | 305mm | 18 Power Stages | *1725* MHz Boost | 2x8-Pin | 350/350 W | *Custom PCB* | EAN 4719072762476 | PN V388-002R


Anyone know where this information came from? The information points towards it being reference, not custom.

On a side note, for anyone else interested, the 'Gigabyte RTX 3090 Gaming OC' BIOS flashed fine to it.


----------



## Pepillo

Looks like I got good silicon this time. On air, 2,160 MHz and 59°C maximum temperature sounds good. MSI Gaming X Trio OC with the 480W Asus Strix BIOS:


----------



## Johneey

480 watts <33, 100 points under me, nice dude.
Is it 3x 8-pin, ya?


----------



## slopokdave

I'm pretty much capped out in Port Royal, I think, at 13,500, at least until I get a water block. Asus TUF 3090 w/ Gigabyte Gaming OC at ~390W. I'm peaking at about 62C, so I need that block.

~+160 core was my highest complete run, but 140 is stable.
~+800 memory was my highest complete run, but I dialed it back to +700.

Edit: 20,500 graphics score in Time Spy.

FYI: Do a DDU/driver reinstall when you flash a new BIOS. I know this may be obvious to some of you, but it wasn't to me. I had boost clock oddities until I did this.


----------



## twisted0ne

warbucks said:


> I lucked out and managed to pick up MSI Ventus 3X OC yesterday. Now I just gotta wait for a compatible waterblock so I can add it to my loop. Anyone flash this card with a bios that has a higher power limit yet, if so, which one?


See previous post.


----------



## Johneey

slopokdave said:


> I'm pretty much capped out in Port Royal, I think, at 13,500, at least until I get a water block. Asus TUF 3090 w/ Gigabyte Gaming OC at ~390W. I'm peaking at about 62C, so I need that block.
> 
> ~+160 core was my highest complete run, but 140 is stable.
> ~+800 memory was my highest complete run, but I dialed it back to +700.
> 
> FYI: Do a DDU/driver reinstall when you flash a new BIOS. I know this may be obvious to some of you, but it wasn't to me. I had boost clock oddities until I did this.


I got around 500-600 more score with the block, -15 Celsius.


----------



## Johneey

@warbucks Ja, flash this potato card, one of the worst cards on the market. Need to flash this ****.


----------



## slopokdave

Johneey said:


> I got around 500-600 more score with the block, -15 Celsius.


15 degrees Celsius?? lol...

But yeah, I figure probably a 5%+ boost when under water.


----------



## zhrooms

twisted0ne said:


> Anyone know where this information came from? Information points towards it being reference not custom.


PCB dimensions are different, see here. It's likely based on the reference design, but how much they changed I don't know.


----------



## Spiriva

NapsterAU said:


> Wouldn't mind seeing the Gigabyte Aorus BIOS, as that looks like it might be in the 400s power-limit-wise on a 2x 8-pin setup.


That sounds like the BIOS to use. Is the Gigabyte Aorus 3090 card out on the market yet? Then it shouldn't be impossible to find the BIOS.


----------



## jomama22

Question: Does the TUF OC 3090 have voltage control up to 1.1V (+100)? Trying to find info on this, and looking through some reviews it's hard to see if it does or not. Thanks!

I have an order from Amazon and will most likely shunt it for the extra power unless an unlocked BIOS comes around soon. If it doesn't have voltage control, I'll probably end up buying one of Elmor's voltage controllers as well. Hopefully I can find a block soon enough.


----------



## nievz

slopokdave said:


> I'm pretty much capped out in Port Royal, I think, at 13,500, at least until I get a water block. Asus TUF 3090 w/ Gigabyte Gaming OC at ~390W. I'm peaking at about 62C, so I need that block.
> 
> ~+160 core was my highest complete run, but 140 is stable.
> ~+800 memory was my highest complete run, but I dialed it back to +700.
> 
> Edit: 20,500 graphics score in Time Spy.
> 
> FYI: Do a DDU/driver reinstall when you flash a new BIOS. I know this may be obvious to some of you, but it wasn't to me. I had boost clock oddities until I did this.


Was it hovering at 2.1GHz core using these settings during the benchmarks?


----------



## gamervivek

Anyone getting stuttering with the power limiter in Afterburner, with huge clock speed jumps, 1.8GHz down to 1.2-1.3GHz and then back up? Not sure if it's driver-related or something else.


----------



## LukkyStrike

Spiriva said:


> That sounds like the BIOS to use. Is the Gigabyte Aorus 3090 card out on the market yet? Then it shouldn't be impossible to find the BIOS.


I have one installed in my PC right now (Gaming OC) that I can post from today/tomorrow, but I think someone has been passing it around in the previous pages.

I can't wait to get some work done on it over the next few days and see where it goes.


----------



## Spiriva

LukkyStrike said:


> I have one installed in my PC right now (Gaming OC) that I can post from today/tomorrow, but I think someone has been passing it around in the previous pages.
> 
> I can't wait to get some work done on it over the next few days and see where it goes.












Is this the one you got? I think someone posted the 390W BIOS from the "Gigabyte GeForce RTX 3090 GAMING OC", but the 400W BIOS should come from the Gigabyte 3090 called "Gigabyte GeForce RTX 3090 AORUS MASTER".


----------



## Johneey

gamervivek said:


> Anyone getting stuttering with the power limiter in Afterburner, with huge clock speed jumps, 1.8GHz down to 1.2-1.3GHz and then back up? Not sure if it's driver-related or something else.


nope


----------



## gamervivek

Johneey said:


> nope


How much have you limited it to? I've tried from 60-85%. I forgot to mention that it happens when you run a game at demanding settings; I can reliably reproduce it with GTA V at 4K with frame scaling set to 2/1, so effectively running it at 8K.


----------



## warbucks

twisted0ne said:


> See previous post.


I've looked at previous posts and don't see any mention of this specific card and one of the available BIOSes being used to flash it. Happen to have a link?


----------



## slopokdave

Is voltage lock supposed to work on these cards?


----------



## cookiesowns

torqueroll said:


> Sure, here are the screenshots.
> 
> Stock::
> View attachment 2460569


Yup, looks like I lost the silicon lottery this time; mine is _barely_ above stock rated boost clocks.

You're running the latest driver, yes?


----------



## dante`afk

The Distill web plugin finally gave me something other than a false positive, but the page does not continue ***


----------



## BigMack70

I'm in 

For anyone wanting an FE who is getting stuck at checkout, you need to use a direct link after adding to cart. I added to cart then used: NVIDIA Online Store - Checkout


----------



## LukkyStrike

Spiriva said:


> Is this one the one you got? I think someone posted the 390w bios from "Gigabyte GeForce RTX 3090 GAMING OC" but the 400w bios should come from the Gigabyte 3090 called "Gigabyte GeForce RTX 3090 AORUS MASTER"


No, I have the Gaming OC, not the Master. My apologies.

Does that have 3x 8-pin as opposed to my 2x 8-pin?


----------



## Benni231990

This card also has 2x 8-pin.

So we are all hot as **** for this BIOS xD


----------



## Ferreal

Sigh, sold out again.


----------



## Cavokk

Pepillo said:


> Looks like I got good silicon this time. On air, 2,160 MHz and 59°C maximum temperature sounds good. MSI Gaming X Trio OC with the 480W Asus Strix BIOS:


Hi Pepillo.

You got me daring to flash my RTX 3090 Gaming X Trio with the ASUS Strix BIOS, and boooom, it just works. Now there's plenty of power headroom; just going stock settings with +10 on the power limit gave me roughly 5% higher score in 3DMark compared to the maxed-out MSI Gaming X Trio (+3% power limit), all at 1440p.

Anyway - what are your settings in afterburner and what resolution are you benchmarking in?

Thanks

BR
C


----------



## BigMack70

Anyone attempt a BIOS flash on an FE card yet?


----------



## Spiriva

LukkyStrike said:


> Does that have 3 8 pins opposed to my 2x8?


No, that card also uses 2x 8-pin, but it should have a 400W BIOS instead of the 390W one on the Gaming OC model. Not sure how much 10W is gonna change, though, but I would like to try it. Just need to find someone who has the card.


----------



## LukkyStrike

Benni231990 said:


> this card has also 2x 8 pin
> 
> so we are all hot as **** of this bios xD





Spiriva said:


> No, that card also uses 2x 8-pin, but it should have a 400W BIOS instead of the 390W one on the Gaming OC model. Not sure how much 10W is gonna change, though, but I would like to try it. Just need to find someone who has the card.



Well there you go, that makes sense. So I am also on the hunt for the Master OC BIOS then.


----------



## slopokdave

nievz said:


> Was it hovering at 2.1ghz core using these settings during the benchmarks?


No, very far away from that. 1950ish at its highest. This was with a straight-up core offset. However, when I play games with a core offset, clocks are all over the place and it will most likely crash.

I have to set a manual curve in order to keep clocks stable in the 1900s in games. OC'ing is very different than with previous cards. I think there is headroom, but I can't figure it out yet.


----------



## nievz

Cavokk said:


> Hi Pepillo.
> 
> You got me daring to flash my RTX 3090 Gaming X Trio with the ASUS Strix BIOS, and boooom, it just works. Now there's plenty of power headroom; just going stock settings with +10 on the power limit gave me roughly 5% higher score in 3DMark compared to the maxed-out MSI Gaming X Trio (+3% power limit), all at 1440p.
> 
> Anyway - what are your settings in afterburner and what resolution are you benchmarking in?
> 
> Thanks
> 
> BR
> C


What temps are you getting? Can you post screenshot of sensors tabs of GPUZ you got while on load?


----------



## Pepillo

Cavokk said:


> Hi Pepillo.
> 
> You got me daring to flash my RTX 3090 Gaming X Trio with the ASUS Strix BIOS, and boooom, it just works. Now there's plenty of power headroom; just going stock settings with +10 on the power limit gave me roughly 5% higher score in 3DMark compared to the maxed-out MSI Gaming X Trio (+3% power limit), all at 1440p.
> 
> Anyway - what are your settings in afterburner and what resolution are you benchmarking in?
> 
> Thanks
> 
> BR
> C


Since my monitor is 4K, I try to run the tests at that resolution, which is the one I care about, like Time Spy Extreme. A screenshot of GPU-Z and MSI Afterburner:


----------



## originxt

PL maxed, +150 on core, 68-69C.

GPU-Z is constantly green with power as my limiter. Hitting the cap of 450 watts on my FTW3.

Currently looping Port Royal in windowed mode.

Unsure how good my card is, honestly.


----------



## Cavokk

nievz said:


> What temps are you getting? Can you post screenshot of sensors tabs of GPUZ you got while on load?


Sure, will do Friday; already in bed.


----------



## Zurv

While not the card at the top of my list (but still fine), I finally got an FE order in today. (My guess is it will be a week+ before I see it. The past orders took forever to ship. I also didn't see an option for different shipping options, but I was in a rush!)
I was getting Cyberpunk stressed 

When the higher-end models are out I'll likely upgrade. (I also want two cards; I have two systems, desktop and TV.)
Now the problem at hand is finding a block for the 3090 FE. 

Ahaha, I have so much overkill cooling (the same setup from when I ran 4 cards): 3 pumps, 2x 360s, and a huge Koolance external cooler. I guess it will be super quiet.


----------



## domenic

EKWB has this picture posted of the backplate for their Strix 3080/3090 water block. I have repeatedly inquired regarding my preorder, asking what the plan is to cool the backside memory chips on the 3090. EKWB has declined to answer, saying such information is "confidential" and something their competitors would love to find out (complete nonsense, as the product is set to ship shortly and I already paid for it).

Anyway, notice what looks to be either outlines or indentations where one would think they would include thermal pads to cover components such as the memory chips.

Anyone with any inside info on when the Strix 3090 block will be up for sale, and by whom? As far as I can tell, the only US retailers at this point are Newegg, Best Buy, and the Asus store.









EK-Quantum Vector Strix RTX 3070/3080/3090 Backplate - Black (www.ekwb.com)


----------



## hoborific

Hey guys, just sharing what I found while undervolting my Gigabyte 3090 Gaming OC.

At 1695MHz@768mV it draws about 225W running Folding@home and about 275W gaming. I manually set the curve to a flat 1695MHz at 768mV in Afterburner. When using an 80% PL it would spike up and down constantly; with the manual undervolt and flat curve, uncapping the PL, it uses more or less the power limit without massive fluctuations in core frequency.

Time Spy bench:
275W 1695MHz@768mV undervolt: 19.3k
390W 1930-2100MHz overclock: 21.1k

Folding@home:
undervolt: 5M PPD (low 4.8, high 5.9)
overclock: 6.9M PPD (low 6.2, high 7.0)
The Folding@home numbers are more anecdotal; I only ran one work unit per power configuration.

The card loves 2.1GHz and only seems to clock down to 2070-2085 based on temperature in Folding@home and on TDP while gaming.

God I wish there was a waterblock available for my card; will there ever be?
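For what it's worth, a quick sketch of the efficiency math implied by the numbers above (scores and wattages as reported in this post; Python is only doing arithmetic here):

```python
# Score-per-watt comparison using the Time Spy numbers reported above.
configs = {
    "undervolt 1695MHz@768mV": (19_300, 275),  # (graphics score, watts)
    "overclock 1930-2100MHz":  (21_100, 390),
}

for name, (score, watts) in configs.items():
    print(f"{name}: {score / watts:.1f} points/W")

(uv_score, uv_w) = configs["undervolt 1695MHz@768mV"]
(oc_score, oc_w) = configs["overclock 1930-2100MHz"]
extra_score = (oc_score / uv_score - 1) * 100  # percent more score
extra_power = (oc_w / uv_w - 1) * 100          # percent more power
print(f"+{extra_score:.1f}% score for +{extra_power:.1f}% power")
# -> +9.3% score for +41.8% power
```

In other words, the overclock buys under 10% more score for over 40% more power, which is why the flat undervolted curve looks so attractive for 24/7 use.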


----------



## BigMack70

Zurv said:


> While not the card at the top of my list (but still fine), I finally got an FE order in today. (My guess is it will be a week+ before I see it. The past orders took forever to ship. I also didn't see an option for different shipping options, but I was in a rush!)
> I was getting Cyberpunk stressed
> 
> When the higher-end models are out I'll likely upgrade. (I also want two cards; I have two systems, desktop and TV.)
> Now the problem at hand is finding a block for the 3090 FE.
> 
> Ahaha, I have so much overkill cooling (the same setup from when I ran 4 cards): 3 pumps, 2x 360s, and a huge Koolance external cooler. I guess it will be super quiet.


I also didn't see shipping options; standard only. Kind of a bummer, but to be expected with the pandemic still wreaking havoc on everything.

I'm content with the FE for now. I'm going to wait and see how the FTW3 Hybrid does, or how other cards on custom BIOS do, and then maybe upgrade from an FE to something a bit better. If the FE is loud, I may just build a custom loop for it and be done, but I'm hoping with that enormous cooler that it will be quiet in a high airflow case.

I'm happy being patient now and waiting a few months to see how things play out, but I really wanted my HDMI 2.1 and my extra performance pre-Cyberpunk launch.


----------



## dr/owned

I'm also on the 3090 waiting-to-ship list for a TUF OC. I've also ordered shunt resistors so I can shunt mod it. If anyone cares, I did the math on the PCIe 12V fingers: assuming they can tolerate 60C, about 1.6 amps each. The spec of 1.1 amps (66W on 12V -> 75W total across all voltages) is considered a "minimum" spec, not a max. TL;DR: a 12mOhm shunt on top of the existing 5mOhm gets it to about 95W total 12V power. I'm just going to shunt the 8-pins with 5mOhm and skip all the messing around with BIOSes to get a higher power limit.
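For anyone checking the arithmetic, here's a small Python sketch of the parallel-shunt math described above. The resistor values and the 66W slot figure come from the post; the "reported vs. real power" scaling assumes the controller infers current purely from the voltage drop across the sense shunt.

```python
def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 5e-3   # existing sense shunt (ohms), per the post
R_MOD   = 12e-3  # resistor soldered on top of it (ohms)

r_eff = parallel(R_STOCK, R_MOD)  # ~3.53 mOhm

# The card estimates power from the voltage drop across the shunt, so with a
# smaller effective shunt the real power at any reported limit scales up by
# R_STOCK / r_eff.
scale = R_STOCK / r_eff  # 17/12, about 1.417

slot_12v_budget = 66.0  # W, the 12V share of the 75W PCIe slot spec
print(f"effective shunt: {r_eff * 1e3:.2f} mOhm")
print(f"real 12V slot draw at reported limit: {slot_12v_budget * scale:.1f} W")
```

66 * 17/12 = 93.5W, which matches the post's "about 95W total 12V power" figure.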



bmgjet said:


> I can't remember which video it was since I've watched so many on YouTube now.
> The guy was measuring computer power draw at the wall and modded one shunt at a time.
> Doing just the PCIe power connectors he got 20W extra total system draw, but the card was then hitting the power limit while GPU-Z said it was at 90% of the power limiter.
> Once he did the PCIe slot one, he got an extra 100W from the wall.











Power Consumption Concerns on the Radeon RX 480 - PC Perspective (pcper.com)





I think this article covers the situation being referenced; 95W was considered "OK" by a motherboard designer.










0.7mm is the width of each finger on the card.


----------



## mirkendargen

domenic said:


> EKWB has this picture posted of the backplate for their Strix 3080 / 90 water block. I have repeatedly inquired regarding my preorder asking what the plan is to cool the backside memory chips on the 3090. EKWB has declined to answer saying such information is "confidential" & something their competitors would love to find out (complete nonsense as the product is set to ship shortly and I already paid for it).
> 
> Anyway, notice what looks to be either outlines or indentations for where one would think they would include thermal pads to cover components such as the memory chips.
> 
> Anyone with any inside info on when the Strix 3090 will be up for sale and by whom? As far as I can tell the only US retailers at this point are Newegg, BestBuy, and the Asus store.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK-Quantum Vector Strix RTX 3070/3080/3090 Backplate - Black (www.ekwb.com)
> 
> 
> 
> 
> View attachment 2460608


ShopBLT claims my Strix 3090 will get to their warehouse the 13th. If that's accurate, I assume other retailers will also be getting them that day. Considering all the stock coolers cool the backside with just a metal backplate and thermal pads...it's probably at least adequate for water blocks to do that too. We'll just see I guess, maybe someone can slap one of the flat RAM waterblocks on their backplate, heh.


----------



## CptAsian

hoborific said:


> Hey guys, just sharing what I found while undervolting my 3090 Gigabyte Gaming OC
> 
> at 1695MHz@768mV it draws about 225W running F@H and about 275W gaming. I manually set the curve to a flat 1695MHz at 768mV in Afterburner; when using 80% PL it would spike up and down constantly, but with the manual undervolt, the flat curve, and an uncapped PL it uses more or less the power limit without massive fluctuations in core frequency.
> 
> Time Spy bench:
> 275W 1695MHz@768mV undervolt: 19.3k
> 390W 1930-2100MHz overclock: 21.1k
> 
> F@H:
> undervolt: 5M PPD (low 4.8, high 5.9)
> overclock: 6.9M PPD (low 6.2, high 7.0)
> the F@H numbers are more anecdotal, I only ran one work unit per power configuration
> 
> the card loves 2.1GHz and only seems to clock down to 2070-2085MHz based on temperature in F@H, and on TDP while gaming
> 
> God I wish there was a waterblock available for my card, will there ever be?


Thanks for the F@H data, and welcome to OCN!


----------



## Thoth420

So jelly of everyone that got a TUF or Strix. I got this Eagle OC on the way...


----------



## Manya3084

Anyone got the TUF OC 3090 bios?
Trying to get to 5th on the HOF.


----------



## DrunknFoo

domenic said:


> EKWB has this picture posted of the backplate for their Strix 3080 / 90 water block. I have repeatedly inquired regarding my preorder asking what the plan is to cool the backside memory chips on the 3090. EKWB has declined to answer saying such information is "confidential" & something their competitors would love to find out (complete nonsense as the product is set to ship shortly and I already paid for it).
> 
> Anyway, notice what looks to be either outlines or indentations for where one would think they would include thermal pads to cover components such as the memory chips.
> 
> Anyone with any inside info on when the Strix 3090 will be up for sale and by whom? As far as I can tell the only US retailers at this point are Newegg, BestBuy, and the Asus store.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK-Quantum Vector Strix RTX 3070/3080/3090 Backplate - Black (www.ekwb.com)


What exactly are you asking EKWB for? It's not like the preorder price for the Strix backplate is much different from the standard backplates they already have in place....
It will have pads to transfer heat from the backside of the gpu, vrm and mem modules...


----------



## mirkendargen

DrunknFoo said:


> what exactly are you asking ekwb for? it's not like the preorder price for the strix backplate not much different than the standard backplates they already have in place....
> it will have pads to transfer heat from backside of gpu, vrm and mem modules...


I think people are curious if anyone will make an actively cooled backplate (like, water actually flows through it).


----------



## HyperMatrix

For anyone curious about 3090 SLI performance:


----------



## slopokdave

originxt said:


> PL maxed 150+ on core, 68-69c
> 
> Gpuz is constantly green with power as my limiter. Hitting the cap of 450 watts on my ftw3.
> 
> Currently looping port royal in windowed mode.
> 
> View attachment 2460600
> 
> 
> Unsure how good my card is honestly.


That's pretty good IMO if you are at 2040mhz and 68C. 

My card is an Asus TUF w/ Gigabyte vBIOS, yeah less power and 2x8pin instead, but I'm nowhere near a steady 2GHz, and at less than 60C.


----------



## changboy

Can't get a 3090, but got an order in on an ASUS Strix 3080 OC, so I'll change my owner's club page lol hehehe.


----------



## DrunknFoo

mirkendargen said:


> I think people are curious if anyone will make an actively cooled backplate (like, water actually flows through it).


it wouldn't cost the same price as any other regular backplate then...
what comes to mind is the kryographics system.. don't think we will see any, to be honest...

you are likely better off slapping a cooling plate on a backplate, or directly on the gpu backside, with ram sink blocks on the vrm/ram modules.
This is something I will consider doing depending on how much heat is actually absorbed by the backplate. For those with smaller loops, dumping extra heat into the loop might not be worth it...(I got 4 large rads so it wouldn't be an issue)....probably better off actively cooling with a fan and additional heatsinks attached imo


----------



## domenic

mirkendargen said:


> ShopBLT claims my Strix 3090 will get to their warehouse the 13th. If that's accurate, I assume other retailers will also be getting them that day. Considering all the stock coolers cool the backside with just a metal backplate and thermal pads...it's probably at least adequate for water blocks to do that too. We'll just see I guess, maybe someone can slap one of the flat RAM waterblocks on their backplate, heh.


If you don't mind answering, how did you manage to pre-order it? Can't even find it on their site...


----------



## Manya3084

mirkendargen said:


> I think people are curious if anyone will make an actively cooled backplate (like, water actually flows through it).


I ordered it for the TUF. I'm assuming that the water cooling on the top of the card will still filter through to the back.


----------



## mirkendargen

domenic said:


> If you don't mind answering, how did you manage to pre-order it? Can't even find it on their site...


It was only listed there for a day (along with the TUF and TUF OCs) and then the listings disappeared. A Zotac 3090 listing appeared yesterday, then disappeared. It seems like, rather than leaving out-of-stock listings up and piling up hundreds (thousands?) of backorders, they're just listing cards until roughly the quantity of their incoming shipment has been ordered.



DrunknFoo said:


> it wouldn't cost the same price as any other regular back plate then...
> what comes to mind is the kyrographics system.. don't think we will see any to be honest...
> 
> you are likely better off slapping a cooling plate on a backplate, or directly on the the gpu backside and ram sink blocks to the vrm/ram modules.
> This is something I will consider doing depending on how much heat is actually absorbed by the backplate. For those with smaller loops, dumping extra heat into the loop might not be worth it...(i got 4 large rads so it wouldn't be an issue)....probably better off actively cooling with a fan and additional heatsinks attached imo


Yup I have a bigass heatsink I'll stick on the backplate (optionally with a fan) if I need to. I could also see slapping a TR4 waterblock on the backplate over the rear GPU area, they're about the right size to cover all the RAM chips, heh.


----------



## torqueroll

cookiesowns said:


> Yup looks like I lost the silicon lottery this time.. mine is _barely_ above stock rated boost clocks
> 
> You're running the latest driver yes?


Yes, I'm running the latest driver.

The silicon lottery really only matters for benchmarking. You have the option of selling it now though and rerolling later.


----------



## HyperMatrix

Manya3084 said:


> I ordered it for the TUF. i'm assuming that the water cooling on the top of the card will still filter through to the back.


It will to an extent. But now the heat will have to transfer from the VRAM on the backside through the board, and on to the components on the other side which are connected to the block.

Think of it like warming up frozen food in a microwave. Only the exposed areas on the outside are heated. But as they get hot...the colder parts of your food on the inside start stealing the heat away. If you reheat on low, your food will have more time to pass the heat down to its center. But if you heat it on high...you can end up overcooking and ruining the outside while left with ice still in the center.

So basically speed of thermal transfer will be important. But also...any heat generated on the back side will negatively affect the temperature of the components on the other side. It may end up not being a huge deal. But it could also affect stability of memory overclocking at the higher end.

I’ve been using active backplate cooling since Maxwell and intend to stick to it after I saw just how hot the back of the card can get. Now to go microwave my dinner...


----------



## Manya3084

HyperMatrix said:


> It will to an extent. But now the heat will have to transfer from the VRAM on the backside through the board, and on to the components on the other side which are connected to the block.
> 
> Think of it like warming up frozen food in a microwave. Only the exposed areas on the outside are heated. But as they get hot...the colder parts of your food on the inside start stealing the heat away. If you reheat on low, your food will have more time to pass the heat down to its center. But if you heat it on high...you can end up overcooking and ruining the outside while left with ice still in the center.
> 
> So basically speed of thermal transfer will be important. But also...any heat generated on the back side will negatively affect the temperature of the components on the other side. It may end up not being a huge deal. But it could also affect stability of memory overclocking at the higher end.
> 
> I’ve been using active backplate cooling since Maxwell and intend to stick to it after I saw just how hot the back of the card can get. Now to go microwave my dinner...


I'll see how it goes. I do have a spare TR4 block I could attach.
I was hoping they would have made a "sandwich" style block, to be unique.


----------



## dante`afk

3090 fe | eBay 

jesus, i wish all of them the worst.


----------



## HyperMatrix

dante`afk said:


> 3090 fe | eBay
> 
> jesus, i wish all of them the worst.


Half of those listings are from Jensen's personal eBay account.


----------



## J7SC

^^well, he's got to pay Softbank $40 billion for Arm, IF the regulators let that happen, so add another pile for lawyers...spend away ! ;-)

Re. active back plate cooling, I posted this a while ago...have a pile of these universal GPU coolers that I will try out


----------



## gamervivek

hoborific said:


> Hey guys, just sharing what I found while undervolting my 3090 Gigabyte Gaming OC
> 
> at 1695MHz@768mV it draws about 225W running F@H and about 275W gaming. I manually set the curve to a flat 1695MHz at 768mV in Afterburner; when using 80% PL it would spike up and down constantly, but with the manual undervolt, the flat curve, and an uncapped PL it uses more or less the power limit without massive fluctuations in core frequency.
> 
> Time Spy bench:
> 275W 1695MHz@768mV undervolt: 19.3k
> 390W 1930-2100MHz overclock: 21.1k
> 
> F@H:
> undervolt: 5M PPD (low 4.8, high 5.9)
> overclock: 6.9M PPD (low 6.2, high 7.0)
> the F@H numbers are more anecdotal, I only ran one work unit per power configuration
> 
> the card loves 2.1GHz and only seems to clock down to 2070-2085MHz based on temperature in F@H, and on TDP while gaming
> 
> God I wish there was a waterblock available for my card, will there ever be?


Even without undervolting, I am seeing a similar thing with my 3090 card: huge clockspeed fluctuations with different PLs in games with demanding settings, like running an effective 8K in GTA V.









[Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)


----------



## bmgjet

Just finished a 4 hour mapping session of Rust on max settings at 4K with RustEdit running as well to edit the map.
VRAM usage 18-21GB.

Backplate temp with IR gun 97C
Peak core temp 70C.
GPUZ limiter is just a flat line of PWR, with the power draw graph just being 371-390W.

I have an 80mm 1300rpm fan blowing directly on the backplate.
Overclock is +400MHz on the vram.
Effectively +150MHz on the core, but it's flat lined after 1V using the curve editor.
Average core speed 1995MHz
Peak core speed 2100MHz.

Wonder what the rated temp of the vram is. I'm guessing the backplate works since it's hot as F, it's just really lacking mass on this XC3 Ultra card.


----------



## cookiesowns

torqueroll said:


> Yes, I'm running the latest driver.
> 
> The silicon lottery really only matters for benchmarking. You have the option of selling it now though and reroll later.


I wish that were true. It seems that in addition to low clocks, this chip is just high leakage.
Temps get toasty doing all 6 scenes of Blender with CUDA running, and fans spike to 100% after a short while, presumably cooling down the VRM and memory (core goes from 68C rapidly down to 62C or so).

Repasting the first time made it worse. The second time, also adding a pad under the rear backplate, improved it just a tad. Core temps are roughly the same.

This cooler really isn't meant to be disassembled.
If I can't figure out the fan spiking up to 100% I may just RMA it.


----------



## mirkendargen

bmgjet said:


> Just finished a 4 hour mapping sesson of rust on max setting 4K with rustedit running as well to edit the map.
> Vram usage 18-21GB.
> 
> Backplate temp with IR gun 97C
> Peak core temp 70c.
> GPUZ limiter is just a flat line of PWR. with the power draw graph just being 371-390W.
> 
> I have a 80mm 1300rpm fan blowing directly on the back plate.
> Overclock is 400mhz on the vram.
> Effectivly +150mhz on the core but its flat lined after 1V using curve editor.
> Average core speed 1995mhz
> Peak core speed 2100mhz.
> 
> Wonder what the rated temp of the vram is, Im guessing the backplate works since its hot as F, Just its really lacking mass on this XC3 Ultra card.


Damn, it should at least have fins to increase surface area if it's getting that hot, and should really probably have a heat pipe connecting to the main heatsink...


----------



## Thoth420

dante`afk said:


> 3090 fe | eBay
> 
> jesus, i wish all of them the worst.


The number of listings looks like pure cope as you scroll. Hopefully it keeps up and the price keeps falling. Cut all that profit out from under them.


----------



## BigMack70

dante`afk said:


> 3090 fe | eBay
> 
> jesus, i wish all of them the worst.


The problem is the people buying cards at nearly double MSRP because they're too angry/impatient to wait a week or two. That or they're so rich that $1000-1500 for a couple weeks of use of a GPU is no big deal to them.

What I don't understand is why they buy scalped GPUs instead of buying pre-built systems with a 3090 for ~$3.7K off Amazon. Only a few hundred more bucks and you get an entire system... could just pull out the GPU, sell the rest of the system, and save a ton of cash. Or, alternatively, get an entire new (relatively) high end PC for a few hundred bucks more than for the card itself.


----------



## Chamidorix

I finally got around to flashing the 3090 Strix OC Performance bios from TechPowerUp onto my 3090 EVGA FTW Ultra.

The 2nd display port from the top of the card down no longer works; it thinks it is HDMI. All other ports work.

The displayed board power draw in GPU-Z and HWINFO is maxing out at 318W. GPU-Z vbios reading shows the correct 480 max. However, Timespy Extreme graphics results went from 10500 to 10750, +250 points. This is with just adjusting temp and power in afterburner to max on each bios, memory/voltage/fan is all stock from each bios.

So I am unsure if it is really being capped at 320 W (2/3 of 480 which corresponds to 2 out of 3 power connectors) and I'm just gaining 250 points for more aggressive fans, or if it really did bump up the power.

More testing required, but alas Star Wars squadrons is currently more interesting.
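If the two-of-three-connector theory is right, the numbers line up almost exactly. A trivial check (the even split of the BIOS power budget across inputs is my assumption, not confirmed behavior):

```python
# Hypothesis: the 480 W Strix BIOS budget is split evenly across three
# 8-pin inputs, and the FTW3 board only maps two of them.
bios_cap_w = 480
inputs_assumed = 3
inputs_present = 2

effective_cap = bios_cap_w * inputs_present / inputs_assumed
print(effective_cap)  # 320.0, right next to the observed ~318 W reading
```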


----------



## Warocia

slopokdave said:


> I'm pretty much capped out in Port Royal I think at 13,500, at least until I get a water block. Asus TUF 3090 w/ Gigabyte Gaming OC at ~390W. I'm peaking at about 62C, so need that block.
> 
> ~+160 core was my highest complete run, but 140 is stable.
> ~+800 memory was my highest complete run, but dialed it back to +700.
> 
> Edit: 20,500 graphics score in timespy.
> 
> FYI: Do a DDU/driver reinstall when you flash new bios. I know this may be obvious to some of you, but it wasn't to me. I had boost clock oddities until I did this.


My non-OC 3090 TUF is getting about 20,600 (+50 core, +500 memory) on the stock bios. It feels like the core clock slider doesn't do anything. Maybe the power limit is the cause?

Run1 (+50 core, +500 memory)
Average clock frequency 1 825 MHz

Run2 (+75 core, +750 memory)
Average clock frequency 1 821 MHz


----------



## iamjanco

dante`afk said:


> 3090 fe | eBay
> 
> jesus, i wish all of them the worst.


*RTX 3090 | eBay*

A larger selection of listings than what the link you provided returned.

I sampled a few of those listings; the following summary listing kinda stood out:










The gouging included in the listings aside, as someone who just went through eBay's return and refund process last month for a used high value item, I don't envy the buyers of those cards should they end up wanting/needing to do the same.

As for the sellers, _choose your pick_... *roadkill meme | Google image SERP*.


----------



## ring0r

dante`afk said:


> 3090 fe | eBay
> 
> jesus, i wish all of them the worst.


I say to all of you: love your enemies and do good to those who hate you.
Bless those who wish you evil and pray for all who offend you.


----------



## DrunknFoo

mirkendargen said:


> Yup I have a bigass heatsink I'll stick on the backplate (optionally with a fan) if I need to. I could also see slapping a TR4 waterblock on the backplate over the rear GPU area, they're about the right size to cover all the RAM chips, heh.


Great minds think alike lol. I was looking at the largest-surface waterblock available... other than commercial grade blocks, a TR4 block will probably be ideal.
Just gotta find one that has a large surface area and doesn't weigh too much. Even if the surface area doesn't directly cover all the chips and the backside of the gpu, pulling heat away from the plate would still be beneficial.

Any recommendations lol?




bmgjet said:


> Backplate temp with IR gun 97C
> Peak core temp 70c.
> GPUZ limiter is just a flat line of PWR. with the power draw graph just being 371-390W.
> 
> I have a 80mm 1300rpm fan blowing directly on the back plate.
> Overclock is 400mhz on the vram.
> Effectivly +150mhz on the core but its flat lined after 1V using curve editor.
> Average core speed 1995mhz
> Peak core speed 2100mhz.
> 
> Wonder what the rated temp of the vram is, Im guessing the backplate works since its hot as F, Just its really lacking mass on this XC3 Ultra card.


thank you for sharing
hmmm, does evga use samsung or micron?
if micron, you can look for the modules on your pcb and reference below (my guess is <100C)





GDDR6 Part Catalog (www.micron.com)


----------



## devilhead

torqueroll said:


> Sure, here are the screenshots.
> 
> Stock::
> View attachment 2460569
> 
> 
> 
> +190 Offset:
> View attachment 2460570


thats 45mhz higher than mine, mine is 1860mhz


----------



## bmgjet

DrunknFoo said:


> hmmm, does evga use samsung or micron?
> if micron, you can look for the modules on your pcb and reference below (my guess is <100C)
> 
> 
> 
> 
> 
> GDDR6 Part Catalog (www.micron.com)


It's GDDR6X so only Samsung makes it.
Looks like 125C is the rated max temp for them.


----------



## J7SC

bmgjet said:


> Its GDDR6X so *only samsung* makes it.
> Looks like 125C is the rated max temp for them.


? Micron !


----------



## Kleimo

Kleimo said:


> I am going to attach the Alphacool D-RAM block to back of the backplate.
> 
> 
> https://www.aquatuning.us/water-cooling/hdd-ram-cooler/ram-water-blocks/19802/alphacool-d-ram-cooler-x4-universal-acetal-black-nickel?c=6480


And here it is.


----------



## DrunknFoo

ahhh ok, had no idea, (haven't done my research honestly)


----------



## DrunknFoo

Kleimo said:


> And here it is.
> View attachment 2460646


that block looks larger than i thought, or that gpu looks a lot smaller than i thought

good stuff!


----------



## MRLslidchen

Kleimo said:


> And here it is.
> View attachment 2460646


Hey, you too 









That solution works like a charm. 38-40c Backplate Temp after one hour gaming.


----------



## Esenel

@Kleimo 
@MRLslidchen 

Is it screwed to the backplate?


----------



## kx11

MRLslidchen said:


> Hey, you too
> 
> View attachment 2460647
> 
> 
> That solution works like a charm. 38-40c Backplate Temp after one hour gaming.
> 
> View attachment 2460648


does it affect the core temps?


----------



## MRLslidchen

@Esenel Yes, i used four M3x6 screws.
@kx11 Not really, it just really helps the backside vram.


----------



## HyperMatrix

For those inquiring about the VRAM: it's Micron, and Tj max for it is 110C, but if you're getting up there, it's probably going to crash. Once you hit 100C you should anticipate that there's a problem that's going to lead to yet another problem.


----------



## kx11

Bright Memory RTX benchmark is out on Steam; it should be a good test for the 3090









Bright Memory: Infinite Ray Tracing Benchmark on Steam (store.steampowered.com)
Benchmark software created by FYQD-Studio to test performance of next-gen ray tracing in Bright Memory: Infinite.


----------



## torqueroll

I've been doing some stress testing on my 3090 FE and there definitely seems to be issues with keeping the GDDR6X modules cool when running at max powerlimit at 400W. I actually get a cooler GPU temp at 400W than I do at lower power limits since the fan has to ramp up to keep the memory modules cool. Downclocking the memory while at 400W will also bring down the fan speed significantly. About 100-200 rpm less. The memory modules are so close to the GPU that the higher power limit really heats it up. This card is screaming for a water block.

Has anyone else noticed this fan behaviour?


----------



## Benni231990

I have a question:

How do I notice when the memory clock is too high?

I ask because I always get green and red stripes, so then I know the memory clock is too high.


----------



## torqueroll

Benni231990 said:


> I have a question:
> 
> How do I notice when the memory clock is too high?
> 
> I ask because I always get green and red stripes, so then I know the memory clock is too high.


You'll have to validate your memory clock by testing. The frequency can be too high even if you don't see any artifacting; you'll notice it as reduced performance. Increased temps can also decrease performance if you're on the edge, and tbh I think most cards are on the edge on memory temps, especially when increasing the power limit. There's a reason Nvidia decided on a "conservative" memory clock.


----------



## bmgjet

Since Maxwell they have used error correction on VRAM.
So you need to find a benchmark where you can freeze the frame.
I use Heaven and find a place where the FPS is quite stable and freeze it there with the space bar.
The dragon bit is a good one.

Then adjust the RAM in 25MHz steps from stock clocks.
Once you see FPS start to dip and get lower, you have gone too far, since error correction is kicking in.
If you go really, really too far, usually around 200MHz over that point, you'll get artifacts of corrupt textures.

Once you have found the point where FPS drops when going 25MHz more, set it 75MHz under that, since that should account for the VRAM getting hotter, and for slightly lower quality chips that aren't getting filled up because the benchmark doesn't use enough VRAM.
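The procedure above is essentially a linear search with a fixed back-off. A toy sketch of the logic (`bench_fps` is a hypothetical stand-in for reading the frozen-frame FPS in Heaven by hand; the 25MHz step and 75MHz margin are from the post):

```python
def find_vram_offset(bench_fps, step=25, max_offset=1500):
    """Raise the memory offset until error correction costs FPS,
    then back off 75 MHz (three 25 MHz steps) as a safety margin."""
    best_fps = bench_fps(0)
    offset = 0
    while offset + step <= max_offset:
        fps = bench_fps(offset + step)
        if fps < best_fps:          # FPS dip => error correction kicking in
            break
        best_fps = fps
        offset += step
    return max(offset - 75, 0)      # margin for heat and weaker chips

# Toy FPS model: gains until +600 MHz, then error correction hurts it
fake = lambda off: 100 + off / 50 if off <= 600 else 112 - (off - 600) / 25
print(find_vram_offset(fake))
```

With this fake FPS curve peaking at +600MHz, the search stops there and returns +525 after the 75MHz back-off.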


----------



## LordGurciullo

Love you guys exploring different ways to cool the VRAM/backplate. I haven't really gotten any crazier than just normal overclocks. Somehow managed to get into the top 10 everywhere, at least for now... I'm sure you fancy guys with your fancy blocks and shunts will destroy me soon... But anyway, here are my findings.

Anything over +600 mem hurts performance except in benchmarks.
I had to manually set the curve to never go over 2100.
My manual fan curve doesn't seem to max out as much either.

Question though - I have a manual fan curve set to 100 percent at 65C to prevent downclocking... This could mean 100 percent fan for 80 percent of my gaming.
1. Is this gonna cause my card to get damaged or worn out fast? I've never really pushed a card this hard generally.
2. Are you guys using fan sync? Or setting fan 1 and 2 separately... odd there are only 2 fan options and not 3...


----------



## twisted0ne

LordGurciullo said:


> Love you guys exploring different ways to cool the Vram/backplate. I haven't really gotten any crazier then just normal overclocks. Somehow managed to get into the top 10 everywhere at least for now... I'm sure you fancy guys with your fancy blocks and shunts will destroy me soon... But anyways heres my findings.
> 
> Anything over 600 mem does damage to performance except in benchmarks.
> I had to manually set the curve to not go over 2100 ever
> My manual fan curve doesn't seem to max out as much either
> 
> Question though - I have manual fan curve set to 100 percent at 65c to prevent downclocking... This could mean 100 percent fan 80 percent of my gaming..
> Is this gonna cause my card to get damaged or worn out fast? I've never really pushed a card this hard generally.
> 2. Are you guys using fan sync? or setting fan 1 and 2 separately... odd there are only 2 fan options and not 3...


Personally I can't say I've ever seen a GPU fan fail from usage; if you're that worried about it, I'd suggest investing in a waterblock and loop. Better temps, and you won't have to worry about fan burnout or noise.


----------



## GNU/LLJY

We really need an XOC BIOS but I don't know how well my Ventus OC would like that

I just flashed the gigabyte eagle BIOS and it lifted the ****ty stock (350W) power limit to 390W. I still can't hit 2000mhz though xd


----------



## Warocia

Does the *--protecton* command work? I have an Asus 3090 TUF. When I am trying to turn protection back on:

nvflash64 --protecton
"Setting of EEPROM protect failed."

----------



## Johneey

torqueroll said:


> I've been doing some stress testing on my 3090 FE and there definitely seems to be issues with keeping the GDDR6X modules cool when running at max powerlimit at 400W. I actually get a cooler GPU temp at 400W than I do at lower power limits since the fan has to ramp up to keep the memory modules cool. Downclocking the memory while at 400W will also bring down the fan speed significantly. About 100-200 rpm less. The memory modules are so close to the GPU that the higher power limit really heats it up. This card is screaming for a water block.
> 
> Has anyone else noticed this fan behaviour?


400 Watts? so can u share u bios??


----------



## Johneey

we need the Aorus Master 3090 bios pls


----------



## Johneey

MRLslidchen said:


> Hey, you too
> 
> View attachment 2460647
> 
> 
> That solution works like a charm. 38-40c Backplate Temp after one hour gaming.
> 
> View attachment 2460648


can u please do a time spy?


----------



## mattxx88

GNU/LLJY said:


> We really need an XOC BIOS but I don't know how well my Ventus OC would like that
> 
> I just flashed the gigabyte eagle BIOS and it lifted the ****ty stock (350W) power limit to 390W. I still can't hit 2000mhz though xd


if you are on air, even an XOC bios probably cannot make you reach 2000+ MHz


----------



## GNU/LLJY

mattxx88 said:


> if you are on air even an XOC probably cannot make you reach 2000+ mhz


The FE can quite easily hit 2000MHz IIRC. My card does 1950-1980MHz at ~60C, I'm just short of 20MHz lol


----------



## Johneey

GNU/LLJY said:


> The FE can quite easily hit 2000MHz IIRC. My card does 1950-1980MHz at ~60C, I'm just short of 20MHz lol


please do a time spy wanna see


----------



## Cavokk

Ok very green 3090 owner here with maybe a stupid question:

I am actually surprised that I successfully flashed the ASUS STRIX 3090 Bios to my MSI Gaming X Trio 3090 as the only common chips among those two seem to be the memory and GPU itself (and maybe the BIOS Flash)

As I understand it the voltage control is done by multiplexing all the VRMs to spread out the load but they are controlled by very different VRM controllers - ASUS uses MPS types and MSI uses UPI versions. How can the same bios work across different power delivery designs with different Phase controllers and numbers of VRMs?

My own best guess is that the BIOS only holds the freq/boost freq/temp/voltage tables and not the actual logic to control the phase controllers..?

Hope someone can educate me on this  Thanks in advance.

C


----------



## GNU/LLJY

Johneey said:


> please do a time spy wanna see


I did one before flashing the BIOS



https://www.3dmark.com/spy/14210707



I'm number 69 on the Timespy extreme hall of fame (nice)

I used superposition to test after flashing, that was 7500 ish on 8k optimized


----------



## Johneey

Cavokk said:


> Ok very green 3090 owner here with maybe a stupid question:
> 
> I am actually surprised that I successfully flashed the ASUS STRIX 3090 Bios to my MSI Gaming X Trio 3090 as the only common chips among those two seem to be the memory and GPU itself (and maybe the BIOS Flash)
> 
> As I understand it the voltage control is done by multiplexing all the VRMs to spread out the load but they are controlled by very different VRM controllers - ASUS uses MPS types and MSI uses UPI versions. How can the same bios work across different power delivery designs with different Phase controllers and numbers of VRMs?
> 
> My own best guess is that the BIOS ONLY holds the the Freq/boost freq/temp/voltage tables and not the actual logic to contol the phasecontrollers..?
> 
> Hope some can educate me on this  Thanks in advance.
> 
> C


Take care with the fans: if the Strix BIOS has a faster fan curve, it can destroy your fans.


----------



## Johneey

GNU/LLJY said:


> I did one before flashing the BIOS
> 
> 
> 
> https://www.3dmark.com/spy/14210707
> 
> 
> 
> I'm number 69 on the Timespy extreme hall of fame (nice)
> 
> I used superposition to test after flashing, that was 7500 ish on 8k optimized


Nice dude, I'm #8 on TSE graphics score.


----------



## Johneey

Do one again with the 390W BIOS flashed.


----------



## nievz

Has anyone tried repasting their cards? Any improvement?


----------



## Spiriva

Johneey said:


> we need the Aorus Master 3090 bios pls


Do you know if this card has been released yet? If it has, someone somewhere has got to have this 400W 2x8-pin BIOS for us.


----------



## LukkyStrike

nievz said:


> Has anyone tried repasting their cards? Any improvement?


Not yet. Waiting on EK to drop a block for the Gigabyte card... disappointing that they are different from all the other reference designs.

-----------

Otherwise, I had about an hour to mess around with the 3090 Gaming OC last night. I was able to add about +55MHz to the core (no memory changes, really); adding more did not provide higher boosts. No crashes, but no boost to the set point either. I was hitting the power limiter the entire time Time Spy was running. I like the 1845 average and the 2010 max, but looking at the charts, I am only seeing 2GHz for a very small portion of the bench. I wanted to run an Extreme at this OC but ran out of time.

1) I have not been able to get the voltage control to show up in Afterburner; anyone else having issues here?
2) Even though I cannot control voltage (or fans, for that matter), I also cannot go beyond the 105% power limit. I figured I could hit 115 or even 120?
3) Even though I cannot control the fans, I never broke 68C, even with the temp limit up at 90. (So not bad for the OC that I did run.)

https://www.3dmark.com/spy/14286325

Excuse the potato CPU score, I turned off all other OC while I work on the card. I will post finals once I solve the above problems. Not a bad score, but I feel there is more there, especially with the thermals I am seeing right now. Can't wait for my waterblock, this card is a beast but she be hotttttt🔥...


----------



## slopokdave

GNU/LLJY said:


> We really need an XOC BIOS but I don't know how well my Ventus OC would like that
> 
> I just flashed the gigabyte eagle BIOS and it lifted the ****ty stock (350W) power limit to 390W. I still can't hit 2000mhz though xd


Same, I am "stuck" at 1935-1950 on an Asus TUF w/ the Gigabyte OC vBIOS. I see blips to 2010MHz every now and then, but my steady/stable clock is 1935. This is with a custom voltage curve; I still get clock fluctuations if I just set a clock offset. I'm not sure a water block is going to get me to the magic 2GHz mark, even if temps come down to the low 50s.

I need that extra 10W from a 400W vBIOS, I guess lol.
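Since a few people here are comparing custom curves to plain offsets, here's a hypothetical sketch of what "flattening" a voltage/frequency curve amounts to. The function names and the example curve are made up (Afterburner's curve editor does this by hand); the 15 MHz step is the granularity Ampere clocks move in.

```python
# Hypothetical sketch of flattening a V/F curve: pick a target voltage,
# then pin every point at or above it to one clock, snapped to Ampere's
# 15 MHz clock bins. This is why a flat curve holds steadier clocks than
# a plain offset, which shifts the whole curve and still lets boost wander.

BIN = 15  # MHz granularity of Ampere clock steps

def snap_to_bin(clock_mhz: int) -> int:
    """Round a requested clock to the nearest 15 MHz bin."""
    return round(clock_mhz / BIN) * BIN

def flatten_curve(curve: dict, target_mv: int, target_mhz: int) -> dict:
    """curve maps voltage (mV) -> clock (MHz); pin points at/above target_mv."""
    pinned = snap_to_bin(target_mhz)
    return {mv: (pinned if mv >= target_mv else mhz) for mv, mhz in curve.items()}

curve = {800: 1800, 850: 1860, 900: 1920, 950: 1965}
print(flatten_curve(curve, 900, 1935))
```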


----------



## nievz

slopokdave said:


> Same, I am "stuck" at 1935-1950 on Asus TUF w/ Gigabyte OC vBios. I see blips to 2010mhz every now and then, but my steady/stable clock is 1935. This is with custom voltage curve. I continue to get clock fluctuations if I just set a clock offset. I'm not sure a water block is going to get me to the magic 2ghz mark, even if temps come down to low 50s.
> 
> I need that extra 10W with a 400W vBios I guess lol.


I have the Gaming X Trio and I'm able to maintain 2025 on stock BIOS. It must be your silicon.


----------



## originxt

The Strix BIOS doesn't work on my card. Unsure if I did it wrong or it's just incompatible; erring on the side of incompatible, since there are a few differences between the FTW3 and the Strix. Wish I had the higher power limit.


----------



## Pepillo

Johneey said:


> take care of fans, if the strix bios has faster fan speed, it can destroy ur fans


No problem, the MSI fans are better and can do 3,200 rpm vs. the Strix's 3,000 rpm, so you only lose 200 rpm.


----------



## GanMenglin

Does anyone have info on MSI's new flagship 3090, the one above the Trio? I've heard it's on the way...


----------



## def2att

Received a Asus Tuf OC 3090.


----------



## mirkendargen

GanMenglin said:


> Does anyone has info of MSI’s new flagship 3090 which is better than trio? I got some info said it’s on the way...


If it's like the past 2 gens we can assume there will be a Lightning Z.


----------



## GanMenglin

mirkendargen said:


> If it's like the past 2 gens we can assume there will be a Lightning Z.


No, it's not the Lightning, just a real flagship card. The Lightning Z is more of an XOC card.


----------



## Johneey

Anyone know the overclocker ROGKilla? https://www.3dmark.com/spy/14281274 He stole my 1st place in Germany. 30 degrees, nice.


----------



## twisted0ne

Confirmation the MSI boards are not reference:









You can see an image here to compare it to the reference design.


----------



## dante`afk

So it seems NVIDIA removed the "out of stock"/"order" button to prevent the Distill web-monitor refreshes?


----------



## Spiriva

dante`afk said:


> so it seems nvidia removed the "out of stock/order" button to prevent the web distill refreshes?


It's still there on the Swedish NVIDIA page, although it takes a few seconds for it to pop up.


----------



## Cavokk

twisted0ne said:


> Confirmation the MSI boards are not reference:
> View attachment 2460695
> 
> 
> Can see an image here to compare it to reference


Yes, physically they're different, but electrically I am convinced they are reference spec. MSI wanted the big gap between the RAM and caps to accommodate their existing cooler so they could get the card to market fast. The good thing is that thermals will be better, as the components are spread over a larger area 😄

Alphacool has a matching waterblock; mine should arrive soon (I normally use EK).

C


----------



## twisted0ne

Cavokk said:


> Yes physically they’re different but electrically I am convinced they are REF spec -and MSI wanted the big gap between ram and caps to cater for their existing cooler so they could get the card fast to market  good thing is that the termals will be better as components are spread over larger area 😄
> 
> Alphacool has a matching waterblock - mine should arrive soon  (normally using EK)
> 
> C


Alphacool only appear to have reference and Asus waterblocks; what makes you think it'll fit the MSI?

On their official compatibility list it hasn't even been evaluated, so you seem to be taking a very expensive risk.












https://www.alphacool.com/download/compatibility%20list%20Nvidia.pdf


----------



## warbucks

Cavokk said:


> Yes physically they’re different but electrically I am convinced they are REF spec -and MSI wanted the big gap between ram and caps to cater for their existing cooler so they could get the card fast to market  good thing is that the termals will be better as components are spread over larger area 😄
> 
> Alphacool has a matching waterblock - mine should arrive soon  (normally using EK)
> 
> C


Do let us know if that block fits your MSI card. _fingers*crossed_


----------



## dr/owned

HyperMatrix said:


> Those inquiring about the VRAM, it’s Micron, and Tj max for it is 110C but I mean If you’re getting up there, it’s probably going to crash. Once you hit 100C you should anticipate that there’s a problem that’s going to lead to yet another problem.


I'd seriously doubt that the vram would actually get up to that temperature with any sort of passive cooling. Nvidia didn't feel it necessary to put even a backplate on the Titan X Maxwell backside vram. Has any reviewer looked at the temperature with a thermocouple/FLIR/whatever?


----------



## mirkendargen

dr/owned said:


> I'd seriously doubt that the vram would actually get up to that temperature with any sort of passive cooling. Nvidia didn't feel it necessary to put even a backplate on the Titan X Maxwell backside vram. Has any reviewer looked at the temperature with a thermocouple/FLIR/whatever?


Considering someone earlier in this thread measured their backplate at 97C (with a fan blowing on it), the backside VRAM is going to be 100C+. Whether that's REALLY a problem, IDK.


----------



## Cavokk

well here it is: https://www.alphacool.com/shop/graf...x-n-rtx-3090/3080-gaming-x-trio-mit-backplate



twisted0ne said:


> Alphacool only appear to have reference and Asus waterblocks, what makes you think it'll fit the MSI?
> 
> On their official list it's not even been looked at so you seem to be taking a very expensive risk
> 
> View attachment 2460714
> 
> 
> 
> 
> https://www.alphacool.com/download/compatibility%20list%20Nvidia.pdf


----------



## dr/owned

mirkendargen said:


> Considering someone earlier in this thread measured their backplate at 97C (with a fan blowing on it), the backside VRAM is going to be 100C+. Whether that's REALLY a problem, IDK.


Thanks for the info... I've been scrolling through, but with all the replies I miss some stuff.

May need to break out this setup again


----------



## twisted0ne

Cavokk said:


> well here it is: https://www.alphacool.com/shop/graf...x-n-rtx-3090/3080-gaming-x-trio-mit-backplate


Wonder if the Trio is the same as the Ventus.

EDIT: comparing PCBs, they're pretty different...


----------



## domenic

mirkendargen said:


> Considering someone earlier in this thread measured their backplate at 97C (with a fan blowing on it), the backside VRAM is going to be 100C+. Whether that's REALLY a problem, IDK.


I ordered the EKWB block & backplate for the Strix 3090. Notice the cutouts/indentations, presumably for thermal pads. Since the Strix has PWM fan headers on the card, I'm thinking I'll just try placing a 120mm fan on top of the backplate somehow to keep it cool, assuming the heat transfers adequately from the RAM chips through the pads to the backplate. I'm also wondering how much heat from the backside of the PCB will naturally conduct through to the front side, where the water block is?


----------



## Latchback

domenic said:


> I ordered the EKWB block & backplate for the Strix 3090. Notice the cutouts / indentations presumably for thermal pads. Since the Strix has PWFM fan headers on the card I am thinking I will just try placing a 120mm fan on top of the backplate somehow to keep it cool assuming the heat will first transfer from the ram chips through the pads to the backplate adequately. Also I am wondering how much heat from the backside of the PCB will naturally radiate through to the front side where the water block is?
> 
> 
> 
> View attachment 2460718


Where do you get a 3090 strix from?


----------



## domenic

Latchback said:


> Where do you get a 3090 strix from?


Waiting for them to become available, as I'm sure most everyone is... I picked the Strix OC due to the availability of the block, its max power delivery, and BuildZoid calling it out as having the best power delivery of the cards he has seen to date.


----------



## Latchback

domenic said:


> Waiting for them to become available as I am sure most everyone is also.... I picked the Strix OC due to the availability of the block, its max power delivery, and BuildZoid calling it out as having the best power delivery of the cards he as seen to date.


Yeah, that makes sense. That's the card I want the most, but I'd likely be okay with an FTW3 too (or maybe two Founders).


----------



## mirkendargen

domenic said:


> I ordered the EKWB block & backplate for the Strix 3090. Notice the cutouts / indentations presumably for thermal pads. Since the Strix has PWFM fan headers on the card I am thinking I will just try placing a 120mm fan on top of the backplate somehow to keep it cool assuming the heat will first transfer from the ram chips through the pads to the backplate adequately. Also I am wondering how much heat from the backside of the PCB will naturally radiate through to the front side where the water block is?
> 
> 
> 
> View attachment 2460718


The backplate reaching 97C is a sign that the backplate can't dissipate heat fast enough, not that the components aren't transferring heat to it well enough. All cutouts/nice thermal pads will do is move heat from the components to the backplate; that doesn't help if the backplate can't shed the heat. A fan would help... but with the backplate being just a flat piece of metal with no fins, surface area is limited, so it remains to be seen how much impact a fan on the backplate has.
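To put rough numbers on this, here's a back-of-envelope convection estimate. All values are assumptions for illustration, not measurements: the plate area is the card's footprint, the heat-transfer coefficients are textbook ballparks for still vs. forced air.

```python
# Back-of-envelope convection check: P = h * A * dT.
# Assumed: a flat ~31 x 14 cm backplate with one side exposed,
# h ~ 10 W/m^2K in still air, ~ 50 W/m^2K with a fan on it.

def dissipated_watts(h: float, area_m2: float, delta_t: float) -> float:
    """Heat shed by convection for a given coefficient, area, and temp delta."""
    return h * area_m2 * delta_t

area = 0.31 * 0.14        # ~0.043 m^2, one exposed side
delta_t = 97 - 30         # 97C plate in 30C case air

passive = dissipated_watts(10, area, delta_t)  # still air
forced = dissipated_watts(50, area, delta_t)   # decent direct airflow

print(round(passive), round(forced))  # roughly 29 W vs 145 W
```

Which suggests a fan on the plate buys far more than the flat plate alone can shed passively, fins or not.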


----------



## HyperMatrix

Johneey said:


> anyone know the overlocker ROGKilla https://www.3dmark.com/spy/14281274 he stole my 1.place in germany. 30degreese nice


Funny name, considering he's probably flashed his Gigabyte card with the ROG Strix 3090 Bios. 



dr/owned said:


> I'd seriously doubt that the vram would actually get up to that temperature with any sort of passive cooling. Nvidia didn't feel it necessary to put even a backplate on the Titan X Maxwell backside vram. Has any reviewer looked at the temperature with a thermocouple/FLIR/whatever?


I had 3x Titan X Maxwell cards. Backside VRAM temps were a huge issue in my setup because they got ridiculously hot. I think it was over 80C. Switching to an active cooled backplate from Aquacomputer brought it down below 50C even with a +10% OC.



mirkendargen said:


> The backplate reaching 97C is a sign that the backplate can't dissipate heat well enough, not that components aren't transferring heat to the backplate well enough. All cutouts/nice thermal pads will do is transfer heat from the components to the backplate, doesn't help anything if the backplate can't shed the heat. A fan would help...but with the backplate just being a flat piece of metal with no fins surface area is limited, so it remains to be seen how much impact a fan on the backplate has.


Well, dissipation is relative to heat generation. A fan or fins would perform similarly: one increases the backplate's exposed area, the other increases the airflow over it. Assuming good ambient temps, a fan will be superior to adding fins to the backplate.

I have no idea how anyone thought you could go from 11GB of 14Gbps GDDR6 to 24GB of power-hungry GDDR6X without any substantial change to backplate cooling. I mean, it's fine at stock clocks. But beyond that, it's going to generate a lot of heat, which means wasted TDP headroom that could have gone to better overclocking.


----------



## Cavokk

edit - got the answer


----------



## ROGKilla

edit - got the answer


----------



## HyperMatrix

ROGKilla said:


> I win the Silicon Lottery
> 
> Whats mean on the GPU "Qual Sample" not every GPU has this.
> Its on my GPU and its a f**king Beast. Boost up to 2235 mhz!!!
> 
> I need a Mod Bios! I am fighting only with Powerlimit


What's your address?


----------



## Wihglah

@LordGurciullo


----------



## Latchback

XOC bios pl0x


----------



## mirkendargen

HyperMatrix said:


> Funny name, considering he's probably flashed his Gigabyte card with the ROG Strix 3090 Bios.
> 
> 
> 
> I had 3x Titan X Maxwell cards. Backside VRAM temps were a huge issue in my setup because they got ridiculously hot. I think it was over 80C. Switching to an active cooled backplate from Aquacomputer brought it down below 50C even with a +10% OC.
> 
> 
> 
> Well, dissipation is relative to heat generation. Fan or Fins would perform similarly. One increases area of exposure of backplate to air, and the other increases the exposure of air to the backplate.  Assuming good ambient temps, a fan will be superior to adding fins to the backplate.
> 
> I have no idea how anyone thought you could go from 11GB of 16GBps GDDR6 to 24GB of a power hungry GDDR6x without any substantial change to backplate cooling. I mean it's fine at stock clocks. But beyond that, going to be generating a lot of heat which means wasted TDP headroom that could be used for better overclocking.


I meant something more like fins AND fans, not either/or, heh. And for the stock coolers, I can't believe they don't have a heat pipe connecting the backplate to the main cooler.


----------



## FrameChasers

Hey guys, I run a small YouTube channel called Framechasers. I'm trying to find the FTW3 XOC BIOS that GN and Jayz2cents used on their FTW3 cards; does anyone have access to it by any chance? I already shunt-modded my 3080, but I'd prefer not to shunt the 3090 if there's a BIOS available.


----------



## Zurv

ask EVGA...


----------



## HyperMatrix

FrameChasers said:


> Hey guys I run a small YouTube channel called framechasers, I’m trying to find the FTW3 XOC bios that GN and Jayz2cents used on their ftw3 cards, does anyone have access to it by any chance? I already shunt modded my 3080 but I’d prefer not to shunt the 3090 if there is a bios available


Your best chance is to get in touch with Jacob Freeman at EVGA. Not sure they'll release it to just anybody, since the people they've given it to haven't leaked it, meaning they've been trustworthy partners who've done a great job of promoting the Precision X OC software and EVGA cards for a while now. But it's worth a shot. I had his number a few years back, and I know EVGA in general isn't against working with other people/sites.


----------



## mismatchedyes

FrameChasers said:


> Hey guys I run a small YouTube channel called framechasers, I’m trying to find the FTW3 XOC bios that GN and Jayz2cents used on their ftw3 cards, does anyone have access to it by any chance? I already shunt modded my 3080 but I’d prefer not to shunt the 3090 if there is a bios available


If you email Jacob at EVGA with a picture of Lyla it could do wonders  If you are unlocked with the 3090 it will help her keep warm and comfortable. Win-win for you and Lyla. Purrrrfect!


----------



## tubnotub1

Got my card the other day; haven't done any BIOS/hard modding, just pushing up the PL and offset overclocking. It's a vanilla Asus TUF 3090. Impressive card, the thing stays amazingly cool.



https://www.3dmark.com/spy/14301701


----------



## def2att

def2att said:


> Received a Asus Tuf OC 3090.


Back. Here's a stock-BIOS overclock.


----------



## cookiesowns

torqueroll said:


> I've been doing some stress testing on my 3090 FE and there definitely seems to be issues with keeping the GDDR6X modules cool when running at max powerlimit at 400W. I actually get a cooler GPU temp at 400W than I do at lower power limits since the fan has to ramp up to keep the memory modules cool. Downclocking the memory while at 400W will also bring down the fan speed significantly. About 100-200 rpm less. The memory modules are so close to the GPU that the higher power limit really heats it up. This card is screaming for a water block.
> 
> Has anyone else noticed this fan behaviour?


I wonder if this is the same thing I'm encountering. I don't have a whole lot of airflow over the card.

Do you happen to have the blender benchmark tool? I do all 6 scenes, CUDA, 2.9.0 version of blender, and even at fully stock settings the fan will ramp to 100% by scene 5/6


----------



## DrunknFoo

FrameChasers said:


> Hey guys I run a small YouTube channel called framechasers, I’m trying to find the FTW3 XOC bios that GN and Jayz2cents used on their ftw3 cards, does anyone have access to it by any chance? I already shunt modded my 3080 but I’d prefer not to shunt the 3090 if there is a bios available


Just shunt it, test your findings, and release some relevant info for the EVGA community.
If you already did one, just do the other; it can always be undone and doesn't take too long.


----------



## LordGurciullo

Johneey said:


> anyone know the overlocker ROGKilla https://www.3dmark.com/spy/14281274 he stole my 1.place in germany. 30degreese nice


He got me too!


----------



## VickyBeaver

Spiriva said:


> I flashed the "Gigabyte RTX 3090 Gaming OC (370/390W)" to my "PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB (350/365W)" and it worked fine. It seems like a very good bios for the 2x8 pin 3090´s. The clocks became both more stable and higher.
> 
> The Nvidia 3090FE have that weird 12pin connector, but i think those cards have a 400w limit? Would it be possible to flash the 3090FE bios to our 3090´s with 2x8pin connectors?


Did the same to my PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB, and the middle DisplayPort is not functional with the Gigabyte RTX 3090 Gaming OC (370/390W) BIOS, but indeed the clocks are better. Is this just a me thing? If so, can I get the PNY BIOS for a flash back, if you grabbed a backup? The OC is nice, but I'll worry more about it once there's a water block on the card; I'd prefer all display ports to be functional.

update
Flipped over to the *Gigabyte RTX 3090 Eagle OC* (350/385W) BIOS and all display ports are functional again? No idea why the Gigabyte RTX 3090 Gaming OC (370/390W) was behaving the way it was.

update
Flashed back to the Gigabyte RTX 3090 Gaming OC (370/390W) BIOS and this time everything is working. Guess it was just a bad flash the first time around or something _shrug_
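For anyone following along with the flash/flash-back dance, here's the usual nvflash sequence sketched as a tiny helper. The `--save` and `-6` flags are as commonly used (backup, then override the PCI subsystem ID mismatch when flashing another vendor's BIOS); double-check them against your nvflash build's help output before trusting this, and the file names are placeholders.

```python
# Sketch of a typical nvflash back-up-then-flash sequence. This only
# builds the command lines; it does not run anything. Verify the flags
# against your nvflash version before use.

def nvflash_commands(backup_path: str, new_bios_path: str) -> list:
    return [
        ["nvflash", "--save", backup_path],  # 1) save the stock BIOS first
        ["nvflash", "-6", new_bios_path],    # 2) flash, overriding ID mismatch
    ]

for cmd in nvflash_commands("pny_stock.rom", "gigabyte_390w.rom"):
    print(" ".join(cmd))
```

The point of step 1 is exactly the "flash back" scenario above: without a saved stock ROM, recovering means begging someone else for their dump.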


----------



## LordGurciullo

How did you do that ROGKILLA!?


----------



## DrunknFoo

LordGurciullo said:


> He got me too!


lol refresh


----------



## bmgjet

Finally found my issue with Time Spy test 1.
I was getting a random drop to 30fps in the middle of it, just nuking the score and leaving me with a 17K graphics score even with +200 on the core (14K at stock clocks).

Unplugged my 2nd NVMe drive, and a stock-clocks run got me a 20K graphics score lol.

Going to have to figure out what's up with that, since that's where all my games are installed.

Everything looks like it should be fine:
i9 7900X
GPU PCIe 3.0 x16
NVMe 1 @ 4x (OS)
NVMe 2 @ 4x (Games)

At least I know it's not something wrong with the card.


----------



## HyperMatrix

bmgjet said:


> Finally found my issue with timespy test 1.
> Was getting a random drop in the middle of it to 30fps. just nuking the score leaving me with a 17K graphics score even with +200 on the core. (stock clocks 14K score)
> 
> Unplugged my 2nd NVME drive and stock clocks run I got 20K graphics score lol.
> 
> Going to have to try figure out whats up with that since thats where all my games are installed.
> 
> Everything looks like it should be fine.
> I9 7900X
> GPU PCI-E 3.0 16X
> NVME 1 @ 4X (OS)
> NVME 2 @ 4X (Games)
> 
> Atleast I know its not something wrong with the card.


With your NVMe plugged back in, are you SURE you're running at PCIe 3.0 x16? On my X99 board, even with 44 lanes, you can only use certain slots for NVMe, otherwise your main slot drops down to x8. That's the only thing I can see that would affect your performance based on an NVMe drive being plugged in or not. You can check whether you're running x8 or x16 in GPU-Z.










Hit the ? sign next to Bus Interface and run the test to see what your actual PCIe speeds are.
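For a sense of what's at stake in that check, here's the rough per-direction bandwidth math. The encoding factor is the Gen3 128b/130b line code; the function and its names are just for illustration.

```python
# Approximate usable PCIe bandwidth in one direction, to show why a
# silent drop from x16 to x8 matters. PCIe Gen3 runs 8 GT/s per lane
# with 128b/130b encoding.

def pcie_gbps(gt_per_s: float, lanes: int, enc: float = 128 / 130) -> float:
    """Usable GB/s in one direction for a given rate and lane count."""
    return gt_per_s * enc * lanes / 8  # divide by 8 bits per byte

x16 = pcie_gbps(8.0, 16)  # full-width Gen3
x8 = pcie_gbps(8.0, 8)    # what you get after losing lanes to an NVMe
print(round(x16, 1), round(x8, 1))  # ~15.8 vs ~7.9 GB/s
```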


----------



## bmgjet

Yup, I'm 100% sure it's running at x16.
Every other benchmark is fine with the drive plugged in; it's just Time Spy test 1. It's cranking along at 115-120fps, then just slams down to 30fps at the 20-second mark for 2-3 seconds before jumping back to 130fps and climbing up to 170 at the very end of the test.

The only difference here is that the 2nd NVMe drive is in.

I guess it's kind of funny, I thought the 2080 Ti I had was a bad-quality chip since it did exactly the same thing: hit 30fps in the same place for the same amount of time, but was really good in other benchmarks.
So it turns out this has been a problem with my setup for the last 3 years lol.

Just glad I found the cause.
Going to go down to the computer store and grab a PCIe-to-NVMe adapter card to see if that makes a difference.


----------



## HyperMatrix

bmgjet said:


> Yup im 100% sure its running at 16X.
> Every other benchmark is fine with the drive plugged in its just timespy test 1. Its cranking alone at 115-120fps then just slams down to 30fps at 20sec mark few 2-3 seconds before jumping back to 130fps and climbing up to 170 at the very end of the test.
> 
> The only difference here is 2nd NVME drive is in.
> 
> View attachment 2460742
> 
> 
> I guess its kind of funny, I though the 2080ti I had was a bad quality chip since it did exactly the same thing. Hit 30fps in same place for same ammount of time but was really good in other benchmarks.
> So it turns out this has been a problem with my setup for last 3 years lol.
> 
> Just glad I found the cause.
> Going to go down computer store grab a pci-e to nvme adapter card see if that makes a difference.


Check your mobo manual for PCIe lane allotments. Your onboard NVMe could be sharing lanes with the slot your card is in, meaning swapping card slots could fix it. Either that or your computer has the Covid.


----------



## bmgjet

The PCIe adapter card sorted it.

Just looking in the manual now; all it says is that using the 2nd M.2 disables the U.2 slot.
And I should have 16/0/8/8 for the slots on the board in this config.


----------



## Spiriva

VickyBeaver said:


> did the same to my PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB and middle display port is not functional with Gigabyte RTX 3090 Gaming OC (370/390W) bios, but inded clocks are better, is this just a me thing or? if so can I get the PNY bios for a flash back if you grabbed a backup as oc is nice, but ill worry more about it once there is a water block on the card would prefer functionality of all display ports
> 
> update
> flipped over to *Gigabyte RTX 3090 Eagle OC *(350/385W) bios and all display ports are functional again? no idea why the Gigabyte RTX 3090 Gaming OC (370/390W) was behaving the way it was
> 
> update
> flashed back to Gigabyte RTX 3090 Gaming OC (370/390W) bios and this time everything is working guess just me with a bad flash first time round or somthing _shrug_


Good thing you got it sorted out  I have the PNY BIOS saved if you need it.

The Gigabyte OC 390W BIOS is clocked higher out of the box: 1755MHz instead of the PNY's 1695MHz (NVIDIA stock).


----------



## Johneey

ROGKilla said:


> I win the Silicon Lottery
> 
> Whats mean on the GPU "Qual Sample" not every GPU has this.
> Its on my GPU and its a f**king Beast. Boost up to 2235 mhz!!!
> 
> I need a Mod Bios! I am fighting only with Powerlimit


Haha nice dude, you took my place. Which BIOS did you flash???


----------



## Johneey

LordGurciullo said:


> He got me too!


And my 22073 graphics score is on water! His card is amazing lol.


----------



## GNU/LLJY

How are memory overclocks doing for you guys? I can get +266 without any performance degradation
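For context on what a memory offset is worth, here's the basic bandwidth math from the 3090's spec (19.5 Gbps effective on a 384-bit bus = 936 GB/s). How an Afterburner offset maps onto the effective data rate varies by tool and architecture, so the "bumped" figure is a pure assumption for illustration.

```python
# Memory bandwidth from effective data rate and bus width.
# Spec check: the 3090's 19.5 Gbps GDDR6X on a 384-bit bus is 936 GB/s.

def bandwidth_gbs(data_rate_gbps: float, bus_bits: int) -> float:
    """GB/s = effective per-pin rate (Gbps) * bus width (bits) / 8."""
    return data_rate_gbps * bus_bits / 8

stock = bandwidth_gbs(19.5, 384)   # 936 GB/s at stock
# Hypothetical: if an offset raised the effective rate to 20 Gbps
# (the offset-to-rate mapping is an assumption, not a spec):
bumped = bandwidth_gbs(20.0, 384)  # 960 GB/s, a ~2.5% gain
print(stock, bumped)
```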


----------



## AngryLobster

Just received my 3090 TUF OC.

Seeing some strange stuff: there's a 20-30W difference in power consumption between the card under 50C and when it hits 70C. I don't think it's all from the fans speeding up.

Secondly, undervolting results so far are pretty poor. Out of the box it bounces off the power limit, so I set 0.825V/1825MHz, which gives identical performance, but I'm only seeing a 25W reduction, if that, compared to stock.


----------



## Johneey

@ROGKilla how did you get 30 degrees? I'd beat you if I could get 30 degrees; I'm at 37 now with a 22346 score.


----------



## ref

Got a Founders Edition coming in next week; has anybody flashed the BIOS on this card yet? Results?


----------



## ROGKilla

edit - got the answer


----------



## smonkie

My 3090 TUF keeps randomly spinning its fans up at idle (hitting 53% every single time, then back to 0% after a little while). Is this expected behaviour? It clearly doesn't need the extra cooling...


----------



## nievz

My MSI Gaming X Trio 3090 in RDR 2.


----------



## HyperMatrix

ROGKilla said:


> I have Delta K from 7°C, with Water Temp 25°C i have GPU Temp from 32°C under Full load and OC. Its the Alphacool Block. Very Good Temps and Very Easy to Install.
> I suprised self, i buy normally only aqua computer or watercool and sometimes EK.
> 
> I have her :
> Gigabyte RTX 3080 Eagle OC
> Gigabyte RTX 3090 Gaming OC
> MSI Gaming X Trio RTX 3090
> 2x ASUS TUF RTX 3090 OC
> 1x ASUS Strix OC RTX 3090
> Inno3D RTX 3090 iChill X4 (The Beaastt!)
> 
> Everybody want a EVGA FTW3 Ultra or Asus Strix OC because high PL, its overhyped my Strix reches only 2070 stable with 480W
> MSI selected his Chips and has very good Chips on his Gaming X Trio its boost very high and this with only 380W
> When you have a **** GPU you can have 1000W bios it doesent goes over 1950-2000 mhz.
> 
> My Inno3D was the cheapest GPU but the Golden Chip, i Install it and think ok 370W only? 1650-1700W and then Boom its boost over 2000-2030 and this with 370W Bios
> i install the Gigabyte 390W Bios and overclock it to 2230-2250 Mhz its stable but i cant hold the high clock because Powerlimit.
> 
> *all my points was on my 24/7 setup, no tricks, no nvidia settings, no chiller, no window mod, no shunt mod. fans on 300rpm.
> watch Superposition Benchmark Leaderbords.*
> 
> With Powerlimit Mod Bios this Card will be a beast!
> 
> *Best English from me for you FTW! *


Your card is boosting over 2200, but the average clock according to 3DMark is 2025MHz. The FTW3 boosted to 2130MHz but kept an average clock of 2108MHz and scored 15148 in Port Royal.

A peak boost clock alone means nothing if you can't sustain it under load. Unfortunately the Inno3D only has 2x 8-pin connectors, so the 480-520W BIOSes won't work for you. So unless an XOC BIOS is released for the 2x 8-pin cards, the golden chip won't be able to sustain as much performance as the 3x 8-pin cards with the 480W (ROG Strix) or 520W (KPE) BIOS.
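As a rough sanity check of the average-clock argument, assume the graphics score scales roughly linearly with sustained core clock (a simplification; power limits and memory also matter). Using the two average clocks above:

```python
# Rough linear-scaling check: what would the FTW3's 15148 Port Royal
# score look like at the lower 2036 MHz average clock? A simplification,
# but it lands in the right ballpark for the ~600-point gap.

def scaled_score(score: int, clock_from: int, clock_to: int) -> float:
    """Scale a score linearly from one sustained clock to another."""
    return score * clock_to / clock_from

ftw3_score, ftw3_avg = 15148, 2108
other_avg = 2036
estimate = scaled_score(ftw3_score, ftw3_avg, other_avg)
print(round(estimate))  # ~14631, i.e. roughly 500-600 points lower
```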


----------



## ROGKilla

edit - got the answer


----------



## devilhead

ROGKilla said:


> I have a delta of 7°C: with a water temp of 25°C I get a GPU temp of 32°C under full load and OC. It's the Alphacool block. Very good temps and very easy to install.
> I surprised myself; I normally buy only Aqua Computer or Watercool, and sometimes EK.
> 
> I have here:
> Gigabyte RTX 3080 Eagle OC
> Gigabyte RTX 3090 Gaming OC
> MSI Gaming X Trio RTX 3090
> 2x ASUS TUF RTX 3090 OC
> 1x ASUS Strix OC RTX 3090
> Inno3D RTX 3090 iChill X4 (The Beaastt!)
> 
> Everybody wants an EVGA FTW3 Ultra or Asus Strix OC because of the high PL, but it's overhyped; my Strix reaches only 2070 stable with 480W.
> MSI binned its chips and has very good ones on the Gaming X Trio; it boosts very high, and with only 380W.
> When you have a **** GPU you can have a 1000W bios and it doesn't go over 1950-2000 MHz.
> 
> My Inno3D was the cheapest GPU but has the golden chip. I installed it thinking, OK, only 370W? 1650-1700 MHz? And then boom, it boosts over 2000-2030, and that with the 370W bios.
> I installed the Gigabyte 390W bios and overclocked it to 2230-2250 MHz; it's stable but I can't hold the high clock because of the power limit.
> 
> *All my scores were on my 24/7 setup: no tricks, no Nvidia settings, no chiller, no window mod, no shunt mod, fans at 300rpm.
> Check the Superposition benchmark leaderboards.*
> 
> With a power-limit mod bios this card will be a beast!
> 
> *Best English from me for you FTW! *


Couldn't match your score in Superposition, and my CPU is limiting me a bit, but mostly the limit is POWER. I have the FE card, stock cooler, just an AC unit blowing on it.  Anyone know about flashing a 3090 FE to another bios? I think it could be difficult because of that 1x power connector.


----------



## ROGKilla

edit - got the answer


----------



## HyperMatrix

ROGKilla said:


> It's not peak; I can hold over 2.2 GHz when I am not at the power limit. I have the Strix here and it makes no more than 2070. The Gaming X Trio is better.
> 
> Show me another card over 2.2 GHz with no mods.


Your Port Royal run says the average clock speed through the run was just 2036 MHz. This is your card:










this is an FTW3 with average clock of 2108MHz:









Your max boost clock is higher, but with a higher sustained/average clock, the FTW3 beats your score by almost 600 points. If you could get an XOC bios, I'm sure your numbers would go way up. But I'm not sure if there are any XOC bioses for 2x 8-pin cards, or if they'll ever leak.

You planning to shunt mod the card? Would love to see it fully unleashed. Especially with how great your cooling/temps are!


----------



## Manya3084

AngryLobster said:


> Just received my 3090 TUF OC.
> 
> Seeing some strange stuff. There is a 20-30w difference in power consumption between the card under 50C and as it hits 70C. Don't think it's all from the fans as they speed up.
> 
> Secondly, undervolting results so far are pretty poor. Out the box it bounces off the power limit so I did 0.825v/1825mhz which gives identical performance but I'm only seeing a 25w reduction if even that compared to stock.


Are you able to make the bios available?
I am curious to try it on my non-OC model.
cheers


----------



## ROGKilla

edit - got the answer


----------



## HyperMatrix

ROGKilla said:


> @HyperMatrix Yes bro, that's what I mean: if I had an XOC bios for 2x 8-pin I could hold it over 2200 MHz easily.
> I am always at the power limit.


So does that mean you’ll be shunting it soon? 😃


----------



## ROGKilla

edit - got the answer


----------



## Manya3084

8th on the "Hall of Fame"


https://www.3dmark.com/spy/14273056


Memory OC is pretty limited. Anything over 515mhz starts artifacting.


----------



## HyperMatrix

ROGKilla said:


> No i am waiting for XOC Mod bios.
> 
> Who make this XOC Bios?


You can't make them anymore, as far as I'm aware, because they are digitally signed. The XOC bioses you see are official bioses made by the manufacturers to give to select reviewers and overclockers. They're not released to the public, but sometimes they're leaked. I'm also not sure if there are any XOC bioses made for 2x 8-pin cards. But at least for the 3x 8-pin cards, we know the KPE will be released with a 520W limit later, and that bios can be dumped and flashed.


----------



## asdkj1740

ROGKilla said:


> No i am waiting for XOC Mod bios.
> 
> Who make this XOC Bios?


You should not expect an XOC bios to be leaked by those techtubers; they have done nothing for the community in the past.


----------



## devilhead

Don't know how I got 3rd place on Time Spy graphics extreme  Average clock frequency 2004 MHz


https://www.3dmark.com/hall-of-fame-2/timespy+graphics+score+extreme+preset/version+1.0/1+gpu


----------



## Foxrun

Are we able to shunt these like we did with the RTX Titan by using a conductive pen?


----------



## Spiriva

Just placed an order from EK´s webshop. Hopefully the blocks are soon done and ready to ship


----------



## Johneey

Spiriva said:


> Just placed an order from EK´s webshop. Hopefully the blocks are soon done and ready to ship


173 euro lul

Alphacool you can get for 124 with backplate


----------



## AlKappaccino

Got my 3090 FTW3 Gaming yesterday and I'm still testing stuff etc. I don't know if this is old news, I wasn't active the last few days, but anyway, some info:

The Strix OC bios works fine on the FTW3. 2 out of 3 DisplayPorts are not usable with this bios, but I'm fine with that. I couldn't test HDMI. 











Best pic I could take of the socket backside for anyone curious:



Spoiler: Image
















So far it runs pretty well. I'm actually impressed by the cooler: its noise is pretty tame and it cools pretty well. With no fan noise and high FPS I can hear electrical noise when I open my case and get closer to the card. But other than that, no coil whine or anything like that, not even at high fps, which is great.


----------



## originxt

AlKappaccino said:


> Got my 3090 FTW3 Gaming Yesterday and still testing stuff etc. I don't know if this is old stuff, I wasn't active the last days, but anyways some info:
> 
> Strix OC Bios works fine on FTW3. 2 out of 3 DisplayPorts are not usable with this bios, but I'm fine with that. I couldn't test HDMI.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Best pic I could take of the socket backside for anyone curious:
> 
> 
> 
> Spoiler: Image
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So far it runs pretty good. I'm actually impressed by the cooler. It's noise is pretty tame and it cools pretty well. With no fan noise and high FPS I can hear electric noise when I open my case and get closer to the card. But other than that, no coil whine or anything like that, not even under high fps which is great.


Oh, I tried the Strix bios and thought it didn't work because all I had was a black screen. Guess I'll try reflashing and use different display ports lol. Pretty sure I'll still be power limited on the FTW3.


----------



## nievz

originxt said:


> Oh I tried the strix bios and thought it didn't work because all I had were black screen. Guess Ill try re flashing and use different display ports lol. Pretty sure I'll still be power limited on the ftw3 still.


Must have been nerve-wracking to get a black screen. How did you flash it back to stock? Did you use the iGPU?


----------



## originxt

nievz said:


> Must have been nerve racking to get a black screen. How did you flash it back to stock? Did you use igpu?


Dual bios, so I just flipped the switch back to the normal one, then switched back to the "OC" position while reflashing the original "OC" bios onto the card, and it worked fine. I'll try reflashing the Strix bios again and try plugging into my different display ports.


----------



## hallowen

originxt said:


> Dual bios so I just switched back to the normal one, switched to the "oc" bios with strix when reflashing the original "oc" bios back onto the card and it worked fine. I'll try reflashing the strix bios again and try plugging into my different display ports.


Thanks for testing the Strix bios on an FTW3 and telling us about the loss of the two display ports. My FTW3 limit is 462W as shown in GPU-Z.


----------



## Johneey

Okay, I don't get the 22600 graphics score like ROGKilla. Maybe tomorrow morning when the PC is cold...
22523 graphics score is my best...


https://www.3dmark.com/3dm/51213539


=(
But I think it's a good card...


----------



## Spiriva

Johneey said:


> 173 euro lul
> 
> alphacool u get for 124 with backplate


I looked at the Alphacool homepage, pressed the English flag... it was still in German, so I gave up and went to EK


----------



## escapee

Finally managed to dethrone Jayztwocents on Port Royal with my Gigabyte 3090 Gaming OC, air cooled, on the trusty i7 4790k, coming in at 4th spot.
Port Royal 3D Mark


----------



## GTANY

hallowen said:


> Thanks for testing the Strix bios on an FTW3 and telling us about the loss of the two display ports. My FTW3 limit is 462W as shown in GPU-Z.


I thought that the FTW3 power limit was 440 W, as seen many pages before. And the Framechasers YouTube channel mentioned 450 W when he tested it.

Is the max power limit different between the FTW3 and the FTW3 Ultra? That may explain those differences.


----------



## Avacado

escapee said:


> Finally, managed to dethrone Jayztwocents on Port Royal with my Gigabyte 3090 Gaming OC air cooled on the trusty i7 4790k coming in at 4th spot.
> Port Royal 3D Mark


No small feat there, congrats my dude.


----------



## hallowen

GTANY said:


> I thought that the FTW3 power limit was 440 W, seen many pages before. And framechasers youtube channel mentionned 450 W when he tested it.
> 
> Is the max power limit different between the FTW3 and the FTW3 Ultra ? It may explain those differences.


Sorry about that, I meant to say my FTW3 Ultra has a stock 450W bios limit that shows in GPU-Z as 462W max. I don't know what GPU-Z will show until I flash to the 480W Strix bios and try it.


----------



## mirkendargen

AlKappaccino said:


> Got my 3090 FTW3 Gaming Yesterday and still testing stuff etc. I don't know if this is old stuff, I wasn't active the last days, but anyways some info:
> 
> Strix OC Bios works fine on FTW3. 2 out of 3 DisplayPorts are not usable with this bios, but I'm fine with that. I couldn't test HDMI.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Best pic I could take of the socket backside for anyone curious:
> 
> 
> 
> Spoiler: Image
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So far it runs pretty good. I'm actually impressed by the cooler. It's noise is pretty tame and it cools pretty well. With no fan noise and high FPS I can hear electric noise when I open my case and get closer to the card. But other than that, no coil whine or anything like that, not even under high fps which is great.


...Those MLCC caps in the middle look like they were soldered on by hand (a good hand, but by hand...) after EVGA realized the problem.


----------



## torqueroll

cookiesowns said:


> I wonder if this is the same thing I'm encountering. I don't have a whole lot of airflow over the card.
> 
> Do you happen to have the blender benchmark tool? I do all 6 scenes, CUDA, 2.9.0 version of blender, and even at fully stock settings the fan will ramp to 100% by scene 5/6


I ran the Blender benchmark straight after saturating the heatsink with a 10 min stress test. Scenes 1-4 were very quiet; on scene 5, at about 30%, the fan reached a max rpm of 1680. Same for scene 6.

I've ordered some new thermal pads so I will be replacing the stock FE thermal pads with some Fujipoly XR-M's. I'll retest and see if I get any improvements.


----------



## originxt

Guess my card doesn't like any of the Asus Strix bioses. It doesn't even use the 3rd 8-pin for power. I'll just wait for something to come up later.


----------



## dante`afk

ROGKilla said:


> I won the silicon lottery
> 
> What does "Qual Sample" on the GPU mean? Not every GPU has this.
> It's on my GPU and it's a f**king beast. Boosts up to 2235 MHz!!!
> 
> I need a mod bios! I am fighting only with the power limit


shuntmod that thing


----------



## piggycute

hallowen said:


> Thanks for testing the Strix bios on an FTW3 and telling us about the loss of the two display ports. My FTW3 limit is 462W as shown in GPU-Z.


Would you share your FTW3 bios? I flashed the Strix bios and lost 2 display ports, so I can't use dual monitors.

My card's PL is just 380 watts.


----------



## GTANY

I ordered a 3090 FTW3 Ultra a few days ago and am waiting for it, but I still wonder if it is the right choice compared to the ASUS Strix model at the same price.

ASUS because of its higher power limit and a higher probability of an XOC bios appearing in the coming weeks/months, judging by the XOC bioses for the 2080 Ti and 1080 Ti.

I intend to watercool the card with a universal waterblock.

What are your thoughts?


----------



## J7SC

Has anyone seen any real info / tests / more than marketing pics of the Aorus 3090 Xtr/ W/ WB? Links? Also wondering whether those have 3x 8-pin PCIe or 2x 8-pin PCIe (as opposed to the 'Master')? Tx


----------



## rdmetz

mirkendargen said:


> Got an ETA for my Strix from ShopBLT, I believe this is the ETA for it to get to their warehouse, then 2-5 days shipping for it to get into my hands.
> 
> View attachment 2460518
> 
> 
> Looks like they have some of the potentially ****ty Zotac's showing up on Friday too that aren't all claimed yet if anyone is interested: Zotac ZT-A30900D-10P Geforce Rtx 3090 Trinity 24gb Gddr6x 384bit 1695 19500 3dp Hdmi.


Good luck with them. I had a 10900k on "awaiting shipment" from them for 6 months before I finally told 'em to shove off with their update emails. I had long since gone and bought elsewhere, but I just wanted to see how long it would take before they could actually ship it. 

Like I said, after almost 6 months I just said cancel it.


----------



## Latchback

I would def get a card with 3x 8-pin PCIE ports for many reasons.


----------






## domenic

GTANY said:


> I ordered a 3090 FTW3 Ultra a few days ago and waiting for it but I still wonder if it is the good choice, compared to the ASUS Strix model at the same price.
> 
> ASUS because of its higher power limit and a higher probability to obtain a XOC bios in the next weeks/months, if we look at the XOC bioses for 2080 TI and 1080 TI.
> 
> I intend to watercool the card with a universal waterblock.
> 
> What are your thoughts ?


To me at least it was a no-brainer to select the Strix as my card of choice, even though it's not available yet (patience is a virtue). The FTW3 was my first choice going in, until I weighed the factors below:

- Higher / highest power limit available.
- TechPowerUp review of the Strix and other positive reviews of lesser cards in the Asus lineup. Up and down the 30-series product lines, the Asus cards this round seem to come out on top in performance tests.
- All-MLCC capacitors and seemingly one of the best, if not the best, overall power delivery systems amongst 3080 / 3090 cards. This is supported by the reviews and Buildzoid's comments.
- Water block / backplate available for pre-order (ETA mid-October) across a couple of vendors (FTW3 not so much, as it seems EVGA doesn't have a vested interest in sharing details since they make their own block).
- Additional HDMI port (although the max number of concurrent outputs is still 4).
- Optional PWM fan port on the card. Thinking I could add a fan blowing on the backplate if cooling of the backside memory chips is an issue.
- Considered the FTW3 with the Strix or another bios flashed onto it, but why would I want to do that in the first place if avoidable, plus who wants to lose one or more ports as a result?

As we all know, it's a little dumb to buy the 3090 (any model) in terms of price/performance over the 3080 for gaming. With such self-inflicted stupidity in mind, I ultimately decided that if I am going to splurge $1,800 on a flagship card, I want the one best capable of achieving the highest overclock possible, plus make going through the process as fun as possible.


----------



## GTANY

domenic said:


> To me at least it was a no brainer to select the Strix as my card of choice even though its not available yet (patience is a virtue). The FTW3 was my first choice going in until I weighed the below factors.
> 
> - Higher / highest power limit available.
> - TechPowerup Review of the Strix & other positive reviews of lesser cards in the Asus lineup. Up & down the product lines for the 30 series Asus cards this round seem to come out on top of performance tests.
> - All MLCC capacitors & seemingly one of the best or the best overall power delivery system amongst 3080 / 3090 cards. This is supported in the reviews and BuildZoid's comments.
> - Water block / backplate available for pre-order (ETA Mid October) across a couple of vendors (FTW3 not so much as it seems EVGA doesn't have a vested interest to share details since they make their own block).
> - Additional HDMI port (although the max # of concurrent outputs is still 4).
> - Optional PWFM port on the card. Thinking I could add a fan if needed blowing on the backplate if backside memory chips cooling is an issue.
> - Considered the FTW3 with flashing the Strix or other bios on it but why would I want to do that in the first place if avoidable plus who wants to lose one or more ports as a result?
> 
> As we all know its a little dumb to buy the 3090 (and model) in terms of price / performance over the 3080 for gaming. With such self inflicted stupidity in mind I ultimately decided if I am going to splurge on $1,800 for a flagship card I want the one best capable of achieving the highest overclock possible plus make it as fun as possible going through the process.


OK, thank you.


----------



## LordGurciullo

escapee said:


> Finally, managed to dethrone Jayztwocents on Port Royal with my Gigabyte 3090 Gaming OC air cooled on the trusty i7 4790k coming in at 4th spot.
> Port Royal 3D Mark


I saw that. Blew past me too. How did you do that? I read ROGKilla is using a waterblock, so that makes sense, but you're on stock? What's the secret man!


----------



## DrunknFoo

originxt said:


> Guess my card doesn't like any of the asus strix bios. Doesn't even use the 3rd pin for power. I'll just wait for something to come up later.
> 
> View attachment 2460797


AHHHHH THAT SUCKS lol, I was excited for you, now I feel like I died a little..
In due time we will see some releases / leaks



domenic said:


> To me at least it was a no brainer to select the Strix as my card of choice even though its not available yet (patience is a virtue). The FTW3 was my first choice going in until I weighed the below factors.
> 
> 
> All MLCC capacitors & seemingly one of the best or the best overall power delivery system amongst 3080 / 3090 cards. This is supported in the reviews and BuildZoid's comments.


I find it amusing that, ever since Igor's articles, the emphasis on caps is outweighing VRM discussion. =P 

Can't go wrong with the Strix, but the support and community around EVGA is better overall for OCing than Asus. It didn't matter much during the 20x0 series since all cards were basically the same reference design... this time around, things might be a little different.


----------



## Johneey

LordGurciullo said:


> I saw that. Blew past me too. How did ya do that? I read ROGKILLA using a waterblock so that maeks sense but you're on stock? Whats the secret man !


He has 3x 420mm Moras, that's the secret haha


----------



## DrunknFoo

Johneey said:


> He has 3x420 moras that’s the secret haha


Those are still insane temps; my 2080 Ti still creeps up to about 45C at full power draw with 2x 480GTS, a 360GTX and an XE360.

AHHHHHHHHHHHHHHHHHHHHH WHERE IS MY CARD!!!!!!!!, you guys sound like you're having tooo much fun


----------



## Johneey

DrunknFoo said:


> that is still insane temps, my 2080ti still creeps up to about 45c on full power draw with 2x480gts, 360gtx and a xe360.
> 
> AHHHHHHHHHHHHHHHHHHHHH WHERE IS MY CARD!!!!!!!! , you guys sound like your having tooo much fun


I only use a 240mm and a 360mm radiator in my case haha, not possible to beat him even with a golden chip


----------



## AngryLobster

I see you guys are all blowing past 400w while I'm over here looking for underclock.net to get my 3090 below 300w at stock performance.


----------



## Chamidorix

AlKappaccino said:


> Got my 3090 FTW3 Gaming Yesterday and still testing stuff etc. I don't know if this is old stuff, I wasn't active the last days, but anyways some info:
> 
> Strix OC Bios works fine on FTW3. 2 out of 3 DisplayPorts are not usable with this bios, but I'm fine with that. I couldn't test HDMI.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Best pic I could take of the socket backside for anyone curious:


I already tested this pages back, and it only disables the second-from-top DisplayPort. Using the bios right now with 3 monitors.


----------



## Chamidorix

originxt said:


> Guess my card doesn't like any of the asus strix bios. Doesn't even use the 3rd pin for power. I'll just wait for something to come up later.
> 
> View attachment 2460797


Also already explained pages back: it only shows board power draw for 2 pins. But if you benchmark and use a Kill-A-Watt, it is clearly drawing the full 480.
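For anyone who wants to do the same sanity check, a rough board-power estimate from a wall meter looks like this; the PSU efficiency and the example wattages are assumptions for illustration, not measurements from that card.

```python
def estimate_gpu_power(load_wall_watts, idle_wall_watts, psu_efficiency=0.92):
    """
    Rough GPU power from wall readings (e.g. a Kill-A-Watt).

    load_wall_watts -- wall draw while the GPU benchmark is running
    idle_wall_watts -- wall draw of the same system with the GPU idle
    psu_efficiency  -- assumed PSU efficiency at this load
    """
    # The wall meter sees AC input; the card sees DC output, so scale the
    # delta by efficiency. This ignores the card's idle draw and any extra
    # CPU load during the benchmark, so treat it as a ballpark only.
    return (load_wall_watts - idle_wall_watts) * psu_efficiency

# Hypothetical readings: 640 W at the wall under load, 115 W idle.
print(f"~{estimate_gpu_power(640, 115):.0f} W at the GPU")
```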


----------



## originxt

Chamidorix said:


> I already tested this pages back. And it only disables the second from top display port. Using the bios right now with 3 monitors.


Oddly for me, it disabled the first and third and worked on the 2nd one lol.


----------



## Chamidorix

originxt said:


> Oddly for me, it disabled the first and third and worked on the 2nd one lol.


Fascinating. I was just talking to Framechasers, who did the Strix flash on an FTW as well, and he only lost 1 port, same as mine.

The Strix goes DP HDMI DP HDMI DP, the FTW goes DP DP DP HDMI, so you would figure the malfunctioning port matches the DP port on the FTW that sits in the place of an HDMI on the Strix.


----------



## LordGurciullo

I went all air on this build, no water cooling of anything. I'm pretty satisfied. I tried higher clocks and voltage stuff but I think it ain't gettin' any faster.


----------



## Thanh Nguyen

Has anyone flashed the Strix bios on the PNY, or can the PNY only be flashed with the Gigabyte OC bios?


----------



## slopokdave

nievz said:


> I have the Gaming X Trio and I'm able to maintain 2025 on stock BIOS. It must be your silicon.


You have 3x 8-pin. Not really apples to apples. Not sure which bios you're running...

My card is super sensitive to different voltage levels. I'm finally stable at 1980-1995 MHz, but temps are too high on air; I need my block to go any further. 

I would have gone with a 3x 8-pin card, but the water blocks are too tall for my Corsair 280X case, and I really like my case.


----------



## Chamidorix

Framechasers just did a great round of testing of the FTW3 Ultra bios vs the Strix OC. Even with the 30 extra watts, you seem to actually get better stability/bin with the FTW bios once you start OCing. I'm assuming the power delivery is mismatched in some way, delivering more uneven voltage to the chip than with the native FTW bios, so results were worse at a bleeding-edge OC even with 30 extra watts.

I guess we just really hope the Kingpin bios plays nicely with the FTW. The Kingpin has the same display outputs as the FTW as well, so it shouldn't lose any outputs, while I'm assuming flashing the Kingpin bios on a Strix will disable one of the HDMIs. So that's another thing to consider in FTW vs Strix.


----------



## dr/owned

So has anyone here done shunt mods yet? With der8auer making a how-to I'd have guessed tons of people would do it.


----------



## DrunknFoo

dr/owned said:


> So has anyone here done shunt mods yet? With der8auer making a how-to I'd have guessed tons of people would do it.


send me ur card, ill do it for ya. I promise to send it back after playing around with it =P


----------



## chispy

dr/owned said:


> So has anyone here done shunt mods yet? With der8auer making a how-to I'd have guessed tons of people would do it.


I'm about to do it in a few minutes on my Asus TUF Gaming OC 3090. I finally have it in my hands today  , it will be a weekend of benching the card on my water chiller at -21C.

My first score on Time Spy, stock cooling, no mods or bios flashing yet.



http://www.3dmark.com/spy/14326983


----------



## dr/owned

chispy said:


> I'm about to do it in a few minutes on my Asus Tuf gaming oc 3090. Finally i have it in my hands today  , it will be a weekend for benching the card on my water chiller at -21c.


Good luck! I'm curious if shunting the PCIe slot power is also needed to unlock a truly higher power limit. There was speculation 12 pages back in this thread that it applies some sort of ratio check between the 8-pins and the slot power.

Also, Port Royal seems to be the benchmark everyone is really using this time around because it's less CPU dependent?


----------



## bmgjet

dr/owned said:


> Good luck! I'm curious if shunting the PCIe slot power is also needed to unlock a truly higher power limit. There was speculation 12 pages ago in this thread that it's applying some sort of ratio checking between the 8 pins and the slot power.
> 
> Also, Port Royal seems to be the benchmark everyone is really using this go around cause less CPU dependent?



Basically. Well, the CPU does matter a little bit, but if you're around 5GHz you're not far behind people pushing 5.8GHz on LN2, CPU-wise.
The run length is another factor: it's short, so you get a good boost since the card doesn't have time to heat-soak.


----------



## HyperMatrix

dr/owned said:


> Good luck! I'm curious if shunting the PCIe slot power is also needed to unlock a truly higher power limit. There was speculation 12 pages ago in this thread that it's applying some sort of ratio checking between the 8 pins and the slot power.
> 
> Also, Port Royal seems to be the benchmark everyone is really using this go around cause less CPU dependent?


As I told the guy who originally mentioned it... I highly recommend against this. Unless there is actually some kind of brand-new weird power link between all power sources, which would make no sense, there's no reason to assume that's the case. Let someone else take the risk of causing long-term damage to their motherboard to test this weird theory out. There should be absolutely no reason that extra power would be required from the PCIe slot. Don't believe me? Look at the XOC bioses released for the cards. If they're playing with a 900W power limit, and the power increase needed to come equally from all sources, then their PCIe slot power draw should have increased as well. But it doesn't. Because that would be ridiculous.


----------



## bmagnien

Chamidorix said:


> Framechasers just did a great round of testing on FTW Ultra bios vs Strix OC. Even with the 30 extra watts, you seem to actually get better stability/bin with FTW bios once you start OCing. I'm assuming the power delivery is messed up in some way, delivering more uneven voltage to the chip than the native ftw bios. So results were worse with bleeding edge OC even with 30 extra watts.
> 
> I guess we just really hope the kingpin bios plays nicely with the FTW. The kingpin has the same display outputs as the FTW as well, so shouldn't lose any outputs while I'm assuming flashing Kingpin bios on Strix will disable one of the HDMIs. So that's another thing to consider in FTW vs Strix.


Where is this Framechasers video where he's testing the Strix bios on an FTW3 3090? Do you have a link? Is anyone else on here able to confirm performance changes between the FTW3 stock OC bios and the +30w Strix bios on a 3090 FTW3? Wondering if it's worth the risk; I haven't been able to find any hard data to convince me. Thank you!


----------



## Latchback

domenic said:


> To me at least it was a no brainer to select the Strix as my card of choice even though its not available yet (patience is a virtue). The FTW3 was my first choice going in until I weighed the below factors.
> 
> - Higher / highest power limit available.
> - TechPowerup Review of the Strix & other positive reviews of lesser cards in the Asus lineup. Up & down the product lines for the 30 series Asus cards this round seem to come out on top of performance tests.
> - All MLCC capacitors & seemingly one of the best or the best overall power delivery system amongst 3080 / 3090 cards. This is supported in the reviews and BuildZoid's comments.
> - Water block / backplate available for pre-order (ETA Mid October) across a couple of vendors (FTW3 not so much as it seems EVGA doesn't have a vested interest to share details since they make their own block).
> - Additional HDMI port (although the max # of concurrent outputs is still 4).
> - Optional PWFM port on the card. Thinking I could add a fan if needed blowing on the backplate if backside memory chips cooling is an issue.
> - Considered the FTW3 with flashing the Strix or other bios on it but why would I want to deal with that in general if I don't have to plus who wants to lose one or more ports as a result?
> 
> As we all know its a little dumb to buy the 3090 (and model) in terms of price / performance over the 3080 for gaming. With such self inflicted stupidity in mind I ultimately decided if I am going to splurge on $1,800 for a flagship card I want the one best capable of achieving the highest overclock possible plus make it as fun as possible going through the process.


Yeah I agree.


----------



## slopokdave

I asked many pages back and didn't see anyone respond: is VOLTAGE LOCK (Ctrl+L in the curve editor) working for you? I'm trying to figure out if this is broken for Ampere, shouldn't work in the first place, or if my card is busted. Even if I tell the curve to lock at a certain voltage, the voltage is all over the place during gaming.

Edit: So I noticed that if I lock it (to 1995 MHz at 0.975v, for example) in Windows before gaming, it works: in Windows it's then locked to 1995 MHz at 0.975v. As soon as I load a game, though, it behaves like it's no longer locked.


----------



## LordGurciullo

voltage lock does not work.


----------



## slopokdave

LordGurciullo said:


> voltage lock does not work.


Awesome, thank you for responding. At least I know it's not my card. So do we know if this is just how Ampere will be? Or is it a software issue that needs to be fixed?


----------



## cookiesowns

torqueroll said:


> I ran the blender benchmark straight after saturating the heatsink with a 10 min stress test. Scene 1-4 were very quiet, On scene 5 at about 30% the fan reached a max rpm of 1680. Same for scene 6.
> 
> I've ordered some new thermal pads so I will be replacing the stock FE thermal pads with some Fujipoly XR-M's. I'll retest and see if I get any improvements.


Damn. What's the ambient temp and how much airflow does the card have?

I can't remember if mine started doing that only after my repaste attempt.


----------



## wyattneill

I so wish there was a shunt-modding tutorial for the FE. I had one on my 2080 Ti, but the Founders has that weird 12-pin plug so it's not quite as simple as my Strix was.

Liquid metal for now.


----------



## bmgjet

Voltage lock works on mine with Afterburner 4.6.3 beta 2.
You need to click Apply after setting it.

It will still drop the voltage below what you lock if it's hitting the power limit, but it won't go over it.
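That ceiling-not-floor behaviour follows from dynamic power scaling roughly with V²·f: if the locked point would exceed the power limit, the card has to pull the voltage down below the lock. A toy model, calibrated to made-up numbers rather than any real telemetry:

```python
import math

# Toy model: dynamic power ~ K * V^2 * f. Calibrate K from one assumed
# operating point -- 0.975 V, 1995 MHz, 390 W are made-up numbers.
K = 390 / (0.975 ** 2 * 1995)

def power_draw(volts, f_mhz):
    """Modelled board power at a given voltage and clock."""
    return K * volts ** 2 * f_mhz

def effective_voltage(v_lock, f_mhz, power_limit_w):
    """Voltage the card can actually hold: never above the lock, but
    pushed lower whenever the locked point would exceed the power limit."""
    v_cap = math.sqrt(power_limit_w / (K * f_mhz))
    return min(v_lock, v_cap)

# With a 390 W limit the lock holds; with a 350 W limit the card
# has to dip below the locked voltage -- a ceiling, not a floor.
print(round(effective_voltage(0.975, 1995, 390), 3))
print(round(effective_voltage(0.975, 1995, 350), 3))
```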


----------



## slopokdave

bmgjet said:


> Voltage lock works on mine afterburner 4.6.3 beta 2.
> You need to click apply after setting it.
> 
> It will still drop voltages below what you lock if its hitting power limit. But it wont go over that.


Yeah, I'm running the same version and clicking Apply. If I run Doom Eternal at a 200 fps cap, clocks and voltage are much steadier, because I'm not bouncing off the power limit. But then again, I'm not sure why something like Shadow of the Tomb Raider doesn't behave this way; it also sits at the ~390W power limit (Asus TUF w/ Gigabyte OC bios).


----------



## dr/owned

wyattneill said:


> I so wish there was a shunt modding tutorial for the FE. I used to have it on my 2080 TI but the founders has that wierd 12 pin plug so its not quite as simple as my StriX.
> 
> Liquid Metal for now.


There is one for the 3080, which has the same-ish PCB as the 3090, but it's 9 shunt resistors, and to quote Buildzoid: you have to do all of them or it's going to know something is wrong with the monitoring:
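For reference, the arithmetic behind a shunt stack is simple: a resistor soldered on top of a shunt lowers the effective resistance, so the controller under-reads current by that ratio. A sketch assuming the common trick of stacking an identical-value resistor; the 5 mOhm figure is an assumption, not a measured value from these cards:

```python
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def true_power(reported_watts, r_stock, r_stacked):
    """
    Actual power when the card reports `reported_watts` after a shunt stack.
    The controller reads the voltage across the shunt and still assumes
    r_stock, so it under-reads by the ratio r_effective / r_stock.
    """
    r_effective = parallel(r_stock, r_stacked)
    return reported_watts * (r_stock / r_effective)

# Stacking 5 mOhm on a 5 mOhm stock shunt halves the effective resistance:
# a reported 350 W is really about 700 W, which is why people stress VRM
# quality and cooling before attempting the mod.
print(true_power(350, 0.005, 0.005))
```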


----------



## VickyBeaver

Currently the limit is 390W on the Gigabyte Gaming OC bios on my PNY 3090, but clearly the card wants MORE!
Peaking at 419W and 113% TDP, I'm fairly convinced the card absolutely needs a shunt mod. More and more I want to do it when I put this card on water cooling (still waiting on that EK waterblock).
The card is constantly pinging off the limiter. Maybe with better cooling over the VRM, staying under 35c, and no fan power draw on water, things will be just fine, but eeeeeeh. Not asking much, just a solid stable 2070-2100MHz OC without constantly hitting limits, ya?
As I've never shunt modded a card before, my main question is where would I obtain the resistors here in the aussie lands? (Last time I did any SMD modding was converting my GeForce 2 GTS over to a Quadro by moving a resistor from one set of pads to another, a long-ass time ago now XD)


----------



## bmgjet

VickyBeaver said:


> Currently the limit is 390W on the Gigabyte Gaming OC bios on my PNY 3090, but clearly the card wants MORE!
> Peaking at 419W and 113% TDP, I'm fairly convinced the card absolutely needs a shunt mod. More and more I want to do it when I put this card on water cooling (still waiting on that EK waterblock).
> The card is constantly pinging off the limiter. Maybe with better cooling over the VRM, staying under 35c, and no fan power draw on water, things will be just fine, but eeeeeeh. Not asking much, just a solid stable 2070-2100MHz OC without constantly hitting limits, ya?
> As I've never shunt modded a card before, my main question is where would I obtain the resistors here in the aussie lands? (Last time I did any SMD modding was converting my GeForce 2 GTS over to a Quadro by moving a resistor from one set of pads to another, a long-ass time ago now XD)


Order them from eBay; they should be there by the time the blocks start arriving.
I imagine it's going to be like over here in NZ: all the electronics stores have closed down, so nowhere carries a wide range of parts.
Jaycar is the last one still open, but they just order from China, which is why you have a 1.5-2 week wait and pay 30X the price versus getting them yourself.


----------



## psychrage

wyattneill said:


> Liquid Metal for now.


Nice. What kind of results did you get from liquid metal? I did a repaste on my FE with Kingpin KPx, which didn't drop the temps (peaking at 67°C) but did give more stable clocks.
I also have my FE sandwiched between 2x 140mm fans in my open-air case.


----------



## Latchback

psychrage said:


> Nice. What kind of results did you get from liquid metal? I did a repaste on my FE with Kingpin KPx, which didn't drop the temps (peaking at 67°C) but did give more stable clocks.
> I also have my FE sandwiched between 2x 140mm fans in my open-air case.


He means using liquid metal to do the shunt mod; it's a quicker and more easily reversible approach than soldering on resistors.
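For anyone wondering why a blob of liquid metal raises the power limit at all: it forms a second conductive path in parallel with the stock shunt, so the monitoring circuit sees a smaller voltage drop and under-reports power. A quick sketch of the math; the bridge resistance here is an illustrative assumption, not a measured value:

```python
# Liquid metal bridging a shunt adds a second conductive path in parallel
# with the stock resistor, so the controller sees a smaller voltage drop
# and under-reports power. R_BRIDGE is an assumed, illustrative value.

def parallel(r1: float, r2: float) -> float:
    """Combined resistance of two resistors in parallel (ohms)."""
    return (r1 * r2) / (r1 + r2)

R_STOCK = 0.005    # 5 mOhm stock shunt (marked R005)
R_BRIDGE = 0.005   # assumed resistance of the liquid-metal bridge

r_eff = parallel(R_STOCK, R_BRIDGE)   # effective shunt resistance
scale = R_STOCK / r_eff               # factor the real power limit grows by

print(f"effective shunt: {r_eff * 1000:.2f} mOhm -> limit x{scale:.2f}")
```

With these made-up numbers the limit doubles; in practice the resistance of a liquid-metal joint is uncontrolled, which is exactly why results with this method are so hit-and-miss.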


----------



## Esenel

ROGKilla said:


> My fastest 2080 Ti (with 1000w Bios) vs my 3090 in Time Spy


Very cool results.
The only thing is my heart bleeds seeing your low CPU score :-O
You definitely need better RAM 🙃


----------



## psychrage

Latchback said:


> He means using liquid metal to do the shunt mod; it's a quicker and more easily reversible approach than soldering on resistors.


Oops, my bad.


----------



## Gofspar

wyattneill said:


> I so wish there was a shunt modding tutorial for the FE. I had one on my 2080 Ti, but the Founders card has that weird 12-pin plug, so it's not quite as simple as my Strix.
> 
> Liquid Metal for now.


What was the temp drop with LM vs stock paste?
I only dropped like 2c going to Kryonaut.


----------



## xSolid




----------



## LordGurciullo

Looks like I made quite the impression on Jay 
Lord Stupid


----------



## Johneey

Gofspar said:


> what was the temp drop with LM vs stock paste?
> I only dropped like 2c going to Kryonaut


Me too, around 4 degrees.


----------



## HyperMatrix

xSolid said:


>


I don’t get the point of the video. Yes, liquid metal can and will destroy solder over time, but it’s really not a big deal. First off, it’s best for temporary use. I’ve literally had it destroy the solder, had the resistor come right off the board, and had the board shut down and fail to boot. Simple fix though... just solder another resistor on. That has only happened on one single resistor across a total of 5 cards I’ve applied liquid metal to.

But to recap... there’s no harm in temporarily (a few months) using liquid metal on your card.


----------



## bmgjet

Change command rate to 1.


----------



## torqueroll

cookiesowns said:


> Damn. What’s ambient temp and how much airflow does the card have?
> 
> I can’t remember if mine started doing that only after my repaste attempt.


My HexGear R80 case has four bottom fans for fresh air. There are no air filters in use so airflow is very good. Ambient was about 24 degrees Celsius.

Maybe your thermal pads didn't get good contact after the repaste. They are quite fragile and could perform worse.


----------



## Esenel

ROGKilla said:


> I dont know why my CPU Score is so bad, i try to find it out today.


Mine.


https://www.3dmark.com/spy/13658324


----------



## LordGurciullo

Wow, you guys with your RAM. I pushed mine to 4100 17-17-17 and that was it, couldn't go any higher... 9900K at 5.0.

4300 at 16? And 4100 at 15? Jesus! FAST! 

Wish I had y'all's processors and RAM, 'cause I'm still on 1080p..

BTW Esenel, what are you thinking about that monitor? I was considering it... but I love my XL2746S so much with DyAc... how is that one?


----------



## bmgjet

Here's something that might be helpful.
Something I threw together in 5 mins in C# to work out the resulting power limits for a shunt mod.








21 KB file on MEGA (mega.nz)
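The tool itself isn't shown here, but the underlying arithmetic is simple: the card computes power from the voltage drop across the shunts, so the reported power scales with the shunt resistance. A rough sketch of that idea in Python; the 390W firmware limit and resistor values are taken from examples in this thread, and the linear-scaling assumption is mine, not a confirmed detail of the firmware:

```python
# Sketch of the arithmetic behind a shunt-mod power-limit calculator.
# Assumption: reported power scales linearly with shunt resistance, so the
# real draw at which the firmware cap trips is cap * R_stock / R_effective.

def effective_limit(cap_w: float, r_stock: float,
                    r_new: float = None, r_stacked: float = None) -> float:
    """Real power draw (W) at which the firmware cap actually trips."""
    if r_stacked is not None:
        # stacking a resistor on top puts it in parallel with the stock one
        r_new = (r_stock * r_stacked) / (r_stock + r_stacked)
    return cap_w * r_stock / r_new

# Replace the 5 mOhm shunts with 3 mOhm parts on a 390 W bios: ~650 W real.
print(f"{effective_limit(390, 0.005, r_new=0.003):.0f} W")
# Stack another 5 mOhm on top instead: the limit doubles to ~780 W.
print(f"{effective_limit(390, 0.005, r_stacked=0.005):.0f} W")
```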


----------



## piggycute

I got the FTW3 Ultra bios and flashed it to my card. Running 3DMark TSE, max board power draw was 463.4W, average GPU clock was 1,864 MHz, and the PerfCap reason was always Pwr.

I think the average GPU clock is a little too low?


----------



## Chamidorix

HyperMatrix said:


> As I told the guy who originally mentioned it...I highly recommend against this. Unless there is actually some kind of a brand new weird power link between all power sources, which would make no sense, there’s no reason to assume that’s the case. Let someone else take the risk of causing long term damage to their motherboard to test this weird theory out. There should be absolutely no reason that extra power would be required from the pcie slot. Don’t believe me? Look at the XOC bios released for the cards. If they’re playing with 900W power limit, and power increase needed to come equally from all sources, then their PCIE slot power draw should have increased as well. But it doesn’t. Because that would be ridiculous


Obviously when the card firmware is explicitly controlling power in an XOC bios it will pull all the extra power over the 8-pins. It's like you have literally no fundamental grasp of what a power limit in card firmware is actually trying to accomplish, and how adding additional resistance across the shunt resistors bypasses this.

We've established that doubling or even tripling the 75W power draw over the PCIe slot and mobo VRM is not a problem.

And to top it all off, we literally have proof that you have to do this to get a locked clock at voltage, i.e. no power limit. Everyone who hasn't shunted all the resistors gets fluctuating clocks at the limit of their OC: Der8auer's and vmanuelgm's incomplete shunt mods, with fluctuating clocks documented on video; TecLab, who confirmed they had to shunt every single resistor to get unlimited power draw; Frame Chasers, who confirmed the same; and Buildzoid, who across multiple PCB reviews has now stated he assumes all resistors will need to be shunted to get unlimited power draw.

This generation added digital voltage regulators that did not exist on previous generations. It is silly to argue against what has been established as fact for days now.


----------



## bmgjet

piggycute said:


> I got the FTW3 Ultra bios and flashed it to my card. Running 3DMark TSE, max board power draw was 463.4W, average GPU clock was 1,864 MHz, and the PerfCap reason was always Pwr.
> 
> I think the average GPU clock is a little too low?


Here's mine with it locked against the 390W power limit. XC3 Ultra, GB OC bios.


----------



## piggycute

bmgjet said:


> Heres mine with it locked against 390W power limit. XC3 Ultra, GB OC Bios.
> View attachment 2460858





https://www.3dmark.com/3dm/51258889?





https://www.3dmark.com/3dm/51259142?












My card is a Colorful Advanced OC (easy to buy in China, not out of stock).


----------



## piggycute

Can I share the FTW3 bios file here?

http://www.mediafire.com/file/7km336cv61n1146/filename.rom/file 

Thanks to Chiphell BBS member "William753" for providing this file.


----------



## chispy

Finally made the shunt mods to my Asus TUF Gaming OC 3090; it made a huge difference even on air cooling. These cards are starving for power, sadly. Ran a few benchmarks for points at HWBOT, and ran the video card on my water chiller.











Here are some scores - 
Timespy - 22447 - http://www.3dmark.com/spy/14335079
Fire Strike Ultra - 14528 - http://www.3dmark.com/fs/23664901
Port Royal - 14713 - http://www.3dmark.com/pr/363974

Max temps under the water chiller ~ 3c - 5c
Max clocks 2235, average clocks 2138
The MSI Afterburner curve does not work; clocks are all over the place, and vcore too.


----------



## nievz

slopokdave said:


> I asked many pages back, didn't see anyone respond: Is VOLTAGE LOCK (CTRL + L in the curve editor) working for you? I'm trying to figure out if this is broken for Ampere, shouldn't work in the first place, or if my card is busted. Even if I tell the curve to lock at a certain voltage, voltage is all over the place during gaming.
> 
> Edit: So I noticed if I try to lock it to (1995mhz at .975v for example) in windows, before I'm gaming; it does it. In windows its locked then to 1995mhz at .975v. As soon as I load a game up though, it's behaves like its no longer locked.


I don’t have my voltage locked but I have my curve setup this way:


2040mhz = 1050mv
2025mhz = 1025mv
2055mhz = 1075mv


----------



## Falkentyne

Can you still use a #2 pencil as a temporary shunt mod just to see if it works?


----------



## slopokdave

nievz said:


> I don’t have my voltage locked but I have my curve setup this way:
> 
> 
> 2040mhz = 1050mv
> 2025mhz = 1025mv
> 2955mhz = 1075mv


Typo? 2955? 🤣


----------



## tubnotub1

Asus TUF 3090 vanilla w/ Gigabyte OC bios. DisplayPorts 2 and 3 do not work w/ the Gigabyte bios, but there's a bit of a performance uplift for sure; clocks seem far more stable in the 1920-1935 range. A bit of a dud when it comes to the silicon on this TUF 3090, but it's my dud, I'll keep him. >_>


----------



## Cavokk

tubnotub1 said:


> A bit of a dud when it comes to the silicon on this Tuff 3090, but it's my dud, I'll keep him. >_>


----------



## slopokdave

tubnotub1 said:


> Asus TUF 3090 vanilla w/ Gigabyte OC bios. DisplayPorts 2 and 3 do not work w/ the Gigabyte bios, but there's a bit of a performance uplift for sure; clocks seem far more stable in the 1920-1935 range. A bit of a dud when it comes to the silicon on this TUF 3090, but it's my dud, I'll keep him. >_>
> 
> View attachment 2460885


I was stuck in the 1935 range, but now I'm stable at 1980-1995, though it depends on the game and its voltage demand. In some games it's all over the place. Don't give up! 🤣 2k or bust!


----------



## VickyBeaver

Set up a very aggressive curve so when the board bounces off the power limit it rarely drops below 1900MHz. It's riding a fine line, because with no curve at all the card will undervolt harder to downclock, of course resulting in a crash, but this seems stable enough, bouncing between 1900MHz and 2025MHz in the benchmark. 
Nothing special scores-wise, some improvements, but the fact the card can clock that high while heavily undervolting shows great headroom if it weren't for the pesky power limit.


----------



## nievz

VickyBeaver said:


> Set up a very aggressive curve so when the board bounces off the power limit it rarely drops below 1900MHz. It's riding a fine line, because with no curve at all the card will undervolt harder to downclock, of course resulting in a crash, but this seems stable enough, bouncing between 1900MHz and 2025MHz in the benchmark.
> Nothing special scores-wise, some improvements, but the fact the card can clock that high while heavily undervolting shows great headroom if it weren't for the pesky power limit.
> View attachment 2460913


I think AB curves have a bug on Ampere. After I painstakingly set up my curve, it adjusts the clocks higher when I reload my profile, resulting in unstable clocks. Are you experiencing the same?


----------



## mirkendargen

nievz said:


> I think AB curves have a bug on Ampere. After I painstakingly set up my curve, it adjusts the clocks higher when I reload my profile, resulting in unstable clocks. Are you experiencing the same?


I've had this happen on both a 1080 Ti and a 2080 Ti after adjusting a curve a lot of times. It's like it somehow gets corrupted, and you need to reset to default, then adjust straight to your final values.


----------



## originxt

nievz said:


> I think AB curves have a bug on Ampere. After I painstakingly set up my curve, it adjusts the clocks higher when I reload my profile, resulting in unstable clocks. Are you experiencing the same?


It shifts according to GPU temps at the time, which can be really frustrating since you can't just highlight multiple points and set them to a certain frequency/voltage. May need to try what mirkendargen did.


----------



## VickyBeaver

nievz said:


> I think AB curves have a bug on Ampere. After I painstakingly set up my curve, it adjusts the clocks higher when I reload my profile, resulting in unstable clocks. Are you experiencing the same?


Seems to be holding true for me on a saved profile. I'm getting only 13696 in Port Royal, but since I'm forced onto air coolers till I get my new waterblock, I'm only running 4.4GHz all-core on my 3900X; I could be running 4.475GHz stable on water... dying to get my loop up and running. 
Things were so sexy before the upgrade, that 1080 Ti doing 2075MHz all day long, never getting over 35c, heck, on a cool day never over 30c.


----------



## HyperMatrix

Chamidorix said:


> Obviously when the card firmware is explicitly controlling power in an XOC bios it will pull all the extra power over the 8-pins. It's like you have literally no fundamental grasp of what a power limit in card firmware is actually trying to accomplish, and how adding additional resistance across the shunt resistors bypasses this.
> 
> We've established that doubling or even tripling the 75W power draw over the PCIe slot and mobo VRM is not a problem.
> 
> And to top it all off, we literally have proof that you have to do this to get a locked clock at voltage, i.e. no power limit. Everyone who hasn't shunted all the resistors gets fluctuating clocks at the limit of their OC: Der8auer's and vmanuelgm's incomplete shunt mods, with fluctuating clocks documented on video; TecLab, who confirmed they had to shunt every single resistor to get unlimited power draw; Frame Chasers, who confirmed the same; and Buildzoid, who across multiple PCB reviews has now stated he assumes all resistors will need to be shunted to get unlimited power draw.
> 
> This generation added digital voltage regulators that did not exist on previous generations. It is silly to argue against what has been established as fact for days now.


Is there a video or article that shows a before/after of shunting the PCIe slot power resistor? You said 9 need to be shunted. Did TecLab or the others you mention run a benchmark with 9 shunted vs 8 shunted (no PCIe slot power shunt) to show what effect it had on sustained clocks and bench scores?

I'm open to the possibility, but I'd like to see the theory in action before it becomes a mainstream recommendation.


----------



## Johneey

VickyBeaver said:


> seems to be holding true for me on a saved profile, getting only 13696 in port royal but due to being forced onto air coolers till I get my new water block I am only running 4.4ghz all core on my 3900x could be running 4.475ghz stable on water... dying to get my loop up and running.
> things where so sexy before the upgrade, that 1080ti doing 2075mhz all day long never getting over 35c heck on a cool day never getting over 30c


Show me proof that you get only 35 degrees with 2x 240mm and the CPU in the same loop in games, please.


----------



## Wrathier

Hi fellow 3090ers. 

I have a GeForce RTX™ 3090 GAMING OC 24G incoming next week and will flash it with the 390W bios.

Can anyone please share the power target, temp target, and core and mem offsets you use for your overclocks? Just to dial in my expectations and help me get a stable OC faster. I will most likely use Afterburner's auto-tuning for a stable day-to-day OC, but I'm curious what I can expect around the maximum. 

Thank you in advance for your help.


----------



## VickyBeaver

Johneey said:


> Show me proofs that u get only 35 degrease with 2x 240mm and same cpu loop in games please


It's 2x 280s with Noctua iPPC 3000 RPM fans. I don't have a screenshot of it running long-term, but yeah, easy 35c under long-term gaming loads. The next loop will use the same rads and fans; the 3090 does use more watts than the 1080 Ti though, so it'll probably run hotter, ya?


----------



## dr/owned

Chamidorix said:


> We've established doubling or even tripling 75w power draw over pci and mobo vrm is not a problem.


Uhhhh, you would not want to double the power draw through the PCIe slot. The mobo itself isn't the problem... it's the fingers on the card that aren't meant to handle that much current, and they will get melting hot. Sauce:



> One vendor told me directly that while spikes as high as 95 watts of power draw through the PCIE connection are tolerated without issue, sustained power draw at that kind of level would likely cause damage.


I also ran it through a trace current calculator (1 oz/ft² copper, 0.7mm trace width) and found that 95W does correlate to about a 65C temperature on the fingers.

I suspect that just shunting the slot with a milder/safer 40% increase should unlock enough power from the 8-pins (assuming they're in some sort of bios-checked ratio) that total power would be practically unlimited. Der8auer already showed that he was drawing over 500W with just the shunts around them.

EDIT: I watched the @FrameChasers video where he cites der8auer powering a video card from pcie slots only. He used a total of 4 PCIe slots to do it....not 1.
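The slot-current concern above is easy to sanity-check yourself: the slot's bulk power rides on the 12V rail, so the current through the edge fingers is just P/12. A small sketch, using the PCIe CEM figure of 5.5A (66W) for the 12V rail of a x16 graphics slot:

```python
# Current through the PCIe edge fingers for a given slot power draw.
# The slot's bulk power is on the 12 V rail; the PCIe CEM spec allows
# 5.5 A (66 W) there for a x16 graphics slot, plus ~10 W on 3.3 V.

SLOT_12V_LIMIT_A = 5.5  # PCIe CEM spec limit for the 12 V slot rail

def slot_current(power_w: float, rail_v: float = 12.0) -> float:
    """Amps the edge fingers carry for power_w drawn over the slot."""
    return power_w / rail_v

for p in (66, 95, 150):
    i = slot_current(p)
    status = "over spec" if i > SLOT_12V_LIMIT_A else "within spec"
    print(f"{p:>3} W -> {i:.2f} A ({status})")
```

The 95W spikes mentioned above already work out to ~7.9A through fingers specced for 5.5A, which is why sustained draw at that level worries board vendors.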


----------



## slopokdave

Wrathier said:


> Hi fellow 3090ers.
> 
> I have a GeForce RTX™ 3090 GAMING OC 24G incoming next week and will flash it with the 390W bios.


Gigabyte? The stock bios already does 390W if we are talking about the same card. I'm running it on my Asus TUF and that's the max I see.


----------



## tubnotub1

slopokdave said:


> Gigabyte? The stock bios already does 390W if we are talking about the same card. I'm running it on my Asus TUF and that's the max I see.


Can confirm, I've been playing around w/ that bios on my TUF as well; 390 has been the max I've seen. After quite a bit of testing, the best I could get in Port Royal was 13766, and 19648 in Time Spy w/ a GPU score of 21320. Not terrible, but my silicon is mediocre; I ended up going back to the TUF bios as the performance gains just weren't there in non-synthetic scenarios.

For what it's worth @Wrathier, I was running +185 on the core, +500 on the mem, with the power slider (105%) and temp target maxed out. It never went above 61c when benchmarking w/ fans at 100%.


----------



## Falkentyne

Asking again:
Can a shunt mod be attempted with a #2 pencil?


----------



## Glerox

LordGurciullo said:


> Looks like I made quite the impression on Jay
> Lord Stupid


Haha, I laughed when I watched the video. Do you know where he got that bios with the higher power limit for the FTW3 card?


----------



## cdnGhost

What are acceptable temps for an FE 3090? Mine sits around 67-69c with my VR headset and monitor both running everything on ultra. (CPU is at 67c, i9-9900K no OC, using a be quiet! Dark Rock TF; case is the Lian Li PC-O11 XL with 10 Cooler Master SF-120M fans.)


----------



## Alelau18

Messed around a bit more with my 3090 TUF and managed to snipe a top-100 global score in Time Spy (19586) with a 22043 graphics score https://www.3dmark.com/spy/14352227


----------



## domenic

*FYI* - Tom over at _Moore's Law Is Dead_ today reiterated during a livestream his belief that Nvidia is going to flood (relatively speaking) the market with 30-series cards later this month as part of an overall strategy that included intentionally limiting supply at the start. Hope he is right; it would actually be a negative for Nvidia if they still had literally no cards available when AMD releases their product stack.


----------



## Sheyster

Falkentyne said:


> Asking again:
> Can a shunt mod be attempted with a #2 pencil?


I haven't heard of anyone doing this with an Nvidia card. The last time I did it was on an ASUS mobo in 2008, from what I recall. I don't think a pencil trace will lower the shunt resistance enough to give a meaningful power bump.


----------



## dante`afk

VickyBeaver said:


> It's 2x 280s with Noctua iPPC 3000 RPM fans. I don't have a screenshot of it running long-term, but yeah, easy 35c under long-term gaming loads. The next loop will use the same rads and fans; the 3090 does use more watts than the 1080 Ti though, so it'll probably run hotter, ya?


What's your room temp? Show us settings, otherwise you might be running this at 800x600.


----------



## originxt

I hate how Afterburner keeps boosting higher than the locked voltage and frequency, crashing the benchmark. Jesus, I forgot how finicky this program is.


----------



## Alelau18

That's why for benching I just resort to manually editing the curve; it's the best way atm.


----------



## RazielZ

Has anyone tried any of the available BIOSes on an MSI Ventus OC? I'd rather not brick it, since I have no iGPU on this platform, so fixing it would be a bit of a pain.
I also have the original Ventus bios saved; what's the procedure to get it added to the first post's list? I've uploaded it to Google Drive here: MSIVentus3XOC3090.rom


----------



## HyperMatrix

domenic said:


> *FYI* - Tom over at _Moore's Law Is Dead_ today reiterated during a livestream his belief that Nvidia is going to flood (relatively speaking) the market with 30-series cards later this month as part of an overall strategy that included intentionally limiting supply at the start. Hope he is right; it would actually be a negative for Nvidia if they still had literally no cards available when AMD releases their product stack.


The majority of what Moore's Law Is Dead says is just conspiracy-theory nonsense. People point to the few times he's been right as a form of credibility. I've watched a few of his videos, and he is either being intentionally disingenuous or just doesn't understand the technology, and marketing in general. I would take anything he says with a barrel of salt.


----------



## bmgjet

HyperMatrix said:


> The majority of what Moore's Law Is Dead says is just conspiracy-theory nonsense. People point to the few times he's been right as a form of credibility. I've watched a few of his videos, and he is either being intentionally disingenuous or just doesn't understand the technology, and marketing in general. I would take anything he says with a barrel of salt.


Basically this, lol.
Throw enough videos out there and you have to be correct some of the time, but more often than not he is wrong.
That's most YouTubers for you though; clicks and views are all that matter.


----------



## nievz

mirkendargen said:


> I've had this happen on both a 1080TI and 2080TI after adjusting a curve a lot of times. It's like it somehow gets corrupted and you need to reset to default, then adjust straight to your final values.


This helped a ton, dude, thank you! I did a reset first, then went straight to my desired curve and applied it. After a few reboots, my settings are still in place.


----------



## Johneey

I'm getting around 47 Celsius on the GPU during gaming with my 3090 and i9-10900K. What's the best way to improve the temperature?


----------



## mirkendargen

nievz said:


> This helped a ton, dude, thank you! I did a reset first, then went straight to my desired curve and applied it. After a few reboots, my settings are still in place.


Glad that took care of it. In the past I could make this happen just by saving the same curve over and over with no changes; you could see it start to get out of whack and need a reset to default to fix it.


----------



## mirkendargen

Johneey said:


> I'm getting around 47 Celsius on the GPU during gaming with my 3090 and i9-10900K. What's the best way to improve the temperature?
> View attachment 2460979


Make sure you did a good paste job on the waterblock, and decrease coolant temperature. Do you have a way to see your coolant temp?


----------



## Johneey

mirkendargen said:


> Make sure you did a good paste job on the waterblock, and decrease coolant temperature. Do you have a way to see your coolant temp?


I already have LM on the GPU. Are those GPU temps okay or not? I can't see the coolant temp directly yet, but measuring with a fever thermometer shows around 40-41 degrees coolant. Is that dangerous for anything?


----------



## psychrage

cdnGhost said:


> What are acceptable temps for a FE 3090? Mine sits around 67-69 with my VR and monitor both running everything on ultra. (Cpu is at 67, i9-9900k no oc, using a bequiet dark rock tf) case is the lian li pc0-11 xl using 10 coolermaster sf-120m fans.)


Full load, 67-68 is what my FE was floating at. I've since sandwiched it between two 140mm fans; now full load is around 63. A fan center-mounted at the front helped a lot. Keeping the backplate "cool" helps also. Waiting on waterblocks to hit the market, but I might get impatient and pull the trigger on the Bykski block even though it doesn't _look_ all that great imo.


----------



## mirkendargen

Johneey said:


> I already have LM on the GPU. Are those GPU temps okay or not? I can't see the coolant temp directly yet, but measuring with a fever thermometer shows around 40-41 degrees coolant. Is that dangerous for anything?


If you have PETG tubing in your loop, 40-41C is dangerous, and I think some other types of plastic can start to lose rigidity too (I had a res with a window that said it wasn't rated for above 45C). Beyond that risk, it's just quite high in general. A 6-7C delta between your coolant and your GPU is really good (your waterblock and that LM are performing); your coolant is just too hot. You need more radiators or better cool airflow over the radiators you have.


----------



## Esenel

Johneey said:


> I'm getting around 47 Celsius on the GPU during gaming with my 3090 and i9-10900K. What's the best way to improve the temperature?


My rule of thumb.
Get a MoRa 420 per component


----------



## mirkendargen

Esenel said:


> My rule of thumb.
> Get a MoRa 420 per component


I resemble that remark... I have a triple-180mm radiator with six 180mm fans in push-pull... in my basement, where it's 10-20C ambient all year.


----------



## VickyBeaver

dante`afk said:


> whats your room temp? show us settings, otherwise you might be running this on 800x600


Room temps are usually kept comfortable; when it's hot I run the aircon. 
This same Heaven benchmark makes my new 3090 bounce off the power limiter, _shrug_.
It's not hard to cool a 1080 Ti. I ran that particular setup for over a year before breaking it down and never saw that 1080 Ti over 35c in that config; not really sure what's hard to believe there? On top of that, it's rather off topic. 
I'll let you know what I get when I get the EK waterblock for my 3090 and set up the new loop.


----------



## ring0r

Johneey said:


> I'm getting around 47 Celsius on the GPU during gaming with my 3090 and i9-10900K. What's the best way to improve the temperature?
> View attachment 2460979


The temperature is okay. I have the same GPU waterblock and see a delta of 12-14 degrees. My coolant levels off at 34-35 degrees without my fans running at full speed. PETG is not dangerous at 40-41 degrees... I've already tested it; the water must be above 50 degrees for something to deform...


----------



## RedRumy3

I only have the 3DMark demo, but can someone take a look and see if everything looks good?



https://www.3dmark.com/3dm/51290316


----------



## Johneey

RedRumy3 said:


> I only have 3d mark demo but can someone take a look and see if everything looks good
> 
> 
> 
> https://www.3dmark.com/3dm/51290316


Very good card


----------



## cookiesowns

torqueroll said:


> My HexGear R80 case has four bottom fans for fresh air. There are no air filters in use so airflow is very good. Ambient was about 24 degrees Celsius.
> 
> Maybe your thermal pads didn't get good contact after the repaste. They are quite fragile and could perform worse.


I triple-checked the pads and they were making good contact. Yeah, my ambient here is 26-30C.


----------



## Thanh Nguyen

Can anyone confirm whether the FTW3 bios can be flashed onto a PNY model? Thanks.


----------



## olrdtg

Alright, to anyone who's interested in shunt modding the 3090 Founders Edition ---

*Edit: *Added note about 6th shunt resistor & updated images

Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just like Turing.
The method I used was replacing the R005/5mOhm resistors with 3mOhm Panasonic current-sense resistors. I was afraid stacking 5mOhm resistors on top of the originals might cause clearance issues with the cooler. Besides, I'd always recommend replacing the resistors over stacking them for good measure.

There are 6 (six) resistors in total you need to replace (I originally listed 5): 3 of them are on the back of the card and 3 on the front. (The 6th is not circled, but it is on the front of the card near the springy push-pins for the fans. If the card compares values from each resistor, it's definitely a good idea to replace the 6th as well.)

Though I may be wrong on that; one might not have to replace all 6 resistors. I only did them all because I wanted to be absolutely sure the shunt mod would be effective, and I was rushing. So if anyone has the time to test replacing these one by one, please let me know your findings and I can update this.
For the time being, though, until some evidence comes out otherwise, I'd just replace all of them.

They are easy to spot, as they are the only R005 resistors on the board. It's pretty easy to remove them using a heat gun and a pair of tweezers (be VERY careful not to disturb other components!!)
















*^ There is also a 6th shunt resistor next to the push pins on the right side of the front of the card - I forgot to circle this one before; I've updated the images*. I have not noted any difference with this shunt resistor replaced, though you might want to replace it as well. I've also marked the ground & +12V on the 12-pin connector in case you want to trace these yourself. The resistor closest to the PCIe socket on the rear should be measured through the PCIe power pins.

Once they are removed, apply some solder paste, carefully place your new resistors, and heat them with the heat gun until the solder melts, ONE AT A TIME! Again, be very careful not to disturb other SMDs. This way you don't need to bother with a soldering iron, and you can get this done pretty easily even if you have the worst soldering skills, like me.

I am still waiting on my water block to arrive so I'm not able to really push the card in long tests yet, but in short bursts I can get it up to around 2150MHz before the temp spikes up.

Prior to this mod, on air I was only able to get around 1930MHz stable in demanding 3D applications, or 2010MHz in less demanding applications.
Post mod I am now able to get 2040MHz stable on air (with a crap-load of airflow), and I suspect I should be able to get at least 2100MHz stable on water once my block arrives.

I'm attaching images of the front & back of the card with the resistors circled that need to be changed from 5mOhm to 3mOhm

The resistors I used can be bought from DigiKey here

Note: Do this at your OWN risk! These are $1,500 GPUs, and shunt modding will absolutely void your warranty. They can pretty easily tell during RMA if you botched a shunt mod and will not honor your warranty. If you do this, make sure to save the original resistors in an anti-static bag in the event you sell the card in the future.
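For anyone wondering how much headroom the swap buys: the card computes current from the voltage drop across these shunts and assumes they are 5mOhm, so a 3mOhm replacement makes it under-read by a factor of 3/5. A quick back-of-envelope sketch (assuming the power controller scales linearly and all shunts are replaced; the 350W figure is the FE reference TDP):

```python
# Shunt mod scaling: the card measures V_drop = I * R_shunt and assumes R = 5 mOhm.
# With a 3 mOhm shunt installed, the same current produces a smaller voltage drop,
# so the card under-reports power by R_new / R_old.
R_OLD = 0.005  # original R005 shunt, ohms
R_NEW = 0.003  # replacement Panasonic shunt, ohms

reported_fraction = R_NEW / R_OLD      # card sees only 60% of the real power
effective_limit_scale = R_OLD / R_NEW  # real draw allowed at the same reported limit

stock_limit_w = 350  # FE reference TDP
real_draw_at_limit = stock_limit_w * effective_limit_scale

print(f"card under-reads power by factor {reported_fraction:.2f}")
print(f"effective power limit: ~{real_draw_at_limit:.0f}W instead of {stock_limit_w}W")
```

So the mod lifts the effective limit by roughly 1.67x, i.e. ~583W against a 350W BIOS limit - in the same ballpark as the 560~580W wall draw reported elsewhere in this thread for a shunt-modded card.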


----------



## Spiriva

Thanh Nguyen said:


> Can anyone confirm ftw3 bios can be flashed on pny model? Thanks.


Earlier in this thread someone said that it didn't work to flash a 3x8-pin BIOS to a 2x8-pin card.

Did that change?


----------



## chispy

My Asus TUF Gaming OC 3090 went sub-zero @ -21°C on my water chiller. I did the shunt mods only on the 2 PCI Express 8-pin connectors and let it loose. Max clocks 2220/1350MHz, but the clocks are variable depending on the GPU load and benchmark; there is no way to lock the voltage or clocks steady.

Since this is overclock.net, my team at hwbot, I will post some pictures of my sub-zero session.

Ice Ice baby!


----------



## Jordel

chispy said:


> My Asus Tuf Gaming oc 3090 went subsero @ -21c on my water chiller , i did the shunt mods only on the 2 pci express 8 pin connectors and let it loose. Max clocks 2220/1350Mhz but the clocks are variable depending on the gpu load and benchmark , there is no way to lock the voltage or clocks steady.
> 
> Since this is overclock.net my team at hwbot i will post some pictures of my subsero session
> 
> View attachment 2461007
> View attachment 2461008
> View attachment 2461009
> View attachment 2461010


Did you measure power draw?


----------



## chispy

Jordel said:


> Did you measure power draw?


560~580w give or take.


----------



## Jordel

chispy said:


> 560~580w give or take.


Thanks!


----------



## twisted0ne

Waterblock for the MSI ventus cards has just gone up for pre-order



https://www.alphacool.com/detail/index/sArticle/27986



Anyone know if you need special fittings for these? Description says "also relies on the new patented stop fittings, which sit flush with the surface of the terminal" but doesn't say if they're mandatory or which ones they're referring to.


----------






## Wrathier

tubnotub1 said:


> Can confirm, been playing around w/ that bios on my TUF as well, 390 has been the max I have seen. After quite a bit of testing the best I could get on Port Royal was a 13766 and 19648 on Time Spy w/ a GPU score of 21320. Not terrible, but my silicon is mediocre, ended up going back to the TUF bios as the performance gains just weren't really there in non-synthetic scenarios.
> 
> For what it's worth @Wrathier I was running +185 on the core +500 on the mem, maxed out power slider (105%) and temp target. It never went above 61c when benchmarking w/ fans at 100%.


But everywhere I look I can only see that the Gigabyte Gaming OC has a TDP of 350W?


----------



## Spiriva

Wrathier said:


> But everywhere I look I can only see that the Gigabyte Gaming OC has a TDP of 350W?












Did anyone try to flash a 2x8-pin card with a BIOS from a 3x8-pin card yet?


----------



## Wrathier

Spiriva said:


> did anyone try to flash a 2x8pin card with a bios from a 3x8pin card yet?


This is the card I mean:

GeForce RTX™ 3090 GAMING OC 24G | Graphics Card - GIGABYTE Global
www.gigabyte.com


----------



## piperprinx

Asus TUF OC with the Gigabyte OC BIOS.
It should be a 390W power limit, but Afterburner only goes to 384W, looking at HWiNFO and the AB OSD data.
I've read somewhere that you need to force it to 390W with a batch/command-line parameter. I tried some lines, but I'm doing something wrong because it doesn't work.
Is there anybody who can help me?


----------



## dante`afk

olrdtg said:


> Alright, to anyone who's interested in shunt modding the 3090 Founders Edition ---
> 
> Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just like Turing.
> The method I used was replacing the R005/5mOhm resistors with 3mOhm Panasonic current sense resistors. I was afraid stacking 5mOhm resistors on top of the originals might cause clearance issues with the cooler. Besides, I'd always recommend replacing the resistors over stacking them for good measure.
> 
> There are 5 (five) resistors in total you need to replace. 3 of them are on the back of the card, and 2 on the front.
> They are easy to spot as they are the only R005 resistors on the board. It's pretty easy to remove them using a heat gun and a pair of tweezers (be VERY careful not to disturb other components!!)
> Once they are removed, use some solder paste and carefully place your new resistors on and heat them up w/ the heat gun until the solder melts, ONE AT A TIME! And again being very careful not to disturb other SMDs. This way you don't need to bother with using a soldering iron, and you can get this done pretty easily if you have the worst soldering skills like me.
> 
> I am still waiting on my water block to arrive so I'm not able to really push the card in long tests yet, but in short bursts I can get it up to around 2150Mhz before the temp spikes up.
> 
> Prior to this mod on air I was only able to get around 1930mhz stable in demanding 3D applications or 2010mhz in less demanding applications.
> Post mod I am now able to get 2040mhz stable on air (with a crap-load of airflow) and I suspect I should be able to get at least 2100mhz stable on water once my block arrives.
> 
> I'm attaching images of the front & back of the card with the resistors circled that need to be changed from 5mOhm to 3mOhm
> 
> The resistors I used can be bought from DigiKey here
> 
> Note: Do this at your OWN risk! These are $1,500 GPUs, and shunt modding will absolutely void your warranty. They can pretty easily tell during RMA if you botched a shunt mod and will not honor your warranty. If you do this, make sure to save the original resistors in an anti-static bag in the event you sell the card in the future.


doing gods work thx

what's the total power change by replacing the 5ohms with the 3ohms? 80%?


----------



## tubnotub1

@piperprinx Did the same mod w/ the Asus Tuf Non-OC, hit 390 power limit in any load that required it. My steps were flash bios, DDU nvidia drivers, uninstall msi afterburner, restart, install drivers, install msi afterburner, pull power limit to 105% in msi afterburner, then game on. I had no issues w/ the card only hitting 384, it would flat out run 390 at all times when under load.
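For reference, the "flash bios" step in the recipe above is typically done with NVIDIA's nvflash tool from an elevated command prompt. A sketch of the usual sequence (the .rom file name is a placeholder, and exact switches can vary between nvflash versions - check yours with `nvflash64 --help`):

```shell
# Back up the current VBIOS first - keep this file somewhere safe
nvflash64 --save backup.rom

# Disable the onboard flash write protection (some nvflash builds need this)
nvflash64 --protectoff

# Flash the new image; -6 overrides the PCI subsystem ID mismatch check,
# which is required when cross-flashing another vendor's BIOS
nvflash64 -6 gigabyte_gaming_oc_390w.rom
```

Reboot after the flash completes, then do the DDU / driver reinstall steps described above.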

@Spiriva from what I saw in this thread, yes - the card will throw out an error stating that it is missing a required 8-pin connector and fail to POST. Seems like if you have a 2x8 card you need to stick with BIOSes from other 2x8 cards; trying to force a 3x8 on the card can cause issues.


----------



## Spiriva

Wrathier said:


> This is the card I mean
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GeForce RTX™ 3090 GAMING OC 24G Key Features | Graphics Card - GIGABYTE Global
> 
> 
> Discover AORUS premium graphics cards, ft. WINDFORCE cooling, RGB lighting, PCB protection, and VR friendly features for the best gaming and VR experience!
> 
> 
> 
> 
> www.gigabyte.com


I dunno what the card looks like, I only got the BIOS.


----------



## Wrathier

Is it worth it to install the STRIX OC 480W onto the Gigabyte Gaming OC?

Will I get a massive OC boost or is it fine as it is?


----------



## Wrathier

Spiriva said:


> I dunno what the card looks like i only got the bios


Ok, there aren't a lot of Gaming OC cards from Gigabyte, so I assume it's the one. 😀


----------



## wesley8

chispy said:


> At what temps ? I have an Asus 3090TUF OC on the way and should be here by friday , maybe we can help each other figure this out since we will have the same video card. Thanks for reporting your findings + rep .


can u share your TUF OC bios? thx


----------



## wesley8

Dickenud3L said:


> Asus TUF OC has only 375W
> Strix Bios doesent Work, it shows 440-460W but the Card stuck @ 1650-1700 Core Clock if you OC the Card Crash.
> Flashed Back to TUF OC Bios.
> 
> Somebody try Gigabyte Gaming OC with 390W ?


Can u share your TUF OC Bios here? THX


----------



## Cavokk



Wrathier said:


> Is it worth it to install the STRIX OC 480W onto the Gigabyte Gaming OC?
> 
> Will I get a massive OC boost or is it fine as it is?


Not sure if you can as the Strix is a 3x 8pin PCI-E power card. 



----------



## Pilz

Any word on who will be making an EVGA 3090 FTW3 Ultra block? I really don't want to run my card on air. I did get in touch with EK and they have one coming in 2-3 weeks, but I wanted to see what other options there will be.


----------



## Benni231990

I have a question:

My 3090 TUF non-OC will come this week. When I flash the 390W BIOS, do I lose a DP or HDMI port, or do all display outputs work with the new BIOS?


----------



## BigMack70

Fedex told me that my 3090 would arrive today but I still see it sitting in a warehouse about 1000 miles away from me if their tracking system is accurate. Ugh.

Wish Nvidia would use UPS. Had this same problem two years ago with my 2080 Ti founders from them. Fedex is useless. Anyone else here that got in on the FE drop a few days ago?


----------



## Wrathier

Cavokk said:


> B
> 
> 
> Not sure if you can as the Strix is a 3x 8pin PCI-E power card.
> 
> C


I can force it on, but wondering if it's worth it. I prefer fans over a custom loop; I'm too lazy to make a loop, honestly. Have all parts except the GPU block though.


----------



## Nitemare3219

BigMack70 said:


> Fedex told me that my 3090 would arrive today but I still see it sitting in a warehouse about 1000 miles away from me if their tracking system is accurate. Ugh.
> 
> Wish Nvidia would use UPS. Had this same problem two years ago with my 2080 Ti founders from them. Fedex is useless. Anyone else here that got in on the FE drop a few days ago?


So much FedEx hate lol. If you’re talking about Osseo, they aren’t doing departure scans. I got the drop on Thursday afternoon and my card was delivered on time yesterday.


----------



## spikeot

Wrathier said:


> I can force it on, but wondering if it’s worth it. I prefer fans over custom loop I’m to lazy to make a loop honestly. Have all parts except GPU block though.


I'm less experienced than most on here, but there's a post on the page before this where someone says a 2x8pin card will not post with a 3x8pin BIOS because it'll say it doesn't have a cable connected.


----------



## BigMack70

Nitemare3219 said:


> So much FedEx hate lol. If you’re talking about Osseo, they aren’t doing departure scans. I got the drop on Thursday afternoon and my card was delivered on time yesterday.


Haha yeah that's where it appears. Maybe it will show up today after all.

I do have a bit of a grudge against Fedex... in the past 5 years, I've had several signature-required deliveries that never arrived when they were scheduled and were either a week late or wound up needing me to go drive to a location to pick them up. They're also worse than UPS in my area with delivering boxes that look like someone took a jackhammer to them.


----------



## Sheyster

Found this third-party FE 12-pin to dual 8-pin PSU cable on Amazon:

RTX 3.0 Series 12-pin to Dual 8-pin Graphics Card Extension Cable, 18AWG, Sleeved Braided (Black&White)
www.amazon.com





It comes in 3 colors and is about half the price of the EVGA FE cable when you add shipping and tax.


----------



## BigMack70

Sheyster said:


> Found this third-party FE 12-pin to dual 8-pin PS cable on Amazon:
> 
> 
> 
> 
> 
> 
> Amazon.com: RTX 3.0 Series 12pin to Dual 8-pin nvida Graphics Card Extension Cable 3070/3080 18AWG for nvida PCIE Sleeved Braided Male to Female Extension Cord Power Supply Cable for PC Computer (Black&White): Home Audio & Theater
> 
> 
> Buy RTX 3.0 Series 12pin to Dual 8-pin nvida Graphics Card Extension Cable 3070/3080 18AWG for nvida PCIE Sleeved Braided Male to Female Extension Cord Power Supply Cable for PC Computer (Black&White): Power Cables - Amazon.com ✓ FREE DELIVERY possible on eligible purchases
> 
> 
> 
> www.amazon.com
> 
> 
> 
> 
> 
> It comes in 3 colors and is about half the price of the EVGA FE cable when you add shipping and tax.


3rd party power cables are a big no


----------



## Wrathier

spikeot said:


> I'm less experienced than most on here, but there's a post on the page before this where someone says a 2x8pin card will not post with a 3x8pin BIOS because it'll say it doesn't have a cable connected.


It will deactivate 1 DisplayPort, I think.

Does anyone here have experience with a 3x8-pin BIOS on a 2x8-pin card, and is the extra power limit of 480W worth it?

Thank you in advance.


----------



## LukkyStrike

Wrathier said:


> It will deactivate 1 DisplayPort I think.
> 
> Does anyone here have experience with the 3 8 pin bios on a 2 8 pin card and are the extra power limit of 480 W worth it?
> 
> Thank you in advance.


very curious myself about this. 

Looking into what 8-pin PCIe cables carry: the ATX standard states 150W per plug, so 2x8-pin + 75W from the slot theoretically has a max wattage of 375W. I know this is NOT true though, because I am pulling about 395W out of my 3090 Gaming OC, and that BIOS has a stated max of 390W. So I must be missing something in the standard, OR the 150W from the 8-pin is the minimum on a small PSU, not my 1000W T2 from EVGA. Going to do some looking into this over the next few days.


----------



## warbucks

twisted0ne said:


> Waterblock for the MSI ventus cards has just gone up for pre-order
> 
> 
> 
> https://www.alphacool.com/detail/index/sArticle/27986
> 
> 
> 
> Anyone know if you need special fittings for these? Description says "also relies on the new patented stop fittings, which sit flush with the surface of the terminal" but doesn't say if they're mandatory or which ones they're referring to.


Giddy up. Need a block for this card badly. The technical data shows G 1/4" threaded fittings so I would assume you could use any standard G 1/4" threaded fitting. Might be a good idea to throw them an email to confirm though. I think I'm going to shunt mod my card rather than flash another bios given it'll be more effective and I have some soldering/heat gun skills I could put to use .


----------



## Jordel

LukkyStrike said:


> very curious myself about this.
> 
> Looking into what 8-pin PCIe cables carry, ATX standard states 150W per plug, so 2x8pin + 75w from the slot, theoretically has a max wattage of 375. I know this is NOT true tho, because i am pulling about 395W out of my 3090 Gaming OC. And that bios has a stated max of 390. So i must be missing something in the standards, OR the 150W from the 8-pin is the MIN on a small PS, not my 1000W T2 from EVGA. Going to do some looking into this over the next few days.


The specification states limits for safety's sake. It should be able to provide a certain amount of stated current at a specific voltage; anything above that and all bets are off. That doesn't mean you _can't_ push more, it just means that the specifications top out there as a guarantee for "this will work".


----------



## BigMack70

Any high end power supply is easily capable of pushing 200-250W per 8-pin connector without any safety or stability problems. If you're putting a 3090 in a system where you have safety concerns on your PSU about exceeding spec power draw, you need a new PSU before your graphics card ever touches your system.


----------



## LukkyStrike

Jordel said:


> The specification states limits for safety sake. It should be able to provide a certain amount of stated current at a specific voltage. Anything above and all bets are off. That doesn't meant you _can't_ push more, it just means that the specifications top out there as a guarantee for "This will work".


Very good, that makes sense to me.

Thank you for the clarification. 



BigMack70 said:


> Any high end power supply is easily capable of pushing 200-250W per 8-pin connector without any safety or stability problems. If you're putting a 3090 in a system where you have safety concerns on your PSU about exceeding spec power draw, you need a new PSU before your graphics card ever touches your system.


I was leaning towards a much higher Wattage as you mention. Just never really looked at it until now, these 3090's want the juice. 

Amen about PSU in general. Skimping here is not a good idea.


----------



## Alelau18

The 150W calculation comes from ~4.2A per pin pair: 12V × 4.2A × 3 pairs ≈ 150W. They are "rated" for ~4.2A per pin pair at 30°C, but 18AWG wires can support way more than that; each 8-pin connector should realistically be able to pull 324W of power by itself (9A per pin pair).
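Plugging those numbers in (assuming the 12V rail and three live 12V pin pairs per 8-pin connector, per the figures above):

```python
# 8-pin PCIe power: 3 live 12V pin pairs per connector
V = 12.0
PAIRS = 3

rated_amps_per_pair = 4.2      # derated figure behind the 150W spec rating
realistic_amps_per_pair = 9.0  # what 18AWG wiring can plausibly carry

rated_w = V * rated_amps_per_pair * PAIRS          # ~151W, rounded to the 150W spec
realistic_w = V * realistic_amps_per_pair * PAIRS  # 324W

print(f"spec rating: ~{rated_w:.0f}W per connector")
print(f"realistic ceiling: ~{realistic_w:.0f}W per connector")
```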


----------



## Baasha

BigMack70 said:


> Fedex told me that my 3090 would arrive today but I still see it sitting in a warehouse about 1000 miles away from me if their tracking system is accurate. Ugh.
> 
> Wish Nvidia would use UPS. Had this same problem two years ago with my 2080 Ti founders from them. Fedex is useless. Anyone else here that got in on the FE drop a few days ago?


Nice! You were able to get one? I missed the drop on Thursday since I was driving when I got a notification. Ugh...

The wait continues. At this rate, I think I'm just going to wait for the Strix 3090 and get 2x of those.


----------



## LukkyStrike

Alelau18 said:


> the 150W calculation comes from ~4.2A per pin pair, 12*4*3~ 150W they are "rated" for ~4.2A per pin pair at 30C, but 18AWG wires can support way more than that, each 8pin connector should realistically be able to pull 324W of power by itself (9A per pin pair).


So in other words: the 3rd power plug is a bit of a gimmick in the real world? That is to say, we can pull at least 675W from the two 8-pins + slot, so power delivery would probably not be the ceiling? (Just thoughts here.)

Unless there is some electrical mastery that would explain the need for three 8-pins?


----------



## Kleimo

twisted0ne said:


> Waterblock for the MSI ventus cards has just gone up for pre-order
> 
> 
> 
> https://www.alphacool.com/detail/index/sArticle/27986
> 
> 
> 
> Anyone know if you need special fittings for these? Description says "also relies on the new patented stop fittings, which sit flush with the surface of the terminal" but doesn't say if they're mandatory or which ones they're referring to.


It comes with these stop end fittings:


https://www.alphacool.com/shop/radiators/radiators-accessories/25210/alphacool-eiszapfen-screw-plug-v.2-g1/4-deep-black


I don't see why you couldn't use traditional stop end fittings, but maybe best to confirm it with Alphacool.


----------



## Alelau18

LukkyStrike said:


> So in other words: the 3rd power plug is a bit of a gimmick in the real world? to say we can pull, at least, 675w from the 2 8-pins+slot: Power delivery would probably not be the ceiling? (just thoughts here).
> 
> Unless there is some electrical mastery that would explain the need for 3 8pins?



It kinda helps with voltage drop, although the 12V rail doesn't have much of an issue with that nowadays. But yes, in theory you can pull 675W with 2x8-pin + PCIe slot if there are no power limit restrictions from the card itself (the reason the Gigabyte OC BIOS is 390W is because the partners also know it, obviously). They are rated for 150W (4.2A) just as heavy prevention, i.e. keeping the temperature of the cable around 30°C. If you increase the power the connector pulls, the temp of the cable will be a bit higher, but with 18AWG cables it shouldn't be an issue since they can handle way higher temps than 30°C.

You can also test that by cutting live cables from 8 pins, like der8auer did here:
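Adding the slot budget to that per-connector figure gives the theoretical ceiling being discussed - a sketch using the 9A-per-pair number from above (real cards will hit their BIOS power limit long before this):

```python
V = 12.0
PAIRS = 3          # live 12V pin pairs per 8-pin connector
AMPS_REALISTIC = 9.0
SLOT_W = 75        # PCIe slot power budget

per_connector_w = V * AMPS_REALISTIC * PAIRS   # 324W per 8-pin
total_2x8pin_w = 2 * per_connector_w + SLOT_W  # 723W

print(f"theoretical ceiling, 2x8-pin + slot: ~{total_2x8pin_w:.0f}W")
```

That works out to ~723W, comfortably above the "at least 675W" figure mentioned above.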


----------



## LukkyStrike

Alelau18 said:


> It kinda helps for voltage drop although the 12V has not much of an issue with that nowadays, but yes, in theory you can pull 675W with 2x8pin+PCIe slot if there's no power limit restrictions by the own card, (reason the Gigabyte OC BIOS is 390W is because the partners also know it, obviously) but they are rated for 150W (4.2A) just for heavy prevention, as in the temperature of the cable to be 30C. If you increase the power the connector can pull the cable the temp of the cable will be a bit higher, but with 18AWG cables it shouldn't be an issue since they can support way higher temps than 30C.
> 
> You can also test that by cutting live cables from 8 pins, like der8auer did here:



I will go ahead and leave cutting live wires to professionals. 

Thanks for the technical information. It matches what I thought could be going on. 

Thanks!


----------



## BigMack70

Baasha said:


> Nice! You were able to get one? I missed the drop on Thursday since I was driving when I got a notification. Ugh...
> 
> The wait continues. At this rate, I think I'm just going to wait for the Strix 3090 and get 2x of those.


Yeah I got lucky and had the afternoon off to work from home so I could keep an eye out for drops, and got doubly lucky that my purchase went through.

Seeing reports of people missing their delivery date and then FedEx moves it to "pending", which is what I expect to happen... FedEx usually arrives by mid-day at my place if they come.

I don't know if I'll wind up keeping my FE card or not. If the FE card is loud, I'll definitely get rid of it. If better cards under water and custom XOC bios can hit FE+10% performance, I'll get rid of it too. But if the FE is quiet and water blocking / XOC BIOS only offers like +5% performance, I'll probably just keep it.

I wish multi-GPU with these was good... Would love to run two. But alas, I think I'm done chasing that rabbit. Maybe if I buy an 8k TV before Ampere's successor arrives.


----------



## Nizzen

Got a delivery today


----------



## Sheyster

BigMack70 said:


> 3rd party power cables are a big no


LOL, so is shunt modding, BIOS flashing, etc? What's your point? Third-party cables also include the FE cable from EVGA, soooo....

If anything, cables like this one and the EVGA cable are probably the least likely "mod" to void your FE warranty compared to what is typically done.


----------



## BigMack70

Sheyster said:


> LOL, so is shunt modding, BIOS flashing, etc? What's your point? Third-party cables also include the FE cable from EVGA, soooo....
> 
> If anything, cables like this one and the EVGA cable are probably the least likely "mod" to void your FE warranty compared to what is typically done.


EVGA is providing a 1st party cable for their own power supplies. You'd be dumb to use their cable with a different power supply. Seasonic and Corsair will be supplying their own first party cables, and I'm sure other PSU makers will too.

The danger with third party power cables isn't that you void your warranty, it's that you blow something up or melt something in your PC. If you want to trust third party power cables with your $2000+ PC, go right ahead.

With shunt mods, BIOS flashing, etc, you are doing something to your card where you are fully in control of all risk factors. With buying third party power cables, you are trusting some cut-rate company to do their testing and homework so you don't break something. Heck, those cables on Amazon don't even say which model power supplies they work with. Not something anyone should be rolling the dice on.


----------



## Sheyster

BigMack70 said:


> EVGA is providing a 1st party cable for their own power supplies. You'd be dumb to use their cable with a different power supply. Seasonic and Corsair will be supplying their own first party cables, and I'm sure other PSU makers will too.
> 
> The danger with third party power cables isn't that you void your warranty, it's that you blow something up or melt something in your PC. If you want to trust third party power cables with your $2000+ PC, go right ahead.
> 
> With shunt mods, BIOS flashing, etc, you are doing something to your card where you are fully in control of all risk factors. With buying third party power cables, you are trusting some cut-rate company to do their testing and homework so you don't break something. Heck, those cables on Amazon don't even say which model power supplies they work with. Not something anyone should be rolling the dice on.


I can understand your point of view. FWIW, I purchased the EVGA cable for my G3 1000 despite the absurd price they want for it. It came out to $51 delivered.

EDIT - We need to be clear that all of these FE cables are third-party as far as Nvidia is concerned. They want everyone to use that hideous looking adapter they include with the card, which is a damn shame considering they could have easily made something better that allows us to hide the connectors instead of the whole thing sitting directly on top of the card. That adapter cable is way too short.


----------



## Tias

I got a 3090 Asus TUF; it's a 2x8-pin card. I wanna flash the FTW3 or Strix BIOS to the card to get more wattage. 390W from the Gigabyte BIOS seems too little; even the 3090 FE uses 400W on its weird 2x8-to-1x12 connector.


----------



## Sheyster

Tias said:


> I got a 3090 Asus TUF, its a 2x8pin card. I wanna flash the FTW3 or Strix bios to the card to get more W use. 390w from the gigabyte bios seem to little, even the 3090 FE uses 400w on its weird 2x8-to-1x12 connector.


Don't flash a 3x8-pin BIOS to a 2x8-pin card. You won't get the desired result. Your best bet for now is the 390w GB BIOS.


----------



## Wrathier

Tias said:


> I got a 3090 Asus TUF, its a 2x8pin card. I wanna flash the FTW3 or Strix bios to the card to get more W use. 390w from the gigabyte bios seem to little, even the 3090 FE uses 400w on its weird 2x8-to-1x12 connector.


I wanted to do the same thing, but people keep telling me that it will result in an error as we only have 2x8-pin. So the best bet is to use my Gaming OC BIOS and wait for the 400W one to be released.


----------



## piperprinx

tubnotub1 said:


> @piperprinx Did the same mod w/ the Asus Tuf Non-OC, hit 390 power limit in any load that required it. My steps were flash bios, DDU nvidia drivers, uninstall msi afterburner, restart, install drivers, install msi afterburner, pull power limit to 105% in msi afterburner, then game on. I had no issues w/ the card only hitting 384, it would flat out run 390 at all times when under load.


Thanks for your answer. I didn't reinstall AB. I will give it another try when we have the Aorus 400W BIOS.


----------



## Sheyster

Wrathier said:


> I wanted to do the same thing, but people keeps telling that it will result in an error as we only have 2 8 pins. So best bed is to use my Gaming OC bios and wait for the 400 Watt to be released.


Where is the 390W GB OC BIOS? I don't see it listed in TPU's database. They have the 385W GB BIOS.


----------



## cookiesowns

olrdtg said:


> Alright, to anyone who's interested in shunt modding the 3090 Founders Edition ---
> 
> Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just like Turing.
> The method I used was replacing the R005/5mOhm resistors with 3mOhm Panasonic current sense resistors. I was afraid stacking 5mOhm resistors on top of the originals might cause clearance issues with the cooler. Besides, I'd always recommend replacing the resistors over stacking them for good measure.
> 
> There are 5 (five) resistors in total you need to replace. 3 of them are on the back of the card, and 2 on the front.
> They are easy to spot as they are the only R005 resistors on the board. It's pretty easy to remove them using a heat gun and a pair of tweezers (be VERY careful not to disturb other components!!)
> Once they are removed, use some solder paste and carefully place your new resistors on and heat them up w/ the heat gun until the solder melts, ONE AT A TIME! And again being very careful not to disturb other SMDs. This way you don't need to bother with using a soldering iron, and you can get this done pretty easily if you have the worst soldering skills like me.
> 
> I am still waiting on my water block to arrive so I'm not able to really push the card in long tests yet, but in short bursts I can get it up to around 2150Mhz before the temp spikes up.
> 
> Prior to this mod on air I was only able to get around 1930mhz stable in demanding 3D applications or 2010mhz in less demanding applications.
> Post mod I am now able to get 2040mhz stable on air (with a crap-load of airflow) and I suspect I should be able to get at least 2100mhz stable on water once my block arrives.
> 
> I'm attaching images of the front & back of the card with the resistors circled that need to be changed from 5mOhm to 3mOhm
> 
> The resistors I used can be bought from DigiKey here
> 
> Note: Do this at your OWN risk! These are $1,500 GPUs, and shunt modding will absolutely void your warranty. They can pretty easily tell during RMA if you botched a shunt mod and will not honor your warranty. If you do this, make sure to save the original resistors in an anti-static bag in the event you sell the card in the future.


Can you post your voltage curve? I'm still trying to figure out if I want to return/RMA my FE or wait for a block before pushing the system further?


----------



## Wrathier

Sheyster said:


> Where is the 390W GB OC BIOS? I don't see it listed in TPU's database. They have the 385W GB BIOS.


 https://mega.nz/file/giIU0Z6Q


----------



## Wrathier

Hi all,

Has anyone tested Afterburner's auto tuning to see what it recommends? It clocks the core to 90% stability.

Thanks.


----------



## Benni231990

Has anyone tested the Gaming OC BIOS with 390W on a TUF Gaming non-OC?

And are all display outputs working after the flash?


----------



## olrdtg

cookiesowns said:


> Can you post your voltage curve? I'm still trying to figure out if I want to return/RMA my FE or wait for a block before pushing the system further?


I haven't really modified the curve yet, I'll be messing around with that once my water block arrives. Right now I've just been testing with offsets on the core clock & memory.
Though I can say w/o a doubt that when shunt modded the FE can push some really high clocks on chilled air. Can't wait to see what it can do w/ chilled water.


----------



## Wrathier

Wrathier said:


> Hi all,
> 
> Has anyone tested Afterburners Auto Tuning to see what it recommends? - It clocks core to 90% stability.
> 
> Thanks.


Could someone please be kind and check it and compare it to their overclock? I want an idea of how well Afterburner gets it right automatically. I need it for a project I'm doing.


----------



## Thoth420

A third-party cable won't void anything; moreover, the chance of an issue is minuscule. Just don't volunteer unnecessary information to someone processing an RMA.
Does the little circular sticker over one of the screws on the back of a GPU scare you too?


----------



## nievz

When I flashed my BIOS, it caused the screen to go blank for a few seconds due to the card resetting. Is it best practice to first disable the card in Device Manager before beginning the flash process?


----------



## slopokdave

nievz said:


> When I flashed my BIOS, it caused the screen to go blank for a few seconds due to the card resetting. Is it best practice to first disable the card in Device Manager before beginning the flash process?
> 
> View attachment 2461047


Your PC should restart when it flashes. I had to unplug/plug my displayport cable back in, but otherwise was good.


----------



## slopokdave

Benni231990 said:


> Has anyone tested the Gaming OC BIOS with 390W on a TUF Gaming non-OC?
> 
> And are all display outputs working after the flash?


I am using that bios on TUF non-OC. My middle displayport out is not working. Not sure on HDMI.


----------



## Benni231990

ok thx good to know


----------



## dr/owned

chispy said:


> My Asus TUF Gaming OC 3090 went subzero @ -21C on my water chiller. I did the shunt mods only on the 2 PCI Express 8-pin connectors and let it loose. Max clocks 2220/1350MHz, but the clocks are variable depending on the GPU load and benchmark; there is no way to lock the voltage or clocks steady.


Try shunting the PCIe slot so we can see if there is actual ratio checking going on and wall power increases. 5mOhm should be fine for temporary / open bench where there's a fan on it...wouldn't go over about 10-12mOhm long term.


----------



## DrunknFoo

nievz said:


> When I flashed my BIOS, it caused the screen to go blank for a few seconds due to the card resetting. Is it best practice to first disable the card in Device Manager before beginning the flash process?
> 
> View attachment 2461047


Not necessary; older versions of nvflash required this. I'm pretty sure it's scripted into the nvflash command line anyway.


----------



## NapsterAU

What happened to the custom BIOS editor?
I remember using this a few generations ago where I could lock clocks and voltage for a mad stable overclock.

Also, anyone with the Aorus 2x 8-pin yet? haha - I want that BIOS


----------



## Benni231990

We want an XOC BIOS for 2x 8-pin and 3x 8-pin. When we have that, we are all happy xD


----------



## NapsterAU

This is true hahaha


----------



## jura11

Benni231990 said:


> We want an XOC BIOS for 2x 8-pin and 3x 8-pin. When we have that, we are all happy xD


We need someone who will leak that BIOS, but I don't think a YouTuber would do that hahaha

Hope this helps 

Thanks, Jura


----------



## Baasha

the XOC BIOS is digitally signed iirc and so there is no way any youtuber is going to risk their relationship with an AIB to "leak" it to the public.

then again, anything is possible.


----------



## jura11

Baasha said:


> the XOC BIOS is digitally signed iirc and so there is no way any youtuber is going to risk their relationship with an AIB to "leak" it to the public.
> 
> then again, anything is possible.


DanCop previously released the Asus RTX 2080 Ti XOC BIOS, and I agree these YouTubers will jeopardise their relationship with the AIB if they release the XOC BIOS, sadly

But as you said, anything is possible 

Hope this helps 

Thanks, Jura


----------



## slopokdave

NapsterAU said:


> What happened to the custom BIOS editor?
> I remember using this a few generations ago where I could lock clocks and voltage for a mad stable overclock.


Oh this sounds interesting. Yes please.


----------



## jura11

slopokdave said:


> Oh this sounds interesting. Yes please.


The last working BIOS editor was for Maxwell only, from what I remember, and I don't think a Pascal, Turing, or even Ampere BIOS editor will happen

Hope this helps 

Thanks, Jura


----------



## Zurv

Baasha said:


> the XOC BIOS is digitally signed iirc and so there is no way any youtuber is going to risk their relationship with an AIB to "leak" it to the public.
> 
> then again, anything is possible.


EVGA will sell you the XOC bios... with the Kingpin 

hrmmm.. did anyone ever try putting the 2080ti kingpin xoc bios on another card?

(and yes, i already did semi break my 3090 FE that i just got today. ***, the logo power cable - it really didn't want to come out. Sooo.. it isn't connected to the PCB anymore. )
(i just wanted to change the paste ;P )
(first game.. hades! yeah... real demanding.. haha...)

oh, note to anyone using an LG OLED (i'm using the c9) - yes yes, gsync is broken till firmware fix. But to get the 120hz you need to go to the PC rez options. (which i'm sure everyone already knew.)


----------



## bmgjet

Shunt modding is the only way to get more than 390W at the moment.
I've made a tool to help you work out what shunts to use.


GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily. (github.com)


----------



## Thoth420

Zurv said:


> oh, note to anyone using an LG OLED (i'm using the c9) - yes yes, gsync is broken till firmware fix. But to get the 120hz you need to go to the PC rez options. (which i'm sure everyone already knew.)


Was this not fixed when the CX models were? I have an order in for the CX 55 right now based on this(source below). It will not be my main panel for my 3090 but it will be hooked up to the card. I am buying this primarily as a TV upgrade and for the Xbox Series X but I have been obviously following this issue.


----------



## originxt

Wrathier said:


> Could someone please be kind and check it and compare it to their overclock? I want an idea of how well Afterburner gets it right automatically. I need it for a project I'm doing.


There you go bud. YMMV cause my temps are hot garbage since my build isn't very air cooling friendly. It applied a 50+ memory oc and a wack curve. Also had 3 errors trying to even start the thing.

Initial setup: Max PL, Max Fans.


----------



## Xel_Naga

What's the TL;DR on the cheapest 3090 to watercool and OC? Shunt mod a TUF?


----------



## wyattneill

psychrage said:


> Nice. What kind of results did you get from liquid metal? I did a re-paste on my FE with kingpin kpx, which didn't drop the temps(peaking at 67* C), but did give more stable clocks.
> I also have my FE sandwiched between 2x 140mm fans on my open air case.



Well, I was able to get way higher up the Port Royal leaderboard after LM. I was 50th, then 17th, and now I am here.


----------



## wyattneill

xSolid said:


>



Never would I shunt mod that way.


----------



## Zurv

Thoth420 said:


> Was this not fixed when the CX models were? I have an order in for the CX 55 right now based on this(source below). It will not be my main panel for my 3090 but it will be hooked up to the card. I am buying this primarily as a TV upgrade and for the Xbox Series X but I have been obviously following this issue.


That was just a test firmware (and only for the UK units.) The real one isn't out yet. (Note: the KR and US units use the same firmware, so if KR gets it first (they normally do), you can use that one.)
You can still get 120hz.. just not with Gsync (till the fix.)
I'm having an odd issue with my 3090 and eARC now.. It keeps cutting in and out but i deal with that another time. Maybe the TV doesn't like 120hz eARC 'n stuff. (It isn't bandwidth or the cable.. as video would be more picky than audio.) _shrug_... problem for tomorrow. (it was fine with the 2080ti.) 
... maybe i should do a clean driver install too.


----------



## olrdtg

bmgjet said:


> Shunt modding is the only way to get more than 390W at the moment.
> I've made a tool to help you work out what shunts to use.
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily. (github.com)


Your calculator seems to only be set up for calculating stacking resistors, correct? Is there any way you can set this thing up to also calculate for resistor replacement?


----------



## mirkendargen

Thoth420 said:


> Was this not fixed when the CX models were? I have an order in for the CX 55 right now based on this(source below). It will not be my main panel for my 3090 but it will be hooked up to the card. I am buying this primarily as a TV upgrade and for the Xbox Series X but I have been obviously following this issue.


I have a US 48CX, but I don't have a 30series card yet. I can confirm with the Korean 03.11.25 firmware 1080p 120hz 4:4:4 chroma is fixed (can't test 4k on a 2080Ti), but others in this thread with 30series cards have confirmed GSync and RGB are both fixed at 4k with it:



https://hardforum.com/threads/lg-48cx.1991077/page-115



Some people are claiming weird Gsync framepacing issues, others are saying it's silky smooth. Seems more like a config problem (drivers need to be clean installed or a sketchy overclock or something) than TV firmware since other people are swearing up and down it's fine.


----------



## Nitemare3219

Zurv said:


> That was just a test firmware (and only for the UK units.) The real one isn't out yet. (Note: the KR and US units use the same firmware, so if KR gets it first (they normally do), you can use that one.)
> You can still get 120hz.. just not with Gsync (till the fix.)
> I'm having an odd issue with my 3090 and eARC now.. It keeps cutting in and out but i deal with that another time. Maybe the TV doesn't like 120hz eARC 'n stuff. (It isn't bandwidth or the cable.. as video would be more picky than audio.) _shrug_... problem for tomorrow. (it was fine with the 2080ti.)
> ... maybe i should do a clean driver install too.


We found a temporary fix for the C9 on avsforums. May work for CX as well.

NVCP: Enable G-Sync, then scroll down ... "3. Display Specific Settings" - do not check "enable settings for the selected display model"
Windows display settings > scroll down > graphics settings > Variable refresh rate set to ON.

Screen tearing gone, seems to work almost as well as G-Sync. I may be noticing a stutter or two, hard to say yet. But it definitely stops tearing in game & the pendulum demo.


----------



## dr/owned

olrdtg said:


> Your calculator seems to only be set up for calculating stacking resistors, correct? Is there any way you can set this thing up to also calculate for resistor replacement?


There's a "replaced" checkbox.

It's also a pretty simple calculation of new_power = (5 / resistor) * old_power. You could set this up in Excel in 30 seconds.
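For anyone who'd rather not open Excel, the same replacement math can be sketched in a few lines of Python (this assumes the stock shunts are 5 mOhm, as the formula above implies; resistor values are in milliohms):

```python
# Shunt *replacement* math from the post above: the card senses current
# across a 5 mOhm shunt, so swapping in a smaller resistor makes the
# firmware under-read power by the ratio 5/R.
STOCK_MOHM = 5.0

def replaced_power(old_power_w: float, new_shunt_mohm: float) -> float:
    """Actual board power when the firmware believes it is at old_power_w."""
    return (STOCK_MOHM / new_shunt_mohm) * old_power_w

# A 3 mOhm replacement on a 390W limit lets roughly 650W through.
print(round(replaced_power(390, 3)))
```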


----------



## dr/owned

Made a lookup table for lazy people for stacking shunt resistors. If you want to do a "replacement" then you can fill in the gaps (i.e. 3 mOhm would be about a 1.65x multiplier on the power of whatever you shunt).

(stock = infinite resistance shunt resistor)
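Since the attached table may not survive reposting, here's a quick Python sketch that regenerates the stacking numbers from the parallel-resistor math (stock 5 mOhm shunt assumed; the stack values below are just examples):

```python
# A stacked resistor sits in parallel with the stock 5 mOhm shunt:
#   R_eff = (5 * R) / (5 + R)  ->  power multiplier = 5 / R_eff = 1 + 5/R
# Stacking an infinitely large resistor changes nothing (multiplier 1.0),
# which is the "stock = infinite resistance" note above.
STOCK_MOHM = 5.0

def stack_multiplier(stack_mohm: float) -> float:
    r_eff = (STOCK_MOHM * stack_mohm) / (STOCK_MOHM + stack_mohm)
    return STOCK_MOHM / r_eff

for r in (5, 10, 15, 20, 50):  # example stack values in mOhm
    print(f"{r:>3} mOhm stack -> x{stack_multiplier(r):.2f} power")
```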


----------



## Thoth420

mirkendargen said:


> I have a US 48CX, but I don't have a 30series card yet. I can confirm with the Korean 03.11.25 firmware 1080p 120hz 4:4:4 chroma is fixed (can't test 4k on a 2080Ti), but others in this thread with 30series cards have confirmed GSync and RGB are both fixed at 4k with it:
> 
> 
> 
> https://hardforum.com/threads/lg-48cx.1991077/page-115
> 
> 
> 
> Some people are claiming weird Gsync framepacing issues, others are saying it's silky smooth. Seems more like a config problem (drivers need to be clean installed or a sketchy overclock or something) than TV firmware since other people are swearing up and down it's fine.


Great and thanks for the info! I'm pretty excited for this tv as I always tend to dump all my display budget into my monitors. This thing will be replacing a cheapo Samsung 4K TN.


----------



## rhyno

Wish I could share my vBIOS; my FTW3 3090 card hits 469 watts with it, and I'm on here looking for a way to get more without shunt mods haha. I flashed the Asus 480W BIOS but then my software wouldn't read the clocks/wattage, so I just flashed back to stock.


----------



## VickyBeaver

rhyno said:


> Wish I could share my vBIOS; my FTW3 3090 card hits 469 watts with it, and I'm on here looking for a way to get more without shunt mods haha. I flashed the Asus 480W BIOS but then my software wouldn't read the clocks/wattage, so I just flashed back to stock.


What is stopping you from sharing it? Just save a backup of what you currently have.


----------



## Stampede

Zurv said:


> I'm having an odd issue with my 3090 and eARC now.. It keeps cutting in and out but i deal with that another time. Maybe the TV doesn't like 120hz eARC 'n stuff. (It isn't bandwidth or the cable.. as video would be more picky than audio.) _shrug_... problem for tomorrow. (it was fine with the 2080ti.)
> ... maybe i should do a clean driver install too.


Hi there, long time lurker, first time poster.

I have exactly the same issue and posted a temp fix on AVSForum. Run an HDMI cable direct to the TV, and then an HDMI cable (may need a DP-to-HDMI converter) from the PC direct to the AVR. It works, but you end up with multi-monitor issues.

If you don't care about Dolby Atmos/DTS:X, you can use the Nvidia HD audio driver and set speakers to 7.1 and eARC will work. The MS HD audio driver doesn't work at all.

Note the cut-outs are a known issue for the LG C9 when using Dolby Digital Plus, as the Nvidia driver uses DD+ for Atmos.

hope this helps.


----------



## bmgjet

olrdtg said:


> Your calculator seems to only be set up for calculating stacking resistors, correct? Is there any way you can set this thing up to also calculate for resistor replacement?


Tick the replacement checkbox.
Then the right-hand box will be the shunt you're replacing the original with.


----------



## Nitemare3219

Feels like a dumb question... why isn't my user defined fan curve working for my 3090 FE with MSI Afterburner 4.6.3 beta 2? I've clicked every button and option every which way for fan control that I can think of. Doesn't seem like the fan stops at idle anymore either when set to default (at least I think it used to do this...?)


----------



## AdamK47

Maybe I missed it, but does the STRIX BIOS work on the MSI Gaming X Trio?


----------



## rhyno

VickyBeaver said:


> What is stopping you from sharing it? Just save a backup of what you currently have.


can i just post it here ? ftw33090stock.rom


----------



## AdamK47

AdamK47 said:


> Maybe I missed it, but does the STRIX BIOS work on the MSI Gaming X Trio?


Nevermind. I did miss it. It was posted 6 days ago. Got the STRIX BIOS on my Gaming X Trio. Going to play around with it now.


----------



## l88bastar

Nitemare3219 said:


> We found a temporary fix for the C9 on avsforums. May work for CX as well.
> 
> NVCP: Enable G-Sync, then scroll down ... "3. Display Specific Settings" - do not check "enable settings for the selected display model"
> Windows display settings > scroll down > graphics settings > Variable refresh rate set to ON.
> 
> Screen tearing gone, seems to work almost as well as G-Sync. I may be noticing a stutter or two, hard to say yet. But it definitely stops tearing in game & the pendulum demo.


THANK YOU!!! Works on 48cx with 3090 FE!


----------



## HyperMatrix

bmgjet said:


> Tick the replacement checkbox.
> Then the right-hand box will be the shunt you're replacing the original with.


Noticed with the calculator that it's calculating a variable PCIe power draw based off of TDP even without any shunt modding. For example the ROG Strix shows that it would be pulling 91.2W from PCIe at stock 480W TDP while the FE would be pulling 71.2W at stock 375W TDP. Have we seen this behavior with the cards? I assumed outside of shunting the pcie port, the PCIe port power draw would be limited to 70-75W and the rest would go through your power cables.


----------



## bmgjet

That's just a guess at what the power split would be.
The 2-plug split should be spot on, since I have a 2-plug card and tested from min to max power slider target and worked out the average power split,
which is 20.5% slot, 39.5% each plug.

I don't have a 3-plug card so can't confirm the power split on those,
so I just watched some YouTube videos, and it looked like the power split on average was around
19% slot, 27% each plug.

But of course with the shunts unmodded, it would power limit if the PCIe slot tries pulling more than 75W.
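Those measured splits can be turned into a rough per-rail estimate in Python (the percentages are bmgjet's empirical averages from above, not a specification; the 75W ceiling is the PCIe slot limit):

```python
# Rough per-rail power estimate from the average splits measured above.
# slot_frac is the PCIe slot's share; the remainder is divided evenly
# across the 8-pin plugs. These ratios are empirical, not a spec.
PCIE_SLOT_LIMIT_W = 75.0

def rail_draw(total_w: float, slot_frac: float, n_plugs: int):
    """Return (slot_watts, per_plug_watts) for a given total board power."""
    per_plug = total_w * (1.0 - slot_frac) / n_plugs
    return total_w * slot_frac, per_plug

slot_w, plug_w = rail_draw(390, 0.205, 2)  # 2-plug card, 20.5% slot share
print(f"slot {slot_w:.1f}W, each 8-pin {plug_w:.1f}W")
if slot_w > PCIE_SLOT_LIMIT_W:
    print("slot estimate exceeds 75W -- an unmodded card would power-limit first")
```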


----------



## Spiriva

Zurv said:


> EVGA will sell you the XOC bios... with the Kingpin
> 
> hrmmm.. did anyone ever try putting the 2080ti kingpin xoc bios on another card?
> 
> (and yes, i already did semi break my 3090 FE that i just got today. ***, the logo power cable - it really didn't want to come out. Sooo.. it isn't connected to the PCB anymore. )
> (i just wanted to change the paste ;P )
> (first game.. hades! yeah... real demanding.. haha...)
> 
> oh, note to anyone using an LG OLED (i'm using the c9) - yes yes, gsync is broken till firmware fix. But to get the 120hz you need to go to the PC rez options. (which i'm sure everyone already knew.)


I used the XOC bios on my 2080ti FE, i had it flashed on my 2080ti as soon as it was uploaded.


----------



## Cavokk

AdamK47 said:


> Nevermind. I did miss it. It was posted 6 days ago. Got the STRIX BIOS on my Gaming X Trio. Going to play around with it now.


Yes indeed, it works very well. The only caveat so far is that the middle DisplayPort stops working. I have personally flashed back to the standard Trio BIOS until my waterblock arrives. Currently experimenting a lot with undervolting: getting 3% more performance than stock settings but with 60-70W lower power draw. Smooth as hell in games and benchmarks; no performance cap, hence a very stable clock.

C


----------



## dr/owned

Anyone want to guinea pig a solderless/reversible shunt mod? Tweezers still required unless you have 5-year-old size fingers.


Solder added to bottom for thickness, flattened to make good contact with existing shunt resistors, and then kapton taped down (in this picture) to a copper base to verify resistor pads still making electrical contact. 

Resistors should be the same size (US: 2512) as these on the 30 series TUF and many others:


*Usual caveats of US only, no guarantees, you break you buy, etc.*


----------



## torqueroll

Cavokk said:


> Yes indeed, it works very well. The only caveat so far is that the middle DisplayPort stops working. I have personally flashed back to the standard Trio BIOS until my waterblock arrives. Currently experimenting a lot with undervolting: getting 3% more performance than stock settings but with 60-70W lower power draw. Smooth as hell in games and benchmarks; no performance cap, hence a very stable clock.
> 
> C


Nice. It's amazing how much cooler and quieter the card runs when undervolting. I've been playing with undervolting as well recently. I've locked mine atm at 0.85V at +240 for 1980MHz. Power draw varies between 250W and 300W. Now the card is extremely silent as well, fans running at 1000-1100 rpm.

When I get my water block I'll probably max out power limit as I'll be running 2x480 rads which should still be silent at a big power draw. Also winter is coming.


----------



## bmgjet

Personally, if I wasn't soldering them on, I'd be using something more like a tiny dot of super glue to hold them down over Kapton tape and some conductive paint on the contacts.
But I'd still be worried about them coming off over time and shorting out something else.


----------



## nievz

torqueroll said:


> Nice. It's amazing how much cooler and quieter the card runs when undervolting. I've been playing with undervolting as well recently. I've locked mine atm at 0.85V at +240 for 1980MHz. Power draw varies between 250W and 300W. Now the card is extremely silent as well, fans running at 1000-1100 rpm.
> 
> When I get my water block I'll probably max out power limit as I'll be running 2x480 rads which should still be silent at a big power draw. Also winter is coming.


Great result! Can you take a screenshot of your AB curve mate?


----------



## TK421

Where's the 800w FTW3 bios?


----------



## psychrage

I gave in to ordering the Black POM Bykski FE block on AliExpress. Hopefully it shows up soon.


----------



## devilhead

psychrage said:


> I gave in to ordering the Black POM Bykski FE block on AliExpress. Hopefully it shows up soon.


same here, ordered today from them the block for 3090 FE


----------



## Spiriva

psychrage said:


> I gave in to ordering the Black POM Bykski FE block on AliExpress. Hopefully it shows up soon.





devilhead said:


> same here, ordered today from them the block for 3090 FE


Wasn't it Bykski who messed up the size between the 3080 and 3090, and used the same size on them both? The 3090 FE card is bigger than the 3080 FE.


----------



## Wrathier

originxt said:


> View attachment 2461060
> 
> 
> There you go bud. YMMV cause my temps are hot garbage since my build isn't very air cooling friendly. It applied a 50+ memory oc and a wack curve. Also had 3 errors trying to even start the thing.
> 
> Initial setup: Max PL, Max Fans.


So basically Afterburner doesn’t have great support for it yet?


----------



## BigMack70

So my FE missed delivery and went to "pending" yesterday... shows as stuck in Osseo. I gather from reddit that plenty of other folks are in the same position.

Anyone have experience dealing with this before? Does it speed up the process if you start bugging customer support, or is it best to just save your breath and wait it out a few days?


----------



## slopokdave

Spiriva said:


> Wasn't it Bykski who messed up the size between the 3080 and 3090, and used the same size on them both? The 3090 FE card is bigger than the 3080 FE.


I had read there were issues, but couldn't find anything about it. Maybe that's it....



BigMack70 said:


> So my FE missed delivery and went to "pending" yesterday... shows as stuck in Osseo. I gather from reddit that plenty of other folks are in the same position.
> 
> Anyone have experience dealing with this before? Does it speed up the process if you start bugging customer support, or is it best to just save your breath and wait it out a few days?


FedEx: You must be new here....


----------



## BigMack70

slopokdave said:


> FedEx: You must be new here....


More or less what I'm afraid of... I'm not super interested in spending ages on hold to get through to a useless minimum wage service rep who tells me "_JuSt ChEcK tHe StAtuS pAgE DuMmY_"


----------



## devilhead

Spiriva said:


> Wasn't it Bykski who messed up the size between the 3080 and 3090, and used the same size on them both? The 3090 FE card is bigger than the 3080 FE.


After ordering, they asked me which card I have, so my 3090 FE is OK.


----------



## Wrathier

torqueroll said:


> Nice. It's amazing how much cooler and quieter the card runs when undervolting. I've been playing with undervolting as well recently. I've locked mine atm at 0.85V at +240 for 1980MHz. Power draw varies between 250W and 300W. Now the card is extremely silent as well, fans running at 1000-1100 rpm.
> 
> When I get my water block I'll probably max out power limit as I'll be running 2x480 rads which should still be silent at a big power draw. Also winter is coming.


I didn't buy a Gigabyte Gaming OC to undervolt. Are these cards very noisy during use?


----------



## nievz

This is my slightly undervolted curve. With this setting, I still hit 475W in Port Royal, 450W in RDR2, and 420W in Warzone. It has higher efficiency and more performance compared to if I ran it at 2040MHz.


----------



## Baasha

BigMack70 said:


> So my FE missed delivery and went to "pending" yesterday... shows as stuck in Osseo. I gather from reddit that plenty of other folks are in the same position.
> 
> Anyone have experience dealing with this before? Does it speed up the process if you start bugging customer support, or is it best to just save your breath and wait it out a few days?


That's strange. I ordered the NVLink bridge and have the same thing! It's stuck in Osseo, MN and there's no update since 10/02/20. Ugh...Then again, I can wait since I don't have the GPUs yet. lol


----------



## Thoth420

Anyone else really hoping we see a Gigabyte Vision model for the 3090? I chose the B550 Vision D because of how incredible it looks and the card matches it so perfectly. It looks so adult in contrast to most of the other AIB cards.


----------



## psychrage

Spiriva said:


> Wasn't it Bykski who messed up the size between the 3080 and 3090, and used the same size on them both? The 3090 FE card is bigger than the 3080 FE.





devilhead said:


> After ordering, they asked me which card I have, so my 3090 FE is OK.



I ordered this one - US $45.47 20% OFF|Bykski GPU Water Block For NVIDIA RTX 3090 3080 Founders Original PCB Radiator + Backplate 5V/12V MB SYNC N RTX3090FE X|Fans & Cooling| - AliExpress. The title description calls out the 3080 as well as the 3090, but everything else on the page only calls out the 3090 FE. Bykski also don't seem to have another SKU specific to the 3080 FE.


The dimensions don't line up with the rough dimensions of my card, though. I measure from just above the PCIe slot to the top of the factory heatsink at ~120mm; they show a total height of the block at 127mm. The 186.2mm length looks to be pretty close. Not going to tear my card down to confirm that. I know the top part with the inlet/outlet is at least 30mm. Now I'm nervous...

I wonder if they've made a revision... And being that I didn't order direct from Bykski, am I getting said revision?.... gah!!!


----------



## slopokdave

I've seen some Aorus Master 3080s in the wild, but no 3090s yet. If you spot someone with one, kindly ask them to do a vbios dump please.


----------



## Wrathier

nievz said:


> This is my slightly undervolted curve. With this setting, I still hit 475W in Port Royal, 450W in RDR2, and 420W in Warzone. It has higher efficiency and more performance compared to if I ran it at 2040MHz.
> 
> View attachment 2461137


So how much do you undervolt by?

Is it needed for anything other than benchmarks, or does the card work fine out of the box?


----------



## nievz

Wrathier said:


> So how much do you undervolt by?
> 
> Is it needed for anything other than benchmarks, or does the card work fine out of the box?


It's now using 50mV less than stock across the frequency range. My clock stays at 2010MHz while gaming unless I hit close to the power limit.

Absolutely, the card will work fine out of the box. Undervolting just lowers the temperature and power usage.


----------



## DrunknFoo

bmgjet said:


> Personally, if I wasn't soldering them on, I'd be using something more like a tiny dot of super glue to hold them down over Kapton tape and some conductive paint on the contacts.
> But I'd still be worried about them coming off over time and shorting out something else.


Look into solder paste.

Literally a 1mm dab on each contact (you can even use a toothpick); less than a second with the iron and it's done. To remove, while gently pulling at an angle, touch the contact for less than a second with the iron.

Easy to apply and remove, zero chance to **** up if you've got some soldering know-how.


----------



## Wrathier

nievz said:


> It's now using 50mV less than stock across the frequency range. My clock stays at 2010MHz while gaming unless I hit close to the power limit.
> 
> Absolutely, the card will work fine out of the box. Undervolting just lowers the temperature and power usage.


So if I don't undervolt I don't get more FPS fluctuations or anything like it? - It's only for less power usage and temperatures? With 4 years warranty I don't care about the temperatures honestly.


----------



## kx11

Alright, finally got the Galax 3090 SG installed, and I think it's actually cooling very well with that 4th fan, but it's ugly.


You can see that I had to use the lower PCIe slot (3rd one) to avoid the 4th fan hitting the RAM sticks and DIMM.2 slot. Other than the 4th-fan installation issues, this card is not that big; I bet it's a lot smaller than the FE 3090. My case is an O11D XL btw.


----------



## Benni231990

What is the maximum power limit on your card?


----------



## Wrathier

kx11 said:


> Alright, finally got the Galax 3090 SG installed, and I think it's actually cooling very well with that 4th fan, but it's ugly.
> 
> View attachment 2461158
> 
> You can see that I had to use the lower PCIe slot (3rd one) to avoid the 4th fan hitting the RAM sticks and DIMM.2 slot. Other than the 4th-fan installation issues, this card is not that big; I bet it's a lot smaller than the FE 3090. My case is an O11D XL btw.


I would get rid of that top fan ASAP.


----------



## kx11

Benni231990 said:


> What is the maximum power limit on your card?


100% , 352W


----------



## Wrathier

kx11 said:


> 100%


He means how many watts your card can draw, i.e. 480W like the ROG Strix OC, for example.


----------



## kx11

Wrathier said:


> He means how many watts your card can draw, i.e. 480W like the ROG Strix OC, for example.


That would be 352W, the maximum power draw in Port Royal.

This is my score: core +150 (didn't do much for core clock in the benchmark), mem +1200.


https://www.3dmark.com/pr/372731


----------



## Sheyster

Hey guys, does anyone know for sure if the Zotac Trinity 3090 has build quality issues and tends to CTD under full load?


----------



## Wrathier

Double post.


----------



## Wrathier

Sheyster said:


> Hey guys, does anyone know for sure if the Zotac Trinity 3090 has build quality issues and tends to CTD under full load?


Sorted - Same goes for nearly if not all cards - ZOTAC Releases Statement on GeForce RTX 3080 Crash-to-Desktop Issues


----------



## Sheyster

Wrathier said:


> Sorted - Same goes for nearly if not all cards - ZOTAC Releases Statement on GeForce RTX 3080 Crash-to-Desktop Issues


Word is they used POSCAP's exclusively, which isn't good. That statement doesn't address the 3090 or that specific issue.


----------



## ScottRoberts91

Can someone please upload the Zotac 3090 Trinity BIOS? I have just mistakenly overwritten my backup like an idiot.


----------



## cdnGhost

Sheyster said:


> Word is they used POSCAP's exclusively, which isn't good. That statement doesn't address the 3090 or that specific issue.


According to Buildzoid:
"THESE ARE NOT POSCAPs they are aluminum polymer SMDs. Don't call the freaking capacitators POSCAPs just because they are in an SMD package".... lol. Best quote from his video.
Apparently they are SP-Caps, FYI...

----------



## Wrathier

.


----------



## Wrathier

Sheyster said:


> Word is they used POSCAP's exclusively, which isn't good. That statement doesn't address the 3090 or that specific issue.


Yes, and it's not POSCAPs. However, my Gigabyte 3090 Gaming OC also uses 6 SP-Caps and it isn't an issue; it's a design preference. If you worry about ZOTAC, I don't, and can relieve you of your card cheap, or you can choose another brand.

And you asked about CTD issues, and the post from ZOTAC is about CTD issues.

I boycotted my favorite brand EVGA for riding the hype and claiming: We will change design bla bla bla for this reason.


----------



## slopokdave

So I've had stability/clock issues since I got my 3090 TUF.... guess what? I only had ONE 8-PIN PLUGGED IN! I KNEW this was a no no. I didn't realize it; I have 2 sleeved extensions going to GPU so it looks like 2 separate cables, didn't even realize when I put together the case (3+ months ago) I only ever ran a single 8-pin. DOH. 

BTW, now I see Gigabyte Gaming bios go to 400W.


----------



## Nitemare3219

Here's my curve that OC Scanner came up with on its own. Seems stable, but didn't score much better in Port Royal (about 25 pts) than me just pushing core clock to +130 and going with stock voltage. Memory set to +250. FE card. Still looking for an answer why my custom fan curve in Afterburner won't work.


----------



## HyperMatrix

Wrathier said:


> Yes, and it's not POSCAPs. However, my Gigabyte 3090 Gaming OC also uses 6 SP-Caps and it isn't an issue. It's a design preference. If you worry about ZOTAC, I don't, and I can relieve you of your card cheap, or you can choose another brand.
> 
> And you asked about CTD issues, and the post from ZOTAC is about CTD issues.
> 
> I boycotted my favorite brand EVGA for riding the hype and claiming "we will change the design" bla bla bla for this reason.


You boycotted EVGA and bought an inferior product? ASUS did the same thing after they realized having 6x SP-Caps would on average provide lower overclocking potential and an increased likelihood of stability issues. Does this affect every single card? No. But it's a substantially increased risk of it happening.

I understand the psychology behind trying to defend your purchase in order to put your own mind at ease, but please be objective in the advice you give to others. While you're not guaranteed to have a bad experience with 6x SP-Caps on a 3090, the likelihood of it is high, so if you're planning to overclock to 2GHz and above, it's recommended to go for a better-designed card to give you a better chance of reaching and maintaining those clocks.

Derbauer also did a test where he took a 6x SP-Cap card and replaced just 2 of the SP-Caps with MLCC arrays, and ended up getting higher and more stable clock speeds out of the card. So again...please don't give people false information. I'm glad you're happy with your purchase. But from what we've seen from reviews and tests across the industry, including from AIBs, you'd have been even happier if you had some MLCCs on your card.


----------



## kx11




----------



## HyperMatrix

kx11 said:


>


Why's that core clock dropping to 1665MHz during the bench? It's weird because the GPU is at 100% usage but power is just at 88%, so it's not being power-limited. And temps are great at 55C.


----------



## Wrathier

HyperMatrix said:


> You boycotted EVGA and bought an inferior product? ASUS did the same thing after they realized having 6x SP-Caps would on average provide lower overclocking potential and an increased likelihood of stability issues. Does this affect every single card? No. But it's a substantially increased risk of it happening.
> 
> I understand the psychology behind trying to defend your purchase in order to put your own mind at ease, but please be objective in the advice you give to others. While you're not guaranteed to have a bad experience with 6x SP-Caps on a 3090, the likelihood of it is high, so if you're planning to overclock to 2GHz and above, it's recommended to go for a better-designed card to give you a better chance of reaching and maintaining those clocks.
> 
> Derbauer also did a test where he took a 6x SP-Cap card and replaced just 2 of the SP-Caps with MLCC arrays, and ended up getting higher and more stable clock speeds out of the card. So again...please don't give people false information. I'm glad you're happy with your purchase. But from what we've seen from reviews and tests across the industry, including from AIBs, you'd have been even happier if you had some MLCCs on your card.


Yes I did - and of course also a bit due to their cards looking garbage this time around.

Throw out some links etc. for the claims you are boasting so widely, because what you are saying is absolutely not correct. As I will only use my card on air, I have no interest in custom watercooling anymore; I've done a lot of that and it's just not needed nor fun anymore, so I don't need 2200MHz to be honest.

As the price difference between the Gigabyte Gaming OC and an EVGA (except the KINGPIN) is really not of any concern, and the out-of-the-box performance isn't really worth talking about on a power/performance ratio, I decided to go with Gigabyte. Of course mostly due to availability, but EVGA was a no-go as soon as they started lying.

I'm a moderator at COMPUTER ENTHUSIAST MASTER RACE on Facebook - we are currently 71 thousand members and you are all welcome to join, a little commercial here lol - but this BS has been thrown around a lot, and after NVIDIA optimised their driver no issues have been reported.

And even if there were any issues, it's not really that hard to get a new card. Maybe if you go with ASUS, as their RMA is ...., but for all other brands, including ZOTAC, it's pretty easy.

Here is the BS from EVGA that pissed me off: Message about EVGA GeForce RTX 3080 POSCAPs - EVGA Forums

They are NOT EVEN USING POSCAPs; this is all a marketing stunt to sell their cards. SP-Caps even cost more than MLCC capacitors - the whole thing is more complicated than most people are aware of, especially JayzTwoCents - so let's not start this whole discussion here. I haven't received my card yet; I will get it in a week or so. If I'm not satisfied, well, I will replace it, not start making posts claiming stuff I know absolutely 0 about.

I don't exactly know what Gigabyte's marketing smoked in their press release, but at least we are getting some numbers on the table at the bottom of their statement:

Statement Regarding the SP-CAP and MLCC Capacitor on GeForce RTX 3080 Graphics Cards | News - GIGABYTE Global (still not POSCAP; guess some sort of POSCAP flu went around for a second there, panicking half the Internet).

Regarding "inferior product": LMAO  You truly gotta be a fanboy to not realise that the real differences are the looks, the cooler etc., and not the chip itself.

Regarding Der8auer: he hasn't made any videos since the release of the NVIDIA driver proving anything is better. GALAX, not EVGA, holds the world record for the highest-overclocked 3080 on LN2.









TecLab overclocks GALAX GeForce RTX 3080 SG to 2340 MHz and breaks the world record - VideoCardz.com




----------



## HyperMatrix

Wrathier said:


> Yes I did - and of course also a bit due to their cards looking garbage this time around.
> 
> Throw out some links etc. for the claims you are boasting so widely, because what you are saying is absolutely not correct. As I will only use my card on air, I have no interest in custom watercooling anymore; I've done a lot of that and it's just not needed nor fun anymore, so I don't need 2200MHz to be honest.
> 
> As the price difference between the Gigabyte Gaming OC and an EVGA (except the KINGPIN) is really not of any concern, and the out-of-the-box performance isn't really worth talking about on a power/performance ratio, I decided to go with Gigabyte. Of course mostly due to availability, but EVGA was a no-go as soon as they started lying.
> 
> I'm a moderator at COMPUTER ENTHUSIAST MASTER RACE on Facebook - we are currently 71 thousand members and you are all welcome to join, a little commercial here lol - but this BS has been thrown around a lot, and after NVIDIA optimised their driver no issues have been reported.
> 
> And even if there were any issues, it's not really that hard to get a new card. Maybe if you go with ASUS, as their RMA is ...., but for all other brands, including ZOTAC, it's pretty easy.


Not going to go searching for the 10+ videos and articles I've read and watched on this. That's what Google is for. But here's Der8auer swapping out 2x SP-Caps for 20x MLCCs:






And just an FYI...confidence in your beliefs without evidence is called arrogance. It's adversarial in nature. And it's not helpful if you're trying to learn something. And if you're not here to share and to learn, then you're in the wrong place.


----------



## Sheyster

Wrathier said:


> Yes, and it's not POSCAPs. However, my Gigabyte 3090 Gaming OC also uses 6 SP-Caps and it isn't an issue. It's a design preference. If you worry about ZOTAC, I don't, and I can relieve you of your card cheap, or you can choose another brand.
> 
> And you asked about CTD issues, and the post from ZOTAC is about CTD issues.
> 
> I boycotted my favorite brand EVGA for riding the hype and claiming "we will change the design" bla bla bla for this reason.


All I'm trying to do is make an educated buying decision based on what I'd heard about the card. Don't take it personally! My only other concern about the Zotac 3090 is the fully locked-down power limit of the BIOS (350W locked).


----------



## kx11

HyperMatrix said:


> Why's that core clock dropping to 1665MHz during the bench? It's weird because GPU is at 100% usage, but power just at 88% so it's not being power limited. And temps are great at 55C.
> 
> View attachment 2461182


not sure about that, maybe i OC the core too much?!


----------



## Sheyster

cdnGhost said:


> According to Buildzoid
> "THESE ARE NOT POSCAPs, they are aluminum polymer SMDs. Don't call the freaking capacitors POSCAPs just because they are in an SMD package".... lol. Best quote from his video.
> Apparently they are SP-Caps, FYI...


LOL, thanks!


----------



## HyperMatrix

kx11 said:


> not sure about that, maybe i OC the core too much?!


Shouldn't result in such a huge drop. There's some other issue happening here. Can you run the bench again with GPU-Z running so you can check the PerfCap reason under the sensors tab to see why your card is being held back? If it's not a thermal issue, or power limit issue, it would have to be a voltage or voltage reliability issue. But why would it have such an issue that it would drop down all the way to 1665MHz? What card do you have? And are you currently undervolting the card?


----------



## Wrathier

Sheyster said:


> All I'm trying to do is make an educated buying decision based on what I had heard about the card. Don't take it personally! My only other concern about the Zotac 3090 card is the fully locked down power limit of the BIOS (350W locked).


On page 1 you will find a BIOS for the Gigabyte Gaming OC. That has a limit of 390W. We are still waiting for the 400W one that will come with the Master.


----------



## Wrathier

HyperMatrix said:


> Not going to go searching for the 10+ videos and articles I've read and watched on this. That's what Google is for. But here's Der8auer swapping out 2x SP-Caps for 20x MLCCs:
> 
> 
> 
> 
> 
> 
> And just an FYI...confidence in your beliefs without evidence is called arrogance. It's adversarial in nature. And it's not helpful if you're trying to learn something. And if you're not here to share and to learn, then you're in the wrong place.


And already - I've seen the video, most likely many, many more videos than most - the title itself is so misleading that I think Der8auer is still slapping himself hard every morning.


----------



## kx11

HyperMatrix said:


> Shouldn't result in such a huge drop. There's some other issue happening here. Can you run the bench again with GPU-Z running so you can check the PerfCap reason under the sensors tab to see why your card is being held back? If it's not a thermal issue, or power limit issue, it would have to be a voltage or voltage reliability issue. But why would it have such an issue that it would drop down all the way to 1665MHz? What card do you have? And are you currently undervolting the card?


yeah it was power most of the benchmark


----------



## kx11

more benchmarks


----------



## HyperMatrix

Wrathier said:


> And already - I've seen the video, most likely many, many more videos than most - the title itself is so misleading that I think Der8auer is still slapping himself hard every morning.


No offense but if I had to pick between Derbauer with video evidence and someone defending a cheaply made card with 6x SP-Caps that at least 2 manufacturers moved away from after failing internal QC testing, it's not even a contest. You're providing no evidence to support your claim, and are dismissing all evidence that proves you wrong. And you expect to be taken seriously.


----------



## dr/owned

HyperMatrix said:


> Not going to go searching for the 10+ videos and articles I've read and watched on this. That's what Google is for. But here's Der8auer swapping out 2x SP-Caps for 20x MLCCs:
> 
> 
> 
> 
> 
> 
> And just an FYI...confidence in your beliefs without evidence is called arrogance. It's adversarial in nature. And it's not helpful if you're trying to learn something. And if you're not here to share and to learn, then you're in the wrong place.


"that guy" but holy hell der8auer needs some better soldering equipment. He's trying to do surgery with a chain saw.


----------



## HyperMatrix

kx11 said:


> more benchmarks



There's way too much throttling happening here with you dropping into the 1700s. What card do you have? Have you tried flashing with a higher powered bios?


----------



## kx11

HyperMatrix said:


> There's way too much throttling happening here with you dropping into the 1700s. What card do you have? Have you tried flashing with a higher powered bios?


it should be the OC offset in MSI AB, which is now +100 core / +500 mem

i should drop the core down to +40 to stabilize the core clock more


----------



## HyperMatrix

dr/owned said:


> "that guy" but holy hell der8auer needs some better soldering equipment. He's trying to do surgery with a chain saw.


Haha yeah he's taken a lot of heat for his old soldering iron. And his love of excessive flux use. With the type of work he does, I'm surprised he doesn't have a dual-head tweezer style SMD soldering iron. I don't do nearly as much work as he does and I picked one up a few years ago.


----------



## AdamK47

Got my power limit up with the STRIX BIOS on my MSI Gaming X Trio. Just did a couple hours of stress testing with +100 voltage, +123 Power Limit, +75 core, and +650 memory.


----------



## HyperMatrix

AdamK47 said:


> Got my power limit up with the STRIX BIOS on my MSI Gaming X Trio. Just did a couple hours of stress testing with +100 voltage, +123 Power Limit, +75 core, and +650 memory.


Nice work man. Looking at 480W required for 2010MHz under load makes me think I'll have to shunt mod my ROG Strix whenever/if ever it shows up. And then wondering how much power a single card will require to get 2.2GHz stable under load. And then wondering how many more radiators I'll need to add for what will probably be a 700W+ card. And then wondering how much more my portable AC unit will have to work to keep the room cool with this card on. And then wondering how I went from a 350W TDP card to 1900W (700W card + 1200W AC Unit) for an extra 10-15% performance. I'm currently questioning my own rationality and general life priorities.
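The back-of-envelope math above can be sketched out. All numbers below are the poster's rough estimates (350W stock TDP, 700W shunt-modded card, 1200W AC unit, +15% performance), not measurements:

```python
# Back-of-envelope sketch of the power-vs-performance trade-off described above.
# Figures are rough forum estimates, not measurements.

def perf_per_watt_ratio(stock_w: float, total_w: float, perf_gain: float) -> float:
    """Relative perf/W of the modded setup vs stock; perf_gain is fractional (0.15 == +15%)."""
    stock_eff = 1.0 / stock_w
    modded_eff = (1.0 + perf_gain) / total_w
    return modded_eff / stock_eff

stock = 350            # stock TDP, watts
total = 700 + 1200     # shunt-modded card plus portable AC unit, watts
gain = 0.15            # optimistic +15% performance

print(f"Total draw: {total} W")
print(f"Perf/W vs stock: {perf_per_watt_ratio(stock, total, gain):.2f}x")
```

Under these assumptions the whole setup drops to roughly a fifth of stock efficiency for that last 10-15%, which is the poster's point.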


----------



## originxt

HyperMatrix said:


> Nice work man. Looking at 480W required for 2010MHz under load makes me think I'll have to shunt mod my ROG Strix whenever/if ever it shows up. And then wondering how much power a single card will require to get 2.2GHz stable under load. And then wondering how many more radiators I'll need to add for what will probably be a 700W+ card. And then wondering how much more my portable AC unit will have to work to keep the room cool with this card on. And then wondering how I went from a 350W TDP card to 1900W (700W card + 1200W AC Unit) for an extra 10-15% performance. I'm currently questioning my own rationality and general life priorities.


I don't think you need to go too extreme... But this is overclock.net so I guess go for whatever you want lol 😆


----------



## J7SC

dr/owned said:


> "that guy" but holy hell der8auer needs some better soldering equipment. He's trying to do surgery with a chain saw.


...I just watched DerBauer accidentally wreck a TRX40 socket in his latest vid...ouch. But he knows what he's talking about re. power mods


----------



## iamjanco

J7SC said:


> ...I just watched DerBauer accidentally wreck a TRX40 socket in his latest vid...ouch.


"Scheiße"

I listened closely, but I didn't hear it.

(bet you he was thinking it though)


----------



## kot0005

kx11 said:


> more benchmarks


Avengers is poorly optimised, doesn't even look that great, and only runs around 70fps at 4K..


----------



## kot0005

https://www.reddit.com/r/hardware/comments/j6idky

With this I just hope MSI Afterburner still gets updates..


Also, did anyone install a proper block on their 3090? How are the backside VRAM temps with just a passive backplate? It needs to be under load for at least 40 mins.


----------



## Wrathier

HyperMatrix said:


> No offense but if I had to pick between Derbauer with video evidence and someone defending a cheaply made card with 6x SP-Caps that at least 2 manufacturers moved away from after failing internal QC testing, it's not even a contest. You're providing no evidence to support your claim, and are dismissing all evidence that proves you wrong. And you expect to be taken seriously.


Of course I do. If this were a widespread issue after the new NVIDIA driver, I would know.

All your talk about inferior cards is just freaking stupid. EVGA is not superior in any way; do a Google search lol.

Who besides EVGA claims they will change the layout of their cards? Brands, please.


----------



## HyperMatrix

Wrathier said:


> Of course I do. If this were a widespread issue after the new NVIDIA driver, I would know.
> 
> All your talk about inferior cards is just freaking stupid. EVGA is not superior in any way; do a Google search lol.
> 
> Who besides EVGA claims they will change the layout of their cards? Brands, please.


Literally already told you ASUS did the same thing. They even went further: they switched to full MLCCs from their original full SP-Cap design. They also put out a notice about the reason they did it, which is indeed related to higher and more stable overclocking. So the 2 best brands did it. And surprise surprise...they also have the highest-TDP cards. But of course you know better than me, the AIBs, Der8auer, Gamers Nexus, JayzTwoCents, and everyone else. Why? Because you have a Facebook group. Grats. They may be impressed by your BS. But I assure you no one here is.


----------



## Wrathier

HyperMatrix said:


> Literally already told you ASUS did the same thing. They even went further: they switched to full MLCCs from their original full SP-Cap design. They also put out a notice about the reason they did it, which is indeed related to higher and more stable overclocking. So the 2 best brands did it. And surprise surprise...they also have the highest-TDP cards. But of course you know better than me, the AIBs, Der8auer, Gamers Nexus, JayzTwoCents, and everyone else. Why? Because you have a Facebook group. Grats. They may be impressed by your BS. But I assure you no one here is.


They have 3 freaking 8-pins, so what exactly is the comparison? ASUS was stupid as well, as going with only MLCCs has its pros/cons. Personally I think everyone should have gone with 4 SP-Caps and 2 MLCC arrays.

I love ASUS, but in Sweden no cards are coming until January, and personally I don't think they are so much better that I'm going to wait that long for a card drawing more power for a 50MHz OC.

I don't think you've ever owned a Gigabyte card before, but I had a 2080 Ti from them, and I don't think they are inferior to my EVGA or ASUS cards. They are not superior either; that's not what I'm saying.

My inferior Time Spy score with the Gigabyte 2080 Ti:



https://www.3dmark.com/spy/12208865



On Friday I'll get the new card. As I don't own 3DMark's paid version and am only a free user, I'll post my inferior results from that card then.

I would like to see your Time Spy result, as I'm pretty confident that the difference won't be that massive.


----------



## Spiriva

EK shipped the waterblock/backplate for the 3090 (PNY / reference) to me today. Hopefully I can get the card under water soon.

The 390W BIOS seems to work fine for now, but hopefully the 400W one from the Gigabyte 3090 AORUS will be out soon. So far I've only seen the 3080 version of that model tho.


----------



## Wrathier

Spiriva said:


> EK shipped the waterblock/backplate for the 3090 (PNY / reference) to me today. Hopefully I can get the card under water soon.
> 
> The 390W BIOS seems to work fine for now, but hopefully the 400W one from the Gigabyte 3090 AORUS will be out soon. So far I've only seen the 3080 version of that model tho.


I don’t understand the need to put it under water, but hopefully it works out for you. 😀


----------



## Cavokk

Igor from Igor's Lab has done an article on undervolting the 3090 - pretty much in line with my own findings (and results on my 3090 X Trio at 800mv and 850mv). However, torqueroll on this forum achieves a 1980Mhz stable clock on his Founders Edition at 850mv, which implies that he must have a golden sample. According to Igor, his Asus TUF 3090 at 1785Mhz [email protected] is a very good chip - a .001 bin. Luckily I am reaching a two-steps-higher clock at 800mv, so mine is probably good as well (not like torqueroll's though - that's a golden one)

C
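A rough sketch of why undervolting pays off: dynamic power scales approximately with V² × f, so a modest voltage drop saves far more power than the clock it costs. The voltage/clock pairs below are illustrative assumptions, not actual curve points from this thread:

```python
# Illustration of undervolting efficiency: dynamic GPU power ~ V^2 * f.
# Example operating points only; real V/f curves vary per chip.

def relative_power(v: float, f: float, v_ref: float, f_ref: float) -> float:
    """Dynamic power relative to a reference voltage/frequency point."""
    return (v / v_ref) ** 2 * (f / f_ref)

stock_v, stock_f = 1.05, 1905   # volts, MHz (assumed stock-ish point)
uv_v, uv_f = 0.85, 1815         # volts, MHz (assumed undervolt point)

p = relative_power(uv_v, uv_f, stock_v, stock_f)
print(f"Estimated power vs stock: {p:.0%} for {uv_f / stock_f:.0%} of the clock")
```

Under this model you give up roughly 5% clock for well over a third less power, which is the general shape of the undervolting results posters here describe.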


----------



## Wrathier

Cavokk said:


> Igor from Igors lab has done an article on undervolting the 3090 - pretty much inline with my own findings (and results on my 3090 X Trio at 800mv and 850mv) however torgueroll on this forum achieves 1980Mhz stable clock on his Founders edition at 850mv which implies that he must have a golden sample.. According to Igor His Asus TUF 3090 1785Mhz [email protected] is a very good chip - .001 bin. Luckily I am reaching two steps higher clock at 800mv so mine is probably good as well (not like torquerolls though - thats a golden one)
> 
> C


Can you share your curve and settings? I think I’ll fiddle around with undervolt as well.
Does it make the card more stable or is it mainly for power usage?


----------



## Sync0r

Has anyone flashed the Zotac 3090 Trinity to a higher power limit vbios, like the strix one?

Thanks


----------



## Cavokk

Wrathier said:


> Can you share your curve and settings? I think I’ll fiddle around with undervolt as well.
> Does it make the card more stable or is it mainly for power usage?


Sure here it is:










It's mainly for the fun of achieving better results than stock at lower power levels - my sweet spot is this [email protected], absolutely stable

This is currently my everyday profile, and I'm waiting for my waterblock to arrive for more serious overclocking on the Strix BIOS

C


----------



## twisted0ne

Anyone have an issue with their card permanently boosting on the desktop? I can close literally every process and MSI Afterburner still reports my card sitting at 1755mhz; not sure if it's a result of flashing the OC BIOS.

EDIT:

It seemed multi-monitor was causing it; for anyone with this issue, see here for the solution









Enable Power-Saving Mode on NVIDIA GPUs with Multiple Monitors






pcgamesbeat.blogspot.com


----------



## Zurv

Nitemare3219 said:


> We found a temporary fix for the C9 on avsforums. May work for CX as well.
> 
> NVCP: Enable G-Sync, then scroll down ... "3. Display Specific Settings" - do not check "enable settings for the selected display model"
> Windows display settings > scroll down > graphics settings > Variable refresh rate set to ON.
> 
> Screen tearing gone, seems to work almost as well as G-Sync. I may be noticing a stutter or two, hard to say yet. But it definitely stops tearing in game & the pendulum demo.


The fix (mostly) was the new firmware. I switched the TV to engineer mode and then a new firmware was downloaded. That fixed the G-Sync issue.
It also fixed eARC audio... mostly. In most games I tested the audio dropout was fixed, but in a few, like the new Star Wars game, the audio dropout still happened. That said, it recovered much faster.
The few remaining issues might have been fixed if I switched to PCM (vs Atmos) too. But.. that Star Wars game sucked.. so.. I didn't care.


----------



## devilhead

Cavokk said:


> Igor from Igor's Lab has done an article on undervolting the 3090 - pretty much in line with my own findings (and results on my 3090 X Trio at 800mv and 850mv). However, torqueroll on this forum achieves a 1980Mhz stable clock on his Founders Edition at 850mv, which implies that he must have a golden sample. According to Igor, his Asus TUF 3090 at 1785Mhz [email protected] is a very good chip - a .001 bin. Luckily I am reaching a two-steps-higher clock at 800mv, so mine is probably good as well (not like torqueroll's though - that's a golden one)
> 
> C


i think that's normal for Founders Edition. I ran mine at 850mv 1980mhz now (halfway through it rose to 1995mhz); looks like the FE cards are cherry-picked.


----------



## shiokarai

How much performance difference is there between, let's say, the ASUS TUF 3090 and the ASUS Strix 3090? Basically, what does +100W of OC headroom give you in terms of perf uplift? 5% at best? 3%? 10%?
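One crude way to bound the answer: if voltage scales roughly with frequency, dynamic power goes as f·V² ≈ f³, so clock (and thus roughly performance) scales with the cube root of power. The wattages below are assumptions for illustration; cards already near the top of their V/f curve gain less than this:

```python
# Crude upper-bound model: P ~ f^3 (since V rises roughly with f),
# so clock scales with the cube root of the power limit.
# Wattages are illustrative; actual TUF/Strix limits differ by SKU/BIOS.

def uplift_bound(p_old: float, p_new: float) -> float:
    """Fractional clock (roughly perf) gain for a power-limit increase."""
    return (p_new / p_old) ** (1 / 3) - 1

gain = uplift_bound(350, 450)   # assumed 350 W vs 450 W power limits
print(f"Upper bound: ~{gain:.1%} more clock for +100 W")
```

So single-digit percent at best under this model, consistent with the small in-game gains posters here report from BIOS flashes.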


----------



## dante`afk

kx11 said:


> more benchmarks


wow, I'd check to see why you are throttling so hard, 1600/1700mhz on that card is pretty bad, even a 2080Ti will be faster there.


----------



## twisted0ne

Zurv said:


> The fix (mostly) was the new firmware. I switched the TV to engineer mode and then a new firmware was downloaded. That fixed the G-Sync issue.
> It also fixed eARC audio... mostly. In most games I tested the audio dropout was fixed, but in a few, like the new Star Wars game, the audio dropout still happened. That said, it recovered much faster.
> The few remaining issues might have been fixed if I switched to PCM (vs Atmos) too. But.. that Star Wars game sucked.. so.. I didn't care.


It's been advised to wait for the normal update; the engineer firmware is versioned higher than the public one, so you may have problems getting future updates.


----------



## Zurv

The next update will be this update. I'd not worry about version numbers. If you did want to worry, worry that the pre-release firmware might have broken something (note that I've not seen any issues). You can't roll back firmware updates, so you'd be stuck till the next release.


----------



## twisted0ne

I think you've misunderstood, what I'm saying is the next update version is 4.90 but the engineer version is already higher, for example 4.95 so you'll never get public updates if they're below that.


----------



## Zurv

I've been doing this for years on LG OLEDs. (They normally test them in KR first, but the KR/US units use the same firmware. This time it is a US firmware from a US server, so it is likely closer to release than past updates I've used.)
The pre-release test version is almost always the next version. What you are describing could happen, but over the last 6 years with LG TVs, it hasn't. If you are worried, don't do it. I'm not stressing it (clearly). The eARC issue was a show-stopper, for me, with the 3090.
This update also moves to version 5 of webOS too.
If one had their TV calibrated by a professional, then maybe I'd suggest waiting. I also don't worry about this, as I have the software, pattern generator, meters, etc., and always recalibrate after each firmware update. (That said, I'll wait a few weeks to see if this is final; calibration takes days.)


----------



## jomama22

Do we know if any ASIC quality tools have been made available yet? Curious to see if there is an actual correlation for this chip or not.


----------



## Cavokk

devilhead said:


> i think that's normal for Founders Edition. I ran mine at 850mv 1980mhz now (halfway through it rose to 1995mhz); looks like the FE cards are cherry-picked.


Wow - that's amazing. I went for the FE version at first, but it instantly showed sold out in Denmark


----------



## Niju

twisted0ne said:


> Anyone have an issue with their card permanently boosting on the desktop? I can close literally every process and MSI Afterburner still reports my card sitting at 1755mhz; not sure if it's a result of flashing the OC BIOS.
> 
> EDIT:
> 
> It seemed multi-monitor was causing it; for anyone with this issue, see here for the solution
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Enable Power-Saving Mode on NVIDIA GPUs with Multiple Monitors
> 
> 
> 
> 
> 
> 
> pcgamesbeat.blogspot.com


I had the same problem when I received my EVGA XC3 Ultra yesterday, but I'm using just one monitor. I uninstalled all the NVIDIA drivers manually, as well as GeForce Experience, and reinstalled just the drivers. That fixed it for me at least.


----------



## Wrathier

Cavokk said:


> Sure here it is:
> 
> View attachment 2461238
> 
> 
> Its mainly for the fun of achieving better results than stock at lower powerlevels  - My sweetspot is this [email protected] absolute stable
> 
> This is currently my everyday profile and I wait for my waterblock to arrive for more serious overclocking on the Strix Bios
> 
> C


How did you find the sweet spot? Afterburner or?


----------



## Spiriva

New Game Ready Driver Released For The Call of Duty: Black Ops Cold War PC Beta


Get the optimal experience, enhanced with NVIDIA Reflex, in the upcoming Call of Duty: Black Ops Cold War PC beta, beginning October 15th.



www.nvidia.com





NVIDIA released new drivers today, same version (456.71) as the hotfix tho 0.o


----------



## devilhead

Cavokk said:


> Wow - thats amazing - went for the FE version at first but instantly showed sold out in Denmark


i was not fast enough to buy on the NVIDIA site too; I bought used and of course overpaid - 20 000kr. Tomorrow I'll get one more 3090 FE, cherry-pick the better one and sell the other


----------



## kot0005

Spiriva said:


> EK shipped the waterblock/backplate for the 3090 (PNY / Reference) to me today. Hopefully i can get the card under water soon
> 
> The 390w bios seems to work fine for now, but hopefully the 400w from the gigabyte 3090 aorus will be out soon. So far Ive only seen the 3080 version of that model tho.


can you let us know how the Temps on the backplate are please ?


----------



## Spiriva

kot0005 said:


> can you let us know how the Temps on the backplate are please ?


Sure thing


----------



## kx11

dante`afk said:


> wow, I'd check to see why you are throttling so hard, 1600/1700mhz on that card is pretty bad, even a 2080Ti will be faster there.


i added too much OC on the core; when i set it to just +40 it became a lot better


----------



## Cavokk

devilhead said:


> i was not fast enough to buy on nvidia site too, i bought used and ofc overpayed - 20 000kr, tomorrow will get one more 3090 FE, cherry pick better one and sell other


 noooo dont sell - go SLI


----------



## jomama22

Is anyone able to read their ASIC quality through GPU-Z?


----------



## J7SC

jomama22 said:


> Is anyone able to read their asic quality through gpu-z?


...haven't got my 3090s yet, but I doubt it...couldn't read ASIC for my 2080 Tis either, and I recall an earlier conversation with a vendor rep who said too many people had returned perfectly-functioning cards of earlier gens as 'RMA' because they had a low ASIC

...that said, looking at the shape and level of the (unmodified, original BIOS) voltage curve in MSI AB is an ASIC indicator of sorts


----------



## AngryLobster

Anyone testing actual games with these big OCs? No offense, but I don't play synthetic benchmarks and they are irrelevant to me.

Debating flashing the BIOS since my card is power-limited, but if it results in a 2FPS increase at 4K there's no point.


----------



## Wrathier

AngryLobster said:


> Anyone testing actual games with these big OC's? No offense but I don't play synthetic benchmarks and they are irrelevant to me.
> 
> Debating flashing bios since my card is limited by power but if it results in a 2FPS increase at 4K there's no point.


OC part is mostly just for benchmarks and bragging rights. An OC can also be stable 24 hours in different tests and crash 5 min into a game. Don’t expect much more FPS in games honestly.


----------



## kx11

jomama22 said:


> Is anyone able to read their asic quality through gpu-z?


Nvidia disabled that. I remember EVGA cards selling for $800 back in 2014 for a good ASIC rating.


----------



## jomama22

J7SC said:


> ...haven't got my 3090s yet, but I doubt it...couldn't read ASIC for my 2080 Tis either, and I recall an earlier conversation with a vendor rep who said too many people had returned perfectly-functioning cards of earlier gens as 'RMA' because it had a low ASIC
> 
> ...that said, looking at the (unmodified, original bios) voltage curve in MSI AB and its shape and level is an ASIC of sorts


Yeah, that's what I figure. Planning on binning a few 3090s and then shunting the best one. Was more just curious as to whether ASIC quality really correlated to anything this generation. I have found in the past that ASIC quality really didn't mean much in terms of overclocking potential. Many of the higher-leakage chips I would work with would perform much better. Granted, that is with good water cooling and sub-ambient temperatures.


----------



## stefxyz

Anyone heard of any 3090 Strix oc or no shipping to customers yet?


----------



## BigMack70

Finally beat the Osseo final boss... card arrived at local fedex facility this afternoon and will be here tomorrow


----------



## J7SC

jomama22 said:


> Yeah, that's what I figure. Planning on binning a few 3090s and then shunting the best one. Was more just curious as to whether ASIC quality really correlated to anything this generation. I have found in the past that ASIC quality really didn't mean much in terms of overclocking potential. Many of the higher-leakage chips I would work with would perform much better. Granted, that is with good water cooling and sub-ambient temperatures.


Yeah, GPU-Z had a legend for the ASIC sheet underscoring that... for air, a higher ASIC was probably preferable, but the colder it got (especially w/ sub-ambient), the more advantageous a lower ASIC / more leakage could be.

When you get your 3090s, you might want to take a screenshot of each card by itself w/ MSI AB voltage curve at stock...in addition to good-old-benchmark-slogging, that should also tell you about each card's 'pseudo-ASIC' before mods.


----------



## LordGurciullo

Glerox said:


> haha, I laughed when I watched the video. Do you know where he got that bios with higher power limit for the FTW3 card?


Yah I feel so honored! Was great fun!


----------



## HyperMatrix

First time seeing a demand vs. supply metric for the 3000 series cards. Interesting to see. Also sad to see.









Major Danish Retailer Says It Has Only Received 4% of its RTX 3080 Orders


Proshop only has enough RTX 3080s to fill 10% of customer orders.




www.tomshardware.com


----------



## LordGurciullo

So, I have a custom curve for real-world gaming. It usually bounces from 1985-2100, depending on whether the air conditioner is on or the case is open and temps go above 50. I also changed Textures to QUALITY because I want the games to actually look good; if I go HIGH QUALITY it drops another 1-2 percent. I'm pretty satisfied in game. However, I am getting spikes in more than one game, and in SERIOUS SAM 4 it's absolutely terrible. In-game performance also drops to 69 fps as you can see... and this is with a 3090, a 9900K, and 4100 MHz RAM, on a 1080p XL2746S.

Anyone else having these spikes in real-world gaming? Am I pushing the card too hard? Or the memory? I'm at +150 (curved down) and +300 mem. Fan curve goes to 100% at 58C.


----------



## dr/owned

HyperMatrix said:


> First time seeing a demand vs. supply metric for the 3000 series cards. Interesting to see. Also sad to see.


The ratios are what I find interesting. You'd think the 3070 would be the "more volume" card because it's cheaper, but maybe it exists in a weird price bracket where you either have enough money to go up to the 3080 or don't have enough money and go down to the 3060 (or whatever they call it this gen).

And the 3090... not surprised most orders are for the "top end" Strix card. That's definitely "if you have this much money, $200 more doesn't matter" territory. (I just ordered the TUF because it seems to already have a stout enough VRM that the Strix won't help any.)


----------



## HyperMatrix

dr/owned said:


> The ratios are what I find interested. You'd think the 3070 would be the "more volume" card because it's cheaper but maybe it exists in a weird price bracket where you either have enough money to go up to the 3080 or don't have enough money and go down to the 3060 (or whatever they call it this gen).
> 
> And the 3090...not surprised most orders are for the "top end" Strix card. That's definitely "if you have this much money, $200 more doesn't matter". (I just ordered the TUF cause it seems to already have a stout VRM and such that the Strix won't help any)


Honestly the 3070 is a **** card at $500. For 40% more money you can get the 3080 which has:

50% more cores (cuda/rt/tensor)
60% more memory bandwidth
25% more memory

It should be priced at $399. It'll be absolutely clobbered at $499 unless AMD completely fails with their RDNA2 cards. If I were someone who absolutely couldn't go above $500, I would definitely wait to see what AMD is putting out before ordering anything.


----------



## Baasha

I have not heard the 3090 Strix being in stock anywhere - is there a different release date for it or have I just missed the drops (like with all the other cards)?


----------



## BigMack70

Baasha said:


> I have not heard the 3090 Strix being in stock anywhere - is there a different release date for it or have I just missed the drops (like with all the other cards)?


I haven't heard a firm release date yet. Saw the 21st on best buy's website but no idea if that's reliable/accurate


----------



## dr/owned

Baasha said:


> I have not heard the 3090 Strix being in stock anywhere - is there a different release date for it or have I just missed the drops (like with all the other cards)?


It's a Dutch retailer, so maybe they're allowed to do a preorder/backorder queue, unlike us here in the US.


----------



## iamjanco

dr/owned said:


> It's a Dutch retailer so maybe they're allowed to do a preorder/backorder queue and we're not in the US.


They're a Danish retailer, not a Dutch one. That said, something is obviously rotten everywhere, and not just in Denmark.


----------



## HyperMatrix

dr/owned said:


> It's a Dutch retailer so maybe they're allowed to do a preorder/backorder queue and we're not in the US.


Memory Express in Canada is also doing preorders of all cards. I'm on a preorder list for the ROG Strix, although if I can grab an FTW3 from EVGA sooner than that, I'll go that route. No idea how many Strix cards will be sent our way. I know 10 days ago my city alone had 40 preorders for the Strix, and they have stores across the country. So I doubt they'll be getting 200-300 of the ROG Strix card for just one retail chain in just one tiny country named Canada.

I think Nvidia was disallowing preorders prior to launch. Not afterwards. EVGA's new "queue" system for reserved purchase spots is a preorder list as well.


----------



## animeowns

stefxyz said:


> Anyone heard of any 3090 Strix oc or no shipping to customers yet?


It's shipping now; you can buy it @ CompSource. Someone who ordered from there is receiving it in 2 weeks.


http://imgur.com/a/7edK2Sf


----------



## lester007

Pre-ordered too, an ASUS Strix at Mem Ex Vancouver. Currently 10th in line; haven't moved since launch. 🤷‍♂️


----------



## Bilco

Alelau18 said:


> That's why for benching I just resorted to manually edit the curve, it's the best way atm


You mind sharing a screen of your curve?


----------



## domenic

animeowns said:


> its shipping now you can buy it @ compsource someone is receiving it in 2 weeks who ordered from there
> 
> 
> http://imgur.com/a/7edK2Sf


Why are they jacking up the prices and what are the differences in these cards?


----------



## stefxyz

dr/owned said:


> And the 3090...not surprised most orders are for the "top end" Strix card. That's definitely "if you have this much money, $200 more doesn't matter". (I just ordered the TUF cause it seems to already have a stout VRM and such that the Strix won't help any)


I think it's more than the 200 USD not mattering: the 3090 is not that far off a 3080 anyway, roughly 10-20% (when not limited by other factors) if you compare the same chip quality. So a very good 3080 might even be on par with a bad 3090 chip in most situations, purely performance-wise.

Now, getting a higher chance at a good chip for 200 USD extra becomes reasonable all of a sudden, as a good 3090 with a high power limit will be 5-10% ahead of a low-end-chip, low-power-limit 3090. So for the first 10% you pay 1600 instead of 800, and for the next you pay "only" 200 more...

In my opinion a base 3090 is the worst buy possible. Either you go the reasonable route and buy a 3080 (maybe the 20GB version to come) or go all the way to the top. The FE might still not be a bad choice, but all the basic board-partner cards I would avoid at all cost for the 3090...


----------



## bmgjet

2-plug 3090s are a bad buy, especially if you're running one stock with a 350W power limit. You'll basically get the same performance as the high-end 3080s, since it throttles way under its advertised boost speed, pegged on the power limiter, whenever you're running something demanding.

BIOS flash it, or adjust the power slider if it's one that goes that high, to 390W. That will get you to roughly the stock performance of the high-end 3090s.

Shunt mod it so you can take the power limit further, and it seems to be equal to the high-end ones at the same power limit. 480W is about the point where more power doesn't really let it boost and hold clocks any better. Either way, on my card 450W vs the stock 350W is a 16% uplift in benchmarks.
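For anyone weighing the shunt mod mentioned above: it works by soldering a resistor in parallel with each current-sense shunt, so the controller under-reads current and allows proportionally more real power. A rough sketch of the math, using hypothetical shunt values (boards differ; measure yours before soldering anything):

```python
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def effective_power_limit(stock_limit_w, shunt_mohm, added_mohm):
    """Real power the card can draw after a resistor is soldered across
    the stock current-sense shunt. The controller measures voltage
    across the (now lower) combined shunt, so it under-reports current
    by the ratio stock/combined and permits proportionally more power."""
    combined = parallel(shunt_mohm, added_mohm)
    return stock_limit_w * (shunt_mohm / combined)

# Hypothetical example: a 350 W limit with 5 mΩ stock shunts and another
# 5 mΩ soldered on top halves the reading -> ~700 W real ceiling.
print(effective_power_limit(350, 5.0, 5.0))  # 700.0
```

Soldering a larger-value resistor (say 15 mΩ across 5 mΩ) gives a gentler bump, which is one way to land near a 450-480W target instead of doubling outright.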


----------



## Thanh Nguyen

Has anyone installed the Alphacool block for the reference board? I installed one on my PNY but get no signal to my monitor. What could be wrong? PCIe not getting power, or what?


----------



## nievz

You guys, why don't you get the MSI Gaming X Trio, flash it to the Strix BIOS, and get top-end performance for less money?


----------



## shiokarai

nievz said:


> You guys why don’t you get the MSI Gaming X Trio, flash it to Strix BIOS and get top end performance for less money.


but does it work?


----------



## DRFTA

Found this site last night and got some good info from it! First time I've ever flashed a GPU BIOS, so let's see if I can mess up my new TUF 3090 (non-OC)...
I flashed the Gigabyte RTX 3090 Gaming OC BIOS and it WORKED! It runs fine and is more stable with the higher GPU power output.
I've put it through a few 3DMark Time Spy Extreme runs; she's got a bit more in her now! Just waiting for the ambient temp to go down a bit to see if I can top my old score.
Here's my top run atm; this was done before the BIOS swap.


----------



## bmgjet

You've got a good chip there to score that with only 390W.
Shunt mod it to 480W and you'll get another good bump in performance.


----------



## DRFTA

bmgjet said:


> You got a good chip there to score that with only 390W.
> Shunt mod it to 480W and youll get another good bump in performance.


That score was at 375W on the standard TUF BIOS...
And yeah, I think I might have won the silicon lottery, that's for sure!


----------



## nievz

shiokarai said:


> but does it work?


Here's RDR 2 comparison. I use custom graphics settings in game so my results are not comparable to other benchmarks of this game.

Power limit at 350 watts

















Power limit at 480W


----------



## shiokarai

nievz said:


> Here's RDR 2 comparison. I use custom graphics settings in game so my results are not comparable to other benchmarks of this game.
> 
> Power limit at 350 watts
> View attachment 2461327
> 
> View attachment 2461321
> 
> 
> Power limit at 480W
> View attachment 2461326
> 
> View attachment 2461324


Wow, that's a HUGE difference... I wonder whether it's an outlier or the norm?


----------



## nievz

shiokarai said:


> Wow that's a HUGE difference... I wonder whether it's the outlier or it's the norm?


The min and max fps fluctuate wildly between runs at 350W. At 480W, I get consistent results.


----------



## Wrathier

stefxyz said:


> I think it's more than the 200 USD not mattering: the 3090 is not that far off a 3080 anyway, roughly 10-20% (when not limited by other factors) if you compare the same chip quality. So a very good 3080 might even be on par with a bad 3090 chip in most situations, purely performance-wise.
> 
> Now, getting a higher chance at a good chip for 200 USD extra becomes reasonable all of a sudden, as a good 3090 with a high power limit will be 5-10% ahead of a low-end-chip, low-power-limit 3090. So for the first 10% you pay 1600 instead of 800, and for the next you pay "only" 200 more...
> 
> In my opinion a base 3090 is the worst buy possible. Either you go the reasonable route and buy a 3080 (maybe the 20GB version to come) or go all the way to the top. The FE might still not be a bad choice, but all the basic board-partner cards I would avoid at all cost for the 3090...


I don't agree at all. I think the Gigabyte Gaming OC 3090 is a good buy, priced right in the middle of an MSRP TUF and a Strix.


----------



## Wrathier

shiokarai said:


> Wow that's a HUGE difference... I wonder whether it's the outlier or it's the norm?


What card you got?


----------



## Wrathier

Damn, we need that 400W BIOS. I don't want to shunt mod or water cool, but of course I want the most out of the card.


----------



## Benni231990

no we need a damn XOC bios xDD


----------



## Wrathier

Benni231990 said:


> no we need a damn XOC bios xDD


he he


----------



## shiokarai

Shame the 3x8-pin cards' BIOSes don't work on 2x8-pin cards...


----------



## Nizzen

shiokarai said:


> Shame 3x8 PIN cards' BIOSes doesn't work on 2x8 PIN cards...


It's a shame buying the cheaper models... 
Buy cheap, buy twice


----------



## shiokarai

Nizzen said:


> It's a shame buying the cheaper models...
> Buy cheap, buy twice


Being able to BUY is the key issue here.  Also, check out the prices - 2x8-pin cards are sometimes more expensive than 3x8-pin ones...


----------



## asdkj1740

3080 / 3090 / 3070 Gigabyte Eagle Gaming OC & Vision Power Connector Concerns


***UPDATE 1.5 IMPORTANT INFO To clarify for everyone and any one new here the cards affected are as follows re serial number WK39 onwards will have the revised new connector block *UPDATE however some cards may be mixed and still could be on the old connector block even after WK39 WK38...




www.overclockers.co.uk




To Gigabyte users: go ask for the new adapter in case the original one fails.


----------



## Baasha

BigMack70 said:


> I haven't heard a firm release date yet. Saw the 21st on best buy's website but no idea if that's reliable/accurate


Yea, same here. Patience is truly a virtue. I just hope I can get the cards before Cyberpunk 2077 releases.

How's the FE 3090 btw? Let's see some pics!


----------



## Wrathier

asdkj1740 said:


> 3080 / 3090 / 3070 Gigabyte Eagle Gaming OC & Vision Power Connector Concerns
> 
> 
> ***UPDATE 1.5 IMPORTANT INFO To clarify for everyone and any one new here the cards affected are as follows re serial number WK39 onwards will have the revised new connector block *UPDATE however some cards may be mixed and still could be on the old connector block even after WK39 WK38...
> 
> 
> 
> 
> www.overclockers.co.uk
> 
> 
> 
> 
> to gigabyte users, go ask for the new adaptor in case the original one failed.


****... I'm a Gigabyte fan, so of course I have a Gigabyte Gaming OC incoming.

Really hope I don't run into this issue. 😭


----------



## Nitemare3219

shiokarai said:


> Wow that's a HUGE difference... I wonder whether it's the outlier or it's the norm?


Average FPS being 5% different for 130W more heat hardly seems worth it. Also, I’d like to see what the difference actually is with a manually tuned voltage curve for both limits. 480W is going to be impossible to keep cool on air without thermal throttling or insanely loud fan speed.



nievz said:


> The min and max fps fluctuates wildly between runs at 350W. At 480W, I get consistent result.


It fluctuates at 350W because I assume you are running stock voltage. You are hitting power limit the entire time and clocks are suffering as a result. I am managing a steady 1950 MHz at 0.900V with my FE card so far at 400W power limit (though it’s often below 400W). Stock curve with 400W power limit is more like 1800-1920 MHz, basically all over the damn place. Clocks drop severely if temp limit hits around 70 degrees.



Nizzen said:


> It's a shame buying the cheaper models...
> Buy cheap, buy twice


There's no real reason for three 8-pins; it's a pure marketing stunt. Sure, the PCIe spec rates an 8-pin connector at 150W, but we clearly know a good cable can handle 300W. Cards with two 8-pins should be able to push 600W without a concern, but because AIBs want to make people pay more, they reserve the higher power limits for the top-tier cards. Hopefully we get some BIOSes with more power for everyone.
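For reference, the by-the-book budget behind those connector counts is easy to tally: 75W from the PCIe slot plus 150W per 8-pin (the spec rating, not the physical limit of a good cable). A quick sketch, which also shows why so many 2x8-pin BIOSes stop near 375-390W:

```python
def spec_power_budget(n_8pin, slot_w=75, per_8pin_w=150):
    """Board power budget per spec: the PCIe slot contributes 75 W and
    each 8-pin connector is rated for 150 W. Real cables and VRMs can
    often carry much more; this is just the paper limit."""
    return slot_w + n_8pin * per_8pin_w

print(spec_power_budget(2))  # 375 -> typical ceiling for 2x8-pin BIOSes
print(spec_power_budget(3))  # 525 -> room for 450-480 W power limits
```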


----------



## nievz

Nitemare3219 said:


> Average FPS being 5% different for 130W more heat hardly seems worth it. Also, I’d like to see what the difference actually is with a manually tuned voltage curve for both limits. 480W is going to be impossible to keep cool on air without thermal throttling or insanely loud fan speed.
> 
> 
> It fluctuates at 350W because I assume you are running stock voltage. You are hitting power limit the entire time and clocks are suffering as a result. I am managing a steady 1950 MHz at 0.900V with my FE card so far at 400W power limit (though it’s often below 400W). Stock curve with 400W power limit is more like 1800-1920 MHz, basically all over the damn place. Clocks drop severely if temp limit hits around 70 degrees.


Not really insane fan speed. At a sustained 470-480W power draw in Port Royal, I'm at 69-70C at 80% fan speed (mind you, since I'm using the Strix BIOS, I'm losing 250 rpm) at around 26C ambient. The X Trio's cooling performance is really good.

It fluctuates at 350W because it's hitting the PL. That's what you get when you are power limited.

Also, I'm using a manually tuned F/V curve. I didn't win the silicon lottery, but it's good enough.


----------



## Nizzen

Nitemare3219 said:


> Average FPS being 5% different for 130W more heat hardly seems worth it. Also, I’d like to see what the difference actually is with a manually tuned voltage curve for both limits. 480W is going to be impossible to keep cool on air without thermal throttling or insanely loud fan speed.
> 
> 
> It fluctuates at 350W because I assume you are running stock voltage. You are hitting power limit the entire time and clocks are suffering as a result. I am managing a steady 1950 MHz at 0.900V with my FE card so far at 400W power limit (though it’s often below 400W). Stock curve with 400W power limit is more like 1800-1920 MHz, basically all over the damn place. Clocks drop severely if temp limit hits around 70 degrees.
> 
> 
> There’s no real reason for 3 8-pins. Pure marketing stunt. Sure, ATX spec says 150W per cable. But we clearly know a good cable can handle 300W. Cards with 2 8-pins should be able to push 600W without a concern, but because AIBs want to make people pay more, they reserve higher power limits for the top tier cards. Hopefully we get some BIOS with more power for everyone.


Shunt-modding a 3090 and drawing 700-800W: then 3x8-pin is useful.


----------



## J7SC

Wrathier said:


> ****... I’m Gigabyte fan so of course I have a Gigabyte gaming oc incoming.
> 
> Really hope I don’t run into this issue. 😭


I'm also a Gigabyte fan (currently on 2x 2080 Ti Aorus Xtr WB), but I still haven't seen any version of the 3090 Aorus Xtr tested in the wild, much less the W/WB editions. But that delay might also mean that, as before, they might be building up higher bins for the Xtrs (hope springs eternal...)


----------



## Johneey

Thanh Nguyen said:


> Anyone install the alphacool block for reference board? I installed on my pny but no signal to my monitor. What could be wrong? Pcie no power or what?


Mine works. Did you damage it, dude 😜?


----------



## Alanzaki_073

Wrathier said:


> I don’t understand the need to put it under water, but hopefully it works out for you. 😀


This is overclock.net, not some FB group of novices claiming to be enthusiasts.
Stable overclocks are the reason to put on a waterblock; sadly, that appears to be a pagan concept to you.


----------



## Thanh Nguyen

Johneey said:


> Mine works. You damaged dude 😜?


It works now. Maybe when I wiped off the old thermal paste, the alcohol was still wet around the die. I kept it overnight and it works now.
I flashed it to the Gigabyte BIOS, but why can't I push the core above 2000 with MSI Afterburner?


----------



## kx11

Quantum Break benchmark


----------



## Wrathier

Alanzaki_073 said:


> This is overclock.net, not some FB group of novices claiming to be enthusiasts.
> Stable overclocks is the need to put on a waterblock, sadly that appears as a pagan concept to you.


LOL - I just got a bit older, not that I haven't made extreme builds on water. But the difference is honestly not worth my time anymore. Or I'm lazy - the latter, I guess, honestly.

I have everything needed, plus extra, to make a killer loop, but haven't got around to doing it yet. My machine works fine on a 360 AIO and an air-cooled GFX.

I have no experience with shunt modding though - that might be something fun later on. Then water would be worth considering, I assume.


----------



## asdkj1740

Wrathier said:


> ****... I’m Gigabyte fan so of course I have a Gigabyte gaming oc incoming.
> 
> Really hope I don’t run into this issue. 😭


The rep said new cards shipping out of the factory have been improved, although you may not find any differences visually.


----------



## Wrathier

asdkj1740 said:


> the rep said new shipping cards out of factory has been improved. although you may not find any differences visually.


I'm from Sweden - generally, cards won't be here until January, so I'm guaranteed first batch - bought from a scalper at auction. Typical, lol. Should have tested MSI this time around. But not everyone has the issue, it seems, and I don't plug in and out all the time, so maybe I'll get lucky.


----------



## BigMack70

Baasha said:


> Yea, same here. Patience is truly a virtue. I just hope I can get the cards before Cyberpunk 2077 releases.
> 
> How's the FE 3090 btw? Let's see some pics!


Pics coming in a bit... just got it installed. Going to test it out and make sure everything is working... gsync is definitely bugged out at 4k120. I remember people posting there was a workaround for 4k 120 VRR other than just waiting for LG to put out a firmware fix, but now I can't find it - anyone know what it was?


----------



## Wrathier

BigMack70 said:


> Pics coming in a bit... just got it installed. Going to test it out and make sure everything is working... gsync is definitely bugged out at 4k120. I remember people posting there was a workaround for 4k 120 VRR other than just waiting for LG to put out a firmware fix, but now I can't find it - anyone know what it was?


Nvidia's drivers are also totally bugged for the Odyssey G9, so in order to avoid a black screen I have to run 120Hz - again. Getting a bit annoying, honestly. I might not even be able to push more than 120, but having a 240Hz monitor, I want it to at least be able to show a picture at 240Hz. At least it's mentioned in the GeForce 456.71 Game Ready Driver Feedback Thread, so a fix will hopefully come.


----------



## Wrathier

So.... now my Gigabyte 3090 Gaming OC has been shipped to me and with luck it's here tomorrow or maybe monday.

Can anyone suggest what I should start with for an OC? +100 core, +500 mem, or what are your suggestions? Also, is Furmark enough to test stability, or is the Heaven 4 method (pause at the dragon scene, raise the mem +25 at a time until issues appear, then dial back 75 to be sure) a good way?

Any ideas? As easy as possible, really.

Thanks in advance.


----------



## kx11

Not trying to be a diva here, but this latest driver broke 8K video recording on the 3090. Can someone try that too?!

For me it's broken with this driver. I set the scaling to GPU/Display and to override the application settings.

Got this result:


----------



## BigMack70

For those of you with cards, what are you doing to determine the optimal memory OC? Are there specific benchmarks that are sensitive to memory bandwidth and good for determining where you hit performance degradation?

On my founders card, +130 core +350 memory is looking like the ballpark of where I'm going to wind up, but much more testing to go...



https://www.3dmark.com/3dm/51437892?


----------



## HyperMatrix

BigMack70 said:


> For those of you with cards, what are you doing to determine the optimal memory OC? Are there specific benchmarks that are sensitive to memory bandwidth and good for determining where you hit performance degradation?
> 
> On my founders card, +130 core +350 memory is looking like the ballpark of where I'm going to wind up, but much more testing to go...
> 
> 
> 
> https://www.3dmark.com/3dm/51437892?


While I, sadly, don't have a card in hand yet, I saw in Gamers Nexus's videos that he'd bump the memory clock up and down a bit and do a full Port Royal run to see what effect it had on his score. For this to be accurate, you'd need to maintain similar temperatures across multiple runs, so put the fans on full blast while testing.
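That sweep boils down to picking the offset with the best score, not the highest offset that survives, since GDDR6X error correction silently retries bad transfers and scores flatten or drop past the sweet spot long before anything crashes. A minimal sketch (the benchmark scores below are made-up illustration values, not measurements):

```python
def best_memory_offset(results):
    """Given (offset_mhz, benchmark_score) pairs from a sweep, return
    the offset with the highest score. With GDDR6X, the peak score is
    what matters: error correction masks instability, so scores plateau
    or regress before the card ever artifacts or crashes."""
    return max(results, key=lambda r: r[1])[0]

# Hypothetical Port Royal scores from a +250 MHz step sweep:
runs = [(0, 12800), (250, 13050), (500, 13180), (750, 13150), (1000, 12990)]
print(best_memory_offset(runs))  # 500
```

Averaging two or three runs per offset before feeding the pairs in helps, since run-to-run variance can easily swamp a 0.2% score delta.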


----------



## Sheyster

BigMack70 said:


> Pics coming in a bit... just got it installed. Going to test it out and make sure everything is working... gsync is definitely bugged out at 4k120. I remember people posting there was a workaround for 4k 120 VRR other than just waiting for LG to put out a firmware fix, but now I can't find it - anyone know what it was?




__
https://www.reddit.com/r/OLED/comments/j23xf4

I'll probably be picking up a CX 48 from Costco soon. They're including the 3 year ST warranty for free and it's selling for $50 less than Amazon and BB.


----------



## HyperMatrix

Sheyster said:


> __
> https://www.reddit.com/r/OLED/comments/j23xf4
> 
> I'll probably be picking up a CX 48 from Costco soon. They're including the 3 year ST warranty for free and it's selling for $50 less than Amazon and BB.


Man I need to grab one of these new GSYNC compatible displays that has the color enhancement features included like on the LG CX 48. I can't believe how much better the image looks with G-Sync.


----------



## slopokdave

Has anyone gotten all of their DisplayPorts to work with any of the custom BIOSes? It wasn't a problem for my ASUS TUF with the Gigabyte BIOS until today. I got a 32" 165Hz monitor, but I'm already using two of my DPs (34" ultrawide and Index VR). I need that 3rd DisplayPort!!


----------



## BigMack70

Spoiler: pics






















Overclocking stability is weird on this thing. I've never owned a card that could have 100 MHz of variability between different tests as far as its stability is concerned. I can pass some benchmarks at +200 MHz reliably; others fail at anything above +100 MHz. It seems to depend on how high the clocks want to run - it can handle more offset below 2 GHz and less above it. I think I'm going to be pretty safely dialed in at +100 MHz on the core. Can't tell yet if +250 or +350 is better on the memory; still working on that.

It's about 30-42% faster than my water-cooled 2080 Ti was, depending on the test. 

Overall, very happy with the card. Looks like it's got a 420W power limit out of the box (114% power slider), which is fine. And it's so dang quiet. Easily the most quiet air-cooled card I've ever had. In fact, I'd go so far as to say it's simply the best air-cooled GPU I've ever had, period. We'll see how I feel after a few weeks of using it, but right now I'm tempted to just run a stock air-cooled GPU for the first time in 10+ years.

Only problem is the power connector location... there's just no way to make it anything but ugly 

Now, just need to get gsync fixed or find a workaround for VRR on my C9


----------



## mirkendargen

BigMack70 said:


> For those of you with cards, what are you doing to determine the optimal memory OC? Are there specific benchmarks that are sensitive to memory bandwidth and good for determining where you hit performance degradation?
> 
> On my founders card, +130 core +350 memory is looking like the ballpark of where I'm going to wind up, but much more testing to go...
> 
> 
> 
> https://www.3dmark.com/3dm/51437892?


Hashrate mining ETH might be a good test, I know it's super memory bandwidth bound.


----------



## Alex24buc

Hello, I have a 3090 Gaming Pro OC from Palit and I play in 4K. I noticed in some games, especially in Red Dead Redemption 2, my core clock sometimes drops even into the 1700s and high 1600s, but in Assassin's Creed Odyssey it stays in the 1900s. Is this normal? I'll mention that my card's temps are in the 60s, and it appears that the GPU reaches the power limit and that's why it throttles. What should I do to keep the core clock higher? The power limit can only be increased to 104%, and I didn't overclock the GPU at all. Thanks in advance for your help!


----------



## HyperMatrix

Alex24buc said:


> Hello, I have a 3090 Gaming Pro OC from Palit and I play in 4K. I noticed in some games, especially in Red Dead Redemption 2, my core clock sometimes drops even into the 1700s and high 1600s, but in Assassin's Creed Odyssey it stays in the 1900s. Is this normal? I'll mention that my card's temps are in the 60s, and it appears that the GPU reaches the power limit and that's why it throttles. What should I do to keep the core clock higher? The power limit can only be increased to 104%, and I didn't overclock the GPU at all. Thanks in advance for your help!


Someone else was running into the same issue. Unfortunately at the moment the only way to fix it is to increase the power limit. You can do that by flashing a new bios with a higher power limit (which will help a bit), or doing shunt mod (which should help a lot). It's really unfortunate seeing some of the 3090s throttle so hard due to their TDP limits.


----------



## BigMack70

Alex24buc said:


> Is it normal?


It's the same behavior as I have seen on my card. 200-ish MHz difference in average clock between different tests. Just has to do with how hard the cores are getting hit. Harder hit on cores = run into power limit earlier and so clock lower.


----------



## zlatanselvic

Any news on a BIOS mod for the 3090 Founders? My card needs some more juice:
https://www.3dmark.com/3dm/51439195? 


I've got the Bykski waterblock pre-ordered. Hopefully I can push it a bit more, but it seems voltage is the limiting factor here, not temps.


----------



## Alex24buc

@HyperMatrix 
@BigMack70 

Thanks a lot for your feedback. I understand, and believe me, it is also frustrating because I didn't go that low with the clocks on my previous 2080 Ti. I don't know if it has to do with the fact that Palit is not a good brand, but I didn't have another option.


----------



## HyperMatrix

Alex24buc said:


> @HyperMatrix
> @BigMack70
> 
> Thanks a lot for your feedback. I understand, and believe me, it's also frustrating because the clocks didn't go that low on my previous 2080 Ti. I don't know if it has to do with Palit not being a good brand, but I didn't have another option.


You can try doing some undervolting. Some people found they could maintain 1850-1900MHz with much lower voltage. Lower voltage = less power use. Less power use = less throttling. So you could potentially end up with even higher performance due to better sustained clocks with some undervolting because you won't be hitting your TDP limit.
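The arithmetic behind this advice is worth sketching. A rough model, assuming dynamic power scales roughly with V²·f — the voltage and clock figures below are illustrative, not measurements from any particular 3090:

```python
# Rough sketch of why undervolting can raise sustained clocks under a fixed
# power limit. Dynamic power scales approximately with V^2 * f; the numbers
# below are example operating points, not measured values for any real card.

def relative_power(voltage_mv, clock_mhz, ref_voltage_mv=1050, ref_clock_mhz=1900):
    """Power relative to a reference operating point, using P ~ V^2 * f."""
    return (voltage_mv / ref_voltage_mv) ** 2 * (clock_mhz / ref_clock_mhz)

# Stock-ish point: ~1.05 V at 1900 MHz (reference = 1.0)
stock = relative_power(1050, 1900)

# Undervolted: ~0.90 V while holding 1875 MHz
undervolt = relative_power(900, 1875)

print(f"stock: {stock:.2f}, undervolt: {undervolt:.2f}")
```

With these example numbers the undervolted point draws roughly a quarter less power at nearly the same clock, which is why the card stops bouncing off its TDP limit.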


----------



## Romir

BigMack70 said:


> Overall, very happy with the card. Looks like it's got a 420W power limit out of the box (114% power slider), which is fine. And it's so dang quiet. Easily the most quiet air-cooled card I've ever had. In fact, I'd go so far as to say it's simply the best air-cooled GPU I've ever had, period. We'll see how I feel after a few weeks of using it, but right now I'm tempted to just run a stock air-cooled GPU for the first time in 10+ years.


I can echo all of this; I think an 8800 GTS was my last air-cooled card 14 years ago. How many watts this 3090 FE cools with two 110mm fans at ~1250 rpm is amazing.

I'm switching to an O11 Dynamic though, because my case is a hot box with several intake radiators, and I'm tired of the creaking plexi window. It's fine after changing the top 480 to exhaust, but the CPU-only deltaT makes me sad. I'm going to move the rads and pumps into a wheeled file cabinet below it and make the external radbox I've wanted for so long. It'll be a nice display case on top of a giant pedestal. Most likely though, an EK FE block will look great in the O11. 😅


----------



## BigMack70

Romir said:


> I can echo all of this; I think an 8800 GTS was my last air-cooled card 14 years ago. How many watts this 3090 FE cools with two 110mm fans at ~1250 rpm is amazing.
> 
> I'm switching to an O11 Dynamic though, because my case is a hot box with several intake radiators, and I'm tired of the creaking plexi window. It's fine after changing the top 480 to exhaust, but the CPU-only deltaT makes me sad. I'm going to move the rads and pumps into a wheeled file cabinet below it and make the external radbox I've wanted for so long. It'll be a nice display case on top of a giant pedestal. Most likely though, an EK FE block will look great in the O11. 😅


I can definitely recommend the O11 Dynamic, particularly the XL version if you need the extra space, which is what I'm using. With all my fans at 1000 RPM or below, my 3090 stays around 67C and pretty quiet. It looks gorgeous, and the cable management is nearly perfect... it lets me completely hide the rat's nest caused by 10 RGB fans, up to 2 AIOs, and two Corsair Commander units. Makes that stupid 12-pin connector all the more annoying, though... no way to hide it.

The only complaint I can think of with the O11D XL is the price since you have to supply all your own cooling... but this is a 3090 owners thread... we don't care about price in here XD

Other than the 8800 GTS, which I ran from 2007-2011, the only other air cooled cards I ran were Lightning 7970s... and I have no idea why I did that because they sounded like a vacuum cleaner in crossfire. Ever since then I've been under water or AIO.

But I've just been gaming for a couple hours and this FE card is nearly dead silent. I've always gone with water cooling for acoustic reasons, not so much performance.... I am seriously tempted to just keep this card. I'm honestly absurdly impressed with it. I think the only way to improve noise levels will be a large custom loop and I dunno if I want to shell out the cash for that.


----------



## Thanh Nguyen

I'm using a PNY 3090 with the Gigabyte OC BIOS. Why does my voltage just stay at 950 mV? Sometimes it maxes out around 1025 mV, so the core can't stay above 2000 MHz.


----------



## nievz

BigMack70 said:


> For those of you with cards, what are you doing to determine optimal memory OC? Are there specific benchmarks that are sensitive to memory bandwidth and good for determining where you hit performance degradation?
> 
> On my founders card, +130 core +350 memory is looking like the ballpark of where I'm going to wind up, but much more testing to go...
> 
> 
> 
> https://www.3dmark.com/3dm/51437892?


+1200 memory fails the Port Royal test for me. I settled at +900 MHz.


----------



## originxt

I think I am reaching the current limits of my card.









Haven't had a successful run past +175 core. No voltage curve tuning, because the tiny tuner in EVGA's software is unusable.

Could run higher memory, but the diminishing returns aren't worth it.

Edit: Top 10 in Firestrike Ultra Whooooo


----------



## ratzofftoya

Just posted in another thread, but... I'm a little dismayed that my benching experiment last night of 2x 2080 Tis in SLI vs. one RTX 3090 (Zotac AMP) ended up with my former SLI setup winning in all 3DMark benchmarks and in Shadow of the Tomb Raider... Any thoughts? I know 3DMark and SotTR both supposedly scale quite well.


----------



## Frappuccino

I have the FTW3 Ultra and I'm seeing the same large OC variance at 4K. I have it undervolted at [email protected] and it runs that stable in Horizon, while Gears 5 drops to 1895. But when I play The Witcher 3 it goes as low as 1785. I could increase the power limit to max, but that draws 420+ watts and it's too damn hot and loud for me.


----------



## supraclk06

My 3090 FE seems to be holding up well: +200 core and +600 on memory on air. Not looking for anything huge, but I am curious about flashing the BIOS. Is it possible to flash the FE? I have been digging through these pages but haven't found anyone who has done so. I am guessing (in my limited knowledge) that other AIB cards with 2x 8-pins could potentially work, but I am not smart enough to figure out that answer, it seems.


----------



## VickyBeaver

Want to run the Gigabyte Gaming OC BIOS but your video outputs are reference?
I stumbled onto the workaround by accident: put a DisplayPort-to-HDMI converter on your non-functioning DisplayPort, flash back to the stock BIOS, then after a reboot flash over to the Gigabyte BIOS again and the output will be functional. You will lose it again, however, if you plug a straight DisplayPort connection into it. Weird that it works at all, but it does.

I used a cheap DP-to-HDMI adapter from Volans; it was just sheer luck that I found this.

On a side note, a friend of mine discovered that PCIe 4.0 was not up to snuff on his Gigabyte X570 Aorus Master. If you are experiencing USB dropouts and audio stutters, try forcing PCIe 3.0.


----------



## HyperMatrix

ratzofftoya said:


> Just posted in another thread, but... I'm a little dismayed that my benching experiment last night of 2x 2080 Tis in SLI vs. one RTX 3090 (Zotac AMP) ended up with my former SLI setup winning in all 3DMark benchmarks and in Shadow of the Tomb Raider... Any thoughts? I know 3DMark and SotTR both supposedly scale quite well.


Why exactly are you surprised? Tomb Raider is one of the few games that has built in mGPU support so you’re going to get amazing scaling. 3DMark is the same. No one ever claimed the 3090 would be 2x faster than the 2080Ti.

3090 has some advantages over the 2080Ti like the roughly 75-90% higher ray tracing performance and 24GB VRAM. But the biggest performance determinant with the 3090 is going to be what card you get, what the TDP is, and whether you shunt mod it.

The biggest mistake anyone can make is upgrading from a 2080Ti to one of the FE or 2x 8 pin connector models (unless you’re willing to shunt mod). Unfortunately Nvidia have left very little headroom for overclocking. Not because the hardware can’t take it, but because power usage would go through the roof. So you’ll find your basic model 3090 cards being only about 10% faster than the EVGA FTW3 RTX 3080.

So my recommendation would be to either shunt mod the card, or sell it (which you can do without a loss right now) and get any of the 3x8 pin connector cards so you can flash them with a 480W bios and later hopefully with the 520W KPE bios.

The 3090 is really only a _significant_ upgrade over the 2080Ti when it’s not TDP restricted.


----------



## sultanofswing

In my current situation with a 2080 Ti Kingpin, my only real upgrade path is the 3090 Kingpin.
The 3080 just isn't a big enough upgrade for me, and since I am already used to spending right at 2k for a GPU, the 3090 KPE it will be.


----------



## ratzofftoya

HyperMatrix said:


> Why exactly are you surprised? Tomb Raider is one of the few games that has built in mGPU support so you’re going to get amazing scaling. 3DMark is the same. No one ever claimed the 3090 would be 2x faster than the 2080Ti.
> 
> 3090 has some advantages over the 2080Ti like the roughly 75-90% higher ray tracing performance and 24GB VRAM. But the biggest performance determinant with the 3090 is going to be what card you get, what the TDP is, and whether you shunt mod it.
> 
> The biggest mistake anyone can make is upgrading from a 2080Ti to one of the FE or 2x 8 pin connector models (unless you’re willing to shunt mod). Unfortunately Nvidia have left very little headroom for overclocking. Not because the hardware can’t take it, but because power usage would go through the roof. So you’ll find your basic model 3090 cards being only about 10% faster than the EVGA FTW3 RTX 3080.
> 
> So my recommendation would be to either shunt mod the card, or sell it (which you can do without a loss right now) and get any of the 3x8 pin connector cards so you can flash them with a 480W bios and later hopefully with the 520W KPE bios.
> 
> The 3090 is really only a _significant_ upgrade over the 2080Ti when it’s not TDP restricted.


I suppose I thought that even with the mGPU support, scaling was not super great, and that I would get like a 30% bump over a single 2080Ti, and that SLI 2080Tis is like a 30% bump over solo, so it would be equal in the end.

I think I'll hang on to this 3090 and keep it in the test bench and sell my two 2080Tis, if only for the power savings. I've never shunt modded...Maybe I'll give it a try! Then I'll pick up one of the 3x8 pin cards down the line for my main build.

Am I right in understanding that, even for enthusiast idiots like me, SLI is basically dead outside of benchmarks?


----------



## HyperMatrix

ratzofftoya said:


> I suppose I thought that even with the mGPU support, scaling was not super great, and that I would get like a 30% bump over a single 2080Ti, and that SLI 2080Tis is like a 30% bump over solo, so it would be equal in the end.
> 
> I think I'll hang on to this 3090 and keep it in the test bench and sell my two 2080Tis, if only for the power savings. I've never shunt modded...Maybe I'll give it a try! Then I'll pick up one of the 3x8 pin cards down the line for my main build.
> 
> Am I right in understanding that, even for enthusiast idiots like me, SLI is basically dead outside of benchmarks?


Well I used to run quad SLI with the 680s. Quad SLI with the original Titan. Tri-SLI with maxwell Titan. 2x SLI with Pascal Titans. And at that point I sold one of my Titan cards. Because SLI support was crap. Even games that supported it didn’t do it properly until quite a while after launch when I’d finished the game already. And now, Nvidia has officially cancelled standard SLI support. You’ll only have mGPU going forward.

mGPU will only become popular once someone releases a chiplet based video card. Without that, it’s a waste of development time for a feature that almost no one has access to anymore. So for now, I’d say SLI is a bad idea, although for games that it does work in right now, it’s not bad. But it’s as good as it’s going to get for the near future.

If you want to get an idea of how good a single 3090 can get, do this. Do a port royal run. Follow the link and see what it says your average clock speed was for the run. Expect that under the best circumstance (proper card, proper TDP, proper cooling) you’ll have 2100MHz average clock speed. So if your current average clock ends up being 1900MHz, you’ll know that at best your card could perform about 10% better.
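For anyone who wants that arithmetic spelled out, here's a trivial sketch of the estimate. The ~2100 MHz best-case figure is the poster's rule of thumb, not a spec:

```python
# Estimate remaining performance headroom from a Port Royal run's average
# clock, per the rule of thumb above: compare against a ~2100 MHz best case
# (proper card, proper TDP, proper cooling).

def remaining_headroom_pct(avg_clock_mhz, best_case_mhz=2100):
    """Percent uplift if the card could sustain best_case_mhz instead."""
    return (best_case_mhz / avg_clock_mhz - 1) * 100

# A 1900 MHz average suggests roughly 10% left on the table
print(f"{remaining_headroom_pct(1900):.1f}%")  # -> 10.5%
```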


----------



## Wrathier

Could someone give me an idea what to set mem and core to on the Gigabyte Gaming OC, and a way to test stability? I don't own 3DMark, just the free version.
Cheers.


----------



## sk1nz

twisted0ne said:


> Anyone flashed a MSI Ventus 3X OC? Bit paranoid about flashing the wrong one.


I have the MSI 3090 Ventus 3X OC flashed to *Gigabyte RTX 3090 Gaming OC*. It works fine.


----------



## Alex24buc

What power limit do you get with the new BIOS? I wonder if it works with my Palit to increase the power limit beyond 104%.


----------



## Spiriva

Alex24buc said:


> What power limit do you get with the new BIOS? I wonder if it works with my Palit to increase the power limit beyond 104%.


What the % says doesn't matter; it's relative to your current BIOS.
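In other words, the slider percentage multiplies the base power limit baked into whichever BIOS is flashed, so the same % means different watts on different BIOSes. A quick sketch — the base TDP values are examples drawn from this thread, not verified specs for any particular card:

```python
# The power slider % is relative to the flashed BIOS's base power limit,
# so comparing percentages across BIOSes is meaningless without the base TDP.

def absolute_limit_w(base_tdp_w, slider_pct):
    """Actual power ceiling in watts for a given base TDP and slider setting."""
    return base_tdp_w * slider_pct / 100

palit_max = absolute_limit_w(350, 104)     # e.g. Palit stock BIOS at its 104% cap
gigabyte_max = absolute_limit_w(390, 100)  # e.g. Gigabyte Gaming OC BIOS at 100%

print(palit_max, gigabyte_max)  # -> 364.0 390.0
```

So a BIOS capped at "104%" can still allow fewer watts than another BIOS sitting at "100%".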


----------



## Thanh Nguyen

Any shunt mod tutorials for the reference board?


----------



## bolagnaise

Just got my Zotac 3090. I'm on +230 core and +1000 memory, but am obviously power limited by Zotac's **** BIOS. Has anyone successfully flashed another BIOS with a higher limit on this card?


----------



## Nizzen

bolagnaise said:


> Just got my Zotac 3090. I'm on +230 core and +1000 memory, but am obviously power limited by Zotac's **** BIOS. Has anyone successfully flashed another BIOS with a higher limit on this card?


Try it for yourself and report back. It takes 1 minute to flash the BIOS.

I flashed my 3080 Palit OC like 10 times in one day.


----------



## kot0005

psychrage said:


> I ordered this one - US $45.47 20% OFF|Bykski GPU Water Block For NVIDIA RTX 3090 3080 Founders Original PCB Radiator + Backplate 5V/12V MB SYNC N RTX3090FE X|Fans & Cooling| - AliExpress. The title description calls out the 3080 as well as the 3090, but everything else on the page only calls out the 3090 FE. Bykski also don't seem to have another SKU specific to the 3080 FE.
> 
> View attachment 2461140
> 
> 
> The dimensions don't line up with the rough dimensions of my card, though. I measure from just above the PCI slot to the top of the factory heatsink at ~120mm. They show a total height of the block at 127mm. The 186.2mm length looks to be pretty close. Not going to tear my card down to confirm that. I know the top part with the inlet/outlet is at least 30mm. Now I'm nervous...
> 
> I wonder if they've made a revision... And being that I didn't order direct from Bykski, am I getting said revision?.... gah!!!


Don't cheap out on waterblocks... you get what you pay for.

Aquacomputer > EKWB > Heatkiller > Alphacool

I would not touch anything else.


----------



## bolagnaise

Nizzen said:


> Try it for yourself and report back. It takes 1 minute to flash the BIOS.
> 
> I flashed my 3080 Palit OC like 10 times in one day.


I'm worried about bricking it. I haven't flashed a card in a long while, so I'm a bit scared lol


----------



## Nizzen

bolagnaise said:


> I'm worried about bricking it. I haven't flashed a card in a long while, so I'm a bit scared lol


Impossible to brick it with a stable computer. If the BIOS doesn't work for some reason, just flash another BIOS to the card. It's as easy as installing a GPU driver. It's even WAY faster.


----------



## bolagnaise

Nizzen said:


> Impossible to brick it with a stable computer. If the BIOS doesn't work for some reason, just flash another BIOS to the card. It's as easy as installing a GPU driver. It's even WAY faster.


OK, yeah, I'm going to flash the Gigabyte BIOS to it.


----------



## bolagnaise

Nizzen said:


> Impossible to brick it with a stable computer. If the BIOS doesn't work for some reason, just flash another BIOS to the card. It's as easy as installing a GPU driver. It's even WAY faster.


Done, thanks for the support 🤗


----------



## bolagnaise

Original ZOTAC 3090 bios


https://www.3dmark.com/3dm/51460935?



Gigabyte Eagle bios


https://www.3dmark.com/3dm/51463071?



TDP went from ~340 W to ~380 W, both at 100% fan speed.


----------



## nievz

Anyone tried liquid metal yet and how was the result?


----------



## }{yBr!D^

My 3090 FE came in yesterday! Tweaked it for nearly 8 hours finding that sweet spot. I'm not in front of my PC now, but man does it rip on air, and it's surprisingly cool! It never reached 60C with the overclock; 58C was the max in all 3DMark benches and Heaven. I'm seriously impressed. I was originally upgrading from an EVGA 2080 Ti FTW3 Ultra, but I sold that one before Nvidia announced the 30 Series. Picked up an EVGA 2060 KO Ultra for the step-up program, but I lucked out on the FE. The comparison is with that 2060, for your enjoyment.


----------



## Thanh Nguyen

Can anyone link me to where to buy resistors for the reference 3090? I want to shunt mod my card.


----------



## vmanuelgm

Thanh Nguyen said:


> Can anyone link me to where to buy resistors for the reference 3090? I want to shunt mod my card.


Mouser for example...



https://www.mouser.com/ProductDetail/603-PR252FKF070R005L


----------



## krs360

Any Zotac 3090 Trinity users here able to confirm which BIOS they think is performing best for them at the moment? Also, I've never flashed a BIOS other than to upgrade it from the official vendor. Is there a guide around here that people generally follow, and how safe is it to flash another brand's BIOS?

Got my 3090 this morning. I got lucky on a UK vendor that was only selling to previous customers, so it wasn't instantly cleaned out by bots, but honestly I'd probably not have purchased this particular card if I had been aware of the power limit lock.

Thanks


----------



## rawsome

I flashed my MSI GeForce RTX 3090 Ventus OC yesterday with the Gigabyte Gaming OC BIOS. Everything still works as it should, and it's +400 points in Port Royal with the same OC settings. It looks like it handles OCing better now; I'm currently testing with +150 on the clock, while I was at +130 before.

Temperatures are at 70-72°C now with around 2200 rpm.



krs360 said:


> Any Zotac 3090 Trinity users here able to confirm which BIOS they think is performing best for them at the moment? Also, I've never flashed a BIOS other than to upgrade it from the official vendor. Is there a guide around here that people generally follow, and how safe is it to flash another brand's BIOS?


See the first post of this thread; there is a tutorial. I am also new to this, but it seems like you can flash whatever you want without bricking your card. Someone here, for example, flashed a 3-plug BIOS on a 2-plug card; it did not give a higher power limit, but no brick.


----------



## Thanh Nguyen

vmanuelgm said:


> Mouser for example...
> 
> 
> 
> https://www.mouser.com/ProductDetail/603-PR252FKF070R005L


Do we need 5 mΩ, 3 mΩ, or 8 mΩ? Thanks.


----------



## Nizzen

Thanh Nguyen said:


> Do we need 5 mΩ, 3 mΩ, or 8 mΩ? Thanks.


It depends on how high you will fly


----------



## Thanh Nguyen

To infinity and beyond.


----------



## Sync0r

I have a 3090 Zotac Trinity (I know, worst card) and an Alphacool block inbound. May have to shunt it.


----------



## shiokarai

Sync0r said:


> I have a 3090 Zotac Trinity (I know, worst card) and an Alphacool block inbound. May have to shunt it.


Actually, it's the silicon quality that decides; it may be a good one, even on the Trinity.


----------



## Sync0r

shiokarai said:


> Actually, it's the silicon quality that decides; it may be a good one, even on the Trinity.


Yeah, I guess the power delivery will be fine. I think it can cope with 700 W, which I won't be anywhere near lol


----------



## mirkendargen

kot0005 said:


> Don't cheap out on waterblocks... you get what you pay for.
> 
> Aquacomputer > EKWB > Heatkiller > Alphacool
> 
> I would not touch anything else.


I'd put Bykski blocks above EKWB, lol. When have you heard about a Bykski block losing its coating? Or having garbage coverage area on Threadripper?


----------



## kx11

mirkendargen said:


> I'd put Bykski blocks above EKWB, lol. When have you heard about a Bykski block losing its coating? Or having garbage coverage area on Threadripper?


I'd put Bitspower above all of them; they are always the quickest to release waterblocks, and they never disappoint.


----------



## GanMenglin

vmanuelgm said:


> Mouser for example...
> 
> 
> 
> https://www.mouser.com/ProductDetail/603-PR252FKF070R005L


How many shunts did you mod? I've only done the two 8-pin shunts on my Ventus, and it shows the power limit is still the performance cap.


----------



## J7SC

ratzofftoya said:


> I suppose I thought that even with the mGPU support, scaling was not super great, and that I would get like a 30% bump over a single 2080Ti, and that SLI 2080Tis is like a 30% bump over solo, so it would be equal in the end.
> 
> *I think I'll hang on to this 3090 and keep it in the test bench and sell my two 2080Tis*, if only for the power savings. I've never shunt modded...Maybe I'll give it a try! Then I'll pick up one of the 3x8 pin cards down the line for my main build.
> 
> *Am I right in understanding that, even for enthusiast idiots like me, SLI is basically dead outside of benchmarks?*





HyperMatrix said:


> Well I used to run quad SLI with the 680s. Quad SLI with the original Titan. Tri-SLI with maxwell Titan. 2x SLI with Pascal Titans. And at that point I sold one of my Titan cards. Because SLI support was crap. Even games that supported it didn’t do it properly until quite a while after launch when I’d finished the game already. And now, Nvidia has officially cancelled standard SLI support. You’ll only have mGPU going forward.
> 
> *mGPU will only become popular once someone releases a chiplet based video card. Without that, it’s a waste of development time for a feature that almost no one has access to anymore. So for now, I’d say SLI is a bad idea, although for games that it does work in right now, it’s not bad. But it’s as good as it’s going to get for the near future.*
> 
> (...)


I also have been doing nothing but 2x, 3x and 4x SLI for most of the past decade...though these days, I only do 2x SLI (or NVL with my 2080 Tis). 'Traditional SLI' support, such as AFR/2, in general is still there and still has my setup in the top-40 in Port Royal (for now), but beyond benchmarks, traditional SLI support is weaker than it used to be, and missing altogether for some newer titles. As stated many times before, the future for top GPUs is more likely to involve tile-based rendering / mGPU, for which a driver with 'CFR' is necessary (I did a separate thread on that @ EHW). NVidia had a hidden CFR driver feature from 11.2019 to late 05.2020 but disabled it again with the latest drivers. In any case, it only works for higher-end RTX2k, and only in some apps, though by sheer luck it covers all the ones I would need it for. Thus I'm running it now, not least as CFR doesn't do micro-stutter, unlike AFR/2 :

As I am still waiting for the 3090 models I have set my heart on to show up, I continue to use 2x 2080 Ti in SLI-NVL-CFR... the one app which is very close to my heart is MS Flight Simulator 2020, and CFR does work with it, per the thumbnail. The (few) 3090s I have seen running MS FS 2020 in 4K / Ultra are comparable to dual 2080 Tis with CFR; maybe a bit lower in some scenes, and higher in others. The big drawback with 2x 2080 Ti in MS FS 2020 compared to a single 3090 is the power draw: even when not running them full-tilt (i.e. max PL, OC), 2x 2080 Ti will draw a combined 730 W just for the two GPUs, whereas the 3090 would more likely be in the 400-450 W range, depending on model and OC. So down the line, mGPU with CFR-like tile drivers will become important, but for now / this new gen, a single 3090 is hard to beat, IMO.


----------



## Simkin

3090 FE. Any idea if it will hurt the strength of the GPU mounting if I cut here?


----------



## Falkentyne

Best Buy had an epic drop of Founders Edition 3090s and 3080s today. Got an order in for a 3090 FE.
Someone said there were 10,000 cards dropped.
They got some Gigabyte 3080 cards as well.


----------



## Shaded War

Falkentyne said:


> Best Buy had an epic drop of Founders Edition 3090s and 3080s today. Got an order in for a 3090 FE.
> Someone said there were 10,000 cards dropped.
> They got some Gigabyte 3080 cards as well.


I always miss out on these. It's going to take forever to get a 3090.


----------



## bolagnaise

krs360 said:


> Any Zotac 3090 Trinity users here able to confirm which BIOS they think is performing best for them at the moment? Also, I've never flashed a BIOS other than to upgrade it from the official vendor. Is there a guide around here that people generally follow, and how safe is it to flash another brand's BIOS?
> 
> Got my 3090 this morning. I got lucky on a UK vendor that was only selling to previous customers, so it wasn't instantly cleaned out by bots, but honestly I'd probably not have purchased this particular card if I had been aware of the power limit lock.
> 
> Thanks


The Gigabyte BIOS from page one performs pretty well for me. It gave me +10% on my Trinity card's power slider, TDP went from 350 to 390 W, and it gained me around 600 points in Port Royal.


----------



## xkm1948

Joining the club with my EVGA 3090 FTW3 Ultra!

Average clock speed is about 1950MHz ~ 1965MHz.


----------



## bolagnaise

xkm1948 said:


> Joining the club with my EVGA 3090 FTW3 Ultra!
> 
> Average clock speed is about 1950MHz ~ 1965MHz.


Is that manually overclocked at all?


----------



## xkm1948

bolagnaise said:


> Is that manually overclocked at all?



Manually edited V/F curve. I capped it around 1V and 2010MHz max core.


----------



## bolagnaise

xkm1948 said:


> Manually edited V/F curve. I capped it around 1V and 2010MHz max core.


Oh nice, i need to play around with my curves as well


----------



## bmgjet

Thanh Nguyen said:


> We need 5om or 3om or 8om? Thanks.


You can work out what you need for whatever power limit you want.








GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
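For the curious, the math such calculators implement is simple parallel-resistance arithmetic. A hedged sketch — the 5 mΩ stock shunt value used in the example is a commonly cited figure for these boards, not a verified spec for every model:

```python
# Sketch of the shunt-mod arithmetic behind calculators like the one linked
# above. Stacking a resistor in parallel with the stock shunt lowers the
# resistance the card measures current across, so it under-reports power
# and the real ceiling rises proportionally.

def parallel(r1_mohm, r2_mohm):
    """Equivalent resistance of two resistors in parallel (milliohms)."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def effective_power_limit(stock_limit_w, stock_shunt_mohm, added_shunt_mohm):
    """Real power ceiling after soldering added_shunt in parallel with stock."""
    r_eff = parallel(stock_shunt_mohm, added_shunt_mohm)
    return stock_limit_w * (stock_shunt_mohm / r_eff)

# Example: 350 W limit, 5 mOhm stock shunt, 5 mOhm stacked on top
print(effective_power_limit(350, 5, 5))  # -> 700.0 (the card still "sees" 350 W)
```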


----------



## dr/owned

kot0005 said:


> Dont cheapout on waterblocks... you get what you pay for.
> 
> Aquacomputer >ekwb> heatkiller >Alphacool
> 
> I wouldnot touch anything else


Dunno about this ranking; in my experience everything is pretty much the same quality, including Bykski... it's just how much you want to pay.

EKWB is kind of on my ****list... their coatings even up to 2015 were trash, and they're stupidly overpriced nowadays. $280 for a CPU waterblock? Get the eff outta here!


----------



## dr/owned

Thanh Nguyen said:


> Do we need 5 mΩ, 3 mΩ, or 8 mΩ? Thanks.


----------



## xkm1948

bolagnaise said:


> Oh nice, i need to play around with my curves as well



How are your results? Man, this new forum format makes searching for posts difficult.


----------



## HyperMatrix

dr/owned said:


> Dunno this ranking, in my experience everything is pretty much the same quality including Bykski....just how much you want to pay.
> 
> EKWB is kind on my ****list....coatings even up to 2015 were trash and they're stupidly overpriced nowadays. $280 for a CPU waterblock? Get the eff outta here!


AquaComputer blocks are entirely different from the rest. They use precision engineering to use direct contact with paste instead of thick thermal pads where possible. And they're the only one to offer an active cooled backplate option.


----------



## Nizzen

Norwegians are climbing on the ladder 









3DMark Port Royal Hall of Fame


----------



## dr/owned

HyperMatrix said:


> AquaComputer blocks are entirely different from the rest. They use precision engineering to use direct contact with paste instead of thick thermal pads where possible. And they're the only one to offer an active cooled backplate option.


I last had them on my 680 Lightnings, and if I'm remembering right they still used thermal pads for the VRM and memory; maybe that's changed recently. I ended up shearing one of the standoffs and had to order a new block. Other than that, I don't remember anything particularly special about them. I remember them not machining any "helpers" into the o-ring groove to add friction, so it took forever for me to reassemble the block, with the o-ring constantly popping out.

EDIT: Oh yeah, they mildly annoyed me when I called them out on their Skylake spacer plate (for direct die) not being properly deburred after laser cutting. They basically blew me off like it wasn't a problem, when I had pressure-sensitive paper showing that it was. "German precision"... was more German arrogance.


----------



## HyperMatrix

dr/owned said:


> I last had them on my 680 Lightnings, and if I'm remembering right they still used thermal pads for the VRM and memory; maybe that's changed recently. I ended up shearing one of the standoffs and had to order a new block. Other than that, I don't remember anything particularly special about them. I remember them not machining any "helpers" into the o-ring groove to add friction, so it took forever for me to reassemble the block, with the o-ring constantly popping out.
> 
> EDIT: Oh yeah, they mildly annoyed me when I called them out on their Skylake spacer plate (for direct die) not being properly deburred after laser cutting. They basically blew me off like it wasn't a problem, when I had pressure-sensitive paper showing that it was. "German precision"... was more German arrogance.


VRMs still use pads. Memory is direct contact. I will not accept any German bashing here. They are our superiors.


----------



## mirkendargen

HyperMatrix said:


> VRMs still use pads. Memory is direct contact. I will not accept any German bashing here. They are our superiors.


"German engineering", the greatest marketing campaign of all time.


----------



## HyperMatrix

mirkendargen said:


> "German engineering", the greatest marketing campaign of all time.


There is truth to it, though. That doesn't mean every single thing made by German hands is automatically amazing. But it does mean that there is, generally speaking, a higher level of care and concern given to engineering by German companies. I'm not sure how much of it is government regulation with regard to export quality controls, but I have found German or Japanese products to be better than what you normally get elsewhere. Italy makes some great products as well, with a lot of ingenuity, but not to the same degree with regard to quality.

Just to be clear... I'm not stating every German product is going to be better than every product from France, or Australia, for example. But if you had to blindly pick between a German-engineered product and an identical French-engineered one, and you had to do this 1000 times, I'm fairly certain the odds would be more than just slightly in favor of the Germans.


----------



## Carillo

I need my waterblock now!  15th place so far



https://www.3dmark.com/pr/384058


----------



## denishiza

Hello everyone, I guess I'm an official member now. Just ordered my 3090 FE this morning at Best Buy.

I also got the 3080, for my second workstation.


----------



## J7SC

denishiza said:


> Hello everyone, I guess I'm an official member now. Just ordered my 3090 this morning at Best Buy.


'grats ! Which one did you get ? Up here in the Great White North, still either no 3090s in stock or sold out at the major retailers (MemoryExpress, NewEgg.ca, BestBuy.ca)


----------



## Baasha

Falkentyne said:


> Best Buy had an epic drop of Founder's Edition 3090's and 3080's today. Got an order in for a 3090 FE.
> Someone said there were 10,000 cards dropped.
> They also got some Gigabyte 3080 cards also.


Please tell me you are joking.

I was F5'ing all morning on Nvidia's website and then had to run some errands. I never got a notification but 10,000 cards? PLEASE TELL ME YOU ARE JOKING!

I AM SO ANGRY RIGHT NOW! 

I NEED TWO RTX 3090 FFS.


----------



## Falkentyne

Baasha said:


> Please tell me you are joking.
> 
> I was F5'ing all morning on Nvidia's website and then had to run some errands. I never got a notification but 10,000 cards? PLEASE TELL ME YOU ARE JOKING!
> 
> I AM SO ANGRY RIGHT NOW!
> 
> I NEED TWO RTX 3090 FFS.


No, there was stock for about 10 minutes, which is an eternity compared to the 10-seconds-and-gone stocks we've been seeing.
Nvidia basically decided to stop using Digital River and started using Best Buy for all their sales, and they got a pretty massive supply drop.
They got 3080 FEs too, but those sold out faster (obviously) since they're less than half the price.


----------



## badjz

Galax 3090 under water and running well
Avg temp 45°C and stock boost is 1950 MHz


----------



## Baasha

Falkentyne said:


> No, there was stock for about 10 minutes, which is an eternity compared to the 10 second and gone stocks we've been seeing.
> Nvidia basically decided to stop using digital river and started using Best Buy for all their sales, and they got a pretty massive supply drop.
> They got 3080 FE's too but those sold out faster (obviously) since they're less than half the price.


I can't believe this. I kept F5'ing Nvidia's website thinking that's where the stock would be, never checked BB, and I was out for a few hours only to come back to this.

I feel like punching the wall rn..


----------



## denishiza

J7SC said:


> 'grats ! Which one did you get ? Up here in the Great White North, still either no 3090s in stock or sold out at the major retailers (MemoryExpress, NewEgg.ca, BestBuy.ca)


Thanks, the FE. I also got the 3080 for my second workstation. I was clicking the Add To Cart button for about 10 minutes to be able to purchase; it helped that I was logged in and had all my info saved on my Best Buy account for a fast checkout.


----------



## denishiza

Baasha said:


> Please tell me you are joking.
> 
> I was F5'ing all morning on Nvidia's website and then had to run some errands. I never got a notification but 10,000 cards? PLEASE TELL ME YOU ARE JOKING!
> 
> I AM SO ANGRY RIGHT NOW!
> 
> I NEED TWO RTX 3090 FFS.


Yep it's true. I got a 3090 FE and 3080 FE at Best Buy this morning too.


----------



## J7SC

FYI - just watched this 3090 Strix test @ Hardware Unboxed
(btw 18 + 5 = 23...)


----------



## HyperMatrix

J7SC said:


> FYI - just watched this 3090 Strix test @ Hardware Unboxed
> (btw 18 + 5 = 23...)


He didn't explain his OC methodology at all. But he said his OC was only pulling 420W, so we have no idea what the max OC on the card would actually be with fans cranked, or how the clocks held up during intensive workloads like Port Royal. Waste of a review. Plebs getting premium cards and not knowing what to do with them.


----------



## vmanuelgm

Thanh Nguyen said:


> We need 5om or 3om or 8om? Thanks.


You can try 5mOhm or 8mOhm


----------



## vmanuelgm

GanMenglin said:


> How many shunts did you modded? I’ve only apply 2 of 8pin on my ventus, it shows the power limit still be the cap of performance.


I modded all of them, even the tiny ones, but those caused FPS instability, so I took them off.

Right now I have everything shunted except the tiny ones and the PCIe one.


----------



## olrdtg

Thanh Nguyen said:


> We need 5om or 3om or 8om? Thanks.


3mOhm. Not 5ohm, 3ohm or 8ohm. 3mOhm if you are replacing shunt resistors, 5mOhm if you are stacking.

Here is a link to 3mOhm Panasonic current sense resistors

Here is a link to 5mOhm Panasonic current sense resistors

I recommend replacing instead of stacking to avoid any clearance issues with your GPU cooler. If you do replace, you need a steady hand. I highly recommend using a heat gun and some tweezers. Use the heat gun to melt the solder, then tweezers to carefully lift the resistor away while not disturbing any other components. Yes it is fine to heat the components around the resistor. Once removed, put down some solder paste (Found here) over the contact pads, carefully place the new resistor down and use the heat gun on low airflow mode & high heat. Do not use high airflow at this point as you will likely blow the resistor right off the board. Heat it up until all of the solder paste melts, and then repeat for the remaining resistors. It took me about 35 minutes to do as I was taking my time and being as careful as possible.

If you don't have steady hands or don't feel comfortable removing the originals, definitely grab the 5mOhm resistors linked above. Using 5mOhm will give you around 732W max power, and using an 8mOhm will give you 594W max, so it's really up to preference, but I'd personally go with the 5mOhm so you have more headroom to work with. And remember you can always dial in your OC and power usage using Afterburner to find the sweet spot for temps & clocks. You can also use some solder paste & a heat gun here too, but I'd recommend using a soldering iron if you're stacking. Just carefully solder a 5mOhm resistor on top of the originals. Make absolutely sure that it will not cause clearance issues before you start though! I would do this by using some scotch tape to tape a resistor on top of each, then installing the cooler to make sure it fits properly before soldering.

Also be mindful of the fact that even after raising the max power draw this way, you will eventually hit another wall -- max voltage. I've only hit max voltage with a +200/+1200 MHz OC on my 3090 FE though, so as long as you're not pushing for the top of the leaderboard on benchmarks using LN2, you should be just fine. For my daily use, once I get my water block I'm probably going to be running an OC of +100/+700, and the +700 only if I can figure out some sort of active cooling for the memory on the back of the card.
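The wattage figures above follow from simple resistor math: the card infers current from the voltage drop across the shunt, so lowering the effective shunt resistance makes it under-read power by the ratio of stock to effective resistance. Here's a minimal sketch of that scaling, assuming 5 mOhm stock shunts and a ~366 W base power limit (my assumptions; they happen to reproduce the ~732 W and ~594 W figures quoted above, but check your own card's values before modding):

```python
def parallel(r1, r2):
    """Effective resistance of a stacked shunt (two resistors in parallel)."""
    return r1 * r2 / (r1 + r2)

def modded_limit(stock_limit_w, r_stock, r_eff):
    """The card under-reads power by r_eff / r_stock, so the real limit scales up."""
    return stock_limit_w * r_stock / r_eff

R_STOCK = 5.0   # mOhm, assumed stock shunt value
BASE_W = 366.0  # W, assumed firmware power limit

# Stacking 5 mOhm on top of 5 mOhm -> 2.5 mOhm effective -> limit doubles
print(modded_limit(BASE_W, R_STOCK, parallel(R_STOCK, 5.0)))  # 732.0

# Stacking 8 mOhm -> ~3.08 mOhm effective -> ~595 W (the ~594 W quoted above)
print(round(modded_limit(BASE_W, R_STOCK, parallel(R_STOCK, 8.0))))  # 595

# Replacing the 5 mOhm outright with 3 mOhm -> straight 5/3 ratio
print(modded_limit(BASE_W, R_STOCK, 3.0))  # 610.0
```

Stacking puts the new resistor in parallel with the stock one, which is why a 5-on-5 stack exactly doubles the limit, while an outright replacement just scales by the plain resistance ratio.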


----------



## jomama22

It wouldn't let me add anything to cart during the drop. Was pretty frustrating tbh. Not sure what magic needed to happen, but I tried for 30 minutes straight and got nothing.


----------



## Nitemare3219

Am I using the wrong version of Afterburner for my FE, or have they just not ironed out the kinks? Downloaded the latest beta a few days ago. Cannot get custom fan curve to work at all. Manual static fan setting needs to be set about 10% higher than what I actually want the fan to run at. And the voltage curve has a mind of its own... settings frequently jump up a bin or 2 regardless of what I set. Takes so much tinkering to get a profile set correctly and even then it doesn't always load correctly and I have to adjust the curve again. W. T. F.


----------



## olrdtg

Nitemare3219 said:


> Am I using the wrong version of Afterburner for my FE, or have they just not ironed out the kinks? Downloaded the latest beta a few days ago. Cannot get custom fan curve to work at all. Manual static fan setting needs to be set about 10% higher than what I actually want the fan to run at. And the voltage curve has a mind of its own... settings frequently jump up a bin or 2 regardless of what I set. Takes so much tinkering to get a profile set correctly and even then it doesn't always load correctly and I have to adjust the curve again. W. T. F.


Download the latest beta version of Afterburner from MSI's website. 

Here is a direct link to MSI Afterburner 4.6.3 Beta 2 as of 10/09/2020: MSI Afterburner 4.6.3 Beta 2


----------



## slopokdave

Has anyone flashed FE bios to an AIB card?


----------



## olrdtg

slopokdave said:


> Has anyone flashed FE bios to an AIB card?


That unfortunately wouldn't work. The FE board is completely different. This generation the FE board is not reference but a custom design, so its BIOS wouldn't work on any AIB card, just as an AIB BIOS wouldn't work on an FE card.


----------



## Bilco

Struggling to search this thread with the new forum, but is there a bios for the TUF that raises the power limit without disabling the ports in the back?


----------



## Stampede

badjz said:


> View attachment 2461525
> 
> 
> Galax 3090 under water and running well
> Avg temp 45 and stock boost is 1950hz


Hi badjz,
An EKWB block on your card? I am interested to know how the backplate cools the memory chips on the back. Does the backplate get very hot to the touch?

45°C doesn't seem very impressive at stock. What happens when 480W is being drawn by the card?


----------



## badjz

Cools the backplate with about 10 thermal pads. Not hot to touch to be honest..

Edit - actually it is hot to touch!!

Max wattage seems to be 350 on this card. Can't move the power target beyond 100%...

Seems stable at +200 core, boosts to 2025-2050 MHz...


----------



## slopokdave

olrdtg said:


> That unfortunately wouldn't work. The FE board is completely different. This generation the FE board is not reference but rather a custom board, so it's BIOS wouldn't work on any AIB card, just as any AIB BIOS wouldn't work on a FE card.


Most of the "popular" AIBs aren't reference either, so I'm not sure that logic follows.

But if it doesn't work, it doesn't work..


----------



## Stampede

badjz said:


> Cools the backplate with about 10 thermal pads. Not hot to touch to be honest..
> 
> Edit - actually it is hot to touch!!
> 
> Max wattage seems to be 350 on this card. Can't move power target beyond 100...
> 
> Seems stable at +200 core, boosts to 2025 - 2050...


Not sure how this affects performance or card longevity, but it looks like we're going to need actively cooled backplates. Do the Aquacool blocks perform any better or about the same?


----------



## Spiriva

PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB









Backside










Closeup










EK Waterblock installed.










EK Backplate installed.

*___*

The thermal pads PNY used on the 3090 were the worst I've ever seen. There was no "peeling" them off; it was more or less scraping whatever was stuck on the memory etc. away from the card. It came off really easily though.


----------



## dante`afk

aw man how did i miss the drop on best buy.....aaaaaaaaaaaahhhhhhhhhhhh


so nvidia stopped selling on their own page and only sells via BB now for the US...pathetic.


----------



## acegutta

I was able to get an XC3 Ultra 3090 from Microcenter. What OC results has anybody got?


----------



## LordGurciullo

So... is it really worth getting the Strix BIOS on my FTW Ultra? 30 watts? I'm not doing any water cooling... I think I'll just be temperature limited... I'm pretty stable at a 150/350 overclock... Could I really push it to 180/400?


----------



## HyperMatrix

LordGurciullo said:


> So... Is it really worth me getting the strix BIOS on my ftw ultra? 30 watts? I'm not doing any water cooling ... i think ill just be temperature limited... I'm pretty stable at 150/350 overclock... Could I really push it to 180/400?


Do a Port Royal run. Compare the “average clock speed” it gives you at the end to your set/max clock speed. If your average speed is significantly lower, then you’re being throttled somewhere, usually by power. If you do see you are being throttled, you can check GPU-Z to see what’s causing it. If it’s power, then flashing the Strix BIOS will likely help maintain your clocks with less throttling. Also, the difference is 40W, so ask yourself if an extra 9% power is important.


----------



## GanMenglin

vmanuelgm said:


> I modded all of them, even the tiny ones, but these caused fps unstability, so took them off.
> 
> Right now I have all shunted except the tiny ones and the pciexpress one.


It seems the PCIe shunt is the most important one; when PCIe power reaches the limit, which is usually 75W to 78W, the card will stop pulling power from the 8-pins.


----------



## HyperMatrix

GanMenglin said:


> It seems the pcie shunt is the most important one, when pcie power reach the limit which usually is 75w to 78w, the card will stop pulling power from 8pins.


Someone keeps saying that without showing proof. Vmanuelgm’s card already pulls 550W.


----------



## LordGurciullo

HyperMatrix said:


> Do a port royal run. Compare the “average clock speed” it gives you at the end to your set/max clock speed. If your average speed is significantly lower, then you’re being throttled somewhere. Usually by power. If you do see you are being throttled you can check GPU-Z to see what’s causing the throttling. If it’s power, then flashing the Strix bios will likely help maintain your clocks with less throttling. Also the difference is 40W. So ask yourself if an extra 9% power is important.


It's a little lower than the set clock speed, about 60 MHz, but I think my temps will go up with it... is there any downside or risk to it? Don't want any issues with warranty if I need it.


----------



## vmanuelgm

GanMenglin said:


> It seems the pcie shunt is the most important one, when pcie power reach the limit which usually is 75w to 78w, the card will stop pulling power from 8pins.


Not in my case; same result with or without the PCIe shunt, one or multiple resistors.

Igor's Lab said the chip cuts power at 500W. I am seeing 550W, and the card starts to throttle to avoid going above 600W.

Don't know how 3x8-pin cards are behaving. An unlocked BIOS for 2x8-pin cards would be nice, to check whether it helps the shunt mod allow higher consumption.


----------



## HyperMatrix

LordGurciullo said:


> It's a little lower than the set clock speed, about 60 MHz, but I think my temps will go up with it... is there any downside or risk to it? Don't want any issues with warranty if I need it.


Only danger is if it breaks and you can’t flash the original bios back and the manufacturer is able to get the card up and running to check its bios. But as for an extra 40W causing the card to break in the first place? No. Not a solid card like the FTW3. Keep in mind the power limit only says how much power your card will use if it needs it. So in a situation where it was throttling due to power limitations, it will draw more power now. Otherwise it’s not just using more power for no reason when it’s not needed.


----------



## Manya3084

I swear I am not doing anything stupid......

I'm not content with 9th place on the Time Spy extreme leaderboard...


----------



## c4rm

I have both an FTW 3 Ultra and an FE 3090 coming in the mail next week. I'll only be keeping one of them, and I'm very torn. I've read the last 20 pages of this thread hoping to be swayed one way or the other. I'm coming from a 2080 Ti XC3 Ultra that overclocks quite well. I prefer the aesthetic of the 3090 FE, but I don't want to leave performance on the table by passing on the FTW 3. For those of you with either of these cards, what made you pick it?


----------



## GTANY

c4rm said:


> I have both an FTW 3 Ultra and an FE 3090 coming in the mail next week. I'll only be keeping one of them, and I'm very torn. I've read the last 20 pages of this thread hoping to be swayed one way or the other. I'm coming from a 2080 Ti XC3 Ultra that overclocks quite well. I prefer the aesthetic of the 3090 FE, but I don't want to leave performance on the table by passing on the FTW 3. For those of you with either of these cards, what made you pick it?


FTW3 because of its higher power limit + high probability to be flashable with the Kingpin bios when it will be available.


----------



## torqueroll

c4rm said:


> I have both an FTW 3 Ultra and an FE 3090 coming in the mail next week. I'll only be keeping one of them, and I'm very torn. I've read the last 20 pages of this thread hoping to be swayed one way or the other. I'm coming from a 2080 Ti XC3 Ultra that overclocks quite well. I prefer the aesthetic of the 3090 FE, but I don't want to leave performance on the table by passing on the FTW 3. For those of you with either of these cards, what made you pick it?


I have the FE right now and also still waiting on my Strix order. I had to get the FE once Jensen held that thing in his hand and called it BFGPU. I really like the aesthetic and build quality. That feeling when you pick it up the first time lol. A brick of 2.2 kg. Made me chuckle a little too.

The silicon lottery has a much bigger impact on the performance you'll get than the power limit imho. FE is already 400W. Strix has 480W. The frequency curve tapers off hard at that point. There are several FEs on the 3D mark leaderboard even with "low" 400W so it's not like you are guaranteed to get less if you choose to get one. It's all random. For benchmark of course you want as high powerlimit as you can but in actual practical use it won't really matter.

I'm personally not missing any performance in practical use at all. I actually prefer to undervolt it just below the powerlimit to avoid freq fluctuations. I can set a higher offset on a specific voltage when the freq/voltage is locked and due to this I can also get much better efficiency and no noise at all.

This is overclock.net so I do understand that we want as much powerlimit as possible. I am getting a Strix also after all, but if I had to keep only one I would actually choose to keep my FE. The engineering that went into that card is amazing.


----------



## LordGurciullo

The FE is just ****ing gorgeous and beautiful and not full of shiny crap RGB. plus 330 dollars less... I'd like both though


----------



## Nitemare3219

torqueroll said:


> I have the FE right now and also still waiting on my Strix order. I had to get the FE once Jensen held that thing in his hand and called it BFGPU. I really like the aesthetic and build quality. That feeling when you pick it up the first time lol. A brick of 2.2 kg. Made me chuckle a little too.
> 
> The silicon lottery has a much bigger impact on the performance you'll get than the power limit imho. FE is already 400W. Strix has 480W. The frequency curve tapers off hard at that point. There are several FEs on the 3D mark leaderboard even with "low" 400W so it's not like you are guaranteed to get less if you choose to get one. It's all random. For benchmark of course you want as high powerlimit as you can but in actual practical use it won't really matter.
> 
> I'm personally not missing any performance in practical use at all. I actually prefer to undervolt it just below the powerlimit to avoid freq fluctuations. I can set a higher offset on a specific voltage when the freq/voltage is locked and due to this I can also get much better efficiency and no noise at all.
> 
> This is overclock.net so I do understand that we want as much powerlimit as possible. I am getting a Strix also after all, but if I had to keep only one I would actually choose to keep my FE. The engineering that went into that card is amazing.


Love my FE as well. Looks great, built like a tank, and cost base MSRP. Have you dialed in what freq/voltage you're targeting yet? I'm thinking .850v may be best to keep 100% stable under the power limit. So far I'm around 1900 MHz/+250 memory at that voltage but need to update everything today and try some more to see where I end up since the version of Afterburner I've been using is apparently not working properly.


----------



## kot0005

Spiriva said:


> PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Backside
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Closeup
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK Waterblock installed.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK Backplate installed.
> 
> *___*
> 
> The thermal pads PNY used on the 3090 were the worst I've ever seen. There was no "peeling" them off; it was more or less scraping whatever was stuck on the memory etc. away from the card. It came off really easily though.



So how are the VRAM temps on the back after being under full load for at least 30 mins lol? Even the underside backplate temps would be nice to know


----------



## synergon

is this really a crap chip?
thanks guys


----------



## ValSidalv21

Got my hands on the TUF couple of days ago.










Looks like I'm one of the very few people running it on a 5820k


----------



## NoDoz

I can't find one to buy to save my life. I'm reading some people have 2 already; if you decide you want to sell one, not at scalper prices, PM me. Thanks. To sweeten the pot, I have a Caselabs case I'm willing to trade.


----------



## chispy

GanMenglin said:


> It seems the pcie shunt is the most important one, when pcie power reach the limit which usually is 75w to 78w, the card will stop pulling power from 8pins.


* Disclaimer * Do it at your own risk; none of us will be responsible if you damage your card!

I would like to correct you on this advice. I have just talked to Elmor and Roman (Der8auer), both engineers and old friends of mine. They told me the most important and helpful shunt mods are the ones on the 2x8-pin power connectors, as that will open up the power limit for most RTX 3090s: you will get a 675W available power limit without shunt modding the PCI Express.

The PCI Express shunt mod is only helpful if you are running sub-zero temps and adding voltage to the vcore. There is also no need to do all of them if you are running at ambient temps, as the 2x8-pin power connector shunt mod is more than enough.

Also, the statement you made, "when pcie power reach the limit which usually is 75w to 78w, the card will stop pulling power from 8pins", is completely wrong; the card will still pull more power from the 2x8-pin even when the PCI Express reaches 75~80W. This information comes directly from two of the top engineers who do volt mods on a daily basis for extreme overclocking, Elmor and Der8auer (Roman).

There you have it; this is the most accurate information for everyone on the shunt mods topic. I hope this helps.

Kind regards: Angelo

** Ninja edit ** I have just talked to another top overclocker and engineer, Ronaldo (rbuass) from Brazil, and his findings were the same as Elmor's and Der8auer's, so the information provided in this post is 100% confirmed and accurate ***
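For what it's worth, that 675W figure is consistent with a simple budget calculation, if you assume the firmware budgets the standard 150W per 8-pin connector and ~75W for the PCIe slot (my assumptions, not something confirmed by Elmor or Der8auer): a 1:1 shunt stack halves the sensed current on each 8-pin rail, doubling its effective budget, while the unmodded slot budget stays put.

```python
PER_8PIN_BUDGET_W = 150   # assumed stock firmware budget per 8-pin connector
PCIE_SLOT_BUDGET_W = 75   # assumed slot budget, shunt left unmodded
STACK_FACTOR = 2          # 1:1 shunt stack halves sensed current -> 2x real power

# Two modded 8-pin rails plus the untouched slot budget
effective_limit = 2 * PER_8PIN_BUDGET_W * STACK_FACTOR + PCIE_SLOT_BUDGET_W
print(effective_limit)  # 675
```

Treat this as a sanity check on the number, not a guarantee of how any particular BIOS does its power accounting.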


----------



## Gryzor

I've had a 3090 KFA2 for 4 days now. Temps are great (about 60 degrees at 100% load), but it has a power limit of 100%, not surpassing 350W. Even the 3090 FE is better!
Do you have any experience or recommendation about flashing another BIOS (for example MSI, which has the same display ports and connectors, just 390W)?
Thanks for your help


----------



## Nizzen

Gryzor said:


> I have since 4 days a 3090 KFA2. Temps are great (about 60 degrees with 100% load), but it has a powerlimit of 100%, not surpassing 350W. Even 3090 FE is better!!!.
> Do you have any experience or recommendation about flashing another bios (for example, MSI, that has the same display ports, connectors, just 390W)?
> Thanks for your help


Shunt mod it. It's the only way to fix the power limit properly.


----------



## slopokdave

Nizzen said:


> Shunt mod it. Only way to fix the powerlimit a proper way.


Ugh stop tempting us.  Any definite/go to guides for first timers? 3090 TUF....


----------



## chispy

slopokdave said:


> Ugh stop tempting us.  Any definite/go to guides for first timers? 3090 TUF.... Or any services known to do this in the US?













ERJ-M1WSF5M0U Panasonic Electronic Components | Resistors | DigiKey


Order today, ships today. ERJ-M1WSF5M0U – 5 mOhms ±1% 1W Chip Resistor 2512 (6432 Metric) Current Sense Metal Element from Panasonic Electronic Components. Pricing and Availability on millions of electronic components from Digi-Key Electronics.




www.digikey.com


----------



## Nizzen

chispy said:


> I would like to correct you on this advice. I have just talked to Elmor and Roman ( Der8auer ) both are engineers and old friends of mine. They told me the most important and helpful shunt mods are the 2x8 power connector as that will open up the power limit for most rtx 3090 as you will get a 675w available power limit without shunt mod the pci express.
> 
> The pci express shunt mod is only helpful if you are running sub-sero temps and adding voltage to the v.core , also no need to do all of them if you are running on ambient temps as with the 2x8 pin power connectors shunt mod is more than enough.
> 
> Also the statement you made " when pcie power reach the limit which usually is 75w to 78w, the card will stop pulling power from 8pins " is completely wrong the card will still pulling more power from the 2x8 pin even when pci express reachs 75~80w . This information comes directly from 2 of the top engineers who do volt mods on a daily basis for extreme overclocking Elmor and Derbauer ( Roman ) .
> 
> There you have it , this is the most accurate information for everyone on the shunt mods topic. I hope this helps.
> 
> Kind regards: Angelo
> 
> ** Ninja edit** I have just talked to another top overclocker and engineer Ronaldo ( rbuass ) from Brazil and his findings were the same as Elmor and Der8bauer , so it is 100% confirmed and accurate the information provided on my this post ***


Best post in this thread! Voting for "sticky" post


----------



## slopokdave

chispy said:


>


Thanks..so should I do just the 2x8pin or all 5? I'm not planning on any voltage control.


----------



## shiokarai

Any way to safely remove ASUS warranty sticker on the GPU core screw? Or somewhere to buy the same sticker to replace? Asus is a pain to RMA when sticker is removed...


----------



## chispy

slopokdave said:


> Thanks..so should I do just the 2x8pin or all 5? I'm not planning on any voltage control.


You only need the 2x8-pin power connector ones, like I did on my Asus TUF Gaming OC RTX 3090 in the picture, and that's it; your 3090 will be fully open and not held back by power limits. These cards do need a higher power limit to begin stretching their legs and to maintain higher clocks. I hope this helps.

Kind Regards: Angelo


----------



## chispy

shiokarai said:


> Any way to safely remove ASUS warranty sticker on the GPU core screw? Or somewhere to buy the same sticker to replace? Asus is a pain to RMA when sticker is removed...


Use a hair dryer or heat gun, tweezers, and a sharp little knife or razor blade. Boom and done! That's how I have done it before 😁


----------



## shiokarai

chispy said:


> Use hair blow drier or heat gun and tweezers and a sharp little knife or razor blade. Boom and done !  that's how i have done it before 😁


Heat gun + tweezers + razor blade I have, thanks!


----------



## GanMenglin

chispy said:


> I would like to correct you on this advice. I have just talked to Elmor and Roman ( Der8auer ) both are engineers and old friends of mine. They told me the most important and helpful shunt mods are the 2x8 power connector as that will open up the power limit for most rtx 3090 as you will get a 675w available power limit without shunt mod the pci express.
> 
> The pci express shunt mod is only helpful if you are running sub-sero temps and adding voltage to the v.core , also no need to do all of them if you are running on ambient temps as with the 2x8 pin power connectors shunt mod is more than enough.
> 
> Also the statement you made " when pcie power reach the limit which usually is 75w to 78w, the card will stop pulling power from 8pins " is completely wrong the card will still pulling more power from the 2x8 pin even when pci express reachs 75~80w . This information comes directly from 2 of the top engineers who do volt mods on a daily basis for extreme overclocking Elmor and Derbauer ( Roman ) .
> 
> There you have it , this is the most accurate information for everyone on the shunt mods topic. I hope this helps.
> 
> Kind regards: Angelo
> 
> ** Ninja edit** I have just talked to another top overclocker and engineer Ronaldo ( rbuass ) from Brazil and his findings were the same as Elmor and Der8bauer , so it is 100% confirmed and accurate the information provided on my this post ***


Thanks for all your confirmation. Speaking of this, could you please confirm with them about the MSI X Trio's shunt mods: if I want to mod the 3x8-pin shunts, which 3 are they?


----------



## chispy

GanMenglin said:


> Thanks for all your confirmation. Speak of this, could you please confirm with them about the MSI X trio's shunt mods: if I want to mod the 3 x 8pins shunts, which 3 are they?
> 
> View attachment 2461555


The answer is the top 3 resistors closest to the 3x8-pin power connectors: the two on top for connectors 1 and 2, and the single one for connector 3. That's it.


----------



## GanMenglin

chispy said:


> The answer is the top 3 resistors closer to the 3x8 pin power connectors , the ones on top for 2 and the single one 3 one , that's it.


Got it. Thanks!


----------



## escapee

Managed to secure 3rd spot in Port Royal with the legendary 4790k + 3090 Gigabyte OC combination.

Made a short video to celebrate my win


----------



## Gryzor

sk1nz said:


> I have the MSI 3090 Ventus 3X OC flashed to *Gigabyte RTX 3090 Gaming OC*. It works fine.


Hi,
Is it safe to flash the same Gigabyte BIOS onto a KFA2/Galax card? It won't break my card? I'm a bit afraid of doing that


----------



## holyshade

escapee said:


> Managed to secure 3rd spot in Port Royal with the legendary 4790k + 3090 Gigabyte OC combination.
> 
> Made a short video to celebrate my win


Hah, and this is why I need a Strix BIOS. Might go crossflash my Gaming X Trio now. Was puzzled at a 4790K beating a 7700K


----------



## zhrooms

Gryzor said:


> Hi,
> It´s sure to flash the same Gigabyte bios onto a KF2 Galax? Don´t will break my card? I´m a bit afraid of doing that


Is it the KFA2 3090 SG? Can you check GPU-Z what the power limit is? (GPU Z > Advanced > NVIDIA BIOS)










Default, Maximum and Adjustment Range


----------



## AlKappaccino

I took some pictures of my FTW3 (Non Ultra) Not the best quality, but maybe helpful for some.



Spoiler: Images


----------



## Falkentyne

chispy said:


> ERJ-M1WSF5M0U Panasonic Electronic Components | Resistors | DigiKey
> 
> 
> Order today, ships today. ERJ-M1WSF5M0U – 5 mOhms ±1% 1W Chip Resistor 2512 (6432 Metric) Current Sense Metal Element from Panasonic Electronic Components. Pricing and Availability on millions of electronic components from Digi-Key Electronics.
> 
> 
> 
> 
> www.digikey.com
> 
> 
> 
> 
> 
> View attachment 2461553


Sorry to bother you, Chispy,
I tried to ask this before, but would a 4H Pencil work for a shunt mod?
Several people said it might, but that the benefit wouldn't be that great; at least there wouldn't be any risk like there is with soldering.


----------



## DrunknFoo

Falkentyne said:


> Sorry to bother you, Chispy,
> I tried to ask this before, but would a 4H Pencil work for a shunt mod?
> Several people said it might, but the benefit wouldn't be that great, but at least there wouldn't be any risk like soldering can be.



LOL old school shunt modding! love it
hmmm, I don't think pencil lead is conductive enough anymore to carry the current. You can try drawing a line on paper and see if you get a continuity reading on a multimeter


----------



## J7SC

escapee said:


> Managed to secure 3rd spot in Port Royal with the legendary 4790k + 3090 Gigabyte OC combination.
> 
> Made a short video to celebrate my win


'grats ! If I didn't know the history of Jay w/ this on YT, I might have concluded that it was a bit rough - but since I do (especially re. @LordGurciullo), I was just laughing my head off. I've got an old 4790K 'special batch' laying around here somewhere, just no 3090s yet


----------



## Gryzor

zhrooms said:


> Is it the KFA2 3090 SG? Can you check GPU-Z what the power limit is? (GPU Z > Advanced > NVIDIA BIOS)
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Default, Maximum and Adjustment Range


Yes, it is. Default 350, Maximum 350, Minimum 100:


----------



## GTANY

AlKappaccino said:


> I took some pictures of my FTW3 (Non-Ultra). Not the best quality, but maybe helpful for some.
> 
> 
> 
> Spoiler: Images
> 
> 
> 
> 
> View attachment 2461560
> 
> View attachment 2461561
> View attachment 2461562


Thank you! I am going to cut copper plates to cool the VRM and RAM modules; the PCB photograph is useful for evaluating how difficult the job will be.


----------



## chispy

Falkentyne said:


> Sorry to bother you, Chispy,
> I tried to ask this before, but would a 4H Pencil work for a shunt mod?
> Several people said it might, but the benefit wouldn't be that great, but at least there wouldn't be any risk like soldering can be.


Hi Falkentyne, a shunt mod with a pencil won't work, as this mod needs to pass far more current than pencil lead can carry. Have you tried another BIOS with higher power limits? Since I know it is reasonable for some people to choose not to do the shunt mods, as the GPU will lose its warranty, this might be a better option for you. I hope this helps.

Kind Regards: Angelo


----------



## HyperMatrix

chispy said:


> * Disclaimer * Do it at your own risk , none of us will be responsible if you damage your card !
> 
> I would like to correct you on this advice. I have just talked to Elmor and Roman (der8auer), both engineers and old friends of mine. They told me the most important and helpful shunt mods are the ones on the 2x 8-pin power connectors, as that will open up the power limit for most RTX 3090s: you will get a 675W available power limit without shunt modding the PCI Express slot.
> 
> The PCI Express shunt mod is only helpful if you are running sub-zero temps and adding voltage to the vcore; there is also no need to do all of them if you are running at ambient temps, as the 2x 8-pin power connector shunt mod is more than enough.
> 
> Also, the statement you made, "when PCIe power reaches the limit, which usually is 75W to 78W, the card will stop pulling power from the 8-pins", is completely wrong: the card will still pull more power from the 2x 8-pin even when PCI Express reaches 75-80W. This information comes directly from two of the top engineers who do volt mods on a daily basis for extreme overclocking, Elmor and der8auer (Roman).
> 
> There you have it , this is the most accurate information for everyone on the shunt mods topic. I hope this helps.
> 
> Kind regards: Angelo
> 
> ** Ninja edit ** I have just talked to another top overclocker and engineer, Ronaldo (rbuass) from Brazil, and his findings were the same as Elmor's and der8auer's, so the information provided in this post is 100% confirmed and accurate ***


Thank God for this post. I’ve been telling people for over a week that it’s a stupid idea to shunt your pcie slot and that it makes no sense but for some reason more and more people kept believing it. This should put that talk to bed. Cheers.


----------



## Gryzor

kx11 said:


> alright finally got the Galax 3090 SG installed and i think it's actually cooling very well with that 4th fan but ugly
> 
> View attachment 2461158
> 
> you can see that i had to use the lower PCI slot (3rd one) to avoid the 4th fan hitting the ram sticks and DIMM.2 slot, other than the 4th fan installation issues this card is not that big, i bet it's a lot smaller than FE 3090, my case is O11D XL btw


Hi, did you flash another BIOS to increase the default 100% power limit?


----------



## kx11

Gryzor said:


> Hi, did you flashed with other bios to increase default 100% powerlimit?


 no, i don't want to break it tbh


----------



## Gryzor

kx11 said:


> no, i don't want to break it tbh


There are many folks here that flashed other 3090 brands (I read about the TUF and Palit) with the Gigabyte Gaming OC BIOS (390W vs 350W). Those cards have 2x 8-pin connectors. I have a KFA2 3090 like you, and I'm thinking of flashing it - I'm seriously limited by the power limit, between 1700-1800 MHz at 100% load. Seriously, do you think it's dangerous?


----------



## bmgjet

Falkentyne said:


> I tried to ask this before, but would a 4H Pencil work for a shunt mod?
> Several people said it might, but the benefit wouldn't be that great, but at least there wouldn't be any risk like soldering can be.


No, it won't work, since the resistance tolerance is quite sensitive.
Remove too much resistance by creating a path with the lead and the card will lock itself to 400 MHz.

I've got a no-solder, easily removable method I'll be posting up next week. I'm just making sure it's still removable after 2 weeks.

(Conductive paint on the contacts between the shunt and the stacked shunt, then liquid electrical tape to help hold it in place and protect it from shorting against the heatsink.)
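For anyone wondering why stacking a shunt changes anything at all, and why a pencil trace can't: the card senses current through the voltage drop across the shunt, and parallel resistances lower that drop. A quick illustrative calc (the 5 mOhm stock value matches the Panasonic part linked earlier; the pencil-trace resistance is a rough guess):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 5e-3  # assumed 5 mOhm stock current-sense shunt, in ohms

# Stacking an identical 5 mOhm shunt halves the sensed resistance, so the
# card under-reads current (and power) by roughly 2x.
stacked = parallel(R_STOCK, 5e-3)
print(stacked * 1e3, "mOhm effective,", R_STOCK / stacked, "x power headroom")

# A graphite pencil trace is on the order of hundreds of ohms; in parallel
# with 5 mOhm it changes essentially nothing, matching the replies above.
penciled = parallel(R_STOCK, 500.0)  # 500 ohms is a rough guess
print(round(R_STOCK / penciled, 6), "x headroom from a pencil trace")
```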


----------



## bolagnaise

Gryzor said:


> There are many folks here that flashed other 3090 brands (readed about TUF and PALIT) with the gigabyte gaming OC bios (390W vs 350W). Those cards have 2x8 pin connectors. I have a KFA2 3090 like you, I´m thinking to flashing it - I´m seriously limited by powerlimit, between 1700-1800 mhz with 100% load -. Seriously, do you think its dangerous?


Correct, I just flashed my Zotac Trinity 3090 with the Gigabyte BIOS. No issues; it's safe and very easy.


----------



## Gryzor

bolagnaise said:


> Correct, i just flashed my Zotac Trinity 3090 with the Gigabyte bios. No issues, its safe and very easy.


Maybe with Zotac there is no risk, but with GALAX... I don't know if the Gigabyte BIOS would apply safely.


----------



## Sync0r

I've just flashed my Zotac Trinity to Gigabyte OC 390w bios as well. Just stuck a water block on for good measure. Current score with a quick bit of clocking. https://www.3dmark.com/pr/388482


----------



## chispy

Sync0r said:


> I've just flashed my Zotac Trinity to Gigabyte OC 390w bios as well. Just stuck a water block on for good measure. Current score with a quick bit of clocking. https://www.3dmark.com/pr/388482
> 
> View attachment 2461596
> 
> View attachment 2461599


Nice score on that Zotac Trinity; indeed, these cards need a higher power limit to shine.


----------



## bolagnaise

Sync0r said:


> I've just flashed my Zotac Trinity to Gigabyte OC 390w bios as well. Just stuck a water block on for good measure. Current score with a quick bit of clocking. https://www.3dmark.com/pr/388482


Nice! I'm still waiting on my EK block; what temps were you getting? I'm currently benching on air and cannot break 14K. What was your OC?


https://www.3dmark.com/pr/385303


----------



## bolagnaise

Posting for posterity: here is the original ZOTAC Trinity 3090 BIOS, as GPU-Z can't currently save it for some reason.





ZOTAC Trinity 3090 BIOS - Google Drive


----------



## Alex24buc

I think I will try to flash my Palit 3090, but I haven't used the flash tool to upgrade a BIOS before. Where can I find all the steps to get it done right? Thanks!


----------



## Sync0r

bolagnaise said:


> Nice! Im still waiting on my EK block, what temps where you getting? I'm currently benching on air annd cannot break 14K. What was your OC?
> 
> 
> https://www.3dmark.com/pr/385303


Around 33c with water temp of 21c.


----------



## Sync0r

Core clock is all over the place because of power limit, but stays above 2010Mhz in Control, goes up to 2080. Seems pretty good! Memory overclock was +1250Mhz before I saw it starting to reduce frames.


----------



## bolagnaise

Alex24buc said:


> I think I will try to flash my Palit 3090, but I didn’t use the flash tool for upgrading bios before. Where can I find all the steps to get it done right? Thanks!


Page 1 has a complete guide; it's very easy. Follow the instructions carefully and you will be fine.


----------



## tubnotub1

Well, folks, I am throwing in the towel. Four hours of fiddling with frequency/voltage curves, different BIOSes, and even taking the case apart and using a Vornado fan to drop the GPU temps, and I just cannot hit 20k+ in Time Spy. 19825 will be a number that forever haunts me, at least until I purchase a 5950X. Mediocre silicon is mediocre.


----------



## bmgjet

tubnotub1 said:


> Well, folks, I am throwing in the towel, 4 hours of fiddling w/ frequency/voltage curves, different Bios and even taking the case apart and using a vornado fan to drop the temps of the GPU and I just cannot hit 20k+ in Time Spy. 19825 will be a number that forever haunts me, at least until I purchase a 5950X. Mediocre silicon is mediocre.



Have you tried the other tricks as well?
Run a single screen at 1080p 60Hz,
set graphics quality in the driver to performance mode,
close everything you don't need in the background with Task Manager,
run the 3DMark standalone (not the Steam version).
That stuff above is worth 500 points on my computer vs. how I run daily (3x 4K screens and quality mode).


----------



## tubnotub1

@bmgjet I changed the graphics driver quality and closed down unused tasks, I did not run the stand-alone or run single screen 1080p 60hz. Hmmmmmmmmm. Alright. Fine. Once more into the breach!


----------



## dante`afk

what is the paste der8auer put on the shunts before soldering?


----------



## bmgjet

dante`afk said:


> what is the paste der8auer put on the shunts before soldering?


It's flux, basically an acid that helps soldering by removing corrosion and oxides while you're soldering.
Most solder already has a flux core inside it, so you don't 100% need to use it, but it still makes things a little easier.
Don't take his videos as an example of a good-quality job though, lol; his 3090 soldering is the worst I've seen him do, and you could tell he was in a rush and didn't really care what it looked like.
A good job has it looking like it was placed there at the factory.


----------



## tubnotub1

Oof. 19893. GPU score actually dropped a bit on the run; the CPU score compensated and put me up a couple of points, but no big changes from going single monitor 1080p. Still downloading the standalone client, but 19893 puts me at the top of the leaderboard for my CPU/GPU combo, so I guess that is something.


----------



## SoldierRBT

What are the chances of NVIDIA releasing a higher-TDP vBIOS for the 3090 FE? 400W is a little bit low, and shunt modding a $1500 card seems risky.


----------



## LVNeptune

Has anyone crossflashed the 3090 FE with the Strix? Looking at getting a bit more power. I have resistors on the way to mod the card but would prefer to just bios flash, if possible.


----------



## tubnotub1

Holy hell, we got there: the standalone client, a bit of additional overclock on the CPU, and goosing the VRAM to +1000. Thanks for the tip @bmgjet ! 20036!



https://www.3dmark.com/spy/14484399


----------



## Warocia

I'll try asking again. I have a 3090 TUF. Is there a problem with nvflash? I cannot complete this last step without an error message.

*nvflash64 --protecton

exit

└ Step 27 of 27 - (Optional) Enable Flash Protection ┘

I would like to put protection back on. 

PROGRAMMING ERROR: Error: set EEPROM protection function failed*


----------



## kx11

Gryzor said:


> There are many folks here that flashed other 3090 brands (readed about TUF and PALIT) with the gigabyte gaming OC bios (390W vs 350W). Those cards have 2x8 pin connectors. I have a KFA2 3090 like you, I´m thinking to flashing it - I´m seriously limited by powerlimit, between 1700-1800 mhz with 100% load -. Seriously, do you think its dangerous?


I don't want to damage it because I'm selling it as soon as I get the ROG 3090.


----------



## DrunknFoo

bmgjet said:


> No, It wont work since the resistance tollarance is quite sensitive.
> Remove too much resistance by creating a path with the lead and the card will lock it self to 400mhz.
> 
> Iv got a no solder/easy removable method Ill be posting up next week. Im just making sure its still removable after 2 weeks.
> 
> (Conductive paint, on the contacts between the shunt and the stacked shunt. Then liquid electrical tape to help hold it in place and protect from shorting against the heat sink.)


Rather than the paint, solder paste would probably be better... it's a bit tacky and can assist along with the tape.





No Clean Lead Free Low Temperature Solder Paste 15 Grams - - Amazon.com







this stuff is ideal for shunts, just a dab, and a tap with the iron


----------



## tubnotub1

@Warocia I get the same error on my 3090 TUF when trying to re-enable ROM protection; unsure if it is an nvflash issue or specific to the 3090 TUF, as I don't have another card to test.


----------



## def2att

Warocia said:


> I try ask again. I have 3090 TUF. Is there problem with nflash? I cannot complete this last step without error message.
> 
> *nvflash64 --protecton
> 
> exit
> 
> └ Step 27 of 27 - (Optional) Enable Flash Protection ┘
> 
> I would like to put protecton back on.
> 
> PROGRAMMING ERROR: Error: set EEPROM protection function failed*


I have the same card and receive the same error with --protecton; use it as-is / wait for an nvflash update.


----------



## VickyBeaver

Spiriva said:


> The Thermal pads PNY used on the 3090 was the worse Ive ever seen, There was no "peeling" them off, it was more less scraping w/e that was stuck on the memory etc away from the card. It came off really easy tho.


Still waiting on my EK block, it's taking forever to come in; fun times on crappy pads ahead.
I've got some 8 mOhm resistors coming in, so I'm thinking I'll shunt and run the stock BIOS for just under a 600W power limit.
Though let me know how it does against the power limits with decent cooling on the 390W BIOS and no fan power draw; if it can maintain 2025 or 2050 without power limiting, then perhaps I'll just stick with the 390W BIOS.
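For what it's worth, the "just under 600W" figure checks out if the stock shunts are 5 mOhm and the stock limit is 350W (both assumptions, not confirmed for this particular card):

```python
# Sanity check of "8 mOhm stacked on the stock shunts -> just under 600 W".
# Assumes 5 mOhm stock shunts and a 350 W stock power limit.
r_stock, r_added = 5.0, 8.0                        # milliohms
r_eff = r_stock * r_added / (r_stock + r_added)    # parallel combination
scale = r_stock / r_eff                            # card under-reads power by this factor
print(round(r_eff, 3), "mOhm effective ->", round(350 * scale, 1), "W real limit")
```

With those numbers the effective shunt is ~3.08 mOhm, so the 350W limit stretches to roughly 569W of real draw.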


----------



## Spiriva

VickyBeaver said:


> still waiting on my ek block is taking forever to come in fun times on crappy pads ahead.
> I got some 8mohm resistors coming in so thinking ill shunt and run stock bios for just under a 600w power limit.
> though let me know how it is on the power limits with decent cooling on the 390W bios and no fan power draw tho if it can maintain 2025 or 2050 with out power limiting then perhaps ill just stick with the bios 390w bios.


While playing Call of Duty: Modern Warfare, Doom Eternal and World of Warcraft BfA (the three games I've tried so far with the EK block on), the card sits between 2100-2145 MHz all the time. I have an ambient temp of around 20c inside, and the card is around 44-49c, cooling it alone with a 360 radiator (nothing else is in the loop but a pump, the radiator and the 3090).


----------



## VickyBeaver

Spiriva said:


> While playing Call of Duty modern warfare, Doom eternal and World of Warcraft BfA the card (those 3 games ive tried so far with the EK block on) card sits between 2100 - 2145mhz all the time. I have an ambient temp of around 20c inside, and the card is around 44-49c. Cooling it alone with a 360 radiator. (nothing els is in the loop but a pump radiator and the 3090).


Seems like the proper cooling has really helped a lot then, if that is at 100% usage. I was a bit suspicious of just how much of the power limit must be wasted on the bad stock cooler; seems like it must be a lot.


----------



## HyperMatrix

VickyBeaver said:


> seems like the propper cooling has really helped alot then if that is 100% usage, I was a bit sus of just how much of the power limit must be wasted on bad stock cooler seems like it must be alot.


Just a few points about his claims.

- we don’t know what the clocks drop to under full and proper load (utilizing shader cores, rt, and Tensor cores at once, with high vram use). Port royal is usually a pretty good measure of that.

- If his claim is true, it’s not just because of cooling. If he’s keeping over 2.1GHz under full load with a 390W limit, he has a super golden chip. Not just a golden chip, but super duper ultra chip. But as I said I still have doubts that these clocks would actually stick under full load.

- While cooling definitely helps and is very important, it is a lesser factor compared to the power limit on the 3090. The 3080 does better with cooling because of more headroom. For example, the 3080 FTW3 has a 420W power limit. The 3090 has 20.5% more cores and 2.4x more VRAM (estimated total power usage for 24GB of GDDR6X I think I read was 60W). So you basically need 30W more for the VRAM, and another 20.5% for the additional cores. Meaning to have the same OC potential as a 3080 FTW3 you'd need about a 535W TDP, whereas the 3090 FTW3 only has 440W.

So with the 3090 you’re fighting a major uphill battle in comparison to the 3080. Meaning even under same cooling conditions, you’re still heavily capped by your power limit. Meaning a 390W 3090 maintaining over 2.1GHz is either false or is a complete anomaly. Don’t expect the same to happen with your card.
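The ~535W figure above follows from simple scaling; a sketch of the same arithmetic, using the post's own assumptions (30W extra for the larger VRAM pool, power scaling linearly with CUDA core count):

```python
# Reproduce the quoted estimate: scale the 3080 FTW3's 420 W limit by the
# CUDA core ratio, then add the post's assumed 30 W VRAM delta.
cores_3090, cores_3080 = 10496, 8704
needed = 420 * (cores_3090 / cores_3080) + 30
print(round(needed), "W")  # lands close to the ~535 W quoted above
```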


----------



## Pepillo

HyperMatrix said:


> - While cooling definitely helps and is very important, it is a lesser factor compared to the power limit on the 3090. The 3080 does better with cooling because of more headroom. For example the 3080 FTW3 has a 420W power limit. The 3090 has 20.5% more cores and 2.4x more vram (estimated total power usage for 24GB GDDR6x I think I read was 60W). So basically need 30W more for the VRAM, and another 20.5% for the additional cores. Meaning to have the same OC potential as a 3080 FTW3 you’d need about 535W TDP where’s the 3090 FTW3 only has 440W.


In my tests with the Strix's 480w bios, the 3090's overclocking consumption playing in various games is around 460w (4k Ultra). In benchmarks, it does go up a lot depending on the one used.


----------



## Jordel

HyperMatrix said:


> Just a few points about his claims.
> 
> - we don’t know what the clocks drop to under full and proper load (utilizing shader cores, rt, and Tensor cores at once, with high vram use). Port royal is usually a pretty good measure of that.
> 
> - If his claim is true, it’s not just because of cooling. If he’s keeping over 2.1GHz under full load with a 390W limit, he has a super golden chip. Not just a golden chip, but super duper ultra chip. But as I said I still have doubts that these clocks would actually stick under full load.
> 
> - While cooling definitely helps and is very important, it is a lesser factor compared to the power limit on the 3090. The 3080 does better with cooling because of more headroom. For example the 3080 FTW3 has a 420W power limit. The 3090 has 20.5% more cores and 2.4x more vram (estimated total power usage for 24GB GDDR6x I think I read was 60W). So basically need 30W more for the VRAM, and another 20.5% for the additional cores. Meaning to have the same OC potential as a 3080 FTW3 you’d need about 535W TDP where’s the 3090 FTW3 only has 440W.
> 
> So with the 3090 you’re fighting a major uphill battle in comparison to the 3080. Meaning even under same cooling conditions, you’re still heavily capped by your power limit. Meaning a 390W 3090 maintaining over 2.1GHz is either false or is a complete anomaly. Don’t expect the same to happen with your card.


I see the same behavior on my 3090 as he does on his.

Now, I believe you are correct. It is under very specific circumstances where we are able to run at a high frequency with a load that doesn't demand a lot of core activity, so PL isn't hit.

This does potentially tell us, however, that the GPU is *severely* power limited. I'm curious to know if there's a hard limit set somewhere other than BIOS which will stop us, or if BIOS is the only limiting factor.


----------



## Johneey

My Palit 3090 with Waterblock and 390W BIOS OC +220 // +950 
Port Royal


https://www.3dmark.com/pr/389793


Timespy 


https://www.3dmark.com/spy/14472014


Superposition


----------



## Gryzor

Johneey said:


> My Palit 3090 with Waterblock and 390W BIOS OC +220 // +950
> Port Royal
> 
> 
> https://www.3dmark.com/pr/389793
> 
> 
> Timespy
> 
> 
> https://www.3dmark.com/spy/14472014
> 
> 
> Superposition
> View attachment 2461610


What BIOS did you flash? Gigabyte? Do you know if it's safe to flash my KFA2 3090?


----------



## Johneey

Gryzor said:


> What bios did you flashed? Gigabyte?


Yes! But I don't know why my Port Royal score is so low...


----------



## Johneey

Gryzor said:


> What bios did you flashed? Gigabyte? do you know if is safe to flash my KFA2 3090?


Sure flash it dude 👍🏽


----------



## HyperMatrix

Pepillo said:


> In my tests with the Strix's 480w bios, the 3090's overclocking consumption playing in various games is around 460w (4k Ultra). In benchmarks, it does go up a lot depending on the one used.


Well, we're still dealing with two things. First off, other than just the BIOS, there's the power delivery system: more phases on the actual ROG Strix, as well as full MLCCs. So the actual ROG Strix may do better with a 480W BIOS than other cards flashed with its BIOS. Beyond that, there's also the silicon lottery factor. You can only increase the power limit by so much before you need more voltage, all based on following your PerfCap reason in GPU-Z.


----------



## HyperMatrix

Jordel said:


> I see the same behavior on my 3090 as he does on his.
> 
> Now, I believe you are correct. It is under very specific circumstances where we are able to run at a high frequency with a load that doesn't demand a lot of core activity, so PL isn't hit.
> 
> This does potentially tell us, however, that the GPU is *severely* power limited. I'm curious to know if there's a hard limit set somewhere other than BIOS which will stop us, or if BIOS is the only limiting factor.


Super easy way to find out. Do a port royal run and post the link here. Then we can compare your set/max clock to the average clock you actually get throughout the run.


----------



## Sky3900

Has anyone tried flashing a higher power limit bios to the FE? Like the strix or ftw3 bios? Is there any way to know if they are compatible or what the risk would be if they are not?


----------



## shiokarai

Johneey said:


> Yes ! But I don’t know why my port royal score is so low ...


Low? seems nice for a "barebones" Palit 3090


----------



## Spiriva

HyperMatrix said:


> text.


As I typed before, I have _only_ tested the games I listed before, and in those games I have the results I typed before. I know it can differ from game to game and from benchmarks; that's why I listed which games I have tested.

This is in CoD MW after ~30 mins of gameplay.

Settings in MSI AB.


----------



## Johneey

Spiriva said:


> As I typed before, I have _only_ tested the games I typed before. And in those games i have the results I typed before. I know it can be different from game to game to benchmark, thats what I typed what games I have tested.
> 
> This is in CoD MW after ~30mins of game play.
> 
> Settings in MSI AB.


Wow, this clock speed is very high. Can you do a Time Spy please?


----------



## Nizzen

Johneey said:


> Wow this clock Speed is very high. Can u do a timedpy please ?


If the game is very CPU-bound, the GPU clock will be higher. Just tested this in Battlefield V with lower (60%) resolution scaling: 2100 MHz stable GPU clock. At 100% resolution scaling in 3440x1440, the GPU was ~1950 MHz and power limited...


----------



## HyperMatrix

Spiriva said:


> As I typed before, I have _only_ tested the games I typed before. And in those games i have the results I typed before. I know it can be different from game to game to benchmark, thats what I typed what games I have tested.
> 
> This is in CoD MW after ~30mins of game play.
> 
> Settings in MSI AB.


Yes and as can be seen in your screenshot, your GPU usage is at 76%. Your clock speed doesn’t matter if your GPU isn’t being fully utilized anyway. what’s the difference between having 2130MHz with 76% utilization at 153FPS, 3000MHz with 50% utilization at 153FPS, or 1800MHz with 98% utilization at 153FPS? Nothing. They’re all 153FPS.

People need to be aware of what speeds they’re advertising because it gives other readers the wrong impression with regard to expectations. Yes you said you got those speeds in those games. But you didn’t mention that you got them in those games with low GPU usage.

The only clock speed that matters is the one you can maintain under full load.


----------



## Spiriva

HyperMatrix said:


> Yes and as can be seen in your screenshot, your GPU usage is at 76%. Your clock speed doesn’t matter if your GPU isn’t being fully utilized anyway. what’s the difference between having 2130MHz with 76% utilization at 153FPS, 3000MHz with 50% utilization at 153FPS, or 1800MHz with 98% utilization at 153FPS? Nothing. They’re all 153FPS.
> 
> People need to be aware of what speeds they’re advertising because it gives other readers the wrong impression with regard to expectations. Yes you said you got those speeds in those games. But you didn’t mention that you got them in those games with low GPU usage.
> 
> The only clock speed that matters is the one you can maintain under full load.


I seem to have upset you a lot? I only answered what MHz I've seen and in what games. I did not say "oh yes, it's like this in every game, even the games I have not tried".


----------



## HyperMatrix

Spiriva said:


> Ive seem to have upset you alot? I only answered what type of mhz Ive seen and in what games. I did not say that "oh yes its like this in every game, even the games i have not tried"


I know. And if you saw, I never replied to your post. Someone else quoted your claim of "2130MHz ALL THE TIME" because they were under the impression that it was based on actual sustainable clock speeds. I shared with them that it would be impossible for your claims to be valid under load and that they should not expect 2.1GHz on a 390W BIOS. And then you responded to me.


----------



## vmanuelgm

Metro Exodus benchmark, shunt-modded 3090 (no mod on the PCIe resistor currently), with the card lowering voltage and clocks to avoid going above 550-600W.

Have to find a 1V-2200 stable card, would be nice!!!


----------



## Wrathier

Does anyone have a Gigabyte 3090 Gaming OC? I read about power connector issues, them being loose on the first batch. Anyone with that issue?


----------



## Alex24buc

I tried to flash the BIOS, but after I try to disable protection it doesn't work. It seems it doesn't want to turn off BIOS flash protection, because it didn't change the BIOS after I accepted the firmware update. I still have the Palit one.


----------



## GanMenglin

Cavokk said:


> Igor from Igors lab has done an article on undervolting the 3090 - pretty much inline with my own findings (and results on my 3090 X Trio at 800mv and 850mv) however torgueroll on this forum achieves 1980Mhz stable clock on his Founders edition at 850mv which implies that he must have a golden sample.. According to Igor His Asus TUF 3090 1785Mhz [email protected] is a very good chip - .001 bin. Luckily I am reaching two steps higher clock at 800mv so mine is probably good as well (not like torquerolls though - thats a golden one)
> 
> C


My MSI X Trio can run [email protected], but can't be as good as @torqueroll , 3dmark stress test will crash almost immediately at [email protected]


----------



## Alex24buc

Alex24buc said:


> I try to flash bios but after I try to disable protection it doesn`t work. It seems it doesn`t want to turn off bios flashing protection because it didn`t change the bios after I accepted firmware update. I stil have the Palit one


Can anybody help me please? Should I disable first the drivers from device manager?


----------



## Sync0r

Alex24buc said:


> Can anybody help me please? Should I disable first the drivers from device manager?


That's fine mine said the same thing when I removed protection. Save your current bios then try a flash.


----------



## Sync0r

My Time spy score https://www.3dmark.com/spy/14494441

Quick 1 and done run.


----------



## Amdux

Wrathier said:


> Does anyone have a Gigabyte 3090 Gaming OC? I read about power connector issues. Them being lose on first batch. Anyone with that issue?


No issue for me with my Gigabyte Gaming OC, but it is an issue for some cards. There was a link to a Gigabyte post in this thread where the community manager said that they had found the problem and fixed it for newer cards, and promised that they will pick a faulty card up the next day and replace it.


----------



## Nitemare3219

GanMenglin said:


> My MSI X Trio can run [email protected], but can't be as good as @torqueroll , 3dmark stress test will crash almost immediately at [email protected]


Have you determined what you can run at .850V? 1980 MHz seems impossible for me as well. For gaming, my stable clock tops out at 1890 MHz at .850V on the stock FE cooler, with +250 memory and a mild fan speed that keeps temps around 65 degrees, gaming in 4K. Destiny 2 with these settings hovers around 320-360 watts, but Hunt: Showdown pushes it all the way to 395 watts at times... at .850V! That game is power hungry.

Seems I can get away with a bin or 2 higher in Port Royal than what is stable in game.


----------



## Johneey

Sync0r said:


> My Time spy score https://www.3dmark.com/spy/14494441
> 
> Quick 1 and done run.
> View attachment 2461643


CPU score looks very low; I already get 15400 at 5.3.
Is yours throttling?


----------



## Wrathier

Amdux said:


> No issue for me with my Gigabyte Gaming OC, but it is an issue for some cards. There was a link to a gigabyte post in this thread where the community manger said that they had found the problem and fixed it for newer cards and promised that they will pick it up next day and replace it if yours is faulty.


The card I receive tomorrow was bought from a scalper at auction, so it's definitely first batch. I could put it back up at auction, but I'm really tempted to keep it and hope for the best.

The Gigabyte posts in this thread only apply to UK customers, and I'm in Sweden. I have written esupport about this issue and what to do if I encounter it.


----------



## Sync0r

Johneey said:


> cpu score looks very low, i already get 15400 with 5.3
> is urs throtteling?


It might have been erroring at 5.5GHz, will try lower.


----------



## AlKappaccino

Maybe some of you know this. The FTW3 has this stuff on it; Steve calls it "thermal putty".

Though I can't find anything to buy in case I need to replace it. What exactly is that stuff?


----------



## Tias

Wrathier said:


> The card I receive tomorrow is bought from a scalper at auction so it’s definitely first batch. I have it around at auctions, but really tempted to keep it and hope for the best.
> 
> The gigabyte posts in this thread only applies to UK customers and I’m in Sweden. I have written esupport about this issue and what to do if I encounter it.


Two friends of mine got the gigabyte 3090 gaming oc, and none of them have this problem. Just be careful when you connect the two 8 pins and you will be good.


----------



## Alex24buc

Sync0r said:


> That's fine mine said the same thing when I removed protection. Save your current bios then try a flash.


Thanks for your answer. I flashed it; it works a little better than the Palit BIOS, my boosts are more consistent now. The only disadvantages are that the card is 4 degrees hotter because of the higher 390W BIOS, and one DisplayPort output doesn't work (somebody also mentioned that on the forums). I hope it will work again with the original BIOS. I will keep this BIOS for now and test it.


----------



## Sync0r

Johneey said:


> cpu score looks very low, i already get 15400 with 5.3
> is urs throtteling?


Can't get my cpu score any higher, what speed is your RAM running at?


----------



## Nizzen

Sync0r said:


> Can't get my cpu score any higher, what speed is your RAM running at?


Fix cpuscore:


----------



## LVNeptune

HyperMatrix said:


> Super easy way to find out. Do a port royal run and post the link here. Then we can compare your set/max clock to the average clock you actually get throughout the run.





https://www.3dmark.com/pr/388847


This is the best I could get on my 3090 FE, and this is still with some PL throttling taking place. After reading this thread it makes so much more sense why the 3090 FE has less overclock headroom from the factory: the extra vRAM is taking up additional power. I am seriously considering just selling this off. I have 3mOhm resistors on order to remove the power limit completely. I plan on limiting everything through voltage curves, but would prefer not to even do that. The FTW3 seems to have a much higher ceiling, and I am pretty sure my power supply can push the additional power I need through just two connectors, assuming the BIOS would even work. I was going to attempt it, but I believe I have to wait for a patched nvflash, as this one doesn't allow a "board ID mismatch" bypass.




Sky3900 said:


> Has anyone tried flashing a higher power limit bios to the FE? Like the strix or ftw3 bios? Is there any way to know if they are compatible or what the risk would be if they are not?


I am still awaiting an answer on that. I would really like to increase the power via BIOS instead of shunt replacements, but I have some 3mOhm resistors on the way to replace them with right now. The 3090 FE is seriously PL limited; even at 100% stock I get PL throttling all over the place. I absolutely HAVE to undervolt to keep PL throttling off.


----------



## Wrathier

Tias said:


> Two friends of mine got the gigabyte 3090 gaming oc, and none of them have this problem. Just be careful when you connect the two 8 pins and you will be good.


Thank you for the feedback.


----------



## Glerox

Is it still possible to do shunt mod with liquid metal like it was for Pascal? I don't trust my soldering skills...


----------



## Thanh Nguyen

https://www.3dmark.com/pr/385152



Why is my GPU score so low? Already on water, Gigabyte BIOS. The card hits the power limit instantly in Time Spy or Port Royal.


https://www.3dmark.com/spy/14444879


----------



## Sync0r

Nizzen said:


> Fix cpuscore:
> View attachment 2461663
> View attachment 2461664


Yeah that memory is much faster than mine. What memory kit is that?


----------



## Nizzen

Sync0r said:


> Yeah that memory is much faster than mine. What memory kit is that?


Team Group Xtreem 4500 C18 8Pack Edition


----------



## Johneey

Sync0r said:


> Can't get my cpu score any higher, what speed is your RAM running at?


Using 3600MHz CL16


----------



## Johneey

Thanh Nguyen said:


> https://www.3dmark.com/pr/385152
> 
> 
> 
> Why my gpu score is so low? Already on water. Gigabyte bios. Card hit power limit instantly when hit timespy or port royal
> 
> 
> https://www.3dmark.com/spy/14444879


is this Stock or ?


----------



## Johneey

Thanh Nguyen said:


> https://www.3dmark.com/pr/385152
> 
> 
> 
> Why my gpu score is so low? Already on water. Gigabyte bios. Card hit power limit instantly when hit timespy or port royal
> 
> 
> https://www.3dmark.com/spy/14444879


*** this CPU score is amazing. Which cooling do you have? And what vcore?


----------



## shiokarai

Assume I have two options: an Asus TUF OC 3090 or an Nvidia FE 3090. Which should I get / which is better?


----------



## Thanh Nguyen

Johneey said:


> is this Stock or ?


I set 1.093v - 2175mhz for the curve. In time spy or port royal, it goes down all the way to 18xx mhz. In game, it stays around 1935-1980.


----------



## Johneey

Thanh Nguyen said:


> I set 1.093v - 2175mhz for the curve. In time spy or port royal, it goes down all the way to 18xx mhz. In game, it stays around 1935-1980.


Then u had bad luck on chip lottery :-(


----------



## Thanh Nguyen

Johneey said:


> Then u had bad luck on chip lottery :-(


I will try to shunt mod it to see if it can hold the clock or not. Waiting for those resistors.


----------



## psychrage

kot0005 said:


> Dont cheapout on waterblocks... you get what you pay for.
> 
> Aquacomputer >ekwb> heatkiller >Alphacool
> 
> I wouldnot touch anything else


Having owned multiple Bykski products and various Heatkiller & Alphacool products, I'd put Bykski between the two. Bykski is _easily_ higher quality than Alphacool (in my personal experience).


----------



## Sky3900

.


----------



## Johneey

kot0005 said:


> Dont cheapout on waterblocks... you get what you pay for.
> 
> Aquacomputer >ekwb> heatkiller >Alphacool
> 
> I wouldnot touch anything else


I have the Alphacool and got a 6C delta to water with LM under load. Very nice and cheap.


----------



## Sky3900

LVNeptune said:


> https://www.3dmark.com/pr/388847
> 
> 
> This is the best I could get on my 3090 FE and this is still with some PL taking place. After reading this thread it makes so much more sense why the 3090 FE has less overclock headroom from the factory. The extra vRAM is taking up additional power. I am seriously considering just selling this off. I have 3mOhm resistors on order in order to remove the power limit completely. I plan on limiting everything through voltage curves but would prefer not to even do that. The FTW3 seems to have a much higher ceiling and I am pretty sure my power supply can push the additional power I need through just two connectors assuming the bios would even work. I was going to attempt it but I believe I have to wait for a patched nvflash as this doesn't allow "board ID mismatch" bypass.
> 
> 
> 
> I am still awaiting an answer on that. I would really like to increase the power via BIOS instead of shunt replacements. I have some 3mOhm on the way to replace them with right now. The 3090 FE is seriously PL limited. even at 100% stock I get PL throttling all over the place. I absolutely HAVE to undervolt to keep PL throttling off.


Same here, not excited about soldering on a $1500 board. Think I'll wait a few months to see what info comes out on BIOS compatibility before shunt modding.


----------



## mirkendargen

Johneey said:


> I have the alphacool and got 6c Delta to water with lm on load very nice and cheap


Yeah, making waterblocks these days isn't exactly rocket science; every company knows the basic design. It's just a matter of precision in measuring for the design and then making the block so it's as flat and as close as possible to all components... but 3D scanners and CNC mills kind of do that for you anyway.

Ironically, the block I have the biggest problem with in that regard is the EK block that came on my MSI Seahawk-EK 2080 Ti. The standoffs are too tall and require way too much paste between the block and the GPU. Even after repasting it I have a 20C delta between the GPU and my coolant under load. It was apparently a common problem with these cards; another example of EK quality... and I guess MSI quality control.


----------



## Nitemare3219

olrdtg said:


> Download the latest beta version of Afterburner from MSI's website.
> 
> Here is a direct link to MSI Afterburner 4.6.3 Beta 2 as of 10/09/2020: MSI Afterburner 4.6.3 Beta 2


That's the version I'm using. Ran DDU, reinstalled everything in safe mode, still no change. That version of AB is not working properly with my FE card. Not sure why.


----------



## olrdtg

Nitemare3219 said:


> That's the version I'm using. Ran DDU, reinstalled everything in safe mode, still no change. That version of AB is not working properly with my FE card. Not sure why.


Open MSI Afterburner, then open the settings. Make sure you check the box 'Unlock voltage control' and set the drop down accordingly (play with the drop down settings), also check 'Unlock voltage monitoring'



LVNeptune said:


> https://www.3dmark.com/pr/388847
> 
> 
> This is the best I could get on my 3090 FE and this is still with some PL taking place. After reading this thread it makes so much more sense why the 3090 FE has less overclock headroom from the factory. The extra vRAM is taking up additional power. I am seriously considering just selling this off. I have 3mOhm resistors on order in order to remove the power limit completely. I plan on limiting everything through voltage curves but would prefer not to even do that. The FTW3 seems to have a much higher ceiling and I am pretty sure my power supply can push the additional power I need through just two connectors assuming the bios would even work. I was going to attempt it but I believe I have to wait for a patched nvflash as this doesn't allow "board ID mismatch" bypass.
> 
> 
> 
> I am still awaiting an answer on that. I would really like to increase the power via BIOS instead of shunt replacements. I have some 3mOhm on the way to replace them with right now. The 3090 FE is seriously PL limited. even at 100% stock I get PL throttling all over the place. I absolutely HAVE to undervolt to keep PL throttling off.


3mOhm does not remove the power limit completely. You are only raising the power limit to around 610W. If you stack a 3mOhm on top of the 5mOhm resistors on the GPU currently, you would raise the power limit to around 976W. Stacking 2 3mOhm resistors (replacing the originals) would net you near 732W on the power limit.

Here is a chart for resistor and power values:


Replace or Stack | Original Resistor | New (or stacked) Resistor | Max Power Draw
Original/Vanilla | 5 mOhm            | N/A                       | 366 W
Stacked          | 5 mOhm            | 3 mOhm                    | 976 W*
Stacked          | 3 mOhm            | 3 mOhm                    | 732 W*
Stacked          | 5 mOhm            | 8 mOhm                    | 594 W*
Replaced         | 5 mOhm            | 3 mOhm                    | 610 W*

**Numbers coming from 'ShuntMod Calculator' by bmgjet*
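Since a couple of people have asked for the calculation: the chart above is consistent with the BIOS computing power from a shunt it assumes is still the stock value, so the real ceiling scales by R_orig/R_eff. A minimal sketch of that reading (my reconstruction of the calculator's numbers, not bmgjet's actual code):

```python
# Shunt-mod scaling implied by the numbers above: the BIOS assumes the stock
# shunt value, so with a lower effective resistance it under-reads current and
# the real power ceiling rises by R_orig / R_eff.
# (My reading of the ShuntMod Calculator output, not its actual code.)

BASE_POWER_W = 366.0   # stock ceiling used by the chart
R_ORIG_MOHM = 5.0      # stock shunt value the BIOS assumes

def parallel(r1: float, r2: float) -> float:
    """Effective resistance when a new shunt is stacked on an existing one."""
    return (r1 * r2) / (r1 + r2)

def max_power(r_eff_mohm: float, r_orig_mohm: float = R_ORIG_MOHM) -> float:
    """Real power draw at which the BIOS believes it has hit BASE_POWER_W."""
    return BASE_POWER_W * r_orig_mohm / r_eff_mohm

print(round(max_power(3.0)))             # replaced 5 -> 3 mOhm: ~610 W
print(round(max_power(parallel(5, 3))))  # 3 mOhm stacked on 5: ~976 W
print(round(max_power(4.0)))             # 4 mOhm replacement: ~458 W
```

The same formula also reproduces the 4mOhm / 457.5W figure quoted a few posts further down, which is why I trust the reading.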


----------



## Sky3900

olrdtg said:


> Open MSI Afterburner, then open the settings. Make sure you check the box 'Unlock voltage control' and set the drop down accordingly (play with the drop down settings), also check 'Unlock voltage monitoring'
> 
> 
> 
> 3mOhm does not remove the power limit completely. You are only raising the power limit to around 610W. If you stack a 3mOhm on top of the 5mOhm resistors on the GPU currently, you would raise the power limit to around 976W. Stacking 2 3mOhm resistors (replacing the originals) would net you near 732W on the power limit.


Could you share the calculation for power draw vs. resistance on these boards? Looking at what you've posted I'm estimating 4.5mOhm resistors would provide around the 450W max power draw I'm looking for.


----------



## Gofspar

AlKappaccino said:


> Maybe some of you know this. The FTW3 has this stuff on it; Steve calls it "thermal putty".
> 
> Though I can't find anything to buy in case I need to replace it. What exactly is that stuff?








Amazon.com: K5 PRO viscous thermal paste for thermal pad replacement 20g (Apple iMac, Sony PS4 & PS3, XBOX, Acer Aspire etc): Computers & Accessories






K5 Pro is the Kryonaut of thermal putty. The stuff is legit.


----------



## olrdtg

Sky3900 said:


> Could you share the calculation for power draw vs. resistance on these boards? Looking at what you've posted I'm estimating 4.5mOhm resistors would provide around the 450W max power draw I'm looking for.


I used the ShuntMod Calculator program by bmgjet to get my numbers. According to the software, a 4mOhm replacement resistor would yield you 457.5W max power draw.


----------



## Wrathier

Could a few people be kind and share their "undervolted", more stable curves with me, please?

I'm pretty new at this and don't want my speed to fluctuate so much. I want the most out of my card, of course, so input is very welcome. I use the Gigabyte Gaming OC, so I have the 390W BIOS.


----------



## Alex24buc

Do you guys think it is ok for long term to use the Gigabyte Gaming OC bios on my Palit card?


----------



## Sky3900

olrdtg said:


> I used the ShuntMod Calculator program by bmgjet to get my numbers. According to the software, a 4mOhm replacement resistor would yield you 457.5W max power draw.





Alex24buc said:


> Do you guys think it is ok for long term to use the Gigabyte Gaming OC bios on my Palit card?


Just keep the GPU temps under 80C... Also verify that Palit used thermal pads between the VRMs, memory, and heatsinks/backplate. These components are designed to run hotter than the GPU, but I'd be careful with OCing if there is no cooling on them.


----------



## Alex24buc

Sky3900 said:


> Just keep the GPU temps under 80C... Also verify that Palit used thermal pads between the VRMs, memory, and heatsinks/backplate. These components are designed to run hotter than the GPU, but, I'd be carful with OCing if there is no cooling on them.


I understand. My temps are in the low 70s with the power limit slider in MSI at maximum (390W). I didn't overclock it with the new BIOS because the stock core clock with the Gigabyte BIOS (1775MHz) is already higher than with my Palit BIOS (1725MHz). Regarding the thermal pads, I will check that. Thanks!!


----------



## Thoth420

Say I am not comfortable with a shunt mod or BIOS flash (I'll be reselling this card eventually for the Vision). How much performance (approx, obviously) am I leaving on the table with a Gigabyte Eagle OC 3090 running at 3440x1440 120Hz and occasionally on the LG CX 55 OLED? Not at the same time, of course.

PS Running a Seasonic Prime TX 1000 paired with a 3900x


----------



## Sky3900

Thoth420 said:


> Say I am not comfortable with a shunt mod or bios flash(reselling this card eventually for the Vision). How much performance (approx obv) am I leaving on the table with a Gigabyte Eagle OC 3090 running on a 3440 x 1440 120hz and occasionally the LG CX 55 OLED? Not at the same time of course.
> 
> PS Running a Seasonic Prime TX 1000 paired with a 3900x


I'm no expert, but, I'd say ~5% with stock cooling and maybe ~10% with a super awesome custom water loop.

Speaking from experience with my EVGA 2080 Ti XC3 Ultra: I was getting about 1930MHz under heavy load with the stock BIOS at max power (350W). Flashed to the Galaxy 380W BIOS and now I'm stable at 2025MHz under the same conditions. FPS went up about 5%.


----------



## Nitemare3219

Thoth420 said:


> Say I am not comfortable with a shunt mod or bios flash(reselling this card eventually for the Vision). How much performance (approx obv) am I leaving on the table with a Gigabyte Eagle OC 3090 running on a 3440 x 1440 120hz and occasionally the LG CX 55 OLED? Not at the same time of course.
> 
> PS Running a Seasonic Prime TX 1000 paired with a 3900x


You can increase performance by dialing in the voltage/clocks for your card. Basically trial and error. Stock settings cause a lot of clock fluctuations because the card uses a lot more voltage than it actually needs, causing you to hit the power limit sooner than necessary.


----------



## Thoth420

Sky3900 said:


> I'm no expert, but, I'd say ~5% with stock cooling and maybe ~10% with a super awesome custom water loop.
> 
> Speaking from experience with my EVGA 2080ti XC3 Ultra. I was was getting about 1930mhz under heavy load with stock bios at max power (350W). Flashed to the Galaxy 380W bios and now I'm stable at 2025mhz same conditions. FPS went up about 5%.





Nitemare3219 said:


> You can increase performance by dialing in the voltage/clocks for your card. Basically trial and error. Stock settings cause a lot of clock fluctuations because it’s using a lot more voltage than it actually needs causing you to hit power limit before necessary.


Thanks peeps! I am making a long-distance move in early 2021, so I am stuck with an AIO at best for the moment as far as water goes. I do have parts for a loop, just not sure when I would have time to get to all that.
I can live with 5%. I am coming from a 2070 Super Strix OC edition. I had a 2080 Ti earlier in its gen, but it was overkill for anything I played. My expectations are fairly reserved, but I really just want the better RTX performance (which the 2xxx series just isn't capable of) for new titles coming out.


----------



## dante`afk

Thanh Nguyen said:


> https://www.3dmark.com/pr/385152
> 
> 
> 
> Why my gpu score is so low? Already on water. Gigabyte bios. Card hit power limit instantly when hit timespy or port royal
> 
> 
> https://www.3dmark.com/spy/14444879


shunt mod.


----------



## LVNeptune

Thanh Nguyen said:


> https://www.3dmark.com/pr/385152
> 
> 
> 
> Why my gpu score is so low? Already on water. Gigabyte bios. Card hit power limit instantly when hit timespy or port royal
> 
> 
> https://www.3dmark.com/spy/14444879


Is this a joke? I _WISH_ I could hit that on my 3090 FE.


----------



## Sheyster

LVNeptune said:


> Has anyone crossflashed the 3090 FE with the Strix? Looking at getting a bit more power. I have resistors on the way to mod the card but would prefer to just bios flash, if possible.


So the Strix is 3 x 8-pin, which doesn't work with any 2 x 8-pin card that I know of.

The FE is a single 12-pin Molex Micro-Fit 3.0 connector.

I would not bet anything on the Strix BIOS working with an FE, in fact I'd say it's a solid bet it won't work.


----------



## LVNeptune

Sheyster said:


> So the Strix is 3 x 8-pin, which doesn't work with any 2 x 8-pin card that I know of.
> 
> The FE is a single 12-pin Molex Micro-Fit 3.0 connector.
> 
> I would not bet anything on the Strix BIOS working with an FE, in fact I'd say it's a solid bet it won't work.


I believe that could be the case. It looks like the single 12 pin is capable of being powered by 3x 8pin but who knows right now.


----------



## bmgjet

LVNeptune said:


> I believe that could be the case. It looks like the single 12 pin is capable of being powered by 3x 8pin but who knows right now.


The FE power layout is too different. Yeah, the hardware could take the extra power easily, but the BIOS is going to freak out when it sees a different number of phases and sensors.


----------



## Sky3900

bmgjet said:


> The FE power layout is too different. Yeah the hardware could take the extra power easily. But the bios is going to freak out when it sees a different ammount of phases and sensors.


Anyone know what all is involved in making a custom BIOS? Could you extract the stock FE BIOS and tweak the appropriate settings to raise the power limit in a ROM editor of some sort?

Wonder if NiBiTor will work...


----------



## LVNeptune

bmgjet said:


> The FE power layout is too different. Yeah the hardware could take the extra power easily. But the bios is going to freak out when it sees a different ammount of phases and sensors.


Unfortunate. I have 3mOhm on the way. 



Sky3900 said:


> Anyone know what all is involved in making a custom bios? Could you extract the stock FE bios and tweak the appropriate settings to raise the power limit in a rom editor of some sort?
> 
> Wonder if NiBiTor will work...


I don’t think custom bioses have been possible for some time.


----------



## bmgjet

Sky3900 said:


> Anyone know what all is involved in making a custom bios? Could you extract the stock FE bios and tweak the appropriate settings to raise the power limit in a rom editor of some sort?
> 
> Wonder if NiBiTor will work...


Editing the BIOS is easy (I've already found a few tables for the power limit and fan stuff just by comparing BIOSes).
Correcting the checksum is easy (it's just the total sum of all bytes added together).

Signing the file as validated by Nvidia: near impossible.
Maxwell was the last series where you could do this, since NVFlash got cracked into accepting non-signed ROM files.
They fixed that though, so it wouldn't happen again.
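For what it's worth, the byte-sum fix bmgjet describes is the legacy option-ROM style checksum. A toy sketch of that step (the checksum offset here is made up for illustration, and real VBIOS layouts differ; none of this touches the signature, which is the actual blocker):

```python
# Toy sketch of a byte-sum checksum fix (legacy option-ROM style: the sum of
# every byte in the image must be 0 mod 256). The checksum offset is a
# placeholder for illustration; real VBIOS layouts differ, and this does
# nothing about the Nvidia signature, which is the real obstacle.

def fix_checksum(rom: bytearray, checksum_offset: int) -> bytearray:
    """Adjust one byte so the whole image sums to 0 mod 256."""
    rom[checksum_offset] = 0                  # exclude the old value from the sum
    rom[checksum_offset] = (-sum(rom)) % 256  # balance the total
    return rom

rom = bytearray(b"\x55\xaa\x10\x00")  # tiny fake image, last byte = checksum
fix_checksum(rom, len(rom) - 1)
assert sum(rom) % 256 == 0
```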


----------



## Sheyster

LVNeptune said:


> I believe that could be the case. It looks like the single 12 pin is capable of being powered by 3x 8pin but who knows right now.


Nvidia's adapter in the box is for 2 x 8-pin into the one 12-pin and I'm sure they are regulating that power delivery. IMHO the only way you'll get more power limit for the FE is via a shunt mod.


----------



## Sky3900

bmgjet said:


> Editing the bios is easy, (Iv already found a few tables for power limit and fan stuff just comparing bioses)
> Correcting the checksum is easy. (Its just total sum off all bytes added together)
> 
> Signing the file as being validated from Nvidia. Near imposiable.
> Maxwell was the last series you could do this since the NVFlasher got cracked to accepting non-signed rom files.
> They fixed that tho so that it wouldnt happen again.


Lame! Wonder why they did this? I mean it's my card... Ought to be able to do whatever I want with it. 

Perhaps Nvidia was getting a bunch of RMAs from people frying their cards.


----------



## Sky3900

Sheyster said:


> Nvidia's adapter in the box is for 2 x 8-pin into the one 12-pin and I'm sure they are regulating that power delivery. IMHO the only way you'll get more power limit for the FE is via a shunt mod.







Guess I better start practicing my soldering skills... I also have some coworkers who are microelectronics techs; it might be a better idea to try and talk one of them into doing the mod.


----------



## LVNeptune

Sheyster said:


> Nvidia's adapter in the box is for 2 x 8-pin into the one 12-pin and I'm sure they are regulating that power delivery. IMHO the only way you'll get more power limit for the FE is via a shunt mod.


I’m aware, I have two of them lol.
There are several reddit posts about a future 3x 8 pin to 12 pin adapter.


----------



## Sheyster

bmgjet said:


> Editing the bios is easy, (Iv already found a few tables for power limit and fan stuff just comparing bioses)
> Correcting the checksum is easy. (Its just total sum off all bytes added together)
> 
> Signing the file as being validated from Nvidia. Near imposiable.
> Maxwell was the last series you could do this since the NVFlasher got cracked to accepting non-signed rom files.
> They fixed that tho so that it wouldnt happen again.


I can't imagine they are using a simple checksum or MD5. It's probably SHA-256 or a proprietary strong hash of some sort. Correcting that is not easy and has not been done, as far as I know.


----------



## Sheyster

LVNeptune said:


> I’m aware, I have two of them lol.
> There are several reddit posts about a future 3x 8 pin to 12 pin adapter.


It probably won't matter without Frankenstein-ing the card somehow. Get your soldering iron ready.


----------



## Sky3900

Sheyster said:


> I can't imagine they are using checksum or MD5. It's probably SHA256 or a proprietary strong hash of some sort. Correcting that is not easy and has not been done as far as I know.


----------



## Stampede

Spiriva said:


> PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB
> 
> Backside
> 
> Closeup
> 
> EK Waterblock installed.
> 
> EK Backplate installed.
> 
> *___*
> 
> The thermal pads PNY used on the 3090 were the worst I've ever seen. There was no "peeling" them off; it was more or less scraping whatever was stuck on the memory etc. away from the card. It came off really easily, though.


Just doing some thinking: has anyone thought of cooling the backplate by placing a CPU block with some thermal paste directly onto the backplate? I'd need to find a way to tighten the block down against the backplate. You could plumb the CPU and GPU in parallel as well.

Do you think there would be any benefit to this mod, or is it not worth the money and effort?

My thinking is that it may bring a watercooled GPU down by another few degrees, for people chasing benchmark scores.


----------



## LVNeptune

Sheyster said:


> It probably won't matter without Frankenstein-ing the card somehow. Get your soldering iron ready.


I am waiting on resistors to come in to mod it.


----------



## Sky3900

Any recommendations on which soldering paste or solder to use? I read somewhere else on this forum that tin solder can change the resistance, and the effect is undesirable for this application.


----------



## Sheyster

LVNeptune said:


> I am waiting on resistors to come in to mod it.


You're braver than I am! I may end up paying someone locally to stack on some resistors. I used to solder quite a bit back in the day (worked in electrical appliance repair part-time while in college), but never anything as small as those resistors.


----------



## nievz

What framerate are you guys getting in BFV at 1440p ultra? Particularly in Operation Underground. My gpu utilization typically stays at 60-70% with fps at 100-120.


----------



## LVNeptune

Sheyster said:


> You're braver than I am! I may end up paying someone locally to stack on some resistors. I used to solder quite a bit back in the day (worked in electrical appliance repair part-time while in college), but never anything as small as those resistors.


These resistors are very big. The MLCCs are about 1/3 the size and are still pretty easy.



Sky3900 said:


> Any recommendations on which soldering paste or solder to use. I read somewhere else on this forum that tin solder can change the resistance and the effect is undesirable for this application.


I plan on using the same solder I use for all my projects. I've literally never heard that before. 63/37 or 60/40 will be fine.


----------



## LVNeptune

olrdtg said:


> text


I noticed your other thread is gone, can you post that picture here?


----------



## twisted0ne

nievz said:


> What framerate are you guys getting in BFV at 1440p ultra? Particularly in Operation Underground. My gpu utilization typically stays at 60-70% with fps at 100-120.


If you're not seeing 100% GPU utilisation then you're CPU capped.


----------



## Sky3900

LVNeptune said:


> These resistors are very big. The MLCCs are about 1/3 the size and are still pretty easy.
> 
> 
> 
> I plan on using the same solder I use for all my projects. Literally never heard what you said before. 63/37 or 60/40 will be fine.


----------



## LordGurciullo

escapee said:


> Managed to secure 3rd spot in Port Royal with the legendary 4790k + 3090 Gigabyte OC combination.
> 
> Made a short video to celebrate my win


Damn bro you went hard. I called him out on twitter but you went hard. I got mad respect for Jay though. I don't even think I'd beat him even if I could but wow crazy score man.


----------



## LordGurciullo

J7SC said:


> 'grats ! If I wouldn't know the history of Jay w/ this on YT, I might have concluded that it was a bit rough - but since I do (especially re. @LordGurciullo), I was just laughing my head off. I got an old 4790K 'special batch' laying around here somewhere, just no 3090s yet


HAHA yeah! Crazy ****. I did it on stock everything. Y'all are nuts with your crazy ****.


----------



## LordGurciullo

tubnotub1 said:


> @bmgjet I changed the graphics driver quality and closed down unused tasks, I did not run the stand-alone or run single screen 1080p 60hz. Hmmmmmmmmm. Alright. Fine. Once more into the breach!


let me know the results man! I didn't do those things either!


----------



## Cholerikerklaus

Hello, I want to shunt mod my 3090 and am now searching for the right resistors. I found out that I want to use the R008 resistor, but which size do I have to buy? There are many different sizes. Can anyone send me a link to the right one?
Sorry for my bad English.


----------



## LordGurciullo

Nizzen said:


> Fix cpuscore:
> View attachment 2461663
> View attachment 2461664


WOW! I can't get my RAM over 4100 17-17-17; at least when I tried, it wasn't happening.


----------



## Sky3900

LordGurciullo said:


> HAHA yah! Crazy **. I did it on stock everything . Ya'll are nuts with your crazy ** .


----------



## nievz

twisted0ne said:


> If you're not seeing 100% GPU utilisation then you're CPU capped.


Normally, I would say I am. But BFV is poorly optimized. I'm wondering what the experience is with Intel rigs at 1440p.


----------



## skywalkermibnasa2

Since no record has been updated in the Fire Strike Ultra HOF, I just wonder if RTX 3090 NVLink systems will support SLI under DX11/DX10, which was supported by previous-generation cards like the 20 and 10 series. I want to play GTA5 with an RT ReShade on an SLI rig, given the poor performance offered by a single RTX 3090.


----------



## Thanh Nguyen

nievz said:


> What framerate are you guys getting in BFV at 1440p ultra? Particularly in Operation Underground. My gpu utilization typically stays at 60-70% with fps at 100-120.


I see 180-240 on my pc.


----------



## olrdtg

LVNeptune said:


> I noticed your other thread is gone, can you post that picture here?


I edited it, so it was awaiting moderation. It's back now.


----------



## megahmad

Hello guys.

My 3090 TUF OC's average boost in GPU-Z is 1840MHz (max 1920, lowest 1805) while running the Heaven benchmark at stock clocks, but with the power limit at 107% and the temp limit at 91C. Temps are 69-70C under load and board power draw is around 375-385W.

My heaven benchmark at extreme preset is 278fps.
I got 19590 - 19670 score on Time Spy with two runs. 10001 on Time Spy extreme. 59.77 FPS on Port Royal. (all graphics scores)

Is that boost clock normal for this card? And are these scores normal for this card at stock?

Also, do you think it's worth flashing the Gigabyte OC gaming BIOS to my TUF OC? I don't care much for benchmarks, will I notice performance increase over stock TUF bios in games?

Thanks.


----------



## GerrodEZ

psychrage said:


> I ordered this one - US $45.47 20% OFF|Bykski GPU Water Block For NVIDIA RTX 3090 3080 Founders Original PCB Radiator + Backplate 5V/12V MB SYNC N RTX3090FE X|Fans & Cooling| - AliExpress. The title description calls out the 3080 as well as the 3090, but everything else on the page only calls out the 3090 FE. Bykski also don't seem to have another SKU specific to the 3080 FE.
> 
> View attachment 2461140
> 
> 
> The dimensions don't line up with the rough dimensions of my card tho. I show from just above the pci slot to the top of the factory heatsink at ~120mm. They show a total height of the block at 127mm. The 186.2mm length looks to be pretty close. Not going to tear my card down to confirm that.I know the top part with the inlet/outlet is at least 30mm. Now I'm nervous...
> 
> I wonder if they've made a revision... And being that I didn't order direct from Bykski, am I getting said revision?.... gah!!!


This block will not fit.
As of today, 12 October 2020, no Bykski block for the 3090 FE has been released yet. I expect a sample in the next few days, which has to be confirmed; only then will the block be delivered to the dealers. (None of the dealers on AliExpress, which by the way are not Bykski itself no matter what the name says, has received 3090 blocks so far.)
All blocks currently in circulation only fit the 3080 FE. The backplate fits, but it will be exchanged for another one, as will the name.


----------



## Carillo

Cholerikerklaus said:


> Hello, I want to shunt mod my 3090, now I am searching for the right resistors. I find out that I am want to use the r008 resistor, but which size I have to buy. There are many difference sizes. Anyone can send me a link for the right one?
> Sorry for my bad English.


The "2512" size. I would recommend 5mOhm if you are going with water.


----------



## Carillo

megahmad said:


> Hello guys.
> 
> My 3090 TUF OC boost avg in GPUz is 1840mhz (max 1920, lowest 1805) while running Heaven Benchmark at stock clock but with power limit to 107% and temp limit to 91c. Temp is 69-70c at load and board power draw is around 375-385w.
> 
> My heaven benchmark at extreme preset is 278fps.
> I got 19590 - 19670 score on Time Spy with two runs. 10001 on Time Spy extreme. 59.77 FPS on Port Royal. (all graphics scores)
> 
> Is that boost clock normal for this card? and are these scores normal for this card at stock?
> 
> Also, do you think it's worth flashing the Gigabyte OC gaming BIOS to my TUF OC? I don't care much for benchmarks, will I notice performance increase over stock TUF bios in games?
> 
> Thanks.


Boost clock behavior always depends on thermals. Some are sitting in a 10 degree room with an open test bench, and some are sitting in a sauna with a closed case and no extra fans. 20 watts more is 20 watts. Flashing takes less than 1 minute, so I would say it's worth a try; see if you gain anything in your use case.


----------



## Cholerikerklaus

Carillo said:


> 2512 size. Would recommend 5 mOhm if you are going with water.


Thanks for the info. Why the 005? Do you think it's not too much?
I am still on water with an Alphacool GPU block and 2x 360 rads. Tomorrow I will get a Mora 360.
These are my highest results:


https://www.3dmark.com/pr/391310


I hope I can get more points then.


----------



## Carillo

Cholerikerklaus said:


> Thanks for the info. Why the 005? Do you think it's not too much?


I know it's not enough. My card is drawing 700 watts in Time Spy with 5 mOhm. You can always reduce the power slider, to let's say 50%.
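For reference, the numbers in this post line up with simple parallel-resistor math. This is a sketch, not something posted in the thread: the 8 mOhm stock value is taken from the R008 part mentioned earlier, and the 2.6x factor assumes the controller keeps reading current through the original shunt value.

```python
# Back-of-envelope math for a shunt mod: stacking a resistor on top of the
# stock current-sense shunt lowers the effective resistance, so the power
# controller under-reads the true draw by a fixed ratio.

def parallel(r1: float, r2: float) -> float:
    """Effective resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

STOCK_SHUNT = 0.008  # 8 mOhm (the R008 part from the quoted post)
STACKED = 0.005      # 5 mOhm stacked on top

r_eff = parallel(STOCK_SHUNT, STACKED)  # ~3.08 mOhm
scale = STOCK_SHUNT / r_eff             # true power / reported power

print(f"effective shunt: {r_eff * 1000:.2f} mOhm")
print(f"true draw is ~{scale:.2f}x what the card reports")
# A reported ~270 W then corresponds to roughly 270 * 2.6 ~ 700 W,
# which matches the Time Spy draw mentioned above.
```

That 2.6x factor is also why pulling the power slider back still leaves plenty of headroom.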


----------



## domenic

Stampede said:


> Just doing some thinking: has anyone thought of cooling the backplate by placing a CPU block with some thermal paste directly onto the backplate? Need to find a way to tighten the block down against the backplate. Can plumb the CPU and GPU in parallel as well.
> 
> Do you think there will be any benefits to this mod, or is it not worth the money and effort?
> 
> My thinking is that it may bring a watercooled GPU down by another few degrees for people chasing benchmark scores.


I am planning on trying / doing the same exact thing. I have an EK Strix block & backplate pre-ordered in anticipation of snagging a Strix 3090. The thought is to use a memory block since it's more of a rectangle. Ordered a bunch of fittings also, so if the planets ever align and I can actually get all of the components needed, I will get out the heat gun and do some engineering on the fly.

Also trying to noodle in my head how to secure the block to the backplate. Thermal paste seems like a good idea, but I'm not sure it will work if the block just floats with the only hold-down pressure coming from the hard-line tubes. Thinking maybe a nylon screw / nut fed from inside the backplate upwards, if there is enough room.

I was inspired by the pic below that someone posted in the thread a while back. I have a Formula X motherboard with in / out ports to cool the VRMs, so it's more complicated in my case since I need to go from MB > GPU backplate > CPU > video card. I eyeballed it and it's doable, I think.


----------



## Stampede

domenic said:


> I am planning on trying / doing the same exact thing. I have an EK Strix block & backplate pre-ordered in anticipation of snagging a Strix 3090. Thought is to use a block for memory since its more of a rectangle. Ordered a bunch of fitting also so if the planets ever align and I can actually get all of the components needed I will get out the heat gun and do some engineering on the fly.
> 
> Also trying to noodle in my head how to secure the block to the backplate. Thermal paste seems like a good idea but not sure if it just floats with the only hold down pressure being the hard line tubes will work. Thinking maybe nylon screw / nut fed from inside the backplate upwards if there is enough room.
> 
> I was inspired by the below pic someone posed in the thread a while back. I have a Formula X motherboard with in / out ports to cool the VRMs so its more complicated in my case since I need to go from MB > GPU backplate > CPU > video card. I eyeballed it and its doable I think.
> 
> 
> View attachment 2461728


Yes, I was thinking that the space under the backplate won't be enough to place the nylon screws exactly where they need to be, so I'm thinking of epoxy-gluing the nylon bolt or nut to the backplate. Not sure how well it will hold, but I can try.

Where did you get that pic? Looks like an Alphacool block. Is that a product they make?

Also waiting for EKWB to send my stuff.

I was thinking of putting it in parallel with the GPU, which means you can keep your original loop, maybe.


----------



## Nizzen

nievz said:


> Normally, I would say I am. But BFV is poorly optimized. I'm wondering what the experience is with intel rigs at 1440p.


BF V is very well optimized. If you have poor performance in BF V, then you either have a slow AMD CPU or are running a slow memory clock on Intel.
BF V fps is flying with a 10900K @ 5.5GHz and 4700C17 tweaked memory.


----------



## ThrashZone

domenic said:


> I am planning on trying / doing the same exact thing. I have an EK Strix block & backplate pre-ordered in anticipation of snagging a Strix 3090. Thought is to use a block for memory since its more of a rectangle. Ordered a bunch of fitting also so if the planets ever align and I can actually get all of the components needed I will get out the heat gun and do some engineering on the fly.
> 
> Also trying to noodle in my head how to secure the block to the backplate. Thermal paste seems like a good idea but not sure if it just floats with the only hold down pressure being the hard line tubes will work. Thinking maybe nylon screw / nut fed from inside the backplate upwards if there is enough room.
> 
> I was inspired by the below pic someone posed in the thread a while back. I have a Formula X motherboard with in / out ports to cool the VRMs so its more complicated in my case since I need to go from MB > GPU backplate > CPU > video card. I eyeballed it and its doable I think.
> 
> 
> View attachment 2461728


Hi,
Nice, but way too many 90 degree fittings though.


----------



## nievz

Nizzen said:


> BF V is very well optimized. If you have poor performance in BF V, then you either have a slow AMD CPU or are running a slow memory clock on Intel.
> BF V fps is flying with a 10900K @ 5.5GHz and 4700C17 tweaked memory.


Really?? I see i9 benchmarks on YT with 66% utilization in BFV. Can you screenshot your Operation Underground utilization? Also make sure it's 1440p ultra, not 4K. That's the only place where I have issues. I'm upgrading my 3700X to the 5900X though 😁


----------



## Johneey

Cholerikerklaus said:


> Thanks for the info. Why the 005? Do you think it's not too much?
> I am still on water with alphacool Gpu block and 2x 360 Rad. Tommorow I will get a Mora 360.
> This are my highest results
> 
> 
> https://www.3dmark.com/pr/391310
> 
> 
> I hope I can get more points then.


I had the same settings. Now with my Mora 360 I get 25°C and 100 points more (14480) at the same clock as you. In the 20-35°C range there isn't much increase anymore.


----------



## Glerox

skywalkermibnasa2 said:


> Since no record has been added to the Fire Strike Ultra HOF, I just wonder if the RTX 3090's NVLink will support SLI under DX11/DX10, which was supported by previous-generation cards like the 20 and 10 series. I want to play GTA5 with an RT reshade on an SLI rig because of the poor performance offered by a single RTX 3090.


Ampere doesn't get SLI profiles from Nvidia. So no, those games won't work with SLI. Ampere only supports native mGPU, and this means, literally, only a handful of games.
I'm quite mad myself because I had a build ready for two watercooled Ampere cards, but I'll have to be satisfied with only one card.


----------



## Gryzor

Has anyone flashed a 3090 KFA2/GALAX board? Any recommendations? I'm not sure about flashing to the Gigabyte Gaming OC BIOS (to increase the power limit from 350W to 390W). I'm afraid of bricking my card. If somebody with the KFA2 has flashed it, please tell me. Thanks.


----------



## domenic

Stampede said:


> yes i was thinking l that the space under the backplate wont be enough to place the nylon screws exactly where it need to be, so thinking of epoxy glueing the nylon bolt or nut to backplate. not sure how well it will hold but Can try.
> 
> where did you get that pic, looks like an Alphacool block. Is that a product they make?
> 
> also waiting for EKWB for send my stuff.
> 
> i was thinking of putting it in parallel with gpu, which means you can keep your original loop, maybe.


I ordered the block below. It's wider than the one in the picture, which I found a ways back in this thread. The EKWB Strix backplate is very large, ~2x the width of this block. Until I can get all of the parts on the table I'm not exactly sure how everything will be laid out. Like you, fastening it down somehow will be the engineering challenge.

Alphacool D-RAM Cooler X6 Universal - Plexi Black Nickel (AC-17274)
www.performance-pcs.com


----------



## synergon

Hey guys, please share the TUF OC BIOS with me.


----------



## Wrathier

Hi all,

So I got my Gigabyte RTX 3090 Gaming today and would like some feedback on my benchmark scores. Thanks in advance.

3DMark TimeSpy - unfortunately I don't have Extreme, but anyway here is the result:

http://www.3dmark.com/spy/14515682

For Superposition 1080P Extreme I got 17th place:

For Superposition 4K Optimized I got 18th place:

For Superposition 8K Optimized I got 13th place:

This is on the stock BIOS and no water. What do you all think?


----------



## Cholerikerklaus

Johneey said:


> I had same setting now I get with my mora 360 25 c and get 100 points more 14480 same clock as u. In this temperature area 20-35 not many increases anymore


Yes, I saw your scores.. I think you got a better bin than me. Do you use a curve or only add MHz? Will you try a shunt mod? I've now ordered 0.008 resistors and a new PSU.


----------



## megahmad

Carillo said:


> Boost clock behavor always depends on thermals. Some is sitting in a 10 degree room with open test bench and some are sitting in a sauna with a closed case with no extra fans. 20 watts more is 20 watt. Flashing takes less than 1 minute, so i would say it’s worh a try, see if you gain anything in your USE scenario


Thank you. I am using 3 monitors, occupying two DPs and one HDMI. I read in the thread that some guys lost one or two DPs when they flashed the Giga Gaming OC BIOS to their TUF OC; can anyone confirm which ports exactly get disabled after flashing the Giga BIOS to the TUF?

I may try later myself when I get home and report back.
Thanks.


----------



## Johneey

Cholerikerklaus said:


> Yes I saw you're scores.. I think you got a better bin then me.. You use a curve or only add mhz? You will try a shunt mod? I ordered now 0.008 Resistors and a new psu


Only offset. Yes, but I am new to shunt modding. On Discord they recommended doing it only with a conductive pen; I don't want to solder. They told me to use the conductive pen only on the 2 resistors by the connector.


----------



## Johneey

Gryzor said:


> Anyone that has flashed a 3090 KFA2/GALAX board? Any recommendation? I´m not sure to flash to the Gigabyte gaming OC bios (to increase PW 350W to 390W) . I´m afraid of bricking my card. If somebody that has the KFA2 had flashed it please tell me. Thanks.


Lol, how often are you going to ask? Flash it, it will not brick; if it doesn't work correctly, flash back lol


----------



## Stampede

ThrashZone said:


> Hi,
> Nice way too many 90 degree fittings though.


Was thinking the same. A U-shaped tube bend would work there also, but it won't look as good.


----------



## skywalkermibnasa2

Glerox said:


> Ampere doesn't support SLI profiles by Nvidia. So no, those games won't work with SLI. Ampere only support native mGPU and this means, literally, only a handful of games.
> I'm quite mad myself because I had a build ready for two watercooled Ampere cards but will have to be satisfied with only one card.


Oh my, what a pity. I was planning to assemble an RTX 3090 SLI system too. But we can still make it work in the newest titles, right? Like Red Dead Redemption II, SOTTR, 3DMark TS/PR... but Crysis 3, GTA5, Assassin's Creed: Unity/Syndicate, Mass Effect: Andromeda, those games that can't run perfectly on a single RTX 3090 will stay in that sorry state, right?


----------



## Baasha

Glerox said:


> Ampere doesn't support SLI profiles by Nvidia. So no, those games won't work with SLI. Ampere only support native mGPU and this means, literally, only a handful of games.
> I'm quite mad myself because I had a build ready for two watercooled Ampere cards but will have to be satisfied with only one card.


Did you get a 3090 yet? 

I'm still waiting... sigh.


----------



## Gryzor

Johneey said:


> Lol how often u wanna ask us again. Flash it it will not brick if it’s not work correctly flash back lol


This guy on another forum said his 2080 Ti bricked totally, even with a BIOS backup. That's why I haven't tried it yet. Here is the link: https://forums.evga.com/I-think-I-br...-m2967051.aspx


----------



## Alex24buc

Wrathier said:


> Hi all,
> 
> So I got my Gigabyte RTX 3090 Gaming today and would like some feedback on my benchmark scores. Thanks in advance.
> 
> 3DMark TimeSpy - unfortunately I don't have Extreme, but anyway here is the result:
> 
> 
> 
> http://www.3dmark.com/spy/14515682
> 
> 
> 
> For Superposition 1080P Extreme I got a 17th place:
> 
> View attachment 2461758
> 
> 
> For Superposition 4K Optimized I got a 18th place:
> 
> View attachment 2461759
> 
> 
> For Superposition 8K Optimized I got a 13th place:
> 
> View attachment 2461760
> 
> 
> This is on the stock BIOS and no water. What do you all think?


Great results!! Did you overclock the card?


----------



## Johneey

Gryzor said:


> This guy in other forum said that his 2080ti bricked totally, even with BIOS backup. That´s why I dont try it yet. I paste the link: https://forums.evga.com/I-think-I-br...-m2967051.aspx


Are you serious? He flashed the KP BIOS and it wasn't totally bricked lol. He just did it wrong.


----------



## Gryzor

Johneey said:


> Your are serious ? He flashed KP and it Wasnt brick totally lol. He was only to stupid


Maybe you're right. So do you mean that flashing the KP BIOS bricked the card? He said it was OK after that.


----------



## Wihglah

My retailer had 17 cancellations for EVGA cards over the weekend. Presumably since the Big Navi teaser.


----------



## LVNeptune

Why do you keep editing out your posts to blank them out?


----------



## GanMenglin

Well, after I got my MSI X Trio, I flashed the Strix 480W BIOS and the EVGA FTW3 Ultra 450W BIOS; it seems the 450W BIOS can pull more power on my shunt-modded Trio.
But the FTW3 Ultra's cooler uses 2 fans while the Trio has 3, so the fan control is not perfect. The temperature is higher than with the 480W BIOS, maybe because of the extra power...
I'm just waiting for the water block now, and I guess the 450W BIOS is better for a shunt-modded + water-cooled + 3x 8-pin card.


----------



## ThrashZone

Stampede said:


> was thinking the same, a u-shaped tube bend will work there also, but wont look as good.


Hi,
In some places clear soft tubing is more practical.
ModMyMods has some good clear PVC soft tubing made in the USA:
ModMyMods 3/8" ID x 5/8" OD Flexible PVC Tubing - Crystal Clear (MOD-0003)


----------



## Esenel

nievz said:


> Really?? I see i9 benchmarks on YT with 66% utilization in BFV. Can you screenshot your Operation Underground utilization? Also make sure it's 1440p ultra not 4k. That's only the place where I have issues. I'm upgrading my 3700x to the 5900x though 😁


Most likely you are just using DX11 with Future Frame Rendering Off.
Correct?


----------



## Johneey

Gryzor said:


> Maybe you´re right. So do you mean that the KP flashing bricked the card? he said that it was OK after that.


Sure, don't worry. Many people have flashed their cards, and yours is a reference board; no problem.


----------



## Sky3900

What is likely to happen if I flash a 3090 Strix BIOS to a 3090 FE? I know the Strix has more power phases and uses 3x 8-pin vs. 1x 12-pin. It has a different output configuration as well. Is it OK to experiment with this? Do I risk any sort of permanent damage to the card, or the possibility of having to do a blind flash?


----------



## Johneey

Sky3900 said:


> What is likely to happen if I flash a 3090 Strix bios to an 3090 Fe? I know the Strix has more power phases and uses 3 x 8 pin vs. 1 x 12 pin. It has a different output configuration as well. Is it OK to experiment with this? Do I risk any sort of permanent damage to the card or possibility of having to do a blind flash?


No risk, because it will not work. I don't think anybody has tried it yet.


----------



## Wrathier

Alex24buc said:


> Great results!! Did you overclock the card?


Yeah I did. I was sort of hoping EVGA Precision X1, MSI Afterburner, or the Aorus utility could do it for me (I like that for day-to-day use), but this time I actually had to do something myself. Felt like the old days lol.


----------



## ValSidalv21

megahmad said:


> Hello guys.
> 
> My 3090 TUF OC boost avg in GPUz is 1840mhz (max 1920, lowest 1805) while running Heaven Benchmark at stock clock but with power limit to 107% and temp limit to 91c. Temp is 69-70c at load and board power draw is around 375-385w.
> 
> My heaven benchmark at extreme preset is 278fps.
> I got 19590 - 19670 score on Time Spy with two runs. 10001 on Time Spy extreme. 59.77 FPS on Port Royal. (all graphics scores)
> 
> Is that boost clock normal for this card? and are these scores normal for this card at stock?
> 
> Also, do you think it's worth flashing the Gigabyte OC gaming BIOS to my TUF OC? I don't care much for benchmarks, will I notice performance increase over stock TUF bios in games?
> 
> Thanks.


Your scores look spot on for stock speed, pretty much identical to what I got. About flashing it to the Gigabyte BIOS, I wouldn't bother. Those extra 15W probably won't change anything.
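A rough sanity check backs up the "won't change anything" call. Assuming core power scales roughly as P ~ V²f and voltage rises about linearly with frequency near the limit (so P ~ f³, a common approximation, not a measured figure from this thread), the extra 15W buys very little clock:

```python
# Back-of-envelope estimate: under the cube-law approximation P ~ f^3,
# relative clock headroom is about one third of relative power headroom.

def clock_gain_pct(extra_watts: float, base_watts: float) -> float:
    """Approximate % clock gain from a power-limit increase, assuming P ~ f^3."""
    return (extra_watts / base_watts) / 3 * 100

# ~375 W board draw from the quoted post, +15 W from the Gigabyte BIOS
gain = clock_gain_pct(15, 375)
print(f"~{gain:.1f}% clock headroom")  # on the order of 1%
```

At ~1840 MHz that works out to roughly 20-25 MHz, well inside run-to-run noise in games.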


----------



## Sheyster

Baasha said:


> Did you get a 3090 yet?
> 
> I'm still waiting... sigh.


Mine is on the truck for delivery. Hang in there!  FE card.. Already have the lovely $51 (delivered price) EVGA 12-pin Micro-fit cable waiting for it.


----------



## ValSidalv21

synergon said:


> hey guys, please share me the TUF OC Bios


How can I extract the BIOS? GPU-Z says not supported on this device bla bla bla.


----------



## mirkendargen

domenic said:


> I ordered the block below. Its wider than the one in the picture which I found a ways back in this thread. The EKWB Strix backplate is very large ~2x the width of this block. Until I can get all of the parts on the table not exactly sure how everything will be laid out. Like you fastening it down somehow will be the engineering challenge.
> 
> Alphacool D-RAM Cooler X6 Universal - Plexi Black Nickel AC-17274
> www.performance-pcs.com
> 
> View attachment 2461757


Yeah I have a 4slot RAM block I'm going to do this with and was going to just be sloppy and sacrifice the backplate that comes with the GPU waterblock, keeping the stock cooler backplate to put back on when I sell the card. My plan is to use thermal paste on the bottom then a bead of JB Weld or similar epoxy around the edge. Like I said, sacrificing the backplate and not caring that the RAM block will never come off.


----------



## Sheyster

ValSidalv21 said:


> How can I extract the BIOS? GPU-Z says not supported on this device bla bla bla.


Use NVflash, the link is posted in the first post of this thread.


----------



## kx11

I'm having crashes after 2+ hrs of gaming, using the latest WHQL driver and with the core downclocked by -50MHz.

Anyone got a clue?!

My GPU is a Galax 3090 SG.


----------



## zlatanselvic

Here's my Founders card OC'd:

https://www.3dmark.com/3dm/51577501

#33 overall, I'll take it 

Decided to keep the FE on air. The cooler is too pretty to take apart. It OC'd well; +140 on the core, +950 on the memory. It needs more voltage to push any higher and be stable.

I don't see a BIOS coming for this card anytime soon. I also added a GPU bracket just in case.


----------



## Carillo

Sky3900 said:


> What is likely to happen if I flash a 3090 Strix bios to an 3090 Fe? I know the Strix has more power phases and uses 3 x 8 pin vs. 1 x 12 pin. It has a different output configuration as well. Is it OK to experiment with this? Do I risk any sort of permanent damage to the card or possibility of having to do a blind flash?


Your computer will explode


----------



## DrunknFoo

domenic said:


> I ordered the block below. Its wider than the one in the picture which I found a ways back in this thread. The EKWB Strix backplate is very large ~2x the width of this block. Until I can get all of the parts on the table not exactly sure how everything will be laid out. Like you fastening it down somehow will be the engineering challenge.
> 
> Alphacool D-RAM Cooler X6 Universal - Plexi Black Nickel AC-17274
> www.performance-pcs.com
> 
> View attachment 2461757


I ordered the black one; unfortunately the QC from Alphacool is pretty bad with the black acetal one... The O-ring was not in the channel and was crushed, the cold plate was scuffed up, and the acetal looked like it had been tossed around a bit. Definitely check the O-ring when you receive it, and just do a quick pressure/leak test. If you do need to replace the O-ring, Alphacool told me they used NBR70, 240.673mm x 1.5mm.

The 6-DIMM block is pretty damn heavy; I've got a Bitspower 6-DIMM on the way now as a 'replacement' option, which weighs a little less.


----------



## DrunknFoo

mirkendargen said:


> Yeah I have a 4slot RAM block I'm going to do this with and was going to just be sloppy and sacrifice the backplate that comes with the GPU waterblock, keeping the stock cooler backplate to put back on when I sell the card. My plan is to use thermal paste on the bottom then a bead of JB Weld or similar epoxy around the edge. Like I said, sacrificing the backplate and not caring that the RAM block will never come off.


A thermal pad would be more ideal for the block IMO; when placed on a flat surface I can see light bleed at the base of the cold plate, but the difference would probably be minuscule.


----------



## mirkendargen

DrunknFoo said:


> thermal pad would be more ideal for the block imo, when placed on a flat surface i can see light bleed on the base of the cold plate, but difference would probably be miniscule


I worry about it not staying stuck on with just a thermal pad (card is gonna be vertically mounted). I thought about Arctic Silver/Alumina thermal adhesive...but it's like $15 and I already have regular paste and JB Weld at home. I'm not going to waste a tube of Kryonaut or something on it, just cheap paste I have laying around that's come with stuff.


----------



## Sheyster

Carillo said:


> Your computer will explode


LOL.. 

To the guy you replied to, just don't do it. Common sense my man, it just won't play nice with the FE.

If you want a high power limit card wait for the Strix OC to drop or better yet wait for a Kingpin.


----------



## Wrathier

Sorry for the lighting, but wanted to show the GB card.


----------



## Sheyster

zlatanselvic said:


> View attachment 2461769


^ That is a thing of beauty, love it!


----------



## DrunknFoo

mirkendargen said:


> I worry about it not staying stuck on with just a thermal pad (card is gonna be vertically mounted). I thought about Arctic Silver/Alumina thermal adhesive...but it's like $15 and I already have regular paste and JB Weld at home. I'm not going to waste a tube of Kryonaut or something on it, just cheap paste I have laying around that's come with stuff.


lol ya no way it'll hold up with just a pad


----------



## CrazyThunderbird

Wanted to leave some feedback here:
Tested the *Gigabyte RTX 3090 Gaming OC* BIOS on my *Inno3d iChill X3 RTX 3090*, and one of the 3 DisplayPorts won't work anymore (the middle one, just to let people know).


----------



## Sky3900

Carillo said:


> Your computer will explode


Figured this would be the case.. Loud bang and a massive electrical fire!


----------



## Sky3900

Sheyster said:


> LOL..
> 
> To the guy you replied to, just don't do it. Common sense my man, it just won't play nice with the FE.
> 
> If you want a high power limit card wait for the Strix OC to drop or better yet wait for a Kingpin.


I'm sure I'll be happy with the FE, but, if there is a relatively easy way to squeeze another 100mhz out of it... why not?


----------



## megahmad

ValSidalv21 said:


> Your scores look spot on for stock speed, pretty much identical to what I got. About flashing it to the Gigabyte BIOS, I wouldn't bother. Those extra 15W probably won't change anything.


Thanks for the info. Now that you said it, I indeed think the 15W won't make a big difference (outside of synthetic benchmarks) so I decided to not flash. 


kx11 said:


> i'm having crashes after 2+ hrs of gaming, using latest WQHL and downclocked the core by -50mhz
> 
> anyone got a clue?!
> 
> my gpu is Galax 3090 SG


The latest 456.71 driver didn't play well for me. I installed it (with all the DDU stuff in safe mode) when it came out, and after a few hours I got a TDR and had to hard reboot while watching a movie in MPC with madVR for some reason (gaming was fine). Event Viewer said it was nvlddmkm, which is related to the Nvidia driver; I thought it was coincidental. The next day I turned on my PC, and 3 minutes after booting I got squared artifacts on the desktop (no load on the GPU). My card was also at stock clocks. I reverted back to the 456.55 driver and haven't had any issues for almost 5 days now. Same with 456.38, no issues. Only 456.71 was bad for me.

My card is the ASUS TUF 3090 Gaming OC.

Get back to 456.55 and see if you still have issues.


----------



## Alex24buc

CrazyThunderbird said:


> Wanted to let Feedback here:
> Tested on my *Inno3d iChill X3 RTX 3090* the *Gigabyte RTX 3090 Gaming OC *Bios and one of the 3 Display ports wont work anymore (the middle one just to be let ppl know)


Yes, the same happened to me; I wrote that on the forum yesterday when I flashed my Palit. It is not a big issue, because if we put back the original BIOS all 3 DisplayPorts will work again. I hope!!!


----------



## Hulk1988

I got a pretty awesome EVGA 3090 FTW3 card. With undervolting I do not run into the power target and still get a very high stable clock rate on air. This card screams for water.

Could someone explain the memory overclock process to me? I get fewer points with +500 but more with +300. Is that temperature related? Even Jayz2Cents in his video runs around +850 when overclocking the FTW3 memory and reaches very good scores in Port Royal.


----------



## originxt

Hulk1988 said:


> I got a pretty awesome EVGA 3090 FTW3 card. WIth undervolting I do not run into PT and still get a very high stable clock rate under air. This cards screams for water
> 
> Could someone explain me the memory overclock process? I get less points with +500 but more with +300. Is that temperature related? Even Jays2Cents has in his Video around 850+ when overclocking the FTW3 memory and reaches very good points in Port Royal.
> 
> View attachment 2461792


Memory can be dependent on individual cards. Some cards have memory that can overclock very well; I think someone here got +1300 and still scaled. I can go up to +900 before I start losing score and getting artifacting. It can be temperature related; I have an A12x25 on the back of my card right now, hooked to the aux header on my FTW3.

Or you can be unlucky and get poor overclocking on memory. I usually max my memory for benchmark scores and pull back 100-150 for normal use, since I think I got some degradation on my previous 1080 Ti by having it at the absolute max for a few weeks.


----------



## CrazyThunderbird

Alex24buc said:


> It is not a big issue because if we put back the original bios all 3 display port will work again. I hope!!!


They do, I can confirm, because I flashed back. I don't care about a few points; my multiscreen setup is more important to me.


----------



## bmgjet

originxt said:


> Memory can be dependent on individual cards. Some cards have memory that can overclock very well, I think someone here got +1300 and still scaled. I can go up to +900 before I start losing score and get artifacting. Can be temperature related, I have a a12x25 on the back of my card right now hooked to the aux on my ftw3.
> 
> Or you can be unlucky and get poor overclocking on memory. I usually max my memory for benchmark scores and pull back +100-150 for normal use since I think I got some degeneration on my previous 1080ti by having it at the absolute max for about a few weeks.



All cards will reach +600 on the memory, provided it's cooled well enough, since that's what you need to reach the factory-validated spec from Micron.
Nvidia has it underclocked though, because of the massive amount of power it uses and the temperature it puts out.

The 3080 uses 63W alone for the memory.
The 3090 pushes that up to 95W.
With +600 you've just added another 15-20W of draw.

Max operating temperature is 105C, but already at default clock speed it's pushing 90C on the rear modules on a 3090 when they are loaded up.
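Spreading the figures in this post over the 3090's memory layout gives a feel for why the back-side modules run hot. A quick sketch using only the numbers quoted above; the 24-module count assumes the 3090's 24 x 1GB GDDR6X configuration, with half the modules on the back of the PCB:

```python
# Per-module power estimate from the totals in the post above.
MODULES = 24                 # 3090: 24 x 1 GB GDDR6X, 12 on the back of the PCB
stock_w = 95                 # total memory power at stock (from the post)
oc_extra_w = (15 + 20) / 2   # midpoint of the quoted +600 offset penalty

per_module_stock = stock_w / MODULES
per_module_oc = (stock_w + oc_extra_w) / MODULES

print(f"stock: {per_module_stock:.1f} W per module")
print(f"+600:  {per_module_oc:.1f} W per module")
# The 12 back-side modules get no direct airflow from the stock cooler,
# which is why they run hottest (~90 C at stock per the post).
```

Roughly 4 W per module at stock, creeping toward 5 W with the offset; with a 105C ceiling and 90C already reached at stock, that is why backplate cooling keeps coming up in this thread.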


----------



## bogdi1988

bmgjet said:


> All cards will reach +600 on the memory provided its cooled well enough since thats what you need to make it reach the factroy validated spec from micron.
> Nvidia has it underclocked tho because the massive ammount of power it uses and the tempture it outputs.
> 
> 3080 uses 63W alone for the memory.
> 3090 pushes that up to 95W.
> With +600 youv just added another 15-20W draw.
> 
> Max operating tempture is 105C.
> But already at default clock speed its pushing 90C on the rear modules on a 3090 when they are loaded up.


My 3090 FE runs 750 on the memory with 130 on the GPU without a single issue. I will try to push it further tomorrow after I move some components around to make space for some more airflow.


----------



## Warocia

megahmad said:


> Thanks for the info. Now that you said it, I indeed think the 15W won't make a big difference (outside of synthetic benchmarks) so I decided to not flash.
> 
> The latest 456.71 driver didn't play good for me. Installed it (with all the DDU stuff in safe mode) when it came out and after few hours I got a TDR and had to hard reboot while watching a movie on MPC with madVR for some reason (gaming was fine). Event viewer said it was nvlddmkm which is related to the nvidia driver, I thought it was something coincedential. Next day I turned on my PC and 3 mins after booting I got squared artifacts on the desktop (no load was done to the GPU). Also my card was at stock clocks. I reverted back to the 456.55 driver and didn't have any issues since almost 5 days now. Same with the 456.38, no issues. Only the 456.71 was bad for me.
> 
> My card is the ASUS TUF 3090 Gaming OC.
> 
> Get back to 456.55 and see if you still have issues.


I have the same card, an ASUS TUF 3090, with the Gigabyte Gaming OC BIOS. I installed the 456.71 drivers when they were in beta; that must have been at least a week ago. A couple of days ago I also got squared artifacts on the desktop, which sounds like a very similar issue. I put the stock BIOS back on and get similar overclocks to the Gaming OC BIOS. I am still using 456.71 and haven't encountered the issue since. Was it some sort of glitch or something worse? I don't know.


----------



## Glerox

skywalkermibnasa2 said:


> Oh my, what a pity. I was planning to assemble an RTX 3090 SLI system too. But we can still make it work in the newest titles, right? Like Red Dead Redemption II, SOTTR, 3DMark TS/PR... but Crysis 3, GTA5, Assassin's Creed: Unity/Syndicate, Mass Effect: Andromeda, those games that can't run perfectly on a single RTX 3090 will stay in that sorry state, right?


Here are the only games with native mGPU support:

*What DirectX 12 games support SLI natively within the game?*
Shadow of the Tomb Raider, Civilization VI, Sniper Elite 4, Gears of War 4, Ashes of the Singularity: Escalation, Strange Brigade, Rise of the Tomb Raider, Zombie Army 4: Dead War, Hitman, Deus Ex: Mankind Divided, Battlefield 1, and Halo Wars 2.

*What Vulkan games support SLI natively within the game?*

Red Dead Redemption 2, Quake 2 RTX, Ashes of the Singularity: Escalation, Strange Brigade, and Zombie Army 4: Dead War



Baasha said:


> Did you get a 3090 yet?
> 
> I'm still waiting... sigh.


Nope... trying to get a FTW3 or Strix. At this point since I will buy only one card I'm still hesitating if I should wait for the Kingpin.


----------



## wesley8

megahmad said:


> Thanks for the info. Now that you said it, I indeed think the 15W won't make a big difference (outside of synthetic benchmarks) so I decided to not flash.
> 
> The latest 456.71 driver didn't play well for me. I installed it (with all the DDU stuff in safe mode) when it came out, and after a few hours I got a TDR and had to hard reboot while watching a movie on MPC with madVR for some reason (gaming was fine). Event Viewer said it was nvlddmkm, which is related to the NVIDIA driver; I thought it was something coincidental. The next day I turned on my PC, and 3 minutes after booting I got squared artifacts on the desktop (no load on the GPU). My card was also at stock clocks. I reverted to the 456.55 driver and haven't had any issues for almost 5 days now. Same with 456.38, no issues. Only 456.71 was bad for me.
> 
> My card is the ASUS TUF 3090 Gaming OC.
> 
> Get back to 456.55 and see if you still have issues.


Can you share your TUF 3090 OC's vbios here? Thanks


----------



## Bilco

So what resistor stack combo do you guys think would be safe for daily driving under water?


----------



## Stampede

DrunknFoo said:


> I ordered the black one; unfortunately the QC from Alphacool is pretty bad with the black acetal one. The O-ring was not in the channel and was crushed, the cold plate was scuffed up, and the acetal looked like it had been tossed around a bit. Definitely check the O-ring when you receive it and do a quick pressure/leak test. If you do need to replace the O-ring, Alphacool told me they use NBR70 240,673mm x 1,5mm.
> 
> The 6-DIMM one is pretty damn heavy; I've got a Bitspower 6-DIMM on the way now as a 'replacement' option, it weighs a little less.


EKWB has a 4-DIMM cooler as well.

EKWB 4-DIMM cooler

Has anyone actually tried this and know if there are any benefits? I don't want to buy this and sacrifice a backplate without any gains.

Thermalbench active backplate test

There is minimal gain in core temps in the above test, but the VRM cooling improvement was significant.

I think a block on the back is a much better solution, and the 3090 has rear RAM modules, which means there could be a lot more to gain from this mod.

When (if) I get my EK waterblock, I am going to temporarily mount a CPU cooler to the back and test temps and stability with and without it.


----------



## NYCesquire

Sorry, my friend isn't a good reader. For his benefit: What real-world performance improvements can he expect between an overclocked 3080 on air and an overclocked 3090 on water? This is of course an unfair comparison, but interesting nonetheless! Thanks!


----------



## Bilco

wesley8 said:


> Can you share your TUF 3090 OC's vbios here? Thanks


Here's my 3090 TUF Gaming OC: asustuf3090oc.rom


----------



## Sheyster

So this arrived today. About to install and test it.


----------



## Sky3900

Sheyster said:


> So this arrived today. About to install and test it.
> 
> View attachment 2461813


Nice! mine is supposed to arrive Wednesday.


----------



## HyperMatrix

NYCesquire said:


> Sorry, my friend isn't a good reader. For his benefit: What real-world performance improvements can he expect between an overclocked 3080 on air and an overclocked 3090 on water? This is of course an unfair comparison, but interesting nonetheless! Thanks!


+20-25% if shunted or using a high/unlocked power bios, and assuming you don't have a potato board/die. Higher in VRAM-constrained games (there should be very few right now).

Keep in mind it's the exact same die, but the 3090 has 20.5% more cores unlocked and active, and its memory apparently requires an additional 30W as well. So in most cases you're dealing with a huge power limit deficit compared to the 3080. If that power limit didn't exist because of shunting, and you were able to keep the additional heat in check, there's no reason not to see that 20% increase from the extra cores. But all of that is a little easier said than done.
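The core-count figure quoted above can be checked against the public spec numbers (a quick sketch; the CUDA core counts are NVIDIA's published figures):

```python
# CUDA core counts from NVIDIA's public specs
cuda_3090 = 10496
cuda_3080 = 8704

extra_cores_pct = (cuda_3090 / cuda_3080 - 1) * 100
print(f"{extra_cores_pct:.1f}% more CUDA cores")  # 20.6%, i.e. the ~20.5% above
```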


----------



## bolagnaise

Bilco said:


> Here's my 3090 TUF Gaming OC: asustuf3090oc.rom


Is that the Gaming mode bios or the OC mode one? Can you switch bios and upload both?


----------



## wesley8

Bilco said:


> Here's my 3090 TUF Gaming OC: asustuf3090oc.rom


Thanks a lot


----------



## The-Real-Link

Well, I can say I'd like to join the 3090 owners' club, but I have to send my NVIDIA Store FE back, as two of the three DisplayPorts are dead. They've at least agreed to RMA it, though they won't cross-ship. Ugh.


----------



## Alex24buc

I am sorry, I hope they get you a new card. But how did this happen? Were the two DisplayPort outputs dead from the beginning and you only found out now?


----------



## Shaded War

Does anyone know what time of day Best Buy / Newegg tend to restock these GPUs?

I feel like I will never be able to snag one with my work hours. The only time I had a chance to buy one was on launch day, since the Zotac model was selling a lot slower. But I gave that a hard pass.


----------



## asdkj1740

3080 / 3090 / 3070 Gigabyte Eagle Gaming OC & Vision Power Connector Concerns


(Excerpt: cards from serial number WK39 onwards should have the revised connector block, though some WK39/WK38 cards may still have the old one.)

www.overclockers.co.uk




Gigabyte has further disclosed the exact batches affected by the "faulty" adapter.
Users of the Gaming/Eagle models can check their serial number to see whether theirs is affected.


----------



## LordGurciullo

originxt said:


> Memory can be dependent on the individual card. Some cards have memory that overclocks very well; I think someone here got +1300 and it still scaled. I can go up to +900 before I start losing score and getting artifacts. It can be temperature related; I have an A12x25 on the back of my card right now, hooked to the aux header on my FTW3.
> 
> Or you can be unlucky and get poor memory overclocking. I usually max my memory for benchmark scores and pull back 100-150 for normal use, since I think I got some degeneration on my previous 1080 Ti by having it at the absolute max for a few weeks.


DEGENERATION!? What does that mean!?!?


----------



## xenthor

Has anyone tried flashing the EVGA FTW3 ROM to an MSI Gaming X Trio?


----------



## megahmad

If I flash the Gigabyte Gaming OC bios while my TUF Gaming OC card is on the Silent mode bios switch position (or the other way around, doesn't matter), can I switch back to the stock bios by moving the switch to the other position, since the card has dual bios?

To make it simple: can the ASUS TUF hold two bioses, one with the Gigabyte Gaming OC bios and one stock? Did anyone try to flash their TUF, switch to the other position, and see if they got the stock bios back?



Warocia said:


> I have same card ASUS TUF 3090 with the Gigabyte Gaming OC bios. I did install 456.71 drivers when they were a beta phase. It must have been at least a week ago. A couple days ago I also got squared artifacts on the desktop. It sounds very similar issue. I put stock bios back on and similar overlocks than with Gaming OC bios. I am still using 456.71. I haven't encountered issue since. Was it some sort of glitch or something worse. I don't know.


Seems like we've got the same issue, which is related to the driver itself. I assume you used the driver when it was a hotfix; usually hotfix drivers are the same build as the one that later gets the WHQL label. I don't think the Gigabyte Gaming OC bios or your overclock caused this, because when it happened to me I was on the stock bios at stock clocks. Probably a very specific hardware combination along with this driver causes this to happen with our TUF. I will wait for a new driver to test.


----------



## AlKappaccino

megahmad said:


> If I flash the Gigabyte Gaming OC bios while my TUF Gaming OC card is on the Silent mode bios switch (or the other way around doesn't matter), can I switch back to stock bios by putting the switch to the other position since the card has dual bios?
> 
> To make it simple, can the ASUS TUF have its dual bioses one with gigabyte gaming oc bios and one with stock? did anyone try to flash their TUF and try to switch to the other position and see if they go back to stock bios?
> 
> 
> Seems like we got the same issue which is related to the driver itself. I assume you used the driver when it was a hotfix. Usually hotfix drivers are the same as when they get labeled WHQL. I don't think the Gigabyte Gaming OC bios or your overclock caused this because when it happened I was on stock bios with stock clocks. Probably very specific hardware combination along with this driver causes this to happen with our TUF. Will wait for a new driver to test.


I'm not sure if I understand correctly; of course you can switch to the other bios, it's a switch after all. There are two separate bios chips on the PCB that you switch between physically. One doesn't affect the other.


----------



## Warocia

megahmad said:


> Seems like we got the same issue which is related to the driver itself. I assume you used the driver when it was a hotfix. Usually hotfix drivers are the same as when they get labeled WHQL. I don't think the Gigabyte Gaming OC bios or your overclock caused this because when it happened I was on stock bios with stock clocks. Probably very specific hardware combination along with this driver causes this to happen with our TUF. Will wait for a new driver to test.


Yes, I have been using 456.71 since October 1, 2020, so about 10 days before the issue occurred. I booted the computer and ran the 3DMark Port Royal benchmark with a very minor overclock (+70 core, +475 memory). When I got back to the desktop there were some artifacts. My card has run the same test at much higher overclocks without problems (+170 core, +800 memory). I haven't had this issue again with the stock bios and the newest driver.


----------



## Spiriva

Stampede said:


> Just doing some thinking: has anyone thought of cooling the backplate by placing a CPU block with some thermal paste directly onto the backplate? Need to find a way to tighten the block down to the backplate. Can plumb the CPU and GPU in parallel as well.
> 
> Do you think there will be any benefits to this mod, or is it not worth the money and effort?
> 
> My thinking is that it may bring a watercooled GPU down by another few degrees for people chasing benchmark scores.


Prolly not a bad idea. The backplate gets pretty warm, as in you don't want to keep your finger on it for more than 5 seconds.


----------



## Carillo

Bilco said:


> So what resistor stack combo do you guys think would be safe for daily driving under water?


Please define safe. Soldering on your PCB? Well, I would say you are already way past «safe», and how could anybody here possibly answer your question after owning the card for two weeks? If you read a couple of pages back, I recommended 5 mOhm based on my findings performance-wise. Safe? That's a good one.


----------



## bmgjet

I posted an easy mode for shunt modding in its own thread, if you go back to the main list of threads under the NVIDIA subforum.
I'd sort of call that safe with 15 mOhm shunt resistors, since it can be easily removed with just isopropyl alcohol and you're only adding another 100W, versus doubling the power limit with a 5 mOhm.
But even that can be unsafe: you could break the card removing the cooler, or smear conductive paint all over the thing and short something out, or somehow have the shunt come off during use and short something else out.
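The two options above can be compared with a little parallel-resistor arithmetic (a sketch assuming 5 mOhm stock shunts and a 350 W stock limit; check the actual values on your own board):

```python
def effective_power_limit(stock_limit_w, stock_shunt_mohm, added_shunt_mohm):
    """Power limit after stacking a resistor in parallel with a sense shunt.

    The card measures current via the voltage drop across the shunt. A parallel
    resistor lowers the effective resistance, so the card under-reads power and
    the real limit rises by the inverse of that ratio."""
    combined = (stock_shunt_mohm * added_shunt_mohm) / (stock_shunt_mohm + added_shunt_mohm)
    return stock_limit_w * stock_shunt_mohm / combined

print(effective_power_limit(350, 5, 15))  # ~467 W: roughly the "+100 W" case
print(effective_power_limit(350, 5, 5))   # 700 W: doubles the power limit
```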


----------



## xenthor

Looking for an EVGA RTX 3090 FTW3 ULTRA ROM to flash onto my MSI Gaming X Trio. Does anyone have one?


----------



## Thanh Nguyen

The board is designed for 350W. If we put 700W through it, will there be degradation or damage?


----------



## Carillo

Thanh Nguyen said:


> The board is designed for 350w. If we put 700w on it, will there be degradation or damage?


No, 100% safe. Best of all, NVIDIA encourages it.


----------



## Baasha

Sweet! Where did you get the card (meaning which e-tailer)?

I hope there's a good drop this week... still looking to snag two.



Sheyster said:


> So this arrived today. About to install and test it.
> 
> View attachment 2461813


----------



## stryker7314

Is flashing giving anyone higher overclocks? Haven't tried but looking into it.


----------



## }{yBr!D^

Ran into some issues with my OC. 3DMark benchmarks all ran fine with these numbers: CV: +80 / PL: 114 / TL: 90° / CC: +150 / MC: +750, which is where I peaked and believed was the sweet spot. But certain games refused to launch or crashed (not to desktop; the game just froze). Games like Crysis 1 and 3 and Control would play for a bit and then just freeze. I reverted everything to default yesterday, and that remedied the situation. Going to run these tests again with the games rather than benches.

Running a 3090 FE


















----------



## Dreams-Visions

beautiful setup, Hybrid.


----------



## Spiriva

}{yBr!D^ said:


> Ran into some issues with my OC. 3Dmark benchmarks ran everything good with these numbers; CV: +80/PL: 114/TL: 90⁰/CC: +150/MC: +750. Which is where I peaked and believed was the sweet spot. In certain games it refused to launch and crashed (not to desktop) the game just froze. Games like Crysis 1 and 3 and Control you would play a bit then just freeze. I reverted everything to default yesterday and it did remedy the situation. Going to run these tests again with the games rather than benches.


Do you run the latest drivers, 456.71-desktop-win10-64bit-international-whql?

GeForce 456.71 WHQL driver download (www.guru3d.com)


----------



## nievz

}{yBr!D^ said:


> Ran into some issues with my OC. 3Dmark benchmarks ran everything good with these numbers; CV: +80/PL: 114/TL: 90⁰/CC: +150/MC: +750. Which is where I peaked and believed was the sweet spot. In certain games it refused to launch and crashed (not to desktop) the game just froze. Games like Crysis 1 and 3 and Control you would play a bit then just freeze. I reverted everything to default yesterday and it did remedy the situation. Going to run these tests again with the games rather than benches.
> 
> Running a 3090 FE
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 


I really like that there's no sag on the FE card, unlike on my Gaming X Trio


----------



## Dreams-Visions

I would post pics of my 3090, but even with its best OC dialed in, it scores lower than an average 3080 in Port Royal.

I am ashamed. The ZOTAC struggle is real.


----------



## Spiriva

Dreams-Visions said:


> I would post pics of my 3090, but even with its best OC dialed in, it scores lower than an average 3080 in Port Royal.
> 
> I am ashamed. The ZOTAC struggle is real.


Flash it with the gigabyte 3090 oc gaming bios!


----------



## megahmad

Dreams-Visions said:


> I would post pics of my 3090, but even with its best OC dialed in, it scores lower than an average 3080 in Port Royal.
> 
> I am ashamed. The ZOTAC struggle is real.


What FPS did you get in Port Royal? I got 59.77 with the TUF 3090 Gaming OC at stock clocks but with the PL at max (107%).


----------



## Dreams-Visions

Spiriva said:


> Flash it with the gigabyte 3090 oc gaming bios!


oh is that a thing? Does that allow it to get around that 350W max power draw and see a power limit increase option in Afterburner?



megahmad said:


> What FPS did you get in Port Royal? I got 59.77 with TUF 3090 Gaming OC at stock clocks but with PL to max (107%)


It's like 12,300 or some such.

Bottom 1% of 3090s, and dumpstered by your most average 3080. That's with the best OC it was stable with, too: +100/+650.

Note, the ZOTAC doesn't even have a PL slider in Afterburner to increase; it can go no higher than 100%. Easily the worst 3090 for anyone interested in OCing.


----------



## Spiriva

Dreams-Visions said:


> oh is that a thing? Does that allow it to get around that 350W max power draw and see a power limit increase option in Afterburner?


Yes, with the Gigabyte 3090 Gaming OC bios you get 390W. Get the NVIDIA flash tool and the bios from the first post in this thread.


----------



## Johneey

https://www.3dmark.com/spy/14537008


+211/+1230 MHz on 390 watts; need to shunt for sure.
It was a cold day, hahaha.


----------



## }{yBr!D^

Spiriva said:


> Do you run the latest drivers, 456.71-desktop-win10-64bit-international-whql?
> 
> GeForce 456.71 WHQL driver download (www.guru3d.com)


Yes I have the latest driver.











----------



## Dreams-Visions

Spiriva said:


> Yes., with the Gigabyte 3090 gaming oc bios you get 390w. Get the Nvidia flash tool and the bios from the first post in this thread.


interesting! will do tyvm.


----------



## megahmad

Dreams-Visions said:


> interesting! will do tyvm.


You should know that flashing the Gigabyte 3090 Gaming OC bios to your Zotac will disable one or two DisplayPorts (depending on the card); not sure about HDMI, though. Just make sure to back up your original Zotac bios with the nvflash tool in case you want to go back to stock.


----------



## Sync0r

Dreams-Visions said:


> interesting! will do tyvm.


Yeah, flash it. I have one and get 14,392. This is on water, though.


https://www.3dmark.com/pr/391436


----------



## RedRumy3

So I have this issue with my 3090 FE: when I raise the power limit to 114%, the card peaks around 390-395W of power, but after playing a game at that power limit my PC will just shut down. Do you think I am hitting my PSU's limit and it shuts down? I am on a Seasonic Prime TX-850 Titanium (not even a year old). It only happens when I raise the power limit.

Here's one game where it will play for a bit and then just randomly shut off my PC. Temps are good all around, so the only thing I can think of is that 850W is not enough power for this card. Any ideas?


----------



## megahmad

RedRumy3 said:


> So I have this issue with my 3090 FE and it's when I play with power limit and set it to 114% my card power peaks around 390-395W of power but after playing a game at that power limit my pc will just shut down. Do you think I am hitting my psu power limit and it just shuts down? I am on Seasonic Prime TX-850 Titanium (not even a year old). It's only when I play with power limit


If it happens only when you raise the PL, then I'm fairly sure it's your PSU acting up or not sufficient for some reason. If it were a complete system freeze or a sudden reboot it would be a different story, but a sudden PC shutdown is almost always a situation where the PSU is the culprit (if temps are fine, of course). Did you try other positions for the PCIe cables on the PSU? Your PSU should be a single-rail unit, so this shouldn't matter, but I would try plugging the PCIe cables into other positions on the PSU end. Is your PSU hot to the touch when this happens?


----------



## Sheyster

Baasha said:


> Sweet! Where did you get the card (meaning what e-tailer)?
> 
> I hope there's a good drop this week.. still looking to snag two.


Direct from NV during the October 1 drop. I missed the delivery twice before I was finally home for it! I didn't pay for express shipping since I had no time to mess with it during the following week.


----------



## Sheyster

nievz said:


> I really like that there's no sag on the FE card, unlike on my Gaming X Trio


Indeed, bricks don't sag much! I was shocked at the weight of it and the overall build quality. Apparently it tips the scales at 4.82 pounds, that's 2.19 kg for our metric/foreign friends.


----------



## RedRumy3

megahmad said:


> If it happens only when you raise the PL then I am certainly sure it's your PSU acting up or not enough for some reason. If it was complete system freeze or reboot it's a different story, but PC shutdown is almost always a situation where the PSU is the culprit (if temps are fine ofc). Did you try other positions of the PCIe cables into the PSU? your PSU should be single rail so this shouldn't matter but I would try to plug the PCIe to other positions on the PSU end.


Yeah, no freezes or restarts; it just completely shuts off. I have removed the adapter and my extension cable and am now using the 12-pin cable Seasonic sent me, and will test to see if it still happens. The PSU is only 6 months old, so it sucks if I need more power =[


----------



## Sync0r

RedRumy3 said:


> Yeah no freeze or restarts just completely shuts off. I have removed the adapter and my extension cable and now using the 12 pin cable seasonic sent me and will test to see if it still happens. Psu is only 6 months old so it sucks if I need more power =[


Undervolt your CPU 😀


----------



## megahmad

RedRumy3 said:


> Yeah no freeze or restarts just completely shuts off. I have removed the adapter and my extension cable and now using the 12 pin cable seasonic sent me and will test to see if it still happens. Psu is only 6 months old so it sucks if I need more power =[


I doubt the adapter was the cause, but let's hope it was.
What I think is happening is that your PSU's OCP kicks in. If that's the case, then I'm afraid you need a new PSU.
But it would be odd, because 850W should be enough, especially with Seasonic quality and a single rail, unless your PSU has some defect that only appeared now under high power draw.
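A rough back-of-the-envelope check of the OCP theory (all numbers here are illustrative assumptions, not measurements from this system; Ampere cards have been reported to spike well above their sustained draw for milliseconds):

```python
# Rough headroom check for an OCP trip. The transient factor and
# rest-of-system draw are assumptions for illustration only.
psu_rating_w = 850
gpu_sustained_w = 395          # reported peak at 114% PL
transient_factor = 1.8         # assumed millisecond-scale spike multiplier
rest_of_system_w = 250         # assumed CPU + board + drives

peak_w = gpu_sustained_w * transient_factor + rest_of_system_w
print(peak_w, peak_w > psu_rating_w)  # a spike past the rating can trip OCP
```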


----------



## LordGurciullo

Can someone please respond to the degeneration question above? How does it happen, what is it, and how do we prevent it?


----------



## Falkentyne

RedRumy3 said:


> Yeah no freeze or restarts just completely shuts off. I have removed the adapter and my extension cable and now using the 12 pin cable seasonic sent me and will test to see if it still happens. Psu is only 6 months old so it sucks if I need more power =[


Please let us all know if the Seasonic cable improves things or not.
Seasonic is sending me one also but I have no idea when it will arrive.


----------



## HyperMatrix

LordGurciullo said:


> Can someone please respond to the degeneration question above? How does it happen, what is it, and how do we prevent it?


I assume you mean degradation. GDDR6X already creates a ton of heat, and more heat = more power = more instability. GDDR6X uses a form of error correction that will basically resend the data until it gets across correctly. That will likely happen before you start seeing artifacts, so these are little errors automatically corrected by the hardware; you can't see them, but correcting them costs a bit of performance. The best way to test is to keep running Port Royal benchmarks and adjust the memory clock up and down to see whether your score goes up or down. You need to try to hold the VRAM/card temperature steady for more accurate testing.

How far you can clock your memory before degradation occurs depends on several things, such as thermals, power limit, and the memory chips themselves. We can't change the chips, but we can increase the power limit with a bios flash or through shunting (someone with more knowledge may be able to correct me on the link between power limit and memory stability here), and thermals can be lowered with an extra fan or a waterblock.
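The up-and-down scoring test described above can be sketched as a simple scan. Everything in `toy_bench` is made up to illustrate the peak-finding idea; it is not a real benchmark:

```python
def scan_memory_offsets(bench, offsets):
    """Benchmark each memory offset and return the best-scoring one.

    Past the stable limit, GDDR6X error-retry silently resends data, so the
    clock keeps rising but the score starts falling; the peak is the limit."""
    results = {off: bench(off) for off in offsets}
    return max(results, key=results.get), results

# Toy model: score climbs with clock until +800 MHz, then retries cost more
# than the extra bandwidth gains (hypothetical numbers throughout).
def toy_bench(offset_mhz):
    gain = offset_mhz * 0.06
    retry_penalty = max(0, offset_mhz - 800) * 0.2
    return 12500 + gain - retry_penalty

best, scores = scan_memory_offsets(toy_bench, range(0, 1101, 100))
print(best)  # the sweet spot under this toy model
```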


----------



## Dreams-Visions

megahmad said:


> You should know that flashing the Gigabyte 3090 Gaming OC bios to your Zotac will disable one or two display ports (depending on card), not sure about HDMI tho. Just make sure to make a backup of your original Zotac bios with the nvflash tool in case you want to go back to stock bios.


Understood. Unfortunate to hear about the video ports, but I did back up the bios and will probably flash it back at some point, hoping that maybe Zotac releases a proper bios update that gives this card power limits in line with its competitors. The potential is there; it just needs to be properly enabled. Worst case, I wonder if flashing the 3090 AMP! bios (assuming that card is coming) would allow the performance increase while retaining port functionality.

I did do the flash and am happy to report that its performance is now in line with other 3090s. It got its dignity back. lol

*Pre-flash - Port Royal: *12,336 (+100/+600)
*Post-flash - Port Royal: *13,591 (+160/+875)

*Uplift:* 1,255 points, +10% performance.

*Post-flash - TimeSpy:* 19,035

Still behind (in some cases well behind) similarly priced or slightly more expensive cards, but again, it's respectable now. It actually outscored Guru3D's stock Gaming X Trio in Port Royal, so there's that at least.

Thank you all for the suggestion. I would have never thought of doing a bios flash.
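For the record, the uplift arithmetic above checks out (scores copied from the post):

```python
pre, post = 12336, 13591  # Port Royal scores before and after the flash
gain = post - pre
print(gain, f"+{gain / pre * 100:.1f}%")  # 1255 +10.2%
```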


----------



## bolagnaise

Dreams-Visions said:


> Note, the ZOTAC doesn't even have a PL in Afterburner to increase. It can go no higher than 100%. Worst 3090 for anyone interested in OCing, easily.


Make sure you're running the latest MSI AB beta; this fixes the issue. Also set voltage control to third party.


----------



## shiokarai

Dreams-Visions said:


> Understood. Unfortunate to hear about the video ports, but I did backup the bios and will probably flash it back at some point and hope that maybe Zotac themselves releases a proper bios update that gives this card power in line with its competitors. The potential is there; just needs to be properly enabled. Worst case, I wonder if flashing the 3090 Amp! bios (assuming that card is coming) will allow for the performance increase while retaining port functionality.
> 
> I did do the flash and am happy to report that its performance is now in line with other 3090s. It got its dignity back. lol
> 
> *Pre-flash - Port Royale: *12,336 (+100/+600)
> *Post-flash - Port Royale: *13,591 (+160/+875)
> 
> *Uplift:* 1,255 points, +10% performance.
> 
> *Post-flash - TimeSpy:* 19,035
> 
> Still behind (in some cases well behind) similarly priced or slightly more expensive cards, but again...it's respectable now. Actually outscored Guru3D's stock Gaming X Trio in Port Royale, so there's that at least.
> 
> Thank you all for the suggestion. I would have never thought of doing a bios flash.


Your pre-flash Port Royal score is so low it's hard to believe; my Zotac 3080 has a higher score (the card is water-cooled, but still, it's "just" a 3080).


----------



## Sync0r

shiokarai said:


> Your pre-flash Port Royal score is so low it's hard to believe; my Zotac 3080 has a higher score (the card is water-cooled, but still, it's "just" a 3080).


What's your 3080 score? Would be good to compare, ta.

My Zotac 3090: pre-flash, post-flash, then post-waterblock and overclocked.


----------



## Wihglah

LordGurciullo said:


> Can someone please respond to the degeneration question above? How does it happen, what is it, and how do we prevent it?


I can tell you that I ran my water-cooled 980 Ti since 2015 on a custom BIOS with a peak power equivalent to 180% of the original (which it never reached, btw) and an aggressive overclock.
For about a year it had weird periods where my monitor would just go blank and force me to cold boot the PC.
About 3 months ago I had to reset the overclock to stock or it would immediately crash, and just recently it started "hitching" every so often in games (every 15 minutes or so). It finally stopped outputting any signal last weekend.
Hope my pre-order comes soon...


----------



## Falkentyne

Wihglah said:


> I can tell you that I ran my water cooled 980TI on a custom BIOS with a peak power equivalent to 180% of the original (which it never reached btw) since 2015 with an aggressive overclock.
> For about a year it has had weird periods where my monitor would just go blank and force me to cold boot the PC.
> About 3 months ago I had to reset the overclock to stock or it immediately crashed and just recently it started "hitching" every so often in games (every 15 minutes or so). It finally stopped outputting any signal last weekend.
> Hope my pre-order comes soon...


This is very similar to how my Vega 64 died a few days ago.
It started "hitching" in Fortnite, where the card would suddenly downclock or something for a few seconds. After a few sessions of that, Fortnite just black-screened and wouldn't recover. I rebooted and ran the Valley benchmark and it instantly black-screened; rebooted and ran Valley at stock and it did the same again.
I took the card apart completely, cleaned it, and reapplied a fresh coat of liquid metal (molded die), then reassembled it twice (the first time I had a hot hotspot using the normal diagonal-alternating tightening pattern, so I remounted following someone's advice: screw the top two X-bracket screws in first to resistance, then the bottom two, then tighten top and bottom, which worked great; I usually do that but forgot). The card then no longer slowed down and stuttered, and I played for most of the day.

Then suddenly the card locked up with the "Omega symbols" (RTX Space Invaders) on the screen, black-screened, and never worked again
(it would boot to BIOS with a vertical random color bar going down the screen).


----------



## Dreams-Visions

Sync0r said:


> What's your 3080 score? Be good to compare ta.
> 
> My Zotac 3090 pre-flash, post-flash, post waterblocked and overclocked.
> View attachment 2461896


Welp, guess I'm gonna have to water cool. Not leaving all that performance on the table. lol


----------



## Sync0r

Dreams-Visions said:


> Welp, guess I'm gonna have to water cool. Not leaving all that performance on the table. lol


That's the spirit! 😀 I really want to shunt mod it. Interestingly, if you downclock the RAM it gives the GPU core more headroom to boost higher.


----------



## LVNeptune

After seeing a bunch of benchmarks on here I am pretty disappointed with the 3090 FE. Either I have a VERY badly binned 3090 FE or this is what is to be expected.
https://www.3dmark.com/pr/384515 - this was at 100% stock; it kept hitting power limits.

I HAD to use the voltage curve to dial it down and stop the power limit from gimping it. After spending many hours, this was the highest I was ever able to achieve: https://www.3dmark.com/pr/388847

After seeing people with Zotac cards blowing past this by such a large margin, I think I am going to see if NVIDIA will take a return. After spending over $1700 I would expect something that performs better than a 3080. Coupled with the fact that the 3090 FE can't be crossflashed, it just doesn't seem worth it. I have resistors coming for shunt modding, but at this point it seems like the 3080 is the better buy, and not because of cost.


----------



## Sky3900

Do you have the power at 114% and fans at 100%? Some of the reviews I've seen get about 13900 on port royal overclocked and 13500 stock.


----------



## LVNeptune

Sky3900 said:


> Do you have the power at 114% and fans at 100%? Some of the reviews I've seen get about 13900 on port royal overclocked and 13500 stock.


Not sure if this was to me but if you are seeing reviews with 13500 at stock that shouldn't have any power limit changes. With my testing I forgot to mention but with my lower voltage curve (to drop the power consumption) I have to have it at 114% as well. Fans at 100% wouldn't benefit, thermals aren't an issue here just power consumption. I don't recall ever seeing it go above 72c at the highest.


----------



## Sync0r

LVNeptune said:


> After seeing a bunch of benchmarks on here I am pretty disappointed with the 3090 FE. Either I have a VERY VERY badly binned 3090 FE or this is what is to be expected.
> https://www.3dmark.com/pr/384515 This was at 100% stock; it kept hitting power limits.
> 
> I HAD to use the voltage curve to dial it down to stop the power limit from gimping it. After spending many hours, this was the highest I was ever able to achieve: https://www.3dmark.com/pr/388847
> 
> After seeing people on Zotac cards blowing past this by such a large margin, I think I am going to see if NVIDIA will take a return. After spending over $1700 I would expect something that performs better than a 3080. Coupled with the fact that the 3090 FE can't be cross-flashed, it just doesn't seem worth it. I have resistors coming in for shunt modding, but at this point it seems like the 3080 is the better buy, and not because of cost.


My best score is water cooled though, gpu is running at 30c. I think there is still headroom in these cards if we remove power limits. The top performing 3080 gets 14131 Port Royal score running average clock of 2,405 MHz, that ain't a daily OC


----------



## LVNeptune

Sync0r said:


> My best score is water cooled though, gpu is running at 30c. I think there is still headroom in these cards if we remove power limits. The top performing 3080 gets 14131 Port Royal score running average clock of 2,405 MHz, that ain't a daily OC


Right but your stock score with the flashed Zotac is significantly higher.


----------



## Sky3900

LVNeptune said:


> Not sure if this was to me but if you are seeing reviews with 13500 at stock that shouldn't have any power limit changes. With my testing I forgot to mention but with my lower voltage curve (to drop the power consumption) I have to have it at 114% as well. Fans at 100% wouldn't benefit, thermals aren't an issue here just power consumption. I don't recall ever seeing it go above 72c at the highest.


Might try a bench running the fans at 100%. Gamers Nexus claims the card will start throttling clocks around 50c - 60c.


----------



## LVNeptune

Sky3900 said:


> Might try a bench running the fans at 100%. Gamers Nexus claims the card will start throttling clocks around 50c - 60c.


50C-60C is normal under load, so that's really odd. In any case it's absolutely being power limited. Watching GPU-Z after the fact, the clock drops all coincide with power limits.


----------



## IcyIvan

So I have the same problem as Neptune.

I haven't modded anything on my 3090 FE and this is my result. This is my first run with 30-50% fan speed.



https://www.3dmark.com/3dm/51641752?



This is a 2nd run with 100% fan speed.



https://www.3dmark.com/3dm/51642689?



In Valorant I was running 210 fps on average, compared to the 300-400 fps benchmarks online.

Any help would be appreciated.


----------



## Sky3900

LVNeptune said:


> 50C-60C is normal under load, so that's really odd. In any case it's absolutely being power limited. Watching GPU-Z after the fact, the clock drops all coincide with power limits.


Wow, that is a bummer. I was expecting these cards to get 13500 out of the box. Might have to return my FE when it gets here and wait for a FTW3... Or do the shunt mod with water cooling.


----------



## Sync0r

IcyIvan said:


> So I have the same problem as Neptune.
> 
> I haven't modded anything on my 3090 FE and this is my result. This is my first run with 30-50% fan speed.
> 
> 
> 
> https://www.3dmark.com/3dm/51641752?
> 
> 
> 
> This is a 2nd run with 100% fan speed.
> 
> 
> 
> https://www.3dmark.com/3dm/51642689?
> 
> 
> 
> In Valorant I was running 210 fps on average, compared to the 300-400 fps benchmarks online.
> 
> Any help would be appreciated.


Is your RAM actually running at 2,128 MHz ? You will see slightly lower scores because of the cpu you are using I think.


----------



## LVNeptune

Sync0r said:


> Is your RAM actually running at 2,128 MHz ? You will see slightly lower scores because of the cpu you are using I think.


Funny enough, I forgot to turn XMP on for my first run if I recall, and when I did turn it on there was no change. I see plenty of 9900Ks hitting above where we are.

edit: he definitely shouldn’t be limited on a 3700x+


----------



## Thanh Nguyen

A 5 mΩ stack will get the power limit to what, guys?


----------



## Sky3900

Thanh Nguyen said:


> 5 mohm stack will get power limit to what guys?











RTX 3090 Founders Edition working shunt mod (www.overclock.net):

"I posted this in the RTX 3090 owners club thread and I'm posting this here for more visibility to anyone who might stumble across this: Edit: Added note about 6th shunt resistor & updated images. Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just..."


----------



## megahmad

Just for the sake of comparison: TUF Gaming OC with stock BIOS and voltage. The OC settings used are shown in AB. I am so limited by the PL that my average clock was 1855MHz.

With everything at stock I get ~12700.


----------



## Thanh Nguyen

Sky3900 said:


> RTX 3090 Founders Edition working shunt mod (www.overclock.net):
> 
> "I posted this in the RTX 3090 owners club thread and I'm posting this here for more visibility to anyone who might stumble across this: Edit: Added note about 6th shunt resistor & updated images. Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just..."


There is no stacked 5 mΩ example in there.


----------



## zlatanselvic

Sheyster said:


> ^ That is a thing of beauty, love it!


 Thank you!


----------



## Bilco

Thanh Nguyen said:


> There is no 5 mohm stacked.


I think a stacked 5 mΩ should result in a 750W max power limit.
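For anyone curious about the arithmetic behind shunt stacking: the card measures current from the voltage drop across its shunt resistors, so soldering a second resistor in parallel lowers the effective resistance and makes the card under-read its own draw. A quick sketch, assuming 5 mΩ factory shunts (verify the value on your own card before trusting these numbers):

```python
def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

FACTORY_SHUNT = 0.005                      # 5 mOhm (assumed stock value)
stacked = parallel(FACTORY_SHUNT, 0.005)   # 2.5 mOhm effective

# Half the resistance means half the sense voltage, so the card
# reports half the real current:
scale = FACTORY_SHUNT / stacked            # 2.0

# A 375 W reported limit therefore permits roughly 750 W of real draw.
print(375 * scale)
```

This is also why "double your BIOS current limit" is the short answer: stacking an identical-value shunt always halves the sensed resistance, whatever the BIOS limit happens to be.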


----------



## IcyIvan

Sync0r said:


> Is your RAM actually running at 2,128 MHz ? You will see slightly lower scores because of the cpu you are using I think.


Oops, I just turned on XMP and my score improved. It's still lower than I expected.

30 - 50% fan speed



https://www.3dmark.com/3dm/51645428?



100% fan speed



https://www.3dmark.com/3dm/51645557?


----------



## LVNeptune

megahmad said:


> Just for the sake of comparison. TUF Gaming OC with stock bios and voltage. OC settings used are shown in AB. I am being so limited by PL, my average clock was 1855mhz.
> 
> With everything on stock I get ~12700.


Lol, even at stock you are higher


----------



## Romir

RedRumy3 said:


> So I have this issue with my 3090 FE and it's when I play with power limit and set it to 114% my card power peaks around 390-395W of power but after playing a game at that power limit my pc will just shut down. Do you think I am hitting my psu power limit and it just shuts down? I am on Seasonic Prime TX-850 Titanium (not even a year old). It's only when I play with power limit


A friend's stock 3090 FE on a TX-650 with Seasonic's cable shuts down in some games and at AC:O's menu (330W). That seemed bad to me for a stock 3900X system with minimal accessories. He's replacing it with a TX-850 tonight.


----------



## RedRumy3

megahmad said:


> I doubt the adapter was the cause but let's hope it was.
> What I think is that your PSU kicks in the OCP when it happens. If that's the case then I am afraid you need a new PSU.
> But it would be weird because 850w should be enough especially with Seasonic quality and single rail, unless your PSU has some defect that only appeared now with high power draw.





Falkentyne said:


> Please let us all know if the Seasonic cable improves things or not.
> Seasonic is sending me one also but I have no idea when it will arrive.


So I think my issue is fixed. I just played Red Dead Redemption 2 for about 2 hours; GPU power draw was 390-404W and there was no shutdown. I will test some more, but I think the Seasonic cable is working well. Maybe going from the PSU cable to extensions to an adapter and all that was causing something, I don't know. I will test more tomorrow.


----------



## LordGurciullo

Hey guys! So... I had to remove the middle part of the case that had the built-in ledge to hold the back of the card. So now it's just flying free... I don't visibly see GPU sag, but the card is huge. Should I buy this? https://www.amazon.com/gp/product/B079HSVSLR/ref=ox_sc_act_title_3?smid=AFRUWVWO3UJ63&th=1


----------



## anethema

Ya on my 3090FE if everything is stock I get in the mid 12ks in port royal.

100% Fan, +190 core, +900 mem, and max power, I got 14492 which sticks me in the mid 30s in the HOF. 




https://www.3dmark.com/pr/387363



It will be interesting to see what is achievable with a waterblock and shunt modding.


----------



## Sky3900

anethema said:


> Ya on my 3090FE if everything is stock I get in the mid 12ks in port royal.
> 
> 100% Fan, +190 core, +900 mem, and max power, I got 14492 which sticks me in the mid 30s in the HOF.
> 
> 
> 
> 
> https://www.3dmark.com/pr/387363
> 
> 
> 
> It will be interesting to see what is achievable with a waterblock and shunt modding.


That seems like a pretty solid stock OC. Wonder if you won the silicon lottery.

Hope mine does as well... Find out tomorrow!


----------



## mirkendargen

ETA to warehouse updated from today to 10/18. I don't feel too bad since it doesn't seem like anywhere else has actually gotten Strix stock yet either, and a new date means Asus told them something, otherwise it would just say past due.


----------



## LVNeptune

anethema said:


> Ya on my 3090FE if everything is stock I get in the mid 12ks in port royal.
> 
> 100% Fan, +190 core, +900 mem, and max power, I got 14492 which sticks me in the mid 30s in the HOF.
> 
> 
> 
> 
> https://www.3dmark.com/pr/387363
> 
> 
> 
> It will be interesting to see what is achievable with a waterblock and shunt modding.


Pack it in boys. He's won.

These are the numbers I would be expecting. The fact you are able to OC 2,000 points higher is INSANE.

EDIT: By the way you are in the Top 10 for your CPU/GPU combo.

You are also #36 as of this post on the Top 100 for Port Royal


----------



## Sky3900

LVNeptune said:


> Pack it in boys. He's won.
> 
> These are the numbers I would be expecting. The fact you are able to OC 2,000 points higher is INSANE.


Average temp is 47C...


----------



## Cholerikerklaus

anethema said:


> Ya on my 3090FE if everything is stock I get in the mid 12ks in port royal.
> 
> 100% Fan, +190 core, +900 mem, and max power, I got 14492 which sticks me in the mid 30s in the HOF.
> 
> 
> 
> 
> https://www.3dmark.com/pr/387363
> 
> 
> 
> It will be interesting to see what is achievable with a waterblock and shunt modding.


You definitely won the silicon lottery, your score is better than mine on water.


----------



## LVNeptune

Sky3900 said:


> Average temp is 47C...


I didn't even see that...JFC. That...that doesn't even make sense


----------



## Sky3900

LVNeptune said:


> I didn't even see that...JFC. That...that doesn't even make sense


Cough... already on water... cough cough... already shunted.


----------



## bogdi1988

Here's my run on my workstation. +130 core and +750 memory and fans at 100%


https://www.3dmark.com/3dm/51649589?


----------



## LVNeptune

Sky3900 said:


> Cough... already on water... cough cough... already shunted.


I mean. That would make sense but why make it up and say it’s just OC’d? If I can expect this with a shunt and waterblock I would absolutely be fine with it even.


----------



## Sky3900

LVNeptune said:


> I mean. That would make sense but why make it up and say it’s just OC’d? If I can expect this with a shunt and waterblock I would absolutely be fine with it even.


I don't know, it just seems suspicious. Maybe they live in a cold place and had the window open or AC blowing on it.


----------



## LVNeptune

Sky3900 said:


> I don't know, just seems suspicious. Maybe they live in cold place and had the window open or AC blowing on it.


Purple sus


----------



## Sky3900

LVNeptune said:


> Purple sus


LOL! 

I'll post my Port Royal scores on here in the next day or two for comparison.


----------



## Dreams-Visions

shiokarai said:


> Your pre-flash port royale score is so low it's hard to believe - my zotac 3080 has higher score  (card water-cooled, but still it's "just" the 3080)


The average 3090 score in Port Royal is just over 13,000. So a score a few hundred points lower for a card that has a hard 350W power limit is about right.


----------



## VickyBeaver

megahmad said:


> You should know that flashing the Gigabyte 3090 Gaming OC bios to your Zotac will disable one or two display ports (depending on card), not sure about HDMI tho. Just make sure to make a backup of your original Zotac bios with the nvflash tool in case you want to go back to stock bios.


There is a workaround for this: use a cheap DisplayPort-to-HDMI converter on the DisplayPort that does not function. Flash back to the original BIOS to get the port working again, then flash the Gigabyte BIOS; you can then use that port with HDMI. However, the port will become non-functional again if you plug a normal DisplayPort connection into that slot.


----------



## GAN77

Falkentyne said:


> Seasonic is sending me one also but I have no idea when it will arrive.


Hello!

How did you order this cable? Or is it a gift from Seasonic?


----------



## megahmad

LVNeptune said:


> Lol, even at stock you are higher


Yah, but my card is factory overclocked (1695MHz on your FE vs 1740MHz on my card), so your stock score of 12.3k is perfectly fine and what I would expect.


----------



## mfmmfmmfmmfm

Gigabyte Gaming OC: no matter what I did, the max I could get was 13863. Maybe mine is bad silicon.



https://www.3dmark.com/3dm/51654011?



limited by power limit


----------



## megahmad

mfmmfmmfmmfm said:


> gigabyte gaming oc no matter what i did this the max i can get 13863. maybe mine was bad silicon.
> 
> 
> 
> https://www.3dmark.com/3dm/51654011?
> 
> 
> 
> limited by power limit


This is an awesome score, I would be more than happy with it.  Seems like the practical ceiling is around ~14k if you're on air and without shunt modding, and even then, not many people will be able to hit 14k.


----------



## bmgjet

Port Royal is basically a power-limit benchmark. Provided you have temps in check and an OK chip, this is what you'd expect:

350W = around 12.9K
370W = around 13.4K
390W = around 13.9K
For 14K+ scores you'll need 420W+
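Those numbers are close to linear at roughly 25 points per extra watt, so you can sketch a rough estimator from them (a back-of-the-envelope fit on three data points from the post above, not a guarantee; chip quality and temps will move it):

```python
# Observed Port Royal scores vs board power, from the table above.
OBSERVED = {350: 12900, 370: 13400, 390: 13900}

def estimate_port_royal(watts):
    """Linear extrapolation: roughly 25 points per watt above 350 W."""
    return 12900 + 25 * (watts - 350)

# Sanity-check the fit against the observed points, then extrapolate.
assert all(estimate_port_royal(w) == s for w, s in OBSERVED.items())
print(estimate_port_royal(420))   # 14650
```

Extrapolating to 420W lands just past 14.6K, which lines up with the "14K+ needs 420W+" rule of thumb (and with diminishing returns, the real curve likely flattens above that).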


----------



## bolagnaise

bmgjet said:


> Port Royal is basically a power-limit benchmark. Provided you have temps in check and an OK chip, this is what you'd expect:
> 
> 350W = around 12.9K
> 370W = around 13.4K
> 390W = around 13.9K
> For 14K+ scores you'll need 420W+


I'm pretty sure I can hit 14k on air. This is on my ZOTAC 3090 with a portable AC about 1m away pointed at the card, no cardboard shroud. Current CPU is a 3900X with no OC, but I think if I switch to my 5.3GHz 10900K I can pick up 50 points.



https://www.3dmark.com/pr/385303


----------



## Spiriva

Dreams-Visions said:


> Understood. Unfortunate to hear about the video ports, but I did backup the bios and will probably flash it back at some point and hope that maybe Zotac themselves releases a proper bios update that gives this card power in line with its competitors. The potential is there; just needs to be properly enabled. Worst case, I wonder if flashing the 3090 Amp! bios (assuming that card is coming) will allow for the performance increase while retaining port functionality.
> 
> I did do the flash and am happy to report that its performance is now in line with other 3090s. It got its dignity back. lol
> 
> *Pre-flash - Port Royale: *12,336 (+100/+600)
> *Post-flash - Port Royale: *13,591 (+160/+875)
> 
> *Uplift:* 1,255 points, +10% performance.
> 
> *Post-flash - TimeSpy:* 19,035
> 
> Still behind (in some cases well behind) similarly priced or slightly more expensive cards, but again...it's respectable now. Actually outscored Guru3D's stock Gaming X Trio in Port Royale, so there's that at least.
> 
> Thank you all for the suggestion. I would have never thought of doing a bios flash.


Nice increase!


----------



## Wrathier

Do you need to buy 3DMark to run Port Royal?


----------



## Sync0r

LVNeptune said:


> Pack it in boys. He's won.
> 
> These are the numbers I would be expecting. The fact you are able to OC 2,000 points higher is INSANE.
> 
> EDIT: By the way you are in the Top 10 for your CPU/GPU combo.
> 
> You are also #36 as of this post on the Top 100 for Port Royal


I really wouldn't worry, in a game it's only going to be 1 or 2 fps, not going to be noticeable. Be happy you managed to buy a 3090, we are all lucky people!


----------



## megahmad

Sync0r said:


> I really wouldn't worry, in a game its only going to be 1 or 2 fps, not going to be noticeable. Be happy you managed to buy a 3090  we are all lucky people!


This. Anyway, this is OCN after all, so we have to push the limits 

I wonder if all these people hitting 13.5k-14k are able to play games without issues and artifacts. I managed to get 13.4k in Port Royal and tried many runs without any issues, but once I ran Control or Trine 4 (or any demanding game), I immediately got a TDR crash.


----------



## nievz

megahmad said:


> This. Anyway this is OCN after all so we have to push the limits
> 
> I wonder if all these people hitting 13.5k-14k are able to play all games without issues and artifacts. I managed to get 13.4k on Port Royal and tried many runs without any issues but once I ran Control or Trine 4 (or any demanding game), I immediately got a TDR crash.


Yeah, my highest in Port Royal was 13900 but it was unstable in games. My stable gaming setting is at 13.5K:



https://www.3dmark.com/3dm/51656597



I found that Warzone is a terrific test for stability. If I can game 2hrs straight without a DEV ERROR 6068, I'm confident i'm game stable.


----------



## bmgjet

megahmad said:


> This. Anyway this is OCN after all so we have to push the limits
> 
> I wonder if all these people hitting 13.5k-14k are able to play all games without issues and artifacts. I managed to get 13.4k on Port Royal and tried many runs without any issues but once I ran Control or Trine 4 (or any demanding game), I immediately got a TDR crash.


At 390W my highest Port Royal score is 13.8K, game stable in Control and Flight Sim.
Shunt modded with the OC untouched, the score is 14K and still game stable, since the OC hasn't changed at all: still the same values, +400 mem, +120 core, flat-lined after 1V. The card just holds 950mV now, where before it was dropping to 830mV, which means the average clocks came up 60MHz.


----------



## Carillo

Thanh Nguyen said:


> 5 mohm stack will get power limit to what guys?


It will double your BIOS current limit.


----------



## Pepillo

VickyBeaver said:


> there is a work around for this if you use a cheep display port to hdmi converter on the display port that dose not fuction, flash back to orignal bios to get the port working again then flash the gigabyte bios you can use that port with hdmi, how ever the port will become non functional again if you plug a normal display port connection into that slot.


Interesting. I have precisely that setup: an HDMI monitor connected with a DP-to-HDMI cable, and the 480W Strix BIOS that disables a DP port on my MSI Gaming X. Would this trick work?


----------



## Wrathier

Hi all,

I need some help checking my benchmark numbers and see if it's ok. I bought 3DMark today to check out what I would get. I'm sort of happy with Time Spy Extreme, but not with Port Royal. Please check it and let me know if it is as it should:

https://www.3dmark.com/spy/14551777 - Time Spy: 19078

https://www.3dmark.com/spy/14551415 - Time Spy Extreme: 9748

https://www.3dmark.com/pr/397670 - Port Royal: 13180

I seem to be epic limited somewhere, maybe power as that is all green in GPU-Z after run, but I think I should get 500ish more than I do in Port Royal... 


Thanks for looking and feedback.

Cheers.


----------



## nievz

Pepillo said:


> Interesting. I precisely have an HDMI monitor connected with a DP to HDMI cable and the 480w bios of the Strix that disables a DP port of my MSI Gaming X. Would this trick work?


I haven't tried the HDMI port after flashing Strix BIOS. Does it still work?


----------



## Pepillo

nievz said:


> I haven't tried the HDMI port after flashing Strix BIOS. Does it still work?


The HDMI port and two DPs work. Only one DP doesn't work.


----------



## anethema

Sky3900 said:


> Cough... already on water... cough cough... already shunted.


Haha I promise I have not yet shunted and put on a water block. I live in northern Canada. I benched first thing in the morning when the wood stove hadn't heated the house up much and it was maybe 12c in here. Fans at 100%.

There should be quite a diff with shunt modding and actual water cooling. The shunts should be here from digikey in a couple days. I was going to wait for the EK block so I didn't have to take it apart twice but maybe I will do it just to see what shunting alone can do.

EDIT: Proof it isn't under water anyway:

And here is the tracking for my shunt resistors from Digikey:


----------



## warbucks

anethema said:


> Haha I promise I have not yet shunted and put on a water block. I live in northern Canada. I benched first thing in the morning when the wood stove hadn't heated the house up much and it was maybe 12c in here. Fans at 100%.
> 
> There should be quite a diff with shunt modding and actual water cooling. The shunts should be here from digikey in a couple days. I was going to wait for the EK block so I didn't have to take it apart twice but maybe I will do it just to see what shunting alone can do.


I've got my shunts coming from Digikey as well and can't wait to slap them on my 3090. It's frosty here in Alberta. Running benches in the early morning gives a small boost for me as well.


----------



## megahmad

Wrathier said:


> Hi all,
> 
> I need some help checking my benchmark numbers and see if it's ok. I bought 3DMark today to check out what I would get. I'm sort of happy with Time Spy Extreme, but not with Port Royal. Please check it and let me know if it is as it should:
> 
> https://www.3dmark.com/spy/14551777 - Time Spy: 19078
> 
> https://www.3dmark.com/spy/14551415 - Time Spy Extreme: 9748
> 
> https://www.3dmark.com/pr/397670 - Port Royal: 13180
> 
> I seem to be epic limited somewhere, maybe power as that is all green in GPU-Z after run, but I think I should get 500ish more than I do in Port Royal...
> 
> 
> Thanks for looking and feedback.
> 
> Cheers.


What card do you have, and at what clocks? Your scores are fine if you're at stock. Actually, I got a lower Port Royal score with my 3090 TUF OC than you did (12700), but my Time Spy scores were higher (19700 and 10000). All at stock.


----------



## TamronLoh

I come bearing gifts: the Aorus Master 3090 BIOS (Aorusmaster3090 - Google Drive). If anyone is wondering how I got it, I got one of only two pieces in the whole of Singapore yesterday, 13/10/2020.

proof


----------



## Tias

TamronLoh said:


> I come bearing gifts, Aorus Master 3090 BIOS Aorusmaster3090 - Google Drive If anyone is wondering how i got it, i got one of only two pieces in the whole of Singapore yesterday, 13/10/2020.
> 
> proof


Thank you! I flashed it to my Asus TUF 3090


----------



## TamronLoh

Tias said:


> Thank you! I flashed it to my Asus TUF 3090


that was fast LOL. let me know if it helps


----------



## Spiriva

TamronLoh said:


> I come bearing gifts, Aorus Master 3090 BIOS Aorusmaster3090 - Google Drive If anyone is wondering how i got it, i got one of only two pieces in the whole of Singapore yesterday, 13/10/2020.


I flashed it to my PNY 3090 just now. No problems with the flash.

It seems to be the same as the Gaming OC, 390W. At least that is what GPU-Z reports.

Thanks for the BIOS, gonna try it out tonight!

This BIOS has a higher boost clock than the Gaming OC though: 1785MHz instead of 1755MHz.


----------



## TamronLoh

Spiriva said:


> I flashed it to my PNY 3090 just now. No problems with the flash.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It seems to be the same as the gaming oc, 390W. Atleast that is what gpu-z reports.
> 
> Thanks for the bios, gonna try it out tonight!


Interesting that it's 390W. Afterburner and HWiNFO report 400W when OC'd.


----------



## Baasha

Guys... someone needs to talk me out of it. I haven't been able to get a single 3090 yet. Missed all the drops so far.

I'm about to buy two on fleabay. I don't know how much longer I can wait.


----------



## Johneey

TamronLoh said:


> I come bearing gifts, Aorus Master 3090 BIOS Aorusmaster3090 - Google Drive If anyone is wondering how i got it, i got one of only two pieces in the whole of Singapore yesterday, 13/10/2020.
> 
> proof
> View attachment 2461982


Nice Thanks!


----------



## CptAsian

Baasha said:


> guys.. someone needs to talk me out of it. Haven't been able to get a single 3090 yet. Missed all the drops so far.
> 
> I'm about to buy 2 on fleabay. I don't know how much longer I can wait.


I may have missed something, but why do you need them so badly? Seems like this has been frustrating you a bit too much for the past few days.


----------



## Spiriva

TamronLoh said:


> interesting that its 390w. afterburner and hwinfo reports 400w when OCed.


In Superposition it pulled very close to 400W; I saw it top out at 397W.


----------



## Thanh Nguyen

I shunted 4 resistors at the 8-pin area and gained about 500 points, but I think I got a bad chip. Maybe shunt one more near the PCIe?


https://www.3dmark.com/pr/397494


----------



## LVNeptune

That’s Hall of Fame scores.


----------



## TamronLoh

Spiriva said:


> In superposition it pulled very close to 400w, i saw it top out at 397w


nice thats good.


----------



## IcyIvan

Not sure why my Time Spy scores are so low compared to others'.



https://www.3dmark.com/3dm/51665242?



Any thoughts?


----------



## Johneey

TamronLoh said:


> I come bearing gifts, Aorus Master 3090 BIOS Aorusmaster3090 - Google Drive If anyone is wondering how i got it, i got one of only two pieces in the whole of Singapore yesterday, 13/10/2020.
> 
> proof
> View attachment 2461982


I thought it had 400 watts? All my software shows 390W, MSI included.


----------



## xankg

TamronLoh said:


> I come bearing gifts, Aorus Master 3090 BIOS Aorusmaster3090 - Google Drive If anyone is wondering how i got it, i got one of only two pieces in the whole of Singapore yesterday, 13/10/2020.
> 
> proof
> View attachment 2461982


Hello friend! I flashed my TUF with this one and GPU-Z shows only 390W and so does Afterburner while running Port Royal BUT in Superposition it actually draws 400W, thanks a lot!


----------



## TamronLoh

xankg said:


> Hello friend! I flashed my TUF with this one and GPU-Z shows only 390W and so does Afterburner while running Port Royal BUT in Superposition it actually draws 400W, thanks a lot!


Welcome~


----------



## megahmad

Thanks a lot *TamronLoh*



xankg said:


> Hello friend! I flashed my TUF with this one and GPU-Z shows only 390W and so does Afterburner while running Port Royal BUT in Superposition it actually draws 400W, thanks a lot!





Tias said:


> Thank you! I flashed it to my Asus TUF 3090


Did flashing disable any display ports on the TUF? Thanks.


----------



## Johneey

megahmad said:


> Thanks a lot *TamronLoh*
> 
> 
> Did flashing disable any display ports? Thanks.


It does on my Palit; only one DP works anymore.


----------



## xankg

Yeah same for me


----------



## xankg

It seems like only Superposition draws the extra power. I tried Witcher 3 and it's still 390±5W, and I couldn't clock any higher than before.


----------



## megahmad

Thanks for the replies guys. I am using two display ports already so I won't be flashing this.


----------



## Johneey

xankg said:


> It seems like only in Superposition it draws more power, tried Witcher 3 and it still is 390±5W and I couldnt clock any higher than before


Same here.


----------



## Tias

megahmad said:


> Thanks a lot *TamronLoh*
> 
> 
> 
> 
> 
> Did flashing disable any display ports on the TUF? Thanks.


For me I don't know; I only have one DP cable plugged in, and that one worked fine after flashing.
My card hits 2250MHz with this BIOS, but only for a few seconds during 3DMark. Although it's higher than ever before, even if just for a few seconds.

During 3DMark it pulls 390-399W.


----------



## Johneey

The Aorus BIOS is not worth it with a Gigabyte: my DPs keep working, but there's no increase in performance.


----------



## ROGKilla

edit - got the answer


----------



## megahmad

ROGKilla said:


> Now have over 15K at Port Royal. My card runs like hell. No Mods No Shunt Mod Nothing!


15k without modding or anything? Are you on water? You must have the best chip ever.
What card do you have?


----------



## ROGKilla

edit - got the answer


----------



## Wrathier

Wrathier said:


> Hi all,
> 
> I need some help checking my benchmark numbers and see if it's ok. I bought 3DMark today to check out what I would get. I'm sort of happy with Time Spy Extreme, but not with Port Royal. Please check it and let me know if it is as it should:
> 
> https://www.3dmark.com/spy/14551777 - Time Spy: 19078
> 
> https://www.3dmark.com/spy/14551415 - Time Spy Extreme: 9748
> 
> https://www.3dmark.com/pr/397670 - Port Royal: 13180
> 
> I seem to be epic limited somewhere, maybe power as that is all green in GPU-Z after run, but I think I should get 500ish more than I do in Port Royal...
> 
> 
> Thanks for looking and feedback.
> 
> Cheers.


Ok, I figured out why my score was so low: I simply overclocked the card more than it liked, even though it only ran at 54 degrees with 100 percent fan speed.

Now I scored 13746: https://www.3dmark.com/3dm/51669758 with mem at +800 and core at +135, fans maxed.

But that seems more realistically what to expect from a Gigabyte 3090 Gaming OC.


----------



## Johneey

ROGKilla said:


> @TamronLoh
> 
> The BIOS you uploaded has only 390W, not 400W. Switch your card from the Silent BIOS to the OC BIOS.
> 
> I flashed the Aorus Master BIOS but it's still 390W. No change from the Gaming OC BIOS!
> All DPs are working fine.
> 
> I overclocked the whole time without increasing the voltage. I thought it wouldn't accept it anyway.
> Now I have over 15K in Port Royal. My card runs like hell. No mods, no shunt mod, nothing!
> 
> -> https://www.3dmark.com/pr/395145
> 
> 
> 
> View attachment 2461987


Nice dude! Did you block me on Discord?
I've got a Mo-Ra now too, hahah 
I can only manage 14786 :-(
I see you're not on the latest driver, do you think it makes a difference?


----------



## ROGKilla

edit - got the answer


----------



## Johneey

ROGKilla said:


> No, I haven't. What makes you think that?
> 
> 9°C??? How did you manage that? Is the Mo-Ra outside the house?


I can't send you messages anymore. :-(
Hahahaha


----------



## ROGKilla

edit - got the answer


----------



## Benni231990

So which BIOS is better for more power, the Aorus or the Gaming OC?


----------



## Tias

Benni231990 said:


> So which BIOS is better, with more power: the Aorus or the Gaming OC?


I got slightly higher scores in all 3DMark tests with the Aorus Master, so I'm sticking with that one.


----------



## ROGKilla

They're both the same: 390W.


----------



## Gryzor

ROGKilla said:


> They're both the same: 390W.


Which 3090 do you have? Does it have 18 power stages?


----------



## ROGKilla

edit - got the answer


----------



## TamronLoh

ROGKilla said:


> @TamronLoh
> 
> The BIOS you uploaded has only 390W, not 400W. Switch your card from the Silent BIOS to the OC BIOS.
> 
> Flashed the Aorus Master BIOS, but still 390W. No change versus the Gaming OC BIOS!
> All DPs are working fine.
> 
> I overclocked the whole time without increasing the voltage; I thought it wouldn't accept more anyway.
> Now I have over 15K in Port Royal. My card runs like hell. No mods, no shunt mod, nothing!
> 
> -> https://www.3dmark.com/pr/395145
> 
> 
> 
> View attachment 2461987


I already set it to the overclock profile. The only difference between the silent and overclock profiles is fan speed.


----------



## Johneey

I never got 400+ watts with the Aorus BIOS. Could it be the silent-mode BIOS? Has anyone flashed the other Aorus BIOS?


----------



## Gryzor

Gryzor said:


> Which 3090 do you have? Does it have 18 power stages?





ROGKilla said:


> Inno3D iChill X4. I don't know how many power stages. Why?


Some cards have 19 or 20 power stages (with 2x 8-pin plugs). Others, like the Galax/KFA2 3090, have 18 power stages. So I'm thinking of flashing the GB Gaming OC BIOS (to increase the power limit to 390W). Could anyone give me advice regarding the number of power stages? Galax, like Palit, has fewer power stages than the Gigabyte Gaming OC (18 vs 19). Thanks.


----------



## HyperMatrix

Johneey said:


> Nice, dude. Did you block me on Discord?
> I have a Mo-Ra now too, hahah.
> I only manage 14786 :-(
> I see you're not using the latest driver; do you think that makes a difference?





ROGKilla said:


> No, I haven't. What makes you think that?
> 
> 9°C??? How did you do this? Mo-Ra outside the house?





Johneey said:


> I can't send you messages anymore. :-(
> Hahahaha


Guys. English only please. It’s part of the forum rules.

There are people from all over the world here and you never know when something you’re talking about could be helpful to them if they could understand it. Thanks!


----------



## LVNeptune

Just for reference, here's my stock 3090 FE clocks. The ones I posted earlier weren't very descriptive.

Stock


https://www.3dmark.com/3dm/51673875?



Stock + Fan 100%


https://www.3dmark.com/3dm/51673955?



Stock + 114% Power


https://www.3dmark.com/3dm/51674047?



Stock + 114% Power + Fan Full


https://www.3dmark.com/3dm/51673791?


----------



## rawsome

I have tested the Aorus Master BIOS on my MSI Ventus OC with the stock cooler.
I can see it going up to 400W for very short periods; the Gaming OC BIOS never got past 390W.
I can't prove there's a real improvement in Port Royal, maybe +100 in score.

I'm around 13500 now without memory overclocking. I guess that's not bad for a Ventus.

Am I right that if the Aorus Master has a higher boost clock of 1780 and I set +130 on the core clock, that means something different than setting +130 on the Gaming OC BIOS with its 1755 boost clock? So +105 would effectively be what +130 was before?

One important thing I noticed: the first fan only goes up to 2000 RPM, so on this BIOS I had to adjust the fan curves to reach 100% earlier, at 70°. I guess the Aorus has different fans, while the fans on the Ventus are all identical (I haven't checked the max RPMs with the stock BIOS). Without that tweak I get worse temps and noise.
I use Gainward ExperTool II to set independent fan curves. Does Afterburner also support a curve per fan?
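The offset question above is just arithmetic: an Afterburner-style core offset is added on top of the BIOS's stock boost clock, so hitting the same absolute clock on a different BIOS means adjusting the offset by the difference in boost clocks. A minimal sketch (the helper is mine, not part of any vendor tool):

```python
# Sketch: convert a core-clock offset between two BIOSes with different
# stock boost clocks. Helper name and logic are mine, not Afterburner's.
def offset_for_same_target(old_boost, old_offset, new_boost):
    """Offset on the new BIOS that hits the same absolute clock (MHz)."""
    return old_boost + old_offset - new_boost

# Gaming OC BIOS boosts to 1755 MHz, so +130 targets 1885 MHz.
# Equivalent offset on the Aorus Master BIOS (1780 MHz boost):
print(offset_for_same_target(1755, 130, 1780))  # 105
```

So yes: +130 on the 1755MHz Gaming OC BIOS targets the same 1885MHz as +105 on the 1780MHz Aorus Master BIOS.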


----------



## Sky3900

LVNeptune said:


> Just for reference, here's my stock 3090 FE clocks. The ones I posted earlier weren't very descriptive.
> 
> Stock
> 
> 
> https://www.3dmark.com/3dm/51673875?
> 
> 
> 
> Stock + Fan 100%
> 
> 
> https://www.3dmark.com/3dm/51673955?
> 
> 
> 
> Stock + 114% Power
> 
> 
> https://www.3dmark.com/3dm/51674047?
> 
> 
> 
> Stock + 114% Power + Fan Full
> 
> 
> https://www.3dmark.com/3dm/51673791?


Seems low compared to a lot of other stock OC results on here; I wonder what the deal is. Hope mine performs better...


----------



## Sky3900

What water cooling hardware would you all recommend for a 3090 FE (water block, radiator, tubing, fittings)?


----------



## LVNeptune

Sky3900 said:


> Seems low compared to a lot of other stock OC results on here, wonder what the deal is. Hope mine performs better...


No clue. I have a second card coming in but that was for someone else. I may just open it to test and see.


----------



## NapsterAU

Sad that the Aorus Master BIOS is only 390W.
I will try it later and see how it behaves.


----------



## HyperMatrix

NapsterAU said:


> Sad that the Aorus Master BIOS is only 390W.
> I will try it later and see how it behaves.


I'm sure there's a bit of a liability issue with 2x 8-pin cards. If they exceed the spec and for some reason something goes wrong (PCIe fingers melting, cables designed just to spec burning, PSUs designed to minimum spec blowing) then they'd be responsible for not only the damage to the video card, but to your system, your house if it catches fire, and any injury or death to self or others as a result. 

We had mentioned from the start that the safest bet is the 3x 8-pin design for the 3090 unless you're willing to shunt. If you're willing to shunt though, problem is automatically solved.


----------



## LVNeptune

HyperMatrix said:


> I'm sure there's a bit of a liability issue with 2x 8-pin cards. If they exceed the spec and for some reason something goes wrong (PCIe fingers melting, cables designed just to spec burning, PSUs designed to minimum spec blowing) then they'd be responsible for not only the damage to the video card, but to your system, your house if it catches fire, and any injury or death to self or others as a result.
> 
> We had mentioned from the start that the safest bet is the 3x 8-pin design for the 3090 unless you're willing to shunt. If you're willing to shunt though, problem is automatically solved.


Technically the 12-pin FE isn't a 2x or 3x 8-pin card. Supposedly a 3x 8-pin to 12-pin adapter is being released at some point. If that happens, I can see NVIDIA releasing a BIOS that increases the power limits, but who knows at this point. The only thing we can do is a shunt mod to bypass it. At least AIBs have options. The FE seems to have the best cooling, but everything else is where it's at for power limits.


----------



## HyperMatrix

LVNeptune said:


> Technically the 12 pin FE isn't 2x or 3x 8-pin card. Supposedly there's supposed to be a 3x 8 to 3x 12 being released at some point. If that were to happen I can see NVIDIA releasing a bios that increases the power limits but who knows at this point. Only thing we can do is shunt mod to bypass. At least AIB have options. Seems like FE has the best cooling option but everything else is where it's at for power limits.


The 12-pin design itself isn't limited to 300W. However, it is limited to that by spec, as it's currently fed by an adapter converting 2x 8-pin connectors. Nvidia wouldn't be able to release an increased-power BIOS unless it went with a 3x 8-pin to 1x 12-pin adapter or worked with future power supplies that connect directly to it. Well, it could theoretically release one, but not without opening itself up to liability that's just not worth it.


----------



## HyperMatrix

Well f*#^. What do we do now?





NVIDIA will shift over to TSMC for new 7nm Ampere GPUs in 2021

NVIDIA will reportedly shift most of the workload of producing 7nm Ampere GPUs to TSMC, away from Samsung 8nm.

www.tweaktown.com


----------



## Sky3900

HyperMatrix said:


> Well f*#^. What do we do now?
> 
> 
> 
> 
> 
> NVIDIA will shift over to TSMC for new 7nm Ampere GPUs in 2021
> 
> NVIDIA will reportedly shift most of the workload of producing 7nm Ampere GPUs to TSMC, away from Samsung 8nm.
> 
> www.tweaktown.com


Nvidia's new 5nm Hopper architecture is also supposed to be released in 2021. Potentially a short product cycle this time.


----------



## HyperMatrix

Sky3900 said:


> Nvidia's new 5nm Hopper architecture is also supposed to be released in 2021. Potentially a short product cycle this time.


I wouldn't expect Nvidia to switch to TSMC 7nm for Ampere if they're planning to release 5nm Hopper a few months after; it wouldn't make any sense. But I am now considering just keeping the card on air when I actually get my hands on one. Considering the scarcity, resale value should still hold.


----------



## Sky3900

It's here...


----------



## AlKappaccino

XOC BIOS released for the 3080 FTW3; maybe the 3090 follows soon.

https://twitter.com/i/web/status/1316572799050248192


----------



## HyperMatrix

Edit: might be 470W for the FTW3 3080. Going to need that 520W 3090 bios now. Might get away with no shunting.


----------



## Johneey

AlKappaccino said:


> XOC BIOS released for the 3080 FTW3; maybe the 3090 follows soon.
> 
> https://twitter.com/i/web/status/1316572799050248192


Wow, nice. We need one for the 3090 now.


----------



## Clouddddy

Johneey said:


> Wow, nice. We need one for the 3090 now.


What are the chances we can get/how can I get in contact with MSI to do the same for the Gaming x Trio 3090?


----------



## bolagnaise

Clouddddy said:


> What are the chances we can get/how can I get in contact with MSI to do the same for the Gaming x Trio 3090?


ZERO


----------



## Clouddddy

bolagnaise said:


> ZERO


----------



## Sheyster

xankg said:


> Hello friend! I flashed my TUF with this one, and GPU-Z shows only 390W, as does Afterburner while running Port Royal, BUT in Superposition it actually draws 400W. Thanks a lot!


This isn't unusual. I've seen my FE draw 417W at the 114%/400W power limit. It's not a precise cut-off.
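One plausible explanation (an assumption on my part, not documented NVIDIA behaviour) is that the limiter regulates an averaged power reading, so brief instantaneous samples can exceed the cap while the average stays within it. A toy sketch of that idea, with invented numbers:

```python
from collections import deque

# Toy model: a limiter that regulates a short moving average of board power.
# Window length and sample values are invented for illustration.
def moving_avg_ok(samples, cap, window=4):
    """True if every windowed average stays at or below the cap (watts)."""
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        if sum(buf) / len(buf) > cap:
            return False
    return True

samples = [380, 417, 390, 385, 400, 385]  # watts; note the 417 W spike
print(max(samples))                       # 417
print(moving_avg_ok(samples, cap=400))    # True: no windowed average breaks 400 W
```

Under this model a 417W spike at a 400W limit is exactly what a power meter sampling fast enough would show.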


----------



## Sky3900

Yup, the FE is severely power limited. It needs another 100-150W. The good news is the stock cooler works great, and this thing can definitely handle moar power. Build quality is amazing. Can't complain: 40% faster than my OC'd 2080 Ti.

Power: 114%
Fans: 100%
GPU Clock: +133
Mem Clock: +756

Port Royal Score: 13327

Clocks bounced around between 1930 and 2070. Any higher on the clocks is unstable.



https://www.3dmark.com/pr/399760


----------



## bmgjet

There's more than one power limit. The slider only adjusts the board power limit.
You'll still get limited by the chip power draw in workloads that max out every bit of the die.


----------



## Sheyster

Clouddddy said:


> What are the chances we can get/how can I get in contact with MSI to do the same for the Gaming x Trio 3090?


You can flash the 480W ASUS Strix BIOS to the MSI Gaming X Trio. Both are 3 x 8-pin and others have done it successfully. If EVGA releases a 3090 FTW3 XOC BIOS there is a good chance it will also work with your MSI card.


----------



## Sky3900

bmgjet said:


> There's more than one power limit. The slider only adjusts the board power limit.
> You'll still get limited by the chip power draw in workloads that max out every bit of the die.


Any way to adjust chip power draw or is it fixed? Do you know what the max is by chance?


----------



## bmgjet

Sky3900 said:


> Any way to adjust chip power draw or is it fixed? Do you know what the max is by chance?


A shunt mod is the only way so far.
So far, in the data I'm collecting, it looks like some cards have a chip power limit of 200W and others 300W.
Do a stress test with GPU-Z open and watch what chip power is doing when you're reading "power limited" while board power is below its limiter.
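The check described above can be written out as simple logic. The function, names, and thresholds below are mine (illustrative only), not GPU-Z's:

```python
# Illustrative logic only; names and thresholds are mine, not GPU-Z's.
# Board power and chip (core) power each have their own cap; whichever
# reading sits at its cap is the rail actually throttling the card.
def binding_limit(board_w, board_cap_w, chip_w, chip_cap_w, margin_w=5):
    """Which power rail is pinning the card, within a small margin (watts)."""
    if board_w >= board_cap_w - margin_w:
        return "board"
    if chip_w >= chip_cap_w - margin_w:
        return "chip"
    return "none"

# Hypothetical readings: board well under a 390 W cap, chip pegged near
# an assumed ~220 W chip limit, so the chip rail is the binding one.
print(binding_limit(board_w=360, board_cap_w=390, chip_w=218, chip_cap_w=220))  # chip
```

If this returns "chip" during a stress test, raising the board slider won't help; only a shunt mod (or a higher chip limit) would.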


----------



## Thanh Nguyen

Has anyone flashed the Gigabyte BIOS and had only 1 DP work? Any way to fix it?


----------



## bmgjet

Thanh Nguyen said:


> Has anyone flashed the Gigabyte BIOS and had only 1 DP work? Any way to fix it?


With the AORUS Master BIOS, only HDMI and 1 DP will work on most cards because of the port assignment.
With the Gaming OC BIOS, all of them work on most cards, but some still lose the middle DP port.


----------



## Sky3900

bmgjet said:


> A shunt mod is the only way so far.
> So far, in the data I'm collecting, it looks like some cards have a chip power limit of 200W and others 300W.
> Do a stress test with GPU-Z open and watch what chip power is doing when you're reading "power limited" while board power is below its limiter.


My GPU chip power peaks at about 218W in Furmark. PWR_SRC peaks at 134W; memory is a consistent 85W.


----------



## Sky3900

My gaming rig. Case is kinda ghetto, but, it still works.


----------



## cstkl1

As I was doing the waterblock assembly for the Strix 3080... this card sucks on air, btw: it only boosts to 2070 and holds a constant 2040-2055.
@owikh84's does a constant 2145...

Strix 3090 confirmed and on the way in an hour...


----------



## Sync0r

IcyIvan said:


> Not sure why my Time Spy score is so low compared to others
> 
> 
> 
> https://www.3dmark.com/3dm/51665242?
> 
> 
> 
> Any thoughts?


It looks like you're not overclocked on the card. The CPU also potentially isn't going to be as strong as a clocked 10900K.

Comparison with my system:


https://www.3dmark.com/compare/spy/14554479/spy/14503202#


----------



## Cholerikerklaus

Could someone send me a link to the aorus bios? Can't find it.


----------



## bmgjet

Cholerikerklaus said:


> Could someone send me a link to the aorus bios? Can't find it.











976 KB file on MEGA







mega.nz


----------



## Cholerikerklaus

Johneey said:


> I thought it had 400 watts? All my software shows 390, MSI too


A question: you got a performance boost with the Mo-Ra outside, at lower temps? Inside the house I only get 30 degrees on the GPU. So does it make sense to do a bench with the Mo-Ra outside, where I'd get nearly 20 degrees?


----------



## Cholerikerklaus

bmgjet said:


> 976 KB file on MEGA
> 
> 
> 
> 
> 
> 
> 
> mega.nz


Thx man


----------



## Johneey

Cholerikerklaus said:


> A question: you got a performance boost with the Mo-Ra outside, at lower temps? Inside the house I only get 30 degrees on the GPU. So does it make sense to do a bench with the Mo-Ra outside, where I'd get nearly 20 degrees?


Nope, no difference; maybe 100-150 points in Time Spy if you already hit 30C.


----------



## Sky3900

Sky3900 said:


> Yup, the FE is severely power limited. It needs another 100-150W. The good news is the stock cooler works great, and this thing can definitely handle moar power. Build quality is amazing. Can't complain: 40% faster than my OC'd 2080 Ti.
> 
> Power: 114%
> Fans: 100%
> GPU Clock: +133
> Mem Clock: +756
> 
> Port Royal Score: 13327
> 
> Clocks bounced around between 1930 and 2070. Any higher on the clocks is unstable.
> 
> 
> 
> https://www.3dmark.com/pr/399760


Getting better... 14219 on that one. Closing Corsair iCUE and EVGA Precision X1 after setting clocks and fan speeds boosted my score 6.5%. I think these utilities were eating a bunch of clock cycles.



https://www.3dmark.com/3dm/51691276?


----------



## The-Real-Link

Alex24buc said:


> I am sorry; I hope they will get you a new card. But how did this happen? The two DisplayPort outputs were dead from the beginning and you only found out just now?


Hey Alex, sorry, just saw your reply. They were dead from the start. I flashed my board BIOS thinking there was some incompatibility, but they arrived DoA. Tried them after the BIOS updates and still no luck. At least one of them works. Too bad Nvidia can't cross-ship me a replacement card.


----------



## bmgjet

Here's the effect of the power limit on my card, all at the same overclock.
I went from day 1 thinking I really lost the silicon lottery, when I couldn't even crack 13K, to now running in the low 14Ks.










If I can get the temps down to 30C, I should be able to crack the high 14Ks. That, or turn up the power limit further, but I'm not too keen on that when the VRM is just air cooled.


----------



## Clouddddy

Sheyster said:


> You can flash the 480W ASUS Strix BIOS to the MSI Gaming X Trio. Both are 3 x 8-pin and others have done it successfully. If EVGA releases a 3090 FTW3 XOC BIOS there is a good chance it will also work with your MSI card.


Yeah, I was thinking that, but I heard there were some DP issues?


----------



## zhrooms

Reinhardovich773 said:


> *Heads-up guys*!
> 
> Latest GPU-Z is out and it adds the ability to save/export Ampere GPUs vBIOSes. Grab it here: TechPowerUp GPU-Z v2.35.0 Released


Finally!

https://www.techpowerup.com/vgabios...&version=&interface=&memType=&memSize=&since=
https://www.techpowerup.com/vgabios...&version=&interface=&memType=&memSize=&since=
Keep an eye there, all the BIOSes should appear in the next few days!


----------



## AlKappaccino

zhrooms said:


> Finally!
> 
> https://www.techpowerup.com/vgabios...&version=&interface=&memType=&memSize=&since=
> https://www.techpowerup.com/vgabios...&version=&interface=&memType=&memSize=&since=
> Keep an eye there, all the BIOSes should appear in the next few days!


What's the difference between using GPU-Z or just saving your bios with nvflash?


----------



## cstkl1

strix 3090 oc

conquest complete


----------



## LVNeptune

AlKappaccino said:


> What's the difference between using GPU-Z or just saving your bios with nvflash?


It automatically adds the verified BIOS to the TechPowerUp database.


----------



## Sheyster

cstkl1 said:


> View attachment 2462060
> 
> 
> strix 3090 oc
> 
> conquest complete


Very nice, I'm hoping to snag one soon! Can you please check the BIOS revision? There are already 2 different versions in the TPU database.


----------



## SoldierRBT

Sheyster said:


> Very nice, I'm hoping to snag one soon! Can you please check the BIOS revision? There are already 2 different versions in the TPU database.


Probably the Quiet BIOS and the Performance BIOS. The only difference should be the fan curve.


----------



## Baasha

Where did you get it? That's the one I'm going to get: Strix 3090 OC, SLI!!!



cstkl1 said:


> View attachment 2462060
> 
> 
> strix 3090 oc
> 
> conquest complete


----------



## cstkl1

Sheyster said:


> Very nice, I'm hoping to snag one soon! Can you please check the BIOS revision? There are already 2 different versions in the TPU database.


More concerned at the moment that I kinda killed my 3080 Strix. Gonna check on that.

Just at stock with a 10900K at 5GHz and a ton of background Steam, ran 3DMark.
The GPU score was 19.6K.
OC'd a bit, +90/500, and the GPU score nearly hit 21K.

This card is nuts.


----------



## cstkl1

Get it. It's a monster.

3080 at 2145/21Gbps: Time Spy 20K, on water mind you.
3090 Strix on air, avg clock 1950-1980 with 2070 boost, unoptimized Windows: 21K. And that was just the first test run at +90/500; didn't even check the max OC.

There's no way a 3080 can catch up.

Again, I had to pull tons of strings, short of calling the prime minister.


----------



## zlatanselvic

https://www.3dmark.com/3dm/51705688? 

#73 Overall in Port Royal on air with my FE. 


Any updates on a "juicier" BIOS for the FE?


----------



## Thebc2

cstkl1 said:


> Get it. It's a monster.
> 
> 3080 at 2145/21Gbps: Time Spy 20K, on water mind you.
> 3090 Strix on air, avg clock 1950-1980 with 2070 boost, unoptimized Windows: 21K. And that was just the first test run at +90/500; didn't even check the max OC.
> 
> There's no way a 3080 can catch up.
> 
> Again, I had to pull tons of strings, short of calling the prime minister.


Excited to see how she does!! The Strix was near the top of my list along with the FTW3 Ultra until the ultra-high-end 3090s are out. I ended up with an FTW3 Ultra; hoping they release an XOC higher-power-limit BIOS for the 3090, as it definitely seems power capped right now.

Interestingly enough, the type of game/benchmark really impacts power draw in relation to clock speed, and thus performance, as I hit the power limits. For example, in Heaven and the 3DMark benchmarks my clock speed will jump around between 1985-2100 with temps in the low 70s. When I play World of Warcraft in 4K with everything turned up, I see clocks stuck SOLID at 2115 or 2130 and temps in the low 60s. Having trouble getting my head around that.


Sent from my iPhone using Tapatalk Pro


----------



## Johneey

Does anyone have a high-res picture of the Palit 3090 GamingPro? Or know which resistors it has: R005, 5 of them? I need to know which resistors to buy :-(
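For anyone following the shunt-mod talk in this thread, the arithmetic behind stacking a resistor on an R005 (5 mOhm) shunt is just parallel resistance. The sketch below works in milliohms and assumes the controller simply scales its current reading by the sensed resistance (a simplification, and the functions are mine):

```python
# Parallel-resistance arithmetic behind the shunt mod. Values in milliohms.
# Assumption: the limiter derives current from the voltage across the shunt,
# so lowering the sensed resistance makes it under-read power by that ratio.
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def effective_limit(bios_limit_w, shunt_mohm, stacked_mohm):
    """Real board power reachable before the fooled limiter kicks in."""
    return bios_limit_w * shunt_mohm / parallel(shunt_mohm, stacked_mohm)

# Stacking a second R005 (5 mOhm) on the original R005 halves the sensed
# resistance: a 390 W limit then behaves like 780 W. Know your VRM, cables,
# and PSU headroom before attempting anything like this.
print(parallel(5, 5))               # 2.5
print(effective_limit(390, 5, 5))   # 780.0
```

This is also why people pick the stacked value carefully: a larger stacked resistor (say R010 on R005) gives a gentler ~1.5x bump instead of a flat doubling.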


----------



## Nizzen

cstkl1 said:


> Get it. It's a monster.
> 
> 3080 at 2145/21Gbps: Time Spy 20K, on water mind you.
> 3090 Strix on air, avg clock 1950-1980 with 2070 boost, unoptimized Windows: 21K. And that was just the first test run at +90/500; didn't even check the max OC.
> 
> There's no way a 3080 can catch up.
> 
> Again, I had to pull tons of strings, short of calling the prime minister.


Shunt her 🤩


----------



## cstkl1

Nizzen said:


> Shunt her 🤩


I think there's no need.
The 3080 Strix didn't even do 400W on water, and that's with a 450W BIOS.

The 3090's 480W should be enough.

What we need is voltage control before shunting.

Has anybody tried that ElmorLabs thingy for 24/7 daily use?


----------



## LVNeptune

zlatanselvic said:


> https://www.3dmark.com/3dm/51705688?
> 
> #73 Overall in Port Royal on air with my FE.
> 
> 
> Any updates on a "juicier" BIOS for the FE?


You got a really good bin.


----------



## Bolded

Hi everyone. Long-time reader, first-time poster. I've enjoyed the content on this site for years and have learned so much from it and applied it to so many builds.

For those of you really struggling with your 3090 FEs on air: I was really fighting with the power limit and temps when I first got mine. I ended up trying an undervolt with a pinned boost clock to see if it would do anything. I was skeptical after reading other posts, if I'm being honest, but oddly it seems to have improved my OC by stabilizing my boost.

This was one of my first mostly _stock_ runs in Port Royal (12886): https://www.3dmark.com/pr/389441

This is where I ended up on a full-hog run at 114% power, 90C temp limit, 100% core voltage, with settings at +90 and +700 (13853): https://www.3dmark.com/pr/372054

Finally, here is my undervolted OC run at 0.893v and 1980MHz (14105): https://www.3dmark.com/pr/391821

The above is rock-solid through hours of stress tests in 3DMark and days of gaming. I did get a higher score from a +140/+600 full-hog run, but that wasn't 24/7 stable.

The card sits at about 360-395W the whole time, but never goes above that, which stops the boost clock from dropping below 1900. I would _love_ to get just 20W more out of this thing; it could easily be stable at 2025 if I had a bit more power. I wish an FE BIOS swap were possible, but seeing that it's unlikely, I only have one option. I am not a solderer, so shunt modding that way is out of the question. But I am intrigued by the post about using conductive paint to stack resistors. *Has anyone done that on their FE yet? If so, which resistors did you use and how many did you stack?*

_NOTE: I know the Port Royal temp info shows the undervolted run higher than the others, but it never gets above 59 now, whereas in previous attempts it would hit 70 from time to time. This is with the fan scaled to run at 90% from 50 degrees onwards. I plan to throw a waterblock on this thing once the good ones are out (I saw Bitspower released one, but I've never heard of them)._
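The undervolt result above lines up with the rough first-order CMOS power model P ∝ V² · f. The numbers below are illustrative (the stock reference point is my assumption, and the model ignores leakage and memory power), but they show why a lower voltage at a slightly higher clock fits under a fixed board power limit:

```python
# First-order CMOS dynamic-power model: P is roughly proportional to V^2 * f.
# Reference point (1.00 V @ 1905 MHz) is an assumed stock-ish operating point.
def relative_power(v, f, v_ref, f_ref):
    """Core power relative to the reference operating point."""
    return (v / v_ref) ** 2 * (f / f_ref)

ratio = relative_power(0.893, 1980, 1.00, 1905)
print(round(ratio, 3))  # 0.829: ~17% less core power, at a higher clock
```

The V² term dominates, which is why shaving ~0.1v buys far more power headroom than the extra 75MHz costs.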


----------



## LordGurciullo

ROGKilla said:


> @TamronLoh
> 
> The BIOS you uploaded has only 390W, not 400W. Switch your card from the Silent BIOS to the OC BIOS.
> 
> Flashed the Aorus Master BIOS, but still 390W. No change versus the Gaming OC BIOS!
> All DPs are working fine.
> 
> I overclocked the whole time without increasing the voltage; I thought it wouldn't accept more anyway.
> Now I have over 15K in Port Royal. My card runs like hell. No mods, no shunt mod, nothing!
> 
> -> https://www.3dmark.com/pr/395145
> 
> 
> 
> View attachment 2461987


I see you knocked me out of the top 10. So this is just a waterblock keeping it at 21 degrees? Excuse my ignorance, because I've never watercooled anything, but that can drop a card like mine, which runs at 55C, down to 21C? You're beating my card's 440W with 390W. The temps must be the key? Or is it the Aorus BIOS?


----------



## LVNeptune

21c is LN2 temps.


----------



## Nizzen

cstkl1 said:


> I think there's no need.
> The 3080 Strix didn't even do 400W on water, and that's with a 450W BIOS.
> 
> The 3090's 480W should be enough.
> 
> What we need is voltage control before shunting.
> 
> Has anybody tried that ElmorLabs thingy for 24/7 daily use?


My plan is to try the Asus OC Panel II when I get the 3090 Strix OC. I already have the Strix waterblock ready.


----------



## LordGurciullo

LVNeptune said:


> 21c is LN2 temps.


Oh, that's what he means? He did LN2? Because I was like, holy ****, I will learn how to waterblock a muthafucka if I can get to 21C... because we all know that after 44C it pulls power and the clock drops...


----------



## ROGKilla

edit - got the answer


----------



## cstkl1

Nizzen said:


> My plan is to try the Asus OC Panel II when I get the 3090 Strix OC. I already have the Strix waterblock ready.


Bitspower??

There's a pic of that block in the 3080 thread.


----------



## LVNeptune

So chilled coolant is being pumped into it?


----------



## ROGKilla

edit - got the answer


----------



## LordGurciullo

I'm a little confused, Rog. Those are incredible temps. I have a floor air-conditioner unit shooting cold air onto the card (stock card), and I'm getting 55C at 100 percent fan on a Port Royal run... with the AC on it... Could you elaborate?

Your setup drops that to 21C??? That seems remarkable!!! That's what's allowing the power to stay constant! Very cool! I've never waterblocked anything... I'm fearful to do so because of my lack of knowledge and warranty concerns. Can't risk a 2000-dollar card for 2 percent performance and losing the warranty...


----------



## ROGKilla

edit - got the answer


----------



## Thanh Nguyen

Why do I have a 10-12C delta with my card? Same block. Liquid metal is used as well.


----------



## ROGKilla

edit - got the answer


----------



## LordGurciullo

I need you to write me a tutorial on how to do what you did, ROG!


----------



## kx11

ROGKilla said:


> Watercooling is very nice; everything is quiet and you have more power.
> 
> My normal water temp is 25°C; with 25°C water I get a 31-32°C GPU temperature (6-7°C delta).
> 
> OK guys, let's play a little bit of Wild Life
> 
> -> https://www.3dmark.com/wl/4568
> 
> Who's joining?


Trying my best to add 1+ MHz to the core without the benchmark crashing



https://www.3dmark.com/wl/4675


----------



## ROGKilla

edit - got the answer


----------



## Carillo

ROGKilla said:


> @TamronLoh
> 
> The Bios you Upload has only 390W not 400W, Switch on your Card from Silent Bios to OC Bios.
> 
> Flashed the Aorus Master Bios but still 390W. No Changes to Gaming OC Bios!
> All DPs are Working fine.
> 
> I overclocked the whole time without increasing the voltage. I thought he wouldn't accept it anyway.
> Now have over 15K at Port Royal. My card runs like hell. No Mods No Shunt Mod Nothing!
> 
> -> https://www.3dmark.com/pr/395145
> 
> 
> 
> View attachment 2461987


Nice score. Are you using the V/F curve or the slider?


----------



## ROGKilla

Thx, Slider. No Curve.


----------



## Carillo

ROGKilla said:


> Thx, Slider. No Curve.


OK. When I look at your Time Spy score, it looks like you are very limited by power, since your max clock is 2295MHz but your average clock is 2064MHz. My max Port Royal score is 14680 on AIR. Waterblock and chiller tomorrow.


----------



## ROGKilla

Yes, I'm only fighting the power limit. Oh nice, bro, can't wait to see your score =)


----------



## Wolfhunter1043

Hello all! New to the forum, but I have some questions about my OC. I have an EVGA 3090 FTW3 Ultra and an i9-9900K.

My baseline Port Royal benchmark is around 13300ish. After working on the overclock I was able to get an additional 150MHz on the boost clock stable, or 165MHz with an occasional crash. My question concerns the memory OC. It seems most are at or below +800MHz on memory. I am able to get +1300MHz stable; anything above that crashes, but at +1300MHz not a single crash, including during games. The card is stock without a block on it for now. The CPU is OC'd to 5GHz and ambient air in the room is around 20C. The card never gets above 64C and rarely goes above 59C.

Did I win the lottery here? The final score is 14280 at 2115MHz core / 1381MHz memory.


https://www.3dmark.com/pr/397082


----------



## LoucMachine

Seeing everybody's scores here, I am very tempted to flash the Strix BIOS on my X Trio :O


----------



## Carillo

ROGKilla said:


> Yes, I'm only fighting the power limit. Oh nice, bro, can't wait to see your score =)


Thanks! Are you on the Alphacool block? You clearly have a sick bin!


----------



## Alelau18

3DMark doesn't recognize the 3090 Master BIOS yet, but hey, it's here at least: https://www.3dmark.com/wl/2039

Wild Life is so lightweight you can push the GPU like crazy; I went +190 on air and +600 on memory for that score.


----------



## LVNeptune

ROGKilla said:


> Why is your core MHz so low?
> 
> -> https://www.3dmark.com/3dm/51713655?
> 
> First place is an RTX 3080?


This is normal.


----------



## LordGurciullo

You can go up to +1300 on memory, but keep trying your benchmarks. In most games it will drop FPS, and especially MIN FPS. If I go over +450 I lose minimum FPS to gain some very minimal max FPS.
Real games usually can't use more than +400-600 before performance actually degrades. I'd rather have 60 min and 380 max than 20 min and 390 max.


----------



## LordGurciullo

Fun.

So Rog, or anyone: a challenge / curiosity test!

Run the Metro Exodus benchmark. Click Extreme, then change RTX to ULTRA.

My overclock, stable in every single other game, crashes like nothing there.

Metro is a beast. I can only go up to +120, maybe +135, before crashing...
What are your results, guys?


----------



## zlatanselvic

LVNeptune said:


> You got a really good bin.


Thank you! I seem to have gotten lucky this time around.


----------



## LVNeptune

Does anyone know offhand if you can redeem codes on the same card you already redeemed codes on before? For example, I have an extra unused code for Watch Dogs: Legion, but I already redeemed one code using this card. Can someone else still use this card with the unused code, or do they track serial numbers/identifiers after redemption? They used to just verify you had a card, not that the card had redeemed before.


----------



## ApplesOfEpicness

Does anyone know if it is possible to flash a 3090FE with a 3rd party BIOS to enable 0RPM mode with Afterburner's fan curve?


----------



## LVNeptune

ApplesOfEpicness said:


> Does anyone know if it is possible to flash a 3090FE with a 3rd party BIOS to enable 0RPM mode with Afterburner's fan curve?


FE can't run any third-party BIOS due to different board designs.


----------



## Falkentyne

Got my 3090 today. Too busy enjoying Fortnite to run 3dmark. CODMW + Warzone later.


----------



## LVNeptune

Falkentyne said:


> Got my 3090 today. Too busy enjoying Fortnite to run 3dmark. CODMW + Warzone later.


You had time to post though


----------



## cstkl1

LVNeptune said:


> Anyone know off hand if you can redeem codes on the same card you already redeemed codes before on? Example, I have an extra unused code for Watchdogs Legion but I already redeemed one code using this card. Can someone else still use this card and the unused code or are they sending serial numbers/identifiers after redemption? They used to just verify you had a card not that the card has redeemed before.


GFE seems to scan your hardware to check the device ID,

so the card has to be in your motherboard.

I gave away my Strix 3080 codes by logging into GFE and entering the codes ASUS gave me.
It went to the Ubi account to link.
I allowed my friend to TeamViewer in.
He entered his Ubi account details. Done.
Just before this I tried doing it with my TUF in the motherboard; GFE said the card had already redeemed.
(I redeemed the TUF on the M12E, tried redeeming the Strix with the TUF installed in a B550 Strix-ITX, got rejected; had to install the Strix in the M12F and redeem it.)


----------



## LVNeptune

cstkl1 said:


> GFE seems to scan your hardware to check the device ID,
> 
> so the card has to be in your motherboard.
> 
> I gave my Strix 3080 codes by logging into GFE. Entered the codes ASUS gave,
> it went to a Ubi account to link,
> allowed my friend to TeamViewer in,
> he entered his Ubi account details. Done.
> Just before this I tried doing this with my TUF on the mobo... GFE said the card had already redeemed.
> [I redeemed the TUF on the m12e, tried redeeming the Strix with the TUF installed on a B550 Strix ITX, got rejected... had to install the Strix on the m12f and redeem it.]


Ok so you can redeem multiple codes with the same card.


----------



## cstkl1

LVNeptune said:


> Ok so you can redeem multiple codes with the same card.


Hmm, no, you can't. The TUF couldn't redeem with the Strix code, since the TUF code was redeemed with it installed. Had to swap the Strix back in.

I can test this one more time since I got a Strix 3090. The retailer forgot to send me an invoice with the S/N on it, so I'm waiting for that.


----------



## LVNeptune

cstkl1 said:


> Hmm, no, you can't. The TUF couldn't redeem with the Strix code, since the TUF code was redeemed with it installed. Had to swap the Strix back in.
> 
> I can test this one more time since I got a Strix 3090. The retailer forgot to send me an invoice with the S/N on it, so I'm waiting for that.


Ohhhhh =/


----------



## LVNeptune

Well... tested out my second card. Definitely a better bin, but the only way I'm going to hit that 14,000 is shunt modding, I think, at this point. https://www.3dmark.com/pr/403132


----------



## AdamK47

I've been getting consistent 14K in Port Royal using my everyday stable overclocks with default driver settings. 10900K @ 5.1GHz all core, 32GB DDR4 4000 16-18-18-38-390, and the MSI RTX 3090 Gaming X Trio flashed with the Strix 480W BIOS. +600 memory, +123 power limit, +100 voltage, and a voltage curve that looks like this. Started with a base of +50 and upped the middle part to between +90 and +100.










Results: https://www.3dmark.com/pr/401792
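For anyone curious what that curve roughly looks like in numbers, here's a sketch of the offsets described above: a flat +50 base with the middle of the curve raised to between +90 and +100. The voltage points and the 850-1000 mV "middle" window are illustrative assumptions, not the card's actual V/F table.

```python
# Hypothetical sketch of the described voltage curve: +50 MHz everywhere,
# with the middle of the curve raised to roughly +95 MHz.
# Voltage points are illustrative, not read from a real card.

def curve_offset(mv: int) -> int:
    """Return the MHz offset applied at a given voltage point (mV)."""
    base = 50
    if 850 <= mv <= 1000:   # assumed "middle" of the curve
        return 95           # between +90 and +100
    return base

# Build an example offset table across a plausible voltage range.
table = {mv: curve_offset(mv) for mv in range(700, 1101, 50)}
print(table)
```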


----------



## AdamK47

ROGKilla said:


> Ok guys lets play a litte bit Wild Life
> 
> -> https://www.3dmark.com/wl/4568
> 
> Who Join?


Ran it with the same settings as my previous post.



https://www.3dmark.com/wl/6106












My stable everyday settings are beating your run-and-done unstable suicide runs for some reason.


----------



## bmgjet

How are you getting past it saying hardware monitoring is disabled?
It's on in the settings, and every other benchmark is fine.


https://www.3dmark.com/wl/2234


110K


----------



## AdamK47

bmgjet said:


> How are you getting past it saying hardware monitoring is disabled?
> It's on in the settings, and every other benchmark is fine.
> 
> 
> https://www.3dmark.com/wl/2234
> 
> 
> 110K


Under Options --> System Info


----------



## AdamK47

Well, I wasn't expecting this:


http://imgur.com/nfzbFEG


#4
I'm sure it won't last long.


----------



## Falkentyne

Anyone have any idea what's up with "Dev error: 5759" in COD: Modern Warfare at _pure stock_ speeds with ray tracing enabled and max settings at 1080p? 120 Hz vsync on, so the video card isn't even being taxed fully. I tried the Studio driver and made it through a game of Gunfight and Ground War without any crashes though. Is this a known issue?

Port Royal scored 12676 but says invalid score because I set aniso to 16x and texture filtering to high quality in the NVCP.


----------



## bmgjet

AdamK47 said:


> Under Options --> System Info


It's on. I'll redownload 3DMark from their website.


----------



## shredy44

*INNO3D GeForce RTX 3090 iChill X4*

Hey guys, this card has 2x 8-pin on the reference board. Which BIOS do you recommend for a higher PL?


----------



## Falkentyne

What's up with all the virtual memory usage?
I think the reason I got dev error 5759 was because I ran out of virtual memory? 
Why the hell is COD Modern Warfare using 32 GB of virtual memory?

Any of you guys with Warzone/ Modern warfare want to check your virtual memory usage please?


----------



## LVNeptune

Accidental achievement today


----------



## cstkl1

Falkentyne said:


> What's up with all the virtual memory usage?
> I think the reason I got dev error 5759 was because I ran out of virtual memory?
> Why the hell is COD Modern Warfare using 32 GB of virtual memory?
> 
> Any of you guys with Warzone/ Modern warfare want to check your virtual memory usage please?
> 
> View attachment 2462125


What was your VRAM usage, bro?
Virtual memory works as a buffer for reserved memory...

Look at your committed physical memory.

Do you have the pagefile enabled?


----------



## Falkentyne

cstkl1 said:


> What was your VRAM usage, bro?
> Virtual memory works as a buffer for reserved memory...
> 
> Look at your committed physical memory.
> 
> Do you have the pagefile enabled?


Had it at 2 GB fixed pagefile size.
Removed the fixed 2048MB size and set it to "System managed". Also changed to the "Studio" driver.
Managed to complete another game of Ground War without DEV error.


----------



## Boham_CY

Hey everyone, my Palit 3080 is getting here Monday, was wondering what's the best bios for it in terms of upping the power limit?
Fingers crossed on silicon lottery

Oops sorry wrong thread


----------



## cavemankr

Just installed my RTX 3090 FE and here are my Port Royal results.

+114 Power
+205 Core
+500 Mem
Fan speed 100%

https://www.3dmark.com/pr/403858

Port Royal: 14251

I am happy with the score.

Anyone know why my clock speed is showing as 1,595 MHz (1,395 MHz) and the avg clock speed as 358 MHz?

Thanks.


----------



## zlatanselvic

All-in OC - I think I can push a bit more, but hitting diminishing returns:

Power - 114
Core - 140
Memory - 1100
Fan speed - 100%
Yeet - Maximum
#27 in single-GPU Time Spy as of this writing. I would love a higher-power BIOS; seems like my card wants it.



https://www.3dmark.com/spy/14585963


----------



## The-Real-Link

Had a chance to test some Port Royal finally after never being able to run it before. Was able to complete a couple runs at +160 / +600, 68% fan, +15% power, standard voltage, but can't seem to get it much higher on air even if temps are good. So very close to the top 100 (getting around 14,1xx). Will have to dabble with it more later once the RMA completes.


----------



## LVNeptune

The-Real-Link said:


> Had a chance to test some Port Royal finally after never being able to run it before. Was able to complete a couple runs at +160 / +600, 68% fan, +15% power, standard voltage, but can't seem to get it much higher on air even if temps are good. So very close to the top 100 (getting around 14,1xx). Will have to dabble with it more later once the RMA completes.


RMA for what?


----------



## Sheyster

Falkentyne said:


> Had it at 2 GB fixed pagefile size.
> Removed the fixed 2048MB size and set it to "System managed". Also changed to the "Studio" driver.
> Managed to complete another game of Ground War without DEV error.


If you want to use a fixed page file with CODMW you'll probably need to set it to at least 8192 (8 GB) to avoid crashes. This was an issue with BO4 as well.


----------



## VickyBeaver

Pepillo said:


> Interesting. I precisely have an HDMI monitor connected with a DP to HDMI cable and the 480w bios of the Strix that disables a DP port of my MSI Gaming X. Would this trick work?


worth a shot ^.^


----------



## Nizzen

Boham_CY said:


> Hey everyone, my Palit 3080 is getting here Monday, was wondering what's the best bios for it in terms of upping the power limit?
> Fingers crossed on silicon lottery
> 
> Oops sorry wrong thread


The newest BIOS at palit.com is 350W, and the first BIOS is ~330W.

I have the Palit 3080 Gaming OC.


----------



## Thanh Nguyen

Why, in tests like Time Spy or Port Royal, does my card drop voltage down under 1000 mV so it won't hold a higher avg clock? Any way to lock the voltage?


----------



## bmgjet

Thanh Nguyen said:


> Why, in tests like Time Spy or Port Royal, does my card drop voltage down under 1000 mV so it won't hold a higher avg clock? Any way to lock the voltage?


Look in GPU-Z to see if it gives you a throttle reason, but I can guarantee it's because of the power limit.


----------



## Thanh Nguyen

I already shunt modded; why still power limited? I think my mod works, because before I did it I had 14100 in Port Royal and after I got 14600, and the temp of the card goes higher than before as well.


----------



## Johneey

Anyone s


Thanh Nguyen said:


> I already shunt modded; why still power limited? I think my mod works, because before I did it I had 14100 in Port Royal and after I got 14600, and the temp of the card goes higher than before as well.


Which ones did you shunt, and which resistors did you use? Can you send the link from Port Royal, please?


----------



## Thanh Nguyen

Johneey said:


> Anyone s
> 
> Which ones did you shunt, and which resistors did you use? Can you send the link from Port Royal, please?


I shunted all 5 on the front. I think there is 1 on the back near the PCIe. I bought my shunts from Mouser. The link is a few pages back; you need to find it.


----------



## Johneey

Thanh Nguyen said:


> I shunted all 5 on the front. I think there is 1 on the back near the PCIe. I bought my shunts from Mouser. The link is a few pages back; you need to find it.


Yeah, which ones? R005? Or R015? Are you on 2x 8-pin?


----------



## Johneey

Thanh Nguyen said:


> I shunted all 5 on the front. I think there is 1 on the back near the PCIe. I bought my shunts from Mouser. The link is a few pages back; you need to find it.


So did you only shunt the 5 in front, or the one on the back for PCI Express too?


----------



## bmgjet

Look in GPU-Z; it probably says power limit. That will be the PCI-E slot power limiting it, since the card has a forced power balance it tries to maintain.
That's what the back shunt controls.
With my card I could only get another 50W doing the 5 on the front of the card, no matter what shunts I used.


----------



## shredy44

zlatanselvic said:


> All-in OC - I think I can push a bit more, but hitting diminishing returns:
> 
> Power - 114
> Core - 140
> Memory - 1100
> Fan speed - 100%
> Yeet - Maximum
> #27 in single-GPU Time Spy as of this writing. I would love a higher-power BIOS; seems like my card wants it.
> 
> 
> 
> https://www.3dmark.com/spy/14585963


Which AIB card have you got, sir?


----------



## Thanh Nguyen

bmgjet said:


> Look in GPU-Z; it probably says power limit. That will be the PCI-E slot power limiting it, since the card has a forced power balance it tries to maintain.
> That's what the back shunt controls.
> With my card I could only get another 50W doing the 5 on the front of the card, no matter what shunts I used.


Ok I will shunt the one in the back also.


----------



## cstkl1

https://www.3dmark.com/wl/7776




https://www.3dmark.com/pr/405208




https://www.3dmark.com/spy/14594607


----------



## Johneey

cstkl1 said:


> https://www.3dmark.com/wl/7776
> 
> 
> 
> 
> https://www.3dmark.com/pr/405208
> 
> 
> 
> 
> https://www.3dmark.com/spy/14594607


Lol, is this on air? Did you do a shunt mod?


----------



## devilhead

bmgjet said:


> Look in GPU-Z; it probably says power limit. That will be the PCI-E slot power limiting it, since the card has a forced power balance it tries to maintain.
> That's what the back shunt controls.
> With my card I could only get another 50W doing the 5 on the front of the card, no matter what shunts I used.


Same here. Just used liquid metal on 2 resistors (near the PCIe connectors); it gave me some help, but just minimal, still power throttling. Just beat JayzTwoCents in Port Royal.


----------



## Johneey

Johneey said:


> Lol, is this on air?





devilhead said:


> Same here. Just used liquid metal on 2 resistors (near the PCIe connectors); it gave me some help, but just minimal, still power throttling. Just beat JayzTwoCents in Port Royal.


gg use LM have fun


----------



## cstkl1

Johneey said:


> Lol, is this on air? Did you do a shunt mod?


strix 3090.. air.


----------



## Johneey

cstkl1 said:


> strix 3090.. air.


Really nice dude


----------



## cstkl1

Johneey said:


> Really nice dude


Soon all the Strix cards are gonna come in with waterblocks and start hitting the boards. Expecting 15-16k minimum, @Nizzen 
Port Royal flies with dual rank. Saw that in the Time Spy CPU score also.


----------



## zlatanselvic

shredy44 said:


> Which AIB card have you got, sir?


Founders Edition.


----------



## Nizzen

cstkl1 said:


> Soon all the Strix cards are gonna come in with waterblocks and start hitting the boards. Expecting 15-16k minimum, @Nizzen
> Port Royal flies with dual rank. Saw that in the Time Spy CPU score also.


This is the Palit 3090 with "some" cooling, a "hexachannel" DIMM cooler, while we are waiting for the Strix....
Look out for some scores from Norway soon 
@Carillo


----------



## LVNeptune

Backplate cooling? I didn’t see them offering that before. Definitely needed though.


----------



## cstkl1

Btw that colorful


Nizzen said:


> This is the Palit 3090 with "some" cooling, a "hexachannel" DIMM cooler, while we are waiting for the Strix....
> Look out for some scores from Norway soon
> @Carillo


holy cow!!! 😱

btw heard theres a "special" strix bios. 
only for 3090


----------



## Thebc2

Evga_jacobf just tweeted we may have a new XOC beta with higher power limits for the 3090 FTW3 Ultra next week! Hoping for at least 480W, if not 500-520.


Sent from my iPhone using Tapatalk Pro


----------



## wasprodker

https://www.3dmark.com/pr/401268



This is my highest so far. Just like everyone else, I'm pretty darned limited by the power limit.

Stock card as far as LM, power mods, and shunt mods go.


----------



## Foxrun

Has anyone shunted with a conductive pen yet? I’ve done it for Volta and the RTX Titan. I was hoping to do it for this one as well.


----------



## wasprodker

Foxrun said:


> Has anyone shunted with a conductive pen yet? I’ve done it for Volta and the RTX Titan. I was hoping to do it for this one as well.


I wouldn't recommend it. If you do, I advise that you measure what resistance you got, since it's entirely possible you may blow a fuse on the GPU if you just let it loose after that. The 3090 doesn't ever seem to stop pulling power.

When shunting with a known resistor you can calculate the new power/amperage values. Using a pen, it can end up being whatever resistance, depending on the thickness of the material you scratch on.
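To illustrate the "known resistor" point: the card senses current from the voltage drop across the shunt, so stacking a known resistor in parallel lowers the effective resistance by a predictable factor, and the card under-reads power by that same factor. A quick sketch of the math, using a 5 mOhm stock shunt and 15 mOhm stack as assumed example values:

```python
# Effective resistance of a stacked shunt and the resulting reporting error.
# Assumed values: 5 mOhm stock shunt, 15 mOhm resistor stacked in parallel.

def parallel(r1: float, r2: float) -> float:
    """Combined resistance of two resistors in parallel (ohms)."""
    return (r1 * r2) / (r1 + r2)

stock = 0.005                      # stock shunt, ohms (assumed)
effective = parallel(stock, 0.015) # 3.75 mOhm with a 15 mOhm stack
scale = effective / stock          # card now reads 75% of real current

print(f"effective shunt: {effective * 1000:.2f} mOhm")
print(f"reported power is {scale:.0%} of actual")

# A conductive pen gives an unknown parallel resistance, so `scale`
# (and therefore real power draw) becomes unknowable -- hence measuring first.
```

With these numbers, a card reporting 350W would really be pulling about 350 / 0.75 ≈ 467W, which is exactly why an unmeasured pen scribble is risky.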


----------



## Thanh Nguyen

cstkl1 said:


> Soon all the Strix cards are gonna come in with waterblocks and start hitting the boards. Expecting 15-16k minimum, @Nizzen
> Port Royal flies with dual rank. Saw that in the Time Spy CPU score also.


16k minimum? Did you smoke or drink something?


----------



## slopokdave

So Asus released new BIOS for 3080 TUF/Strix and 3090 TUF, I don't see it for 3090 TUF OC yet on their support page. What's interesting is that 3080 TUF power limit is now 375W, which is the same as 3090 TUF. Come on, we need more power Asus.


----------



## chispy

Had some fun last night doing extreme overclocking on my dual-compressor single-stage phase unit at -33°C load, for points at HWBot for my HWBot team OCN (overclock.net). This card really loves cold more than voltage.
ASUS TUF Gaming OC RTX 3090, shunt mods all done.

Max stable overclock for some benchmarks like GPUPI 1B: 2430/1351 MHz at -33°C load and -50°C idle; 3DMark Vantage: 2310/1350 MHz.


----------



## The-Real-Link

LVNeptune said:


> RMA for what?


Two dead DisplayPorts. Sorry, I know this thread's trucked along and I don't post too often.


----------



## kot0005

Well, the EK Strix blocks don't fit. Don't buy 'em.

__ https://twitter.com/i/web/status/1317153613005746176


----------



## mirkendargen

kot0005 said:


> Well, the EK Strix blocks don't fit. Don't buy 'em.
> 
> __ https://twitter.com/i/web/status/1317153613005746176


It's always something with EK...


----------



## Bolded

I like many have been waiting for EKWB's FE block, but wonder if I should just get the Bitspower one that was released the other day. Anyone have any experience with their blocks? How are their thermals in comparison?


----------



## Nizzen

mirkendargen said:


> It's always something with EK...


Is it? I have had like 15 EK blocks on a lot of cards. Never had an issue that was a real issue. But hey, maybe I was lucky.


----------



## iamjanco

Blame the Internet.

If platforms like Twitter didn't exist, there'd be no need for instantaneous dissatisfaction while waiting on instantaneous gratification until you actually had all the parts yourselves.


----------



## mirkendargen

Nizzen said:


> Is it? I have had like 15 EK blocks on a lot of cards. Never had an issue that was a real issue. But hey, maybe I was lucky.


I'm remembering TR4 blocks that didn't cover right because they were rushed out, an excuse about "but it covers where the chiplets are!", performed way worse than other cheaper blocks, then a "V2" that was the correct size got released.

I appreciate what EK has done to bring custom loop PC watercooling "main stream enough" that there is a thriving ecosystem of parts...but I think they're victims of their own success, got too big, and are complacent about quality now.


----------



## Nizzen

mirkendargen said:


> I'm remembering TR4 blocks that didn't cover right because they were rushed out, an excuse about "but it covers where the chiplets are!", performed way worse than other cheaper blocks, then a "V2" that was the correct size got released.
> 
> I appreciate what EK has done to bring custom loop PC watercooling "main stream enough" that there is a thriving ecosystem of parts...but I think they're victims of their own success, got too big, and are complacent about quality now.


I think EK got WAY too expensive, too. Other options are cheaper AND have the same quality or better....


----------



## Twintale

So, Gigabyte finally revealed the boost clock of their Aorus Xtreme 3090: 1860 MHz, which puts it on the same level as the Asus Strix (I guess?): AORUS GeForce RTX™ 3090 XTREME 24G Specification | Graphics Card - GIGABYTE Global
Doubt it will have some crazy power limit like Asus, but maybe around 430W?


----------



## chispy

Finally everything is coming together for my build  , today the Alphacool x6 RAM cooler water block arrived and will be attached to the EK backplate when it arrives next week, along with the EK water block for the ASUS TUF Gaming 3090.

Also quite happy as my Elmor EV2s order arrived today; I will have full voltage control of my 3090 now 😁 ...


----------



## Jpmboy

I think I just ordered a 3090 FTW3. 🤨 Weird, got an email saying I had 24h to place the order? Should be here Tuesday or Wednesday. Finally had a chance to pay attention to this sheet... so I pulled the trigger.


----------



## chispy

Jpmboy said:


> I think I just ordered a 3090 FTW3. 🤨 Weird, got an email saying I had 24h to place the order? Should be here Tuesday or Wednesday. Finally had a chance to pay attention to this sheet... so I pulled the trigger.


Very nice, you will have a lot of fun with your card. I'm happy for you, as it is not easy to find stock of the 3090 anywhere in the world right now. I got mine on a Best Buy store walk-in and there were only 2 left: one PNY 3090 and one ASUS TUF OC 3090, and of course I took home the ASUS TUF. Enjoy it!

Kind Regards: Angelo


----------



## rdmetz

I've got a question: my ASUS TUF 3090 is hitting its thermal/power cap limit immediately upon starting a game, and clock speeds immediately plummet to between 200-500 MHz and NEVER come back up. The temps hit 68-69°C but never above that, and the fans can be at 100% or default; it doesn't change anything. The room it's in is cooled to 65°F at all times, and the 10900K with it never goes above about 48-55°C, so it's not like the room is "hot"; it's actually pretty chilly. So what's going on here? I've included a log from GPU-Z if that helps.


----------



## domenic

chispy said:


> Finally everything is coming together for my build  , today the Alphacool x6 RAM cooler water block arrived and will be attached to the EK backplate when it arrives next week, along with the EK water block for the ASUS TUF Gaming 3090.
> 
> Also quite happy as my Elmor EV2s order arrived today; I will have full voltage control of my 3090 now 😁 ...
> 
> View attachment 2462201


LOL - I have the same in mind, and my clear plexi version of the X6 arrived today as I am waiting on the EK Strix block & backplate.

What are your thoughts on attaching it? Out of the box I just laid it on top of my existing 2080 Ti EK backplate, and its weight seems substantial. Do you think just having it float, held in place by the PETG tubes, will be sufficient for good contact / heat transfer? If not, I was thinking of drilling holes into the backplate with upward-facing nylon screws, but I am worried about clearance wherever things land.


----------



## Jpmboy

chispy said:


> Very nice, you will have a lot of fun with your card. I'm happy for you, as it is not easy to find stock of the 3090 anywhere in the world right now. I got mine on a Best Buy store walk-in and there were only 2 left: one PNY 3090 and one ASUS TUF OC 3090, and of course I took home the ASUS TUF. Enjoy it!
> 
> Kind Regards: Angelo
> 
> Kind Regards: Angelo


Thanks Angelo. You walked in and got a 3090? That's pretty amazing. Weeks ago I logged into my old EVGA Elite account (too many Kingpins in my past I suppose) and asked for an alert... didn't think it would be a "reserve" thing. It's all good. 🤙


----------



## Falkentyne

rdmetz said:


> I've got a question: my ASUS TUF 3090 is hitting its thermal/power cap limit immediately upon starting a game, and clock speeds immediately plummet to between 200-500 MHz and NEVER come back up. The temps hit 68-69°C but never above that, and the fans can be at 100% or default; it doesn't change anything. The room it's in is cooled to 65°F at all times, and the 10900K with it never goes above about 48-55°C, so it's not like the room is "hot"; it's actually pretty chilly. So what's going on here? I've included a log from GPU-Z if that helps.
> View attachment 2462207


Isn't 210 MHz the clock you get when a shunt mod goes wrong and the card goes into protective mode?

RMA the card, or check for shorts on the PCB around the shunts.


----------



## bmgjet

It's hitting fail-safe mode.
Something is hitting over 100°C for it to be doing that.
I'm going to guess the die, since it's only drawing 31W and showing 69°C, so it probably has a hot spot from no thermal compound on a corner or something.


----------



## chispy

domenic said:


> LOL - I have the same in mind, and my clear plexi version of the X6 arrived today as I am waiting on the EK Strix block & backplate.
> 
> What are your thoughts on attaching it? Out of the box I just laid it on top of my existing 2080 Ti EK backplate, and its weight seems substantial. Do you think just having it float, held in place by the PETG tubes, will be sufficient for good contact / heat transfer? If not, I was thinking of drilling holes into the backplate with upward-facing nylon screws, but I am worried about clearance wherever things land.


I'm thinking of attaching it with Arctic Alumina, or screws and Fujipoly thermal pads; don't know yet as I'm still waiting for the EK water block and backplate. I think having it float around is not a good idea; screws or Arctic Alumina, I believe, is the way to go. Will let you know once I receive the backplate.



Jpmboy said:


> Thanks Angelo. You walked in and got a 3090? That's pretty amazing. Weeks ago I logged into my old EVGA Elite account (too many Kingpins in my past I suppose) and asked for an alert... didn't think it would be a "reserve" thing. It's all good. 🤙


Hola Jpmboy, I have a friend who works at Best Buy and he told me exactly the date and time the 3090s would come in and be put on the shelves  . Yes, the EVGA website is now notification-only, with a list that goes by date and time in order to buy 3090 and 3080 GPUs, no more selling to bots  , a good thing EVGA has going on.


----------



## Falkentyne

bmgjet said:


> It's hitting fail-safe mode.
> Something is hitting over 100°C for it to be doing that.
> I'm going to guess the die, since it's only drawing 31W and showing 69°C, so it probably has a hot spot from no thermal compound on a corner or something.


I didn't even realize it was 69°C with no load. I mean, I 'thought' "why is it so hot at idle?" and then I just... ignored it.
Maybe that cooler was attached on a Friday afternoon?


----------



## Sky3900

chispy said:


> * Disclaimer * Do it at your own risk; none of us will be responsible if you damage your card!
> 
> I would like to correct you on this advice. I have just talked to Elmor and Roman (der8auer); both are engineers and old friends of mine. They told me the most important and helpful shunt mods are the 2x 8-pin power connectors, as that will open up the power limit for most RTX 3090s: you will get a 675W available power limit without shunt modding the PCI Express.
> 
> The PCI Express shunt mod is only helpful if you are running sub-zero temps and adding voltage to the vcore; there is also no need to do all of them if you are running at ambient temps, as the 2x 8-pin power connector shunt mod is more than enough.
> 
> Also, the statement you made, "when PCIe power reaches the limit, which usually is 75W to 78W, the card will stop pulling power from the 8-pins," is completely wrong; the card will still pull more power from the 2x 8-pin even when PCI Express reaches 75~80W. This information comes directly from 2 of the top engineers who do volt mods on a daily basis for extreme overclocking, Elmor and der8auer (Roman).
> 
> There you have it; this is the most accurate information for everyone on the shunt mods topic. I hope this helps.
> 
> Kind regards: Angelo
> 
> ** Ninja edit ** I have just talked to another top overclocker and engineer, Ronaldo (rbuass) from Brazil, and his findings were the same as Elmor and der8auer, so the information provided in this post is 100% confirmed and accurate. ***


Do you know if this is true for the FE as well? Would replacing only the two resistors close to the 12-pin connector have the same effect on this card?


----------



## chispy

Sky3900 said:


> Do you know if this is true for the FE as well? Would replacing only the two resistors close to the 12-pin connector have the same effect on this card?


For the FE there are more resistors to mod; a few pages back there is a guy who did this mod to his FE. Here is the link, he can help you with the FE mods. I hope this helps. 









RTX 3090 Founders Edition working shunt mod


I posted this in the RTX 3090 owners club thread and I'm posting this here for more visibility to anyone who might stumble across this: Edit: Added note about 6th shunt resistor & updated images Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just...




www.overclock.net


----------



## cstkl1

Use the nvflash that came with the ASUS BIOS, btw..
It's a proper full nvflash, in the 3080 thread...

5.667


----------



## Sky3900

chispy said:


> For the FE there are more resistors to mod; a few pages back there is a guy who did this mod to his FE. Here is the link, he can help you with the FE mods. I hope this helps.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> RTX 3090 Founders Edition working shunt mod
> 
> 
> I posted this in the RTX 3090 owners club thread and I'm posting this here for more visibility to anyone who might stumble across this: Edit: Added note about 6th shunt resistor & updated images Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just...
> 
> 
> 
> 
> www.overclock.net


Thanks, I've read through that thread... just wanting to take the least invasive approach.


----------



## megahmad

The new BIOS update is now up on the 3090 TUF drivers page, but sadly the PL is still 375W after updating.








TUF-RTX3090-O24G-GAMING｜Graphics Cards｜ASUS USA


TUF Gaming graphics cards add hefty 3D horsepower to the TUF Gaming ecosystem, with features like Auto-Extreme manufacturing, steel backplates, high-tech fans, and IP5X certifications. And it’s all backed by a rigorous battery of validation tests to ensure compatibility with the latest TUF...




www.asus.com





I was expecting 390W after the update, tbh. The 3080 BIOS update changed the limit to 375W, so I thought the 3090 should have gotten a PL upgrade as well.


----------



## bmgjet

chispy said:


> For the FE there are more resistors to mod; a few pages back there is a guy who did this mod to his FE. Here is the link, he can help you with the FE mods. I hope this helps.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> RTX 3090 Founders Edition working shunt mod
> 
> 
> I posted this in the RTX 3090 owners club thread and I'm posting this here for more visibility to anyone who might stumble across this: Edit: Added note about 6th shunt resistor & updated images Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just...
> 
> 
> 
> 
> www.overclock.net



I'd like to correct you on your original post.
Doing just the connectors won't open up the full power.
Next you'll be on the GPU chip power limit of 300W. So in my case I should have had 50W more per plug, but I was only getting 20W more per plug.
I then went and shunted the GPU chip and memory shunts and was on the SRC power limit, with it reporting an SRC voltage of 13.9V instead of the usual 12.7V, and it was only pulling a further 5W per connector. So I added a shunt to that one; it made no difference other than dropping the voltage reading back to normal.

Lastly I shunt modded the PCI-E slot shunt with a 15 mOhm stacked on top, and bam, it pulls 50W more for each plug like I expected.

There is some power balancing going on between the plugs and the slot.
Igor's Lab and Frame Chasers have reported the same.
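As a rough illustration of why the slot shunt matters under that power balancing: a 15 mOhm resistor stacked on the slot shunt makes the card under-read slot current, so the slot's budget trips later and the plugs get their headroom back. The 5 mOhm stock value and the slot budget figure below are assumptions for the sake of the arithmetic, not measured values:

```python
# Toy arithmetic for the PCIe-slot shunt stack described above.
# Assumptions: 5 mOhm stock slot shunt, 15 mOhm stacked on top,
# and a ~66 W firmware budget for the slot rail.

def parallel(r1: float, r2: float) -> float:
    """Combined resistance of two resistors in parallel (ohms)."""
    return (r1 * r2) / (r1 + r2)

stock_slot = 0.005
stacked = parallel(stock_slot, 0.015)   # 3.75 mOhm effective
report_factor = stacked / stock_slot    # card sees 75% of real slot current

slot_budget_w = 66.0                    # assumed firmware slot budget
real_slot_draw = slot_budget_w / report_factor
print(f"slot can really draw ~{real_slot_draw:.0f} W before the balance clamps")
```

Under these assumed numbers the slot can pull roughly a third more before the forced balance kicks in, which lines up with the plugs finally scaling once the slot shunt was stacked.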


----------



## rdmetz

bmgjet said:


> Hitting fail safe mode.
> Something hitting over 100C for it to be doing that.
> Im going to guess the die since its only drawing 31W and its showing 69c so probably has hot spot from no thermal compound on a corner or something.


BINGO!

I didn't see this till after the fact, but I had a feeling it was a poor heatsink install, so I took the card apart and the paste was definitely leaving some places not covered well. Applied some TG Kryonaut and put it back together, and she's singing pretty now!!

Thanks to everyone who tried to help me out. I'm sorry I didn't wait, but I'm glad we were all on the same page!

I was just about to completely format my PC and deal with multiple days of backup and restore!

Thank god I don't have to do that!!!


----------



## bmgjet

rdmetz said:


> BINGO!
> 
> I didn't see this till after the fact, but I had a feeling it was a poor heatsink install, so I took the card apart and the paste was definitely leaving some places not covered well. Applied some TG Kryonaut and put it back together, and she's singing pretty now!!
> 
> Thanks to everyone who tried to help me out. I'm sorry I didn't wait, but I'm glad we were all on the same page!
> 
> I was just about to completely format my PC and deal with multiple days of backup and restore!
> 
> Thank god I don't have to do that!!!


There have been some shocking jobs on a few cards now.
Did you get a pic of it?

Next thing I'd do is put a custom frequency curve on your card, with it flatlined after 1V.
It's boosting up to 1.025V while only at 89% load, which makes more heat than is needed.
GPU Boost is stupid in how it overvolts on light loads; it just eats up your power limit and makes the card hotter.


----------



## ShadowYuna

At last my Strix RTX 3090 has arrived. After several hours of use it is worth it; it is a beast compared to my Trinity 3090.
When I used the Trinity with a waterblock the boost clock was not that high or stable, but the Strix on air sits stable at 1980 easily with no overclock.
Waiting for my waterblock for the Strix 3090 now.


----------



## Sheyster

Thebc2 said:


> Evga_jacobf just tweeted we may have a new XOC beta with higher power limits for the 3090 FTW3 Ultra next week! Hoping for at least 480w if not 500-520.
> 
> 
> Sent from my iPhone using Tapatalk Pro


I'm sure it'll be no less than 480W to match the Strix. I don't think they'll go all the way to 520W, but I could be wrong. I would expect 480, or possibly 500 to slightly one-up the Strix.


----------



## Sheyster

Jpmboy said:


> I think I just ordered a 3090 FTW3. 🤨 Weird, got an email saying I had 24h to place the order? Should be here Tuesday or Wednesday. Finally had a chance to pay attention to this sheet... so I pulled the trigger.


I was gonna pick up a Strix OC 3090 when/if I could find one. However, with EVGA Jacob's recent tweet about the new higher PL 3090 BIOS they're working on for the FTW3 I may just go that route. I'm already in their queue for it but who knows in what position! In the meantime I'll mess around with the FE I managed to snag.


----------



## cstkl1

Sheyster said:


> I was gonna pick up a Strix OC 3090 when/if I could find one. However, with EVGA Jacob's recent tweet about the new higher PL 3090 BIOS they're working on for the FTW3 I may just go that route. I'm already in their queue for it but who knows in what position! In the meantime I'll mess around with the FE I managed to snag.


hmm 3080/3080 ftw3 just landed. thinking now should i go pick it up..


----------



## chispy

bmgjet said:


> I'd like to correct you on your original post.
> Doing just the connectors won't open up the full power.
> Next you'll be on the GPU chip power limit of 300W. So in my case I should have had 50W more per plug, but I was only getting 20W more per plug.
> I then went and shunted the GPU chip and memory shunts and was on the SRC power limit, with it reporting an SRC voltage of 13.9V instead of the usual 12.7V, and it was only pulling a further 5W per connector. So I added a shunt to that one; it made no difference other than dropping the voltage reading back to normal.
> 
> Lastly, I shunt modded the PCI-E slot shunt with a 15 mOhm stacked on top, and bam, it pulls 50W more for each plug like I expected.
> 
> There is some power balancing going on between the plugs and the slot.
> Igor's Lab and Frame Chasers have reported the same.


Don't trust either of them. My information came straight from talking directly to der8auer, elmor and rbuass, so I do trust my sources and stand by what they found in their own testing. I have shunt modded all 5 shunts except PCI Express and I have no trouble hitting 1430/1351 MHz; the GPU chip power limit does not apply... that's more than enough proof, I think.


----------



## bmgjet

If you really do know der8auer, then ask him to test it again, since his GPU-Z screen is showing the opposite of what he is saying.









You can see the PerfCap reason is PWR, but power consumption is at 75.6% instead of hitting the 105% he has set in the overclocking software.
That's because he is on the PCI-E slot power limiter;
it will maintain the power ratio between the plugs and the slot.


----------



## chispy

bmgjet said:


> If you really do know der8auer, then ask him to test it again, since his GPU-Z screen is showing the opposite of what he is saying.
> View attachment 2462217
> 
> 
> You can see the PerfCap reason is PWR, but power consumption is at 75.6% instead of hitting the 105% he has set in the overclocking software.
> That's because he is on the PCI-E slot power limiter;
> it will maintain the power ratio between the plugs and the slot.


Nah, I won't bother my friends with questions and testing methods to run; they have better things to do, and they already told me what I wanted and needed to hear. Anyway, I'm just finishing soldering and installing my EVC2S on my RTX 3090 for some subzero benchmarking with voltage control. I will show you later, once I'm subzero, whether I'm limited by the chip power limit as you said; once and for all we will know which one is correct. By the way, I have been doing volt mods for close to 20 years, and this is not my first rodeo.


----------



## Thanh Nguyen

I can confirm that I shunted 5 resistors on the front and just realized that TDP hits only 64%. No idea why I got more points in Port Royal, though.


----------



## pewpewlazer

Thebc2 said:


> Evga_jacobf just tweeted we may have a new XOC beta with higher power limits for the 3090 FTW3 Ultra next week! Hoping for at least 480w if not 500-520.
> 
> 
> Sent from my iPhone using Tapatalk Pro


So matching Asus stock BIOS is now considered "XOC"? Wow...


----------



## Johneey

pewpewlazer said:


> So matching Asus stock BIOS is now considered "XOC"? Wow...


Is it then possible to flash the FTW3 BIOS on my 2x8-pin Palit 3090?


----------



## Alelau18

Don't flash 3x8-pin BIOSes on 2x8-pin cards; it's not going to work properly. The card will think you have a 3rd power connector and throw a fake reading from that non-existent power connector, lowering the effective power you can draw instead of increasing it.

Just flash the Gaming OC BIOS or the Aorus Master BIOS if you want a higher power limit (390W) on 2x8-pin cards.


----------



## ShadowYuna

Does anyone know whether the new Bitspower RTX 3080 Strix waterblock fits the RTX 3090 Strix?

BP-VG3080AST

The link above is the new Bitspower waterblock. The product name is 3080, but the product description says it supports the 30 series.

I believe the 3090 and 3080 have pretty much identical PCBs on the Strix. So, anyone have any idea?


----------



## cstkl1

ShadowYuna said:


> Does anyone know whether the new Bitspower RTX 3080 Strix waterblock fits the RTX 3090 Strix?
> 
> BP-VG3080AST
> 
> The link above is the new Bitspower waterblock. The product name is 3080, but the product description says it supports the 30 series.
> 
> I believe the 3090 and 3080 have pretty much identical PCBs on the Strix. So, anyone have any idea?


It does. Just make sure you ask them for the extra pads included, the 1mm ones.

Skype them before you make any purchase.


----------



## ShadowYuna

cstkl1 said:


> It does. Just make sure you ask them for the extra pads included, the 1mm ones.
> 
> Skype them before you make any purchase.


Thank you!!


----------



## glocked89

Hello! 3090 FTW3 Ultra here. I've been very happy with my results in all of my 3DMark runs except Port Royal. I seem to be more power limited in Port Royal, but I don't know why I would be.

Time Spy Extreme: 456W peak
Port Royal: 419W peak

Anyone know why this is?


----------



## AlKappaccino

So yesterday I tried shunting my 3090 FTW3. I replaced the three 5 mOhm shunts with 3 mOhm ones:



Spoiler: Image















It had trouble booting at first, but after some tries it managed to do so. I ran a short stress test and it looked like it was working. Temps were fine. Then it went black and refused to boot again. After more troubleshooting I couldn't get it to POST anymore.

So I switched the shunts back to the 5 mOhm ones and everything was working again. I checked everything with my multimeter etc. to ensure correct soldering. Has anyone else tried it on the FTW3, or does anyone have an idea why that might be?

Anyways; I made some more pictures of the PCB in the process:



Spoiler: Components






http://imgur.com/a/jJPcbgw







Spoiler: Some measurements






http://imgur.com/a/XWoYzXc






At least, I improved the temps by a few degrees.



Spoiler: Thermal Interface


----------



## GAN77

ShadowYuna said:


> Does anyone know that New Bitspower RTX 3080 Strix waterblock fit with RTX 3090 Strix.
> 
> BP-VG3080AST
> 
> The link above is new Bitspower waterblock. Product name is 3080 but product description says it support 30 series.
> 
> I believe that 3090 and 3080 has pretty much identical PCB on Strix. So anyone have any idea?


My question to Bitspower: are the Bitspower Classic VGA Water Block for ASUS ROG Strix GeForce RTX 30 series and the Bitspower Classic VGA Water Block for ASUS TUF Gaming GeForce RTX 30 series compatible with the ASUS ROG Strix / TUF 3090?

Their answer:

> Yes, the 3080 ASUS ROG Strix / TUF water block is compatible with the 3090 ASUS ROG Strix / TUF. But the back plate is not compatible.
> At present, we will release the new back plate for the 3090.
> 
> *Best regards!
> Lily Wong*


----------



## ShadowYuna

GAN77 said:


> Bitspower Classic VGA Water Block for ASUS ROG Strix GeForce RTX 30 series, Bitspower Classic VGA Water Block for ASUS TUF Gaming GeForce RTX 30 series are compatible with ASUS ROG Strix / TUF 3090?
> 
> Yes, the 3080 ASUS ROG Strix / TUF water block is compatible with 3090 ASUS ROG Strix / TUF. But the back plate is not compatible.
> At present, we will release the new back plate for 3090.
> 
> *Best regards!
> Lily Wong*


So what happens if I purchase now? Do you guys send the new back plate? In the meantime, what should I use?

I am waiting for a block that supports the 3090 Strix. EK's estimated time is 4th Nov, so if the Bitspower one is available before then, I will buy Bitspower.

BTW, are you a _representative_ of Bitspower?


----------



## GAN77

ShadowYuna said:


> So what happens if I purchase now? Do you guys send the new back plate? In the meantime, what should I use?
> 
> I am waiting for a block that supports the 3090 Strix. EK's estimated time is 4th Nov, so if the Bitspower one is available before then, I will buy Bitspower.
> 
> BTW, are you a _representative_ of Bitspower?


No, I'm not a representative.
I asked them for information and received an answer. I think we need to wait for an update on the bitspower site for the 3090.


----------



## ShadowYuna

GAN77 said:


> No, I'm not a representative.
> I asked them for information and received an answer. I think we need to wait for an update on the bitspower site for the 3090.


I see. You already asked them a question. Thanks for the info. I might just wait for EK to send me one.


----------



## Tias

Anyone know what the difference is between the two versions of nvflash?


----------



## Stampede

Interesting test with this backplate watercooling. Seems like it's the only product made by this company?

I'd prefer EKWB or some other known manufacturer make the backplate completely watercooled like the front.

Just put my EKWB reference block onto my Palit 3090 and I'm worried about pretty high temps: 45°C stock and 52°C overclocked (peak 400 watts). Gonna repaste it; it's maybe slightly misaligned.

Water temps peak at 34°C on a 3x360mm rad system. I have been seeing some awesome cooling from others on this forum with the Alphacool block.

What temps is everyone else getting on their 3090s?

P.S. That video shows the Alphacool block giving temps over 45°C on a 3080 as well.


----------



## Stampede

chispy said:


> I'm thinking to attach it with Artic Alumina or screws and thermal pads fujipoly , don't know yet as i'm still waiting for EK water block and backplate. I think having float around it's not a good idea , screws or artic alumina i believe it's the way to go. Will let you know once i receive the backplate.
> 
> 
> 
> Hola Jpmboy , i have a friend who works at Best Buy and he told me exactly the date and time where the 3090s would come in and put on the shelves  . yes , now on evga website it's only thru notifications and a list that goes by date and time in order to buy 3090 and 3080 gpus , no more selling to bots  , good thing evga have going on.


Check the video I posted above. The guys use something like a rubber band with a clip to hold it to the back plate. Thinking of using plastic curtain clips; they're strong and flexible.


----------



## cstkl1

Stampede said:


> Interesting test with this backplate watercooling. Seems like it's the only product made by this company?
> 
> I'd prefer EKWB or some other known manufacturer make the backplate completely watercooled like the front.
> 
> Just put my EKWB reference block onto my Palit 3090 and I'm worried about pretty high temps: 45°C stock and 52°C overclocked (peak 400 watts). Gonna repaste it; it's maybe slightly misaligned.
> 
> Water temps peak at 34°C on a 3x360mm rad system. I have been seeing some awesome cooling from others on this forum with the Alphacool block.
> 
> What temps is everyone else getting on their 3090s?
> 
> P.S. That video shows the Alphacool block giving temps over 45°C on a 3080 as well.


Interesting, although I don't trust that dude. He doesn't even know that the OSD size of AB is controlled in RivaTuner. If it were new software I'd get that, but it's been around like a decade...

Interesting, though.


----------



## cstkl1

BTW, the Strix 3080 backplate has zero contact with the back of the VRAM...









That's just brushed aluminium, most probably prep for the pads used on the 3090 Strix.

So...

What's hot is that thicker pad and whatever is in contact with it.


----------



## cstkl1

@Stampede
Also, I suspect it's not that the VRAM got cooler in the vid; that backplate cooling increased the contact surface area for heat transfer from the VRM, which allowed lower GPU and VRAM temps.


----------



## cstkl1

Tias said:


> Anyone know what the difference is between the two versions of nvflash?


Use the one that came with the ASUS update tool. It's a complete 5.667.0.


----------



## cstkl1

GAN77 said:


> Bitspower Classic VGA Water Block for ASUS ROG Strix GeForce RTX 30 series, Bitspower Classic VGA Water Block for ASUS TUF Gaming GeForce RTX 30 series are compatible with ASUS ROG Strix / TUF 3090?
> 
> Yes, the 3080 ASUS ROG Strix / TUF water block is compatible with 3090 ASUS ROG Strix / TUF. But the back plate is not compatible.
> At present, we will release the new back plate for 3090.
> 
> *Best regards!
> Lily Wong*


Must be because of me. Their 3080 backplate differed a lot from the ASUS one;

I had to send them pics.

I am opening my 3090 tomorrow to send pics to Lily.


----------



## Johneey

AlKappaccino said:


> So yesterday I tried shunting my 3090 FTW3. I replaced the three 5 mOhm shunts with 3 mOhm ones:
> 
> 
> 
> Spoiler: Image
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It had trouble booting at first, but after some tries it managed to do so. I ran a short stress test and it looked like it was working. Temps were fine. Then it went black and refused to boot again. After more troubleshooting I couldn't get it to POST anymore.
> 
> So I switched the shunts back to the 5 mOhm ones and everything was working again. I checked everything with my multimeter etc. to ensure correct soldering. Has anyone else tried it on the FTW3, or does anyone have an idea why that might be?
> 
> Anyways; I made some more pictures of the PCB in the process:
> 
> 
> 
> Spoiler: Components
> 
> 
> 
> 
> 
> 
> http://imgur.com/a/jJPcbgw
> 
> 
> 
> 
> 
> 
> 
> Spoiler: Some measurements
> 
> 
> 
> 
> 
> 
> http://imgur.com/a/XWoYzXc
> 
> 
> 
> 
> 
> 
> At least, I improved the temps by a few degrees.
> 
> 
> 
> Spoiler: Thermal Interface


OK, I will not shunt, haha. I already had the shunts here, but you make me worry.


----------



## cstkl1

Also, the Bitspower flow is like this: it's a parallel 3-way flow.


----------



## Stampede

bmgjet said:


> I'd like to correct you on your original post.
> Doing just the connectors won't open up the full power.
> Next you'll be on the GPU chip power limit of 300W. So in my case I should have had 50W more per plug, but I was only getting 20W more per plug.
> I then went and shunted the GPU chip and memory shunts and was on the SRC power limit, with it reporting an SRC voltage of 13.9V instead of the usual 12.7V, and it was only pulling a further 5W per connector. So I added a shunt to that one; it made no difference other than dropping the voltage reading back to normal.
> 
> Lastly, I shunt modded the PCI-E slot shunt with a 15 mOhm stacked on top, and bam, it pulls 50W more for each plug like I expected.
> 
> There is some power balancing going on between the plugs and the slot.
> Igor's Lab and Frame Chasers have reported the same.


Interesting, but if the PCIe slot is now the limit, how much can you draw continuously before damaging the mobo/GPU? With a 15 mOhm shunt it gets you to 100 watts on the PCIe slot. I have seen peaks just under 80 watts with no mods.
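For reference, the arithmetic behind that 100 watt figure can be sketched like this (assuming the stock slot shunt is the usual 5 mOhm; the helper names are mine, not from any tool):

```python
# Stacking a second shunt in parallel lowers the resistance the card
# senses, so it under-reads current and allows proportionally more
# real power on that rail. Values here are illustrative.

def parallel_mohm(stock: float, stacked: float) -> float:
    """Effective resistance of two shunts in parallel, in mOhm."""
    return stock * stacked / (stock + stacked)

def real_power_w(limit_w: float, stock: float, effective: float) -> float:
    """Actual watts drawn when the card believes it is at limit_w."""
    return limit_w * stock / effective

eff = parallel_mohm(5.0, 15.0)        # 15 mOhm on top of 5 mOhm -> 3.75 mOhm
print(real_power_w(75.0, 5.0, eff))   # a 75 W slot budget becomes ~100 W real
```

So the 15 mOhm stack turns the 75 W slot budget into roughly 100 W of real draw, which is where that number comes from.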


----------



## cstkl1

Stampede said:


> Interesting, but if the PCIe slot is now the limit, how much can you draw continuously before damaging the mobo/GPU? With a 15 mOhm shunt it gets you to 100 watts on the PCIe slot. I have seen peaks just under 80 watts with no mods.


The Strix 3080 is interesting in idle and PCIe power...

The TUF pulls 50W constant with peaks in the 60s, so it's hitting the max the PCIe slot can do (5V I think... there's 3.3V reserved for something, hence the 75 watt max, which is why you cannot get 375 watts on the TUF...).

The Strix pulls only 20-odd watts on load, 10 watts idle...

Strix 3080 total idle is 20 watts.
TUF 3080 total idle is 50-60 watts.


----------



## cstkl1

Edit: the TUF 3080 idle is affected by AB; without it the idle goes down to the 30s...
Odd, again the Strix doesn't suffer from this.


----------



## hapklaar

Alelau18 said:


> Don't flash 3x8pin BIOSes on 2xpin cards, it's not gonna work properly, the card will think you have a 3rd power connector and throw a fake read from that non-existant power connector lowering your effective power you can draw instead of increasing it.
> 
> Just flash the Gaming OC BIOS or the Aorus Master BIOS if you want higher power limit (390W) on 2x pin cards.


I have an Inno3D RTX 3090 iChill X4 with 2x8-pin PCIe connectors and can confirm that after flashing a 3x8-pin card's BIOS (EVGA FTW3) to mine, it thinks it has 3 PCIe connectors and counts the power draw from port 1 twice, effectively lowering the power limit by 100-130 watts.

I wasn't able to use the Gigabyte Master BIOS; the card wouldn't POST after that. I was lucky to have a backup card to boot with and reflash my original X4 BIOS back in the secondary GPU slot.

I can't find a copy of the Gigabyte OC BIOS; can someone upload it to TechPowerUp? Or are there any other 2-connector cards with a BIOS that allows a higher power limit than my stock 370W?


----------



## megahmad

hapklaar said:


> I can't find a copy of the Gigabyte OC BIOS; can someone upload it to TechPowerUp? Or are there any other 2-connector cards with a BIOS that allows a higher power limit than my stock 370W?


You can try the Gigabyte Gaming OC BIOS; it's a 2x8-pin card. The PL is 390W.








976 KB file on MEGA







mega.nz




I don't have the Gigabyte card to upload it to TPU, but this is the same BIOS everyone here has been flashing.

Flashing this BIOS, you will lose one or two display ports depending on your current card.


----------



## Typhest

megahmad said:


> Flashing this bios you will lose one or two display ports depending on your current card.


I am considering flashing this on my XC3 Ultra but I’ve got a VR headset that I need a second DisplayPort for. Do you know how many ports that GPU loses?


----------



## megahmad

Typhest said:


> I am considering flashing this on my XC3 Ultra but I’ve got a VR headset that I need a second DisplayPort for. Do you know how many ports that GPU loses?


Sorry, I don't know; you can search the topic and see if someone already tried. But you can make a backup of the original BIOS, flash the Gigabyte one and see. In any case you won't lose all the DPs, so it's easy to go back to the backup BIOS.


----------



## hapklaar

megahmad said:


> You can try the Gigabyte Gaming OC bios, it's 2x8pin card. The PL is 390w.
> 
> 
> 
> 
> 
> 
> 
> 
> 976 KB file on MEGA
> 
> 
> 
> 
> 
> 
> 
> mega.nz
> 
> 
> 
> 
> I don't have the gigabyte card to upload it to TPU but this is the same bios everyone here's been flashing.
> 
> Flashing this bios you will lose one or two display ports depending on your current card.


Thanks a bunch, much appreciated. Just flashed this to my X4 and the 2 display ports I was using are still working, albeit reversed.

Also uploaded the bios to TPU: Gigabyte RTX 3090 VBIOS


----------



## AlKappaccino

Johneey said:


> Ok i will not shunt haha. Had the shunts already here but u make me worry


Well, that's why I wanted to ask here. It could be that the controller somehow doesn't like the lower resistance, though the difference isn't that big. I know that you can trigger a "panic mode" in those, which would explain the behaviour, but other cards seem to do just fine with even lower resistance. So it would be interesting to see others do a shunt mod on the FTW3 and see if it works as expected or shows the same struggle as mine.


----------



## anethema

Well, to those saying that it was only my cold room allowing me to hit mid-14k with my 3090 FE: I got the house nice and toasty (21°C) with the wood stove, played with Afterburner some more, and just got a 14510.



http://www.3dmark.com/pr/409711



Here is a pic of the comp during the test with room temp shown:


http://imgur.com/7iXIgQ2


----------



## Sky3900

anethema said:


> Well to those saying that it was only my cold room allowing me to hit mid 14k with my 3090FE, I got the house nice and toasty (21c) with the wood stove and played with Afterburner some more, and just got a 14510.
> 
> 
> 
> http://www.3dmark.com/pr/409711
> 
> 
> 
> Here is a pic of the comp during the test with room temp shown:
> 
> 
> http://imgur.com/7iXIgQ2


Looks like you've got a really good chip. Best I can get is 14264: clock frequency at 2025, average clock of 1989 MHz... and that's on the edge of instability. 23°C indoors.


----------



## DrunknFoo

For those looking into one or the other, my recommendation is the Bitspower: slightly more water surface coverage across the plate and volume capacity, thinner and more numerous fins, significantly better QC (if you care about aesthetics), and it is lighter and comes with some RAM thermal pads, mounting screws and an extra o-ring, where the Alphacool is the block alone. (This is all subjective though; honestly, get whatever ends up being cheaper, lol.)


----------



## Tias

Have you guys seen this video:






He flashes an EVGA 3080 XC3 (a 2x8-pin card) with the new 450W BIOS for the 3080 FTW3 (a 3x8-pin card),
and says that it works: it gives a boost in wattage, but GPU-Z reports it incorrectly; 8-pin #3 (which does not exist on the XC3 card) just reports the same as 8-pin #1.


----------



## Stampede

Gents #1 in 3dmark Wildlife.



https://www.3dmark.com/wl/10663



No idea how I got such a high score, but it was a suicide run; even starting 3DMark wasn't stable.


----------



## Stampede

DrunknFoo said:


> View attachment 2462281
> 
> 
> 
> 
> For those looking into one or the other, my recommendation is the bitspower, slightly more water surface coverage across the plate and volume capacity, thinner and more fins, QC significantly better (if you care for aesthetics) and ideally it is lighter and comes with some ram thermal pads, mounting screws and an extra o-ring. Where the alphacool is the block alone). (this is just all subjective though, honestly get whatever ends up being cheaper lol)


Was just thinking, what are the chances of a "bare back" install of this block on the GPU? Just need to find the right thickness thermal pads; most important are the mem and VRM areas.


----------



## mirkendargen

Stampede said:


> Was just thinking, what are the chances of a "bare back" install of this block on the GPU? Just need to find the right thickness thermal pads; most important are the mem and VRM areas.


I don't think you'll be able to cover everything you need to, and there REALLY isn't THAT much heat you need to dissipate... The backplates would have fins or heat pipes or SOMETHING from the factory if there was that much heat needing to be shed. A RAM block stuck on the backplate is overkill in terms of cooling capacity, but when you have a custom loop anyway it's simple to do, so you might as well. Don't go crazy making it not simple.


----------



## LoucMachine

Stampede said:


> Gents #1 in 3dmark Wildlife.
> 
> 
> 
> https://www.3dmark.com/wl/10663
> 
> 
> 
> No idea how I got such a high score, but it was a suicide run; even starting 3DMark wasn't stable.


Pretty sure CPU single-core performance has a non-negligible impact on this test, and since you have a fast CPU clocked at 5.3 GHz and a fast 3090, that explains the result.


----------



## Zurv

Jpmboy said:


> I think I just ordered a 3090 FTW3. 🤨 Weird, got an email saying I had 24h to place the order? Should be here Tuesday or Wednesday. Finally had a chance to pay attention to this sheet... so I pulled the trigger.


Hrmm... you are a big deal, sir; I would have expected overnight. (Mine is coming Monday.)
... also a second 3090 FE... oh yeah, bitches... SLI... for... no... reason... (I couldn't control myself.)

(I have 2 Bitspower 3090 FE blocks ordered. I'm waiting for some option for the EVGA 3090 FTW.)


----------



## MikeSanders

Does anyone have a PCB shot of the Manli 3090?
This card:





Manli GeForce RTX™ 3090 (M3478+N613)


TITAN-class performance with NVIDIA Ampere Architecture. 2nd-generation RT cores. 19.5 Gbps GDDR6X memory. Six-6mm copper heat pipes with segmented heat sinks. Metal back plate for reinforcement.



www.manli.com





I can get it very cheap, but I would like to know if waterblocks will fit.


----------



## AlKappaccino

Tias said:


> Have you guys seen this video:
> 
> 
> 
> 
> 
> 
> He flashes an EVGA 3080 XC3 (a 2x8-pin card) with the new 450W BIOS for the 3080 FTW3 (a 3x8-pin card),
> and says that it works: it gives a boost in wattage, but GPU-Z reports it incorrectly; 8-pin #3 (which does not exist on the XC3 card) just reports the same as 8-pin #1.


I have the Strix OC BIOS on my 3090 FTW3 and GPU-Z doesn't report my third pin either. It still works, though.


----------



## LoucMachine

So I did it, boys! I installed the Strix BIOS on my X Trio. I was expecting the program to tell me the flash was successful and ask me to reboot, but instead the screen went black for what was the longest 30 seconds of my life... Then the computer restarted itself and everything seems fine!
Already broke my record on Time Spy by just setting 430W (110%) and raising a few sliders; I'll test some more tonight!
Oof, that was a big adrenaline rush. I know you guys are probably used to doing this, but for me it was a big thing! :O


----------



## Johneey

LoucMachine said:


> So I did it, boys! I installed the Strix BIOS on my X Trio. I was expecting the program to tell me the flash was successful and ask me to reboot, but instead the screen went black for what was the longest 30 seconds of my life... Then the computer restarted itself and everything seems fine!
> Already broke my record on Time Spy by just setting 430W (110%) and raising a few sliders; I'll test some more tonight!
> Oof, that was a big adrenaline rush. I know you guys are probably used to doing this, but for me it was a big thing! :O


😂


----------



## kx11

Who wants a $2600 3090?


----------



## Nizzen

kx11 said:


> who wants a 2600$ 3090


Perfect upgrade for my 2x ASUS Strix White 2080 Tis. I think the white is very pretty, and so does my 11-year-old daughter when I turn on the LED lights on them, LOL.


----------



## Carillo

RIP dear Palit 3090, it's been a fun 14 day run. Sleep tight 💔


----------



## Nizzen

Carillo said:


> RIP dear Palit 3090, it's been a fun 14 day run. Sleep tight 💔


RIP


----------



## Sky3900

Carillo said:


> RIP dear Palit 3090, it's been a fun 14 day run. Sleep tight 💔
> View attachment 2462301


What happened?


----------



## Carillo

Sky3900 said:


> What happened?


It died suddenly... It was well binned; did 2205 MHz @ 1000 mV... It's a shame.


----------



## Benni231990

Did you shunt mod the card, or what did you do?


----------



## Carillo

Benni231990 said:


> Did you shunt mod the card, or what did you do?


Yes i did


----------



## Benni231990

What a shame. Sorry.


----------



## Carillo

Benni231990 said:


> what a shame  sry


Thanks bro, i'm looking for a rope


----------



## Sky3900

Carillo said:


> It died suddenly... It was well binned; did 2205 MHz @ 1000 mV... It's a shame.


That sucks... You were running like 700W through that thing on air, though!

I am debating whether or not to shunt; I'm only looking for another 50-100W. Want to get the clocks stabilized around 2025-2040 on the RTX benches. Probably not much risk of frying my FE at 500W.


----------



## kx11

Carillo said:


> RIP dear Palit 3090, it's been a fun 14 day run. Sleep tight 💔
> View attachment 2462301



ouch, hit that warranty hard on them


----------



## Carillo

Sky3900 said:


> That sucks... You were running like 700W through that thing on air though!
> 
> I am debating whether or not to shunt, only looking for another 50W -100W. Want to get the clocks stabilized around 2025-2040 on the RTX benches. Probably not much risk of frying my FE at 500W.


Thanks mate! IDK the reason it died, it just did, and i really loved that card


----------



## Falkentyne

Sky3900 said:


> That sucks... You were running like 700W through that thing on air though!
> 
> I am debating whether or not to shunt, only looking for another 50W -100W. Want to get the clocks stabilized around 2025-2040 on the RTX benches. Probably not much risk of frying my FE at 500W.


500W should be safe on an FE; the VRMs can handle it.
Are you going to stack 8 mOhms on top of the 005s?
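Rough numbers for that combo, as a sketch (assuming the stock "005" shunts are 5 mOhm and a 350W stock FE limit; the function name is just illustrative):

```python
# An 8 mOhm shunt soldered on top of a stock 5 mOhm one acts as two
# resistors in parallel; the card's power reading scales down by the
# same ratio, so the real power limit scales up by stock/effective.

def stacked_multiplier(stock_mohm: float, added_mohm: float) -> float:
    """Factor by which the real power limit grows after stacking."""
    effective = stock_mohm * added_mohm / (stock_mohm + added_mohm)
    return stock_mohm / effective

m = stacked_multiplier(5.0, 8.0)   # ~1.625x
print(round(350.0 * m))            # a 350 W limit becomes roughly 569 W real
```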


----------



## Carillo

kx11 said:


> ouch, hit that warranty hard on them


Will try, but still... it was a good one. It hit 2280 MHz with 15°C water.


----------



## HyperMatrix

Carillo said:


> will try, but still... it was a good one. hit 2280 mhz with 15c water


15°C water? How low was your ambient? Under normal circumstances that would cause condensation, no?


----------



## Carillo

HyperMatrix said:


> 15c water? how low was your ambient? under normal circumstances that would cause condensation, no?


My ambient was also 15 degrees... it's Norway, cold as ****... I have been using a chiller for several years, so my dew point is under control. Maybe worth mentioning I am an electrical engineer, so I am not a total noob.


----------



## HyperMatrix

Carillo said:


> My ambient was also 15 degree..... it's Norway, cold as ****... i have been using chiller for several years, so my dew point is under control


Haha 15c ambient is damn cold man. Props to you for surviving that. Hope they'll accept RMA.


----------



## Carillo

HyperMatrix said:


> Haha 15c ambient is damn cold man. Props to you for surviving that. Hope they'll accept RMA.


Thanks bro. Wish you all the best


----------



## Sky3900

Falkentyne said:


> 500W should be safe on an FE; the VRMs can handle it.
> Are you going to stack 8 mOhms on top of the 005s?


The plan is to replace the 005s with 004s. According to *olrdtg* this will provide about 450W. I'm assuming this is at 100% power; at the 114% max it should be just over 500W.

Not sure if I'm going to replace them all at once or do them one at a time and measure the effect each has on the various wattages reported in GPU-Z... similar to what der8auer did in his 3090 shunting video. That's a lot of extra assembly/disassembly work, though.
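A quick sanity check of those numbers, as a sketch (assuming a 350W stock FE limit and a straight 5 mOhm to 4 mOhm swap):

```python
# Replacing a shunt outright (rather than stacking) scales the real
# power limit by stock/new resistance, since the card under-reads
# current by the same ratio. Values are illustrative.

def replaced_limit_w(stock_limit_w: float, stock_mohm: float, new_mohm: float) -> float:
    """Real power limit after swapping a shunt for a new value."""
    return stock_limit_w * stock_mohm / new_mohm

base = replaced_limit_w(350.0, 5.0, 4.0)   # 437.5 W at the 100% power target
print(base * 1.14)                         # ~499 W at the FE's 114% slider
```

That lines up with the "about 450W" at 100% and just over 500W at the max slider.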


----------



## bmgjet

Use my tool to work out what power limit you'll end up with:

GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
github.com





Also if you do mod them one at a time please post up what each one is so I can add it to the shunt map info.


----------



## Sky3900

bmgjet said:


> Use my tool to work out what power limit you'll end up with:
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
> github.com
> 
> 
> 
> 
> 
> Also if you do mod them one at a time please post up what each one is so I can add it to the shunt map info.


 Will do, if I decide to go ahead with the mod. Thanks for sharing the calculator.


----------



## animeowns

Baasha said:


> I have not heard the 3090 Strix being in stock anywhere - is there a different release date for it or have I just missed the drops (like with all the other cards)?


It was in stock last night at 8:01 PM at Newegg.


----------



## Baasha

animeowns said:


> it was in stock last night at 8:01 pm at newegg


Yea, I saw that on Reddit about 5 mins after. Naturally, it was sold out instantly.

Did you get the 3090 yet? Are you waiting for the Strix or FE?


----------



## DocratesCR

Hello from Costa Rica. 

I own an *MSI 3090 Ventus OC* (350W power limit) and an i9 10900K (5200 MHz).

Factory BIOS, 3DMark score: https://www.3dmark.com/spy/14480662 (18398)

Then I installed the Gigabyte Aorus Master BIOS, lost 2 DisplayPorts, and my score was: https://www.3dmark.com/spy/14563604 (19069)

Finally, I installed the Gigabyte Gaming OC BIOS and lost only one DisplayPort. First score was: https://www.3dmark.com/spy/14623194 (18979)

After some tuning, I set Core Clock to +180 and Memory Clock to +250. Final score: https://www.3dmark.com/spy/14628723 (19356)

I am happy with these results for a Ventus. Now I want to try SLI with two Ventus OCs. But does someone know if there is a flexible SLI bridge?


----------



## jomama22

So from framechasers video:





The stuff about flashing the 450W BIOS on a reference 3080 isn't really important.

What is possibly important is that you may still see a benefit from flashing a 3-pin BIOS (such as the Strix) to your 2-pin board. It's possible that you will still pull more total watts and current even though the third 8-pin is non-existent. I know people have tried to flash them on here before, but I am not sure if those people ran actual benchmarks to compare.

You obviously wouldn't get the full 480W, but it's possible you will get somewhat more than the 400W you're limited to now.

Just something to try. I have a Founders I just got and am benching now. Hopefully it's a winner!


----------



## LVNeptune

Hopefully when these start getting out they make a backplate that can be cooled. Definitely needs backplate cooling!


----------



## bmgjet

You're getting less power using a 3-plug BIOS that doesn't have the plug check in it.
It's pretty easy maths to do.

Take the 3080 XC3 running on that 450W 3-plug BIOS:
450W - 75W for the slot gives you 375W over 3 plugs.
375W / 3 = 125W per plug.
Plug 3 isn't on the card, so it's not pulling any real power in; instead it's reporting plug 1's power.

125W x 2 plugs = 250W
+ 75W for the slot = 325W real board power when it's reporting 450W.
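The steps above in a few lines, if you want to plug in your own BIOS numbers (the 75W slot budget and even per-plug split are the assumptions from this post):

```python
# A 3-plug BIOS splits the non-slot power budget evenly across three 8-pin
# inputs. On a 2-plug card the missing third plug just mirrors plug 1's
# reading, so its share of the budget is never actually drawn.

SLOT_W = 75          # power budget assigned to the PCIe slot
REPORTED_W = 450     # what the 3-plug BIOS reports as total board power

per_plug = (REPORTED_W - SLOT_W) / 3   # 125.0 W allocated per plug
real_power = per_plug * 2 + SLOT_W     # only 2 plugs physically exist
print(real_power)  # 325.0
```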


----------



## chispy

bmgjet said:


> Use my tool to work out what power limit you'll end up with.
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
> 
> Also if you do mod them one at a time please post up what each one is so I can add it to the shunt map info.


Hola bmgjet, thank you for the calculator, it's an awesome tool. I believe you are correct in your findings, as I keep hitting the power limit and voltage-limit flags in GPU-Z. I also believe what der8auer and elmor said: shunts on the 2x8 plugs do help with the power limit to an extent. But there is a catch: what GPU-Z reports for voltages is completely wrong. I probe my card with my digital multimeter, and the voltages GPU-Z reports are not the same as what my multimeter says.

Example: before using elmor's EVC2S tool to unlock voltage control, I tested my card, which has 5 shunt mods (all except the PCI Express). The v.core reading at idle was correct, but here comes the scary news: after all 5 shunt mods, during load in Fire Strike Extreme, GPU-Z reports 1.087V for v.core, but the real v.core was 1.11V to 1.15V, varying during load at -39°C. Hence we cannot fully rely on or trust software readings.

I was running at -47°C on my dual-compressor single stage, which is a very strong SS, and after using elmor's EVC2S voltage controller to successfully add more voltage, the card went supernova, really hot, and my SS could not handle it even at a mere 1.12V sustained. These cards run very, very hot. The card was also pulling 82W on the PCI Express slot, asking for more voltage I guess. These video cards are still so new; we have to keep sharing our findings so people do not break their cards trying shunt mods and other mods, as Carillo already killed his 3090 on a water chiller, shunt modded.

I was talking with elmor last night for many hours testing the new EVC2S. He told me to be careful with the PCI Express shunt mod: going out of specification is only recommended for liquid-nitrogen sub-zero benchmarking, because the LN2 helps cool the PCI Express slot, which gets really hot once shunt modded. Since elmor dug deep into PCI Express slot shunt modding, I'm going to copy-paste his recommendations here.

Copy-paste from elmor's and my chat:

"







*elmor10/11/2020*
I did a bit more digging on this and might need to revise the advice. I found the connector spec is: 1.1 amp per contact minimum per EIA-364—70, method 2 and PCI Express Connector High Speed Electrical Test Procedure. The temperature rise shall not exceed 30 degree C. Ambient condition is still air at 25°C.


_[_9:28 PM_]_
There are 5x 12V supply pins in a PCI-E slot 5 * 1.1A = 5.5A, at 12V that's 66W


_[_9:31 PM_]_
Assuming a worst case scenario of 75 degree temperature rise (100_C at 25_C ambient) that would allow for ~1.7A/pin, 1.7*5 = 8.5A or 102W


_[_9:33 PM_]_
PCI Express - Wikipedia
PCI Express
PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e, is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal compute...



_[_9:33 PM_]_
All PCI express cards may consume up to 3 A at +3.3 V (9.9 W). The amount of +12 V and total power they may consume depends on the type of card: A full-sized x16 graphics card[18] may draw up to 5.5 A at +12 V (66 W) and 75 W combined after initialization and software configuration as a "high power device".
_[_9:33 PM_]_
The 75W number is combined +3.3V and +12V power









*chispy10/11/2020*
Thank you for the guidance and sharing your findings Elmor , so it should be safe up until worst case scenario ~ 102 w ? correct ?










*chispy10/11/2020*
I talk to Ronaldo rbauss from Brazil , he says he did all the shunt mods on his card Galax 3090 2x8 pin , it was 6 including pci express shunt mod without problems he ran full pot on Ln2. It seems they changed the Boost algorithm on the Bios of Ampere and the card changes clocks dinamically even under full pot clocks are swinging , it is not static anymore and there is no way to lock the frequencies ( termspy , k-boost hack , msi ab curve nothing works ). he says it was not v.core starve and it had all the power limits remove and running full pot. He says the big problem is the Bios and not the power delivery or temperatures.










*elmor10/11/2020*
Well on LN2 even the PCI-E connector would have better cooling







Yeah I wouldn't suggest to go past 100W on the PCI-E 12V rail on air/water









*chispy10/11/2020*
Thank you for the advice, i'm testing on water chiller at -21c



_[_11:15 PM_]_
so i will not do the pci express shunt mod to be on the safe side.



_[_11:17 PM_]_
Thank you elmor , appreciate it all the advice.



_[_11:17 PM_]_
Have a good day my friend.



_[_11:17 PM_]_
I'm going to sleep now , bye.










*elmor10/11/2020*
No problem, bye!


And that's why i have not done the shunt mod on the pci express slot. I'm going to attach some pictures of the real actual v.core after 5 shunt mods are done and nothing else on idle and during load:

At idle, v.core reads 0.75V.












During full load in Fire Strike Extreme it reads 1.11V, all the way up to ~1.15V (GPU-Z reports 1.087V).













My setup for sub-zero testing on a dual-compressor, very strong single-stage phase unit:
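The slot-power arithmetic elmor walks through in the chat above is easy to redo with your own assumptions. A small sketch (pin count and amp ratings are the ones quoted from the chat, not independently verified):

```python
# Check of the PCI-E slot 12V figures from the chat: 5 supply pins,
# 1.1 A/pin rated (30 degC rise per EIA-364-70), ~1.7 A/pin as elmor's
# worst-case estimate for a 75 degC rise.

PINS_12V = 5
V = 12.0

spec_amps_per_pin = 1.1    # rated current for a 30 degC temperature rise
worst_amps_per_pin = 1.7   # elmor's worst-case (75 degC rise) estimate

spec_watts = PINS_12V * spec_amps_per_pin * V
worst_watts = PINS_12V * worst_amps_per_pin * V
print(round(spec_watts), round(worst_watts))  # 66 102
```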


----------



## Sky3900

chispy said:


> Hola bmgjet , thank you for the calculator is an awesome tool. I believe you are correct on your findings as i keep hitting power limit and voltreg in gpu-z , also i believe what der8auer and elmor said shunts on the 2x8 plugs do help with the power limit to an extent but there is a catch what gpu-z reports in voltages it's completely wrong  i use my digital multimeter and poke my card and the voltages of what gpu-z reports are not the same as what my multimeter says.
> 
> [...]


Wow! Nice setup, you're not joking around with cooling.


----------



## bmgjet

That overvoltage could be the LLC at work, if you can adjust it.
While those voltage test points read that voltage, the center of the die is going to be lower. How much lower I have no idea, lol.

Those with fuses on their cards would have an easy job removing the fuse and using that as a point to add another 12V plug.
EVGA uses a 10 amp fuse for the PCI-E slot.
MSI uses a 20 amp for it.
And both use 20 amp for each plug.


----------



## cstkl1

DrunknFoo said:


> View attachment 2462281
> 
> 
> 
> 
> For those looking into one or the other, my recommendation is the bitspower, slightly more water surface coverage across the plate and volume capacity, thinner and more fins, QC significantly better (if you care for aesthetics) and ideally it is lighter and comes with some ram thermal pads, mounting screws and an extra o-ring. Where the alphacool is the block alone). (this is just all subjective though, honestly get whatever ends up being cheaper lol)


Just remember that a lot of the backplates are aluminium, so nickel on alu is not a good idea.


----------



## Sarieri

Does anyone know if there's a difference between the two ASUS Strix 480W BIOSes? The version numbers are different but everything else seems the same.


----------



## Thoth420

kx11 said:


> who wants a 2600$ 3090


omg ME!


----------



## mirkendargen

cstkl1 said:


> just remember that alot of the backplates are aluminium. so
> nickel on alu not a good idea.


The backplate is gonna be painted/anodized, you'll have pads/paste in between, and it's not like there's electrolyte fluid flowing between them anyway. I can't imagine there would be a problem. Tons of cheap CPU coolers are aluminum and get stuck on nickel plated copper IHS's.


----------



## anethema

Ok, so for those who want to know what happens when I open my window and get some cold air blowing in the computer's direction:



https://www.3dmark.com/pr/411685



#25 baby! 2,068 MHz average clock, 2,220 MHz max.

In Afterburner I was at +1000 mem, +195 core, max power and fan.

Just got my shunts in the mail too, really tempted to try them out.

You guys think paralleling 3 mOhm and 5 mOhm together will bring the measured power too low and trigger Nvidia's protection? That would bring the max theoretical power to like 930W, haha.
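Quick sanity check on that figure, assuming 5 mOhm stock shunts and a 350W base limit (numbers discussed earlier in the thread; your card's actual values may differ):

```python
# Stacking a 3 mOhm shunt on top of a stock 5 mOhm shunt puts them in
# parallel; the reported power scales down (and real power up) by the
# ratio of stock resistance to effective resistance.

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

stock = 5.0                       # assumed stock shunt, mOhm
stacked = parallel(stock, 3.0)    # 1.875 mOhm effective
multiplier = stock / stacked      # ~2.67x real power vs reported
print(round(350 * multiplier))    # 933 W theoretical max, close to "930W"
```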


----------



## chispy




----------



## Johneey

DocratesCR said:


> Hello from Costa Rica.
> 
> I own a *MSI 3090 Ventus OC*. (350w power limit) and i9 10900k (5200 Mhz)
> 
> Factory Bios, 3DMark score is: https://www.3dmark.com/spy/14480662 (18398)
> 
> Then I installed Gigabyte Aorus Master Bios, lost 2 display ports and my score was: https://www.3dmark.com/spy/14563604 (19069)
> 
> Finally, I installed Gigabyte Gaming OC Bios, lost only one display port. First score was: https://www.3dmark.com/spy/14623194 (18979)
> 
> By doing some tunning, I set Core Clock to +180 and Memory Clock to +250.Final score is: https://www.3dmark.com/spy/14628723 (19356)
> 
> I am happy with results for a Ventus. Now I wanna try a SLI of Ventus OC. But, do someone knows if there is a flexible sli port?


Really happy with these results? Some 3080s are better.


----------



## AlKappaccino

chispy said:


>


He had the exact same issue I had when I installed the 3 mOhm shunts, though he's not really talking about the solution, other than the same thing I suggested: going too low triggers the panic mode. But his shunt stacking results in 3.08 mOhm, which is not that much higher than my setup.

He replaced all 5 shunts though, but that shouldn't be necessary.


----------



## Johneey

AlKappaccino said:


> He had the exact same issue I had, when I installed the 3mOhm shunts. Though he's not really talking about the solution, other than the same thing I suggested, meaning going too low triggers the panic mode. But his shunt stacking results in 3,08mOhm, which is not that much higher than my setup.
> 
> He replaced all 5 shunts though, but that shouldn't be necessary.


What delta does he have to water? Didn't watch the full video, but I want to compare it to the Alphacool.


----------



## Tias

bmgjet said:


> Your getting less power using a 3 plug bios that doesnt have the plug check in it.
> Its pritty easy maths to do.
> 
> Such as take the 3080 XC3 running on that 450W 3plug bios.
> 450W - 75W for the slot gives you 375W over 3 plugs.
> 375W / 3 = 125W per plug.
> Plug 3 isnt on the card, so its not pulling any real power in, intead its reporting plug 1s power.
> 
> 125 X 2 plug = 250W
> + 75W for the slot = 325W real board power when its reporting 450W.


Is this a new thing with the 3000 series? Because I flashed the "Kingpin XOC BIOS" to my 2080 Ti FE (2x 8-pin) and had it on there for well over 6 months, and the 2080 Ti FE pulled around 500-550W while benchmarking, on 2x 8-pin.

Did something change with the 3000 series, or is it a must that it is an "XOC BIOS" for it to work on a 2x 8-pin card, even if the card the XOC BIOS is from is 3x 8-pin?

I don't understand the difference: why did it work to flash a 3x 8-pin BIOS to the 2x 8-pin 2080 Ti FE, while some people say it doesn't work to flash a 3x 8-pin BIOS to a 2x 8-pin RTX 3090?


----------



## Chamidorix

chispy said:


> Hola bmgjet , thank you for the calculator is an awesome tool. I believe you are correct on your findings as i keep hitting power limit and voltreg in gpu-z , also i believe what der8auer and elmor said shunts on the 2x8 plugs do help with the power limit to an extent but there is a catch what gpu-z reports in voltages it's completely wrong  i use my digital multimeter and poke my card and the voltages of what gpu-z reports are not the same as what my multimeter says.



What a goddamn surprise. Like I literally said weeks ago at this point, you have to shunt more than just the 8-pin resistors this generation to truly uncap power draw. At least we finally have two sets of hard evidence now in this forum (yours and bmgjet's), beyond just the framechasers + buildtec results I was quoting and the explanation you could derive via buildzoid's analysis, again all weeks ago.

Salt aside, I nevertheless agree that PCI shunting should not be a catch-all recommendation, ESPECIALLY with volt modding, but it is good to understand what limitations all users will run into with different levels of overclock (soft vs hard, vBIOS vs shunt vs elmor, etc.). I think bmgjet has pretty much spelled it all out for the FE; it just remains to define the particular differences between AIB cards.

Seems the XOC BIOS will be extra important this generation for safe massive wattage draws, as that will be the only way, outside of extensive hard board rerouting, to remove the PCI rebalancing and let us pull 600+W with the majority over the over-specced and therefore safe 8-pins (rather than overloading the PCI-E slot).

Regarding PCI-E safety, anecdotally framechasers has been running 600W+ over his 2x 8-pin + PCI on an XC3 for a while now (full shunts, under waterblock) and hasn't reported problems.

Since I'm feeling pretty validated by now, I would just like to remind those with voltage mods to take a look at the performance effects of overclocking the new core power rail that didn't exist in previous generations. This likely powers the cache, similar to uncore/ring on Intel, and I'm thinking there will be a decent bit of performance to gain from upping it, which no one seems to be doing.


----------



## VickyBeaver

So this was something that cropped up a couple of days ago that may interest ASUS TUF owners, especially the ASUS 450W OC BIOS:

https://twitter.com/i/web/status/1317076936879898624
My question is: does the TUF stock BIOS work on other 2x 8-pin cards? If so, has anyone attempted a flash from the TUF BIOS to the new OC BIOS from ASUS on a reference 3090?


----------



## Spiriva

VickyBeaver said:


> so this was somthing that cropped up a couple of days ago that may intrest ASUS TUF owners especially asus 450w OC bios
> 
> __ https://twitter.com/i/web/status/1317076936879898624
> my question is dose the tuf stock bios work on other 2x8pin cards if so has any one attempted a flash from tuf bios to the new oc bios from asus on a refrence 3090?


The TUF 3090 can be flashed to a ref 3090; both have 2x 8-pin. But the ASUS TUF 3090 isn't 450W, is it? If they did not change it, it should be 375W.


----------



## VickyBeaver

Spiriva said:


> The tuf 3090 can be flashed to a ref 3090. Both have 2x8pin. But the asus tuf 3090 isnt 450w is it? If they did not change it, it should be 375w


Unsure if legit; this article drew my attention to the 450W claims:
Asus new BIOS increase power limit (450W) of Strix / TUF GeForce RTX 3090 / 3080 _shrugs_


----------



## megahmad

Johneey said:


> Really happy with this results? Some 3080s are better


Not really. Most of "those" 3080s you're talking about are either shunted/water-cooled with really good bins, or ones that are overclocked to oblivion and magically passed the 3DMark benchmark but with zero stability in games or anything else.
Getting a 20500 graphics score in Time Spy with a 3080 takes a lot of work. I just wouldn't say "some 3080s".


----------



## slopokdave

jomama22 said:


> So from framechasers video:
> 
> 
> 
> 
> 
> The stuff about flashing the 450 on a reference for the 3080 isn't really important.
> 
> What is kinda possibly important is that you may still see a benefit from flashing a 3pin bios (such as the strix) to your 2 pin board. It's possible that you will still pull more total watts and current even though the 8 pin is non-existant. I know people have tried to flash the.on here before but I am not sure if those people attempted actual benchmark runs to compare.
> 
> You obviously wouldn't get the full 480w, but it's possible you will get some more over the 400w your limited to now.
> 
> Just somthing to try. I have a founders I just got an am benching now. Hopefully it's a winner!


Not on the ASUS TUF at least. If any 2x 8-pin board were to improve from the 3x 8-pin BIOS, I'd think it would be the TUF, because of the same maker/similarities.

The PC thinks it's running 450W on my ASUS TUF with the Strix BIOS, but as stated, it lies. I never see more than 450W total system power from the wall with the Strix BIOS on the TUF, whereas with the TUF BIOS on the TUF I see 525W.



bmgjet said:


> Your getting less power using a 3 plug bios that doesnt have the plug check in it.
> Its pritty easy maths to do.


Yup, this. My smart plug at the wall doesn't lie; it shows real watts at the wall. A 3x 8-pin vBIOS on a 2x 8-pin card, like the TUF, is unfortunately worse than the stock vBIOS.


----------



## cstkl1

nfs 4k - 727w

5.1ghz 10900k
2x 16gb 4266c17
Strix 3090 oc


----------



## cstkl1

megahmad said:


> Not really. Most of "those" 3080s you're talking about are either shunted/water cooled with really good bins or ones that are overclocked to oblivion that magically passed the 3dmark benchmark but with zero stability in games or anything else.
> Getting 20500 graphics score in Time Spy with a 3080 needs a lot of work. I just wouldn't say "some 3080s".


Not really. 20.2k in TS is pretty much a given on a water-cooled Strix 3080, with stable clocks of only 2070-2100.

But a 3080 cannot make up for the Tensor core difference in PR.

The reason his score is bad is apparent from his CPU: it's not stable, with a badly optimized RAM setup.


----------



## Sheyster

VickyBeaver said:


> so this was somthing that cropped up a couple of days ago that may intrest ASUS TUF owners especially asus 450w OC bios
> 
> __ https://twitter.com/i/web/status/1317076936879898624
> my question is dose the tuf stock bios work on other 2x8pin cards if so has any one attempted a flash from tuf bios to the new oc bios from asus on a refrence 3090?


That new EVGA OC 450W BIOS is for the 3080 FTW3 only, which is a 3x 8-pin card, not a 2-connector card:









EVGA GeForce RTX 3080 FTW3 ULTRA GAMING, 10G-P5-3897-KR, 10GB GDDR6X, iCX3 Technology, ARGB LED, Metal Backplate (www.evga.com)





There is no compatible OC BIOS available for the FE. If there ever is, it will be posted in the first post of this thread.


----------



## Benni231990

So what does the ASUS BIOS update bring?

Is it true that the TUF 3090 gets 450W after the update, or does it stay at the stock 375W?


----------



## Sheyster

Benni231990 said:


> so what brings the asus update bios??
> 
> is its true what the tuf 3090 have after update 450 watt ?? or 375 watt stock??


I believe it's just related to fan curves and fan on/off temps. No bump in power limit as far as I know.


----------



## jomama22

megahmad said:


> Not really. Most of "those" 3080s you're talking about are either shunted/water cooled with really good bins or ones that are overclocked to oblivion that magically passed the 3dmark benchmark but with zero stability in games or anything else.
> Getting 20500 graphics score in Time Spy with a 3080 needs a lot of work. I just wouldn't say "some 3080s".


Does the CPU have that much of an effect on the final graphics score in Time Spy?


----------



## Bilco

Benni231990 said:


> so what brings the asus update bios??
> 
> is its true what the tuf 3090 have after update 450 watt ?? or 375 watt stock??


100% no power limit increase unfortunately... we are still cucked on power with the TUF 3090 =[

For the other 3090 TUF owners, which seems to be the best bios? I currently use two display ports and an HDMI.


----------



## smonkie

To all the owners here: are you afraid Big Navi could devalue the 3090's price as soon as AMD presents it?


----------



## Jpmboy

^^ No. 🤨


----------



## Vgnusstffr

Can someone please help me? I have 2 RTX 3090s in SLI via the new NVLink bridge (4-slot) on the MSI MEG Godlike Z390, powered by an i9 9900K overclocked to 5.1 GHz on all 8 cores, with 32GB of 3600 MHz G.Skill Trident Z RGB RAM. I have an M.2 SSD installed in the 2nd M.2 slot.

I cannot get more than 20% usage out of my second card when running games that are confirmed to run well in this configuration, such as Deus Ex: Mankind Divided. I had Metro Exodus and Control running well in SLI on my twin Kingpin 2080 Tis in SFR checkerboard mode (enabled in Nvidia Profile Inspector), but now those games get 0% usage in SFR SLI on my 3090s. Shadow of the Tomb Raider will utilize nearly 100% of both cards in mGPU AFR, but that's it. None of the SLI games would utilize more than 20% of GPU #2.

I saw the Linus Tech video in which Linus links 2 ASUS RTX 3090s on the "MSI MEG Godlike Z490" and was running supported games like Deus Ex: Mankind Divided perfectly on his setup. The thing is, he was SORELY mistaken about what his system was. HE WAS NOT RUNNING ON THE MSI MEG GODLIKE Z490!!! HE WAS RUNNING ON THE MSI GODLIKE Z390!!! The Godlike Z490 DOES NOT support the 4-slot spacing required to run these cards in SLI, and when he zoomed in on the board in the video, you can clearly see that he is running on the Z390 Godlike. So I know it can be done, since I am running the same setup as him (I assume he had an i9 9900K in there, even though he claimed to be running an i9 10900K, since Z390 will not support that CPU).

So does anyone have any insight as to what I need to do? Before I saw the Linus video I suspected it was due to the slots dropping to x8, but Linus clearly showed that is not the issue. Any help would be appreciated, as I've spent 3 days trying to figure out what I'm doing wrong.


----------



## Baasha

Vgnusstffr said:


> Can someone please help me? I have 2 RTX 3090s in SLI via the new NVLink Bridge in 4 way SLI on the MSI MEG Godlike Z390. Powered by the i9 9900k overclocked to 5.1 GHz on all 8 cores, with 32gb of 3600 MHz G-Skill Trident Z RGB RAM. I have an M.2 SSD installed into the 2nd M.2 Slot. I can not get more than 20% usage out of my second card when running games that are confirmed to run well in this configuration, such as Deus X Mankind Divided. I had Metro Exodus, and Control running well in SLI on my twin Kingpin 2080 Ti in SFR Checkerboard mode (Enabled in the Nvidia Profile Inspector), but now those games get 0% usage in SFR SLI on my 3090s. Shadow of the Tomb Raider will utilize nearly 100% of both cards in MGPU AFR, but that's it. None of the SLI games would utilize more than 20% of GPU #2. I saw the Linus Tech video in which Linus links 2 ASUS RTX 3090s on the "MSI MEG Godlike Z490" and was running the supported games like Deus Ex Mankind Divided perfectly on his setup. The thing is, he was SORELY mistaken about what his system. HE WAS NOT RUNNING ON THE MSI MEG GODLIKE Z490!!! HE WAS RUNNING ON THE MSI GODLIKE Z390!!! The Godlike Z490 DOES NOT support the required 4 slot spacing to run these cards in SLI, and when he zoomed in on the board in the video, you can clearly see that he is running on the Z390 Godlike. So I know ot can be done, since I am running the same setup as him (I assume he had the i9 9900k in there even though he claimed to be running with the i9 10900k since Z390 will not support that CPU), so does anyone have any insight as to what I need to do? Before I saw the Linus video I suspected that it was due to the slots dropping to 8x, but clearly Linus showed that is not the issue. Any help would be appreciated, as I've spent 3 days trying to figure out what I'm doing wrong.


First of all, congrats on getting 2x 3090s. Which ones, the FE?

Anyway, SLI is all but dead on this platform.

SFR Checkerboard was removed after the 446 branch drivers (450.xx and above) so it cannot be forced via NVCP.

Secondly, the handful of games that are purported to work are only DX12/Vulkan API games - Strange Brigade is the only game that works with both Vulkan & DX12 MGPU almost perfectly.

That video you talk about of that nincompoop is garbage for several reasons; he couldn't even get Red Dead Redemption 2 running and not to mention he said he was running the Z490 platform when he clearly wasn't.

I'm still going to get 2x 3090s in SLI, but they have disabled even older (DX11) games: the GPUs will simply ignore DX11 profiles.

There is a trick up my sleeve but I am not going to disclose it until I have 2x 3090s in my system. 

For all intents and purposes, bid farewell to SLI/MGPU. The least Nvidia could have done is to leave the older game profiles enabled - there are literally dozens of games that would've been incredible with 3090 SLI in native 8K (>60fps) such as GTA V, The Witcher 3, BFV etc. etc. Oh well...


----------



## Zurv

Baasha is a crazy person. Fact. 

oh, it might be worth making a thread for games that can take some advantage from multi 3090s.


----------



## J7SC

chispy said:


>


...watching that now


----------



## Vgnusstffr

Baasha said:


> First of all, congrats on getting 2x 3090s - which ones? The FE?
> 
> Anyway, SLI is all but dead on this platform.
> 
> SFR Checkerboard was removed after the 446 branch drivers (450.xx and above) so it cannot be forced via NVCP.
> 
> Secondly, the handful of games that are purported to work are only DX12/Vulkan API games - Strange Brigade is the only game that works with both Vulkan & DX12 MGPU almost perfectly.
> 
> That video you talk about of that nincompoop is garbage for several reasons; he couldn't even get Red Dead Redemption 2 running and not to mention he said he was running the Z490 platform when he clearly wasn't.
> 
> I'm still going to get 2x 3090s in SLI, but they have disabled even older (DX11) games: the GPUs will simply ignore DX11 profiles.
> 
> There is a trick up my sleeve but I am not going to disclose it until I have 2x 3090s in my system.
> 
> For all intents and purposes, bid farewell to SLI/MGPU. The least Nvidia could have done is to leave the older game profiles enabled - there are literally dozens of games that would've been incredible with 3090 SLI in native 8K (>60fps) such as GTA V, The Witcher 3, BFV etc. etc. Oh well...


Could you PLEASE private message me your trick? It's my daughter's birthday and I want to surprise her with her favorite games working in SLI. She LOVES getting her "frames" lol, so cute. I would be so very grateful...


----------



## Vgnusstffr

Vgnusstffr said:


> They are the FE! I wouldn't spend my money on non-FE this round, and thank you. I paid a pretty penny for them. Could you PLEASE private message me your trick? It's my daughter's birthday and I want to surprise her with her favorite games working in SLI. She LOVES getting her "frames" lol, so cute. I would be so very grateful...


----------



## jepz

smonkie said:


> To all the owners here: are you afraid Big Navi could devalue the 3090's price as soon as AMD presents them?


No, not at all 😁


----------



## Foxrun

Are any of you guys getting artifacts? It only happens in certain games: KF2, DRG, WoW so far. Temps never break 58°C. This is at both stock and OC clocks. My 3090 FE downclocks and does not recover once I shut my TV off (48CX). I'm hoping it's a driver/firmware issue, as it only seems to happen at 10-bit/120Hz. It was not happening at 60Hz.


----------



## cstkl1

Foxrun said:


> Are any of you guys getting artifacts? It only happens in certain games: KF2, DRG, WoW so far. Temps never break 58c. This is at both stock and oc clocks. My 3090 FE downclocks and does not recover once I shut my TV off (48cx). Im hoping it's a driver/firmware issue as it only seems to happen at 10bit/120hz. It was not happening at 60hz.


Sounds like the back VRAM is overheating...
Try putting a fan on it and see whether that helps... try a ghetto fix first.


----------



## Zurv

Foxrun said:


> Are any of you guys getting artifacts? It only happens in certain games: KF2, DRG, WoW so far. Temps never break 58c. This is at both stock and oc clocks. My 3090 FE downclocks and does not recover once I shut my TV off (48cx). Im hoping it's a driver/firmware issue as it only seems to happen at 10bit/120hz. It was not happening at 60hz.


do you have the new firmware to fix gsync issues?


----------



## Foxrun

Yes, I’m using the 3.11.25. I’m going to remove the backplate tomorrow morning and see if I can slap some thermal pads on the VRMs. I don’t understand why it doesn’t happen in all games. The frustrating part is the weird behavior of downclocking and black screens when I turn my display off and turn it back on.


----------



## cstkl1

Foxrun said:


> Yes, I’m using the 3.11.25. I’m going to remove the backplate tomorrow morning and see if I can slap some thermal pads on the VRMs. I don’t understand why it doesn’t happen in all games. The frustrating part is the weird behavior of downclocking and black screens when I turn my display off and turn it back on.


It's not the VRM, bro... it sounds like VRAM overheating.
If the VRM overheats, you will be throttling like nuts first.


----------



## Bilco

Foxrun said:


> Are any of you guys getting artifacts? It only happens in certain games: KF2, DRG, WoW so far. Temps never break 58c. This is at both stock and oc clocks. My 3090 FE downclocks and does not recover once I shut my TV off (48cx). Im hoping it's a driver/firmware issue as it only seems to happen at 10bit/120hz. It was not happening at 60hz.


Yes I just had it happen on my 3090 tuf oc with +170 core and +102 ram. Backed the core down to 160 and had a 3 hour DRG session with no artifacting. Required a reboot to get rid of the artifacts. Playing on 1080p with frames limited at 238.


----------



## Vgnusstffr

Foxrun said:


> Are any of you guys getting artifacts? It only happens in certain games: KF2, DRG, WoW so far. Temps never break 58c. This is at both stock and oc clocks. My 3090 FE downclocks and does not recover once I shut my TV off (48cx). Im hoping it's a driver/firmware issue as it only seems to happen at 10bit/120hz. It was not happening at 60hz.


I am getting strange vertical white lines that flash on the screen when doing 4K 120 in Shadow of the Tomb Raider and Resident Evil 7. Wondering if it's overheating VRAM, or my 15 ft. Monster 8K 48Gbps cable.


----------



## cstkl1

Bilco said:


> Yes I just had it happen on my 3090 tuf oc with +170 core and +102 ram. Backed the core down to 160 and had a 3 hour DRG session with no artifacting. Required a reboot to get rid of the artifacts. Playing on 1080p with frames limited at 238.


Use 15 MHz increments, dude - 150, 165, 180.
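For anyone wondering why the 15 MHz steps matter: Ampere applies core-clock offsets in discrete boost bins roughly 15 MHz apart, so an in-between offset effectively rounds to the nearest bin anyway. A minimal sketch of that rounding, assuming the 15 MHz step size cited in this thread:

```python
# Snap a requested core-clock offset (MHz) to the GPU's discrete step size.
# Ampere boost bins are roughly 15 MHz apart (assumption based on the step
# size cited above), so offsets like +170 land on the nearest bin regardless.
def snap_offset(requested_mhz: int, step: int = 15) -> int:
    """Round a clock offset to the nearest multiple of `step` MHz."""
    return round(requested_mhz / step) * step

for req in (170, 160, 147):
    print(f"+{req} -> +{snap_offset(req)}")
```

So a "+170" and a "+165" offset are likely the same effective clock, which is why testing in clean 15 MHz steps saves time.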


----------



## cstkl1

So far the NFS Heat gaming session yesterday was uber. Insane usage of CPU/GPU, almost like running RealBench.


----------



## DrunknFoo

cstkl1 said:


> just remember that alot of the backplates are aluminium. so
> nickel on alu not a good idea.


Irrelevant


----------



## rawsome

smonkie said:


> To all the owners here: are you afraid Big Navi could devalue the 3090's price as soon as AMD presents them?


Big Navi is expected to be slower than a 3080 and cheaper when it comes to FPS/$. The 3090 is in the "I want the fastest and will pay double" territory that AMD is not gonna touch.



Vgnusstffr said:


> I am getting strange vertical white lines that flash on the screen when doing 4K 120 on Shadow of the Tomb Raider, and Resident Evil 7. Wondering if it's overheating VRAM, or my 15 ft. Monster 8K 48gb cable.


I have The same for like 1 Frame sometimes on a 3090 and the 48CX. Thought it was my overclock. Will test it at Stock.


----------



## Spiriva

Did anyone order and try one of these out yet?


----------



## Sarieri

Spiriva said:


> Did any one order and try one of these out yet?


I don't see a big difference between these backplate modules and those RAM blocks, except these backplate modules are unreasonably expensive.


----------



## ShadowYuna

Has anyone water-blocked their RTX 3090 Strix yet?


----------



## Nizzen

ShadowYuna said:


> Has anyone water block their RTX 3090 strix yet??


I have the Bykski Strix block, but no card. Waiting for arrival.


----------



## ShadowYuna

Nizzen said:


> I have Bykski Strix block, but no card  Waiting for arrival


Does Bykski support both the 3080 and 3090?

The Bitspower block only supports the 3080 due to the backplate. Also, do you mind sharing where you got the block from?

Thanks in advance


----------



## devilhead

Nizzen said:


> I have Bykski Strix block, but no card  Waiting for arrival


Did you order your 3090 at Proshop? There are some updates: NVIDIA GeForce RTX | 3070 | 3080 | 3090 » Full overview here!
Damn, I ordered mine at 15:03 and I'm ~60th in the queue.


----------



## cstkl1

ShadowYuna said:


> does byski support both 3080 and 3090?
> 
> Bitspower block only support 3080 due to backplate. Also do you mind where did you get the block from?
> 
> Thanks advance


The TUF 3080 Bitspower block... they pulled it back, saying it doesn't support the 3090 TUF.


----------



## Nizzen

devilhead said:


> did you ordered at proshop yours 3090? there is some updates NVIDIA GeForce RTX | 3070 | 3080 | 3090 » Full overview here!
> damn, i have ordered mine 15:03 and i'm ~60 in queue


I ordered at proshop.no, komplett.no and elkjøp.no, all within 15:00-15:04. Didn't call proshop.no about the queue. Thanks for the update!


----------



## asdkj1740

The Gigabyte 3090 Turbo has better PCB components than the Eagle and Gaming - no solid caps on the front.
Now we understand why Gigabyte uses that strange adapter on the Gaming and Eagle models: the indented design on the Turbo series increases airflow to the blower fan, which is why Gigabyte had to use that super-low-profile adapter.





CoolPC (原價屋) - [Unboxing] Gigabyte RTX 3090 TURBO 24GB graphics card (all-copper cooling module, blower fan carrying the load) - www.coolpc.com.tw

TechPowerUp Eagle OC PCB - Gigabyte GeForce RTX 3090 Eagle OC Review: Gigabyte debuts its Eagle brand of graphics cards to the enthusiast segment. Slotted between the WindForce OC and AORUS Gaming series, the RTX 3090 Eagle OC covers all the bases and bling gamers need, at the NVIDIA MSRP price. It also comes with a power connector innovation. - www.techpowerup.com


----------



## Nizzen

ShadowYuna said:


> does byski support both 3080 and 3090?
> 
> Bitspower block only support 3080 due to backplate. Also do you mind where did you get the block from?
> 
> Thanks advance


I think it supports both. Ordered from a friend in Thailand, the "clock em up" guy.


----------



## Foxrun

cstkl1 said:


> its not the vrm bro.. it sounds like VRAM over heating..
> if vrm over heat u will be throttling like nuts first...


I doubt it's overheating. My ambient temp is 11°C.


Bilco said:


> Yes I just had it happen on my 3090 tuf oc with +170 core and +102 ram. Backed the core down to 160 and had a 3 hour DRG session with no artifacting. Required a reboot to get rid of the artifacts. Playing on 1080p with frames limited at 238.


Interesting. I'll try dropping my core by 15 at a time and let you guys know how it goes. I still don't know why it would impact some games and not others. I get no artifacts in BFV.


----------



## Foxrun

rawsome said:


> I have The same for like 1 Frame sometimes on a 3090 and the 48CX. Thought it was my overclock. Will test it at Stock.


That's exactly what I am having with the same setup.


----------



## Carillo

Carillo said:


> RIP dear Palit 3090, it's been a fun 14 day run. Sleep tight 💔
> View attachment 2462301


My card is still alive! Only the HDMI port died. Since I use an LG CX 48 with HDMI only, trying DP never crossed my mind. Found a DP to HDMI adapter, and it all works as it should. I really don't want to RMA the card, not yet, since then I will be left without a card for probably months. Any solution going from DP 1.4 to HDMI 2.1 without losing G-Sync?


----------



## cstkl1

LVNeptune said:


> Ohhhhh =/


Bro, Nvidia farkers.

GFE can only claim once per account,

so I had to change cards three times, with 3 different emails, to claim.


----------



## Zurv

Nizzen said:


> I have Bykski Strix block, but no card  Waiting for arrival


bykski had a problem with their first block and it does not work for the 3090. I don't think their new 3090 block is out till the end of the month. You might want to check if your block is the new model.

Note that bitspower does have a FE block too.


----------



## Vgnusstffr

Does anyone know any tricks to get the games that used to work in SLI working on an RTX 3090 SLI setup? I thought there was something wrong with my setup until another user confirmed that NVIDIA makes the cards ignore the SLI profiles. Any help would be GREATLY appreciated.


----------



## bmgjet

Don't know if it would help, but you can try a really old trick:
Nvidia Profile Inspector, and force SLI on with AFR.


----------



## Zurv

Vgnusstffr said:


> Does anyone know any tricks to get the games that used to work in SLI working on an RTX 3090 Sli setup? Thought there was something wrong with my setup until another user confirmed that NVIDIA causes the cards to ignore the sli profiles. Any help would be GREATLY appreciated.


Multi-GPU still works in games with native mGPU support (like... and I think only... Ashes),
and in games that support SLI in DX12, like Rise of the Tomb Raider. It will not work in DX11 games.
Get a good score in 3DMark Time Spy and Port Royal... nod to yourself... epeen bonus points awarded.

That said, I wonder if anything but an X299 is a good fit right now. Are there Z490s with 4-slot spacing between PCIe slots that can do x16? Maybe a workstation one? _shrug_
There was talk of AIBs making SLI bridges with spacings other than 4-slot, but no one has said a peep since Nvidia's notice that they have mostly killed SLI.

this is the official line from Nvidia


> With the emergence of low level graphics APIs such as DirectX 12 and Vulkan, game developers are able to implement SLI support natively within the game itself instead of relying upon a SLI driver profile. The expertise of the game developer within their own code allows them to achieve the best possible performance from multiple GPUs. As a result, NVIDIA will no longer be adding new SLI driver profiles on RTX 20 Series and earlier GPUs starting on January 1st, 2021. Instead, we will focus efforts on supporting developers to implement SLI natively inside the games. We believe this will provide the best performance for SLI users.
> 
> 
> Existing SLI driver profiles will continue to be tested and maintained for SLI-ready RTX 20 Series and earlier GPUs.
> 
> For GeForce RTX 3090 and future SLI-capable GPUs, SLI will only be supported when implemented natively within the game.
> 
> What DirectX 12 games support SLI natively within the game?
> 
> DirectX 12 titles include Shadow of the Tomb Raider, Civilization VI, Sniper Elite 4, Gears of War 4, Ashes of the Singularity: Escalation, Strange Brigade, Rise of the Tomb Raider, Zombie Army 4: Dead War, Hitman, Deus Ex: Mankind Divided, Battlefield 1, and Halo Wars 2.
> 
> What Vulkan games support SLI natively within the game?
> 
> Vulkan titles include Red Dead Redemption 2, Quake 2 RTX, Ashes of the Singularity: Escalation, Strange Brigade, and Zombie Army 4: Dead War
> 
> How about creative and other non-gaming applications -- will those still support multiple GPUs?
> 
> Yes, many creative and other non-gaming applications support multi-GPU performance scaling without the use of SLI driver profiles. These apps will continue to work across all currently supported GPUs as it does today.


----------



## Vgnusstffr

Zurv said:


> MultiGpu still works in mGPU setups (like .. and i think only.. ashes)
> also games that support SLI in dx12. Like Rise of the Tomb Raider. It will not work in dx11 games.
> Get a good score in 3dmark timespy and port royal... nod to yourself.. epeen bonus points awarded.
> 
> That said, i wonder if anything but a x299 is a good fit right now. Are there z490s with 4 slot spacing between PCI-E slots that can do 16x. Maybe a workstation one? _shrug_
> There was talk of AIBs making SLI bridges with options other than 4. But no one said a peep post nvidia's notice they are mostly killed SLI.
> 
> this is the official line from Nvidia


I am running them in the Z390 Godlike, which has 4-slot spacing. I'm getting full performance from the DX12 games in MGPU, so I assume it can handle the load. It just sucks that DX11 SLI is actually purposefully disabled. Would a driver rollback do the trick? Anything anyone can think of to get SLI working again in DX11, and maybe even CFR?


----------



## long2905

hapklaar said:


> Thanks a bunch, much appreciated. Just flashed this to my X4 and the 2 display ports I was using are still working, albeit reversed.
> 
> Also uploaded the bios to TPU: Gigabyte RTX 3090 VBIOS


Hey man, I'm going to get this card very soon. What was your flashing experience like so far? Did the Gigabyte Gaming OC vBIOS make a difference? What's the highest boost clock and power draw you can get?


----------



## Foxrun

Well, now I get hard crashing at a 160 core offset. The GPU will shut off (the LED indicator lights on the GPU turn off) at high clocks. I haven't seen this behavior before with past GPUs.


----------



## mirkendargen

Zurv said:


> bykski had a problem with their first block and it does not work for the 3090. I don't think their new 3090 block is out till the end of the month. You might want to check if your block is the new model.
> 
> Note that bitspower does have a FE block too.


Do you have a link with more info on this? What part of the backplate didn't fit? Was it just a thermal pad problem? What are the good vs. bad part numbers?


----------



## Zurv

The old unit was a combo 3080/3090 block. Now the old block is just called 3080 and there is a new block for the 3090. (the old page was removed too.)
They don't even have a link to the new block yet. (at least on the US site)

There was a post by them... some place... complaining that the pics didn't match the final release. The Chinese seller of my block offered to cancel my order, or gave me the option to wait till the end of the month for the new block.


----------



## Stampede

Carillo said:


> My card is still alive! Only the HDMI port died. Since i use a LG CX48 with only HDMI, trying DP never crossed my mind.. Found a DP to HDMI adapter, and it all works as it should.. I really dont want to RMA the card, not yet, since then i will be left without a card for probably months. Any solution going from DP 1.4 to HDMI 2.1 without loosing G-sync ?


Great to hear my friend!

I saw this on Avsforum.com, but most people are saying it doesn't really work fully. You can try, though.





Club 3D | DisplayPort 1.4 to HDMI 4K120Hz HDR Active Adapter M/F - The Club 3D CAC-1085 is the perfect solution to connect to any HDMI 4K120Hz ready displays. If you have a DisplayPort 1.4 ready PC or any other device that lacks the new HDMI 4K120Hz specification, the Club3D CAC-1085 will be the simple way to upgrade your device and connect to your new TV. - www.club-3d.com





I have the club3d hdmi 2.1 cable, works very well.
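As a sanity check on why an active adapter with DSC is needed here at all: 4K 120 Hz at 10-bit exceeds DP 1.4's uncompressed payload bandwidth, but fits within HDMI 2.1's 48 Gbps FRL link. A rough back-of-the-envelope calculation (blanking overhead ignored, so real requirements are slightly higher):

```python
# Rough check of why 4K120 10-bit needs an active DP 1.4 -> HDMI 2.1 adapter
# with DSC: the uncompressed stream exceeds DP 1.4's payload bandwidth.
# Blanking overhead is ignored, so actual requirements are a bit higher.
def stream_gbps(w, h, hz, bpc, channels=3):
    """Uncompressed video data rate in Gbps (RGB, no blanking)."""
    return w * h * hz * bpc * channels / 1e9

DP14_PAYLOAD_GBPS = 32.4 * 0.8        # HBR3 x4 lanes, 8b/10b encoding -> 25.92
HDMI21_PAYLOAD_GBPS = 48.0 * 16 / 18  # FRL, 16b/18b encoding -> ~42.67

need = stream_gbps(3840, 2160, 120, 10)
print(f"4K120 10-bit needs ~{need:.1f} Gbps")
print("Fits DP 1.4 uncompressed?", need <= DP14_PAYLOAD_GBPS)  # no -> DSC required
print("Fits HDMI 2.1 FRL?", need <= HDMI21_PAYLOAD_GBPS)
```

That's also why a plain passive cable or dongle can't do 4K120 HDR from a DP 1.4 port; the CAC-1085 converts DSC-compressed DP into HDMI 2.1 FRL.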


----------



## mirkendargen

Zurv said:


> The old unit was a combo 3080/3090 block. Now the old block is just called 3080 and there is a new block for the 3090. (the old page was removed too.)
> They don't even have a link to the new block yet. (at least on the US site)
> 
> there was a post by them... some place.. bitching about pix didn't match with the final release. The Chinese seller of my block offered to cancel my order or gave me the option to wait till the end of the month for the new block.


Super. Mine gets delivered today I think, but no card to try it on. It was initially packaged, then sat for 2 weeks, then switched to a new tracking number/order and left china, so maybe that was them waiting for the new one? Who knows.


----------



## Pepillo

I've been using the Strix's 480 W BIOS for a while on my MSI Gaming X Trio, running really well. But since I lose a DP port, I'm thinking of switching to the 450 W BIOS of the EVGA FTW3, which has three DPs and one HDMI like the MSI. Has anyone tested the EVGA BIOS on a different card? Does it work well?
Thanks


----------



## Xel_Naga

This is for the 3080... but it's interesting how he got around the EVGA update checker. Might prove useful on other 3090 things if EVGA releases a higher-wattage FTW3/KP update.

Also wondering if this EVGA update tool isn't even flashing a new BIOS and is possibly only updating the boost table. Might be worth looking into, because if it's able to edit the card's boost values... it could be a golden ticket if we can modify the update program.


----------



## Chamidorix

Xel_Naga said:


> this is for the 3080... but interesting how he got around the evga update checker. might prove useful on other 3090 things. if EVGA releases a higher wattage FTW3/KP update
> 
> 
> 
> 
> 
> 
> Also wondering if this EVGA update tool isnt even flashing a new bios and possibly only updating the boost table. Might be worth looking into. because if its able to edit cards boost values..... could be a golden ticket if we can modify the update program.


We know it is flashing a new bios because if you watched the video he says he can give you the new flashed bios directly if you contact him via discord. So you don't have to do the intermediary steps.


----------



## Vgnusstffr

Anyone at all have any tricks to get SLI working for DX11 again with an RTX 3090 SLI setup?


----------



## DrunknFoo

Chamidorix said:


> We know it is flashing a new bios because if you watched the video he says he can give you the new flashed bios directly if you contact him via discord. So you don't have to do the intermediary steps.



I can't take this guy's content seriously... it seems like he's piggybacking information from established YouTubers or what's already been posted here and in other sources.
Where is he placed on the HOF?


----------



## LordGurciullo

Super glad your card didn't die for real, dude. I was scared on your behalf.

Hey guys, can y'all answer a few questions for me please?

1. Do I need this - https://www.amazon.com/gp/product/B079HSVSLR/ref=ppx_yo_dt_b_asin_title_o09_s03?ie=UTF8&psc=1 - I got it... but I'm wondering if I really should use it?

2. Can anyone do a Metro Exodus benchmark and tell me how that goes? I found that I have to lower clock speed by 45 to keep a Metro Exodus bench from crashing 2 runs in a row (EXTREME with RTX on Ultra).

3. Anyone else getting huge spikes in games? 4 different games are giving me huge fps drops for a quick second... a stutter, then back to normal - is this from too aggressive overclocking?


----------



## Sky3900

LordGurciullo said:


> Super glad your card didn't die for real dude. I was scared on your behalf.
> 
> Hey guys. Can yall answer a few questions for me please.
> 
> 1. Do I need this - https://www.amazon.com/gp/product/B079HSVSLR/ref=ppx_yo_dt_b_asin_title_o09_s03?ie=UTF8&psc=1 I got it... but I'm wondering If I really should use it?
> 
> 2. Can anyone do a metro exodus benchmark and tell me how that goes. I found that I have to lower clock speed by 45 to keep a metro exodus bench from crashing 2 runs in a row (EXTREME with RTX on Ultra)
> 
> 3. Anyone else getting huge spikes in games. 4 different games giving me huge fps drops for a quick second... a stutter than back to normal - is this from too aggressive overclocking??
> 
> Please chime in on these 3 things please guys.


I can confirm #2. +147 is stable in Port Royal, but I have to reduce to +120 to get Metro stable.


----------



## mirkendargen

mirkendargen said:


> Super. Mine gets delivered today I think, but no card to try it on. It was initially packaged, then sat for 2 weeks, then switched to a new tracking number/order and left china, so maybe that was them waiting for the new one? Who knows.


I did indeed get my Bykski Strix block today; here's a picture of the backplate, if anyone has some decent pictures/footage of a Strix 3090 teardown to verify whether it'd fit...


----------



## Zurv

mirkendargen said:


> I did indeed get my Byski Strix block today, here's a picture of the backplate if anyone has some decent pictures/footage of a Strix 3090 teardown to verify if it'd fit...



Oh, I thought you were talking about the FE 3090 block. If it is the Strix 3090 block, then you should be fine.


----------



## mirkendargen

Zurv said:


> oh, i thought you were talking about the FE 3090 block. It is the strix 3090 block then you should be fine


Oh whew....

It is a damn good looking block...if only I had a card to put it on...


----------



## tubnotub1

Bilco said:


> 100% no power limit increase unfortunately... we are still cucked on power with the TUF 3090 =[
> 
> For the other 3090 TUF owners, which seems to be the best bios? I currently use two display ports and an HDMI.


I played around w/ my TUF for quite a while and ended up using the Gigabyte 3090 OC BIOS w/ the 390 watt PL in most of my benchmark runs. I have since sold it; I'll have a 3090 FTW3 Ultra arriving soon, but the TUF was an amazing card, with some of the best cooling on a GPU I have seen. I hope they release an official BIOS update w/ a higher PL for the TUF cards, cause that cooler can 100% handle it.


----------



## Bilco

tubnotub1 said:


> I played around w/ my TUF for quite a while, ended up using the Gigabyte 3090 OC Bios w/ the 390 watt PL in most of my benchmarks run. I have since sold it, I'll have a 3090 Ultra FTW arriving soon, but the TUF was an amazing card, some of the best cooling on a GPU I have seen. I hope they release an official BIOS update w/ a higher PL for the TUF cards cause that cooler can 100% handle it.


Damn, nice! I am hoping for the same. 

How did you score the 3090 FTW3? My bitspower waterblock came for the 3090 TUF but it just seems like a waste to integrate it into the loop if the power limit is so low. I think I am going to go for a 3090 FTW3 as well and watercool that instead of the TUF... seems like it's just the better watercooling choice with the higher power limit bios they released.


----------



## LordGurciullo

Sky3900 said:


> I can confirm on #2. +147 is stable on port royal, but, have to reduce to +120 to get Metro stable.


Thanks. Yes, it appears Metro Exodus Extreme with RTX Ultra is the hardest to pass.
Anyone getting stuttering in Unreal Engine 4 games or Serious Sam?


----------



## mirkendargen

LordGurciullo said:


> Thanks. Yes it appears Metro Exodus Extreme with RTX ultra is the hardest to pass.
> Anyone getting stuttering in games like unreal engine 4 games or Serious Sam?


The Metro main menu when just starting the game up was always the lower bound of stability on my 2080ti, heh.


----------



## tubnotub1

@Bilco I had signed up for the EVGA notification system the day they were made available, almost immediately after they sold out (had requested the day off from work to try to get one). I didn't know EVGA was going to end up putting us into a queue, they announced it after I had managed to grab a TUF 3090 restock on Newegg. I got the purchase available email maybe 5 days ago, immediately put in my order, and flipped the TUF 3090 for retail to a member of a pc group I am part of before the wife got home. God only knows if I had told her I spent another 2000 after spending 1650 a couple of weeks prior I wouldn't have made it to the date of delivery of the FTW3 Ultra...


----------



## bmgjet

LordGurciullo said:


> Thanks. Yes it appears Metro Exodus Extreme with RTX ultra is the hardest to pass.
> Anyone getting stuttering in games like unreal engine 4 games or Serious Sam?


PUBG, yes (UE4).
The game runs at basically 200 FPS for 5 seconds, then dips to 20 FPS for 500 ms, back to 200 FPS, and repeat.
GPU load shows 96%, then dips to 10% during the stutter.

Setting a frame rate limiter of 144 FPS fixed it, though - well, mostly.
144 FPS then dips to 120 FPS; GPU load of 80%, and the dip takes it to 72%.

Tried everything else I could think of and even reinstalled Windows.
LatencyMon shows that it's the NVidia driver going from 300 µs to 2500 µs during the stutter.
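If you want to put numbers on dips like this instead of eyeballing the overlay, a frametime log (e.g. exported from PresentMon or CapFrameX) makes the pattern obvious. A minimal sketch; the 3x-median spike threshold is an arbitrary choice, not a standard:

```python
# Flag stutter spikes in a frametime trace (milliseconds). A frame counts
# as a spike when it takes more than `factor` times the median frame time.
# The factor of 3 is an arbitrary threshold for illustration.
from statistics import median

def find_spikes(frametimes_ms, factor=3.0):
    base = median(frametimes_ms)
    return [(i, t) for i, t in enumerate(frametimes_ms) if t > factor * base]

# ~200 fps (5 ms frames) with one 50 ms hitch, like the PUBG pattern above
trace = [5.0] * 10 + [50.0] + [5.0] * 10
print(find_spikes(trace))
```

Comparing the spike timestamps against LatencyMon's DPC log is a quick way to confirm whether the driver latency excursions line up with the visible hitches.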


----------



## Sky3900

LordGurciullo said:


> Thanks. Yes it appears Metro Exodus Extreme with RTX ultra is the hardest to pass.
> Anyone getting stuttering in games like unreal engine 4 games or Serious Sam?


Everything has been smooth as butter except BF V. It stutters a lot right after the level loads/initial spawn. Goes away after a minute or two. I think it's still loading level data for some reason.


----------



## cavemankr

Can someone with an RTX 3090 Founders Edition check where the HDMI or DisplayPort cable connects to the GPU and see how hot it is?
Mine gets very, very hot to the touch.
I am not sure if this is normal.

I upgraded from a 2080 Ti Strix and never had this problem.

Is this normal for a blower-style GPU like the Founders Edition?
My RTX 3090 FE averages 65 to 67 degrees under full load at 1700 to 1800 RPM.

I also undervolted my GPU to 850 mV at a constant 1875 MHz core clock, because I couldn't stand the fan noise going over 2000 RPM.

thanks


----------



## ComansoRowlett

Sorry if it's been mentioned already, but I'm looking for another card's vBIOS with a higher power target for the 3090 FE. If anyone knows which BIOS is compatible/partially compatible (e.g. losing an HDMI port or whatever), can you send it to me? Thanks.


----------



## tiefox

mirkendargen said:


> Oh whew....
> 
> It is a damn good looking block...if only I had a card to put it on...
> 
> View attachment 2462529


Is it just me, or do the microfins not look that micro, or that many?


----------



## Warocia

Sky3900 said:


> Everything has been smooth as butter except BF V. It stutters a lot right after the level loads/initial spawn. Goes
> away after a minute or two. Think it's still loading level data for some reason.


BF V is processing its shader cache when you change GPU. It should go away after a while.


----------



## Chamidorix

DrunknFoo said:


> I can't take this guys content seriously... seems like piggybacking information from established youtubers or what's already been posted here and other sources
> where is he placed on the hof?


What the **** are you on, mate? Gatekeeping by HOF scores, really? Try some first principles instead of memetic reasoning.

To feed the troll: his 3DMark scores are all readily accessible in his videos, and his OC scores are currently limited to the 3080, hence why you don't see a giant blathering "framechasers" entry next to Jay, Gamers Nexus, or whatever other tech god you worship.

The guy makes tons of overreaching simplifications and gives out some questionable recommendations sometimes, but to deny that he is putting out some good self-discovered content is disingenuous at best and pure stupidity at worst. Who else actually tests liquid metal on an XOC 500 W 2080 Ti in YouTube video form, tests 4 vs. all shunts on a 3090 in YouTube video form, or discusses and tests vBIOS optimization between the FTW3 and Strix in YouTube video form? Sure, you get more hardcore analysis in forums and text posts, but the guy is pretty hardcore for consumerist YouTube.


----------



## Joies

Hi folks, I'm having trouble shunt-modding my MSI Ventus.
I created an extra thread about it; if you can help, I would really appreciate it.

MSI 3090 Ventus OC can't shunt mod the card - need help (Problem fixed.) - www.overclock.net


----------



## ShadowYuna

Just got an email from EK that the 3080/3090 Strix block will ship at the end of November. It looks like it has been delayed for some reason.

Bitspower is making a new backplate for the 3090 Strix, so they will launch the block + backplate for the 3090 Strix in about two weeks.

That's the info I currently have.

Any other info on the 3090 Strix block?


----------



## Foxrun

LordGurciullo said:


> Thanks. Yes it appears Metro Exodus Extreme with RTX ultra is the hardest to pass.
> Anyone getting stuttering in games like unreal engine 4 games or Serious Sam?


I get a lot of stuttering in SS4 and BFV. I also get the split second drops into the 20s but I think that was my cpu. My cpu oc became unstable once I added the 3090.


----------



## Somandarin

Hi! 

Will a Seasonic Prime TX-750 Titanium be enough for an RTX 3090?






Seasonic PRIME TX-750 Fully Modular PC Power Supply 80PLUS Titanium 750 Watt (www.amazon.co.uk)





Motherboard GIGABYTE Z490 Aorus, Core i7 10700, SSD 500GB


----------



## Wihglah

Somandarin said:


> Hi!
> 
> Will a Seasonic Prime TX-750 Titanium be enough for an RTX 3090?
> 
> Seasonic PRIME TX-750 Fully Modular PC Power Supply 80PLUS Titanium 750 Watt (www.amazon.co.uk)
> 
> Motherboard GIGABYTE Z490 Aorus, Core i7 10700, SSD 500GB


That's a little too close for comfort for me.


----------



## GNU/LLJY

what bios can I flash on a ventus? I currently have a Gigabyte Gaming OC BIOS at 390W


----------



## Bilco

Is anyone looking to watercool their 3090 TUF? My Bitspower block just came in, and I've decided to wait for a FTW3 Ultra and put that on water instead. Looking to get rid of this at cost + shipping.


----------



## nievz

Foxrun said:


> I get a lot of stuttering in SS4 and BFV. I also get the split second drops into the 20s but I think that was my cpu. My cpu oc became unstable once I added the 3090.


What CPU do you use and what is your GPU utilization when in Operation Underground?


----------



## GAN77

Bilco said:


> Is anyone looking to watercool their 3090 TUF? My bitspower block just came.


But Bitspower doesn't have an option for the TUF 3090; the backplate for it isn't ready yet.


----------



## cstkl1

cod coldwar beta 4k max

17gb vram


----------



## Falkentyne

cstkl1 said:


> cod coldwar beta 4k max
> 
> 17gb vram
> 
> 
> View attachment 2462634
> 
> 
> View attachment 2462635


The game doesn't actually use 17 GB of VRAM; it just tries to allocate as much VRAM as the card reports.
Try setting the video memory scale to 0.55, assuming the INI file still exists.
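If the file is still there, that tweak can be scripted. A minimal sketch; the `adv_options.ini` file name and the `VideoMemoryScale` key are assumptions based on how earlier COD titles exposed this setting, so check your own install first:

```python
import re
from pathlib import Path

def set_config_value(path, key, value):
    """Rewrite `key` in a simple COD-style config file (one `KeyName value` per line)."""
    text = Path(path).read_text()
    pattern = re.compile(rf'^(\s*{re.escape(key)}\s+)\S+', flags=re.MULTILINE)
    if pattern.search(text):
        # Key exists: keep the leading whitespace/name, swap the value.
        text = pattern.sub(rf'\g<1>{value}', text)
    else:
        # Key missing: append it at the end.
        text += f'\n{key} {value}\n'
    Path(path).write_text(text)

# Hypothetical usage, run from the game's config directory:
# set_config_value('adv_options.ini', 'VideoMemoryScale', '0.55')
```

Other keys in the file are left untouched, so the same helper works for any of the game's simple `name value` settings.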


----------



## mirkendargen

tiefox said:


> Is it just me or the microfins dont look that micro, or that many ?


Comparing it to my EK 2080Ti block, spacing looks the same (0.5mm) and I haven't counted the exact number of fins but it's definitely in the ballpark. The Alphacool 3090 block looks like a similar number of fins too, but possibly longer fins. It's hard to tell in the picture angles.


----------



## Wrathier

Anyone out there with a Gigabyte 3090 Gaming OC or similar?

I don’t understand why I can’t seem to get more than 13750ish in Port Royal.

Please help me out with recommendations for OC settings as I’m very frustrated by this slow card.

In games it gets pretty warm (~72-73°C) and boosts around 1825-1875MHz, with or without an OC.

Thanks a lot.


----------



## Pepillo

I tested the 450W EVGA BIOS on my MSI Gaming X Trio, and although the 480W Strix BIOS is better, the difference is minimal, and I get back the DP port that is lost with the Asus BIOS. I'd recommend the EVGA one.


----------



## Bilco

GAN77 said:


> But bitspower doesn't have an option for the TUF 3090. The backplate is not ready yet for the TUF 3090 .


OH damn it you are right... The order says for 3080 tuf.... I could have sworn the website said 3080/3090 when I ordered weeks ago! 

This whole damn launch is a nightmare...


----------



## Thebc2

Optimus has officially begun releasing blocks. They are doing the FTW3 blocks first, first batch is scheduled for mid-November and is sold out. They also mentioned working on a Strix block next.









Absolute GPU Block - FTW3 3080, 3080 Ti, 3090 (optimuspc.com)






Sent from my iPhone using Tapatalk Pro


----------



## Stephen.

Preview of AquaComputer 3080/3090 ASUS Strix water block. 

AquaComputer 3080/3090 Strix


----------



## mirkendargen

Stephen. said:


> Preview of AquaComputer 3080/3090 ASUS Strix water block.
> 
> AquaComputer 3080/3090 Strix


Finally a heatpipe backplate. I'm still surprised the stock 3090 coolers didn't have this design.


----------



## Cavokk

Wrathier said:


> Anyone out there with a Gigabyte 3090 Gaming OC or similar?
> 
> I don’t understand why I can’t seem to get more than 13750ish in Port Royal.
> 
> Please help me out with recommendations for OC settings as I’m very frustrated by this slow card.
> 
> In games it get pretty warm ~72/73 and boost around 1825-1875ish with and without OC.
> 
> Thanks a lot.


Hi Wrathier.

I understand your frustration. I recommend undervolting and manually dialing in a "fixed" clock speed just south of power-limiting the card. On my MSI Trio X I am currently locked at a stable constant 2100MHz@1.0V, a bit higher when cold and a bit lower when warmed up (still waiting for my waterblock  )

C


----------



## Stephen.

mirkendargen said:


> Finally a heatpipe backplate. I'm still surprised the stock 3090 coolers didn't have this design.


If I read the forum correctly, the stock 3090 cooler backplate will be listed in their shop on Friday, and yes, it will have the passive and active XCS cooling, as do the 3080 reference coolers.


----------



## Sheyster

Pepillo said:


> I tested the 450w EVGA bios on my MSI Gaming X Trio, and although the 480w Strix bios are better, the difference is minimal and I recover the lost DP port from asus bios. Recommended BIOS the EVGA one.


EVGA is supposedly working on a higher PL BIOS for the FTW3 card to at least match or possibly exceed the Strix's 480w power limit. You should be able to flash that one once they release it.


----------



## Pepillo

Sheyster said:


> EVGA is supposedly working on a higher PL BIOS for the FTW3 card to at least match or possibly exceed the Strix's 480w power limit. You should be able to flash that one once they release it.


Thank you, good news.


----------



## Keninishna

Hi all, I just got a Zotac 3090 and was wondering if it's worth investing in a water cooling setup? I don't feel like dropping another $500 just to water cool. I saw a Reddit post where someone managed to rig some Noctua fans on their card; not sure how safe that is, though.


----------



## shiokarai

mirkendargen said:


> Finally a heatpipe backplate. I'm still surprised the stock 3090 coolers didn't have this design.


It's their old, old, old design, barely changed since the 10xx series cards. Look closely: the heatpipe covers only the portion where the VRM _supposedly_ is; it doesn't cover the whole memory area, etc. It's the least-effort Aquacomputer block I've seen in years, a shame really.


----------



## Sync0r

Keninishna said:


> Hi all, I just got a zotac 3090 and was wondering if its worth investing in a water cooling setup? I don't feel like dropping another 500$ just to water cool. I saw a reddit post someone managed to rig some noctua fans on their card not sure how safe that is though.


I have my Zotac 3090 watercooled and it does maintain higher clocks as the temps are below 35c. Depends if you want that little bit extra performance. The benefit of watercooling hardware is that you keep it forever, I've had my setup for years, still going strong. You just need to buy and sell waterblocks for each GPU you buy.


----------



## Avacado

Have not seen anyone post this and thought it might be helpful for anyone else still looking for a 3090. Looks like newegg will have stock on 10/29/20.

Link: RTX 30 Series FAQ - Newegg Knowledge Base

*GOOD LUCK!*


----------



## GanMenglin

Vgnusstffr said:


> Does anyone know any tricks to get the games that used to work in SLI working on an RTX 3090 Sli setup? Thought there was something wrong with my setup until another user confirmed that NVIDIA causes the cards to ignore the sli profiles. Any help would be GREATLY appreciated.


I don't know whether this is the "trick" people mean:

1. Find a game in which the 3090 supports SLI, for example Apex.exe.

2. Rename the other game's executable to match, e.g. rename csgo.exe to Apex.exe, so CS:GO inherits the Apex SLI profile.
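A minimal Python sketch of that rename trick, copying rather than renaming so the launcher still finds the original file; the paths and the `Apex.exe` donor name are just examples from the post, not a tested workflow:

```python
import shutil
from pathlib import Path

def spoof_sli_profile(game_exe: str, donor_name: str = "Apex.exe") -> Path:
    """Copy a game's executable under the name of a title whose SLI profile
    the driver still honors, so the driver applies that profile instead.
    The original file is kept so updates/launchers keep working."""
    src = Path(game_exe)
    dst = src.with_name(donor_name)
    shutil.copy2(src, dst)  # copy with metadata, don't rename
    return dst

# Hypothetical usage (path is an example only):
# spoof_sli_profile(r"C:\Games\csgo\csgo.exe")
```

You'd then launch the copied executable directly; whether the game tolerates running under a different name varies per title.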


----------



## HyperMatrix

smonkie said:


> To all the owners here: are you afraid Big Navi could devaluate 3090’s price as soon as AMD presents them?


I'm more worried about the new TSMC refresh of Ampere. If the rumors are true about the AMD cards, then TSMC 7nm could clock at 2.4-2.5GHz as opposed to 2.0-2.1GHz on Samsung. So you'd be looking at 10-20% more performance than the current models. That's what I'm worried will destroy the value of the cards. Imagine a 3080Ti with 20GB and 2.5GHz. That would outperform a 3090. And Nvidia is already setting up a deal to make some of their cards on TSMC 7nm so that's happening almost for sure. Only part that remains is performance. Whether they lower the number of cores to match performance against Samsung or not is another question. But this is all that worries me. Not AMD.




Falkentyne said:


> The game doesn't use 17 GB of Vram. It just tries to allocate as much vram as reported.
> Try setting the video memory scale to 0.55, assuming the INI file still exists.


If it works like previous COD games, it's more than just allocation. It preloads assets into VRAM in advance of actually needing it so it won't need to stream them in during gameplay. This has a positive impact on performance.



Stephen. said:


> Preview of AquaComputer 3080/3090 ASUS Strix water block.
> 
> AquaComputer 3080/3090 Strix


Thank god. Guess I'll wait it out for the STRIX then. Was considering the FTW3, especially with the lower price and 5% discount code. But I don't think I'd be able to deal with a 3090 without backplate cooling.


----------



## megahmad

Wrathier said:


> Anyone out there with a Gigabyte 3090 Gaming OC or similar?
> 
> I don’t understand why I can’t seem to get more than 13750ish in Port Royal.
> 
> Please help me out with recommendations for OC settings as I’m very frustrated by this slow card.
> 
> In games it get pretty warm ~72/73 and boost around 1825-1875ish with and without OC.
> 
> Thanks a lot.


What's your perfcap reason in GPUz? I think it's pwr limitation in your case with some voltage Vrel/op limitation in-between.
A screenshot of GPUz sensors while in a game would be good and someone here may be able to help you better.


----------



## bmgjet

Power limit is everything for port royal.


----------



## cstkl1

Stephen. said:


> Preview of AquaComputer 3080/3090 ASUS Strix water block.
> 
> AquaComputer 3080/3090 Strix


Gonna change to that. Not happy with bits on 3080


----------



## DrunknFoo

shiokarai said:


> It's their old, old, old design, barely changed since 10xx series cards. Look closely, heatpipe is covering only the portion where _supposedly_ vrm is, it's not covering the whole memory area etc. It's the least effort aquacomputer block I've seen in years actually, a shame really.


I contacted them to ask whether the backplate design was going to be like the 10xx kryographics heatpipe or whether it would actually have active water flow... they responded "active flow", but now we see this... unfortunate.


----------



## Thanh Nguyen

Any benefit between 35°C and 40°C+ for the card? A few points or a hundred points in Port Royal? My card stays at 40°C+ under load and I may add an extra MO-RA3. Anyone know how many degrees one extra MO-RA3 can shave off? I have one MO-RA3 420 and three 360 rads now, and see about a 12°C delta on my GPU, which has an Alphacool block.


----------



## Stampede

Thanh Nguyen said:


> Any benefit between 35c and 40c+ for the card? A few points or hundred points in port royal? My card stays at 40c plus at load and I may add an extra mora3 . Anyone know 1 extra mo-ra3 can cool down how many c? I have 1 mora3 420 and 3 360 rad now and have like 12c delta on my gpu which has alphacool block.


My EKWB sits at a 10-12°C delta at stock with fans and pump at max on 3x360mm rads. My guess is that your rads may not be the limit, but rather the waterblock or the thermal interface. Adding rads just lowers water temps; it doesn't change the GPU-to-water delta-T.









Radiators Part 2 – Performance (www.ekwb.com): radiator performance is commonly described in W/10°C, i.e. watts dissipated per 10°C of coolant delta-T.





Swapping from EK thermal paste to Thermal Grizzly bagged me a 3-4°C better delta-T (it may also have been the reseating).

You can try liquid metal, which, according to Framechasers, should give you a 2-4°C drop. If you are going to try this, conformal-coat (clear nail polish) the small SMD caps around the chip, apply liquid metal sparingly to both surfaces, and cover the other parts of the PCB while you work, to be safe. Liquid metal is always risky, so use it with caution (possible long-term corrosion on copper?).

If the liquid metal doesn't help, then the block is your limit in terms of delta-T.
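To put numbers on the rads-vs-block question: with a W/10°C rating you can estimate the coolant's rise over ambient for a given heat load. The 2000 W/10°C combined rating below is an assumed example, not a measured figure for any specific rad stack:

```python
def coolant_delta(heat_load_w: float, rating_w_per_10c: float) -> float:
    """Coolant temperature rise above ambient for a radiator (or stack)
    rated in W/10°C: delta_T = load / (rating / 10)."""
    return heat_load_w / (rating_w_per_10c / 10.0)

# Example: a 500 W loop into rads worth a combined 2000 W/10°C (assumed)
delta = coolant_delta(500, 2000)
print(round(delta, 2))  # 2.5
```

With the water only ~2.5°C over ambient, the remaining GPU-minus-water delta comes from the block and thermal interface, which is exactly why adding more rads stops helping past a point.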


----------



## HyperMatrix

Stampede said:


> My EKWB sits at 10-12c delta at stock with fans and pump at max 3 x 360mm rad. My guess is that your rads may not be the limit, but the Waterblock or the thermal interface. Increasing rads, just lowers water temps, but not the gpu delta T to water temp.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Radiators Part 2 – Performance - ekwb.com
> 
> 
> The fundamental rule of radiator performance testing is to see how well the radiator cools the coolant. For us, computer geeks, the most widespread way of describing radiator performance is by using W/10°C, or in other words, Watts per 10 Delta T (sometimes K is used instead of ΔT). To make it...
> 
> 
> 
> 
> www.ekwb.com
> 
> 
> 
> 
> 
> Swapping from Ek therm to Thermal grizzly bagged 3-4 deg delta T(may also have been reseating) for me.
> 
> 
> You can try liquid metal which, according to framechasers, should give you a 2-4deg drop. if you are gonna try this, conformal coat(clear nail polish) the small smd caps around your chip. apply Liquid metal to both surfaces(use sparingly). And try to cover the other parts of your pcb when doing so to be safe. Liquid metal is always risky, so use with caution.(Corrosion over long term on copper?)
> 
> If the liquid metal doesnt work, then the block is your limit in terms of delta T.


Regarding liquid metal application, I should add a couple notes. First off, make sure you coat both surfaces. Secondly, with liquid metal, you'd be surprised how much you can spread a little bit. Whether you're using plastic bristle brushes or q-tips, keep spreading it out from where you dropped the liquid metal. It can be spread out an incredible amount without adding more. You want full 100% coverage with as little used as possible. Especially if you're mounting it vertically. 

Conformal coating does help, but don't let it be an excuse for improper application technique. Anyone who's ever had a problem with liquid metal either hasn't applied it right, or used it with an aluminum block. Copper is fine. LM will fuse with copper, but that won't take away from its thermal capabilities now, nor in future applications. It will just visually tarnish the appearance of the copper which really doesn't matter at all. If you're overly concerned with the appearance, nickel plated copper will reduce - but not eliminate - the stains.


----------



## Tias

Wrathier said:


> Anyone out there with a Gigabyte 3090 Gaming OC or similar?
> 
> I don’t understand why I can’t seem to get more than 13750ish in Port Royal.
> 
> Please help me out with recommendations for OC settings as I’m very frustrated by this slow card.
> 
> In games it get pretty warm ~72/73 and boost around 1825-1875ish with and without OC.
> 
> Thanks a lot.


Disconnect your LAN cable / disconnect from Wi-Fi, and close every program running in the background (iCUE, SteelSeries, antivirus, firewall, etc.). You live in Sweden, and since we don't believe in AC in Scandinavia, we have to rely on the weather.

So open the side panel of your PC and place it as close to the window as possible (or near a balcony if you've got one, even better), open the window, and wait for your room to cool down. If you've got a big floor fan, place it so it blows at/into your PC.

Run Steam in offline mode so you can still start 3DMark.

In the Nvidia control panel, make the following changes for 3dmark.exe:

1. Open the Nvidia control panel.
2. Select the "Adjust image settings with preview" menu.
3. Select "Use my preference emphasizing" and set it to Performance.
4. Apply the settings, then go to the "Manage 3D settings" menu, set "Texture filtering - Quality" from Quality to High Performance, and disable Vertical Sync.

Then wait until your room is cold, and run Port Royal.
After each run, let the cold weather cool your room/PC down again, overclock some more, and run Port Royal again. Repeat until your score doesn't go any higher.

Then wait for a colder day/night and do it again.

Remember this is for benchmarking, where you want as high a score as possible. No one will game at +5°C or maybe -5°C in their room. 
This is how you improve your score, though. You will get over 14k for sure.

Lycka till! (Good luck!)


----------



## bmgjet

There's a standalone version of 3DMark; you don't need Steam.


----------



## Thanh Nguyen

Does a shunt mod give different results with different BIOSes, or the same, guys?


----------



## cstkl1

Thanh Nguyen said:


> Shunt mod with different bios gives different result or the same guys?


Just flash the Strix 3090 unlocked BIOS. I heard there's one.


----------



## Thanh Nguyen

Heard flashing a 3-pin BIOS onto a 2-pin card does not work.


----------



## khunpunTH

Thanh Nguyen said:


> Heard flash 3 pin on 2 does not work.


I've flashed the Strix BIOS onto my Galax SG; it doesn't work. Maybe the Strix shares the power load over 3 connectors, but my card has only 2, so the score drops around 5-10% below normal.
The best result is from the Gigabyte Gaming OC or Aorus Master BIOS, but you will lose 2 display ports.
I hope Galax will fix their BIOS.


----------



## Wihglah

Looks like my FTW3 will be with me by the end of next week. 
Just in time for the XOC BIOS. Nice.


----------



## Gryzor

khunpunTH said:


> I've flash strix bios onto my galax sg. not work. may b they share power load over 3 pin but my card have only 2 pin. score drop from normal around 5-10%.
> best result is from gig gaming oc or aorus master bios but you will lose 2 display ports.
> i hope galax will fix their bios.


Hi, I have a Galax 3090 SG too... have you tested flashing the GB Gaming OC BIOS?


----------



## Tias

khunpunTH said:


> I've flash strix bios onto my galax sg. not work. may b they share power load over 3 pin but my card have only 2 pin. score drop from normal around 5-10%.
> best result is from gig gaming oc or aorus master bios but you will lose 2 display ports.
> i hope galax will fix their bios.


Did you also try the colorful 400w bios? 









Colorful RTX 3090 VBIOS (www.techpowerup.com): 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory


----------



## Thanh Nguyen

Tias said:


> Did you also try the colorful 400w bios?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Colorful RTX 3090 VBIOS (www.techpowerup.com): 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory


I think Colorful's card has 3x8-pin.
After shunt-modding the PCIe shunt, I got 15k. Not a bad result for a bad chip.


https://www.3dmark.com/pr/420348
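For anyone weighing the shunt mod itself: stacking a second shunt resistor on top of the stock one puts the two in parallel, so the controller sees a smaller voltage drop and under-reports current and power. A quick sketch of the arithmetic; the 5 mΩ stock value is the commonly cited figure for these cards, so treat it as an assumption:

```python
def stacked_shunt(stock_mohm: float, added_mohm: float):
    """Effective resistance of two shunts in parallel, plus the factor by
    which the card now under-reports current/power on that rail."""
    eff = (stock_mohm * added_mohm) / (stock_mohm + added_mohm)
    report_factor = eff / stock_mohm  # measured voltage drop shrinks by this
    return eff, report_factor

eff, factor = stacked_shunt(5.0, 5.0)  # 5 mΩ stacked on 5 mΩ (assumed stock)
print(eff, factor)
```

With equal shunts, the effective resistance halves and the card reports half the true power, i.e. a 350W limit really allows ~700W through that rail, which is why cooling and connector quality matter so much after the mod.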


----------



## khunpunTH

Gryzor said:


> Hi, I have a Galax 3090 SG too... do you tested flashing GB GAMING OC bios?


Yes, I did.
I tried the Gaming OC and the Aorus Master: same performance, better than the Galax BIOS.
Tried the FE BIOS, but it won't pass the ID check.
Tried the Strix: poorer performance than the original.
You can try, but remember display ports 2 & 3 will be inoperative. (I'm not sure about DP1 because I don't use it; it's obstructed by my case.)
So I switched to HDMI. Don't be frightened: your screen will be blank until you find the right display port or plug in your HDMI cable.



Tias said:


> Did you also try the colorful 400w bios?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Colorful RTX 3090 VBIOS (www.techpowerup.com): 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory


I will wait for verification first; I won't take the risk. Too expensive here.


----------



## Sync0r

Thanh Nguyen said:


> Any benefit between 35c and 40c+ for the card? A few points or hundred points in port royal? My card stays at 40c plus at load and I may add an extra mora3 . Anyone know 1 extra mo-ra3 can cool down how many c? I have 1 mora3 420 and 3 360 rad now and have like 12c delta on my gpu which has alphacool block.


Hey, I'm not running a mora, but alphacool rads, 3x480mmx60mm, 1x420mmx60mm with 2xD5 pumps. I can keep my loop at ambient, so 20c and my 3090 runs around 32c therefore 12c delta. I think you see the first 15Mhz speed reduction at 38c, not sure I should go test it really.

I probably missed your post but did you post which resistors you changed for your shunt mod? Thanks
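On the "first 15MHz reduction at 38°C" point: GPU Boost drops clocks in ~15MHz bins as temperature thresholds are crossed. A toy model of that behavior; only the 38°C threshold comes from the post above, the other thresholds are illustrative guesses rather than Nvidia's actual table:

```python
def boost_bin_offset(temp_c: float,
                     thresholds=(38, 50, 60, 70),
                     step_mhz: int = 15) -> int:
    """Approximate GPU Boost thermal behavior: clocks drop one ~15 MHz
    bin each time the core crosses a temperature threshold."""
    return -step_mhz * sum(temp_c >= t for t in thresholds)

for t in (32, 38, 55, 72):
    print(t, boost_bin_offset(t))
```

This is why water cooling that holds the core in the low 30s keeps the card one or two bins higher than an air cooler sitting at 70°C+, even before any manual overclock.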


----------



## zlatanselvic

Any news on FE power bios?


----------



## Sheyster

cstkl1 said:


> Just flash strix 3090 unlock bios. Heard theres one.


Unlocked as in a 2K power limit?? Where is this mythical beast?


----------



## Baasha

zlatanselvic said:


> Any news on FE power bios?


Have you tried flashing the FE to the Strix BIOS? Is that safe?


----------



## Thebc2

New 3090 FTW3 XOC beta bios out. No word on power limits yet. Will test shortly.



EVGA GeForce RTX 3090 FTW3 XOC BIOS



Edit: looks like new bios is 500W 


Sent from my iPhone using Tapatalk Pro


----------



## GTANY

Thebc2 said:


> New 3090 FTW3 XOC beta bios out. No word on power limits yet. Will test shortly.
> 
> 
> 
> EVGA GeForce RTX 3090 FTW3 XOC BIOS
> 
> 
> 
> Edit: looks like new bios is 500W
> 
> 
> Sent from my iPhone using Tapatalk Pro


Thank you. Great news; I thought EVGA would release a 480W-only BIOS to just match the Strix power limit. Better than I anticipated.


----------



## originxt

The BIOS is only for the FTW3 Ultra; it doesn't seem to work on the normal FTW3. I flashed the FTW3 Ultra BIOS, applied the "XOC" update, ran DDU, reinstalled the driver, and it works.


----------



## TamronLoh

How's this for an air-cooled 3090 Aorus Master?

Btw, anyone know why it's recognised as Generic VGA? I didn't do anything to the BIOS; it's the OCed Aorus Master 3090 BIOS.


----------



## Thanh Nguyen

originxt said:


> Bios is only for ftw3 ultra, doesn't seem to work on the normal ftw3. I flashed the ftw3 ultra bios and applied the "xoc" bios, ddu, driver, and it works.


What is the score after xoc bios?


----------



## Avacado

TamronLoh said:


> Hows this for an aircooled 3090 aorus master?


Looks like a generic, unrecognized VGA card to me.


----------



## Wihglah

NM


----------



## TamronLoh

Avacado said:


> Looks like a generic, unrecognized VGA card to me.


Yeah, I don't know why 3DMark doesn't recognise the GPU.

Guess the 3090 Master is still too rare at the moment _shrugs_


----------



## Mroberts95

So can you flash FTW3 Ultra bios onto a regular 3090FTW3 then apply this update?


----------



## originxt

Mroberts95 said:


> So can you flash FTW3 Ultra bios onto a regular 3090FTW3 then apply this update?


Yes, I did exactly that. 



Thanh Nguyen said:


> What is the score after xoc bios?



Currently out of the office/home so I'll do a test run when I get back.


----------



## Tias

Thebc2 said:


> New 3090 FTW3 XOC beta bios out. No word on power limits yet. Will test shortly.
> 
> 
> 
> EVGA GeForce RTX 3090 FTW3 XOC BIOS
> 
> 
> 
> Edit: looks like new bios is 500W
> 
> 
> Sent from my iPhone using Tapatalk Pro


If this is an XOC BIOS, it should be possible to flash it on a 2x8-pin card, much like the Kingpin 2080 Ti BIOS could be flashed and run on a 2080 Ti FE.

On the 2080 Ti FE I could flash all the XOC BIOSes and they worked just fine, but if I tried to flash the MSI Trio 2080 Ti BIOS it would not boost at all.


----------



## Sheyster

Thebc2 said:


> New 3090 FTW3 XOC beta bios out. No word on power limits yet. Will test shortly.
> 
> 
> 
> EVGA GeForce RTX 3090 FTW3 XOC BIOS
> 
> 
> 
> Edit: looks like new bios is 500W
> 
> 
> Sent from my iPhone using Tapatalk Pro


This BIOS should work on both the MSI Trio and ASUS Strix. Not all of the output ports may work, though.

I'm in the queue for a FTW3 Ultra and will probably no longer pursue a Strix 3090.


----------



## Sheyster

Tias said:


> If this is a XOC bios it should be possible to flash on a 2x8pin card. Much like the kingpin 2080ti was possbile to flash and run on a 2080ti fe.


There have been multiple reports that the older 3-pin BIOS does not flash to 2-pin cards. I believe people have tried it with both the Strix BIOS and the EVGA BIOS.


----------



## Outcasst

How well is a 2-pin card going to handle unlocked power limits? Is it safe to be pushing over 200 watts per 8-pin?


----------



## Pepillo

Sheyster said:


> This BIOS should work on both the MSI Trio and ASUS Strix. Not all of the output ports may work though.


Working well on the MSI Gaming X Trio, all output ports:










A little improvement is noticeable over the Strix's 480W BIOS:


----------



## GanMenglin

I flashed my MSI X Trio OC with the new EVGA 500W BIOS, and it works quite well.

I recommend X Trio OC users with water cooling flash it, but not those on the factory cooler, because of the fan count (2 vs 3): fan control doesn't work properly if you use a 2-fan card's BIOS on a 3-fan card.


----------



## holyshade

Anyone able to dump that bios without the exe so I can flash my gaming x trio please?


----------



## Nitemare3219

Outcasst said:


> How well is a 2-pin card going to handle unlocked power limits? Is it safe to be pushing over 200 watts per 8-pin?


Yes. The ATX spec is stupidly conservative; an 8-pin should actually be good for around 300 watts with good wiring.
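Rough arithmetic behind that claim: an 8-pin PCIe connector carries 12V on three pins, and high-current Mini-Fit-style terminals are typically rated around 8.5-9A per pin (exact ratings depend on terminal series and wire gauge, so treat those figures as assumptions):

```python
def per_pin_current(total_watts: float, volts: float = 12.0,
                    power_pins: int = 3) -> float:
    """Current carried by each 12V pin of an 8-pin PCIe connector."""
    return total_watts / volts / power_pins

for watts in (150, 300, 400):
    print(f"{watts} W -> {per_pin_current(watts):.1f} A per pin")
```

So the 150W spec limit works out to only ~4.2A per pin, 300W lands near the ~8.5-9A terminal rating, and 400W (~11.1A) is beyond it, which matches the "IR camera at 400W" caution a few posts down.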


----------



## GanMenglin

holyshade said:


> Anyone able to dump that bios without the exe so I can flash my gaming x trio please?


You can flash your trio with ftw3 ultra's 450w bios first, then use the update.exe


----------



## Thebc2

New bios seems borked for FTW3 Ultra testers so far. Not seeing any additional power draw above the original 107% target even though it shows 119% on the slider. Wattage (for me at least) still maxed out around 450w. We’re also still showing power limited in gpu-z perfcap. 

Curious what people who flashed non-Evga cards are seeing for actual power draw. You guys getting anywhere near 500w?




Sent from my iPhone using Tapatalk Pro


----------



## Chamidorix

Can confirm that the new EVGA vBios doesn't actually do anything on my FTW Ultra 3090. Power draw in applications does not go above 450W, even though afterburner/precision/gpu-z show max as 500.

Lots of similar reports on the EVGA forums. Haven't seen acknowledgement from EVGA yet.


----------



## Sheyster

Thebc2 said:


> New bios seems borked for FTW3 Ultra testers so far. Not seeing any additional power draw above the original 107% target even though it shows 119% on the slider. Wattage (for me at least) still maxed out around 450w. We’re also still showing power limited in gpu-z perfcap.
> 
> Curious what people who flashed non-Evga cards are seeing for actual power draw. You guys getting anywhere near 500w?


Yep, seems like quite a few folks are not seeing anything over 107% at EVGA forums with the Ultra FTW3 and the new BIOS. They did say it was "beta" after all.


----------



## Vgnusstffr

GanMenglin said:


> You can flash your trio with ftw3 ultra's 450w bios first, then use the update.exe


I got an email with part of a reply from you to my question about the "trick" to get SLI working with the 3090s, but I can't find it. Can you please tell me what to do?


----------



## GanMenglin

Vgnusstffr said:


> I got an email with part of a reply from you to my question about the "trick" to get sli working with the 3090s but I can't find it. . Can you please tell me what to do?


Here is: 



GanMenglin said:


> I don't know whether this is the "trick" people mean:
> 
> 1. Find a game in which the 3090 supports SLI, for example Apex.exe.
> 
> 2. Rename the other game's executable to match, e.g. rename csgo.exe to Apex.exe, so CS:GO inherits the Apex SLI profile.


----------



## Vgnusstffr

GanMenglin said:


> Here is:


Even if it is a DX11 title and not Vulkan or mGPU DX12? Will CFR still work with this method like it did on DX12 titles that don't use mGPU, like Metro Exodus and Control?


----------



## Wrathier

Cavokk said:


> Hi Wrathier.
> 
> I understand your frustration. I recommend undervolting and manually dialing in a "fixed" clock speed just south of power-limiting the card. On my MSI Trio X I am currently locked at a stable constant 2100MHz@1.0V, a bit higher when cold and a bit lower when warmed up (still waiting for my waterblock  )
> 
> C


Hi mate,

Can you share the curve with me so I can sort of copy and paste it? I have never tried that before.


----------



## Pepillo

Thebc2 said:


> New bios seems borked for FTW3 Ultra testers so far. Not seeing any additional power draw above the original 107% target even though it shows 119% on the slider. Wattage (for me at least) still maxed out around 450w. We’re also still showing power limited in gpu-z perfcap.
> 
> Curious what people who flashed non-Evga cards are seeing for actual power draw. You guys getting anywhere near 500w?
> 
> 
> 
> 
> Sent from my iPhone using Tapatalk Pro


What a strange thing. On my MSI Gaming X Trio, Afterburner reported 493.7W maximum consumption in the Port Royal bench. How can it work well in my case and not in others?


----------



## Cavokk

Wrathier said:


> Hi mate,
> 
> Can you share the curve with me so i can sort of copy and paste it? I have never tried that before.


Sure, here it is: just open the curve editor, hold Shift while dragging the point at 1 volt up to 2100MHz, and click apply in Afterburner. That's it 

Beware, your mileage may vary as silicon varies, but this is very stable for me.

C
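For anyone curious what that Shift-drag actually does to the voltage/frequency curve, here's a rough model in Python. The point list and the idea that everything at or above the chosen voltage flattens to the target are simplifications of Afterburner's behavior, not its actual internals:

```python
def flatten_curve(points, cap_mv: int = 1000, target_mhz: int = 2100):
    """Model the Afterburner undervolt trick: every point at or above
    `cap_mv` is pinned to `target_mhz`, so the card never requests more
    voltage than `cap_mv` to reach its top clock."""
    return [(mv, target_mhz if mv >= cap_mv else min(mhz, target_mhz))
            for mv, mhz in points]

# Illustrative (mV, MHz) points only, not a real stock curve:
stock = [(900, 1900), (950, 1950), (1000, 1980), (1056, 2010)]
print(flatten_curve(stock))
```

The flat section above 1.0V is what turns it into a "fixed" clock: the boost algorithm has no higher point to climb to, so voltage (and thus power draw) stops rising with it.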


----------



## Wrathier

Cavokk said:


> Sure - here it is - just open the curve editor, hold shift while dragging the point at 1 volt up to 2100Mhz and click accept in afterburner, - thats it
> 
> Beware your milage may vary as silicon varies but this is very stable for me
> 
> C
> 
> View attachment 2462796


Thanks a lot. Did you use Afterburner for this? Stupid question as well, how do you apply the curve after creating it?


----------



## originxt

Seems to use less power on the FTW3 when overclocked than with just the max PL. Running Port Royal with the new "XOC" BIOS.



http://gpuz.techpowerup.com/20/10/21/zx9.png





http://gpuz.techpowerup.com/20/10/21/dke.png


----------



## nievz

GanMenglin said:


> I flashed my MSI X trio OC with the new EVGA 500W BIOS, it works quite well.
> 
> I recommend X trio OC user with water cooling to flash, but not the factory-cooling system, due to the numbers of fans: 2 to 3. FAN control doesn't work perfect if use a 2 fans card's bios on a 3 fans card.
> 
> View attachment 2462783
> 
> 
> View attachment 2462784


What do you mean? The FTW3 Ultra has 3 fans like the Trio X, so what's the issue with stock cooling?

Care to share how you flashed to the XOC BIOS? Did you first flash the 450W EVGA BIOS and then use the EXE to flash the 500W one?


----------



## Cavokk

Wrathier said:


> Thanks a lot. Did you use Afterburner for this? Stupid question as well, how do you apply the curve after creating it?


Yes, the newest beta version of Afterburner - the accept button is this one:  - it may look different depending on which skin you're using in Afterburner.

Also, I highly recommend searching YouTube for RTX 3090 undervolting - they show the exact process in action (well, mostly the power-saving variant where you first drop the whole curve down before raising one point at the desired voltage - in my case I just raised the 1V point to 2100 MHz and was done, for max performance)

C


----------



## JackiesBenchmarks

We need an XOC BIOS for 2x8-pin cards.

Can't get over 15100; best run was 15078.


----------



## Sync0r

JackiesBenchmarks said:


> We need an XOC BIOS for 2x8-pin cards.
> 
> Can't get over 15100; best run was 15078.


Indeed we do. Have you tried the 500W XOC vBIOS? Maybe it will magically work on 2x8-pin.


----------



## JackiesBenchmarks

Tried it, it doesn't work.


----------



## Sync0r

JackiesBenchmarks said:


> Tried it, it doesn't work.


Damn, have you got Windows GPU scheduling turned off? That gave me around a 300pt bump


----------



## zlatanselvic

Baasha said:


> Have you tried flashing the FE to the Strix BIOS? Is that safe?



I haven't "Han Yolo'd" it yet and done that. I think the FE is kind of tricky as the PCB doesn't match up with other designs. I was hoping some big boys have tried it and reported back.


----------



## JackiesBenchmarks

@Sync0r 

Yes its off.


----------



## Thanh Nguyen

Is OC with the curve or the slider better, guys? I tried the curve, but I don't know why the clock keeps bouncing around, and it scores lower than with the slider.


----------



## dr/owned

Outcasst said:


> How well is a 2x8-pin card going to handle unlocked power limits? Is it safe to be pushing over 200 watts per 8-pin?


300W per 8 pin would be no big deal. 400W....I'd probably want to use an IR camera to make sure the thing wasn't melting, but with high quality power supplies and connectors and some airflow it would probably still be OK.

I don't think you can currently get a 3090 to consume more than 600W anyways unless applying stupid high voltage...and you'll probably exceed the VRM delivery of the card unless it had 3x8pin to begin with.


----------



## Chamidorix

Let me tirelessly repeat that we have already seen more than 650W power draw on a 3080 in Time Spy Extreme at stock voltages, when shunted and watercooled. The 2080 Ti would pull over 550W no problem with unlimited power draw and water. The 3090 will surely draw 700W+ in fully uncapped utilization scenarios. Also, TiN, Kingpin, and Buildzoid have tirelessly repeated that VRMs are massively overbuilt to ensure safe function with cheap power supplies, and for use in environments with no AC and extreme summer ambient temperatures (places like the Middle East and parts of Europe).

As zhrooms has tirelessly repeated, his dirt-cheap 2080 Ti could effortlessly pull 550W+ over two 8-pins with a skimpy VRM on an unlimited vBIOS.


----------



## kot0005

shiokarai said:


> It's their old, old, old design, barely changed since 10xx series cards. Look closely, heatpipe is covering only the portion where _supposedly_ vrm is, it's not covering the whole memory area etc. It's the least effort aquacomputer block I've seen in years actually, a shame really.


Doesn't matter; the pipe is in close contact with the water, so it will take a lot of heat away from the backplate.


----------



## HyperMatrix

For anyone in the EVGA card queue, you may want to take a look here to see where you are on the wait list:



https://docs.google.com/spreadsheets/d/e/2PACX-1vRDV5oSQmdnMFzovrtJ0LhdPMidoNjsTxX7WdxN44hzFGh4r6frAeJxnywrkMEwsgznv-8ipgQti9Ac/pubhtml



What's most interesting to me is that there is a more than 10:1 ratio of people who went for the FTW3 Ultra over the FTW3 Gaming, even though the cards are identical. The downside is, since the cards are equal but one is $70 more than the other, EVGA has no real reason to sell the FTW3 Gaming, so there's been almost no stock of the Gaming model sent out, while a decent amount of the Ultras have gone out.


----------



## Thanh Nguyen

@bmgjet do you know why, after I shunted 6 resistors, GPU-Z shows only 50-60% TDP?


----------



## bmgjet

Get a screenshot of the HWiNFO64 sensor page for your GPU after you have run Port Royal.
Should be able to work out which sensor is tripping the power limit.


----------



## Thanh Nguyen

Hwinfo64 does not detect my card.


----------



## bmgjet

That's odd. What version are you on? It gives me way more info than GPU-Z on my card with the Gigabyte Gaming OC BIOS.


----------



## Thanh Nguyen

View attachment 2462846


bmgjet said:


> That's odd. What version are you on? It gives me way more info than GPU-Z on my card with the Gigabyte Gaming OC BIOS.
> View attachment 2462845





http://imgur.com/a/52Jct4l


----------



## bmgjet

GPU core output sums to 318W; the power limit for this is usually 300W on 2-plug cards. I'd say that's what's triggering the power limit.
Maybe you have bad contact on one of the shunts.


----------



## Falkentyne

Thanh Nguyen said:


> View attachment 2462846
> 
> 
> 
> http://imgur.com/a/52Jct4l


What power supply do you have, and how new or old is it?
Your PCIE voltages and GPU supply voltages are VERY low.
Also, your GPU core input power is lower than your output power. Not sure if that's normal or not, though (I put mine in for comparison).

Here is mine (Maximus 12 Extreme, Seasonic Prime PX-1000, Seasonic 12-pin cable; using the 2x8-pin to FE adapter gave maybe 50mV lower PCIE input voltages, but still healthy, above 12.0V).


----------



## bmgjet

Here's a picture of mine after a Port Royal run with no corrections entered into HWiNFO. Power limit set to 90%, frequency curve capped at 1V, since I've got the air cooler back on it while my water block's backplate is getting modded.
But that should be 440W with the 15 mOhm on each shunt.








I'll re-enter the corrections into HWiNFO and do another run.
You right-click on the value, go to Customize, then enter the multiplier for the shunts you're using to get it to read the real power.
----Edit: HWiNFO with 1.33 correction----
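Quick Python sketch of where that 1.33 comes from, for anyone wanting to work out their own correction (the 5 mOhm stock / 15 mOhm stack values are my card; swap in your own):

```python
def parallel_mohm(*resistors):
    """Combined resistance of shunts stacked in parallel, in milliohms."""
    return 1.0 / sum(1.0 / r for r in resistors)

stock = 5.0     # mOhm, the card's original shunt
stacked = 15.0  # mOhm, the resistor soldered on top
new_r = parallel_mohm(stock, stacked)   # combined shunt resistance
correction = stock / new_r              # multiplier HWiNFO should apply
print(round(new_r, 3), round(correction, 3))  # 3.75 1.333
```

The card senses less voltage drop across the lower-resistance stack, so the reported watts have to be multiplied back up by stock/new.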


----------



## Thanh Nguyen

So is it my shunt, PSU, or cable? I have an AX1500i, 2 years old. Or the vertical mount kit? What should I check first?


----------



## bmgjet

The 12V ATX rail spec is 11.4V to 12.6V, so if you're within that, you're fine.
Lower voltage will mean more amps for the same wattage, though not enough to really matter.

I'd be looking at the SRC and GPU chip shunts, since you can see the plug and slot ones are doing their job in dropping that reading down.
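To show what I mean by more amps, here's the plain Ohm's-law arithmetic (nothing card-specific, 450W picked just as an example load):

```python
# Current drawn at a fixed board power across the allowed ATX 12V rail range.
watts = 450.0
for volts in (12.6, 12.0, 11.4):
    amps = watts / volts
    print(f"{volts:>4}V -> {amps:.1f}A")
# Bottom of spec pulls ~11% more current than the top; the connectors
# are sized with that headroom in mind.
```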


----------



## Thanh Nguyen

Which ones are they? The 4 in the 8-pin area?


----------



## bmgjet

This is what they were on my EVGA Card.


----------



## Thanh Nguyen

Just found out. I asked my dad to do the shunt mod for me. Instead of putting one 5 mOhm resistor on top of each 5 mOhm resistor on the PCB, he put two on top.


----------



## bmgjet

Surprised you're not in fail-safe mode.
That's the same as adding a 2.5 mOhm on top of the 5 mOhm, for a new resistance of about 1.67 mOhm.
Like a 1000W power limit.
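The math, for anyone following along (three equal 5 mOhm shunts in parallel; the rough 1000W figure assumes a ~350W stock limit scaling by the same factor):

```python
def parallel_mohm(*resistors):
    """Combined resistance of shunts stacked in parallel, in milliohms."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Stock 5 mOhm shunt with two more 5 mOhm resistors stacked on top.
new_r = parallel_mohm(5.0, 5.0, 5.0)
factor = 5.0 / new_r            # controller under-reads power by this much
print(round(new_r, 2), round(factor, 1))  # 1.67 3.0
# A ~350W stock limit then behaves like roughly 350 * 3 = 1050W.
```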


----------



## Baasha

HyperMatrix said:


> For anyone in the EVGA card queue, you may want to take a look here to see whereabouts you are on the wait list:
> 
> 
> 
> https://docs.google.com/spreadsheets/d/e/2PACX-1vRDV5oSQmdnMFzovrtJ0LhdPMidoNjsTxX7WdxN44hzFGh4r6frAeJxnywrkMEwsgznv-8ipgQti9Ac/pubhtml
> 
> 
> 
> What's most interesting for me is that there is a MORE than 10:1 ratio of people who went for the FTW3 Ultra than those who went for the FTW3 Gaming, even though they're identical. The downside is...since the cards are equal, but one is $70 more than the other, EVGA has no real reason to sell the FTW3 gaming so there's been almost no stock of the gaming model sent out, while a decent amount of the Ultra have gone out.


True. However, regarding that spreadsheet, I'd take it with a bucket of salt, since there are TONS of people who probably signed up who are not on that sheet. Although some have gotten their FTW3 Ultra 3090s already (damn you @Zurv lol ), they signed up literally within minutes of the release of the 3090 on 09/24.

I signed up for the notification on 10/06 and am supposedly ~215 on the list. With about 10-15 cards per week, I am looking at at least 6 months before I get my card, unless there is some sudden influx of stock, which I highly doubt.


----------



## HyperMatrix

Baasha said:


> True. However, regarding that spreadsheet, I'd take it with a bucket of salt since there are TONS of people who probably signed up who are not on that sheet. Although some have gotten their FTW3 Ultra 3090's already (damn you @Zurv lol ), they signed up literally minutes of the release of the 3090 on 09/24.
> 
> I signed up for the notification on 10/06 and am supposedly ~ 215 on the list. With about 10 - 15 cards per week, I am looking at at least 6 months before I get my card unless there is some sudden influx of stock which I highly doubt.


Well, the interesting/helpful part is seeing the time at which those people reserved their card and when they got their notification emails. It's a progression over time that gives you an idea of how close (or far away) you are. Even if just 5% of the people who signed up for notifications also submitted their info to that chart, the timing information is still accurate. The total number of people and the distribution of those people within each time slot won't be.

edit: just noticed I posted the wrong link. lol. this is what you want to look at:






Google Sheets
docs.google.com


----------



## Sky3900

I'm hearing that the EVGA 3090 FTW3 Ultra has the same sort of power load balancing thing going with the PCIe slot shunt as the FE and can't take full advantage of the new 500W bios. Anyone know if this is actually true?


----------



## Chamidorix

I pointed out the EVGA 500W BIOS kerfuffle (it wasn't doing anything on FTW3 cards but was working on GB Aorus and MSI Trio cards) to Framechasers over Discord, and voilà, he made a great video proving the PCI-E shunt is hard-limiting power draw on the EVGA boards, at least.






Since overclock.net is turning into Reddit and many won't watch the video, let me summarize:

He tests the FTW3 3090 at the stock BIOS, then at the 500W BIOS, then adds an additional 5 mOhm shunt to just the PCI-E resistor and tests the 500W BIOS again. From the data we learn:

1. Once the PCI-E shunt detects over 75W of draw, the card will not scale power draw further from the 8-pins, even if they are not at their power limit. The card has a locked-in power balance ratio for the entire board, derived from the PCI-E shunt and set for 450W max. You can nevertheless get over 450W of draw by pulling more power into the memory or cache rail, so it seems likely it is just power-balancing the main core1 power rail.
2. With the stock FTW3 vBIOS at 450W, the PCI-E limit is actually 80W instead of 75W, so you technically get better performance with 450W than 500W (unless the lower wattage keeps clocks from fluctuating, as is more likely to happen on air). Less power allowed on PCI-E means less total board power draw; other people have already noticed this.
3. There isn't any way to get around this limitation other than shunting the PCI-E. Luckily this is very easy, as you don't have to remove the cooler and you can literally just hot-glue a 5 mOhm resistor on top. 
4. Other boards like the Trio don't seem to have this power-balancing issue; you could say it is a product segmentation "feature" of more expensive boards. The other place you see it is on the NVIDIA Founders Edition.
5. This means the KPE etc. BIOSes likely won't do anything for the FTW3 unless they unlock PCI-E power draw. 

My personal opinion is EVGA is purposely doing this to segment the FTW3 from the upcoming KPE. Why buy a $2500 KPE when you can waterblock a FTW3 and painlessly flash the 525W BIOS, for the exact same performance minus binned components? They want to prevent this after it was done so frequently with previous generations.

So yeah, if you don't want to shunt mod or draw more than 75W over PCI-E, stay far, far away from EVGA this gen. Get the MSI Trio for cost effectiveness, or the Strix for the best possible PCB components pre-XOC cards. I'll be trying to get a Trio, since with the same display output config as the KPE it should have the best compatibility. If anyone is interested in buying my 3090 FTW3 for retail+tax+shipping etc. etc., PM me, since I will likely be dumping it.
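If it helps, here's a toy Python model of what I'm claiming in point 1 (the 75W/80W caps and the fixed PCI-E:total ratio are my read of the video, not anything official, and the function name is mine):

```python
# Hypothetical model: if the controller holds the whole board to a fixed
# multiple of the PCI-E shunt reading, the PCI-E cap implies a total board
# cap regardless of what the BIOS power-limit slider says.
RATIO = 75.0 / 450.0  # claimed locked PCI-E : total balance on the FTW3

def implied_board_cap_w(pcie_cap_w, ratio=RATIO):
    """Total board power implied by a given PCI-E slot cap."""
    return pcie_cap_w / ratio

print(round(implied_board_cap_w(75.0)))  # 450 -> why the "500W" BIOS tops out at 450W
print(round(implied_board_cap_w(80.0)))  # 480 -> stock 450W BIOS with its 80W slot cap
```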


----------



## Chamidorix

bmgjet said:


> This is what they were on my EVGA Card.
> 
> View attachment 2462851


You seem to be the only one on here actually trying to match up each shunt with its 8-pin, core rail, etc. Have you tested the shunts on the back of the card yet, and/or the smaller 5 mOhm shunts? 

I also recall you mentioning a "shunt map" project or something like that many pages back; could you refresh me on what that is? I think it is important to determine what every shunt on a board is doing, so we can be more systematic about testing each of them to determine what hidden power limits are in place. 

For example, the FTW3 Ultra 3080 on TechPowerUp has 7 large 5 mOhm shunts (6 on the front, 1 on the back) and 3 small 5 mOhm shunts (1 on the front, 2 on the back). I cannot figure out how this could possibly match up with: three 8-pin connectors, 1 PCI-E connector, 1 core rail, 1 uncore/cache rail, 1 memory rail.


----------



## asdkj1740

Chamidorix said:


> I pointed out the EVGA 500W BIOS kerfuffle (it wasn't doing anything on FTW3 cards but was working on GB Aorus and MSI Trio cards) to Framechasers over Discord, and voilà, he made a great video proving the PCI-E shunt is hard-limiting power draw on the EVGA boards, at least.
> 
> 
> 
> 
> 
> 
> Since overclock.net is turning into Reddit and many won't watch the video, let me summarize:
> 
> He tests the FTW3 3090 at the stock BIOS, then at the 500W BIOS, then adds an additional 5 mOhm shunt to just the PCI-E resistor and tests the 500W BIOS again. From the data we learn:
> 
> 1. Once the PCI-E shunt detects over 75W of draw, the card will not scale power draw further from the 8-pins, even if they are not at their power limit. The card has a locked-in power balance ratio for the entire board, derived from the PCI-E shunt and set for 450W max. You can nevertheless get over 450W of draw by pulling more power into the memory or cache rail, so it seems likely it is just power-balancing the main core1 power rail.
> 2. With the stock FTW3 vBIOS at 450W, the PCI-E limit is actually 80W instead of 75W, so you technically get better performance with 450W than 500W (unless the lower wattage keeps clocks from fluctuating, as is more likely to happen on air). Less power allowed on PCI-E means less total board power draw; other people have already noticed this.
> 3. There isn't any way to get around this limitation other than shunting the PCI-E. Luckily this is very easy, as you don't have to remove the cooler and you can literally just hot-glue a 5 mOhm resistor on top.
> 4. Other boards like the Trio don't seem to have this power-balancing issue; you could say it is a product segmentation "feature" of more expensive boards. The other place you see it is on the NVIDIA Founders Edition.
> 5. This means the KPE etc. BIOSes likely won't do anything for the FTW3 unless they unlock PCI-E power draw.
> 
> My personal opinion is EVGA is purposefully doing this to segment the FTW3 from the upcoming KPE. Why buy a $2500 KPE, when you can waterblock a FTW3 and painlessly flash the 525W bios, for the exact same performance minus binned components? They want to prevent this after it being done so frequently with previous generations.
> 
> So yea, if you don't want to shunt mod or draw more than 75W over pci-e, stay far far away from EVGA this gen. Get the MSI trio for cost effectiveness, or the Strix for the best possible pcb components pre XOC cards. I'll be trying to get a trio since with the same display output config as KPE it should have the best compatibility. If anyone is interested in buying my 3090 FTW3 for retail+tax+shipping etc etc, PM me since I will likely be dumping it.


some dude has already reported that the XC3 Ultra has a fake power limit in its bios. it seems evga did this on purpose, and the poor XC3 cooling is likely the reason behind it.
and then evga forgot to unlock this hidden limit in the FTW XOC bios, which is why they messed up the 500w bios.

if we recall the der8auer shunt mod video on the RTX 3080, he did not mod the pcie shunt, and he ended up with fluctuating core frequency even though the temp was unchanged and an external volt mod was used to lock the voltage.
i think the pcie shunt trick applies to all cards, as it seems to be a design by nvidia to stop the community from getting more power like they used to on the last generation.

the reason why cross-flashing a bios somehow removes the pcie shunt trick is still not clear. but it is nothing new, as cross-flashing in the past would mess up particular design choices on particular custom models.
for example, my friend who had a Galax 1070 cross-flashed a Zotac 300w bios and ended up with a 400w power limit.


the kingpin AIO is kind of plug and play, totally unlike water cooling, which needs so much time and money for the rest of the setup and follow-up routine checks/maintenance.
also, according to TiN (formerly of evga), the kingpin has the best performance at the same core frequency; the card and the bios are well tuned, so the kingpin can post higher benchmarks at the same frequency.

ppl who only care about gaming performance (avg fps) should just get the cheapest one.

to me, the FTW3 is not on the same grade/level as the ROG Strix, judging by the pcb components and pricing.
so i don't think evga needs extra tricks to distinguish the kingpin from the FTW3, although lots of dudes treat the FTW3 as being as good as the ROG Strix/Aorus/Suprim etc.

if Colorful started selling its Advanced OC model and Vulcan model globally, we would have another strong alternative for 3x8-pin cards with a better pcb (all the missing phases on the reference pcb filled in, unlike the Gaming Trio from MSI).


----------



## LordGurciullo

bmgjet said:


> PUBG, yes (UE4).
> The game is basically at like 200 FPS for 5 secs, then a 500ms dip to 20 FPS, back to 200 FPS, and repeat.
> GPU load shows 96%, then dips to 10% during the stutter.
> 
> Setting a frame rate limiter of 144 FPS fixed it though, well, mostly.
> 144 FPS then dips to 120 FPS; GPU load 80%, and the dip takes it to 72%.
> 
> Tried everything else I could think of and even reinstalled Windows.
> LatencyMon shows that it's the NVIDIA driver going from 300us to 2500us during the stutter.


This.. What is this stuttering...


----------



## LordGurciullo

Foxrun said:


> I get a lot of stuttering in SS4 and BFV. I also get the split second drops into the 20s but I think that was my cpu. My cpu oc became unstable once I added the 3090.


it did!??


----------



## Chamidorix

So, actually taking the time to look at the ASUS, EVGA, and MSI 3x8-pin PCBs, they all seem to have the same shunt layout: 7 big and 3 small. It seems we can figure out what all the big ones are for: 3 for the 8-pins, 1 for PCI-E, 1 for each power rail. But what are the 3 small resistors for? Here is my annotated picture of the 3080 FTW3 showing them all.


----------



## Chamidorix

Also, sorry for the spam, but finally a somewhat proper look at an EVGA PCB by Buildzoid. Not looking so hot, if the current retail core cap config is barely above reference for a "premium" card like the FTW3.


----------



## bmgjet

Chamidorix said:


> You seem to be the only one on here actually trying to match up each shunt with 8pin, core rail, etc. Have you tested the shunts on the back of the card yet, and/or the smaller 5MO shunts?


Just the ones I marked are enough on my card.
I'm sort of keeping info on the shunt modding calculator's page as people post up their findings and I see them on here.








GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.






github.com





He is massively rounding his numbers up.
The PCI-E slot has a 10 amp fuse on it on EVGA cards.
So if he pulled 120W+ from the slot, that fuse would blow.
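The 120W figure is just the fuse rating times the slot voltage; quick sketch, assuming a nominal 12V rail:

```python
FUSE_A = 10.0    # PCI-E slot fuse on the EVGA cards, as noted above
RAIL_V = 12.0    # nominal 12V slot rail
ceiling_w = FUSE_A * RAIL_V
print(ceiling_w)  # 120.0 -> sustained slot draw above this should pop the fuse
```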


----------



## HyperMatrix

Chamidorix said:


> I pointed out the EVGA 500W BIOS kerfuffle (it wasn't doing anything on FTW3 cards but was working on GB Aorus and MSI Trio cards) to Framechasers over Discord, and voilà, he made a great video proving the PCI-E shunt is hard-limiting power draw on the EVGA boards, at least.
> 
> 
> 
> 
> 
> 
> Since overclock.net is turning into Reddit and many won't watch the video, let me summarize:
> 
> He tests the FTW3 3090 at the stock BIOS, then at the 500W BIOS, then adds an additional 5 mOhm shunt to just the PCI-E resistor and tests the 500W BIOS again. From the data we learn:
> 
> 1. Once the PCI-E shunt detects over 75W of draw, the card will not scale power draw further from the 8-pins, even if they are not at their power limit. The card has a locked-in power balance ratio for the entire board, derived from the PCI-E shunt and set for 450W max. You can nevertheless get over 450W of draw by pulling more power into the memory or cache rail, so it seems likely it is just power-balancing the main core1 power rail.
> 2. With the stock FTW3 vBIOS at 450W, the PCI-E limit is actually 80W instead of 75W, so you technically get better performance with 450W than 500W (unless the lower wattage keeps clocks from fluctuating, as is more likely to happen on air). Less power allowed on PCI-E means less total board power draw; other people have already noticed this.
> 3. There isn't any way to get around this limitation other than shunting the PCI-E. Luckily this is very easy, as you don't have to remove the cooler and you can literally just hot-glue a 5 mOhm resistor on top.
> 4. Other boards like the Trio don't seem to have this power-balancing issue; you could say it is a product segmentation "feature" of more expensive boards. The other place you see it is on the NVIDIA Founders Edition.
> 5. This means the KPE etc. BIOSes likely won't do anything for the FTW3 unless they unlock PCI-E power draw.
> 
> My personal opinion is EVGA is purposefully doing this to segment the FTW3 from the upcoming KPE. Why buy a $2500 KPE, when you can waterblock a FTW3 and painlessly flash the 525W bios, for the exact same performance minus binned components? They want to prevent this after it being done so frequently with previous generations.
> 
> So yea, if you don't want to shunt mod or draw more than 75W over pci-e, stay far far away from EVGA this gen. Get the MSI trio for cost effectiveness, or the Strix for the best possible pcb components pre XOC cards. I'll be trying to get a trio since with the same display output config as KPE it should have the best compatibility. If anyone is interested in buying my 3090 FTW3 for retail+tax+shipping etc etc, PM me since I will likely be dumping it.


Do we have confirmation that the FTW3 is limited to 450W power draw on the ROG Strix 480W bios before making these claims?


----------



## Chamidorix

bmgjet said:


> The PCI-E Slot has a 10 amp fuse on it on EVGA cards.
> So if he pulled 120w+ from the slot then that fuse would blow.


How does a ~100W hard power limit on the PCI-E make sense? He already has an XC3 with all 6 shunts (2 8-pin, 1 PCI-E, 3 rail) stacked with 5 mOhm (for double the limit) pulling 600+W, so that has to be pulling more than the 33% increase from the stock 75W to the 100W fuse rating on PCI-E. How can this be?

I.e., if you stack a 5 mOhm resistor on a PCI-E shunt with a 10A fuse, shouldn't it be guaranteed to blow if you can pull 33%+ more power? But it is not, on our sample size of 1.


----------



## Chamidorix

HyperMatrix said:


> Do we have confirmation that the FTW3 is limited to 450W power draw on the ROG Strix 480W bios before making these claims?


Yes, in fact. Performance is actually worse on both mine and Framechasers' 3090 FTW3 Ultra on the Strix 480W BIOS vs the 450W one. I now understand this is because it is most likely capping PCI-E at 75W, vs 80W on the EVGA 450W BIOS. So, the same exact problem.


----------



## Sky3900

Chamidorix said:


> How does ~100W hard power limit on the pci-e make sense? He already has an XC3 with all 6 shunts (2 8 pin 1 pci-e 3 rail) 5MO stacked (for double limit) pulling 600+W so that has to be pulling more than the 33% increase from stock 75W to 100W fuse rated on pci-e. How can this be?
> 
> I.e, if you stack a 5mo resister on a pci-shunt with a 10a fuse shouldn't it guarantee blow if you can pull 33+% more power?


Perhaps he soldered it over like Tech Jesus did on some of the fuses near the 8 pins.


----------



## bmgjet

He said he didn't bypass them when I asked about it. Said his card doesn't have fuses, so I took a screenshot of his teardown video, marked them, and posted a link to the screenshots. 
Comment got deleted.


----------



## Sky3900

Well, I have a FTW3 Ultra showing up tomorrow. Now I am debating whether to keep it or the FE. If I'm going to have to shunt to get up around 500W, I might as well do it on the cheaper card. I'll wait a few days to see how this all shakes out.

And the FE is one sweet-ass card. I'm impressed.


----------



## Foxrun

LordGurciullo said:


> it did!??


Yes. I had 5.1GHz @ 1.33V with 8 hours of RealBench behind it. I've never had any WHEA errors or crashes with my shunted RTX Titan. This is no longer stable with the 3090. I started getting a lot of crashes and even corrupted my OS. I backed it down to 5GHz @ 1.3V and now everything is great. I can even get more out of my core OC on the 3090.


----------



## DooRules

Baasha said:


> True. However, regarding that spreadsheet, I'd take it with a bucket of salt since there are TONS of people who probably signed up who are not on that sheet. Although some have gotten their FTW3 Ultra 3090's already (damn you @Zurv lol ), they signed up literally minutes of the release of the 3090 on 09/24.
> 
> I signed up for the notification on 10/06 and am supposedly ~ 215 on the list. With about 10 - 15 cards per week, I am looking at at least 6 months before I get my card unless there is some sudden influx of stock which I highly doubt.


It is more like 15 per day going off the list as opposed to 15 per week.





3090 24G-P5-3987-KR place in line (Page 16) - EVGA Forums


Share your place in line for this card. Mine is #154 @ 9/25/2020 3:34 AM PT. PLEASE ONLY POST ABOUT 24G-P5-3987-KR. POST AGAIN IF YOU GET YOUR CARD AND I WILL UPDATE THIS OP. Thanks. The latest notify time to receive an email thus far is below: ~9/24/2020 15:14:00 PST. Spreadsheet...



forums.evga.com


----------



## Cavokk

nievz said:


> What do you mean? The FTW3 Ultra has 3 fans like trio x, so what's the issue with stock cooling?
> 
> Care to share how you flashed to the XOC BIOS? Did you first go for the 450W EVGA BIOS and then use the EXE to flash the 500W?


Yes, I did exactly that, and it's now running very well here on my MSI Trio X with all DisplayPorts working  Just had to remove the NVIDIA display driver and reinstall before running the EXE file

C


----------



## TK421

what shunt resistors do I buy to fully mod the card's PCIE slot and PCIE power cable?

ERJ-M1WSF8M0U? but that one is 8 mOhm instead of 5 mOhm like the standard ones on the card
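trying to sanity check the math myself (assuming the stock shunts really are 5 mOhm like people here say; stacking R_add on R_stock makes the card under-read by a factor of 1 + R_stock/R_add):

```python
def underread_factor(r_stock_mohm, r_add_mohm):
    """How much the card under-reads power after stacking r_add on r_stock."""
    return 1.0 + r_stock_mohm / r_add_mohm

def stack_resistor_for(r_stock_mohm, target_factor):
    """Resistor to stack on top to hit a desired under-read factor."""
    return r_stock_mohm / (target_factor - 1.0)

print(underread_factor(5.0, 8.0))             # 1.625 -> ~62% more power headroom
print(round(stack_resistor_for(5.0, 4/3), 1)) # 15.0  -> the 15 mOhm stack used earlier
```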


----------



## TK421

asdkj1740 said:


> some dude have already reported the xc3 ultra has fake power limit in bios. it seems evga did this on purpose, and the sheeet xc3 cooling is likely to be the reason behind.
> and then evga forgot to unlock this hidden limit on ftw xoc bios, thats why they fcuked up on 500w bios.
> 
> if we recall the der8auer shunt mod video on rtx3080, he has not mod the pcie shunt, and he ended up having fluctuating core frequency even the temp is unchanged and external volt mod used to lock to voltage.
> i think the pcie shunt trick applies to all cards, as it seems to be the design by nvidia to stop the community getting more power like they used to do on last generation.
> 
> the reason why cross flashing bios somehow removed the pcie shunt trick is still not yet clear. but it is nothing new as crossflashing in the past would mess up some particular design on particular custom model.
> for example, my friend who had a galax 1070 crossflashed zotac 300w bios and ended up having 400w power limit.
> 
> 
> kingpin aio is kind of plug and play, which is totally not like watercooling needing so much time and money of the rest setup and followup routine check/maintenance.
> also, according to evga tin (former), evga kingpin has the best performance on the same core frequency, the card and the bios are well tuned, kingpin can have higher benchmarks at the same level of frequency.
> 
> for ppl who care about gaming performance (avg fps) only, they should get the cheapest one.
> 
> to me, ftw3 is not on the same grade/level as rog strix, judging by the pcb components and pricings.
> so i dont think evga needs extra tricks to distinguish kingpin and ftw, although lots of dudes treat ftw as good as rog strix/aorus/surprim etc.
> 
> if colorful started to sell its advanced oc model and vulcan model globally, we shall have another strong alternative for 3*8pin cards with better pcb (all missing phases on reference pcb are filled up, unlike gaming trio from msi).


what hidden power limit?

care to explain more?


also, do you have an invite link to the Framechasers Discord?


----------



## Cavokk

nievz said:


> What do you mean? The FTW3 Ultra has 3 fans like trio x, so what's the issue with stock cooling?
> 
> Care to share how you flashed to the XOC bios? Did you first go for the 450W EVA BIOS then use the EXE to flast the 500W?


just dumped the 500W EVGA 3090 3x8-pin BIOS from my Trio X but can't upload it 

C


----------



## Spiriva

Cavokk said:


> just dumped the 500W EVGA 3090 3x8pin Bios from my Trio X but cant upload
> 
> C


The 500W EVGA BIOS can be downloaded here: 









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## TK421

Cavokk said:


> just dumped the 500W EVGA 3090 3x8pin Bios from my Trio X but cant upload
> 
> C


upload it to mega.nz then share the link here


----------



## Cavokk

TK421 said:


> upload it to mega.nz then share the link here


See post 3280 - it's in TechPowerUp's database of unverified BIOSes 

C


----------



## Wrathier

Hi all.

I sold my Gigabyte 3090 Gaming OC and ordered the ROG Strix 3090 OC instead.

Can someone please help me with a performance curve or a stable overclock for this card? I assume with 480W it will overclock better than the 390W Gaming OC.

Also is it worth jumping on the 500W bios on air or just keeping the Strix bios?

Cheers.


----------



## Sarieri

Wrathier said:


> Hi all.
> 
> I sold my Gigabyte 3090 Gaming OC and ordered the ROG Strix 3090 OC instead.
> 
> Can someone please help me with a performance curve or a stable overclock for this card? I assume with 480W it will overclock better than the 390W Gaming OC.
> 
> Also is it worth jumping on the 500W bios on air or just keeping the Strix bios?
> 
> Cheers.


I have a Colorful 3090 Advanced OC. I reached a score of 218xx in Time Spy flashing the Strix 480W BIOS (+120/+600). With EVGA's 500W BIOS, I was able to reach 220xx in Time Spy (+165/+650 with a little lift on the curve; I'm #40 on the list). For me, I would rather keep EVGA's BIOS, because the Strix BIOS on my card seems to have minor problems, including one of the DP ports not being functional (expected, since they have different numbers of ports in total) and the power usage of the card not showing correctly in GPU-Z (GPU chip power draw shown greater than MVDDC power draw, which is weird). EVGA's BIOS is so far so good on my card. I really think you should pick a card based only on how good the PCB is and whether it has three 8-pin connectors, because most of the three 8-pin cards can flash BIOSes anyway. The Strix has very good components on its PCB, but it's just wayyyy too expensive for me. If you don't care how much it costs, I think the Strix is the way to go for now, or maybe you want to wait for the Kingpin.


----------



## wasprodker

Yep, I've tried "everything" for about 2 hours now, but the power limit simply refuses to go above 450W with the "500W" EVGA BIOS. Using a 3090 FTW3 Ultra.

Anyway, it's probably a bug or something in EVGA's BIOS. I doubt there is any hardware problem, since people have reported the 500W limit working for them. But I'm a bit sour that other AIBs get the full 500 while us FTW3 owners get the shaft (for now).


----------



## Sheyster

wasprodker said:


> Yep, ive tried "everything" for about 2 hours now but the powerlimit simply refuses to go above 450w for the "500w" evga bios. Using a 3090 FTW3 Ultra.
> 
> Anyway, problably a bug or something in evga's bios. I doubt there is any hardwired problem since people have reported the 500w limit working for them. But im a bit sour that other AIB's gets the full 500 whilst us ftw3 owners gets the shaft (for now)


It's hard to believe EVGA didn't notice or catch this issue. I know there was a lot of pressure to release this BIOS for the 3090 Ultra but they had to know people would notice the issue immediately, which is exactly what happened (check out the sticky thread on their forum).

I think I'm back to waiting for a Strix. Components are better and I'm perfectly happy with 480w, although I hear the EVGA BIOS works great on the Strix and the MSI Trio X as well!


----------



## Baasha

Anyone get the RoG Strix 3090 in the US? I've seen only one drop on NewEgg last Friday evening for about 1 minute.


----------



## ExDarkxH

Okay, so I have a great-sample 3090 FTW3 Gaming and have been able to push this card very far on the stock air cooler and the stock 450W power-limit BIOS. In fact I was able to score 14,853 in Port Royal last night: https://www.3dmark.com/pr/423566
Since it's such good silicon, I would hate to let this card go.
I went ahead and ordered the Optimus water block, but it won't come in until next month.

Now, I'm bothered by this 450W hard limit, so I just ordered some 0.05-ohm resistors.
If I shunt just the PCIe slot, would that in theory allow me to pull 141W from each 8-pin in addition to the increase from PCIe, so it would be over 500W?
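For what it's worth, the usual shunt-mod arithmetic looks like this. A minimal Python sketch, assuming the stock shunts are 5 mΩ (typical for these boards, but worth verifying on your PCB) and the 0.05-ohm parts go in parallel on top:

```python
# Parallel-shunt math: stacking a resistor on top of a current-sense
# shunt lowers the resistance the controller sees, so it under-reads
# current and allows proportionally more real power draw.

def parallel(r1: float, r2: float) -> float:
    """Effective resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

STOCK_SHUNT = 0.005    # ohms -- assumed stock value, check your board
MOD_RESISTOR = 0.05    # ohms -- the parts ordered above

r_eff = parallel(STOCK_SHUNT, MOD_RESISTOR)
scale = STOCK_SHUNT / r_eff   # real watts per reported watt

print(f"effective shunt: {r_eff * 1000:.2f} mOhm")   # 4.55 mOhm
print(f"scale factor: {scale:.2f}x")                 # 1.10x
print(f"a 75 W slot limit then allows ~{75 * scale:.0f} W real draw")
```

So a 0.05 Ω part on a 5 mΩ shunt only makes that rail under-read by ~10%; whether that frees up meaningful budget for the 8-pins depends on how the BIOS balances the rails.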


----------



## TimeLord84

Can anyone benchmark Shadow of the Tomb Raider for me so I can compare? Also, SW Squadrons is getting horrible performance for me, worse than my 1080 FTW in the same 9900K system, and NVIDIA is talking about RMA at this point. I'm also getting a light buzzing noise whenever 3D is being rendered, in all scenarios. I'm at 2560x1440, 165Hz G-Sync.

Some things are showing the performance I believe they should, but others are leaving me with questions. Boundary ray tracing has some micro-stutters and I'm trying to uncover more.

Input would be greatly appreciated!









game is at max settings, everything enabled 1440


----------



## LordGurciullo

ExDarkxH said:


> okay so i have a great sample 3090 ftw3 gaming and have been able to push this card very far on the stock air cooler and stock power limit 450w bios. In fact i was able to score 14,853 in port royal last night https://www.3dmark.com/pr/423566
> Since it's such good silicone I would hate to let this card go.
> I went ahead and ordered the optimus water block but it wont come in until next month
> 
> now, I am bothered by this 450w hard limit so i just ordered some 0.05ohm resistors
> If I shunt just the pcie express, would that in theory allow me to pull 141w from each 8pin in addition to the increase from pcie? so it would be over 500w ?


Same, I'm right next to you. Got this on the old BIOS. Also, EVGA will figure this out; if even one person is getting over 450, then we all will.


----------



## dante`afk

interesting observations over the last two pages.

I guess it's either FE or Strix OC, nothing else.


----------



## jomama22



TimeLord84 said:


> Can anyone benchmark Shadow of the Tomb Raider for me so I can compare? Also SW Squadrons is getting horrible performance for me, worse than my 1080 FTW in the same 9900K system and nVidia talking about RMA at this point, also getting a light buzzing noise once 3D is being rendered in all scenarios. I'm at 2560x1440 165Hz G-Sync
> 
> Some things are showing the performance that I believe they should, but others are leaving me with questions. Boundary ray tracing has some micro-stutters and I'm trying to uncover more.
> 
> Input would be greatly appreciated!
> View attachment 2462907


Without settings we can't know what to bench lol


----------



## TimeLord84

It’s safe to assume with a 3090 that everything is maxed, no? I included the resolution..


----------



## Cholerikerklaus

Any suggestions on what I can do better? It's a Trio 3090 with the EVGA 500W BIOS, air cooled. A water block is ordered.


----------



## Outcasst

Does anybody know which BIOS has the highest power limit that can be flashed to a 2x8-pin card?

Edit: Never mind, both the EVGA FTW3 Ultra and Strix BIOSes flashed successfully to my Palit GamePro OC.


----------



## jomama22

TimeLord84 said:


> It’s safe to assume with a 3090 that everything is maxed, no? I included the resolution..


I mean, no, lol. It would be easier to just post them. We don't know if you're using DLSS or CAS, what AA, motion blur, HDR, whatever.


----------



## HyperMatrix

Wrathier said:


> Hi all.
> 
> I sold my Gigabyte 3090 Gaming OC and ordered the ROG Strix 3090 OC instead.
> 
> Cheers.


Why? I thought you said Gigabyte is the best card brand ever and Asus and EVGA are stupid and you're boycotting them for switching from SP-Caps to MLCCs with their deceptive marketing.


----------



## Chamidorix

bmgjet said:


> He said he didn't bypass them when I asked about it. Said his card doesn't have fuses, I took screen shot of his tear down video marked them and posted it link to screens shots.
> Comment got deleted.


I appreciate you taking the time to reply, and to be clear I'm trying to learn, not be contentious here - I think you are pretty much the only guy on this entire thread who seems to really have any progress in figuring out how power limits actually work this generation.

What do you think is the cause of the discrepancy? Do you find any fault in the way Frame Chasers is measuring/inferring total board power draw? Do the 10-amp fuses actually have higher tolerances than 10A? If every single shunt on the board has halved resistance, how is he possibly not pulling over 100W over PCIe?

Is there actually a power limit for each individual rail of Vcore1, Vcore2, and Vmem (going by names from buildzoid)? Or is it just Vcore1? I'm assuming the only mechanism of current measurement is the shunt resistor on that particular rail?

Even though you don't need to shunt the 3 small 5 mΩ resistors to evidently remove power draw limits, do we have any idea what they are for? To be clear, you are of the opinion that the order of encountering power limits is: 1. board, via 8-pin shunts, then 2. Vcore1, via 1 shunt, then 3. PCIe, via 1 shunt.

Am I being stupid and should I just reread all your posts, because maybe you have already answered these questions if I would just read more carefully?


----------



## synergon

3090 strix on air 😬


----------



## HyperMatrix

ExDarkxH said:


> okay so i have a great sample 3090 ftw3 gaming and have been able to push this card very far on the stock air cooler and stock power limit 450w bios. In fact i was able to score 14,853 in port royal last night https://www.3dmark.com/pr/423566
> Since it's such good silicone I would hate to let this card go.
> I went ahead and ordered the optimus water block but it wont come in until next month
> 
> now, I am bothered by this 450w hard limit so i just ordered some 0.05ohm resistors
> If I shunt just the pcie express, would that in theory allow me to pull 141w from each 8pin in addition to the increase from pcie? so it would be over 500w ?


Two things. Well, three things. First off, I'm super jealous. Second, Ampere responds very well to cooling; if you're getting 14,853 on air, you should be able to hit 15k+ under water. Thirdly, don't do shunt mods yet. Wait to see what EVGA's response is, as they're aware the BIOS they've released isn't working as intended. I think with how good your card is, when you get it under water you're looking at a very minimal clock increase without needing to increase the voltage. And shunting will void your warranty. If you're shunting in order to get a meaningful performance uplift, that's fine. But if you end up getting literally 1% more performance, then is it really worth giving up a 3-year warranty for it? Lastly, I want your card. Seriously. Grats mate, you got a solid card there.


----------



## ExDarkxH

HyperMatrix said:


> 2 things. well. 3 things. first off, I'm super jealous. second, ampere responds very well to cooling. If you're getting 14853 on air, you should be able to hit 15k+ under water. thirly, don't do shunt mods yet. Wait to see what EVGAs response is as they're aware the bios they've released isn't working as intended. I think with how good your card is, when you get it under water, you're looking at very minimal clock increase without needing to increase the voltage. And shunting will void your warranty. If you're shunting in order to get a meaningful performance uplift, that's fine. But if you end up getting literally 1% more performance, then is it really worth giving up a 3 year warranty for it? lastly, I want your card.  seriously. grats mate. you got a solid card there.


Thanks for that, I appreciate it.
Yeah, I've never shunted before and I'm hoping to avoid it, so I'll wait a little while. I think with the water block I'll be happy even on the stock BIOS.


----------



## Zurv

some pads info for the 3090 FTW

first,
these are the pads used on the ftw (which are squishy as hell)



> Memory is 2.25mm, VRM 2.85mm, back VRAM is 2mm








3090 FTW3 Thermal Pads - EVGA Forums (forums.evga.com)





I used Fuji Ultra Extreme, which doesn't compress much, so it isn't useful to compare the info below to other, squishier pads (like the ones that came with the card).

I replaced the putty with a 0.5mm pad. It made contact. (I also put a layer of TIM on top.)
For the RAM I used a 2mm pad (1.5mm plus a 0.5mm; there wasn't a Fuji Extreme 2mm option).

First, the RAM and the rest of the card are pretty well cooled. I never saw the RAM hit more than 70C (with a 30-minute load of Time Spy Extreme at +750), as read from the ICX3 sensors.
The problem is the GPU; it's hotter than it was before (and thus didn't OC as well). I'm not sure which of the pads is causing the problem (my guess would be the RAM).

_sigh_ Those Fujis cost a ton. I'll most likely have to replace them with a 2mm pad that has more give. I have some TG pads around.

I also put TG pads on the back of the card over the RAM; 2mm is right.
For other parts of the card I needed 2.5mm to make it to the PCB. (I put pads on the back side of the VRMs.)

(For TIM on the GPU I used TG Kryonaut Extreme.)


----------



## TK421

Outcasst said:


> Does anybody know which BIOS has the highest power limit that can be flashed to a 2-Pin card?
> 
> Edit: Nevermind, both EVGA FTW3 Ultra and STRIX bios's successfully flashed to my Palit GamePro OC


Do you actually see higher scores due to increased power budget?

The X-Trio and some cards here apparently works with 500w EVGA bios due to not monitoring the PCIE power draw.


----------



## originxt

Zurv said:


> some pads info for the 3090 FTW
> 
> first,
> these are the pads used on the ftw (which are squishy as hell)
> 
> 
> 
> 
> 
> 
> 
> 3090 FTW3 Thermal Pads - EVGA Forums
> 
> 
> So has anyone seen yet what the thermal pad thickness is on the 3090 FTW3. If not EVGA could you please let us know. Thanks
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> I used fuji ultra extreme - which doesn't compress much. So it isn't useful to compare the below info to other more squishy pads (like the ones that came with the card)
> 
> I replaced the putty and use a .5mm pad. It made contact. (i also put a layer of TIM too.)
> For the ram i used a 2mm pad (1.5mm plus a .5mm - there wasn't a fuji extreme 2mm option)
> 
> First, ram and the rest of the card pretty well cooled. I never saw the ram hit more than 70C. (with 30min load of timespy extreme with +750.) That is read from the ICX3 sensors.
> The problem is the GPU, it is hotter than it was before (and thus didn't OC was well.) I'm not sure which of the pads is causing the problem (my guess would be the ram)
> 
> _sigh_ those fuji's cost a ton. I'll most likekly have to replace it was a 2mm pad that has more give. I have some TG pads around.
> 
> I also put TG pads on the back of the card on the ram. 2mm is right.
> For other parts of the card i needed 2.5mm to make it to the PCB. (i put pads on the back side of the VRMs)
> 
> (for TIM on the GPU i used TG Kryonaut Extreme)


How did you clean off the thermal putty? Getting a waterblock in a few weeks and it looks annoying as well. I have crc cleaner, works on thermal paste but unsure if it works on that stuff.


----------



## Jpmboy

Zurv said:


> some pads info for the 3090 FTW
> 
> first,
> these are the pads used on the ftw (which are squishy as hell)
> 
> 
> 
> 
> 
> 
> 
> 3090 FTW3 Thermal Pads - EVGA Forums
> 
> 
> So has anyone seen yet what the thermal pad thickness is on the 3090 FTW3. If not EVGA could you please let us know. Thanks
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> I used fuji ultra extreme - which doesn't compress much. So it isn't useful to compare the below info to other more squishy pads (like the ones that came with the card)
> 
> I replaced the putty and use a .5mm pad. It made contact. (i also put a layer of TIM too.)
> For the ram i used a 2mm pad (1.5mm plus a .5mm - there wasn't a fuji extreme 2mm option)
> 
> First, ram and the rest of the card pretty well cooled. I never saw the ram hit more than 70C. (with 30min load of timespy extreme with +750.) That is read from the ICX3 sensors.
> The problem is the GPU, it is hotter than it was before (and thus didn't OC was well.) I'm not sure which of the pads is causing the problem (my guess would be the ram)
> 
> _sigh_ those fuji's cost a ton. I'll most likekly have to replace it was a 2mm pad that has more give. I have some TG pads around.
> 
> I also put TG pads on the back of the card on the ram. 2mm is right.
> For other parts of the card i needed 2.5mm to make it to the PCB. (i put pads on the back side of the VRMs)
> 
> (for TIM on the GPU i used TG Kryonaut Extreme)


The stock pads are that bad? Did you get the FTW3 Ultra? If the Fujis you've been combining have two different plastic films on each side, at least with the old manufacturing process, the thick plastic side needs to go on the heat source and the thin (checkerboard) plastic side toward the cooler/heatsink/waterblock. Mushing the Extremes up into a (hot) putty for a custom thickness never gave the same result (well short of 17 W/mK). And yeah, Fuji Extremes are crazy expensive!


----------



## Zurv

Jpmboy said:


> The stock pads are that bad? Did you get the FTW3 Ultra? If the Fuji's you been combining have two different plastic films on each side, at least with the old mfr process, the thick plastic side needs to be on the heat source and the thin (checkerboard) plastic side to the "cooler/heatsink/waterblock". Mushing the extremes up into a (hot) putty for a custom thickness never gave the same result (well short of 17K/m). And yeah, Fuji extremes are crazy expensive!


I have the 3090 FTW3 Ultra and two 3090 FEs.

The stock pads are fine, but I messed them up a bit when I opened the GPU to replace the TIM.
I'm more bummed that I have to open the card up again to replace... something... than about the waste of Fujis (which also sucks).

@originxt Re: cleaning the putty. It isn't a big deal. Just wipe it away with a paper towel. (Then follow up with your normal TIM cleaning.)


----------



## TK421

Jpmboy said:


> The stock pads are that bad? Did you get the FTW3 Ultra? If the Fuji's you been combining have two different plastic films on each side, at least with the old mfr process, the thick plastic side needs to be on the heat source and the thin (checkerboard) plastic side to the "cooler/heatsink/waterblock". Mushing the extremes up into a (hot) putty for a custom thickness never gave the same result (well short of 17K/m). And yeah, Fuji extremes are crazy expensive!


Where do you buy these pads in bulk?


So far I've only been using Arctic and Chinese thermal pads, which can be compressed quite a lot.


----------



## Falkentyne

Don't think this has been asked, but does anyone know the thickness of the thermal pads on the 3090 Founders Edition? I'd like to have the pads I need in advance before opening things up.


----------



## bmagnien

ExDarkxH said:


> okay so i have a great sample 3090 ftw3 gaming and have been able to push this card very far on the stock air cooler and stock power limit 450w bios. In fact i was able to score 14,853 in port royal last night https://www.3dmark.com/pr/423566
> Since it's such good silicone I would hate to let this card go.
> I went ahead and ordered the optimus water block but it wont come in until next month
> 
> now, I am bothered by this 450w hard limit so i just ordered some 0.05ohm resistors
> If I shunt just the pcie express, would that in theory allow me to pull 141w from each 8pin in addition to the increase from pcie? so it would be over 500w ?


@ExDarkxH what were your offsets and max temps when you hit that score? fan speeds? thanks!


----------



## ZackZhan

Cholerikerklaus said:


> View attachment 2462921
> 
> 
> 
> Any suggestions what i can do better? ITS a TRIO 3090 with EVGA 500W bios Air cooled. Watercooler is ordered


Hey bro, this result is so impressive. Mind telling me the temps and noise with the 500W BIOS? My 3090 Gaming X Trio is arriving soon.
Thank you.


----------



## Thanh Nguyen

53c on air. What is your ambient? Im on water and hit 45c already.


----------



## AdamK47

TK421 said:


> Do you actually see higher scores due to increased power budget?
> 
> The X-Trio and some cards here apparently works with 500w EVGA bios due to not monitoring the PCIE power draw.


The 500W EVGA BIOS does not increase power draw on PCI-E with other vendor cards. It stays under 70ish Watts. The increase comes from the 8-pin connectors. As it should. EVGA's hardware implementation for power draw is to blame. I don't see how they can fix this with a BIOS. I'm just happy EVGA released it so that I can use it on my MSI Gaming X Trio.


----------



## AdamK47

dante`afk said:


> interesting observations over the last two pages.
> 
> I guess its either FE or Strix OC, nothing else.


No love for MSI? This EVGA fiasco has proven the Gaming X Trio to be a very solid card.


----------



## TK421

AdamK47 said:


> The 500W EVGA BIOS does not increase power draw on PCI-E with other vendor cards. It stays under 70ish Watts. The increase comes from the 8-pin connectors. As it should. EVGA's hardware implementation for power draw is to blame. I don't see how they can fix this with a BIOS. I'm just happy EVGA released it so that I can use it on my MSI Gaming X Trio.


The bug is that when PCIe slot power draw hits the cap, it limits the other power connectors.

This doesn't happen with Gigabyte and MSI Trio cards, but it affects cards with shunt monitoring built into the PCIe slot input.

Slot cap: 450W BIOS = 80W; 500W (beta) BIOS = 75W.


----------



## Sky3900

.


----------



## TK421

Sky3900 said:


> Welp, keeping the FE. It clocks slightly better, runs cooler, consumes less power, and is more stable than this 3090 FTW3 Ultra I just got. I guess a higher power limit and advertised boost clock doesn't necessarily mean a particular model of card is going to OC any better. What a waste of time.


The FTW3's board isn't as nice as the FE's; it's bigger for sure, but the components aren't as expensive or high quality.


----------



## BattlePhenom

TimeLord84 said:


> Can anyone benchmark Shadow of the Tomb Raider for me so I can compare? Also SW Squadrons is getting horrible performance for me, worse than my 1080 FTW in the same 9900K system and nVidia talking about RMA at this point, also getting a light buzzing noise once 3D is being rendered in all scenarios. I'm at 2560x1440 165Hz G-Sync
> 
> Some things are showing the performance that I believe they should, but others are leaving me with questions. Boundary ray tracing has some micro-stutters and I'm trying to uncover more.
> 
> Input would be greatly appreciated!
> View attachment 2462907
> 
> 
> game is at max settings, everything enabled 1440


I can giver 'er a shot. I've got a Tuf oc flashed to gigabyte gaming oc (390w).

Edit: This is what I got.


----------



## BattlePhenom

Outcasst said:


> Does anybody know which BIOS has the highest power limit that can be flashed to a 2-Pin card?
> 
> Edit: Nevermind, both EVGA FTW3 Ultra and STRIX bios's successfully flashed to my Palit GamePro OC


Would love to know how that went for you. You'd be the first to get a 3x8 pin bios to work with 2x8 pin card as far as I know.


----------



## TK421

BattlePhenom said:


> Would love to know how that went for you. You'd be the first to get a 3x8 pin bios to work with 2x8 pin card as far as I know.


Some 2x8-pin models can be flashed with a 3x8-pin BIOS, but watch out for PCIe power limiting; you have to shunt it.


----------



## Spiriva

TK421 said:


> some models 2x8 can be flashed with 3x8 but watch out for pcie power limiting, you have to shunt it


Some say that the Colorful 3090 400W BIOS works on a 2x8-pin card.









Colorful RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## TK421

Spiriva said:


> Some say that the colorful 3090 400w bios work on a 2x8pin card.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Colorful RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


ah I didn't know this existed


----------



## bmgjet

TK421 said:


> ah I didn't know this existed


It's a 3-plug BIOS.
So either you'll get a message at POST to plug in the 3rd connector, or, if it doesn't have a plug check, you'll get less power, since it will read plug 3 the same as plug 1.
400W - 75W for the slot leaves 325W across 3 plugs,
about 108W per plug. You only have 2 plugs, so ~217W + 75W from the slot, for a total power limit of roughly 292W.
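The split above can be sketched in a few lines of Python (the numbers are the BIOS's nominal budgets from this post, not measured values):

```python
# Rough power-budget split when a 3-plug BIOS runs on a 2-plug card:
# the BIOS divides the non-slot budget evenly across three 8-pins,
# but only two of them physically exist.

TOTAL_LIMIT = 400   # W, the Colorful 3-plug BIOS limit
SLOT_BUDGET = 75    # W, PCIe slot allocation

per_plug = (TOTAL_LIMIT - SLOT_BUDGET) / 3   # budget per 8-pin
usable = 2 * per_plug + SLOT_BUDGET          # what a 2-plug card can draw

print(f"per-plug budget: {per_plug:.0f} W")              # ~108 W
print(f"usable limit on a 2-plug card: {usable:.0f} W")  # ~292 W
```

Which is why a 400W 3-plug BIOS ends up *below* a good 2-plug BIOS like the Gigabyte Gaming OC one.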


----------



## Tias

bmgjet said:


> Its a 3 plug bios.
> So either youll get a message on POST to plug in the 3rd plug. Or if it doesnt have plug check then youll get less power since it will read plug 3 the same as plug 1.
> 400W - 75W slot.
> 325W / 3 plugs
> 109W over each plug, You only have 2 plugs so 218W + 75W from slot for a total of 293W power limit.


Have you tested this? And what changed with the 3000 series?

I could flash the Kingpin XOC BIOS on my 2080 Ti FE and it pulled ~500-550W no problem. I ran it like that for well over 6 months and never had a problem.
I did try the MSI Trio 2080 Ti BIOS and that did not work at all on my 2080 Ti FE; it flashed without a problem, but the card wouldn't boost at all.

So is it a fact that something changed with the 3000 series such that absolutely no 3x8-pin BIOS will work on a 2x8-pin card, or is it more a matter of finding which 3x8-pin BIOS works on a 2x8-pin card?


----------



## bmgjet

Ampere has strict power balancing between the plugs and the slot.
It's been known since the first time people tried 3-plug BIOSes on 2-plug cards that those are the two outcomes of using them: either stuck at POST telling you to plug power in, or a lower power limit than the Gigabyte Gaming OC one.

That BIOS is the best one for 2-plug cards: 370W with a 390W max slider.
It's for a 19-power-stage card but still works on the 18-stage ones. Those have 1 stage removed from the VRAM, so you'll notice the GPU-Z reading of the VRAM voltage is wrong.
And on some cards the middle DP port won't work while on that BIOS.

----------



## Johneey

TimeLord84 said:


> Can anyone benchmark Shadow of the Tomb Raider for me so I can compare? Also SW Squadrons is getting horrible performance for me, worse than my 1080 FTW in the same 9900K system and nVidia talking about RMA at this point, also getting a light buzzing noise once 3D is being rendered in all scenarios. I'm at 2560x1440 165Hz G-Sync
> 
> Some things are showing the performance that I believe they should, but others are leaving me with questions. Boundary ray tracing has some micro-stutters and I'm trying to uncover more.
> 
> Input would be greatly appreciated!
> View attachment 2462907
> 
> 
> game is at max settings, everything enabled 1440











Time Spy graphics score 22692, Port Royal 14800.
RTX 3090 at 390W.
Intel i9-10900K @ 5.4 GHz.


----------



## BattlePhenom

bmgjet said:


> Ampere has strict power balance between plugs and slot.
> Its been known since the first time people tried 3 plug bios on 2 plug cards thats the 2 out comes from using them. Either stuck on POST telling you to plug power in, Or lower power limit then the Gigabyte Gaming OC one.
> 
> That bios is the best one for the 2 plug cards. 370 with 390W max slider.
> Its a 19 power stage card but still works on the 18 power stage ones. They have 1 stage removed from the vram so youll notice the reading in gpuz is wrong on the voltage for the vram.
> And some cards the middle DP wont work while on that bios.


Is there any reason why an AIB wouldn't release a 400w bios for a 2x8 pin card? I know the FE is 400w with 2x8 pin to 12 pin and am wondering if that configuration has any advantage over 2x8 pin. I'd be happy with just a 400w bios with full coverage waterblock on 360mm rad .


----------



## bmgjet

BattlePhenom said:


> Is there any reason why an AIB wouldn't release a 400w bios for a 2x8 pin card? I know the FE is 400w with 2x8 pin to 12 pin and am wondering if that configuration has any advantage over 2x8 pin. I'd be happy with just a 400w bios with full coverage waterblock on 360mm rad .


Because it's out of spec.
The 8-pin spec is 150W.
390W is already pushing past that.


----------



## kot0005

AMD Radeon RX 6800XT alleged 3DMark scores hit the web - VideoCardz.com (videocardz.com)





Well, if these leaks are true, the 6900 XT should easily beat the 3090 at anything other than ray tracing.

May have to cancel my 3090 and get a 3080, since they canned the 20GB model. RIP.


----------



## TK421

bmgjet said:


> Ampere has strict power balance between plugs and slot.
> Its been known since the first time people tried 3 plug bios on 2 plug cards thats the 2 out comes from using them. Either stuck on POST telling you to plug power in, Or lower power limit then the Gigabyte Gaming OC one.
> 
> That bios is the best one for the 2 plug cards. 370 with 390W max slider.
> Its a 19 power stage card but still works on the 18 power stage ones. They have 1 stage removed from the vram so youll notice the reading in gpuz is wrong on the voltage for the vram.
> And some cards the middle DP wont work while on that bios.





bmgjet said:


> Because its out of spec.
> 8pins spec is 150W.
> 390W is already pushing past that.



So the only reliable way is to shunt the card. Done with hot glue, it shouldn't leave any traces of alteration or soldering, so you can still retain the warranty.

The 8-pin GPU connectors are under-specced by at least half, so there's still a lot of headroom with quality PSU cables and some airflow over them.


----------



## Wrathier

HyperMatrix said:


> Why? I thought you said Gigabyte is the best card brand ever and Asus and EVGA are stupid and you're boycotting them for switching from SP-Caps to MLCCs with their deceptive marketing.


Because of this: I still like the 4-year warranty etc., but honestly I ran into this issue with the power connectors, and the answer from their support was garbage. It seems only UK customers can get the power connectors replaced in a few days, and for the rest of us it's honestly not worth the wait. Since I have to wait anyway, I just decided to buy the better card. ASUS seems to be the best card overall so far, so I'm guessing it's worth the wait.

(By the way, only EVGA actually used deceptive marketing.)






3080 / 3090 / 3070 Gigabyte Eagle Gaming OC & Vision Power Connector Concerns (www.overclockers.co.uk)


----------



## HyperMatrix

kot0005 said:


> AMD Radeon RX 6800XT alleged 3DMark scores hit the web - VideoCardz.com
> 
> 
> Synthetic benchmark results of Radeon RX 6800XT graphics cards seem to be in the wild. AMD Radeon RX 6800XT, a tale of 3DMark There are now three sources providing benchmark results of Radeon RX 6800 XT based on the UL 3DMark GPU testing software stack. AMD Radeon RX 6800XT is the Navi 21-based...
> 
> 
> 
> 
> videocardz.com
> 
> 
> 
> 
> 
> Well if these leaks r true 6900xt should easily beat 3090 on anything other than raytracing.
> 
> May have to cancel my 3090 and get a 3080 since they canned 20gb model. Rip.


If these numbers are accurate, why would AMD not have put them out earlier? There is no benefit to withholding rock-solid performance like this, and they actually hurt themselves by showing performance numbers lower than the 3080's.


----------



## Thanh Nguyen

@bmgjet Removed 1 resistor and still not maxing out TDP.


----------



## Wrathier

Hi,

Is the founder edition 3090 as good as the ASUS GeForce RTX 3090 24GB ROG STRIX GAMING OC?


----------



## TK421

Thanh Nguyen said:


> @bmgjet Remove 1 resistor and stil not max out tdp
> View attachment 2463000


You're supposed to add another resistor on top of (in parallel with) the current one, not remove it.


----------



## Outcasst

TK421 said:


> some models 2x8 can be flashed with 3x8 but watch out for pcie power limiting, you have to shunt it


Yes, definitely power limited on the hardware side. Afterburner said I'm using all 480W, but the GPU was about 8C cooler and performance was much worse than on the stock BIOS, so it clearly hasn't worked.

I've not found a BIOS which gives me more performance than the stock Palit one as of yet. Even the TUF BIOS, which is for another 2x8-pin card with a 10W higher limit, performs about 7 fps worse in a Heaven benchmark run.

I'm going to give the Gigabyte BIOS a go later tonight and see how it works out. This Palit is a reference board, which means 14 phases, so I'm not sure how it will hold up. Should be OK though, since I assume these cards are overbuilt in the first place.


----------



## Jpmboy

Zurv said:


> I have the 3090 ftw ultra and two 3090 FEs
> 
> The stock is fine, but i messed them up a bit when i opened the GPU to replace the tim.
> I'm more bummed that i have to open the card up again to replace.. something.. then the waste of fujis (which also sucks)
> 
> @originxt RE: cleaning the putty. It isn't a big deal. Just wipe them away with a paper towel. (then follow up with your normal TIM cleaning.)


Ugh, EVGA TIM job eh?  

That aside, did you find a lottery winner among the three?


----------



## Avacado

Jpmboy said:


> Ugh, EVGA TIM job eh?
> 
> That aside, did you find a lottery winner among the three?


He somehow has 3 when people can't get one. Bot subscription paid off!


----------



## Jpmboy

Avacado said:


> He somehow has 3 when people can't get one. Bot subscription paid off!


Zurv? Dude has NGO (new gpu obsession). I checked into NGO rehab but flunked out fast.


----------



## Johneey

BattlePhenom said:


> I can giver 'er a shot. I've got a Tuf oc flashed to gigabyte gaming oc (390w).
> 
> Edit: This is what I got.
> View attachment 2462989





Johneey said:


> View attachment 2462998





Outcasst said:


> Yes, definitely power limited on the hardware side. Afterburner said I'm using all 480w but the gpu was about 8c cooler and performance was much worse than the stock bios so clearly hasn't worked.
> 
> I've not found a bios which gives me more performance than the stock palit one as of yet. Even the TUF bios which is another 2 pin card with a 10w higher limit performs about 7fps worse on a heaven benchmark run.
> 
> I'm going to give the gigabyte bios a go later tonight and see how it works out. This palit is a reference which means 14 phase so I'm not sure how it will hold up. Should be OK though since I assume these cards are overbuilt in the first place.


Go for the Gigabyte 390W BIOS; I use it on a Palit too. It works well and gives a performance boost.


----------



## Sheyster

Sky3900 said:


> Welp, keeping the FE. It clocks slightly better, runs cooler, consumes less power, and is more stable than this 3090 FTW3 Ultra I just got. I guess a higher power limit and advertised boost clock doesn't necessarily mean a particular model of card is going to OC any better. What a waste of time.


I may do the same. My FE boosts to 1980 without touching a thing in Afterburner and is able to maintain 2100 while gaming. It drops a few bins as it heats up. I'm gonna play with undervolting it a bit over the weekend.


----------



## Sheyster

Wrathier said:


> Hi,
> 
> Is the founder edition 3090 as good as the ASUS GeForce RTX 3090 24GB ROG STRIX GAMING OC?


The Strix is regarded as the best card available right now. Rock solid components, 480w stock BIOS. The only complaint has been that it's a little loud. Nothing a fan curve adjustment can't fix. The icing on the cake: It works with the EVGA 500w BIOS.


----------



## DrunknFoo

Anyone got the Strix OC here? Does it play nice with the EVGA 'XOC' beta BIOS? (I guess the question is, does it disable all but one DisplayPort?)


Man, skimming through the EVGA forums is pretty funny. Panic crying about the FTW3 Ultra being hardware limited or the card being defective... meanwhile Joe has flashed a 94.02.26.08.2e BIOS and is easily pushing insane power (voltage limited of course) with his slider only at about half of what it can be. Just wait for a new revision from EVGA, or a leak.


----------



## ExDarkxH

bmagnien said:


> @ExDarkxH what were your offsets and max temps when you hit that score? fan speeds? thanks!


For the benchmarks I used 100% fan speed, but the main reason for the nice temps on the air cooler is that I vertically mount my GPU in a View 91 and don't use the exterior glass, so it has good airflow and is isolated.
Ambient temp is about 20-21°C.

I was able to do +200 core and +900 memory without any issues.
I was also able to get a top 10 Firestrike Ultra score, but I haven't really focused at all on improving in FSU; I only ran the bench ~3 times.


----------



## Thanh Nguyen

Any idea why MSI Afterburner and HWiNFO64 show my card hitting only 270W, but I got 900 more points after the shunt?


----------



## hapklaar

long2905 said:


> hey man im going to get this card very soon. How was your flashing experience like so far? Did the gigabyte gaming oc vbios make a difference? Whats the highest boost clock and power draw can you get?


Flashing went well. The power draw did increase to 390-400W according to GPU-Z, after upping the power limit with Afterburner.

I was only able to get a few extra points in the benchmarks and I didn't like the fan curve (too loud), so I flashed back to stock bios. It's worth a try though. Keep us posted on how you're faring.


----------



## hapklaar

Thanh Nguyen said:


> Any idea why msi afterburn or hwinfo64 shows my card hit only 270w but I got 900 more points after the shunt?


Probably because your card thinks it's using less power after shunting. That's the whole idea behind the procedure: the voltage drop across those resistors determines how much current your card thinks it is drawing.
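To put that in numbers, here's a minimal sketch of why a shunted card under-reports power (assuming a stock 5 mΩ shunt swapped for 3 mΩ; the wattage figures are illustrative, not measured from any card):

```python
# Shunt-mod arithmetic: the card measures current from the voltage drop
# across a known shunt resistor (V = I * R). Lowering the effective
# resistance shrinks the measured drop, so the card under-reports power.

def reported_power(actual_watts, r_stock_mohm, r_effective_mohm):
    """Power the card *thinks* it draws after a shunt mod (hypothetical helper)."""
    # The voltage drop, and therefore the reported power, scales by
    # r_effective / r_stock while the real draw is unchanged.
    return actual_watts * (r_effective_mohm / r_stock_mohm)

# Example: a card really pulling 450 W with its 5 mOhm shunt replaced by
# 3 mOhm reports only 450 * 3/5 = 270 W, which is why tools read low.
print(reported_power(450, 5, 3))  # -> 270.0
```

Which is also why monitoring tools aren't "wrong" after the mod: they faithfully report the scaled-down value the card itself sees.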


----------



## rawsome

Spiriva said:


> Some say that the colorful 3090 400w bios work on a 2x8pin card.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Colorful RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


I am following this thread but I hadn't read that yet.

Can someone confirm that the Colorful BIOS actually works on a 2x8-pin card and gives more performance?

Thanks.


----------



## khunpunTH

Johneey said:


> graphic score timespy 22692 port royal 14800 .
> 3090 rtx 390 Watt.
> intel i9 10900k 5.4 GHz


I have a 2x8-pin card and flashed to the Gigabyte 390W BIOS. I only get a 215xx graphics score in Timespy.
You did a shunt mod to get those scores, right?


----------



## Outcasst

Johneey said:


> go for Gigabyte 390 Bios I use on Palit too. It’s works well with performance boost


Just gave the Gigabyte BIOS a go; it gives me 3fps more on Heaven, stock for stock, and the Gigabyte even has a lower boost clock. Impressed so far.



rawsome said:


> Iam following this thread but id not read that yet.
> 
> can someone confirm that the colorful bios actually works on 2 pin and gives more performance?
> 
> Thanks.


Will give it a go now.


----------



## Sheyster

DrunknFoo said:


> man, skimming through the evga forums, it's pretty funny. panic crying about the ftw3 ultra being hardware limited or the card being defective..... where joe has flashed a 94.02.26.08.2e bios and is easily pushing insane power (voltage limited of course) with his slider only like at half of what it can be.... just wait for a new revision from evga or a leak


The MSI guys posting/gloating over there are not helping matters at all. I know this is the Internet but they need to stop the trolling.


----------



## Outcasst

rawsome said:


> Iam following this thread but id not read that yet.
> 
> can someone confirm that the colorful bios actually works on 2 pin and gives more performance?
> 
> Thanks.


Just tried the Colorful BIOS; it flashes and works, but with very poor performance: 142fps on Heaven vs 161fps on the Gigabyte BIOS at stock. I'd say avoid this one; clock speeds were down to an average of 1550MHz.

I will do a DDU and fresh driver install to test again and see if anything changes because that is really bad...


----------



## Zurv

Jpmboy said:


> Ugh, EVGA TIM job eh?
> 
> That aside, did you find a lottery winner among the three?


The second FE comes Monday, so I've only played with two so far. The FTW GPU isn't great; I can't get 2GHz under load. It should make it with water, but.. meh. The FE is pretty good. It is rock solid stable at 2050.
I ordered a bunch of shunts and plan to ghetto shunt the FE (silver paint and glue).
Neither the Bitspower blocks for the FE nor the Optimus blocks will be here till mid-Nov, so I have some time to play. (Once in a loop I'm unlikely to want to change stuff around.)

How is your FTW doing?



Avacado said:


> He somehow has 3 when people can't get one. Bot subscription paid off!



f5 more 
I lucked out from the NVIDIA store a week after launch and saw it in stock before the internet/bots noticed. The other was from the EVGA store notify list. (Come on people, if you wanted it so bad, then you should have hit the notify button.) The 3rd was from a friend that got two...

oh the tin hat masses...


----------



## Outcasst

Outcasst said:


> Just tried the colourful BIOS, flashes and works but very poor performance, 142fps on heaven vs 161fps on the gigabyte bios at stock. I'd say avoid this one - clockspeeds were down in to an average of 1550Mhz.
> 
> I will do a DDU and fresh driver install to test again and see if anything changes because that is really bad...


Still trash performance. Avoid this BIOS on a 2 pin card.


----------



## Sheyster

Zurv said:


> the second FE comes Monday. So i've only played with two so far. The FTW GPU isn't great. I can't get 2ghz under load. It should make it was water, but.. meh. The FE is pretty good. It is rock solid stable at 2050.
> I ordered of bunch of shunts and plan to ghetto shunt the FE (silver paint and glue).
> The bitspower blocks for FE nor the optiums blocks will be here till mid nov  So i have some time to play. (Once in a loop i'm unlikely to want to change stuff around.)


What's your boost with that card with everything at default except the PL to 114%? Just trying to compare to mine which is 1980. I'm stable at 2100 BTW but power limited of course.


----------



## Jpmboy

Zurv said:


> the second FE comes Monday. So i've only played with two so far. The FTW GPU isn't great. I can't get 2ghz under load. It should make it was water, but.. meh. The FE is pretty good. It is rock solid stable at 2050.
> I ordered of bunch of shunts and plan to ghetto shunt the FE (silver paint and glue).
> The bitspower blocks for FE nor the optiums blocks will be here till mid nov  So i have some time to play. (Once in a loop i'm unlikely to want to change stuff around.)
> 
> *How is your ftw doing?*


Pony express shipping (yeah, I cheaped out, but it would not be here for the stupid signature had I done 2-day). Expecting delivery today, so I'll let you know. Disappointing that many folks are finding EVGA's flagship air-cooled offering to be "meh". Previous gen FTW3 SKUs have been well binned (in my experience). KPE to come, but without Tin in the mix it could be a 980 KPE misstep all over again.


----------



## Sheyster

Outcasst said:


> Still trash performance. Avoid this BIOS on a 2 pin card.


This is a foregone conclusion. Do not flash a 3x8-pin BIOS to a 2x8-pin card. It's been stated repeatedly in this thread and others.


----------



## Outcasst

Sheyster said:


> This is a foregone conclusion. Do not flash a 3x8-pin BIOS to a 2x8-pin card. It's been stated repeatedly in this thread and others.


Any ideas why the Colorful BIOS in particular is far worse than the others? I wonder if it's how they chose to balance the power through the connectors differently compared to the STRIX and FTW3.


----------



## zlatanselvic

Locked my Boost Clock in Timespy:

*https://www.3dmark.com/3dm/52039123?*

Still in search of a BIOS flash for the FE, but seeing all the discussions there might not be much meat on the bone left. Still choosing to stay on air with this FE, as the cooler looks too good to take apart.


----------



## Wrathier

zlatanselvic said:


> Still in search of a BIOS flash for the FE, but seeing all the discussions there might not be much meat on the bone left. Still choosing to stay on air with this FE, as the cooler looks too good to take apart.


FFS Now I'm so tempted to buy a Founders...


----------



## Avacado

Zurv said:


> oh the tin hat masses...


What is that supposed to mean? You calling me a flat Earther? I have no doubt that you F5'd yourself to death and the Bot comment was a joke. Thanks for the insult.

P.S. Never wanted this card or any of the new ones, if that helps.


----------



## Sheyster

Wrathier said:


> FFS Now I'm so tempted to buy a Founders...


It's actually great, better than I expected it to be. Build quality is awesome, akin to a literal brick, and the new cooler design seems very effective. Some folks have also reported good results with undervolting them and being able to maintain stable high clocks up to about 2000 or so on air.


----------



## zlatanselvic

Wrathier said:


> FFS Now I'm so tempted to buy a Founders...


I hear that. I was on the hunt for the Strix, the FTW3, or the FE. I was able to snipe this one a few weeks ago.


----------



## Sheyster

Outcasst said:


> Any ideas why the colorful BIOS in particular is far worse than the others? I wonder if its how they chose to balance the power through the connectors differently compared to the STRIX and FTW3.


Probably how they balance out the PL for the 8-pin connectors and rails. There could be some kind of ratio at play in relation to the PCI-E power slot that limits overall power draw as well. Without a deeper dive it's hard to know for sure.


----------



## pompss

Just saw this thread. Will the BIOS work on the 3090 FE? Are the improvements minimal?


----------



## Sheyster

pompss said:


> Just saw this thread . Will the bios work on the 3090 fe ? improvements are minimal ?


As far as I know there is no known BIOS that will work on the FE.


----------



## jomama22

The only way to get a 3x8-pin BIOS to work on a 2x8-pin card would be to shunt all the needed resistors aside from the PCIe slot shunt. This should allow you to make up for the absence of the 3rd plug. If using a 500W 3x8-pin BIOS, you would need to shunt enough to allow an extra 142 watts to pass through the two plugs, so 71 watts per plug, or a 50% increase in power. You may be able to get away with 10 mΩ shunts, but 8 or 5 would probably be safer/more effective (for stacked shunts, that is). A 3 mΩ in place of the 5 mΩ should also be fine.

That's all in theory at least. The nice thing is, you don't have to sweat the PCIe slot current usage with this as you would with a full shunt, but it does limit you to whatever the 3x8-pin BIOS allows.
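That arithmetic can be sketched out. The 500 W budget, 75 W slot share, and equal balancing across the 8-pins are assumptions taken from the post, not measured values:

```python
# Why a 500 W 3x8-pin BIOS implies ~50% more current per plug on a 2x8-pin card.
BIOS_LIMIT = 500.0  # W, the 3x8-pin BIOS target (assumed)
SLOT_W = 75.0       # W, PCIe slot share (assumed)

per_pin = (BIOS_LIMIT - SLOT_W) / 3  # what the BIOS expects from each 8-pin
extra_per_pin = per_pin / 2          # missing 3rd plug's load, split over two

print(per_pin)        # ~141.7 W, the "142 watts" figure
print(extra_per_pin)  # ~70.8 W extra per plug, i.e. a ~50% increase

# Hiding that 50% overdraw needs the effective shunt resistance at 2/3 of
# stock. Stacking 10 mOhm on a stock 5 mOhm shunt lands exactly there:
r_stock, r_stack = 5.0, 10.0
r_parallel = r_stock * r_stack / (r_stock + r_stack)
print(r_parallel / r_stock)  # ~0.667 -> card reports 2/3 of the real current
```

That 2/3 ratio is why 10 mΩ stacks are the borderline case and 8 or 5 mΩ give margin.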


----------



## ExDarkxH

Do you think the PCIe shunt can handle a 5 milliohm stacked shunt? Can PCI Express handle 150W? If you tried this with a riser, would it protect the motherboard in the case of failure?


----------



## Nizzen

Avacado said:


> What is that supposed to mean? You calling me a flat Earther? I have no doubt that you F5'd yourself to death and the Bot comment was a joke. Thanks for the insult.
> 
> P.S. Never wanted this card or any of the new ones, if that helps.


Looks like you posted in the wrong thread...
"*NVIDIA RTX 3090 Owner's Club"*


----------



## Wrathier

ExDarkxH said:


> Do you think the pcie shunt can handle a 5 miliohm stacked shunt? Can pci express handle 150w ? If you tried this with a riser would it protect the motherboard in the case of failure?


No that is highly unlikely.


----------



## ExDarkxH

Wrathier said:


> No that is highly unlikely.


So an 8 milliohm would be suitable? That's what Framechasers used.


----------



## Wrathier

ExDarkxH said:


> so an 8 miliohm would be suitable? Thats what framechasers used


Sorry, I'm no expert in shunt mods (never tried it and most likely never will); what I meant is that the PCIe riser cable won't protect the board.


----------



## ExDarkxH

https://www.mouser.com/ProductDetail/Welwyn-Components-TT-Electronics/ULR2-R008FT2?qs=sGAEpiMZZMtlleCFQhR%2FzTN4asDksfTkLRn38fKogII%3D


I'm looking at this product, if anyone with shunting knowledge can chime in. I have a 5 already but am eyeing the 8.


----------



## ColinMacLaren

What is expected from a Strix? I got a Strix OC, but the achievable clock speeds seem pretty low.

The absolute maximum I can squeeze out of it is 22,000 in Timespy with stock [email protected]% and the case side panel open. These settings are far from game-stable.
Game-stable is [email protected] with +1025 MHz on the RAM.

In 4K it's [email protected], occasionally hitting the power limit.


----------



## jomama22

ExDarkxH said:


> Do you think the pcie shunt can handle a 5 miliohm stacked shunt? Can pci express handle 150w ? If you tried this with a riser would it protect the motherboard in the case of failure?


I would not do this lol. A riser won't protect you from anything.

Stacking resistors gives these PCIe power limits:
20 mΩ = 93.75W
15 mΩ = 100W
12 mΩ = 106.25W
10 mΩ = 112.5W
8 mΩ = 121.875W

Pick your poison. Wouldn't want to go below 10 mΩ personally.
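Those wattages follow from the parallel-resistance formula. A sketch, assuming the stock slot shunt is 5 mΩ and a 75 W reported slot budget (both are assumptions; check your own card):

```python
# Stacking a resistor on the stock slot shunt lowers the effective resistance,
# so the card passes more real power for the same reported 75 W:
#   actual = budget * r_stock / parallel(r_stock, r_stack)

R_STOCK = 5.0       # mOhm, assumed stock slot shunt
SLOT_BUDGET = 75.0  # W, reported slot limit

def slot_power(r_stack_mohm):
    """Real slot draw when the card reports its full 75 W budget."""
    r_par = R_STOCK * r_stack_mohm / (R_STOCK + r_stack_mohm)
    return SLOT_BUDGET * R_STOCK / r_par

for r in (20, 15, 12, 10, 8):
    print(f"{r} mOhm stacked -> {slot_power(r):.3f} W real")
# 15, 12, 10 and 8 mOhm give 100 / 106.25 / 112.5 / 121.875 W;
# 20 mOhm works out to 93.75 W under this model.
```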


----------



## Avacado

Nizzen said:


> Looks like you posted in the wrong thread...
> "*NVIDIA RTX 3090 Owner's Club"*


Sorry, I must have mistaken you for an *******. Not owning one does not mean I don't follow the progress the community is making on it by and large. If you love tech, you follow it even if you don't own it.

I didn't see anyone complaining when I posted the next Newegg release dates for 3090 stock, to help those who might still not have been able to get theirs.


----------



## GTANY

ColinMacLaren said:


> What is expected with a Strix. I got a Strix OC, but the achievable clock speeds seem pretty low.
> 
> The absolute maximum I can squeeze out of it is 22.000 Timespy with stock [email protected]% and the case side panel open. This settings if far from gamestable.
> Gamestable is [email protected] with +1025 MHz on the RAM.
> 
> In 4K its [email protected] and occasionally hitting the power limit.


22000 seems a good score. Did you compare it to others' results? Is it possible that you'd achieve the same score with lower GPU and RAM frequencies?


----------



## ExDarkxH

jomama22 said:


> I would not do this lol. A riser won't protect you from anything.
> 
> Stacking resistors gives these PCIe power limits:
> 20 mΩ = 93.75W
> 15 mΩ = 100W
> 12 mΩ = 106.25W
> 10 mΩ = 112.5W
> 8 mΩ = 121.875W
> 
> Pick your poison. Wouldn't want to go below 10 mΩ personally.


perfect thank you


----------



## ColinMacLaren

GTANY said:


> 22000 seems a good score. Did you compare it to others'results ? It is possible that you achieve the same score with lower GPU and RAM frequencies ?


More important than the actual score (where the influence of other system components comes into play) is the actual achievable clock rate.

Someone in this thread managed to squeeze an average clock of 2133MHz out of his 450W EVGA, while I only managed to get 2026 out of my 480W Strix on the same benchmark.

*https://www.3dmark.com/pr/428809*


----------



## jomama22

ExDarkxH said:


> perfect thank you


Also note that the PCIe slot will limit the total overall power it will allow you to draw. Depending on the BIOS's max (let's say 400W), 75W would be 18.75% of the total power. So, if using the 12 mΩ resistor on the PCIe slot shunt, your max new power draw would be 566W. You would need 10 mΩ (or lower) stacks on the other shunts to give you the extra headroom you get from the PCIe shunt, but it's probably better to go with 8 mΩ, or even 5, in case the power balancing draws less than 18.75% of its power from the PCIe slot. That said, I would imagine it's probably more than 18.75%, so the total power would be lower and the lower resistance wouldn't be as necessary.
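The ratio argument works out numerically like this (assuming a 400 W BIOS cap, a 75 W slot share, and a stock 5 mΩ slot shunt, as in the post; all of these are illustrative):

```python
# If the BIOS cap applies to *reported* power, scaling down only the slot
# shunt's reading raises the real total the card will allow.
BIOS_CAP = 400.0   # W, assumed BIOS maximum
SLOT_SHARE = 75.0  # W, assumed PCIe slot share

print(SLOT_SHARE / BIOS_CAP)  # 0.1875 -> the 18.75% slot fraction

# 12 mOhm stacked on the stock 5 mOhm slot shunt:
r_stock, r_stack = 5.0, 12.0
r_par = r_stock * r_stack / (r_stock + r_stack)  # ~3.53 mOhm effective
scale = r_stock / r_par                          # real watts per reported watt
print(BIOS_CAP * scale)  # ~566.7 W -> the "566w" max total in the post
```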


----------



## Sheyster

jomama22 said:


> The only way to get a 3 pin bios to work on a 2 pin would to be to shunt all the needed resistors aside from the pcie slot shunt. This should allow you to make up for the absence of the 3rd pin. If using a 500w 3pin bios, you would need to shunt enough to allow 142 watts extra to pass though the two pins, so 71 watts per pin or a 50% increase in power. You may be able to get away with 10mohm shunts but 8 or 5 would probably be safer/more effective (for the stacked shunts that is). A 3ohm in place of the 5 ohm should also be fine.
> 
> That's all in theory at least. The nice thing is, you don't have to sweat the pcie slot current usage with this as you would just full shunting, but does limit you to whatever the 3pin bios allows.


Why don't you test out said theory and report back?  In the meantime, the safe bet is to use the stock BIOS and shunt the card as minimally as possible.


----------



## jomama22

Sheyster said:


> Why don't you test out said theory and report back?  In the meantime, the safe bet is to use the stock BIOS and shunt the card as minimally as possible.


I have an FE, so not possible lol. But it would be worth a try for anyone who is already planning on shunting their 2x8-pin card. It wouldn't cause any harm for them to try before slapping the PCIe slot resistor on and going that route.


----------



## anethema

I replaced all the large shunts on my FE yesterday with 3 mΩ.

I tried not doing the PCIe slot shunt, but oddly enough GPU-Z reported MVDDC taking 250W under load while the entire card was reporting like 150W.

Got considerable drops even from stock in Port Royal. PerfCap reason listed as power. Very odd behavior.

So I said **** it, and shunted the PCIe as well with a 3 mΩ replacement.

Totally resolved the issue. MVDDC dropped by a ton somehow, and the card pulls 550W+ under load. Got me a PR score of 14980 with minimal tweaking. Need to get some cold onto it and try to push into the 15ks.

I personally used hot air rework to heat the resistor and the board around it, then kept touching the sides with the iron. Eventually the shunt floated free.

Used ChipQuik solder paste to put the new shunts on.


----------



## Sky3900

anethema said:


> I replaced all the large shunts on my FE yesterday with 3mohm.
> 
> I tried not doing the PCI slot shunt, but oddly enough GPU-Z reported MVDDC taking 250W under load while the entire card was reporting like 150W.
> 
> Got considerable drops even from stock in port royal. Percap reason listed as power. Very odd behavior.
> 
> So I said **** it, shunted PCIe as well with 3mohm replacement.
> 
> Totally resolved the issue. MVDDC dropped by a ton somehow, and the card pulls 550W+ under load. Got me a PR score of 14980 with minimal tweaking. Need to get some cold onto it and try to push into the 15ks.
> 
> I personally used hot air rework to heat the resistor and board around it then kept touching the sides with the iron. Eventually the shunt floated free.
> 
> Used chipquik solder paste to put the new shunts on.


Did you originally stack resistors on your first try and then replace them later on? I am wondering if there are any issues with clearance between the resistors/heatsink/backplate. I am thinking about stacking 010s on the existing 005s.


----------



## Wrathier

So it seems I might have the possibility to buy an ASUS TUF OC RTX 3090 24GB Gaming very soon, instead of a Strix in 2 months.

Will the Gigabyte BIOS be worth switching to?

Does anyone have any clue on what to expect to OC core and mem?

Does anyone have a stable curve to share? Perhaps better results with undervolting; I don't know much about that part.

Thanks in advance for your help and what to expect from that model.


----------



## smonkie

Wrathier said:


> So it seems I might have the possibility to buy a ASUS TUF OC RTX 3090 24GB Gaming very soon instead of a Strix in 2 months.
> 
> Will the Gigabyte BIOS be worth switching to?
> 
> Does anyone have any clue on what to expect to OC core and mem?
> 
> Does anyone have a stable curve to share? - Perhaps better results with undervolt I don't know so much about that part.
> 
> Thanks in advance for your help and what to expect from that model.


If you don't care about Benchmark scores, you are not going to notice the difference at all between TUF and Strix. And I mean that literally. Even a 125W difference would not get much more than 3-4fps of difference.

Go for the TUF, you are never going to regret it.


----------






## anethema

Sky3900 said:


> Did you originally stack resistors on your first try and then replace them later on? I am wondering if there are any issues with clearance between the resistors/heatsink/backplate. I am thinking about stacking 010s on the existing 005s.


Never stacked. Just replaced.


----------



## Wrathier

smonkie said:


> If you don't care about Benchmark scores, you are not going to notice the difference at all between TUF and Strix. And I mean that literally. Even a 125W difference would not get much more than 3-4fps of difference.
> 
> Go for the TUF, you are never going to regret it.


Really, 3-4fps? OK, I thought it would be a bit more, considering 375W vs 480W would mean a massive difference.

Do you have the Tuf and do you have it overclocked?


----------



## Sky3900

anethema said:


> Never stacked. Just replaced.


My mistake, that was someone else.


----------



## Falkentyne

anethema said:


> I replaced all the large shunts on my FE yesterday with 3mohm.
> 
> I tried not doing the PCI slot shunt, but oddly enough GPU-Z reported MVDDC taking 250W under load while the entire card was reporting like 150W.
> 
> Got considerable drops even from stock in port royal. Percap reason listed as power. Very odd behavior.
> 
> So I said **** it, shunted PCIe as well with 3mohm replacement.
> 
> Totally resolved the issue. MVDDC dropped by a ton somehow, and the card pulls 550W+ under load. Got me a PR score of 14980 with minimal tweaking. Need to get some cold onto it and try to push into the 15ks.
> 
> I personally used hot air rework to heat the resistor and board around it then kept touching the sides with the iron. Eventually the shunt floated free.
> 
> Used chipquik solder paste to put the new shunts on.


You don't feel nervous about 125W from the PCIE slot? That's what the shunt calculator says you're going to pull at 100% power draw with 003 mOhm shunts replaced.



Sky3900 said:


> Did you originally stack resistors on your first try and then replace them later on? I am wondering if there are any issues with clearance between the resistors/heatsink/backplate. I am thinking about stacking 010s on the existing 005s.


010 stacked on top of 005 should be 112.5W from the PCIe slot. Should be safer than the above person's 125W for sure.
If you want to be safe, 015 mΩ shunts stacked (100W PCIe). 010 mΩ stacked on top of the originals should give the maximum scores, but make sure your power connectors can handle it. It's up to you.


----------



## dante`afk

jomama22 said:


> The only way to get a 3 pin bios to work on a 2 pin would to be to shunt all the needed resistors aside from the pcie slot shunt. This should allow you to make up for the absence of the 3rd pin. If using a 500w 3pin bios, you would need to shunt enough to allow 142 watts extra to pass though the two pins, so 71 watts per pin or a 50% increase in power. You may be able to get away with 10mohm shunts but 8 or 5 would probably be safer/more effective (for the stacked shunts that is). A 3ohm in place of the 5 ohm should also be fine.
> 
> That's all in theory at least. The nice thing is, you don't have to sweat the pcie slot current usage with this as you would just full shunting, but does limit you to whatever the 3pin bios allows.


Why would you need a 500W BIOS on a shunted card?


----------



## dante`afk

Wrathier said:


> Really 3-4 fps ok I thought it would be a bit more considering the 375w vs the 480w would mean a massive difference.
> 
> Do you have the Tuf and do you have it overclocked?


200W BIOS, 300 or 1000W, you won't see any big difference in games, only in benchmarks for the e-peen.


----------



## Thanh Nguyen

Hey guys, do we need to place the resistor face up so we can see the R005, or does it not matter? My card works perfectly. It's just that HWiNFO64 does not show the GPU power as I expected: only 276W instead of 500-600W.


----------



## Chamidorix

bmgjet said:


> Ampere has strict power balance between plugs and slot.
> Its been known since the first time people tried 3 plug bios on 2 plug cards thats the 2 out comes from using them. Either stuck on POST telling you to plug power in, Or lower power limit then the Gigabyte Gaming OC one.


Pretty frustrating that you just keep repeating the stuff you think you know but ignore the questions that poke holes in it (after crawling through all your posts and taking many hours testing and researching).

Power balancing between each 8-pin and the PCIe slot is locked in each vendor-specific BIOS. It is not a global generation thing. EVGA has stricter balancing than other vendors. It is completely possible that a Colorful 3x8-pin BIOS has completely lax balancing and works just fine on a 2x8-pin card.

10 amp fuses have way higher tolerances than the implied 10 amp at 1.1v = 110W. I'm trying to find some actual testing on the exact fuses EVGA cards are using, but you certainly should be able to pull at least double before they blow. This is a known thing in LN2 OC (GPU fuse tolerances). It explains why Framechasers hasn't had issues with his 6x-shunted 650W XC3, with over 100W on the PCIe. Yes, I am aware XOC will also volt mod to get higher wattage at the same currents.

You don't necessarily need to shunt the PCIe; you can shunt the 8-pins quite high before hitting 2D clocks, even when the PCIe is at 75W. As you know, you will hit the primary VCORE1 power limit first, but after shunting those you can just continue increasing resistance on the 8-pins OR increase it on the PCIe. A nice compromise does indeed seem to be about 100W on the PCIe and then as high as you can go on the 8-pins before hitting 2D clocks. But even on EVGA 10 amp fuses you should be able to very safely pull 150W, and therefore have headroom into the 1000W range (shunting the 8-pins without hitting 2D-clock reported draws) even with the strict balancing.


----------



## Zurv

Thanh Nguyen said:


> Hey guys, do we need to place the resistor face up to see the r005 or it does not matter? My card works perfectly. Its just hwinfo64 does not show the gpu power as I thought.only 276w instead of 500w-600w.


That is the point of the shunt mod. Trick the system to think it is using less power than it really is (ie, getting around the power limit by thinking that it isn't at the limit.)


----------



## Sky3900

Falkentyne said:


> You don't feel nervous about 125W from the PCIE slot? That's what the shunt calculator says you're going to pull at 100% power draw with 003 mOhm shunts replaced.
> 
> 
> 
> 010 stacked on top of 005 should be 112W from the PCIE slot. Should be safer than above person's 125W for sure.
> If you want to be safe, 015 mOhm shunts stacked (100W PCIE). 010 stacked mOhm on top of originals should give the maximum scores but make sure your power connectors can handle it. It's up to you.


Good point. I was also thinking of limiting max power draw by just using the power slider in Afterburner. I have some 004s on the way in addition to the 010s. Replacing with 004s will keep the PCIe power draw down around 100W as well.


----------



## LVNeptune

All these EVGA BIOS upgrades, but the FE is left in the dust. We too would like an extra 50W boost.


----------



## NoDoz

New owner. Rough install right now... things are going to get changed around with a new build, but it's installed.


----------



## LVNeptune

Thanh Nguyen said:


> Hey guys, do we need to place the resistor face up to see the r005 or it does not matter? My card works perfectly. Its just hwinfo64 does not show the gpu power as I thought.only 276w instead of 500w-600w.


All you are doing is tricking the GPU into thinking it is using LESS power. The only proper way to get current draw is to use a Kill-A-Watt and measure from the wall.


----------



## Baasha

So there were ZERO FE drops this week and NewEgg just had a drop that lasted about 3 seconds.

Meanwhile a RoG Strix 3090 OC on fleaBay sold for $4500.  

Someone punch me and wake me up next year. This is painfully stupid.


----------



## Falkentyne

Baasha said:


> So there were ZERO FE drops this week and NewEgg just had a drop that lasted about 3 seconds.
> 
> Meanwhile a RoG Strix 3090 OC on fleaBay sold for $4500.
> 
> Someone punch me and wake me up next year. This is painfully stupid.


Are you sure it "sold" for $4500?
Sounds more like a fake account bought it, to "mail" it to himself, to try to make people THINK it's being bought for $4500. I don't know if you can check the purchase history of bidders anymore; it worked the last time I tried, 15 years ago...


----------



## kx11

Baasha said:


> So there were ZERO FE drops this week and NewEgg just had a drop that lasted about 3 seconds.
> 
> Meanwhile a RoG Strix 3090 OC on fleaBay sold for $4500.
> 
> Someone punch me and wake me up next year. This is painfully stupid.


I'm 4th in line ordering one from an online store in Hong Kong, $2200 with shipping.

I should have it next weekend.


----------



## anethema

Falkentyne said:


> You don't feel nervous about 125W from the PCIE slot? That's what the shunt calculator says you're going to pull at 100% power draw with 003 mOhm shunts replaced.


Well I somewhat did. The issue is, shunt modding the other 5 2512-package shunts literally LOWERS performance if you don't shunt the PCIe on the 3090 FE. I dropped at least a thousand points in Port Royal, and the card was throttling with "Power Limit" as the reason even though it was only drawing 190W according to GPU-Z. The odd part was that MVDDC was showing 260W at the same time. It really messed with some kind of power-balancing algorithm.

The thing that made me feel better about it, is a good few people had done that mod already and had success, so hoping for the best I guess. So far no mobo issues even with furmark.


----------



## dante`afk

Baasha said:


> So there were ZERO FE drops this week and NewEgg just had a drop that lasted about 3 seconds.
> 
> Meanwhile a RoG Strix 3090 OC on fleaBay sold for $4500.
> 
> Someone punch me and wake me up next year. This is painfully stupid.


The USA is pretty peasant-like in terms of numbers for the 3090. If I look over at Germany, they get so many Strix OCs and other cards available, it's unreal, their availability vs ours.


----------



## jomama22

dante`afk said:


> why would you need a 500w bios on shunted card?


This was merely an explanation of how a 3-pin bios would be able to work on a 2-pin card. If your aim is 500W, you have a 2-pin card, and you don't want to do a PCIe slot shunt, this is a way to do that.


----------



## Falkentyne

jomama22 said:


> This was merely an explanation for how a 3pin bios would be able to work on a 2pin card. If your aim is 500w, you have a 2pin card and you don't want to do a pcie slot shunt, this is a way to do that.


But why would you even need a bios flash if you were doing a shunt mod to begin with? The absolute only reason to do a bios flash is if your bios didn't allow power limit adjustment above 100%, or if for some reason the PL slider were locked, in which case just flash any 2 pin bios which allows PL adjustments. That gives you a higher range to play with.


----------



## jomama22

anethema said:


> Well I somewhat did. The issue is, shunt modding the other 5 2512 package shunts literally LOWERS performance if you don't shunt the PCIe on the 3090FE. I dropped at least a thousand points in port royal, and card was throttling with "Power Limit" as the reason even though the card was only drawing 190W according to GPU-Z. But the odd part was the MVDDC was showing 260W at the same time. It really messed with some kind of balancing alg.
> 
> The thing that made me feel better about it, is a good few people had done that mod already and had success, so hoping for the best I guess. So far no mobo issues even with furmark.


To add to this, so long as it's not a sustained load of 125W it's probably fine. It's also worth thinking about the actual total board power you get from different shunts on the PCIe. 3 mOhm replacements all around are going to give the FE around 625W of total available power. If you never plan to actually push that limit except for benchmarking, then using a 3 mOhm on the PCIe slot probably isn't a huge deal.

So long as you don't forget to turn the power limit down when you aren't benching lol.
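
To put rough numbers on the point above: with every shunt replaced, the whole enforced budget scales by the inverse of the resistance ratio. A sketch, assuming 5 mOhm stock shunts and a 375 W maxed power slider (both are assumptions; check your own card and BIOS before trusting the figures):

```python
def effective_limit_watts(bios_limit: float, stock_mohm: float, new_mohm: float) -> float:
    """Real power the card can draw before the (fooled) limiter kicks in."""
    return bios_limit * stock_mohm / new_mohm

# 5 mOhm -> 3 mOhm all around scales a 375 W budget up by 5/3:
print(effective_limit_watts(375, 5.0, 3.0))  # 625.0, the ~625 W figure above

# The PCIe slot rail scales the same way: a 66 W share becomes 110 W of real draw,
# well past the 75 W the slot connector is specified for.
print(effective_limit_watts(66, 5.0, 3.0))   # 110.0
```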


----------



## jomama22

Falkentyne said:


> But why would you even need a bios flash if you were doing a shunt mod to begin with? The absolute only reason to do a bios flash is if your bios didn't allow power limit adjustment above 100%, or if for some reason the PL slider were locked, in which case just flash any 2 pin bios which allows PL adjustments. That gives you a higher range to play with.


Again, because of all the questions of "will X 3-pin bios work on my 2-pin card?". There isn't a single 2-pin bios over 400W atm; if you want more, you need a 3-pin card or be willing to shunt the PCIe slot. Using a 3-pin bios (say, a Strix 480W bios) means not needing to do a PCIe slot shunt mod to achieve that power limit. If you were to just shunt to increase your stock bios limit, you NEED to shunt the PCIe slot, which is something that needs to be considered more thoughtfully than shunting the pin connectors. Blowing out a PCIe slot isn't too difficult when running 100+W through it as a sustained load.


----------



## bmgjet

Every 3-plug bios will result in a lower power limit than using the Gigabyte Gaming OC bios.
Take the 500W bios for example:
you will only get 357W out of it on a 2-plug card. 141W per 8-pin plug and 75W from the slot.

Use the Gigabyte bios:
at 100% power limit it's 70W from the slot and 150W per 8-pin plug.
At 105% power limit it's 78W from the slot and 156W per 8-pin plug.
Stack 15 mOhm resistors on each shunt: 100W slot, 210W per 8-pin plug, for 520W.
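
Stacking a resistor on top of a shunt puts the two in parallel, which is gentler than desoldering the original. A sketch of the arithmetic behind bmgjet's numbers, assuming 5 mOhm stock shunts (an assumption; measure yours):

```python
def parallel_mohm(r1: float, r2: float) -> float:
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def scaled_watts(limit: float, stock_mohm: float, new_mohm: float) -> float:
    """Real draw once the limiter reads the smaller combined shunt."""
    return limit * stock_mohm / new_mohm

combined = parallel_mohm(15.0, 5.0)               # 3.75 mOhm -> readings scale by 3/4
print(scaled_watts(78, 5.0, combined))            # 104.0 W slot (~the "100 W" above)
print(scaled_watts(156, 5.0, combined))           # 208.0 W per 8-pin (~"210 W")
print(scaled_watts(78 + 2 * 156, 5.0, combined))  # 520.0 W total, matching the figure above
```

The per-rail values round to bmgjet's 100 W / 210 W, and the 520 W total matches exactly.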


----------



## dev2035

ColinMacLaren said:


> More important than the actual score (where the influence of other system components comes into play) is the actual achievable clock rate.
> 
> Here, someone in this thread managed to squeeze an average clock of 2133MHz out of his 450W EVGA, while I only managed to get 2026 out of my 480W Strix on the same benchmark.
> *https://www.3dmark.com/pr/428809*


Strix OC here too. I've got ~2050MHz average on air so far. RAM seems to crash if I go over +800MHz.



https://www.3dmark.com/compare/pr/418723/pr/418628/pr/418704



I'm also wondering how good I've scored on the chip lottery, kinda hard to find Strix results to compare with right now.


----------



## cstkl1

dev2035 said:


> Strix OC here too. I've got ~2050MHz average on air so far. RAM seems to crash if I go over +800MHz.
> 
> 
> 
> https://www.3dmark.com/compare/pr/418723/pr/418628/pr/418704
> 
> 
> 
> I'm also wondering how good I've scored on the chip lottery, kinda hard to find Strix results to compare with right now.


Really? I haven't seen an Asus card that doesn't hit 21Gbps.


----------



## long2905

ROGKilla said:


> edit - got the answer


What is this question that you got the answer for? Are you using the Inno3D iChill X4 card? Can you take a picture of the PCB? What's your temp on the stock air cooler? And noise too, while you're at it.


----------



## Spiriva

Dunno why it says unknown on my system, but this is my best so far:


System is the same as the one that shows, but it's colder outside now; -8c made my score go up some.

PNY 3090 flashed with the Aorus Master 390w bios. +170/+1300 (max it boosted to was ~2225mhz)

Used the Nvidia hotfix driver 456.98


----------



## shiokarai

Spiriva said:


> Dunno why it says unknown on my system, but this is my best so far:
> 
> System is the same as the one that shows, but its colder outside now, -8c made my score go up some.
> 
> PNY 3090 flashed with the Aorus Master 390w bios. +170/+1300 (max it boosted to was ~2225mhz)
> 
> Used the Nvidia hotfix driver 456.98


Shunted? What's average clock?


----------



## Spiriva

shiokarai said:


> Shunted? What's average clock?


No shunt, on water tho (EK block). I'm not sure about average clock; 3DMark didn't report anything ("systeminfo data is incomplete"), I have no clue why.

Only things changed are: new Nvidia drivers, new iCUE (which was turned off anyhow) and the new Windows 10 version 20H2.

UPDATE 1: Apparently it's the Windows 10 version 20H2 that screwed 3DMark over. It's the same on my "tv system" (AMD 1700X & GTX 1070).
I get no prompt to update 3DMark when starting tho. Bought via Steam.

UPDATE 2: From the "tv system" and it seems to be the same; this is the free version of 3DMark tho.


----------



## long2905

CrazyThunderbird said:


> Wanted to leave feedback here:
> Tested the *Gigabyte RTX 3090 Gaming OC* bios on my *Inno3d iChill X3 RTX 3090* and one of the 3 DisplayPorts won't work anymore (the middle one, just to let ppl know)


How is your temperature?


----------



## Manya3084

Asus TUF 3090 with OC bios and shunt mod. The best score I've managed so far. Also wired in the EVC2 voltage controller.
My chiller awaits the EK block... when it eventually arrives....


----------



## krs360

Anyone flashed the Aorus Master bios on a Zotac Trinity 3090 yet?


----------



## Johneey

Yes, but avoid the Aorus Master bios. The Gigabyte one is better


----------



## krs360

Johneey said:


> Yes, but avoid the Aorus Master bios. The Gigabyte one is better


Hi there, what's the issue with the bios? It seemed to work quite well for Spiriva on his PNY card. I have tried the GB bios and was just wondering how it compares to the Aorus Master in terms of benchmarks etc. I can get to like 13,9xx on air using the GB bios in Port Royal.


----------



## Johneey

krs360 said:


> Hi there, what's the issue with the bios? Seemed to work quite well for Spiriva on his PNY card I have tried the GB bios and was just wondering how it compares to the Aorus Master in terms of benchmarks etc. I can get to like 13,9xx on air using the GB bios in Port Royal.


Don't know, but I get lower fps in-game with the Aorus than with the Gigabyte


----------



## Spiriva

Johneey said:


> Yes, but avoid the Aorus Master bios. The Gigabyte one is better


For my PNY the Aorus Master was better. I guess the advice would be to try both and pick the better one.


----------



## Johneey

Spiriva said:


> For my PNY the Aorus Master was better. I guess the advice would be to try both and pick the better one.


Hm okay. This was my run with the Gigabyte bios: https://www.3dmark.com/pr/398148 and with the Aorus I get around 100 points less

With +208/+1250 MHz and water cooling, 20c under load. No shunt mod etc.


----------



## Spiriva

Johneey said:


> hm okay. this was my https://www.3dmark.com/pr/398148 gigabyte and with aorus get around -100 scores
> 
> with +208 / + 1250 mhz and Watercooling 20 C on Load . No Shuntmod etc


The 2x8pin cards are looking pretty good tbh


----------



## Wrathier

Could someone please advise me? I'll soon get the TUF; shall I flash the bios from Gigabyte, or does the extra 20W not matter?
Anyone got an idea for core and mem settings on the TUF?

thanks.


----------



## Spiriva

Wrathier said:


> Could someone please advise me, I’ll soon get the TUF shall I flash the bios from gigabyte or does the extra 20W not matter?
> Anyone got an idea to core and mem settings on the TUF?
> 
> thanks.


I would flash either of the gigabyte bioses (OC & Aorus). I tried the TUF bios on my card, but the Aorus was better on my PNY 3090.


----------



## Wrathier

Spiriva said:


> I would flash either of the gigabyte bioses (OC & Aorus). I tried the TUF bios on my card, but the Aorus was better on my PNY 3090.


Ok.
What happens when I flash? The card shuts down for 60 seconds right? Do you know if I lose any ports?


----------



## GTANY

Manya3084 said:


> Asus TUF 3090 with OC Bios and shunt mod. The best score I've managed so far. Also wired in the EVC2 voltage controller.
> My chiller await the EK block...when it eventually arrives....
> 
> View attachment 2463083


I may contemplate buying the same model, shunt and voltage mod it, and watercool the GPU with a modded CPU waterblock. I have a few questions:

1. Did you rely on der8auer's YouTube video? What shunts did you mod? Did you modify the PCIe shunt?
2. What is your max power consumption?
3. What is your max temperature?
4. According to your screenshot, your GPU frequency is not constant? Because of the temperature?
5. Is it possible to check the VRM temperature with HWiNFO? I have read that the Strix has a VRM thermocouple, which can be used by this software.
6. Did you check the backplate temperature behind the VRM with an IR thermometer?
7. For the EVC2 voltage controller, was it easy to connect it to the card?
8. What voltage did you add to the GPU?

I am a little bit worried about long-term reliability if I use the graphics card shunt- and voltage-modded for games for 2 years: the power stage has 16 core phases, each 50 A max, which is 800 W max at 1 V. A 3090 shunt- and voltage-modded may go beyond this limit, which is risky, even with the GPU watercooled. Your opinion?
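
The arithmetic in that last paragraph can be checked directly: total VRM capability is phases times per-phase current, and the current the core rail must supply is power divided by core voltage. A sketch using GTANY's assumed figures (16 core phases at 50 A each; verify against your card's actual power-stage datasheet, and note the core rail carries only part of total board power):

```python
def vrm_capacity_amps(phases: int, amps_per_phase: float) -> float:
    """Nominal combined current the power stages can deliver."""
    return phases * amps_per_phase

def gpu_amps(watts: float, vcore: float) -> float:
    """Current the core rail must supply at a given power and voltage."""
    return watts / vcore

print(vrm_capacity_amps(16, 50.0))  # 800.0 A -> 800 W at 1.0 V, as stated above
# A hypothetical shunt- and volt-modded card putting 600 W through the core rail at 1.1 V:
print(gpu_amps(600, 1.1))           # ~545 A, so roughly 68% of capacity in this model
```

By this rough model the VRM has headroom at realistic loads, but sustained operation near the rating is exactly the long-term-reliability question GTANY raises.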


----------



## Spiriva

Wrathier said:


> Ok.
> What happens when I flash? The card shuts down for 60 seconds right? Do you know if I lose any ports?


I tried a few bioses, and after I flashed, nothing special happened. It says you need to reboot for the changes to become active or some such.

I normally flash the bios, then run DDU: reboot to safe mode and let DDU uninstall the drivers. Then reboot, install the Nvidia driver, and you're done.


----------



## Glerox

Baasha said:


> First off all, congrats on getting 2x 3090s - which ones? The FE?
> 
> Anyway, SLI is all but dead on this platform.
> 
> SFR Checkerboard was removed after the 446 branch drivers (450.xx and above) so it cannot be forced via NVCP.
> 
> Secondly, the handful of games that are purported to work are only DX12/Vulkan API games - Strange Brigade is the only game that works with both Vulkan & DX12 MGPU almost perfectly.
> 
> That video you talk about of that nincompoop is garbage for several reasons; he couldn't even get Red Dead Redemption 2 running and not to mention he said he was running the Z490 platform when he clearly wasn't.
> 
> I'm still going to get 2x 3090 in SLI but they have disabled even older games (DX11) as the GPUs will simply ignore DX11 profiles.
> 
> There is a trick up my sleeve but I am not going to disclose it until I have 2x 3090s in my system.
> 
> For all intents and purposes, bid farewell to SLI/MGPU. The least Nvidia could have done is to leave the older game profiles enabled - there are literally dozens of games that would've been incredible with 3090 SLI in native 8K (>60fps) such as GTA V, The Witcher 3, BFV etc. etc. Oh well...


On a scale from 1 to 10, what’s the probability that your trick works for running games in SLI?


----------



## TimeLord84

BattlePhenom said:


> I can giver 'er a shot. I've got a Tuf oc flashed to gigabyte gaming oc (390w).
> 
> Edit: This is what I got.
> View attachment 2462989


Thanks man, I really appreciate that!!! Take care bro


----------



## TimeLord84

Johneey said:


> View attachment 2462998
> 
> graphic score timespy 22692 port royal 14800 .
> 3090 rtx 390 Watt.
> intel i9 10900k 5.4 GHz


Thank you very much!!!


----------



## long2905

hapklaar said:


> Flashing went well. The power draw did increase to 390-400W according to GPU-Z, after upping the power limit with Afterburner.
> 
> I was only able to get a few extra points in the benchmarks and I didn't like the fan curve (too loud), so I flashed back to stock bios. It's worth a try though. Keep us posted on how you're faring.


I got the card but immediately had to go to the hospital to take care of a family member, so I didn't have much chance to play around with it yet. How are your temps and overall clock speeds, both on the stock and on the Gaming OC vbios? This card runs very hot for me.


----------



## Wrathier

BattlePhenom said:


> I can giver 'er a shot. I've got a Tuf oc flashed to gigabyte gaming oc (390w).


Hi mate

Can you answer a few questions for me please?

How much more do you get out of the card with the 390W bios?

What are your stable overclock core and memory settings?

Thanks for your help.


----------



## YouKnowSedri

Is anyone here with a non-OC Strix? Or is just the OC version on the market?


----------



## LordGurciullo

Folks, it appears that 3-pin vs 2-pin, or Strix vs EVGA, isn't nearly as important as the actual silicon lottery itself.

Get a 3090 however you can.

UNLESS!!! the rumors of the new Big Navi are true and they are beating our 3090s on non-RTX (which is 99 percent of everything)... in which case I might feel quite stupid.

Also - Pumpkin Jack any good?


----------



## Baasha

Glerox said:


> On a scale from 1 to 10, what’s the probability that your trick works for running games in SLI?


Right now it's 5/10 since I don't have the GPUs yet. However, realistically, if I *have* to give one number from 1 to 10, I'd say a solid *7*.

If I do get it working, I'll be sure to ping you.


----------



## Cholerikerklaus

What could I get when my card is on water? You think 15100 is possible?


----------



## Falkentyne

So on a 3090 FE, at 1920x1080 on an old BenQ XL2720Z, I get about 170-240 FPS (some maps went up to 260 FPS or a bit more) on max settings with RT enabled, in COD: Modern Warfare and Warzone, depending on the map, with +95/+600 on core and vram. With the 400W power limit, the card seems to average around 320-340W of power draw and 98% usage. CPU was a 10900K @ 5.4 GHz with hyperthreading disabled.

When I overclocked my BenQ XL2720Z monitor to 2560x1440 with ToastyX CRU, only 100Hz worked, with a pixel clock of 408.55 MHz and the color format changed to YCbCr422 instead of RGB, for passing 360 MHz (the monitor is a DP 1.2 monitor but does NOT support "High Bit Rate 2"). I found the absolute highest I could go was 115Hz (horizontal total: 2720, vertical total: 1502); any higher and the pixels started "sparkling" and artifacting. Unfortunately my custom 100Hz 1920x1080 mode (vertical total 1500 for less crosstalk during blur reduction) got 'replaced' with desktop resolution 1920x1080 at signal resolution 2560x1440, which is a big yikes, since it's downsampled and not RGB.
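
For anyone repeating this in CRU: the pixel clock is just horizontal total times vertical total times refresh rate. A quick check against the numbers above, assuming the 100Hz mode used the same 2720 x 1502 totals as the 115Hz one (an assumption on my part):

```python
def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: float) -> float:
    """Dot clock a mode requires; totals include blanking, not just active pixels."""
    return h_total * v_total * refresh_hz / 1e6

print(pixel_clock_mhz(2720, 1502, 100))  # 408.544 -> matches the ~408.55 MHz above
print(pixel_clock_mhz(2720, 1502, 115))  # ~469.8 MHz, near where the artifacts began
```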

Anyway, at 1440p, FPS was only slightly lower: around 150-190 FPS depending on the map, averaging around 170 FPS. What's interesting is that the power draw was now constantly hitting the 400W power limit, or straddling just below it.

Don't know if anyone cares about such numbers but there you go.


----------



## Malicium

Hello,





I've finally received my EVGA RTX 3090 FTW3 ULTRA!!!

I put it in a test bench (old NZXT case) and started doing some Port Royal benchmarks. I set the PL to max and started increasing the core clock; it remained stable until +200 MHz.

Then I started increasing the memory; performance started dropping off around +1100 MHz.

Results were already very good compared to what I have seen from other users (around 14800 pts in PR); I was so happy to have a good chip.
I continued tweaking: I downloaded the XOC beta bios from EVGA, added +100 mV (pulling around 460W, spikes at 475W; can't wait for the fix to reach 500W) and was able to reach over 15k, all on air! 13th on PR at the moment  https://www.3dmark.com/pr/433261



I'm impatient to put it on water.

I wanted the KP version, but do you guys think I would be able to get a better chip and better scores with the voltage control? Or, with a chip like this, would just water be fine to be on par with the KP version?


----------



## Sky3900

.


----------



## Sheyster

Malicium said:


> I wanted to have the KP version but do you guys think I will be able to have a better chip and better scores with the voltage control. Or with a chip like that just water would be fine to be on par with KP version?


KPE is engineered for max OC'ing (more power phases among other things), plus they're all using binned GPUs. All other EVGA cards are not binned, according to them. Plus the KPE will have a higher-power-limit stock BIOS and probably also an XOC BIOS, which may or may not work on other cards. So no, your FTW3 Ultra will almost certainly not be on par with the KPE cards.


----------



## Malicium

Thank you for your insight; that's what I thought. I will upgrade to the KPE when available


----------



## Raggou

rawsome said:


> I have flashed my MSI Geforce RTX 3090 Ventus OC yesterday with the Gigabyte Gaming OC bios. Everything still works as it should and it is +400 points in port royale with the same oc settings. It looks like it handles OC'ng better now, im currently testing with +150 clock speed while i was at +130 before.
> 
> Temperatures are at 70-72°C now with like 2200 rpm.
> 
> 
> See first post of this Thread, there is a Tutorial. I am also new to this, but it seems like you can flash whatever you want without bricking your card. someone here i.e. flashed a 3-plug bios on a 2-plug card, did not give a higher power limit but no brick.


All of your DisplayPorts still work? I thought there was an issue where all the DP connections wouldn't work, since the Gigabyte Gaming has 2x DP and 2x HDMI vs the Ventus's 3x DP and 1x HDMI. Doesn't this kill one of your ports?


----------



## BattlePhenom

Spiriva said:


> No shunt, on water tho (EK block). I'm not sure about average clock; 3DMark didn't report anything ("systeminfo data is incomplete"), I have no clue why.
> 
> Only things changed are: new Nvidia drivers, new iCUE (which was turned off anyhow) and the new Windows 10 version 20H2.
> 
> UPDATE 1: Apparently it's the Windows 10 version 20H2 that screwed 3DMark over. It's the same on my "tv system" (AMD 1700X & GTX 1070).
> I get no prompt to update 3DMark when starting tho. Bought via Steam.
> 
> UPDATE 2: From the "tv system" and it seems to be the same; this is the free version of 3DMark tho.


Impressive scores for a 2x8 pin card. What kind of temps do you get under load with the water block and what rad are you using? Also, how many power stages does your PNY card have?


----------



## Spiriva

BattlePhenom said:


> Impressive scores for a 2x8 pin card. What kind of temps do you get under load with the water block and what rad are you using? Also, how many power stages does your PNY card have?


When I ran 3DMark (only had time for 3 runs, each time I got a higher score tho) the temp on the card was around 35c under load. It was however with the side panel off the case and the window open, with around -8c outside. And the room was pretty cold, around +8c. I only got a view of the naked PCB while I changed to the waterblock; it was late at night and I forgot to take a picture of it, but as I remember it had 19 power stages.

I'm running dual 360 "Alphacool NexXxoS XT45" radiators with push/pull "Noctua NF-A12x25" fans at around 1000rpm. While gaming it keeps the card at around 40-46c depending on the game, with a room temp of around 21c. If I push the fans a bit faster, to around 1500rpm, the temp drops some, but I kinda like it a bit more quiet while gaming.

I only cool the 3090 with this loop.

A friend cooled his 3090 with an EK block too, but only a single 360 radiator from EK, the "classic 360". His card ran pretty warm (for being water cooled) at around 64c under full load. It became a lot better when he added another 360 radiator.


----------



## wirx

Hi
I have an MSI Gaming X Trio 3090 and, thanks to this forum, found out that I can flash it with the 500W EVGA bios. It was the first time for me, but luckily all went well.
I can confirm that all 3 DisplayPorts are working fine.
How do you guys get such low temps with stock air coolers?
My best is +69c, but I have seen around +50c here? I managed to get +62c with a leafblower, but this is still too high; what am I doing wrong? Room temp is about +24c.
Tried with 0% core voltage, but the temp remains the same.
So far best Port Royal 14493 and Time Spy Extreme graphics 11549
https://www.3dmark.com/compare/pr/434899/pr/434882 


https://www.3dmark.com/spy/14785654


Board power draw remains quite stable at 500W, and I have sometimes seen 516W peaks. PSU is a Corsair RM750x 750W.


----------



## GAN77

Spiriva said:


> A friend cooled his 3090 with an EK block too, but only one single 360 radiator from EK, the "classic 360". His card ran pretty warm (for beeing water cooled) at around 64c at under full load. It became alot better when he added another 360 radiator.
> 
> View attachment 2463210
> View attachment 2463211
> View attachment 2463212


The temperature is high even for one radiator. Maybe he needs to reinstall the water block.


----------



## BattlePhenom

Spiriva said:


> When i ran 3dmark (only had time for 3 runs, each time i got a higher score tho) the temp on the card was around 35c under load. It was however with open side panel on the case and window open, with around -8c outside. And in the room in was pretty cold, around +8c. I only got a view on the naked pcb while i changed to the waterblock, it was late at night and i forgot to take a picture of it, but as i remember it was 19 power stages.
> 
> Im running dual 360 "Alphacool NexXxoS XT45" radiators with push/pull "Noctua NF-A12x25" fans at around 1000rpm. While gaming it keeps the card at around 40-46c depending on what game. With an room temp of around 21c. If i push the fans abit faster to around 1500rpm the temp drops some, but i kinda like it abit more quiet while gaming.
> 
> I only cool the 3090 with this loop.
> 
> A friend cooled his 3090 with an EK block too, but only one single 360 radiator from EK, the "classic 360". His card ran pretty warm (for beeing water cooled) at around 64c at under full load. It became alot better when he added another 360 radiator.
> 
> View attachment 2463210
> View attachment 2463211
> View attachment 2463212


Tyvm for the share. I have to wonder if you stuck with the EK paste or went with Kryonaut or liquid metal? I thought I read someone got better temps with Kryo vs EK. I'm currently planning on getting an Alphacool Eiswolf 3 (or w/e it will be called) on a 360mm rad with Kryonaut. I'm currently capped at 14044 in Port Royal on a TUF OC flashed to Gigabyte OC. I'm usually sitting at 61c while gaming on an aggressive fan curve with a typical ambient of 17-18c, so it's weird your friend is getting those temps on water.


----------



## BattlePhenom

wirx said:


> Hi
> I have MSI Gaming X Trio 3090 and thanx to this forum found out, that I can update MSI BIOS with 500W EVGA. It was first time for me, but luckily all went well.
> I can confirm that all 3 Displayports are working fine.
> How do you guys get so low temps with stock air coolers?
> Mine best is +69c but I have seen here about +50c? I managed to get +62c with leafblower, but this is still too high, what I do wrong? Room temp is about +24c
> Tried with 0% core voltage, but temp remains same.
> So far best Port Royal 14493 and Time Spy Extreme graphics 11549
> https://www.3dmark.com/compare/pr/434899/pr/434882
> 
> 
> https://www.3dmark.com/spy/14785654
> 
> 
> Board Power Draw remains quite stable 500w and I have seen sometimes 516w peaks, PSU is Corsair RM750x 750W
> View attachment 2463218
> 
> View attachment 2463219
> 
> View attachment 2463220


With an ambient of +24c, I'd say your room temp might be what's holding it back.


----------



## Spiriva

BattlePhenom said:


> Tyvm for the share . I have to wonder if you stuck with the EK paste or went with Kryonaut or liquid metal? I thought I read someone got better temps with kryo vs EK. I'm currently planning on getting and Alphacool Eiswolf 3(or w/e it will be called) on a 360mm rad with kryonaut. I'm currently capped at 14044 on Port Royal on Tuf OC flashed to Gigabyte OC. I'm usually sitting at 61c while gaming on an aggressive fan curve with a typical ambient of 17-18c, so it's weird your friend is getting those temps on water.


I used "Noctua NT-H1" paste, mostly cause that was what I had at home  I think it's the same one you get with all Noctua tower coolers; I must have gotten it with the cooler I use for my other system, which is cooled by a "Noctua NH-D15".

I can recommend the Noctua NF-A12x25 fans; they are very quiet and cool well. No RGB tho, if you are into that.



GAN77 said:


> The temperature is high even for one radiator. Maybe he needs to reinstall the water block.


My friend used an "EK-CoolStream Classic SE 360" with 3 fans in push; I don't know the brand of the fans, it was some regular black 120mm ones. Anyhow, the 360 radiator was hot to the touch, the pump was hot to the touch, and his 3090 was hot to the touch; even the fittings on the 3090 water block were uncomfortable to touch. He had his card under water before me tho, so after seeing that I decided to use at least two 360 radiators in my system.

If I touch my radiators, pump or backplate during full load, they are not uncomfortably warm.

I'm thinking that if he had a bad mount on his water block, his 3090 would have been hot, but the rest of the stuff in the loop wouldn't be as warm, and it wouldn't have gotten better with another 360 radiator.


----------



## Stampede

BattlePhenom said:


> Tyvm for the share . I have to wonder if you stuck with the EK paste or went with Kryonaut or liquid metal? I thought I read someone got better temps with kryo vs EK. I'm currently planning on getting and Alphacool Eiswolf 3(or w/e it will be called) on a 360mm rad with kryonaut. I'm currently capped at 14044 on Port Royal on Tuf OC flashed to Gigabyte OC. I'm usually sitting at 61c while gaming on an aggressive fan curve with a typical ambient of 17-18c, so it's weird your friend is getting those temps on water.


Yeah, that was me, but I suspect it had more to do with reseating it properly. 64 deg is way too high on water, even at 30 deg ambient. The water temp/GPU temp delta should be below 15 deg on the EK waterblock, unless the card is pulling 500 watts.


----------



## evensen007

Sync0r said:


> I have my Zotac 3090 watercooled and it does maintain higher clocks as the temps are below 35c. Depends if you want that little bit extra performance. The benefit of watercooling hardware is that you keep it forever, I've had my setup for years, still going strong. You just need to buy and sell waterblocks for each GPU you buy.


Which block are you using? I usually go with EK in the past, but their zotac trinity block still says pre order.


----------



## Thanh Nguyen

Does MSI Afterburner lock the clock and voltage? I clock at 1.1V / 2190MHz, but it only holds in-game. In benchmark apps, it can't hold that clock.


----------



## Jpmboy

Spiriva said:


> When i ran 3dmark (only had time for 3 runs, each time i got a higher score tho) the temp on the card was around 35c under load. It was however with open side panel on the case and window open, with around -8c outside. And in the room in was pretty cold, around +8c. I only got a view on the naked pcb while i changed to the waterblock, it was late at night and i forgot to take a picture of it, but as i remember it was 19 power stages.
> 
> Im running dual 360 "Alphacool NexXxoS XT45" radiators with push/pull "Noctua NF-A12x25" fans at around 1000rpm. While gaming it keeps the card at around 40-46c depending on what game. With an room temp of around 21c. If i push the fans abit faster to around 1500rpm the temp drops some, but i kinda like it abit more quiet while gaming.
> 
> I only cool the 3090 with this loop.
> 
> A friend cooled his 3090 with an EK block too, but only one single 360 radiator from EK, the "classic 360". His card ran pretty warm (for beeing water cooled) at around 64c at under full load. It became alot better when he added another 360 radiator.


I sure hope the new EK 3090 blocks perform better than the 2080 Ti blocks do when the cards are mounted in the vertical position. The last design had a flow path that refused to clear air without you making the card do the macarena. 🙂


----------



## Sheyster

Thanh Nguyen said:


> Does msi afterburn lock the clock and voltage? I clock at 1.1v 2190mhz but its only in game. In bench app, it cant hold that clock.


Your best bet is to lock using the curve. Pick a point and move it up to the frequency you want for that voltage, then press CTRL-L to lock it. A yellow line will appear on the curve. This said, as the card heats up it will drop bins in 15 MHz increments. Keep the card as cool as possible when benching.


----------



## Fire2

Anyone seen this?
Asus new BIOS increases power limit (450W) of Strix / TUF GeForce RTX 3090 / 3080

I couldn't get it working on an MSI Ventus flashed with the TUF bios :/


----------



## LiquidHaus

Anyone here have the EVGA 3090 FTW3 Ultra?

Looking for some OC settings to compare other cards to.


----------



## originxt

LiquidHaus said:


> Anyone here have the EVGA 3090 FTW3 Ultra?
> 
> Looking for some OC settings to compare other cards to.


I don't have the Ultra but I do have the FTW3. +175/+900, the problem being I'm hitting 70C-ish. Once I get a block it should be better, since my setup isn't really good for air-cooled cards. Maybe I can push more once it's cooler?


----------



## Thanh Nguyen

Sheyster said:


> Your best bet is to lock using the curve. Pick a point and move it up to the frequency you want for that voltage, then press CTRL-L to lock it. A yellow line will appear on the curve. This said, as the card heats up it will drop bins in 15 MHz increments. Keep the card as cool as possible when benching.


I drag the point at 1.1v up to 2190MHz, press L, then hit apply. I do see the yellow line. It can hold that clock in game but will throttle down a lot in Port Royal.


----------



## Sync0r

evensen007 said:


> Which block are you using? I usually go with EK in the past, but their zotac trinity block still says pre order.


Alphacool, working well.


----------



## Sheyster

Thanh Nguyen said:


> I drag the point at 1.1v up to 2190MHz, press L, then hit apply. I do see the yellow line. It can hold that clock in game but will throttle down a lot in Port Royal.


What temps? It's probably just dropping bins as it warms up. If you're not water-cooling that will always be a problem. Only thing you can do is cool down the room and run fans at 100% while benching.


----------



## NYCesquire

What are folks seeing as the optimum radiator config for watercooling the 3090?


----------



## Wihglah

NYCesquire said:


> What are folks seeing as the optimum radiator config for watercooling the 3090?







That said, more is better. My new loop plan has a 480 in push/pull for the 3090 alone.


----------



## NYCesquire

Wihglah said:


> That said, more is better. My new loop plan has a 480 in push/pull for the 3090 alone.


Yeah, but... he's not even OCing, let alone raising the power limit, is he? What are folks who are actually pushing their cards seeing?


----------



## LordGurciullo

Also, Pumpkin Jack any good?


Falkentyne said:


> So on a 3090 FE, at 1920x1080 on an old BenQ XL2720Z, I get about 170-240 FPS (some maps went up to 260 FPS or a bit more) on max settings with RT enabled in COD: Modern Warfare and Warzone, depending on the map, with +95/+600 on core and VRAM. With a 400W power limit, the card seems to average around 320-340W of power draw and 98% usage. CPU was a 10900K @ 5.4 GHz with hyperthreading disabled.
> 
> When I overclocked my BenQ XL2720Z monitor to 2560x1440 with ToastyX CRU, only 100Hz worked, with a pixel clock of 408.55 MHz and the color format changed to YCbCr422 instead of RGB for passing 360 MHz (the monitor is DP 1.2 but does NOT support "High Bit Rate 2"). I found the absolute highest I could go was 115Hz (horizontal total: 2720, vertical total: 1502); any higher and the pixels started "sparkling" and artifacting. Unfortunately my custom 100Hz 1920x1080 (vertical total 1500, for less crosstalk during blur reduction) got 'replaced' with a desktop resolution of 1920x1080 at a signal resolution of 2560x1440 (which is a big yikes, since it's downsampled and not RGB).
> 
> Anyway, at 1440p, FPS was only slightly lower. It was around 150-190 FPS depending on map, averaging around 170 FPS. What's interesting is that the power draw was now constantly hitting the power limit of 400W, or straddling just below it.
> 
> Don't know if anyone cares about such numbers but there you go.


I do! Thanks. It looks like with these cards the FPS hit is about 10-20 percent at 1440p, which means... it's time to get a 1440p monitor. I love BenQ Zowie and will probably be getting a 1440p 240Hz DyAc the day they release. Also - what is your custom 100Hz resolution, and do you have 120Hz and 144Hz resolutions too?


----------



## LordGurciullo

Malicium said:


> Hello,
> 
> 
> 
> 
> 
> I've finally received my EVGA RTX 3090 FTW3 ULTRA!!!
> 
> I put it in a test bench (old NZXT case) and started doing some Port Royal benchmarks. I put the PL to max and started increasing the core clock; it remained stable until +200 MHz.
> 
> Then I started increasing the memory; loss of performance occurred around +1100 MHz.
> 
> Results were already very good from what I have seen from other users (around 14800 pts in PR); I was so happy to have a good chip.
> I continued tweaking: I downloaded the XOC beta BIOS from EVGA and added +100 mV (pulling around 460 W, spikes at 475; can't wait for the fix to reach 500) and was able to reach over 15k, all on air! 13th on PR at the moment  https://www.3dmark.com/pr/433261
> 
> 
> 
> I'm impatient to put it on water.
> 
> I wanted the KP version, but do you guys think I will be able to get a better chip and better scores with the voltage control? Or with a chip like this, would just water be enough to be on par with the KP version?


Can you please tell me which BIOS you had, if you wrote it down or remember? We have 20 people on the EVGA forums who can't ever get over 440 watts (I'm one of them). My hunch is that the BIOS yours shipped with, updated to the new one, actually works; they shipped with two BIOSes. That's better than my score: I lose stability after +180, and with the new 500W BIOS I can't get over 14700 no matter what, it seems, and never go above 440.


----------



## Bilco

So what do you guys think, 3090 Strix or 3090 ftw3 ultra for watercooling and OCing?


----------



## mirkendargen

Bilco said:


> So what do you guys think, 3090 Strix or 3090 ftw3 ultra for watercooling and OCing?


Easier to find blocks for Strix (way easier than finding a Strix itself........), but I recall seeing somewhere a block on the way for FTW3.


----------



## olrdtg

I did some surface temp testing on my card during a few stress tests. During the tests the card peaked at 605 W power draw and 84 C core temp on both tests. The GPU is shunt modded (original shunt resistors replaced with 3 mOhm parts to give a limit of 641 W, give or take).

For my first test I ran an OC of +120MHz core and +350MHz VRAM.
For the second test I used an OC of +125MHz core and +1000MHz VRAM.

The sides of the GPU remain far cooler than the back plate does. The sides, for instance, got up to around 49 C before leveling off, settling in bouncing back and forth between 49 and 50 C on test 1 (a few degrees warmer on test 2).
The rear of the card sat near 38-40 C during both tests, while the back plate was a toasty 65.7 C during test 1, with a max recorded temp of 71.5 C during test 2.
I ran another test, but it revealed much of the same using the surface temp reader on the sides; the back plate was only around 6 C hotter (71.5 C).
A thermocouple on one of the modules revealed the RAM hitting a nice toasty 86 C on test 1 and 91 C at its hottest during test number 2.

Those RAM modules get freakin' hot!

It would seem the back plate on the Founders Edition does a pretty decent job of spreading the heat out, given the delta between the chip temps and the back plate surface temps. Also, I'm using a pretty cheap surface temperature reader (rebranded as a forehead thermometer for COVID times) -- however, testing against a thermocouple shows it to be pretty accurate at higher temps.

I think it's EXTREMELY safe to say -- anyone wondering about trying to run a 3080 or 3090 using a Kraken G12 or other AIO-to-GPU adapter -- *DON'T DO IT.*
These memory modules get suuuuuuuper duper hot, hot enough to instantly overwhelm one of those tiny heatsinks you'd use with an AIO on a GPU. I'm sorta concerned about the backplate that's coming with my Bitspower block now; I hope it can handle the heat dissipation. Some sort of active water cooling on the back of the card may be ideal.
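The ~641 W figure mentioned earlier can be sanity-checked with the standard shunt-mod arithmetic: the card computes current from the voltage drop across what it believes is the stock shunt, so a smaller replacement makes it under-report power by the resistance ratio. A quick sketch, where the 5 mOhm stock shunt value and 385 W reported limit are assumptions for illustration, not figures from the post:

```python
# True power draw after a shunt swap. The card computes current as
# I = V_shunt / R_stock, so with a smaller shunt installed the real
# current (and power) exceeds the reported value by R_stock / R_new.

def true_power_limit(reported_limit_w: float,
                     r_stock_mohm: float,
                     r_new_mohm: float) -> float:
    """Actual power draw when the card reports `reported_limit_w`."""
    return reported_limit_w * r_stock_mohm / r_new_mohm

# Assumed stock 5 mOhm shunts swapped for 3 mOhm: a reported 385 W
# limit corresponds to roughly 642 W of real draw.
print(round(true_power_limit(385, 5.0, 3.0)))  # -> 642
```

The same ratio works in reverse: that's why setting the power target to 60% after such a mod lands you back near the stock 350 W of real draw.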

RTX 3090 FE back plate surface temperature during first stress test (under high air flow):


----------



## Thanh Nguyen

With 600w+ power limit, do u actually gain some points or fps?


----------



## Falkentyne

LordGurciullo said:


> Also, Pumpkin Jack any good?
> 
> I do! Thanks. It looks like with these cards the FPS hit is about 10-20 percent at 1440p, which means... it's time to get a 1440p monitor. I love BenQ Zowie and will probably be getting a 1440p 240Hz DyAc the day they release. Also - what is your custom 100Hz resolution, and do you have 120Hz and 144Hz resolutions too?


I used 2560x1440, porch 48/3, sync 32/5, HT: 2720, VT: 1502, refresh 100hz, pixel clock 408.55 mhz.

120Hz has flickering and shimmering pixels at a 475 MHz pixel clock, so it's unusable. The highest I can go is 115Hz, and that requires lowering the horizontal total as far down as possible; any lower and you get an out-of-range error. I suspect some newer XL2720 panels (maybe the Zowie rebrand) may be capable of 120Hz if you stay below 480 MHz. I haven't seen a single 1080p monitor that can overclock to 1440p @ 144Hz. I only used 100Hz for testing purposes. It messes up some of the scaling and makes things blurrier (since it's not a native 1440p panel), but it functioned more like an upsampled screen with a small amount of supersampling (like if you created 2560x1440 via DSR).

Normally I use custom vertical totals for 1080p 100hz and 120hz to reduce blur reduction crosstalk. For some reason however, using 2560x1440 @ 100hz prevents 1920x1080 100hz from working right. Selecting it causes a signal resolution of 2560x1440 @ 100hz and a display resolution of 1920x1080 which looks blurry. I didn't have this issue with my r9 290X however. But I don't remember if I was on windows 10 or 7 back then. This also causes an issue in Fortnite where Fortnite tries to automatically use the highest resolution instead of what you selected on the desktop, so I just went back to my regular profile.

Usually I use an overclocked 165hz (1920x1080, porch 24/3, sync 32/5, HT: 1984, VT: 1094), which goes out of range, but if you press a button on your S-switch, the screen suddenly appears and works perfectly. Opening the service menu (e.g. to change strobe settings) or the OSD causes the out of range to appear again after you close the menu, although the screen doesn't vanish. Pressing a profile button on S-switch again and it goes away. There's some executable on blurbusters forums that automatically makes the OOR go away (for those who don't have an S-switch).

Those reduced timings are necessary to keep RGB mode and prevent it from switching to YCbCR422. 1 mhz more on the pixel clock and it switches (RGB mode would cause 6 bit color depth at 360 mhz+). Not sure why it switches at 359 mhz instead of 360 mhz.
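For anyone wanting to reproduce the numbers in this post: the pixel clock is just horizontal total × vertical total × refresh rate. A quick check of the timings above (pure arithmetic, nothing monitor-specific):

```python
# Pixel clock from CRU-style timing totals:
#   pclk = HTotal * VTotal * refresh_hz

def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: float) -> float:
    return h_total * v_total * refresh_hz / 1e6

# 2560x1440 @ 100 Hz with HT 2720 / VT 1502, as in the post
# (matches the ~408.55 MHz quoted, to within rounding):
print(round(pixel_clock_mhz(2720, 1502, 100), 2))   # -> 408.54

# The 165 Hz 1080p profile (HT 1984 / VT 1094) stays just under the
# ~360 MHz threshold where the link drops from RGB to YCbCr422:
print(round(pixel_clock_mhz(1984, 1094, 165), 2))   # -> 358.13
```

This is why shaving the horizontal and vertical totals matters: smaller totals mean a lower pixel clock at the same refresh rate, which is what keeps the 165 Hz profile in RGB mode.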


----------



## kx11

Switching from Intel to a Ryzen 3900XT. Do you guys think for 4K gaming it might bottleneck an OC'd 3090? I've seen almost no performance gains @ 4K between the i9 and the 3900XT.

Should I wait for the 5900X(T)?!


----------



## long2905

kx11 said:


> Switching from Intel to a Ryzen 3900XT. Do you guys think for 4K gaming it might bottleneck an OC'd 3090? I've seen almost no performance gains @ 4K between the i9 and the 3900XT.
> 
> Should I wait for the 5900X(T)?!


I wasn't aware you still get a CPU bottleneck at 4K? I thought it only happens at 1080p and 1440p in some specific titles?


----------



## kx11

long2905 said:


> I wasn't aware you still get a CPU bottleneck at 4K? I thought it only happens at 1080p and 1440p in some specific titles?


Well, I meant missing FPS compared to Intel.


----------



## iPurple

I think I am getting pretty low scores in Port Royal. I have a Galax 3090 SG using the 390W BIOS from Gigabyte. Is this normal at all? The card was really underperforming before the BIOS flash as well. Any insight appreciated. I was planning on watercooling it and doing the shunt mod, but I still have the option of returning it, so idk if it's worth it.
It's running at +155MHz on the core and 670 on the memory.


----------



## Johneey

iPurple said:


> I think I am getting pretty low scores in Port Royal. I have a Galax 3090 SG using the 390W BIOS from Gigabyte. Is this normal at all? The card was really underperforming before the BIOS flash as well. Any insight appreciated. I was planning on watercooling it and doing the shunt mod, but I still have the option of returning it, so idk if it's worth it.


Is it stock? And check your RAM; it's only 3000MHz.


----------



## olrdtg

Thanh Nguyen said:


> With 600w+ power limit, do u actually gain some points or fps?


It has improved rendering times a little bit. And I have noticed some gains while working in Unreal Engine. For straight gaming, there is a bit of a boost in frame rates, but not worth the extra power usage IMO. From 57 FPS avg to around 62 FPS avg in Red Dead Redemption 2 at 4k with all settings maxed out to ultra for instance. +5 fps at the cost of 250W. So not really worth it  That's pretty much the only game I've tested w/ the mod though, so maybe some other games have better benefits. I benched all my favorites before I modded, I need to re-bench them and compare. I am curious to see how Metro Exodus plays at 4K w/ RTX and DLSS off at 2180mhz -- but that's gonna have to wait until I get my water block 😅

When I game I usually set the power target down to 60% (which would be stock 350W power with my shunt mod)
The reduction in rendering time is helpful though. Time is money after all.


----------



## Malicium

LordGurciullo said:


> Can you please tell me which BIOS you had, if you wrote it down or remember? We have 20 people on the EVGA forums who can't ever get over 440 watts (I'm one of them). My hunch is that the BIOS yours shipped with, updated to the new one, actually works; they shipped with two BIOSes. That's better than my score: I lose stability after +180, and with the new 500W BIOS I can't get over 14700 no matter what, it seems, and never go above 440.


Hello, I didn't check my shipping BIOS, but I flashed the original OC and normal BIOSes that EVGA provided on the XOC post several times, then went back to the 500W BIOS, and I was still able to reach around 470 watts.


----------



## iPurple

Johneey said:


> Is it stock? And check your RAM; it's only 3000MHz.


It's running at +155MHz on the core and 670 on the memory. I recently purchased 4400MHz RAM but it's not arriving till next week.


----------



## Cavokk

iPurple said:


> I think I am getting pretty low scores in Port Royal. I have a Galax 3090 SG using the 390W BIOS from Gigabyte. Is this normal at all? The card was really underperforming before the BIOS flash as well. Any insight appreciated. I was planning on watercooling it and doing the shunt mod, but I still have the option of returning it, so idk if it's worth it.
> It's running at +155MHz on the core and 670 on the memory.


My best advice is to undervolt it so it doesn't hit the power limit constantly - I get better performance doing that compared to using the core +xxx MHz slider. I'm currently running 2050MHz constantly at a voltage cap of 950mV, consuming approx 420W in PR, for a nice "cool" overclock ;D

C
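The power savings Cavokk describes line up with the usual first-order dynamic-power model, P ∝ f × V², which ignores leakage and memory power but is good enough for ballparking a curve undervolt. A rough sketch using his two operating points from this thread (treat the model as an estimate, not a measurement):

```python
# First-order CMOS dynamic power model: P scales with f * V^2.
# Handy for estimating what a voltage/frequency curve cap will save.

def scaled_power(p_ref_w: float, f_ref_mhz: float, v_ref: float,
                 f_new_mhz: float, v_new: float) -> float:
    """Estimate power at a new (f, V) point from a measured reference point."""
    return p_ref_w * (f_new_mhz / f_ref_mhz) * (v_new / v_ref) ** 2

# Reference: 2100 MHz @ 1000 mV drew ~470 W in Port Royal.
# Predicting the 2050 MHz @ 950 mV point from it:
est = scaled_power(470, 2100, 1.000, 2050, 0.950)
print(round(est))  # -> 414, in the ballpark of the ~420 W reported
```

The point of the sketch: dropping 50 mV cuts power roughly twice as fast as the matching ~2% clock loss, which is why a capped curve dodges the power limit where the plain +MHz slider does not.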


----------



## Thanh Nguyen

olrdtg said:


> It has improved rendering times a little bit. And I have noticed some gains while working in Unreal Engine. For straight gaming, there is a bit of a boost in frame rates, but not worth the extra power usage IMO. From 57 FPS avg to around 62 FPS avg in Red Dead Redemption 2 at 4k with all settings maxed out to ultra for instance. +5 fps at the cost of 250W. So not really worth it  That's pretty much the only game I've tested w/ the mod though, so maybe some other games have better benefits. I benched all my favorites before I modded, I need to re-bench them and compare. I am curious to see how Metro Exodus plays at 4K w/ RTX and DLSS off at 2180mhz -- but that's gonna have to wait until I get my water block 😅
> 
> When I game I usually set the power target down to 60% (which would be stock 350W power with my shunt mod)
> The reduction in rendering time is helpful though. Time is money after all.


U use watt meter or how u know it pulls 600w?


----------



## iPurple

Cavokk said:


> My best advice is to undervolt it so it doesn't hit the power limit constantly - I get better performance doing that compared to using the core +xxx MHz slider. I'm currently running 2050MHz constantly at a voltage cap of 950mV, consuming approx 420W in PR, for a nice "cool" overclock ;D
> 
> C


Thanks! I haven't tried this yet, will check if it provides better performance. Appreciate the help


----------



## Avacado

Thanh Nguyen said:


> U use watt meter or how u know it pulls 600w?


U tink he know what u sayin?


----------



## dante`afk

kx11 said:


> Switching from Intel to a Ryzen 3900XT. Do you guys think for 4K gaming it might bottleneck an OC'd 3090? I've seen almost no performance gains @ 4K between the i9 and the 3900XT.
> 
> Should I wait for the 5900X(T)?!


lmao, you realize what bottlenecks at those resolutions, right? gpu only.


----------



## NCC-1701-A

Is it possible to flash my 3090 Trio with the 3090 Strix BIOS? Does it work?


----------



## Sheyster

kx11 said:


> Should I wait for the 5900X(T)?!


Yes. I may opt for the 5950X, not sure yet though. It's time to go AMD finally.


----------



## Sheyster

duplicate


----------



## Sheyster

NCC-1701-A said:


> Is it possible to flash my 3090 Trio with the 3090 Strix BIOS? Does it work?


Yes you can, but Trio users are reporting great results with the 500W EVGA OC BIOS. I would try that one first.


----------



## rawsome

iPurple said:


> I think I am getting pretty low scores in Port Royal. I have a Galax 3090 SG using the 390W BIOS from Gigabyte. Is this normal at all? The card was really underperforming before the BIOS flash as well. Any insight appreciated. I was planning on watercooling it and doing the shunt mod, but I still have the option of returning it, so idk if it's worth it.
> It's running at +155MHz on the core and 670 on the memory.


I'm in the same range basically with the MSI Ventus OC with the Gigabyte OC BIOS.

I was able to get 14000 points by disabling my FPS overlay (FPS Monitor), setting clocks to +170/+800, pushing fans up to 100%, and an ambient temperature of around 20°C.
Without fans at 100% I was not able to get more than 13400.


AFAIK, if you flash a BIOS from a card that has only 2000 RPM fans onto a card that has 3000 RPM fans, you can only reach 2000 RPM at 100%. So it is possible that even when running at 100% with the Gigabyte BIOS, your card's fans are not running at full speed. I have observed that with the Aorus Master BIOS, which has one fan spinning at only 2000 RPM max while the others can go higher.


----------



## Sync0r

Shunted my Zotac Trinity, just the two 8-pins so far. I'm still power limited; I think it's the PCIe slot hitting its limit.



https://www.3dmark.com/pr/439481


----------



## rawsome

How much performance increase can I expect from air vs. water at an all-day-stable OC and noise levels?
Non-shunted card at 390W, 2x8-pin, running at 71-74°C.
I'm also interested in results with shunting, but also stable OC ones.
Thank you.


----------



## NCC-1701-A

Sheyster said:


> Yes you can, but Trio users are reporting great results with the 500W EVGA OC BIOS. I would try that one first.


Which 500w bios from evga does work? Where did you read it?


----------



## Thanh Nguyen

Hey guys, I just received a watt meter and see that at idle my PC is at 325W. In Port Royal, its max peak is 750W. Does my shunt mod work correctly?


----------



## Sheyster

NCC-1701-A said:


> Which 500w bios from evga does work? Where did you read it?











EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com





Look back in this thread for posts from MSI Trio users.


----------



## Jpmboy

Anybody have any issues with the new GPUZ and their 3090?


----------



## ThrashZone

Hi,
Kill A Watt EZ meters are popular:


https://www.amazon.com/P3-International-P4460-Electricity-Monitor/dp/B000RGF29Q


----------



## Sheyster

Jpmboy said:


> Anybody have any issues with the new GPUZ and their 3090?


Not with the latest version. The one right before it seemed to cause a weird keyboard issue for me while gaming (keys not registering in game). Seems to be resolved now.


----------



## Wrathier

Hey

I got a question I'm sure someone can answer:

I keep reading things like this: "I'm currently running 2050MHz constantly at a voltage cap of 950mV, consuming approx 420W in PR."

If a Vcap of 950mV uses approx 420W, would 900mV be the correct value to target with a 390W BIOS?


----------



## Cavokk

Wrathier said:


> Hey
> 
> I got a question I'm sure someone can answer:
> 
> I keep reading things like this: "I'm currently running 2050MHz constantly at a voltage cap of 950mV, consuming approx 420W in PR."
> 
> If a Vcap of 950mV uses approx 420W, would 900mV be the correct value to target with a 390W BIOS?


Hey Wrathier - it depends on the specific bin of the 3090, but yes, [email protected] is a good - but optimistic - place to start, and if it's stable you definitely have a good bin. At 900mV I can only reach about 1950MHz constant stable in PR. At 1000mV I am rock stable at 2100MHz though, but with a power usage around 470W max in PR (muuuuuch less in actual games though). Success mostly depends on bin and temp

Should receive my waterblock this week for my Trio X

C


----------



## Wrathier

Cavokk said:


> Hey Wrathier - it depends on the specific bin of the 3090, but yes, [email protected] is a good - but optimistic - place to start, and if it's stable you definitely have a good bin. At 900mV I can only reach about 1950MHz constant stable in PR. At 1000mV I am rock stable at 2100MHz though, but with a power usage around 470W max in PR (muuuuuch less in actual games though). Success mostly depends on bin and temp
> 
> Should receive my waterblock this week for my Trio X
> 
> C


But my card will be a TUF with a maximum PL of 390W with the Gigabyte BIOS, so that is why I'm asking if 900mV is the right place to start instead of 950mV. I assume 1000mV isn't even relevant for a 2x8-pin card, or do I misunderstand it totally?

Thanks


----------



## MoltenMoose

Cavokk said:


> Hey Wrathier - it depends on the specific bin of the 3090, but yes, [email protected] is a good - but optimistic - place to start, and if it's stable you definitely have a good bin. At 900mV I can only reach about 1950MHz constant stable in PR. At 1000mV I am rock stable at 2100MHz though, but with a power usage around 470W max in PR (muuuuuch less in actual games though). Success mostly depends on bin and temp
> 
> Should receive my waterblock this week for my Trio X
> 
> C


Which waterblock you going with out of interest? I've had a look around and can't actually find any that are available at the moment.


----------



## Sync0r

Wrathier said:


> But my card will be a TUF with a maximum PL of 390W with the Gigabyte BIOS, so that is why I'm asking if 900mV is the right place to start instead of 950mV. I assume 1000mV isn't even relevant for a 2x8-pin card, or do I misunderstand it totally?
> 
> Thanks


It all depends on the game; they each load the GPU differently. Control uses RT so it's more power hungry, but in Horizon Zero Dawn I can run 2145MHz at 1.1v and not hit the power limit, with my shunted Zotac.

Just play around with the curve, each chip is different.


----------



## Jpmboy

quick testing with this FTW3 Apex X/[email protected] Stock air cooler


----------



## Pepillo

Sheyster said:


> Yes you can, but Trio users are reporting great results with the 500W EVGA OC BIOS. I would try that one first.


In addition to working better and having a little more power limit, all the video outputs work; with the Strix BIOS you lose a DP.


----------



## MrTOOSHORT

Jpmboy said:


> quick testing with this FTW3 Apex X/[email protected] Stock air cooler
> View attachment 2463525


Nice!


----------



## Jpmboy

MrTOOSHORT said:


> Nice!


Needs a waterblock for sure.


----------



## Sheyster

Jpmboy said:


> quick testing with this FTW3 Apex X/[email protected] Stock air cooler
> View attachment 2463525


Looks like you installed the 500W BIOS. How much power did you actually pull on that bench run?


----------



## Jpmboy

Yeah, I put the new OC BIOS into the OC BIOS slot. Thermal clock bins drop like crazy... the max I saw from the card was 420W or so. Unigine Superposition seemed to pull more, but with the clocks fluctuating (even with them locked) there was a lot of float. It really needs a block (and then the chiller hooked up)


----------



## Cavokk

MoltenMoose said:


> Which waterblock you going with out of interest? I've had a look around and can't actually find any that are available at the moment.


Hey.

Its the alphacool version - https://www.alphacool.com/shop/gpu-...-n-rtx-3090/3080-gaming-x-trio-with-backplate 

C


----------



## vagos1103gr

Hi guys, I have the 3090 Trinity with the Gigabyte BIOS. I have it at +100 core and +450 memory in Afterburner, and it runs well, max 75 on auto fan. The problem is that when I leave my PC idle for a few hours, the resolution gets messed up, the NVIDIA control panel disappears, and I have to reboot to get the correct resolution back. In Afterburner I then see the card reset: power limit 0, temp 0, core and memory 0, so I have to reload the correct profile. I don't know if this is happening because of the BIOS; my PC is connected through HDMI 2.1 to a 48" LG CX OLED TV.


----------



## Sync0r

vagos1103gr said:


> Hi guys, I have the 3090 Trinity with the Gigabyte BIOS. I have it at +100 core and +450 memory in Afterburner, and it runs well, max 75 on auto fan. The problem is that when I leave my PC idle for a few hours, the resolution gets messed up, the NVIDIA control panel disappears, and I have to reboot to get the correct resolution back. In Afterburner I then see the card reset: power limit 0, temp 0, core and memory 0, so I have to reload the correct profile. I don't know if this is happening because of the BIOS; my PC is connected through HDMI 2.1 to a 48" LG CX OLED TV.


I have the same card and bios, never had this issue. I'm using Display Port.


----------



## DooRules

Jpmboy said:


> quick testing with this FTW3 Apex X/[email protected] Stock air cooler
> View attachment 2463525





Jpmboy said:


> Needs a waterblock for sure.


Nice to see you got one. I hope to have my email today or tomorrow for the same GPU. What waterblock do you plan on using?


----------



## ScottRoberts91

Who's going to be the first to throw the new 450W TUF BIOS on a Zotac 3090? I've had no issues with the 390W Gigabyte BIOS, but I think that extra 60 watts might break the camel's back, so to speak.


----------



## Fire2

I'm finding my MSI Ventus 3090 doesn't like the Gigabyte BIOS so much; it keeps crashing out in AC Odyssey, even with stock settings.
Fan speed at 100% is also slower, so the Asus TUF seems to be the best for me as a daily.

The Gigabyte BIOS does give the best Time Spy score though, a good 500-600 points better.

21k club - https://www.3dmark.com/spy/14737223


----------



## Spiriva

ScottRoberts91 said:


> Who's going to be the first to throw the new 450W TUF BIOS on a Zotac 3090? I've had no issues with the 390W Gigabyte BIOS, but I think that extra 60 watts might break the camel's back, so to speak.


There isn't a 450W BIOS for the TUF, is there?
As far as I know the 3090 TUF BIOS is 375W.


----------



## ScottRoberts91

Spiriva said:


> There isn't a 450W BIOS for the TUF, is there?
> As far as I know the 3090 TUF BIOS is 375W.


Released in Beta I believe


----------



## Fire2

Any links for this beta TUF BIOS that should work with any 2x8-pin 3090 cards?


----------



## ScottRoberts91

Fire2 said:


> Any links for this beta TUF BIOS that should work with any 2x8-pin 3090 cards?


Here you go. I would try it myself, but I'm currently sitting on a labour ward with my wife, pending a Cesarean.



https://t.co/rL8kXu8c3e?amp=1


----------



## Avacado

ScottRoberts91 said:


> Here you go. I would try it myself, but I'm currently sitting on a labour ward with my wife, pending a Cesarean.
> 
> 
> 
> https://t.co/rL8kXu8c3e?amp=1


And you are on the site posting BIOS URLs? I bet your wife is gonna slap you lol.


----------



## ScottRoberts91

Avacado said:


> And you are on the site posting BIOS URLs? I bet your wife is gonna slap you lol.


Ha, can't do much at the moment other than busy ourselves, 4 days in and out of a hospital starts to wear thin.


----------



## ScottRoberts91

Avacado said:


> And you are on the site posting BIOS URLs? I bet your wife is gonna slap you lol.


It's been 4 days lol, so we are both just trying to make the time pass. She's a bit of a gamer herself, so I'm toying with the idea of building her a new rig as a birth present.


----------



## Fire2

I couldn't get that to work at all; I even tried starting with the TUF 94.02.26.48.5D BIOS, but it kept coming up with some odd error:

"your graphic card no need to update VBIOS!"


----------



## ScottRoberts91

Fire2 said:


> I couldn't get that to work at all; I even tried starting with the TUF 94.02.26.48.5D BIOS, but it kept coming up with some odd error:
> 
> "your graphic card no need to update VBIOS!"


Did you turn protect off?


----------



## Tias

Fire2 said:


> I couldn't get that to work at all; I even tried starting with the TUF 94.02.26.48.5D BIOS, but it kept coming up with some odd error:
> 
> "your graphic card no need to update VBIOS!"


Use 7-Zip/WinRAR to unpack the file (3090biosupdate.exe), then back up your current BIOS:

nvflash --protectoff
nvflash --save yourbios.rom (name it whatever you like)

then to flash:

nvflash -6 biosname.rom (the name of the file you want to flash)


----------



## Fire2

Thank you Tias!

which file did you use?
94.02.26.48.AS05
-
-
-
-
94.02.26.48.AS12

Still can't see how 2x8-pin + the PCIe slot can pull 450 watts though?!


----------



## Jpmboy

DooRules said:


> Nice to see you got one. I hope to have my email today or tom for the same gpu. What waterblock you plan on using?


right now I'm only in for the OPtimus block. What others are available for the FTW3?


----------



## Tias

Fire2 said:


> Thank you Tias!
> 
> which file did you use?
> 94.02.26.48.AS05
> -
> -
> -
> -
> 94.02.26.48.AS12
> 
> Still can't see how 2x8-pin + the PCIe slot can pull 450 watts though?!


I used to run the Kingpin XOC BIOS on my 2080 Ti FE; it also only had 2x8-pin. It pulled around 500-550W no problem.

I'm at work so I haven't tested either of them yet. What you can do is download the normal TUF 3090 BIOS from: Asus RTX 3090 VBIOS,
flash that one first, and then it should work to just run the 3090biosupdate.exe file. It will then think you have a TUF 3090 installed and should pick the right .rom file for you.


----------



## Sync0r

Tias said:


> I used to run the Kingpin XOC BIOS on my 2080 Ti FE; it also only had 2x8-pin. It pulled around 500-550W no problem.
> 
> I'm at work so I haven't tested either of them yet. What you can do is download the normal TUF 3090 BIOS from: Asus RTX 3090 VBIOS,
> flash that one first, and then it should work to just run the 3090biosupdate.exe file. It will then think you have a TUF 3090 installed and should pick the right .rom file for you.


I just tried the update exe and it said it flashed, but the limit still says 375W in GPU-Z.


----------



## Alelau18

Every BIOS released for the TUF is limited to 375W; the best BIOSes for 2x8-pin cards are currently the Gaming OC BIOS and the Aorus Master BIOS (390W PL).

The updated BIOS for the TUF that Asus posted just corrects the behavior of the fan-stop feature; it does not increase power limits.


----------



## Fire2

I just loaded your linked older TUF BIOS, ran DDU, then installed drivers again.
Then I used the updater with no issues, but it's showing 375W max in GPU-Z.


----------



## spikeot

ScottRoberts91 said:


> It's been 4 days lol, so we are both just trying to make the time pass. She's a bit of a gamer herself, so I'm toying with the idea of building her a new rig as a birth present.


Good luck with the birth, and good luck finding time to use the 3090! As someone that went through the same last year, I think you'll still appreciate having a good PC to sit down at when you get the chance.


----------



## Foxrun

I contacted Nvidia about the vertical white line artifacts that occur during gaming on my 3090. After multiple tests they are leaning towards it being a hardware issue. If anyone else is still experiencing this then you should contact Nvidia. It’s looking like it might be an RMA.


----------



## anethema

Never seen it. Hope i don't since I shunt modded my card haha.


----------



## rawsome

Foxrun said:


> I contacted Nvidia about the vertical white line artifacts that occur during gaming on my 3090. After multiple tests they are leaning towards it being a hardware issue. If anyone else is still experiencing this then you should contact Nvidia. It’s looking like it might be an RMA.


Do you have an LG OLED like the CX? See

https://www.reddit.com/r/OLED_Gaming/comments/jhuq7h
There is also a suggested workaround (setting the input to PC mode).
I currently think it is screen related, not GPU related.


----------



## warbucks

For anyone with a MSI 3090 Ventus 3x OC, I just ordered a Barrow waterblock for my card from Formulamod. Looks like they have lots of stock now. I'll let you all know how it works out. I've got my resistors for my shunt mod and will be installing those along with the block when it arrives.


----------



## Thanh Nguyen

Does anyone here have Shadow of the Tomb Raider? If not, it has a demo on Steam. I don't know why I hit only 170 avg FPS at 1440p, TAA, highest graphics settings, but Hardware Unboxed got 184 with their Strix. That difference is unacceptable to me because I have a higher clock.


----------



## Esenel

Jpmboy said:


> right now I'm only in for the OPtimus block. What others are available for the FTW3?


Watercool's vote selected FTW3.


----------



## Foxrun

rawsome said:


> Do you have an LG OLED like the CX? See
> 
> https://www.reddit.com/r/OLED_Gaming/comments/jhuq7h
> There is also a suggested workaround (setting the input to PC mode).
> I currently think it is screen related, not GPU related.


That is exactly it. I’ll forward this over to Nvidia. I’ll keep you guys updated as well.


----------



## ExDarkxH

I'm thinking that the EVGA FTW3 card is locked down and that 450W is its limit.
I did some testing on air: with a +210 offset my card peaks at 2,235 MHz and can complete Port Royal. I pushed it a little further to 2,250, but ended up crashing at 1:07.
The reason I'm focusing on max clocks is that if I can control temps, I may be able to see very high sustained clocks.

So I'm now thinking of shunting the whole board, but I want to ask: is it worth it with my card? I have an Optimus water block coming next month, going into a 3x 420mm radiator setup, so I would be able to control temps very well. Would you just take it and be satisfied?
If you owned my card, would you shunt it?


----------



## SoldierRBT

ExDarkxH said:


> I'm thinking that the EVGA FTW3 card is locked down and that 450W is its limit.
> I did some testing on air: with a +210 offset my card peaks at 2,235 MHz and can complete Port Royal. I pushed it a little further to 2,250, but ended up crashing at 1:07.
> The reason I'm focusing on max clocks is that if I can control temps, I may be able to see very high sustained clocks.
> 
> So I'm now thinking of shunting the whole board, but I want to ask: is it worth it with my card? I have an Optimus water block coming next month, going into a 3x 420mm radiator setup, so I would be able to control temps very well. Would you just take it and be satisfied?
> If you owned my card, would you shunt it?


That seems like a very good chip. I'd say if you know how to solder, a shunt mod would let you push it further, and the cooling you mentioned would keep it nice and cool. I've been keeping an eye on the 500W BIOS issue; hopefully they solve it soon.

For the Port Royal run, you may be hitting power limits. Run it again and check the lowest voltage point it hit (let's say 1.037V), then lock that voltage and add +210 or higher. The average clocks should be better, and hopefully the score higher.


----------



## Jpmboy

Esenel said:


> Watercool's vote selected FTW3.


Not sure I understand what you are saying...


ExDarkxH said:


> I'm thinking that the EVGA FTW3 card is locked down and that 450W is its limit.
> I did some testing on air: with a +210 offset my card peaks at 2,235 MHz and can complete Port Royal. I pushed it a little further to 2,250, but ended up crashing at 1:07.
> The reason I'm focusing on max clocks is that if I can control temps, I may be able to see very high sustained clocks.
> 
> So I'm now thinking of shunting the whole board, but I want to ask: is it worth it with my card? I have an Optimus water block coming next month, going into a 3x 420mm radiator setup, so I would be able to control temps very well. Would you just take it and be satisfied?
> *If you owned my card, would you shunt it*?


Me... no. But then I usually do not keep cards all that long (unless they are milestone SKUs like the V or something). At least back when I was doing the "buy-bench-sell" circuit, resale on any hard-modded card got hammered.


----------



## Esenel

Jpmboy said:


> Not sure I understand what you are saying...


The company Watercool held a vote on which GPU should get a waterblock.
The winner was the FTW3.


----------



## Jpmboy

Ah. I agree.


----------



## Caper87

Hello! I just installed my 3090 TUF OC yesterday. My first run in 3DMark Time Spy was 17800 (+500 mem / +50 core). I have never flashed a BIOS on a card before and am trying to figure it out as well.


----------



## DooRules

Jpmboy said:


> right now I'm only in for the OPtimus block. What others are available for the FTW3?


Not sure what other blocks might be available yet. No interest in Optimus though.
Time to start looking for sure. Just been super busy on the farm, finally slowing down now.


----------



## Sync0r

Those of you who have shunted: has anyone shunted the PCIe slot resistor yet? If so, how far have you pushed the power limit of the PCIe slot? Thanks

I've currently shunted the two PCIe connectors and am seeing a power draw limit of around 475W.


----------



## Caper87

VickyBeaver said:


> So this is something that cropped up a couple of days ago that may interest ASUS TUF owners, especially the ASUS 450W OC BIOS
> 
> https://twitter.com/i/web/status/1317076936879898624
> My question is: does the TUF stock BIOS work on other 2x8-pin cards? If so, has anyone attempted a flash from the TUF BIOS to the new OC BIOS from ASUS on a reference 3090?





cstkl1 said:


> View attachment 2462351
> 
> 
> nfs 4k - 727w
> 
> 5.1ghz 10900k
> 2x 16gb 4266c17
> Strix 3090 oc


Savageee


----------



## Jpmboy

DooRules said:


> Not sure what other blocks might be available yet. No interest in Optimus though.
> Time to start looking though for sure. Just been super busy on the farm, *finally slowing down now*.


lol - it's just picking up here! ✌


----------



## Nizzen

Finally got my Asus 3090 Strix OC here in Norway 
Is there any better BIOS than the "450W" BIOS? Any XOC BIOS in the wild?

I also got a Bykski Strix 3090 waterblock and backplate ready


----------



## DrunknFoo

Nizzen said:


> Finally got my Asus 3090 Strix OC here in Norway
> Is there any better BIOS than the "450W" BIOS? Any XOC BIOS in the wild?
> 
> I also got a Bykski Strix 3090 waterblock and backplate ready


Back up your stock BIOS and give the XOC FTW3 a shot. I've been searching around and haven't found any posts confirming whether it works and/or pulls the correct power.


----------



## Thanh Nguyen

480W vs 500W. Hope you have fun.


----------



## Nizzen

Thanh Nguyen said:


> 480W vs 500W. Hope you have fun.


I want 1200w if I can find it...


----------



## HyperMatrix

Nizzen said:


> I want 1200w if I can find it...


lol. You'll need more voltage before you can even make use of that much power. And if you're willing to go that extreme with volt modding, shunting resistors would be an easy job for you.


----------



## ExDarkxH

Even then it only goes so far; nothing beats binning/golden samples.
Take a look at JayzTwoCents. He picked the best sample from his stack of 3090s, then volt modded it on top of a fully unlocked secret BIOS that is, I believe, 2000W. He then ran the thing with a water cooler attached to an air conditioner at sub-ambient temps and still only scored 15,600 in Port Royal.

Whereas you have a guy in this thread who scored over 15,000 with an air cooler, stock BIOS, and no mods.

Your card sample is everything.


----------



## LVNeptune

Nizzen said:


> This is a 3090 Palit with "some" cooling: a "hexachannel" DIMM cooler  While we wait for the Strix....
> Look out for some scores from Norway soon
> @Carillo


Can you post some more pictures of how you did this? Did you drill 4 holes in the backplate and use just a 0.5-1mm thermal pad in between?


----------



## LVNeptune

Nizzen said:


> Finally got my Asus 3090 Strix OC here in Norway
> Is there any better BIOS than the "450W" BIOS? Any XOC BIOS in the wild?
> 
> I also got a Bykski Strix 3090 waterblock and backplate ready


The stock backplate is going to be BAD for thermals. There are RAM chips back there and they are going to burn up. One guy modded a backplate with a waterblock (I quoted him above this post).


----------



## Alemancio

Guys, how's the Zotac Trinity? Can you flash it with a different BIOS to unlock the power slider?


----------



## LiquidHaus

For those who own the FTW3 Ultra, can you give me some overclock settings you're running with? Trying to get a baseline so I can see how this card is doing versus others out there.


----------



## anethema

ExDarkxH said:


> I'm thinking that the EVGA FTW3 card is locked down and that 450W is its limit.
> I did some testing on air: with a +210 offset my card peaks at 2,235 MHz and can complete Port Royal. I pushed it a little further to 2,250, but ended up crashing at 1:07.
> The reason I'm focusing on max clocks is that if I can control temps, I may be able to see very high sustained clocks.
> 
> So I'm now thinking of shunting the whole board, but I want to ask: is it worth it with my card? I have an Optimus water block coming next month, going into a 3x 420mm radiator setup, so I would be able to control temps very well. Would you just take it and be satisfied?
> If you owned my card, would you shunt it?


Shunt modding is basically for benchmarking. My card went from 12500 or whatever in Port Royal at stock to nearly 15k, overclocked and shunted.

But in games it's a couple percent difference. IMO, for real-world use it's totally not worth it; you'd get a significant enough gain just by undervolting a bit.

It isn't worth it, IMO, to draw 600W balls-to-the-wall for a 2-5% max gain in gaming.


----------



## Falkentyne

anethema said:


> Shunt modding is basically for benchmarking. My card went from 12500 or whatever in Port Royal at stock to nearly 15k, overclocked and shunted.
> 
> But in games it's a couple percent difference. IMO, for real-world use it's totally not worth it; you'd get a significant enough gain just by undervolting a bit.
> 
> It isn't worth it, IMO, to draw 600W balls-to-the-wall for a 2-5% max gain in gaming.


Shunt modding matters most at 4K, when your card is constantly hammering the power limit and downclocking because of how much power that resolution needs (or when supersampling, e.g. 4xAA @ 1920x1080 with 200% render scale): the card wants as much clock as possible while pushing 4x the pixels of 1920x1080. Getting a sustained 144 FPS at 4K at max settings in any modern non-indie title is pretty much impossible; even 120 FPS is a big struggle. And if you're trying to use blur reduction (strobing) at 4K at something like 100Hz with vsync on / scanline sync, where you want FPS = Hz, every single % of performance counts.


----------



## Jpmboy

... or a good ole bios editor would do the same.


----------



## DrunknFoo

Nizzen said:


> I want 1200w if I can find it...


Haha, I think you can hit that by shunt stacking 3x 0.003Ω on each shunt.

If you do try the 500W EVGA BIOS, would you mind testing the functionality of the ports for me, to see what gets disabled?

(Still waiting for a card here. /sigh) I'll live vicariously through you.


Lol, not XOC, I meant the 500W one 😜


----------



## jura11

Nizzen said:


> I want 1200w if I can find it...


Maybe someone from the OC community will leak or release an XOC BIOS. I used the Asus RTX 2080 Ti XOC BIOS and am still using it for benchmarks.

Hope this helps 

Thanks, Jura


----------



## Thanh Nguyen

I did a shunt mod and did not see a real gain in daily usage. Of course the card may clock higher, but remember it's a silicon lottery: some cards can clock higher with a higher power limit, but some can't. Moreover, the card runs hotter, so the clock drops a little.


----------



## Alemancio

Guys, how's the Zotac Trinity? Can you flash it with a different BIOS to unlock the power slider? (It seems the Gigabyte Gaming OC works?)


----------



## Zurv

Thanh Nguyen said:


> I did a shunt mod and did not see a real gain in daily usage. Of course the card may clock higher, but remember it's a silicon lottery: some cards can clock higher with a higher power limit, but some can't. Moreover, the card runs hotter, so the clock drops a little.


Are you water cooling too? All that extra power doesn’t help if the card is getting super hot.


----------



## NapsterAU

Alemancio said:


> Guys how's the Zotac Trinity? Can you flash it with a different BIOS to unlock the power slider? (it seems that the Gigabyte Gaming OC Works?)


Yep, the Gigabyte BIOS works, but I lost a DisplayPort on mine; no loss, as it wasn't being used.

You will be able to adjust up to a 390W PL. The max score on my card in Port Royal is 13700 on stock cooling with about a +85 core / +800 mem overclock.

It hits the power limit constantly, so it needs to be higher. 450W would be nice.


----------



## Thanh Nguyen

Zurv said:


> Are you water cooling too? All that extra power doesn’t help if the card is getting super hot.


Yes, I'm on water. It was at 36-38°C before the shunt mod and now it's around 43-45°C. 14100 max in Port Royal before and 15031 now. However, a clock that passes Port Royal is not stable in games, so I have to clock down.


----------



## Dreams-Visions

wirx said:


> Hi
> I have the MSI Gaming X Trio 3090 and, thanks to this forum, found out that I can flash it with the 500W EVGA BIOS. It was a first for me, but luckily all went well.
> I can confirm that all 3 DisplayPorts are working fine.
> How do you guys get such low temps with the stock air coolers?
> My best is 69°C, but I have seen around 50°C mentioned here? I managed to get 62°C with a leaf blower, but this is still too high. What am I doing wrong? Room temp is about 24°C.
> Tried with the core voltage slider at 0%, but the temp remains the same.
> So far my best is Port Royal 14493 and Time Spy Extreme graphics 11549
> https://www.3dmark.com/compare/pr/434899/pr/434882
> 
> 
> https://www.3dmark.com/spy/14785654
> 
> 
> Board power draw remains quite stable at 500W, and I have sometimes seen 516W peaks. The PSU is a Corsair RM750x 750W
> View attachment 2463218
> 
> View attachment 2463219
> 
> View attachment 2463220


Man, this is awkward.

I have a 3090 Trio X and a 3090 FTW3 coming this week. I did not expect the Trio to stand a good-to-great chance of outperforming the FTW3 Ultra on the BIOS that was intended for the latter rather than the former.

I guess I'm going to leave both in the box and decide which to keep once EVGA responds with an updated beta BIOS. But with the X Trio performing like this, I'm not sure why I would keep the FTW3 Ultra.

Did your X Trio lose functionality on any outputs? I saw you mention all DP ports are working fine, but is the HDMI okay as well? If so, the X Trio has suddenly become a much more valuable card than its price suggested. Your performance numbers are great.


----------



## Pepillo

All ports work well on the Gaming X with the EVGA 500W BIOS, DP and HDMI.


----------



## mattxx88

I have a TUF OC coming next week. If I'm right, there's no way other than shunting to increase the PL, is there?


----------



## Fire2

mattxx88 said:


> I have a TUF OC coming next week. If I'm right, there's no way other than shunting to increase the PL, is there?


You can go from 375 to 390,
if that makes any difference...


----------



## mattxx88

Fire2 said:


> you can go from 375 to 390.
> if that makes any difference...


Just ordered 3mΩ resistors, those should do it 🤣
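For anyone curious what stacking a 3mΩ resistor actually does: a minimal sketch of the parallel-resistance math, assuming a 5mΩ stock shunt (common on these boards, but verify on your own card):

```python
# Shunt stacking: soldering a resistor on top of the stock shunt
# puts the two in parallel, lowering the resistance the card uses
# to measure current, so it under-reads its own power draw.
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock, stacked = 0.005, 0.003              # ohms (5 mOhm assumed stock)
combined = parallel(stock, stacked)        # 0.001875 ohms
factor = stock / combined                  # true draw / reported draw
print(round(factor, 2))                    # 2.67

# A 375 W BIOS limit would then correspond to roughly:
print(round(375 * factor))                 # 1000 (watts of real draw)
```

In other words, a 3mΩ stack nearly triples the effective power limit, which is why the 375-vs-390W BIOS difference stops mattering after the mod.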


----------



## mbm

Sorry if I haven't paid enough attention to this thread, but what do you gain from a constant, minimum boost frequency in everyday gaming?
From my experience, in games like BF5 where my GPU is maxed at 99%, it will not boost as high as it might in COD MW.
So I settle at 1920 MHz at 0.850V; here I can maintain 1920 MHz in all scenarios.
Isn't a stable core frequency better than jumping up and down between 1800-2100 MHz?
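A rough first-order model backs up the undervolting approach: dynamic power scales roughly with V² x f, so trading a little frequency for a big voltage drop frees a lot of power budget. The numbers below are illustrative only (real cards add leakage and memory power):

```python
# Dynamic power of CMOS logic scales roughly with V^2 * f.
def rel_power(v, f, v_ref, f_ref):
    """Power relative to a reference voltage/frequency point."""
    return (v / v_ref) ** 2 * (f / f_ref)

# Undervolted 1920 MHz @ 0.850 V vs a stock-ish 2100 MHz @ 1.050 V:
ratio = rel_power(0.850, 1920, 1.050, 2100)
print(round(ratio, 2))  # 0.6 -> ~40% less power for ~9% less clock
```

That headroom is why a locked 0.850V point can hold 1920 MHz even in power-hungry titles where stock boost would throttle.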


----------



## CanM4

Hi guys,
I have the MSI Ventus OC version and I want to OC it, but the power limit is so bad.
Some of you have flashed the TUF OC or Gaming OC BIOS onto the Ventus. Would you recommend the TUF or the Gaming OC BIOS for the Ventus?

For the moment I can't reach a stable 1900 MHz with the 350W power limit.


----------



## Thoth420

I got my hands on the MSI Ventus so that should tide me over until the Vision comes out. Card runs fairly cool and quiet which is all I ask for in the meantime. I feel fortunate to have gotten one with so many people still having no luck.


----------



## Cavokk

mbm said:


> Sorry if I haven't paid enough attention to this thread, but what do you gain from a constant, minimum boost frequency in everyday gaming?
> From my experience, in games like BF5 where my GPU is maxed at 99%, it will not boost as high as it might in COD MW.
> So I settle at 1920 MHz at 0.850V; here I can maintain 1920 MHz in all scenarios.
> Isn't a stable core frequency better than jumping up and down between 1800-2100 MHz?


Indeed - I also get better performance with undervolting and sacrificing boost frequency for a rock stable constant frequency.

C


----------



## CrazyThunderbird

long2905 said:


> How is your temperature?


It wasn't a lot higher... but one of my DPs stopped working, so I flashed back.


----------



## mbm

#3578
So why is everybody so interested in these higher-wattage mods?


----------



## CanM4

mbm said:


> #3578
> So why is everybody so interested in these higher-wattage mods?


More power limit means a more stable frequency. You can undervolt the GPU, but when a game uses the GPU at 100%, you need a higher power limit to hold a stable frequency,
even with an undervolt.


----------



## Thanh Nguyen

mbm said:


> #3578
> So why is everybody so interested in these higher-wattage mods?


Because we want higher clocks, but each GPU has its own limit, even with a 1000W power limit. I saw a Strix with a 480W power limit that holds a 2175 MHz stable clock in Port Royal, while my card has a 700W power limit and can't get past 2130 MHz.


----------



## mbm

So a higher power limit has nothing to do with a higher vcore?
My card (Asus 3090 TUF OC) uses a max of 350W at 1920 MHz / 0.850V.


----------



## ExDarkxH

I did my first shunt last night; it was easier than expected. It was only the PCIe slot... minimally invasive and didn't require taking the card apart. I should be getting ~37 extra watts from this, so now I'm in line with the Strix BIOS (which isn't compatible with EVGA cards).









Unfortunately, I can no longer run a +210 offset; I can run 190 max, so perhaps I should experiment with undervolting.
I can't deny this result though. I'm pretty satisfied, as my card was very power starved. When my waterblock finally comes I'm thinking of shunting the whole board; I should easily get into the top 10, which would be pretty nice.


----------



## mbm

Why do you all talk about offsets and not real core frequency?


----------



## Sync0r

ExDarkxH said:


> I did my first shunt last night; it was easier than expected. It was only the PCIe slot... minimally invasive and didn't require taking the card apart. I should be getting ~37 extra watts from this, so now I'm in line with the Strix BIOS (which isn't compatible with EVGA cards).
> 
> Unfortunately, I can no longer run a +210 offset; I can run 190 max, so perhaps I should experiment with undervolting.
> I can't deny this result though. I'm pretty satisfied, as my card was very power starved. When my waterblock finally comes I'm thinking of shunting the whole board; I should easily get into the top 10, which would be pretty nice.


It will be due to it running at a higher voltage and hitting higher boost bins; sometimes it's best to just tweak the curve manually.

So you shunted the PCIe slot? Stacked shunt? What size shunt did you use? Ta


----------



## ExDarkxH

mbm said:


> Why do you all talk about offsets and not real core frequency?


We talk about both


----------



## ExDarkxH

Sync0r said:


> It will be due to it running at a higher voltage and hitting higher boost bins; sometimes it's best to just tweak the curve manually.
> 
> So you shunted the PCIe slot? Stacked shunt? What size shunt did you use? Ta


I stacked it. First I tried a 10mΩ shunt, but it was too small and I didn't want to mess up my first shunt, so I went with 8mΩ and it ran fine. I'm probably getting a little more than the 37W, maybe closer to 50W, but my Apex mobo seems to have no issue with it.

When I eventually shunt the rest I would like to go back to 10mΩ on the PCIe, but I need to find a bigger shunt because that thing was tiny.

Also, I have a ton of 5mΩ for the rest of the board, but GN went with 8mΩ because it gives less chance of the card forcing safe mode... thoughts?


----------



## Sync0r

ExDarkxH said:


> I stacked it. First I tried a 10mΩ shunt, but it was so small I got worried it wouldn't work, so I went with 8mΩ and it ran fine. I'm probably getting a little more than the 37W, maybe closer to 50W, but my Apex mobo seems to have no issue with it.
> 
> When I eventually shunt the rest I would like to go back to 10mΩ on the PCIe, but I need to find a bigger shunt because that thing was tiny.
> 
> Also, I have a ton of 5mΩ for the rest of the board, but GN went with 8mΩ because it gives less chance of the card forcing safe mode... thoughts?
> Would you stack 5s or 8s?


So an 8mΩ stacked on a 5mΩ means you could be drawing a potential 121.9W from the PCIe slot, but I expect you are now hitting the power limit of the PCIe connectors first. In GPU-Z, what power draw are you seeing from the PCIe slot? If you multiply the reported value by 1.63, you should get roughly the real power draw.
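The 1.63 multiplier falls straight out of the parallel-resistance math, assuming a 5mΩ stock shunt on the slot sense line (an assumption; check your own board):

```python
# An 8 mOhm resistor stacked on an (assumed) 5 mOhm stock shunt:
stock, stacked = 0.005, 0.008                    # ohms
combined = stock * stacked / (stock + stacked)   # ~0.00308 ohms
factor = stock / combined                        # under-read factor
print(round(factor, 3))                          # 1.625 -> "multiply by 1.63"

# The slot's 75 W spec rating, as reported by the card, then maps to:
print(round(75 * factor, 1))                     # 121.9 (watts of real draw)
```

Note the card still *thinks* it is drawing the reported value, which is why the BIOS limit is respected while the real draw is ~1.63x higher.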


----------



## rawsome

Thoth420 said:


> I got my hands on the MSI Ventus so that should tide me over until the Vision comes out. Card runs fairly cool and quiet which is all I ask for in the meantime. I feel fortunate to have gotten one with so many people still having no luck.


The Gigabyte Gaming OC BIOS (390W) is the way to go. The alternative is the Aorus Master BIOS, which may sometimes peak at 400W, but its center fan only goes up to 2000 RPM, much lower than the Ventus can go. I'm not sure whether any DPs get disabled, as I only use HDMI.


----------



## Menko22

I received my 3090 Strix OC a few days ago and have been trying to overclock it at least a little.

As soon as I put the GPU core to more than +25 MHz it freezes, no matter what power limit or values I set in Afterburner or other tweak programs.

Is that normal?


----------



## mbm

#3591 No, this is not "normal", but what is your boost frequency? 1800 or 2200 MHz?


----------



## Sync0r

Menko22 said:


> I received my 3090 Strix OC a few days ago and have been trying to overclock it at least a little.
> 
> As soon as I put the GPU core to more than +25 MHz it freezes, no matter what power limit or values I set in Afterburner or other tweak programs.
> 
> Is that normal?


What power supply you got?


----------



## kot0005

Well, F the 3090, I told you guys. I will be moving to the cheaper 3080 Ti if NVIDIA releases it. Will be cancelling my Strix OC 3090.


----------



## Thanh Nguyen

6900xt ($999) >= 3090 ($1499) = Good game AMD.


----------



## Menko22

mbm said:


> #3591 No, this is not "normal", but what is your boost frequency? 1800 or 2200 MHz?


My boost in Port Royal goes from 1800-1995 MHz with the card at stock. Temperature is 70°C (23°C in the room). Port Royal score at stock is 13096.

My PSU is a Corsair AX1200. Power shouldn't be a problem.

I saw the core go above 2000 MHz once, but just once. As soon as I put +50 MHz on the core, games or benchmarks freeze. I put the power limit at max and fans at 100%; nothing changes.

Is it "normal" or do I have a bad chip?


----------



## Falkentyne

Thanh Nguyen said:


> 6900xt ($999) >= 3090 ($1499) = Good game AMD.


The 6900 XT is running overclocked (Rage Mode) and with a special memory boost feature available only on Ryzen 5000 processors; the 3090 is stock. On Intel, stock vs stock, the 3090 should win.
What matters is what both cards do overclocked on air, and how far the 6900 XT scales when the gloves come off. We know the 3090 gains about 10% at 4K at 600W with a shunt mod.


----------



## bogdi1988

Falkentyne said:


> The 6900 XT is running overclocked (Rage Mode) and with a special memory boost feature available only on Ryzen 5000 processors; the 3090 is stock. On Intel, stock vs stock, the 3090 should win.
> What matters is what both cards do overclocked on air, and how far the 6900 XT scales when the gloves come off. We know the 3090 gains about 10% at 4K at 600W with a shunt mod.


Some AIB cards apparently go up to 2.5GHz OC, so it should be a REALLY interesting comparison!


----------



## kot0005

The AMD card is only 300W, while the 3090 is 400W, so yeah... NVIDIA really has to release the 3080 Ti now unless they've given up. The 3080 Ti should hopefully have more OC headroom and VRAM than the 3080. AMD with that $999 pricing though... they must be super confident.


----------



## Wihglah

Not interested in what might happen with the AIB cards in January, TBH; I want a new card now.
I expect 3090 prices to drop $200-300 if they ever get in stock, though.
No mention of RT performance either; it must be poor.


----------



## mbm

#3596
I wouldn't say 2000 MHz on the core is bad.


----------



## Fire2

rawsome said:


> Gigabyte gaming OC bios (390W) is the way to go. alternative is the aerus master bios which may peak sometimes at 400w but the center fan can only go up to 2000rpm which is much lower than the ventus can go. im not sure about DPs get disabled or not as i only use HDMI.


I agree, the centre fan on the Gigabyte 390W BIOS is very slow (2000 RPM), and the outer fans also don't run very fast (2800 RPM)!

The TUF BIOS was better, at around 3000 RPM,

but MSI's runs at 3400 RPM with much better cooling.

It takes an age to get it cooled back down between Port Royal benchmark runs, but I'm nearly at 14k, so all is good!


----------



## mbm

You are aware that MSI Afterburner can do a custom fan curve?
It seems pretty pointless to flash a BIOS just to get different fan speeds.


----------



## Redlurkeraite

I have just received the Palit RTX 3090.
It boosts up to 2150 MHz but averages around 1950, reaching temperatures of 70°C. Is this above average for these cards?
I am thinking of returning it due to pretty loud coil whine :/


----------



## Sync0r

What I am confused by is that the AMD 6900 XT numbers were "up to" frame rates. Does this mean average, or maximum? If it's average, why not say average? It's all a bit confusing. They also state their numbers are overclocked ("Rage Mode") and require Ryzen 5000 to take advantage of Smart Access Memory, while the 3090 can still be overclocked and have DLSS enabled for more FPS. The numbers they quote for NVIDIA are also lower than actual benchmarks from other reviews. I guess we have to wait for unbiased reviews.


----------



## Fire2

mbm said:


> You are aware that MSI Afterburner can do a custom fan curve?
> It seems pretty pointless to flash a BIOS just to get different fan speeds.


I force 100% fan speed.

I flash a BIOS for extra watts:
390 > 375 > 350.


----------



## Menko22

The silicon lottery matters a lot with the Asus Strix OC.

I just got my second one. The first one wouldn't even start with +50 MHz on the core and the power limit at max.

The second one handles +100 MHz with no problem. I still have to test a bit more to see where it can go.

Finally I can see 2100 MHz core in benchmarks and games.


----------



## ExDarkxH

https://www.newark.com/tt-electroni...t4/current-sense-res-0r08-3w-2512/dp/54AH5824 
What do you guys think of these?
Good to recommend?


----------



## Fire2

https://www.3dmark.com/pr/444685


port royal

14k, that's my absolute max with this 390W 2x8-pin card,
other than maybe another 100 or so MHz on the CPU.


----------



## kheopstr

Hi guys, I have been using the TUF 3090 OC version for over 3 weeks. This card is really good after the 2080 Ti; my only complaint is the core clocks. I just bought a new Asus Strix 3090 OC. With the TUF, as you know, you can only push the power limit to 107% (375W); at that setting the temperature never goes above 70°C and the card runs very quietly... As I said, the only problem with the TUF version is that the GPU clocks never stay stable.

For the Strix OC I pushed it to 123% (480W), raising only the power limit; no other settings such as core clock or memory clock were touched. Then I started Port Royal and got a shock: after one minute of benchmarking the fans hit 100% and the temperature was at nearly 80°C. With this fan noise I'm never going to get anything done. Is this normal for this card, or have I done something wrong?

My specs:
Intel 10700K @ 5.1
Corsair Dominator 2x16GB (32GB) @ 3400 MHz CL14
Thermaltake Toughpower RGB 1200W
2x Evo 970 SSDs


----------



## Wrathier

kheopstr said:


> Hi guys, I have been using the TUF 3090 OC version for over 3 weeks. This card is really good after the 2080 Ti; my only complaint is the core clocks. I just bought a new Asus Strix 3090 OC. With the TUF, as you know, you can only push the power limit to 107% (375W); at that setting the temperature never goes above 70°C and the card runs very quietly... As I said, the only problem with the TUF version is that the GPU clocks never stay stable.
> 
> For the Strix OC I pushed it to 123% (480W), raising only the power limit; no other settings such as core clock or memory clock were touched. Then I started Port Royal and got a shock: after one minute of benchmarking the fans hit 100% and the temperature was at nearly 80°C. With this fan noise I'm never going to get anything done. Is this normal for this card, or have I done something wrong?
> 
> My specs:
> Intel 10700K @ 5.1
> Corsair Dominator 2x16GB (32GB) @ 3400 MHz CL14
> Thermaltake Toughpower RGB 1200W
> 2x Evo 970 SSDs


This BIOS addresses the fan issues: https://t.co/rL8kXu8c3e?amp=1


----------



## Wrathier

I'm a bit annoyed right now... I sold my 3090 Gigabyte Gaming OC and ordered a TUF OC that I still haven't received. Following this thread over the last few days, it seems my card was actually fine, with a score of 13750 in Port Royal, and the fluctuating clocks could have been sorted with undervolting.

Damn, I hope this TUF is going to be great or I'll be upset.

I will also be annoyed if NVIDIA suddenly crashes prices by releasing a 3080 Ti with 20GB of VRAM that overclocks higher than a 3090, for 999 dollars, to compete with AMD.

This is a ****show, to be honest.

I'm not going to shunt or watercool my card; I just want a decent card. But now I understand that has more to do with luck than brand with this generation of cards.

If I had just kept my Gigabyte card I wouldn't have had any downtime and could have gamed and had fun for the last 10 days, instead of worrying whether my TUF OC will be any good.


----------



## Sync0r

Fire2 said:


> https://www.3dmark.com/pr/444685
> 
> 
> port royal
> 
> 14k, that's my absolute max with this 390W 2x8-pin card,
> other than maybe another 100 or so MHz on the CPU.


Water block and shunt = great e-peen. In reality I guess you just get more stable clocks and a bit more FPS. My 390W 2x8pin card draws 475W now: https://www.3dmark.com/pr/439558


----------



## Fire2

wow stunning score dude!


----------



## Wrathier

Sync0r said:


> Water block and shunt = great e-peen. In reality I guess you just get more stable clocks and a bit more FPS. My 390W 2x8pin card draws 475W now: https://www.3dmark.com/pr/439558


I can't wait to see, in 3-6 months when cards start burning out, how it will be handled under warranty. Once I know the card would be replaced, I will shunt - not a second before. I honestly can't afford to destroy the card.


----------



## jtisgeek

rawsome said:


> do you have a LG OLED like CX? see
> 
> __
> https://www.reddit.com/r/OLED_Gaming/comments/jhuq7h
> there is also a suggested workaround (setting input to pc mode).
> i currently think it is screen related, not gpu related.


I have a LG c9 and get this at random but I can usually just turn the tv off and back on and then it's fine till the next reboot etc.


----------



## Sync0r

Wrathier said:


> I can't wait to see, in 3-6 months when cards start burning out, how it will be handled under warranty. Once I know the card would be replaced, I will shunt - not a second before. I honestly can't afford to destroy the card.


I don't think I have to worry about warranty, it's already gone


----------



## Falkentyne

Wrathier said:


> I can't wait to see, in 3-6 months when cards start burning out, how it will be handled under warranty. Once I know the card would be replaced, I will shunt - not a second before. I honestly can't afford to destroy the card.


This is smart. Never mod unless you can afford to replace stuff if things go wrong. And really, the only point of modding is to compete on leaderboards or to help with 4K FPS, because that is what makes the card sweat.

The FE cards won't start burning as long as they are properly cooled and you stay under 600W. The 12 pin is capable of up to 600W, and as long as you are using an official 12 pin connector, or 3 eight pins on AIB cards, that's not an issue. Two 8 pins start getting into wild west territory. Not only that, you aren't modding the _voltage_. It's voltage modding which will burn cards if you don't take precautions.

Provided you don't mess up, there's more risk to the PSU or the PCIE port itself than the card. As it is right now it's similar to just increasing the power limit on a CPU.
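For context on the connector limits mentioned above, here is a rough budget check. The per-input wattages below are the commonly cited PCIe spec values plus the 600W 12-pin figure from the post, used here as illustrative assumptions:

```python
# Rough sanity check of board power vs. connector budgets.
# Figures are nominal spec values (assumptions), not measurements.
PCIE_SLOT_W = 75     # PCIe x16 slot
EIGHT_PIN_W = 150    # per 8-pin PCIe auxiliary connector
TWELVE_PIN_W = 600   # NVIDIA 12-pin, per the post above

def budget(connectors: list) -> int:
    """Total nominal power budget for a card's power inputs."""
    per = {"8pin": EIGHT_PIN_W, "12pin": TWELVE_PIN_W}
    return PCIE_SLOT_W + sum(per[c] for c in connectors)

print(budget(["8pin", "8pin"]))          # 375 W: why ~475 W on 2x8pin is "wild west"
print(budget(["8pin", "8pin", "8pin"]))  # 525 W
print(budget(["12pin"]))                 # 675 W
```

In practice connectors and cables carry well beyond nominal spec, which is exactly the gamble the post describes.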


----------



## Wrathier

Falkentyne said:


> This is smart. Never mod unless you can afford to replace stuff if things go wrong. And really, the only point of modding is to compete on leaderboards or to help with 4K FPS, because that is what makes the card sweat.
> 
> The FE cards won't start burning as long as they are properly cooled and you stay under 600W. The 12 pin is capable of up to 600W, and as long as you are using an official 12 pin connector, or 3 eight pins on AIB cards, that's not an issue. Two 8 pins start getting into wild west territory. Not only that, you aren't modding the _voltage_. It's voltage modding which will burn cards if you don't take precautions.
> 
> Provided you don't mess up, there's more risk to the PSU or the PCIE port itself than the card. As it is right now it's similar to just increasing the power limit on a CPU.


Thanks 

I run an Odyssey G9, but will only overclock to whatever I can reach without a waterblock and shunt. I will most likely throw the Gigabyte bios onto the TUF OC, but honestly, I'm not sure it's worth it.


----------



## rawsome

Falkentyne said:


> Two 8 pins start getting into wild west territory.


I guess more for the PSU than for the GPU, right?
Are you guys shunting for benchmarking only, or for daily use? Has anyone shunted an older card and had it running at high wattage for years? Do shunted cards regularly burn out, or do they fail no more often than any other GPU?
I know nothing is safe in this world, I just want to do something stupid 


I'm thinking about shunting my MSI Ventus OC, which is said to have 20 amp fuses on the PCIe connectors.










When going that way, will I have a high risk of blowing a fuse on power peaks?

I could also go back to my stock bios, which is 350W; that would translate to 206W per plug.

Also, I have read over here that shunting only the PCIe connectors may not cut it; that guy says he had to "shunt everything but the DDR6". Has anyone else already shunted the Ventus? How did it go?
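A rough way to sanity-check the 20 A fuse concern, assuming a 12 V rail, a nominal slot share, and an even split across the two 8-pins (all illustrative assumptions; real loads are not split evenly and transient spikes exceed the average):

```python
# Quick check: does a given per-connector draw approach a 20 A fuse?
# Assumes 12 V rail; slot share and even split are illustrative assumptions.
RAIL_V = 12.0
FUSE_A = 20.0

def plug_current(total_w: float, slot_w: float = 75.0, plugs: int = 2) -> float:
    """Amps per 8-pin, assuming the slot carries slot_w and the rest splits evenly."""
    return (total_w - slot_w) / plugs / RAIL_V

for total in (350, 390, 450, 500):
    amps = plug_current(total)
    print(f"{total} W board -> {amps:.1f} A per plug (fuse margin: {FUSE_A - amps:.1f} A)")
```

Even 500 W total stays under 20 A per plug on paper; it's the millisecond-scale transients Ampere cards are known for that would threaten a fuse first.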


----------



## Dreams-Visions

Pepillo said:


> All ports works well on Gaming X with EVGA 500w bios. DP and HDMI.


Excellent. I have a tough decision to make soon, then.


----------



## HyperMatrix

Thanh Nguyen said:


> 6900xt ($999) >= 3090 ($1499) = Good game AMD.


6900XT is a solid competitor to the 3090 in performance. And with the price difference, it's the better value by far. But the 3090 is still a superior card, even if not by as big a margin as before. It does seem likely that Nvidia would come out with a $999 3080 Ti to compete with it though.



bogdi1988 said:


> Some AIB cards apparently go up to 2.5GHz OC, so it should be a REALLY interesting comparison!


There are no AIB models for the 6900XT. It's "Founders Edition" only. Direct from AMD. They're highly binned chips. And that's the best they can do. Both OC'd to their max, 3090 will win every time. Not to mention better RT performance (since AMD didn't show any RT bench results for comparison), and also no DLSS. 

However - I will hand it to AMD. the 6900XT is a really solid card. And it's got that big FU pricing. If I didn't have a G-Sync display, and I could pick one up today while I can't get my hands on a 3090, I'd buy it. But I'm expecting the card to be as rare and hard to get as the Radeon VII.




Wrathier said:


> I'm a bit annoyed right now... I sold my 3090 Gigabyte Gaming OC and ordered a TUF OC that I still haven't received. And following this thread over the last few days, it seems like my card was actually ok with a score of 13750 in Port Royal, and that the fluctuating clocks could have been sorted with undervolting.
> 
> Damn hope that this TUF is going to be great or I'll be upset.
> 
> I will also be annoyed if Nvidia suddenly crashes prices by releasing a 3080 Ti with 20GB VRAM that overclocks higher than a 3090 for 999 dollars to compete with AMD.
> 
> This is a ****show to be honest.
> 
> I'm not going to shunt or watercool my card, I just want a decent card, but now I understand that it has more to do with luck than brand with this generation of cards.
> 
> If I had just kept my Gigabyte card I wouldn't have any downtime and would have been able to game and have fun for the last 10 days instead of worrying if my TUF OC will be any good.


Based on previous Nvidia strategy, a 3080 Ti is guaranteed in response to AMD. I'm thinking back to Kepler right now, when AMD brought the heat. There was that rumor of a 3080 Ti with 12GB VRAM (which, honestly, is enough for 4K gaming) and also more cores than the normal 3080, very close to the 3090's core count. That seems likely at $999. However, don't expect higher clocks. Nvidia wouldn't be able to do that without switching over to TSMC; Samsung 8nm is tapped out at current clocks.


----------



## Wihglah

So new driver that unlocks another 10% performance from nVidia this weekend?


----------



## Thanh Nguyen

Wihglah said:


> So new driver that unlocks another 10% performance from nVidia this weekend?


Maybe 20% price cut of 3090 for upcoming buyer. 😆


----------



## Menko22

Wrathier said:


> This bios addresses the fan issues: https://t.co/rL8kXu8c3e?amp=1


What does this bios do? Any changelog?


----------



## Wihglah

Thanh Nguyen said:


> Maybe 20% price cut of 3090 for upcoming buyer. 😆


Could be. Not sure I could stand to wait another month though.


----------



## LordGurciullo

Falkentyne said:


> I used 2560x1440, porch 48/3, sync 32/5, HT: 2720, VT: 1502, refresh 100hz, pixel clock 408.55 mhz.
> 
> 120hz has flickering and shimmering pixels and 475 mhz pixel clock so its unusable. The highest I can go is 115hz, and that requires lowering the horizontal total as far down as possible where if you go any lower, you get an out of range error. I suspect some newer XL2720 panels (maybe the Zowie rebrand) may be capable of 120hz if you stay below 480 mhz. I haven't seen a single 1080p monitor that can overclock to 1440p @ 144hz. I only used 100hz for testing purposes. It messes up some of the scaling and makes things blurrier (since it's not a native 1440p panel) but it functioned more like an upsampled screen with a small amount of supersampling (like if you created 2560x1440 via DSR).
> 
> Normally I use custom vertical totals for 1080p 100hz and 120hz to reduce blur reduction crosstalk. For some reason however, using 2560x1440 @ 100hz prevents 1920x1080 100hz from working right. Selecting it causes a signal resolution of 2560x1440 @ 100hz and a display resolution of 1920x1080 which looks blurry. I didn't have this issue with my r9 290X however. But I don't remember if I was on windows 10 or 7 back then. This also causes an issue in Fortnite where Fortnite tries to automatically use the highest resolution instead of what you selected on the desktop, so I just went back to my regular profile.
> 
> Usually I use an overclocked 165hz (1920x1080, porch 24/3, sync 32/5, HT: 1984, VT: 1094), which goes out of range, but if you press a button on your S-switch, the screen suddenly appears and works perfectly. Opening the service menu (e.g. to change strobe settings) or the OSD causes the out of range to appear again after you close the menu, although the screen doesn't vanish. Pressing a profile button on S-switch again and it goes away. There's some executable on blurbusters forums that automatically makes the OOR go away (for those who don't have an S-switch).
> 
> Those reduced timings are necessary to keep RGB mode and prevent it from switching to YCbCR422. 1 mhz more on the pixel clock and it switches (RGB mode would cause 6 bit color depth at 360 mhz+). Not sure why it switches at 359 mhz instead of 360 mhz.


I'm a little confused... may I DM you? I didn't even know you could run 1440p on a 1080p monitor... And I use 182Hz... is that changing it to YCbCr422? I have the XL2746S.
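For anyone following the custom-resolution math in the quoted post: the pixel clock is simply total horizontal × total vertical × refresh rate. A minimal sketch (my own illustration of the standard relation, not from the post):

```python
def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: float) -> float:
    """Pixel clock = total horizontal pixels x total vertical lines x refresh."""
    return h_total * v_total * refresh_hz / 1e6

# The 1440p @ 100Hz timings from the quoted post (HT 2720, VT 1502):
print(pixel_clock_mhz(2720, 1502, 100))  # ~408.5 MHz, matching the quoted 408.55

# The 1080p 165Hz profile (HT 1984, VT 1094) lands just under the
# ~360 MHz threshold where the monitor drops from RGB to YCbCr422:
print(pixel_clock_mhz(1984, 1094, 165))
```

This is why the post tightens porch/sync values: smaller totals mean a lower pixel clock at the same refresh rate.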


----------



## Zurv

Has anyone shunt mod'd the 3090 ftw?

I did the 5 shunts up top, and all 3 power connectors are reporting less power and not maxing out (so that worked), but the card reports that I'm still power limited. It also shows the PCI-E slot maxed. I'm not seeing any extra perf. Zero. Even with the card thinking less power is coming in via the 3x 8-pins.

Should I have modded other shunts too? There is another on the front of the card (and the PCI-E one on the back, which I'd rather not do).









thoughts?


Edit:
It looks like GN didn't do the top one... maybe I'll remove that and see what happens.


----------



## Falkentyne

LordGurciullo said:


> I'm a little confused... may I dm you? I didn't even know you could run 1440 on a 1080 monitor... And I use the 182hz... is that changing it to ycbrcr422? I have the xl2746s


Yes you can. I don't know for certain whether it works on the XL2746S, but one person on Blur Busters said 1440p @ 100hz and @ 120hz worked on his monitor (either an XL2546S or XL2746S), so it should work for you.

The reason it works is that the panel accepts the resolution but downsamples it. Sometimes, if the firmware hates the signal, it goes out of range and you need to press the S-switch to make it appear. It's best to start with a known working resolution and refresh rate, enter a custom resolution in ToastyX CRU, run restart64.exe (a file that is part of ToastyX CRU), then test the new resolution.

You are absolutely _NOT_ going to be running 182hz at 1440p. Try 100, 120 and 144hz and see which one works (if you get an out-of-range error, press an S-switch profile button to see if the screen reappears; if it doesn't, just press Escape to end the resolution test when you try to switch resolutions).


----------



## anethema

When I did my FE, I modded all of them but the PCIe one, and the same thing happened: far less power reported, but perf down if anything, and the perfcap reason was still power.

I ended up changing the PCIe slot shunt from 5mOhm to 3mOhm and all of a sudden it was fully unlocked. I never get power as the perfcap reason anymore, even with voltage and frequency maxed.

Here is what my GPU-Z Looked like during a bench:









Nvidia is definitely doing some sneaky balance checking. You can see in the image the board power reporting super low but MVDDC showing super high. But once the PCIe shunt was modded, that MVDDC power went way down, board power read as normal, and perf skyrocketed.
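The behavior above follows from how shunt sensing works: the controller infers current from the voltage drop across a resistor it assumes has a known value, so changing the resistance scales what the card reports by the resistance ratio. A minimal sketch (values illustrative; the 5 mOhm to 3 mOhm swap mirrors the post above):

```python
# The controller computes I = V_drop / R_assumed, but after a shunt swap
# the real resistance differs, so reported power scales by R_new / R_orig.
def reported_power(actual_w: float, r_orig_mohm: float, r_new_mohm: float) -> float:
    """Power the card believes it is drawing after swapping r_orig for r_new."""
    return actual_w * (r_new_mohm / r_orig_mohm)

# 5 mOhm -> 3 mOhm on the slot shunt, as described in the post:
print(reported_power(100, 5, 3))  # reads 60 W when the rail really carries 100 W
# Equivalently, a fixed reported limit now allows ~1.67x the real draw.
```

This is also why only modding some shunts fails: any rail still reading its true value (here, the slot) will hit its limit first and cap the whole card.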



Zurv said:


> Has anyone shunt mod'd the 3090 ftw?
> 
> I did the 5 up top and all the 3 power connectors reporting less power and not maxing out (so that worked)
> but the card reports that i'm still power limited. It also shows the PCI-E slot maxed.
> I'm not seeing any extra perf. zero. with the card thinking less power is coming in via the 3x 8pins.
> 
> Should i have mod'd other shunts too? There is another on the front of the card. (and the PCI-E one on the back which i'd like not to do.)
> View attachment 2463760
> 
> 
> thoughts?
> 
> 
> Edit:
> It looks like GN didn't do the top one.. maybe i'll remove that and see what happens.


----------



## Dreams-Visions

Zurv said:


> Has anyone shunt mod'd the 3090 ftw?
> 
> I did the 5 up top and all the 3 power connectors reporting less power and not maxing out (so that worked)
> but the card reports that i'm still power limited. It also shows the PCI-E slot maxed.
> I'm not seeing any extra perf. zero. with the card thinking less power is coming in via the 3x 8pins.
> 
> Should i have mod'd other shunts too? There is another on the front of the card. (and the PCI-E one on the back which i'd like not to do.)
> View attachment 2463760
> 
> 
> thoughts?
> 
> 
> Edit:
> It looks like GN didn't do the top one.. maybe i'll remove that and see what happens.


you have to mod the shunt on the back. pcie slot power is holding it back.


----------



## Sync0r

Zurv said:


> Has anyone shunt mod'd the 3090 ftw?
> 
> I did the 5 up top and all the 3 power connectors reporting less power and not maxing out (so that worked)
> but the card reports that i'm still power limited. It also shows the PCI-E slot maxed.
> I'm not seeing any extra perf. zero. with the card thinking less power is coming in via the 3x 8pins.
> 
> Should i have mod'd other shunts too? There is another on the front of the card. (and the PCI-E one on the back which i'd like not to do.)
> View attachment 2463760
> 
> 
> thoughts?
> 
> 
> Edit:
> It looks like GN didn't do the top one.. maybe i'll remove that and see what happens.


I think you are hitting the power limit of the PCIe Slot, this is preventing your 8 pins from using more power. This is what I've seen anyway from my 2x8pin card.


----------



## DrunknFoo

Zurv said:


> Has anyone shunt mod'd the 3090 ftw?
> 
> I did the 5 up top and all the 3 power connectors reporting less power and not maxing out (so that worked)
> but the card reports that i'm still power limited. It also shows the PCI-E slot maxed.
> I'm not seeing any extra perf. zero. with the card thinking less power is coming in via the 3x 8pins.
> 
> Should i have mod'd other shunts too? There is another on the front of the card. (and the PCI-E one on the back which i'd like not to do.)
> 
> thoughts?


Are you reading the perf cap in GPU-Z? It would flag any power connection, i.e. the PCIe slot. What does HWiNFO64 say?


----------



## tubnotub1

What tests have y'all been using to verify memory OC on these cards? I have dialed in my core offset as much as the silicon will allow (mediocre), and I am trying to dial in the VRAM OC with the knowledge that GDDR6X has error-checking that will reduce performance before the card artifacts.

I have tried using a static scene in Heaven and multiple games to check for the huge error-correction drops some YouTubers have seen when the VRAM starts throwing errors, but I have seen no such performance degradation on my card up to and past a +1200 offset. At one point in testing I just goosed it to +1400 and _finally_ got some artifacts, at which point I backed it down a bit.

Either way, in any test I run I see negligible real-time FPS differences between stock and anything up to +1200, which makes it frustratingly hard to dial in that sweet spot. At this point, with this architecture, is it just a matter of running synthetic benchmarks until you see a repeatable drop in your overall scores?
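One way to automate the approach described above is to sweep memory offsets, benchmark each, and pick the highest offset before the score rolls off from silent error retries. A sketch of the selection step only; the scores below are made up for illustration, and actually driving a benchmark per offset would be tool-specific:

```python
# GDDR6X error detection/retry silently costs performance before artifacts
# appear, so the signal to look for is a repeatable score drop, not glitches.
def find_sweet_spot(scores: dict, tolerance: float = 0.005) -> int:
    """Return the highest offset whose score is within `tolerance` of the best."""
    best = max(scores.values())
    good = [off for off, s in scores.items() if s >= best * (1 - tolerance)]
    return max(good)

# Made-up scores showing a roll-off past +1100 (offset -> benchmark score):
scores = {0: 14000, 400: 14120, 800: 14210, 1100: 14260, 1300: 14150}
print(find_sweet_spot(scores))  # 1100
```

Each offset should be benchmarked several times, since run-to-run variance on a short benchmark can easily exceed the half-percent roll-off you're hunting for.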


----------



## HyperMatrix

Meow. Thanks to AMD, I was able to join the club today. 1 month on wait list. I expected it to take longer for this card.


----------



## DrunknFoo

HyperMatrix said:


> Meow. Thanks to AMD, I was able to join the club today. 1 month on wait list. I expected it to take longer for this card.


I hate u!

Ok now flash the 500w bios and lmk if ports are all working and if the 500w gets pulled.


----------



## HyperMatrix

DrunknFoo said:


> I hate u!
> 
> Ok now flash the 500w bios and lmk if ports are all working and if the 500w gets pulled.


Still playing around with it as is. Haha. Just came from a Pascal Titan X so it's a heck of an upgrade. It's boosting to 1985MHz without any overclocking (just +PL maxed) right now. And pulls a max of 488W. Just did a basic run of Port Royal and scored 14k with average clocks of 1970. It heats up real quick under load though and begins to throttle. Water would do it some real good.


----------



## DrunknFoo

HyperMatrix said:


> Still playing around with it as is. Haha. Just came from a Pascal Titan X so it's a heck of an upgrade. It's boosting to 1985MHz without any overclocking (just +PL maxed) right now. And pulls a max of 488W. Just did a basic run of Port Royal and scored 14k with average clocks of 1970. It heats up real quick under load though and begins to throttle. Water would do it some real good.


Repasting would prob help a tad..😉
Grats on the card


----------






## Sheyster

HyperMatrix said:


> It's boosting to 1985MHz without any overclocking (just +PL maxed) right now. And pulls a max of 488W.


I thought a Strix would boost a little higher than that stock. With +PL maxed (114%) and no +core at all, my FE boosts to 1980. Of course it runs into the power limit much sooner. The most I've seen it pull is 414w.


----------



## Johneey

Hey guys,
Long time no see here.
What's the highest-wattage bios for a 2x8pin card so far?


----------



## DrunknFoo

Sheyster said:


> I thought a Strix would boost a little higher than that stock. With +PL maxed (114%) and no +core at all, my FE boosts to 1980. Of course it runs into the power limit much sooner. The most I've seen it pull is 414w.


Temps play a role


----------



## Sheyster

DrunknFoo said:


> Temps play a role


Indeed, I have not seen mine go over 71C so far. Ambient is typically 25C in the room lately.


----------



## HyperMatrix

DrunknFoo said:


> Repasting would prob help a tad..😉
> Grats on the card


Just realized part of the problem: I have a PCIe NVMe card pretty much stuck to the bottom of the card, almost covering 1.5 of the fans. Haha. Will have to wait for the block 



Sheyster said:


> I thought a Strix would boost a little higher than that stock. With +PL maxed (114%) and no +core at all, my FE boosts to 1980. Of course it runs into the power limit much sooner. The most I've seen it pull is 414w.


I got it running at 1905MHz locked with 0.893v. I also did 1980MHz at 0.950v, and it was hitting 450W just for that in Port Royal. Before heat kicked in under max load it was doing 2085MHz. I'll probably run it at 1905MHz with low heat/power for now until I get it under water.

But I'm dealing with even bigger issues now. My PC is having trouble with its overclock after installing this card. I've had the computer reboot on me during testing/gaming and also refuse to boot. It may be related to system memory. So a system that was fully stable for years now needs to downclock its RAM below its rated speed. It wasn't even fast RAM to begin with, but at least it's quad channel, so it won't matter too much. Just hoping this stabilizes it.
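As a rough guide to why the 1905 MHz / 0.893 V point runs so much cooler: dynamic power scales roughly with frequency times voltage squared. A back-of-envelope sketch (my own illustration; it ignores leakage, which grows with temperature, so treat the result as a lower bound):

```python
# Back-of-envelope dynamic-power scaling: P ~ f * V^2.
# Reference point is the 1980 MHz / 0.950 V setting mentioned above.
def relative_power(f_mhz: float, v: float, f_ref: float = 1980.0,
                   v_ref: float = 0.950) -> float:
    """Estimated power draw relative to the reference clock/voltage point."""
    return (f_mhz / f_ref) * (v / v_ref) ** 2

print(f"{relative_power(1905, 0.893):.0%}")  # ~85% of the 1980 MHz / 0.950 V draw
```

A ~4% clock sacrifice buying ~15% less power is typical of why flat-curve undervolts are so popular on Ampere.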


----------



## tps3443

Someone show me their watercooled and heavily overclocked RTX 3090 in a standard timespy run. 

I am curious to see what these cards can do with “Actual overclocking”

Thank you! I am very interested.


----------



## Thoth420

rawsome said:


> Gigabyte gaming OC bios (390W) is the way to go. alternative is the aerus master bios which may peak sometimes at 400w but the center fan can only go up to 2000rpm which is much lower than the ventus can go. im not sure about DPs get disabled or not as i only use HDMI.


Oh I am hardly a max overclocker. Aesthetics boi. I was planning on the Master or Extreme until I saw the Vision. I am running the B550 board so it is a perfect fit for the build overall. This card is just something to run until the market opens up a bit more. Cheers though! I do use one DP and the one HDMI 2.1 the Ventus has atm.


----------



## mbm

Could anybody please fill me in on what all this wanting the card to draw more power is about? If it gives better overclocks, could you please share results from before and after?


----------



## Nizzen

tps3443 said:


> Someone show me their watercooled and heavily overclocked RTX 3090 in a standard timespy run.
> 
> I am curious to see what these cards can do with “Actual overclocking”
> 
> Thank you! I am very interested.


Timespy is more like a CPU benchmark.


----------



## HyperMatrix

So...can anyone tell me why my card is pulling 481W at 1950MHz with just 0.950v? Lol. Are the cards really THAT sensitive to temperature?


----------



## Sync0r

HyperMatrix said:


> So...can anyone tell me why my card is pulling 481W at 1950MHz with just 0.950v? Lol. Are the cards really THAT sensitive to temperature?
> 
> View attachment 2463812


Wow, it's really eating! I haven't tried Crysis 3; some games put more load on the GPU than others. It does seem like a lot for that low a clock and voltage, but temperature might be making it less efficient, causing it to draw more watts.


----------



## bmgjet

Seems about right. If that was at 40°C you'd be at around 2050MHz.


----------



## Sync0r

tps3443 said:


> Someone show me their watercooled and heavily overclocked RTX 3090 in a standard timespy run.
> 
> I am curious to see what these cards can do with “Actual overclocking”
> 
> Thank you! I am very interested.


Just a quick overclock for this one; I didn't adjust each point on the voltage/frequency curve to get max clocks, just +120 I think on the core and +1300 on the memory.


https://www.3dmark.com/spy/14834041


----------



## Esenel

Joining now. Strix OC.

22k GPU. 21k Overall will follow as well ;-)


https://www.3dmark.com/spy/14868878


----------



## jsarver

Grabbed a 3090 FE from yesterday's Best Buy drop with a 10 percent off coupon. I plan to put it under water in my Hydro X loop, but I'm having doubts that I made the right call. The original thought was to wait for the FTW3 Ultra or Hydro Copper. I saved at least $550 after tax compared to the EVGA cards, but am I leaving significant performance on the table by not having the additional power available? Thanks


----------



## HyperMatrix

jsarver said:


> Grabbed a 3090 fe from yesterday’s Best Buy drop with 10 percent off coupon. plan to put it under water in me hydro x loop but I’m having doubts that I made the right call. Original thought was to wait for ftw3 ultra or hydro copper. I saved at least 550 after tax compared to Evga cards but am I leaving significant performance on the table by not having the additional power on hand/available? Thanks


You’re probably looking at a 5% difference at most tbh. Playing around with my Strix I’m seeing how temperatures are far more important than TDP. With temps in the 30s it’ll clock to over 2100MHz at full load and 480W. At temps in the 70-80 range it’ll drop to 1850-1920 with the same 480W draw.

Most important thing is whether you can watercool your card or not. If I could have gotten an FE with 10% off, I probably would have done that myself if it was the first card that became available.

A few posts back someone told me to flash my card with the 500W EVGA bios. Up until today that would have been #1 on my list. But what I've realized is that the power consumption is so high that if you don't have temperatures fully under control, that extra 20W of power is really just 20W of extra heat that's going to push your card to throttle even sooner from heat accumulation. A lot of the stats and scores you see in this thread are from benchmark setups, not day-to-day use. So unless you really care about an extra few percentage points of performance, don't worry about it at all. Worst case scenario...even if you wanted more power under water...you could always do a shunt mod. The FE board is higher quality than the FTW3 anyway.


----------



## Spiriva

Johneey said:


> Hey guys,
> Long time no see here.
> Whats highest bios for 2x8pin yet ?











Gigabyte RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

or

Gigabyte RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

One of these two will be your best bet for 2x8pin.


----------



## BigMack70

Finally got the new firmware for the C9 so that 4k 120 Gsync works correctly... feels really good to have a gaming setup now "finished" after pursuing it for years, waiting on the tech to be able to deliver.

Any custom BIOS for FE cards out in the wild yet?


----------



## tps3443

Nizzen said:


> Timespy is more like a CPU benchmark.


Ohh I see. Still, I'm curious, since watercooling my latest FE 2080 Ti. I am around 17,400 graphics score with my 2080 Ti, but it is watercooled, with stacked 8 Ohm resistors, and I run the 380 watt Galax bios too.

I would like to see how the 3090 compares.

So you're saying the 3090 doesn't get 99% utilization in a standard Timespy run due to a CPU bottleneck?

I am considering a RTX3090. But I may wait until RTX3080Ti.


----------



## Johneey

Spiriva said:


> Gigabyte RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> or
> 
> Gigabyte RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> One of these two will be your best bet for 2x8pin.


Thanks, I already got the 390W bios and a 22.7k Timespy score. I was thinking maybe an XOC bios is out there 🥴


----------



## Sheyster

BigMack70 said:


> Finally got the new firmware for the C9 so that 4k 120 Gsync works correctly... feels really good to have a gaming setup now "finished" after pursuing it for years, waiting on the tech to be able to deliver.
> 
> Any custom BIOS for FE cards out in the wild yet?


Are you sure it's truly working correctly? On the CX it's bugged at 4K120+10bpc+G-Sync with the latest firmware for the CX (3.11.25 I believe). I have to run 8bpc+YCbCr420 when G-Sync is on to remove the stutter at 4K.

EDIT - No BIOS upgrades for FE as far as I know to date.


----------



## Dreams-Visions

Sheyster said:


> I thought a Strix would boost a little higher than that stock. With +PL maxed (114%) and no +core at all, my FE boosts to 1980. Of course it runs into the power limit much sooner. The most I've seen it pull is 414w.


People are gonna learn the brand doesn't matter. Most boosting is going to come down to your silicon RNG. 

I've seen enough FE's outpace FTW3's and Strix in benchmarks at this point to know what time it is.


----------



## Sheyster

Dreams-Visions said:


> People are gonna learn the brand doesn't matter. Most boosting is going to come down to your silicon RNG.
> 
> I've seen enough FE's outpace FTW3's and Strix in benchmarks at this point to know what time it is.


Seems that way. I have no intention of water-cooling it and I'm leaning towards undervolting. I've seen my card hit 2100 with some +core applied, next step is to lock it at 950mv (or possibly 1000) and see how high and stable it'll go with the lower voltage.


----------



## dante`afk

Jeez, I'm getting so impatient now; no cards to buy anywhere.


----------



## Sheyster

dante`afk said:


> jeez i'm getting so impatient now no card to buy anywhere


The 3070's apparently all sold out in minutes even with the AMD rumors/news in play. Bots are working hard this morning it seems.


----------



## BigMack70

Sheyster said:


> Are you sure it's truly working correctly? On the CX it's bugged at 4K120+10bpc+G-Sync with the latest firmware for the CX (3.11.25 I believe). I have to run 8bpc+YCbCr420 when G-Sync is on to remove the stutter at 4K.
> 
> EDIT - No BIOS upgrades for FE as far as I know to date.


Yes, as far as I can tell everything is now working properly; I no longer feel stutter in games when they drop to ~90fps. Got the new 04.90.03 firmware for the C9 after a couple weeks of trying to contact the right person at LG support who could actually get the firmware to me.

4k 120 + 12bpc + G-sync all appears correctly functional. Only bug was that I had to disable "instant game response" mode on the TV, as it was causing black screen flickering still, but I'm pretty sure that doesn't actually do anything meaningful since I'm already using game mode on the TV; in any case, my mouse response doesn't feel any different with it off, so either it's not a feature that does anything, or I'm too old to pick up on the few ms of difference it might save in input latency.



Sheyster said:


> The 3070's apparently all sold out in minutes even with the AMD rumors/news in play. Bots are working hard this morning it seems.


I am guessing a lot of people don't trust AMD yet. I know I don't. If given the option for a $1000 6900 XT or a $1500 3090 and the performance is the same, I'd take the 3090 every time. I'm sure people think that way at the lower price points too. Nvidia just has a far superior feature set and recent track record of software/driver support.

I do think that the 6900 XT would give me pause about buying some of the $1800+ custom 3090 models though. I'd pay +50% for Nvidia's better feature set with RTX and DLSS and driver support, but +80% or more I am less sure... probably not.


----------



## Sheyster

BigMack70 said:


> Yes, as far as I can tell everything is now working properly; I no longer feel stutter in games when they drop to ~90fps. Got the new 04.90.03 firmware for the C9 after a couple weeks of trying to contact the right person at LG support who could actually get the firmware to me.
> 
> 4k 120 + 12bpc + G-sync all appears correctly functional. Only bug was that I had to disable "instant game response" mode on the TV, as it was causing black screen flickering still, but I'm pretty sure that doesn't actually do anything meaningful since I'm already using game mode on the TV; in any case, my mouse response doesn't feel any different with it off, so either it's not a feature that does anything, or I'm too old to pick up on the few ms of difference it might save in input latency.


Good that it seems to be working on the C9! LG seems to be committed to fixing it on the CX, hopefully the firmware drops soon. Question: Do you notice any difference with 12bpc vs 10? I know the panel is 10, but the set will accept a 12bpc input. Is there actually any difference in IQ that you've noticed?



BigMack70 said:


> I am guessing a lot of people don't trust AMD yet. I know I don't. If given the option for a $1000 6900 XT or a $1500 3090 and the performance is the same, I'd take the 3090 every time. I'm sure people think that way at the lower price points too. Nvidia just has a far superior feature set and recent track record of software/driver support.
> 
> I do think that the 6900 XT would give me pause about buying some of the $1800+ custom 3090 models though. I'd pay +50% for Nvidia's better feature set with RTX and DLSS and driver support, but +80% or more I am less sure... probably not.


Well I can tell you one thing: AMD is notorious for marketing BS, and I've already noticed their numbers are off for the 3090 comparison on at least a few of the games listed on the 6900 XT slide. We'll wait for real reviews to drop before making a final conclusion. Then there are those AMD drivers ... 💩


----------



## BigMack70

Sheyster said:


> Do you notice any difference with 12bpc vs 10?


No, but I've not gone looking for differences either. I imagine that you can create scenarios where it will show minor improvement, but since the panels are 10 bit, it's hard to imagine it making any kind of obvious difference in general use.


----------



## Sheyster

BigMack70 said:


> No, but I've not gone looking for differences either. I imagine that you can create scenarios where it will show minor improvement, but since the panels are 10 bit, it's hard to imagine it making any kind of obvious difference in general use.


I'll probably just roll with 10bpc since the CX has the lower 40Gbps input. 

Another question: Which HDMI cable are you using? Apparently none of the certified HDMI 2.1 cables are available yet. The closest thing I've found are these two 4.9 foot cables: $99 Austere VII cable and the $59 Monster Ultra 8K cable, both available on Amazon.
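For anyone curious why the CX's 40Gbps port caps out at 10bpc for 4K120 RGB, a rough back-of-the-envelope sketch is below. It counts active pixels only (real timings add blanking, so actual requirements are a bit higher) and applies HDMI 2.1's 16b/18b FRL coding to the nominal link rates; the numbers are illustrative, not exact signaling math.

```python
# Approximate check of which bit depths fit a 40 Gbps vs 48 Gbps HDMI 2.1 link
# at 4K 120Hz RGB. Active pixels only; blanking overhead is ignored, so real
# requirements are somewhat higher than these figures.

def video_gbps(h, v, hz, bpc):
    """Approximate uncompressed video data rate in Gbit/s (3 channels, active pixels)."""
    return h * v * hz * bpc * 3 / 1e9

def frl_payload_gbps(link_gbps):
    """Usable payload of an HDMI 2.1 FRL link after 16b/18b encoding."""
    return link_gbps * 16 / 18

cx_payload = frl_payload_gbps(40)  # LG CX: 40 Gbps input
c9_payload = frl_payload_gbps(48)  # LG C9: full 48 Gbps input

for bpc in (8, 10, 12):
    rate = video_gbps(3840, 2160, 120, bpc)
    print(f"4K120 RGB {bpc}bpc: {rate:5.1f} Gbps"
          f" | fits CX: {rate <= cx_payload} | fits C9: {rate <= c9_payload}")
```

By this estimate, 12bpc 4K120 RGB overshoots the 40Gbps port's payload but fits in 48Gbps, which lines up with the CX topping out at 10bpc.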


----------



## BigMack70

Sheyster said:


> I'll probably just roll with 10bpc since the CX has the lower 40Gbps input.
> 
> Another question: Which HDMI cable are you using? Apparently none of the certified HDMI 2.1 cables are available yet. The closest thing I've found are these two 4.9 foot cables: $99 Austere VII cable and the $59 Monster Ultra 8K cable, both available on Amazon.


I've seen a lot of discussion on forums about hdmi 2.1 cables and it makes me feel like I missed a memo somewhere because I haven't noticed any problems... I bought this last winter when I got the TV and it appears to be working just fine:



https://www.amazon.com/dp/B07R4GX984/ref=cm_sw_r_u_apa_i_I.TMFbB78SNW1


----------



## Sheyster

BigMack70 said:


> I've seen a lot of discussion on forums about hdmi 2.1 cables and it makes me feel like I missed a memo somewhere because I haven't noticed any problems... I bought this last winter when I got the TV and it appears to be working just fine:
> 
> 
> 
> https://www.amazon.com/dp/B07R4GX984/ref=cm_sw_r_u_apa_i_I.TMFbB78SNW1


I'm using this one:



https://www.amazon.com/48Gbps-Compatible-Netflix-Playstation-Samsung/dp/B07S1CGQ9Z



It's very popular and seems to work well for me so far. This said, I'd like to go with a certified cable eventually. I don't want the weakest link in this setup to be the HDMI cable!


----------



## BigMack70

Sheyster said:


> I'll probably just roll with 10bpc since the CX has the lower 40Gbps input.
> 
> Another question: Which HDMI cable are you using? Apparently none of the certified HDMI 2.1 cables are available yet. The closest thing I've found are these two 4.9 foot cables: $99 Austere VII cable and the $59 Monster Ultra 8K cable, both available on Amazon.


So I just spent about 15 minutes looking at color gradients at 10bpc vs 12bpc. There is a perceptible, though very small, improvement to 12bpc over 10bpc. However, it's nothing I would ever notice in general use, nor can I imagine others noticing it during general use. 

Seemed like the difference between running a game at 100fps vs 105fps. You can measure it and one is technically "better" than the other, but there's absolutely no perceptible difference with the FPS counter off.
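For scale, the raw step counts behind that comparison (plain arithmetic, nothing panel-specific):

```python
# Tonal steps per channel at each bit depth. 12bpc carries 4x the steps of
# 10bpc, but a native 10-bit panel quantizes back down, so the visible
# difference in ordinary content is tiny.
for bpc in (8, 10, 12):
    levels = 2 ** bpc
    print(f"{bpc}bpc: {levels:>5} levels/channel, {levels ** 3:,} colors total")
```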


----------



## mirkendargen

HyperMatrix said:


> just realized part of the problem. I have a pcie NVMe card pretty much stuck to the bottom of the card, almost covering 1.5 fans. Haha. Will have to wait for block
> 
> 
> 
> I got it running 1905MHz locked with 0.893v. Also did 1980MHz with .950v and it was hitting 450W just for that in port royal. Before heat kicked in under max load it was doing 2085MHz. I’ll probably run it at 1905MHz with Low heat/power right now until I get it under water.
> 
> But I’m dealing with even bigger issues now. My PC is having trouble with its overclock after installing this card. Have had the computer reboot on me during testing/gaming and also refusing to boot. May be related to system memory. So a system that was fully stable for years now needs to downclock it’s ram below it’s rated speed. Wasn’t even fast ram to begin with but at least it’s quad channel so won’t matter too much. just hoping this stabilizes it.


Got a Zen 2 CPU? First PCIE4.0 device ever? Try switching the slot to PCIE3.0 in BIOS and see if the issues clear up.


----------



## Sheyster

BigMack70 said:


> So I just spent about 15 minutes looking at color gradients at 10bpc vs 12bpc. There is a perceptible, though very small, improvement to 12bpc over 10bpc. However, it's nothing I would ever notice in general use, nor can I imagine others noticing it during general use.
> 
> Seemed like the difference between running a game at 100fps vs 105fps. You can measure it and one is technically "better" than the other, but there's absolutely no perceptible difference with the FPS counter off.


Another question: are there currently any PC games that support 12bpc HDR specifically? I think BF5's HDR is 10bpc only. I'm only using this CX 48 with a 3090 for PC gaming, literally nothing else. I plan to optimize the CX settings for this use only. Why run 4K 12bpc to a CX or C9 if no games actually support it?


----------



## BigMack70

Sheyster said:


> Another issue is are there currently any PC games that support 12bpc HDR specifically? I think BF5 HDR is 10bpc only. I'm only using this CX 48 with a 3090 for PC gaming, literally nothing else. I plan to optimize the CX settings for this use only. Why run 4K 12bpc to a CX or C9 if no games actually support it?


I don't really know. It seems like it's only been this year that we've started to get decent HDR implementations in games on PC, but I have no idea if they are 10 or 12 bit.


----------



## Esenel

Aquacomputer Strix block now for preorder.





kryographics NEXT RTX 3080 Strix / RTX 3090 Strix


kryographics NEXT RTX 3080 Strix / RTX 3090 Strix: Full-cover water block for ASUS GeForce RTX 3080 ROG Strix and ASUS GeForce RTX 3090 ROG Strix graphics cards. The cooler base, milled from a solid copper block, makes contact with all components that need cooling. In the GPU area...




shop.aquacomputer.de


----------



## Bilco

Man, where are all the 3090 FTW3 waterblocks. Don't these companies want my money?


----------



## aznsniper911

Bilco said:


> Man, where are all the 3090 FTW3 waterblocks. Don't these companies want my money?


Seriously, this FTW3 is killing me. No waterblock, and I'm still trying to figure out why I get a blank screen gaming at [email protected] but not [email protected] with an LG CX.


----------



## Wrathier

Menko22 said:


> What does this bios do? Any changelog?


Hi mate. It only fixes the fan speeds at idle and full load. There have been many complaints on both the TUF and the Strix cards, so this BIOS was made. You can just run the .exe, and if your card needs the BIOS it will update. You don't need to mess around with nvflash for this BIOS if you have an ASUS TUF or Strix card.


----------



## tiefox

Strix block and active backplate from aquacomputer are up for preorder Aqua Computer Shop


----------



## LVNeptune

aznsniper911 said:


> Seriously, this FTW3 is killing me. No waterblock and I still trying to figure out why i get blank screen gaming at [email protected] but not [email protected] with a LG CX


It's either your HDMI cable or you need to update the TV firmware.


----------



## rawsome

aznsniper911 said:


> Seriously, this FTW3 is killing me. No waterblock and I still trying to figure out why i get blank screen gaming at [email protected] but not [email protected] with a LG CX


Oh man, I can feel you. Spending so much cash on a screen and GPU and this **** is not working together.

By blank screen, do you mean your screen is turning black at some point? How long can you game until this happens?
Trying a different (8K certified) cable would be my first guess too.


----------



## HyperMatrix

mirkendargen said:


> Got a Zen 2 CPU? First PCIE4.0 device ever? Try switching the slot to PCIE3.0 in BIOS and see if the issues clear up.


Still running a 6950X on X99. Using 3 NVMe drives atm, and I can't change the slot they're in or my GPU will drop down to x8 speed. Going to offload the contents of one of the drives to my NAS and just remove it for now. The card is blocking about 50-60% of two of the fans, and it's almost literally sticking to them, with maybe 3-5mm of clearance. Haha. That won't do.


----------



## Stephen.

*In case anyone is interested, AquaComputer 3080/3090 Strix blocks with passive, and active XCS back plates are live for pre-order.  According to the forum they should be shipping them out to customers in the next 2-3 weeks. *

*AquaComputer 3080/3090 Strix Pre-Order*


----------



## LVNeptune

Stephen. said:


> *In case anyone is interested, AquaComputer 3080/3090 Strix blocks with passive, and active XCS back plates are live for pre-order.  According to the forum they should be shipping them out to customers in the next 2-3 weeks. *
> 
> *AquaComputer 3080/3090 Strix Pre-Order*


FE 3090 when :'(


----------



## Stephen.

LVNeptune said:


> FE 3090 when :'(


I quoted this post made by Stephan, an administrator at AquaComputer; it's translated from German.

*"An update on how things stand:
The first 3080/3090 coolers for the reference cards will be sent out next week. There are enough coolers produced for all pre-orders so far.
We have finished developing the Strix 3080/3090 and have started production. The coolers will be available in 2-3 weeks as the parts still have to be surface coated.
In addition, we will bring an adapted cooler for the Zotac Trinity. This will be available approximately with the Strix. The current cooler for the reference cards fits this card, but the card is not completely covered."

"When we have processed these cards, it may continue as planned with the FE3080 or, depending on availability, first with the AMD RX6000. Solutions for EVGA are also currently being considered."*

They're hoping to produce a 3080/3090 FE water block, I'm guessing they don't have the cards yet to work off of.


----------



## LVNeptune

Stephen. said:


> I quoted this post made by Stephan an Administrator at AquaComputer, it's translated from German.
> 
> *"An update on how things stand:
> The first 3080/3090 coolers for the reference cards will be sent out next week. There are enough coolers produced for all pre-orders so far.
> We have finished developing the Strixx 3080/3090 and have started production. The coolers will be available in 2-3 weeks as the parts still have to be surface coated.
> In addition, we will bring an adapted cooler for the Zotac Trinity. This will be available approximately with the Strix. The current cooler for the reference cards fits this card, but the card is not completely covered."
> 
> "When we have processed these cards, it may continue as planned with the FE3080 or, depending on availability, first with the AMD RX6000. Solutions for EVGA are also currently being considered."*
> 
> They're hoping to produce a 3080/3090 FE water block, I'm guessing they don't have the cards yet to work off of.


Pretty sure NVIDIA provides the dimensions to designers for free though...


----------



## HyperMatrix

LVNeptune said:


> Pretty sure NVIDIA provides the dimensions to designers for free though...


They had sent mockups of the cards out, but they didn't trust building a precision block around a mockup and said they were going to wait until they had an actual card in hand. For someone like AquaComputer that does direct-contact cooling, even a 0.1mm difference is a big deal.


----------



## Stephen.

LVNeptune said:


> Pretty sure NVIDIA provides the dimensions to designers for free though...


Not too sure about that *(especially with this launch)*. I know Sven from AquaComputer made a post maybe 3 weeks ago asking if anyone had a Strix card they could send in, and in return they would get their card back + a free cooler.


----------






## ExDarkxH

Sucks that EVGA is screwing over 3090 owners. Maybe they are trying to protect the upcoming Kingpin BIOS? Who knows.


----------



## jsarver

HyperMatrix said:


> You’re probably looking at a 5% difference at most tbh. Playing around with my Strix I’m seeing how temperatures are far more important than TDP. With temps in the 30s it’ll clock to over 2100MHz at full load and 480W. At temps in the 70-80 range it’ll drop to 1850-1920 with the same 480W draw.
> 
> Most important thing is whether you can watercool your card or not. If I could have gotten an FE with 10% off, I probably would have done that myself if it was the first card that became available.
> 
> a few posts back someone told me to flash my card with the 500W EVGA bios. Up until today that would have been #1 on my list. But what I’ve realized is that the power consumption is so high that if you don’t have temperatures fully under control, that extra 20W power is really just 20W extra heat that’s going to push your card to throttle even sooner from the heat accumulation. A lot of the stats and scores you see in this post are from benchmark setups. Not day to day. So unless you really care about an extra few percentage points in performance, don’t worry about it at all. Worst case scenario...even if you wanted more power under water...you could always do a shunt mod. The FE board is higher quality than the FTW3 anyway.


Thanks for the input. I’ve got a two month return policy through Best Buy for the holiday season so I’m gonna use it and test it. If it’s poor performer or has too much coil whine I’ll just return it when I can get my hands on something else.


----------



## kx11

ExDarkxH said:


> sucks that Evga is screwing over 3090 owners. Maybe they are trying to protect the upcoming kingpin bios? Who knows


The KP is coming around December or late November; why would they protect it?!


----------



## Thanh Nguyen

How much better is that Aqua Computer waterblock than the Alphacool one? 5-10°C?


----------



## HyperMatrix

Thanh Nguyen said:


> How much better that aqua waterblock than alphacool? 5c-10c?


5-10°C would be massive. Don't expect more than a couple of degrees at best. The most important thing about the Aqua Computer block is the actively cooled backplate.


----------



## Bilco

ExDarkxH said:


> sucks that Evga is screwing over 3090 owners. Maybe they are trying to protect the upcoming kingpin bios? Who knows


What do you mean?


----------



## Falkentyne

LordGurciullo said:


> I'm a little confused... may I dm you? I didn't even know you could run 1440 on a 1080 monitor... And I use the 182hz... is that changing it to ycbrcr422? I have the xl2746s


Hi, didn't hear back from you


----------



## domenic

Stephen. said:


> *In case anyone is interested, AquaComputer 3080/3090 Strix blocks with passive, and active XCS back plates are live for pre-order.  According to the forum they should be shipping them out to customers in the next 2-3 weeks. *
> 
> *AquaComputer 3080/3090 Strix Pre-Order*


I just cancelled the EKWB block I had on pre-order to pair with a Strix 3090. With the EK block I was planning on jury-rigging a RAM block to cool the backplate / backside RAM chips. I had been reading about the AquaComputer blocks with the actively cooled backplate with the attached heatpipe. After reading your post and learning it's available for preorder, this seems like a better solution, so I pulled the trigger.

My only experience with water blocks previously has been with the EK 1080 Ti & 2080 Ti cards over the past four years. Is the overall quality of the AquaComputer product on par with or better than the EKWB product? Does the actively cooled backplate work well?

Today was a good day. Besides the AQ block going up for pre-order, I also managed to snag a Strix OC 3090 from Amazon (ETA is currently December 9-15, but hopefully it will come in sooner).


----------



## DrunknFoo

Well got my ftw3 ultra, started baseline testing... See you guys on the leader boards


----------



## DrunknFoo

domenic said:


> I just cancelled the EKWB block I had on pre-order to pair with a Strix 3090. With the EK block I was planning on jury-rigging a RAM block to cool the backplate / backside RAM chips. I had been reading about the AquaComputer blocks with the actively cooled backplate with the attached heatpipe. After reading your post and learning its available for preorder this seems like a better solution so I pulled the trigger.
> 
> My only experience with water blocks previously has been with the EK 1080ti & 2080ti cards over the past four years. Is the overall quality of the AquaComputer product on par or better as compared to the EKWB product? Does the active cooled backplate work well?
> 
> Today was a good day. Besides the AQ block going up for pre-order I also managed to snag a Strix OC 3090 from Amazon (ETA is currently December 9-15 but hopefully will come in sooner).


Hmmm, I'm sticking with the first option with the RAM block... The heatpipe will likely not perform as well. But let's wait and see.


----------



## HyperMatrix

DrunknFoo said:


> Hmmmm im sticking with the first option with the ramblock... The heatpipe will likely not perform as good. But lets wait n see


It's not a new concept. AquaComputer have been doing it for 3 generations now I believe, if not longer. I remember on my Pascal Titan X the back of the card would normally hit over 80C and with the backplate, it stayed below 50C. A full direct block will perform better. But at under 50C you're not running into any issues with VRAM overclocking anyway.


----------



## dante`afk

domenic said:


> Is the overall quality of the AquaComputer product on par or better as compared to the EKWB product? Does the active cooled backplate work well?



Aquacomputer/Watercool > insert anything else here > EKWB


----------



## HyperMatrix

dante`afk said:


> Aquacomputer/Watercool > insert anything else here > EKWB


I'm not a fan of EKWB. Bought a CPU/Motherboard uniblock for my X99/6950X. Computer kept shutting down on me under extended heavy load sessions in SLI. Thought it was my cards. Took me forever to figure out that one part of the motherboard was hitting around 100C and causing a forced shutdown/restart because the block was useless and didn't provide coverage where a heatsink was previously located. So I had to stick a fan on that spot to solve the problem.


----------



## long2905

CrazyThunderbird said:


> Wasn't a lot higher... but one of my DP stopped working so i flashed back


What is the temp you are seeing now? I ask because mine heats up like crazy even with a repaste. An undervolt helps a little ([email protected]), but the card is still creeping around 76-77°C.


----------



## Falkentyne

long2905 said:


> what is the temp you are seeing now? i ask cause mine heat up like crazy even with a repaste. a undervolt helps a little [email protected] but the card is still creeping around 76-77c


What card do you have?
What thermal paste did you use?

Did you do a spread test to make sure you have good contact pressure across the entire die? A spread test is easy: apply a small pinhead/rice-grain-sized dot to the middle of the chip, tighten the four heatsink X-bracket screws in an alternating X pattern (one full turn top left, one full turn bottom right, etc.), then immediately unscrew in the same alternating way and lift the heatsink straight off the PCB without sliding it around.


----------



## domenic

DrunknFoo said:


> Hmmmm im sticking with the first option with the ramblock... The heatpipe will likely not perform as good. But lets wait n see


Did some research, and it seems at least the 2080 Ti version of the AQ block gets nothing but praise / seemingly the best waterblock on the planet, including the active backplate. I also very much wanted to put my MacGyver skills to the test with the RAM block, but nothing beats German engineering....


----------



## Dreams-Visions

Welp, slipped the FTW3 Ultra XOC BIOS onto my new Trio. Unfortunately my A/C decided to stop working here in Florida, so I can't properly bench it. High ambient temps are contributing to an average GPU temp of 70°C with the door open and fans at 100%.

In the context of a 70C score, does this look respectable to you all?

*Port Royal:* 13,758
*Time Spy (GPU only):* 20,127
*Average clock:* 2052MHz
*Frequency:* 2100MHz
+160 / +1150

I only ran a couple of tests, but does this look like a GPU that could see a nice improvement once the A/C is back on? Or, more importantly, if I slap a block on it?

Trying to decide if I should keep this Trio...or open this FTW3 Ultra that's sitting next to me. Not sure where on the bell curve this Trio sits given the ambient conditions.

thoughts?


----------



## WilliamLeGod

Dreams-Visions said:


> Welp, slipped the FTW Ultra XOC bios on my new Trio. Unfortunately my A/C decided to stop working here in Florida so I can't properly bench it. High ambient temps contributing to at an average GPU temp of 70C with the door open and 100% fans.
> 
> In the context of a 70C score, does this look respectable to you all?
> 
> *Port Royal:* 13,758
> *Time Spy (GPU only):* 20,127
> *Average clock:* 2052MHz
> *Frequency:* 2100MHz
> +160 / +1150
> 
> I only ran a couple of tests, but does this look like a GPU that could see a nice improvement once the A/C is back on? Or, more importantly, if I slap a block on it?
> 
> Trying to decide if I should keep this Trio...or open this FTW3 Ultra that's sitting next to me. Not sure where on the bell curve this Trio sits given the ambient conditions.
> 
> thoughts?


That's super low. My Trio with the 500W vBIOS did ~14.8k PR, 22.7k TS, 11.8k TS Extreme, 14.2k 8K-optimized Unigine.


----------



## DrunknFoo

Stock BIOS and TIM... not a bad score... Disconnected XE360 and pump serving as a half-assed mini AC that's doing a little. Ambient temp (HOT)



http://www.3dmark.com/pr/449076


----------



## Elric2a

Hi, I have a Strix 3090 and had some questions about OC on it with GPU Tweak. What values should I try first?

Sent from my SM-G973F using Tapatalk


----------



## Dreams-Visions

WilliamLeGod said:


> Thats Super low. My trio with 500W vbios did ~14k8 PR, 22k7 TS, 11k8 TS extreme, 14k2 8k optimized unigine


Like I said, it's hot in here.

Edit: at 14.8k PR, your score would be top 100 on the leaderboard. Link it.

What on earth would make you think your score is a reasonable measure? lol



DrunknFoo said:


> Stock bios and tim... not bad score... Disconnected xe360 and pump serving as a half assed mini ac thats doing a little. Ambient temp (HOT)
> 
> 
> 
> http://www.3dmark.com/pr/449076


Your average temp is 53C. Your ambient temp is not hot. Mine is hot.


----------



## long2905

Falkentyne said:


> What card do you have?
> What thermal paste did you use?
> 
> Did you do a spread test to make sure you have good contact pressure around the entire die? (a spread test is easy, you just apply a small pinhead/rice grain sized dot to the middle of the chip, tighten the four heatsink X bracket screws in an alternating X pattern (one full turn top left, one full turn bottom right, etc etc), then immediately unscrew the same alternating way and remove the heatsink directly away from the PCB without sliding it around).


I have the Inno3D iChill X4, the same as you I assume. I just used a credit card to spread the paste evenly on the die to avoid any area not having contact. My paste is only MX-4, though.

I would love to know what you use and what temps you are seeing, though.


----------



## HyperMatrix

long2905 said:


> i have the ichill x4 inno3d i assume same as you. i just used a credit card to spread the paste evenly on the die to avoid any area not having contact. my paste is mx4 only though.
> 
> i would love to know what you use and what you are seeing though


Not that this is the cause of your problems, but spreading out the paste is a bad idea, as it increases the chance of creating little air pockets. The best method is just to drop it in the center and press it down, unless we're talking about LM.


----------



## long2905

HyperMatrix said:


> not that this is the cause of your problems, but spreading out the paste is a bad idea as it increases the chance of creating little air pockets. The best method is just to drop it in the center and press it down, unless we’re talking about LM.


Yeah, I normally do the pea method and let pressure do its thing, but I did that and still got high temps, so I opted for spreading the paste evenly this time around. Temps are slightly better, but yeah, I just want some confirmation on the iChill X4's performance before I'm done with it and sell it for another card.


----------



## GTANY

domenic said:


> I just cancelled the EKWB block I had on pre-order to pair with a Strix 3090. With the EK block I was planning on jury-rigging a RAM block to cool the backplate / backside RAM chips. I had been reading about the AquaComputer blocks with the actively cooled backplate with the attached heatpipe. After reading your post and learning its available for preorder this seems like a better solution so I pulled the trigger.
> 
> My only experience with water blocks previously has been with the EK 1080ti & 2080ti cards over the past four years. Is the overall quality of the AquaComputer product on par or better as compared to the EKWB product? Does the active cooled backplate work well?
> 
> Today was a good day. Besides the AQ block going up for pre-order I also managed to snag a Strix OC 3090 from Amazon (ETA is currently December 9-15 but hopefully will come in sooner).


Aquacomputer quality is far better than EK, and equals the Watercool quality.


----------



## ShadowYuna

Aqua Computer made the Strix block very fast this time. Looks like faster than EK! I canceled my EK preorder and put a preorder in with Aqua Computer.

Since the 3090 has memory on the backplate side, active cooling is worth it.


----------



## Menko22

Big question guys.

My 3090 Strix goes very well in gpu and mem overclock. 2150mhz core easy.

Problem= the middle fan makes bad noise from 87% to 100%.

Should I rma or keep this card with a bad fan?


----------



## lokran88

Bilco said:


> Man, where are all the 3090 FTW3 waterblocks. Don't these companies want my money?


So Watercool says probably December for their block.


ShadowYuna said:


> Aquacomputer makes Strix block very fast this time. Looks like faster than EK! Canceled EK preorder and put preorder on Aquacom.
> 
> Due to 3090 has memory on backplate, Active cooling is worth it


Also, I think Aqua Computer specifies thermal paste on the VRAM instead of pads. So even better for this hot-running GDDR6X.




GTANY said:


> Aquacomputer quality is far better than EK, and equals the Watercool quality.


Last gen, Aqua Computer was slightly better cooling-wise than Watercool.
But this gen WC revised their design and brings the Heatkiller V. I wanted that, but sadly there's no block for the Strix.


Menko22 said:


> Big question guys.
> 
> My 3090 Strix goes very well in gpu and mem overclock. 2150mhz core easy.
> 
> Problem= the middle fan makes bad noise from 87% to 100%.
> 
> Should I rma or keep this card with a bad fan?


My Strix also makes strange, inconsistent fan noises. It is bothersome at high RPM, but I will change to a waterblock anyway.
I got the new BIOS, and it seems it made things worse for me, even though it should have only changed the 0-fan mode.


----------



## Menko22

lokran88 said:


> So Watercool says probably December for their block.
> 
> Also I think that Aquacomputer foresees Thermalpaste on the vram instead of Pads. So even better for this hot running GDDR6X.
> 
> 
> 
> Last gen Aquacomputer was slightly better cooling wise than Watercool.
> But this Gen WC revised and bring's Heatkiller V. I wanted that bud sadly no block for the Strix.
> 
> My Strix also makes strange, inconsistent fan noises. It is bothersome at high Rpm. But I will change to waterblock anyway.
> Got the new BIOS and it seems that it Made it worse to me. Even though this should have only changed 0 Fan mode


Is yours also the fan noise at high speed? Mine at 100% is like having big electricity cuts.

Not sure if I should RMA. The other Strix card I had couldn't even do 50MHz more on the core.


----------



## Pinto

Swap coolers


----------



## ShadowYuna

Menko22 said:


> Big question guys.
> 
> My 3090 Strix goes very well in gpu and mem overclock. 2150mhz core easy.
> 
> Problem= the middle fan makes bad noise from 87% to 100%.
> 
> Should I rma or keep this card with a bad fan?


Can you have a look at my YouTube video below?

RTX 3090 ROG Strix cover noise 

I had the same problem on my Strix 3090. When the fan speed hits more than 60%, a strange sound comes up. It sounds like coil whine, but it is not.

When I press the LED bar it disappears, and it comes up again as soon as I lift my finger.


----------



## Esenel

@ShadowYuna 
Same here.
With the card mounted vertically, touching the middle fan helps.
It occurs at full fan speed.

Waterblock is coming.


----------



## Spiriva

ShadowYuna said:


> Can you have look below my youtube?
> 
> RTX 3090 ROG Strix cover noise
> 
> I had same problem on my strix 3090. When fan speed hit more than 60% , strange sounds comes up. It sounds like coil whine but it is not.
> 
> So when I press the LED Bar it disappear and comes up again as soon as I lift my finger.


My friend got a Strix 3090, and it sounds exactly like yours. Also like yours, it gets better if you press it with your finger. 
I'd say you should just close your case and forget about it, or put a waterblock on it. Not worth an RMA and playing the waiting game forever with ASUS.


----------



## Sync0r

ShadowYuna said:


> Can you have look below my youtube?
> 
> RTX 3090 ROG Strix cover noise
> 
> I had same problem on my strix 3090. When fan speed hit more than 60% , strange sounds comes up. It sounds like coil whine but it is not.
> 
> So when I press the LED Bar it disappear and comes up again as soon as I lift my finger.


Just sounds like the fan is catching on something; try moving the fan a bit when it's off.


----------



## pat182

So I just got my Strix OC (Canada), and it's pretty stable at 2100MHz on air with a custom locked curve. Pretty happy with my GPU so far. I had the MSI Ventus for 4 weeks too, and this is like 10-15 more FPS in 4K than the MSI 3090. Crazy.


----------



## ShadowYuna

Esenel said:


> @ShadowYuna
> Same here.
> In vertical it helps touching the middle fan.
> Occurs at full fan speed.
> 
> Waterblock is coming.


I went to my local ASUS service centre. They say they will give me a replacement within 5 days, so I'm waiting for the new one.


----------



## ShadowYuna

Spiriva said:


> My friend got a Strix 3090, it sounds exactly like yours. And also like urs it becomes better if you press it with your finger.
> Id say you should just close ur case and forget about it, or put a waterblock on it. Not worth RMA and play the waiting game forever with ASUS.


Once I get the replacement, I will put the waterblock from Aqua Computer on it.

Luckily I have a spare 3080 FE to use.


----------



## ExDarkxH

What I meant regarding the "EVGA screwing people over" post is this:
They locked down their 3090 cards, including the FTW3 Ultra, to 450W max.
Then they went and authored a 500W BIOS for PR that EVGA customers can't even use (but it works perfectly on many other 3x8-pin cards).
The limit is at the hardware level.

What I did to get around this is use a 50 milliohm shunt on the PCIe slot.
Do you know what happens when you stack a 50 on a 5? You gain 10%, or 7.5 watts, LOL.

Despite this, my scores have exploded, because the mod fixed the hard lock and now the 500W BIOS works.
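The "stack a 50 on a 5" arithmetic checks out: the two shunts in parallel read about 9% low, which lifts a 75W slot budget by roughly 10%, i.e. about 7.5W. A quick sketch (function name is mine, just for illustration):

```python
# Parallel-shunt math behind the "stack a 50 on a 5" remark above.
# Stacking a 50 mOhm resistor on the stock 5 mOhm shunt lowers the sensed
# resistance, so the controller under-reads power and the limit rises.
def stacked_shunt_gain(r_stock_mohm: float, r_added_mohm: float):
    """Return (parallel resistance in mOhm, power-limit gain factor)."""
    r_par = (r_stock_mohm * r_added_mohm) / (r_stock_mohm + r_added_mohm)
    gain = r_stock_mohm / r_par  # actual power = reported power * gain
    return r_par, gain

r_par, gain = stacked_shunt_gain(5.0, 50.0)
print(f"{r_par:.3f} mOhm -> +{(gain - 1) * 100:.0f}%")  # 4.545 mOhm -> +10%
print(f"extra slot headroom: {75 * (gain - 1):.1f} W")  # 7.5 W on a 75 W budget
```

The same formula explains why stacking equal-value shunts (5 on 5) halves the reading instead: the gain factor becomes 2.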


----------



## Keninishna

Sync0r said:


> Shunted my Zotac Trinity, just the 2 8 pins so far. I'm still power limited, think its the pcie slot hitting limit.
> 
> 
> 
> https://www.3dmark.com/pr/439481
> 
> 
> 
> View attachment 2463531


How much power were you drawing for that score? I'm tempted to shunt my Zotac and wondering if I should just do the two 8-pin connectors or a full shunt mod.


----------



## Sync0r

Keninishna said:


> How much power were you drawing for that score? I am tempted to shunt my zotac and wondering if I should just do the 2 8pin connectors or do a full shunting.


I've just shunted the two closest to the PCIe power connectors, stacking 5 milliohm on the existing 5 milliohms. I've seen it draw a max of 475W, with the power limit of the PCIe slot now limiting the two 8-pins. I have configured the MSI Afterburner GPU power reading to be multiplied by 1.8 to account for the shunts, so I can see what it's actually pulling. Full system load is about 650W to 700W from the wall, using a power meter; my CPU is also overclocked, adding to this. I have just ordered some 50 milliohm shunt resistors to up the PCIe slot by 10% like @ExDarkxH has done. I'm using the 390W Gigabyte OC BIOS with this mod.
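For anyone puzzling over where a global factor like 1.8 comes from: stacking 5mΩ on 5mΩ halves the reading on those two rails only, while the untouched slot shunt still reads true, so the right multiplier depends on how the load splits across rails. A rough sketch with made-up example readings (not measurements from this card):

```python
# With 5 mOhm stacked on 5 mOhm, the two 8-pin rails report half their real
# draw; the PCIe-slot shunt is untouched and reports true.
def corrected_total(slot_w: float, pin1_w: float, pin2_w: float,
                    pin_factor: float = 2.0) -> float:
    """True total power from under-reading per-rail values (example model)."""
    return slot_w + pin_factor * (pin1_w + pin2_w)

# Hypothetical readings: slot 70 W (true), each 8-pin reporting 100 W.
reported = 70 + 100 + 100                  # 270 W as software sums it
actual = corrected_total(70, 100, 100)     # 470 W really drawn
print(actual, actual / reported)           # 470.0, ratio ~1.74
```

That lands in the same ballpark as the 1.8 Afterburner fudge factor; the exact value shifts with the rail split, so the per-rail GPU-Z readings are the more reliable check.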


----------



## HyperMatrix

pat182 said:


> so i just got my strix OC (canada) and it pretty stable at 2100mhz on air with a custom lock curve, pretty happy with my gpu so far, had the msi ventus for 4 weeks too and its like 10-15 more fps in 4k than the msi 3090. crazy



Are you saying you're getting 2100MHz on air, without any exotic cooling, under full 100% gpu load, and without any throttling? If so, can you share a port royal link? Highest average clock speed I've been able to record through a port royal run is 2040MHz.


----------



## ExDarkxH

HyperMatrix said:


> Are you saying you're getting 2100MHz on air, without any exotic cooling, under full 100% gpu load, and without any throttling? If so, can you share a port royal link? Highest average clock speed I've been able to record through a port royal run is 2040MHz.


I believe him.
It's entirely possible; I did it with the EVGA card and the stock cooler.


https://www.3dmark.com/pr/442727


----------



## ivans89

@ExDarkxH check your conversations


----------



## Johneey

Sync0r said:


> I've just shunted the two closest to the PCIe power connectors, stacking 5 milliohm on the existing 5 milliohms. I've seen it draw a max of 475W, with the power limit of the PCIe slot now limiting the two 8-pins. I have configured the MSI Afterburner GPU power reading to be multiplied by 1.8 to account for the shunts, so I can see what it's actually pulling. Full system load is about 650W to 700W from the wall, using a power meter; my CPU is also overclocked, adding to this. I have just ordered some 50 milliohm shunt resistors to up the PCIe slot by 10% like @ExDarkxH has done. I'm using the 390W Gigabyte OC BIOS with this mod.


I will do the same tomorrow; I have 5mOhm too. So you shunted only the two on the 8-pins, yeah?


----------



## HyperMatrix

ExDarkxH said:


> I believe him
> It's entirely possible. I did it with the Evga card and stock cooler
> 
> 
> https://www.3dmark.com/pr/442727


Without shunting or doing exotic cooling? (Pc in garage, by open window, AC blowing cold air on it)


----------



## ivans89

I ordered some shunts too, my card is at max 405w (500w bios)


----------



## Thanh Nguyen

Sync0r said:


> I've just shunted the two closest to the PCIe power connectors, stacking 5 milliohm on the existing 5 milliohms. I've seen it draw a max of 475W, with the power limit of the PCIe slot now limiting the two 8-pins. I have configured the MSI Afterburner GPU power reading to be multiplied by 1.8 to account for the shunts, so I can see what it's actually pulling. Full system load is about 650W to 700W from the wall, using a power meter; my CPU is also overclocked, adding to this. I have just ordered some 50 milliohm shunt resistors to up the PCIe slot by 10% like @ExDarkxH has done. I'm using the 390W Gigabyte OC BIOS with this mod.


Any idea why I have a higher average clock but a lower score?


https://www.3dmark.com/pr/450062


----------



## ExDarkxH

HyperMatrix said:


> Without shunting or doing exotic cooling? (Pc in garage, by open window, AC blowing cold air on it)


The only shunt I have adds 7W, just to make the 500W BIOS work, but I also had a score of 14,853 before I did the mod, on the stock 450W shipping BIOS with everything else stock, at a 2,114MHz average clock.
So yes, it's definitely possible if you win the silicon lottery.

I was able to find it


https://www.3dmark.com/pr/423566



As you can see, my peak clocks are way lower now but my average clocks are higher, so I should probably play around and undervolt it a bit.
You don't need to keep asking about cooling; you can see the average temps in the bench links.


----------



## bmagnien

Thanh Nguyen said:


> Any idea why I have higher avg clock but lower score?
> 
> 
> https://www.3dmark.com/pr/450062


Temperature. You're 11°C over the other score, so you're hitting boost thresholds tied to thermals.


----------



## jura11

domenic said:


> I just cancelled the EKWB block I had on pre-order to pair with a Strix 3090. With the EK block I was planning on jury-rigging a RAM block to cool the backplate / backside RAM chips. I had been reading about the AquaComputer blocks with the actively cooled backplate with the attached heatpipe. After reading your post and learning its available for preorder this seems like a better solution so I pulled the trigger.
> 
> My only experience with water blocks previously has been with the EK 1080ti & 2080ti cards over the past four years. Is the overall quality of the AquaComputer product on par or better as compared to the EKWB product? Does the active cooled backplate work well?
> 
> Today was a good day. Besides the AQ block going up for pre-order I also managed to snag a Strix OC 3090 from Amazon (ETA is currently December 9-15 but hopefully will come in sooner).


Hi there 

I can only vouch for Aqua Computer Kryographics waterblocks and their active backplates. I'm using a Kryographics RTX 2080 Ti with active backplate on my RTX 2080 Ti AMP (running the EVGA FTW3 BIOS); on that card, temperatures are better overall by 5-6°C compared with the EKWB Vector RTX 2080 Ti block. In rendering, my RTX 2080 Ti Strix with the EKWB Vector Strix waterblock reaches around 36-40°C, while the Zotac RTX 2080 Ti AMP with the Kryographics block and active backplate is literally in the low 30s, usually 32-34°C, at the same clocks and the same power draw.

The quality of Aqua Computer is on another level compared with EKWB; for sure you will love their blocks and backplate.

I'm also going with Aqua Computer Kryographics for the RTX 3090 and AMD 6900 XT; no way I will go with EKWB again. I wish Aqua Computer had released a Strix waterblock for the RTX 2080 Ti as well, I would have bought that too.

Hope this helps 

Thanks, Jura


----------



## ivans89

I would go with Aqua Computer too, but I don't know if there will be an FTW3 block. I hope there will be a block for the FTW3 soon.


----------



## Sync0r

Thanh Nguyen said:


> Any idea why I have higher avg clock but lower score?
> 
> 
> https://www.3dmark.com/pr/450062


My memory is clocked higher, might be it? My GPU was also running cooler. Dunno, not much in it. I had GPU scheduling off, performance settings in the Nvidia control panel, G-Sync off, and as many other programs closed as I could. Running the standalone 3DMark, not the Steam version. Shouldn't be telling you this, I don't want you to beat me 😂


----------



## DrunknFoo

jura11 said:


> Hi there
> 
> I can only vouch for Aquacomputer Kryographics waterblocks and their active backplates, I'm using now or have one Kryographics RTX 2080Ti with active backplate on my RTX 2080Ti AMP with EVGA FTW3 BIOS , on that card temperatures are overall better by 5-6°C if I'm comparing with EKWB Vector RTX 2080Ti block, in rendering my RTX 2080Ti Strix with EKWB Vector RTX 2080Ti Strix waterblock will reach now around 36-40°C and Zotac RTX 2080Ti AMP with Kryographics RTX 2080Ti waterblock and active backplate temperatures are literally in low 30's, usually sits in 32-34°C at same clocks and same power draw
> 
> Quality of Aquacomputer is on another level if you are comparing with EKWB and for sure you will love their blocks and backplate
> 
> I'm going too with Aquacomputer Kryographics for RTX 3090 and AMD 6900XT, no way I will go with EKWB again, wish Aquacomputer released Strix waterblock for RTX 2080Ti as well, I would get their block as well
> 
> Hope this helps
> 
> Thanks, Jura


What rads? Water temp and fan rpm on full load for those temps?

You got me considering getting this now for the 3090 and slapping a ram block on it as well lmao


----------



## Jpmboy

Sync0r said:


> My memory is clocked higher, might be it? My GPU was also running cooler. Dunno, not much in it. I had GPU scheduling off, performance settings in the Nvidia control panel, G-Sync off, and as many other programs closed as I could. Running the standalone 3DMark, not the Steam version. Shouldn't be telling you this, I don't want you to beat me 😂


Both the GPU and memory controller have error-checking protocols... Higher clocks that produce correctable errors (mismatched checksums) loop to correct them, resulting in lowered efficiency. ✌
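A toy model of why that shows up in scores (the numbers are invented for illustration): replayed transfers are wasted bandwidth, so a higher memory clock that triggers replays can deliver less useful throughput than a lower, error-free one.

```python
# Illustrative only: net useful transfer rate when a fraction of memory
# transactions fail the CRC check and must be replayed.
def effective_rate(clock_mhz: float, replay_fraction: float) -> float:
    return clock_mhz * (1.0 - replay_fraction)

print(effective_rate(10_500, 0.00))  # +500 OC, error-free: 10500.0
print(effective_rate(11_000, 0.06))  # +1000 OC, 6% replays: ~10340, i.e. slower
```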


----------



## Wihglah

Got my FTW3 today.
One strange thing: when I ran PX1 it made me update my "firmware", and this actually punted me down a bin generally.
Flashed the 500W BIOS, and since then I've seen 478W total and 79.6W on the PCIe connector during PR.
Best results came from undervolting though: 0.975V @ 2040MHz (+100).


https://www.3dmark.com/pr/450495



This any good?

Haven't touched the memory yet.


----------



## ivans89

I have a similar Port Royal benchmark, 14244, but my card maxes out at 405W total and won't go above. :/


----------



## domenic

DrunknFoo said:


> What rads? Water temp and fan rpm on full load for those temps?
> 
> You got me considering getting this now for the 3090 and slapping a ram block on it as well lmao


I was thinking the same thing since I already bought the RAM block when I was in DIY mode. Can never be cool enough....


----------



## Johneey

I have a Palit and I shunted the two closest ones, but the power reading hasn't changed. Is it a bad solder joint, or should I shunt all 4?
Using 5mOhm.


----------



## Sync0r

Johneey said:


> I have a Palit and I shunted the two closest ones, but the power reading hasn't changed. Is it a bad solder joint, or should I shunt all 4?
> Using 5mOhm.


GPU-Z started showing about 80 to 90W draw per connector when I did mine; it halved the reported power draw. What are you seeing? Have you got a multimeter to check the soldering? Which BIOS are you using?


----------



## Johneey

Sync0r said:


> GPUz started showing about 80 to 90w draw per connector when I did mine, halved the power draw. What you seeing? Have you got a multimeter to check soldering? Which bios are you using?


OK, so I think I need to check the solder joints again, it shows me 150... I hate it, I have a fully hardtubed custom loop.


----------



## Sync0r

Johneey said:


> Ok so I think need to check the solders again , shows me 150 ... I hate it I have fully hardtube custom


Lols damn, I used to use hardtube, never again. I have my card on soft tubes with quick disconnects now.


----------



## Johneey

Sync0r said:


> Lols damn, I used to use hardtube, never again. I have my card on soft tubes with quick disconnects now.


So your layout is the same as mine, or? 4 on the PCIe and 1 on the bottom?


----------



## Sync0r

Johneey said:


> So ur layout is same as mine or ? 4 on pci and 1 on the bottum


Board looks like this (Zotac trinity):


----------



## jura11

DrunknFoo said:


> What rads? Water temp and fan rpm on full load for those temps?
> 
> You got me considering getting this now for the 3090 and slapping a ram block on it as well lmao


I'm running 4x 360mm radiators (2x HWLabs SR-2 360mm, 2x HWLabs GTS 360) plus a MO-RA3 360. Water temperature in rendering, with all 3 GPUs loaded, is 23-25°C max; water delta T is 2-4°C max, I've not seen higher than 5°C, and fan speed usually sits at 650-750RPM max. My fan speed is based on water delta T, not the actual water temperature or component temperature.
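Delta-T-based fan control like this is easy to reason about as a simple linear map. Here's a sketch using the numbers quoted above; the thresholds and function name are my assumptions, not actual Aquasuite settings:

```python
def fan_rpm_from_delta_t(delta_t: float, min_rpm: int = 650, max_rpm: int = 750,
                         t_low: float = 2.0, t_high: float = 5.0) -> float:
    """Linearly ramp fan speed between min and max across a water delta-T band."""
    if delta_t <= t_low:
        return float(min_rpm)
    if delta_t >= t_high:
        return float(max_rpm)
    frac = (delta_t - t_low) / (t_high - t_low)
    return min_rpm + frac * (max_rpm - min_rpm)

print(fan_rpm_from_delta_t(1.5))  # 650.0 (loop coasting)
print(fan_rpm_from_delta_t(3.5))  # 700.0 (mid-band)
print(fan_rpm_from_delta_t(6.0))  # 750.0 (clamped at max)
```

Controlling off delta-T rather than absolute water temperature keeps the fans quiet when ambient is high but the loop is shedding heat fine.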

Looking forward to seeing Aquacomputer Kryographics reviews, I expect Igors Lab will do review and test of this block 

Hope this helps 

Thanks, Jura


----------



## mattxx88

Johneey said:


> Ok so I think need to check the solders again , shows me 150 ... I hate it I have fully hardtube custom


I read here, in the "RTX 3090 Founders Edition working shunt mod" thread, that you should shunt all the resistors, even the PCIe one.


----------



## stefxyz

What is a good memory OC value for the 3090? Mine does +1700 before crashing, on air.


----------



## Vgnusstffr

Can someone please help me? I have 2 RTX 3090s in SLI and I cannot get Red Dead Redemption 2 to run; it just goes to a black screen after I start Story Mode. I am using Vulkan, as I know SLI won't work with DX12 on this title. Also, I saw someone post that you can "trick" these cards into allowing SLI on other titles that used to work in SLI but no longer do on the 3090, by changing the name of the .exe file to that of a title that is supported on the 3090 setup. Is this true? If so, can someone elaborate on whether anything else needs to be done to get it to work?


----------



## Nizzen

Bykski strix 3090 block in action 😇


----------



## Menko22

ShadowYuna said:


> Can you have a look at my YouTube video below?
> 
> RTX 3090 ROG Strix cover noise
> 
> I had the same problem on my Strix 3090. When the fan speed hits more than 60%, a strange sound comes up. It sounds like coil whine, but it is not.
> 
> When I press the LED bar it disappears, and it comes back as soon as I lift my finger.


I'm going to keep it. It's my second 3090 Strix; the first one didn't have this fan noise, but it wouldn't even accept a 50MHz overclock with the power limit maxed.

I guess I'll leave the fans at 85% to avoid the noise and see how much more OC I can get with a little undervolting.

How much performance gain can you get with a waterblock? I have never tried water-cooling on a CPU or GPU.


----------



## Nizzen

Menko22 said:


> I'm going to keep it. It's my second 3090 Strix and the first one didn't have this fan noise but didn't even accept 50mhz overclock with power limit to the max.
> 
> I guess I'll leave the fans at 85% to avoid the noise and see how much more oc can I get with a little undervolting.
> 
> How much performance gain can you get with water-cooling plates? I have never tried water-cooling in cpu-gpu.


15600+ points in Port Royal with the 3090 Strix watercooled now... shunt modded...


----------



## Johneey

Nizzen said:


> 15600 points+ in port royal with 3090 strix watercooled now.. Shuntmodded...


Link to proof?


----------



## Nizzen

Johneey said:


> Link to proof?


Carillo and I are having an OC party 









3DMark Port Royal Hall of Fame


The 3DMark.com Overclocking Hall of Fame is the official home of 3DMark world record scores.




www.3dmark.com


----------



## CrazyThunderbird

long2905 said:


> what is the temp you are seeing now? i ask cause mine heat up like crazy even with a repaste. a undervolt helps a little [email protected] but the card is still creeping around 76-77c


Replaced the stock fans with 2x 120mm fans; I usually get around 70°C @ 850mV.

And oh boy, your card runs like a monster...

I only get a stable ~1800 @ 850mV (it holds 1875 easily for benches, but in long gaming sessions I get a driver reset...)


----------



## Wrathier

CrazyThunderbird said:


> Replaced the stock fans with 2x 120mm fans, i usually get around 70 degrees @ 850mV
> 
> And oh boy, your card runs like a monster...
> 
> I only get a stable ~1800 @850mV (it holds 1875 easily for benches on it, but on long gaming session i get a driver reset...)


Would you not get a bit more if you adjusted to 0.900?


----------



## ShadowYuna

Menko22 said:


> I'm going to keep it. It's my second 3090 Strix and the first one didn't have this fan noise but didn't even accept 50mhz overclock with power limit to the max.
> 
> I guess I'll leave the fans at 85% to avoid the noise and see how much more oc can I get with a little undervolting.
> 
> How much performance gain can you get with water-cooling plates? I have never tried water-cooling in cpu-gpu.


I went to the ASUS service centre for a check-up. At least they can offer me a new one as a replacement; that will do.

I am waiting for the waterblock to arrive anyway.


----------



## Johneey

Sync0r said:


> Board looks like this (Zotac trinity):
> View attachment 2463937


So did you shunt the 2 or the 4 on the 8-pin, top right?


----------



## Thanh Nguyen

41°C to 26°C: how many MHz would I gain if I could lower the temp that much?


----------



## Nizzen

Looks like I beat JayzTwoCents in Port Royal 😅
-Nzz


----------



## Wrathier

Nizzen said:


> Looks like I beat JayzTwoCents in Port Royal 😅
> -Nzz


I'm in Sweden, in Åmål. Can you drop by and do these mods for me so I can get on that freaking list??

Thanks


----------



## jura11

Nizzen said:


> Looks like I beat JayzTwoCents in Port Royal 😅
> -Nzz


What temperatures are you seeing with the Bykski waterblock? I have the Bykski waterblock for the RTX 3090 here as well, but no GPU 😔

Thanks, Jura


----------



## Jpmboy

Still no FTW3 waterblocks?


----------



## originxt

Jpmboy said:


> Still no FTW3 waterblocks?











Absolute GPU Block - FTW3 3080, 3080 Ti, 3090


Optimus Absolute GPU Block designed for the EVGA FTW3 3080/3080Ti/3090 The Absolute block is our all-out performance design, created to achieve maximum cooling on all areas of the new NVIDIA RTX 3080 and 3090 FTW3 cards from EVGA. The FTW3 GPUs pull huge amounts of power and require top cooling...




optimuspc.com


----------



## Jpmboy

Thanks. I'm already in for a "notify" from Optimus. Looks like they will have batch 2 sometime in late November.

Edit: had a chance to check Optimus... and they ARE taking orders. Got an email saying batch 2 is ready!


----------



## long2905

CrazyThunderbird said:


> Replaced the stock fans with 2x 120mm fans, i usually get around 70 degrees @ 850mV
> 
> And oh boy, your card runs like a monster...
> 
> I only get a stable ~1800 @850mV (it holds 1875 easily for benches on it, but on long gaming session i get a driver reset...)


Haha, not sure how I should feel about that. I wouldn't expect this card to be as good as the Strix and FTW3, but the gap so far is too big, both performance- and thermal-wise.

The look of the card is certainly unique, and it's quite surprising that the cooling is inadequate even with that extra "VRM fan". I could strap on 2 Noctua fans, but the card is so new and I paid so much already, you know; it's better to sell it off and get something else for a little extra (demand has cooled down a bit here in Vietnam, what with the AMD announcement and how much more expensive the 3090 is in general here).

The Strix is about $2400 here, so I guess it's still (reasonably) out of reach, but we will see.


----------



## pat182

HyperMatrix said:


> Are you saying you're getting 2100MHz on air, without any exotic cooling, under full 100% gpu load, and without any throttling? If so, can you share a port royal link? Highest average clock speed I've been able to record through a port royal run is 2040MHz.


Well, mainly in Horizon Zero Dawn, which I'm playing. Did a quick PR without any tinkering, 14 442: https://www.3dmark.com/3dm/52343513?


----------



## originxt

Jpmboy said:


> Thanks. I'm already in for a "notify" from Optimus. Looks like they will have batch 2 sometime late November.
> 
> Edit: had a chance to check optimus... and they ARE taking orders. Got an email sayin batch 2 is ready!


Sorry, I should have clarified when I posted. Batch 2 was up at time of posting.


----------



## Sync0r

Johneey said:


> So u shunt the 2 or the 4 on 8 Pin Right top


Just 2, the top 2, closest to 8 pins.


----------



## HyperMatrix

pat182 said:


> well, mainly in horizon zero dawn that im playing, did a quick PR without any tinkering here 14 442 :https://www.3dmark.com/3dm/52343513?












Average dropping to 2045MHz. This makes more sense. Like I said, mine would get a 2040MHz average in Port Royal. You had me scared for a few hours there that I'd gotten a really bad card; I was legit contemplating selling it and picking up another one later, haha. It still looks to be better than my card, because I only got the 2040 when I let the room temperature drop to around 20°C and put an extra fan in front of it. But small differences like this I'm OK with.


----------



## jsarver

I was originally worried I'd be settling for the FE model, as the FTW was my first choice. Been testing today and am able to complete Time Spy with +200 and +1000 on chilled air. Coil whine on this is really noticeable; anyone else have this? The goal is to put it under water. Do you guys feel this is a keeper, or should I flip it when my FTW queue email pops? Games are only stable at +120, and I drop the memory to +500 to avoid hidden error correction; that keeps me pegged at 2100MHz. Thoughts?


----------



## HyperMatrix

jsarver said:


> Was originally worried I would be settling with the fe model as ftw was my first choice. Been testing today and am able to complete time spy with +200 and +1000 on chilled air. Coil whine on this is really noticeable. Anyone else have this? Goal is to put under water. You guys feel like this is a keeper or flip when my ftw que email pops? Games only stable at +120 and I drop mem to +500 to avoid hidden error correction. keeps me pegged at 2100mhz. Thoughts?


Honestly, I've seen some higher and some lower, but in general my understanding is that the only realistically optimistic target you should be looking at is 2100MHz locked while gaming at 100% GPU usage, without any drops. And that's not an easy task. Very few people have been able to hit over 2200MHz for short bursts, and I doubt they'd be able to maintain that with standard water cooling; I don't think anyone is going to sustain 2200MHz under full load for extended gaming sessions. My Strix can momentarily hit 2175MHz at 80% load while under 40°C before throttling down (just a few seconds, as it heats up real quick). Even under 40°C, that was pushing my power draw to 495W and giving me VRel and VOp throttling at the same time. Under 100% load in that same scenario, it only boosts to 2100MHz. I know a few people have had better cards, but the difference was minimal, and it was very few people.

My hope for my card, under water with the KPE 520/525W BIOS, is to be able to sustain 2100MHz. If it could get up to 2150MHz, that'd be gravy. But you're basically battling several things: 1) the silicon lottery, 2) heat, 3) TDP. TDP you can adjust with shunt mods; heat you can control with water cooling. But the silicon lottery is the same across all vendors; unless you're willing to shell out for a KPE or another binned card, there are no guarantees.


----------



## Thanh Nguyen

You can run Port Royal at 2200MHz, but that doesn't mean you can run other tests or long gaming sessions. Anyone who has shunted their card: how much hotter does your card run after the shunt? Mine had a 12°C delta or less, but after the shunt it's 16-18°C.


----------



## cstkl1

Wth, I can get another 3090 Strix.

Seems a lot of people are waiting to see now because of the 6900 XT.

Should I?? SLI??


----------



## HyperMatrix

cstkl1 said:


> wth. i can get another 3090strix
> 
> seems alot of ppl wait and see now because of 6900xt
> 
> should i?? sli??


Well, I just tested Shadow of the Tomb Raider, and with Ultra RTX shadows my FPS drops down to around 60. Since it has mGPU support built in, that means if you got a second card you'd be able to play it at 100fps+. Keep in mind that this will be one of the only games you'll actually be able to play with 2 cards. But still... if money is no object... I would have done it if the 3090 were priced at $999 like the 6900 XT. But at $1800 a piece, plus $200 a piece for blocks, and then 1000-1200W draw for the GPUs and 300W for the CPU, I dunno. A little bit much. I would be envious if you did it, haha.


----------



## ivans89

Hi


originxt said:


> Absolute GPU Block - FTW3 3080, 3080 Ti, 3090
> 
> 
> Optimus Absolute GPU Block designed for the EVGA FTW3 3080/3080Ti/3090 The Absolute block is our all-out performance design, created to achieve maximum cooling on all areas of the new NVIDIA RTX 3080 and 3090 FTW3 cards from EVGA. The FTW3 GPUs pull huge amounts of power and require top cooling...
> 
> 
> 
> 
> optimuspc.com





Jpmboy said:


> Thanks. I'm already in for a "notify" from Optimus. Looks like they will have batch 2 sometime late November.
> 
> Edit: had a chance to check optimus... and they ARE taking orders. Got an email sayin batch 2 is ready!



This block is so expensive; is it worth it? I've never heard of Optimus.


----------



## cstkl1

HyperMatrix said:


> Well I justed tested Shadow of The Tomb Raider and with Ultra RTX Shadows, my FPS drops down to around 60. Since it has mGPU support built in, that means if you got a second card, you'll be able play it at 100fps+. Keep in mind that this will be one of the only games you'll actually be able to play with 2 cards. But still...if money is no object...I would have done it if the 3090 were priced at $999 like the 6900 XT. But at $1800/piece + $200/piece for blocks, and then 1000-1200W draw for GPU and 300W for CPU, I dunno. Little bit much. I would be envious if you did it. Haha.


To be honest, I won't SLI for gaming, so it'd be pure synthetics. I gave up on SLI after the 1080 Ti;
it's just so poorly supported, and trying to get it to work pisses me off.

Afaik only
@Baasha has the patience for this.

It's USD 2100 MSRP here. Money I can always spend, but it would affect buying other things; I wanted to test a 5900X with a Strix 6800 XT LC, and also get either a PS5/Xbox (waiting till people test the cooling etc).


----------



## cstkl1

ivans89 said:


> Hi
> 
> 
> 
> 
> This block is so expensive, are they worth it? Never heard from optimus.


Seems well designed.
EK has POOR nickel plating.
In the US market, with Koolance dead and Swiftech not competing, buying watercooling gear from Europe/Asia is kind of silly, since American companies have better CS/warranty etc. I know Microcenter now stocks all sorts of WC brands, but... hmm, Optimus sure looks good.


----------



## ivans89

cstkl1 said:


> seems well designed.
> ek has POOR nickel plating.
> US market with koolance dead and swiftech not competing. buying wc from europe/asian kindda of silly since American companies has better cs/warannty etc . i know microcenter now stocks all sort of wc brand but.. hmm optimus sure looks good.


The block looks great ofc! I would normally prefer Aqua Computer, but I don't know if an FTW3 block is coming, or when (maybe next year?). Bykski has one, but the vendor in Germany doesn't know when it will be available.
Hard decision, it's $430 with shipping :s


----------



## cstkl1

ivans89 said:


> The block looks great ofc! I would normally prefer aqua computer, but i dont know if there will come a FTW3 block and when (maybe next year?). Bykski has one, but the vendor in germany dont know when it will be avaiable.
> Hard decision  it's $430 with shipping :s


Bykski also has bad plating, but their blocks are so cheap... so no problem experimenting.

Barrow has some too, afaik; they look decent.
The con is that they reuse existing backplates, so the plexi part extends past the PCB. Their plating is decent, no issues afaik.

Are you on that German OC Discord?? User nrw is testing one on a 3080 TUF.


----------



## ivans89

cstkl1 said:


> bykski also bad plating but the gpu so cheap..so np experementing..
> 
> barrows has some afaik. looks decent.
> cons is they reuse existing backplates so the plexi part extends out of pcb. their plating no issue decent afaik.
> 
> are u on that oc german discord?? user nrw testing one on 3080 tuf.


There are a few announcements of FTW3 blocks, but nobody has an ETA (only Optimus).

Which Discord do you mean? I don't think I'm on it.


----------



## cstkl1

ivans89 said:


> There are a few announcements for the ftw3 block, but nobody has ETA (only optimus).
> 
> Which discord you mean? i think im not.


@Esenel seek this dude.


----------



## ivans89

Thank you @cstkl1,

I ordered the Optimus just now.


----------



## kheopstr

This is my score. Fans on auto, NVIDIA panel on stock settings. GPU PL 123, GPU core +130, VRAM +0, system RAM 4x8GB 3600 CL18, CPU OC'd to 5.1.



https://www.3dmark.com/3dm/52351042?


----------



## ShadowYuna

Hi Guys

Does anyone know when the Aqua Computer kryographics NEXT RTX 3080 Strix / RTX 3090 Strix, nickel-plated version, will launch? It's taking preorders now, but it says out of stock and may take up to 60 days.






kryographics NEXT RTX 3080 Strix / RTX 3090 Strix, nickel-plated version


kryographics NEXT RTX 3080 Strix / RTX 3090 Strix, nickel-plated version: full-cover water block for ASUS GeForce RTX 3080 ROG Strix and ASUS GeForce RTX 3090 ROG Strix graphics cards. The cooler base, milled from a solid copper block, makes contact with all components that need cooling...




shop.aquacomputer.de


----------



## ivans89

ShadowYuna said:


> Hi Guys
> 
> Does anyone know when is
> 
> Aqua Computer kryographics NEXT RTX 3080 Strix / RTX 3090 Strix, nickel plated version will be launch?? It takes preorder now but it says out of stock and it may take up to 60 days
> 
> 
> 
> 
> 
> 
> kryographics NEXT RTX 3080 Strix / RTX 3090 Strix, nickel-plated version
> 
> 
> kryographics NEXT RTX 3080 Strix / RTX 3090 Strix, nickel-plated version: full-cover water block for ASUS GeForce RTX 3080 ROG Strix and ASUS GeForce RTX 3090 ROG Strix graphics cards. The cooler base, milled from a solid copper block, makes contact with all components that need cooling...
> 
> 
> 
> 
> shop.aquacomputer.de


On their forum they say they'll start shipping in 2-3 weeks.


----------



## ShadowYuna

ivans89 said:


> In their forum they say in 2.-3 weeks they start shipping.


Thanks then they should start shipping around 16th or 23rd of Nov. Good to know.


----------



## originxt

ivans89 said:


> Hi
> 
> 
> 
> 
> This block is so expensive, are they worth it? Never heard from optimus.


Optimus makes quality parts. If you don't mind waiting a bit, you'll get some numbers soon when people receive the first batch around mid-November. I have their Sig V2 block on my 10980XE and it's a quality block; the fins are amazing. Customer service is good, but they're small, so they have a hard time getting to everyone in a timely manner. GamersNexus got a block and measured a max 8°C delta over ambient; I think they used a MO-RA3 360 for their initial testing.


----------



## Esenel

@Nizzen 
Going from air to block.
How much further could you increase core clock?

Do you by chance have a before and after from Timespy?

Thanks ;-)


----------



## shredy44

6900XTkiller


----------



## ivans89

originxt said:


> Optimus makes quality parts. If you don't mind waiting a bit, you will get some numbers soon when people get the first batch in around mid november. I have their sig v2 block for my 10980xe and its a quality block, fins are amazing. Customer service is good but they are small so have a hard to getting to everyone in a timely manner. GamersNexus got a block and had a max 8c difference from ambient delta and I think they used a mora3 360 for their initial testing.


I ordered it anyway, but ty for your answer.


----------



## GTANY

Nizzen said:


> Bykski strix 3090 block in action 😇
> View attachment 2463954


Are you satisfied with the Bykski quality ? (manufacturing, contact with the components)
Are chokes in contact with the waterblock ?

Backplate :

What is its thickness ?
Are thermal pads included with it ?
Is it aluminium or copper ?

What temperatures do you have ? If the card is shunt-modded (2 PCI-e 8 pins + PCI-e connector), will you be able to install the waterblock ?

Ordered on the Aliexpress Bykski shop ? What is the delivery time ?


----------



## Jpmboy

GTANY said:


> Are you satisfied with the Bykski quality ? (manufacturing, contact with the components)
> Are chokes in contact with the waterblock ?
> 
> Backplate :
> 
> What is its thickness ?
> Are thermal pads included with it ?
> Is it aluminium or copper ?
> 
> What temperatures do you have ? If the card is shunt-modded (2 PCI-e 8 pins + PCI-e connector), will you be able to install the waterblock ?
> 
> Ordered on the Aliexpress Bykski shop ? What is the delivery time ?


I have 4 Bykski blocks: 3 on Titan Vs, 1 on a 1080. 2 EK Vectors on 2080 Tis, Koolance VGA and Swiftech VGA blocks on several other cards, 2 Optimus CPU blocks, 4 Koolance and EK...
The Bykski blocks work just fine. Quality is good and no silly jet plates to putz with.
The EK Vectors are okay. Jet plates: -1. Coolant flow on the 2080 Ti blocks is poor if the blocks are mounted vertically. The Ni plating is very good; the issue with EK plating happened many years ago, and I've not seen any plating problems since.
Optimus quality has been absolutely top notch. So if past performance is a predictor, their FTW3 block should be top-line stuff. Yes, they are expensive, but their CPU blocks are the best available right now IMO.


----------



## Nizzen

GTANY said:


> Are you satisfied with the Bykski quality ? (manufacturing, contact with the components)
> Are chokes in contact with the waterblock ?
> 
> Backplate :
> 
> What is its thickness ?
> Are thermal pads included with it ?
> Is it aluminium or copper ?
> 
> What temperatures do you have ? If the card is shunt-modded (2 PCI-e 8 pins + PCI-e connector), will you be able to install the waterblock ?
> 
> Ordered on the Aliexpress Bykski shop ? What is the delivery time ?


Quality is good! Very thick backplate (need to measure). There is no support for the block, no "how to install" papers.
Thermal pads are included. Looks like an alu backplate.
Used chiller yesterday, so haven't tested ambient water yet. There was no time lol 

Shunted all 7(?) shunts 😅 Whether it's the right way, I don't know. I haven't seen anyone else shunt-mod a 3090 Strix on the Internet.

Haven't checked the contact yet, but I will some day.

It looks like it works pretty well, if you look at the scores 
Block was shipped from our friends in Thailand. "Clock Em Up"

That's all the info I have for now 

Love from Norway 

-Nzz


----------



## GTANY

Nizzen said:


> Quality is good! Very thick backplate. (Need to mesure) There is no support for the block. No "how to install" papers.
> Thermal pads is included. Looks like alu backplate.
> Used chiller yesterday, so haven't tested ambient water yet. There was no time lol
> 
> Shunted every 7? shunts 😅 If it's the right way, I don't know. Haven't seen anyone else shuntmod 3090 strix on the Internet
> 
> Havent checked the contact yet, but I will some day.
> 
> It looks like it works pretty well, if you look at the scores
> Block was shipped from our friends in Thailand. "Clock Em Up"
> 
> That's all the info I have for now
> 
> Love from Norway
> 
> -Nzz


OK, thank you very much for your feedback. 

I forgot one question : what is the distance between the waterblock plexiglass and the Strix PCB, on the right of the card ? Indeed, I intend to plug an Elmor EVC2S module into the right of the Strix PCB to increase the voltage. I will have to solder a 3-pin angled connector : I don't know if this connector will fit between the plexiglass and the PCB.


----------



## Tias

How is Watch Dogs Legion running for you guys with a 3090? I got the game like everyone else did who bought an RTX 3080/3090, but does the game run well, or is it better to just wait a few weeks for a few performance patches?


----------



## Nizzen

Tias said:


> How is Watch Dogs Legion running for you guys with a 3090? I got the game like everyone else did who bought an RTX 3080/3090, but does the game run well, or is it better to just wait a few weeks for a few performance patches?


Never tested it, and never will 
Only time for some Battlefield and benchmarking LOL. 

I'm a Battlefield fanboy and Quake 3 cpma


----------



## Spiriva

Tias said:


> How is Watch Dogs Legion running for you guys with a 3090? I got the game like everyone else did who bought an RTX 3080/3090, but does the game run well, or is it better to just wait a few weeks for a few performance patches?


There already was a patch for the game lol. I dunno if it made any difference though; I didn't try the game before the patch.
It runs okay, I guess.










This is how the OSD looks after an hour or so of gaming. All set to ultra at 3840 x 1600.

Below 39°C the card boosts a bit higher in this game.










----------



## Wihglah

Ordered the Optimus waterblock (Batch 2)


----------



## HyperMatrix

Tias said:


> How is Watch Dogs Legion running for you guys with a 3090? I got the game like everyone else did who bought an RTX 3080/3090, but does the game run well, or is it better to just wait a few weeks for a few performance patches?


In 4K, it's fine. I tried turning down some settings and saw there were CPU limitations at times past the 80fps mark. So if you're doing 1080p/1440p, you'll have issues. But if you're playing with settings maxed out at 4k, you won't be hitting 80fps often. Out on the streets I'm averaging around 62-75 with all maxed except medium RT reflections and DLSS Quality (i can't see a difference visually between medium/high/ultra). It's honestly a nice game, visually. Especially if you're playing with HDR. Little upsetting that I can't play it with higher FPS...but it's looking like I'll have to accept that the 3090 is just a 4K60 card with RT/DLSS.


----------



## HyperMatrix

How are you guys doing for memory overclocking, btw? 21GHz seems to run all day no problem regardless of heat. 21.5GHz runs for around 15-20 minutes before crashing. 21.8GHz takes about 3-5 minutes to crash. 22GHz is an instant crash as soon as I try to apply it. The weird thing about the crash is that it's not a driver crash/recovery. It'll literally either lock up or restart my system. I've seen great FPS increase with faster memory so I'd definitely want to push it as much as possible. Is it safe to assume the crashes are all heat related at this point? Should I expect to maintain 21.8GHz with an active cooled backplate? Or is there more at play here?


----------



## Nizzen

HyperMatrix said:


> How are you guys doing for memory overclocking, btw? 21GHz seems to run all day no problem regardless of heat. 21.5GHz runs for around 15-20 minutes before crashing. 21.8GHz takes about 3-5 minutes to crash. 22GHz is an instant crash as soon as I try to apply it. The weird thing about the crash is that it's not a driver crash/recovery. It'll literally either lock up or restart my system. I've seen great FPS increase with faster memory so I'd definitely want to push it as much as possible. Is it safe to assume the crashes are all heat related at this point? Should I expect to maintain 21.8GHz with an active cooled backplate? Or is there more at play here?


Most likely you will need a bit more juice (voltage) and/or way more cooling, like a big water DIMM cooler mounted on the back.


----------



## HyperMatrix

Nizzen said:


> Most likely you will need a bit more juice (voltage) and/or way more cooling, like a big water DIMM cooler mounted on the back.


I’m starting to think I should sell my card and roll the dice on the silicon lottery with a new one. So far, unimpressive for what I expected of the ROG Strix.


----------



## Baasha

woohoo!


----------



## Thanh Nguyen

Baasha said:


> woohoo!


And when your game does not support SLI?


----------



## Maddog_1769

Anyone here have an MSI Ventus OC 3090 with recommended OC settings? I have Afterburner set to +100 on core and +800 on mem.

During gaming the core goes between 2025 MHz and 2040 MHz and the mem stays at 10300 MHz.

Just trying to find that happy place, whether that means lowering the settings or raising them. I ran some benchmarks but just want to see what everyone else is getting on this card.


----------



## Nizzen

Thanh Nguyen said:


> And when your game does not support SLI?


Then play anyway 
LOL
People without SLI tend to have the most problems with SLI. Pretty funny


----------



## mbm

How do you get MSI Afterburner to show the watt draw from the GFX card?


----------



## Bilco

Wihglah said:


> Ordered the Optimus waterblock (Batch 2)


I did as well, but after seeing how this XOC BIOS thing is unfolding I am unsure if I want to keep the order and the FTW3... I have a Strix 3090 coming Wednesday and I am starting to think I'd be better off with an Aqua Computer block, flashing the EVGA BIOS onto it, instead of keeping the FTW3... I take it you aren't concerned about the XOC BIOS issues?


----------



## Wihglah

Bilco said:


> I did as well but after seeing how this XOC bios thing is unfolding I am unsure if I want to keep the order and the FTW3... I have a strix 3090 coming Wednesday and I am starting to think I'd be better off with an aquacomputer block and flash the evga bios on to it instead of keeping the ftw3... I take it you aren't concerned about the XOC bios issues?


TBH, not really. I'm not much of a bencher, so in-game performance is more my metric, and mine is good enough for me on the stock BIOS. My PC lives in a small office, so 450 or 500W will still heat up the room within an hour or so. I will get a nice efficient cooling system and be happy getting 2 fps less than the Strix with the 500W flash. Currently modding my rad box to accept a 480mm radiator for push/pull AF12x25s.

BTW, I have seen 478W with the XOC BIOS, and 817W total system power, from a Corsair AX860i. So I don't have headroom for much more anyway.


----------



## Spiriva

mbm said:


> How do you get msi afterburner to show watt draw from GFX card?


Check the "power" option and it will show it.


----------



## tiefox

Finally got my 3090 Strix !


----------



## mbm

--


----------



## pat182

HyperMatrix said:


> View attachment 2463966
> 
> 
> Average dropping to 2045. This makes more sense. Like I said mine would get 2040 MHz average in port royal. You had me scared that I got a really bad card for a few hours there. I was legit contemplating selling it and picking up another one later. Haha. Still looks to be better than my card because I only got the 2040 when I let the room temperature drop to around 20C and put an extra fan in front of it. But small differences like this, I'm ok with.


Yeah, but in real life in games it's always 2085 or 2100 MHz; only in PR does the clock drop like that, because it's too hard a workload.


----------



## Falkentyne

HyperMatrix said:


> How are you guys doing for memory overclocking, btw? 21GHz seems to run all day no problem regardless of heat. 21.5GHz runs for around 15-20 minutes before crashing. 21.8GHz takes about 3-5 minutes to crash. 22GHz is an instant crash as soon as I try to apply it. The weird thing about the crash is that it's not a driver crash/recovery. It'll literally either lock up or restart my system. I've seen great FPS increase with faster memory so I'd definitely want to push it as much as possible. Is it safe to assume the crashes are all heat related at this point? Should I expect to maintain 21.8GHz with an active cooled backplate? Or is there more at play here?


That's crazy wild.
My 3090 FE runs at +120 core, +600 (10351 MHz) memory all day (tested Call of Duty MW, Heaven, Port Royal, the works), but at +700 (10451 MHz) memory, Heaven runs for about 5 minutes or so, then suddenly the system just black-screens, doesn't recover, and reboots. I did not test with a +0 core offset. You doing 21.5 GHz is completely unreal. Is the GDDR6X binning that real?


----------



## Falkentyne

Maddog_1769 said:


> Anyone here have a MSI Ventus OC 3090 with recommended OC settings? I have afterburner set to +100 on core and +800 on mem.
> 
> During gaming core goes between 2025mhz and 2040mhz and mem stays at 10300mhz
> 
> Just trying to find that happy place. If its lower the settings or raise them. I ran some benchmarks but just wanting to see what everyone else is up to on this card.


I'm confused.
What is the stock reference memory speed on your card?
Because on my FE, +600 memory is 10351 MHz and stock (+0) memory is 9751 MHz.
Are you saying that your MSI Ventus OC has a LOWER stock memory speed??
How is your +800 translating into 10300 MHz? The card doesn't downclock the memory for power limit...
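
For reference, the clock conventions being compared here can be sketched in a few lines of Python. This is just illustrative arithmetic based on the figures quoted in this thread (GPU-Z showing 9751 MHz at +0 on an FE, and the "21 GHz" style numbers being the effective per-pin data rate); the exact offset convention is an assumption and can differ between tools and cards.

```python
# Rough GDDR6X clock math, using the numbers quoted in this thread.
# Assumption: GPU-Z's reported memory clock is half the effective data
# rate on GDDR6X, and Afterburner's offset adds directly to that figure.

BUS_WIDTH_BITS = 384          # RTX 3090 memory bus width
STOCK_REPORTED_MHZ = 9751     # GPU-Z reading at +0 on the FE

def effective_gbps(offset_mhz: int) -> float:
    """Afterburner offset -> effective per-pin data rate in Gbps."""
    return (STOCK_REPORTED_MHZ + offset_mhz) * 2 / 1000

def bandwidth_gbs(offset_mhz: int) -> float:
    """Total memory bandwidth in GB/s at a given offset."""
    return effective_gbps(offset_mhz) * BUS_WIDTH_BITS / 8

print(effective_gbps(0))      # 19.502 -> the stock "19.5 Gbps"
print(effective_gbps(600))    # 20.702 -> GPU-Z shows 10351 MHz
print(bandwidth_gbs(0))       # 936.096 -> matches the spec's 936 GB/s
```

By this arithmetic, the "21 GHz" and "21.5 GHz" figures discussed above would correspond to roughly +750 and +1000 offsets respectively.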


----------



## Vgnusstffr

Can someone please help me? I have 2 RTX 3090s in SLI and I cannot get Red Dead Redemption 2 to run; it just goes to a black screen after I start Story Mode. I am using Vulkan, as I know SLI won't work with DX12 on this title. Also, I saw someone post that you can "trick" these cards into allowing SLI on titles that used to work in SLI but no longer do on the 3090s, by changing the name of the .exe file to that of a title that is supported on the 3090 setup. Is this true? If so, can someone elaborate on whether anything else needs to be done to get it to work? I would be ever so grateful.


----------



## Dreams-Visions

Guys, need a bit of advice. I'm running Port Royal and I think something might be up with my settings. Despite this 500W BIOS (X Trio on EVGA), GPU-Z is screaming "Power" as the PerfCap reason from the beginning to the end of the test. But board power draw is stable at around 500W while benchmarking. I'm also noticing the power draw on my 3rd 8-pin is suspiciously low.

Am I doing something wrong in Afterburner? Should I attempt to re-flash the BIOS?










Power consumption % was in the 117%-120% range (sorry, missed that in this screenshot).

Any advice is appreciated. I feel like this may be the reason I can't get past 13.8K in PR.


----------



## Nizzen

Dreams-Visions said:


> Guys, need a bit of advice. I'm running Port Royale and I think something might be up with my settings. Despite this 500W bios (X Trio on EVGA), GPU-Z is screaming "Power" as the PerfCap reason, from the beginning to end of the test. But board power draw is stable at around 500W in benchmarking. I'm also noticing the power draw on my 3rd 8-pin is suspiciously low.
> 
> Am I doing something wrong in Afterburner? Should I attempt to re-flash the bios?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Power consumption % was in the 117%-120% range (sorry, missed that in this screenshot).
> 
> Any advice is appreciated. I feel like this may be the reason I can't get past 13.8K in PR.


Shunt mod. Then everything is fun 😁
After a few days, you'll want the Asus OC Panel II for voltage control 😎


----------



## Wrathier

Falkentyne said:


> That's crazy wild.
> My 3090 FE card runs at +120 core, +600 (10351 mhz) memory all day (tested Call of Duty MW, Heaven, port royale, the works), but at +700 (10451 mhz) memory, Heaven runs for about 5 minutes or so, then suddenly the system just black screens, doesn't recover and reboots  I did not test with +0 core offset. You doing 21.5 ghz is completely unreal. Is the GDDR6X binning that real?


My Port Royal score took a negative hit as I increased the mem overclock above 500-ish, so I definitely think there is a sweet spot of some kind even if you don't crash.

What is your Port Royal score, @Falkentyne?


----------



## Dreams-Visions

Nizzen said:


> Shuntmod. Then everything is fun 😁
> After a few days, you want Asus oc panel II for voltagecontrol 😎


will consider. I'd just like to see scores more in line with others on air. I feel like I'm coming in low and I'm not sure if I'm doing something wrong or not. If I'm being limited by temps, that would be okay and I'd feel better since I plan to water block whatever 3090 I end up keeping. I'm just not clear if this is just a mediocre chip or if I'm screwing something up.

My best Port Royal score so far:



 https://www.3dmark.com/3dm/52377373?



This is with a rock-solid stable +160/+1150 @ 500W.


----------



## Zurv

Baasha said:


> woohoo!


i'm waiting for you sir


----------



## Wrathier

Dreams-Visions said:


> will consider. I'd just like to see scores more in line with others on air. I feel like I'm coming in low and I'm not sure if I'm doing something wrong or not. If I'm being limited by temps, that would be okay and I'd feel better since I plan to water block whatever 3090 I end up keeping. I'm just not clear if this is just a mediocre chip or if I'm screwing something up.
> 
> My best Port Royal score so far:
> 
> 
> 
> https://www.3dmark.com/3dm/52377373?
> 
> 
> 
> This is with a rock-solid stable +160/+1150 @ 500W.


This was my best PR score: https://www.3dmark.com/pr/398212 so of course I think yours is good. 

I've seen better results from people who did undervolting etc., but I never got to that. I sold the Gigabyte card at a loss and picked up an ASUS TUF OC I'll receive Tuesday. Now I'm honestly not sure that was the right choice; I'm not planning to use anything but air, and the Gigabyte card was really silent and worked perfectly fine. 

Anyway, what is done is done, so I just hope my ASUS card will be as good as or better than the Gigabyte card.


----------



## Falkentyne

Wrathier said:


> My Port Royal score got negative impact as I increased mem overclock above 500ish, so I definitely think there are a sweet spot of some kind even if you don't crash.
> 
> What are your Port Royal score @Falkentyne


I don't remember anymore, but it was 13,xxx and something. I'm trying to remember if it was Time Spy graphics score or Port Royal that kept dropping back to 12,900 or something because of the card heating up. I didn't save my scores. Sorry for that bad news.

_edit_ oh it's in my run history.
My last two Port Royal runs at +100/600 (which is the +90 core offset) were 13984 and 13623. I did not run it at +120 core clock yet.


----------



## Cholerikerklaus

My score with +180 MHz core and +1200 mem is 15000. On air.


----------



## Falkentyne

Dreams-Visions said:


> will consider. I'd just like to see scores more in line with others on air. I feel like I'm coming in low and I'm not sure if I'm doing something wrong or not. If I'm being limited by temps, that would be okay and I'd feel better since I plan to water block whatever 3090 I end up keeping. I'm just not clear if this is just a mediocre chip or if I'm screwing something up.
> 
> My best Port Royal score so far:
> 
> 
> 
> https://www.3dmark.com/3dm/52377373?
> 
> 
> 
> This is with a rock-solid stable +160/+1150 @ 500W.


I got a max of 13923 PR score at +90/+600 @ 400W on a 3090 FE
You may want to reduce your memory overclock and re-test just to make sure? You seem to be getting negative scaling. I shouldn't score above you with lower clocks.


----------



## Baasha

hehe.. I'm coming for you!  



Zurv said:


> i'm waiting for you sir
> View attachment 2464072


----------



## eggofevil

So I finally received my Strix 3090 OC, and it looks like I won the silicon lottery. GPU: +175, Mem: +1300, air cooling at 100%, EVGA 500W BIOS.

Graphics score: 22543


----------



## chispy

Guys, I need to order a water block today for my Asus TUF 3090 and I have these options: EK, Bykski, Alphacool Aurora.
Those of you who have tested these water blocks on the Asus TUF or other RTX 3090s, can you please post your findings on cooling performance? Idle temps, load temps.

Which one performs best as far as you know, and are there any negatives or issues with a given block? Thank you in advance.

Kind regards: Angelo


----------



## Dreams-Visions

Falkentyne said:


> I got a max of 13923 PR score at +90/+600 @ 400W on a 3090 FE
> You may want to reduce your memory overclock and re-test just to make sure? You seem to be getting negative scaling. I shouldn't score above you with lower clocks.


Ya I tried that. It scaled positively up to about 1200 where it crashed. 170 is where the core crashed, and lower on either would always end in a lower score.

It might be my environment. I continue to wonder if it's a matter of ambient temp impacting scoring.


----------



## Glerox

Baasha said:


> woohoo!


can't wait for your SLI "special trick"


----------



## DrunknFoo

3090 FTW3 Ultra. About 1.5 days of benching, slowly crawling up the ladder (no mods, haven't taken the stock HSF off). Cooling supplemented with a disconnected rad and pump serving as a mini AC in the case; ambient is the room, ~25C. (Had a red light, no signal on the card... turned out to be my PSU being on multi-rail mode. Dunno why it'd just stop after a full day of benching.)

Started playing around with the 500W BIOS earlier today... hope EVGA can figure out the BIOS and release an update so it actually works for the card it was designed for. LMAO?

For EVGA owners: running OCCT and FurMark GPU simultaneously, and some games, can draw an average of 490W continuously... take noobtubers' conclusions with a grain of salt. Joe has had a retail card and an XOC BIOS since before this 500W one even came out. Think about it: how does his card pull way over 500W if it was ratio-locked by the PCIe draw? =P


----------



## Bradwell

Well, since nobody wants to come out with an FTW3 waterblock, I took matters into my own hands. This is as far as I got on my chiller. I originally placed 9th a few days ago; seems I got pushed out though. Looks like I got a golden memory overclocker as well: highest pass was +1565 mem. Highest core is +215, which equates to 2220 MHz at 19C. Now to see where a K|NGP|N card will land.


----------



## DrunknFoo

Bradwell said:


> Well, since nobody wants to come out with a FTW3 waterblock I took maters into my own hands. This is how far i got it on my chiller. Originally placed 9th a few days ago, seems I got pushed out though. Looks like I got a golden memory overclocker as well, highest pass was a +1565 mem. Highest core is a +215 which equates to 2220 at 19c. Now to see where a K|NGP|N card will land


shunted? if so what wattage target did you go for?


----------



## Thanh Nguyen

Does anyone know whether the Alphacool block is good for vertical mounting or not?


----------



## DrunknFoo

Thanh Nguyen said:


> Does anyone know whether the Alphacool block is good for vertical mounting or not?


odd question... if it works it works...vertical, horizontal, floating, upside down..


----------



## motivman

Guys, I need your opinion. If you had a chance to get either the PNY 3090 with waterblock and backplate or the EVGA FTW3 Ultra, which would you choose? I can get either for about the same price. Which will have better performance? No blocks are currently available for the FTW3 (except the super expensive Optimus block), but will the PNY, with the 390W BIOS and under water, perform better than the FTW3 Ultra?


----------



## Bradwell

DrunknFoo said:


> shunted? if so what wattage target did you go for?


Nope, used the 500w bios though, but like most of us it peaks at 470 and hovers at 430.


----------



## DrunknFoo

Bradwell said:


> Nope, used the 500w bios though, but like most of us it peaks at 470 and hovers at 430.


You just gave me a bit of hope, now im looking forward to playing with this a bit


----------



## HyperMatrix

So I just flashed my Strix with the EVGA 500W bios. And GPU-Z shows max power draw of about 350-370W with PL slider set to max. But I'm also getting higher clocks. Looking at individual sensor reading max pulls, I'm getting:

PCIe Slot: 46.8W
8-Pin #1: 146.9W
8-Pin #2: 22.6W
8-pin #3: 3.8W

I'm getting higher clocks in game under load when temperatures go up. But all these readings are just off. Anyone experience a problem like this?

Like I'm getting 2130MHz in the 50-55C range....all this can't be because of an extra 20W...


















Up to 74C it still hits 2055MHz...









And 77C, 2040MHz.









This is with +500 on memory as well. This is night and day improvement over the standard Strix bios. I have no idea why....


----------



## Bradwell

DrunknFoo said:


> You just gave me a bit of hope, now im looking forward to playing with this a bit


Yeah I originally thought we might not get anything when moving to water cooling considering how well the 3090s of all brands are cooled but i was pleasantly surprised.


----------



## long2905

https://www.3dmark.com/pr/454105



I'm so jealous of you guys. My best PR score is 13436 and that was with a UV [email protected]. My memory overclocks decent-ish though, +900. Now my best bet is to slap a WB on it, I guess, but this iChill X4 card has an extra RGB port in between the fan ports.


----------



## Chamidorix

HyperMatrix said:


> So I just flashed my Strix with the EVGA 500W bios. And GPU-Z shows max power draw of about 350-370W with PL slider set to max. But I'm also getting higher clocks. Looking at individual sensor reading max pulls, I'm getting:
> 
> PCIe Slot: 46.8W
> 8-Pin #1: 146.9W
> 8-Pin #2: 22.6W
> 8-pin #3: 3.8W
> 
> I'm getting higher clocks in game under load when temperatures go up. But all these readings are just off. Anyone experience a problem like this?
> 
> Like I'm getting 2130MHz in the 50-55C range....all this can't be because of an extra 20W...
> 
> View attachment 2464116
> 
> View attachment 2464117
> 
> 
> Up to 74C it still hits 2055MHz...
> View attachment 2464118
> 
> 
> And 77C, 2040MHz.
> View attachment 2464119
> 
> 
> This is with +500 on memory as well. This is night and day improvement over the standard Strix bios. I have no idea why....


The 3rd pin is not measured when using the EVGA 500W BIOS on the Strix. So it locks you at max voltage, since the controller thinks it is way below the power limit, while on the Strix BIOS you keep bouncing against it, so the voltage and thus clocks fluctuate. If you lock the Strix to 1.1V, the difference should be much more in line with a 20W difference.


----------



## HyperMatrix

Chamidorix said:


> The 3rd pin is not measured when using EVGA 500 on strix. So, It is locking you at max voltage since the controller thinks it is way below power limit while the Strix bios keeps bouncing against it so the voltage and thus clocks will fluctuate. If you lock strix to 1.1V the difference should be much more in line with 20W difference.


Any idea why even Pin 2 and PCIe is showing lower power draw?

edit: Also, wouldn't this mean that the card will behave as though it's shunted now since it thinks it's pulling far less power than it's allowed?


----------



## Nizzen

HyperMatrix said:


> So I just flashed my Strix with the EVGA 500W bios. And GPU-Z shows max power draw of about 350-370W with PL slider set to max. But I'm also getting higher clocks. Looking at individual sensor reading max pulls, I'm getting:
> 
> PCIe Slot: 46.8W
> 8-Pin #1: 146.9W
> 8-Pin #2: 22.6W
> 8-pin #3: 3.8W
> 
> I'm getting higher clocks in game under load when temperatures go up. But all these readings are just off. Anyone experience a problem like this?
> 
> Like I'm getting 2130MHz in the 50-55C range....all this can't be because of an extra 20W...
> 
> View attachment 2464116
> 
> View attachment 2464117
> 
> 
> Up to 74C it still hits 2055MHz...
> View attachment 2464118
> 
> 
> And 77C, 2040MHz.
> View attachment 2464119
> 
> 
> This is with +500 on memory as well. This is night and day improvement over the standard Strix bios. I have no idea why....


Frame Chasers explains this in his Strix 3090 video. It doesn't read the third 8-pin. The EVGA 500W BIOS works on the Strix. Nice result.


----------



## Nizzen

DrunknFoo said:


> View attachment 2464101
> 
> 
> 
> 3090 FTW3 Ultra. About 1.5 days of benching, slowly crawling up the ladder. (no mods, haven't taken the stock hsf off) Cooling supplemented with a disconnected rad and pump serving as a mini ac in the case, ambient is the room ~ 25C. (had a red light, no signal on card... turned out to be my psu being on multi rail mode (dunno why it'd just stop after a full day of benching)
> 
> Started playing around with the 500w bios earlier today... hope evga can figure out the bios and release an update so it actually works for the card it was designed for. LMAO?
> 
> For evga owners, running occt and furmark gpu simultaneously and some games can draw an average of 490w continuously... take noobtubers conclusions with a grain of salt. Joe has had a retail card and an xoc bios before this 500w even came out. Think about it, how does his card pull way over 500w if it was ratio locked by the pcie draw? =P


If you think Frame Chasers is a noobtuber, why not prove him wrong? He may be wrong, and he may not be. If you know the solution, we all want to know. 

I shunt-modded all the shunts on my 3090 Strix OC, and it's working great for me. 

Still top 10 in Port Royal with water cooling.


----------



## HyperMatrix

Nizzen said:


> If you think Frame Chasers is a younoob, why not prove him wrong? He may be wrong, an may not. If you know the solution we all want to know
> 
> I shuntmodded all shunts on my 3090 strix oc, and it working great for me.
> 
> Still top 10 in Port Royal with water cooling


Nizzen, but as my previous post said, doesn't the EVGA 500W BIOS essentially shunt mod the ROG Strix anyway? Shunt modding works by misleading the system about the reported power draw, and the same thing is happening with the EVGA BIOS. Since only 8-Pin #1 is reporting proper power draw, you could argue you'd benefit from shunting the resistor for that one connector (assuming there is load balancing happening). That way you should be able to avoid shunting the PCIe. Or am I missing something here?
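
The analogy with a hardware shunt mod can be made concrete with a toy model. This is only a sketch with made-up resistor values (the Strix's actual shunt resistances are not given in this thread): the controller infers current from the voltage drop across a shunt of assumed resistance, so lowering the real resistance, for example by stacking an identical shunt in parallel, scales the reported power down by the same ratio.

```python
# Toy model of why a shunt mod raises the effective power limit.
# The controller converts the measured voltage drop to current using an
# assumed stock resistance; a smaller real resistance under-reports power.
# Resistor values here are illustrative, not the card's actual shunts.

def parallel(r1: float, r2: float) -> float:
    """Combined resistance of two shunts stacked in parallel."""
    return (r1 * r2) / (r1 + r2)

def reported_power(actual_w: float, r_stock: float, r_eff: float) -> float:
    """Power the controller reports when the real shunt is r_eff but it
    still assumes r_stock."""
    return actual_w * (r_eff / r_stock)

R_STOCK = 0.005                     # 5 milliohm, a common shunt value
R_MOD = parallel(R_STOCK, R_STOCK)  # identical shunt soldered on top

print(reported_power(500.0, R_STOCK, R_MOD))  # 500 W real reads as 250.0 W
```

An unmeasured 8-pin, as with the EVGA BIOS on the Strix, is the same effect taken to the extreme: that rail's contribution is reported as zero.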


----------



## DrunknFoo

Nizzen said:


> If you think Frame Chasers is a younoob, why not prove him wrong? He may be wrong, an may not. If you know the solution we all want to know
> 
> I shuntmodded all shunts on my 3090 strix oc, and it working great for me.
> 
> Still top 10 in Port Royal with water cooling


Joe (Bearded Hardware) has a retail card he purchased, with an XOC BIOS he received, and he is clearly pulling over 500 watts on a single card... that would eliminate many of the theories/conclusions.

Anyway, this is my card just as I type this message...

No shunts, FTW3 Ultra 3090 on XOC.


----------



## DrunknFoo

Once again, it is a beta BIOS; nothing is guaranteed (least of all that it works on cards it wasn't intended for) lol.

Just wait for a new build release and/or a driver update that cooperates properly.


----------



## warrior-kid

Hi all, new poster here.

Ventus 3X OC with U3218K monitor. Some basic questions:

1. Older versions of GPU-Z fail to start, and the latest causes a blue-screen reboot.
2. I tried to flash the Gigabyte OC BIOS; this resulted in 8K losing the 60 Hz option.
3. I don't understand something basic: if the card has a boost frequency of 1725 MHz, how do owners of the same card report a 2025 MHz clock after an overclock of +100 on core? Isn't 1725+100=1825? Where is the extra 200 coming from to get to 2000+? If I run benchmarks or a game, my core hovers around 1800+, not 2000+. What am I missing?
4. Port Royal without overclocking is a measly 12500+. So that's down to that 1725 MHz, right?
5. Oh, and by the way, the little panel at the bottom of the screen showing the current framerate during the PR benchmark is ultra small, compared with when the monitor is set to 4K, in which case it is bigger and easier to read. Why is the PR benchmark picture affected by the native monitor resolution?

Maybe very silly questions or is my card a dud?
Cheers


----------



## Nizzen

DrunknFoo said:


> Joe (bearded hardware) has a retail card he purchased, with an XOC bios he received, he clearly pulling over 500wats on single card.. this would eliminate many of theories/conclusions
> 
> Anyway, this is my card just as i type this message...
> 
> no shunts ftw3 ultra 3090 on xoc
> 
> View attachment 2464140


Can you post a Port Royal score? Does it hit 500 W in Port Royal? Looks like you just got a spike to 500 W? The graph looks like it's up and down.

What program did you use to hit 500 W?

JayzTwoCents got a fully unlocked BIOS; maybe Bearded Hardware got it too?


----------



## bmgjet

warrior-kid said:


> Hi all, new poster here.
> 
> Ventus 3X OC with U3218K monitor. Some basic questions:
> 
> 1. Older version of GPU-Z fails to start and the latest causes blue screen reboot.
> 2. Tried to flash the Gigabyte OC BIOS, this resulted in 8K losing a 60 Hz option.
> 3. I do not understand something basic, if the card has boost frequency of 1725 MHz, how do the owners of the same card report a 2025 MHz clock frequency after an overclock of +100 on core? Isn't 1725+100=1825? Where is this extra 200 coming from to get to 2000+? If I run benchmarks or a game, my core hovers around 1800+, not 2000+. What am I missing?
> 4. Port Royal without overclocking is a measly 12500 or so. That's down to that 1725 MHz, right?
> 5. Oh, and by the way, the little panel at the bottom of the screen during the PR benchmark showing the current framerate is ultra small when running the benchmark, compared with when the monitor is set to 4K, in which case it is bigger and easier to read. Why is the PR benchmark picture affected by the native monitor resolution?
> 
> Maybe very silly questions or is my card a dud?
> Cheers


3. GPU Boost 3.0 varies behavior depending on temperature and power limit. It will keep increasing voltage and clocks until it hits the power limit, temperature limit, or voltage limit, based on a curve you can see in the curve editor of overclocking software like Afterburner.
The advertised number is the minimum boost that model should reach. Most likely it will boost over that, but some cards also get less than it because the power limit is too low for the quality of their chip.
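GPU Boost's real algorithm is NVIDIA-proprietary, but the behavior described above can be sketched as a toy model: walk up the voltage/frequency curve until a power or temperature limit stops you. The curve points and per-step power costs below are made-up illustrative numbers, not real card data.

```python
# Toy model of GPU Boost picking an operating point on the V/F curve.
# All numbers are illustrative; the real algorithm is NVIDIA-proprietary.

# (voltage in V, clock in MHz), sorted ascending, like Afterburner's curve editor
CURVE = [(0.80, 1695), (0.85, 1800), (0.90, 1905), (0.95, 2010), (1.00, 2100)]

def boost_clock(power_headroom_w, temp_c, temp_limit_c=83, watts_per_step=25):
    """Return the highest curve clock reachable before hitting a limit."""
    if temp_c >= temp_limit_c:          # temperature limit: stay at the base point
        return CURVE[0][1]
    clock = CURVE[0][1]
    for step, (_volts, mhz) in enumerate(CURVE):
        if step * watts_per_step <= power_headroom_w:   # power-limit check
            clock = mhz                                  # keep climbing the curve
    return clock

print(boost_clock(power_headroom_w=100, temp_c=60))  # plenty of headroom -> 2100
print(boost_clock(power_headroom_w=40, temp_c=60))   # power-limited -> 1800
print(boost_clock(power_headroom_w=100, temp_c=90))  # temp-limited -> 1695
```

This is why two cards with the same advertised boost clock can sit at different in-game clocks: the limit they hit first (power, temperature, or voltage) differs per chip and cooler.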


----------



## HyperMatrix

Seems I've found the sweet spot for air cooling: 0.950 V, 2010 MHz, +500 on mem, and 60°C after running for an hour. Stable clocks with an occasional dip to 1995 MHz. Water cooling will realistically only get me to 2130 MHz at best. But an extra 6% is an extra 6%, and hopefully with an extra 6% on memory as well. Not as good a chip as many others in this thread, but with stock issues as they are... I don't really wanna sell the card and wait until god knows when I'll have access to another one.










edit: For the same voltage/clock speeds/GPU usage, I'm seeing higher power draw and temperatures in a game like Pumpkin Jack than in Watch Dogs: Legion.


----------



## escapee

Interesting observation: it seems Gigabyte has also switched over their cap configuration for the Gaming OC.

1st Batch (launch):










2nd Batch (6 weeks after launch)










Honestly doesn't seem necessary since the original config ran fine after the driver update.


----------



## Sheyster

HyperMatrix said:


> Nizzen, but as my previous post said, doesn’t the EVGA 500W bios essentially shunt mod the ROG Strix anyway? Shunt modding works by misleading the system as to the reported power draw. The same thing is happening with the EVGA bios. Since only 8-Pin #1 is reporting proper power draw, you could argue that you’d benefit from shunting the resistor for that one connector (assuming there is load balancing happening). That way you should be able to avoid shunting the pcie. Or am I missing something here?


It sure sounds like it. Break out that DMM and get some readings. 

It sure is sounding like the Strix OC and the Trio X with the EVGA 500w BIOS are the hot ticket, at least without physically shunt-modding. Whichever one I can find first I will buy!

EDIT - @HyperMatrix - Did you lose the second HDMI port when you flashed the EVGA 500w BIOS? I noticed the FTW3 card only has 1 x HDMI port while the Strix has 2.


----------



## HyperMatrix

Sheyster said:


> It sure sounds like it. Break out that DMM and get some readings.
> 
> It sure is sounding like the Strix OC and the Trio X with the EVGA 500w BIOS are the hot ticket, at least without physically shunt-modding. Whichever one I can find first I will buy!


Another interesting fact: there may indeed be load balancing. I noticed in some of my testing that 8-pin #1 and #2 would get up to 150 W. #3 would be 0, as it's not reported. But the interesting part is that the PCIe slot was reporting only a 45-50 W draw. That seems to indicate there is some balancing: for each 150 W (1/3 of total cable power) it pulls from each connector, it takes 25 W, or 1/3 of the max draw, from the PCIe slot. So there is likely still going to be a benefit from shunt modding the Strix. Although, if my theory is correct, you should only need to shunt 8-pin #1 and #2, since #3 doesn't report anything and the PCIe slot has another 50% to go up. Meaning you should be able to pull up to 750 W without needing to shunt the PCIe slot. Theories, theories...


----------



## asdkj1740

escapee said:


> Interesting observation, seems like Gigabyte have also switched over their cap configuration for the Gaming OC
> 
> 1st Batch (launch):
> 
> View attachment 2464150
> 
> 
> 2nd Batch (6 weeks after launch)
> 
> View attachment 2464151
> 
> 
> Honestly doesn't seem necessary since the original config ran fine after the driver update.


source?


----------



## warrior-kid

bmgjet said:


> 3. GPU Boost 3.0 varies behavior depending on temperature and power limit. It will keep increasing voltage and clocks until it hits the power limit, temperature limit, or voltage limit, based on a curve you can see in the curve editor of overclocking software like Afterburner.
> The advertised number is the minimum boost that model should reach. Most likely it will boost over that, but some cards also get less than it because the power limit is too low for the quality of their chip.


Thanks


----------



## escapee

asdkj1740 said:


> source?


Source: I have the cards in front of me. Those are pictures I took.


----------



## asdkj1740

escapee said:


> Source: I have the cards in front of me. Those are pictures I took.


Would you mind taking more pictures showing the whole back PCB? I can't tell these are Gigabyte cards just by looking at the socket on the back. Thanks!


----------



## bmagnien

Since I'm new to this, I'm a little confused, so I was hoping you could clear something up. I'm working on making the FTW3 work with the 500 W XOC BIOS. I went with the stacked 8 mOhm on just the PCIe shunt because that's what FrameChasers did. Before the mod I would hit 85 W max through the PCIe slot. To calculate what that would be with the shunt, I took 85 x 1.625 = 138, which is way more than I'd want to pull through the slot, but I did it anyway because I had no knowledge of other available mOhm values. However, with the mod installed, GPU-Z was only showing about 62 W max through the PCIe slot. Does that mean I was actually only pulling 62 x 1.625 = 101? Because that would be perfect. And now I'm wondering: if I installed a 25 mOhm (which I picked up yesterday, as I thought it might be safe to use long term), will it be enough? It's almost as if I can't calculate the correct final resistance to use if it's going to change so much once I install it. Does that make sense? Is it just trial and error?


----------



## Falkentyne

bmagnien said:


> Since I'm new to this I'm a little confused so I was hoping to could clear something up. Working on making the FTW3 work with the 500w XOC. I went with the stacked 8mohm on just the pcie shunt because that's what FrameChasers did. Before the mod i would hit 85w max through the PCIe slot. To calculate what that would be with the shunt I took 85x1.625 = 138 which is way more than I'd want to pull through the slot, but I did it anyway because I had no other knowledge of other available mohm amounts. However, with the mod installed, GPUz was only showing like 62w max through the PCIe slot. Does that mean I was actually only pulling 62x1.625 = 101? Because that would be perfect. And then I'm wondering now if I installed a 25mohm (which i picked up yesterday as I thought it might be safe to use long term) it won't be enough, but it's almost as if I can't actually calculate the correct final resistance to use if it's going to change so much once I install it. Does that make sense? Is it just trial and error?


No, not at all. I think you're getting mixed up with the "maximum" real power the shunt would allow, versus real power. It simply means that, WITH the shunt, the PCIe slot would now be able to draw 138 watts of real power before the PCIe slot power limit gets "flagged." It doesn't mean you're pulling that much power. It's a limit.

I believe that multiplier is based on cards that have full load balancing, for example: 75 W slot, 150 W PCIE1, 150 W PCIE2 = 375 W, and then shunting. For example, a 5 mOhm shunt on top of a 5 mOhm shunt would make the rail report half its usage, so PCIE would report 37.5 W, PCIE1 would report 75 W, and PCIE2 would also report 75 W. So this would double the amount of real watts you can pull (750 W, by pulling 150 W from the slot, 300 W from PCIE1 and 300 W from PCIE2), while it would only report 375 W. That's basically how I understand it. I could be very wrong.

If the card is already pulling 75 W without a shunt, adding a shunt doesn't mean it's now suddenly pulling 75 W x 1.625, etc.
So most likely your math of 62 x 1.625 = 101 W is correct. So: 101 "real" watts with 62 W reported, out of 138 W with 85 W reported. Does that make sense?
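The arithmetic above can be checked in a few lines. The assumption here (common on these cards, but worth verifying on your own PCB) is a 5 mOhm stock shunt; soldering a resistor on top puts it in parallel, and real power exceeds reported power by the ratio of the stock resistance to the combined resistance:

```python
def stacked_mohm(stock, added):
    """Combined resistance of two shunts in parallel, in milliohms."""
    return (stock * added) / (stock + added)

def real_power_multiplier(stock, added):
    """Real power divided by reported power after stacking `added` on `stock`."""
    return stock / stacked_mohm(stock, added)

# The 8 mOhm-on-5 mOhm case discussed in the posts above:
m = real_power_multiplier(5.0, 8.0)
print(f"{m:.3f}")                                # 1.625
print(f"{62 * m:.0f} W")                         # reported 62 W -> about 101 W real
print(f"{real_power_multiplier(5.0, 5.0):.1f}")  # 5-on-5 halves the reading: 2.0
```

The multiplier only rescales what the controller *reads*; actual draw is whatever the load pulls, which is why a reported 62 W corresponds to roughly 101 real watts rather than the full 138 W ceiling.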


----------



## bmagnien

Falkentyne said:


> No, not at all. I think you're getting mixed up with "maximum" real power the shunt would allow, versus real power. It simply means that, WITH the shunt, the PCIE slot would now be able to draw 138 watts of real power before the PCIE slot power limit gets "flagged." It doesn't mean you're pulling that much power. It's a limit.
> 
> I believe, that multiplier is based on cards that have full load balancing, example, 75W slot, 150W PCIE1, 150W PCIE2 = 375W, and then shunting. Example a 5 mOhm shunt on top of a 5 mOhm shunt would make the rail report half usage, so PCIE would report 37.5W, PCIE1 would report 75W and PCIE2 would also report 75W. So this would double the amount of real watts you can pull (750W, by pulling 150W from slot, 300W from PCIE1 and 300W from PCIE2), while it would only report 375W. That's basically how I understand it. I could be very wrong.
> 
> If the card is already pulling 75 W without a shunt, adding a shunt doesn't mean it's now suddenly pulling 75 W x 1.625, etc.
> So most likely your math of 62 x 1.625 = 101 W is correct. So: 101 "real" watts with 62 W reported, out of 138 W with 85 W reported. Does that make sense?


Thanks. I'm thinking of just replacing the 8 mOhm with a 25 mOhm. That should unlock the slot just enough to allow the ratio to send enough power to the PCIe cables to hit the 500 W total, without risking sending too much power through the slot during brief surges. Sort of splitting the difference, while giving me the peace of mind of not being constantly concerned about the slot draw.


----------



## bmgjet

The FTW card has a 10 amp fuse on the PCIe slot.
So you'll never spike over 120 W, or the fuse will blow and you'll get a single red LED with no display.


----------



## Thebc2

bmgjet said:


> The FTW card has a 10 amp fuse on the PCIe slot.
> So you'll never spike over 120 W, or the fuse will blow and you'll get a single red LED with no display.


That sounds suspiciously like the bricked EVGA 3080/3090s people are seeing, single red light of death.


----------



## bmagnien

bmgjet said:


> The FTW card has a 10 amp fuse on the PCIe slot.
> So you'll never spike over 120 W, or the fuse will blow and you'll get a single red LED with no display.


Lol, well, yeah, that's what I'm trying to avoid... that's not really an outcome I want to encounter, hence my question about the right way to calculate the resistances so that I shift the ratio to raise the power limit on the PCIe cables without sending a dangerous amount through the slot.


----------



## bmgjet

Thebc2 said:


> That sounds suspiciously like the bricked EVGA 3080/3090s people are seeing, single red light of death.


Thought everyone on here already knew that was the cause. It's been mentioned a few times.
GN mentioned he bridged over his FTW fuses in his shunt-modding video, because he got a red LED fault he was troubleshooting with EVGA.

The plugs have 20 amp ones on them. I've already managed to pop the plug 2 fuse just pulling 500 W on my 2-plug XC3 Ultra card. They only had 17-18 amps going through them, but obviously it must have gotten a quick spike up to 20 amps.




bmagnien said:


> Lol well yeah that’s what I’m trying to avoid...that’s not really an option I want to encounter hence my question about working out the right way to calculate the resistances such that I shift the ratio to improve PL on the PCIe cables without sending a dangerous amount to the slot.


You can't change the balance; there's an IC doing load balancing to the power stage controllers with a PWM signal. Well, I guess technically you could cut some traces and replace this IC with your own controller. Or maybe someone will figure out a way to change it with some I2C commands.

I used 15 mOhm on every one of the shunts on my card. I made a tool for working out what the new power limit and multiplier would be.








GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.


Work out what shunt values to use easily. Contribute to bmgjet/ShutMod-Calculator development by creating an account on GitHub.




github.com





Probably not much help for you, though, since you're only changing a single shunt.
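For reference, a calculator along those lines can be sketched in a few lines of Python. This is a hypothetical re-implementation of the idea, not bmgjet's actual tool; it assumes 5 mOhm stock shunts and the standard per-rail limits for a 3-plug card, both of which should be verified against your own card.

```python
STOCK_MOHM = 5.0  # assumed stock shunt value; verify on your PCB

def new_rail_limit(rail_limit_w, added_mohm, stock_mohm=STOCK_MOHM):
    """Real-power limit of one rail after stacking `added_mohm` on its shunt."""
    combined = (stock_mohm * added_mohm) / (stock_mohm + added_mohm)  # parallel
    return rail_limit_w * (stock_mohm / combined)

# Standard limits for a 3-plug card: 75 W slot + 3 x 150 W 8-pins
rails = {"PCIe slot": 75.0, "8-pin #1": 150.0, "8-pin #2": 150.0, "8-pin #3": 150.0}

# 15 mOhm stacked on every shunt:
for name, limit in rails.items():
    print(f"{name}: {limit:.0f} W -> {new_rail_limit(limit, 15.0):.0f} W")
print(f"total: {sum(new_rail_limit(w, 15.0) for w in rails.values()):.0f} W")
```

With those assumptions, 15 mOhm on everything works out to roughly a 1.33x multiplier, i.e. about 700 W real on a nominal 525 W of rails.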


----------



## bmagnien

bmgjet said:


> Thought every one on here already knew that was the cause. Its been mentioned a few times.
> GN mentioned he bridged over his FTW ones because he got a red led fault he was trouble shooting with EVGA in his shunt modding video.
> 
> The plugs have 20 amp ones on them. Iv already managed to pop the plug 2 fuse just pulling 500W on my 2 plug XC3 ultra card. They only had 17-18 amps going though them but obvously it must of gotten a quick spike up to 20 amps.
> 
> 
> 
> 
> You cant change the balance, There a IC doing load balancing to the power stage controllers with a pwm signal. Well I guess techonally you could cut some traces and replace this IC with your own controller. Or maybe some one will figure out a way to change it with some i2c commands.
> 
> I used 15 mOhm on every one of the shunts on my card. I made a tool for working out what the new power limit and multiplyer would be.
> 
> 
> 
> 
> 
> 
> 
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
> 
> 
> Work out what shunt values to use easily. Contribute to bmgjet/ShutMod-Calculator development by creating an account on GitHub.
> 
> 
> 
> 
> github.com
> 
> 
> 
> 
> 
> Probably not much help for you tho since your only changing a single shunt.


Thanks. I didn't mean change the balance; I meant increase the slot limit to allow the baked-in balance ratio to then allow more power to the PCIe plugs. And yeah, I'm just dipping my toe into this; if I can apply a simple single PCIe-slot shunt on the backside of the card to make use of the 500 W XOC BIOS, I'll be a happy camper and call it a day.


----------



## dante`afk

what the... EVGA - Articles - Kingpin 3090












By the time I get my hands on an MSI Trio/FE/Strix OC, this thing might be available, lmao.


----------



## Thanh Nguyen

I don't know why I have an 18°C delta now. Anyone here who has the Alphacool block and did the shunt mod: did your card become hotter? I lost 4 washers when I installed the block, but I don't think those washers are important.


----------



## HyperMatrix

dante`afk said:


> what the... EVGA - Articles - Kingpin 3090
> 
> View attachment 2464183
> 
> 
> 
> 
> by the time i get my fingers on a msi trio/FE/strix OC this thing might be available lmao.


That score is on LN2. Basically: take an existing Asus ROG Strix, shunt mod it, add a binned GPU die, and liquid nitrogen.

It would be interesting to see what clocks fully binned dies get under water though. Wonder if we’re talking about 2200MHz or 2300MHz.


----------



## Stampede

Thanh Nguyen said:


> I don't know why I have an 18°C delta now. Anyone here who has the Alphacool block and did the shunt mod: did your card become hotter? I lost 4 washers when I installed the block, but I don't think those washers are important.


Hi there, I have an EKWB block; with the card pulling below 450 watts I have a delta temp of 12-14 degrees.

The washers are important: without them you may be hitting a dead stop on the screw. With the washer, it will pull the board and the block closer together, not to mention keeping the screw from damaging the PCB!


----------



## anethema

Man, I wish the FE had a sweet XOC BIOS. I had to shunt mine to pull 500 W.

The odd thing is the calculated increase is from 400 W to 666 W or so, but even after shunting all 6 shunts, including the PCIe slot ones, I can only pull 500 W.

After getting some cold outside air blowing on there I was able to break 15k in PR ( https://www.3dmark.com/pr/456920 ), but the perfcap reason is still Power, and the reading *after* adjustment is 500 W. I'd like to remove power as a consideration altogether.


----------



## ExDarkxH

HyperMatrix said:


> That score is on LN2. Basically: take an existing Asus ROG Strix, shunt mod it, add a binned GPU die, and liquid nitrogen.
> 
> It would be interesting to see what clocks fully binned dies get under water though. Wonder if we’re talking about 2200MHz or 2300MHz.


Are you talking peak clocks or sustained clocks? Because 2300 MHz is absolutely out of the question for sustained, and I bet most won't even hold 2200.
Yes, they are binned, but the standard isn't crazy:
you're guaranteed a _good_ card, but you're not guaranteed a _great_ card. That's the thing about the Kingpin. You're paying mostly for that BIOS, and perhaps that's why EVGA is being so stingy with the FTW3.


----------



## ExDarkxH

But if I paid all that money for a KP and it didn't reach 2300? I'd be pissed.


----------



## DrunknFoo

Nizzen said:


> Can you post a Port Royal score? Does it hit 500 W in Port Royal? Looks like you just got a spike to 500 W? The graph looks like it's up and down.
> 
> What program did you use to hit 500 W?
> 
> JayzTwoCents got a fully unlocked BIOS; maybe Bearded Hardware got it too?


Yes, Joe has had it for a while...

As for Port Royal, no bueno; my highest score was set on the stock BIOS, somewhere in the top 45 for a single card, just shy of 15000 (a goal I have yet to hit because I'm spending more time testing this horrible beta BIOS).

It is likely that the drivers are not cooperating with the BIOS to instruct PX1/Afterburner to function correctly in relation to Time Spy or any other app...

Inconclusive, but the power draw appears to vary between drivers and the settings used (tried a few random ones), and toggling MFAA and the power management mode shows some response with some of them and none at all with others...

Current and average extended Fuzzy Donut runs are attached.

These results have a less than 10% chance of replicating consistently. You'd think once you have it sort of functioning it would stay that way, but make one change in the display driver and it will start acting up... (LOL, does this make sense? Sorry, a bit tipsy here.)

e.g. toggled to Adaptive, the draw is limited to 460 W with a 440 W average; in Performance mode it's limited to 420 W with a 380 W average; toggle back to Adaptive and it's sometimes 450 W, and at other random times 490-500 W.


I'll probably shunt this card next week, giving it a few more days to make sure it doesn't plan to die on me like others have. (I already had a red light scare, but that was my PSU in multi-rail mode.)
You shunted yours, right? If so, mind telling me which ones you did? I'll probably stack 7 mOhms; I think that's supposed to be around 800 W at 100%, gotta dig up the calc to confirm.


----------



## Johneey

Thanh Nguyen said:


> I don't know why I have an 18°C delta now. Anyone here who has the Alphacool block and did the shunt mod: did your card become hotter? I lost 4 washers when I installed the block, but I don't think those washers are important.


I shunted the two closest to the 8-pins. And now I get a delta of around 10°C; before, it was 7°C 😂 don't know why hahah


----------



## kheopstr

Has anyone tried this one?











Hydro X Series XG7 RGB 30-SERIES STRIX GPU Water Block (3090, 3080 Ti, 3080, 3070 Ti, 3070)


The CORSAIR Hydro X Series XG7 RGB 30-SERIES STRIX GPU Water Block is a total conversion solution for your ASUS ROG STRIX GeForce RTX™ 30-Series graphics card, unlocking the true potential of your ray tracing-capable GPU.




www.corsair.com


----------



## Spiriva

Apparently Asus will sell the 3080/3090 with an EK waterblock preinstalled.









EK Partners up With ASUS To Deliver Water-Cooled GeForce RTX 30 Series GPUs - ekwb.com


EK®, the leading computer cooling solutions provider, is proud to announce its latest collaboration with ASUS®, the leading graphics card and motherboard manufacturer. The results is three GeForce® RTX™ 30 Series graphics cards with a pre-installed EK, full-cover water block. A match made in...




www.ekwb.com


----------



## nievz

I have the 3090 Gaming X Trio on the Strix 480 W BIOS and air cooling. Does anyone know at what wattage I could potentially blow the fuse on the PCB? I'm not good with electronic components in general, so I have no idea of its limits.


----------



## bmgjet

Read what amperage the fuses have written on them. The 3080 Gigabyte 2-plug has 10 amps on the slot and 20 amps on each connector. Haven't seen a Gaming X Trio PCB yet to look.
12 V input voltage x 10 amps = 120 W slot
12 V x 20 amps = 240 W each plug
You won't blow them at 480 W with a 3-plug card.
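The fuse math above is just power = current x voltage on the 12 V rail, e.g.:

```python
def fuse_limit_w(fuse_amps, rail_volts=12.0):
    """Power a rail can carry before its fuse rating is exceeded."""
    return fuse_amps * rail_volts

print(fuse_limit_w(10))  # 10 A slot fuse -> 120.0 W
print(fuse_limit_w(20))  # 20 A plug fuse -> 240.0 W
# A 3-plug card at 480 W total stays well inside these per-rail ratings.
```

Note this assumes the fuse ratings really are 10 A and 20 A; read the markings on your own PCB, since they vary by board.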


----------



## nievz

bmgjet said:


> Read what amperage the fuses have written on them. The 3080 Gigabyte 2-plug has 10 amps on the slot and 20 amps on each connector. Haven't seen a Gaming X Trio PCB yet to look.
> 12 V input voltage x 10 amps = 120 W slot
> 12 V x 20 amps = 240 W each plug
> You won't blow them at 480 W with a 3-plug card.


Thank you! Looks like it's 20 A (240 W) on all 3 plugs and the PCIe slot?

Hires PCB Gaminx X Trio


----------



## GTANY

In the comments of a Frame Chasers YouTube video, I read that the owner of a Bykski 3080/3090 Strix waterblock says the waterblock interferes with a fan header, so it bends the PCB at the top when mounted.

Can you confirm ?


----------



## Spiriva

I currently use 2x360 radiators with push/pull Noctua NF-A12x25 fans. I would not go any smaller than 2x360 of radiator for the 3090.


----------



## devilhead

Looks like in India NVIDIA is starting to cut prices.


----------



## pat182

devilhead said:


> Looks like in India nvidia starting to cut prices  (
> 
> 
> 
> )
> View attachment 2464225


Wasn't it over MSRP to start with?


----------



## GTANY

devilhead said:


> Looks like in India nvidia starting to cut prices  (
> 
> 
> 
> )
> View attachment 2464225


Your new prices are the same as in Europe. Your old prices were over MSRP. Consequently, it seems this is an Indian price correction only; we cannot conclude there will be a worldwide price decrease.

In my view, if there are future price drops, they will come after the AMD launch, if the AMD cards keep their promises.


----------



## HyperMatrix

devilhead said:


> Looks like in India nvidia starting to cut prices  (
> 
> 
> 
> )
> View attachment 2464225


$1800 USD (the new price) isn't what I'd call lowered prices. It's a rip-off, just less of a rip-off than before. I'm expecting a Moore's Law Is Dead clickbait video incoming. Haha.


----------



## devilhead

Hope my 3090 Strix will show better performance than the 6900 XT for that price, but I'll try to buy and test the new 6900 XT anyway.


----------



## Sheyster

anethema said:


> Man I wish the FE had a sweet XOC bios. I had to shunt mine to pull 500W
> 
> The odd thing is the calculated increase is from 400W to 666W or so, but even after shunting all 6 including PCIe slot I only am able to pull 500W.
> 
> After getting some cold outside air blowing on there I was able to break 15k PR( https://www.3dmark.com/pr/456920 ), but the perfcap reason is still Power and reading *after *adjustment is 500W. I'd like to remove power as a consideration all together.


It's tempting to shunt mine. I'd be happy with even a 450 W BIOS for gaming use. I think I will continue to hold out for a Strix or Trio X and ultimately sell the FE; demand for it is still very high. I plan to run the EVGA 500 W BIOS with one of those AIB cards I mentioned.


----------



## rawsome

devilhead said:


> Hope my 3090 Strix will show better performance than the 6900 XT for that price, but I'll try to buy and test the new 6900 XT anyway.


My impression is that the 6900 XT is slightly below the 3090 at stock speeds at 4K non-RTX, and only if AMD's claims are true.
You can call it a tie if you have a Ryzen 5000 CPU with Smart Access Memory enabled. I also noticed that some 3090 numbers are a bit off when compared with some benchmark sites.









As they did not benchmark ray tracing, I guess the bars would not look so promising. So for ray tracing, the 3090 may be the king.

I have no personal experience overclocking AMD GPUs, but I read that, as there are no 3rd-party models, there will not be that much room for overclocking. So [email protected] vs [email protected] with 500 W or something; the winner is clear if only raw performance counts. And as we are here in the overclocking 3090 owners thread, money, and therefore power consumption, is not much of a concern

-------------------------------------------------------------

BTW, I was not satisfied with the MSI Ventus OC as it was a 2-plug card; I got 14k in PR with the Gigabyte BIOS, but I guess it needs more power to get more. So I have now ordered a Gaming X Trio and hope to get some luck in the lottery. I know I could have shunted it, but I think a 3-plug card with just a BIOS flash is the easier and safer option.

Is there any substantial difference between the Trio and the Strix when it comes to OC headroom? I know the Strix is MLCC-only, but does that make a difference? I think if I wait a few days I may also be able to get one of those, as I see 3090 stock coming in faster now.


----------



## ExDarkxH

EVGA's Jacob says an 850 W PSU will be fine for the Kingpin.
This is what I expected, and it means the KP is totally a waste of money. They locked down the FTW3 to protect the KP BIOS, as that's what you will be paying for: 525 watts max.


----------



## pat182

Hey guys, I'm using ASUS GPU Tweak II for my Strix, but it seems the software makes the GPU idle at 1860 MHz. Any tip to make it idle properly like Afterburner does? (I'm not using Afterburner because the voltage slider is in beta and my fans don't work properly with it.)


----------



## pat182

rawsome said:


> my impression is that the 6900XT is sligtly below the 3090 at stock speeds at 4k non rtx. and only if amds claims are true.
> you can call it a tie if you have a ryzen 5000 GPU with smart memory access enabled. i also noticed that some 3090 numbers are a bit off when comparing with some benchmark sites.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> As they did not benchmark raytracing, i guess the bars would not look so promising. so for raytraxing, the 3090 may be the king.
> 
> I have no personal experience in overclocking amd gpus, but i read as there are no 3rd party models, there will not be that much room for overclocking. so [email protected] vs [email protected] with 500W or something, the winner is clear if only raw performance counts. as we are here at overclock on the 3090 owners thread, money and therefore power consumption is not much of a concern
> 
> -------------------------------------------------------------
> 
> btw, i was not satisfied with the msi ventus oc as it was a 2plug card; i got PR 14k with the gigabyte bios but i guess it needs more power for getting more. so i now ordered a gaming x trio and hope to get some luck in the lottery  i know i could have shunted it, but i think a 3 plug card with just a bios flash is the easier and safer option.
> 
> is there any substantial difference between the trio and the strix when it comes to OC headroom? i know the strix is MLCC only, but does that make a difference? i think if i wait out some days i may also be able to get one of these, as i see 3090 stock coming in faster now.


Had the Ventus too; man, this card sucks. In Control it tops out at 350 W at 1725 MHz... 2-plug 3090s are a scam. Got the Strix 3 weeks after.


----------



## chispy

GALAX Gives First Look at GeForce RTX 30 HOF Series Graphics Cards - Triple-Fan Cooling, White Colored PCB & OLED Display


GALAX has provided a first look of its next-generation GeForce RTX 30 series graphics cards in its virtual online expo 2020.




wccftech.com


----------



## chispy

The battle of the high-end RTX 3090s has begun: Kingpin 3090 versus Galax HOF 3090 versus Asus Strix 3090. I hope MSI comes up with a Lightning, but who knows?


----------



## Twintale

Hello everyone from Russia! I finally got this today: a Palit GameRock OC RTX 3090 (sorry for the cable/tubing management):


----------



## Fire2

Hi guys, I finally managed to sell my 3090 Ventus and ordered a Trio X. I take it the Strix 480 W BIOS flashes straight onto this MSI Trio X, and I just need to check fan speeds?

Has anyone had any luck with the 500 W beta BIOSes?


----------



## MonnieRock

ExDarkxH said:


> EVGA's Jacob says an 850 W PSU will be fine for the Kingpin.
> This is what I expected, and it means the KP is totally a waste of money. They locked down the FTW3 to protect the KP BIOS, as that's what you will be paying for: 525 watts max.


Do the FTW3 and Kingpin share the same PCB / water blocks for the 3000 series?


----------



## Nizzen

chispy said:


> GALAX Gives First Look at GeForce RTX 30 HOF Series Graphics Cards - Triple-Fan Cooling, White Colored PCB & OLED Display
> 
> 
> GALAX has provided a first look of its next-generation GeForce RTX 30 series graphics cards in its virtual online expo 2020.
> 
> 
> 
> 
> wccftech.com


It had better have an XOC BIOS out of the box if it's more expensive than the Asus Strix. Love the white cards!


----------



## wilchy

Hey everyone, just got my MSI Ventus 3X OC and did a bit of overclocking today using the Gigabyte OC BIOS. It's looking pretty good, I reckon? +150 core, +1200 memory: https://www.3dmark.com/pr/461644
Are there any better BIOSes for this board out there?


----------



## Johneey

wilchy said:


> Hey everyone, just got my MSI Ventus 3X OC and did a bit of overclocking today using the Gigabyte OC bios. Its looking pretty good I reckon? +150 Core +1200 memory https://www.3dmark.com/pr/461644
> Are there any better bios's for this board out there?
> View attachment 2464232


Sure, go with the Gigabyte 390 W BIOS.


----------



## wilchy

Thanks. I'm already running the Gigabyte BIOS.


----------



## rawsome

wilchy said:


> Hey everyone, just got my MSI Ventus 3X OC and did a bit of overclocking today using the Gigabyte OC bios. Its looking pretty good I reckon? +150 Core +1200 memory https://www.3dmark.com/pr/461644
> Are there any better bios's for this board out there?
> View attachment 2464232


I also have a Ventus OC with the Gigabyte BIOS. There is also the Aorus Master BIOS, also at 390 W, but the middle fan can only go up to 2000 RPM on that one.

That was with +177 / +800 and fans at 100%.
Try with a lower memory clock; at some point higher memory clocks lower performance.

14k is all I was able to get; I guess anything more requires more power limit, and for 2-plug cards we won't see something like a 500 W BIOS.

I think the Ventus is one of the more available cards. I have seen it, together with a Trinity, in stock for days, with the Ventus having 100+ sales at a single retailer. All sold now, but it took days.


----------



## Fire2

wilchy said:


> Hey everyone, just got my MSI Ventus 3X OC and did a bit of overclocking today using the Gigabyte OC bios. Its looking pretty good I reckon? +150 Core +1200 memory https://www.3dmark.com/pr/461644
> Are there any better bios's for this board out there?
> View attachment 2464232


Nice. The max mem offset I got was around 900.
Your core should be able to go a little higher; crank that fan speed. Hello 14k/14.2k.

Definitely use the Gigabyte.RTX3090.24576.200904.rom over the Aorus Master BIOS (I also had the low 2000rpm centre fan).

Don't use what I used, Gigabyte.RTX3090.24576.200928: it killed my boost clocks as temps went high.

The OEM MSI Ventus BIOS has 3400/3450/3400 fan speeds.


----------



## wilchy

Thanks guys. I will have another tinker


----------



## ExDarkxH

For those using PX1: can you input a high offset and then lock the boost over 2,300MHz without crashing?

Is this the best way to figure out max boost at ambient temps with a theoretically unlimited power budget, i.e. to figure out how good your chip truly is?

I want to know what a normal value for this is. Can most cards achieve 2300, or is that level reserved for golden chips?


----------



## Dreams-Visions

Yea, I just can't get to 14000 in PR on this XTrio on air. Not sure if the ambient temps are the issue (~25C) or what. I get almost the same score with a 0.950V undervolt @ 2064MHz locked: about 80 points lower and 50W less power draw (best run was 13,821, about 500 points higher than my FTW3 Ultra 3090 tested under the same ambient conditions).



https://www.3dmark.com/3dm/52470095?



Is there any practical way to assess whether I have a good chip or not given I can't blow cold air directly on it and have a relatively high ambient temp (in Florida, AC not working at the moment, so I have no way to keep it under 70C)? Is the core clock and memory clock a good indicator of potential? The ability to hold a fairly high frequency while decently undervolted?

Trying to figure out if the card is safe to keep or if I should be looking to sell and keep fishing. I'd like to put whichever 3090 I keep on a waterblock, but don't want to buy a waterblock if the card isn't good enough to justify it. Thanks, folks.


----------



## Johneey

Dreams-Visions said:


> Yea, I just can't get to 14000 in PR on this XTrio on air. Not sure if the ambient temps are the issue (~25C) or what, but yea. I get almost the same score with an 0.950mV undervolt @ 2064MHz locked. About 80 points lower and 50W less power draw (best run was 13,821, about 500 points higher than my FTW3Ultra 3090 tested under the same ambient conditions).
> 
> 
> 
> https://www.3dmark.com/3dm/52470095?
> 
> 
> 
> Is there any practical way to assess whether I have a good chip or not given I can't blow cold air directly on it and have a relatively high ambient temp (in Florida, AC not working at the moment, so I have no way to keep it under 70C)? Is the core clock and memory clock a good indicator of potential? The ability to hold a fairly high frequency while decently undervolted?
> 
> Trying to figure out if the card is safe to keep or if I should be looking to sell and keep fishing. I'd like to put whichever 3090 I keep on a waterblock, but don't want to buy a waterblock if the card isn't good enough to justify it. Thanks, folks.


Go for the Trio, flash the 500W BIOS, and buy a block. Then you can get 14.8k+.


----------



## Dreams-Visions

Johneey said:


> Go for trio flash 500 watt bios and buy a block. So u can get 14,8k +


Ah so it IS the ambient temps, do you think? My plan was to get a block for whatever card I kept, but wasn't sure if my issue in scoring was in fact the temps or the card itself. If it is the temps, I'll order my block today. lol

I am using the 500W bios on this Trio. I am getting no meaningful improvements going from the undervolt (~420W) to the full 500W in benches. I've been trying to figure out why. But yea if temps (block) are the solution, I'll go all-in with the XTrio. My open loop is sitting in a series of boxes waiting for the right GPU.


----------



## Sylencer90

Hello there guys, has anyone found the max stable clock for the 3090 ASUS Strix OC? I'm currently running mine in silent mode because I feel like I don't need much more performance for now. I'm pretty confident that most games will run without any issues @1440p 60 fps maxed settings. Until I hit a wall with it, I won't bother with overclocking too much. But a stable OC to start with would save me some time finding one myself 

Thanks


----------



## Sheyster

Sylencer90 said:


> Hello there guys, has anyone found the max stable clock for the 3090 asus strix OC?


Every card is different. I suggest you start with +90 core and push it up from there in 15 MHz increments until it's unstable. If +90 crashes, lower it to +75 and so on.
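That step-up search can be sketched as a simple loop. Purely illustrative: `is_stable` is a hypothetical hook standing in for manually setting the offset in Afterburner/PX1 and running a stability test (e.g. a Port Royal loop); nothing here touches real hardware.

```python
# Sketch of the "+90, then +15 MHz steps" search described above.
# is_stable(offset) is a placeholder for "apply the core offset by hand
# and see whether a benchmark loop survives".

def find_max_offset(is_stable, start=90, step=15):
    """Walk the core offset up in `step` MHz increments until the card
    fails the stability test, then report the last stable value.
    If even `start` fails, walk down instead."""
    offset = start
    if not is_stable(offset):
        # Starting point crashes: lower it in the same increments.
        while offset > 0 and not is_stable(offset):
            offset -= step
        return offset
    while is_stable(offset + step):
        offset += step
    return offset
```

With a chip that holds up to +165, the walk lands on 165 after five +15 steps; with one that can't hold +90, it backs down to the first offset that survives.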


----------



## RagingCain

I got my hands on the Alienware small form factor RTX 3090 if anyone has any questions. It seemed a sneaky way to get an RTX 3090; I swapped the GPU and gave the Alienware to my son.

It's small, in terms of other RTX 3090s.

I am currently learning how to overclock the i9 10900KF on an ASUS ROG Strix Z490-A. Not having the easiest time with it or the RAM kits, so I have some time to kill. Glad I delidded and swapped out the IHS; this thing likes to heat up.


----------



## ExDarkxH

Any issues with block contact for shunt modders?

I went and ordered some 7 mΩ shunts to stack. Since I have an EVGA card this will put me right around 240W for each of the three 8-pins, and that's where I feel comfortable because of the 20 amps.

Since I'm stacking, it will add a few mm to the height. I'm assuming there will be no issues with a water block, but I'm just double-checking whether anyone had contact issues. I'm getting that Optimus block.
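For reference, the stacking math works out like this. A sketch only: the 5 mΩ stock shunt value below is my assumption, not the board's confirmed spec, and the true per-rail scaling depends on what's actually fitted, which is why the ~240W figure above may differ.

```python
# Stacking a shunt on top of the stock one puts the two resistors in
# parallel, so the card under-reads current and the effective power
# limit rises by the ratio of old to new resistance.

def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock = 5.0     # mOhm, ASSUMED stock shunt value
stacked = 7.0   # mOhm, shunt soldered on top

combined = parallel(stock, stacked)   # ~2.92 mOhm
scale = stock / combined              # true draw = reported draw x scale

print(f"combined: {combined:.2f} mOhm, true draw = reported x {scale:.2f}")
print(f"a 150 W per-8-pin reading -> ~{150 * scale:.0f} W actual")
```

Under these assumed values the card would report roughly 58% of its real draw, so each 8-pin's 150W reading corresponds to about 257W actually drawn.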


----------



## Baasha

RagingCain said:


> I got my hands on the Alienware small factor RTX 3090 if anyone has any questions. Seemed a sneaky way to get a RTX 3090 and gave the Alienware with a GPU swap to my son.


Got any pics of this small form-factor RTX 3090? Is it a blower cooler? I LOVE the FE design and the cooling is amazing.


----------



## Nizzen

Sylencer90 said:


> Hello there guys, has anyone found the max stable clock for the 3090 asus strix OC? Im currently running mine in silent mode because i feel like i don't need much more performance for now. Im pretty confident that most games will run without any issues @1440p 60 fps maxed settings. Until i hit a wall with it, i won't bother with overclocking too much. But a stable oc so start with would save me some time finding one myself
> 
> Thanks


Max stock with hot ambient
Max stock cooler with shunt mod
Max watercooled with or without shunt mod
Max chilled water with or without shunt mod, and the list goes on 

I just tested my shunt-modded 3090 Strix OC on air cooling: 15000 points in Port Royal 
My max score in Port Royal is with chilled water, and it's
*15 671 points*

It's safe to say there is not much gain over a shunt-modded air-cooled Strix  The drawback of air cooling with modded shunts: it's HOT 

10900K @ 5GHz all-core at 1.2v, and this 3090 Strix OC is drawing up to 620W from the wall.


----------



## HyperMatrix

Sylencer90 said:


> Hello there guys, has anyone found the max stable clock for the 3090 asus strix OC? Im currently running mine in silent mode because i feel like i don't need much more performance for now. Im pretty confident that most games will run without any issues @1440p 60 fps maxed settings. Until i hit a wall with it, i won't bother with overclocking too much. But a stable oc so start with would save me some time finding one myself
> 
> Thanks


If I may ask...why spend $1800 on a 3090 when you’re gaming at 1440p60? Even a 3070 could do that. 

Also regarding your question the biggest problem I’m having with the Strix is heat management. I can run over 2000MHz at 82C but when it gets that hot it eventually crashes. I’ve found no easy way of keeping it cool on air. Of course I’ll be putting it under water but for now I’ve found using a curve to hit 1920MHz at 0.887v keeps it locked at that performance and temperatures generally stay below 60C with fans at 100%. But each card is different. Yours could end up doing that same clock with just .875 or .850 volts. 

I’ve even seen videos of people with the ASUS TUF 3090 keeping close to 2GHz with .950v in extended game sessions. So gotta test and see how good your chip is.


----------



## warrior-kid

Fire2 said:


> nice mem offset max I got was around 900.
> your core should be able to go alittle higher, crank that fan speed. hello 14/14.2k
> 
> deffo use the Gigabyte.RTX3090.24576.200904.rom over the aorus master bios (i had low 2000rpm centre fan also)
> 
> dont use what I used Gigabyte.RTX3090.24576.200928. killed my boost clocks as temps went high.
> 
> oem msi ventus bios is 3400 3450 3400 fan speeds.


I think there is contradictory information here about which DP ports are enabled or disabled after the Gigabyte BIOS is flashed on the Ventus.

Are two DPs available, or only one out of three? I absolutely need 2 out of 3 to be active for my 8K monitor.

Thanks!


----------



## RagingCain

Baasha said:


> Got any pics of this small form-factor RTX 3090? Is it a blower cooler? I LOVE the FE design and the cooling is amazing.


It's not pretty but seems to get the job done. I've included a reference pic too.










That's the "design".

Here is the new build.


----------



## Bilco

Dreams-Visions said:


> Yea, I just can't get to 14000 in PR on this XTrio on air. Not sure if the ambient temps are the issue (~25C) or what, but yea. I get almost the same score with an 0.950mV undervolt @ 2064MHz locked. About 80 points lower and 50W less power draw (best run was 13,821, about 500 points higher than my FTW3Ultra 3090 tested under the same ambient conditions).
> 
> 
> 
> https://www.3dmark.com/3dm/52470095?
> 
> 
> 
> Is there any practical way to assess whether I have a good chip or not given I can't blow cold air directly on it and have a relatively high ambient temp (in Florida, AC not working at the moment, so I have no way to keep it under 70C)? Is the core clock and memory clock a good indicator of potential? The ability to hold a fairly high frequency while decently undervolted?
> 
> Trying to figure out if the card is safe to keep or if I should be looking to sell and keep fishing. I'd like to put whichever 3090 I keep on a waterblock, but don't want to buy a waterblock if the card isn't good enough to justify it. Thanks, folks.


I'm in the same boat... Here's my FTW3's best score so far with the XOC 500w bios: https://www.3dmark.com/pr/459069

I have a 3090 strix arriving Wednesday... I am not sure if I should try to resell to pay off the FTW3 or open it up and keep it over this card....


----------



## dante`afk

To the FTW3 owners: how much did you pay including tax and shipping? $2030?

Asking as I might be able to get one for $2175, which would be fine...


----------



## HyperMatrix

dante`afk said:


> to the ftw3 owners, how much did you pay including tax and shipping? 2030$ ?
> 
> asking as might be able to get one for 2175, which would be fine...


Sending you a private message. Check it.


----------



## bmgjet

warrior-kid said:


> There is contradictory information here I think about which DP ports are enabled or disabled after the Gigabyte BIOS is flashed on Ventus.
> 
> Are two DP's available or only one out of three? I absolutely need 2 out of 3 to be active for my 8K monitor.
> 
> Thanks!


1 HDMI and 2 DP will 100% work.
The middle DP depends on your setup: if you're using an active adapter, it will work until you unplug that port and plug it in again. Then it won't be detected until you flash the BIOS back. But if you never unplug the adapter, it will keep working fine.


----------



## changboy

I ordered a Strix 3080 on October 1st from Memory Express and had no ETA or news of the card. So I cancelled my order and asked for a refund, and today I got an email from EVGA that they have a card for me, because I had clicked to be notified for the RTX 3090 FTW3 Ultra. I bought it and they are already preparing to ship my card.

So happy, because I was so tired of waiting for a new card, and now it's done. What's the best thing to do to get the best performance from this 3090 FTW3 Ultra? What's the highest boost clock to expect?


----------



## Falkentyne

HyperMatrix said:


> If I may ask...why spend $1800 on a 3090 when you’re gaming at 1440p60? Even a 3070 could do that.
> 
> Also regarding your question the biggest problem I’m having with the Strix is heat management. I can run over 2000MHz at 82C but when it gets that hot it eventually crashes. I’ve found no easy way of keeping it cool on air. Of course I’ll be putting it under water but for now I’ve found using a curve to hit 1920MHz at 0.887v keeps it locked at that performance and temperatures generally stay below 60C with fans at 100%. But each card is different. Yours could end up doing that same clock with just .875 or .850 volts.
> 
> I’ve even seen videos of people with the ASUS TUF 3090 keeping close to 2GHz with .950v in extended game sessions. So gotta test and see how good your chip is.


To be fair, he didn't say he was using a 60hz monitor.
He just wants to get at least 60 FPS in his titles, not that he was limited to 60 FPS.
Now if he's really using a 60hz monitor, well...2012 wants its year back from 2020


----------



## HyperMatrix

Falkentyne said:


> To be fair, he didn't say he was using a 60hz monitor.
> He just wants to get at least 60 FPS in his titles, not that he was limited to 60 FPS.
> Now if he's really using a 60hz monitor, well...2012 wants its year back from 2020


Haha it's still weird because I don't know of a single game that even a 1080ti wasn't maxing out at 1440p60. Obviously money isn't an issue since...you know...3090...so I was legitimately curious.


----------



## Dreams-Visions

Nizzen said:


> Max stock with hot ambient
> Max stock cooler with shuntmod
> Max watercooled with or without shuntmod
> Max chilled watercooled with or without shuntmod, and the list goes on
> 
> I testet now my shuntmodded 3090 strix OC on aircooling. 15000points in Port royale
> My max score in Port Royale is with chilled water, and it's
> *15 671 points*
> 
> It's safe to say there is not much gain over a shuntmodded aircooled Strix  The drawback with Aircooling with shunts modded: It's HOT
> 
> 10900k @ 5ghz all core 1.2v and this 3090 strix oc is drawing up to 620w from the wall.


Out of curiosity, what was it scoring on air before you started modding? Still trying to get a better context on my current card.


----------



## Antsu

Dreams-Visions said:


> Out of curiosity, what was it scoring on air before you started modding? Still trying to get a better context on my current card.


Are you sure you are not running some weird Nvidia setting? Your score is way lower than mine, even though I am running worse clocks: https://www.3dmark.com/pr/463056 
Btw, since you were thinking about getting another lottery ticket, I'd definitely advise against it. You said you ran this score (https://www.3dmark.com/3dm/52470095?) with 0.950V; meanwhile I can't hold those clocks at 1.050V on my Strix OC. I can barely keep the average above 2GHz @ 0.950V with the windows open... (https://www.3dmark.com/3dm/52482676)


----------



## Jpmboy

Dreams-Visions said:


> Out of curiosity, what was it scoring on air before you started modding? Still trying to get a better context on my current card.


you should be able to get at least into the low-mid 14000's with an aircooled 3090 regardless of the model. Put the fans at 100%, set the core at 2000 (it will drop clock bins with heat) ram to 10000, PL to Max. Use the windows High-power plan. NV driver quality to High-performance etc...


----------



## bmgjet

Jpmboy said:


> you should be able to get at least into the low-mid 14000's with an aircooled 3090 regardless of the model. Put the fans at 100%, set the core at 2000 (it will drop clock bins with heat) ram to 10000, PL to Max. Use the windows High-power plan. NV driver quality to High-performance etc...


I would have to disagree with that.
Most 2-plug cards will have a hard time cracking into the 14000s, since Port Royal is basically a power-limit benchmark.
So you'd need to have won the lottery and gotten a low-leakage chip to make 14K with 390W or less.
On the stock BIOS I couldn't get over a 13500 score (350/366W).
Turning the fans to 100% drops the score, since they eat into the power limit; the difference between 80% fan and 100% fan is 6 extra watts of usage.
Then on the Gigabyte 370/390W BIOS I got to a 13900 score. I had to shunt mod to get to a 14400 score with 450W.


----------



## Dreams-Visions

Antsu said:


> Are you sure you are not running some weird Nvidia setting? Your score is way lower than mine, even tho I am running worse clocks: https://www.3dmark.com/pr/463056
> Btw, since you were thinking about getting another lottery ticket, I'd definitely advice against doing it. You said you ran this score (https://www.3dmark.com/3dm/52470095?) with 0.950V, meanwhile I can't hold those clocks at 1.050V on my Strix OC. I can barely keep the average above 2Ghz @ 0.950V with the windows open... (https://www.3dmark.com/3dm/52482676)


I'm pretty sure my Nvidia settings are normal. And yep, 2064MHz @ 0.950V all day long.

Second opinion is welcome:


























I am testing


Jpmboy said:


> you should be able to get at least into the low-mid 14000's with an aircooled 3090 regardless of the model. Put the fans at 100%, set the core at 2000 (it will drop clock bins with heat) ram to 10000, PL to Max. Use the windows High-power plan. NV driver quality to High-performance etc...


Thanks for the help.

I have been running at 100% fans, PL to max, NV settings as listed above. I am using the AMD Ryzen High Performance power plan, which I would think wouldn't limit GPU performance.

I'll try locking core to 2000MHz and report back if anything changes. I'm not sure how to lock the ram to 10000. Did you mean 1000MHz?


----------



## olrdtg

bmgjet said:


> Would have to disagree with that.
> Most 2 plug cards will have a hard time to crack into 14000s since port royal is basically a power limit benchmark.
> So youd need to have won the lottery and gotten a low leakage chip to make 14K with 390W or less.
> On stock bios I couldnt get over 13500 score. 350/366W.
> Turning fans onto 100% drops score since they are eating up the power limit difference between 80% fan and 100% fan is 6 extra watts usage.
> Then on the gigabyte 370/390W bios I got to 13900 score. Had to shunt mod to get to 14400 score with 450W.


I don't know much about electrical engineering, so forgive me if this is a really stupid question, but do you think it might be possible for someone to make a custom NVIDIA 12-pin adapter with 3x 8-pins instead of 2x to help balance the load or bring in more power for shunt modded FEs? 🤔

I realize the 12-pin pretty much matches 2x 8-pins in terms of +12v inputs, but would there be any benefit at all from such a cable?


----------



## HyperMatrix

olrdtg said:


> I don't know much about electrical engineering, so forgive me if this is a really stupid question, but do you think it might be possible for someone to make a custom NVIDIA 12-pin adapter with 3x 8-pins instead of 2x to help balance the load or bring in more power for shunt modded FEs? 🤔
> 
> I realize the 12-pin pretty much matches 2x 8-pins in terms of +12v inputs, but would there be any benefit at all from such a cable?


Nvidia 12-pin spec is rated for higher than 300W. That's not the limiting factor. It's that Nvidia put out the card with 2x 8-pin connectors, meaning to stay within spec, and avoid liability issues, they have to keep the power limit capped. If you shunt mod, you'll be able to pull in more power. But don't expect Nvidia to put out a bios allowing you to draw more.


----------



## dante`afk

Another reason to avoid EK.


----------



## Dreams-Visions

Jpmboy said:


> you should be able to get at least into the low-mid 14000's with an aircooled 3090 regardless of the model. Put the fans at 100%, set the core at 2000 (it will drop clock bins with heat) ram to 10000, PL to Max. Use the windows High-power plan. NV driver quality to High-performance etc...


just following up. Core locked to 2010MHz through the test (curve), memory at 10000 exactly (+248), fans 100%.











Well, at least it's not power PerfCapping. lol


----------



## Antsu

Dreams-Visions said:


> just following up. Core locked to 2010MHz through the test (curve), memory at 10000 exactly (+248), fans 100%.
> 
> View attachment 2464287
> 
> 
> 
> Well, at least it's not power PerfCapping. lol


Tried wiping the drivers and closing every background program? My Nvidia settings are identical, except that I use ULMB instead of G-sync, but I don't think that should matter. You could give it a try though.

Really jealous of your core, if it's anything like Turing that card should do ~2200 @ 1.1V under a nice loop. Meanwhile I am crashing at 2040Mhz 1.05V, damnit. I guess I might be able to squeeze 2100 on water, but even that is not looking too sure, just like my 1080Ti, sigh.


----------



## Falkentyne

Dreams-Visions said:


> just following up. Core locked to 2010MHz through the test (curve), memory at 10000 exactly (+248), fans 100%.
> 
> View attachment 2464287
> 
> 
> 
> Well, at least it's not power PerfCapping. lol


Disable G-Sync and also make sure your PCIe is set to Gen 4.
I've seen benchmarks in the past, even with V-Sync off, where you scored lower with G-Sync enabled than with it disabled.


----------



## DrunknFoo

ExDarkxH said:


> any issues with block contact for shunters?
> 
> I went and ordered some 7mo shunts to stack. Since i have an evga card this will put me right around 240w for every 3pin, and thats where I feel comfortable because of the 20amps
> 
> Since im stacking it will add a few mm to the height. I'm assuming there will be no issues with a water block but im just double checking if anyone had contact issues. I'm getting that optimus block


No issues, stacking a shunt sits lower than the chokes


----------



## Falkentyne

DrunknFoo said:


> No issues, stacking a shunt sits lower than the chokes


The shunts are extremely thin. They're not 'fat' like people might be thinking.


----------



## Carillo

GTANY said:


> On a Frame Chasers youtube video comments, I have read that a Bykski 3080/3090 strix waterblock owner says that the waterblock interferes with a fan header. So it's bending the pcb on the top when it's mounted.
> 
> Can you confirm ?


I can confirm.


Jpmboy said:


> you should be able to get at least into the low-mid 14000's with an aircooled 3090 regardless of the model. Put the fans at 100%, set the core at 2000 (it will drop clock bins with heat) ram to 10000, PL to Max. Use the windows High-power plan. NV driver quality to High-performance etc...


That's wrong. Unless you flash a 390W BIOS to a two-pin 3090, MOST 3090s do not break 14,000. Mid-13,000s to high-13,000s with the stock BIOS is what you should expect. I have tried 5 different two-pin cards, and 14,500 is not even possible with a 390W BIOS on air cooling unless you have sub-zero room temps. 

What is the point of making such claims? This is exactly what makes inexperienced users wonder in frustration why they have such a "low" score in Port Royal.


----------



## Thanh Nguyen

My PNY with the Gigabyte BIOS hit 14.1k under hot water.


----------



## olrdtg

HyperMatrix said:


> Nvidia 12-pin spec is rated for higher than 300W. That's not the limiting factor. It's that Nvidia put out the card with 2x 8-pin connectors, meaning to stay within spec, and avoid liability issues, they have to keep the power limit capped. If you shunt mod, you'll be able to pull in more power. But don't expect Nvidia to put out a bios allowing you to draw more.


That's not what I was asking at all and I never mentioned anything about a vBIOS as I know they would never release an update to ease power restrictions. 

I know it's NVIDIA that put 2x 8-pin on the adapter, but putting 2x 8-pins doesn't stay within any spec, just the spec that they set. Having a 350W vs a 450W card isn't a matter of liability as the only limiting factor in this situation is the power supply, and as long as the card doesn't catch fire at 450W then there is no problem. They could very easily have made a 3x 8-pin variant for the 3090, but I don't believe that would help as the way they designed this 12-pin connector only has 6 +12v lines, which is equivalent to 2 8-pin cables. Probably the defining reason for their power limitation is the fact that 350 W is already a completely ridiculous amount of power for a GPU.

My card is shunt modded and pulls around 590 W peak, what I was asking bmgjet is if adding a 3rd 8-pin to the adapter would make any difference.


----------



## changboy

Does anyone here have the EVGA FTW3 Ultra?


----------



## DrunknFoo

changboy said:


> Someone here have the evga ft3w ultra ?


Im running one atm


----------



## changboy

DrunknFoo said:


> Im running one atm


ATM? What do you mean? Did you update to the new 500W BIOS?


----------



## Esenel

Dreams-Visions said:


> Out of curiosity, what was it scoring on air before you started modding? Still trying to get a better context on my current card.


My Strix, just out of the box, did 20123 GPU points in Time Spy.
That is my baseline for all tests.


----------



## DrunknFoo

changboy said:


> ATM ? what you mean ? Did you update the new bios of 500w ?


I'll have a Strix coming soon, I hope.

I played with the beta, yes.


----------



## HyperMatrix

olrdtg said:


> That's not what I was asking at all and I never mentioned anything about a vBIOS as I know they would never release an update to ease power restrictions.
> 
> I know it's NVIDIA that put 2x 8-pin on the adapter, but putting 2x 8-pins doesn't stay within any spec, just the spec that they set. Having a 350W vs a 450W card isn't a matter of liability as the only limiting factor in this situation is the power supply, and as long as the card doesn't catch fire at 450W then there is no problem. They could very easily have made a 3x 8-pin variant for the 3090, but I don't believe that would help as the way they designed this 12-pin connector only has 6 +12v lines, which is equivalent to 2 8-pin cables. Probably the defining reason for their power limitation is the fact that 350 W is already a completely ridiculous amount of power for a GPU.
> 
> My card is shunt modded and pulls around 590 W peak, what I was asking bmgjet is if adding a 3rd 8-pin to the adapter would make any difference.


Unless the board itself is designed to handle power coming in from 3 separate cables, the only benefit you’d get is with regard to the delivery of power from your PSU up to the connector. I know some of the heavier equipment we work with are 3 phase designs that would run 3x 110v lines instead of 1x 330v and in those cases we get lower temperature across the lines, and as a result of that, more efficiency/less power used. But the equipment has to be designed for 3 phase power. 

I don’t know how much of that applies to video cards. But my assumption is that since each line coming into the board has its own shunts and traces, you wouldn’t be able to gain any benefit from the connector on to the card. Only from the PSU up to the connector. So theoretically you could create a new adapter that splits the power across 3 separate cables. This would lower cable temperature, and could help if you have a multi-rail PSU by pulling cables from different rails. But I don’t think at under 300W power from each cable you’re going to get any sort of benefit out of it. 

If you were interested, look at Der8auer’s video on YouTube where he takes a pcie 8-pin connector and starts snipping the strands one by one while the card is active. I think he got down to just 1 or 2 strands and it was still enough to supply the card.


----------



## bmgjet

olrdtg said:


> My card is shunt modded and pulls around 590 W peak, what I was asking bmgjet is if adding a 3rd 8-pin to the adapter would make any difference.


It wouldn't really make any difference. Power is power; it doesn't matter if it comes over 1, 2, or 3 plugs.
The spec is really underrated, and any PSU worth using should be able to go far beyond it.
Just look at the PCIe 8-pin vs the 8-pin EPS: basically the same plug, just one extra 12V and ground instead of the sense wires, and it's rated at 235W instead of 150W.
2 plugs will be fine if your PSU is up to the task.
The only place I might get worried is the pins in the plugs.
If you get some that are loose, they will have resistance, which will cause heat and maybe melt the plugs if you're really unlucky.
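The pins-versus-wattage point can be put in rough numbers. Back-of-envelope only, using the commonly cited 150W rating spread over a PCIe 8-pin's three +12V contacts; real per-pin limits depend on the terminal series and crimp quality.

```python
# Per-pin current at a given wattage: an 8-pin PCIe plug carries its
# +12V load over 3 contacts, so even well above the 150 W rating the
# per-contact current stays modest on healthy, tight pins.

def amps_per_pin(watts, pins, volts=12.0):
    """Current through each +12V contact for a given total draw."""
    return watts / volts / pins

spec = amps_per_pin(150, 3)    # at the rated 150 W
pushed = amps_per_pin(240, 3)  # if a shunted card pulls ~240 W per plug

print(f"{spec:.2f} A/pin at spec, {pushed:.2f} A/pin at 240 W per 8-pin")
```

So roughly 4.2 A per contact at spec and about 6.7 A at 240W per plug; a loose pin concentrates that current through a poor contact, which is where the heat comes from.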


----------



## asdkj1740

It is reported that there have been more than 20 cases of dead EVGA cards recently on the official EVGA forum, with Leadex/EVGA G3 PSUs involved.
Some say there are nearly 100 cases reported in China currently.
Leadex has issued a statement in China saying they are looking into the problem.

Try flashing back to the original BIOS in case the card dies suddenly.


----------



## Dreams-Visions

Falkentyne said:


> Disable Gsync and also make sure your PCIE is set to Gen 4.
> I've seen benchmarks in the past, even with Vsync off, where you scored lower if Gsync were enabled instead of disabled.


I did disable Gsync. Card is reading 16x gen 4 PCIe. I've been testing combinations since that last post.

100% core voltage, 119% power limit, 100% fan, 500W bios in all tests. About 25C ambient. +248 memory was used for the 10000MHz memory clock.

--
core, mem = PR score (frequency range, wattage range, max temp)

+0, +0 = 13123 (1905-1920, 472W-496W, max temp 75C) -- baseline
+100, +248 = 13552 (1980-2010, 480W-502W, max temp 74C)
+130, +248 = 13673 (2010-2050, 485W-505W, max temp 74C)
+145, +248 = 13724 (2025-2070, 480W-512W, max temp 74C)
+160, +248 = 13799 (2040-2055, 475W-501W, max temp 73C)
+175, +248 = 13855 (2085-2115, 2130MHz max, 485W-504W, max temp 74C)
+165, +498 = 13882 (2055-2100, 2160MHz max, 480W-501W, max temp 74C)
+160, +898 = 13893 (2025-2085, 2115MHz max, 488W-501W, max temp 73C)
+165, +848 = 13896 (2010-2085, 2115MHz max, 486W-505W, max temp 75C)
+164, +958 = 13907 (1995-2085, 2130MHz max, 483W-503W, max temp 75C)
+165, +748 = 13926 (2055-2100, 2160MHz max, 478W-507W, max temp 75C)
+164, +1048 = 13932 (2025-2085, 2100MHz max, 480W-503W, max temp 75C)
+165, +898 = 13946 (2040-2100, 2145MHz max, 475W-490W, max temp 73C)
+160, +1148 = 13946 (2010-2070, 2115MHz max, 490W-506W, max temp 74C)
+160, +1248 = 13955 (2025-2070, 2115MHz max, 488W-500W, max temp 74C)
+166, +948 = 13964 (2040-2085, 2115MHz max, 482W-501W, max temp 74C)
+165, +948 = 13975 (2040-2100, 2115MHz max, 475W-502W, max temp 73C)
+166, +998 = 13977 (2040-2085, 2115MHz max, 484W-504W, max temp 74C)
+165, +1098 = 13981 (2025-2100, 2130MHz max, 477-505W, max temp 74C)
+165, +1248 = 14011 (2055-2070, 2115MHz max, 480W-504W, max temp 74C)
*+166, +1248 = 14026 (2055-2100, 2115MHz max, 480W-505W, max temp 75C)*

So I did eventually break 14k which was my goal.



https://www.3dmark.com/3dm/52491578?



...but it sure feels like I had to do a lot of dancing to get there. Anything over +160 on the core was pretty unstable. Would usually finish PR; sometimes wouldn't. I don't think I saw negative until I tried +1298 or so, but core clock is what mattered the most by far. +180 and higher would always just crash. Had to step down from +175 as memory clock increased. I feel like if I could have maintained +175, I may have gotten around 14.2K. Still seems to be only as good as some posters here on 2-pin cards, which still gives me a bit of pause considering 500W are being thrown at it.

*If anyone has further advice to try to squeeze out more on air, do let me know.* I'll be placing my block order tomorrow if people feel like this chip has potential.


----------



## bmgjet

Temperature is your biggest problem at the moment. 75C is probably losing you about 80MHz versus if it were at 50C.
Feeding AC air into your case or repasting the card is about all you could do to improve it.


----------



## changboy

asdkj1740 said:


> It is reported that there are more than 20 cases of dead EVGA cards recently on the official EVGA forum, with Leadex/EVGA G3 PSUs involved.
> Some say there are nearly 100 cases reported in China currently.
> Leadex has issued a statement in China saying they are looking into the problem.
> 
> Try to flash back to the original BIOS in case the card dies suddenly.


Hehehe, funny. Are you nervous?


----------



## warrior-kid

bmgjet said:


> 1 HDMI and 2 DP will 100% work.
> The middle DP depends on your setup. If you're using an active adapter it will work until you unplug that port and plug it in again; then it won't be detected until you flash the BIOS back. But if you never unplug the adapter it will keep working fine.


That was super helpful, thanks!

I've now flashed the Gigabyte BIOS with DP1 and DP3 (not the middle one) connected to the 8K monitor, and the monitor set to DP2 manually (that last numbering is internal to the monitor itself). After the PC auto-rebooted, I have 8K at 60Hz, the power slider available in AB, and GPU-Z 3.4 is working.

[UPDATED]
An update for the PR benchmark: Boost 3.0 is not wanting to play along by itself, but cautiously bringing the offsets up to +204/+223, 103% power, and 90% fans has brought my terrible previous scores up to 13216 https://www.3dmark.com/3dm/52498204? with average temps of 66C.


----------



## asdkj1740

changboy said:


> Hehehe funny, are you nervous ?


No, I don't have one.
But interestingly, people here who shorted shunts and pulled >600W seem to be fine.


----------



## Sylencer90

HyperMatrix said:


> If I may ask...why spend $1800 on a 3090 when you’re gaming at 1440p60? Even a 3070 could do that.
> 
> Also regarding your question the biggest problem I’m having with the Strix is heat management. I can run over 2000MHz at 82C but when it gets that hot it eventually crashes. I’ve found no easy way of keeping it cool on air. Of course I’ll be putting it under water but for now I’ve found using a curve to hit 1920MHz at 0.887v keeps it locked at that performance and temperatures generally stay below 60C with fans at 100%. But each card is different. Yours could end up doing that same clock with just .875 or .850 volts.
> 
> I’ve even seen videos of people with the ASUS TUF 3090 keeping close to 2GHz with .950v in extended game sessions. So gotta test and see how good your chip is.


Thanks for your reply, I'll gladly answer that question. First of all, I needed a new PC since I sold my "1 year old" one to a friend who needed one badly and didn't have the time to build one from scratch. I wasn't too happy with my old one anyway, so I figured why not get a new one. Took me 2 weeks to settle on a build I wanted. Money didn't matter at any point. This PC is meant to last me a couple of years, so I went overkill with the GPU. And I play competitive games at 165fps/Hz; only eye-candy games at 60fps.

I got into the habit of spending more and more money since my setup is the only hobby I really have. Whenever I don't reach my desired fps I can just overclock my GPU a bit and I'm probably safe. The 3090 offers me a good amount of headroom if I ever need it. I might also go 4K for singleplayer games if I find a good one.

I might also look into undervolting my 3090; not sure how it's gonna go but I'm at least gonna try it.

Cheers


----------



## Sync0r

Shunted the PCIe slot now (stacked 20 mOhm on a 5 mOhm), as well as the 2x8-pins (stacked 5 mOhm on 5 mOhm), on my Zotac Trinity. Hitting the power limit again, as I'm only seeing 65W from the PCIe slot. I'm guessing it's now the GPU chip power limit? Does anyone know what that limit is?


----------



## changboy

asdkj1740 said:


> no, i dont have one.
> but interestingly ppl here shorted shunt and got >600w seem to be fine.


Oh, so some have done the shunt mod on this card? I just ordered 10x Panasonic ERJ-M1WSF8M0U 0.008 Ohm resistors. Can you link me to where you saw the mod, or the guy who did it, please?


----------



## bmgjet

Sync0r said:


> Shunted the pcie slot now (stacked 20mh on a 5mh), as well as the 2x8pins (stacked 5mh on 5mh) on my zotac trinity. Hitting power limit again, as I'm only seeing 65w from pcie. I'm guessing it's now the GPU power limit? Do people know what the limit is for the GPU to consume?


Could be because your card's max power limit is 350W.
65W from the slot and 142.5W from each plug would be a logical power split for it.
You could try a BIOS with a higher power limit and 75W from the slot, like the Gigabyte Gaming OC. Or stack another 20 mOhm on the slot, which would be the equivalent of a single 10 mOhm on the 5 mOhm.
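The stacking arithmetic above follows from the usual parallel-resistance rule: the controller under-reads current in proportion to how far the sensed shunt resistance drops. A quick sanity check in Python (the helper name is mine, purely illustrative):

```python
def parallel_mohm(*shunts):
    """Combined resistance (in mOhm) of shunt resistors stacked in parallel."""
    return 1.0 / sum(1.0 / r for r in shunts)

# Stock 5 mOhm slot shunt with a 20 mOhm resistor stacked on top:
sensed = parallel_mohm(5, 20)  # 4.0 mOhm -> controller sees 4/5 of true current,
                               # so true power = reported * 1.25

# Stacking a second 20 mOhm is indeed the same as one 10 mOhm on the 5 mOhm:
assert abs(parallel_mohm(5, 20, 20) - parallel_mohm(5, 10)) < 1e-9
```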


----------



## Sync0r

bmgjet said:


> Could be because your card's max power limit is 350W.
> 65W from the slot and 142.5W from each plug would be a logical power split for it.
> You could try a BIOS with a higher power limit and 75W from the slot, like the Gigabyte Gaming OC. Or stack another 20 mOhm on the slot, which would be the equivalent of a single 10 mOhm on the 5 mOhm.


Yeah, I'm already on the 390W BIOS. Around 95W per 8-pin and 65W from the PCIe slot is what GPU-Z is reading, so not sure it's that.


----------



## pat182

Trying to lock your clock with a voltage curve is useless with DLSS and RT. If a game uses DLSS and ray tracing, your clock is going to be lower, since that compute draws power on top of normal rasterization. Even then: I'm playing Shadow of War with a Strix and it draws 480W @ 2055MHz at 4K max settings, versus Horizon Zero Dawn where I can go to 2150MHz @ 440W.

With a Ventus, in Horizon I was doing 1950MHz but in Control only 1725MHz, because the 350W limit was being used up by RT and DLSS.

I highly suggest you just move the power slider and set a core offset rather than trying a custom curve that won't work in all games.


----------



## bmgjet

Sync0r said:


> Yeah I'm already on the 390w bios. Around 95w per 8 pin and 65w pcie slot is what GPUz is reading, so not sure it's that.


So realistically you're pulling:
(95 x 2) x 2 = 380W on the plugs
65 x 1.25 = 81.25W from the slot
461.25W total.

The chip power limit on my card was 250-280W on the EVGA BIOS (350/366W total) and 300W on the Gigabyte BIOS.
If you do that shunt you'll need to do the SRC shunt as well, or it will overvolt and hit the power limit for that; it will think something is wrong because of the difference it sees between the two.
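The arithmetic above, written out as a sketch (the x2 factor assumes equal-value shunts stacked on the 8-pins so each reading is halved, and the x1.25 factor comes from 20 mOhm stacked on the 5 mOhm slot shunt):

```python
# Reconstruct true draw from software readings after the shunt mods.
reported_per_8pin_w = 95   # what GPU-Z shows per 8-pin after the mod
reported_slot_w = 65       # what GPU-Z shows for the PCIe slot

plugs_w = (reported_per_8pin_w * 2) * 2  # two plugs, each under-read by half
slot_w = reported_slot_w * 1.25          # slot under-reads by 4/5
total_w = plugs_w + slot_w

print(plugs_w, slot_w, total_w)  # 380 81.25 461.25
```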


----------



## Sync0r

bmgjet said:


> So realistically you're pulling:
> (95 x 2) x 2 = 380W on the plugs
> 65 x 1.25 = 81.25W from the slot
> 461.25W total.
> 
> The chip power limit on my card was 250-280W on the EVGA BIOS (350/366W total) and 300W on the Gigabyte BIOS.
> If you do that shunt you'll need to do the SRC shunt as well, or it will overvolt and hit the power limit for that; it will think something is wrong because of the difference it sees between the two.


Cheers, ok so might go and shunt those two next. Ta for the help.


----------



## nievz

Dreams-Visions said:


> Yea, I just can't get to 14000 in PR on this XTrio on air. Not sure if the ambient temps are the issue (~25C) or what, but yea. I get almost the same score with a 0.950V undervolt @ 2064MHz locked: about 80 points lower and 50W less power draw (best run was 13,821, about 500 points higher than my FTW3 Ultra 3090 tested under the same ambient conditions).
> 
> 
> 
> https://www.3dmark.com/3dm/52470095?
> 
> 
> 
> Is there any practical way to assess whether I have a good chip or not given I can't blow cold air directly on it and have a relatively high ambient temp (in Florida, AC not working at the moment, so I have no way to keep it under 70C)? Is the core clock and memory clock a good indicator of potential? The ability to hold a fairly high frequency while decently undervolted?
> 
> Trying to figure out if the card is safe to keep or if I should be looking to sell and keep fishing. I'd like to put whichever 3090 I keep on a waterblock, but don't want to buy a waterblock if the card isn't good enough to justify it. Thanks, folks.


I'm in the same boat. I can't break 14000 in PR on my XTrio running the Strix BIOS. Your chip is actually better than mine, since it can do 2064 @ 954mV; mine needs 1.031V at 2025MHz. The best I can do is 13900. I'm starting to blame my CPU, a 3700X @ 4.25GHz.


----------



## Thanh Nguyen

Sync0r said:


> Cheers, ok so might go and shunt those two next. Ta for the help.


Come on man, I'm just 10 points behind you. If you mod more, I have to work more to catch up.


----------



## warrior-kid

pat182 said:


> trying to lock your clock with a voltage curve is useless with the dlss and RT tech, if a game use dlss and ray tracing, your clock gonna be lower since it draw power for that compute vs normal rasterisation. even then, im playing Shadow of war with a strix and it draws 480watt @ 2055mhz in 4k max setting vs Horizon zero dawn i can go to 2150mhz @ 440watt.
> 
> with a ventus in Horizon i was doing 1950mhz but in control 1725mhz cause of 350watt limit being utilise by RT and dlss.
> 
> I highly suggest you just move power slider and set an offset to the core vs trying to do a custom curve that wont work on all games


*[CORRECTION]*
Not sure about DLSS at all. Take SotTR in 8K without any DLSS: my benchmark on reasonable RT medium settings is 38 fps average with a 26-*62* range. It is an eye-candy game but has decent action, and I see no dropouts; it is all very smooth. Not to mention it is gorgeous in 8K on this Dell monitor. Memory usage is 14GB, by the way, so a 3080 or 2080 Ti won't work.


----------



## pat182

warrior-kid said:


> *[CORRECTION]*
> Not sure about DLSS at all. Take SotTR in 8K without any DLSS: my benchmark on reasonable RT medium settings is 38 fps average with a 26-*62* range. It is an eye-candy game but has decent action, and I see no dropouts; it is all very smooth. Not to mention it is gorgeous in 8K on this Dell monitor. Memory usage is 14GB, by the way, so a 3080 or 2080 Ti won't work.


I'm not saying it won't work; I'm saying it's a game-by-game basis. Take Control, for example, on a 350W 3090:
with no RT or DLSS the Ventus can go to 1950MHz and cap at 350W, but when you enable RT and DLSS (mostly RT, probably) the clock drops to 1725MHz at the same 350W usage.
So even if I set a custom curve for 1950MHz @ 0.950V it will never achieve it, because it's going to be power-limited well before that in a heavy RT game, and the game might crash because of my voltage cap with a high target to reach.

Maybe DLSS doesn't take much more power, since you render at a lower resolution and free up horsepower that way, but RT sure does.
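A toy model of the interaction described above (purely illustrative, a crude linear throttle; real GPU Boost also drops voltage as it downclocks, so the actual clock loss is smaller than this predicts):

```python
def sustained_clock_mhz(curve_target_mhz, power_limit_w, draw_at_target_w):
    """If the workload would exceed the board power limit at the curve's
    locked clock, the card throttles below the target (linear approximation)."""
    if draw_at_target_w <= power_limit_w:
        return curve_target_mhz
    return curve_target_mhz * power_limit_w / draw_at_target_w

# A raster-only load fits inside the 350 W cap, so the locked point holds:
sustained_clock_mhz(1950, 350, 340)  # -> 1950
# A heavy RT load at the same clock would draw more, so the clock sags
# below the curve's target no matter what voltage point was locked:
assert sustained_clock_mhz(1950, 350, 440) < 1950
```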


----------



## warrior-kid

pat182 said:


> I'm not saying it won't work; I'm saying it's a game-by-game basis. Take Control, for example, on a 350W 3090:
> with no RT or DLSS the Ventus can go to 1950MHz and cap at 350W, but when you enable RT and DLSS (mostly RT, probably) the clock drops to 1725MHz at the same 350W usage.
> So even if I set a custom curve for 1950MHz @ 0.950V it will never achieve it, because it's going to be power-limited well before that in a heavy RT game, and the game might crash because of my voltage cap with a high target to reach.
> 
> Maybe DLSS doesn't take much more power, since you render at a lower resolution and free up horsepower that way, but RT sure does.


Agreed, RT is the beast (haven't checked the cost of DLSS).


----------



## Baasha

With the near death of SLI, the 3090 seems to struggle at 4K maxed out with RayTracing in certain titles.

Watchdogs Legion looks amazing but the performance is absolute garbage (~ 40fps) in 4K maxed out with RayTracing. Enabling DLSS ("Balanced") only improved it slightly.










I thought this 3090 was the real 4K 60fps card. So much for that!  

This proves that MGPU/SLI is an absolute must to enjoy fully maxed out 4K 60fps+ gameplay.

Shadow of the Tomb Raider (DX12) and Red Dead Redemption 2 (Vulkan) are amazing.



















Would you guys be interested in creating a petition to lobby the developers to implement MGPU in DX12 games?

Although not many people have RTX 3090 SLI, I think it would help the gaming community as a whole. The technology is there - SOTTR works in DX12 with MGPU with RayTracing AND DLSS!

I am going to create a petition - let me know if you guys want to join in the lobbying efforts (we will encourage these devs to implement MGPU by contacting them on Twitter, Facebook, and email). It is unconscionable that in 2020, a single RTX 3090 still cannot achieve 4K 60fps maxed out. Every time a new GPU comes out promising the moon but the reality is quite different.


----------



## pat182

Baasha said:


> With the near death of SLI, the 3090 seems to struggle at 4K maxed out with RayTracing in certain titles.
> 
> Watchdogs Legion looks amazing but the performance is absolute garbage (~ 40fps) in 4K maxed out with RayTracing. Enabling DLSS ("Balanced") only improved it slightly.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I thought this 3090 was the real 4K 60fps card. So much for that!
> 
> This proves that MGPU/SLI is an absolute must to enjoy fully maxed out 4K 60fps+ gameplay.
> 
> Shadow of the Tomb Raider (DX12) and Red Dead Redemption 2 (Vulkan) are amazing.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Would you guys be interested in creating a petition to lobby the developers to implement MGPU in DX12 games?
> 
> Although not many people have RTX 3090 SLI, I think it would help the gaming community as a whole. The technology is there - SOTTR works in DX12 with MGPU with RayTracing AND DLSS!
> 
> I am going to create a petition - let me know if you guys want to join in the lobbying efforts (we will encourage these devs to implement MGPU by contacting them on Twitter, Facebook, and email). It is unconscionable that in 2020, a single RTX 3090 still cannot achieve 4K 60fps maxed out. Every time a new GPU comes out promising the moon but the reality is quite different.


Legion is CPU bound, not GPU bound. My Strix lowers its clock and only gets to 350W even at 80fps because the CPU can't keep up.


----------



## HyperMatrix

pat182 said:


> Legion is CPU bound, not GPU bound


Legion is not CPU bound at 40fps, lol. The only times I would sometimes hit a CPU bottleneck were around/above 80fps. You're not CPU bound when your GPU is at 100% usage.


----------



## pat182

Any idea how much regular GDDR6 draws vs GDDR6X? 162W for the memory is sure something.


----------



## warrior-kid

Regarding a petition, I'm certainly happy to join. I had used triple cards in the past to great effect, and the advertised options of 4K 60Hz, and (much more relevant for me) 8K 30-60Hz and 4K 120Hz, are suggested to be completely within reach. When we are hitting either of these limits hard with the top-of-the-line card now, right after its release, it is not a good sign, and if there were an option to do mGPU or SLI, that would solve this nicely.


----------



## GAN77

pat182 said:


> Any idea how much regular GDDR6 draws vs GDDR6X? 162W for the memory is sure something.


No, this is not memory consumption.








NVIDIA GeForce RTX 3090 Founders Edition Review: Zwischen Mehrwert und vermeintlicher Dekadenz - wenn der Preis nicht alles ist | Seite 2 | igor´sLAB
www.igorslab.de


----------



## dante`afk

Baasha said:


> With the near death of SLI, the 3090 seems to struggle at 4K maxed out with RayTracing in certain titles.
> 
> Watchdogs Legion looks amazing but the performance is absolute garbage (~ 40fps) in 4K maxed out with RayTracing. Enabling DLSS ("Balanced") only improved it slightly.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I thought this 3090 was the real 4K 60fps card. So much for that!
> 
> This proves that MGPU/SLI is an absolute must to enjoy fully maxed out 4K 60fps+ gameplay.
> 
> Shadow of the Tomb Raider (DX12) and Red Dead Redemption 2 (Vulkan) are amazing.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Would you guys be interested in creating a petition to lobby the developers to implement MGPU in DX12 games?
> 
> Although not many people have RTX 3090 SLI, I think it would help the gaming community as a whole. The technology is there - SOTTR works in DX12 with MGPU with RayTracing AND DLSS!
> 
> I am going to create a petition - let me know if you guys want to join in the lobbying efforts (we will encourage these devs to implement MGPU by contacting them on Twitter, Facebook, and email). It is unconscionable that in 2020, a single RTX 3090 still cannot achieve 4K 60fps maxed out. Every time a new GPU comes out promising the moon but the reality is quite different.


Disregard Watch Dogs Legion, it's terribly optimized; just look at reviews.


----------



## pat182

So I'm getting 61fps on air in the new RT 3DMark test.


https://www.3dmark.com/3dm/52529955?


My Strix can boost like crazy (2160MHz); kinda want to watercool now.


----------



## Midian

Got my Asus Strix 3090 OC today, now just waiting for two 980 PRO 1TB drives and a 5950X then we will see what we can push this card up to.


----------



## Wrathier

Hi all,

So I got my ASUS TUF OC 3090 today, flashed the Gigabyte 3090 Gaming OC BIOS, and this seems to be what I can push out of the card in Port Royal.

What are your thoughts? Any suggestions to pass that 14,000 mark, or is 13987 a decent result?

https://www.3dmark.com/pr/466904 = 13987 points

Thanks.


----------



## jsarver

Happy with my FE after hearing so many struggling to break 14k.



https://www.3dmark.com/pr/451272


----------



## GQNerd

jsarver said:


> Happy with my FE after hearing so many struggling to break 14k.
> 
> 
> https://www.3dmark.com/pr/451272


Hasn't been a prob for my FE either: Port Royal - 14,454

Stock cooling w/no mods... yet


----------



## changboy

You guys have good scores with your cards. I hope I can do 14,000 too with my FTW3 Ultra, but I think this Ultra doesn't score that high; I don't know, because I stopped following this thread for a while.


----------



## Thanh Nguyen

I just flashed the Strix BIOS on my reference PNY 3090 and it works. HWiNFO64 shows GPU power 100W more than with the PNY BIOS. The performance in games is about the same I guess, but in Tom Clancy's Rainbow Six Siege the Strix BIOS gives more fps.


----------



## Dreams-Visions

bmgjet said:


> Temperature is your biggest problem at the moment. 75C is probably losing you about 80MHz versus if it were at 50C.
> Feeding AC air into your case or repasting the card is about all you could do to improve it.


Cool, thank you for the reply. I have a 3090 Strix in the mail now. I look forward to comparing it to this 500W XTrio and my FTW3U, and I'll be interested to see how the Strix is impacted by my ambient temps. If the Strix ends up not performing any better than the XTrio because of temps, I'll finally have my answer, and I'll just keep the least expensive of the 3, given the winner will be waterblocked anyway (so physical/cooler appearance doesn't matter).


----------



## long2905

Thanh Nguyen said:


> I just flashed the Strix BIOS on my reference PNY 3090 and it works. HWiNFO64 shows GPU power 100W more than with the PNY BIOS. The performance in games is about the same I guess, but in Tom Clancy's Rainbow Six Siege the Strix BIOS gives more fps.


What's the power draw with the Strix vBIOS? Yours is a 2x8-pin card, right? Do you see the same weird clocking behavior as with the FTW3 vBIOS?


----------



## Dreams-Visions

jsarver said:


> Happy with my fe after hearing so many struggling with 14k time spy.
> 
> 
> 
> https://www.3dmark.com/pr/451272


Well... your card temp is literally 25C lower than mine.



Miguelios said:


> Hasn't been a prob for my FE either: Port Royal - 14,454
> 
> Stock cooling w/no mods... yet


and yours is 15C lower. 

Yea...I'm thinking the struggle is ambient temp related. For me, at least.


----------



## Thanh Nguyen

long2905 said:


> What's the power draw with the Strix vBIOS? Yours is a 2x8-pin card, right? Do you see the same weird clocking behavior as with the FTW3 vBIOS?


Because I did the shunt mod, the GPU power reading in HWiNFO64 has to be multiplied by 2.


----------



## motivman

Will the FE BIOS work on a 2x8-pin card?


----------



## martinhal

What is the average memory overclock on these cards ?


----------



## Warocia

Baasha said:


> With the near death of SLI, the 3090 seems to struggle at 4K maxed out with RayTracing in certain titles.
> 
> Watchdogs Legion looks amazing but the performance is absolute garbage (~ 40fps) in 4K maxed out with RayTracing. Enabling DLSS ("Balanced") only improved it slightly.


There is some odd engine bottleneck in Watch Dogs Legion. I have everything maxed at 3440x1440 and I am getting about 40-50 fps while driving, with GPU usage at 70%. Average fps is 70-80 during the in-game benchmark. I am thinking this game could benefit from fast DDR4.


----------



## Baasha

Warocia said:


> There is some odd engine bottleneck in Watch Dogs Legion. I have everything maxed at 3440x1440 and I am getting about 40-50 fps while driving, with GPU usage at 70%. Average fps is 70-80 during the in-game benchmark. I am thinking this game could benefit from fast DDR4.


Well, Ubi-crap is famous for its garbage titles - the environments look amazing in Legion but character models and animations are slightly less than trash.

The problem I was alluding to, however, is with other games as well. I just played Control without DLSS in 4K maxed out (with Ray Tracing) and I get ~ 40fps as well. I mean, performance is "better"; I was getting ~ 40fps with 2x Titan RTX with the same settings beforehand but still, 40fps with a "BFGPU" is nuts. Cyberpunk 2077 looks way more demanding and I bet they won't implement MGPU either so we'll be stuck with sub-60fps 4K gameplay with all the bells and whistles turned on.

The point is - we need MGPU/SLI to be properly implemented. RDR2 is a fantastic experience at 4K with literally every setting turned up to the max including TAA (no FXAA or MSAA though).

As is SOTTR. 

On another note, here's a dumb question: I have 3DMark Advanced Edition but I don't see the "Wildlife" and new "ray tracing" benchmarks?!? Do I have to re-download 3DMark (6.7GB) and uninstall the previous version? I thought it would automatically update...


----------



## bmgjet

Warocia said:


> There is some odd engine bottleneck in Watch Dogs Legion. I have everything maxed at 3440x1440 and I am getting about 40-50 fps while driving, with GPU usage at 70%. Average fps is 70-80 during the in-game benchmark. I am thinking this game could benefit from fast DDR4.


Denuvo DRM is the bottleneck.


----------



## Esenel

pat182 said:


> legion is cpu bound, not gpu bound my strix lower its clock and only get to 350watt even at 80fps cause cpu cant keep up





Warocia said:


> There is some odd engine bottleneck in Watch Dogs Legion. I have everything maxed at 3440x1440 and I am getting about 40-50 fps while driving, with GPU usage at 70%. Average fps is 70-80 during the in-game benchmark. I am thinking this game could benefit from fast DDR4.


Edit:
There is an engine limit in WD Legion; the CPU is far from fully used.
But I've only seen this behaviour around the Thames.


----------



## Carillo

nievz said:


> I'm on the same boat. I can't break 14000 in PR on my Xtrio running Strix BIOS. Your chip is actually better than mine since it can do 2064 @ 954mv. Mine needs 1.031 at 2025mhz. The best i can do is 13900. I'm starting to blame my CPU, 3700x @ 4.25Ghz.


My Strix pulled a 14,800 in Port Royal air-cooled, at +110/+1022.


----------



## shiokarai

Baasha said:


> Well, Ubi-crap is famous for its garbage titles - the environments look amazing in Legion but character models and animations are slightly less than trash.
> 
> The problem I was alluding to, however, is with other games as well. I just played Control without DLSS in 4K maxed out (with Ray Tracing) and I get ~ 40fps as well. I mean, performance is "better"; I was getting ~ 40fps with 2x Titan RTX with the same settings beforehand but still, 40fps with a "BFGPU" is nuts. Cyberpunk 2077 looks way more demanding and I bet they won't implement MGPU either so we'll be stuck with sub-60fps 4K gameplay with all the bells and whistles turned on.
> 
> The point is - we need MGPU/SLI to be properly implemented. RDR2 is a fantastic experience at 4K with literally every setting turned up to the max including TAA (no FXAA or MSAA though).
> 
> As is SOTTR.
> 
> On another note, here's a dumb question: I have 3DMark Advanced Edition but I don't see the "Wildlife" and new "ray tracing" benchmarks?!? Do I have to re-download 3DMark (6.7GB) and uninstall the previous version? I thought it would automatically update...


So much for the 4K gaming utopia. Honestly, how big is your monitor, and how far are you sitting, what's your viewing distance? I'm using an LG 38GN950, 38" 21:9, and it's great; I'm sitting about 60-70cm away and not having any problems. I had a 4K 27" 144Hz before and don't see any difference, apart from now getting higher frame rates in games. Ever thought of that? Or is it too painful to change the resolution? I've been following your gymnastics with SLI for a few years now, and it's getting funny to watch, really. BTW, I changed from SLI to one card with the RTX 2080 Ti; the only game that worked so-so with it was The Witcher 3, and that's it.


----------



## Cavokk

Hmm... 7 RTX 3090s in stock at a Danish retailer this morning: 1 MSI Trio (non-X) and 6 Inno3D. This is the retailer that has published supply data for all the 30xx cards over the last couple of weeks.

Interesting to see if there are still some available later today.

C

Edit: The Trio is gone, but 6 pcs of Inno3D 3090s are still available.


----------



## Macros023

Hi, I am new to VGA BIOS flashing. I read the guide and tried it with my old card, and everything worked. Now I want a higher power limit on my Zotac 3090. How can I tell which BIOS is compatible? I have seen people stating that the Palit 3090 BIOS works. Can anybody confirm this?


----------



## Carillo

Macros023 said:


> Hi, I am new to VGA BIOS flashing. I read the guide and tried it with my old card, and everything worked. Now I want a higher power limit on my Zotac 3090. How can I tell which BIOS is compatible? I have seen people stating that the Palit 3090 BIOS works. Can anybody confirm this?


I have used the 390W Gigabyte Gaming OC BIOS on the Zotac Trinity. Works great: Gigabyte RTX 3090 VBIOS


----------



## Spiriva

Macros023 said:


> Hi, I am new to VGA BIOS flashing. I read the guide and tried it with my old card, and everything worked. Now I want a higher power limit on my Zotac 3090. How can I tell which BIOS is compatible? I have seen people stating that the Palit 3090 BIOS works. Can anybody confirm this?


One of these two will be good for your card:









Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com





Both bioses are 390w.


----------



## Macros023

Cool, thanks for the fast help. Any problems with ports, or can the higher boost clock be a problem if I lose the silicon lottery?


----------



## mbm

Carillo said:


> I have used the 390W Gigabyte Gaming OC BIOS on the Zotac Trinity. Works great: Gigabyte RTX 3090 VBIOS


What are the benefits?


----------



## changboy

I will receive my FTW3 Ultra today and check the performance, but from what I read on the EVGA forum it seems many are disappointed with the card, and it seems EVGA is working on a BIOS (not beta).

Has someone tried to flash the Asus OC BIOS on the FTW3 Ultra yet?


----------



## twisted0ne

Anyone been having issues with driver resets? I was having problems with my last PC (random BSODs, CTDs etc... I think it might have been RAM related but never found the exact cause).
I rebuilt the PC with a new 3700X, motherboard, and RAM, and I'm still getting crashes to desktop in several different games, but always the same error:

"dxgi_error_device_removed"

Clean driver install and even a wiped OS, same issue. I'm running an MSI 3090 Ventus OC, originally flashed with the Gigabyte BIOS; I've gone back to stock to see if that was the issue and it still happens.

Depending on what game I play I've seen the GPU sit at 79 degrees constantly; seems a bit high?


----------



## Cavokk

twisted0ne said:


> Depending on what game I play I've seen the GPU sit at 79 degrees constantly, seems a bit high?


Yes, that's a bit high for gaming; maybe bad ventilation of your PC case or very high ambient temps?

On my Trio X running the 500W BIOS I am in the 60s Celsius with the auto fan curve at 23C ambient.

C


----------



## twisted0ne

Cavokk said:


> Yes, that's a bit high for gaming; maybe bad ventilation of your PC case or very high ambient temps?
> 
> On my Trio X running the 500W BIOS I am in the 60s Celsius with the auto fan curve at 23C ambient.
> 
> C


It's autumn in the UK so ambient temps aren't an issue, and I have 4 fans blowing directly at it; the CPU is on an AIO at the top of the case and doesn't go over 60, so it doesn't seem like an airflow problem.
Waiting on my Alphacool waterblock still, so hopefully that'll help with temps.


----------



## Cavokk

twisted0ne said:


> It's autumn in the UK so ambient temps aren't an issue, and I have 4 fans blowing directly at it; the CPU is on an AIO at the top of the case and doesn't go over 60, so it doesn't seem like an airflow problem.
> Waiting on my Alphacool waterblock still, so hopefully that'll help with temps.


#metoo - they keep postponing the delivery date; the latest update from Alphacool is delivery next week.

My 3x360 rads need something to work on, as my CPU barely heats them up now (all rads working as intake, with one big open exhaust at the rear of the case).

C


----------



## twisted0ne

Getting the impression Alphacool just has placeholder dates and doesn't actually have any idea when they're arriving. Doesn't fill me with confidence when they don't even have any contact information on their site...


----------



## changboy

Did I miss something? Because nobody talks about the RTX 3090 FTW3 Ultra here. I can't get any info from you guys... come on... talk to me lol.


----------



## Johneey

Shunt mod works fine now, thanks to @Thanh Nguyen.
15.2k Port Royal with 500 watts, wow: https://www.3dmark.com/pr/468279


----------



## Johneey

Thanh Nguyen said:


> I just flashed the Strix BIOS on my reference PNY 3090 and it works. HWiNFO64 shows GPU power 100W more than with the PNY BIOS. The performance in games is about the same I guess, but in Tom Clancy's Rainbow Six Siege the Strix BIOS gives more fps.


What, you flashed a 3x8-pin BIOS on a 2x8-pin card and it works???


----------



## Zurv

@Johneey you mean shunt and chilling. Much of your score is because your GPU is at 24c


----------



## Johneey

Zurv said:


> @Johneey you mean shunt and chilling. Much of your score is because your GPU is at 24c


No chilling it’s on 14 c room temp hahah . I went into the room on the bottum of the house haha. With my mora 360


----------



## Johneey

At 21 degrees ambient I got around 15110, with the GPU at 33C.


----------



## Tias

Thanh Nguyen said:


> I just flashed the Strix BIOS on my reference PNY 3090 and it works. HWiNFO64 shows GPU power 100W more than with the PNY BIOS. The performance in games is about the same I guess, but in Tom Clancy's Rainbow Six Siege the Strix BIOS gives more fps.


Is your PNY 3090 shunt modded, and is that the reason the Strix BIOS (3x8-pin) works on your 2x8-pin card?
Or do you know if it works to flash the Strix BIOS to a non-shunt-modded reference 2x8-pin card, like the PNY, TUF, etc.?


----------



## Thanh Nguyen

Tias said:


> Is your PNY 3090 shunt modded and is that the reason the Strix bios (3x8pin) works on your 2x8pin card?
> Or do you know if it works to flash the strix bios to a none shunt modded reference 2x8pin card, like PNY, TUF etc?


Don't know. I flashed after the mod.


----------



## dante`afk

motivman said:


> will FE bios work on 2X8 pin card?


No. The FE uses the FE PCB; any other board partner 2x8-pin card is the reference PCB.


----------



## Baasha

shiokarai said:


> So much for the 4K gaming utopia. Honestly, how big is your monitor, how far are you sitting, what's your viewing distance? I'm using an LG 38GN950, 38" 21:9, and it's great; I'm sitting about 60-70cm away and not getting any problems. I had a 4K 27" 144Hz before; didn't see any difference apart from that I can now get higher frame rates in games. Ever thought of that? Or is it too peony to change the resolution? I've been following your gymnastics with SLI for a few years now, and it's getting funny to watch, really. BTW, I changed from SLI to one card with the RTX 2080 Ti; the only game that was working so-so with it was Witcher 3, and that's it.


The monitor is 55" (Alienware AW5520QF) and I sit about 3 feet from it. After going to 4K 120Hz OLED, nothing else comes close - and I've had a LOT of different monitors, including the Asus 27" 4K 144Hz G-Sync panel.

Yea, I see what you're saying. I knew going with the 3090 SLI setup was going to be iffy at best; I mainly got it because of RDR2 (and now SOTTR), and those two games work great. If I can get DX11 games to work with SLI, then great. If not, meh. But yea, I think this will be my last SLI rig. lol

On my Z490 rig I have Titan RTX SLI, and because of the lack of CPU lanes (16 total), each GPU runs at x8, which makes the experience quite horrid. I think I will stick to single GPU on that rig (and my X99 rig).


----------



## dante`afk

NVIDIA GeForce RTX 3080 Ti Graphics Card To Feature 10,496 CUDA Cores, 20 GB GDDR6X Memory & 320W TGP, Tackles AMD's RX 6800 XT


NVIDIA seems to have finally finalized the specs for its enthusiast graphics card, the GeForce RTX 3080 Ti which tackles AMD's RX 6800 XT.




wccftech.com





3080 Ti, same performance as the 3090 with 20GB VRAM for $500 less.

Great, Thanks Nvidia.


----------



## Wrathier

dante`afk said:


> NVIDIA GeForce RTX 3080 Ti Graphics Card To Feature 10,496 CUDA Cores, 20 GB GDDR6X Memory & 320W TGP, Tackles AMD's RX 6800 XT
> 
> 
> NVIDIA seems to have finally finalized the specs for its enthusiast graphics card, the GeForce RTX 3080 Ti which tackles AMD's RX 6800 XT.
> 
> 
> 
> 
> wccftech.com
> 
> 
> 
> 
> 
> 3080Ti, same performance as 3090 with 20GB VRAM for 500$ less.
> 
> Great, Thanks Nvidia.


BS move by Nvidia. Honestly disappointed as hell.


----------



## pat182

Wrathier said:


> BS move by Nvidia. Honestly disappointed as hell.


still a lower bus and probably a lower-binned chip


----------



## Cavokk

dante`afk said:


> NVIDIA GeForce RTX 3080 Ti Graphics Card To Feature 10,496 CUDA Cores, 20 GB GDDR6X Memory & 320W TGP, Tackles AMD's RX 6800 XT
> 
> 
> NVIDIA seems to have finally finalized the specs for its enthusiast graphics card, the GeForce RTX 3080 Ti which tackles AMD's RX 6800 XT.
> 
> 
> 
> 
> wccftech.com
> 
> 
> 
> 
> 
> 3080Ti, same performance as 3090 with 20GB VRAM for 500$ less.
> 
> Great, Thanks Nvidia.





Wrathier said:


> BS move by Nvidia. Honestly dissapointed as hell.


It will be closer to 3090 performance, but with 4 GB less memory and a narrower memory bus though.

If the rumor is true, Nvidia must be under pressure for sure...

Interesting times ahead


----------



## shiokarai

Baasha said:


> The monitor is 55" (Alienware AW5520QF) and I sit about 3 feet from it. After going 4K 120Hz OLED, nothing else comes close - and I've had a LOT of different monitors including the Asus 27" 4K 144Hz G-Sync panel.
> 
> Yea, I see what you're saying. I knew going with the 3090 SLI setup is going to be iffy at best; I mainly got it because of RDR2 (and now SOTTR) and those two games work great. If I can get DX11 games to work with SLI, then great. If not, meh. But yea, I think this will be my last SLI rig. lol
> 
> On my Z490 rig, I have the Titan RTX SLI and because of the lack of CPU lanes (16 total), each GPU is at x8 and makes the experience quite horrid. I think I will stick to single GPU on that rig (and my X99 rig).


yeah, I've decided it's not worth it. It kind of narrowed the games I played: always going with the few titles SLI was working with, discouraged from playing other titles. Now it's fine.


----------



## Wrathier

Carillo said:


> My Strix pulled a 14 800 Port Royal Air cooled. +110/ +1022


I can set a custom curve to max PL at 2010MHz and +1100 mem, but even below 60 degrees the card throttles down below 2010 and my Port Royal score seems to stagnate at 13987.

Perhaps it's my 7900X CPU or something. Rocking an X299 build.


----------



## shiokarai

Wrathier said:


> BS move by Nvidia. Honestly dissapointed as hell.


Too early to tell if it's fake rumours or not. It doesn't make sense, as this would essentially be the same chip, right? They can't satisfy demand for the 3080/3090, yet they'd bring another high-end SKU this fast? Well, we'll see.


----------



## martinhal

shiokarai said:


> Too early to tell it's fake rumours or not, doesn't make sense as this would essentially be the same chip right? They can't satisfy the demand for the 3080/3090 yet they would bring another high end SKU this fast? Well, we'll see.


Yes, a pre-paper launch


----------



## Glerox

Finally got my hands on an ROG Strix OC.
I also ordered the Alphacool block, which seems to be the best performer, but there are no results yet for the upcoming one from Bitspower.


----------



## Johneey

Thanh Nguyen said:


> Dont know. I flashed after the mod.





Glerox said:


> Finally got my hand on an ROG strix OC.
> I also ordered the alphacool block which seems to be the best performer but I don't have results for the upcoming one from Bitspower.


yeah, I have the AC one too, 8°C delta, so amazing!


----------



## Sync0r

Thanh Nguyen said:


> Dont know. I flashed after the mod.


Yep, the Strix OC BIOS works with a shunted Zotac Trinity card. Getting locked voltage at 1.1V now


----------



## Foxrun

I've been using WoW fully maxed at 111% render resolution to stress my OC, and so far it's been far more effective at determining clocks and stability than any bench. I've got 0.893V @ 1980. When I try to push 2K+ I begin to run into a power limit and it downclocks; well, in WoW anyway, as that's the most demanding.


----------



## Johneey

Sync0r said:


> Yep STRIX oc bios works with a shunted Zotac Trinity card. Getting locked voltage at 1.1 now


thanks for the info, will put it on my shunted Palit 3090 now. You think it's safe for daily gaming?


----------



## Spiriva

Sync0r said:


> Yep STRIX oc bios works with a shunted Zotac Trinity card. Getting locked voltage at 1.1 now


Did you try it before the shunt mod ?


----------



## long2905

Spiriva said:


> Did you try it before the shunt mod ?


Won't work without the shunt mod. Your power consumption will increase, but the boost clock will be lower.


----------



## RagingCain

Baasha said:


> AW5520QF


I too have this amazing monitor.


----------



## Glerox

Johneey said:


> yeah I have the AC too 8c Delta so amazing !


Nice! What's your max stable OC clock speed? Do you hit the power limit? Have you tried the shunt mod?


----------



## Johneey

Spiriva said:


> Did you try it before the shunt mod ?


no not tried


----------



## Johneey

Glerox said:


> Nice! What's your max stable OC clockspeed? Do you hit powerlimit? Have you tried shunt mod?


it's shunted already. Average clock in Port Royal is 2170, max 2220.


----------



## Sync0r

Spiriva said:


> Did you try it before the shunt mod ?


No I didn't, heard it didn't work so never tried it.

Bumped me up a bit https://www.3dmark.com/pr/469026


----------



## mirkendargen

Got my Strix 3090 finally and put the Bykski block on. My loop is still bleeding so my flow rate isn't back up to normal yet, but currently I have a 2°C delta at idle and a ~13°C delta at 450W+ draw, good enough to keep the card below 40°C at full load, good enough for me!

I can confirm that a cutout in the acrylic for the stock cooler fan header (or maybe RGB, not sure) isn't deep enough and flexes the PCB a bit. It could be solved by Dremeling the acrylic a bit or shaving the header down a bit.

I haven't done any in-depth overclocking yet, but at stock it boosts to 1980 in Time Spy; it seems stable at +100 core and not stable at +150 core. I'll mess with it more tonight.


----------



## Glerox

Johneey said:


> its shunted already. average clock in Portroyal 2170 max 2220


That's impressive! May I ask which resistors you shunted on the ROG 3090, and with what? Thx


----------



## Johneey

Glerox said:


> That's impressive! May I ask which resistors have you shunted on the ROG 3090 and with what? Thx


I have the Palit; 5 mΩ soldered on all 6. You need all 6, otherwise you get throttling
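For anyone wondering why stacking works: soldering a second shunt on top puts it in parallel with the stock one, so the effective resistance drops and the card under-reads its own power draw by the same ratio. A rough sketch of that math (the 5 mΩ stock value and 350W limit are illustrative assumptions; check your card's actual shunts):

```python
# Rough math behind shunt stacking (illustrative values only).
# Two shunts stacked = two resistors in parallel:
# R_eff = (R1 * R2) / (R1 + R2)

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two shunts in parallel, in milliohms."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

stock = 5.0    # assumed stock shunt, mOhm
stacked = 5.0  # the shunt soldered on top, mOhm
r_eff = parallel(stock, stacked)  # 2.5 mOhm

# The card computes power from the voltage drop across the shunt,
# so it under-reads by the resistance ratio:
reading_scale = r_eff / stock  # 0.5 -> card sees half the real power

# An assumed 350W limit therefore lets roughly this much real power through:
real_power_at_limit = 350 / reading_scale  # ~700W

print(r_eff, reading_scale, real_power_at_limit)
```

That halved reading is also why shunting only some of the rails causes throttling: any unmodded rail still reports true power and hits its limit first.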


----------



## Sync0r

Johneey said:


> thanks for info will put it now on my palit 3090 shunted, u think its safe for daily gaming?


No idea 🙂 hopefully


----------



## Thanh Nguyen

Sync0r said:


> No idea 🙂 hopefully


What kind of cooling do you have? I have no idea why my card runs so hot: 3°C delta at idle and 18°C at load. It was cold two days ago and I was able to get the water temp down to 17°C; the card hit 35°C during PR, so I was 10 points behind you. Now you're massively ahead of me and it's hot outside again, so I can't cool it down to bench.


----------



## ivans89

Anyone have an idea for my problem? I shunted my EVGA 3090 FTW3 Ultra with 5 mΩ, and also tried 10 mΩ, but my PSU shuts off immediately when I start the PC and it won't boot. 1000W PSU...


----------



## Sync0r

Thanh Nguyen said:


> What kind of cooling u have? I have no idea why my card is too hot. 3c delta at idle and 18c at load. It was cold 2 days ago and I was able to cool the water temp down to 17c and the card hit 35c during PR so I was 10 points behind u. Now you have massive points ahead of me and it is hot outside again so I cant cool it down to bench.


Yeah, it's cold in Scotland now, 4°C ambient, perfect bench weather. Custom water loop: 3x480mm and 1x420mm rads, all 60mm thick, with push-pull fans and dual D5s.


----------



## shiokarai

Glerox said:


> Finally got my hand on an ROG strix OC.
> I also ordered the alphacool block which seems to be the best performer but I don't have results for the upcoming one from Bitspower.


got any reviews for strix 3090 blocks? links?


----------



## Johneey

Thanh Nguyen said:


> What kind of cooling u have? I have no idea why my card is too hot. 3c delta at idle and 18c at load. It was cold 2 days ago and I was able to cool the water temp down to 17c and the card hit 35c during PR so I was 10 points behind u. Now you have massive points ahead of me and it is hot outside again so I cant cool it down to bench.


I had the same problem; after renewing the liquid metal and pads, all good.


----------



## Johneey

ivans89 said:


> anyone have a idea For my problem: i shunted on my evga 3090 ftw3 ultra with 5mohm and tried with 10mohm, but my PSU shut off immediately when i start the pc and its not starting. 1000W PSU...


I hope for your sake you didn't break your card with the soldering


----------



## ivans89

No no, when I remove it everything works normally... that's the weird thing...


----------



## Johneey

Johneey said:


> Sounds not good to me. I hope for your sake you didn't break your card with the soldering


----------



## Johneey

ivans89 said:


> Nono, when I remove it everything working normal...that’s the weird thing...


Hmmm


----------



## mirkendargen

mirkendargen said:


> Got my Strix 3090 finally and put the Bykski block on. My loop is still bleeding so my flowrate isn't back up to normal yet, but currently I have a 2C delta at idle and a ~13C delta at 450w+ draw, good enough to keep the card below 40C at full load, good enough for me!
> 
> I can confirm that a cutout in the acrylic for the stock cooler fan header (or maybe RGB, not sure) isn't deep enough and flexes the PCB a bit. It could be solved by dremeling the arylic a bit or shaving the header a bit.
> 
> I haven't done any in-depth overclocking yet, but at stock it boosts to 1980 in Time Spy, it seems stable at +100 core, it seems not stable at +150 core. I'll mess with it more tonight.


Just did a bit more simple overclocking. At +125/+750 I'm getting 14400 in Port Royal; it's clearly power limited the whole way and I need to do a proper curve.


----------



## ivans89

Johneey said:


> Hmmm


yeah I’m not understanding it...


----------



## ScottRoberts91

Just received my Gaming X Trio 3090. Which BIOSes work on this? Presumably both the 450W and 500W Strix BIOS?


----------



## Pepillo

ScottRoberts91 said:


> Just received my Gaming X trio 3090, which Bios work on this? Presumably both the 450 and 500watt Strix Bios


Yes, they both work well. The 500W EVGA is better: 50W more, and it keeps all the video outputs. With the 480W Strix you lose a DP.


----------



## Jordel

Which would be the best BIOS for a 3090 Ventus 3X OC? I'm guessing the Gigabyte Gaming OC 390W?


----------



## dante`afk

there she is, thanks again @HyperMatrix 











and actually I don't even want to use her without a waterblock (and shunt mod)


----------



## Carillo

Wrathier said:


> I can set a custom curve to max pl @ 2010MHz and 1100 mem, but even at below 60 degrees the card throttles down below 2010 and my pl score seems to stagnate at 13987.
> 
> perhaps it’s my 7900X cpu or something. Rocking a x299 build.


Use the slider; you will score more per MHz than using the VF curve. That's just how 3DMark is, even if you're hitting the PL hard. I don't understand it myself.

EDIT: Also try lowering your mem OC to see if it scales positively; there might be some error correction. And disable GPU scheduling in Windows.


----------



## Carillo

Sync0r said:


> Yep STRIX oc bios works with a shunted Zotac Trinity card. Getting locked voltage at 1.1 now


Can you lock the voltage to 1.1V with the VF curve? My stock Strix gets max 1081mV, even when running a low-resolution Valley benchmark without hitting the PL.

EDIT: Why are you using the Strix BIOS? What you essentially do by using a 3x8-pin BIOS on a 2x8-pin card is lower your PL. On a 3x8-pin BIOS the power budget is split over the 3x8-pin connectors, so 480W divided by 3 = 160W max on each; in your case that's 320W total, plus whatever your stacked shunts gain you. Versus the Gaming OC: 390W divided by 2 = 195W on each. Yes, I know it's shunted, but that doesn't change the fact that you are lowering your power limit by using this BIOS.
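To put that per-connector arithmetic in one place, here's a quick sketch of the two budgets being compared (the even split per 8-pin is the simplifying assumption in this post; real BIOSes also budget the PCIe slot separately, which is ignored here):

```python
# Per-connector power budget when cross-flashing a BIOS
# (simplified: assumes the BIOS splits its 8-pin budget evenly
#  across connectors and ignores the PCIe slot budget).

def per_connector_budget(bios_limit_w: float, bios_connectors: int) -> float:
    """Watts each 8-pin is allowed under a given BIOS's power limit."""
    return bios_limit_w / bios_connectors

# Strix BIOS: 480W over 3x8-pin -> 160W each.
strix_per_pin = per_connector_budget(480, 3)

# On a 2x8-pin card, only two of those budgets exist:
strix_on_2pin_card = strix_per_pin * 2  # 320W total from the 8-pins

# Gaming OC BIOS: 390W over 2x8-pin -> 195W each.
gaming_oc_per_pin = per_connector_budget(390, 2)
gaming_oc_total = gaming_oc_per_pin * 2  # 390W total

print(strix_on_2pin_card, gaming_oc_total)
```

Under these assumptions the 480W Strix BIOS actually gives a 2x8-pin card a smaller 8-pin budget (320W) than the 390W Gaming OC BIOS (390W), which is the point being argued; a shunt mod scales whichever budget applies.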


----------



## Johneey

I flashed the Strix OC BIOS on my Palit 3090 (2x8-pin, shunted) now. It works, but the third 8-pin reading shows the same as the first; does that mean I can calculate the whole power use from it? Like 418W minus the third pin's draw? That would mean I had 318 x 2 = 640 watts. Is that right?


https://www.3dmark.com/3dm/52572483


Ambient 30°C because of my wife... -.-


----------



## Falkentyne

ivans89 said:


> yeah I’m not understanding it...


Did you solder new shunts on by removing the old shunts completely, or did you stack the shunts?
Can you show us what shunts you used? A clear picture would be nice. (Topside and bottom side please).

What did you use to make the shunt attach to the original shunt (what material?) Solder? 

When does it shut off? When you try to turn on the computer, or when you run a game?
PSU shutting off instantly is from a short or overcurrent protection.


----------



## pat182

zotac price drop


----------



## Johneey

Falkentyne said:


> Did you solder new shunts on by removing the old shunts completely, or did you stack the shunts?
> Can you show us what shunts you used? A clear picture would be nice. (Topside and bottom side please).
> 
> What did you use to make the shunt attach to the original shunt (what material?) Solder?
> 
> When does it shut off? When you try to turn on the computer, or when you run a game?
> PSU shutting off instantly is from a short or overcurrent protection.


Even if the shunt isn't soldered correctly and doesn't make a circuit, the PSU shouldn't crash the card. Very weird.


----------



## Sync0r

Carillo said:


> Can you lock the voltage to 1.1 with VF curve ? My stock Strix, is getting max 1081mV, even when running low resolution Valley benchmark without hitting PL
> 
> EDIT: Why are you using the Strix Bios ? What you essentially do using a 3 pin bios on a 2 pin card, is lowering your PL. On a 3 pin bios the voltage drop is measured over 3x8 pin, so 480w divided by 3 = 160 watt on each max, so in your case 320 watt total plus what ever resistance you are using on your stacked shunts..vs Gaming OC 390w divided by 2 = 195 watt on each. Yes i know it's shunted, but that does not change the fact that you are lowering your power limit using this bios


From my findings it doesn't work as you described. With the Gaming OC BIOS it's something like 78W for the PCIe slot, then 156W per 8-pin on the 390W BIOS. The Asus BIOS has a different ratio between the PCIe slot and the 8-pins than the Gaming OC, so it's allowing me to pull more from the 8-pins before hitting the power limit, and because I'm shunted I can pull more. I'll send a screenshot of GPU-Z later on. All I know is I can now set the curve at 1.1V with a 2190MHz core in Port Royal and it just sits at that clock and voltage; I couldn't do that with the Gigabyte OC.


----------



## Carillo

Sync0r said:


> From my findings it doesn't work as you described. With the gaming oc bios it's something like 78 pcie slot then 156 per 8 pin for the 390w bios. The Asus bios will have a different ratio to the pcie slot to the gaming oc so it's allowing me to pull more from the 8 pins before power limit and because I'm shunted I can pull more. I'll send a screenshot later on of GPUz. All I know is I can now set the graph at 1.1 with 2190mhz core in port royal and it just sits at that clock and voltage, couldn't do this with gigabyte oc.


Have you tried your stock bios vs the Strix in Port Royal after the shunt mod ?


----------



## olrdtg

Johneey said:


> even if the shunt isnt correctly and not do a circuit the card psu shoulnd crash. very wired.


An incorrect shunt installation can very much lead to a short which can trip OCP (you'd be SUPER lucky not to fry your GPU in this case though).



ivans89 said:


> yeah I’m not understanding it...


As Falkentyne said, please post some pictures of each shunt pre-mod and post-mod. Something is likely causing a short which can trigger OCP causing the PSU to shut down to prevent dead components or the PSU from dying. Take some pics and check very carefully the surrounding areas, make sure you didn't accidentally blob some solder onto other SMDs in the process. A proper shunt mod should not cause this issue. 

_That OR_ your PSU might be struggling, and an immediate boost in current going to the GPU might be triggering OCP. Can you test your power supply under a full synthetic load? What is your PSU rated for? Brand/model?

Does your motherboard have any auxiliary power inputs for the PCIe slots? Did you shunt the PCIe resistor?


----------



## Sync0r

Carillo said:


> Have you tried your stock bios vs the Strix in Port Royal after the shunt mod ?


Not tried stock after doing all the resistors, only after the 2x8-pin shunts. It didn't work as well as the Gigabyte due to there being no adjustment on the PL slider. I know it shouldn't matter, but I was hitting some PL earlier; might have been the PCIe slot at the time.


----------



## Carillo

ivans89 said:


> anyone have a idea For my problem: i shunted on my evga 3090 ftw3 ultra with 5mohm and tried with 10mohm, but my PSU shut off immediately when i start the pc and its not starting. 1000W PSU...


Sounds like an overcurrent somewhere... Did you check that your cooler or backplate clears all the shunts, if you were stacking? Does the card smell burned?


----------



## Carillo

Sync0r said:


> Not tried stock after doing all resistors, only after 2x8 pins shunts. It didn't work as well as gigabyte due to no adjustment to PL slider, I know it shouldn't matter but I was hitting some PL earlier, might of been the pcie slot at the time.


I also hit the PL hard when only shunting the 2x8-pins; I had to do all 6 of them (on my Gigabyte).


----------



## ScottRoberts91

Pepillo said:


> Yes, they both work well. Better the 500w EVGA, 50w more and keeps all the video outputs, with the 480w Strix you lose a DP.


What's the best way of getting this on? nvflash is kicking out an incompatibility error


----------



## chispy

Finally my Asus Strix OC 3090 just arrived, along with my EK water block. Shunts to be done later, then a test drive on my -21°C water chiller


----------



## Johneey

Carillo said:


> Have you tried your stock bios vs the Strix in Port Royal after the shunt mod ?


Hey Carillo, I flashed the Strix too, and at 27°C ambient I now get the same Port Royal score at stock as with my flashed 390W Gigabyte BIOS at 8°C ambient.


----------



## Johneey

Carillo said:


> I did also hit PL hard when only shunting the 2x8 pins. I had to do all 6 of them. ( On my Gigabyte)


Me too, but with the Strix I don't hit it; only sometimes at 1.1V it drops back to 1.083


----------



## Carillo

Johneey said:


> Me too but with strix I don’t hit only sometimes on 1.1 V goes back to 1.083


interesting. Thanks


----------



## Falkentyne

Johneey said:


> Me too but with strix I don’t hit only sometimes on 1.1 V goes back to 1.083


What happens if you slide the MSI Afterburner Voltage Slider to 100%?
This is usually supposed to unlock another voltage step.


----------



## Johneey

Falkentyne said:


> What happens if you slide the MSI Afterburner Voltage Slider to 100%?
> This is usually supposed to unlock another voltage step.


It's already at 100%. But I think it doesn't make any difference, does it? The voltage slider doesn't work on Ampere, or does it?


----------



## ScottRoberts91

If anyone in the UK is still looking for a 3090 I have a spare Zotac I'm happy to sell for £1550


----------



## Falkentyne

Johneey said:


> its already to 100%. But i think its not make any diffrence or? The voltage slider didnt work on Ampere or?


It unlocks +13 mhz and one higher voltage point IF the temps are below a certain point (usually below 56C).
Example: with +0% mv, +135 mhz is 2085 mhz / 1.081v at 50C, and 100%mv is 2100 mhz / 1.081v
But at 62C, it becomes 2070 mhz / 1.075v.

However I can't seem to always reproduce this at idle even with the 1.10v locked in the VF curve. I can only seem to make it work reliably when alt tabbed from Modern Warfare while the game is locked to 60hz.


----------



## Manya3084

Finally got my EK water block for the Asus TUF 3090.

The highest score yet.

Note I have also done a shunt mod.












https://www.3dmark.com/spy/15013596


----------



## rawsome

ScottRoberts91 said:


> What's the best way of getting this on? NV Flash is kicking out an incompatible error


Did you get the latest nvflash? Just got my Trio X now and it flashed OK. I had that error at some point too... I was accidentally using an older version that was floating around on my PC.

*@all Trio X users*
This is my best score so far:
-500W EVGA RTX 3090 VBIOS (the newer one)
-G-Sync off, 3D preferences set to performance (is there anything else?)
-core +177, mem +1000


https://www.3dmark.com/pr/469896



*@all* One thing I wonder: with the PL at 380W I can run a +200 clock, but with the PL at 500W it crashes after 30 seconds max. Does that mean my chip can't run 2100MHz stable in PR, as that seems to be the wall I'm hitting? Or is it just not stable as the card heats up? Or is it the voltage?
I'm asking questions here; maybe someone can enlighten me on how this relates.

I think this thread needs more of an FAQ at the beginning; at least it should answer the "which BIOS for my card" question. Is there an official overclock.net wiki somewhere?


----------



## Johneey

Manya3084 said:


> Finally got my EK water block for the Asus TUF 3090.
> 
> The highest score yet.
> 
> Note I have also done a shunt mod.
> 
> View attachment 2464380
> 
> 
> 
> 
> https://www.3dmark.com/spy/15013596


Go for the Strix BIOS if you're shunted. I got 12k+ with my shunted Palit and the Strix BIOS


----------



## Falkentyne

rawsome said:


> did you get the latest nvflash? Just got my trio x now and it flashed ok. had that at some point too... was accidential using a older version that was flying around on my pc.
> 
> *@all trio X users*
> This is my best score so far:
> -500W bios EVGA RTX 3090 VBIOS (the newer one)
> -gsync of, 3d preferences performance (is there something else?)
> -core +177 mem +1000
> 
> 
> https://www.3dmark.com/pr/469896
> 
> 
> 
> *@all* one thing i wonder, if i have the PL at 380W i can run +200clock, but with PL 500W it crashes after 30 seconds at max. does that mean my chip is not able to run on 2100mhz PR stable as this seems to be the wall im hitting? Or is it just not stable as the card heats up? or is it the voltage?
> Im asking questions here, maybe someone can enlighten my how this relates.
> 
> I think this thread needs more FAQ at the beginning, at least it should answer the "which bios for my card" question. is there a official overclock.net wiki somewhere?


Yes, it means it's not stable at 2100 MHz. Having the lower power limit in place causes the clocks to drop, but the +200 MHz offset moves the entire curve up, so the card will use the "reduced" clock (for example 1920 MHz) at a lower voltage (just look at the V/F curve).

Try 2070-2085 MHz, and you can also try putting the MSI Afterburner voltage slider over to the far right at 100%. This can unlock 1.087V and 1.10V, however it still seems uncertain under what conditions the card will use those two steps; it might be at very low temps. One thing to keep in mind is that if the voltage goes one point to the right when you set the slider to 100% mV, the core clock will also increase by 15 MHz (one step). The silicon lottery is very real with these chips, so don't expect cards to be stable at 2100 MHz. Some even have problems doing 2025 MHz.
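The offset behaviour being described can be sketched with a toy V/F curve (the voltage/clock points below are made-up illustrative values, not real Ampere curve data; the point is that an offset shifts the clock at every voltage, so a power-limited drop to a lower voltage still lands on a higher clock than stock):

```python
# Toy model of a GPU V/F curve with an offset (illustrative values only).
# Each point is (voltage_mV, stock_clock_MHz); an offset shifts every clock up.

CURVE = [(900, 1800), (1000, 1905), (1075, 1995), (1081, 2010)]

def clock_at(voltage_mv: int, offset_mhz: int = 0) -> int:
    """Clock the card runs at a given voltage point, with the offset applied."""
    for v, mhz in CURVE:
        if v == voltage_mv:
            return mhz + offset_mhz
    raise ValueError("voltage not on curve")

# With a +200 offset, the top point would be 2210 MHz at 1081 mV...
top = clock_at(1081, 200)
# ...but when the power limit forces a drop to the 1000 mV point,
# the card still runs the shifted clock there:
limited = clock_at(1000, 200)  # 2105 MHz at a lower voltage

print(top, limited)
```

This is why a big offset can look "stable" under a low power limit (the card mostly sits at the lower, power-limited points) and then crash when a higher limit lets it hold the top of the shifted curve.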


----------



## Glerox

shiokarai said:


> got any reviews for strix 3090 blocks? links?


Well, EKWB blocks are in my experience always the worst performers. For Alphacool, I checked Igor's Lab's review of the reference RTX 3090/3080 block and he was quite impressed with it (I guess it's going to be the same for the Strix block). For Bitspower, no review yet.









Alphacool Aurora Plexi GPX-N RTX 3090/3080 GPU Water Block Review - How to turn 340 watts on a GeForce RTX 3080 into a frosty zone | igor'sLAB


With the Alphacool GPU water block Aurora Plexi GPX-N RTX 3090/3080 I want to start the new round of GPU water blocks, but this time for Ampere and not Turing. A water cooling system makes sense with…




www.igorslab.de


----------



## HyperMatrix

dante`afk said:


> there she is, thanks again @HyperMatrix
> 
> View attachment 2464369
> 
> 
> 
> and actually I don't even wanna use her without waterblock (and shuntmod)


Grats mate. Hope she clocks better than my card. Haha.


----------



## ScottRoberts91

rawsome said:


> did you get the latest nvflash? Just got my trio x now and it flashed ok. had that at some point too... was accidential using a older version that was flying around on my pc.
> 
> *@all trio X users*
> This is my best score so far:
> -500W bios EVGA RTX 3090 VBIOS (the newer one)
> -gsync of, 3d preferences performance (is there something else?)
> -core +177 mem +1000
> 
> 
> https://www.3dmark.com/pr/469896
> 
> 
> 
> *@all* one thing i wonder, if i have the PL at 380W i can run +200clock, but with PL 500W it crashes after 30 seconds at max. does that mean my chip is not able to run on 2100mhz PR stable as this seems to be the wall im hitting? Or is it just not stable as the card heats up? or is it the voltage?
> Im asking questions here, maybe someone can enlighten my how this relates.
> 
> I think this thread needs more FAQ at the beginning, at least it should answer the "which bios for my card" question. is there a official overclock.net wiki somewhere?


Thank you. How much additional performance does this grant you whilst gaming? I assume you no longer hit a power limit and the clock-bin drop that goes with it.


----------



## changboy

Just finished installing my FTW3 Ultra, and I also installed Precision X, but I've never used it before, so I need to learn it a bit, because for a reason I don't know my card boosts to 2175MHz and games crash after 15 or 20 seconds lol. Just installed another 8-pin cable for the third connector on the card, and I had a hard time just routing that extra cable; look at my photo and see how full the back of my case is. I have HDDs everywhere in this case, some even hiding on the side of the radiator; it's just crazy. I've never seen a Corsair 750D as full as this.


----------



## changboy

I installed 3DMark from Guru3D and I want to try the Port Royal benchmark, but I can't; I'd need to buy the program and they're asking $37.38 CAD. Is this normal, and is that the way to go to run those benchmarks?


----------



## pantsoftime

changboy said:


> I install 3dmark from guru and i wanna try port royal benchmark but cant or i need buy this program and they ask me $37.38cad, is this normal and the way to go to try those benchmark ?


Keep an eye on the steam version. It sometimes drops to $5 for the entire suite.


----------



## DrunknFoo

changboy said:


> I install 3dmark from guru and i wanna try port royal benchmark but cant or i need buy this program and they ask me $37.38cad, is this normal and the way to go to try those benchmark ?


What makes you think this is not normal? It isn't freeware... Wait for a sale if you want to save... but if you can afford a 3090, what's $40 lol


----------



## changboy

Ok guys, thank you for the info. I will wait for a deal on Steam hehehe; at $40 I'd rather put that toward a real game lol.
$40 is big money hahahaha. I had pre-ordered Cold War and asked for a refund, and they refunded me because it comes free with my 3090, so I will buy Watch Dogs: Legion with the same money hehehe. But I haven't found out yet how to claim my free Cold War with my card. I've done a test with Red Dead 2, and finally I can crank all the settings to max; the game is a beauty.


----------



## bmgjet

Don't know if you still can, but I got a lifetime Futuremark membership for $5 off a key website 10 years ago.
That's gotten me the advanced versions of all their benchmarks since, as the same key works in all of them. Maybe look around to see if the same thing still exists, or just get 3DMark from a key website.
$40 is more than it's worth IMO.


----------



## Stampede

Johneey said:


> i flashed now the strix OC bios on my Palit 3090 2x 8 Pin shunted. it works but the 3 Pin shows same like 1 pin, means that i can calculate hole power use? Like 418 - 3 PIN Power draw? Means i had 318 x 2 = 640 Watts. Is it right?
> 
> 
> https://www.3dmark.com/3dm/52572483
> 
> 
> Ambiente 30c because of my Wife....-.-


Hi Johneey,

This is quite interesting. I think the limiting factor has been the PCIe slot power draw on the 2x8-pin cards. Can you please check in GPU-Z or HWiNFO the PCIe slot power draw on the Strix vs the Gigabyte BIOS?

I think the Strix BIOS budgets less PCIe slot power, hence it can allow more power through the 8-pins. Even though it's only 150W max per connector by spec, your shunt mod will allow 300W.
300 + 300 + PCIe power (100?) = 700W max in theory... if voltage and frequency allow it.
On a 3x8-pin card the PCIe slot's 75W works out to 75/3 = 25W of slot budget per 8-pin; on a 2x8-pin card that gives max 50W (doubled by the shunt mod = 100W). This means the card is not power limiting on the PCIe slot (75W), and it's safer than drawing 150W from the PCIe slot.


----------



## changboy

I will check out that benchmark bundle, but I have done the one test I was able to do, Time Spy, so I don't know if my result is good. I just OC'd a bit for the first time and didn't try to adjust much; I haven't updated the BIOS, my card is still out of the box:


----------



## Sky3900

Made it to 15K! FE, silver paint shunt mod, air cooled. 



https://www.3dmark.com/pr/470948


----------



## HyperMatrix

Sky3900 said:


> Made it to 15K! FE, silver paint shunt mod, air cooled.
> 
> 
> 
> https://www.3dmark.com/pr/470948


Nice work! Any idea how much extra power you were able to pull with silver paint?


----------



## Thoth420

I noticed Newegg listed the Gigabyte Vision model and of course has it listed as sold out. Did this card already sell, or is that just a placeholder? Anyone?! Sorry, I've been away from the web a lot for the past week.


----------



## ExDarkxH

Anyone try out the new DXR test yet?


----------



## Sky3900

HyperMatrix said:


> Nice work! Any idea how much extra power you were able to pull with silver paint?


I think I'm pulling somewhere between 500W and 525W. Power readings in GPU-Z decreased about 23% from my baseline test. Waiting for my Kill A Watt to arrive so I can verify.
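As a back-of-envelope check of the estimate above: if the paint makes the card under-read by ~23%, the real draw at any reported reading is reported / (1 - 0.23). The 400W reported cap below is an assumed figure for illustration, not from the post.

```python
# If the silver-paint shunt mod lowers the sensed power by ~23%, the real draw
# at any reported reading is reported / (1 - underread).

def actual_power(reported_w: float, underread: float = 0.23) -> float:
    return reported_w / (1 - underread)

print(round(actual_power(400)))  # ~519 W at a reported 400 W cap
```

That lands squarely inside the 500-525W estimate quoted above.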


----------



## Sync0r

Stampede said:


> Hi Johneey,
> 
> This is quite interesting, I think the limiting factor has been the Pcie power draw on the 2 pin cards. Can you please check on gpuz or Hwinfo for the pcie power draw on strix vs gigabyte bios?
> 
> I think the Strix bios uses less pcie power, hence it can allow more power for the 8pin power. even thou its only 150watts max per connector, your shunt mod will allow 300watts.
> 300+300+ pcie power(100?) = 700watts max in theory...if voltage and frequency allows it.
> The power draw ratio of the pcie power to 3 x 8pin card= 75watts/3 = 25watt per 8pin. if using 2 x 8pin will give max 50watts(shunt mod gives x 2= 100watt). This means that the card is not power limiting on the pcie power(75watts), and its safer then drawing 150watts from the pcie slot.


Yes, this is what I am seeing. The PCIe slot consumption is lower on the Strix BIOS. However, I shunted the PCIe slot when I was on the Gigabyte BIOS and I wasn't hitting the 75W limit, yet I was still power limited, so there was another limit at play on the Gigabyte BIOS.
Here are the new power figures from the Strix BIOS. Much more is being drawn from the 8-pins. I've left the 3rd 8-pin out of the spreadsheet; it shows the same power draw as the 1st 8-pin, but this is a 2x 8-pin card, so it's irrelevant (370.2 = (2 x 115.9) + 98.7 + 39.7). I've seen it briefly draw about 530W total board power in Control.


----------



## Mike211

*2 EVGA GeForce RTX 3090 FTW3 ULTRA in SLI*


----------



## Turokisme

Has anyone tried the 3090 AORUS Xtreme, and how does it compare to the FTW3 and Strix OC? I can't find any info on its power limit or performance metrics.

Thanks!


----------



## Somandarin

guys!!! I bought the Palit RTX 3090 for € 1900!! I'm really happy!! happy happy! ))))


----------



## Johneey

Somandarin said:


> guys!!! I bought the Palit RTX 3090 for € 1900!! I'm really happy!! happy happy! ))))


Happy for a €1900 Palit? Lol they scammed u dude, I paid €1549.


----------



## Foxrun

Sky3900 said:


> Made it to 15K! FE, silver paint shunt mod, air cooled.
> 
> 
> 
> https://www.3dmark.com/pr/470948


Could you link the pen? I’m assuming you scraped all 6 of the resistors, cleaned them, and applied the paint? I’ve done it with my last two gpus.


----------



## ivans89

Falkentyne said:


> Did you solder new shunts on by removing the old shunts completely, or did you stack the shunts?
> Can you show us what shunts you used? A clear picture would be nice. (Topside and bottom side please).
> 
> What did you use to make the shunt attach to the original shunt (what material?) Solder?
> 
> When does it shut off? When you try to turn on the computer, or when you run a game?
> PSU shutting off instantly is from a short or overcurrent protection.


I stacked the shunts. I used Panasonic shunts (Shunt). They should be the correct ones.

Yes, I'm using solder for it.

It's not starting; the PSU shuts off immediately. It starts again if I unplug the 3x PCIe power cables and switch the PSU off at the mains.
Can't believe it's overcurrent (Corsair AX1000).





olrdtg said:


> An incorrect shunt installation can very much lead to a short which can trip OCP (you'd be SUPER lucky not to fry your GPU in this case though).
> 
> 
> 
> As Falkentyne said, please post some pictures of each shunt pre-mod and post-mod. Something is likely causing a short which can trigger OCP causing the PSU to shut down to prevent dead components or the PSU from dying. Take some pics and check very carefully the surrounding areas, make sure you didn't accidentally blob some solder onto other SMDs in the process. A proper shunt mod should not cause this issue.
> 
> _That OR_ your PSU might be struggling, and an immediate boost in current going to the GPU might be triggering OCP. Can you test your power supply under a full synthetic load? What is your PSU rated for? Brand/model?
> 
> Does your motherboard have any auxiliary power inputs for the PCIe slots? Did you shunt the PCIe resistor?


I removed them already, but I will repeat it later and post some pics.
I tested 2 different PSUs, one for the mainboard/CPU and one only for the GPU. It's still the same: the PSU on the GPU immediately turns off.
They are the Corsair AX1000 and a Seasonic Prime 750 Gold.

The PCIe shunt was not modded.

Will post some pics later.


----------



## pat182

ExDarkxH said:


> Anyone try out the new DXR test yet?


The 500W BIOS? I got 61.5 fps on the stock Strix BIOS, didn't OC the mem much tho.


----------



## ExDarkxH

Yeah, that's the BIOS I'm using on my 3090 FTW3 Gaming.
I also left the memory at +900, but this test isn't very stressful on my system, so I want to go back to a +1050 memory offset. I'm not going to play with this benchmark anymore until I get my waterblock, though.

With a higher core clock I'll shoot for 63.5-64.


----------



## changboy

Is the Optimus the only waterblock for the FTW3? It costs $378, which for me in Québec means around $600. That's a bit crazy; maybe there will be other blocks soon with similar performance at a lower price. What do you think?


----------



## Baasha

Guys, has anyone flashed a different BIOS to the 3090 FE? Is it safe given the unique PCB of the FE card? It looks like the FE tops out at 400W (398W to be precise) during gaming/benchmarks, and I would like to use the Strix OC BIOS (480W). Is this advisable or not? I definitely do NOT want to brick the card, especially since it doesn't have dual BIOS (aka a BIOS switch).

Also, I notice the GPU clock sometimes goes to 2170, 2160, etc. but comes back down to 1960 MHz very quickly, even when temps are ~50C. What gives?


----------



## Johneey

Baasha said:


> Guys, has anyone flashed a different BIOS to the 3090 FE? Is it safe given the unique PCB of the FE card? It looks like the FE tops out at 400W (398W to be precise) during gaming/benchmarks - would like to use the Strix OC BIOS (480W) - is this advisable or not? I definitely do NOT want to brick the card especially since it doesn't have multiple BIOS (aka BIOS switch).
> 
> Also, I notice that the GPU clock sometimes goes to 2170, 2160 etc. but comes back to 1960Mhz very quickly even when the temps are ~ 50C. What gives?


You can't flash the FE; you need to shunt.


----------



## Zurv

I got an email from Bitspower RE: 3090 FE blocks (short version, they are shipping next friday)



> Dear Customer,
> Thanks for your pre-order of 3090FE water block.
> It will be available by next Friday, maybe faster!
> We will arrange the package by Fedex priority.
> It takes about 3-5 working days for shipping.
> Once the block is available, we will give you the tracking number.
> Any question, feel free to contact us.
> 
> *Best regards!
> Lily Wong*


----------



## Sky3900

Foxrun said:


> Could you link the pen? I’m assuming you scraped all 6 of the resistors, cleaned them, and applied the paint? I’ve done it with my last two gpus.


Yep. scrape, clean, and paint all six. Bought it from EIO.









Mg Chemicals 842AR-P Silver Conductive Pen (www.eio.com)

https://www.mgchemicals.com/downloads/tds/tds-842ar-p.pdf


----------



## devilhead

So on air the 3090 Strix was able to get 14963 in Port Royal; the FE on water got 15150. On this Strix I got unlucky with memory: max +1100, anything over just crashes, while both 3090 FEs could do +1400 on memory. https://www.3dmark.com/3dm/52610516


----------



## ExDarkxH

Yeah, I can't even get that high on memory. I usually run at +900, +950 max; +1050 usually crashes.

The thing about memory, though, is that it doesn't really move the needle much.


----------



## rawsome

devilhead said:


> So On Air 3090 Strix was able to get Port Royal 14963, FE on water 15150. On this strix i got unlucky with memory, max +1100, anything over - just crash, meanwhile both 3090FE could do +1400 on memory. https://www.3dmark.com/3dm/52610516


did you shunt it already? which ones? what was the score without shunting?


----------



## devilhead

rawsome said:


> did you shunt it already? which ones? what was the score without shunting?


The 3090 Strix is without shunt on air. On the 3090 FE I used liquid metal on the 2x PCIe resistors, but got just ~30W extra from that.


----------



## motivman

Best I can do on my PNY reference card, no shunt mod: +250 on core, +1000 on memory, voltage maxed out in Afterburner. I can run +200/+1000 with stock voltage, but my score is a little lower. My card wants too much power; it's pretty much pegged at 390W the entire time (on the Gigabyte 390W BIOS). I paid $1800 with tax for my card, and it was so hard to find that I'm too chicken to shunt mod it. I know I would hit 15000 easily in PR if I shunted this card... SMH

My card is on water with an Alphacool waterblock. 29C ambient temps, max 51C load temps.



http://www.3dmark.com/pr/472792



*Sky3900 *-- can you post a detailed guide on how you shunted with the conductive pen? I might have to try this, jealous of you guys hitting 15K plus on PR.


----------



## Falkentyne

devilhead said:


> 3090 strix without shunt on air, 3090FE have used liquid metal on 2x pcie resistors, but just ~30w extra from that.
> View attachment 2464498


You absolutely _SHOULD NOT_ use liquid metal on the shunts!!
Even if you coat them with nail polish, the LM can still eat away at the solder connecting the shunt to the PCB, unless that were somehow protected beforehand.
You should use either a conductive silver pen or conductive paint of the same type. The paint is easier to apply than the pen: people have had issues with the pen drying out, while the jar of paint can just be dipped into with a toothpick and then screwed shut again.

You can use this: https://www.amazon.com/gp/product/B01MCXW1Y1/
Or the pen version of the exact same material: https://www.amazon.com/MG-Chemicals-842AR-P-Silver-Conductive/dp/B01LYXQE0M/
Even though the jar is a lot more expensive, it's going to last much longer before drying out (just screw it shut immediately).

Treat it like an aerosol: make sure you shake it vigorously before using. The pen has a shaker ball already.


----------



## olrdtg

Zurv said:


> I got an email from Bitspower RE: 3090 FE blocks (short version, they are shipping next friday)


I got the same email after they told me it would be shipping this week. So, I'm taking it with a huge grain of salt :\


----------



## Glerox

Falkentyne said:


> You absolutely _SHOULD NOT_ use liquid metal on the shunts!!
> Even if you coat them with nail polish, the LM can still eat away at the solder connecting the shunt to the PCB, unless somehow that were protected before-hand.
> You should use either a conductive silver pen or conductive paint of the same type (the paint is easier to apply than the pen as people have had issues with the pen drying out while the paint can just be dipped in with a toothpick then screwed back.
> 
> You can use this: https://www.amazon.com/gp/product/B01MCXW1Y1/
> Or the pen version of the exact same material: https://www.amazon.com/MG-Chemicals-842AR-P-Silver-Conductive/dp/B01LYXQE0M/
> Even though the jar is a lot more expensive, it's going to last much longer vs drying out (Just screw it back instantly).
> 
> Treat it like an aerosol--make sure you shake it vigorously before using. The pen has a shaker already.


Have you used that instead of soldering new resistors?


----------



## Falkentyne

Glerox said:


> Have you used that instead of soldering new resistors?


@Sky3900 already did. 
And I can't solder new resistors. I have a metal bar in my spine and don't possess the skill or physical ability for such tedious work, or the proper equipment, even though I do have a Hakko soldering iron and a desoldering pump. 

So I will be doing just that: painting over the resistors, since stacking is going to cause contact-resistance issues, with the sides being lower than the middle on the originals. I already have all the equipment ready except 1.5mm thermal pads (which are coming today). I don't know when I will do it, since Call of Duty never hits 400W at 1080p max settings. Overwatch is hammering it at 4K @ 165Hz/165fps though (1080p + 200% render scale), which I'm running with Fast Sync + RTSS scanline sync.


----------



## Glerox

Falkentyne said:


> @Sky3900 already did.
> And I can't solder new resistors. I have a metal bar in my spine and do not possess this type of skill or physical ability to do such tedious work, or the proper equipment, even though I do have a hakko soldering iron and desoldering pump.
> 
> So I will be doing just that--painting over the resistors, since stacking is going to cause contact resistance issues with the sides being lower than the middle on the originals. I already have all the equipment ready except 1.5mm thermal pads (which are coming today). I do not know when I will do it since Call of Duty never hits 400W at 1080p max settings. Overwatch is hammering it at 4k @ 165 hz/165 fps though (1080p + 200% render scale), which I'm using with Fast Sync + RTSS scanline sync.


I think I'll go this way too. I did that with liquid metal back in the day on the Titan XP.
This time I'll try the silver paint, though.

What do you need the thermal pads for?

I have a ROG Strix OC going under water. I'm trying to find out how many shunts I have to paint to remove any power limit.


----------



## changboy

I don't totally understand your memory OC. This is a GPU-Z screenshot of my OC after 2 or 3 tests. Can you post a screenshot of your GPU-Z so I can see the difference between yours and mine? I'm new to OCing with Precision X1 and these RTX cards lol.


----------



## Falkentyne

Glerox said:


> I think I'll go this way too. I did that with liquid metal back in the days on the Titan XP.
> However I'll try the silver paint this time.
> 
> What do you need the thermal pads for?
> 
> I have a ROG strix OC going under water. I'm trying to find how many shunts I have to paint to prevent any power limit.


Sky3900 said that the thermal pads must be replaced when disassembling the cooler. He tried to re-use them but had 5C higher temps until he replaced them with new Fujipoly pads (he used 1.5mm on each side). He mentioned one side seemed to have 2mm pads and the other 1.0mm, but the 1.5mm pads fit both sides and compressed enough as well.

I have a giant 145x145mm square of 0.5mm Arctic pads, plus 1.0mm and 1.5mm pads, so I'm good to go (I can just stack if I need 2mm or 3mm in some spots).


----------



## mirkendargen

Got my Strix with a Bykski block and a Bitspower RAM block on the backplate dialed in. The 500W EVGA BIOS is a solid winner on Strixes, and it disables the middle DisplayPort, NOT the second HDMI 2.1 port. Running it at +160 core (+175 crashes) and +1000 memory (benches higher than +900; +1100 crashes), I'm getting 14643 in Port Royal with an average core clock of 2119. Not on par with some of the epic "stable at 2200+" cores I've seen a few people here get, but not bad.


----------



## Dreams-Visions

Hey folks, what are recommended waterblock brands? I was looking at an Alphacool for the XTrio, mostly because it's one of the only ones I've seen. But I'm new to waterblock manufacturers so I don't know if they're reputable or not. I see the comments about EK in here so I'm assuming they are not recommended presently. 

Any recommendations for an XTrio, Strix, and possibly a FE would be great. Thanks.


----------



## HyperMatrix

Dreams-Visions said:


> Hey folks, what are recommended waterblock brands? I was looking at an Alphacool for the XTrio, mostly because it's one of the only ones I've seen. But I'm new to waterblock manufacturers so I don't know if they're reputable or not. I see the comments about EK in here so I'm assuming they are not recommended presently.
> 
> Any recommendations for an XTrio, Strix, and possibly a FE would be great. Thanks.


AquaComputer with Active Backplate if available for your card. Best out there. Strix model is being made now. Not sure about the others.


----------



## Sky3900

motivman said:


> best I can do on my PNY reference card, no shunt mod... +250 on core, +1000 on memory, voltage maxed out in afterburner. I can run +200/1000 with stock voltage, but my score is a little lower. My card is requiring too much power, pretty much pegged at 390W the entire time (on Gigabyte 390W bios), I paid $1800 with tax for my card, and was so hard to find, too chicken to shunt mod it. I know I will hit 15000 easily on PR if I shunted this card.. SMH
> 
> My Card is on water with Alphacool waterblock. 29C ambient temps, Max 51C load temps.
> 
> 
> 
> http://www.3dmark.com/pr/472792
> 
> 
> 
> *Sky3900 *-- can you post a detailed guide on how you shunted with the conductive pen? I might have to try this, jealous of you guys hitting 15K plus on PR.


Check out *olrdtg's* thread on FE shunt modding. There is a description of what I did on pages 7 and 8. It's spread out over a few posts.









RTX 3090 Founders Edition working shunt mod (www.overclock.net)





This thread from *bmgjet* is also very good.









How To: Easy mode Shut Modding. (www.overclock.net)


----------



## Pepillo

Alphacool's block was delayed again today, with no new date. I'm tired of waiting, so I asked to cancel my order and bought the Bykski. I like that with the Bykski you keep the original backplate, which in the case of the MSI Trio also carries heatpipes to better cool the VRAM.


----------



## Dreams-Visions

Pepillo said:


> Alphacool's block has been delayed undated today. I'm tired of waiting, I've applied for annulment and bought the Bykski. I like the Bykski you keep the original backplate, that in the case of the MSI Trio, besides pretty carries heatpipes to better cool the vram.


Can you link the model you went with? I'm looking at their website and only see an FE waterblock available.



HyperMatrix said:


> AquaComputer with Active Backplate if available for your card. Best out there. Strix model is being made now. Not sure about the others.


Will look into them. ty.

edit: would you mind linking to a product page? having a hard time finding it.


----------



## devilhead

shop.aquacomputer.de


----------



## parityclaws

In terms of performance (both air and water), and assuming the price is roughly the same, is there any reason to pick the Strix OC over the FTW3 or vice versa, or are they much of a muchness (silicon lottery notwithstanding)? I saw a couple of things about the FTW3's power delivery maybe not being on par with the Strix, but I couldn't tell whether that translated into much of a performance difference. Also keep in mind I wouldn't be doing any hardware mods that involve soldering etc.

I have read through a bunch of the thread, but there is a lot to get through and I didn't really get a sense of where things were landing.


----------



## rawsome

Dreams-Visions said:


> Hey folks, what are recommended waterblock brands? I was looking at an Alphacool for the XTrio, mostly because it's one of the only ones I've seen. But I'm new to waterblock manufacturers so I don't know if they're reputable or not. I see the comments about EK in here so I'm assuming they are not recommended presently.
> 
> Any recommendations for an XTrio, Strix, and possibly a FE would be great. Thanks.


I just watched Igor's Lab's review of the Alphacool waterblock. He says he got only an 8°C delta, and he recommends these blocks.

But yes, I got the block-delayed mail today too. Very disappointing.

The MacGyver in me was thinking about getting a reference waterblock and trying to fit it to the Trio X, but that waterblock looks completely different. That also makes me wonder what else is different, as the metal part is larger on the Trio X one.

Guess the only option now is waiting, or getting another card haha.


----------



## Cavokk

Pepillo said:


> Alphacool's block has been delayed undated today. I'm tired of waiting, I've applied for annulment and bought the Bykski. I like the Bykski you keep the original backplate, that in the case of the MSI Trio, besides pretty carries heatpipes to better cool the vram.


Strange - I just got confirmation 1 minute ago that my Alphacool Trio X block is being shipped finally 

C


----------



## Cavokk

Damn, now I also got confirmation that I can pick up my long-ordered 3090 Strix OC.

Not sure if I'll risk a silicon lottery roll here. My Trio X works perfectly fine and stable at a constant 2.1GHz, undervolted at 1000mV, in PR and games.

C


----------



## Johneey

devilhead said:


> 3090 strix without shunt on air, 3090FE have used liquid metal on 2x pcie resistors, but just ~30w extra from that.
> View attachment 2464498


*** u shunted with liquid metal? Serious, or are u trolling?


----------



## Johneey

Cavokk said:


> Strange - I just got confirmation 1 minute ago that my Alphacool Trio X block is being shipped finally
> 
> C


show us Port Royal


----------



## HyperMatrix

Johneey said:


> *** u shunted with liquid metall ? Serious or u trolling?


It's not bad for short term. On average, from my experience (a few times, haha) it takes 1-2 years for the liquid metal to eat away at the solder enough for the resistor to fall off the board. If you're just doing it for a few weeks, it's fine. Even if for a few months, if you're planning to re-solder new resistors on, it's not an issue.


----------



## devilhead

Johneey said:


> *** u shunted with liquid metall ? Serious or u trolling?


Yes, but cleaned after testing without any marks


----------



## Johneey

devilhead said:


> Yes, but cleaned after testing without any marks


After some time you'll see how your solder turns into chewing gum... get this **** off immediately and either solder or use a silver pen.


----------



## pat182

mirkendargen said:


> Got my Strix with Bykski block and a Bitspower RAM block on the backplate dialed in. The 500w EVGA BIOS is a solid winner on Strix's, and disables the middle displayport NOT the second HDMI 2.1 port. Running it at +160 core (+175 crashes) and +1000 memory (benches higher still than +900, +1100 crashes) I'm getting 14643 Port Royal with an average core clock of 2119. Not on par with some of the epic "stable at 2200+" cores I've seen a few people here get, but not bad.


2119MHz on water?

My Strix on the stock BIOS can do 2085MHz stable in all games on air. Maybe I should leave it like that; I don't want to spend on water cooling to gain 30-35MHz. I can flash the 500W BIOS to see if I can reach 2100 stable.

I can get 2100MHz when below 56C, and 2160 in the 3DMark ray tracing test. I think that's my max stable, whatever the wattage.


----------



## HyperMatrix

It depends on your die. There are good ones and bad ones. Shunt and water will let you reach your max clocks and keep them stable under full load. If you're already able to _maintain_ those clocks under full load on air, then ask yourself if you'd like the chance to go from 2100MHz to 2200MHz under water. My die kind of sucks. It can do 2130MHz under 53C, but it can't maintain it. After gaming for a few minutes it drops to 2025MHz at 80C and goes downhill from there. So for me, water cooling is a way to get from 1900-1980MHz sustained clocks under load to over 2100MHz.


----------



## MangoMunchaa

Just wanted to share my results with the 3090 after finally finishing my first loop.

This is on the Galax SG 3090 with an EK Velocity waterblock and 2x 360mm rads; I managed to get 14.7k in Port Royal using the Aorus Master BIOS.


https://www.3dmark.com/pr/474813



Would love to see what I could do with more than 390W on this card, but I'm very happy with the results. The final overclock was +215 core and +1300 memory.


----------



## Sheyster

Cavokk said:


> damn - now I also got confirmation that I can pick up my long ordered 3090 Strix OC .
> 
> Not sure if I risk to take a silicon lottery roll here . My Trio X works perfectly fine and stable 2.1Ghz constant undervoltet at 1000 mv in PR and games


Unless you need the second HDMI port on the Strix, just keep the Trio X.


----------



## mirkendargen

HyperMatrix said:


> It depends on your die. There are good ones and bad ones. Shunt and Water will let you get your max clocks and keep them stable under full load. If you’re already able to _maintain_ those clocks under full load on air, then ask yourself if you’d like the chance to go from 2100MHz to 2200MHz under water. My die kind of sucks. It can do 2130MHz at under 53C but it also can’t maintain it. After gaming for a few minutes it drops to 2025MHz at 80C and goes down hill from there. So for me, I see water cooling as a way to get from 1900-1980MHz sustained clocks under load, to over 2100MHz.


Yup, mine can do 2130, MAYBE 2145, at 1.1V when that's achievable under the power limit; more than that seems unstable. That got me mid-40s in the HOF for Port Royal and mid-20s in Time Spy Extreme, so it must not be that bad. A decent number of people have cards at this point.


----------



## Johneey

MangoMunchaa said:


> Just wanted to share my results with the 3090 after finally finishing my first loop
> 
> This is on the Galax SG 3090 with the EK Velocity waterblock and 2x 360mm Rads, managed to get 14.7k on port royal using the Aorus Master BIOS
> 
> 
> https://www.3dmark.com/pr/474813
> 
> 
> 
> Would love to see what I could do with more than 390W on this card but very happy with the results, final overclock was +215 Core and +1300 Memory


I got 14970; after the shunt I got 15250.
Before, with the 390W BIOS: https://www.3dmark.com/pr/468170
Palit 3090


----------



## Thanh Nguyen

Why does the core clock drop when I increase the voltage? Right now I set 1.043V for 2175MHz. It holds 2160MHz in PR and games, but when I increase the voltage to 1.068V it can't hold the clock. And at 2160MHz I can't run Metro Exodus with RTX enabled.


----------



## nievz

Anyone here on Zen 3? I've been watching Warzone gameplay videos on YouTube and it seems the 10900K can max out the 3090 more often than a 5000 series can. Even in BFV, the max frame rate of a 10900K is higher.

@JackiesBenchmarks 😁


----------



## JackiesBenchmarks

Yes, me. Watch my benchmarks on YouTube:









JackiesBenchmarks (www.youtube.com)


----------



## ExDarkxH

Port Royal: 15,033
Average clock frequency: 2,142 MHz
Average temperature: 61°C

I badly need the stupid Optimus block to arrive. They haven't updated first-batch buyers on anything and still list the ETA as mid-November.


----------



## ExDarkxH

The guy above me has a 2,124 average and scored over 100 points higher; his average temp is 41.
Is it because of hot spots that my card's performance isn't better?

I'm also using a riser, the old Thermaltake one. I'm going to replace it with the fancy one and see if my scores improve.


----------



## Johneey

JackiesBenchmarks said:


> Yes me, watch my Benchmarks on Youtube..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> JackiesBenchmarks (www.youtube.com)


Can you check what an overclocked 5950X can reach in Time Spy? And why isn't the 5950X boosting in your benchmarks? It's only around 4.6GHz.


----------



## D3LTA KING

pat182 said:


> 2119mhz on water ?
> 
> my strix stock bios can do 2085mhz stable in all games on air, maybe i should leave it like that , i dont wanna spend on WC to gain 30-35mhz. i can flash the 500watt bios to see if i can reach 2100 stable,
> 
> i can get 2100mhz when below 56c and 2160 on the ray tracing 3dmark test, i think thats my max stable whatever the wattage


I take it you have the Strix OC version and have the BIOS set to P mode?


----------



## ExDarkxH

anyone see the new framechasers video?

He used liquid metal on a 3080 and saw a 15-30MHz increase in clock speeds. 

3dmark scores stayed the same despite the increase

thoughts?


----------



## Falkentyne

Ampere drops 15 MHz for every 6C temperature increase, so if his temps are lower, of course he's going to gain 15 MHz. Lower temps may also let you step up another clock tier stable (they come in 15 MHz steps). And he's using water cooling; the stronger your cooling, the better LM will perform, while on air, unless you have a very strong cooling solution, you end up heat-saturated (people have seen that on blower cards when using liquid metal).

That beginner at Framechasers also has NO idea how to use liquid metal. You do NOT use liquid metal on polished surfaces! Liquid metal needs a slightly rough surface in order to "grip" and not migrate to itself (creating hot spots) when it gets compressed or heats up. On a fully polished surface, LM tends to grip to itself rather than to the surface, which is why it's so hard to spread when applied on a polished surface. So it's best to have a surface buffed with 1500-grit sandpaper. This greatly enhances longevity on surfaces that generate a lot of heat.
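The clock/temperature behavior described above can be sketched as follows. The 15 MHz step and ~6C interval are from the post; the base clock and base temperature are illustrative assumptions, not from any spec.

```python
# GPU Boost behaviour as described above: the clock moves in 15 MHz bins,
# dropping one bin for roughly every 6 C of temperature rise above some base.

STEP_MHZ = 15
STEP_C = 6

def boost_clock(base_mhz: int, base_temp_c: int, temp_c: int) -> int:
    bins_lost = max(0, (temp_c - base_temp_c) // STEP_C)
    return base_mhz - bins_lost * STEP_MHZ

print(boost_clock(2145, 40, 64))  # 24 C hotter -> 4 bins -> 2085 MHz
print(boost_clock(2145, 40, 38))  # cooler than base -> stays at 2145 MHz
```

This is why a 15-30 MHz gain from better cooling is exactly what one or two temperature bins would predict.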


----------



## LVNeptune

EKWB 3080 FE waterblocks are available for preorder now. They do exactly what was needed, which is backplate cooling as well!

EK-Quantum Vector FE RTX 3080 D-RGB - Silver Special Edition 
EK-Quantum Vector FE RTX 3080 D-RGB - Black Special Edition 

Hopefully we can expect 3090 FE blocks soon!

Expensive as hell, but they definitely engineered the hell out of it.


----------



## Thanh Nguyen

Does anyone here have a 2200MHz+ chip stable for daily use?


----------



## lokran88

Don't know yet. I should be among the first to receive the Aquacomputer block for my Strix OC in about two weeks. Putting it on a big loop with Mora 3 420 + other radiators and will post an update then.


----------



## Dreams-Visions

Welp, this is what paralysis of analysis gets you.










4 cards enter. 1 card leaves. Place your bets.

Also, that EK block looks amazing.


----------



## Fire2

MSI!

but saying that, mine's not great...


----------



## Nizzen

Thanh Nguyen said:


> Anyone here has a 2200mhz+ chip stable for daily use?


K|ngp|n has 

My 3090 strix does it on cold water 

2220mhz average on Port Royal


https://www.3dmark.com/pr/451869


----------



## Cavokk

FE version must stay - SOOO beautiful


----------



## Sheyster

Dreams-Visions said:


> Welp, this is what paralysis of analysis gets you.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 4 card enter. 1 card leave. Place your bets.
> 
> Also, that EK block looks amazing.


Nice! I'll take the Trio or the Strix off your hands if you're gonna pass on them.


----------



## Dreams-Visions

Fire2 said:


> MSI!
> 
> but saying that mines not great...


I for damn sure would like that outcome. Mine seems pretty decent, and I'll never say no to saving money while getting the performance I'm looking for.

Also, the 3090FE box is heavy as hell. Like, based on the box alone it must be like 30%-50% heavier than any AIB 3090.



Cavokk said:


> FE version must stay - SOOO beautiful


And this outcome would be even better. Saving even more! lol

(real talk: by the time you start adding on waterblocks...whew my pocketbook.)

Will document all of my performance numbers.



Sheyster said:


> Nice! I'll take the Trio or the Strix off your hands if you're gonna pass on them.


Duly noted, sir/ma'am!


----------



## ExDarkxH

Thanh Nguyen said:


> Anyone here has a 2200mhz+ chip stable for daily use?


That's a tough ask.

Going by the Port Royal leaderboards and checking temps, it would seem that not a single card out there has accomplished it yet. Anything over 2,200 average is running cold temps.

Currently I'm doing 2,145 @ 60C while gaming and also in benchmarks, so with a water block I can probably get 2,200MHz stable for daily use. But the fact that I haven't seen ANY card doing it so far is concerning, so I won't get my hopes up.


----------



## Sync0r

What speed system memory are people running with their 10900k's and 3090's? Is there much benefit in games from high bandwidth tight timings with the 3090? Ta


----------



## Dreams-Visions

ExDarkxH said:


> Thats a tough ask
> 
> If going by port royal leaderboards and checking temps it would seem that not a single card out there has accomplished it yet. Anything over 2,200 average is running cold temps.
> 
> Currently I'm doing 2,145 @ 60c while gaming and also benchmarks, so probably with a water block i will be able to get the 2,200mhz stable for daily, but the fact that I havent seen ANY card doing it so far is concerning so i wont get my hopes up.


Yea, I haven't seen that feat either. My 3080 Strix (since returned) was doing 2115MHz stable on air @ 70C-80C. Pretty impressive chip in retrospect, and it gave me hope that a 3090 would be able to do similar. My FTW3U (stock bios) couldn't do much better than 2020MHz stable, but the Trio w/500W bios has been able to do around 2080MHz stable in gaming on air. I've left it undervolted, 2055MHz @ 0.905V, and she's whisper quiet and locked at that speed.

We'll see in the next 30 minutes as I start collecting OC and performance data. My XTrio is, I believe, a good performer and as such IS the bar that other cards will have to confidently clear for me to let it go. I'm pretty sure the FTW3U is out, but was trying to wait for a 500W bios revision before making a hard decision there. Assuming that bios isn't coming this weekend, it's likely going to be voted off the island.

As for the rest...we'll see starting right now.









4 card enter! 1 card leave!


----------



## ExDarkxH

haha let us know your results!
That X Trio seems to be the sweet spot with that price + 500w Evga bios


----------



## ExDarkxH

Sync0r said:


> What speed system memory are people running with their 10900k's and 3090's? Is there much benefit in games from high bandwidth tight timings with the 3090? Ta


I'm running 4,600 CL18 @ 1.4V.

I tried 4600 CL17 and couldn't boot.
Later on I might give 4,200 CL16 a go, maybe up the voltage, and see if I can get it stable.

Apparently there is an increase if you watch some of the YouTube comparison benchmarks out there.


----------



## ExDarkxH

This vid is showing 3200 CL16 vs 4400 CL19.
Large improvements across the board. Keep in mind this is with a 2080 Ti, so a 3090 would yield better results.


----------



## Sync0r

ExDarkxH said:


> Im running 4,600 Cl18 @ 1.4v
> 
> I tired 4600 cl17 and couldn't boot.
> Later on I might give it a go at 4,200 cl16 and maybe up the voltage and see if i can get it stable
> 
> Apparently there is an increase if you watch some of the youtube comparison benchmarks out there


Which RAM kit are you using?


----------



## ExDarkxH

G.Skill 16GB DDR4 TridentZ RGB 4400Mhz PC4-35200 CL18 1.4V Dual Channel Kit (2x8GB) for Intel Z270 at Amazon.com






Bought it back in January 2019.
4400 CL18 (18-19-19-39) at 1.4V at XMP settings.

It's an excellent kit but you need a beefy mobo.


----------



## Falkentyne

Why 2x8 GB in 2020? 2x16 GB is where it's at now.


----------



## rawsome

Falkentyne said:


> Why 2x8 GB in 2020? 2x16 GB is where it's at now.


Is there a single game that uses more than 10GB of RAM? Aside from some heavily modded Skyrim/Fallout setups. Sure, 32GB is the way to go, but it would make more sense if a single game could take advantage of that.

I have just seen that there are some new cards in stock, the most interesting being this one: https://www.alternate.de/Palit/GeFo...Grafikkarte/html/product/1688672?event=search

does not sound that bad right?


----------



## ExDarkxH

My RAM is nearly 2 years old. But if there were a catalyst for change I would consider it.
But unless I see a need for 32GB in some games I'll keep it as is. If anything, I would probably get worse performance due to a lower memory overclock. The only thing I would notice is my wallet getting thinner.


----------



## Falkentyne

rawsome said:


> Is there a single game that uses more than 10GB ram? aside from some heavy modded skyrim/fallouts. sure, 32Gb is the way to go, but it would make more sense if a single game clould take advantage of that.
> 
> i have just seen that there are some new cards in stock, most interesting being that one: https://www.alternate.de/Palit/GeFo...Grafikkarte/html/product/1688672?event=search
> 
> does not sound that bad right?


I've seen some games allocate more than 8 GB. I tested 2x8 GB in a test bench and ran Apex Legends and Overwatch at the same time, and Windows complained about being out of RAM.
Don't forget that having browser windows open also uses up a lot of memory.


----------



## mirkendargen

Could someone with PCIe 4.0 support (or a few someones, for consistency!) post their results for the 3DMark PCIe bandwidth test? I'm registering a PCIe 4.0 x16 link and not seeing any problems, but I want to make sure the ribbon cable to my vertical mount riser isn't causing a bunch of error correction slowdown or something. Thanks in advance! I'm getting ~26GB/s.
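For a sanity check, the theoretical one-direction wire bandwidth of a PCIe link follows from the transfer rate, lane count, and line encoding (Gen 3 and Gen 4 both use 128b/130b). A quick sketch:

```python
def pcie_bandwidth_gb_s(transfer_rate_gt_s: float, lanes: int,
                        encoding_efficiency: float) -> float:
    """Theoretical one-direction PCIe bandwidth in GB/s,
    before packet/protocol overhead."""
    return transfer_rate_gt_s * lanes * encoding_efficiency / 8

# Gen 4 x16: 16 GT/s per lane, 128b/130b encoding -> ~31.5 GB/s ceiling
gen4_x16 = pcie_bandwidth_gb_s(16.0, 16, 128 / 130)
# Gen 3 x16 for comparison -> ~15.8 GB/s ceiling
gen3_x16 = pcie_bandwidth_gb_s(8.0, 16, 128 / 130)
```

A measured ~26 GB/s sits below the ~31.5 GB/s wire ceiling, which is expected since the ceiling ignores packet and protocol overhead.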


----------



## Esenel

rawsome said:


> Is there a single game that uses more than 10GB ram? aside from some heavy modded skyrim/fallouts. sure, 32Gb is the way to go, but it would make more sense if a single game clould take advantage of that.


It is not about the memory size itself.
The benefit of 4x8 or 2x16 is 4-way interleaving.
So with the exact same timings, 2x16 4300 16-17-17 will be as fast as (on some occasions faster than) 2x8 4600 16-17-17.
So it compensates for around 300MHz.
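To put that ~300MHz figure in perspective: the raw peak-bandwidth gap between a 4300 and a 4600 kit is only about 6.5%, and the claim is that rank interleaving on the 2x16 (dual-rank) setup makes up that gap. A quick peak-numbers-only calculation (ignores timings entirely):

```python
def ddr4_peak_bandwidth_gb_s(mt_per_s: int, channels: int = 2) -> float:
    """Peak DDR4 bandwidth in GB/s: 64-bit (8-byte) transfers per channel."""
    return mt_per_s * 8 * channels / 1000

fast_2x8 = ddr4_peak_bandwidth_gb_s(4600)    # 73.6 GB/s
slow_2x16 = ddr4_peak_bandwidth_gb_s(4300)   # 68.8 GB/s
raw_deficit = 1 - slow_2x16 / fast_2x8       # ~6.5% raw deficit to compensate
```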


----------



## HyperMatrix

Esenel said:


> It is not about the memory size itself.
> Benefit of 4x8 or 2x16 is 4-Way Interleaving.
> So with the exact same timings, 2x16 4300 16-17-17 will be as fast (some ocassions faster) as 2x8 4600 16-17-17.
> So you compensate around 300MHz.


You might be just the man I'm looking for. I'm looking for guidance on ram. I had previously been running an old set of 8x 4GB DDR4 3000 CL15. When I installed the 3090, my system no longer liked to boot. And when it did, it would only register 24GB instead of 32GB, even though all 8 dimms would be listed in the bios. So I took 4 of them out and with just 16GB in there, it's working fine. 

Now, I've been trying to find a solution and one thing I can't figure out is whether the instability is related due to the number of dimms (4 vs. 8) or total capacity (16gb vs 32gb). If it's just due to the number of dimms, then easy...I can switch to 4x 8GB. But if it's due to capacity, then I'll still have the same problem and that would mean an even bigger issue with my IMC. This is on an X99 platform with a 6950x OC'd to 4.3GHz, running quad-channel memory.

I have no idea why installing the 3090 would cause this issue. I haven't heard it happening to anyone else. But from your experience, would you say going with a 4x 8GB setup will resolve the problem? Or is 32GB is just too high for whatever is causing the instability in the first place? Thanks.


----------



## mirkendargen

HyperMatrix said:


> You might be just the man I'm looking for. I'm looking for guidance on ram. I had previously been running an old set of 8x 4GB DDR4 3000 CL15. When I installed the 3090, my system no longer liked to boot. And when it did, it would only register 24GB instead of 32GB, even though all 8 dimms would be listed in the bios. So I took 4 of them out and with just 16GB in there, it's working fine.
> 
> Now, I've been trying to find a solution and one thing I can't figure out is whether the instability is related due to the number of dimms (4 vs. 8) or total capacity (16gb vs 32gb). If it's just due to the number of dimms, then easy...I can switch to 4x 8GB. But if it's due to capacity, then I'll still have the same problem and that would mean an even bigger issue with my IMC. This is on an X99 platform with a 6950x OC'd to 4.3GHz, running quad-channel memory.
> 
> I have no idea why installing the 3090 would cause this issue. I haven't heard it happening to anyone else. But from your experience, would you say going with a 4x 8GB setup will resolve the problem? Or is 32GB is just too high for whatever is causing the instability in the first place? Thanks.


Memory controller stress comes from the total number of ranks connected to it. So dual-rank DIMMs stress it more than single-rank DIMMs, 4 DIMMs stress it more than 2 DIMMs, etc. The actual overall capacity doesn't add any stress though.

I can't think of any reason changing GPUs would affect any of that, though. Try bumping the voltage to the IMC and your DIMMs up slightly in your BIOS; it's probably just getting old. X99 has been around a while; my X58 board had similar behavior with all 6 DIMMs filled later in its life.


----------



## Dreams-Visions

Alright, so I flashed the 500W bios on this 3090 Strix, but the 3rd pin isn't showing the correct power draw in GPU-Z. It's only showing 2W. I don't think it's working correctly, and that pin showing only 2W is also making the max power draw look low. Otherwise the bios appears to be showing the right max power limit. Any ideas?


----------



## mirkendargen

Dreams-Visions said:


> Alright, so I flashed the 500W bios on this 3090 Strix, but the 3rd pin isn't showing the correct power draw in GPU-Z. It's only showing 2W. I don't think it's working correctly, and that pin showing only 2W is also making the max power draw look low. Otherwise the bios appears to be showing the right max power limit. Any ideas?


I see the same behavior, and so did FrameChasers in their video, but it's definitely pulling 500W. I was maxing out at 450-460 with the stock Strix BIOS.


----------



## Dreams-Visions

mirkendargen said:


> I see the same behavior, and so did FrameChasers in their video, but it's definitely pulling 500W. I was maxing out at 450-460 with the stock Strix BIOS.


Cool cool. Yea, I was actually watching his video and failed to get far enough in to actually see the part where he had the same issue, which is mildly embarrassing. Ty for the confirmation.


----------



## Sync0r

Did some more benching this evening: 6C ambient, resulting in 10C water temps, about 24C GPU temp. Managed to clock the core up to 2250MHz but started to see artifacts, so I dropped down to 2235MHz. I saw gains in Port Royal but am still below what some people have achieved with lower clocks. Maybe I need a fresh install of Windows, I dunno, or I'm missing a few tweaks.



https://www.3dmark.com/pr/477158


----------



## Thebc2

Got my first of two Strix OCs to bin. Scoring slightly better than my ftw3, but running warmer overall.

Old vs new: 14.4K was the ftw3, 14.6k is Strix.



https://www.3dmark.com/compare/pr/477279/pr/407821



Re: ram I am running 4400 16-16-17-36 2x16GB and it screams


Sent from my iPhone using Tapatalk Pro


----------



## Chamidorix

Dreams-Visions said:


> Dreams-Visions said:
> 
> 
> 
> Welp, this is what paralysis of analysis gets you.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 4 card enter. 1 card leave. Place your bets.
> 
> Duly noted, sir/ma'am!

Haha very nice, it's a long shot but I'm on the hunt for a Strix, so I'll put myself down as interested if somehow the best boy card doesn't end up the permanent resident. I want the elmor volt mod and my FTW3 ain't got no I2C VC!


----------



## Sky3900

Sky3900 said:


> Made it to 15K! FE, silver paint shunt mod, air cooled.
> 
> 
> 
> https://www.3dmark.com/pr/470948


14,626, Metro Exodus stable, PR bench. +74 GPU, +750 mem, 114% pwr, 100% fans. Looks like this is the best my card will do for gaming at ambient room temp.



https://www.3dmark.com/3dm/52684116



Interesting that the various 3DMark stress tests will run all day at +115 on the GPU, but Heaven and Metro will crash almost instantly.


----------



## Thanh Nguyen

Metro with RTX enabled will destroy so many “stable” 3090s.


----------



## martinhal

ExDarkxH said:


> anyone see the new framechasers video?
> 
> He used liquid metal on a 3080 and saw a 15-30mhz increase in clock speeds.
> 
> 3dmark scores stayed the same despite the increase
> 
> thoughts?


I think it depends on the average clock over the run and not the spikes.


----------



## Nizzen

Thanh Nguyen said:


> Metro RTX enabled will destroy so many “stable” 3090.


There is no such thing as stable, only degrees of stability in a given environment. Things can be stable at 20C ambient but very unstable at 30C ambient.

If it's stable enough for me, I'm satisfied.


----------



## Esenel

@HyperMatrix 
I do not have experience with X99.
But the suggestion with more voltage sounds reasonable to me as well.



Thebc2 said:


> Re: ram I am running 4400 16-16-17-36 2x16GB and it screams


Can you post a Timespy run?


----------



## Nizzen

Esenel said:


> It is not about the memory size itself.
> Benefit of 4x8 or 2x16 is 4-Way Interleaving.
> So with the exact same timings, 2x16 4300 16-17-17 will be as fast (some ocassions faster) as 2x8 4600 16-17-17.
> So you compensate around 300MHz.


Maybe that's why a tweaked X299 setup with 4000 C16 memory at 49ns memory latency is still very good for gaming? Quad-channel memory works quite well with the 3090.


----------



## Johneey

ExDarkxH said:


> anyone see the new framechasers video?
> 
> He used liquid metal on a 3080 and saw a 15-30mhz increase in clock speeds.
> 
> 3dmark scores stayed the same despite the increase
> 
> thoughts?


I am using liquid metal too.


----------



## Johneey

Thanh Nguyen said:


> Anyone here has a 2200mhz+ chip stable for daily use?


Yes, I get 2200MHz at 4K daily gaming with a shunt mod, on a Mora 360 + 360 + 240 radiator setup, at 600-650 watts.


----------



## pantsoftime

Looking for some advice. I've been testing an Aorus Xtreme and I'm doing alright with the core overclocking, and temps are looking good... but memory goes unstable past +200. I'm using Afterburner and it looks like it's +400 effective when I go through the math... but this seems extraordinarily low compared to what you guys are getting with other cards.

My initial thought is that maybe one (or more) of the memory chips isn't making proper contact with its heatsink. This is one of the first cards off the line and who knows about the quality control. I've heard that GDDR6X has thermal monitoring - is there any way to access those temp readings? I'd like to troubleshoot this but I'm not super eager to take it apart just yet.
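For anyone checking that math: the RTX 3090's 1219 MHz GDDR6X memory clock corresponds to a 19,504 MT/s effective rate (a 16x multiplier from PAM4 signaling), and the Afterburner offset here is being read as applying at half the effective rate, so +200 shows up as +400 effective. A hedged sketch of that arithmetic (the 2x multiplier for the offset domain is an assumption inferred from the +200 -> +400 numbers above, not a documented Afterburner behavior):

```python
BASE_CLOCK_MHZ = 1219    # RTX 3090 GDDR6X memory clock (spec sheet)
EFFECTIVE_MT_S = 19504   # effective transfer rate (spec sheet)

def effective_gain_mt_s(afterburner_offset_mhz: float, multiplier: int = 2) -> float:
    """Effective data-rate gain from an Afterburner memory offset,
    assuming the offset applies at a clock domain running at
    1/multiplier of the effective rate (assumption, see lead-in)."""
    return afterburner_offset_mhz * multiplier

per_pin_multiplier = EFFECTIVE_MT_S / BASE_CLOCK_MHZ  # 16.0 for GDDR6X
```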


----------



## HyperMatrix

Johneey said:


> yes i got 2200 mhz on 4k with shunt mode daily game with Mora 360 + 360 + 240 Radiator 600-650 watts


Does it maintain 2200MHz under all gaming and benching situations? For example, does the "average clock speed" in port royal stay at 2200MHz+? Also what clocks were you getting on air? I'm trying to understand what I should expect from my card once I put it under water. Thanks.


----------



## Esenel

I do not think you can maintain 2200MHz in games like Control.
On stock settings with a +480W PL it hammers those 480W constantly and therefore downclocks.
So I doubt 2200MHz can be the average in any game, even under water ;-)


----------



## ScottRoberts91

What fixed frequency and voltage are people getting with the Trio on the 500w bios?


----------



## Johneey

HyperMatrix said:


> Does it maintain 2200MHz under all gaming and benching situations? For example, does the "average clock speed" in port royal stay at 2200MHz+? Also what clocks were you getting on air? I'm trying to understand what I should expect from my card once I put it under water. Thanks.


Sorry, I can reach 2189 avg. in Port Royal.
Don't remember what I got on air, because I shunted right after.


https://www.3dmark.com/pr/478503


I think if I lower my ambient from 23C I can easily get 2200+.


----------



## Johneey

Esenel said:


> I do not think you can maintain 2200MHz in games like Control.
> It is on Stock settings +480W PL hammering those 480W constantly and therefor downclocking.
> So I doubt 2200MHz can be the average in any game even under water ;-)


Sorry don’t have the game. Never run powerlimit I can get 700-800 watts 😅


----------



## Sheyster

Dreams-Visions said:


> cool cool. yea I was actually watching his video and failed to move ahead far enough to actually see the part where he had the same issue, which is mildly embarrassing. ty for the confrimation.


I'd be curious to know 2 things:

- Do you actually see any meaningful gain in any benchmark with the 500w BIOS over the stock 480w?

- Do both HDMI ports still work with the 500w BIOS?

Thanks!


----------



## mirkendargen

Sheyster said:


> I'd be curious to know 2 things:
> 
> - Do you actually see any meaningful gain in any benchmark with the 500w BIOS over the stock 480w?
> 
> - Do both HDMI ports still work with the 500w BIOS?
> 
> Thanks!


Define "meaningful". Measurable, yes.

Both HDMI ports work (I'm using them simultaneously), the middle DP port is disabled.


----------



## dr.Rafi

Zurv said:


> MultiGpu still works in mGPU setups (like .. and i think only.. ashes)
> also games that support SLI in dx12. Like Rise of the Tomb Raider. It will not work in dx11 games.
> Get a good score in 3dmark timespy and port royal... nod to yourself.. epeen bonus points awarded.
> 
> That said, i wonder if anything but a x299 is a good fit right now. Are there z490s with 4 slot spacing between PCI-E slots that can do 16x. Maybe a workstation one? _shrug_
> There was talk of AIBs making SLI bridges with options other than 4. But no one said a peep post nvidia's notice they are mostly killed SLI.
> 
> this is the official line from Nvidia


That is true for real-time 3D rendering software like V-Ray; it can use any available resources: CPUs, even graphics cards of different generations. So I think it's time for game developers to implement that in games, so we can simply tick the cards we want to render with.


----------



## dr.Rafi

cavemankr said:


> Can someone with a rtx 3090 founders edition, check where the HDMI or DisplayPort cable connects to GPU and see how hot it is?
> Mine gets very very hot to touch.
> I am not sure if this is normal.
> 
> i upgraded from 2080 ti strix and never had this problem.
> 
> is this normal for a blower style gpu like founders edition?
> My rtx 3090 fe averages 65 to 67 degrees under full load at 1700 to 1800 rpm.
> 
> I also undervolted my gpu to 850mv and constant 1875 core clock. Because I couldn’t stand the fan noise going over 2000 rpm.
> 
> thanks


Your cable is feeding current back from your monitor to the card; try getting a new cable.


----------



## andrvas

Anyone successfully flashed another BIOS to a MSI Ventus OC 3090 yet?


----------



## Wihglah

rawsome said:


> Is there a single game that uses more than 10GB ram? aside from some heavy modded skyrim/fallouts. sure, 32Gb is the way to go, but it would make more sense if a single game clould take advantage of that.
> 
> i have just seen that there are some new cards in stock, most interesting being that one: https://www.alternate.de/Palit/GeFo...Grafikkarte/html/product/1688672?event=search
> 
> does not sound that bad right?


COD (23gb)
Arma III (17gb)
DCS in VR (18gb)

COD allocates a lot of RAM it doesn't actually need, but running DCS maxed out and Arma with a massive view distance is an experience.


----------



## rawsome

andrvas said:


> Anyone successfully flashed another BIOS to a MSI Ventus OC 3090 yet?


You can flash whatever you want, as long as it is a bios for a 2-plug card. Just flash the Gaming OC bios, it's the best one currently. Don't forget to use DDU after flashing or your clocks will be messed up.



Wihglah said:


> COD (23gb)
> Arma III (17gb)
> DCS in VR (18gb)
> 
> COD allocates a lot of RAM is doesn't actually need , but Running DCS maxed and Arma with a massive view distance is an experience.


OK, good to know. As the PS5 has 16GB of RAM and devs now know they can use it, we may see more games taking advantage of more RAM in the near future.


----------



## shallow_

Anyone in Norway? The KFA2/Galax RTX 3090 seems to be in stock to order, if you are interested.









GeForce RTX 3090 SG 24GB GDDR6X


PCI Express 4.0 x16, 24 GB GDDR6X memory, NVIDIA Ampere architecture, NVIDIA DLSS, 3x 92mm fans, and an extra rear fan




www.netonnet.no


----------



## devilhead

Does anybody know which 3 resistors on the Strix 3090 are for the 3 PCIe power connectors?


----------



## HyperMatrix

Wihglah said:


> COD (23gb)
> Arma III (17gb)
> DCS in VR (18gb)
> 
> COD allocates a lot of RAM is doesn't actually need , but Running DCS maxed and Arma with a massive view distance is an experience.


You can test whether there is a benefit to having over 10GB or 12GB of ram without having to compare different video cards. Set up a virtual disk using your VRAM. For example make a 14GB virtual disk. So that part will be allocated elsewhere and only 10GB left for both the system and gaming. See if there’s a performance difference. I highly doubt all the claims about games _needing_ more than 10GB. At least not current gen games. 



rawsome said:


> ok, good to know. as the PS5 has 16GB ram and devs now know they can use it, we may see more games taking advantage of more ram in the near future.


Only 10-12GB of that is intended for gaming. The rest is for the system. Remember consoles have a unified memory structure. On the Xbox series X, they’ve even partitioned the speed for the ram. 10GB of it is at 560GBps and the other 6GB is at 336GBps.

24GB VRAM is actually a bad thing if it’s left unused and you’re not shunting. Because the extra capacity over the 3080 results in more than a 30W increase in power consumption/heat for no reason. If there were a bios that would let you disable half the VRAM, I honestly would for now while gaming.


----------



## Falkentyne

devilhead said:


> Anybody knows which 3x resistors on strix 3090 is for those 3x pcie power connectors?
> View attachment 2464694


Shunt or conductive paint over all of the 005s (of the same size) that you see on the board, front and back.
There are shunts for GPU core, SRC, memory, and PCI Express as well. That board should have 7 "005" 5 mOhm shunts.


----------



## Chamidorix

devilhead said:


> Anybody knows which 3x resistors on strix 3090 is for those 3x pcie power connectors?
> View attachment 2464694


Also the Strix doesn't have fuses AND has a very sensible hardcoded PCIe-to-8-pin ratio in the VC, unlike EVGA, so you can go crazy with shunting and not worry about the PCIe draw as much as on other cards. Shunt em all. I'd recommend even the 3 small 5mOhm shunts; they measure the 3 rails on the ~1V plane and seem to be used by the VC for power limiting even when all 7 big/12V ones are shunted (you can pull more than 800W, people!!!).


----------



## mirkendargen

Esenel said:


> I do not think you can maintain 2200MHz in games like Control.
> It is on Stock settings +480W PL hammering those 480W constantly and therefor downclocking.
> So I doubt 2200MHz can be the average in any game even under water ;-)


I'm more curious if anyone is even stable at 2200 in Control/Metro... I seem to be stable ~50-75Mhz lower in those games than Port Royal or the DXR benchmark.


----------



## Falkentyne

If you guys are replacing pads on a 3090 Founder's Edition, this is where you want the pads to be, so all hotspots are covered, besides just RAM areas.
1.5mm pads work well for everything (even though the original pads seem to be different thicknesses, some are very compressed).


----------



## Manya3084

Asus TUF 3090 Shunt modded. +0.03 on vcore. 150Mhz+ Core, 960Mhz+ Memory. (EK Block)


https://www.3dmark.com/3dm/52736571?


----------



## dr.Rafi

bmgjet said:


> Shunt modding is the only way to get more then 390W at the moment.
> Iv made a tool to help you work out what shunts to use.
> 
> 
> 
> 
> 
> 
> 
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.


🙏
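The math behind a shunt calculator like the one quoted above is just parallel resistance: the card's controller computes power from the voltage drop across each shunt, so lowering the effective resistance makes it under-report power by the same ratio. A minimal sketch (the 5 mOhm stock value matches what Falkentyne describes for the Strix's "005" shunts):

```python
def parallel_mohm(stock: float, added: float) -> float:
    """Effective resistance of a shunt stacked in parallel with the stock shunt."""
    return stock * added / (stock + added)

def reported_fraction(stock: float, effective: float) -> float:
    """Fraction of the true power the controller sees after the mod:
    the sensed voltage scales linearly with shunt resistance (V = I * R)."""
    return effective / stock

# Example: stacking a 5 mOhm shunt on top of a 5 mOhm stock shunt
eff = parallel_mohm(5.0, 5.0)             # 2.5 mOhm effective
half = reported_fraction(5.0, eff)        # controller sees half the true power
```

With the controller seeing half the true power, the card can draw roughly twice its firmware power limit before throttling.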


----------



## Dreams-Visions

Hey guys! Q: *What performance signals/signs do you look for in a card while on air to determine if it will be a good card to put on a block?*

Is there anything in the frequency performance or temperatures to give some indication as to how it might do? While on air, what would a "decent" card perform like vs a "good" card vs a "great" card?

For context, one of my 3090s (a 3-pin, call it card #1) _seems_ solid at between 2115-2130 during the time it can stay at < 60C...but the stock cooler simply cannot keep it in that temperature range long enough to know if it can maintain that performance over an extended stress test or benchmark. In Unigine Heaven (extreme testing), it floats between 2020MHz-2100MHz. In other testing, it floats between 2020MHz-2080MHz, though the max temp it tends to reach is about 79C. Is 2020-2080 @ 79C a good sign that under an open loop it would flourish?

Another 3090 (also a 3-pin, we'll call it card #2) benches slightly better, but does so in part because its cooler appears to be able to better keep the GPU about 10C-15C cooler, maxing at 69C. I'm not sure if the cooler is that much better on this one or if silicon quality is a factor? In Unigine Heaven benchmarking, this sits at 2100MHz-2145MHz at all times, with maybe a very rare dip to 2080MHz @ 69C.

In testing both cards *with the same 500W EVGA bios*, card #2 can clock the chip (+20) and memory (+300) meaningfully higher, giving it a slim lead in most game and synthetic benchmarks. Since card #2 is running much cooler, should my takeaway be that #2 is a straight-up better performing chip...or should it be that #1 is being limited by a subpar cooler and could potentially be a better (or at least equivalent) open loop candidate? Would the chip and memory on #1 possibly OC higher under water? Would that be equally true for #2?

...and then, how much does this even matter if the correct move forward should be a shunt mod to hit 15K?

*Random notes: *I'm not sure what the ambient was when I tested, but the cards would get down to about 30C under 100% fan speed. In this same environment, a 3090FE's max temp at 100% load was 56C. FE's stock cooler is a beast...but it's also the loudest fan I've ever heard when at 100%. Fortunately, it cools so well that it rarely actually gets to an audible level for any reason and sits comfortably at 65C under a normal fan profile. Very impressive for anyone looking to stay on air.

Anyway, yea...as I've mentioned before, I'm new to slapping blocks on cards and I'm just not sure which one is the best candidate or what the best way to determine it is. I'd like to only do this once and be confident that I picked the right card for the job. Any advice or insights would be appreciated. I collected a bunch of data in testing today but am not sure how best to interpret the results.

Thanks.


----------



## kx11

Strix 3090 WB











BP-VG3090AST


Bitspower Classic VGA Water Block for ASUS ROG Strix GeForce RTX 3090




shop.bitspower.com


----------



## Falkentyne

Dreams-Visions said:


> Hey guys! Q: *What performance signals/signs do you look for in a card while on air to determine if it will be a good card to put on a block?*
> 
> Is there anything in the frequency performance or temperatures to give some indication as to how it might do? While on air, what would a "decent" card perform like vs a "good" card vs a "great" card?
> 
> For context, one of my 3090s (a 3-pin, call it card #1) _seems_ solid at between 2115-2130 on during the time it can stay at < 60C...but the stock cooler simply cannot keep it in that temperature range long enough to know if it can maintain that performance over an extended stress test or benchmark. In Unigine Heaven (extreme testing), it floats between 2020MHz-2100MHz. In other testing, it floats between 2020MHz-2080MHz though the max temp it tends to reach is about 79C. Is 2020-2080 @ 79C a good sign that under an open loop it would flourish?
> 
> Another 3090 (also a 3-pin, we'll call it card #2) benches slightly better, but does so in part because its cooler appears to be able to better keep the GPU about 10C-15C cooler, maxing at 69C. I'm not sure if the cooler is that much better on this one or if silicon quality is a factor? In Unigine Heaven benchmarking, this sits at 2100MHz-2145MHz at all times, with maybe a very rare dip to 2080MHz @ 69C.
> 
> In testing both cards *with the same 500W EVGA bios*, card #2 can clock the chip (+20) and memory (+300) meaningfully higher giving it a slim lead in most game and synthetic benchmarks. Since card #2 is running much cooler, should my takeaway be that #2 is a straight up better performing chip...or should it be that #1 is being limited by a subpar cooler and could potentially a better (or at least equivalent) open loop candidate? Would the chip and memory on #1 possibly OC higher under water? Would that be equally true for #2?
> 
> ...and then, how much does this even matter if the correct move forward should be a shunt mod to hit 15K?
> 
> *Random notes: *I'm not sure what the ambient was when I tested, but the cards would get down to about 30C under 100% fan speed. In this same environment, a 3090FE's max temp at 100% load was 56C. FE's stock cooler is a beast...but it's also the loudest fan I've ever heard when at 100%. Fortunately, it cools so well that it rarely actually gets to an audible level for any reason and sits comfortably at 65C under a normal fan profile. Very impressive for anyone looking to stay on air.
> 
> Anyway, yea...as I've mentioned before, I'm new to slapping blocks on cards and I'm just not sure which one is the best candidate or what the best way to determine it is. I'd like to only do this once and be confident that I picked the right card for the job. Any advice or insights would be appreciated. I collected a bunch of data in testing today but am not sure how best to interpret the results.
> 
> Thanks.


In order to answer this question, you _MUST_ first re-paste both cards with the same thermal paste, and it's best to use a viscous paste to help compensate for imperfections in the flatness of the cold plate: something like Thermalright TFX, Kryonaut Extreme, or Kingpin KPX. But basic Kryonaut will always be better than stock paste. A bad thermal paste job from the factory can account for temp differences larger than the silicon lottery ever can.


----------



## Dreams-Visions

Falkentyne said:


> In order to answer this question, you _MUST_ first re-paste both cards with the same thermal paste, and it's best to use a viscous paste to help compensate for imperfections in the flatness of the cold plate: something like Thermalright TFX, Kryonaut Extreme, or Kingpin KPX. But basic Kryonaut will always be better than stock paste. A bad thermal paste job from the factory can account for temp differences larger than the silicon lottery ever can.


very interesting. unfortunately I'd need to order some thermal pads in order to do that and I don't have any at the moment. 

once I have the pads and can swap out pastes, what signs/signals/symptoms should I be looking at to determine good from mediocre?


----------



## Falkentyne

With high-quality thermal paste and the pads swapped out, you shouldn't see more than a 5C temp difference between the cores. Then you can just test the clocks. The one to block will be the one that clocks higher on air after a repaste, since you already know about the -15 MHz / 6C "Pascal" ratio (I'm not fully sure where this starts kicking in...it was 38C on Pascal; I think it's somewhere around 50C on Ampere). Also, the voltage slider in MSI Afterburner can unlock 1.087-1.10v _if_ the chip is cool enough. It works strangely, though: if the chip warms up, the slider stops doing anything, and even after it cools down again the slider still no longer works...weird stuff like that...but I hope I answered your question.
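That rule of thumb can be turned into a quick estimator. The 50C threshold below is taken from the guess in this post ("around 50-something C"), not from any NVIDIA documentation, so treat all three constants as assumptions:

```python
# Rough Ampere boost-bin estimator from the "-15 MHz per 6C" rule of thumb.
# THRESHOLD_C is assumed; Pascal's was 38C, Ampere's is only a guess.
THRESHOLD_C = 50   # temp where bins start dropping (assumed)
STEP_C = 6         # degrees per dropped bin
STEP_MHZ = 15      # clock lost per bin

def estimated_clock(cool_clock_mhz: int, temp_c: float) -> int:
    """Estimate the sustained core clock at a given core temperature."""
    if temp_c <= THRESHOLD_C:
        return cool_clock_mhz
    bins = int((temp_c - THRESHOLD_C) // STEP_C) + 1
    return cool_clock_mhz - bins * STEP_MHZ

# A card that holds 2130 MHz cold loses several bins by 79C:
print(estimated_clock(2130, 45), estimated_clock(2130, 79))
```

Under those assumptions, a card holding 2130 MHz cold would sit around 2055 MHz at 79C, which is roughly in line with the 2020-2080 range card #1 showed earlier in the thread.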


----------



## dr.Rafi

dr/owned said:


> Anyone want to guinea pig a solderless/reversible shunt mod? Tweezers still required unless you have 5-year-old size fingers.
> 
> View attachment 2461105
> View attachment 2461104
> 
> 
> Solder added to bottom for thickness, flattened to make good contact with existing shunt resistors, and then kapton taped down (in this picture) to a copper base to verify resistor pads still making electrical contact.
> 
> Resistors should be the same size (US: 2512) as these on the 30 series TUF and many others:
> 
> View attachment 2461106
> 
> 
> 
> *Usual caveats of US only, no guarantees, you break you buy, etc.*


That is not a proper method. It will make contact, but shunts are very low-resistance elements that pass high current, and you don't want all that current going through a pressure contact: you will get micro-sparks that oxidize the outer surface of the solder and break the contact, or it can even lead to a fire.
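To see why the contact quality matters, here is the parallel-resistor math behind a shunt mod. The 5 mΩ stock value and the 500 W load are illustrative assumptions for the sketch, not values measured from any specific card:

```python
# A shunt mod stacks a second resistor in parallel with the stock
# current-sense shunt. The controller still divides the measured voltage
# drop by the stock resistance, so it under-reports current and power.
# All values below are illustrative assumptions.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

r_stock = 0.005                  # 5 mOhm stock shunt (assumed)
r_added = 0.005                  # identical resistor stacked on top
r_eff = parallel(r_stock, r_added)

actual_w = 500.0                 # real board power (assumed)
reported_w = actual_w * r_eff / r_stock
print(f"effective shunt {r_eff * 1000:.2f} mOhm, reported {reported_w:.0f} W")

# Note the added path carries half of the full rail current, which is why
# a pressure contact that sparks or oxidizes is a real failure/fire risk.
```

With two identical resistors, the reported power is exactly half the real draw; the added resistor is not a passive bystander, it conducts half the rail current.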


----------



## Dreams-Visions

Falkentyne said:


> With high-quality thermal paste and the pads swapped out, you shouldn't see more than a 5C temp difference between the cores. Then you can just test the clocks. The one to block will be the one that clocks higher on air after a repaste, since you already know about the -15 MHz / 6C "Pascal" ratio (I'm not fully sure where this starts kicking in...it was 38C on Pascal; I think it's somewhere around 50C on Ampere). Also, the voltage slider in MSI Afterburner can unlock 1.087-1.10v _if_ the chip is cool enough. It works strangely, though: if the chip warms up, the slider stops doing anything, and even after it cools down again the slider still no longer works...weird stuff like that...but I hope I answered your question.


Yep, I'm clear. TY!

Now to figure out what thermal pads to go with. 

Is Kryonaut okay for GPUs? Or would you recommend something different?


----------



## Falkentyne

Kryonaut is fine on GPUs. It's just not ideal if the surface is not 100% flat; better is a thicker paste like Kingpin KPX, Thermalright TFX or Kryonaut Extreme.
Thermal pads: Arctic 6 W/mK pads are the most bang for the buck, even though the price is much higher than a year ago (they used to be $12 for a 145x145mm, 1mm-thick pad; now they're like $20, and 1.5mm pads are even more).

FE cards can use 1.5mm pads: even though the stock pads are a combination of 1mm and 2mm pads, the 2mm pads are _super_ compressed, and 1.5mm pads contact everything.
I do NOT know what pads AIB cards use. Some may use pads of different thicknesses. If you have to guess, 1.5mm pads on everything is usually the best bet, as long as they are compressible.

These used to be $15. Now they're $28. https://www.amazon.com/gp/product/B00UYTU6Z6/
You get a lot of pad space though.

These pads cool a lot better. You don't get a lot of pad space, but you get more than Fujipoly's 60x50 saltine-cracker-sized pads.


https://www.amazon.com/gp/product/B08CGVZ4YG/



You probably won't have enough to do an entire GPU with these, so you would need to buy two sets.

Fujipoly 11 W/mK 1.5mm pads are also very good, but they come in the smallest sheets; the 60x50mm size is a LOT smaller than it looks. And for an FE, you would need MORE than two sets, because I'm not even sure one set can cover just one side of a GPU (since you also need to pad the hot spots, not just the VRMs and the RAM chips!)


----------



## andrvas

rawsome said:


> You can flash whatever you want, as long as it is a bios for a 2-plug card. Just flash the Gaming OC bios, it's the best one currently. Don't forget to use DDU after flashing or your clocks will be messed up.


So the Strix and EVGA xoc bios won't work?


----------



## Dreams-Visions

Falkentyne said:


> Kryonaut is fine on GPUs. It's just not ideal if the surface is not 100% flat; better is a thicker paste like Kingpin KPX, Thermalright TFX or Kryonaut Extreme.
> Thermal pads: Arctic 6 W/mK pads are the most bang for the buck, even though the price is much higher than a year ago (they used to be $12 for a 145x145mm, 1mm-thick pad; now they're like $20, and 1.5mm pads are even more).
> 
> FE cards can use 1.5mm pads: even though the stock pads are a combination of 1mm and 2mm pads, the 2mm pads are _super_ compressed, and 1.5mm pads contact everything.
> I do NOT know what pads AIB cards use. Some may use pads of different thicknesses. If you have to guess, 1.5mm pads on everything is usually the best bet, as long as they are compressible.
> 
> These used to be $15. Now they're $28. https://www.amazon.com/gp/product/B00UYTU6Z6/
> You get a lot of pad space though.
> 
> These pads cool a lot better. You don't get a lot of pad space, but you get more than Fujipoly's 60x50 saltine-cracker-sized pads.
> 
> 
> https://www.amazon.com/gp/product/B08CGVZ4YG/
> 
> 
> 
> You probably won't have enough to do an entire GPU with these, so you would need to buy two sets.
> 
> Fujipoly 11 W/mK 1.5mm pads are also very good, but they come in the smallest sheets; the 60x50mm size is a LOT smaller than it looks. And for an FE, you would need MORE than two sets, because I'm not even sure one set can cover just one side of a GPU (since you also need to pad the hot spots, not just the VRMs and the RAM chips!)


very good, I will look into all of these options.

oh also, something you mentioned: that the cores shouldn't be more than about 5C difference. Is that difference as measured under load? Or idle? At the moment, all of them are around 30C idle at max fan speed. They diverge heavily under load, however.


----------



## Zogge

andrvas said:


> So the Strix and EVGA xoc bios won't work?


I have now put the 500W EVGA bios on a Strix 3090 and it seems to work, but pin 3 for power always shows 3W. Is it really supposed to skip measuring that one? With the Strix bios all readings worked. Anyone else noticed the same?


----------



## Zogge

Same as Frame Chasers got, I see now. But it looks like it works anyway.


----------



## nordschleife

Galax SG with Giga 390w bios, air. 230(+60) core and +248 mem.

port royal 14044

https://www.3dmark.com/3dm/52732430


----------



## Sky3900

Deleted.


----------



## long2905

nordschleife said:


> Galax SG with Giga 390w bios, air. 230(+60) core and +248 mem.
> 
> port royal 14044
> 
> https://www.3dmark.com/3dm/52732430


either the SG cooler is really good or your ambient is very low. how do you manage to keep the card at 50C on air?


----------



## 7empe

Hi,

Just want to share some OC results on my ASUS RTX 3090 Strix OC card and ask for your opinion.

GPU is on air with the stock 390/480W bios, combined with a water-cooled i9-9900K OC'd to 5.1 GHz all-core (5.0 GHz with AVX). I managed a stable 3DMark run with a manual OC in MSI AB: +150 MHz core (up to 2190 MHz initially) and +1130 MHz on memory. Core voltage and power limit (123%) maxed out, temp limit left at the default 83 C. 3DMark graphics scores are:

Time Spy Extreme: 11432
Fire Strike Extreme: 27335
Port Royal: 14679
Max temperature reaches up to 71 C on the default fan curve - e.g. in Port Royal this makes the GPU downclock from 2190 MHz to 2115 MHz. On average it is around 66 C (20 C ambient).

Some questions:

1) What would I gain performance-wise from investing in a water block? Is keeping the GPU clock high and stable (without fluctuations) possible at lower temps?
2) When trying MSI AB OC Scanner or ASUS GPU Tweak II OC Scanner, my PC restarts - is it a known issue? NVIDIA OC Scanner from GeForce Experience works fine though and says +123 MHz clock OC.
3) Is there any trick to bump up performance further while staying on air?

Cheers!


----------



## rawsome

andrvas said:


> So the Strix and EVGA xoc bios won't work?


With these 3-pin bioses you are missing the power from the 3rd pin, effectively giving you less than the 390W you would get with the Gaming OC bios.
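A rough way to picture that power budget: the bios only counts the rails it has power tables (and shunt readings) for. The per-rail limits below are illustrative guesses for the sketch, not values dumped from any real bios:

```python
# Why a 2-plug bios on a 3-plug card loses headroom: the bios only budgets
# and measures the rails it knows about, so a third 8-pin contributes
# nothing to the tracked limit. Per-rail watt values are guesses.
two_plug_rails_w = {"pcie_slot": 66, "8pin_1": 150, "8pin_2": 150}
three_plug_rails_w = {"pcie_slot": 66, "8pin_1": 150, "8pin_2": 150, "8pin_3": 150}

print("2-plug tracked budget:", sum(two_plug_rails_w.values()), "W")
print("3-plug tracked budget:", sum(three_plug_rails_w.values()), "W")
```

With those assumed numbers the 2-plug bios tracks well under 390W, which matches the "less than 390W" behavior described above; it also fits Zogge's observation of pin 3 reading a flat 3W on the cross-flashed Strix.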



nordschleife said:


> Galax SG with Giga 390w bios, air. 230(+60) core and +248 mem.
> 
> port royal 14044
> 
> https://www.3dmark.com/3dm/52732430


yes, that is exactly the score i got with my ventus too. guess that is the maximum you can expect from 390W without a golden chip.


----------



## Falkentyne

If you have a Founders Edition, please download RivaTuner Statistics Server (or install it as part of MSI Afterburner) and enable "Scanline Sync" (you can use a value of 1) with Vsync disabled, to cap the framerate at the monitor's refresh rate. Then use a game that won't limit your FPS and check for any signs of tearing (as if you were trying to cap FPS at the refresh rate with vsync disabled, but without using RTSS).

This will be MUCH easier to do if you are using some form of motion blur reduction / ULMB (backlight strobing) although you can do it with MBR disabled.

Then exit the game, enable Fast Sync in the Nvidia drivers, and do the same thing.
You should have a very smooth image with NO out-of-sync stuttering (note: random, very occasional light stutters are normal). Look especially for badly out-of-sync frames: almost as if the engine were trying to do 164 or 166 FPS at a 165Hz refresh rate, with the frame counter showing a flat 165 fps but the image randomly looking very out of phase.

Vsync on will not show issues.
If you get any tearing (Fast Sync / vsync off) or massive out-of-phase frames (Fast Sync enabled), you may need to do a mod.

Note I've only tested this with Overwatch. I am not sure if it will work in Fortnite or other games.


----------



## Johneey

7empe said:


> Hi,
> 
> Just want to share some OC results on my ASUS RTX 3090 Strix OC card and ask for your opinion.
> 
> GPU is on air with the stock 390/480W bios, combined with a water-cooled i9-9900K OC'd to 5.1 GHz all-core (5.0 GHz with AVX). I managed a stable 3DMark run with a manual OC in MSI AB: +150 MHz core (up to 2190 MHz initially) and +1130 MHz on memory. Core voltage and power limit (123%) maxed out, temp limit left at the default 83 C. 3DMark graphics scores are:
> 
> Time Spy Extreme: 11432
> Fire Strike Extreme: 27335
> Port Royal: 14679
> Max temperature reaches up to 71 C on the default fan curve - e.g. in Port Royal this makes the GPU downclock from 2190 MHz to 2115 MHz. On average it is around 66 C (20 C ambient).
> 
> Some questions:
> 
> 1) What would I gain performance-wise from investing in a water block? Is keeping the GPU clock high and stable (without fluctuations) possible at lower temps?
> 2) When trying MSI AB OC Scanner or ASUS GPU Tweak II OC Scanner, my PC restarts - is it a known issue? NVIDIA OC Scanner from GeForce Experience works fine though and says +123 MHz clock OC.
> 3) Is there any trick to bump up performance further while staying on air?
> 
> Cheers!


1. Sure, running cooler means a higher stable clock speed. How much depends on radiator size, pump, block and so on.
2. Not using this software, lul.
3. Buy an AC, and/or remove the stock thermal paste and apply a new one, or use liquid metal.


----------



## nordschleife

long2905 said:


> either the SG cooler is really good or your ambient is very low. how do you manage to keep the card at 50C on air?


AC on @ 18-20C and an open, very ventilated case  The cooler is not the best, but not bad either.


----------



## nordschleife

rawsome said:


> With these 3-pin bioses you are missing the power from the 3rd pin, effectively giving you less than the 390W you would get with the Gaming OC bios.
> 
> 
> yes, that is exactly the score i got with my ventus too. guess that is the maximum you can expect from 390W without a golden chip.


Still, it's a pretty good improvement, with the original bios I couldn't get more than 13.4-13.5 and now that's my "daily use" score. I'm pretty happy with the SG, didn't expect much.


----------



## Dreams-Visions

7empe said:


> Hi,
> 
> Just want to share some OC results on my ASUS RTX 3090 Strix OC card and ask for your opinion.
> 
> GPU is on air with the stock 390/480W bios, combined with a water-cooled i9-9900K OC'd to 5.1 GHz all-core (5.0 GHz with AVX). I managed a stable 3DMark run with a manual OC in MSI AB: +150 MHz core (up to 2190 MHz initially) and +1130 MHz on memory. Core voltage and power limit (123%) maxed out, temp limit left at the default 83 C. 3DMark graphics scores are:
> 
> Time Spy Extreme: 11432
> Fire Strike Extreme: 27335
> Port Royal: 14679
> Max temperature reaches up to 71 C on the default fan curve - e.g. in Port Royal this makes the GPU downclock from 2190 MHz to 2115 MHz. On average it is around 66 C (20 C ambient).
> 
> Some questions:
> 
> 1) What would I gain performance-wise from investing in a water block? Is keeping the GPU clock high and stable (without fluctuations) possible at lower temps?
> 2) When trying MSI AB OC Scanner or ASUS GPU Tweak II OC Scanner, my PC restarts - is it a known issue? NVIDIA OC Scanner from GeForce Experience works fine though and says +123 MHz clock OC.
> 3) Is there any trick to bump up performance further while staying on air?
> 
> Cheers!


Impressive. What is your ambient temp like?


----------



## wyattneill

6th on Time spy extreme leaderboard now. 5950x


----------



## beamformer

Falkentyne said:


> Don't think this has been asked, but does anyone know the thickness of the thermal pads on the 3090 Founders Edition? I'd like to have the pads I need in advance before opening things up.


I did some measurements with a caliper.
Front: VRAM 1.5mm, VRM 1.8-2.0mm
Back: VRAM 1.0mm, VRM 1.0mm


----------



## Baasha

A couple of my 3090s OC really well.. or so I thought. I can do +200 on the core and +1000 on the mem all day long on benchmarks. however, when gaming, it freezes so I'm having to drop the core OC to +150. if it can bench all day with +200, why does it freeze during gaming (lower continuous load)?

individually, the 3090s OC quite high but under SLI, the OC is quite a bit lower (+150 vs +100).. funky oc is funky


----------



## Bilco

Has anyone heard word on the release date for Aquacomputer's 3090 strix waterblock?


----------



## Falkentyne

beamformer said:


> I did some measurements with a caliper.
> Front: VRAM 1.5mm, VRM 1.8-2.0mm
> Back: VRAM 1.0mm, VRM 1.0mm


Shunt modded and re-padded everything already.

1.5mm pads work fine on the entire board globally, both front and back. If you want to be extra perfect, you can use very compressible (not thick) 2.0mm pads on the PCB (not on the RAM) to help with hotspot heat transfer. Fujipoly pads work well for this.

You can easily use 1.5mm Fujipoly (or Thermalright) pads on the RAM and 2.0mm (1.5mm + 0.5mm stacked) Fujipoly pads for the hotspots. But avoid doing this with the Arctic pads, because they are too thick; just use 1.5mm in that case.

So in other words, If you are using Arctic 1.5mm pads on the backplate side for the RAM, it's best to also use the same 1.5mm Arctic pads for the hotspots, or 2mm compressible pads (like fujipoly or Thermalright 11-12 w/mk) for the hotspots (because the Arctic pads are thicker, although a lot more durable, you don't want to reduce contact pressure on the DRAM).

The benefit of the Arctic pads is you can get a 145x145mm giant pad for comparatively cheap (Even though prices are very inflated compared to a year ago). Fujipoly pads are tiny, not much larger than an oversized Saltine cracker  Thermalright pads are slightly larger than the Fujipolys. Not sure about Grizzly pads or Gelids, but I think gelid pads are soft and compressible as well. Both Gelid and Thermalright have 12 w/mk pads.

If you're keeping the _stock_ RAM pads (they seem to be very high quality) on the backplate side, do NOT use 1.5mm Arctic pads for the hotspots - they're too thick. Just use very soft 1.5mm (non-Arctic) pads or 1mm Arctic pads.


----------



## Falkentyne

Baasha said:


> A couple of my 3090s OC really well.. or so I thought. I can do +200 on the core and +1000 on the mem all day long on benchmarks. however, when gaming, it freezes so I'm having to drop the core OC to +150. if it can bench all day with +200, why does it freeze during gaming (lower continuous load)?
> 
> individually, the 3090s OC quite high but under SLI, the OC is quite a bit lower (+150 vs +100).. funky oc is funky


Drop the RAM overclock to +0 then test +200 on the core first.


----------



## bmgjet

Baasha said:


> A couple of my 3090s OC really well.. or so I thought. I can do +200 on the core and +1000 on the mem all day long on benchmarks. however, when gaming, it freezes so I'm having to drop the core OC to +150. if it can bench all day with +200, why does it freeze during gaming (lower continuous load)?
> 
> individually, the 3090s OC quite high but under SLI, the OC is quite a bit lower (+150 vs +100).. funky oc is funky


Difference between benchmarks/stress tests and gaming is the peak clocks.
For example, in a benchmark the card will most likely hit the power limit and then sit at middling clock speeds, around 2 GHz.
In a game, it will hit a light-load section and boost right up to the max of the curve, which at +200 will be 2280 MHz on some cards.
You might have better luck setting a custom curve to cap the max speed it boosts to under light loads.
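That suggestion amounts to applying the offset and then flattening the top of the voltage/frequency curve, which is what Afterburner's curve editor lets you do by hand. The curve points below are invented for illustration, not a real card's table:

```python
# Toy V/F curve: apply a core offset, then clamp the top so light loads
# can't boost past a known-stable clock. All points are invented.
base_curve = [(0.850, 1905), (0.950, 2010), (1.000, 2085),
              (1.050, 2145), (1.081, 2205)]

def tuned_curve(curve, offset_mhz, cap_mhz):
    """Shift every point by the offset, then flatten anything above the cap."""
    return [(v, min(mhz + offset_mhz, cap_mhz)) for v, mhz in curve]

for volts, mhz in tuned_curve(base_curve, offset_mhz=200, cap_mhz=2130):
    print(f"{volts:.3f} V -> {mhz} MHz")
```

The cap only bites at the top of the curve: low-voltage points still get the full offset, but the light-load boost bins that were causing the game freezes get flattened to the chosen ceiling.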


----------



## Jpmboy

hey folks, sorry for the "drive-by" question, but has there been any new bios for the FTW3 in the last few weeks? (besides the 500W OC bios)
✌


----------



## beamformer

Falkentyne said:


> Shunt modded and re-padded everything already.
> 
> 1.5mm pads work fine on the entire board globally, both front and back. If you want to be extra perfect, you can use very compressible (not thick) 2.0mm pads on the PCB (not on the RAM) to help with hotspot heat transfer. Fujipoly pads work well for this.
> 
> You can easily use 1.5mm Fujipoly (or Thermalright) pads on the RAM and 2.0mm (1.5mm + 0.5mm stacked) Fujipoly pads for the hotspots. But avoid doing this with the Arctic pads, because they are too thick; just use 1.5mm in that case.
> 
> So in other words, If you are using Arctic 1.5mm pads on the backplate side for the RAM, it's best to also use the same 1.5mm Arctic pads for the hotspots, or 2mm compressible pads (like fujipoly or Thermalright 11-12 w/mk) for the hotspots (because the Arctic pads are thicker, although a lot more durable, you don't want to reduce contact pressure on the DRAM).
> 
> The benefit of the Arctic pads is you can get a 145x145mm giant pad for comparatively cheap (Even though prices are very inflated compared to a year ago). Fujipoly pads are tiny, not much larger than an oversized Saltine cracker  Thermalright pads are slightly larger than the Fujipolys. Not sure about Grizzly pads or Gelids, but I think gelid pads are soft and compressible as well. Both Gelid and Thermalright have 12 w/mk pads.
> 
> If you're keeping the _stock_ RAM pads (they seem to be very high quality) on the backplate side, do NOT use 1.5mm Arctic pads for the hotspots - they're too thick. Just use very soft 1.5mm (non-Arctic) pads or 1mm Arctic pads.


Thanks for the info. I just tore down my 3090 FE to get some measurements today and plan to buy some thermal pads. I previously had some Arctic pads, but their website and datasheet contradict each other on the thermal conductivity coefficient: the Arctic website claims 1.2 W/mK, but their datasheet claims 6 W/mK. I am not very confident about putting the Arctic pads on my 3090 FE. So I searched on Digikey.com and found the Panasonic EYGT graphite pads (13 W/mK). The price seems not too bad, and they have 1.0mm, 1.5mm, and 2.0mm options.


----------



## Sky3900

beamformer said:


> Thanks for the info. I just tore down my 3090 FE to get some measurements today and plan to buy some thermal pads. I previously had some Arctic pads, but their website and datasheet contradict each other on the thermal conductivity coefficient: the Arctic website claims 1.2 W/mK, but their datasheet claims 6 W/mK. I am not very confident about putting the Arctic pads on my 3090 FE. So I searched on Digikey.com and found the Panasonic EYGT graphite pads (13 W/mK). The price seems not too bad, and they have 1.0mm, 1.5mm, and 2.0mm options.


You know those pads are electrically conductive right? Be careful where you put those.


----------



## Falkentyne

beamformer said:


> Thanks for the info. I just tore down my 3090 FE to get some measurements today and plan to buy some thermal pads. I previously had some Arctic pads, but their website and datasheet contradict each other on the thermal conductivity coefficient: the Arctic website claims 1.2 W/mK, but their datasheet claims 6 W/mK. I am not very confident about putting the Arctic pads on my 3090 FE. So I searched on Digikey.com and found the Panasonic EYGT graphite pads (13 W/mK). The price seems not too bad, and they have 1.0mm, 1.5mm, and 2.0mm options.


The Arctics are indeed 6 W/mK. Graphite thermal pads must be used with the same care as LIQUID METAL (they are electrically conductive!). They also do not conform very well to uneven surfaces and can NOT be used for hot-spot heat transfer. Just use Gelid thermal pads (12 W/mK) or Thermalright thermal pads (12 W/mK).


----------



## beamformer

Sky3900 said:


> You know those pads are electrically conductive right? Be careful where you put those.


Well, the Panasonic EYGT datasheet claims a volume resistivity of 4×10^5 Ω·cm. I am not sure what tech they applied; I know carbon is normally conductive.


----------



## beamformer

Falkentyne said:


> The Arctics are indeed 6 w/mk. Graphite thermal pads must be used with the same care as thermal paste, they also do not conform very well to uneven surfaces and can NOT be used for hot spot heat transfer. Just use Gelid thermal pads (12 w/mk) or Thermalright thermal pads (12 w/mk).


Anyway, thank you for the suggestion. The Panasonic EYGT datasheet parameters seem very impressive, and the temperature resistance and compressibility seem not to be an issue. I want to give it a try.


https://industrial.panasonic.com/cdbs/www-data/pdf/AYA0000/AYA0000C53.pdf


----------



## defiledge

the guy has the 400w bios. see if we can get him to share it.


----------



## Johneey

defiledge said:


> the guy has the 400w bios. see if we can get him to share it.


Everyone has the 400 watt bios already lol, u can find it on TechPowerUp, but it's not as good as the Gigabyte 390W.


----------



## defiledge

Johneey said:


> Everyone has the 400 watt bios yet lol u can find in tech power but it’s not as good as the gigabyte 390w


can you link the 400w bios? and why is gaming oc better


----------



## ExDarkxH

[


----------



## Johneey

ExDarkxH said:


> Oh come on dude... at 32 C you couldn't even average 2200 on a short benchmark...
> You're not even close to what he was referring to, and that's 24/7 stable.
> 
> at a given temp, anyone can achieve 2200 unless we talk about


Ok if u mean


----------



## Falkentyne

Just btw.

If you want to repad the entire card with 1.5mm thermal pads (you can use them on both front and back without a problem, and I _THINK_ the Thermalright 12 w/mk pads are simply a "rebrand" or the same supplier as the Fujipoly 11/mk pads--they feel like 100% the exact same material!, you are going to need *FOUR* packages of the Thermalright (80mm * 45mm * 1.5mm) or Fujipoly pads (60mm * 50mm * 1.5mm) pads. They are very small and it will take you close to 2 complete packages to do one full side of the card (VRAM and VRM hotspots on front, for two packages, VRAM and VRM's on back on GPU chip side for two more packages).

These pads are very soft, compressible, and high quality. I just re-padded the entire front side of the card with Thermalright and Fujipoly pads.
The back still has the original pads on the GDDR6X, and two long 1.5mm strips of Arctic pads on the VRMs.


----------



## sugi0lover

Hello guys,
I saw some posts saying you can adjust voltage from MSI Afterburner.
Even after I checked the voltage options, my voltage bar is greyed out (inactive).
Do you need any special option to activate the voltage bar? Thanks in advance!


----------



## Zogge

You select that in settings, first tab. There are 3 or 4 boxes there stating voltage control, just read and tick those you want to enable.


----------



## Falkentyne

beamformer said:


> Thanks for the info. I just teardown my 3090 FE to get some measurements today and plan to buy some thermal pad. Previously, I had some Arctic pad. But their website and datasheet are contradicting each other on the thermal conductivity coefficient. The Arctic website claims 1.2 W/mK, but their datasheet claim 6W/mK. I am not very confident about putting the Arctic pad on my 3090 FE. So I searched on Digikey.com and found the Panasonic EYGT graphite pad (13 W/mK). It seems the price seems not too bad, and they have 1.0mm, 1.5mm, 2.0mm options.


I looked more into the Arctic pads.
Apparently there is an Arctic basic pad (pink) and an Arctic advanced pad. The basic pad (ATP2012: ACTPD00020A, ACTPD00024A) is pink and 1.2 W/mK. The advanced pad (ATP2560) has different SKUs (ACTPD00004A, ACTPD00005A, ACTPD00006A for 0.5mm, 1.0mm and 1.5mm), is blue, and is 6.0 W/mK. Seems like Arctic needs to update their website 

Just remember to avoid the PINK pads.


----------



## sugi0lover

Zogge said:


> You select that in settings, first tab. There are 3 or 4 boxes there stating voltage control, just read and tick those you want to enable.


I did that and it worked with my 2080 Ti, but not with the 3090. Two of my friends with 3090s are in the same situation.


----------



## Keninishna

I got a Gigabyte Vision OC card and shunted it, and I can't seem to break 14k in Port Royal, which seems odd. I'm pretty sure the shunt is working because I almost never run into the power limit. I can get 2 GHz stable at 982 mV, and Port Royal scores 13.9k. Temps are reasonable at 64C. I replaced the paste on the GPU and used some expensive Fujipoly pads, but only bought enough for the front side. At 1V I get thermal throttling. Should I bother replacing the pads on the back side, or just start saving for water cooling?


----------



## ExDarkxH

Johneey said:


> Ok if u mean


It posted before I finished, my bad. You're still getting phenomenal numbers/performance from your card.


----------



## HyperMatrix

Last set of pads I bought were the Fujipoly Ultra Extreme 17.0 W/mK. Pricey. But good. I don't believe they're electrically conductive. If they are, my laptop should have died by now. 

Edit: BTW, anyone know just how much thermal pad we need to redo the whole card? Since these pads are pricey, I don't like to buy more than I need.

Edit 2: NVM, Falkentyne said we need 4 packs of the 60x50x1.5. Just another $240 CAD to add to the cost of the 3090.  Considering AquaComputer is also asking €110 to ship to Canada for some reason....that's another $500 there. So a total of $3350 CAD, or almost $2600 USD. By the end of it...it may have been cheaper to just buy a KPE

Edit 3: I just remembered there will be a lot less thermal pad usage with the AquaComputer blocks because they use direct-contact with the memory. So just have to get good non-conductive paste. *Also for the VRMs on the AquaComputer blocks, CHECK WITH THEM BEFORE ORDERING THERMAL PADS. For the Pascal Titan X blocks, I just checked my emails from them and they were using spacing for 0.5mm pads.*

Edit 4: So now I still need to know how much thermal pad will be required for the AquaComputer blocks, since you're only using pads on the VRMs and not on the memory. I'm thinking just 1 or 2 of the 60x50x0.5mm packs will be more than enough.


----------



## 7empe

Dreams-Visions said:


> Impressive. What is your ambient temp like?


Ambient is 20-21 C.


----------



## shiokarai

Sooo... I finally got a Strix 3090 yay  but.... it's a total dud!  max core OC +165, max mem OC+550(!!!!) - 100% fans, open bench case etc. nothing helps. Theoretically everything is fine, card is within spec but oh my... what a disappointment


----------



## Nizzen

shiokarai said:


> Sooo... I finally got a Strix 3090 yay  but.... it's a total dud!  max core OC +165, max mem OC+550(!!!!) - 100% fans, open bench case etc. nothing helps. Theoretically everything is fine, card is within spec but oh my... what a disappointment


It's not a dud, it's your cooling that sux 
+ 165 is over 2200mhz on good cooling 
The memory needs extra cooling too, to get higher.


----------



## Sky3900

Nizzen said:


> It's not a dud, it's your cooling that sux
> + 165 is over 2200mhz on good cooling
> The memory needs extra cooling too, to get higher.


Agreed, if you are getting a stable +165, that is great! I shunt modded my FE and the best it will do is +98, or 2100 MHz max when not power limited. Gaming with a 3090 running a consistent 2100 MHz clock is phenomenal.


----------



## shiokarai

Nizzen said:


> It's not a dud, it's your cooling that sux
> + 165 is over 2200mhz on good cooling
> The memory needs extra cooling too, to get higher.


So you're saying it's actually a good one?  Also, 65C on air is not bad cooling lol (my GPU core temps; the mem is probably much hotter due to the backplate etc.). Also, do you have a block for a Strix already? Which one? Is it any good? (I have a preorder on the Aquacomputer one, but I'm hesitant since I don't like paste on the mem - it's a mess, and I didn't like it with my RTX 2080 Ti Aquacomputer blocks.)

EDIT: about 14400-14500 is my max in PR with this settings ([email protected])


----------



## 7empe

shiokarai said:


> So you're saying it's actually a good one?  Also 65c on air is not a bad cooling lol (my core GPU temps, albeit mem probably much more due to the backplate etc.). Also, you have a block for a strix already? which one? is it any good? (I have preorder on the aquacomputer one but I'm hesitant as I don't like the paste on the mem (it's a mess, didn't like it with my rtx 2080 ti aquacomputer blocks)).
> 
> EDIT: about 14400-14500 is my max in PR with this settings ([email protected])


I have the same card, and +165 MHz is unstable with higher mem OC. I got PR score of 14723 with +150 and +1210 on memory. Maybe this is worth trying? I am on air too ([email protected]).


----------



## shiokarai

7empe said:


> I have the same card, and +165 MHz is unstable with higher mem OC. I got PR score of 14723 with +150 and +1210 on memory. Maybe this is worth trying? I am on air too ([email protected]).


no dice, anything above +550 mem -> artifacts/crash/freeze (even with +0 on core)


----------



## OC2000

Nizzen said:


> It's not a dud, it's your cooling that sux
> + 165 is over 2200mhz on good cooling
> The memory needs extra cooling too, to get higher.


I'm not sure about my Strix OC either (haven't repasted it). I can only get +140 core on mine, and I could get +1100 on the memory, but that seems to have dropped to +1000 now as well. As soon as it hits 2145, or holds 2130 for a short period of time, the screen freezes. Is this down to silicon or poor cooling? I have noticed it does this in the high 70s and hates voltages over 1.065V.

I do have a water block coming which I'm hoping will let me clock it higher and sustain over 2100 MHz, but I'm wondering if it's even worth it if my Strix is not a good overclocker. The max PR score I can get is close to 14600.


----------



## Zogge

+165 on strix is quite good in my view. I am around +125-130 on mine. But I need to tweak some more. Sustained around 2130 in gaming.


----------



## HyperMatrix

shiokarai said:


> no dice, anything above +550 mem -> artifacts/crash/freeze (even with +0 on core)


Are you saying your card freezes when you go above 20,600 MHz effective?



OC2000 said:


> Im not sure about my Strix OC either (Not repasted it). I can only get +140 Core on mine and I could get 1100, but that seems to have lowered to +1000 on the mem now as well. As soon as it hits 2145 or 2130 for a short period of time the screen freezes. Is this down to silicon or poor cooling. I have noticed it does this in the high 70s and hates voltages over 1.065.
> 
> I do have a water block coming which I'm hoping will allow me to clock it higher and sustain over 2100 mhz, but wondering if its even worth it if my Strix is not a good overclocker. Max PR score I can get is close to 14600


A few people (including myself) seem to be stuck at around 2130-2145 on the Strix. Using the EVGA BIOS, it wouldn't crash at 2130 MHz while under 53C, so shunt modding should make that stable with a water block. It's hard to know if it's poor silicon, but it's definitely not great silicon. It's average at best, and I've seen many cheaper cards (TUF or FE, for example) perform better. 

I'm not entirely sure how to guess the OC potential under water with a shunt mod. I know that with light workloads (around 40% GPU usage) I was able to hit 2175 MHz while temps were good, and at lower GPU usage (around 25%) I was able to hit over 2200 MHz. But that's definitely not doable under full load. The question is whether it's a temperature and power limit issue (which can be resolved with water cooling and shunt mods) or a voltage issue that will persist regardless of power limit and thermals. 

I gotta say, one really bad thing about Ampere is that oftentimes GPU OC crashes result in a full system hang/reboot. This wasn't an issue with Pascal (my last card), so proper testing can be hard.


----------



## shiokarai

HyperMatrix said:


> Are you saying your card freezes when you go above 20600MHz?


When I put more than +550 on the mem in Afterburner, Port Royal always crashes: sometimes a CTD, sometimes a system freeze, sometimes a reset, sometimes artifacts plus a freeze/crash. +500 is always OK. All this regardless of core OC. Low-binned Micron chips? Technically it's within spec...
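For anyone following the math here: the stock 3090 runs its GDDR6X at roughly 19,500 MHz effective (19,504 per spec), and an Afterburner memory offset counts double in the effective data rate, which is how +550 lands at about 20,600 MHz. A minimal sketch of that arithmetic (the 19,500 base is rounded for illustration):

```python
# Map an MSI Afterburner memory offset to an approximate effective
# GDDR6X data rate on a stock RTX 3090. The offset is counted twice
# in the effective rate (so +550 -> ~20,600 MHz).
BASE_EFFECTIVE_MHZ = 19500  # stock effective data rate, rounded

def effective_mem_clock(offset_mhz: int) -> int:
    """Approximate effective memory clock for a given Afterburner offset."""
    return BASE_EFFECTIVE_MHZ + 2 * offset_mhz

for offset in (0, 500, 550, 1000):
    print(f"+{offset} -> {effective_mem_clock(offset)} MHz effective")
```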


----------



## HyperMatrix

shiokarai said:


> When I put more than +550 on the mem in afterburner, Port Royale always crashes -> sometimes CTD, sometimes system freeze, sometimes reset, sometimes artifacts+freeze/crash. +500 is always OK. All this irrelevant of core OC. Low binned micron chips? Technically it's within spec...


That's really unfortunate and not normal for Strix memory OC. Most have been able to hit over +1000; I think that's the lowest I've heard of for this card. However... I'd trade the lower memory OC for that nice GPU clock OC your card seems to be capable of. Haha.


----------



## shiokarai

HyperMatrix said:


> That's really unfortunate and not normal for Strix memory OC. Most have been able to hit over +1000. I think the lowest I've heard of for this card. However...I'd trade the lower memory OC for that nice GPU clock OC your card seems to be capable of. Haha.


well, AT LEAST it's something, right?  honestly, the mem OC is so low I'm considering selling mine and trying another one... or at least disassembling it and checking if there are any problems: hot spots, maybe a thermal pad with the protective film still on (Asus QC is so lax nowadays), or something. Could this be a defective mem chip? Anyways, not so thrilled about that Strix 3090 of mine.


----------



## OC2000

HyperMatrix said:


> A few people (including myself) seem to be stuck at around 2130-2145 for the Strix. Using the EVGA bios, it wouldn't crash at 2130MHz while under 53C. So shunt modding should make that stable with a water block. It's hard to know if it's poor silicon. But it's definitely not great silicon. It's average at best. And I've seen many cheaper cards (TUF, or FE, for example) perform better.
> 
> I'm not entirely sure how to guess the OC potential under water with shunt mod. I know that with light workloads (around 40% GPU usage) I was able to hit 2175MHz while temps were good. And I know at lower GPU usage (around 25%) I was able to hit over 2200MHz. But definitely not doable under full load. The question is whether it's a temperature and power limit issue (which can be resolved with water cooling and shunt mods) or voltage issue that will persist regardless of power limit and thermals.
> 
> I gotta say one really bad thing about Ampere is that often times GPU OC crashes result in a full system hang/reboot. This wasn't an issue with Pascal (my last card). So proper testing can be hard.


That sounds like my situation. With Turing, the process of overclocking was relatively simple and linear; finding the max clock was straightforward. With Ampere, I'm finding that in Time Spy, for example, it runs fine, then suddenly spikes to 2145 and crashes or resets immediately. Trying to set a curve to counter this doesn't work either. I have noticed the EVGA 500W BIOS seems to work better for sustaining higher frequencies, so I'm not sure if it's an Asus BIOS thing. Regardless, other people seem to have no problem running theirs stock on air at over 2200 MHz, which leads me to believe this GPU barely passed QC for a binned OC variant.
I wonder how hard it is going to be to get a Kingpin edition, especially since I'm in the UK. I may have to look into other brands' Grade A binned equivalents. I wouldn't risk trying to get another Strix OC.


----------



## krs360

Just got the TUF OC 3090 today and I'm noticing that it's performing really badly in Port Royal, even vs the Zotac Trinity 3090. Looking at the clock speeds, it barely seems to move from 1740 MHz; it's like it's just not getting the GPU boost speeds. Anyone know why this might be happening? It looks like it even ignores the OC I set in Afterburner.


----------



## long2905

krs360 said:


> Just got the TUF OC 3090 today and I'm noticing that it's performing really badly in Port Royal even vs the Zotac Trinity 3090. Looking at the clock speeds it barely seems to move from 1740 mhz, it's like it's just not getting the GPU boost speeds. Anyone know why this might be happening? Looks like it even ignores the OC I set in Afterburner.


Run DDU and see if that helps.

I squeezed a bit more out of the iChill X4 and got 13909 in Port Royal with 100% fan. The cooling on this card is just awful.



https://www.3dmark.com/pr/485282


----------



## krs360

What score were you getting stock on the card without the GB BIOS? I'll run DDU again. It really looks like the card is fighting the power limit, which is reported as high as 120% and as low as 90%. Looks like it's having a little battle.


----------



## ExDarkxH

A lot of you guys claiming you have bad cards actually have really good cards.
I'm not sure what you were expecting?

Anything over 2,100 is good. 2,200 is for golden chips, and even then, I haven't seen anyone sustain it with a 20-22C room temp and normal cooling conditions.

I know we are jaded here, but on most forums they seem happy to be at 2,000 MHz.
In this forum half of us are in the Hall of Fame and we say we have bad cards...


----------



## GTANY

HyperMatrix said:


> Are you saying your card freezes when you go above 20600MHz?
> 
> 
> 
> A few people (including myself) seem to be stuck at around 2130-2145 for the Strix. Using the EVGA bios, it wouldn't crash at 2130MHz while under 53C. So shunt modding should make that stable with a water block. It's hard to know if it's poor silicon. But it's definitely not great silicon. It's average at best. And I've seen many cheaper cards (TUF, or FE, for example) perform better.
> 
> I'm not entirely sure how to guess the OC potential under water with shunt mod. I know that with light workloads (around 40% GPU usage) I was able to hit 2175MHz while temps were good. And I know at lower GPU usage (around 25%) I was able to hit over 2200MHz. But definitely not doable under full load. The question is whether it's a temperature and power limit issue (which can be resolved with water cooling and shunt mods) or voltage issue that will persist regardless of power limit and thermals.
> 
> I gotta say one really bad thing about Ampere is that often times GPU OC crashes result in a full system hang/reboot. This wasn't an issue with Pascal (my last card). So proper testing can be hard.


Against a potential voltage issue, you can increase the GPU voltage if needed: an Elmor EVC2S can easily be connected to the Strix to alter voltages.


----------



## HyperMatrix

GTANY said:


> Against a potential voltage issue, you can increase the GPU voltage if needed : an Elmor EVC2S can be easily connected the the Strix to alter voltages.


Haha definitely something I've considered. It'll really depend on what kind of clocks I can get under water. If I get less than 2145MHz I may end up going the Elmor route and aiming for 2200MHz.


----------



## vmanuelgm




----------



## Falkentyne

shiokarai said:


> well, AT LEAST it's something, right?  honestly, mem OC is so low I'm considering selling mine and try another one... or at least disassemble and check if there are any problems, hot spots, maybe thermal pad with the protective film intact (asus QC is so low nowadays) or something. could this be a defective mem chip? anyways, not so thrilled about that strix 3090 of mine.


What is the max overclock you can do on the core, by looping "heaven" benchmark, with +500 memory?

Both memory and core overclocks can be improved by repasting the core with Grizzly Kryonaut Extreme or Thermalright TFX (core OC) and changing the thermal pads to 11/12 W/mK Fujipoly or Thermalright Odyssey pads, which I believe come from the exact same supplier, so they may be the exact same pad. Gelid 12 W/mK pads are also very good. I would avoid the Fujipoly 17 W/mK pads unless you are made of money, because those are EXTREMELY expensive, and it will take TWO packages of either Fujipoly, Thermalright or Gelid pads to do *ONE* side of the card.

You also need to do the hotspots on the backside of the card as well. Doing the hotspots can really help both core (mainly core and microstuttering) and possibly memory OC, and can fix the microstuttering issues some cards may have (obvious with Fast Sync enabled and FPS capped at the monitor refresh rate by using RTSS "Scanline Sync" combined with Fast Sync, but only in games that can maintain fps = refresh rate). Unfortunately, I have only seen hotspots measured on the FE cards, by Igor's Lab. I don't know if he measured this on AIB cards.

Also, some people who claim to do +1000 on the memory are actually getting error correction and negative or very poor scaling from error retries, with lower Port Royal or Time Spy graphics scores, and would do better with +500 to +600 memory. Also, reducing the memory OC can sometimes let you OC the core another +15 to +30 MHz offset (15 MHz steps). I got a higher score with +135 / +500 than someone using +210 / +1100, because he was getting negative scaling and hammering the 400W power limit as well.
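That negative-scaling point is worth illustrating: because GDDR6X error correction retries silently, a benchmark score can peak at a moderate offset and then fall off even though the card still "passes" at higher offsets. A toy sketch with invented scores (not measurements from any real card):

```python
# Hypothetical benchmark scores per memory offset, illustrating why the
# highest offset that "passes" is not necessarily the fastest setting:
# past the sweet spot, silent error-correction retries eat the gains.
# All numbers below are made up for illustration.
scores = {
    0:    14100,
    300:  14350,
    500:  14650,  # sweet spot
    800:  14500,  # retries start costing real bandwidth
    1100: 14300,  # "stable" but slower than +500
}

best_offset = max(scores, key=scores.get)
print(f"best offset: +{best_offset} ({scores[best_offset]} points)")
```

The practical takeaway matches the post above: walk the offset up in steps while watching the score, and keep the offset that scores highest, not the highest one that merely avoids crashing.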

I used the hotspot locations shown on his 3080 page for my 3090. Even though the 3090 runs cooler by default, my unmodded, unopened 3090 FE had _microstuttering_ issues until I repadded the entire card and added extra thermal pads. I am not sure about the layout of the pads on the Strix, but you can use this reference to see where you should apply extra thermal pads and check whether it helps you as well.









NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X | igor'sLAB (www.igorslab.de)

NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything | Page 2 | igor'sLAB (www.igorslab.de)

I actually combined the pad layout from the 3080 hotspot picture along the "V" area, with the 3090, which was missing a pad on the other side of the V. This gave me a large improvement.


----------



## HyperMatrix

Falkentyne said:


> I would avoid the fujipoly 17 w/mk pads unless you are made of money, because those are EXTREMELY expensive, and it will take TWO packages of either Fujipoly, Thermalright or Gelid pads to do *ONE* side of the card.


You won't need 2 packs per side with the Aquacomputer block, mate. Remember, it doesn't use pads between the block and the memory modules; it uses thermal paste for direct-contact cooling. It also only uses 0.5mm pads for the rest, like the VRMs. At least this was the case with the last 2 blocks I bought from them. So to be on the safe side, you need 2 packs of the 60x50x0.5mm, and it's just $22.50 per pack: Amazon.com: Fujipoly/mod/smart Ultra Extreme XR-m Thermal Pad - 60 x 50 x 0.5 - Thermal Conductivity 17.0 W/mK: Computers & Accessories

edit: just checked the igorslab links you posted. I wonder how placement is going to work with the Aquacomputer blocks, in particular the spacing between the block and those components. Either way, I don't think a pad with 50% higher thermal transfer is a bad investment for an extra $30-$40 on such an expensive card.


----------



## Falkentyne

HyperMatrix said:


> You won't need 2 packs per side with the AquaComputer block mate. Remember, it doesn't use pads between the block and the memory modules. It uses thermal paste for direct contact cooling. Also only uses 0.5mm pads for the rest like the VRMs. At least this was the case with the last 2 blocks I bought from them. So to be on the safe side, you need 2 packs of the 60x50x0.5mm. And it's just $22.50 per pack: Amazon.com: Fujipoly/mod/smart Ultra Extreme XR-m Thermal Pad - 60 x 50 x 0.5 - Thermal Conductivity 17.0 W/mK: Computers & Accessories
> 
> edit: just checked the igorslab links you posted. I wonder how placement is going to work with the aquacomputer blocks, and in particular the spacing between the block and those components. Either way, I don't think a pad that has 50% higher thermal transfer is a bad investment for an extra $30-$40 on such an expensive card.


Yeah, my stock card had a microstuttering issue I only picked up when using "Scanline Sync" in RivaTuner RTSS, especially with Fast Sync enabled on top of Scanline Sync, in Overwatch.
The microstuttering went completely away after adding 1.5mm pads to the hotspots. It made a huge difference, and if it worked for me, there have to be other users with the same issue who probably haven't noticed it at all. You won't notice it unless you use Scanline Sync with Fast Sync (frame hitching out of sync) or watch the re-trace tearing lines (Fast Sync disabled, VSync disabled, Scanline Sync enabled). With VSync enabled you will not see any issues.

What was strange is that there was no thermal alert anywhere. 

Also, the hotspot rework allowed me to increase my core overclock to +150; previously, the max I could do was +135. The memory is at +500. I just passed an hour of looping Heaven at +150. I have 1.5mm Fujipoly 11 W/mK + Thermalright Odyssey pads (they appear to be the exact same pad) along the entire backplate side for both RAM and hotspots, and it worked splendidly. It's possible AIB users will benefit too, and maybe (some) block users for the backplate.


----------



## devilhead

Zogge said:


> +165 on strix is quite good in my view. I am around +125-130 on mine. But I need to tweak some more. Sustained around 2130 in gaming.


It's hard to say, because all cards are different (silicon lottery): for one card +130 will be 2130 MHz, for another, luckier card +130 will be 2150 MHz. 
You can check your stock voltage/frequency curve in MSI Afterburner; let's say at 950mV, my 3090 Strix is 1905 MHz stock.


----------



## Nizzen

vmanuelgm said:


>


You know you will get more fps with DX12, right? DX12 works great with both the 2080 Ti Strix and the 3090 Strix here.


----------



## OC2000

Falkentyne said:


> What is the max overclock you can do on the core, by looping "heaven" benchmark, with +500 memory?
> 
> Both memory and core overclock can be improved by repasting the core with Grizzly Kryonaut Extreme or Thermalright TFX (core OC), changing the thermal pads to 11 / 12 w/mk Fujipoly or Thermalright Odyssey pads, which I believe come from the exact same supplier so they may be the exact same pad. Gelid 12 w/mk pads also are very good. I would avoid the fujipoly 17 w/mk pads unless you are made of money, because those are EXTREMELY expensive, and it will take TWO packages of either Fujipoly, Thermalright or Gelid pads to do *ONE* side of the card.
> 
> You also need to do the hotspots on the backside of the card as well. Doing hotspots can really help both core (mainly core and microstuttering) and possibly memory OC, and can fix some microstuttering issues some cards may have (obvious with Fast Sync enabled and FPS capped at monitor refresh rate by using RTSS "Scanline Sync" combined with Fastsync (only on games that can maintain fps=refresh rate). Unfortunately, I have only seen hotspots measured on the FE cards, by Igor's Lab. I don't know if he measured this on AIB cards.
> 
> I actually combined the pad layout from the 3080 hotspot picture along the "V" area, with the 3090, which was missing a pad on the other side of the V. This gave me a large improvement.


Thanks for the great explanation! 
Which of the Grizzly Kryonaut Extreme or Thermalright TFX would you recommend over the other? Would you also recommend aftermarket pads over the EK ones that come with the waterblock? I have bought the Fujipoly 17 W/mK before to cover both of my 1080 Tis in SLI and the cost was extreme lol, but I'm not sure it made a huge improvement. I will look into the hotspot suggestion though. 
I have a feeling my card isn't pasted well, as it gets extremely hot at stock playing games, to the point I feel uncomfortable. I may just be used to a water-cooled GPU though.


----------



## Twintale

vmanuelgm said:


>


How does your 3090 stay at sub 200 W constantly with max power limit?


----------



## OC2000

HyperMatrix said:


> Haha definitely something I've considered. It'll really depend on what kind of clocks I can get under water. If I get less than 2145MHz I may end up going the Elmor route and aiming for 2200MHz.


Are you not under water yet? I got the impression you were 😄 I would love to see a light-load 2200 MHz! Or even 2160!


----------



## shiokarai

Falkentyne said:


> What is the max overclock you can do on the core, by looping "heaven" benchmark, with +500 memory?
> 
> Both memory and core overclock can be improved by repasting the core with Grizzly Kryonaut Extreme or Thermalright TFX (core OC), changing the thermal pads to 11 / 12 w/mk Fujipoly or Thermalright Odyssey pads, which I believe come from the exact same supplier so they may be the exact same pad. Gelid 12 w/mk pads also are very good. I would avoid the fujipoly 17 w/mk pads unless you are made of money, because those are EXTREMELY expensive, and it will take TWO packages of either Fujipoly, Thermalright or Gelid pads to do *ONE* side of the card.
> 
> You also need to do the hotspots on the backside of the card as well. Doing hotspots can really help both core (mainly core and microstuttering) and possibly memory OC, and can fix some microstuttering issues some cards may have (obvious with Fast Sync enabled and FPS capped at monitor refresh rate by using RTSS "Scanline Sync" combined with Fastsync (only on games that can maintain fps=refresh rate). Unfortunately, I have only seen hotspots measured on the FE cards, by Igor's Lab. I don't know if he measured this on AIB cards.
> 
> Also, some people who claim to do 1000 on the memory are actually getting error correction and negative or very poor scaling from error retries, and lower Port Royal or Timespy graphic scores, and would do better with +500 to +600 memory. Also, reducing the memory OC can sometimes allow you to OC the core another +15 to +30 mhz offset (15 mhz steps). I got a higher score with +135 / +500 than someone using +210 +1100, because he was getting negative scaling, and hammering the 400W power limit as well.
> 
> I used the hotspot location shown on his 3080 page for my 3090, even though the 3090 ran cooler by default, my unmodded unopened 3090 FE card had _microstuttering_ issues until I repadded the entire card, and added extra thermal pads. I am not sure about the layout of pads on the Strix, but you can use this reference to see where you should apply extra thermal pads to see if this helps you as well.
> 
> NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X | igor'sLAB (www.igorslab.de)
> 
> NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything | Page 2 | igor'sLAB (www.igorslab.de)
> 
> I actually combined the pad layout from the 3080 hotspot picture along the "V" area, with the 3090, which was missing a pad on the other side of the V. This gave me a large improvement.


Thanks, a lot of good help here.  I'll be opening the card, just not yet; I'm waiting for the Aquacomputer (or other brand) block for it. Also, opening the card = warranty bye bye (there's an annoying warranty sticker, firmly attached, too; maybe you have a way to get around the sticker without damaging it?). I will try to bench further; those were preliminary findings (which make me wonder what's up with my very, very low mem OC vs people posting here).


----------



## shiokarai

Twintale said:


> How does your 3090 stay at sub 200 W constantly with max power limit?


Shunt mod
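For context on why a shunt mod produces readings like that: the card measures power via the voltage drop across small sense resistors, so stacking a second shunt in parallel halves the sensed drop, and the controller reports roughly half the true draw. A rough sketch of the arithmetic, assuming a typical 5 mOhm stock shunt and an identical stacked shunt (actual values vary by board):

```python
# Why a shunt-modded card reports ~half its real power draw:
# the controller converts the sensed voltage drop to watts assuming
# the stock shunt resistance, but the parallel stack halves the drop.
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

STOCK_SHUNT = 0.005   # 5 mOhm stock sense shunt (assumed typical value)
STACKED     = 0.005   # identical shunt soldered on top

effective = parallel(STOCK_SHUNT, STACKED)   # 2.5 mOhm

actual_watts = 390
reported = actual_watts * effective / STOCK_SHUNT
print(f"~{actual_watts} W real draw reads as ~{reported:.0f} W")
```

Hence a card actually pulling close to 400 W can sit "sub 200 W" in the sensors while never tripping its power limit.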


----------



## Falkentyne

devilhead said:


> It's hard to say, because all cards have different - silicon lottery, for one card +130 will be 2130mhz, for other higher silicon lottery card +130 will be 2150mhz
> you can check your Msi Afterburner ( stock Voltage/Frequency) let's say at 950mV, mine 3090 strix stock is 1905Mhz.
> View attachment 2464875


All cards from a given OEM will have the exact same curve. That is not silicon lottery at all; it is set by the vBIOS.
Differences are only due to temps, which can be silicon lottery/leakage or a better factory thermal paste job. The curve shifts up or down depending on temps: +/-15 MHz every 6C or 8C (not sure of the step). Also, it seems to act differently after rebooting vs playing a game and then letting it cool down.
And it's in 15 MHz steps, not 20 MHz: 1995, 2010, 2025, 2040, 2055, 2070, 2085, 2100, etc.
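That step behaviour can be sketched out. The 15 MHz step is as reported above; the 7 C-per-step figure below is just an assumed midpoint of the quoted 6-8 C range, and the 35 C reference temp is likewise an assumption for illustration:

```python
# Sketch of Ampere boost-step behaviour: clocks land on 15 MHz steps,
# and the whole V/F curve drops one step for every ~6-8 C of core temp.
STEP_MHZ = 15
DEG_PER_STEP = 7    # assumed midpoint of the reported 6-8 C range
REF_TEMP_C = 35     # assumed temp below which no steps are lost

def snap_to_step(clock_mhz: float) -> int:
    """Round a clock down to the nearest 15 MHz boost step."""
    return int(clock_mhz // STEP_MHZ) * STEP_MHZ

def temp_adjusted_clock(base_clock_mhz: float, temp_c: float) -> int:
    """Estimate the sustained clock after temperature-driven step-downs."""
    steps_lost = max(0, int((temp_c - REF_TEMP_C) // DEG_PER_STEP))
    return snap_to_step(base_clock_mhz) - steps_lost * STEP_MHZ

print(snap_to_step(2137))             # snaps down to a 15 MHz step
print(temp_adjusted_clock(2100, 63))  # four steps lost at 63 C
```

This is also why an offset that "crashes at 2145" can seem fine for minutes: the card only reaches that bin when it is cool, then steps away from it as it heats up.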


----------



## ExDarkxH

OC2000 said:


> Are you not under water yet? I got the impression you were 😄I would love to see a light load 2200 mhz! or even 2160!


try the new DXR raytracing feature


----------



## Falkentyne

shiokarai said:


> Thanks, a lot of good help here.  I'll be opening the card, just not yet - waiting for the aquacomputer (or other brand) block for it. Also opening the card = warranty bye bye (there's annoying warranty sticker, firmly put, too - maybe you have a way to get around this sticker not damaging it?). Will try to bench further, that was preliminary findings (that make me wonder what's up with the mem very very low OC vs ppl posting here).


"Warranty void if removed" stickers are not legally enforceable in the USA, as long as the end user did not damage the card by opening it. Users have a right to open and clean their own hardware.


----------



## shiokarai

HyperMatrix said:


> You won't need 2 packs per side with the AquaComputer block mate. Remember, it doesn't use pads between the block and the memory modules. It uses thermal paste for direct contact cooling. Also only uses 0.5mm pads for the rest like the VRMs. At least this was the case with the last 2 blocks I bought from them. So to be on the safe side, you need 2 packs of the 60x50x0.5mm. And it's just $22.50 per pack: Amazon.com: Fujipoly/mod/smart Ultra Extreme XR-m Thermal Pad - 60 x 50 x 0.5 - Thermal Conductivity 17.0 W/mK: Computers & Accessories
> 
> edit: just checked the igorslab links you posted. I wonder how placement is going to work with the aquacomputer blocks, and in particular the spacing between the block and those components. Either way, I don't think a pad that has 50% higher thermal transfer is a bad investment for an extra $30-$40 on such an expensive card.


Not a fan of Aquacomputer's direct memory contact. With my last two RTX 2080 Ti blocks, both had one of the mem chips not making any contact due to manufacturing variance (one card was an EVGA, one a Zotac), so I needed to use pads on those chips anyway (found out when the mem OC wasn't stable after putting them under water). Also, more paste = mess.


----------



## shiokarai

Falkentyne said:


> Warranty void if removed is not legally enforceable in USA, as long as the end user did not damage the card by opening it. Users have a right to open and clean their own hardware.


Not from USA unfortunately


----------



## Zogge

Is the warranty gone if you mount a block on a Strix? I understood it as: if you put back the original cooler in the event of an RMA, then you are OK, assuming you do not damage the card when mounting/demounting coolers. Stickers have not been valid for voiding warranties since 2018. 
Right?


----------



## OC2000

ExDarkxH said:


> try the new DXR raytracing feature


Will do in a sec. I hadn't realised my Windows was still on version 1909, so it won't run until I update it!


----------



## Zogge

And it is the same rule in Europe, I assume...


----------



## shiokarai

Zogge said:


> And it is the same rule in Europe I assume.. .


I haven't seen any valid info on whether it's the same in Europe. In the case of ASUS, your warranty is basically gone once the sticker is tampered with (at least here with ASUS Poland). Someone posted an ASUS rep saying that's not the case, but the actual RMAs say otherwise (just google it; people complain a lot about this).


----------



## mirkendargen

Zogge said:


> And it is the same rule in Europe I assume.. .


The EU generally has more stringent warranty protections, so I SERIOUSLY doubt warranty stickers are enforceable there.

Like in the US, this probably won't stop companies from trying to deny you though...

Seriously, when you think about it, what proof is there that the sticker was ever there from the factory? Maybe it didn't get stuck on due to a... manufacturing defect? And what are warranties for? Fixing manufacturing defects...


----------



## rawsome

defiledge said:


> can you link the 400w bios? and why is gaming oc better


For me the Aorus Master BIOS gave no advantage beyond the margin of error. Important to know: the center fan of the Aorus Master maxes out at only 2000 RPM, so if you have a card like a Ventus with 3500 RPM fans, you are losing 1500 RPM on that BIOS.

When trying a new BIOS, make sure you know what's going on with your fans by checking the RPMs at 100% via the GPU-Z sensors tab.

*Warranty stickers:* someone here some dozen pages back said that you can remove the warranty stickers using a hot air gun, a razor blade and tweezers. If you have to RMA, just put it back to avoid questions.

*@all* you can learn so much about OC'ing just by reading this thread. I highly recommend it. Many questions here have already been answered like 10 times.


----------



## Falkentyne

OC2000 said:


> Thanks or the great explanation!
> Which of the Grizzly Kryonaut Extreme or Thermalright TFX would you recommend over the other? Would you also recommend aftermarket pads over the EK ones that come with the waterblock? I have bought the fujipoly 17 w/mk before to cover both my 1080 ti in SLI and the cost was extreme lol, but not sure if it made a huge improvement. I will look into the hotspot suggestion though.
> I have a feeling my card isnt pasted well as it gets extremely hot on stock playing games to the point i feel uncomfortable. I may just be used to a water cooled gpu though.


Kryonaut Extreme only comes in a $99 jar so I never used it, but the problems with Kryonaut's longevity seem to be fixed with it.
Kingpin KPX is also a good paste.
Thermalright TFX is an even worse price per gram than KEX, $40 for 6.2 grams, but that's still cheaper than $99. TFX seems to be a degree C or two better than regular Kryonaut after it cures for a week.
I believe the Asian ZF-EX thermal paste (14.6 W/mK) is the same paste as TFX; they have the same W/mK and the exact same look. Probably similar to Thermalright Odyssey pads looking and feeling identical to Fujipoly 11 W/mK pads.

All are excellent for both CPUs and GPUs, and quite thick, designed not to crack under LN2.


----------



## OC2000

ExDarkxH said:


> try the new DXR raytracing feature


It ran the test at +175 (2160), but crashed when I raised it to +180 (2190 MHz).


----------



## OC2000

Falkentyne said:


> Kryonaut Extreme is only in a $99 jar so I never used it, but the problems with Kryonaut's longevity seem to be fixed with this.
> Kingpin KPX is also a good paste.
> Thermalright TFX is an even worse price/gram that KEX, $40 for 6.2 grams but that's still cheaper than $99. TFX seems to be a C or 2 better than regular Kryonaut after it cures for a week.
> I believe the asian ZF-EX thermal paste (14.6 w/mk) is the same paste as TFX. They have the same w/mk and exact same look. Probably similar to Thermalright Odyssey pads looking identical and feeling identical to Fujipoly 11 wmk pads.
> 
> All are excellent for both CPUs and GPUs and quite thick, designed for not cracking under LN2.


Thanks. Kryonaut Extreme is easier to get in the UK, as are the Thermalright pads, so I'll get those. Hopefully I can secure the Thermal Grizzly to make it last a few years. Luckily I'll have the 5950X to paste too.


----------



## Spiriva

defiledge said:


> the guy has the 400w bios. see if we can get him to share it.


The bios version he got is: 94.02.26.08.54
the Aorus 3090 bios I have and the one posted here before is version 94.02.26.08.53
I wonder what changed?










94.02.26.08.53 is here Gigabyte RTX 3090 VBIOS


----------



## ExDarkxH

OC2000 said:


> It ran the test at +175 (2160 MHz), but crashed when I raised it to +180 (2190 MHz).


That's a good value.

What was your FPS btw?


----------



## jomama22

Have we figured out how to fix the 500w beta bios for the ftw3 ultra? 

Got my evga email to order it and want to know if the 500w bios works as advertised or if people are still having issues and only getting 450w.

I have an FE that performs well and was going to shunt it once waterblocks come out for it, but if the 500w bios works on the ftw3, I may just pick it up.


----------



## Falkentyne

OC2000 said:


> It ran the test at +175 (2160 MHz), but crashed when I raised it to +180 (2190 MHz).


Offsets are at 150, 165, 180...
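The stepping Falkentyne is pointing at can be sketched in code. This is a simplified model, assuming an offset between two 15 MHz bins is floored to the bin below it; the actual boost table varies per card and NVIDIA doesn't document it:

```python
# Illustrative model: Ampere boost states are spaced 15 MHz apart,
# so a core-clock offset only "takes" at multiples of 15 (assumed).
STEP = 15  # MHz per boost bin

def effective_offset(requested_mhz: int) -> int:
    """Floor a requested offset to the nearest boost bin at or below it."""
    return (requested_mhz // STEP) * STEP

# Under this model, +160 behaves the same as +150; the next real step is +165.
for req in (150, 160, 165, 175, 180):
    print(req, "->", effective_offset(req))
```

This is also why (under the same assumption) rawsome's later question about +160 vs +150 would come out as "no difference": both land in the +150 bin.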


----------



## Sheyster

shiokarai said:


> Sooo... I finally got a Strix 3090 yay  but.... it's a total dud!  max core OC +165, max mem OC+550(!!!!) - 100% fans, open bench case etc. nothing helps. Theoretically everything is fine, card is within spec but oh my... what a disappointment


If you're not keeping it PM me.


----------



## OC2000

ExDarkxH said:


> That's a good value.
> 
> What was your FPS btw?


It was just over 61; can't remember the exact figure. What was yours?


----------



## Wihglah

jomama22 said:


> Have we figured out how to fix the 500w beta bios for the ftw3 ultra?
> 
> Got my evga email to order it and want to know if the 500w bios works as advertised or if people are still having issues and only getting 450w.
> 
> I have an FE that performs well and was going to shunt it once waterblocks come out for it, but if the 500w bios works on the ftw3, I may just pick it up.


Nope. It's still the beta BIOS, which for the most part doesn't work.


----------



## ExDarkxH

OC2000 said:


> it was just over 61. Can’t remember exact figure. What was yours?


You're in line with what I would consider a good value. I think the creators designed it for us to target 60 FPS.
Take a look here to get a better idea of where you stand:
Over 61 would be considered a strong score.





New 3DMark DX12 RT benchmark results - EVGA Forums


Post 'em here if you got 'em. If you dont know, 3DMark released a pure DX12 ray tracing benchmark as to allow users to test apples to apples Nvidia and AMD GPUs in raw RT performance. (https://wccftech.com/futuremark-launches-3dmark-directx-raytracing-benchmark-amd-nvidia-gpus/)Here's mine on a...



forums.evga.com





I was 62.84fps but consider that an outlier


----------



## shiokarai

Sheyster said:


> If you're not keeping it PM me.


will do, still debating it


----------



## krs360

Anyone here also have an Asus 3090 TUF OC card? Wondering what kind of stock PR scores you all got on air? Just putting it into the machine and not changing any settings, it seems to come in at about 12.5k.

The boost clocks seem really off in PR compared to my Zotac Trinity 3090. The Zotac's clock ends up averaging somewhere in the 1900s, but on the TUF OC the average clock seems to be around the 1840 mark, and quite often if I watch the clock speed during the runs it dips into the 1700s, sometimes all the way down to its 1740 boost clock. It's like it's not getting any benefit from GPU Boost or whatever it's called. Temps are around 60°C.

I've DDU'd several times and get the same results every time. Is this normal? Because it just feels really off compared to how the Zotac performed.


----------



## atleaage

krs360 said:


> Anyone here also have an Asus 3090 TUF OC card? Wondering what kind of stock PR scores you all got on air? Just putting it into the machine and not changing any settings, it seems to come in at about 12.5k.
> 
> The boost clocks seem really off in PR compared to my Zotac Trinity 3090. The Zotac's clock ends up averaging somewhere in the 1900s, but on the TUF OC the average clock seems to be around the 1840 mark, and quite often if I watch the clock speed during the runs it dips into the 1700s, sometimes all the way down to its 1740 boost clock. It's like it's not getting any benefit from GPU Boost or whatever it's called. Temps are around 60°C.
> 
> I've DDU'd several times and get the same results every time. Is this normal? Because it just feels really off compared to how the Zotac performed.


I have the Asus TUF 3090 OC. With the power target maxed out at 107% I get 13.6k in PR with a 1924 MHz average clock; the card was at 62°C, room temp 23 degrees Celsius with a closed case. That should be pretty much the minimum you can expect, I guess.


----------



## rawsome

Falkentyne said:


> Offsets are at 150, 165, 180...


What is the reason behind this? Does +160 not give any advantage over +150?


----------



## chispy

Guys, please, we've got to be realistic and lower our overclock expectations. Wanting to run a stable 2200 MHz at ambient temps in games and applications is completely unrealistic (unless you're using a water chiller for sub-zero temps). Even with all shunt mods done and an Elmor EVC2S voltage controller with vcore fixed at 1.2 V, it is impossible to maintain 2200 MHz on this core. I have tested this on my Asus TUF on water with an EK full block, all shunts done and EVC2S vcore voltage control, and on an Asus Strix with all shunts done too.

Max stable sustained clocks for me seem to be 2115~2130 MHz on the core with shunt mods, on water at 41°C. Be aware that after shunt mods, the more voltage you add, the more the insane power requirements, heat and temps become an issue for stable overclocks, even under water cooling.

I would like you guys to post your exact sustained clocks instead of posting +160 or +120, because every card has a different boost table and some are factory OC'd. Please post your overclocks under actual sustained load instead of the max OC shown, e.g. 2000 MHz OC at idle/light load, 1985 MHz under full load. That way we can build a more accurate picture and database in this thread.
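One way to report the sustained figure chispy is asking for is to average a clock log over only the loaded portion of a run, so idle/light-load boost spikes don't inflate the number. A minimal sketch; the (clock, load) sample format and the 95% load threshold are assumptions, not tied to any particular monitoring tool:

```python
# Average core clock over samples where the GPU was actually loaded,
# so light-load boost spikes don't inflate the reported figure.
def sustained_clock(samples, load_threshold=95):
    """samples: list of (clock_mhz, gpu_load_percent) tuples."""
    loaded = [clk for clk, load in samples if load >= load_threshold]
    if not loaded:
        return None
    return sum(loaded) / len(loaded)

# Made-up samples: the 40%-load spike at 2100 MHz is excluded.
log = [(2085, 99), (2070, 100), (2100, 40), (2070, 98)]
print(sustained_clock(log))
```

The printed value is the average of just the three fully loaded samples, which is the kind of "under full load" number requested above.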


----------



## Zurv

Shame SLI is pretty much dead. AC: V could use it 






I'll be playing it on Ultra with lowered AA.


----------



## Falkentyne

chispy said:


> Guys, please, we've got to be realistic and lower our overclock expectations. Wanting to run a stable 2200 MHz at ambient temps in games and applications is completely unrealistic (unless you're using a water chiller for sub-zero temps). Even with all shunt mods done and an Elmor EVC2S voltage controller with vcore fixed at 1.2 V, it is impossible to maintain 2200 MHz on this core. I have tested this on my Asus TUF on water with an EK full block, all shunts done and EVC2S vcore voltage control, and on an Asus Strix with all shunts done too.
> 
> Max stable sustained clocks for me seem to be 2115~2130 MHz on the core with shunt mods, on water at 41°C. Be aware that after shunt mods, the more voltage you add, the more the insane power requirements, heat and temps become an issue for stable overclocks, even under water cooling.
> 
> I would like you guys to post your exact sustained clocks instead of posting +160 or +120, because every card has a different boost table and some are factory OC'd. Please post your overclocks under actual sustained load instead of the max OC shown, e.g. 2000 MHz OC at idle/light load, 1985 MHz under full load. That way we can build a more accurate picture and database in this thread.


Fully stable at +135 / +500 with the shunt mod on a 3090 FE.
Testing +150 / +500 now. Stable so far in the Heaven benchmark (over an hour) and Overwatch after re-padding the entire backplate-side VRAM and hotspot PCB areas with Fujipoly 1.5mm (11 W/mK) and Thermalright Odyssey 1.5mm (12 W/mK) thermal pads. These pads seem to be 100% identical, probably both rebrands of the same supplier's pad. I will need to do long-term tests to confirm stability.

Note: original stock card had a hotspot issue that caused stuttering and uneven frame syncing. Problem went completely away after re-padding. Keep this in mind. This problem was only caught by using RTSS (Rivatuner Statistics Server) Scanline Sync + NVCP Fast Sync at locked FPS=refresh rate.

Note: Any official 12 pin cable may allow higher overclocks than FE adapter into two 8 pins. One person had consistent crashing that was solved by using the Seasonic 12 pin cable, although it isn't 100% clear whether the fault was with the adapter itself or with the two 8 pin cables possibly being worn or oxidized.



https://hardforum.com/threads/troubleshooting-request-3080-fe-reboots.2002859/#post-1044793649


----------



## chispy

Falkentyne said:


> Fully stable +135 / +500 with shunt mod with 3090 FE.
> Testing +150 / +500 now. Stable so far in Heaven benchmark (over an hour) and Overwatch after re-padding the entire backplate side VRAM and hotspot PCB areas with Fujipoly 1.5mm (11 w/mk) and Thermalright Odyssey 1.5mm (12 w/mk) thermal pads. These pads seem to be 100% identical in supplier, probably both rebranding the same pad.. I will need to do long term tests to see stability.
> 
> Note: original stock card had a hotspot issue that caused stuttering and uneven frame syncing. Problem went completely away after re-padding. Keep this in mind. This problem was only caught by using RTSS (Rivatuner Statistics Server) Scanline Sync + NVCP Fast Sync at locked FPS=refresh rate.
> 
> Note: Any official 12 pin cable may allow higher overclocks than FE adapter into two 8 pins. One person had consistent crashing that was solved by using the Seasonic 12 pin cable, although it isn't 100% clear whether the fault was with the adapter itself or with the two 8 pin cables possibly being worn or oxidized.
> 
> 
> 
> https://hardforum.com/threads/troubleshooting-request-3080-fe-reboots.2002859/#post-1044793649


Can you tell us what your sustained clocks are, please? +150 / +500 can't be used for the database; we need actual clock numbers in MHz, as they're more helpful. Appreciate it.

Kind Regards: Chispy


----------



## Thanh Nguyen

For me, 2130 MHz is the max I can get my card to loop for an hour in Metro Exodus with full RTX enabled. Port Royal is like R15 for the GPU. We need some Prime95-style test to find a daily clock.


----------



## chispy

Thanh Nguyen said:


> For me, 2130 MHz is the max I can get my card to loop for an hour in Metro Exodus with full RTX enabled. Port Royal is like R15 for the GPU. We need some Prime95-style test to find a daily clock.


Thank you so much for the information. That's more or less what I get in games; like you, my realistic clocks are 2130 MHz max sustained on water in Red Dead Redemption 2 at 4K maxed out.

You are correct, PR is heavy, but not that heavy 😁. We need to find a harder test that we can all use to compare for the database here.


----------



## Sky3900

chispy said:


> Guys, please, we've got to be realistic and lower our overclock expectations. Wanting to run a stable 2200 MHz at ambient temps in games and applications is completely unrealistic (unless you're using a water chiller for sub-zero temps). Even with all shunt mods done and an Elmor EVC2S voltage controller with vcore fixed at 1.2 V, it is impossible to maintain 2200 MHz on this core. I have tested this on my Asus TUF on water with an EK full block, all shunts done and EVC2S vcore voltage control, and on an Asus Strix with all shunts done too.
> 
> Max stable sustained clocks for me seem to be 2115~2130 MHz on the core with shunt mods, on water at 41°C. Be aware that after shunt mods, the more voltage you add, the more the insane power requirements, heat and temps become an issue for stable overclocks, even under water cooling.
> 
> I would like you guys to post your exact sustained clocks instead of posting +160 or +120, because every card has a different boost table and some are factory OC'd. Please post your overclocks under actual sustained load instead of the max OC shown, e.g. 2000 MHz OC at idle/light load, 1985 MHz under full load. That way we can build a more accurate picture and database in this thread.


Here's some data points for an air cooled 3090FE, silver paint shunt mod, probably drawing around 500w-525w.

Borderlands 3: +98gpu +550mem, 2085-2100mhz, stable for several hours. 2k, ultra.

Metro bench loop: +82gpu +550mem, clocks vary through bench 1995-2070, probably averages 2025, stable for 30 loops+. 4k, ultra rtx.

Heaven: +98gpu +550 mem, clocks vary 1985-2070, average probably 2010, stable for an hour+. 4k, ultra.

WD Legion: +98gpu +550mem, 2070mhz, stable for several hours. 2k, ultra rtx.

3d mark: +115gpu +550mem, clocks vary a lot depending on the bench. 3d mark seems to be able to tolerate a higher gpu clock for some reason.

Best PR: +147gpu +750mem, clocks vary throughout 1985-2170, average 2114. GPU chilled to 27c before starting bench.


----------



## chispy

Sky3900 said:


> Here's some data points for an air cooled 3090FE, silver paint shunt mod, probably drawing around 500w-525w.
> 
> Borderlands 3: +98gpu +550mem, 2085-2100mhz, stable for several hours.
> 
> Metro bench loop: +82gpu +550mem, clocks vary through bench 1995-2070, probably averages 2025, stable for 30 loops+.
> 
> Heaven: +98gpu +550 mem, clocks vary 1985-2070, average probably 2010, stable for an hour+.
> 
> WD Legion: +98gpu +550mem, 2070mhz, stable for several hours.
> 
> 3d mark: +115gpu +550mem, clocks vary a lot depending on the bench. 3d mark seems to be able to tolerate a higher gpu clock for some reason.
> 
> Best PR: +147gpu +750mem, clocks vary throughout 1985-2170, average 2114. GPU chilled to 27c before starting bench.


Thank you so much, this is really reliable real-world data. I will try to make a chart if results like this keep coming in. Appreciate it.

Kind Regards: Chispy


----------



## Falkentyne

chispy said:


> Can you tell us what is your sustain clocks please ? +150 / +500 cannot be used for data base , we need clock numbers as it will be more helpful , if you can please let us know in Mhz , appreciate it.
> 
> Kind Regards: Chispy


Sustain clocks:
530W TDP: 2100 (<50C), 2070-2085 (60-70C). Power draw during Heaven benchmark at 1080p is about 515W max.
Memory: 10251 mhz (+500)

So far this is stable in Heaven/Overwatch
Will test COD Warzone/Modern Warfare. MW was fully stable at +135 (2055-2070 MHz); I did not test +150 (2070-2085 MHz) yet, which I will do now actually.
I'm expecting it to crash.

@Sky3900 did you see my private message?


----------



## Sky3900

Falkentyne said:


> Sustain clocks:
> 530W TDP: 2100 (<50C), 2070-2085 (60-70C). Power draw during Heaven benchmark at 1080p is about 515W max.
> Memory: 10251 mhz (+500)
> 
> So far this is stable in Heaven/Overwatch
> Will test COD Warzone/Modern Warfare. MW was fully stable at +135 (2055-2070 mhz), i did not test +150 (2070-2085 mhz) yet, which I will do now actually.
> I'm expecting it to crash.
> 
> @Sky3900 did you see my private message?


Got your message and will reply later.


----------



## chispy

Falkentyne said:


> Sustain clocks:
> 530W TDP: 2100 (<50C), 2070-2085 (60-70C). Power draw during Heaven benchmark at 1080p is about 515W max.
> Memory: 10251 mhz (+500)
> 
> So far this is stable in Heaven/Overwatch
> Will test COD Warzone/Modern Warfare. MW was fully stable at +135 (2055-2070 mhz), i did not test +150 (2070-2085 mhz) yet, which I will do now actually.
> I'm expecting it to crash.
> 
> @Sky3900 did you see my private message?


Awesome ! thank you so much for your input , appreciate it. I will start gathering this data for a chart.


----------



## Falkentyne

chispy said:


> Awesome ! thank you so much for your input , appreciate it. I will start gathering this data for a chart.


So Modern Warfare didn't crash at 2100-2085 MHz. I wonder if the new Nvidia driver today will hurt or improve FPS in Warzone...

Still wondering why my GPU is sitting around 230 FPS and only 85% usage, with the CPU not even at 80%...you would think uncapped FPS would peg the GPU at 100%...


----------



## vmanuelgm

Nizzen said:


> You know you will get more fps with DX 12 right? DX 12 works great with both 2080ti strix and 3090 strix here


I am not using DX12 since I always had stuttering at the beginning of sessions, but I will try DX12 again, thanks mate!!!


----------



## chispy

Falkentyne said:


> So Modern Warfare didn't crash at 2100-2085 mhz. I wonder if the new Nvidia driver today will hurt or improve FPS things in Warzone...
> 
> Still wondering why my GPU is sitting around 230 FPS and only 85% usage, with the CPU not even at 80%...you would think uncapped FPS would peg the GPU at 100%...


Nice clocks on your FE on modern warfare.
Have you tried adding more AA and AF? That might help increase the load under some circumstances.


----------



## Falkentyne

chispy said:


> Nice clocks on your FE on modern warfare.
> Have you tried adding more AA and AF? That might help increase the load under some circumstances.


Details are set to maximum. Nothing left to change except raising the render scale past 100%. It's a 1920x1080 144 Hz monitor overclocked to 165 Hz.

I do get 100% usage if I set a custom resolution in ToastyX CRU to 1440p (or use a higher render scale %), but at 1440p my monitor is limited to 115 Hz max; I get flickering, shimmering pixels at 120 Hz (even 116 Hz is not stable), and since 115 Hz isn't even a multiple of a mouse polling rate, I only used 100 Hz. I don't like it because the color depth becomes YCbCr 4:2:2 instead of RGB, due to the age of my monitor. It's a BenQ XL2720Z, a 1080p 144 Hz monitor, which I already run at a custom overclocked refresh rate of 165 Hz (the OSD black-screens with an "Out of range" error, but pressing the S-Switch profile button magically makes the screen appear!).
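For anyone reproducing custom modes like this in CRU, the limiting number is the pixel clock, which is just total pixels (active plus blanking) times the refresh rate. A rough sketch; the reduced-blanking totals below are illustrative assumptions, not this monitor's exact timings (CRU shows the real totals per mode):

```python
# Pixel clock (MHz) = horizontal total * vertical total * refresh (Hz) / 1e6.
def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: float) -> float:
    return h_total * v_total * refresh_hz / 1e6

# Assumed CVT-reduced-blanking-style totals, for illustration only:
print(pixel_clock_mhz(2000, 1107, 165))  # 1920x1080 @ 165 Hz
print(pixel_clock_mhz(2720, 1481, 115))  # 2560x1440 @ 115 Hz
```

The jump from 1080p to 1440p roughly doubles the pixel clock at a similar refresh, which is why an older panel's link can run out of headroom well below its native-resolution maximum refresh.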


----------



## Nizzen

vmanuelgm said:


> I am not using DX12 since I always had stuttering in the beginning of sessions, I will try again DX12, thanks mate!!!


It will stutter in all maps while it's caching shaders. After that it's smooth as butter.

Many think the stutter isn't going away, but it does 🤩

Way better performance


----------



## Falkentyne

Scored 14611 in Port Royal



https://www.3dmark.com/3dm/52821285?


----------



## anethema

I am using K5Pro thick thermal paste, but I'm probably going to change to the pads since its thermal conductivity isn't really that good. And it is messy as hell.

I am definitely still getting a weird thermal issue on the memory: when I run the coin miner with +800 mem, I get a PerfCap reason of Thermal even with the GPU only at 55°C. It goes away when I remove the memory OC.

Maybe some good thermal pads will fix it who knows.

I am also really annoyed that I replaced all shunts on the FE with 3mohm ones which should let me pull 666W but I can only do 500 before hitting perfcap reason power.

Can't complain too much since I'm like #25 on the HoF with the card as is, but I really feel like it could do better.
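For reference, the 666 W figure falls straight out of the shunt arithmetic: the card computes power from the voltage drop across the shunts assuming the stock resistance, so a lower-value replacement makes it under-read by the resistance ratio. A quick sketch, assuming 5 mΩ stock shunts (swapped for 3 mΩ, as described above):

```python
# Shunt-mod scaling: the card reports power assuming the stock shunt
# value, so actual power = reported * (stock_mohm / new_mohm).
def actual_power(reported_w: float, stock_mohm: float = 5.0,
                 new_mohm: float = 3.0) -> float:
    return reported_w * (stock_mohm / new_mohm)

# A 400 W reported cap corresponds to roughly 666-667 W of real draw,
# and 300 W reported is about 500 W real.
print(actual_power(400))
print(actual_power(300))
```

The same 5/3 ≈ 1.67 ratio is the "1.67x multiplier" people apply to HWiNFO power readings after this mod.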


----------



## Falkentyne

anethema said:


> I am using K5Pro thick thermal paste but I'm prob gonna change it to the pads since its thermal conductivity isn't really that good. And it is messy as hell.
> 
> I am def still getting a weird thermal issue on memory, because when I run the coin miner with a +800mem, I get a Perfcap reason of Thermal even with the GPU only at 55c. It goes away when I remove the memory OC.
> 
> Maybe some good thermal pads will fix it who knows.
> 
> I am also really annoyed that I replaced all shunts on the FE with 3mohm ones which should let me pull 666W but I can only do 500 before hitting perfcap reason power.
> 
> Can't complain too much since I'm like #25 on the HoF with the card as is, but I really feel like it could do better.


Did you do the PCIE slot shunt? You did all six shunts, right? One shunt out of the six is hidden in the middle of nowhere. You said you desoldered the original shunts and put on 3 mOhm ones? Can you post your hwinfo screenshot showing with the "1.67x multiplier" for all the power limits added, while you are pulling max power you can before getting Perfcap: Power, for all the power rails?

For fixing your thermal issues:

Just follow my guide and repad your card. Trust me on this. This will fix _ALL_ PerfCap Thermal flags as well as prevent the "stuttering" bugs that I had on my card before I ever even opened it up. It may even let you push the clocks higher!
This works. Perfectly.

1) Buy four packs of 1.5mm-thickness Gelid 12 W/mK pads, Thermalright Odyssey 12 W/mK pads or Fujipoly 11 W/mK pads. You will need them for re-padding the entire card (which you probably should do if you're repasting anyway). If you spent $1500+ on a FE, you can spend a little bit more on quality pads. I _believe_ all of these pads are from the same supplier (at least the Thermalright and Fujipoly pads are). You need 4 packs because the amount of pad you get is extremely small; the Fujipoly 60mm x 50mm pad is barely larger than a saltine cracker. Don't buy the 17 W/mK pads: they literally crumble if you so much as open the card once after a repad. Even though they offer the best thermal performance, they are not re-usable and are expensive.

1.5mm pads work on both sides of the card and make fine contact.

2) The heatsink side of the card is pretty self explanatory. Just use what is shown on Igor's lab page here. BTW make sure the entire left side of the strip is covered on the heatsink picture.








NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything | Page 2 | igor'sLAB


With the GeForce RTX 3090, NVIDIA is rounding out its graphics card portfolio at the top end today, for now. Much more is not possible with the GA102-300 anyway and so one may see the current…




www.igorslab.de





3) The backplate side of the card is where the problems happen. Hotspots and stuttering galore as well as hot GDDR6X. To do this, look at my picture carefully.
Pay close attention to the red outlines. This is exactly where you pad.











I got this pad layout by comparing the 3090 and 3080 pages on Igor's lab, and also noticed that my 3090 didn't even have pads on one side of the "V", when I saw another user in the shunt mod thread that did have a pad on that side (it was just a square). And since the hotspots at the V should be the same on FE 3090 and 3080, I came up with that placement. Pay attention to igor's hotspot drawing for 3080 FE with Nvidia's V pads, that don't seem to be on both V sides of the 3090....









NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X | igor'sLAB


It's nice to see that NVIDIA seems to be actively involved in this, or that you've reported on your reading of the catchy social media content, because the hotspot that I found in the launch article…




www.igorslab.de





End result: No more "Thermal" flag, no more weird "Fast sync enabled" + Scanline Sync (RTSS) massive microstuttering, no more retrace lines with RTSS Scanline Sync + Vsync disabled, etc. And I can overclock the core higher too!

Remember use 1.5mm pads and choose the ones I listed. The Arctic 6 w/mk giant 145mm*145mm*1.5mm pads do work, but are not ideal for this card and I was not able to overclock as high, even though my first re-pad (where I took that picture before doing ANOTHER repad) with the arctic pads is what fixed the weird microstuttering problem!

I now have Fujipoly and Thermalright Odyssey 1.5mm pads covering the entire backplate side for the RAM and hotspots. On the GPU chip side (GPU re-pasted with Thermalright TFX), the stock pads (which seem to be high quality and better than the Arctics) are still on the GDDR6X RAM, but I replaced the original VRM pads with Arctic 1.5mm ones, which seems to be working fine.


----------



## Jpmboy

^^ good post bro! I'm still waiting on a block for this FTW3.


----------



## Sky3900

Falkentyne said:


> Did you do the PCIE slot shunt? You did all six shunts, right? One shunt out of the six is hidden in the middle of nowhere. You said you desoldered the original shunts and put on 3 mOhm ones? Can you post your hwinfo screenshot showing with the "1.67x multiplier" for all the power limits added, while you are pulling max power you can before getting Perfcap: Power, for all the power rails?
> 
> For fixing your thermal issues:
> 
> Just follow my guide and repad your card. Trust me on this. This will fix _ALL_ percap Thermal flags as well as prevent "stuttering" bugs that I had on my card before I ever even opened the card up. It may even let you push the clocks higher!
> This works. Perfectly.
> 
> 1) Buy four packs of 1.5mm thickness, Gelid 12 w/mk pads, Thermalright Odyssey 12 w/mk pads or Fujipoly 11 w/mk pads. You will need them for re-padding the entire card (which you probably should do if you're repasting anyway. If you spent $1500+ on a FE, you can spend a little bit more on quality pads. I _believe_ all of these pads are from the same supplier (at least the Thermalright and Fujipoly pads are). You need 4 packs because the amount of pad you get is extremely small. The fujipoly 60mm * 50mm pad is barely larger than a saltine cracker. Don't buy the 17 w/mk pads, they literally crumble if you even open the card after a repad once--even though they offer the best thermal performance, they are not re-usable and are expensive.
> 
> 1.5mm pads work on both sides of the card and make fine contact.
> 
> 2) The heatsink side of the card is pretty self explanatory. Just use what is shown on Igor's lab page here. BTW make sure the entire left side of the strip is covered on the heatsink picture.
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything | Page 2 | igor'sLAB
> 
> 
> With the GeForce RTX 3090, NVIDIA is rounding out its graphics card portfolio at the top end today, for now. Much more is not possible with the GA102-300 anyway and so one may see the current…
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> 3) The backplate side of the card is where the problems happen. Hotspots and stuttering galore as well as hot GDDR6X. To do this, look at my picture carefully.
> Pay close attention to the red outlines. This is exactly where you pad.
> 
> View attachment 2464901
> 
> 
> 
> I got this pad layout by comparing the 3090 and 3080 pages on Igor's lab, and also noticed that my 3090 didn't even have pads on one side of the "V", when I saw another user in the shunt mod thread that did have a pad on that side (it was just a square). And since the hotspots at the V should be the same on FE 3090 and 3080, I came up with that placement. Pay attention to igor's hotspot drawing for 3080 FE with Nvidia's V pads, that don't seem to be on both V sides of the 3090....
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X | igor'sLAB
> 
> 
> It's nice to see that NVIDIA seems to be actively involved in this, or that you've reported on your reading of the catchy social media content, because the hotspot that I found in the launch article…
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> End result: No more "Thermal" flag, no more weird "Fast sync enabled" + Scanline Sync (RTSS) massive microstuttering, no more retrace lines with RTSS Scanline Sync + Vsync disabled, etc. And I can overclock the core higher too!
> 
> Remember use 1.5mm pads and choose the ones I listed. The Arctic 6 w/mk giant 145mm*145mm*1.5mm pads do work, but are not ideal for this card and I was not able to overclock as high, even though my first re-pad (where I took that picture before doing ANOTHER repad) with the arctic pads is what fixed the weird microstuttering problem!
> 
> I now have fujipoly and Thermalright Odyssey 1.5mm pads completely on the backplate side for RAM and hotspots, and the stock pads (which seem to be high quality and better than Arctics) on the GPU chip side (GPU re-pasted with Thermalright TFX) GDDRX RAM side but I replaced the original pads for the VRM's with Arctic 1.5mm's (which seems to be working fine, since the original pads are still on the RAM).


Hmmm... you know, I've never tried this RTSS sync method, only G-Sync. I'll have to test it out.


----------



## HyperMatrix

OC2000 said:


> Are you not under water yet? I got the impression you were 😄I would love to see a light load 2200 mhz! or even 2160!


No block yet. Waiting for AquaComputer. I tested max clocks by lowering ambient: set the RTSS FPS cap to 1 while running a game (Crysis 3 has been my stress-test game for years now), then set clocks, then raise the FPS cap so the GPU hits about 40% load and jumps to its max clocks (2175 MHz), and watch the clock speed come down as the temperature goes up. Then set the FPS cap back to 1, wait until the card cools down, and repeat with new clocks. That same 2175 MHz at 40% load would be 2100 MHz at 100% load, even at 30°C; likely a power limit issue. I haven't shunted and won't be shunting until I get a block, because I already have trouble keeping the card cool.
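The cool-down-and-watch procedure above is essentially probing GPU Boost's temperature bins. A very rough illustrative model (the threshold temperature, 5 °C step and 15 MHz bin size are assumptions for the sketch, not NVIDIA-documented values, and the real algorithm also folds in voltage and power limits):

```python
# Toy model of GPU Boost temperature binning: above a threshold
# temperature the card sheds one 15 MHz boost bin per ~5 C step.
def boost_clock(max_clock_mhz: int, temp_c: int,
                threshold_c: int = 35, step_c: int = 5,
                bin_mhz: int = 15) -> int:
    if temp_c <= threshold_c:
        return max_clock_mhz
    bins_lost = (temp_c - threshold_c) // step_c + 1
    return max_clock_mhz - bins_lost * bin_mhz

print(boost_clock(2175, 30))  # cold card holds its max boost
print(boost_clock(2175, 55))  # warmer card has shed several bins
```

Chilling the card between runs, as described above, resets the model to the flat region below the threshold, so each overclock attempt starts from the same top bin.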



Zogge said:


> Is the warranty gone if you mount a block on Strix ? I understood it as if you put back the original cooler in event of RMA then you are ok, assuming you do not damage the card when you are mounting/demounting coolers. Stickers are not valid to use for voiding warranty since 2018.
> Right ?


ASUS warranty is bad. In the United States it is illegal to deny warranty because a card has been serviced by you or anyone else unless they can _prove_ you broke it yourself in the process. But ASUS will still turn people away. So what do you do? Gotta go the legal route. The EU may have similar protection policies in place. Check your laws to be sure. Then at least if anything happens you can quote them the exact law and report them to the authorities if they violate it.


----------



## kx11

Just got the ROG O24G, and in the Performance BIOS the GPU can hit 400W.











EDIT: no wait, it hits 480W?!

This happens during benchmark loading.


----------



## AdamK47

In my 25 years of building my own gaming PCs, I still don't understand why some people run their hardware on the ragged edge of stability. Isn't peace of mind just as important when you're getting into a game? I find the edge and back off a bit. It makes getting absorbed into a game so much easier.


----------



## anethema

Awesome post, thanks. Here is what HWiNFO looks like with the 1.6667 multiplier on all power rails once it hits the 'thermal' limit while mining:









During Port Royal it shows about 500W.






Falkentyne said:


> Did you do the PCIE slot shunt? You did all six shunts, right? One shunt out of the six is hidden in the middle of nowhere. You said you desoldered the original shunts and put on 3 mOhm ones? Can you post your hwinfo screenshot showing with the "1.67x multiplier" for all the power limits added, while you are pulling max power you can before getting Perfcap: Power, for all the power rails?
> 
> For fixing your thermal issues:
> 
> Just follow my guide and repad your card. Trust me on this. This will fix _ALL_ percap Thermal flags as well as prevent "stuttering" bugs that I had on my card before I ever even opened the card up. It may even let you push the clocks higher!
> This works. Perfectly.
> 
> 1) Buy four packs of 1.5mm thickness, Gelid 12 w/mk pads, Thermalright Odyssey 12 w/mk pads or Fujipoly 11 w/mk pads. You will need them for re-padding the entire card (which you probably should do if you're repasting anyway. If you spent $1500+ on a FE, you can spend a little bit more on quality pads. I _believe_ all of these pads are from the same supplier (at least the Thermalright and Fujipoly pads are). You need 4 packs because the amount of pad you get is extremely small. The fujipoly 60mm * 50mm pad is barely larger than a saltine cracker. Don't buy the 17 w/mk pads, they literally crumble if you even open the card after a repad once--even though they offer the best thermal performance, they are not re-usable and are expensive.
> 
> 1.5mm pads work on both sides of the card and make fine contact.
> 
> 2) The heatsink side of the card is pretty self explanatory. Just use what is shown on Igor's lab page here. BTW make sure the entire left side of the strip is covered on the heatsink picture.
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything | Page 2 | igor'sLAB
> 
> 
> With the GeForce RTX 3090, NVIDIA is rounding out its graphics card portfolio at the top end today, for now. Much more is not possible with the GA102-300 anyway and so one may see the current…
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> 3) The backplate side of the card is where the problems happen. Hotspots and stuttering galore as well as hot GDDR6X. To do this, look at my picture carefully.
> Pay close attention to the red outlines. This is exactly where you pad.
> 
> View attachment 2464901
> 
> 
> 
> I got this pad layout by comparing the 3090 and 3080 pages on Igor's lab, and also noticed that my 3090 didn't even have pads on one side of the "V", when I saw another user in the shunt mod thread that did have a pad on that side (it was just a square). And since the hotspots at the V should be the same on FE 3090 and 3080, I came up with that placement. Pay attention to igor's hotspot drawing for 3080 FE with Nvidia's V pads, that don't seem to be on both V sides of the 3090....
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X | igor'sLAB
> 
> 
> It's nice to see that NVIDIA seems to be actively involved in this, or that you've reported on your reading of the catchy social media content, because the hotspot that I found in the launch article…
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> End result: No more "Thermal" flag, no more weird "Fast sync enabled" + Scanline Sync (RTSS) massive microstuttering, no more retrace lines with RTSS Scanline Sync + Vsync disabled, etc. And I can overclock the core higher too!
> 
> Remember use 1.5mm pads and choose the ones I listed. The Arctic 6 w/mk giant 145mm*145mm*1.5mm pads do work, but are not ideal for this card and I was not able to overclock as high, even though my first re-pad (where I took that picture before doing ANOTHER repad) with the arctic pads is what fixed the weird microstuttering problem!
> 
> I now have fujipoly and Thermalright Odyssey 1.5mm pads completely on the backplate side for RAM and hotspots, and the stock pads (which seem to be high quality and better than Arctics) on the GPU chip side (GPU re-pasted with Thermalright TFX) GDDRX RAM side but I replaced the original pads for the VRM's with Arctic 1.5mm's (which seems to be working fine, since the original pads are still on the RAM).
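For reference, the "1.67x multiplier" discussed above is just the ratio of stock to replacement shunt resistance. A minimal sketch, assuming 5 mOhm stock shunts swapped for 3 mOhm parts (these values are the ones discussed in the thread, not measured here):

```python
# The card computes power from the voltage drop across the shunt while
# still assuming the stock resistance. With a smaller shunt the drop
# shrinks, so: real power = reported power * (R_stock / R_new).
R_STOCK_MOHM = 5.0  # assumed stock shunt value
R_NEW_MOHM = 3.0    # assumed replacement shunt value

multiplier = R_STOCK_MOHM / R_NEW_MOHM
print(round(multiplier, 4))  # 1.6667

# Example: a rail reporting 300 W is really drawing:
print(round(300 * multiplier, 1))  # 500.0
```

This is the same correction factor entered into HWiNFO so the sensor readouts reflect actual draw.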


----------



## D3LTA KING

ExDarkxH said:


> port royal 15 033
> Average clock frequency 2,142 MHz
> Average temperature 61 °C
> 
> I badly need the stupid optimus block to come. They havent updated first batch buyers on anything and still list ETA as mid November


That's what it states for my order of the block and backplate: November 26th. That's more for the backplate than the block in my case, since they already have the block in stock. But I did order the two together, so I will have to wait till then.


----------



## jsarver

Does anyone run Fire Strike anymore? I grabbed the 14th spot for Extreme in single-card scores, but I'm not sure how relevant it is.


----------



## wyattneill

jsarver said:


> Anyone run fire strike anymore? Grabbed the 14th spot for extreme in single card scores but not sure how relevant it is.


I was able to grab 6th


----------



## Falkentyne

anethema said:


> Awesome post thanks. Here is what hwinfo looks like with 1.6667 multiplier on all powers once it hits 'thermal' limit while mining:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> During port royal it shows about 500W


I found the problem.
Your GPU Core (NVVDD) Output Power (sum) is HIGHER than your total GPU power! I don't know how that's possible.
Or is this what is causing a thermal limit?
Either way, I'm sure this is what's causing the power limit at 500W. But I am unfamiliar with what NVVDD output power even is,
or FBVDD.

I'm sorry, I am unfamiliar with mining.

Can you do a port royal or Heaven benchmark run instead and post the hwinfo from that?
I'd like to see if the FBVDD and NVVDD Output power is still reporting sky high.

Your GPU core NVVDD2 Input Power (sum) (with respect to GPU power) seems to be okay,
as well as your GPU Core NVVDD input power (sum)

I'm taking a wild guess that GPU Core NVVDD Output Power (sum) should be relatively close to GPU Core NVVDD2 Input Power (sum)

I'm taking another wild guess that there's something weird with your GPU FBVDD Input Power (sum). Yours is reading a massive 287W,
while when I did a COD Warzone game at 4k, drawing about 547 watts, mine only read 32W ! Perhaps those two being high are related to each other.

Also my GPU Core NVVDD Output power (sum) was 472W and GPU Core NVVDD2 Input Power (sum) was 449W, both close to each other and lower than total GPU Power.
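The sanity check being applied in this post can be written down as a few lines of illustrative Python. The readings are the ones quoted above (the ~547 W Warzone run), and the rule is simply that no single rail should read higher than total GPU power:

```python
# Sanity-check HWiNFO rail readings after a shunt mod: no single rail
# should report more power than the total board power. The numbers are
# the ones quoted in this post (COD Warzone at 4K, ~547 W total).
total_w = 547
rails_w = {
    "GPU Core NVVDD Output Power (sum)": 472,
    "GPU Core NVVDD2 Input Power (sum)": 449,
    "GPU FBVDD Input Power (sum)": 32,
}

for name, watts in rails_w.items():
    status = "OK" if watts < total_w else "SUSPECT: check shunt solder / multiplier"
    print(f"{name}: {watts} W -> {status}")
```

On the card being diagnosed above, FBVDD reading 287 W (or a rail exceeding total power) would trip the SUSPECT case.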


----------



## Falkentyne

I just did a quick Heaven run at 482W, which is less total power, and the power balance is pretty much the same.
So I think something may be wrong with your shunt mod somewhere?

Unless mining messes something up? Or did you make a typo in a multiplier for the layout?

Can you take a screenshot running "Heaven benchmark" at unthrottled FPS and post your hwinfo for the power rails there? I want to see if you still have that strange high power reporting (Unless you entered a wrong multiplier??)


----------



## jsarver

wyattneill said:


> I was able to grab 6th
> View attachment 2464951


Nice! Got me on the gpu score. Need to get this fe under water ASAP.


----------



## anethema

Here it is looping Heaven. It was only so low and weird because mining throttles on 'thermal', which knocks the core clock down, so power drops way down.

This is running heaven so full power:














Falkentyne said:


> I just did a quick Heaven run at 482W, which is less total power and the power balance is pretty much the same.
> So I think something is wrong with your shunt mod somewhere possibly?
> 
> Unless mining messes something up? Or did you make a typo in a multiplier for the layout?
> 
> Can you take a screenshot running "Heaven benchmark" at unthrottled FPS and post your hwinfo for the power rails there? I want to see if you still have that strange high power reporting (Unless you entered a wrong multiplier??)
> 
> View attachment 2464955


----------



## Falkentyne

anethema said:


> Here it is looping heaven. It was only so low and weird because in mining it goes to 'thermal' for throttle so it knocks the core down so power goes way down.
> 
> This is running heaven so full power:


Ok compare yours to my 2nd screenshot (Heaven at 494W).
Your Input power is higher than your GPU power again.
And your FBVDD is abnormally high.
This is what is causing your power limit.

This is either a shunt mod gone wrong somewhere, or maybe you're using K5 Pro.
When you get the new thermal pads I suggested and take the card apart, check the soldering and integrity of all the shunts before applying the pads. It's very possible one isn't making proper contact. But even if it wasn't, your FBVDD should NOT be that high.

A badly done shunt can cause a skyrocketing wattage from another rail. I saw this in the shunt mod thread where you skipped PCIE slot and your MVDCC was sky high until you modded the PCIE slot.


----------



## HyperMatrix

ExDarkxH said:


> You're in line with what I would consider a good value. I think the creators designed it for us to target 60fps.
> take a look here to get a better idea of where you stand :
> Over 61 would be considered a strong score
> 
> 
> 
> 
> 
> New 3DMark DX12 RT benchmark results - EVGA Forums
> 
> 
> Post 'em here if you got 'em. If you dont know, 3DMark released a pure DX12 ray tracing benchmark as to allow users to test apples to apples Nvidia and AMD GPUs in raw RT performance. (https://wccftech.com/futuremark-launches-3dmark-directx-raytracing-benchmark-amd-nvidia-gpus/)Here's mine on a...
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> I was 62.84fps but consider that an outlier


You're a lucky SOB, haha. Mine only gets 60.91. +165 (2175MHz) is my crash point: it'll run for 15 seconds, but as soon as it heats up it crashes. At +150 it starts at 2160MHz and eventually heats up and drops to 2115MHz. While benching the Tensor cores with Optical Flow I was able to get to 2250MHz with +225, though.


----------



## wyattneill

jsarver said:


> Nice! Got me on the gpu score. Need to get this fe under water ASAP.



It won't let me DM you. I had a few questions about how you overclocked your 5950X.


----------



## jsarver

wyattneill said:


> It won't let me DM you. I had a few questions about how you overclocked your 5950X.


Hmm not sure why. Try my Twitter odyssey_og


----------



## wyattneill

jsarver said:


> odyssey_og


I did, but it won't let me message you until you follow me back.


----------



## Sky3900

Deleted.


----------



## anethema

Correct, yeah. I didn't want to mod the PCIe slot shunt, so I did all of them but that one, and it caused a massive spike on the one rail.

I will order some pads and give every shunt a reflow.

Appreciate the help that is awesome thanks.



Falkentyne said:


> Ok compare yours to my 2nd screenshot (Heaven at 494W).
> Your Input power is higher than your GPU power again.
> And your FBVDD is abnormally high.
> This is what is causing your power limit.
> 
> This is either a shunt mod gone wrong somewhere, or maybe you're using K5 Pro.
> When you get the new thermal pads I suggested and take the card apart, check the soldering and integrity of all the shunts before applying the pads. It's very possible one isn't making proper contact. But even if it wasn't, your FBVDD should NOT be that high.
> 
> A badly done shunt can cause a skyrocketing wattage from another rail. I saw this in the shunt mod thread where you skipped PCIE slot and your MVDCC was sky high until you modded the PCIE slot.


----------



## anethema

Falkentyne said:


> Ok compare yours to my 2nd screenshot (Heaven at 494W).
> Your Input power is higher than your GPU power again.
> And your FBVDD is abnormally high.
> This is what is causing your power limit.
> 
> This is either a shunt mod gone wrong somewhere, or maybe you're using K5 Pro.
> When you get the new thermal pads I suggested and take the card apart, check the soldering and integrity of all the shunts before applying the pads. It's very possible one isn't making proper contact. But even if it wasn't, your FBVDD should NOT be that high.
> 
> A badly done shunt can cause a skyrocketing wattage from another rail. I saw this in the shunt mod thread where you skipped PCIE slot and your MVDCC was sky high until you modded the PCIE slot.


Also, are the pads on both sides 1.5mm? I want to redo all the pads from scratch on both sides. I will have them then for when I watercool also.


----------



## Falkentyne

anethema said:


> Also, are the pads on both sides 1.5mm? I want to redo all the pads from scratch on both sides. I will have them then for when I watercool also.


Technically they are a mix of 1.5-1.8mm thickness on the back and 1.0mm pads on the front (someone here said he measured them), but 1.5mm pads will work perfectly on both sides.
Just remember to buy good pads. You will need four packs of the ones I mentioned due to how small they are (they seem to all be the same 11-12 W/mK pads from Thermalright, Gelid, and Fujipoly). They are also very compressible.

Apparently Thermalright makes a 120x120mm version of these pads, but they aren't for sale in the USA. I found them on AliExpress.


----------



## OC2000

ExDarkxH said:


> You're in line with what I would consider a good value. I think the creators designed it for us to target 60fps.
> take a look here to get a better idea of where you stand :
> Over 61 would be considered a strong score
> 
> 
> 
> 
> 
> New 3DMark DX12 RT benchmark results - EVGA Forums
> 
> 
> Post 'em here if you got 'em. If you dont know, 3DMark released a pure DX12 ray tracing benchmark as to allow users to test apples to apples Nvidia and AMD GPUs in raw RT performance. (https://wccftech.com/futuremark-launches-3dmark-directx-raytracing-benchmark-amd-nvidia-gpus/)Here's mine on a...
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> I was 62.84fps but consider that an outlier


Just re-ran the test with a custom curve: 1.06V set to 2160, 1.083V set to 2175, and 1.1V set to 2205, which passed. However, my score was less than 61. Maybe I needed to reset the system, as I had a lot of crashes beforehand getting to those settings.


----------



## jsarver

wyattneill said:


> I did, but it won't let me message you until you follow me back.


Handle doc? If so followed.


----------



## OC2000

Falkentyne said:


> Technically they are a mix of 1.5-1.8 thickness on the back and 1.0mm pads on the front (someone here said he measured them), but 1.5mm pads will work perfectly on both sides.
> Just remember to buy good pads. You will need four packs of the ones I mentioned due to how small they are (they seem to all be the same, 11-12 w/mk pads from Thermalright, Gelid, and fujipoly). They are also very compressible
> 
> Apparently Thermalright makes a 120x120 version of these pads but they aren't for sale in USA. Found them on aliexpress.



I just bought 4 packs of 1mm and 2 packs of 1.5mm Thermalright pads to do the entire front and back of the Asus 3090 with the EK block and backplate. Are there any known hotspots on the Strix?


----------



## GTANY

Sky3900 said:


> Here's some data points for an air cooled 3090FE, silver paint shunt mod, probably drawing around 500w-525w.
> 
> Borderlands 3: +98gpu +550mem, 2085-2100mhz, stable for several hours. 2k, ultra.
> 
> Metro bench loop: +82gpu +550mem, clocks vary through bench 1995-2070, probably averages 2025, stable for 30 loops+. 4k, ultra rtx.
> 
> Heaven: +98gpu +550 mem, clocks vary 1985-2070, average probably 2010, stable for an hour+. 4k, ultra.
> 
> WD Legion: +98gpu +550mem, 2070mhz, stable for several hours. 2k, ultra rtx.
> 
> 3d mark: +115gpu +550mem, clocks vary a lot depending on the bench. 3d mark seems to be able to tolerate a higher gpu clock for some reason.
> 
> Best PR: +147gpu +750mem, clocks vary throughout 1985-2170, average 2114. GPU chilled to 27c before starting bench.


Since Metro bench GPU frequency varies between 1995 and 2070, does it mean that this game consumes more than 525 W ?

And for those owning Control, may you test the GPU max frequency ? Control might be one of the most power-hungry games.


----------



## vmanuelgm

Assassin's Creed Valhalla Internal Bench with AMD 5950x and RTX 3090 (shunt modded):








Let some time for Youtube processing...


----------



## GTANY

I managed to order an RTX 3090 FE last week. It works fine and is very silent, but since I was afraid to shunt a card with only 2 power plugs, I finally managed to order an RTX 3090 Strix, the model I wanted. I will shunt mod, voltage mod, and watercool it. I am aiming for a minimum of 2100 MHz sustained at 4K, 100% load.

I hope that I will have some luck since a few Strix owners did not win the silicon lottery with this model.

For those who ordered a Bykski RTX 3090 Strix waterblock, what aliexpress shop can you recommend ? I want to order to a reliable seller.


----------



## Sync0r

GTANY said:


> Since Metro bench GPU frequency varies between 1995 and 2070, does it mean that this game consumes more than 525 W ?
> 
> And for those owning Control, may you test the GPU max frequency ? Control might be one of the most power-hungry games.












Control with my shunted Zotac Trinity: about 493W.








Here's another: 512W with the core at 2205MHz.

Not hitting any power limit, so it's drawing all the power it needs to maintain that clock @ 1.1V.


----------



## LeoGrant

Hi folks!
I've got an Aorus RTX 3090 Master. Great card with a monster cooling system and a very impressive design. According to the table in this thread it should have a 370W default and a 390W max power limit.
But I actually see that the default TDP is 350W, not 370W.
The BIOS is up to date (94.02.26.08.54 for the "OC" switch position and 94.02.26.08.53 for "Silent").

Please see the screenshots: 1. Default mode. 350W.









2. Limit + "5%". Boost to 390W.









But 390W is actually not 350W + 5%. And why does the BIOS report a default value of 370W?









I've discussed this issue with Gigabyte support and they said the problem may be in the NVIDIA drivers. I've tried NVIDIA drivers from 456.38 to 457.30; nothing changed.
I think Aorus has to release a corrected BIOS with a REAL 370W default, or correct the BIOS-reported data from 370W to 350W.
Am I right?
What are your default results, Aorus 3090 Master owners?


----------



## Sync0r

LeoGrant said:


> Hi folks!
> I've got Aorus RTX 3090 Master. Great card with monster cooling system. Very impressive design. According to table at this topic it should have 370W default to 390W max powerlimit.
> But actually I see that default TDP is 350W, not 370W.
> Bios is up to date (94.02.26.08.54 for "OC" switch position and 94.02.26.08.53 for "Silent").
> 
> Please see the screenshots: 1. Default mode. 350W.
> View attachment 2464990
> 
> 
> 2. Limit + "5%". Boost to 390W.
> View attachment 2464991
> 
> 
> But 390W is not 350W + 5% actually. And why BIOS reported default value 370W?
> View attachment 2464992
> 
> 
> I've discussion with Gigabyte support about this issue and they said the problem may be in NVidia drivers. I've tried NVIDIA drivers from 456.38 to 457.30. Nothing changed.
> I think that Aorus have to make a new correct BIOS with REAL 370W default of correct BIOS data from 370W to 350W.
> Am I right?
> What are your default results, Aorus 3090 Master owners?


The default with that card is 370W (100% PL); 390W is 105% PL. It's working fine.

Remember that it won't always draw 370W if it doesn't need to; it pulls only the power that is required. It's a limit, not like forcing a voltage.
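The 100%/105% figures above are just percentages of the BIOS default power limit. A quick sanity check on the numbers (370 W is the Aorus Master default quoted in this exchange; other BIOSes differ):

```python
# Power-limit percentages are applied to the BIOS default (the 100% value).
# 370 W is the Aorus Master default discussed above; treat it as an
# assumption for any other BIOS.
DEFAULT_PL_W = 370

for percent in (100, 105):
    watts = DEFAULT_PL_W * percent / 100
    print(f"{percent}% PL -> {watts:.1f} W")
```

The 105% result of 388.5 W is what monitoring tools display rounded as the ~390 W figure, which is consistent with 370 W being the true 100% value rather than 350 W.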


----------



## LeoGrant

Sync0r said:


> Default with that card is 370w (100% PL) 390w is 105% PL. Its working fine.
> 
> Remember that it won't always draw 370w if it doesn't need to, it pulls/draws power for what is required, its a limit, its not like forcing a voltage.


Thanks for the quick reply.
But FurMark puts a full load on the card (when I move it +5% in Aorus Engine, consumption increases to the full 390W). I have no special software that changes the PL, the energy mode in the NVIDIA panel is "Optimal", Windows 10 20H2.
What setting could be lowering the default limit to 350W?


----------



## Sync0r

LeoGrant said:


> Thanks for a quick reply.
> But Furmark get full load to vc. (When I move +5% in Aorus Engine the consumption increases to full 390W). I have no special software that make corrections in PL, energy mode in NVidia panel is "Optimal". Windows 10 20H2.
> What settings can lower the default limit to 350W?


I've never had someone request this before, surely you will just have it at 105% all the time?


----------



## LeoGrant

Sync0r said:


> I've never had someone request this before, surely you will just have it at 105% all the time?


My request is not to get 350W; I just want my 370W default instead of 350W.


----------



## krs360

Just tried to download nvflash from the first post, but it's missing the nvflash64 file. Is this normal? Do we just use nvflash.exe now instead?


----------



## GTANY

Sync0r said:


> View attachment 2464994
> 
> 
> Control with my shunted Zotac Trinity about ~493w.
> View attachment 2464995
> 
> Here's another, 512w core at 2205Mhz.
> 
> Not hitting any power limit. So its drawing all the power its needs to maintain that clock @ 1.1v.


Is ray-tracing enabled ? What is your resolution ? 512 W is lower than I expected.


----------



## Sync0r

GTANY said:


> Is ray-tracing enabled ? What is your resolution ? 512 W is lower than I expected.


1440p yeah rtx on and dlss on, all max.


----------



## GTANY

Sync0r said:


> 1440p yeah rtx on and dlss on, all max.


OK, thank you. Consequently, with a shunt mod allowing a 600 W power limit, no game should be power-limited anymore?


----------



## Falkentyne

GTANY said:


> OK, thank you. Consequently, with a shunt mod allowing a 600 W power limit, no game should be power-limited anymore?


Depends on resolution. At 1080p, no. At 1440p, not sure. At 4K (or 1080p + 4x OGSS supersampling / 200% render scale), you will probably leapfrog 600W, and the chip will probably go nuclear and need sub-ambient cooling at that point. The exceptions are games that for some reason don't draw a lot of watts (like WD:L; even Warzone at 4K doesn't seem to draw more than 500W).


----------



## Sync0r

GTANY said:


> OK, thank you. Consequently, with a shunt mod allowing a 600 W power limit, no game should be power-limited anymore?


Pretty much. At 1440p not seen any game power limit me at the moment. If I get some time I could try some res scaling to see if I can get it to hit limit.


----------



## GTANY

Falkentyne said:


> Depends on resolution. At 1080p, no. At 1440p, not sure. At 4k (or 1080p + 4x OGSS Supersampling / 200% render scale), you will probably leapfrog 600W. The chip will probably go nuclear and will need to be sub ambient cooled at that point. Except for games that for some reason don't draw a lot of watts (like WD:L. Even Warzone at 4k won't seem to draw more than 500W).


OK, thanks. I will give consumption feedback once my Strix is watercooled and shunt-modded, since I play in 4K.


----------



## Spiriva

LeoGrant said:


> Thanks for a quick reply.
> But Furmark get full load to vc. (When I move +5% in Aorus Engine the consumption increases to full 390W). I have no special software that make corrections in PL, energy mode in NVidia panel is "Optimal". Windows 10 20H2.
> What settings can lower the default limit to 350W?


Is the only difference between 94.02.26.08.53 and 94.02.26.08.54 the fan curve? I'm running 94.02.26.08.53 and it says 390W is my limit.


----------



## mirkendargen

GTANY said:


> I managed to order a RTX 3090 FE last week, it works fine, very silent but since I was afraid to shunt it with 2 power plugs only, I finally managed to order a RTX 3090 Strix, the model I wanted. I will shunt and voltage mod and watercool it. I aim at 2100 Mhz minimum sustained in 4 K, 100 % load.
> 
> I hope that I will have some luck since a few Strix owners did not win the silicon lottery with this model.
> 
> For those who ordered a Bykski RTX 3090 Strix waterblock, what aliexpress shop can you recommend ? I want to order to a reliable seller.


I got my Bykski Strix block from here: bykski Official Store - Amazing prodcuts with exclusive discounts on AliExpress


----------



## GTANY

mirkendargen said:


> I got my Bykski Strix block from here: bykski Official Store - Amazing prodcuts with exclusive discounts on AliExpress


Thank you. I am going to order to your recommended shop.


----------



## Sky3900

GTANY said:


> Since Metro bench GPU frequency varies between 1995 and 2070, does it mean that this game consumes more than 525 W ?
> 
> And for those owning Control, may you test the GPU max frequency ? Control might be one of the most power-hungry games.


Yep, the Metro bench is still power limited at 525W running 4K, ultra, RTX; GPU-Z displays the power perf cap. At +98 GPU the card can hit 2130-2145MHz @ 1.1V.

The card comes pretty close to running at the voltage limit in several games I play at 2K resolution.

FurMark at 4K only clocks in at 1700MHz... I wonder how many watts this thing would devour running FurMark at 2145MHz? 800-900W?


----------



## Falkentyne

Sky3900 said:


> Yep, Metro bench is still power limited running 4K, ultra, RTX at 525w. Gpuz displays power perf cap. At +98gpu the card can hit 2130-2145mhz @1.1v.
> 
> The card comes pretty close to running at the voltage limit in several games I play when running 2k resolution.
> 
> Furmark at 4k only clocks in at 1700mhz... I wonder how many watts this thing would devour running Furmark at 2145mhz? 800w-900w?


FurMark is intentionally power-throttled in the drivers.
You can run it at full speed by renaming it to Quake3.exe, but please do not do this. If you must try it, do it on a backup card.
Please don't do this....
Also, the last time I saw people renaming executables was in Windows XP; I have no idea if it still works in Windows 10 with the drivers' per-app detection. You might need to disable permissions in ShutUp10.


----------



## GTANY

Sky3900 said:


> Yep, Metro bench is still power limited running 4K, ultra, RTX at 525w. Gpuz displays power perf cap. At +98gpu the card can hit 2130-2145mhz @1.1v.
> 
> The card comes pretty close to running at the voltage limit in several games I play when running 2k resolution.
> 
> Furmark at 4k only clocks in at 1700mhz... I wonder how many watts this thing would devour running Furmark at 2145mhz? 800w-900w?


Interesting. Thank you for your feedback.


----------



## iquit040

I have 2 of these cards on the way. I was this close to cancelling the orders until I saw you shunted this card with good success..... now I am psyched to do it.



Sync0r said:


> View attachment 2464994
> 
> 
> Control with my shunted Zotac Trinity about ~493w.
> View attachment 2464995
> 
> Here's another, 512w core at 2205Mhz.
> 
> Not hitting any power limit. So its drawing all the power its needs to maintain that clock @ 1.1v.


----------



## Sync0r

iquit040 said:


> I have 2 of these cards on the way, Was this close to cancelling the orders until I saw you shunted this card with good success..... now I am psyched to do it


It's a beast, basically the same as all the others once you shunt it! I got mine fairly cheap as well, £1400.

I shunted everything on the front with 5 mOhm, did 20 mOhm on the PCIe slot, then flashed the Asus Strix BIOS.


----------



## Sync0r

Okay, if I run the Metro Exodus benchmark with DLSS off and RTX on, I am power limited. Sorry for the awful pic, I had to use my phone. 563.1W!


----------



## Sync0r

Control with DLSS off consumes more power: 540W (DLSS off) vs 492W (DLSS on). Makes sense I guess, since DLSS renders at a lower actual resolution and upscales:


----------



## LeoGrant

Spiriva said:


> Is the only diff between 94.02.26.08.53 / 94.02.26.08.54 fan curve? Im running the 94.02.26.08.53 and it says 390w is my limit.


It seems the difference is just the fan curve; I see no other differences.
What software says that your limit is 390W?


----------



## iquit040

Sync0r said:


> Its a beast, basically the same as all the others if you shunt! I got mine fairly cheap as well £1400.
> 
> Shunt all with 5mo on the front and I did a 20mo on the PCIe slot, then flashed with Asus Strix bios.


I might skip the PCIe slot shunt. I am just hoping the Alphacool block and a loop with a 420 rad is good enough to keep it tamed.


----------



## pat182

Sync0r said:


> View attachment 2464994
> 
> 
> Control with my shunted Zotac Trinity about ~493w.
> View attachment 2464995
> 
> Here's another, 512w core at 2205Mhz.
> 
> Not hitting any power limit. So its drawing all the power its needs to maintain that clock @ 1.1v.


On my Strix it caps at 2085MHz, 480W, capped all the time (4K). How can you see real power draw if you're shunted?

Also, is it worth shunting and repasting if I'm staying on the stock cooler?


----------



## GTANY

Sync0r said:


> Okay if I run Metro Exodus benchmark with DLSS off and RTX on I am power limited. Sorry for awful pic, had to use my phone. 563.1w!
> View attachment 2465019


According to your tests, the GPU frequency shouldn't decrease with a 600 W power limit?


----------



## Falkentyne

iquit040 said:


> I might skip the PCIE slot shunt. I am just hoping the alphacool block and loop with 420 rad is good enough to keep it tamed


NOT a good idea to skip the PCIe shunt! Skipping it will cause the PCIe 75W limit to flag a PWR limit and keep the 8-pin connections from drawing more power.
GPU Core, SRC, and NVVDD can also limit power. There is power balancing going on; all shunts must be modded!



pat182 said:


> On my strix it caps at 2085mhz 480watt capped all the time. How can you see real power draw if you re shunted?
> 
> Also is it worth shunting repaste if im staying on stock cooler


Use the shunt mod calculator to determine the hwinfo multiplier. You can customize values in HWinfo to change the reporting.








GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.


Work out what shunt values to use easily. Contribute to bmgjet/ShutMod-Calculator development by creating an account on GitHub.




github.com





Note: painting the shunt directly with MG Chemicals silver paint (842AR), after removing the protective conformal coating from the edges of the original shunts with a small flat blade, acts as a ~15 mOhm resistor.
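As a rough illustration of what the linked calculator works out: the silver-paint path sits in parallel with the stock shunt, and the HWiNFO correction multiplier is the ratio of stock to effective resistance. A minimal sketch, assuming a 5 mOhm stock shunt and the ~15 mOhm paint figure from the note above (both values are thread estimates, not measurements):

```python
# How a painted shunt changes the reading: the paint adds a resistive
# path in parallel with the stock shunt, so the measured voltage drop
# (and thus the reported power) shrinks.

def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

r_stock = 5.0   # mOhm, assumed stock shunt
r_paint = 15.0  # mOhm, approximate silver-paint path

r_eff = parallel(r_stock, r_paint)  # 3.75 mOhm
multiplier = r_stock / r_eff        # real power = reported power * this
print(f"Effective shunt: {r_eff} mOhm, HWiNFO multiplier: {multiplier:.3f}")
```

With these assumed values the paint mod gives roughly a 1.33x multiplier, milder than replacing the shunt outright with a lower-value part.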


----------



## iquit040

Falkentyne said:


> NOT a good idea to skip PCIE shunt! Skipping PCIE shunt will cause PCIE 75W limit to flag PWR limit, and limit the 8 pin connections from drawing more power.
> GPU Core, SRC and NVVDD can also limit power as well. There is power balancing going on. All shunts must be modded!
> 
> 
> 
> Use the shunt mod calculator to determine the hwinfo multiplier. You can customize values in HWinfo to change the reporting.
> 
> 
> 
> 
> 
> 
> 
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
> 
> 
> Work out what shunt values to use easily. Contribute to bmgjet/ShutMod-Calculator development by creating an account on GitHub.
> 
> 
> 
> 
> github.com
> 
> 
> 
> 
> 
> Note: Painting the shunt directly with MG chemicals silver paint (842AR)-after removing the protective conformal coating from the edges of the original shunts with a small flat blade, acts as a ~15 mOhm resistor.


I see... Well, I guess it's going to be time to order some shunts.


----------



## pat182

Falkentyne said:


> NOT a good idea to skip PCIE shunt! Skipping PCIE shunt will cause PCIE 75W limit to flag PWR limit, and limit the 8 pin connections from drawing more power.
> GPU Core, SRC and NVVDD can also limit power as well. There is power balancing going on. All shunts must be modded!
> 
> 
> 
> Use the shunt mod calculator to determine the hwinfo multiplier. You can customize values in HWinfo to change the reporting.
> 
> 
> 
> 
> 
> 
> 
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
> 
> 
> Work out what shunt values to use easily. Contribute to bmgjet/ShutMod-Calculator development by creating an account on GitHub.
> 
> 
> 
> 
> github.com
> 
> 
> 
> 
> 
> Note: Painting the shunt directly with MG chemicals silver paint (842AR)-after removing the protective conformal coating from the edges of the original shunts with a small flat blade, acts as a ~15 mOhm resistor.


Yes, but will I see MHz gains even on the stock Strix cooler? Or is it going to be a moot point because it will raise temps and the clock will drop anyway? Average sustained temp at 480W is 70C.


----------



## Spiriva

LeoGrant said:


> It seems that diff is just fan curve. I see no other differences.
> What software says that your limit is 390W?


GPU-Z shows 390W, and RTSS shows ~396W max during the Superposition benchmark. Watch Dogs: Legion sits around 385W with the GPU at 2190MHz.
I have a PNY 3090 flashed with the Aorus Master BIOS.


----------



## Falkentyne

iquit040 said:


> I see... Well guess it going to be time to order some shunts


No need to order shunts. You can just paint the existing ones as I described several pages ago, with MG Chemicals silver paint (842AR).
Make sure you wipe the conformal coating off the original shunts first. When it becomes shiny on the edges instead of a "dull" silver, you did your job.
Clean the shunt with isopropyl alcohol and some tissue/lint free cloth (NO STATIC) then use a toothpick to bridge the silver edges together by painting the entire shunt.

This acts like a ~15 mOhm resistor.

All you need:
1) https://www.amazon.com/gp/product/B01MCXW1Y1/
2) wooden toothpicks and a small flat blade screwdriver to scrape off the conformal coating off the edges of the shunts.
3) steady hand.
4) patience.
5) isopropyl alcohol (to clean up any goofs).

6) Let it dry 15 minutes before flipping the board over to do backside shunt.
7) I suggest having new thermal pads for a re-pad job if you need to do that also.

COVER THE PAINT with the cap always, after dipping the toothpick in it and have as little as possible on the toothpick so you can control the application.
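The effect of painting a ~15 mOhm path across a stock shunt can be sketched as a parallel-resistor calculation. This is a rough model: the 5 mOhm stock value is an assumption (a common value on 3090 boards, but check your card), and the paint resistance will vary with application.

```python
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two resistors in parallel, in milliohms."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def reporting_multiplier(stock_mohm: float, added_mohm: float) -> float:
    """Factor to multiply reported power by to get actual power.

    The card senses current from the voltage drop across the shunt and
    assumes the stock resistance, so lowering the effective resistance
    makes it under-report by stock / effective.
    """
    return stock_mohm / parallel(stock_mohm, added_mohm)

stock = 5.0   # mOhm -- assumed stock shunt value
paint = 15.0  # mOhm -- silver-paint bridge, per the post above

print(round(parallel(stock, paint), 3))             # effective resistance
print(round(reporting_multiplier(stock, paint), 3)) # HWiNFO custom multiplier
```

With these assumed values the effective resistance drops to 3.75 mOhm and the card under-reports power by a factor of about 1.33, which is the multiplier you'd enter in HWiNFO or an Afterburner overlay.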


----------



## DimmyK

Question to you 3090 gurus: is it safe to run 3090 FE on good quality 650W PSU or will I be pushing it? Not planning to overclock. PSU is EVGA SuperNOVA 650 G3 650W 80+ Gold, bought in 2017. CPU is 7700K @4.5Ghz. 3 SSDs, 2HDDs, 3 or 4 case fans. 2 sticks of DDR4 3200 memory.


----------



## Falkentyne

DimmyK said:


> Question to you 3090 gurus: is it safe to run 3090 FE on good quality 650W PSU or will I be pushing it? Not planning to overclock. PSU is EVGA SuperNOVA 650 G3 650W 80+ Gold, bought in 2017. CPU is 7700K @4.5Ghz. 3 SSDs, 2HDDs, 3 or 4 case fans. 2 sticks of DDR4 3200 memory.


Don't even think about it.


----------



## Sync0r

pat182 said:


> On my Strix it caps at 2085MHz, 480W-capped all the time (4K). How can you see real power draw if you're shunted?
> 
> Also, is it worth shunting and repasting if I'm staying on the stock cooler?


I've added a multiplier to the GPU power figure in the MSI Afterburner overlay so it shows real power usage. It seems to be fairly accurate based on what's being pulled from the wall.


----------



## Sync0r

DimmyK said:


> Question to you 3090 gurus: is it safe to run 3090 FE on good quality 650W PSU or will I be pushing it? Not planning to overclock. PSU is EVGA SuperNOVA 650 G3 650W 80+ Gold, bought in 2017. CPU is 7700K @4.5Ghz. 3 SSDs, 2HDDs, 3 or 4 case fans. 2 sticks of DDR4 3200 memory.


I'm seeing 750W from the wall when gaming: 10900K at 5.3GHz @ 1.375V, and the GPU pulling 500W+.
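As a rough sanity check on why a 650W unit is marginal for a system like this: wall draw includes the PSU's own losses, so the DC-side load is approximately the wall figure times efficiency. The 90% figure below is an assumption for an 80+ Gold unit in its typical load range, not a measured value.

```python
def dc_load_from_wall(wall_watts: float, efficiency: float) -> float:
    """Estimate the DC-side load from a measured wall draw."""
    return wall_watts * efficiency

# Assumed figures: 750W at the wall, ~90% efficiency (80+ Gold, mid load).
load = dc_load_from_wall(750.0, 0.90)
print(round(load))  # ~675W DC -- already above a 650W PSU's rating
```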


----------



## pat182

Sync0r said:


> I've added a multiplier to the GPU power figure in the msi afterburner overlay so it shows real power usage. It seems to be fairly accurate based on what is being pulled from the wall.


Would shunting a Strix on the stock air cooler do any good, or is it not worth it?


----------



## Sync0r

pat182 said:


> Would shunting a Strix on the stock air cooler do any good, or is it not worth it?


I think temps would become unmanageable; you really need to water cool it when you're asking it to draw more power. I'd water cool the card first. You'll get a good bump in performance, because it will clock higher from the reduced temp.


----------



## Sync0r

GTANY said:


> According to your tests, the GPU frequency shouldn't decrease with a 600 W power limit?


On the Strix BIOS I am hitting the power limit on the 8-pin in Metro, assuming the balance is 135W per 8-pin and 75W from the slot (3x135W + 75W = 480W). Since the shunt mod halves the reported values, a reported 138.5W is actually 277W through a single 8-pin. I think they can go to 330W; I don't think I want to push it any further.

So my power limit is about 565W or thereabouts.
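The arithmetic in the post above can be written out explicitly. The values are taken from the post itself; the 2x multiplier assumes a shunt mod that exactly halves the reported current, and real firmware power balancing is messier than this.

```python
# Assumed per-rail budget from the post: 135W reported per 8-pin
# connector plus 75W from the PCIe slot.
REPORTED_LIMIT_8PIN = 135.0
SLOT_LIMIT = 75.0

def reported_budget(n_8pin: int) -> float:
    """Total reported power budget for a card with n 8-pin connectors."""
    return n_8pin * REPORTED_LIMIT_8PIN + SLOT_LIMIT

def actual_power(reported: float, multiplier: float = 2.0) -> float:
    """With a shunt mod that halves reported current, actual power is
    the reported figure times the multiplier (2.0 for a halving mod)."""
    return reported * multiplier

print(reported_budget(3))   # 480.0 -- the 3x135W + 75W figure above
print(actual_power(138.5))  # 277.0 -- real draw behind a 138.5W reading
```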


----------



## Sync0r

Okay, I flashed the EVGA XOC BIOS with the 500W power limit; works a treat!

597.225W power limit now. Time to bench!


----------



## wyattneill

GTANY said:


> I managed to order a RTX 3090 FE last week, it works fine, very silent but since I was afraid to shunt it with 2 power plugs only, I finally managed to order a RTX 3090 Strix, the model I wanted. I will shunt and voltage mod and watercool it. I aim at 2100 Mhz minimum sustained in 4 K, 100 % load.
> 
> I hope that I will have some luck since a few Strix owners did not win the silicon lottery with this model.
> 
> For those who ordered a Bykski RTX 3090 Strix waterblock, what aliexpress shop can you recommend ? I want to order to a reliable seller.


I have done all these things, and mine can run; the highest I have seen was over 2200MHz sustained in 3DMark, and I managed to get on every leaderboard with this card. What do you mean, a few owners did not win the silicon lottery? Their cards only go up to 2000MHz, or what? Just curious.


----------



## Wihglah

DimmyK said:


> Question to you 3090 gurus: is it safe to run 3090 FE on good quality 650W PSU or will I be pushing it? Not planning to overclock. PSU is EVGA SuperNOVA 650 G3 650W 80+ Gold, bought in 2017. CPU is 7700K @4.5Ghz. 3 SSDs, 2HDDs, 3 or 4 case fans. 2 sticks of DDR4 3200 memory.


I have seen 816W max draw from my machine. It's in my sig.


----------



## Esenel

Sync0r said:


> I think temps would become unmanageable, you really need to water cool it when you are asking it to draw more power. I'd water cool the card first, you will get a good bump in performance, because it will clock higher from reduced temp.


What is your percentage performance gain going from the Strix waterblock at 480W to your shunt-modded setup?

Thanks


----------



## iquit040

Sync0r said:


> Okay I flashed the EVGA XOC BIOS with 500 W power limit, works a treat!
> 
> 597.225w power limit now. Time to bench!
> 
> View attachment 2465036


Holy hell... 600W, lol. Do you happen to have any pics of the shunt mod on the Trinity? I am going to order my shunts today, hopefully.


----------



## Sync0r

Esenel said:


> What is your percentage performance gain going from the Strix waterblock at 480W to your shunt-modded setup?
> 
> Thanks


I'm using a Zotac Trinity 3090. I'm not sure what percentage gain you would get exactly, but reducing temp allows the card to clock higher, because of how GPU Boost works with its 15MHz bins. Every card is different as well.
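A toy model of the 15MHz bins mentioned here: GPU Boost steps the clock down in 15MHz increments as the core warms up. The temperature thresholds below are illustrative assumptions, not NVIDIA's actual boost curve, and real behavior also depends on voltage and power headroom.

```python
BIN_MHZ = 15  # GPU Boost adjusts clocks in 15 MHz steps

def boost_clock(max_clock_mhz: int, temp_c: float,
                bin_drop_start_c: float = 35.0,
                degrees_per_bin: float = 5.0) -> int:
    """Illustrative model: shed one 15 MHz bin for every few degrees
    above a threshold. Threshold values here are assumptions."""
    if temp_c <= bin_drop_start_c:
        return max_clock_mhz
    bins_dropped = int((temp_c - bin_drop_start_c) // degrees_per_bin) + 1
    return max_clock_mhz - bins_dropped * BIN_MHZ

print(boost_clock(2100, 30))  # cool card holds the top bin: 2100
print(boost_clock(2100, 50))  # warmer card sheds several bins: 2040
```

This is why water cooling "gains" clocks without touching the offset: keeping the core below the first threshold simply stops bins from being shed.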


----------



## Sync0r

iquit040 said:


> Holy hell... 600W, lol. Do you happen to have any pics of the shunt mod on the Trinity? I am going to order my shunts today, hopefully.


I don't have a pic sorry, my soldering is pretty terrible


----------



## iquit040

Sync0r said:


> I don't have a pic sorry, my soldering is pretty terrible


Looks like your power readings are showing three 8-pins; isn't the 3090 Trinity only 2x8-pin?


----------



## DimmyK

Falkentyne said:


> Don't even think about it.





Sync0r said:


> I'm seeing 750w from the wall when gaming. 10900k at 5.3 @ 1.375v and GPU pulling 500w+





Wihglah said:


> I have seen 816W max draw from my machine. It's in my sig.


Thanks, fellas. Just what I thought. Ordered a Seasonic PRIME PX-850 to go with the shiny new 3090. I'll be joining the club soon.


----------



## Sync0r

iquit040 said:


> Looks like your power readings are showing three 8-pins; isn't the 3090 Trinity only 2x8-pin?


Yeah, it is only two; ignore the 3rd reading, it's exactly the same as the 1st.


----------



## Falkentyne

DimmyK said:


> Thanks, fellas. Just what I thought. Ordered a Seasonic PRIME PX-850 to go with the shiny new 3090. I'll be joining the club soon.


Solid PSU. However, if you plan on TDP modding via shunt mod in the future, I would go with the PX-1000, not the 850.
There's also a new TEC Cryo CPU cooler coming out that will probably draw 200 watts by itself and installs like an AIO. With all of this in mind, I think a PX-1000 is the wiser choice.

Note: I have a Seasonic PX-1000.

Whatever you choose, when you receive the PSU, make sure you contact Seasonic for a free 12 pin cable. It also helps overclock stability compared to the FE adapter. One user had his computer crash and reboot when using the FE adapter, which was fixed by switching to the Seasonic 12 pin directly.

Food for thought.


----------



## DimmyK

Falkentyne said:


> Solid PSU. However, if you plan on TDP modding via shunt mod in the future, I would go with the PX-1000, not the 850.
> There's also a new TEC Cryo CPU cooler coming out that will probably draw 200 watts by itself and installs like an AIO. With all of this in mind, I think a PX-1000 is the wiser choice.
> 
> Note: I have a Seasonic PX-1000.
> 
> Whatever you choose, when you receive the PSU, make sure you contact Seasonic for a free 12 pin cable. It also helps overclock stability compared to the FE adapter. One user had his computer crash and reboot when using the FE adapter, which was fixed by switching to the Seasonic 12 pin directly.
> 
> Food for thought.


Thanks for the info, appreciate free 12pin cable tip. Not planning on OCing, much less shunt modding though, so 850W should be plenty, especially from something like this Seasonic unit. I had Seasonics before, love them.


----------



## pat182

Sync0r said:


> I'm using a Zotac Trinity 3090. I'm not sure what percentage gain you would get exactly. But reducing temp allows the card to clock higher, because of how GPU boost works with its 15mhz bins. Every card is different as well.


The Trinity has only 2x8-pin; how can the EVGA BIOS work, and how does it show 3x8-pin power? People said you can't flash a 3x8-pin BIOS onto a 2x8-pin card. I just don't understand anything now.


----------



## Sync0r

pat182 said:


> The Trinity has only 2x8-pin; how can the EVGA BIOS work, and how does it show 3x8-pin power? People said you can't flash a 3x8-pin BIOS onto a 2x8-pin card. I just don't understand anything now.


It will only work well if you are shunted, due to the mod halving the reported power usage. If not shunted, you will only get a ~375W power limit, probably less, as the 2nd 8-pin doesn't seem to get utilised like the first.
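A quick sketch of why the unshunted 2x8-pin card tops out around 375W under a 3x8-pin BIOS. The 150W-per-8-pin and 75W-slot split is an assumption for the sketch (these are the PCIe spec ceilings), and as noted above, real power balancing can leave a rail under-utilised.

```python
def unshunted_ceiling(n_8pin: int, per_8pin_w: float = 150.0,
                      slot_w: float = 75.0) -> float:
    """Rough ceiling: each populated 8-pin rail plus the PCIe slot."""
    return n_8pin * per_8pin_w + slot_w

def shunted_ceiling(n_8pin: int, multiplier: float = 2.0) -> float:
    """With halved reporting, the same rail limits admit ~2x the power."""
    return unshunted_ceiling(n_8pin) * multiplier

print(unshunted_ceiling(2))  # 375.0 -- matches the figure in the post
print(shunted_ceiling(2))    # 750.0 -- theoretical; balancing cuts this down
```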


----------



## motivman

Absolute best I can do with my setup: 9900K @ 5.0GHz, MSI Z390-A PRO, 32GB DDR4-3600, PNY 3090 (reference 2x8-pin PCB) with the Gigabyte 390W BIOS, +240 core / +1000 memory. On an EK waterblock, 29°C ambient, 45°C load temp. If I could drop my ambient 10 degrees, I know I could score higher... lol. I do not want to shunt mod my card; I usually sell my hardware to upgrade, and it wouldn't feel right selling a shunted card to a new owner.. HAHA



https://www.3dmark.com/3dm/52869376?


----------



## kx11

gained 7fps in RT test with my new build




https://www.3dmark.com/dxr/15518




older run



https://www.3dmark.com/dxr/4852


----------



## HyperMatrix

motivman said:


> Absolute best I can do with my setup: 9900k @ 5.0ghz, MSI Z390A PRO, 32gb ddr4 3600, PNY 3090 (reference 2X8 pin PCB) with gigabyte 390w bios +240 core/+1000 memory. On EK waterblock, 29C ambient, 45C load temp. If I can drop my ambient 10 Degrees, I know I can score higher... lol. I do not want to shunt mod my card, I usually sell my hardware to upgrade, and won't feel right selling a shunted card to a new owner.. HAHA
> 
> 
> 
> https://www.3dmark.com/3dm/52869376?


I’ve never understood why people would be hesitant to sell a shunt-modded card. Since I sell my cards with water blocks anyway, there’s a high chance the person buying it actually appreciates the shunting. I always list it in the item details but have never thought of it as a negative thing. The card is able to outperform a stock card, and not everyone is able or comfortable enough to do the soldering themselves.


----------



## motivman

HyperMatrix said:


> I’ve never understood why people would be hesitant to sell a shunt-modded card. Since I sell my cards with water blocks anyway, there’s a high chance the person buying it actually appreciates the shunting. I always list it in the item details but have never thought of it as a negative thing. The card is able to outperform a stock card, and not everyone is able or comfortable enough to do the soldering themselves.


Shunted card = bye-bye warranty. Plus, my plan is to sell this PNY and buy either the ASUS 3090 Strix or MSI 3090 Gaming Trio when those cards, as well as waterblocks, are readily available. My understanding is that I can get 500W on either the Strix or the Trio without shunt modding. So that is my plan, at least; the PNY will do for now.


----------



## Falkentyne

HyperMatrix said:


> I’ve never understood why people would be hesitant to sell a shunt-modded card. Since I sell my cards with water blocks anyway, there’s a high chance the person buying it actually appreciates the shunting. I always list it in the item details but have never thought of it as a negative thing. The card is able to outperform a stock card, and not everyone is able or comfortable enough to do the soldering themselves.


Nothing wrong with selling a shunt modded card. They get a card that clocks and benches better than stock cards. Just tell the buyer the card is modded and see if he agrees to purchase.



motivman said:


> shunted card = bye bye warranty.


Warranties are non transferable anyway.


----------



## HyperMatrix

motivman said:


> Shunted card = bye-bye warranty. Plus, my plan is to sell this PNY and buy either the ASUS 3090 Strix or MSI 3090 Gaming Trio when those cards, as well as waterblocks, are readily available. My understanding is that I can get 500W on either the Strix or the Trio without shunt modding. So that is my plan, at least; the PNY will do for now.


First off, as Falk mentioned, warranties are generally non-transferable. Secondly, if they were transferable, I believe under the Magnuson-Moss Warranty Act in the United States, the manufacturer would have to _prove_ that the fault at hand is a direct result of the service you performed on the card. They can't say "well, service was performed by you... therefore we don't know what happened... but it wasn't done by us... therefore no warranty."

So technically speaking...if you reinstall the standard resistors, or replace them with equivalent resistors, and send the card in for warranty, there is no way for the manufacturer to _prove_ that you did anything other than replace the resistors. They can have guesses as to why. But for all they know...you could have had a dead card, you took it to a repair shop, and that repair shop said that the most common thing they've seen go wrong with cards is the resistors so they replaced them. Or you replaced them yourself. Unless you give them evidence of having done anything to the contrary, you're protected by law.

You might think this is dishonest. But keep in mind that first off...it's the law. Secondly, companies are always looking for any excuse whatsoever to deny you service, even when you've done nothing wrong. Thirdly, you don't even have to lie to them. As a lawyer would tell you...just don't say anything. They have nothing unless you give it to them. Burden of proof is on them.


----------



## ExDarkxH

Feeling good. Maxed out the stock air cooler as far as I could.

The water block should be here any day now; I'd love to push my card some more.


----------



## motivman

ExDarkxH said:


> Feeling good. Maxed out the stock air cooler as far as i could.
> 
> Water block should be here any day now i would love to push my card some more.


Good lord, what card is that? Is it shunted? Hating my PNY reference card right now; I want to rip it out of my PC and throw it in the garbage! lol.


----------



## motivman

HyperMatrix said:


> First off, as Falk mentioned, warranties are generally non-transferable. Secondly, if they were transferable, I believe under the magnuson-moss warranty act in the United States, the manufacturer would have to _prove_ that the fault at hand is a direct result of the service you performed on the card. They can't say "well service was performed by you...therefore we don't know what happened...but it wasn't done by us...therefore no warranty."
> 
> So technically speaking...if you reinstall the standard resistors, or replace them with equivalent resistors, and send the card in for warranty, there is no way for the manufacturer to _prove_ that you did anything other than replace the resistors. They can have guesses as to why. But for all they know...you could have had a dead card, you took it to a repair shop, and that repair shop said that the most common thing they've seen go wrong with cards is the resistors so they replaced them. Or you replaced them yourself. Unless you give them evidence of having done anything to the contrary, you're protected by law.
> 
> You might think this is dishonest. But keep in mind that first off...it's the law. Secondly, companies are always looking for any excuse whatsoever to deny you service, even when you've done nothing wrong. Thirdly, you don't even have to lie to them. As a lawyer would tell you...just don't say anything. They have nothing unless you give it to them. Burden of proof is on them.


Yeah, I know, you are right; I am just too chicken to shunt my card... I'd rather just sell it later and get a Trio X and waterblock once they are readily available, flash the 500W BIOS, and live happily ever after... at least that is the plan, LOL.


----------



## LVNeptune

HyperMatrix said:


> First off, as Falk mentioned, warranties are generally non-transferable. Secondly, if they were transferable, I believe under the magnuson-moss warranty act in the United States, the manufacturer would have to _prove_ that the fault at hand is a direct result of the service you performed on the card. They can't say "well service was performed by you...therefore we don't know what happened...but it wasn't done by us...therefore no warranty."
> 
> So technically speaking...if you reinstall the standard resistors, or replace them with equivalent resistors, and send the card in for warranty, there is no way for the manufacturer to _prove_ that you did anything other than replace the resistors. They can have guesses as to why. But for all they know...you could have had a dead card, you took it to a repair shop, and that repair shop said that the most common thing they've seen go wrong with cards is the resistors so they replaced them. Or you replaced them yourself. Unless you give them evidence of having done anything to the contrary, you're protected by law.
> 
> You might think this is dishonest. But keep in mind that first off...it's the law. Secondly, companies are always looking for any excuse whatsoever to deny you service, even when you've done nothing wrong. Thirdly, you don't even have to lie to them. As a lawyer would tell you...just don't say anything. They have nothing unless you give it to them. Burden of proof is on them.


This is the most correct explanation I have seen anyone post on a forum regarding legal matters (that didn't actually involve lawyers), ever. The onus is on the accuser.


----------



## padman

DimmyK said:


> Thanks, fellas. Just what I thought. Ordered a Seasonic PRIME PX-850 to go with the shiny new 3090. I'll be joining the club soon.


My 2-year-old Prime TX-750 can't handle the 3090 Strix OC @ 500W BIOS + 10700K @ 5.2GHz OC while gaming; the PC shuts down randomly during Watch Dogs: Legion. Also waiting for a Seasonic Prime TX-1000 now.


----------



## mirkendargen

LVNeptune said:


> This is the most correct explanation I have seen anyone post on a forum regarding legal matters (that didn't actually involved lawyers), ever. The onus is on the accuser.


Sure a company denying you your warranty would be breaking US law...but they also know your only real recourse is hiring a lawyer and suing...which would cost way more than a 3090. Basically, they're going to get away with it if they want to unless you have money to burn and want to prove a point.


----------



## LVNeptune

mirkendargen said:


> Sure a company denying you your warranty would be breaking US law...but they also know your only real recourse is hiring a lawyer and suing...which would cost way more than a 3090. Basically, they're going to get away with it if they want to unless you have money to burn and want to prove a point.


Unfortunately.


----------



## Falkentyne

LVNeptune said:


> Unfortunately.


I'm warning you
Do not harass me again on evga forums. I _WILL_ block you. You mean nothing to me.


----------



## LVNeptune

Falkentyne said:


> I'm warning you
> Do not harass me again on evga forums. I _WILL_ block you. You mean nothing to me.


Dude. You are a whack job.

Really sorry his drama had to follow suit over here, fellow OCers. For reference:

EVGA GeForce RTX 3090 FTW3 XOC BIOS (Page 36) - EVGA Forums (forums.evga.com)


----------



## Falkentyne

LVNeptune said:


> Dude. You are a whack job.
> 
> Really sorry his drama had to follow suit over here, fellow OCers. For reference:
> 
> EVGA GeForce RTX 3090 FTW3 XOC BIOS (Page 36) - EVGA Forums (forums.evga.com)


You're gone, cya.


----------



## bmgjet

Falkentyne said:


> You're gone, cya.


Lol, it's toxic over there. Between the mods deleting posts they don't like for no reason and fanboys screaming that what you're doing won't work, it's hardly worth wasting your breath there.
They all tried to flame me, saying that conductive paint wouldn't work. Then I showed the proof with before-and-after benchmarks, and my posts got reported and deleted by the mods.
And then my other thread, about the fuses being the cause of the single-red-LED error, with video proof of my card having that fault and then being fixed by bridging the fuse that blew, got deleted as well.


----------



## LVNeptune

bmgjet said:


> Lol, it's toxic over there. Between the mods deleting posts they don't like for no reason and fanboys screaming that what you're doing won't work, it's hardly worth wasting your breath there.
> They all tried to flame me, saying that conductive paint wouldn't work. Then I showed the proof with before-and-after benchmarks, and my posts got reported and deleted by the mods.
> And then my other thread, about the fuses being the cause of the single-red-LED error, with video proof of my card having that fault and then being fixed by bridging the fuse that blew, got deleted as well.


I personally never said it wouldn't work, just that it isn't optimal. I even linked him over there BACK HERE, to your thread that all three of us were in, where someone else actually broke down why conductive paint was a bad idea. It's not a matter of toxicity; he just went ballistic when I wasn't even talking to/about him.


----------



## Sheyster

I'm in line for an EVGA FTW3 Ultra and that thread over there has convinced me to pass on the card. MSI Trio or ASUS Strix inbound one of these days.


----------



## KANG_VX

Passive cooling on my GPU backplate: aluminum heatsinks, cheap but they work great.

Just 1-2 USD.


----------



## long2905

KANG_VX said:


> Passive cooling on my GPU backplate: aluminum heatsinks, cheap but they work great.
> Just 1-2 USD.


Any obvious improvement? My card's backplate gets really hot, and this looks promising.


----------



## Sky3900

LVNeptune said:


> I personally never said it wouldn't work just that it isn't optimal. I even linked him over there BACK HERE to the thread all three of us were in of yours where someone else actually broke down why conductive paint was a bad idea. It's not a matter of toxicity he just went ballistic when I wasn't even talking to/about him.


I would agree that solder is the proper way to go if you have the equipment and skills to do it without much risk to your card. That said, I personally don't see a huge issue with using silver paint if done properly. I mean the MG stuff specifically says it is for PCB traces and provides resistance data.


----------



## LVNeptune

Sheyster said:


> I'm in line for an EVGA FTW3 Ultra and that thread over there has convinced me to pass on the card. MSI Trio or ASUS Strix inbound one of these days.


Yea...I've had weird random driver crashing and random startup hard locks. Others are having some more serious issues. I had ZERO problems with my FE. Sold it the other day. Have a new 3090 FE coming tomorrow. Going to return my 3090 FTW3 Ultra and call it a day.


----------



## LVNeptune

KANG_VX said:


> Passive cooling on my GPU backplate
> Aluminum heatsinks, cheap but they work great
> View attachment 2465057
> 
> 
> View attachment 2465058
> 
> 
> just 1-2 usd


This is great! Paint them black to match and you'll be good to go.


----------



## KANG_VX

long2905 said:


> any obvious improvement? my card backplate does get really hot and this looks promising


Yes, it works great; my backplate doesn't get hot anymore. Don't forget to use a thermal pad or thermal paste underneath it.


----------



## rawsome

KANG_VX said:


> Passive cooling on my GPU backplate
> Aluminum heatsinks, cheap but they work great
> View attachment 2465057
> 
> 
> View attachment 2465058
> 
> 
> just 1-2 usd


Dude, I did the same on my Ventus.

Backplate temp reduced from 54/56 to 48/49 (measured by an external sensor on the side, not the center, so it may not be that meaningful). No change on core temp, aside from the fact that the card needs much longer to heat up when benchmarking.
But I had to use ridiculous amounts of silicone pads because the backplate of the Ventus was convex near the core. I've sold that card now and haven't tried it on the Trio X yet; I'll wait until the waterblock arrives before giving it another try, as it comes with its own backplate.

I wonder if full-copper variants of these heatsinks are available. These are the biggest I have found; I'm just not sure if 200x80 copper is better than 300x110 aluminum. I also don't like the orientation of the fins.

Ah, and you can indeed put fans on that heatsink for even better cooling, but I did not test temps.


----------



## GTANY

For the owners of the RTX 3090 Strix Bykski waterblock, what is the thermal pads thickness you used ? And did you use it on coils too ?


----------



## Sky3900

rawsome said:


> Dude, I did the same on my Ventus.
> View attachment 2465059
> 
> View attachment 2465060
> 
> 
> Backplate temp reduced from 54/56 to 48/49 (measured by an external sensor on the side, not the center, so it may not be that meaningful). No change on core temp, aside from the fact that the card needs much longer to heat up when benchmarking.
> But I had to use ridiculous amounts of silicone pads because the backplate of the Ventus was convex near the core. I've sold that card now and haven't tried it on the Trio X yet; I'll wait until the waterblock arrives before giving it another try, as it comes with its own backplate.
> 
> I wonder if full-copper variants of these heatsinks are available. These are the biggest I have found; I'm just not sure if 200x80 copper is better than 300x110 aluminum. I also don't like the orientation of the fins.
> 
> Ah, and you can indeed put fans on that heatsink for even better cooling, but I did not test temps.


LOL, holy **** dude! Nice back plate, I may have to do something like this on my FE with a fan!


----------



## defiledge

Which BIOS should I flash, Aorus Master or Gaming OC? Also, do any of the 3x8-pin BIOSes work on 2x8-pin cards?


----------



## Sync0r

defiledge said:


> Which BIOS should I flash, Aorus Master or Gaming OC? Also, do any of the 3x8-pin BIOSes work on 2x8-pin cards?


Here you go:
2x8-pin, no shunt = Gaming OC BIOS
2x8-pin with shunt = Strix or EVGA XOC BIOS

Which card do you have?


----------



## Sync0r

Deleted


----------



## Spiriva

defiledge said:


> Which BIOS should I flash, Aorus Master or Gaming OC? Also, do any of the 3x8-pin BIOSes work on 2x8-pin cards?


For me, the Aorus Master gave a better overclock than the Gaming OC BIOS.
Try both and see if there is any diff for you.


----------



## defiledge

Sync0r said:


> Here you go:
> 2x8pin no shunt = gaming oc bios
> 2x8pin with shunt = strix or evga xoc bios
> 
> Which card do you have?


Why do you flash a 3x8-pin BIOS when shunted?


----------



## long2905

Spiriva said:


> For me, the Aorus Master gave a better overclock than the Gaming OC BIOS.
> Try both and see if there is any diff for you.


Can you elaborate? What card/cooling method/ core clock are you getting with these vbioses?


----------



## Foxrun

LVNeptune said:


> Yea...I've had weird random driver crashing and random startup hard locks. Others are having some more serious issues. I had ZERO problems with my FE. Sold it the other day. Have a new 3090 FE coming tomorrow. Going to return my 3090 FTW3 Ultra and call it a day.


I switched my FE out yesterday for the FTW3 Ultra, and so far no issues. I am getting much higher clocks, which are being sustained at a better rate than on the FE.


----------



## HyperMatrix

I find it amusing to see people associate the results of silicon lottery with card quality.


----------



## Spiriva

long2905 said:


> Can you elaborate? What card/cooling method/ core clock are you getting with these vbioses?


PNY 3090, EK block, 2x360 rads, push/pull. Max boost 2205MHz; while gaming it sits around 2150-2190MHz depending on the title.

With the Aorus Master BIOS I could get it stable at 2190MHz while gaming with a memory OC of +1300; with the Gigabyte Gaming OC BIOS it was stable at 2145MHz, but the memory would not go over +650.


----------



## Sky3900

HyperMatrix said:


> I find it amusing to see people associate the results of silicon lottery with card quality.


Yep, I had the exact opposite experience that @Foxrun mentioned. FE performed slightly better than the FTW3 Ultra.


----------



## Nizzen

Falkentyne said:


> I'm warning you
> Do not harass me again on evga forums. I _WILL_ block you. You mean nothing to me.


----------



## changboy

Got my EVGA 3090 Ultra last week. I was on an AMD card, so I uninstalled the AMD driver with DDU before installing the new card, and on first boot I installed the NVIDIA driver. All my games run fine except Far Cry 5.
When I start Far Cry 5, within 30 seconds my PC bluescreens with "IRQL not less or equal", and I can't find why. My old AMD card didn't do this; does anyone know how to resolve it?

I've played many hours in other games and all is fine. Just for The Division 2, I need to stop the ROG Live Service for my ASUS motherboard lighting, because Easy Anti-Cheat stops the game at start and shows me that error. Some kind of weird bug with Ubisoft titles.

The Far Cry 5 BSOD could be a driver issue, but all the other games run well. I have checked that the Windows files are OK, and my memory test is OK. I'm sure it's a simple fix in the game folder or PC config... I don't know.


----------



## long2905

changboy said:


> Got my EVGA 3090 Ultra last week. I was on an AMD card, so I uninstalled the AMD driver with DDU before installing the new card, and on first boot I installed the NVIDIA driver. All my games run fine except Far Cry 5.
> When I start Far Cry 5, within 30 seconds my PC bluescreens with "IRQL not less or equal", and I can't find why. My old AMD card didn't do this; does anyone know how to resolve it?
> 
> I've played many hours in other games and all is fine. Just for The Division 2, I need to stop the ROG Live Service for my ASUS motherboard lighting, because Easy Anti-Cheat stops the game at start and shows me that error. Some kind of weird bug with Ubisoft titles.
> 
> The Far Cry 5 BSOD could be a driver issue, but all the other games run well. I have checked that the Windows files are OK, and my memory test is OK. I'm sure it's a simple fix in the game folder or PC config... I don't know.


have you tried to run the game at stock settings with no oc whatsoever?


----------



## pat182

changboy said:


> Got my EVGA 3090 Ultra last week. I was on an AMD card, so I uninstalled the AMD driver with DDU before installing the new card, and on first boot I installed the NVIDIA driver. All my games run fine except Far Cry 5.
> When I start Far Cry 5, within 30 seconds my PC bluescreens with "IRQL not less or equal", and I can't find why. My old AMD card didn't do this; does anyone know how to resolve it?
> 
> I've played many hours in other games and all is fine. Just for The Division 2, I need to stop the ROG Live Service for my ASUS motherboard lighting, because Easy Anti-Cheat stops the game at start and shows me that error. Some kind of weird bug with Ubisoft titles.
> 
> The Far Cry 5 BSOD could be a driver issue, but all the other games run well. I have checked that the Windows files are OK, and my memory test is OK. I'm sure it's a simple fix in the game folder or PC config... I don't know.


FC5 is almost always CPU-bound, so your CPU OC probably isn't stable.


----------



## Baasha

Was playing AC Valhalla last night (4K max settings) and there were areas where the FPS dipped into the 30's!  

Tell me again why SLI/MGPU is not needed?

It's the same experience all over again. AC Odyssey gave ~ 40 - 45fps with a single Titan RTX at 4K maxed and AC Valhalla is no different with the RTX 3090. Yes, at certain times/scenes the FPS goes up to 90+ but it also drops a lot in other areas.

In other words, the RTX 3090 is NOT even a 4K 60fps card! Let alone an '8K gaming' card.

I'm sick and tired of these worthless developers and morons at Nvidia not supporting technologies that will let us, the customers, enjoy games at a solid 4K 60fps.  

With MGPU properly implemented, AC Valhalla should be at a constant 4K 60fps or more in every single area/part of the game.

WatchDogs Legion is another stinking pile of garbage as is AC Valhalla. Ubisoft needs to get their act together and optimize their games and/or implement MGPU natively.

Maybe it's time to find another hobby. Sigh... 😥


----------



## dante`afk

Backside of my FTW3 Ultra. I just wanted to do the PCIe shunt so the BIOS will actually go to 500 W. I'll do a full mod once I get my waterblock from Optimus.

Is this the correct shunt?











@Baasha and the game doesn't even look good, IMO; it could also use some optimization, judging by the reviews.



/edit, done:


----------



## mirkendargen

GTANY said:


> For the owners of the RTX 3090 Strix Bykski waterblock, what is the thermal pads thickness you used ? And did you use it on coils too ?


I used the ones it came with, but I also put a RAM block on the backplate to actively cool it, so I wasn't worried about having the absolute highest quality pads.

I just checked a scrap of it and it looks like 1.5mm (compared to an Arctic 0.5mm), but I don't have calipers to be certain.


----------



## HyperMatrix

Baasha said:


> Was playing AC Valhalla last night (4K max settings) and there were areas where the FPS dipped into the 30's!
> 
> Tell me again why SLI/MGPU is not needed?
> 
> It's the same experience all over again. AC Odyssey gave ~ 40 - 45fps with a single Titan RTX at 4K maxed and AC Valhalla is no different with the RTX 3090. Yes, at certain times/scenes the FPS goes up to 90+ but it also drops a lot in other areas.
> 
> In other words, the RTX 3090 is NOT even a 4K 60fps card! Let alone an '8K gaming' card.
> 
> I'm sick and tired of these worthless developers and morons at Nvidia not supporting technologies that will let us, the customers, enjoy games at a solid 4K 60fps.
> 
> With MGPU properly implemented, AC Valhalla should be at a constant 4K 60fps or more in every single area/part of the game.
> 
> WatchDogs Legion is another stinking pile of garbage as is AC Valhalla. Ubisoft needs to get their act together and optimize their games and/or implement MGPU natively.
> 
> May be it's time to find another hobby. Sigh... 😥


To be fair, Ubisoft spends half of its resources on social justice integration. I don’t think it matters what cards Nvidia or AMD put out. Ubisoft will create a game that’s just unoptimized enough to run like rubbish. 

Ubisoft and GPU power are like governments and tax money. No matter how much you give them, they’ll always find a way to come up short.

Personally all I’m looking forward to is seeing if I can play Cyberpunk 2077 at 4K60 with RTX/DLSS. Can’t take the performance of Ubisoft games as a measure of graphics power. Even watch dogs with DLSS quality drops into the 40s at times with RTX on medium. That’s really just unacceptable in so many ways.


----------



## mirkendargen

HyperMatrix said:


> To be fair, Ubisoft spends half of its resources on social justice integration. I don’t think it matters what cards Nvidia or AMD put out. Ubisoft will create a game that’s just unoptimized enough to run like rubbish.
> 
> Ubisoft and GPU power are like governments and tax money. No matter how much you give them, they’ll always find a way to come up short.
> 
> Personally all I’m looking forward to is seeing if I can play Cyberpunk 2077 at 4K60 with RTX/DLSS. Can’t take the performance of Ubisoft games as a measure of graphics power. Even watch dogs with DLSS quality drops into the 40s at times with RTX on medium. That’s really just unacceptable in so many ways.


I don't have AC: Valhalla yet, but a friend who does (and has a 3090) says the benchmark has awful frame pacing (200ms+ frametime spikes) while not maxing the GPU or CPU. It's just dogshit coding that needs to be patched, not GPU makers missing the mark.


----------



## Zurv

Bitspower 3090 FE blocks finally shipped today (from TW to Brooklyn).


----------



## Jpmboy

Zurv said:


> Bitspower 3090 FE blocks finally shipped today (from tw to Brooklyn)


I've got nothing from Optimus yet...


----------



## NBrock

I want to see what Optimus has in store for the 3090 FE. Getting tired of waiting for either them or EK lol.


----------



## Twintale

I don't even know when to expect even a single company to make a block for the Palit GameRock card I own. EK and Alphacool will probably offer one "if demand is good"; Bykski gave me a generic answer: "We do have plans for RTX 30 series cards".


----------



## Zurv

NBrock said:


> I want to see what Optimus has in store for the 3090 FE. Getting tired of waiting for either them or EK lol.


I'm not clear on whether they have an FE to test. I did offer them one of mine, but they didn't respond (it was on an email thread I already had with them, i.e. not a random message to them).
They also signaled that they might not even make one.
It's super odd that there seem to be no FE blocks out (other than the Bitspower one that just shipped today).

I'm also waiting on the Optimus FTW block, but they don't have a good track record of delivering on time. They also never seem to learn, always going with optimistic timelines they never hit... with anything.


----------



## kx11

Twintale said:


> I don't even know when to expect at least a single company to make a block for Palit Gamerock card which I own: EK and Alphacool will probably offer one "if demand is good", Bykski gave me overall answer which was: "We do have plans for RTX 30 series cards".


Ask their support team; they might help you with that.


----------



## changboy

Thanks for your input in resolving my problem. So:

I disabled every overclock on my system, tried starting Far Cry 5, and it works, lol. The weird thing is that I did around 35 Blu-ray encodes last month without a problem, with my 10980XE @ 4.9 GHz.
Then I began overclocking again step by step, starting with the XMP setting: works fine.
- Then I OC'd my CPU to 4.8 GHz, adjusting only the manual CPU voltage and VCCIN and leaving everything else on auto, and Far Cry 5 works fine.
- Then I OC'd my EVGA 3090, and it's working fine.
- Next I'll go back to 4.9 GHz, and I'll tell you if that works.

I think maybe two settings in my BIOS were in conflict and Far Cry 5 didn't like it. Before, I had fault management disabled; now I've put it on auto, along with many other settings, and temps seem better. Sometimes too much is like not enough. Anyway, tomorrow I'll try 4.9 GHz again and give you feedback.


----------



## GTANY

mirkendargen said:


> I used the ones it came with, but I also put a RAM block on the backplate to actively cool it so I wasn't worried about having the absolute highest quality pads..
> 
> I just checked a scrap of it and it looks like 1.5mm (compared it to an Arctic 0.5mm) but I don't have calipers to be certain.


OK, thank you.


----------



## Gryzor

Hello, I have a question I couldn't answer by searching. Can I put my 3090 in the second PCIe slot while maintaining x16? I think it's better for cooling (I have three 120 mm fans just below it), and the hot air would no longer exhaust directly onto the DDR4 RAM. Any feedback, please? My board is the ASUS ROG Crosshair VIII Hero (X570). Thanks.


----------



## dante`afk

Bykski has had a 3090 FE block since release? I've had mine here for over a month.


----------



## rawsome

Gryzor said:


> Hello, I have a question I couldn't answer by searching. Can I put my 3090 in the second PCIe slot while maintaining x16? I think it's better for cooling (I have three 120 mm fans just below it), and the hot air would no longer exhaust directly onto the DDR4 RAM. Any feedback, please? My board is the ASUS ROG Crosshair VIII Hero (X570). Thanks.


You can read that in the motherboard specs.
They say: 2x PCIe 4.0 x16 SafeSlots (x16, or x8/x8).
I guess that means yes.

Or just try it and look at CPU-Z:









If it says link width x16, you're fine.
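If you'd rather not eyeball CPU-Z, NVIDIA drivers can also report the negotiated link width from the command line via nvidia-smi. A small sketch; the query field name is my assumption based on `nvidia-smi --help-query-gpu`, and the `sample` parameter is just a test hook I added:

```python
import subprocess
from typing import Optional

def current_link_width(sample: Optional[str] = None) -> int:
    """Return the current PCIe link width (16 for x16, 8 for x8).

    Parses `sample` if given (handy for testing without a GPU);
    otherwise queries nvidia-smi directly.
    """
    if sample is None:
        sample = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=pcie.link.width.current",
             "--format=csv,noheader"],
            text=True,
        )
    return int(sample.strip().splitlines()[0])
```

Note that the current width can read lower than the slot's maximum while the card idles in a power-saving state; the `pcie.link.width.max` field should show the negotiated maximum instead.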


----------



## derthballs

Hey guys, new to the forum. Is there a BIOS I can flash on my PNY 3090 to improve its power limit? It's a 2x8-pin PCIe card.


----------



## martinhal

Does anyone know what delta I should expect between GPU and water temps on a 390 W 3090?


----------



## Falkentyne

derthballs said:


> Hey guys, new to the forum. Is there a bios I can update my PNY 3090 with to improve to power? Its a 2 pin pci-e card.


You can flash the ASUS TUF OC BIOS. This should give you a slightly higher clock speed or possibly a higher power limit, and/or allow you to push the power limit slider in MSI Afterburner past 100%, but I'm really not sure.
Someone may have uploaded that BIOS in one of the early posts in this thread; I remember seeing it there. It may also be in TechPowerUp's BIOS archive.
(I know one of the Gigabyte BIOSes uploaded there was corrupted.)

GPU-Z will show the default and max power limits in its NVIDIA BIOS section.


----------



## derthballs

thank you!


----------



## Falkentyne

derthballs said:


> thank you!


In the future, if you are interested:

The best way to get a higher power limit is a shunt mod. One option is shunt stacking: solder a 15 mOhm, 2512-size current-sense shunt (a safe value) on top of each of the six 2512-size shunt resistors marked 005 or R005 (if there are a few smaller R005/005 shunts, ignore those). Alternatively, use MG Chemicals silver conductive paint to attach the 15 mOhm 2512 shunts to the originals: paint a small amount over the silver edges of the original shunts, covering the edges, IF those edges are flat/flush with the black middle. (Once the paint has dried and formed a solid bond, apply a small amount of liquid electrical tape around the edges to make sure the stacked shunt stays fully secured.) If the original shunts have edges recessed below the black middle, it's best to just solder, OR you can paint over the entire shunt (bridging the two silver edges) with the same conductive paint.

Use a wooden toothpick to apply the paint.






MG Chemicals 842AR-15ML Silver Print (Conductive Paint), 12 ml
www.amazon.com





That's because if the original shunts are not fully flat, you would have to add extra paint to compensate for the gap, which adds a little unwanted resistance. Several people in the shunt mod thread simply painted over the original shunt because of the uneven height, and that worked extremely well. The MG Chemicals paint acts like a 15 mOhm stacked resistor.

You can also desolder the original shunts and solder on 3 mOhm new shunts (replacing the old ones) but this is MUCH harder to do and has a much higher risk of destroying something if you aren't skilled, so I suggest stacking by soldering, or painting over them fully with the MG chemicals silver paint.

Protip: If you are stacking shunts or if you are painting over the shunt to create a 15 mOhm "shunt" connection, please scrape off any conformal coating from the edges of the original shunts with a small flat blade screwdriver, before painting or stacking. 
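For reference, the arithmetic behind shunt stacking is just two resistors in parallel: the controller computes current from the voltage drop across what it believes is the stock shunt value, so lowering the effective resistance scales the real power by the same factor. A minimal sketch in Python, assuming a 5 mOhm stock shunt (check the markings on your own card):

```python
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two stacked (parallel) shunts, in mOhm."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def power_scale(r_stock_mohm: float, r_eff_mohm: float) -> float:
    """The controller still assumes r_stock, so real power = reported * scale."""
    return r_stock_mohm / r_eff_mohm

# A 15 mOhm shunt stacked on a stock 5 mOhm shunt:
r_eff = parallel(5.0, 15.0)      # 3.75 mOhm
scale = power_scale(5.0, r_eff)  # ~1.33x, so a 350 W limit behaves like ~467 W
```

This matches the rule of thumb that stacking (or painting over) a stock shunt with a 15 mOhm part buys roughly a third more power headroom.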

Calculator here:

GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
github.com

Two threads for your pleasure:

RTX 3090 Founders Edition working shunt mod
I posted this in the RTX 3090 owners club thread and I'm posting this here for more visibility to anyone who might stumble across this: Edit: Added note about 6th shunt resistor & updated images. Tonight I successfully shunt modded my RTX 3090 FE. It's a pretty straightforward process, just...
www.overclock.net

How To: Easy mode Shut Modding.
Heres some basic instructions for a easy to do and easy to remove shunt mod method. Tools Needed: Small phillips screwdriver. Tweesers Materials: Conductive paint. (Stuff iv used is water based and very easy to remove with warm water and cotton swab), But a silver type one would work even...
www.overclock.net

(Note: I recommend MG Chemicals silver conductive paint rather than the graphite-based paint in the yellow tube in the original post! You can even skip buying new shunts and just paint over the entire shunt resistor with the MG silver paint, which acts like a 15 mOhm shunt!)


----------



## derthballs

Falkentyne said:


> You can flash the Asus ROG Tuf OC Bios. This should give you a slightly higher clock speed or possibly a higher power limit, and/or allow you to adjust the power limit slider in MSI Afterburner past 100%, but I'm really not sure.
> Someone may have uploaded this BIOS in one of the early posts on this thread, because I remember seeing it posted there. It may also be on techpowerup's bios archive.
> (I know that one of the Gigabyte Bioses that were uploaded there was corrupted).
> 
> GPU_Z will show the default and max power limits in the Nvidia Bios menu section.


This worked, thank you very much.


----------



## Spiriva

derthballs said:


> Hey guys, new to the forum. Is there a bios I can update my PNY 3090 with to improve to power? Its a 2 pin pci-e card.


Try one of the Gigabyte BIOSes, "Gaming OC" or "Aorus Master". They are both 390 W BIOSes and currently the best for 2x8-pin cards.









Gigabyte RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com

or

Gigabyte RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com

or

Gigabyte RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com





The one at the very bottom has a bit more aggressive fan curve.


----------



## derthballs

Spiriva said:


> Try one of the Gigabyte BIOSes, "Gaming OC" or "Aorus Master". They are both 390 W BIOSes and currently the best for 2x8-pin cards.
> 
> Gigabyte RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com
> 
> or
> 
> Gigabyte RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com
> 
> or
> 
> Gigabyte RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com
> 
> The one at the very bottom has a bit more aggressive fan curve.


Thank you. I've got it working on the TUF OC one at the moment; is there any benefit to switching to one of these?


----------



## Spiriva

derthballs said:


> Thank you. I've got it working on the TUF OC one at the moment; is there any benefit to switching to one of these?


The TUF BIOS is 375 W and these are 390 W.

I use the bottom one of the BIOSes I linked on my PNY 3090.


----------



## Jpmboy

NBrock said:


> I want to see what Optimus has in store for the 3090 FE. Getting tired of waiting for either them or EK lol.


We're always tired of waiting on these guys; I've never understood it. You've got a business and you're slow to bring your products out. It's like releasing aftermarket go-fast parts a decade after the car comes out. They'd better be phenomenal parts!


----------



## smonkie

In general terms, how much performance in real scenarios (games) would a 3x8-pin/480 W 3090 have over a regular 2x8-pin/350 W 3090 with a similar air-cooled configuration?


----------



## derthballs

Spiriva said:


> TUF is 375w and these are 390w.
> 
> I use the bottom one of the bioses I linked, on my PNY 3090.


Is that noisy? I used the middle one; it seems to max out at about 81C. Out of interest, what kind of overclock do you get on your core and memory?

It's transformed the card, though. I've put +100 MHz on the core and it's boosting over 2000 in game now; it seemed so gimped at stock that I was considering returning it.

And thank you for the advice, and to the others too. I don't think I'm brave enough to do any soldering mods to it; I nearly had a heart attack after flashing when my screen wasn't working, until I unplugged the DisplayPort cable and put it in another socket.


----------



## Spiriva

derthballs said:


> Is that noisy? I used the middle one; it seems to max out at about 81C. Out of interest, what kind of overclock do you get on your core and memory?
> 
> It's transformed the card, though. I've put +100 MHz on the core and it's boosting over 2000 in game now; it seemed so gimped at stock that I was considering returning it.
> 
> And thank you for the advice, and to the others too. I don't think I'm brave enough to do any soldering mods to it; I nearly had a heart attack after flashing when my screen wasn't working, until I unplugged the DisplayPort cable and put it in another socket.


I've got an EK waterblock on my PNY 3090, so I don't know how the fans sound with any of the BIOSes.

My card boosts to around 2145-2190 MHz depending on the title. Max temp during games is around 42-44C (ambient around 18-19C), with 2x360 rads and push/pull Noctua NF-A12x25 fans.
During benchmarks I put the memory at +1300 in MSI Afterburner; during gaming I set it to +500-600.

On the PNY 3090 with these BIOSes, the DisplayPort farthest to the right (looking at the card from the back) will always work.





















This is from Watch Dogs Legion. It doesn't stay at 2205 MHz very long, though; it drops to 2190 MHz in this particular title.


----------



## Thanh Nguyen

smonkie said:


> In general terms, how much performance in real scenarios (games) would a 3x8/480W 3090 have over a regular 2x8 3090/350W with similar air cooled configurations?


It depends on the resolution. I see 5-10 fps at 1440p between 1935 MHz and 2130 MHz, with some memory OC too. If the 2x8-pin 3090/350 W can hold 2100 MHz, you won't see a difference.


----------



## Thanh Nguyen

Spiriva said:


> I've got an EK waterblock on my PNY 3090, so I don't know how the fans sound with any of the BIOSes.
> 
> My card boosts to around 2145-2190 MHz depending on the title. Max temp during games is around 42-44C (ambient around 18-19C), with 2x360 rads and push/pull Noctua NF-A12x25 fans.
> During benchmarks I put the memory at +1300 in MSI Afterburner; during gaming I set it to +500-600.
> 
> On the PNY 3090 with these BIOSes, the DisplayPort farthest to the right (looking at the card from the back) will always work.
> 
> This is from Watch Dogs Legion. It doesn't stay at 2205 MHz very long, though; it drops to 2190 MHz in this particular title.


42-44C with 18-19C ambient is not great. Do you use liquid metal?


----------



## Spiriva

Thanh Nguyen said:


> 42-44C with 18-19C ambient is not great. Do you use liquid metal?


No, I used some Noctua paste I had at home. I run my fans at like 800 rpm, though; I prefer the silence to keeping the card colder.


----------



## Falkentyne

smonkie said:


> In general terms, how much performance in real scenarios (games) would a 3x8/480W 3090 have over a regular 2x8 3090/350W with similar air cooled configurations?


I like how my post vanished instantly.
Anyway, no card comes with a stock 480 W VBIOS unless it's one of the HOF editions, and the Founders 3090 is 350 W stock, 400 W at 114% PL.

Since 320 W to 400 W is already worth at least 5% (I didn't measure), I'm guessing 350 W to 480 W is worth 10-15% performance.
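As a rough sanity check on that guess: near the top of the V/F curve, power grows roughly with the cube of clock speed (frequency times voltage squared, with voltage roughly tracking frequency), so performance scales very loosely with the cube root of the power limit. This is a back-of-the-envelope rule of thumb, not a measurement:

```python
def perf_gain(p_from_w: float, p_to_w: float) -> float:
    """Rough performance ratio from raising the power limit,
    assuming perf scales ~ P^(1/3) near the top of the V/F curve."""
    return (p_to_w / p_from_w) ** (1.0 / 3.0)

gain_350_to_480 = perf_gain(350, 480)  # ~1.11, i.e. roughly +11%
gain_320_to_400 = perf_gain(320, 400)  # ~1.08, consistent with "at least 5%"
```

Which lands inside the 10-15% guess above, though real cards will deviate once they hit voltage or thermal limits instead of the power limit.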


----------



## derthballs

Spiriva said:


> I've got an EK waterblock on my PNY 3090, so I don't know how the fans sound with any of the BIOSes.
> 
> My card boosts to around 2145-2190 MHz depending on the title. Max temp during games is around 42-44C (ambient around 18-19C), with 2x360 rads and push/pull Noctua NF-A12x25 fans.
> During benchmarks I put the memory at +1300 in MSI Afterburner; during gaming I set it to +500-600.
> 
> On the PNY 3090 with these BIOSes, the DisplayPort farthest to the right (looking at the card from the back) will always work.


Could you tell me how to unlock core voltage on it? I've tried the four settings in Afterburner and it remains grayed out.


----------



## Thanh Nguyen

derthballs said:


> Could you tell me how to unlock core voltage on it? I've tried the four settings in Afterburner and it remains grayed out.


Do you use the beta version of Afterburner? I use the beta and it has the core voltage unlock, but the release version does not.


----------



## Spiriva

derthballs said:


> Could you tell me how to unlock core voltage on it? I've tried the four settings in Afterburner and it remains grayed out.












Check - Unlock voltage control
Check - Unlock voltage monitoring


----------



## derthballs

Thanh Nguyen said:


> Do you use beta version of afterburn? I use the beta and it has core voltage unlock but the other version does not.


I thought I had, but apparently not. It's working now, thank you again. Does the voltage slider make much difference? I've not seen any benefit trying it before on my other cards.


----------



## Sync0r

Gryzor said:


> Hello, I have a question I couldn't answer by searching. Can I put my 3090 in the second PCIe slot while maintaining x16? I think it's better for cooling (I have three 120 mm fans just below it), and the hot air would no longer exhaust directly onto the DDR4 RAM. Any feedback, please? My board is the ASUS ROG Crosshair VIII Hero (X570). Thanks.


Just look in the slot: if you see metal pins running the full length of the slot, it's x16; if you only see half, it's x8.


----------



## Spiriva

derthballs said:


> I thought I had, but apparently not. It's working now, thank you again. Does the voltage slider make much difference? I've not seen any benefit trying it before on my other cards.


Put the power limit to max; the core voltage slider probably doesn't matter. I have mine set to max, but nothing changes if I set it to 0 either.


----------



## mirkendargen

Thanh Nguyen said:


> 42c-44c with 18c-19c ambient is not great. Do you use liquid metal?


The delta between your GPU and your coolant (not ambient) is what tells the story about the performance of your block/TIM. The delta between your coolant and ambient tells the story of the performance of your rads/fans.
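To put a rough number on the block/TIM side: the core-to-coolant delta is just dissipated power times the combined thermal resistance of die, TIM, and cold plate, dT = P * R_th. A sketch; the ~0.04 C/W figure is my assumption for a decent full-cover block, not a published spec:

```python
def gpu_coolant_delta(power_w: float, r_th_c_per_w: float) -> float:
    """Expected GPU-core-to-coolant delta in C, from dT = P * R_th."""
    return power_w * r_th_c_per_w

# Assumed ~0.04 C/W for block + TIM (varies per card, paste, and mount):
delta = gpu_coolant_delta(390, 0.04)  # ~15.6 C at a 390 W load
```

So for the 390 W question earlier, a delta in the low-to-mid teens over coolant temperature would be unsurprising; liquid metal or a better mount mostly shrinks R_th.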



Falkentyne said:


> I like how my post vanished instantly.
> Anyway, no card comes with a stock 480W vbios anyway unless it's one of the HOF editions, and Founders 3090 is stock 350W, 400W for 114% PL.
> 
> Since 320w to 400W is already at least 5% (didn't measure), I'm guessing 350W to 480W is between 10 to 15% performance.


The Strix comes with a 480W stock BIOS. It's not manufacturer-limited at all (well, no more than all 30 series cards are right now...) like the HOF/Kingpin.


----------



## Falkentyne

derthballs said:


> I thought I had but seems not, working now, thank you again. Does that make much difference using the voltage slider? Ive not seen any benefit trying it before on my other cards.





Spiriva said:


> Put the power limit to max; the core voltage slider probably doesn't matter. I have mine set to max, but nothing changes if I set it to 0 either.


The voltage slider unlocks one more voltage "tier", the one connected to the 1.10v point, with 1.10v maximum. It's unclear when it actually uses that tier, though. It seems to be related to temps or power draw, assuming you are NOT at the power limit, and it doesn't quite make sense. It could be related to what the GPU thinks it "needs" to be stable on that tier. For example, at a 530W TDP, looping Heaven, it sat at 1.087v at 64C at about 480W of power, then decided to move to 1.10v at 68C for some reason. The chip basically went nuclear at that point and became uncoolable on air (staying at 1.10v until it reached 75C, when I stopped the benchmark).

I also noticed that if the next voltage "tier" gets activated with the slider, the clock speed will _also_ increase by 15 MHz (unless it dropped 15 MHz from temp scaling).

With the voltage slider at 0%, it stayed at 1.075v.

The problem is, sometimes you slide it to 100% and it still remains on the "previous" 1.075v tier and doesn't change...
I "believe" this is related to VREL, meaning the voltage is at the highest MHz point that the core can boost to; any voltage point along that tier will trigger VREL if you are not at the power limit. I am not sure how vOP gets triggered without VREL.

This is assuming there are multiple voltage points on that tier. If there is only one voltage point on a tier, you will always get VREL if the GPU is at full clocks and not at vOP.
vOP is the max voltage point being used on the last tier (the tier linked to either 1.10v or 1.075v), at whatever voltage point the video card decides to use. That tier can trigger VREL and vOP at the same time.

Example if a mhz tier has 1.050v, 1.062v, 1.075v and it's on 1.050v, I am (guessing) that you won't get VREL, but if it is at 1.075v, you will get VREL and vOP if the next tier can't be used. Perhaps I'm wrong and it's the previous tier that can trigger vrel and not vop (example: 1.050v).

Looking at a quick test, the 2070 MHz voltage points are 1.062v, 1.068v, 1.075v and 1.081v. Right now, 1.062v is selected by the card itself and is triggering VREL. Attempting to lock 1.068v, 1.075v or 1.081v by pressing "L" does absolutely nothing.

HOWEVER, GET THIS:
if I then select 1.062v and set the voltage slider to 100%, it becomes IDLE.

Look.


















OK, so that answers that.
Idle means another voltage tier is available but the card doesn't "need" to use it.

VOP means it's at the max operating voltage allowed and a tier increase isn't possible (e.g. 1.10v would always be VOP). So vOP only happens at the last point of the last available tier.
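The tier behavior described above can be summarized in a toy model. To be clear, the curve points and selection logic here are invented for illustration and are not NVIDIA's actual boost algorithm:

```python
# Hypothetical V/F points: (MHz, volts), ordered low to high. The last point
# (1.100 V) is only reachable when the Afterburner voltage slider is maxed.
CURVE = [(2040, 1.050), (2055, 1.062), (2070, 1.075), (2085, 1.100)]

def boost(power_draw_w: float, power_limit_w: float, slider_maxed: bool):
    """Return (MHz, limit reason) for this toy model of the boost limiter."""
    allowed = CURVE if slider_maxed else [p for p in CURVE if p[1] < 1.100]
    if power_draw_w >= power_limit_w:
        return allowed[0][0], "PWR"   # power limited: fall down the curve
    mhz, volts = allowed[-1]
    # VOP: at the absolute max operating voltage; VREL: at the highest
    # point currently allowed, with a higher tier still locked out.
    reason = "VOP" if volts >= CURVE[-1][1] else "VREL"
    return mhz, reason

# boost(400, 450, False) -> (2070, "VREL"): capped below the 1.100 V tier
# boost(400, 450, True)  -> (2085, "VOP"): +15 MHz once the tier unlocks
# boost(480, 450, True)  -> power limited regardless of the slider
```

Note the +15 MHz step between the 1.075v and 1.100v points in this model mirrors the observation above that activating the extra tier also raises the clock by 15 MHz.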


----------



## Glerox

Baasha said:


> Was playing AC Valhalla last night (4K max settings) and there were areas where the FPS dipped into the 30's!
> 
> Tell me again why SLI/MGPU is not needed?
> 
> It's the same experience all over again. AC Odyssey gave ~ 40 - 45fps with a single Titan RTX at 4K maxed and AC Valhalla is no different with the RTX 3090. Yes, at certain times/scenes the FPS goes up to 90+ but it also drops a lot in other areas.
> 
> In other words, the RTX 3090 is NOT even a 4K 60fps card! Let alone an '8K gaming' card.
> 
> I'm sick and tired of these worthless developers and morons at Nvidia not supporting technologies that will let us, the customers, enjoy games at a solid 4K 60fps.
> 
> With MGPU properly implemented, AC Valhalla should be at a constant 4K 60fps or more in every single area/part of the game.
> 
> WatchDogs Legion is another stinking pile of garbage as is AC Valhalla. Ubisoft needs to get their act together and optimize their games and/or implement MGPU natively.
> 
> May be it's time to find another hobby. Sigh... 😥


I feel you


----------



## bmgjet

On my card with shunt mods, the voltage slider at 100 makes it use the 1100 mV dot, vs. 0 where it only goes up to 1093 mV.
The card never runs at those voltages outside of loading screens, though, since it hits 520 W at 1050 mV in most games, which is where I have my power limit set. And under really heavy loads it's down to 1 V at 520 W draw.


----------



## Falkentyne

bmgjet said:


> On my card with shunt mods, the voltage slider at 100 makes it use the 1100 mV dot, vs. 0 where it only goes up to 1093 mV.
> The card never runs at those voltages outside of loading screens, though, since it hits 520 W at 1050 mV in most games, which is where I have my power limit set. And under really heavy loads it's down to 1 V at 520 W draw.


This is strange.
1.093v and 1.10v should be on the same tier, so 0% should be 1.081v and lower.
Your scenario is impossible unless 1.10v and 1.093v are on different tiers.
Your card should also increase by +15 MHz when 1.10v is selected.

Can you take a screenshot of your V/F graph when this happens at 1.10v and 1.093v?

I'm starting to think the card adjusts the curve (or how many voltage points are on it) whenever it wants to.
Unless it's somehow hardwired to each individual card... I'd like to see your V/F graph.


----------



## Thebc2

Sheyster said:


> I'm in line for an EVGA FTW3 Ultra and that thread over there has convinced me to pass on the card. MSI Trio or ASUS Strix inbound one of these days.



I returned my FTW3 Ultra. I am binning two Strix cards right now. One does +140 core / +900 mem and averages 14.6k in Port Royal; the other does +130 core / +1200 mem but only seems to top out around 14.4-14.5k. Torn, but probably keeping the former. Is there any other good way to bin these two cards besides benchmarking?


Sent from my iPhone using Tapatalk Pro


----------



## mirkendargen

Thebc2 said:


> I returned my FTW3 Ultra. I am binning two Strix cards right now. One does +140 core / +900 mem and averages 14.6k in Port Royal; the other does +130 core / +1200 mem but only seems to top out around 14.4-14.5k. Torn, but probably keeping the former. Is there any other good way to bin these two cards besides benchmarking?
> 
> 
> Sent from my iPhone using Tapatalk Pro


The Metro main menu seems like the ultimate stability test to me: find the highest stable clocks on each card sitting in it for 30 minutes, then bench each card at those clocks.


----------



## MonnieRock

Does this look right for a new FTW3 Ultra 3090 sample leaving the factory, or has some hand soldering been done? I've highlighted the areas with red boxes.


----------



## HyperMatrix

Falkentyne said:


> no card comes with a stock 480W vbios


Hello Falkentyne. I am Asus ROG Strix.


----------



## Falkentyne

HyperMatrix said:


> Hello Falkentyne. I am Asus ROG Strix.


480W at 100% power limit?
So this card doesn't have a power limit slider past 100%? (That's what I meant by "stock", e.g. 400W = 100%, 480W = 116%, etc.)


----------



## HyperMatrix

Falkentyne said:


> 480W at 100% power limit?
> So this card doesn't have a power limit slider past 100% (that's what I meant by "stock"--e.g. 400W=100%, 480W=116%, etc...).


In the words of Jordan Peterson, it’s important to be precise in your speech. Most people reading your comment would take it to mean that there are no cards with a 480W capable bios right out of the box, and that information could be detrimental to the decision making process when it comes to choosing a card. 

The VBIOS is indeed 480W. Just as the EVGA XOC one is 500W. Yes you have to use a slider to choose how much of it you want to use. But on overclock.net I think we’re past the point of wondering if people know what MSI afterburner is. Otherwise if we’re comparing out of the box performance...even a card like the Strix would only be getting 1750-1890 depending on load/temp. And as you should be able to gather from the 226 pages so far in this thread...no one is here for that.


----------



## dante`afk

So my FTW3 Ultra chip is nothing special either.

It's power limited throughout; I need my Optimus block quickly, then the full shunt mod.
The thing gets super hot; I don't need heating in winter.
A run-of-the-mill chip; it can do 150/550 like any other.

450w stock: https://www.3dmark.com/spy/15208406
450w oc: https://www.3dmark.com/spy/15209302
500w oc: https://www.3dmark.com/3dm/52919035?


Windows has not been reinstalled since my Intel > AMD switch. CPU/memory running at stock.


Tomorrow I'll tweak my CPU/memory before I test UV on the GPU.


----------



## HyperMatrix

dante`afk said:


> So my FTW3 Ultra chip is nothing special either.
> 
> It's power limited throughout; I need my Optimus block quickly, then the full shunt mod.
> The thing gets super hot; I don't need heating in winter.
> A run-of-the-mill chip; it can do 150/550 like any other.
> 
> 450w stock: https://www.3dmark.com/spy/15208406
> 450w oc: https://www.3dmark.com/spy/15209302
> 500w oc: https://www.3dmark.com/3dm/52919035?
> 
> 
> Windows has not been reinstalled since my Intel > AMD switch. CPU/memory running at stock.
> 
> 
> Tomorrow I'll tweak my CPU/memory before I test UV on the GPU.


Still better average clocks and temps than mine. I seriously think the paste on my card must be terrible. I can't run higher than 1890-1920MHz at 0.887v for hours without temps outpacing the fans. Even then it still sits at around 65C (+/- a few). Any higher and the heat slowly keeps ramping up, because the fans can't handle it even at 100% speed.


----------



## martinhal

mirkendargen said:


> The delta between your GPU and your coolant (not ambient) is what tells the story about the performance of your block/TIM. The delta between your coolant and ambient tells the story of the performance of your rads/fans.


Any idea what the coolant / GPU delta should be ?


----------



## mirkendargen

martinhal said:


> Any idea what the coolant / GPU delta should be ?


Mine's about 20C when pulling 500W. Dunno if that's good/average/bad.


----------



## Falkentyne

Timespy @ +150 core, +500 RAM, 100% voltage (last voltage tier used: 1.080-1.10v, driver 456.98 hotfix (since the newer drivers are buggy for too many people)
Chip heat going nuclear with the stock FE heatsink and that voltage.



https://www.3dmark.com/3dm/52920622?


----------



## martinhal

mirkendargen said:


> Mine's about 20C when pulling 500W. Dunno if that's good/average/bad.


I am at 15C drawing 390W; I also do not know if that is good or bad.


----------



## KANG_VX

Sorry for my bad English.

My GPU is a reference PCB with 2x8-pin power (Galax SG with a 350 watt power limit).
CPU [email protected] RAM 3600MHz 480mm RAD

- Stock
Port Royal (1,815 MHz, temp 66C, ambient 29C) score 12,600. OC (1,920 MHz, temp 70C, ambient 29C) score 13,500

- Replaced the cooler with a waterblock (Bykski); it can now boost over 2,025 MHz (temp 50C, ambient 29C), score 13,700 (350 watt power cap)

- Shunted 5 resistors with 5 mOhm and the PCIe slot with 15 mOhm (2,115 MHz, temp 51C, ambient 29C); PR score hit 14,300, still hitting the power limit (my PSU's wattage readout indicates the GPU draws around 460 watts)

- Flashed the Aorus Master BIOS (2,145 MHz, temp 52C, ambient 29C); PR score hit 14,823 (GPU drawing 560 watts)

Now real-world gaming: 2,115-2,190 MHz, temp 52C, ambient 29C, coolant temp 34-36C, still hitting the power limit (I think the PCIe slot hits its power limit at a reported 75 watts (actual draw 100 watts); I need lower resistance or a better BIOS to unlock more power headroom)


----------



## Thanh Nguyen

Does Port Royal scale with temp? 2175MHz got a lower score than my previous 2154MHz run.


https://www.3dmark.com/pr/493705


----------



## LVNeptune

Foxrun said:


> I switched my FE out yesterday for the FTW Ultra and so far no issues. I am getting much higher clocks which are being sustained a a better rate than the FE.


Yes, because of the higher wattage. Pay attention to GPU-Z though: you are still getting power limited, and EVGA doesn't know why yet. An FE with a shunt mod should technically perform as well as the FTW3, without the weird issues EVGA seems to have been having.


----------



## Falkentyne

Scored a little higher with +600 memory instead of +500.



https://www.3dmark.com/3dm/52923197?


----------



## kx11

i don't know who this is but i'm beating his 5950x CPU with my 3900xt




https://www.3dmark.com/compare/spy/15210106/spy/15212427


----------



## Nizzen

kx11 said:


> i don't know who this is but i'm beating his 5950x CPU with my 3900xt
> 
> 
> 
> 
> https://www.3dmark.com/compare/spy/15210106/spy/15212427


Memory performance is the key in the cpu benchmark.

10900k @ 5600mhz and 4800c17 tweaked is about 17500 points in the cpu test 

You have 12 331 cpu score.


----------



## Esenel

Nizzen said:


> Memory performance is the key in the cpu benchmark.
> 
> 10900k @ 5600mhz and 4800c17 tweaked is about 17500 points in the cpu test


Solid score.

My 5.3 + 2x16 4300 CL16 are reaching 16660 max.

I am fine with this.
It is still a lot faster than most systems and also 24/7 stable  

My chip won't do 5.4 for benching though....


----------



## kx11

Nizzen said:


> Memory performance is the key in the cpu benchmark.
> 
> 10900k @ 5600mhz and 4800c17 tweaked is about 17500 points in the cpu test
> 
> You have 12 331 cpu score.


yep

i won't go farther than 3600mhz, it won't matter in gaming


----------



## Sync0r

kx11 said:


> yep
> 
> i won't go farther than 3600mhz, it won't matter in gaming


It has been shown that it does matter. Check out Framechasers findings on YouTube. I'm going to do some analysis as well, my Teamgroup 4500mhz Cl18 memory just arrived!


----------



## pat182

Falkentyne said:


> I like how my post vanished instantly.
> Anyway, no card comes with a stock 480W vbios anyway unless it's one of the HOF editions, and Founders 3090 is stock 350W, 400W for 114% PL.
> 
> Since 320w to 400W is already at least 5% (didn't measure), I'm guessing 350W to 480W is between 10 to 15% performance.


I have gained 10fps in most games at 4K going from the MSI 350W to the Strix 480W (real cards, not a BIOS swap).


----------



## ExDarkxH

I was looking at G.Skill's 4400MHz CL16-19-19-39 1.50V RAM kit.

I'm not sure if it's any better than their 4400MHz CL18-19-19-39 1.40V kit, as you should be able to make up the latency difference with the voltage headroom.

Maybe some people just want to run XMP and not tinker at all? It seems like they make a kit with every combination possible, just to hit everything for marketing purposes.


----------



## kx11

Sync0r said:


> It has been shown that it does matter. Check out Framechasers findings on YouTube. I'm going to do some analysis as well, my Teamgroup 4500mhz Cl18 memory just arrived!


not @ 4k man


----------



## Xdrqgol

hey guys, joining the club with a Zotac Trinity 3090 - yeah it is not the greatest ... what bios would you recommend for it, just to unlock some extra juice?

cheers!


----------



## Esenel

@ExDarkxH 
Forget about those high bins.
They're just very expensive, and most of the time they lack the headroom to lower the primary timings.

3200 CL14 is the best.
Or 3600 CL14.


----------



## HyperMatrix

Esenel said:


> @ExDarkxH
> Forget about those high bins.
> They're just very expensive, and most of the time they lack the headroom to lower the primary timings.
> 
> 3200 CL14 is the best.
> Or 3600 CL14.


My understanding was that the memory speed increase should be weighed against the timing speed decrease to see if it's a positive or negative. So for example moving from 3200 CL14 to 4400 CL16 has 14% slower timing but has 37.5% faster speed so it would be a net gain. From a 3600 CL14 to 4400 CL16 would be still 14% slower timing, but 22% faster speed so it would still be an effective gain. 4400 CL18 that he was also looking at vs. 3600 CL14 would be 28.5% slower timing and only 22% faster speed so it would end up being a negative. 

This is just from my reading on the subject. I'm not an electrical engineer. So I'm open to corrections and learning.
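If you want to sanity-check that kind of comparison yourself, the only formula you need is first-word latency in ns = 2000 * CL / (transfer rate in MT/s). A quick Python sketch with the kits mentioned above (the kit list is just illustrative):

```python
def first_word_latency_ns(mts: int, cl: int) -> float:
    """Absolute CAS latency in ns: DDR transfers twice per clock,
    so one I/O cycle lasts 2000 / MT/s nanoseconds; multiply by CL."""
    return 2000 * cl / mts

kits = [("3200 CL14", 3200, 14), ("3600 CL14", 3600, 14),
        ("4400 CL16", 4400, 16), ("4400 CL18", 4400, 18)]

# Lower ns = faster first access; higher MT/s = more bandwidth.
for name, mts, cl in sorted(kits, key=lambda k: first_word_latency_ns(k[1], k[2])):
    print(f"{name}: {first_word_latency_ns(mts, cl):5.2f} ns")
```

This puts 4400 CL16 ahead of both CL14 kits on absolute latency, while 4400 CL18 lands between 3600 CL14 and 3200 CL14, matching the percentages worked out above.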


----------



## Sync0r

Xdrqgol said:


> hey guys, joining the club with a Zotac Trinity 3090 - yeah it is not the greatest ... what bios would you recommend for it, just to unlock some extra juice?
> 
> cheers!


Its a beast, you just have to mod it a little.

Go with the gigabyte oc bios, 390w


----------



## Xdrqgol

Sync0r said:


> Its a beast, you just have to mod it a little.
> 
> Go with the gigabyte oc bios, 390w


Thank you for the kind words; you made me feel that the purchase is not a complete waste. Modding the card - I have no experience doing that ... 
Flashing the BIOS - yes, I'll try that. Thanks a lot for the kind support.


----------



## vmanuelgm




----------



## ExDarkxH

Example: because the latency in nanoseconds for DDR4-2400 CL17 and DDR4-2666 CL19 is roughly the same, the higher-speed DDR4-2666 RAM will provide better performance.
What's interesting is that if you use a calculator, the true latency is actually slightly worse for the 2666 CL19, yet Crucial says it will perform better. I won't claim I'm smarter than them, and I won't doubt them.

This is on Crucial's website. The point is, both matter, but pure speed is the most important factor. 

I think the people targeting 3200-3600 and leaving their RAM there, as opposed to pushing for 4000+, are making a mistake.
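Running Crucial's example through the standard true-latency formula (ns = 2000 * CL / MT/s) shows just how close the two kits are; this Python snippet is only a back-of-the-envelope check:

```python
# True (first-word) latency: cycles * cycle time; DDR does two
# transfers per clock, so cycle time in ns is 2000 / MT/s.
lat_2400_cl17 = 2000 * 17 / 2400   # ~14.17 ns
lat_2666_cl19 = 2000 * 19 / 2666   # ~14.25 ns

print(f"DDR4-2400 CL17: {lat_2400_cl17:.2f} ns")
print(f"DDR4-2666 CL19: {lat_2666_cl19:.2f} ns")
print(f"difference:     {lat_2666_cl19 - lat_2400_cl17:.2f} ns")
```

So the latencies really are "roughly the same" (within about a tenth of a nanosecond, with the 2666 kit technically a hair behind), and the 2666 kit's ~11% extra bandwidth is what tips overall performance in its favor.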


----------



## Esenel

@HyperMatrix
Everything correct.
That is why I run those 2x16GB 3200 CL14 @ 4300 CL16.

The only thing I wanted to say is that he should bin it himself and not waste money on those high bins 

@ExDarkxH 
That is what you should aim for, or surpass.


----------



## Xdrqgol

Falkentyne said:


> Timespy @ +150 core, +500 RAM, 100% voltage (last voltage tier used: 1.080-1.10v, *driver 456.98 hotfix* (since the newer drivers are buggy for too many people)
> Chip heat going nuclear with the stock FE heatsink and that voltage.


Damn, just installed that *driver 456.98 hotfix* and gained like 10 FPS in some games I tried on my stock Zotac 3090 Trinity ... amazing! Thanks dude


----------



## HyperMatrix

ExDarkxH said:


> Example: because the latency in nanoseconds for DDR4-2400 CL17 and DDR4-2666 CL19 is roughly the same, the higher-speed DDR4-2666 RAM will provide better performance.
> What's interesting is that if you use a calculator, the true latency is actually slightly worse for the 2666 CL19, yet Crucial says it will perform better. I won't claim I'm smarter than them, and I won't doubt them.
> 
> This is on Crucial's website. The point is, both matter, but pure speed is the most important factor.
> 
> I think the people targeting 3200-3600 and leaving their RAM there, as opposed to pushing for 4000+, are making a mistake.


The timing is performed within the constraints of the envelope. The envelope being the speed. So the more cycles per second (speed), the more overall throughput you have, even if technically you’re doing less work per cycle. Because you’re doing those cycles so much faster.

So speed isn’t the most important factor but it is a very important one that is tied directly to the effectiveness of timings. For example as I mentioned going with that CL18 4400MHz memory would give you worse performance than CL14 3600MHz.


----------



## ExDarkxH

I hit up optimus yesterday morning and got this reply "We'll try to get back to you ASAP! If we don't get back to you quickly, then we're hard at work making blocks, please give us a day or two to respond  "

They said mid November sooo. I would like at least an update. Even if they say they need another week at least i would have something more concrete

This company is garbage. I dont care how nice their products are, the way they do business is pure trash. This is how it feels when you do business with a Chinese company, but worse....

Pay money a month in advance and hear nothing from them no updates. Why not wait to charge until you ship the items? At least allow us to check on our orders online or something. With this company there is no trace of anything


----------



## long2905

HyperMatrix said:


> Still better average clocks and temps than mine. I seriously think the paste on my card must be terrible. I can't run higher than 1890-1920MHz at 0.887v for hours without temps outpacing the fans. Even then it still sits at around 65C (+/- a few). Any higher and the heat slowly keeps ramping up, because the fans can't handle it even at 100% speed.


Wait so you guys with FTW3 air cooler still struggle with boost clock around 1890-1920? I have to undervolt to maintain [email protected] but mine is not that great a ref card. I guess there is no choice but to put a waterblock on the 3090?


----------



## changboy

I'm on the latest 457.30 driver, and I don't know if it's just me, but I feel like this driver stops me from boosting higher than 2000MHz. I'm not sure about this, but I've played 4 or 5 games since this new driver and I always see 1990MHz in GPU-Z lol.

I mean, I can see 2100MHz when I start a game, but soon after it's around 1980MHz and it stays there. I've only had my card a week, but I think I saw it at 2050MHz many times when gaming for 2 or 3 hours.


----------



## Jpmboy

ExDarkxH said:


> I hit up optimus yesterday morning and got this reply "We'll try to get back to you ASAP! If we don't get back to you quickly, then we're hard at work making blocks, please give us a day or two to respond  "
> 
> They said mid November sooo. I would like at least an update. Even if they say they need another week at least i would have something more concrete
> 
> This company is garbage. I dont care how nice their products are, the way they do business is pure trash. This is how it feels when you do business with a Chinese company, but worse....
> 
> Pay money a month in advance and hear nothing from them no updates. Why not wait to charge until you ship the items? At least allow us to check on our orders online or something. With this company there is no trace of anything


So far, the two blocks I got from them have been exceptionally well designed and made. It's a small "newco", so let them get it right vs fast. Tho my patience has limits too. ✌


----------



## Falkentyne

changboy said:


> I'm on the latest 457.30 driver, and I don't know if it's just me, but I feel like this driver stops me from boosting higher than 2000MHz. I'm not sure about this, but I've played 4 or 5 games since this new driver and I always see 1990MHz in GPU-Z lol.
> 
> I mean, I can see 2100MHz when I start a game, but soon after it's around 1980MHz and it stays there. I've only had my card a week, but I think I saw it at 2050MHz many times when gaming for 2 or 3 hours.


Try 456.98 hotfix please.


----------



## Zurv

huzzah. finally blocks (bitspower for the 3090 FE)

They put in fans for the backplate too.

now to find the package with like $300 worth of thermal pads that i just got (that i can't find... my guess is my wife threw them out... grrr..)


----------



## changboy

My overclock is +100 core and +1000 memory. Is it possible that higher memory uses more power, so the core can't boost higher? I will try that hotfix, but I noticed I didn't have any crashes with the new driver; maybe they did this to avoid crashes and stop people complaining.


----------



## Falkentyne

Zurv said:


> huzzah. finally blocks (bitspower for the 3090 FE)
> 
> They put in fans for the backplate too.
> 
> now to find the package with like $300 worth of thermal pads that i just got (that i can't find... my guess is my wife threw them out... grrr..)
> View attachment 2465211


$300 worth of Thermal pads? What?
Please don't tell me you went nuts with the 17 W/mk Fujipolys.... those saltine-cracker-sized pads for $30 a pop, which take at least two to do one SIDE of a stock heatsink for a card...

These are a better value since it's 120mm * 120mm instead of postage stamp sized (seems to be the same supplier as Fujipoly 11 w/mk from the material tested)








Thermalright Thermal Pad 120x120mm 12.8 W/mk 0.5mm/1.0mm/1.5mm/2.0mm, $8.65 on AliExpress (www.aliexpress.com)


----------



## Zurv

2 cards need a lot of pads - but I have zero now... _sigh_
the bitspower pads are poop.

hrmm.. I seem to have a bunch of TG pads, but they aren't great. Just 8w/mk
edit: just found a box full of Fujipoly 17.. but the sizes aren't right.. blah.. where are you, Amazon package! (HEATHER!)

on the plus side... FP pads don't have much give.. so they sometimes cause problems when you are replacing squishy pads.

also, because I have the cards open again - here is a close-up of the stacked shunt with silver paint:


----------



## Falkentyne

Zurv said:


> 2 cards need a lot of pads - but I have zero now... _sigh_
> the bitspower pads are poop.
> 
> hrmm.. I seem to have a bunch of TG pads, but they aren't great. Just 8w/mk


Maybe try out the Thermalright pads I linked, even though that's the slow boat from China. It's still 120mm*120mm, with different thicknesses. For some reason that size doesn't seem to be available on US sites.

I bought both Thermalright Odyssey 12 w/mk pads (85mm * 45mm) and Fujipoly 11 w/mk pads (60mm * 50mm) on Amazon and put them together on the backplate side of my 3090. The Thermalright pads ran out after the backplate RAM was done, while I was working on the PCB hotspots for the backplate side. It took about 20% of the Fuji postage-stamp pads to finish the hotspots.

From examining those pads, they seem to be from the exact same supplier, going to different OEMs. I'm betting the Gelid 12 w/mk pads are the same.


----------



## changboy

I tried the hotfix driver and played The Division 2 with the same overclock as on the latest driver, and got a crash. So I knew the new driver was limiting something to avoid crashing: on the new driver I ran an overclock that used to crash Shadow of the Tomb Raider, and it didn't crash.
I will reinstall the latest driver; it avoids the crashing issue when an OC doesn't agree with a particular game.
This makes sense, because before, my card boosted differently in different games, and now it's more even, which is the way it should be.
Also, the latest drivers are ready for Cold War, which I got with my card; I also got a code for Watch Dogs Legion for $20 on eBay lol.
btw I increased my VCCIN and Vcore a bit and don't crash anymore in Far Cry 5, so that issue was my CPU overclock. The weird thing is it didn't crash when encoding lol.


----------



## mirkendargen

Looking at pictures of the Thermalright Odyssey material, it looks just like what was packaged with the Bykski block; they're probably also using the same supplier.


----------



## DrunknFoo

Ever so slowly climbing up the PR ladder; personal best 15099 (dunno why I can't get that last 1 point LOL). Gotta figure out how to deal with the heat.



https://www.3dmark.com/pr/494272



btw, anyone got a clue what the lower shunt on the front of the EVGA FTW3 is for? (PCIe is on the back, so it's not that.)
Stacked a 10 there for now because I'm out of 8s.


----------



## Thanh Nguyen

Has anyone here made the switch from an Alphacool block to EK? I heard from one person that he runs 3C cooler with EK than with Alphacool.


----------



## changboy

I have 10 Panasonic 8 mOhm resistors in hand but haven't done anything with them yet. Tell me, is it worth it? And if you know, explain all the places they need to go. I have the FTW3 too.

I see 5 on your PCB picture, at least I think.


----------



## Falkentyne

changboy said:


> I have 10 Panasonic 8 mOhm resistors in hand but haven't done anything with them yet. Tell me, is it worth it? And if you know, explain all the places they need to go. I have the FTW3 too.
> 
> I see 5 on your PCB picture, at least I think.


Please do not put an 8 mOhm resistor on the PCIE slot. The PCIE slot has a 10 amp fuse which will blow at 10A * 12v=120W, and an 8 mOhm shunt resistor may allow more than 120W to the PCIE slot.

I would personally use 15 mOhm current sensing 2512 shunts on that board. eVGA seems to have a higher ratio of PCIE slot to PCIE plug draw compared to FE / Asus. You could use 8 mOhms on all the shunts except the PCIE and 15 mOhm on PCIE, but PCIE slot power will trigger PWR Limit when it reaches 75W.


----------



## vmanuelgm

Thanh Nguyen said:


> Has anyone here made the switch from an Alphacool block to EK? I heard from one person that he runs 3C cooler with EK than with Alphacool.


Personal question, Have u met James from Rockit Cool?


----------



## Thanh Nguyen

vmanuelgm said:


> Personal question, Have u met James from Rockit Cool?


No


----------



## DrunknFoo

changboy said:


> I have 10 Panasonic 8 mOhm resistors in hand but haven't done anything with them yet. Tell me, is it worth it? And if you know, explain all the places they need to go. I have the FTW3 too.
> 
> I see 5 on your PCB picture, at least I think.


5 front top of card
1 front mid/lower of card
1 lower back of card
if you are asking if it is worth it, no probably not




Falkentyne said:


> Please do not put an 8 mOhm resistor on the PCIE slot. The PCIE slot has a 10 amp fuse which will blow at 10A * 12v=120W, and an 8 mOhm shunt resistor may allow more than 120W to the PCIE slot.
> 
> I would personally use 15 mOhm current sensing 2512 shunts on that board. eVGA seems to have a higher ratio of PCIE slot to PCIE plug draw compared to FE / Asus. You could use 8 mOhms on all the shunts except the PCIE and 15 mOhm on PCIE, but PCIE slot power will trigger PWR Limit when it reaches 75W.


for the pcie I originally went 20 mOhm, but something didn't add up (the solder was good too), so I swapped it out for an 8 mOhm (yeah, probably overshooting the amperage by 5). It hasn't blown; not a big deal, I'll swap out the fuse later anyway... (that said, I'm not sure why/how... maybe the analog controller on the card is lowering the voltage as current goes up?)


----------



## Falkentyne

DrunknFoo said:


> 5 front top of card
> 1 front mid/lower of card
> 1 lower back of card
> if you are asking if it is worth it, no probably not
> 
> 
> 
> 
> for the pcie I originally went 20 mOhm, but something didn't add up (the solder was good too), so I swapped it out for an 8 mOhm (yeah, probably overshooting the amperage by 5). It hasn't blown; not a big deal, I'll swap out the fuse later anyway... (that said, I'm not sure why/how... maybe the analog controller on the card is lowering the voltage as current goes up?)


The system has current-monitoring balancing. If one shunt's resistance is out of whack and reads a vastly different value than the others, the card is going to go "hey, something's wrong here" and you could see anything from 2D mode to something like what happened over here:

RTX 3090 Founders Edition working shunt mod (www.overclock.net)

see the memory core logic voltage? 

Even after he shunted the PCIE slot, which lowered the MVDCC, it was still reading way too high (it shouldn't go above 20W after multiplier is factored in, or 14W on his GPU-Z screenshot), so there were other problems.

So it's just best to keep all the shunts close in resistance.


----------



## DrunknFoo

Falkentyne said:


> The system has current-monitoring balancing. If one shunt's resistance is out of whack and reads a vastly different value than the others, the card is going to go "hey, something's wrong here" and you could see anything from 2D mode to something like what happened over here:
> 
> RTX 3090 Founders Edition working shunt mod (www.overclock.net)
> 
> see the memory core logic voltage?
> 
> Even after he shunted the PCIE slot, which lowered the MVDCC, it was still reading way too high (it shouldn't go above 20W after multiplier is factored in, or 14W on his GPU-Z screenshot), so there were other problems.
> 
> So it's just best to keep all the shunts close in resistance.


Well 8s across the board is as close as it can get. Lol

Ya hasnt gone into limp mode yet, had the odd first boot which is usual from my experience with the 2080


----------



## Falkentyne

DrunknFoo said:


> Well 8s across the board is as close as it can get. Lol
> 
> Ya hasnt gone into limp mode yet, had the odd first boot which is usual from my experience with the 2080


No, I mean there's nothing wrong with 8 mOhms on the shunts with 10 mOhms on PCIE.
But an 8 mOhm stack is 1.63x (3.08 mOhms effective) and a 20 mOhm stack is 1.25x (4.0 mOhms), and you said you had problems with a 20 mOhm...
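For anyone doing the stacking math at home, it's just parallel resistance. A rough Python sketch, assuming 5 mOhm stock shunts (which is what the 1.63x and 1.25x figures imply; verify your own board's shunt values before soldering anything):

```python
def stacked_shunt(stock_mohm: float, added_mohm: float):
    """Stacking a shunt in parallel lowers the effective resistance;
    the controller still assumes the stock value, so real power at a
    given reported power scales by stock / effective."""
    effective = stock_mohm * added_mohm / (stock_mohm + added_mohm)
    return effective, stock_mohm / effective

for added in (8.0, 10.0, 15.0, 20.0):
    eff, mult = stacked_shunt(5.0, added)
    print(f"+{added:.0f} mOhm on 5 mOhm stock -> {eff:.2f} mOhm effective, {mult:.3f}x power")
```

The 8 mOhm stack works out to the ~1.63x multiplier above, and the 20 mOhm stack to 1.25x; note that on the PCIe slot shunt, a large multiplier can let the slot draw well past the 75W spec before the card ever flags a power limit.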


----------



## Zurv

Living the dream... join me @Baasha. It is our special place  

3090 FE with bitspower blocks


----------



## mirkendargen

Zurv said:


> Living the dream... join me @Baasha. It is our special place
> 
> 3090 FE with bitspower blocks
> View attachment 2465254
> 
> View attachment 2465255


1. That's the fattest plexi I think I've ever seen on a block, lol.
2. Am I seeing correctly that they included a 2 slot bracket? That's awesome, the 3 slot stock bracket is a big showstopper for anyone vertical mounting.


----------



## DrunknFoo

Zurv said:


> Living the dream... join me @Baasha. It is our special place
> 
> 3090 FE with bitspower blocks


damn, those are so polished, no clouding, nice and clean, I'm so jelly


----------



## changboy

From what I read on another forum, the memory doesn't really overclock on these cards. It's ECC memory and runs at its rated speed without errors, but if you increase the speed, errors get introduced and the memory has to do more work to push everything through without errors, so you lose performance; the more you increase it, the more your fps drops and the memory temp increases.
It seems to be true, because I checked while playing The Division 2, and when I increased it my fps dropped lol. Also, JayzTwoCents made a video on this while overclocking an RTX 3080 FE. He showed that the more offset he added, the more fps dropped; at +800, fps dropped a lot. I can do +1000 lol, and latency also increases a lot, from 8ms to 60ms.
The Far Cry 5 issue was not related to my overclock but a driver problem, something to do with 3D surround or the like.


----------



## changboy

Not sure I will mod my PCB; for now I won't, but later, I don't know. Thanks for the input though; at least I'll know what to do if I want to.


----------



## Zurv

mirkendargen said:


> 1. That's the fattest plexi I think I've ever seen on a block, lol.
> 2. Am I seeing correctly that they included a 2 slot bracket? That's awesome, the 3 slot stock bracket is a big showstopper for anyone vertical mounting.


yeah, that 2-slot bracket is a big deal. I have two 905p Optane drives that I had to remove when I added the 3090 FEs.
now.. to find some use for these two cards. haha.

I'm super looking forward to my block from Optimus. Talk about fat plexi.. that sucker is a monster!

(of course, the lack of noise is a BIG DEAL! I was playing BL3 with these (only one card...) and it was loud as hell, also hitting 77C!
Now nothing can get it over 45C, even with both cards looping Time Spy Extreme.
I have four 360 rads.... which helps too.)


----------



## defiledge

My 3090 runs stable at 0.85v 1935Mhz on metro exodus max benchmark. Did I win the silicon lottery?


----------



## DrunknFoo

defiledge said:


> My 3090 runs stable at 0.85v 1935Mhz on metro exodus max benchmark. Did I win the silicon lottery?


i seriously hate reading that question... lol? 

no you didn't


----------



## defiledge

DrunknFoo said:


> i seriously hate reading that question... lol?
> 
> no you didn't


No need to be jealous now. Your 1800Mhz is still good enough for cyberpunk. What undervolts are you guys getting


----------



## DrunknFoo

defiledge said:


> No need to be jealous now. Your 1800Mhz is still good enough for cyberpunk. What undervolts are you guys getting


haven't bothered to undervolt, trying to improve PR ladder score, if possible I would like to increase the voltage =P


----------



## defiledge

I'll think about increasing the voltage when we get a bios that doesn't power limit the 3090s


----------



## HyperMatrix

long2905 said:


> Wait so you guys with FTW3 air cooler still struggle with boost clock around 1890-1920? I have to undervolt to maintain [email protected] but mine is not that great a ref card. I guess there is no choice but to put a waterblock on the 3090?


Really comes down to silicon lottery. Some people got lucky and have GPUs that are really good. One youtuber does gaming videos and his ASUS TUF 3090 runs 1965-2025MHz (Average 2000) at 1V and maintains temps around 56-57C on air. My Strix can’t do sustained above 1890-1920MHz at 0.887V and even then temps are usually in the 60s. This is with fans at 100%. The card can definitely clock way more, but it would need to be kept much cooler. So a block will help a ton. 



changboy said:


> From what I read on another forum, the memory doesn't really overclock on these cards. It's ECC memory and runs at its rated speed without errors, but if you increase the speed, errors get introduced and the memory has to do more work to push everything through without errors, so you lose performance; the more you increase it, the more your fps drops and the memory temp increases.
> It seems to be true, because I checked while playing The Division 2, and when I increased it my fps dropped lol. Also, JayzTwoCents made a video on this while overclocking an RTX 3080 FE. He showed that the more offset he added, the more fps dropped; at +800, fps dropped a lot. I can do +1000 lol, and latency also increases a lot, from 8ms to 60ms.
> The Far Cry 5 issue was not related to my overclock but a driver problem, something to do with 3D surround or the like.


Not entirely true on memory. This depends on silicon lottery as well. Some can’t OC much at all. Some OC and soon begin to degrade in performance. And some scale with their OC. In the 1 minute VRAY RTX benchmark I got a higher score when pushing memory to +1200. Of course I couldn’t sustain that without the system crashing in longer runs. And it may be different in other benches. But I know at +1000 in games I got higher FPS than when I had +500. Will have to do more testing whenever I get an Aquacomputer block with active cooling for the backside. 



defiledge said:


> My 3090 runs stable at 0.85v 1935Mhz on metro exodus max benchmark. Did I win the silicon lottery?


Well yours does better than mine at that range. But you gotta see what your top end is to determine how good the chip is.


----------



## Thanh Nguyen

Looks like my card sucks down at least 800W to hit 2145MHz in Metro with full RTX and gimpworks.


----------



## HyperMatrix

Thanh Nguyen said:


> Looks like my card sucks down at least 800W to hit 2145MHz in Metro with full RTX and gimpworks.
> View attachment 2465278


What were you getting on air/pre-shunt?


----------



## Falkentyne

So an RTX 3090 FE can draw 526 watts WITHOUT an overclock (stock clocks), at 76C with 100% fan speed at 3840x2160 (tested in Overwatch, 1080p + 200% render scale); the Power Limit flag is not tripped, since PL starts at 530W.

And about 466 watts without an overclock at 1920x1080 (~71C) with the stock cooler (tested in the Heaven benchmark).

Actual RTX and gimpworks and other Nvidia features would probably leapfrog that, as above.


----------



## Nizzen

defiledge said:


> My 3090 runs stable at 0.85v 1935Mhz on metro exodus max benchmark. Did I win the silicon lottery?


Running under 2000MHz is losing


----------



## Baasha

Wow! Those blocks are thicc!

What's your average clock during benching?!?

Man I need to learn how to water cool! 



Zurv said:


> Living the dream... join me @Baasha. It is our special place
> 
> 3090 FE with bitspower blocks
> View attachment 2465254
> 
> View attachment 2465255


----------



## krs360

Anyone with a 3090 TUF OC able to shed some light on this for me?

My Asus card seems to mostly disregard the OC set within Afterburner. There are a couple of times where it might get into the 1900s, but it's usually averaging around 1850MHz (core) on a PR run. The Zotac 3090 card I have goes into the 1900s and stays there (1920-1950 average) despite having the same OC applied and a lower factory boost clock.

The Asus card is under water and sits no higher than 48 degrees ish, so heat isn't a factor. The same issue was present when it was on air.

Any ideas? Is it just the Asus BIOS?


----------



## mbm

I have the 3090 Asus TUF OC. I'm running mine at 0.85V / 1905MHz consistently.
On the default curve with +200MHz it will boost up to 2150MHz, but it cannot maintain that for long.
Running with the default cooler and fan curve, hitting a MAX of 70C.


----------



## krs360

mbm said:


> I have the 3090 Asus TUF OC. I'm running mine at 0.85V / 1905MHz consistently.
> On the default curve with +200MHz it will boost up to 2150MHz, but it cannot maintain that for long.
> Running with the default cooler and fan curve, hitting a MAX of 70C.


Hi, thanks for the reply.

What are you running on the core which is stable?


----------



## mbm

As I wrote, 1905MHz; this works in every scenario.
Battlefield 5 seems to be what pushes my system the most.


----------



## Spiriva

krs360 said:


> Anyone with a 3090 TUF OC maybe shed some light on this for me?
> 
> My Asus card seems to mostly disregard the OC set within afterburner. There are a couple of times where it might get into the 1900s but it's usually averaging around 850mhz (core) on a PR run. The Zotac 3090 card I have goes into the 1900 and stays there (1920-1950 average) despite having the same OC applied and having a lower factory boost clock.
> 
> Asus card is under water and sits no higher than 48 degrees ish so heat isn't a factor. The same issue was present when it was on air.
> 
> Any ideas? Is it just the Asus bios?


Try one of the Gigabyte bioses, "Gaming OC" or "Aorus Master". They are both 390W bioses and currently the best for 2x8-pin cards.









Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

or

Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

or

Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

The one at the very bottom has a bit more aggressive fan curve.


----------



## rawsome

GeForce RTX 3080 and RTX 3090 with bended package - why water and air coolers have such a hard time | Investigative | igor'sLAB
www.igorslab.de






> And no, this time it is not the supplier with the best gap dimensions who wins, but the one with the lowest standard deviation. *So it becomes the GPU package lotto*, where the cooler suppliers almost mutate into extras.


Maybe it is not so much a difference in the paste job by the manufacturer as the warping of the package. That at least explains temperature differences between cards of the same model, and it applies in general, as no manufacturer can avoid the problem.

So it is not as simple as saying "waterblock A is better than waterblock B because it has a lower delta", as both waterblocks can yield very different results on different cards of the same model.

That also means you have to choose the right thermal paste and double-check that all thermal pads are touching their components.


----------



## slopokdave

Looking for some PSU guidance. I have a 750W Seasonic Gold (GX-750, newer version of this series) but I'm getting hard shutdowns when overclocking a 3090 FE w/ an AMD 3800X. I can undervolt and I'm fine. I've found similar experiences from other Seasonic 750 users....

Anyone running a Corsair RM850x on an overclocked 3090?


----------



## GAN77

Phanteks released GLACIER G30 STRIX BLOCK for Asus ROG Strix RTX 3090/3080.


----------



## mbm

Spiriva said:


> Try one of the Gigabyte bioses, "Gaming OC" or "Aorus Master". They are both 390W bioses and currently the best for 2x8-pin cards.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
> www.techpowerup.com
> 
> or
> 
> Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
> www.techpowerup.com
> 
> or
> 
> Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
> www.techpowerup.com
> 
> The one at the very bottom has a bit more aggressive fan curve.


And what are the benefits? Please don't say higher power


----------



## ExDarkxH

I'll probably post this over in the memory section as well, but just to follow up on yesterday's discussion:

I was able to get 4000 CL15 stable @ 1.45V. It passed all the stress testing, but I'm not seeing any performance benefit. In PerformanceTest, the MemoryMark RAM scores went down across the board. I also ran the Fire Strike Ultra and Time Spy Extreme CPU tests and they were all the same or lower.
Does anyone know of any good applications for memory benchmarking?

I was also able to get 4000MHz 14-17-17-37 stable, but it needed 1.55V so I just switched back.

In the end I'm not seeing any benefit going down from 4600 CL18. Is there something I'm missing, or are tight timings just overrated and pure speed the way to go?
I'd like some more apps to test performance


----------



## HyperMatrix

ExDarkxH said:


> I'll probably post this over in the memory section as well, but just to follow up on yesterday's discussion:
> 
> I was able to get 4000 CL15 stable @ 1.45V. It passed all the stress testing, but I'm not seeing any performance benefit. In PerformanceTest, the MemoryMark RAM scores went down across the board. I also ran the Fire Strike Ultra and Time Spy Extreme CPU tests and they were all the same or lower.
> Does anyone know of any good applications for memory benchmarking?
> 
> I was also able to get 4000MHz 14-17-17-37 stable, but it needed 1.55V so I just switched back.
> 
> In the end I'm not seeing any benefit going down from 4600 CL18. Is there something I'm missing, or are tight timings just overrated and pure speed the way to go?
> I'd like some more apps to test performance


A couple of points. You would only see a benefit in memory-bound situations in the first place. And when you account for the speed vs. timing difference between 4600 CL18 and 4000 CL15: yes, the CL18 takes 20% more cycles, but each cycle also completes 15% faster. So you're looking at a very small uplift, and even that will skew a bit depending on whether the task itself benefits more from tighter timings or higher speeds. You should see a difference in memory benchmarks though.
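The cycles-vs-nanoseconds point above can be checked with the standard true-latency formula; a quick sketch (this is generic DDR arithmetic, not anything specific to these kits):

```python
# True CAS latency: CL cycles divided by the memory clock.
# For DDR, the I/O clock is half the transfer rate (MT/s).
def cas_latency_ns(mt_s: int, cl: int) -> float:
    clock_mhz = mt_s / 2
    return cl / clock_mhz * 1000  # cycles per MHz -> nanoseconds

print(f"4600 CL18: {cas_latency_ns(4600, 18):.2f} ns")  # ~7.83 ns
print(f"4000 CL15: {cas_latency_ns(4000, 15):.2f} ns")  # ~7.50 ns
# Nearly identical first-word latency, but 4600 MT/s keeps ~15% more bandwidth.
```

So the two configurations trade a few hundredths of a nanosecond of latency for a substantial bandwidth difference, which is why most benchmarks barely move.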


----------



## MoltenMoose

Hey there

I'm relatively new to this thread and have been working through the backlog of posts, which you can imagine is taking me quite a while!

I have an MSI 3090 Gaming X Trio, which has a rather disappointing maximum power limit. I have an Alphacool waterblock arriving in the next few days so was looking at ways of increasing this.
It seems that for the Trio I can flash it to the EVGA 500w bios, which I dutifully did last week and it has indeed unlocked the power limit slider.
However, since doing that I've been getting BSODs when waking from sleep - even though I am still running without an overclock or adjustments to the power limit.
I flashed the card back to its default MSI bios and the problem has gone away.

The card seems to work fine with the EVGA bios in all other ways, but this sleep issue is annoying enough that I'd rather not have the increased power limit if I have to pay this cost!

I was wondering if anyone else has noticed getting BSODs when waking from sleep with the EVGA 500W bios?


----------



## changboy

MoltenMoose said:


> Hey there
> 
> I'm relatively new to this thread and have been working through the backlog of posts, which you can imagine is taking me quite a while!
> 
> I have an MSI 3090 Gaming X Trio, which has a rather disappointing maximum power limit. I have an Alphacool waterblock arriving in the next few days so was looking at ways of increasing this.
> It seems that for the Trio I can flash it to the EVGA 500w bios, which I dutifully did last week and it has indeed unlocked the power limit slider.
> However, since doing that I've been getting BSODs when waking from sleep - even though I am still running without an overclock or adjustments to the power limit.
> I flashed the card back to its default MSI bios and the problem has gone away.
> 
> The card seems to work fine with the EVGA bios in all other ways, but this sleep issue is annoying enough that I'd rather not have the increased power limit if I have to pay this cost!
> 
> I was wondering if anyone else has noticed getting BSODs when waking from sleep with the EVGA 500W bios?


With my evga ultra i never have that bug but in my setting under power mode config i never give access to my pc going in sleep mode and close my screen, why dont you just change this in windows 10.


----------



## ExDarkxH

HyperMatrix said:


> A couple of points. You would only see a benefit in memory-bound situations in the first place. And when you account for the speed vs. timing difference between 4600 CL18 and 4000 CL15: yes, the CL18 takes 20% more cycles, but each cycle also completes 15% faster. So you're looking at a very small uplift, and even that will skew a bit depending on whether the task itself benefits more from tighter timings or higher speeds. You should see a difference in memory benchmarks though.


F it, then I'll just set it to 4000 CL14 @ 1.55V. I bought this Apex mobo for a reason, right?


----------



## rawsome

slopokdave said:


> Looking for some PSU guidance. I have a 750W Seasonic Gold (GX-750, newer version of this series) but I'm getting hard shutdowns when overclocking a 3090 FE w/ an AMD 3800X. I can undervolt and I'm fine. 750W should be plenty; I'm seeing at most 580W from the wall, but that's another issue with these Seasonic PSUs, apparently.
> 
> I need something in the meantime until Seasonic decides what they want to do with my RMA.
> 
> I can pick up a Corsair RM850x or an EVGA SuperNOVA 1000 G+ from a Best Buy ~45 min away; which would you recommend? I'd love to think the 850 is enough, and it looks like a better PSU IMO, but I thought 750 would be enough also. Thanks.
> 
> I have a single pump/res water loop, 6 fans, 2 NVMe and 2 SATA SSDs. I do plan on swapping the 3800X for a 5900X.


You are asking this on overclock.net, so the answer is: go for 1000W at least, so you have room for shunting



MoltenMoose said:


> Hey there
> 
> I'm relatively new to this thread and have been working through the backlog of posts, which you can imagine is taking me quite a while!
> 
> I have an MSI 3090 Gaming X Trio, which has a rather disappointing maximum power limit. I have an Alphacool waterblock arriving in the next few days so was looking at ways of increasing this.
> It seems that for the Trio I can flash it to the EVGA 500w bios, which I dutifully did last week and it has indeed unlocked the power limit slider.
> However, since doing that I've been getting BSODs when waking from sleep - even though I am still running without an overclock or adjustments to the power limit.
> I flashed the card back to its default MSI bios and the problem has gone away.
> 
> The card seems to work fine with the EVGA bios in all other ways, but this sleep issue is annoying enough that I'd rather not have the increased power limit if I have to pay this cost!
> 
> I was wondering if anyone else has noticed getting BSODs when waking from sleep with the EVGA 500W bios?


Did you use DDU after switching the vbios? I also have a Trio X with the 500W bios; I'll give standby a try and report back.


----------



## MoltenMoose

changboy said:


> With my evga ultra i never have that bug but in my setting under power mode config i never give access to my pc going in sleep mode and close my screen, why dont you just change this in windows 10.


Not too sure what you're saying here!
Sounds like you're saying that you never encountered this sleep-specific behaviour but that you're also *not *using sleep...



rawsome said:


> Did you use DDU after switching the vbios? I also have a Trio X with the 500W bios; I'll give standby a try and report back.


No I didn't use DDU - guess I could give that a go, certainly. Let me know whether yours seems happy.


----------



## Spiriva

mbm said:


> And what are the benefits? Please dont say higher power


My card boosts higher than with the original PNY bios.


----------



## HyperMatrix

Looks like there's a price drop on 3090s incoming. Hydrocopper and Hybrid EVGA cards listed. Hybrid is same price as air. Hydrocopper is $50 more.


----------



## ExDarkxH

I think the pricing is more of a response to the inevitable 3080 Ti.

It wouldn't make sense to release a $2,000 Hydro Copper 3090 when a 3080 Ti could release and cannibalize all the sales. They've put a lot of time and resources into topping out the 3090 with all the watercooling and the KPE, and I doubt they want to go very high in price when a 3080 Ti could potentially drop at $1000.


----------



## changboy

I hit 521W while playing The Division 2 on my EVGA Ultra


----------



## Falkentyne

MoltenMoose said:


> Hey there
> 
> I'm relatively new to this thread and have been working through the backlog of posts, which you can imagine is taking me quite a while!
> 
> I have an MSI 3090 Gaming X Trio, which has a rather disappointing maximum power limit. I have an Alphacool waterblock arriving in the next few days so was looking at ways of increasing this.
> It seems that for the Trio I can flash it to the EVGA 500w bios, which I dutifully did last week and it has indeed unlocked the power limit slider.
> However, since doing that I've been getting BSODs when waking from sleep - even though I am still running without an overclock or adjustments to the power limit.
> I flashed the card back to its default MSI bios and the problem has gone away.
> 
> The card seems to work fine with the EVGA bios in all other ways, but this sleep issue is annoying enough that I'd rather not have the increased power limit if I have to pay this cost!
> 
> I was wondering if anyone else has noticed getting BSODs when waking from sleep with the EVGA 500W bios?


Known Nvidia driver issue.


----------



## mbm

Spiriva said:


> My card boosts higher than with the original PNY bios.


Could you post before and after boost?
And do you use afterburner what are your settings before and after?


----------



## dante`afk

EVGA doesn't sell the HC block as a single item? Wut


----------



## changboy

I wanna sell my EVGA 3090 FTW3 Ultra and get the Hydro Copper Ultra one lol.
But the specs are the same, 1800MHz. Maybe on water it will hold boost clocks better.


----------



## Spiriva

mbm said:


> Could you post before and after boost?
> And do you use afterburner what are your settings before and after?


It boosts to a max of 2205MHz now, with an EK waterblock. With the PNY 365W bios it was ~2085MHz.


----------



## HyperMatrix

changboy said:


> I wanna sell my EVGA 3090 FTW3 Ultra and get the Hydro Copper Ultra one lol.
> But the specs are the same, 1800MHz. Maybe on water it will hold boost clocks better.


The AquaComputer block with the backplate and shipping to Canada is $500 CAD before taxes and duty fees. So the HydroCopper is definitely a good financial choice. Especially if you can sell your existing card for at least the same price you bought it. Wouldn't hurt to sign up for the notify list.


----------



## Thanh Nguyen

Spiriva said:


> It boosts to a max of 2205MHz now, with an EK waterblock. With the PNY 365W bios it was ~2085MHz.


What bios are you using now?


----------



## changboy

HyperMatrix said:


> The AquaComputer block with the backplate and shipping to Canada is $500 CAD before taxes and duty fees. So the HydroCopper is definitely a good financial choice. Especially if you can sell your existing card for at least the same price you bought it. Wouldn't hurt to sign up for the notify list.


Ya, I will click to be notified hehehe. $600 for a waterblock; I think I will keep it on air unless I go with the Hydro Copper one. Will see.


----------



## Spiriva

Thanh Nguyen said:


> What bios are you using now?


Aorus master 390w


----------



## Nizzen

Spiriva said:


> It boosts to a max of 2205MHz now, with an EK waterblock. With the PNY 365W bios it was ~2085MHz.


What is the average clock in port royal?

There is no point telling what the max boost is on very light load.


----------



## Spiriva

Nizzen said:


> What is the average clock in port royal?
> 
> There is no point telling what the max boost is on very light load.


I dunno; last time I ran it, 3DMark couldn't see what CPU or GPU I had, or anything else about my PC. It was just when Windows 10 20H2 was released.


----------



## MoltenMoose

Falkentyne said:


> Known Nvidia driver issue.


Thanks for that - I had a google around and found a thread about it, and specifically found this comment to be very interesting:


https://www.nvidia.com/en-us/geforce/forums/geforce-graphics-cards/5/405560/bsod-3090fe-with-driver-45709-update-after-sleep-i/2868607/?commentPage=3


I had enabled a manual GPU fan curve when running the EVGA bios, as it was a little noisy otherwise; I don't need to run this curve with the vanilla bios. So it is probably this.

Once my waterblock arrives I'll flash it again, and given I obviously won't need to touch the fan curve, we'll hopefully be laughing.


----------



## defiledge

Will the 3080 Ti devalue our 3090s?


----------



## HyperMatrix

changboy said:


> Ya, I will click to be notified hehehe. $600 for a waterblock; I think I will keep it on air unless I go with the Hydro Copper one. Will see.


Well, at this rate an RTX 3080 HydroCopper will likely be able to outperform an RTX 3090 with a mediocre chip on air. So I'm seriously questioning my Strix purchase at the moment. Haha. That one's not up for a notify. But I am genuinely contemplating just selling this Strix, since it's not the greatest chip, and getting either a HydroCopper 3080 or 3080 Ti. Currently the Strix cost me $2600 CAD. The block + duty/tax is another $550. Then good thermal pads/LM is another $120. So close to $3300. Or I could just get a 3080 HydroCopper for probably $1200 CAD with shipping/tax and maybe 5% less performance. It wouldn't need to be shunted either, because a 450W power limit on a 3080 is roughly the same, per core, as a 560W power limit on a 3090. And you'll have far less heat from the GPU, as well as from the VRAM, to deal with.
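The 3080-vs-3090 power comparison can be sanity-checked by scaling the limit by CUDA core count; a crude sketch (core counts are from the public spec sheets, and it deliberately ignores the 3090's extra GDDR6X power draw):

```python
# Scale a 3080 power limit to a rough 3090 equivalent by CUDA core count.
# Crude model: assumes GPU power scales linearly with cores and ignores
# the 3090's doubled VRAM, which draws real power on its own.
CORES_3080 = 8704
CORES_3090 = 10496

def equivalent_3090_limit(watts_3080: float) -> float:
    return watts_3080 * CORES_3090 / CORES_3080

print(f"{equivalent_3090_limit(450):.0f} W")  # ~543 W, in the ballpark of the 560 W quoted
```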



defiledge said:


> Will the 3080 Ti devalue our 3090s?


Yes, definitely. Reminds me of the Pascal Titan X launch: then the 1080 Ti with the same cores for a lot less, then the Pascal Titan Xp with even more cores. Nvidia does have a habit of intentionally screwing/milking early adopters. A lot is going to depend on what the AMD cards can do, though. If they match or beat Nvidia's cards, then our cards are devalued even before the 3080 Ti launches.


----------



## changboy

Ya, but the thing is, these days I play a lot of games and I'm happy I got a 3090; cards are hard to get, at least for me. No store has them in stock; so many people want to buy one but can't find it anywhere.
If I hadn't put my name down at EVGA I wouldn't have gotten one already.


----------



## mbm

Spiriva said:


> its boost to max 2205mhz now. EK waterblock. With the PNY 365w it was ~2085mhz.


But this could just be the result of a different offset, something that could have been done with the original bios and MSI Afterburner.
And max boost doesn't say much. I can actually get my card to boost beyond 2200MHz in the new CoD Cold War,
but in Battlefield it cannot maintain more than 1950MHz consistently.


----------



## Spiriva

mbm said:


> But this could just be the result of a different offset, something that could have been done with the original bios and MSI Afterburner.
> And max boost doesn't say much. I can actually get my card to boost beyond 2200MHz in the new CoD Cold War,
> but in Battlefield it cannot maintain more than 1950MHz consistently.


Not for my card, at least. With the 365W bios the speed was jumping around like nuts. If you are happy with whatever bios you are using, that's good though.
I'm happy with the Aorus Master bios for my card.


----------



## defiledge

So the 3080 Ti and the 3090 have the exact same die, but the 3090 costs 50% more? Nice one.


----------



## HyperMatrix

defiledge said:


> So the 3080 Ti and the 3090 have the exact same die, but the 3090 costs 50% more? Nice one.


Welcome to Nvidia.


----------



## changboy

To get the highest-end card at launch you have to pay for it, or wait 6 months to a year and pay less for a revision.


----------



## mirkendargen

defiledge said:


> So the 3080 Ti and the 3090 have the exact same die, but the 3090 costs 50% more? Nice one.


Call 3090 "RTX Titan Ampere" instead, and it's 100% Nvidia business as usual.


----------



## defiledge

how are you guys OCing 2x8pin cards with these awful power limits?


----------



## Spiriva

defiledge said:


> how are you guys OCing 2x8pin cards with these awful power limits?


Flash a 390W bios, put on a waterblock, and hope you got lucky with your chip.

I've seen Strix cards with 480W bioses not being able to go over 2055MHz etc., so for example a Strix card isn't guaranteed to be an awesome overclocker.


----------



## olrdtg

Well, my waterblock for the FE finally arrived from Bitspower. The acrylic part of the block is THICK. The block alone weighs probably half as much as the FE cooler. The backplate is a huge disappointment. Solid piece of aluminum painted black, and they include one of those stand up chipset coolers to attach to the back. I'm definitely gonna look into better cooling for the back, at least until EK releases their sexy ass block for the 3090 FE


----------



## Falkentyne

defiledge said:


> how are you guys OCing 2x8pin cards with these awful power limits?


Shunt mods 









How To: Easy mode Shunt Modding
www.overclock.net
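For context, the electrical idea behind a shunt mod fits in a few lines; the resistor values here are illustrative assumptions, not taken from the linked guide:

```python
# A shunt mod stacks a second resistor in parallel with the card's
# current-sense shunt. The controller sees a smaller voltage drop and
# therefore under-reports power draw, raising the effective limit.
def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

STOCK_SHUNT = 0.005   # ohms; a 5 mOhm sense resistor (assumed typical value)
ADDED_SHUNT = 0.005   # ohms; an equal resistor soldered on top

r_eff = parallel(STOCK_SHUNT, ADDED_SHUNT)   # 0.0025 ohms
scale = STOCK_SHUNT / r_eff                  # 2.0: card reads half the real power
print(f"A 350 W limit now allows ~{350 * scale:.0f} W of actual draw")
```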


----------



## changboy

The hybrid cooler will be available for the EVGA 3000 series line; also, auto-notify works:









Products - Cooling - GPU HYBRID Cooler
www.evga.com


----------



## Zogge

I am reaching 2160 on air on the Strix 3090 OC; it starts there at 45 degrees, then drops down to around 2040 at 78 degrees.
Core +185, Mem +250, Voltage +100, temp limit set to 81 max, 123% power. I think I might be able to stabilize it at 2150 or even 2200 on water.
Feels like an OK bin; not revolutionary, but better than average. Right?
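Those two data points imply a fairly linear clock-vs-temperature slope; a descriptive interpolation (not a model of Nvidia's actual boost algorithm, which steps down in roughly 15 MHz bins as temperature rises):

```python
# Linear fit of boost clock vs core temperature from the two observations
# above: 2160 MHz at 45 C and 2040 MHz at 78 C.
def boost_at(temp_c: float) -> float:
    t1, c1 = 45, 2160
    t2, c2 = 78, 2040
    slope = (c2 - c1) / (t2 - t1)   # about -3.6 MHz per degree C
    return c1 + slope * (temp_c - t1)

print(f"{boost_at(40):.0f} MHz")  # ~2178 MHz at a plausible water-loop temperature
```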


----------



## mirkendargen

Zogge said:


> I am reaching 2160 on air on the Strix 3090 OC; it starts there at 45 degrees, then drops down to around 2040 at 78 degrees.
> Core +185, Mem +250, Voltage +100, temp limit set to 81 max, 123% power. I think I might be able to stabilize it at 2150 or even 2200 on water.
> Feels like an OK bin; not revolutionary, but better than average. Right?


Depends what that test is doing. I can do that on the 3dmark DXR benchmark and Port Royal on my Strix, but Metro main menu says NOPE and is more like +100 core, 2025-2070.

I expect you can likely go above +250 memory also; +750 puts it at what the chips are rated at from Micron. Some likely don't meet that (or have heat issues meeting it after some time), but I think most do.
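The offset arithmetic behind the +750 figure, as a sketch; note that the factor of two on the Afterburner offset is inferred from the numbers in this post rather than from any documentation:

```python
# 3090 stock GDDR6X runs at 19500 MT/s effective (19.5 Gbps per pin);
# Micron rates the chips for 21 Gbps. If each +1 of Afterburner offset
# adds 2 MT/s effective, then +750 lands exactly on the rated speed.
STOCK_EFFECTIVE = 19500  # MT/s

def effective_rate(ab_offset: int) -> int:
    return STOCK_EFFECTIVE + 2 * ab_offset

print(effective_rate(750))  # 21000 -> Micron's rated 21 Gbps
print(effective_rate(250))  # 20000 -> the +250 used above
```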


----------



## dr/owned

I ordered the Bykski block for the TUF. $104 shipped with the backplate. Already had a Bykski block on my 1080Ti (only people to make one for the Gigabyte G1) with no issues. And their CPU waterblock is such a good knockoff of the Supremacy that the parts are interchangeable between the two. Hopefully 3 for 3.


----------



## Antsu

Anyone heard from Aquacomputer about their Strix block? Resisting the EK block is getting harder every day...


----------



## kx11

olrdtg said:


> Well, my waterblock for the FE finally arrived from Bitspower. The acrylic part of the block is THICK. The block alone weighs probably half as much as the FE cooler. The backplate is a huge disappointment. Solid piece of aluminum painted black, and they include one of those stand up chipset coolers to attach to the back. I'm definitely gonna look into better cooling for the back, at least until EK releases their sexy ass block for the 3090 FE


Tell us more about the fan; can you mount it on top if the GPU is placed in the top PCIe slot?!


----------



## DocratesCR

Anyone tested the EVGA 3090 FTW3 Ultra with a different BIOS? Not the BETA, but another, like Asus.


----------



## changboy

I asked many times and got no answer; try it and tell me if it works nicely heheh. The bios is a .rom; I think you need a tool to flash it.


----------



## dangerSK

DocratesCR said:


> Anyone tested the EVGA 3090 FTW3 Ultra with a different BIOS? Not the BETA, but another, like Asus.


Yea, not good; the card has a weird power limit issue. Flashing a 2x8-pin bios should solve this, preferably the XC3 bios.


----------



## DrunknFoo

Antsu said:


> Anyone heard from Aquacomputer about their Strix block? Resisting the EK block is getting harder every day...


Their site estimated 1.5-2 months for the Strix.

Disappointed that they are not 'prioritizing' an FTW block...


----------



## DrunknFoo

DocratesCR said:


> Anyone tested the EVGA 3090 FTW3 Ultra with a different BIOS? Not the BETA, but another, like Asus.


Tried a few; the 500W still appears to output the most power.


----------



## tiefox

DocratesCR said:


> Anyone tested the EVGA 3090 FTW3 Ultra with a different BIOS? Not the BETA, but another, like Asus.


I ordered from them the day it was made available, still nothing


----------



## changboy

This is what I got on Time Spy with my card on air and my CPU at 5.0GHz. I can push my CPU a bit more, to 5.1GHz, and get around 700+ more, but I don't want to go much past 1.3V lol.


https://www.3dmark.com/3dm/53032644


----------



## Edge0fsanity

I posted this in another thread but thought I would post it here as well. I know there are a lot of FTW3 owners frustrated with the 500W FTW3 bios. It seems you can flash a 2x8-pin bios and get a good 50W bump over the 500W EVGA bios. I went with the XC3 bios. EVGA 3090 FTW POWER LIMIT BYPASS

So this does not remove the limiter; it will still throttle with the power slider maxed out and an offset applied. I ended up using the VF curve in AB and settled on 1000mV, and was able to run Port Royal with almost no power throttling. There's one scene where it drops for about a second; that's it. Previously, on the 500W bios, I was limited to 950mV on the VF curve for similar no-throttle results. It runs about 5C hotter, and I gained 45MHz on my OC.

500W bios: https://www.3dmark.com/pr/500592
XC3 bios: http://www.3dmark.com/pr/500725

Neither of these results is dialed in, as I just got the card last night. The mem OC is not dialed in at all. I had overlays open, G-Sync on, additional monitors on, wasn't cooling the card off between runs, etc. A few hundred points left on the table.

Overall this felt like a solid 50W bump over the 500W bios. I may give the Gigabyte 390W bios a try to see if it allows higher power, but I won't take the card past 1000mV on air. It's already running around 72C once it's fully heat-soaked. I will have this on water and will push further whenever Optimus ships my block.


----------



## cloudconnected

Hello,

can anyone pls upload a Gigabyte Gaming OC 3090 Silent Bios?




Can i flash it on my 3090 TUF OC?


----------



## lokran88

Antsu said:


> Anyone heard from Aquacomputer about their Strix block? Resisting the EK block is getting harder every day...


Aquacomputer officially said at the end of October that they would ship the first Strix blocks in 2-3 weeks, which would be mid-November. I ordered minutes after they put it online, so I assume I'll receive it soon.
Honestly, I might even prefer the EK block for aesthetics, but I assumed it performs worse, and there's no active backplate, so I went with Aquacomputer.


DrunknFoo said:


> Their site estimated 1.5-2 months for the Strix.
> 
> Disappointed that they are not 'prioritizing' an FTW block...


Watercool is making the FTW block: the Heatkiller V, all newly developed. I would definitely go for that if I had an FTW.
Not sure if AC will make one at all, as they chose the Strix.


----------



## changboy

*Edge0fsanity*
How is your score on time spy ?


----------



## changboy




----------



## Edge0fsanity

changboy said:


> *Edge0fsanity*
> How is your score on time spy ?





http://www.3dmark.com/spy/15275836



Still playing with voltages and frequencies. I discovered my previous Port Royal settings were not even close to stable in games. This Time Spy run was done at 1025mV; it holds it throughout GPU test 1, but GPU test 2 is all over the place. Same as before: still dialing in an OC for daily use, so there are points left on the table.


----------



## Thanh Nguyen

changboy said:


>


7.5c delta on a stock or mod?


----------



## Foxrun

Well I’m surprised with my 3090 FTW Ultra. My FE could manage 1890mhz in UE4 games like Insurgency and maybe 1995-2025 in AC Valhalla, all of which were severely power limited. The FTW maintains 2025-2050 in Insurgency, 2100 in Valhalla, and 2025-2040 in Black Ops; everything maxed at 120hz. I’m power limited in Black Ops and UE4 so far, but I’m pretty sure this is a good chip.


----------



## ExDarkxH

I'm getting some ridiculous clocks using the new DXR 3dmark feature









One thing I noticed is that at 20-24C, testing boost clock with this offset, my card jumps to 2,340MHz. Unfortunately, my card is still on air, so I jump into the 40s almost immediately and end the benchmark near 50C.
It makes me super optimistic, though, about the potential gaming clocks with a water block.

Could I realistically be at 2,200MHz with a block if I stay under 40C? Maybe!


----------



## HyperMatrix

Foxrun said:


> Well I’m surprised with my 3090 FTW Ultra. My FE could manage 1890mhz in UE4 games like Insurgency and maybe 1995-2025 in AC Valhalla, all of which were severely power limited. The FTW maintains 2025-2050 in Insurgency, 2100 in Valhalla, and 2025-2040 in Black Ops; everything maxed at 120hz. I’m power limited in Black Ops and UE4 so far, but I’m pretty sure this is a good chip.


How the heck are you hitting a power limit in black ops? It’s the only game I can run over 2000MHz. Haha. Average 370-430W depending on temp/load. Fully maxed out with DLSS Quality. [email protected]


----------



## Foxrun

HyperMatrix said:


> How the heck are you hitting a power limit in black ops? It’s the only game I can run over 2000MHz. Haha. Average 370-430W depending on temp/load. Fully maxed out with DLSS Quality. [email protected]


I’m not using DLSS.


----------



## Spiriva

cloudconnected said:


> Hello,
> 
> can anyone pls upload a Gigabyte Gaming OC 3090 Silent Bios?
> 
> 
> Can i flash it on my 3090 TUF OC?


Try one of the Gigabyte bioses, "Gaming OC" or "Aorus Master". They are both 390W bioses and currently the best for 2x8-pin cards.









Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

or

Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

or

Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

The one at the very bottom has a bit more aggressive fan curve.


----------



## Carillo

lokran88 said:


> Aquacomputer officially said at the end of October that they will ship the first Strix blocks in 2-3 weeks, which would be mid-November. I ordered minutes after they put it online, so I assume I'll receive it soon.
> Honestly, I might even prefer the EK block for aesthetics, but I assumed that it performs worse, and it has no active backplate, so I went with Aquacomputer.
> 
> Watercool is making the FTW block, a newly developed Heatkiller V. I would definitely go for that if I had a FTW.
> Not sure if AC will make one at all, as they chose the Strix.


Talked to Aquatuning yesterday (who get their parts from Aquacomputer). The Strix blocks are delayed another 3 weeks from now.

Edit: glad I have my ****ty Bykski block for now. With some power tools it does fit.


----------



## mirkendargen

ExDarkxH said:


> I'm getting some ridiculous clocks using the new DXR 3dmark feature
> 
> One thing I noticed is that at 20-24c, testing boost lock at this offset, my card jumps to 2,340MHz. Unfortunately, my card is still on air, so I jump to the 40s almost immediately and end the benchmark near 50c.
> It makes me super optimistic though about the potential gaming clocks with a water block
> 
> Could I realistically be at 2,200mhz with a block if I stay under 40c? Maybe!


Yeah since this uses the RT cores only it clocks a lot differently (and uses less power so you'll never hit the power limit). Don't consider it indicative of what you'll get in normal workloads.

It does make me wonder what part of the chip is becoming unstable in Metro/Control making them unable to clock as high stably as Port Royal. Maybe it's tensor cores, I have DLSS on for both all the time.


----------



## ExDarkxH

mirkendargen said:


> Yeah since this uses the RT cores only it clocks a lot differently (and uses less power so you'll never hit the power limit). Don't consider it indicative of what you'll get in normal workloads.
> 
> It does make me wonder what part of the chip is becoming unstable in Metro/Control making them unable to clock as high stably as Port Royal. Maybe it's tensor cores, I have DLSS on for both all the time.


Oh of course no way in hell I will get 2300 in games.

It's still clocking high though, even for this benchmark.
In Port Royal I'm getting a 2160 average with a 55c average tho, so I want to see normal gaming conditions at 38-40c with a water block.
Is 2200 actually possible?? Let's find out.

I was planning to buy a KPE this generation, but with this chip that I already have, I think I'll pass.


----------



## Thanh Nguyen

Under what conditions will the clock stay at what you set? I set 2205mhz at 1.093v, but in games or Port Royal it clocks down to 2160mhz at 43c-45c. After I exit those apps, it runs at 2175mhz at 31c idle.
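Behavior like that is GPU Boost's thermal binning: Ampere sheds roughly 15 MHz each time the core temperature crosses a bin of about 5C. A rough Python sketch of the idea (the 25C starting threshold and 5C bin width are assumptions for illustration, not published values):

```python
def boost_bin_clock(set_mhz, temp_c, base_temp=25, bin_width=5, step_mhz=15):
    """Estimate the effective clock after GPU Boost thermal bins.

    Assumes the card drops step_mhz each time core temperature crosses
    a bin_width boundary above base_temp; both thresholds are guesses.
    """
    bins = max(0, (temp_c - base_temp) // bin_width)
    return set_mhz - bins * step_mhz

# 2205 MHz set, 44 C under load -> three bins of droop
print(boost_bin_clock(2205, 44))  # 2160
```

With those assumed bins, a 2205 MHz lock landing at 2160 MHz at 43-45c is exactly three bins of droop; the exact thresholds vary per card and driver.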


----------



## HyperMatrix

mirkendargen said:


> Yeah since this uses the RT cores only it clocks a lot differently (and uses less power so you'll never hit the power limit). Don't consider it indicative of what you'll get in normal workloads.
> 
> It does make me wonder what part of the chip is becoming unstable in Metro/Control making them unable to clock as high stably as Port Royal. Maybe it's tensor cores, I have DLSS on for both all the time.


You can test Tensor Core max clocks using an NVIDIA Optical Flow-heavy workload. I use SVP, and if I try to live-convert 24-30fps 4K content to 120fps, it maxes them out completely. I was able to get higher clocks with Tensor Cores (2250MHz) at full load than with RT cores at full load.


----------



## HyperMatrix

ExDarkxH said:


> Oh of course no way in hell i will get 2300 in games
> [snip]
> Is 2200 actually possible?? Lets find out


If you're able to maintain 2200MHz at about 40% load, you should theoretically be able to maintain it at 100% load if you've shunted out the power limit and are also able to keep it cool.


----------



## olrdtg

Well, I have successfully reached 2230mhz stable on my 3090 FE after installing the water block..... one time. Now it won't go above 2200mhz on subsequent benchmarks in Port Royal, and it usually settles down to 2190MHz after a minute or so, where it stays. Temps don't go over around 47C, which is nice. My Threadripper is holding me back from the HoF on Port Royal though; I need to overclock it, as running at stock speeds doesn't seem to help the score much. But at least I finally peaked at 14600 points.


kx11 said:


> tell us more about the fan, can you mount it on top if the gpu is placed in the top PCI slot?!


Unfortunately no. I figured that out trying to put it in last night :\ I ended up taking their chipset fan off. If the fan screw were mounted on the other side of the card it would probably fit. Super unfortunate, as I did some thermal testing, and the included backplate does *NOT* do a good job dissipating the heat from the RAM on the back of the card. I ran an overnight stress test and the backplate got up to around 98C, almost 18C hotter than the FE air cooler backplate got under a similar stress test. I can see this being a huge problem for the longevity of the card over time. Luckily, I don't plan on keeping this block on, as I want to get the EKWB FE block as soon as they release them for the 3090.

On the lighter side of things - now that my 3090 FE is water cooled, I was able to hit 2230Mhz stable during an initial benchmark. Unfortunately all subsequent benchmarks were only able to reach 2200Mhz, followed by settling in around 2190Mhz after about a minute and a half. I'm thinking I may have degraded the silicon a little bit during the initial benchmark. Temperatures other than the backplate are great. The card never goes higher than 47 C, allowing the card to use the extra voltage step with core voltage set to 100%. During a gaming test, clocks were rock solid at 2190mhz, and the boost of going from around 1950mhz on air to 2190mhz did offer a tiny increase in performance. Though I will probably set the overclocks to a much more conservative value for daily driving.

OC during the initial bench and subsequent benchmarks:
+210 core
+1100 mem
100% core voltage
+114% power limit (shunt modded, card peaked at 590W & settled around 500W according to kill-a-watt)

OC during gaming:
+180 core
+500 mem
100% core voltage
+114% power limit

For daily driving I'll probably go with +80/+350 to just push me over 2100mhz core and 10000mhz memory, though I may forego memory overclocking until I get the EK block or figure out a better cooling solution for the VRAM on the back side of the card.


----------



## Foxrun

olrdtg said:


> Well, I have successfully reached 2230mhz stable on my 3090 FE after installing the water block..... one time.
> [snip]


What were your clocks before the shunt? Were they below 2000mhz under heavy load?


----------



## olrdtg

Foxrun said:


> What were your clocks before the shunt? Were they below 2000mhz under heavy load?


Prior to the shunt, the highest I could achieve on chilled air was around 2070mhz, BUT I didn't have a water block at the time to do some in-depth testing under cooler conditions.


----------



## HyperMatrix

olrdtg said:


> [snip]
> For daily driving I'll probably go with +80/+350 to just push me over 2100mhz core and 10000mhz memory, though I may forego memory overclocking until I get the EK block or figure out a better cooling solution for the VRAM on the back side of the card.


For memory cooling, if you have a top mounted radiator, you can switch the fans to work as an intake instead of an exhaust so it blows colder (relative to vram temps) air on the back of the card. Though you’d need to create an alternate exhaust point or just leave the side of your case open.


----------



## Foxrun

olrdtg said:


> Prior to the shunt, the highest I could achieve on chilled air was around 2070mhz BUT I didn't have a water block at the time to do some in depth testing under cooler conditions


Was that under load like port royal or some equivalent? I still have my FE and I remember clocks would vary greatly depending on the game being played. I’m trying to figure out if I should shunt it before I decide to get rid of it for my FTW3.


----------



## motivman

So I was just having a private conversation with member Carillo about shunt modding, and he told me he has killed 3 3090s due to shunt modding, including his Strix. This does not make sense to me; I haven't seen any other reports of people's cards dying from shunting. So I am gonna ask: has anyone else experienced premature death of their card due to shunting? I kinda feel he might be playing a sick joke on me....


----------



## olrdtg

Foxrun said:


> Was that under load like port royal or some equivalent? I still have my FE and I remember clocks would vary greatly depending on the game being played. I’m trying to figure out if I should shunt it before I decide to get rid of it for my FTW3.


Under load, yes: Port Royal, Red Dead Redemption 2, and Unigine Superposition were the 3 I mainly tested with. Using Furmark I only ever see clocks around 1875mhz, as it acts more like a power virus; even with the shunt mod, Furmark causes the GPU to hit its limit around 590W, causing it to clock down thinking it reached the 400W max.

Doing some more testing: if I try to push the clocks higher I can hit 2250mhz, but it results in an almost instant crash of whatever benchmark I use. I either degraded the silicon last night or I need to do a volt mod to reach higher clock speeds, though a volt mod of even a few mV could degrade the silicon even further. I'll probably wait until I can buy another 3090 FE or 2 when they are in stock again before attempting a volt mod on this card.

I highly recommend a shunt mod on the FE if you can handle it (look at bmgjets' easy mode shunt modding thread).



motivman said:


> So was just having a private conversation with member Carillo about shunt modding, and he just told me he has killed 3 3090s due to shunt modding including his strix, this does not make sense to me, haven't seen any other reports of peoples cards dying from shunting...So I am gonna ask, has anyone else experienced premature death on their card due to shunting... I kinda feel he might be playing a sick joke on me....


Card death from a shunt mod is usually not caused by the shunts themselves, but by some other form of user error. It's likely he damaged the cards in some other unforeseen way by shorting something out. That, or he allowed too much power to flow into the card, overloading it (though OCP should have stopped that). If you do the mod correctly, it's relatively safe. The safest method by far for longevity purposes, IF you have a steady hand that is, is to replace the original shunts with 2mOhm or 3mOhm shunt resistors. If you don't have the confidence to remove SMDs or lack a steady hand, the best method is the one posted in bmgjets' "easy mode shunt modding" thread.

So, either he really messed something up or he's trolling you 🤷‍♂️
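Since the mod works by fooling the power controller, the math is worth spelling out: the controller measures the voltage drop across the shunt and divides by the resistance it assumes is still stock, so the reported power scales by R_effective / R_stock. A quick Python sketch (the 5 mOhm stock value and the 3 mOhm / stacked 10 mOhm figures are illustrative, not measured from any specific card):

```python
def parallel(r1, r2):
    """Effective resistance of two resistors in parallel (e.g. a shunt
    stacked on top of the stock one, as in the easy-mode method)."""
    return r1 * r2 / (r1 + r2)

def reported_power(actual_w, r_stock_mohm, r_effective_mohm):
    """Power the card *thinks* it draws after a shunt mod.

    The controller reads V = I * R_effective but computes
    I = V / R_stock, so the reading scales by R_effective / R_stock.
    """
    return actual_w * r_effective_mohm / r_stock_mohm

# Replacing a (hypothetical) 5 mOhm stock shunt with a 3 mOhm part:
print(reported_power(590, 5.0, 3.0))   # 354.0 -> 590 W real reads as 354 W
# Stacking a 10 mOhm resistor across the stock 5 mOhm shunt instead:
print(round(parallel(5.0, 10.0), 2))   # 3.33 mOhm effective
```

For reference, a card hitting its reported 400W cap at a measured ~590W implies an effective/stock ratio of about 400/590, roughly 0.68, whatever the actual resistor values are.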


----------



## Falkentyne

olrdtg said:


> Under Port Royal, Red Dead Redemption 2 and Unigine Superpostion were the 3 I mainly tested with. Using Furmark I only ever see clocks around 1875mhz as it acts more like a power virus, even with the shunt mod Furmark causes the GPU to hit its limit around 590W causing it to clock down thinking it reached the 400W max.
> [snip]


Hi,
can you kindly take a look at these posts?









[Official] NVIDIA RTX 3080 Owner's Club | "with gpu-z after shunt it says 260w max So roughly 500 watt" | www.overclock.net


----------



## motivman

olrdtg said:


> [snip]
> So, either he really messed something up or he's trolling you 🤷‍♂️


I am thinking he is trolling me, but why do that? I came to him (with good intentions) just to gather some relevant information... not nice, if that is what he is doing.... I thought the purpose of this thread is to help each other out... SMH


----------



## Dreams-Visions

Update on the 4 cards I picked up and was testing. If you all will recall:










I did all my testing on the same rig, case sides off, 100% fans, reasonably cool room. I didn't want to repaste the cards and jeopardize warranties or returns, so they were tested as-is out of the box. As benchmarks went, for the silicon I received:

*Summary: *XTrio w/500W EVGA Bios > FE >~ Strix > FTW3U

*XTrio (w/500W EVGA bios)*
The XTrio was shockingly quiet even at max fan speed. The card was the second coolest of the bunch, maxing out at around 73C in stress testing, and did the best job of holding its clocks stable. All of the reviews saying it is super quiet are spot-on. The 500W bios allows the card to bounce between 480W-510W in benchmarks. This was the card I decided to keep, and I am running it daily at 0.956V @ 2040MHz. It's _rock solid_ there and draws a typical 380W-420W. Memory is very strong, and actually goes up to about +1248 before performance degrades. That memory clock wasn't stable @ 0.956V, so I dialed it down to +1048 for daily usage.

*BEST STABLE OC (Afterburner):* +161 / +1148
*CLOCK STABILITY:* 2085-2130, ~480W-510W, max temp 69C
*DAILY (Afterburner):* 0.956V @ 2040 locked / +1048, ~62C
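Those daily numbers pass a first-order sanity check, since dynamic power scales roughly as P ∝ f·V². A quick Python estimate, using the ~500W / 2130MHz benchmark figures above (the 1.081V stock boost voltage is an assumption, not a measurement):

```python
def relative_power(f_new, f_old, v_new, v_old):
    """First-order dynamic power scaling: P is proportional to f * V^2."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# ~500 W at up to 2130 MHz full tilt vs. the 0.956 V @ 2040 MHz daily
# setting; 1.081 V is an assumed stock boost voltage.
est_w = 500 * relative_power(2040, 2130, 0.956, 1.081)
print(round(est_w))  # ~375 W, in the ballpark of the observed 380-420 W
```

The voltage term dominates, which is why a ~12% undervolt buys far more watts back than the ~4% clock reduction costs in performance.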


*FE*
FE at 100% fans is extremely loud. Like, unacceptably loud. But the nice thing about the FE is that it will likely never reach anywhere near 100% fan speed because its cooler is SO GOOD. In same stress tests and environment, this card maxed at *56C*. That's almost 20C cooler than the XTrio, the second best performer. Even when undervolting the XTrio to around 400W (which is where the FE hangs around in benchmarks), it's still a good ~10C warmer than the FE. I continue to go back and forth on keeping this one because that EK FE waterblock is gorgeous and a shunt mod would be a trivial way to really find out how good this card could be. I have another couple of months to decide if I want to keep it or not (Best Buy), but I'm probably going to return it as I am now VERY intrigued by the EVGA FTW3 HC.

*BEST STABLE OC (Afterburner):* +170 / +698
*CLOCK STABILITY:* 1950-2010, ~390W-408W, max temp 56C


*Strix OC (stock and w/500W EVGA bios)*
Strix just ran too damn hot to get a confident impression either way. It would get up to about 75C with stock bios, but around 84C on the 500W EVGA bios. Just as a reminder, that is the same bios that the XTrio maxed at 69C on; a huge gap in cooling performance. Volume at 100% fan speed was acceptable but was definitely much louder than the XTrio. Under water and/or with a repaste it may have performed better, but again, I'm not going to open a card that I'm not 100% sure I want to keep and this wasn't the impression I was looking for out of the vaunted Strix model OOTB. What sealed my decision to not keep this card was being able to compare OC's for this and the XTrio on the SAME 500W bios directly. The XTrio could simply do more. Higher core OC, higher mem OC, significantly lower temps. Strix kept it close in benchmarks and I know it was very limited by temps, but again, wasn't ready to take any risk given the cost of the card. I was going to return it but I ended up selling this one for my cost + ship which worked out because someone who wanted a Strix got one and I got to keep my credit card's reward points.

*BEST STABLE OC (Afterburner):* +141/+898
*CLOCK STABILITY:* 1980-2070, 463W-480W, max temp 74C, stock bios
*CLOCK STABILITY:* 2010-2115, ~510W, max temp 84C, 500W FTW3U bios


*FTW3 (stock only)*
FTW3 was slightly louder than the Strix at 100% fan speed, but not offensive like the FE. Not a lot to say here; clocks would regularly dip below 2GHz which sort of automatically disqualified it from this race, and the max temp got up there much higher than I would consider acceptable even on the basic 450W bios. Just a reminder at this point: all 4 cards were tested in the exact same environment. Huge room, nice and cool, open case. This card and the Strix were literally ~20C+ hotter than the FE under the same load. This card had the weakest silicon on the core of the bunch, maxing at just +90 core offset AND the second weakest memory offset, beating out only the FE.

*BEST STABLE OC (Afterburner):* +90/+800
*CLOCK STABILITY:* 1950-2045, ~425-450W, max temp 80C, stock bios (did not try 500W bios because of the issues)

--

There was a bunch of other data and scores and stuff (scoring as I incrementally rose core and mem offsets, some game bench scores), but I'll only post some of those if someone is really interested. Realistically, all were close enough to consider them a wash in real world applications, but the FTW3 was about 300 points behind the other 3 cards (which were all within 50 points of each other in 3DMark benches). The FE beat out the Strix and FTW3U in both Port Royal and Time Spy despite having 100W less power available to it.

In the end, as we should all know at this point: the silicon lottery is more important than the make and model on the side of your box. The chips for these cards are not binned, so you can get a core that can't OC much...or one that can OC a lot. Same with memory. But outside of benchmarking, I don't think you'd miss or appreciate a few extra frames either way.

Back to the Hydro Copper, I didn't expect them to be so competitively priced and was always planning to put whatever card I settled on under water. The HC represents a savings of at least a couple hundred depending on the card + block (Strix + block is like $200 more than the HC, for example). Unless I deem the XTrio to be a proper golden chip (I think it might be given those offsets and ease of applying the 500W bios), the HC value proposition seems unbeatable for the price. Also, what a nuts air cooler the FE has. If I were building on air and not concerned at all about leaderboard benchmarking, I don't see why I would purchase any other card. It's beautiful, it runs cold, boosts to 400W (which is where you want to be for daily gaming regardless of card to keep temps down on air), and sits right around 2GHz. At best, we're looking at a few FPS advantage for one of the more expensive AIBs? While being able to gawk at arguably the sexiest waterblock ever made in that EK FE block.

So yea, having a bit of analysis paralysis, but I can at least say I'm down to 3 options. FE + EK block? XTrio + someone's block? Sell everything and try to score a Hydro Copper sometime next month? Decisions, decisions. But at least they will be mostly informed decisions, free from reliance on YouTubers.


----------



## Antsu

ExDarkxH said:


> I'm getting some ridiculous clocks using the new DXR 3dmark feature
> [snip]
> Could i realistically be at 2,200mhz with a block if i stay under 40c? maybe!


I crash instantly at 2235Mhz with 9C ambient, you average 2250Mhz! You better be optimistic, damnit.  Wish I had a block so I could confirm if my chip is actually as bad as it seems.


----------



## DrunknFoo

Tried this test once and ignored it cause I had nothing to compare it to.... Didn't seem to generate as much heat as Port Royal.... I'll have a look into this one again.

On air, 2205 peak, 2170 avg. Weird... can up the offset clock by +150 on PX1 for this vs Port Royal... so completely different instruction set, maybe only tensor cores being used?

I got no idea how much I'm drawing, can't get to 2200... temps at 60c at end of run.

Kill-a-watt on the way, maybe a block whenever someone other than Optimus decides to release one for the FTW


----------



## Falkentyne

Dreams-Visions said:


> Update on the 4 cards I picked up and was testing. If you all will recall:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I did all my testing on the same rig, case sides off, 100% fans, reasonably cool room. I didn't want to repaste the cards and jeopardize warranties or returns, so they were tested as-is out of the box. As benchmarks went, for the silicon I received:
> 
> *Summary: *XTrio w/500W EVGA Bios > FE >~ Strix > FTW3U
> 
> *XTrio (w/500W EVGA bios)*
> XTrio was shockingly quiet even at max fan speed. Card was the second coolest of the bunch, maxing out at around 73C in stress testing and did the best job maintaining its clocks stable. All of the reviews saying it is super quiet are spot-on. 500W bios allows the card to bounce between 480W-510W in benchmarks. This was the card I decided to keep, and am running it daily at 0.956mV @ 2040MHz. It's_ rock solid_ there and is drawing 380W-420W typical. Memory is very strong, and actually goes up to about +1248 before seeing performance degrade. That memory clock wasn't stable @ 0.956mV so I dialed that down to +1048 daily usage.
> 
> *BEST STABLE OC (Afterburner):* +161 / +1148
> *CLOCK STABILITY:* 2085-2130, ~480W-510W, max temp 69C
> *DAILY (Afterburner):* 0.956mV @ 2040 locked / +1048, ~62C
> 
> 
> *FE*
> FE at 100% fans is extremely loud. Like, unacceptably loud. But the nice thing about the FE is that it will likely never reach anywhere near 100% fan speed because its cooler is SO GOOD. In same stress tests and environment, this card maxed at *56C*. That's almost 20C cooler than the XTrio, the second best performer. Even when undervolting the XTrio to around 400W (which is where the FE hangs around in benchmarks), it's still a good ~10C warmer than the FE. I continue to go back and forth on keeping this one because that EK FE waterblock is gorgeous and a shunt mod would be a trivial way to really find out how good this card could be. I have another couple of months to decide if I want to keep it or not (Best Buy), but I'm probably going to return it as I am now VERY intrigued by the EVGA FTW3 HC.
> 
> *BEST STABLE OC (Afterburner):* +170 / +698
> *CLOCK STABILITY:* 1950-2010, ~390W-408W, max temp 56C
> 
> 
> *Strix OC (stock and w/500W EVGA bios)*
> Strix just ran too damn hot to get a confident impression either way. It would get up to about 75C with stock bios, but around 84C on the 500W EVGA bios. Just as a reminder, that is the same bios that the XTrio maxed at 69C on; a huge gap in cooling performance. Volume at 100% fan speed was acceptable but was definitely much louder than the XTrio. Under water and/or with a repaste it may have performed better, but again, I'm not going to open a card that I'm not 100% sure I want to keep and this wasn't the impression I was looking for out of the vaunted Strix model OOTB. What sealed my decision to not keep this card was being able to compare OC's for this and the XTrio on the SAME 500W bios directly. The XTrio could simply do more. Higher core OC, higher mem OC, significantly lower temps. Strix kept it close in benchmarks and I know it was very limited by temps, but again, wasn't ready to take any risk given the cost of the card. I was going to return it but I ended up selling this one for my cost + ship which worked out because someone who wanted a Strix got one and I got to keep my credit card's reward points.
> 
> *BEST STABLE OC (Afterburner):* +141/+898
> *CLOCK STABILITY:* 1980-2070, 463W-480W, max temp 74C, stock bios
> *CLOCK STABILITY:* 2010-2115, ~510W, max temp 84C, 500W FTW3U bios
> 
> 
> *FTW3 (stock only)*
> FTW3 was slightly louder than the Strix at 100% fan speed, but not offensive like the FE. Not a lot to say here; clocks would regularly dip below 2GHz which sort of automatically disqualified it from this race, and the max temp got up there much higher than I would consider acceptable even on the basic 450W bios. Just a reminder at this point: all 4 cards were tested in the exact same environment. Huge room, nice and cool, open case. This card and the Strix were literally ~20C+ hotter than the FE under the same load. This card had the weakest silicon on the core of the bunch, maxing at just +90 core offset AND the second weakest memory offset, beating out only the FE.
> 
> *BEST STABLE OC (Afterburner):* +90/+800
> *CLOCK STABILITY:* 1950-2045, ~425-450W, max temp 80C, stock bios (did not try 500W bios because of the issues)
> 
> --
> 
> There was a bunch of other data and scores and stuff (scoring as I incrementally rose core and mem offsets, some game bench scores), but I'll only post some of those if someone is really interested. Realistically, all were close enough to consider them a wash in real world applications, but the FTW3 was about 300 points behind the other 3 cards (which were all within 50 points of each other in 3DMark benches). The FE beat out the Strix and FTW3U in both Port Royal and Time Spy despite having 100W less power available to it.
> 
> In the end, as we should all know at this point: the silicon lottery is more important than the make and model on the side of your box. The chips for these cards are not binned, so you can get a core that can't OC much...or one that can OC a lot. Same with memory. But outside of benchmarking, I don't think you'd miss or appreciate a few extra frames either way.
> 
> Back to the Hydro Copper, I didn't expect them to be so competitively priced and was always planning to put whatever card I settled on under water. The HC represents a savings of at least a couple hundred depending on the card + block (Strix + block is like $200 more than the HC, for example). Unless I deem the XTrio to be a proper golden chip (I think it might be given those offsets and ease of applying the 500W bios), the HC value proposition seems unbeatable for the price. Also, what a nuts air cooler the FE has. If I were building on air and not concerned at all about leaderboard benchmarking, I don't see why I would purchase any other card. It's beautiful, it runs cold, boosts to 400W (which is where you want to be for daily gaming regardless of card to keep temps down on air), and sits right around 2GHz. At best, we're looking at a few FPS advantage for one of the more expensive AIBs? While being able to gawk at arguably the sexiest waterblock ever made in that EK FE block.
> 
> So yeah, having a bit of analysis paralysis, but I can at least say I'm down to 3 options. FE + EK block? XTrio + someone's block? Sell everything and try to score a Hydro Copper sometime next month? Decisions, decisions. But at least they will be mostly informed decisions, free from reliance on YouTubers.


When you did your temp and 100% fan speed testing, what program were you playing/benching when you recorded the max temps?
What is your ambient temp?

I've never seen temps that low on the FE, and I even repasted mine and applied 12 W/mK thermal pads to the entire backplate RAM area and hotspots.
My ambient is 26C/78.5F. I guess you're living in an igloo compared to me...


----------



## warrior-kid

Now got an MSI Trio X instead of an MSI Ventus OC. Flashed it with a 500W EVGA BIOS, and I am still getting ultra-low scores like 13.3K in PR. What am I missing?

Otherwise: Threadripper 3970X, Corsair 3200 RAM, Fractal 7 XL.


----------



## dr/owned

olrdtg said:


> Unfortunately no. I figured that out trying to put it in last night :\ I ended up taking their chipset fan off. If the fan screw was mounted on the other side of the card it would probably fit. Super unfortunate, as I did some thermal testing, and the backplate included does *NOT *do a good job dissipating the heat from the RAM on the back of the card. I ran an overnight stress test and the backplate got up to around 98 C, almost 18 C hotter than the FE air cooler backplate got under a similar stress test. I can see this being a huge problem for the longevity of the card over time, luckily I don't plan on keeping this block on, as I want to get the EKWB FE block as soon as they release them for the 3090.


Which backplate is this? Presumably Bitspower, since I think they and EK are the only two that support the FE. I'm cooking up a scheme to drill and counterbore the backplate with a 75mm x 75mm hole pattern so that any Intel waterblock can be mounted with paste instead of pads or adhesive. The FE is a problematic card for finding a safe 75mm x 75mm area, though, because they had to pack so much stuff onto the back of the PCB. Last resort is drilling and tapping the backplate itself instead of using screws that pass through.


----------



## SoldierRBT

I don't get the amount of hate I've been reading in this forum the last couple of weeks. We all share the same hobby and we should help each other. 
I'm not sure if this is all about the stock issues with this generation. Hopefully it gets better


----------



## motivman

SoldierRBT said:


> I don't get the amount of hate I've been reading in this forum the last couple of weeks. We all share the same hobby and we should help each other.
> I'm not sure if this is all about the stock issues with this generation. Hopefully it gets better


All I did was ask a simple question, but I guess someone was having a bad day. I have been registered to OCN since 2012, but haven't really been active. This experience has definitely put a sour taste in my mouth. I have had the pleasure of interacting with other members, and they have been super helpful.


----------



## olrdtg

dr/owned said:


> Which backplate is this? Presumably Bitspower since I think them and EK the only two that support FE. [...]


Yep, it's the Bitspower block. It's a fairly thick backplate, probably 2mm (I'd need to find my calipers to get an exact measurement), but you could easily countersink an area for a block on there. Mounting is the tricky part. One company recently released a block that rubber-bands to the backplate of the 3000-series GPUs to provide active VRAM cooling, but it just sits on top and uses tiny tubing to connect to the rest of the loop in parallel. Countersinking a nice area for a block with some paste might do some good. Use some good quality pads between the VRAM and the backplate and you'd probably get decent VRAM cooling.
I really want to go with the EK block because it looks really sexy, but I'm probably going to do something similar: grind down the grooves on the back, carefully make an area to place a small waterblock, then salvage one of my old EK blocks, remove the standoffs so it sits flush, and come up with some sort of mounting system for it, possibly a home-made retention system that wraps around the entire card and block (zip ties? lol). It won't look elegant but it should get the job done.


----------



## olrdtg

Antsu said:


> I crash instantly at 2235Mhz with 9C ambient, you average 2250Mhz! You better be optimistic, damnit.  Wish I had a block so I could confirm if my chip is actually as bad as it seems.


I believe that test uses the tensor cores more than anything else, which isn't a real good indicator of how the card performs overall in rasterization workloads (but I may be wrong about this -- @mirkendargen mentioned it on the previous page as well). In my testing I can hit similar clocks using the DXR feature test in 3DMark, usually around 2250Mhz, but if I even try to edge near 2230Mhz on something like Time Spy or Port Royal, it's an instacrash the second the benchmark starts. I was able to do one Port Royal run last night at 2230Mhz, after which I'm only able to get as high as 2200Mhz +/- 5mhz. However, having clock speeds that good on the tensor cores could be a darn good thing for ray tracing/denoising!


----------



## khunpunTH

Just got mine 3 days ago. The MSI 3090 Gaming X Trio is so impressive.
MSI also has a low-temp BIOS, which runs at about 67C under full load. You can download it from their website; that's the one I normally use for gaming.
Then I flashed the 500W BIOS to chase scores, and I'm now satisfied with my score without any shunt mod or waterblock.



https://www.3dmark.com/spy/15269156




https://www.3dmark.com/pr/497741



The product is very good in terms of price and performance compared to other brands. $300 cheaper than the Strix.
Anyway, my Strix arrives next week. I'll test it and post the results.


----------



## reflex75

Dreams-Visions said:


> Update on the 4 cards I picked up and was testing. If you all will recall:
> 
> I did all my testing on the same rig, case sides off, 100% fans, reasonably cool room. I didn't want to repaste the cards and jeopardize warranties or returns, so they were tested as-is out of the box. As benchmarks went, for the silicon I received:
> 
> *Summary: *XTrio w/500W EVGA Bios > FE >~ Strix > FTW3U

Click to expand...


Thank you for the feedback. 
What 3DMARK results are you getting ?


----------



## Zooms

Hello,

I just put an EKWB waterblock on my Asus 3090 Strix OC.

I'm new to GPU overclocking. I know Afterburner etc., but I'm interested in BIOS flashing.

I saw that there is an EVGA 500W BIOS.

Is it risk-free? Do you just flash the BIOS normally, like a BIOS update from ASUS (the recent one for the fans)?


----------



## Antsu

Zooms said:


> Is it risk-free? Do you just flash the BIOS normally, like a BIOS update from ASUS (the recent one for the fans)?


It is quite safe, feel free to do it. Look up any flashing guide and just use the newest NVFlash. I have the 500W BIOS on my Strix as we speak. Would be nice if you could post some results after!
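For anyone following along, the usual NVFlash flow looks roughly like this. This is a sketch, not a guide: the filenames are placeholders, the binary is typically `nvflash64` on Windows run from an elevated prompt, and `DRY_RUN=1` (the default here) only prints the commands instead of executing them.

```shell
# Rough sketch of the usual NVFlash procedure. Filenames are placeholders.
# DRY_RUN=1 prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run nvflash64 --save backup.rom    # always dump the stock BIOS first
run nvflash64 --protectoff         # disable EEPROM write protection
run nvflash64 -6 evga_500w.rom     # flash; -6 overrides the subsystem-ID mismatch check
# then reboot and re-apply your fan curve / OC in Afterburner
```

If anything goes wrong, you can boot off the iGPU or a second card and flash `backup.rom` right back the same way.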


----------



## Zooms

So it's not just clicking on an exe to update the BIOS, like the new Asus Strix BIOS for the startup dB fix?

Do you have the 500W BIOS file?


----------



## GQNerd

I have an FE but picked up a Strix for comparison. 

The FE is awesome, but power limited. Highest score was 14,454 in PR.

As for the Strix, I got better results with the stock 480W BIOS (14,668 PR) vs the EVGA 500W (14,489 PR).

Both able to OC memory to +1300 (+1200 stable).

Strix runs much quieter at max, with higher clocks/scores, but higher temps.

FE is loud AF at max, but runs cooler and is BEAUTIFUL!


I don't know which card to keep, but either one will go under water and get shunt-modded.

My dilemma is I can't discern which has more potential.


----------



## olrdtg

Miguelios said:


> I have an FE but picked up a Strix for comparison.
> 
> The FE is awesome, but Power limited. Highest Score was 14,454 PR
> 
> As for the Strix, I got better results with the stock 480w bios:14,668 PR
> vs the EVGA 500w: 14,489 PR
> 
> Both able to OC memory to +1300 (1200 stable)
> 
> Strix runs much quieter at max with higher Clocks/scores, but higher temps..
> 
> FE is loud AF at max, but runs cooler and is BEAUTIFUL!
> 
> 
> I don't know which card to keep but either one will be under water and shunt-modded.
> 
> My dilemma is I can't discern which has more potential??


Well, the Strix isn't as power limited, *but* if you shunt mod the FE you can basically remove the power limit. Well, not really, but you can bring the limit up to 600W, or even 900W.
The FE is also getting those super awesome EK blocks - they kinda look like Ridge wallets, but still really cool looking blocks. Expensive tho.

Honestly, if I were you, I'd shunt mod them both, get them to the exact same power limit, then test the silicon on both. Keep whichever can achieve higher clocks with less power, then sell the other one on eBay as pre-shunt-modded (believe me, you can add like $150 to the price tag for this service alone; I was able to sell one of my Titan RTXs a week before the 3000-series launch at nearly MSRP because it was shunt modded).
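For reference, the arithmetic behind a stacked-shunt mod is just parallel resistance. The values below are illustrative: 5 mΩ is the commonly reported stock shunt value on these boards, not something measured here.

```python
def parallel(r1: float, r2: float) -> float:
    """Resistance of two shunts stacked in parallel."""
    return r1 * r2 / (r1 + r2)

def effective_limit(bios_limit_w: float, stock_ohm: float, stacked_ohm: float) -> float:
    """The controller measures voltage across the shunt and assumes the stock
    resistance, so real draw = reported draw * (stock / new resistance)."""
    return bios_limit_w * (stock_ohm / parallel(stock_ohm, stacked_ohm))

# Stacking 5 mOhm on a 5 mOhm stock shunt halves the sensed current,
# so a 450W BIOS limit becomes ~900W of real headroom; a 15 mOhm stack gives ~600W.
print(effective_limit(450, 0.005, 0.005))
print(effective_limit(450, 0.005, 0.015))
```

That's where numbers like "600W, or even 900W" come from: the BIOS limit never changes, only what the card thinks it is drawing.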


----------



## Zooms

What version of the NVIDIA driver are you using?


----------



## cloudconnected

Spiriva said:


> Try one of the Gigabyte bioses, "gaming oc" or "aorus master" They both are 390w bioses and currently the best for 2x8pin cards.
> 
> Gigabyte RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory - www.techpowerup.com
> 
> or
> 
> Gigabyte RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory - www.techpowerup.com
> 
> or
> 
> Gigabyte RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory - www.techpowerup.com
> 
> The one at the very bottom has a bit more aggressive fan curve.


Thanks

I can't use the Aorus BIOS because most of the ports don't work with it.
Already tried that.

Why can't anyone upload their Gaming OC Silent BIOS?


----------



## warrior-kid

khunpunTH said:


> Just got mine 3 days ago. Msi 3090 gaming x trio is so impressive. [...]


Please guys, I need help; I already asked on the previous page. The max I can do on my Trio X is 13.3K in PR on the EVGA BIOS. I'm increasing the power limit by 12%, fans at 90%, max performance set for GPU and CPU. What else can I do?


----------



## cloudconnected

khunpunTH said:


> Just got mine 3 days ago. Msi 3090 gaming x trio is so impressive. [...]


How loud is the MSI with the Low Temperature Bios?


----------



## asdkj1740

3080 Aorus Xtreme, 450W BIOS, near 3000 RPM -> ~63C.
Not too bad, and should be better than the Strix.


----------



## khunpunTH

warrior-kid said:


> Please guys, I need help, I already asked on the other page. max I can do on my Trio X is 13.3K in PR on the EVGA BIOS. I am increasing the power limit to 12%, set fans on 90%, set max performance GPU and CPU. What else can I do?


Do you set power management mode to prefer max performance in nvidia control panel?



cloudconnected said:


> How loud is the MSI with the Low Temperature Bios?


I cannot hear it. I recommend using it,
combined with an undervolt of 2010MHz @ 0.95v for normal use. Highest temp is 65C.
It can go further, but I prefer lower temps.


----------



## warrior-kid

khunpunTH said:


> Do you set power management mode to prefer max performance in nvidia control panel?


Yep, all on maximum. Do I need to max the power limit, play with voltage, to get this any higher? Any clues?
UPDATE: Now set power limit to the maximum +19%, +140 core, +282 mem. That takes it to only 13411.


https://www.3dmark.com/3dm/53078053?


UPDATE2: Interesting that the monitor resolution affects the score. I've raised mem to +529, but otherwise these are two identical runs:

1. Native 8K monitor resolution: 13458: https://www.3dmark.com/3dm/53078543?
2. Switched to 4K: 13696: https://www.3dmark.com/3dm/53078384?

That's a real difference. What monitor resolution do you set when you run the PR benchmark, given that it apparently affects the score?

UPDATE3: Disabled GPU scheduling, mem +637: 13872 https://www.3dmark.com/3dm/53079239?
UPDATE4: 100% fan and power limit, core +140, mem +822: 14006: https://www.3dmark.com/3dm/53081040?
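Running the numbers on the updates above (scores copied from the runs listed; the labels are just shorthand for each step), the tuning was worth a few percent end to end:

```python
base = 13411  # first run at +19% PL, +140 core, +282 mem
runs = {
    "4K monitor, mem +529": 13696,
    "HW GPU scheduling off, mem +637": 13872,
    "100% fan/PL, mem +822": 14006,
}
# Print each step's Port Royal score and its gain over the baseline run.
for name, score in runs.items():
    gain = (score - base) / base
    print(f"{name}: {score} ({gain:+.1%})")
```

So roughly +2% from the resolution change alone and about +4.4% total over the first run.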


----------



## khunpunTH

warrior-kid said:


> Yep, all on maximum. Do I need to go max power limit, play with voltage, to get this anywhere higher? Any clues?
> UPDATE. Now set Power Limit to maximum +19%, +140 core, +282 mem. This takes it to only 13411.
> 
> 
> https://www.3dmark.com/3dm/53078053?


I did max power limit, +190 core, +1000 mem, max temp slider, and 100% fan.


----------



## warrior-kid

khunpunTH said:


> I did max power limit . +190 core +1000 mem. max temp bar and 100% fan.


Thanks, updated above for the record. +140 core, +822 mem, and 100% fan helped: PR is now 14006 with the monitor set to 4K: https://www.3dmark.com/3dm/53081040?.


----------



## lokran88

Carillo said:


> Talked to Aquatuning yesterday ( who gets their parts from Aquacomputer) The Strix blocks are delayed another 3 weeks from now
> 
> Edit: glad i have my ****ty byksi block for now. With some power tools i does fit
> View attachment 2465424


Nice. But I would recommend ordering from Aquacomputer's own shop, as they will probably serve their own customers first and only then ship to other retailers like Aquatuning.


----------



## Zogge

Just to make sure: a Strix 3090 with the EVGA 500W BIOS shows max power of ~375W and pin 3 at ~3W.
This is just a reporting error and it is in fact drawing ~150W on pin 3 and 500+W in total... or?


----------



## Zooms

If my Asus ROG Strix OC exceeds 480W, does the BIOS make it crash?


----------



## Zogge

Zooms, have you flashed the 500W BIOS? If so, how can you see it use more than 480W?


----------



## Zooms

I use GPU Z and see 493W during time spy extreme


----------



## Zogge

So you have flashed the 500W BIOS on your Strix and can see 493W as well as full power on pin 3? So weird that I can't, if so.


----------



## Zooms

No , it's stock BIOS ASUS


----------



## Zogge

Ah ok. Anyone else with the 500W EVGA BIOS on a Strix 3090 who can explain/verify?


----------



## mbm

khunpunTH said:


> Do you set power management mode to prefer max performance in nvidia control panel?
> 
> 
> I cannot hear it. I recommend you to use it.
> and combo with undervolt 2010 mhz @ 0.95v for normal use. hightest temp is 65c.
> It can go further but i prefer lower temp.
> View attachment 2465511


Hmm, I don't understand. You say 2010 MHz for normal use... what is normal use? From the picture you only boost to 1995 MHz.


----------



## andrvas

Anyone undervolting their 3090? What voltage are you running, and what frequency? I'm at 825mv and 1830MHz, stable under heavy 4k gaming. Can't go further due to power limit (365w). Curious how others might be doing 🙂
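As a rough sanity check on why undervolts buy so much headroom: dynamic power scales roughly with f·V². This is a first-order CMOS model that ignores leakage and memory power, so treat the numbers as ballpark only, and the clock/voltage pairs below are just example values from this thread.

```python
def scaled_power(p0_w: float, f0_mhz: float, v0: float,
                 f1_mhz: float, v1: float) -> float:
    """First-order dynamic power estimate: P ~ f * V^2."""
    return p0_w * (f1_mhz / f0_mhz) * (v1 / v0) ** 2

# e.g. ~350W stock behaviour around 1995MHz/1.00V vs an 1830MHz/0.825V undervolt
est = scaled_power(350, 1995, 1.000, 1830, 0.825)
print(f"{est:.0f} W")  # roughly 220W here, i.e. an 8% clock cut for ~37% less power
```

The voltage term dominates because it's squared, which is exactly why a small frequency sacrifice frees up so much power-limit headroom.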


----------



## Emmanuel

After 2 months without a GPU, I'll be receiving my FTW3 Ultra in a few days. I was wondering, is there any advantage to using EVGA Precision X1 over MSI Afterburner for this card? I've used MSI AB for like 10 years and have no interest in LED controls. I'm just looking for the app that gives me the best control over clocks, voltage, power limits, etc.


----------



## Xdrqgol

khunpunTH said:


> undervolt 2010 mhz @ 0.95v for normal use. hightest temp is 65c.
> It can go further but i prefer lower temp.
> View attachment 2465511


Please show that it holds exactly 0.95v at 2010MHz. As has been mentioned, 2010MHz vs 1995MHz is a LONG way apart.
Let's see proper info with proof, otherwise this is BS.


*I can run 1950MHz @ 0.875v on my Zotac Trinity 👀


----------



## mbm

Running 0.85v at 1905 mhz.


----------



## GQNerd

Zogge said:


> Ah ok. Anyone else with 500w evga bios on strix 3090 who can explain/verify ?


Yes, the 3rd pin was not displaying correctly, but it was pulling the full 500W.


----------



## Devinishome

I just received an FTW3 Ultra. From some short reading, it seems like the EVGA 500W BIOS is not going to perform on the flagship EVGA card?


----------



## Edge0fsanity

Devinishome said:


> I just received a Ftw3 Ultra. From some short reading it seems like the evga 500w bios is not going to perform with the flagship EVGA card?


It was no better than stock on mine. If you want more power right now, your only option is to flash the XC3 2x8-pin BIOS. EVGA 3090 FTW POWER LIMIT BYPASS


----------



## Devinishome

Thanks for confirming. A little frustrated that the premium offering from EVGA is being ignored when it comes to maximum performance. Has EVGA provided feedback on a solution?


----------



## Lord of meat

khunpunTH said:


> Just got mine 3 days ago. Msi 3090 gaming x trio is so impressive. [...]


Where did you get the BIOS? Is it the Asus one? If so, won't it mess up the fan settings?


----------



## Edge0fsanity

Devinishome said:


> Thanks for confirming. A little frustrated with the premium offering from EVGA being ignored when it comes to maximum performance. Has EVGA provided feedback on a solution?


I seem to recall EVGA saying they were looking into it after they released it, but it's been total silence since. I'm probably going to shunt mod my card, as 500W is still not enough.


----------



## dr/owned

Edge0fsanity said:


> I seem to recall evga say they were looking into after they released it but it's been total silence since. I'm probably going to shunt mod my card as 500w is still not enough.


They want you to buy (overpay for) a Hydro Copper or Kingpin to hit insane wattages, even though you can slap a waterblock on an XC3 and shove 500W+ through it mostly-safely.


----------



## DrunknFoo

Edge0fsanity said:


> I posted this in another thread but thought I would post here as well. I know there are a lot of ftw3 owners frustrated with the 500w FTW3 bios. It seems you can flash a 2x8pin bios and get a good 50w bump over the 500w evga bios. I went with the XC3 bios. EVGA 3090 FTW POWER LIMIT BYPASS
> 
> So this does not remove the limiter, it will still throttle with the power slider maxed out and an offset applied. I ended up using the VF curve in AB and settled on 1000mV and was able to run port royal with almost no power throttling. 1 scene where it drops for about a second, thats it. Previously on the 500w bios I was limited to 950mV on the VF curve for similar no throttle results. Runs about 5C hotter and gained 45mhz on my OC.
> 
> 500w bios https://www.3dmark.com/pr/500592
> XC3 bios http://www.3dmark.com/pr/500725
> 
> Neither of these results are dialed in as I just got the card last night. Mem OC not dialed in at all. Had overlays open, gsync on, additional monitors on, not cooling the card off between runs, etc. Few hundred points left on the table.
> 
> Overall this felt like a solid 50w bump over the 500w bios. I may give the gigabyte 390w bios a try to see if it allows for higher power but I won't take the card past 1000mV on air. Already running around 72C once its fully heatsoaked. Will have this on water and will push further whenever Optimus ships my block.


How did you confirm the power draw? GPU-Z?
You likely won't see a perf cap reason, because your card is capable of drawing 450W vs the 3xxW reflected by the installed BIOS...

Tested the XC3 BIOS with my shunted FTW3 the other night. The Kill A Watt read 150W less at peak vs the 500W BIOS. Didn't think it'd work for mine but gave it a shot anyway.
But I did notice less wattage fluctuation with the XC3 BIOS compared to the 450W BIOS, which likely improves temps and stability.
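Worth remembering that a Kill A Watt measures at the wall, so the DC-side delta at the card is smaller by the PSU's efficiency. A quick conversion, with the efficiency figure as an assumption (~0.87-0.92 is typical for an 80 Plus Gold unit at these loads):

```python
def dc_delta_from_wall(wall_delta_w: float, psu_efficiency: float = 0.90) -> float:
    """Convert a wall-meter power delta to an approximate DC-side delta.
    psu_efficiency is an assumed value, not a measured one."""
    return wall_delta_w * psu_efficiency

# a 150W wall-side drop is roughly a 135W drop at the card (at 90% efficiency)
print(dc_delta_from_wall(150))
```

So the "150W less at the wall" above is on the order of 130-140W less actually reaching the card.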


----------



## Mike211

Hi


----------



## ExDarkxH

Should you be shunting SRC power?


----------



## DrunknFoo

ExDarkxH said:


> Should you be shunting Src power?


It made no measurable difference on the FTW3.

Oo, I just noticed we are neck and neck on the ladder.

I think I'll remove the SRC and the PCIe shunts to see what changes...


----------



## ExDarkxH

DrunknFoo said:


> it made no measurable difference on the ftw3
> 
> Oo i just noticed we are neck and neck on the ladder
> 
> I think ill remove the scr and the pcie shunts to see what changes...


It’s only appropriate we take out anethema then! 😆
Interesting, I've been curious whether it makes a difference.


----------



## khunpunTH

mbm said:


> hmm I dont understand. You say 2010 mhz for normal use.. What is normal use? From the picture you only boost to 1995 mhz





Xdrqgol said:


> please show that it draws exactly 0.95v at 2010mhz and like it has been mentioned - > 2010 mhz vs 1995mhz it is a LONG way.
> let's show proper info with proof otherwise this is BS.
> 
> 
> *I can run 1950 mhz @ 0.875v on my Zotac Trinity "👀


As requested. The card ran 2010MHz @ 0.95v, as you can see from GPU-Z.
I don't see any point in going as low as something like 1850MHz @ 0.825v; you are reducing your card's performance.
Mine is undervolt + overclock.
First you have to determine your peak frequency in normal operation (don't confuse this with chasing a high score; to get a high score you put everything to max, no undervolt)
by running Valhalla, Horizon, Legion, Time Spy, PR and seeing what your normal peak frequency is. Mine is 1965MHz.
So I overclock it up a little bit to 2010 and undervolt to 0.95v. I could do something like 1850MHz @ 0.825v, but what's the point when my card can normally hit 1965MHz but would be locked to only 1850MHz?
Done this way, your card runs with a bit of an overclock while reducing temperature.
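The "find your normal peak frequency first" step can be automated by logging clocks while your benchmarks run. A minimal sketch using `nvidia-smi`'s CSV query output (the query fields are real `nvidia-smi` options; the 60s/0.5s polling numbers are arbitrary choices):

```python
import subprocess
import time

def parse_peak(rows):
    """rows: lines like '1965 MHz, 402.10 W' as produced by
    nvidia-smi --query-gpu=clocks.gr,power.draw --format=csv,noheader.
    Returns the highest graphics clock (MHz) seen."""
    peak = 0
    for row in rows:
        mhz = int(row.split(",")[0].strip().split()[0])
        peak = max(peak, mhz)
    return peak

def log_peak(seconds=60, interval=0.5):
    """Poll nvidia-smi while a benchmark runs; report the peak clock seen."""
    rows = []
    end = time.time() + seconds
    while time.time() < end:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=clocks.gr,power.draw",
             "--format=csv,noheader"], text=True)
        rows.extend(out.strip().splitlines())
        time.sleep(interval)
    return parse_peak(rows)
```

Start `log_peak()` just before launching your game or 3DMark run, then pick your Afterburner curve point a notch above the reported peak, as described above.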


----------



## HyperMatrix

dr/owned said:


> They want you to buy (overpay) a hydrocopper or kingpin to hit insane wattages even though you can slap a waterblock on an XC3 and shove 500W+ through it mostly-safely.


HydroCopper is $50 more than FTW3 Ultra. How is that overpaying?


----------



## LVNeptune

Anyone know offhand if you lose the HDMI port when flashing the Strix BIOS on an FTW3 Ultra?

Confirmed it doesn't. The wattage reading is way off now, though; it shows a maximum of 300W, but framerates haven't changed, so that's obviously wrong. Temps stay the same too. Going to run a Port Royal bench.


----------



## DrunknFoo

Probably; between back-to-back Port Royal runs and even FurMark/OCCT, wattage pulls vary anywhere between 50-80W.


ExDarkxH said:


> It’s only appropriate we take out anethema then! 😆
> Interesting i have been curious if it makes a difference.


I really don't understand what's going on with this EVGA card or BIOS lol...

I'll get back to you tomorrow on whether it makes any diff with or without.


----------



## DrunknFoo

Jpmboy said:


> AFAIK, the fans and lights do not come off the card's TDP rating. The TDP rating is based on the core and a bios-set limitation.
> 
> The shunt mods done on previous gen cards were done specifically to overcome this power-induced clock bin drop. If you know which resistors to mod, on an Ampere, it works the same. (eg, lowers the signal vBIOS compares against). If we get a bios editor, no hardware mod is needed.


50% rpm to 100% rpm on the FTW3 draws an extra 15W...

Some RGB fans in general can consume up to 20-25W each...

Doubt the GPU gains anything...


----------



## mbm

#4722
So what clock speed can you maintain in all scenarios at constant full load, and at what voltage?
From the pictures you are "only" at 95% GPU load.


----------



## DrunknFoo

ExDarkxH said:


> It’s only appropriate we take out anethema then! 😆
> Interesting i have been curious if it makes a difference.


Shunted the SRC; I likely didn't notice any gains due to the increased heat that I can't control atm.


----------



## menko2

Strange results for me in benchmarks.

In Fire Strike Extreme 4K I get a better score setting 1980MHz @ 0.900v with fans at 85% than 2150MHz @ 1.050v with fans at 100%.

I left memory untouched and power at full usage (it gets to 485W max on the stock Strix BIOS). GPU temps around 67°C.

Any reason for this lower score?


----------



## long2905

menko2 said:


> Strange results for me in benchmarks.
> 
> In Fire Strike Extreme 4K I get better score setting up 1980mhz @ 0.900v and fans 85% than 2150mhz @1.050v and fans at 100%.
> 
> I left memory untouched and power at full usage (it gets to 485W max in Stocks Strix Bios). Gpu temps around 67°C.
> 
> Any reason for this lower value?


check your average clock breakdown during the run. does your card keep 2150 consistently?


----------



## cletus-cassidy

Been skimming the 230+ pages and I'm not sure there is a clear answer, but I had no 3090 and suddenly I have an FTW3 and a Strix coming this week. Can folks give me a sense of whether one is better than the other? I'll be putting it under water, hoping to OC as far as possible w/o a shunt mod, and PSU is no concern. Thanks in advance.


----------



## Zogge

I sent back my 3090 FTW3 and got the 3090 Strix OC. I was not happy with the BIOS issues etc. from EVGA, and since they do not answer, you either need to keep it and shunt for 450W+, or go with the Strix, skip shunting, and get 500W easy mode.
It works like a charm, but pin 3 power seems bugged in the readings; it delivers 500W as per the fps and benchmark scores.
The Strix runs a bit hotter than the EVGA though. If you plan to put it under water, that won't matter either.

Bottom line: keep the Strix is my recommendation.


----------



## Zooms

Zogge, can you send me your tutorial and BIOS so I can switch my 3090 Strix OC like yours?
No bugs? Instability?


----------



## mbm

long2905 said:


> does your card keep 2150 consistently?


Not likely


----------



## menko2

long2905 said:


> check your average clock breakdown during the run. does your card keep 2150 consistently?


It doesn't stay consistent. In part of the test the clock bounces between 2150 and 1920MHz. The temp in that part goes to 71°C.

What can I do to stay more stable around 2100MHz, like it is at 1980MHz? I put the fans at 100% for temps but it doesn't help.

Lowering below 1.050v will crash.


----------



## cletus-cassidy

@Zogge, great advice, thanks. Did you see any real difference between the Strix 480W and the EVGA 500W such that it was worth flashing (and potentially incurring the 3rd 8-pin issue you mention)?


----------



## Zogge

I mailed the detailed steps to Zooms; if you need them, let him or me know.
I doubt it will make that much more fps in games (I have not really done a detailed test), but in benchmarking it does for sure, and I see it boosting higher and not hitting the power limit as fast.

It runs a bit hotter though, so with a waterblock it will be super, I think. It boosts to 2190 for me now, then starts to drop as temps rise. So under water 2200MHz stable might be doable.
2100 is within reach for sure. Now if Aqua Computer could just deliver the Strix block and active backplate, things would be even better.....


----------



## long2905

menko2 said:


> It doesn't stay consistent. In a part of the test, the clock goes up and down from 2150-1920mhz. Temp in that part goes to 71°C.
> 
> What can I do to stay around 2100mhz more stable like in the 1980mhz? I put the fans at 100% for temp but it doesn't help.
> 
> Lowering to less than 1.050v will crash.


yeah, so what you are seeing is only the maximum possible boost clock, which doesn't mean much. You need to keep the average clock as high and as consistent as possible.
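
long2905's point can be made concrete with a small time-weighted average, the number that actually tracks a benchmark score. `average_clock` is a hypothetical helper for a GPU-Z/HWiNFO-style log, assuming (clock MHz, seconds held) samples:

```python
def average_clock(samples):
    """samples: list of (clock_mhz, seconds_held) pairs from a sensor log.
    Returns the time-weighted average clock over the whole run."""
    total_time = sum(t for _, t in samples)
    return sum(mhz * t for mhz, t in samples) / total_time

if __name__ == "__main__":
    # A run that touches 2150 but throttles spends most of its time lower:
    throttling = [(2150, 10), (1980, 40), (1920, 30)]
    steady = [(2010, 80)]
    print(average_clock(throttling))  # 1978.75 -- below a steady 2010
    print(average_clock(steady))      # 2010.0
```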


----------



## Zogge

Lower voltages and hope you have really good silicon, or water cool it to keep temps lower, would be my advice. Every 8 or so degrees the temperature increases, it drops about 15MHz of boost, I read somewhere. But it might be load dependent too; I really do not know.
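
Zogge's rule of thumb (roughly 15MHz of boost lost per ~8°C of temperature rise) can be sketched as a toy model. All parameters here are assumptions taken from the post, not NVIDIA-documented GPU Boost behavior:

```python
def expected_boost(base_boost_mhz, temp_c, base_temp_c=35, step_c=8, drop_mhz=15):
    """Toy GPU Boost model: shed drop_mhz of boost for every full step_c
    degrees above base_temp_c. base_temp_c/step_c/drop_mhz are guesses."""
    if temp_c <= base_temp_c:
        return base_boost_mhz
    bins = (temp_c - base_temp_c) // step_c
    return base_boost_mhz - bins * drop_mhz

if __name__ == "__main__":
    print(expected_boost(2190, 35))  # cold card holds full boost: 2190
    print(expected_boost(2190, 75))  # 5 bins dropped: 2190 - 75 = 2115
```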


----------



## khunpunTH

Edge0fsanity said:


> It was no better than stock on mine. If you want more power right now your only option is to flash the xc3 2x8pin bios. EVGA 3090 FTW POWER LIMIT BYPASS


lol, so strange. I tried the 2-pin XC3 BIOS on my 3-pin Gaming X Trio card and the performance is the same as on the 500W BIOS.
Does the Gigabyte Gaming OC 390W BIOS's higher power limit make any difference, or has the card completely bypassed the power limit?


----------



## menko2

So if I can only get stable clocks around the 2000MHz range with the fans at 100%... what's the difference between other brands and the Strix I have? It seems all the 3090s are more or less the same.
The 480W power BIOS is good, but without water cooling I wonder what it is for.

I thought paying 300€ extra would make a difference.


----------



## defiledge

Will there be any new AIB 2x8-pin models with higher BIOS power limits? Really tempted to shunt my card.


----------



## khunpunTH

mbm said:


> #4722
> So what clockspeed can you maintain in all scenarios full load constantly and at what voltage?
> from pictures you are "only" on 95% GPU load.


When you can read from the graph that my card ran 2010MHz steady at 0.95v with 99% load, then we can talk.
Do I have to pinpoint it for you? Get some knowledge, or do you just want to catch a fake? Be happy with your 1905 then.
Waste of my time answering such a question.


----------



## HyperMatrix

Wish there was a sticky section for frequently asked questions. Tired of people asking if x card is better than y card when it really doesn’t matter with shunt. Or if you don’t want to shunt, just any 3 power connector model with 500W bios flash. 

End of the day from what we’ve seen, the only thing that really matters is silicon lottery and block availability.


----------



## cletus-cassidy

HyperMatrix said:


> Wish there was a sticky section for frequently asked questions. Tired of people asking if x card is better than y card when it really doesn’t matter with shunt. Or if you don’t want to shunt, just any 3 power connector model with 500W bios flash.
> 
> End of the day from what we’ve seen, the only thing that really matters is silicon lottery and block availability.


Could add to FAQ on Page 1. This is good feedback and EK Strix block being available is the determining feature for me.


----------



## mbm

khunpunTH said:


> When you can read from the graph that my card ran 2010mhz Steady at 0.95 with 99% load then we can talk.
> Do i have to pin point for you? Get some knowledge or just want to catch fake? Be happy with your 1905 then.
> waste my time answer such a question.


It's not a competition for me.
But people keep writing how high their cards boost, which doesn't say how high their card can maintain that frequency.
My card can run 2150MHz when playing Call of Duty but only a minimum of 1905 when playing Battlefield.
So if we don't measure the same thing, all these boost frequencies seem pretty worthless IMO.


----------



## mirkendargen

LVNeptune said:


> Anyone know offhand if you lose the HDMI port when flashing to the strix bios on a FTW3 Ultra?
> 
> Confirmed it doesn't. The wattage shows way wrong now though, shows using a maximum of 300W but framerates haven't changed so that's obviously wrong. Temps stays the same too. Going to run a Port Royal bench.


The FTW3 seems to have some weird power setup, since there is weird stuff flashing the other way too (on a Strix, flashing the 500W FTW3 BIOS makes the 3rd 8-pin not report power correctly).


----------



## Zogge

My Strix shows 350 to 370W after the 500W flash.


----------



## GQNerd

Zogge said:


> My strix show 350 to 370 w after 500w flash.


Yes, if you read my previous response to you: that is because the third power pin is reporting incorrectly. Check what the other 2 are pulling and assume the 3rd pin is doing the same. Add all 3, plus the PCIe slot power, and it should give you the true overall board power.


----------



## Zogge

Got it, thanks, but I meant you only reached 300+150ish W, which is less than the 350/370+150 I got.


----------



## GQNerd

Zogge said:


> Got it thanks,but I meant you only reached 300+150ish W which is less than the 350/370+150 I got.


I think you're confusing me with someone else.. I never said that.

On the 500w bios my Strix was pulling 150w per pin (3rd was still reporting incorrectly on GPUZ) = 450w 
*PLUS* 
50w from PCIE ... 450w + 50w = *500w*
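
GQNerd's workaround (assume the mis-reporting 3rd pin matches the pins that do report, then add slot power) is simple enough to sketch. The function name and the 50W slot figure are illustrative:

```python
def estimate_board_power(pin_readings_w, pcie_slot_w=50.0, broken_pins=1):
    """Estimate true board power when some 8-pin sensors misreport:
    assume each unreadable pin draws the average of the readable ones,
    then add PCIe slot power."""
    avg = sum(pin_readings_w) / len(pin_readings_w)
    return sum(pin_readings_w) + broken_pins * avg + pcie_slot_w

if __name__ == "__main__":
    # Two readable pins at 150W each, 3rd assumed equal, plus 50W slot:
    print(estimate_board_power([150.0, 150.0]))  # 500.0
```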


----------



## dante`afk

HyperMatrix said:


> Wish there was a sticky section for frequently asked questions. Tired of people asking if x card is better than y card when it really doesn’t matter with shunt. Or if you don’t want to shunt, just any 3 power connector model with 500W bios flash.
> 
> End of the day from what we’ve seen, the only thing that really matters is silicon lottery and block availability.


spot on, and when you look at the HoF 3DMark, FE cards are pretty much leading.


because of the BIOS BS and the latest Buildzoid video about the FTW3 3080, I ended up getting rid of my FTW3 Ultra 3090 which you helped me get (thanks again) and found an FE, which should come in a couple of days


----------



## dpoverlord

Hey guys!
I have an issue: my new EVGA FTW3 board is causing weird sounds from my motherboard.

I installed it, and when my computer starts I hear a beep, then a **** load of successive beeps. Too fast for me to even count them.

This is the odd part... it boots fine after that. However, I have never had this happen before. I spent the last 3 hrs disconnecting the connectors thinking it was a fan, and nothing has solved it. Any ideas?

> The motherboard reads A2 and usually *code A2* is a drive or GPU issue. Since you removed your drives and still get *code A2*, remove your GTX 970 and try connecting to the onboard graphics.

No issue with the 2080 Ti, but it's really odd, as everything is seated in the Asus Rampage V and the power connectors are in my EVGA G2 1300.

When I first installed the card the GPU fans went to 100% instantly; now they seem to settle after the third reboot.


Anyone have these issues?


----------



## Falkentyne

dpoverlord said:


> Hey guys!
> Have an issue. My new EVGA FTW3 board is causing weird sounds from my motherboard.
> 
> I installed it and when my computer starts I hear a beep then a **** load of successive beats. Too fast for me to even count them.
> 
> This is the odd part... It boots fine after that. However, I have never had this happen before. I spent the last 3 hrs disconnecting the connectors thinking it was a fan and nothing has solved it. Any ideas?


Disable CSM in your bios.
That beep sequence is a VGA detection issue.


----------



## Nizzen

menko2 said:


> So if I can only get stable clocks around the 2000mhz range with the fans at 100%... What's the difference between other brands and the Strix I have? It seems all the 3090 are more or less the same.
> The 480W power bios is good but without water cooling I wonder what is the usage for.
> 
> I thought paying 300€ extra would make a difference.


With Strix 3090 OC you are paying for the full package: high power limit, top tier components, hot wire support for voltage regulation(!), a good card for LN2 and water with shunt modding and hotwire. All the good stuff for an overclocker.

I'm a pleb in overclocking, but watercooled the 3090 Strix OC gave me a pretty good result: 2220MHz average in Port Royal, #11 place atm...

Asus Strix is the difference I think. Only shunt mod and cold watercooling. No LN2, phase change, dry ice or voltage mod.
Normal day for a normal pleb.


----------



## pat182

menko2 said:


> So if I can only get stable clocks around the 2000mhz range with the fans at 100%... What's the difference between other brands and the Strix I have? It seems all the 3090 are more or less the same.
> The 480W power bios is good but without water cooling I wonder what is the usage for.
> 
> I thought paying 300€ extra would make a difference.


Well, if it's a stock card with a sub-400W BIOS, you won't get 2000MHz in ray-tracing games like Control (more like 1750MHz).
You are paying for a higher power limit in bottlenecked scenarios like full ray tracing, for keeping the warranty by not having to shunt or BIOS-swap the card, and for better power delivery overall.

My MSI couldn't hold 1750MHz in Control because of the low power limit, while my Strix gets 2050MHz, so I'm getting like 10fps more from the ''same'' card.


----------



## domenic

Miguelios said:


> I think you're confusing me with someone else.. I never said that.
> 
> On the 500w bios my Strix was pulling 150w per pin (3rd was still reporting incorrectly on GPUZ) = 450w
> *PLUS*
> 50w from PCIE ... 450w + 50w = *500w*


Sorry for being an idiot, but what is the "500w" BIOS for the Strix OC? It comes with a 480W BIOS, right? Did someone obtain an unofficial version that is 20W more?


----------



## menko2

Nizzen said:


> With Strix 3090 OC you are paying for the full package: high power limit, top tier components, hot wire support for voltage regulation(!), a good card for LN2 and water with shunt modding and hotwire. All the good stuff for an overclocker.
> 
> I'm a pleb in overclocking, but watercooled the 3090 Strix OC gave me a pretty good result: 2220MHz average in Port Royal, #11 place atm...
> 
> Asus Strix is the difference I think. Only shunt mod and cold watercooling. No LN2, phase change, dry ice or voltage mod.
> Normal day for a normal pleb.


Thank you Nizzen.

What is the water-cooling brand you are using?


----------



## HyperMatrix

dante`afk said:


> spot on, and when you look on the HoF 3d mark, FE' cards are leading pretty much.
> 
> 
> because of the bios BS and the latest buildzoid video about the FTW3 3080, I ended up getting rid of my ftw3 ultra 3090 which you helped me get (thanks again) and found a FE which should come in a couple of days


I'm selling my Strix in a few hours and grabbing an FTW3 HydroCopper in a couple weeks. Haha. I checked out the Buildzoid video and he keeps mentioning how it's a good card, better than reference, with better power delivery than the FE, but that he wishes certain things were upgraded to higher quality versions for a potential 10-15MHz higher clock speed under LN2. I won't disagree with him that the ROG Strix has much better build quality for the same price as the FTW3 Ultra, and that it therefore doesn't make sense that the FTW3 demands a premium price but doesn't deliver the premium quality components that Asus offers, or that EVGA itself offered in previous generation FTW3 cards. But... all of that came down to saying the design shouldn't result in any difference in performance compared to other cards, except an extra 10-15MHz under LN2 due to the lack of premium capacitors.

Silicon Lottery is still king. And with the new HydroCopper and Hybrid cards EVGA released, honestly I'm thinking it's the best value out there. With the Associate's discount code, the FTW3 Ultra HydroCopper comes in at $1757 and the FTW3 Ultra Hybrid comes in at $1709. And from everything I've seen, the FTW3 does appear to do a better job at cooling the card than the ROG Strix. Honestly the Strix broke my heart. Mediocre overclocking performance, made even worse by absolutely terrible cooling capability. And it'd cost me another $420 to get the AquaComputer block/backplate with shipping/tax in god knows how many months (currently 60 day+). I figure the FTW3 can't perform any worse. Selling the Strix for $2140 USD (just $150 more than I bought it). Will grab the FTW3 HydroCopper for $1757. And won't have to pay that $420 for a block either. Though I'll likely still do the liquid metal and high end fujipoly thermal pad swap and just be happy with a (hopefully) 2100MHz sustained clock with _relatively_ minimal cost. 

3090 pricing/performance has been a bit upsetting to be honest (due to heat/power limit/throttling), especially if the Radeon 6900 XT performance number leaks and 2.5GHz overclocking turns out to be true.




Nizzen said:


> With Strix 3090 OC you are paying for the full package: high power limit, top tier components, hot wire support for voltage regulation(!), a good card for LN2 and water with shunt modding and hotwire. All the good stuff for an overclocker.
> 
> I'm a pleb in overclocking, but watercooled the 3090 Strix OC gave me a pretty good result: 2220MHz average in Port Royal, #11 place atm...
> 
> Asus Strix is the difference I think. Only shunt mod and cold watercooling. No LN2, phase change, dry ice or voltage mod.
> Normal day for a normal pleb.


You wouldn't be saying that if you hadn't won the silicon lottery. Haha. No amount of power limit or normal liquid cooling is going to get the clock on my Strix up that high. The Strix is definitely the best built card and will let you push golden chips up very high. But there's no guarantee that you'll get such a chip by buying a Strix.


----------



## DrunknFoo

Nizzen said:


> With Strix 3090 OC you are paying for the full package: high power limit, top tier components, hot wire support for voltage regulation(!), a good card for LN2 and water with shunt modding and hotwire. All the good stuff for an overclocker.
> 
> I'm a pleb in overclocking, but watercooled the 3090 Strix OC gave me a pretty good result: 2220MHz average in Port Royal, #11 place atm...
> 
> Asus Strix is the difference I think. Only shunt mod and cold watercooling. No LN2, phase change, dry ice or voltage mod.
> Normal day for a normal pleb.


I seriously wonder how many of the Strix cards on the leaderboards are actually Strix cards, but having a block alone definitely adds to the scores.


----------



## DrunknFoo

HyperMatrix said:


> I'm selling my Strix in a few hours and grabbing an FTW3 HydroCopper in a couple weeks. Haha. I checked out the Buildzoid video and he keeps mentioning how it's a good card that's better than reference with better power delivery than the FE, but that he wishes certain things were upgraded to higher quality versions for a potential 10-15MHz higher clock speed under LN2. I won't disagree with him that the ROG Strix has a much better build quality for the same price as the FTW3 ultra, therefore it doesn't make sense that the FTW3 demand a premium price, but doesn't deliver the premium quality components that Asus offers, or that EVGA itself offered in previous generation FTW3 cards. But...all of that came down to saying the design shouldn't result in any difference in performance compared to other cards, except an extra 10-15MHz under LN2 due to the lack of premium capacitors.
> 
> Silicon Lottery is still king. And with the new HydroCopper and Hybrid cards EVGA released, honestly I'm thinking it's the best value out there. With the Associate's discount code, the FTW3 Ultra HydroCopper comes in at $1757 and the FTW3 Ultra Hybrid comes in at $1709. And from everything I've seen, the FTW3 does appear to do a better job at cooling the card than the ROG Strix. Honestly the Strix broke my heart. Mediocre overclocking performance, made even worse by absolutely terrible cooling capability. And it'd cost me another $420 to get the AquaComputer block/backplate with shipping/tax in god knows how many months (currently 60 day+). I figure the FTW3 can't perform any worse. Selling the Strix for $2140 USD (just $150 more than I bought it). Will grab the FTW3 HydroCopper for $1757. And won't have to pay that $420 for a block either. Though I'll likely still do the liquid metal and high end fujipoly thermal pad swap and just be happy with a (hopefully) 2100MHz sustained clock with _relatively_ minimal cost.
> 
> 3090 pricing/performance has been a bit upsetting to be honest (due to heat/power limit/throttling), especially if the Radeon 6900 XT performance number leaks and 2.5GHz overclocking turns out to be true.


how bad was the card? clocks vs temps? avg RAM freq?
I sorta want a Strix to replace my FTW, but only because there are actual blocks that can sorta be obtained for it lol


----------



## HyperMatrix

DrunknFoo said:


> how bad was the card? clocks vs temps? avg ram freq?
> i sorta want a strix to replace my ftw but only cause there are actual blocks that can sorta be obtained for it lol


Ram OC was good. Could do +1100 to +1200 based on temperature. Clocks were terrible due to temperature. With fans at 100%, doing 0.887v at 1890-1920MHz would still have my temps running around 65-69C and would eventually crash. I could start with a higher clock, but temperatures would start to rise and throttle and crash. I think I saw the card hit 89C once. If temps weren't an issue, it could do about 2130MHz. The problem is, temps were an issue. And 2130MHz is something almost anyone on any card can get with enough cooling. So to me, I didn't see anything exceptional from my Strix (other than great component quality). The GPU die itself wasn't great. And the card did a poor job of keeping temperatures down.

edit: Just so there's no confusion... I'm not saying there's anything wrong with the Strix card. I'm not recommending that people don't buy one. I'm just saying that simply buying a Strix isn't going to guarantee you better clocks, because that comes down to the silicon lottery. And as for air cooling, it's not the best. So I figured if my options were $2600 for the Strix, $550 for the Aqua Computer block, $150 for the thermal pads/LM (total $3300 CAD with taxes), and I would still only get around 2130-2160MHz with shunts/under water, then I may as well get an EVGA FTW3 Ultra HydroCopper for $2450 CAD with a block already installed. At worst, if I get a really bad GPU die, I'll get 1-2% lower performance. At best, I'll roll even better numbers. And for $850 CAD less.


----------



## GQNerd

domenic said:


> Sorry for being an idiot but what is the "500w" bios for the Strix OC? It comes with a 480w bios right? Did someone obtain an unofficial replacement that is 20w more?


Cross-flash the EVGA XOC 500w bios for the FTW3 
Didn't lose any outputs, but the only bug is the 3rd 8 pin connector does not report correctly on GPU-Z
I actually had better results with the default 480w bios so that's what I'm running. 

Perhaps I'll revisit it after I slap a waterblock on the Strix


----------



## DrunknFoo

HyperMatrix said:


> Ram OC was good. Could do +1100 to +1200 based on temperature. Clocks were terrible due to temperature. With fans at 100%, doing 0.887v at 1890-1920MHz would still have my temps running around 65-69C and would eventually crash. I could start with a higher clock, but temperatures would start to rise and throttle and crash. I think I saw the card hit 89C once. If temps weren't an issue, it could do about 2130MHz. The problem is, temps were an issue. And 2130MHz is something almost anyone on any card can get with enough cooling. So to me, I didn't see anything exceptional from my Strix (other than great component quality). The GPU die itself wasn't great. And the card did a poor job of keeping temperatures down.
> 
> edit: Just so there's no confusion...I'm not saying there's anything wrong with the Strix card. I'm not recommending that people don't buy one. I'm just saying that simply buying a Strix isn't going to guarantee you better clocks because that comes down to silicon lottery. And as for air cooling, it's not the best.


FWIW, using a 360XE rad disconnected from the main system in my case, serving as a half-assed built-in AC (it can get as cold as 15C in the chassis), I also see temps of about 75-80C drawing random power anywhere between 420W-470W during Port Royal runs, and the clocks can barely sustain anything above 2.1. Temperature definitely plays a huge role, because I need runs at least 10 minutes apart to even complete a pass with identical settings.

With the pump disabled, on the stock 450W BIOS, I could barely hold I think 1970MHz and temps were averaging about 75C. RAM wouldn't clock to anything above +1100.

On another note, the EVGA card appears to have some really weird voltage and power draw fluctuations with any of the 3-pin BIOSes. The 2-pin XC BIOS appears to be the most stable; unfortunately you lose the power output...

As for the memory on this one, +1340-1345, averaging 1387, but once again, heat is likely limiting.

Just something to consider, I guess.


----------



## mirkendargen

DrunknFoo said:


> i seriously wonder how many of the strix cards on the leaderboards are actually strix cards, but having a block alone definitely adds to the scores.


I bet 0 of the EVGA cards high on the leaderboards are actually EVGA (other than Kingpin), lol.


----------



## DrunknFoo

mirkendargen said:


> I bet 0 of the EVGA cards high on the leaderboards are actually EVGA (other than Kingpin), lol.


Joe is legit there in 2nd


----------



## mirkendargen

Miguelios said:


> Cross-flash the EVGA XOC 500w bios for the FTW3
> Didn't lose any outputs, but the only bug is the 3rd 8 pin connector does not report correctly on GPU-Z
> I actually had better results with the default 480w bios so that's what I'm running.
> 
> Perhaps I'll revisit it after I slap a waterblock on the Strix


You do actually lose the middle DisplayPort, but keeping the 2 HDMI ports is preferable to me anyway, 2 DPs is plenty.


----------



## Asmodian

Now I find that there aren't any water blocks available for my 3090 FE. Anyone know of one that is available?

I have a decent comparison to my 2080 Ti on the same system:


https://www.3dmark.com/compare/spy/15334596/spy/12584659#


----------



## mirkendargen

Asmodian said:


> Now I find that there aren't any water block available for my 3090 FE. Anyone know of something that is available?
> 
> I have a decent comparison to my 2080 Ti on the same system:
> 
> 
> https://www.3dmark.com/compare/spy/15334596/spy/12584659#


Bykski and Bitspower.


----------



## Asmodian

mirkendargen said:


> Bykski and Bitspower.


Both are pre-order or out of stock on their stores. 
Bitspower 3090 FE
Bykski 3090 FE


----------



## Thebc2

Ek Strix block arrived, just scratching the surface.











Sent from my iPhone using Tapatalk Pro


----------



## Sheyster

HyperMatrix said:


> Or if you don’t want to shunt, just any 3 power connector model with 500W bios flash.


Except for the EVGA FTW3 Ultra, the card the BIOS was actually intended for, yet apparently no one can get it up to 500W!


----------



## domenic

Miguelios said:


> Cross-flash the EVGA XOC 500w bios for the FTW3
> Didn't lose any outputs, but the only bug is the 3rd 8 pin connector does not report correctly on GPU-Z
> I actually had better results with the default 480w bios so that's what I'm running.
> 
> Perhaps I'll revisit it after I slap a waterblock on the Strix


I have the Aqua Computer Strix block on pre-order. So you tested both HDMI ports? I need three DPs and one HDMI working. Can't hurt to test the flash, I guess.


----------



## HyperMatrix

Sheyster said:


> Except for the EVGA FTW3 Ultra, the one the BIOS was actually intended for but apparently no one can get it up to 500w!


I think for air it's fine, since air cooling can't keep up with 450W draw anyway. But yeah, it'll make some difference under water. I really wish the real XOC BIOSes would get released/leaked so we could get higher power limits without having to void warranties.


----------



## mirkendargen

domenic said:


> I have the aquacomputer Strix block on pre-order. So you tested both HDMI ports? I need three DP and one HDMI working. Cant hurt test flash I guess.


I'm using top and bottom DPs (middle is disabled) and both HDMI ports on a Strix with the EVGA 500w BIOS right now. I also tested both HDMI ports at 2.1 speeds with an LG OLED successfully, but I only have one so I couldn't test 2xHDMI 2.1 at the same time.


----------



## Lobstar

Anyone else getting artifacting at stock clocks on the 'OC' BIOS with the EVGA WTF3 Ultra?

edit: leaving it 'wtf3' because I shouldn't be getting artifacts on a card which can't even hit its power limits...


----------



## DrunknFoo

Lobstar said:


> Anyone else getting artifacting with stock clocks on the 'OC' bios with EVGA WTF3 Ultra?
> 
> edit: leaving it 'wtf3' because i shouldn't be getting artifacts on a card which can't even hit it's power limits ...


Heat likely culprit


----------



## Lobstar

DrunknFoo said:


> Heat likely culprit


Lemme know if you have any tips on how to improve cooling. Ambient is 23C.


----------



## bmgjet

Disassemble it and put the thermal pads on the VRAM correctly.
On my card, from the factory, the thermal pads only covered 1/3rd of the VRAM, which seems to be one of the worst cases. But even the well-installed ones still only have 3/4 coverage.


----------



## ThrashZone

bmgjet said:


> Disassemble and put the thermal pads on the Vram correctly.
> On my card from factroy the thermal pads only covered 1/3rd of the vram, which seems to be one of the worst ones. But even the good installed ones still only have 3/4 coverage.


Hi,
Yeah, EVGA's paste is terrible too.


----------



## Lobstar

bmgjet said:


> Disassemble and put the thermal pads on the Vram correctly.
> On my card from factroy the thermal pads only covered 1/3rd of the vram, which seems to be one of the worst ones. But even the good installed ones still only have 3/4 coverage.


Good to know! Guess I'll have to wait till my block comes in.


----------



## dr/owned

HyperMatrix said:


> Silicon Lottery is still king. And with the new HydroCopper and Hybrid cards EVGA released, honestly I'm thinking it's the best value out there. With the Associate's discount code, the FTW3 Ultra HydroCopper comes in at $1757 and the FTW3 Ultra Hybrid comes in at $1709. And from everything I've seen, the FTW3 does appear to do a better job at cooling the card than the ROG Strix. Honestly the Strix broke my heart.


For me the value of the TUF/STRIX is the open header for the I2C bus. It's like they're giving the green light to shove 1.1-1.2V in, hit 800W of power consumption, and see what happens to the clocks.

(It's of course possible that the 3000 series just doesn't care about more voltage and is hitting a speedpath / silicon speed wall at room temperature. I've seen this with the last several gens of Intel where you set the voltage to around 1.35V and that's all the help you're going to get...even 1.5V doesn't improve the overclocking regardless of thermals)


----------



## olrdtg

Alright, I finally got my new Kill-A-Watt in, since my old one was having issues and not displaying the correct power usage. I plugged in my GPU power supply; the ONLY things running off this PSU are my 3090 and its water pump, so subtract around 10 to 15 W from any result for the pump. I've learned some things! I shunt modded my 3090 FE by removing the 5 mOhm shunt resistors and replacing them with 3 mOhm, giving the card a theoretical 641 W max power draw. *HOWEVER*, according to my Kill-A-Watt, my card only uses around 500 W**, and I'm assuming this is because the card is hitting voltage limits on vcore and therefore isn't requesting any additional power, since it has already hit a limit. Peaks were around 560 W** when running a Port Royal stress test, hovering in the 510 to 520 W** range during Superposition runs.

** *EDIT:* Forgot to account for PCIe slot power draw, which ranges from 90 to 120 W under load, so the card is indeed drawing nearly 600 to 660 W peak under load.
Thank you, *dr/owned*, for reminding me of this.

I am considering stacking another 3 mOhm resistor on top of 5 of the current resistors, leaving the PCIe input with only a single 3 mOhm, which would bring my card's theoretical power limit to around 770 W. At the same time, I am also going to mod a third 8-pin input onto my 12-pin adapter so that I don't accidentally melt one of the two 8-pin power cables at this kind of wattage. It won't help the board side, but that looks robust enough to handle a lot; the weakest link here is the cables, and adding a third one should distribute the power a little better, but only across the cables themselves.
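The arithmetic behind those numbers can be sketched in a few lines. Assumptions: the controller computes current from the voltage drop across what it still thinks is a 5 mOhm shunt, and the ~385 W stock ceiling below is an assumed value chosen so the ratio reproduces the ~641 W figure above.

```python
# Rough sketch of the shunt-mod arithmetic above. The power controller reads
# the voltage drop across a sense resistor it assumes is still stock (5 mOhm),
# so real power scales by stock/new resistance. The 385 W stock ceiling is an
# assumed value, not a measured one.

def effective_limit(stock_limit_w, stock_mohm, new_mohm):
    """Real power draw when the card *reports* hitting its stock limit."""
    return stock_limit_w * stock_mohm / new_mohm

def parallel_mohm(r1, r2):
    """Stacking a second shunt on top puts the two in parallel."""
    return r1 * r2 / (r1 + r2)

print(effective_limit(385, 5, 3))    # ~641.7, i.e. the ~641 W figure above
print(parallel_mohm(3, 3))           # 1.5 mOhm for a stacked 3-on-3 pair
print(effective_limit(385, 5, parallel_mohm(3, 3)))  # well past 1 kW, so the
# quoted ~770 W presumably comes from leaving some inputs (the slot) un-stacked
```

The ~770 W figure in the post is lower than a naive all-resistors calculation because only 5 of the 6 sense resistors get stacked.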

PC idle usage:









Peak usage:









General usage:

















I will attach some pictures of the Kill-A-Watt during these runs as evidence.


On another note, I also did some thermal testing of the backplate that came with my Bitspower water block, using the small included chipset fan to blow air directly onto it. The results are actually way better than the FE cooler, perhaps due to new thermal pads and the extra mounting pressure I applied with washers on the backplate mounting (no washers were used on the core!).
I will attach some photos of this as well; I took them around 1 hr 20 min into a Port Royal stress test.


----------



## Thebc2

Finally broke 15k with the waterblock! I have the core maxed at +186 (2046) and it won't let me go any higher; no stability issues at all on the core side with the waterblock. Previously the best I could manage air-cooled was +140 on the core. I am now convinced the Strix air cooler was not making proper contact: even after I repasted and remounted it, I would frequently find myself in the high 70s while benchmarking and playing games.



http://www.3dmark.com/pr/508031


----------



## dr/owned

olrdtg said:


> Alright, I finally got my new Kill-A-Watt in, since my old one was having issues and not displaying the correct power usage. I've plugged in my GPU power supply (I ONLY have my 3090 running off this PSU along with it's water pump (Subtract around 10 to 15W from any result for the pump usage). I've learned some things! So, I shunt modded my 3090 FE by removing the 5mOhm shunt resistors and replacing them with 3mOhm, giving the card a theoretical 641 W max power usage, *HOWEVER*, according to my Kill-A-Watt, my card only uses around 500 W of power, and I'm assuming this might be the case due to the card hitting voltage limits on vcore, therefor not requesting any additional power since it's already hit a limit. Peaks were around 560W when running a Port Royal stress test, and hovering in the 510W to 520W range during Superposition runs.


You also have to add in the PCIe slot power consumption from the 24-pin, and the wall measurement doesn't take into account about 6-7% power loss in the PSU (where 100 W at the wall = 93 W output).

Can you get a probe on VCORE to see where it's maxing at? Off the top of my head I think it's around 1.0V.
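The correction described above can be sketched like this. The 93% efficiency and the slot wattage are assumptions pulled from this discussion, not measured values:

```python
# Sketch of the wall-reading correction discussed above: a Kill-A-Watt sees
# PSU *input* power, so ~6-7% conversion loss comes off the top, while the
# power fed through the PCIe slot by the main PSU must be added back.
# Efficiency (0.93) and slot wattage are assumptions, not measurements.

def gpu_draw_estimate(wall_w, psu_efficiency=0.93, slot_w=0.0):
    return wall_w * psu_efficiency + slot_w

# olrdtg's 560 W Port Royal peak, corrected for loss and ~100 W of slot power:
print(round(gpu_draw_estimate(560, slot_w=100)))  # ≈ 621 W
```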


----------



## Lobstar

bmgjet said:


> Disassemble and put the thermal pads on the Vram correctly.
> On my card from factroy the thermal pads only covered 1/3rd of the vram, which seems to be one of the worst ones. But even the good installed ones still only have 3/4 coverage.


Looks like I'm hitting a power limit ...


----------



## 414347

Lobstar said:


> Lemme know if you have any tips on how to improve cooling. Ambient is 23C.
> View attachment 2465672


EVGA has some serious issues with their cards. It might be something like 8 out of 10 dying, some at idle and some when pushed a bit harder. Between me and my family member, 4x 3090 FTW3s died within a month.


----------



## DrunknFoo

Lobstar said:


> Lemme know if you have any tips on how to improve cooling. Ambient is 23C.


As others mentioned, swap out the thermal paste; KPx dropped my average core temp about 6C (but not in a controlled environment, so YMMV).
For the time being I wouldn't recommend changing the pads, because you'd actually want less efficient RAM cooling since the stock heatsink/fan is shared (sounds crazy, right?). The core is king here: the idea is to sacrifice a bit of RAM frequency to potentially gain some core frequency. But on my card at least, half a RAM block wasn't even covered, so repositioning the pads may be required... (use a plastic razor to keep them intact).

As for the fans you've got hanging above: sit one between the other PCIe card and the GPU, and have the other blowing directly at the back of the GPU/RAM area. Direct airflow pressure/contact is more ideal. My case has 20 fans and 6 are being used as a mini half-assed AC... maybe get an AC, lol. There really is only so much you can do without other things assisting.


----------



## Lobstar

DrunknFoo said:


> as others mentioned, swap out the thermal paste, kpx dropped avg core temp about 6c (but not in a controlled environment so yrmv)
> for the time being i wouldn't recommend changing the pads cause you'd actually want less efficient ram cooling since the stock hsf is shared. (sounds crazy right?) as cpu is king here, sacrificing a bit of ram frequency to potentially gain some core frequency is the idea here. but on my card at least, half a ram block wasn't even covered, so repositioning the pads may be required... (use a plastic razer to keep them intact)
> 
> as for the fans u got hanging above, just sit it between the other pcie card and the gpu and the other blowing directly at the back of the cpu/ram area. direct air flow pressure/contact is more ideal my case has 20 fans and 6 are being used as a mini half assed ac... maybe get an ac lol, there really is only so much u can do without other things assisting


I was being sarcastic. Posted a pic under load above. It's power-limiting at 63c.


----------



## olrdtg

dr/owned said:


> You also have to add in the PCIE slot power consumption from the 24pin, and the wall measurement doesn't take into account about 6-7% power loss (where 100W at the wall = 93W ouptut).
> 
> Can you get a probe on VCORE to see where it's maxing at? Off the top of my head I think it's around 1.0V.


This PSU isn't connected to the 24-pin/motherboard. The only thing connected to this power supply is 2x 8-pin PCIe power cables going straight into the GPU, but you are right about the power loss. If I add in the 90 to 100 or so watts coming from the PCIe slot, my card is probably closer to drawing 580 to 600 W under full load.

When I take it to bits again to stack on some more 3 mOhm resistors, I'll leave the backplate off so I can probe and get some vcore values. Software claims 1.1 V, but realistically I would assume it's closer to 1080 mV.


----------



## mirkendargen

olrdtg said:


> This PSU isn't connected to the 24-pin/motherboard. The only thing connected to this power supply is 2x 8-pin PCIe power cables going straight into the GPU, but you are right about the power loss. If I were to add in the 90 to 100 or so watts of power coming from the PCIe slot, my card would probably be closer to drawing 580 to 600W under full load.
> 
> When I take it to bits again to stack on some more 3mOhm resistors, I'll leave the back plate off so I can probe and get some vcore values. Software claims 1.1v, but realistically I would assume it's closer to 1080mv


His point is that your GPU is getting ~75 W from your main PSU via the PCIe slot, not the PSU connected to the 8-pins, so you aren't accounting for it with your Kill-A-Watt.


----------



## Falkentyne

olrdtg said:


> This PSU isn't connected to the 24-pin/motherboard. The only thing connected to this power supply is 2x 8-pin PCIe power cables going straight into the GPU, but you are right about the power loss. If I were to add in the 90 to 100 or so watts of power coming from the PCIe slot, my card would probably be closer to drawing 580 to 600W under full load.
> 
> When I take it to bits again to stack on some more 3mOhm resistors, I'll leave the back plate off so I can probe and get some vcore values. Software claims 1.1v, but realistically I would assume it's closer to 1080mv


What resolution were you running your tests at?
If you run at 4K, or DSR up to 4K, or 4x supersampling AA from 1080p (200% render scale from 1920x1080), you will draw a lot more power.


----------



## olrdtg

Falkentyne said:


> What resolution were you running your tests at?
> If you run at 4k, or DSR up to 4k, or 4x Supersampling AA from 1080p (200% render scale from 1920x1080), you will draw a lot more power


I am indeed running at 4K 60 Hz. At this point I'm hitting voltage limits on vcore, so even if I slam the card with more power, it's no longer going to improve performance. If I want to get anything extra from the card, I'm probably going to need a volt mod to give the core an extra few mV.

I will try running something at DSR 8K and see how much juice it draws, though.


mirkendargen said:


> His point is your GPU is getting ~75W from your main PSU via the PCIe slot, not the PSU connected to the 8pins so you aren't accounting for it with your killawatt.


I'm actually getting ~100-120 W from my main PSU through the PCIe slot, and I accounted for it in my reply to him. I did word it kind of weird though, my bad. With that extra power I am pulling closer to 600 W on the card, which is just about the calculated limit with 3 mOhm resistors. I will edit the original post to account for that.



Lobstar said:


> Looks like I'm hitting a power limit ...
> View attachment 2465684


Hey, can you let me know how you're displaying those stats on the Stream Deck? I might actually buy one to use this way, since I sit far away from the desktop itself, and I'd love to free up screen real estate by removing my on-screen sensor panels and using something like that.


----------



## dr/owned

@olrdtg Forgot to ask... it might not be very accurate, but can you IR-shoot the PCIe slot too? The big concern when shunting the PCIe slot power (aggressively... it seems you have to do some shunting there if you want the 8-pin shunts to be effective, because of the ratios) is that there are only about five 12 V fingers on the card that now have to carry 100% more current than before (if you go down to 2.5 mOhm). I did some quick math in a PCB calculator and found that around 50% more is about as much as you want to "safely" go before you start getting to a >+50C temp increase. I bought a FLIR camera just for this purpose (and for the 8-pin connector temps), but I'm still waiting for the dang card to show up.
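The concern is easy to put numbers on: the slot's 12 V supply rides on only a handful of edge-connector fingers, so per-finger current grows linearly with slot draw. A quick sketch, where the five-finger count and the example wattages are assumptions taken from this discussion:

```python
# Quick numbers behind the trace-heating concern above: the slot's 12 V rail
# uses only ~5 edge fingers, so per-finger current scales linearly with draw.
# Finger count and the example wattages are assumptions from the discussion.

def amps_per_finger(slot_w, fingers=5, volts=12.0):
    return slot_w / volts / fingers

for w in (66, 100, 150):  # spec-ish draw, measured draw, aggressive shunt
    print(f"{w} W -> {amps_per_finger(w):.2f} A per finger")
```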


----------



## olrdtg

dr/owned said:


> @olrdtg Forgot to ask...might not be very accurate but can you IR shoot the PCIE slot too? The big concern when shunting the PCIE slot power (agressively...seems you have to do some shunting if you want the 8pin shunts to be effective cause ratios) is that there's only like 5 12V fingers on the card that now have to carry 100% more current than before (if you go down to 2.5ohm). I did some quick math in a pcb calculator and found that really 50% more is about as much as you want to "safely" go before you start getting to >+50C temp increase. I bought a FLIR camera just for this purpose (and the 8 pin connector temp) but I'm still waiting for the dang card to show up.


Alrighty, I let Superposition go for around 27 minutes, and the PCIe slot +12 V was bouncing between 100 W and 119 W during the test. I attached a straight zip-tie to my IR temp reader so I could better determine where I was aiming, and took readings at a few different positions on the slot and on the motherboard below it. I've done this before and found the slot temperatures don't differ much (3 to 4 degrees) between a 60% power limit (equal to the stock limit) and the 100% limit with the shunt mod.

The slot and the motherboard around it never go over about 39 C in my case, but I do have A LOT of airflow. Depending on your motherboard, the quality of your 24-pin cables, and whether you have an auxiliary power input on the motherboard for the PCIe slots (use it if you're already pulling a lot of power from the 24-pin), you should be able to get away with pulling around 100-120 W from the slot. I'm not going to guarantee anything, as who knows whether this will degrade the slot or the traces on the board. Since I shunt modded, I have run multiple 24-hour burn-in tests, followed up by testing the slot with a multimeter, and nothing has melted or degraded in any way. If you do shunt your PCIe slot power input to match the rest, just keep a very close eye on it for a while before assuming it's safe.

Measuring the slot (closest to where the +12v pins are):










Measuring the motherboard underneath the GPU/PCIe slot:









Measuring the board around the 24-pin, 8-pin EPS & 6-pin PCIe supplemental inputs:


----------



## DrunknFoo

How much power draw on Superposition when you took that?


----------



## Gadfly

Is there a 500w bios for the 3090FE?


----------



## Lobstar

I tried the beta BIOS in the thread on the EVGA forums and now I crash when loading Time Spy Extreme. Oh well.


----------



## chispy

Gadfly said:


> Is there a 500w bios for the 3090FE?


Sadly, no.


----------



## olrdtg

DrunknFoo said:


> How much power draw on superposition when u took that?


Around 610 W


----------



## chispy

Guys, I have found some disturbing entries in the Futuremark Hall of Fame for Port Royal. There are some guys cheating and a lot of shenanigans going on with the RTX 3090 single-GPU entries. Not everything you see is real, even when it says "valid result"; there are loopholes that have been found. So if you find yourself with a lower score than some of these entries, do not worry: they do not represent real, legit scores.

Futuremark / UL is doing an investigation and cleanup.

Do not ask me what the loopholes are, as that would be counterproductive.


----------



## HyperMatrix

Lobstar said:


> Looks like I'm hitting a power limit ...
> View attachment 2465684


Awesome use of the stream deck. Haha. How did you get that set up?


----------



## Gadfly

chispy said:


> Sadly no  .


Bummer. What is the highest-power BIOS for the FE?


----------



## bogdi1988

Gadfly said:


> Bummer  what is the highest power bios for the FE?


Unfortunately, your best "500W" BIOS is a shunt mod. There are no other BIOS files for the FE besides stock.


----------



## Gadfly

bogdi1988 said:


> Unfortunately your best "500W" BIOS is a shunt mod  There are no other BIOS files for the FE other than stock.


Really? I saw some live streams where the power limit was raised.


----------



## Memuser

Hi.

I own the PNY GeForce RTX 3080 10GB XLR8 Gaming UPRISING EPIC-X, but unfortunately the card doesn't seem to support a 0 dB mode.
This means the fans keep spinning even at idle. Can this be fixed with a BIOS from another manufacturer?


----------



## GTANY

HyperMatrix said:


> Ram OC was good. Could do +1100 to +1200 based on temperature. Clocks were terrible due to temperature. With fans at 100%, doing 0.887v at 1890-1920MHz would still have my temps running around 65-69C and would eventually crash. I could start with a higher clock, but temperatures would start to rise and throttle and crash. I think I saw the card hit 89C once. If temps weren't an issue, it could do about 2130MHz. The problem is, temps were an issue. And 2130MHz is something almost anyone on any card can get with enough cooling. So to me, I didn't see anything exceptional from my Strix (other than great component quality). The GPU die itself wasn't great. And the card did a poor job of keeping temperatures down.
> 
> edit: Just so there's no confusion...I'm not saying there's anything wrong with the Strix card. I'm not recommending that people don't buy one. I'm just saying that simply buying a Strix isn't going to guarantee you better clocks because that comes down to silicon lottery. And as for air cooling, it's not the best. So I figured if my option was $2600 for the strix, $550 for the aquacomputer block, $150 for the thermal pads/lm (Total $3300 CAD with taxes), and I would still only get around 2130-2160MHz with shunts/under water, then I may as well get an EVGA FTW3 Ultra HydroCopper for $2450 CAD with a block already installed. At worse, if I get a really bad GPU die, I'll get 1-2% lower performance. At best, I'll roll even better numbers. And for $850 CAD less.


FrameChasers noticed that his Strix GPU had high temperatures because of bad cooler contact. He inserted washers and saw a large temperature decrease. Your temperature problem may have the same cause.

With the FTW3, you will be limited if you shunt-mod, because of the fuses. Moreover, I am not sure that shunt-modding the EVGA will let you bypass the power limit. Finally, you may get a worse GPU die on the FTW3.

I would keep the Strix; your RAM overclock is very good, as my 3090 FE managed only +600 MHz stable. To fight the high temperatures, try inserting washers with the default cooler, and buy a waterblock (Bykski...) on Black Friday: you should get a good price. Or, if you are a handyman, you can mod a CPU waterblock to fit the GPU and cut copper plates for the RAM, VRM, and backplate. I did that on previous cards; it was really efficient and cost me only $10.


----------






## DrunknFoo

chispy said:


> Guys i have found some disturbing entries at the Hall Of Fame of futuremark for Port Royal , there is some guys cheating and a lot of shenanigans going on with the rtx 3090 single gpu entries for Port Royal. Not everything you see it's real even when it says valid result , there are loopholes that have been found. So if you find yourself with a lower score than some of this entries do not worried it does not represent real , legit scores.
> 
> Futuremark / UL is doing an investigation and clean up
> 
> Do not ask me what are the loopholes as it would be counter productive.


And I finally hit 15199 after a long session testing the ASUS BIOS on the FTW, just to get knocked down a bit, lol.

Which user, for example? I don't see anything out of the ordinary atm....


----------



## menko2

DrunknFoo said:


> And finally i hit 15199 after a long session testing the asus bios on the ftw, just to get knocked down a bit lol
> 
> Which user (example)? I dont see anything atm out of the ordinary....


What overclock are you using?

I'm on my Strix with the 480 W BIOS, and the maximum I can get is 14512 in Port Royal.

I have fans at 100% and have tried many configurations: undervolting, OC curves, etc. My temps are limiting me even with fans at 100%.


----------



## pat182

In GPU Tweak II from ASUS, you can set a custom temperature-drop curve. In theory, you can set the custom ceiling to something like 2200 MHz from 60C to 91C so it would not throttle as the temp climbs. Has anyone given it a try?


----------



## Thebc2

GTANY said:


> FrameChasers noticed that its Strix GPU had high temperatures because of a bad radiator contact. He inserted washers and saw a large temperature decrease. Your temperature problem may be related to the same cause.
> 
> With the FTW3, you will be limited if you shunt-mod because of the fuses. Moreover, I am not sure that shunt-modding the EVGA will enable you to bypass the power limit. Finally, you may have a worse GPU on the FTW3.
> 
> I would keep the Strix, your RAM overclock is very good, my 3090 FE managed only 600 Mhz stable. Against the high temperatures, try to insert washers with the default cooler and buy a waterblock (Bykski...) on Black Friday : you should obtain a good price. Or if you are a handyman, you can mod a CPU waterblock to fit the GPU and cut copper plates for RAM, VRM and backplate. I did that on previous cards : it was really efficient and cost me 10 $ only.


I would second this regarding the Strix mounting on the stock heatsink. Mine was not mounted well at all, and even a remount with new paste didn't help: my temps would get up into the 80s originally, and the mid 70s after my repaste. I did not try the washer method, as I ended up putting it under water. No issues with contact using the EKWB Strix block; temps are as low as 21C idle, max out around 37-38C during the warmest part of the day, and didn't top 35C testing last night.

The temps have made a huge difference. I am maxed out at 2046 core (+186), and playing my game of choice at 4K I am seeing sustained clocks at 2220 MHz. Really, really impressive stuff without any power mods and using the stock BIOS. My best Port Royal was 15,172, but I think I can push it higher with some power modding.


Sent from my iPhone using Tapatalk Pro


----------



## odin24seven

Got lucky at Microcenter. On Nov 9th, Microcenter showed 2 on their website. I went up there at 7am and 12 people were in line, so I left. At 9:45 they still showed one in stock. I got there and it was sold, but the guy was like, "we found two more," and everyone that camped out had left, so of course I was like, I'll take it. That's my story. This thing smashes gaming like no other.


----------



## long2905

odin24seven said:


> Got lucky at Microcenter. On Nov 9th Microcenter showed 2 on there website. I went up there at 7am and 12 people were inline so I left. At 9:45 they still showed one in stock. I got there and it was sold. But the guy was like we found two more and everyone that camped out left so of course I was like I'll take it. That's my story. This thing smashes gaming like no other.


congrats! what card did you get?


----------



## Foxrun

I painted all 6 shunt resistors with a conductive pen, the same way I did on Volta and the Titan RTX, but it did absolutely nothing. I scraped and cleaned before I applied it as well. Any ideas why this may not have worked? My FE is far better than the FTW3, but the damn power limit is holding it back.


----------



## Nizzen

Thebc2 said:


> Ek Strix block arrived, just scratching the surface.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my iPhone using Tapatalk Pro


Show me 2200 MHz average in Port Royal, and I'm impressed...


----------



## Falkentyne

Foxrun said:


> I painted all 6 resistors with a conductive pen, same way I did Volta and the Titan RTX, but it did absolutely nothing. I scraped and cleaned before I applied as well. Any ideas why this may have not worked? My FE is far better than the FTW but the damn power limit is holding it back.


You painted the FE and it did nothing?
Can you please run Heaven benchmark and have GPU-Z showing in the background with the values (maximum) visible and take a screenshot?


----------



## Nizzen

chispy said:


> Guys i have found some disturbing entries at the Hall Of Fame of futuremark for Port Royal , there is some guys cheating and a lot of shenanigans going on with the rtx 3090 single gpu entries for Port Royal. Not everything you see it's real even when it says valid result , there are loopholes that have been found. So if you find yourself with a lower score than some of this entries do not worried it does not represent real , legit scores.
> 
> Futuremark / UL is doing an investigation and clean up
> 
> Do not ask me what are the loopholes as it would be counter productive.


Nice find!

I hope they are above me on the chart, so I can climb back into the top 10

Love from Nzz


----------



## cstkl1

Has anybody noticed AC Valhalla purposely limiting 3090s to 350 W and 3080s to 300 W?

But in the menu and gender selection it's all 480 W (3090) / 420 W (3080); fps just shoots up and GPU utilization hits 98/99%.

Is this a ploy to make the 6800 XT / 6900 XT look good, since there are multiple articles stating the 3090 cannot hit 4K 60 fps in AC Valhalla?

Coincidence, my foot.

1440p to 4K, same limit.
NVENC recording, same limit, with fps tanking.

This is BS if Ubi thinks they can mess with the 70% market share. AC games: I look forward to every one of them.


----------



## Foxrun

Falkentyne said:


> You painted the FE and it did nothing?
> Can you please run Heaven benchmark and have GPU-Z showing in the background with the values (maximum) visible and take a screenshot?


I will once I get home. HWiNFO is still showing a max of 393 watts, and I'm still power limited.


----------



## HyperMatrix

cstkl1 said:


> has anybody noticed ac valhalla purposely limiting 3090 to 350w and 3080s to 300w
> 
> but in menu and gender selection its all 480(3090) 420w (3080) fps just shoots up and gpu utilization hits 98/99%..
> 
> is this a ploy to make 6800xt/6900xt look good. since theres multiple articles stating 3090 cannot hit 4k 60fps in ac Val..
> 
> coincidence my foot.
> 
> 1440p-4k same limit.
> nvenc record same limit with fps tanking..
> 
> this is bs if ubi think they can mess with the 70% market share. ac games i look forward to everyone of them.


Video games have absolutely zero control over your clock speeds or power limits. Different types of rendering use different amounts of power. In CoD Black Ops Cold War, for example, in some areas I'll be at 2000 MHz+ at 100% usage with like 330-370 W draw; other times I'll be at 480 W, unrelated to temperature.


----------



## HyperMatrix

Foxrun said:


> I will once I get home. Hwinfo is still showing a max of 393 watts, and I’m still power limited.


Shunting has the opposite effect of what you're looking for in software: it shows lower power. That's how it tricks the card into thinking it's below the limit and allows it to pull more. You have to check with a Kill-A-Watt to see your actual power draw.
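That under-reporting follows directly from the shunt ratio. A sketch, assuming a clean 5 mOhm → 3 mOhm swap like the one discussed earlier in this thread:

```python
# Sketch of the under-reporting described above: software converts the shunt
# voltage drop using the stock 5 mOhm value, so after a swap to 3 mOhm the
# real draw is the reported value times 5/3. Resistor values are the ones
# from this thread; the 393 W reading is only illustrative.

def actual_from_reported(reported_w, stock_mohm=5.0, new_mohm=3.0):
    return reported_w * stock_mohm / new_mohm

# A 393 W software reading would mean ~655 W at the card, *if* the mod took:
print(actual_from_reported(393))
```

If the software reading doesn't drop after the mod, the new shunt likely isn't making electrical contact at all.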


----------



## cstkl1

HyperMatrix said:


> Video games have absolutely 0 control over your clock speeds or power limits. Different types of rendering use different amounts of power. In cod black ops Cold War for example in areas I’ll be at 2000MHz+ at 100% usage with like 330-370W draw. Other times I’ll be at 480W, unrelated to temperatures.


Nice speech. Play it, or keep that opinion to yourself.

And stop assuming the observation was from early scenes; I have my OSD on throughout the game.

It's the only reason my OC'd 3080 Strix isn't smacking a stock 3090: it's power-limited.

5900X with a TUF 3080, same issue.

The 3090 was even glorious at the 350 W cap in gameplay.

I am pissed, btw. AC is one of the few franchises I'm nuts about. In AC V, when his dad was slain, I immediately stopped gameplay and bought the full Ultimate version; world-conquering revenge-slayer mode was so on.

DX12, no special game libs. For some odd reason all AMD cards run at 100%...

Yeah, the plot thickens. Why the weird numbers on the 3080/3090 only in gameplay? Coincidence, my foot.


----------



## HyperMatrix

cstkl1 said:


> nice speech. play it or keep that opinion to yourself.
> 
> and stop assuming the observation was on early scenes. i have my osd on throughout the game.
> 
> its the only reason my 3080 strix oced not smacking a stock 3090. its powerlimited.
> 
> 5900x with tuf 3080 same issue.
> 
> 3090 was even glorious at 350w cap in game play.
> 
> i am pissed btw. ac games is one of the few franchise i am nut on. ac V when his dad was slain. immediately stopped game play. bought full ultimate version. world conquer revenger slayer mode was so on.
> 
> dx12 no special game libs. for some odd reason all amd cards runs 100%..
> 
> yeah plot thickens. y the weird number on 3080/3090 only in gameplay. coincidence my foot.


Sorry it seems you missed what I wrote. Here you go. I added bold to make it easier to see. 

𝗩𝗶𝗱𝗲𝗼 𝗴𝗮𝗺𝗲𝘀 𝗵𝗮𝘃𝗲 𝗮𝗯𝘀𝗼𝗹𝘂𝘁𝗲𝗹𝘆 0 𝗰𝗼𝗻𝘁𝗿𝗼𝗹 𝗼𝘃𝗲𝗿 𝘆𝗼𝘂𝗿 𝗰𝗹𝗼𝗰𝗸 𝘀𝗽𝗲𝗲𝗱𝘀 𝗼𝗿 𝗽𝗼𝘄𝗲𝗿 𝗹𝗶𝗺𝗶𝘁𝘀.


----------



## cstkl1

btw 1080p-1440p-


HyperMatrix said:


> Sorry it seems you missed what I wrote. Here you go. I added bold to make it easier to see.
> 
> 𝗩𝗶𝗱𝗲𝗼 𝗴𝗮𝗺𝗲𝘀 𝗵𝗮𝘃𝗲 𝗮𝗯𝘀𝗼𝗹𝘂𝘁𝗲𝗹𝘆 0 𝗰𝗼𝗻𝘁𝗿𝗼𝗹 𝗼𝘃𝗲𝗿 𝘆𝗼𝘂𝗿 𝗰𝗹𝗼𝗰𝗸 𝘀𝗽𝗲𝗲𝗱𝘀 𝗼𝗿 𝗽𝗼𝘄𝗲𝗿 𝗹𝗶𝗺𝗶𝘁𝘀.


I think you missed what I wrote.
Play the damn game and stop making your "well" encompass the world. People do shady **** every day that most dufuses don't even know is happening.

RTX 3080:
1080p - 300 W
1440p - 300 W
4K - 300 W
5K - 300 W

Play the game, then comment. Because atm it's a waste of time replying, so I'm just going to ignore you.


----------



## HyperMatrix

cstkl1 said:


> btw 1080p-1440p-
> think you-missed what i wrote
> play the damn game and stop making your "well" encompass the world. ppl do shady **** everyday that most dufus dont even know its happening.
> 
> rtx 3080
> 1080p -300w
> 1440p -300w
> 4k -300w
> 5k-300w
> 
> play the game. then comment.


Hopefully third time’s the charm. 

𝗩𝗶𝗱𝗲𝗼 𝗴𝗮𝗺𝗲𝘀 𝗵𝗮𝘃𝗲 𝗮𝗯𝘀𝗼𝗹𝘂𝘁𝗲𝗹𝘆 0 𝗰𝗼𝗻𝘁𝗿𝗼𝗹 𝗼𝘃𝗲𝗿 𝘆𝗼𝘂𝗿 𝗰𝗹𝗼𝗰𝗸 𝘀𝗽𝗲𝗲𝗱𝘀 𝗼𝗿 𝗽𝗼𝘄𝗲𝗿 𝗹𝗶𝗺𝗶𝘁𝘀.


----------



## ExDarkxH

Nizzen said:


> Nice find!
> 
> I hope they are over me on the chart, so I can climb to top 10 again
> 
> Love from Nzz


#5 is very suspicious.

He's using a RAM speed of 3000 MHz and an i7 7700K. Additionally, he achieved an average clock of about 2,300 MHz with an average temp of 47C?

Unless he's using a hand-picked chip that belongs to Jensen Huang, I'm pretty sure that clock is impossible.

And even if it were possible, he's still scoring higher than people with better clocks using exotic cooling solutions. They'd better remove his score.


----------



## Foxrun

HyperMatrix said:


> Shunting does the opposite effect that you’re looking for. It shows lower power in software. That’s how it tricks the card into thinking it’s below the limit, and allows it to pull more. Have to check with killawatt to see your actual power draw.


Yeah, I remember that happening with my last two. This time the wattage hasn't changed in the slightest; it's still reading the same as it did before the shunt. I'll probably clean and scrape again, then reapply.


----------



## mirkendargen

cstkl1 said:


> has anybody noticed ac valhalla purposely limiting 3090 to 350w and 3080s to 300w
> 
> but in menu and gender selection its all 480(3090) 420w (3080) fps just shoots up and gpu utilization hits 98/99%..
> 
> is this a ploy to make 6800xt/6900xt look good. since theres multiple articles stating 3090 cannot hit 4k 60fps in ac Val..
> 
> coincidence my foot.
> 
> 1440p-4k same limit.
> nvenc record same limit with fps tanking..
> 
> this is bs if ubi think they can mess with the 70% market share. ac games i look forward to everyone of them.


This is just how the AC engine is. Odyssey also didn't use a huge amount of power with 100% GPU utilization and maxed core speeds. If your GPU utilization isn't maxed you're probably CPU limited. If your GPU utilization is maxed, and your core speeds aren't throttled, what exactly is the problem? If you're such an AC aficionado you should have seen this in at least Odyssey and Origins too.


----------



## Daneth

mirkendargen said:


> This is just how the AC engine is. Odyssey also didn't use a huge amount of power with 100% GPU utilization and maxed core speeds. If your GPU utilization isn't maxed you're probably CPU limited. If your GPU utilization is maxed, and your core speeds aren't throttled, what exactly is the problem? If you're such an AC aficionado you should have seen this in at least Odyssey and Origins too.


I've found that limiting fps to 60 helps stabilize power draw, which in turn seems to get rid of some of the frame time inconsistencies in Valhalla. At the end of the day I'd rather play the game without stuttering than maximize fps, I hope they fix performance soon so I can go back to uncapped fps.


----------



## Daneth

Question: Has anyone who has the Aorus Xtreme 3090 attempted to flash the bios with something else? If so, how did that work out for you? Also, anyone who has questions about the good and bad aspects of this card, I have one and maybe I can answer some of them. The default bios power limit is 450w, same as the 3080 xtreme.


----------



## Thanh Nguyen

Nizzen said:


> Show me 2200mhz in average in Port Royal, and I'm impressed...


You should ask for Metro Exodus with RTX enabled. Port Royal is not hard.


----------



## originxt

What's the minimum score to tell if your chip is a dud? Can just break 14200 on the 500w bios with the ftw3 at around 65c. 2040 at 0.950 voltage cap. If I try to run it at max PL/voltage at offsets, I start hitting 68-70c. Any memory offsets results in crashing PR.


----------



## HyperMatrix

Foxrun said:


> Yeah I remember that happening with my last two. This time the wattage hasn’t changed the slightest. It’s still the same measurements as it was before the shunt. I’ll probably clean and scrape again then re apply.


You should be able to use a multimeter to check the resistance between the 2 ends of the resistor before/after you apply the paint. The silver paint is essentially lowering the resistance. You should be able to quantify that change this way.


----------



## ExDarkxH

Thanh Nguyen said:


> You should ask for Metro Exodus with RTX enable. Port Royal is not hard.


it's not hard?
umm okay....

show me someone doing this then if it's so easy at a normal temperature (no open windows, no cold water, no 5 fans blowing at it, etc.)

The person closest to this is #13 in the world with a 2,205 average @ 34c
I dont see ANY other scores doing this. As far as I can see, the #18 score with a below-30c temp is only getting a 2,159 average
If it's so easy, why isn't anybody able to do it?


----------



## DrunknFoo

ExDarkxH said:


> #5 is very suspicious
> 
> Hes using ram speed of 3000mhz and a i7 7700k
> additionally he achieved an average clock of about 2,300Mhz with an average temp of 47c?
> 
> Unless hes using a handpicked chip that belongs to Jensen Huang, I'n pretty sure that clock is impossible
> 
> and even if it was possible, hes still scoring higher than ppl with better clocks using exotic cooling solutions. They better remove his score



Could be DICE (dry ice) or modified cooling for that score.. the cpu doesnt do much for PR... Likely a test bench if that old of a cpu...


----------



## Lobstar

HyperMatrix said:


> Awesome use of the stream deck. Haha. How did you get that set up?


I don't remember but I think there is a place in the SD software to grab extensions. You turn on memory sharing mode in HWiNFO64 and it just kinda works. Then you just drag out the widget to a button, choose your sensor and off you go. (https://www.reddit.com/r/StreamDeckSDK/comments/aj3h01)


----------



## ExDarkxH

DrunknFoo said:


> Could be dice or modified cooling for that score.. cpu doesnt do much for pr... Likely a test bench if that old of a cpu...


While it's true that the cpu doesn't matter that much, we are talking about the top 5 in the world here. Even a tiny difference would affect the placement on the boards, not to mention 3000 RAM speed? There is a part in the benchmark where fps goes over 110, so I'm sure such an outdated cpu would affect him there. Who buys a 3090 and uses dry ice but keeps their RAM at 3000 MHz?
It's a very very suspicious score


----------



## DrunknFoo

ExDarkxH said:


> while its true that cpu doesnt matter that much, we are talking about the top 5 in the world here. Even a tiny difference would affect the placement on the boards, not to mention 3000 ram speed? There is a part in the benchmark where fps goes over 110 so im sure such an outdated cpu would affect him there. Who buys a 3090 and uses dice but keeps their ram at 3000mhz?
> Its a very very suspicious score


ahhh, maybe the person just threw a block on the gpu, didn't have sufficient cooling for the ram and thought to play it safe without touching the ram?
i dunno, i think it's legit, that user has been on the ladder for a while. maybe he's waiting for a block to play around with the ladder, or is satisfied and just playing games and not OCing anymore?


----------



## ExDarkxH

DrunknFoo said:


> ahhh, maybe person just threw a block on the gpu, didn't have sufficient cooling for ram and thought to play safe without touching the ram?
> i dunno, i think its legit, that user has been on the ladder for awhile, maybe waiting for a block to play around with the ladder, or is satisfied and just playing games and not ocing anymore?


possible, but it annoys me that it says 47c average for his gpu temperature. if his true gpu temp were below zero then fine, it would be possible, but that's not what is showing.
It's simply the totality of everything combined:

The crappy cpu
The ddr4 ram at 3,000 MHz
The average gpu temp of 47c

Also, I want to point out that his peak and average clock differ by 2 full gpu bins... That doesn't even make sense. If he was running such cold temps there wouldn't be such a variation between peak and average.

Example:
The guy right below him in the boards, Zedville @ #6

peak clock 2,325
average clock 2,325

How about #7

peak clock 2,310
average clock 2,310

The reality is everything about his score is a red flag and doesn't make sense. It's very suspicious


----------



## HyperMatrix

ExDarkxH said:


> possible but it annoys me that it says 47c average for his gpu temperature. if his true gpu temp is below zero then fine it would be possible, but thats not what is showing.
> It's simply the totality of everything combined
> 
> The crappy cpu
> The ddr4 ram at 3,000Mhz
> The average gpu temp of 47c
> 
> Also, i want to point out that his peak and average clock differ by 2 full gpu bins... That doesnt even make sense. If he was running such cold temps there wouldn't be such a variation between peak and average.
> 
> Example:
> The guy right below him in the boards, Zedville @ #6
> 
> peak clock 2,325
> average clock 2,325
> 
> How about #7
> 
> peak clock 2,310
> average clock 2,310
> 
> The reality is everything about his score is a red flag and doesnt make sense. it's very suspicious


It's weird that he'd have not just a 3090, but one that is shunted and possibly volt-modded, considering his crap CPU. But the peak clock being different from his average clock isn't suspicious. His average temp was 47C, meaning it was much higher by the end of the run, so a bit of a drop in clocks is normal. #6 and #7 are showing no temps because they were running LN2, which explains why they were able to maintain the same clock.

He'd have to have a golden chip or be running an elmor mod, and shunted. The card appears to be under water judging from the temps. But yeah....why a 7700k. Lol. Unless it's his barebones bench system that he dropped the card into for some reason.


----------



## bogdi1988

Gadfly said:


> Really? I saw some live streams where the power limit was raised.


They might have raised the power limit via MSI Afterburner, but not past what the BIOS allows. The other way they raised it was with a shunt mod.


----------



## DrunknFoo

menko2 said:


> What overclock are you using?
> 
> I'm with my Strix 480W Bios and the maximum I can get is 14512 in port royal.
> 
> I have fans at 100% and tried many configurations, undervolting, oc curves,... My temps are limiting me even with fans at 100%.


don't undervolt or use curves when trying to climb the ladder. if you have heat issues, then it's best to work on fixing that...
likely you are power limited at that score


----------



## Nizzen

Thanh Nguyen said:


> You should ask for Metro Exodus with RTX enable. Port Royal is not hard.


Battlefield V is pretty hard compared to CoD. CoD is like 2200 MHz, and BFV is like 1950 MHz with the same settings LOL


----------



## DrunknFoo

HyperMatrix said:


> It's weird that he'd have not just a 3090, but one that is shunted and possibly volt modded, considering his crap CPU. But the clock speed being different from his average clock part isn't suspicious. His average temp was 47C. Meaning it was much higher by the end of the run. So a bit of a drop in clocks is normal. #6 and #7 are showing no temps because they were running LN2 which explains why they were able to maintain the same clock.
> 
> He'd have to have a golden chip or running elmor, and shunted. Card appears to be under water from the temps. But yeah....why a 7700k. Lol. Unless it's his barebones bench system for some reason that he dropped the card into.


would it be weird that I once posted a top-25 score back when the 2080 launched, with an i3 from my htpc? lol


----------



## ExDarkxH

HyperMatrix said:


> It's weird that he'd have not just a 3090, but one that is shunted and possibly volt modded, considering his crap CPU. But the clock speed being different from his average clock part isn't suspicious. His average temp was 47C. Meaning it was much higher by the end of the run. So a bit of a drop in clocks is normal. #6 and #7 are showing no temps because they were running LN2 which explains why they were able to maintain the same clock.
> 
> He'd have to have a golden chip or running elmor, and shunted. Card appears to be under water from the temps. But yeah....why a 7700k. Lol. Unless it's his barebones bench system for some reason that he dropped the card into.


Well that's part of the point

The only way he could get such a clock is if he was on sub-ambient cooling. I call bs on getting 2,325 MHz at 47c
Even if his chip is literally 1 in a million, I doubt it's 3 standard deviations better than a "golden chip". Aka impossible

not to mention the guy getting better sustained clocks is scoring lower than him

Fraudulent score

And if such a gpu existed, I'm sure he could sell it for $20,000, and I'm not exaggerating, since that gpu would be SIGNIFICANTLY faster than any other gpu out there and some rich snob would easily buy it up


----------



## uniketou

Hello all,

I have a ZOTAC RTX 3090 and I want to unlock the power limit.

I can't find any vBIOS for it.

Has anyone already made a custom vBIOS?


----------



## Foxrun

HyperMatrix said:


> You should be able to use a multimeter to check the resistance between the 2 ends of the resistor before/after you apply the paint. The silver paint is essentially lowering the resistance. You should be able to quantify that change this way.


I'll order one of those. I don't remember this being so difficult before. Maybe I didn't scrape the ends enough? Am I supposed to scrape the middle black portion as well? Honestly I don't remember having to scrape the middle. I added more this time and still nothing. Maybe an extra clock jump, but that could be placebo as power draw is the same.


----------



## HyperMatrix

Foxrun said:


> Ill order one of those. I don’t remember this being so difficult before. Maybe I didn’t scrape the ends enough? Am I supposed to scrape the middle black portion as well? Honestly I don’t remember having to scrape the middle. I added more this time and still nothing. Maybe an extra clock jump but that could be placebo as power draw is the same.


No you just have to scrape the metallic parts on the ends. Could be a case of you not having scraped one or two of them well enough or the conductivity of your silver paint isn't high enough or application of it is too sparse. Lot of variables. But yeah testing the resistance on each one with a multimeter after application should give you peace of mind as well as an indication of how much of an increase you should expect.


----------



## Foxrun

HyperMatrix said:


> No you just have to scrape the metallic parts on the ends. Could be a case of you not having scraped one or two of them well enough or the conductivity of your silver paint isn't high enough or application of it is too sparse. Lot of variables. But yeah testing the resistance on each one with a multimeter after application should give you peace of mind as well as an indication of how much of an increase you should expect.


Stupid question here, but will isopropyl be able to remove the ‘paint’?


----------



## HyperMatrix

Foxrun said:


> Stupid question here, but will isopropyl be able to remove the ‘paint’?


Either that or acetone (nail polish remover) will do the trick.


----------



## ExDarkxH

excited to see the AMD 6000 series benchmarks tmrw.
They totally misled everyone in their graphs by focusing completely on 1440p, because they knew that would give the impression their gpus are faster, when in reality their superior cpus were carrying them at that resolution. They purposely compared performance to NVIDIA using Intel cpus....

The rare graph that showed 4k benchmarks (which neutralized the cpu advantage) had SAM enabled to further inflate performance

Tmrw when we see apples-to-apples comparisons.. with both using the same cpu and with SAM disabled, you will see that AMD is still slower than Nvidia, especially at 4k resolutions


----------



## dr/owned

ExDarkxH said:


> The only way he could get such a clock is if he was sub ambient cooling. I call bs on getting 2,325 mhz at 47c
> Even if his chip is literally 1 in a million, i doubt its 3 standard deviations better than a “golden chip”. Aka impossible


I haven't seen people overvolting but maybe that gets these cards up to 2300Mhz. 47C looks like a waterblock with pretty cold water.


----------



## GAN77

ExDarkxH said:


> Tmrw when we see apples to apples comparisons.. with both using the same cpu and with SAM disabled, you will see that AMD is still slower than Nvidia; especially at 4k resolutions


True fan speech :) I smiled :)


----------



## Foxrun

HyperMatrix said:


> Either that or acetone (nailpolish remover) will do the trick.


I just finished scraping the hell out of them, all 6, and still nothing. This is crazy. At this point I’m not sure what to do. If I can’t get this then I’m going to try for a kingpin. I’m getting rusty in my old age lol


----------



## HyperMatrix

Foxrun said:


> I just finished scraping the hell out of them, all 6, and still nothing. This is crazy. At this point I’m not sure what to do. If I can’t get this then I’m going to try for a kingpin. I’m getting rusty in my old age lol


Are you using proper conductive silver paint? Didn't mistake it for your daughter's silver nail polish or something? Haha.


----------



## Foxrun

HyperMatrix said:


> Are you using proper conductive silver paint? Didn't mistake it for your daughter's silver nail polish or something? Haha.


Ha, yes I’m using the same stuff I had used before for my previous shunts. I’ll try again tomorrow. I’ve taken it apart 6 times now. I must admit, these FE’s are incredibly easy to disassemble and reassemble. It’s actually enjoyable.


----------



## Asmodian

Foxrun said:


> I just finished scraping the hell out of them, all 6, and still nothing.


The silver paint has too high and variable resistance. With a huge variance in the resistance at each shunt you end up breaking the power balancing logic even if it does raise the power limit. You could end up running some VRMs much harder than others.

Buy a bunch of 3 mOhm shunts and replace them all. Soldering is really not much harder, though it is much more permanent.


----------



## Foxrun

Asmodian said:


> The silver paint has too high and variable resistance. With a huge variance in the resistance at each shunt you end up breaking the power balancing logic even if it does raise the power limit. You could end up running some VRMs much harder than others.
> 
> Buy a bunch of 3 mOhm shunts and replace them all. Soldering is really not much harder, though it is much more permanent.


I’ve never done that before but I’m sure I can figure it out with some YouTube videos. The silver paint worked very well in the past. I’m still stunned that it’s not working.


----------



## ShadowYuna

The EK waterblock and backplate for my STRIX 3090 came this week.

Had some coil whine from a capacitor after the installation, so I had to apply a 2mm thermal pad on the cap, and it reduced the coil whine a lot. (There's still some, but I can live with that)

After the waterblock, the GPU stays at around 45 degrees and boosts stably at 2055 or 2070 in Assassin's Creed Valhalla. Very happy with the result, but I hate coil whine, especially since I want silent gaming.

I will use the EK block for a while until I get a block from Aqua Computer or Bitspower. I hope other blocks can contact the cap directly, unlike EK


----------



## Foxrun

Do those clocks hold in BFV, Control, or any UE4 game? I can hit 2100 in Valhalla with my FE, no shunt, but in Insurgency I'll drop to 1875-2040.


----------



## cstkl1

Foxrun said:


> Do those clocks hold in in BFV, Control, or any UE4 game? I can hit 2100 in Valhalla with my FE no shunt, but in Insurgency I’ll drop to 1875-2040.


that's because valhalla caps the 3090 to 350w
and 3080 to 300w.
if you record with nvenc it reduces the cap even further. 

this happens at EVERY resolution.


----------



## kx11

cstkl1 said:


> that because valhalla caps 3090 to 350w
> and 3080 to 300w.
> if you record with nvenc it reduces the cap even further.
> 
> this happens at EVERY resolution.


not with me, Valhalla pulls up to 480w from my Strix OC 3090


----------



## HyperMatrix

cstkl1 said:


> that because valhalla caps 3090 to 350w
> and 3080 to 300w.
> if you record with nvenc it reduces the cap even further.
> 
> this happens at EVERY resolution.





HyperMatrix said:


> Hopefully third time’s the charm.
> 
> **Video games have absolutely 0 control over your clock speeds or power limits.**


Looks like 3 times wasn’t enough. If you’re going to spout off unsubstantiated claims, then insult anyone who replies to you, please leave this thread. Your conspiracy theories aren’t helping anyone here, and you’re clearly not looking to receive help from anyone.


----------



## Asmodian

HyperMatrix said:


> Looks like 3 times wasn’t enough. If you’re going to spout off unsubstantiated claims, then insult anyone who replies to you, please leave this thread. Your conspiracy theories aren’t helping anyone here, and you’re clearly not looking to receive help from anyone.


OK, whew. I was super confused when I read that. I simply couldn't believe someone thought a game could or would change the power limits of a GPU.

It makes sense if a game is simply too easy to render (CPU bound, etc.) that it would end up using less power but the game itself telling the GPU to throttle is a really weird thing to suggest.


----------



## cstkl1

kx11 said:


> not with me, Valhalla pulls up to 480w from my Strix OC 3090


 screenshot open world

@vmanuelgm 

bro i saw your shunted 3090 videos also pulling less power. .. did u check on the wall bro?


----------



## bmgjet

Valhalla pulls 465W on mine, which Nvidia-SMI reports as 350W since I'm running 15 mOhm shunts.
Also of note: it's locking the memory clocks to the P2 state, like what happens when you're running a CUDA compute load.
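For anyone curious how a 465W real draw shows up as 350W in software, the arithmetic is just parallel-resistor math. A minimal sketch, assuming the stock large shunts are 5 mOhm (consistent with the 0.005 Ohm figures mentioned elsewhere in this thread) and that the 15 mOhm parts are stacked in parallel on top of the originals:

```python
# Sketch: how a stacked shunt skews software power readings.
# Assumption: stock shunt is 5 mOhm and the controller still
# believes that value after the mod.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel (mOhm)."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 5.0       # mOhm, original shunt (assumed)
R_STACKED = 15.0    # mOhm, part stacked on top

r_eff = parallel(R_STOCK, R_STACKED)   # 3.75 mOhm effective
scale = r_eff / R_STOCK                # 0.75: reported/actual ratio

actual_watts = 465.0
reported = actual_watts * scale        # ~349 W

print(f"effective shunt:  {r_eff:.2f} mOhm")
print(f"reported power:   {reported:.0f} W")  # → 349 W
```

Which lands almost exactly on the 350W reading above: the controller sees 75% of the true shunt voltage, so it under-reports power by the same 25%.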


----------



## defiledge

6800xt is 22% slower than a 3090. I wonder if 6900xt will beat a 3090. Sell my 3090 and invest?


----------



## kx11

cstkl1 said:


> screenshot open world
> 
> @vmanuelgm
> 
> bro i saw your shunted 3090 videos also pulling less power. .. did u check on the wall bro?


how about video?


----------



## Asmodian

defiledge said:


> 6800xt is 22% slower than a 3090. I wonder if 6900xt will beat a 3090. Sell my 3090 and invest?


I wonder how much faster the 6900 XT will be. With the same memory bus, isn't it unlikely to scale up much further?


----------



## defiledge

Apparently amd has insane OC headroom. On another note, does anybody know the resistance of the shunts that der8auer used for the 3090 TUF?


----------



## Asmodian

I thought 3 mOhm?

No, he stacked 0.005 Ohm, just like Pascal.


----------



## dante`afk

i think cstkl1 is trolling all of us and having a laugh



defiledge said:


> 6800xt is 22% slower than a 3090. I wonder if 6900xt will beat a 3090. Sell my 3090 and invest?


personally, I'll try to get a 6900xt and then decide, having both here.

I believe on 8th December (or when the NDA falls) a huge load of 3090s will go up for sale from people ditching their cards


then again, you have to consider what nvidia is giving you: gsync (if you have such a monitor atm), rtx voice, dlss, nvidia streaming etc., all of which amd does not have. especially rtx voice during covid times is insane


----------



## chispy

ShadowYuna said:


> EK waterblock and backplate for STRIX 3090 came this week.
> 
> Had some problem on coil whine capacitor after the installation , so had to apply 2mm thermal pad on cap and it reduce the coil whine alot.(Still there is some but I can live with that)
> 
> After the waterblock the GPU stays at around 45 degree and boost stable at 2055 or 2070 on Assassin Creed Valhalla . Very happy with the result but hate coil whine specially I want silent gaming.
> 
> I will use EK block for awhile until I get block from Aqua Computer or Bitspower. I hope other block can contact the Cap directly not like EK


Sorry to hear about your coil whine; it seems it's hit or miss with it. These Strix cards, all they need is good temps to shine. I'm receiving the Bykski block tomorrow for my Strix also, a very cheap block coming from China via AliExpress. I want the Aqua Computer or Phanteks one if the Bykski doesn't perform as I want.


----------



## ShadowYuna

chispy said:


> Sorry to hear about your coil whine , it seems is a hit or miss with it. This strix cards all they need is good temps to shine  . I'm receiving the Byski block tomorrow for my Strix also , very cheap block coming from China Aliexpress. I want he Aqua computer or Phantek if the Byski does not performs as i want.


Yeah. It seems there is a bad contact point somewhere. Anyway, ATM the coil whine is not that big an issue at my current distance from my desktop to my sofa; I can deal with it, since temps and boost clocks are very stable.
I am also waiting for other brands' blocks to come out so I can test them. I hope your Bykski works well.


----------



## HyperMatrix

defiledge said:


> 6800xt is 22% slower than a 3090. I wonder if 6900xt will beat a 3090. Sell my 3090 and invest?


6900XT has 11% more SMs than the 6800XT. So even if there is no extra heat/power related issue with the 6900XT and it can clock exactly like the 6800XT, and if the 6800XT turns out to be 22% slower than the 3090, then the 3090 will still be 11.5% faster than the 6900XT, along with 30-50% faster RT. 

Considering that, as well as G-Sync, DLSS, and Nvidia claiming a similar “SAM” level hack that shows similar performance improvements to what AMD is claiming, and things start to look even better. 

I don’t think the 6900XT will make you regret getting a 3090. But I think the release of the 3080Ti with the same core count and 20GB vram at $899 might.
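The scaling estimate above can be checked back-of-envelope. As a hedge: the compute-unit counts (72 for the 6800 XT, 80 for the 6900 XT) are taken from public specs rather than this thread, linear scaling with CUs at equal clocks is assumed, and "22% slower" can be read two ways, which brackets the answer:

```python
# Rough check of the 3090-vs-6900XT estimate. Assumes performance
# scales linearly with compute units at identical clocks; CU counts
# (72 vs 80, ~11% more) are public-spec assumptions.

cu_6800xt, cu_6900xt = 72, 80
cu_gain = cu_6900xt / cu_6800xt - 1        # ~11.1% more CUs

# Reading A: "22% slower" means 6800XT = 0.78 x 3090
perf_6900xt_a = 0.78 * (1 + cu_gain)       # relative to 3090
gap_a = 1 / perf_6900xt_a - 1              # 3090 lead ~15.4%

# Reading B: "22% slower" means 3090 = 1.22 x 6800XT
gap_b = 1.22 / (1 + cu_gain) - 1           # 3090 lead ~9.8%

print(f"3090 lead, reading A: {gap_a:.1%}")
print(f"3090 lead, reading B: {gap_b:.1%}")
```

So depending on how "22% slower" is read, the 3090's rasterization lead over a perfectly-scaling 6900 XT works out to roughly 10-15%, which brackets the 11.5% figure in the post.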


----------



## Sky3900

Real world data time fwiw...

I recently re-modded my air-cooled 3090 FE, stacking 004s on top of all six large shunts using the MG842 silver paint pen. This ended up adding ~7-8 mOhm in parallel, or ~2.9-3.1 mOhm for the new total shunt resistance.

I ran a test to see what effect additional power has on performance in Borderlands 3. Settings: 4K, Badass, HDR on, NV control panel set for max quality/performance/sharpening. I pointed towards a scene with lots of geometric detail and NPC activity... which drew as close to max power as I could get... and left it there as I adjusted the power settings down from +114. Temps and clocks were allowed to equalize at each new setting until stable for a couple minutes. Power draw was measured at the wall and compensated for PSU efficiency and system power consumption. Fans @ 100%, GPU +98, mem + 558, for all tests.

Power / GPU clock / FPS / Temp (C) / Board power draw (W)

+114 / 2055 / 78 / 76 / 589
+107 / 2070 / 80 / 72 / 564
+100 / 2040 / 80 / 68 / 523
+93 / 2025 / 79 / 67 / 519
+86 / 1995 / 79 / 64 / 485
+79 / 1965 / 78 / 61 / 451
+73 / 1905 / 77 / 58 / 411
+66 / 1875 / 76 / 55 / 380
+59 / 1785 / 73 / 52 / 343

Main takeaways:

- As some have already said, adding a ton of extra power doesn't gain a whole lot of extra performance with air cooling. In this case a 64% increase in power yielded a 10% increase in FPS.

- Performance actually decreased from +107 to +114 power. I suspect due to the automatic temperature down clocking.

- Seems like there really is no point in running an air cooled FE over 525w... at least in BL3.

Side note, the card will draw close to full power in the metro bench. I've seen it pull 630w-640w peak. Also, the back plate measures just a couple degrees hotter than the reported GPU temp using both a thermocouple and IR thermometer (High EMS).

System: I9 9900K @ 5.1, 16g ddr4 b-die @ 4100 (16,17,17, 36, 320), 1TB M.2, 750 PSU (I know)
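Sky3900's first takeaway checks out against the table. A quick sanity check (interpreting "64% more power" as the jump from the +59 row to the +107 row, which is where the best FPS lands, excluding the +114 regression):

```python
# Sanity-check "64% more power for ~10% more FPS" using the
# +59 and +107 rows from the table above (board power W / FPS).

power_low, fps_low = 343, 73      # +59 power setting
power_high, fps_high = 564, 80    # +107 setting (best FPS)

power_gain = power_high / power_low - 1   # ~64%
fps_gain = fps_high / fps_low - 1         # ~10%

print(f"power: +{power_gain:.0%}, fps: +{fps_gain:.0%}")
```

A classic diminishing-returns curve: roughly every extra 40W past ~450W buys only 1-2 FPS on air in this test.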


----------



## Antsu

kx11 said:


> how about video?


If you are actually serious, then I'd suggest taking a look at your GPU usage. No **** the power draw won't be record-breaking when you can't even push the card to 100% usage. I don't know if you deserve my kindness, but if you want to know why this is, it's because of a CPU bottleneck. Overclock your CPU/RAM or crank up the settings even higher (resolution scaling, for example) and watch it bang against the power limit like there is no tomorrow.


----------



## Mat_UK

Cavokk said:


> Yes I did exactly that and its now running very well here on my MSI Trio X with all display ports working  Just had to do remove nVidia display driver an reinstall before running the EXE file
> 
> C


Any chance you could link the BIOSes and give a quick step by step to update the MSI X Trio to the 500w EVGA BIOS please?


----------



## kx11

Antsu said:


> If you are actually serious, then I'd suggest taking a look at your GPU usage. No **** the powerdraw won't be record breaking when you can't even push the card to 100% usage. I don't know if you deserve my kindness, but if you want to know why this is, it's because of a CPU bottleneck. Overclock your CPU/RAM or crank up the settings even higher (Resolution scaling for example) and watch it bang against the powerlimit like there is no tomorrow.


the benchmark is weird as it is, i just wanted to prove him wrong

for my setup there's no CPU bottleneck and GPU usage is fine


----------



## vmanuelgm

cstkl1 said:


> screenshot open world
> 
> @vmanuelgm
> 
> bro i saw your shunted 3090 videos also pulling less power. .. did u check on the wall bro?



650w including monitor and UPS, from the wall.

In regards to board power, 230w is the point at which this card starts to throttle in spite of the shunt mod. It is a simple shunt with only 1x 5 mOhm on every original resistor. It is another 3090.









Give it some time for YouTube processing.

The game is a bit unoptimized; I guess some patches will bring a bit more performance. Typical port.

A game that pulls a lot of power is Metro Exodus at max settings. The RTX 3090 downclocks heavily.


----------



## mattxx88

i see only bykski and bitspower blocks compatible with the FE, is that right?


----------



## Foxrun

mattxx88 said:


> i see only bykski and bitspower blocks compatibles for FE, is that right?


EKWB has a really nice one coming either in December or January. Their 3080 FE block is being released in January if you wanted to check out the design; it's gorgeous.


----------



## Sleuth

Hey guys

Joined the big dick club with a 3090 master,
However, I see the bios power limit is pretty low,
so I wanted to move it to the following bios:

EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com

Worth it?
The card runs well; the OC isn't insane but it's not a terrible card,
so I figured a bigger TDP would allow more headroom?



https://www.3dmark.com/spy/15044564


Can check out username for other bench results


----------



## Cavokk

Mat_UK said:


> Any chance you could link the BIOSes and give a quick step by step to update the MSI X Trio to the 500w EVGA BIOS please?


Sure - it's all on the first page of this thread, and now the 500w bios is directly available for flashing, so you don't need to start with the original bios and use the .exe afterwards like I did. Bios is here: EVGA RTX 3090 VBIOS

Key thing is to use DDU after flashing the new bios and then reinstall the nVidia drivers like normal.

C


----------



## Sleuth

Cavokk said:


> Sure - its all on the first page in this thread and now the 500w bios is directly available for flashing so you dont need to start with the original bios and use the .exe afterwards like I did  Bios is here: EVGA RTX 3090 VBIOS
> 
> Key thing is to use DDU after flashing the new bios and the reinstall the nVidia drivers like normal.
> 
> C



Can you use this bios on a 2-pin card though, out of interest? I'm gonna try the 450w on my Master, but unsure if a 500 would be possible


----------



## vmanuelgm

Godfall 4K

Give it some time for YouTube processing...


----------



## mattxx88

Foxrun said:


> Ekwb has a really nice one either in December or January. Their 3080 FE block is being released in January if you wanted to check out the design; it’s gorgeous.


i did preorder that block, but canceled last week cause i saw they list compatibility only for the 3080.
so is it compatible with the 3090 too? if so, i'll reorder it


----------



## mbm

Sleuth said:


> Hey guys
> 
> Joined the big dick club with a 3090 master,
> However i see the bios power limit is pretty low,
> wanted to move it to the following bios:
> 
> EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com
> 
> Worth it ?
> The card runs well, OC isnt insane but its not a terrible card,
> so figured a bigger TDP would allow more headroom ?
> 
> 
> 
> https://www.3dmark.com/spy/15044564
> 
> 
> Can check out username for other bench results


what do you expect to gain with another BIOS?


----------



## Sleuth

mbm said:


> what do you expect to gain with another BIOS?


Higher power should mean a higher OC, no?


----------



## mostwantd1

Hi, I have the FE and the card gets loud quickly. I know I can use Afterburner, and I have successfully set up a decent fan curve where it stays relatively quiet at load, but I was wondering if any of the provided flash bios files would do the trick so I wouldn't have to use 3rd-party software. Thank you in advance.


----------



## Foxrun

mattxx88 said:


> I did the preorder on that block, but canceled last week because I saw they only list compatibility for the 3080.
> So is it also compatible with the 3090? If so, I'll reorder it.


lol I did the same exact thing. They told me the 3090 FE block is coming. I’m hoping near the end of December but realistically I think we will see it in January.


----------



## ExDarkxH

As predicted, the 6800 XT is slower than the 3080. They are _extremely_ close in rasterization though and trade blows a lot depending on the game.

Just saw the GN review.

AMD posted all those misleading charts using their faster CPU vs the NVIDIA/Intel combo to exaggerate GPU performance, when in reality the CPU was carrying the FPS higher.


----------



## Wihglah

ExDarkxH said:


> As predicted, the 6800 XT is slower than the 3080.
> Just saw the GN review.
> 
> AMD posted all those misleading charts using their faster CPU vs the NVIDIA/Intel combo to exaggerate GPU performance, when in reality the CPU was carrying the FPS higher.


Most interesting was the memory bandwidth issues at 4K. Presumably the 6900XT will be no faster in those cases?

And then the RTX scores...


----------



## kx11

ExDarkxH said:


> As predicted, the 6800 XT is slower than the 3080. They are _extremely_ close in rasterization though and trade blows a lot depending on the game.
> 
> Just saw the GN review.
> 
> AMD posted all those misleading charts using their faster CPU vs the NVIDIA/Intel combo to exaggerate GPU performance, when in reality the CPU was carrying the FPS higher.


RAGE MODE makes it a little bit faster though in AMD titles like HITMAN


----------



## ExDarkxH

What I found interesting was the poor Vulkan performance of the 6800 XT.

It actually runs worse with Vulkan.


----------



## defiledge

are we gonna get ****ed by the 6900xt?


----------



## ExDarkxH

defiledge said:


> are we gonna get ****ed by the 6900xt?


not a chance lol


----------



## defiledge

Why does the 3080 score much higher? Does PR have RTX on?


----------



## kx11

defiledge said:


> are we gonna get ****ed by the 6900xt?


not yet i guess but it's coming


----------



## mattxx88

kx11 said:


> not yet i guess but it's coming


I got my 3090 FE yesterday 😭


----------



## ExDarkxH

at least their cpus are still good :/


----------



## kx11

ExDarkxH said:


> at least their cpus are still good :/


Guru3D's Far Cry 3 benchmark tells another story; they capped the fps to 120 for some reason.


----------



## Wihglah

defiledge said:


> are we gonna get ****ed by the 6900xt?


I doubt it.

at any rate, G-Sync, RTX and DLSS are way too important at the moment to trade them in.


Also - why are they all testing with 2 sticks of 3200MHz single channel ram?

Who gets a 3090 and has 3200MHz ram???
Also - I bet the ring bus was at stock or they used Ryzen 5000 series CPUs.


----------



## kx11

mattxx88 said:


> I got my 3090 FE yesterday 😭


You're fine for about a year, until AMD releases DXR/DLSS/Ansel/ShadowPlay competition.


----------



## defiledge

Wihglah said:


> I doubt it.
> 
> at any rate, G-Sync, RTX and DLSS are way too important at the moment to trade them in.
> 
> 
> Also - why are they all testing with 2 sticks of 3200MHz single channel ram?
> 
> Who gets a 3090 and has 3200MHz ram???
> Also - I bet the ring bus was at stock or they used Ryzen 5000 series CPUs.


I have 3200MHz RAM, is that bad?


----------



## Wihglah

defiledge said:


> i have 3200Mhz ram, is that bad?


About a 20% FPS loss compared to dual-channel 4400MHz.

(on Intel systems)
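Wihglah's ~20% figure is about real-world FPS; the raw peak-bandwidth gap behind the claim is easy to sketch. The arithmetic below is just theoretical peak DDR4 bandwidth (transfers per second times channels times 8 bytes per channel); actual FPS scaling depends heavily on the game and CPU.

```python
# Back-of-the-envelope peak DDR4 bandwidth: single-channel 3200 vs dual-channel 4400.
# Illustrative only; real FPS impact varies per title.

def ddr4_bandwidth_gbs(mt_per_s: int, channels: int, bytes_per_channel: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s: transfers/s x channels x 8 bytes per channel."""
    return mt_per_s * 1e6 * channels * bytes_per_channel / 1e9

single_3200 = ddr4_bandwidth_gbs(3200, channels=1)   # DDR4-3200, one channel
dual_4400   = ddr4_bandwidth_gbs(4400, channels=2)   # DDR4-4400, two channels

print(f"Single-channel 3200: {single_3200:.1f} GB/s")
print(f"Dual-channel 4400:   {dual_4400:.1f} GB/s")
print(f"Ratio: {dual_4400 / single_3200:.2f}x")
```

The peak-bandwidth ratio is 2.75x, which is why a single-channel 3200 test bench understates what a tuned Intel system can do.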


----------



## defiledge

I have dual-channel 3200MHz CL16 with a 3900X. Does the RAM speed really make that much of a difference?


----------



## long2905

kx11 said:


> Guru3D's Far Cry 3 benchmark tells another story; they capped the fps to 120 for some reason.


isn't that a CPU bottleneck?


----------



## Wihglah

defiledge said:


> I have dual-channel 3200MHz CL16 with a 3900X. Does the RAM speed really make that much of a difference?


You will see improvements up until you match the RAM frequency with the fabric. I have no experience with Ryzen though, so I'm not sure how much.

You are already at that, right?


That's the point though - with a "ring" (i.e. Intel) you get scaling with memory speed all the way up (which is why using 3200MHz RAM is a nerf for Intel systems).

If you used dual-channel 4400MHz RAM and had the CPU cache ratio at 5GHz, you could add 20% to the Intel 1080p scores. (Well - not everywhere, but in a lot of places.)


----------



## mbm

Sleuth said:


> Higher power should mean higher OC no ?


not necessarily


----------



## mbm

kx11 said:


> Guru3D's Far Cry 3 benchmark tells another story; they capped the fps to 120 for some reason.


Benchmarking these cards at Full HD doesn't make sense either.


----------



## olrdtg

Wihglah said:


> I doubt it.
> 
> at any rate, G-Sync, RTX and DLSS are way too important at the moment to trade them in.
> 
> 
> Also - why are they all testing with 2 sticks of 3200MHz single channel ram?
> 
> Who gets a 3090 and has 3200MHz ram???
> Also - I bet the ring bus was at stock or they used Ryzen 5000 series CPUs.


I have 3200Mhz ram...  I actually have to run it at a lower speed too.


----------



## mirkendargen

Wihglah said:


> You will see improvements up until you match the RAM frequency with the Fabric. I have no experience with Ryzen though, so not sure how much.
> 
> You are already at that right?
> 
> 
> That's the point though - with a "ring" (i.e. Intel) you get scaling with memory speed all the way up. (which is why using 3200MHz ram is a nerf for Intel systems)
> 
> If you used Dual channel 4400MHz ram and had the CPU Cache ratio at 5ghz, you can add 20% to the intel 1080p scores. (well - not everywhere, but in a lot of places)


On 3000 series Ryzen the sweet spot is 3600 RAM / 1800 fabric; some chips can do 3800 RAM / 1900 fabric. AMD says the 5000 series should do 4000 RAM / 2000 fabric, but I haven't done any reading to see how far above that they actually run stably.
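Those RAM/fabric pairs follow from DDR4 being double data rate: for a 1:1 ratio, the Infinity Fabric (FCLK) target is simply half the RAM's MT/s rating. A quick sketch:

```python
# 1:1 Infinity Fabric (FCLK) targets for the DDR4 speeds mentioned above.
# DDR4 is double data rate, so a 1:1 FCLK is half the MT/s rating.

def fclk_for_1to1(ram_mts: int) -> int:
    """FCLK in MHz needed to run the fabric 1:1 with DDR4 rated at ram_mts MT/s."""
    return ram_mts // 2

for ram in (3200, 3600, 3800, 4000):
    print(f"DDR4-{ram} -> {fclk_for_1to1(ram)} MHz FCLK for 1:1")
```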


----------



## HyperMatrix

Wihglah said:


> I doubt it.
> 
> at any rate, G-Sync, RTX and DLSS are way too important at the moment to trade them in.
> 
> 
> Also - why are they all testing with 2 sticks of 3200MHz single channel ram?
> 
> Who gets a 3090 and has 3200MHz ram???
> Also - I bet the ring bus was at stock or they used Ryzen 5000 series CPUs.


You guys keep making me feel bad about running 2933MHz CL14 on my 6950x. 😢


----------



## jsarver

The reason people are running 3200 right now is that there's a bug where anything over that causes a ton of errors in Windows. MSI is looking into it. I have the exact issue: no crashes, just constant Windows Event Viewer errors.


----------



## HyperMatrix

Appears to be some serious memory bandwidth issues with the 6800 XT. Look at the difference going from 1440p to 4K.


6800 XT is 26.6% faster at 1080p
6800 XT is 12.5% faster at 1440p
3090 ends up 9% faster at 4K.

That's a huge difference: about a 23% relative performance loss versus the 3090 going from 1440p to 4K. And since there's no increase in Infinity Cache or memory bandwidth in the 6900 XT, these same bottlenecks will exist. With the 3090 you have 20.5% more cores compared to the 3080, but you also have 23% more memory bandwidth. With the 6900 XT you have 11% more cores and no additional memory bandwidth.
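The relative-scaling numbers quoted above can be reproduced directly from the review figures as given (a quick sanity check, not a benchmark):

```python
# Relative scaling of the 6800 XT vs the 3090 going from 1440p to 4K,
# using the review percentages quoted in the post above.

r_1440p = 1.125      # 6800 XT is 12.5% faster than the 3090 at 1440p
r_4k = 1 / 1.09      # 3090 is 9% faster at 4K, so the 6800 XT sits at ~0.917x

swing = r_1440p / r_4k   # how much ground the 6800 XT loses moving 1440p -> 4K
print(f"6800 XT vs 3090 at 1440p: {r_1440p:.3f}x")
print(f"6800 XT vs 3090 at 4K:    {r_4k:.3f}x")
print(f"Relative swing: {swing:.3f}x (~{(swing - 1) * 100:.0f}%)")
```

1.125 x 1.09 is about 1.23, which is where the ~23% relative loss figure comes from.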

But in that exact same scenario....when you enable SAM...it all changes again. Instead of the 3090 being 9% faster, you end up with the 6800 XT being 5% faster. That's an extra 12.5% performance uplift from SAM at 4K. Now it comes down to what Nvidia does with its PCIe BAR resize implementation. AMD gets 12.5% more performance uplift with a matched CPU/Motherboard. Could Nvidia's somewhat hardware agnostic method yield even an extra 10%? And what motherboards will support it? The AMD method apparently requires new BIOS versions. If there's an extra 10% in it....I may have to upgrade my system as I'm not sure my Asus X99 Deluxe/3.1 and 6950x will be compatible. Will be neat to wait and see.

Also for anyone who's interested, EVGA KingPin Hybrid Notify List starting 6am on the 20th. Based on competition from AMD and recent pricing of HydroCopper and Hybrid models, I wouldn't be surprised if the KPE came in at around $2000-2200.

I gotta say though. I am envious of that TSMC 7nm OC capability. Imagine being able to push 2.6-2.7GHz under water. We could have had an extra 25% performance just by not using this crap Samsung node. 


_Link to original article for good measure: AMD Radeon RX 6800 XT review_


----------



## ExDarkxH

AC is an anomaly; you shouldn't compare the two GPUs based on this game as it has all kinds of issues. Use a large multi-game average.


----------



## HyperMatrix

ExDarkxH said:


> AC is an anomaly. You can make your bandwidth analysis with it but you shouldn't compare the 2 gpus based on this game as it has all kind of issues


I know there are games that show 0 improvement with SAM on because it all depends on where the bottleneck happens to be. But if we're talking about a free performance uplift in some games (at least for Ampere users), that's still noteworthy. And God knows AC: Valhalla needs better performance.


----------



## ExDarkxH

HyperMatrix said:


> I know there are games that show 0 improvement with SAM on because it all depends on where the bottleneck happens to be. But if we're talking about a free performance uplift in some games (at least for Ampere users), that's still noteworthy. And God knows AC: Valhalla needs better performance.


It's an AMD sponsored title and NVidia cards have been getting horrible performance. There is a lot more to it than meets the eye


----------



## HyperMatrix

ExDarkxH said:


> It's an AMD sponsored title and NVidia cards have been getting horrible performance. There is a lot more to it than meets the eye


I'm tired of people talking bad about Ubisoft games in general. Ubisoft honestly has the best game engines around. They are beautifully optimized to deliver world class console level performance on PCs. No other game developer has worked as hard as Ubisoft to bring the full 4K30 console experience to the high end PC market.


----------



## derthballs

Loads seemed to be released in the UK today. My mate snagged an FE and I went for the GameRock OC, on Scan if anyone's looking.


----------



## dr/owned

My 3090 TUF was finally delivered (Amazon September 27 order), but now I don't have a waterblock for it (Bykski, from China). So a picture is coming shortly of my 1080 Ti still plumbed into the water loop, dangling in the case with a 3090 installed above it. I really just want to go balls deep and shunt it with 0 hours on the cooler, but _sigh_ I guess I'll collect some baseline numbers first.

+1 for soft tubing enabling this sort of f-ery.


----------



## HyperMatrix

dr/owned said:


> My 3090 TUF finally delivered (Amazon September 27 order) but now I don't have a waterblock for it (Bykski from China). So a picture is going to be coming shortly of my 1080Ti still plumbed into the waterloop dangling in the case with a 3090 installed above it. I really just want to go balls deep and shunt it with 0 hours on the cooler but _sigh_ I guess I'll collect some baseline numbers first.
> 
> +1 for soft tubing enabling this sort of f-ery.


A few years ago I went a little nuts with quick disconnect fittings. Did my entire rig with them. So I can literally take out any bit of tube from anywhere and with QDC on both ends, the liquid just stays in there. Haha. Little pricey but totally worth it. Looking forward to your TUF clocks. They've generally been really good from what I've seen.


----------



## ExDarkxH

has anyone purchased the mp5works?
I like the idea of it but the price is steep for what it is


----------



## vmanuelgm

Horizon Zero Dawn 4K:


Benchmark:







Gameplay:


----------



## Thanh Nguyen

ExDarkxH said:


> has anyone purchased the mp5works?
> I like the idea of it but the price is steep for what it is


I got it. Not sure it's worth it, but I'm able to do +1100 on memory with the MP5; can't do it without.


----------



## HyperMatrix

ExDarkxH said:


> has anyone purchased the mp5works?
> I like the idea of it but the price is steep for what it is


I'm confused about how it works. Does it come with its own radiator/pump? With such tiny tubing it can't be run off your standard loop without adapters and even then it'd be massively constricting flow, no? I couldn't see where it was connected to in that video.


----------



## Wihglah

HyperMatrix said:


> A few years ago I went a little nuts with quick disconnect fittings. Did my entire rig with them. So I can literally take out any bit of tube from anywhere and with QDC on both ends, the liquid just stays in there. Haha. Little pricey but totally worth it. Looking forward to your TUF clocks. They've generally been really good from what I've seen.


I had some at one point, but they reduced my flow rate too much for me.


----------



## Wihglah

ExDarkxH said:


> has anyone purchased the mp5works?
> I like the idea of it but the price is steep for what it is


It's on my list - just figuring out my loop.


double post... FTL.


----------



## dr/owned

So some probably pretty basic Port Royal scores on my 3090 TUF:

Bone stock fan curve 100% power: 12351 https://www.3dmark.com/pr/513545
100% fan: 12448 https://www.3dmark.com/pr/513553
100% fan, 107% power: 12694 https://www.3dmark.com/pr/513566

Do the clocks with +0 overclock really say anything anyways?


----------



## HyperMatrix

dr/owned said:


> So some probably pretty basic Port Royal scores on my 3090 TUF:
> 
> Bone stock fan curve 100% power: 12351 https://www.3dmark.com/pr/513545
> 100% fan: 12448 https://www.3dmark.com/pr/513553
> 100% fan, 107% power: 12694 https://www.3dmark.com/pr/513566
> 
> Do the clocks with +0 overclock really say anything anyways?


These results are definitely lower than I was expecting. Your average clocks are dropping a lot even though your temps are good. Try using a curve to set a locked voltage/clock so you don't hit the power limit as hard. I think you should be able to manage 13500 at least.

Basically try testing to see the highest clocks you can keep at different voltages. So max clock speed at 0.850v, 0.875v, 0.900v, etc etc. That'll give you an idea of how good your actual chip is and what you can expect with shunt/water.


----------



## Falkentyne

Foxrun said:


> Ekwb has a really nice one either in December or January. Their 3080 FE block is being released in January if you wanted to check out the design; it’s gorgeous.


You mentioned a failed shunt mod right?

Can you please take a screenshot of gpu_z and have all of the "wattage" sensor windows open for me, at maximum load, when you're hitting the power limit?
Please set the sensor to show "max" and not "current".

Thank you very much.


----------



## OC2000

HyperMatrix said:


> I'm confused about how it works. Does it come with its own radiator/pump? With such tiny tubing it can't be run off your standard loop without adapters and even then it'd be massively constricting flow, no? I couldn't see where it was connected to in that video.


I have the MP5works, but only just got my Strix block today. Still waiting on some thermal pads though.

you run it in parallel. I’ll be connecting Mine to a second rad port and a t junction on the GPU out.


----------



## OC2000

HyperMatrix said:


> These results are definitely lower than I was expecting. Your average clocks are dropping a lot even though your temps are good. Try using a curve to set a locked voltage/clock so you don't hit the power limit as hard. I think you should be able to manage 13500 at least.
> 
> Basically try testing to see the highest clocks you can keep at different voltages. So max clock speed at 0.850v, 0.875v, 0.900v, etc etc. That'll give you an idea of how good your actual chip is and what you can expect with shunt/water.


My Strix on the stock cooler will run 0.850 at 1905MHz stable. Not sure how that will translate to a shunt mod though. Any idea?


----------



## dr/owned

HyperMatrix said:


> These results are definitely lower than I was expecting. Your average clocks are dropping a lot even though your temps are good. Try using a curve to set a locked voltage/clock so you don't hit the power limit as hard. I think you should be able to manage 13500 at least.
> 
> Basically try testing to see the highest clocks you can keep at different voltages. So max clock speed at 0.850v, 0.875v, 0.900v, etc etc. That'll give you an idea of how good your actual chip is and what you can expect with shunt/water.


13400 was the best I could do locking the voltage to 1.0V (ctrl+L on that point in the curve) : https://www.3dmark.com/pr/513760
+220 on the core.

It's still hitting power limits but less hard.


----------



## HyperMatrix

OC2000 said:


> my Strix on stock cooler will run 0.850 at 1905 MHz stable.Not sure how that will translate to a shunt mod though. Any idea?


The method I use for testing is this:


Load up a game (I use Crysis 3 for all stress testing).
Set FPS cap in RTSS so your GPU usage is around 40%-45% so it boosts properly.
Remember that FPS cap number.
Set cap down to 1 FPS and allow your card to cool down.
Set a new clock speed in afterburner.
Change FPS cap back to that number that gave you 40-45% usage.
See what your card boosts up to, and see how it drops down as the heat goes up.
If it didn't crash, increase your GPU clock offset in afterburner by +15MHz and repeat.

This will theoretically give you an idea of what clocks you'll be able to maintain when shunted and watercooled.
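The loop above could be scripted roughly as below. Note that `set_fps_cap`, `set_clock_offset`, and `run_and_check_stable` are hypothetical placeholders; RTSS and Afterburner expose no such Python API, so this is just the shape of the binning procedure, not a working tool.

```python
# Sketch of the V/F binning loop described above. The three callables are
# hypothetical stand-ins for whatever RTSS/Afterburner automation you have.

def find_max_stable_offset(baseline_cap_fps: int,
                           set_fps_cap,          # hypothetical: set the RTSS FPS cap
                           set_clock_offset,     # hypothetical: set the MHz offset in Afterburner
                           run_and_check_stable, # hypothetical: run the game, True if no crash
                           step_mhz: int = 15) -> int:
    """Raise the core offset in +15MHz steps until instability, per the method above."""
    offset = 0
    while True:
        set_fps_cap(1)                       # drop to 1 FPS so the card cools down
        set_clock_offset(offset + step_mhz)  # apply the next candidate offset
        set_fps_cap(baseline_cap_fps)        # back to the ~40-45% GPU-usage cap
        if not run_and_check_stable():
            return offset                    # last offset that survived
        offset += step_mhz
```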


----------



## Emmanuel

dr/owned said:


> So some probably pretty basic Port Royal scores on my 3090 TUF:
> 
> Bone stock fan curve 100% power: 12351 https://www.3dmark.com/pr/513545
> 100% fan: 12448 https://www.3dmark.com/pr/513553
> 100% fan, 107% power: 12694 https://www.3dmark.com/pr/513566
> 
> Do the clocks with +0 overclock really say anything anyways?


I have an almost identical setup as you and I'm scoring a lot higher despite running a lot hotter with constant fluctuating clocks. No overclocking, just 107% power limit and 100% fans.


----------



## kx11

long2905 said:


> isnt that CPU bottleneck?


No, they capped the games to 120fps


----------



## Lobstar

Emmanuel said:


> I have an almost identical setup as you and I'm scoring a lot higher despite running a lot hotter with constant fluctuating clocks. No overclocking, just 107% power limit and 100% fans.


Huh, my stock card, zero power limit increase and no fan changes compared to yours.


----------



## kx11

vmanuelgm said:


> Horizon Zero Dawn 4K:
> 
> 
> Benchmark:
> 
> 
> 
> 
> 
> 
> 
> Gameplay:


i'm getting a bit better performance in the benchmark


----------



## dr/owned

Emmanuel said:


> I have an almost identical setup as you and I'm scoring a lot higher despite running a lot hotter with constant fluctuating clocks. No overclocking, just 107% power limit and 100% fans.


Thank you. Because your setup is so identical to mine, it set me off... looks like my slot is running x8 instead of x16. I'm not sure I can do anything about it because I legit have a lot of PCIe cards (10GbE, sound card, M.2, etc.), but I will at least temporarily pull some cards out to try x16 and see if that's the 700-point difference.


----------



## mirkendargen

dr/owned said:


> Thank you. Because your setup is so identical to mine it set me off...looks like my slot running x8 instead of x16. I'm not sure I can do anything about it because I legit have a lot of PCIe cards (10GbE, soundcard, M.2, etc.) but I will at least temporarily pull some cards out to try x16 and see if that's the 700 point difference.


For what it's worth I tested my card at 16x PCIE4 and 16x PCIE3 (I was testing if my vertical mount ribbon cable was causing any issues at PCIE4) and there was literally 0 difference. Maybe 8x PCIE3 is low enough to make a 5% difference.


----------



## Emmanuel

I can't wait for FTW3 blocks to come out because even without hitting the power limiter in games, I'm still losing 45MHz on the core after temps settle around 80 degrees.
Although I've tried other combinations that gave me better scores in 3DMark, here's what gives me the highest sustained performance in games: 2100MHz at 1.00v

This then turns into 2085MHz, 2070MHz and eventually 2055MHz as temps increase. I'll have to run more stability tests but so far this was stable for one full uninterrupted hour (no death, no menus, no alt-tab) in BF5. The card was pulling on average 390W at 4K, everything maxed out. As far as gaming is concerned, the power limiter on the stock BIOS does not seem to get in the way. Also, in my opinion the stock cooler at 100% is pretty inadequate for continuous 420W+, and 3DMark keeps bouncing off the 450W limit as temperatures rapidly enter the 80s range.

I was pretty concerned at first when reading people complaining about not being able to pull 500W from their FTW3, but I don't know why anyone would want that without first upgrading the cooling in a major way. Otherwise you're just trading power limiter downclocking for temperature downclocking.


----------



## mbm

HyperMatrix said:


> Set FPS cap in RTSS so your GPU usage is around 40%-45% so it boosts properly.


My card boosts to different frequencies depending on load and temperature.
It will boost up to 2150-2200MHz (70C) in certain games like COD, but only to 1935MHz (70C) in BF5.
This is because BF5 is a lot more demanding on my hardware than COD, but it's still the same temp.

So are you saying I could run BF5 at 2150-2200MHz if I lowered the temps to 50-60C?


----------



## dhruvky94

Got my RTX 3090 FE last weekend and finally got to test it the day before yesterday. The card is underperforming, reaching only a 19,000 score in Time Spy in maximum performance mode. The average clocks are around 1750 MHz, with the max peak being 1800 MHz for a second. I have the card plugged into a Gen 4 slot with two separate PCIe power cables, and I'm running the latest NVIDIA drivers... Anyone have any input on what is going wrong here? The card never exceeds 69C.


----------



## dr/owned

mirkendargen said:


> For what it's worth I tested my card at 16x PCIE4 and 16x PCIE3 (I was testing if my vertical mount ribbon cable was causing any issues at PCIE4) and there was literally 0 difference. Maybe 8x PCIE3 is low enough to make a 5% difference.


Yeah, it was only worth about 100-150 points in Port Royal... basically nothing.

Looking through the Results section for other people, it seems like scores have come down with driver updates... even "last version" drivers a month ago were +500 points higher than mine, with a slower CPU and a similar GPU core clock average.

My temperatures seem decent... I think it's just getting absolutely whacked by the power limit and can't go past about 0.90V.


----------



## vmanuelgm

kx11 said:


> i'm getting a bit better performance in the benchmark








Nice result!!!

But I had the motion blur enabled!!!


----------



## vinogrados

Hi, anyone know how much the Palit GameRock differs from reference? Will the first EKWB variant fit it?


----------



## HyperMatrix

mbm said:


> My card boost to different frequenzies depending on load and temperature.
> My card will boost up to 2150-2200 mhz (70C) in certain games like COD but only to 1935 mhz (70C) in BF5.
> This is because BF5 is alot more demanding on my hardware than COD. But still same temp.
> 
> So do you say I can run BF5 at 2150-2200 mhz if I could lower the temps. to 50-60C?


Not just lower temps but also shunting to unlock the power limit. You'd need to keep temperatures under 50C with Ampere, and then you should be able to maintain between 2150 and 2200.


----------



## olrdtg

dhruvky94 said:


> Got my RTX 3090 FE last weekend and finally got to test it day before yesterday. The card is underperforming with it reaching only 19,000 score in TimeSpy with maximum performance mode. The average clocks are around 1750 MHZ with max peak being 1800 MHZ for a second. I have the card plugged in to a gen 4 slot with two separate PCIE power cables. I am running the latest NVIDIA drivers.... Anyone has any inputs to what is going wrong here? The card never exceeds 69C temp.


Perhaps something else running in the background could be affecting your score a little bit. Those look like stock clocks though; have you done any overclocking yet?
Run GPU-Z, go to the Sensors tab and turn on 'Log to file', then run a benchmark and check the PerfCap Reason; it should tell you why it's not boosting higher in benchmarks. Play with some overclock settings in Afterburner or similar and see if you get higher clocks then. Eventually you are going to run into the power limit though.

If you do want to raise the power limit, one way to handle it would be to shunt mod the card (check my FE shunt mod post and bmgjet's 'Easy mode shunt modding' thread) -- only do this if you have complete confidence that you can do so without breaking the GPU.


----------



## long2905

vinogrados said:


> hihi. any one know how much palit gamerock diff from ref? will ekwb first variant suits it?


Pretty sure the GameRock is a custom PCB, so you'd need a custom waterblock for it. The GamingPro ones are reference PCB.


----------



## dhruvky94

olrdtg said:


> Perhaps something else running in the background could be affecting your score a little bit. Those look like stock clocks though; have you done any overclocking yet?
> Run GPU-Z, go to the Sensors tab and turn on 'Log to file', then run a benchmark and check the PerfCap Reason; it should tell you why it's not boosting higher in benchmarks. Play with some overclock settings in Afterburner or similar and see if you get higher clocks then. Eventually you are going to run into the power limit though.
> 
> If you do want to raise the power limit, one way to handle it would be to shunt mod the card (check my FE shunt mod post and bmgjet's 'Easy mode shunt modding' thread) -- only do this if you have complete confidence that you can do so without breaking the GPU.


Thanks for the reply!
No, I haven't done any overclocking yet. I wanted to see how the GPU performs at stock first. I'm not sure these are stock clocks though, since the GeForce RTX 3090 Founders review shows a 20K Time Spy benchmark; mine is just 19K.
I did check the MSI AB throttling reason and it came up as power. In Time Spy, my average clock speed is 1750 and power is at 100%.

I will probably do a shunt mod in a few months... I want to run my card with a slight overclock for now.


----------



## Sync0r

dhruvky94 said:


> Got my RTX 3090 FE last weekend and finally got to test it day before yesterday. The card is underperforming with it reaching only 19,000 score in TimeSpy with maximum performance mode. The average clocks are around 1750 MHZ with max peak being 1800 MHZ for a second. I have the card plugged in to a gen 4 slot with two separate PCIE power cables. I am running the latest NVIDIA drivers.... Anyone has any inputs to what is going wrong here? The card never exceeds 69C temp.


Got a little link to your timespy score? Can take a look at your hardware.


----------



## dhruvky94

Sync0r said:


> Got a little link to your timespy score? Can take a look at your hardware.


Yup, here you go: https://www.3dmark.com/spy/15380372


----------



## Sync0r

dhruvky94 said:


> Yup, here you go: https://www.3dmark.com/spy/15380372


Your average core clock on the GPU is very low, even for stock. You just need to increase the power limit, cooling, and overclock to bring it up.

I don't know a lot about the 5900X, but with Intel the CPU score can suffer from slow memory; it might be the same with yours. A 4000MHz kit with tight timings will probably help with the Infinity Fabric.

For comparison (I'm water cooled and shunted 600w power limit):
https://www.3dmark.com/compare/spy/15380372/spy/15333212#


----------



## dhruvky94

Sync0r said:


> Your average core clock on the GPU is very low, even for stock. Just need to increase power limit, cooling and overclock to bring it up.
> 
> I don't know a lot about the 5900x but with Intel the CPU score can suffer from slow memory, might be the same with yours, 4000mhz kit with tight timings will probably help with infinity fabric.
> 
> For comparison (I'm water cooled and shunted 600w power limit):
> https://www.3dmark.com/compare/spy/15380372/spy/15333212#


Yes, the average core clock is low... the issue is I'm not sure why... Is the card defective? I can probably overclock it enough to bring it up to stock, but I'm not sure if I should, given that it's a new card. I can try to get a replacement if the issue is with the card and not my settings. I haven't changed the power limit either, btw.


----------



## bmgjet

I wouldn't call that low; that's more average than anything.
Stock, my card does a 17800 graphics score at 350W in Time Spy. I had to flash a 390W BIOS to get to 19K,
then had to shunt mod to get to 21K, and it takes 520W for that.
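Those three data points make the diminishing returns explicit; perf-per-watt drops steadily as the power limit goes up:

```python
# Perf-per-watt from the three Time Spy graphics scores quoted above.
results = {350: 17800, 390: 19000, 520: 21000}  # power limit (W) -> graphics score

efficiency = {watts: score / watts for watts, score in results.items()}
for watts, pts_per_w in efficiency.items():
    print(f"{watts}W -> {results[watts]} pts ({pts_per_w:.1f} pts/W)")
```

Roughly 51 points/W at stock falls to about 40 points/W at 520W: the last ~10% of score costs ~33% more power.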


----------



## dhruvky94

bmgjet said:


> I wouldnt call that low. Thats more average then anything.
> Stock my card does 17800 graphics score on 350W in time spy. Had to flash 390W bios to get to 19K.
> Then had to shunt mod to get to 21K and it takes 520W for that.


My graphics score is falling below average in Time Spy. Is 350W what the card uses at 100%, or at full power (default BIOS)?


----------



## bmgjet

350W is the default power limit at 100% on the FE and on my card.
Max the power slider out and yours will go to 400W.


----------



## dhruvky94

olrdtg said:


> Perhaps something else running in the background could be affecting your score a little bit. Those look like stock clocks though; have you done any overclocking yet?
> Run GPU-Z, go to the Sensors tab and turn on 'Log to file', then run a benchmark and check the PerfCap Reason; it should tell you why it's not boosting higher in benchmarks. Play with some overclock settings in Afterburner or similar and see if you get higher clocks then. Eventually you are going to run into the power limit though.
> 
> If you do want to raise the power limit, one way to handle it would be to shunt mod the card (check my FE shunt mod post and bmgjet's 'Easy mode shunt modding' thread) -- only do this if you have complete confidence that you can do so without breaking the GPU.


I re-ran benchmarks using GPUz and the PerfCap reason was Power.


----------



## defiledge

Does anybody know how the "washer mod" works to improve gpu temps? I know it increases the mounting pressure of the heatsink but why? If you are adding washers/extra height to the bracket, wouldn't that actually decrease the mounting pressure?


----------



## long2905

defiledge said:


> Does anybody know how the "washer mod" works to improve gpu temps? I know it increases the mounting pressure of the heatsink but why? If you are adding washers/extra height to the bracket, wouldn't that actually decrease the mounting pressure?


Better contact and heat spread for certain cards. You may risk cracking the silicon though.


----------



## defiledge

long2905 said:


> better contact and heat spread for certain cards. you may risk breaking the silicon though


But how does it physically work? If the mounting bracket is on washers, wouldn't that decrease the pressure?


----------



## MoltenMoose

defiledge said:


> But how does it physically work. If the mounting bracket is on washers, wouldn't that decrease the pressure.


The washers cause the springs to be compressed more, which in turn causes more pressure on the block.


----------



## dhruvky94

Sync0r said:


> Your average core clock on the GPU is very low, even for stock. Just need to increase power limit, cooling and overclock to bring it up.
> 
> I don't know a lot about the 5900x but with Intel the CPU score can suffer from slow memory, might be the same with yours, 4000mhz kit with tight timings will probably help with infinity fabric.
> 
> For comparison (I'm water cooled and shunted 600w power limit):
> https://www.3dmark.com/compare/spy/15380372/spy/15333212#


I have seen few folks complain about the latest driver performance... could it also be because of that?


----------



## motivman

Reference cards are absolute beasts when shunted and on water. Finally past 15k on PR, and #38 on the hall of fame for PR, yay! PNY reference card on water (EKWB) with 5x 5mΩ shunts and a 10mΩ shunt on the PCIe slot, flashed with the Strix BIOS (a must, IMHO). The stock BIOS and even the 390W Gigabyte BIOS hold the card back performance-wise; the Strix BIOS lets these cards fly. But I am a little worried about power draw through the 8-pin PCIe cables. What is the maximum safe power draw for a PCIe cable? My card is drawing almost 300W from just one cable, lol. Based on readings from GPU-Z, my card peaks at 573W (ignore the value of the 3rd pin, since it's a 2-connector card).



http://www.3dmark.com/pr/515206
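
For context on what stacking shunts does: the card senses current as the voltage drop across a shunt resistor of known value, so soldering an equal-value resistor on top halves the effective resistance and the card underreports power by roughly 2x. A rough sketch of the math, using a 5mΩ stock shunt and a 390W BIOS limit as illustrative assumptions (actual values vary by board and by which rails are modded):

```python
# Sketch: why stacking a shunt raises the effective power limit.
# Assumed values: 5 mOhm stock shunt, equal 5 mOhm resistor on top.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

stock = 0.005                       # ohms (assumed stock shunt)
stacked = parallel(stock, 0.005)    # 2.5 mOhm effective

# The controller computes current as V_shunt / R_stock, but the true
# current is V_shunt / R_stacked, so real power is underreported:
scale = stock / stacked             # 2.0
bios_limit_w = 390                  # assumed BIOS power limit
real_limit_w = bios_limit_w * scale
print(real_limit_w)                 # 780.0
```

Note this only scales the rails you actually mod, which is why posts here talk about shunting the PCIe slot sense resistor and the chip-power shunts separately.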


----------



## Falkentyne

defiledge said:


> Does anybody know how the "washer mod" works to improve gpu temps? I know it increases the mounting pressure of the heatsink but why? If you are adding washers/extra height to the bracket, wouldn't that actually decrease the mounting pressure?


The X-bracket is basically a "reverse spring." So when you apply a washer mod, you're basically increasing the distance of the edges relative to the center square of the X-bracket, creating more pressure in the center, because the heatsink is on the opposite side. It's the same as if you were able to permanently "flex" the X-bracket edges outward a few mm.


----------



## Sync0r

motivman said:


> Reference cards are absolute beasts when shunted and on water. Finally past 15k on PR. #38 on the hall of fame for PR, yay! PNY reference card on water (EkWB) with 5X 5ohm shunts and 10mohm shunt on the PCIE slot, flashed with Strix bios ( a must IMHO). The stock bios and even the 390W gigabyte bios holds the card back performance wise. Strix bios lets these cards fly.. but I am a little worried about power usage via 8 pin pcie cable. What is the maximum safe power draw for the pcie cable? My card is drawing almost 300W just from one cable, lol. Based on readings from GPU-Z, my card is using 573W peak (ignore value of 3rd pin, since its a 2 pin card)
> 
> 
> 
> http://www.3dmark.com/pr/515206
> 
> 
> 
> 
> View attachment 2465909


Flash the EVGA 500W BIOS, you will get a 600W power limit.


----------



## Edge0fsanity

Emmanuel said:


> I can't wait for FTW3 blocks to come out because even without hitting the power limiter in games, I'm still losing 45MHz on the core after temps settle around 80 degrees.
> Although I've tried other combinations that gave me better scores in 3DMark, here's what gives me the highest sustained performance in games: 2100MHz at 1.00v
> 
> This then turns into 2085MHz, 2070MHz and eventually 2055MHz as temps increase. I'll have to run more stability tests but so far this was stable for one full uninterrupted hour (no death, no menus, no alt-tab) in BF5. The card was pulling on average 390W at 4K, everything maxed out. As far as gaming is concerned, the power limiter on the stock BIOS does not seem to get in the way. Also, in my opinion the stock cooler at 100% is pretty inadequate for continuous 420W+, and 3DMark keeps bouncing off the 450W limit as temperatures rapidly enter the 80s range.
> 
> I was pretty concerned at first when reading people complaining about not being able to pull 500W from their FTW3, but I don't know why anyone would want that without first upgrading the cooling in a major way. Otherwise you're just trading power limiter downclocking for temperature downclocking.


You have cooling issues if you're seeing 80C+ on the 450w bios with a ftw3. I'm running the xc3 bios for 500w+ and do not get above 72-74C looping timespy extreme for an hour. Games like Control @ 4k with DLSS and RTX on run 68-70C once fully heatsoaked. All of this in a case with the panels on and normal ambient temps.


----------



## EconomyFishFinger

Hi everyone,

Wondering if anyone can offer some help? I'm nearly ready to batter this bloody 3090 X Trio.

I've tried a number of BIOSes, from stock to the ASUS 480W to the EVGA 500W (back on stock now).

My issue is voltage. I try to lock the voltage, or hell, even just set it to 100%, and it may blip to 1.08v, but as soon as any load is applied it drops to 800-900mv. Nothing I do and no combination appears to have any effect.

With no load applied it will sit at 1.087v when I lock it using MSI Afterburner. But as soon as I load a benchmark or game it immediately drops to 800-900mv and the boost is terrible (~1700MHz).

I've run out of ideas now. Can anyone offer any advice?

Regards


----------



## HyperMatrix

EconomyFishFinger said:


> Hi everyone,
> 
> wondering if anyone can offer some help? Im nearly ready to batter this bloody 3090 x trio.
> 
> Ive tried a number of bios's from stock to asus 480w to evga 500w (back on stock now)
> 
> My issue is voltage. I try to lock the voltage, or hell, even just set it to 100% and it may blip to 1.08v, but soon as any load is applied it drops to 800-900mv. Nothing I do and no combination appears to have any effect.
> 
> When no load is applied it will sit at 1.087v when I lock it using MSI afterburner. But soon as I load a benchmark or game it immediately drops to 800-900mv and the boost is terrible (~1700mhz)
> 
> I've run out of ideas now. Please can anyone offer any advice?
> 
> Regards


Check power usage in GPU-Z. This usually happens when you have a heat and power limit issue and it’s clocking down to stay within the power limit. You can try setting a certain clock to a set voltage. For example 0.85-0.90v at 1905MHz or 0.95v at 1995MHz or whatever your specific chip is able to do. That’ll keep heat and power usage lower and enable you to maintain higher clocks almost indefinitely.
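
The power saving from locking a lower V/F point follows from dynamic power scaling roughly with frequency times voltage squared. A quick illustration with made-up but plausible numbers (not measured from any specific card):

```python
# Sketch: rough dynamic-power comparison of two V/F points.
# Uses the approximation P ~ f * V^2 and ignores leakage; the
# clock/voltage pairs below are illustrative, not measured.
def rel_power(freq_mhz, volts):
    return freq_mhz * volts ** 2

stock  = rel_power(2100, 1.081)   # chasing max boost voltage
locked = rel_power(1995, 0.950)   # fixed lower V/F point

savings = 1 - locked / stock
print(f"~{savings:.0%} less dynamic power")
```

Giving up ~5% clock but ~12% voltage cuts power by roughly a quarter in this approximation, which is why a locked point can hold its clock "almost indefinitely" while the stock boost curve bounces off the power limit.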


----------



## EconomyFishFinger

HyperMatrix said:


> Check power usage in GPU-Z. This usually happens when you have a heat and power limit issue and it’s clocking down to stay within the power limit. You can try setting a certain clock to a set voltage. For example 0.85-0.90v at 1905MHz or 0.95v at 1995MHz or whatever your specific chip is able to do. That’ll keep heat and power usage lower and enable you to maintain higher clocks almost indefinitely.


Thanks for replying. My card hits 71C, but I'm confused as my 2080 Ti never behaved this way. The voltage drop is instant as soon as a load is applied, even when the temperature is under 70C.

I'll have another go.


----------



## Mat_UK

Cavokk said:


> Sure - its all on the first page in this thread and now the 500w bios is directly available for flashing so you dont need to start with the original bios and use the .exe afterwards like I did  Bios is here: EVGA RTX 3090 VBIOS
> 
> Key thing is to use DDU after flashing the new bios and the reinstall the nVidia drivers like normal.
> 
> C



Awesome, thanks. So I got the new bios file but what version of nvflash did you use?

I downloaded this... NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download

but when I run with this version I am getting 'ERROR : No NVIDIA display adapters found'

I can happily run the bios backup command with the standard nvflash but the ID mismatch patched version can't see my GPU for some reason. Any ideas?

Thanks


----------



## HyperMatrix

EconomyFishFinger said:


> Thanks for replying. My card hits 71C, but im confused as my 2080 ti never behaved this way. The downvoltage is instant though as soon as a load is applied, even when the temperature is under 70C.
> 
> I'll have another go


Ampere is very sensitive to temperature. As temps go up, it ups the voltage to maintain clocks. When it does that, it quickly hits the power limit and has to throttle everything down. You can remove all the guesswork by checking power usage in GPU-Z or by enabling it in your Afterburner OSD so you can see your exact power usage before it clocks down. You can also try testing your clock speeds in games that use less power, like Assassin's Creed Origins.


----------



## HyperMatrix

Mat_UK said:


> Awesome, thanks. So I got the new bios file but what version of nvflash did you use?
> 
> I downloaded this... NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download
> 
> but when I run with this version I am getting 'ERROR : No NVIDIA display adapters found'
> 
> 
> View attachment 2465912
> 
> 
> 
> I can happily run the bios backup command with the standard nvflash but the ID mismatch patched version can't see my GPU for some reason. Any ideas?
> 
> Thanks


Why are you using version 5.590.0 instead of 5.667.0 that's in the first post of this thread? You can already bypass the board id checks in the latest version when the prompt comes up. The one you're trying to use is from January. My apologies if I'm missing the reason you're trying to use the old Turing version.


----------



## Mat_UK

HyperMatrix said:


> Why are you using version 5.590.0 instead of 5.667.0 that's in the first post of this thread? You can already bypass the board id checks in the latest version when the prompt comes up. The one you're trying to use is from January. My apologies if I'm missing the reason you're trying to use the old Turing version.



Ok thanks, this is probably my mistake as I am following 2 different guides with slightly conflicting instructions. I will go back and double check the instructions on this thread. RTFM for me


----------



## EconomyFishFinger

HyperMatrix said:


> Ampere is very sensitive to temperature. As temps go up, it ups the voltage to maintain clocks. When it does that, it quickly hits the power limit and has to throttle everything down. You can remove all the guesswork by checking power usage in GPU-Z or by enabling it in your Afterburner OSD so you can see your exact power usage before it clocks down. You can also try testing your clock speeds in games that use less power, like assassin's creed origins.


Hi again, any idea why I would be running into thermal caps when the temperature is only 68C and the temp limit is set to 91C? I'm barely able to keep stock clocks like this. Have you experienced this issue before?

Regards


----------



## pat182

Godfall is enabling RT on AMD cards only. LOL, wow, what a letdown. I did want to try it, even if it's a garbage game.


----------



## Emmanuel

Edge0fsanity said:


> You have cooling issues if you're seeing 80C+ on the 450w bios with a ftw3. I'm running the xc3 bios for 500w+ and do not get above 72-74C looping timespy extreme for an hour. Games like Control @ 4k with DLSS and RTX on run 68-70C once fully heatsoaked. All of this in a case with the panels on and normal ambient temps.


Perhaps, and that's because my case setup is really not intended for air cooling; I don't have great airflow. This is only temporary anyway. Once waterblocks are out this all goes out the window, so I won't dwell on it too much.


----------



## HyperMatrix

EconomyFishFinger said:


> Hi again, any idea why I would be running into thermal caps when the temperature is only 68C and the temp limit is set at 91C? im barely able to keep stock clocks like this. Have you experienced this issue at all before?
> 
> regards


You're not hitting a thermal cap, you're hitting a power cap. As temperature goes up, you get more leakage, and you need higher voltage to maintain the same clock speed. Higher voltage means higher power usage. That leads to even more heat, and even more voltage required. Then you hit your power limit and start throttling.
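
That feedback loop can be sketched as a toy simulation; every coefficient here is invented purely to illustrate the mechanism (leakage needing more voltage, power rising with V², the limiter shaving clocks) and is not measured from any card:

```python
# Toy model of the throttle loop described above: hotter silicon needs
# more voltage for the same clock, which raises power, which forces a
# downclock at the power limit. All coefficients are made up.
def needed_voltage(temp_c, base_v=1.0):
    return base_v + 0.003 * max(0, temp_c - 40)   # leakage proxy

def power_w(volts, freq_mhz):
    return 0.18 * freq_mhz * volts ** 2           # P ~ f*V^2, scaled

temp, freq, limit = 45.0, 2000, 390
for _ in range(20):                # 20 "ticks" of heat soak
    v = needed_voltage(temp)
    p = power_w(v, freq)
    if p > limit:
        freq -= 15                 # limiter shaves one boost bin
    temp += 1.0                    # card keeps warming up

print(freq)                        # ends below the starting 2000 MHz
```

Note the GPU never reports a "thermal" cap here: the temperature stays far below the 91C slider, yet the clock still falls, because the extra voltage the heat demands is what trips the *power* limit.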


----------



## dhruvky94

dhruvky94 said:


> Got my RTX 3090 FE last weekend and finally got to test it day before yesterday. The card is underperforming with it reaching only 19,000 score in TimeSpy with maximum performance mode. The average clocks are around 1750 MHZ with max peak being 1800 MHZ for a second. I have the card plugged in to a gen 4 slot with two separate PCIE power cables. I am running the latest NVIDIA drivers.... Anyone has any inputs to what is going wrong here? The card never exceeds 69C temp.


One additional observation: I used Game Mode on my Ryzen 5900X and the score immediately improved by 200 pts in the graphics test but dropped for the CPU part. Could my CPU be holding back the graphics score of my GPU?


----------



## plonkman

defiledge said:


> Does anybody know how the "washer mod" works to improve gpu temps? I know it increases the mounting pressure of the heatsink but why? If you are adding washers/extra height to the bracket, wouldn't that actually decrease the mounting pressure?


Hiya. If you're that desperate to reduce temps, liquid metal it (you're talking about putting the silicon under more stress anyway, and IMO LM is the safer option). As long as you protect the caps around the die, you'll see a huge drop in temps.


----------



## EconomyFishFinger

HyperMatrix said:


> You're not hitting a thermal cap. You're hitting a power cap. As temperature goes up, you have more leakage, and you need higher voltage to maintain the same clock speed. Higher voltage means higher power usage. That leads to even more heat, and even more voltage required. And you hit your power limit, and start throttling.


Hmm, I've looked at GPU-Z and the power limit is not being hit; however, I'm getting a PerfCap "Thrm" message.


----------



## Falkentyne

EconomyFishFinger said:


> Hi again, any idea why I would be running into thermal caps when the temperature is only 68C and the temp limit is set at 91C? im barely able to keep stock clocks like this. Have you experienced this issue at all before?
> 
> regards


Wait, what card do you have? Is it modded? What is the cooling solution?


----------



## Falkentyne

HyperMatrix said:


> You're not hitting a thermal cap. You're hitting a power cap. As temperature goes up, you have more leakage, and you need higher voltage to maintain the same clock speed. Higher voltage means higher power usage. That leads to even more heat, and even more voltage required. And you hit your power limit, and start throttling.


He should not be hitting thermal. There are temp sensors which are not exposed to the user; he may have a hotspot problem, or the VRAM may be overheating. I need to know which card he has.


----------



## EconomyFishFinger

Falkentyne said:


> He should not be hitting thermal. There are temp sensors which are not exposed to the user. He may have a hotspot problem or Vram may be overheating. I need to know which card he has.


Hi there,

I have an MSI Gaming X Trio. I have the original BIOS running on the card, and I also repasted the GPU with Hydronaut thermal compound, applied correctly. I just don't get what's going on with this card. If it was hitting power limits that would be fair enough, but the BIOS is 380W and it's pulling nowhere near that. I appreciate the help and advice.


----------



## plonkman

Falkentyne said:


> He should not be hitting thermal. There are temp sensors which are not exposed to the user. He may have a hotspot problem or Vram may be overheating. I need to know which card he has.





EconomyFishFinger said:


> hmm, i've looked at GPUz and the power limit is not being hit, however i'm getting a percap "thrm" message.
> View attachment 2465932


Seems weird that the 3 power connectors are drawing different power. Is this normal? My (admittedly 2-plug) card always balances the power.


----------



## EconomyFishFinger

plonkman said:


> seems weird that the 3 power conx are drawing different powers.. is this normal? my (addmittedly 2 plug) card always balances the power


Just for the record, I have a 1.5kW PSU (1.6kW peak), in case folks thought the power supply was freaking out.


----------



## Falkentyne

EconomyFishFinger said:


> Hi there,
> 
> I have an MSI Gaming X Trio. I have the original bios running on the card, and I also repasted the gpu with Hydronaut thermal compound. I've applied the thermal paste correctly. I just dont get whats going on with this card. If it was hitting power limits that would be fair enough but the bios is 380W and its pulling no-where near that. I appreciate the help and advice


Hotspot problem. If you have an IR gun and an open test bench, you would be able to thermally image the backplate side of the card to see what is overheating.
The GDDR6X has an unexposed temp sensor which the vbios has access to. There may be a VRM or other hotspot temperature as well. This is going to be a difficult issue to fix without a good IR gun, and good IR guns cost money.

Unfortunately you will need to do things the hard way. Do you have calipers? Try measuring the thickness of the thermal pads along the "edge" (the thickest part) and also see how much they are compressed. On FE cards, 1.5mm thermal pads fit perfectly on the backplate-side RAM and hotspots, and can also work on the GPU side and VRMs.

I am not sure about your card. However, if you want to play it fully safe, you can get a 2mm highly compressible pad with high W/mK and try to re-pad the VRAM and hotspots. But without an IR gun, you may have to guess where the hotspots are. If it helps, you can look at these pages. Thermalright Odyssey has 2mm pad options, but I am not sure if 2mm pads are even available in the USA under this brand; I know AliExpress has them. These are EXCELLENT PADS!

- Thermalright Thermal Pad, 120x120mm, 12.8 W/mK, 0.5/1.0/1.5/2.0mm (www.aliexpress.com)
- NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything, Page 2 (www.igorslab.de)
- NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X (www.igorslab.de)

Here is where I repadded my own hotspots. But idk if this will help your card layout.


----------



## HyperMatrix

EconomyFishFinger said:


> hmm, i've looked at GPUz and the power limit is not being hit, however i'm getting a percap "thrm" message.
> View attachment 2465932


As Falkentyne mentioned, and based on saying you repasted the GPU, it seems like there is an issue with thermal pad contacts after you put it back together. If you hadn't touched it yourself and it was having this problem, I would have recommended swapping it under warranty. But your best bet is to follow Falk's advice. Find a picture of your board and see where thermal pads are supposed to be and make sure pads are placed properly and of the right thickness so it doesn't cause loss of contact in other spots. Because currently it seems like there's a component somewhere that is overheating. This is one of the few times the 9 thermal sensors on the EVGA cards would come in handy.


----------



## Mat_UK

A big thank you to Cavokk and Hypermatrix !

I have successfully flashed my MSI 3090 X Trio with the EVGA 500W BIOS.

I can confirm everything seems to work absolutely fine (including running 3x 1440p monitors over the three DisplayPort outputs).

Initial results are amazing. On the stock BIOS the max power draw was 400 watts; this BIOS does indeed up that to 500 watts with the power slider in Afterburner maxed out at 119%.


I am also now well over 20k in TimeSpy; my previous high score on the stock BIOS was 19,567.

https://www.3dmark.com/3dm/53269683?

This is still on air cooling with the default fan curve; the GPU hits 70C average in the above run. So possibly I have some more headroom on air, but I can't wait for my waterblock to arrive (hopefully next week) so I can really max this thing out.

Seems to me MSI are missing a trick here; they should release their own XOC BIOS to compete with the other brands... although maybe they just want to leave room to sell us the Suprim X cards when they come out.


----------



## Mat_UK

plonkman said:


> seems weird that the 3 power conx are drawing different powers.. is this normal? my (addmittedly 2 plug) card always balances the power


Yes, I think it's normal. Same for me, the 3rd 8-pin was drawing a lot less than the other 2 on the stock BIOS, but the card still hit its 400W limit.


----------



## Mat_UK

EconomyFishFinger said:


> Hi there,
> 
> I have an MSI Gaming X Trio. I have the original bios running on the card, and I also repasted the gpu with Hydronaut thermal compound. I've applied the thermal paste correctly. I just dont get whats going on with this card. If it was hitting power limits that would be fair enough but the bios is 380W and its pulling no-where near that. I appreciate the help and advice



Hi, I have the exact same card - MSI 3090 Gaming X Trio - not sure I can offer much more help than the others here but if you want to compare any tests etc just let me know. It only arrived yesterday and I haven't removed the stock cooler yet so I can't look inside right now but when I swap to the waterblock I can take pics of all the thermal pads etc if that's helpful.

Only other thing I can suggest is check your airflow and maybe rig up a temporary fan over the backplate?

I feel for you man, I hate random hardware issues like this


----------



## Falkentyne

Mat_UK said:


> Yes I think it's normal, same thing for me the 3rd 8pin was drawing a lot less than the other 2 on the stock BIOS but the card still hit its 400w limit


What is the absolute max power any of your 8 pins are pulling?

Each 8-pin has a power limit associated with it. For some cards, this might be 150-175W. If all three 8-pins drew balanced power, and then you added PCIe slot power, that would be at least 510W default TDP (100%) if you went 150W x 3 + 60W PCIe, and that's not even accounting for a >100% TDP slider. I honestly don't see why the three-connector cards aren't properly balanced. Maybe the card would be uncoolable, so they gimp the power delivery?
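
That budget arithmetic is easy to sanity-check. The per-rail limits below are the illustrative figures from the reasoning above, not actual vbios values, which vary per card:

```python
# Sketch: summing assumed per-rail limits into a board power budget.
rail_limits_w = {
    "8pin_1": 150,      # illustrative 8-pin limit
    "8pin_2": 150,
    "8pin_3": 150,
    "pcie_slot": 60,    # illustrative slot allowance
}
default_tdp = sum(rail_limits_w.values())
print(default_tdp)               # 510
print(int(default_tdp * 1.19))   # 606 with a 119% slider
```

Which is the point: if all three 8-pins really were balanced at their individual limits, the default 100% TDP would already sit around 510W, well above what these cards actually allow, so at least one rail must be deliberately held back.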


----------



## Mat_UK

Falkentyne said:


> What is the absolute max power any of your 8 pins are pulling?
> 
> Each 8 pin has a power limit associated with it. For some cards, this might be 150-175W. If all three 8 pins drew balanced power, and then you added PCIE, that would be at least 520W default TDP (100%) if you went 150W * 3 + 60W PCIE, and that's not even accounting for a >100% TDP slider. I honestly don't see why the three pin cards aren't properly balanced. Maybe it would be uncoolable so they gimp the power delivery?


Hi Falkentyne

See above, I just flashed to the 500W EVGA BIOS, but even with this BIOS the 3rd 8-pin is drawing a lot less than the other 2...

Also, it's interesting to note that GPU-Z still says it's mostly PWR limited, even at 500W!


----------



## Falkentyne

Mat_UK said:


> Hi Falkentyne
> 
> See above I just flashed to the 500w EVGA bios but even with this BIOS the 3rd 8pin is drawing a lot less than the other 2...
> 
> View attachment 2465939
> 
> 
> 
> Also it's interesting to note that GPUz is still saying it's mostly PWR limited even at 500w !


Both your 8-pin #2 and board power are limiting the power draw.

The other power rails besides the eight-pins are PCIe slot power and GPU Chip. Usually GPU Chip throttles at 300W on 400W cards, but I have no idea what it's set to here (which is why the shunt resistor has to be modded for chip power draw as well). PCIe slot power can trigger the power limit anywhere between 68W and 85W; if it's a 75W limit, it may trigger early and start to slowly reduce power.
I am not sure if "board power draw" is its own calculation or simply PCIe + 8-pins as the vbios sees it, but I believe it's separate (which is why the TDP slider works to begin with).
172.2 + 107.5 + 161.4 + 70.2 gets you to about 511W.
I am not sure what GPU Chip power throttles at, but I know it definitely senses that.


----------



## Mat_UK

Mat_UK said:


> I have successfully flashed my MSI 3090 X Trio with the EVGA 500w BIOS
> 
> I can confirm everything seems to work absolutely fine ( including I am running 3 x 1440p monitors over the 3 display port outputs ).



Update on this: "everything seems to work absolutely fine"... *EXCEPT for the RGB, which has gone dark on me*. No big deal as it's going under water, and now I can uninstall Dragon Center, which I only installed to set the RGB colours in the first place...


----------



## Baasha

Is there anyone who will do the shunt mod for me? I would like to get the most out of them.


----------



## Pepillo

Mat_UK said:


> Update on this - "everything seems to work absolutely fine" ...* EXCEPT for the RGB which has gone dark on me*. No big deal as it's going under water and now I can uninstall Dragon Centre which I only installed to set the RGB colours in the first place...


I have no problem with that on my MSI Trio with the 500W EVGA BIOS. I simply set the desired color with the original BIOS and it stays correct with the new BIOS, although I can't change it. I'm also waiting for a waterblock, a Bykski; I'm tired of waiting for the Alphacool. Which one are you going to install?


----------



## Falkentyne

Baasha said:


> Is there anyone who will do the shunt mod for me? I would like to get the most out of them.


You would trust a stranger on the internet to possibly brick a $1500 video card?

You're much better off doing a shunt mod yourself. What exact card do you have?


----------



## defiledge

What kind of fps difference do you get from a shunt mod? Trying to decide if I should do it.


----------



## Mat_UK

Pepillo said:


> I have no problem with that in my MSI Trio with the 500w EVGA bios. I simply put the desired color with the original bios and it stays correct with the new bios although I can't change it. I'm also waiting for waterblock, a Bykski, I'm tired of waiting for the Alphacool. Which one are you going to install?


Ahh interesting, I had the colours set on the stock BIOS but lost them with the flash...


----------



## DooRules

Running the EVGA FTW3 Ultra. I have tried the XOC BIOS and the XC3 BIOS, and cannot get past 440W draw. Just did a TSE run at +220 on core and +800 on memory. Cleared 11k, but still missing something here.


----------



## Falkentyne

defiledge said:


> What kind of fps difference do you get from a shunt mod? Trying to decide if I should do it.


It depends on if you're power limited and how -much- you are power limited. It basically depends on how much your core clocks drop.
Usually you can see about a 10% performance gain from not hitting the power limit if you were riding the power limit hard before.


----------



## Johneey

defiledge said:


> What kind of fps difference do you get from a shunt mod? Trying to decide if I should do it.


For sure, dude, you won't hit the power limit anymore! Do it!


----------



## ExDarkxH

What's your max idle clock using PX1 boost lock before crashing?

Are you over or under 2,300? I'm trying to get an idea for general info/comparison.

I'm guessing that's around the range people might see before artifacts/instability at idle temps, but I could be wrong.


----------



## Lobstar

DooRules said:


> Running the EVGA FTW3 Ultra. I have tried the XOC bios, and the XC3 bios, cannot get past 440 watt draw. Just did a TSE run @ +220 on core and + 800 on the memory. Cleared 11k but still missing something here.


I can't get mine over 420 no matter what I do. Just luck of the draw I guess.


----------



## DooRules

Lobstar said:


> I can't get mine over 420 no matter what I do. Just luck of the draw I guess.


I don't buy that. I'm missing something on my system, or maybe that XOC BIOS needs some refining. Luck of the draw for sure in OC ability of each GPU, but they should all be able to draw the wattage.


----------



## Emmanuel

Reposting what I posted on the Evga forum, but here's my experience with the FTW3 Ultra.

On the stock BIOS, 107% would result in the power limiter kicking in around 440W. After flashing the 500W XOC BIOS and increasing the limit to 119%, I get up to ~480W and sometimes up to 505W before it kicks in.

So for me, the XOC BIOS does work but does not provide significant OC headroom. The best I can do right now is a sustained 2085MHz in Battlefield 5 at 1.012v after dropping from 2130MHz due to temps (mid 70s). The card also seems stable at 1.0v and 2070MHz, but I wanted to get as close to 2100MHz as possible.

Any higher than 1.0v and I start hitting the power limit in Port Royal, but that's not a huge deal as I prioritize high clocks in games which benefit from the additional voltage while still drawing less power than benchmarks. Time Spy Extreme is a power-limiter fest regardless of the voltage, dropping the clocks into the 1800s on occasion.


----------



## Lobstar

DooRules said:


> I dont buy that. Missing something on my system or maybe some refining of that XOC bios . Luck of the draw for sure in OC ability of each gpu, but they should all be able to draw the wattage.


I haven't tested my card with anything but games and synthetics. Maybe there is a situation where encoding/decoding video, or somehow saturating more of the processors while playing games, would push it higher? I'm not being an apologist for EVGA here or anything; I'm more trying to understand how these cards work and whether they should get my money in the future.


----------



## rawsome

Finally put my Gaming X Trio under water (not shunted), and now I am in the Hall of Fame at #49, first time! Will have to do some more tests and try to get into the 15xxx range, but this is already everything I hoped for when I started OC'ing the 3090.



https://www.3dmark.com/pr/516723



*@Trio X users with the Alphacool waterblock /
@water users:* I'm a first-time water cooler, what are your temperature deltas?
Mine runs at up to 49-50°C core with 35°C water temp after an hour of gaming, ambient around 20°C. Not sure if that is good or just average.


----------



## Johneey

rawsome said:


> Finally put my Gaming X Trio under water (not shunted), now i am in the Hall of Fame #49, first time!  Will have to do some more tests and try to get into 15xxx. but this is already everything i hoped for when starting OC'ing the 3090.
> 
> 
> 
> https://www.3dmark.com/pr/516723
> 
> 
> 
> *@trio X Users with alphacool waterblock
> @water users:* Im first time water cooler, what are your temperature deltas?
> Mine runs at up to 49/50° core with 35° water temp after an hour of gaming, ambient around 20°. not sure if that is good or just average.


It's average to bad, you have a high delta. Which block do you use? My Alphacool has a delta of 7°C from GPU temp to water.


----------



## rawsome

Johneey said:


> its average till bad. you have a high delta, which block u use? my alphacool has delta from 7* gpu temp to water


This block: Alphacool MSI Gaming X Trio 3080/3090 Waterblock with Backplate
With this paste: Thermalright TF8
Do you have the same block?
I tried to do a layer of thermal paste that is solid enough that I cannot see the metal through it, but not much thicker. Then I tightened the screws so that the springs on the back are 95% compressed, but not much more.
Should I try a different paste, or just repaste thinner/thicker?


----------



## Johneey

rawsome said:


> This block: Alphacool MSI Gaming X Trio 3080/3090 Waterblock with Backplate
> With this paste: Thermalright TF8
> Do you have the same block?
> I tried to do a layer of thermal paste that is solid enough that I cannot see the metal through it, but not much thicker. Then I tightened the screws so that the springs on the back are 95% compressed, but not much more.
> Should I try a different paste, or just repaste thinner/thicker?


No, I use the Alphacool block for reference boards.


----------



## Mat_UK

rawsome said:


> This block: Alphacool MSI Gaming X Trio 3080/3090 Waterblock with Backplate
> With this paste: Thermalright TF8
> Do you have the same block?
> I tried to do a layer of thermal paste that is solid enough that I cannot see the metal through it, but not much thicker. Then I tightened the screws so that the springs on the back are 95% compressed, but not much more.
> Should I try a different paste, or just repaste thinner/thicker?


I have the same card and the same waterblock on order - hopefully coming next week - but until then I am running on air, so we can compare results once I get this bad boy under water. With my 2080 Ti XC Ultra I was getting around 42C GPU temp with max 32C water temps (using an EK Vector GPU block), so a 10C delta between the GPU and the water. My ambient is usually around 22C, so roughly the same 10C delta between ambient and water temps.

What rads do you have, and are you cooling just the GPU, or the CPU & GPU in the same loop?
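Since a few of us are comparing deltas, the arithmetic is just two subtractions; here's a quick sketch (using the example readings quoted above, not anyone's exact logs):

```python
def coolant_deltas(gpu_c, water_c, ambient_c):
    """Return (GPU-to-water, water-to-ambient) temperature deltas in degrees C."""
    return gpu_c - water_c, water_c - ambient_c

# ~50C core / 35C water / 20C ambient (rawsome's numbers above)
print(coolant_deltas(50, 35, 20))   # (15, 15) -> a 15C GPU/water delta is on the high side
# 42C core / 32C water / 22C ambient (my old 2080 Ti example)
print(coolant_deltas(42, 32, 22))   # (10, 10)
```

A well-mounted full-cover block usually lands closer to a 7-10C GPU-to-water delta, which is why a 15C figure tends to point at mounting pressure or paste.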


----------



## andrvas

I have a reference board (Palit GamingPro), and have successfully flashed the Gigabyte 390W bios. Still power limited, so what happens if I flash the 500W XOC bios? The board has 2 PCIe power connectors, so I assume it won't work?


----------



## EconomyFishFinger

Mat_UK said:


> Hi, I have the exact same card - MSI 3090 Gaming X Trio - not sure I can offer much more help than the others here but if you want to compare any tests etc just let me know. It only arrived yesterday and I haven't removed the stock cooler yet so I can't look inside right now but when I swap to the waterblock I can take pics of all the thermal pads etc if that's helpful.
> 
> Only other thing I can suggest is check your airflow and maybe rig up a temporary fan over the backplate?
> 
> I feel for you man, I hate random hardware issues like this


Hi Mat, (or anyone else that wants to chime in  )

I re-seated the stock cooler and the temperatures dropped 30C at idle, so I don't know what the hell was going on.

Are you able to confirm a couple of things from your own card?

I have the original bios on my Gaming X Trio and have run 3DMark:

Time spy extreme got me 10,460 for the GPU score (9119 points total for the run)

Port Royal got me a graphics score of 13,291.

Do these scores look about right for stock bios/cooler?

The card is pulling more power, and the voltage still looks a little low, but I'm no longer getting the thermal PerfCap status. Clocks bounced around stock levels.


----------



## motivman

andrvas said:


> I have a reference board (Palit GamingPro), and have successfully flashed the Gigabyte 390W bios. Still power limited, so what happens if I flash the 500W XOC bios? The board has 2 PCIe power connectors, so I assume it won't work?


less power than the 390w bios, unless you shunt mod


----------



## profundido

Heya all. It's that time again to upgrade to a newer generation of video cards. Since I have 4 G-Sync premium monitors, going AMD right now isn't really an option for me, plus I like the GeForce driver suites in general. My plan was to wait for the RTX 3080 Ti to get this level of performance at a more reasonable price. However, since it may take several months before custom watercooled cards actually hit the market, and then they remain unavailable (scarcity) for God knows how long, and considering the fact that I have a Pimax 8KX VR headset (aka the "GPU killer") incoming in a few days, I couldn't wait anymore.

Moreover, I noticed that the first (for my region) RTX 3090 custom watercooled prefab card just became available and in stock!! Yes, really in stock!! So I bit the bullet on the following:






CD-ROM-LAND Breda - Inno3D GeForce RTX 3090 iChill Frostbite Videokaart


Bekijk Inno3D GeForce RTX 3090 iChill Frostbite Videokaart op CDROMLAND.NL. Voor 22:30 besteld, morgen in huis! €1799.00




www.cdromland.nl





That's relatively cheap for a card with a block on it and a warranty I don't need to void.

I just popped it into my computer and ran some quickies:



http://imgur.com/a/GBet565




https://www.3dmark.com/spy/15422652





https://www.3dmark.com/spy/15406195





https://www.3dmark.com/fs/24061890



The card runs very cool, under 50°C, with my fans never going loud. I like it so far. If you have any questions or recommendations, shoot. This is my little contribution to this awesome thread.


----------



## EconomyFishFinger

profundido said:


> Heya all. It's that time again to upgrade to a newer generation of video cards. Since I have 4 G-Sync premium monitors, going AMD right now isn't really an option for me, plus I like the GeForce driver suites in general. My plan was to wait for the RTX 3080 Ti to get this level of performance at a more reasonable price. However, since it may take several months before custom watercooled cards actually hit the market, and then they remain unavailable (scarcity) for God knows how long, and considering the fact that I have a Pimax 8KX VR headset incoming in a few days, I couldn't wait anymore.
> 
> Moreover, I noticed that the first custom watercooled prefab card just became available and in stock!! Yes, really in stock!! So I bit the bullet on the following:
> 
> 
> 
> 
> 
> 
> CD-ROM-LAND Breda - Inno3D GeForce RTX 3090 iChill Frostbite Videokaart
> 
> 
> Bekijk Inno3D GeForce RTX 3090 iChill Frostbite Videokaart op CDROMLAND.NL. Voor 22:30 besteld, morgen in huis! €1799.00
> 
> 
> 
> 
> www.cdromland.nl
> 
> 
> 
> 
> 
> That's relatively cheap for a card with a block on it and a warranty I don't need to void.
> 
> I just popped it in to my computer and ran some quickies:
> 
> 
> 
> http://imgur.com/a/GBet565
> 
> 
> 
> 
> https://www.3dmark.com/spy/15422652
> 
> 
> 
> 
> 
> https://www.3dmark.com/spy/15406195
> 
> 
> 
> 
> 
> https://www.3dmark.com/fs/24061890
> 
> 
> 
> The card runs very cool, under 50°C, with my fans never going loud. I like it so far. If you have any questions or recommendations, shoot. This is my little contribution to this awesome thread.


Nice! What temperatures are you getting?


----------



## profundido

EconomyFishFinger said:


> Nice! What temperatures are you getting?


Under full load, 47-49C with an ambient living room temp of 23.4C and fans on the "silent profile" (600-900 RPM). It's quite amazing, I don't hear my machine like I'm used to. This Alphacool block must be cooling rather well. I didn't want to go through the hassle again of having to rip apart a non-watercooled version. Since I have quick-disconnect couplings in my custom waterloop before and after my GPU, it's always an easy 15-min swap operation.


----------



## EconomyFishFinger

profundido said:


> Under full load, 47-49C with an ambient living room temp of 23.4C and fans on the "silent profile" (600-900 RPM). It's quite amazing, I don't hear my machine like I'm used to. This Alphacool block must be cooling rather well. Didn't want to go through the hassle again of having to rip apart a non-watercooled version. Since I have disconnectors before and after my GPU, it's always an easy 15-min swap operation.


I'm waiting on an X Trio Alphacool block due at the end of this month, so it's exciting to see decent temperatures and a pretty slick look. Got some EK Acid Green dye for my coolant 

It will be getting cooled by this: https://www.aquatuning.co.uk/water-...nexxxos-xt45-full-copper-1080mm-nova-radiator


----------



## long2905

profundido said:


> Heya all. It's that time again to upgrade to a newer generation of video cards. Since I have 4 G-Sync premium monitors, going AMD right now isn't really an option for me, plus I like the GeForce driver suites in general. My plan was to wait for the RTX 3080 Ti to get this level of performance at a more reasonable price. However, since it may take several months before custom watercooled cards actually hit the market, and then they remain unavailable (scarcity) for God knows how long, and considering the fact that I have a Pimax 8KX VR headset incoming in a few days, I couldn't wait anymore.
> 
> The card runs very cool under 50°C with my fans never going loud. I like it so far. If you have any questions or recommendations, shoot. This is my little contribution to this awesome thread.


Very nice, I have the air-cooled version of this card. Can you flash the Gigabyte Gaming OC vbios and try to run Port Royal?


----------



## profundido

long2905 said:


> Very nice, I have the air-cooled version of this card. Can you flash the Gigabyte Gaming OC vbios and try to run Port Royal?


No promises, but I'll look into it after work. I suppose I can find the bios files and flash procedure somewhere in this thread?


----------



## long2905

profundido said:


> under full load 47-49 C with an Ambient living room temp of 23.4 and fans on "silent profile" (=600-900RPM). It's quite amazing I dont' hear my machine as I'm used to. This alphablock must be cooling rather well. Didn't want to go through the hastle again of having to rip apart a non-watercooled version. Since I have quick disconnector couplings in my custom waterloop before and after my gpu it's always an easy 15min swap operation.


How does the quick disconnect work? Don't you still have to fill up the GPU block?



> profundido said:
> No promises but I'll look into it after work. I suppose I can find the bios files and flash procedure somewhere in this thread ?


Just do a Port Royal run first, if that's okay? I don't think you can max out the voltage and power slider in MSI Afterburner with the stock bios though.

You can find the flashing guide and vbios in the first post. Or, since you are using Inno3D like me, you can download their utility, which includes a flashing tool GUI.


----------



## long2905

deleted


----------



## Mat_UK

EconomyFishFinger said:


> Hi Mat, (or anyone else that wants to chime in  )
> 
> I re-seated the stock cooler and the temperatures dropped 30C idle, so I dont know what the hell was going on.
> 
> Are you able to confirm a couple of things from your own card?
> 
> I have original bios on my Gaming X Trio and have ran 3dmark:
> 
> Time spy extreme got me 10,460 for the GPU score (9119 points total for the run)
> 
> Port Royal got me a graphics score of 13,291.
> 
> Do these scores look about right for stock bios/cooler?
> 
> Card is pulling more power but the voltage still looks a little low but no longer getting the perfcap thermal status. Clocks bounced around stock levels


My results below, a bit incomplete as I didn't record everything from each run - I just put them into a spreadsheet, which I should have done ages ago...










I can run some other benchmarks for comparison, just let me know.


----------



## profundido

EconomyFishFinger said:


> Im waiting on an X Trio alphacool block due in the end of this month, so its exciting to see decent temperatures and a pretty slick look. Got some EK Acid Green dye for my coolent
> 
> It will be getting cooled by this: https://www.aquatuning.co.uk/water-...nexxxos-xt45-full-copper-1080mm-nova-radiator
> 
> View attachment 2466027


that should be cooling nicely. As for the machine in my Imgur screenshots where I just put this card in, the radiator space used is:

Hardware Labs | Black Ice SR2 420 MP (hardwarelabs.com)

Hardware Labs | Black Ice SR2 360 MP (hardwarelabs.com)

EK-CoolStream PE 360 (Triple) - a high-performance water-cooling radiator combining EK's unique CSQ design with the latest radiator core engine (www.ekwb.com)

So that looks like a comparable 3D cooling volume.


----------



## profundido

long2905 said:


> How does the quick disconnect work? Don't you still have to fill up the GPU block?
> 
> 
> Just do a Port Royal run first, if that's okay? I don't think you can max out the voltage and power slider in MSI Afterburner with the stock bios though.
> 
> You can find the flashing guide and vbios in the first post. Or, since you are using Inno3D like me, you can download their utility, which includes a flashing tool GUI.


The quick-disconnect GPU replacement process is literally:

1. Power down the machine for safety, even though you can do it while running.
2. Literally "pull off" the tube (the disconnector on it, in fact) from the CPU watercooling block, and do the same with the equivalent on the other side of the card (the tube that disappears down below).
3. Over the sink, connect 2 pieces of empty open tube with the same disconnectors on them. This opens the loop to full air on both sides and drains all liquid from the tubes and block instantly, right back into its original watercooling liquid container.
4. Replace the part in between the tubes (= the GPU).
5. Plug it back into the PC in the exact same way (with empty GPU block and tubes) and power up your machine. The waterpump fills the empty space in the loop immediately, and you pour the liquid from the bottle back into your reservoir. Done.


----------



## khunpunTH

Dreams-Visions said:


> Update on the 4 cards I picked up and was testing. If you all will recall:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I did all my testing on the same rig, case sides off, 100% fans, reasonably cool room. I didn't want to repaste the cards and jeopardize warranties or returns, so they were tested as-is out of the box. As benchmarks went, for the silicon I received:
> 
> *Summary: *XTrio w/500W EVGA Bios > FE >~ Strix > FTW3U
> 
> *XTrio (w/500W EVGA bios)*
> XTrio was shockingly quiet even at max fan speed. Card was the second coolest of the bunch, maxing out at around 73C in stress testing and did the best job maintaining its clocks stable. All of the reviews saying it is super quiet are spot-on. 500W bios allows the card to bounce between 480W-510W in benchmarks. This was the card I decided to keep, and am running it daily at 0.956V @ 2040MHz. It's_ rock solid_ there and is drawing 380W-420W typical. Memory is very strong, and actually goes up to about +1248 before seeing performance degrade. That memory clock wasn't stable @ 0.956V so I dialed that down to +1048 daily usage.
> 
> *BEST STABLE OC (Afterburner):* +161 / +1148
> *CLOCK STABILITY:* 2085-2130, ~480W-510W, max temp 69C
> *DAILY (Afterburner):* 0.956V @ 2040 locked / +1048, ~62C
> 
> 
> *FE*
> FE at 100% fans is extremely loud. Like, unacceptably loud. But the nice thing about the FE is that it will likely never reach anywhere near 100% fan speed because its cooler is SO GOOD. In same stress tests and environment, this card maxed at *56C*. That's almost 20C cooler than the XTrio, the second best performer. Even when undervolting the XTrio to around 400W (which is where the FE hangs around in benchmarks), it's still a good ~10C warmer than the FE. I continue to go back and forth on keeping this one because that EK FE waterblock is gorgeous and a shunt mod would be a trivial way to really find out how good this card could be. I have another couple of months to decide if I want to keep it or not (Best Buy), but I'm probably going to return it as I am now VERY intrigued by the EVGA FTW3 HC.
> 
> *BEST STABLE OC (Afterburner):* +170 / +698
> *CLOCK STABILITY:* 1950-2010, ~390W-408W, max temp 56C
> 
> 
> *Strix OC (stock and w/500W EVGA bios)*
> Strix just ran too damn hot to get a confident impression either way. It would get up to about 75C with stock bios, but around 84C on the 500W EVGA bios. Just as a reminder, that is the same bios that the XTrio maxed at 69C on; a huge gap in cooling performance. Volume at 100% fan speed was acceptable but was definitely much louder than the XTrio. Under water and/or with a repaste it may have performed better, but again, I'm not going to open a card that I'm not 100% sure I want to keep and this wasn't the impression I was looking for out of the vaunted Strix model OOTB. What sealed my decision to not keep this card was being able to compare OC's for this and the XTrio on the SAME 500W bios directly. The XTrio could simply do more. Higher core OC, higher mem OC, significantly lower temps. Strix kept it close in benchmarks and I know it was very limited by temps, but again, wasn't ready to take any risk given the cost of the card. I was going to return it but I ended up selling this one for my cost + ship which worked out because someone who wanted a Strix got one and I got to keep my credit card's reward points.
> 
> *BEST STABLE OC (Afterburner):* +141/+898
> *CLOCK STABILITY:* 1980-2070, 463W-480W, max temp 74C, stock bios
> *CLOCK STABILITY:* 2010-2115, ~510W, max temp 84C, 500W FTW3U bios
> 
> 
> *FTW3 (stock only)*
> FTW3 was slightly louder than the Strix at 100% fan speed, but not offensive like the FE. Not a lot to say here; clocks would regularly dip below 2GHz which sort of automatically disqualified it from this race, and the max temp got up there much higher than I would consider acceptable even on the basic 450W bios. Just a reminder at this point: all 4 cards were tested in the exact same environment. Huge room, nice and cool, open case. This card and the Strix were literally ~20C+ hotter than the FE under the same load. This card had the weakest silicon on the core of the bunch, maxing at just +90 core offset AND the second weakest memory offset, beating out only the FE.
> 
> *BEST STABLE OC (Afterburner):* +90/+800
> *CLOCK STABILITY:* 1950-2045, ~425-450W, max temp 80C, stock bios (did not try 500W bios because of the issues)
> 
> --
> 
> There was a bunch of other data and scores and stuff (scoring as I incrementally rose core and mem offsets, some game bench scores), but I'll only post some of those if someone is really interested. Realistically, all were close enough to consider them a wash in real world applications, but the FTW3 was about 300 points behind the other 3 cards (which were all within 50 points of each other in 3DMark benches). The FE beat out the Strix and FTW3U in both Port Royal and Time Spy despite having 100W less power available to it.
> 
> In the end, as we should all know at this point: the silicon lottery is more important than the make and model on the side of your box. The chips for these cards are not binned, so you can get a core that can't OC much...or one that can OC a lot. Same with memory. But outside of benchmarking, I don't think you'd miss or appreciate a few extra frames either way.
> 
> Back to the Hydro Copper, I didn't expect them to be so competitively priced and was always planning to put whatever card I settled on under water. The HC represents a savings of at least a couple hundred depending on the card + block (Strix + block is like $200 more than the HC, for example). Unless I deem the XTrio to be a proper golden chip (I think it might be given those offsets and ease of applying the 500W bios), the HC value proposition seems unbeatable for the price. Also, what a nuts air cooler the FE has. If I were building on air and not concerned at all about leaderboard benchmarking, I don't see why I would purchase any other card. It's beautiful, it runs cold, boosts to 400W (which is where you want to be for daily gaming regardless of card to keep temps down on air), and sits right around 2GHz. At best, we're looking at a few FPS advantage for one of the more expensive AIBs? While being able to gawk at arguably the sexiest waterblock ever made in that EK FE block.
> 
> So yea, having a bit of paralysis of analysis, but can at least say I'm down to 3 options. FE + EK block? XTrio + someone's block? Sell everything and try to score a Hydro Copper sometime next month? Decisions, decisions. But at least they will be mostly informed decisions, free from reliance of Youtubers.


With the 500W bios on both the MSI and the ROG, my Strix can't beat my MSI in Time Spy.
For normal use, I can't tell much difference in performance; the Strix beats the MSI by 1 frame (Strix 390W vs MSI 370W).
For normal OC, the Strix is better (480W bios vs MSI 390W).
For the XOC 500W bios, the MSI is better; it can overclock 15-30 MHz higher than the Strix.
I don't see any difference in temperature control or loudness, around 67C in normal gaming (MSI low-temp bios from their website, it is official).
Anyway, I have to keep them both because I have 2 PCs, but if I wanted to buy a new one, I would buy the MSI, because the price is a lot cheaper and the performance is nearly the same.

ps. tested with [email protected] maximus xii apex 32gb gskill 4000 cl19-19-19-39


----------



## profundido

long2905 said:


> How does the quick disconnect work? Don't you still have to fill up the GPU block?
> 
> 
> Just do a Port Royal run first, if that's okay? I don't think you can max out the voltage and power slider in MSI Afterburner with the stock bios though.
> 
> You can find the flashing guide and vbios in the first post. Or, since you are using Inno3D like me, you can download their utility, which includes a flashing tool GUI.


Here's a quick Port Royal:



https://www.3dmark.com/pr/518491



Core voltage I could unlock in the profiles dir but Power limit is locked:


----------



## Spiriva

profundido said:


> Here's a quick Port Royal:
> 
> 
> 
> https://www.3dmark.com/pr/518491
> 
> 
> 
> Core voltage I could unlock in the profiles dir but Power limit is locked:


If you have a 2x8pin 3090:

nvflash - NVIDIA NVFlash (5.667.0) Download

3090 gigabyte gaming oc - Gigabyte RTX 3090 VBIOS


nvflash64 --protectoff

nvflash64 --save Partner3090Model.rom

nvflash64 -6 GigabyteRTX3090GamingOC.rom

Then use DDU to remove your current drivers, reboot, and install the NVIDIA drivers.

Now you will have a 390W bios instead.
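To keep those steps in one place, here's a small script sketch of the same sequence (an illustration only: the filenames are the ones from the commands above, the dry-run default is my addition, `nvflash64` must be in your PATH, and you flash at your own risk, so keep the backup ROM):

```python
import subprocess

def flash_390w_bios(rom="GigabyteRTX3090GamingOC.rom",
                    backup="Partner3090Model.rom",
                    dry_run=True):
    """Return the nvflash command sequence from the post above; execute it only if dry_run=False."""
    cmds = [
        ["nvflash64", "--protectoff"],    # disable the BIOS write protection
        ["nvflash64", "--save", backup],  # back up the card's current BIOS first
        ["nvflash64", "-6", rom],         # flash the Gigabyte ROM (-6 overrides the board-ID mismatch check)
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # stop immediately if any step fails
    return cmds

if __name__ == "__main__":
    # print what would run; pass dry_run=False only when you actually mean to flash
    for cmd in flash_390w_bios():
        print(" ".join(cmd))
```

Same rules apply as with the manual commands: 2x8pin cards only, and keep the saved backup ROM somewhere safe so you can always flash back.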


----------



## profundido

long2905 said:


> How does the quick disconnect work? Don't you still have to fill up the GPU block?
> 
> 
> Just do a Port Royal run first, if that's okay? I don't think you can max out the voltage and power slider in MSI Afterburner with the stock bios though.
> 
> You can find the flashing guide and vbios in the first post. Or, since you are using Inno3D like me, you can download their utility, which includes a flashing tool GUI.


I have a real treat for you. Since Michael J. Fox announced that he's retiring and will have loads of time, I called him up. He came to pick me up with the DeLorean, flew me back in time exactly 2 years, where I filmed my former self using my phone while he was swapping out a Titan XP for an RTX 2080 Ti, and then came back to upload the video for you:






Hope you enjoy


----------



## PowerK

Does anyone here have Gigabyte 3090 Aorus Master/Xtreme ?
I wonder if there's a way to display GPU temp/usage etc. info on the LCD _without_ installing their software?


----------



## long2905

profundido said:


> I have a real treat for you. Since Michael J. Fox announced that he's retiring and will have loads of time, I called him up. He came to pick me up with the DeLorean, flew me back in time exactly 2 years, where I filmed my former self using my phone while he was swapping out a Titan XP for an RTX 2080 Ti, and then came back to upload the video for you:
> 
> 
> 
> 
> 
> 
> Hope you enjoy


thanks man. this is indeed very helpful. I want to dabble in water cooling but I'm still procrastinating on the idea lol.

Looks like you have a lot to tinker around with on the card. My best PR score on the stock air cooler is below, though it was quite tedious to get there. No shunt mod or watercooling yet, obviously. I may deshroud the card though.



https://www.3dmark.com/pr/485282


----------



## pat182

Kingpin is here: EVGA's new GeForce RTX 3090 KINGPIN Hybrid unleashed, costs $2000


----------



## Foxrun

WOOOO! I managed to get into the first batch of Kingpins! I can’t shunt my FE to save my life so the heck with it. I’ve never owned a KP before, I’m pumped.


----------



## derthballs

Picked up a GameRock OC today, can't see anything in the manual on how to switch BIOSes, and I'm struggling to find anything online. Could someone let me know how to do this?


----------



## pat182

Foxrun said:


> WOOOO! I managed to get into the first batch of Kingpins! I can’t shunt my FE to save my life so the heck with it. I’ve never owned a KP before, I’m pumped.


Happy with my Strix, I'm glad for you. Maybe I'll flash the Kingpin bios, let's see the max power of this one.


----------



## Thanh Nguyen

The Kingpin is still cheaper than my PNY, which I bought from a scalper.


----------



## pat182

Thanh Nguyen said:


> The Kingpin is still cheaper than my PNY, which I bought from a scalper.


well that's on you, pal


----------



## pat182

I'm curious to see how the Kingpin is gonna stack up. I'm doing 2085MHz on air 24/7, and some of you are hitting the wall under water in games at 2115-2150MHz. Maybe in synthetics the Kingpin is gonna smoke, but I doubt the in-game clock will be much higher, since it seems 2150-2200MHz is the limit for normal use.


----------



## Foxrun

pat182 said:


> Happy with my Strix, I'm glad for you. Maybe I'll flash the Kingpin bios, let's see the max power of this one.


I'll upload the bios as soon as I get it.


----------



## derthballs

Picked up my GameRock OC today. If I put any extra on the core, it just forces my PC to reboot, which is a bit strange. I only have a 750W PSU, do I need to look at upgrading that? It shows it's pulling about 430W in Afterburner in game.

The only other thing I could think of: I've got one PCIe cable that splits into 2 at the end. Is it more stable running 3 separate PCIe cables to the card?


----------



## kot0005

defiledge said:


> I have dual channel 3200mhz cl16 with a 3900x. Does the ram speed really make that much of a difference



That's at 1080p, and probably. At 1440p and 4K, RAM won't make that much difference. Also, a 3090 is a waste for 1080p because it performs the same as a 3080.


----------



## Shaded War

I finally got an order placed for a 3090 FE on bestbuy. Been waiting since day 1 and always missed them.

They are in stock right now if you want one https://www.bestbuy.com/site/nvidia...rd-titanium-and-black/6429434.p?skuId=6429434 SOLD OUT


----------



## profundido

Spiriva said:


> If you have a 2x8pin 3090:
> 
> nvflash - NVIDIA NVFlash (5.667.0) Download
> 
> 3090 gigabyte gaming oc - Gigabyte RTX 3090 VBIOS
> 
> 
> nvflash64 --protectoff
> 
> nvflash64 --save Partner3090Model.rom
> 
> nvflash64 -6 GigabyteRTX3090GamingOC.rom
> 
> Then use DDU to remove ur current drivers, reboot and install nvidia drivers.
> 
> Now you will have a 390w bios instead.


Thanks for the quick reminder, saves me work. I flashed, and now I see it unlocked up to a 105% power limit instead of 100%. What max voltage is safe, btw, and what overclock settings are you using for core and mem?


----------



## Baasha

Falkentyne said:


> You would trust a stranger on the internet to possibly brick a $1500 video card?
> 
> You're much better off doing a shunt mod yourself. What exact card do you have?


lol.. I really want to do water-cooling (actually I want to get a chiller and do that) but I'm concerned about leaks etc. I've never done water-cooling before and it seems too cumbersome.

I have 4x FE 3090s and 1 FTW3 Ultra.

I need to flash the FTW3 Ultra with the 500W BIOS - is there a guide for that?

I would like to shunt all 4 FE's. how hard/easy is it?


----------



## Lobstar

Baasha said:


> I need to flash the FTW3 Ultra with the 500W BIOS - is there a guide for that?








EVGA GeForce RTX 3090 FTW3 XOC BIOS - EVGA Forums


Important update 3/19/21:Newer EVGA GeForce RTX 3090 FTW3 ULTRA cards will be shipping with the XOC BIOS already loaded on the secondary position. The below BIOS should ONLY be installed if your card DID NOT already have it applied. We have a new BIOS that increases the maximum Powe...



forums.evga.com


----------



## stryker7314

Baasha said:


> lol.. I really want to do water-cooling (actually I want to get a chiller and do that) but I'm concerned about leaks etc. never done water-cooling before and it seems too cumbersome.
> 
> I have 4x FE 3090s and 1 FTW3 Ultra.
> 
> I need to flash the FTW3 Ultra with the 500W BIOS - is there a guide for that?
> 
> I would like to shunt all 4 FE's. how hard/easy is it?


You need more 3090's


----------



## Spiriva

profundido said:


> Thanks for the quick reminder, saves me work. I flashed, and now I see it unlocked up to a 105% power limit instead of 100%. What max voltage is safe, btw, and what overclock settings are you using for core and mem?


You can set it to +100 core voltage, 105% power limit; it will draw a max of 390W. It is safe for 24/7, Gigabyte uses this bios on their 3090 "Gaming OC" card.

I use +100 core voltage, 105% power limit, +120MHz GPU and +500MHz on memory (while benchmarking I run at +170MHz GPU and +1300MHz memory). 
Watercooled with an EK block.

390W is currently the best bios for a 2x8pin card.
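Worth noting the Afterburner slider is a percentage of the bios default power target, so the ceilings work out like this (a quick sketch; the 390W default is the Gaming OC bios value mentioned above):

```python
def effective_power_cap(default_watts, slider_percent):
    """Power target in watts for a given power-limit slider setting."""
    return default_watts * slider_percent / 100

print(effective_power_cap(390, 100))  # 390.0 -> stock target
print(effective_power_cap(390, 105))  # 409.5 -> ~410W ceiling with the slider maxed
```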


----------



## csaris

Does anyone have the msi suprim bios? I would like to flash it to my trio x...


----------



## dante`afk

Baasha said:


> lol.. I really want to do water-cooling (actually I want to get a chiller and do that) but I'm concerned about leaks etc. never done water-cooling before and it seems too cumbersome.
> 
> 
> I need to flash the FTW3 Ultra with the 500W BIOS - is there a guide for that?


yes, double click the .exe file and reboot pc :O



Seeing the Kingpin card at only 2k USD, I guess there is not much OC headroom anyway, thus the "low" price. The card is for LN2 overclocking; regular users don't benefit from that.
Also, no waterblock-only version? Jeez, no one wants this ugly, inefficient hybrid cooler.


----------



## HyperMatrix

dante`afk said:


> yes, double click the .exe file and reboot pc :O
> 
> 
> 
> Seeing the Kingpin card at only 2k USD, I guess there is not much OC headroom anyway, thus the "low" price. The card is for LN2 overclocking; regular users don't benefit from that.
> Also, no waterblock-only version? Jeez, no one wants this ugly, inefficient hybrid cooler.


I think for the price... it's actually amazing. A 3-fan radiator for the GPU die along with conventional fans for the rest of the board. For someone who's not gonna watercool (apparently like Baasha, haha) this would be awesome. I'd have signed up for it, or the HydroCopper version that'll come out soon, but you need to be EVGA Elite to get in on the first run, which is upsetting. Also, a problem with the pricing is that it completely destroys the value of the normal FTW3 cards.

Still curious to see what the power limit on the LN2 bios for the card is, and whether it can be flashed onto other cards, or if they did some kind of kinky engineering so it wouldn't play nice with other card designs.


----------



## profundido

long2905 said:


> very nice, i have the air cooled version of this card. Can you flash the Gigabyte Gaming OC vbios and try to run Port Royal?


Here you go. The unlocked extra 5% does yield an immediate increase in my results:



https://www.3dmark.com/pr/519431





https://www.3dmark.com/spy/15430446




Can't believe how small this card is compared to all the others I've seen so far. Doesn't even reach the edge of an ATX mobo.


----------



## ExDarkxH

Probably more to do with the upcoming 3080 Ti.
Imagine paying like $2.5k in December for a Kingpin, and then in January they drop a 3080 Ti that is 5% slower for $1k.

I anticipated $2k for the KPE when they offered the HydroCopper for only a $50 premium.


----------



## Chamidorix

It is so goddamn gratifying to be so validated by everything Kingpin himself is saying live on stream right now. For weeks I've been saying everyone is ignoring the cache/uncore/MSVDD rail, and bam, straight off the bat he's like: there is so much performance to be gained on ambient if you overclock the 2nd core rail.

Also, people have been mega pussyfooting the memory OC. It can take 1.4V no problem with good airflow on ambient.

This stream is such a bizarre mix of noob and tech issues, and every word from Vince is liquid gold.


----------



## ExDarkxH

why is J doing so poorly in Port Royal right now?
Lol, I bet deep down he wants to drop this KPE and use his personal card.


----------



## Falkentyne

Baasha said:


> lol.. I really want to do water-cooling (actually I want to get a chiller and do that) but I'm concerned about leaks etc. never done water-cooling before and it seems too cumbersome.
> 
> I have 4x FE 3090s and 1 FTW3 Ultra.
> 
> I need to flash the FTW3 Ultra with the 500W BIOS - is there a guide for that?
> 
> I would like to shunt all 4 FE's. how hard/easy is it?


Do my method and it will work 100%.


Baasha said:


> lol.. I really want to do water-cooling (actually I want to get a chiller and do that) but I'm concerned about leaks etc. never done water-cooling before and it seems too cumbersome.
> 
> I have 4x FE 3090s and 1 FTW3 Ultra.
> 
> I need to flash the FTW3 Ultra with the 500W BIOS - is there a guide for that?
> 
> I would like to shunt all 4 FE's. how hard/easy is it?


It's easy if you follow the extra steps I mentioned in this post, if you are using the MG Chemicals 842AR method.
The difficulty is that the original shunt edges are not flush with the middle, which creates some extra annoying work (unless you are soldering).









RTX 3090 Founders Edition working shunt mod (www.overclock.net)





The best method is to solder and use the solder to cover the gaps, but not everyone here can solder safely, and soldering with a hot tip can instantly damage something, while paint can be quickly removed with isopropyl alcohol if you make a mistake.


----------



## Baasha

dante`afk said:


> yes, double click the .exe file and reboot pc :O
> 
> seeing the Kingpin card at only 2k USD, I guess there is not much OC headroom anyway, thus the "low" price. the card is for LN2 overclocking, regular users don't benefit from that.
> also no waterblock-only version? jeez, no one wants this ugly inefficient hybrid cooler.


Yes, just flashed the BIOS - average clocks are quite a bit higher than on the FE cards. However, I never saw 500W in GPU-Z or on the RTSS OSD. The max I saw was 483W in Time Spy Extreme.

As far as OC goes (just a preliminary test), it does +150 on the core and +1000 on the mem and passes everything thrown at it. It's also the first time I've broken 14k in Port Royal - none of the FEs broke 14k (the closest was 13,960), but this FTW3 Ultra pushed it over. 2 of my FE cards can do +150 core and +1000 mem; the other 2 can do +130 core and +750 mem.

The other thing is that the FTW3 feels much lighter than the FE cards - the FE is an absolute monster.


----------



## Baasha

Falkentyne said:


> Do my method and it will work 100%.
> 
> 
> It's easy if you follow the extra steps I mentioned in this post, if you are using the MG Chemicals 842AR method.
> The difficulty comes in that the original shunt edges are not flush with the middle, creating some extra annoying work that has to be done (unless you are soldering)
> 
> 
> 
> 
> 
> 
> 
> 
> 
> RTX 3090 Founders Edition working shunt mod (www.overclock.net)
> 
> 
> 
> 
> 
> The best method is to solder and use the solder to cover the gaps, but not everyone here can solder safely, and soldering with a hot tip can instantly damage something, while paint can be quickly removed with isopropyl alcohol if you make a mistake.


That seems way too cumbersome and time-consuming, which is why I asked if someone would be willing to do it for me. Knowing my impatience with these things, I am certain I would mess it up and brick the card. I'm not willing to risk that.

Is there a 'professional' service that can do this? Or someone who is willing to help who has done it several times before? I can pay a nominal fee.


----------



## Chamidorix

Vince is saying you can very consistently get 15550-15600 in Port Royal on a stock ambient KPE AIO. That is a fairly significant jump over current ambient averages.


----------



## ExDarkxH

Chamidorix said:


> Vince is saying you can very consistently get 15600 port royal on stock ambient KPE AIO. That is a fairly significant jump over current ambient averages.


that's not even close to what we are seeing right now

I'd like to see it though
don't forget Vince gets commission on these cards


----------



## Apecos

profundido said:


> Here you go. The unlocked extra 5% does yield an immediate increase in my results:
> 
> 
> 
> https://www.3dmark.com/pr/519431
> 
> 
> 
> 
> 
> https://www.3dmark.com/spy/15430446
> 
> 
> 
> 
> Can't believe how small this card is compared to all the others I've seen so far. Doesn't even reach the edge of an ATX mobo.


What is your GPU? Be careful, because the Gigabyte 3090 Gaming OC has one more power phase than the Ventus (for example).


----------



## mirkendargen

ExDarkxH said:


> thats not even close to what we are seeing right now
> 
> I'd like to see it though


Yeah I'm gonna believe that when I see it unless they are SERIOUSLY binning the GPUs they put in them. Most cards won't do 15000 at 500w on water.


----------



## Chamidorix

The point is that current overclocking efforts focus overwhelmingly on the core, which very quickly outruns stock memory and cache bandwidth. People devote a modicum of effort to memory OC, but literally zero effort has been spent on MSVDD (cache) OC in this entire thread. And here Kingpin, arguably the most knowledgeable OCer for Nvidia platforms on the planet, comes in and specifically calls out the importance of overclocking this new core power rail on ambient, and yet it will probably just continue to be ignored.

It is true the Kingpin makes voltage adjustments extremely easy. But the elmor makes it super simple on all Asus and MSI cards as well, and yet you see people denouncing it all the time as irrelevant for non-LN2 scenarios.

People should be far more interested in volt modding this gen, not for the core, but for the heavily temp-limited stock memory and uncore. This is particularly relevant given the complete absence of Afterburner/Precision support for frequency adjustment of MSVDD. It is perfectly exposed and modifiable in the NVAPI (which AB/PS use); it has just been completely neglected in consumer software due to customer ignorance.


----------



## stryker7314

Chamidorix said:


> The point is that current overclocking efforts focus overwhelmingly on the core, which very quickly out scales stock memory and cache bandwidth. People devote a modicum of effort to memory OC, but literally 0 effort has been expended to MSVDD (cache) OC on this entire thread. And here kingpin, arguably the most knowledgeable OCer for Nvidia platforms on the planet, comes in and specifically calls out the importance of overclocking this new core power rail on ambient and yet it will probably just continue to be ignored.
> 
> It is true the kingpin makes voltage adjustments extremely easy. But the elmor makes it super simple on all Asus and MSI cards as well, but you see people denouncing it all the time as irrelevant for non-ln2 scenarios.
> 
> People should be far more interested in volt modding this gen, not for the core, but for the heavily stock temp limited memory and uncore. This is particularly relevant in the complete absence of afterburner/precision allowing frequency adjustment of MSVDD. It is perfectly exposed and modifiable in the NVAPI (which AB/PS use), it has just been completely neglected in consumer software due to customer ignorance.


Maybe a guide is in order from someone knowledgeable, I have no idea how to do this unfortunately.


----------



## Chamidorix

First shots of the kingpin PCB, looks weaker component wise than the 2080ti. Way less caps on mem.


----------



## Xeq54

Chamidorix said:


> This is particularly relevant in the complete absence of afterburner/precision allowing frequency adjustment of MSVDD. It is perfectly exposed and modifiable in the NVAPI (which AB/PS use), it has just been completely neglected in consumer software due to customer ignorance.


But how do you adjust it, though? I have the elmor EVC2 and I can adjust all the voltages on my TUF (though I am still waiting for a waterblock before giving it a good go), but how do I increase the MSVDD clock?


----------



## Chamidorix

stryker7314 said:


> Maybe a guide is in order from someone knowledgeable, I have no idea how to do this unfortunately.


Luckily the 3090 KPE OC guide should be available to everyone on release. EVGA lost TiN, who wrote the excellent guides for previous gens, but if this gen's guide even approaches the previous ones in quality, it should be an excellent starting point.


----------



## dr/owned

Chamidorix said:


> It is true the kingpin makes voltage adjustments extremely easy. But the elmor makes it super simple on all Asus and MSI cards as well, but you see people denouncing it all the time as irrelevant for non-ln2 scenarios.


A problem is that EVC2 settings aren't persistent...you'd have to set them on every reboot and then reapply your Afterburner overclocks.


----------



## dr/owned

Baasha said:


> That seems way too cumbersome and time-consuming; hence why I asked if someone would be willing to do it for me. Knowing my impatience with these things, I am certain I would mess it up and brick the card. I'm not willing to risk that.
> 
> Is there a 'professional' service that can do this? Or someone who is willing to help who has done it several times before? I can pay a nominal fee.


For someone like me this solder job is trivially easy, but even back in the hammer-delid days of Ivy Bridge I offered to do it for people for free, and then comes "what happens if you break it?". Uhhh...it's free, I'm not offering a warranty. Followed by choosing-beggar outrage, which was odd.

Why does that link you posted mention a conformal coating on the FE? Does it actually have a conformal coating, or is the dude getting confused by oxides that settle on solder joints? (I haven't heard of conformal coatings outside of, like, the 700-series MSI Lightning cards)


----------



## Falkentyne

dr/owned said:


> For someone like me this solder job is trivially easy, but even back in the hammer-delid days of Ivy Bridge I offered to do it for people for free, but then comes "what happens if you break it?". Uhhh...it's free I'm not offering a warranty. Followed by choosing beggar outrage which was odd.
> 
> Why does that link you post mention a conformal coating on the FE? Does it actually have a conformal coating or is the dude getting confused about oxides that settle on solder joints? (I haven't heard about conformal coatings outside of like the 700 series MSI Lightning cards)


Are you talking about me?
I'm not the original person who said it has conformal coating.
It was mentioned that if you're going to bridge the shunts (shunts that are flat and flush) with conductive paint rather than soldering, you need to scrape the edges of the shunts so the silver is bright; the stuff you are scraping off on FE cards is conformal coating.

If you are soldering, the solder goes right through it, so there's no need.

The issue with FE cards is that stacking or just painting directly with conductive paint (842AR) is problematic because the edges of the shunts are lower than the middle. So if you try stacking a shunt, it won't make contact unless it's through solder...and if it only contacts through the conductive paint, the resistance will be increased, as if you were using a much higher-mOhm shunt.

So for direct stacking, solder is recommended. Painting with 842AR works if you can make sure the paint -properly- contacts the lower edges (after scraping them).
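For anyone weighing the stack-vs-paint decision, the arithmetic behind the mod is just parallel resistors: the controller computes power from the voltage drop across a shunt value it assumes is stock, so lowering the real resistance makes it under-report power. A quick sketch - the 5 mOhm stock value is an assumed example, not a measured figure for any specific card:

```python
# Sketch of the shunt-mod arithmetic, not specific to any card:
# the controller measures power as (V_sense / R_assumed) * V_rail,
# so lowering the real shunt resistance makes it under-report power.

def parallel(r1: float, r2: float) -> float:
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def actual_power(reported_w: float, r_stock: float, r_real: float) -> float:
    """Real power drawn when the controller still assumes r_stock."""
    return reported_w * (r_stock / r_real)

# Example: stacking an identical 5 mOhm shunt on top of the stock one
r_stock = 0.005                      # assumed stock shunt value, ohms
r_real = parallel(r_stock, r_stock)  # 0.0025 ohms after stacking

print(round(r_real * 1000, 2))               # 2.5 mOhm
print(actual_power(350.0, r_stock, r_real))  # 350 W reported -> 700 W actual
```

This is also why a bad paint bridge is worse than no bridge: any extra contact resistance in the path raises the effective shunt value instead of lowering it, and the card then over-reports power and throttles earlier.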

BTW I just managed this in Port Royal



https://www.3dmark.com/pr/518779



114% power limit set in MSI Afterburner at +150/+600

The power limit never got triggered. TDP max was 108%.
Card is maxed out for those clocks.


----------



## asdkj1740




----------



## HyperMatrix

I really like the Kingpin for those easy voltage controls. So jealous. Unfortunately it's elite members only. Both JayZ and Steve were able to get over 15k on Port Royal with just the Hybrid cards. That's pretty impressive. It's too bad the elmor unit doesn't have persistent settings otherwise that could have been a good option too.


----------



## ExDarkxH

Are the voltage controls a software package?
I'm an Elite member and can sign up for the Hydro; the issue is I already have a golden chip and don't think I will get this lucky again.
My question is: do you think I can buy the card, install the software, and then sell the card and use the controls on my FTW3?


----------



## Nizzen

HyperMatrix said:


> I really like the Kingpin for those easy voltage controls. So jealous. Unfortunately it's elite members only. Both JayZ and Steve were able to get over 15k on Port Royal with just the Hybrid cards. That's pretty impressive. It's too bad the elmor unit doesn't have persistent settings otherwise that could have been a good option too.


What about
*ASUS ROG OC PANEL II ? To use with hotwire on the Strix cards?*


----------



## DrunknFoo

ExDarkxH said:


> thats not even close to what we are seeing right now
> 
> I'd like to see it though
> dont forget vince gets commission on these cards


Even with a small .01 to .015 voltage increase and a properly working power delivery BIOS (cough) under water, I'm sure those are average numbers for the card, and likely at the lower end of its score range.


----------



## dr/owned

The settings that Kingpin exposes look exactly like EVC2 stuff. NVVDD = core, FBVDD = frame buffer (maybe mem?), MSVDD = cache. Freq = PWM switching speed.


----------



## HyperMatrix

ExDarkxH said:


> are the voltage controls a software package?
> I'm an elite member and can sign up for the hydro, the issue is I already have a golden chip and dont think I will get this lucky again.
> My question is do you think i can buy the card, install the software, and then sell the card and use it on my ftw3?


If you're asking whether you can use the classified voltage controls on an FTW3, the answer is no. If you already have a golden chip, I wouldn't worry about replacing it. There's not that much more headroom in a gaming build if you're already hitting 2200MHz+. So unless you're just after short term benchmark scores, it's not worth it. However, reserve the KingPin if you can. Even if you don't take it, I or someone else on here will take it.




Nizzen said:


> What about
> *ASUS ROG OC PANEL II ?*


I had one of those for an Asus motherboard before. Is there a GPU version now?




dr/owned said:


> View attachment 2466095
> 
> 
> The settings that Kingpin exposes look exactly like EVC2 stuff. NVVDD = core, FBVDD = frame buffer (maybe mem?), MSVDD = cache. Freq = PWM switching speed.


Yeah but these settings are saved on the card. Set it and forget it. Elmor has to be redone after every boot.


----------



## DrunknFoo

dr/owned said:


> View attachment 2466095
> 
> 
> The settings that Kingpin exposes look exactly like EVC2 stuff. NVVDD = core, FBVDD = frame buffer (maybe mem?), MSVDD = cache. Freq = PWM switching speed.


likely the same controller


----------



## Chamidorix

You could of course script an elmor to apply on system startup. Not exactly user friendly though.
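Something like the following would do it, assuming you have some way to apply the settings non-interactively - the EVC2 GUI has no documented CLI, so `C:\tools\evc2_apply.bat` below is purely a hypothetical placeholder for whatever script performs your I2C writes. You then register it with Windows Task Scheduler so it fires at every logon:

```shell
REM Hypothetical sketch: reapply EVC2 voltage settings at each logon.
REM "C:\tools\evc2_apply.bat" is a placeholder, not a real EVC2 tool.
schtasks /Create /TN "EVC2Apply" /TR "C:\tools\evc2_apply.bat" /SC ONLOGON /RL HIGHEST
```

You'd still need to apply your Afterburner profile afterwards, since the voltages have to be in place before the overclock is stable.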


----------



## ExDarkxH

I was driving when they dropped the notify list this morning, so I actually failed the captcha, LOL. So technically I didn't get in line for the Hybrid, but I won't have any issue getting in line for the Hydro Copper - I'll just come to work early.


If you shunt the SRC resistor, what will that do exactly? Increase PWM switching speed? I shunted everything except the SRC.

Anyway, I'm getting 2158MHz average @ 55°C on air in Port Royal right now. So basically 2160 the entire run until it gets over 60°C at the end of the run and bins down.

Not exactly sure what I can achieve with a water block, but with some cold water I imagine I can drop it another 35°C.

This is why I am hesitant to switch cards, as I don't feel there will be any benefit, but I'm annoyed I can't use an EVC on the FTW3.


----------



## Nizzen

HyperMatrix said:


> If you're asking whether you can use the classified voltage controls on an FTW3, the answer is no. If you already have a golden chip, I wouldn't worry about replacing it. There's not that much more headroom in a gaming build if you're already hitting 2200MHz+. So unless you're just after short term benchmark scores, it's not worth it. However, reserve the KingPin if you can. Even if you don't take it, I or someone else on here will take it.
> 
> 
> 
> 
> I had one of those for an Asus motherboard before. Is there a GPU version now?
> 
> 
> 
> 
> Yeah but these settings are saved on the card. Set it and forget it. Elmor has to be redone after every boot.


The OC Panel II has a header to connect to the hotwire points on the Strix card for voltage control. I haven't seen anyone try it on the new cards.


----------



## HyperMatrix

ExDarkxH said:


> I was driving when they dropped the notify list this morning so i actually failed the captcha LOL. So technically i didnt get in line for the hybrid but i wont have any issue getting in line for the hydrocopper ill just come to work early.
> 
> 
> if you shunt the SRC resistor what will that do exactly? increase pwm switching speed? i shunted everything except the src
> 
> anyways, I'm getting 2,158Mhz average @ 55c on air in port royal right now. So basically 2,160 the entire run until in gets over 60 C at end of run and bins down
> 
> Not exactly sure what i can achieve with a water block but with some cold water I imagine I can drop it another 35c
> 
> This is why i am hesitant to switch cards as i dont feel there will be any benefit, but im annoyed i cant use an EVC on the ftw3


Lol if you're getting 2158MHz average on air...you're doing better than just Golden. That's a better bin than KingPin cards. But never hurts to reserve the KingPin.


----------



## ExDarkxH

so any power mod options for ftw3 users?



HyperMatrix said:


> Lol if you're getting 2158MHz average on air...you're doing better than just Golden. That's a better bin than KingPin cards. But never hurts to reserve the KingPin.





https://www.3dmark.com/pr/490499



yeah im very happy with the card. definitely got lucky


----------



## HyperMatrix

ExDarkxH said:


> so any power mod options for ftw3 users?
> 
> 
> 
> 
> https://www.3dmark.com/pr/490499
> 
> 
> 
> yeah im very happy with the card. definitely got lucky


I see what you did there...


----------



## ExDarkxH

HyperMatrix said:


> I see what you did there...
> 
> View attachment 2466097


LOL i didnt notice


----------



## HyperMatrix

ExDarkxH said:


> whats wrong with my memory clock?


Nothing wrong with it. It's very...1337. Am I too old? Does that not mean anything anymore?


----------



## ExDarkxH

no no it took me just a second. Been a few years since i last saw someone post it


----------



## HyperMatrix

ExDarkxH said:


> no no it took me just a second. Been a few years since i last saw someone post it


Lol. For the record I just ordered a $161 GTX 1650 video card so I can join the Elite Member club. I need that KingPin card.


----------



## Falkentyne

ExDarkxH said:


> I was driving when they dropped the notify list this morning so i actually failed the captcha LOL. So technically i didnt get in line for the hybrid but i wont have any issue getting in line for the hydrocopper ill just come to work early.
> 
> 
> if you shunt the SRC resistor what will that do exactly? increase pwm switching speed? i shunted everything except the src
> 
> anyways, I'm getting 2,158Mhz average @ 55c on air in port royal right now. So basically 2,160 the entire run until in gets over 60 C at end of run and bins down
> 
> Not exactly sure what i can achieve with a water block but with some cold water I imagine I can drop it another 35c
> 
> This is why i am hesitant to switch cards as i dont feel there will be any benefit, but im annoyed i cant use an EVC on the ftw3


Which resistor is the SRC Resistor? Is that the resistor linked to the "Current sensing" chip?
I assume it has to be #1 or #2 here:









RTX 3090 Founders Edition working shunt mod (www.overclock.net)




Since it's on the back side, I'm fairly sure the three shunts there are the two 8-pins and GPU core.

According to Igor's picture, #1 is close to "NVVDD PWM" and #2 is close to Current Sense.








NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence (www.igorslab.de)


----------



## dr/owned

I just spent $100 to ask Kingpin on livestream...1.2V is the ambient temperature voltage max according to him. [EDIT: technically donated so at least it goes to help some kitty cats ]


----------



## ExDarkxH

Not my card - it's the XC3, I believe - but the shunt is in purple.

MSVDD?


----------



## Falkentyne

ExDarkxH said:


> not my card its the xc3 i believe but the shunt is in purple
> 
> msvdd?


HWiNFO64 calls it "GPU PP Input Source Power".
What the hell is PP?

Power Plane?


----------



## ExDarkxH

"Added per-rail voltage and power monitoring"?

Anyone with knowledge on the subject care to chime in?

Maybe it's related to power balancing?
"Input PP Power (sum) is the total power provided by the PCIe slot"
Maybe if I shunt this, my board will think it's getting less power from the PCIe. Not the same as a PCIe shunt, but perhaps the balancing won't account for as much power coming from the PCIe and will increase the power to the rails.

My 3rd rail still only gets like 140W even after shunting, so maybe I still need to shunt this.

I've been trying to figure out for a while now what this is, but nobody seems to know.


----------



## Nizzen

dr/owned said:


> I just spent $100 to ask Kingpin on livestream...1.2V is the ambient temperature voltage max according to him. [EDIT: technically donated so at least it goes to help some kitty cats ]


I have tested my 3090 Strix OC shunt-modded on the stock cooler. At 22°C ambient, the card gets TOO hot  That's like 1.08V or so?


----------



## dr/owned

Nizzen said:


> I have tested my 3090 strix oc shuntmodded on stock cooler. In ambient 22c, tha cards gets TOOO hot  That's like 1.08v or so?


Yeah, there's realistically no chance of going to 1.2V outside of a waterblock...power scales roughly with the square of voltage, so thermals climb fast. All the cards top out at 1.08V, I think by NV rules, so to get to 1.2 anyway you're messing with the voltage controller. (Asus put headers on the TUF and Strix; with EVGA you have to buy the Kingpin to get headers.) Some hardcore people just solder wires directly to them.

The lower-tier cards have analog voltage controllers, which are basically "welp, you can't do anything with this" unless you can figure out whatever combo of resistors and capacitors is needed to set a voltage.
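For what it's worth, that "combo of resistors" usually comes down to the regulator's feedback divider: a typical analog controller regulates until its feedback pin sees an internal reference, so Vout = Vref * (1 + Rtop/Rbot), and the classic vmod solders an extra resistor in parallel with the bottom leg to lower it and raise the output. A sketch with illustrative values only - the 0.6V reference and the resistances are assumptions, not taken from any actual 30-series controller:

```python
# Generic feedback-divider math for an analog regulator:
# Vout = Vref * (1 + R_top / R_bot). The classic vmod puts an extra
# resistor in parallel with R_bot, lowering it and raising Vout.

def vout(vref: float, r_top: float, r_bot: float) -> float:
    """Regulator output for a given feedback divider."""
    return vref * (1 + r_top / r_bot)

def parallel(r1: float, r2: float) -> float:
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

vref = 0.6                           # assumed reference voltage, volts
r_top, r_bot = 10_000.0, 12_000.0    # assumed divider, ohms
print(round(vout(vref, r_top, r_bot), 3))  # stock output, ~1.1 V

# Add 100k in parallel with the bottom leg to nudge the output up
r_bot_mod = parallel(r_bot, 100_000.0)
print(round(vout(vref, r_top, r_bot_mod), 3))  # ~1.16 V after the mod
```

The smaller the parallel resistor, the bigger the bump, which is why the traditional hard mod uses a trimmer pot there instead of a fixed value.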


----------



## ExDarkxH

Nizzen said:


> I have tested my 3090 strix oc shuntmodded on stock cooler. In ambient 22c, tha cards gets TOOO hot  That's like 1.08v or so?


did you shunt the SRC ?🙃


----------



## Nizzen

ExDarkxH said:


> did you shunt the SRC ?🙃


Shunted them all 
Most people above me have sub-zero on the GPU, except my friend Carillo. He shunted them all too on the Strix, and he is on water too.
Port Royal. System on water:










Nzz
15671


----------



## dr/owned

Another good bit of info from Kingpin: "the last 100 MHz on the memory are where your score can go down"

So...go until it crashes and then -100?
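Not quite - because GDDR6X error correction silently retries corrupted transfers, the score droops before anything crashes, so the right procedure is to pick the offset that maximizes the benchmark score, not the highest offset that merely survives. A sketch of that tuning logic (the offsets and scores below are made-up illustrative numbers):

```python
# GDDR6X error correction retries corrupted transfers, so past a point
# higher memory clocks *lower* your score before anything crashes.
# Tune by benchmark score, not by "it didn't crash".

def best_offset(results: dict[int, float]) -> int:
    """Return the memory offset with the highest benchmark score."""
    return max(results, key=results.get)

runs = {
    +500:  14450.0,
    +750:  14580.0,
    +1000: 14690.0,
    +1100: 14610.0,  # error-correction retries eating into the score
    +1200: 14380.0,  # roughly the last ~100 MHz before a likely crash
}

print(best_offset(runs))  # prints 1000, despite +1200 still "passing"
```

In other words: sweep upward in steps, record the score at each step, and back off to whichever offset actually scored highest.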


----------



## Chamidorix

dr/owned said:


> I just spent $100 to ask Kingpin on livestream...1.2V is the ambient temperature voltage max according to him. [EDIT: technically donated so at least it goes to help some kitty cats ]


He also said right near the beginning of the stream that 1.4V, with 1.43V max, was safe for memory on stock ambient. Now we just need him to cough up a rough magical MSVDD voltage for ambient! (since apparently it has an optimal point per core frequency and temp)

Also, we knew about the memory behavior from the whitepaper - the error correction gives a wider margin between "unstable" and "crashing". Cool to see it clarified at ~100MHz though. Really a great number of nuggets in this 4-hour+ stream.


----------



## dr/owned

Chamidorix said:


> He also said right near the beginning of the stream that 1.4v with 1.43 max was safe for memory on stock ambient. Now we just need him to cough up a rough magical msvdd voltage for ambient! (since apparently it has an optimal point per core frequency and temp)
> 
> Also we knew about the memory behavior back from the whitepaper. The error correction to give a wider margin between unstable and crashing. Cool to see it clarified at ~100mhz though. Really a great number of nuggets in this 4 hour+ stream.


In screenshots I've seen 0.9V (stock). When Steve was at 1.2V core he had it set to 1.13V. In later screenshots he's up at 1.2V MSVDD. So probably <1.15V for ambient is OK?


----------



## Nizzen

dr/owned said:


> In screenshots I've seen 0.9V (stock). When Steve was at 1.2V core he had it set to 1.13V. Later screenshots he's up at 1.2V MSVDD. So probably <1.15V for ambient is OK?


Ambient water in games.


----------



## DrunknFoo

ExDarkxH said:


> I was driving when they dropped the notify list this morning so i actually failed the captcha LOL. So technically i didnt get in line for the hybrid but i wont have any issue getting in line for the hydrocopper ill just come to work early.
> 
> 
> if you shunt the SRC resistor what will that do exactly? increase pwm switching speed? i shunted everything except the src
> 
> anyways, I'm getting 2,158Mhz average @ 55c on air in port royal right now. So basically 2,160 the entire run until in gets over 60 C at end of run and bins down
> 
> Not exactly sure what i can achieve with a water block but with some cold water I imagine I can drop it another 35c
> 
> This is why i am hesitant to switch cards as i dont feel there will be any benefit, but im annoyed i cant use an EVC on the ftw3


Whatever the SRC did, it appears the average power draw went up by about 20W. This is across 4 reboots. I can't emphasize enough that simple reboots of the system can vary this card's power draw and voltage behavior. It is strange. But yeah, I think I did post telling you to shunt it.


----------



## Falkentyne

ExDarkxH said:


> did you shunt the SRC ?🙃


I shorted all six shunts on the FE.
PP Source Input power can't be the PCIe slot. I've seen PCIe at 40W and SRC at 0W. Then I've seen PCIe at like 0W and SRC at 30W, which makes no sense.


----------



## asdkj1740

The Classified/Kingpin voltage tool is no longer available on xdevs.com.
Some dude said that since Turing, the 2080 Ti Kingpin voltage tool has had to be applied for, rather than freely downloaded from xdevs.com.


----------



## Chamidorix

It will be a ****show, I'm sure, as always - obtaining and sharing the BIOS, Classified tool, XOC BIOS, etc. that will come with the Kingpin launch.

Now I'm trying to find this SmashClocks utility Kingpin gave out as a "cookie" tip, which monitors internal clocks not displayed by AB/PS.


----------



## Falkentyne

Chamidorix said:


> It will be a ****show I'm sure as always obtaining and sharing the bios, classified tool, XOC bios etc that will come with Kingpin launch.
> 
> Now I'm trying to find this SmashClocks utility Kingpin gave out as a "cookie" tip to monitor internal clocks not displayed by AB/PS.


Chamidorix,
is there really a VRM temperature and memory temperature sensor that is not exposed to NVAPI or the user, but that can be read?


----------



## Chamidorix

asdkj1740 said:


> this classified/kingpin voltage tool is no longer avaliable on xdev.com.
> some dude said since turing the 2080ti kingpin voltage tool has to be applied for, rather than freely download from xdev.com.


Also, it actually still seems to be right here: https://xdevs.com/guide/2080ti_kpe/#swvtune



https://xdevs.com/doc/_PC_HW/EVGA/E200/tool/Classified_r4.zip


----------



## Cholerikerklaus

I have a problem with my new processor. I had a 9900K before and got 14,700 points with the 3090 FE. With the new Ryzen 5950X I only get 14,200 points. I use 4000MHz RAM and a 2000MHz Infinity Fabric clock with good timings. My CPU score in Time Spy is 16,900 points.
But unfortunately I am losing so many points in Port Royal. Does anyone have any idea why, or how I can fix it?



https://www.3dmark.com/compare/pr/499007/pr/469325


----------



## HyperMatrix

Cholerikerklaus said:


> I have a problem with my new processor. I had a 9900k before and had 14,700 points with the 3090 FE. With the new Ryzen 5950x I only get 14200 points. I use 4000mhz ram and 2000mhz inf. with good timings. My CPU Score is 16900 Points in CPU Score in Timespy.
> But unfortunately I am losing so many points in Port Royal, does anyone have any idea why it is or how I can fix it?
> 
> 
> 
> https://www.3dmark.com/compare/pr/499007/pr/469325


I don't know the specifics of your setup, but TechGage did benchmarks on the RTX 3080 with a 10900K and then "upgraded" to a 5950X. In some games, their FPS was 24% higher with the 10900K than with the 5950X. Are you getting a lower score on more CPU-bound benchmarks like Fire Strike?


----------



## Chamidorix

Falkentyne said:


> Chamidorix,
> is there really a VRM temperature and Memory temperature sensor that is not exposed to NVAPI or the user, that can be read ?


It's not the temp, but a clock reading on the MSVDD (cache/misc) rail and the ability to change it. Unfortunately it is part of the GPU overclocking libraries in the API, which are under NDA. I have a work buddy who has access to an NDA version and I asked him to check if there were headers for the cache rail. Now, I wasn't super specific at the time about it being called MSVDD/miscellaneous voltage, so it's possible he understood wrong, but he said there were headers for reading and modifying 3 separate clocks on 1V planes.


----------



## Falkentyne

Chamidorix said:


> Its not the temp, but a clock reading on the MSVDD (cache/misc) and the ability to change it. Unfortunately it is part of the GPU Overlocking libraries in the API which are under NDA. I have a work buddy who has access to an NDA version and I asked him to to check if there were headers for the cache rail. Now, I wasn't super specific at the time on it being called MSVDD/miscellaneous voltage so its possible he understood wrong but he said there were headers for reading and modifying 3 separate clocks on 1V planes.


What is it that triggers the "Thermal Throttling" limit flag even when the core is only at, like, 65°C?
This happened to someone when they didn't put the thermal pads on properly.
Usually thermal should only flag if the core is at 83°C+...

In one of Igor's pictures, he had "Memory temperature" exposed.








NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X (www.igorslab.de)


----------



## dr/owned

Falkentyne said:


> What is it that triggers "Thermal Throttling" flag limit even if the core is only like at 65C?


It's probable that there are a few temp sensors that aren't reported outside of the BIOS: multiple memory sensors, package sensors, back-of-socket, VRM, etc. It makes sense that if you're trying to minimize warranty problems, you'd want a ton of hardware temperature limits.


----------



## Cholerikerklaus

HyperMatrix said:


> I don't know the specifics of your setup, but TechGage benchmarked the RTX 3080 with a 10900K and then "upgraded" to a 5950X. In some games, their FPS was 24% higher with the 10900K than with the 5950X. Are you getting lower scores in more CPU-bound benchmarks like Fire Strike?


I haven't tried Fire Strike before, so I can't say.


----------



## Falkentyne

dr/owned said:


> It's probable that there are a few temp sensors that aren't reporting outside of the bios. Multiple memory sensors, package sensors, back of socket, VRM, etc. It makes sense that if you're trying to minimize warranty problems you want a ton of hardware temperature limits.


It would help minimize warranty claims even more if the temperature sensors were exposed to the user. Then they could see "oh, my VRAM is overheating" (because of a too-thick pad on the hotspot or bad contact with the VRAM) or "my hotspot is overheating" because the PCB/VRM areas are getting too hot from the other side of the card.

My 3090 had a strange stuttering problem with RTSS's "Scanline Sync", where it seemed to not be syncing perfectly to the refresh rate, and Fast Sync + Scanline Sync was causing some massive stutters, which I attributed to Nvidia driver problems. "Thermal" was never flagged as a limit reason. Yet this went away when I added thermal pads on the hotspots.


----------



## Cholerikerklaus

Anyone else have a tip on what I can do now? I can't believe this 5950X is slower than my 9900K in Port Royal. I want my points back.
Is there some secret setting I don't know about when using the Ryzen?


----------



## ShadowYuna

Has anyone gotten increased coil whine after installing the EK Vector Strix water block + EK Vector Strix backplate on an RTX 3090 Strix?

Mine got crazy coil whine (very loud, worse than with the stock cooler) after installing both the water block and backplate.

So I put a 2mm thermal pad on the right capacitor, and it reduced the coil whine, but it's still worse than the stock cooler (full load, 0% fan speed to test the coil whine noise).

Is anyone using a Strix with a non-EK water block? If so, what brand, and how is your coil whine?


----------



## chispy

ShadowYuna said:


> Has anyone gotten increased coil whine after installing the EK Vector Strix water block + EK Vector Strix backplate on an RTX 3090 Strix?
> 
> Mine got crazy coil whine (very loud, worse than with the stock cooler) after installing both the water block and backplate.
> 
> So I put a 2mm thermal pad on the right capacitor, and it reduced the coil whine, but it's still worse than the stock cooler (full load, 0% fan speed to test the coil whine noise).
> 
> Is anyone using a Strix with a non-EK water block? If so, what brand, and how is your coil whine?


I'm using a Bykski 3090 water block for my Strix, no coil whine here.


----------



## dr/owned

chispy said:


> I'm using a Bykski 3090 water block for my Strix, no coil whine here.



I'm using a Bykski block on my 1080 Ti and bought another for my 3090 TUF. Barrow also enters the chat with a block that looks like it may have a denser fin stack, so I'm not sure... thinking about buying both and seeing which wins, but the "***" of changing pads out twice. The Bykski seems a little less lazy about directing flow over the VRM.

Barrow: 









Bykski:


----------



## DrunknFoo

Falkentyne said:


> What is it that triggers "Thermal Throttling" flag limit even if the core is only like at 65C?
> This happened to someone when they didn't put the thermal pads on properly.
> Usually thermal should only flag if the core is at 83C+...


There's a temperature limit for any given clock frequency. The Micron RAM isn't harmed by this, as it automatically downclocks based on temps and continues to function.


----------



## mirkendargen

ShadowYuna said:


> Has anyone gotten increased coil whine after installing the EK Vector Strix water block + EK Vector Strix backplate on an RTX 3090 Strix?
> 
> Mine got crazy coil whine (very loud, worse than with the stock cooler) after installing both the water block and backplate.
> 
> So I put a 2mm thermal pad on the right capacitor, and it reduced the coil whine, but it's still worse than the stock cooler (full load, 0% fan speed to test the coil whine noise).
> 
> Is anyone using a Strix with a non-EK water block? If so, what brand, and how is your coil whine?


Also using a Bykski Strix block. I have some whine, but not what I would call "loud" when putting 500w through a card. It's no louder than my 2080ti or 1080ti were.


----------



## DrunknFoo

mirkendargen said:


> Also using a Bykski Strix block. I have some whine, but not what I would call "loud" when putting 500w through a card. It's no louder than my 2080ti or 1080ti were.


Dampen them with extra thermal pads if you have some; it'll help. You've got to find squishy ones, though.


----------



## mirkendargen

DrunknFoo said:


> dampen them with extra thermal pads if you got, it'll help, gotta find some squishy ones though


Yeah I have a big squishy set, but it doesn't bother me enough to deal with taking off my block or backplate (I have a RAM block attached to my backplate with thermal adhesive) to try and fix it, heh.


----------



## djxinator

Just flashed the EVGA XOC BIOS onto my Aorus Xtreme and broke 15k in Port Royal: 15,056. For reference, my best score on the stock 450W BIOS was 14,816. This is on air.

An oddity, though: I was holding much higher average clocks (lower max boost clock, though) on my initial run with the 500W BIOS but ended up with 14,660, 200 points lower than my best 450W run. I really had to work for the 15k with the new BIOS. I'm not sure how I go from 2070 average boost to 2100 average boost but lose score. I'm happy in the end that I managed 15k+, but I feel like there's more in it and I screwed up a setting somewhere.

Highest run with XOC BIOS - https://www.3dmark.com/pr/522183
Highest run with stock Aorus BIOS - https://www.3dmark.com/pr/520267


----------



## ExDarkxH

So I went ahead and shunted the SRC and gained a ton of power. It fixed the issue where my 3rd pin was only pulling 140W; now all of them pull well into the 200s.
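For context on why shunted cards misreport power: the controller infers current from the voltage drop across a fixed shunt resistor, so soldering a second shunt on top lowers the effective resistance and the reported figure in the same ratio. A rough sketch of the arithmetic (the 5 mΩ value is illustrative, not measured from any card in this thread):

```python
# Shunt-mod arithmetic sketch. The 5 mOhm shunt value is illustrative,
# not taken from any specific card.
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two shunt resistors stacked in parallel."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def actual_power(reported_w: float, r_orig_mohm: float, r_mod_mohm: float) -> float:
    """The controller computes current from the voltage drop assuming the
    original shunt value, so it under-reads by r_mod/r_orig; scale back up."""
    return reported_w * (r_orig_mohm / r_mod_mohm)

r_mod = parallel(5.0, 5.0)              # stacking an identical shunt halves R
print(r_mod)                            # 2.5
print(actual_power(140.0, 5.0, r_mod))  # a rail reading 140W is really pulling 280W
```

This is also why monitoring tools need a fixed correction multiplier after a shunt mod: the card's own telemetry scales down by exactly the resistance ratio.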


----------



## ExDarkxH

djxinator said:


> Just flashed the EVGA XOC BIOS onto my Aorus Xtreme, broke 15k on Port Royal, 15056. For reference, my best score on the stock 450w bios was 14816. This is on air.


nice score and card 👏
you got a winner


----------



## Falkentyne

ExDarkxH said:


> So I went ahead and shunted the SRC and gained a ton of power. It fixed the issue where my 3rd pin was only pulling 140W; now all of them pull well into the 200s.
> 
> I did a quick test run in Fire Strike (it's actually my first and only Fire Strike bench on this card).
> 
> 
> https://www.3dmark.com/fs/24076324
> 
> 
> 
> Is it that nobody runs/cares about this test? Because I ran a low offset, and apparently 51,352 is like a top 5 or 6 graphics score.
> 
> Tried out PR as well but haven't gained anything on my score on air, unfortunately 😕


I just ran Fire Strike and got this.



https://www.3dmark.com/3dm/53345463?



I have no idea why I can't pass 14,660 on port royal, though (not triggering PWR). Maybe I'll try it again.


----------



## ShadowYuna

mirkendargen said:


> Also using a Bykski Strix block. I have some whine, but not what I would call "loud" when putting 500w through a card. It's no louder than my 2080ti or 1080ti were.


Thanks. I might just wait for a Bitspower or Alphacool block.

I'll try another block.


----------



## originxt

djxinator said:


> Just flashed the EVGA XOC BIOS onto my Aorus Xtreme, broke 15k on Port Royal, 15056. For reference, my best score on the stock 450w bios was 14816. This is on air.
> 
> An oddity though, I was holding way higher clocks average (lower max boost clock though) on my initial run with the 500w BIOS but ended up with 14660, 200 points lower than my highest 450w run. I really had to work for the 15k with the new BIOS. Im not sure how I go from 2070 average boost to 2100 average boost but lose score. I'm happy in the end that I managed a 15k+ but I feel like there's more in it and I screwed up a setting somewhere.
> 
> Highest run with XOC BIOS - https://www.3dmark.com/pr/522183
> Highest run with stock Aorus BIOS - https://www.3dmark.com/pr/520267


People are lucky with their cards; I'm barely breaking 14,200 on mine lol.



https://www.3dmark.com/pr/381731


----------



## Nizzen

ExDarkxH said:


> So I went ahead and shunted the SRC and gained a ton of power. It fixed the issue where my 3rd pin was only pulling 140W; now all of them pull well into the 200s.
> 
> I did a quick test run in Fire Strike (it's actually my first and only Fire Strike bench on this card).
> 
> 
> https://www.3dmark.com/fs/24076324
> 
> 
> 
> Is it that nobody runs/cares about this test? Because I ran a low offset, and apparently 51,352 is like a top 5 or 6 graphics score.
> 
> Tried out PR as well but haven't gained anything on my score on air, unfortunately 😕


Fire Strike at 1080p is basically a CPU benchmark.

Try Fire Strike Ultra.


----------



## Nizzen

originxt said:


> People are lucky with their cards, barely breaking 14200 on mine lol.
> 
> 
> 
> https://www.3dmark.com/pr/381731


There is no "luck" in shunt modding and watercooling.


----------



## Falkentyne

ExDarkxH said:


> So I went ahead and shunted the SRC and gained a ton of power. It fixed the issue where my 3rd pin was only pulling 140W; now all of them pull well into the 200s.
> 
> I did a quick test run in Fire Strike (it's actually my first and only Fire Strike bench on this card).
> 
> 
> https://www.3dmark.com/fs/24076324
> 
> 
> 
> Is it that nobody runs/cares about this test? Because I ran a low offset, and apparently 51,352 is like a top 5 or 6 graphics score.
> 
> Tried out PR as well but haven't gained anything on my score on air, unfortunately 😕


I just ran Port Royal and got this.



https://www.3dmark.com/3dm/53345733?


Almost the same as my last PR score, 14,667 (which was this: https://www.3dmark.com/pr/518779).

PWR was being triggered because it was getting close to the 114% power limit, but you can see it didn't downclock except from temps (it got up to 66C).
I know the PCIe slot will trigger PWR at 68W in GPU-Z (HWiNFO shows corrected values, 1.45x higher).


----------



## ExDarkxH

I actually noticed hardly anyone has tried the new feature test.
From what I've seen on these 3090s:

60 fps: very average. Stock BIOS and mild ambient temps.

61 fps: a little above average.

62.5 fps: good score!

63 fps: great score. You probably need to shunt or have a nice loop to get this.

64 fps: exceptional score. Golden chip + mods.

65 fps: haven't seen it yet. I'm sure it's doable, but it's honestly the equivalent of roughly 15,500-15,700 in Port Royal. You need a hell of a lot to get 65 fps: a shunt or unlocked BIOS, maybe power mods as well, definitely watercooling, a golden chip, cold ambient temps + cold water, a fresh boot, and max clocks at the edge of stability.
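Those tiers can be expressed as a quick lookup; the thresholds are informal forum estimates from the post above, not anything official from 3DMark.

```python
import bisect

# Rough DXR feature-test tiers for an RTX 3090, taken from the forum
# post above -- informal estimates, not official ranges.
TIERS = [
    (60.0, "very average (stock BIOS, mild ambient)"),
    (61.0, "a little above average"),
    (62.5, "good score"),
    (63.0, "great score (shunt or a nice loop)"),
    (64.0, "exceptional (golden chip + mods)"),
    (65.0, "unseen so far (~15,500-15,700 Port Royal equivalent)"),
]

def rate_dxr(fps: float) -> str:
    """Return the tier label for a DXR feature-test result."""
    thresholds = [t for t, _ in TIERS]
    i = bisect.bisect_right(thresholds, fps) - 1
    if i < 0:
        return "below average"
    return TIERS[i][1]

print(rate_dxr(60.11))  # Falkentyne's run later in the thread
print(rate_dxr(63.77))  # Zooms' run later in the thread
```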


----------



## ExDarkxH

Falkentyne said:


> I just ran Port Royal and got this.
> 
> 
> 
> https://www.3dmark.com/3dm/53345733?
> 
> 
> Almost the same as my last PR score, 14667.
> (which was this). https://www.3dmark.com/pr/518779
> 
> PWR was being triggered because it was getting close to the 114% power limit, but you can see it didn't downclock except because of temps. (it got up to 66C).
> I know PCIE slot will trigger pwr at 68W on GPU_Z (hwinfo has corrected values of 1.45x).
> View attachment 2466137


Can you try the DXR feature test and see where you fall? I have a good idea of what a good score is from a thread on a different forum (since there are no leaderboards).
Because of the kind of test PR is, this can help you identify a potential bottleneck.


----------



## Falkentyne

ExDarkxH said:


> Can you try the DXR feature test and see where you fall? I have a good idea of what a good score is from a thread on a different forum (since there are no leaderboards).
> Because of the kind of test PR is, this can help you identify a potential bottleneck.


Yes I scored 60.11



https://www.3dmark.com/3dm/53346362?


----------



## ExDarkxH

You should be able to run extremely high clocks for this test. 60 is a little low; can you push for higher?
There are so many moving parts on Ampere.
Your card might perform worse in ray tracing, though the Fire Strike score is exceptional. So in certain games your FPS will pull ahead, depending on what the game needs and how the engine is built.


----------



## Falkentyne

ExDarkxH said:


> U should be able to run extremely high clocks for this test. 60 is a little low can u push for higher?


I'm going to play video games now :/
Also, I'm on the 456.98 hotfix driver because of crashes and issues with Warzone/COD on 457.30 that other people reported and Nvidia confirmed.


----------



## mirkendargen

ExDarkxH said:


> I actually noticed hardly anyone tried the new feature test.
> From what Ive seen on these 3090s
> 
> 60fps. Very average. Stock bios and mild ambient temps
> 
> 61fps a little above average
> 
> 62.50fps good score!
> 
> 63fps great score. You probably need to shunt or have a nice loop to get this
> 
> 64fps. Exceptional score. Golden chip + mods
> 
> 65fps Havent seen it yet. Im sure its doable but its honestly the equivalent of like 15,500-15,700 port royal. You need a hell of a lot to get 65fps. shunt or unlocked bios. Maybe power as well. For sure watercool. Golden chip. Run cold ambient temps + cold water. Fresh boot and max clocks at edge of stability


My experience with the DXR test is that it's never power limited, so shunts or not won't make a difference. Memory clock also makes no difference; it's purely core clock.

Since it isn't power limited, using an elmor to crank the core voltage to 1.2V or so, letting you clock higher stably, would be the most beneficial, but that's a very contrived situation that doesn't match real use cases.


----------



## Zooms

I got 63.77: https://www.3dmark.com/dxr/20776

Maybe with a lucky bench I can reach 64, but I think that's my limit.
I'm on a custom water loop (CPU + GPU, EKWB) with a 3090 Asus Strix OC.
Stock Asus 480W BIOS, no shunt, no mods.

My OC is +206 core clock / +1206 memory.

Maybe the next level is a 500W BIOS; I don't want to shunt.
And maybe some more tricks to optimize: NVIDIA and Windows max-performance power settings, disabling G-Sync, turning off Windows visual effects, and disconnecting/unplugging everything from my PC, internet too, just the mouse.

I'm running a stripped-down Windows with only the essentials, very light.

I also watched the EVGA livestream with Steve and saw some software with GPU tuning options, but I don't know what it is.

And I read that you can disable the GPU's power saving with KBoost, but is that a good idea?

I also got 14,972 in Port Royal with the same settings: http://www.3dmark.com/pr/522778
Maybe I can reach 15,000.

And I think I'm limited by PCIe 3.0 and my 9900K at 5GHz... an OC'd i9-10900K could probably beat my scores.


----------



## Thebc2

Nizzen said:


> Show me 2200MHz on average in Port Royal, and I'm impressed...


Done

https://www.3dmark.com/pr/523155
No hard mods yet, just the EVGA XOC BIOS and a waterblock. I think I'm still power limited.


----------



## DrunknFoo

Thebc2 said:


> Done
> 
> https://www.3dmark.com/pr/523155
> No hard mods yet, just the EVGA XOC bios and waterblock. Think I am still power limited.


well done


----------



## DrunknFoo

and wow


https://www.3dmark.com/pr/512942


----------



## profundido

Baasha said:


> lol.. I really want to do water-cooling (actually I want to get a chiller and do that) but I'm concerned about leaks etc. never done water-cooling before and it seems too cumbersome.
> 
> I have 4x FE 3090s and 1 FTW3 Ultra.
> 
> I need to flash the FTW3 Ultra with the 500W BIOS - is there a guide for that?
> 
> I would like to shunt all 4 FE's. how hard/easy is it?


hey mate. I remember you from many, many years ago, always having the latest and greatest (and more than one) card, even quad SLI at some point with those insane BF scores. I can only say I personally ended up moving from AIOs to full custom loops X years ago, and looking back I wish I had done it sooner. After the initial learning curve, the silence, the flexibility, the comfort... it's unmatched when stuff gets hot. And it's fun too!

Seeing you've got 4 FE 3090s again, I would slam blocks on them and build a "radiator wall" or something outside the case.


----------



## Nizzen

Thebc2 said:


> Done
> 
> https://www.3dmark.com/pr/523155
> No hard mods yet, just the EVGA XOC bios and waterblock. Think I am still power limited.


Nice job!

Take it easy, you are soon beating me 



https://www.3dmark.com/compare/pr/523155/pr/451869


----------



## profundido

Apecos said:


> What is your gpu? Be careful because the gigabyte 3090 gaming oc has one more power phase than ventus (for example).


This one:



https://www.inno3d.com/images/products/spec_sheet/products_id_566.pdf



It's already a 370W card by default, so I would expect that 5% more power won't break it.

Can't see the exact number of power phases mentioned there.


----------



## escapee

This XOC BIOS works wonders when coupled with extensive hardware mods and super cold temps, on a Ryzen at 5.6GHz+ all-core!


----------



## Benni231990

holy **** we need this bios

please, for 2x 8-pin and 3x 8-pin cards


----------



## HyperMatrix

escapee said:


> This xoc bios works wonders when coupled with extensive hardware mods and super cold temps on ryzen with 5.6ghz+ all core!


I saw your score on port royal hof, hacker. Haha. Now share that XOC! You'll be saving a lot of people the warranty on their cards by avoiding shunting. Also the build date you scratched off clearly says 2020-10-21.


----------



## long2905

Holys**t if this works on 2 8pin cards without shunting


----------



## escapee

HyperMatrix said:


> I saw your score on port royal hof, hacker. Haha. Now share that XOC! You'll be saving a lot of people the warranty on their cards by avoiding shunting. Also the build date you scratched off clearly says 2020-10-21.


This BIOS is actually quite dangerous since it disables some sensors (e.g. the temperature ones), so you can easily blow the card up if you aren't careful and look away.


----------



## chispy

escapee said:


> This bios is actually quite dangerous since it disables some sensors eg temp ones, so you can easily blow the card up if you aren't careful and look away.


Is this kingpin xoc bios or Asus ?


----------



## HyperMatrix

escapee said:


> This bios is actually quite dangerous since it disables some sensors eg temp ones, so you can easily blow the card up if you aren't careful and look away.


Shouldn’t be an issue if under water and not volt modding though, right?


----------



## escapee

HyperMatrix said:


> Shouldn’t be an issue if under water and not volt modding though, right?


If the pump fails or stops for whatever reason, the temp climbs very quickly, and before you know it you'll have a crater in your die.


----------



## chispy

escapee said:


> If the pump fails or stops for whatever reason the temp climbs very quickly and before you know it you'll have a crater on your die


Can you post a Time Spy score on HWBOT?


----------



## HyperMatrix

escapee said:


> If the pump fails or stops for whatever reason the temp climbs very quickly and before you know it you'll have a crater on your die


It’s a good thing I use dual pump. Now hand it over. Haha. Yeah I understand the hesitation around it. Can definitely imagine people who don’t know what they’re doing ruining cards and blaming you or whoever made the bios or trying to warranty claim something that’s completely their own fault. Like that guy who repasted his GPU and misplaced a thermal pad somewhere, causing his card to start thermal throttling at 60C. On this bios that would have blown his card.


----------



## escapee

chispy said:


> Can you post a Timespy score at hwbot ?


My AM4 socket got a little flooded. When I bring the setup back online, sure thing.


----------



## chispy

escapee said:


> My am4 socket got a little flooded. When I bring the setup back online sure thing


Thanks. Maybe Fire Strike Extreme and Ultra too, if you have the time.


----------



## asdkj1740

Have you tried that on dual 8-pin cards? Or can you read the BIOS in a hex editor to check the power limit for each 8-pin connector as well as the PCIe slot?

Do you know if there's any chance of such an XOC BIOS for the 3080?



I guess 1000W would be split into maybe 100W for the PCIe slot and 300W for each of the three 8-pins?
Then dual 8-pin cards should be able to enjoy it too.
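That guessed split is easy to sanity-check with a couple of lines. Note the 100W/300W figures are the poster's speculation, and the 75W/150W values are nominal PCIe/ATX connector ratings for comparison, not anything an XOC BIOS necessarily enforces.

```python
# Sanity-check the speculated 1000W split from the post above.
# 100W slot + 300W per 8-pin is the poster's guess; 75W slot and
# 150W per 8-pin are the nominal connector spec ratings.
def total_budget(slot_w: float, per_8pin_w: float, n_8pin: int) -> float:
    """Total board power implied by a per-rail split."""
    return slot_w + per_8pin_w * n_8pin

print(total_budget(100, 300, 3))  # speculated triple 8-pin split -> 1000
print(total_budget(100, 300, 2))  # same per-rail split on a dual 8-pin card -> 700
print(total_budget(75, 150, 2))   # in-spec ceiling for a dual 8-pin card -> 375
```

So even if the per-rail limits carried over unchanged, a dual 8-pin card would see a 700W ceiling rather than the full 1000W.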


----------



## motivman

I think I have one of the fastest 2x 8-pin cards in the world... results at 26C ambient air. I wonder how fast this thing could go if I could get my room colder... top 25 in the world for single cards in PR, lol.



https://www.3dmark.com/pr/524063


----------



## GTANY

escapee said:


> This xoc bios works wonders when coupled with extensive hardware mods and super cold temps on ryzen with 5.6ghz+ all core!


Wonderful! The BIOS that will save all of us. With watercooling, the 3090 will fly. It will save me from shunting and losing the warranty.


----------



## defiledge

motivman said:


> I think I have one of the fastest 2X8 pin cards in the world... results in 26C ambient air. Wonder how fast this thing can go if I could get my room colder... TOP 25 in the world for single cards on PR, lol.
> 
> 
> 
> https://www.3dmark.com/pr/524063


You forgot to mention that it's a shunt-modded 2x8pin.


----------



## kx11

Some OC progress, and getting a bit better at RAM OC with Ryzen 3000. Currently running 3600MHz on 4 single-rank Dominator sticks rated for 3200MHz. The Corsair guide is very useful:



https://www.corsair.com/corsairmedia/sys_master/productcontent/Ryzen3000_MemoryOverclockingGuide.pdf


----------



## Falkentyne

GTANY said:


> Wonderful ! The bios which will save all of us. With watercooling, the 3090 will fly. It will avoid me from shunting and losing the warranty.


Oh, you will lose the warranty. What happens if one of your thermal pads is on wrong and the VRM/GDDR6X overheats, but the card doesn't signal "Thermal" to alert you? The card will slowly fry, or a RAM chip will die, and then what? You're going to claim warranty when you just cratered your card? I sure hope not!!!!

Are you people actually thinking before you ask for this BIOS? THIS IS NOT THE BIOS YOU WANT!!

You want a BIOS with all thermal protections intact. Not disabled.


----------



## djxinator

I've flashed the BIOS back to the stock 450W one on my Aorus Xtreme now; I've done sufficient benching on air. I was considering leaving it on the 500W BIOS and tuning the power limit down to 475W (seems to be the sweet spot for my card), but I'm honestly new to the crossflashing side of this scene, the fan control is a bit wonky (its fans, rated for 2600 RPM, run at 3000 RPM), and I'm not sure how strong the power delivery is on this card.


----------



## ExDarkxH

you guys getting pm's with the bios or something?
no bios found lol


----------



## Thebc2

Nizzen said:


> Nice job!
> 
> Take it easy, you are soon beating me
> 
> 
> 
> https://www.3dmark.com/compare/pr/523155/pr/451869


Lol I saw you up there. Goals.




----------



## ExDarkxH

Unfortunately, Escapee felt the need to have 2 different 3DMark accounts, so now top 10 isn't realistic.
Currently targeting BradweLL, the REAL #1.

Everyone in the top 10 has an N/A temperature.


----------



## Falkentyne

ExDarkxH said:


> you guys getting pm's with the bios or something?
> no bios found lol


I just can't believe people who are not LN2'ers are stupid enough to want the 1000W VBIOS.
If this thing gets into the wild, there are going to be a hell of a lot of dead cards from kids and young adults going "But muh powa limit!"
The failures usually won't be from stock cards, unless they're trying to play a brand new game at 4K that pulls 700W. They'll be from people modding the cooling, making a mistake somewhere, and the card melting instead of signaling "Thermal".

My 3090 signaled Thermal at 68C in 4K in Overwatch a while ago. When I first shunt modded and then added hotspot pads to fix a frame-stuttering issue the stock unopened setup had, I ended up with a messed-up hotspot pad stuck on top of another hotspot pad (not sure how; it probably moved and stacked on itself), preventing the VRAM and other hotspot pads from making full contact with the backplate. Now imagine if that were on an XOC BIOS with thermal and OCP limits disabled?

Yeah.


----------



## mirkendargen

Falkentyne said:


> I just can't believe people who are not LN2'ers are stupid enough to want the 1000W Vbios.
> If this thing gets into the wild, there are going to be a hell of a lot of dead cards from kids and young adults going "But muh powa limit!"
> The failures usually won't be from stock cards unless they're trying to play a brand new game at 4k that will try to pull 700W. They will be from people modding the cooling and making a mistake somewhere and the card melting instead of signaling "Thermal"
> 
> My 3090 signaled Thermal at 68C in 4k, in Overwatch, awhile ago, because when I first shunt modded and then added hotspot pads to fix a frame stuttering issue the stock unopened setup had, I ended up with a messed up hotspot pad sticking on top of another hotspot pad (not sure how, probably moved and stacked on itself), preventing the Vram and other hotspot pads from making full contact with the backplate. Now imagine if that were on an XOC Bios with thermal and OCP limits disabled?
> 
> Yeah.


There were multiple 1000W 2080 Ti BIOSes; there wasn't any rash of card fires.


----------



## dangerSK

escapee said:


> This xoc bios works wonders when coupled with extensive hardware mods and super cold temps on ryzen with 5.6ghz+ all core!


Is it possible for me to get that 1000W BIOS, please? I want to run my FTW3 on subzero and do some testing. If so, can you PM me please?


----------



## motivman

defiledge said:


> You forgot to mention that it's a shunt-modded 2x8pin.


Oh yeah, I thought that was quite obvious: my card is shunted. The best I could do non-shunted was 14.4K... The highest-scoring non-shunted 3090 I have seen is in post #3,423, at 14.7K, and his temps were super low, like 20C. I think it's literally impossible for a non-modded 2x 8-pin to hit 15K.


----------



## Falkentyne

dangerSK said:


> Is it possible for me to get that 1000W bios please  wanna run FTW3 on subzero and do some testing


I don't mind responsible LN2'ers getting these BIOSes. But this is the day of the viral internet: if it gets leaked anywhere, someone is going to post it on their OneDrive or Google Drive, and then it's good game. For example, I still have the Z490 VRM tool Asus gave me. I've posted pictures of it, but to this day I have not leaked it and will not leak it. People can destroy their boards with things like this.


----------



## dangerSK

Falkentyne said:


> I don't mind responsible LN2'ers getting these bioses. But this is the day of the viral internet. If it gets leaked anywhere, someone is going to post it on their onedrive or google drive and then it's going to be good game. For example I still have the Z490 VRM tool Asus gave me. I've posted pictures of it but I still to this day have not leaked it and will not leak it. People can destroy their boards with things like this.


Well yeah, that's true. Idk man, I'm just asking. I'm an "LN2er" and many NDA things have come my way, including the Lightning 2080 Ti XOC NDA pack (also not leaked to this day), so I'm sure there won't be any problems. 
Btw: IIRC, OCP hasn't been handled by the BIOS since around the 9xx NVIDIA series; you have to disable it over I2C, so you need the Classified tool or the Galax NVDD tool (as examples). Your worst worry should be temps, and I hope nobody is idiotic enough to run an LN2 BIOS on air.


----------



## Daneth

djxinator said:


> I've flicked the BIOS back over to stock 450w on my Aorus Xtreme now, I've done sufficient benching on air. I was considering leaving it on the 500w bios and tuning the power limit to 475w (seems to be the sweet spot for my card) but i'm honestly new to the crossflashing side of this scene, the fan control is a bit wonky (its fans rated for 2600rpm running at 3000rpm) and I'm not sure how strong the power delivery is on this card.


But did the screen still work?

I'm asking the important questions here!



----------



## asdkj1740

There is an option to set the power limit lower in Afterburner.
Flashing an unofficially assigned BIOS is always risky, no matter how easy it is to do, even as a one-click update.

BIOS tweakers have been gone since Pascal. There was an Asus T04 BIOS that eventually "leaked out" for the 1080 Ti, and then there were people who dared to try it without knowing what could happen.
People who are afraid of the possible damage from flashing a high power limit BIOS can wait and see, but it's not fair to block these BIOSes from people who want them.

Lastly, if someone made it possible to bypass Windows certification once again, just like in the good old days, who would care about these XOC BIOSes? People here, to my understanding, are not stupid enough to try to compete with LN2 scores. We need an XOC BIOS now because there's simply none for air/water/AIO cooling users that raises the power limit. No one gives a **** about what the BIOS is named.

Is shorting a shunt 100% safe? I don't think so. People who care about warranty shouldn't do the shunt mod; the same goes for flashing a BIOS.


----------



## nievz

How are your temps with AIB cards and CPU air coolers? I have the Gaming X Trio and I'm planning to get the NH-D15 chromax.black; will they collide and cause a significant performance drop?


----------



## DrunknFoo

ExDarkxH said:


> you guys getting pm's with the bios or something?
> no bios found lol


Hey, give the Strix BIOS a shot now and see what it does for you, now that you've shunted the SRC.

And yeah, can someone PM us the new BIOS please?


----------



## HyperMatrix

Falkentyne said:


> I don't mind responsible LN2'ers getting these bioses. But this is the day of the viral internet. If it gets leaked anywhere, someone is going to post it on their onedrive or google drive and then it's going to be good game. For example I still have the Z490 VRM tool Asus gave me. I've posted pictures of it but I still to this day have not leaked it and will not leak it. People can destroy their boards with things like this.


You can't protect people from themselves. Or rather, you shouldn't try to. Personal responsibility is important. I remember at one point, over two decades ago, I had just bought a motherboard, and to test it I decided it'd be a great idea to hook the motherboard up to a PSU and run it... while it was on some nice thick shaggy carpet. It was a brilliant idea that didn't need to be downloaded off of Google Drive. 

I understand why he's not sharing it. One part is potential damage to cards by people who don't know any better, but I think it also has to do with the source that provided him the BIOS. Otherwise he wouldn't have covered up the BIOS details in the GPU-Z screenshot. But to say that this isn't the BIOS we want isn't accurate. Would it be better if we had a 1000W BIOS that still had proper thermal monitoring? Sure. But remember, there are only 2 ways this can damage your card. One is if you're overvolting the card with an elmor, at which point I hope to god you know how to monitor your thermals. The second is if you do a remount and mess it up. But keep in mind a lot of us are properly equipped to run this BIOS. Personally, I wouldn't do a remount with this BIOS without doing some FLIR testing beforehand. It's really no different than setting up a custom loop: if you don't properly test to make sure you did everything right, you'll have liquid spraying all over your computer and shorting it out. That doesn't make water cooling bad.

I wouldn't pressure him to release the BIOS. If he does, there's a chance whoever gave it to him could get in trouble or simply won't share with him in the future. And that's completely understandable. But there's a difference between whether he should release the BIOS, and whether I should want the BIOS.


----------



## npagzy

This is as high as my card can go without freezing up; no artifacts, though, at 2190 MHz. Would watercooling help me increase that clock, or is that this particular GPU's limit?

Clock frequency 2,115 MHz 
Average clock frequency 2,027 MHz
51 °C on Air currently.


----------



## shiokarai

What's your take on a Strix 3090 with [email protected] vs a Ryzen 9 5950X? Upgrade? Sidegrade? Downgrade? 90% gaming. I have a 5950X on hand, waiting for the board, wondering if it's worth it though...


----------



## Nizzen

Falkentyne said:


> I don't mind responsible LN2'ers getting these bioses. But this is the day of the viral internet. If it gets leaked anywhere, someone is going to post it on their onedrive or google drive and then it's going to be good game. For example I still have the Z490 VRM tool Asus gave me. I've posted pictures of it but I still to this day have not leaked it and will not leak it. People can destroy their boards with things like this.


Z490 VRM tool is open for download at hwbot forums.

Is this safe-clock.net now?


----------



## HyperMatrix

shiokarai said:


> what's your take on Strix 3090 with [email protected] vs ryzen 9 5950x? upgrade? sidegrade? downgrade? 90% gaming  have 5950x on hand, waiting for the board, wondering if it's worth it though....


On average, the 9900ks will be better for gaming. I'm basing this off of a TechGage benchmark of the RTX 3080 that they did when it came out on their 10900k system and then their benchmark of the 6800 XT, which also re-tested the RTX 3080 on their new 5950x test bed. Horizon Zero Dawn and Total War: Three Kingdoms were 1% faster on the 5950x. CSGO was 11% faster on the 5950x. But the 10900k was 3% faster in Rainbow Six Siege, 5% faster in Monster Hunter: World, 24% faster in Destiny 2, and 23% faster in Borderlands 3. So for pure gaming, it seems Intel is still in the lead. These were tests done at 4K, not 1080p. Although for all I know the extra 2 cores on the 10900k might mean your results will be different with the 9900ks.
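For anyone wanting to roll per-game deltas like these into a single number, the usual approach is a geometric mean of the frame-rate ratios rather than an arithmetic average of percentages. A quick sketch using the percentages quoted above (the ratios are reconstructed from those figures, not re-measured):

```python
from math import prod

# Per-game ratio of 10900K fps to 5950X fps, reconstructed from the
# percentage deltas quoted above.
ratios_10900k_vs_5950x = [
    1 / 1.01,  # Horizon Zero Dawn: 5950X +1%
    1 / 1.01,  # Total War: Three Kingdoms: 5950X +1%
    1 / 1.11,  # CS:GO: 5950X +11%
    1.03,      # Rainbow Six Siege: 10900K +3%
    1.05,      # Monster Hunter: World: 10900K +5%
    1.24,      # Destiny 2: 10900K +24%
    1.23,      # Borderlands 3: 10900K +23%
]

# Geometric mean is the right way to average ratios; an arithmetic mean
# would over-weight the big outliers like Destiny 2.
geomean = prod(ratios_10900k_vs_5950x) ** (1 / len(ratios_10900k_vs_5950x))
print(f"10900K is ~{(geomean - 1) * 100:.1f}% faster on average")
```

On these seven games that works out to roughly a 5-6% average lead for the 10900K, which matches the "Intel is still in the lead" read above while showing how much of it comes from two titles.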


----------



## wirx

Thanks EVGA for a great BIOS, I got 14932 points without shunting, outside in +6°C air, with an MSI 3090 Trio X 


https://www.3dmark.com/compare/pr/524729/pr/524270


14779 was indoors, just for comparison.


----------



## dangerSK

Nizzen said:


> Z490 VRM tool is open for download at hwbot forums.
> 
> Is this safe-clock.net now?


Looks like it. The general rule is: if it's under strict NDA or can get someone in trouble, don't post it. However, protecting others by keeping something out of their reach is childish... if you're an idiot who can damage your card, that's your fault; we are not forum parents... For me this is quite funny. Nobody cared about those things in the age of Maxwell and really dangerous overvolting/custom BIOSes, but now that everything is totally idiot-proof (you can't overvolt; hard OCP limits, boost algos, etc.), suddenly everybody is concerned about others  This is OCN, not a Walmart help desk.


----------



## rawsome

motivman said:


> I think I have one of the fastest 2X8 pin cards in the world... results in 26C ambient air. Wonder how fast this thing can go if I could get my room colder... TOP 25 in the world for single cards on PR, lol.
> 
> 
> 
> https://www.3dmark.com/pr/524063


Holy ****!



Mat_UK said:


> I have the same card and the same waterblock on order - hopefully coming next week  but until then I am running on air - so we can compare results once I get this bad boy under water. With my 2080ti XC ultra I was getting around 42c gpu temp with max 32c water temps (using an EK vector gpu block). So 10c delta between the GPU and the water. My ambient is usually around 22c so the same 10c delta between ambient and water temps - roughly.
> 
> What rads do you have and are you cooling just the gpu or the cpu & gpu in the same loop ??


2x 420 rads with Noctua A14s. I used crossflow rads so I don't have as much tubing in my case. First-time build 
but I still have a temp delta of 15° water->GPU. Will repaste my card with Kryonaut and see if it makes a difference.










Also moved up a bit in the ladder: PR 14975


----------



## Nizzen

dangerSK said:


> Looks like it, general rule is if its under strict NDA or can get someone in trouble dont post it, however protecting others by keeping something from their reach is childish.. if youre idiot who can damage your card its your fault, we are not forum parents... For me this is quite funny, nobody cared about those things in the age of maxwell and rly dangerous over-volting/custom bioses but when everything is like totally idiot-proof as u cant overvolt/hard-ocp limits/boost algos etc everybody is concerned about others  This is OCN, not wallmart help-desk.


Yeah, we overclocked 780/780 Ti Classifieds like crazy with EVBot on water LOL. 1.35V on the GPU. My 2x 780 Ti and 3930K/3960X drew 1700W from the wall. Had to use 2x 1200W PSUs. 
I want the XOC BIOS to be open for all.


----------



## dangerSK

Nizzen said:


> Yeah, we overclocked 780/780ti classified like crazy with evbot on water LOL. 1.35v on gpu. My 2x 780ti and 3930/3960x drew 1700w from the wall. Had to use 2x 1200w psu's.
> I want xoc bios to be open for all.


It's Nvidia's fault... they're banning vendors from releasing XOC BIOSes to the public :/ You have to be an in-house XOCer or sign an NDA


----------



## Falkentyne

dangerSK said:


> Looks like it, general rule is if its under strict NDA or can get someone in trouble dont post it, however protecting others by keeping something from their reach is childish.. if youre idiot who can damage your card its your fault, we are not forum parents... For me this is quite funny, nobody cared about those things in the age of maxwell and rly dangerous over-volting/custom bioses but when everything is like totally idiot-proof as u cant overvolt/hard-ocp limits/boost algos etc everybody is concerned about others  This is OCN, not wallmart help-desk.


Sorry, what I meant is that people shouldn't go on Reddit and say "here is a 1000W XOC BIOS with thermal protections disabled"; that's different from the bios being shared on HWBot between competitive overclockers.
I never meant that people like you or HyperMatrix shouldn't have access to this bios.


----------



## dangerSK

Falkentyne said:


> Sorry what I meant is that people shouldn't go on reddit and say "here is a 1000W XOC Bios with thermal protections disabled", not the bios being shared on Hwbot between competitive overclockers.
> I never meant that people like you or hypermatrix shouldn't have access to this bios.


I gotcha, you have good intentions, but you'll never save "casuals" like this, unfortunately. Want to know why? You'll "hide" the XOC BIOS from them, and then a "youtuber" like Frame Chasers comes in and tells them to shunt mod the PCIe on their FTW3 because that will help, without any proof/proper testing. (Today we confirmed PCIe is not the problem, btw.)


----------



## asdkj1740

HyperMatrix said:


> You can't protect people from themselves. Or rather, you shouldn't try to. Personal responsibility is important. I remember at one point over 2 decades ago I had just bought a motherboard and for testing it, I decided it'd be a great idea if I hooked up the motherboard to a PSU and and run it...while it was on some nice thick shaggy carpet. It was a brilliant idea that didn't need to be downloaded off of google drive.
> 
> I understand why he's not sharing it. One part is potential damage to cards by people who don't know any better, but I think it would also have to do with the source that provided him the bios. Otherwise he wouldn't have covered up the bios details in the GPU-Z screenshot. But to say that this isn't the bios we want isn't accurate. Would it be better if we had a 1000W bios that still had proper thermal monitoring? But remember there are only 2 ways that this can damage your card. One way is if you're overvolting the card with an elmor, at which point I hope to god you know how to monitor your thermals. Secondly, if you do a remount and mess up. But keep in mind a lot of us are properly equipped to run the bios. Personally I wouldn't do a remount with this bios without doing some FLIR testing before hand. It's really no different than setting up a custom loop. If you don't properly test to make sure you did everything right, you'll have liquid spraying all over your computer and shorting it out. That doesn't make water cooling bad.
> 
> I wouldn't pressure him to release the bios. If he does, there's a chance whoever gave it to him could get in trouble or simply won't share them with him in the future. And that's completely understandable. But there's a difference between whether he should release the bios, and whether I should want the bios.


Agreed. Calling it XOC is nothing but stupid, just like EVGA calling their 450/500W BIOS "XOC".
A high power limit and protections like thermal and current limits don't contradict each other.

Just detail what's inside, and the possible gains/risks of flashing it, to honor the freedom to have fun or to die.




Nizzen said:


> Yeah, we overclocked 780/780ti classified like crazy with evbot on water LOL. 1.35v on gpu. My 2x 780ti and 3930/3960x drew 1700w from the wall. Had to use 2x 1200w psu's.
> I want xoc bios to be open for all.


The worst outcome of flashing a 1000W BIOS may not just be the GPU but also the power supply, or even the whole PC, dying. It's too much to think of fire burning the whole house down and real people dying.
And not everyone here cares about that. Ask them how much they paid for a 3090 lolllllllllllllllllllllllllll. Thanks to Nvidia and the AIBs for giving us one more motivation.

"You played two hours to die like this?"


Nowadays OCN.net is nothing if you compare it with five years ago, or with before the whole change to the new system here.
People know YouTube, Facebook, maybe Reddit too, but HWBot/OCN.net? Come on, they may never have heard of them.

VRM threads? Who cares nowadays? Everyone is making his own version of a VRM tier list and posting it on his Facebook/YouTube/local forum.
90A, yeah~~~must be good! Check out the Biostar B550M.
19+3 FTW3 not good? But 19+4 Kingpin is wonderful, yeah~~~~~~~~~~~

You must have something wrong if you still go to xdevs.com and hope to see Tin writing Z490 Dark / 3090 Kingpin articles.


----------



## dangerSK

asdkj1740 said:


> you must have something wrong if you still go to xdevs.com and hope to see tin writting z490 dark / 3090 kingpin articles.


There would be guides if he hadn't left EVGA, though.


----------



## Nizzen

asdkj1740 said:


> agree. calling it XOC is nothing but stupid. just like what evga calls their 450/500w bios as XOC.
> high power limit and protection like thermal current etc are not contradicted with each other.
> 
> just details what is it inside, and the possible gains/risks of flashing it, to honor the freedom to have fun or to die.
> 
> 
> 
> the worst thing of flashing 1000w bios maybe not just the gpu but also the power supply or even the whole pc died. it is too much to think of fire and burning the whole house down/real ppl died.
> and not everyone here cares about that. if you ask them how much they paid for 3090 lolllllllllllllllllllllllllll. thanks to nvidia and aib giving us one more motivation.
> 
> "you played two hours to die like this?"
> 
> 
> nowadays OCN.net is nothing, if you compare it with five years before, or before the whole changes to new system here.
> ppl know youtube, facebook, maybe reddit too, but hwbot/ocn.net, come on they may never ever heard of them.
> 
> vrm threads? who cares nowadays? everyone is making his own version of vrm tier list and post that on his facebook/youtube/local forum.
> 90A yeah~~~must be good! check out biostar b550m.
> 19+3 ftw3 not good? but 19+4 kingpin is wonderful yeah~~~~~~~~~~~
> 
> you must have something wrong if you still go to xdevs.com and hope to see tin writting z490 dark / 3090 kingpin articles.


I miss good old xtremesystems.org. Too bad it died.


----------



## asdkj1740

dangerSK said:


> There would be guides if he didnt leave EVGA though.


I think even if he hadn't left, in general how many people would go there to read his article on how to mod the FE or reference 3080/3090 before the Kingpin model is out?
Tin's articles aren't just about XOC; the breakdowns, voltage suggestions, and power-draw data are all valuable.
We already have the protection of being a niche community here.


----------



## Thebc2

wirx said:


> Thnx EVGA for great BIOS, I got 14932 points without shunting and outside +6c celcius air with MSI 3090 Trio X
> 
> 
> https://www.3dmark.com/compare/pr/524729/pr/524270
> 
> 
> 14779 was in house just for comparsion.


Don’t get me started. I have a major bone to pick with EVGA re: this bios. I was an FTW3 Ultra owner and one of the first to test the XOC bios on its “intended” card and provided feedback to EVGA that it wasn’t working as intended.

They have sat on it for over a month now. I returned my FTW3 after a couple weeks of no response from Jacob and after seeing MSI and Asus owners fully leverage their XOC bios: one card significantly cheaper, the other the same price but slightly better built for the money.

Regardless, EVGA lost me as a gfx card customer after this debacle. I’ll still rock the 1600 T2, but definitely no gfx cards from them as an early adopter.


Sent from my iPhone using Tapatalk Pro


----------



## DrunknFoo

Nizzen said:


> I miss good old xtremesystems.org. Too bad it died.


THIS!


----------



## DrunknFoo

dangerSK said:


> I gotcha, u have good intentions however u will never save "casuals" like this, unfortunately. U want to know why ? U will "hide" XOC bios from them and then "youtuber" Frame chasers comes in and tell them to shunt mod pcie on their FTW3 because that will help without any proof/proper testing. (today we have confirmed PCIE is not the problem btw)


lol he was proven wrong a few weeks back, even with the 500W beta bios. I clearly showed, as did others posting runs, current and average draw of about 490+W continuous with peaks of up to 525W.

Problem is, fools may try this 1000W bios while still shunted with the slider maxed, attempting to draw ~1500+W to the card? (LMAO)


----------



## martinhal

rawsome said:


> Holy ****!
> 
> 
> 
> 2x 420 rads with noctua A14. i used crossflow rads so i do not have so much tubing in my case. first time build
> but i still have a temp delta of 15° water->gpu. will repaste my card with kyronaut and see if it makes a difference.
> 
> View attachment 2466218
> 
> 
> also moved a bit up in the ladder, PR 14975


Are you referring to the delta between your water and the GPU ? I have the same using kyronaut . What is your ambient temp ?


----------



## HyperMatrix

DrunknFoo said:


> lol he was proven wrong a few weeks back even with the 500w beta bios, i clearly showed as well as a others posting runs where current and avg is about 490+w continuous and peaks of upto 525w
> 
> problem is, fools may try this 1000w bios while still shunted and slider maxed, attempting to draw ~1500+w to the card? (LMAO)


You can’t pull 1000W unless you are volt modding so that’s not an issue.


----------



## rawsome

martinhal said:


> Are you referring to the delta between your water and the GPU ? I have the same using kyronaut . What is your ambient temp ?


Ambient 20°, water 35°, and GPU 50°. Something like that on longer gaming sessions.
I currently think a 15° GPU core - water delta is bad, because other people are seeing deltas as low as 7° with these Alphacool blocks.


----------



## DrunknFoo

HyperMatrix said:


> You can’t pull 1000W unless you are volt modding so that’s not an issue.


which is why i chose the word attempting


----------



## dr/owned

DrunknFoo said:


> which is why i chose the word attempting


It's just un-possible to get that crazy, I would think. 

If the VRM is even capable of delivering that much power (which maybe the Strix could, on paper), you're going to get thermal runaway up the butt and the VRM is going to shut itself down anyway. Watched enough Buildzoid vids to see that when you start going up to 50A per power stage, the heat output gets absurd: you're dealing with hundreds of watts of heat on components that don't have enough surface area to dissipate it effectively.

And 1440W is all you can get out of 3x 8-pin connectors (40A each at 12V), and that's assuming you don't get some sort of hardware OCP kicking in. (Theoretically on Corsair you can turn it off, but I doubt it's really "off".) At that load the wires are probably going to be self-fusing anyway. 

And on LN2 yesterday with the Kingpin card they were only drawing about 700W. Power consumption goes down on LN2 (superconducting blah blah...so the ambient number may be >25% higher), but that's still only ~875W, a long way from 1000W, and at voltages that would destroy the chip at ambient.

TLDR: I'd pay to see someone draw more than 1000W
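The back-of-the-envelope ceiling above is easy to sanity-check. A sketch of the arithmetic, taking the 40 A per 8-pin figure and the ~25% LN2-to-ambient uplift as the stated assumptions (neither is an official spec; the PCIe spec itself rates an 8-pin at only 150 W):

```python
# Rough 12V delivery ceiling through the cable side, assuming the
# (generous) 40 A per connector figure quoted above.
AMPS_PER_8PIN = 40   # assumed wire/connector limit, not the 12.5 A spec value
VOLTS = 12
connectors = 3

cable_ceiling_w = connectors * AMPS_PER_8PIN * VOLTS
print(cable_ceiling_w)       # 1440 -- matches the 1440W figure above

# LN2 observation: ~700 W measured; scale by the assumed ~25% ambient uplift
ln2_draw_w = 700
ambient_equiv_w = ln2_draw_w * 1.25
print(ambient_equiv_w)       # 875.0 -- still well short of 1000 W
```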


----------



## bmagnien

DrunknFoo said:


> which is why i chose the word attempting


Drunk can you check your evga box to see where your card was made? By serial number. Taiwan or China?


----------



## DrunknFoo

bmagnien said:


> Drunk can you check your evga box to see where your card was made? By serial number. Taiwan or China?


How do I tell by the SN? I can dig it up.


----------



## bmagnien

DrunknFoo said:


> how do i tell by the sn? i can dig it up


It says right on the white sticker, along with the P/N and the S/N: "Made in 'xxx'". Thanks! I meant 'by' as in 'in proximity to', not 'via'.


----------



## martinhal

rawsome said:


> ambient 20°, water 35° and gpu 50°. something like that on longer gaming sessions.
> i currently think a 15° gpu core - water delta is bad because other ppl are seeing up to 7° with these alphacool blocks.


I have asked EK support what delta I should be looking at


----------



## DrunknFoo

bmagnien said:


> it says right on the white sticker along with the P/N and the S/N. " Made in 'xxx' " Thanks! I meant 'by' as in 'in proximity to' not 'via'


It just says "packaged in the USA"? lol


First two digits: Year Manufactured
Third digit: Product type (1 being graphics card).
Fourth and Fifth: What factory the product was built at.
Sixth: Warranty Length.
Seventh to Tenth: Part Number.
Eleventh: Region Sold.
Twelfth to Sixteenth: SNS or Serial Number Sequence.


if this still holds true
4th and 5th digit is 13


----------



## bmagnien

DrunknFoo said:


> just says packaged in the usa? lol
> 
> 
> First two digits: Year Manufactured
> Third digit: Product type (1 being graphics card).
> Fourth and Fifth: What factory the product was built at.
> Sixth: Warranty Length.
> Seventh to Tenth: Part Number.
> Eleventh: Region Sold.
> Twelfth to Sixteenth: SNS or Serial Number Sequence.
> 
> 
> if this still holds true
> 4th and 5th digit is 13


Interesting. I'm going to share over on the EVGA XOC bios page if you don't mind? Pop on over there and take a look at the last several posts. Folks thinking it might be a manufacturing difference. Could be way off but figured I'd try to add more data points for the good of the order.


----------



## dangerSK

DrunknFoo said:


> just says packaged in the usa? lol
> 
> 
> First two digits: Year Manufactured
> Third digit: Product type (1 being graphics card).
> Fourth and Fifth: What factory the product was built at.
> Sixth: Warranty Length.
> Seventh to Tenth: Part Number.
> Eleventh: Region Sold.
> Twelfth to Sixteenth: SNS or Serial Number Sequence.
> 
> 
> if this still holds true
> 4th and 5th digit is 13


2014133 seems most common


----------



## bmagnien

dangerSK said:


> 2014133 right ?


No he's saying his is 2011333987. Correct me if I'm wrong Drunk?


----------



## dangerSK

bmagnien said:


> No he's saying his is 2011333987. Correct me if I'm wrong Drunk?


Edited my post; it's 2am here, I'm kinda out of it, my bad.


----------



## DrunknFoo

bmagnien said:


> Interesting. I'm going to share over on the EVGA XOC bios page if you don't mind? Pop on over there and take a look at the last several posts. Folks thinking it might be a manufacturing difference. Could be way off but figured I'd try to add more data points for the good of the order.


Go ahead, but I honestly doubt it is that...
Seems like grasping at straws, man, blind to what others write; Framechasers has done an amazing job ****in herding up his sheep.
It is a beta, and maybe a handful of people have enough cooling to even handle the power. Meh.

just saw a comment there

i think 2014 is common for the start of the sn
13 is likely the identifier

i.e mine is 201413xxxxxxxxxx


I mentioned prior: messing around with driver settings and random reboots pulled the proper 490-500W continuously, but it was impossible to replicate after toggling. On average I was pulling 430-470W randomly during benches with the beta, without shunts.


----------



## dangerSK

DrunknFoo said:


> go ahead, but i honestly doubt it is that...
> seems like grasping at straws man, blind to what others write, framechasers has done an amazing job ****in herding up his sheep.
> it is a beta, and maybe a handful of people have enough cooling to even handle the power. meh,
> 
> just saw a comment there
> 
> i think 2014 is common for the start of the sn
> 13 is likely the identifier
> 
> i.e mine is 201413xxxxxxxxxx
> 
> 
> i mentioned prior, messing around with driver settings and random reboots pulled the proper 490-500w continuously but was impossible to replicate after toggling, on avg i was pulling 430-470 randomly during benches with the beta without shunts


can u share driver settings ?


----------



## bmagnien

DrunknFoo said:


> go ahead, but i honestly doubt it is that...
> seems like grasping at straws man, blind to what others write, framechasers has done an amazing job ****in herding up his sheep.
> it is a beta, and maybe a handful of people have enough cooling to even handle the power. meh,
> 
> just saw a comment there
> 
> i think 2014 is common for the start of the sn
> 13 is likely the identifier
> 
> i.e mine is 201413xxxxxxxxxx
> 
> 
> i mentioned prior, messing around with driver settings and random reboots pulled the proper 490-500w continuously but was impossible to replicate after toggling, on avg i was pulling 430-470 randomly during benches with the beta without shunts


oh so your 5th and 6th digit are 13...you said your 4th and 5th were 13...

also we're just trying to figure it out. not sure why you're calling us 'framechaser's sheep' ... didn't say anything about that guy? confused...


----------



## DrunknFoo

bmagnien said:


> oh so your 5th and 6th digit are 13...you said your 4th and 5th were 13...
> 
> also we're just trying to figure it out. not sure why you're calling us 'framechaser's sheep' ... didn't say anything about that guy? confused...


lol oops
20141333xxxxxxxxxx
can't count


----------



## DrunknFoo

dangerSK said:


> can u share driver settings ?


Nope, I really can't. I was just randomly toggling settings, i.e. quality to performance, quality to power, etc., just random settings.
I was able to replicate a 490W+ continuous draw maybe 3 times over dozens of attempts, across multiple drivers and 3 versions of the 500W bios.


----------



## bmagnien

DrunknFoo said:


> nope i really cant i was just randomly toggling settings i.e quality to performance and quality to power etc just random settings set
> i was able to replicate a 490w+ continuous draw maybe 3 times over dozens of attempts over multiple drivers and across 3 versions of the 500w bios


you have three 3's? Are you sure it's not 2014133987?


----------



## DrunknFoo

The third one in question is 94:02:26:88:14, the *unverified* EVGA 500W bios.

The one ending in *87* appeared to work the best (higher average draw and fewer power fluctuations).
That said, *it can take up to 3 hard reboots of my system to get voltage and power draw to go up*...
random is random

Anyway, in case you are curious.


----------



## DrunknFoo

If there is in fact a difference based on where each card originated from, I'm sure EVGA will eventually create a bios for each specific card and consider classifying X to X and Y to Y
lol (wishful thinking/doubtful)

I really don't think there's a difference.


----------



## Falkentyne

It's "25" for China right? The "25" cards are the ones able to pull >85W from PCIE and >161W from the 8 pins?

If this is the case, technically depending on where the internal power limit rails trigger at, that card is capable of at least 568W if all the rails are balanced out.
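That 568W figure falls out of summing the per-rail limits: one PCIe slot rail plus three 8-pin rails. A sketch of the sum, treating the >85W / >161W observations above as the effective rail caps (an assumption, since the actual internal limits aren't published):

```python
# Per-rail power caps reported for the "25" cards (assumed values from above)
pcie_slot_w = 85   # slot rail observation
per_8pin_w = 161   # per 8-pin rail observation
num_8pins = 3

total_w = pcie_slot_w + num_8pins * per_8pin_w
print(total_w)  # 568 -- "at least 568W if all the rails are balanced out"
```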


----------



## dante`afk

My FE already clocks better than my previous FTW3 Ultra.



https://www.3dmark.com/spy/15476906



Unfortunately the Bykski block they sent me is for reference cards, even though I ordered one for FE cards.

waiting on a block now.


----------



## nievz

HyperMatrix said:


> On average, the 9900ks will be better for gaming. I'm basing this off of a TechGage benchmark of the RTX 3080 that they did when it came out on their 10900k system and then their benchmark of the 6800 XT, which also re-tested the RTX 3080 on their new 5950x test bed. Horizon Zero Dawn and Total War: Three Kingdoms were 1% faster on the 5950x. CSGO was 11% faster on the 5950x. But the 10900k was 3% faster in Rainbow Six Siege, 5% faster in Monster Hunter: World, 24% faster in Destiny 2, and 23% faster in Borderlands 3. So for pure gaming, it seems Intel is still in the lead. These were tests done at 4K, not 1080p. Although for all I know the extra 2 cores on the 10900k might mean your results will be different with the 9900ks.


The TechGage benchmarks were done with 3200 RAM, and they didn't even mention whether they were using CL14 or CL16. Aside from that, the Curve Optimizer OC feature of Zen 3 will just add to its advantage. I have a 5800X and I haven't seen any overclocked 10900K beat my frame rates in BFV and Warzone at 1440p with my 3090. I would say it's even with a 10900K but consumes a lot less power (110W vs 325W). I am using 3800CL16 RAM with the curve optimizer at -20. When I'm running at 3600CL16, my framerates and GPU utilization dip a bit.


----------



## cletus-cassidy

ShadowYuna said:


> Has anyone got coil whine boost after installing EK Vector Strix water block + EK Vector Strix Backplate on RTX 3090 Strix??
> 
> Mine got crazy coil whine(very loud and it is worse than stock cooler) after installing both water block and back plate.
> 
> So I put 2mm thermal pad on right capacitor and it reduce coil whine but still worse than stock cooler(full load , 0% fan speed to test the coil whine noise)
> 
> Is there anyone who is using Strix with water block but not EK ?? If there is anyone what brand and how is your coil whine noise?


I have an identical setup to you with Strix and EK block plus backplate. Also have pretty bad coil whine.

Out of curiosity what temps are you seeing running Port Royal? I have a decent amount of rad space (3x360) and I am hitting 50 degrees on the GPU core just on the benchmark without heat soaking the water. Seems too hot when I had 2080 ti blocks in the past that were more like 41 or 42 after really long heat soaking gaming sessions.


----------



## dr/owned

Messing around with an IR camera. Memory chips go hissssssssss. FWIW I don't see any issues around the PCIe slot itself or the 8-pin connectors. This is still a stock 375W TUF.


----------



## Nizzen

nievz said:


> The TechGage benchmarks were done done on a 3200 RAM and they didn't even mention of they were using CL14 or CL16. Aside from that, the Curve Optimizer OC feature of Zen 3 will just add to its advantage. I have a 5800x and I haven't seen any overclocked 10900k beat my frame rates in BFV and Warzone at 1440p with my 3090. I would say it's even with a 10900k but consumes a lot less power (110w vs 325w). I am using 3800CL16 ram with curve optimized at -20. When I'm running at 3600 CL16, my framerates and GPU utilization dip a bit.


Tell us what map and what settings in Battlefield V 


If you can post a video in-game with an FPS counter, it will help others compare


----------



## shiokarai

nievz said:


> The TechGage benchmarks were done done on a 3200 RAM and they didn't even mention of they were using CL14 or CL16. Aside from that, the Curve Optimizer OC feature of Zen 3 will just add to its advantage. I have a 5800x and I haven't seen any overclocked 10900k beat my frame rates in BFV and Warzone at 1440p with my 3090. I would say it's even with a 10900k but consumes a lot less power (110w vs 325w). I am using 3800CL16 ram with curve optimized at -20. When I'm running at 3600 CL16, my framerates and GPU utilization dip a bit.


Thanks for the valuable input! 

What motherboard? Debating between the Asus Dark Hero and the "normal" C8 Hero, i.e., is Asus's DOS (Dynamic OC Switcher) feature worth it for gaming or not, basically.

On this topic, very very interesting article:

How is Intel Beating AMD Zen 3 Ryzen in Gaming? (www.techpowerup.com)
Our Ryzen 5000 Zen 3 launch day reviews saw unexpected gaming FPS results many questioned. In this article, we will investigate these results in more detail, and do more testing to figure out what is going on. The results are surprising and set things right in the battle of AMD vs. Intel.

Also, this:

AMD Ryzen 9 5950X Review (www.techpowerup.com)
Ryzen 9 5950X is AMD's flagship 16-core, 32-thread monster. It offers outstanding application performance; your productivity tasks will complete faster than before. Thanks to the Zen 3 IPC advantage, it also excels in gaming, even winning against Intel's Core i9-10900K.


----------



## ShadowYuna

cletus-cassidy said:


> I have an identical setup to you with Strix and EK block plus backplate. Also have pretty bad coil whine.
> 
> Out of curiosity what temps are you seeing running Port Royal? I have a decent amount of rad space (3x360) and I am hitting 50 degrees on the GPU core just on the benchmark without heat soaking the water. Seems too hot when I had 2080 ti blocks in the past that were more like 41 or 42 after really long heat soaking gaming sessions.


I am using a MORA 360 Pro radiator, and on the Time Spy bench my GPU hits around 42-43°C. In Assassin's Creed Valhalla it hits around 47°C, since it is summer in Australia. 

I am very happy with the temps and the stable boost of 2070, but coil whine is my concern. So I might buy another block when one comes out.


----------



## Mat_UK

rawsome said:


> Holy ****!
> 
> 
> 
> 2x 420 rads with noctua A14. i used crossflow rads so i do not have so much tubing in my case. first time build
> but i still have a temp delta of 15° water->gpu. will repaste my card with kyronaut and see if it makes a difference.
> 
> View attachment 2466218
> 
> 
> also moved a bit up in the ladder, PR 14975



Looks nice mate, those crossflow rads really do simplify the tube runs. Never used them myself, but maybe in my next build 

I should clarify: my temps above with the 2080 Ti are gaming temps (Division 2), not benching, which is probably going to create a bigger GPU > water delta.

I don't think your 15°C delta is actually too bad, and 2x 420 should be plenty of rad to cool that setup (I have 2x 360). Do you have good flow rates?

I am waiting on my Alphacool block to come; it was due to ship in a couple of days but now the site says 5th Dec. I am itching to test it in my loop, but I'll just have to be patient. I'll post here once I get some results.


----------



## Nizzen

What was the whole point of bragging about having a XOC bios for the 3090 here in this forum, without sharing it? @escapee


----------



## Sobakaa

csaris said:


> Does anyone have the msi suprim bios? I would like to flash it to my trio x...


Any luck finding one? All i managed to find is this unverified one MSI RTX 3090 VBIOS


----------



## cletus-cassidy

ShadowYuna said:


> I am using a MORA 360 Pro radiator, and on the Time Spy bench my GPU hit around 42-43. On Assassin's Creed Valhalla it hit around 47, since it is summer in Australia.
> 
> I am very happy with the temps and the stable 2070 MHz boost, but coil whine is my concern, so I might buy another block when it comes out.


Thanks. I'll do some more testing with heatsoaking, but wondering if I have a bad mount. Anyone else with an EK block (Strix or otherwise) care to give some other temps data points before I have to tear this thing down again?


----------



## Foxrun

Finally my shunt mod worked. I did a thin coat and my clocks barely drop below 2040 on my FE. I replaced the paste with Grizzly and some of the thermal pads with Grizzly pads as well. I'm going to apply a thicker coat because I am still hitting a power limit. However, I'm still getting my KP. This FE will make someone happy when I sell it.



https://www.3dmark.com/3dm/53424584?


----------



## nievz

Nizzen said:


> Tell us what map and what settings in Battlefield V.
> 
> 
> If you can post a video ingame with fpscounter, it will help others to compare


Zen3 5800x, Warzone 95-98% GPU Utilization even indoors, 1440p, high





Zen3 5800x, BFV - Ultra, RTX Off, 1440p, DX12





Please let me know how they compare to your 10900K overclocked. My main goal in upgrading to the 5800X is to at least minimize the CPU bottleneck on my 3090 and get high FPS at 1440p, since I play at 240Hz.

@shiokarai Gigabyte x570 Aorus Master, sorry I have zero knowledge on ASUS boards. hehe


----------



## Thebc2

cletus-cassidy said:


> I have an identical setup to you with Strix and EK block plus backplate. Also have pretty bad coil whine.
> 
> Out of curiosity what temps are you seeing running Port Royal? I have a decent amount of rad space (3x360) and I am hitting 50 degrees on the GPU core just on the benchmark without heat soaking the water. Seems too hot when I had 2080 ti blocks in the past that were more like 41 or 42 after really long heat soaking gaming sessions.


My temps are much lower with the same gpu and block/backplate. I idle in the very low 20s. Under full load with a 500w bios I see temps generally in the mid to high 30s depending on ambient, highest I have seen is 40c. Ambient is low to mid 20s.











Sent from my iPhone using Tapatalk Pro


----------



## WilliamLeGod

Does any1 here have the 3090 Suprim yet? Would u mind sharing the 450W vbios OC mode? Appreciate it!


----------



## Fire2

WilliamLeGod said:


> Does any1 here got the 3090 suprim yet? Would u mind sharing the 450W vbios OC mode? Appreciate it!


VGA Bios Collection: MSI RTX 3090 24 GB | TechPowerUp

use that??


----------



## csaris

Sobakaa said:


> Any luck finding one? All i managed to find is this unverified one MSI RTX 3090 VBIOS


No mate, didn't find anything.

I'm not 100% comfortable flashing an unverified bios.


----------



## Fire2

Anyone found this 1000W Galax 3090 bios yet?!


----------



## Falkentyne

Foxrun said:


> Finally my shunt worked. I did a thin coat and my clocks barely drop below 2040 on my FE. I replaced the paste with grizzly and some of the thermo pads with grizzly pads as well. Im going to apply a thicker coat because I am still hitting a power limit. However, Im still getting my KP. This FE will make someone happy when I sell it.
> 
> 
> 
> https://www.3dmark.com/3dm/53424584?


What did you do differently than before?


----------



## Zogge

My 3090 Strix is now under water with an EK block. Without a chiller, just on ambient, it idles at 25 degrees and goes to 45 degrees max on load. Coolant is at a steady 25°C, as I have a 3360x140 + 1080x120 radiator loop on slow fans at 0.65 GPM. (Can go higher GPM if needed, as the pumps are at 50%.)

I can bench at a steady 2200 MHz overclock in Port Royal and Superposition.
Core +200, Mem +250.
EVGA 500W bios.

A massively overclocked 10980XE is in the same loop as well, with a 1080x120 radiator between CPU and GPU and a 3360x140 between GPU and CPU.

Gaming crashes at +200, and only +175 / 2160 MHz works steadily.

I can push memory further but have not tried over +750 so far.


----------



## Glerox

I have the Strix also. Do you have better results with the 480W Strix bios or the 500W FTW3 bios?

I'm also waiting for the Optimus FTW3 block, wondering if I should go with the FTW3. Does anybody know if the 500W bios power limitation bug on the Taiwanese version FTW3 can be overcome with shunt modding?


----------



## Zogge

Better in fact with 500w bios, when gaming and benching I get power limit at 500 to 510 W constant. 

No shunting done here also.

I expected 2100 stable when gaming but got 2160 to 2175 so I am happy.


----------



## ExDarkxH

So I ran some Time Spy Extreme and the card hit 838 watts and regularly stayed above 800.

Perf cap reason: Power

Unbelievable. I use 7mΩ shunts because I wanted to work around the limits of the fuses. I can obviously replace them with stacked 5mΩ but meh. I realize TS is high power usage, but damn. Got a top 20 graphics score at least.

I'm on air too. I feel like once on water it will want to guzzle more power. I dunno what to do; I'm kinda "stuck" since it's a FTW3 card.


----------



## ExDarkxH

Glerox said:


> I have the Strix also. Do you have better results with the 480W Strix bios or the 500W FTW3 bios?
> 
> I'm also waiting for the Optimus FTW3 block, wondering if I should go with the FTW3. Does anybody know if the 500W bios power limitation bug on the Taiwanese version FTW3 can be overcome with shunt modding?


I have Taiwanese card shunted it works just fine


----------



## lokran88

ShadowYuna said:


> Has anyone got a coil whine boost after installing the EK Vector Strix water block + EK Vector Strix backplate on an RTX 3090 Strix??
> 
> Mine got crazy coil whine (very loud, and it is worse than the stock cooler) after installing both the water block and backplate.
> 
> So I put a 2mm thermal pad on the right capacitor and it reduced the coil whine, but it's still worse than the stock cooler (full load, 0% fan speed to test the coil whine noise).
> 
> Is there anyone who is using a Strix with a water block that isn't EK? If so, what brand, and how is your coil whine noise?


I am hoping that Aqua Computer might ship my Strix block this week.
Currently with the stock cooler, coil whine is acceptable and not very loud. But I was also afraid that it would get more noticeable with a water block.


----------



## defiledge

What kind of fps increase can I expect going from 390W to 500W? Trying to decide if shunting is worth the effort.


----------



## HyperMatrix

defiledge said:


> What kind of fps increase can I expect going from 390W to 500W? Trying to decide if shunting is worth the effort.


Depends on your clocks and what it throttles down to under full load. Run a game at 60% load. Look at your clock speed. Imagine it's 2050MHz. Then change to 100% load. Imagine it drops to 1900MHz. Shunting would allow you to maintain that previous 2050MHz (assuming you're able to keep the card cool). So your gains would be about 8%. But don't underestimate the thermal problem. If you're not under water, you likely won't be able to benefit from a 500W PL because your card likely won't be able to keep cool under that extra load.
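For reference, that back-of-envelope estimate can be sketched as a quick calculation (the 2050 MHz and 1900 MHz figures are the hypothetical example numbers from the post, not measurements):

```python
# Rough estimate of the performance gain from removing a power-limit throttle,
# assuming FPS scales roughly linearly with the sustained core clock.
def throttle_gain(unthrottled_mhz, throttled_mhz):
    """Fractional gain from holding the unthrottled clock under full load."""
    return unthrottled_mhz / throttled_mhz - 1.0

# Example clocks from the post: 2050 MHz at 60% load vs 1900 MHz at 100% load.
print(f"{throttle_gain(2050, 1900):.1%}")  # about 8%
```

The linear-scaling assumption is optimistic; real games are rarely 100% core-clock bound, so treat the result as an upper bound.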


----------



## motivman

Can anyone answer this question? On the reference PCB, where is the SRC resistor located? I think I am still hitting the power limit due to SRC. This is a PNY 3090 on the stock PNY bios.


----------



## ShadowYuna

lokran88 said:


> I am hoping that Aquacomputer might ship my Strix block this week.
> Currently with stock cooler coil whine is acceptable and not very loud. But I was also afraid that it will get more noticeable with water block.


I am also waiting for the Aqua block. As always, Aqua blocks are the best but always take too long, so ATM I am waiting for Aqua and Bitspower (both preordered).

Also the Phanteks block looks good, so waiting for that as well.


----------



## Sobakaa

csaris said:


> No mate, didn't find anything.
> 
> I'm not 100% confident of flashing an unverified bios.


Well, I was in the mood for some testing and flashed this unverified thing. It is 420 watts as advertised, and has pretty much the same fan curve as the Gaming X from what I can tell, so the fan-stop thing activates without any manual fiddling, unlike the EVGA 500W bios. Other than that... nothing to write home about, I suppose. I get about 14k in Port Royal with that one on my non-modded 3090 Gaming X; with the 500W bios I managed a 14400. With the Gaming X bios my best was like 13300-13500.


----------



## NapsterAU

Any BIOS's around for the 2-Pin cards above 390w yet?


----------



## chispy

Does anyone know the size of the thermal pads on the Bykski full cover water block for the Asus Strix 3090?
120x120x1.5mm or 120x120x2.0mm? @Nizzen or anyone else who is using this water block on the 3090 Strix, any help will be greatly appreciated. I need more thermal pads for my GPU; there were not enough thermal pads in the package of this Bykski water block, cheap manufacturer lol.


----------



## sultanofswing

ExDarkxH said:


> So i ran some Time Spy Extreme and the card hit 838watts and regularly stayed above 800
> 
> Perf cap reason: Power
> 
> unbelievable. I use 7mo shunts because I wanted to work around the limits of the fuses. I can obviously replace them with stacked 5mo but meh. I realize TS is high power usage but damn. Got a top 20 graphics score at least
> 
> I'm on air too. I feel like once on water it will want to guzzle more power. I dunno what to do I'm kinda "stuck" since its a FTW3 card


You're playing with fire if you ask me; gonna end up with a dead card that way.


----------



## dr/owned

Ordered some 18AWG high-strand wire so I can solder directly to the front of the shunt on the PCIe slot. As far as I can tell all the ground planes are shorted together, so I only need to run one wire to effectively triple the slot power capacity and not have to worry about melting the slot.

Has anyone yet figured out the minimum resistance needed to not send the card into safe mode?


----------



## Falkentyne

ExDarkxH said:


> So i ran some Time Spy Extreme and the card hit 838watts and regularly stayed above 800
> 
> Perf cap reason: Power
> 
> unbelievable. I use 7mo shunts because I wanted to work around the limits of the fuses. I can obviously replace them with stacked 5mo but meh. I realize TS is high power usage but damn. Got a top 20 graphics score at least
> 
> I'm on air too. I feel like once on water it will want to guzzle more power. I dunno what to do I'm kinda "stuck" since its a FTW3 card


Is time spy extreme run at 4k ?


----------



## mirkendargen

chispy said:


> Anyone knows the size of the thermal pads on the Bykski full cover water block for the Asus Strix 3090 ?
> 120x120x1.5mm or 120x120x2.0mm ? @Nizzen or anyone else who is using this water block on the 3090 strix any help will be greatly appreciate it. I need more thermal pads for my gpu , there was not enough thermal pads on my package of this Bykski water block  , cheap manufacturer lol.


Mine had more than enough, as long as I cut the strips to just cover each memory die rather than laying a whole strip across the row and wasting a bunch. Looking at a leftover piece, I THINK it's 1.5mm, but I don't have calipers to verify.


----------



## WilliamLeGod

Has any1 got the Msi 3090 suprim yet? Could u upload the 450W vbios? Thanks!


----------



## changboy

Some users of the EVGA RTX 3090 FTW3 Ultra are not happy and say the card underperforms. Can you tell me if this score is OK for my FTW3 Ultra, or bad, on the Superposition 8K benchmark?


----------



## DrunknFoo

ExDarkxH said:


> So i ran some Time Spy Extreme and the card hit 838watts and regularly stayed above 800
> 
> Perf cap reason: Power
> 
> unbelievable. I use 7mo shunts because I wanted to work around the limits of the fuses. I can obviously replace them with stacked 5mo but meh. I realize TS is high power usage but damn. Got a top 20 graphics score at least
> 
> I'm on air too. I feel like once on water it will want to guzzle more power. I dunno what to do I'm kinda "stuck" since its a FTW3 card


838W?!
Is that a Kill A Watt readout, i.e. total system draw?
838W to the GPU alone is nuts =P


----------



## Falkentyne

anethema said:


> Also, are the pads on both sides 1.5mm? I want to redo all the pads from scratch on both sides. I will have them then for when I watercool also.


Hey you haven't posted in 13 days. Everything okay?


----------



## Lobstar

changboy said:


> Some users of the EVGA RTX 3090 FTW3 Ultra are not happy and say the card underperforms. Can you tell me if this score is OK for my FTW3 Ultra, or bad, on the Superposition 8K benchmark?
> View attachment 2466382


My card (ftw3 ultra) rarely breaks 400w. +1498 mem, +200 GPU on air. XOC bios @119%









Reset to 'Default', still on XOC bios, @ 100% (36 more points if I put the power cap back to 119%)


----------



## Falkentyne

DrunknFoo said:


> 838W?!
> Is that a Kill A Watt readout, i.e. total system draw?
> 838W to the GPU alone is nuts =P


Nah, another person running at 4K on his modded board in some brand new game, at max settings with raytracing and another feature, exceeded 700W also.

Overwatch, a 4+ year old game, can pull 550W if you run it at 4K (or 1080p + 200% render scale), max settings and uncapped FPS (which averages around 200 or so with a lot of people shooting at you), if you do this on air cooling...

At 1080p (100% render scale), you reach the FPS cap of 400 FPS at only 390 watts. Here, if the GPU isn't at 99%, the CPU becomes the limit at 90%+ usage, if the FPS goes below 400 FPS.

If you're on water, trying 4K uncapped, you can keep it at about 500 watts with the GPU at 99% usage.
Now imagine doing this in a current game? 800 watts isn't out of the question.


----------



## changboy

OMG, your score is very low! Then maybe my card is OK?

I don't know when you got your card, but if mine was like this I would have returned it. Maybe something is wrong, because it's not normal that my score is 8220 and yours is 6600.

Can it be the CPU making that difference? I have a 10980XE and you a 3950X?


----------



## Lord of meat

WilliamLeGod said:


> Has any1 got the Msi 3090 suprim yet? Could u upload the 450W vbios? Thanks!


Post #5,237 • 11 h ago: someone replied to ya.
I wouldn't flash it.


----------



## DrunknFoo

Falkentyne said:


> Nah, another person running at 4k on his modded board in some brand new game, at max settings, Raytracing and another feature, exceeded 700W also.
> 
> Overwatch, a 4 year old+ game, can pull 550W if you run it at 4k (or 1080p+200% render scale), max settings and uncapped FPS (which is around average of 200 or so with a lot of people shooting at you), if you do this on air cooling...
> 
> At 1080p, (100% render scale), you reach the FPS cap of 400 FPS at only 390 watts.... . Here if the GPU isn't at 99%, the CPU becomes the limit, at 90%+ usage, if the FPS goes below 400 FPS.
> 
> If you're on water, trying 4k uncapped, you can keep it at about 500 watts with GPU at 99% usage.
> Now imagine doing this on a current game? 800 watts isn't out of reason.


... Are you answering on his behalf?
Asking because I'm running stacked 5mΩ and we've got the same card; calculating Kill A Watt minus system draw, it seems unlikely that it is all going to the GPU. Tried TSE for the first time out of curiosity... My temps are higher, score higher, with a lower draw.

edit: typo/grammar


----------



## Lobstar

changboy said:


> OMG, your score is very low! Then maybe my card is OK?
> 
> I don't know when you got your card, but if mine was like this I would have returned it. Maybe something is wrong, because it's not normal that my score is 8220 and yours is 6600.
> 
> Can it be the CPU making that difference? I have a 10980XE and you a 3950X?


My card has a 2014 serial number.  I just got it in a recent batch from Newegg. EVGA responded to my ticket by telling me to do basic troubleshooting and reinstall drivers then closing the ticket. **** them entirely. I'll stick with Asus like I have for the past 20 years for my next high end card purchase.


----------



## changboy

Lobstar said:


> My card has a 2014 serial number. I just got it in a recent batch from Newegg. EVGA responded to my ticket by telling me to do basic troubleshooting and reinstall drivers then closing the ticket. **** them entirely. I'll stick with Asus like I have for the past 20 years for my next high end card purchase.


Call EVGA direct, but Newegg is supposed to exchange a product in the first 30 days; maybe try again tomorrow with Newegg.
My card also has a 2014 serial number. I'm also on air cooling; it can perform a lot better under watercooling.


----------



## Lobstar

changboy said:


> Call EVGA direct, but Newegg is supposed to exchange a product in the first 30 days; maybe try again tomorrow with Newegg.
> My card also has a 2014 serial number. I'm also on air cooling; it can perform a lot better under watercooling.


The issue is, with availability being what it is and EVGA unwilling to help, I don't want to eat a 15% restock fee. I think I'll just try and sell it, to be honest. Let it be someone else's problem.

Here are my Port Royal results OC on the left, stock on the right. Both with XOC bios. 2014 Serial made in Taiwan.


----------



## changboy

Seems not so bad. I don't have Port Royal to check perf on my card, but I'll wait for Black Friday to buy it heheh; if there's no deal I will buy those benchmarks whatever.
I saw in your other post that you overclock your memory very high; for me, after doing some tests with Time Spy, if I pass +800 on memory, performance sometimes decreases. Try an OC at +800 and +50 mV on the GPU, and put your normal OC on the GPU. Try again to see if your result is better.
Did you run DDU cleaner before installing your card?
Myself, I don't know how to set up the NVIDIA panel to get the best result; I actually run default settings.


----------



## Lobstar

changboy said:


> Seam not so bad, i dont have port royal to check perf on my card but i wait black friday to buy it heheh, if no deal i will buy those bench whatherver.
> I saw on your other post you overclock ur memory verry high, me after i have done some test with time spy if i pass + 800 on memory sometimes performance decrease. Try to oc at +800 and +50 mvolt on gpu and put your normal oc on gpu. Try again to see if your result is better. - See below.
> Did you pass ddu cleaner before install ur card ? - yes
> Myself i dont know how to put my setting in nvidia panel to get best result, i actually run as default setting. - I uninstalled all nvidia and did the fresh settings option on standalone driver install.


First is your 800/50 @119%; 800/200 @119%; 0/200 @119%; 1498/200 @119%; 0/0 @100%
Everything is with zero voltage change


















Edit:

Left is 1498/200 @ +50mv 119%, right is the same with zero mv added, everything else the same.


----------



## ExDarkxH

DrunknFoo said:


> ... Are you answering on his behalf?
> Asking cause im running 5s stacked and we got the same card, calculating kilawat -system draw, it seems unlikely that it is all to the gpu. Tried tse for the first time outa curiosity... My temps are higher, score higher with a lower draw
> 
> edit typo grammar


No, it's not a Kill A Watt, that's just HWiNFO, so I'm not sure how accurate it is, but it should be in the ballpark.
I'm getting 240ish to each of the 3 pins and around 120 to the PCIe.
I multiplied the values using 1.71x.

They were 7mΩ shunts on the 500W bios, soldered with the stacked method. Apex mobo with molex to PSU for better power to the PCIe slot.

It's just Time Spy that seems to guzzle power like that.

What did you report as a power draw? I took a look at your score as you're close to me on the boards. I notice you have an extremely high memory overclock, but your average core is around 40MHz lower, and that would sure affect draw.

Before I shunted the SRC I was getting power draw readings of something like:

238W pin A
199W pin B
140W pin C

Ever since I shunted the SRC, power draw skyrocketed, and I can reach 240W on every pin if the GPU calls for it.
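A note on where a 1.71x multiplier like that comes from: if the stock shunts are 5 mΩ (an assumption here, not stated in the post) and a 7 mΩ shunt is stacked on top, the parallel combination lowers the effective resistance, so the monitoring IC sees a smaller voltage drop and under-reports power by the ratio of stock to effective resistance:

```python
# Correction factor for a stacked shunt mod: the monitoring IC still assumes
# the stock shunt value, so true power = reported power * (R_stock / R_parallel).
# Assumes 5 mOhm stock shunts with a 7 mOhm shunt soldered on top.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def correction_factor(r_stock, r_stacked):
    return r_stock / parallel(r_stock, r_stacked)

print(f"{correction_factor(5.0, 7.0):.2f}x")  # 1.71x
```

With those assumed values the parallel resistance works out to about 2.92 mΩ, which is where the ~1.71x scaling of the reported per-rail readings comes from.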


----------



## changboy

13500 or 13700, I don't know if that's normal in Port Royal on air, I didn't try it. I know some score 14600 to 14800, but sometimes guys have done some mods or temperatures are really low. At 40°C you can get a really higher score. I saw my card's boost drop when the card went over 60°C. Every 5 or 10°C you lose some boost. Maybe mine can keep 2200MHz if the temp stays below 40°C. This could give maybe +1000 on the score.
Argh, maybe tomorrow I will look at buying that Port Royal benchmark and come back with my result. It may happen that it's no better than yours, I don't know.


----------



## ExDarkxH

Lobstar said:


> The issue is with availability what it is and EVGA unwilling to help I don't want to eat 15% restock. I think I'll just try and sell it to be honest. Let it be someone else's problem.
> 
> Here are my Port Royal results OC on the left, stock on the right. Both with XOC bios. 2014 Serial made in Taiwan.
> View attachment 2466387


What temps do you usually idle at? Anyway, that's not that bad; it seems very average for these cards (compared to 2-pin cards mostly), I mean 14k would be more ideal. There are more factors at play, like your power supply, etc.
Are you using PX1 to OC? You have to X out of the application after setting overclocks. Even if you minimize, it will rob you of resources; you must X out completely.

The average clock speed is a little on the low side. Is that a max OC?


----------



## HyperMatrix

ExDarkxH said:


> So i ran some Time Spy Extreme and the card hit 838watts and regularly stayed above 800
> I'm on air too. I feel like once on water it will want to guzzle more power. I dunno what to do I'm kinda "stuck" since its a FTW3 card


You will likely use less power under water. Remember it’s not all about power. More power is only good as long as you’re not limited by temperature or by voltage. And while water cooling brings down the temperatures and as a result your power draw, your voltage is going to max out at a certain clock speed. Where that limit is will differ on a card by card basis but don’t expect your GPU to use 800W under water unless you’re volt modding it.


----------



## DrunknFoo

ExDarkxH said:


> No its not kilowatt thats just HWinfo so im not sure how accurate but it should be in the ballpark.
> I’m getting 240ish to each 3pin and around 120 to the pcie.
> I multiplied the values using 1.71x
> 
> They were 7mo shunts on 500w bios. Used solder with stacked method. Apex mobo with molex to psu for better power to pcie
> 
> Its just timespy that seems to guzzle power like that
> 
> What did you report as a power draw? I took a look at your score as your close to me on the boards. I notice you have an extremely high memory overclock but your average core is around 40Mhz lower and that would sure affect draw


For that run I think I roughly estimated 820W total based on the total draw of the system; I didn't really crunch the numbers as I just did it on a whim after seeing your post...
I never thought to edit the HWiNFO offset; I'll look into that and compare with the readings on my meter later.

(Also took the CPU section out of the average equation.) Hmmmm


----------



## deadjon

Anyone have any idea how memory clocks affect core clock stability with Ampere? Specifically at higher temperatures (on air). I've moved my memory up to 21000 for gaming, but I seem to be dropping my core clocks by 10MHz every day just to stay stable. I was running AC Valhalla at a nice and steady 2130MHz core / 20500MHz memory for days, and after moving memory up to 21000 I've stepped the core down twice due to instability. Running a steady 2100 core / 21000 mem now.

Hopefully I didn't screw anything up when I crossflashed the 500W EVGA bios onto my Aorus Xtreme. I smashed a few PR runs out with it, topped 15k, and then called it a day, and I'm back on the stock Gigabyte bios now.

I've also noticed that most high PR scores are using the previous NVIDIA driver version; is that due to OC instability on newer drivers?


----------



## mirkendargen

deadjon said:


> Anyone have any idea how memory clocks affect core clock stability with Ampere? Specifically with higher temperatures (on air). I've moved my memory to up 21000 for gaming but I seem to be dropping my core clocks by 10mhz every day just to stay stable. I was running AC Valhalla at a nice and steady 2130mhz core/20500mhz memory for days and after moving memory up to 21000 I've stepped the core down twice due to instability. Running steady 2100core/21000mem now.
> 
> Hopefully I didn't screw anything when I crossflashed the 500w EVGA bios onto my Aorus Xtreme. I smashed a few PR runs out with it, topped 15k and then called it a day and im back on the stock Gigabyte bios now.
> 
> I've also noticed that most high PR scores are using the previous Nvidia driver version, is that due to OC instability on newer drivers?


Actually, there seemed to be OC instability in the older drivers because they ramped up the clocks more aggressively... which also led to higher scores when you could manage to get it stable. I personally don't notice the core being less stable with higher memory clocks... until the memory clocks cause instability themselves (which usually seems to cause a full reboot for some reason on Ampere).


----------



## Lobstar

ExDarkxH said:


> What temps do you usually idle at? Anyways thats not that bad seems very average for these cards (compared to 2pin cards mostly) I mean 14k would be more ideal. There are more factors at play like your power supply, ext
> Are you using PX1 to OC? You have to X out of the application after setting overclocks. Even if you minimize it will rob you of resources you must X out completely
> 
> The average clock speed is a little on the low side. Is that a max OC?


GPU is idling at 35c. CM v3 1300 psu. 3 individual cables with no splits. This is the max I can get in TSE. Any time I throw any voltage at it the scores start dropping. Thanks for the tip on X1. 

















Edit: Another comparison on TSE


----------



## DrunknFoo

ExDarkxH said:


> What temps do you usually idle at? Anyways thats not that bad seems very average for these cards (compared to 2pin cards mostly) I mean 14k would be more ideal. There are more factors at play like your power supply, ext
> Are you using PX1 to OC? You have to X out of the application after setting overclocks. Even if you minimize it will rob you of resources you must X out completely
> 
> The average clock speed is a little on the low side. Is that a max OC?


Hmmm,
what value was edited for the SRC? Or were only the PCIe and pins modified in HWiNFO?
Guess only the PCIe and pins matter....

If the PCIe and pins 1-3 were averaged out, compared to the Kill A Watt it is reporting over 100W more, give or take a few (at least in my case).


----------



## Sobakaa

WilliamLeGod said:


> Has any1 got the Msi 3090 suprim yet? Could u upload the 450W vbios? Thanks!


Sorry to disappoint you, but it's 420, not 450. Here, take a look at the specs: Specification GeForce RTX 3090 SUPRIM X 24G. It has since been verified and is listed alongside all other MSI bioses now: TechPowerUp.
Edit: My bad, it actually draws up to 450, as that's the "target" power draw.


----------



## bmgjet

Got my EK XC3 block today.
It has one of the least flat surfaces I've ever seen on a waterblock.









The inductors don't make contact using the thermal pads they provide and label in the instructions, which was giving me a perf cap of Therm.
Had to double up the pads to fix that.


----------



## GQNerd

Strix w/Phanteks waterblock, flashed 500w EVGA.. +165core +1300mem

Port Royal - 15,167 - HOF Baby


----------



## DrunknFoo

bmgjet said:


> Got my EK XC3 block today.
> It has one of the least flat surfaces I've ever seen on a waterblock.
> The inductors don't make contact using the thermal pads they provide and label in the instructions, which was giving me a perf cap of Therm.
> Had to double up the pads to fix that.


That is horrible! Contact them; they haven't let me down with their support in my experience. I basically borrowed a 2080 Ti block from them for 2 years and they refunded me entirely.
I assume you attempted several remounts already...


----------



## ShadowYuna

Miguelios said:


> Strix w/Phanteks waterblock, flashed 500w EVGA.. +165core +1300mem
> 
> Port Royal - 15,167 - HOF Baby
> 
> View attachment 2466401


Nice score.

How did you manage to get Phantek Block??


----------



## bmgjet

DrunknFoo said:


> That is horrible! contact them, they haven't let me down with their support from my experiences, i basically borrowed a 2080ti block from them for 2 years and they refunded me entirely.
> i assume u attempted several remounts already...


Have messaged them about it.
Remounted it 3 times with the TIM they provided and couldn't get a good die temp: 60-65°C.
Changed to liquid metal, and that seems to be filling the gap on the die, since the block and die are both convex; that's gotten me to 40-45°C.
Still getting GPU-Z showing a perf cap of Therm and Power for a few ms a couple of times after an hour-long game, so doubling up the thermal pads isn't doing the inductors any favours, but it's better than when they had no contact.
I'm going to get some thermal putty like the factory cooler had and see if that's any better tomorrow.


----------



## GQNerd

ShadowYuna said:


> Nice score.
> 
> How did you manage to get Phantek Block??


Was a sneak drop by Newegg last week


----------



## Sobakaa

I'm sure it's been mentioned at least a dozen times, but it's not in the opening post and I can't find it for some reason: what are the temp thresholds for the 3090 to start dropping frequency?


----------



## ShadowYuna

Miguelios said:


> Was a sneak drop by Newegg last week


I can search for the backplate on Newegg.

Do you mind giving me a product link for the block as well? -- Update: Never mind, found the item.

Thanks in advance


----------



## bmgjet

Sobakaa said:


> I'm sure it's been mentioned at least a dozen of times but it's not in the opening post and i can't find it for some reason, what are the temp thresholds for 3090 to start dropping frequency?


20°C onwards.
Steps look to be every 4°C.

GamersNexus did a video on it and mapped it out using LN2, letting it go from -20°C up to 70°C.
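The stepping described above can be modelled roughly like this (the ~15 MHz bin size and the exact 4°C step are approximations for illustration, not official NVIDIA numbers):

```python
# Crude model of GPU Boost temperature binning: above roughly 20 C the card
# sheds one boost bin (~15 MHz) about every 4 C. Bin size, step width, and
# starting clock are illustrative assumptions only.
def boost_estimate(cold_clock_mhz, temp_c, start_c=20, step_c=4, bin_mhz=15):
    if temp_c <= start_c:
        return cold_clock_mhz
    bins_lost = (temp_c - start_c) // step_c
    return cold_clock_mhz - bins_lost * bin_mhz

print(boost_estimate(2100, 20))  # 2100
print(boost_estimate(2100, 40))  # 2025
print(boost_estimate(2100, 60))  # 1950
```

Which matches the behaviour people report in the thread: a card that holds 2100+ MHz in the 20s and 30s slowly bleeds bins as it heads toward 60°C.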


----------



## rawsome

Mat_UK said:


> Looks nice mate, those crossflow rads really do simplify the tube runs, Never used them myself but maybe in my next build
> 
> I should clarify, my temps above with the 2080ti are gaming temps (Division 2) not benching which is probably going to create a bigger delta gpu > water.
> 
> I don't think your 15c delta is actually too bad and 2 x 420 should be plenty of rad to cool that set up (I have 2 x 360). Do you have good flow rates?
> 
> I am waiting on my Aplhacool block to come, it was due to ship in a couple of days but now the site says 5th Dec.I am itching to test it in my loop but I'll just have to be patient. I'll post here once I get some results.


I have not measured the flow, but I can see the water moving in the reservoir; my D5 pump runs around 3000rpm. The ambient 20° / water 35° is with fans in silent mode. If I turn the fans up, it gets lower.
OK, I have now read a review from Igor's Lab on the reference waterblock, and he gets 9° at 340W. So maybe 15° at 500W is not that bad? I'll power limit my card down and see how it compares.


----------



## Lord of meat

MSI Suprim bios is up and verified.
Two versions. Btw, ugly cooler.








MSI RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## Fire2

ah yes

VGA Bios Collection: MSI RTX 3090 24 GB | TechPowerUp 

nice!


----------



## chef1702

Has anyone already gotten a waterblock for the Trio X 3090 from Bykski? I'm asking about compatibility. On the Bykski site they say the block for the Trio X 3080 is the same as the 3090 (fits both). On Ezmodding's shop page the block is listed as a 3080 block, with no info about 3090 compatibility anywhere in the text. Alphacool's block also says it fits both cards. Now Barrow is making a Trio X 30*80* block too, and there's no info about the 3090 in the text...
It looks like the PCBs are nearly identical; at least the holes are all the same, and the positions of the components that have to be cooled are the same.
Depending on availability I would buy whichever comes first, but not if it's not going to fit. Some extra info would be great.


----------



## rawsome

Lord of meat said:


> Msi suprim bios is up and verified.
> Two versions. Btw ugly cooler.
> 
> 
> 
> 
> 
> 
> 
> 
> MSI RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com





Fire2 said:


> ah yes
> 
> VGA Bios Collection: MSI RTX 3090 24 GB | TechPowerUp
> 
> nice!


What is so special about this BIOS? It's not the power limit, I guess.


----------



## Foxrun

Falkentyne said:


> What did you do differently than before?


My MG pen came in finally, and that did the trick. Now the only issue is temp lol


----------



## vinogrados

And a few more questions:
Do all reference cards use 2x 8-pin power? Are they all limited to 390W, and under 2000 MHz?
Also, has anyone tried the EKWB reference waterblock on a non-reference card? Does it fit any of them?


----------



## GAN77

Miguelios said:


> Strix w/Phanteks waterblock, flashed 500w EVGA.. +165core +1300mem


Congratulations. Where did you buy the new Phanteks product?
What are your impressions? What's your water-to-chip delta?


----------



## Falkentyne

Foxrun said:


> My MG pen came in finally, and that did the trick. Now the only issue is temp lol


So the paint version was too hard to apply but the pen was more accurate?
Both were 842AR right?


----------



## Thanh Nguyen

Which block has the best delta for a shunted card? Over at least a 30-minute stress test.


----------



## LoucMachine

rawsome said:


> What is so special about this BIOS? It's not the power limit, I guess.


I guess it's the compatibility with the X Trio? At least for me, the Strix BIOS was not great: the fan curve didn't work, the power readings were false, flashing the Strix BIOS showed a black screen until it rebooted by itself and almost gave me a heart attack, etc. Lol. I am almost tempted to try this one, especially since I want a higher power target, but I don't necessarily want to go to 480W+ as temps are hard to control and the X Trio power delivery is subpar. From what I have seen, the Suprim is not much better: only 1 more memory phase and 2 more core phases.


----------



## Sobakaa

rawsome said:


> What is so special about this BIOS? It's not the power limit, I guess.


I flashed it on my Gaming X and yeah, it's about the slightly increased limit and a fan curve that matches the fans on MSI products better (fan stop actually engages, etc.). The FTW 500W BIOS gives better overclocks even on unmodified air.


----------



## Lord of meat

LoucMachine said:


> I guess its the compatibility with the X trio? At least for me, the strix bios was not top, fan curve didnt work, power reading were false, flashing strix bios showed a black screen until it reboot by itself and almost gave me a heart attack, etc. Lol I am almost tempted to try this one, especially since I want a higher power target but I dont necessarily want to go to 480w+ as temps are hard to control and the X trio power delivery is sub par, and from what I have seen, the Suprim is not much better, only 1 more memory phase and 2 core phases.


I'm not sure it's baked into the BIOS.

Pixel Rate: 189.8 GPixel/s vs 201.6 GPixel/s
Texture Rate: 556.0 GTexel/s vs 590.4 GTexel/s
FP16 (half) performance: 35.58 TFLOPS vs 37.79 TFLOPS (1:1)
FP32 (float) performance: 35.58 TFLOPS vs 37.79 TFLOPS
FP64 (double) performance: 556.0 GFLOPS vs 590.4 GFLOPS (1:64)

So you get lots of flippy-flop numbers you can brag about.
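Those FP32 figures fall straight out of CUDA cores x 2 FLOPs per clock (FMA) x boost clock. A quick check: the reference boost is 1695 MHz, and 1800 MHz for the second column is my assumption, chosen because it reproduces the 37.79 TFLOPS number:

```python
# Theoretical FP32 throughput for an Ampere card.
def fp32_tflops(cuda_cores: int, boost_mhz: float) -> float:
    # 2 FLOPs per CUDA core per clock (one fused multiply-add)
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

print(round(fp32_tflops(10496, 1695), 2))  # 35.58 (reference boost clock)
print(round(fp32_tflops(10496, 1800), 2))  # 37.79 (assumed higher boost)
```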


----------



## lokran88

ShadowYuna said:


> I am also waiting for Aqua block as well. But as always Aqua block is best but always took too long. So ATM I am waiting for Aqua and Bitspower.(Both Preordered)
> 
> Also Phanteck Block looks good so waiting for that as well.


Today Aquacomputer said that parts for the Strix block are still being finished at an external partner and they do not expect them back before next week.
So I am now expecting the block in two weeks, hopefully. I finally want to watercool that card, as I got a new MO-RA3 radiator for it too.


----------



## Foxrun

Falkentyne said:


> So the paint version was too hard to apply but the pen was more accurate?
> Both were 842AR right?


It was advertised as such (842AR) but it was not listed on the actual packaging, so I am assuming it wasn't. My overclocks that were stable prior to shunting now shut down due to thermals. I arranged the pads differently and added more, which helped the temps, but I still have to drop the power limit a bit. This thing absolutely drinks power. Whoever ends up buying it will need to put it under water or it could likely fry.


----------



## ExDarkxH

For those watercooling,
Did you get better memory overclocks on your card?
I'm a little disappointed with my memory OC and was hoping for more. I want to see if a waterblock would let me add +100-200 to the memory offset.


----------



## cletus-cassidy

bmgjet said:


> Got my EK XC3 block today.
> Has one of the most unflat surfaces iv ever seen on a waterblock.
> View attachment 2466398
> 
> 
> The inductors don't make contact using the thermal pads they provide and label in the instructions, which was giving me PerfCap Therm.
> I had to double up pads to fix that.
> View attachment 2466399
> 
> View attachment 2466400


How could you tell you were getting perf cap thermal limiting? What were the symptoms?


----------



## mattxx88

dante`afk said:


> my FE already clocks better than my previous ftw3 ultra.
> 
> 
> 
> https://www.3dmark.com/spy/15476906
> 
> 
> 
> unfortunately the byksky block they sent me is for ref cards, even though I ordered for FE cards.
> 
> waiting on a block now.


OMG, I've been waiting for the order confirmation for 3 days now. Where did you buy it?
Mine is from formulamod.com.
When you have it in your hands, can you confirm it requires thermal paste rather than thermal pads on the VRAM and MOSFETs?


----------



## Falkentyne

Foxrun said:


> It was advertised as such (842AR) but it did not list it on the actual packaging. I am assuming that it wasnt. My stable overclocks prior to shunting now shut down due to thermals. I arranged the pads differently and added more which helped the therms, but I still have to drop the power limit a bit. This thing absolutely drinks power. Whoever ends up buying it will need to put it underwater or it could likely fry.


Can you tell me the link where you bought the paint jar version and pen version from?


----------



## PsY4

Here is the *390W *BIOS for *KFA2/GALAX RTX 3090 SG* : KFA2 RTX 3090 VBIOS

Got it from their support. All DisplayPorts work, and it's working great so far for me, better than the Gigabyte one because all ports work and the base power target is lower!


----------



## long2905

PsY4 said:


> Here is the *390W *BIOS for *KFA2/GALAX RTX 3090 SG* : KFA2 RTX 3090 VBIOS
> 
> Got it from their support, all DisplayPorts working, working great so far for me, better than Gigabyte one 'cause all ports working, and lower base power target !


nice. i will try it out and see how it goes


----------



## Foxrun

Falkentyne said:


> Can you tell me the link where you bought the paint jar version and pen version from?


Circuit Pen: https://www.amazon.com/gp/product/B00B88B9KI/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1
MG Pen: https://www.amazon.com/gp/product/B01LYXQE0M/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1

The circuit pen worked on my RTX Titan. The MG pen worked on my Volta and 2080 Ti, as well as the 3090. I think I may need more thermal pads; the Grizzly pads may be a tad thinner than the stock ones, which may be causing my 0.5-second thermal limit in GPU-Z.


----------



## NBrock

Lobstar said:


> My card (ftw3 ultra) rarely breaks 400w. +1498 mem, +200 GPU on air. XOC bios @119%
> View attachment 2466384
> 
> 
> Reset to 'Default', still on XOC bios, @ 100% (36 more points if I put the power cap back to 119%)
> View attachment 2466385


Your GPU memory OC is probably not stable and killing your score.


----------



## GanMenglin

chef1702 said:


> Did someone already get a waterblock for the Trio X 3090 from Bykski? The reason I'm asking is about the compatibility. On the Bykski site they say the block for Trio X 3080 is the same as 3090 (fits both). On Ezmodding's shop page the block is listed as 3080 block and no info about 3090 comp. anywhere in the text. Alphacool's block also says it fits both cards. Now Barrow is making a Trio X 30*80* block too and no info about the 3090 in the text...
> It looks like the PCB is nearly equal, at least the holes are all the same and the positions of the components which has to be cooled are the same.
> Depending on the availability I would buy which comes first but not if it's not gonna fit. Some extra info would be great.


I'm using it now; it's the same. Actually, the engraving on the waterblock is "3080".


----------



## chispy

mirkendargen said:


> Mine had more than enough as long as I cut the strips to just cover each memory die, not lay a whole strip across the row wasting a bunch. Looking at a leftover piece I THINK it's 1.5mm but I don't have calipers to verify.


Well, it's enough for the front of the card but not enough for the backplate, unfortunately. I managed to redo the front of the card fully, everything is covered, but I could not install the backplate because there weren't enough thermal pads.

On my 3090 Strix, shunt modded (all shunts with 5 mΩ), with this Bykski block I get a 17°C delta from ambient. At idle the card sits at 30°C; under load, max temperatures are 48°C looping Time Spy Extreme 4K for 30 minutes at 1.87v 2100 MHz, though it goes down one bin at 45°C to 2075 and stays there, stable for 30 minutes.

Testing with a big fat HWL Black Ice GT 360 rad + D5 pump maxed out. I think the thermal pads are 1.5mm, so I will order some more for the backplate and to have spares in case I need them.


----------



## cletus-cassidy

What are people roughly getting for memory offset? I can't go above +380 on my Strix without it crashing. That seems (very?) much lower than memory increases others are using, and I'm wondering if the thermal pads on my block are not making proper contact. Maybe there is another issue? Curious what others think.



https://www.3dmark.com/pr/530365


----------



## Foxrun

Do you get any thermal limit alerts in GPU-Z while benching or gaming?


----------



## Pepillo

GanMenglin said:


> I'm using it now, it's same. Actually the engraving on the water block is "3080"


In a few days I get mine for use with the original backplate, does it work well? Is it easy to install? Are the instructions clear? The right thermal pads? Anything to mention?

Thanks


----------



## chispy

cletus-cassidy said:


> What are people roughly getting for memory offset? I can't go above +380 on my Strix without it crashing. That seems (very?) much lower than memory increases others are using, and I'm wondering if the thermal pads on my block are not making proper contact. Maybe there is another issue? Curious what others think.
> 
> 
> 
> https://www.3dmark.com/pr/530365


I had the same issue on my first mount. I cleaned everything with 91% alcohol and reworked the thermal pads on the memory, and now I can run 1375 MHz on the memory. Somewhere, one or more of your pads are not making good contact with your memory chips. What block are you using?


----------



## Carillo

cletus-cassidy said:


> What are people roughly getting for memory offset? I can't go above +380 on my Strix without it crashing. That seems (very?) much lower than memory increases others are using, and I'm wondering if the thermal pads on my block are not making proper contact. Maybe there is another issue? Curious what others think.
> 
> 
> 
> https://www.3dmark.com/pr/530365


All 7 cards I have tried have done from +850 up to the last one, which does +1750 on air. Have you tried raising the memory offset without touching the core offset?


----------



## shiokarai

cletus-cassidy said:


> What are people roughly getting for memory offset? I can't go above +380 on my Strix without it crashing. That seems (very?) much lower than memory increases others are using, and I'm wondering if the thermal pads on my block are not making proper contact. Maybe there is another issue? Curious what others think.
> 
> 
> 
> https://www.3dmark.com/pr/530365


My Strix 3090 won't go over 500-550 (benching) or 400-450 (demanding games) on memory either. Just a "bad"-bin card, but still within spec... or there are some serious thermal issues; I haven't found out yet (waiting for my Strix block). Your score is better than mine, though (mine is about 14500 PR with [email protected]).


----------



## Falkentyne

Foxrun said:


> Circuit Pen :https://www.amazon.com/gp/product/B00B88B9KI/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1
> MG Pen :https://www.amazon.com/gp/product/B01LYXQE0M/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1
> 
> Circuit pen worked on my RTX Titan. MG worked on my Volta and 2080ti, as well as the 3090. I think I may need more thermal pads. The grizzly pads may be a tad bit thinner than the stock which may be causing my .5 second therm limit in GPU-Z.


Thank you. I'm a bit confused. Are you saying the circuitwriter pen failed on the 3090? Or the MG _paint_ (Jar) failed but the MG pen worked?

Are you using the 1mm or 1.5mm grizzly pads?
Did you re-pad both the GPU core side and backplate or only the backplate?

I used 1.5mm Thermalright Odyssey pads for the Vram and hotspots.


----------



## Foxrun

Falkentyne said:


> Thank you. I'm a bit confused. Are you saying the circuitwriter pen failed on the 3090? Or the MG _paint_ (Jar) failed but the MG pen worked?
> 
> Are you using the 1mm or 1.5mm grizzly pads?
> Did you re-pad both the GPU core side and backplate or only the backplate?
> 
> I used 1.5mm Thermalright Odyssey pads for the Vram and hotspots.


The circuit pen failed; the MG pen worked. I used 1mm but I think I'm going to have to use 1.5mm, since I still had to use some of the stock pads. I put them on both sides.


----------



## cletus-cassidy

chispy said:


> I had the same issue on my first mount , clean everything with alcohol 91% , re worked the thermal pads on the memory and now i can run 1375Mhz memory. Somewhere one of your pads or more are not making good contact on your memory chips. What block are you using ?


Using the EK Strix block and backplate. Maybe need to use thicker thermal pads.


----------



## Foxrun

Falkentyne said:


> I used 1.5mm Thermalright Odyssey pads for the Vram and hotspots.


I just ordered 1.5mm pads. I’ll do a whole repadding tomorrow with all identical pads.


----------



## cletus-cassidy

Carillo said:


> All 7 cards I have tried have done from +850 up to the last one, which does +1750 on air. Have you tried raising the memory offset without touching the core offset?


Have not yet tried that but I will. Currently using +135 on the core and +380 on the memory. Is there any way to determine if the RAM is overheating in HWINFO or otherwise?


----------



## Foxrun

cletus-cassidy said:


> Have not yet tried that but I will. Currently using +135 on the core and +380 on the memory. Is there any way to determine if the RAM is overheating in HWINFO or otherwise?


GPU-Z was the only way I could tell that my VRM was/is overheating.


----------



## Carillo

cletus-cassidy said:


> Have not yet tried that but I will. Currently using +135 on the core and +380 on the memory. Is there any way to determine if the RAM is overheating in HWINFO or otherwise?


I know the FE has a memory IC temp readout, but the Strix doesn't, as far as I know. You could try pointing a fan at the backplate to see if you can push the memory further. If I were you, I would remove the cooler to see if all memory chips have pads. I once found that the thermal pad on a single memory IC was missing entirely. That was on the factory cooler of an MSI 1080 Ti, and I was the one who broke the "factory seal" sticker. So you never know. Or maybe you just got unlucky bin-wise 🤷‍♂️


----------



## Lobstar

Anyone else's card idle at near 160w?


----------



## dr/owned

Posting some notes I took from my 3090 TUF PCB:

- For the PGSCON (I2C) header: the square hole is SCL, the middle hole is GND, the last hole is SDA.
- My right-angle header needs 2.5mm of clearance.

PCB = 0mm
Memory = 1mm
Metal frame around package = 2.53mm (probably 2.5mm)
PCB_package = 1.8mm
Die = 2.68mm (yes the die is slightly higher than the metal frame) [probably 2.75mm really if I had more accurate measuring capability]
VRM capacitors = 1.75mm (also noticed they are all Panasonic which differs from the 3080 TUF review PCB shots)
Inductors = 4.85mm
VRM mosfets = 0.7mm

- The backside capacitors were not making any contact with the thermal pads.
- The best way to get these thermal pads up without them splitting in half is with a guitar pick / iFixit pick.


Figuring out thermal pads is not always straightforward, though, because it depends on how the cooler is designed. Most of the time a waterblock has more material built up over the memory, VRM, and GPU, so the thermal pads can be thinner.


----------



## 0209002877224

Anyone having trouble flashing the ZOTAC 3090 Trinity?

I am trying to flash the ZOTAC card with the Gigabyte OC BIOS. I successfully flashed it (first time) according to GPU-Z, but my main monitor (DisplayPort Dell) isn't recognised.

Is this basically unfixable (I don't have a clue how to fix this)?

I googled "second monitor not working after gpu flashing" but couldn't find much.

I also reinstalled drivers (cleaning with DDU), tried to detect new monitor hardware in Device Manager and in Windows Display settings, etc. No dice.

Cheers!


----------



## Carillo

Does anyone know of a web shop somewhere out there in the universe that has the MSI Ventus water block in STOCK, ready to ship? (Except AliExpress.)


----------



## pat182

0209002877224 said:


> Anyone having trouble flashing the ZOTAC 3090 Trinity?
> 
> I am trying to flash the ZOTAC card with the GIGABYTE OC one, I succesfully flashed it (first time) according to GPU-Z but my main monitor (DisplayPort Dell) isn't recognised.
> 
> Is this basically unfixable (I don't have a clue how to fix this)?
> 
> I googled "second monitor not working after gpu flashing" but couldn't find much.
> 
> I also reinstalled drivers (cleaning with DDU), tried to detect new montitor hardware in Device manager and Display on windows etc, no dice.
> 
> Cheers!


Probably the BIOS disables some DP and HDMI connections. Try another output.


----------



## 0209002877224

pat182 said:


> probly the bios disable some dp and hdmi connections., try an other output


Yes once I flash back to my backup it's fine.

I can't use another port because I need to use the 165Hz on my dell monitor.

You think disabling some of the unused ports might make it get detected?

That's odd!


----------



## changboy

I finally ran this Port Royal benchmark with my FTW3 Ultra, and my result is not very high. I'm on air: 14162.


https://www.3dmark.com/3dm/53481573?


----------



## ExDarkxH

changboy said:


> Finally i have done this royal benchmark with my ftw3 ultra and my result not verry high, iam on air : 14162
> 
> 
> https://www.3dmark.com/3dm/53481573?


That's a decent score; you're in line with where you should be. Most of the 2-pin cards have a hard time scoring 14k with the stock BIOS. 3-pin cards can usually do 14k-14.4k.
For the most part, 14.5k+ requires higher power draw OR a good bin.

To reach 14.8k+ on air you need a golden lottery winner. There are some reporting this, but it's rare, only a tiny percentage of users.

Almost 14.2k is good.


----------



## VickyBeaver

0209002877224 said:


> Anyone having trouble flashing the ZOTAC 3090 Trinity?
> 
> I am trying to flash the ZOTAC card with the GIGABYTE OC one, I succesfully flashed it (first time) according to GPU-Z but my main monitor (DisplayPort Dell) isn't recognised.
> 
> Is this basically unfixable (I don't have a clue how to fix this)?
> 
> I googled "second monitor not working after gpu flashing" but couldn't find much.
> 
> I also reinstalled drivers (cleaning with DDU), tried to detect new montitor hardware in Device manager and Display on windows etc, no dice.
> 
> Cheers!


The Gigabyte BIOS will usually make one or more DisplayPorts stop working.
A workaround is to use a DisplayPort-to-HDMI converter on those ports: flash back to the original BIOS to get the ports working again, then flash the Gigabyte BIOS and they will keep working. However, they will lose all functionality again if you plug a normal DisplayPort connection into them.
I'm going to be looking into this BIOS myself, as a few posts back this 390W BIOS apparently works on reference cards without the loss of any DisplayPorts: https://www.techpowerup.com/vgabios/226727/226727
I am also very curious whether the EVGA XC3 watercooled BIOS will be functional on reference cards, but I have not seen that BIOS in the wild yet.


----------



## 0209002877224

VickyBeaver said:


> the gigabyte bios usually will make one or more display ports stop working..
> a work around for this is to use a displayport to HDMI converter on those ports, flash back to oringal bios get the ports working again, then flash the gigabyte bios and they will work how ever the will loose all functionality again if you plug a normal display port connection into them.
> going to be looking into this bios myself as a few posts back aparently this 390w bios works with refrence with out the loss of one or more display ports KFA2 RTX 3090 VBIOS ,
> I am also very curious about the evga XC3 watercooled bios as to if that will be functional on refrence cards but have not seen that bios in the wild yet


OK, interesting, thanks for your reply! I didn't try the other DisplayPort; I didn't think that only one would be disabled (if at all!).

Just to clarify: the DisplayPort on the graphics card is (potentially) disabled, right? Not on the monitor?

So if the other DP doesn't work, where does the adapter go? Monitor -> DP-to-HDMI converter -> HDMI cable to the GPU's HDMI port?

Thanks again


----------



## Lobstar

ExDarkxH said:


> Almost 14.2k for you is good


I've now tweaked my card to run nothing but Port Royal at the best score I can get on air with my FTW3 Ultra. I've run PR 55 times over the past 24 hours; all tweaks were done with no changes to voltage, as my card ****s itself at anything but +10mV and stock clocks. I'm 14 points away from 14k. The result of so much testing: I started at 12968 and ended at 13986, on air only.



https://www.3dmark.com/3dm/53482066?


----------



## changboy

13986 is not far from me.

I didn't play much with my settings; I put +800 on memory and +165 on the GPU. Maybe I can get a little better if I keep tweaking, but I can't run +165 on my GPU in many games (some work, some crash), so I just put +125 on the GPU and leave it like that. I do see my boost clock start high and then drop, so if I watercool it I will keep the boost.
Just waiting for EK to release a block for this model.


----------



## changboy

In Time Spy I got this: 20108.


https://www.3dmark.com/spy/15491079


----------



## Zogge

I have high ambient temps and didn't really tweak my system that much to reach 14,764 at +210/+750. I am sure I can get 100 or 200 more if I lower ambient to 20°C (we have 28°C indoors now), as my card hit 48°C on the 500W BIOS, and if I work some more on the memory OC.
My water loop delta T is under 0.5°C at load, so there I have done what I can.


----------



## Lobstar

I removed my overclock but kept the beta BIOS and beta driver, and Doom Eternal crashes on max settings. God, I hate EVGA.


----------



## changboy

It can happen that your card has a problem. I haven't tried Doom Eternal yet. But why the beta driver? Why don't you use the normal driver?


----------



## jura11

Got a Palit RTX 3090 GamingPro here for testing, and I must admit it's almost half the size of my Asus RTX 2080 Ti Strix OC with a waterblock.

Just putting the GPU through its paces and benchmarks. Initial thoughts: it's power limited a lot. I tried Port Royal, and there the GPU is bouncing around or running at 0.906v most of the time, which means a shunt mod is the only option if I want more power. Temperatures with the Bykski waterblock and backplate are a beautiful 30-35°C.

Not sure on results yet; I will post them later today or tomorrow.

Hope this helps

Thanks, Jura


----------



## Lobstar

changboy said:


> It can happen your card have a problem, i didnt try doom eternal yet, but why beta driver, why dont you put the normal driver ?


It's a new Vulkan driver. It crashes on the WHQL driver too.


----------



## GanMenglin

Pepillo said:


> In a few days I get mine for use with the original backplate, does it work well? Is it easy to install? Are the instructions clear? The right thermal pads? Anything to mention?
> 
> Thanks


Yes, I'm using it with the original backplate; they actually don't have a customized backplate for it. Everything is easy with it.


----------



## Pepillo

GanMenglin said:


> Yes, I'm using it with the original backplate, actually they dont have the customized backplate for it. Every thing is easy with it.


Thanks for the answer. I look forward to fitting the block; I am very happy with the 4K performance of this graphics card.


----------



## Cavokk

Zogge said:


> My water loop delta T is under 0.5 degrees at load so there I have done what I can.


Delta T between what? Water before vs. after the GPU in the flow, or?

Thanks

C


----------



## DrunknFoo

ExDarkxH said:


> For those watercooling,
> Did you get better memory overclocks on your card?
> Im a little disappointed with my Memory OC and was hoping for more. Want to see if a waterblock would help me add + 100 - 200 memory offset


Have you tested the RAM frequency without a GPU offset, or with a negative offset? Temps definitely play a huge role in how the Micron chips automatically step their clocks based on temperature.


----------



## cletus-cassidy

Foxrun said:


> Do you get any thermal limit alerts in GPU-Z while benching or gaming?


Tested again with GPU-Z and no thermal limit alerts in GPU-Z other than in PerfCap Reason = Vrel VOp. I think that's fine (I don't have access to the core voltage slider with my Strix in Afterburner). I don't have a way to see RAM temps on the Strix so not sure where else I would be seeing VRAM thermal throttling evidence in GPU-Z.


----------



## NapsterAU

I'm sorry, whaaat? I see at best a 12-15°C delta from my loop to GPU with the EK Strix block on my Zotac.


----------



## HyperMatrix

changboy said:


> Finally i have done this royal benchmark with my ftw3 ultra and my result not verry high, iam on air : 14162
> 
> 
> https://www.3dmark.com/3dm/53481573?


That's higher than my Strix got. Best I got on air was just over 14k. I sold the card yesterday and going to roll the dice on the silicon lottery with 2 or 3 other cards.


----------



## Falkentyne

cletus-cassidy said:


> Tested again with GPU-Z and no thermal limit alerts in GPU-Z other than in PerfCap Reason = Vrel VOp. I think that's fine (I don't have access to the core voltage slider with my Strix in Afterburner). I don't have a way to see RAM temps on the Strix so not sure where else I would be seeing VRAM thermal throttling evidence in GPU-Z.


Idle / Utilization = GPU clock boost and voltage boost are available
VRel = GPU clock boost is being limited by voltage reliability (BIOS)
VOp = voltage has reached its maximum operating point (silicon)
PWR = GPU clock is being limited by a power limit (total board power, chip power, power supply rail, PCIe slot rail, or internal rails)
THERM = GPU clock is being limited by a thermal event
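For anyone reading GPU-Z logs, here is a hypothetical decoder for the PerfCap reasons listed above. The flag spellings and the space-separated input format are my assumptions, not GPU-Z's actual log layout:

```python
# Map GPU-Z PerfCap flags to the explanations above.
# Flag spellings and the input format are assumptions, not GPU-Z's spec.
PERFCAP = {
    "idle": "GPU clock boost and voltage boost are available",
    "util": "GPU clock boost and voltage boost are available",
    "vrel": "GPU clock boost limited by voltage reliability (BIOS)",
    "vop": "voltage at its maximum operating point (silicon)",
    "pwr": "GPU clock limited by a power limit",
    "thrm": "GPU clock limited by a thermal event",
}

def decode(reasons: str) -> list:
    """Turn a space-separated PerfCap string into human-readable causes."""
    return [PERFCAP.get(flag.lower(), "unknown flag: " + flag)
            for flag in reasons.split()]

# e.g. the "Vrel VOp" reading mentioned earlier in the thread
print(decode("Vrel VOp"))
```

A "Vrel VOp" reading therefore means the card is voltage-bound, not power- or temperature-bound.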


----------



## cletus-cassidy

Falkentyne said:


> Idle / Utilization=GPU clock boost and voltage boost is available
> VREL= GPU clock boost is being limited by Voltage Reliability (Bios)
> VOp=Voltage has reached its maximum operating point (silicon).
> PWR=GPU clock is being limited by a Power Limit (total board power, chip power, power supply rail, PCIE slot rail, or internal rails)
> THERM: GPU clock is being limited by a thermal event


Many thanks for this, really helpful. I assume a thermal event would cover both GPU and VRAM overheating, so maybe VRAM heat is not the cause of my weak memory OC? I will do some more testing to make sure there are no thermal events.


----------



## changboy

HyperMatrix said:


> That's higher than my Strix got. Best I got on air was just over 14k. I sold the card yesterday and going to roll the dice on the silicon lottery with 2 or 3 other cards.


Really? So my card is not so bad at all, lol.
Actually, I just run my G.Skill memory at 3200 MHz on the XMP setting. I know I can go higher if I overclock the memory, but I don't know how, and I don't want to corrupt my system by trying. I think this memory doesn't OC very well anyway, not sure, hehehe. I know how to OC my CPU but have no experience with memory.


----------



## Falkentyne

cletus-cassidy said:


> Many thanks for this. Really helpful. Assume a thermal event would cover both GPU and VRMA overheating so maybe VRAM heat not the cause of my weak mem OC? Will do some more testing to ensure no thermal events.


These cards run as hot as a weak portable heater. Heat can very well limit the VRAM overclock; however, each and every card should be able to do at LEAST a +250 MHz RAM overclock, as that's 10,000 MHz. I'm not sure if that's the spec or if the spec is "10 Gbps".

If you can't even go up by 50 MHz, check your VRAM thermal pads and make sure there's actually an imprint on them from contact.
Really good thermal pads to buy for these cards are Thermalright Odyssey 12.8 W/mK in 1.0mm, 1.5mm or 2.0mm. The large 120x120mm sheets are only available on AliExpress, but you can ask one of Thermalright's USA suppliers, FrozenCPU or Nan's Gaming Gear, to see if they can order them for you; the smaller pads are more easily available. I would avoid the Fujipoly 11 W/mK pads because they're not much larger than a postage stamp; 60x50mm is very small. One 80mm x 45mm sheet should be enough to do one full side of a video card, unless you're adding pads for hotspots.

Naturally, the Fujipoly 17 W/mK would be the best pads once you know the exact sizes, but you would need to buy four of them to do a full-sized 3090, and that would set you back $100. Plus, these pads are known to rip and crumble once you open up the heatsink even once. The 11 and 12.8 W/mK pads are more reusable, but since they are soft pads, they should be replaced after the card is serviced, to preserve the heat-transfer capability that is lost after they compress into place.


----------



## ScottRoberts91

I changed my Gaming X Trio over to the Suprim BIOS and was back to hitting the power limit every few seconds with an unstable frequency. I reinstalled the 500W EVGA BIOS and I'm back to a stable 2070 MHz locked, sitting at 72°C on the stock cooler.

Fingers crossed the Lightning Z will have a 500W BIOS.


----------



## defiledge

Does running a high power limit damage the silicon?


----------



## Falkentyne

defiledge said:


> Does running a high power limit damage the silicon?


Stock cards on air cooling? Not even close.

Shunt modded? Well... as long as you don't use voltage mods, the silicon can take it.
The question is whether the VRMs can handle the current. Most boards are over-specced (except Zotac).
We all know most boards have no problem with 600 watts. The question is whether you can cool it.
The GPU starts going nuclear at >1.10V at >500 watts.


----------



## defiledge

Falkentyne said:


> Stock cards on air cooling? Not even close.
> 
> Shunt modded? Well...As long as you don't use voltage mods, the silicon can take it.
> The question is if the VRM's can handle the current. Most boards are over-specced (except Zotac).
> We all know most boards have no problem with 600 watts. The question is if you can cool it.
> The GPU starts going nuclear at >1.10v at >500 watts.


I have a TUF that I'm planning to run at 600W. Wondering if I will damage the die if the wattage is too high.


----------



## long2905

The KFA/GALAX vBIOS is great so far for my iChill X4. All DP ports work as expected AND fan control seems to be better.


----------



## dr/owned

defiledge said:


> I have a tuf that I'm planning to run at 600W. Wondering if I will damage the die if the wattage is too high.


The power is irrelevant if it's cooled properly. I'm aiming for around 800W on mine with volt mods and shunting.

Speaking of...










All cleaned up: I2C header soldered, shunt resistors tinned and waiting for the 4 mOhms to arrive (theoretically 833W total power), and an extra 18 AWG wire run from the 8-pin to the PCIe shunt so I can slam 120W through that without melting the slot fingers. Won't be able to actually power the thing on until the boat from China arrives with my waterblock.
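The "theoretically 833W" figure falls out of how parallel shunt stacking fools the current sensing. A rough sketch (the 5 mOhm stock shunt value and 370W reference limit are assumptions that reproduce his number, not measurements from this card):

```python
# Soldering a second shunt on top of the stock one puts the two in parallel,
# so the controller under-reads current and the firmware limit scales up.
def parallel_mohm(r_stock, r_stacked):
    return (r_stock * r_stacked) / (r_stock + r_stacked)

def real_power_limit(bios_limit_w, r_stock=5.0, r_stacked=4.0):
    # Sensed power scales with resistance, so the true limit is
    # bios_limit * (stock R / effective R). 5||4 = 2.22 mOhm -> 2.25x.
    return bios_limit_w * r_stock / parallel_mohm(r_stock, r_stacked)

print(real_power_limit(370))  # ~832.5 W, i.e. the "theoretically 833 W"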


----------



## defiledge

dr/owned said:


> The power is irrelevant if it's cooled properly. I'm aiming for around 800W on mine with volt mods and shunting.
> 
> Speaking of...
> 
> View attachment 2466453
> 
> 
> All cleaned up, I2C header soldered, shunt resistors tinned waiting for 4mOhms to arrive (theoretically 833W total power), and extra 18awg wire run from 8 pin to PCIe shunt so I can slam 120W through that without melting the slot fingers. Won't be able to actually power the thing on until the boat from china arrives with my waterblock.


How does that wire work? I thought the PCIe slot could handle 120W anyway.


----------



## dr/owned

defiledge said:


> how does that wire work. I thought the pcie slot could handle 120W anyways


All of the 12V fingers (5 of them in total) on the PCIe slot are connected together in a power plane near that shunt resistor. The fingers are the weak link because each is just a narrow strip where the slot makes contact (some people will say it's the 24-pin connector that supplies PCIe power, or the motherboard traces from the 24-pin to the PCIe slots... I think those are beefy enough on modern PSUs and mobos). This will add supplemental 12V power from the 8-pin directly before the shunt resistor, but it will still be accounted for by the shunt sensing itself. I don't need to run a ground wire because there's nothing to really add; all of the grounds from the PCIe slot to the 8-pins are already shorted together.

18 AWG is rated for about 10A by itself, plus the card is still able to use the slot... so this is just to ensure there's no chance the 4 mOhm stacked shunt (145W) is going to blow up anything related to the slot. Without it I wouldn't really be comfortable going above 12 mOhm on the stack (~95W slot power).
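The headroom math behind those numbers can be sketched in a few lines (the ~10 A continuous figure for 18 AWG and the single 12 V rail are the assumptions here):

```python
# Rough headroom check for the supplemental slot-power wire.
RAIL_V = 12.0

def rail_watts(amps):
    return amps * RAIL_V

wire_w = rail_watts(10.0)   # ~120 W the 18 AWG wire can carry on its own
slot_w = rail_watts(5.5)    # 66 W, the 12 V portion of the 75 W slot spec
print(wire_w + slot_w)      # ~186 W combined, comfortably above the 145 W stack
```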


----------



## bmgjet

defiledge said:


> how does that wire work. I thought the pcie slot could handle 120W anyways


It redirects the power that would be going to the slot to that pin on the 8-pin.
A better way, if you have a card with a fuse, is to remove the fuse and solder it on there.
Otherwise he is going to have to insulate the edge connector contacts to stop it pulling from them.
A bit of electrical tape on them or nail polish works.
And you'd be better off just getting an old 6-to-8-pin adapter and using a separate plug with the 12V wires all tied together, since you're asking a lot of that one pin on the 2nd 8-pin plug.


----------



## Lobstar

I posted my data on the EVGA forum, but here is my FTW3 Ultra which won't boost. My UPS is listed there as well. Port Royal, BIOS as specified in the sheet, mem/GPU were +0, power sliders were maxed at 107 and 119 respectively. Verified via command line that I'm on the 500W BIOS. Graphics Card BS


----------



## changboy

Lobstar said:


> It's a new vulkan driver. it crashes on the whql driver too.


If you want to stop crashing in DOOM, do this:
On the C drive, go under Users (your PC name) to My Games, then id Software, and delete DOOM ETERNAL. Yes, you will lose your saved game, but you won't crash anymore.
It happened to me because I took a copy from the web and purchased the game afterwards, and got crashes; then I did that and had no more crashes. If your game isn't original, that may be the issue.


----------



## dr/owned

bmgjet said:


> Redirects the power from what would be going to the slot to that pin on the 8pin.
> Better way would be if you have a card with a fuse you can remove the fuse and solder it on to there.
> Other wise he is going to have to insolate the edge connector contacts to stop it pulling from them.
> Bit of electrical tape on them or nail polish works.
> And youd be better just getting like a old 6 to 8pin adapter and use a seperate plug with the 12v wires all tied together since your asking a lot of that one pin on the 2nd 8pin plug.


I'm running it in parallel with the slot, since the goal was just to supplement, not isolate. If I was aiming to push an absurd amount of power through the slot shunt then yeah, I'd have run a separate Molex or PCIe connector, but for cable management tidiness I just wanted to keep it internal to the card with one wire.

I will be curious whether that one pin on the 8-pin gets toasty from the extra load; I have an IR camera, though, that I'm going to use to monitor it. I'm hoping these shenanigans prove that the 8-pins are hugely under-rated.


----------



## Falkentyne

defiledge said:


> how does that wire work. I thought the pcie slot could handle 120W anyways


All bets are off when you exceed 85W through the PCIe slot. A lot depends on motherboard quality, physical slot construction, cooling, and so on. The MXM graphics slot is only spec'd to 150W continuous and 195W burst (which is why 150W GTX 1080 cards do not have an auxiliary power connector but 200W cards DO have one), but I've run 230W directly through my MXM slot on a TDP-modded GTX 1070 for years and the laptop hasn't melted yet. That's already 80W over the continuous spec maximum and 35W over the max burst spec.

Has anyone actually bothered to email PCI-SIG and ask them if a PCI Express 4.0 or 5.0 slot can handle more than 75W? And whether the limit is really about amps? (The spec budgets 5.5A on the 12V rail, i.e. 66W, with the rest of the 75W on 3.3V.)
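For what it's worth, the nominal 75W slot figure in the PCIe CEM spec is split across two rails rather than being a single 12V number; a quick sketch of the usual breakdown:

```python
# PCIe CEM power budget for a 75 W add-in card in a high-power slot.
budget_12v = 12.0 * 5.5   # up to 5.5 A on +12 V -> 66 W
budget_3v3 = 3.3 * 3.0    # up to 3 A on +3.3 V -> ~9.9 W
total = budget_12v + budget_3v3
print(total)  # ~75.9 W, the nominal "75 W" slot budget
```

Which is why shunt modders usually treat ~66W, not 75W, as the slot's 12V share.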


----------



## VickyBeaver

0209002877224 said:


> Ok interesting, thanks for your reply! I didn't try the other DisplayPort... didn't think that only one would be disabled (if at all!)
> 
> Just to clarify: The displayport on the graphics card is (potentially) disabled, right? Not the monitor?
> 
> So if the other DP doesn't work, what do I buy the adapter for? So It goes Monitor -> DP to HDMI converter -> HDMI cable to GPU HDMI slot?
> 
> Thanks again


Usually it's the middle DisplayPort that stops working out of the 3, but yeah, that GALAX BIOS I linked provides the 390W power limit and allows normal functionality of all display ports.


----------



## olrdtg

bmgjet said:


> Redirects the power from what would be going to the slot to that pin on the 8pin.
> Better way would be if you have a card with a fuse you can remove the fuse and solder it on to there.
> Other wise he is going to have to insolate the edge connector contacts to stop it pulling from them.
> Bit of electrical tape on them or nail polish works.
> And youd be better just getting like a old 6 to 8pin adapter and use a seperate plug with the 12v wires all tied together since your asking a lot of that one pin on the 2nd 8pin plug.


I've been considering insulating the PCIe slot power fingers on my card and running a wire from a separate 6-pin cable to the power plane. So I would be better off connecting the 3 +12V wires together at the end prior to connecting to the card? I had a plan to rip an old 6-pin socket off a dead GPU and Frankenstein up an adapter that I could just attach to the PCB next to the 12-pin, since my water block has enough space over there. I suppose I could just join the 3 +12V pins on the connector there with a bit of metal and solder to distribute the power as evenly as possible.

I recently finished hacking up my NVIDIA 2x 8-pin to 12-pin adapter and turning it into an effective 2x8+6-pin to 12-pin adapter. I ran 2 of the new +12V wires onto two of the first 8-pin's +12V wires, then attached the third to a +12V on the other 8-pin, to help alleviate some potential stress on the 2 8-pins after I stacked 3 mOhm on top of my 3 mOhm resistors for a 770W limit. Testing so far, everything looks OK: cables are not melting, and measuring the added 6-pin shows it is indeed utilizing the extra cable and taking a small amount of the load off the 8-pins. I left the slot shunt alone for the time being, until I can come up with a solution for the slot power, which will probably be what I detailed above.

I had originally planned to stack a 2 mOhm on the 3 mOhm for a 900W power limit, but I think that's too much for a hacky 8+6-pin adapter.


----------



## Falkentyne

olrdtg said:


> I've been considering insulating the PCIe slot power fingers on my card and running a wire from a separate 6-pin cable to the power plane. So I would be better off connecting the 3 +12v wires together at the end prior to connecting to the card? I had a plan to rip an old 6-pin slot off a dead GPU and frankensteins monstering up an adapter that I could just attach to the PCB next to the 12-pin since my water block has enough space over there, I suppose I could just join the 3 +12v pins on the connector there with a bit of metal and solder to evenly distribute the power as best possible.
> 
> I recently finished hacking up my NVIDIA 2x 8-pin to 12-pin adapter and turning it into an effective 2x8+6pin to 12-pin adapter. I ran 2 of the new +12v wires onto two of the first 8-pins +12v wires, then the third I attached to a +12v on the other 8-pin to help alleviate some potential stress on 2 8-pins after I stacked 3m0hm ontop of my 3m0hm resistors. for a 770W limit. Testing so far everything looks ok, cables are not melting and measuring the added 6-pin shows it is indeed utilizing the extra cable and taking a small amount of the load off the 8-pins. I left the slot shunt alone for the time being, until I can come up with a solution for the slot power, which I'll probably do what I detailed above.
> 
> I had originally planned to stack a 2mOhm on the 3mOhm for a 900W power limit, but I think that's too much for a hacky 8+6pin adapter.


You're going to hate me for asking you this.
But which exact shunt is the GPU Core one? :/

If you look at Igor's page, you can see the shunt on the GPU side seems to say "NVVDD" (voltage), and I have no idea what that's for. Is that PWR_SRC?

Then the shunt on the backplate at the top left says "NVVDD PWM".









NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything | Page 2 | igor'sLAB (www.igorslab.de)





And below that there's the one you lined off yourself, going directly to "current sensing". Current sensing?
Then which shunt is for MSVDD / PWM?

I'm confused because I noticed my MSVDD (FBVDD) and PWR_SRC seem to be lower than anyone else's, but my GPU Core is close to the limit. 381W board power (GPU-Z) in Time Spy nets 291W GPU Core, although it's lower in Port Royal.

Sorry for bugging you guys. I did ask this before, but no one seemed to know the answer.


----------



## olrdtg

Falkentyne said:


> But which exact shunt is the GPU Core? :/


I can't remember for certain, but I am pretty sure (like 75%) that it's the shunt next to NVVDD, near the V-shaped cutout at the top of the card. I'll be pulling my card out in a few days when my new set of resistors arrives, and I'll trace all of the resistors and write down where they go for you.


----------



## Esenel

nievz said:


> Zen3 5800x, BFV - Ultra, RTX Off, 1440p, DX12
> 
> 
> 
> 
> 
> Please let me know how they compare to your 10900K overclocked. My main goal in upgrading to the 5800x is to be able to at least minimize the CPU bottleneck on my 3090 and high FPS at 1440p since I play at 240hz.


Here is my 10900K, Conquest Wake Island, 1440p Ultra.
Stock 3090 Strix.
The 10900K is doing a great job utilizing the 3090 in BF V ;-)
RAM is the key.


----------



## nievz

Esenel said:


> Here my 10900K Conquest Wake Island 1440p - Ultra
> Stock 3090 Strix.
> The 10900K is doing a great job utilizing the 3090 in BF V ;-)
> RAM is the key.
> 
> View attachment 2466471


Wake island is easier to drive. Can you post Operations Underground result as well? I can max out GPU except for this map on Ultra.


----------



## Esenel

nievz said:


> Wake island is easier to drive. Can you post Operations Underground result as well? I can max out GPU except for this map on Ultra.


Tried to. No map with 64 players.
Can do so later.


----------



## changboy

What is the fastest RAM I can put with my 10980XE on the ASUS X299 Rampage Omega?


----------



## Zogge

I have the Encore / 10980XE and never got it to POST over 4000 MHz. I run 8x8GB Trident Z RGB at 4000 CL16, 1.45V.

(They are the 4400 MHz CL18 Trident Z RGB modules)


----------



## Nizzen

nievz said:


> Wake island is easier to drive. Can you post Operations Underground result as well? I can max out GPU except for this map on Ultra.


Can you post a video from Wake Island?


----------



## nievz

Nizzen said:


> You post video from Wake Island


Here you go buddy. Wake Island, 64 players, Breakthrough. Pretty good utilization, minimal dips, smooth play, didn't really notice any stutter. What do you think?😊

(video is still processing)


----------



## Esenel

nievz said:


> Can you post Operations Underground result as well? I can max out GPU except for this map on Ultra.


The dips/spikes are due to DX12 shader caching. It's a pain on Operation Underground. Haven't played in months :-D
Still an average load of 93% with a 480W PL.


----------



## pat182

I've been playing around to find the optimal airflow for my Strix in the O11. The card's exhaust chokes on the glass; the card is too thicc. I've tried:

no riser, bottom fan
no riser, no bottom fan
riser, bottom fan
riser, no bottom fan
(top fans always @ 100% on load)

The best was riser with no bottom fan for some reason, so I decided to 3D print a funnel so the exhausted air at the bottom doesn't go into the intake, while the intake has the whole bottom open for fresh air. I'm having the best result so far in Control with RTX on, no DLSS, 4K: 2085 MHz at 66C with 480W sustained @ 24C ambient, vs 2040 MHz at 70C before the funnel.

Gonna paint everything when it's done.


----------



## nievz

Esenel said:


> The dips/spikes are due to DX12 Shader Caching. Is a pain on Operation Underground. Haven't played in month :-D
> Still an average Load of 93% with 480W PL.
> View attachment 2466477


Nice. I think performance is even. 😃

In my experience a higher PL won't benefit Operation Underground, since we're not really at max utilization all the time.


----------



## cletus-cassidy

Falkentyne said:


> These cards run as hot as a weak portable heater. Heat can very well limit vram overclock, however each and every card should be able to do at LEAST +250 mhz ram overclock as that's 10,000 mhz. I'm not sure if that's the spec or if the spec is "10 gbps".
> 
> If you can't even go up by 50 mhz, check your vram thermal pads and make sure there's actually an imprint on them from contact.
> Really good thermal pads to buy for these cards are Thermalright Odyssey 12.8 w/mk 1.0mm, 1.5mm or 2.0mm pads. The large 120x120mm sheets are only available on Aliexpress but you can ask one of Thermalright's USA suppliers, FrozenCPU, or Nan's Gaming Gear, to see if they can order them for you; the smaller pads are more easily available. I would avoid the fujipoly 11 w/mk pads because they're not much larger than a postage stamp. 60x50mm is very small. One 80mm * 45mm should be enough to do one full side of a video card, unless you're adding pads for hotspots.
> 
> Naturally, the fujipoly 17 w/mk would be the best pads once you know the exact size, but you would need to buy four of them to do a full sized 3090, and that would set you back $100, plus these pads are known to rip and crumble up once you open up the heatsink even once. The 11 and 12.8 w/mk pads are more reusable, but since they are soft pads, they should be replaced after the card is serviced, to preserve heat transfer capability that is lost after they compress into place.


Following up on this thread -- disassembled my waterblock last night and you guys were right! One of the VRAM thermal pads was not making good contact. Early testing is showing me stable at around +1200 memory (and +135 core). Thanks again to everyone who helped me identify the issue.


----------



## changboy

FrameChasers just put up a Port Royal score with his Strix with all his mods, and he scored 14,771; he's #97.

With all his mods and a claimed power draw of 800W, I'm not impressed 😄


----------



## stryker7314

Can an Asus Tuf use the STRIX bios or FTW3 XOC bios?


----------



## ExDarkxH

changboy said:


> FrameChasers just put a score on port royal with is strix and all is mod and he score 14771 he is #97.
> 
> With all is mod and claiming power draw of 800w i'am not impress 😄


I didn't expect much, since he was showing previews of it hitting "2,130MHz" and I kept saying, that's it? He's using the MP5WORKS backplate watercooler as well. Pretty disappointing. He tried everything on that card: liquid metal, all kinds of different pads and contact techniques, washers for more compression, shunts, etc. Obviously he has a water block, but I can't remember if he volt modded it as well.

I scored better than that completely stock with the air cooler, no mods, stock BIOS.
Oh well, just goes to show you: silicon lottery >>>> everything else.


----------



## Dreams-Visions

Bykski waterblock is on the way. Looking forward to putting this open loop together. Still gotta find a 5900X, though. :-/



ScottRoberts91 said:


> Changed my Gaming X Trio over to the Suprim bios, Back to hitting the power limit ever few seconds with an unstable frequently. Reinstalled the 500W EVGA bios and I'm back to a stable 2070Mhz locked sitting at 72c on the stock cooler.
> 
> Fingers Crossed the lighting Z will have a 500w bios


Is the power limit indicator in GPU-Z a sign that the BIOS is having an issue on the card? My X Trio constantly shows power as the limiting factor in benchmarks with the 500W EVGA BIOS, but the frequency is rock solid. I'm new to GPU-Z and what the PerfCap readings mean for a given GPU and its performance potential.

Any insights are appreciated!


----------



## Thanh Nguyen

I have higher avg clock but lower PR score than my previous highest score with lower avg clock. PR is weird.


https://www.3dmark.com/pr/527278


----------



## SuperMumrik

Damn. I thought my card was mediocre, but that's a lottery loser for sure.
Edit: FrameChasers' Strix


----------



## ExDarkxH

Thanh Nguyen said:


> I have higher avg clock but lower PR score than my previous highest score with lower avg clock. PR is weird.
> 
> 
> https://www.3dmark.com/pr/527278


Perhaps your memory is unstable.
That's a very high average clock for the score you achieved. Maybe you had background apps running?
I would expect about ~350 points higher for those clocks.


----------



## cletus-cassidy

Can others with Strix access the core voltage slider in Afterburner? Greyed out for me (even when enabling voltage control in settings).


----------



## ExDarkxH

cletus-cassidy said:


> Can others with Strix access the core voltage slider in Afterburner? Greyed out for me (even when enabling voltage control in settings).


I believe you need a specific version of Afterburner that's compatible with Ampere.


----------



## dante`afk

mattxx88 said:


> omg, i'm still waiting the order confirm since 3 days, where did you buy it?
> mine from formulamod.com
> when you have it in your hands can you confirm it requires thermal paste rather than thermal pads on vram and mosfet?


I had bought it from Amazon before the 3090s were even released; turned out the one they sent me (even though it said "for Founders Edition" on Amazon) is for reference cards.


----------



## changboy

From what I see of your score and all the effort it took (CPU at 5.5 GHz! Memory at 4600 MHz! Plus watercooling on your GPU), I tell myself to keep my 14,200 Port Royal score as it is; I don't expect to gain a lot even if I invest a lot of money.

I just hear a little voice inside me: keep your system like that, lol.

At some point, upgrading components every 6 months isn't worth it. Better to keep a system, get it very stable, and enjoy it for 2 or 3 years at least, rather than changing components all the time, constantly running into issues, and working on the system more than taking advantage of it and enjoying it.

This is what I think: enjoy what you have. Or it can be an expensive hobby too.


----------



## dr/owned

olrdtg said:


> I've been considering insulating the PCIe slot power fingers on my card and running a wire from a separate 6-pin cable to the power plane. So I would be better off connecting the 3 +12v wires together at the end prior to connecting to the card? I had a plan to rip an old 6-pin slot off a dead GPU and frankensteins monstering up an adapter that I could just attach to the PCB next to the 12-pin since my water block has enough space over there, I suppose I could just join the 3 +12v pins on the connector there with a bit of metal and solder to evenly distribute the power as best possible.
> 
> I recently finished hacking up my NVIDIA 2x 8-pin to 12-pin adapter and turning it into an effective 2x8+6pin to 12-pin adapter. I ran 2 of the new +12v wires onto two of the first 8-pins +12v wires, then the third I attached to a +12v on the other 8-pin to help alleviate some potential stress on 2 8-pins after I stacked 3m0hm ontop of my 3m0hm resistors. for a 770W limit. Testing so far everything looks ok, cables are not melting and measuring the added 6-pin shows it is indeed utilizing the extra cable and taking a small amount of the load off the 8-pins. I left the slot shunt alone for the time being, until I can come up with a solution for the slot power, which I'll probably do what I detailed above.
> 
> I had originally planned to stack a 2mOhm on the 3mOhm for a 900W power limit, but I think that's too much for a hacky 8+6pin adapter.


I personally don't think you need to go that extreme on slot power delivery. Even at 900W...the slot isn't just going to magically start consuming 300W+. IF you were planning on going bigger than me I would solder a molex 12V and ground wire to the card. 12V to the shunt resistor upper half and the ground to anywhere that is ground on the card...which is a ton of places.

It's interesting that you got away with stacking 1.5mOhm effectively (3m+3m)...on previous generations if you went too low the bios thought something was wrong with the shunt sensing and would go into a low power state. I'm not planning on attempting that much multiplier, because at some point the card is just going to stop scaling clocks regardless of how much voltage/power is thrown at it.


----------



## Thebc2

ExDarkxH said:


> For those watercooling,
> Did you get better memory overclocks on your card?
> Im a little disappointed with my Memory OC and was hoping for more. Want to see if a waterblock would help me add + 100 - 200 memory offset


I got another +100 out of mine after going under water.


Sent from my iPhone using Tapatalk Pro


----------



## defiledge

So I did a shunt mod and my screen is artifacting at the same core OC and memory OC as before I did the mod.


----------



## Asmodian

defiledge said:


> So I did a shunt mod and my screen is artifacting at the same OC, and memory OC before I did the mod.


Depending on the power limits you could be running much higher core clocks now than you were at the same OC with the same workload.


----------



## BigMack70

Any EVGA 3090 Hybrid or KPE benchmarks in the wild yet?


----------



## HyperMatrix

BigMack70 said:


> Any EVGA 3090 Hybrid or KPE benchmarks in the wild yet?


Well, you had the KPE Hybrid benchmark YouTube thing with Gamers Nexus and Jayz. One of them scored 15,000 and the other around 15,300 in Port Royal. This was just on the standard Hybrid cooler, before they took it off and went LN2 mode.


----------



## imbaray

Anyone tried flashing a different bios in an Aorus Xtreme? Was interested in trying one of the 500W ones.


----------



## Farquhar

Is there a list anywhere that says which BIOS work with which cards? 

I want to see what will work with the non-OC TUF cards specifically.


----------



## Falkentyne

defiledge said:


> So I did a shunt mod and my screen is artifacting at the same OC, and memory OC before I did the mod.


Please explain which exact method you did for your mod.
You should not have artifacts. And what are your overclocks?


----------



## defiledge

I soldered it, probably need to replace the thermal pads


----------



## changboy

Hahahahaha : This guy is crazy hehehe.


----------



## Falkentyne

defiledge said:


> I soldered it, probably need to replace the thermal pads


What shunts did you solder on?
What were your overclocks? Temps?
What video card aib model?

I don't mean to be rude, and I apologize, but it's not easy to help you if you're giving low effort replies. You see how much work I put in...


----------



## Zogge

Now we are getting somewhere. Nr 47 in hall of fame on 15 028 in PR.

Core +220/Mem +1200 no shunts and water only.

3dmark.com/pr/537143

Zogge/Zograth


----------



## defiledge

I soldered 5 mOhm shunts and it's artifacting at +700 mem, but before the mod I could go up to +1100 mem with no issues. Temps increased by +10C. This is the TUF OC model.


----------



## Falkentyne

defiledge said:


> I soldered 5moh shunts and it's artifacting at +700 mem but before the mod I could go up to 1100 mem no issues. Temp increased by +10C. This is TUF OC model


It should definitely not be artifacting. Usually mem failure will either show "Power Limit: Thermal", outright crash the system/display driver, or the system will just reboot.
A temperature increase is normal ONLY if you are exceeding the previous power draw. If you are not, that's a problem. You can calculate the power change for the 8-pins, slot, and chip power draw with the ver 0.7 shunt calculator tool, for adjusting values in HWiNFO64.

I've had plenty of crashes due to memory getting too hot, but I have had zero artifacts so far.

I would very carefully check and make sure you do not have any conductive balls of doom ™ anywhere on the PCB and that you didn't accidentally short something.

Redo the thermal pads and paste while you're at it, though I do not know what thickness pads to use. Thermalright Odyssey 1.5mm pads are an excellent choice if you aren't sure. They compress a lot, so they can be used as long as the original pad thickness isn't more than 0.5mm thicker or thinner (I have even used them where the original pads were 2mm).
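The HWiNFO64 adjustment mentioned above is just scaling the reported sensors back up by the resistance ratio. A minimal sketch (the 5 mOhm stock shunt with 5 mOhm stacked on top is an assumption for illustration; use the shunt calculator's values for your own card):

```python
# After a shunt mod, software still computes power from the stock shunt value,
# so each reading must be scaled by stock/effective resistance to get real watts.
def true_watts(reported_w, r_stock_mohm=5.0, r_stacked_mohm=5.0):
    r_eff = (r_stock_mohm * r_stacked_mohm) / (r_stock_mohm + r_stacked_mohm)
    return reported_w * r_stock_mohm / r_eff

print(true_watts(300))  # a "300 W" readout is really 600 W with 5-on-5 shunts
```

In HWiNFO64 itself this corresponds to setting a custom multiplier on the affected power sensors.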


----------



## stryker7314

Farquhar said:


> Is there a list anywhere that says which BIOS work with which cards?
> 
> I want to see what will work with the non-OC TUF cards specifically.


In the same boat; want to know if I can put the Strix or XOC BIOS on the ASUS TUF 3090 to up the wattage capability.

Will this work, folks?

It's on water, if that helps.


----------



## chanchan

Hi everyone, I just joined the RTX 3090 family, albeit my card is the cheapest of the cheap. Rather than getting scalped for an RTX 3080, an MSRP Zotac RTX 3090 seemed like my best option for a GPU at the moment. I've been trying to finish this build for quite some time now (started in Feb), waiting for Zen 3 + current-gen GPUs.

Current overclocks: +100/+650, 66% static fan speed, 90% vcore (it's about 0.80V under Unigine Heaven)

Questions:
1- Are there any BIOSes worth flashing this card to? (I have no idea what's compatible or will work)
2- Is it worth putting this under water? (Barrow has a well-priced block for this PCB)
3- I am using Unigine Heaven as a stability test (8-12h loops); is there anything better suited for testing overclocks?

Now I just need to finish the wife's build with my hand-me-downs...


----------



## OC2000

defiledge said:


> So I did a shunt mod and my screen is artifacting at the same OC, and memory OC before I did the mod.


I've just been testing shunt modding on my old 1080 Ti with that silver paint (wear a face mask with that stuff) and got the same artifacting. It was because I was using old 17 W/mK thermal pads that had dried up.

As soon as I replaced them with fresh pads, everything ran as it should.

That GTX 1080 Ti has been through a dishwasher at the most aggressive settings, been thrown around like a toy, and was even considered as something I was going to frame. But doing a test-run shunt mod on it today saw it pass Time Spy at 2100 for the first time ever, lol. Before that it crapped out at 2070.

These things can take a lot of punishment.


----------



## Falkentyne

OC2000 said:


> I’ve just been testing shunt modding with my old 1080ti and that silver paint ( wear a face mask with that stuff) and got the same artifacting. It’s because I was using old 17 w/mk thermal pads that had dried up.
> 
> as soon as I replaced them with fresh pads everything ran as it should


Hi, you mean those Fujipoly 17 W/mK pads?
Yeah, those things can only be used ONCE. I also have no idea how long they last in storage. Most people aren't fans of their durability. The Thermalright 12.8 W/mK and Fujipoly 11 W/mK pads (and I assume the Gelid 12 W/mK pads) seem to be better and more reusable, but if you aren't opening the card up every day to fix something, it's generally good practice to change out the pads after you finish your rework.

Arctic 6 W/mK pads last literally forever, but I would not use those on a video card that can function as a space heater.


----------



## DrunknFoo

Fujipoly pads last forever when stored, assuming the plastic backings are in place; I took some out that are over 10 years old to check the sizes I had.


----------



## GQNerd

GAN77 said:


> I congratulate you. Where did you buy the new product phanteks?
> What are your impressions, delta water chip?


Ty, bought it a week ago on Newegg. I'm impressed with the overall package; they included all the thermal pads I needed and they were high quality. And, get this... they included an actual manual!

Ambient room temp 20c
Loop temp at idle 25c
Gpu temp at full load 45c

Newest Score after further tuning:

*Port Royal 15,459 - 20th place HOF!*

_*(PS. this is on the stock 480w Strix bios, as opposed to the EVGA 500w I used last time)*_


I think I can get a few more points by applying a cooler to the backplate, installing the GPU directly into the PCIE slot instead of riser, and using liquid metal.

I'll save that for another day


----------



## mirkendargen

Farquhar said:


> Is there a list anywhere that says which BIOS work with which cards?
> 
> I want to see what will work with the non-OC TUF cards specifically.


As far as I've seen, any 2x8pin BIOS will work on any 2x8pin card and any 3x8pin BIOS will work on any 3x8pin card, with the caveat that some display outputs might be lost depending on the connector config on the physical card and the BIOS used.


----------



## motivman

changboy said:


> Hahahahaha : This guy is crazy hehehe.


Yeah, Framechasers is awesome. I learned a ton from him and *Falkentyne*; because of them, I have an awesome-as-hell PNY 3090 on my hands. I beat Framechasers on all benchmarks except Time Spy Extreme (guessing because Time Spy Extreme is a power hog, and my card only pulls 586W compared to the 800W his card pulls). My card is a PNY reference 2x8-pin with an EK waterblock. I use the 500W EVGA BIOS for benching; max power draw is about 586W. I shunted with 5 mΩ resistors, and 10 mΩ on the PCIe slot. I was going to sell this card and get either an MSI Gaming X Trio or a Strix with the MP5Works block to cool my memory, but after seeing Framechasers' video, all that is unnecessary IMHO. Check out my numbers.. my card is flying. Done for now, till we get that 1000W BIOS, lol.

*https://www.3dmark.com/pr/532582 - Port Royal
https://www.3dmark.com/fs/24089435 - Firestrike
https://www.3dmark.com/fs/24070672 - Firestrike Ultra
https://www.3dmark.com/spy/15459526 - TimeSpy
https://www.3dmark.com/spy/15483882 - TimeSpy Extreme

http://imgur.com/LydD7wR - Superposition 4K
http://imgur.com/82iLhkx - Superposition 1080p Extreme
http://imgur.com/fcBaYvQ - Superposition 8K*

----------



## mirkendargen

Miguelios said:


> Ty, bought it a week ago on Newegg. I'm impressed with the overall package, they included all the thermal pads I needed and they were of high-quality. And, get this.. they included an actual manual!
> 
> Ambient room temp 20c
> Loop temp at idle 25c
> Gpu temp at full load 45c
> 
> Newest Score after further tuning:
> 
> *Port Royal 15,459 - 20th place HOF!*
> 
> _*(PS. this is on the stock 480w Strix bios, as opposed to the EVGA 500w I used last time)*_
> 
> 
> I think I can get a few more points by applying a cooler to the backplate, installing the GPU directly into the PCIE slot instead of riser, and using liquid metal.
> 
> I'll save that for another day


You have a really ****ing good card, the cooler isn't getting you that alone. My Strix stays below 38C under load and the best I can do is ~14800 on the 500w BIOS, ~14700 on the 480W BIOS. Both can only do ~2115 core stable.


----------



## motivman

mirkendargen said:


> You have a really ****ing good card, the cooler isn't getting you that alone. My Strix stays below 38C under load and the best I can do is ~14800 on the 500w BIOS, ~14700 on the 480W BIOS. Both can only do ~2115 core stable.


yeah, his card is FASSSSTTTTT, think you got a golden sample man! congrats!!!!


----------



## Falkentyne

One thing I found really bizarre is that your maximum core overclock before Modern Warfare: Warzone decides to start DEV 6068'ing on you is _directly_ related to your maximum bench scores. If for some reason you bench lower out of nowhere, you can go +15 / +30 / +45 MHz higher on the core with no problem.

I was getting erratic bench scores, like 20667 in Port Royal at +150/+500, then I rebooted and scored 20250. What? And I was wondering why I was able to play without crashing at a +165 MHz core overclock all evening...

When I updated from the 456.98 hotfix driver to the newest Vulkan beta driver (457.44, I think), my bench scores became a lot more consistent, no longer dropping for no reason. But it also means I can't do those +165 MHz overclocks in Warzone anymore; +120 or +135 with the slider at 100% is the max now.


----------



## GQNerd

mirkendargen said:


> You have a really ****ing good card, the cooler isn't getting you that alone. My Strix stays below 38C under load and the best I can do is ~14800 on the 500w BIOS, ~14700 on the 480W BIOS. Both can only do ~2115 core stable.





motivman said:


> yeah, his card is FASSSSTTTTT, think you got a golden sample man! congrats!!!!


Well, full disclosure.. my card is also shunted with 15 mΩ resistors.

Myself and everyone else hitting 15k+ in PR have hard mods.. (Slinky, Carillo, Wyatt..)

Stock, my card was only holding 2085 max, with a score in the 14.7k range.
The 500W EVGA BIOS got me to 14.95k.


----------



## mirkendargen

Miguelios said:


> Well, full disclosure.. My card is also shunted w/15mohm resistors.
> 
> Myself and anyone hitting 15k+ on PR have hard-mods.. (Slinky, Carillo, Wyatt.. )
> 
> My card stock was only holding 2085 max, w/ a score in the 14.7k range


You're actually stable at 2200+, though; that has nothing to do with a shunt. My card literally isn't stable above 2115-2130, even at 1.1V.


----------



## OC2000

Falkentyne said:


> Hi, you mean those fujipoly 17 w/mk pads?
> Yeah those things can only be used ONCE. I also have no idea how long they last stored. Most people aren't a fan of their durability. The Thermalright 12.8 w/mk / Fujipoly 11 w/mk pads (and I assume the Gelid 12 w/mk pads) seem to be better and more reusable, but it's generally good practice if you aren't opening the card up every day to fix something, to change out the pads after you finished your rework.
> 
> Arctic 6 w/mk pads last literally forever but I would not use those on a video card that can function as a space heater.


Yep, those Fujipoly ones. I found some old ones that were two years old and already pretty dry, and wanted to see first if the 1080 Ti still worked, which it did. But as soon as I took the block off for the second time and shunted, I got the artifacts. I had just received some 0.5mm 12 W/mK pads I was saving for the 3090 Strix backplate with the MP5Works kit, which I replaced them with, and it's all working fine now, even after a second block change when I added a third shunt on the 1080 Ti's PCIe resistor (all 8 mΩ). For some reason, though, I was still getting PWR cap in GPU-Z when running standard Time Spy. I'm guessing 8 mΩ was not enough for a 6+8 pin.

I plan on using all 8 mΩ on the Strix 3090, including the PCIe shunt. I have 5 and 10 mΩ resistors on the way; should I wait for those? 5 mΩ on the 8-pins, 10 mΩ on PCIe? Framechasers used 8 mΩ on his PCIe and 8-pins, which is the tutorial I'm following (I know he has since double-shunted).


----------



## Zogge

Strange that some of you score higher than me in Port Royal when your average clocks etc. are lower than mine during the test. Is my 10980XE or my 4000 MHz CL17 RAM holding it back? I average 2175 MHz and top out at 2220 MHz. Mem +1200.

Or is it your shunts?


----------



## GQNerd

Zogge said:


> Strange that some of you score higher than me in Port Royal when your average clocks etc. are lower than mine during the test. Is my 10980XE or my 4000 MHz CL17 RAM holding it back? I average 2175 MHz and top out at 2220 MHz. Mem +1200.
> Or is it your shunts?


Well, the shunts only allow me to sustain 1.1V indefinitely.
I customized my voltage curve to hold ~2190 and top out at 2220 as well.. my mem can OC to +1400, but +1300 is more stable.

The 10900K is at 5.2 GHz all-core, and my RAM is running at 4267 MHz CL17-17-37 @ 1.5V.

*Edit:* _I also see you're using eight DIMM slots to my two.. I would think there has to be some latency introduced when running so many_


----------



## ExDarkxH

Why don't graphite thermal pads work better?
Forget about the cost for a second; in theory, shouldn't graphite outperform Fujipoly pads given its 35 W/mK conductivity, and even out any hotspots given its nature?

Is it because it's conductive like liquid metal and people are scared to use it? Or possibly because it's thin and people are scared of not getting good contact? Just wondering, because I've never heard of anyone replacing stock GPU pads with graphite.


----------



## Falkentyne

ExDarkxH said:


> Why don't graphite thermal pads work better?
> Forget about the cost for a second; in theory, shouldn't graphite outperform Fujipoly pads given its 35 W/mK conductivity, and even out any hotspots given its nature?
> 
> Is it because it's conductive like liquid metal and people are scared to use it? Or possibly because it's thin and people are scared of not getting good contact? Just wondering, because I've never heard of anyone replacing stock GPU pads with graphite.


Graphite thermal pads are
1) too thin, and
2) electrically conductive.

They're only useful on dies, and only as a fit-and-forget method. You also need to take the same precautions as with liquid metal.
They're not for use in place of regular thermal pads, unless you like waking up to the smell of fried silicon in the morning.


----------



## defiledge

Miguelios said:


> Well the shunts only allow me to sustain my voltage at 1.1v indefinitely
> I customized my voltage curve to hold ~2190 and top at 2220 as well.. my mem can OC to +1400, but +1300 is more stable
> 
> The 10900k is at 5.2ghz all-core, and my ram is running at 4267mhz CL17-17-37 @1.5v


Did you shunt your pcie slot?


----------



## olrdtg

dr/owned said:


> I personally don't think you need to go that extreme on slot power delivery. Even at 900W...the slot isn't just going to magically start consuming 300W+. IF you were planning on going bigger than me I would solder a molex 12V and ground wire to the card. 12V to the shunt resistor upper half and the ground to anywhere that is ground on the card...which is a ton of places.
> 
> It's interesting that you got away with stacking 1.5mOhm effectively (3m+3m)...on previous generations if you went too low the bios thought something was wrong with the shunt sensing and would go into a low power state. I'm not planning on attempting that much multiplier, because at some point the card is just going to stop scaling clocks regardless of how much voltage/power is thrown at it.


I haven't run into safe mode yet, but I'm expecting I will when I try stacking the 2 mΩ resistors. Stacking a 2 mΩ on would just be for testing purposes, as I was curious how far I could take this thing and how long it would keep scaling, accompanied by a volt mod once I get a second GPU in (just in case).

The thing about the PCIe slot is that I'm already peaking at almost 120W out of the slot, and I'd rather just tape off the power fingers and run +12V and ground wires from a 6-pin so that the card doesn't draw anything from the slot. That would also let me probe the 6-pin with my multimeter while the card is running, knowing that all of the slot's power is coming in through that cable. Currently, the way my case is set up, I can only probe the cables going in, and I'd like to probe the PCIe slot power as well for some accurate readings.



defiledge said:


> So I did a shunt mod and my screen is artifacting at the same OC, and memory OC before I did the mod.


Exactly as @Asmodian said, you are likely achieving higher clock speeds at the same offset. Prior to my shunt mod I was able to run +150/+1250 safely, because the card would hit its power limit long before reaching the top of the curve. Post shunt mod, I can only do around +140 on the core; otherwise the card immediately tries to jump to 2290 MHz and crashes the benchmarks.


----------



## Falkentyne

It looks like if you are painting the shunts directly, thicker paint will reduce the resistance (i.e. act like a lower-value mΩ shunt) and make the mod more effective. A thinner coat is not better if you want it to be effective.



https://www.mgchemicals.com/downloads/tds/tds-842ar-l.pdf


----------



## jura11

I have done only a few tests on the Palit RTX 3090 GamingPro. I tried a few BIOSes but had no luck with that thing; the Gigabyte 390W would be best, but with that BIOS I can only run a single monitor, which is a no-go for me...

Stock Port Royal score is 13,492 points; with a 145 MHz OC on the core and 875 MHz on the VRAM, the score is 14,008 points.

In gaming, clocks usually bounce around 1995 MHz, and 2130 MHz under lighter loads. I'm currently running the KFA2 390W BIOS, which is not bad, but without the shunt mod I don't think I will be able to run 2130 MHz at full load.

I have never done a shunt mod, so if someone in the UK or around the London area can do this for me, I would be grateful.

Thanks, Jura


----------



## GQNerd

defiledge said:


> Did you shunt your pcie slot?


Yes, 15 mΩ on every shunt, including PCIe.


----------



## Cholerikerklaus

Zogge said:


> Strange that some of you score higher than me in Port Royal when your average clocks etc. are lower than mine during the test. Is my 10980XE or my 4000 MHz CL17 RAM holding it back? I average 2175 MHz and top out at 2220 MHz. Mem +1200.
> 
> Or is it your shunts?


I have a similar problem. I switched from a 9900K to a 5950X and lost 500 points in Port Royal. I think 3DMark has a problem with some kinds of CPU.


----------



## anethema

anethema said:


> Here it is looping heaven. It was only so low and weird because in mining it goes to 'thermal' for throttle so it knocks the core down so power goes way down.
> 
> This is running heaven so full power:


Well, I finally got the Thermalright Extreme Odyssey pads. Did both sides of the board as you mentioned. That cleared the temp flag completely and definitely helped stabilize things. I'm getting almost 15k in PR at room temperature now; next time it's cold, I'll try again on cold air.

It didn't really affect my power balance, though. I resoldered all the shunts with both hot air and an iron to ensure good contact, refluxing, etc. I visually inspected all of them; they all look like really solid solder joints.

The new power balance during mining with the sliders dragged up is:


----------



## Chamidorix

olrdtg said:


> I haven't run into safe mode yet, but I'm expecting I will when I try stacking the 2 mOhm resistors. Stacking a 2mOhm on would just be for testing purposes as I was curious how far I could take this thing and how long it would scale, also accompanied by a volt mod once I get a second GPU in (just in case).


It is curious that I have not seen a single report anywhere on the internet of GA102 entering safe/2D-clocks mode, even with people screwing up their shunt math, and despite the insane numbers reported if you don't shunt evenly or stack multiples. I wonder if they did away with it this gen.


----------



## dr/owned

Here are some more accurate measurements of the heights of various components on the 3090 TUF. I'm within +/- 0.05mm for most of these; variance is going to happen just based on how the solder melts and cools, and whether my gauge was sitting on a PCB trace or not. And this is just normal "technician grade" tooling, not a surface plate and a Mitutoyo micrometer and whatnot.

*Not shown: the MLCC caps on the back behind the die; they're 1.0mm. The backside memory chip was measuring 0.1mm less than the frontside; not sure why, but it was consistent.


----------



## olrdtg

Chamidorix said:


> It is curious that I have not seen a single report anywhere on the internet of ga102 entering safe/2dclocks mode. Even with people screwing up shunt math, and also the insane numbers reported if you don't shunt evenly/multiple. I wonder if they did away with it this gen.


It's possible they did away with that feature and replaced it with the current monitoring that GA102 has now, where it takes all of the shunts into account to balance the power. That would make sense, since current balancing is probably the better method: if one resistor misbehaves a bit, the system doesn't have to lock into safe mode; it can instead balance things out and restrict power where it needs to.


----------



## dr/owned

olrdtg said:


> It's possible they did away with that feature and replaced it with the current monitoring that GA102+ has now, where it takes all of the shunts into account to balance the power. That would make sense after all, since current balancing is probably a better method, if one resistor screws up a bit the system doesn't have to lock into safe mode, it can instead balance things out and restrict power where it needs to.


Plausible. A shorted shunt would only indicate zero watts of draw to the controller, which doesn't really help anything except maybe indicating that something blew up and decided to un-short itself the hard way.

The fastest way to test this theory would be soldering an 18 AWG wire across the shunt, which would effectively be zero mΩ (an inch of 30 AWG is something like 10 mΩ of resistance). But shorting would also mean you no longer have any software monitoring of power consumption, which is kind of a handy feature even if you have to do math to get the accurate number.
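Those wire figures can be sanity-checked against copper's resistivity with R = ρL/A; a quick sketch (wire diameters taken from standard AWG tables):

```python
import math

RHO_CU = 1.68e-8  # Ohm*m; copper resistivity at ~20 C

def wire_resistance_mohm(diameter_mm, length_mm):
    """DC resistance of a round copper wire, returned in milliohms."""
    area_m2 = math.pi * (diameter_mm / 2000.0) ** 2  # radius in meters, squared
    return RHO_CU * (length_mm / 1000.0) / area_m2 * 1000.0

print(wire_resistance_mohm(0.255, 25.4))  # 30 AWG, 1 inch: ~8.4 mOhm
print(wire_resistance_mohm(1.024, 25.4))  # 18 AWG, 1 inch: ~0.5 mOhm
```

So the "something like 10 mΩ" for an inch of 30 AWG is about right, and an inch of 18 AWG really is close enough to a dead short for shunt purposes.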

It's important to point out, though, that at some point past 850W you risk melting the PCB power planes, being unable to cool the VRMs effectively (since they don't have metal pads on top for heatsinking), etc. And the "safe" voltage limit for the core is 1.2V (according to Kingpin), so the card is not going to get up to 1000W anyway unless you're trying to kill it.


----------



## bmgjet

There was a post a hundred or so pages back from a guy who put liquid metal on the shunts and got his clocks locked at 135 MHz.
Then he got flamed by everyone and never updated whether he fixed it, lol.


----------



## Falkentyne

Chamidorix said:


> It is curious that I have not seen a single report anywhere on the internet of ga102 entering safe/2dclocks mode. Even with people screwing up shunt math, and also the insane numbers reported if you don't shunt evenly/multiple. I wonder if they did away with it this gen.


Dude. I literally had PWR_SRC reporting 65W at idle and 0.0W at load, and MVDDC reporting 0.0W at idle and 8.0W at load.
No safe mode. It was like a negative loadline on SRC.

I could swear I saw someone with safe mode, though, but it wasn't on an FE card. I think it was an FTW3 from that guy trying to bypass blown fuses.

I decided to remove the 10 mΩ stacked shunts on PWR_SRC and the PCIe slot and just painted them fully with 842AR. That instantly made both of them report normally (MVDDC going up to around 32W instead of 8W), and this time PWR_SRC actually rose at load instead of dropping to 0.0W (around 17W idle, 70W+ load). I still have NO IDEA which one is GPU Chip Power @olrdtg, but I managed to get GPU Chip Power even lower (so it's no longer triggering a power limit when it reaches "290W") just by repainting all the shunts. GPU Chip Power is either the #1 shunt or the #6 shunt.

I'm 100% sure that PWR_SRC is shunt #2. It can't be anything else. And olrdtg already traced it to "Current sense," which Igor labeled in his teardown.

@Sky3900 @olrdtg did you guys look at the MG 842AR datasheet?
It seems the resistance drops the thicker the layer is. I guess that makes sense in a way: if you barely have any paint on, the resistance will be high. I didn't know that...


----------



## DrunknFoo

Yeah, someone on the EVGA forum is stuck with 2D clocks. The individual ended up blowing a few of the fuses somehow (didn't specify how; I assume it was a short). He stated he bridged the fuses but still remains at 2D clocks.


----------



## bmgjet

When the plug 2 fuse on mine blew, I just got stuck with one red LED.
A dab of solder over the top of the fuse and it was back to normal.


----------



## Falkentyne

bmgjet said:


> When the plug 2 fuse on mine blew, I just got stuck with one red LED.
> A dab of solder over the top of the fuse and it was back to normal.


Wait, what?
You put solder on top of the dead fuse? How does that work?.....


----------



## bmgjet

It bridges the connection, since the fuse element is internal to the ceramic body.
There's no physical sign that the fuse had blown, but I was expecting it, since I was running within 10W of its max rating, so I knew to check it with a multimeter when my card powered off with a black screen and the computer wouldn't turn on, with a single red LED lit by the connectors.


----------



## asdkj1740

bmgjet said:


> It bridges the connection, since the fuse element is internal to the ceramic body.
> There's no physical sign that the fuse had blown, but I was expecting it, since I was running within 10W of its max rating, so I knew to check it with a multimeter when my card powered off with a black screen and the computer wouldn't turn on, with a single red LED lit by the connectors.
> 
> 
> View attachment 2466543


It seems the VRM info online about the 3090 XC3 is wrong; this should be 16+4 rather than 14+4.


----------



## Falkentyne

+165 / +600
22357 graphics (20821 Timespy): https://www.3dmark.com/3dm/53546793?

14726 PR: https://www.3dmark.com/3dm/53546640?


----------



## olrdtg

Falkentyne said:


> +165 / +600
> 22357 graphics (20821 Timespy): https://www.3dmark.com/3dm/53546793?
> 
> 14726 PR: https://www.3dmark.com/3dm/53546640?


Nice! Since my first benchmark run I haven't been able to OC over +140. At 2230 MHz the card craps out, even at low temps. I have noticed it bouncing between 0.980V and 1V, BUT my clocks are usually 2190-2205 MHz, so I'm thinking I need to push another 100mV or so. I *REALLY* wish the FE had the I2C header, as I'm still trying to work out the best way to volt mod the card. Being able to use an Elmor EVC2S would make things so much easier.


----------



## Sky3900

Falkentyne said:


> @Sky3900 @olrdtg did you guys look at the MG 842AR datasheet?
> It seems like the resistance drops the thicker the layer is. I guess that makes sense in a way. If you barely have any paint on, the resistance will be high. I didn't know that...


Yep, surface resistance drops as the coating thickness increases. They also give a bulk resistivity of 1.0 × 10⁻⁴ Ω·cm.
I used a good thick layer to bridge across the tops of the shunts. The glob of wet paint was probably 1mm high; it lost most of its thickness after it cured.
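That bulk resistivity makes the thickness dependence easy to sanity-check with R = ρL/(W·t) for a rectangular film. The bridge dimensions below are made-up examples, not measurements from an actual card:

```python
RHO_PAINT = 1.0e-4  # Ohm*cm; MG 842AR bulk resistivity from the datasheet

def bridge_resistance_mohm(length_cm, width_cm, thickness_cm):
    """R = rho * L / (W * t) for a rectangular painted bridge, in milliohms."""
    return RHO_PAINT * length_cm / (width_cm * thickness_cm) * 1000.0

# Hypothetical bridge across a shunt: 2 mm long, 3 mm wide
print(bridge_resistance_mohm(0.2, 0.3, 0.01))  # 0.1 mm thick -> ~6.7 mOhm
print(bridge_resistance_mohm(0.2, 0.3, 0.05))  # 0.5 mm thick -> ~1.3 mOhm
```

Five times the thickness gives one fifth the resistance, which is why a thin coat barely moves the power readings while a thick glob acts like a much lower-value shunt.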


----------



## Lord of meat

Hey people,
Any point flashing the MSI Suprim 3090 BIOS onto the Trio?
I tried the 500W EVGA one and didn't see much of a result, maybe 2120 MHz core spikes. On stock I get between 2000 and 2100 MHz at 1.065V, and the temp doesn't go over 55°C on air (it's pretty cold where I am).
I think the silicon is good and I would like to get to 2200 stable, but I don't wanna mess up a $1600 card.
Thanks for your opinions!


----------



## DrunknFoo

bmgjet said:


> When the plug 2 fuse on mine blew, I just got stuck with one red LED.
> A dab of solder over the top of the fuse and it was back to normal.


Yup, I asked him why he didn't just bridge it with solder; he went and tried some random gauge wire instead and is still in limp mode... Good luck to that person, I guess...


----------



## Sky3900

Does anyone know how much effect PSU voltage ripple and load regulation have on 3090 FE OC stability? I have a very mediocre PSU that produces about 50mV of ripple on the 12V rail. I am wondering how much of that ripple makes it through the VRMs to the GPU?


----------



## Falkentyne

Sky3900 said:


> Does anyone know how much effect PSU voltage ripple and load regulation have on 3090 FE OC stability? I have a very mediocre PSU that produces about 50mv of ripple on the 12v rail. I am wondering how much of that ripple makes it through the VRMs to the gpu?


How much ripple are we talking about here?
There's no way to know how much ripple is affecting the 12V PCIe rails or the GPU voltage without an oscilloscope, and a good one at that. The voltages you see in HWiNFO64 are averages that are not updated fast enough to show anything, except when a serious problem is occurring.



https://electronics.stackexchange.com/questions/433220/pc-12v-rail-shows-ripple-under-load-when-checked-through-a-multimeter

How much ripple for an ATX PSU is too much? - www.eevblog.com

Defining Power Supply Voltage Ripple & Its Real-World Impact - www.gamersnexus.net


----------



## Sky3900

Falkentyne said:


> How much ripple are we talking about here?
> There's no way to know how much ripple is affecting the 12v PCIE rails or the GPU voltage without an oscilloscope, and a good one at that. The voltages you see in hwinfo64 are averages that are not updated fast enough to show anything except if a serious problem is occuring.


I run a dehumidifier that's on the same circuit as my PC, and I just noticed that my OC becomes unstable under high load when it is on... The dehumidifier starts running very rough as well... It got me thinking about PSU load regulation and power quality. I need to get that thing on another circuit, or buy a UPS/power conditioner.


----------



## olrdtg

Sky3900 said:


> I run a dehumidifier that's on the same circuit as my PC. Just noticed that my OC becomes unstable under high load when it is on...The dehumidifier starts running very rough as well.... Got me thinking about PSU load regulation/power quality. Need to get that thing on another circuit or a buy a UPS/power conditioner.


Definitely go with a UPS. I'm currently using an old Monster surge protector/power conditioner that has been working well enough, though I'd love to get a good UPS when I can afford it; something capable of keeping my PC running for at least 5-10 minutes in the event of a power outage.


----------



## dr/owned

olrdtg said:


> Definitely go with a UPS. I'm currently using an old Monster surge protector/power conditioner that has been working well enough, though I'd love to get a good UPS when I can afford it. Something capable of keeping my PC running for at least 5 - 10 minutes in the event of a power outage.


The vast majority of consumer UPSes won't make a difference, because they're standby battery backups, not line-interactive units that are always filtering power. My CyberPower wouldn't even engage unless the voltage dropped below 90V, which is basically never going to happen.


----------



## Sync0r

stryker7314 said:


> In the same boat, want to know if I can put the Strix or XOC bios on the Asus Tuf 3090 to up the wattage capability.
> 
> Will this work folks?
> 
> It's on water if that helps.


I have a Zotac Trinity (2x8-pin) and I have successfully flashed the Asus Strix and the EVGA 500W BIOS. It will only be worthwhile if you have shunted the card, though; otherwise you may end up with a lower power limit than you started with.

My Zotac is currently double-shunted to remove all power limits.



https://www.3dmark.com/pr/516859


----------



## uniketou

Hi!

I have the Zotac 3090.

Is it possible to flash the vBIOS with the EVGA XC3 vBIOS?


----------



## uniketou

That is great!
How did you do that? Is there any tutorial?


----------



## Sync0r

uniketou said:


> That is great !
> How did you perform that ? Any tuto exist ?


The double shunt? I just stacked resistors: a 5 and a 10 mΩ on the five front-side resistors. I'm monitoring my cables closely; they don't seem to get hot. Most of the time while gaming it doesn't pull too many watts, only when benchmarking.

"Tuto"? I don't know what you mean, sorry.


----------



## uniketou

Sync0r said:


> Tuto? Don't know what you mean sorry.


A tutorial, or the process for doing it.


----------



## GTANY

I can't manage to reach 480W anymore on my ASUS RTX 3090 Strix: yesterday I was able to attain that power in a few games and benchmarks (Time Spy, Superposition 4K), and today my max power usage is around 430W at the 123% power limit.

I reinstalled the NVIDIA driver, uninstalled MSI Afterburner, and installed ASUS GPU Tweak instead. Same problem.

Any ideas?


----------



## Sync0r

uniketou said:


> Tutorial or process to do it


YouTube is your friend on how to shunt. der8auer has a good vid on it.


----------



## bubbl3

I was able to get my hands on an INNO3D 3090 iChill Frostbite; it wasn't a bad deal, as I needed a waterblock anyway. I think the PCB is probably the same as the iChill X4. Does anyone have any experience with these cards? I'm hardly finding any info from well-known sources (apart from what's in the first post of this thread, which is very much appreciated!).


----------



## long2905

bubbl3 said:


> I was able to get my hands on an INNO3D 3090 Ichill Frostbite, wasn't a bad deal as I needed a waterblock anyway. I think PCB is probably the same as Ichill X4, does anyone have any experience with those cards? Hardly finding any info from well known sources (apart what's on the first post of this thread, which is very much appreciated!).


The air-cooled X4 version is quite hot and loud. The best I can manage is 13,900 in PR with a BIOS flash, no shunt. There is another X4 user, and one Frostbite owner like you, in this thread; just backtrack about 10-15 pages.


----------



## bubbl3

long2905 said:


> the air cooled version x4 is quite hot and loud. the best i can manage is 13900 PR with a bios flash, no shunt. there is another X4 user and one frostbite like yours in this thread, just backtrack about 10-15 pages.


Thanks, found the post; the liquid-cooled one seems decent enough with the 390W BIOS flash.


----------



## ScottRoberts91

Dreams-Visions said:


> Bykski waterblock is on the way. Looking forward to putting this open loop together. Still gotta find a 5900X, though. :-/
> 
> 
> Is the power limit indicator in GPU-Z a sign that the BIOS is having an issue on the card? My X Trio constantly shows power as the limit in benchmarks with the 500W EVGA BIOS, but the frequency is rock solid. I'm new to GPU-Z and what the PerfCap stuff means for a given GPU and its performance potential.
> 
> Any insights are appreciated!


I don't really bother with benchmarks; I use realistic loads while playing games at 4K to gauge my performance.


----------



## ArcticZero

Just got my 3090. It's a PNY REVEL EPIC-X, so basically reference. Tried overclocking it, but alas, I cannot break the 20k graphics score wall.

https://www.3dmark.com/spy/15560964

According to GPU-Z, what's holding me back is the 365W power limit. I'm a bit wary of flashing it, since it doesn't have a dual BIOS and doing a blind flash unnerves me a bit; not to mention I haven't seen any reports of anyone flashing this particular card. What BIOS would be optimal for it? And if anyone has this card and has flashed it, your experience would be greatly appreciated.

Cheers!


----------



## bl4ckdot

Hi everyone, sorry for not reading the last few pages. I have a 3090 FE on its way; what's currently the best BIOS for it (if there is any)?
Cheers


----------



## Sync0r

bl4ckdot said:


> Hi everyone, sorry for not reading the last few pages. I have a 3090 FE on its way, whats currently the best bios for it (if there is any) ?
> Cheers


You can't change the BIOS on the FE; shunting is the only option.


----------



## Sync0r

ArcticZero said:


> Just got my 3090. It's a PNY REVEL EPIC-X one, so basically reference. Tried overclocking it but alas, I cannot breach the 20k graphics score wall.
> 
> https://www.3dmark.com/spy/15560964
> 
> According to GPU-Z, what's holding me back is the 365w power limit. I'm a bit wary of flashing it since it doesn't have dual BIOS and doing a blind flash unnerves me a bit. Not to mention I haven't seen any reports of anyone flashing this particular card. What BIOS would be optimal for this? And if anyone has this card and flashed it, your experience would be greatly appreciated.
> 
> Cheers!


I haven't tried that card. I presume it's 2x8-pin power; use the Asus gaming OC BIOS for a 390W power limit.


----------



## long2905

Or the Galax one, which has the same 390W limit but keeps all your ports working.


----------



## jura11

My personal best in Port Royal is 14,206. That's with the KFA2 390W BIOS, without the shunt mod, and I needed to run undervolted at 0.9V, because with a normal offset the clocks would bounce like crazy; my best score with an offset was 14,009.



https://www.3dmark.com/pr/540093



I ran all benchmarks with the VRAM set at +1235 MHz; at +1250 MHz the PC would freeze.

Hope this helps 

Thanks, Jura


----------



## chispy

jura11 said:


> My personal best in Port Royal is 14,206. That's with the KFA2 390W BIOS, without the shunt mod, and I needed to run undervolted at 0.9V, because with a normal offset the clocks would bounce like crazy; my best score with an offset was 14,009.
> 
> 
> 
> https://www.3dmark.com/pr/540093
> 
> 
> 
> Have run with VRAM set at +1235MHz with all benchmarks, with 1250MHz PC would freeze
> 
> Hope this helps
> 
> Thanks, Jura


Hola my friend Jura, nice to see you join the 3090 club. If you are still on the stock BIOS with no shunt mod and the stock cooler, that PR result of 14k+ is actually a good score. One thing: these cards are very temperature sensitive. Once you put them under water they stretch their legs and begin to shine.


----------



## jura11

Temperatures have been 32-33°C during the whole benchmark. I tried rendering in Blender again and temperatures are awesome at 32-33°C; GPU usage is 80-90% on the RTX 3090, whereas on my RTX 2080Tis I have seen 70-80% at most.

In gaming, temperatures are the same and the clocks similarly bounce like crazy, which means I need a shunt mod.

Thanks, Jura


----------



## jura11

chispy said:


> Hola my friend Jura , nice to see you join the 3090 club  . If you are still on the stock bios and no shunt + stock cooler that PR result of 14+ is actually a good score. One thing , this cards are very temperature sensitive , once you go water cooling on them they do stretch their legs and begin to shine.



Hola Angelo, how are you mate, hope you're well and OK mate 

The stock Palit BIOS is, to put it politely, rubbish hahaha. Right now I'm on the KFA2 390W BIOS, which is okay, although I still think 450-500W is the sweet spot for a two 8-pin 3090.

The Palit RTX 3090 GamingPro is already under water, running a Bykski waterblock, and temperatures are a nice 32-33°C during the whole benchmark.

It's a friend's RTX 3090, I'm only testing the GPU for him; I'm not sure my Asus RTX 3090 Strix OC will be delivered by the end of the year.

Thanks, Jura


----------



## TK421

Does the 80W uplift of the Strix help the card a lot in stabilizing frequency compared to the Founders? Seeing as no XOC BIOS has been leaked, and I don't think crossflashing works on the FE (maybe?).

I've thought of just selling the 3090 close to retail on the OCN marketplace and moving over to the Strix if the difference is huge.


----------



## chispy

jura11 said:


> Hola Angelo, how are you mate, hope you're well and OK mate
> 
> Stock Palit BIOS is just how to say it politely rubbish hahaha, right now I'm on KFA2 390W BIOS which is okay although I still think 450-500W is sweet spot for 3090 two 8 pin
> 
> Palit RTX 3090 GamingPro is already under water, running Bykski Waterblock and temperatures are nice in 32-33°C during whole benchmark
> 
> Its friend RTX 3090, I only testing that GPU for him, my Asus RTX 3090 Strix OC not sure of will be delivered by the end of the year
> 
> Thanks, Jura


Nice. You won't be disappointed by the Strix; it just does not get better than that, well, maybe the Kingpin 3090 😁. Fingers crossed you get your Strix 3090 soon, bro. Lots of fun ahead of you once you get it.

Kind Regards: Angelo


----------



## TK421

chispy said:


> Nice  . You won't be disappointed by the Strix it just does not get better than that , well maybe kingpin 3090 😁 , cross fingers and hope you get your Strix 3090 soon bro. Lots of fun ahead of you once you get it.
> 
> Kind Regards: Angelo


IMO the price difference between the KPE and Strix isn't worth it; you're paying a lot more just for the AIO.

Most likely the PCB isn't that different in terms of quality either. The only reason to maybe get it is an unlocked power limit BIOS, which you may or may not get.


----------



## mrpeters

Any other GIGABYTE AORUS GeForce RTX 3090 MASTER 24G (GV-N3090AORUS M-24GD) owners?

Looking for some advice on best BIOS and comparing overclocking results.

A little underwhelmed with my initial 3DMark Timespy runs.



https://www.3dmark.com/3dm/53568696?


----------



## jura11

mrpeters said:


> Any other GIGABYTE AORUS GeForce RTX 3090 MASTER 24G (GV-N3090AORUS M-24GD) owners?
> 
> Looking for some advice on best BIOS and comparing overclocking results.
> 
> A little underwhelmed with my initial 3DMark Timespy runs.
> 
> 
> 
> https://www.3dmark.com/3dm/53568696?



That's quite a low graphics score; my stock graphics score with the Palit RTX 3090 is 20305 I think, and overclocked it's something like 21200.

For my Palit RTX 3090 GamingPro the best BIOS is the KFA2 390W BIOS.

Hope this helps 

Thanks, Jura


----------



## jura11

chispy said:


> Nice  . You won't be disappointed by the Strix it just does not get better than that , well maybe kingpin 3090 😁 , cross fingers and hope you get your Strix 3090 soon bro. Lots of fun ahead of you once you get it.
> 
> Kind Regards: Angelo


Hi Angelo 

I still have an Asus RTX 2080Ti Strix, which is one of the best-overclocking RTX 2080Tis I have tested or tried.

Yes, I think so too, I won't be disappointed by the Asus RTX 3090 Strix OC for sure. Finding that GPU in stock is almost impossible now, but hopefully availability will be much better next year.

Thanks, Jura


----------



## mrpeters

jura11 said:


> That's quite low graphic score, my stock graphic score with Palit RTX 3090 is 20305 I think and OC is something like 21200
> 
> For my Palit RTX 3090 GamingPro best BIOS is KFA2 390W BIOS
> 
> Hope this helps
> 
> Thanks, Jura


Indeed. What app are you using for overclocking? I am using MSI Afterburner 4.6.3 Beta 3 but it seems to have some issues; I can't even get the OC Scanner to kick off.


----------



## mrpeters

Improved a bit with a 10% voltage increase, +200MHz core clock, and +500MHz memory clock.



https://www.3dmark.com/3dm/53573133?



Guess I'll keep tinkering


----------



## jura11

mrpeters said:


> indeed -- what app are you using for overclocking? I am using MSI Afterburner 4.6.3 Beta 3 but it seems to have some issues. I can't even get the OC Scanner to kick off.


Yes, I'm using MSI Afterburner as well, not sure which version. I don't use OC Scanner at all; personally I prefer tinkering with core and VRAM offsets. I think OC Scanner is broken because of changes in Nvidia's NVAPI, but hopefully the developers will fix that issue.

Try increasing VRAM, and you can also try undervolting with a manual V/F curve; my best scores are with the GPU undervolted to 0.9V.

Hope this helps 

Thanks, Jura
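The manual V/F-curve undervolt described above can be pictured as a simple transform on curve points. A rough sketch with made-up curve values (Afterburner only exposes this through its UI, so the function and numbers here are purely illustrative):

```python
def undervolt_curve(curve_mhz_by_mv, cap_mv=900, target_mhz=2040):
    """Mimic the usual Afterburner undervolt: every point at or above the
    voltage cap is flattened to the target clock, and points below the cap
    are left alone (but never above the target)."""
    return {
        mv: target_mhz if mv >= cap_mv else min(mhz, target_mhz)
        for mv, mhz in curve_mhz_by_mv.items()
    }

# Hypothetical stock curve points (mV -> MHz), not from any real card:
stock = {850: 1950, 900: 2010, 950: 2085, 1000: 2130}
flat = undervolt_curve(stock)
print(flat)  # {850: 1950, 900: 2040, 950: 2040, 1000: 2040}
```

The point of flattening is that the card never requests voltages above the cap, which keeps it under the power limit at a steady clock instead of bouncing.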


----------



## OC2000

So I just shunted the 3090 Strix with 8 mOhm resistors on all shunts, using the silver paint method. Unfortunately it didn't work. I thought I put them on properly. Any advice on making sure they're put on properly?

On a side note, adding the EK water block to the 3090 made the coil whine unbearable. Does anyone have any ideas on how to reduce this? No idea why the water block caused it.


----------



## jura11

TK421 said:


> does the 80w uplift of the strix help the card out a lot when stabilizing frequency compared to founders? seeing no xoc bios have been leaked and I don't think crossflashing works with the FE(maybe?)
> 
> thought of just selling the 3090 close to retail on ocn marketplace and moving over to the strix if the difference is huge


Hi there 

80W definitely helps with better OC. If you are not looking to OC, plan to run at stock clocks, and are okay with the noise of the stock cooler, then get the Founders Edition.

I'm literally limited by the power limit on my Palit RTX 3090 GamingPro; running the 390W power limit I'm hitting it all the time, and my best results are with the GPU undervolted.

I'm not sure crossflashing the FE will ever be possible; if you want a higher power limit there, the only option is a shunt mod.

The XOC BIOS from Asus hasn't leaked yet; I assume it will leak maybe next year, as it always does.

If you are planning to run under water then the Strix, Kingpin, or FTW3 are the best GPUs.

Hope this helps 

Thanks, Jura


----------



## jura11

OC2000 said:


> So I just shunted the 3090 Strix with 8 mOhm resistors on all shunts, using the silver paint method. Unfortunately it didn't work. I thought I put them on properly. Any advice on making sure they're put on properly?
> 
> On a side note, adding the EK water block to the 3090 made the coil whine unbearable. Does anyone have any ideas on how to reduce this? No idea why the water block caused it.


Yup, I experienced this with the EK waterblock on my RTX 2080Ti Strix. Adding extra pads or using better pads helps with coil whine; I swapped the EK thermal pads for Thermalright Odyssey pads.

Hope this helps 

Thanks, Jura


----------



## Falkentyne

OC2000 said:


> So I just shunted the 3090 Strix with 8 mOhm resistors on all shunts, using the silver paint method. Unfortunately it didn't work. I thought I put them on properly. Any advice on making sure they're put on properly?
> 
> On a side note, adding the EK water block to the 3090 made the coil whine unbearable. Does anyone have any ideas on how to reduce this? No idea why the water block caused it.


How did it not work?
Did you scrape the edges of the original shunts fully so the silver is shiny? Are the original shunts fully flat or are the edges lower than the black part? Which paint was used? I need to know this.


----------



## Sync0r

OC2000 said:


> So I just shunted the 3090 Strix with 8 mOhm resistors on all shunts, using the silver paint method. Unfortunately it didn't work. I thought I put them on properly. Any advice on making sure they're put on properly?
> 
> On a side note, adding the EK water block to the 3090 made the coil whine unbearable. Does anyone have any ideas on how to reduce this? No idea why the water block caused it.


Did you scratch away the conformal coating on the original resistors?


----------



## OC2000

jura11 said:


> Yup this I have experienced with EK waterblock and RTX 2080Ti Strix adding extra pads or using better pads this helps with coil whine, I swapped EK thermal pad pads for Thermalright Odyssey pads
> 
> Hope this helps
> 
> Thanks, Jura


I am currently using the stock EK thermal pads until I get the shunts working; then I have some 12 W/mK Thermalright and 15 W/mK GELID CP-Ultimate pads to use once I know it's working properly. Is it the VRMs causing it, or the capacitors which don't have thermal pads on them?


----------



## OC2000

Falkentyne said:


> How did it not work?
> Did you scrape the edges of the original shunts fully so the silver is shiny? Are the original shunts fully flat or are the edges lower than the black part? Which paint was used? I need to know this.


I did a small bit of scratching away at the conformal coating, but they looked quite shiny already. Plus I saw Frame Chaser get results without even doing it, so I didn't focus too much on it.
The shunts on the Strix have lower edges, with the black part slightly higher. I used the 842AR, the one you recommended. Any idea how to overcome the lower edges? It sounds like that could be it.


----------



## anethema

@Falkentyne Yeah, sorry, I messed up my reply last time. Yeah, I reflowed all the shunts and there's still a bit of a weird power balance.









Put the Thermalright 12.8 W/mK pads on everywhere and it fixed all my thermal flag issues though. Getting around 14900 in Port Royal at room temperature with 100% fan and my OC now.


----------



## Falkentyne

OC2000 said:


> I did a small bit of scratching away on the conformal coating, but they looks quite shiny already. Plus I saw Frame Chaser not even doing it and got results, so didn't focus too much on it.
> The shunts on the Strix have the lower edges with the black slightly higher. I used the 842AR, the one you have recommended. Any idea how to overcome the lower edges as it sounds like that could be it.


Hi there,
To use MG842AR paint on shunts that have edges lower than the middle, you need to either:

1) Do NOT use a stacked shunt; simply paint the entire shunt instead. The paint will land between 15 mOhm and 5 mOhm depending on how thick it is: the thicker, the lower the resistance. The paint may look like a thick layer when wet, but when it dries it gets much thinner. The difficulty is making sure you get full contact with the edges of the shunts without getting any on the PCB. You can use Kapton tape to seal off the shunt completely when doing this (protip!). Having edges lower than the middle is sort of a worst-case scenario for shunt stacking; that requires soldering to bridge them, replacing the original shunts, or method #2. As long as your contact at the edges is proper (this is important!), direct painting without stacking will create a new shunt resistance between 3 and 3.75 mOhm (remember, the thicker the paint, the lower the resistance). The only really tricky part is getting the resistance of the two or three 8-pins close to each other. Either way, as long as you make full contact, you should have 500-600W in the end.

2) For stacking: paint the shunt fully as in #1, making 100% sure you have paint covering the edges of the shunt and a nice thick layer with no gaps. Let it dry for 30 minutes. Then put a 4 mOhm shunt resistor on top of it by applying a new layer of paint on the now-flat surface and pressing the shunt down softly. Make sure the new shunt's edges have full contact. This will create a total shunt resistance as if you had originally soldered a 5 mOhm shunt on top of a 5 mOhm shunt, so about 2.5 to 3 mOhm. Do NOT cover the sides or top of the shunt you are applying.

One of the 'nice' things about method #2 is that if you find your new shunt doesn't reduce the wattage enough, you can just pry it off and put on another one (provided the paint below it is fully dry).

Using a multimeter helps take out some of the guesswork from all of this.
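The numbers in the two methods above can be sanity-checked with basic parallel-resistance math. This is only a sketch: it assumes 5 mOhm stock shunts (the reference value used in method #2) and ignores trace resistance and paint variance:

```python
def parallel(r1_mohm, r2_mohm):
    """Equivalent resistance of two shunts in parallel."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def real_power_limit(bios_limit_w, r_stock_mohm, r_new_mohm):
    """The card computes current as V_drop / R_stock. With a lower real
    resistance it under-reports power, so the real limit rises by the
    inverse ratio."""
    return bios_limit_w * r_stock_mohm / r_new_mohm

# Method #2: a shunt stacked on a paint layer behaves like 5 mOhm over 5 mOhm:
print(parallel(5.0, 5.0))                 # 2.5 (mOhm)

# Method #1: direct painting lands around 3-3.75 mOhm; on a 390 W BIOS:
print(real_power_limit(390, 5.0, 3.75))   # 520.0 (W)
print(real_power_limit(390, 5.0, 3.0))    # 650.0 (W)
```

Which lines up with the 500-600W the post estimates for direct painting.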

------

If the original shunts were fully flat, you wouldn't be having any problems.


----------



## Falkentyne

anethema said:


> @Falkentyne Ya sorry messed up my reply last time. Ya reflowed all shunts and still a bit of a weird power balance.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Put all the Thermalright 12.8 pads on and it fixed all my thermal flag issues tho. Getting around 14900 PR in my room temp room at 100% fan and OC now.


What did you add multipliers to in hwinfo64? Which fields exactly and what is the multiplier? 
There's something whacky about your GPU Core NVVDD Output Power (sum). Did you add a multiplier to that?


----------



## anethema

Falkentyne said:


> What did you add multipliers to in hwinfo64? Which fields exactly and what is the multiplier?
> There's something whacky about your GPU Core NVVDD Output Power (sum). Did you add a multiplier to that?


I put 1.66667 on every power wattage value. Not on %TDP tho.
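For context, 1.66667 is what you get if the shunt mod cut the sensed resistance to 3/5 of stock; the card still divides the measured voltage drop by the stock resistance, so it under-reports power by that ratio. A small illustrative sketch (the 5 mOhm and 3 mOhm values are assumptions, not measurements of this card):

```python
def hwinfo_multiplier(r_stock_mohm, r_modded_mohm):
    """Correction factor for power readings after a shunt mod: the card
    under-reports by r_modded / r_stock, so multiply by the inverse."""
    return r_stock_mohm / r_modded_mohm

m = hwinfo_multiplier(5.0, 3.0)
print(round(m, 5))        # 1.66667, the value used above

# A reported 310 W then corresponds to roughly:
print(round(310 * m, 1))  # 516.7
```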


----------



## Falkentyne

anethema said:


> I put 1.66667 on every power wattage value. Not on %TDP tho.


Ok,
Your GPU Core NVVDD Output Power (sum) should only be slightly higher than "GPU Core Power" (GPU Core NVVDD Input Power (sum)). This is called GPU Core Power in GPU-Z but GPU Core NVVDD Input Power (sum) here. I'm actually not fully sure which one matches the GPU-Z entry, but it's one of the two. It can NOT be the second one from your screenshot, however, because the GPU core would power limit itself if it reached 290+ watts, and the second value's original value is 310 watts (310 * 1.66 = 514.6).

Ergo it must indeed be the first one (Input Power (sum)).

I think the reason your output power (sum) is so high is because of your FBVDD (128 * 1.66 = 212). And your Source Input PP wattage is reading extremely high. Mine doesn't get above 85W.

(Of course, my MVDDC drops to 0.0W at load for some reason--it reports 25W (after the 1.58x multiplier) with the GPU running at 3D clocks at idle, but we don't talk about that.)
I'm pretty sure it's one of those two shunts causing your strange readings. Did you get that reading while mining? What is your MVDDC when running a benchmark or something?

From most posts I've seen, MVDDC (FBVDD) isn't higher than 70W (base value) on a Founders Edition. Unfortunately I have NO idea which shunt links to it, but I am 100% sure it's either the shunt on the "GPU" side, at the middle of the "V" cutout, _OR_ the shunt on the backplate side, diagonally opposite the PCIE shunt in the other corner (whichever one it is not is the GPU Core Power shunt).

But you said you reflowed all the shunts...hmmm :/

Can you check the two shunts I mentioned (GPU side, around the middle of the "V", and backplate side, in the corner diagonally opposite the PCIE shunt)? Make sure nothing is shorting anything.

Or if you were mining, can you take a screenshot when you were actually just running something like "Heaven" or "Timespy"?


----------



## mrpeters

jura11 said:


> Yes I'm using as well MSI Afterburner not sure which version, I don't use OC Scanner at all, personally I prefer tinkering with offset and VRAM offsets, I think OC Scanner is broken because of changes in Nvidia Nvapi but hopefully developers will fix that issue
> 
> Try with increasing VRAM and you can try as well downvolting and manual V/F curve, my best scores or results are with GPU downvolted to 0.9v
> 
> Hope this helps
> 
> Thanks, Jura


Would you be willing to share a screenshot of one of your manual curves undervolted to 0.9V? I have seen many articles mention that the 3000 series seems to like undervolting.

Is a 400MHz VRAM overclock too modest or should I be pushing for more there, and less core clock?


----------



## ExDarkxH

mrpeters said:


> Would you be willing to share a screenshot of one of your manual curves downvolted to 0.9v? Have seen many articles mention that the 3000 series seem to like undervoltage.
> 
> Is a 400MHz VRAM overclock too modest or should I be pushing for more there, and less core clock?


Never give up core clock for VRAM clock.
Dial in your core clock first, and from there dial in memory.


----------



## Carillo

OC2000 said:


> So I just shunted the 3090 Strix with 8 mOhm resistors on all shunts, using the silver paint method. Unfortunately it didn't work. I thought I put them on properly. Any advice on making sure they're put on properly?
> 
> On a side note, adding the EK water block to the 3090 made the coil whine unbearable. Does anyone have any ideas on how to reduce this? No idea why the water block caused it.


This method did not work for me either. I tried 3 times, so I just soldered them on.


----------



## mirkendargen

ExDarkxH said:


> never give core clock for vram clock
> dial in your core clock first and from there dial in memory.


Yeah, I'm sure it varies with workload, but for me in Port Royal, +400MHz mem = +15MHz core in terms of performance.


----------



## Nizzen

Cholerikerklaus said:


> I got a similar problem. I switched from a 9900k to a 5950x and I lost 500 points in Port royal. I think 3dmark got a problem with some kind of cpu.


Port Royal likes low latency, so that's worth a few points of the score. Ultra-tight memory timings can improve the Port Royal score.
Good luck


----------



## Lobstar

ExDarkxH said:


> dial in your core clock first and from there dial in memory.


I did this the other way around. By setting the GPU clock first, my MEM stopped being stable at +448 once I was around +200 on the GPU. By setting the MEM first I'm able to get a stable +1448 in games and benchmarks, but my GPU clocks about 20 lower. After playing with undervolting a bit I was able to get a solid +180 GPU and +1448 MEM.


----------



## Falkentyne

Lobstar said:


> I did this the other way around. By setting the GPU clock first my MEM stopped being stable at +448 around +200 on the GPU. By setting the MEM first I'm able to get stable +1448 in games and benchmarks but my GPU clocks lower by about 20 or so. After playing with undervolting a bit I was able to get a solid +180 GPU and +1448 MEM.


Wait, what? This works?
You set MEM first, hit apply, then set GPU?....

Anyone else want to verify this?
This is the first I've heard of hocus pocus like this working....


----------



## mirkendargen

Falkentyne said:


> Wait, what? This works?
> You set MEM first, hit apply, then set GPU?....
> 
> Anyone else want to verify this?
> This is the first I've heard of hocus pocus like this working....


I think he's saying he's stable at +200 core +448 mem, or +180 core and +1448 mem, and never would have known about the latter if he'd maxed core first and then started overclocking mem.


----------



## jura11

In my case I do the memory OC first; in the previous generation, with the RTX 2080Ti, I would do core first and then the memory OC.

With the Palit RTX 3090 I have tried everything possible, like +160MHz on core and +875MHz on VRAM; my results have been in the 14k range (14008, 14009, or 14006). Adding an extra 25MHz on VRAM lowered results by around 200 points, and adding an additional 15-45MHz on core didn't improve scores at all.

In my case undervolting gave me the best results: running 2040MHz at 0.9V and +1250MHz on VRAM with the pitiful 390W power limit. Temperatures are no issue because I'm getting 32-33°C in gaming, rendering, and benchmarks.

Hope this helps 

Thanks, Jura


----------



## ExDarkxH

Well, you're supposed to eventually test out your max memory offset by running +0 core and ramping up memory until you start losing points. That's just to have your ideal mem offset in mind when you're dialing in core.

With that said, giving up 20 core to allow 1000 extra memory is an extreme anomaly and highlights another issue altogether: low power limit, thermal issues, etc. More than likely +200 core was at the extreme edge of stability in general and was limiting further memory offset.
The concept I laid out, however, is generally to stress how much more important core is than memory in gaming.

If I were able to run just 15MHz higher on average in Port Royal, my score would shoot up tremendously.

But if you run, for example, +300 mem and then test again at +400, you will laugh at how little the +100 actually adds.
There are some applications where you will want more memory speed, but focus on the core.
I can probably run +0 memory and get higher scores than all these guys running +1100-1500 memory.
Core, core, core. Focus on the core.
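The tuning order being argued here (sweep one offset while the other stays fixed, and stop when the score drops) can be sketched as a loop. `run_benchmark` is a hypothetical stand-in for a real, stability-tested 3DMark pass, and the scores are made up:

```python
def best_offset(run_benchmark, offsets):
    """Walk offsets upward and return the best-scoring one, stopping as
    soon as a step no longer improves the score (past that point memory
    error replay typically eats any gain)."""
    best, best_score = offsets[0], run_benchmark(offsets[0])
    for off in offsets[1:]:
        score = run_benchmark(off)
        if score <= best_score:
            break
        best, best_score = off, score
    return best

# Toy score table (offset -> Port Royal-ish score), purely illustrative:
fake_scores = {0: 14000, 200: 14060, 400: 14110, 600: 14150, 800: 14170, 1000: 14120}
print(best_offset(fake_scores.get, [0, 200, 400, 600, 800, 1000]))  # 800
```

Run it once for memory with core at +0, then again for core with memory at your chosen offset.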


----------



## HyperMatrix

chispy said:


> Nice  . You won't be disappointed by the Strix it just does not get better than that , well maybe kingpin 3090 😁 , cross fingers and hope you get your Strix 3090 soon bro. Lots of fun ahead of you once you get it.
> 
> Kind Regards: Angelo


Yeah....unless you get a Strix like mine that wanted to crash above 2130MHz. Silicon Lottery is still king with these cards. 



TK421 said:


> imo the price diff between kpe and strix isn't worth it, paying a lot more just for the aio
> 
> most likely the pcb isn't that different in terms of quality either, only reason to maybe get it is an unlocked power limit bios which you may or may not get


No offense, but you have no idea what you're talking about. The KPE is binned, so you're going to get a higher clock than the average Strix (note: mine was a 2130-2160MHz max Strix OC). The KPE also has the advantage of an unlocked power limit under the LN2 BIOS switch, but more importantly, built-in voltage controls that are saved. Not to mention the KPE has absolutely, by far, superior cooling to the Strix. The Strix is a nice card IF you get a good GPU die; otherwise, there's nothing you can do.


----------



## jura11

HyperMatrix said:


> Yeah....unless you get a Strix like mine that wanted to crash above 2130MHz. Silicon Lottery is still king with these cards.
> 
> 
> 
> No offense but you have no idea what you're talking about. KPE is binned so you're going to get a higher clock than the average Strix (note: mine was a 2130-2160MHz max Strix OC). KPE also has the advantage of an unlocked power limit under the LN2 bios switch, but more importantly...built in voltage controls that are saved. Not to mention the KPE has absolutely by far superior cooling to the Strix. The Strix is a nice card, IF you get a good GPU die. Otherwise, there's nothing you can do.


I must say I have been lucky with my RTX 2080Ti Strix, which is one of the highest-clocking RTX 2080Tis I have tested or used. I've tried the Zotac RTX 2080Ti AMP, Palit RTX 2080Ti, EVGA RTX 2080Ti FTW3, Gigabyte RTX 2080Ti Aorus, etc.

Most of those RTX 2080Tis would do 2100MHz. The FTW3 would struggle to go above 2085MHz with a waterblock, the Gigabyte wouldn't break 2130MHz, and the Zotac 2115MHz; my current Asus RTX 2080Ti Strix will do 2205-2220MHz in benchmarks, 2175-2190MHz in gaming, and 2070-2085MHz in rendering.

I never tried the KPE so I can't comment, but paying £2k just for that card is a bit absurd to me. Yes, if you want the best, cherry-picked GPU die, you will pay for it, but a friend has the KPE RTX 2080Ti and his will not do a stable 2205MHz in gaming; he runs 2130-2145MHz most of the time.

The silicon lottery has sadly always been part of OC, and sometimes it's a bit of a lucky draw with GPUs.

I can't comment on stock coolers because I have never run one and probably never will.

Hope this helps 

Thanks, Jura


----------



## TK421

HyperMatrix said:


> Yeah....unless you get a Strix like mine that wanted to crash above 2130MHz. Silicon Lottery is still king with these cards.
> 
> 
> 
> No offense but you have no idea what you're talking about. KPE is binned so you're going to get a higher clock than the average Strix (note: mine was a 2130-2160MHz max Strix OC). KPE also has the advantage of an unlocked power limit under the LN2 bios switch, but more importantly...built in voltage controls that are saved. Not to mention the KPE has absolutely by far superior cooling to the Strix. The Strix is a nice card, IF you get a good GPU die. Otherwise, there's nothing you can do.



I'm pretty sure those cards are just binned like any other 3090s, even the Strix or other high-end models.

They can do higher clocks because of the power limit BIOS (product segmentation) and the cooling.

Even KP himself said that prebinned cards can be hit or miss.

For reference, the 2080Ti KPE doesn't have an unlimited TDP BIOS from the factory, even on the LN2 switch.

You need to download and flash the provided unofficial LN2 BIOS, which may not even be released for this gen.


----------



## ExDarkxH

Don't use the words 'cherry-picked die' so lightly; the bin isn't that great!
You're not guaranteed to get a good card, only promised not to get a bad one.

That's my main issue with the KPE, since the VRMs of the Strix and FTW3 are already overkill. Honestly, what you're paying for are the tools.
You get a waterblock, but you can already buy a Hybrid for $1800 and a Hydro Copper for $1850, so let's ignore that.
The price difference is really from the toolset that the KPE offers: the Classified tool, EVBot update, XOC VBIOS, etc.

You might be able to get an additional 15MHz on the core from the overall build quality. It should in theory run cooler as well, but you can always buy a nicer waterblock for your current card and balance that out. The caps used, for instance, are extremely dense, among many other things.

Last generation it was better: the KPE came with Samsung memory and also came as a (rare) triple 8-pin card. This time around it's the same memory and same triple 8-pin as the others. You're really paying for the tools, and that's why EVGA is locking down their FTW3 cards so hard. The actual hardware itself isn't worth the price; you're paying for the BIOS and tools.

Now, if it were truly a highly binned card, it would be worth every penny of that $2,000 asking price.


----------



## jura11

@ExDarkxH 

The EVGA Hydro Copper is one of the worst waterblocks; I tested their blocks and could outperform them with a Bykski or Barrow waterblock. Would I pay extra for a waterblock? Of course: for my RTX 3090 Strix I bought the Aquacomputer Kryographics RTX 3090. I used their block with the RTX 2080Ti as well, and they've been among the best in my opinion.

The Hybrid I'm not sure about. I only used an EVGA Hybrid once, with my Titan X SC, and it failed within 3 months of use; because of that I'm staying away from anything AIO, I simply don't trust AIOs hahaha.

My RTX 2080Ti Strix came with Samsung memory and had no issues; I have run the Matrix BIOS on that GPU, with which I've run +700MHz.

Hope this helps 

Thanks, Jura


----------



## ShadowYuna

OC2000 said:


> So I just shunted the 3090 Strix with 8 mOhm resistors on all shunts, using the silver paint method. Unfortunately it didn't work. I thought I put them on properly. Any advice on making sure they're put on properly?
> 
> On a side note, adding the EK water block to the 3090 made the coil whine unbearable. Does anyone have any ideas on how to reduce this? No idea why the water block caused it.


I had the same issues. I received the final feedback from EK Support below:










EK tested on their end and found that the backplate may cause the coil whine. To reduce it, loosen the screws or remove the backplate.

So I am planning to return the block and wait for Bitspower or Aquacomputer.


----------



## mirkendargen

ShadowYuna said:


> I had same issues. I received the final feedback from EK Support as below
> 
> 
> 
> EK tested on their end and found that with backplate it may cause the coil whine. To reduce loosen the screw or remove the backplate.
> 
> So I am planning to return the block and wait for Bitspower or Aqaurcom.


Loosen the backplate so it no longer makes proper contact, what a great suggestion.


----------



## OC2000

ShadowYuna said:


> I had same issues. I received the final feedback from EK Support as below
> 
> 
> 
> EK tested on their end and found that with backplate it may cause the coil whine. To reduce loosen the screw or remove the backplate.
> 
> So I am planning to return the block and wait for Bitspower or Aqaurcom.


I'll try it tomorrow without the backplate and see. I'll let you know.


----------



## dr/owned

mirkendargen said:


> Loosen the backplate so it no longer makes proper contact, what a great suggestion.


Kinda odd... are they implying the backplate is turning into a speaker for the coil whine, so less contact means less noise?

If that's the case why couldn't you just put some more thermal pads around to "stiffen" the plate up more?


----------



## ShadowYuna

mirkendargen said:


> Loosen the backplate so it no longer makes proper contact, what a great suggestion.


They also asked me to use my Strix 3090 without the backplate. I have no intention of running it without the backplate.


----------



## Cholerikerklaus

Nizzen said:


> Port Royal likes low latency, so that's a few points of the score. Ultra tight timings can improve the Port Royal score
> Good luck


Thanks for your answer, I will try it. Now I am nearly happy with my FE: I got 7th place in the HOF in Fire Strike Ultra with only 530W, watercooled.
But in Port Royal my PC sucks. I get an average of 2218MHz but only a 15100 score. That is so strange. I hate this Ryzen.


----------



## mirkendargen

dr/owned said:


> Kinda odd....are they implying the backplate is turning into a speaker for the coil while so less contact is less noise?
> 
> If that's the case why couldn't you just put some more thermal pads around to "stiffen" the plate up more?


It might just happen to be a size/shape that resonates with the coil whine frequency and amplifies it.

But yeah, the right answer is to dampen it with more thermal pads, not take the backplate off.


----------



## HyperMatrix

TK421 said:


> I'm pretty sure those cards are just binned like any other 3090s, even the strix or other highend models.
> 
> It can do higher clocks because of the power limit bios (product segmentation) and cooling.
> 
> Even KP himself said that prebinning cards can be hit or miss.
> 
> 
> 
> For reference, the 2080Ti KPE doesn't have an unlimited tdp bios from the factory, even on the LN2 switch.
> 
> You need to download and flash the provided unofficial LN2 bios that may not even be released for this gen.
> 


Steve and JayZ were pulling something around 700-800W on the Hybrid KPE. I don't believe they mentioned having to flash any other BIOS, but it's possible they may have; in previous videos they had mentioned when they were using an unreleased XOC BIOS. KPEs in the past have been binned (including the 2080Ti). This doesn't mean every single KPE will clock higher than every single non-KPE from EVGA, but it does mean your chances of higher clock speeds are increased, and your chance of a bum card is greatly decreased. Unless something has changed with the 3090 model, you should expect the same from this card as we've seen in past generations.

So for an extra 11% over the Strix, you get a better BIOS, a better binned chip, voltage controls that get saved (unlike the Elmor), a neat little OLED gimmick, a substantially better cooler, all while keeping your warranty. If that's not worth $200 to you, I'm not sure how you justify the extra $1100 for the Strix over a 3080 FE. Yes, the hybrid cooler doesn't mean as much if you're going to get a separate water block for it anyway, but it doesn't take away from the fact that it's included.

And as I mentioned....the Strix doesn't come with any guarantees. On the air cooler, I couldn't game over 1905MHz at 0.900v without it overheating. And even then, it'd still crash in some games. Best case scenario for OC'ing, it was able to do 2130MHz momentarily while under 50C on the 500W EVGA bios. Yes anyone who has a Strix that can hit 2200MHz loves it. Just like people who have a Zotac card capable of 2200MHz. But we've seen plenty of people with the Strix who are having difficulty getting higher results than any other card. Because while it's a well built card, it can't overcome the restrictions imposed by silicon lottery. At least with the KPE, you can pump more voltage into it to get it up a bit higher. Sure you can do the same with the Strix, but you'd have to void your warranty AND your settings would have to be reapplied after every boot.

Sorry but I'm just not buying the "Strix is just the right amount of overpriced" argument. If you're planning to shunt and watercool the card anyway, then there's no point in getting a Strix. Just get the 3090 FE. If you're paying an extra $350 just to get the Strix for EVC2, then you really can't make an argument that an extra $150 is too much for a card with higher power bios, built in voltage controls that don't have to be reapplied after every boot, intact warranty, a better binned chip (regardless of how much better binned, it's still better binned), a better designed board, and a triple radiator AIO cooler.


----------



## motivman

ArcticZero said:


> Just got my 3090. It's a PNY REVEL EPIC-X one, so basically reference. Tried overclocking it but alas, I cannot breach the 20k graphics score wall.
> 
> https://www.3dmark.com/spy/15560964
> 
> According to GPU-Z, what's holding me back is the 365w power limit. I'm a bit wary of flashing it since it doesn't have dual BIOS and doing a blind flash unnerves me a bit. Not to mention I haven't seen any reports of anyone flashing this particular card. What BIOS would be optimal for this? And if anyone has this card and flashed it, your experience would be greatly appreciated.
> 
> Cheers!


I have the same card. It needs to be shunt modded. Mine is, and I am currently top 30 in PR. If you don't shunt, then flash the Gigabyte 390W BIOS.


----------



## Sheyster

So this arrived today:










First impression after installing it and a quick gaming test drive:

- The P mode BIOS fan curve is LOUD. Fans hit 94% with the card at 76°C.

- 123% on the power limit slider with a small +105 core OC easily pegged the card at 480W. I guess I will install the 500W EVGA BIOS for a (very small) PL bump.

I'll do some more testing tomorrow. Definitely need to test the Q (quiet) mode BIOS. Having a second HDMI 2.1 port on the ASUS card is the biggest benefit over the 3090 FE card IMHO.


----------



## Skubaben

Hi, I'm getting my TUF in a few hours, and I was wondering about the new BIOS installer. First, is it safe? Just click the install file and it will install? I'm a bit worried for two reasons. One, I did that on my old 970 and it never posted after that. Second, it's six months of food prices alone. But it is official, so it's tempting to get the extra wattage.

Could someone enlighten me on the risk, or tell me why it's safe?


----------



## Shaded War

I just installed my new 3090 FE. Checked a quick bench and everything seems fine at stock.

What offsets should I start off with for GPU and memory? How about average oc settings? I haven't really looked into it much and would like a good range to look at.

EDIT: I went to +140 / 400 with no voltage change. Max clock is 2100MHz and it passed 3DMark with a good score. Not sure if I should keep going or just game on it for a while to see if it's stable. https://www.3dmark.com/3dm/53607877?


----------



## originxt

Got an RMA unit to replace my 3090 FTW3 (non-Ultra) due to issues, and the new card can't go over 400 watts. Tried new cables, a remount, the 500W BIOS, reflashing stock, DDU, and it's still 380-400. Guess I'm gonna have to request another RMA lol. Maybe I can ask them for a new unit this time.


----------



## dr/owned

HyperMatrix said:


> Sorry but I'm just not buying the "Strix is just the right amount of overpriced" argument. If you're planning to shunt and watercool the card anyway, then there's no point in getting a Strix. Just get the 3090 FE. If you're paying an extra $350 just to get the Strix for EVC2, then you really can't make an argument that an extra $150 is too much for a card with higher power bios, built in voltage controls that don't have to be reapplied after every boot, intact warranty, a better binned chip (regardless of how much better binned, it's still better binned), a better designed board, and a triple radiator AIO cooler.


The TUF is the card to get this generation IMO. All of the extra VRM adders on top for the Strix don't really matter if you're waterblocking the card anyway and can handle the inefficiency of maxing out the VRM. The KPE is just too much of a price adder for not much real-world benefit. And you get hit on the back end in resale $$ lost, too.


----------



## TK421

dr/owned said:


> The TUF is the card to get this generation IMO. All of the extra VRM adders on top for the Strix don't really matter if you're waterblocking the card anyway and can handle the inefficiency of maxing out the VRM. The KPE is just too much of a price adder for not much real-world benefit. And you get hit on the back end in resale $$ lost, too.


Makes me wonder how the resale market is for the Kingpin on eBay...


----------



## Jarmel

So I’m on the queue list for the 3090 FTW3 Hybrid and I’m wondering if I should try and get a Strix instead? I’m not sure whether the PCB being better on the Strix means it’ll likely OC better than the EVGA Hybrids, despite the Hybrids being on water.


----------



## dr/owned

TK421 said:


> Makes me wonder how the resale market is for the Kingpin on eBay...


On the 2080 Ti, the Kingpin is going for about $1100 on average, so a loss of $800. A regular 2080 Ti goes for about $850, so maybe a $400-$500 loss. It probably gets even worse the further back you go.


----------



## dr/owned

Jarmel said:


> So I’m on the queue list for the 3090 FTW3 Hybrid and I’m wondering if I should try and get a Strix instead? I’m not sure whether the PCB being better on the Strix means it’ll likely OC better than the EVGA Hybrids, despite the Hybrids being on water.


The FTW3 is barely above being a reference design with a couple more power stages glued on that you can't make use of anyways with the analog voltage controllers. The Strix is 2 levels above it. EVGA was pretty lazy this go-around and the Kingpin is close to what the FTW3 design should have been. (Compared to say the 980Ti Kingpin that had a TON of hardware features on the PCB meant for hardcore overclocks)

(Worth adding though that if you're just planning on doing a mild overclock and whatnot...literally any 3090 is going to work fine, cause it's coming down to the silicon lottery anyways)


----------



## nordschleife

jura11 said:


> My personal best in Port Royal is 14206 that's with KFA2 390W BIOS without the shunt mod and needed to run downvolted to 0.9v, because with normal offset clocks would bounce like crazy and best score with offset 14009
> 
> 
> 
> https://www.3dmark.com/pr/540093
> 
> 
> 
> Have run with VRAM set at +1235MHz with all benchmarks, with 1250MHz PC would freeze
> 
> Hope this helps
> 
> Thanks, Jura


Hi, I have a reference Galax 3090 SG and have been using the Gigabyte 390W BIOS; I get pretty much the same score, on air, with 18°C ambient:



https://www.3dmark.com/pr/521904



There's a Galax 390W BIOS which might be the same as the KFA2 one you are using; I'm thinking of switching to get all DP ports working.

Have you tried the Gigabyte BIOS? Could you please share the KFA2 BIOS?


----------



## GTANY

HyperMatrix said:


> Steve and JayZ were pulling something around 700-800W on the Hybrid KPE. I don't believe they mentioned anything about having to flash any other bios but it's possible they may have. In previous videos, they had mentioned when they were using an unreleased XOC bios. KPE in the past have been binned (including the 2080ti). This doesn't mean that every single KPE will be able to clock higher than every single non-KPE from EVGA. But it does mean your chances at higher clock speeds are increased, and your chances of a bum card is greatly decreased. Unless something has changed with the 3090 model, you should expect the same with this card as we've seen with past generations.
> 
> So for an extra 11% over the Strix, you get a better bios, a better binned chip, voltage controls that get saved unlike the elmor, a neat little oled gimick, a substantially better cooler, all while keeping your warranty. If that's not worth $200 to you, I'm not sure how you justify the extra $1100 for the Strix over a 3080 FE. Yes the Hybrid Cooler doesn't mean as much if you're going to get a separate water block for it anyway, but it doesn't take away from the fact that it's included.
> 
> And as I mentioned....the Strix doesn't come with any guarantees. On the air cooler, I couldn't game over 1905MHz at 0.900v without it overheating. And even then, it'd still crash in some games. Best case scenario for OC'ing, it was able to do 2130MHz momentarily while under 50C on the 500W EVGA bios. Yes anyone who has a Strix that can hit 2200MHz loves it. Just like people who have a Zotac card capable of 2200MHz. But we've seen plenty of people with the Strix who are having difficulty getting higher results than any other card. Because while it's a well built card, it can't overcome the restrictions imposed by silicon lottery. At least with the KPE, you can pump more voltage into it to get it up a bit higher. Sure you can do the same with the Strix, but you'd have to void your warranty AND your settings would have to be reapplied after every boot.
> 
> Sorry but I'm just not buying the "Strix is just the right amount of overpriced" argument. If you're planning to shunt and watercool the card anyway, then there's no point in getting a Strix. Just get the 3090 FE. If you're paying an extra $350 just to get the Strix for EVC2, then you really can't make an argument that an extra $150 is too much for a card with higher power bios, built in voltage controls that don't have to be reapplied after every boot, intact warranty, a better binned chip (regardless of how much better binned, it's still better binned), a better designed board, and a triple radiator AIO cooler.


I encountered a similar situation with my RTX 3090 Strix OC: even with the default cooler at 100%, I couldn't game at more than 1995 MHz in Assassin's Creed Origins in 4K. With my previous 3090 FE, on the same game, I was stable at 2055 MHz with fans on auto and 2085 MHz with fans at 100%. On the Strix, the GPU chip quality was far worse than on the FE model. I even repasted the Strix with Kryonaut and inserted washers to improve the GPU-cooler contact, and that did not change anything.

Consequence: I kept my Strix only 1 day, sold it, and bought a 3090 FE again. I am now waiting for my 2nd Founders Edition, hoping it will be as good as the first one.


----------



## OC2000

ShadowYuna said:


> I had same issues. I received the final feedback from EK Support as below
> 
> View attachment 2466656
> 
> 
> EK tested on their end and found that the backplate may cause the coil whine. To reduce it, loosen the screws or remove the backplate.
> 
> So I am planning to return the block and wait for Bitspower or Aqua Computer.


Just tested with the backplate off. Coil whine is whisper quiet. Need to work out a way to prevent the backplate from amplifying the coil whine.


----------



## ShadowYuna

OC2000 said:


> Just tested with the backplate off. Coil whine is whisper quiet. Need to work out a way to prevent the backplate from amplifying the coil whine.


Yeah, there is definitely something wrong with the backplate. Better to get another brand.


----------



## OC2000

Falkentyne said:


> Hi there,
> To use MG842AR paint on shunts that have edges lower than the middle, you need to either:
> 
> 1) Do NOT use a stacked shunt; simply paint the entire shunt with the paint instead. The paint will function between 15 mOhm and 5 mOhm, depending on how thick it is: the thicker, the lower the resistance. The paint may look like a thick layer when wet, but when it dries, it gets much thinner. The difficulty is making sure you get full contact with the edges of the shunts without getting any on the PCB. You can use Kapton tape to seal off the shunt completely when doing this (protip!). Having edges of shunts lower than the middle is sort of a worst-case scenario for shunt stacking--that requires soldering to bridge them, replacing the original shunts, or method #2. As long as your contact is proper at the edges (this is important!), direct painting without stacking shunts will create a new shunt resistance between 3 and 3.75 mOhm (remember, the thicker the paint, the lower the resistance). The only real tricky issue here is trying to get the resistance between the two or three 8-pins close to each other. Either way, as long as you make full contact, you should have 500-600W in the end.
> 
> 2) For stacking: Paint the shunt fully as #1, making 100% sure you have paint covering the edges of the shunt and a nice thick layer with no gaps. Then let it dry for 30 minutes. Then, put a 4 mOhm shunt resistor on top of it by applying a new layer of paint, on the now flat surface and then pressing the shunt down softly. Make sure the new shunt's edges have full contact. This will create a total shunt resistance as if you had originally soldered a 5 mOhm shunt on top of a 5 mOhm shunt, so about 2.5 mOhms to 3 mOhms. Do NOT cover the sides or top of the shunt you are applying.
> 
> One of the 'nice' things about the #2 method is that if you find your new shunt doesn't give you enough reduction in wattage, you can just pry it off and put on another one (provided the paint below it is fully dry).
> 
> Using a multimeter helps take out some of the guesswork from all of this.
> 
> ------
> 
> If the original shunts were fully flat, you wouldn't be having any problems.



Thank you for the excellent guide. I thought I'd give it one more try doing the shunt stacking, since the resistors I have are the raised-edge ones, which I thought would fit like a Lego block with a little more paint and precise placement. Unfortunately it didn't work, and I saw no higher system draw than previously.

I am now doing your step 2 method. 

Could you check to see if this is right? I am currently waiting for it to dry.



http://imgur.com/NJuBTJT


I only have 8 mOhm, 5 mOhm, and 10 mOhm resistors, so I'm planning on using an 8 mOhm on the PCIe and 5 mOhm on the 6 others. Would this work OK, given that I don't have a 4 mOhm?
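For the arithmetic: a stacked shunt is just two resistors in parallel, so the combinations work out roughly as below. This is only a sketch that assumes 5 mOhm stock shunts, as in the guide above; verify your actual values with a multimeter.

```python
def parallel(r1, r2):
    """Effective resistance of a shunt stacked on the stock one (parallel resistors)."""
    return (r1 * r2) / (r1 + r2)

STOCK = 5  # assumed stock shunt value, in mOhm

for added in (4, 5, 8, 10):
    r_eff = parallel(STOCK, added)
    # The card still assumes the stock value, so it under-reads power by STOCK / r_eff
    print(f"{STOCK} || {added} = {r_eff:.2f} mOhm -> power limit effectively x{STOCK / r_eff:.2f}")
```

So stacking 5 mOhm on 5 mOhm gives 2.5 mOhm effective (doubling the limit), close to the guide's 2.5-3 mOhm target, while an 8 mOhm stack scales less aggressively, which may be what you want on the PCIe slot shunt.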

Thanks


----------



## bmgjet

Why does it look like there is liquid metal spilled everywhere on your capacitors, and like it's gotten super hot?


----------



## OC2000

bmgjet said:


> Why does it look like there is liquid metal spilled everywhere on your capacitors, and like it's gotten super hot?


That's leftover thermal paste from the stock cooler. I think the ring around it was from the thermal pad, as it cleaned right off.



http://imgur.com/ojRdTwk


----------



## mrpeters

Lobstar said:


> I did this the other way around. By setting the GPU clock first my MEM stopped being stable at +448 around +200 on the GPU. By setting the MEM first I'm able to get stable +1448 in games and benchmarks but my GPU clocks lower by about 20 or so. After playing with undervolting a bit I was able to get a solid +180 GPU and +1448 MEM.


Wow! Great results from trying this method.

Stock BIOS still; +100 GPU +1000 MEM and graphics score at 20498!



https://www.3dmark.com/3dm/53621796?


----------



## ArcticZero

long2905 said:


> or the galax one with the same limit 390w but keep all your ports working


Thanks for the reply. I can't seem to find the GALAX 390w BIOS anywhere. I only saw one unverified TPU link for a beta, not sure if that was the one.

Also, would flashing the Gigabyte OC / Asus OC 390W BIOS on my PNY (reference) really disable one DP port, just because the port configs aren't the same? Is there no way around that? Kinda iffy flashing a BIOS that doesn't have a lot of feedback.

Cheers! 



motivman said:


> I have the same card. It needs to be shunt modded. Mine is, and I am currently top 30 in PR. If you don't shunt, then flash the Gigabyte 390W BIOS.


Thanks for this! Hard to find people with the same card. Seems to not be a very popular choice in most areas, but it was a great value and I just wanted to get my hands on any 3090 I could over here.

My main concern about the Gigabyte BIOS is the potential loss of one DP port as I use all four. I'm looking into the new 390w GALAX BIOS but I haven't heard a lot of post-flash feedback for it yet. Not sure if I have the courage to do a shunt mod on a $1500 card yet since I can't exactly afford to replace it.


----------



## Jarmel

dr/owned said:


> The FTW3 is barely above being a reference design with a couple more power stages glued on that you can't make use of anyways with the analog voltage controllers. The Strix is 2 levels above it. EVGA was pretty lazy this go-around and the Kingpin is close to what the FTW3 design should have been. (Compared to say the 980Ti Kingpin that had a TON of hardware features on the PCB meant for hardcore overclocks)
> 
> (Worth adding though that if you're just planning on doing a mild overclock and whatnot...literally any 3090 is going to work fine, cause it's coming down to the silicon lottery anyways)


I’m wondering though about the FTW3 being on water would offset that as the temps would be way cooler regardless of any absurd power draw I might want to do.


----------



## Bishi

ArcticZero said:


> Thanks for the reply. I can't seem to find the GALAX 390w BIOS anywhere. I only saw one unverified TPU link for a beta, not sure if that was the one.
> 
> Also, would flashing the Gigabyte OC / Asus OC 390w BIOS on my PNY (reference) really disable one DP port, just because the port configs aren't the same? Is there no way around that? Kinda iffy flashing a BIOS that doesn't have a lot of feedback.
> 
> Cheers!
> 
> 
> 
> Thanks for this! Hard to find people with the same card. Seems to not be a very popular choice in most areas, but it was a great value and I just wanted to get my hands on any 3090 I could over here.
> 
> My main concern about the Gigabyte BIOS is the potential loss of one DP port as I use all four. I'm looking into the new 390w GALAX BIOS but I haven't heard a lot of post-flash feedback for it yet. Not sure if I have the courage to do a shunt mod on a $1500 card yet since I can't exactly afford to replace it.


I also have a new PNY 3090. 8700K @ 5.0GHz, on air currently, but watercooling parts come in a few days.
At +120 / +1000 so far. Haven't checked whether that high memory clock is actually impacting FPS yet.
Haven't tried an undervolt yet. Tried all 3 BIOSes with the same clock settings and max power limit:
Stock BIOS, 104% power limit - *12454*
Galax 350/390W BIOS, 111% power limit - *13369*
Gigabyte 370/390W BIOS, 105% power limit - *12949* (and of course it breaks some ports)
For whatever reason NVFLASH isn't letting me run --protecton at the moment, but I'm not sure if it matters for now.
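For reference, those slider percentages line up with each BIOS's default power target (default W × max %). A quick sketch of the arithmetic, with figures taken from the numbers above:

```python
# (default W, max power-limit %) per BIOS, as reported above
bioses = {
    "Stock PNY": (350, 104),
    "Galax/KFA2 390W": (350, 111),
    "Gigabyte 390W": (370, 105),
}

for name, (default_w, pct) in bioses.items():
    max_w = default_w * pct / 100  # watts available at the top of the slider
    print(f"{name}: {default_w}W x {pct}% = {max_w:.0f}W max")
```

Both 390W BIOSes land at roughly 389W at the top of the slider, while the stock table caps out around 364W, which matches the ~365W limit mentioned earlier in the thread.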


----------



## Sheyster

GTANY said:


> I encountered a similar situation with my RTX 3090 Strix OC : even with the default cooler at 100 %, I couldn't game at more than 1995 Mhz in Assassin's Creed Origins in 4k. With my previous 3090 FE, on the same game, I was stable at 2055 Mhz, fans on auto and 2085 Mhz fans at 100 %. On the Strix, the GPU chip quality was far worse than the one on the FE model. *I even repasted the Strix with Kryonaut, inserted washers to improve the GPU - cooler contact and that did not change anything.*
> 
> Consequence : I kept my Strix only 1 day, sold it and bought a 3090 FE again. I am now waiting for my 2nd Founders Edition, hoping that it will be as good as the first one.


Good to know about the highlighted part; I was thinking I might try this but won't bother now. The 3090 FE is just a beautiful card for a pure gamer (no benching): good looking, super quiet while gaming, and many OC quite well. I'm on the fence about which card to keep.


----------



## ArcticZero

Bishi said:


> I also have a new PNY 3090. 8700K @ 5.0ghz on air currently but watercooling parts come in a few days
> @ 120 / 1000 so far. Haven't checked that high memory clock to see if its actually impacting FPS yet.
> Have not tried undervolt yet. Tried all 3 bios with same clocks settings and max power limit
> Stock bios 104% power limit - *12454*
> Galax 350/390W BIOS 111% power limit- *13369*
> Gigabyte 370/390 BIOS 105% power limit - *12949 *(and ofc it breaks some ports)
> For whatever reason NVFLASH isn't letting me to --protecton at the moment, but im not sure if it matters for now.


Nice result! It's at least affirmation that the Galax 390W BIOS is indeed working. Can I confirm this is the same one:

KFA2 RTX 3090 VBIOS

Thanks for the information!


----------



## Bishi

ArcticZero said:


> Nice result! It's at least affirmation that the Galax 390W BIOS is indeed working. Can I confirm this is the same one:
> 
> KFA2 RTX 3090 VBIOS
> 
> Thanks for the information!


yeah thats the one


----------



## chispy

For anyone that needs an installation guide for the Bykski waterblock for the Asus Strix 3090: I had to contact Bykski directly and ask for it, as it is nowhere to be found online, not even in the store. The block came with no instructions, so I had to contact them for it. The part number is Bykski N-AS3090STRIX-X (Asus Strix RTX 3090 only). Here is the instruction manual on how to correctly install the block.

Download pdf here - N-AS3090STRIX-X


----------



## mattxx88

chispy said:


> For anyone that needs an installation guide for the Bykski waterblock for the Asus Strix 3090 , i had to contact Bykski directly and ask for it as there is nowhere to be found online , not even on the store , the block came with no instructions so i had to contact them for it , Part Number is Bykski N-AS3090STRIX-X ( Asus Strix rtx 3090 only ) - Here is the Instruction manual how to correctly install the block.
> 
> Download pdf here - N-AS3090STRIX-X


What contact email did you use? I've written 3 times now to Formulamod without receiving any answer.


----------



## HyperMatrix

dr/owned said:


> The TUF is the card to get this generation IMO. All of the extra VRM adders on top for the Strix don't really matter if you're waterblocking the card anyways and can handle the inefficiency of maxing out the VRM. KPE is just too much of a price adder for not much real world benefit. And get it on the backend for resale $$ lost too.


The TUF is definitely a great card. There's very little difference in overclock performance across the cards on average. The only factor is the power limit which is removed with shunting. And for the price, yes the TUF is a good card to get. But still no better than the FE card. It just irks me that someone would say $1800 for the Strix is a good price, but $1900 for the KPE is too much. Just makes no sense at all. And then to further compound that by stating that the Strix in some way results in better overclocking which it for a fact does not. And if you're staying on air, the Strix has a terrible cooler design for its power limit.

$1500 FE with Shunts = Best Value on Air.
$1567 XC3 with shunts + included water block = Best Value under Water.

If you go above $1500, then you're getting no additional guaranteed value out of it, with the exception of cards that have additional ports on the back. But thing is...when you're dealing with the 3090s, we're already out of the realm of having "value" propositions.

And if you can justify $1850 for a Strix with an EVC2, then I have no idea how you couldn't justify $1899 for a KPE. Perhaps people have forgotten that these cards are sold through EVGA only, and as a result you get 5% off the retail price with an associate's code.

Although honestly....I wish everyone thought the KPE was a bad deal so I could actually buy one.


----------



## TK421

dr/owned said:


> On the 2080 Ti, the Kingpin is going for about $1100 on average, so a loss of $800. A regular 2080 Ti goes for about $850, so maybe a $400-$500 loss. It probably gets even worse the further back you go.


no I mean with all the scalper activity and such


----------



## chispy

mattxx88 said:


> What contact mail did you use? i wrote 3 times now to Formulamod without receiving any answer.


Hello there , here is the information:

Overseas Manager of Bykski
Mob:0086-13362967703
Mail:[email protected]
www.bykski.com


----------



## HyperMatrix

TK421 said:


> no I mean with all the scalper activity and such


I would expect it to be high. The first batch of Kingpin cards is only about 100-120 units, and the next batch will be in the middle of December. So there are not a lot of cards in the wild, and most people buying them will be using them, since they're only being sold to EVGA Elite members at the moment.


----------



## OC2000

Falkentyne said:


> Hi there,
> To use MG842AR paint on shunts that have edges lower than the middle, you need to either:
> 
> 1) Do NOT use a stacked shunt; simply paint the entire shunt with the paint instead. The paint will function between 15 mOhm and 5 mOhm, depending on how thick it is: the thicker, the lower the resistance. The paint may look like a thick layer when wet, but when it dries, it gets much thinner. The difficulty is making sure you get full contact with the edges of the shunts without getting any on the PCB. You can use Kapton tape to seal off the shunt completely when doing this (protip!). Having edges of shunts lower than the middle is sort of a worst-case scenario for shunt stacking--that requires soldering to bridge them, replacing the original shunts, or method #2. As long as your contact is proper at the edges (this is important!), direct painting without stacking shunts will create a new shunt resistance between 3 and 3.75 mOhm (remember, the thicker the paint, the lower the resistance). The only real tricky issue here is trying to get the resistance between the two or three 8-pins close to each other. Either way, as long as you make full contact, you should have 500-600W in the end.
> 
> 2) For stacking: Paint the shunt fully as #1, making 100% sure you have paint covering the edges of the shunt and a nice thick layer with no gaps. Then let it dry for 30 minutes. Then, put a 4 mOhm shunt resistor on top of it by applying a new layer of paint, on the now flat surface and then pressing the shunt down softly. Make sure the new shunt's edges have full contact. This will create a total shunt resistance as if you had originally soldered a 5 mOhm shunt on top of a 5 mOhm shunt, so about 2.5 mOhms to 3 mOhms. Do NOT cover the sides or top of the shunt you are applying.
> 
> One of the 'nice' things about the #2 method is that if you find your new shunt doesn't give you enough reduction in wattage, you can just pry it off and put on another one (provided the paint below it is fully dry).
> 
> Using a multimeter helps take out some of the guesswork from all of this.
> 
> ------
> 
> If the original shunts were fully flat, you wouldn't be having any problems.


I've done both 1 and 2 and neither worked.
When you say edge, does that mean the side as well, or just the entire top, edge to edge?

I'm doing one last try, Lego-blocking the resistors on using as little silver paint as possible. If that doesn't work I'll probably give up.

Update: Last try worked!!

I cleaned all shunts back to stock and put a tiny amount on either end so no paint was connecting to the other side; I had long thought this could be the problem. Then I firmly Lego-blocked the resistors on. Prior to that, I scraped each resistor I was shunting (didn't do that before). Not sure how to work out, though, how much power I have now. I'm using 5 mOhm on all but the PCIe, which is 8 mOhm. The PCIe draws no more than 50W.
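On the "how much power do I have" question: the card still computes current as if the stock shunt were in place, so the real draw scales by stock over effective (parallel) resistance. A rough sketch, assuming 5 mOhm stock shunts (an assumption; measure to confirm):

```python
def parallel(r1, r2):
    """Effective resistance of a stacked shunt (two resistors in parallel)."""
    return (r1 * r2) / (r1 + r2)

def actual_power(reported_w, r_stock, r_added):
    """True draw with r_added stacked on a stock shunt of r_stock (mOhm).
    The card reads the voltage across the shunt and still assumes r_stock,
    so it under-reports by r_stock / effective resistance."""
    return reported_w * r_stock / parallel(r_stock, r_added)

print(actual_power(350, 5, 5))  # 5-on-5 stack: a reported 350W is really 700.0W
print(actual_power(50, 5, 8))   # 8 mOhm on an assumed 5 mOhm PCIe shunt: reported 50W is really 81.25W
```

So with 5 mOhm stacks on the 8-pins, whatever the software reports for those rails is roughly half the real draw.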

Now to tackle getting a backplate on without causing coil whine!


----------



## dr/owned

Jarmel said:


> I’m wondering though about the FTW3 being on water would offset that as the temps would be way cooler regardless of any absurd power draw I might want to do.


Not sure what "that" is in this context, but if you're on a waterblock without the ability to volt-mod (which is easy with Asus because they provide an I2C header to the voltage regulators), you're not going to get anywhere near actually taxing the VRM designs of even the reference PCBs. Even the TUF on paper can handle something like 1000W+ if you're maxing it out: for the GPU core alone it's 10 power stages * 50A max per stage * the voltage (1.1 or 1.2V max). But that's also dumping ~10W of heat per stage, more than triple what it would dump operating in its efficient range of 30A or so.
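The per-stage heat figures in the post can be sketched like this (the efficiency numbers are my assumptions, picked to land near the rough values quoted; real stages vary with load and temperature):

```python
def stage_heat_w(current_a, vout=1.1, efficiency=0.85):
    """Heat dissipated by one power stage: input power minus delivered power."""
    p_out = current_a * vout
    return p_out * (1 / efficiency - 1)

heat_max = stage_heat_w(50)                    # maxed out at 50A, assumed ~85% efficient
heat_eff = stage_heat_w(30, efficiency=0.95)   # in the efficient ~30A range, assumed ~95%
print(f"~{heat_max:.1f}W per stage at 50A vs ~{heat_eff:.1f}W at 30A")
```

At those assumed efficiencies the maxed-out stage dumps roughly 10W versus under 2W in its efficient range, consistent with the "more than triple" in the post.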


----------



## dr/owned

HyperMatrix said:


> I would expect it to be high. The first batch of KingPin cards are only about 100-120 units. Next batch will be middle of December. So not a lot of cards in the wild and most people buying them will be using them since they're only selling them to EVGA elite members at the moment.


It's unfortunate that EVGA didn't make the requirements higher for Elite membership...getting 100 forum posts is not that difficult to do, and they don't have any rules against spamming for post count like you see on other forums. They could have done some sort of queue of previous Ti / Titan / Kingpin owners and sold to them first, under the assumption they're not buying just to scalp. Or taken their list of people that received the XOC Kingpin BIOS and offered to them first, because you know they're actually planning on pushing these cards somewhat.


----------



## HyperMatrix

dr/owned said:


> It's unfortunate that EVGA didn't make the requirements higher for Elite membership...getting 100 forum posts is not that difficult to do, and they don't have any rules against spamming for post count like you see on other forums. They could have done some sort of queue of previous Ti / Titan / Kingpin owners and sold to them first, under the assumption they're not buying just to scalp. Or taken their list of people that received the XOC Kingpin BIOS and offered to them first, because you know they're actually planning on pushing these cards somewhat.


Well problem is as soon as Nvidia took over Titan card sales I stopped buying EVGA. My last purchase from them was 4x GTX 680 2GB and 4x GTX 680 Classified 4GB. So under this system I’d have been SOL. 

Even without the forum post requirement you just need to buy a $160 card that you can turn around and resell for like $120. Not a big financial loss. Even worse, if you’ve already been scalping EVGA cards, you could just register one of them before reselling to get elite status. 

I would have liked some kind of no-resale for 90 days clause or something where the card was registered to you and your name directly, without transferable warranty, and based off the serial number they could tell if you sold it and ban your Name/account from future sales of all cards. Because it would suck for scalpers to steal cards that enthusiasts would actually love to have and use.


----------



## LVNeptune

What the heck is up with 3DMark since the update a few days ago? If they are going to gimp it, they should reset the scoreboard. Everyone I've talked to is having issues. I used to be able to push +165 or +170 with no issues; now I can't even do +150 without Time Spy crashing. A 20k score down to around 17k.


----------



## mirkendargen

chispy said:


> For anyone that needs an installation guide for the Bykski waterblock for the Asus Strix 3090 , i had to contact Bykski directly and ask for it as there is nowhere to be found online , not even on the store , the block came with no instructions so i had to contact them for it , Part Number is Bykski N-AS3090STRIX-X ( Asus Strix rtx 3090 only ) - Here is the Instruction manual how to correctly install the block.
> 
> Download pdf here - N-AS3090STRIX-X


I downloaded this expecting something good...but this was exactly what was on the product page of the Aliexpress shop I ordered mine from  US $30.01 28% OFF|Bykski Water Block use for ASUS RTX 3090 /3080 Strix GPU Card / Full Cover Copper Radiator Block /A RGB / RGB|Fans & Cooling| - AliExpress

The instructions were actually useful because mine came with a ludicrous amount of random extra screws. It seems like they have a generic "hardware baggy" they just throw in everything lol.


----------



## Thebc2

Miguelios said:


> Well, full disclosure.. My card is also shunted w/15mohm resistors.
> 
> Myself and anyone hitting 15k+ on PR have hard-mods.. (Slinky, Carillo, Wyatt.. )
> 
> My card stock was only holding 2085 max, w/ a score in the 14.7k range
> 500w Evga bios got me to 14.95k


No hard mods here yet


Sent from my iPhone using Tapatalk Pro


----------



## dr/owned

mirkendargen said:


> I downloaded this expecting something good...but this was exactly what was on the product page of the Aliexpress shop I ordered mine from  US $30.01 28% OFF|Bykski Water Block use for ASUS RTX 3090 /3080 Strix GPU Card / Full Cover Copper Radiator Block /A RGB / RGB|Fans & Cooling| - AliExpress
> 
> The instructions were actually useful because mine came with a ludicrous amount of random extra screws. It seems like they have a generic "hardware baggy" they just throw in everything lol.


You really only need the EK instructions and the diagram on the Bykski sheet showing how many screws to use, with dashed lines showing which holes they go into/through. [EDIT: I take that back, the EK design assumes you don't have a backplate... with Bykski the two are always paired]










What more do you need to know? Install the 4 spring screws around the GPU die and then the 7 screws with washers for the backplate on top. I'd have to look at the actual block to see why only 2 of those standoffs need washers... kinda weird.


----------



## mirkendargen

dr/owned said:


> You really only need the EK instructions and the diagram on the Bykski of how many screws to use with the dashed lines showing which holes they go into/through.
> 
> View attachment 2466736
> 
> 
> What more do you need to know? install the 4 spring screws around the GPU die and then 7 screws with washers for the backplate on top. I'd have to look at the actual block to see why only 2 of those standoffs need washers...kinda weird.


Because mine came with about 20 of 6 different sizes of screws, and random spacers that weren't actually used anywhere.


----------



## OC2000

LVNeptune said:


> What the heck is up with 3DMark since the update a few days ago? If they are going to gimp it, they should reset the scoreboard. Everyone I've talked to is having issues. I used to be able to push +165 or +170 with no issues; now I can't even do +150 without Time Spy crashing. A 20k score down to around 17k.


My CPU score has gone from 15500 to 13000 in Time Spy. It also says I have an unknown GPU.


----------



## smonkie

A fact: if you manage to get your 3090 to 2000MHz, then you are as good as it gets for pure gaming. Even with a 500W BIOS and water cooling, you are not going to improve it by even a tiny amount. Above 350W, the gains in real fps are laughable.

So just get the cheapest one and call it a day. Benching is fun but it gets old very quickly.


----------



## OC2000

Wouldn’t that comment defeat the purpose of this forum?


----------



## jura11

nordschleife said:


> Hi, I have a reference Galax 3090 SG and been using the Gigabyte 390w bios, pretty much the same score, on air, with 18c ambient:
> 
> 
> 
> https://www.3dmark.com/pr/521904
> 
> 
> 
> There's a Galax 390w bios which might be the same as the KFA2 you are using, I'm thinking of switching to get all DP ports working.
> 
> Have you tried the Gigabyte bios? Could you please share the KFA2 bios?


Hi there 

Yup, I have already tried the Gigabyte 390W BIOS, but sadly that BIOS disables one of the DP ports. The unverified Galax 390W BIOS posted here is one of the better BIOSes for the reference PCB, and it's the one I'm using (KFA2 or Galax is the same, sorry for the misunderstanding or confusion there), although a shunt mod is the only way to go on the RTX 3090 if you want a higher power limit.

With the 390W BIOS you will hit the power limit most of the time; in rendering, where I use my GPUs, it can easily hold 2130-2145MHz, and it is one of the best scoring BIOSes for my Palit RTX 3090 GamingPro.

Hope this helps 

Thanks, Jura


----------



## HyperMatrix

smonkie said:


> A fact: if you manage to get your 3090 to 2000MHz, then you are as good as it gets for pure gaming. Even with a 500W BIOS and water cooling, you are not going to improve it by even a tiny amount. Above 350W, the gains in real fps are laughable.
> 
> So just get the cheapest one and call it a day. Benching is fun but it gets old very quickly.


Uhmmm.... the difference between 2000MHz and 2130MHz+, with the additional thermal/power headroom and cooling that allows higher memory clocks as well, is actually quite decent. If you compare being happy with 2000MHz on air vs. what you can potentially get with shunting/water cooling, you're looking at anywhere from 7% to 11% more performance. We care about that level of performance.


----------



## smonkie

HyperMatrix said:


> Uhmmm.... the difference between 2000MHz and 2130MHz+, with the additional thermal/power headroom and cooling that allows higher memory clocks as well, is actually quite decent. If you compare being happy with 2000MHz on air vs. what you can potentially get with shunting/water cooling, you're looking at anywhere from 7% to 11% more performance. We care about that level of performance.


Could you show me the real-world performance of those 130MHz more? Not benchmarks, just games.


----------



## OC2000

smonkie said:


> Could you show me the real-world performance of those 130MHz more? Not benchmarks, just games.


I think you're preaching to the wrong choir here.


----------



## LVNeptune

OC2000 said:


> My CPU score has gone from 15500 to 13000 in Time Spy. It also says I have an unknown GPU.


this is my point lol


----------



## smonkie

OC2000 said:


> I think you're preaching to the wrong choir here.


But I just wanted to help the user asking what to buy.


----------



## HyperMatrix

smonkie said:


> Could you show me real world performance of those 130Mhz more? Not benchmarks, just games.


Uhmm....clock speed + memory bandwidth = performance. Are you disputing this now?


----------



## mirkendargen

smonkie said:


> But I just wanted to help the user asking what to buy.


With that logic, buy a 3080. Wrong thread here.


----------



## Thebc2

olrdtg said:


> Definitely go with a UPS. I'm currently using an old Monster surge protector/power conditioner that has been working well enough, though I'd love to get a good UPS when I can afford it. Something capable of keeping my PC running for at least 5 - 10 minutes in the event of a power outage.


+1 on the UPS. Two pieces of advice: size your UPS properly, and make sure it supports PFC for the best compatibility with your PSU.

I am running this guy, which is the cheapest unit I could find that supports PFC and can handle up to 1500W. Most of the consumer-grade units I found topped out around 800-900W, which was borderline for my setup.



https://www.amazon.com/gp/product/B0083TXNMM/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B0083TXNMM&linkCode=as2&tag=thetewfikgrou-20&linkId=664383153233d9f9671ab434aaadc0fb




Sent from my iPhone using Tapatalk Pro


----------



## Chamidorix

As someone with a KPE 3090 coming next week, I've been giving it a lot of thought and I'm pretty sure there is a good argument for a case where the Strix will still actually be the best card for 24/7 + watercooling use.

1. The first and primary issue is that the Kingpin does not appear to easily expose power measuring circuitry via 5mΩ shunt resistors. We don't have a high-res PCB shot, but poring over the shots from the LN2 livestream (extremely frustrating that GN deleted their higher-res version, btw) there appears to be only one 5mΩ shunt resistor on the front. We know there are a variety of other ways to measure current draw; shunts are just used for cost, afaik. So the KPE is likely using a higher-cost solution that is not as easily user-modifiable. I welcome someone with more knowledge to chime in. This is the biggest crux of my argument that could be wrong, and I really wish we had a better PCB shot.

2. So, the only practical way to uncap power limit could be via vBios. You can't just simply stack resistors. OC and LN2 stock bios both cap out at 520W with max power slider. On stream they were using the XOC bios, which for 2080ti was only released via TiN on xDevs.com. Now, if you pay attention, on the entire 8 hour stream they never explicitly confirmed that any average user will be able to download this xoc bios, just the classified voltage adjustment tool.

I can see a very possible world where EVGA refuses to release the XOC bios themselves just like every single previous release, and with TiN leaving EVGA we can no longer count on him to release it publicly. Therefore, you will actually be screwed with a Kingpin as you will not be able to shunt resistors to easily raise power limit, and will be stuck on 520W.

Secondly, say the XOC bios is released, or you email Kingpin and get it somehow, it will still have a slew of impractical side effects for 24/7 use, most importantly disabling temperature protection. The classified tool only lets you enable/disable over current protection, as with the 2080ti temp protection will likely be enabled/disabled on vBios.

The overall point is that without a simple way to hardware mod uncapped power, or a 24/7 non-ln2 friendly unlimited power vBios, you will actually be worse off than a strix with shunts + elmor. Sure you have a worse stock cooling solution, sure maybe the kingpin has an extra VRM stage and a few more stock caps, and simpler voltage adjustment that persists, but that is all meaningless if you are capped at 520W on a card that will easily draw 800W above 1.1V on Timespy Extreme, without compromising on a possibly difficult to get LN2 XOC bios that will happily destroy your 2K card if you **** up a thermal pad for 24/7 use.

So, obviously either 1. an easy way to hardware mod or 2. a 24/7-friendly uncapped vBios would alleviate this concern. But in classic 3000-series bullshittery we lack the data from the manufacturer to actually know, and the name of the game is pay first, regret later.

Finally, it is obvious these cards are not cherry-picked and would barely even pass as binned. They danced around every question on the livestream, and the price + time of release after launch is far too short for any meaningful increase over the normal card silicon lottery. You're guaranteed not to get a bottom-30% **** bin (based on Igor's numbers); that's it.


----------



## smonkie

HyperMatrix said:


> Uhmm....clock speed + memory bandwidth = performance. Are you disputing this now?


I'm disputing the usefulness of overclocking on Ampere cards, that's all. The amount of real-world performance you get by raising MHz above 2000 is negligible according to my tests. 130MHz on Ampere is NOWHERE close to 130MHz on Pascal, just to name one.


----------



## HyperMatrix

smonkie said:


> I'm disputing the usefulness of overclocking on Ampere cards, that's all. The amount of real-world performance you get by raising MHz above 2000 is negligible *according to my tests*. 130MHz on Ampere is NOWHERE close to 130MHz on Pascal, just to name one.


*Show me these tests.*

I'm sorry, what are you basing this on? Clock speed is literally the defining factor in performance, as long as you're not facing a memory bandwidth bottleneck. If you drop your clock speed from 2000MHz to 1000MHz, you will have exactly half the FPS in a game. So if you can raise your clock speed, and proportionally increase your memory bandwidth, there is no reason you shouldn't see that exact amount of performance increase, especially at 4K where CPU bottlenecks shouldn't come into play.

There is no such thing as 130MHz on Pascal being a bigger increase than 130MHz on Ampere if both are increasing the clock speed by the same %. So Pascal going from 2000MHz to 2130MHz will show the same % increase you will get with Ampere from 2000MHz to 2130MHz, unless your 2130MHz isn't actually stable and validated.
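For what it's worth, that linear-scaling argument is trivial to sketch. The numbers below are purely illustrative (the 100 FPS baseline is assumed) and the model only holds for a fully GPU-bound load with no CPU or memory-bandwidth bottleneck:

```python
# Illustrative only: estimated FPS scaling with core clock in a purely
# GPU-bound scenario. Baseline figures are assumptions, not measurements.

def estimated_fps(baseline_fps: float, baseline_mhz: float, new_mhz: float) -> float:
    """Linear model: FPS tracks core clock 1:1 when fully GPU-bound."""
    return baseline_fps * (new_mhz / baseline_mhz)

print(estimated_fps(100.0, 2000.0, 2130.0))  # ~106.5, i.e. ~6.5% more FPS
print(estimated_fps(100.0, 2000.0, 1000.0))  # 50.0, i.e. exactly half
```

Any gap between this idealized scaling and measured FPS points to a bottleneck elsewhere (CPU, memory bandwidth) or an OC that isn't actually holding its reported clock.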


----------



## HyperMatrix

Chamidorix said:


> As someone with a KPE 3090 coming next week, I've been giving it a lot of thought and I'm pretty sure there is a good argument for a case where the Strix will still actually be the best card for 24/7 + watercooling use.


I'll trade your KPE for a BNIB Strix 3090 OC.  I'll even throw in an EVC2 for you.


----------



## Jarmel

dr/owned said:


> Not sure what "that" is in this context, but if you're on a waterblock without having the ability to volt-mod (which is easy with Asus because they provide an I2C header to the voltage regulators), you're not going to get anywhere near actually taxing the VRM designs of even the reference PCBs. Even the TUF on paper can handle something like 1000W+ if you're maxing it out (for the GPU core alone it's 10 power stages * 50A max for each stage * the voltage of 1.1 or 1.2V max)... but that's also dumping 10W of heat for each stage, more than triple what it would dump operating in its efficient range of 30A or so.


That was in reference to the Strix having a better PCB. Mainly whether I’m more likely to get higher clocks from the Hybrid compared to the Strix, as the temps would just be significantly lower.


----------



## Chamidorix

HyperMatrix said:


> I'll trade your KPE for a BNIB Strix 3090 OC.  I'll even throw in an EVC2 for you.


Haha, now being so forthcoming with your bin will be your downfall. Appreciate the sentiment, but I'm in the US and have gotten a few pings to buy a Strix already via Distill monitors, so I'd probably just go new if I end up passing on the KPE.


----------



## HyperMatrix

Chamidorix said:


> Haha, now being so forthcoming with your bin will be your downfall. Appreciate the sentiment, but I'm in the US and have gotten a few pings to buy a Strix already via Distill monitors, so I'd probably just go new if I end up passing on the KPE.


No I already sold my card (and told the guy max he can expect is 2130-2160 if he goes under water). I’d be sending you a brand new unopened one. I may be scum but I’m not that scum.


----------



## pantsoftime

Just passing along a tip for people with bad memory OC. My Aorus Xtreme couldn't overclock memory worth a damn (+200 was barely stable) when I first installed it. I had upgraded from a 2080Ti the day before. I found that reinstalling the drivers (same version) magically fixed memory OC and +900 was stable after that. 

I know a lot of guys here are religious about using DDU and all the rest but I'm sure there are other folks like me who only do that as a last resort.


----------



## LVNeptune

Thebc2 said:


> +1 on the UPS. Two pieces of advice: size your UPS properly, and make sure it supports PFC for the best compatibility with your PSU.
> 
> I am running this guy, which is the cheapest unit I could find that supports PFC and can handle up to 1500W. Most of the consumer-grade units I found topped out around 800-900W, which was borderline for my setup.
> 
> 
> 
> https://www.amazon.com/gp/product/B0083TXNMM/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B0083TXNMM&linkCode=as2&tag=thetewfikgrou-20&linkId=664383153233d9f9671ab434aaadc0fb
> 
> 
> 
> 
> Sent from my iPhone using Tapatalk Pro


This is just a general statement, but VA and watts are not the same. People think they are for some reason.
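To spell out the difference: real power (watts) = apparent power (VA) × power factor, and a UPS has separate VA and watt ratings that a load must both fit under. The ratings and the 0.9 power factor below are example numbers, not specs for any particular unit:

```python
# VA (apparent power) vs watts (real power): watts = VA * power factor.
# Example figures only; check your own UPS's VA and watt ratings.

def required_va(load_watts: float, power_factor: float) -> float:
    """Apparent power a load of the given real wattage demands."""
    return load_watts / power_factor

ups_va, ups_w = 1500, 900   # assumed UPS ratings
load_w = 850                # assumed PC draw at the wall

va_needed = required_va(load_w, 0.9)
print(f"{va_needed:.0f} VA needed")              # 944 VA
print(va_needed <= ups_va and load_w <= ups_w)   # True: fits both ratings
```

Note how an 850W load already needs ~944 VA at PF 0.9, which is why sizing off the VA number alone understates what the UPS must supply.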


----------



## Esenel

Evga 500W bios.

Is there a way to recalculate the power consumption on a Strix in HWiNFO, given that the 3rd 8-pin reading shows garbage?

Thanks!


----------



## GQNerd

Thebc2 said:


> No hard mods here yet
> Sent from my iPhone using Tapatalk Pro


Waterblock not considered a hard mod?


----------



## geriatricpollywog

HyperMatrix said:


> *Show me these tests.*
> 
> I'm sorry what are you basing this on? Clock speed is literally the defining factor in performance, as long as you're not facing a memory bandwidth bottleneck. If you drop your clock speed from 2000MHz to 1000MHz, you will have exactly half the FPS in a game. So if you can raise your clock speed, and in a relative manner increase your memory bandwidth, there is no reason that you shouldn't see that exact amount of performance increase especially at 4K where CPU bottlenecks shouldn't come into play.
> 
> There is no such thing as 130MHz in Pascal being a bigger increase than 130MHz in Ampere if both are increasing the clock speed by the same %. So 2000MHz Pascal going to 2130MHz will be the same % increase you will get with Ampere from 2000MHz to 2130MHz, unless your 2130MHz isn't actually stable and validated.


TiN said on the EVGA forums that starting with Turing, there are characteristics besides core clock that define core performance. He didn't go into detail as to what those characteristics are; he just said to focus on the performance, not the clock speed. It makes sense, because my B-stock 2080 Ti Kingpin only sustains 2145 core with the AIO, but outperforms other 2080 Tis at the same clock speed.


----------



## HyperMatrix

0451 said:


> TiN said on the EVGA forums that starting with Turing, there are characteristics besides core clock that define core performance. He didn't go into detail as to what those characteristics are; he just said to focus on the performance, not the clock speed. It makes sense, because my B-stock 2080 Ti Kingpin only sustains 2145 core with the AIO, but outperforms other 2080 Tis at the same clock speed.


So you're saying running at 1000MHz compared to running at 2000MHz on the same system won't be exactly half the FPS when not accounting for Tensor/RT core usage and assuming there wasn't a memory bandwidth limitation at 2000MHz?

I'd be curious to see data around this. My only understanding at the top end was that sometimes the clock speed that's reported in afterburner/precision x wasn't the actual clock speed the card was running at internally. Vince referenced some app called clocksmash or something that would show you the true clock speed. And the problem with difference in performance at the top end was that people weren't validating their OCs. So they'd be running at clocks that looked in software to be higher than they were actually running, and would actually be giving worse performance than if you were to lower your OC a bit. 

So outside of CPU/System Ram/GPU bandwidth bottlenecks, a validated GPU clock speed OC should have linear scaling. Again if there's anything that shows otherwise, I'd be interested in seeing more information on it.


----------



## geriatricpollywog

HyperMatrix said:


> So you're saying running at 1000MHz compared to running at 2000MHz on the same system won't be exactly half the FPS when not accounting for Tensor/RT core usage and assuming there wasn't a memory bandwidth limitation at 2000MHz?


That’s putting a lot of words in my mouth considering I’m just repeating an already cryptic statement by an expert from memory. I’ll pull up the quote when I get back on my PC later.


----------



## HyperMatrix

0451 said:


> That’s putting a lot of words in my mouth considering I’m just repeating an already cryptic statement by an expert from memory. I’ll pull up the quote when I get back on my PC later.


Haha well I'm trying to be clear regarding what you're actually saying because to me, I'm not sure what exactly you're trying to get across. Tin could have been referring to anything. But if his words mean what it seems like you were implying, we should be able to test a GPU bound scenario at 1000MHz vs 2000MHz and observe a difference in FPS scaling. If you weren't memory bandwidth bound at 2000MHz, then at 1000MHz you shouldn't be higher than 50% of the previous FPS.

I would test this myself but I sold my 3090 a few days ago and have Pascal cards in both my desktop and laptop.


----------



## geriatricpollywog

HyperMatrix said:


> Haha well I'm trying to be clear regarding what you're actually saying because to me, I'm not sure what exactly you're trying to get across. Tin could have been referring to anything. But if his words mean what it seems like you were implying, we should be able to test a GPU bound scenario at 1000MHz vs 2000MHz and observe a difference in FPS scaling. If you weren't memory bandwidth bound at 2000MHz, then at 1000MHz you shouldn't be higher than 50% of the previous FPS.
> 
> I would test this myself but I sold my 3090 a few days ago and have Pascal cards in both my desktop and laptop.


I was fortunate enough to secure a 3090 Kingpin, so I’ll test it out. Santa’s elves are still assembling it but should arrive by the 15th. I was one of the first 33.


----------



## HyperMatrix

0451 said:


> I was fortunate enough to secure a 3090 Kingpin, so I’ll test it out. Santa’s elves are still assembling it but should arrive by the 15th. I was one of the first 33.


Grats man. I'm gonna grab an FTW3 HydroCopper and Hybrid next week and then warm up my F5 finger for the next round of Kingpin notifies. Preferably the HC.

Also you may get it sooner than that. Some notifies were going out yesterday for the KPE.


----------



## geriatricpollywog

HyperMatrix said:


> Grats man. I'm gonna grab an FTW3 HydroCopper and Hybrid next week and then warm up my F5 finger for the next round of Kingpin notifies. Preferably the HC.
> 
> Also you may get it sooner than that. Some notifies were going out yesterday for the KPE.


Thanks. Even though I have a loop, the hybrid model is my first choice. It comes with a 360mm rad this time and heat sinks for the VRM which will come in handy when I go LN2. For the 2080ti, EVGA said there would be a hydrocopper version but ended up releasing a waterblock for the hybrid cards instead.


----------



## dr/owned

Chamidorix said:


> As someone with a KPE 3090 coming next week, I've been giving it a lot of thought and I'm pretty sure there is a good argument for a case where the Strix will still actually be the best card for 24/7 + watercooling use.
> 
> 1. The first and primary issue is that the Kingpin does not appear to easily expose power measuring circuitry via 5mΩ shunt resistors. We don't have a high-res PCB shot, but poring over the shots from the LN2 livestream (extremely frustrating that GN deleted their higher-res version, btw) there appears to be only one 5mΩ shunt resistor on the front. We know there are a variety of other ways to measure current draw; shunts are just used for cost, afaik. So the KPE is likely using a higher-cost solution that is not as easily user-modifiable. I welcome someone with more knowledge to chime in. This is the biggest crux of my argument that could be wrong, and I really wish we had a better PCB shot.


They're very likely expecting you to use an EVC2 / Pi / EVBOT to interface with the VRM instead of dealing with the shunts. The power stages themselves probably have the capability of monitoring their own current output and feeding it back to the VRM controller for load balancing. Shunts are just a cheap way of doing the job upstream (and thank god for that, since it makes it easy to get around TDP limits).

I just don't like the whole song and dance they want to put you through. You're spending $2k on a GPU, you have to beg them for a crazy BIOS, and they don't sell an EVBOT anymore either. Hell, why even bother selling the thing with a hybrid cooler? Sell a bare PCB, offer up a 3D model of it, and let the hyper-enthusiasts go nuts.


----------



## edsontajra

Hey guys, today I bought an Asus TUF RTX 3090. In Brazil the availability of these cards is really bad; it was the best option I found. Was it a good choice?
Is this model about average compared to the other custom 3090s?

If anyone has any suggestions for an OC configuration, they are very welcome.

Thanks!!


----------



## dr/owned

0451 said:


> TiN said on the EVGA forums that starting with Turing, there are characteristics besides core clock that define core performance. He didn't go into detail as to what those characteristics are; he just said to focus on the performance, not the clock speed. It makes sense, because my B-stock 2080 Ti Kingpin only sustains 2145 core with the AIO, but outperforms other 2080 Tis at the same clock speed.


I can speak from the Intel side of things: there are various trimming values fused into the silicon that are unique per die. There's so much power and performance gating going on that every little section of the chip has its own PLL clock with a different trim value... separate from the base clock and multiplier.


----------



## ArcticZero

So I flashed the beta Galax/KFA2 390W BIOS on my PNY card and got far better results on Time Spy, among other things. My main issue now is it can't seem to verify my score anymore for some reason:










Any idea why this is happening? I've tried DDU as well just to be sure I have a clean driver install, but no go.

Thanks!


----------



## bmgjet

ArcticZero said:


> So I flashed the beta Galax/KFA2 390W BIOS on my PNY card and got far better results on Time Spy, among other things. My main issue now is it can't seem to verify my score anymore for some reason:
> Any idea why this is happening? I've tried DDU as well just to be sure I have a clean driver install, but no go.
> 
> Thanks!


Completely uninstall 3DMark and the 3DMark HWiNFO driver.
Restart the computer and reinstall it.
Restart the computer again and it should be good. Mine basically dies like that every time 3DMark updates.

----
Just tried that BIOS as well.
Max uncorrected value it's pulling on my XC3 is 379W, vs the Gigabyte one which hits 388W.
Overclocks a little worse: 2125MHz crashes in Heaven, vs the Gigabyte one that does 2130MHz stable for an hour.
Looks like voltage is capped at 1.075V instead of 1.093V, since the curve is a little different.
All DP ports work.
Idles 5W less.
I'll keep trying it out for the next few days but will probably go back to the Gigabyte BIOS.


----------



## cletus-cassidy

HyperMatrix said:


> Grats man. I'm gonna grab an FTW3 HydroCopper and Hybrid next week and then warm up my F5 finger for the next round of Kingpin notifies. Preferably the HC.
> 
> Also you may get it sooner than that. Some notifies were going out yesterday for the KPE.


I also have a dog Strix that hits 2130 max on water. Do we know if there will be FTW3 HydroCoppers available next week?


----------



## HyperMatrix

cletus-cassidy said:


> I also have a dog Strix that hits 2130 max on water. Do we know if there will be FTW3 HydroCoppers available next week?


Only through EVGA and only if you got on the Notify queue 2 weeks ago. I was able to get on the early reserve list for the FTW3 Ultra HydroCopper, Hybrid, as well as the XC3 HydroCopper. They'll be shipping first week of December according to EVGA. Don't think I'm going to grab the XC3 though. I just panic reserved whatever I could. Haha.


----------



## cletus-cassidy

HyperMatrix said:


> Only through EVGA and only if you got on the Notify queue 2 weeks ago. I was able to get on the early reserve list for the FTW3 Ultra HydroCopper, Hybrid, as well as the XC3 HydroCopper. They'll be shipping first week of December according to EVGA. Don't think I'm going to grab the XC3 though. I just panic reserved whatever I could. Haha.


Damn. Missed that entirely. Guess I’ll be waiting a while.


----------



## ArcticZero

bmgjet said:


> Completely uninstall 3DMark and the 3DMark HWiNFO driver.
> Restart the computer and reinstall it.
> Restart the computer again and it should be good. Mine basically dies like that every time 3DMark updates.
> 
> ----
> Just tried that BIOS as well.
> Max uncorrected value it's pulling on my XC3 is 379W, vs the Gigabyte one which hits 388W.
> Overclocks a little worse: 2125MHz crashes in Heaven, vs the Gigabyte one that does 2130MHz stable for an hour.
> Looks like voltage is capped at 1.075V instead of 1.093V, since the curve is a little different.
> All DP ports work.
> Idles 5W less.
> I'll keep trying it out for the next few days but will probably go back to the Gigabyte BIOS.


I tried that, but it still can't seem to validate my result. No idea why it says Unknown GPU (0x). I suppose I'll try flashing the Gigabyte BIOS and deal with one DP port not working, since I can always hook that monitor up to my iGPU anyway.


----------



## Thanh Nguyen

Does anyone here have the Aquacomputer block for the RTX 3090? They sold out of the active backplate, so can I use another brand's backplate?


----------



## defiledge

Has anyone tried a 5mΩ shunt mod on the PCIe slot? Any problems with the increased power draw, since the PCIe slot is only rated for 75W?


----------



## motivman

defiledge said:


> Has anyone tried a 5mΩ shunt mod on the PCIe slot? Any problems with the increased power draw, since the PCIe slot is only rated for 75W?


A lot of people are running 5mΩ on the PCIe slot with no issues. I personally went with 10mΩ, but if I could do it again, I would go with 5mΩ.
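For anyone new to the math behind this: stacking a second resistor on top of a shunt lowers the resistance the controller senses across, so it under-reads power by that ratio. A rough sketch, assuming a 5mΩ stock shunt on the slot rail (the stock value varies by card, so check your own board):

```python
# Shunt-stacking back-of-envelope (assumption: 5 mOhm stock shunt).
# The controller still divides the sensed voltage by the stock resistance,
# so real current exceeds reported current by stock_R / parallel_R.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def real_power_multiplier(stock_mohm: float, stacked_mohm: float) -> float:
    """Real watts drawn per watt the controller reports."""
    return stock_mohm / parallel(stock_mohm, stacked_mohm)

print(real_power_multiplier(5, 5))   # 2.0  -> a 75W slot reading is really ~150W
print(real_power_multiplier(5, 10))  # ~1.5 -> a 75W reading is really ~112W
```

Which is exactly why a 5mΩ stack on the slot rail raises the 75W question: at the same reported limit, the real slot draw roughly doubles.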


----------



## defiledge

motivman said:


> A lot of people are running 5mΩ on the PCIe slot with no issues. I personally went with 10mΩ, but if I could do it again, I would go with 5mΩ.


Why not, then? And who are these people? I'm wondering if there are long-term effects of a 150W draw from the PCIe slot. Currently I'm drawing 450W on air, getting around 2070MHz and a 14k PR score; how good is that?


----------



## TK421

Does the FE have fuses on the power connector?

Also, if overclocking memory, what kind of starting clock is good?


----------



## ArcticZero

Bit the bullet and flashed the Gigabyte OC BIOS and lost that one DP port. That finally got it to validate properly and break a 20k graphics score.

https://www.3dmark.com/3dm/53677377?

Temps are high because I attempted a vertical GPU mount on my Evolv X (not ideal for >2 slot card), so will most likely switch it back to horizontal.

105% Power
+150 Core
+250 Memory

Definitely happy so far. Thanks to all who helped.


----------



## DrunknFoo

motivman said:


> A lot of people are running 5mΩ on the PCIe slot with no issues. I personally went with 10mΩ, but if I could do it again, I would go with 5mΩ.


Tried 5mΩ, went back to 8mΩ; didn't see a rise in power draw on the Kill A Watt. From my testing, that comes down to temperatures I can't control...

Better to get confirmed answers from those with at least a waterblock rather than on air, I think.

(Ftw3 card)


----------



## Falkentyne

defiledge said:


> Why not, then? And who are these people? I'm wondering if there are long-term effects of a 150W draw from the PCIe slot. Currently I'm drawing 450W on air, getting around 2070MHz and a 14k PR score; how good is that?


Your PCIe slot is NOT going to draw 150 watts!

FE cards use more like a 17%/41.5%/41.5% ratio or something similar, which is better than the 20%/40%/40% some two-8-pin cards use. If you were getting close to 150 watts through the slot, you would be at >800 watts total, with 300+ watts through each 8-pin. You wouldn't be able to cool that, and you would have other problems to worry about too.
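A quick back-of-envelope on those ratios (the 17%/41.5%/41.5% split is taken from the post above and is approximate, not a measured spec):

```python
# Approximate per-rail draw on an FE-style board, using the rough
# 17% slot / 41.5% per-8-pin split mentioned above. Assumed figures only.

def rail_draw(total_watts: float, ratios=(0.17, 0.415, 0.415)):
    """Split total board power across the slot and two 8-pin rails."""
    return [total_watts * r for r in ratios]

slot, pin_a, pin_b = rail_draw(450)
print(f"slot {slot:.1f}W, 8-pins {pin_a:.1f}W each")      # slot 76.5W, 8-pins 186.8W each
print(f"total needed for 150W slot draw: {150 / 0.17:.0f}W")  # 882W
```

So at 450W total board power the slot sits only barely above its 75W rating, and you'd need nearly 900W through the card before the slot approached 150W.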


----------



## shadow85

I have a standard FTW3 Ultra 3090 and get average clock speeds of 1850-1950 MHz. Temps reach 70-75°C in cooler weather, and currently, in the hotter weather here in Australia, temps are reaching 75-80°C.

Are these normal results?


----------



## DrunknFoo

shadow85 said:


> I have a standard FTW3 Ultra 3090 and get average clock speeds of 1850-1950 MHz. Temps reach 70-75°C in cooler weather, and currently, in the hotter weather here in Australia, temps are reaching 75-80°C.
> 
> Are these normal results?


sounds about right


----------



## defiledge

shadow85 said:


> I have a standard FTW3 Ultra 3090 and get average clock speeds of 1850-1950 MHz. Temps reach 70-75°C in cooler weather, and currently, in the hotter weather here in Australia, temps are reaching 75-80°C.
> 
> Are these normal results?


post some scores


----------



## DrunknFoo

Now that I think about it, is there anyone here with a shunted EVGA card who is able to pull 850 W+ based on readings from their Kill A Watt or wall meter (minus CPU draw)?


----------



## dr/owned

DrunknFoo said:


> Now that I think about it, is there anyone here with a shunted EVGA card who is able to pull 850 W+ based on readings from their Kill A Watt or wall meter (minus CPU draw)?


There is no way without volt modding. Aren't all these cards factory capped at 1.10V?


----------



## TK421

dr/owned said:


> There is no way without volt modding. Aren't all these cards factory capped at 1.10V?


1.093


----------



## Thebc2

Miguelios said:


> Waterblock not considered a hard mod?


Not that I have ever seen; it's typically reserved for power mods, no?


Sent from my iPhone using Tapatalk Pro


----------



## Chamidorix

HyperMatrix said:


> Vince referenced some app called clocksmash or something that would show you the true clock speed.


Yea, Smashclocks is what it is called. I've still been searching for it, can't seem to actually find it anywhere, just a few scattered references. There is also thermspy, but it doesn't seem to play nicely with newer GPUs. Would hopefully give us some insight into the inner-die clocks as Dr Owned was describing and Vince alluded to (getting "drivered" where these inner die clocks get lowered even though the main reference clock stays the same).



dr/owned said:


> They're very likely expecting you to use either an EVC2 / Pi / EVBOT to interface with the VRM instead of dealing with the shunts. The power stages themselves probably have the capability of monitoring their own current output and feeding it back to the VRM for load balancing. Shunts are just a cheapie way of doing the job upstream (and thank god for that so it's easy to get around TDP limits)
> 
> I just don't like the whole song and dance they want to put you through. You're spending $2k for a GPU and you have to beg them to give you a crazy bios and they don't sell an EVBOT anymore either. Hell why even bother selling the thing with a hybrid cooler? Sell a bare PCB and offer up a PCB 3D model and let the hyper enthusiasts go nuts on it.


My understanding was that you can't really do fast load balancing from power stage current detection unless you use a separate circuit with shunts, though? Like with AMD cards - they do balance to an extent, but my understanding is that since it is digital (on a smart VRM), it is very slow and non-dynamic compared to an analog solution like Nvidia employs with shunts + dedicated analog power balance chips. I could be way off base though; I certainly defer to your expertise in this case.

Yea, the Kingpin really seems to occupy a strange spot where it is trying to cater to both the casual user and the LN2 enthusiast at the two extremes, while segregating out the ambient enthusiast in the middle.


----------



## olrdtg

defiledge said:


> has anyone tried a 5 mOhm shunt mod on the PCIe slot? any problems with the increased power draw? since the PCIe slot is only rated for 75W


I replaced all of my shunt resistors with 3 mOhm, removing all 6 of the original resistors, including the PCIe slot's, so I'm currently running a 3 mOhm on my slot. The peak draw is around 128 W.
If you stack a 5 mOhm on your PCIe slot shunt resistor, just keep a few things in mind:
If you have multiple PCIe cards, M.2 SSDs, and a lot of fans plugged into the motherboard, you might want to reduce the number of components pulling +12 V power from your 24-pin cable, as you risk melting it. If your motherboard has an auxiliary power input for the PCIe slots (usually a Molex, SATA, or 6-pin PCIe port on the motherboard), plug into that so that your PCIe slots take load off the 24-pin cable.

Personally, if you are stacking 5 mOhm on the others, I'd stack a higher value on the PCIe slot shunt resistor, for longevity's sake.

ALSO, be careful shunting the PCIe slot, as some 3090s have fuses connected there, so too much power draw could blow the fuse, rendering your card useless. Though if that did occur, you could just put some solder over the fuse to bridge it, fixing the card.
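For anyone weighing the replace-vs-stack options above, here is a minimal Python sketch of how a shunt change skews the card's telemetry. It assumes 5 mOhm stock shunts (common on these cards, but verify your own) and that the controller converts shunt voltage to current using the stock value:

```python
def parallel(r1, r2):
    """Effective resistance of a piggybacked (parallel) shunt pair."""
    return r1 * r2 / (r1 + r2)

def actual_from_reported(reported_w, r_stock=5.0, r_effective=5.0):
    # The controller assumes r_stock, so it under-reads
    # by a factor of r_stock / r_effective.
    return reported_w * r_stock / r_effective

# Full replacement with 3 mOhm, as described above:
print(actual_from_reported(75, r_effective=3.0))             # 125.0 W actual
# Stacking another 5 mOhm on top of the stock 5 mOhm:
print(actual_from_reported(75, r_effective=parallel(5, 5)))  # 150.0 W actual
```

That 3 mOhm replacement turning a 75 W slot reading into ~125 W actual lines up with the ~128 W peak quoted above.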


----------



## olrdtg

TK421 said:


> does FE have fuses on the power connector?
> 
> 
> also if overclocking memory what kind of starting clock is good?


No.

For mem, start at +100 and work your way up until you notice slowdowns and instabilities.



Falkentyne said:


> Your PCIe slot is NOT going to draw 150 watts!
> 
> FE cards use more like a 17%/41.5%/41.5% ratio or something similar, which is better than the 20%/40%/40% some two-8-pin cards use. If you were getting close to 150 watts through the slot, you would be at >800 watts total, with 300+ watts through each 8-pin. You wouldn't be able to cool that, and you would have other problems to worry about as well.


Mine peaks at almost 130 W from the slot. I've seen it get as high as 128 W, but it usually peaks around 122 W. If the right resistor is used when shunting, it is absolutely possible to pull 150 W from the PCIe slot, though the motherboard and 24-pin cable likely would not survive for very long.



TK421 said:


> 1.093


Actually 1.1 V - but you have to meet some operating conditions: the GPU must be below, I think, 56°C or something, and you have to set core voltage to 100% to allow the card to use its extra voltage step. My 3090 FE is usually around 47 to 52°C under load and is frequently at 1100 mV, but generally settles at 1093 mV. So the card can hit 1.1 V.


----------



## geriatricpollywog

HyperMatrix said:


> Haha well I'm trying to be clear regarding what you're actually saying because to me, I'm not sure what exactly you're trying to get across. Tin could have been referring to anything. But if his words mean what it seems like you were implying, we should be able to test a GPU bound scenario at 1000MHz vs 2000MHz and observe a difference in FPS scaling. If you weren't memory bandwidth bound at 2000MHz, then at 1000MHz you shouldn't be higher than 50% of the previous FPS.
> 
> I would test this myself but I sold my 3090 a few days ago and have Pascal cards in both my desktop and laptop.


I found the quote. Mind you, this is for the Turing version.



EVGA NVIDIA GeForce RTX 2080 Ti KINGPIN is HERE!



TiN from EVGA:
_To end the speculation on GPU binning - yes, the chips that go into these KPE are cherry picked. Every single one of them. 

We wanted to make all KPE cards behave very nice, especially with OC. We don't advertise specific XXXX MHz clock, because there are multiple performance criteria on RTX TU102 GPUs and it's not all black and white like on 1080Ti or previous generation chips. As somebody mentioned earlier - keep an eye on *performance* results, not just raw frequency.









Combined with hybrid cooler, better VRM w/o bogged power limiting and Samsung memory, these should easily give out of the box results that otherwise on other cards may require HC blocks, hours if not days of fiddling with settings and big GPU lottery luck._


----------



## DrunknFoo

dr/owned said:


> There is no way without volt modding. Aren't all these cards factory capped at 1.10V?


1.1 V is the hard limit for the card. It can bounce on and off this value, but yeah, it's pretty much impossible to maintain.

I have seen my total package spike to 940-980 W on the meter. That said, in controlled FurMark testing, subtracting an average of 115 W system draw (65 W of this being the CPU), I have been pulling anywhere from 685 W average up to peaks of 710 W using different combinations of 5 and 8 mOhm shunts. The odd run or two may trigger the voltage flag once or twice, other times continuously... A case temp of ~14°C results in the GPU averaging 78-82°C during a 3-minute run.
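For comparing wall readings like these, the arithmetic is simple enough to sketch; the 88% PSU efficiency below is an assumed figure, so substitute your own unit's efficiency at this load:

```python
def gpu_draw_estimate(wall_w, system_w, psu_efficiency=0.88):
    """Estimate DC-side GPU draw from an AC wall-meter reading:
    subtract the rest of the system, then remove PSU conversion losses."""
    return (wall_w - system_w) * psu_efficiency

print(gpu_draw_estimate(940, 115))  # ~726 W at the assumed 88% efficiency
```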

Guess my question is, is there anyone with a block able to pull power within 5-7% of their calculated shunt setup?


----------



## HyperMatrix

0451 said:


> I found the quote. Mind you, this is for the Turing version.
> 
> 
> 
> EVGA NVIDIA GeForce RTX 2080 Ti KINGPIN is HERE!
> 
> 
> 
> TiN from EVGA:
> _To end the speculation on GPU binning - yes, the chips that go into these KPE are cherry picked. Every single one of them.
> 
> We wanted to make all KPE cards behave very nice, especially with OC. We don't advertise specific XXXX MHz clock, because there are multiple performance criteria on RTX TU102 GPUs and it's not all black and white like on 1080Ti or previous generation chips. As somebody mentioned earlier - keep an eye on *performance* results, not just raw frequency.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Combined with hybrid cooler, better VRM w/o bogged power limiting and Samsung memory, these should easily give out of the box results that otherwise on other cards may require HC blocks, hours if not days of fiddling with settings and big GPU lottery luck._


Yes, this matches up with what Steve and Vince have said. That's not to say that frequency isn't important. But, similar to the memory on these cards, you need to validate your OC, because at some point the number you picked may be higher while the result you get is worse. There is no reason to believe that a 2205 MHz validated OC wouldn't be ~10% faster than a 2000 MHz OC. However, there's a possibility that on your specific chip/card you will see a 2205 MHz OC in Afterburner but will actually perform worse than if you were, for example, at 2190 MHz or 2175 MHz. So you definitely have to keep benching with each change to see if you are getting an increase from your OC. And if that increase is seen in the benchmarks, there's absolutely zero reason to believe that the same type of performance increase wouldn't be seen in a game.


----------



## TK421

olrdtg said:


> No.
> 
> For mem, start at +100 and work your way up until you notice slowdowns and instabilities.
> 
> 
> 
> Mine peaks at almost 130 W from the slot. I've seen it get as high as 128 W, but it usually peaks around 122 W. If the right resistor is used when shunting, it is absolutely possible to pull 150 W from the PCIe slot, though the motherboard and 24-pin cable likely would not survive for very long.
> 
> 
> 
> Actually 1.1 V - but you have to meet some operating conditions: the GPU must be below, I think, 56°C or something, and you have to set core voltage to 100% to allow the card to use its extra voltage step. My 3090 FE is usually around 47 to 52°C under load and is frequently at 1100 mV, but generally settles at 1093 mV. So the card can hit 1.1 V.


oh huh I didn't know about this


what's the expected average stable for mem?


----------



## mirkendargen

0451 said:


> I found the quote. Mind you, this is for the Turing version.
> 
> 
> 
> EVGA NVIDIA GeForce RTX 2080 Ti KINGPIN is HERE!
> 
> 
> 
> TiN from EVGA:
> _To end the speculation on GPU binning - yes, the chips that go into these KPE are cherry picked. Every single one of them.
> 
> We wanted to make all KPE cards behave very nice, especially with OC. We don't advertise specific XXXX MHz clock, because there are multiple performance criteria on RTX TU102 GPUs and it's not all black and white like on 1080Ti or previous generation chips. As somebody mentioned earlier - keep an eye on *performance* results, not just raw frequency.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Combined with hybrid cooler, better VRM w/o bogged power limiting and Samsung memory, these should easily give out of the box results that otherwise on other cards may require HC blocks, hours if not days of fiddling with settings and big GPU lottery luck._


Call me a cynic, but when I read that with zero verifiable details, my thought is that they can't bin the GPUs (or aren't bothering to) and are just saying "oh, they'll perform better even if the clocks are lower, we promise!" And if they don't, they have CPUs, memory, riser cables, etc. as scapegoats, since there's no way to prove otherwise with so many variables going into "performance".


----------



## HyperMatrix

mirkendargen said:


> Call me a cynic


Cynic.


----------



## geriatricpollywog

mirkendargen said:


> Call me a cynic, but when I read that with zero verifiable details, my thought is that they can't bin the GPUs (or aren't bothering to) and are just saying "oh, they'll perform better even if the clocks are lower, we promise!" And if they don't, they have CPUs, memory, riser cables, etc. as scapegoats, since there's no way to prove otherwise with so many variables going into "performance".


TiN is really a hardware enthusiast, not some "they" who will make false claims to boost sales. He goes above and beyond writing technical articles about hardware on his site xdevs.com and even continues to help people on the EVGA forums after leaving the hardware industry altogether and having his moderator privileges taken away. I am naturally a cynic and I understand why you would be, but I'll take TiN's word for it. Still, I have no idea what the "criteria" are, but it might have something to do with the new rails on Turing and Ampere.


----------



## olrdtg

TK421 said:


> oh huh I didn't know about this
> 
> 
> what's the expected average stable for mem?


It all depends on the silicon quality. My memory OCs up to +1200; I've seen some people able to go to like +1400, and some who can't go higher than +600 MHz on the memory.
When testing your memory overclock, watch for both instability AND performance. Sometimes with GDDR6 and GDDR6X, if you put in a high OC it may be stable, but performance might actually degrade significantly.


----------



## Zogge

To an earlier question: yes you can add a sensor in HWinfo and calculate the power draw through a formula. PM me if you need support on that.


----------



## whaleboy_4096

Hello all...

I have one of the FTW3 Ultras that, like many, seems to be capped at 450 W, even with the EVGA 500 W BIOS. Current evidence supports the theory that cards coming out of China have no issue with that BIOS, but the ones from Taiwan do. Of course, I have a TW card. I plan on watercooling my system (CPU and GPU) and have all that is needed except the GPU block, which is on order. I have two questions... Has anyone here shunted their FTW3 Ultra and been happy with the results? And if so, did your card work with the 500 W BIOS unmodded? If I go to the effort of doing the shunt mod, I want to make sure that the affected EVGA cards aren't somehow hobbled to the point where the shunt mod isn't very effective.

I also have a more general question regarding shunt modding. I (like all of us here) would like more performance from the card. I feel I don't have a particularly good card/chip, just average... more likely below average, to be honest. I'm not looking to break any records; I just want a good-performing card in games and VR. I know the question "is it worth the effort to shunt mod" is subjective, but I am wondering what I would get out of it. I'd like to keep the power at safe levels for water and not go crazy... The current plan is to piggyback 8s onto the existing 5s, and maybe replace the PCIe shunt with a 4 (like I said, I want to keep the power at safe levels). Will this at least give me a small boost, or am I looking at only a couple percent? Like I said, I am not looking to top the benchmark charts or anything, but the current power limit drives me crazy and is severely holding the card back.
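For what it's worth, the power-limit multipliers for that plan can be estimated as below; this assumes the stock shunts really are 5 mOhm, as stated, and ignores telemetry rounding:

```python
def parallel(r1, r2):
    """Piggybacking a shunt puts it in parallel with the stock one."""
    return r1 * r2 / (r1 + r2)

STOCK = 5.0  # mOhm stock shunt value, per the post

piggyback_8 = STOCK / parallel(STOCK, 8.0)  # 8 mOhm stacked on each 5 mOhm
replace_4 = STOCK / 4.0                     # PCIe slot shunt swapped for 4 mOhm

print(round(piggyback_8, 3))  # 1.625 -> ~62% more headroom on those rails
print(replace_4)              # 1.25  -> ~25% more on the slot rail
```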

Thoughts? Thanks


----------



## dr/owned

whaleboy_4096 said:


> wondering what I would get out of it. I'd like to keep the power at safe levels for water and not go crazy... The current plan is to piggyback 8s onto the existing 5s, and maybe replace the PCIe shunt with a 4 (like I said, I want to keep the power at safe levels). Will this at least give me a small boost, or am I looking at only a couple percent? Like I said, I am not looking to top the benchmark charts or anything, but the current power limit drives me crazy and is severely holding the card back.
> 
> Thoughts? Thanks


The biggest benefit is to non-EVGA owners who don't have an XOC BIOS option and don't want to give up display outputs flashing random other BIOSes to their cards. Without touching the voltage, these cards seem to top out around 500 W-ish, so if you're already at 450 I don't think you're going to see more than 5% real performance gain, at best. I'd probably invest more in actively cooling the backplate than shunt modding if I were running EVGA.


----------



## DrunknFoo

dr/owned said:


> The biggest benefit is to non-EVGA owners who don't have an XOC BIOS option and don't want to give up display outputs flashing random other BIOSes to their cards. Without touching the voltage, these cards seem to top out around 500 W-ish, so if you're already at 450 I don't think you're going to see more than 5% real performance gain, at best. I'd probably invest more in actively cooling the backplate than shunt modding if I were running EVGA.


my ftw3


https://www.3dmark.com/pr/511706


----------



## mirkendargen

0451 said:


> TiN is really a hardware enthusiast, not some "they" who will make false claims to boost sales. He goes above and beyond writing technical articles about hardware on his site xdevs.com and even continues to help people on the EVGA forums after leaving the hardware industry altogether and having his moderator privileges taken away. I am naturally a cynic and I understand why you would be, but I'll take TiN's word for it. Still, I have no idea what the "criteria" are, but it might have something to do with the new rails on Turing and Ampere.


I feel like someone who was really purely out to do good for the community would have given some details, not just "they'll be good, trust us!" Maybe he did later, outside of that quote.

With that said, one difference I could imagine is the presence of RT and tensor cores starting with Turing; perhaps they are sometimes unstable at a lower frequency in a way you won't notice if you're just benching Time Spy or something like that.


----------



## Apecos

Hello friends, I want to share my results in this forum, since I read everything here and learned a lot about how the 3090 line works.
I have a 3090 VENTUS 3X OC; this card has a 350 W max power limit. I flashed the GALAX/KFA2 (390 W) BIOS and only gained 1 fps in most games (stock MSI vs. stock GALAX).
I went back to the original BIOS because the risk is too high for only 1 fps more.

The good things about the GALAX BIOS over the stock MSI one: the fans can ramp to full RPM (up to 3500 RPM), versus only 2200 RPM on the stock BIOS.
The second good thing about the GALAX BIOS is a lower idle GPU temperature (44°C idle with the MSI BIOS vs. 32°C idle with the GALAX BIOS), both with fan stop.


Thanks


----------



## OC2000

I've spent all morning working on the coil whine on the 3090 Strix with the EK water block and have come to the conclusion that there is a major design fault in the backplate.
With no backplate, the 3090's coil whine is whisper quiet. With the backplate on and no thermal pads on the back of the VRMs, the coil whine is whisper quiet. As soon as the thermal pad is applied to the VRMs on the back, the coil whine is horrendous.

I am not sure this can be fixed, so I'll pre-order a different water block for now. Will not adding thermal pads on the back side of the VRMs cause any issue? The RAM thermal pads do not affect the coil whine, so it is still beneficial to have the backplate on.


----------



## dr/owned

OC2000 said:


> I've spent all morning working on the coil whine on the 3090 Strix with the EK water block and have come to the conclusion that there is a major design fault in the backplate.
> With no backplate, the 3090's coil whine is whisper quiet. With the backplate on and no thermal pads on the back of the VRMs, the coil whine is whisper quiet. As soon as the thermal pad is applied to the VRMs on the back, the coil whine is horrendous.
> 
> I am not sure this can be fixed, so I'll pre-order a different water block for now. Will not adding thermal pads on the back side of the VRMs cause any issue? The RAM thermal pads do not affect the coil whine, so it is still beneficial to have the backplate on.


There isn't that huge a benefit to heatsinking through the PCB... as you can see here, the memory chips are dumping in way more heat:










You could try adding weight to the backplate to change its resonant frequency.


----------



## jura11

OC2000 said:


> I've spent all morning working on the coil whine on the 3090 Strix with the EK water block and have come to the conclusion that there is a major design fault in the backplate.
> With no backplate, the 3090's coil whine is whisper quiet. With the backplate on and no thermal pads on the back of the VRMs, the coil whine is whisper quiet. As soon as the thermal pad is applied to the VRMs on the back, the coil whine is horrendous.
> 
> I am not sure this can be fixed, so I'll pre-order a different water block for now. Will not adding thermal pads on the back side of the VRMs cause any issue? The RAM thermal pads do not affect the coil whine, so it is still beneficial to have the backplate on.


I'm running a Bykski waterblock and their backplate on a Palit RTX 3090 GamingPro now, and at full load I literally can't hear the coil whine; temperatures are awesome.

I also still have an Asus RTX 2080 Ti Strix with an EK Vector RTX 2080 Ti Strix waterblock, on which I could previously hear coil whine under high load; I replaced the thermal pads and everything seems fine now, no coil whine.

Hope this helps.

Thanks, Jura


----------



## OC2000

dr/owned said:


> There isn't that huge a benefit to heatsinking through the PCB... as you can see here, the memory chips are dumping in way more heat:
> 
> View attachment 2466853
> 
> 
> You could try adding weight to the backplate to change its resonant frequency.


Thanks for the thermal image.
I am putting an MP5works on; maybe that will help. I'll test that later. I have ordered the Aqua Computer NEXT block (says 2 months' wait), so I will make do with the EK for now.




jura11 said:


> I'm running a Bykski waterblock and their backplate on a Palit RTX 3090 GamingPro now, and at full load I literally can't hear the coil whine; temperatures are awesome.
> 
> I also still have an Asus RTX 2080 Ti Strix with an EK Vector RTX 2080 Ti Strix waterblock, on which I could previously hear coil whine under high load; I replaced the thermal pads and everything seems fine now, no coil whine.
> 
> Hope this helps.
> 
> Thanks, Jura


I tried both the EK stock pads and Thermalright 12 W/mK pads. I even cut them down further so they just covered the VRMs, with no success.


----------



## warbucks

DrunknFoo said:


> Tried 5 mOhm, went back to 8 mOhm; didn't see a rise in power draw on the Kill A Watt. From my testing, this is due to temperatures that I can't control...
> 
> Better to get confirmed answers from those with at least a water block rather than on air, I think.
> 
> (FTW3 card)


Are you running 8 mOhm on all the shunts or just the PCIe one on your FTW3?


----------



## Sheyster

whaleboy_4096 said:


> Hello all...
> 
> I have one of the FTW3 Ultras that, like many, seems to be capped at 450 W, even with the EVGA 500 W BIOS. Current evidence supports the theory that cards coming out of China have no issue with that BIOS, but the ones from Taiwan do. Of course, I have a TW card. I plan on watercooling my system (CPU and GPU) and have all that is needed except the GPU block, which is on order. I have two questions... Has anyone here shunted their FTW3 Ultra and been happy with the results? And if so, did your card work with the 500 W BIOS unmodded? If I go to the effort of doing the shunt mod, I want to make sure that the affected EVGA cards aren't somehow hobbled to the point where the shunt mod isn't very effective.
> 
> I also have a more general question regarding shunt modding. I (like all of us here) would like more performance from the card. I feel I don't have a particularly good card/chip, just average... more likely below average, to be honest. I'm not looking to break any records; I just want a good-performing card in games and VR. I know the question "is it worth the effort to shunt mod" is subjective, but I am wondering what I would get out of it. I'd like to keep the power at safe levels for water and not go crazy... The current plan is to piggyback 8s onto the existing 5s, and maybe replace the PCIe shunt with a 4 (like I said, I want to keep the power at safe levels). Will this at least give me a small boost, or am I looking at only a couple percent? Like I said, I am not looking to top the benchmark charts or anything, but the current power limit drives me crazy and is severely holding the card back.
> 
> Thoughts? Thanks


If all you want to do is max out the EVGA 500w BIOS on your FTW3 card, all you need to do is shunt mod (glue on) one resistor for PCI-E power. You only need to remove the backplate to do it. Check out this video:






EDIT: I will add that another option is to return or sell your FTW3 card and buy an MSI Trio X or ASUS Strix 3090 and use the 500w EVGA BIOS, no shunt mod required! It's ironic that the EVGA 500w BIOS works on un-modded competitor cards.


----------



## Outcasst

jura11 said:


> I'm running a Bykski waterblock and their backplate on a Palit RTX 3090 GamingPro now, and at full load I literally can't hear the coil whine; temperatures are awesome.
> 
> I also still have an Asus RTX 2080 Ti Strix with an EK Vector RTX 2080 Ti Strix waterblock, on which I could previously hear coil whine under high load; I replaced the thermal pads and everything seems fine now, no coil whine.
> 
> Hope this helps.
> 
> Thanks, Jura


I can also confirm, have the exact same card and it's by far the best card I've ever used for coil whine. Practically silent.


----------



## Wihglah

My FTW3 came with coil whine at zero load. However after a few heat cycles it disappeared. Currently it's silent.


----------



## Sheyster

A little feedback regarding the ASUS Strix 3090 flashed with the EVGA 500w BIOS: Both HDMI 2.1 ports still work after flashing. You will lose one of the DP ports though.


----------



## ExDarkxH

Anyone with crazy DDR4 speed/timings able to notice an improvement in games/benchmarks?
I spent a bit of time tinkering and was able to get 4000 CL14 CM1 working and stable. That should be a true latency of 7 ns. I felt accomplished, but right after, I watched the latest JayzTwoCents video and he claims he will be able to get a true latency of 6, which is basically 4000 CL12, albeit with a RAM voltage of 1.8 V.
I guess this is for Port Royal; it supposedly likes low latency.

Also, is CM1 really worth it? I can tighten the other timings more if I switch to CM2.
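The "true latency" figures being quoted here come from a simple formula, sketched in Python for comparing profiles:

```python
def true_latency_ns(cl, transfer_rate_mts):
    # DDR memory clock is half the transfer rate, so one cycle lasts
    # 2000 / (MT/s) nanoseconds; multiply by the CAS latency in cycles.
    return cl * 2000 / transfer_rate_mts

print(true_latency_ns(14, 4000))  # 7.0 ns, matching the post
print(true_latency_ns(12, 4000))  # 6.0 ns ("basically 4000 CL12")
print(true_latency_ns(16, 4300))  # ~7.44 ns
```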


----------



## geriatricpollywog

ExDarkxH said:


> Anyone with crazy DDR4 speed/timings able to notice an improvement in games/benchmarks?
> I spent a bit of time tinkering and was able to get 4000 CL14 CM1 working and stable. That should be a true latency of 7 ns. I felt accomplished, but right after, I watched the latest JayzTwoCents video and he claims he will be able to get a true latency of 6, which is basically 4000 CL12, albeit with a RAM voltage of 1.8 V.
> I guess this is for Port Royal; it supposedly likes low latency.
> 
> Also, is CM1 really worth it? I can tighten the other timings more if I switch to CM2.


Have you tried higher ram speed with higher latencies and compared the scores? My score is about 20-30 points higher at 4300c16 vs 4000c13.


----------



## ExDarkxH

0451 said:


> Have you tried higher ram speed with higher latencies and compared the scores? My score is about 20-30 points higher at 4300c16 vs 4000c13.


Oh geez. I was initially under the impression that tight timings were slightly overrated and that I should shoot for higher speed. I ran as high as 4600 with these sticks, but that was with loose timings.
I guess I will need to try something similar and test which gives back higher scores.
I was actually running 4266 CL16 before I decided to "tighten things up".

I will say, however, that I set a new PR in Fire Strike Ultra with these RAM settings. Unfortunately 3DMark won't put it on their leaderboards because an "hw monitoring disabled" error pops up ever since the new update.


----------



## geriatricpollywog

ExDarkxH said:


> Oh geez. I was initially under the impression that tight timings were slightly overrated and that I should shoot for higher speed. I ran as high as 4600 with these sticks, but that was with loose timings.
> I guess I will need to try something similar and test which gives back higher scores.
> I was actually running 4266 CL16 before I decided to "tighten things up".
> 
> I will say, however, that I set a new PR in Fire Strike Ultra with these RAM settings. Unfortunately 3DMark won't put it on their leaderboards because an "hw monitoring disabled" error pops up ever since the new update.


I am getting the same message. Glad it's not just me.

Also, make sure your memory isn't so tight that it's on the verge of instability. Your PR scores will be lower if your memory has errors.


----------



## Lobstar

olrdtg said:


> When testing your memory overclock, watch for both instability AND performance. Sometimes with GDDR6 and GDDR6X if you put in a high OC it may be stable, but performance might actually degrade significantly.


I'm at +1498 mem on air with no GPU overclock. I have 56 Time Spy and 67 Port Royal runs tweaking it. In those two applications I found essentially linear scaling between score and frequency. +1448 allows me the best GPU clock speeds with stability. I hit a weird issue with memory timings around 798 but skipped a few blocks and resumed my climb. I'm able to complete some tasks up to +1598 but could not get clean runs and had some reboots. My FTW3 Ultra throws a fit with any voltage tweaking, but I was at +119% power limit in X1 during testing. I'm not even sure if memory is affected by power settings, but thought I'd throw it out there.


----------



## Nizzen

ExDarkxH said:


> Oh geez. I was initially under the impression that tight timings were slightly overrated and that I should shoot for higher speed. I ran as high as 4600 with these sticks, but that was with loose timings.
> I guess I will need to try something similar and test which gives back higher scores.
> I was actually running 4266 CL16 before I decided to "tighten things up".
> 
> I will say, however, that I set a new PR in Fire Strike Ultra with these RAM settings. Unfortunately 3DMark won't put it on their leaderboards because an "hw monitoring disabled" error pops up ever since the new update.


This is my Firestrike Ultra result if you want to compare:
10900k and 3090 strix oc shunted on water.
4700c17 tweaked


http://www.3dmark.com/fs/23879807


----------



## Sheyster

The post above reminds me to mention this: 3DMark is currently on sale for $4.49 on Steam. Happy Black Friday!


----------



## defiledge

olrdtg said:


> I replaced all of my shunt resistors with 3 mOhm, removing all 6 of the original resistors, including the PCIe slot's, so I'm currently running a 3 mOhm on my slot. The peak draw is around 128 W.
> If you stack a 5 mOhm on your PCIe slot shunt resistor, just keep a few things in mind:
> If you have multiple PCIe cards, M.2 SSDs, and a lot of fans plugged into the motherboard, you might want to reduce the number of components pulling +12 V power from your 24-pin cable, as you risk melting it. If your motherboard has an auxiliary power input for the PCIe slots (usually a Molex, SATA, or 6-pin PCIe port on the motherboard), plug into that so that your PCIe slots take load off the 24-pin cable.
> 
> Personally, if you are stacking 5 mOhm on the others, I'd stack a higher value on the PCIe slot shunt resistor, for longevity's sake.
> 
> ALSO, be careful shunting the PCIe slot, as some 3090s have fuses connected there, so too much power draw could blow the fuse, rendering your card useless. Though if that did occur, you could just put some solder over the fuse to bridge it, fixing the card.


If the 8 pin cables can handle 200W+, why would the 24pin have a problem with 150W? I'm a bit confused about this.


----------



## mardon

Does anyone know if the Gigabyte BIOS will flash to a Zotac Trinity?


----------



## GTANY

I received my 2nd 3090 FE today: it performs about the same as the 1st one: +10 MHz GPU, -100 MHz RAM.

They both overclock better than my previous Strix OC: in Assassin's Creed Origins, a not very power-hungry game, the GPU is stable at a constant 2055 MHz. With the Strix, I was only able to reach 1995 MHz in the same conditions (same ambient temperature).

Consequently, I wonder if Founders Edition GPUs are binned. What are your thoughts? What frequencies do other FE owners attain?


----------



## defiledge

GTANY said:


> I received my 2nd 3090 FE today: it performs about the same as the 1st one: +10 MHz GPU, -100 MHz RAM.
> 
> They both overclock better than my previous Strix OC: in Assassin's Creed Origins, a not very power-hungry game, the GPU is stable at a constant 2055 MHz. With the Strix, I was only able to reach 1995 MHz in the same conditions (same ambient temperature).
> 
> Consequently, I wonder if Founders Edition GPUs are binned. What are your thoughts? What frequencies do other FE owners attain?


Doesn't mean much until you run some proper benchmarks and post scores


----------



## HyperMatrix

defiledge said:


> Doesn't mean much until you run some proper benchmarks and post scores


It's his 3rd card. I'm sure he knows his way around the 3090 by now.


----------



## DrunknFoo

warbucks said:


> Are you running 8ohm on all the shunts or just the PCIE on your FTW3?


Why would anyone shunt only the pcie...


----------



## GTANY

defiledge said:


> Doesn't mean much until you run some proper benchmarks and post scores


In benchmarks, the card is very power-limited. To test overclocking capability, one must use a game which consumes less than 400 W. I found that AC Origins is a good mix: quite stressful while consuming around 380 W, which enables my GPU to reach higher frequencies than in benchmarks.

In benchmarks, I was around 1900 MHz on the FEs and on the Strix: consequently, benchmarks are not useful for bringing out overclocking differences when the cards are not shunted.


----------



## defiledge

You can still run benchmarks without power limiting your GPU if you undervolt it and set the frequency. Then you can directly compare the scores + frequency at your chosen voltage.


----------



## Chamidorix

Alright guys, at this point I will literally PayPal some money to someone who can find or PM me the SmashClocks utility Vince mentioned on the GN vs Jay LN2 stream. I went digging hard for it last night and could not find it at all. Old SysUtility, nTune, ThermSpy builds, got all those, but couldn't find a single smidgen of a link to the SmashClocks/Smash Clocks/Smash Clock? Nvidia utility Vince discusses here:

Twitch VOD link


----------



## Sheyster

DrunknFoo said:


> Why would anyone shunt only the pcie...


He's talking about the EVGA FTW3 in particular. If just the PCI-E is shunted, the card will max out the 500w OC BIOS they released for it. Otherwise you're stuck at 460-ish watts.


----------



## warbucks

DrunknFoo said:


> Why would anyone shunt only the pcie...


Some folks might not use the same resistor on the PCIE shunt versus the 8pin plugs if they're concerned about the power draw(fuse, etc) from the PCIE slot. That's why I asked if you used 8ohm across all of them or only on the PCIE.


----------



## HyperMatrix

Chamidorix said:


> Alright guys, at this point I will literally PayPal some money to someone who can find or PM me the SmashClocks utility Vince mentioned on the GN vs Jay LN2 stream. I went digging hard for it last night and could not find it at all. Old SysUtility, nTune, ThermSpy builds, got all those, but couldn't find a single smidgen of a link to the SmashClocks/Smash Clocks/Smash Clock? Nvidia utility Vince discusses here:
> 
> Twitch VOD link


You let me have your KPE notify and I'll write you the program from scratch if I have to.


----------



## defiledge

defiledge said:


> If the 8 pin cables can handle 200W+, why would the 24pin have a problem with 150W? I'm a bit confused about this.


An ATX cable has 2 x +12V 16AWG wires, each rated at 13 A at 0-50ft length. So 156 W per wire at 12 V, and 300 W+ total for the 24-pin's +12V. A shunted 3090 wouldn't draw enough power from the slot to even warrant concerns over the cables melting. The only cause for concern would be whether the mobo VRMs could handle the power draw. Correct me if I'm wrong; this is all hypothetical.
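The arithmetic above as a quick sanity check, assuming 2 +12V conductors in the 24-pin and the 13 A rating quoted:

```python
# Power budget of the 24-pin's +12V wires, per the figures quoted above.
wires = 2             # +12V conductors in a standard ATX 24-pin
amps_per_wire = 13.0  # 16AWG rating cited for 0-50ft runs
volts = 12.0

watts_per_wire = amps_per_wire * volts      # 156 W per wire
total_12v_watts = wires * watts_per_wire    # 312 W across the 24-pin's +12V

print(watts_per_wire, total_12v_watts)
```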


----------



## DrunknFoo

Sheyster said:


> He's talking about the EVGA FTW3 in particular. If just the PCI-E is shunted, the card will max out the 500w OC BIOS they released for it. Otherwise you're stuck at 460-ish watts.


if you add a shunt to the PCIe, it'll increase the power draw by about 45-65 W depending on what was modified, regardless of the BIOS used


----------



## Falkentyne

DrunknFoo said:


> Why would anyone shunt only the pcie...


The rumor is that the PCIe balancing is limiting the 8-pins from being able to supply the rest of the 500 W (at least 425 W), so shunting just the PCIe would let the vBIOS reach the advertised full limit. That's assuming that's the actual cause of the issue in the first place.
And shunting the 8-pins would require everything else to be shunted as well, which is of course a lot more work.


----------



## ExDarkxH

DrunknFoo said:


> if you add a shunt to the PCIe, it'll increase the power draw by about 45-65 W depending on what was modified, regardless of the BIOS used


you can use a tiny 50 mΩ resistor that would only add ~10 W to the PCIe but would allow the 500 W BIOS to work on the card, based on how the power balancing works (and how EVGA chose to lock down the FTW3)
so you can do a 10 W mod and potentially gain 60 W from it

This is good for many people who don't feel comfortable shunting in general and don't feel their cooling/case solutions are adequate to take on the heat from a full-board shunt.
Also, it would technically be a lot easier, as you wouldn't need to take the card apart, only the backplate. A lot of people don't feel comfortable taking their cards apart.

For someone who doesn't really "shunt", this is a very nice middle ground that will let them get more out of their cards.

This would only make sense for the FTW3 cards specifically.

I actually tested this theory out before I shunted the whole board, and it holds up. I could easily reach 500 W with that method, whereas before it was stuck at 450 W.
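For anyone following along, the "small stack, small wattage gain" math is just parallel resistance. A sketch, assuming the stock slot shunt is 5 mΩ (that value is an assumption, not from this thread; check your own board):

```python
def stacked_shunt_mohm(r_orig, r_stack):
    """Parallel combination of a shunt soldered on top of the original."""
    return r_orig * r_stack / (r_orig + r_stack)

def actual_power_w(reported_w, r_orig, r_stack):
    """The card infers current from the voltage across the shunt, so with a
    smaller effective shunt the real draw exceeds the reported value by the
    ratio r_orig / r_new."""
    return reported_w * r_orig / stacked_shunt_mohm(r_orig, r_stack)

# 50 mOhm stacked on an assumed 5 mOhm slot shunt: effective ~4.55 mOhm, so
# a reported 75 W slot draw is really about 82.5 W -- roughly the ~10 W of
# "free" headroom described above.
print(round(actual_power_w(75.0, 5.0, 50.0), 1))
```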


----------



## DrunknFoo

Falkentyne said:


> The rumor is that the PCIE balancing is limiting the 8 pins from being able to supply the rest of the 500W (at least 425W), so shunting just the PCIE only would allow the vbios to allow the advertised full limit. That's assuming that's the actual cause of the issue in the first place.
> And shunting the 8 pins would require everything else be shunted also, which is of course a lot more work.


You don't see the problem with this logic, though?

The FTW card using the stock 450 W BIOS works and can hold 450 to 460 W; shunting the PCIe and allowing a 45-60 W increased draw is what occurs; the power through the pins isn't necessarily increased.


----------



## ExDarkxH

DrunknFoo said:


> You don't see the problem with this logic, though?
> 
> The FTW card using the stock 450 W BIOS works and can hold 450 to 460 W; shunting the PCIe and allowing a 45-60 W increased draw is what occurs; the power through the pins isn't necessarily increased.


why are you assuming they are stacking a 5 mΩ, 7 mΩ, heck even a 10 mΩ resistor?
From what I've read, the large majority of people feel very uncomfortable adding watts to the PCIe. They read horror stories of PCIe fingers melting and get scared about stressing out their mobos.
The reality is, people don't want to be pulling 120-130 W from their PCIe slot.

but if you used a much smaller shunt, like a 50 mΩ, the PCIe would only pull 82-85 W, which is much more comforting to them, and it would get the BIOS to work based on that stupid power-balancing ratio


----------



## whaleboy_4096

DrunknFoo said:


> You don't see the problem with this logic, though?
> 
> The FTW card using the stock 450 W BIOS works and can hold 450 to 460 W; shunting the PCIe and allowing a 45-60 W increased draw is what occurs; the power through the pins isn't necessarily increased.


The confusion for me is that it can be running with the card in power-limit mode even though it is drawing only 410-420 W. So the power limit can kick in well below the 450 W I would expect. Is it looking at just the PCIe slot? In this case the slot was drawing between 75 and 77 W, while the 8-pins were drawing 120/124/90 (the third 8-pin always draws over 30 W less than the other two). These numbers are based on a FurMark test I just ran as an example. I can reach 450-ish (although it's usually a little less) on other tests. So what exactly is triggering the power limit?


----------



## DrunknFoo

whaleboy_4096 said:


> The confusion for me is that it can be running, and the card is in power limit mode, even though it is drawing only 410-420w. So the power limit can kick in well below the 450w I would expect. So is it looking at just the PCIe slot? Which in this case was drawing between 75 and 77w, while the 8 pins are drawing 120/124/90 (the third 8 pin always draws over 30 less than the other two). These numbers are based on a furmark test I just ran as an example.


didn't mean to assume, but that is likely the case based on what I have read on the EVGA forums

but yes, I have melted a PCIe slot on a DFI board in the past; wasn't paying attention to the slot or cable temps. That was well over 10 years ago, much has been learned since then


----------



## HyperMatrix

Chamidorix said:


> Alright guys, at this point I will literally PayPal some money to someone who can find or PM me the SmashClocks utility Vince mentioned on the GN vs Jay LN2 stream. I went digging hard for it last night and could not find it at all. Old SysUtility, nTune, ThermSpy builds, got all those, but couldn't find a single smidgen of a link to the SmashClocks/Smash Clocks/Smash Clock? Nvidia utility Vince discusses here:
> 
> Twitch VOD link


Chami, do we know if SmashClocks is getting info directly through Nvidia SMI or through another method? If it's through SMI then it's just a matter of understanding how the internals work to decipher the information. There are definitely stats exposed there that aren't visible in GPU-Z/Afterburner











I was just playing with it on my laptop with integrated graphics + a GTX 1060, so I'm not sure what additional details may be available with a Turing/Ampere card on desktop.
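For anyone who wants to poke at SMI themselves, here is a small sketch of pulling per-domain clocks through nvidia-smi's documented `--query-gpu` interface and parsing the CSV it returns. The sample line below is illustrative, not captured from a real card.

```python
import csv
import io
import subprocess

# Fields exposed by nvidia-smi's documented --query-gpu interface.
QUERY = "clocks.gr,clocks.sm,clocks.mem,power.draw,temperature.gpu"
CMD = ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"]

def parse(line):
    """Parse one CSV line of nvidia-smi output into typed fields."""
    gr, sm, mem, power, temp = next(csv.reader(io.StringIO(line)))
    return {"gr_mhz": int(gr), "sm_mhz": int(sm), "mem_mhz": int(mem),
            "power_w": float(power), "temp_c": int(temp)}

# On a machine with an Nvidia GPU you would run:
#   out = subprocess.check_output(CMD, text=True)
# Here we parse an illustrative sample line instead:
sample = "1965, 1965, 9751, 348.12, 62"
print(parse(sample))
```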


----------



## DrunknFoo

nvm i be confused


----------



## Carillo

Hello! could i flash a strix in to a evga strix 480 bios on the bios ?what bios should i expect ? is there enough bios to flash the shunt ? should i buy a soldering iron ?


----------



## reflex75

GTANY said:


> I received my 2nd 3090 FE today : it performs about the same as the 1st one : +10 Mhz GPU, -100 Mhz RAM.
> 
> They both overclock better than my previous Strix OC : in Assassin's Creed Origins, a not very power-hungry game, the GPU is stable at a constant 2055 Mhz. With the Strix, I was able to reach 1995 Mhz only in the same conditions (same ambient temperature).
> 
> Consequently, I wonder if Founders Edtion cards GPUs are binned. What are your thoughts ? What frequencies other FE owners attain ?


I don't know if Nvidia keeps the good silicon for themselves, but it looks like FE cards are running above average.

My 3090 FE can run almost 2.2 GHz if not power limited.
Here is my *Port Royal* result with its stock air cooler and no shunt mod: *14402*
https://www.3dmark.com/3dm/52701185?
Clock 2160 MHz, but averaging only 1992 MHz because of the constant power limit.

And here is *Time Spy*, with the average clock even lower at 1983 MHz: *22341*
https://www.3dmark.com/3dm/52801039?

And here are some of my screens during gaming at 2100+ MHz.

_Call of Duty_ _Black Ops Cold War_









_Assassin's Creed Valhalla_


----------



## FarmerJo

Anyone having the same issue as me? I have an EVGA FTW3 and none of the settings I set in X1 are being applied properly. Fans are spinning way above what I have set, and the power limit too.


----------



## defiledge

reflex75 said:


> I don't know if Nvidia keep good silicon for themselves, but it looks like FE are running above average.
> 
> My 3090 FE can run almost 2.2GHz, if not power limited.
> Here my *Port Royal* result with its stock air cooler and no shunt mod: *14402*
> https://www.3dmark.com/3dm/52701185?
> Clock 2160Mhz, but average only 1992Mhz because of power limit all the time.
> 
> And here *Time Spy* with average clock even lower at 1983Mhz: *22341*
> https://www.3dmark.com/3dm/52801039?
> 
> And here are some of my screens during gaming at +2100Mhz.
> 
> _Call of Duty_ _Black Ops Cold War_
> 
> 
> 
> 
> 
> 
> 
> 
> 
> _Assassin's Creed Valhalla_


46C on air? Do you live at the North Pole or something? Also, you have a very good clock on your CPU, which probably adds 500 points to your PR score.


----------



## ExDarkxH

I will no longer play the game of speed vs timings. Now I've got both covered.


----------



## Chamidorix

HyperMatrix said:


> Chami, do we know if SmashClocks is getting info directly through Nvidia SMI or through another method? If it's through SMI then it's just a matter of understanding how the internals work to decipher the information. There are definitely stats exposed there that aren't visible in GPU-Z/Afterburner


Great idea; yeah, I was playing with SMI on Turing a bit, but I don't think it shows anything that isn't also exposable via NVAPI, or ThermSpy for that matter. The graphics/SM/video clocks aren't really what Vince was talking about, I don't think, unless it really is just the SM clock downclocking while the main clock doesn't. We've got (possibly virtual) clocks for SMs, memory, nvenc/dec, and then the standard overall chip clock, but I think the point here is to look at actual hardware clocks on a more granular level within SMs or whatever; otherwise how else are we supposed to use the info to "learn how a GPU works", as Vince said?

Still pretty useful though, and worth playing around with more.

Shame I don't have a 3000-series card atm after selling the FTW3; I'm curious if SMI shows info on MSVDD/the uncore part of the chip.

I can't find anything from Nvidia themselves on SmashClocks at all; I don't know when it was released, what package it was part of, what the intended feature set was, etc. I think the only way to find it will be to spin up a VM and trawl through every torrent, fileshare, and archived Usenet I can find on Nvidia to see if it's bundled in something. Or try to get buddy-buddy with Vince on FB and just ask him, lol.


----------



## dr/owned

Nice and cleanly solder-stacked some 4 mΩ shunts. If anyone wants an exact procedure for doing this I can write one up, but the gist is: tin the existing shunts to build up solder, put a small/micro dab of flux on top of both pads, chisel-tip soldering iron tinned, tweezers to hold the new shunt resistor down on top, dab of solder on the iron tip... and then just touch it to the stacked shunt and everything should melt together well. Then just a tiny amount of solder wire fed into the joint of both shunts to ensure it's fully connected. Iron set to 675F, Weller ETB tip. Resistor model #: WSL25124L000FEA

Now I just need the waterblock and I'm off to reassembly.


----------



## HyperMatrix

Chamidorix said:


> Great idea, yea I was playing with SMI on Turing a bit, but I don't think it shows anything that's also not exposable via the NVAPI, or thermspy for that matter. The graphics/SM/video clocks aren't really what Vince was talking about I don't think, unless it really is just the SM clock downclocking while the main clock doesn't. Like we've got (possibly virtual) clocks for SMs, memory, nvenc/dec, and then the standard overall chip clock, but I think the point here is to look at actual hardware clocks on a more granular level within SMs or whatever, otherwise how else are we supposed to use the info to "learn how a GPU works" as Vince said?
> 
> Still pretty useful though, and worth playing around with more.
> 
> Shame I don't have a 3000 card atm after selling FTW3, I'm curious if SMI shows info on MSVDD/uncore part of chip.
> 
> I can't find anything from Nvida themselves on Smashclocks at all; don't know when it was released, what package it was part of, what the intended feature set was etc. I think the only way to find it will be to spin up a VM and troll through every torrent and fileshare and archived usenet I can find on Nvida to see if its bundled in something. Or try to get buddy buddies with Vince on FB and just ask him, lol.



Well I was curious if SMI showed any additional entries on Turing or Ampere. Sold the Strix last week so can't test yet. And second part is...if SmashClocks isn't getting the data from SMI, where would it be getting it from? Without a hardware mod, we'd be limited to what Nvidia decides to expose through NVAPI and SMI. And SMI being more detailed than NVAPI. Would be interesting to see the readout difference at a lower clock compared to maxing it out to the edge of crashing.


----------



## bmgjet

HyperMatrix said:


> Well I was curious if SMI showed any additional entries on Turing or Ampere. Sold the Strix last week so can't test yet. And second part is...if SmashClocks isn't getting the data from SMI, where would it be getting it from? Without a hardware mod, we'd be limited to what Nvidia decides to expose through NVAPI and SMI. And SMI being more detailed than NVAPI. Would be interesting to see the readout difference at a lower clock compared to maxing it out to the edge of crashing.


HWiNFO64 shows more than SMI.
Nvidia-SMI is aimed more at Quadro people as a minimalist control panel. It was used quite a bit a few years ago when mining, for controlling P-states and power limits.
There's a lot of info not exposed to us normal people, but if you know the I2C address of it you can view it and change settings for a lot of things.
Last gen there was an effort to go through and scan every address; Afterburner even has a built-in tool to do that. Nothing ever really came of it on Nvidia cards, but AMD ones got a lot of support for basically unlimited voltage and power control.


----------



## whaleboy_4096

dr/owned said:


> Nice and cleanly solder-stacked some 4mOhm shunts. If anyone wants an exact procedure on doing this I can write one up, but the gist is: tin the existing shunts to build up solder, put a small/micro dab of flux on top of both pads, chisel tip soldering iron tinned, tweezers to hold down new shunt resistor on top, dab of solder on the iron tip..and then just touch to the stacked shunt and everything should melt together well. Then just a tiny amount of solder wire fed in to the joint of both shunts to ensure it's fully connected. Iron set to 675F, Weller ETB tip. Resistor model #: WSL25124L000FEA


Out of curiosity, what is the wire for? 
thanks


----------



## dr/owned

whaleboy_4096 said:


> Out of curiosity, what is the wire for?
> thanks


I mentioned it earlier, but it's just to take some load off of the PCIe 12V slot fingers and put it on the 8 pin connector, since that's where I'm most concerned about something melting with really high power loads.

A tiny bit more power draw is now on a single 8-pin pin, but it's not 100% because all of the pins there are connected together on the PCB anyways.


----------



## DrunknFoo

dr/owned said:


> Nice and cleanly solder-stacked some 4mOhm shunts. If anyone wants an exact procedure on doing this I can write one up, but the gist is: tin the existing shunts to build up solder, put a small/micro dab of flux on top of both pads, chisel tip soldering iron tinned, tweezers to hold down new shunt resistor on top, dab of solder on the iron tip..and then just touch to the stacked shunt and everything should melt together well. Then just a tiny amount of solder wire fed in to the joint of both shunts to ensure it's fully connected. Iron set to 675F, Weller ETB tip. Resistor model #: WSL25124L000FEA
> 
> Now I just need the waterblock and I'm off to reassembly.
> 
> View attachment 2466928


What awg wire? Looks like 20-22? Hmmm maybe i try this once a block is avail


----------



## changboy

reflex75 said:


> I don't know if Nvidia keep good silicon for themselves, but it looks like FE are running above average.
> 
> My 3090 FE can run almost 2.2GHz, if not power limited.
> Here my *Port Royal* result with its stock air cooler and no shunt mod: *14402*
> https://www.3dmark.com/3dm/52701185?
> Clock 2160Mhz, but average only 1992Mhz because of power limit all the time.
> 
> And here *Time Spy* with average clock even lower at 1983Mhz: *22341*
> https://www.3dmark.com/3dm/52801039?
> 
> And here are some of my screens during gaming at +2100Mhz.
> 
> _Call of Duty_ _Black Ops Cold War_
> 
> 
> 
> 
> 
> 
> 
> 
> 
> _Assassin's Creed Valhalla_


When I click on your Time Spy score it doesn't show 22341 but 20316, and I don't know why hehehe.

I tried Time Spy Extreme and got: 11383 (don't know why, but I'm #45 in the Hall of Fame lol)


https://www.3dmark.com/spy/15671167



And for Time Spy I got: 20294


https://www.3dmark.com/spy/15671622



And Port Royal: 14257


https://www.3dmark.com/pr/555746


----------



## reflex75

changboy said:


> When i click on ur tyme spy score it not show 22341 but 20316 and i dont know why hehehe.
> 
> I have try time spy extreme and i got : 11383 ( dont know why but iam #45 hall of fame lol)
> 
> 
> https://www.3dmark.com/spy/15671167
> 
> 
> 
> And for time spy i got : 20294
> 
> 
> https://www.3dmark.com/spy/15671622
> 
> 
> 
> And port royal : 14257
> 
> 
> https://www.3dmark.com/pr/555746


First, we are in the Nvidia 3090 GPU thread, which means we focus on GPU scores, not CPU.
Second, if you really care about CPU scores, you should check your i9-10980XE, which provides almost the same score as my 9900KS; quite disturbing...


----------



## bmgjet

Aren't you worried about that wire drawing power from the PCIe slot to feed the pin on the 8-pin, since it has the higher potential?


----------



## dr/owned

DrunknFoo said:


> What awg wire? Looks like 20-22? Hmmm maybe i try this once a block is avail


18 AWG. This stuff: https://www.amazon.com/gp/product/B07PVTLXWP/ref=ppx_yo_dt_b_asin_title_o07_s00?ie=UTF8&psc=1 . It's a Chinese brand but it seems to live up to its description regarding strand count and being copper.

The high strand count and silicone jacket allow for that tight bend radius... and 18 AWG fills out the width of the 2512 shunt resistor really well. Feels like multimeter probe wire.

--------------

Also just putzing around with checking the clearance of a 6-DIMM waterblock on a backplate. I start by printing a rectangle with the mounting-hole spacing on transparent paper (which is why the light is reflecting in the photo) so I can better see what is covered, and then move it around until I cover most of what I want to cover (the memory chips). I'm using these adorable M2.5 screws that have such a low-profile head that I don't think I'll even need to counterbore the backplate for clearance... they'll probably not interfere with any components anyway.


----------



## dr/owned

bmgjet said:


> Arnt you worried about that wire drawing power from the pci-e and to feed the pin on the 8pin since it has the higher potential.


I would guess that, given the distance from the power supply through the 24-pin and across the mobo to the slot, the 12V there is probably at a slightly lower potential than the 12V at the PCIe connector, which comes directly from the PSU, especially under load, where it would really matter. It would be an amusing problem if power were "going backwards" from the slot to the 8-pin. I could probably see this happening via Corsair Link 24-pin monitoring or by putting a current clamp on the wire.

I didn't actually think about this scenario though, so I appreciate you bringing it up. Worst case, I can just desolder the wire off the 8-pin and run it to a Molex plug, but then I'd be running another cable to the PSU... and I'm not sure if I can "1-wire" a Molex plug, because the PSU might shut off that connector if it's not also sensing a ground return.
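The guess above can be sketched as a simple IR-drop comparison. The path resistances and load currents below are placeholder guesses purely for illustration; actually measuring with a current clamp, as suggested, is the real answer.

```python
def rail_voltage(v_psu, load_a, path_mohm):
    """Voltage at the load end of a 12 V path after resistive drop."""
    return v_psu - load_a * path_mohm / 1000.0

# Assumed values: the slot's 12 V arrives via the 24-pin plus the motherboard
# plane (longer, higher-resistance path); the 8-pin via a short direct cable.
slot_v = rail_voltage(12.0, 6.0, 40.0)        # ~11.76 V under load
eight_pin_v = rail_voltage(12.0, 15.0, 10.0)  # ~11.85 V under load

# If the slot side sags more, the mod wire feeds power from the 8-pin toward
# the slot, not "backwards" -- consistent with the guess above.
print(slot_v < eight_pin_v)
```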


----------



## changboy

reflex75 said:


> First, we are on Nvidia 3090 GPU subject, which means we focus on GPU scores, not CPU.
> Second, if you really care about CPU scores, you should check your i9-10980XE which provides almost the same level score as my 9900ks, quite disturbing...


Then maybe something is wrong with my CPU overclocking? I score 4830 in Cinebench R15.
But now it seems I dropped after I updated to the latest mobo BIOS, 3201; I need to check this.

But it's possible that when I encode a Blu-ray it will be 3 times faster than your 9900KS.


----------



## DrunknFoo

dr/owned said:


> Also just putzing around with checking clearance of a 6 DIMM waterblock on a backplate. I start by printing a rectangle with the mounting hole spacing on transparent paper (why the light is reflecting in the photo) so I can better see what is covered and then move it around until I cover most of what I want to cover (the memory chips) . I'm using these adorable M2.5 screws that have such a low-profile head that I don't think I'll even need to counterbore the backplate to allow clearance...they'll probably not interfere with any components anyways.


as for the wire ya, i have a few random gauges i can use

i got the same 6dimm waiting for a block

i think centering it between the RAM chips so heat dissipates evenly might work out better? I haven't paid any attention to the ICX sensors on the FTW; maybe I'll look into that. That one RAM chip at the bottom is so lonely; it'd suck hard if that one module ends up hindering the chips' potential. Then again, Micron isn't as picky as Samsung; probably subjective


----------



## dr/owned

DrunknFoo said:


> as for the wire ya, i have a few random gauges i can use
> 
> i got the same 6dimm waiting for a block
> 
> i think centering it between the ram chips heat dissipating evenly might work out better? i haven't paid any attention to icx sensors on the ftw, maybe ill look into that, that one ram chip at the bottom is so lonely, it'd suck hard if that one module was what ends up hindering potential the chips, then again, micron isn't as picky as samsung, probably subjective


The backplate is going to act as a heat spreader anyway, so I don't think exact placement, or leaving one memory chip uncovered by the block, will really matter much. Plus these memory chips probably work up to 100C, which just isn't going to happen unless they're not heatsinked at all. I'm hopefully one step ahead too by hard-mounting the block to the backplate and using thermal paste (boy, that's going to be a mess to clean up later) instead of thermal adhesive tape, which has awful thermal conductivity and doesn't fill gaps well at all.

I can't wait to see on the IR camera the "radius" of cooling around the block.


----------



## changboy

dr/owned said:


> The backplate is going to act as a heat spreader anyways so I don't think exactly placement or leaving one memory chip uncovered by the block will really matter much. Plus these memory chips probably work up to 100C which just isn't going to happen unless they're not heatsinked at all. I'm hopefully one step ahead too by hard-mounting the block to the backplate and using thermal paste (boy that's going to be a mess to clean up later) instead of thermal adhesive tape that has awful thermal conductivity and doesn't fill gaps well at all.
> 
> I can't wait to see on the IR camera the "radius" of cooling around the block.


From what I see, the higher the memory temperature goes, the less you can overclock it before your performance drops.
I put a fan over the backplate for testing: before, memory was at +800 before the score went down; with the fan I could run +1100 and my score went higher. Not 100% sure of this since I only tried the fan tonight, but it seems the lower the memory temperature, the better the OC you can achieve.


----------



## GTANY

reflex75 said:


> I don't know if Nvidia keep good silicon for themselves, but it looks like FE are running above average.
> 
> My 3090 FE can run almost 2.2GHz, if not power limited.
> Here my *Port Royal* result with its stock air cooler and no shunt mod: *14402*
> https://www.3dmark.com/3dm/52701185?
> Clock 2160Mhz, but average only 1992Mhz because of power limit all the time.
> 
> And here *Time Spy* with average clock even lower at 1983Mhz: *22341*
> https://www.3dmark.com/3dm/52801039?
> 
> And here are some of my screens during gaming at +2100Mhz.
> 
> _Call of Duty_ _Black Ops Cold War_
> 
> 
> 
> 
> 
> 
> 
> 
> 
> _Assassin's Creed Valhalla_


Thank you for your feedback. I will shunt and watercool my FE, then I will see my overclocking results.


----------



## DrunknFoo

Line the edge (non-contact area) with painter's tape; it should catch most of the paste onto the tape.


----------



## DrunknFoo

changboy said:


> From what i see more the temp go high on memory less you can overclock it before your performance drop.
> Coz i put a fan over the backplate for testing and before memory was at +800 before score going down and after the fan i put +1100 and my score go higher. Not sure of this at 100% coz i just try with the fan tonight but its seam lower the temp on memory better oc you can achieve.


Error correction; it'll downtick effective frequency based on temp


----------



## Nizzen

Carillo said:


> Hello! could i flash a strix in to a evga strix 480 bios on the bios ?what bios should i expect ? is there enough bios to flash the shunt ? should i buy a soldering iron ?


Have you been hacked? LOL


----------



## Nizzen

ExDarkxH said:


> I will no longer play the game of speed vs timings. Now i got both covered


Show us the full Aida64 run


----------



## Nizzen

defiledge said:


> 46C on air? Do you live in the north pole or something? Also you have a very good clock on your CPU which probably adds 500 points to your PR score


Best air cooling trick: Blasting Ln2 through the aircooler. They do this in aircooling competitions 

"Aircooled"


----------



## nordschleife

Nizzen said:


> Best air cooling trick: Blasting Ln2 through the aircooler. They do this in aircooling competitions
> 
> "Aircooled"


I get 50c average on PR @ 16c ambient on AC, with a big fan blowing on the open case. stock cooler with stock paste. 

cheers from Rio, Brazil


----------



## reflex75

Nizzen said:


> Best air cooling trick: Blasting Ln2 through the aircooler. They do this in aircooling competitions
> 
> "Aircooled"


Indeed 'aircooled'; no tricks needed. The oversized and over-engineered FE cooler is good enough for 400 W, you just need to increase fan speed.
And it says 46° average
(the temp was around 54° at the end of the bench).
I hope stock air-cooled OC is still allowed here without a shunt...


----------



## chispy

Did some quick testing on my two Asus 3090s (1 Strix + 1 TUF, all mods done + EVC2 for voltage control). Both cards are ambient water-cooled; with Vcore set to 1.16V, both ran GPUPI 1B perfectly stable at 2325MHz.


----------



## Sheyster

I have not done any serious benchmarking since the Sandy Bridge days. Now I primarily just test for stability and game, when I have the time.

I decided to do a 4K Optimized test run in Superposition with the Strix because of the wonky GPU-Z readings (see the Framechasers video about it). Note that the result below is from an all-air-cooled system: +60 on the GPU core only, to bring it up to the original Strix spec, and no GPU memory OC. The stock EVGA OC fan curve was active. CPU is a 9900K at 5GHz all-core on air.










The Strix on air with the 500W EVGA BIOS is a beast! That run is good enough for #45 on the leaderboard, though I won't publish it.

EDIT: 

One more run at +120 core, +500 mem, 100% Fan:










And that's it from me for the next decade!


----------



## ExDarkxH

Some interesting mods you have there, but couldn't you accomplish the same thing with a mobo that has a Molex connector to supplement PCIe slot power?
i.e. something like an Apex


----------



## Falkentyne

The best FE backplate pad mod for the 3090, I think, is this one.

Use 1.5mm Thermalright Odyssey 12.8 W/mK or better pads (I am not sure the Fujipoly 17 W/mK pads are suitable, as they are WAY too expensive and tend to break apart after the first application/disassembly).
Use the pads everywhere marked on the PCB. These pads work great and compress easily. 120mm x 120mm versions are available on AliExpress but may need to be specially ordered from a USA or Europe supplier.

Gelid 12 W/mK pads may work. The Fujipoly 11 W/mK pads seem to be identical to the Thermalright pads but are even smaller in dimension.


----------



## ivans89

Anyone else have extreme coil whine with the EKWB block for the 3090 Strix? If so, has anyone found a solution for it?


----------



## changboy

With my EVGA FTW3 Ultra, also on air, in Superposition 4K and 8K:


----------



## HyperMatrix

ExDarkxH said:


> Some interesting mods you have there, but couldn't you accomplish the same thing with a mobo that has a Molex connector to supplement PCIe slot power?
> i.e. something like an Apex


The problem isn't that there's not enough power available at the PCIe slot. It's that pushing a high amount of power through the slot can melt the fingers. This mod bypasses the power that would go directly through the slot, while still reporting to the video card that it's getting power from the PCIe slot.
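For context on where the slot limit comes from: the PCIe CEM spec caps what a x16 graphics card may draw from the slot per rail, and the nominal 75W figure is just the sum of those caps. A quick sketch of the arithmetic (rail limits hardcoded from the spec):

```python
# Per-rail current limits for a x16 graphics slot, per the PCIe CEM spec.
RAILS = {
    "12V":  {"volts": 12.0, "max_amps": 5.5},   # 66 W
    "3.3V": {"volts": 3.3,  "max_amps": 3.0},   # ~9.9 W
}

def slot_power_budget(rails):
    """Sum the per-rail wattage limits drawn through the slot fingers."""
    return sum(r["volts"] * r["max_amps"] for r in rails.values())

total = slot_power_budget(RAILS)
print(f"Slot budget: {total:.1f} W")  # 75.9 W, i.e. the nominal 75 W slot limit
```

Sustained draw well past that limit is exactly what heats the edge-connector fingers, which is why mods like this reroute the 12V feed.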


----------



## shiokarai

ivans89 said:


> Anyone has extreme coil whine with the ekwb block for the 3090 strix? If yes, anyone found a solution for it?


Yes, quite a few ppl have issues -> check a few pages back; even an EKWB engineer said it's an issue they're aware of (but no solution apart from loosening the backplate screws or removing the backplate altogether).


----------



## changboy

shiokarai said:


> Yes, quite a few ppl have issues -> check a few pages back; even an EKWB engineer said it's an issue they're aware of (but no solution apart from loosening the backplate screws or removing the backplate altogether).


Maybe this can be resolved by applying thermal pads at some points under the backplate.


----------



## ivans89

I tried applying pads under the backplate and it's not that loud anymore, but still annoying...


----------



## escapee

HyperMatrix said:


> The problem isn't that there's not enough power available at the PCIe slot. It's that pushing a high amount of power through the slot can melt the fingers. This mod bypasses the power that would go directly through the slot, while still reporting to the video card that it's getting power from the PCIe slot.


+1. I intentionally cut the traces on the PCIe fingers for ground and +12V and routed them to another 8-pin, reducing the vdrop along that plane (3x 8-pin on the board, plus another 8-pin to feed the PCIe slot). This will void your warranty 1000% if you do it, but you gain some extra OC stability.


----------



## LVNeptune

escapee said:


> +1. I intentionally cut the traces on the PCIe fingers for ground and +12V and routed them to another 8-pin, reducing the vdrop along that plane (3x 8-pin on the board, plus another 8-pin to feed the PCIe slot). This will void your warranty 1000% if you do it, but you gain some extra OC stability.


Probably would've been safer/cheaper to cut the traces on the motherboard to isolate power going to the card, and feed it in on the back side of the board.


----------



## escapee

LVNeptune said:


> Probably would've been safer/cheaper to cut the traces on the motherboard to isolate power going to the card, and feed it in on the back side of the board.


Yeah, I agree, but I was switching the card between boards to test scores, so I didn't want to snip all my PCIe slots while testing. Making it permanent on the card also means that when I throw my 6900 XTs in there next month I can easily reuse the PCIe power feed if needed.


----------



## chispy

escapee said:


> Yeah, I agree, but I was switching the card between boards to test scores, so I didn't want to snip all my PCIe slots while testing. Making it permanent on the card also means that when I throw my 6900 XTs in there next month I can easily reuse the PCIe power feed if needed.


You have not posted any benchmarks at HWBOT like you said you would. Would you please do so? I know you are not a noob and have been around the block many times, overclocking at sub-zero temps for a long time. What is your real name on HWBOT? Reveal yourself. (You have been making new accounts here and on Futuremark. Why?)

Obviously you have been using LN2 on your PR runs, along with a very old version of Windows (1909), but can you bench anything else and post pictures of your setup? Pictures speak more than a thousand words.


----------



## geriatricpollywog

chispy said:


> You have not posted any benchmarks at HWBOT like you said you would. Would you please do so? I know you are not a noob and have been around the block many times, overclocking at sub-zero temps for a long time. What is your real name on HWBOT? Reveal yourself. (You have been making new accounts here and on Futuremark. Why?)
> 
> Obviously you have been using LN2 on your PR runs, along with a very old version of Windows (1909), but can you bench anything else and post pictures of your setup? Pictures speak more than a thousand words.


I heard Port Royal was banned from HWBOT because people were cheating the level of detail. They are basically rendering garbage textures.


----------



## chispy

0451 said:


> I heard Port Royal was banned from HWBOT because people were cheating the level of detail. They are basically rendering garbage textures.


Where did you hear that from? Got any links?


----------



## escapee

chispy said:


> You have not posted any benchmarks at HWBOT like you said you would. Would you please do so? I know you are not a noob and have been around the block many times, overclocking at sub-zero temps for a long time. What is your real name on HWBOT? Reveal yourself. (You have been making new accounts here and on Futuremark. Why?)
> 
> Obviously you have been using LN2 on your PR runs, along with a very old version of Windows (1909), but can you bench anything else and post pictures of your setup? Pictures speak more than a thousand words.


I've been busy with Port Royal lately, as you can see. Can't confirm that I'm on LN2 anymore though... I'll definitely get around to your request shortly xo


----------



## chispy

Overclocking, overclocking, and much more! Like overclocking.

HWBOT is a site dedicated to overclocking. We promote overclocking achievements and competitions for professionals as well as enthusiasts, with rankings and a huge hardware database.

hwbot.org


----------



## chispy

escapee said:


> I've been busy with Port Royal lately, as you can see. Can't confirm that I'm on LN2 anymore though... I'll definitely get around to your request shortly xo


A run of Time Spy and Fire Strike takes a couple of minutes, and you tell me you cannot run those between your PR runs? Yeah right, and you are "not on LN2"... I can call BS from a mile away.


----------



## chispy

@escapee you are totally fake and made up a whole lot of BS. This is me calling you out for all the lies and the BS PR cheat runs, which is why you needed to make a new account on Futuremark. You cannot fool me; I have been doing this for close to 20 years, OCing on LN2...


----------



## escapee

chispy said:


> a whole lot of BS. This is me calling you out for all the lies and the BS PR cheat runs, which is why you needed to make a new account on Futuremark. You cannot fool me; I have been doing this for close to 20 years, OCing on LN2...


----------



## changboy

About accounts on 3DMark: before I bought those benchmarks I created an account under the name C-3-P-O, and after I bought them on Steam I log in under my Steam name, Duracuir, lol. Is there a way I can bench and have my Steam results show up on my C-3-P-O account, merging these two accounts?


----------



## Falkentyne

chispy said:


> @escapee you are totally fake and made up a whole lot of BS. This is me calling you out for all the lies and the BS PR cheat runs, which is why you needed to make a new account on Futuremark. You cannot fool me; I have been doing this for close to 20 years, OCing on LN2...


I sent a PM if you have the time.


----------



## chispy

escapee said:


> View attachment 2467003


hahahaha.. sure, meme all you want. If it walks like a duck, quacks like a duck, and smells like a duck, it is definitely a duck. Like I said before, you cannot fool me. Post pictures of your setup, lots of pictures, and post benchmarks at HWBOT, or you are 100% fake.


----------



## chispy

Falkentyne said:


> I sent a PM if you have the time.



I PMed back, my friend


----------



## chispy

Dang it, I hate cheaters and liars! @escapee remember to post lots of pictures and submit the TS and FS runs you promised to HWBOT. Do that whenever you come back, if you come back.


----------



## escapee

Sigh, even my low-clocked Gigabyte 3090 at 2445MHz on my 4790K smacked you in Time Spy Extreme graphics score... feelsbad. Must've been doing something wrong for the past 20 years, aye ;D (edit: 2355MHz*, it's been that long since I last ran it)


----------



## escapee

Alas, I shall refrain from arguing and deliver on your request in due time <3


----------



## HyperMatrix

I leave you kids alone for a few minutes and this is what happens.


----------



## escapee

HyperMatrix said:


> I leave you kids alone for a few minutes and this is what happens.


I was just providing useful information before I got abused outta nowhere by chipsy the bully... :/


----------



## TrumpyAl

HyperMatrix said:


> I leave you kids alone for a few minutes and this is what happens.


Smacked bottoms all round, methinks!


----------



## chispy

escapee said:


> Sigh, even my low-clocked Gigabyte 3090 at 2445MHz on my 4790K smacked you in Time Spy Extreme... feelsbad. Must've been doing something wrong for the past 20 years, aye ;D


Was that on your old banned account on FM? Did you use the integrated graphics output on that run again and post it at Futuremark??? Like the one you did with your 4790K and 3090, claiming the fastest 3090? All BS, like you! I can keep pulling up dirt on your BS.

You made a brand-new account at HWBOT? Were you banned before???

Do you want me to post my private email exchanges with the head of UL/FM about your cheated scores??? I do not think so!


----------



## escapee

chispy said:


> Was that on your old banned account on FM? Did you use the integrated graphics output on that run again and post it at Futuremark??? Like the one you did with your 4790K and 3090, claiming the fastest 3090? All BS, like you! I can keep pulling up dirt on your BS.
> 
> You made a brand-new account at HWBOT? Were you banned before???
> 
> Do you want me to post my private email exchanges with the head of UL/FM about your cheated scores??? I do not think so!


Eh... if you think using the iGPU to render my MSI Afterburner frequency charts, to minimise the performance penalty, is cheating, then you have serious issues. Feel free to think whatever you like at this point, as I am done arguing haha


----------



## Falkentyne

escapee said:


> I was just providing useful information before I got abused outta nowhere by chipsy the bully... :/


There's absolutely NO way in hell a STOCK 3090 FE can score 14,700 in Port Royal. NO WAY possible. You can set a +240MHz core offset, it won't matter: the power limit will drop the core all the way down to 1800MHz. And you would wind up crashing if the "load" was low and the core boosted up to 2200+ MHz.

So stop your bloody lying.
And messing with NVIDIA Inspector to destroy the texture settings is CHEATING.


----------



## escapee

Falkentyne said:


> There's absolutely NO way in hell a STOCK 3090 FE can score 14,700 in Port Royal. NO WAY possible. You can set a +240MHz core offset, it won't matter: the power limit will drop the core all the way down to 1800MHz. And you would wind up crashing if the "load" was low and the core boosted up to 2200+ MHz.
> 
> So stop your bloody lying.
> And messing with NVIDIA Inspector to destroy the texture settings is CHEATING.


😂😂😂😂


----------



## chispy

escapee said:


> Eh... if you think using the iGPU to render my MSI Afterburner frequency charts, to minimise the performance penalty, is cheating, then you have serious issues. Feel free to think whatever you like at this point, as I am done arguing haha



hahaha... sure, keep making excuses for the way you cheat; it's all out in the open now. By the way, I know how it works, you do not have to tell me anything. Using the iGPU to offload rendering alongside the 3090 + a low-clocked 4790K for an out-of-reach PR score was enough to trigger the mods at FM/UL, trust me. Plus NVIDIA Inspector, it is 100% cheating; no excuse for that.


----------



## escapee

chipsy & falkentyne, you both have been entertaining. I have some serious business to attend to, namely setting up my second card for SLI and setting up my Ryzen for more prolonged sub-zero runs. I'll see y'all on the leaderboard in time. To the others in this group: I sincerely apologize for bringing out the worst in these two xo


----------



## chispy

@Falkentyne I sent you a PM, bro


----------



## chispy

escapee said:


> chipsy & falkentyne, you both have been entertaining. I have some serious business to attend to, namely setting up my second card for SLI and setting up my Ryzen for more prolonged sub-zero runs. I'll see y'all on the leaderboard in time. To the others in this group: I sincerely apologize for bringing out the worst in these two xo



Sure, run, run fast... You never did answer any questions, nor post any pictures of your so-claimed sub-zero setup.

Guys, beyond a shadow of a doubt @escapee has been fooling you all, posting cheated benchmarks in the FM Hall of Fame, where he was banned before and needed to make a new account to post cheated results again.

Here at ocn.net, my home for overclocking, the HWBOT team won't let stuff like this happen. He is a fake!


----------



## escapee

chispy said:


> Sure, run, run fast... You never did answer any questions, nor post any pictures of your so-claimed sub-zero setup.
> 
> Guys, beyond a shadow of a doubt @escapee has been fooling you all, posting cheated benchmarks in the FM Hall of Fame, where he was banned before and needed to make a new account to post cheated results again.
> 
> Here at ocn.net, my home for overclocking, the HWBOT team won't let stuff like this happen. He is a fake!


chipsy please, I sincerely hope you learn a thing or two about how the 30 series actually works before slinging hackusations at those who actually managed to get much further than you legitimately. Pity that your past 20 years of overclocking experience has only taught you one thing: claim hacks on those who are better than you 😂😂😂😂

Cool fact: the 5950X has no iGPU, so yeah...


----------



## chispy

escapee said:


> chipsy please, I sincerely hope you learn a thing or two about how the 30 series actually works before slinging hackusations at those who actually managed to get much further than you legitimately. Pity that your past 20 years of overclocking experience has only taught you one thing: claim hacks on those who are better than you 😂😂😂😂
> 
> Cool fact: the 5950X has no iGPU, so yeah...


The 4790K does; I was not talking about the 5950X... do not stray from the reality of the conversation. Post pictures of your setup, let's see if it is real. Post benchmarks at HWBOT.


----------



## chispy

Let me get this thread back onto the RTX 3090. @escapee is 100% fake, and that ends this conversation about his cheating; it's up to you guys who to believe! This guy is not worth my bandwidth...

In other news, I can report that the Bykski blocks for the 3090 work very well; I'm liking them more and more.


----------



## defiledge

Can anyone with a 3090 Gaming OC BIOS verify whether the PCIe slot goes above the 75W spec?


----------



## jura11

chispy said:


> Let me get this thread back onto the RTX 3090. @escapee is 100% fake, and that ends this conversation about his cheating; it's up to you guys who to believe! This guy is not worth my bandwidth...
> 
> In other news, I can report that the Bykski blocks for the 3090 work very well; I'm liking them more and more.


Hi Angelo

I have a Bykski waterblock on a Palit RTX 3090 GamingPro and it works well; I'm just power limited, that's it hahaha. Wish you were in the UK, mate, I would send you my RTX 3090 for a shunt mod

Thanks, Jura


----------



## Falkentyne

Now back to topic...

So, that brand new Kryonaut Extreme actually works.
It seems to be about 1°C better than Thermalright TFX, which was already about 2°C better than regular Kryonaut (after it cures for about a week).
Plus it's MUCH cheaper per gram.

Two grams of Thermalright TFX for $19 is highway robbery. 6 grams for $39 is like getting sexually violated by a hot female Klingon warrior, but cheaper per gram.
Kryonaut Extreme was $99 (now it's $104), but that works out to $10 per 3.38 grams, so the same $40 gets you 13.52 grams, more than double the amount you get with TFX.
And that's about equal to the 11.1 grams of regular Kryonaut costing $29.

So it's worth it for sure if you can handle the starting asking price; you get a lot of it for repasting. To use it effectively, don't spread it. Just apply a thin line of paste with the spatula in a long X to the diagonal corners of the GPU (or CPU IHS if you're doing that), then 4 small drops, one in each remaining quadrant, and tighten.
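To make the value comparison concrete, here is the per-gram math from the prices quoted above (the 33.84g Kryonaut Extreme tube size is inferred from the "$10 per 3.38 grams" figure, so treat it as an assumption):

```python
# Price-per-gram comparison for the pastes discussed above.
# Prices are the ones quoted in the post; the Extreme tube size is assumed.
pastes = {
    "Thermalright TFX (2 g)": (19.0, 2.0),
    "Thermalright TFX (6 g)": (39.0, 6.0),
    "Kryonaut Extreme (33.84 g, assumed)": (99.0, 33.84),
    "Kryonaut (11.1 g)": (29.0, 11.1),
}

for name, (price, grams) in pastes.items():
    print(f"{name}: ${price / grams:.2f}/g")

# Same-$40 comparison: Kryonaut Extreme at ~$10 per 3.38 g
extreme_grams_per_40 = 40 / (10 / 3.38)
print(f"$40 of Kryonaut Extreme: {extreme_grams_per_40:.2f} g")  # 13.52 g
```

Even at the higher launch price, the Extreme tube works out to roughly a third of TFX's cost per gram.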


----------



## chispy

jura11 said:


> Hi Angelo
> 
> I have a Bykski waterblock on a Palit RTX 3090 GamingPro and it works well; I'm just power limited, that's it hahaha. Wish you were in the UK, mate, I would send you my RTX 3090 for a shunt mod
> 
> Thanks, Jura


Great to hear that, my friend Jura. Same here, the Bykski works very well on the Strix; it exceeded my expectations. You can do it, bro, shunt modding is not that hard. Let me know if you need assistance; I can help you through Skype or Discord so I can see your 3090


----------



## dr/owned

Thought about it a bit more late last night and decided to take the PCIe slot out of the equation completely. Kapton-taped off pins 1-3 on the front side and 2-3 on the back side (a pain to do the trimming for pin 1, which is some sort of sense pin). So all of the power is going to come from the second PCIe connector. I also took 5 copper strands from 8 AWG wire (so about 14 AWG worth) and tied all the 12V pins together after the connector to balance them out. In theory this is 240W capable through the slot shunt. I may come back and balance the first 8-pin to the second 8-pin as well.

I'll have to monitor whether this connector gets significantly hotter than the other one. If it does, I can easily solder another lead onto my bridge to take external power. I need to experiment today to see whether my Corsair PSU delivers power on one wire if the ground return is through another connector, or if it has some sort of loop sensing where there needs to be a ground return on the same connector. It would be so much easier if I didn't have to solder a ground wire to this thing (since it already has tons of ground through the slot and the 2x 8-pins).

(The wires taking a wide loop off the end of the card are intentional, so I can get a current clamp on them.)
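As a sanity check on the "5 strands ≈ 14 AWG" estimate: using the standard AWG diameter formula and assuming a common 19-strand 8 AWG construction (the strand count is my assumption, not stated above), five strands land right around 14 AWG's cross-section:

```python
import math

def awg_area_mm2(awg: int) -> float:
    """Cross-sectional area of a solid conductor from the standard AWG formula."""
    d_mm = 0.127 * 92 ** ((36 - awg) / 39)
    return math.pi / 4 * d_mm ** 2

STRANDS = 19                              # assumed 19-strand 8 AWG wire
strand_area = awg_area_mm2(8) / STRANDS   # area of one strand
bundle_area = 5 * strand_area             # five strands twisted together

print(f"5 strands: {bundle_area:.2f} mm^2, 14 AWG: {awg_area_mm2(14):.2f} mm^2")
```

That comes out to roughly 2.2 mm² for the bundle versus about 2.1 mm² for solid 14 AWG, so the ballpark above holds.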


----------



## jura11

Falkentyne said:


> Now back to topic...
> 
> So, that brand new Kryonaut Extreme actually works.
> It seems to be about 1°C better than Thermalright TFX, which was already about 2°C better than regular Kryonaut (after it cures for about a week).
> Plus it's MUCH cheaper per gram.
> 
> Two grams of Thermalright TFX for $19 is highway robbery. 6 grams for $39 is like getting sexually violated by a hot female Klingon warrior, but cheaper per gram.
> Kryonaut Extreme was $99 (now it's $104), but that works out to $10 per 3.38 grams, so the same $40 gets you 13.52 grams, more than double the amount you get with TFX.
> And that's about equal to the 11.1 grams of regular Kryonaut costing $29.
> 
> So it's worth it for sure if you can handle the starting asking price; you get a lot of it for repasting. To use it effectively, don't spread it. Just apply a thin line of paste with the spatula in a long X to the diagonal corners of the GPU (or CPU IHS if you're doing that), then 4 small drops, one in each remaining quadrant, and tighten.


Many people prefer Kingpin KPx. I have never used that TIM and can't comment on whether it's as good as or better than Kryonaut Extreme. Regarding TFX, I agree it's expensive, but I have used their TIMs in the past and they always performed well on waterblocks; can't comment on air.

I use TFX, Kryonaut, or Noctua NT-H1; all three of them are great with waterblocks and the difference is literally minimal in my case.

Hope this helps 

Thanks, Jura


----------



## jura11

chispy said:


> Great to hear that, my friend Jura. Same here, the Bykski works very well on the Strix; it exceeded my expectations. You can do it, bro, shunt modding is not that hard. Let me know if you need assistance; I can help you through Skype or Discord so I can see your 3090


Hi Angelo 

On RTX 2080 Tis, Bykski performed, I would say, around 2-5°C worse than competitors or worse than EK, and sometimes literally beat EK by 2°C in the same loop.

On the RTX 3090 I didn't expect the Bykski waterblocks to perform like they do. I will have the EK RTX 3090 block and the Aqua Computer kryographics RTX 3090 Strix waterblock here for testing as well, and will see which one I keep in the end, but for now Bykski is my dark horse.

A friend is running the same Bykski waterblock on his RTX 3090 and he is getting around 36°C; his old RTX 2080 Ti AMP with an EK Vector waterblock constantly ran at 42-46°C.

I'm not good with a soldering iron, Angelo, so I'm staying away from that; it's why I never did the shunt mod on my RTX 2080 Ti Strix or Zotac RTX 2080 Ti AMP.

But let's see, my friend, I will PM you if I have the guts for it.

Thanks, Jura


----------



## Nizzen

chispy said:


> Let me get this thread back onto rtx 3090 , @escapee is fake 100% . And this will end this conversation about his way of cheating , up to you guys who to believe ! This guy is not worth my bandwith ...
> 
> In other news , i can report that the Byksi blocks for 3090 works very well , i'm liking it more and more


The Bykski backplate is like double the thickness of the EK Vector Strix one!
Maybe I have to use the Bykski Strix backplate together with my EK Strix block.

Testing the EK Strix waterblock on a new 3090 Strix OC @ stock BIOS, not shunt-modded.
This is about the max in Port Royal with this card: 14804


http://www.3dmark.com/pr/561190



On the other 3090 Strix card, which is shunt-modded, I got about 15000 points on stock air cooling. Shunt modding is a must
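For anyone wondering why shunt modding raises the power ceiling: the card infers current from the voltage drop across milliohm sense resistors, so soldering a second resistor in parallel on top of the stock one lowers the sensed drop and the reported power. A sketch of the arithmetic (the 5 mΩ value is illustrative, not a specific board's spec):

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock_shunt = 0.005                    # 5 mOhm stock sense resistor (illustrative)
modded = parallel(stock_shunt, 0.005)  # equal resistor stacked on top -> 2.5 mOhm

# The controller still assumes the stock value, so reported power scales
# with the ratio of modded to stock resistance.
actual_watts = 500.0
reported = actual_watts * modded / stock_shunt
print(f"{actual_watts:.0f} W actual is reported as ~{reported:.0f} W")  # ~250 W
```

With an equal-value stack the card under-reports by half, which is why a modded card sails past its stock power limit before the limiter ever engages.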


----------



## chispy

Nizzen said:


> The Bykski backplate is like double the thickness of the EK Vector Strix one!
> Maybe I have to use the Bykski Strix backplate together with my EK Strix block.
> 
> Testing the EK Strix waterblock on a new 3090 Strix OC @ stock BIOS, not shunt-modded.
> This is about the max in Port Royal with this card: 14804
> 
> 
> http://www.3dmark.com/pr/561190
> 
> 
> 
> On the other 3090 Strix card, which is shunt-modded, I got about 15000 points on stock air cooling. Shunt modding is a must


So true 😁 the Bykski backplate is thick af... I have an EK Vector on my TUF OC and the EK backplate is very thin, like a tin can.

Very nice temps on your setup. And you are correct, shunt modding is a must, and some extra Vcore helps, up to 1.16V for benching; more than that and temps go out of control 🔥🔥🔥 and you need LN2 cooling...


----------



## dr/owned

Nizzen said:


> The Bykski backplate is like double the thickness of the EK Vector Strix one!
> Maybe I have to use the Bykski Strix backplate together with my EK Strix block.


For reasons like this, I say EK is overrated. They're asking something like $250 for a waterblock + backplate... it had better be really good for that kind of money. But nothing they do is really above the quality of what you can get out of China from Bykski or Barrow.


----------



## Nizzen

chispy said:


> So true 😁 the Bykski backplate is thick af... I have an EK Vector on my TUF OC and the EK backplate is very thin, like a tin can.
> 
> Very nice temps on your setup. And you are correct, shunt modding is a must, and some extra Vcore helps, up to 1.16V for benching; more than that and temps go out of control 🔥🔥🔥 and you need LN2 cooling...


Do you know if the Asus OC Panel II works with hotwire, like the Elmor thing? The OC Panel II supports hotwire on older GPUs at least...


----------



## bmgjet

Flatness of the stock XC3 base vs the EKWB XC3 block.
Having a really bad time with their block.
First off, I was getting thermal throttling because the inductors don't make contact with the pads they provided when following the install instructions,
so I had to get some thicker pads.
Then bad die temps, with it only being 5°C better than the stock cooler and hitting 70°C (a 2080 Ti beforehand on the same loop ran at 40°C).

Contacted them and they will update their install info about the inductor pads. But they only offer a refund for the flatness issue, not a replacement block, and I have to pay $80 return shipping, which is a deal-breaker, since I'd then have lost a lot of money and be stuck with a backplate I can't even use anymore.

I improved the die contact a little by skimming the block standoffs, since they were all different heights.
Using liquid metal has filled the gap in a bit better, so now I'm at 54°C for the same load that was taking it to 70°C.
I'm tempted to give it a very light lapping but don't want to destroy the nickel plating.

Then the coil whine was crazy loud. Putting a couple of rubber bands around the whole card has mostly removed that noise.
They recommended loosening the screws on the backplate, but then I ended up thermal throttling because the rear memory would overheat.


----------



## chispy

Nizzen said:


> Do you know if the Asus OC Panel II is working with hotwire like elmor thing? OC panel II supports hotwire on older gpu's atleast....


I have an Asus OC Panel II and it does not work for me; only Elmor's EVC2 works flawlessly on both the Asus Strix and the TUF OC, my friend @Nizzen. I have not retried the OC Panel since then.


----------



## chispy

bmgjet said:


> Flatness of the stock XC3 base vs the EKWB XC3 block.
> Having a really bad time with their block.
> First off, I was getting thermal throttling because the inductors don't make contact with the pads they provided when following the install instructions, so I had to get some thicker pads.
> Then bad die temps, with it only being 5°C better than the stock cooler and hitting 70°C (a 2080 Ti beforehand on the same loop ran at 40°C).
> 
> Contacted them and they will update their install info about the inductor pads. But they only offer a refund for the flatness issue, not a replacement block, and I have to pay $80 return shipping, which is a deal-breaker, since I'd then have lost a lot of money and be stuck with a backplate I can't even use anymore.
> 
> I improved the die contact a little by skimming the block standoffs, since they were all different heights.
> Using liquid metal has filled the gap in a bit better, so now I'm at 54°C for the same load that was taking it to 70°C.
> I'm tempted to give it a very light lapping but don't want to destroy the nickel plating.
> 
> Then the coil whine was crazy loud. Putting a couple of rubber bands around the whole card has mostly removed that noise.
> They recommended loosening the screws on the backplate, but then I ended up thermal throttling because the rear memory would overheat.
> 
> 
> View attachment 2467056


So sorry to hear that. EK has gone downhill lately; seeing that picture, a block that isn't completely flat is unacceptable from EK QC. I also believe the EK backplate amplifies the coil whine because it is so thin. I hope you get everything fixed soon, as 54°C is still kind of high for water cooling.

Regards: Angelo


----------



## DrunknFoo

bmgjet said:


> Flatness of the stock XC3 base vs the EKWB XC3 block.
> Having a really bad time with their block.
> First off, I was getting thermal throttling because the inductors don't make contact with the pads they provided when following the install instructions, so I had to get some thicker pads.
> Then bad die temps, with it only being 5°C better than the stock cooler and hitting 70°C (a 2080 Ti beforehand on the same loop ran at 40°C).
> 
> Contacted them and they will update their install info about the inductor pads. But they only offer a refund for the flatness issue, not a replacement block, and I have to pay $80 return shipping, which is a deal-breaker, since I'd then have lost a lot of money and be stuck with a backplate I can't even use anymore.
> 
> I improved the die contact a little by skimming the block standoffs, since they were all different heights.
> Using liquid metal has filled the gap in a bit better, so now I'm at 54°C for the same load that was taking it to 70°C.
> I'm tempted to give it a very light lapping but don't want to destroy the nickel plating.
> 
> Then the coil whine was crazy loud. Putting a couple of rubber bands around the whole card has mostly removed that noise.
> They recommended loosening the screws on the backplate, but then I ended up thermal throttling because the rear memory would overheat.
> 
> 
> View attachment 2467056











GeForce RTX 3080 and RTX 3090 with bended package - why water and air coolers have such a hard time | Investigative | igor'sLAB

Of course, you'll be amazed if, for example, three supposedly identical cards from one manufacturer have completely different temperatures or fan speeds and this is then reflected very similarly in…

www.igorslab.de

Have you had a look at this?

/shrug I wonder if it now becomes a gamble to get decent contact....


----------



## dr/owned

DrunknFoo said:


> GeForce RTX 3080 and RTX 3090 with bended package - why water and air coolers have such a hard time | Investigative | igor'sLAB
> 
> Of course, you'll be amazed if, for example, three supposedly identical cards from one manufacturer have completely different temperatures or fan speeds and this is then reflected very similarly in…
> 
> www.igorslab.de
> 
> Have you had a look at this?
> 
> /shrug I wonder if it now becomes a gamble to get decent contact....


I don't like Igor cause he seems super arrogant. Same guy who was like "ah-ha it's the capacitors" ... except it really wasn't and he wasn't even right about the types of capacitors being used by reference designs.

The GPU boards have always been warpy and whatnot...it's the responsibility of the waterblock manufacturer to machine to a tight enough tolerance and use enough (all) of the screw mounting points to ensure the thing is fully flattened and pressing against the die and memory chips. A 0.06mm height delta across the silicon is nothing. Probably even room-temperature variations can change those measurements, let alone when the card is running and different sections are at different temperatures.


----------



## Naieve

Has anyone tried flashing an FTW3 bios to a reference 3090? I saw a post on EVGA where someone said they flashed it on to an xc3. Just wanna up the power on this PNY 3090.


----------



## Cholerikerklaus

Falkentyne said:


> There's absolutely NO bloody way in hell a STOCK 3090 FE can score 14,700 in Port Royal. NO WAY possible. You can clock the core at a +240 MHz offset--it won't matter--the power limit will drop the core all the way down to 1800 MHz. And you would wind up crashing if the "load" was low and the core boosted up to 2200+ MHz.
> 
> So stop your bloody lying.
> And messing with NVInspector to destroy the texture settings is CHEATING.


My Stock 3090 FE did this Port Royal score


https://www.3dmark.com/pr/469325


----------



## shiokarai

Nizzen said:


> Bykski has like double the thickness of the backplate of the EK Vector Strix!
> Maybe I have to use the Bykski Strix backplate together with my EK Strix block.
> 
> Testing the EK Strix waterblock on a new 3090 Strix OC @ stock bios and not shunt-modded.
> This is about max in Port Royal with this card: 14804
> 
> 
> http://www.3dmark.com/pr/561190
> 
> 
> 
> On the other 3090 Strix card that is shunt-modded I got about 15000 points on stock air cooling. Shunt-modding is a must


Do you have the coil whine amplification issue with the EK block, too? It seems to be a design/production flaw of this particular block. Also, was coil whine an issue with the Bykski block? And how do they compare temp/design/QC wise, EK block vs Bykski block? I have an EK block inbound but I'm seriously eyeing other blocks due to the coil whine amp issue plus the price/perf ratio of the EK block...


----------



## OC2000

ivans89 said:


> Anyone has extreme coil whine with the ekwb block for the 3090 strix? If yes, anyone found a solution for it?


The only way I have removed coil whine altogether on the EK Strix waterblock backplate is to remove the thermal pads covering the back of the VRMs. I have an Aqua Computer NEXT waterblock with active backplate on order now, as everything else I have tried doesn't solve this issue.


----------



## ArcticZero

defiledge said:


> Can anyone with a 3090 gaming OC bios verify if the pcie slot goes above the 75W spec?


I have a PNY card which is 2x8pin, currently using the Gigabyte OC BIOS, and I normally see the additional power come from the 8-pin wires (saw a max of up to 170W last time I checked), while the PCIe slot generally stays at around 68-75W.


----------



## shiokarai

OC2000 said:


> The only way I have removed Coil Whine altogether on the EK Strix Water block backplate is to remove the thermal pads covering the back of the VRMS. I have an Aqua Computer NEXT Waterblock with active back plate on order now as everything I have tried to do doesn't solve this issue.


So a botched attempt by EK to make a proper Strix block... they were in such a hurry they forgot to test the thing! lol Shame Watercool is doing an FTW3 block instead... (I don't like the Aqua Computer ones because of the paste on the mem chips = mess; also with my RTX 2080 Ti AC block there was no contact with 2 mem chips, I needed to use mem pads instead)


----------



## stryker7314

Any FTW3 Ultra waterblocks available for purchase?


----------



## DrunknFoo

OC2000 said:


> The only way I have removed Coil Whine altogether on the EK Strix Water block backplate is to remove the thermal pads covering the back of the VRMS.


??? What?!!!? 

So less pressure results in less/no whine? Have you dampened the coils with pads to help eliminate reverb?


----------



## GTANY

Falkentyne said:


> The best FE backplate pad mod for 3090 I think is this one.
> 
> Use 1.5mm Thermalright Odyssey 12.8 w / mk or better pads (I am not sure if the Fujipoly 17 w / mk pads are suitable as they are WAY too expensive and tend to break apart after first application / disassembly).
> Use the pads everywhere marked on the PCB. These pads work great and compress easily. 120mm * 120mm versions are available on Aliexpress but may need to be specially ordered from a USA or Europe supplier.
> 
> Gelid 12 w/mk pads may work. The fujipoly 11 w/mk pads seem to be identical to the Thermalright pads but are even smaller in dimension.
> 
> View attachment 2466992


Thank you.

And for the front, are there hotspots ?


----------



## OC2000

DrunknFoo said:


> ??? What?!!!?
> 
> So less pressure results in less/no whine? Have u dampened the coils with pads to help eliminate reverb?



I tried adding an MP5 Works clamp onto the backplate, which did nothing to help.
I used different thermal pads (Thermalright 12 W/mK); I used more, I used less. Nothing helped. I tried thermal pads on one side, then the other, too. Both are as bad as each other.


----------



## SuperMumrik

What temps are you guys seeing on your shunted water cooled cards? My strix (bykski wb) reaches 47-48C almost instantly during gameplay and 49-50 during benchmarks. Long before the loop is getting heat soaked. 
Game clocks are [email protected] with +1000Mhz to the memory(not a great card) and pulling around 500-600 Watts mostly. 

What puzzles me is that temps don't get any higher after it's heat soaked. Might have to do a remount some day 😊


----------



## jura11

SuperMumrik said:


> What temps are you guys seeing on your shunted water cooled cards? My strix (bykski wb) reaches 47-48C almost instantly during gameplay and 49-50 during benchmarks. Long before the loop is getting heat soaked.
> Game clocks are [email protected] with +1000Mhz to the memory(not a great card) and pulling around 500-600 Watts mostly.
> 
> What puzzles me is that temps don't get any higher after it's heat soaked. Might have to do a remount some day 😊


I would try remounting the waterblock and checking the TIM print. What TIM did you use, the supplied one or Kryonaut or something like that?

How many radiators do you have? 

Hope this helps 

Thanks, Jura


----------



## dr/owned

SuperMumrik said:


> What temps are you guys seeing on your shunted water cooled cards? My strix (bykski wb) reaches 47-48C almost instantly during gameplay and 49-50 during benchmarks. Long before the loop is getting heat soaked.
> Game clocks are [email protected] with +1000Mhz to the memory(not a great card) and pulling around 500-600 Watts mostly.
> 
> What puzzles me is that temps don't get any higher after it's heat soaked. Might have to do a remount some day 😊


That sounds like reasonable temperatures to me. 20C delta is pretty normal for 300W cards...let alone 600W.
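The "20C delta is pretty normal" point can be sanity-checked with a lumped thermal model: the die sits a roughly fixed delta above the coolant, equal to power times the die-to-coolant thermal resistance. That also explains the earlier puzzle of temps appearing "instantly" and barely rising afterwards: the delta is set by power, while the coolant itself only climbs a few degrees as the loop heat-soaks. A minimal sketch, where the 0.04 K/W resistance and 30°C coolant are assumed ballpark figures, not measured values:

```python
def die_temp(coolant_c, power_w, r_th_k_per_w):
    """Steady-state die temperature: coolant temp plus power times the
    lumped die-to-coolant thermal resistance (a first-order model)."""
    return coolant_c + power_w * r_th_k_per_w

# 0.04 K/W is an assumed ballpark for a full-cover block + TIM,
# not a measured figure for any specific card.
for watts in (300, 450, 600):
    print(f"{watts} W -> {die_temp(30.0, watts, 0.04):.0f} C at 30 C coolant")
```

With these assumptions, doubling the power from 300 W to 600 W doubles the delta over coolant (12°C to 24°C), which lines up with the numbers being discussed here.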


----------



## SuperMumrik

jura11 said:


> I would try remount waterblock and check TIM print, what TIM did you used, supplied one or Kryonaut or something like that?
> 
> How many radiators do you have?


Yeah, might be a good idea. The stock cooler print was really bad actually.
Kryonaut on the die and I got 360+240 fatty rads.




dr/owned said:


> That sounds like reasonable temperatures to me. 20C delta is pretty normal for 300W cards...let alone 600W.


It's the part where it gets to those "high" temps before the loop is heat soaked that puzzles me (and the fact that temps don't really get any higher after the loop is heat soaked).
And the fact that I need to start the chiller to stabilize Vcore above 1081mV (using the slider) tells me there is something wonky🤔


----------



## cletus-cassidy

Falkentyne said:


> Now back to topic...
> 
> So, that brand new Kryonaut Extreme actually works.
> Seems to be about 1C better than Thermalright TFX, which was already about 2C better than Kryonaut (after it cures for about a week).
> Plus it's MUCH cheaper per gram.
> 
> Two grams of thermalright TFX for $19 is highway robbery. 6 grams for $39 is like getting sexually violated by a female hot Klingon Warrior but cheaper per gram.
> Kryonaut Extreme was $99 (now it's $104), but it's $10 per 3.38 grams, so the same $40 gets you 13.52 grams, which is more than double the amount you get with TFX.
> And that's about equal to the 11.1 grams of regular Kryonaut costing $29.
> 
> So it's worth it for sure if you can handle the starting asking price. You get alot of it for repasting. To use it effectively, don't spread it. Just apply a thin line of paste with the spatula in a long X line to each diagonal corner of the GPU (or CPU IHS if you're doing that), and then 4 small drops in each quadrant left over and then tighten.


You can get it for $100.95 plus 10% off for Black Friday and free shipping at Performance PCs:









Thermal Grizzly Kryonaut Extreme Thermal Grease - 9ml (www.performance-pcs.com)
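The per-gram arithmetic in the quoted post can be laid out explicitly. Prices and tube weights below are the ones quoted above (they will have changed since), and the Kryonaut Extreme tube weight is estimated from the quoted "$10 per 3.38 grams" rate rather than a listed spec:

```python
def cost_per_gram(price_usd, grams):
    """Unit cost of a tube of thermal paste."""
    return price_usd / grams

# Prices/weights as quoted in the post above; Kryonaut Extreme weight
# is an estimate derived from the quoted $10 / 3.38 g figure.
pastes = {
    "Thermalright TFX 2 g": (19.00, 2.0),
    "Thermalright TFX 6 g": (39.00, 6.0),
    "Kryonaut 11.1 g": (29.00, 11.1),
    "Kryonaut Extreme ~33.5 g": (99.00, 33.5),
}
for name, (price, grams) in sorted(pastes.items(), key=lambda kv: cost_per_gram(*kv[1])):
    print(f"{name:25s} ${cost_per_gram(price, grams):.2f}/g")
```

The spread is what the quote argues: TFX is several times more expensive per gram than either Kryonaut, while Extreme lands close to regular Kryonaut despite the steep tube price.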


----------



## reflex75

Cholerikerklaus said:


> My Stock 3090 FE did this Port Royal score
> 
> 
> https://www.3dmark.com/pr/469325


14728 is a very nice score for a stock 3090 FE 
You should have the record of the stock FE OC league 😃
Mine is at 14.4K but I still have room for improvement on both core and memory 😉
https://www.3dmark.com/compare/pr/469325/pr/478711


----------



## Carls_Car

Flashed my Ventus 3X OC with the Gigabyte Gaming OC vbios. It worked, but I couldn't load any games; I mainly tried Gears 5, Control, Doom, and Apex. I was able to run TS and get a decent score and a light OC. Looks like the crashes happen if the clock goes over 2GHz under load. Tried messing with the power limit to see if that did anything, but nope. Needless to say, the Gigabyte vbios was fine without any OC and only the power limit pushed to max. Here's a TS score I managed to get with the Gigabyte vbios (110 core/200 mem).



https://www.3dmark.com/spy/15723685



Went back to the original 3X OC vbios. 115core/300mem.



https://www.3dmark.com/spy/15726700



All tests are air (if it wasn't already obvious) with fan @ 85%. I feel like I got lucky with my Ventus OC, it doesn't run as hot as other people claim theirs does, and it seems to OC pretty well on its own vbios.

I'll probably not push the OC for now, will wait for a water block to finally come out. Would like to push past the 350W wall if I'm honest, but for now I'm content. I'm playing games at 2030-2085Mhz with Gsync/vsync on since it's not pushing the TDP to the limit.


----------



## bmgjet

Carls_Car said:


> Flashed my Ventus 3X OC w/Gigabyte Gaming OC vbios. Worked, but couldn't load any games. Mainly tried Gears 5, Control, Doom, and Apex. I was able to run TS and get a decent score and a light OC. Looks like the crashes happen if the clock went over 2Ghz when under load. Tried messing with power limit to see if that did anything, but nope. Needless to say, but the Gigabyte vbios was fine without any OC and only the power limit pushed to max. Here's a TS score I managed to get with the Gigabyte vbios (110core/200mem).
> 
> 
> 
> https://www.3dmark.com/spy/15723685
> 
> 
> 
> Went back to the original 3X OC vbios. 115core/300mem.
> 
> 
> 
> https://www.3dmark.com/spy/15726700
> 
> 
> 
> All tests are air (if it wasn't already obvious) with fan @ 85%. I feel like I got lucky with my Ventus OC, it doesn't run as hot as other people claim theirs does, and it seems to OC pretty well on its own vbios.
> 
> I'll probably not push the OC for now, will wait for a water block to finally come out. Would like to push past the 350W wall if I'm honest, but for now I'm content. I'm playing games at 2030-2085Mhz with Gsync/vsync on since it's not pushing the TDP to the limit.


Use the KFA2 390W bios instead. It's more in line with your card's power layout.


----------



## Carls_Car

bmgjet said:


> Use the KFA2 390W bios instead. Its more inline with your cards power layout.


I'll give it a shot, thanks for the reply!


----------



## WilliamLeGod

Is it worth using liquid metal on the 3090 Gaming X Trio? If so, how much of a temp drop should I expect? Thanks for any insights!


----------



## GRABibus

Which bios could we use on the Asus Strix Gaming (not the OC version) in order to get better performance and increase the power limit?
Did someone try?


----------



## Carillo

Carls_Car said:


> Flashed my Ventus 3X OC w/Gigabyte Gaming OC vbios. Worked, but couldn't load any games. Mainly tried Gears 5, Control, Doom, and Apex. I was able to run TS and get a decent score and a light OC. Looks like the crashes happen if the clock went over 2Ghz when under load. Tried messing with power limit to see if that did anything, but nope. Needless to say, but the Gigabyte vbios was fine without any OC and only the power limit pushed to max. Here's a TS score I managed to get with the Gigabyte vbios (110core/200mem).
> 
> 
> 
> https://www.3dmark.com/spy/15723685
> 
> 
> 
> Went back to the original 3X OC vbios. 115core/300mem.
> 
> 
> 
> https://www.3dmark.com/spy/15726700
> 
> 
> 
> All tests are air (if it wasn't already obvious) with fan @ 85%. I feel like I got lucky with my Ventus OC, it doesn't run as hot as other people claim theirs does, and it seems to OC pretty well on its own vbios.
> 
> I'll probably not push the OC for now, will wait for a water block to finally come out. Would like to push past the 350W wall if I'm honest, but for now I'm content. I'm playing games at 2030-2085Mhz with Gsync/vsync on since it's not pushing the TDP to the limit.


Where did you order your waterblock from? Mine is being shipped with a Chinese dude on a bike, probably arriving late 2022.


----------



## DimQa

Can i flash MSI 3090 Suprim X bios to MSI 3090 Gaming X Trio? Sorry if it was asked before.


----------



## Lord of meat

DimQa said:


> Can i flash MSI 3090 Suprim X bios to MSI 3090 Gaming X Trio? Sorry if it was asked before.


Yes!
Make sure you use Afterburner and set fan curves.
My settings:
power limit 107
core clock +60
mem clock +730
I set the fan curve to 50, and when the temp hits 65°C it goes to 100.
Hope this helps ya.


----------



## reflex75

Falkentyne said:


> There's absolutely NO bloody way in hell a STOCK 3090 FE can score 14,700 in Port Royal. NO WAY possible. You can clock the core at a +240 MHz offset--it won't matter--the power limit will drop the core all the way down to 1800 MHz. And you would wind up crashing if the "load" was low and the core boosted up to 2200+ MHz.
> 
> 
> 
> So stop your bloody lying.
> 
> And messing with NVInspector to destroy the texture settings is CHEATING.





Cholerikerklaus said:


> My Stock 3090 FE did this Port Royal score
> 
> 
> 
> https://www.3dmark.com/pr/469325





reflex75 said:


> 14728 is a very nice score for a stock 3090 FE
> 
> You should have the record of the stock FE OC league 😃
> 
> Mine is at 14.4K but I have still room for improvement on both core and memory 😉
> 
> 
> 
> https://www.3dmark.com/compare/pr/469325/pr/478711


Just improved my scores thanks to the winter time here 🙂 (window open)
3090 FE stock aircooled OC
Port Royal: *14 636* https://www.3dmark.com/3dm/53877759?
Time Spy: *22 435* https://www.3dmark.com/3dm/53879031?

Because of the power limit (400w), my voltage drops around 0.9v - 0.95v range, while the frequency remains high between 2Ghz - 2.1Ghz, which is fine for undervolt 😃

I wonder what frequency could be reached at 1.10v without the power limit... 🤔
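That last question can be ballparked with the usual first-order rule that dynamic power scales with frequency times voltage squared. The sketch below extrapolates from the figures in this post (~400 W at roughly 0.925 V / 2050 MHz); the exact starting values are my reading of the ranges given, and the model ignores leakage, so it understates power at higher voltage:

```python
def scaled_power(p1_w, f1_mhz, v1, f2_mhz, v2):
    """First-order CMOS dynamic power scaling, P ~ f * V^2.
    Ignores static leakage, so it underestimates at higher voltage."""
    return p1_w * (f2_mhz / f1_mhz) * (v2 / v1) ** 2

# Rough extrapolation from the post above: ~400 W at ~0.925 V / ~2050 MHz,
# projected to 1.10 V / 2200 MHz. Illustrative only.
print(f"~{scaled_power(400, 2050, 0.925, 2200, 1.10):.0f} W")  # -> ~607 W
```

In other words, running 1.10 V at ~2.2 GHz would plausibly need a 600 W+ power limit, which is why the card undervolts itself under the 400 W cap.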


----------



## Carillo

reflex75 said:


> Just improved my scores thanks to the winter time here 🙂 (window open)
> 3090 FE stock aircooled OC
> Port Royal: *14 636* https://www.3dmark.com/3dm/53877759?
> Time Spy: *22 435* https://www.3dmark.com/3dm/53879031?
> 
> Because of the power limit (400w), my voltage drops around 0.9v - 0.95v range, while the frequency remains high between 2Ghz - 2.1Ghz, which is fine for undervolt 😃
> 
> I wonder what frequency could be reached at 1.10v without the power limit... 🤔


Average temp 39°C? Are you benching outside at the North Pole?


----------



## LoucMachine

Lord of meat said:


> Yes!
> make sure u use afterburner and set fan curves.
> My settings:
> power limit 107
> core clock +60
> mem clock +730
> fan curve is at 50 and when the temp hits 65 it goes to 100.
> hope this help ya.


I flashed it yesterday but my fan curve does not act like that? It seems to be acting somewhat normally, just that my GPU stabilises at 75-76°C instead of 68-69°C, and the fan speed goes a bit higher, which is all normal imo as it's 20% more power than original.


----------



## Glerox

I have a FTW3 Ultra and a Strix OC, both sealed. Which one should be the best for overclocking under water? I will return one of them, don't know which one to open haha.


----------



## GAN77

Glerox said:


> I have a FTW3 Ultra and a Strix OC, both sealed. Which one should be the best for overclocking under water? I will return one of them, don't know which one to open haha.


Strix


----------



## Lord of meat

LoucMachine said:


> I flashed it yesterday but my fan curve does not act like that? It seems to be acting somewhat normally, just that my gpu stabilise at 75-76C instead of 68-69C.. and the fan speed goes a bit higher, which is all normal imo as its 20% more power than original.


That's what I set my curve to, sorry for not being clear. Edited it.


----------



## HyperMatrix

Glerox said:


> I have a FTW3 Ultra and a Strix OC, both sealed. Which one should be the best for overclocking under water? I will return one of them, don't know which one to open haha.


Theoretically the Strix, but silicon lottery will matter most. Test both and keep the one that clocks higher with less voltage. You can always sell a used card at cost or a bit higher, even when opened, on Kijiji.


----------



## changboy

Glerox said:


> I have a FTW3 Ultra and a Strix OC, both sealed. Which one should be the best for overclocking under water? I will return one of them, don't know which one to open haha.


They can OC the same; you could be disappointed with both, or one could be better than the other.
I have noticed this from previous posts: the Strix can get coil whine, and it's not fun to hear. Some with the FTW3 claim to have bad performance, but mine runs very well, no coil whine, it's quiet.

No one can give you a real answer to this, you have to pick one.

The only thing I'm thinking is: how the hell do you have those cards and didn't open a box lol


----------



## geriatricpollywog

Glerox said:


> I have a FTW3 Ultra and a Strix OC, both sealed. Which one should be the best for overclocking under water? I will return one of them, don't know which one to open haha.


The Strix OC is Asus's top GPU while the FTW Ultra is still $100-$200 less than the top EVGA card, so there could be some better binning with the Strix.


----------



## HyperMatrix

0451 said:


> The Strix OC is Asus's top GPU while the FTW Ultra is still $100-$200 less than the top EVGA card, so there could be some better binning with the Strix.


There is no binning with the Strix, but as you said, EVGA does bin their KPE cards, which means you have a lower chance of getting a great die with the FTW3. End of the day though...lower chances don’t mean guaranteed lower performance. For all we know, the rejects of the KPE binning process could all be going to the XC3 cards and the FTW3 remains an untouched bin. I’m not saying that’s the case but considering the number of great clocking FTW3 cards, not all the dies are being kept for the kingpin cards. So there’s no way to know what’s happening exactly. 

If I had 2 cards...I'd be testing them both instead of theorycrafting. Haha. Definitely going to test out both the FTW3 Hybrid as well as the HydroCopper before deciding which to keep until (potentially) getting a KingPin. So if I had to blindly roll the dice between a Strix (again) or an FTW3, I'd go with the Strix. But if I had both in my hand? Hell yes I'd test them both. I'd even test a Zotac if I could. Silicon lottery is king and any seemingly bad card could end up with a solid chip.


----------



## geriatricpollywog

HyperMatrix said:


> There is no binning with the Strix, but as you said, EVGA does bin their KPE cards, which means you have a lower chance of getting a great die with the FTW3. End of the day though...lower chances don’t mean guaranteed lower performance. For all we know, the rejects of the KPE binning process could all be going to the XC3 cards and the FTW3 remains an untouched bin. I’m not saying that’s the case but considering the number of great clocking FTW3 cards, not all the dies are being kept for the kingpin cards. So there’s no way to know what’s happening exactly.
> 
> If I had 2 cards...I’d be testing them both instead of theorycrafting. Haha. Definitely going to test out both the FTW3 Hybrid as well as the HydroCopper before deciding which to keep until (potentially) getting a KingPin. So if I had to blindly choose between rolling the dice between a Strix (again) or an FTW3, I’d go with the Strix. But if I had both in my hand? He’ll yes I’d test them both. I’d even test a Zotac if I could. Silicon lottery is king and any seemingly bad card could end up with a solid chip.


The Kingpin is the only non-FE 3090 that makes sense if you were patient enough to wait for it and are buying a card today. For $100 more than the Strix and $200 more than the FTW3 Ultra, you get: unlocked power, better VRM, Classified tool, accurate voltage monitoring, triple bios, OLED screen, 360mm AIO, dipswitch controls, EVGA customer service, and that bin. If I had 2 unopened 3090s I would return them both for the KPE.


----------



## HyperMatrix

0451 said:


> The Kingpin is the only non-FE 3090 that makes sense if you were patient enough to wait for it and are buying a card today. For $100 more than the Strix and $200 more than the FTW3 Ultra, you get: unlocked power, better VRM, Classified tool, accurate voltage monitoring, triple bios, OLED screen, 360mm AIO, dipswitch controls, EVGA customer service, and that bin. If I had 2 unopened 3090s I would return them both for the KPE.


If I had access to the KPE I would agree with you. Haha. FTW3 this year is a budget build card. Although the Hybrid and HydroCopper ones are good value due to the coolers. Gonna grab them both and keep the best die and then keep trying to get a KPE.


----------



## Glerox

0451 said:


> The Kingpin is the only non-FE 3090 that makes sense if you were patient enough to wait for it and are buying a card today. For $100 more than the Strix and $200 more than the FTW3 Ultra, you get: unlocked power, better VRM, Classified tool, accurate voltage monitoring, triple bios, OLED screen, 360mm AIO, dipswitch controls, EVGA customer service, and that bin. If I had 2 unopened 3090s I would return them both for the KPE.


I need to do my build next week, no time to wait for the KINGPIN hydrocopper unfortunately.


----------



## DrunknFoo

Chamidorix said:


> Alright guys, at this point I will literally PayPal some money to someone who can find or PM me the SmashClocks utility Vince mentioned on the GN vs Jay LN2 stream. I went digging heavily for it last night and could not find it at all. Old SysUtility, nTune, ThermSpy builds, got all those, but couldn't find a single smidgen of a link to the SmashClocks/Smash Clocks/Smash Clock? Nvidia utility Vince discusses here:
> 
> Twitch VOD link


Have you looked through archives? 








Guru3D.com - Guru of 3D: Computer PC Hardware and Consumer Electronics reviews (www.guru3d.com)





Walking down memory lane, from when I was super experimental, I do not ever recall hearing of this SmashClocks.

An Nvidia System Tools nickname maybe? /Shrug. Do they happen to reference another GPU series by chance?


----------



## ivans89

Small update: I tested a bit, and after I removed only the screws from the backplate (leaving the backplate on the card) the coil whine was gone 100%.

You can just try removing the 2 screws that are near the slot; that should reduce the whine a lot.


----------



## Cavokk

Glerox said:


> I have a FTW3 Ultra and a Strix OC, both sealed. Which one should be the best for overclocking under water? I will return one of them, don't know which one to open haha.


Just to echo the other responders: I'd go for the Strix as it has the superior electrical design. They do NOT bin, though, and it's all down to silicon lottery. I had both a Strix and an MSI Trio X; the latter was FAR superior despite the inferior PCB/VRM design, and I kept it under water (Alphacool).

So if you don't open them up, I'd go with the Strix

C


----------



## Esenel

ivans89 said:


> Small update, I tested a bit and after I remove only the screws from the backplate (let the backplate on the card) the coil whine was gone 100%.
> 
> You can just try to remove the 2 screws that are near the slot, should reduce the whine a lot.


I can confirm.
*EK waterblock backplate Strix.*

Remove the two screws next to the socket and your coil whine issue with the backplate is gone/near gone.
For me it is nearly gone, and even better than with the Strix air cooler.


----------



## Chamidorix

HyperMatrix said:


> There is no binning with the Strix, but as you said, EVGA does bin their KPE cards, which means you have a lower chance of getting a great die with the FTW3.


The very first bench coming out on the EVGA forum isn't looking particularly impressive at standard voltages. Granted, this one guy is running an ancient cpu + ram so his PR score is bad, but he did bump up the frequency until it crashed and didn't get anything impressive that indicates any meaningful binning. I stand by the assertion that the only benefits of the KPE this gen are a great AIO cooler and an easy persistent software voltage offset plus mem and cache clock adjustment. Still some great benefits for $200 over the Strix, but there's a reason it is only $200 more, and that reason is no binning.

Now, I did more digging and am 90% confident the 3090 KP is using these same shunts as on the 2080ti KP for 8 pin inputs, at least:









I'm newish to hardware modding, so forgive my ignorance, but does anyone know what the hell these are and whether they can easily be lowered in resistance, etc.? I don't know how I would go about finding the spec sheet for one either. BZ just skips right over them and TiN never talks about them in his guide.

Also, confirmed that the LN2 520W bios and the XOC unlimited bios both remove temperature protections. So it still stands that the FTW3 500W bios, on either the Strix or possibly this card with shunt mods, should be the best option for power-unlimited but temperature-safe 24/7 running.
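For the question of "easily lowered in resistance": the usual shunt mod stacks a second resistor on top of the stock one, and the combined resistance follows the standard parallel-resistor formula. Since the controller still assumes the stock value, it under-reads current and power by the ratio of new to old resistance. The numbers below are made up for illustration and are not taken from the Kingpin PCB:

```python
def parallel(r1_ohm, r2_ohm):
    """Equivalent resistance of two resistors in parallel."""
    return r1_ohm * r2_ohm / (r1_ohm + r2_ohm)

# Hypothetical values: an 8 mOhm resistor stacked on a 5 mOhm stock
# current-sense shunt. Neither number is measured from a real card.
stock = 0.005
combined = parallel(stock, 0.008)

# The controller still assumes the stock resistance, so it under-reads
# current (and therefore power) by the ratio of new to old resistance.
print(f"{combined * 1000:.2f} mOhm -> card reports {combined / stock:.0%} of actual power")
```

With these example values the card would report about 62% of its true power draw, effectively raising the power limit by the inverse ratio.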


----------



## geriatricpollywog

Chamidorix said:


> The very first benches coming out on the EVGA forum aren't looking particularly impressive at standard voltages. Granted, one guy is running an ancient cpu + ram but the second guy seemed competent and didn't get anything impressive that indicates any meaningful binning. I stand by the assertion that the only benefit of KPE this gen is a great AIO cooler, and easy persistent software voltage offset + mem and cache clock adjustment. Still some great benefits for 200$ over strix, but there's a reason it is only 200$ more and that reason is no binning.
> 
> Now, I did more digging and am 90% confident the 3090 KP is using these same shunts as on the 2080ti KP for 8 pin inputs, at least:
> View attachment 2467172
> 
> 
> I'm newish to hardware modding, so forgive my ignorance but does anyone know what the hell these are, if they can be easily lowered in resistance, etc? Don't know how I would go about finding the spec sheet for one either. BZ just skips right over them
> 
> Also, confirmed that the LN2 520W bios and the XOC unlimited bios both remove temperature protections. So it still stands that FTW3 500W bios on either strix or possibly this card with shunt mods should be the best option for power unlimited but temperature safe 24/7 running.


The only person I see who has posted results is creekthagrey. Who else posted Kingpin results?


----------



## GRABibus

0451 said:


> The only person I see who has posted results is creekthagrey. Who else posted Kingpin results?





Chamidorix said:


> The very first benches coming out on the EVGA forum aren't looking particularly impressive at standard voltages. Granted, one guy is running an ancient cpu + ram but the second guy seemed competent and didn't get anything impressive that indicates any meaningful binning. I stand by the assertion that the only benefit of KPE this gen is a great AIO cooler, and easy persistent software voltage offset + mem and cache clock adjustment. Still some great benefits for 200$ over strix, but there's a reason it is only 200$ more and that reason is no binning.
> 
> Now, I did more digging and am 90% confident the 3090 KP is using these same shunts as on the 2080ti KP for 8 pin inputs, at least:
> View attachment 2467172
> 
> 
> I'm newish to hardware modding, so forgive my ignorance but does anyone know what the hell these are, if they can be easily lowered in resistance, etc? Don't know how I would go about finding the spec sheet for one either. BZ just skips right over them
> 
> Also, confirmed that the LN2 520W bios and the XOC unlimited bios both remove temperature protections. So it still stands that FTW3 500W bios on either strix or possibly this card with shunt mods should be the best option for power unlimited but temperature safe 24/7 running.


Can the Asus Strix Gaming (not the OC version) be flashed with the 500W FTW3 bios?
What is the reference of this bios?

is it this one ?








EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## Chamidorix

0451 said:


> The only person I see who has posted results is creekthagrey. Who else posted Kingpin results?


Yea, I was wrong, updated.


----------



## Rbk_3

I got my FTW3. Does this look about right for stock default fan curve and voltage/power sliders maxed in an enclosed case?


https://www.3dmark.com/pr/566813


----------



## changboy

Rbk_3 said:


> I got my FTW3. Does this look about right for stock default fan curve and voltage power sliders maxed in an inclosed case?
> 
> 
> https://www.3dmark.com/pr/566813


This is not the ftw3 ultra ?


----------



## Lobstar

Glerox said:


> I have a FTW3 Ultra and a Strix OC, both sealed. Which one should be the best for overclocking under water? I will return one of them, don't know which one to open haha.


Get the strix, the FTW3/Ultra seem to have some sort of bug. EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA - EVGA Forums


GRABibus said:


> Can the Asus Strix gaming (Not the OC version) can be flashed with 500 FTW3 bios ?
> What is the reference of this bios ?
> 
> is it this one ?
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Check this post: EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA - EVGA Forums


----------



## Avacado

Lobstar said:


> Get the strix, the FTW3/Ultra seem to have some sort of bug. EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA - EVGA Forums
> 
> 
> Check this post: EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA - EVGA Forums


Ambient.


----------



## changboy

It doesn't mean, just because 5 customers complain about an FTW3 after they've already sold 2000 cards, that all the cards have a bug or a problem. Don't spread misinformation; I've seen some complaints about the Strix too, if you read around.

With my FTW3 I can clock my memory at +1200 and +180 on the GPU, and even more, and this is on air; I don't think it's bad at all. HyperMatrix sold his Strix because it was underperforming. Does the Strix have a bug? NO


----------



## Rbk_3

changboy said:


> Is this not the FTW3 Ultra?


Sorry yea the Ultra


----------



## changboy

Rbk_3 said:


> Sorry yea the Ultra


It's a bit low to me; I score 14257, but I overclock mine. Try +150 on the GPU and +1000 on the memory and post your score.


----------



## Rbk_3

changboy said:


> It's a bit low to me; I score 14257, but I overclock mine. Try +150 on the GPU and +1000 on the memory and post your score.


Is that at 100% fans? Which Bios? I’ll try that now.


----------



## changboy

Rbk_3 said:


> Is that at 100% fans? Which Bios? I’ll try that now.


100% fans, and try with the original BIOS before anything else. Close all background programs before you run the test.
After you get your score, do the run again with GPU-Z open and look at the total power draw; check the max wattage reported for the board.


----------



## Rbk_3

changboy said:


> It's a bit low to me; I score 14257, but I overclock mine. Try +150 on the GPU and +1000 on the memory and post your score.


Here is +150 +1000 100% fans in enclosed case 



https://www.3dmark.com/pr/566958


----------



## changboy

It's better, right? You can try +175 GPU and +1200 memory lol


----------



## Rbk_3

changboy said:


> It's better, right? You can try +175 GPU and +1200 memory lol


Yeah but still a lot lower than yours at the same settings, no?


----------



## changboy

I put +175 on the GPU and +1200 on the memory, if I remember right.

Overclocking your CPU can give a higher score too, I think.


----------



## HyperMatrix

Chamidorix said:


> The very first benches coming out on the EVGA forum aren't looking particularly impressive at standard voltages. Granted, one guy is running an ancient cpu + ram but the second guy seemed competent and didn't get anything impressive that indicates any meaningful binning. I stand by the assertion that the only benefit of KPE this gen is a great AIO cooler, and easy persistent software voltage offset + mem and cache clock adjustment. Still some great benefits for 200$ over strix, but there's a reason it is only 200$ more and that reason is no binning.


First off it’s only $100 more than the Strix. Secondly, I think people are misunderstanding binning. There are varying degrees of binning. No one is saying the KPEs have been binned to hit 2200MHz. Even if they’re binned for an average of 30MHz over the average card, that’s still a binned card. Doesn’t guarantee 2200MHz but still leaves you with a higher chance of getting there than with a non-binned card.


----------



## changboy

I don't think the Kingpin is binned, BUT IT'S FULLY UNLOCKED on the GPU core for LN2, that's it.
The Kingpin scores around 15100 in Port Royal with its 360mm rad; you can get around that with a waterblock on another card, I think. That means if you don't plan to do LN2 it's very expensive, and like me, I already have a water loop and I'm not sure I can fit another 360mm anywhere. I just want a waterblock for my FTW3; I'm not thinking about the Kingpin.


----------



## Nizzen

changboy said:


> I don't think the Kingpin is binned, BUT IT'S FULLY UNLOCKED on the GPU core for LN2, that's it.
> The Kingpin scores around 15100 in Port Royal with its 360mm rad; you can get around that with a waterblock on another card, I think. That means if you don't plan to do LN2 it's very expensive, and like me, I already have a water loop and I'm not sure I can fit another 360mm anywhere. I just want a waterblock for my FTW3; I'm not thinking about the Kingpin.


My shunt-modded 3090 Strix does 15k in Port Royal on the stock cooler. PS: not recommended to run daily, but I had to try the air cooler too.


----------



## changboy

Nice score dude, but before you shunted your Strix, what was the score of your card?


----------



## HyperMatrix

changboy said:


> I don't think the Kingpin is binned, BUT IT'S FULLY UNLOCKED on the GPU core for LN2, that's it.
> The Kingpin scores around 15100 in Port Royal with its 360mm rad; you can get around that with a waterblock on another card, I think. That means if you don't plan to do LN2 it's very expensive, and like me, I already have a water loop and I'm not sure I can fit another 360mm anywhere. I just want a waterblock for my FTW3; I'm not thinking about the Kingpin.


That would make this the first non-binned Kingpin card ever made. It's possible, but I'd need proof. Gotta wait for average OC results from owners, with regard to the voltage required for clock speeds.


----------



## changboy

HyperMatrix said:


> That would make this the first non-binned Kingpin card ever made. It's possible, but I'd need proof. Gotta wait for average OC results from owners, with regard to the voltage required for clock speeds.


I say that because when Gamers Nexus and JayzTwoCents put those Kingpins under LN2, Vince (Kingpin himself) said: I tested 3 Kingpins; 1 does 16500 and the other 2 I sent you do 17000+ on LN2. So Vince said not all Kingpins will overclock the same.

And can you tell me how they can know which card will be the best overclocker on LN2? From what you say, they test every card produced with LN2? lol. It's clear that even the Kingpin cards won't all clock at the same speed before crashing. In all of this we can see marketing, like with every product on earth.


----------



## Nizzen

changboy said:


> Nice score dude, but before you shunted your Strix, what was the score of your card?


Don't remember, but about 14300-14500


----------



## changboy

Another thing: even if you mod your PCB and your card can draw 10,000 watts of power, you won't clock higher past a certain temperature. At 35°C you won't ever draw all the power your card is capable of; you need to go LN2 to find the end of the overclocking capabilities of any GPU. So all in all, a 3090 is a 3090.


----------



## Rbk_3

changboy said:


> I put +175 on the GPU and +1200 on the memory, if I remember right.
> 
> Overclocking your CPU can give a higher score too, I think.





https://www.3dmark.com/pr/567194



I forgot to turn Shadowplay off. I got 14473 at +195 +1100 max fans in the beta bios with my case open


----------



## changboy

Nizzen said:


> Don't remember, but about 14300-14500


Ya, it's good too, but not much of a difference. Because of that I don't know if I'll try to do anything with mine; maybe I'll just put a waterblock on and keep it like this.

I have the resistors in hand, but it may happen that I won't use them hehehe.


----------



## changboy

Rbk_3 said:


> https://www.3dmark.com/pr/567194
> 
> 
> 
> I forgot to turn Shadowplay off. I got 14473 at +195 +1100 max fans in the beta bios with my case open


Wow, you have a beast of a card, dude! Are you happy now?


----------



## Rbk_3

changboy said:


> Wow, you have a beast of a card, dude! Are you happy now?


Yes  I'm going to be putting the Hybrid Cooler on it when it becomes available. I knew there was something wrong, and it turns out it was ShadowPlay recording in the background. Going to try the memory a little higher.


----------



## bmgjet

EVGA has said they are "cherry picking dies". While that's not really binning, since binning would be sorting them from best to worst across their product stack, they do say they test them at 2.1GHz in their chip tester before they are attached to the PCB. So all you're really getting is more of a guarantee that you're not going to get a super dud that won't go over 2GHz.

I've been playing with the KFA2 BIOS over the weekend on my XC3, and IMO it's the best 2-plug BIOS out at the moment:
350-390W, and all DP ports should work for everyone. It has an 18-phase design, so power balance is better on the entry-level cards.
Managed to get 4 more points than my best run on the Gigabyte Gaming OC BIOS, even though the average clock speed was down 25MHz.


https://www.3dmark.com/pr/564926


----------



## changboy

Rbk_3 said:


> Yes  I'm going to be putting the Hybrid Cooler on it when it becomes available. I knew there was something wrong, and it turns out it was ShadowPlay recording in the background. Going to try the memory a little higher.


I clicked auto-notify for the hybrid cooler too, but I don't think I will buy it, because from all I've seen, with a hybrid cooler you won't clock much higher; temps stay a bit high, not like a real waterblock. I already have a loop on my system. For someone who doesn't, then ya, it's better than nothing.
I'm really not sure I will buy the hybrid cooler; I keep thinking about it.

You beat my score in Port Royal, I'm not so happy about that hehehehe


----------



## Rbk_3

changboy said:


> I clicked auto-notify for the hybrid cooler too, but I don't think I will buy it, because from all I've seen, with a hybrid cooler you won't clock much higher; temps stay a bit high, not like a real waterblock. I already have a loop on my system. For someone who doesn't, then ya, it's better than nothing.
> I'm really not sure I will buy the hybrid cooler; I keep thinking about it.
> 
> You beat my score in Port Royal, I'm not so happy about that hehehehe


It's more for noise than anything for me. This thing gets a little louder in my case than I'd like.

Over 22000 in Time Spy for graphics



https://www.3dmark.com/spy/15751468


----------



## dr/owned

Chamidorix said:


> I'm newish to hardware modding, so forgive my ignorance but does anyone know what the hell these are, if they can be easily lowered in resistance, etc? Don't know how I would go about finding the spec sheet for one either. BZ just skips right over them and TiN never talks about them in guide.


Looks like a 2W shunt resistor instead of the standard 1W ones we're used to. They probably widened out the traces on the power planes so they needed a wider shunt.


----------



## dr/owned

bmgjet said:


> EVGA has said they are "cherry picking dies". While that's not really binning, since binning would be sorting them from best to worst across their product stack, they do say they test them at 2.1GHz in their chip tester before they are attached to the PCB. So all you're really getting is more of a guarantee that you're not going to get a super dud that won't go over 2GHz.


I'll LOL when they launch a KPE Classified in a few months, along with the 3080 Ti, with some truly freakish binning that's a whole different class from "dud vs. not dud" binning.


----------



## geriatricpollywog

Guys, I think its abundantly clear that the 3090 Kingpin is the greatest card there is, was, or ever will be. This will become apparent once I take delivery of mine.


----------



## changboy

Rbk_3 said:


> It's more for noise than anything for me. This thing gets a little louder in my case than I'd like.
> 
> Over 22000 in Time Spy for graphics
> 
> 
> 
> https://www.3dmark.com/spy/15751468


Hey, don't stay stuck with fans at 100%, dude. You can apply your own manual fan curve; this is what I do: 1000 RPM on the 2D desktop, going to 80% at 65°C, then 100% at 80°C.

I don't hear the fans on my card because I hear the fans on my front 280mm rad first, and that rad has industrial Noctua 140mm fans.
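For anyone who wants to reason about a curve like that outside of Afterburner, a piecewise-linear fan curve is easy to sketch. The points below are illustrative assumptions (the 1000 RPM idle is approximated as a 30% duty point, which is a guess, not a measurement):

```python
def fan_percent(temp_c, curve=((40, 30), (65, 80), (80, 100))):
    """Piecewise-linear fan curve from (temp °C, fan %) points,
    clamped below the first point and above the last."""
    pts = sorted(curve)
    if temp_c <= pts[0][0]:
        return pts[0][1]
    if temp_c >= pts[-1][0]:
        return pts[-1][1]
    for (t0, p0), (t1, p1) in zip(pts, pts[1:]):
        if temp_c <= t1:
            # linear interpolation between the two surrounding points
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

print(fan_percent(65))   # 80
print(fan_percent(90))   # 100 (clamped)
```

Tools like Afterburner or FanControl do the same interpolation between the points you drag on the curve editor.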


----------



## changboy

0451 said:


> Guys, I think its abundantly clear that the 3090 Kingpin is the greatest card there is, was, or ever will be. This will become apparent once I take delivery of mine.


Well a kingpin is a Kingpin for sure, you cant go wrong.


----------



## long2905

bmgjet said:


> Iv been playing with the KFA2 bios over the weekend on my XC3 and that IMO is the best 2 plug bios out at the moment.
> 350-390W, All DP ports should work for every one. Has a 18 phase design so power balance is better for the entry level cards.
> Managed to get 4 more points then my best run on the gigabyte gaming oc bios, even tho average clock speed was down 25mhz
> 
> 
> https://www.3dmark.com/pr/564926


I have a similar experience with the KFA2 BIOS. The card seems to be more stable at higher clocks, and the fans run at their correct speed and keep the card cool-ish compared to the Gaming OC BIOS. Haven't attempted to push my PR score with it though.


----------



## Rbk_3

changboy said:


> Hey, don't stay stuck with fans at 100%, dude. You can apply your own manual fan curve; this is what I do: 1000 RPM on the 2D desktop, going to 80% at 65°C, then 100% at 80°C.
> 
> I don't hear the fans on my card because I hear the fans on my front 280mm rad first, and that rad has industrial Noctua 140mm fans.


Oh I don't, that's only for benching.


----------



## Mr.Vegas

Hey guys, can I join your club? I also have a 3090 
Question: what's the best BIOS to flash on a 2x PCIe-powered 3090? I'm water cooling an MSI Ventus OC; I unpacked it today to check the power limit slider and it only goes to 100%, not even 102% like the TRIO X does.
So which is the recommended BIOS for a card that only has 2x power connectors?


----------



## Mr.Vegas

Rbk_3 said:


> Is that at 100% fans? Which Bios? I’ll try that now.


I did +220 on GPU, fans to 100%, no MEM overclock [MSI Ventus OC] and got 13210


----------



## changboy

Mr.Vegas said:


> I did +220 on GPU, fans to 100%, no MEM overclock [MSI Ventus OC] and got 13210


This depends on the card you have and whether it's already overclocked above the reference boost clock.
If your boost clock is the reference one and you add +200, versus a card that's already OC'd out of the box with +200 added, you will get more from the one with the overclocked design.


----------



## Carls_Car

Carillo said:


> Where did you order your water block from ? My is shipped with a chinese dude on bike, probably arriving late 2022.


Haven't ordered one yet. Waiting on Corsair or EKWB. Probably will get an EKWB.


----------



## Carls_Car

Mr.Vegas said:


> I did +220 on GPU, fans to 100%, no MEM overclock [MSI Ventus OC] and got 13210


Default vBIOS: I did +115 GPU, +300 on mem, fans at 85%, and got 13,456 in PR, 18,328 in TS.

To answer you about the Ventus vBIOS, it was suggested to me to try the KFA2. I tried the Gigabyte Gaming OC last night and it crashed in every game I tried. I had stable TS and PR runs though... it added about 200-300 to my scores. Went back to the original BIOS so I could game. I'll probably try the KFA2 vBIOS tonight.


----------



## chispy

Guys, for your information, there are only 2 RTX 3090 cards with an XOC BIOS fully unlocked to 1000W: the Asus Strix XOC 1000W fully unlocked and the Kingpin 1000W fully unlocked BIOS. Good luck trying to get either of these, as they are impossible to get. I tried my best to get either one, but they are not giving these BIOSes away _except to a few top overclockers in the world for big PR stunts running on LN2_, and you have to sign an NDA and can be sued if you leak the BIOS. I did ask the right people, as I used to be sponsored by Asus a long time ago when I was doing lots of LN2 overclocking, but no luck; they are holding these BIOSes very tight. For the Kingpin BIOS there is only one person you can ask: Vince himself (Kingpin), and again, there's no way he is releasing that BIOS to anyone.

My 2 cents !


----------



## changboy

I think a BIOS can be modded with some program like Kepler BIOS Tweaker, but I don't know anything about that hehehe.
One thing is for sure: the companies edit their BIOSes, and they can be changed, like the EVGA XOC BIOS up to 500W. They just edit the BIOS, so for sure someone can do this; you just need to find who.


----------



## geriatricpollywog

chispy said:


> Guys, for your information, there are only 2 RTX 3090 cards with an XOC BIOS fully unlocked to 1000W: the Asus Strix XOC 1000W fully unlocked and the Kingpin 1000W fully unlocked BIOS. Good luck trying to get either of these, as they are impossible to get. I tried my best to get either one, but they are not giving these BIOSes away _except to a few top overclockers in the world for big PR stunts running on LN2_, and you have to sign an NDA and can be sued if you leak the BIOS. I did ask the right people, as I used to be sponsored by Asus a long time ago when I was doing lots of LN2 overclocking, but no luck; they are holding these BIOSes very tight. For the Kingpin BIOS there is only one person you can ask: Vince himself (Kingpin), and again, there's no way he is releasing that BIOS to anyone.
> 
> My 2 cents !


Kingpin said the bios would be released on a forum. He said this before about the 2080ti and it ended up on TiN’s website. There will be more Kingpins available in December. You just need to either keep an eye on Jacob’s Twitter or the EVGA forums.


----------



## Sheyster

Lobstar said:


> Get the strix, the FTW3/Ultra seem to have some sort of bug. EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA - EVGA Forums


I've been following that thread since they released the 500W BIOS. I passed on the FTW3 when I got the invite to buy, and went with the Strix. It's doing just fine with their 500W BIOS installed.


----------



## DrunknFoo

Sheyster said:


> I've been following that thread since they released the 500W BIOS. I passed on the FTW3 when I got the invite to buy, and went with the Strix. It's doing just fine with their 500W BIOS installed.


Lol, why am I running the Strix BIOS on my FTW3 then? I must not be doing fine.


----------



## bmgjet

changboy said:


> I think a BIOS can be modded with some program like Kepler BIOS Tweaker, but I don't know anything about that hehehe.
> One thing is for sure: the companies edit their BIOSes, and they can be changed, like the EVGA XOC BIOS up to 500W. They just edit the BIOS, so for sure someone can do this; you just need to find who.


Editing the BIOS is the easy bit.
Signing it again is the bit that can't be done, so NVFlash rejects any file you edit since it's no longer signed by Nvidia.
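A toy illustration of why that's the case: even a one-byte power-limit edit changes the image's digest, so a signature the vendor computed over the original image no longer verifies. (SHA-256 below is just for illustration; the real scheme is a vendor signature checked by NVFlash and the card's firmware, so matching a hash yourself doesn't help.)

```python
import hashlib

def digest(rom: bytes) -> str:
    """Stand-in for the integrity check a flashing tool performs."""
    return hashlib.sha256(rom).hexdigest()

original = bytes(256 * 1024)     # dummy stand-in for a vBIOS image
edited = bytearray(original)
edited[0x1000] ^= 0xFF           # flip one byte, e.g. a power-limit field

# Any edit changes the digest, so a signature over the original can't match.
assert digest(original) != digest(bytes(edited))
```

This is why the modded BIOSes people flash are full, signed images from other cards (KFA2, Gigabyte, etc.) rather than hand-edited files.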


----------



## Carls_Car

Has anyone tried the 390W KFA2 vbios? It is still under the unverified section, so I am a bit hesitant.


----------



## bmgjet

Carls_Car said:


> Has anyone tried the 390W KFA2 vbios? It is still under the unverified section, so I am a bit hesitant.


Posted my results on it back a page.


----------



## Carls_Car

bmgjet said:


> Posted my results on it back a page.


Ah missed it, sorry man. I'll go dig. Thanks!


----------



## Skubaben

Okay, I got my 3090 TUF to stop crashing without downclocking it. I downloaded the driver removal tool (DDU) from Guru3D, did a safe-mode uninstall, and reinstalled the drivers. I got about 10% more performance in Doom Eternal, and it doesn't crash anymore. On the first install I had just used the Nvidia removal, which didn't remove the registry entries. I haven't tried to overclock it yet; I kind of don't want or need to atm. But it boosts to 1980 stock and stays around 1920-1860 ish in game.


----------



## wesley8

3090 Vulcan OC flashed XOC 500w vbios with Bykski waterblock :
PR 15345 https://www.3dmark.com/pr/551934
TSE 12117 https://www.3dmark.com/spy/15688314
TS 23096 https://www.3dmark.com/spy/15714286


----------



## pat182

So I'm happy to report that my 3D-printed air funnel makes my Strix fully stable on air: 2085MHz at 480W @ 66°C max during more than an hour in Control, which was maxing my card non-stop.


----------



## Carillo

chispy said:


> Guys, for your information, there are only 2 RTX 3090 cards with an XOC BIOS fully unlocked to 1000W: the Asus Strix XOC 1000W fully unlocked and the Kingpin 1000W fully unlocked BIOS. Good luck trying to get either of these, as they are impossible to get. I tried my best to get either one, but they are not giving these BIOSes away _except to a few top overclockers in the world for big PR stunts running on LN2_, and you have to sign an NDA and can be sued if you leak the BIOS. I did ask the right people, as I used to be sponsored by Asus a long time ago when I was doing lots of LN2 overclocking, but no luck; they are holding these BIOSes very tight. For the Kingpin BIOS there is only one person you can ask: Vince himself (Kingpin), and again, there's no way he is releasing that BIOS to anyone.
> 
> My 2 cents !


The same rules applied for Pascal and Turing, but there is always someone willing to share the BIOS, like they did then. Sooner or later it will show up. But I don't see the point if you're not planning to do LN2, since most of the safety features are disabled. I'd rather shunt the card with the stock BIOS; then you at least keep the over-current and overheating protection.


----------



## ExDarkxH

Lol, I didn't even know about HAGS (Hardware-Accelerated GPU Scheduling) until now. Wonder if it will boost my benchmark scores.


----------



## chispy

Carillo said:


> The same rules applied for Pascal and Turing, but there is always someone willing to share the BIOS, like they did then. Sooner or later it will show up. But I don't see the point if you're not planning to do LN2, since most of the safety features are disabled. I'd rather shunt the card with the stock BIOS; then you at least keep the over-current and overheating protection.



Exactly amigo, hence why I shunt-modded my cards: I knew it would be almost impossible to get this XOC BIOS from Asus and EVGA. Anyhow, you don't need it anymore once you have shunted your card 

Kind Regards: chispy


----------



## dangerSK

0451 said:


> Guys, I think its abundantly clear that the 3090 Kingpin is the greatest card there is, was, or ever will be. This will become apparent once I take delivery of mine.


disagree, Galax OC Lab are best


----------



## Carls_Car

Getting great TS results with the KFA2 vBIOS; however, it's not stable in any games with an OC that takes the card over 2GHz. Works just fine with the original vBIOS. Guess I'm stuck at 350W with this card.


----------



## long2905

Carls_Car said:


> Getting great TS results with the KFA2 vBIOS; however, it's not stable in any games with an OC that takes the card over 2GHz. Works just fine with the original vBIOS. Guess I'm stuck at 350W with this card.


run DDU and reinstall driver


----------



## escapee

chispy said:


> Exactly amigo, hence why I shunt-modded my cards: I knew it would be almost impossible to get this XOC BIOS from Asus and EVGA. Anyhow, you don't need it anymore once you have shunted your card
> 
> Kind Regards: chispy


The 1000w XOC bios netted me 1000+ points on port royal. The XOC removes all the 'invisible' power limits which most people are facing even with shunt mods


----------



## dante`afk

I'll be receiving the Bitspower and Bykski FE blocks this week. Which one would you use, and why?

I don't like the inlet/outlet positions on the Bitspower block, but I've also heard the fins are spaced too far apart on the Bykski?


----------



## changboy

escapee said:


> The 1000w XOC bios netted me 1000+ points on port royal. The XOC removes all the 'invisible' power limits which most people are facing even with shunt mods


 Post the bios so we can download it


----------



## Apecos

Mr.Vegas said:


> I did +220 on GPU, fans to 100%, no MEM overclock [MSI Ventus OC] and got 13210


The best score on my 3090 Ventus was 13647 in PR with the stock MSI BIOS and UNDERVOLTING THE CARD. Try a custom curve with 825mV and 1890MHz; that's the sweet spot for that card. Please try it and you will see.

I forgot to say that I flashed the KFA2 390W vBIOS and the results were the same as the MSI stock BIOS, so I flashed back to stock.

The Gigabyte 3090 Gaming OC vBIOS is a waste of time; that BIOS is for 2000 RPM fans and the Ventus uses 3500 RPM fans (you lose 1500 RPM of cooling).
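As a rough sanity check on why undervolting helps so much: dynamic power scales roughly with V²·f, so dropping from ~1.0V to 0.825V buys far more headroom than the small clock loss costs. The baseline numbers below are illustrative assumptions, not measurements of a Ventus:

```python
def scaled_power(p0_w, v0, f0, v1, f1):
    """First-order CMOS dynamic power scaling, P ~ C * V^2 * f.
    Ignores leakage and static draw, so treat results as a rough estimate."""
    return p0_w * (v1 / v0) ** 2 * (f1 / f0)

# Assumed baseline: 350W at ~1.00V / 1950MHz, undervolted to 0.825V / 1890MHz.
print(round(scaled_power(350, 1.00, 1950, 0.825, 1890)))  # ~231W
```

That ~30% power cut for ~3% clocks is why an undervolted curve often scores better on a power-limited card: the card stops bouncing off the limiter.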


----------



## dante`afk

chispy said:


> Exactly amigo, hence why I shunt-modded my cards: I knew it would be almost impossible to get this XOC BIOS from Asus and EVGA. Anyhow, you don't need it anymore once you have shunted your card
> 
> Kind Regards: chispy


post it or you don't have it.


----------



## dante`afk

escapee said:


> The 1000w XOC bios netted me 1000+ points on port royal. The XOC removes all the 'invisible' power limits which most people are facing even with shunt mods




post it or you don't have it.


----------



## Falkentyne

dante`afk said:


> post it or you don't have it.


There's a 0% chance he has the 1000W Bios. Zero percent.


BTW let me know how your 8 mOhm SRC experiment goes. If my guess is correct, your PCIE slot will drop down to 55W, 8 pin#1 will go up to 170W and 8 pin #2 will stop at 125-130W.


----------



## escapee

Falkentyne said:


> There's a 0% chance he has the 1000W Bios. Zero percent.
> 
> 
> BTW let me know how your 8 mOhm SRC experiment goes. If my guess is correct, your PCIE slot will drop down to 55W, 8 pin#1 will go up to 170W and 8 pin #2 will stop at 125-130W.


If you get your hands on it one day, the MD5 for it is 431DAE48FB87AEAAAF6715953A29ABFD. I uploaded pics of GPU-Z reports a week ago reporting my success with this BIOS.
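If a file claiming to be that BIOS ever surfaces, checking it against the posted MD5 takes a few lines (streamed, so a large ROM doesn't need to fit in memory). Note that a matching MD5 only says it's the same file as the one posted about; it says nothing about it being safe to flash:

```python
import hashlib

EXPECTED_MD5 = "431dae48fb87aeaaaf6715953a29abfd"  # value claimed in the post

def md5_hex(data: bytes) -> str:
    """MD5 digest of an in-memory buffer, as lowercase hex."""
    return hashlib.md5(data).hexdigest()

def md5_of_file(path, chunk_size=1 << 20):
    """MD5 of a file, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_posted_hash(path):
    return md5_of_file(path).lower() == EXPECTED_MD5
```

On Windows, `certutil -hashfile file.rom MD5` gives the same digest without any code.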


----------



## chispy

dante`afk said:


> post it or you don't have it.


Is 2430MHz enough proof?

Here you go buddy. I don't have a problem posting pictures of my hardware, unlike some people, and you know who you are  . I won't mention his name, as he doesn't want to provide any pictures of his system or RTX 3090, or release the XOC 1000W BIOS and real benchmarks at HWBot like he promised. Not worth mentioning his name; all of you know who this guy is  .


----------



## Carillo

escapee said:


> If you get your hands on it one day, the MD5 for it is 431DAE48FB87AEAAAF6715953A29ABFD. I uploaded pics of GPUZ reports a week ago reporting my success with this bios


Why do you keep bragging about being in possession of this BIOS in this community when you clearly don't want to share it?? What a self-righteous ****!


----------



## ExDarkxH

Yeah, in fact I believe I was the first to call out his fake scores weeks ago.
It's a shame too; he has 2 fake accounts, both top 10 in the leaderboards. He wants everyone to believe his "#1 in the world BS run",
as well as another score with a 3090 clocking at 2,427 MHz @ *47°C* 

LOLOL, while using RAM clocked at 2,132 MHz with an i7 4790K.


Seriously, just petition 3DMark to investigate and remove his fraudulent scores.


----------



## chispy

Carillo said:


> why do you keep bragging about being in possession of this bios in this community when you clearly don’t want to share it ?? What self righteous ****!


He is a fake. I already asked the right people at Asus, and they told me only 2 people have the Asus XOC BIOS, for big PR stunts on LN2 at HWBot for the top 2 RTX 3090 scores, and they are at the very top; plus you have to sign an NDA and can be sued if you leak this BIOS from Asus. I also asked Vince, a dear close friend; I was a moderator on the Kingpin forum for many years and had the privilege of overclocking with him at his lab, and no, hell no, he did not give this guy the 1000W XOC BIOS. This guy keeps lying and does not have the 1000W XOC BIOS. Everything about this guy is fake. Better to ignore anything he says...


----------



## Falkentyne

chispy said:


> Is 2430MHz enough proof?
> 
> Here you go buddy. I don't have a problem posting pictures of my hardware, unlike some people, and you know who you are  . I won't mention his name, as he doesn't want to provide any pictures of his system or RTX 3090, or release the XOC 1000W BIOS and real benchmarks at HWBot like he promised. Not worth mentioning his name; all of you know who this guy is  .
> 
> 
> 
> 
> 
> 
> 
> 
> View attachment 2467257
> 
> 
> View attachment 2467258
> View attachment 2467259
> View attachment 2467260


I'm almost 100% sure Dante was trying to reply to escapee, not you. He wrote the same reply to both of you.


----------



## ExDarkxH

OGS and Kingpin work very hard for their scores. They deserve the right to battle it out for #1 legitimately.
It's important to them, and these scores bring them business opportunities and put food on their tables. 

This guy is a shameless scumbag, and we know the tricks he used to get those scores.

What a loser escapee is.

Please petition 3DMark and get his crap taken down.


----------



## chispy

Falkentyne said:


> I'm almost 100% sure Dante was trying to reply to escapee, not you. He wrote the same reply to both of you.


I'm sorry, that was directed at the cheater and liar escapee, or escapeee, or escapeeee lol; he has too many accounts on FM. No harm done, it's all cool with me  . I can prove any of my benchmarks as 100% legit and have the pics of the hardware and mods to prove it, unlike this escape 💩


----------



## OC2000

Anyone know if a shunt mod can partly work on a 3090?

My Strix has 8 mΩ stacked on the PCIe shunt and 5 mΩ on the others. I'm seeing a total system draw of over 800W now in Time Spy Extreme, yet I'm still getting power capped in the normal Time Spy. Surely with these shunts I should be seeing total system power draws over 900-1000W.

There's definitely a power increase, as before these shunts I saw no higher than 680W using the Strix 480W BIOS.

I used the paint method, so perhaps one of the 7 shunts isn't perfect. I was always under the assumption, though, that either they all work or none of them work.
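For anyone else doing the math on a stacked or painted shunt: the controller computes current from the voltage across what it assumes is the stock shunt, so lowering the effective resistance makes it under-read by the factor R_eff / R_stock. A sketch (the 5 mΩ stock value and the stacked resistances are assumptions for illustration; actual per-rail shunt values vary by board):

```python
def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def reported_power(actual_w, r_stock_mohm, r_added_mohm):
    """Power the controller *reports* when a resistor is stacked on the
    stock shunt: the sensed voltage drops by R_eff / R_stock, so the
    card under-reads actual draw by that factor."""
    r_eff = parallel(r_stock_mohm, r_added_mohm)
    return actual_w * (r_eff / r_stock_mohm)

# Stacking 5 mOhm on an assumed 5 mOhm stock shunt halves the reading,
# so a 480W cap would then allow ~960W of actual draw:
print(reported_power(960, 5, 5))        # 480.0
# An 8 mOhm stack on 5 mOhm only drops the reading to ~61.5%:
print(round(parallel(5, 8) / 5, 3))     # 0.615
```

A paint-method shunt that didn't take on one rail leaves that rail at its stock reading, which would explain hitting the limiter on one test but not another, since different workloads load the rails differently.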


----------



## Falkentyne

OC2000 said:


> Anyone know if a shunt mod can partly work on a 3090?
> 
> My Strix has 8 mΩ stacked on the PCIe shunt and 5 mΩ on the others. I'm seeing a total system draw of over 800W now in Time Spy Extreme, yet I'm still getting power capped in the normal Time Spy. Surely with these shunts I should be seeing total system power draws over 900-1000W.
> 
> There's definitely a power increase, as before these shunts I saw no higher than 680W using the Strix 480W BIOS.
> 
> I used the paint method, so perhaps one of the 7 shunts isn't perfect. I was always under the assumption, though, that either they all work or none of them work.


Is the power cap on test #2 on Timespy, in two locations around the beginning of the test?
You can see it easily on GPU-Z if you look at it right when the test ends.


----------



## chispy

ExDarkxH said:


> OGS and Kingpin work very hard for their scores. They deserve the right to battle it out for #1 legitimately.
> It's important to them, and these scores bring them business opportunities and put food on their tables.
> 
> This guy is a shameless scumbag, and we know the tricks he used to get those scores.
> 
> What a loser escapee is.
> 
> Please petition 3DMark and get his crap taken down.


+1, exactly this. He is beating the top overclockers in the world, the legends of the overclocking scene that nobody can beat, yet he claims to beat them all lol; makes anyone wonder, uhhh..  . And there is no track record anywhere of his benchmarks except at FM/UL, where he has many accounts. He also has a new account at HWBot but has posted zero, nada benchmarks there. Why? Easy: he cheats his way and would be ripped apart by the real overclockers. Best to ignore everything he posts, bro.


----------



## chispy

OC2000 said:


> Anyone know if a shunt mod can partly work on a 3090?
> 
> My Strix has 8 mΩ stacked on the PCIe shunt and 5 mΩ on the others. I'm seeing a total system draw of over 800W now in Time Spy Extreme, yet I'm still getting power capped in the normal Time Spy. Surely with these shunts I should be seeing total system power draws over 900-1000W.
> 
> There's definitely a power increase, as before these shunts I saw no higher than 680W using the Strix 480W BIOS.
> 
> I used the paint method, so perhaps one of the 7 shunts isn't perfect. I was always under the assumption, though, that either they all work or none of them work.


To fully utilize all that power (680+ watts is no joke) you will need to go to subzero temps. The card will still throttle down at that high wattage even with shunt mods, on water at ambient temps; it needs lots of cooling, below ambient.


----------



## geriatricpollywog

The 2080ti KPE XOC bios was posted on Xdevs for anybody to download. If the same is true for the 3090 KPE bios, can anybody with a 3090 flash it?


----------



## OC2000

Falkentyne's post should have been quoted here...


I'm getting it on the first graphics test too, although it maintains the same boost throughout. This is the non-Extreme Time Spy.
I'm also getting vOP and vRel constantly.


----------



## HyperMatrix

chispy said:


> View attachment 2467258


You probably don't want to share closeups of your resistor solder job. Haha.


----------



## OC2000

chispy said:


> To fully utilize all that power (800+ watts is no joke) you will need to go to subzero temps. The card will still throttle down at that high wattage even with shunt mods, on water at ambient temps; it needs lots of cooling, below ambient.


I can understand that if that were the GPU alone, but this is what iCUE is reporting from the AX1500i as full system draw. It seems right now I'm only getting an extra 100W, which is a little low considering the resistance I've shunted on top.


----------



## chispy

OC2000 said:


> I'm getting it on the first graphics test too, although it maintains the same boost throughout. This is the non-extreme Time Spy.
> I'm also getting vOp and vRel constantly too.


If you can maintain the same boost throughout the benchmark at given clocks, I can say that's really good; nice silicon. High clocks need subzero to maintain the same stable boost through all the benchmarks. Temps play a big role on Ampere.

As for vOp and vRel, I get those in GPU-Z too. It seems GPU-Z sometimes goes crazy with shunt mods, because if I use a curve in MSI AB I won't get vOp and vRel at the same clocks. Tested it; you will see.


----------



## chispy

HyperMatrix said:


> You probably don't want to share closeups of your resistor solder job. Haha.


Hahahaha... ugly job, I know, but it works 100%. Oh well, I'm 52 years young and my eyesight is not good anymore.


----------



## chispy

0451 said:


> The 2080ti KPE XOC bios was posted on Xdevs for anybody to download. If the same is true for the 3090 KPE bios, can anybody with a 3090 flash it?


Sadly, we do not know yet, until and unless that BIOS comes out. Keep your eyes on the EVGA forum and Vince's Facebook page.


----------



## dante`afk

chispy said:


> I'm sorry if that was directed at the cheater and liar escapee, or escapeee, or escapeeee lol, he has too many accounts on FM. No harm done, it's all cool with me. I can prove any of my benchmarks as 100% legit and have the pics of the hardware and mods to prove it, unlike this escape 💩


Yeah, this should have gone to that other guy, not you.

Damn, that soldering job... and I thought mine was ugly/bad.


----------



## Falkentyne

OC2000 said:


> I can understand that if that was the GPU alone, but this is what ICue is reporting from the AXi 1500 as full system draw. Seems right now I’m only getting an extra 100w which is a little low considering the ohms I’ve shunted on top.


Your PCIe slot shunt is limiting you. Your power draw is limited by the highest-mΩ shunt on the power plane.
Apparently, "interesting" things will happen if the Source (PP source input shunt) is the highest-mΩ shunt (even higher than the PCIe slot).
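A rough sketch of the mechanism (illustrative numbers only; the stock 5 mΩ shunts and per-rail limits below are assumptions, not values from any specific BIOS): the controller measures the voltage drop across each shunt and assumes the stock resistance, so reported power scales with the modded resistance, and the least-reduced rail hits its reported limit first.

```python
# Sketch: why the least-reduced (highest remaining mOhm) shunt throttles first.
# The controller computes power from the shunt voltage drop assuming the STOCK
# resistance, so reported power = actual power * (r_mod / r_stock) per rail.

def actual_at_limit(limit_w: float, r_stock: float, r_mod: float) -> float:
    """Actual watts a rail can draw before its REPORTED draw hits limit_w."""
    return limit_w * (r_stock / r_mod)

# Assumed stock 5 mOhm shunts; the slot shunt modded less than the 8-pins.
rails = {
    "PCIe slot": actual_at_limit(66.0, 5.0, 4.0),    # lightly modded -> 82.5 W
    "8-pin #1": actual_at_limit(150.0, 5.0, 2.5),    # halved -> 300 W
    "8-pin #2": actual_at_limit(150.0, 5.0, 2.5),
}
for rail, watts in rails.items():
    print(f"{rail}: throttles once actual draw passes ~{watts:.0f} W")
```

In this toy model the slot rail, having kept the most resistance, caps out long before the 8-pins, which matches the point above about the highest-mΩ shunt gating the card.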


----------



## Chamidorix

dr/owned said:


> Looks like a 2W shunt resistor instead of the standard 1W ones we're used to. They probably widened out the traces on the power planes so they needed a wider shunt.


Thank you so much! 2W was the key word I was looking for. Shunting this thing should be fun



escapee said:


> The 1000w XOC bios netted me 1000+ points on port royal. The XOC removes all the 'invisible' power limits which most people are facing even with shunt mods





changboy said:


> Post the bios so we can download it





escapee said:


> If you get your hands on it one day, the MD5 for it is 431DAE48FB87AEAAAF6715953A29ABFD. I uploaded pics of GPUZ reports a week ago reporting my success with this bios





Carillo said:


> why do you keep bragging about being in possession of this bios in this community when you clearly don’t want to share it ?? What a self righteous ****!


Ah... drama... Hopefully Galax is our savior this generation again. I've been stuck on my cheapo MSI 2080 Ti waiting for the KPE, and I forgot how wonderful the Galax XOC BIOS was... just plug and play, gaining complete control of the card. Pulling 600W at 1.125V in Shadowlands feels good.


----------



## chispy

dante`afk said:


> yea this should have went to that other guy not you
> 
> damn that soldering job, and I thought mine is ugly/bad


I bet there is someone on this thread with worse-looking soldering jobs than me 😁. I have re-done the soldering on the shunts 2 times already, as after running subzero they start to fail. I don't know if it's the heavy Vaseline used to protect the card from condensation, or the cold temps when it goes back to normal (contraction and expansion of the solder going from negative to positive temps).


----------



## HyperMatrix

chispy said:


> I bet there is someone on this thread with worse-looking soldering jobs than me 😁. I have re-done the soldering on the shunts 2 times already, as after running subzero they start to fail. I don't know if it's the heavy Vaseline used to protect the card from condensation, or the cold temps when it goes back to normal (contraction and expansion of the solder going from negative to positive temps).


More flux. Less solder. Best advice I can give you. Haha.


----------



## dante`afk

I like how separated the shunts are on the Strix; less risk of spreading solder onto other parts.

On the FE they are very cramped.


----------



## DrunknFoo

HyperMatrix said:


> More flux. Less solder. Best advice I can give you. Haha.


SMD solder paste is literally made for shunt and SMD work. No fuss, no mess.

Downside... price, compared to regular flux-core solder.

A demo had me switch from conventional solder (sorry, wrong vid) lol

The stuff works with a good iron: just apply it between the contact points, tap with the iron for a second, and it'll do the trick.


----------



## jura11

dante`afk said:


> I'll be receiving the bitspower and bykski FE block this week. Which one would you use and why?
> 
> I dont' like the inlet/outlet positions on the bitspower block, but also have head the fins are too widespread on the bykski?


Personally I would keep the Bykski waterblock. I use that block, and many here are using their blocks too; they perform quite well.

The Bykski backplate works well in my case too, no issues.

Hope this helps.

Thanks, Jura


----------



## man1ac

Would you guys switch from an FE to the Zotac? I could get a Zotac for MSRP, but it seems the power limit is about 50W lower, so I can't hope for a faster card even if I get a golden sample, can I? Would you guys switch?
(And I kinda can't figure out if my 3090 has a good chip; any way to find out for certain? I managed to get 13.9k to 14k in Port Royal... but that's about it...)


----------



## HyperMatrix

DrunknFoo said:


> Smd solder paste is literally created for shunt n smd work. No fuss no mess
> 
> Downside... Price compared to regular flux core solder
> 
> demo that had me switch from conventional solder (sorry wrong vid) lol
> 
> 
> 
> 
> 
> stuff works with a good iron, just apply between the contact points and tap with iron for a second and it'll do the trick


I don't like it. For the simple fact that it makes me look like an amateur. Haha. And here I thought I was well equipped with my SMD soldering iron and flux pen. Learn something new every day.


----------



## HyperMatrix

man1ac said:


> Would you guys switch from an FE to the Zotac? I could get a Zotac for MSRP, but it seems the power limit is about 50W lower, so I can't hope for a faster card even if I get a golden sample, can I? Would you guys switch?
> (And I kinda can't figure out if my 3090 has a good chip; any way to find out for certain? I managed to get 13.9k to 14k in Port Royal... but that's about it...)


Just shunt mod it if it's an actual better GPU die. Power limit is the easiest thing to overcome. You can test out theoretical clock speeds on your FE/other cards by doing a reduced load. Maybe 60% so you're not hitting power limit. See how high you can clock it like that. It should give you a rough idea of best case scenario overclocking under water and with no power limit.


----------



## OC2000

Argh, forgot to insert Falkentyne's quote again.

I feared that might be the case. The Strix slot only pulls mid-40s watts max, so maybe a 5 mΩ there won't be too bad. I just got the Crosshair VIII Dark Hero, so there's no additional PCIe power input; with a 5 mΩ there it would be pulling around 100W.

I still have the EVGA BIOS to play around with, and as chispy mentioned, I haven't looked at modding the curve yet.
Mine isn't a golden chip but will hold 2145–2160, so I should get a 15k PR score, which is my goal, and a 2100 extended-period gaming clock.

Struggling with the EK backplate though. It's a nightmare to even put on!


----------



## man1ac

HyperMatrix said:


> Just shunt mod it if it's an actual better GPU die. Power limit is the easiest thing to overcome. You can test out theoretical clock speeds on your FE/other cards by doing a reduced load. Maybe 60% so you're not hitting power limit. See how high you can clock it like that. It should give you a rough idea of best case scenario overclocking under water and with no power limit.


Thanks. I'll get my Bykski by the end of the year (from China; I had a pretty sweet deal). What do you mean by 60%? Currently I run the card at 0.875V with 1890MHz and that's rock stable (I use 20 runs of Metro for this, and when that ran through I never had any problems). I did nothing else for "this" score, like killing everything in Task Manager etc. Stock fan curve and go...


----------



## jomama22

Got a 3090 strix to mess with. Gets lower scores at 480w than my FE at 400w in all 3dmark tests.

Such a shame. Was hoping it would be at least equal.

This is my FE @ 400w: https://www.3dmark.com/spy/15777006

I'm guessing this is at least a decent chip. Just gonna shunt mod this now, as the Strix didn't work out.

If anyone is looking for an EK 3090 strix waterblock, I have one available. Will just sell at $170 +shipping in US. It's unused. Also have a backplate for it which I will sell for $40.


----------



## Rbk_3

I am having issues with a custom fan curve in Afterburner on my FTW3. It doesn't track with my curve at all. Anyone else have issues? Without a custom curve it ramps up to 80%.



----------



## man1ac

jomama22 said:


> Got a 3090 strix to mess with. Gets lower scores at 480w than my FE at 400w in all 3dmark tests.
> 
> Such a shame. Was hoping it would be at least equal.
> 
> This is my FE @ 400w: https://www.3dmark.com/spy/15777006
> 
> I'm guessing this is at least a decent chip. Just gonna shunt mod this now, as the Strix didn't work out.
> 
> If anyone is looking for an EK 3090 strix waterblock, I have one available. Will just sell at $170 +shipping in US. It's unused. Also have a backplate for it which I will sell for $40.


Sorry, I am quite new to this: what settings besides the 400W? How much curve offset? Air or water? I kinda want to figure out how my 3090 performs, but I never got anywhere near that. (Got 21100 at +200/800 with a stock curve.)


----------



## OC2000

jomama22 said:


> Got a 3090 strix to mess with. Gets lower scores at 480w than my FE at 400w in all 3dmark tests.
> 
> Such a shame. Was hoping it would be at least equal.
> 
> This is my FE @ 400w: https://www.3dmark.com/spy/15777006
> 
> I'm guessing this is at least a decent chip. Just gonna shunt mod this now, as the Strix didn't work out.
> 
> If anyone is looking for an EK 3090 strix waterblock, I have one available. Will just sell at $170 +shipping in US. It's unused. Also have a backplate for it which I will sell for $40.


What was up with your Strix? I got 222xx on air cooling and felt my card wasn't great.


----------



## HyperMatrix

man1ac said:


> Thanks. I'll get my Bykski by the end of the year (from China; I had a pretty sweet deal). What do you mean by 60%? Currently I run the card at 0.875V with 1890MHz and that's rock stable (I use 20 runs of Metro for this, and when that ran through I never had any problems). I did nothing else for "this" score, like killing everything in Task Manager etc. Stock fan curve and go...


I mean open up a game in windowed mode. Set an FPS cap in RTSS that lowers your GPU usage to around 60%. Remember that FPS cap number (for example, 80 fps). Set the FPS cap to 1. Wait until your card cools down to around 30C. Enter in a new clock speed. Set FPS cap in RTSS from 1 back to 80 (or whatever the FPS cap was to get 60% usage in that game). Look at how high your card clocks, and watch how much it drops as the card heats up. Set FPS cap back to 1 in RTSS and wait for the card to cool down. Set new clock speeds. And repeat. Fans at 100% for this testing.


----------



## Sheyster

DrunknFoo said:


> Lol why am i running the strix bios on my ftw? I must not be doing fine


You'll get better performance with the EVGA 500W BIOS. See Framechasers Strix video.


----------



## DrunknFoo

Sheyster said:


> You'll get better performance with the EVGA 500W BIOS. See Framechasers Strix video.


Uhhh no, that guy is a fool


----------



## Sheyster

jomama22 said:


> Got a 3090 strix to mess with. Gets lower scores at 480w than my FE at 400w in all 3dmark tests.


Flash the EVGA 500W BIOS to your Strix and you'll be pleasantly surprised with benchmark results.


----------



## Sheyster

DrunknFoo said:


> Uhhh no, that guy is a fool


His results were easy for me to replicate. I won't comment on him being a fool or not. My last SuperPosition 4K run was good enough to be in the top 20 and I wasn't even trying that hard. This is on air with the stock cooler mind you.


----------



## HyperMatrix

DrunknFoo said:


> Uhhh no, that guy is a fool


While the Framechasers guy is a tool, I had better results on my Strix with the EVGA 500W BIOS. I think the reason is the 3rd 8-pin power connector not reporting any power usage, so the card thinks it's well below its power limit and keeps a higher voltage going for longer.


----------



## Sheyster

HyperMatrix said:


> While the framechasers guy is a tool, I had better results on my Strix with the EVGA 500W bios. I think the reason for that is due to the 3rd 8-pin power connector not reporting any power usage, and the card thinking it's well below its power limit, and keeping a higher voltage going for longer.


Exactly this... thank you. It also pulls power from the 3x 8-pin first, before the PCIe slot, which also helps. You might see 50W being pulled from the slot, max.


----------



## DrunknFoo

HyperMatrix said:


> While the framechasers guy is a tool, I had better results on my Strix with the EVGA 500W bios. I think the reason for that is due to the 3rd 8-pin power connector not reporting any power usage, and the card thinking it's well below its power limit, and keeping a higher voltage going for longer.


The beta BIOS is functioning as designed on all other 3x 8-pin PCBs besides the FTW, to the best of my knowledge...

Out of the three available 500W BIOSes from EVGA, on the FTW, 1 out of, let's say, 40 reboots and driver resets would appear to set the TDP limit flag correctly, allowing the FTW to pull an average of 490–495W and peak at 520W (FurMark) consistently. Otherwise, the TDP limit flag appears to be randomly set lower, and can be as low as 380W for some reason (this does not appear to be temp-, clock-, or voltage-related). I honestly haven't a clue why.

Regardless, at this moment in time my FTW can draw and hold 760–780W consistently in FurMark using the Strix BIOS, with temps peaking at 72°C at a case internal temperature of about 15–16°C. The beta BIOS would have a higher peak of up to approximately 810–815W (rare) but significantly lower, random average draws. Even lower-TDP BIOSes, e.g. the Colorful 400W I tested, held higher draws. (Thought I would test that one because at one point someone with the Colorful was in the top 10 on the PR ladder...)

Anyway, prior to shunting, the same was true with the stock EVGA 450W and the 480W Strix BIOS: they held lower peak draws but consistently higher averages compared to the beta, which might not even draw 420W on average; but if and when the beta decided to work as intended, 490–520W was achievable.


----------



## jomama22

OC2000 said:


> what was up with your Strix. I got 222xx on air cooling and feel my card wasn’t great.


No idea. Lost the silicon lottery is all I can guess. It's possible something funky happened with Windows when I switched the cards out, but it seems pretty doubtful. Core and memory clocks more or less matched up with what I would expect at the score I was getting (about 21750 graphics in Time Spy at 480W, average core clock about 2020 with +900–1000 on the memory).

The score might be slightly lower than I would expect at those clocks, but not by much. I can always toss it back in and see what's up, but meh.

The 500W BIOS isn't going to do much here, as it seems to be a crap chip. Wanted to shunt it, but it's not worth doing at all. The card struggled to hit 2045 core at 1.0V.


----------



## HyperMatrix

DrunknFoo said:


> The beta BIOS is functioning as designed on all other 3x 8-pin PCBs besides the FTW, to the best of my knowledge...
> 
> Out of the three available 500W BIOSes from EVGA, on the FTW, 1 out of, let's say, 40 reboots and driver resets would appear to set the TDP limit flag correctly, allowing the FTW to pull an average of 490–495W and peak at 520W (FurMark) consistently. Otherwise, the TDP limit flag appears to be randomly set lower, and can be as low as 380W for some reason (this does not appear to be temp-, clock-, or voltage-related). I honestly haven't a clue why.


The bios works in that it pulls 500W on the Strix. But it doesn't report any power usage on the 3rd power connector. So your card will just report around 330W or so. Actual peaks were above 500W. I set off the OCP alarm on my 900W battery backup in games where my CPU usage went above 50% as well. Never happened with the stock bios. But the beta bios definitely held on to higher voltage for longer, even when temps went up really high. It kept 2040MHz even up to 77C on my terrible card after 10 minutes of gameplay. Haha. Unfortunately eventually dropped to 2025MHz beyond that and crashed at 81C.

I'm not sure of the reason behind it not working properly on all FTW3 cards, although it works on some. But at least on the Strix, that was my experience. Could finish PR run with 2040MHz average clock. On stock bios, couldn't break 2000MHz average.


----------



## DrunknFoo

HyperMatrix said:


> The bios works in that it pulls 500W on the Strix. But it doesn't report any power usage on the 3rd power connector. So your card will just report around 330W or so. Actual peaks were above 500W. I set off the OCP alarm on my 900W battery backup in games where my CPU usage went above 50% as well. Never happened with the stock bios. But the beta bios definitely held on to higher voltage for longer, even when temps went up really high. It kept 2040MHz even up to 77C on my terrible card after 10 minutes of gameplay. Haha. Unfortunately eventually dropped to 2025MHz beyond that and crashed at 81C.
> 
> I'm not sure of the reason behind it not working properly on all FTW3 cards, although it works on some. But at least on the Strix, that was my experience. Could finish PR run with 2040MHz average clock. On stock bios, couldn't break 2000MHz average.


Yeah, the wattage-pulled estimate is from my outlet, less 150W of system/CPU draw...

Temps will hinder the clock and alter the voltage; I wouldn't say your card is terrible yet, you really need a block to see what it can do... The only thing you can really tell on air, in my experience, is the RAM frequency potential.


----------



## jomama22

man1ac said:


> Sorry, I am quite new to this: What settings except the 400W? How much curve+ ? Air/Water? I kinda want to figure out how my 3090 performs but I never got anywhere near that. (Got 21100 at +200/800 with a stock curve)


That's running +225/900 from the stock curve line. It is possible there is something going on with Windows or the BIOS.

When I took my Strix out and put my FE back in, I noticed lower scores in Time Spy than what I was getting normally, about 200–300 points lower at the same clocks. If I raised the clocks, it would actually give me a HIGHER average clock with no crashes in Time Spy, but still the lower results.

I tried DDU and a driver reinstall; that made it even worse than before. Now I was getting 1000 points lower than normal.

Did a fresh install of Windows... even worse score! Now about 4000 points lower! Something was clearly messed up.

Did a CMOS reset of the BIOS, completely turned off the system and let it chill. Went back into Windows, tried it again. Still lower scores!

Tried suggestions for setting Windows power to performance. No change. Messed with AB a bit. No change.

Checked GPU-Z sensors after running its full-screen PCIe version checker, and noticed that the voltage being used was horribly low, like 0.79–0.9 for the entire run, even though board power was being more or less maxed out.

I went into AB, clicked reset to default a bunch of times, closed AB, then checked the GPU-Z thing again... man, now the voltage seemed more or less back to normal (0.9–1.05). Ran a Time Spy and bam, normal scores again.

Used AB, set my usual overclock and bam, back to what I was used to seeing.

I have no idea what fixed it or anything. Seriously, no idea. I never had to use the performance plan before to get anything to work.

But even now, something just doesn't seem 100% right. My Time Spy core averages are higher than before while getting more or less the same scores. I would think the higher average would help somewhere, but I really don't know.

My Port Royal sits at about 14285, which seems kinda low to me? But I don't have much reference data on what it should be, tbh.


----------



## DrunknFoo

jomama22 said:


> That's running +225/900 from the stock curve line. It is possible there is something going on with Windows or the BIOS.
> 
> When I took my Strix out and put my FE back in, I noticed lower scores in Time Spy than what I was getting normally, about 200–300 points lower at the same clocks. If I raised the clocks, it would actually give me a HIGHER average clock with no crashes in Time Spy, but still the lower results.
> 
> I tried DDU and a driver reinstall; that made it even worse than before. Now I was getting 1000 points lower than normal.
> 
> Did a fresh install of Windows... even worse score! Now about 4000 points lower! Something was clearly messed up.
> 
> Did a CMOS reset of the BIOS, completely turned off the system and let it chill. Went back into Windows, tried it again. Still lower scores!
> 
> Tried suggestions for setting Windows power to performance. No change. Messed with AB a bit. No change.
> 
> Checked GPU-Z sensors after running its full-screen PCIe version checker, and noticed that the voltage being used was horribly low, like 0.79–0.9 for the entire run, even though board power was being more or less maxed out.
> 
> I went into AB, clicked reset to default a bunch of times, closed AB, then checked the GPU-Z thing again... man, now the voltage seemed more or less back to normal (0.9–1.05). Ran a Time Spy and bam, normal scores again.
> 
> Used AB, set my usual overclock and bam, back to what I was used to seeing.
> 
> I have no idea what fixed it or anything. Seriously, no idea. I never had to use the performance plan before to get anything to work.
> 
> But even now, something just doesn't seem 100% right. My Time Spy core averages are higher than before while getting more or less the same scores. I would think the higher average would help somewhere, but I really don't know.
> 
> My Port Royal sits at about 14285, which seems kinda low to me? But I don't have much reference data on what it should be, tbh.


Can't use the same offsets on different boards; they vary. +125 can equate to +160 on another PCB or even another BIOS.
Temperatures make or break scores; give the GPU time to idle and cool off between runs.
If for any reason you get stuck where a previous run cannot be repeated, disabling/re-enabling your drivers or simply rebooting the system may help (or that run was 100% luck).
Best advice if you're new would be not to compare results with those on the ladder. Set a stock baseline and compare your recent runs with your previous ones.
Also, if you are focusing on the GPU, only look at the GPU score between runs per session, as CPU/RAM settings (if not set in stone) can alter Time Spy results.

Good luck


----------



## jomama22

DrunknFoo said:


> Can't use the same offsets on different boards; they vary. +125 can equate to +160 on another PCB or even another BIOS.
> Temperatures make or break scores; give the GPU time to idle and cool off between runs.
> If for any reason you get stuck where a previous run cannot be repeated, disabling/re-enabling your drivers or simply rebooting the system may help (or that run was 100% luck).
> Best advice if you're new would be not to compare results with those on the ladder. Set a stock baseline and compare your recent runs with your previous ones.
> Also, if you are focusing on the GPU, only look at the GPU score between runs per session, as CPU/RAM settings (if not set in stone) can alter Time Spy results.
> 
> Good luck


Yeah, trust me, I've been doing all of that. The GPU is always down to 24–26°C before any test bench run, I've only been caring about GPU score, same exact CPU setup, etc. This is with the exact same card in the exact same system. The only change was swapping my FE out for a Strix and then putting the FE back in, that's it.

If you take a look on the interwebs, lots of people have weird issues with the 3080 or 3090 simply not scoring how it should. We're talking thousands of points lower on GPU score in benchmarks.

There is something very screwy going on somewhere. Some people think it's Intel Speed Shift, but... I have a 5950X and am running a manual overclock. I could try disabling C-states and see what that does; possibly something weird going on there all of a sudden.

Regardless, there is something that is absolutely causing these weird issues for myself and others. Like I said with the scores, I literally was just sitting at the desktop messing with AB, running benchmarks and getting obviously wrong scores, and then bam, all of a sudden they worked.


----------



## bmgjet

jomama22 said:


> Yeah, trust me, I've been doing all of that. The GPU is always down to 24–26°C before any test bench run, I've only been caring about GPU score, same exact CPU setup, etc. This is with the exact same card in the exact same system. The only change was swapping my FE out for a Strix and then putting the FE back in, that's it.
> 
> If you take a look on the interwebs, lots of people have weird issues with the 3080 or 3090 simply not scoring how it should. We're talking thousands of points lower on GPU score in benchmarks.
> 
> There is something very screwy going on somewhere. Some people think it's Intel Speed Shift, but... I have a 5950X and am running a manual overclock. I could try disabling C-states and see what that does; possibly something weird going on there all of a sudden.
> 
> Regardless, there is something that is absolutely causing these weird issues for myself and others. Like I said with the scores, I literally was just sitting at the desktop messing with AB, running benchmarks and getting obviously wrong scores, and then bam, all of a sudden they worked.


If you're doing benchmarking, you need to go into the Windows power plan and set the CPU minimum speed to 100%, then go into NVIDIA Control Panel and set power management to prefer maximum performance.
Otherwise you'll get a different score each time, since it takes a few seconds during the first frames of the benchmark for the CPU and GPU to go from low-power to max-power mode, which in Port Royal is worth about 200 points. Time Spy even more so, since it drops back to an idle state when loading the next test.
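The Windows power-plan half of that can also be scripted. These are standard `powercfg` aliases, but treat this as a config sketch and verify against your active plan (run from an elevated command prompt):

```shell
:: Pin the minimum processor state to 100% on the active plan (AC power)
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
:: Re-apply the scheme so the new value takes effect
powercfg /setactive SCHEME_CURRENT
```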


----------



## defiledge

what frequency and scores are considered "good" for the 500W bios?


----------



## Falkentyne

jomama22 said:


> Yeah trust me, been doing all of that. GPU is always down to 24-26* before any test bench, only been caring about gpu score, same exact cpu setup, etc. This is with the exact same card in the exact same system. The only change was swapping my fe out for a strix and then putting the fe back in, that's it
> 
> If you take a look on the interwebs, lots of people have weird issues with the 3080 or 3090 simply not scoring how it should be. Were talking thousands of points lower on gpu score in benchmarks.
> 
> There is somthing very screwy going on somewhere. Some people think it's Intel speedster but...I have a 5950x and am running a manual overclock. I could try disabling cstates and see what that does, possible somthing weird going on there all of a sudden.
> 
> Regardless, there is somthing that is absolutely causing these wired issues for myself and others. Like I said with the scores, I literally was just sitting at the desktop messing with ab, running benchmarks and getting obviously wrong scores and them bam, all of a sudden they worked.


I've seen the same things myself. Seems to be about a 300 point difference happening under exact same conditions. It's either something getting "Drivered", or possibly something funky happening with the VF curves. It's already well known that if you adjust ONLY the 1.093v voltage point upwards and not the other points, your score gets noticeably worse...


----------



## DrunknFoo

Nvm, i havent really benched since the last update... My bad for replying ignorantly


----------



## DrunknFoo

defiledge said:


> what frequency and scores are considered "good" for the 500W bios?


FWIW, 14800 in Port Royal was achieved using the 450W EVGA BIOS on my FTW3. If temps are in check, a 15400+ score should be obtainable at 500W. (Dunno what frequencies are being averaged.)


----------



## defiledge

DrunknFoo said:


> Fwiw port royal 14800 was achieved using the 450w evga bios on my ftw3. If temps are in check 15400+ score should be obtainable. (dunno what freq are being averaged)


Is this on water? I'm getting around 14k in PR with a 3900X on air, and that was with the balanced power plan, if that affects it. I'm wondering if upgrading my CPU will make a significant difference too.


----------



## bmgjet

defiledge said:


> what frequency and scores are considered "good" for the 500W bios?



You can't really give an estimate, since silicon quality varies a lot.
My turd card:
Stock: 12.7K, 50°C
Stock BIOS, max power limit and OC (366W): 13.3K, 55°C
390W BIOS: 13.9K, 60°C
Shunt modded, 450W: 14.4K, 70°C
Waterblock and 520W: 14.7K, 49°C


https://www.3dmark.com/pr/564926


----------



## jura11

defiledge said:


> is this on water?. Im getting around 14k PR with a 3900x on air, and that was with balanced power plan if that affects it. I'm wondering if upgrading my cpu will make a significant difference also.


14k for a 500W BIOS, I would say that's quite low. My best result is 14206, I think, with the KFA2/Galax 390W BIOS on a 2x 8-pin Palit RTX 3090 GamingPro without shunts, with a Bykski waterblock and backplate (temperatures stay at 32–34°C). I'm running a 3900X as well, with a 4.35GHz OC.

Hope this helps.

Thanks, Jura


----------



## DrunknFoo

defiledge said:


> is this on water?. Im getting around 14k PR with a 3900x on air, and that was with balanced power plan if that affects it. I'm wondering if upgrading my cpu will make a significant difference also.


No, the card is on air... I have a pump and a rad disconnected from the rest of the loop, serving as a built-in AC for the case until I get a block... 23 fans in total doesn't hurt either.

Ambient of about 24°C; inside the case it can get as cold as 13°C if I ramp up the fans and pump.


----------



## DrunknFoo

bmgjet said:


> You can't really give an estimate, since silicon quality varies a lot.
> My turd card:
> Stock: 12.7K, 50°C
> Stock BIOS, max power limit and OC (366W): 13.3K, 55°C
> 390W BIOS: 13.9K, 60°C
> Shunt modded, 450W: 14.4K, 70°C
> Waterblock and 520W: 14.7K, 49°C
> 
> 
> https://www.3dmark.com/pr/564926


Have you tried the Strix BIOS? The 500W beta is no bueno for my FTW. Give it a shot: Strix with +130 core to start, and see where you end up (assuming the base clocks are even).

Oh wait, the XC is 2x 8-pin, right?.. nvm if that's the case.


----------



## bmgjet

DrunknFoo said:


> Have you tried the strix bios? The 500w beta is no bueno for my ftw. Give it a shot, strix with +130 core to start and see where you end up (assuming base clock is even)
> 
> Oh wait xc is 2 pin right?.. nvm if thats the case


Yeah, it's a 2-plug card, so a 3-plug BIOS would result in less power.
The 390W BIOS is the best I can get, and I'm using 15 mΩ shunts stacked.
I'm not willing to push any more power since I already have 100W spikes hitting the PCIe slot.
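For anyone curious about the shunt-stack arithmetic: stacking a resistor in parallel with the stock shunt lowers the effective resistance, so the card under-reads its own power draw. A minimal sketch, assuming a 5 mΩ stock shunt (the actual value varies by board, so check yours) with a 15 mΩ resistor stacked on top:

```python
def parallel(r1, r2):
    """Effective resistance of two shunt resistors stacked in parallel."""
    return (r1 * r2) / (r1 + r2)

stock = 0.005    # assumed 5 mOhm stock shunt (board-dependent, not measured)
stack = 0.015    # 15 mOhm resistor soldered on top, as described above

r_eff = parallel(stock, stack)        # 3.75 mOhm effective
scale = r_eff / stock                 # card now senses 75% of the true current
true_power_at_390w = 390 / scale      # a "390 W" reading is really ~520 W

print(f"{r_eff * 1000:.2f} mOhm, reads {scale:.0%} of true draw, "
      f"390 W reported = {true_power_at_390w:.0f} W actual")
```

Under these assumptions, a 390W reported limit allows roughly 520W of real draw, which is why stacked shunts plus a mid-range BIOS can behave like a much higher power limit.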


----------



## Carls_Car

long2905 said:


> run DDU and reinstall driver


Will this really help though?

I don't need to reinstall the driver each time I flash the card, do I?


----------



## jomama22

DrunknFoo said:


> Have you tried the strix bios? The 500w beta is no bueno for my ftw. Give it a shot, strix with +130 core to start and see where you end up (assuming base clock is even)
> 
> Oh wait xc is 2 pin right?.. nvm if thats the case


That poop Strix of mine would only do +110 on the core lmao. Genuinely a turd of silicon.

Edit: on another note, if his card is shunted, it may be worth a shot to try a 3-pin BIOS. It may actually benefit him.


----------



## wesley8

defiledge said:


> what frequency and scores are considered "good" for the 500W bios?


Just flashed the XOC 500W vBIOS without any mods:
PR 15345 https://www.3dmark.com/pr/551934
TSE 12117 https://www.3dmark.com/spy/15688314
TS 23096 https://www.3dmark.com/spy/15714286


----------



## DrunknFoo

wesley8 said:


> Just flashed XOC 500w vbios without any mod :
> PR 15345 https://www.3dmark.com/pr/551934
> TSE 12117 https://www.3dmark.com/spy/15688314
> TS 23096 https://www.3dmark.com/spy/15714286


Average temperature 34 °C
what card and block?


----------



## jomama22

wesley8 said:


> Just flashed XOC 500w vbios without any mod :
> PR 15345 https://www.3dmark.com/pr/551934
> TSE 12117 https://www.3dmark.com/spy/15688314
> TS 23096 https://www.3dmark.com/spy/15714286


That is one nice chip. Do you have any record of your scores at stock bios/400w or so?


----------



## wesley8

jomama22 said:


> That is one nice chip. Do you have any record of your scores at stock bios/400w or so?


With stock cooling and vBIOS (420W):
TSE 11213 https://www.3dmark.com/spy/15211809
TS 21299 https://www.3dmark.com/spy/15211537


----------



## wesley8

DrunknFoo said:


> Average temperature 34 °C
> what card and block?


3090 Vulcan OC and Bykski waterblock


----------



## defiledge

Will you all be upgrading to Hopper 5nm when it comes out in 2021? Inb4 $3k upgrade


----------



## bmgjet

defiledge said:


> will you all be upgrading to hopper 5nm when it comes out in 2021? inb4 3k upgrade


yes


----------



## changboy

What is Hopper 5nm in 2021?


----------



## DrunknFoo

so if i hit apply on my ftw3 what's the worst that can happen?


----------



## DrunknFoo

nevermind nothing changed =P


----------



## Falkentyne

DrunknFoo said:


> View attachment 2467325
> 
> 
> 
> so if i hit apply on my ftw3 what's the worst that can happen?



Be careful if you don't want to play Space Invaders!


----------



## DrunknFoo

LMAO
Well, someone on the EVGA forum said they got the KP in their hands... hope they upload the firm BIOS for us to try. JayzTwoCents refused to upload/share, stating it was incompatible... likely due to accountability. Meh, guess we will confirm sooner or later.


----------



## HyperMatrix

DrunknFoo said:


> LMAO
> well someone on evga forum said they got the kp in their hands... hope they upload the firm bios for us to try, jayz2cents refused to upload / share... stating incompatible.. likely due to accountability... meh, guess we will confirm sooner or later


Just the 520W BIOS under LN2 mode. Although Jacob said they'll be releasing the other BIOS through their forums later, apparently.


----------



## geriatricpollywog

wesley8 said:


> Just flashed XOC 500w vbios without any mod :
> PR 15345 https://www.3dmark.com/pr/551934
> TSE 12117 https://www.3dmark.com/spy/15688314
> TS 23096 https://www.3dmark.com/spy/15714286





DrunknFoo said:


> View attachment 2467325
> 
> 
> 
> so if i hit apply on my ftw3 what's the worst that can happen?


At least get the 2080ti tool that disables OCP.


----------



## DrunknFoo

0451 said:


> At least get the 2080ti tool that disables OCP.


Lol it was just the first one i came across bahaha


----------



## Lobstar

Rbk_3 said:


> Yeah but still a lot lower than yours at the same settings, no?


Yours is scoring the same as mine essentially. Welcome to the suck.


----------



## man1ac

jomama22 said:


> That's running +225/900 from stock curve line. It is possible there is somthing going on with windows or the bios.
> 
> My port royal sits at about 14285 which seems kinda low to me? But I don't have much reference data on what it should be tbh


Thanks for the detailed answer!
With +225/800 I got about 14k in PR (just stock air), and it doesn't seem to get anywhere near your score. But I've observed these weird scores too, even with a 70MHz higher average clock but a lower score.
The weird thing for me is, on some runs I get a max clock of >2150MHz with this, and on others just about 2050MHz...


----------



## changboy

Port Royal is a ray tracing benchmark, and it doesn't always translate directly to performance in every game. A 14,300 score versus 15,300 is a 7% increase, and that's with ray tracing only; without ray tracing the gap can be smaller.

If you score 14,300 and get 50 fps, the guy who scores 15,300 will get about 53 fps in a ray-traced game. This means that in everyday normal use with your RTX 3090 it's hard to tell the difference, at least that's what I think.

If you get 30 fps, then it's time to buy a new model, hehehe.
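The back-of-envelope math above checks out, assuming (as a rough approximation) that Port Royal score scales linearly with frame rate:

```python
low, high = 14300, 15300
gain = (high - low) / low            # about a 7% difference

fps_low = 50
fps_high = fps_low * (1 + gain)      # ~53.5 fps under the linear-scaling assumption

print(f"{gain:.1%} faster: {fps_low} fps becomes about {fps_high:.1f} fps")
```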


----------



## changboy

I played The Division 2 for an hour, and this is what GPU-Z tells me:


----------



## LoucMachine

Can someone explain to me what exactly is PWR_SRC ?


----------



## man1ac

I could get the FTW3 Ultra for 1900€.
Is it worth getting the card? I own the FE but just don't want to shunt it (because I'd lose the warranty). With the 500W BIOS it seems like a nice choice (I want to upgrade to water soon).


----------



## DrunknFoo

changboy said:


> I play 1 hour the division 2 and this is what gpuz tell me :
> View attachment 2467346


This doesn't really show us anything other than what your card is spiking at. I would assume the peaks are simply the transitions between load screens or zones... Use the average to paint a picture, not the max; even better, interval captures of current draw would give more detail.
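To illustrate the point about interval captures: log the power draw at a fixed interval and summarize it, rather than trusting a single max reading. A minimal sketch; the sample values below are made up for illustration, and one real way to collect them is `nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits -l 1`:

```python
import statistics

def summarize(samples_w):
    """Summarize interval power captures: the average paints the picture,
    the max mostly shows transient spikes."""
    return {
        "avg_w": statistics.fmean(samples_w),
        "max_w": max(samples_w),
        "min_w": min(samples_w),
    }

# Hypothetical 1-second captures: steady ~350 W with one load-screen spike.
samples = [348.2, 351.7, 349.9, 472.5, 350.4, 352.1]
print(summarize(samples))
```

The average here stays near 370W even though the max reads 472W, which is exactly why a single peak number in GPU-Z is misleading.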


----------



## DrunknFoo

man1ac said:


> I could get the FTW Ultra for 1900€
> Is it worth to get the card? I own the FE but just don't want to shunt it (because I'll loose warranty). With the 500W bios it seems like a nice choice (I want to upgrade to Water soon)


Based on what you wrote, because money is a factor... from an overclocker's perspective I would honestly say no. Just get a block for the FE, find a stable OC (the gains would likely be negligible) and enjoy playing games and/or work (whatever you use it for).

If you want to get into overclocking, OCing an FE, an FTW, or even a 1080 Ti is basically the same thing... Set a goal and try to achieve it, or keep working on improving a score or goal.

Just my two cents


----------



## DrunknFoo

LoucMachine said:


> Can someone explain to me what exactly is PWR_SRC ?


I can only assume it serves as another power rail for the GPU (it couples power from various sources and sends it somewhere; would the VRM make sense? Then to the GPU or whatever). I don't remember exactly what my multimeter told me, but there is some direct relation to one or two of the power pins (not all three) and maybe even the PCIe slot... That said, not all resistors associated with the four power sources passed a continuity test to it. Either that, or my multimeter is faulty, or I was too high to hit the proper contact points.

In other words, I'd like to know too 🤣


----------



## Darth_Buster

I have a question: I recently got myself a new Trinity RTX 3090, installed my EK Vector Waterblock + Backplate and flashed the Gigabyte Gaming OC BIOS and overclocked via Afterburner +135 Core +350 Memory. But when I leave Folding @ Home running, my 3090 only draws ~300 Watts of Power (see my Screenshot for further details)

Is that normal, or did I possibly apply the thermal pads incorrectly? My GPU is usually around ~47°C, max 52°C, in my loop, and drops instantly to below 40°C when I stop folding/playing.
I once ran Time Spy and got what I think is a pretty normal score: https://www.3dmark.com/3dm/53810805?
I have about the same power draw while playing Cold War, but when playing Destiny 2 it sometimes hits 390W like it's supposed to.
Anybody have a clue what/where to look, or should I just run a few more benchmarks and see if my performance is steady?


----------



## Carls_Car

The reason you're seeing less power draw is because your GPU Load isn't at 99-100%.


----------



## Bradwell

Hello, I have what I believe to be the highest non-LN2/dry ice/single-stage score on PR, and I'm wondering if there's anything else I can do to get higher on the leaderboard. I have an FTW3 Ultra running the Gigabyte 2-pin BIOS, and the card pulls about 75-85W more than the 500W BIOS, which equates to roughly 505W. I'm a little hesitant to shunt the PCIe, but I do have PCIe power on my Z490 Godlike. Is shunting my only option left? Any custom BIOSes I should know about? Thanks guys!
PR Score 15,837
https://www.3dmark.com/pr/514398


----------



## Carls_Car

Don't see many people posting about the Ventus 3X OC (probably has something to do with price per watt xD), so here are my results. Stock vbios, on air. BTW, love seeing all your results, extremely fascinating!

Some of this is obvious but...
+0 on core voltage (Does this actually affect score/performance? I haven't touched it yet.)
Full power limit (This card is limited to 100%/350W as you may know.)
Left temp limit at 83 (Should I drag this to max? I'm not hitting max so I don't think changing this would matter...)
+110 on core
+300 on memory

3DMark seems to be OK with higher core OCs, and I could go higher on core, but anything over +110 crashes the games I play, so I felt like I'd provide some everyday numbers. In Gears 5/Apex @ 1440p my clocks are easily between 2025-2085MHz depending on GPU load. Full load in games drops my clocks to somewhere in the low 1900s/high 1800s (as seen in the 3DMark scores below).

90% fan speed. Got my window open and PC near it, so my temps were nice and frosty. Can't wait to slap a water block on this bad boy. I don't think any are available just yet, are they?

TS: https://www.3dmark.com/spy/15805067
TSE: https://www.3dmark.com/spy/15804543
PR: https://www.3dmark.com/pr/576218

I could probably get higher overall scores if I bumped the speed up on my RAM and CPU, but I feel like the GPU scores aren't too shabby for this card. Any/all feedback is appreciated.


----------



## Nizzen

Bradwell said:


> Hello, I have what I belive to be the highest non LN2 or dice or ss score on PR and im wondering if there's anything else I can do to get higher on the leaderboard. I have an FTW3 Ultra running the Gigabyte 2 pin bios and the card pulls about 75-85w more than 500w bios which equates to roughly 505w. I'm a little hesitant to shunt the pcie but I do have pcie power on my z490 godlike. Is shunting my only option left? Any custom bios' I should know about? Thanks guys!
> PR Score 15,837
> https://www.3dmark.com/pr/514398


Nice job!
You beat me (Nzz) and my friend Carillo, so I guess we have to work harder with our chilled water 

Next run will be dual-rank memory, 2x16GB, for sure


----------



## reflex75

Bradwell said:


> Hello, I have what I belive to be the highest non LN2 or dice or ss score on PR and im wondering if there's anything else I can do to get higher on the leaderboard. I have an FTW3 Ultra running the Gigabyte 2 pin bios and the card pulls about 75-85w more than 500w bios which equates to roughly 505w. I'm a little hesitant to shunt the pcie but I do have pcie power on my z490 godlike. Is shunting my only option left? Any custom bios' I should know about? Thanks guys!
> PR Score 15,837
> https://www.3dmark.com/pr/514398


Great score without exotic cooling!
The only way to increase your frequency is to feed more voltage!
And thanks to your lower temp, you should be able to tame the beast!
The only concern is your power limit, 500w is not enough to compete against the "unlimited" power rivals...


----------



## Apecos

Carls_Car said:


> Don't see many people posting about the Ventus 3X OC (probably has something to do with price per watt xD), so here are my results. Stock vbios, on air. BTW, love seeing all your results, extremely fascinating!
> 
> Some of this is obvious but...
> +0 on core voltage (Does this actually affect score/performance? I haven't touched it yet.)
> Full power limit (This card is limited to 100%/350W as you may know.)
> Left temp limit at 83 (Should I drag this to max? I'm not hitting max so I don't think changing this would matter...)
> +110 on core
> +300 on memory
> 
> 3D Mark seems to be OK with higher core OC's, and I could go higher on core, but anything over 110 crashes my games that I play so I felt like I'd provide some everyday numbers. In Gears 5/Apex @ 1440p my clocks are easily between 2025-2085 depending on GPU load. Full load in games drops my clocks to somewhere in the low 1900's high 1800's (as seen in 3D Mark scores below).
> 
> 90% fan speed. Got my window open and PC near it, so my temps were nice and frosty. Can't wait to slap a water block on this bad boy. I don't think any are available just yet, are they?
> 
> TS: https://www.3dmark.com/spy/15805067
> TSE: https://www.3dmark.com/spy/15804543
> PR: https://www.3dmark.com/pr/576218
> 
> I could probably get higher overall scores if I bumped the speed up on my RAM and CPU, but I feel like the GPU scores aren't too shabby for this card. Any/all feedback is appreciated.


Hi, I have the same card as you, and I got the same scores as you. I tried different vBIOSes (Gigabyte and Galax 390W); the best results were with the stock BIOS (MSI). Try a custom curve at 825mV and 1890MHz and you will see.
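A fixed low-voltage curve point like that helps because dynamic power scales roughly with V²·f, so a voltage drop buys a lot of power headroom for a small clock cost. A back-of-envelope sketch; the stock operating point here is a hypothetical assumption, not a measurement:

```python
# First-order approximation: dynamic power ~ V^2 * f (ignores static leakage).
v_stock, f_stock = 1.000, 1980    # hypothetical stock boost point (V, MHz)
v_lock,  f_lock  = 0.825, 1890    # locked curve point suggested above

rel_power = (v_lock / v_stock) ** 2 * (f_lock / f_stock)   # ~0.65
rel_clock = f_lock / f_stock                               # ~0.95

print(f"~{rel_power:.0%} of stock power at ~{rel_clock:.0%} of stock clocks")
```

Under these assumptions, the locked point spends about two-thirds of the power budget for roughly 95% of the clocks, which is why it helps a card stuck at a 350W limit.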


----------



## Bradwell

reflex75 said:


> Great score without exotic cooling!
> The only way to increase your frequency is to feed more voltage!
> And thanks to your lower temp, you should be able to tame the beast!
> The only concern is your power limit, 500w is not enough to compete against the "unlimited" power rivals...


From what I've seen, the FTW3 is tough to use the evc2 with. Might not work at all?


----------



## ALSTER868

I've read the last dozen pages carefully and couldn't figure it out: is it worth flashing the 500W EVGA BIOS onto the Strix? Or is the Strix BIOS no worse or less performant than EVGA's?


----------



## reflex75

Carls_Car said:


> Don't see many people posting about the Ventus 3X OC (probably has something to do with price per watt xD), so here are my results. Stock vbios, on air. BTW, love seeing all your results, extremely fascinating!
> 
> Some of this is obvious but...
> +0 on core voltage (Does this actually affect score/performance? I haven't touched it yet.)
> Full power limit (This card is limited to 100%/350W as you may know.)
> Left temp limit at 83 (Should I drag this to max? I'm not hitting max so I don't think changing this would matter...)
> +110 on core
> +300 on memory
> 
> 3D Mark seems to be OK with higher core OC's, and I could go higher on core, but anything over 110 crashes my games that I play so I felt like I'd provide some everyday numbers. In Gears 5/Apex @ 1440p my clocks are easily between 2025-2085 depending on GPU load. Full load in games drops my clocks to somewhere in the low 1900's high 1800's (as seen in 3D Mark scores below).
> 
> 90% fan speed. Got my window open and PC near it, so my temps were nice and frosty. Can't wait to slap a water block on this bad boy. I don't think any are available just yet, are they?
> 
> TS: https://www.3dmark.com/spy/15805067
> TSE: https://www.3dmark.com/spy/15804543
> PR: https://www.3dmark.com/pr/576218
> 
> I could probably get higher overall scores if I bumped the speed up on my RAM and CPU, but I feel like the GPU scores aren't too shabby for this card. Any/all feedback is appreciated.


350W for a hungry GA-102 is starvation!
Even my 400W FE is not enough for benching!
You should start at least with a 390w bios, if you don't want to do shunt-mod.
Then work your voltage/frequency while lowering your temp.
There is still room for improvement 
https://www.3dmark.com/3dm/53877759?


----------



## Carls_Car

Apecos said:


> HI I have the same card as you, and I got the same scores like you. I tryed diferents vbios (gigabyte and galax 390w) The best results was with the stock bios (msi) TRY CUSTOM CURVE 825MVOLTS AND 1890 MHZ and you will see.


Thanks dude! Would you mind taking a snap of your curve?
I too tried different vbios (the same ones you tried). I got good scores, but it was unstable in games with OC. I'm sure I could mess around and get it stable, but stock bios seems to be just fine.



reflex75 said:


> 350W for a hungry GA-102 is starvation!
> Even my 400W FE is not enough for benching!
> You should start at least with a 390w bios, if you don't want to do shunt-mod.
> Then work your voltage/frequency while lowering your temp.
> There is still room for improvement
> https://www.3dmark.com/3dm/53877759?


I know! T_T. I want to feed it some power, but as you can see above, I've tried a 390w vbios and it just isn't stable in games. I'll probably give it a go again when I get a water block on it. Someone suggested reinstalling video drivers after flashing the vbios. Would this help with stability in games, or does it all come down to voltage/clocks?


----------



## Apecos

Carls_Car said:


> Thanks dude! Would you mind taking a snap of your curve?
> I too tried different vbios (the same ones you tried). I got good scores, but it was unstable in games with OC. I'm sure I could mess around and get it stable, but stock bios seems to be just fine.
> 
> 
> I know! T_T. I want to feed it some power, but as you can see above, I've tried a 390w vbios and it just isn't stable in games. I'll probably give it a go again when I get a water block on it. Someone suggested reinstalling video drivers after flashing the vbios. Would this help with stability in games, or does it all come down to voltage/clocks?


This afternoon I will send you some pics (fan curve). Check your private messages!


----------



## orbitech

ALSTER868 said:


> Have read carefully a dozen of last pages and couldn't figure it out: is it worth flashin' 500W EVGA bios into the Strix? Or its bios is no worse or less performant than EVGA's?


Tried it; eventually it helped achieve some higher clocks, but without a waterblock it won't make a difference unless you care a lot about 3DMark scores. There you can gain a couple of hundred points, but realistically nothing important. I reverted back to the official ASUS one; unless ASUS releases a 500W BIOS, I won't bother again without water cooling.


----------



## ExDarkxH

Bradwell said:


> From what I've seen, the FTW3 is tough to use the evc2 with. Might not work at all?


I was actually interested in this as well. Is there any point on the card where we can attach an EVC2, or are we **** out of luck with the FTW3?


----------



## Carillo

BY BY ESCAPEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE


----------



## ExDarkxH

Nice! His #7 score is even more suspect, however. They should take that down as well.


----------



## ALSTER868

orbitech said:


> Tried it, eventually it helped achieve some higher clocks but w/o a waterblock it won't make a change unless you care so much about some 3DMark scores. There you can achieve a couple of hundred points more but realistically nothing important. I reverted back to Asus official one because, unless Asus gives a 500w one I won't ever bother again without water cooling..


So if I watercool it, this BIOS can do well in benchmarks? Some people say it would, others say not. I've seen some FTW3 cards using the Strix BIOS, which is why the question arises.


----------



## jomama22

reflex75 said:


> 350W for a hungry GA-102 is starvation!
> Even my 400W FE is not enough for benching!
> You should start at least with a 390w bios, if you don't want to do shunt-mod.
> Then work your voltage/frequency while lowering your temp.
> There is still room for improvement
> https://www.3dmark.com/3dm/53877759?


That's with a waterblock, I'm assuming?


----------



## jomama22

Anyone having issues with 3dmark taking forever to calculate the score of a benchmark after the run is finished? This is with the latest updated 3dmark.


----------



## dangerSK

0451 said:


> At least get the 2080ti tool that disables OCP.


Nothing, because you need a Classified tool that matches the specific GPU. That old tool isn't working at all, even on the KP 3090. You need a tool that has updated I2C registers.


----------



## chispy

Carillo said:


> View attachment 2467388
> 
> 
> BY BY ESCAPEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE


Great to see Futuremark taking action on his cheated scores; karma is coming for you @escapee. I think it's a matter of time before he gets completely banned from the Hall of Fame at FM, as he still has multiple accounts, and I know for a fact that his score, with a lame 4790K CPU, 2133MHz RAM, and an RTX 3090 GPU clock of 2445MHz at 46°C, is not real and is impossible to obtain 🔥. He is a fake and a cheater; glad for everyone to see the facts, and thank you for the update @Carillo, you made my day.

Let's get this guy out of here; he is not good for the OCN community.

Cheated score, guys, report this one as it is not real: https://www.3dmark.com/pr/437850


----------



## chispy

jomama22 said:


> Anyone having issues with 3dmark taking forever to calculate the score of a benchmark after the run is finished? This is with the latest updated 3dmark.


Yes I did. The new 3DMark suite is bloated and behaves completely differently from older 3DMark suites; it's buggy as well. Try an older 3DMark suite, bro.


----------



## changboy

DrunknFoo said:


> This doesnt really show us anything other than what you ur card is spiking at, doesnt tell us anything. i would assume the peaks are simply the transitions between load screens or zones... Use avg to paint a picture not max, even better, interval captures of current would give more detail....


When I play The Division 2, my average board power is 480W.


----------



## edsontajra

Guys, my card arrived today. Here is my benchmark for Time Spy.

It's an ASUS TUF OC 3090. Good results?


I scored 18 178 in Time Spy

Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## ExDarkxH

how can i revert to an older version of 3dmark? they force me to update


----------



## Carillo

ExDarkxH said:


> nice! his #7 score is even more suspect however. They should take that down as well


Patience. All his scores are clearly fake.


----------



## Carillo

edsontajra said:


> Guys, my card arrive today. Here is my benchmark for Timespy.
> 
> Its a Asus TUF OC 3090. Good Results?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 18 178 in Time Spy
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Good result


----------



## xrb936

Finally got my Strix OC 3090 in my hands! Guys, what's the best BIOS for it?


----------



## jomama22

ExDarkxH said:


> how can i revert to an older version of 3dmark? they force me to update


TechPowerUp may have links to older versions on their site. It shouldn't force you to update if you use the standalone version rather than the Steam version.


----------



## edsontajra

Carillo said:


> Good result


Do you think this is about average for a 3090?

Thanks for the answer, btw.


----------



## reflex75

jomama22 said:


> That's with a waterblock.im assuming?


Not watercooled, but aircooled with fresh air coming from the window 😀

And credits to Nvidia for their incredible 3090 FE cooler, overengineered and oversized compared to the smaller 3080 which carries the same chip!


----------



## chispy

ExDarkxH said:


> how can i revert to an older version of 3dmark? they force me to update



Futuremark 3DMark for Windows (v2.25.8049) Download


3DMark is a popular gaming performance benchmark used by millions of people, hundreds of hardware review sites and many of the world's leading techno




www.techpowerup.com


----------



## Nizzen

xrb936 said:


> Finally got my Strix OC 3090 on my hand! Guys, what's the best BIOS for it?


I have used the stock BIOS, and it works great. I have to see if the EVGA 500W BIOS is better in Port Royal. A shunt mod fixes the power, if you "need" "unlimited" power.

Tested the 3090 Strix OC with the stock BIOS and an EK waterblock: 14,955 points in Port Royal, so the BIOS is good enough for gaming LOL


----------



## Carillo

ExDarkxH said:


> how can i revert to an older version of 3dmark? they force me to update


Why ? Is there some secret sauce in the older version ?


----------



## Carillo

edsontajra said:


> Do you think this is on average for 3090?
> 
> Tks for the answer btw..


I think it's much easier to determine GPU performance using Port Royal, because with a 3090 under the hood, Time Spy is more of a CPU/memory benchmark.  But from what I can see, that is a decent/average 2x8-pin score.


----------



## chispy

Carillo said:


> Why ? Is there some secret sauce in the older version ?


Yes, the new 3DMark suite has been completely revamped and geared towards gamers, sadly. It is very bloated and scores 600-800 points less in all the benchmarks; I tested this already, bro. The new SystemInfo and hardware monitor alone eat up 200 points from the score. As long as you use the latest SystemInfo with an older 3DMark suite, the score will be valid on the FM/UL Hall of Fame.


----------



## ExDarkxH

I've been using the Steam version regardless. I'm gonna try the standalone and see if I can get a higher score.


----------



## Carillo

chispy said:


> Yes , the new 3dmark suite has been completely revamp and gear towards gamers sadly  , it is very bloated and scores 600~800 less in all the benchmarks as i tested this already bro , the new systeminfo and hardware monitor alone eats up 200 points away from the score. As long as you use the latest systeminfo with older 3dmark suite the score will be valid at FM/UL hall of fame.


Thanks!


----------



## Carillo

chispy said:


> Futuremark 3DMark for Windows (v2.25.8049) Download
> 
> 
> 3DMark is a popular gaming performance benchmark used by millions of people, hundreds of hardware review sites and many of the world's leading techno
> 
> 
> 
> 
> www.techpowerup.com


DELETED


----------



## xrb936

Nizzen said:


> I have used the stock bios, and it works great. Have to see if the Evga 500w bios is better in port royal. Shuntmod fixes the power, if you "need" "unlimited" power.
> 
> Tested 3090 strix OC with stock bios and ek waterblock.14955points in port Royal, so the bios is good enough for gaming LOL


Thanks. I am using the stock BIOS now and trying to OC it with AB. However, I can only set the core clock to +50MHz; anything beyond that causes a crash in 3DMark. Is that normal?


----------



## Carls_Car

Managed to snag an MSI SUPRIM X. Guess it's time to find a buyer for my Ventus 3X OC. Or I could be a madman, keep it, and throw it in the missus's PC...

Should be here in a couple of days, will post some stock scores.


----------



## HyperMatrix

Still waiting on the HydroCopper notify to pop up. Then to eventually try to get a KingPin card. Let's just hope one of these cards ends up being better than my Strix was.


----------



## jomama22

xrb936 said:


> Thanks. I am using the stock now and trying to OC it with AB. However, I can only set the core clock to +50mhz, any number beyond that will cause crash in 3DMark. Is that normal?


It is very possible you may have gotten pooped on like I did with my Strix lmao. Granted, I could set +110 in Time Spy, but still... it could only match or lose to my FE's score, even at 480W.

I feel as though they may not be binning these cards as much as they did at the beginning, to try and get more on the shelves. Who knows.


----------



## xrb936

jomama22 said:


> It is very possible you may have gotten pooped on like I did with my strix lmao. Granted, I could set +110 in timespy but still...could only match/lose to my FE's score even at 480w.
> 
> I feel as though they may not be binning these cards as much as they did at the beginning to try and get more on the shelves. Who knows.


Ew. How about the EVGA FTW3 Ultra? My local merchant said I can return it and get a replacement for a EVGA one, which is in stock right now.


----------



## HyperMatrix

xrb936 said:


> Ew. How about the EVGA FTW3 Ultra? My local merchant said I can return it and get a replacement for a EVGA one, which is in stock right now.


FTW3 is a medium quality card sold at a premium price. It's not bad though. And being able to do 450-520W without shunt modding (depending on bios) is still a good option to have. I wouldn't pay scalper pricing for it. But obviously I just bought an FTW3 with the hybrid cooler on it after having sold my previous Strix card. Just hoping for some silicon lottery. Because I've seen 370W power limited cards outperforming my 480W Strix. So all depends how good of a chip you end up with. 

If you can roll the dice on as many cards as possible without losing money in the process, do it. Consumer level binning.


----------



## pat182

do we know the max wattage of the KPE on OC mode ?


----------



## man1ac

So I was wondering if someone can give me an FPS number in, say, Metro at max settings, or RDR2, or whatever you have. What's the difference between a 400W air, 400W water, and 500W air/water 3090?! I just want to know what goals are worth setting. Sure, a high bench score would be nice too, but (for me) 3x WQHD on my sim racing rig (ACC, iR, AC, AMS2) is where it counts.


----------



## xrb936

HyperMatrix said:


> FTW3 is a medium quality card sold at a premium price. It's not bad though. And being able to do 450-520W without shunt modding (depending on bios) is still a good option to have. I wouldn't pay scalper pricing for it. But obviously I just bought an FTW3 with the hybrid cooler on it after having sold my previous Strix card. Just hoping for some silicon lottery. Because I've seen 370W power limited cards outperforming my 480W Strix. So all depends how good of a chip you end up with.
> 
> If you can roll the dice on as many cards as possible without losing money in the process, do it. Consumer level binning.


Great. I am just going to grab one.


----------



## HyperMatrix

pat182 said:


> do we know the max wattage of the KPE on OC mode ?


LN2 mode isn't really LN2 mode. It's just the normal BIOS with a 520W limit, which is the maximum and what you should be using without worry. Normal mode is 450W, and "OC" mode was either 480W or 500W; I forget the exact number. But the max on the card is 520W atm. A higher power limit BIOS is said to be coming to their forums later, though.


----------



## pat182

HyperMatrix said:


> LN2 mode isn't really LN2 mode. It's just the normal bios with a 520W limit which is the maximum which is what you should be using without worry. The normal mode is 450W. And the "OC" mode was either 480W or 500W. I forget the exact number. But max on the card is 520W atm. A higher power limit bios is said to be released on their forums later though.


Oh my, that's disappointing. I was looking forward to flashing my Strix with a 500W+ BIOS. Maybe the LN2 mode BIOS will be compatible.


----------



## HyperMatrix

pat182 said:


> oh my, thats disapointing i was looking foward to flash my strix with a 500+ bios, maybe the ln2 mode will be compatible


You still may be able to. The normal “LN2” mode bios should be dumpable and usable on any 3 power connector card, giving you 520W. If they really do share the LN2 bios, which disables thermal monitoring and throttling....I’d caution against using that for your day to day. You could burn your card because all of the protections in place would be disabled. 

I would recommend shunting over a fully unlocked XOC bios for the majority of users, even those here on OCN. There’s no performance advantage to using XOC over shunting under water cooling. But there are much more serious risks.
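For anyone weighing the shunt route, here is a rough sketch of why stacking a resistor on a shunt moves the effective power limit. The 5 mOhm values and the 480W bios limit below are illustrative assumptions, not measurements from any card in this thread:

```python
# Rough illustration (values assumed, not from the thread): why stacking a
# resistor on top of a current-sense shunt raises the real power ceiling.
# A common mod stacks a second resistor of equal value in parallel.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

STOCK_SHUNT = 0.005                      # ohms (assumed stock value)
modded = parallel(STOCK_SHUNT, 0.005)    # ohms after stacking

# The controller infers current from the voltage drop across the shunt
# (I = V / R) while still assuming the stock resistance, so it sees only
# a fraction of the true current, and therefore of the true power draw.
reported_fraction = modded / STOCK_SHUNT

BIOS_LIMIT_W = 480.0                     # whatever limit the bios enforces
real_ceiling_w = BIOS_LIMIT_W / reported_fraction

print(f"reported fraction of real power: {reported_fraction:.2f}")
print(f"real power ceiling: {real_ceiling_w:.0f} W")
```

The point of the comparison: a shunt mod scales what the card reports while every thermal and voltage protection stays armed, whereas an XOC bios removes the limits and the protections together.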


----------



## pat182

HyperMatrix said:


> You still may be able to. The normal “LN2” mode bios should be dumpable and usable on any 3 power connector card, giving you 520W. If they really do share the LN2 bios, which disables thermal monitoring and throttling....I’d caution against using that for your day to day. You could burn your card because all of the protections in place would be disabled.
> 
> I would recommend shunting over a fully unlocked XOC bios for the majority of users, even those here on OCN. There’s no performance advantage to using XOC over shunting under water cooling. But there are much more serious risks.


i see. the thing is that i want to test the max on air. my card can sustain 480 watts at 66c all day, and my clocks in Control or Metro only drop into the 19xx MHz range because of the power limit. im sure that if i could push 500 watts id still be in the 2000 MHz range @ 70c, which would make me happy. i guess i could try the evga 500 watt bios then

in non-rtx games like BL3 im fully stable at 2085 MHz, 66c, 480 watts


----------



## geriatricpollywog

HyperMatrix said:


> LN2 mode isn't really LN2 mode. It's just the normal bios with a 520W limit which is the maximum which is what you should be using without worry. The normal mode is 450W. And the "OC" mode was either 480W or 500W. I forget the exact number. But max on the card is 520W atm. A higher power limit bios is said to be released on their forums later though.


On the 2080ti KPE, the LN2 bios switch position disables thermal protection and runs the fans at jet engine speed. The fan speed can be fixed by plugging the fans into a motherboard PWM header but the temperature protection cannot be enabled in LN2 mode. You are still under warranty if the card fails. However, with the XOC bios flashed, you are not covered under warranty if the card fails and the RMA department discovers you flashed the XOC bios.


----------



## HyperMatrix

0451 said:


> On the 2080ti KPE, the LN2 bios switch position disables thermal protection and runs the fans at jet engine speed. The fan speed can be fixed by plugging the fans into a motherboard PWM header but the temperature protection cannot be enabled in LN2 mode. You are still under warranty if the card fails. However, with the XOC bios flashed, you are not covered under warranty if the card fails and the RMA department discovers you flashed the XOC bios.


The 3090 KPE doesn't disable any protections. Jacob said you could even max out the voltage sliders in the Classified tool and still wouldn't break your card. This "LN2" mode isn't really LN2 mode, hence the power limit being only 4% higher.

The yet to be released actual XOC LN2 bios they used in their stream will have all protections disabled though.

Edit: Correction to my previous statement. There's no indication that the voltage sliders in the Classified tool would be safe; that was misread by moi. But it says LN2 mode is safe for all other OC'ing, which means thermal protections are still in place.


----------



## HyperMatrix

Also FYI regarding EVGA 500W bios and theories about cards made in Taiwan vs. China:


----------



## Mr.Vegas

Carls_Car said:


> Haven't ordered one yet. Waiting on Corsair or EKWB. Probably will get an EKWB.


There is a new Bykski block for the Ventus 3090; it came out last week and comes with a backplate as a set


----------



## Mr.Vegas

Apecos said:


> The best scores on my 3090 ventus was 13647 in PR with stock msi bios and UNDERVOLTING THE CARD. Try a custom curve with 825 mvolts and 1890 mhz.There is the sweet spot for that card.Please try and you will see.
> 
> I forgot to say that i flashed the KFA 390w VBIOS and the results was the same like MSI stock bios.I flashed back to stock
> 
> The GIGABYTE 3090 GAMING OC vbios is a waste of time, thAs bios is for 2000 rpm fans the VENTUS use 3500 fans (you loose 1500 rpm in cooling)


Do you think the KFA bios might help me under water? Are there any other better bioses that work? i dont care about fans since im doing a custom loop.
What's the exact KFA model? ill grab the bios


----------



## pat182

HyperMatrix said:


> 3090 KPE doesn't disable any protections. Jacob said you could even max out voltage sliders in the classified tool and still wouldn't break your card. This "LN2" mode isn't really LN2 mode. Hence only being 4% higher power limit.
> 
> The yet to be released actual XOC LN2 bios they used in their stream will have all protections disabled though.
> 
> Edit: Correction on my previous statement. No indication that voltage sliders in classified tool would be safe. That was misread by moi. But says LN2 mode is safe for all other OC'ing which means thermal protections are still in place.
> 
> View attachment 2467419


sick, cant wait for people to start flashing ln2 bios to see whats up with an extra 20watts


----------



## geriatricpollywog

HyperMatrix said:


> 3090 KPE doesn't disable any protections. Jacob said you could even max out voltage sliders in the classified tool and still wouldn't break your card. This "LN2" mode isn't really LN2 mode. Hence only being 4% higher power limit.
> 
> The yet to be released actual XOC LN2 bios they used in their stream will have all protections disabled though.
> 
> Edit: Correction on my previous statement. No indication that voltage sliders in classified tool would be safe. That was misread by moi. But says LN2 mode is safe for all other OC'ing which means thermal protections are still in place.
> 
> View attachment 2467419


This would be a departure from the 2080ti KPE then.


----------



## HyperMatrix

0451 said:


> This would be a departure from the 2080ti KPE then.


I think we’re dealing with so much more power this generation that it’d be far easier for things to go wrong and fry some cards. And lack of general card availability and price similarity between the FTW3 and KingPin card means more of your average Joe consumer is going to be interested in it. 

I know I wouldn’t trust the average consumer who doesn’t even know how to flash a bios with a 520W card with 0 thermal protections. Haha.


----------



## jimdurt

HyperMatrix said:


> FTW3 is a medium quality card sold at a premium price. It's not bad though. And being able to do 450-520W without shunt modding (depending on bios) is still a good option to have. I wouldn't pay scalper pricing for it. But obviously I just bought an FTW3 with the hybrid cooler on it after having sold my previous Strix card. Just hoping for some silicon lottery. Because I've seen 370W power limited cards outperforming my 480W Strix. So all depends how good of a chip you end up with.
> 
> If you can roll the dice on as many cards as possible without losing money in the process, do it. Consumer level binning.


I just got my email for the XC3 and FTW3 Hybrid. I also have the Strix 3090 and am disappointed that even at 480W I am way lower on my 3DMark scores than what I have seen from other cards. I even set up last night outside in my carport when it was 4°C, running every benchmark I could, and only managed to squeeze a few hundred extra points out in Time Spy. 20629 was the best I could do.

So do you recommend the XC3 or the FTW3 Hybrid? I have the option for both and am also in the queue for the Hydro Coppers. I happened to see the Twitter post by Jacob and hit notify way early.


----------



## pat182

jimdurt said:


> I just got my email for the XC3 and FTW3 Hybrid. I also have the Strix 3090 and disappointed that even at 480W I am way lower on my 3DMark scores than from what I have seen from other cards. I even setup last night outside in my carport when it was 4°C running every benchmark I could. Only managed to squeeze a few hundred extra out on timespy. 20629 was the best I could do on Timespy.
> 
> So do you recommend the XC3 or FTW3 Hybrid. Got the option for both and also in the queue for the Hydro Coppers. I happened to see the twitter post by Jacob and hit notify way early.


what's the bios limit for the hybrid? unless you shunt or bios-swap the hybrid, im pretty sure the strix gets better performance


----------



## HyperMatrix

jimdurt said:


> I just got my email for the XC3 and FTW3 Hybrid. I also have the Strix 3090 and disappointed that even at 480W I am way lower on my 3DMark scores than from what I have seen from other cards. I even setup last night outside in my carport when it was 4°C running every benchmark I could. Only managed to squeeze a few hundred extra out on timespy. 20629 was the best I could do on Timespy.
> 
> So do you recommend the XC3 or FTW3 Hybrid. Got the option for both and also in the queue for the Hydro Coppers. I happened to see the twitter post by Jacob and hit notify way early.


Definitely the FTW3. If EVGA does binning of any kind, the XC3 line gets the worst of the chips. Also, with the XC3 you're limited to 390W, or whatever the limit is for the 2-power-connector cards. At least with the FTW3 you can flash the 520W bios without voiding the warranty. I also have a notify (hasn't popped yet) for the XC3 HydroCopper but already decided against picking it up.



pat182 said:


> what are the bios limit for the hybrid ? unless you shunt or bios swap the hybrid , im pretty sure the strix got better performance


Hybrid is the same as normal FTW3. Just better cooler design for the same price.


----------



## Apecos

Mr.Vegas said:


> Do you think the KFA bios might help me under water? Any other better bioses that work? i dont care about fans since im doing custom loop
> Whats the exact KFA model, ill get the bios


Sorry man, but I don't know how the Ventus performs under water. I flashed this bios but the card runs better on stock.
Comparing the Ventus against the Strix and EVGA (in games only), I get just 4 fps less. Be careful with the KFA2 bios; the Galax has one more power phase than the Ventus.









KFA2 RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


----------



## jimdurt

pat182 said:


> what are the bios limit for the hybrid ? unless you shunt or bios swap the hybrid , im pretty sure the strix got better performance


Well, I am running with the stock air cooler. But looking at the scores from others with the EVGA FTW3 when it was new on the 450W bios, it was scoring 500 points higher than the Strix 3090 I have. That's why I was testing outside, trying to achieve the absolute best with the stock cooler. Even when outside temps were 4°C, I could start Time Spy at a 13°C GPU temp and would hit 60°C every time according to GPU-Z. I have this in a Lancool II Mesh with all fans on max. I do get slightly higher scores with GPU fans at 60-80%, but not much. Either way, hitting 60°C seems high when pushing 4°C air through the stock cooler.


----------



## Carillo

chispy said:


> Yes, the new 3DMark suite has been completely revamped and geared towards gamers sadly  , it is very bloated and scores 600~800 less in all the benchmarks, as I already tested this bro. The new SystemInfo and hardware monitor alone eat up 200 points from the score. As long as you use the latest SystemInfo with the older 3DMark suite, the score will be valid at the FM/UL hall of fame.


Great. What version do you recommend?


----------



## pat182

jimdurt said:


> Well i am running with stock air cooler. But seeing the scores from others with EVGA FTW3 when it was new on the 450W bios, it was scoring 500 points higher than the Strix 3090 i have. Thats why i was testing outside trying to achieve the absolute best with the stock cooler. Even when outside temps were 4C, I could start Timespy at 13C GPU temp and hit 60C everytime according to GPUZ. I have this in a Lancool 2 Mesh. All fans on max. I do get slightly higher scores with GPU fans at 60-80%, but not much. Either way, hitting 60C seems high when pushing 4C air through the stock cooler.


yea, maybe just repaste it. im getting 66-67c at max fan speed indoors. for me it helped a lot to vertical-mount it so hot air would exhaust better


----------



## chispy

Carillo said:


> Great. What version do you recommend ?


7042 or 7048


----------



## jimdurt

pat182 said:


> yea maybe just repaste it, im getting 66-67c max fan speed indoor, for me it helped alot to vertical mount it so hot air would exaust better


That's an option, I guess. I have the vertical mount for the Lancool II Mesh, but wanted to get a good baseline before changing any orientations. May just buy the Hybrid to compare. I really got bit by the enthusiast bug. I need to get back to doing other things, but this is too much fun at the moment. I haven't even turned on a game in a few days because of high-score chasing in 3DMark. I have the updated 3DMark, so that may be an issue as well. I'll download the old version and compare with my current scores.


----------



## DrunknFoo

ExDarkxH said:


> I was actually interested in this as well. is there any point on the card we can add an evc2 or are were **** out of luck with the ftw3?


analog voltage controllers on the ftw.



https://xdevs.com/guide/epower_v/#swsets


----------



## changboy

pat182 said:


> yea maybe just repaste it, im getting 66-67c max fan speed indoor, for me it helped alot to vertical mount it so hot air would exaust better


You repaste ur ftw3 ? What paste did you use ? I think i can put liquid metal on it, i have in stock.


----------



## Rbk_3

Lobstar said:


> Yours is scoring the same as mine essentially. Welcome to the suck.


Mine is actually scoring over 22k on graphics. I had ShadowPlay recording messing things up.


----------



## ExDarkxH

DrunknFoo said:


> analog voltage controllers on the ftw.
> 
> 
> 
> https://xdevs.com/guide/epower_v/#swsets


thats disappointing.
The way you connect that thing is complicated: direct soldering with heavy-gauge copper wire or solid copper plates. Since you want minimal resistance, you could go with wide solid copper. The thing would be massive, as it would essentially extend the GPU significantly, and then you can't really close it up and use it in a typical water block.


----------



## xrb936

HyperMatrix said:


> FTW3 is a medium quality card sold at a premium price. It's not bad though. And being able to do 450-520W without shunt modding (depending on bios) is still a good option to have. I wouldn't pay scalper pricing for it. But obviously I just bought an FTW3 with the hybrid cooler on it after having sold my previous Strix card. Just hoping for some silicon lottery. Because I've seen 370W power limited cards outperforming my 480W Strix. So all depends how good of a chip you end up with.
> 
> If you can roll the dice on as many cards as possible without losing money in the process, do it. Consumer level binning.


Just got another Strix OC 3090, can set it to +110. Looks better, but still not the best...


----------



## edsontajra

Here are my best results on 3DMark with the ASUS TUF OC 3090:

Port Royal: I scored 13 205 in Port Royal (Intel Core i9-10900KF, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com

Time Spy: I scored 18 441 in Time Spy (Intel Core i9-10900KF, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com





+120 Core
+700 Memory

I don't know if it's the best result, but I'm very satisfied with the card. Any tips?


----------



## HyperMatrix

xrb936 said:


> Just got another Strix OC 3090, can set it to +110. Looks better, but still not the best...


Ouch. It's definitely upsetting to pay premium prices but end up with a GPU die and cooler design that doesn't give any advantage over the cheaper cards. I was going to rebuy a brand new Strix off of Kijiji for about $100 more than I sold mine for but decided against it when I got on the EVGA queue. My FTW3 Hybrid arrives on Thursday although I don't think I'll be home to pick it up. I figure at least with the Hybrid/HydroCopper model of the FTW3, even if I end up with a mediocre chip, I won't have to spend anything more to get 2100MHz+. We'll see how that logic plays out in a few days. Fingers crossed. 

Thank God the Strix is so in demand. Can always sell it used without a loss. Haha.


----------



## defiledge

edsontajra said:


> Here are my best results on 3DMark with the ASUS TUF OC 3090:
> 
> Port Royal: I scored 13 205 in Port Royal (Intel Core i9-10900KF, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> Time Spy: I scored 18 441 in Time Spy (Intel Core i9-10900KF, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> +120 Core
> +700 Memory
> 
> I don't know if it's the best result, but I'm very satisfied with the card. Any tips?


did u try 390W bios


----------



## ExDarkxH

The way I see it:

The 3090 Kingpin should be the 3090 FTW3.
The 3090 FTW3 should be the XC3.

EVGA just rebranded their cards and overcharged.
They will pay the price in the future. Ampere will eventually be in stock everywhere, and the 3080 Ti will be the hot ticket.
The FTW3 version won't be sought after the same way anymore; they permanently damaged that brand name.


----------



## HyperMatrix

ExDarkxH said:


> The way i see it
> 
> 3090 kingpin should be 3090 ftw3
> 3090 ftw3 should be the XC3
> 
> Evga just rebranded their cards and overcharged.
> They will pay the price in the future. Ampere will eventually be in stock everywhere, and the 3080ti will be a hot ticket.
> The ftw3 version wont be sought after the same way anymore, they permanently damaged that brand name


I think they know they priced it too high as well. Why else would they be selling the Hybrid model for the exact same price and HydroCopper for just $50 more? Or in other words, since these cards are only available through their own website with 5% discount, why would they be selling HydroCopper and Hybrid models for less than the retail price of the FTW3 Ultra basic edition?

Think about it... the XC3 Hybrid model sells for just $1538 on their own website, the XC3 HydroCopper for $1567. That's some damn good value, tbh. But while the FTW3 is lesser quality and priced too high, I disagree that the KingPin should be the FTW3 card. I still think the KingPin is better engineered than an FTW3/Strix-level card.


----------



## changboy

How the hell do you buy an RTX 3090 like a chocolate bar? Buying 2 or 3 without minding at all, when most people can't afford 1.


----------



## HyperMatrix

changboy said:


> How the hell do you buy an rtx-3090 like a chocolate bar, buy 2 or 3 didnt mind at all, most of people cant afford 1


Most people aren't keeping them. They're reselling them after testing/binning. Problem isn't cost since you can turn around and resell it quick. Problem is getting access to cards to buy.


----------



## HyperMatrix

_delete_


----------



## ExDarkxH

HyperMatrix said:


> I think they know they priced it too high as well. Why else would they be selling the Hybrid model for the exact same price and HydroCopper for just $50 more? Or in other words, since these cards are only available through their own website with 5% discount, why would they be selling HydroCopper and Hybrid models for less than the retail price of the FTW3 Ultra basic edition?
> 
> Think about it...the XC3 Hybrid model sells for just $1538 on their own website. XC3 HydroCopper for $1567. That's some damn good value, tbh. But while the FTW3 is lesser quality and priced too high, I disagree that the KingPin should be the FTW3 card. I still think KingPin is better engineered than an FTW3/Strix level card


It's over-engineered but it's lacking.
20 watts higher and they have the nerve to call this "LN2".

Last generation the Kingpin shipped with a bios 150 watts higher than the FTW3's. You actually felt like you were getting the value. It was triple 8-pin, which was rare last gen and felt special.
You also got Samsung memory with every unit. The Samsung memory was much more consistent and overall better than the Micron memory chips that shipped with the FTW3, which allowed for significantly higher overclocks on average. Also, the bin, while not great, was more stringent.
This time around you can hardly even call it a bin.

People will be getting their Kingpins and posting 14,700-14,800 Port Royal scores at 520W watercooled, and I will just roll my eyes.
They could have easily made an FTW3 HydroCopper with 520W and it would perform exactly the same.

That card doesn't deserve the Kingpin label. My original point stands.


----------



## HyperMatrix

ExDarkxH said:


> It's over engineered but its lacking
> 20watts higher and they have the nerve to call this "LN2"
> 
> Last generation the kingpin shipped with a bios 150watts higher than the FTW3. You actually felt like you were getting the value. It was three pin which was rare last gen and felt special
> You also got Samsung memory with every unit. The Samsung memory was much more consistent and overall better than the micron memory chips that shipped with the FTW3. This allowed for significantly higher overclocks on average. Also, the bin while not great, was more stringent
> This time around you can hardly even call it a bin
> 
> Ppl will be getting their kingpins and posting 14,700-14,800 port royal scores on 520w watercooled and I will just roll my eyes
> They could have easily made a ftw3 hydrocopper with 520w and it would perform exactly the same
> 
> That card doesnt deserve the kingpin label. My original point stands


A few things.

1) Yes, the "LN2" mode currently isn't really LN2 mode. I covered this a few posts or pages back. But they said they will be releasing a proper XOC bios for it on their forums, so that's only a short-term issue. And EVGA can't ship the cards with a higher than 520W limit due to liability for exceeding spec: 150 + 150 + 150 + 75 = 525W. They'd have had to go with a 4-power-connector design to be able to officially sell it with a 675W power limit. But what about Asus? Why aren't they even providing a bios above 480W? You're OK complaining about the FTW3 and EVGA's 500W and 520W bios but say nothing about Asus's 480W limit?

2) There is no Samsung GDDR6X memory. Only Micron. And if you watched the live OC stream, Vince showed that with active cooling on the memory, you can increase the voltage and clock the memory up very very high. If I'm not mistaken they had it up past 28GHz. I could be very mistaken about that exact number but I do remember it was a substantial OC that you can play with thanks to the included voltage controls.

3) The only person that's reported results on the KingPin thread over at EVGA so far did get over 15,000 with the Hybrid cooler. Steve and JayZ earlier also got 15k and 15.3k with the Hybrid Cooler. I'm not sure what the averages will end up being with the card, but I think it's safe to say the combination of the 360mm AIO along with voltage controls and however small gpu die binning will result in people being able to hit 15000+ with proper tuning without doing any modifications.

I don't like rage and complaints for the sake of complaining. Like everyone that went off on Nvidia for their "paper launch" only to see the AMD CPU and GPU launch, as well as PS5/XSX launch be even worse. Valid criticism is fair. But is it fair to criticize the KingPin card for being $100 more than the Strix when it has a higher boost clock, much better 360mm AIO cooler, integrated voltage controls, somewhat binned GPU die, more power phases, higher power limit, and even higher power limit bios planned to be released?

I paid $1800 for a Strix that I couldn't even game on at 1920MHz. Do I blame Asus for that? No. Well... actually, a little. The cooler on that card is not sufficient to compensate for a bad GPU die; it was unable to sustain clocks above its advertised 1890MHz. So wouldn't paying $1900 for a card that is guaranteed to maintain 2100MHz, without having to void my warranty or spend hundreds more on a water block, be a substantially better deal?

All I'm saying is...if you're going to criticize EVGA, make sure you equally criticize all the other vendors. Because I don't think any of them have done a great job with their 3090s. Asus TUF being the exception, but still power limited without shunting.
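The connector arithmetic behind these shipped limits is easy to sanity-check, assuming the usual PCIe budget figures of 75W from the x16 slot and 150W per 8-pin connector:

```python
# Quick sanity check of in-spec board power budgets, assuming the PCIe
# figures of 75 W from the x16 slot and 150 W per 8-pin connector.

SLOT_W = 75
EIGHT_PIN_W = 150

def spec_budget(n_connectors: int) -> int:
    """Maximum in-spec board power for a card with n 8-pin connectors."""
    return SLOT_W + n_connectors * EIGHT_PIN_W

for n in (2, 3, 4):
    print(f"{n} x 8-pin: {spec_budget(n)} W")
```

Three connectors budget 525W, which is why a 520W bios is about the most a vendor can ship in spec, and a fourth connector would lift the in-spec ceiling to 675W.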


----------



## geriatricpollywog

HyperMatrix said:


> A few things.
> 
> 1) Yes, the "LN2" mode currently isn't really LN2 mode. I covered this a few posts or pages back. But, they said they will be releasing a proper XOC bios for it on their forums. So that's only a short term issue. And EVGA can't ship the cards with a higher than 520W limit due to liability for exceeding specs. 150 + 150 + 150 + 75. They'd have to have gone with a 4 power connector design to be able to officially sell it with a 670W power limit. But what about Asus? Why aren't they even providing a bios above 480W? You're ok to complain about the FTW3 and EVGAs 500W and 520W bios but nothing about Asus's 480W limit?
> 
> 2) There is no Samsung GDDR6X memory. Only Micron. And if you watched the live OC stream, Vince showed that with active cooling on the memory, you can increase the voltage and clock the memory up very very high. If I'm not mistaken they had it up past 28GHz. I could be very mistaken about that exact number but I do remember it was a substantial OC that you can play with thanks to the included voltage controls.
> 
> 3) The only person that's reported results on the KingPin thread over at EVGA so far did get over 15,000 with the Hybrid cooler. Steve and JayZ earlier also got 15k and 15.3k with the Hybrid Cooler. I'm not sure what the averages will end up being with the card, but I think it's safe to say the combination of the 360mm AIO along with voltage controls and however small gpu die binning will result in people being able to hit 15000+ with proper tuning without doing any modifications.
> 
> I don't like rage and complaints for the sake of complaining. Like everyone that went off on Nvidia for their "paper launch" only to see the AMD CPU and GPU launch, as well as PS5/XSX launch be even worse. Valid criticism is fair. But is it fair to criticize the KingPin card for being $100 more than the Strix when it has a higher boost clock, much better 360mm AIO cooler, integrated voltage controls, somewhat binned GPU die, more power phases, higher power limit, and even higher power limit bios planned to be released?
> 
> I paid $1800 for a Strix that I couldn't even game on at 1920MHz. Do I blame Asus for that? No. Well...actually a little. The cooler on that card is not sufficient for compensating for a bad GPU die. It was unable to provide sustained clocks above its advertised 1890MHz. So would I not think paying $1900 for a card that can guaranteed maintain 2100MHz without having to void my warranty or spend hundreds more to buy a water block is a substantially better deal?
> 
> All I'm saying is...if you're going to criticize EVGA, make sure you equally criticize all the other vendors. Because I don't think any of them have done a great job with their 3090s. Asus TUF being the exception, but still power limited without shunting.


Exactly.

-XOC bios
-Better VRM and filtering
-Triple bios
-Triple 8 pin
-Dipswitch voltage and LLC control
-360mm Asetek Gen6 AIO
-EVGA customer service
-Oled monitoring
-$100 more than Strix

The XOC bios alone is worth the extra $100. Literally all anybody talks about on this thread is power limits and which bios to flash.


----------



## ExDarkxH

2070MHz, so you know you're not getting the "2100" guaranteed maintained. Also note that it needed 1.087V and water to achieve that clock in Heaven, which isn't too stressful a test.
This is how terrible the "bin" is: it isn't 2100 maintained.
It's a 2100 peak that qualifies for the bin.











also, he couldn't even crack 15k in Port Royal.
This was with his special-sauce 10900K CPU capable of 5.7GHz and his 12ns memory.
This was after playing around with voltage and lowering the ambient temp of the room.
Still couldn't bust 15k.

ALL HAIL THE KING!


----------



## Wihglah

Is the Kingpin BIOS in the wild yet?

and will it work on an FTW3?


----------



## HyperMatrix

ExDarkxH said:


> 2070mhz so you know your not getting the “2100” guaranteed maintained. Also note that it needed 1.087mv and


Well **** I should forget about the KingPin and go back to my 1890MHz Strix OC.


----------



## ExDarkxH

0451 said:


> Exactly.
> 
> -XOC bios
> -Better VRM and filtering
> -Triple bios
> -Triple 8 pin
> -Dipswitch voltage and LLC control
> -360mm Asetek Gen6 AIO
> -EVGA customer service
> -Oled monitoring
> -$100 more than Strix
> 
> The XOC bios alone is worth the extra $100. Literally all anybody talks about on this thread is power limits and which bios to flash.


You just listed a bunch of stuff that other cards have and provide for cheaper.
Your main point is about a bios that doesn't even ship with the card, and even if it did, what's to stop another card from using it?
I'm not saying it's a bad card.
I'm not saying it's not priced competitively.
What I'm saying is that this is what the FTW3 should have been, minus the voltage controls, but who cares when you can add an EVC2? Oh wait, EVGA purposely doesn't allow for this on the FTW3, because the Kingpin ain't all that and they don't want the two cards competing.


----------



## HyperMatrix

This conversation is beyond pointless. It's so off base I can't even decide where to start. So I've decided to stop. Everyone please listen to ExDark. EVGA is crap. KingPin cards are crap. Please don't buy them. At least not until daddy can get on the notify queue for them. There are so many cards superior to the KingPin that I don't even have enough room to name them so I just won't.


----------



## geriatricpollywog

ExDarkxH said:


> You just listed a bunch of stuff that other cards have and provide for cheaper
> You main point about a bios that doesnt ship with the card and even if it did, whats to stop another card from using?
> Im not saying its a bad card
> Im not saying its not priced competitively,
> What im saying is that this is what the ftw3 should have been—minus the voltage controls but who cares when you can add an evc2. Oh wait, evga purposely doesnt allow for this on the ftw3 because they kingpin aint all that and they dont want the two card competing


I don’t think there are any cards that have any of those features except the 3x8 pin and maybe triple bios. Correct me if I’m wrong there. The 2080ti KPE XOC bios is downloadable on xdevs by anybody, but will not work with non KPE cards. And it makes sense. Even if you managed to flash the bios, your consumer grade card will not automatically grow the extra power stages and capacitance that XOC cards have. Also, what’s an FTW3, some non KPE card?


----------



## changboy

To me, the EVGA FTW3 Ultra has the best design on air of any card, hehehehe.
The backplate has holes to cool the memory on the back, and the PCB itself has holes that let the fans blow air at the backplate to help cooling... what a smart design! It's less huge than the Strix, and its cooler outperforms the Strix's. The Strix is built like a brick and is nearly closed all around the cooler, so how can that thing stay cool? And a part on the Strix makes coil whine; Asus has known it from previous years but keeps using the same noisy part year after year.

On this thread I've seen a lot of FTW3s with high scores just out of the box on air, but I haven't seen many Strix cards with very high scores without mods or a waterblock on them. Even Frame Chasers modded his Strix to the end to score 14700 in Port Royal... are you kidding me? Maybe if I tweaked my PC like hell I could score that with my EVGA FTW3 on air. That's it, guys.


----------



## ExDarkxH

HyperMatrix said:


> This conversation is beyond pointless. It's so off base I can't even decide where to start. So I've decided to stop. Everyone please listen to ExDark. EVGA is crap. KingPin cards are crap. Please don't buy them. At least not until daddy can get on the notify queue for them. There are so many cards superior to the KingPin that I don't even have enough room to name them so I just won't.


You're being defensive because you want the card.
It will be a great card.
Guess what? The FTW3 is supposed to be a great card too, but it's not.
My original point stands: the card that is shipping as their FTW3 is overpriced and mediocre. What the KingPin is now is 95% of what the FTW3 should be, minus the voltage controls and OLED screen.


----------



## changboy

So you like go fishing ? hahahahah


----------



## DrunknFoo

ExDarkxH said:


> thats disappointing
> The way you connect that thing is complicated. direct soldering with heavy gauge copper wire or solid copper plates. Since you want minimal resistance, you could go solid wide copper. The thing would be massive as it would essentially extend the gpu significantly and then you cant really close it up and use it in a typical water block


Another option would be to tap into the analog controller with resistors to increase the voltage. I don't know the details, but Joe from *beardedhardware attempted to mod his FTW and, due to an accident, ended up RIPing his card. Not sure if he plans to repair it with parts from an older card or has given up on it; I think he's moving on to Navi next.


----------



## pat182

HyperMatrix said:


> Well **** I should forget about the KingPin and go back to my 1890MHz Strix OC.


Running +140 on the core on my Strix since day 1. It can do +155, but it might crash once a day, so that's not good.


----------



## Chipsaru

I'm unable to find the dimensions of the 80/90 FE PCBs. Has anyone disassembled and measured one?


----------



## HyperMatrix

pat182 said:


> running +140 core on my strix since day 1 can do +155 but it might crash once a day so, not good


Yeah, the max I was able to do was about 2040MHz; after 5 minutes of gameplay the temperature hit 77C, kept going up, and it soon crashed. The problem was finding a clock speed/voltage that the cooler was capable of keeping stable. Anything higher than around 1900MHz under 0.900v would eventually build up heat and crash. Some games were fine; others, like Red Dead Redemption 2, were more sensitive. So in terms of stable clocks for hours-long gaming sessions across a variety of games, I didn't have any guarantees above around 1900MHz.


----------



## DrunknFoo

ExDarkxH said:


> You being defensive because you want the card.
> It will be a great card
> Guess what? The FTW3 is supposed to he a great card too but its not
> My original point stands, the card that is shipping as their ftw3 is overpriced and mediocre. What the kingpin is now is 95% of what the ftw3 should be. Minus voltage controls and oled screen


From a consumer perspective and based on stock performance, I think the FTW3 is fine for what it is.
Pricing: it's competitive 1:1 with Strix pricing (at least in Canada).
As for features: less than 1% of consumers would even consider, or have known of the existence of, the EVC (or even the EVGA power unit) until they come across a YouTuber that features it. Look at some of our members here who are afraid to use an iron for shunting. (Just saying.)

The KP card is just absurd, IMO. Why? Not because of what EVGA decided to choose for the power profiles. Its essence, along with the Classified, should be XOC: keep the stock limit at NVIDIA spec, who cares, it'll be fine, just as long as power delivery is guaranteed by the components to reach or exceed, say, 800W (being conservative), where an enthusiast would have no doubts about tinkering with shunts, be it swap or stack. What bugs me is the fact that it comes with an AIO-style cooler... For that price, I'd rather have it come with a pot, or just the option to buy the PCB alone for 200-300 USD less. (I dunno, I probably don't make much sense to you guys; it's just the way I have always viewed the KP line.)


----------



## DrunknFoo

480w: EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com

520w: EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com

PX1 requests a firmware update if you have it installed (EVGA owners).

Will start testing the 480w without the update to see how it behaves with the FTW3.


----------



## geriatricpollywog

Wow, legit. New #1

Edit: this was not me, so don’t rep.









I scored 18 284 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## ArcticZero

Did some undervolting on my PNY (Gigabyte OC BIOS). Settled on around +234 at 0.837v locked, boosting to 1895MHz. On the stock curve, the card would spike above 1.05v at times. This actually beat my highest Time Spy score by 100 points consistently, with the GPU score now consistently above 20k without much fluctuation. GPU-Z reports hardly any PerfCap now (it used to be pegged at Pwr).

Best thing is, I got -10c off with better performance.

My new record (16519)

Spent the whole night dialing numbers in, but I can probably get a slightly higher score with some increments (crashes at 1900mhz with this voltage).

It still hits power limits occasionally, so I might do a shunt mod at some point, though I'm still a bit worried about taking a soldering iron/heat gun to a $1500+ GPU, so I might go with the conductive paint/paste method. It feels like the only way forward though, barring an even higher-capped BIOS for 2x8-pin cards.


----------



## pat182

HyperMatrix said:


> Yeah max I was able to do is about 2040MHz after 5 minutes of gameplay but temp was at 77C and kept going up and soon crashed. Problem was finding a clock speed/voltage that the cooler was capable of keeping stable. Anything higher than around 1900MHz under 0.900v would eventually build up heat and crash. Some games were fine. Others were more sensitive. Like Red Dead Redemption 2. So in terms of stable clocks for hours long gaming sessions across a variety of games, I didn't have any guarantees above around 1900MHz.


Yeah, you need a repaste; I get 2085MHz locked at 66C.


----------



## pat182

DrunknFoo said:


> 480w: EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> 520w: EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> PX1 requests a firmware update if you have it installed (evga owners)
> 
> will start testing the 480w without the update to see how it behaves with the ftw3


Sickkkk, I want to try the 520 bios so bad. Gonna wait for people with a Strix to do it first, cause I'm such a noob and want to make sure it's OK, lolol.


----------



## HyperMatrix

pat182 said:


> yea you need a repaste i get 2085mhz lock at 66c


I was going to, but I tested and saw a max clock of 2130MHz from 30C-52C anyway. And since in Canada ASUS will void your warranty for opening up the card, and the thermal pads might need replacement, I was going to just wait for a block to arrive and do it all at once. But then I decided 2130MHz wasn't good enough for that price and sold the card.

Dare I say, it's rather poor assembly by Asus on an $1800 card if you need a repaste to get higher than 1890MHz without crashing. They should have charged $1500 for it and given the TUF cards to people for free.


----------



## Thanh Nguyen

DrunknFoo said:


> 480w
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> 520w
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> PX1 requests a firmware update if you have it installed (evga owners)
> 
> will start testing the 480w without the update to see how it behaves with the ftw3


Do we just flash it normally, or do something else?


----------



## WilliamLeGod

Has anyone tested the LN2 520W bios on the Gaming X Trio yet?


----------



## Mystic33

Hi guys, I recently got a KFA2 GeForce RTX 3090 SG and flashed it with a 390W bios, but I still need more headroom. Before I do the shunt/power mod, I was just wondering if the 480W bios from the Asus Strix OC is fine to flash on my model. Can someone give me a hand, please? Thanks in advance.

This is my best result so far. As you can see, the average clock frequency is too low, only 1943MHz, because the 390W bios is not enough, I think... I'm throttling at the power limit all the time.









I scored 19 788 in Time Spy
Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com


----------



## pat182

HyperMatrix said:


> I was going to but tested and saw max clock of 2130MHz from 30C-52C anyway. And since in Canada ASUS will void your warranty for opening up the card, and possible replacement needed for thermal pads, I was going to just wait for a block to arrive and do it all at once. But then I decided 2130MHz wasn't good enough for that price and sold the card.
> 
> Dare I say...it's rather poor assembly by Asus on an $1800 card if you need to do a repaste to get higher than 1890MHz without crashing. They should have charged $1500 for it and given the TUF cards to people for free.


I think it's normal for 2130 to crash; I'm not seeing a lot of 3090s stable past 2100. From CA too, MTL.


----------



## mirkendargen

The 520W BIOS works fine on the Strix; same deal as the 500W BIOS, with the 3rd 8-pin not reading power correctly.


----------



## pat182

WilliamLeGod said:


> Has any1 tested LN2 520W on the Gaming X trio yet?


Waiting too, to see what's up with the LN2 bios. If it's safe, we'll probably have an answer in the next 24h.


----------



## Edge0fsanity

DrunknFoo said:


> 480w: EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> 520w: EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> PX1 requests a firmware update if you have it installed (evga owners)
> 
> will start testing the 480w without the update to see how it behaves with the ftw3


Interested in this. I have the XC3 bios on my FTW3 right now; curious if the 520W one works. That would likely keep me at 1025mV in games without throttling.


----------



## HyperMatrix

pat182 said:


> i think 2130 is normal to crash, im not seeing alot of 3090 being stable pass 2100, from CA too, mtrl


Yeah, I priced out the Fujipoly pads as well as the Aquacomputer block with active backplate. And right now they're charging 110 euros for shipping to Canada... which is ridiculous. So basically it was going to be another ~$600 CAD on top of already spending $2600 CAD on the card, all to try to get 2130MHz, maybe 2145MHz. I figured $3200 for that wasn't worth it when I could get an FTW3 HydroCopper for a grand total of $2450 CAD with shipping/taxes that at worst will be 1-3% slower and at best could be 1-3% faster (or hopefully a KingPin if I'm lucky enough). Heck, I even considered the XC3 HydroCopper for $2190 CAD all in with shunt mods, but I decided not to go that trashy.

Also, I think 2130MHz has been the bottom of the OC pool. I don't think anyone is getting less than that with proper cooling and power limit.


----------



## DrunknFoo

Thanh Nguyen said:


> We just flash it normally or do something else?


cmd prompt or powershell via admin access
and
nvflash -6 xxxx.rom


----------



## Thanh Nguyen

DrunknFoo said:


> cmd prompt or powershell via admin access
> and
> nvflash -6 xxxx.rom


Will the 520W bios turn off some protection features?


----------



## geriatricpollywog

Thanh Nguyen said:


> The 520w will turn off some protection feauture?


The LN2 bios on the 2080ti KPE disabled temperature protection, so the 520W bios (LN2 bios) for the 3090 KPE might do the same.


----------



## mirkendargen

HyperMatrix said:


> Yeah I priced out the Fujipoly pads as well as the Aquacomputer block with active backplate. And right now they're doing $110 euros for shipping to Canada...which is ridiculous. So basically it was going to be another $600~ CAD on top of already spending $2600 CAD on the card. All to try to get 2130MHz. Maybe 2145MHz. Figured $3200 for that wasn't worth it when I could get an FTW3 HydroCopper for a grand total of $2450 CAD with shipping/taxes that at worse will be 1-3% slower and at best could be 1-3% faster. (or hopefully KingPin if I'm lucky enough). Heck I even considered the XC3 HydroCopper for $2190 CAD all in with shunt mods but I decided to not go that much trash.
> 
> Also I think 2130MHz has been the bottom of the OC pool. Don't think anyone is getting less than that with proper cooling and power limit.


This thread makes you think that, but the Port Royal HOF says otherwise. This thread is the country club, and even the poorest people in the country club are still rich, heh. Case in point: I only just today got bumped out of the Port Royal top 100 with a Strix that won't do above 2130 stable in Port Royal, and more like 2085 in Metro. No shunt mod, just a waterblock, on a 3960X system with 3200 CL14 memory, not even ideal for overclocking at all. I'm not buying that fewer than 100 people in the world have watercooled triple-8-pin 3090s and run Port Royal. Yeah, there are people with cards that can do 2200 stable, but they seem more like the top 1-5%, not the top 20-40%.


----------



## pat182

HyperMatrix said:


> Yeah I priced out the Fujipoly pads as well as the Aquacomputer block with active backplate. And right now they're doing $110 euros for shipping to Canada...which is ridiculous. So basically it was going to be another $600~ CAD on top of already spending $2600 CAD on the card. All to try to get 2130MHz. Maybe 2145MHz. Figured $3200 for that wasn't worth it when I could get an FTW3 HydroCopper for a grand total of $2450 CAD with shipping/taxes that at worse will be 1-3% slower and at best could be 1-3% faster. (or hopefully KingPin if I'm lucky enough). Heck I even considered the XC3 HydroCopper for $2190 CAD all in with shunt mods but I decided to not go that much trash.
> 
> Also I think 2130MHz has been the bottom of the OC pool. Don't think anyone is getting less than that with proper cooling and power limit.


Yeah, I'm not willing to WC it or open it whatsoever. I'm tempted to flash the 520-watt bios, but that's it; I don't want to shunt, etc. If I can get a little more with a bios, I'll be happy.


----------



## DrunknFoo

scared right now


----------



## DrunknFoo

So, updating firmware on PX1: when it completes and you hit close, it opens again.

I have no idea if it's trying to flash the 3 different bioses that originated with the KingPin.

I did 2 anyway, cancelled the third popup, and am rebooting now.


----------



## Thanh Nguyen

DrunknFoo said:


> View attachment 2467473
> 
> scared right now


Why does it ask to update after flashing the new bios?


----------



## motivman

Kingpin 520W bios is the real deal guys. This is my first run on my shunt modded PNY 2X8 pin reference card, with ZERO overclock!!!!!


----------



## DrunknFoo

after reboot px1 still asking to update firmware, testing now with AB (ftw3)


----------



## ArcticZero

motivman said:


> Kingpin 520W bios is the real deal guys. This is my first run on my shunt modded PNY 2X8 pin reference card, with ZERO overclock!!!!!
> 
> View attachment 2467476


No negative effects flashing that BIOS on the PNY? I have the same card, just worried about the whole 2x8pin vs 3x8pin difference.


----------



## defiledge

motivman said:


> Kingpin 520W bios is the real deal guys. This is my first run on my shunt modded PNY 2X8 pin reference card, with ZERO overclock!!!!!
> 
> View attachment 2467476


How does a 3x8-pin bios even work on a 2x8-pin card? Why does it perform better?


----------



## motivman

ArcticZero said:


> No negative effects flashing that BIOS on the PNY? I have the same card, just worried about the whole 2x8pin vs 3x8pin difference.


nope, card performs better.


----------



## motivman

defiledge said:


> how does a 3x8pin even work on a 2x8pin? why does it perform better


Card has to be shunt modded for this to work. It only uses power from the PCIE slot and first 2 8 pin connectors. any power usage reported on the 3rd 8 pin should be ignored.


----------



## Falkentyne

motivman said:


> Kingpin 520W bios is the real deal guys. This is my first run on my shunt modded PNY 2X8 pin reference card, with ZERO overclock!!!!!
> 
> View attachment 2467476


Is this drawing 600 watts? Or is it drawing (152.4W + 104.1W + 41.9W)? What is the "shunt multiplier" for this?

I assume the third power pin (152.7W) is completely ignored here?

What shunt resistance are you using: 15 mOhm stacked (3.75 mOhm total), 5 mOhm stacked, or replaced with 3 mOhm?

And why is the first 8 pin at "152W" and the second at "104W"? Which ones are being used, #1 and #3, or #1 and #2?


----------



## mirkendargen

motivman said:


> Kingpin 520W bios is the real deal guys. This is my first run on my shunt modded PNY 2X8 pin reference card, with ZERO overclock!!!!!
> 
> View attachment 2467476


I don't get it. You already shunt modded so the power limit isn't anything special. The only difference is the 1920 baseclock vs. whatever you had before, which is no different than if you just overclocked whatever BIOS you had before. Am I missing something about why this is special?


----------



## DrunknFoo

mirkendargen said:


> I don't get it. You already shunt modded so the power limit isn't anything special. The only difference is the 1920 baseclock vs. whatever you had before, which is no different than if you just overclocked whatever BIOS you had before.


Yup, this is true, but I like tinkering... That said, the Strix bios worked better than the 500W beta for the FTW3 on my card, so the 520W gives me a bit of hope of a slightly higher draw. Vanity, OC forum... why not?


----------



## WilliamLeGod

Any tested 520W vbios on the 3090 Gaming x trio yet?


----------



## motivman

mirkendargen said:


> I don't get it. You already shunt modded so the power limit isn't anything special. The only difference is the 1920 baseclock vs. whatever you had before, which is no different than if you just overclocked whatever BIOS you had before. Am I missing something about why this is special?


With the reference PNY bios, my card draws a max of about 535W. With this bios, my max power draw looks like about 600W, so my benchmark scores are higher.


----------



## geriatricpollywog

Curious why the 520 watt bios doesn’t immediately cause issues. It’s written for a card with 23 power stages.


----------



## pat182

0451 said:


> Curious why the 520 watt bios doesn’t immediately cause issues. It’s written for a card with 23 power stages.


How can a card with a different power-stage setup benefit from another bios? Wouldn't it cause issues?


----------



## pat182

DrunknFoo said:


> cmd prompt or powershell via admin access
> and
> nvflash -6 xxxx.rom


Do you need to disable the GPU and run on an iGPU when flashing, and DDU, etc., or can you just YOLO it, no lube?


----------



## DrunknFoo

pat182 said:


> this how can a card with a different powerstage seetup benefit from an other bios ? wouldnt it cause issue ?


Basically, the number of power stages and the VRM is already overkill to some extent.


----------



## DrunknFoo

pat182 said:


> do you need to disable the gpu and run on an igpu when flashing and ddu etc etc or you can just yolo it no lube ?


No, the newest nvflash does it automatically in the sub-command.


----------



## motivman

0451 said:


> Curious why the 520 watt bios doesn’t immediately cause issues. It’s written for a card with 23 power stages.


Works just fine on my card. Played an hour of Marvel's Avengers at 4K max settings, no issues. Higher overall FPS, just all-around better performance. Just reporting my results; use at your own risk. LOL


----------



## motivman

pat182 said:


> do you need to disable the gpu and run on an igpu when flashing and ddu etc etc or you can just yolo it no lube ?


Go to Device Manager, right-click on your card, and click "Disable device". The screen will black out and video will come back.

Open a cmd prompt in admin mode:

chdir C:\nvflash (or wherever your nvflash folder is located)

Type nvflash --protectoff

then nvflash -6 "bios".rom

Y

Y

Re-enable your card in Device Manager, restart the computer... done!


----------



## Nizzen

Average clock frequency of 2,144 MHz in Port Royal with an unmodded 3090 Strix OC with an EK waterblock. ~30C water temp. Kingpin 520W bios.


http://www.3dmark.com/pr/579428


----------



## WilliamLeGod

motivman said:


> works just fine on my card. Played an hour of Marvel's avengers in 4k max settings, no issues. Higher overall FPS, just all around better performance. Just reporting my results, use at your own risk. LOL


What's your 3090?


----------



## motivman

WilliamLeGod said:


> Whats yr 3090?


PNY 2X8 pin reference PCB


----------



## DrunknFoo

Nizzen said:


> Average clock frequency 2,144 MHz in Port Royal with unmodded 3090 strix OC with EK waterblock. ~30 c watertemp. Kingpin 520w bios
> 
> 
> http://www.3dmark.com/pr/579428


I need a god damn block!


----------



## Falkentyne

motivman said:


> PNY 2X8 pin reference PCB


Did you see my question?


----------



## pat182

Nizzen said:


> Average clock frequency 2,144 MHz in Port Royal with unmodded 3090 strix OC with EK waterblock. ~30 c watertemp. Kingpin 520w bios
> 
> 
> http://www.3dmark.com/pr/579428


Sick. I'm sure I could get 2085MHz on air in Control with 40 more watts, if temps stay below 70C.


----------



## motivman

Falkentyne said:


> Did you see my question?


10 mohm on PCIE, 5 mohm on the rest. The screenshot I posted is with no overclock. After I overclocked, and maxed out power slider, card pulled around 601W.


----------



## Mystic33

motivman said:


> Card has to be shunt modded for this to work. It only uses power from the PCIE slot and first 2 8 pin connectors. any power usage reported on the 3rd 8 pin should be ignored.


Can I use that 520W bios if I do the shunt mod with Conductonaut?


----------



## bmgjet

Mystic33 said:


> Can i use that 520W bios if i do the shunt mod with the conductonaut?


Don't use Conductonaut; there are better methods that don't cause long-term damage.


----------



## defiledge

motivman said:


> 10 mohm on PCIE, 5 mohm on the rest. The screenshot I posted is with no overclock. After I overclocked, and maxed out power slider, card pulled around 601W.


What's your PCIe power draw and board power draw on a normal bios? And did you paint or solder the resistors?


----------



## Falkentyne

motivman said:


> 10 mohm on PCIE, 5 mohm on the rest. The screenshot I posted is with no overclock. After I overclocked, and maxed out power slider, card pulled around 601W.


Nice work.
What are the two power readings? Are they from 8 pin #1 and 8 pin #2, or 8 pin #1 and 8 pin #3?

If they're from 8 pins 1 and 2, why are they so far apart? Did that only happen after doing the shunt mod, or after flashing the KP bios?


----------



## pat182

+155 +550, Strix OC, not modded; gonna try the 520-watt bios soon. Hitting 480 watts all bench long.
NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. PRIME Z390-A (3dmark.com)

edit: still stock bios, +155 +800
NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. PRIME Z390-A (3dmark.com)

54 points below motivman fully stock. I think I can pass that if I stop pussying out!


----------



## defiledge

Honestly, I wouldn't mess with a 3x8-pin bios even with a shunt mod. How do you even calculate the power draw for the 2x8-pins?


----------



## Falkentyne

defiledge said:


> Honestly I wouldn't mess with a 3x8pin bios even with a shunt mod. How do you even calculate the power draw for the 2x8pins?


That's what I'm trying to ask him.
I assume his valid 8 pin readings are 152.4 and 104.1 watts right?
Then multiply them both by "2", the factor for a 5 mOhm stacked shunt mod. That gives 513.0 after the *2 multiplier.
Then if PCIE has a 10 mOhm stacked shunt, multiply the 41.9W PCIE watts by 1.5x (for the 10 mOhm shunt multiplier)
For 62.85W.
Then add them
For a total of 575.85 watts?

i'm assuming this is correct math?

My question is, why is 8 pin #2 only drawing 104 watts? It seems like the 150W cap of 8 pin #1 is causing a throttle flag?
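For anyone following along, the stacked-shunt arithmetic above can be sketched in a few lines of Python. This is just a sketch: it assumes 5 mOhm stock shunts (which is what makes the 2x and 1.5x multipliers quoted above come out), with 5 mOhm stacked on the 8-pin shunts and 10 mOhm stacked on the PCIe slot shunt.

```python
# Sketch of the stacked-shunt power math. Assumed (not confirmed in the
# thread): stock shunts are 5 mOhm; 5 mOhm stacked on the 8-pin shunts,
# 10 mOhm stacked on the PCIe slot shunt.

def stacked_multiplier(stock_mohm, stacked_mohm):
    """True power = reported power * multiplier after stacking a shunt.

    Stacking puts the new shunt in parallel with the stock one, so the
    controller senses a smaller voltage drop and under-reports power.
    """
    combined = (stock_mohm * stacked_mohm) / (stock_mohm + stacked_mohm)
    return stock_mohm / combined

# GPU-Z readings being discussed (watts)
reported_8pin = [152.4, 104.1]   # 8-pin #1 and #2
reported_pcie = 41.9             # PCIe slot

m_8pin = stacked_multiplier(5, 5)    # 5||5 = 2.5 mOhm  -> x2.0
m_pcie = stacked_multiplier(5, 10)   # 5||10 = 3.33 mOhm -> x1.5

true_power = sum(reported_8pin) * m_8pin + reported_pcie * m_pcie
print(f"{true_power:.2f} W")  # prints "575.85 W"
```

Same idea works for any stack: compute the parallel resistance, and the multiplier is stock over combined.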


----------



## motivman

defiledge said:


> whats your PCIE power draw and board powerdraw on a normal bios? and did you paint or solder the resistors


Around 84-90W max on the PCIe slot and 210-220W max on the two 8-pin PCIe power connectors; this is on the reference PNY bios. I can't remember what my board power draw is, but if you look at my old posts you should get an idea; I have posted multiple screenshots.


----------



## motivman

defiledge said:


> Honestly I wouldn't mess with a 3x8pin bios even with a shunt mod. How do you even calculate the power draw for the 2x8pins?


In my case, I use GPU-Z and multiply the values reported on the 8-pin connectors by 2 (since I used 5 mOhm resistors for the mod). I then verify with my wall power meter. It's really accurate, to as little as a 10W difference between my GPU-Z calculations and what I see on the wall power meter.


----------



## motivman

Falkentyne said:


> That's what I'm trying to ask him.
> I assume his valid 8 pin readings are 152.4 and 104.1 watts right?
> Then multiply them both by "2", the factor for a 5 mOhm stacked shunt mod. That gives 513.0 after the *2 multiplier.
> Then if PCIE has a 10 mOhm stacked shunt, multiply the 41.9W PCIE watts by 1.5x (for the 10 mOhm shunt multiplier)
> For 62.85W.
> Then add them
> For a total of 575.85 watts?
> 
> i'm assuming this is correct math?
> 
> My question is, why is 8 pin #2 only drawing 104 watts? It seems like the 150W cap of 8 pin #1 is causing a throttle flag?


Not sure why pin 2 draws less power than pin 1 (on the 3x8-pin bios), but I never used a 3x8-pin bios until after I shunt modded. On the reference PNY bios, pins 1 and 2 draw almost the same power; usually the difference between them is about 2-5 watts.


----------



## motivman

pat182 said:


> +155 +550 strix OC not modded gonna try the 520watt bios soon, hitting the 480watts all bench long
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. PRIME Z390-A (3dmark.com)
> 
> edit; stil stock bios +155 +800
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. PRIME Z390-A (3dmark.com)
> 
> 54 points below motivman fully stock, i think i can go pass that if I stop pussying out !


You need to shunt your card. Mine is currently benching about 15.3K in PR, and my room temps are pretty high (29C ambient).


----------



## jimdurt

pat182 said:


> +155 +550 strix OC not modded gonna try the 520watt bios soon, hitting the 480watts all bench long
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. PRIME Z390-A (3dmark.com)
> 
> edit; stil stock bios +155 +800
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. PRIME Z390-A (3dmark.com)
> 
> 54 points below motivman fully stock, i think i can go pass that if I stop pussying out !


Do you have water block? Or still stock air cooler? I can get +135 core in Port Royal, but only when testing with outside air temps of 4C. Winter is good for benchmarks on stock air cooler. I am hitting 480w with my Strix 3090.


----------



## jura11

ArcticZero said:


> No negative effects flashing that BIOS on the PNY? I have the same card, just worried about the whole 2x8pin vs 3x8pin difference.


Hi there,

With this BIOS my scores in PR (Port Royal) are around 600-700 points lower than with the KFA2/Galax 390W bios, with 0 OC on core or VRAM; because of this I don't recommend using this BIOS on a non-shunted 2x8-pin GPU.

My PR score with the Kingpin 520W BIOS, with no OC on core or VRAM, is in the region of 12800 points.

With the KFA2/Galax 390W my stock score is 13487 or 13492 points.

Power draw from the wall is something like 80W lower than with the KFA2/Galax 390W BIOS.

Hope this helps.

Thanks, Jura


----------



## pat182

trying to get the


jimdurt said:


> Do you have water block? Or still stock air cooler? I can get +135 core in Port Royal, but only when testing with outside air temps of 4C. Winter is good for benchmarks on stock air cooler. I am hitting 480w with my Strix 3090.


air cooler


----------



## motivman

jura11 said:


> Hi there
> 
> With this BIOS my scores in PR(Port Royal) are around 600-700 points lower than with KFA2/Galax 390W with 0 OC on core or VRAM, due this I don't recommend use this BIOS on non shunted 2x8pin GPU
> 
> Score in PR is with Kingpin 520W BIOS with no OC on core and VRAM in region of 12800 points
> 
> With KFA2/Galax 390W my stock score is 13487 or 13492 points
> 
> Power draw from wall is something like 80W lower than with KFA2/Galax 390W BIOS
> 
> Hope this helps
> 
> Thanks, Jura


EXACTLY. A 2x8-pin card has to be shunt modded to benefit from this bios.


----------



## pat182

OK guys, I'm super happy with the results! Strix, bone stock, +160 +1000, air cooled. I don't think I can push it more unless I crack open a window. Gonna sleep on that and come back tomorrow with the 520-watt bios to see what's up.
Run here: NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900K Processor,ASUSTeK COMPUTER INC. PRIME Z390-A (3dmark.com)


----------



## pat182

motivman said:


> U need to shunt your card, mine is currently benching about 15.3K in PR, and my room temps are pretty high (29C ambient)


I'm too ***** for that, hahaha. A bios flash will be enough for me.


----------



## jimdurt

pat182 said:


> trying to get the
> 
> air cooler


Nice man. That's really good results for stock.


----------



## jomama22

bmgjet said:


> Dont use conductonaut theres better methods that dont cause long term damage.


Such as? I mean, you're not really damaging anything. The die will get a bit scuffed up, but that's not going to hurt it. Same with the waterblock. Even if you want to keep it for 7 years, you aren't going to do any damage that affects performance or usability.


----------



## bmgjet

jomama22 said:


> Such as? I mean, the you're not really damaging anything. The die will get a bit scuffed up but that's not going to hurt it. Same with the waterblock. Even if you want to keep it for 7 years, you aren't going to be doing any damage that affects performance or usability.


Read the original question. He isn't asking about using it for cooling; he wants to put it over the shunt resistors.


----------



## ExDarkxH

Flashed the KingPin bios on an FTW3 card, which should be the most compatible.
Anyway, the fans are not in sync.

At 100%:
fan 1: 2000rpm
fan 2: 3300rpm
fan 3: 3000rpm


----------



## DrunknFoo

ExDarkxH said:


> flashed the kingpin on a ftw3 card. should be most compatible
> anyways the fans are not in sync
> 
> at 100% :
> fan 1 2000rpm
> fan 2 3300rpm
> fan 3 3000rpm


Check your LEDs at the power connectors. Are they lit or off?

Something strange happened after I went back to the beta, retried the 520W, and redid the firmware update...

Still testing with the red lights lit up


----------



## DrunknFoo

Maybe better to continue over at evga forum...


----------



## ExDarkxH

Precision never needed a firmware update either. I know there is a new version listed on their website with support for the Kingpin, but I'm not sure I want it.


----------



## DrunknFoo

ExDarkxH said:


> Precision never needed a firmware update either. I know there is a new version listed on their website with support for the Kingpin, but I'm not sure I want it.


OK, that's the difference between us then. For some reason, though, whatever I had set up and working the first time around pulled more than what I am able to do now...

So I'm really clueless... But I will say it is functioning better than the beta


----------



## xrb936

HyperMatrix said:


> I think they know they priced it too high as well. Why else would they be selling the Hybrid model for the exact same price and HydroCopper for just $50 more? Or in other words, since these cards are only available through their own website with 5% discount, why would they be selling HydroCopper and Hybrid models for less than the retail price of the FTW3 Ultra basic edition?
> 
> Think about it...the XC3 Hybrid model sells for just $1538 on their own website. XC3 HydroCopper for $1567. That's some damn good value, tbh. But while the FTW3 is lesser quality and priced too high, I disagree that the KingPin should be the FTW3 card. I still think KingPin is better engineered than an FTW3/Strix level card


How did you join the EVGA waitlist for Hydro Copper? I cannot find a link there...


----------



## jomama22

bmgjet said:


> Read the orignal question, He isnt asking about using it for cooling. He wants to put it over the shunt resistors.


Ah my bad. Didn't see anything about putting it on shunts lol.


----------



## jomama22

Think this is the best I can do with the stock FE I have:
https://www.3dmark.com/spy/15824073

Can't wait to shunt it with a waterblock; she seems like a pretty good chip.

Also, Port Royal apparently does not want to work for me now *****. It just immediately exits before even the loading screen. Not sure what's up, but if anyone has any clues, throw me a line.


----------



## HyperMatrix

xrb936 said:


> How did you join the EVGA waitlist for Hydro Copper? I cannot find a link there...


It was only up for a few minutes on Friday the 13th. They stopped it immediately, as they only listed the number of units they expected to ship at the beginning of December. I'd expect the notify queue to become active again in a week or so.


----------



## man1ac

jomama22 said:


> Think this is the best I can do with the stock FE I have:
> https://www.3dmark.com/spy/15824073
> 
> Can't wait to shunt it with a waterblock; she seems like a pretty good chip.
> 
> Also, Port Royal apparently does not want to work for me now ***. It just immediately exits before even the loading screen. Not sure what's up, but if anyone has any clues, throw me a line.


Would you care to share a picture of your curve? I'd be eager to test it. I can only get about a 21,500 graphics score out of it (running on a 5900X, if that matters).


----------



## Thanh Nguyen

Strix:

I scored 15,533 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

Strix works better for me or something.

Kingpin:

I scored 15,482 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Nizzen

Thanh Nguyen said:


> Strix: I scored 15,533 in Port Royal
> 
> Strix works better for me or something.
> 
> Kingpin: I scored 15,482 in Port Royal


When a card is shunted, the original BIOS is always the best.


----------



## motivman

Thanh Nguyen said:


> Strix: I scored 15,533 in Port Royal
> 
> Strix works better for me or something.
> 
> Kingpin: I scored 15,482 in Port Royal


How are your temps that low? Is your computer out in the garage or something, or are you using chilled water?


----------



## Nizzen

25c average is chilled water.


----------



## Thanh Nguyen

motivman said:


> how are your temps that low? Is your computer out in the garage or something or are you using chilled water????


The Mora is in an open window.


----------



## motivman

Thanh Nguyen said:


> Mora is on opened window.


Nice! 15.3K is the best I can do with my card, but my ambient temps are high, like 24-25C.


----------



## geriatricpollywog

Thanh has a ton of radiators and New Mexico is a cold state.


----------



## motivman

0451 said:


> Thanh has a ton of radiators and New Mexico is a cold state.


Just took a look at the Port Royal top 30 for single cards. Besides GQnerd and myself, everyone else has load temps below 40C, so my card is not doing too badly, considering my load temp is 43C. I wonder how high I could score if I took this bad boy to the garage and ran some benchmarks, lol.


----------



## Thebc2

HyperMatrix said:


> Yeah max I was able to do is about 2040MHz after 5 minutes of gameplay but temp was at 77C and kept going up and soon crashed. Problem was finding a clock speed/voltage that the cooler was capable of keeping stable. Anything higher than around 1900MHz under 0.900v would eventually build up heat and crash. Some games were fine. Others were more sensitive. Like Red Dead Redemption 2. So in terms of stable clocks for hours long gaming sessions across a variety of games, I didn't have any guarantees above around 1900MHz.


Clearly a bad cooler on yours. I had a similar, although not quite as bad, experience with the Strix; glad I ended up keeping it and trying it under water.

With the stock Strix cooler I was topping out around 14400 PR but I was getting up into the 80s C.

I tried repasting the card with Kryonaut Extreme and it only dropped my temps a few C.

Fast forward a week and the EK block arrives. Temps are now stable in the 30s, and the chip ended up being a decent bin at the end of the day. They did an amazing job on their PCB, but there is something seriously up with how consistently their coolers make good contact. Obviously your luck will vary with bin regardless of what card you get. All that being said, if you can line up good cooling with a good PCB and a good bin, you really can't ask for much more.

I have zero love for EVGA, as I felt burned buying an FTW3, plus the 500W BIOS debacle that has ensued since then. But I would still disagree with people who say the Kingpin cards don't have any real/good differentiating features. The voltage control alone is worth the extra money. And from what I have seen of the PCB, it looks like they improved the component choices.

The air-cooled FTW3s are not horrible cards. They are just overpriced for what they are and not worth the $1800, IMO. And the fact that EVGA would drop their standards with the Ampere FTW3 is disappointing. It's also disappointing the way they have handled themselves with the 500W BIOS.


Sent from my iPhone using Tapatalk Pro


----------



## DrunknFoo

motivman said:


> Just took a look at the Port Royal top 30 for single cards. Besides GQnerd and myself, everyone else has load temps below 40C, so my card is not doing too badly, considering my load temp is 43C. I wonder how high I could score if I took this bad boy to the garage and ran some benchmarks, lol.


Last I saw, I was maybe 8-10 spots behind you with about a 60C average, lol.

I was averaging about 68C per run after the 520W BIOS, so I decided to go LM with the stock cooler. Now I average 60C, so a bit of hope, but 60C is still too hot to do much.


----------



## Carls_Car

Mr.Vegas said:


> There is new Byksky Block for Ventus 3090, came out last week and comes with backplate as set


Thanks man! But I’ve actually found a buyer for my Ventus 😂 Now to wait for a block for the Suprim X...


----------



## Nephalem89

One question: is the new EVGA Kingpin BIOS working fine on the ASUS TUF with 2x8 pins? Do the fans work fine? It's to push my power limit.


----------



## bmgjet

Nephalem89 said:


> One question: is the new EVGA Kingpin BIOS working fine on the ASUS TUF with 2x8 pins? Do the fans work fine? It's to push my power limit.


Don't use a 3-plug BIOS on 2-plug cards; it will always give less power.
The Gigabyte Gaming OC or KFA2 BIOS are the best for 2-plug cards, at 390W.

The maths is really easy to work out for using a 3-plug BIOS on a 2-plug card.

Take the Kingpin, for example:
520W.
Remove the PCIe slot power limit from that, which is 75W.
That gives you 445W split over 3 plugs.
Work out each plug's power limit by dividing by 3 plugs:
~148W per plug.
Your card only has 2 plugs, so 148W x 2:
296W over the plugs.
Add the slot power back in and you have ~371W real draw even when software reports 520W, since software just mirrors the plug 1 reading onto the plug 3 sensor.
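For anyone who wants to sanity-check this for other BIOS/plug combinations, the arithmetic above can be sketched as a tiny Python helper. The fixed 75W slot figure and the even per-plug split are the assumptions from this post, not anything read from the card:

```python
# Estimate real power draw when a BIOS written for `plugs_in_bios` 8-pin
# connectors runs on a card that only has `plugs_on_card` of them.
# Assumes the BIOS splits the non-slot budget evenly per plug and that
# the PCIe slot contributes a fixed 75 W, as described above.
def effective_limit(bios_limit_w, plugs_on_card, plugs_in_bios=3, slot_w=75):
    per_plug = (bios_limit_w - slot_w) / plugs_in_bios
    return per_plug * plugs_on_card + slot_w

# Kingpin 520 W BIOS on a 2-plug card: ~372 W real draw, matching the
# ~371 W worked out above (the post rounds 148.33 W down to 148 W).
print(round(effective_limit(520, 2)))
```

On a matching 2-plug 390W BIOS the helper returns the full 390W, which is why the 390W KFA2/Gigabyte BIOSes end up drawing more real power on these cards than the 520W Kingpin BIOS does.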


----------



## shiokarai

Where can I get the 520W Kingpin BIOS? Link? Also, anyone with a Strix 3090 on this 520W Kingpin BIOS: any weird issues, fans out of sync, non-working DP ports, etc.? Is it safe to flash to the Strix?


----------



## OC2000

Could anyone tell me which of the three 8-pins on the 3090 is #3?

I have managed to reshunt all 7 resistors and am now seeing 30W on the PCIe slot and 90W on 8-pins 1 & 2, but 150W on 8-pin 3, so I need to reshunt that one. Problem is, I'm not sure which side GPU-Z considers the #3 8-pin.

Any ideas?

Thanks


----------



## defiledge

Nizzen said:


> When a card is shunted, orginal bios is always the best


why


----------



## dr/owned

Might have a shot at an MSRP Strix... need to be talked out of needing it. Just bad timing, when I haven't yet been able to see what my full-hack TUF can do. On paper I suspect the TUF can handle 800W without issue, but I don't know how much power 1.20-1.25V on the core is really going to consume. Sure, the Strix has a VRM that can pop a breaker, but does it even matter?


----------



## pat182

shiokarai said:


> Where to get the 520w kingpin bios? Link? Also, anyone with strix 3090 with this 520w kingpin bios - any weird issues, fans out of sync, not working DP ports etc? Safe flash to the strix?


TechPowerUp. I might try it on my stock Strix tonight; I'll tell you.


----------



## Fire2

WilliamLeGod said:


> Has any1 tested LN2 520W on the Gaming X trio yet?


I would love to know also!
Any brave MSI Trio owners out there?!

VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp - the brave 520W BIOS!


----------



## pat182

Does the Strix really have a dual BIOS, or is it the same BIOS with a fan profile switch? I want to flash the Kingpin onto the silent BIOS and keep the performance BIOS original so I can go back and forth.


----------



## krs360

Anyone with the ASUS TUF 3080/3090 have really bad coil whine under an EK waterblock? I've never heard anything like it in my life; it's terrible under load. The coil whine is easily as loud as the stock fans going full tilt, so it has more or less made the rest of my silent custom loop pointless.

Also, I need to reassemble the stock cooler and test again. Does anyone know the thermal pad sizes I should use to reassemble the stock cooler? Some of it seemed like 2mm thick, but it honestly fell apart when removing it from the stock cooler, and I never expected to have to reassemble it again so soon.


----------



## defiledge

krs360 said:


> Anyone with the ASUS TUF 3080/3090 have really bad coil whine under an EK waterblock? Never heard anything like it in my life, it's terrible under load. The coil whine is easily as loud as the stock fans going full tilt so it's more or less made the rest of my silent custom loop pointless.
> 
> Also, I need to re assemble the stock cooler and test again. Does anyone know the thermal pad sizes I should use to re assemble the stock cooler? Some of it seemed like 2mm thick but it honestly fell apart when removing it from the stock cooler and I never expected to have to re assemble it again so soon.


2.5mm and 1.5mm


----------



## krs360

defiledge said:


> 2.5mm and 1.5mm


Thanks for the reply.

Don't suppose you know which pads go where? I can use the teardown from GN, but it's not clear which size goes where, apart from the one time I think he mentions the size of the pads he just took off. As it's going back to the supplier, I really want it to be right so there can't be any "you used the wrong pads and that's why it's doing this."


----------



## jura11

krs360 said:


> Anyone with the ASUS TUF 3080/3090 have really bad coil whine under an EK waterblock? Never heard anything like it in my life, it's terrible under load. The coil whine is easily as loud as the stock fans going full tilt so it's more or less made the rest of my silent custom loop pointless.
> 
> Also, I need to re assemble the stock cooler and test again. Does anyone know the thermal pad sizes I should use to re assemble the stock cooler? Some of it seemed like 2mm thick but it honestly fell apart when removing it from the stock cooler and I never expected to have to re assemble it again so soon.


Do you have the EK backplate fitted? If yes, try loosening the screws around the core until you no longer hear any coil whine. EK backplates seem to amplify coil whine, which is strange, but it seems to come down to how thin the backplate is.

With Bykski waterblocks and their backplate you won't hear any coil whine at all.

Hope this helps.

Thanks, Jura


----------



## Esenel

jura11 said:


> Do you have the EK backplate fitted? If yes, try loosening the screws around the core until you no longer hear any coil whine. EK backplates seem to amplify coil whine, which is strange, but it seems to come down to how thin the backplate is.


Only remove the two screws next to the socket of the EK backplate. ;-)


----------



## Pinto

Got the EK WB on my Strix and got the horrible coil whine too.
Removing the two screws partially reduces the sound, but it's still more audible than before fitting the waterblock.
Somebody tested without the thermal pads on the back VRM and it seems the whine was gone. Somebody else said that the Bykski block doesn't have this problem, but when I look at the AliExpress page for this block, I can see that there is no thermal interface with the back VRM, and contact has to be made with thermal grease, like Aquacomputer. I hate this method, as it's very difficult to clean and to get good, even contact with the memory.
So can someone who has the Bykski block confirm that, and is the contact between the EK backplate and the back VRM needed?
Performance is good, but the sound is...

US $129.76, 10% off | Bykski water block for ASUS RTX 3090/3080 Strix, full-cover copper GPU block, RGB, in stock | AliExpress
fr.aliexpress.com


----------



## shiokarai

Pinto said:


> Got the EK WB on my Strix and got the horrible coil whine too.
> Removing the two screws partially reduces the sound, but it's still more audible than before fitting the waterblock.
> Somebody tested without the thermal pads on the back VRM and it seems the whine was gone. Somebody else said that the Bykski block doesn't have this problem, but on the AliExpress page for this block there is no thermal interface with the back VRM; contact has to be made with thermal grease, like Aquacomputer.
> So can someone who has the Bykski block confirm that, and is the contact between the EK backplate and the back VRM needed?


Wait, what? The Bykski block doesn't have any thermal pads on the backplate? You're supposed to make a mess with thermal paste there? Wouldn't slim pads, i.e. 0.5-1.0mm, work there?


----------



## Nizzen

shiokarai said:


> Wait what, bykski block doesn't have any thermal pads on the backplate? You're supposed to make a mess with the thermal paste there? wouldn't slim ie. 0.5-1.0mm pads work there?


Bykski says pads only on the VRAM on the Strix backplate block.

Alphacool is very good, because it doesn't interfere with the stacked shunts.


----------



## ExDarkxH

motivman said:


> Just took a lot at port royal top 30 for single cards. Besides GQnerd and myself, everyone else has load temps below 40C, so my card is not doing too bad, considering my load temp is at 43C. I wonder how high I could score if I took this bad boy to the garage and ran some benchmarks, lol.





https://www.3dmark.com/pr/490499


Average in the 50s.
Can't do anything else to control temps as I'm on air, but the benchmark sits comfortably in the 60s near the second half of the run.


----------



## jura11

Pinto said:


> Got the EK WB on my Strix and got the horrible coil whine too.
> Removing the two screws partially reduces the sound, but it's still more audible than before fitting the waterblock.
> Somebody tested without the thermal pads on the back VRM and it seems the whine was gone. Somebody else said that the Bykski block doesn't have this problem, but on the AliExpress page for this block there is no thermal interface with the back VRM; contact has to be made with thermal grease, like Aquacomputer.
> So can someone who has the Bykski block confirm that, and is the contact between the EK backplate and the back VRM needed?


I'm sure with the Bykski backplate you use thermal pads on the VRM and VRAM chips, and thermal paste/TIM only on the core, and that's it.

The Aquacomputer is literally the easiest waterblock to use; spreading TIM or thermal paste on the VRAM chips takes a minute or two, and it's not a messy job. I would say it's much easier than using EK's thin thermal pads, which are some of the worst pads ever and will break or tear.

Aquacomputer and Bykski I think both use the same thermal pads, Thermalright or something very similar.

I'm thinking of going with Bykski as well for the Strix, because getting the Aquacomputer Kryographics block will be impossible for me; last time I waited I think 6 months.

Hope this helps 

Thanks, Jura


----------



## Warocia

krs360 said:


> Anyone with the ASUS TUF 3080/3090 have really bad coil whine under an EK waterblock? Never heard anything like it in my life, it's terrible under load. The coil whine is easily as loud as the stock fans going full tilt so it's more or less made the rest of my silent custom loop pointless.
> 
> Also, I need to re assemble the stock cooler and test again. Does anyone know the thermal pad sizes I should use to re assemble the stock cooler? Some of it seemed like 2mm thick but it honestly fell apart when removing it from the stock cooler and I never expected to have to re assemble it again so soon.


I have a 3090 TUF. It had terrible coil whine after the EK waterblock. I applied more thermal pads to the backplate and it is much better now. I have thermal pads everywhere now, like this region:


----------



## jura11

Nizzen said:


> bykski says only on Vram on strix backplate block.
> 
> Alphacool is very good, because it doesn't interfer with the stacked shunts


Bykski is the winner for me, because of price and performance.

Hope this helps 

Thanks, Jura


----------



## man1ac

If I got that right, I can flash the 500W EVGA BIOS onto my Trio?
When doing so, is there any downside? I'd like to use it for daily use and benches...


----------



## Pinto

shiokarai said:


> Wait what, bykski block doesn't have any thermal pads on the backplate? You're supposed to make a mess with the thermal paste there? wouldn't slim ie. 0.5-1.0mm pads work there?


I don't have this block, but look at the page I posted. It says thermal paste. That's why I'm looking for someone who has this particular block to confirm.


----------



## wirx

man1ac said:


> If I got that right, I can flash the 500W EVGA BIOS onto my Trio?
> When doing so, is there any downside? I'd like to use it for daily use and benches...


No downsides. I've used it with the stock MSI Trio X cooler for about a month already. Gaming temps with silent fans are about 65-70C, with normal fans 60-65C. But you can always downclock a little bit and get 50C on air. The card is much faster than on the stock BIOS.


----------



## krs360

Warocia said:


> I have a 3090 TUF. It had terrible coil whine after the EK waterblock. I applied more thermal pads to the backplate and it is much better now. I have thermal pads everywhere now, like this region
> View attachment 2467526


Hello there, as it happens my Amazon delivery of thermal pads just showed up. I don't suppose you'd mind sharing the thickness of the pads and where you placed them to dampen the noise? Happy to PM you if that's easier.


----------



## WilliamLeGod

wirx said:


> No downsides. I've used it with the stock MSI Trio X cooler for about a month already. Gaming temps with silent fans are about 65-70C, with normal fans 60-65C. But you can always downclock a little bit and get 50C on air. The card is much faster than on the stock BIOS.


Would it be dangerous in the long run because of a bad VRM? I've been using it for a month and have started to notice a weird click sound whenever I draw more than 400-500W, so I'm kinda scared.


----------



## pat182

Yeah, still waiting for an answer on whether the Strix is a real dual BIOS, so I can flash the 520W on one side and keep the 480W on the other, or whether the switch is only a fan profile select.


----------



## HyperMatrix

pat182 said:


> Yeah, still waiting for an answer on whether the Strix is a real dual BIOS, so I can flash the 520W on one side and keep the 480W on the other, or whether the switch is only a fan profile select.


It’s dual bios. Fan profiles are saved in the bios.


----------



## shiokarai

Fire2 said:


> I would love to know also!
> any brave msi trio owners out there?!
> 
> VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp - 520 brave bios !!


Is the BIOS in this link confirmed working OK?


----------



## OC2000

pat182 said:


> Yeah, still waiting for an answer on whether the Strix is a real dual BIOS, so I can flash the 520W on one side and keep the 480W on the other, or whether the switch is only a fan profile select.


Yes it is. I have the stock ASUS Strix 480W power on one BIOS and the EVGA 500W on the other, in place of the quiet mode.


----------



## DrunknFoo

ExDarkxH said:


> https://www.3dmark.com/pr/490499
> 
> 
> average in the 50s
> cant do anything else to control temps as im on air. but benchmark sits comfortably in the 60s near second half of bench


Try dropping the power slider down 2-4%; it might get you up a bit.

So, I've been rattling my brain around my Port Royal score. I am now at just a few ticks over 15,200 with the 520W. I read that since the update, scores we were once able to obtain are now harder to reach (I was clueless and unaware)... So I guess if you have improved, or are close to what you scored prior, you are heading in the right direction... /Shrug. I wonder if they will wipe out older entries; doesn't seem like the case...


----------



## geriatricpollywog

DrunknFoo said:


> Try dropping the power slider down 2-4%; it might get you up a bit.
> 
> So, I've been rattling my brain around my Port Royal score. I am now at just a few ticks over 15,200 with the 520W. I read that since the update, scores we were once able to obtain are now harder to reach (I was clueless and unaware)... So I guess if you have improved, or are close to what you scored prior, you are heading in the right direction... /Shrug. I wonder if they will wipe out older entries; doesn't seem like the case...


Kingpin and OGS just smashed new records today.


----------



## mirkendargen

Pinto said:


> Don't have this block but look at the page i posted. It says thermal paste. It's why i'm looking for someone who have this particular block to confirm.


It's a bad translation. When they say "thermal paste" they mean "thermal pads" (which it comes with) and when they say "thermal grease" they mean "thermal paste" (which it comes with but I used Kryonaut instead).


----------



## ExDarkxH

0451 said:


> Kingpin and OGS just smashed new records today.


More like Kingpin smashed a new record, and OGS posted a higher score he had saved just to keep himself on top (a typical tactic).


----------



## Pinto

mirkendargen said:


> It's a bad translation. When they say "thermal paste" they mean "thermal pads" (which it comes with) and when they say "thermal grease" they mean "thermal paste" (which it comes with but I used Kryonaut instead).


Ok thx, so no pads on the back vrm contrary to the EK one maybe it's a part of the answer...


----------



## ExDarkxH

Also, they removed Escapee's top "fake score", but he was allowed to change his name to xtremeoc.

So he has 3 fake scores, all in the top 10, listed under:
escapee
IKashI
xtremeoc

Cambotar, who scored 15,844, should be #10.
I consider his score beatable for me with chilled water (if Optimus ever ships my block).
This was my chance to move into the top 10.

With escapee making new account after new account, all with fake scores, I'm not sure this will ever be a reality.


----------



## geriatricpollywog

ExDarkxH said:


> more like kingpin smashed new record and OGS posted a higher score he had saved just to keep him on top (typical tactic used)


Why would you feel the need to make that assumption when the results clearly say Dec 1 and Dec 2?


----------



## ExDarkxH

Well, your original response was regarding the new 3DMark update reducing scores, but I'm almost positive they are using standalone versions anyway and don't need to update.
So it wouldn't really apply here.


----------



## DrunknFoo

ExDarkxH said:


> well your original response was regarding new 3dmark update reducing scores but im almost positive they are using stand alone versions anyways and dont need to update.
> So it wouldnt really apply here


About the update: any idea if there is an archive of the previous version? My profile only has a link to download whatever 3DMark offers for my account. I thought it was only HWiNFO that was updated... I didn't pay much attention to it until now.


----------



## Outcasst

Is there a way to flash the FE BIOS to a different card yet?


----------



## geriatricpollywog

ASUS GeForce RTX 3090, RTX 3080, RTX 3070 ROG STRIX White Graphics Cards Pictured, Beautiful & With Higher Factory Overclocks

ASUS's upcoming GeForce RTX 3090, 3080 & 3070 ROG STRIX White edition graphics cards have been listed by European retailer ProShop (via Videocardz). The retailer has listed six ROG STRIX White series cards: three factory-overclocked and three standard-clock models.
news.google.com

Looks like more scalping by Asus. $2,400 for a non-XOC card.


----------



## ExDarkxH

DrunknFoo said:


> about the update, any idea if there is an archive for the previous version? my profile only has a link to download whatever 3dmark offers for my account, i thought it was only the hwinfo that was updated... i didn't pay much attention to it until now


I believe techpowerup has it


----------



## HyperMatrix

0451 said:


> ASUS GeForce RTX 3090, RTX 3080, RTX 3070 ROG STRIX White Graphics Cards Pictured, Beautiful & With Higher Factory Overclocks
> news.google.com
> 
> Looks like more scalping by Asus. $2,400 for a non-XOC card.


You call that scalping? That's nothing. Check this out. Bastard won't even give you next-day shipping. Lol.

RTX 3090 EVGA KINGPIN - IN HAND | eBay
www.ebay.com


----------



## geriatricpollywog

HyperMatrix said:


> You call that scalping? That's nothing. Check this out. Bastard won't even give you next-day shipping. Lol.
> 
> RTX 3090 EVGA KINGPIN - IN HAND | eBay
> www.ebay.com


That card has been up for several days. It won’t sell at that price.

It’s less excusable when the manufacturer is scalping by charging more for a white shroud. Then again the “Republic of Gamerz” will no doubt go crazy for that white Strix.


----------



## geriatricpollywog

Classified tool

Classified | MediaFire
www.mediafire.com


----------



## ExDarkxH

0451 said:


> Classified tool
> 
> Classified | MediaFire
> www.mediafire.com


Is this the official one for the 3090 Kingpin???

I wish I could try it out, but I'm at work.


----------



## geriatricpollywog

Supposedly. It was shared on the EVGA forums.


----------



## defiledge

krs360 said:


> Thanks for the reply.
> 
> Don't suppose you know which pads go where? I can use the teardown from GN but it's not clear what size is where apart from the one time I think he mentions the size of the pads he just took off. As it's going back to the supplier I really want it to be right so there can't be any "you used the wrong pads and that's why its doing this"


I used 3mm and 2mm pads for my card: 3mm everywhere, and 2mm on the shorter column of VRMs to the left of the chip.


----------



## mirkendargen

0451 said:


> Classified tool
> 
> Classified | MediaFire
> www.mediafire.com


It definitely doesn't control core voltage on a Strix. I tried it at the desktop and under load; it didn't do anything.


----------



## lokran88

What core-to-water delta temp are people getting on their 3090s under load, on average?

Still waiting for the Aquacomputer Strix block, which is supposed to ship in a few days; I'd like to compare it to the EK and Bykski blocks, as I haven't seen any review yet.


----------



## defiledge

I'm getting 14.4k in Port Royal at 60°C on air, with a 3900X and CL16 3200MHz RAM. Will upgrading the CPU and RAM give higher Port Royal scores? I want to break 15k.


----------



## mirkendargen

lokran88 said:


> What delta core/water temp are ppl getting on their 3090s under load on average?
> 
> Still waiting for the Aquacomputer Strix block, which is supposed to ship in a few days and would like to compare it to EK and Bykski as I haven't seen any review yet.


15-17C on a Strix Bykski block mounted with Kryonaut pulling 520W.

My MSI 2080Ti with a factory-mounted EK block was more like a 25°C delta while pulling 150W less, so this is a HUGE improvement; but that block had a known issue where the posts were too long and made poor die contact.


----------



## jura11

lokran88 said:


> What delta core/water temp are ppl getting on their 3090s under load on average?
> 
> Still waiting for the Aquacomputer Strix block, which is supposed to ship in a few days and would like to compare it to EK and Bykski as I haven't seen any review yet.


The best way to measure the delta is idle vs. load temperature; in my case that's 10-12°C, and the Aquacomputer Kryographics RTX 2080Ti waterblock has a delta of something like 6-8°C.

If I measure core against water temperature instead, it's not really comparable: my water temperature is usually around 20-22°C, and my Palit RTX 3090 GamingPro with a Bykski RTX 3090 waterblock gets to 30-32°C at most; the highest temperature I have seen is 34°C.

For the money, the Bykski is a great waterblock. I'm very surprised by the temperatures I'm getting with it.

Hope this helps 

Thanks, Jura


----------



## lokran88

I can only refer to my 2080Ti, which at 330 watts had an 8°C delta with a Watercool Heatkiller IV and basically never exceeded 40°C core temperature. So this seems like a bigger delta to me than Turing had, though on the other hand the 2080Ti was pulling less power.

34°C is super low, but I guess you must be running a chiller then.


----------



## jura11

lokran88 said:


> I can only refer to my 2080ti which at 330 watts had an 8c delta with an Watercool Heatkiller IV and was basically never exceeding 40C core temp so this just seems like a bigger delta to me than Turing was, on the other hand it was pulling less power.
> 
> 34 is super low but I guess that you must be running a chiller then.


I wish I were running a chiller. I'm just running 4x 360mm radiators plus a Mo-Ra3 360 with a 3-GPU setup (2x RTX 2080Ti and an RTX 3090 in the same loop), and temperatures on all GPUs while rendering are 33-36°C (the Asus RTX 2080Ti Strix runs hottest of all the GPUs because it's on an EKWB Vector RTX 2080Ti Strix waterblock). The water delta T is usually 2-4°C in gaming or rendering, and usually 1-2°C in benchmarks.

Hope this helps 

Thanks, Jura


----------



## bmagnien

This may be a dumb question but hopefully an easy one to answer: if I constantly crash when boosting to 2100mhz or higher in Port Royal on 100% air fans averaging 60-65c, will I be able to boost higher if I go under water and drop temps by like 15c? Or is the crashing due to silicon quality and so even if I reduce temps I won’t be able to score any better if I’d still crash at 2100?


----------



## HyperMatrix

bmagnien said:


> This may be a dumb question but hopefully an easy one to answer: if I constantly crash when boosting to 2100mhz or higher in Port Royal on 100% air fans averaging 60-65c, will I be able to boost higher if I go under water and drop temps by like 15c? Or is the crashing due to silicon quality and so even if I reduce temps I won’t be able to score any better if I’d still crash at 2100?


Above 50-52C appears to be the point of thermal throttling with Ampere. If you're only crashing when you get to 60-65C, then yes a water block that keeps you under 50C should allow you to clock even a bit higher than 2100MHz assuming you're not power limited.


----------



## bmagnien

HyperMatrix said:


> Above 50-52C appears to be the point of thermal throttling with Ampere. If you're only crashing when you get to 60-65C, then yes a water block that keeps you under 50C should allow you to clock even a bit higher than 2100MHz assuming you're not power limited.


Definitely not power limited for my needs, just threw the KP bios on my FTW3 Ultra with only the PCIe slot shunted and am pulling about 560w which is all I want since I’m in a SFFPC. Planning on throwing on the hybrid kit to drop temps a bit and call it a day. Thanks!


----------



## mirkendargen

bmagnien said:


> This may be a dumb question but hopefully an easy one to answer: if I constantly crash when boosting to 2100mhz or higher in Port Royal on 100% air fans averaging 60-65c, will I be able to boost higher if I go under water and drop temps by like 15c? Or is the crashing due to silicon quality and so even if I reduce temps I won’t be able to score any better if I’d still crash at 2100?


You won't be able to boost higher stably, but you will be able to maintain your max stable clock (unless you're power limited too).


----------



## lokran88

jura11 said:


> Wish I'm running chiller there, I'm just running 4*360mm radiators plus MO-ra3 360mm with 3*GPUs setup(2*RTX 2080Ti and RTX 3090 in same loop) and temperatures on all GPUs in rendering are in 33-36°C(Asus RTX 2080Ti Strix runs hottest from all GPUs because is running EKWB Vector RTX 2080Ti Strix waterblock), water delta T usually in gaming or rendering is in 2-4°C and in benchmarks usually is in 1-2°C
> 
> Hope this helps
> 
> Thanks, Jura


Very interesting, thank you, as I also just bought a Mo-Ra3 420 but haven't used it yet; I am waiting for the 3090 Strix block.

I expected the MoRa to be a great radiator, but I did not expect it to give you such great temperatures. May I ask what pump config you are running for this loop?


----------



## DrunknFoo

ExDarkxH said:


> Cambotar, who scored 15,844, should be #10
> i consider his score beatable (if optimus ever ships my block)
> This was my chance to move into the top 10


Did you order from the first batch?


----------



## jura11

lokran88 said:


> Very interesting, thank you. As I also just bought a Mo-Ra3 420, but haven't used it yet as I am waiting for the 3090 Strix block.
> 
> I expected the MoRa to be a great radiator, but I did not expect it to give you such great temperatures. May I ask what pump config you are running for this loop?


Yes, I agree, Mo-Ra3 radiators are great radiators for the money. My case is a Caselabs M8 with pedestal and I'm running 4x D5 pumps, but in your case you should be okay with a 2-pump setup, or maybe even one pump will be more than enough.

The other radiators in my loop are 2x HWLabs SR-2 360mm and 2x HWLabs GTS360 in the pedestal, and I run the fans on all radiators at 650-750RPM max.

Hope this helps 

Thanks, Jura


----------



## MacMus

Guys,

What board is on the MSI RTX 3090 VENTUS 3X?

Is it a reference board?

L.


----------



## chispy

ExDarkxH said:


> Also, they removed Escapees top "fake score" but he was allowed to change his name to xtremeoc
> 
> so he has 3 fake scores all in the top 10 listed under:
> escapee
> IKashI
> xtremeoc
> 
> Cambotar, who scored 15,844, should be #10
> i consider his score beatable for me with chilled water (if optimus ever ships my block)
> This was my chance to move into the top 10
> 
> With escapee making new account after new account with all fake scores im not sure this will ever my a reality


Yes, sadly he's still cheating his way to the top. He has 4 accounts; add escapeee (3 e's) too.

Guys, you need to report his cheated scores, otherwise they will go unnoticed by the FM/UL mods and stay there forever and ever...


----------



## ExDarkxH

DrunknFoo said:


> Did you order from the first batch?


yes i did actually


----------



## chispy

ExDarkxH said:


> yes i did actually



Thank you, I did too. Everyone else who wants to report escapee's cheated scores is welcome to do so. We do not need people like this in the overclocking scene; it ruins it even for the top legendary overclockers in the world like kingpin, OGS, safedisk, and most of you guys, etc.


----------



## mirkendargen

jura11 said:


> Yes I agree, Mo-ra3 radiators are great radiators for money for sure, my case is Caselabs M8 with pedestal and I'm running 4xD5 pumps but in your case you should be okay with maybe 2 pump setup or maybe one pump will be more than enough
> 
> Other radiators I have in my loop are 2x HWLabs SR-2 360mm and 2x HWLabs GTS360 at pedestal and fans I'm running on all radiators in 650-750RPM as max
> 
> Hope this helps
> 
> Thanks, Jura


4x D5's, jesus... I thought my 2x D5's for a triple-180mm rad, 280mm rad, 3090 block, 3x RAM blocks, Threadripper block was overkill other than the redundancy to cover a pump failure...


----------



## pat182

Using nvflash 5.667 I can't flash the 520W BIOS on my Strix; even with -6 it says mismatch. Any ideas?


----------



## mirkendargen

pat182 said:


> Using nvflash 5.667 I can't flash the 520W BIOS on my Strix; even with -6 it says mismatch. Any ideas?


Just push Y when it says there's a mismatch.


----------



## pat182

mirkendargen said:


> Just push Y when it says there's a mismatch.


It just aborts; I clicked Y two times and it says it failed to flash.


----------



## mirkendargen

pat182 said:


> It just aborts; I clicked Y two times and it says it failed to flash.


Dunno, I flashed it fine on my Strix, but I was flashing it from the 500W EVGA BIOS, not from the stock Strix BIOS. Maybe that matters but I doubt it. Did you open cmd/powershell as admin?


----------



## DrunknFoo

Anyone try the uploaded Classified tool?
I doubt it'll function, but thought I'd ask; not playing around with the card till later tonight.


----------



## pat182

Is there a specific version of nvflash I need for flashing another BIOS with -6?


----------



## mirkendargen

DrunknFoo said:


> Anyone try the uploaded classified tool?
> I doubt itll function, thought i ask, not playing around with the card till later tonight


Tried it on a Strix, did nothing as expected.


----------



## pat182

Tried 5.660 and 5.667 with: nvflash -6 227017.rom (KPE BIOS)

Keep getting a write protection error on my Strix.

----------



## HyperMatrix

pat182 said:


> Tried 5.660 and 5.667 with: nvflash -6 227017.rom (KPE BIOS)
> 
> Keep getting a write protection error on my Strix.


Did you do _nvflash --protectoff_ first?
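For reference, the full sequence being described in this thread looks like this. This is a sketch, not vendor documentation: the ROM filename is a placeholder, and the commands need to be run from an elevated prompt.

```shell
# Disable the card's EEPROM write protection first, then flash.
# -6 lets you override the PCI subsystem ID mismatch prompt (answer Y).
# "kpe_520w.rom" is a placeholder filename.
nvflash --protectoff
nvflash -6 kpe_520w.rom
```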


----------



## pat182

HyperMatrix said:


> Did you do _nvflash --protectoff_ first?


I just did on my last attempt, and I've now successfully flashed the 500W BIOS. Not sure if the KPE BIOS was the problem or something else.

So I have to run --protectoff, hit enter, then nvflash -6?


----------



## bmgjet

The Classified tool is written in Microsoft Visual C/C++ (2017 v15.9), so it's easy to decompile and look through.
At a quick glance, it first scans for i2c controllers at 0x138 to 0x142.
If 0 controllers are found, it throws a "device not supported" message.

It would be easy to NOP out the checks, but that's pointless since there are no controllers to address.
The info would be more useful for adding KP support to Afterburner using the addresses it scans for.
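As a rough sketch, the startup logic described above looks something like this. It's a toy model only: the probe callback stands in for real i2c reads, and the behavior is inferred from the decompile notes, not from the actual tool.

```python
# Toy model of the Classified tool's startup check: probe a fixed
# controller address window; if nothing answers, bail out with
# "device not supported". The probe callback is a stand-in for
# real i2c access (hypothetical, for illustration only).

PROBE_START, PROBE_END = 0x138, 0x142

def find_controllers(probe):
    """Return the addresses in the scan window that answered a probe."""
    return [addr for addr in range(PROBE_START, PROBE_END + 1) if probe(addr)]

def startup_check(probe):
    controllers = find_controllers(probe)
    if not controllers:
        return "Device not supported"   # what an XC3/Strix owner would see
    return f"{len(controllers)} controller(s) found"

# Simulate a card where one controller answers at 0x140,
# and one where nothing answers at all:
print(startup_check({0x140}.__contains__))  # 1 controller(s) found
print(startup_check(lambda addr: False))    # Device not supported
```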


----------



## mirkendargen

bmgjet said:


> The Classified tool is written in Microsoft Visual C/C++ (2017 v15.9), so it's easy to decompile and look through.
> At a quick glance, it first scans for i2c controllers at 0x138 to 0x142.
> If 0 controllers are found, it throws a "device not supported" message.
> 
> It would be easy to NOP out the checks, but that's pointless since there are no controllers to address.
> The info would be more useful for adding KP support to Afterburner using the addresses it scans for.


Interesting. I did NOT get any sort of "device not supported" error when running it on a Strix with the Kingpin BIOS, it just didn't work. The BIOS is probably projecting a phantom i2c controller that isn't doing anything. I'll try it with the actual Strix BIOS for the hell of it.

Confirmed, I get "Device unsupported" when running it on the actual Strix BIOS.


----------



## Pinto

Using the 520W BIOS on my Strix; no special flash needed, it works flawlessly. Water temp 20°C.











I scored 15 306 in Port Royal


Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## jura11

mirkendargen said:


> 4x D5's, jesus... I thought my 2x D5's for a triple-180mm rad, 280mm rad, 3090 block, 3x RAM blocks, Threadripper block was overkill other than the redundancy to cover a pump failure...


I previously ran a single D5 with dual 18W DDC pumps, and the flow rate was okay at 125-150LPH with the 4-GPU setup I had at the time (RTX 2080Ti, GTX 1080Ti and 2x GTX 1080s). Currently I'm running 3 D5 pumps at full speed with the 4th pump at its lowest speed, 1700-1800RPM, and the flow rate is 208-220LPH.

I agree it's an overkill pump setup; I will probably downsize to 3 pumps, but there are no issues running this many. The noisiest thing in my loop is the PSU (Superflower 8 Pack 2000W).

What is your flow rate in your loop? 

Hope this helps 

Thanks, Jura


----------



## bmgjet

Here's what it says with the i2c checks still in place.
It's an XC3 card, so the scan returns 0 addresses and it pushes this message box before closing itself.









The XC3 has all analog controllers, so it's a very quick test to fail.
Maybe on the Strix, which has some digital controllers that can be driven with an EVC2, the scan takes longer and finds them; then somewhere later the program sees they aren't the right ones for a Kingpin card and closes.
It might be possible to change some addresses around and get it working with those controllers.

Either way it's no use to me since I don't have a Strix card lol.


----------



## geriatricpollywog

Upon closer inspection, that classified tool has fewer switches and dials than the 2080ti version. It must be for the 1080ti or older. Here is a link to the 2080ti version.


https://xdevs.com/doc/_PC_HW/EVGA/E200/tool/Classified_r4.zip


----------



## pat182

520watt succes ! benching now


----------



## mirkendargen

jura11 said:


> I have run previously single D5 with dual 18W DDC pumps and flow rate has been okay at 125-150LPH that's with 4*GPUs(RTX 2080Ti and GTX1080Ti and 2*GTX1080's) setup that time, currently running 3 D5 pumps at full speed and 4th pump is running at lowest speed at 1700-1800RPM and flow rate is 208-220LPH
> 
> I think so too its overkill pump setup, will probably downsize to 3 pump setup but no issues running so many pumps, noisiest thing on my loop is PSU(Superflower 8 Pack 2000W PSU)
> 
> What is your flow rate in your loop?
> 
> Hope this helps
> 
> Thanks, Jura


3.2LPM according to my Koolance flowmeter, I don't know how accurate it is. I'm in the same boat with the noisiest thing in my computer being my 1000w EVGA (Superflower) PSU.


----------



## pat182

Well, good news: on an air-cooled Strix in Port Royal with the 520W BIOS, I can sustain 2100MHz for the whole run if the temperature stays below 70°C.


----------



## jomama22

man1ac said:


> Would you care to share a picture of your curve? I'd be eager to test it. I only can get about 21500 graphic score out of it (running on a 5900X if that matters).


Yeah, here it is. Nothing special. Just using an offset.









and a hair better:
NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. MEG X570 ACE (MS-7C35) (3dmark.com)
Finally got a CPU score over 18k.


----------



## pat182

Well, that's the best I can do on air with the 520W BIOS. Power is no longer my hard limit; temperature is. And to be honest I'm fine with that: now I know I can play RTX games without being bottlenecked by software, and I'm going to squeeze out all the FPS I can on the stock cooler with no mods. It was a fun ride. For 40 more watts I got a 15MHz gain, to 2100MHz flat-lining with no drops.


----------



## defiledge

This guy melted his cable drawing 255W from the PCIe 8-pin:

https://www.reddit.com/r/nvidia/comments/k5l9qs
My shunt-modded 8-pins are drawing around 220W. Do you guys think shunt mods will cause cable melts? Or does this guy just have a **** PSU?


----------



## HyperMatrix

defiledge said:


> This guy melted his cable drawing 255W from the PCIe 8-pin:
> 
> https://www.reddit.com/r/nvidia/comments/k5l9qs
> My shunt-modded 8-pins are drawing around 220W. Do you guys think shunt mods will cause cable melts? Or does this guy just have a **** PSU?


It's all about the PSU cable. If the cable was made just to spec, then he was running it 50% above spec. But this isn't typical; it would have to be an absolute trash cable. I would guess 99% of brand-name PSUs that aren't bargain-basement priced would have no issue handling even 2x the spec power draw. I guess the lesson is: don't run a $700 video card while piggybacking cables off a $50 PSU, which I assume is the case since he didn't just use a second cable, which every PSU I can think of includes.

If you're worried about your system you can always do a touch test on the cables. There should be little to no perceptible heat coming from them.


----------



## dante`afk

Did you read what he wrote? He used a single slot on his PSU; you're supposed to use (per NVIDIA) two separate ones.

Anyway, I think he had a crap PSU and crap cables.


----------



## MacMus

Are there any modded BIOSes for the MSI RTX 3090 VENTUS?
In other words, how f$%#$%ed am I with this card?


----------



## motivman

dante`afk said:


> Did you read what he wrote? He used a single slot on his PSU; you're supposed to use (per NVIDIA) two separate ones.
> 
> Anyway, I think he had a crap PSU and crap cables.


I recognize that PSU and it's not crap at all, it's an EVGA G2; great PSUs. However, I had enough sense to use separate plugs for each of my 8-pins, lol. Some people shouldn't be allowed to play with expensive toys, SMH.


----------



## pat182

MacMus said:


> Are there any modded BIOSes for the MSI RTX 3090 VENTUS?
> In other words, how f$%#$%ed am I with this card?


The Gigabyte one; had one before, it's hot trash haha... a 350W or 390W BIOS just doesn't do the 3090 justice.


----------



## pat182

The EVGA BIOS isn't showing the third pin's power; do you just assume 150W? 'Cause in the control software I'm getting 550W on the KPE BIOS with pin 2 at 180W.


----------



## defiledge

dante`afk said:


> Did you read what he wrote? He used a single slot on his PSU; you're supposed to use (per NVIDIA) two separate ones.
> 
> Anyway, I think he had a crap PSU and crap cables.


Yes, and a shunt mod can draw more than that per cable.


motivman said:


> I recognize that PSU, it's not crap at all, its an EVGA G2.. great PSU's. However, I had enough sense to use separate plugs for each of my 8 pins, lol. some people shouldn't be allowed to play with expensive toys, SMH.


I'm pretty sure a shunt-modded PCIE 8pin can pull more than what his cable did.


----------



## bmgjet

That guy had 300W coming over the cable. He used one of those single cables that splits out to two 8-pins.
Which, yeah, is pushing the wires if they are sleeved in a case with no airflow.



MacMus said:


> Are there any modded BIOSes for the MSI RTX 3090 VENTUS?
> In other words, how f$%#$%ed am I with this card?


The Gigabyte Gaming OC or KFA2 390W BIOS are your best choices.
The 3090 is really gimped under 450W though, so you're leaving a lot on the table with 2-plug cards if you don't shunt mod them.


----------



## mirkendargen

defiledge said:


> This guy melted his cable drawing 255W from the PCIe 8-pin:
> 
> https://www.reddit.com/r/nvidia/comments/k5l9qs
> My shunt-modded 8-pins are drawing around 220W. Do you guys think shunt mods will cause cable melts? Or does this guy just have a **** PSU?


Just (carefully) feel how warm your cabling and connectors are under load. I did something super ghetto like this to make a 1070 run in a Dell that was basically powering it from 1 6pin and 1 molex. I just felt the cabling under load, it was only slightly warm, called it good, wasn't ever an issue.


----------



## jomama22

The issue isn't the wires at all; 16 or 18 gauge is fine. It's the physical connectors that can't handle the wattage. That's why the cables are rated at 150W. It doesn't matter if it's a 1500W or a 500W PSU; most vendors source the connectors from the same place (most likely Molex themselves).

That's what happened in this case: the connector failed. So yes, there should be some worry if you are shunting and pulling 300W per cable for any extended amount of time, tbh...
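To put ballpark numbers on the connector concern: an 8-pin PCIe connector carries +12V on three pins, so the per-pin current scales directly with cable load. The figures below are illustrative only; real per-pin terminal ratings vary by manufacturer and terminal type.

```python
# Rough per-pin current through an 8-pin PCIe connector (3 +12V pins).
# Ballpark figures, not a safety spec: standard Mini-Fit Jr terminals
# are commonly quoted in the vicinity of 8-9 A per pin.

V_RAIL = 12.0   # volts on the PCIe power rail
HOT_PINS = 3    # +12V pins in an 8-pin PCIe connector

def amps_per_pin(watts):
    """Current per +12V pin at a given cable load."""
    return watts / V_RAIL / HOT_PINS

for watts in (150, 300, 400):
    print(f"{watts}W -> {amps_per_pin(watts):.1f} A per +12V pin")
```

At the 150W spec each hot pin sees about 4.2A; at the 300W shunt-mod loads discussed here it's roughly double that, which is why the crimp contacts become the weak point.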


----------



## dangerSK

bmgjet said:


> Heres what it says with the i2c checks still in place.
> XC3 card so it returns 0 address after the scan and pushes this messagebox before closing itself.
> View attachment 2467608
> 
> 
> XC3 has all analog controllers so its a very quick test to fail.
> Maybe with the Strix which has some digital ones that can be controlled with EVC2 is taking longer to scan and finds them. Then some where later in the program it sees they arnt the right ones for a kingpin card so closes.
> Might be posiable to change some addresses around and get it working with those controllers.
> 
> Either way no use to me since I dont have a strix card lol.


It's a lot easier to control it through i2c with Afterburner, as long as the IC is supported. The MP2888A present on the Strix is supported; however, you still need to scan i2c addresses to find the voltage loops. You can try that.
The final Afterburner profile should look like this (it's for a 2080Ti, but the same principle applies to the 3090 too, since it's the same IC):

```
; 2080Ti LIGHTNING 3774
[VEN_10DE&DEV_1E07&SUBSYS_37741462&REV_??]
Desc = 2080Ti Lightning XOC
VDDC_Generic_Detection = 0
VDDC_MP2888A_Detection = 4:15h
VDDC_MP2888A_Type = 0
CoreVoltageMax = 1200
```


----------



## dangerSK

mirkendargen said:


> Interesting. I did NOT get any sort of "device not supported" error when running it on a Strix with the Kingpin BIOS, it just didn't work. The BIOS is probably projecting a phantom i2c controller that isn't doing anything. I'll try it with the actual Strix BIOS for the hell of it.
> 
> Confirmed, I get "Device unsupported" when running it on the actual Strix BIOS.


You didn't get an error message because the Classy tool is just comparing the device ID, and you didn't get voltage control because the MP2888A can use multiple addresses for its voltage loops. Run an i2c dump to find the voltage loops and you can use Afterburner to control voltage. Again, I'm probably talking too much and the Guru3D boys are going to be angry again.


----------



## DrunknFoo

dangerSK said:


> You didn't get an error message because the Classy tool is just comparing the device ID, and you didn't get voltage control because the MP2888A can use multiple addresses for its voltage loops. Run an i2c dump to find the voltage loops and you can use Afterburner to control voltage. Again, I'm probably talking too much and the Guru3D boys are going to be angry again.


Tinkered around with it trying to manipulate something, anything... nothing applied on the FTW3.


----------



## dangerSK

DrunknFoo said:


> Tinkered around with it trying to manipulate something / anything... Nothing applying on the ftw


As mentioned multiple times, the FTW3 uses the uP9511R, which has no i2c support; the only way to control voltage is through a hardmod.


----------



## dr/owned

jomama22 said:


> The issue isn't the wires at all. 16 or 18 gauge is fine. It's the physical connectors that can't handle the wattage. That's why the cables are rated at 150w. Doesn't matter if it's a 1500w or 500w, most places will absolutely source the connectors from the same place (most likely molex themselves).
> 
> That is what happens in this case. The connector failed. So yes, there should be some worry if you are shunting and pulling 300w per cable for any extended amount of time tbh...


We don't really know what cable he used... he could have bought some made-in-China "custom sleeved cable" that uses crappy crimp connectors.

The wire gauge does matter to a degree, because the copper will heatsink the connector. (Stock cables are 18AWG; you can buy 16AWG custom sleeved cables.)

This makes me mildly concerned, since I'm planning on pulling about 400W from each cable; I only have a 2-connector card and disabled the PCIe slot power.


----------



## Sarieri

dr/owned said:


> We don't really know what cable he used....because he could have bought some made-in-China "custom sleeved cable" that uses crappy crimp connectors.
> 
> The wire gauge does matter to a certain degree because the copper will heatsink the connector. (Stock cables are 18awg, you can buy 16awg custom sleeved cables)
> 
> This makes me mildly concerned that I'm planning on pulling about 400W from each cable, cause I only have a 2 connector card and disabled the PCIe slot power.


IMHO, the cables he used seem to be the ones that came with the PSU.


----------



## geriatricpollywog

Just don't use CableMod cables. I melted one pushing 500-600w into a Vega 64. Since then I've only used the cables that came with my PSU.


----------



## jomama22

dr/owned said:


> We don't really know what cable he used....because he could have bought some made-in-China "custom sleeved cable" that uses crappy crimp connectors.
> 
> The wire gauge does matter to a certain degree because the copper will heatsink the connector. (Stock cables are 18awg, you can buy 16awg custom sleeved cables)
> 
> This makes me mildly concerned that I'm planning on pulling about 400W from each cable, cause I only have a 2 connector card and disabled the PCIe slot power.


I don't think 400W is advisable no matter what quality of cable you have, tbh lol. For short bursts it's probably fine, but anything beyond that is definitely risky. I would be surprised if you could even pull that much without going sub-zero.

There are some manufacturers that use 16 gauge wire (my old Thermaltake 1475W does, for example) but yeah, the majority seem to be 18. Though again, I'd be more worried about the connectors failing at the crimp contacts (like in the Reddit link) before the actual wiring fails.


----------



## mirkendargen

dangerSK said:


> U didnt get error msg because Classy tool is just comparing device id, and u didnt get voltage control because MP2888A can use multiple adresses for voltage loops, run i2c dump to find voltage loops and u can use afterburner to control voltage, again im probably talking too much and 3Dguru boys are gonna be angry again.


You got me started down a rabbit hole...but for some reason the Afterburner command line switches aren't working for me to test I2C stuff, they're just opening Afterburner. I can't find any documentation indicating they're gone in the latest version...

And I see what you mean about the Guru3d people being mad at you in the thread there, heh...


----------



## motivman

0451 said:


> Just don't use CableMod cables. I melted one pushing 500-600w into a Vega 64. Since then I've only used the cables that came with my PSU.


So would you say about 350 watts is safe to push through one 8-pin PCIe cable?


----------



## bogdi1988

There's also a chance that homie didn't plug it all the way into the PSU, causing an imperfect connection that melted the plug. The same thing happened on my 3D printer recently...


----------



## geriatricpollywog

motivman said:


> So would you say about 350 watts is Safe to push through one 8 pin PCIE cable?


If a Kingpin pulls 900W on LN2, subtract 75W for the PCIe slot, then divide by 3 and you get 275W per cable for LN2 benching. I wouldn't want to pull more than 275W, even on quality cables with a quality power supply.
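The arithmetic above, spelled out (the 75W slot contribution and 3-cable split are the assumptions from the post; 900W is the LN2 example given):

```python
# Per-cable draw estimate: total board power minus the PCIe slot's
# assumed share, split evenly across the card's 8-pin cables.

SLOT_W = 75    # assumed PCIe slot contribution
CABLES = 3     # 8-pin connectors on a Kingpin

def per_cable(total_watts):
    return (total_watts - SLOT_W) / CABLES

print(per_cable(900))  # -> 275.0
```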


----------



## jomama22

motivman said:


> So would you say about 350 watts is Safe to push through one 8 pin PCIE cable?


For short bursts, probably. Anything sustained is going to run the risk of failure. I would like to think most of us aren't attempting to consume 700W total board power on two-plug cards (assuming 100W goes through the PCIe slot).

I wouldn't want to game with over 300W going through any PCIe cable.


----------



## jomama22

bogdi1988 said:


> There's also a chance that homie didn't plug that all the way in the PSU which caused for an imperfect connection which melted the plug. Happened the same thing on my 3D printer recently...


Yeah, this is the only other explanation I can think of: an arc being created. It could be worn-out crimp connections as well; they only last so many insertion/removal cycles before stretching out of shape enough to create localized arcing.


----------



## dangerSK

mirkendargen said:


> You got me started down a rabbit hole...but for some reason the Afterburner command line switches aren't working for me to test I2C stuff, they're just opening Afterburner. I can't find any documentation indicating they're gone in the latest version...
> 
> And I see what you mean about the Guru3d people being mad at you in the thread there, heh...


Don't go down this road. Everything is under NDA, it's complicated, and no one will help you.


----------



## mirkendargen

dangerSK said:


> Don't go down this road. Everything is under NDA, it's complicated, and no one will help you.


This is an ASUS card not an MSI card, I doubt the Afterburner people care


----------



## dangerSK

mirkendargen said:


> This is an ASUS card not an MSI card, I doubt the Afterburner people care


Afterburner is developed by the Guru3D guys. Last time, I got yelled at for showing people how to overvolt their Lightning cards; the problem is they don't like it if you kill your card with their software.


----------



## bmgjet

A 3090 isn't the card you want to blindly play with i2c edits on, lol.
Use an EVC2 and the hardware breakout you have on your card to connect to it; that's a lot less risky.


----------



## khunpunTH

Finally broke 15,000 in Port Royal on air.
MSI 3090 Gaming X Trio, stock cooling, 520W BIOS.
No water block, no shunt mod.

pr 15041


https://www.3dmark.com/pr/585960



time spy graphic score 22865


https://www.3dmark.com/spy/15866399



Be aware that with the 520W BIOS your first fan will max out at 2000 RPM and sometimes your second fan will stop.
Use it carefully; this BIOS is not designed for air cooling.


----------



## Carls_Car

My Suprim X came in today.

Here is stock everything for TS and PR, respectively:



https://www.3dmark.com/spy/15872728




https://www.3dmark.com/pr/586076


----------



## Keninishna

dangerSK said:


> Its lot easier to control through i2c with afterburner, as long as the i2c is supported. MP2888A present on strix is supported however u still need to scan i2c adresses to find voltage loops, u can try that.
> final afterburner file should look like this : (its for 2080Ti however same principle applies for 3090 too, since same ic)
> ; 2080Ti LIGHTNING 3774
> [VEN_10DE&DEV_1E07&SUBSYS_37741462&REV_??]
> Desc = 2080Ti Lightning XOC
> VDDC_Generic_Detection = 0
> VDDC_MP2888A_Detection = 4:15h
> VDDC_MP2888A_Type = 0
> CoreVoltageMax = 1200


Is there a tool to scan i2c addresses? I have a Gigabyte card which I think has a digital controller on it.


----------



## Carls_Car

More from the Suprim X:

+100 core / +200 mem -- 85% fan speed



https://www.3dmark.com/spy/15873077




https://www.3dmark.com/pr/586121



This thing is fantastic.


----------



## mirkendargen

Keninishna said:


> is there a tool to scan i2c addresses? I have a gigabyte card which I think has a digital controller on it.


According to the documentation, "msiafterburner.exe /i2cd" should scan and dump them, but all it's doing for me is opening Afterburner.


----------



## dr/owned

jomama22 said:


> I don't think 400w is advisable no matter what quality of cable you have tbh lol. For short bursts it's probably fine but anything beyond that is definitely risky. I would be surprised if you could even pull that much without going sub zero tbh.
> 
> There are some manufacturers that use 16 gauge wire (my old thermaltake 1475w does for example) but yeah, that majority seem to be 18. Though again, I'd be more worried about the connectors failing at the crimp contacts (like in the reddit link) before the actual wiring fails.


I'm going to go for it just for fun. I may end up getting a Strix anyway and returning/selling the TUF (not particularly jazzed by either option), but I kinda want to see where the limits really are. Worst case, I melt an AX1500i and swap in a spare AX1200i that I upgraded from over 5 years ago, back when SLI 680 Lightnings were burning 1200W in the worst case. Kinda bonkers that even an AX1500i is "cheap" compared to the cost of the card. I also bought an IR camera because a) I just really want to live out my Predator fantasies, and b) it'll be handy to see what's going on with more precision than "ow, that hurts my finger" or an IR gun with a 3" cone of measurement.


----------



## Keninishna

mirkendargen said:


> According to the documentation, "msiafterburner.exe /i2cd" should scan and dump them, but all it's doing for me is opening Afterburner.


I found the documentation. It looks like it’s a bit more than that switch though.

*3.1. I2C dump tool*

I2C dump tool is intended for automated scanning of all devices residing on all I2C buses of each GPU installed in the system. The first register of each scanned I2C device is being polled, if the device reply to polling then dump of 256 byte registers is saved into the dump. I2C dump tool is activated via the command line with the following switch:

MSIAfterburner.exe /i2cd[[<i2c_bus>],[<i2c_device>],[<register>]]

where <i2c_bus> is I2C bus number, <i2c_device> is 7-bit I2C device address and <register> is address of I2C device register to be polled when checking I2C device availability. Optional <i2c_bus> and <i2c_device> parameters allow scanning required device address on all I2C buses or desired I2C bus only. When I2C bus and device address are not specified I2C dump tool is scanning all available I2C buses and 7-bit I2C address range starting from 00h up to 4Fh inclusive. Optional <register> parameter allows dumping state of some sophisticated I2C devices with unreadable register 0 (e.g. NCP4206).

It only scans up to 4Fh. I was reading that the controller on the Gigabyte, a uP9505,
is at either 88h or 8Ah.
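One wrinkle worth noting: vendor datasheets often quote 8-bit write addresses, while the dump tool works with 7-bit addresses, so a controller listed at 88h/8Ah would actually sit at 44h/45h in 7-bit form, inside the default 00h-4Fh scan range after all. A quick sanity check of that conversion (assuming the quoted addresses really are in 8-bit form):

```python
def to_7bit(addr_8bit: int) -> int:
    """Convert an 8-bit I2C write address to its 7-bit form (drop the R/W bit)."""
    return addr_8bit >> 1

# uP9505 addresses as quoted above, assumed to be in 8-bit form
for addr in (0x88, 0x8A):
    print(f"{addr:#04x} (8-bit) -> {to_7bit(addr):#04x} (7-bit)")
```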


----------



## specopsFI

Would someone happen to know if all the ports on the Asus RTX 3090 TUF OC work when using the Gigabyte Gaming OC BIOS on it? Both have the same configuration (2x HDMI, 3x DP), but are they wired in a compatible way?

Or does it even make a difference going from the TUF's stock 375W to Gigabyte's 390W? Clearly the difference isn't huge, but does the TUF go up to 390W if flashed? 3080s have been hit and miss; most 2x 8-pin models seem to hold 350-360W and no more.


----------



## Deve5tat0R

Greetings from India, joining the 3090 club. Got my Strix 3090 OC two weeks back and have been fiddling with the clocks; I was finally able to push up to +150 core and +750 mem on air with the stock 480W BIOS.

PR: https://www.3dmark.com/pr/586148

Is this a good score, or should I try to buy another one? The system is a bit old but doesn't seem to bottleneck in benchmarks or games. Will upgrade once DDR5 and PCIe 4 are both available.


----------



## man1ac

wirx said:


> No downsides, I use it with stock MSI Trio X coolers about month already. Gaming temps with silent fans about 65-70c, with normal fans 60-65c. But you can always litlebit downclock and get 50c with air. Card speed is much faster than stock bios.


Thanks! That's all I wanted to know.

Does the same apply to the 3090 TUF OC? Got a really sweet deal on it! I'll take whichever card runs better. Or which BIOS works perfectly on the 3090 TUF OC?


----------



## Fire2

khunpunTH said:


> Finally I break 15000 PR on air.
> msi 3090 gaming x trio. stock cooling. 520w bios.
> no water block. no shunt mod.
> 
> pr 15041
> 
> 
> https://www.3dmark.com/pr/585960
> 
> 
> 
> time spy graphic score 22865
> 
> 
> https://www.3dmark.com/spy/15866399
> 
> 
> 
> You should be aware that with the 520W BIOS your 1st fan will max out at 2000 RPM and sometimes your 2nd fan will stop.
> Use it carefully; this BIOS is not designed for air cooling.


Awesome, thank you! Interesting about the fan issues.


----------



## elucid087

For those worried about their PSUs not being enough, this is directly from ASUS: https://dlcdnets.asus.com/pub/ASUS/Accessory/Power_Supply/Manual/RECOMMENDED_PSU_TABLE.pdf

On my 3090 Aorus Xtreme box it states the recommended PSU would be an 850W unit, but that's them assuming people have an i9 or Ryzen equivalent.


----------



## GTANY

elucid087 said:


> For those worried about their PSU's not being enough. This is directly from ASUS - https://dlcdnets.asus.com/pub/ASUS/Accessory/Power_Supply/Manual/RECOMMENDED_PSU_TABLE.pdf
> 
> 
> View attachment 2467664
> 
> 
> On my 3090 aorus xtreme box it states the recommended psu would be an 850watt but that's them assuming people have an i9 or ryzen equivalent.


This PSU table is for stock graphics cards. What we discuss here is shunted graphics cards, which can consume more than 800 W by themselves.


----------



## elucid087

GTANY said:


> This PSU table is for stock graphic cards. *What we discuss here is shunted graphic cards* which can consume more than 800 W themselves.


Speak for yourself and yourself only. I'm running a 750W EVGA P2 with an 8th-gen i7 + 3090 Aorus XE with zero issues under every synthetic bench/game in my library.


----------



## GTANY

elucid087 said:


> Speak for yourself and yourself only. Running with a 750watt EVGA p2 with an 8th gen i7 + 3090 Aorus XE with zero issues under every synthetic bench/game in my library.


Did you read what I wrote?


----------



## dr/owned

elucid087 said:


> Speak for yourself and yourself only. Running with a 750watt EVGA p2 with an 8th gen i7 + 3090 Aorus XE with zero issues under every synthetic bench/game in my library.


Like the other dude said, we're worried about shunted graphics cards, not stock ones that are contained to <500W depending on the BIOS. Specifically, the capacity of individual connectors, because tripping OCP is no big deal compared to melting something.


----------



## Zogge

The 520W BIOS works on my 3090 Strix OC, confirmed. Though the card now runs up to 50-51 degrees in my custom loop;
it was around 44 before. EK water block, 150 l/h, and A LOT of radiators/fans. Is this temp too bad? Shall I try reseating it? Ambient 27°C.

Also, for the record, I get 520 W power draw both in games and benchmarks, so it truly works.


----------



## Nizzen

Zogge said:


> 520W bios works on my 3090 OC Strix, confirmed. Though heat is up to 50-51 degrees on the card in my custom loop now.
> It was like 44 before. EK WB , 150 l/h and A LOT of radiators/fans. Is this temp too bad, shall I try to reseat it ? Ambient 27 C.
> 
> Also for the record, I get 520 W power draw both in games and benchmark so it truely works.


I have a 10900K and a 3090 Strix OC with EK block/backplate and the KingPin BIOS in the same loop. Asus Helios with a 360 radiator in the top and a 420 radiator in the front. When playing, my card goes up to 50-55°C. Ambient is 23-27°C. It depends on how long I'm playing.

I don't want loud fans, so the temperature is fine for me. I lock the GPU to 2100 MHz when gaming in Battlefield V.


----------



## Zogge

Aha, then maybe it is what it is without a chiller. I have a 10980XE at 5.0 GHz, so say 300W on normal load, plus the 3090 at 520W.
3x 360x120mm ST30 radiators with IPPC 3000 fans at 800 RPM, and then an airplex GIGANT 3360x140 with 24x 140mm fans at 600 RPM. 2x D5 pumps and 150 l/h. Water temp around 27-28 degrees, ambient 25-27.

Then it sounds like I'm getting the temps one could expect.

I lock the card at 2160 MHz core / 22000 MHz memory.


----------



## Nizzen

Zogge said:


> Aha then maybe it is what it is without a chiller. I have a 10980XE at 5.0Ghz so say 300W on normal load + the 3090 520W.
> 3x 360x120mm ST30 radiators with IPPC3000 fans at 800 rpm and then an airplex gigant 3360x140 with 24x140mm fans at 600 rpm. 2x D5 pumps and 150 l/h. Water temp around 27-28 degrees. Ambient 25-27.
> 
> Then it sounds like I get the temps one could expect.
> 
> I lock the card at 2160Mhz core / 22000 Mhz memory.


Looks about right, yes. My water temp is 31-33°C.


----------



## OC2000

After about 5 days of spending pretty much the entire day shunting my card (so much so that the RGB cable on my EKWB fell off from all the dangling around), I have finally managed to 100% shunt my 3090 Strix using the silver paint method.

It wasn't until yesterday that I started realising that only some of my shunts were working, using GPU-Z to see which of the 8-pins was capping out at 140W like the pic below. 8-pin #1 was capping out, causing the Power Cap.

Both images are taken in the Time Spy Extreme 2nd game test. Not at its highest potential, as the card was being held together with 6 screws and no backplate.










I was finally able to isolate which shunts needed replacing and have now just achieved it. 

As you can see from the below pic all 3 pins are running close to the same watts as each other.










I must have taken off that water block over 30 times, and removed and replaced over 100 shunts, but I finally achieved it. I now have to tackle the backplate coil whine problem.


----------



## Biscottoman

Hi guys, I'm considering picking up a 3090 Strix and shunting it. What would be the best resistor setup to achieve the highest power while staying safe and not damaging any component with the high wattage? I was considering six 0.005 ohm resistors plus one 0.010 ohm resistor for the PCIe slot; what power limit should I be able to reach with this setup? Thanks in advance for the help.


----------



## bmgjet

Are you able to label each one so I can add it to the info?








GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
github.com/bmgjet/ShutMod-Calculator


----------



## Biscottoman

Every one 0.005 ohm except the PCIe one, which is 0.010 (dunno if it is safe); should be 7 resistors in total if I'm correct.


----------



## Falkentyne

OC2000 said:


> After about 5 days of spending pretty much the entire day shunting my card, so much so the RGB cable on my EKWB fell off from dangling around so much, I have finally managed to 100% shunt my 3090 Strix card using the silver pain method.
> 
> It wasn't until yesterday that I started realising that some of my shunts were working using the GPU-Z to see which of the 8 pins was capping out at 140 like the pic below. 8 pin 1 was capping out causing Power Cap.
> 
> Both images are taken in Timespy Extreme 2nd game test. Not to its highest potential as it was being held together with 6 screws and no backplate.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was finally able to isolate which shunts needed replacing and have now just achieved it.
> 
> As you can see from the below pic all 3 pins are running close to the same watts as each other.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I must have taken off that water block over 30 times!, removed and replaced over 100 shunts, but finally achieved it. I now have to tackle the back plate coil whine problem.


How did you manage to "isolate" the shunt? What exactly did you do to "replace" it? What was wrong exactly? Can you go into more detail on exactly how you "fixed" it? Were you using paint to stack shunts or just painting only? Was the shunt bad? Sorry for the questions....


----------



## pat182

Zogge said:


> 520W bios works on my 3090 OC Strix, confirmed. Though heat is up to 50-51 degrees on the card in my custom loop now.
> It was like 44 before. EK WB , 150 l/h and A LOT of radiators/fans. Is this temp too bad, shall I try to reseat it ? Ambient 27 C.
> 
> Also for the record, I get 520 W power draw both in games and benchmark so it truely works.


Same config but on air. In Control my card goes to 550W, assuming pin 3 draws 150W (no reading, so I don't know); pin 1 is at 156W and pin 2 at 180W.

Is there any danger to the card from not reading pin 3 properly? Like, can I run that BIOS 24/7?


----------



## OC2000

Falkentyne said:


> How did you manage to "isolate" the shunt? What exactly did you do to "replace" it? What was wrong exactly? Can you go into more detail on exactly how you "fixed" it? Were you using paint to stack shunts or just painting only? Was the shunt bad? Sorry for the questions....


I was able to isolate it down to 2 shunts, because I could see on GPU-Z which 8-pin was capping in watts. In the pic I posted, 8-pin #1 was reaching over 140W vs 8-pins 2 and 3 hitting 90W. To fix it I took both the shunts for 8-pin 1 off, cleaned them and reapplied them with less silver paint. It took quite a few attempts to get it right, but now I can do a 4K Time Spy Extreme run without hitting the power cap.
Interestingly though, I wasn't drawing that much more power. Total system draw under Time Spy Extreme game test 2 was only hitting about 780W.

Managed to get a 15000 PR score using really bad RAM timings, basically stock I think, with an average boost of 2167 MHz and a peak of 2175.

Managed to sort out the coil whine on the backplate too, keeping all thermal pads. It involved going up 0.5mm in thermal pad thickness everywhere, as well as removing 2 screws by the socket.

Looking forward to benchmarking properly once I have a new install going and have sorted out my new RAM and motherboard.


----------



## OC2000

Biscottoman said:


> Hi guys, i'm considering to pick a 3090 strix and shunt it. What would be the best resistors setup to achieve the highest power while being safe to not damage any component due to the high wattage? I was considering to use six 0.005ohm resistors + one 0.010ohm resistor for the pcie, what power limit should i be able to reach with this setup? Thanks in advice for the help



If you look at the GPU-Z picture above, that's my 3090 Strix OC shunted with 6x 5 mΩ for the 8-pins and an 8 mΩ for the PCIe. You can see the PCIe is only drawing 36.6W; if you multiply it by 1.67x, it means it's actually running at about 61W, well within spec.
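For reference, the read-back multiplier falls out of the parallel-resistance formula: the card senses current through the lowered shunt resistance, so actual power is the reported power times r_stock / parallel(r_stock, r_stack). A minimal sketch, assuming the values above are milliohms and the stock shunt is 5 mΩ (verify on your own PCB); an 8 mΩ stack works out to 1.625, close to the 1.67x used above:

```python
def readback_multiplier(r_stock: float, r_stacked: float) -> float:
    """Actual power = reported power * this factor after stacking a shunt.
    parallel(r1, r2) = r1*r2/(r1+r2), so the factor simplifies to
    1 + r_stock/r_stacked."""
    r_new = r_stock * r_stacked / (r_stock + r_stacked)
    return r_stock / r_new

m = readback_multiplier(0.005, 0.008)  # 5 mOhm stock, 8 mOhm stacked (assumed)
print(round(m, 3))                     # 1.625
print(round(36.6 * m, 1))              # 59.5 -> actual PCIe draw in watts
```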


----------



## jomama22

Biscottoman said:


> Hi guys, i'm considering to pick a 3090 strix and shunt it. What would be the best resistors setup to achieve the highest power while being safe to not damage any component due to the high wattage? I was considering to use six 0.005ohm resistors + one 0.010ohm resistor for the pcie, what power limit should i be able to reach with this setup? Thanks in advice for the help


On the Strix card, you are probably safe using 5 mΩ on the PCIe, as it only pulls about 45W under full load at 480W. Granted, I don't think you need the full 960W that gives you, lmao. Using a 10 mΩ on the PCIe would give you 720W on the stock BIOS, 750W with the 500W one, and 780W on the 520W one.
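The arithmetic behind those numbers is just parallel resistance: stacking a resistor on the stock shunt lowers the sensed resistance, so the card under-reads current and the real power limit scales by r_stock / r_new. A minimal sketch, assuming 5 mΩ stock shunts (worth verifying on your own card):

```python
def effective_limit(bios_limit_w: float, r_stock: float, r_stacked: float) -> float:
    """Real power limit after stacking r_stacked in parallel with a stock
    shunt of r_stock: the factor r_stock / (r_stock || r_stacked) simplifies
    to 1 + r_stock / r_stacked."""
    return bios_limit_w * (1 + r_stock / r_stacked)

# 10 mOhm stacked on an assumed 5 mOhm stock shunt -> 1.5x scaling
print(effective_limit(480, 0.005, 0.010))  # 720.0
print(effective_limit(500, 0.005, 0.010))  # 750.0
print(effective_limit(520, 0.005, 0.010))  # 780.0
# 5 mOhm stacked -> 2x scaling, hence the 960 W figure
print(effective_limit(480, 0.005, 0.005))  # 960.0
```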


----------



## Sheyster

HyperMatrix said:


> The bios works in that it pulls 500W on the Strix. But it doesn't report any power usage on the 3rd power connector. So your card will just report around 330W or so. Actual peaks were above 500W.


Been gone a few days and catching up....

This is my exact experience. My card seems to be a bit better than the one you received and sold. Also, just to be clear to everyone: the 500W draw and the peaks over 500 we observed are without shunt modding.


----------



## Luca Prinzi

elucid087 said:


> For those worried about their PSU's not being enough. This is directly from ASUS - https://dlcdnets.asus.com/pub/ASUS/Accessory/Power_Supply/Manual/RECOMMENDED_PSU_TABLE.pdf
> 
> 
> View attachment 2467664
> 
> 
> On my 3090 aorus xtreme box it states the recommended psu would be an 850watt but that's them assuming people have an i9 or ryzen equivalent.


In a few days I too will have the Aorus Xtreme 3090. Have you tried it with the 500W EVGA BIOS or the 520W one?


----------



## Biscottoman

jomama22 said:


> On the strix card, you are probably safe to using 5 mohm on the pcie as it only pulls about 45w under full load @480w. Granted, I don't think you need the full 960w that give you lmao. Using a 10 mohm on the pcie would give you 720w on the stock bios, 750w with the 500w and 780w on the 520w.


Thanks a lot for the help and all the data/suggestions. This is my first time doing a shunt mod, and this generation seems the most "complicated" to deal with, too.


----------



## Aristotle

Is there any way to set a fan curve based on RPM instead of fan percentage? I'm using a 3090 Strix, and the 520W BIOS messes with the fan percentage settings. Normally the mid fan and side fans match in percentage and RPM, but with the 520W BIOS the fan RPMs are mismatched at the same percentage.


----------



## mirkendargen

Keninishna said:


> I found the documentation. It looks like it’s a bit more than that switch though.
> 
> *3.1. I2C dump tool*
> 
> I2C dump tool is intended for automated scanning of all devices residing on all I2C buses of each GPU installed in the system. The first register of each scanned I2C device is being polled, if the device reply to polling then dump of 256 byte registers is saved into the dump. I2C dump tool is activated via the command line with the following switch:
> 
> MSIAfterburner.exe /i2cd[[<i2c_bus>],[<i2c_device>],[<register>]]
> 
> where <i2c_bus> is I2C bus number, <i2c_device> is 7-bit I2C device address and <register> is address of I2C device register to be polled when checking I2C device availability. Optional <i2c_bus> and <i2c_device> parameters allow scanning required device address on all I2C buses or desired I2C bus only. When I2C bus and device address are not specified I2C dump tool is scanning all available I2C buses and 7-bit I2C address range starting from 00h up to 4Fh inclusive. Optional <register> parameter allows dumping state of some sophisticated I2C devices with unreadable register 0 (e.g. NCP4206).
> 
> It only scans to 4Fh I was reading the controller for gigabye uP9505
> Is at either 88h or 8Ah


Yeah, like it says, all the params are optional. But I didn't have any luck with it even using params for mobo fans/sensors that I know.


----------



## pat182

Aristotle said:


> Is there any way to set a fan curve based on RPM instead of fan percentage? I'm using a 3090 strix, and the 520w bios messes with the fan percentage settings. Normally the mid fan and side fans match with percentage and rpm, but using the 520w bios, the fan rpms are mismatched at the same percentage.


No, it's like that: my mid fan runs 2000 RPM and the side fans 3000 RPM. Not sure if it's faster than stock?
It doesn't affect cooling much from what I have tested.

Also, the middle DisplayPort isn't working; I have to use the 1st and the 3rd.


----------



## MacMus

pat182 said:


> Gigabyte one, had one before , its hot trash haha... 350watt or 390watt bios just dont do the 3090 justice


I was planning to put it on water, so I don't care about the cooling solution.

However, I'm interested in whether it can reach 500W with only two power connectors.

Which Gigabyte BIOS should I load?


----------



## MacMus

bmgjet said:


> Gigabyte gaming oc or KFA2 390W bios are your best choice.
> 3090 is really gimped under 450W power tho so your leaving a lot on the table with 2 plug cards if you dont shunt mod them.


Founders cards also have 2 cables, right?

Can any of these BIOSes go to 450?

I would be satisfied with anything 400-450.


----------



## Nizzen

MacMus said:


> I was planning to put it on water, so i don't care about cooling solution.
> 
> however i'm interested in if that can get 500W with only two power pins.
> 
> Which gigabyte bios to load?


Use the stock BIOS with a shunt mod if you want 500W with 2x 8-pin.


----------



## Aristotle

pat182 said:


> no its like that, my mid fan runs 2000rpm and side fans 3000rpm, not sure if its faster than stock ??
> it doesnt affect much cooling tho of what i have tested
> 
> also, mid DP not working ,have to use the 1st and the 3rd


I double-checked by switching to the stock Strix quiet BIOS via the switch; the fan RPM between mid and side fans is synced there. I guess I can force a more aggressive fan curve for the mid fan to compensate for the RPM mismatch when using the 520W BIOS.


----------



## pat182

Aristotle said:


> I double checked by switching to the stock strix quiet bios via the switch. The fan rpm between mid and side fans are synced. I guess I can force a more aggressive fan curve for mid fan to compensate for the rpm mismatch when using the 520w bios.


Yeah, they are synced on the normal BIOS; I was saying it's normal for them to be out of sync on the 520W BIOS. I'm more "worried" about not seeing the 3rd pin's power on Asus models. MSI cards seem to show normal readings with the EVGA BIOS, but Asus don't, so in-game I can't really see the real power draw.


----------



## Aristotle

pat182 said:


> yea they are sync in normal bios,. i was saying its normal out of sync in 520watt bios. im more ''worried'' about not seeing the 3pin power on asus models, msi seems to show normal reading with evga bios but not asus, so in game i cant see rlly the real powerdraw


Is that why my board power draw in GPU-Z is showing only around 350-370W?

Also, is there a way to raise the mid fan's max RPM on the 520W BIOS? Like you said, it seems to cap at 2000 RPM when it should normally spin up to 3000 RPM.


----------



## pat182

Aristotle said:


> Is that why my board power draw on gpuz is showing only around 350-370w?
> 
> Also is there a way to raise the midfan max rpm on the 520w bios? Like you said it seems to cap at 2000rpm when it should normally spin up to 3000 rpm.


No. The mid fan must be the blower fan on the KPE, while the others are like the AIO fans.

You always need to add +150W to your power draw because pin 3 isn't showing; that's why it's 370W + 150 = 520W.

But in Control you can see 400W without pin 3, which would mean 550W. That's why it sucks not really knowing what's going on.

Not sure if there's a way to see the real power draw, with RTSS maybe or something.


----------



## Aristotle

pat182 said:


> no. mid fan must be the blower fan on the KPE while the others are like the aio fans
> 
> you always need to add +150watt to your power draw cause pin3 not showing, thats why its 370watt +150 = 520watts
> 
> but in Control you can see 400watts without the pin3 which would mean 550watts, that why it sucks not knowing whats going on rlly
> 
> not sure if theres a way to see real power draw with RTTS maybe or something


Ok thanks for letting me know. That makes the power draw make more sense.


----------



## Aristotle

One more question: since the stock boost clock of the KPE is 1920 compared to the Strix's 1860, if I added +100 to the KPE BIOS, that would be effectively +160 relative to the stock Strix BIOS, right?


----------



## Sheyster

MacMus said:


> I should be satified with anything 400-450.


The Founders Edition goes up to 400W. If that is good enough, get one; it's an amazing card. Also, it uses the new 12-pin micro connector, not 2x 8-pin.


----------



## mirkendargen

Aristotle said:


> One more question. Since the stock boost clock of the KPE is 1920 compared to the strix's 1860, if I added +100 to the KPE bios, that would be effectively +160 relative to the stock strix bios right?


Yes.
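Spelled out, since the same conversion comes up whenever comparing offsets across BIOSes (clock values taken from the question above):

```python
kpe_base = 1920    # KPE BIOS stock boost clock, MHz (from the post above)
strix_base = 1860  # Strix stock boost clock, MHz
kpe_offset = 100

# An offset is relative to its own BIOS's base, so compare via absolute clocks:
absolute = kpe_base + kpe_offset
offset_vs_strix = absolute - strix_base
print(offset_vs_strix)  # 160
```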


----------



## OC2000

I just ran this in PR, and I am not impressed; I have seen cards with lower average boost get higher scores. GPU RAM was set to +975; for some reason it now crashes at +1000. It used to be able to do +1200.









I scored 15 053 in Port Royal
AMD Ryzen 9 3950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

My RAM timings are loose. Is there anything else in PR that helps optimize, other than GPU core?


----------



## Pinto

OC2000 said:


> I just ran this in PR which I am not impressed with. I have seen other lower average boost scores get higher. GPU Ram was set to 975. some reason i crashes now at 1000. Used to be able to do 1200 on it.
>
> I scored 15 053 in Port Royal
> AMD Ryzen 9 3950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
>
> My Ram timings are loose. Is there anything else in PR than helps optimize other than GPU Core


Don't you think the increase in thermal pad thickness might be the problem?


----------



## OC2000

Pinto said:


> Don't you think the increase of the thermal pads thickness is not the problem?


You should see the thickness of the thermal pads on the stock Strix cooler. I have an MP5Works block hooked up which I have not plumbed in yet, though; hopefully that will help. Failing that, a completely new water block has been ordered, as the EK one is really badly built.


----------



## Pinto

OC2000 said:


> You should see the thickness of the thermal pads on the stock strix cooler. I have an MP5Works hooked up which I have not plumbed in yet though. Hopefully that will help. Failing tha a completely new water block has been ordered as the EK one is really badly built


I've got the card and the EK block, so I know ;-) I have only an 11°C delta on my setup at 400/440W. Really pleased with the EK cold plate, not with the backplate.


----------



## shiokarai

OC2000 said:


> I just ran this in PR which I am not impressed with. I have seen other lower average boost scores get higher. GPU Ram was set to 975. some reason i crashes now at 1000. Used to be able to do 1200 on it.
>
> I scored 15 053 in Port Royal
> AMD Ryzen 9 3950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
>
> My Ram timings are loose. Is there anything else in PR than helps optimize other than GPU Core


You ran what? The 520W KingPin BIOS on the Strix, or something else? What's "this"?


----------



## Jordel

So, my 3090 FTW3 Ultra keeps hitting a much lower power limit in GPU-Z than it's supposed to. Is this intentional or a bug?

No matter if it's the 480, 500, or 520 W BIOS, the max is around 400W board power draw (415.9W max right now from the last session), with the PerfCap reason sticking to Pwr.


----------



## OC2000

shiokarai said:


> You ran what? 520w kingpin bios with strix or something else? what's "this"?


The stock 480W Strix BIOS, but with a water block and shunted.


----------



## shiokarai

Is there a way to "compensate" the GPU power draw in the RTSS/Afterburner OSD to display proper values when the card is shunted, or with the KPE 520W BIOS on the Strix (third 8-pin reporting 0W draw instead of 150W)? Something like adding +150W to the displayed value, or x2, etc.


----------



## Falkentyne

shiokarai said:


> Is there a way to "compensate" in the RTSS afterburner OSD GPU power draw to display proper values when card is shunted or with KPE 520w bios on strix (third 8 pin reporting 0w draw instead of 150w)? Something like adding +150w to actual display or x2 etc.


Yes, there is. It's a very similar method to HWiNFO's. I think you have to enter * for multiply, though.


----------



## pat182

Falkentyne said:


> Yes there is. Very similar method as hwinfo. I think you have to enter * for multi though.


Yeah, it just won't be accurate though. Like, it's going to show 150W at idle since you lock pin 3 at 150W, and it may draw more than 150 in some loads, so it's meh.

I'm also wondering if it's bad for the power management not being able to see that power.
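A flat +150W offset is the usual workaround, and the caveat above is exactly why it's rough: pin 3's draw is not constant. A toy illustration (the 150W figure is just the assumed per-connector cap, not a measured value):

```python
def corrected_power(reported_w: float, pin3_assumed_w: float = 150.0) -> float:
    """Flat-offset correction for a BIOS that reports 0 W on the third 8-pin.
    Only roughly right near full load; at idle pin 3 draws far less,
    so the corrected value over-reports."""
    return reported_w + pin3_assumed_w

print(corrected_power(370.0))  # 520.0 -- plausible under load
print(corrected_power(30.0))   # 180.0 -- clearly too high for idle
```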


----------



## dr/owned

OC2000 said:


> You should see the thickness of the thermal pads on the stock strix cooler. I have an MP5Works hooked up which I have not plumbed in yet though. Hopefully that will help. Failing tha a completely new water block has been ordered as the EK one is really badly built


Pretty clever attachment method they made with the elastic bands... unfortunately they'll weaken within a year from the heat cycling until there's basically zero clamping force. The price is also kinda "yeesh" considering that for half as much you can buy the exact same thing as a DIMM waterblock.


----------



## MacMus

Nizzen said:


> Use stock bios with shuntmod, if you want 500w with 2x8pin


what is shuntmod?


----------



## reflex75

Has anyone seen any silicon degradation with Ampere yet?
I am asking because by default Ampere boosts to a higher max voltage (1.081V) compared to Turing (1.068V),
so I'm wondering if Ampere can handle higher voltage better...


----------



## GQNerd

MacMus said:


> what is shuntmod?


What is search function:

How To: Easy mode Shut Modding.
Heres some basic instructions for a easy to do and easy to remove shunt mod method. Tools Needed: Small phillips screwdriver. Tweesers Materials: Conductive paint. (Stuff iv used is water based and very easy to remove with warm water and cotton swab), But a silver type one would work even...
www.overclock.net


----------



## geriatricpollywog

reflex75 said:


> Has anyone seen some silicon degradation with Ampere yet?
> I am asking because by default Ampere boosts to higher max voltage (1.081v) compare to Turing (1.068v)
> So I'm wondering if Ampere can better handle higher voltage...


Kingpin recommends not exceeding 1.2v for both TU102 and GA102 without extreme cooling. I don’t think one handles more voltage than the other.


----------



## OC2000

dr/owned said:


> Pretty clever attachment method they made with the elastic bands...unfortunately they'll weaken within a year from the heatcycling until they're basically zero clamping force. The price is also kinda "yeesh" considering for half as much you can buy the exact same thing as a DIMM waterblock.


It comes with a long spare string of it, so you can make replacements if necessary. It actually clamps on pretty hard even without the bands.


----------



## Falkentyne

reflex75 said:


> Has anyone seen some silicon degradation with Ampere yet?
> I am asking because by default Ampere boosts to higher max voltage (1.081v) compare to Turing (1.068v)
> So I'm wondering if Ampere can better handle higher voltage...


You're not going to get degradation without a vcore adjustment, and the voltage% slider is NOT a vcore adjustment! All the slider does is unlock one higher VF tier under certain thermal conditions (and depending on the current curve shape). The next voltage tier being used also adds +15 MHz to the core.
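As a toy model of that behaviour (the voltage points below are hypothetical; the only fact taken from the post above is that each unlocked tier adds one +15 MHz bin):

```python
base_clock = 1860                          # MHz, hypothetical top-of-curve clock
tiers_mv = [1050, 1062, 1068, 1081, 1093]  # hypothetical VF voltage points (mV)

# Build a VF curve where each successive tier is one +15 MHz bin higher:
vf_curve = {mv: base_clock + i * 15 for i, mv in enumerate(tiers_mv)}

# Raising the voltage% slider unlocks the next tier -> +15 MHz, no vcore change:
locked, unlocked = vf_curve[1081], vf_curve[1093]
print(locked, "->", unlocked)  # 1905 -> 1920
```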


----------



## nievz

Is the EVGA 500W BIOS still the best to flash on the Gaming X Trio? Will RGB control work after?


----------



## Pepillo

nievz said:


> Is the EVGA 500w BIOS still the best to lfash on the Gaming X Trio? Will RGB control work after?


1. Yes
2. No


----------



## Nillas

New here. I have a Gigabyte Aorus 3090 Xtreme card that I get pretty good scores with out of the box. I really can't find any reviews or guides on how to flash another BIOS on this card, or whether it's even possible. Anyone know?

With +180 on the core I get this score with the original BIOS and air cooling, https://www.3dmark.com/pr/540234 , so I guess this card has some potential.


----------



## pat182

Nillas said:


> New here, have a Gigabyte Aorus 3090 Xtreme card that i get pretty good scores with out of the box. Really cant find any reviews or guides for how to / or if its possible to flash another bios on this card. Anyone know?
> 
> With + 180 on the core i get this score with original bios and air cooler, https://www.3dmark.com/pr/540234 , so i guess this card got some potential.


Keep it like that, it's good enough; you won't get much more with a custom BIOS. Nearly 15k with no water cooling.


----------



## pantsoftime

Damn, that's a nice score. My Aorus 3090 Xtreme can't really handle more than +75 on the core, and it power-limits really early. I'm hoping I can tame it with water when a WB is available.


----------



## Fire2

+75 core seems very low!
I returned my +115/+123 card (MSI Trio X).

What does your mem do? +800/+1000?


----------



## mirkendargen

Fire2 said:


> +75 core seems very low!
> I returned my +115/+123 card (msi triox)
> 
> whats your mem do? +800/+1000?


You can't compare core offsets between cards; the starting point is different. The Aorus Xtreme's "+75" is 1935 MHz, the Gaming X Trio's "+115" is 1900 MHz.
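To make the arithmetic explicit, a tiny sketch. The factory boost clocks below are back-calculated from the numbers in this post (1935 - 75 and 1900 - 115), not official specs; they're only meant to show why raw slider offsets aren't comparable across models.

```python
# Offsets are applied on top of each model's factory boost clock, so the same
# slider number means different absolute clocks on different cards.
# Boost clocks below are inferred from the post above, not official specs.
FACTORY_BOOST_MHZ = {
    "Aorus Xtreme": 1860,   # 1860 + 75  = 1935 MHz
    "Gaming X Trio": 1785,  # 1785 + 115 = 1900 MHz
}

def effective_boost(card: str, offset_mhz: int) -> int:
    """Absolute boost target = factory boost clock + Afterburner offset."""
    return FACTORY_BOOST_MHZ[card] + offset_mhz

print(effective_boost("Aorus Xtreme", 75))    # 1935
print(effective_boost("Gaming X Trio", 115))  # 1900
```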


----------



## MacMus

Miguelios said:


> What is the search function for:
> 
> How To: Easy mode Shunt Modding.
> 
> Here are some basic instructions for an easy-to-do and easy-to-remove shunt mod method. Tools needed: small Phillips screwdriver, tweezers. Materials: conductive paint (the stuff I've used is water based and very easy to remove with warm water and a cotton swab), but a silver type one would work even...
> 
> www.overclock.net


I have to glue some resistors on top of the existing ones... is that what this is?


----------



## jomama22

MacMus said:


> I have to glue some resistors on top of the existing ones... is that what this is?


There is nothing in there about gluing... just go read.


----------



## shiokarai

Alphacool Eisblock Aurora Acryl GPX-N RTX 3090/3080 ROG Strix with Backplate

The Alphacool Eisblock Aurora Plexi GPX-N RTX 3080/3090 combines style with performance and extensive digital RGB lighting. Over 17 years of experience have gone into this graphics card water block...

www.alphacool.com

The Alphacool block for the Strix 3090 has been postponed another 3 weeks, again... This means the only somewhat available block is the Bykski one, as EK blocks are expected after Dec 11th, and Aqua Computer blocks... well, sometime in December, maybe. How come it's such a problem to get the damn blocks out?


----------



## mirkendargen

shiokarai said:


> Alphacool Eisblock Aurora Acryl GPX-N RTX 3090/3080 ROG Strix with Backplate
> www.alphacool.com
> 
> The Alphacool block for the Strix 3090 has been postponed another 3 weeks, again... This means the only somewhat available block is the Bykski one, as EK blocks are expected after Dec 11th, and Aqua Computer blocks... well, sometime in December, maybe. How come it's such a problem to get the damn blocks out?


Probably everyone is finding screwups with their blocks from not having access to a card after the other companies' screwups went public (Bykski not clearing a fan header, EK backplate causing loud coil whine).


----------



## jomama22

shiokarai said:


> Alphacool Eisblock Aurora Acryl GPX-N RTX 3090/3080 ROG Strix with Backplate
> www.alphacool.com
> 
> The Alphacool block for the Strix 3090 has been postponed another 3 weeks, again... This means the only somewhat available block is the Bykski one, as EK blocks are expected after Dec 11th, and Aqua Computer blocks... well, sometime in December, maybe. How come it's such a problem to get the damn blocks out?


It's not delayed further, the initial batch sold out. Was slated for 2 weeks when first announced a few days ago.


----------



## warbucks

I ended up shunt modding the PCIe slot shunt (10 mOhm) on my FTW3 Ultra so I can use the full 500W with the XOC BIOS. I plan to shunt mod the rest of them once a block is available and I can get my hands on one.

TSE: www.3dmark.com

PR: www.3dmark.com


----------



## whaleboy_4096

warbucks said:


> I ended up shunt modding the PCIe slot shunt (10 mOhm) on my FTW3 Ultra so I can use the full 500W with the XOC BIOS. I plan to shunt mod the rest of them once a block is available and I can get my hands on one.
> 
> TSE: www.3dmark.com
> 
> PR: www.3dmark.com


What was your avg power draw during these tests before and after the mod? Thanks


----------



## jomama22

shiokarai said:


> Alphacool Eisblock Aurora Acryl GPX-N RTX 3090/3080 ROG Strix mit Backplate
> 
> 
> Der Alphacool Eisblock Aurora Plexi GPX-N RTX 3080/3090 vereint Style mit Performance und eine umfangreiche Digital RGB Beleuchtung. Die Erfahrung von über 17 Jahren sind in diesen Grafikkarten-Wasserkühler eingeflossen und stellen den...
> 
> 
> 
> 
> www.alphacool.com
> 
> 
> 
> 
> 
> Alphacool block for Strix 3090 postponed another 3 weeks, again... this means the only somewhat available block is bykski one, as EK blocks are expected after 11th dec. and aquacomputer blocks... well, sometime maybe in dec. How come it's such a problem to get the damn blocks out?


If you were in the US I'd sell you my EK Strix block and backplate. Probably gonna put it up on the FS forum here soon.


----------



## GQNerd

Everyone complaining about coil whine: I think it's unavoidable..

The whine is more pronounced because of the lack of the high-RPM fans from the stock cooler.. well, that and the fact we're all shunt-modding and running the cards at higher voltage.

For example, Phanteks' Glacier blocks for the Strix work great, but coil whine is still present.. I tried different thermal pads, removed screws near the slot, and went as far as to install the stock Strix backplate with the stock pads, but the card still whines..

Solution: Don't run it at over 500W and max OC for my daily, turn up the system fans a bit, and actually put the side/front covers on the ATX case.


----------



## dante`afk

Does anyone know the W/mK of the Bitspower thermal pads? I've used the Thermaltake Odyssey ones, which are pretty thin compared to the Bitspower ones. Which are better?

@Falkentyne


----------



## Falkentyne

dante`afk said:


> Does anyone know the W/mK of the Bitspower thermal pads? I've used the Thermaltake Odyssey ones, which are pretty thin compared to the Bitspower ones. Which are better?
> 
> @Falkentyne


Thermalright?
BTW 2mm pads exist as well.









Thermalright Thermal Pad 120x120mm 12.8 W/mK 0.5mm/1.0mm/1.5mm/2.0mm - AliExpress

www.aliexpress.com





If you need thicker, stack a 2mm and a 0.5mm, but I doubt anything thicker than 2mm is needed on a stock heatsink (the FE likes 1.5mm on both sides; 2.5mm is too thick for the heatsink to make contact with the GPU on the GPU side, and 2mm would leave bad mounting pressure on the die).

And I highly doubt Bitspower is using better pads. There aren't many choices out there. You have Gelid 12 W/mK, Thermalright 12.8 W/mK and Fujipoly 11 W/mK. The 17 W/mK Fujipoly pads exist, but those are extremely hard and tend to just crumble and tear after one use.

If you can't afford to spend hundreds of dollars on pads, the Arctic 145 x 145mm 6 W/mK blue pads are the best cheap pads, and you get a nice giant size, but they will probably struggle on a hot video card's VRAM.
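As a rough way to compare the numbers above: thermal resistance per unit area is thickness divided by conductivity (lower is better), and stacked pads add in series. A quick sketch, using the W/mK figures from this post with illustrative thicknesses:

```python
# R'' = t / k, in K*m^2/W. At equal thickness, a 6 W/mK pad resists heat flow
# about twice as much as a 12.8 W/mK pad, and stacking 2.0mm + 0.5mm of the
# same pad behaves like a single 2.5mm pad (series resistances add).
def pad_resistance(thickness_mm: float, k_w_per_mk: float) -> float:
    """Thermal resistance per unit area for a pad of given thickness and W/mK."""
    return (thickness_mm / 1000.0) / k_w_per_mk

thermalright = pad_resistance(1.5, 12.8)  # Thermalright-class 12.8 W/mK pad
arctic = pad_resistance(1.5, 6.0)         # Arctic-class 6 W/mK budget pad
stacked = pad_resistance(2.0, 12.8) + pad_resistance(0.5, 12.8)

print(arctic / thermalright)                     # ~2.13x the resistance
print(abs(stacked - pad_resistance(2.5, 12.8)))  # ~0: stack behaves like 2.5mm
```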


----------



## DrunknFoo

warbucks said:


> I ended up shunt modding the PCIe slot shunt (10 mOhm) on my FTW3 Ultra so I can use the full 500W with the XOC BIOS. I plan to shunt mod the rest of them once a block is available and I can get my hands on one.
> 
> TSE: www.3dmark.com
> 
> PR: www.3dmark.com


Did you check the joints and confirm the signal?
Score seems low. Maybe try the 450w stock bios again...


----------



## Falkentyne

DrunknFoo said:


> Did you check the joints and confirm the signal?
> Score seems low. Maybe try the 450w stock bios again...


I've had PR scores range randomly from 14,250 to 14,630 on my 3090 FE, without hitting power limit. +150 core, +500 memory. Probably getting "drivered" again because I can't reproduce when I get 14600+ or when I get 14,250+.

Most of the people reaching 15k are probably reaching +1000 on VRAM. I can't bench past +700 and Heaven eventually just crashes and sometimes even reboots the computer at +700.
+600 can work but sometimes I get Warzone/MW issues, which tend to be worse if I set the core clock first, then the memory (??). +500 is no problem.


----------



## DrunknFoo

Falkentyne said:


> I've had PR scores range randomly from 14,250 to 14,630 on my 3090 FE, without hitting power limit. +150 core, +500 memory. Probably getting "drivered" again because I can't reproduce when I get 14600+ or when I get 14,250+.
> 
> Most of the people reaching 15k are probably reaching +1000 on VRAM. I can't bench past +700 and Heaven eventually just crashes and sometimes even reboots the computer at +700.
> +600 can work but sometimes I get Warzone/MW issues, which tend to be worse if I set the core clock first, then the memory (??). +500 is no problem.



I looked at the VRAM frequency for the score posted; just something for the poster to consider. Either check the solder joint, try a properly functioning BIOS, or even the Strix BIOS and the KP option.

The beta BIOS just does not work properly on some cards (as you are aware).


----------



## Nizzen

shiokarai said:


> Alphacool Eisblock Aurora Acryl GPX-N RTX 3090/3080 ROG Strix with Backplate
> www.alphacool.com
> 
> The Alphacool block for the Strix 3090 has been postponed another 3 weeks, again... This means the only somewhat available block is the Bykski one, as EK blocks are expected after Dec 11th, and Aqua Computer blocks... well, sometime in December, maybe. How come it's such a problem to get the damn blocks out?


I got the Alphacool Strix block 1 week ago.


----------



## dante`afk

Falkentyne said:


> Thermalright?
> BTW 2mm pads exist as well.
> 
> Thermalright Thermal Pad 120x120mm 12.8 W/mK 0.5mm/1.0mm/1.5mm/2.0mm - AliExpress
> 
> www.aliexpress.com
> 
> If you need thicker, stack a 2mm and a 0.5mm, but I doubt anything higher than 2mm is needed on a stock heatsink (FE likes 1.5mm on both sides--2.5mm is too thick for the heatsink to make contact with the GPU on the GPU side, and 2mm would leave bad mounting pressure on the die).
> 
> And I highly doubt bitspower is using better pads. There aren't many choices out there. You have Gelid 12 w/mk, Thermalright 12.8 w/mk and Fujipoly 11 w/mk. The 17 w/mk Fujipoly pads exist but those are extremely hard and tend to just crumble and tear after one use.
> 
> If you can't afford to spend hundreds of dollars on pads, Arctic 145mm * 145mm 6 w/mk blue pads are the best cheap pads, and you get a nice giant size, but probably will struggle on hot video cards VRAM.


Ended up using a mix of both, but something seems broken. Even though the GPU is making good contact with the block, it's 45C at idle and 70C under load, and I'm immediately getting thermal throttling.


----------



## bmgjet

dante`afk said:


> Ended up using a mix of both, but something seems broken. Even though the GPU is making good contact with the block, it's 45C at idle and 70C under load, and I'm immediately getting thermal throttling.


VRM and VRAM will trigger the thermal perf cap in GPU-Z.
If it's instant, then it's usually the VRM. If it takes a little while, then it's more likely the VRAM as it gets used.

On my card, when I first fitted the waterblock I got it for the VRM, since with the supplied pads and instructions the inductors weren't making contact. So GPU-Z would report thermal throttling about 10 seconds into full load.
Replacing the pads with thermal putty fixed that.
Then, when trying to get rid of coil whine, I loosened the backplate like EK suggested and would get thermal throttling for the VRAM about 15-20 minutes into a game. Fixed that by just filling the whole backplate with thermal putty and doing it up tight.


----------



## dante`afk

bmgjet said:


> VRM and VRAM will trigger the thermal perf cap in GPU-Z.
> If it's instant, then it's usually the VRM. If it takes a little while, then it's more likely the VRAM as it gets used.
> 
> On my card, when I first fitted the waterblock I got it for the VRM, since with the supplied pads and instructions the inductors weren't making contact. So GPU-Z would report thermal throttling about 10 seconds into full load.
> Replacing the pads with thermal putty fixed that.
> Then, when trying to get rid of coil whine, I loosened the backplate like EK suggested and would get thermal throttling for the VRAM about 15-20 minutes into a game. Fixed that by just filling the whole backplate with thermal putty and doing it up tight.


Couldn't believe it, but I had to use the 1.0mm pads that were included in the box. I tried twice with my Thermalright Odyssey 1.5mm, but the temps would be too high.

Immediately after using just the Bitspower pads all was good, and only in the positions per the manual (unfortunately).

Idle temps 26C, max load 44C, but with lots of air bubbles in the block.

No power perfcap, only VRel and VOp.


----------



## MacMus

Can the MSI 3090 Ventus be shunt modded? Has anyone tried?


----------



## bmgjet

MacMus said:


> Can the MSI 3090 Ventus be shunt modded? Has anyone tried?


Every NVIDIA card of the last few generations can.
Shunt resistors are what the card uses to detect the power draw; modding them tricks it into thinking it draws less, so it doesn't hit the power limit.
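The trick can be put into numbers. The controller measures the voltage drop across the shunt and divides by the resistance it assumes is there, so adding a second resistor in parallel lowers the drop and makes it under-read. A sketch, with hypothetical 5 mOhm stock / 8 mOhm stacked values (check your own card's shunts before trying anything):

```python
# Parallel resistance of the stock shunt and the stacked resistor.
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

# The controller still divides the measured voltage drop by the stock
# resistance, so reported power scales by R_parallel / R_stock (< 1 after
# the mod): the card "sees" less draw than it is actually pulling.
def reported_power(actual_w: float, r_stock_mohm: float, r_stacked_mohm: float) -> float:
    return actual_w * parallel(r_stock_mohm, r_stacked_mohm) / r_stock_mohm

# A real 500 W draw through a 5 mOhm shunt with 8 mOhm stacked on top:
print(round(reported_power(500, 5.0, 8.0)))  # 308: well under the power limit
```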


----------



## dante`afk

15k on my FE... no window mod, no ice packs... there's more possible :d


----------



## MacMus

bmgjet said:


> Every NVIDIA card of the last few generations can.
> Shunt resistors are what the card uses to detect the power draw; modding them tricks it into thinking it draws less, so it doesn't hit the power limit.


Is this safe?


----------



## bmgjet

MacMus said:


> Is this safe?


If you have to ask, then it's not something you'll be able to do safely.


----------



## shiokarai

Nizzen said:


> I got alphacool strix 1 week ago.


Did you preorder when they were first announced? How's the block vs the EK/Bykski?


----------



## Nillas

Did some more testing; it's obvious my power limit is stopping me. 450W Gigabyte Aorus 3090 Xtreme card.
This is in a pretty cold room with the stock air cooler.

Still trying to find info on my card, PCB layout etc., or whether I can put an EVGA BIOS on it, but can't find anything..

core +130, mem +200: 14577
core +180, mem +900: 14818 (with Windows GPU acceleration on)
core +180, mem +900: 14943
core +190, mem +700: 14953
core +190, mem +800: 14976
core +190, mem +1000: 14853 (with Windows GPU acceleration on)
core +200, mem +800: 15055
core +200, mem +900: 15061 https://www.3dmark.com/pr/592396
core +210, mem +900: 15056


----------



## reflex75

dante`afk said:


> 15k on my FE...no window mod, no icepacks...there's more possible :d
> 
> View attachment 2467778


And no shunt mod either?


----------



## pat182

Strix with the 520W BIOS: AC Valhalla crashes even with no OC, so the BIOS isn't really stable. Not the best game to test with, since it's full of crash bugs, but there's that.


----------



## Zooms

For the ROG Strix 3090, will Asus release a BIOS of 500W or more?


----------



## sprayingmango

Hi everyone, I have a question. I have the Asus Tuf Gaming 3090 non-OC card. What's the best bios to flash to it for an all around performance bump? Not looking to do any extreme overclocking, just a nice performance bump all around. Thanks!!


----------



## motivman

sprayingmango said:


> Hi everyone, I have a question. I have the Asus Tuf Gaming 3090 non-OC card. What's the best bios to flash to it for an all around performance bump? Not looking to do any extreme overclocking, just a nice performance bump all around. Thanks!!


Gigabyte 390W BIOS


----------



## jomama22

reflex75 said:


> And no shunt mod either?


His is shunted, hence the 284W board power max lol


----------



## dante`afk

reflex75 said:


> And no shunt mod either?


With shunt mod; otherwise that's not possible without a modded BIOS ^^


----------



## sprayingmango

motivman said:


> gigabyte 390W bios


Awesome thank you!!


----------



## Nizzen

pat182 said:


> Strix with the 520W BIOS: AC Valhalla crashes even with no OC, so the BIOS isn't really stable. Not the best game to test with, since it's full of crash bugs, but there's that.


I have a Strix 3090 on water with the KingPin BIOS. Stable here in Battlefield V and benchmarks lol. I don't have Valhalla. So here it's stable.


----------



## Nizzen

Zooms said:


> For the ROG Strix 3090, will Asus release a BIOS of 500W or more?


Ask Asus, then report back


----------



## pat182

Nizzen said:


> Have strix 3090 on water with kingpinbios. Stable here in Battlefield V and benchmarks lol. I don't have Valhalla. So here it's stable.


Yeah, must be on a game-by-game basis; Ubi games are broken most of the time. I'm gonna try it in Avengers, since I know it chews a lot of power. Any thoughts on the BIOS so far? Will you run it 24/7? Idk how ''safe'' it is in long gaming sessions.


----------



## pat182

Does the BIOS regulate hardware voltage control, like VRMs and switching frequencies and deep stuff, or is it just binaries to set basic stuff like power target, fan speed etc.? Because if the Strix and the KPE don't have the same voltage controller, I can see why it's not stable in some cases, if the BIOS does regulate that.


----------



## sprayingmango

Has anyone heard of weird power delivery issues from cards having all of the same type of caps on the back? Like having 6 MLCCs vs a mix of MLCCs and POSCAPs? The Bitcoin forums seem to think this causes issues?


----------



## pat182

sprayingmango said:


> Has anyone heard of weird power delivery issues from cards having all of the same type of caps on the back? Like having 6 MLCC vs a mix of MLCC and POSCAPs? The bitcoin forums seem to think this causes issues?


You're like 3 months behind


----------



## rkbu11

shiokarai said:


> did you preorder when they were first announced? how's the block vs EK/bykski?


I second this. Nizzen, your thoughts? I have one on order and am curious how it performs. I cancelled my EK order after hearing about all of the coil whine issues; to me, removing/loosening the backplate or messing with a bunch of different thermal pads isn't an appropriate solution for a block that costs that much.


----------



## sprayingmango

pat182 said:


> You're like 3 months behind


Ok, sorry. I just got my card so this is new to me. Your post hasn't really answered my question either. Can anyone share some info with me?


----------



## cakesg

So I flashed my Strix 3090 (on an EK block) to the KingPin 520W BIOS. However, whenever I flash to an EVGA BIOS, the first power cable reading always breaks: it just stays at around 1.5W no matter what. Also, I don't seem to be able to increase the voltage at all; I've never used Classified before. Is there any guide to it, as I get a voltage perfcap constantly? Some other problems I've been having: I can barely overclock this card. It doesn't go over +50 core on the 520W BIOS. My PR scores are just under 14k, which seems really low.
Edit: Also, no OC scanner works with my card. It either gives me error code 22h (MSI Afterburner), or it runs for about 30 seconds, then goes to 254% completion and says failed.


----------



## MacMus

Does Newegg have returns?


----------



## 414347

Newegg does have returns, but unless the card is defective, they will charge you a whopping 20% restocking fee.


----------



## HyperMatrix

pat182 said:


> Strix with the 520W BIOS: AC Valhalla crashes even with no OC, so the BIOS isn't really stable. Not the best game to test with, since it's full of crash bugs, but there's that.


Technically the 520W KingPin bios already has an OC loaded, since the boost clock on it is at 1920MHz. You will have the most luck locking to a specific voltage and clock speed for your card using a curve in afterburner.



cakesg said:


> So I flashed my Strix 3090 on an EK block to the Kingpin 520W Bios. However, whenever I flash to an EVGA bios, the first power cable reading always breaks. It just stays at around 1.5w no matter what. Also I don't seem to be able to increase the voltage at all, I've never used classified before. Is there any guide to it as I get a voltage perfcap constantly. Some other problems I've been having are that I can barely overclock this card. It doesnt go over +50 core on the 520W bios. My pr scores are just under 14k which seems really low.
> Edit: Also, any oc scanner doesn't work with my card. It either gives me error cod 22h(msi afterburner) or it runs for like 30 seconds and then goes to 254% completion and says failed


You can't use the Classified tool on anything other than the KingPin card. OC scanner is also trash. Don't use it. Set a custom curve. My Strix could only do around 14050 at best. Although I'm also using an older cpu/ram. 6950x at 4.3GHz with 2933MHz CL15 ram. As for one of the power connectors not reporting power usage, that's common. Strix flashed with the FTW3 500W bios did the same. But as long as you can cool your card, this will actually allow you to get better results because the card maintains a higher voltage for longer.


----------



## Sheyster

dante`afk said:


> 15k on my FE...no window mod, no icepacks...there's more possible :d


The modded FE is a beast! I'm struggling to decide which 3090 card to keep, FE or Strix. First-world problem, I know..


----------



## pat182

HyperMatrix said:


> Technically the 520W KingPin bios already has an OC loaded, since the boost clock on it is at 1920MHz. You will have the most luck locking to a specific voltage and clock speed for your card using a curve in afterburner.
> 
> 
> 
> You can't use the Classified tool on anything other than the KingPin card. OC scanner is also trash. Don't use it. Set a custom curve. My Strix could only do around 14050 at best. Although I'm also using an older cpu/ram. 6950x at 4.3GHz with 2933MHz CL15 ram. As for one of the power connectors not reporting power usage, that's common. Strix flashed with the FTW3 500W bios did the same. But as long as you can cool your card, this will actually allow you to get better results because the card maintains a higher voltage for longer.


Yea true. Is that all a BIOS does? Setting the voltage-to-MHz curve and watching power draw? Doesn't it manage the onboard chips' power delivery and frequency switching, or is that standalone regardless of the BIOS?


----------



## Sheyster

HyperMatrix said:


> As for one of the power connectors not reporting power usage, that's common. Strix flashed with the FTW3 500W bios did the same. But as long as you can cool your card, this will actually allow you to get better results because the card maintains a higher voltage for longer.


Apparently the MSI Trio X looks normal in GPU-Z with the 500W BIOS, from what I've heard. That card has to be the best value for a 3x 8-pin card, about $90 more than the FE here in the US.

EDIT - They too are impossible to find at MSRP here. Damn bots!


----------



## ninshoo

bmagnien said:


> Definitely not power limited for my needs, just threw the KP bios on my FTW3 Ultra with only the PCIe slot shunted and am pulling about 560w which is all I want since I’m in a SFFPC. Planning on throwing on the hybrid kit to drop temps a bit and call it a day. Thanks!


Hi,

Stacked resistor? How many, please?
With the KP BIOS, I read it can brick the card if PX1 does a firmware update?
Have you lost DP, gotten the fan bug, etc.?

Ty


----------



## Nizzen

rkbu11 said:


> I second this. Nizzen, your thoughts?? I have one on order and curious how it performs. Cancelled my EK order after hearing about all of the coil whine issues, to me removing/loosening backplate or messing with a bunch of different thermal pads isn't an appropriate solution to a block that costs that much.


Looks like Alphacool is cooling a bit better than EK. My friend @Carillo is using my 3090 Strix with the Alphacool atm. This card is shunt modded, and the delta is very low! Coil whine is present on pretty much every Strix card; some are crazy loud and some are average...

We need some more testing to compare them. My stock 3090 Strix with EK is cooling pretty well too, and cools a few more parts under the 3x 8-pin... I know Carillo will soon post some pictures of the Alphacool beauty beast Strix cooler  Special edition backplate cooled 

PS: Stacked shunts are no problem with Alphacool, unlike EK, where you need to mod ("carve") the backplate a bit to fit.


----------



## Sheyster

NewUser16 said:


> Newegg does have returns, but unless the card is defective, they will charge you a whopping 20% restocking fee.


I don't know what the law is in Canada. California (where Newegg HQ is located) used to have a limit on restocking fee percentage. Now this is what we have:

_Under California law, stores are free to set their own policies, such as requiring a restocking fee or requiring that the merchandise is in its original packaging – as long as they are posted conspicuously at the store or on the order form._

EDIT - California is NOT business friendly at all; this restocking fee policy is one rare exception. Many companies are leaving California for Texas, Colorado and other more business-friendly states.


----------



## Nizzen

shiokarai said:


> did you preorder when they were first announced? how's the block vs EK/bykski?


I preordered 01.11.2020


----------



## rkbu11

Nizzen said:


> Looks like Alphacool is cooling a bit better than EK. My friend @Carillo is using my 3090 Strix with the Alphacool atm. This card is shunt modded, and the delta is very low! Coil whine is present on pretty much every Strix card; some are crazy loud and some are average...
> 
> We need some more testing to compare them. My stock 3090 Strix with EK is cooling pretty well too, and cools a few more parts under the 3x 8-pin... I know Carillo will soon post some pictures of the Alphacool beauty beast Strix cooler  Special edition backplate cooled
> 
> PS: Stacked shunts are no problem with Alphacool, unlike EK, where you need to mod ("carve") the backplate a bit to fit.


Awesome reply man thank you for all of the info.


----------



## Sheyster

I'm debating whether I should try the 520W BIOS on my Strix. Not much point, since the 500W BIOS is working fine and this is an air-cooled card. Anyone here tried it? I heard someone say the fan curve was off a bit (2 fans run slower than normal). Any other feedback?


----------



## Foxrun

Just got the email notification for a 3090 FTW3 hybrid; I nearly jumped out of my seat thinking it was for the KP, damnit.


----------



## HyperMatrix

Sheyster said:


> Apparently the MSI Trio X looks normal in GPU-Z with the 500W BIOS from what I've heard. That card has to be the best value for a 3 x 8-pin card, about 90$ more than the FE here in the US.
> 
> EDIT- They too are impossible to find at MSRP here. Damn bots!


The advantage with the Strix on the 500/520W BIOS, with 1 pin not reporting and only 50W of PCIe slot power usage, is that you should _theoretically_ be able to shunt just 2 resistors to get up to 780W. And even without shunting, the non-reporting power connector should help maintain higher clocks/voltage, since your card thinks it has a lot more power limit headroom. Not that I'm a huge fan of the Strix or anything, but that's definitely a plus for the card. Especially since it doesn't kill the second HDMI port on the card either.


----------



## OC2000

Nizzen said:


> PS: Stacked shunts is no problem with Alphacool like EK that you need to mod a bit to fit... "Carve" in the backplate


Too true. I had to buy a Dremel to carve the backplate for the back resistor. EK have seriously messed up on this block. I've had nothing but problems with it. The RGB cable fell off due to being poorly placed.

Hopefully the Aqua Computer one will be better.

This card has cost me a small fortune. It makes the KingPin version seem cheap!


----------



## HyperMatrix

Foxrun said:


> Just got the email notification for a 3090 FTW3 hybrid; I nearly jumped out of my seat thinking it was for the KP, damnit.


Been waiting for the worst effing courier company in North America to deliver mine since yesterday. I even took the day off work yesterday to get it and play with it. But nope. F@#^ UPS. And they were supposed to hold it at the depot for me to go and pick it up today after work. But nope... as soon as I got to work I saw it was out for delivery. -_-

Regarding the card... I really wish they had used the 360mm radiator on the FTW3 Hybrid just as they did with the KingPin, though. I think that could have been enough to get around an average of ~2100MHz without a water block, depending on the game/load.

Also, for anyone interested in the HydroCopper model of the cards: they were officially delayed yesterday due to some problem with them that needs to be corrected.


----------



## pat182

Sheyster said:


> I'm debating if I should try the 520W BIOS on my Strix. Not much point since the 500W BIOS is working fine and this is an air-cooled card. Anyone here try it? I heard someone say the fan curve was off a bit (2 fans run slower than normal). Any other feedback?


Working mostly fine; playing Avengers at a 2100MHz lock, 520W lock, on air at 69C,
vs the stock BIOS at 1995MHz capping at 480W.

Been testing it for 2 days now. It's not really about getting more MHz; it's more about keeping it stable. I can hold 2085MHz on the stock BIOS if I don't hit the power cap. The KPE BIOS just allows me to keep 2085MHz even if the power kicks higher than 480W to maintain the clock. And if I do power cap at 520W, it won't drop to 1995MHz like the stock BIOS; more like 2025MHz.

I'm thinking I'm gonna lock it to 2085MHz like the OEM BIOS and let the power do the rest. It should stay at 2085MHz whatever the load is going to be, plus or minus a 40W load difference.

----------



## HyperMatrix

pat182 said:


> Yea true. Is that all a BIOS does? Setting the voltage-to-MHz curve and watching power draw? Doesn't it manage the onboard chips' power delivery and frequency switching, or is that standalone regardless of the BIOS?


As far as I'm aware, all the component management is done outside of the BIOS. That's why the cards can behave differently and pull more voltage when you flash a BIOS that reports one power connector sitting at 1W while your whole card is only utilizing 330W out of a 500W limit.

If all the board's components and VRMs and everything were managed by the BIOS, I'd expect some serious failures when cross-flashing a BIOS, considering the rather big difference in designs with regard to analog vs. digital controllers.


----------



## Nizzen

Bykski:

Pros:
- Cheap
- Cools very well
- THICK backplate
- Doesn't interfere with any of the double-stacked shunts

Cons:
- No "how to do it" manual, but @chispy got one from a Bykski rep
- Interferes with a fan header, so modding is required (it bends the PCB; carving in the top of the block)
- Doesn't cool "every" component like the EK block
- It lacked some thermal pads LOL

Alphacool:

Pros:
- Cools very well, and looks nice (my opinion)
- Doesn't interfere with any of the double-stacked shunts

Cons:
- Expensive? Too early to tell more...

EK:

Pros:
- Cools "everything". Thermal pad galore  Full PCB cooling.
- A few options on "colors"

Cons:
- Interferes with one stacked shunt on the backplate  Need to mod (carve)
- Lacked one thermal pad LOL (bought Thermal Grizzly Minus Pads to replace them all)
- Too early to tell more...

Will update as time goes by (maybe )


----------



## Sheyster

HyperMatrix said:


> Especially since it doesn't kill the second HDMI port on the card either.


That is indeed a big plus for the Strix with the 500W BIOS, as is the fact that it has a second HDMI 2.1 port at all. The FE and MSI Trio X only have one, as do all the EVGA cards AFAIK.


----------



## bmagnien

ninshoo said:


> Hi,
> 
> Stacked resistor? How many, please?
> With the KP BIOS, I read that it bricks the card if PX1 does a firmware update?
> Have you lost DP outputs, had the fan bug, etc.?
> 
> Ty


I used conductive paint to stack an 8 mΩ resistor on the PCIe slot shunt. This bypasses the FTW3's issue where the power draw ratio is locked between the PCIe slot and the 8-pins. I flashed the KP BIOS (I was previously using the XOC 500 W BIOS, which was the entire reason I did the shunt mod in the first place: to get the full 500 W out of it, which I, like many others, wasn't able to do without the mod).

I DID NOT upgrade the firmware when prompted by PX1. I actually did try at first, but red lights came on the card and it kept trying over and over to complete the firmware update. I was able to cancel out of this by flipping the BIOS switch and restarting, then switching back to the KP BIOS and just cancelling PX1's attempts to update the firmware, and finally just uninstalling PX1 and handling all OC via Afterburner, controlling the fan speeds via a custom curve in Argus Monitor.

Power limit goes to 120%, boost clock is 1920, and GPU-Z reads max PL as 520 W, so this is definitely a working version of the KP BIOS. I calculate my total board power by multiplying the PCIe slot reading by 1.625, then adding the three 8-pin readings, which are all accurate. Total board draw caps at 565 W, with about 115 W through the slot, which is more than I'd like, but it has been totally stable so far. All scores have increased in benchmarks and gaming. Max temps ~65°C on air. Will be adding the hybrid cooler next week and will report back on temps; hoping for ~50°C with a full push/pull setup using the 240 mm rad and adding 4x 120x25 mm Noctuas.
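For anyone wondering where that 1.625 multiplier comes from, it falls out of the parallel-resistor math if the stock PCIe slot shunt is 5 mΩ (a common value, but an assumption here; check your own board). The card converts shunt voltage to watts using the stock resistance, so the modded rail under-reads by the ratio of the effective (paralleled) resistance to the stock one. A minimal sketch, with hypothetical sensor readings:

```python
# Sketch of the board-power correction described above. Assumes a 5 mOhm
# stock PCIe slot shunt with an 8 mOhm resistor stacked in parallel; the
# card converts shunt voltage to watts using the STOCK resistance, so the
# modded rail under-reads by R_eff / R_stock.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 5.0    # mOhm, assumed stock slot shunt
R_STACK = 8.0    # mOhm, resistor stacked with conductive paint

scale = R_STOCK / parallel(R_STOCK, R_STACK)
print(round(scale, 3))   # 1.625, the multiplier used in the post

def true_board_power(slot_reported, eight_pin_readings):
    # Only the slot shunt was modded, so only that reading needs scaling;
    # the 8-pin readings are taken at face value.
    return slot_reported * scale + sum(eight_pin_readings)

# Hypothetical sensor readings, in watts:
print(true_board_power(70.0, [150.0, 150.0, 151.0]))   # -> ~564.75 W total
```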


----------



## HyperMatrix

Sheyster said:


> That is indeed a big plus for the Strix with the 500W BIOS, as is the fact that it has a second HDMI 2.1 port at all. The FE and MSI Trio X only have one, as do all the EVGA cards AFAIK.


Yup. And if you have an AV receiver hooked up to your PC... there goes your one HDMI port. And if your receiver isn't literally brand new from the last few months, it won't have an HDMI 2.1 port for pass-through, so no TV hookup. Two things surprise me:

1) There's no way to hook up a receiver for just audio without it showing up as a monitor on your system (if it were HDMI you could get an audio extractor device, but for 4K/144Hz DP it's no bueno).

2) No one but Asus provided a second HDMI port on a $1500-$2000 video card.


----------



## bmagnien

HyperMatrix said:


> Been waiting for the worst effing courier company in North America to deliver mine since yesterday. Even took the day off from work yesterday to get it and play with it. But nope. F@#^ UPS. And they were supposed to hold it at the depot for me to go and pick it up today after work. But nope... as soon as I got to work I saw it was out for delivery. -_-
> 
> Regarding the card... I really wish they had used the 360 mm radiators on the FTW3 just as they did with the KingPin. I think that could have been enough to get around an average of ~2100 MHz without a water block, depending on the game/load.
> 
> Also, for anyone interested in the HydroCopper model of the cards, they were officially delayed yesterday due to some problem that needs to be corrected.


Linus (I know) had a video comparing 120 vs. 240 vs. 360 mm rads on the 3090. Going from 120 to 240 was a big difference, but 360 added little extra cooling. I think push/pull on a 240 is about the best you could hope for from a hybrid solution without going to a full custom water loop.


----------



## Sheyster

pat182 said:


> Working mostly fine. Playing Avengers at a 2100 MHz lock, 520 W lock, on air at 69°C, vs. the stock BIOS at 1995 MHz capping at 480 W.
> 
> Been testing it for two days now. It's not really about getting more MHz, it's more about keeping it stable. I can hold 2085 MHz on the stock BIOS if I don't power cap; the KPE BIOS just lets me keep 2085 MHz even when the power spikes above 480 W, to maintain the clock. And if I power cap at 520 W, it won't drop to 1995 MHz like the stock BIOS does, more like 2025 MHz.
> 
> I'm thinking I'm going to lock it to 2085 MHz like the OEM BIOS and let the power do the rest. It should stay at 2085 MHz whatever the load is going to be, give or take a 40 W load difference.


Thanks for this feedback. I'll probably go ahead and try it. What are you doing with the fans? Locking them at X%, or a custom curve?


----------



## Aristotle

pat182 said:


> Working mostly fine. Playing Avengers at a 2100 MHz lock, 520 W lock, on air at 69°C, vs. the stock BIOS at 1995 MHz capping at 480 W.
> 
> Been testing it for two days now. It's not really about getting more MHz, it's more about keeping it stable. I can hold 2085 MHz on the stock BIOS if I don't power cap; the KPE BIOS just lets me keep 2085 MHz even when the power spikes above 480 W, to maintain the clock. And if I power cap at 520 W, it won't drop to 1995 MHz like the stock BIOS does, more like 2025 MHz.
> 
> I'm thinking I'm going to lock it to 2085 MHz like the OEM BIOS and let the power do the rest. It should stay at 2085 MHz whatever the load is going to be, give or take a 40 W load difference.


Is boost lock ok to run 24/7? Won't it be constantly putting a high load on the PSU?


----------



## HyperMatrix

bmagnien said:


> Linus (I know) had a video comparing 120 vs. 240 vs. 360 mm rads on the 3090. Going from 120 to 240 was a big difference, but 360 added little extra cooling. I think push/pull on a 240 is about the best you could hope for from a hybrid solution without going to a full custom water loop.


It would depend on a few things: how much heat the card in question is putting out (was it 500W+?), and also the speed of the pump. And also what's considered "little extra cooling." To me, 3-5°C cooler would be massive. But if he's making the video for your average user who doesn't mind 70-80°C temps, 3-5°C may not be much.

But now you've got me curious. I wonder if the Hybrid FTW3 cooler has screw holes on the other side to add 2 more fans. All the pics I've seen of it show off those fancy RGB fans...


----------



## mirkendargen

HyperMatrix said:


> Yup. And if you have an AV receiver hooked up to your PC... there goes your one HDMI port. And if your receiver isn't literally brand new from the last few months, it won't have an HDMI 2.1 port for pass-through, so no TV hookup. Two things surprise me:
> 
> 1) There's no way to hook up a receiver for just audio without it showing up as a monitor on your system (if it were HDMI you could get an audio extractor device, but for 4K/144Hz DP it's no bueno).
> 
> 2) No one but Asus provided a second HDMI port on a $1500-$2000 video card.


1. eARC is the answer to this, but both TV and receiver implementations are still half baked and have issues.
2. Gigabyte Aorus Master has 2x HDMI and Aorus Xtreme has 3x HDMI. Not sure if they're all 2.1 or not on those cards.


----------



## pat182

HyperMatrix said:


> As far as I'm aware, all the component management is done outside the BIOS. That's why the cards can behave differently and pull more power when you flash a BIOS that reports one power connector sitting at 1 W while your whole card is only using 330 W of its 500 W limit.
> 
> If all the board's components, VRMs, and everything else were managed by the BIOS, I'd expect some serious failures when cross-flashing a BIOS, considering the rather big difference in designs with regard to analog vs. digital controllers.


Makes sense.


----------



## pat182

Aristotle said:


> Is boost lock ok to run 24/7? Won't it be constantly putting a high load on the PSU?


That depends on your PSU. I have an 850 W, so I'd say that's my limit.


----------



## HyperMatrix

mirkendargen said:


> 1. eARC is the answer to this, but both TV and receiver implementations are still half baked and have issues.
> 2. Gigabyte Aorus Master has 2x HDMI and Aorus Xtreme has 3x HDMI. Not sure if they're all 2.1 or not on those cards.


Interestingly enough... so yes: 2x HDMI 2.1 and 1x useless 2.0 for your receiver. Good choice imo.

Output
DisplayPort 1.4a ×3
HDMI 2.1 ×2, HDMI 2.0 ×1 (the middle HDMI output supports up to HDMI 2.0)


----------



## pat182

So on the 520 W BIOS on the Strix, I'm permanently Vrel/Vop in demanding games. So the description would be: voltage-limited trying to achieve the target MHz? That's my understanding of it.


----------



## pantsoftime

Nillas said:


> Did some more testing; it's obvious my power limit is stopping me. 450 W Gigabyte Aorus 3090 Xtreme card.
> This is in a pretty cold room with the stock air cooler.
> 
> Still trying to find info on my card, PCB layout etc., or whether I can put an EVGA BIOS on it, but I can't find anything.
> 
> core +130 mem +200: 14577
> core +180 mem +900: 14818 (with Windows GPU acceleration on)
> core +180 mem +900: 14943
> core +190 mem +700: 14953
> core +190 mem +800: 14976
> core +190 mem +1000: 14853 (with Windows GPU acceleration on)
> core +200 mem +800: 15055
> core +200 mem +900: 15061 https://www.3dmark.com/pr/592396
> core +210 mem +900: 15056


Wonderful scores. I haven't found anything about the layout of the card. I have one, but I don't want to risk taking it apart until a waterblock is available; I'm not sure what size thermal pads I'd need if they tore, etc.

Some guy earlier in the thread was able to crossflash a different BIOS (I believe it was EVGA), but he lost display outputs. I believe triple 8-pin BIOSes will work, but you might lose some outputs. The Strix has 2 HDMIs, so maybe the 480 W Strix BIOS would work as well? Might be worth trying, since there's always the dual-BIOS switch if something goes wrong.


----------



## OC2000

HyperMatrix said:


> Interestingly enough... so yes: 2x HDMI 2.1 and 1x useless 2.0 for your receiver. Good choice imo.
> 
> Output
> DisplayPort 1.4a ×3
> HDMI 2.1 ×2, HDMI 2.0 ×1 (the middle HDMI output supports up to HDMI 2.0)


Really hope you get a decent card next!


----------



## bmagnien

HyperMatrix said:


> It would depend on a few things: how much heat the card in question is putting out (was it 500W+?), and also the speed of the pump. And also what's considered "little extra cooling." To me, 3-5°C cooler would be massive. But if he's making the video for your average user who doesn't mind 70-80°C temps, 3-5°C may not be much.
> 
> But now you've got me curious. I wonder if the Hybrid FTW3 cooler has screw holes on the other side to add 2 more fans. All the pics I've seen of it show off those fancy RGB fans...


Totally valid point about temperature subjectivity. But I just meant it to show that, all other things being equal, the delta between 120 and 240 was larger than 240 to 360.

Regarding the rad, it's a very basic Asetek pump-and-rad combo that EVGA white-labels, with a custom-built copper heat sink pre-attached around the pump to fit their PCB. So the rad is just like any other rad, with holes on both sides, that EVGA slapped some RGB fans onto. I plan to swap them out with Noctuas connected directly to my mobo to control separately, and will start with just push and then bolt on the others as pull to see if there's a difference there.


----------



## Falkentyne

pat182 said:


> So on the 520 W BIOS on the Strix, I'm permanently Vrel/Vop in demanding games. So the description would be: voltage-limited trying to achieve the target MHz? That's my understanding of it.


Yeah, so all you can do from here is raise the clocks up until you're unstable, or buy the Elmor controller tool to gain direct voltage control (you're going to need chilled water or better to cool a GPU >1.1v).


----------



## pat182

Falkentyne said:


> Yeah, so all you can do from here is raise the clocks up until you're unstable, or buy the Elmor controller tool to gain direct voltage control (you're going to need chilled water or better to cool a GPU >1.1v).


Meh, it's fine, it's not that big of a deal. I'm already pushing the limit of the air cooler and don't want to watercool.


----------



## pantsoftime

Nillas said:


> Still trying to find info on my card, PCB layout etc., or whether I can put an EVGA BIOS on it, but I can't find anything.


I found this image when I was on Bykski's Chinese website. Their USA reps said they don't have an ETA yet, but the Chinese site has a waterblock spec listed. At least we finally have a view of it.









Source: https://www.bykski.com/page133?product_id=5053&brd=1


----------



## motivman

So, conclusion time folks... at least for the reference PCB; not sure about the FE PCB or other custom PCBs, but this is my experience with my PNY reference card...

It does not matter what resistors you use to shunt... this PCB will still draw about 515-535 W max on the stock BIOS. History of my shunting shenanigans...

1. First shunt: conductive paint on all six resistors. Max power draw (verified with GPU-Z and a Kill A Watt meter): 520-535 W (depending on benchmark)
2. Second shunt: 5 mΩ on all resistors except the PCIe resistor (10 mΩ). Max power draw: 520-535 W (depending on benchmark)
3. Third shunt: 5 mΩ on all resistors including PCIe. Max power draw: 520-535 W (depending on benchmark)

Now the interesting part, lol...

1. First shunt, conductive paint on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 64 W)
2. Second shunt, 10 mΩ on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 55 W)
3. Third shunt, 5 mΩ on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 42.5 W)

You see the pattern here, folks? You cannot outsmart NVIDIA's engineers, LOL. There are other power limits on these cards that shunting cannot defeat. Only a proper BIOS can disable these hidden limits.

The stats above are all results using the stock PNY reference BIOS. The ONLY WAY I can get this thing to draw more power (600 W) is to flash either the EVGA 500 W BIOS or the KingPin 520 W BIOS.

So, conclusion: IT DOES NOT MATTER what resistor you use to shunt the PCIe slot, or even the entire card for that matter. Your card will draw a certain amount of power, and changing shunts will just result in the card adjusting its readings to your shunt resistor values. The only way to get more power is to flash a BIOS that allows more power draw.

Have a good weekend, folks!

LOL.
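The pattern in those GPU-Z readings is consistent with the usual shunt-mod math, assuming a 5 mΩ stock slot shunt (an assumption, not confirmed for the PNY board): the controller converts the shunt's voltage drop into watts using the stock resistance, so with the real slot draw pinned at its ~85 W ceiling, the reported figure scales with the effective (paralleled) resistance. A minimal sketch:

```python
# Why the GPU-Z slot reading drops as the stacked resistance drops, assuming
# a 5 mOhm stock PCIe slot shunt. The controller converts shunt voltage to
# watts using the STOCK resistance, so stacking a resistor in parallel scales
# the REPORTED value by R_eff / R_stock while real draw stays at ~85 W.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def reported(actual_watts, r_stock, r_stacked):
    return actual_watts * parallel(r_stock, r_stacked) / r_stock

ACTUAL = 85.0   # real slot draw in watts, per the wall-meter observations above

print(reported(ACTUAL, 5.0, 10.0))  # ~56.7 W, close to the 55 W observed
print(reported(ACTUAL, 5.0, 5.0))   # 42.5 W, exactly the 42.5 W observed
```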


----------



## Falkentyne

motivman said:


> So, conclusion time folks... at least for the reference PCB; not sure about the FE PCB or other custom PCBs, but this is my experience with my PNY reference card...
> 
> It does not matter what resistors you use to shunt... this PCB will still draw about 515-535 W max on the stock BIOS. History of my shunting shenanigans...
> 
> 1. First shunt: conductive paint on all six resistors. Max power draw (verified with GPU-Z and a Kill A Watt meter): 520-535 W (depending on benchmark)
> 2. Second shunt: 5 mΩ on all resistors except the PCIe resistor (10 mΩ). Max power draw: 520-535 W (depending on benchmark)
> 3. Third shunt: 5 mΩ on all resistors including PCIe. Max power draw: 520-535 W (depending on benchmark)
> 
> Now the interesting part, lol...
> 
> 1. First shunt, conductive paint on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 64 W)
> 2. Second shunt, 10 mΩ on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 55 W)
> 3. Third shunt, 5 mΩ on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 42.5 W)
> 
> You see the pattern here, folks? You cannot outsmart NVIDIA's engineers, LOL. There are other power limits on these cards that shunting cannot defeat. Only a proper BIOS can disable these hidden limits.
> 
> The stats above are all results using the stock PNY reference BIOS. The ONLY WAY I can get this thing to draw more power (600 W) is to flash either the EVGA 500 W BIOS or the KingPin 520 W BIOS.
> 
> So, conclusion: IT DOES NOT MATTER what resistor you use to shunt the PCIe slot, or even the entire card for that matter. Your card will draw a certain amount of power, and changing shunts will just result in the card adjusting its readings to your shunt resistor values. The only way to get more power is to flash a BIOS that allows more power draw.
> 
> Have a good weekend, folks!
> 
> LOL.


I replied to you in the shunt mod thread.
I can get the FE to draw up to 600 W, but "something" at around 540 W starts triggering some strange throttle event, which actually GOES AWAY as the power draw/heat keeps going up.

Maybe the PCIE Slot and Chip Power "Mini 005 shunts" have something to do with this?









RTX 3090 Founders Edition working shunt mod (www.overclock.net):

"Here is the perfect placement for hotspots on a 3090 FE. This is based on Igor's picture and a combination of his hotspot image for the 3080 as well. Try this if you haven't already and see if it fixes your random reboots. On a TDP-modded 3090, the pads must be literally -perfect- or any..."


----------



## Thebc2

Sheyster said:


> I'm debating if I should try the 520W BIOS on my Strix. Not much point since the 500W BIOS is working fine and this is an air-cooled card. Anyone here try it? I heard someone say the fan curve was off a bit (2 fans run slower than normal). Any other feedback?


I had good luck with it, but I am water-cooled. It was a linear bump for me over the 500 W BIOS.

I think I may go ahead with shunting at this point; that BIOS did help me get #25 back on Port Royal.

Also, this was with new NVIDIA drivers and a new 3DMark, so I'm not 100% sure where the bump came from.


Sent from my iPhone using Tapatalk Pro


----------



## HyperMatrix

pat182 said:


> So on the 520 W BIOS on the Strix, I'm permanently Vrel/Vop in demanding games. So the description would be: voltage-limited trying to achieve the target MHz? That's my understanding of it.


Could be either too much or too little voltage; usually in this situation, too little. You basically have to manage 3 things for an OC: 1) voltage, 2) thermals, 3) power limit. You can increase the power limit with a BIOS or shunts. You can lower temps with a water block. But eventually you'll run into a voltage limit. That's why the Strix with an EVC2, or the KingPin with the Classified tool, is handy for increasing voltage. But... I wouldn't recommend significantly increased voltage for a daily gaming build. It's good for benchmarks, and good if you're trying to, for example, get a small bump from 2175 MHz to 2205 MHz just to break a barrier. But overall, you're going to run into a cyclical heat/power constraint if you're trying to push for anything substantial past 800 W.


----------



## HyperMatrix

bmagnien said:


> Totally valid point about temperature subjectivity. But I just meant it to show that, all other things being equal, the delta between 120 and 240 was larger than 240 to 360.
> 
> regarding the rad, it’s a very basic asetek pump and rad combo that evga white labels with a custom built copper heat sink preattached around the pump to fit their pcb. So the rad is just like any other rad with holes on both sides that evga slapped some rgb fans on to. I plan to swap them out with noctuas connected directly to my mobo to control separately, and will start with just push and then bolt on the others as pull to see if there’s a difference there


I just ran home to meet the UPS driver to get my card. It's actually not a bad cooling setup. I stuck the exhaust next to the intake on one of my existing radiators to get a kind of push/pull setup. I only got to do one Port Royal run before heading back to work, but here's what I've observed so far.

- 2130-2160 MHz seems doable. I say seems doable because I was able to hit it, and it didn't crash while the card was cool. But there was power limit throttling.

- I flashed the KingPin BIOS right away. Didn't try the original BIOS. But even with the KingPin 520 W BIOS, it still seems to be limited to around 430 W draw.

- I did a PR run with that 430 W limit and got the same score I got with my Strix on the 500 W EVGA BIOS.

- During the PR run, my temps stayed below 50°C even at the end. But this was with the 430 W limit, so if it were able to pull 520 W, it'd be a different story. Still, that's really not bad, especially since the room was warm (it's 12°C outside today; that's 54°F).

My biggest concern is the power limit issue. I was really hoping I'd have one of the cards that wasn't affected by it. I'll have to play around some more when I get home, but overall... the only difference from short-term testing is that my Strix would run 2175 MHz for a few seconds before crashing; for this card it seems to be 2160 MHz. Of course I'll have to do more testing. But great first results. Can't wait for the HydroCopper to get released.

p.s. If you swap out the fans for ones connected to your motherboard, you'll gain an additional 10-15 W in your power limit as well. I know some have said the fans don't draw from the TDP limit, but they're wrong.



OC2000 said:


> really hope you get a decent card next.!


Thanks mate. This FTW3 Hybrid seems to be about the same as my Strix, but it's at least able to cool the card a bit better and maintain better clocks. The power limit is still an issue, but I'll see what can be done about that without shunting, as I'm planning to sell this card once my HydroCopper comes in. The HydroCopper will do even better thanks to consistently lower temperatures, so that's something to look forward to, even if I don't end up being able to pick up a KingPin card.

So far my theory that all chips can hit at least 2130 MHz is holding up.


----------



## Falkentyne

HyperMatrix said:


> I just ran home to meet the UPS driver to get my card. It's actually not a bad cooling setup. I stuck the exhaust next to the intake on one of my existing radiators to get a kind of push/pull setup. I only got to do one Port Royal run before heading back to work, but here's what I've observed so far.
> 
> - 2130-2160 MHz seems doable. I say seems doable because I was able to hit it, and it didn't crash while the card was cool. But there was power limit throttling.
> 
> - I flashed the KingPin BIOS right away. Didn't try the original BIOS. But even with the KingPin 520 W BIOS, it still seems to be limited to around 430 W draw.
> 
> - I did a PR run with that 430 W limit and got the same score I got with my Strix on the 500 W EVGA BIOS.
> 
> - During the PR run, my temps stayed below 50°C even at the end. But this was with the 430 W limit, so if it were able to pull 520 W, it'd be a different story. Still, that's really not bad, especially since the room was warm (it's 12°C outside today; that's 54°F).
> 
> My biggest concern is the power limit issue. I was really hoping I'd have one of the cards that wasn't affected by it. I'll have to play around some more when I get home, but overall... the only difference from short-term testing is that my Strix would run 2175 MHz for a few seconds before crashing; for this card it seems to be 2160 MHz. Of course I'll have to do more testing. But great first results. Can't wait for the HydroCopper to get released.
> 
> p.s. If you swap out the fans for ones connected to your motherboard, you'll gain an additional 10-15 W in your power limit as well. I know some have said the fans don't draw from the TDP limit, but they're wrong.


Interesting that the scores are the same despite a 70 W difference... hmmm...

Throwing a 15 mΩ (safe) shunt on the PCIe slot should give you the full 500 W. Just make sure the TOP of the stacked shunt is not bridged with solder (as @dante`afk found out, doing that literally halves the resistance of the entire shunt...).
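A quick sanity check on that 15 mΩ suggestion, assuming a 5 mΩ stock slot shunt (an assumption; check your own board). The rail under-reads by R_eff / R_stock, so real draw per reported watt rises by the inverse, and an accidental solder bridge that halves the combined resistance doubles the under-read again, which is why the warning matters:

```python
# Parallel-shunt math for the 15 mOhm stack suggested above, assuming a
# 5 mOhm stock PCIe slot shunt. The rail under-reads by R_eff / R_stock.

def parallel(r1, r2):
    """Equivalent resistance of two parallel resistors."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 5.0                            # mOhm (assumed stock value)
r_intended = parallel(R_STOCK, 15.0)     # 3.75 mOhm
print(r_intended / R_STOCK)              # 0.75: slot reads 25% low -> ~1.33x real headroom

# The solder-bridge mistake mentioned above roughly halves the combined
# resistance, doubling the under-read (and the real current) -- hence the warning.
r_bridged = r_intended / 2               # ~1.875 mOhm
print(r_bridged / R_STOCK)               # 0.375: slot reads 62.5% low -> ~2.67x real draw
```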


----------



## bmagnien

HyperMatrix said:


> I just ran home to meet the UPS driver to get my card. It's actually not a bad cooling setup. I stuck the exhaust next to the intake on one of my existing radiators to get a kind of push/pull setup. I only got to do one Port Royal run before heading back to work, but here's what I've observed so far.
> 
> - 2130-2160 MHz seems doable. I say seems doable because I was able to hit it, and it didn't crash while the card was cool. But there was power limit throttling.
> 
> - I flashed the KingPin BIOS right away. Didn't try the original BIOS. But even with the KingPin 520 W BIOS, it still seems to be limited to around 430 W draw.
> 
> - I did a PR run with that 430 W limit and got the same score I got with my Strix on the 500 W EVGA BIOS.
> 
> - During the PR run, my temps stayed below 50°C even at the end. But this was with the 430 W limit, so if it were able to pull 520 W, it'd be a different story. Still, that's really not bad, especially since the room was warm (it's 12°C outside today; that's 54°F).
> 
> My biggest concern is the power limit issue. I was really hoping I'd have one of the cards that wasn't affected by it. I'll have to play around some more when I get home, but overall... the only difference from short-term testing is that my Strix would run 2175 MHz for a few seconds before crashing; for this card it seems to be 2160 MHz. Of course I'll have to do more testing. But great first results. Can't wait for the HydroCopper to get released.
> 
> p.s. If you swap out the fans for ones connected to your motherboard, you'll gain an additional 10-15 W in your power limit as well. I know some have said the fans don't draw from the TDP limit, but they're wrong.


Can I ask what the highest frequency you were able to hit in PR before crashing on air cooling was? Mine would consistently crash at 2100 MHz, no matter whether it was at 60 degrees or 70 degrees. I'm wondering if getting it down closer to 50 will allow me to go over 2100 without crashing.


----------



## Falkentyne

bmagnien said:


> Can I ask what the highest frequency you were able to hit in PR before crashing on air cooling was? Mine would consistently crash at 2100 MHz, no matter whether it was at 60 degrees or 70 degrees. I'm wondering if getting it down closer to 50 will allow me to go over 2100 without crashing.


If you don't mind an FE result: my PR has no problem whatsoever "starting out" at 2130 MHz (+165 offset) on an FE, but obviously it drops down to 2100-2085 MHz due to temperature scaling.
Call of Duty, however, is BARELY stable with +150. Barely. +135 is rock stable. +150 is rock stable at 1080p; throw 1440p at it and suddenly it starts not liking you.


----------



## HyperMatrix

bmagnien said:


> Can I ask what the highest frequency you were able to hit in PR before crashing on air cooling was? Mine would consistently crash at 2100 MHz, no matter whether it was at 60 degrees or 70 degrees. I'm wondering if getting it down closer to 50 will allow me to go over 2100 without crashing.


Did you put the fans on 100% ?? Mine didn't even hit 50C with maxed out fans during entire PR run. Monitoring with GPUZ I was power limited the entire run at 430W so the clocks would be bouncing up and down. I was using either +140 or +155 on the KingPin bios. It would hit 2145-2160MHz on Valley, but would throttle down lower than that. In PR, it was power limited and would drop quite a bit more than that. Probably low to mid 2000s. 

I'll do more testing in a few hours when I'm home. Still a little bummed about the power limit issue though. What's the max power you're pulling?

Also regarding thermals, with my Strix clock speeds would start going down after 52C.


----------



## DrunknFoo

motivman said:


> So, conclusion time folks... at least for the reference PCB; not sure about the FE PCB or other custom PCBs, but this is my experience with my PNY reference card...
> 
> It does not matter what resistors you use to shunt... this PCB will still draw about 515-535 W max on the stock BIOS. History of my shunting shenanigans...
> 
> 1. First shunt: conductive paint on all six resistors. Max power draw (verified with GPU-Z and a Kill A Watt meter): 520-535 W (depending on benchmark)
> 2. Second shunt: 5 mΩ on all resistors except the PCIe resistor (10 mΩ). Max power draw: 520-535 W (depending on benchmark)
> 3. Third shunt: 5 mΩ on all resistors including PCIe. Max power draw: 520-535 W (depending on benchmark)
> 
> Now the interesting part, lol...
> 
> 1. First shunt, conductive paint on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 64 W)
> 2. Second shunt, 10 mΩ on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 55 W)
> 3. Third shunt, 5 mΩ on the PCIe slot shunt: 85 W max power draw through the PCIe slot (GPU-Z reading on average 42.5 W)
> 
> You see the pattern here, folks? You cannot outsmart NVIDIA's engineers, LOL. There are other power limits on these cards that shunting cannot defeat. Only a proper BIOS can disable these hidden limits.
> 
> The stats above are all results using the stock PNY reference BIOS. The ONLY WAY I can get this thing to draw more power (600 W) is to flash either the EVGA 500 W BIOS or the KingPin 520 W BIOS.
> 
> So, conclusion: IT DOES NOT MATTER what resistor you use to shunt the PCIe slot, or even the entire card for that matter. Your card will draw a certain amount of power, and changing shunts will just result in the card adjusting its readings to your shunt resistor values. The only way to get more power is to flash a BIOS that allows more power draw.
> 
> Have a good weekend, folks!
> 
> LOL.


Curious... because my Kill A Watt shows otherwise. Then again, FTW3 card... As does the increase in temperature depending on the shunts used or swapped.

Even on the stock 450 W BIOS: 50°C average vs. 65°C average when shunted; the Kill A Watt was showing 570 W vs. 730 W.

Now, with the KP BIOS (different shunt config), I'm seeing upwards of 900-1000 W on the Kill A Watt and was required to use liquid metal. Showing peaks of 80°C, average 75°C.

But prior to this, on my very first shunt attempt, I gained zero difference in wattage draw, and I have no clue why. Likely my choice of resistance values across the shunts was too large? Shrug, no idea...


----------



## bmagnien

HyperMatrix said:


> Did you put the fans on 100% ?? Mine didn't even hit 50C with maxed out fans during entire PR run. Monitoring with GPUZ I was power limited the entire run at 430W so the clocks would be bouncing up and down. I was using either +140 or +155 on the KingPin bios. It would hit 2145-2160MHz on Valley, but would throttle down lower than that. In PR, it was power limited and would drop quite a bit more than that. Probably low to mid 2000s.
> 
> I'll do more testing in a few hours when I'm home. Still a little bummed about the power limit issue though. What's the max power you're pulling?
> 
> Also regarding thermals, with my Strix clock speeds would start going down after 52C.


Oh sorry I think I just got confused. This is your first FTW3, and it’s a hybrid, correct? I’m converting my current FTW3 into a hybrid with the kit next week. I think what I was asking you was to compare air vs hybrid which I imagine would not be possible for you lol, that was a brain fart by me


----------



## HyperMatrix

bmagnien said:


> Oh sorry I think I just got confused. This is your first FTW3, and it’s a hybrid, correct? I’m converting my current FTW3 into a hybrid with the kit next week. I think what I was asking you was to compare air vs hybrid which I imagine would not be possible for you lol, that was a brain fart by me


Ah. I saw someone mention their FTW3 Hybrid arriving today and thought it was you since you were talking about it. Haha. Yes this is my first FTW3. Had a Strix before. Have an FTW3 HydroCopper arriving whenever they release it as well. And still hoping to get in on the KingPin queue.


----------



## bmagnien

bmagnien said:


> Oh sorry I think I just got confused. This is your first FTW3, and it’s a hybrid, correct? I’m converting my current FTW3 into a hybrid with the kit next week. I think what I was asking you was to compare air vs hybrid which I imagine would not be possible for you lol, that was a brain fart by me


Best thing I could recommend is getting a 10 or 15 mOhm resistor and stacking it on your PCIe slot shunt with conductive paint. It's literally the easiest/safest mod you can do and completely alleviates the power limit issue. I originally said I'd never do shunt modding, but when I read about that I figured it was a pretty low-risk alternative that met my needs, so I went for it and am super happy with the results. Obviously it'd be nice if EVGA hadn't ****ed up their power ratios, and maybe they'll fix it eventually, but all I know is I've got the best of both worlds now: a reversible mod that I could RMA if things go south, and a 520W vbios that works for my card and goes to 565W with the PCIe slot modded.
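The arithmetic behind the mod bmagnien describes is just parallel resistance: the added resistor sits on top of the stock shunt and lowers what the controller measures. A minimal sketch, assuming a 5 mOhm stock slot shunt and a 66W slot budget (both are assumptions, not figures from the post):

```python
def stacked(r_stock_mohm: float, r_added_mohm: float) -> float:
    """Effective resistance of the added shunt in parallel with stock (mOhm)."""
    return (r_stock_mohm * r_added_mohm) / (r_stock_mohm + r_added_mohm)

def effective_limit(limit_w: float, r_stock_mohm: float, r_added_mohm: float) -> float:
    """Real power the rail carries before the vbios thinks it hit limit_w."""
    return limit_w * r_stock_mohm / stacked(r_stock_mohm, r_added_mohm)

# A 15 mOhm resistor stacked on a 5 mOhm shunt drops the effective
# resistance to 3.75 mOhm, so the monitor under-reads by 5/3.75 and a
# 66 W slot budget becomes roughly 88 W of real headroom:
print(round(effective_limit(66, 5, 15)))  # -> 88
```

On the same assumed values, a 10 mOhm stack gives a factor of 1.5 (~99W); the smaller the added resistor, the more aggressive the under-read.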


----------



## Thanh Nguyen

What is the best block in your opinion? I had the Aquacomputer block for the reference board and the temps are almost the same as the Alphacool. Worth giving Heatkiller a chance?


----------



## motivman

DrunknFoo said:


> Curious... cause my Kill-A-Watt shows otherwise... then again, FTW3 card... As well as the increase in temperature based on shunts used or swapped...
> 
> Even using the stock 450W bios: 50°C avg to 65°C avg when shunted; the Kill-A-Watt was pulling 570W vs 730W.
> 
> Now with the KP bios I'm seeing upwards of 900-1000W (different shunt config) on the Kill-A-Watt, and it required switching to LM. Showing peaks of 80°C, 75°C average.
> 
> 
> 
> But prior to this, on my very first shunt attempt, I gained zero difference in wattage draw, and have no clue as to why. Likely my choice of resistance values across the shunts was too large? /Shrug, no idea...


I do not believe this affects the 3x8-pin custom cards, as Framechasers claims his card (a highly modified and shunted Strix) is pulling 800W on its own! However, my 2x8-pin card that pulls ONLY 600W (on the 520W KP bios) beats him in all benchmarks except Time Spy Extreme??? I think 600W is more than enough for these cards anyway; anything extra might just be wasted energy....


----------



## motivman

Thanh Nguyen said:


> What is the best block in your opinion? I had the aquacomputer block for reference board and the temp is almost the same as alphacool. Worth it to give heatkiller a chance?


Thinking about dumping my EK for a Heatkiller also. The blocks are now in stock, but it will take another 2-3 weeks for the backplate to come back in stock... I need both at the same time, so I'll just wait it out.






Waterblocks - GPU GeForce® RTX 3080/3090 (shop.watercool.de)


----------



## HyperMatrix

motivman said:


> I do not believe this affects the 3x8-pin cards, as Framechasers claims his card (a highly modified and shunted Strix) is pulling 800W on its own! However, my 2x8-pin card that pulls ONLY 600W (on the 520W KP bios) beats him in all benchmarks except Time Spy Extreme??? I think 600W is more than enough for these cards though; anything extra might just be wasted energy....


It just depends on how much voltage your chip needs to get to a certain clock. The higher the voltage required, the more power required to reach the same clock/performance. Other system specs also affect benchmark numbers. I also don't believe he's pulling 800W from just his card unless he's volt modding.

I hate to make things about appearance but I've seen a couple of his videos and I just hate looking at his stupid face. Makes it hard to take anything he says seriously.


----------



## motivman

HyperMatrix said:


> It just depends on how much voltage your chip needs to get to a certain clock. The higher the voltage required, the more power required to reach the same clock/performance. Other system specs also affect benchmark numbers. I also don't believe he's pulling 800W from just his card unless he's volt modding.
> 
> I hate to make things about appearance but I've seen a couple of his videos and I just hate looking at his stupid face. Makes it hard to take anything he says seriously.


Lol, he might not be the expert and "guru" he tries to portray himself as, but I must admit that some of his videos are entertaining as hell. I can listen to Framechasers and Buildzoid ramble all day and not get bored....

Edit: Buildzoid is a guru at his craft; that man knows what the hell he is talking about. I wonder what his background is? He must be some kind of engineer or something...


----------



## HyperMatrix

motivman said:


> Lol, he might not be the expert and "guru" he tries to portray himself as, but I must admit that some of his videos are entertaining as hell. I can listen to Framechasers and Buildzoid ramble all day and not get bored....
> 
> Edit: Buildzoid is a guru at his craft; that man knows what the hell he is talking about. I wonder what his background is? He must be some kind of engineer or something...


Please don't compare buildzoid to framechasers. Haha. Despite not always agreeing with buildzoid...he's speaking from a technical standpoint and he understands his craft. Framechasers...doesn't have a craft. Haha. He's like that Moore's Law is Dead guy. He's right once in a while. And some of their videos can have useful information in them. But it's just as bad as watching Linus explain how amazing 8K gaming is with the RTX 3090 on his brand new $30k OLED courtesy of Nvidia.  They're either talking out of their bums because they don't know any better, or they're intentionally lying because they're hoping you don't know any better. And I'm not sure which of those is worse.


----------



## DrunknFoo

HyperMatrix said:


> Please don't compare buildzoid to framechasers. Haha. Despite not always agreeing with buildzoid...he's speaking from a technical standpoint and he understands his craft. Framechasers...doesn't have a craft. Haha. He's like that Moore's Law is Dead guy. He's right once in a while. And some of their videos can have useful information in them. But it's just as bad as watching Linus explain how amazing 8K gaming is with the RTX 3090 on his brand new $30k OLED courtesy of Nvidia.  They're either talking out of their bums because they don't know any better, or they're intentionally lying because they're hoping you don't know any better. And I'm not sure which of those is worse.


Framechasers is not logical. I've only seen two of his vids, and I question his content and unfounded conclusions.... Just not for me... only because some of the claims and statements he is fixated on had been proven wrong by many even before he uploaded the vid. I.e. the FTW hardware issue... (there isn't one).

Buildzoid's content is good, but yes, his PCB breakdowns and analysis from paper/photos can sometimes be too critical, to the point of being 'misleading'.


----------



## bmgjet

Framechasers just straight out lies to get clickbait titles and try to impress noobs.
Then deletes your comments and bans you from his groups when you provide proof lol.
I called him out on how he was getting more than 20 amps through the 20A fuses on his card. His response at first was that his card didn't have fuses, so I posted screenshots from one of his really early videos tearing down the card and highlighted the fuses in it. Bam, I was gone.
My XC3 card blew a 20A fuse at 520W, yet he was claiming 600W through his.
But then you could also see, in the corner of his screen while he was calling out that he was hitting over 600W, that GPU-Z listed 240W, and with his 5 mOhm stacks you'd double that number to 480W.
Same with him trying to claim his total power draw from the PSU as GPU-only power, when you could see in HWiNFO that his CPU was pulling 130W, and that pulls from the 12V rail over the EPS plugs lol.
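bmgjet's 240W-to-480W correction follows directly from how shunt stacking fools the power monitor, which still divides the measured voltage drop by the stock shunt value. A quick sketch (the 5 mOhm stock shunt is the usual figure for these boards, assumed here):

```python
def true_power(reported_w: float, r_stock: float, r_stack: float) -> float:
    """Scale a GPU-Z reading back up when a shunt is stacked in parallel.

    The monitor still assumes r_stock, but current actually flows through
    the parallel combination, so it under-reads by r_effective / r_stock.
    """
    r_effective = (r_stock * r_stack) / (r_stock + r_stack)  # parallel combo
    return reported_w * (r_stock / r_effective)

# A 5 mOhm stack on a 5 mOhm stock shunt halves the effective resistance,
# so GPU-Z showing 240 W means ~480 W real draw -- exactly the doubling
# bmgjet applies above:
print(true_power(240, 5, 5))  # -> 480.0
```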


----------



## Falkentyne

bmgjet said:


> Framechasers just straight out lies to get clickbait titles and try to impress noobs.
> Then deletes your comments and bans you from his groups when you provide proof lol.
> I called him out on how he was getting more than 20 amps through the 20A fuses on his card. His response at first was that his card didn't have fuses, so I posted screenshots from one of his really early videos tearing down the card and highlighted the fuses in it. Bam, I was gone.
> My XC3 card blew a 20A fuse at 520W, yet he was claiming 600W through his.
> But then you could also see, in the corner of his screen while he was calling out that he was hitting over 600W, that GPU-Z listed 240W, and with his 5 mOhm stacks you'd double that number to 480W.
> Same with him trying to claim his total power draw from the PSU as GPU-only power, when you could see in HWiNFO that his CPU was pulling 130W, and that pulls from the 12V rail over the EPS plugs lol.


That framechasers guy is a weirdo. Yet he claims to be married and living with his wife? Who the hell would want to live with a doofus like him?
Maybe his "wife" is his cat? Did anyone see her in his videos?


----------



## pat182

bmgjet said:


> Framechasers just straight out lies to get clickbait titles and try to impress noobs.
> Then deletes your comments and bans you from his groups when you provide proof lol.
> I called him out on how he was getting more than 20 amps through the 20A fuses on his card. His response at first was that his card didn't have fuses, so I posted screenshots from one of his really early videos tearing down the card and highlighted the fuses in it. Bam, I was gone.
> My XC3 card blew a 20A fuse at 520W, yet he was claiming 600W through his.
> But then you could also see, in the corner of his screen while he was calling out that he was hitting over 600W, that GPU-Z listed 240W, and with his 5 mOhm stacks you'd double that number to 480W.
> Same with him trying to claim his total power draw from the PSU as GPU-only power, when you could see in HWiNFO that his CPU was pulling 130W, and that pulls from the 12V rail over the EPS plugs lol.


Wait, do only EVGA cards have fuses, or the Strix too? Should I be worried about blowing a fuse on the 520W bios?


----------



## HyperMatrix

Falkentyne said:


> That framechasers guy is a weirdo. Yet he claims to be married and living with his wife? Who the hell would want to live with a doofus like him?
> Maybe his "wife" is his cat? Did anyone see her in his videos?


His wife's probably 15 and locked in the basement.


----------



## Nizzen

motivman said:


> Lol, He might not be such an expert and "guru" like he tries to portray himself, but I must admit that some of his videos are entertaining as hell. I can listen to framechasers and buildzoid ramble all day and not get bored....
> 
> Edit: Buildzoid is a guru at his craft, that man know what the hell he is talking about. I wonder what his background is? He must be some kind of engineer or something...


Buildzoid is actually a noob at overclocking CPU and memory compared to many people here on this forum. Has anyone seen him testing his overclock with anything other than just a boot?
I have yet to see any good stable memory benchmarks from him....

This is my opinion, and I may be wrong. If I'm wrong, please prove me wrong


----------



## 414347

HyperMatrix said:


> His wife's probably 15 and locked in the basement.


Not a cool comment, not cool at all.


----------



## Nizzen

HyperMatrix said:


> His wife's probably 15 and locked in the basement.


She is 41 years old. Anything else you want to know?


----------



## lolhaxz

When shunting the 3090 Strix, if I simply shunt everything except the PCIe slot with 10 mOhm shunts (i.e., stacked), what will the end result be?

The way I understood it is that the Strix specifically shouldn't be limited by the PCIe slot, because it prioritises the 8-pins?


----------



## HyperMatrix

Nizzen said:


> She is 41 years old. Anything else you want to know?


That's already more than I'd like to know. Haha.


----------



## Nizzen

lolhaxz said:


> When shunting the 3090 Strix, if I simply shunt everything except the PCIe slot with 10 mOhm shunts (i.e., stacked), what will the end result be?
> 
> The way I understood it is that the Strix specifically shouldn't be limited by the PCIe slot, because it prioritises the 8-pins?


I have shunted EVERY shunt with 8 mOhm. It works perfectly. End result is: you don't need any more power for the stock voltage of 1.1V.


----------



## bmgjet

pat182 said:


> Wait, does evga only have fuses or strix too? Should i be worried to blow a fuse on the 520 bios


EVGA, MSI, Gigabyte.
I've provided images of them on the shunt-mod tool page.








GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily. (github.com)





----

No need to make personal attacks on the guy. If he has a wife or not who cares keep it to computer stuff.




lolhaxz said:


> When shunting the 3090 Strix, if I simply shunt everything except the PCIe slot with 10 mOhm shunts (i.e., stacked), what will the end result be?
> 
> The way I understood it is that the Strix specifically shouldn't be limited by the PCIe slot, because it prioritises the 8-pins?


You need to do them all, since the cards all have hardware balancing.
If you shunt mod the plugs only, you'll get maybe another 20-40W before you hit the power limit on the slot, GPU chip, or SRC.


----------



## HyperMatrix

lolhaxz said:


> When shunting the 3090 Strix, if I simply shunt everything except the PCIe slot with 10 mOhm shunts (i.e., stacked), what will the end result be?
> 
> The way I understood it is that the Strix specifically shouldn't be limited by the PCIe slot, because it prioritises the 8-pins?


I don't remember the power draw with the stock bios but with the EVGA XOC bios it was reporting just 50W draw on PCIE slot. So you'd have additional room to draw power from it without shunting.

If this was only happening with the XOC bios then it would indicate load balancing in the form of 25W on PCIE Slot for each 8-pin connector drawing full power. And since one of the connectors was showing basically 0W draw on that bios, you'd be at just 50W draw on PCIE slot and about 330W max draw for the card as a whole.




bmgjet said:


> You need to do them all, since the cards all have hardware balancing.
> If you shunt mod the plugs only, you'll get maybe another 20-40W before you hit the power limit on the slot, GPU chip, or SRC.


Sounds like something FrameChasers would say.  Read my comment above regarding Strix with XOC bios.


----------



## techenth

Nizzen said:


> Buildzoid is actually a noob at overclocking CPU and memory compared to many people here on this forum. Has anyone seen him testing his overclock with anything other than just a boot?
> I have yet to see any good stable memory benchmarks from him....
> 
> This is my opinion, and I may be wrong. If I'm wrong, please prove me wrong


Agree with this; he's just a kid that spent some time in the BIOS trying different settings, not actually knowing what they do.
You can pinpoint the obvious mistakes he's making, and spot him not getting a POST in the vids, if you watch carefully.


----------



## Falkentyne

lolhaxz said:


> When shunting the 3090 Strix, if I simply shunt everything except the PCIe slot with 10 mOhm shunts (i.e., stacked), what will the end result be?
> 
> The way I understood it is that the Strix specifically shouldn't be limited by the PCIe slot, because it prioritises the 8-pins?


It will throw off your normalized TDP%.
There is still power balancing going on.
Just throw a 15 mOhm on the PCIe slot and call it a day. 

A too-_low_ PCIe slot reading reported to the vbios can still throttle you just as heavily as a too-high one, because then your 8-pins will read too high relative to your PCIe slot, and you get a speeding ticket for that. Also, it seems the PCIe slot shunt has continuity with the MVDDC shunt.
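Falkentyne's point about normalized TDP% can be pictured as a max over per-rail budgets: whichever monitored rail sits highest relative to its own limit decides the throttle, regardless of total board power. The budget and draw numbers below are illustrative only, not values from any specific bios:

```python
# Toy model of per-rail power limiting: each rail is compared against its
# own budget, and the worst offender (highest normalized fraction) sets
# the throttle even when total board power is under the slider.
budgets_w = {"8-pin #1": 170, "8-pin #2": 170, "8-pin #3": 170, "PCIe slot": 66}
draw_w    = {"8-pin #1": 172, "8-pin #2": 150, "8-pin #3": 148, "PCIe slot": 40}

normalized = {rail: draw_w[rail] / budgets_w[rail] for rail in budgets_w}
limiting_rail = max(normalized, key=normalized.get)

# One rail barely over its budget throttles the card even though the
# other rails (and the total) have headroom to spare:
print(limiting_rail, f"{normalized[limiting_rail]:.0%}")
```

This is why skewing one shunt too far relative to the others can throttle you just as hard as not modding at all: the limiter reacts to the most out-of-balance rail, not the sum.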


----------



## motivman

Nizzen said:


> Buildzoid is actually a noob at overclocking CPU and memory compared to many people here on this forum. Has anyone seen him testing his overclock with anything other than just a boot?
> I have yet to see any good stable memory benchmarks from him....
> 
> This is my opinion, and I may be wrong. If I'm wrong, please prove me wrong







I copied his settings down to a T for my system (same exact hardware), and my system has been rock stable! It even passed a 12-hour RealBench test. Before finding that video, I couldn't get my RAM to POST past 3600 MHz with decent timings. So I am a Buildzoid believer, till proven otherwise. As for Framechasers, he is entertaining, but I take what he says with a grain of salt. When he posted his benchmarks for his Strix, I posted a comment comparing my numbers to his (I beat him in all benchmarks but one, Time Spy Extreme), and he "hid" my comment, SMH. I guess he didn't want his subscribers to see that a 2x8-pin card destroyed his highly modified "fastest 6900XT killer version 4.0 Strix".


----------



## HyperMatrix

Falkentyne said:


> It will throw off your normalized TDP%.
> There is still power balancing going on.
> Just throw a 15 mOhm on the PCIe slot and call it a day.
> 
> A too-_low_ PCIe slot reading reported to the vbios can still throttle you just as heavily as a too-high one, because then your 8-pins will read too high relative to your PCIe slot, and you get a speeding ticket for that. Also, it seems the PCIe slot shunt has continuity with the MVDDC shunt.


So are you saying in the case of the Strix with the XOC bios, which shows 1W on one power connector and 150W on each of the others (although I think it was 150-170W if I remember correctly), and just 50W on the PCIE slot, if you were to just shunt the 2 connectors that are drawing full power, that you'd still be limited somewhere due to load balancing? 

If that were true, wouldn't the performance of the XOC bios be lower than the stock bios since as you said, it's seeing 0 draw on one power connector? If there's any throttling happening as a result of that, we should see it in performance. But in fact I got better results with that bios than with the stock bios. If I'm wrong, let me know so I can apologize to bmgjet.


----------



## motivman

NewUser16 said:


> Not cool comment, not cool at all.


LOLOLOLOL, is that you framechasers? watsup bro!


----------



## lolhaxz

At the 390-400W stock power limit (100%), the PCIe slot (or PEX, as NVIDIA seems to refer to it) is drawing 36W on average.

At 480W (123%), the PCIe slot is drawing 40W on average.

That in itself does not make a lot of sense if we try to extrapolate some kind of percentage, i.e. that 8% will always be drawn from the PCIe slot.

I'd rather just leave the PCIe slot alone, frankly, and I also only have 10 mOhm resistors.
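Running lolhaxz's two data points through the ratio he's describing shows why a fixed percentage doesn't fit (slot and total wattages are from his post; 390W is taken as the midpoint of the stock range):

```python
# If slot draw were a constant share of board power, both operating
# points would produce the same fraction; instead the fraction drops
# as the total limit rises.
points = {"stock ~390 W limit": (36, 390), "480 W limit": (40, 480)}
for label, (slot_w, total_w) in points.items():
    print(f"{label}: slot = {slot_w / total_w:.1%} of board power")
# ~9.2% at stock vs ~8.3% raised: the slot behaves more like a
# near-fixed draw than a proportional split.
```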

480W Limit:










~390W Limit:


----------



## Nizzen

motivman said:


> I copied his settings down to a T for my system (same exact hardware), and my system has been rock stable! It even passed a 12-hour RealBench test. Before finding that video, I couldn't get my RAM to POST past 3600 MHz with decent timings. So I am a Buildzoid believer, till proven otherwise. As for Framechasers, he is entertaining, but I take what he says with a grain of salt. When he posted his benchmarks for his Strix, I posted a comment comparing my numbers to his (I beat him in all benchmarks but one, Time Spy Extreme), and he "hid" my comment, SMH. I guess he didn't want his subscribers to see that a 2x8-pin card destroyed his highly modified "fastest 6900XT killer version 4.0 Strix".


All these results are average at best... Just saying 
I don't care about either Buildzoid or Framechasers....


----------



## Falkentyne

HyperMatrix said:


> So are you saying in the case of the Strix with the XOC bios, which shows 1W on one power connector and 150W on each of the others (although I think it was 150-170W if I remember correctly), and just 50W on the PCIE slot, if you were to just shunt the 2 connectors that are drawing full power, that you'd still be limited somewhere due to load balancing?
> 
> If that were true, wouldn't the performance of the XOC bios be lower than the stock bios since as you said, it's seeing 0 draw on one power connector? If there's any throttling happening as a result of that, we should see it in performance. But in fact I got better results with that bios than with the stock bios. If I'm wrong, let me know so I can apologize to bmgjet.


Yes. Take a look at this. 

My card can pull up to 600W (checked via Kill-A-Watt) before 8-pin #1 really starts limiting it (that's as close as I could get them balanced), I think at about 170W in GPU-Z.

Superposition 1080p Extreme: NO throttling.










Superposition 4k custom:
Nonstop throttling









The only real difference in what is reported is that the SRC is higher at 4K, but that wouldn't be the cause of the throttling.
Something not shown in GPU-Z is causing it. Look at the _Normalized_ TDP%: it's exceeding the 114% TDP slider.
An internal rail is throttling somewhere, and no one knows what it is....


----------



## motivman

Nizzen said:


> All this results is average at best... Just saying
> I don't care about either Buildzoid or Framechasers....


If you can help me overclock this RAM to 4400 on my system, I would highly appreciate it, LOL. The best I can get this RAM to POST at is 4266 on this Unify board. Anyways, this is a 3090 thread, so that is that... haha.


----------



## DrunknFoo

Well, I wouldn't say...


bmgjet said:


> Framechasers just straight out lies to get clickbait titles and try to impress noobs.
> Then deletes your comments and bans you from his groups when you provide proof lol.
> I called him out on how he was getting more than 20 amps through the 20A fuses on his card. His response at first was that his card didn't have fuses, so I posted screenshots from one of his really early videos tearing down the card and highlighted the fuses in it. Bam, I was gone.
> My XC3 card blew a 20A fuse at 520W, yet he was claiming 600W through his.
> But then you could also see, in the corner of his screen while he was calling out that he was hitting over 600W, that GPU-Z listed 240W, and with his 5 mOhm stacks you'd double that number to 480W.
> Same with him trying to claim his total power draw from the PSU as GPU-only power, when you could see in HWiNFO that his CPU was pulling 130W, and that pulls from the 12V rail over the EPS plugs lol.


Lmao! He banned u? Bahahahabaha

That's fuxkin hilarious to read


----------



## originxt

Got a new FTW3 non-Ultra from EVGA via RMA and it seems to take the 500W bios. Maybe if the temps were lower I could push the card more. I'd try the KP bios, but I feel like it would get too hot. Taiwan card.


----------



## motivman

originxt said:


> Got a new FTW3 non-Ultra from EVGA via RMA and it seems to take the 500W bios. Maybe if the temps were lower I could push the card more. I'd try the KP bios, but I feel like it would get too hot. Taiwan card.
> 
> View attachment 2467936


These EVGA FTW3 cards seem weak TBH. 14.2k in PR is my score with no overclock AT ALL on my reference shunted card, on an EK waterblock, on the stock PNY bios. My max power draw on that bios is about 500W.


----------



## originxt

motivman said:


> These EVGA FTW3 cards seem weak TBH. 14.2k in PR is my score with no overclock AT ALL on my reference shunted card, on an EK waterblock, on the stock PNY bios. My max power draw on that bios is about 500W.


What are your temps? Mine was hitting 70-71 usually between runs.


----------



## motivman

originxt said:


> What are your temps? Mine was hitting 70-71 usually between runs.


I am on water, but I run my fans really, really low except when benching. Usually around 55°C. Here is a screenshot I saved of my PR run with zero overclock on the KP 520W bios. On my stock PNY bios I score a little lower... between 14.1k and 14.2k


----------



## originxt

motivman said:


> I am on water, but I run my fans really, really low except when benching. Usually around 55°C. Here is a screenshot I saved of my PR run with zero overclock on the KP 520W bios. On my stock PNY bios I score a little lower... between 14.1k and 14.2k
> 
> View attachment 2467937


Once I get a block, I'll do a comparison test and see how the numbers change. I don't see a point for the KP bios yet for myself since my temps seem to be high as is.


----------



## bmgjet

motivman said:


> I am on water, but I run my fans really, really low except when benching. Usually around 55°C. Here is a screenshot I saved of my PR run with zero overclock on the KP 520W bios. On my stock PNY bios I score a little lower... between 14.1k and 14.2k


The Kingpin bios has a higher base clock, so that would be like starting your PNY card with a +125 core offset.


----------



## originxt

bmgjet said:


> The Kingpin bios has a higher base clock, so that would be like starting your PNY card with a +125 core offset.


I was gonna mention that but wasn't 100% sure. I think the base boost clock was 1920 or so.


----------



## jura11

Thanh Nguyen said:


> What is the best block in your opinion? I had the Aquacomputer block for the reference board and the temps are almost the same as the Alphacool. Worth giving Heatkiller a chance?


Aquacomputer has been my favorite for a while. I used their block on a Zotac RTX 2080 Ti AMP, and comparing that block to the EK Vector RTX 2080 Ti block, temperatures were 3-5°C lower with the Aquacomputer.

I used a Heatkiller IV block on the same GPU (Zotac RTX 2080 Ti AMP), and the Heatkiller ran 1-2°C hotter than the Aquacomputer Kryographics block.

On the current generation of GPUs, Bykski seems to be a great waterblock for the money, and Alphacool seems good too. EKWB is average, as usual. Not sure about Heatkiller; I would only guess it will be on par with Aquacomputer.

I haven't tried Bitspower yet this generation, only on the previous generation of RTX cards, and I wasn't very impressed with the temperatures back then. The same applies to Phanteks blocks; temperatures were average.

Hope this helps 

Thanks, Jura


----------



## DrunknFoo

originxt said:


> Once I get a block, I'll do a comparison test and see how the numbers change. I don't see a point for the KP bios yet for myself since my temps seem to be high as is.


There really is no point unless you can keep temps down.

I'm running the 520W bios simply because it works; I just have to lower the power % for now.

Switching to LM helped drop my peak by about 9°C, but regardless, I'm in need of a block.


----------



## MacMus

NewUser16 said:


> Newegg does have returns, but unless the card is defective, they will charge you a whopping 20% restocking fee


When I clicked refund on the page, it came back with the full amount... if the product is not opened.
Don't they include this 20% in the initial quote?


----------



## changboy

Can someone with an RTX 3090 and Far Cry 5 run the benchmark test at 4K max + HD textures and post the result like mine:


----------



## HyperMatrix

Just some clock speed testing in Valley with FTW3 Hybrid:


Card can do +1350 MHz (22,204 MHz effective) on memory without artifacting. Can do +1400 but get artifacts.
Can do 2055 MHz at 0.95V
Can do 2130 MHz at 1.0v

Card still throttles around the 430-440W mark. That's been my biggest issue. Power limit. XOC and KingPin bios don't do anything. Strix bios did nothing either except one power connector doesn't report usage and I lose fan speed controls.

2130 at 1v is looking like a pretty decent card though...


----------



## MacMus

Could someone link me to a tested bios for the MSI Ventus with the highest wattage?


----------



## Falkentyne

HyperMatrix said:


> Just some clock speed testing in Valley with FTW3 Hybrid:
> 
> 
> Card can do +1350 MHz (22,204 MHz effective) on memory without artifacting. Can do +1400 but get artifacts.
> Can do 2055 MHz at 0.95V
> Can do 2130 MHz at 1.0v
> 
> Card still throttles around the 430-440W mark. That's been my biggest issue. Power limit. XOC and KingPin bios don't do anything. Strix bios did nothing either except one power connector doesn't report usage and I lose fan speed controls.
> 
> 2130 at 1v is looking like a pretty decent card though...


Did you try flashing the XC3 bios? That's the one that is supposed to work.


----------



## MacMus

Sheyster said:


> Founder's is up to 400w. If that is good enough get one, it's an amazing card. Also, it uses the new 12-pin micro connector, not 2 x 8-pin.


You mean the bios or the card? The only 3090 I was able to get is the MSI Ventus... I want to squeeze everything out of it.
I can play around with shunting, but it would be good if someone had already done it on the MSI Ventus, to let me know what resistors to buy.

If I can avoid shunting by loading some bumped bios, that is fine too. Which one shall I use?


----------



## Alex24buc

Has anybody with the Palit Gaming Pro OC tried the new bios the manufacturer launched on their website 3 days ago, or know what it brings? I currently have the Gigabyte Gaming OC bios with 390W, and I don't want to change it unless the new Palit bios is better.


----------



## HyperMatrix

Falkentyne said:


> Did you try flashing the XC3 bios? That's the one that is supposed to work.


Do I still maintain the ability to set both the card fan and radiator fans to max 100%?


----------



## Falkentyne

HyperMatrix said:


> Do I still maintain the ability to set both the card fan and radiator fans to max 100%?


You would have to ask in or check the EVGA thread. I don't pay much attention to the non-technical ramble there, but most people seem to say the best results for the power limit come from the XC3 bios, due to the similarities in the PCBs. That's what I picked up over there. You do have dual bios, so there's no harm in just flipping the switch and recovering.
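For reference, the cross-flash procedure people in this thread follow is roughly the sequence below, using the nvflash build distributed on TechPowerUp. The ROM filenames are placeholders, and exact flag behavior can vary between nvflash versions, so treat this as a sketch rather than a recipe:

```shell
# Flip the card's dual-BIOS switch to the slot you're willing to
# overwrite, then from an elevated prompt:
nvflash64 --save backup.rom     # back up the vBIOS currently on the card
nvflash64 --protectoff          # disable the EEPROM write protection
nvflash64 -6 xc3_520w.rom       # flash; -6 overrides the board-ID mismatch
# Reboot when done. If it goes wrong, the other switch position still
# holds the untouched stock BIOS, which is the recovery path above.
```

The dual-BIOS switch is what makes this "no harm": one slot always keeps a known-good image while you experiment on the other.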


----------



## HyperMatrix

Falkentyne said:


> You would have to ask in or check the EVGA thread. I don't pay much attention to the non-technical ramble there, but most people seem to say the best results for the power limit come from the XC3 bios, due to the similarities in the PCBs. That's what I picked up over there. You do have dual bios, so there's no harm in just flipping the switch and recovering.


You're a god damn beauty. This worked perfectly. Not sure what the power limit is, but I ended up having to increase voltage a bit as I was crashing midway. Checking GPU-Z, I'm still power limited. No VRel/VOp. Did a run with +1200 on mem and 2130 MHz @ 1.025V to prevent crashing. Massively power-limited performance though; clocks were anywhere from 1980 to 2100 during the run. Scored 14247. Might not seem high, but remember I'm running a 6950X at 4.3GHz with 2933MHz RAM. And this is already 200 points higher than my Strix got. Temperature on the card maxed out at 51°C at the end of the run.


----------



## bmgjet

Since people are trying the XC3 bios on 3-plug cards, has anyone tried the KFA2 390W bios on them? It's the best bios for the XC3, since it has an identical VRM and DP port layout.








KFA2 RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## HyperMatrix

changboy said:


> Can someone with an RTX 3090 and Far Cry 5 run the benchmark test at 4K max + HD textures and post the result like mine:
> View attachment 2467940


I moved the game off of my NVMe raid so it's just on an old SSD. Could be why the minimum is lower. Or could be my CPU.


----------



## Sheyster

HyperMatrix said:


> If that were true, wouldn't the performance of the XOC bios be lower than the stock bios since as you said, it's seeing 0 draw on one power connector? If there's any throttling happening as a result of that, we should see it in performance. *But in fact I got better results with that bios than with the stock bios.* If I'm wrong, let me know so I can apologize to bmgjet.


I saw similar results as you. My second 4K optimized Superposition run with the 500W BIOS was good enough for #19 on the leaderboard a week ago. I'll test the 520W BIOS soon and give my feedback in this thread.


----------



## Sheyster

MacMus said:


> You mean BIOS or card? The only 3090 I was able to get is the MSI Ventus... I want to squeeze everything from it.
> I can play around with shunting, but it would be good to have someone who has done it on the MSI Ventus let me know which resistors to buy.
> 
> If I can avoid shunting by loading some bumped BIOS, that's fine too. Which one shall I use?


The FE card/BIOS when power limit is maxed out in Afterburner will scale up to 400W. That's what I meant. It's a perfect choice if you're happy with a 400W power limit on air. For the Ventus you should probably load the 390W Gigabyte BIOS.


----------



## changboy

HyperMatrix said:


> I moved the game off of my NVMe raid so it's just on an old SSD. Could be why the minimum is lower. Or could be my CPU.
> 
> View attachment 2467948


Which 3090 did you run that test on?


----------



## motivman

MacMus said:


> You mean BIOS or card? The only 3090 I was able to get is the MSI Ventus... I want to squeeze everything from it.
> I can play around with shunting, but it would be good to have someone who has done it on the MSI Ventus let me know which resistors to buy.
> 
> If I can avoid shunting by loading some bumped BIOS, that's fine too. Which one shall I use?


It's a 2x8-pin card. I tried looking for pics of the PCB on the internet, with no luck. Shunt everything with 5 mOhm and call it a day; it will most likely draw about 500-520W on the reference BIOS. If you don't want to shunt, the best BIOS would be the Gigabyte 390W one, though I'm not sure how the Gigabyte fan profile will affect the Ventus fans. I think the best option is to just shunt the card. If you are not confident in your soldering, use conductive paint; you will still get the same 500W with conductive paint.


----------



## Sheyster

Quick update: My 3090 FE is sold and gone, long live the Founders! In the end the extra HDMI 2.1 port and better OC headroom (silicon lottery) won me over, the Strix is staying.

The FE is an amazing card. I highly recommend it particularly if the card will be kept stock on air.


----------



## HyperMatrix

changboy said:


> Which 3090 did you run that test on?


FTW3 Hybrid.


----------



## bmgjet

The KFA2 390W BIOS is better than the Gigabyte Gaming OC one:
You don't lose the middle DP port,
It's made for an 18-phase card (Gigabyte's is the only one to use that 19-phase config on 2-plug cards),
Fans run at similar speeds.


----------



## jomama22

Sheyster said:


> Quick update: My 3090 FE is sold and gone, long live the Founders! In the end the extra HDMI 2.1 port and better OC headroom (silicon lottery) won me over, the Strix is staying.
> 
> The FE is an amazing card. I highly recommend it particularly if the card will be kept stock on air.


Funny, my experience is the complete opposite. Started with an FE, got a Strix; the Strix at 480W couldn't match my FE at 400W lol. The FE gets 22k graphics in TS, the Strix could only muster 21750.

Long live the strix lmao.


----------



## motivman

changboy said:


> Can someone with an RTX 3090 and Far Cry 5 run the benchmark at 4K max + HD textures and post the result like mine:
> View attachment 2467940


Here you go... man I haven't played this in a while, best HDR implementation in a PC game IMHO.


----------



## Sheyster

jomama22 said:


> Funny, my experience is the complete opposite. Started with an FE, got a Strix; the Strix at 480W couldn't match my FE at 400W lol. The FE gets 22k graphics in TS, the Strix could only muster 21750.
> 
> Long live the strix lmao.


The EVGA 500W BIOS did wonders for my Strix. It was very ho-hum with the stock 480W BIOS. I'm gonna try the 520W KPE BIOS a little later this evening.


----------



## changboy

motivman said:


> Here you go... man I haven't played this in a while, best HDR implementation in a PC game IMHO.
> 
> View attachment 2467960


Great result! What's your card, the OC on it, and your CPU?


----------



## motivman

changboy said:


> Great result! What's your card, the OC on it, and your CPU?


shunt modded reference card on water (ekwb) +180 core / +800 memory. 10900k @ 5.1ghz, DDR4 4266 @ 17-17-17-37 2T


----------



## HyperMatrix

So with up to about 60% GPU load I can run 2220MHz on this card. At 80% load it drops to 2205MHz. XC3 BIOS, so power consumption isn't accurate. Man, I wish I had this GPU die on the HydroCopper card. Once it arrives, I may end up swapping the shroud before selling it. Haha.

I think it could possibly keep that clock speed with shunt mods and proper cooling....


----------



## alitayyab

Alex24buc said:


> Has anybody with the Palit Gaming Pro OC tried the new BIOS the manufacturer launched on its website 3 days ago, or does anyone know what it brings? I currently have the Gigabyte Gaming OC BIOS with 390W and I don't want to change it unless the new Palit BIOS is better.


I have a 3090 Gaming Pro OC. As far as I have noticed, it only fixes the buggy customized fan curve when using ThunderMaster; I have not noticed anything else new. They have in fact released a new BIOS for their entire 3080 and 3090 series of cards.


----------



## Alex24buc

Thanks for this information. So the new official BIOS from Palit doesn't raise the TDP above 365W; I'll stick with the Gigabyte one at 390W. From what I have read, there isn't a BIOS better than the Gigabyte Gaming OC one.


----------



## Alex24buc

bmgjet said:


> The KFA2 390W BIOS is better than the Gigabyte Gaming OC one:
> You don't lose the middle DP port,
> It's made for an 18-phase card (Gigabyte's is the only one to use that 19-phase config on 2-plug cards),
> Fans run at similar speeds.


So for my Palit Gaming Pro OC, do you think it is better to flash this BIOS? I don't need the middle DP port I lost with the Gigabyte BIOS.


----------



## cakesg

Oh shoot, I realized I was running at PCIe 3.0 x8. I just got a new riser card to vertically mount the GPU; it's the Asus one. Is there any way to fix it running at x8, or should I buy a new riser card? I have a Maximus XII Extreme and used to use the DIMM.2 slot before I got my 3090, but I have since removed and disabled the DIMM.2 and the GPU is still running at x8.


----------



## mirkendargen

cakesg said:


> Oh shoot, I realized I was running at PCIe 3.0 x8. I just got a new riser card to vertically mount the GPU; it's the Asus one. Is there any way to fix it running at x8, or should I buy a new riser card? I have a Maximus XII Extreme and used to use the DIMM.2 slot before I got my 3090, but I have since removed and disabled the DIMM.2 and the GPU is still running at x8.


Running at 8x sounds more like a mobo BIOS/slot population thing than a riser card thing, but if you want to get a new one I'm using https://www.amazon.com/gp/product/B0898R2HBS right now at PCIe 4.0 16x with 0 issues, and I used https://www.amazon.com/gp/product/B01NH0GW7Z previously at PCIe 3.0 16x with no issues. I didn't try it at 4.0 but I had a feeling due to the length it wouldn't have worked and the new one definitely has far better shielding.


----------



## cakesg

mirkendargen said:


> Running at 8x sounds more like a mobo BIOS/slot population thing than a riser card thing, but if you want to get a new one I'm using https://www.amazon.com/gp/product/B0898R2HBS right now at PCIe 4.0 16x with 0 issues, and I used https://www.amazon.com/gp/product/B01NH0GW7Z previously at PCIe 3.0 16x with no issues. I didn't try it at 4.0 but I had a feeling due to the length it wouldn't have worked and the new one definitely has far better shielding.


Is there any troubleshooting I could do to determine whether it's a BIOS/slot population problem before I buy a new riser cable? I actually had 2 of these Asus cables, and neither of them worked at x16. When it did go to x16, it would be after a hard reset, and the computer would shut off after a few minutes and reboot into x8.


----------



## mirkendargen

cakesg said:


> Is there any troubleshooting I could do to determine whether it's a BIOS/slot population problem before I buy a new riser cable? I actually had 2 of these Asus cables, and neither of them worked at x16. When it did go to x16, it would be after a hard reset, and the computer would shut off after a few minutes and reboot into x8.


Put a GPU directly in the slot and see if it runs at 16x there. If it's just a spare GPU to test with not the one you're using with the riser, also use it with the riser to see if it drops to 8x there. If you get 8x with it directly in the slot, it's the mobo and you probably have an m.2 SSD somewhere that steals lanes from the PCIe slot. The fact that it sometimes boots at 16x then crashes sounds more like the riser is the problem. It wouldn't even try to do 16x if it was BIOS/slot population.


----------



## Fufuuu

Hey,

I just bought a Zotac 3090 Trinity that I have added to my custom watercooling loop. I'm looking for a BIOS that raises the power limit to 390W and keeps all my DisplayPorts working. Is there a BIOS like that?

Thanks


----------



## bmgjet

Fufuuu said:


> Hey,
> 
> I just bought a Zotac 3090 Trinity that I have added to my custom watercooling loop. I'm looking for a BIOS that raises the power limit to 390W and keeps all my DisplayPorts working. Is there a BIOS like that?
> 
> Thanks


Back 1 page.



bmgjet said:


> The KFA2 390W BIOS is better than the Gigabyte Gaming OC one:
> You don't lose the middle DP port,
> It's made for an 18-phase card (Gigabyte's is the only one to use that 19-phase config on 2-plug cards),
> Fans run at similar speeds.


----------



## Fufuuu

bmgjet said:


> Back 1 page.


Thank you very much. I wanted a confirmation to be sure


----------



## mardon

Having cancelled one card due to buyer's remorse, I've sold a few bits and bobs and now ordered another.
I'm limited on card size due to being in an SFF case, but have 2x 240mm worth of radiator space (Ncase M1).

Do we have any owners on here with a 2-pin card, EK block, and 390W BIOS? What sort of clocks are you running in real-life workloads? Can you sustain 2100MHz (ish) in gaming loads?

If I do shunt, I think I'd only want to go up to 450/500W for block compatibility. Would I be better off completely swapping out the shunts? I've got a good electronics engineer near me who would do it.


----------



## GTANY

HyperMatrix said:


> So with up to about 60% GPU load I can run 2220MHz on this card. At 80% load it drops to 2205MHz. XC3 bios so power consumption isn't accurate. Man I wish I had this GPU die on the HydroCopper card.  Once it arrives, I may end up swapping the shroud before selling it. Haha.
> 
> View attachment 2467963
> 
> 
> View attachment 2467964
> 
> 
> I think it could possibly keep that clock speed with shunt mods and proper cooling....


This time you are lucky: once shunted and watercooled, your card will fly. Far better than your previous Strix.


----------



## Herald

Sheyster said:


> I saw similar results as you. My second 4K optimized Superposition run with the 500W BIOS was good enough for #19 on the leaderboard a week ago. I'll test the 520W BIOS soon and give my feedback in this thread.


Here is mine. #7 on the leaderboard


----------



## Biscottoman

jomama22 said:


> On the Strix card, you are probably safe using 5 mOhm on the PCIe, as it only pulls about 45W under full load @ 480W. Granted, I don't think you need the full 960W that gives you lmao. Using 10 mOhm on the PCIe would give you 720W on the stock BIOS, 750W with the 500W BIOS, and 780W on the 520W one.


I was just recalculating the total: using 10 mOhm on the PCIe + 5 mOhm on the others, shouldn't I reach 937.5W total (480W BIOS)? That's 67.5W on the PCIe (45 x 1.5) + 290W for each 8-pin. Maybe I'm making some mistake in the calculation of the total power.
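For anyone checking this math: stacking a shunt in parallel lowers the effective resistance, so the card under-reads current by the ratio of stock to combined resistance, and each rail's enforced limit scales by that same factor. A minimal sketch, assuming 5 mOhm stock shunts and the per-rail split quoted in this exchange (~45W PCIe slot, 145W per 8-pin on the 480W BIOS); the split is illustrative, not verified for any particular card:

```python
# Shunt-stack arithmetic sketch (assumptions: 5 mOhm stock shunts, and a
# 480W BIOS split as ~45W PCIe slot + 145W per 8-pin; split is illustrative).

def parallel(r1: float, r2: float) -> float:
    """Effective resistance of a shunt stacked (in parallel) on the stock one."""
    return (r1 * r2) / (r1 + r2)

def scale_factor(stock_mohm: float, stacked_mohm: float) -> float:
    """How much the card under-reads power: stock R over combined R."""
    return stock_mohm / parallel(stock_mohm, stacked_mohm)

print(scale_factor(5, 10))  # 1.5  (10 mOhm on 5 mOhm -> 3.33 mOhm)
print(scale_factor(5, 5))   # 2.0  (5 mOhm on 5 mOhm  -> 2.5 mOhm)

# 3x 8-pin card on a 480W BIOS, 10 mOhm stacked on the PCIe shunt and
# 5 mOhm stacked on each 8-pin shunt:
total = 45 * scale_factor(5, 10) + 3 * 145 * scale_factor(5, 5)
print(total)  # 937.5 (67.5W PCIe + 3 x 290W)
```

So the 937.5W figure checks out for a 3x 8-pin card under those assumptions.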


----------



## lolhaxz

10 mOhm shunts installed! Just your typical 25W iron / no flux.

Not shunting the PCI Express slot, so only 6 total. I guess we'll see what happens... should be fine on the Strix, I suspect, so long as you're not chasing something stupid like 700W (I will probably daily ~500W max, but want working power monitoring plus the headroom to bench occasionally).


----------



## Nizzen

lolhaxz said:


> 10 mOhm shunts installed! Just your typical 25W iron / no flux.
> 
> Not shunting the PCI Express slot, so only 7 total. I guess we'll see what happens... should be fine on the Strix, I suspect, so long as you're not chasing something stupid like 700W
> 
> View attachment 2467997
> 
> 
> View attachment 2467998


Nice and clean! Good job


----------



## Carls_Car

Do PSU cable extenders affect voltage much? Will I be losing any performance, or will the GPU pull what it needs? I'm not planning on going past my card's power limit, or if I do, not much past it.


----------



## Sheyster

Herald said:


> Here is mine. #7 on the leaderboard


Very nice! Watercooled I assume? I'll post my KPE BIOS 4K optimized screenie soon. My card is running the stock air cooler.


----------



## pat182

Sheyster said:


> Very nice! Watercooled I assume? I'll post my KPE BIOS 4K optimized screenie soon. My card is running the stock air cooler.


Your middle fan is gonna run 1000 RPM slower; can you confirm the 500W BIOS keeps all 3 of them at 3000 RPM? I might flash back to the 500W BIOS from the KPE, still debating what to do. The KPE works great when optimized: I can have 2100MHz stable at max power around 70C, but I'd probably get lower temps with the 500W BIOS and all 3 fans at 3000 RPM.


----------



## Pinto

Just to update my EK Strix coil whine issue: I managed to get rid of the whine, so now the block and the backplate are awesome! In gaming, GPU at 2115/2130, mem +1250, with only an 11°C delta temp (39°C max load).
For the whine, I simply replaced the memory pads used with the 3090 (1.5mm) with the 3080 ones (2mm) and added the included washers between the card and the backplate to compensate for the thickness of the pads. With every screw tight there's no whine. Simple and efficient!


----------



## Sheyster

pat182 said:


> Your middle fan is gonna run 1000 RPM slower; can you confirm the 500W BIOS keeps all 3 of them at 3000 RPM? I might flash back to the 500W BIOS from the KPE, still debating what to do. The KPE works great when optimized: I can have 2100MHz stable at max power around 70C, but I'd probably get lower temps with the 500W BIOS and all 3 fans at 3000 RPM.


Thanks for the heads up. I'll check fan RPM's while I play this morning. I'm going to wait for cooler ambient later this evening to bench with SP.


----------



## poncjusz

I've got 14945 with my Strix on air with the EVGA 500W BIOS: https://www.3dmark.com/3dm/54037035
I have an old CPU and I'm planning to upgrade my whole platform.
I want to get high-end cooling that's going to last me for years.
Could you guys point me to some good resources for newbies?
Do you think it's worth getting a chiller, or maybe multiple MO-RA3 radiators?


----------



## Nizzen

poncjusz said:


> I've got 14945 with my Strix on air with the EVGA 500W BIOS: https://www.3dmark.com/3dm/54037035
> I have an old CPU and I'm planning to upgrade my whole platform.
> I want to get high-end cooling that's going to last me for years.
> Could you guys point me to some good resources for newbies?
> Do you think it's worth getting a chiller, or maybe multiple MO-RA3 radiators?


The question is: WHAT do you actually want?
A chiller right beside the computer is LOUD!
MO-RA3 radiators outside the case have their own "problems". Not wife-friendly LOL
Do you even have a case, or is this computer only for benchmarking? So many questions.


----------



## poncjusz

Nizzen said:


> The question is: WHAT do you actually want?
> A chiller right beside the computer is LOUD!
> MO-RA3 radiators outside the case have their own "problems". Not wife-friendly LOL
> Do you even have a case, or is this computer only for benchmarking? So many questions.


The PC will be only for gaming.
I'm gonna put my PC in another room and run the cables through the wall, so I don't care about noise.
I just wanna get the highest performance.
I don't have a wife or a gf and don't care about aesthetics.


----------



## Nizzen

poncjusz said:


> The PC will be only for gaming.
> I'm gonna put my PC in another room and run the cables through the wall, so I don't care about noise.
> I just wanna get the highest performance.
> I don't have a wife or a gf and don't care about aesthetics.


Some guy at guru3d said it was impossible to buy 30-series cards in Poland. Where did you buy yours?

Watercool the card and put on the 520W BIOS. If you are using a chiller, just set it to 20C water temp and play games. Then you will get the most performance possible. There is no problem getting 2100+MHz on water without a chiller; at 30C water temp you are pretty much there with only 2x 360 radiators. Want the last 100MHz? Then go all in with a chiller and shunt modding.

Is this worth it? Maybe for some few.

You'd better have other fast hardware to balance the system.


----------



## poncjusz

Nizzen said:


> Some guy at guru3d said it was impossible to buy 30-series cards in Poland. Where did you buy yours?
> 
> Watercool the card and put on the 520W BIOS. If you are using a chiller, just set it to 20C water temp and play games. Then you will get the most performance possible. There is no problem getting 2100+MHz on water without a chiller; at 30C water temp you are pretty much there with only 2x 360 radiators. Want the last 100MHz? Then go all in with a chiller and shunt modding.
> 
> Is this worth it? Maybe for some few
> 
> You'd better have other fast hardware to balance the system


I've set email notifications on a few sites. I got an email and bought one within 1 minute, and I still got the last piece. The shop was electro.pl. It's easy to buy a 3090 in Poland, but not a Strix.




I've watched this video and was disappointed with the results.
Can a chiller actually hold the temperature in the 20C region?


----------



## jura11

mardon said:


> Having cancelled one card due to buyers remorse I've sold a few bits and bobs and now ordered another.
> I'm limited on card size due to being in a SFF case but have x2 240mm worth of radiator space (Ncase M1).
> 
> Do we have any owners on here with a 2pin card EK block and 390w bios? What sort of clocks are you running in real life workloads? Can you sustain 2100mhz (ish) in gaming loads?
> 
> If I do shunt I think I'd only want to go up to 450/500w for block compatibility would I be better of completely swapping out the shunts? I've got a good electronics engineer near me who would do it.


I'm not running an EKWB waterblock but a Bykski waterblock on my Palit RTX 3090 GamingPro with the KFA2 390W BIOS. In normal gaming workloads with GPU usage around 95-97%, yes, I can sustain 2115-2130MHz, although when I hit the power limit the clocks start to bounce from 1920MHz to 1995MHz or 2010MHz. The highest temperature I have seen on my loop is 36-38°C, and that's at 26-28°C ambient.

I would do the shunt mod first on 2x 8-pin GPUs because of the higher power limit; if you can, or have the chance to do the shunt mod, do it there.

Hope this helps 

Thanks, Jura


----------



## mardon

Pinto said:


> Just to update my EK Strix coil whine issue: I managed to get rid of the whine, so now the block and the backplate are awesome! In gaming, GPU at 2115/2130, mem +1250, with only an 11°C delta temp (39°C max load).
> For the whine, I simply replaced the memory pads used with the 3090 (1.5mm) with the 3080 ones (2mm) and added the included washers between the card and the backplate to compensate for the thickness of the pads. With every screw tight there's no whine. Simple and efficient!


Good to know. Got a 3090 reference and EK block coming next week.
If I have issues I'll give that a go.

So I guess 2100mhz is a big ask for a 390w bios in gaming loads then..


----------



## HyperMatrix

Anyone notice any weird voltage stepping with their card? I tested 0.950v for 2055MHz and it works fine without crashing as long as I'm under 54C which isn't an issue. I tested with 1.025v at 2130MHz and it also works fine without crashing under 54C (although it will throttle due to PL atm). But the problem I have is that I can't get anything to stick without crashing between 0.95v and 1.025v. Here's what I tried and has failed:

2100MHz at 1.000v and 1.012v
2085MHz at 0.975v and 0.981v

How can it be good at 0.950v/2055MHz but not OK with just 30MHz more at 0.981v? If this were the limit of the chip, it'd make sense. But it can do 1.025v at 2130MHz and not 1.012v at 2100MHz? I don't remember such behavior with my Strix. Is anyone else getting specific voltages your card likes that don't appear to correlate properly with clock speed?



mardon said:


> So I guess 2100mhz is a big ask for a 390w bios in gaming loads then..


Yes. Although power requirement for 2100MHz will vary from game to game, you definitely shouldn't expect 2100MHz at 390W. Depending on how good your chip is, you may be able to get away with it in certain games. On my card though, even when I'm in the 40-45C temperature range at 0.950v for 2055MHz, it's pulling over 400W in most games. So unless you've got a card that can do 2055MHz at 0.900v, I'd say it's out of the question. Fortunately, silver paint power mods exist.
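As a rough sanity check on why undervolting frees up so much of the power budget: to first order, core power scales with V² x f (the classic CMOS dynamic-power approximation, ignoring leakage and memory power, so treat it as ballpark only). A small illustrative sketch using the two stable points mentioned above:

```python
# First-order dynamic-power approximation: P is proportional to V^2 * f.
# Ignores leakage and memory power; ballpark only.

def relative_power(v_new: float, f_new: float, v_old: float, f_old: float) -> float:
    return (v_new / v_old) ** 2 * (f_new / f_old)

# 2055 MHz @ 0.950v vs 2130 MHz @ 1.025v:
r = relative_power(0.950, 2055, 1.025, 2130)
print(round(r, 2))  # 0.83, i.e. roughly 17% less core power for ~3.5% less clock
```

That ~17% saving is roughly what moves a card from pulling 480W+ down into reach of a lower power limit, which matches the behavior described above.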


----------



## mardon

jura11 said:


> I'm not running an EKWB waterblock but a Bykski waterblock on my Palit RTX 3090 GamingPro with the KFA2 390W BIOS. In normal gaming workloads with GPU usage around 95-97%, yes, I can sustain 2115-2130MHz, although when I hit the power limit the clocks start to bounce from 1920MHz to 1995MHz or 2010MHz. The highest temperature I have seen on my loop is 36-38°C, and that's at 26-28°C ambient.
> 
> I would do the shunt mod first on 2x 8-pin GPUs because of the higher power limit; if you can, or have the chance to do the shunt mod, do it there.
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for replying. I'll have to start doing some research into which shunts to get. I don't think I can go the piggyback route due to the block; it will have to be a replacement resistor. I'd best run the card stock for a month to check all is well before I do something permanent.

My other thought was to undervolt and see how far the silicon will let me go, to free up some of the power budget. Say 2100MHz @ 0.950v maybe?

Edit: that might be a bit optimistic.


----------



## jura11

mardon said:


> Thanks for replying. I'll have to start doing some research into which shunts to get. I don't think I can go the piggyback route due to the block; it will have to be a replacement resistor. I'd best run the card stock for a month to check all is well before I do something permanent.
> 
> My other thought was to undervolt and see how far the silicon will let me go, to free up some of the power budget. Say 2100MHz @ 0.950v maybe?
> 
> Edit: that might be a bit optimistic.


Hi there 

I would say it's a bit optimistic for a 2x 8-pin GPU, but maybe you will be lucky with the silicon gods, mate. I can do 2010MHz at 0.9v and 2055-2070MHz at 0.95v, and 2100MHz at 1v is possible, but this literally depends on the game. With a higher power limit BIOS or a shunt mod, I'm sure 2100MHz at 1v should be possible.

I will do a few tests later tonight and post my undervolting results.

Hope this helps 

Thanks, Jura


----------



## Herald

Sheyster said:


> Very nice! Watercooled I assume? I'll post my KPE BIOS 4K optimized screenie soon. My card is running the stock air cooler.


Nope, Gamerock OC on air with the kingpin bios.


----------



## mardon

jura11 said:


> Hi there
> 
> I would say it's a bit optimistic for a 2x 8-pin GPU, but maybe you will be lucky with the silicon gods, mate. I can do 2010MHz at 0.9v and 2055-2070MHz at 0.95v, and 2100MHz at 1v is possible, but this literally depends on the game. With a higher power limit BIOS or a shunt mod, I'm sure 2100MHz at 1v should be possible.
> 
> I will do a few tests later tonight and post my undervolting results.
> 
> Hope this helps
> 
> Thanks, Jura


OK great. Mine arrives early next week. Perhaps we'll be going down the shunt mod route at a similar time.
500W would be about my limit, as my PSU is a 750W SFX unit and I'm already pushing a 9900KS @ 5.2GHz 1.315v.


----------



## omarrana

Hello everyone,
I just got an RTX 3090 Founders Edition. Is there any BIOS I can flash for more power? Currently it's limited to 400W.


----------



## mardon

omarrana said:


> Hello everyone,
> I just got an RTX 3090 Founders Edition. Is there any BIOS I can flash for more power? Currently it's limited to 400W.


Nope. Shunt mod only.


----------



## GAN77

Is there a BIOS above 390 watts for a 2x 8-pin reference card, excluding the shunt mod?

Can someone share their best results at 390 watts with water cooling? Thanks!


----------



## MacMus

khunpunTH said:


> msi 3090 gaming x trio. stock cooling. 520w bios.


Can I use the same BIOS on an MSI Ventus 3090? Does it matter that the Ventus is a 2-pin card?


----------



## bl4ckdot

What's the downside of using the Strix BIOS on an FTW3 Ultra?


----------



## jura11

GAN77 said:


> Is there a BIOS above 390 watts for a 2x 8-pin reference card, excluding the shunt mod?
> 
> Can someone share their best results at 390 watts with water cooling? Thanks!


I don't think there is a BIOS above 390W for 2x 8-pin cards.

I'm running a Palit RTX 3090 GamingPro with a Bykski waterblock; in gaming, temperatures are 32°C (36-38°C in higher ambient temperatures) and gaming clocks are 2115-2130MHz in AC: Valhalla. I think 2145-2160MHz is possible.

I will do few more tests later tonight 

Hope this helps 

Thanks, Jura


----------



## MacMus

motivman said:


> It's a 2x8-pin card. I tried looking for pics of the PCB on the internet, with no luck. Shunt everything with 5 mOhm and call it a day; it will most likely draw about 500-520W on the reference BIOS. If you don't want to shunt, the best BIOS would be the Gigabyte 390W one, though I'm not sure how the Gigabyte fan profile will affect the Ventus fans. I think the best option is to just shunt the card. If you are not confident in your soldering, use conductive paint; you will still get the same 500W with conductive paint.


I can take photos of the PCB; however, will other components on this card have issues with 500W (caps, power phases, etc.)?
Is there any difference between 2-pin and 3-pin below 500W? Do I need any special cables for that?


----------



## HyperMatrix

bl4ckdot said:


> What's the downside of using the Strix BIOS on an FTW3 Ultra?


I lost both fan controls; well, the fans only ran at 33%. You also lose the ICX sensors. The XC3 BIOS gives you more power, and fan controls and sensors still work.


----------



## Falkentyne

bl4ckdot said:


> What's the downside of using the Strix BIOS on an FTW3 Ultra?


You don't use the Strix BIOS on the FTW3 Ultra;
you use the XC3 BIOS on the FTW3 Ultra.

You use the FTW3 Ultra XOC BIOS on the 3-pin Strix or MSI models.


----------



## Thanh Nguyen

Has anyone here got a shunted card with temps under 40C at 25-26C ambient? If yes, what's your cooling setup and block?


----------



## MacMus

bmgjet said:


> The KFA2 390W BIOS is better than the Gigabyte Gaming OC one:
> You don't lose the middle DP port,
> It's made for an 18-phase card (Gigabyte's is the only one to use that 19-phase config on 2-plug cards),
> Fans run at similar speeds.


Is this also applicable to the MSI Ventus? Can I use this BIOS on that card?

And a general note: how safe is it, and is there any procedure to back up my current BIOS in case I need it?
Do I just download the KFA2 390W BIOS and load it with a flasher, or how shall I do it? I will be doing this for the first time.


----------



## MacMus

jura11 said:


> Palit RTX 3090 GamingPro with KFA2 390W BIOS


I would like to try that with my MSI Ventus; it's also a 2-pin card. I am also putting it under water, so I don't care about fan profiles.
Could you let me know the procedure, where to get the BIOS, and how to do the backups?

I want to play with the BIOS first before I do the shunt mod.


----------



## Falkentyne

MacMus said:


> I can take photos of the PCB; however, will other components on this card have issues with 500W (caps, power phases, etc.)?
> Is there any difference between 2-pin and 3-pin below 500W? Do I need any special cables for that?


Just take close-up pictures of the shunts. It helps greatly if the shunts are flat (the other MSI model seemed to have flat flush shunts). But I think the MSI boards have 20 amp fuses, right?








GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily. (github.com)





A good 16 AWG PSU cable will be able to handle 300W through each 8-pin connector. The issue is whether the board has fuses or not. A 20 amp fuse on a 12V supply can only handle _240W_ through the connector before the fuse blows, so that means you're limited to <480W through two 8-pins, or <720W through three 8-pins (not including the PCIe slot).

If you have 20 amp fuses, I would not go ANY LOWER than a 10 mOhm stacked shunt if stacking, or a 3 mOhm shunt if desoldering and replacing (warning: if replacing with 3 mOhm shunts, do NOT raise the power limit % slider past 100%).

Someone somewhere said that they _shorted_ the fuses on their board with solder, which completely solved the fuse problem. I do _NOT_ know whether that means they applied solder to bridge the edges of the fuse, or that they REMOVED the fuse and replaced it with solder!
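For reference, the connector ceilings above fall straight out of P = I x V on the 12V rail. A quick sketch, using the fuse ratings quoted in this thread (not verified against any specific board):

```python
# Fuse-limit arithmetic: usable power per connector is P = I * V on the
# 12V rail. Fuse ratings below are the ones quoted in the thread.

RAIL_VOLTAGE = 12.0  # volts

def fuse_limit_watts(fuse_amps: float, volts: float = RAIL_VOLTAGE) -> float:
    return fuse_amps * volts

print(fuse_limit_watts(20))      # 240.0 W per 8-pin with a 20A fuse
print(2 * fuse_limit_watts(20))  # 480.0 W ceiling through two 8-pins
print(3 * fuse_limit_watts(20))  # 720.0 W ceiling through three 8-pins
print(fuse_limit_watts(10))      # 120.0 W for a 10A PCIe-slot fuse
```

Note the actual 8-pin connector spec is well below these fuse ceilings; the fuse just marks the point where the board cuts you off.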


----------



## jura11

MacMus said:


> I would like to try that with my MSI Ventus; it's also a 2-pin card. I am also putting it under water, so I don't care about fan profiles.
> Could you let me know the procedure, where to get the BIOS, and how to do the backups?
> 
> I want to play with the BIOS first before I do the shunt mod.


Hi there 

Here is the BIOS which I'm using on my Palit RTX 3090 GamingPro 









KFA2 RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com





For the flash procedure, please use this guide:









How To Flash A Different BIOS On Your 1080 Ti - www.overclock.net





Hope this helps 

Thanks, Jura


----------



## HyperMatrix

Falkentyne said:


> Someone somewhere said that they _shorted_ the fuses on their board with solder, which completely solved the fuse problem. I do _NOT_ know whether that means they applied solder to bridge the edges of the fuse, or that they REMOVED the fuse and replaced it with solder!


A fuse is designed specifically to blow when limits are exceeded, to prevent damage to other components down the line. Bridging it just bypasses the fuse, whether the fuse is still there or removed. Yes, your card will run, but you will no longer have that protection in place, and if something goes wrong you're looking at potentially irreparable damage. So I wouldn't recommend it. You could replace the fuse with a higher-limit one, though.


----------



## mirkendargen

HyperMatrix said:


> A fuse is designed specifically to blow when limits are exceeded, to prevent damage to other components down the line. Bridging it just bypasses the fuse, whether the fuse is still there or removed. Yes, your card will run, but you will no longer have that protection in place, and if something goes wrong you're looking at potentially irreparable damage. So I wouldn't recommend it. You could replace the fuse with a higher-limit one, though.


It's like replacing the fuses in an old house with a stack of pennies


----------



## HyperMatrix

mirkendargen said:


> It's like replacing the fuses in an old house with a stack of pennies


I recommend using silver paint between the pennies to reduce the chance of poor contact and you know...fire and death. Haha.


----------



## pat182

HyperMatrix said:


> I recommend using silver paint between the pennies to reduce the chance of poor contact and you know...fire and death. Haha.


So will the fuses blow with a 520W BIOS? What's the threshold? I'm not sure I understood whether the Strix has fuses or not.


----------



## HyperMatrix

pat182 said:


> So will the fuses jump with a 520 watt bios ? Whats the threshold? Not if if i understood if the strix have fuses or not


You shouldn’t have to touch the fuses. Each AIB chooses its own components, and the only one I’ve heard of that uses a lower-threshold fuse is the EVGA card, on the PCIe slot, with what I believe is a 10 amp fuse. 10 amps at 12V is 120W. Which means if you shunt your PCIe slot and happen to pull over 120W, there goes your fuse.
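The 120W figure is just Ohm's-law power arithmetic (P = V x I); as a quick sketch, taking the 10 A rating quoted above at face value (it is hearsay, not a verified spec):

```python
def fuse_ok(draw_watts, rail_volts=12.0, fuse_amps=10.0):
    """True if the draw stays under the fuse's power rating (P = V * I).

    The 10 A / 12 V defaults are the figures quoted in the post above,
    not verified board specs.
    """
    return draw_watts <= rail_volts * fuse_amps

print(fuse_ok(75))   # True: a normal slot draw is fine
print(fuse_ok(130))  # False: a shunt-modded draw over 120 W pops the fuse
```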


----------



## GRABibus

Who has tested the 500W FTW3 BIOS on the Asus Strix OC Gaming?
Is there any benefit to doing it?

Is it this one ?








[Link: EVGA RTX 3090 VBIOS (www.techpowerup.com)]


----------



## Falkentyne

GRABibus said:


> Who tested the FTW3 Bios 500W on the Asus Strix OC gaming ?
> Is there any benefit to do it ?


Higher TDP.


----------



## GRABibus

Falkentyne said:


> Higher TDP.


OK, this I know 😊.
I wanted to know if it helps sustain a stable frequency.

Is it this one?








[Link: EVGA RTX 3090 VBIOS (www.techpowerup.com)]





Or this one?








[Link: EVGA RTX 3090 VBIOS (www.techpowerup.com)]


----------



## 7empe

GRABibus said:


> ok, this I know 😊.
> I wanted to know if it helps to sustain stable frequency ?
> 
> is it this one ?
> 
> [Link: EVGA RTX 3090 VBIOS (www.techpowerup.com)]
> 
> or this one ?
> 
> [Link: EVGA RTX 3090 VBIOS (www.techpowerup.com)]


I was able to increase the initial frequency in Port Royal by +30 MHz, from 2175 MHz to 2205 MHz. The average maintained frequency during the test was higher too (from 2112 MHz to 2197 MHz). This translates to a score increase from 15136 to 15290. The GPU is on water.


----------



## GRABibus

7empe said:


> I was able to increase initial frequency in Port Royale by +30 MHz, from 2175 MHz to 2205 MHz. Average maintained frequency during the test was higher (from 2112 MHz to 2197 MHz). This translates to score increase 15136 -> 15290. GPU is on water.






OK, thanks.
And with stock cooling, no issues so far?


----------



## HyperMatrix

GRABibus said:


> ok thanks.
> And with stock cooling, no issues so far ?


With stock cooling your problem is that more power limit = more heat. More heat = quicker throttling. And there is no air cooled card capable of dissipating 500W of heat. That's why so many people are finding better results with undervolting. And the Strix OC doesn't have the greatest cooling solution IMO. But one advantage of using the EVGA XOC 500W bios on your Strix is that one of the power connectors doesn't report any power usage, making your card think it's well under its limit. This allows the card to maintain a higher voltage for longer. So you may be able to do some short term tests with better results using it. But after a few minutes your results will end up being even lower due to your card basically frying. I tested it and got to around 89C with my Strix OC on that bios before it finally crashed.


----------



## GRABibus

HyperMatrix said:


> With stock cooling your problem is that more power limit = more heat. More heat = quicker throttling. And there is no air cooled card capable of dissipating 500W of heat. That's why so many people are finding better results with undervolting. And the Strix OC doesn't have the greatest cooling solution IMO. But one advantage of using the EVGA XOC 500W bios on your Strix is that one of the power connectors doesn't report any power usage, making your card think it's well under its limit. This allows the card to maintain a higher voltage for longer. So you may be able to do some short term tests with better results using it. But after a few minutes your results will end up being even lower due to your card basically frying. I tested it and got to around 89C with my Strix OC on that bios before it finally crashed.


Thank you.
Which of the two BIOSes I linked above are we talking about?


----------



## 7empe

GRABibus said:


> thank you.
> Which Bios of both links I posted above are we talking about ?


Regarding the BIOSes, the difference is in the default fan curve: the first is normal, the second is OC, but from a TDP standpoint both are exactly the same. If you’re on air, you will hit the high 80s very fast by drawing all the juice the power limit allows, which will in fact limit some of the gains from the higher available power due to a temperature-based downclock of one or two frequency steps.


----------



## GRABibus

7empe said:


> Regarding bioses, the difference is in the default fan curve. First is a normal, second OC, but from TDP standpoint both are exactly the same. If you’re on air, you will hit high 80’s very fast by drawing all the juice allowed by Power Limit, which will in fact limit some gains from higher available power due to temperature-based downclock by a one or two frequency steps.


Thank you all.
I will first see what I can get from it without any flash.


----------



## HyperMatrix

GRABibus said:


> thank you all.
> I will then first see what I can get from It without any flash.


The best indication of how good a card is will be in seeing how it undervolts. My Strix wouldn't even do 2000MHz at 1.000v but my new FTW3 does 2055MHz at 0.950v. The higher the voltage you need to reach a certain frequency at lower levels, the sooner you'll run out of voltage even if you had unlimited power limit, which means lower maximum clock speed even under water. So test out how high you can clock at 0.9 and 0.95 and 1v. Should give you a good idea of the quality of your chip.


----------



## GRABibus

HyperMatrix said:


> The best indication of how good a card is will be in seeing how it undervolts. My Strix wouldn't even do 2000MHz at 1.000v but my new FTW3 does 2055MHz at 0.950v. The higher the voltage you need to reach a certain frequency at lower levels, the sooner you'll run out of voltage even if you had unlimited power limit, which means lower maximum clock speed even under water. So test out how high you can clock at 0.9 and 0.95 and 1v. Should give you a good idea of the quality of your chip.


OK.
I'm coming from a 2080 Ti.
Is the V/F curve available on these cards?


----------



## HyperMatrix

GRABibus said:


> ok.
> I come from 2080Ti.
> V/F curve is available on those cards ?


Yes, just use Afterburner to set it (Ctrl+F).


----------



## GRABibus

HyperMatrix said:


> Yes just use afterburner to set it (CTRL + F).
> 
> View attachment 2468064


ok, then as usual.


----------



## Falkentyne

HyperMatrix said:


> With stock cooling your problem is that more power limit = more heat. More heat = quicker throttling. And there is no air cooled card capable of dissipating 500W of heat. That's why so many people are finding better results with undervolting. And the Strix OC doesn't have the greatest cooling solution IMO. But one advantage of using the EVGA XOC 500W bios on your Strix is that one of the power connectors doesn't report any power usage, making your card think it's well under its limit. This allows the card to maintain a higher voltage for longer. So you may be able to do some short term tests with better results using it. But after a few minutes your results will end up being even lower due to your card basically frying. I tested it and got to around 89C with my Strix OC on that bios before it finally crashed.


Mine can.
My shunt modded 3090 FE can run at 540W sustained at 75C with the stock heatsink if I keep the ambients down.
I re-did the thermal pads with Thermalright Odyssey 1.5mm everywhere and am using Kryonaut Extreme.


----------



## indicajones

Just got a 3090 FTW3 Ultra Hybrid. The XOC BIOS on the EVGA forums doesn't recognize the card. Can anyone recommend a BIOS for this card? I would be beyond appreciative. I will continue to read through this forum post. Previously, on the Zotac 3090 I had, I used the Gigabyte BIOS, but that was for a 2-pin card. Thank you so much!!
FYI: the Hybrid seems to max out at 400W right now. 2100MHz @ 50-55C in an open-air case with fans manually set at 48%.


----------



## HyperMatrix

Falkentyne said:


> My shunt modded 3090 FE can run at 540W sustained at 75C with the stock heatsink if I keep the ambients down.


Pads and LM on GPU die must be very helpful then. I couldn't imagine any card being able to do that stock. I'm also afraid to ask what your ambient temp is. Even when I open up the windows and let it get cool, I'm only bringing ambient down to about 20C before I get cold myself. Haha.



indicajones said:


> just got a 3090 ftw ultra hybrid. XOC bios in EVGA forums doesnt recognize the card. Can anyone recommend a bios for this card? I would be beyond appreciative. I will continue to read through this forum post. Previously on the zotac 3090 card I had I used the gigabyte bios but that was for a 2 pin card. Thank you so much!!
> FYI: hybrid seems to max at 400W right now. 2100mhz @50-55C in an open air case with fans manual at 48%.


As Falk mentioned earlier, flash the XC3 BIOS to get somewhere around a 480-500W power limit. You'll still have full fan controls, and your temperature sensors will work too. If you want a little extra power headroom, disconnect the radiator fans and run them through your motherboard or a fan controller. You'll gain approximately 10-15W extra (well... at least you would if you were running the fans at 100%).


----------



## Falkentyne

HyperMatrix said:


> Pads and LM on GPU die must be very helpful then. I couldn't imagine any card being able to do that stock. I'm also afraid to ask what your ambient temp is. Even when I open up the windows and let it get cool, I'm only bringing ambient down to about 20C before I get cold myself. Haha.
> 
> 
> 
> as falk mentioned earlier, flash the XC3 bios to get somewhere around 480-500W power limit. You'll still have full fan controls and your temperature sensors will work too. If you want a little extra power headroom, disconnect the radiator fans and run them through your motherboard or fan controller. You'll gain approximately 10-15W extra (well...at least you would if you were running the fans at 100%).


Kryonaut Extreme, not LM.








[Link: Thermal Grizzly Kryonaut Extreme thermal paste (www.amazon.com)]





I'm too chicken to put LM on a $1500 "out of stock" video card. Was bad enough managing to get stable core deltas long term on my laptop.
I do have almost 150 grams of homemade Galinstan LM I can throw on stuff if I need it, and some nail polish left, but even though the FE 3090 is built like a tank, there's too many tiny little thingies all over the PCB for me to want a Conductive Ball of Doom running amok around my video card. At least the 842AR paint dries up....

And another problem with LM is if you have to disassemble the card to fix the shunts or some other thing, if it's been more than a day, you have to completely clean off the heat block of LM, since some gallium may have been absorbed and hardened spots can start forming, which gets extremely messy, which means possible thermal pad replacement may be needed, which means more money spent...Just not worth it.

Repasting Kryonaut Extreme = alcohol and a tissue: just wipe it, wipe again with a lint-free cloth, and done.


----------



## bl4ckdot

Falkentyne said:


> You don't use the Strix Bios on the FTW3 Ultra.
> You use the XC3 Bios on the FTW3 Ultra
> 
> You use the XOC (FTW3) Ultra Bios on the 3 pin Strix or MSI models.


Wait ... XC3 on FTW3 Ultra ? Am I missing something ? Whats the reasoning ?


----------



## Falkentyne

bl4ckdot said:


> Wait ... XC3 on FTW3 Ultra ? Am I missing something ? Whats the reasoning ?


The PCBs are basically the same, except for the 8-pin. So flashing an XC3 vbios on an Ultra means that the third 8-pin reports "0" watts, since the XC3 card only has two 8-pin connectors, but the hardware of the FTW3 is designed to use all three 8-pins. So the card will draw more power but can't report the power from the third 8-pin, since the vbios can't see it, which basically acts like the third 8-pin has a massive shunt mod on it. So you gain more power to use.
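As a rough sketch of why the unreported connector raises the effective limit: if the firmware only sums the rails it can see, the third 8-pin's draw never counts against the power limit. The rail names and wattages below are purely illustrative, not real telemetry:

```python
# Hypothetical per-rail draws in watts (illustrative numbers only).
rails = {"pcie_slot": 60, "8pin_1": 150, "8pin_2": 150, "8pin_3": 140}

# An XC3 vbios only knows about two 8-pins plus the slot:
monitored = ["pcie_slot", "8pin_1", "8pin_2"]

reported = sum(rails[r] for r in monitored)  # what the power limiter sees
actual = sum(rails.values())                 # what the PSU actually delivers
print(reported, actual)  # 360 500 -- the third 8-pin is invisible headroom
```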


----------



## indicajones

HyperMatrix said:


> Pads and LM on GPU die must be very helpful then. I couldn't imagine any card being able to do that stock. I'm also afraid to ask what your ambient temp is. Even when I open up the windows and let it get cool, I'm only bringing ambient down to about 20C before I get cold myself. Haha.
> 
> 
> 
> as falk mentioned earlier, flash the XC3 bios to get somewhere around 480-500W power limit. You'll still have full fan controls and your temperature sensors will work too. If you want a little extra power headroom, disconnect the radiator fans and run them through your motherboard or fan controller. You'll gain approximately 10-15W extra (well...at least you would if you were running the fans at 100%).


Thank you so much!! Really, really appreciate it!


----------



## indicajones

Falkentyne said:


> The PCB's are basically the same, except for the 8 pin. So flashing an XC3 vbios on an Ultra means that the third 8 pin reports "0" watts, since the XC3 card only has two 8 pin connectors, but the hardware of the FTW3 is designed to use all three 8 pins. So the card will draw more power but can't report the power from the third 8 pin since the vbios can't see it, which basically acts like the third 8 pin has a massive shunt mod on it. So you gain more power to use.


Thank you Falkentyne! You are everywhere man and always amazing! A true asset to the entire hardware community! If I ever win the lottery I'm sending you all the top computer hardware x12 for the rest of your life!


----------



## bl4ckdot

Falkentyne said:


> The PCB's are basically the same, except for the 8 pin. So flashing an XC3 vbios on an Ultra means that the third 8 pin reports "0" watts, since the XC3 card only has two 8 pin connectors, but the hardware of the FTW3 is designed to use all three 8 pins. So the card will draw more power but can't report the power from the third 8 pin since the vbios can't see it, which basically acts like the third 8 pin has a massive shunt mod on it. So you gain more power to use.


That's pretty clever. Is there any risk in doing that (like drawing too much power from the PCIe slot for some reason), or does it only affect the 8-pin side?
Is it also a "fix" for the XOC BIOS not working on some cards?


----------



## HyperMatrix

Falkentyne said:


> Kryonaut Extreme, not LM.
> 
> 
> 
> 
> 
> 
> 
> 
> [Link: Thermal Grizzly Kryonaut Extreme thermal paste (www.amazon.com)]
> 
> 
> 
> 
> 
> I'm too chicken to put LM on a $1500 "out of stock" video card. Was bad enough managing to get stable core deltas long term on my laptop.
> I do have almost 150 grams of homemade Galinstan LM I can throw on stuff if I need it, and some nail polish left, but even though the FE 3090 is built like a tank, there's too many tiny little thingies all over the PCB for me to want a Conductive Ball of Doom running amok around my video card. At least the 842AR paint dries up....
> 
> And another problem with LM is if you have to disassemble the card to fix the shunts or some other thing, if it's been more than a day, you have to completely clean off the heat block of LM, since some gallium may have been absorbed and hardened spots can start forming, which gets extremely messy, which means possible thermal pad replacement may be needed, which means more money spent...Just not worth it.
> 
> Repasting Kryonaut Extreme=alcohol and a tissue, just wipe it, wipe again with a lint free cloth, and done.


Haha, considering how much you pimp for MG silver paint, I'm surprised you didn't order some MG conformal coating. Coat the area around the die, and apply as thin a layer of LM as possible to both surfaces. Super easy to do. That's what I did when I put LM on my laptop CPU and GPU, which is even more tightly packed and sensitive. Haha, I'm just saying, if you've seen that much of a cooling performance uplift with just better thermal paste, then you should really see a nice drop in temps with LM. Also, you don't want just any LM. I'm not familiar with your homemade LM, but even CLU, which is well known and which I'd used for years, has only half the thermal conductivity of Conductonaut. And Conductonaut also has 5x the thermal transfer of Kryonaut Extreme.


----------



## WilliamLeGod

HyperMatrix said:


> Yes just use afterburner to set it (CTRL + F).
> 
> View attachment 2468064


You did the V/F curve totally wrong.


----------



## HyperMatrix

WilliamLeGod said:


> U did the v/f totally wrong


I'm locked in to my voltage and clock speed. What are you trying to achieve?


----------



## Falkentyne

bl4ckdot said:


> Thats pretty clever. Is there any risk doing that (like getting too much power from the PCIe slot for some reason ?) or does it only touch the 8pin part ?
> Is is also a "fix" to the XOC bios not working on some cards ?


You need to read the eVGA forums. Keep in mind I have an FE. I'm only reporting what other people have said. I can't really answer questions about cards I don't have.


----------



## mrpeters

Anyone tried this BIOS on the AORUS Master?









[Link: Gigabyte RTX 3090 VBIOS (www.techpowerup.com)]


----------



## Falkentyne

HyperMatrix said:


> Haha considering how much you pimp for MG silver paint, I'm surprised you didn't order some MG conformal coating. Coat the area around the die, and apply as thin of a layer of LM as possible to both surfaces. Super easy to do. That's what I did when I put LM on my laptop cpu and gpu which is even more tightly packed and sensitive.  Haha I'm just saying if you've seen that much of a cooling performance uplift with just better thermal paste, then you should really see a nice drop in temps with LM. Also you don't want just any LM. I'm not familiar with your homemade LM but even CLU which is well known and I'd used for years has only half the thermal conductivity of conductonaut. And conductonaut also has 5x the thermal transfer of the kryonaut extreme.


My homemade LM is at worst only 1-2C worse than Conductonaut. I've used both. (I got tired of paying highway-robbery prices for the commercial stuff.) And I can improve that by changing the indium-to-gallium ratio slightly, but it really makes very little difference. You can't change the laws of chemistry.

Conductonaut is only about 25-30 W/mK at absolute maximum, and probably closer to 25 W/mK than 30 W/mK. Galinstan proper is 16 W/mK. It goes against the laws of metallurgy for a eutectic alloy to have a W/mK HIGHER than its lowest-W/mK component.

Out of gallium, indium and tin, gallium is the most abundant component and also has the lowest W/mK of the bunch, at 40.6 W/mK. Therefore it is physically impossible for any LM to be higher than 40.6 W/mK, just by the laws of chemistry.

Take a look at this page to see how Gold Tin solder acts.









[Link: Thermal Conductivity of Solders (www.electronics-cooling.com)]







> A candidate to replace tin-lead (SnPb) solder is an alloy of tin (Sn), silver (Ag), and copper (Cu) termed SAC. Several variations of this alloy are available but the conductivity for all of them is approximately 60 W/mK at 25°C. Some data may be found with the disclaimer that it is an estimated value, but no details are given on the estimation method. It should be noted that using a “rule of mixtures” for estimating the thermal conductivity of a solder based on the pure thermal conductivity of the constituent element metals can lead to significant errors. For example, the thermal conductivity of AuSn (80/20) solder is 57 W/mK which is lower than the conductivity of either of the parent metals of gold (315 W/mK) or tin (66 W/mK).


Notice: gold: 315 W/mK. Tin: 66 W/mK. 80/20 gold-tin solder: 57 W/mK.
Therefore it's impossible for Conductonaut or any gallium-based LM to be above 40 W/mK.

I think someone, either on OCN or the NotebookReview forums, said that Conductonaut was actually tested by someone and came in far below 40 W/mK.

I use nail polish (I mentioned that). The MG conformal coating is expensive, although I've had it saved in my cart for years now.
I find Super 33+ tape works just as well (especially if you apply it on top of the dried nail polish), plus Super 33+ tape has so many other uses (like protecting the PCB when you're scraping stuff). What I do need to buy is some Kapton tape, though.
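The quoted AuSn figures make the article's point checkable by arithmetic: a naive weighted average ("rule of mixtures") badly overestimates the alloy's conductivity, which really does fall below both parent metals:

```python
# Thermal conductivities in W/mK, taken from the quoted article.
K_GOLD, K_TIN = 315.0, 66.0
MEASURED_AUSN = 57.0  # measured value for 80/20 AuSn solder

# Naive "rule of mixtures" estimate for 80% Au / 20% Sn:
estimate = 0.8 * K_GOLD + 0.2 * K_TIN  # ~265 W/mK, wildly high
print(estimate, MEASURED_AUSN)

# The real alloy conducts worse than EITHER parent metal:
print(MEASURED_AUSN < min(K_GOLD, K_TIN))  # True
```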


----------



## MacMus

jura11 said:


> I don't think there is a BIOS above 390W for 2x8-pin.
> I'm running a Palit RTX 3090 GamingPro with a Bykski waterblock; in gaming, temperatures are 32°C (36-38°C at higher ambient temperatures), with gaming clocks of 2115-2130MHz in AC: Valhalla. I think 2145-2160MHz is possible.


Someone said that on 2x8-pin they were able to pull 500W.


----------



## bmgjet

MacMus said:


> someone said on 2pin they were able to pull 500W


390W is the most on 2x8-pin without shunt modding.
I can pull 520W on mine on that BIOS with 15 mΩ shunts stacked.
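For anyone wondering how a 390W BIOS ends up delivering 520W: the controller infers power from the voltage drop across shunt resistors it assumes are stock, so soldering a second shunt in parallel scales the reading down. A sketch, assuming 5 mΩ stock shunts (a common value on these boards, but an assumption here; verify on your own card):

```python
# A 15 mOhm shunt stacked on an (assumed) 5 mOhm stock shunt forms a
# parallel pair, lowering the resistance the controller measures across.
R_STOCK = 0.005    # ohms, assumed stock shunt value
R_STACKED = 0.015  # ohms, shunt soldered on top

r_eff = (R_STOCK * R_STACKED) / (R_STOCK + R_STACKED)  # parallel: 3.75 mOhm
scale = r_eff / R_STOCK  # controller now reads only 75% of the true power

bios_limit_w = 390.0
true_draw_w = bios_limit_w / scale
print(true_draw_w)  # about 520 W actually drawn when the BIOS reads 390 W
```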


----------



## Sheyster

pat182 said:


> your mid fan gonna run 1000rpm slower, can you confirm the 500watts keep 3 of them at 3000rpm ? might flash back to 500watt from the KPE, still debating what to do, KPE works great when optimised. can have 2100mhz stable max power around 70c but would get lower temps probly with the 500watt with all 3 fans at 3000rpm


The 500W EVGA BIOS will run the fans at ~3000 RPM at 100%. You were right about the KPE 520W BIOS running the mid fan at ~2000 RPM at 100%.

So, bottom line is this: If you're on air with an ASUS Strix, stick to the 500W EVGA FTW3 OC BIOS. If you watercool your Strix, the KPE 520W is probably your best bet.


----------



## pat182

Sheyster said:


> The 500W EVGA BIOS will run the fans at ~3000 RPM at 100%. You were right about the KPE 520W BIOS running the mid fan at ~2000 RPM at 100%.
> 
> So, bottom line is this: If you're on air with an ASUS Strix, stick to the 500W EVGA FTW3 OC BIOS. If you watercool your Strix, the KPE 520W is probably your best bet.


Thanks for getting back to me. Yeah, I'm going to flash it down to 500W.


----------



## WilliamLeGod

HyperMatrix said:


> I'm locked in to my voltage and clock speed. What are you trying to achieve?


It's supposed to be +189 first, hit Apply, then lock a flat peak at 1v. If you compare this to what you did, you will see more power is used, hence more performance. What you did basically means the game will run at the stock offset and may randomly hit +189 at times; it does not always run at 1v +189, even though the screen shows it locked at 1v +189. This also explains why, in one specific game scene, if you do it your way you won't crash, but if you do +189 first and lock a flat 1v +189, you will crash: that is when the game "actually" runs at +189.


----------



## HyperMatrix

WilliamLeGod said:


> Its supposed to be +189 first, hit apply then Lock flat peak at 1v. If u conpare this to what ubdid u will se more power is used hence more performance. What u did badically means, game will run at stock offset and may randomly hit +189 at some times not that it always run at 1v 189 even though it shows on screen at lock 1v +189. This also explain why in 1 specific game screen, if u did yr way u wont crash, but u u +189 first and lock 1v flat 189, u will crash - it is when the game "actually" runs at +189


That would be incorrect my friend. If you're using "prefer maximum performance" for power management, and the game is actually using more than 40%~ GPU usage, it will always stick to the top clock/voltage. I'm sure your way works as well. But what I'm doing works for me and is quick and easy for on the fly manipulation.


----------



## WilliamLeGod

HyperMatrix said:


> That would be incorrect my friend. If you're using "prefer maximum performance" for power management, and the game is actually using more than 40%~ GPU usage, it will always stick to the top clock/voltage. I'm sure your way works as well. But what I'm doing works for me and is quick and easy for on the fly manipulation.


Nope bro, look at the difference in FPS; it's not because of Optimal or Prefer Maximum Performance. I see so many people doing this wrong without testing.


----------



## WilliamLeGod

HyperMatrix said:


> That would be incorrect my friend. If you're using "prefer maximum performance" for power management, and the game is actually using more than 40%~ GPU usage, it will always stick to the top clock/voltage. I'm sure your way works as well. But what I'm doing works for me and is quick and easy for on the fly manipulation.


The quickest way to test is using 3DMark. Try it my way and your way; there should be hundreds of points of difference.


----------



## HyperMatrix

WilliamLeGod said:


> The quickest way to test is using 3Dmark. Try it my way and yr way, should be hundreds of point difference


Well I'm always looking for ways to be wrong. Because it means I have found a way to improve. I'll try your theory out.


----------



## MacMus

bmgjet said:


> 390W is the most on 2X8pin without shunt modding.
> I can pull 520W on mine on that bios with 15 mohm shunts stacked.


What card and BIOS?


----------



## Falkentyne

WilliamLeGod said:


> Nope bro, see the difference in Fps, not because of Optimal or Prefer max performance. I see so many people doing this wrong without testing


Hi WilliamLeGod,
Can you please explain exactly what you're referring to and what we're supposed to be doing?
The performance can vary if you "lock" a voltage? Why?


----------



## HyperMatrix

WilliamLeGod said:


> The quickest way to test is using 3Dmark. Try it my way and yr way, should be hundreds of point difference


No difference in my results: 14295 with your method, previously 14343 with mine. 2130MHz at 1.025v with +1200 on memory. Within margin of error, though. It’s not worse; it’s just not any different. At least not while I’m being throttled by the power limit. Or maybe I’m not fully understanding what you’re asking me to do.

Could you give me a step-by-step “for dummies” version of what you’re asking me to do? I don’t fully understand it, because I don’t see where the actual problem might be with the way I’m doing it now, since I’m already seeing it locked to the same voltage and clocks (with the exception of power limit throttling) as reported in Afterburner and GPU-Z.


----------



## bmgjet

MacMus said:


> what card and bios?


XC3 with KFA 390W bios.


----------



## WilliamLeGod

HyperMatrix said:


> No difference in my results. 14295 with your method. Previously 14343 with mine. 2130MHz at 1.025v with +1200 on memory. Within margin of error though. It’s not worse. It’s just not any different. At least not while I’m being throttled by power limit. Or maybe I’m not fully understanding what you’re asking me to do.
> 
> Could you give me a step by step “for dummies” version of what you’re asking me to do? I don’t fully understand it because I don’t see where the actual problem might be with the way I’m doing it now since I’m already seeing it locked to the same voltage and clocks (with the exceptional of power limit throttling) as reported in afterburner and gpu-z.


Can you provide an image with my method? I would like to see the curve you set.


----------



## HyperMatrix

WilliamLeGod said:


> Can u provide image with my method? I would like to see the curve u set


Well I'm not sure I understood you correctly. But this is what I thought you meant:










Unless you're just asking me to hit L on the voltage I want?


----------



## MacMus

bmgjet said:


> XC3 with KFA 390W bios.


Did you use paint to attach the shunts... with a waterblock?


----------



## cheddle

I have a Galax 3090 SG, reference board, 2x8-pin, and zero luck with shunt modding. I tried stacking 8 mΩ resistors with zero effect, first trying the 5 shunts on the front of the card and then just the two under the 8-pin connectors.

No effect at all.

I'm running the 390W Galax BIOS; the 520W EVGA one was worse, with a ‘phantom 8-pin’ reporting the same power draw as my PCIe-1, so I actually had less TDP available than with the 390W Galax. (Maybe there is some opportunity here to use unbalanced shunts?)

As an aside, replacing the stock thermal paste is well worthwhile, as the Galax stuff sucks. Also, the stock cooler pulls about 10W of power to max all four fans.

Eagerly awaiting a 450W+ BIOS for reference boards.

These 3090s are hamstrung hard by the power limit.


----------



## Falkentyne

cheddle said:


> I have a galax 3090 SG, reference board, 2x8pin - zero luck with shunt modding. I tried stacking 8mohm resistors with zero effect, first trying the 5 shunts on the front of the card and then just the two under the 8-pin connectors.
> 
> No effect at all.
> 
> Im running the 390w galax bios, the 520w EVGA one was worse with a ’phantom 8 pin’ reporting the same power draw as my pcie-1, I actually had less TDP available than the 390w galax. (Maybe there is some opportunity here to use unbalanced shunts?)
> 
> as an aside replacing the stock thermal paste is well worthwhile as the Galax stuff sucks. Also the stock cooler pulls about 10w of power to max all four fans.
> 
> Eagerly awaiting a 450w+ BIOS for reference boards.
> 
> These 3090’s are hamstrung hard by power limit.


Did you scrape the conformal coating off the silver contact areas of the shunts first?
What did you use to attach the shunts? Solder or paint? What exact material or product was used in either case?

Are the original shunts flat shunts or do they have the silver conductive contact edges sitting 0.5mm below the black middle housing shell?


----------



## WilliamLeGod

HyperMatrix said:


> Well I'm not sure I understood you correctly. But this is what I thought you meant:
> 
> View attachment 2468075
> 
> 
> Unless you're just asking me to hit L on the voltage I want?
> 
> View attachment 2468076


Okay, here is the step-by-step: first set +180, then hit Apply. Then, in the chart, pull all the dots to the right of the 1v dot down flat to the 1v frequency, then Apply. I'm sorry, I'm not at my PC now, so I couldn't show an image as an example. Just a tip in case you don't know yet: you can hold Shift, hold the left mouse button and select everything to the right of the 1.006v dot, grab any dot in the highlighted region, pull them all down anywhere lower than the 1v frequency, hit Apply, and all those dots will automatically flatten to the 1v frequency.
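What that drag does can be sketched as a tiny transform over (voltage, frequency) points; this is only an illustration of the flattening operation, with made-up clock values, not anything Afterburner exposes as an API:

```python
def flatten_curve(points, lock_v=1.0):
    """Clamp every point above lock_v to the frequency at lock_v,
    mimicking the 'pull everything right of the dot down flat' drag."""
    # Highest frequency at or below the lock voltage (V/F curves are monotonic).
    lock_freq = max(f for v, f in points if v <= lock_v)
    return [(v, lock_freq if v > lock_v else f) for v, f in points]

# Hypothetical curve after an offset is applied (values purely illustrative):
curve = [(0.90, 1980), (0.95, 2040), (1.00, 2100), (1.025, 2130), (1.05, 2160)]
print(flatten_curve(curve))
# Every point above 1.00 V now sits at the 1.00 V frequency (2100),
# so the card holds 1 V / 2100 MHz instead of boosting past it.
```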


----------



## cheddle

Falkentyne said:


> Did you scrape the conformal coating off the silver contact areas of the shunts first?
> What did you use to attach the shunts? Solder or paint? What exact material or product was used in either case?
> 
> Are the original shunts flat shunts or do they have the silver conductive contact edges sitting 0.5mm below the black middle housing shell?


I scuffed up the existing shunts and used circuit cleaner.

I did attempt to solder but I suck at it.

I then attempted to use two materials to attach them: first 'AP Anders Products Wire Glue' and then 'CAIG Labs Circuit Writer'. Continuity between the top shunts and the 8-pin connectors existed, but my multimeter is not accurate enough to measure such a small resistance. It's entirely possible that the bonding material has a resistance that is simply too high to effectively drop the overall shunt resistance.

The originals are near enough to flat.


----------



## HyperMatrix

WilliamLeGod said:


> Okay, here is the step by step: first hit +180, then hit Apply. Then, in the chart, pull all the dots to the right of the 1V dot down flat to the 1V frequency, then Apply. I'm sorry, I'm not at my PC now, so I can't show an image as an example. A tip in case you don't know yet: you can hold Shift, hold the left mouse button, and drag to select everything to the right of the 1.006V dot, then grab any dot in the highlighted range, pull it down anywhere below the 1V frequency, and hit Apply; all those dots will automatically flatten to the 1V frequency.


That's just king of the hill. It doesn't move down dots to the left of 1.006v. It moves the entire line down, but with the left-most dot being an anchor. Whichever dot happens to be the highest on the board, all the dots to its right match it. This exact same thing happens if you take 1.000v and move it above all the other dots on the right. Whenever you move a dot above all the other dots, all dots to the right of it line up with it. This is all basic Afterburner functionality. I still don't understand what you're trying to say or how what you're saying is in any way different from what I've already been doing. Whether I do it "your way" or the way I've been doing it...the clock and voltage have always stayed locked (minus throttling).


----------



## Sheyster

WilliamLeGod said:


> Okay, here is the step by step: first hit +180, then hit Apply. Then, in the chart, pull all the dots to the right of the 1V dot down flat to the 1V frequency, then Apply. I'm sorry, I'm not at my PC now, so I can't show an image as an example. A tip in case you don't know yet: you can hold Shift, hold the left mouse button, and drag to select everything to the right of the 1.006V dot, then grab any dot in the highlighted range, pull it down anywhere below the 1V frequency, and hit Apply; all those dots will automatically flatten to the 1V frequency.











Undervolting the RTX 3080 and the RTX3090 - Bjorn3D.com (bjorn3d.com)

I like his method of adjusting the right side of the curve better.


----------



## dante`afk

Thanh Nguyen said:


> Anyone here has a shunt card and the temp is under 40c in 25c-26c ambient? If yes, what your cooling set up and block?



Mine sits at 43C at about 25C ambient with a dual MO-RA3 420 setup and 4 pumps. I'd imagine I can get her down sub-40C if I turn on my AC or open the window.


----------



## Falkentyne

cheddle said:


> I scuffed up the existing shunts and used circuit cleaner.
> 
> I did attempt to solder but I suck at it.
> 
> I then attempted to use two materials to attach, firstly 'AP Anders Products Wire Glue' and then 'CAIG labs circuit writer'. Continuity between the top shunts and the 8pin connectors existed but my multimeter is not accurate enough to measure such a small resistance. Its entirley possible that the bonding material has a resistance that is simply too high to effectivley drop the overall shunt resistance.
> 
> The originals are near enough to flat.


Multiple people have had problems with that Circuit Writer pen NOT working on 3090's at all, even though it worked fine on 2080 Ti's for them! Those people had no problems when they switched to the MG 842AR conductive pen (or paint).






MG Chemicals 842AR-15ML Silver Print (Conductive Paint), 12 ml:
https://www.amazon.com/MG-Chemicals-842AR-P-Silver-Conductive/dp/B01LYXQE0M/



You should have an easy success rate with that.

Just make sure you coat the silver conductive edges very well. If you are using the paint as an adhesive to 'stack' a shunt on top, use enough to ensure a proper, full connection (you will have to use significantly more on "depressed"-edge shunts like those on MSI and Founder's Edition cards).

You can also use the paint as its own shunt without stacking. You will need to apply a thicker layer and make sure the shunt is fully bridged and painted end to end, with a nice thick layer (the paint works better thicker than thinner--the resistance is lower on a thicker coat). The paint will act similar to a 10 to 15 mOhm shunt being applied on top of the 5 mOhm one if you go that route.
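The parallel-resistor math behind that 10-15 mOhm estimate can be sketched as follows; the numbers are illustrative, and the 5 mOhm stock value is taken from the posts above:

```python
# Effect of a paint layer acting as a 10-15 mOhm shunt in parallel
# with a stock 5 mOhm shunt (values from the discussion above).

def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock = 5.0  # mOhm
for paint in (10.0, 15.0):
    eff = parallel(stock, paint)
    scale = eff / stock  # fraction of the true current the card measures
    print(f"{paint:>4.1f} mOhm paint -> {eff:.2f} mOhm effective; "
          f"card reads {scale:.0%} of true power (~{1/scale:.2f}x headroom)")
```

So a 10 mOhm paint layer drops the effective shunt to about 3.33 mOhm, and the card under-reads power by roughly a third.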

The FE shunts are the hardest to manage with the paint because it's difficult to get contact with the lower edges (particularly on one of the 8 pins, which is basically right next to a choke horizontally on a 3090 FE, but easier on a 3080 FE).

Protip for working with the conductive 842AR paint--and you will thank me later:

Use Super 33+ tape, cut it to size, and fully insulate around the shunts you are painting. This will protect the PCB while you're applying. Remove the tape after application, before the paint hardens.



https://www.amazon.com/gp/product/B00004WCCL/



Kapton tape is another good choice for protecting the PCB.


----------



## dante`afk

poncjusz said:


> I've got https://www.3dmark.com/3dm/54037035 14945 with my strix on air with evga 500w bios
> I have old CPU and im planning to upgrade my whole platform.
> I wanted get high end cooling that's gonna last me for years.
> Could you guys point me to some good resources for newbies.
> Do you guys think it's worth getting chiller or maybe multiple MO-RA 3 radiators?


With a chiller you have to consider condensation; it's also super loud, with horrendous electricity usage/cost for 24/7 use. One or two MO-RA3 420s are much more reasonable.


----------



## HyperMatrix

Sheyster said:


> Undervolting the RTX 3080 and the RTX3090 - Bjorn3D.com (bjorn3d.com)
> 
> I like his method of adjusting the right side of the curve better.


Yeah this has been the method I've been using forever. Just move the voltage dot you want to play with up to the clockspeed you want and hit apply. Everything else lines up automatically. No fuss no muss. If there are voltage points to the right which are higher than where you want to set your clock, CTRL-Drag the right side all the way down then move your selected voltage point up where you want it. Done deal. I have no idea what this fella is trying to get at. He got me all excited for a while there thinking he's got some way of increasing my OC performance.


----------



## WilliamLeGod

HyperMatrix said:


> Yeah this has been the method I've been using forever. Just move the voltage dot you want to play with up to the clockspeed you want and hit apply. Everything else lines up automatically. No fuss no muss. If there are voltage points to the right which are higher than where you want to set your clock, CTRL-Drag the right side all the way down then move your selected voltage point up where you want it. Done deal. I have no idea what this fella is trying to get at. He got me all excited for a while there thinking he's got some way of increasing my OC performance.


No bro, that's undervolting, which leads to lower performance. What I see in your chart is not my method, because there is a huge jump of 2 dots before 1.025V. My method is that all dots are only 1 step above or below each other, not that huge step.


----------



## cheddle

Falkentyne said:


> Multiple people have had problems with that circuitwriter pen NOT working on 3090's at all, even though it worked fine on 2080's Ti's for them! Those people had no problems when they switched to the MG 842AR conductive pen (or paint).
> 
> 
> 
> 
> 
> 
> MG Chemicals 842AR-15ML Silver Print (Conductive Paint), 12 ml:
> https://www.amazon.com/MG-Chemicals-842AR-P-Silver-Conductive/dp/B01LYXQE0M/
> 
> 
> 
> You should have easy success rate with that.
> 
> Just make sure you coat the silver conductive edges very well. If you are using the paint as an adhesive to 'stack' a shunt on top of it, use enough to ensure proper full connection. (you will have to use significantly more on "depressed" edge shunts like those on MSI and Founder's Edition cards).
> 
> You can also use the paint as its own shunt without stacking. You will need to apply a thicker layer and make sure the shunt is fully bridged and painted end to end, with a nice thick layer (the paint works better if it's thicker than thinner--the resistance is lower on a thicker coat). The paint will act like similar to a 10 to 15 mOhm shunt being applied on top of the 5 mOhm if you go that route.
> 
> The FE shunts are the hardest to manage with the paint because it's difficult to get contact with the lower edges (particularly on one of the 8 pins, which is basically right next to a choke horizontally on a 3090 FE, but easier on a 3080 FE).
> 
> Protip for working with the conductive 842AR paint--and you will thank me later---
> 
> Use Super 33+ tape, cut it to size, and fully insulate around the shunts you are painting. This will protect the PCB while you're applying. Remove the tape after application, before the paint hardens.
> 
> 
> 
> https://www.amazon.com/gp/product/B00004WCCL/
> 
> 
> 
> Kapton tape is another good choice for protecting the PCB.


Thanks for the advice. Are you aware of which shunts need doing on the reference board (non-FE)?


----------



## Falkentyne

cheddle said:


> Thanks for the advice. Are you aware of which shunts need doing on the reference board (non-FE)?


You'd have to ask one of the others, but usually two or three 8-pins, then "PWR SRC" (power plane input source), MVDDC (FBVDD), GPU chip power, and PCIe slot power. Basically any place with a 2512-size, 5 mOhm shunt resistor.


----------



## Carls_Car

Anyone here using PSU cable extenders while OC'ing their 3090 + system? Looking at tidying up my build before putting in all my water cooling later this month and was wondering if I'd come across any overly negative effects.


----------



## cakesg

mirkendargen said:


> Put a GPU directly in the slot and see if it runs at 16x there. If it's just a spare GPU to test with not the one you're using with the riser, also use it with the riser to see if it drops to 8x there. If you get 8x with it directly in the slot, it's the mobo and you probably have an m.2 SSD somewhere that steals lanes from the PCIe slot. The fact that it sometimes boots at 16x then crashes sounds more like the riser is the problem. It wouldn't even try to do 16x if it was BIOS/slot population.


Ok, so I took out the riser cable and redid my entire loop, and it's still running at x8 directly in the slot. What should I do now?


----------



## WilliamLeGod

HyperMatrix said:


> That's just king of the hill. It doesn't move down dots to the left of 1.006v. It moves the entire line down, but with the left-most dot being an anchor. Whichever dot happens to be the highest on the board, all the dots to its right match it. This exact same thing happens if you take 1.000v and move it above all the other dots on the right. Whenever you move a dot above all the other dots, all dots to the right of it line up with it. This is all basic Afterburner functionality. I still don't understand what you're trying to say or how what you're saying is in any way different from what I've already been doing. Whether I do it "your way" or the way I've been doing it...the clock and voltage have always stayed locked (minus throttling).


I just got home. Here are 3 images as examples of my method. I hope they make sense, and sorry for my English; it's not my mother language.


----------



## MacMus

bmgjet said:


> XC3 with KFA 390W bios.


Can I use the same BIOS for the MSI Ventus? Any issues with yours, and possibly with mine?


----------



## mirkendargen

cakesg said:


> Ok so I took out the riser cable, redid my entire loop, its still running at x8 directly on the slot. What should I do now?


Read your mobo manual to figure out how it assigns lanes to determine what's going on... or get a Threadripper and have so many available lanes that every slot can run at 16x.


----------



## Deve5tat0R

With the below custom curve, with uniform steps up to 2145 MHz @ 1.04V, I'm able to reach a 14500 PR score on air.

https://www.3dmark.com/pr/602742


----------



## HyperMatrix

WilliamLeGod said:


> I just got home. Here is 3 images as example of my method. I hope they make sense and sorry for my English, not my mother language
> View attachment 2468081
> View attachment 2468082
> View attachment 2468083


Ok, I get what you're saying. Maintaining a more uniform curve means that when the card throttles the clock speed down, it doesn't do it at the same voltage. It does it at a lower voltage, allowing for less heat and a smaller hit to the power limit. I actually did get a slight bump to my score, 14361, with this method. Thanks for sharing. At least this is the behavior I noticed; if your intention was something else, then I still don't get it. Haha.


----------



## 7empe

HyperMatrix said:


> The best indication of how good a card is will be in seeing how it undervolts. My Strix wouldn't even do 2000MHz at 1.000v but my new FTW3 does 2055MHz at 0.950v. The higher the voltage you need to reach a certain frequency at lower levels, the sooner you'll run out of voltage even if you had unlimited power limit, which means lower maximum clock speed even under water. So test out how high you can clock at 0.9 and 0.95 and 1v. Should give you a good idea of the quality of your chip.


My Strix does 2055 MHz at 925 mV, 2025 MHz at 900 mV, and 2010 MHz at 868 mV.


----------



## HyperMatrix

7empe said:


> My Strix does 2055 MHz at 925 mV, 2025 MHz at 900 mV, and 2010 MHz at 868 mV.


You can finish a port royal run with those clocks and voltage? If so, those numbers are VERY impressive.


----------



## WilliamLeGod

HyperMatrix said:


> Ok, I get what you're saying. Maintaining a more uniform curve means that when the card throttles the clock speed down, it doesn't do it at the same voltage. It does it at a lower voltage, allowing for less heat and a smaller hit to the power limit. I actually did get a slight bump to my score, 14361, with this method. Thanks for sharing. At least this is the behavior I noticed; if your intention was something else, then I still don't get it. Haha.


Im glad it helps! Cheers bro


----------



## khunpunTH

HyperMatrix said:


> The best indication of how good a card is will be in seeing how it undervolts. My Strix wouldn't even do 2000MHz at 1.000v but my new FTW3 does 2055MHz at 0.950v. The higher the voltage you need to reach a certain frequency at lower levels, the sooner you'll run out of voltage even if you had unlimited power limit, which means lower maximum clock speed even under water. So test out how high you can clock at 0.9 and 0.95 and 1v. Should give you a good idea of the quality of your chip.


Just finished a PR run with [email protected], avg clock 2061 from the 3DMark result.
My Strix ok?








I scored 14 301 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## smonkie

You should try those clocks in RT-intensive games like Control or Legion. You are surely in for a big surprise.


----------



## 7empe

HyperMatrix said:


> You can finish a port royal run with those clocks and voltage? If so, those numbers are VERY impressive.


Yes, I can. Running KPE 520W bios. I also pass TSE and FSE stress tests.


----------



## ALSTER868

7empe said:


> Yes, I can. Running KPE 520W bios. I also pass TSE and FSE stress tests.


Is that on stock air cooling or on water? Looks impressive indeed.


----------



## 7empe

ALSTER868 said:


> Is that on stock air cooling or on water? Looks impressive indeed.


It is on a custom water loop from EKWB: a 360 mm rad in push/pull with a 240 mm flat reservoir. Btw, in non-RT games it keeps stable clocks at 2205-2220 MHz.


----------



## 7empe

7empe said:


> It is on water custom from EKWB. 360 mm rad push/pull with 240 mm flat reservoir. Btw. in non-RT games it keeps stable clocks at 2205-2220 MHz.


One more interesting voltage point is 887 mV, where the card can push a stable 2040 MHz (+270 MHz offset from the stock frequency at this V point).


----------



## reflex75

7empe said:


> My Strix does 2055 MHz at 925 mV, 2025 MHz at 900 mV, and 2010 MHz at 868 mV.


Nice efficient core!

My stock FE can go as high as 2100 MHz at only 0.950V, thanks to low temperature (39C avg).
PR 14636 https://www.3dmark.com/3dm/53877759?
TS 22435 https://www.3dmark.com/3dm/53879031?

No shunt mod, so heavily power limited during the bench.
My only option to improve the score is working on frequency in the low voltage range (0.850V-0.950V).

It looks to me like these Ampere chips are more efficient at lower voltages compared to Turing...


----------



## ALSTER868

7empe said:


> It is on water custom from EKWB. 360 mm rad push/pull with 240 mm flat reservoir. Btw. in non-RT games it keeps stable clocks at 2205-2220 MHz.


Do you happen to remember how it was compared to stock air cooling? Were you able to lower voltages for same freq. when went to water and by how much?


----------



## 7empe

ALSTER868 said:


> Do you happen to remember how it was compared to stock air cooling? Were you able to lower voltages for same freq. when went to water and by how much?


Yeah, I have everything written down ;-) and it goes like this (Freq | Air | Water), same BIOS:

2010 MHz | 918 mV | 868 mV
2040 MHz | 950 mV | 887 mV
2100 MHz | 1018 mV | 943 mV

Are you interested in some specific VF point?

One more thing worth noting: I have a +1210 MHz offset on memory (1370 MHz). If you are core-unstable for a given VF point, you might try lowering the memory by 40 or 80 MHz. I have noticed that for some VF points this allows pushing the core further by another 15-30 MHz.


----------



## defiledge

Metro Exodus on a loop still beats any 3DMark stress test. I can run an extra +80 MHz stable in 3DMark tests compared to Metro Exodus.


----------



## ALSTER868

7empe said:


> Yeah, I have everything written down ;-) and it goes like this (Freq | Air | Water), same BIOS:
> 
> 2010 MHz | 918 mV | 868 mV
> 2040 MHz | 950 mV | 887 mV
> 2100 MHz | 1018 mV | 943 mV
> 
> Are you interested in some specific VF point?
> 
> One more thing worth noting: I have a +1210 MHz offset on memory (1370 MHz). If you are core-unstable for a given VF point, you might try lowering the memory by 40 or 80 MHz. I have noticed that for some VF points this allows pushing the core further by another 15-30 MHz.


Thanks a lot for the input, that is what I was interested in. The volts dropped quite well in your case, which gives me hope of higher clocks at lower volts when the waterblock for my Strix arrives. Right now on stock air it's barely possible to get above 2000 MHz; most of the time it's 1920-1950-ish and hitting power limits badly.
I'm aware of memory being unstable at some frequencies; so far I have not come across this, maybe it only applies at higher clocks than those I'm sitting at now.


----------



## bmgjet

Finally made it to 15K on PR.








I scored 15 008 in Port Royal
Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Just sucks that it's summer where I live now, so it doesn't get below 20C in my room even at 2am. Can't wait for winter again and 0C temps, but I would have been knocked off the HOF long before then.


----------



## ALSTER868

defiledge said:


> Metro Exodus on a loop still beats any 3DMark stress test. I can run an extra +80 MHz stable in 3DMark tests compared to Metro Exodus.


You may want to try to see what it's like in-game if you are referring to the benchmark. I have seen peaks of 497W at [email protected] and temps rocketing to 82-84C. That game is real hell for the card.
The benchmark is lighter.


----------



## reflex75

7empe said:


> Yeah, I have everything written down ;-) and it goes like this (Freq | Air | Water), same BIOS:
> 
> 2010 MHz | 918 mV | 868 mV
> 2040 MHz | 950 mV | 887 mV
> 2100 MHz | 1018 mV | 943 mV
> 
> Are you interested in some specific VF point?
> 
> One more thing is worth to notice. I have +1210 MHz offset on memory (1370 MHz). If you are core-unstable for given VF, then you might try to lower the memory by 40 or 80 MHz. I have noticed that for some VF points this allows to push core further by another 15-30 MHz.


Nice improvement with water.
You can see a pattern: the higher the voltage, the more rewarding watercooling is!
There are diminishing returns to increasing voltage without keeping temperature under control.


----------



## lolhaxz

Spin this puppy up tomorrow.










function over form!


----------



## wilchy

Hey everyone, I have just got an Aorus Xtreme 3090 and have run a Port Royal benchmark, and it's only giving me a score of 12434! My FE was in the low 13000s with no OC. Anybody have any idea what might be causing this, please?


----------



## monkeyboy46800

Hi guys, sorry to jump on your thread. I just got a 3060 Ti FE and I'm looking for an NVFLASH version compatible with FE cards; the current version, 5.667.0, gives me a PCI ID mismatch when I flash an MSI BIOS onto the FE 3060 Ti card.
Thanks


----------



## xrb936

So my stock air-cooled Strix 3090 won't even be stable at 2050... I am so frustrated with it compared to other Strix 3090s. My EK block is on the way; do you guys think it will help?


----------



## xrb936

ALSTER868 said:


> Thanks a lot for the input, that is what I was interested in. The volts dropped quite well in your case, which gives me hope of higher clocks at lower volts when the waterblock for my Strix arrives. Right now on stock air it's barely possible to get above 2000 MHz; most of the time it's 1920-1950-ish and hitting power limits badly.
> I'm aware of memory being unstable at some frequencies; so far I have not come across this, maybe it only applies at higher clocks than those I'm sitting at now.


Yours sounds like my first Strix 3090. If you cannot overclock it to 2000 MHz with stock cooling, I doubt the water block will help either. You just got a poorly binned chip.


----------



## ALSTER868

xrb936 said:


> yours sounds like my first Strix 3090. If you cannot overclock it to 2000mhz with stock cooling, I doubt the water block can help either. You just got a poor binned chip.


I cannot overclock it past 2000 MHz because I'm hitting power limits and high temps. The waterblock should help, and I don't think it's a poor chip either; I've seen much worse chips.
Let's talk about it when I put on the waterblock.


----------



## pat182

Guys, trying to voltage-lock a curve is useless; in RTX games you will never achieve the target versus a normal raster load. Just set an offset, that's the best you can do.


----------



## Sheyster

HyperMatrix said:


> Ok, I get what you're saying. Maintaining a more uniform curve means that when the card throttles the clock speed down, it doesn't do it at the same voltage. It does it at a lower voltage, allowing for less heat and a smaller hit to the power limit. I actually did get a slight bump to my score, 14361, with this method. Thanks for sharing. At least this is the behavior I noticed; if your intention was something else, then I still don't get it. Haha.


So the only thing you did differently was not lock the voltage in AB by pressing L?

EDIT - I have not done any undervolting since my Titan X (Maxwell) SLI days. I ran both cards at 1404 MHz; I can't remember the voltage I used, it's been a while. You guys have piqued my interest. I will test the Strix at 950 mV and see what I can push it to at that voltage.


----------



## Sheyster

MacMus said:


> can i use same bios for MSI Ventus ? andy issues with your and possible with mine?


Why not just use the Gigabyte 390W BIOS? That has been tried and confirmed with the MSI Ventus. This said, any 2x 8-pin 3090 BIOS should work. Depending on which BIOS you use, you might lose a DP port.


----------



## xrb936

ALSTER868 said:


> I cannot overclock it past 2000 mhz because I'm hitting power limits and high temps. The waterblock should help and I don't think it's a poor chip either. I've seen much worse chips.
> Let's talk about it when I put on the waterblock.


Cool. Let me know what’s the result. My block is on the way as well.


----------



## Sheyster

If anyone wants to purchase a discounted EVGA PerFE 12 cable, PM me. The guy who purchased my 3090 FE had a cable for his Corsair power supply. I'm not going to bother listing a cable in the marketplace.


----------



## dante`afk

lolhaxz said:


> Spin this puppy up tomorrow.
> 
> View attachment 2468128
> 
> 
> function over form!



do tell?


----------



## Biscottoman

what would be in your opinion the best shunt mod setup for a 3090 strix?


----------



## cakesg

Is my Strix just horrible? Or does running at x8 affect benchmark scores? This is about the highest score I can get on Port Royal under water. I've also noticed my temps are extremely high, so I took everything apart and made sure contact was good. I cannot overclock it far at all; it instantly crashes above about +80 MHz and isn't really stable at lower overclocks either. I'm running the 520W KP BIOS.








I scored 13 786 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Nizzen

Biscottoman said:


> what would be in your opinion the best shunt mod setup for a 3090 strix?



Shunt all shunts with 8mohm
Alphacool waterblock/backplate
Bitspower 6x dimm cooler mounted on the backplate
Enough radiators with strong enough pump(s)
GG
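For reference, stacking 8 mOhm resistors on top of the stock shunts works out like this; a quick sketch, not a measurement, and the 5 mOhm stock value is an assumption from earlier posts:

```python
# Stacking an 8 mOhm resistor on a 5 mOhm shunt = two resistors in parallel.
stock, stacked = 5.0, 8.0                   # mOhm
eff = stock * stacked / (stock + stacked)   # parallel combination
print(f"effective shunt: {eff:.2f} mOhm")                  # ~3.08 mOhm
print(f"card reads ~{eff / stock:.1%} of true power")      # ~61.5%
print(f"real draw at the 'limit' is ~{stock / eff:.2f}x")  # ~1.63x
```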


----------



## Biscottoman

Nizzen said:


> Shunt all shunts with 8mohm
> Alphacool waterblock/backplate
> Bitspower 6x dimm cooler mounted on the backplate
> Enough radiators with strong enough pump(s)
> GG


Would the 5 mOhm + 8 mOhm on the PCIe be too much?
Btw, probably going for a single Black Ice Nemesis GTX 560 + a TBE 300 D5 from EKWB; as for the waterblock, I'm probably gonna wait for Kryographics/Heatkiller or Optimus+MP5Works.


----------



## Nizzen

Biscottoman said:


> Would the 5mohm + 8mohm on pcie be too much?
> Btw probably going for a single black ice nemesis gtx 560 + tbe 300 d5 from ekwb; as waterblock i'm probably gonna wait kryographics/heatkiller or optimus+mp5works


If you want low flow in the system, and 1c better backplate temperature, buy mp5works LOL.


----------



## Biscottoman

Nizzen said:


> If you want low flow in the system, and 1c better backplate temperature, buy mp5works LOL.


Actually, the Bitspower RAM block would be a great choice and even cheaper than the MP5Works. The problem for me is finding a way to install it safely on the backplate and to work out good tubing with it (it's a dual-loop Tower 900 build); if you have any suggestion regarding that, it would be really appreciated.

Anyway, could the 5 mOhm + 8 mOhm on the PCIe be good for the card?


----------



## MacMus

Sheyster said:


> Why not just use the Gigabyte 390W BIOS? That has been tried and confirmed with the MSI Ventus. This said, any 2x 8-pin 3090 BIOS should work. Depending on which BIOS you use, you might lose a DP port.


I want to keep all 3 DP ports for my monitors; somebody said with the Gigabyte BIOS I may lose one.
Is there a chance I can brick the card with a new BIOS?


----------



## shiokarai

lolhaxz said:


> Spin this puppy up tomorrow.
> 
> View attachment 2468128
> 
> 
> function over form!


What's this setup? Please elaborate. Is this some kind of industrial/universal GPU/CPU block, or what?


----------



## shiokarai

Please, can somebody enlighten me about the Bykski block not clearing the fan header on the Strix 3090? Is this really the case? Was it maybe some early samples, and now it's ok? Is it easy enough to fix? I have the Bykski block inbound.


----------



## mirkendargen

shiokarai said:


> Please somebody enlighten me about Bykski block not clearing the fan header with the Strix 3090? Is this really the case? Was it some early samples maybe and now it's ok? Is it fixable easy enough? Have the Bykski block inbound.


It's really the case (unless it's been fixed in later batches). I have mine on, just flexing the PCB a bit. You could either trim some plastic off the fan header or Dremel a bit more acrylic out of the block to make it clear (it's in an acrylic section, not the actual cold plate).


----------



## shiokarai

mirkendargen said:


> It's really the case (unless it's been fixed in later batches). I have it on just flexing the PCB a bit. You could either trim some plastic off the fan header or dremel a bit more acrylic (it's in an acrylic section not the actual cold plate) out of the block to make it clear.


Is it like 1-2mm or worse? Debating whether to return the block or mod it (don't wanna cut the GPU itself).


----------



## 7empe

7empe said:


> Yeah, I have everything written down ;-) and it goes like this (Freq | Air | Water), same BIOS:
> 
> 2010 MHz | 918 mV | 868 mV
> 2040 MHz | 950 mV | 887 mV
> 2100 MHz | 1018 mV | 943 mV
> 
> Are you interested in some specific VF point?





ALSTER868 said:


> Thanks a lot for the input, that is what I was interested in, the volts dropped quite well in your case which gives me hope to have higher clocks with lower volts when the waterblock for my strix arrives. Now on stock air it's beyond possible to get above 2000 mhz, most of the time it's 1920-1950-ish and hitting power limits badly.
> I'm aware of the unstable memory at some frequency, so far have not come across this, may be it's valid for higher clocks than those I'm sitting now.


Today’s PR score of 15438 and I am not done yet: https://www.3dmark.com/pr/603195


----------



## HyperMatrix

khunpunTH said:


> just finished pr run with [email protected] avg clock 2061 from 3dmark result.
> My strix ok?
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 301 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> 
> 
> 
> View attachment 2468090


Yes, that's 15 MHz higher than my current FTW3 Hybrid can do while under 54C. My old Strix on air couldn't even break 2000 MHz at a full 1.000V. With a water block and shunts, you should be very, very happy with your card.



Sheyster said:


> So the only thing you did differently was not lock the voltage in AB by pressing L?
> 
> EDIT - I have not done any undervolting since Titan X (Maxwell) SLI days. I ran both cards at 1404 MHz. I can't remember the voltage I used, it's been a while. You guys have peaked my interest. I will test the Strix at 950mv and see what I can push it to at that voltage.


I never used to lock it anyway since it always stuck to max voltage. But here’s the behavior I noticed. So let’s say I set it to 2130MHz at 1.025v and ran PR. Since it’s not under water and temperature builds up, and since it’s not shunt modded so I’m power limited, here’s what would normally happen. Voltage would stay at 1.025v but my clock speed would go down to 2115, 2000, 1985, and 1970.

If you have unlocked power limit and have epic cooling this may not be an issue since your clocks will stay the same at 2130 throughout the run. But if it does throttle and drop down to 2100MHz for example...why stay at 1.025v for 2100MHz? If your curve is uniform up to your “max point” you’ll have entries in place for the voltage required for each of those lower clock steps. So as the clock speed drops down for whatever reason, it also drops the voltage required for it. And that means slightly less power used and slightly less heat created.

But two caveats with that. One, as I mentioned: if you’re already able to lock a stable clock without throttling thanks to shunting/water cooling, this does nothing for you. Two, you have to watch what voltage/clock speed is in that curve, because on your card the automatically set values may actually be unstable and crash.

A more accurate way of describing this is simply to add stable voltage/clock pairs for several steps below your max clock. For example, with my card set to 2130/1.025, if it clocks down to 2055MHz, why would I still want to run 2055 at 1.025v when I know I can run it at 0.950v?
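For illustration, the stepped-curve idea can be sketched in code. This is only a model of the reasoning, not anything Afterburner exposes, and the clock/voltage pairs are made up, not measured from any card:

```python
# Sketch of the idea above: give each clock step below the max point its
# own (lower) stable voltage instead of pinning everything to the max volts.
# Hypothetical stable V/F points found by testing (MHz -> volts).
vf_curve = {
    2130: 1.025,
    2100: 1.000,
    2055: 0.950,
    2010: 0.906,
}

def voltage_for(clock_mhz: float) -> float:
    """Pick the voltage of the nearest curve point at or below the clock."""
    eligible = [c for c in vf_curve if c <= clock_mhz]
    return vf_curve[max(eligible)] if eligible else min(vf_curve.values())

def relative_power(clock_mhz: float, volts: float) -> float:
    """Dynamic power scales roughly with f * V^2 (first-order CMOS model)."""
    return clock_mhz * volts ** 2

# When the card throttles from 2130 to 2055 MHz, a flat curve keeps 1.025 V;
# a stepped curve drops to 0.950 V, saving power (and heat) at the same clock.
flat = relative_power(2055, 1.025)
stepped = relative_power(2055, voltage_for(2055))
print(f"power at 2055 MHz, stepped vs flat curve: {stepped / flat:.2%}")
```

With these illustrative numbers the stepped curve spends roughly 14% less dynamic power at the throttled clock, which is exactly the "less power used, less heat created" effect described above.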



cakesg said:


> Is my Strix just horrible? Or does running at x8 affect benchmark scores. This is like one of the highest scores I can get on port royal under water. I’ve also noticed my temps are extremely high so I took everything apart and made sure contact was good. I cannot overclock it far at all, it instantly crashes above like 80mhz and isn’t really stable at lower overclocks either. I’m running 520w KP bios
> 
> 
> I scored 13 786 in Port Royal
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10


PCIe 3.0 x8 is definitely a limiting factor on a 3090.


----------



## dante`afk

cakesg said:


> Is my Strix just horrible? Or does running at x8 affect benchmark scores. This is like one of the highest scores I can get on port royal under water. I’ve also noticed my temps are extremely high so I took everything apart and made sure contact was good. I cannot overclock it far at all, it instantly crashes above like 80mhz and isn’t really stable at lower overclocks either. I’m running 520w KP bios
> 
> 
> I scored 13 786 in Port Royal
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10


Your temps are way too high for WC. Something's wrong, the block probably isn't making proper contact. What does your rad setup look like?


----------



## mirkendargen

shiokarai said:


> Is it like 1-2mm or worse? Debating returning the block or mod it (don't wanna cut the gpu itself)


I didn't measure but I'd estimate ~2mm.


----------



## cakesg

dante`afk said:


> Your temps are way too high for WC. Something's wrong with the block not having contact properly, what's your rad setup look like?


I have a 420mm and a 280mm, my ambient water temp is around 30 degrees.


----------



## Falkentyne

HyperMatrix said:


> Yes that’s 15MHz higher than my current FTW3 Hybrid can do while under 54C. My old Strix on air couldn’t even break 2000 MHz on a full 1.000v. With a water block and shunts you’re should be very very happy with your card
> 
> 
> 
> I never used to lock it anyway since it always stuck to max voltage. But here’s the behavior I noticed. So let’s say I set it to 2130MHz at 1.025v and ran PR. Since it’s not under water and temperature builds up, and since it’s not shunt modded so I’m power limited, here’s what would normally happen. Voltage would stay at 1.025v but my clock speed would go down to 2115, 2000, 1985, and 1970.
> 
> If you have unlocked power limit and have epic cooling this may not be an issue since your clocks will stay the same at 2130 throughout the run. But if it does throttle and drop down to 2100MHz for example...why stay at 1.025v for 2100MHz? If your curve is uniform up to your “max point” you’ll have entries in place for the voltage required for each of those lower clock steps. So as the clock speed drops down for whatever reason, it also drops the voltage required for it. And that means slightly less power used and slightly less heat created.
> 
> but 2 caveats with that. One thing as I mentioned is that if you’re able to lock a stable clock without throttling already due to shunting/water cooling, this would do nothing for you. Secondly, you gotta watch what voltage/clock speed is in that curve because on your card, the automatically set values may actually be unstable and crash.
> 
> a more accurate way of describing this is simply to add in stable voltage/clocks for several clicks below your max clock. For example with my card when I set it to 2130/1.025 and if it clocks down to 2055MHz, why would I still want to run that 2055 at 1.025 when I know I can run it at 0.950v?
> 
> 
> 
> PCIe 3.0 x8 is definitely a limiting factor on a 3090.


Maybe my message got lost in the discussion but did you see my post about the Liquid metal w/mk stuff yesterday?


----------



## jura11

MacMus said:


> I want to keep all 3 DP ports for my monitors. Somebody said with Gigabyte i may lose one.
> Is there a chance i can brick the card with new bios ?


Yes, you will lose one DP port with the Gigabyte 390W BIOS; with the unverified KFA2 390W BIOS you will not lose any DP ports.

You shouldn't brick the GPU if you do the flash correctly. I think I have flashed my RTX 3090 probably 9 times with several different BIOSes, no issues 

Hope this helps 

Thanks, Jura


----------



## jura11

cakesg said:


> I have a 420mm and a 280mm, my ambient water temp is around 30 degrees.


Your GPU temperatures are way too high for such a loop; I would expect around 38-42°C at most 

I know you don't want to take the loop apart, but I would check the GPU's contact with the waterblock, and mainly the TIM imprint on the block 

Some earlier EKWB Vector RTX blocks had issues making good contact with the core/GPU; it's happening now with the 3090 and happened previously with the Asus RTX 2080Ti Strix 

Hope this helps 

Thanks, Jura


----------



## HyperMatrix

Falkentyne said:


> Maybe my message got lost in the discussion but did you see my post about the Liquid metal w/mk stuff yesterday?


I saw it but I didn't really agree with it. But at the same time, I'm not knowledgeable enough about some aspects of the discussion to have been able to properly counter some of the points you were making so I just decided to let it be. Haha. So for example, you said:

"It goes against the laws of metallurgy to have an eutectic alloy with the w/mk HIGHER than the lowest w/mk component of it. "

I'm not a scientist and didn't want to spend hours reading up on the specifics of this but to me, it seems impossible that if you had a liquid metal mixture consisting of 80% of a 100 W/mk metal mixed with 20% of a 20 W/mk metal, that you would be limited to just 20 W/mk as that is your weakest link. Because if that were true, why would you not just entirely use the 20 W/mk metal? And if conductonaut were only 25-30 W/mk then it would mean that coollaboratory ultra LM is even worse than that since conductonaut performs substantially better than it. So then if we're dealing with CLU being in the 15-20 range, how would fujipoly 17 W/mk thermal pads be more efficient than direct liquid metal? 

None of that makes sense to me and I don't agree with it, but I'd also have to spend hours looking up data and information to counter your points. Really, none of that matters to me, and it wouldn't benefit me to do so. All I know is that in the benchmark tests that have been done, Conductonaut has been the most thermally conductive LM. So even looking at it from a relative performance perspective, it's still the best choice. If you're making your own homemade LM and you're happy with it...so be it. 

But if you think you've made something more effective or equal in performance to conductonaut, you should be packaging and selling it instead of debating with me on a forum. But realistically speaking, it's unlikely that this would be the case.


----------



## motivman

pat182 said:


> Guys, trying to voltage-lock a curve is useless; in RTX games you will never achieve the target vs a normal raster load. Just give an offset, that's the best you can do


AMEN!!!!!


----------



## motivman

jura11 said:


> Yours GPU temperatures are way too high for such loop, I would expect like 38-42°C for your loop at least
> 
> I know you don't want to take apart loop but I would check GPU contact with waterblock and mainly TIM imprint on block etc
> 
> Some earlier EKWB Vector RTX blocks have issues with good contact with core/GPU, its been like happening now with 3090 and previously with Asus RTX 2080Ti Strix
> 
> Hope this helps
> 
> Thanks, Jura


Jura, I would have to disagree with you. 38-42C is very optimistic. 480W-520W (he is running the Kingpin bios) is a lot of heat to dissipate regardless of rad space. With a 30C ambient temp, let's say his water temp is 35C (again very optimistic); then the lowest his GPU core temp could be is about 45-46C, and that's if his block has a delta-T of 11C. With my overclocked 2080ti, my water-to-core delta-T was 11C, but with my 3090, the lowest I can get my delta-T is 13C. His temps are definitely too high for water-cooling, unless he is running his fans at 500rpm, in which case 61C would make sense.
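The temperature stack in that estimate can be written out explicitly. The numbers come from the post above; the 5°C water-over-ambient rise is the stated optimistic assumption:

```python
# Estimated GPU core temperature is the sum of the three deltas:
# ambient + (water over ambient) + (core over water).
ambient_c = 30.0        # room temperature from the post
water_delta_c = 5.0     # optimistic water-over-ambient rise -> 35 C water
block_delta_c = 11.0    # best-case core-over-water delta for a good block

core_c = ambient_c + water_delta_c + block_delta_c
print(f"estimated best-case core temp: {core_c:.0f} C")  # 46 C
```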


----------



## Esenel

For the possible GPU - Water Delta temp discussion.
AquaComputer says with 150 l/h their block will have a Delta of 18 Kelvin.


----------



## Falkentyne

HyperMatrix said:


> I saw it but I didn't really agree with it. But at the same time, I'm not knowledgeable enough about some aspects of the discussion to have been able to properly counter some of the points you were making so I just decided to let it be. Haha. So for example, you said:
> 
> "It goes against the laws of metallurgy to have an eutectic alloy with the w/mk HIGHER than the lowest w/mk component of it. "
> 
> I'm not a scientist and didn't want to spend hours reading up on the specifics of this but to me, it seems impossible that if you had a liquid metal mixture consisting of 80% of a 100 W/mk metal mixed with 20% of a 20 W/mk metal, that you would be limited to just 20 W/mk as that is your weakest link. Because if that were true, why would you not just entirely use the 20 W/mk metal? And if conductonaut were only 25-30 W/mk then it would mean that coollaboratory ultra LM is even worse than that since conductonaut performs substantially better than it. So then if we're dealing with CLU being in the 15-20 range, how would fujipoly 17 W/mk thermal pads be more efficient than direct liquid metal?
> 
> None of that makes sense to me and I don't agree with it but I'd also have to spend hours looking up data and information to counter your points. But really none of that matters to me and wouldn't benefit me to do so. All I know is in benchmark tests that have been done, conductonaut has been the most thermally conductive lm. So even if looking at it from a relative performance perspective, it's still the best choice. So if you're making your own home made lm and you're happy with it...so be it.
> 
> But if you think you've made something more effective or equal in performance to conductonaut, you should be packaging and selling it instead of debating with me on a forum. But realistically speaking, it's unlikely that this would be the case.


I never said I made anything more effective than conductonaut  Just cheaper. 97% of the performance for 1/3 the price...I think that's a win?

I know you're someone who likes numbers and not words, so here's some good old numbers to make you happy (not my post).


https://www.reddit.com/r/overclocking/comments/6ues42



http://imgur.com/a/T6JlP


0.5C worse for a small fraction of the cost of the commercial stuff (ambient normalized).










Plus, the very first time I bought Conductonaut, I used it on my laptop. After a few weeks the temps were skyrocketing; even though they were still better than Kryonaut, the core temp deltas were like 8C apart. I took it apart and saw that the entire layer was basically a hardened rock. Most of the gallium had been absorbed into the copper and there was nothing left to keep the LM liquid. I suspect that, with Conductonaut's higher indium content, it's more vulnerable to gallium absorption (since it's the Ga/In alloy that reduces the melting point of both substances, and then tin reduces it down to between 6C and -8C, depending on whether you refer to the melting point or the solidifying point)...but I'm getting off topic.

Eventually I read posts on notebookreview from other people who used LM on laptops and who had discovered ways to prevent premature gallium absorption and if any was absorbed, to preserve long term performance, and then I started experimenting with _countless_ repastes, until I found my own method, which worked with both a desktop retail delid IHS (9900k) and my laptop, to keep great temps, which I still use and tell people to use to this day. It was also on NBR where other experienced metallurgists talked about the w/mk of Conductonaut being a complete lie. And Thermalright Silver King is even worse in that regard...it advertises 81 w/mk which is barely lower than pure Indium... 

You do make a valid point about eutectic alloys and the lowest w/mk depending on the ratio of composition, but I think we need to leave that to the professional chemists. If something is 45% Ga, 45% In, 10% Sn, I would bet anyone $500 right now that the w/mk of this mixture is going to be lower than gallium (40.6 w/mk).
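For reference, the naive "rule of mixtures" figure the two of you are arguing around can be sketched as a simple weighted average. Real eutectic alloys can deviate substantially from this (which is exactly the crux of the disagreement), so treat it purely as the back-of-envelope linear estimate; the In and Sn conductivities are handbook room-temperature values:

```python
# Naive linear-mix estimate of alloy thermal conductivity, W/(m*K).
# Real eutectics need not follow this at all -- it is only the number
# the "weighted average" intuition in the discussion above produces.
k = {"Ga": 40.6, "In": 81.8, "Sn": 66.8}

def linear_mix(fractions: dict) -> float:
    """Weighted-average conductivity; fractions must sum to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(k[m] * f for m, f in fractions.items())

# The 45/45/10 Ga/In/Sn split mentioned in the post:
estimate = linear_mix({"Ga": 0.45, "In": 0.45, "Sn": 0.10})
print(f"linear-mix estimate: {estimate:.1f} W/mK")
```

The linear estimate comes out well above gallium's 40.6 W/mK, so if the real mixture measures below that, it is direct evidence that eutectic alloys do not obey the simple weighted average.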


----------



## Thanh Nguyen

Esenel said:


> For the possible GPU - Water Delta temp discussion.
> AquaComputer says with 150 l/h their block will have a Delta of 18 Kelvin.
> 


I have aquacomputer block and a 100% speed d5 right before it goes to the block and I have 15c delta or more.


----------



## HyperMatrix

Falkentyne said:


> I never said I made anything more effective than conductonaut  Just cheaper. 97% of the performance for 1/3 the price...I think that's a win?
> 
> I know you're someone who likes numbers and not words, so here's some good old numbers to make you happy (not my post).
> 
> 
> __
> https://www.reddit.com/r/overclocking/comments/6ues42
> 
> 
> 
> http://imgur.com/a/T6JlP
> 
> 
> 0.5C worse for a small fraction of the cost of the commercial stuff (ambient normalized).
> 
> 
> 
> Plus the very first time I bought conductonaut, I used it on my laptop. After a few weeks, the temps were skyrocketing, even though they were still better than Kryonaut, the core temp deltas were like 8C apart. I took it apart and saw that the entire layer was basically a hardened rock. Most of the gallium had basically been absorbed into the copper and there was nothing left to keep the LM being liquid. I suspect, with conductonaut's higher indium content, it's more vulnerable to gallium absorption (since its the Ga/In alloy that reduces the melting point of both substances, and then Tin reduces it down to between 6C to -8C, depending on if you refer to the melting point or solidifying point), etc...but I'm getting off topic.
> 
> Eventually I read posts on notebookreview from other people who used LM on laptops and who had discovered ways to prevent premature gallium absorption and if any was absorbed, to preserve long term performance, and then I started experimenting with _countless_ repastes, until I found my own method, which worked with both a desktop retail delid IHS (9900k) and my laptop, to keep great temps, which I still use and tell people to use to this day. It was also on NBR where other experienced metallurgists talked about the w/mk of Conductonaut being a complete lie. And Thermalright Silver King is even worse in that regard...it advertises 81 w/mk which is barely lower than pure Indium...
> 
> You do make a valid point about eutectic alloys and the lowest w/mk, depending on ratio of composition, but I think we need to leave that to the professional chemists. If something is 45% Ga, 45% In, 10% sn, I would bet anyone $500 right now that the w/mk of this mixture is going to be lower than gallium (40.6 w/mk).


Seems interesting enough. Even if his testing isn't entirely accurate, the difference at the top end is close enough that I can't say I'd be all that concerned. Although at $7 per gram for Conductonaut, I'm not sure I'd go through all this trouble to join the junior metallurgists club. Haha. I don't do enough applications to make it worthwhile for me. I generally like to set it and forget it, doing it right once. I don't have the patience of some of the younger kids who open up and repaste and re-pad their cards before their waterblocks come in. I'm just waiting until the block comes in before I do the shunts and pads and LM all at once. But if I were going to use over 10 grams a year, then yeah, I can definitely see the appeal.


----------



## motivman

Thanh Nguyen said:


> I have aquacomputer block and a 100% speed d5 right before it goes to the block and I have 15c delta or more.


with my EK block, my delta is 16-18C. I keep my water temp at 34C, and my gpu temp is 51-52C, fans run at 1150rpm (140mm fans) and 1350rpm (120mm fans), ambient is 25C


----------



## HyperMatrix

motivman said:


> ambient is 25C


You shouldn't set up your PC in your sauna room.


----------



## motivman

HyperMatrix said:


> You shouldn't set up your PC in your sauna room.


lol, 25C is actually kinda cold for me, I prefer 28-29C ambient.


----------



## jura11

motivman said:


> with my EK block, my delta is 16-18C. I keep my water temp at 34C, and my gpu temp is 51-52C, fans run at 1150rpm (140mm fans) and 1350rpm (120mm fans), ambient is 25C


Your ambient temperature plays a part in the higher temperatures as well. In my case I can keep the delta between idle and load at 8-10°C (idle temperature of the RTX 3090 is usually 22°C at normal ambient, water temperature sits around the same as the GPU idle temperature, and load temperature in gaming is 30-32°C) 

My radiator fans are running at 650-750RPM max, only on MO-ra3 360mm I'm running fans at 1000-1200RPM 

Hope this helps 

Thanks, Jura


----------



## HyperMatrix

motivman said:


> lol, 25C is actually kinda cold for me, I prefer 28-29C ambient.


Jesus. I start melting if it goes even 0.5C over 21. Just out of curiosity...you're not white, are you? I'm trying to understand how someone could prefer 28-29. Lol.


----------



## jura11

Thanh Nguyen said:


> I have aquacomputer block and a 100% speed d5 right before it goes to the block and I have 15c delta or more.


I have a Zotac RTX 2080Ti AMP with the Aquacomputer Kryographics RTX 2080Ti waterblock and active backplate, and that GPU has pretty much the lowest temperatures, 30-32°C in rendering with all 3 GPUs rendering 

Hope this helps 

Thanks, Jura


----------



## Esenel

HyperMatrix said:


> Jesus. I start melting if it goes even 0.5C over 21. Just out of curiosity...you're not white, are you? I'm trying to understand how someone could prefer 28-29. Lol.


And you wonder about jokes with temperature and being Canadian?


----------



## Falkentyne

I prefer 26C room temp myself. And no, I'm not white either. Anything colder just makes my hands get cold and painful.


----------



## HyperMatrix

Esenel said:


> And you wonder about jokes with temperature and being Canadian?


We have lots of temperature here. Ranging from deep freeze to permafrost to slightly below frozen. Haha. Although it is unseasonably warm right now where I live due to the Chinook. Currently 14C/57F outside. It got to 16C a few days ago and I had to turn my AC back on. 




Falkentyne said:


> I prefer 26C room temp myself. And no, I'm not white either. Anything colder just makes my hands get cold and painful.


I remember just last week I was at my parent's house and I started freaking out because it was too hot. At 22.5C.


----------



## motivman

Falkentyne said:


> I prefer 26C room temp myself. And no, I'm not white either. Anything colder just makes my hands get cold and painful.


yeah


HyperMatrix said:


> We have lots of temperature here. Ranging from deep freeze to permafrost to slightly below frozen. Haha. Although it is unseasonably warm right now where I live due to the Chinook. Currently 14C/57F outside. It got to 16C a few days ago and I had to turn my AC back on.
> 
> 
> 
> 
> I remember just last week I was at my parent's house and I started freaking out because it was too hot. At 22.5C.


lolololol, something is wrong with you, you COLD blooded man, you! haha (joke...)


----------



## shiokarai

Falkentyne said:


> I prefer 26C room temp myself. And no, I'm not white either. Anything colder just makes my hands get cold and painful.


26c? That's when I'm setting my AC to 100%, to get to 22-23c


----------



## mirkendargen

Esenel said:


> For the possible GPU - Water Delta temp discussion.
> AquaComputer says with 150 l/h their block will have a Delta of 18 Kelvin.
> 


18C is quite high...I wonder if they're accounting for a mediocre paste job or something. I'm at 15C with the Bykski block pulling 520W.


----------



## Esenel

I don't know. But that was their statement and I thought I should post it ;-)


----------



## WayWayUp

So I tightened up my memory timings, and the latency improvements made a big difference in Port Royal, going from 42ns to 34ns.

In the part of the benchmark where the ship comes in, I would always top out at 110.6fps. Now I can reach almost 113fps during that part. Will post some scores once my water block comes in (hopefully next week), but I'm looking for a top 15 score.


----------



## bmgjet

WayWayUp said:


> so I tightened up my memory timings and the latency improvements made a big difference in port royal
> Going from 42ns to 34ns
> 
> The part of the benchmark where the ship comes in I would always top out at 110.6fps
> Now, I can reach almost 113fps during that part. Will post some scores once my water block comes in (hopefully next week) but im looking for a top 15 score


My latency is terrible, and at that part of the benchmark it starts off at 114fps and creeps down to 110fps before switching to the next scene, which runs at 70fps.

I basically cancel the run depending on how the fps looks in 3 places.
At the start: if it sinks below 63fps, it's going to be a bad run.
Then panning across the landing bay: if it drops below 70fps, you know you're not going to break 15K.
And the bit where the ship is coming in: if I don't see 114fps at the start, or it drops below 110fps at the end, you might as well quit the run.

Wish they showed the FPS graph on the online results so you could see how other people's runs compare. 
There is only one place it drops below 60fps, and that's in the last 1/4 of the test, where it leaves the waiting room and goes around the ship; it hits 58fps for 1 sec.
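Those three checkpoints amount to a simple early-abort rule. As a sketch (checkpoint names are mine; the fps readings would come from watching the on-screen counter):

```python
# Early-abort thresholds taken from the description above: if any observed
# checkpoint reads under its threshold, the run won't be worth finishing.
CHECKPOINTS = {
    "start": 63.0,            # below this -> bad run
    "landing_bay_pan": 70.0,  # below this -> won't break 15K
    "ship_arrival_end": 110.0,
}

def should_abort(readings: dict) -> bool:
    """True if any observed checkpoint fps is under its threshold."""
    return any(readings[name] < CHECKPOINTS[name]
               for name in readings if name in CHECKPOINTS)

print(should_abort({"start": 64.2, "landing_bay_pan": 71.0}))  # False
print(should_abort({"start": 62.0}))                           # True
```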


----------



## WayWayUp

I'm interested in your 114fps at that part, as I regularly get 15k+ no problem but have never seen such numbers. What CPU are you using, and at what clock, if you don't mind me asking? Mainly because I want to use it as a target.
I've seen an LN2 user reach 119fps at that part, but he usually scores in the 17,xxx range so I simply didn't see it as comparable to my system.

There are a lot of differences of course from system to system. For instance, you may have hardware-accelerated GPU scheduling enabled, etc


----------



## bmgjet

7900X at 4.9GHz, so quite weak by today's standards.
In the bit where the ship comes in, the card goes right up to 2175 MHz with GPU-Z listing only V-OP/V-REL as the perf cap. Otherwise it's bouncing off PWR and V-OP/V-REL for the rest of the test.
My card is shunt modded and waterblocked, but I'm sure that bit where it drops to 58fps is killing a lot of score, since if you watch old runs from GN, he's doing 14.5K runs and his frames at that point touch 59-60, while I'm getting 15K.


----------



## WayWayUp

That could be a key difference with the waterblock. Since I'm still on air, perhaps the temps aren't there for it to jump up in clock.

Yeah, maybe you could tighten the RAM a bit; I don't recall any part that drops into the 50s.


----------



## MacMus

jura11 said:


> Yes you will loose one DP port with Gigabyte 390W BIOS, with unverified KFA2 390W BIOS you will not loose any BIOS etc
> Bricking the GPU you shouldn't if you will do that correctly, I think I have flashed my RTX 3090 probably 9 times with several different BIOS and no issues
> Hope this helps
> Thanks, Jura


Thanks for the explanation. So why were you suggesting the Gigabyte BIOS before?
Why do you say "unverified"?
Any advantage of one over the other?


----------



## bmgjet

MacMus said:


> Thank you explanation. So why u were suggesting before Gibabyte bios ?
> Why you saying "unverified" ?
> Any advantage over one or the other?


Unverified = a user uploaded it, vs a staff member having the card and uploading it.
The KFA2 BIOS is better:
It's made for an 18-phase card; the Gigabyte is made for a 19-phase card (Gigabyte is the only one to do 19 phases).
The KFA2 fan profile matches the other low-end cards; with the Gigabyte one you won't get full fan speed because of the different fans.
The KFA2 doesn't have a factory overclock, so it uses less power at idle since it's not boosting up to 1755MHz on 2D loads.
The KFA2 default power limit is 350W and the slider goes up to 111% (390W); the Gigabyte default is 370W and 104% for 390W.

This has been covered 5 times now in the last couple of pages of this thread.
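The power-limit slider arithmetic above is just a percentage of the BIOS default power target, so the two BIOSes reach roughly the same ceiling from different defaults (the quoted 390W is the rounded figure):

```python
# Slider percentage applied to the BIOS default power target.
def slider_watts(default_w: float, slider_pct: float) -> float:
    return default_w * slider_pct / 100.0

print(f"KFA2:     {slider_watts(350, 111):.1f} W")  # 388.5 W
print(f"Gigabyte: {slider_watts(370, 104):.1f} W")  # 384.8 W
```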


----------



## jomama22

reflex75 said:


> Nice efficient core!
> 
> My stock FE can go as high as 2100Mhz at only 0.950v thanks to low temperature (39 avg)
> PR 14636 https://www.3dmark.com/3dm/53877759?
> TS 22435 https://www.3dmark.com/3dm/53879031?
> 
> No shunt mod, so heavily power limited during the bench.
> My only option to improve score is working on frequency at low voltage range (0.850v-0.950v)
> 
> It looks to me that these Ampere chips are more efficient at lower voltages compared to Turing...


Got a cold window open for those runs? Lol. 39°C average temperature. Nice chip for sure though.


----------



## MacMus

bmgjet said:


> Unverified = a user uploaded it vs a staff member having the card and uploading it.


I'm sorry, but I did not understand what that means.
Who is a staff member, and who is a user?


----------



## FrouJoker

Hey guys, what temperatures do you have on your cards? I have 55°C at idle and 77°C in games (4K). Card is an MSI 3090 3X OC. Please tell me if you're not busy.


----------



## xrb936

Guys, how much improvement can I gain from a shunt mod? AFAIK Nvidia still has a power limit in its driver.


----------



## xrb936

FrouJoker said:


> Hey guys , what temperature do you have on your card . Because i have 55 degrees of celsius in not working , and 77 in games (4k) . Card by MSI - 3090 3X OC . Please told if you not busy .


Sounds awful. Mine is 24C/58C on idle/gaming. What's your room temp?


----------



## Falkentyne

xrb936 said:


> Guys, how much improvement I can gain from shunt mod? AFAIK Nvidia still has a power limit in its driver.


A 520W limit from 10 mOhm shunts will gain maybe 10% at 4K.
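For those wondering where the ~520W figure comes from, here is the shunt-mod arithmetic as a sketch. It assumes the stock shunts are 5 mOhm and the BIOS target is 350W (both common for reference-board 3090s, but check your own card):

```python
# The card measures current via the voltage drop across a shunt it assumes
# is the stock value. Soldering an extra resistor on top puts the two in
# parallel, lowering the real resistance, so the card under-reads true power.
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

stock = 5.0                          # assumed stock shunt value, mOhm
modded = parallel(stock, 10.0)       # 10 mOhm stacked on top -> ~3.33 mOhm
scale = stock / modded               # card under-reads power by this factor

bios_limit_w = 350.0                 # assumed reference-board power target
true_limit_w = bios_limit_w * scale
print(f"effective limit: {true_limit_w:.0f} W")  # ~525 W
```

So a 350W BIOS limit becomes roughly 525W of real draw, in line with the "520W limit" quoted above.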


----------



## alitayyab

MacMus said:


> I'm sorry but i did not uderstand what that means?
> who is staff member? and user ?


In this context:
Verified BIOS = a BIOS that has been uploaded to techpowerup and verified by techpowerup.
Unverified = a BIOS that has been uploaded to techpowerup and has not been verified by techpowerup. Consider this a BIOS that, as an example, I uploaded but that has not been tested/verified by techpowerup to work as intended.


----------



## FrouJoker

xrb936 said:


> Sounds awful. Mine is 24C/58C on idle/gaming. What's your room temp?


Hmm, maybe 23 degrees. The backplate is so hot I can't touch it, even when the card isn't under load. But when the computer is off, the backplate is cold.


----------



## FrouJoker

xrb936 said:


> Sounds awful. Mine is 24C/58C on idle/gaming. What's your room temp?


Which vendor's card do you have?


----------



## FrouJoker

xrb936 said:


> Guys, how much improvement I can gain from shunt mod? AFAIK Nvidia still has a power limit in its driver.


I think you can add around 10-15% to your performance, but the question is how hard it is...


----------



## xrb936

Falkentyne said:


> 520W limit from 10 mOhm shunts will gain maybe 10% at 4k.


Sounds fine. Thanks, I will look into it.


----------



## xrb936

FrouJoker said:


> What vender you have?


Strix 3090 OC


----------



## FrouJoker

xrb936 said:


> Strix 3090 OC


I have an MSI card, but I think my PC case has bad air vents, and that's why my temperatures are so bad.


----------



## xrb936

FrouJoker said:


> i have MSI card , but i think my pc case has bad air vents , and so i have so bad temperature .


Yes, it is important, unless you are using watercooling.


----------



## xrb936

The best I can do with my stock air-cooling Strix 3090. Still waiting for my EK block to arrive.









I scored 14 514 in Port Royal

Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10

I scored 20 433 in Time Spy

Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10


----------



## Falkentyne

xrb936 said:


> Strix 3090 OC


Strix 3090 OC? 
No need to shunt that then.
Just use the eVGA XOC Bios for the FTW3 and you will get 500W.

Sure you could still shunt with 10 mohm stacked shunts and probably get 650W, but you're not going to be able to cool any sustained 650W on air cooling, even with top tier thermal paste and a thermal pad re-work...but at least you won't see the power limit anymore!

Those are good bench results above.


----------



## FrouJoker

xrb936 said:


> The best I can do with my stock air-cooling Strix 3090. Still waiting for my EK block to arrive.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 514 in Port Royal
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10
> 
> I scored 20 433 in Time Spy
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10


How did you get such a high frequency?


----------



## xrb936

Falkentyne said:


> Strix 3090 OC?
> No need to shunt that then.
> Just use the eVGA XOC Bios for the FTW3 and you will get 500W.
> 
> Sure you could still shunt with 10 mohm stacked shunts and probably get 650W, but you're not going to be able to cool any sustained 650W on air cooling, even with top tier thermal paste and a thermal pad re-work...but at least you won't see the power limit anymore!
> 
> Those are good bench results above.


I am waiting for the EK block, so I won't worry about the temperature. What's the best shunt mod option for the Strix 3090 OC though? Mod all the shunts on the PCB?


----------



## xrb936

FrouJoker said:


> how you made so high frequency ?


Didn't change anything yet, just overclocked it in stock condition.


----------



## xrb936

FrouJoker said:


> how you made so high frequency ?


It's not high at all. Actually, it's low compared to other Strix 3090s.


----------



## MacMus

FrouJoker said:


> i have MSI card


what MSI?


----------



## MacMus

alitayyab said:


> In this context
> Verified Bios = A bios that has been uploaded to techpowerup and verified by techpowerup
> Unverified = A bios that has been uploaded to techpowerup & has not been verified techpowerup. Consider this a bios that as an example i uploaded but has not been tested/ verified to work as intended by techpowerup.


So in this case, was the KFA2 BIOS not verified by TechPowerUp to work on that card specifically, or just not verified by them in general?
Do I have to modify the BIOS somehow, or do I just download the latest KFA2 one and flash it? Is it possible to destroy the card this way?


----------



## alitayyab

MacMus said:


> So in this case, was the KFA2 BIOS not verified by TechPowerUp to work on that card specifically, or just not verified by them in general?
> Do I have to modify the BIOS somehow, or do I just download the latest KFA2 one and flash it? Is it possible to destroy the card this way?


1. No, the latter (in general not verified by them); TechPowerUp has no way of knowing either way, i.e. whether it works or not, as they have not tested the BIOS. 
2. Again no; as long as you know how to flash, it should flash without any modification. 
3. Can it destroy your card? Yes, if you do not know what you are doing. But generally, barring a power failure or a Windows/OS crash, it is very safe. You do this at your own risk, though. 

I have a Palit 3090 GamingPro OC and have flashed it with an MSI BIOS, losing one DP in the process. The gains, while good for synthetics, did not give me a better gaming experience at all. Further, flashing for a higher power limit is just one variable; decent cooling is another. Running components faster degrades them faster, as they generate more heat, and this accelerates even more if the excess heat is not appropriately dissipated. Unless you are only interested in a high score in synthetic benchmarks or really care about those 10% extra frames, leave things as they are and enjoy gaming.


----------



## motivman

MacMus said:


> So in this case, was the KFA2 BIOS not verified by TechPowerUp to work on that card specifically, or just not verified by them in general?
> Do I have to modify the BIOS somehow, or do I just download the latest KFA2 one and flash it? Is it possible to destroy the card this way?


lol, you are overthinking this. I flashed that KFA2 BIOS on my card, no issues. It is better than the Gigabyte 390W BIOS: same performance, and you do not lose a DisplayPort.


----------



## FrouJoker

MacMus said:


> what MSI?


OC 3X


----------



## FrouJoker

Guys, I found the problem. MSI made a silent card, but it runs hot. I fixed it: I just set the fan speed to 20%, and now it sits at 35 degrees.


----------



## FrouJoker

People, how can I check how many watts my card is drawing?


----------



## bmgjet

FrouJoker said:


> People, how can I check how many watts my card is drawing?


GPU-Z will show it (Board Power Draw sensor).
Also, I'd recommend editing your last post to add to it instead of posting post after post when no one has replied.
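If you'd rather log the draw from a script, `nvidia-smi` can report board power. Here's a minimal Python sketch; the query flags are standard `nvidia-smi` options, but the sample string in the demo is made up for illustration, and note that software readings on a shunt-modded card will under-report the real draw:

```python
import subprocess

def parse_power_draw(csv_output: str) -> list:
    """Parse watt values from `nvidia-smi --query-gpu=power.draw
    --format=csv,noheader,nounits` output (one line per GPU)."""
    return [float(line) for line in csv_output.splitlines() if line.strip()]

def read_power_draw() -> list:
    """Query the driver directly; requires an NVIDIA GPU and driver installed."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_power_draw(out)

if __name__ == "__main__":
    # Illustrative two-GPU sample string, not real readings:
    print(parse_power_draw("349.87\n102.33\n"))
```

Run it in a loop while benchmarking if you want a rough log of sustained draw rather than a single snapshot.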


----------



## Alex24buc

@bmgjet 

Just to clarify, regarding the fan speed of the KFA2 BIOS: if I use a fan curve, are you suggesting that with the Gigabyte BIOS I still can't get full speed from the 3 fans? I use the Palit GamingPro OC. Thanks in advance for your answer.


----------



## bogdi1988

lolhaxz said:


> Spin this puppy up tomorrow.
> 
> View attachment 2468128
> 
> 
> function over form!


What the heck are the 2 parts mounted on the back plate? Links?


EDIT: NVM. Found the EKWB part.


----------



## bmgjet

Alex24buc said:


> @bmgjet
> 
> Just to clarify, regarding the fan speed of the KFA2 BIOS: if I use a fan curve, are you suggesting that with the Gigabyte BIOS I still can't get full speed from the 3 fans? I use the Palit GamingPro OC. Thanks in advance for your answer.


On my XC3 with the KFA2 BIOS, 100% = 3,500 rpm on all fans.
With the Gigabyte BIOS, 100% = 3,200 rpm on the outer fans and 2,900 rpm on the center fan.


----------



## Alex24buc

@bmgjet 
I will install kfa2 bios, thanks again for your details.


----------



## Twintale

Small update regarding recent BIOS update for Palit cards:


----------



## khunpunTH

The water block has arrived: MSI Gaming X Trio with a cheap Barrow block.

PR 15242

I scored 15 242 in Port Royal (Intel Core i7-10700K, NVIDIA GeForce RTX 3090, 64-bit Windows 10) on www.3dmark.com

TS graphics score top #22: 23249

I scored 20 297 in Time Spy (Intel Core i7-10700K, NVIDIA GeForce RTX 3090, 64-bit Windows 10) on www.3dmark.com

Tomorrow I will try chilled water.
PS: 520W BIOS, no shunt.


----------



## WilliamLeGod

khunpunTH said:


> The water block has arrived: MSI Gaming X Trio with a cheap Barrow block.
>
> PR 15242
>
> I scored 15 242 in Port Royal (Intel Core i7-10700K, NVIDIA GeForce RTX 3090, 64-bit Windows 10) on www.3dmark.com
>
> TS graphics score top #25: 23205
>
> I scored 20 311 in Time Spy (Intel Core i7-10700K, NVIDIA GeForce RTX 3090, 64-bit Windows 10) on www.3dmark.com
>
> Tomorrow I will try chilled water.


Is this 500 or 520W?


----------



## MacMus

alitayyab said:


> 1. No, the latter (in general not verified by them); TechPowerUp has no way of knowing either way, i.e. whether it works or not, as they have not tested the BIOS.
> 2. Again no; as long as you know how to flash, it should flash without any modification.
> 3. Can it destroy your card? Yes, if you do not know what you are doing. But generally, barring a power failure or a Windows/OS crash, it is very safe. You do this at your own risk, though.
> 
> I have a Palit 3090 GamingPro OC and have flashed it with an MSI BIOS, losing one DP in the process. The gains, while good for synthetics, did not give me a better gaming experience at all. Further, flashing for a higher power limit is just one variable; decent cooling is another. Running components faster degrades them faster, as they generate more heat, and this accelerates even more if the excess heat is not appropriately dissipated. Unless you are only interested in a high score in synthetic benchmarks or really care about those 10% extra frames, leave things as they are and enjoy gaming.


Thank you for the deep explanation. I will be running my 3090 with a water block in my loop, so cooling is not an issue. I was able to cool a 500W 2080 Ti, but I did use the stock BIOS.
So if the BIOSes do not match or are incompatible, can this cause issues, or does it just not matter on the 3090 series?

Why did you flash the Palit with an MSI BIOS rather than the KFA2 one?

So I just download the other vendor's BIOS and flash it onto my card as-is?


----------



## MacMus

bmgjet said:


> KFA2 bios is better.





jura11 said:


> Hi there
>
> Here is the BIOS which I'm using on my Palit RTX 3090 GamingPro:
>
> KFA2 RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com
>
> For the flash procedure please use this guide:
>
> How To Flash A Different BIOS On Your 1080 Ti (www.overclock.net): "Important note: After flashing your BIOS or reinstalling your video card driver unzip the file below and right click it, Edit it using NotePad, change the '-pl 3**' to your max TDP on your card, Save As the file, choose 'All Files', not '.txt', save the powerlimit.bat file, than right click it..."
>
> Hope this helps
>
> Thanks, Jura


Any issues with that BIOS? Like fans not working, etc.?


----------



## jura11

MacMus said:


> Any issues with that BIOS? Like fans not working, etc.?


I have no issues with that BIOS, although I'm running my Palit RTX 3090 GamingPro under water, so I can't comment on the fans, sorry about that.

Hope this helps 

Thanks, Jura


----------



## menko2

Has anyone tried repasting the Strix 3090 with Thermal Grizzly Kryonaut? Any improvement?


----------



## GQNerd

menko2 said:


> Has anyone tried repasting the Strix 3090 with Thermal Grizzly Kryonaut? Any improvement?


Yes, and yes (although the remount probably helped as well), and liquid metal is performing even better. 

Temps during Full Load
*Stock cooler:*
Stock paste 65c - 75c
Kryonaut 55c - 62c 

*Water-cooled, Phanteks block*
Kryonaut 45c - 52c
Conductonaut 30c - 42c


----------



## shiokarai

Miguelios said:


> Yes, and yes (although remount probably helped as well).. and liquid metal is performing even better.
> 
> Temps during Full Load
> *Stock cooler:*
> Stock paste 65c - 75c
> Kryonaut 55c - 62c
> 
> *Water-cooled, Phanteks block*
> Kryonaut 45c - 52c
> Conductonaut 30c - 42c


No issues with the liquid metal on the GPU die, like the liquid metal damaging the die over time? Those are some spectacular temp drops with the Conductonaut, just like with my 9900KS direct die + Conductonaut.


----------



## HyperMatrix

shiokarai said:


> No issues with the liquid metal on the GPU die, like liquid metal damaging die over time? Those are some spectacular temp drops with the conductonaut, just like with my 9900ks direct die+conductonaut.


LM has no effect on your gpu die. But it will eat aluminum (so...don't use aluminum blocks) and also bonds with copper so you'll see some discoloration, but that's not an issue and doesn't affect performance.


----------



## shiokarai

HyperMatrix said:


> LM has no effect on your gpu die. But it will eat aluminum (so...don't use aluminum blocks) and also bonds with copper so you'll see some discoloration, but that's not an issue and doesn't affect performance.


I have had no issues using LM on the 9900KS for one year straight (added some a few days ago and it looks just like new, maybe some discoloration, even when using a Heatkiller nickel block)... so the GPU is essentially the same? Apart from the fact there's a ton of SMCs around the die...


----------



## HyperMatrix

shiokarai said:


> I have no issues using LM on the 9900ks for the one year straight (added some few days ago and it looks just like new, maybe some discoloration even when using heatkiller nickel block)... so GPU is essentially the same? apart from the fact there's a ton of SMCs around the die...


Nickel shouldn't be bonding with the LM so you're good there. Will be minimal discoloration at best. But it definitely will bond with copper. It's not an issue though. You don't need to scrape it off before your next mount/remount. And it won't affect performance. 

As for the GPU die, the LM will have absolutely no effect on it and can be completely wiped off without a trace after years of usage. Applying too much LM may leak out on to other components though, so if you're not super experienced with your application of LM, I'd recommend applying some conformal coating around the die to prevent a short in case anything leaks out. 

But best advice for LM application is to apply the LM to both surfaces, and also to remember that less is more. You'd be surprised just how much you can spread LM. Even when it looks like it's completely flat and there's nothing left to spread, you can spread it. Use as absolutely little LM as you can and you will be fine. Extra precautions never hurt though. Especially if you're vertically mounting your card. For example I apply conformal coating to my laptop when using LM. But not to my video cards or desktop CPUs. But in general it would be good practice to apply it in all cases.


----------



## HyperMatrix

You guys see the CyberPunk 2077 RTX 3090 benchmarks? Kinda painful to be honest. Lol. But looking like Ray Tracing Medium setting with DLSS Quality (which is the lowest version I can use without the lower image quality bothering me) is possible at 60fps+. Can never tell with these charts because you have no idea what clock speed their 3090 was running at. Is it one of the cards that's throttling down to 1750MHz? Or one of the ones that's getting 2GHz. Cuz if this chart is a throttling 1750MHz 3090 performance, then we can expect an extra 20-25% higher performance than what's shown. 

If it supports mGPU maybe I should hang on to that other FTW3 HydroCopper that's coming soon so I can play this game before selling it. Haha.


----------



## bmagnien

HyperMatrix said:


> You guys see the CyberPunk 2077 RTX 3090 benchmarks? Kinda painful to be honest. Lol. But looking like Ray Tracing Medium setting with DLSS Quality (which is the lowest version I can use without the lower image quality bothering me) is possible at 60fps+. Can never tell with these charts because you have no idea what clock speed their 3090 was running at. Is it one of the cards that's throttling down to 1750MHz? Or one of the ones that's getting 2GHz. Cuz if this chart is a throttling 1750MHz 3090 performance, then we can expect an extra 20-25% higher performance than what's shown.
> 
> If it supports mGPU maybe I should hang on to that other FTW3 HydroCopper that's coming soon so I can play this game before selling it. Haha.
> 
> View attachment 2468285


They delayed a month after going gold for a reason. Day 1 patch plus 46gb patch soon after that they confirmed was separate from day 1 patch are supposedly optimization focused rather than in-game bug focused. Plus Nvidia game-ready drivers. I’d expect substantial increases from those 2 things alone


----------



## HyperMatrix

bmagnien said:


> They delayed a month after going gold for a reason. Day 1 patch plus 46gb patch soon after that they confirmed was separate from day 1 patch are supposedly optimization focused rather than in-game bug focused. Plus Nvidia game-ready drivers. I’d expect substantial increases from those 2 things alone


For the sake of the RTX 3080's 11.4 FPS at Ray Tracing Ultra with native 4K, I hope you're right. But this game seems to be the first to be truly memory-bound, crippling the 3080's 10GB of VRAM.


----------



## bmagnien

Also, going back to those temp comparisons above, are 10C load / 15C idle deltas between LM and traditional paste the norm? That's ****ing insane. People redo their whole loops and don't get that kind of difference. I was originally holding off on LM as the juice just didn't seem worth the squeeze, but that's kind of not possible to ignore. Am I missing something?


----------



## HyperMatrix

bmagnien said:


> Also, going back to those temp comparisons above, is 10-13c delta between LM and regular paste the norm? That’s ****ing insane. People redo their whole loops and don’t get that kind of difference. I was originally holding off on LM as the juice just didn’t seem worth the squeeze but thats kind of not possible to ignore. Am I missing something?


Look at it this way. Copper has a thermal conductivity of about 400 W/mK. So you go with a copper block for the best performance. But how are you connecting that copper to your die? The heat transfer rate between your die and your copper block relies heavily on your thermal paste. So if one interface transfers heat at 40 W/mK vs 12 W/mK... you're going to be passing that heat to the copper block about 3.5x faster. As long as your pump and radiator are doing their job, that ends up being a huge difference.


----------



## bmagnien

HyperMatrix said:


> Look at it this way. Copper has a thermal conductivity of about 400 W/mK. So you go with a copper block for the best performance. But how are you connecting that copper to your die? The heat transfer rate between your die and your copper block relies heavily on your thermal paste. So if one interface transfers heat at 40 W/mK vs 12 W/mK... you're going to be passing that heat to the copper block about 3.5x faster. As long as your pump and radiator are doing their job, that ends up being a huge difference.


That’s wild...thx for the explanation.
Also - bitcoin’s about to be cheap boys, sell now! $340,000 worth of MSI's Nvidia RTX 3090s Stolen in China | Tom's Hardware


----------



## mirkendargen

HyperMatrix said:


> Look at it this way. Copper has a thermal conductivity of about 400 W/mK. So you go with a copper block for the best performance. But how are you connecting that copper to your die? The heat transfer rate between your die and your copper block relies heavily on your thermal paste. So if one interface transfers heat at 40 W/mK vs 12 W/mK... you're going to be passing that heat to the copper block about 3.5x faster. As long as your pump and radiator are doing their job, that ends up being a huge difference.


It isn't that simple; what people seem to ignore in the "W/mK" unit is the m (meters). If you have a 20 W/mK TIM and a 10 W/mK TIM, but the 20 W/mK layer is twice as thick, the heat transfer is the same. I suspect a lot of the reason people get such good results with liquid metal is that its viscosity is way lower than regular paste, so they end up with a layer several times thinner than slathering on Kryonaut.
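To put rough numbers on that point, treat the TIM as a flat layer: the temperature drop across it is dT = (P/A) * t / k, where P is power, A the die area, t the layer thickness, and k the conductivity. A quick sketch with made-up but plausible values (350W through a GA102-sized die; the 20 and 10 W/mK pastes are hypothetical, ~73 W/mK is Thermal Grizzly's headline figure for Conductonaut):

```python
def tim_delta_t(power_w, area_m2, thickness_m, k_w_per_mk):
    """Temperature drop across a flat TIM layer: dT = (P/A) * t / k."""
    heat_flux = power_w / area_m2          # W/m^2 through the interface
    return heat_flux * thickness_m / k_w_per_mk

P, A = 350.0, 628e-6                       # 350 W through a 628 mm^2 die

# A 20 W/mK paste at 100 um vs a 10 W/mK paste at 50 um: identical drops.
dt_thick = tim_delta_t(P, A, 100e-6, 20.0)
dt_thin = tim_delta_t(P, A, 50e-6, 10.0)

# Liquid metal (~73 W/mK) spread to a very thin ~20 um layer.
dt_lm = tim_delta_t(P, A, 20e-6, 73.0)

print(round(dt_thick, 2), round(dt_thin, 2), round(dt_lm, 2))
```

Real interfaces add contact resistance and the layer is never uniform, so treat this purely as an order-of-magnitude illustration of why both conductivity and thickness matter.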


----------



## HyperMatrix

mirkendargen said:


> It isn't that simple; what people seem to ignore in the "W/mK" unit is the m (meters). If you have a 20 W/mK TIM and a 10 W/mK TIM, but the 20 W/mK layer is twice as thick, the heat transfer is the same. I suspect a lot of the reason people get such good results with liquid metal is that its viscosity is way lower than regular paste, so they end up with a layer several times thinner than slathering on Kryonaut.


This is very true. If LM is applied properly, the layer ends up far thinner than what you'd normally get with a paste.


----------



## menko2

Miguelios said:


> Yes, and yes (although remount probably helped as well).. and liquid metal is performing even better.
> 
> Temps during Full Load
> *Stock cooler:*
> Stock paste 65c - 75c
> Kryonaut 55c - 62c
> 
> *Water-cooled, Phanteks block*
> Kryonaut 45c - 52c
> Conductonaut 30c - 42c


Wow that's definitely worth it then. I might get the Kryonaut Extreme so even better.

Was it dificult to do? I'm not that experienced.


----------



## HyperMatrix

For anyone who was curious about the 6800/6900 XT "rasterization supremacy" over the RTX series, and the effect of "infinity cache" vs. Nvidia's larger memory bandwidth:

3090 Advantage over 6800XT:

1080p - 3.5%
1440p - 23.7%
2160p - 36.1%

And this is without RT or DLSS. Just basic rasterization performance. That memory bandwidth can't keep up at 4K. At all. And while the 6900 XT has 11% more cores, it has the exact same memory bandwidth and infinity cache size as the 6800 XT. It's cripple city even without RT which it absolutely fails at.


----------



## Falkentyne

bmagnien said:


> Also, going back to those temp comparisons above, are 10 load / 15c idle deltas between LM and traditional paste the norm? That’s ****ing insane. People redo their whole loops and don’t get that kind of difference. I was originally holding off on LM as the juice just didn’t seem worth the squeeze but thats kind of not possible to ignore. Am I missing something?





HyperMatrix said:


> Look at it this way. Copper has a thermal conductivity of about 400 W/mK. So you go with a copper block for the best performance. But how are you connecting that copper to your die? The heat transfer rate between your die and your copper block relies heavily on your thermal paste. So if one interface transfers heat at 40 W/mK vs 12 W/mK... you're going to be passing that heat to the copper block about 3.5x faster. As long as your pump and radiator are doing their job, that ends up being a huge difference.


I have to agree something's whacky with the W/mK. Those load temps are sort of baloney. Topping out at 40C with LM on a loop? That's almost impossible unless the water is chilled.
Even on CPUs, the improvement going from the best thermal paste to LM wasn't that high. The main improvement, besides temps, was that Kryonaut on direct-die CPUs would degrade and pump out within weeks at best, because it wasn't thick enough for those intense thermal stresses.

And this is why I think the water has to be chilled:

Conductonaut 30c - 42c <-- 30C? That's barely above ambient. And it doesn't say "idle to load temps"; it says "load temp range".
Yeah, I'm not buying that for a moment.

On my (RIP) Vega 64, going from Thermalright TFX or Kryonaut to LM was about 8C. And the Vega 64 was a hot chip.


----------



## stryker7314

What's a good OC for a 3090? My FTW3 Ultra has hit 2100 MHz so far and I haven't tried higher yet; I'm waiting to get it on water to do so. That's a problem for another day, though, since there are no blocks available to speak of, ugh. I want the Optimus block, but it seems like it will be impossible to get with the current demand...


----------



## HyperMatrix

Falkentyne said:


> I have to agree something's whacky with the W/mK. Those load temps are sort of baloney. Topping out at 40C with LM on a loop? That's almost impossible unless the water is chilled.
> Even on CPUs, the improvement going from the best thermal paste to LM wasn't that high. The main improvement, besides temps, was that Kryonaut on direct-die CPUs would degrade and pump out within weeks at best, because it wasn't thick enough for those intense thermal stresses.
> 
> And this is why I think the water has to be chilled:
> 
> Conductonaut 30c - 42c <-- 30C? That's barely above ambient. And it doesn't say "idle to load temps"; it says "load temp range".
> Yeah, I'm not buying that for a moment.
> 
> On my (RIP) Vega 64, going from Thermalright TFX or Kryonaut to LM was about 8C. And the Vega 64 was a hot chip.


Before switching to quick disconnects throughout my entire loop, which reduced the flow rate, and with my nano coolant being past the point of expiry/replacement, my Pascal Titan X would max out around 35-37C with an ambient of, I believe, 21C. This was with a shunt mod and 2.1GHz. So if you have adequate flow, enough radiators/fans, and ambient is cool enough, the difference is big. I've never done a GPU block without LM so I can't compare that, but when I switched my CPU from paste to LM, I saw a drop of over 15C on the same water loop.


----------



## HyperMatrix

stryker7314 said:


> What's a good OC for a 3090? My FTW3 Ultra has hit 2100 MHz so far and I haven't tried higher yet; I'm waiting to get it on water to do so. That's a problem for another day, though, since there are no blocks available to speak of, ugh. I want the Optimus block, but it seems like it will be impossible to get with the current demand...


Depends on whether we're talking air or water. From my experience, which is open to criticism, all shunted 3090 cards under water will be able to do at least 2130MHz. More than half of all cards will be able to hit 2160MHz. Beyond that is anyone's guess.


----------



## mirkendargen

HyperMatrix said:


> Before switching to quick disconnects throughout my entire loop, which reduced the flow rate, and with my nano coolant being past the point of expiry/replacement, my Pascal Titan X would max out around 35-37C with an ambient of, I believe, 21C. This was with a shunt mod and 2.1GHz. So if you have adequate flow, enough radiators/fans, and ambient is cool enough, the difference is big. I've never done a GPU block without LM so I can't compare that, but when I switched my CPU from paste to LM, I saw a drop of over 15C on the same water loop.


People should talk about coolant temp when comparing blocks/TIMs, not "ambient". You could have a ton of rads with fans blasting and keep your coolant basically at ambient, or you could have too few rads with not enough airflow and coolant 20C+ above ambient. The temp of your coolant is the "ambient temp" of your water block.


----------



## HyperMatrix

mirkendargen said:


> People should talk about coolant temp when comparing blocks/TIM's, not "ambient". You could have a ton of rads with fans blasting and keep your coolant basically at ambient, or you could have not enough rads with not enough airflow and coolant 20C+ above ambient.


You mean the 240mm AIO radiator with 2x 5W fans on my FTW3 Hybrid won't be able to cool as well as my 360mm + 240mm + 120mm with 6x 20W fans??


----------



## MacMus

jura11 said:


> I have no issues with that BIOS although I'm running under water my Palit RTX 3090 GamingPro and can't comment on the fans, sorry about that there
> 
> Hope this helps
> 
> Thanks, Jura


Any reason you decided to go with a 2-pin instead of a 3-pin if you're on water?


----------



## mirkendargen

HyperMatrix said:


> You mean the 240mm AIO radiator with 2x 5W fans on my FTW3 Hybrid won't be able to cool as well as my 360mm + 240mm + 120mm with 6x 20W fans??


I know right?

And for the record, my 3090 idles at my coolant temp with Kryonaut; I expect most people's do with whatever they're using, unless they did something wrong, heh. It's why comparing to ambient is so misleading.


----------



## warbucks

For anyone with an EVGA FTW3, Aliexpress has a sale on right now and I picked up the new Bykski waterblock for a decent price.


----------



## MacMus

what is the best 3090 model to put on water?


----------



## HyperMatrix

MacMus said:


> what is the best 3090 model to put on water?


Honestly mate, the KingPin because of built in voltage controls. Outside of that, it doesn't seem to matter. It all comes down to silicon lottery. Any card can be great. Just depends on the chip you get. Some cards are easier to put under water like the Strix because of more block options. Shunt gets rid of power limits so as long as you're willing to do that, any card is a good card provided you get lucky and get a good chip.


----------



## pat182

Seeing the CP2077 benches today, I'm gonna keep that 520W BIOS xD

Gonna prove these benches lies with a 2100MHz clock.

Pretty sure the clocks were in the low 1700MHz range, like in Control with a 2-pin 3090.


----------



## MacMus

HyperMatrix said:


> Honestly mate, the KingPin because of built in voltage controls. Outside of that, it doesn't seem to matter. It all comes down to silicon lottery. Any card can be great. Just depends on the chip you get. Some cards are easier to put under water like the Strix because of more block options. Shunt gets rid of power limits so as long as you're willing to do that, any card is a good card provided you get lucky and get a good chip.


If I don't plan to do that, it's better to go with a 3-pin like the MSI Trio or Asus Strix, right? Which card has the best PCB?


----------



## pat182

MacMus said:


> If I don't plan to do that, it's better to go with a 3-pin like the MSI Trio or Asus Strix, right? Which card has the best PCB?


Strix for the 480W BIOS if you don't want to mod a BIOS; MSI Trio if you wanna flash the 520W BIOS.


----------



## HyperMatrix

MacMus said:


> If I don't plan to do that, it's better to go with a 3-pin like the MSI Trio or Asus Strix, right? Which card has the best PCB?


Yes, without shunt modding you can get up to 500-520W on any of the 3-pin cards with a BIOS flash. The Strix has the best board quality and design. It also has an extra HDMI 2.1 port, which may be beneficial to you. But at the end of the day, any of the cheapo cards can clock just as well, so no guarantees either way. The Strix does have the advantage of an Aquacomputer block with an actively cooled backplate, though, so that's a plus.


----------



## jura11

MacMus said:


> Any reason you decided to go with a 2-pin instead of a 3-pin if you're on water?


The reason I went with a 2-pin instead of a 3-pin: this GPU is going into a friend's build, and he asked me to test the GPU for now, until I build his loop, which should happen at the end of the year. By then I really hope I will have an Asus RTX 3090 Strix OC here; I'm now 650th in the queue, hahaha

Hope this helps 

Thanks, Jura


----------



## MacMus

pat182 said:


> Strix for the 480W BIOS if you don't want to mod a BIOS; MSI Trio if you wanna flash the 520W BIOS.


I don't understand why not every card can flash this 520W BIOS. Are there any restrictions besides 2-pin and 3-pin BIOSes not being loaded onto each other?


----------



## jura11

MacMus said:


> I don't understand why not every card can flash this 520W BIOS. Are there any restrictions besides 2-pin and 3-pin BIOSes not being loaded onto each other?


Hi there 

It's down mostly to the IO and DP or HDMI layout. The Strix is usually only compatible with itself when it comes to the IO and DP layout, and when you flash an FTW3 or Kingpin BIOS on it you will usually lose DP or HDMI ports... 

You can try flashing a 3-pin BIOS on a non-shunted, non-modded 2-pin RTX 3090, but you will lose performance with that; I posted my findings on that previously. 

Hope this helps 

Thanks, Jura


----------



## jura11

Hi there 

Here is another 390W BIOS, from KFA2 too, which has a 1740MHz boost clock; memory clocks are the same as with the other 390W BIOS:

GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com

Hope this helps 

Thanks, Jura


----------



## MacMus

Woohoo, just snatched a 3090 Strix ;-D


----------



## HyperMatrix

For anyone interested in an FTW3 (or KingPin) Hybrid card, apparently the CLC covers the memory as well. It's not just cooled by the regular fan. Which would explain the cool temperatures I was getting off the VRAM sensor. Not sure how that translates into the backside temps. But still an interesting tidbit.


----------



## Chamidorix

Quite depressing to get lumped in with the final Kingpin order even with a time of 6:0:42....

Also quite depressing that we still don't have a single quality Kingpin PCB picture. There are around a hundred cards out in the wild (~20% of the 500 in the initial batch, according to Vince); I've explicitly PMed 3 separate people who I KNOW have taken the card apart under LN2 or a chiller, GN and Jay have had theirs for weeks now, and yet we still don't have a single PCB reference except for the blurry, compressed recording on the EVGA Twitch.

Finally, it's depressing that the Kingpin XOC BIOS situation is ye olde good boys club. Vince is ignoring requests via email and Facebook, so it is only Jacob responding, and only to known and established LN2 overclockers. They are also only giving out signed BIOS files via a firmware update package that won't flash outside of your specific serial.


----------



## Chamidorix

HyperMatrix said:


> For anyone interested in an FTW3 (or KingPin) Hybrid card, apparently the CLC covers the memory as well. It's not just cooled by the regular fan. Which would explain the cool temperatures I was getting off the VRAM sensor. Not sure how that translates into the backside temps. But still an interesting tidbit.


The back memory modules are entirely passively cooled by the thin backplate. The few people I've talked to who are OCing these things say this is the number one bottleneck in getting performance out of a stock Kingpin; you can jack up NVVDD and MSVDD quite high on the stock AIO (even more with LM), but you really have to upgrade the backplate cooling situation if you want to raise FBVDD and get more bandwidth for your higher clocks.


----------



## HyperMatrix

Chamidorix said:


> The back memory modules are entirely passively cooled by thin backplate. The few people I've talked to OCing these things say this is the number one bottleneck in getting performance out of a stock kingpin; you can jack up nvvdd, msvddd quite high on stock AIO (even more with LM) but you really have to upgrade the backplate cooling situation if you want to up fbvdd and get more bandwidth for your higher clocks.


I'll do some FLIR testing tonight because I'm curious to see how much of an impact cold plate contact from the front has on the back temps.


----------



## mirkendargen

Chamidorix said:


> Quite depressing to get lumped in with the final Kingpin order even with a time of 6:0:42....
> 
> Also quite depressing that we still don't have a single quality Kingpin PCB picture. There are around a hundred cards out in the wild (~20% of the 500 in the initial batch, according to Vince); I've explicitly PMed 3 separate people who I KNOW have taken the card apart under LN2 or a chiller, GN and Jay have had theirs for weeks now, and yet we still don't have a single PCB reference except for the blurry, compressed recording on the EVGA Twitch.
> 
> Finally, it's depressing that the Kingpin XOC BIOS situation is ye olde good boys club. Vince is ignoring requests via email and Facebook, so it is only Jacob responding, and only to known and established LN2 overclockers. They are also only giving out signed BIOS files via a firmware update package that won't flash outside of your specific serial.


If a kind soul were to get one of those BIOSes that can only be flashed with EVGA software to their specific serial... I'm pretty sure that check is entirely in the software flashing it, and that person could dump the BIOS using nvflash after installing it, and it would work for anyone. The BIOS may well have a specific version for that person and be traceable back to them... but I don't think EVGA can make it undumpable.


----------



## jura11

MacMus said:


> wuhu just snatched 3090 strix ;-D


Good luck there with the Strix. I'm 650th in the queue at Scan for the Asus RTX 3090 Strix OC, hahaha 

I don't think I will see that GPU this year, maybe next year with a bit of luck 

Thanks, Jura


----------



## MacMus

HyperMatrix said:


> Yes without shunt modding you can get up to 500W-520W on any of the 3-pin cards with a bios flash. The Strix has the best board quality and design. It also has an extra HDMI 2.1 port which may be beneficial to you. But end of the day, any of the cheapo cards can clock just as well. So no guarantees either way. Strix does have the advantage of getting an AquaComputer block with active cooled backplate though, so that's a plus.


I was just able to snatch a 3090 Strix! Thanks to its 2 HDMI ports: one for VR, the second for my receiver.


----------



## MacMus

jura11 said:


> Good luck there with the Strix. I'm 650th in the queue at Scan for the Asus RTX 3090 Strix OC, hahaha
> 
> I don't think I will see that GPU this year, maybe next year with a bit of luck
> 
> Thanks, Jura


Thank you. I have my Ventus 3090 for sale at MSRP, unopened. Any idea where to sell?


----------



## jura11

MacMus said:


> Thank you. I have my Ventus 3090 for sale at MSRP, unopened. Any idea where to sell?


Sorry, I don't know where to sell. Try eBay, Craigslist or Facebook, there or over here; maybe someone will take it from you over here.

Hope this helps

Thanks, Jura


----------



## MacMus

jura11 said:


> Sorry, I don't know where to sell. Try eBay, Craigslist or Facebook, there or over here; maybe someone will take it from you over here.
> 
> Hope this helps


I can always return it, but if someone can get it for a reasonable price, then why not help a friend in need.


----------



## Herald

Palit Gamerock OC 3090 with the 520w bios on stock cooler

Score validated, 5th on the leaderboard 









UNIGINE Benchmarks: benchmark.unigine.com


----------



## HyperMatrix

MacMus said:


> Thank you. I have my Ventus 3090 for sale at MSRP, unopened. Any idea where to sell?


Best advice? Do cash and trade for something else you need at MSRP. For example if you wanted to buy a PS5 or a 5950x or whatever other thing you may want that's not commonly available. So neither you nor the other person scalps. And you both get to acquire something you wanted at normal MSRP pricing. That's my advice, anyway. Otherwise I personally do a small increase in price of $100-200 more for local sales. If you sell it too low, someone's gonna buy it off you and resell it higher. May as well be a bit higher so it's not worth someone buying it for the purpose of reselling.


----------



## MacMus

HyperMatrix said:


> Yes without shunt modding you can get up to 500W-520W on any of the 3-pin cards with a bios flash. The Strix has the best board quality and design. It also has an extra HDMI 2.1 port which may be beneficial to you. But end of the day, any of the cheapo cards can clock just as well. So no guarantees either way. Strix does have the advantage of getting an AquaComputer block with active cooled backplate though, so that's a plus.


That active backplate! That was the deciding factor. I'm about to put in an order for it. Just wondering if I can get it now in the US, or do I need to wait until the end of the month?


----------



## MacMus

HyperMatrix said:


> Best advice? Do cash and trade for something else you need at MSRP. For example if you wanted to buy a PS5 or a 5950x or whatever other thing you may want that's not commonly available. So neither you nor the other person scalps. And you both get to acquire something you wanted at normal MSRP pricing. That's my advice, anyway. Otherwise I personally do a small increase in price of $100-200 more for local sales. If you sell it too low, someone's gonna buy it off you and resell it higher. May as well be a bit higher so it's not worth someone buying it for the purpose of reselling.


I don't need anything right now ;-) I'm waiting for the new TR to arrive


----------



## dangerSK

Chamidorix said:


> Quite depressing to get lumped in with the final kingpin order even with a time of 6:0:42....
> 
> Also quite depressing that we still don't have a single quality KingPin PCB picture. Around a hundred cards out there in the wild (~20% of the 500 in the initial batch according to Vince); I've explicitly PMed 3 separate people who I KNOW have taken the card apart under LN2 or chiller, GN and Jay have had theirs for weeks now, and yet we still don't have a single PCB reference except for the blurry and compressed recording on the EVGA Twitch.
> 
> Finally, depressing that the Kingpin XOC bios situation is ye old good boys club. Vince is ignoring requests on email+facebook, so it is only Jacob responding, and only to known and established ln2 overclockers. They are also only giving out signed bios files via firmware update package that won't flash outside of your specific serial.


EVGA is totally ignoring the enthusiast/XOC community this gen. Vince replied to me, however I have the FTW3, which he doesn't support, so no go there for me (wanted the XOC BIOS for the FTW3, that 1000W+ one).
Yeah, I'm also really frustrated that there are no proper KP PCB pics. Blame this on lazy EVGA; Tin was always sharing his guide with pics etc., and now that he's gone they don't care about it at all.
I'm sure it's not hard to dump the BIOS; someone with a KP has to dump it after flashing. The only problem might be some number that can trace back to that person, but I highly doubt they are changing the BIOS version for each person to trace it back. It will likely be in the GPU-Z NVIDIA BIOS string, so if you get one, don't share that part of GPU-Z.


----------



## pat182

Hey guys, I'm benching the EVGA 500 W and KPE BIOSes on my Strix. I think it's drawing more than the BIOS says: I'm seeing mostly 400 W board power in GPU-Z, and since you have to add ~150 W for the 3rd 8-pin not reading, it seems I'm going way over the max power limit (550 W). Anyone having the same results?


----------



## HyperMatrix

pat182 said:


> Hey guys, I'm benching the EVGA 500 W and KPE BIOSes on my Strix. I think it's drawing more than the BIOS says: I'm seeing mostly 400 W board power in GPU-Z, and since you have to add ~150 W for the 3rd 8-pin not reading, it seems I'm going way over the max power limit (550 W). Anyone having the same results?


The Strix with the XOC BIOS was setting off the alarm on my 900W battery backup when CPU load was at 50%. Haven't had that happen any other time. And individual power connectors were getting up to 160-170W each, with the PCIe slot at 50W. So technically it could be peaking at 530-560W.
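The arithmetic behind these estimates can be sketched quickly. This is only a back-of-envelope check using the figures quoted in the posts above; the ~150 W for the unread third 8-pin is a rough community estimate, not a measured value:

```python
# GPU-Z "board power" on these BIOSes misses the third 8-pin rail,
# so the reported number understates the real draw.
reported_board_power_w = 400   # what GPU-Z shows during the bench run
unread_8pin_w = 150            # rough estimate for the unmonitored rail

actual_estimate_w = reported_board_power_w + unread_8pin_w
print(actual_estimate_w)  # 550 -> past the nominal 500-520 W limit
```

This lines up with the 530-560 W peak figure above, which is why the UPS alarm isn't surprising.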


----------



## cletus-cassidy

warbucks said:


> For anyone with an EVGA FTW3, Aliexpress has a sale on right now and I picked up the new Bykski waterblock for a decent price.


What price did you get?


----------



## FrouJoker

bmgjet said:


> GPUz will say.
> Also id recommend instead of posting post after post when no one has replied you edit your last post to add onto it.


Okay, I understood.


----------



## cakesg

So I'm looking to put my watercooled strix back on air/stock cooler to do some testing on air cooling. Most of the thermal putty that came with the stock cooler is gone/destroyed. How should I go about putting the card back together? Should I buy thermal pads or will I be fine to just run a few benchmarks?


----------



## andrvas

Herald said:


> Palit Gamerock OC 3090 with the 520w bios on stock cooler
> 
> Score validated, 5th on the leaderboard
> 
> 
> 
> 
> 
> 
> 
> 
> 
> UNIGINE Benchmarks: benchmark.unigine.com


How is this bios doing, compared to the 390w ones out there? I have the same card, just curious. What score do you get in port royal?


----------



## Nizzen

HyperMatrix said:


> Honestly mate, the KingPin because of built in voltage controls. Outside of that, it doesn't seem to matter. It all comes down to silicon lottery. Any card can be great. Just depends on the chip you get. Some cards are easier to put under water like the Strix because of more block options. Shunt gets rid of power limits so as long as you're willing to do that, any card is a good card provided you get lucky and get a good chip.


The Strix has voltage control too, with the Elmor tool. Hotwire support is one of the features you pay for on the Strix.


----------



## HyperMatrix

Nizzen said:


> The Strix has voltage control too, with the Elmor tool. Hotwire support is one of the features you pay for on the Strix.


Yes, but with the EVC2 you have to reapply after every boot. It's far more convenient on the KingPin card. Better deal to get the KingPin for $1900 than the Strix with EVC2 for $1850, unless you need the extra HDMI port.


----------



## stryker7314

Anyone running the bykski waterblock? If not which block ya running?

Tempted to get one from AliExpress and bite the bullet and pay like $75 for fast shipping to get it this month instead of Jan 21


----------



## Nizzen

HyperMatrix said:


> Yes, but with the EVC2 you have to reapply after every boot. It's far more convenient on the KingPin card. Better deal to get the KingPin for $1900 than the Strix with EVC2 for $1850, unless you need the extra HDMI port.


Almost impossible to buy a KingPin, that's too bad. Miss the 780/780 Ti EVGA Classified days with the EVBot  
Fun for everyone that wanted to play...


----------



## xrb936

HyperMatrix said:


> Yes, but with the EVC2 you have to reapply after every boot. It's far more convenient on the KingPin card. Better deal to get the KingPin for $1900 than the Strix with EVC2 for $1850, unless you need the extra HDMI port.


Man do you have 3DMark results for your FTW3 Ultra? I am going to grab one tomorrow. Which BIOS are you using?


----------



## HyperMatrix

Nizzen said:


> Almost impossible to buy kingpin, that's too bad. Miss the 780/780ti Evga classified days with evbot
> Fun for everyone that wanted to play...


That we can agree on. Haha.




xrb936 said:


> Man do you have 3DMark results for your FTW3 Ultra? I am going to grab one tomorrow. Which BIOS are you using?


Running XC3 bios. Problem with benchmark numbers from me is that I’m still running a 6950x at 4.3GHz 2933MHz CL15 memory. So numbers aren’t all that great. But I did do port royal with approximately average clock speed of around 2075MHz with vram at 22GHz. That got me about 14,400 but I’m power limited the whole run. Not going to shunt and repaste until I get a block though.


----------



## xrb936

HyperMatrix said:


> That we can agree on. Haha.
> 
> 
> 
> 
> Running XC3 bios. Problem with benchmark numbers from me is that I’m still running a 6950x at 4.3GHz 2933MHz CL15 memory. So numbers aren’t all that great. But I did do port royal with approximately average clock speed of around 2075MHz with vram at 22GHz. That got me about 14,400 but I’m power limited the whole run. Not going to shunt and repaste until I get a block though.


I see. But you can just run the benchmark and share the graphics score only.


----------



## HyperMatrix

xrb936 said:


> I see. But you can just run the benchmark and show the graphic points only.


I think the score is still affected by cpu and system memory. Some people were complaining about a drop in points after switching to a 5950x. I could be wrong about port royal, but either way best I’ve been able to get right now in port royal is around 14,400 because I’m heavily power limited. No issue with thermals. End the run under 50C. No VRel/VOp. Just power throttling the whole time. Would need shunts to unleash the chip.


----------



## xrb936

HyperMatrix said:


> I think the score is still affected by cpu and system memory. Some people were complaining about a drop in points after switching to a 5950x. I could be wrong about port royal, but either way best I’ve been able to get right now in port royal is around 14,400 because I’m heavily power limited. No issue with thermals. End the run under 50C. No VRel/VOp. Just power throttling the whole time. Would need shunts to unleash the chip.


Fair point. But that score is about the same as my Strix 3090.


----------



## dangerSK

Hey guys,
I did a video about shunt modding my 3090 FTW3, so if you want to watch how it's done, or maybe ask about shunt modding, here's a link


----------



## HyperMatrix

xrb936 said:


> Fire point. But that point is about the same as my Strix 3090.


Yeah not much I can do without shunting at the moment. Doesn’t matter that it can clock to over 2200MHz when it doesn’t have the power budget to do so. Gonna see if I can pick up an optimus block for it. Don’t want that bykski nonsense.


----------



## GQNerd

shiokarai said:


> No issues with the liquid metal on the GPU die, like liquid metal damaging die over time? Those are some spectacular temp drops with the conductonaut, just like with my 9900ks direct die+conductonaut.


Nope. Conformal coating and Kapton tape surrounding the die, and I changed from a vertical GPU mount to horizontal in the slot to ensure gravity doesn't affect the LM


----------



## GQNerd

menko2 said:


> Wow that's definitely worth it then. I might get the Kryonaut Extreme so even better.
> 
> Was it dificult to do? I'm not that experienced.


Nope, the Strix is actually one of the easier cards to disassemble. Watch videos beforehand; with Kryonaut/Extreme you'll be fine.. it's LM that's dangerous


----------



## Aristotle

I'm thinking about using liquid metal on my strix. Is the cooler contact surface nickel plated copper?

I'm getting around 77C under load. Hoping liquid metal can drop that a fair amount.


----------



## xrb936

HyperMatrix said:


> Yeah not much I can do without shunting at the moment. Doesn’t matter that it can clock to over 2200MHz when it doesn’t have the power budget to do so. Gonna see if I can pick up an optimus block for it. Don’t want that bykski nonsense.


What do you mean by "over 2200MHz"? Under what condition? How did you achieve that?


----------



## GQNerd

Falkentyne said:


> Those load temps are sorta baloney. Topping out at 40C with LM on a loop? That's almost impossible unless the water is chilled.
> And this is proof that water has to be chilled:
> Conductonaut 30c - 42c <--30C? That's barely above ambient. And it doesn't say "idle to load temps"--it says "load temp range".
> Yeah I'm not buying that for a moment.


No need to call others’ #s baloney, instead you should ask what they’re running.. 

*The numbers I provided earlier are strictly for my Strix.* Just so you have the whole picture:

PC with no panels by a slightly opened window. It’s been approx 5-9c outside at night, and ambient inside set to 21c. So I guess yes, chilled water..

Overall Loop ambient at idle after temps flattening out, is 26-27c

Setup:
1200 L/h pump directly into the GPU, then into a CHONKY 240 rad, then into the CPU block and out to a 360 rad. Conductonaut on both CPU and GPU. Gelid Ultimate 15 W/mK thermal pads all around the GPU, front and back..
I have some 15.2k+ PR runs where the card doesn’t even reach 40c. _GQNERD on 3dmark_

Typically after 2 hrs of gaming, highest I see is gpu at 40c and CPU (10900k @ 5.2) at 72c. My loop’s ambient temp ends up being around 55-60c

Just want to paint the whole picture for anyone contemplating doing it. Your results may vary obviously


----------



## Herald

andrvas said:


> How is this bios doing, compared to the 390w ones out there? I have the same card, just curious. What score do you get in port royal?


This is my score with the stock performance BIOS. So not a huge difference, but it can be improved; I think I can hit 20k with the KingPin BIOS. 

I was lucky with the RAM binning, I can run Unigine with a +1675 OC on the VRAM. Also, the temps with the fans maxed are comparable to AIOs. This thing is nuts


----------



## Carillo

stryker7314 said:


> Anyone running the bykski waterblock? If not which block ya running?
> 
> Tempted to get one from aliexpress and bite the bullett and pay for fast shipping for like 75 to get it this month instead of Jan 21


Even if you pay $75 shipping with DHL, they take 3-4 weeks to hand it over to DHL. Trust me, I have ordered 4 different blocks. China shuts down from late December to February, so good luck getting anything before that


----------



## long2905

What temps are you guys seeing with the stock air cooler? I was going to sell the iChill X4 for something else with 3 pins and keep it stock, hoping the air cooler on another model would be better, but the results I'm seeing on here are all over the place, even on the Strix and FTW3.

Would the Gaming X Trio still be a decent choice, flashing to the EVGA vBIOS?


----------



## asdkj1740

Looking forward to seeing people say this is garbage because of so many solid caps used.


----------



## bmgjet

asdkj1740 said:


> look forward to see ppl saying this is garbage because of so many solid caps used.


There's only 5 in the picture. Unless you mean electrolytic caps.


----------



## dangerSK

asdkj1740 said:


> View attachment 2468460
> 
> 
> look forward to see ppl saying this is garbage because of so many solid caps used.


Galax is using this "layout" all the time


----------



## Nephalem89

Hi 

One question: what is the best BIOS to flash on my ASUS 3090 TUF Gaming (non-OC)? Is the TUF Gaming OC or the Gigabyte Aorus Master BIOS compatible? Many, many thanks


----------



## FrouJoker

Why do we have so few BIOS choices? (Just 2 for MSI, seriously...)
Maybe I can find more? Because for something silent and cool, this is a limited choice.


----------



## FrouJoker

Nephalem89 said:


> Hi
> 
> One question: what is the best BIOS to flash on my ASUS 3090 TUF Gaming (non-OC)? Is the TUF Gaming OC or the Gigabyte Aorus Master BIOS compatible? Many, many thanks


I think so, yeah, but I also think it may not work correctly and give you trouble with frequencies. And there's a small chance your cooling will misbehave.


----------



## pat182

HyperMatrix said:


> Strix with XOC bios was setting off the alarm on my 900W battery backup when CPU load was at 50%. Haven’t had that happen any other time. And individual power connectors were getting up to 160-170W each With PCIe slot at 50W. So technically it could be peaking at 530-560W.


Yeah, looks like it. Drawing 175 watts per 8-pin


----------



## shiokarai

pat182 said:


> Yeah, looks like it. Drawing 175 watts per 8-pin


By the XOC BIOS, do you both mean the 500 W FTW3 XOC BIOS or the 520 W KP BIOS? Also, wouldn't the 520 W KP BIOS do the same, i.e. bugged 3rd 8-pin reading = pulling more watts?


----------



## Edge0fsanity

dangerSK said:


> Hey guys,
> I did a video about shunt modding my 3090 FTW3, so if you want to watch how it's done, or maybe ask about shunt modding, here's a link


I thought the FTW3 had fuses that would blow if too much power is drawn?


----------



## Alex24buc

Installed the KFA2 390 W BIOS on my Palit GamingPro OC, replacing the Gigabyte Gaming OC one. I can confirm the KFA2 is better: the fans run at a higher speed, I got the middle DisplayPort working again, and idle temps are lower because of the lower boost clock. So far everything works OK, and thanks again for the help.


----------



## andrvas

Herald said:


> This is my score with the stock performance BIOS. So not a huge difference, but it can be improved; I think I can hit 20k with the KingPin BIOS.
> 
> I was lucky with the RAM binning, I can run Unigine with a +1675 OC on the VRAM. Also, the temps with the fans maxed are comparable to AIOs. This thing is nuts


Did you shuntmod the card? As I've said I have the same card, with the 390w Gigabyte BIOS, and my score in Superposition is around 17700...


----------



## bmgjet

Edge0fsanity said:


> I thought the FTW3 had fuses that would blow if too much power is drawn?


3-plug card, so not too much of a worry until around 720 W+,
which would need volt mods and LN2 to get over 650 W


----------



## pat182

shiokarai said:


> By the XOC BIOS, do you both mean the 500 W FTW3 XOC BIOS or the 520 W KP BIOS? Also, wouldn't the 520 W KP BIOS do the same, i.e. bugged 3rd 8-pin reading = pulling more watts?


Both are the same: they don't read the 3rd 8-pin and can drive a Strix to 550 watts. The difference is the XOC leaves your fan speeds intact, while the KPE drops the middle fan to 2000 RPM max


----------



## shiokarai

Guys/gals what's your take on the Bykski Strix block vs EKWB Strix block? Got both of them inbound, want to keep only one, don't wanna test myself


----------



## dangerSK

Edge0fsanity said:


> I thought the FTW3 had fuses that would blow if too much power is drawn?


20 A fuses on the 8-pins and a 10 A fuse on the PCIe slot. If you're going cold and pushing Vcore, you need to hard-short the fuses
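As a rough sanity check against those fuse ratings, a small sketch, assuming a nominal 12 V rail and the ~175 W per-connector draw reported earlier in the thread (actual rail voltage and draw will vary):

```python
# Headroom check against the fuse ratings above, assuming a 12 V rail.
rail_v = 12.0
fuse_8pin_a = 20.0   # per 8-pin connector
fuse_pcie_a = 10.0   # PCIe slot

observed_8pin_w = 175.0                     # per-connector draw seen above
observed_8pin_a = observed_8pin_w / rail_v  # ~14.6 A, under the 20 A fuse
max_8pin_w = fuse_8pin_a * rail_v           # 240 W per connector before the fuse pops
max_pcie_w = fuse_pcie_a * rail_v           # 120 W through the slot fuse
print(round(observed_8pin_a, 1), max_8pin_w, max_pcie_w)
```

So ambient-water runs sit comfortably under the fuses; it's the sub-zero, raised-Vcore runs where shorting them becomes relevant, as the post says.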


----------



## Edge0fsanity

dangerSK said:


> 20 A fuses on the 8-pins and a 10 A fuse on the PCIe slot. If you're going cold and pushing Vcore, you need to hard-short the fuses


So 5mOhm fuses ok for ambient water and 1.1v? Not going sub ambient with this card, only shunting if I think I can exceed 2200mhz on water. Card tops out at 2160mhz on air with 60% load around 50c. Primary use is 4k gaming. I'm on the list for the kingpin, will try out sub ambient with that card.


----------



## dangerSK

Edge0fsanity said:


> So 5mOhm fuses ok for ambient water and 1.1v? Not going sub ambient with this card, only shunting if I think I can exceed 2200mhz on water. Card tops out at 2160mhz on air with 60% load around 50c. Primary use is 4k gaming. I'm on the list for the kingpin, will try out sub ambient with that card.


Yeah, 5 mOhm "shunts" for piggyback should be fine. You shouldn't trip a fuse.
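For anyone curious why a piggybacked shunt raises the effective power limit, here is a minimal sketch of the math, assuming the stock shunt is also 5 mOhm (check your card's actual shunt values before modding):

```python
# Two equal shunts in parallel halve the resistance the power
# controller senses, so it reports half the real current/power.
def parallel_mohm(r1, r2):
    return r1 * r2 / (r1 + r2)

stock_mohm = 5.0       # assumed stock shunt value
piggyback_mohm = 5.0   # the piggyback shunt discussed above

effective = parallel_mohm(stock_mohm, piggyback_mohm)  # 2.5 mOhm
real_power_w = 500.0
reported_w = real_power_w * effective / stock_mohm     # controller sees 250 W
print(effective, reported_w)
```

In other words, a 500 W real draw reads back as ~250 W, so the stock power limit effectively doubles; that's also why software power readings become meaningless after the mod.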


----------



## Edge0fsanity

dangerSK said:


> yea 5mohm "shunts" for piggyback should be fine. u shouldnt trip fuse.


Yeah meant shunts lol


----------



## khunpunTH

Updated score with chilled water.

PR - 15458 (www.3dmark.com: "I scored 15 458 in Port Royal"; i7-10700K, RTX 3090, Windows 10)

TS graphics score - 23539, top #12 (www.3dmark.com: "I scored 20 533 in Time Spy")

TS EX graphics score - 12027, top #17 (www.3dmark.com: "I scored 10 240 in Time Spy Extreme")

PS: MSI Gaming X Trio, Barrow block, chilled water, 520 W BIOS, no shunt.


----------



## warbucks

cletus-cassidy said:


> What price did you get?


The block and backplate with 5V lighting was $170 CAD.


----------



## dante`afk

Miguelios said:


> No need to call others’ #s baloney, instead you should ask what they’re running..
> 
> *The numbers I provided earlier are strictly for my Strix.* Just so you have the whole picture:
> 
> PC with no panels by a slightly opened window. It’s been approx 5-9c outside at night, and ambient inside set to 21c. So I guess yes, chilled water..
> 
> Overall Loop ambient at idle after temps flattening out, is 26-27c
> 
> Setup:
> 1200L/h pump directly into gpu, then Into CHONKY 240rad, then into CPU block and out to a 360rad. Conductonaut on both CPU and GPU. Gelid Ultimate 15w/mk thermal pads all around the GPU. Front and back..
> I have some 15.2k+ PR runs where the card doesn’t even reach 40c. _GQNERD on 3dmark_
> 
> Typically after 2 hrs of gaming, highest I see is gpu at 40c and CPU (10900k @ 5.2) at 72c. My loop’s ambient temp ends up being around 55-60c
> 
> Just want to paint the whole picture for anyone contemplating doing it. Your results may vary obviously


So, I have more flow than you, I have bigger rads than you, but I'm using Kryonaut Extreme and not Conductonaut.

So you're telling me LM will decrease my temps under load by 4C? Hard to believe, because I tested it on a 2080 Ti with Conductonaut: while it's initially better, after a couple of days it's the same as normal paste












Carillo said:


> Even if you pay $75 shipping with DHL, they take 3-4 weeks to hand it over to DHL. Trust me, I have ordered 4 different blocks. China shuts down from late December to February, so good luck getting anything before that


I received my bykski block within 6 days after ordering with DHL express from China
The bitspower block arrived from Taiwan in 8 days.


----------



## Sheyster

long2905 said:


> What temps are you guys seeing with the stock air cooler? I was going to sell the iChill X4 for something else with 3 pins and keep it stock, hoping the air cooler on another model would be better, but the results I'm seeing on here are all over the place, even on the Strix and FTW3.
> 
> *Would the Gaming X Trio still be a decent choice, flashing to the EVGA vBIOS?*


Yes, very much so if you can find one. It also works with the 520W KPE BIOS. For the price (~$1589 in the U.S.) this is the best value of any 3090 card IMHO.


----------



## Sheyster

pat182 said:


> Both are the same: they don't read the 3rd 8-pin and can drive a Strix to 550 watts. The difference is the XOC leaves your fan speeds intact, while the KPE drops the middle fan to 2000 RPM max


I confirm this statement, I've tried both of them on my own Strix.


----------



## long2905

Sheyster said:


> Yes, very much so if you can find one. It also works with the 520W KPE BIOS. For the price (~$1589 in the U.S.) this is the best value of any 3090 card IMHO.


Thank you for affirming my future decision lol. The Gaming X Trio should be plenty enough here in Vietnam. It's the Strix that is a unicorn, even with its Strix tax (about $2400 here)


----------



## Thanh Nguyen

My 2-year-old Corsair AX1500 PSU is suddenly dead. Not sure if it's because of the shunted card or something else.


----------



## dante`afk

damn


----------



## GRABibus

As of today in France, some brand-new EVGA RTX 3090 FTW3 Ultra Gaming cards are available at one store for $2240 (1850€). I took one.
All other stores have them at $2540, available at the end of December.


----------



## HyperMatrix

xrb936 said:


> What do you mean by "over 2200MHz"? Under what condition? How did you achieve that?


Well I can get 2220MHz up to about 60% GPU load. And 2205MHz up to about 80% GPU load. No VRel/VOp. So with shunts and adequate cooling I hope to be able to maintain that at 100% load. I had put up a couple pics a few days back.





















dante`afk said:


> So, I have more flow than you, I have bigger rads than you, but I'm using Kryonaut Extreme and not Conductonaut.
> 
> So you're telling me LM will decrease my temps under load by 4C? Hard to believe, because I tested it on a 2080 Ti with Conductonaut: while it's initially better, after a couple of days it's the same as normal paste


LM really is that much better. There must have been something wrong with your application. Sometimes the liquid metal gets soaked up into the copper block and gives you less than perfect contact. If that happens, simply re-apply the LM. Don’t scrape it off of the copper either. The discoloration is ok and actually good. It’s filling in all the little micro holes in it when it fuses. But yes LM is kind of a big deal when done right, so definitely give it another shot.


----------



## dante`afk

I'll try again since I have the results with Kryonaut Extreme now.

What do you use around the die to protect the surrounding area? I used Super 33+ tape in the past.


----------



## GAN77

khunpunTH said:


> updated score with chilled water.



Which waterblock manufacturer are you using?


----------



## GQNerd

dante`afk said:


> So, I have more flow than you, I have bigger rads than you, but I'm using Kryonaut Extreme and not Conductonaut.
> So you're telling me LM will decrease my temps under load by 4C? Hard to believe, because I tested it on a 2080 Ti with Conductonaut: while it's initially better, after a couple of days it's the same as normal paste


*MAY*...
Also see the temps/conditions I run it at, since it may be considered chilled water.

Kryo Ex - 14 W/mK
Conducto - 73 W/mK


----------



## GQNerd

khunpunTH said:


> Updated score with chilled water.
> 
> PR - 15458 (www.3dmark.com: "I scored 15 458 in Port Royal"; i7-10700K, RTX 3090, Windows 10)
> 
> TS graphics score - 23539, top #12 (www.3dmark.com: "I scored 20 533 in Time Spy")
> 
> TS EX graphics score - 12027, top #17 (www.3dmark.com: "I scored 10 240 in Time Spy Extreme")
> 
> PS: MSI Gaming X Trio, Barrow block, chilled water, 520 W BIOS, no shunt.


NICE!!

But oof, creeping up on my score.. dropped from Top 20 to #29... Back to the lab this weekend!


----------



## HyperMatrix

dante`afk said:


> I'll try again since I have the results with Kryonaut Extreme now.
> 
> what do you use around the DIE to protect the surrounding area? I used Super 33+ tape in the past.


Personally I don't use anything on my CPUs/GPUs because I've done enough applications to know exactly how little to apply. But on laptops I do use MG Conformal Coating. You can use clear nailpolish or your tape method. It's all good. Just don't apply the LM thinking "it'll be fine i have tape." You have to take your time with LM because even an amount that seems impossible to spread any further, can be spread quite a bit. You just have to keep stroking away and use as absolutely little of it as you can. And of course, apply to both surfaces. You should apply so little of it that there would be no way for there to be anything that could trickle/drip down. That's also how you get the best thermal conductivity because LM itself isn't nearly as thermally conductive as your copper block. So the less there is, the closer your die is to the actual copper.


----------



## GQNerd

HyperMatrix said:


> Personally I don't use anything on my CPUs/GPUs because I've done enough applications to know exactly how little to apply. But on laptops I do use MG Conformal Coating. You can use clear nailpolish or your tape method. It's all good. Just don't apply the LM thinking "it'll be fine i have tape." You have to take your time with LM because even an amount that seems impossible to spread any further, can be spread quite a bit. You just have to keep stroking away and use as absolutely little of it as you can. And of course, apply to both surfaces. You should apply so little of it that there would be no way for there to be anything that could trickle/drip down.


Spot-on.. But I did apply conformal coating and tape around the die, and also moved the GPU from vertical to horizontal orientation just to ensure no dripping.


----------



## Falkentyne

Miguelios said:


> ****MAY... *
> also see the temps/conditions I run it at.. Since it may be considered using Chilled water.
> 
> Kryo Ex - 14w/mk
> Conducto - 73w/mk


Conductonaut is no higher than 40 W/mK and most likely much lower (<20-30 W/mK). It can't be higher than the thermal conductivity of the primary base metal used in the eutectic alloy (gallium, 40.6 W/mK). Any metallurgist will tell you this.

Galinstan is 16.5 W/mK in its original ratio form.

If you don't believe this, someone compared homemade galinstan with Conductonaut:
0.5C difference. I have 150g of my own homemade galinstan left. Just not interested in having Conductive Balls of Doom™ running amok on my expensive video card right now.


https://www.reddit.com/r/overclocking/comments/6ues42



http://imgur.com/a/T6JlP


----------



## GQNerd

Falkentyne said:


> Conductonaut is no higher than 40 w/mk and most likely much lower (<20-30 w/mk). It can't be higher than the primary base metal used in the eutectic alloy (Gallium, 40.6 w/mk). Any metallurgist will tell you this.
> 
> Galinstan is 16.5 w/mk in its original ratio form.


Cool, wish I knew a metallurgist.. just posting the advertised ratings

Would love to make my own


----------



## HyperMatrix

Miguelios said:


> Spot-on.. But I did apply conformal coating and tape around the die, and also moved the GPU from vertical to horizontal orientation just to ensure no dripping.


Never hurts to take extra precautions. Especially when it costs so little and takes so little time to do. I'd never say that you shouldn't or don't need to do it. 




Falkentyne said:


> Conductonaut is no higher than 40 w/mk and most likely much lower (<20-30 w/mk). It can't be higher than the primary base metal used in the eutectic alloy (Gallium, 40.6 w/mk). Any metallurgist will tell you this.
> 
> Galinstan is 16.5 w/mk in its original ratio form.


Continuing on our talk from before, do you think it's possible that the 73 W/mk is derived based on actual thermal transfer when applying an appropriately thin layer? Your figures are based off of if you have a mass of the alloy that has to transfer all the heat. But when we're dealing with such a thin layer with pure copper on the other side, the thermal properties would likely be different.


----------



## Falkentyne

Miguelios said:


> Cool, wish I knew a metallurgist.. just posting the advertised ratings
> 
> Would love to make my own


A metallurgist who worked in the industry posted on the notebookreview forums some time back, when there was a bunch of discussion about how Ga affects Cu, and so on, and Thermal Grizzly's 73 w/mk claim got thoroughly debunked in that thread. I don't have the direct links to it but it was at least two years ago. I don't even remember the results but it was not far off from 25 w/mk. If you had the time to do a long search (use an ad blocker, though), you would find the discussion. No one there believed Grizzly's claims and there was discussion about CLU and CLP also.


----------



## Falkentyne

HyperMatrix said:


> Never hurts to take extra precautions. Especially when it costs so little and takes so little time to do. I'd never say that you shouldn't or don't need to do it.
> 
> 
> 
> 
> Continuing on our talk from before, do you think it's possible that the 73 W/mk is derived based on actual thermal transfer when applying an appropriately thin layer? Your figures are based off of if you have a mass of the alloy that has to transfer all the heat. But when we're dealing with such a thin layer with pure copper on the other side, the thermal properties would likely be different.


Almost impossible. Probably just related to Indium's 86 w/mk, because Thermalright Silver King goes even beyond Grizzly's and claims 81.8 w/mk.
I tried smelting a small mixture using more indium than gallium. It didn't perform any better and seemed less viscous than usual (kind of like how runny CLU got when they changed their formula to what many people felt was worse than the original).


----------



## HyperMatrix

Falkentyne said:


> A metallurgist who worked in the industry posted on the notebookreview forums some time back, when there was a bunch of discussion about how Ga affects Cu, and so on, and Thermal Grizzly's 73 w/mk claim got thoroughly debunked in that thread. I don't have the direct links to it but it was at least two years ago. I don't even remember the results but it was not far off from 25 w/mk. If you had the time to do a long search (use an ad blocker, though), you would find the discussion. No one there believed Grizzly's claims and there was discussion about CLU and CLP also.


I'm not talking about the technical properties and limitations of the alloy components. But let me explain it this way. Let's say you're trying to do the opposite of transferring heat away; you're trying to insulate to prevent heat from escaping. Different materials will do a better or worse job, but the thickness of your insulation also makes a difference in its effectiveness.

So pretend the copper block is the freezing outside cold weather, and your GPU die is the warmth inside your home. The only thing preventing the outside cold from stealing your inside warmth is your insulation. With full direct contact, the outside cold would steal that heat away until the temperatures equalized. In the case of copper, that's 400 W/mk. But there's one thing standing in the way of that heat transfer: your insulation (LM). Are we saying that an ultra thin layer that you can barely spread is able to fully stop the thermal transfer granted by the copper? I have a very hard time believing that.


----------



## dante`afk

HyperMatrix said:


> Personally I don't use anything on my CPUs/GPUs because I've done enough applications to know exactly how little to apply. But on laptops I do use MG Conformal Coating. You can use clear nailpolish or your tape method. It's all good. Just don't apply the LM thinking "it'll be fine i have tape." You have to take your time with LM because even an amount that seems impossible to spread any further, can be spread quite a bit. You just have to keep stroking away and use as absolutely little of it as you can. And of course, apply to both surfaces. You should apply so little of it that there would be no way for there to be anything that could trickle/drip down. That's also how you get the best thermal conductivity because LM itself isn't nearly as thermally conductive as your copper block. So the less there is, the closer your die is to the actual copper.



great ty, ordered some of that MG stuff


----------



## GQNerd

Falkentyne said:


> A metallurgist who worked in the industry posted on the notebookreview forums some time back, when there was a bunch of discussion about how Ga affects Cu, and so on, and Thermal Grizzly's 73 w/mk claim got thoroughly debunked in that thread. I don't have the direct links to it but it was at least two years ago. I don't even remember the results but it was not far off from 25 w/mk. If you had the time to do a long search (use an ad blocker, though), you would find the discussion. No one there believed Grizzly's claims and there was discussion about CLU and CLP also.


Nice, I also frequent the NBR forums (Magilla Gorilla). But I'm not interested in going down that rabbit hole (as I usually would for my mods/tweaks) lol..

I just roll with whatever the consensus best commercially available TIM is.

For paste, I roll with Kryo (regular); for LM I roll with Conductonaut or Thermalright.


----------



## mirkendargen

HyperMatrix said:


> I'm not talking about the technical properties and limitations of the alloy components. But let me explain it this way. Let's say you're trying to do the opposite of transfer heat away. Let's say you're trying to insulate to prevent heat from escaping. Different material will do a better or worse job. But also the thickness of your insulation makes a difference in effectiveness or lack thereof.
> 
> So pretend the copper block is the freezing outside cold weather. And your GPU die is the warmth inside your home. The only thing between preventing the outside cold from stealing your inside warmth, is your insulation. Now with full direct contact, the outside cold would steal that heat away until the temperatures were equalized. In the case of copper, that's 400 W/mk. But there's one thing preventing that heat transfer from happening. And that's your insulation (LM). Are we saying that an ultra thin layer that you can barely spread is able to fully stop the thermal transfer granted by the copper? I have a very hard time believing that.


Thermal conductivity is the same measurement whichever direction the heat is traveling. Thermal Grizzly is either lying about their measurement or they aren't; there isn't really any middle ground/fudge room.

Now actual performance of LM as a TIM in real world use has all sorts of variables. Application, thickness, coverage, drying/absorption, etc. These are all independent of the thermal conductivity of the material itself.


----------



## JonnyV75

Received my 3090 FTW3 Ultra yesterday. With Afterburner I'm able to set +195 core clock, with a 2100 MHz max in GPU-Z. Time Spy reports 2000 MHz average up to 2085 MHz max. However, the best I can do with memory is +250, which seems low. This gives me 19267 overall and a 20872 GPU score in Time Spy.

The +195 core and +250 memory in Port Royal give me 13935.

Power slider is 117% and voltage is 50%. Max temp is 76C with 70C average. I am not getting any PerfCap warnings. 

https://www.3dmark.com/spy/16074933
https://www.3dmark.com/pr/61350

I think I'm temperature limited at this point, and it's an average chip with poor memory. Thoughts and/or suggestions?

This is on air, XOC Bios and a 5900x in a Strix x570-E MB.


----------



## dante`afk

If you're not getting any PerfCap warnings, then there's no throttling.

What limits your scores and further clocks is temperature; get it on water and you'll be able to push more.


----------



## Falkentyne

Papusan claimed that the only way Conductonaut can even come close to 73 w/mk is if they were using _88%_ Indium in the mixture.
And Johnksss's reply simply doesn't match up with real world testing of homemade galinstan vs conductonaut.





forum.notebookreview.com







http://imgur.com/a/T6JlP


16.5 w/mk --->73 w/mk is going to be a hell of a lot more than 0.5C difference.
I'm still trying to find the other thread where they were discussing the w/mk values.
In the meantime, I'll just use my Kryonaut Extreme and keep all my LM for a rainy day.


----------



## HyperMatrix

mirkendargen said:


> Thermal conductivity is the same measurement, whichever direction the heat is traveling. Thermal Grizzly is either lying about their measurement or not, there isn't really any middle ground/fudge room.
> 
> Now actual performance of LM as a TIM in real world use has all sorts of variables. Application, thickness, coverage, drying/absorption, etc. These are all independent of the thermal conductivity of the material itself.


We're not talking about LM thermal conductivity as though it's a large block of the alloy that heat has to transfer through. We're talking about it as an ultra thin layer meant to fill in the gaps between the die and the copper block. With such a thin application, you wouldn't be able to essentially fully block the thermal conductivity of the main copper block down to the limits of the LM.


----------



## JonnyV75

dante`afk said:


> if you're not getting any perfcap warnings, then no throttling.
> 
> what limits your scores and more clock is temperature, get it on water and you'll be able to push more


As soon as the Hybrid kits go back up for notify, I'll be putting my hat in the ring. The low headroom on the memory bothers me though. My old 3080 FTW3 was able to push +1000 memory.


----------



## stryker7314

Just thought about it: my GPU (3090) has more memory than there is system RAM. Weird. Oh well, still not moving to 32GB.


----------



## mirkendargen

HyperMatrix said:


> We're not talking about LM thermal conductivity as though it's a large block of the alloy that heat has to transfer through. We're talking about it as an ultra thin layer meant to fill in the gaps between the die and the copper block. With such a thin application, you wouldn't be able to essentially fully block the thermal conductivity of the main copper block down to the limits of the LM.


It's literally the exact same measurement... The m in W/mK is meters, the thickness of the material. It could be a layer one molecule thick or one meter thick; it's the same value for the same material.


----------



## Falkentyne

Ok so I posted this question on the chemistry and metallurgy subreddits. Wonder what the pros and PhDs have to say about this, if they even see the posts...


----------



## WayWayUp

der8auer released an interesting new video with the 6800 XT before and after liquid metal: it improves temps by about 6C with the stock cooler under load.

I can imagine a hotter 3090, especially with shunts, seeing even more improvement.


----------



## Herald

andrvas said:


> Did you shuntmod the card? As I've said I have the same card, with the 390w Gigabyte BIOS, and my score in Superposition is around 17700...


No, no shuntmodding, just the kingpin bios


----------



## WayWayUp

In the end, the 73 w/mk number is almost meaningless to us.

We can't really gauge what 73 w/mk is supposed to get us. If we had a ton of materials in the 20-60 w/mk range (plus substantial testing) we could understand it better, but the reality is we don't.

But what we DO understand is that it outperforms paste, every single one, by a large margin. If this figure somehow turned out to be 30 w/mk or 50 w/mk, does it really matter? The testing proves that LM performs substantially better than paste.


----------



## mirkendargen

WayWayUp said:


> In the end the 73 w/mk number is almost meaningless to us.
> 
> We cant perceivably understand what 73 w/mk is Supposed to get us,
> If we had a ton of materials in the 20-60 w/mk range (plus substantial testing) we could better understand but the reality is we don't
> 
> but what we DO understand is that it outperforms paste, every single one, by a large margin. If it was proven somehow that this figure turned out to be 30w/mk or 50w/mk does it really matter? The testing proves that LM performs substantially better than paste


This is what really matters. Whether the compound conducts heat better, applies better, or both, the end result is a lower component temperature (the only metric anyone should actually care about).


----------



## HyperMatrix

Has anyone with a water block measured the backplate temps? I thought VRAM temp would be a huge issue, but I tested with 22GHz on my memory and the GPU at 100% usage for an hour: the temperature sensor on the card read 62C, and a FLIR camera reported the hottest spot on the back as 64C. If your GDDR6X on the back is under 60C with a proper block, I don't think there's any reason to be concerned about actively cooled backplates or mounting that mp5 kit to the backside. I mean, the mp5 will help lower your overall card temps a bit, so it's not all bad, but I'm just not sure it'd be worth the $$ if your block is already keeping backside VRAM under 60C. So unless you're planning to volt mod and push the memory clocks beyond the standard range, I'd say test your temps before you go too overboard.


----------



## mirkendargen

HyperMatrix said:


> Has anyone with a water block measured the backplate temps? I thought VRAM temp would be a huge issue but I tested with 22GHz on my memory with GPU doing 100% usage for an hour and the temperature sensor on the card was reading 62C and flir camera was reporting hottest temperature on the back as being 64C. If your GDDR6x on the back is under 60C with a proper block, I don't think there's any reason to be concerned about active cooled backplates or mounting that mp5 kit to the backside. I mean the mp5 will help lower your overall card temps a bit so it's not all bad, but I'm just not sure it'd be worth the $$ if your block is already keeping backside VRAM under 60C. So unless you're planning to volt mod and increase the memory clocks beyond the standard range, I'd say test your temps before you go too overboard.


I would estimate mine is around 40C under load (warm to the touch but no problem leaving my finger on it) with the Bykski Strix block, I'll bust out a meat probe and test better later. I'm a bad example though because I have a 4 DIMM RAM block glued on the backplate with Arctic Alumina, and pretty cool coolant temps.

It was only ~$30 to rig up and would have been a whole lot more work to decide to do after testing, so I did it from the start "just in case".


----------



## dante`afk

I put this heatsink https://www.amazon.com/gp/product/B089QJQY17/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1&fpw=alm with 0.5mm Thermalright Odyssey pads on my backplate, and a 120mm fan on top of it.

Temps are 1-3C better at best under load. It gets warm, but not much improvement I think. Well, any difference is better than none. Don't have a temp measurement tool here though..


----------



## HyperMatrix

Official Nvidia performance chart for Cyberpunk 2077 leaked. This is likely on a build without Denuvo. 11.1% faster than the TomsHardware benchmarks at native 4K. 9.9% faster with DLSS Performance.

Currently it's looking like at full ultra settings and ultra ray tracing, the best we'd be able to do is about 50-53fps under water with DLSS Quality mode, and close to 60fps with DLSS Balanced mode.


----------



## pat182

My store got like 40 3090s haha ***


----------



## WayWayUp

You could always go with the mp5 works, although I hate its low flow and how you need to connect it. Since I use EKWB CE rads it's not convenient, as they only come with 2 ports.
I will be moving into a Core P7 on Saturday and opening a 3rd loop, so I will throw it on its own loop with the memory, and keep the CPU and GPU on separate loops as well.


----------



## WayWayUp

HyperMatrix said:


> Official Nvidia performance chart for CyberPunk 2077 leaked. This is likely on a build without Denuvo. 11.1% faster than the TomsHardware benchmarks at native 4K. 9.9% faster with DLSS Performance.
> 
> Curently it's looking like at full ultra settings and ultra ray tracing, the best we'd be able to do is about 50-53fps under water with DLSS Quality mode and close to 60fps with DLSS Balanced mode.
> 
> View attachment 2468542


It was my expectation, but honestly I'll take it.
Watch Dogs isn't the most optimized game, but I get 64fps in its benchmark at ultra with full RT and balanced DLSS.
I will take 4K at 60fps with balanced DLSS and ray tracing in an open world game.

From what reviewers are saying, the DLSS implementation in this game is excellent.


----------



## HyperMatrix

WayWayUp said:


> It was my expectation but honestly I'll take it
> watch dogs isnt the most optimized game but i get 64fps in the benchmark - ultra with full RT and balanced DLSS.
> I will take 4k with balanced dlss with 60fps with raytracing in an open world game
> 
> from what reviewers are saying, the dlss implementation in this game is excellent


I find DLSS Balanced to be a bit too fuzzy compared to native, and if you then add sharpening to it, you drop performance again. And that 60fps with DLSS Balanced would be your average; variance will still mess you up. Normally I'd say to go with RT medium instead of ultra, but from what I've read there's a significant visual difference between the two, unlike in Watch Dogs where it's hardly noticeable.

Either way we'll find out soon. May end up doing what I did with Witcher 3 and just cap it to 55FPS to keep frame consistency. Consistent 55 > variable 55-65. Or if it supports mGPU, wait until my HydroCopper card comes in and beat the game before selling off the card. Haha.

*Edit: * Curious about something though. Does overclocking improve DLSS scaling as well? So in this chart it goes from 22 fps at native 4k to 57.8 fps with dlss performance. That's a 163% increase. Now assuming this was done on a 3090 FE, and our cards under water/shunted would be 20-25% faster, our native 4k fps would be 27fps. With 163% increase that would be 71 fps. But does our +163% scaling also increase because our tensor cores are overclocked 20-25% too? So to rephrase, do we know if overclocking improves DLSS scaling outside of just the benefit to the initial rasterization/rt performance bump?
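For what it's worth, the arithmetic in the edit works out as below. The ~23% uplift and the assumption that the DLSS scaling multiplier carries over unchanged are the post's hypotheticals, not measured numbers:

```python
# Reproduce the estimate: DLSS Performance takes a stock card from
# 22 fps to 57.8 fps at native 4K. Assume (per the post) an
# overclocked/shunted card is ~23% faster at native res and that the
# DLSS scaling multiplier is unaffected by the overclock.

native_stock = 22.0
dlss_stock = 57.8
scaling = dlss_stock / native_stock        # ~2.63x, i.e. +163%

oc_uplift = 1.23                           # assumed 20-25% faster card
native_oc = native_stock * oc_uplift       # ~27 fps native
dlss_oc = native_oc * scaling              # ~71 fps with DLSS Performance

print(f"scaling: +{(scaling - 1) * 100:.0f}%")
print(f"OC native: {native_oc:.0f} fps, OC DLSS: {dlss_oc:.0f} fps")
```

Whether an overclock on the tensor cores improves the multiplier itself, rather than just the native baseline, is exactly the open question; this only shows the baseline case.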


----------



## mattxx88

dangerSK said:


> Hey guys,
> I did a video about shunt modding my 3090Ftw3, so if wanna watch how its done or maybe ask about shunt modding heres a link


nice vid m8
can I ask why you opted for wire instead of a resistor on the PCIe shunt?


----------



## Carillo

dante`afk said:


> so, I have more flow than you, I have bigger rads than you, but I'm using kryonaut extreme and not condactonaut.
> 
> so you're telling me, LM will decrease my temps on load by 4c? Hard to believe that, because I tested it on 2080ti with condactonaut. while it's initially better, after couple of days its the same as normal paste
> 
> View attachment 2468498
> 
> 
> 
> 
> I received my bykski block within 6 days after ordering with DHL express from China
> The bitspower block arrived from Taiwan in 8 days.


Did you buy from Aliexpress ?


----------



## dante`afk

Carillo said:


> Did you buy from Aliexpress ?


bykski from amazon.com
bitspower from their website


----------



## Carillo

dante`afk said:


> bykski from amazon.com
> bitspower from their website


That's why. I'll try Amazon next time; AliExpress is horrible.


----------



## ttnuagmada

Finally found a 3090 Strix for MSRP at Microcenter. Had already bought the EK block for it. Haven't had much time to play around with it, but it finished the 3DMark stress test at +125, though it was hitting the power limit even with the slider maxed. Temps seem about 5C or so worse under load than my 1080 Ti's, which I expected given the increased heat density of these chips.

I've done a little skimming of this thread, and noticed that the Kingpin BIOS will give me an extra 40w or so. Has anyone tried it on a Strix? Are there any drawbacks to doing so? (disabled HDMI ports etc).

Would also appreciate any tips/info I might want to know about this card.

Thanks!


----------



## Falkentyne

Found some information that some or none of you may care about:

@Miguelios @HyperMatrix





US5198189A - Liquid metal matrix thermal paste - Google Patents


A liquid metal matrix thermal paste comprises a dispersion of non-reacting thermally conductive particles in a low melting temperature liquid metal matrix. The particles preferably are silicon, molybdenum, tungsten or other materials which do not react with gallium at temperatures below...



patents.google.com






https://www.reddit.com/r/metallurgy/comments/k9a1zm/_/gf3gjnp


https://www.reddit.com/r/metallurgy/comments/k9a1zm/_/gf3f1fo

tl;dr: if the Grizzly stuff is indeed 73 w/mk and the garage stuff is half a C higher at 16.5 w/mk, you aren't going to be able to chemically test that without some lab work...


----------



## mirkendargen

Falkentyne said:


> Found some information that some or none of you may care about,
> but
> 
> @Miguelios @HyperMatrix
> 
> 
> 
> 
> 
> __
> 
> 
> 
> 
> 
> US5198189A - Liquid metal matrix thermal paste - Google Patents
> 
> 
> A liquid metal matrix thermal paste comprises a dispersion of non-reacting thermally conductive particles in a low melting temperature liquid metal matrix. The particles preferably are silicon, molybdenum, tungsten or other materials which do not react with gallium at temperatures below...
> 
> 
> 
> patents.google.com
> 
> 
> 
> 
> 
> 
> __
> https://www.reddit.com/r/metallurgy/comments/k9a1zm/_/gf3gjnp
> 
> 
> __
> https://www.reddit.com/r/metallurgy/comments/k9a1zm/_/gf3f1fo
> 
> tl;dr: if the Grizzly stuff is indeed 73 w/mk and the garage stuff is half a C higher at 16.5 w/mk, you aren't going to be able to chemically test that without some lab work...


The real tl;dr is that it's all about the thickness and cleanness of the TIM application not the specific thermal conductivity of the compound.


----------



## Sheyster

ttnuagmada said:


> Finally found a 3090 Strix for MSRP at Microcenter. Had already bought the EK block for it. Haven't had much time to play around with it, but it finished the 3dmark stress test at +125, though it was hitting the power limiter even with the power limiter maxed. Temps seem about 5C or so worse under load than my 1080 Ti's, which I expected given the increased heat density of these chips.
> 
> I've done a little skimming of this thread, and noticed that the Kingpin BIOS will give me an extra 40w or so. Has anyone tried it on a Strix? Are there any drawbacks to doing so? (disabled HDMI ports etc).
> 
> Would also appreciate any tips/info I might want to know about this card.
> 
> Thanks!


You will lose 1 DP port, but keep both HDMI 2.1 ports with the 500W EVGA BIOS and/or the 520W KPE BIOS.

If you're keeping the card on air, just use the 500W BIOS; fan support is much better. If you're water-cooling, use the 520W BIOS. You won't be able to run the middle fan at more than 66% effective RPM (~2000 RPM) with the 520W BIOS, which is a problem.

You should look into under-volting if you're staying on air. At 1.000v my card is fully stable at 2085 MHz.


----------



## HyperMatrix

Falkentyne said:


> Found some information that some or none of you may care about,
> but
> 
> @Miguelios @HyperMatrix
> 
> 
> 
> 
> 
> __
> 
> 
> 
> 
> 
> US5198189A - Liquid metal matrix thermal paste - Google Patents
> 
> 
> A liquid metal matrix thermal paste comprises a dispersion of non-reacting thermally conductive particles in a low melting temperature liquid metal matrix. The particles preferably are silicon, molybdenum, tungsten or other materials which do not react with gallium at temperatures below...
> 
> 
> 
> patents.google.com
> 
> 
> 
> 
> 
> 
> __
> https://www.reddit.com/r/metallurgy/comments/k9a1zm/_/gf3gjnp
> 
> 
> __
> https://www.reddit.com/r/metallurgy/comments/k9a1zm/_/gf3f1fo
> 
> tl;dr: if the Grizzly stuff is indeed 73 w/mk and the garage stuff is half a C higher at 16.5 w/mk, you aren't going to be able to chemically test that without some lab work...


Interesting reading. For me it was never about whether the marketed 73 w/mk was accurate or not; it was the inability to accept the reason given for why it would be lower. It just didn't make logical sense to me. So if adding particles to LM can increase its thermal conductivity, I think it would also make sense to say that the thinner the application, the better the effective conductance.

Edit:

From the reddit post you linked, the guy wrote:

"Because the layer of paste or liquid metal is so thin, the thermal resistance of these interfaces is probably the limiting factor in the heat transfer from heat source to heat sink."

Remember my house insulation example? The effectiveness of the insulation (its resistance to thermal transfer) depends not only on the material, but on the thickness. Because copper itself has over 400 W/mk thermal conductivity, and LM is much lower, the amount of thermal resistance the LM adds to slow the transfer into the copper would be the determining factor. And the less LM there is, the less thermal resistance there would be.
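For a sense of scale, here's that argument in rough numbers. The bond-line thickness, the use of the full die as contact area, and the 350W figure are all illustrative assumptions, not measurements:

```python
# Thermal resistance of a flat layer: R = t / (k * A), in K/W.
# The temperature drop across the layer is then dT = power * R.
# At a thin bond line, even a large change in rated w/mk moves the
# die temperature by only a degree or so.

def layer_resistance(thickness_m, k_w_per_mk, area_m2):
    return thickness_m / (k_w_per_mk * area_m2)

DIE_AREA = 628e-6   # GA102 die is ~628 mm^2 (used here as the contact area)
BOND_LINE = 50e-6   # assumed 50 micron LM layer (illustrative)
POWER = 350.0       # stock 3090 TDP pushed through the interface

drops = {k: POWER * layer_resistance(BOND_LINE, k, DIE_AREA)
         for k in (16.5, 40.0, 73.0)}

for k, dT in drops.items():
    print(f"k = {k:>4} w/mk -> drop across the LM layer ~ {dT:.2f} C")
```

Under these assumptions the gap between 16.5 and 73 w/mk works out to a bit over 1C, which is at least in the same ballpark as the ~0.5C difference reported between homemade galinstan and Conductonaut.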


----------



## jomama22

Welp, another strix coming in on Monday, sold the old poop strix last week. Hopefully this one can match or beat my FE at equal power, if not, then another poop strix it is.

Kinda regret waiting on this alphacool block for the FE. Said it would ship in 10-11 days over a week ago...then changed it to 2-3 weeks. Oh well. I have a strix block in hand if this new one can actually perform.


----------



## GQNerd

HyperMatrix said:


> Interesting reading. For me it was never about whether the marketed 73 W/mk was accurate or not. It was the inability to accept the reason given for why it would be lower.


@Falkentyne

This. 

At the end of the day LM is going to outperform a silicone based paste 99.9% of the time.

Research does reveal the fact the advertised W/mk numbers should have an * 

Also interesting read regarding varying LM performance based on the constitution.


----------



## dangerSK

mattxx88 said:


> nice vid m8
> can i ask you why you opted for wire instead resistor on pcie?


Testing the limit of the PCIe fuse. Wasn't my decision though.


----------



## icefirex

I have a Strix 3090 OC and have tried both the Kingpin and XOC EVGA BIOSes.. I'm pretty sure that extra 20W is being applied, as I can't hold quite the same clocks and score with the XOC BIOS. It's close, and the fans do spin down etc, but it's definitely not the same. I wish Asus would just release a 550W or 580W BIOS; that'd be almost perfect.


----------



## jomama22

icefirex said:


> I have a strix 3090 oc and have tried both the kingpin and xoc evga bios.. Im pretty sure that extra 20w is being applied as I can't hold quite the same clocks and score with the xoc bios. It's close and the fans do spin down etc but it's definitely not the same. I wish Asus would just release a 550w or 580w bios that'd be almost perfect.


Can just shunt. It's ezpz on the strix. Even just using the silver paint method you would get upwards of 650w if not more, and you can just clean it off when you don't want to shunt anymore.


----------



## mirkendargen

HyperMatrix said:


> Interesting reading. For me it was never about whether the marketed 73 W/mk was accurate or not. It was the inability to accept the reason given for why it would be lower. It just didn't make logical sense to me. So if adding particles to LM can increase its thermal conductivity, I think it would make sense to say that the thinner the application, the better the conductivity as well.
> 
> Edit:
> 
> From the reddit post you linked, the guy wrote:
> 
> "Because the layer of paste or liquid metal is so thin, the thermal resistance of these interfaces is probably the limiting factor in the heat transfer from heat source to heat sink."
> 
> Remember my house insulation example? The effectiveness of the insulation (resistance to thermal transfer) depends on not only the material, but the thickness. Because copper itself has over 400 W/mk thermal transfer, and LM is much lower, the amount of thermal resistance created by the LM to slow the thermal transfer to the copper would be the determining factor. And the less LM there is, the less thermal resistance there would be.


I keep saying it's all in the units that people are quoting (I guess without knowing what they mean?)... there's no mystery here. W/mK is watts per meter-kelvin. A watt is a joule per second: energy per second. It's being divided by distance times the difference in temperature. What happens when you divide by a smaller number? You get a bigger number. Halving the thickness of the material doubles the energy transfer rate. Doubling the temperature difference between sides doubles the energy transfer rate. Simple math.

Conductonaut is 73 or whatever W/mK; that means if you have a vat of it 1 meter deep (over a square meter of area) with the temperature 1C higher at the bottom than the top, it will transfer 73 joules of energy per second from the bottom to the top. Make that 1cm deep and it's 7300 joules per second. Make it 0.1mm deep (something around what I'd expect TIM applications to be) and you get the idea.
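The vat math above can be sketched directly (Fourier's law for steady conduction through a flat layer; the area term, which the vat example implicitly takes as 1 m², is made explicit here):

```python
# Fourier's law for steady-state conduction through a uniform flat layer:
#   Q = k * A * dT / t
# k  = thermal conductivity in W/mK (intrinsic to the material)
# A  = contact area in m^2, dT = temperature difference in K
# t  = layer thickness in m; halving t doubles the heat flow Q

def heat_flow_watts(k, area_m2, dT_k, thickness_m):
    """Heat transferred per second (watts) through the layer."""
    return k * area_m2 * dT_k / thickness_m

# The vat example: 1 m^2 of Conductonaut with 1 K across it
print(heat_flow_watts(73, 1.0, 1.0, 1.0))     # 1 m deep:    73 W
print(heat_flow_watts(73, 1.0, 1.0, 0.01))    # 1 cm deep:   ~7300 W
print(heat_flow_watts(73, 1.0, 1.0, 0.0001))  # 0.1 mm deep: ~730 kW
```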


----------



## Falkentyne

While we're on a roll, more suggestions that Grizzly and Thermalright may just be lying about their w/mk.


https://www.reddit.com/r/metallurgy/comments/k9a1zm/_/gf43l5y


----------



## DrunknFoo

WayWayUp said:


> der8auer released a new video interestingly with the 6800xt before and after liquid metal pretty much improves 6c with stock cooler under load.
> 
> I can imagine a hotter 3090 especially with shunts seeing even more improvements


About an 8°C average / 12°C peak improvement over KPx on my FTW3.


----------



## Aristotle

icefirex said:


> I have a strix 3090 oc and have tried both the kingpin and xoc evga bios.. Im pretty sure that extra 20w is being applied as I can't hold quite the same clocks and score with the xoc bios. It's close and the fans do spin down etc but it's definitely not the same. I wish Asus would just release a 550w or 580w bios that'd be almost perfect.


I have a 3090 strix and I run the 520w bios for the same reason. It takes my OC from almost completely stable to completely stable. I try to compensate for the lower fan limit on the mid fan by having it ramp to 100% more quickly. Between the 500w and 520w bios, my Temps are the same so I don't see a reason not to use the 520w.


----------



## icefirex

Aristotle said:


> I have a 3090 strix and I run the 520w bios for the same reason. It takes my OC from almost completely stable to completely stable. I try to compensate for the lower fan limit on the mid fan by having it ramp to 100% more quickly. Between the 500w and 520w bios, my Temps are the same so I don't see a reason not to use the 520w.


Glad I'm not the only one then.. Ill reflash back tonight and bear the 30% Min fan limit. 

Historically does asus usually provide unlocked bios for water cooling?


----------



## ttnuagmada

There anything special about flashing these, or is it the same as it's always been?


----------



## cletus-cassidy

Dumb question: I can’t find the 520 watt BIOS on TechPowerUp. Just see the 500 watt XOC. What am I missing?

EDIT: NM got it


----------



## HyperMatrix

icefirex said:


> Glad I'm not the only one then.. Ill reflash back tonight and bear the 30% Min fan limit.
> 
> Historically does asus usually provide unlocked bios for water cooling?


I mean technically you can run the fan off of your own fan controller if you have one. Depending on the connector, you could wire it up through your motherboard as well. And since the fans won't be running off your card's TDP anymore, you'll actually get a bit of extra power limit out of it for overclocking. I know the 2 fans of my FTW3 Hybrid pulled about 10W more when going from 30% speed to 100% speed.


----------



## HyperMatrix

*NOTE FOR ANYONE WHO'S LOOKING FOR AN EVGA FTW3 HYBRID CARD. NOTIFY QUEUE COMING UP AGAIN IN 2 DAYS.*

Also don't forget to use an associate's code to get an extra 5% off.

EVGA GeForce RTX 3090 FTW3 ULTRA HYBRID GAMING, 24G-P5-3988-KR, 24GB GDDR6X, ARGB LED, Metal Backplate
www.evga.com


----------



## mirkendargen

icefirex said:


> Glad I'm not the only one then.. Ill reflash back tonight and bear the 30% Min fan limit.
> 
> Historically does asus usually provide unlocked bios for water cooling?


One leaked for 2080Ti's from Asus, but not sure if it's been the norm. It also wasn't until 6 months or so after release.


----------



## stryker7314

How do you get afterburner to go to 121% power limit with kingpin bios? It goes 120 on a kingpin bios modified ftw3 ultra.


----------



## icefirex

HyperMatrix said:


> I mean technically you can run the fan off of your own fan controller if you have one. Depending on the connector, you could wire it up through your motherboard as well. And since the fans won't be running off your card's TDP anymore, you'll actually get a bit of extra power limit out of it for overclocking. I know the 2 fans of my FTW3 Hybrid pulled about 10W more when going from 30% speed to 100% speed.


Does anyone know the pinout for the 7-pin fan header? Or whether a standard 7-pin JST female connector is compatible? I've already thought of doing this, trust me; I just didn't know the pinout.


----------



## thorswrath91

mirkendargen said:


> I would estimate mine is around 40C under load (warm to the touch but no problem leaving my finger on it) with the Bykski Strix block, I'll bust out a meat probe and test better later. I'm a bad example though because I have a 4 DIMM RAM block glued on the backplate with Arctic Alumina, and pretty cool coolant temps.
> 
> It was only ~$30 to rig up and would have been a whole lot more work to decide to do after testing, so I did it from the start "just in case".


I would say it's significantly north of that. Haven't tested per se, but it's definitely not warm to the touch; quite toasty. I am running a 390W bios on my Zotac, undervolted to maintain 1995MHz, using the same Bykski block with max temps at 65°C at 25°C ambient.


----------



## pat182

Aristotle said:


> I have a 3090 strix and I run the 520w bios for the same reason. It takes my OC from almost completely stable to completely stable. I try to compensate for the lower fan limit on the mid fan by having it ramp to 100% more quickly. Between the 500w and 520w bios, my Temps are the same so I don't see a reason not to use the 520w.


The 500W bios gives the same results, and you keep the fan speed.


----------



## thorswrath91

Keninishna said:


> Hi all, I just got a zotac 3090 and was wondering if its worth investing in a water cooling setup? I don't feel like dropping another 500$ just to water cool. I saw a reddit post someone managed to rig some noctua fans on their card not sure how safe that is though.


I did it for about 350 bucks; the results are definitely worth it. Bykski block and a Thermaltake kit for the rest. Can maintain a max of 65°C with a 390W bios flashed, with just a 240mm rad.


----------



## mirkendargen

thorswrath91 said:


> I would say it's significantly north of that. Haven't tested per se, but it's definitely not warm to the touch; quite toasty. I am running a 390W bios on my Zotac, undervolted to maintain 1995MHz, using the same Bykski block with max temps at 65°C at 25°C ambient.


Good to know my cheap RAM block hack job is effective then!


----------



## ttnuagmada

So does the Kingpin bios not track TDP accurately when flashed on the Strix? I'm hitting a power PerfCap at like 350W despite having the power limit maxed out, though I'm clearly pulling a lot more than that (a similar scenario was showing 460W+ on the stock bios, with more throttling). I noticed that nothing is reading from 8-pin #3. Is this to be expected?


----------



## Falkentyne

ttnuagmada said:


> So does the Kingpin bios not track TDP accurately when flashed on the Strix? I'm hitting a power PerfCap at like 350W despite having the power limit maxed out, though I'm clearly pulling a lot more than that (a similar scenario was showing 460W+ on the stock bios, with more throttling). I noticed that nothing is reading from 8-pin #3. Is this to be expected?


The bios you flash on the Strix should be the FTW3 XOC Bios. That's the one that should give you a 500-520W TDP.


----------



## ttnuagmada

Falkentyne said:


> The bios you flash on the Strix should be the FTW3 XOC Bios. That's the one that should give you a 500-520W TDP.


The XOC gives 500w, the Kingpin gives 520w.


----------



## HyperMatrix

ttnuagmada said:


> The XOC gives 500w, the Kingpin gives 520w.


No. The XOC gives 500W to some cards. But on the Strix we think it’s anywhere from 520-575W because it doesn’t report proper power usage on one of the ports and gives much better results than just having an extra 20W.


----------



## bmagnien

pretty pumped about this and nowhere else to share where it'd be appreciated: https://www.3dmark.com/fs/24273620
Top 15 in HOF (among single GPU FS scores), and first American on that list. I decided since I'm building in a SFFPC with a handle, I might as well pick it up and put it outside. Got me a few extra points until the hybrid kit comes


----------



## ttnuagmada

HyperMatrix said:


> No. The XOC gives 500W to some cards. But on the Strix we think it’s anywhere from 520-575W because it doesn’t report proper power usage on one of the ports and gives much better results than just having an extra 20W.


You posting something about this earlier in the thread is the only mention of it I see. I just swapped over to the XOC and clocks drop even lower in Furmark, and my AX1600i doesn't show the wattage hitting the same average/peak as the Kingpin bios.

Is there a discussion about this somewhere, or is this just your own observation?


----------



## mirkendargen

Random thought I wonder if anyone has tried (I can just try myself too...)

Anyone tried the TUF BIOS on a Strix? My idea being that the XC 2x8pin BIOS helps FTW3's, would the TUF 2x8pin BIOS help 3x8pin Strixs more than the 520W BIOS and keep all the outputs since they have the same configuration?



HyperMatrix said:


> No. The XOC gives 500W to some cards. But on the Strix we think it’s anywhere from 520-575W because it doesn’t report proper power usage on one of the ports and gives much better results than just having an extra 20W.


If the idea is that this happens because the 3rd 8pin's power isn't reported, the 520W KPE BIOS also doesn't report the 3rd 8pin correctly and would do the same thing. I personally see slightly less power limiting with the 520W BIOS on my Strix than the 500W BIOS, which was slightly better than the 480W BIOS.


----------



## HyperMatrix

mirkendargen said:


> Random thought I wonder if anyone has tried (I can just try myself too...)
> 
> Anyone tried the TUF BIOS on a Strix? My idea being that the XC 2x8pin BIOS helps FTW3's, would the TUF 2x8pin BIOS help 3x8pin Strixs more than the 520W BIOS and keep all the outputs since they have the same configuration?
> 
> 
> 
> If the idea is that this happens because the 3rd 8pin's power isn't reported, the 520W KPE BIOS also doesn't report the 3rd 8pin correctly and would do the same thing.


I’m basing it off of the pin not reporting, voltage and clocks staying much higher, and my 900W battery backup triggering whenever card was at full load and CPU went over 50% usage. Hasn’t happened in any other situation. It’s possible that the same thing happens with the KPE bios. I wouldn’t be able to test that. I believe Sheyster also had the same experience with the XOC bios on Strix.

Also consider the fact that the XC3 390W bios on the FTW3 allows for somewhere between 480-520W while not reporting 3rd-pin power, with the 2nd pin only reporting about 50% usage.

Another interesting fact to go with the PCIe slot conspiracy theories: the XOC bios only pulls 430W on most FTW3 cards, and it pulls 75W from the slot. On an FTW3 where the XOC bios gets somewhere around 500W, it only pulls 65W from the slot. The Strix with the XOC bios drops from the normal 75W slot usage down to 50W. Could have some relation. Freeing up PCIe slot utilization = more power pulled from elsewhere?


----------



## ttnuagmada

HyperMatrix said:


> I’m basing it off of the pin not reporting, voltage and clocks staying much higher, and my 900W battery backup triggering whenever card was at full load and CPU went over 50% usage. Hasn’t happened in any other situation. It’s possible that the same thing happens with the KPE bios. I wouldn’t be able to test that. I believe Sheyster also had the same experience with the XOC bios on Strix.


the KPE bios doesnt report 3rd 8pin either, and my personal results on clocks/power are the opposite of yours


----------



## HyperMatrix

ttnuagmada said:


> the KPE bios doesnt report 3rd 8pin either, and my personal results on clocks/power are the opposite of yours


So you’re getting lower scores with the KPE bios on your Strix? Are temps the same? Do you have the same issue with the XOC bios? Or do you mean that the Kingpin gives better results than the XOC? That's possible, since they're both similarly designed EVGA bioses, so they'd trigger the same mechanism in the Strix, but with a larger PL limit on the KPE bios.


----------



## ttnuagmada

HyperMatrix said:


> So you’re getting lower scores with the KPE bios on your Strix? Are temps the same? Do you have the same issue with the XOC bios? Or do you mean that the Kingpin gives better results than the XOC? That's possible, since they're both similarly designed EVGA bioses, so they'd trigger the same mechanism in the Strix, but with a larger PL limit on the KPE bios.


Im saying that i get slightly better results on the KPE bios. It's not a major difference, but it's there. I stay in the 1815-1830mhz range in furmark with the KPE, and i drop under 1800 with the XOC. My AX1600i also shows slightly more power usage with the KPE bios.


----------



## HyperMatrix

ttnuagmada said:


> Im saying that i get slightly better results on the KPE bios. It's not a major difference, but it's there. I stay in the 1815-1830mhz range in furmark with the KPE, and i drop under 1800 with the XOC. My AX1600i also shows slightly more power usage with the KPE bios.


Yeah that's definitely possible. I never said anything about what the KPE could or couldn't do. Was just explaining that the 500W XOC bios actually provides more than 500W. KPE, also being an EVGA bios like the XOC, is likely bugging out the Strix in the same way, but with even more internal headroom with the power limit. I should have tested the actual power draw when I had the Strix. I have an ax1500i. I just didn't want to install the 1.5GB bloatware iCue software to do it. Lol.


----------



## ttnuagmada

HyperMatrix said:


> Yeah that's definitely possible. I never said anything about what the KPE could or couldn't do. Was just explaining that the 500W XOC bios actually provides more than 500W. KPE, also being an EVGA bios like the XOC, is likely bugging out the Strix in the same way, but with even more internal headroom with the power limit. I should have tested the actual power draw when I had the Strix. I have an ax1500i. I just didn't want to install the 1.5GB bloatware iCue software to do it. Lol.


Ok i gotcha. I took your original reply to mean that the XOC bios was doing something the KPE wasn't. 

In any case, hitting the power limit is still stupid easy with either of them. My 1080 Tis would stay under the power limit, locked at 2100MHz with just the stock 380W Aorus Xtreme vbios, in literally anything but Firestrike. It's crazy to me that a 500W+ TDP is still not nearly enough to keep a 3090 consistently maxed out.


----------



## mirkendargen

mirkendargen said:


> Random thought I wonder if anyone has tried (I can just try myself too...)
> 
> Anyone tried the TUF BIOS on a Strix? My idea being that the XC 2x8pin BIOS helps FTW3's, would the TUF 2x8pin BIOS help 3x8pin Strixs more than the 520W BIOS and keep all the outputs since they have the same configuration?


Just tested the TUF BIOS out, it's a big nope. Power limited in Port Royal at 1950Mhz/1.045v. KPE 520W BIOS does 2100+ 1.1v almost never power limited a whole run.


----------



## HyperMatrix

mirkendargen said:


> Just tested the TUF BIOS out, it's a big nope. Power limited in Port Royal at 1950Mhz/1.045v. KPE 520W BIOS does 2100+ 1.1v almost never power limited a whole run.


Yeah, I just checked the power reporting on my FTW3 with the 390W XC3 bios, which also doesn't report one pin. At idle it reports 65W, but that's obviously incorrect since one pin isn't reporting and another is reporting at half. And while it's idle and reporting 65W, my total system power consumption is at 390W. Don't ask. Lol. If I remember correctly from previous cards, they would idle somewhere between 130-150W, since my card doesn't downclock.

Anyway, when the GPU goes to full load, which reports as 300W, system power usage goes up to 770W. So the card reports going up from 65W to 300W = 235W, but system power consumption goes up by 380W. Now, CPU usage is part of that. My CPU under non-AVX full load uses an extra 60W above idle, so it's not a big deal: let's give 15W of that extra system power to the 10-24% average CPU usage during the Port Royal loop. That means 365W more power at full load vs. idle. Considering idle is, as I said, somewhere between 130-150W from memory, this lines up perfectly with a 500-520W power limit from a bios that's rated for 390W, because it's not reporting pin power usage properly AND (possibly) because it reduces PCIe slot power usage.
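Working through those figures (all numbers quoted from the post above; a rough sanity check against wall power, not a measurement):

```python
# Figures quoted above: FTW3 with the XC3 390W bios, one pin not reporting
reported_idle_w, reported_load_w = 65, 300    # what the card claims
system_idle_w, system_load_w = 390, 770       # whole system at the wall
cpu_share_w = 15                              # estimated CPU share in Port Royal
card_true_idle_w = (130, 150)                 # remembered idle draw of the card

reported_delta = reported_load_w - reported_idle_w   # delta the card admits to
system_delta = system_load_w - system_idle_w         # delta at the wall
gpu_delta = system_delta - cpu_share_w               # delta actually hitting the GPU
true_peak = [idle + gpu_delta for idle in card_true_idle_w]
print(reported_delta, gpu_delta, true_peak)
```

The card admits to a 235W rise while the wall suggests ~365W hitting the GPU; adding the remembered 130-150W idle draw lands in the 495-515W range, consistent with the ~500-520W estimate in the post.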


----------



## Keninishna

thorswrath91 said:


> I did it for about 350 bucks; the results are definitely worth it. Bykski block and a Thermaltake kit for the rest. Can maintain a max of 65°C with a 390W bios flashed, with just a 240mm rad.


Yeah, I ended up giving in and doing a water cooling setup, and it's definitely worth it: temps are better and, best of all, it's almost silent at load.


----------



## thorswrath91

mardon said:


> Will the Gigabyte Bios flash to a Zotac Trinity do we know?


Yes, works like a dream, but expect higher thermals. I suggest you water cool if you choose to do so; the card loses stability at higher temps, negating the extra 40W envelope advantage.


----------



## thorswrath91

chanchan said:


> View attachment 2466500
> 
> 
> Hi everyone, I just joined the RTX 3090 family, albeit my card is the cheapest of the cheap. Rather than getting scalped for RTX 3080, a MSRP Zotac RTX 3090 seems like my best option for a GPU at the moment. I've been trying to finish this build for quite some time now (started in Feb) and waiting for Zen 3 + current gen GPU's.
> 
> Current overclocks: +100/+650, 66% static fan speed, 90% vcore (it's about 0.80v~ under Unigine Heaven)
> 
> Questions:
> 1- Are there any BIOS's worth flashing this card to? (I have no idea what's compatible or will work)
> 2- Is it worth to put this under water? (Barrow has a well priced block for this PCB)
> 3- I am using Unigine Heaven as a stability test (8-12h loops), is there anything else that is better suited for testing overclocks?
> 
> Now I just need to finish the wife's build with my hand-me-downs...
> View attachment 2466505


Fantastic.
1. I think the sweet spot is the Gigabyte Gaming OC one with the 390W power limit. Mind you, one DP port gets disabled after flashing.
2. You have to go water; the card loses stability at higher temps, negating the extra 40W envelope advantage.
3. I suggest you use Port Royal or the Boundary ray tracing benchmark (in demo mode for a loop) to test. It uses ray tracing, DLSS, and rasterization techniques, so it would really work the card.


----------



## thorswrath91

defiledge said:


> I have a tuf that I'm planning to run at 600W. Wondering if I will damage the die if the wattage is too high.


The first thing to die would be the power delivery. Not saying it would, but the GPU die would be the last thing to get fried.


----------



## Carillo

Hey. Anyone been running 5 mΩ resistors on all 6 shunts of a 2x8-pin card for a while? I'm currently running 8 mΩ on all 6, and according to the shunt calculator that should be around 643 watts on a 390-watt bios, but in my case I'm only seeing around 530 watts and still power limited. I'm more familiar with 3x8-pin cards. What are your recommendations to get around 650 watts? I also tried the KP 520W bios, but then I lost 60 watts. I'm reading power from the PSU, not software. Thanks
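For reference, the usual piggyback-shunt arithmetic behind those calculator numbers (a sketch; the 5 mΩ stock shunt value is an assumption that varies by card, and real cards rarely reach the full theoretical figure, as the post above shows):

```python
def shunted_power_limit(bios_limit_w: float, stock_shunt_mohm: float,
                        added_mohm: float) -> float:
    """A resistor soldered across each stock shunt lowers the effective
    sense resistance, so the card under-reads current. Real power at the
    reported limit scales by R_stock / R_parallel = 1 + R_stock / R_added."""
    return bios_limit_w * (1 + stock_shunt_mohm / added_mohm)

print(shunted_power_limit(390, 5, 8))  # 8 mOhm piggyback: ~634 W theoretical
print(shunted_power_limit(390, 5, 5))  # 5 mOhm piggyback: 780 W theoretical
```

With assumed 5 mΩ stock shunts, an 8 mΩ piggyback on a 390W bios gives roughly 634W on paper (close to the ~643W the calculator reports, which may assume a slightly different stock value), while a matching 5 mΩ piggyback doubles the limit.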


----------



## mattxx88

Carillo said:


> Hey. Anyone been running 5 mΩ resistors on all 6 shunts of a 2x8-pin card for a while? I'm currently running 8 mΩ on all 6, and according to the shunt calculator that should be around 643 watts on a 390-watt bios, but in my case I'm only seeing around 530 watts and still power limited. I'm more familiar with 3x8-pin cards. What are your recommendations to get around 650 watts? I also tried the KP 520W bios, but then I lost 60 watts. I'm reading power from the PSU, not software. Thanks


I shunted my FE last Sunday (full 5 mΩ) and played for about 5 hours yesterday.
I see 650W peaks at the wall (Kill A Watt), plus a 9900KS @ 5.3GHz.
Also, 2145MHz stable on air is amazing; waiting for the waterblock.


----------



## jodasanchezz

deleted


----------



## Sheyster

HyperMatrix said:


> I’m basing it off of the pin not reporting, voltage and clocks staying much higher, and my 900W battery backup triggering whenever card was at full load and CPU went over 50% usage. Hasn’t happened in any other situation. It’s possible that the same thing happens with the KPE bios. I wouldn’t be able to test that. *I believe Sheyster also had the same experience with the XOC bios on Strix.*


My assessment was that on air the 500W EVGA BIOS should be used. Giving up 1000 RPM on the middle fan is just not a good idea on air. Locked at 100% using the KPE 520W BIOS it's actually only running at 66% (~2000 RPM). My SP 4K Optimized testing was almost identical with both of them. One reason why the KPE might seem better is because the default clock is 1920 MHz and some guys may not be compensating for the difference when using the 1800 MHz 500W BIOS.

Finally, if folks are using the EVGA 500W BIOS on an air-cooled Strix, I highly recommend trying a small undervolt. I'm having good results with a 1000mv curve in AB. I'm not locking voltage like we discussed before. Naturally the curve to the right of the 1000mv point is flat at all higher voltage points.


----------



## jodasanchezz

I'm a bit confused, which bios gives me the highest power draw for a 3090 Strix?
The KPE or the EVGA 500W beta bios?
(BTW, watercooled: EK block and a MO-RA 420 radiator)

Thanks in advance


----------



## pat182

mirkendargen said:


> Random thought I wonder if anyone has tried (I can just try myself too...)
> 
> Anyone tried the TUF BIOS on a Strix? My idea being that the XC 2x8pin BIOS helps FTW3's, would the TUF 2x8pin BIOS help 3x8pin Strixs more than the 520W BIOS and keep all the outputs since they have the same configuration?
> 
> 
> 
> If the idea is that this happens because the 3rd 8pin's power isn't reported, the 520W KPE BIOS also doesn't report the 3rd 8pin correctly and would do the same thing. I personally see slightly less power limiting with the 520W BIOS on my Strix than the 500W BIOS, which was slightly better than the 480W BIOS.


only on water cause KPE reduce your fan speed


----------



## Luca Prinzi

Has anyone tried the 500 watt evga bios on the Auros extreme?


----------



## Thanh Nguyen

Anyone here has an extra 3090 that want to sell? I think I fried my pny 3090 or so. Pc no post when plug in 2 8 pin to the card.


----------



## stryker7314

Thanh Nguyen said:


> Anyone here has an extra 3090 that want to sell? I think I fried my pny 3090 or so. Pc no post when plug in 2 8 pin to the card.


RMA time.


----------



## stryker7314

stryker7314 said:


> How do you get afterburner to go to 121% power limit with kingpin bios? It goes 120 on a kingpin bios modified ftw3 ultra.


Anyone?
What program are yall using to overclock to get the available 121%?


----------



## Sheyster

jodasanchezz said:


> I'm a bit confused, which bios gives me the highest power draw for a 3090 Strix?
> The KPE or the EVGA 500W beta bios?
> (BTW, watercooled: EK block and a MO-RA 420 radiator)
> 
> Thanks in advance


Use the 520W KPE if you're on water. Fan speeds won't be an issue for you.


----------



## dangerSK

stryker7314 said:


> Anyone?
> What program are yall using to overclock to get the available 121%?


Precision


----------



## Sheyster

dangerSK said:


> Precision


Yuck...

Keep in mind the actual +PL percentage number is 120.9% so AB is probably just dropping the decimal portion in the UI. Precision is crap compared to AB, stay away.


----------



## MacMus

Thanh Nguyen said:


> Anyone here has an extra 3090 that want to sell? I think I fried my pny 3090 or so. Pc no post when plug in 2 8 pin to the card.


msi ventus 3090


----------



## stryker7314

Sheyster said:


> Yuck...
> 
> Keep in mind the actual +PL percentage number is 120.9% so AB is probably just dropping the decimal portion in the UI. Precision is crap compared to AB, stay away.


Agreed. Precision is a bloated pos and tries to force firmware updates or it won't run which is ridiculous, I would rather lose out on 1% to use AB if that's the case. Good to know about the decimal thing, thanks!


----------



## WayWayUp

I was never forced to update firmware. You can also X out the program so you don't deal with the overhead, so there's no performance penalty; your settings remain.
I use both programs but honestly don't see a difference between the two, unless you plan to use a curve, which can be useful sometimes.


----------



## indicajones

Card: EVGA 3090 FTW3 Ultra Hybrid.
XOC bios: limited to 400W after flash.
The XC 2-pin bios works better than the XOC 3-pin one.
Since the XC bios has a 360W limiter, would using the Gigabyte 390W bios be better without causing any issues (other than maybe a DisplayPort not working)?
Can someone link me the Kingpin 520W bios?
Is the Strix bios no longer a good idea for 3-pin cards?

The way I'm sitting right now, it seems like the 2-pin bios on a 3-pin card is the way to go if you are not shunting. Thanks for babysitting me, and thanks for the advice =)


----------



## ttnuagmada

jodasanchezz said:


> I'm a bit confused, which bios gives me the highest power draw for a 3090 Strix?
> The KPE or the EVGA 500W beta bios?
> (BTW, watercooled: EK block and a MO-RA 420 radiator)
> 
> Thanks in advance


I switched back and forth between them last night just to see what was up, and as far as I can tell, the KPE gives a little more headroom. It's not significant though, and it's still clearly not enough for the power limit to not still be an issue. My card will do 2190+, but it still bounces off the power limiter in GPU bound situations. (I have a 3090 Strix in a custom loop, temps stay below 50C)


----------



## HyperMatrix

ttnuagmada said:


> I switched back and forth between them last night just to see what was up, and as far as I can tell, the KPE gives a little more headroom. It's not significant though, and it's still clearly not enough for the power limit to not still be an issue. My card will do 2190+, but it still bounces off the power limiter in GPU bound situations. (I have a 3090 Strix in a custom loop, temps stay below 50C)


At 2200~ MHz and 45-49C temps, you should anticipate max power draw somewhere between 650-750W depending on the type of load. Definitely need to shunt.


----------



## ttnuagmada

HyperMatrix said:


> At 2200~ MHz and 45-49C temps, you should anticipate max power draw somewhere between 650-750W depending on the type of load. Definitely need to shunt.


Bleh, I guess i should have looked into this before i threw it in my loop. Not sure if i want to mess with it now lol.


----------



## jura11

thorswrath91 said:


> I did it for about 350 bucks; the results are definitely worth it. Bykski block and a Thermaltake kit for the rest. Can maintain a max of 65°C with a 390W bios flashed, with just a 240mm rad.


Hi there 

You need at least a 360mm radiator for the GPU alone, I would say; if you do CPU and GPU, then a 360mm plus a 240mm radiator is the absolute minimum I would do.

I have run a single 360mm radiator with an 8086K at 5.1GHz and an RTX 2080Ti in one loop, and GPU temperatures never broke 42-45°C.

I would stay away from the Thermaltake kits; they are some of the most rubbish kits out there.

Hope this helps 

Thanks, Jura


----------



## Sheyster

ttnuagmada said:


> Bleh, I guess i should have looked into this before i threw it in my loop. Not sure if i want to mess with it now lol.


Try a small undervolt for daily driving. It will help you stay within the power limit while gaming. I've had good success with 1000mv.


----------



## man1ac

Just got my FTW3 Ultra and tried the 500W Bios but my card won't go higher than 450W in any way.
Read on the Evga Forum that a lot of people have this problem?!


----------



## ttnuagmada

Sheyster said:


> Try a small undervolt for daily driving. It will help you stay within the power limit while gaming. I've had good success with 1000mv.


10-4. The power draw on these things is insane. I'm coming from 1080 Ti's, and the 380w bios on the Aorus Xtreme was enough to keep them locked at 2100 for any situation other than Firestrike.


----------



## HyperMatrix

man1ac said:


> Just got my FTW3 Ultra and tried the 500W Bios but my card won't go higher than 450W in any way.
> Read on the Evga Forum that a lot of people have this problem?!


Yes they're working on it. Use the XC3 bios instead. Gets around 500-520W.


----------



## man1ac

Does it have any disadvantage?


----------



## Falkentyne

man1ac said:


> Just got my FTW3 Ultra and tried the 500W Bios but my card won't go higher than 450W in any way.
> Read on the Evga Forum that a lot of people have this problem?!


Common problem. No one has been able to pinpoint the exact "cause", except that it's the PCIE slot power limiting the 8 pins from drawing more power (triggering a rail power limit at 75W).
The fix to this is to flash the XC3 (2 pin card) VBIOS onto the card, because it won't detect the third 8 pin as a power source for limiting TDP (but the physical hardware on the card must draw from the third 8 pin anyway).


----------



## HyperMatrix

man1ac said:


> Does it have any disadvantage?


No. Fan controls work. ICX sensors work. But power reporting won't be accurate.


----------



## man1ac

Thanks. Is there a way to get a link?


----------



## shiokarai

btw, the Cyberpunk 2077 stream embargo is lifted... currently 1.1 million people watching on Twitch. This is what 3090s were bought for, right? RIGHT?!


----------



## motivman

Thanh Nguyen said:


> Anyone here has an extra 3090 that want to sell? I think I fried my pny 3090 or so. Pc no post when plug in 2 8 pin to the card.


Geez, what happened? Did you check your cables to see if they might have melted on the PSU side? That's why I don't run the XOC or KP bios daily; one 8-pin pulls over 300W... I think your card might be fine, and maybe your cable is just fried.


----------



## mirkendargen

The new game ready driver for Cyberpunk boosted my Port Royal score ~2%.


----------



## Falkentyne

@HyperMatrix @Miguelios 
this was posted right on this forum yet everyone missed it :/
Seems I was right the entire time.

Spectroscopy Composition of Liquid Metal Thermal...

Greetings, I had access to a Niton XL3t hand-held x-ray spectrometer and wondered what actual differences between the three available brands of liquid metal thermal interface material, Thermal Grizzly's Conductonaut, Coollaboratory's Liquid Ultra, and Phobya's Liquid Metal. I purchased the...

www.overclock.net





Have fun!


----------



## man1ac

So how's a 14320 Port Royal score on my FTW3 @ 450W with +180 on the core? Doing 2130MHz, 2051MHz on average.
Is that a card worth putting under water? (100% fan, so completely unbearable.)


----------



## Carillo

MSI Ventus 3x OC 2x8 pin, Byksi Water block, KingPin bios. Card is pulling around 570 watt, hitting PL hard in Port Royal.


----------



## Gebeleisis

Carillo said:


> MSI Ventus 3x OC 2x8 pin, Byksi Water block, KingPin bios. Card is pulling around 570 watt, hitting PL hard in Port Royal.
> View attachment 2468655


Awesome result.

I have a few questions, so please bear with me.

Where did you buy the bykski?
What radiators are you using?
Was the test with chilled water?
Do you have a link for that bios for 2*8pin cards?


----------



## GAN77

Carillo said:


> MSI Ventus 3x OC 2x8 pin, Byksi Water block, KingPin bios. Card is pulling around 570 watt, hitting PL hard in Port Royal.


It's fantastic!


----------



## Benni231990

I thought a 2x8-pin card can't run a 3x8-pin bios? Wouldn't the card detect that one 8-pin isn't connected, so you lose 150W and can only get a maximum of 350-400W?


----------



## Carillo

Benni231990 said:


> i thougt a 2x 8pin card cant run a 3x8pin bios? so the card must detect 1x8pin isnt connected? so you must loose 150watt and cant get maximum 350-400 watt?


If you shunt it, you can.


----------



## Benni231990

ahhhhh ok


----------



## dante`afk

shiokarai said:


> btw, the Cyberpunk 2077 stream embargo is lifted... currently 1.1 million people watching on Twitch. This is what 3090s were bought for, right? RIGHT?!


And it can't even run it properly. No card can at the moment; maybe the GeForce 4000 series, lol.
CP2077 will be the new Crysis meme.



Carillo said:


> MSI Ventus 3x OC 2x8 pin, Byksi Water block, KingPin bios. Card is pulling around 570 watt, hitting PL hard in Port Royal.
> View attachment 2468655


Holy moly, what GPU clock is it holding? Temps?

Feels almost like you know something I don't: https://www.3dmark.com/pr/593985

/edit - ah I see, just checked the HOF, you're using either the window mod or chilled water, 16C lul


----------



## GQNerd

Falkentyne said:


> @HyperMatrix @Miguelios
> this was posted right on this forum yet everyone missed it :/
> Seems I was right the entire time.


Personally I never looked this up, and never said you were wrong... I just don't dive this deep into the composition of the products I use, aside from the basics. Instead I check reviews and base my choices on my personal experience testing different products through the years.

Currently I jump between Conductonaut and Thermaltake Silver King based on how quickly Amazon can get it here, lol. When I’m building for someone else, or just don’t need the performance, I use Kryonaut.

So good find, and research links for someone that's actually interested in it, but that's not me.


----------



## Carillo

dante`afk said:


> and it can't even run it properly. no card can at the moment, maybe geforce 4000 series lol.
> CP2077 will be the new crysis meme
> 
> 
> 
> holy moly, what gpu clock is it holding? temps?
> 
> feels almost you know something I dont; https://www.3dmark.com/pr/593985
> 
> /edit - ah I see, just checked the HOF, you're using either window mod or chilled water, 16c lul


The score on the HOF is my Strix card on chilled water, yes. My new score with the MSI was also with chilled water, but at 21C average. For some reason it's not coming up yet: 2211MHz average and +1600MHz on the memory.


----------



## dante`afk

How'd you go over 1500 on memory, Precision?


----------



## Carillo

Gebeleisis said:


> Awesome result.
> 
> I have a few questions so please bare with me
> 
> Where did you buy the bykski?
> What radiators are you using?
> Was the test with chilled water?
> Do you have a link for that bios for 2*8pin cards?


Thanks! AliExpress, chilled water, and a shunted card.

BIOS: EVGA RTX 3090 VBIOS


----------



## Carillo

dante`afk said:


> how'd you go over 1500 on memory, precision?


Yes. I can run 1900, but it's power limited by the PCIe shunt. I'm re-shunting Friday.


----------



## GQNerd

dante`afk said:


> how'd you go over 1500 on memory, precision?


My shunted and water-cooled Strix can go all the way to +1650 mem in Precision before crashing intermittently; I've had a few successful PR runs at +1750.
I'm being held back by my core clocks; the card crashes at any frequency above 2220MHz, and can only hold 2190 max.

Picking up a second Strix on Saturday for binning. Hoping I can get to 2250-2280.


----------



## Sheyster

Falkentyne said:


> @HyperMatrix @Miguelios
> this was posted right on this forum yet everyone missed it :/
> Seems I was right the entire time.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Spectroscopy Composition of Liquid Metal Thermal... (www.overclock.net)
> 
> 
> 
> 
> 
> Have fun!


Guess I'll hang on to my old trusty tube of Phobya LM since they're all apparently the same. I think I paid $7 for it. The last time I used it was to de-lid a 4790K; I used it under the IHS only. It made a huge difference!


----------



## dante`afk

Carillo said:


> The score on the HOF is my Strix card on chilled water, yes. My new score with the MSI was also with chilled water, but at 21C average. For some reason it's not coming up yet: 2211MHz average and +1600MHz on the memory.


Still wondering how; my average clock is higher than yours, though my temps are obviously higher with no chilled water. Yet 500 points less.


----------



## stryker7314

man1ac said:


> Just got my FTW3 Ultra and tried the 500W BIOS, but my card won't go higher than 450W in any way.
> I read on the EVGA forum that a lot of people have this problem?!


I used the 520W Kingpin BIOS; it works great on my FTW3 Ultra. So far I have seen up to 505W and 128% draw.









EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com





I take no responsibility for any issues if you decide to use the bios.


----------



## vmanuelgm

Cyberpunk 2077 gameplay at 3440x1440, highest settings with DLSS on... starting the game as a Nomad...

Allow some time for YouTube processing...


----------



## jura11

Thanh Nguyen said:


> Anyone here have an extra 3090 they want to sell? I think I fried my PNY 3090 or something. The PC won't POST when I plug the two 8-pins into the card.


I think someone here has an RTX 3090 for sale, just check back a few posts.

Hope this helps.

Thanks, Jura


----------



## MacMus

jura11 said:


> I think someone here has an RTX 3090 for sale, just check back a few posts.
> 
> Hope this helps.
> 
> Thanks, Jura


yeah me


----------



## jura11

MacMus said:


> yeah me


I knew someone here had an RTX 3090 for sale, but I literally couldn't find your post.

Thanks, Jura


----------



## HyperMatrix

Could someone with shunted/water cooled 2205MHz+ card check the power usage in Cyberpunk 2077? It’s giving my card a pretty good thrashing.


----------



## MacMus

jura11 said:


> I knew someone here had an RTX 3090 for sale, but I literally couldn't find your post.
> 
> Thanks, Jura


If nobody DMs me by the end of the week, I will return it to the store. The 3090 is in its original box.


----------



## changboy

Someone posted the procedure for updating the BIOS using nvflash, but I couldn't find it. Can someone post it again?

When I tried, I received an error that it can't read the file. I did --protectoff first, but maybe I typed something wrong when it was time to flash.


----------



## jfry94

Hi guys, I have a Zotac 3090 Trinity. What is the best BIOS to flash onto it, considering I need all three DisplayPort connectors to still work, as I have a triple monitor setup? I'm slowly working my way through the last 300 pages of this thread. Did I make a mistake in getting this specific card? It was the only one available in the UK at under £1500. I plan on slapping a block on it and water cooling it, so while I would like some amount of overclocking, I won't be going super hard with it.


----------



## long2905

jfry94 said:


> Hi guys, I have a Zotac 3090 Trinity. What is the best BIOS to flash onto it, considering I need all three DisplayPort connectors to still work, as I have a triple monitor setup? I'm slowly working my way through the last 300 pages of this thread. Did I make a mistake in getting this specific card? It was the only one available in the UK at under £1500. I plan on slapping a block on it and water cooling it, so while I would like some amount of overclocking, I won't be going super hard with it.


Just put the KFA2 VBIOS on it. If you want to shunt mod, then it's best to read through the whole thread.


----------



## dante`afk

HyperMatrix said:


> Could someone with shunted/water cooled 2205MHz+ card check the power usage in Cyberpunk 2077? It’s giving my card a pretty good thrashing.


Don't have my GPU on a KAW atm, but I've seen total PSU draw going between 500W and 800W while playing for 2 hours.


----------



## HyperMatrix

dante`afk said:


> Don't have my GPU on a KAW atm, but I've seen total PSU draw going between 500W and 800W while playing for 2 hours.


Throttling at all or able to maintain 2205+?


----------



## DrunknFoo

HyperMatrix said:


> Could someone with shunted/water cooled 2205MHz+ card check the power usage in Cyberpunk 2077? It’s giving my card a pretty good thrashing.


1080W draw on my Kill A Watt after playing a bit... I didn't look to see how much of that was possibly CPU/system. Then again, when I bench at 5.4 with a -2 AVX offset in Prime, max CPU draw is roughly 250W; at 5.3 with -2, I believe the draw was roughly 220-225W. Either way, yes, Cyberpunk appears to draw quite a bit (not at those clocks, nor do I have a block).


----------



## Carillo

dante`afk said:


> Still wondering how; my average clock is higher than yours, though my temps are obviously higher with no chilled water. Yet 500 points less.






Intel


----------



## mirkendargen

HyperMatrix said:


> Could someone with shunted/water cooled 2205MHz+ card check the power usage in Cyberpunk 2077? It’s giving my card a pretty good thrashing.


It's maxing out my unshunted Strix with the 520W BIOS, pretty much steady at 2100MHz and 1.087V. ~5% higher clocks at the same voltage should be ~5% more power.

...And it's putting moderate load on 18 cores of my Threadripper, lol. Total power draw from my UPS is 750-800W.

It looks like the lower the DLSS quality, the less power usage too. I don't seem to get power-limit throttled much, if at all, with DLSS Performance, and am almost always slightly throttled with DLSS Quality.
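The "~5% clocks at the same voltage ≈ ~5% power" rule of thumb comes from the usual dynamic-power approximation P ∝ f·V². A quick sketch of that estimate, plugging in the 520W / 2100MHz / 1.087V figures from this post (the target clock of 2205MHz is just an example):

```python
# Dynamic power approximation: power scales roughly linearly with
# frequency and quadratically with voltage (P ~ f * V^2).
def scaled_power(p_watts, f_old, f_new, v_old, v_new):
    """Estimate new power draw after a clock/voltage change."""
    return p_watts * (f_new / f_old) * (v_new / v_old) ** 2

# 520 W at 2100 MHz / 1.087 V -> same voltage, 2205 MHz (+5% clock)
est = scaled_power(520, 2100, 2205, 1.087, 1.087)
print(round(est))  # ~546 W, i.e. about 5% more power
```

It's only a first-order estimate (leakage and memory power don't scale this way), but it's good enough for budgeting a power limit.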


----------



## Gebeleisis

Carillo said:


> Thanks! AliExpress, chilled water, and a shunted card.
> 
> BIOS: EVGA RTX 3090 VBIOS


Thanks m8!

How did you get your card to boot up with only 2 connectors instead of the 3 the BIOS is asking for?
What did you shunt?


----------



## man1ac

So what's better for the FTW3 Ultra card?
The XC3 BIOS?

EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com

Is this the correct one?

Or the Kingpin BIOS?

And last:
What kind of +core clock would be a good starting point for using the 500W BIOS?
Yesterday I could manage +180 at 450W.


----------



## Carillo

man1ac said:


> So what's better for the FTW3 Ultra card?
> The XC3 BIOS?
> 
> EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> Is this the correct one?
> 
> Or the Kingpin BIOS?
> 
> And last:
> What kind of +core clock would be a good starting point for using the 500W BIOS?
> Yesterday I could manage +180 at 450W.


+180 on one BIOS is not the same as on another BIOS. Same with temps.


----------



## Carillo

Gebeleisis said:


> Thanks m8!
> 
> How did you get your card to boot up with only 2 connectors instead of the 3 the BIOS is asking for?
> What did you shunt?


No magic needed for my card. I simply flashed the BIOS and reinstalled the driver. I shunted all 6 shunt resistors. If not shunted, you are better off with the Galax 390W BIOS.


----------



## man1ac

Thanks, so how do I compare cards then? Max and average clock?


----------



## Carillo

man1ac said:


> Thanks, so how do I compare cards then? Max and average Clock?


By doing just that: compare clocks, not offsets.


----------



## man1ac

Carillo said:


> By doing just that: compare clocks, not offsets.


Thanks.

So I got the FTW3 and FE and tried to compare them at 400W.

Result (www.3dmark.com)

14006 is the FTW3.
Which card would you keep? The block for the FTW3 will be way more expensive than the Bykski I have on the way (70€).
I don't want to shunt my FE and lose the warranty by doing that...


----------



## erazortt

jfry94 said:


> Hi guys, I have a Zotac 3090 Trinity. What is the best BIOS to flash onto it, considering I need all three DisplayPort connectors to still work, as I have a triple monitor setup? I'm slowly working my way through the last 300 pages of this thread. Did I make a mistake in getting this specific card? It was the only one available in the UK at under £1500. I plan on slapping a block on it and water cooling it, so while I would like some amount of overclocking, I won't be going super hard with it.


I guess this is the relevant bios:


jura11 said:


> Hi there
> 
> Here is another 390W BIOS, from KFA2, which has a 1740MHz boost clock; memory clocks are the same as the other 390W BIOSes.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> 
> 
> 
> 
> Hope this helps
> 
> Thanks, Jura


There has already been a reaction which I attribute to this BIOS:


Alex24buc said:


> Installed the KFA2 390W BIOS on my Palit GamingPro OC, replacing the Gigabyte Gaming OC one. I can confirm that the KFA2 is better: the fans run at a higher speed, I got the middle DisplayPort working again, and the temps at idle are lower because of the lower boost. So far everything works OK, and thanks again for the help.


And btw, he was apparently comparing that BIOS with the Gigabyte Gaming OC BIOS, which is also great but does have the issue of killing the middle DisplayPort.


----------



## Neon Lights

I have a PNY GeForce RTX 3090 XLR8 Gaming Uprising Epic-X and am power-limiting quite a lot in Cyberpunk 2077. What would be the best BIOS for me to flash?


----------



## Zink6

Hey guys, new to the OC business. I just got the EVGA 3090 XC3 Ultra. I heard that EVGA messed up the power limits in the BIOS for their cards, but I'm seeing now that they released a fix for that. Just wondering if that's true, and also if there is an overclocking guide I can follow for my card?


----------



## Garlon_GreeN

Good day. I purchased an MSI GeForce RTX 3090 Ventus 3X OC 24 GB. Does it have the throttling problem that the first revisions had? And is the card any good at all? Thanks.


----------



## bmgjet

Zink6 said:


> Hey guys, new to the OC business. I just got the EVGA 3090 XC3 Ultra. I heard that EVGA messed up the power limits in the BIOS for their cards, but I'm seeing now that they released a fix for that. Just wondering if that's true, and also if there is an overclocking guide I can follow for my card?


Flash this BIOS:








KFA2 RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com




That will let the power slider go to 111%, which is 390W.

Then put +80 on the core and +250 on the memory.
Set a custom fan profile so the fan goes to 83% at 65C, 90% at 70C, and 100% at 75C; the card should end up around 70C at full load.
Then do stress tests before increasing core and memory further, and make sure the card stays under 80C. Although the cooler you can get it, the higher it will boost; it's just going to be severely power limited by the 390W limit on 2-plug cards until you shunt mod them.
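A fan profile like that is just a piecewise-linear curve mapping temperature to duty cycle. A small sketch of how such a curve behaves (the 65C/70C/75C points are from the post; the 30%-at-40C idle floor is my own assumption):

```python
# Piecewise-linear fan curve. Points are (temp C, fan %).
# The 40C/30% idle floor is an assumed value, not from the post.
CURVE = [(40, 30), (65, 83), (70, 90), (75, 100)]

def fan_percent(temp_c):
    """Fan duty for a given GPU temperature, clamped at the endpoints."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            # linear interpolation between the two surrounding points
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

print(fan_percent(65))    # 83.0
print(fan_percent(72.5))  # 95.0
```

Tools like MSI Afterburner or Precision X1 interpolate between your curve points in essentially the same way.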

Here's what mine did with a water block and a 15 mOhm shunt mod.








I scored 15 008 in Port Royal (Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com




On 390W, don't expect much higher than 14K.


----------



## dante`afk

Carillo said:


> Intel


nah 



HyperMatrix said:


> Throttling at all or able to maintain 2205+?


Only if it can hold the 1.081V; if it drops, it goes down to 2190. But overall it's often holding 1.1V.


----------



## WayWayUp

Anyone try Balanced vs Quality DLSS mode in Cyberpunk?
There seems to be a huge performance penalty going to Quality, but I'm not sure it's worth it.


----------



## Sheyster

man1ac said:


> I don't want to shunt my FE and loose warranty by that...


Then you should probably stick with the 3x 8-pin card (FTW3) and use the 500W or 520W BIOS if you want to get the most out of your 3090 without shunting. From what I hear, the XC3 BIOS is the best option for the FTW3. Apparently the 500W BIOS is an epic fail on that card, but works fine with MSI and ASUS. See the EVGA forums for details.


----------



## dante`afk

WayWayUp said:


> Anyone try Balanced vs Quality DLSS mode in Cyberpunk?
> There seems to be a huge performance penalty going to Quality, but I'm not sure it's worth it.


For me at 5120x1440, ultra settings, RT ultra:

DLSS auto: 63 fps
DLSS quality: 54 fps
DLSS balanced: 64 fps
DLSS performance: 74 fps
DLSS ultra performance: 104 fps


----------



## HyperMatrix

WayWayUp said:


> Anyone try Balanced vs Quality DLSS mode in Cyberpunk?
> There seems to be a huge performance penalty going to Quality, but I'm not sure it's worth it.


Personally, on a 4K 27" HDR display I find anything below Quality mode kind of darkens the image and loses too much sharpness. It still looks "alright," to be honest, but the clarity in Quality mode is hard to pass up. So I keep swapping between Quality and Auto mode: 40-55 fps vs 60 fps. It's a hard choice because it's an FPS game. If it were The Witcher 3, I'd run Quality mode without hesitation. But aiming is a little harder at low FPS. I almost wish there were a DLSS hotkey I could set up, so I'd play the game in Quality and just switch to Auto and 60 fps when I'm about to start a gun fight.


----------



## MacMus

mirkendargen said:


> It's maxing out my unshunted Strix with the 520W BIOS pretty much steady at 2100Mhz 1.087V. ~5% higher clocks at the same voltage should be ~5% more power.


What BIOS do you use for 520W?


----------



## mirkendargen

MacMus said:


> what bios u use for 520W?


There's only one, Kingpin LN2 BIOS.


----------



## kx11

Maxed out Cyberpunk 2077 and tested the settings to get more fps @ 4K HDR.


----------



## jodasanchezz

Flashed the Kingpin BIOS on my Strix.
Undervolted to 1018mV; temps are good and clocks are fine.

But I'm a little worried because it keeps saying Temp and Therm in GPU-Z?


----------



## Falkentyne

jodasanchezz said:


> Flashed the Kingpin BIOS on my Strix.
> Undervolted to 1018mV; temps are good and clocks are fine.
> 
> But I'm a little worried because it keeps saying Temp and Therm in GPU-Z?
> 
> 
> 
> View attachment 2468764


That "Power/Therm" bug has been in the Nvidia drivers since the first 457.xx branch, including the Vulkan drivers. It was not in the 456.98 hotfix at all (probably one of the most stable drivers released).
You can ignore it. As you can see from your usage graph, it only happens when the load changes drastically. Sometimes it happens at pure idle.


----------



## stryker7314

What's the best non-liquid metal thermal paste?


----------



## rawsome

Hm, for me it's saying power limited at around 450W. I have the 520W BIOS on a non-modded MSI Trio. Any ideas what is going on? Running Kombustor I can see it going up to 520W.


----------



## Falkentyne

stryker7314 said:


> What's the best non-liquid metal thermal paste?


Thermalright TFX for sure. But it's too expensive unless you buy it from China.

Several people have tested it on laptops. It has the longevity of old thick pastes like Arctic Ceramique (meaning it's going to be stable for a VERY long time), and the only thing that may outperform it is Kingpin KPx.


----------



## menko2

Miguelios said:


> Nope, the Strix is actually one of the easier cards to disassemble. Watch videos beforehand; with Kryonaut/Extreme you'll be fine... it's LM that's dangerous


I will use the Kryonaut Extreme. Is it better to use the dot-in-the-center method, or to cover the whole GPU die?


----------



## rawsome

menko2 said:


> I will use the Kryonaut Extreme. Is it better to use the dot-in-the-center method, or to cover the whole GPU die?


Spreading the paste over the whole GPU is the safest way to get optimum performance; at least that was the conclusion in this vid:


----------



## Falkentyne

menko2 said:


> I will use the Kryonaut Extreme. Is it better to use the dot-in-the-center method, or to cover the whole GPU die?


A dot only in the middle is not sufficient.

Either draw a long X diagonally to each corner of the GPU (with the spatula or plunger), or apply the 5-dot "star" method (a drop in the middle and in each of the 4 corners), without spreading.
Grizzly recommends the long X pattern. Thermalright recommends the 5-dot "star" method, but I don't remember where on Thermalright's site I found that.
Grizzly also says in their KE instructions that you can apply a drop in the middle but spread it with the spatula afterwards.

Either one works. Just make sure the drops are not too tiny. I prefer the X or 5-dot method (without spreading).


----------



## MacMus

Any nice blocks for strix?


----------



## defiledge

Falkentyne said:


> Dot only in the middle is not sufficient.


This is false. I have used the dot method and it covered the entirety of the GPU die with no issues. I have confirmed this multiple times when disassembling my heatsink for shunt modding. Any method is fine as long as you aren't stingy with your application, like the GN video concluded.


----------



## Falkentyne

defiledge said:


> This is false. I have used the dot method and it covered the entirety of the gpu die with no issues. I have confirmed this multiple times when disassembling my heatsink for shunt modding.


That's if you apply the right size. That's the problem. People apply too little and end up with not enough coverage (this is what I mean by "not sufficient"). People apply too much and it makes a mess, a real problem now that some thermal pastes cost highway-robbery prices.
And not enough coverage on a GPU is a much bigger problem than on a CPU's IHS.

Doing the "5 dot" method or the X method helps eliminate the guesswork and doesn't waste as much.
Also, with Thermalright TFX (the best paste I've ever used), you do NOT want to spread it (it's so thick it's almost impossible to spread, and that just makes air bubble problems worse); you want to apply the "X" method, or the Thermalright-approved "5 dots" method.

Believe me, I used the dot method for years. It was my first method after the old-fashioned spreading days of Ceramique, Arctic Silver, and Radio Shack paste.


----------



## Sheyster

defiledge said:


> This is false. I have used the dot method and it covered the entirety of the gpu die with no issues. I have confirmed this multiple times when disassembling my heatsink for shunt modding. Any method is fine as long as you aren't stingy with your application, like the GN video concluded.


If using the "pea" method (a single drop in the middle), the drop really should be the size of a small pea, no smaller and not much bigger. I tend to favor the X method, but not all the way to the outer edge/corner of the CPU; I stop a little short of the edge, but I try to make the lines a little thicker to get a full spread on the IHS.

EDIT - For total paste noobs there is always this:

Thermaltake reinvents how to apply thermal paste to CPUs: What's wrong with a pea-sized amount of thermal paste? (www.pcgamer.com)


----------



## menko2

I think, after reading everything, I might just cover the whole GPU die with a thin layer, just in case I wouldn't cover everything otherwise. For the CPU I could use the small dot method.


----------



## mirkendargen

menko2 said:


> I think after reading everything I might just cover the whole gpu die with thin layer. Just in case I don't cover everything. For cpu I could use the small dot method.


The risk of doing this is leaving air pockets in your application, because there's no way to avoid folding layers of paste over air while spreading it.

It's easy to cover a GPU-sized die completely with the dot method... it's when you try to cover a credit-card-sized Threadripper heat spreader that things get tough...


----------



## Haldi

erazortt said:


> I guess this is the relevant bios:


Thanks a lot! Found your answer by using Search and flashed the 390W BIOS on my Zotac Trinity. Worked wonders!

SuperPosition 4K Optimized went from:
*16186* points: Stock
*17254* points: OC +240MHz core, +800MHz VRAM, 100% fan speed
*17724* points: OC 390W, +230MHz core, +800MHz VRAM, 100% fan speed
TimeSpy Extreme went from (graphics score):
9 802 Stock, average 1696 MHz
10 410 OC, average 1781 MHz
10 770 390W, average 1859 MHz

Sadly Port Royal only works with +200MHz core and +800MHz VRAM; otherwise the ray tracing needs too much performance and it crashes.
13 503 points, average 1906 MHz

For the cables this is not very healthy, as the 2nd cable now takes about 175W^^ But they should be able to bear that.












What do you guys think?
Is that an average OC result for an RTX 3090? Or do I have a little luck with the silicon for once?
I'm pretty sure I'm hitting a thermal limit right now, so further OC is only possible with an even higher power limit and a cooler GPU.


----------



## Neon Lights

I have a PNY GeForce RTX 3090 XLR8 Gaming Uprising Epic-X and am power-limiting quite a lot in Cyberpunk 2077. What would be the best BIOS for me to flash?


----------



## Sheyster

Neon Lights said:


> I have a PNY GeForce RTX 3090 XLR8 Gaming Uprising Epic-X and am power-limiting quite a lot in Cyberpunk 2077. What would be the best BIOS for me to flash?


It's a 2 x 8-pin card so one of the 390W BIOS will do, either Gigabyte or KFA2.


----------



## Neon Lights

Sheyster said:


> It's a 2 x 8-pin card so one of the 390W BIOS will do, either Gigabyte or KFA2.


Would a higher one work as well/work better? I was thinking of the FTW3 500W one.


----------



## Sheyster

Neon Lights said:


> Would a higher one work as well/work better? I was thinking of the FTW3 500W one.


That's a BIOS for 3 x 8-pin cards. It won't work properly in a 2 x 8-pin configuration. You can look into shunt modding your card if you feel you need more power, but I would not bother to do that unless you're water-cooling.


----------



## Neon Lights

Sheyster said:


> That's a BIOS for 3 x 8-pin cards. It won't work properly in a 2 x 8-pin configuration. You can look into shunt modding your card if you feel you need more power, but I would not bother to do that unless you're water-cooling.


In what ways would it not work properly?
I'm planning on water cooling my card.
I'd like to avoid shunt modding: while it worked on my 1080 using liquid metal, I suspect it killed my 1080 Ti after I had sold it to someone (I assume the LM ran onto the solder), and I could see it had already somewhat attacked the solder on my 2080 Ti.


----------



## Falkentyne

Neon Lights said:


> In what ways would it not work properly?
> I’m planning on water cooling my card.
> I’d like to avoid shunt modding (as, while it worked on my 1080 using liquid metal thermal paste, I suspect it killed my 1080 Ti after I had sold it to someone (I assume the LM ran onto the solder) and I could see it had already somewhat attacked the solder of my 2080 Ti).


We don't use LM to shunt mod anymore. We either use conductive paint (MG 842AR silver paint works better than the CircuitWriter pen, which worked fine on 2080 Tis but doesn't seem to work very well on 3090s) to bridge the conductive edges of the shunts completely and fully across the shunt, or we solder (stack shunts or replace them).


----------



## Neon Lights

Falkentyne said:


> We don't use LM to shunt mod anymore. We either use conductive paint (MG 842AR silver paint works better than that circuitwriter pen that worked fine on 2080 Ti's but doesn't seem to work on 3090's very well) to bridge the conductive edges of the shunts completely and fully across the shunt, or we solder (stack shunts or replace them).


Is the paint alright/can it cause any problems and does it work as well as LM?


----------



## Falkentyne

Neon Lights said:


> Is the paint alright/can it cause any problems and does it work as well as LM?


The paint acts similar to using a 15 mOhm stacked shunt. If you mean about it shorting something: no, as long as you cover the PCB around the shunts with Super 33+ tape or Kapton tape. The paint dries into a hard substance, so it doesn't have the corrosive effects of LM's gallium.

The main problem with the paint is dealing with shunts whose edges are depressed (<0.5mm) below the middle package, because it's difficult to get good contact on the lower conductive platform without accidentally getting paint on the PCB (this is where Super 33+ tape helps you a LOT!). The worst part is when you have a shunt's conductive edge sitting right next to a choke, because you have so little room available. This has even caused issues for people trying to solder(!), because they have to use the flux properly on the lower silver edges so the solder makes good contact and creates an "elevated" surface: you then apply more flux, stack the shunt on top, and melt the solder joint you created into a nice solid conductive circuit.

Having the original shunts fully flat and flush (no depressed edges) makes both painting and soldering much easier.

Here's an almost perfect demonstration of a soldering method, so simple even I could understand it.


----------



## bmgjet

Neon Lights said:


> In what ways would it not work properly?
> I’m planning on water cooling my card.
> I’d like to avoid shunt modding (as, while it worked on my 1080 using liquid metal thermal paste, I suspect it killed my 1080 Ti after I had sold it to someone (I assume the LM ran onto the solder) and I could see it had already somewhat attacked the solder of my 2080 Ti).


In the way that you end up with less power.
The 500W 3-plug BIOS on a 2-plug card will still only pull 141W per plug.
You only have 2 plugs, so you end up with a 357W real-draw power limit (2 x 141W from the plugs plus roughly 75W from the PCIe slot).
The 390W BIOS draws 155W per plug.

----------



## defiledge

Anybody tried SLI cyberpunk? I'm wondering what it takes to get 4k 60 without DLSS


----------



## mirkendargen

defiledge said:


> Anybody tried SLI cyberpunk? I'm wondering what it takes to get 4k 60 without DLSS


If multi-GPU even worked and scaled well, it would take more than two 3090s with ray tracing at Ultra (never mind Psycho).

But why would you want to? DLSS set to Quality is an objectively better-looking picture than native resolution due to the superior AA. It also won't hit 60 FPS currently on a 3090... but it's double the FPS of native 4K.


----------



## HyperMatrix

If it supported mGPU I would totally grab a second 3090 for a few weeks. Haha. Can't imagine how amazing it must be to play at 90-100 fps maxed out on DLSS Quality. Although I'm running into a lot of light bloom and halo issues with my PG27UQ. Really need an OLED for this. Maybe in a couple more generations it'll be doable. Haha.

Somewhat over-exaggerated, but all those areas are actually gray and you don't see the visual detail/objects behind them. A common problem with games that have a lot of dark scenes.


----------



## Nizzen

HyperMatrix said:


> If it supported mGPU I would totally grab a second 3090 for a few weeks. Haha. Can’t imagine how amazing it must be to play at 90-100fps maxed out on DLSS Quality. Although I’m running into a lot of light bloom and halo issues with my PG27UQ. Really need an OLED for this. Maybe in a couple more generations it’ll be doable. Haha.
> 
> View attachment 2468863
> 
> Somewhat over exaggerated but all those areas are actually gray and you don’t see the visual detail/objects behind them. Common problem with games that have a lot of dark scenes.


I used 3-way SLI with The Witcher 3 some years ago, 3x Titan X. Had to run SLI to get enough performance.

I want mGPU to work with Cyberpunk, so I can use 2x 3090 Strix.

We need the performance 😁


----------



## ttnuagmada

MacMus said:


> Any nice blocks for strix?


I have the EK block on mine. It seems a lot nicer than the EK blocks for the 1080 Tis I upgraded from.


----------



## mirkendargen

HyperMatrix said:


> If it supported mGPU I would totally grab a second 3090 for a few weeks. Haha. Can’t imagine how amazing it must be to play at 90-100fps maxed out on DLSS Quality. Although I’m running into a lot of light bloom and halo issues with my PG27UQ. Really need an OLED for this. Maybe in a couple more generations it’ll be doable. Haha.
> 
> View attachment 2468863
> 
> Somewhat over exaggerated but all those areas are actually gray and you don’t see the visual detail/objects behind them. Common problem with games that have a lot of dark scenes.


I'm playing on an OLED, you unfortunately aren't missing out because HDR is very broken in the game anyway.


----------



## wirx

kx11 said:


> maxed out Cyberpunk 2077 and tested the settings to get more fps @ 4k HDR


Maxed out would be without DLSS  
But great fps anyway (Y)

If I overclock my MSI Trio X 3090 (500W BIOS) beyond +75 on the GPU and/or +500 on the memory, Cyberpunk instantly throws me to the desktop. Other graphically demanding games run for hours at GPU +120 and MEM +1200 with no problem, at least most of them.
How do I get the card more stable? And why does this even happen? The card isn't crashing outright, it just throws me to the desktop once I add more boost, so it can't be temperature dependent. Some kind of Windows problem, or motherboard BIOS?


----------



## Gebeleisis

Haldi said:


> Thanks a lot! Found yout answer by using Search and flashed the 390W Bios on my Zotac Trinity. Worked wonders!
> 
> SuperPosition 4k Optimized went from:
> *16186 *Points: Stock
> *17254* Points: OC +240Mhz Core +800Mhz VRAM 100% Fanspeed
> *17724 *Points: OC 390W +230Mhz Core +800Mhz VRAM 100% Fanspeed
> View attachment 2468820
> View attachment 2468821
> View attachment 2468822
> 
> 
> TimeSpy Extreme went from: (Graphics Score)
> 9 802 Stock Average 1.696 MHz
> 10 410 OC Average 1.781 MHz
> 10 770 390W Average 1.859 MHz
> 
> Sadly Port Royale only works with +200Mhz Core and +800Mhz VRAM. else the Raytracing needs to much performance and it crashes
> 13 503 Points Average 1.906 MHz
> 
> for the Cable this is not very healthy as the 2nd Cable now takes about 175W^^ But they should be able to bear that.
> View attachment 2468823
> 
> 
> 
> 
> 
> What do you guys think?
> Is that average OC result for a RTX 3090? Or do i have little luck with the Silicon for once?
> I'm pretty sure i hit a thermal limit right now. so further OC is only possible with an even higher PowerLimit and cooler GPU.


I have a Palit 3090 and I am hitting your scores with the same 390W BIOS. My memory can only go to +650; any more and it crashes.

I am on stock air at 22-23 C ambient.

I am now looking into watercooling but I would need to change the case and I don't want to because I love the case that I have now.
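For a quick sanity check, the quoted SuperPosition 4K Optimized scores work out to the following percentage gains over stock (a trivial calculation, numbers taken from the quoted post):

```python
# Percentage gains of the quoted SuperPosition 4K Optimized scores over stock.
scores = {
    "stock": 16186,
    "OC +240 core / +800 VRAM": 17254,
    "390W BIOS +230 core / +800 VRAM": 17724,
}
base = scores["stock"]
for name, s in scores.items():
    print(f"{name}: {s} ({(s - base) / base * 100:+.1f}%)")
```

So the power-limit flash alone is worth roughly three percentage points on top of the plain overclock, which matches what the 390W BIOS crowd in this thread has been reporting.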


----------



## WilliamLeGod

wirx said:


> Maxed out would be without DLSS
> But great fps anyway (Y)
> 
> If I overclock my MSI trio x 3090 500W GPU over +75 and/or RAM over +500 Cyberpunk throws instantly to desktop. With other highend graphics games hours of playing with GPU +120 and MEM +1200 is no problem, well at least most of them.
> How to get videocard more stable? Or why this even happens? Videocard isn't crashing, it just throws to desktop after adding more boost, so it can't be temperature depended, some kind of windows problem or motherboard BIOS?


And screen space reflections on Psycho instead of Ultra lol


----------



## shiokarai

HyperMatrix said:


> If it supported mGPU I would totally grab a second 3090 for a few weeks. Haha. Can’t imagine how amazing it must be to play at 90-100fps maxed out on DLSS Quality. Although I’m running into a lot of light bloom and halo issues with my PG27UQ. Really need an OLED for this. Maybe in a couple more generations it’ll be doable. Haha.
> 
> View attachment 2468863
> 
> Somewhat over exaggerated but all those areas are actually gray and you don’t see the visual detail/objects behind them. Common problem with games that have a lot of dark scenes.


I'm playing on an LG 38GN950 38" ultrawide (3840x1600) with everything maxed out + RTX maxed out + HDR + [email protected], getting between 40-65 fps (G-Sync really helps) with my 3090 Strix OC... and I consider this okay (without DLSS: 20-30 FPS, lolol). ALSO, absolutely needed when using DLSS: image sharpening turned on in the NVIDIA control panel; it's a night-and-day difference. All in all, the game looks and plays phenomenally.


----------



## krs360

Quick question about the silicon lottery: is there a list somewhere of what's considered a gold sample, etc., in terms of numerical core/memory speeds on an OC?


----------



## mbm

#7015
Personal preference...
I don't feel 40-65 fps is playable, even with G-Sync.
I need a minimum of 100.


----------



## changboy

kx11 said:


> maxed out Cyberpunk 2077 and tested the settings to get more fps @ 4k HDR


If you have issues with the sound in this game, go into the sound options on your PC and drop the quality from 24-bit/192,000 Hz to 24-bit/96,000 Hz and the problem will be resolved. No more distorted sound, dude.


----------



## kx11

changboy said:


> If you have issue with the sound on this game you need to go in sound option on ur pc and drop the quality from 24bit/192 000khz to 24bit/96000khz and the problem will be resolve, no more distorded sound dude


Yeah, I fixed it already. Strange how the problem wasn't there in the first 2 hrs of playing the game.


----------



## changboy

mirkendargen said:


> I'm playing on an OLED, you unfortunately aren't missing out because HDR is very broken in the game anyway.


I find the HDR on my OLED amazing in this game and I don't know why you said that. Maybe you have the wrong settings under resolution and colors in the NVIDIA settings.


----------



## jomama22

wirx said:


> Maxed out would be without DLSS
> But great fps anyway (Y)
> 
> If I overclock my MSI trio x 3090 500W GPU over +75 and/or RAM over +500 Cyberpunk throws instantly to desktop. With other highend graphics games hours of playing with GPU +120 and MEM +1200 is no problem, well at least most of them.
> How to get videocard more stable? Or why this even happens? Videocard isn't crashing, it just throws to desktop after adding more boost, so it can't be temperature depended, some kind of windows problem or motherboard BIOS?


Different workloads create different demands on the GPU. It's the same reason everyone can run higher settings in Time Spy than in actual games. That's just how it works.


----------



## Haldi

I agree that Cyberpunk with HDR 10 scRGB looks really great. At least on my Samsung G9.


----------



## DooRules

Got my 3090 KP yesterday evening. Just started to play around with it. Using OC bios, no cold air yet, no classified tool or dipswitches used yet.









I scored 14 977 in Port Royal


Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32440 MB, 64-bit Windows 10




www.3dmark.com


----------



## changboy

Cyberpunk HDR is broken on PlayStation, I think, but on PC it's great.


----------



## HyperMatrix

changboy said:


> Cyberpunk hdr is broke on play station i think but pc is great


It’s broken on PC because the settings page is broken. You have to set your midpoint or whatever factor to somewhere between 0.6 and 0.75 otherwise it all looks washed out. Also the “sample images” on the HDR settings page aren’t affected by the changes you make, which makes it very hard to calibrate it.


----------



## changboy

HyperMatrix said:


> It’s broken on PC because the settings page is broken. You have to set your midpoint or whatever factor to somewhere between 0.6 and 0.75 otherwise it all looks washed out. Also the “sample images” on the HDR settings page aren’t affected by the changes you make, which makes it very hard to calibrate it.


You need an OLED hehehe just kidding you )


----------



## changboy

It's weird: in the Far Cry 5 benchmark at 4K I score 6558, but if I turn on the FPS counter from GeForce Experience my result drops to 5958!

So the FPS counter drops performance by about 10%, wowww. Is there a way to see FPS without losing 10%? I didn't check other games, but that's weird.
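For what it's worth, the two scores in the post work out to just under 10% overhead:

```python
# Overhead implied by the two Far Cry 5 benchmark scores quoted above.
without_counter, with_counter = 6558, 5958
overhead_pct = (without_counter - with_counter) / without_counter * 100
print(f"FPS counter overhead: {overhead_pct:.1f}%")  # -> 9.1%
```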


----------



## shiokarai

mbm said:


> #7015
> personal preferences..
> I dont feel 40-65 fps is playable even with gsync.
> I need min. 100


With an open-world RPG, about 50-60 FPS is okay for me; on the other hand, in Doom Eternal 200+ FPS is a must. BTW, DLSS Performance gives me about 100 FPS in this scenario in CP2077; the difference in quality is noticeable, but minimal in practice.


----------



## dante`afk

honestly, CP2077 is the first game where I don't notice any difference between 50 fps and 100 fps.

buttery smooth engine


----------



## mirkendargen

changboy said:


> I found the hdr on my oled is amazing on this game and i dont know why you said that, maybe you put wrong setting under resoltion and colours in nividia setting.


No, it's very broken. Blacks are raised (regardless of the settings), colors are squashed and create banding. It's extremely easy to see in the VR training area. It's broken in the same way on all platforms. It's possible non-OLED users can't see the raised blacks. You can find tons of people saying the same issues, try it in SDR and you'll see how much better it looks.


----------



## HyperMatrix

changboy said:


> You need an OLED hehehe just kidding you )


I do man. But I need a 32-34" OLED. Can't do all this 38" stuff. Waiting for the tech Gods to grace us with what should be an easy to make panel.


----------



## changboy

HyperMatrix said:


> I do man. But I need a 32-34" OLED. Can't do all this 38" stuff. Waiting for the tech Gods to grace us with what should be an easy to make panel.


My pc screen is a 65 inch oled, why would you play on a small screen ?


----------



## TooBAMF

mirkendargen said:


> No, it's very broken. Blacks are raised (regardless of the settings), colors are squashed and create banding. It's extremely easy to see in the VR training area. It's broken in the same way on all platforms. It's possible non-OLED users can't see the raised blacks. You can find tons of people saying the same issues, try it in SDR and you'll see how much better it looks.


Coming out of hiding to say I discovered this yesterday as well. 

On an OLED you can easily tell there is an HDR black level issue when launching the game because the black bars in the opening cutscene aren't true black (pixels off). Disabling HDR shows correct black in this regard.

Also, hey everyone. Back after lurking for 3+ years. Need to update my sig rig with my 3090 FE. Despite OCN losing a lot of life, I'm glad to see this thread is very active, similar to the Titan threads of old.


----------



## changboy

Under the NVIDIA panel resolution settings I use 4K YCbCr 4:2:0, color depth 12-bit. And you?


----------



## mirkendargen

changboy said:


> Under my nvidia panel resolution i put : 4k-ycbcr;4.2.0 color depth 12 bit and you ?


RGB 10-bit. It's a 10-bit panel, so 12-bit isn't doing anything, and 4:2:0 does bad things to text and fine lines. I tried both full and limited range for fun, but neither makes a difference (and shouldn't, because the game should output a proper HDR10 RGB signal regardless of your driver settings; the driver then processes it based on those settings).


----------



## HyperMatrix

changboy said:


> My pc screen is a 65 inch oled, why would you play on a small screen ?


I have an emperor gaming chair and can't mount anything bigger than about 32-34". I also already have a TV for games that I can play on the couch with a controller.


----------



## Carillo

Hello. I have a problem that I find a little strange. On my 2x8-pin MSI card I first shunted everything, including PCIe, with 8 mOhm. That worked well, but I wanted more, so I did a triple shunt: 4 mOhm on the five connector shunts and 8 mOhm on PCIe. I then hit the power limit earlier than ever; the card maxed out at 350 watts. I reduced PCIe to 5 mOhm, but still the same problem with all 2x8-pin BIOSes: max around 400 watts. Because of that I use the Kingpin 3x8 BIOS, where I at least get 520 watts.
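For anyone following along, the arithmetic behind shunt modding can be sketched like this: the card senses current as a voltage drop across each shunt resistor, so soldering a resistor in parallel lowers the resistance the controller sees and makes it under-read power. The resistor values below are illustrative examples, not measurements from any particular card:

```python
# Sketch of shunt-mod math: a resistor soldered in parallel with the
# stock shunt lowers the sensed resistance, so the controller under-reads
# current and the effective power limit rises. Values are illustrative.

def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock_shunt = 0.005     # 5 mOhm, a typical stock value (assumption)
mod_resistor = 0.008    # 8 mOhm soldered on top, as in the post above

combined = parallel(stock_shunt, mod_resistor)
scale = stock_shunt / combined   # how far real power exceeds the reading

print(f"combined shunt: {combined * 1000:.2f} mOhm")
print(f"a 390 W BIOS limit then allows roughly {390 * scale:.0f} W real draw")
```

This is also why stacking mods (the "triple shunt") can backfire: if the firmware sanity-checks the sensed values, or the per-rail balance shifts, the card can end up throttling earlier instead of later.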


----------



## shiokarai

TooBAMF said:


> Coming out of hiding to say I discovered this yesterday as well.
> 
> On an OLED you can easily tell there is an HDR black level issue when launching the game because the black bars in the opening cutscene aren't true black (pixels off). Disabling HDR shows correct black in this regard.
> 
> Also, hey everyone. Back after lurking for 3+ years. Need to update my sig rig with my 3090 FE. Despite OCN losing a lot of life, glad to see this thread is very active similar to the Titan threads of old


HDR in Cyberpunk 2077 is partially broken, it's obvious. Highlights/max brightness go higher, which is good, but black levels are also raised, which causes washed-out blacks/low contrast... but it's still better, IMO, than plain SDR. It should be at least as good as the HDR in Doom Eternal, which is really great.


----------



## changboy

The reason I use this setting on my OLED display in the NVIDIA resolution settings is for watching movies in 4K HDR, and it also works for games. But if I set 4:4:4 it's only 8-bit and I lose some color gradation for movies. 4:4:4 at 12-bit is impossible; the only way to get 12-bit is 4:2:0.

If you have a better explanation, then tell me.


----------



## jtisgeek

changboy said:


> Under my nvidia panel resolution i put : 4k-ycbcr;4.2.0 color depth 12 bit and you ?


4:2:0 would never be worth it just to run 12-bit.
You're better off running 4:4:4 and then going as high a bit depth as your screen will let you. For me, as a C9 OLED user, that's 10-bit.


----------



## jtisgeek

changboy said:


> Why i use this setting on my oled display in nvidia resolution setting its for watch movies in 4k hdr, and its also working for games, but if i put 4;4;4 its just 8 bits and i lost some grade colors for movies. 4;4;4 at 12 bits its impossible and the only way its 4;2;0 12 bits.
> 
> If you have better explanation then tell me


Well, you're right that you get more color, but you lose text clarity when going down to 4:2:0.

So you either have to switch settings when gaming, or find a middle-ground setting that works for both movies and games.


----------



## mirkendargen

changboy said:


> Why i use this setting on my oled display in nvidia resolution setting its for watch movies in 4k hdr, and its also working for games, but if i put 4;4;4 its just 8 bits and i lost some grade colors for movies. 4;4;4 at 12 bits its impossible and the only way its 4;2;0 12 bits.
> 
> If you have better explanation then tell me


Sounds like you have an old TV without HDMI 2.1 and can't do 10bit RGB/4:4:4 120hz. I'd do 60hz 10bit RGB for Cyberpunk because you aren't gonna get much more than 60FPS anyway, and lower than that a lot of places.


----------



## TooBAMF

changboy said:


> Why i use this setting on my oled display in nvidia resolution setting its for watch movies in 4k hdr, and its also working for games, but if i put 4;4;4 its just 8 bits and i lost some grade colors for movies. 4;4;4 at 12 bits its impossible and the only way its 4;2;0 12 bits.
> 
> If you have better explanation then tell me


Without HDMI 2.1 on the TV (and GPU, but assuming you have a 3090) 4:2:0 10 bit is technically what should be set if you're doing it manually.

On a C9 before I had a 3090 I'd set the HDMI input icon to gaming console when playing or watching HDR. I'd also set the nvidia resolution color settings to auto. PC mode with HDR (without full RGB/4:4:4 due to HDMI 2.0 limitations) just looked worse than console mode. 

With HDMI 2.1, full RGB and 10 bit are possible and PC mode looks perfect in most games. Cyberpunk HDR may look better with my HDMI 2.0 strategy, but if so it means there's an HDR issue with the game itself.


----------



## changboy

I have an LG C8 OLED with no HDMI 2.1. It's not very old, but I got it just before HDMI 2.1, and ya, I don't think I can do 10-bit at 4:4:4. But three months ago I got a new panel from LG to replace mine, and this one is perfect: no banding or marks.

I also have no issues with the firmware. I've read the X series had a G-Sync problem, only recently resolved, that raised the black level, and many users had all kinds of black-level bugs.
I don't feel the need to upgrade and risk a screen with banding and such until they bring out some new stuff in the near future.
My text looks fine on my Windows desktop at 4K with 300% scaling, and for what I do on it, it's okay. Maybe with very small text I'd have some problems, but right now everything looks good to me.


----------



## changboy

BTW, the LG panel can do 12-bit color depth, but only with Dolby Vision, nothing else; that's why people say Dolby Vision is the best 4K HDR picture possible, better than HDR10. HDR10+ is close to Dolby Vision because it uses dynamic color metadata like Dolby Vision does.


----------



## mollet

Has anyone here been able to flash a 3090 BIOS from a different manufacturer onto a 3090 Founders Edition?


----------



## JonnyV75

changboy said:


> My pc screen is a 65 inch oled, why would you play on a small screen ?


I see your 65” oled and raise you one 120” Stuart Filmscreen/JVC projector combo.


----------



## Falkentyne

mollet said:


> has anyone of you guys been able to flash a 3090 bios from a various manufacturer to an 3090 founders edition ?


No, and it can't easily be force-flashed with a hardware programmer either. The BIOS chip is a completely non-standard size, which means Pomona 5250 clips (with male-to-female jumper cables + a 1.8V adapter) won't fit, and you would have to desolder it and find a proper converter, as it isn't even SOIC-8 size; it's very likely a 1.8V chip.

You can see the vBIOS chip marked with red and white painted dots in the ASUS and EVGA pictures. Notice the different package on the FE.



https://www.techpowerup.com/review/evga-geforce-rtx-3090-ftw3-ultra/images/front.jpg





https://www.techpowerup.com/review/asus-geforce-rtx-3090-strix-oc/images/front.jpg





https://www.igorslab.de/wp-content/uploads/2020/09/Power-Scheme-Front-2.jpg


----------



## Sheyster

Falkentyne said:


> No


^ TL;DR version. 

Even if it were possible, I doubt an EVGA or ASUS BIOS would support the single 12-pin micro power connector. If you look at the BIOS file structure, there are sections dedicated to each power source (PCI-E, each power connector, etc.). The FE card is a completely different design power-wise compared to AIB cards. Additionally, since the card is NVIDIA's own exclusive in-house design, I would not be surprised at all if they changed the signing encryption for the BIOS. The FE can't be considered "reference" anymore; it is now effectively proprietary.


----------



## changboy

JonnyV75 said:


> I see your 65” oled and raise you one 120” Stuart Filmscreen/JVC projector combo.
> 
> View attachment 2468941


Wow, nice setup heheh, but just 120 inches! Hihihi. When I asked HyperMatrix why he wants to play on a small OLED, it wasn't to say "mine is bigger than yours," just to understand his point, and given his super chair I understand it; those chairs are kind of amazing for play, comfort, and position. Like, we could buy the 48-inch OLED, but it's not much cheaper than the bigger one (55-inch).


----------



## Sheyster

changboy said:


> Wow nice set up heheh, but just 120 inch ! hihihi, the way i ask super matrix about why he want play on small oled was not to said; mine is bigger then yours but just to understand is point and with is super chair i understand it, those chair are kind of amazing for play, comfort and position. Like we can buy the 48 inch oled but its less cheaper then the bigger one (55 inch).


I have the CX 48. Trust me when I say you don't want the 55 inch version on your desk. The 48 is plenty big enough on a normal desk.


----------



## mirkendargen

Sheyster said:


> I have the CX 48. Trust me when I say you don't want the 55 inch version on your desk. The 48 is plenty big enough on a normal desk.


+1 to this, I also have a CX 48.


----------



## JonnyV75

changboy said:


> Wow nice set up heheh, but just 120 inch ! hihihi, the way i ask super matrix about why he want play on small oled was not to said; mine is bigger then yours but just to understand is point and with is super chair i understand it, those chair are kind of amazing for play, comfort and position. Like we can buy the 48 inch oled but its less cheaper then the bigger one (55 inch).


I was just goofing around. Thought it'd be fun to show. We started with an 85" projection wall and slowly got bigger and more expensive. It's mainly for movies for me, but my son enjoys it for games. Cyberpunk looks very nice.

With regards to 3090s: I'm selling my FTW3 after I was able to get an FTW3 Hybrid. Very interested to see how my PR and TS scores change with an AIO. ...and if any of you fellow Canadians know of anyone looking to buy, I'm selling the 3-day-old FTW3 at the pre-tax price (no scalping).


----------



## BigMack70

mirkendargen said:


> No, it's very broken. Blacks are raised (regardless of the settings), colors are squashed and create banding. It's extremely easy to see in the VR training area. It's broken in the same way on all platforms. It's possible non-OLED users can't see the raised blacks. You can find tons of people saying the same issues, try it in SDR and you'll see how much better it looks.


Yup, SDR is the best image quality in Cyberpunk for the time being. Hopefully they fix it.

It's just nice to have a proper PC game for once that's not held back by consoles. You can even push the 3090 to nearly 100% usage at 720p resolution with all the settings maxed. Insane.

And DLSS is a game changer... There's very little difference between native 4k and DLSS quality. Even DLSS balanced looks very close, given the crazy performance boost over native.


----------



## DooRules

mirkendargen said:


> +1 to this, I also have a CX 48.


I used a corner wall mount for my CX 48", that way I could move the desk a little back, made a big difference for me.


----------



## changboy

DooRules said:


> I used a corner wall mount for my CX 48", that way I could move the desk a little back, made a big difference for me.


Ya, I understand you're close to your screen, but I'm 10 feet away and don't want a telescope to look at the picture, hehehe.


----------



## changboy

BigMack70 said:


> Yup, SDR is the best image quality in cyberpunk for the time being. Hopefully they fix.
> 
> It's just nice to have a proper PC game for once that's not held back by consoles. You can even push the 3090 to nearly 100% usage at 720p resolution with all the settings maxed. Insane.
> 
> And DLSS is a game changer... There's very little difference between native 4k and DLSS quality. Even DLSS balanced looks very close, given the crazy performance boost over native.


Ya, it's kind of broken on the HDR10 setting, but if you set the game to standard while your Windows is still in HDR, you just have to adjust your brightness and it looks the same as HDR, because the Windows-level setting is still engaged. My OLED doesn't tell me I'm in standard mode the way it does with cable TV, where I can enable simulated HDR. HDR is engaged on the TV and in Windows, so if you can get good blacks in your in-game picture, it's okay.

I believe a later patch will fix this.


----------



## HyperMatrix

I think I made a mistake buying a 3090. Should have gotten a 6900 XT. Ray Tracing doesn't even make that much of a difference in games. I bet most of you won't even be able to tell which one is with RT on. 

















...said no one ever


----------



## changboy

lol hehehehe funny


----------



## Neon Lights

TooBAMF said:


> Without HDMI 2.1 on the TV (and GPU, but assuming you have a 3090) 4:2:0 10 bit is technically what should be set if you're doing it manually.
> 
> On a C9 before I had a 3090 I'd set the HDMI input icon to gaming console when playing or watching HDR. I'd also set the nvidia resolution color settings to auto. PC mode with HDR (without full RGB/4:4:4 due to HDMI 2.0 limitations) just looked worse than console mode.
> 
> With HDMI 2.1, full RGB and 10 bit are possible and PC mode looks perfect in most games. Cyberpunk HDR may look better with my HDMI 2.0 strategy, but if so it means there's an HDR issue with the game itself.


So in HDR, 4:2:0 10 bit is better than RGB 8 bit to set in Nvidia Control Panel?
Do games even support 10 bit HDR output, because this wasn't possible (at RGB) until HDMI 2.1?


----------



## changboy

I've used 4:2:0 12-bit for years and have zero problems switching from film to game to HDR, but on my OLED I have three presets: one for non-HDR movies, one for 4K HDR movies, and another for gaming under Game mode with the input set to PC mode.

But this is only possible on NVIDIA cards; with AMD it doesn't work because there's no option for 4:2:0, and AMD has more bugs than you could dream of, hehehehe. I know what I'm saying because I have an AMD card too, lol.

Oh ya, I have a C8 OLED with HDMI 2.0... no HDMI 2.1 for me yet.


----------



## mirkendargen

Neon Lights said:


> So in HDR, 4:2:0 10 bit is better than RGB 8 bit to set in Nvidia Control Panel?
> Do games even support 10 bit HDR output, because this wasn't possible (at RGB) until HDMI 2.1?


4k 10bit RGB HDR has fit fine in HDMI 2.0 bandwidth for years....at 60hz. The big benefit of HDMI 2.1 bandwidth that's finally happening now is 4k 10bit RGB HDR at 120hz.


----------



## TooBAMF

Neon Lights said:


> So in HDR, 4:2:0 10 bit is better than RGB 8 bit to set in Nvidia Control Panel?
> Do games even support 10 bit HDR output, because this wasn't possible (at RGB) until HDMI 2.1?


In my view 10 bit is a requirement for it to be considered proper HDR. That said, I've seen a lot of people say they preferred 8 bit full RGB with windows HDR on when limited by HDMI 2.0. This is understandable when considering text degradation on 4:2:0. The display itself also may make incorrect assumptions based on the input. Like "PCs are usually full RGB and don't do HDR", or "consoles don't usually do full RGB" etc.

It's unclear what games, especially PC games support, because picture quality seems to vary quite a bit game to game. Ideally the game outputs full RGB regardless and the driver handles it. But if assets were mastered differently then things can look off. 

Movies, even with HDR, are 4:2:0. So in that sense HDR comes in at the 10 bit (HDR10) or above (Dolby Vision) level.

I think we're in a weird place now where all the newest devices with HDMI 2.1 don't have to compromise between full RGB/4:4:4 and 10+ bit, but parts of the industry may be stuck in a 4:2:0/movie mindset.


----------



## Neon Lights

TooBAMF said:


> In my view 10 bit is a requirement for it to be considered proper HDR. That said, I've seen a lot of people say they preferred 8 bit full RGB with windows HDR on when limited by HDMI 2.0. This is understandable when considering text degradation on 4:2:0. The display itself also may make incorrect assumptions based on the input. Like "PCs are usually full RGB and don't do HDR", or "consoles don't usually do full RGB" etc.
> 
> It's unclear what games, especially PC games support, because picture quality seems to vary quite a bit game to game. Ideally the game outputs full RGB regardless and the driver handles it. But if assets were mastered differently then things can look off.
> 
> Movies, even with HDR, are 4:2:0. So in that sense HDR comes in at the 10 bit (HDR10) or above (Dolby Vision) level.
> 
> I think we're in a weird place now where all the newest devices with HDMI 2.1 don't have to compromise between full RGB/4:4:4 and 10+ bit, but parts of the industry may be stuck in a 4:2:0/movie mindset.


I suppose my only question is, what do I set to get the most correct image possible?


----------



## Neon Lights

mirkendargen said:


> 4k 10bit RGB HDR has fit fine in HDMI 2.0 bandwidth for years....at 60hz. The big benefit of HDMI 2.1 bandwidth that's finally happening now is 4k 10bit RGB HDR at 120hz.


When I went to Wikipedia and looked up the HDMI 2.0 bandwidth, I came to a different conclusion. But if I'm wrong please tell me how.
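A rough back-of-the-envelope check of the disagreement above. The 4400x2250 totals are the standard CTA-861 timing figures for 4K60 (blanking included), and the 8b/10b arithmetic is an approximation of the TMDS link, not the full spec:

```python
# Does a 4K60 format fit in HDMI 2.0? The 18 Gbit/s TMDS link carries
# ~14.4 Gbit/s of pixel data after 8b/10b encoding. Blanking is included
# via the standard CTA-861 4K totals of 4400 x 2250 (an approximation).

HDMI20_DATA_GBPS = 18.0 * 8 / 10  # 14.4

def required_gbps(h_total, v_total, hz, bits_per_channel, channels=3):
    return h_total * v_total * hz * bits_per_channel * channels / 1e9

for bpc, label in [(8, "8-bit RGB"), (10, "10-bit RGB")]:
    need = required_gbps(4400, 2250, 60, bpc)
    verdict = "fits" if need <= HDMI20_DATA_GBPS else "does NOT fit"
    print(f"4K60 {label}: {need:.1f} Gbit/s -> {verdict}")

# 4:2:0 keeps full-res luma plus two quarter-res chroma planes (1.5x one channel)
need_420 = required_gbps(4400, 2250, 60, 10, channels=1) * 1.5
print(f"4K60 10-bit 4:2:0: {need_420:.1f} Gbit/s -> fits")
```

By this estimate, 4K60 8-bit RGB just fits while 10-bit RGB slightly exceeds HDMI 2.0's data rate, which supports the Wikipedia reading and would explain why 10/12-bit over HDMI 2.0 usually ends up as 4:2:2 or 4:2:0.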


----------



## BigMack70

changboy said:


> Ya its kind of broken on hdr10 setting but if you put at standar but ur windows is still on hdr then you have just to adjust your brightness and its the same then hdr, coz under windows setting is engage. My oled dont tell me iam on standar mode like on tv cable where i can put simulate hdr picture. HDR is engage on the tv and windows so if you can get good black on ur picture game then its ok.
> 
> I belive a patch later will adjust this


Black levels are broken in HDR. I'm on an LG C9 and yeah, it says HDR is on, but the HDR is broken - the black level is too high.


----------



## TooBAMF

Neon Lights said:


> I suppose my only question is, what do I set to get the most correct image possible?


For HDMI 2.0 you'd set 4:2:0 10 bit. However, due to the game, display or other factors you may find you prefer a different setting. Technically that should be the most correct if the game or display aren't making incorrect assumptions.

Personally I used to set it to auto in Nvidia CP when using Windows HDR because that looked best to me along with the console (not PC) input setting. I'd switch back to full RGB 8 bit and PC mode when not using HDR.


----------



## Neon Lights

TooBAMF said:


> For HDMI 2.0 you'd set 4:2:0 10 bit. However, due to the game, display or other factors you may find you prefer a different setting. Technically that should be the most correct if the game or display aren't making incorrect assumptions.
> 
> Personally I used to set it to auto in Nvidia CP when using Windows HDR because that looked best to me along with the console (not PC) input setting. I'd switch back to full RGB 8 bit and PC mode when not using HDR.


When I set it to 4:2:0 10-bit (on an LG C7), I get higher input lag. Also, the gray gradient in the NVIDIA CP shows banding.
Why would Game Console look better than PC?


----------



## mirkendargen

Neon Lights said:


> When I went to Wikipedia and looked up the HDMI 2.0 bandwidth, I came to a different conclusion. But if I'm wrong please tell me how.


I definitely remember the Nvidia drivers showing it as an option on my 2080Ti before I got a 3090, but it may have been tricky and forced 4:2:2 at the same time.

Also as far as 8bit HDR, in Windows I'm pretty certain it does dithering when you enable HDR at 8bit and it's pretty hard to detect the difference from 10bit.


----------



## TooBAMF

Neon Lights said:


> When I set it to 4:2:0 10 bit (on an LG C7), I get higher input lag. Also, the gray gradient on the Nvidia CP bands
> Why would game console look better than PC?


PC mode created banding on my C9. The modes may have made assumptions about the prevalence of HDR content on devices at the time. 

Windows didn't even have HDR mode in 2017, whereas the Xbox One S had it in 2016.


----------



## Neon Lights

TooBAMF said:


> PC mode created banding on my C9. The modes may have made assumptions about the prevalence of HDR content on devices at the time.
> 
> Windows didn't even have HDR mode in 2017, whereas the Xbox One S had it in 2016.


I don't know if what you said still applies (it could have been fixed through updates), but what I do know is that in PC mode I can have ISF Expert (Dark Room) mode activated together with Game mode (so lower input lag and some functions deactivated), whereas on the Game Console input it's only Game mode (I think, at least).


----------



## changboy

mirkendargen said:


> 4k 10bit RGB HDR has fit fine in HDMI 2.0 bandwidth for years....at 60hz. The big benefit of HDMI 2.1 bandwidth that's finally happening now is 4k 10bit RGB HDR at 120hz.


RGB doesn't do 10-bit, it's 8-bit; for 10-bit you need to go with YCbCr, at least on my C8. I've never used the Nvidia control panel on a C9 or CX because I've never owned them.

For Cyberpunk on PC you actually need to change a setting to get good blacks, but then you can't adjust HDR correctly, so put it back to normal and then adjust your brightness, because it will be too dark; then you're good to go, because your Windows and TV are still set to HDR on. That's why, when you choose normal instead of HDR, the picture comes out very dark with too much black. Again, just adjust the brightness in game and in your TV's game mode settings to play this game.

And yeah, I know it's not the best thing in the world, but until we get a patch for this it's better than nothing or a washed-out game.
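The bandwidth question in the quote above can be sanity-checked with a quick back-of-the-envelope calculation (a rough sketch: it assumes the standard CTA-861 4K60 timing of 4400x2250 total pixels and HDMI 2.0's aggregate 18 Gbps TMDS limit with 10 bits on the wire per 8-bit channel byte):

```python
# Rough HDMI 2.0 bandwidth check for 4K60 RGB (assumed CTA-861 timing).
H_TOTAL, V_TOTAL, REFRESH = 4400, 2250, 60   # total pixels incl. blanking
HDMI20_LIMIT_GBPS = 18.0                     # aggregate across 3 TMDS lanes

def tmds_gbps(bits_per_channel: int) -> float:
    pixel_clock = H_TOTAL * V_TOTAL * REFRESH            # Hz
    # 3 channels, each scaled by bpc/8, times 10 wire bits per 8-bit char
    return pixel_clock * 3 * (bits_per_channel / 8) * 10 / 1e9

for bpc in (8, 10, 12):
    rate = tmds_gbps(bpc)
    fits = "fits" if rate <= HDMI20_LIMIT_GBPS else "does NOT fit"
    print(f"4K60 RGB {bpc} bpc: {rate:.2f} Gbps -> {fits} in HDMI 2.0")
```

By this arithmetic, 8 bpc RGB just squeezes into HDMI 2.0 at 4K60 (~17.8 Gbps), while 10 bpc RGB does not, which is why 10-bit HDR over HDMI 2.0 typically means dropping to 4:2:2 or 4:2:0, and full 10-bit RGB at 4K120 needs HDMI 2.1.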


----------



## changboy

TooBAMF said:


> PC mode created banding on my C9. The modes may have made assumptions about the prevalence of HDR content on devices at the time.
> 
> Windows didn't even have HDR mode in 2017, whereas the Xbox One S had it in 2016.


PC mode as the input setting creates banding:

Yeah, this happens because you didn't set Nvidia to YCbCr 4:2:0 12-bit or 10-bit; with that setting the banding will go away in HDR movies, or at least you will see a lot less of it and it should be fine.

If you don't set PC mode as the input on your TV you will find other issues, like jagging in motion in movies and lower-resolution picture quality. Me, I use MPC-HC BE with madVR with chroma upscaling set to NGU / anti-aliasing at very high quality. The RTX 3090 is capable of that. I won't go deeper into that subject, but it's the best you can get on PC.


----------



## Zink6

bmgjet said:


> Flash this bios:
> KFA2 RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> That will get the power slider to go to 111%, which is 390W.
> 
> Then put +80 on the core and +250 on the mem.
> Custom fan profile so the fan goes to 83% at 65C, 90% at 70C and 100% at 75C; the card should end up around 70C at full load.
> Then you need to do stress tests before increasing core and mem further, and make sure the card stays under 80C. The cooler you can get it, the higher it will boost; it's just going to be severely power limited because of the 390W limit for 2-plug cards before you shunt mod them.
> 
> Here's what mine did with a waterblock and a 15 mΩ shunt mod.
> 
> 
> I scored 15 008 in Port Royal (Intel Core i9-7900X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> On 390W don't expect much higher than 14K.


Hey, thanks so much for the advice and the reply. Just got a couple of questions if you don't mind.
Would flashing the BIOS void my warranty? Also, I am not doing anything nuts; I am getting the Samsung G9 and want to play Cyberpunk maxed out at maybe around 144 fps. Do I need to flash the BIOS for that?
I was messing around in Precision X and I am able to run Cyberpunk with no issues with the GPU clock at +120 and mem at +400. Is that good? Can I get better numbers?
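For reference, the fan profile quoted earlier in the thread (83% at 65C, 90% at 70C, 100% at 75C) is just a piecewise-linear curve. A minimal sketch of how such a curve maps temperature to fan duty; the idle point below 65C is a made-up placeholder, and linear interpolation between points is an assumption about how Afterburner-style curves behave:

```python
# Piecewise-linear fan curve from the quoted profile: 83% @ 65C, 90% @ 70C,
# 100% @ 75C, clamped at both ends. The (40C, 30%) idle point is invented.
CURVE = [(40, 30), (65, 83), (70, 90), (75, 100)]  # (temp C, fan %)

def fan_percent(temp_c: float) -> float:
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            # linear interpolation between neighboring curve points
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # pinned at 100% above the last point

for t in (50, 65, 70, 72.5, 80):
    print(f"{t}C -> {fan_percent(t):.0f}%")
```

The steep ramp between 65C and 75C is the point of the profile: it trades noise for headroom right where GDDR6X cards start to throttle.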


----------



## TooBAMF

changboy said:


> PC mode as the input setting creates banding:
> 
> Yeah, this happens because you didn't set Nvidia to YCbCr 4:2:0 12-bit or 10-bit; with that setting the banding will go away in HDR movies, or at least you'll see a lot less of it.
> 
> If you don't set PC mode as the input on your TV you will find other issues, like jagging in motion in movies and lower-resolution picture quality. Me, I use MPC-HC BE with madVR with chroma upscaling set to NGU / anti-aliasing at very high quality. The RTX 3090 is capable of that. I won't go deeper into that subject, but it's the best you can get on PC.


Yeah, I use PC mode exclusively with the 3090 on a C9. These PC mode issues were with my 2080 Ti, even at 4:2:0 10- or 12-bit.


----------



## changboy

TooBAMF said:


> Yeah, I use PC mode exclusively with the 3090 on a C9. These PC mode issues were with my 2080 ti even on 4:2:0 10 or 12 bit.


Was your 2080 Ti plugged into the C9 too?
When you see banding, is it on compressed, low-quality files?
I know LG OLEDs are not perfect, but when the file is high quality it should be fine. The chip inside LG OLEDs is not the one in Panasonic OLEDs, which is a lot better at noise reduction; on the Panasonic, even in PC mode you still have access to all the other settings like noise reduction, but not on the LG. That's bad, but the LG is the best for gaming input lag.

If money were no barrier I would want the Panasonic OLED for the best picture quality, and I'm sure it would play games fine too. The Panasonic OLED beats the LG in many ways, and its chip does a better job at everything.

The limitation here is the TV itself. LG OLEDs are not perfect; the picture is nice, but your eyes start to see its limitations.
Same as when you play Shadow of the Tomb Raider: it looks nice, and when you go back to Rise of the Tomb Raider you see the picture is not as good, even though the first time you played Rise of the Tomb Raider it was wow. Eyes learn the same way ears do with sound systems. The more you upgrade, the more you spend, hehehe.


----------



## MacMus

Has anyone used the BPC from MP5?
I was thinking of using it with my EK backplate for the Strix 3090.


----------



## changboy

I think Cyberpunk's HDR is better now with the new patch 1.4; you can adjust the settings and everything changes.


----------



## HyperMatrix

changboy said:


> I think Cyberpunk's HDR is better now with the new patch 1.4; you can adjust the settings and everything changes.


So when you move the slider bars around on the HDR settings page, you see the effect in the sample images they're showing you? Because for me it's always the same static image. Nothing I do changes it.


----------



## shiokarai

HyperMatrix said:


> So when you move the slider bars around on the HDR settings page, you see the effect in the sample images they're showing you? Because for me it's always the same static image. Nothing I do changes it.


Max luminance was changing the images even with the launch version, and paper white too; only the midtone point didn't do anything on the calibration page, only in-game. So that's fixed now?


----------



## changboy

Yeah, it's fixed now; when I put midtone at 0.10 the picture is dark everywhere, and if I put 1.0 it's bright.
You can actually adjust everything. If you have a non-legit copy I think it doesn't work, or maybe you'll be able to download an update for a non-legit copy.
This game is very beautiful and will get a lot of updates I think, so you'd better buy it.


----------



## OC2000

I'm getting a low PR score

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5950X, ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)

My average clock is 2220 MHz. I should be getting into the low/mid 15k with this. Any ideas what could be causing it?

I got higher with my old clogged-up system running an average of 2178 MHz with loose memory timings:

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 3950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


----------



## Nizzen

OC2000 said:


> Im getting a low PR Score
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)
> 
> My average clock is 2220 Mhz. I should be getting into the low / mid 15k with this. Any ideas what could be causing this?
> 
> I got higher with my old clogged up system running an average of 2178 Mhz with loose memory timings
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 3950X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII DARK HERO (3dmark.com)


AMD 😅


----------



## OC2000

Nizzen said:


> AMD 😅


It hits a ceiling, and no additional overclock increases the score. Weird. I'm going to try different drivers and reinstalling the OS.


----------



## Redjester

New to the whole shunt mod world, but I ended up shunt-modding my Zotac 3090 before putting it under an EKWB waterblock (15 mΩ stacked on all shunts).

I've been running a few Time Spys with GPU-Z open to see where I'm at, and noticed that my Pin #1 power was maxing at 150.6 W while Pin #2 is at 176.8 W. Does this mean I missed or screwed up one of the shunts on Pin 1?

Here is the timespy NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10850K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)

And the GPU-Z to go with it.








Can anyone see what my newbie eyes cannot? Am I maxed out here?

*edited to add shunt ohms


----------



## OC2000

Redjester said:


> New to the whole shunt mod world, but I ended up shunt-modding my Zotac 3090 before putting it under an EKWB waterblock (15 mΩ stacked on all shunts).
> 
> I've been running a few Time Spys with GPU-Z open to see where I'm at, and noticed that my Pin #1 power was maxing at 150.6 W while Pin #2 is at 176.8 W. Does this mean I missed or screwed up one of the shunts on Pin 1?
> 
> Here is the timespy NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10850K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)
> 
> And the GPU-Z to go with it.
> View attachment 2469074
> 
> 
> Can anyone see what my newbie eyes cannot? Am I maxed out here?
> 
> *edited to add shunt ohms


Yep, it hasn't worked. If you haven't shunted the PCIe resistor you'll need to do that. If you have, then it's Pin 2 that's not shunted properly. I had the same issue about 30 pages back and posted images.


----------



## Redjester

Thanks! I knew I had probably messed something up, just was not sure what.

Am I correct in multiplying my max board / pin 1 & 2 by 1.33 to find my true voltage?


----------



## OC2000

Yep, 15 mΩ stacked gives a 1.33x multiplier, and that's for watts, not voltage.
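The 1.33x factor falls out of the parallel-resistance math. A minimal sketch, assuming the stock shunts are 5 mΩ (a common value on these cards, but an assumption; check your own board):

```python
# Stacking a 15 mOhm resistor on top of a 5 mOhm stock shunt puts them in
# parallel, so the card senses less voltage drop and under-reports power.
def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005     # 5 mOhm stock shunt (assumed)
R_STACKED = 0.015   # 15 mOhm stacked on top

r_eff = parallel(R_STOCK, R_STACKED)       # 3.75 mOhm effective
multiplier = R_STOCK / r_eff               # true power = reported * this
print(f"effective shunt: {r_eff * 1000:.2f} mOhm, multiplier: {multiplier:.3f}")
print(f"390 W reported -> {390 * multiplier:.0f} W actual")
```

So GPU-Z's per-rail watts (and the power limit itself) scale by 4/3 with this particular stack; a different stacked value gives a different multiplier.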


----------



## Falkentyne

Redjester said:


> New to the whole shunt mod world, but I ended up shunt-modding my Zotac 3090 before putting it under an EKWB waterblock (15 mΩ stacked on all shunts).
> 
> I've been running a few Time Spys with GPU-Z open to see where I'm at, and noticed that my Pin #1 power was maxing at 150.6 W while Pin #2 is at 176.8 W. Does this mean I missed or screwed up one of the shunts on Pin 1?
> 
> Here is the timespy NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10850K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)
> 
> And the GPU-Z to go with it.
> View attachment 2469074
> 
> 
> Can anyone see what my newbie eyes cannot? Am I maxed out here?
> 
> *edited to add shunt ohms


Does the Zotac have fully flat, flush shunts, or are the silver edges lower than the black middle?

Can you please set all of the watt readings to 'max' in GPU-Z next time?
Also, in HWiNFO64, can you have TDP% and TDP Normalized% showing?

If Normalized is less than 6% off TDP%, the mod isn't done that poorly. Not an ideal balance, but 25 W is a bit too high a delta in GPU-Z.
The two 8-pins in GPU-Z should be less than 20 W apart from each other if you used paint, and <10 W if you soldered, so yes, maybe you should re-do it.
I need the other values too, even SRC and MVDDC and the PCIe slot.
Is the Zotac's max unshunted power limit 390W (before multipliers)?


----------



## Esenel

OC2000 said:


> It hits a ceiling, and no additional overclock increases the score. Weird. Im going to try different drivers and reinstalling the OS.


Nizzen is correct.
Your AMD CPU is bottlenecking the GPU.
Around 300 points difference.
Tested with a colleague.


----------



## jomama22

Esenel said:


> Nizzen is correct.
> Your AMD CPU is bottlenecking the GPU.
> Around 300 points difference.
> Tested with a colleague.


What exactly are the clocks on the AMD CPU, and which CPU is it? I have no issues with my 5950X @ 4825.


----------



## HyperMatrix

shiokarai said:


> Max luminance was changing the images, though, even with the launch version, paper white, too - only midtone point doesn't do anything in the calibration page, only in-game. So that's fixed now?





changboy said:


> Yeah, it's fixed now; when I put midtone at 0.10 the picture is dark everywhere, and if I put 1.0 it's bright.
> You can actually adjust everything. If you have a non-legit copy I think it doesn't work, or maybe you'll be able to download an update for a non-legit copy.
> This game is very beautiful and will get a lot of updates I think, so you'd better buy it.


That's so weird. Other than max brightness, nothing else changes the sample images for me. Even at launch, if I made a change and went back into the game I would see a difference there. But the sample images on the settings page remain static.

Also, on the topic of piracy: all I will say is that I would never pirate a CDPR game, and I don't appreciate the accusation. They're one of the greatest developers ever. They make amazing games that focus on entertainment. They sell the game DRM-free. And they don't cram it full of microtransactions. That is incredibly rare and should be supported by everyone, preferably on GOG so they get 100% of the proceeds (and you also get the DRM-free/offline-play version, which you don't get with Steam), unless we want to be left with nothing but a bunch of Ubisoft and EA clones.


----------



## changboy

*HyperMatrix*
I didn't accuse you, hehehe; I said that for anyone who has tried it like this before buying, because it can have some bugs, I don't know. Myself, sometimes I download a non-legit game to try it, and sometimes I don't buy it because I don't like it. If I like it, I buy it.

And you're right that the sample picture doesn't change while you make a change; you need to go back in game to see what you have done, and that's how you can adjust it.
And yes, it can take many rounds of adjusting and going back in game to check what you've done.

Me, after patch 1.3 I uninstalled the game and did a fresh install for patch 1.4, and I didn't lose my save game.
I don't know if this makes a difference, but this is what I've done.


----------



## kx11

A must-read for people with Ryzen CPUs playing Cyberpunk 2077:

Cyberpunk 2077 gets FPS boost with a patch for AMD Ryzen CPUs - VideoCardz.com

Cyberpunk 2077 appears not to support AMD SMT? Users and reviewers noticed that Cyberpunk 2077 has problems utilizing the full potential of the AMD Ryzen CPUs, in particular the SMT (Simultaneous Multi-Threading) technology. The issue can easily be observed in Windows Task Manager, where the...


----------



## changboy

AMD bug as usual 😄


----------



## Jordel

changboy said:


> AMD bug as usual 😄


The Cyberpunk issue isn't an "AMD bug", though, but a poor GPUOpen fix for old FX processors that was unconditionally carried forward. CDPR will probably patch that shortly.


----------



## OC2000

Esenel said:


> Nizzen is correct.
> Your AMD CPU is bottlenecking the GPU.
> Around 300 points difference.
> Tested with a colleague.


Interesting. I'll turn the 4.8/4.75 all-core clock off, switch to PBO and see if that helps. The same was happening with my Time Spy GPU score too. I can turn on DOS for that.
Will try that tomorrow.


----------



## wilchy

mrpeters said:


> Any other GIGABYTE AORUS GeForce RTX 3090 MASTER 24G (GV-N3090AORUS M-24GD) owners?
> 
> Looking for some advice on best BIOS and comparing overclocking results.
> 
> A little underwhelmed with my initial 3DMark Timespy runs.
> 
> 
> 
> https://www.3dmark.com/3dm/53568696?


Hey mate, did you sort out this issue with the low scores?


----------



## jomama22

OC2000 said:


> interesting. I’ll turn the 4.8/4.75 all core clock off and switch to PBO and see if that helps.The same was happening too with my timespy gpu score. I can turn on DOS for that.
> Will try that tomorrow.


Honestly, need to confirm what's going on. I get an 18000+ CPU score in Time Spy with my 5950X, and a 22k GPU score with my 3090 FE averaging a bit over 2000 MHz and +900 on mem.









I scored 21 268 in Time Spy (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10) - www.3dmark.com





Others saying it's AMD are just dumb lol.

Comparing my GPU score to others with roughly the same clocks, Intel CPUs perform nearly the same or even slightly worse.

I have had issues with Nvidia drivers recently and inconsistent scores at the same clock frequencies. I've had to uninstall/reinstall drivers multiple times; reinstalling Windows was the only sure way to consistently get my scores back in line.

I have seen anywhere from a few hundred points to thousands. Look it up; it's a common issue with 3xxx series cards.

Also, you can look at the white paper for Port Royal. It is purposely built to minimize any sort of CPU bottleneck. You would need a severely handicapped CPU (multiple generations old) to see much of a difference between processors.


----------



## dante`afk

Replaced Kryonaut Extreme with Conductonaut; 4-5°C better temps under load. Let's see how it looks in a week.

By the way, is it enough to change the profile for 3dmark.exe alone, or do you have to change it for the Time Spy / Port Royal workloads too?













Esenel said:


> Nizzen is correct.
> Your AMD CPU is bottlenecking the GPU.
> Around 300 points difference.
> Tested with a colleague.


details


----------



## Nizzen

jomama22 said:


> Need to confirm what's going on honestly. I get 18000+ cpu score in time spy with my 5950x and 22k gpu score with my 3090 FE averaging a bit over 2000mhz and +900 on mem.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 268 in Time Spy (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> 
> 
> 
> 
> Others saying it's AMD are just dumb lol.
> 
> Comparing my GPU score to others with roughly the same clocks and Intel CPUs perform near the exact same or even slightly worse.
> 
> I have had issues with Nvidia drivers recently and inconsistent scores with varying clock frequencies. Have had to uninstall/reinstall drivers multiple times. Reinstalling windows was the only sure way to consistently get my scores back in line.
> 
> I have seen anywhere from a few 100's of points to 1000's. Look it up, it's a common issue with 3xxx series cards.


I'm dumb LOL









I scored 22 401 in Time Spy (Intel Core i9-10900K, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com


----------



## Falkentyne

dante`afk said:


> replaced kryonaut extreme for condactonaut, 4-5c better temps on load, let's see how it looks like in a week
> 
> btw, is it enough to change the profile for 3dmark.exe alone or do you have to change it for the timespy/port royal workload too?
> 
> View attachment 2469118
> 
> 
> 
> 
> 
> details


I have 70 grams of Galinstan and 70 more grams of raw materials to make more, but I can't be bothered using it. Applying it is easy enough, but all the work involved in cleaning off the heatsink if you have to disassemble the card for some reason, and possibly getting the pads dirty if you need alcohol to wipe off the old LM or any hardened layers, is just way too much work for a 4-5C improvement. It's so much easier to just use a high-end thermal paste like Kryonaut Extreme, Kingpin KPx or Thermalright TFX (TFX is better than Kryonaut; you just have to apply it correctly, like the "X" pattern, NOT by spreading, and let it cure about a week).

TFX is probably the best paste on the market right now.


----------



## jomama22

Nizzen said:


> I'm dumb LOL
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 22 401 in Time Spy (Intel Core i9-10900K, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com


Are you that thick-skulled, or are you really just that dumb? Not sure what point you're making by showing a shunted, chilled-water card clocked 200 MHz above mine?

It was posited that an AMD CPU is the reason for a lower graphics/Port Royal score, and I am saying it's not.

All you're doing is trying to make up for something small in your pants, it seems.


----------



## Nizzen

jomama22 said:


> Are you that thick skulled or are you really just that dumb? Not sure what point you're making by showing a shunted/chilled water card clocked 200mhz above mine?
> 
> It was posed that an AMD cpu is the reason for lesser graphics score/port royal score and I am saying it's not.
> 
> All you're doing is trying to make up for somthing small in your pants it seems.


Why so toxic? This is not the place. Guru3d is the place for toxic people 

Tuned dual-rank memory at 4600MHz+, high cache and an overclocked CPU (10900K) is worth about 500 extra points in Port Royal vs a 5GHz "stock" CPU and average memory.


----------



## jomama22

Nizzen said:


> Why so toxic? This is not the place. Guru3d is the place for toxic people
> 
> Tuned dualrank memory 4600mhz+, high cache and overclocked cpu (10900k) is like 500points extra in port royal, vs 5ghz "stock" cpu and average memory


If that's what you know, then why not say it in the first place? Merely posting a score gives zero context for that. Same goes for just saying "AMD lol".

Also, that's Intel vs Intel. That is why I compared my scores and GPU clock speed to Intel counterparts: to give context and reasoning that it's not merely an AMD issue.

You act as though you provided any help or reasoning, then get salty when called out for it. It just doesn't make sense.


----------



## Esenel

dante`afk said:


> details











Result - www.3dmark.com





@ivans89 tested it.
We have the same GPU. He has a shunt mod; I do not.
Both with the 520W bios.
Both +105 on core.

He changed to Intel and his score got a lot higher ;-)
Although his boost and average clocks were lower in that run.

It seems Time Spy etc. love low latency ;-)

----------



## changboy

Cyberpunk 2077 will support Dolby Vision with a new patch in the near future, in 2021 they said. I believe it will look amazing with that feature.


----------



## ivans89

Yes, I can confirm that.

Here are my Time Spy results.

5900X - Dark Hero - 3800 CL15 DDR4 (it won't run any higher) - 3090 Strix (+105 core, +1500 mem) with 8 mΩ shunt mod + voltage mod (EVC2, 1.2 V): https://www.3dmark.com/spy/15955193

10900K - Maximus XII Apex - 4400 CL16 DDR4 (maybe it could run higher, but I was too lazy to test more) - 3090 Strix (+105 core, +1500 mem) with 8 mΩ shunt mod (WITHOUT voltage mod): https://www.3dmark.com/spy/16153455

Need to do some more tests on Intel with higher clocks, faster RAM and the voltage mod on the Strix.


----------



## OC2000

Here is a new one with an average of 2232 MHz - less than 15k

I scored 14 911 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com

Here is my Time Spy with an average of 2195 MHz

I scored 22 087 in Time Spy (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com

vs an average of 2151 MHz on my other one

I scored 22 019 in Time Spy (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com

I'm going to try PBO shortly and see if that changes the GPU score, instead of using a 4.8/4.75 all-core.

I'm also using single-rank 2x 8GB 3800 CL14 13-13-13-26-36.

My 2x 16GB 3800 CL14 have just arrived, so I can try that too.


----------



## SuperMumrik

I can also confirm it.
Here is my turd Strix running an average of 2139 MHz

I scored 22 383 in Time Spy (Intel Core i9-10900K, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com


----------



## Cholerikerklaus

Same for me. I said it days ago: after I switched to the new Ryzen my scores are 500 points lower.


----------



## lolhaxz

How are people finding their OC in Cyberpunk 2077?

Interestingly, it seems vastly more sensitive to OC than any other game I have tried; anywhere up to 60 MHz less.

It also tends to take a very long time to crash, i.e. 30 minutes. With the curve flattened at 1.068 V (about 480-500 W) I'm not able to go above 2100-2115 MHz, but 2130-2145 MHz will take anywhere up to 30+ minutes to crash, and 2160 MHz perhaps 15 minutes.


----------



## shiokarai

lolhaxz said:


> How are people finding their OC in Cyberpunk 2077?
> 
> Interestingly, it seems vastly more sensitive to OC than any other game I have tried; anywhere up to 60 MHz less.
> 
> It also tends to take a very long time to crash, i.e. 30 minutes. With the curve flattened at 1.068 V (about 480-500 W) I'm not able to go above 2100-2115 MHz, but 2130-2145 MHz will take anywhere up to 30+ minutes to crash, and 2160 MHz perhaps 15 minutes.


I need to lower my OC by about 30-45 MHz on the core and about 50-100 on the mem to get it stable. Very, very sensitive to OC, imo.


----------



## ALSTER868

lolhaxz said:


> How are people finding their OC in Cyberpunk 2077?
> 
> Interestingly, it seems vastly more sensitive to OC than any other game I have tried; anywhere up to 60 MHz less.
> 
> It also tends to take a very long time to crash, i.e. 30 minutes. With the curve flattened at 1.068 V (about 480-500 W) I'm not able to go above 2100-2115 MHz, but 2130-2145 MHz will take anywhere up to 30+ minutes to crash, and 2160 MHz perhaps 15 minutes.


Same here. This game broke all my OC profiles, which until now had worked with no issues anywhere, even though the game is not that power demanding.


----------



## Falkentyne

Modern Warfare/Warzone already did this exact thing a long time ago. But it seems no one here except me plays that game.


----------



## menko2

I'm going to repaste my 3090 Strix with Kryonaut Extreme.

What thickness are the pads for the VRAM and other components on the front and back of the PCB?


----------



## shiokarai

Falkentyne said:


> Modern Warfare/Warzone already did this exact thing a long time ago. But seems no one here except me plays that game.


Warzone? Is this the game that Framechasers guy talks about so much? lol ;-)


----------



## Gbone

Greetings, fellow 3090 owners. I'm now a skeptical owner of an MSI 3090 Gaming X Trio, running with a 5.2 GHz 10700K / ROG Strix Z490-F.
I feel as though the performance of this card is a little underwhelming.
I game at 1440p, running the latest drivers etc. This card runs super hot in my NZXT H510 case; so hot that I have to remove the glass side panel in order to play games. In the first 5 minutes after installation, running a benchmark, I saw 85C temps before I panicked and shut the PC down.
Now I'm running custom fan profiles in Afterburner at 60-70% to keep temps below 75C. What I want to know is whether I should take this card back because MSI are duds, or whether flashing a new BIOS will make it usable.
CODMW BR at 1440p: I don't ever get over 160 FPS.
Cyberpunk looks great at 1440p, but seems to drop FPS badly sometimes.


----------



## menko2

Miguelios said:


> Yes, and yes (although remount probably helped as well).. and liquid metal is performing even better.
> 
> Temps during Full Load
> *Stock cooler:*
> Stock paste 65c - 75c
> Kryonaut 55c - 62c
> 
> *Water-cooled, Phanteks block*
> Kryonaut 45c - 52c
> Conductonaut 30c - 42c


I'm going to do the thermal paste change, but I'm going to need new thermal pads, as they will get damaged when disassembling the card.

What thickness are the thermal pads for the VRAM and the VRM?


----------



## Cholerikerklaus

OC2000 said:


> Here is a new one with an average of 2232 MHz - less than 15k
> 
> I scored 14 911 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com
> 
> Here is my Time Spy with an average of 2195 MHz
> 
> I scored 22 087 in Time Spy (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com
> 
> vs an average of 2151 MHz on my other one
> 
> I scored 22 019 in Time Spy (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 16384 MB, 64-bit Windows 10) - www.3dmark.com
> 
> I'm going to try PBO shortly and see if that changes the GPU score, instead of using a 4.8/4.75 all-core.
> 
> I'm also using single-rank 2x 8GB 3800 CL14 13-13-13-26-36.
> 
> My 2x 16GB 3800 CL14 have just arrived, so I can try that too.


 Could you send me your zentimings? 
Thx


----------



## mirkendargen

Falkentyne said:


> Modern Warfare/Warzone already did this exact thing a long time ago. But seems no one here except me plays that game.


Metro Exodus and Control also. The common denominator is the ray tracing + DLSS double whammy. I've been stability-testing my daily-driver overclock with the Metro Exodus benchmark all along.


----------



## iPEN

Hi,

I'm about to get a 3090 to replace my 2080 Ti. I would like to buy a 3x 8-pin card (so I'm not constrained by the power limit, though maybe that is not so important; any advice would be appreciated here), and based on what is available from our local retailers, I basically have two options: the MSI Gaming X Trio or the MSI Suprim X.

I've read that the electrical design of the Suprim X is slightly better. It is €100 more, but given the price of these cards, I do not think it makes a real difference at the end of the day. What's your opinion?

Thank you in advance!

Regards,

PS: Just in case, the rest of my setup consists of a 1200W PSU, a Ryzen 5950X + Crosshair VIII Formula, and 64GB of RAM (2x 32GB 3600MHz)


----------



## jura11

mirkendargen said:


> Metro Exodus and Control also. The common denominator is the ray tracing/DLSS doublewhammy. I've been stability testing my daily driver overclock with the Metro Exodus benchmark all along.


Usually, if it's stable in Metro Exodus or Control, it's stable in most RT games. In 3DMark I can quite easily run 2205 MHz or 2220 MHz on my RTX 2080 Ti Strix; in gaming usually 2160-2175 MHz (2190 MHz is possible in some games); in Control or Metro Exodus I'm running 2130-2145 MHz max.

With the RTX 3090 GamingPro my normal OC is +145 MHz on the core and 1295 MHz on the VRAM, and in Cyberpunk 2077, Control and Metro Exodus I'm running +100 MHz on the core with 1225-1250 MHz on the VRAM.

Hope this helps.

Thanks, Jura


----------



## GAN77

iPEN said:


> What's your opinion?


I'd also go for the Suprim X. The board looks better than the Trio's.
The Trio waterblocks are compatible.
It possibly has I2C control; that needs an expert opinion.


----------



## OC2000

Cholerikerklaus said:


> Could you send me your zentimings?
> Thx


Basically these, except I was using single rank. Dual rank hasn't made any difference, but same timings.


----------



## st33L

Hello experts,

I have already been reading here in the forum for a few days. I wanted some advice on which 3090 to get: a reference card or the FE to start with, or maybe a 3x 8-pin one like the Strix, if I have enough space in the case.


----------



## Cholerikerklaus

OC2000 said:


> basically these except I was using single rank. Dual rank hasn’t made any difference, but same timings


Thx man, I will try this myself. But my AIDA bench with CL14 is nearly the same.


----------



## BigMack70

Falkentyne said:


> Modern Warfare/Warzone already did this exact thing a long time ago. But seems no one here except me plays that game.


This was my experience as well. I was comfortably running a custom curve on my FE that operated between +115 and +150 on the core clock, and then I started playing Modern Warfare. It killed my OC down to +60 MHz, which is a bit lame, but it has never crashed in anything, including Cyberpunk.

Honestly, this card's performance scaling with overclocking sucks. Even a +150 offset did basically nothing meaningful for performance. I view Ampere as basically a "buy it, max the power slider, forget about it" product, unless you have a card where you can run a fully unlimited XOC BIOS and pump 600+ watts through it.


----------



## cakesg

BigMack70 said:


> This was my experience as well. I was comfortably running a custom curve on my FE that operated between +115 and +150 core clock, and then I started playing Modern Warfare. Killed my OC down to +60 MHz, which is a bit lame but has never crashed in anything, including cyberpunk.
> 
> Honestly, this card's performance scaling sucks with overclocking. Even a +150 offset basically did nothing meaningful for performance. I view Ampere as basically a "buy it, max the power slider, forget about it" product unless you have a card where you can run a fully unlimited XOC BIOS and pump 600+ watts through it.


I can go up to like +70 core on my Strix 3090 with the KPE 390W BIOS on air and run stress tests all day, but it's not stable in Warzone/Cold War at anything past the stock 1920MHz of the Kingpin BIOS.


----------



## mirkendargen

cakesg said:


> I can go up to like +70 core on my Strix 3090 with the KPE 390W BIOS on air and run stress tests all day, but it's not stable in Warzone/Cold War at anything past the stock 1920MHz of the Kingpin BIOS.


Ouch, and I thought my Strix was bad. I'm stable in Metro/Control/Cyberpunk at +60 on the KPE 1920 BIOS, and +120-135 in Port Royal.


----------



## cakesg

mirkendargen said:


> Ouch, and I thought my Strix was bad. I'm stable in Metro/Control/Cyberpunk at +60 on the KPE 1920 BIOS, and +120-135 in Port Royal.


Are you on air or water? I have the stock cooler on right now. Also, what is your mem offset?
But yeah, my Strix is horrible; I've wanted a Kingpin Hydro Copper for a while.


----------



## mirkendargen

cakesg said:


> Are you on air or water? I have the stock cooler on right now. Also what is your mem offset?
> But yeah my strix is horrible, I've wanted a kingpin hydrocopper for a while


Water, but that isn't really going to change the max clock speed it's stable at, just how high it will clock on average once heat-soaked. I can do Port Royal at +900 mem but can't at +1000. I leave it at +750 for 24/7 usage.


----------



## GQNerd

menko2 said:


> I'm going to do the thermal paste change, but I'm going to need new thermal pads as they will get damaged disassembling the card.
> 
> Which thickness are the thermal pads for the VRAM and the VRM?


I used the ones included with my Phanteks block; there are varying sizes, from 0.5mm to 1.75mm.


----------



## menko2

Miguelios said:


> I used the ones included with my Phanteks block; there are varying sizes, from 0.5mm to 1.75mm.


Thank you Miguelios.

Maybe the VRAM pads are the ones that get damaged while opening the card. In that case I could leave the stock foam ones everywhere except the VRAM.

Do you remember which thickness the VRAM pads are? Any help or advice you can give will be very appreciated. It's my first time repasting a GPU.


----------



## Gebeleisis

jomama22 said:


> Need to confirm what's going on honestly. I get 18000+ cpu score in time spy with my 5950x and 22k gpu score with my 3090 FE averaging a bit over 2000mhz and +900 on mem.
> 
> I scored 21 268 in Time Spy
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> Others saying it's AMD are just dumb lol.
> 
> Comparing my GPU score to others with roughly the same clocks and Intel CPUs perform near the exact same or even slightly worse.
> 
> I have had issues with Nvidia drivers recently and inconsistent scores with varying clock frequencies. Have had to uninstall/reinstall drivers multiple times. Reinstalling windows was the only sure way to consistently get my scores back in line.
> 
> I have seen anywhere from a few 100's of points to 1000's. Look it up, it's a common issue with 3xxx series cards.
> 
> Also, you can look at the white paper for port royal. It is purposefully built to minimize any sort of cpu bottleneck. You would need a severely handicapped cpu (multiple generations old) to see much of a difference between processors.


I have a 3090 and a 5950X.
The 3090 is on the stock BIOS with stock air cooling.
Case is a medium tower.

I scored 19 323 in Time Spy
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 131072 MB, 64-bit Windows 10
www.3dmark.com

This is my score with some undervolt + OC.


----------



## defiledge

I'm getting a temp-limit throttle at 70C after shunt modding, and my power limit slider is maxed. Does anyone else have this bug and know how to fix it?


----------



## Apothysis

Any folks with direct experience of the 3090 Suprim X here? Does it work with the Strix or Kingpin BIOS, and how are the bins?


----------



## bmgjet

defiledge said:


> I'm getting a temp limit throttle at 70C after shunt modding, and my powerlimit + power limit slider is maxed. Does any1 else have this bug and know how to fix it?


Reseat the cooler to make sure there are no hot spots on the die where the thermal compound isn't making contact.
Or the VRM is getting too hot from the higher load through it.


----------



## mattxx88

lolhaxz said:


> How are people finding their OC in Cyberpunk 2077?
> 
> Interestingly, it seems vastly more sensitive to OC than any other game I have tried, anywhere up to 60MHz less.
> 
> It tends to take a very long time to crash also, i.e. 30 minutes... with the curve flattened at 1.068V (about 480-500W) I'm not able to go above 2100-2115MHz, but 2130-2145MHz will take anywhere up to 30+ minutes to crash... 2160MHz perhaps 15 minutes.


The Division 2 destroyed mine even more.
I played CP2077 for 4 hours yesterday at 2100MHz / 1.000V without issues (curve OC)


----------



## WilliamLeGod

mattxx88 said:


> The Division 2 destroyed mine even more.
> I played CP2077 for 4 hours yesterday at 2100MHz / 1.000V without issues (curve OC)


Can u show the image of yr curve OC?


----------



## erazortt

st33L said:


> Hello experts,
> 
> I've been reading the forum here for a few days already. I'd like some advice on which graphics card to go for: is a reference-PCB 3090 or an FE the better starting point, or maybe a 3x8-pin card like the Strix, if I have enough space in my case?


Not sure what the question is, but first you need to know the maximum card size that fits in your system. Reference-PCB cards are the smallest; FE and 3x8-pin custom-PCB cards are considerably bigger.
Reference-PCB cards can only be flashed with a 390W BIOS, while 3x8-pin custom cards can be flashed with a 600W BIOS. The FE cards cannot be flashed at all and have a power limit of 400W. If you are willing to shunt mod, a reference-PCB card can then also be flashed with the 600W BIOS.
I, for example, can only fit a reference-PCB card in my case, but it's running great with the 390W BIOS, which is a significant step up from the default 350W.
And then there is of course the issue of availability, which currently puts FE cards out of reach.


----------



## pat182

lolhaxz said:


> How are people finding their OC in Cyberpunk 2077?
> 
> Interestingly, it seems vastly more sensitive to OC than any other game I have tried, anywhere up to 60MHz less.
> 
> It tends to take a very long time to crash also, i.e. 30 minutes... with the curve flattened at 1.068V (about 480-500W) I'm not able to go above 2100-2115MHz, but 2130-2145MHz will take anywhere up to 30+ minutes to crash... 2160MHz perhaps 15 minutes.


Strix on air, 2.1GHz lock with the 500W BIOS


----------



## dante`afk

Cholerikerklaus said:


> Same for me, I said so days ago. After I switched to the new Ryzen my scores are 500 points lower.


Oh well, but at least the IPC is faster in games, so more FPS than Intel.




iPEN said:


> I would like to buy a 3x8-pin card, so I'm not constrained by the power limit.


You'll be power limited even with a 3x8-pin card without a shunt mod.
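For context on why the shunt mod lifts the power limit: the card estimates its power draw by measuring the voltage drop across small shunt resistors, so soldering a second resistor in parallel lowers the effective resistance and makes the card under-report its draw. A back-of-envelope sketch, assuming the common case of stacking a 5 mOhm resistor on a stock 5 mOhm shunt (actual values vary by board, so treat the numbers as illustrative only):

```python
# Sketch: how paralleling a shunt changes reported vs. actual power.
# Assumes a stock 5 mOhm shunt with a 5 mOhm resistor stacked on top;
# real boards differ, so these values are illustrative only.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two resistors in parallel."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def actual_power(reported_w: float, stock_mohm: float, added_mohm: float) -> float:
    """The controller computes current as V_drop / R_stock, but the real
    resistance is lower, so actual current (and power) is scaled up by
    R_stock / R_effective."""
    r_eff = parallel(stock_mohm, added_mohm)
    return reported_w * (stock_mohm / r_eff)

print(parallel(5.0, 5.0))             # 2.5 (mOhm effective)
print(actual_power(350.0, 5.0, 5.0))  # 700.0 -> a "350 W" reading is really ~700 W
```

So with equal-value shunts stacked, the card hits its power limit at roughly twice the wattage it thinks it is drawing, which is also why reported readings after a shunt mod look suspiciously low.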


----------



## Esenel

dante`afk said:


> Oh well, but at least the IPC is faster in games, so more FPS than Intel.


Not really.
As games benefit a lot from memory copy bandwidth and low latency, Intel is still in the lead in most cases.

Zen 3's IPC is held back by its slow Infinity Fabric.


----------



## dante`afk

Esenel said:


> Not really.
> As games benefit a lot from memory copy bandwidth and low latency, Intel is still in the lead in most cases.
> 
> Zen 3's IPC is held back by its slow Infinity Fabric.


Don't destroy my dreams; I switched from a year of Intel at 5.2GHz with DDR4-4700 CL17 to AMD.


----------



## bwana

pat182 said:


> Strix on air, 2.1GHz lock with the 500W BIOS


Where does one find a 500W Strix BIOS?


----------



## Sheyster

bwana said:


> Where does one find a 500w strix bios?


It's the EVGA XOC 500W BIOS; it works with other 3-connector cards like the Strix and MSI Trio.


----------



## pat182

Sheyster said:


> It's the EVGA XOC 500W BIOS, it works with other 3 connector cards like the Strix and MSI Trio.


I could probably do 2150 or 2180MHz, but I've locked the voltage to 1.075V because that way it hovers around 500W. Going with an offset it can hit 550W, and I'm playing 4h at a time, so I don't want anything bad to happen. I often switch between the normal and XOC BIOS to see the gains: at 480W it averages 1995MHz at 4K RT max; at 2100MHz with 500W it's 4-5 more FPS.

This game can run on air at 1.1V and around 580W without hitting the power cap, it's crazy.


----------



## WayWayUp

It's insane how much power Cyberpunk pulls. I have a shunted card on air and the heat it's generating is crazy (to be fair, I ordered an Optimus waterblock 2 months ago, but they keep lying about delivery dates; trash company).

At first I kept crashing, so I kept dialing back my overclock, but I still kept crashing. I checked GPU temps but they were actually good.

Finally I found that the heat my GPU dumps was causing my RAM to overheat. I had to back my overclock down from 4500 15-15-15-32 (which is stable in other games) to 4300. This way I was able to lower DRAM voltage significantly and reduce heat. I now plan to watercool my RAM sticks LOL


----------



## Keninishna

What is the consensus on liquid metal TIM? Worth it? I am getting 60C on core at around 520W on my Bykski block using TF8 for thermal paste. Water temps don't go above 31C, so I am thinking the GPU could be dumping more heat into the loop.


----------



## Sheyster

pat182 said:


> I could probably do 2150 or 2180MHz, but I've locked the voltage to 1.075V because that way it hovers around 500W. Going with an offset it can hit 550W, and I'm playing 4h at a time, so I don't want anything bad to happen. I often switch between the normal and XOC BIOS to see the gains: at 480W it averages 1995MHz at 4K RT max; at 2100MHz with 500W it's 4-5 more FPS.
> 
> This game can run on air at 1.1V and around 580W without hitting the power cap, it's crazy.


Indeed, that 500W EVGA BIOS worked out great for us Strix owners! 😜 I'm undervolting at 1000mV with +180 core (the EVGA BIOS default is 1800 core). It's 100% stable in Superposition 4K Optimized, Warzone and BF5.


----------



## Sheyster

Keninishna said:


> What is the consensus on liquid metal tim? Worth it? I am getting 60c on core at around 520w on my bykski block using TF8 for thermal paste. Water temps don't go above 31c so I am thinking the gpu could be dumping more heat into the loop.


Personally I would just go with Kryonaut Extreme and call it a day. It's a bit better than TF8 from what I've heard, but not by much! TF8 and TFX are good pastes, although TFX is hard to spread.


----------



## WMDVenum

Keninishna said:


> What is the consensus on liquid metal tim? Worth it? I am getting 60c on core at around 520w on my bykski block using TF8 for thermal paste. Water temps don't go above 31c so I am thinking the gpu could be dumping more heat into the loop.


I swapped Kryonaut for Conductonaut on my 9900KS (EK-Velocity water block) and 3090 FE (Bitspower water block) last night. I run a single 360 and a single 480 rad in the loop. I performed a 15-minute RealBench run, a Time Spy benchmark, and a Port Royal benchmark before and after the LM. My ambient temperature is around 21C. After 15 minutes of RealBench, the CPU maxed at 85C and the water at 34C both before and after LM, but I feel like the temperatures are more uniform across the cores. The video card responded exceptionally well to LM.

I am debating if I want to shunt mod it, since I feel like 40C on a 3090 at full load is a bit wasteful.


----------



## shiokarai

WayWayUp said:


> It's insane how much power Cyberpunk pulls. I have a shunted card on air and the heat it's generating is crazy (to be fair, I ordered an Optimus waterblock 2 months ago, but they keep lying about delivery dates; trash company).
> 
> At first I kept crashing, so I kept dialing back my overclock, but I still kept crashing. I checked GPU temps but they were actually good.
> 
> Finally I found that the heat my GPU dumps was causing my RAM to overheat. I had to back my overclock down from 4500 15-15-15-32 (which is stable in other games) to 4300. This way I was able to lower DRAM voltage significantly and reduce heat. I now plan to watercool my RAM sticks LOL


It's the same for me. I've had to dial my CPU OC back down from 5.4 to 5.3GHz, as the heat dumped into my case was just too much and the CPU became unstable lol


----------



## HyperMatrix

dante`afk said:


> Oh well, but at least the IPC is faster in games, so more FPS than Intel.
> 
> You'll be power limited even with a 3x8-pin card without a shunt mod.


For pure gaming, Intel is still ahead. The 5950X is superior in compression/decompression though, which is a reason I'd be willing to accept single-digit percentage losses in gaming in order to get more than 2x the performance elsewhere compared to Intel. But I'm probably just going to hang on to the 6950X until next gen comes out, whether Intel or AMD, and get in on some PCIe 5 and DDR5 goodness with 10GbE ethernet, enough lanes to support multiple NVMe drives, and possibly USB 4. I don't really like switching full systems that often, especially since it doesn't affect performance at 4K much.


----------



## mattxx88

WilliamLeGod said:


> Can u show the image of yr curve OC?




i could not include Jhonny :3
DLSS ON

DLSS OFF - RIP 3090


----------



## WayWayUp

I was testing DLSS Balanced, Quality, and native 4K.
At first, only Quality looked as sharp as native, but I noticed the left and right edges of the screen were very blurry. I began focusing on that and it ruined my experience, so I had to turn it off. When I tested Balanced it was visibly cloudy.

Then I realized DLSS and chromatic aberration don't mix. After turning that off, I can run with Balanced DLSS and the image quality is excellent! The only other change I made was moving ray-traced lighting from Psycho to Ultra.

Besides that, everything is at max settings and it lets me play at 4K with 60fps.


----------



## shiokarai

WayWayUp said:


> I was testing DLSS Balanced, Quality, and native 4K.
> At first, only Quality looked as sharp as native, but I noticed the left and right edges of the screen were very blurry. I began focusing on that and it ruined my experience, so I had to turn it off. When I tested Balanced it was visibly cloudy.
> 
> Then I realized DLSS and chromatic aberration don't mix. After turning that off, I can run with Balanced DLSS and the image quality is excellent! The only other change I made was moving ray-traced lighting from Psycho to Ultra.
> 
> Besides that, everything is at max settings and it lets me play at 4K with 60fps.


Try turning on sharpening in the NVIDIA Control Panel; the default 50% is okay. It's really hard to balance all the tech in the game (RTX + DLSS + chromatic aberration + film grain), everything on top of everything, and it's a mess. No good way around it. I personally find the game without chroma/grain too sterile, like a game; with those effects it looks kind of like a retro-future movie shot on VHS, which I like. An aesthetic, individual choice for sure.


----------



## BigMack70

WayWayUp said:


> I was testing DLSS Balanced, Quality, and native 4K.
> At first, only Quality looked as sharp as native, but I noticed the left and right edges of the screen were very blurry. I began focusing on that and it ruined my experience, so I had to turn it off. When I tested Balanced it was visibly cloudy.
> 
> Then I realized DLSS and chromatic aberration don't mix. After turning that off, I can run with Balanced DLSS and the image quality is excellent! The only other change I made was moving ray-traced lighting from Psycho to Ultra.
> 
> Besides that, everything is at max settings and it lets me play at 4K with 60fps.


Yeah, I dunno what the devs were thinking with the film grain, chromatic aberration, and lens flare implementations in this game. On each count, easily the worst implementation I've ever seen.

The game immediately looks 100x better with those settings disabled. I also don't like how motion blur looks in the game, so I disabled that too.


----------



## mirkendargen

WayWayUp said:


> I was testing DLSS Balanced, Quality, and native 4K.
> At first, only Quality looked as sharp as native, but I noticed the left and right edges of the screen were very blurry. I began focusing on that and it ruined my experience, so I had to turn it off. When I tested Balanced it was visibly cloudy.
> 
> Then I realized DLSS and chromatic aberration don't mix. After turning that off, I can run with Balanced DLSS and the image quality is excellent! The only other change I made was moving ray-traced lighting from Psycho to Ultra.
> 
> Besides that, everything is at max settings and it lets me play at 4K with 60fps.


There are still some situations where you can see a difference with balanced, namely repeating patterns in the distance (like look at the decorations in the wall in Lizzie's bar at a distance, they'll get pretty intense aliasing on balanced and are much better on quality).

...Or don't look for problems and enjoy the 20% FPS boost of going from quality to balanced
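That 20% figure lines up with the pixel math. A rough sketch of the internal render resolutions, using the commonly cited DLSS 2.x per-axis scale factors (Quality ~0.667, Balanced ~0.58, Performance 0.50); exact factors can vary per title, so treat the numbers as approximate:

```python
# Rough pixel math behind the Quality -> Balanced FPS gain at 4K.
# Per-axis render scales are the commonly cited DLSS 2.x presets;
# exact values can vary by title, so this is only an approximation.

SCALES = {"quality": 2 / 3, "balanced": 0.58, "performance": 0.50}

def internal_res(width: int, height: int, mode: str) -> tuple[int, int]:
    """Internal render resolution before DLSS upscales to the output size."""
    s = SCALES[mode]
    return round(width * s), round(height * s)

w, h = 3840, 2160
q = internal_res(w, h, "quality")    # (2560, 1440)
b = internal_res(w, h, "balanced")   # (2227, 1253)
print(q, b)

# Shading work scales roughly with pixel count when fully GPU-bound:
print(round(q[0] * q[1] / (b[0] * b[1]), 2))  # ~1.32x more pixels at Quality
```

Quality shades about a third more pixels than Balanced, which is why a partially GPU-bound game lands around a 20% frame-rate gain rather than the full 32%.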


----------



## defiledge

I'm getting temp throttle at 70C; this just started happening recently without me changing any settings. I replaced all the thermal pads and I'm pretty sure they have good contact with the heatsink. This is really bugging me.


----------



## HyperMatrix

BigMack70 said:


> Yeah, I dunno what the devs were thinking with the film grain, chromatic aberration, and lens flare implementations in this game. On each count, easily the worst implementation I've ever seen.
> 
> The game immediately looks 100x better with those settings disabled. I also don't like how motion blur looks in the game, so I disabled that too.


Film grain and chromatic aberration are just stupid and I disable them in every game I play. I've really liked the lens flare effects though, and they make sense since you're using artificial glass optics. Motion blur has always been subjective; I've always liked it, even back when I was gaming at 165Hz 1440p. When playing at low fps, though, it makes things much easier on the eyes.



mirkendargen said:


> There are still some situations where you can see a difference with balanced, namely repeating patterns in the distance (like look at the decorations in the wall in Lizzie's bar at a distance, they'll get pretty intense aliasing on balanced and are much better on quality).
> 
> ...Or don't look for problems and enjoy the 20% FPS boost of going from quality to balanced


I've noticed other weird behavior. Like I'll be on the sidewalk, I'll turn around and look at a building for a few seconds, and then quickly do a 180. Even cars that are like 2 meters away from me are massively LOD-reduced. And not only that, sometimes the car will be completely different. So apparently in different scenarios it can completely delete and respawn a car as an entirely different one. In your apartment complex I've walked down a hallway and run into 3 identical characters, one right after the other. Same build/clothing/etc.

But as far as DLSS goes, I've noticed going down from DLSS Quality actually lowers the brightness of the game, not just the sharpness. I already have sharpening at 0.7 in the NVIDIA Control Panel in DLSS Quality mode. Going down to Balanced or Performance gives an FPS boost but just destroys the "shine" and clarity of bright objects. If there's a big shootout scene where I'm not going to notice and appreciate any of the details, I may momentarily switch to 60fps, since it's a shooter and that helps when you're not using smart weapons. But other than that, I haven't been able to play the majority of the game with anything other than DLSS Quality. It just looks that damn good.



defiledge said:


> I'm getting temp throttle at 70C, this just started happening recently without me changing any settings. I replaced all the thermal pads and I'm pretty sure they have good contact with the heatsink. This is really bugging me.


You may have lost a pad in the process, or just don't have good contact in some areas since the pads were previously compressed. I'd recommend looking up pad placement guides for your specific card, and ordering all new pads (with the proper thickness) and just redoing the whole thing. 70C thermal throttling means something is heating up far beyond normal operating range.


----------



## Falkentyne

defiledge said:


> I'm getting temp throttle at 70C, this just started happening recently without me changing any settings. I replaced all the thermal pads and I'm pretty sure they have good contact with the heatsink. This is really bugging me.


What card?
Founder's Edition or AIB?
Are you using stock heatsink or H2O?


----------



## defiledge

I used thermal pads that were 0.5mm thicker than the stock ones so they could compress and make contact. Everything worked like normal until yesterday. I have the 3090 TUF OC, shunt modded, on the stock heatsink. Could this be a driver/BIOS issue, or a problem with the shunt mod?


----------



## HyperMatrix

defiledge said:


> I've used thermal pads/paste that were 0.5mm thicker than the stock ones so they can compress and make contact.


Bad idea. Because if they don't compress enough, they're preventing other areas from making proper contact.


----------



## BigMack70

mirkendargen said:


> There are still some situations where you can see a difference with balanced, namely repeating patterns in the distance (like look at the decorations in the wall in Lizzie's bar at a distance, they'll get pretty intense aliasing on balanced and are much better on quality).
> 
> ...Or don't look for problems and enjoy the 20% FPS boost of going from quality to balanced


I decided I like the look of maxed settings + balanced better than dropping settings + quality DLSS. The differences are very minor and I basically don't notice them unless I go looking.



HyperMatrix said:


> Film grain and chromatic aberration are just stupid and I disable them in every game I play. I've really liked the lens flare effects though, and they make sense since you're using artificial glass optics. Motion blur has always been subjective; I've always liked it, even back when I was gaming at 165Hz 1440p. When playing at low fps, though, it makes things much easier on the eyes.
> 
> I've noticed other weird behavior. Like I'll be on the sidewalk, I'll turn around and look at a building for a few seconds, and then quickly do a 180. Even cars that are like 2 meters away from me are massively LOD-reduced. And not only that, sometimes the car will be completely different. So apparently in different scenarios it can completely delete and respawn a car as an entirely different one. In your apartment complex I've walked down a hallway and run into 3 identical characters, one right after the other. Same build/clothing/etc.
> 
> But as far as DLSS goes, I've noticed going down from DLSS Quality actually lowers the brightness of the game, not just the sharpness. I already have sharpening at 0.7 in the NVIDIA Control Panel in DLSS Quality mode. Going down to Balanced or Performance gives an FPS boost but just destroys the "shine" and clarity of bright objects. If there's a big shootout scene where I'm not going to notice and appreciate any of the details, I may momentarily switch to 60fps, since it's a shooter and that helps when you're not using smart weapons. But other than that, I haven't been able to play the majority of the game with anything other than DLSS Quality. It just looks that damn good.


Strange issue with the brightness - I don't have that problem on my setup.

This is the first game I've truly been impressed by DLSS. Balanced DLSS looks vastly superior to dropping resolution to 1440p.


----------



## cletus-cassidy

Thanh Nguyen said:


> e here has an extra 3090 that want to sell? I think I fr





defiledge said:


> I'm getting temp throttle at 70C, this just started happening recently without me changing any settings. I replaced all the thermal pads and I'm pretty sure they have good contact with the heatsink. This is really bugging me.


This happened to me as well, and I had a VRAM thermal pad slide off of the back plate. Check to make sure you didn't lose a thermal pad along the way, as 70 degrees on the core shouldn't be throttling you.


----------



## Falkentyne

defiledge said:


> I've used thermal pads/paste that were 0.5mm thicker than the stock ones so they can compress and make contact. Everything worked like normal until yesterday. I have the 3090 TUF OC shunt modded on stock heatsink. Could this be a driver/bios issue, or problems with the shunt mod?


Different parts of the chip have different sized thermal pads usually.

So you have to be careful about that, when you increase the thickness of all of the pads at once.
What thickness pads did you use, and which exact pads did you actually buy?

Did you measure the thickness of the original pads, and note which ones were which? It's almost certain that your VRAM doesn't have proper contact somewhere, as from what I've seen on most heatsinks, the VRM pad areas tend to stick out more on the heatsink than the VRAM areas do. On the FE cards, the VRM pads looked like they were thicker and compressed more than the VRAM pads. For example, someone with calipers said that on the backplate side the VRAM pads were 1mm and the hotspot pads were 1.5mm, while on the GPU side the VRAM pads were 1.5mm and the VRM pads were "1.8mm" (since 1.8mm pads don't really exist, they were probably 2.0mm pads that were compressed a lot). The VRM pads had a large imprint indentation on them on the GPU side.

Usually when you increase the thickness beyond default, the VRM pads tend to work fine, but the VRAM and GPU contact is what suffers in most cases. Of course it gets tricky when you get bizarre stuff like three different thicknesses used on one side of the PCB...

For what it's worth, using 1.5mm pads everywhere (frontside VRAM/VRM, backplate side, and hotspot PCB) worked perfectly; I've been up to 81C pulling 600W on my 3090 FE with a Thermalright Odyssey 1.5mm pad rework and still didn't trigger #Thermal.


----------



## WilliamLeGod

mattxx88 said:


> i could not include Jhonny :3
> DLSS ON
> 
> DLSS OFF - RIP 3090


Ok, I see why you don't crash on that curve. It's a wrong OC curve; even though you see 1V/2100 on screen, the game doesn't actually run at that V/F point all the time. The correct way is to set a uniform curve peaking at 1V 2100MHz, and I'm 100% sure you'd crash in no time in game.


----------



## ShadowYuna

To the experts:

Just curious about 3090 boost clocks.

Since I changed from a 3090 Strix to a 3090 Galax SG1 (because of horrible coil whine on the Strix under custom watercooling), I've noticed there is a boost clock difference between games.

My setup is:

8700K 4.9 OC
4000 CL15 XMP
3090 Galax SG1 130/500 OC
OLED C9 4K TV

When I play Valhalla my boost stays at 2020, but in Cyberpunk and Horizon Zero Dawn it stays between 1950~1850.

The difference is Valhalla hits around 97% load, but Cyberpunk and Horizon hit 100%.

Can anyone explain why this happens?

Below is my current setup.


----------



## Falkentyne

WilliamLeGod said:


> Ok I see why u dont crash at that curve. Its a wrong OC curve, even u see the 1v2100 on screen but the game actually doesnt run at that vf all the time. The correct way is to set uniform curve with peak 1v 2100mhz and im 100 sure u crash in no time in game


How do you do this? This is confusing.
Can you show a picture? I know you tried to explain this before but I don't think anyone was able to understand.


----------



## WilliamLeGod

ShadowYuna said:


> To expert.
> 
> Just curious about 3090 boost clock.
> 
> Since I changed from a 3090 Strix to a 3090 Galax SG1 (because of horrible coil whine on the Strix under custom watercooling), I've noticed there is a boost clock difference between games.
> 
> My setup is
> 
> 8700k 4.9 OC
> 4000 CL15 XMP
> 3090 Galax SG1 130/500 OC
> OLED C9 4K TV
> 
> When I play Valhalla my boost stays at 2020, but in Cyberpunk and Horizon Zero Dawn it stays between 1950~1850.
> 
> The difference is Valhalla hits around 97% load, but Cyberpunk and Horizon hit 100%.
> 
> Can anyone explain why this happens?
> 
> Below is my current setup


Because Ubisoft titles use low GPU power, especially Valhalla. At the same voltage and max settings, GPU power used in the 3 games is: Valhalla < HZD < Cyberpunk, and your power limit fell between Valhalla and HZD.


----------



## WilliamLeGod

Falkentyne said:


> How do you do this? This is confusing.
> Can you show a picture? I know you tried to explain this before but I don't think anyone was able to understand.


I deleted those images. You can check back some pages; I believe it was between pages 345-355.


----------



## ShadowYuna

WilliamLeGod said:


> Because Ubisoft titles use low gpu power especially Vahalla. Same voltage, max settings in 3 games, gpu power is used as follow: Vahalla < Hzd < Cyberpunk and yr Power limit was between Vahalla and Hzd


Thanks for the reply. So the card hits the power limit and can not go above 2000?

I know the Galax SG1 is a 2x8-pin card and I have flashed the KFA2 390W BIOS.

It is a bit sad that it can not hit above 2000 under water.


----------



## Falkentyne

WilliamLeGod said:


> I deleted those images. U can check back some pages, i believe it was between 345-355


Thank you. I saw the images but I didn't understand what to do.
It looked like you were moving some points "up" on the graph, but you had different points highlighted than the ones being pulled up, so it didn't make sense to me.
For example, you had 1.0V moved up but 1.012V selected... so what were we supposed to do with 1.012V?
That's just an example. I literally looked at it for 10 minutes and gave up because I didn't understand what you did.

You must remember, what seems simple and basic to you can be very confusing to others.


----------



## cletus-cassidy

Apologies, as this question may have been answered previously, but I just installed the EVGA 500W BIOS on my Strix. How do I know if it's working? It seems like the reported wattage is way down but scores are slightly up. Sound about right?


----------



## scaramonga

ShadowYuna said:


> Thanks for the reply. So the card hits the power limit and can not go above 2000?
> 
> I know the Galax SG1 is a 2x8-pin card and I have flashed the KFA2 390W BIOS.
> 
> It is a bit sad that it can not hit above 2000 under water.


Same here with my KFA2 SG 3090: severely power limited. It will sit quite happily in the 1850-1950 range, but any more, forget it, despite using the KFA2 390W BIOS. I can't be assed doing shunt mods and ripping the block off again, so it is what it is, I guess. I may try the undervolting route, but I just can't grasp the 'curve' thing lol


----------



## HyperMatrix

Falkentyne said:


> Thank you. I saw the images but i didn't understand what to do.
> It looked like you were moving some points "up" on the graph, but you had different points "Highlighted" than what were being pulled up.
> So it didn't make sense to me. So I couldn't understand it.
> Like, for example, you had "1.0v" moved up, but 1.012v "selected"....so what were we supposed with 1.012v?
> That is just an example. I literally looked at it for 10 minutes and just gave up because I didn't understand what you did.
> 
> You must remember, what seems simple and basic to you, can be very confusing to others.


I thought I explained it a few days back. I'll do a quick recap:

- When you're power limited and locked to a certain voltage, your card will keep the same voltage, and drop the clock speed (let's say you were set to 2100MHz at 1V and your clocks drop as low as 2025MHz, while still staying at 1V)

- If you set your curve to have points covering those lower clock speeds (for example 0.983 at 2085MHz, 0.975 at 2070MHz, 0.950 at 2055MHz, etc etc) then when the card is clocking down to those lower voltages, it no longer does so at the locked 1V and will drop down according to your curve. So when you're power limited, dropping both clock speed and voltage means you can actually hold a higher clock speed, because for example 2070MHz at 0.975v may require less power than 2055MHz at 1.0v.

- This doesn't change anything if you are shunted, and under water. Because you won't be hit with thermal or power throttling to begin with.

Anything else that is claimed is just false.
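To sanity-check that last point with a rough dynamic-power model (power scales roughly with frequency times voltage squared; this ignores static/leakage and memory power, so treat it as illustrative only, not NVIDIA's actual power model):

```python
# Rough check of the claim that a lower-voltage curve point can hold a
# higher clock under the same power limit. Dynamic power is modeled as
# frequency * voltage^2, a common first-order approximation.

def relative_power(freq_mhz: float, volts: float) -> float:
    """Relative dynamic power in arbitrary units."""
    return freq_mhz * volts ** 2

locked = relative_power(2055, 1.000)   # throttling while stuck at the locked 1.0V
curved = relative_power(2070, 0.975)   # following a curve point instead

print(f"2055MHz @ 1.000V: {locked:.0f}")
print(f"2070MHz @ 0.975V: {curved:.0f}")
print(f"curve point uses {100 * (1 - curved / locked):.1f}% less power")
```

So the curve point runs about 15MHz faster while drawing roughly 4% less power in this toy model, which is exactly why dropping voltage along with clocks helps under a power limit.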


----------



## EniGma1987

Are there any well made "active" backplates for this card yet for watercooling? The MP5 one is not what I would consider well made. Or is everyone still basically just doing that where they put a RAM block on top of a standard backplate? I want to get the Heatkiller active backplate, but no word on it yet and it looks like this will be delayed quite a few months.


and OMG, @HyperMatrix is back posting again? lol. It's been a while since I've seen you, but then again I don't get into the graphics subforums unless I have just gotten a new card. I remember you from back in the 120hz.net days. I believe I was in one of the first drops of the monitors through your site back then.


----------



## ShadowYuna

scaramonga said:


> Same here with my KFA2 SG 3090, severely power limited. It will sit quite happily in the 1850-1950 range, but any more, forget it, despite using the KFA2 390W BIOS. I can't be assed doing shunt mods and ripping the block off again, so it is what it is, I guess. I may try the undervolting route, but I just can't grasp the 'curve' thing lol


Yeah. I think this is the limit for this card. I may just undervolt to 2000MHz at 987mV, which stays at 1995.


----------



## bmgjet

Wow, the PR HOF has been changing a lot lately.
I was 30th when I submitted; now I'm about to get knocked off.
Guess it hung on for a while and wasn't too bad for a 2-plug card.


----------



## HyperMatrix

EniGma1987 said:


> Are there any well made "active" backplates for this card yet for watercooling? The MP5 one is not what I would consider well made. Or is everyone still basically just doing that where they put a RAM block on top of a standard backplate? I want to get the Heatkiller active backplate, but no word on it yet and it looks like this will be delayed quite a few months.
> 
> 
> and OMG, @HyperMatrix is back posting again? lol. It's been a while since I've seen you, but then again I don't get into the graphics subforums unless I have just gotten a new card. I remember you from back in the 120hz.net days. I believe I was in one of the first drops of the monitors through your site back then.


Haha hey man. Yeah, the 120Hz.Net days were good times. Quad-SLI'ing to keep up with 1440p @ 120Hz back then, and now 8 years later struggling to hit 60fps at 4K in Cyberpunk.  I also boycotted the RTX 2000 series, so I wasn't around for all that. I gave my last 2 Yamakasi monitors to my parents. Still going strong after all these years. Still the last "good" series of affordable glossy gaming monitors.


----------



## Falkentyne

HyperMatrix said:


> I thought I explained it a few days back. I'll do a quick recap:
> 
> - When you're power limited and locked to a certain voltage, your card will keep the same voltage, and drop the clock speed (let's say you were set to 2100MHz at 1V and your clocks drop as low as 2025MHz, while still staying at 1V)
> 
> - If you set your curve to have points covering those lower clock speeds (for example 0.983 at 2085MHz, 0.975 at 2070MHz, 0.950 at 2055MHz, etc etc) then when the card is clocking down to those lower voltages, it no longer does so at the locked 1V and will drop down according to your curve. So when you're power limited, dropping both clock speed and voltage means you can actually hold a higher clock speed, because for example 2070MHz at 0.975v may require less power than 2055MHz at 1.0v.
> 
> - This doesn't change anything if you are shunted, and under water. Because you won't be hit with thermal or power throttling to begin with.
> 
> Anything else that is claimed is just false.


This doesn't happen on my card.
If I lock 1.1v with L in MSI Afterburner, it drops both the voltage and the clocks when it hits the power limit. I am not using a custom curve. It does not, for example, drop down to 1860 MHz @ 1.081v (for example); rather it may drop to [email protected], even though 1.1v is locked in.

Are you sure you aren't referring to using a custom curve or something?


----------



## HyperMatrix

Falkentyne said:


> This doesn't happen on my card.
> If I lock 1.1v with L in MSI Afterburner, it drops both the voltage and the clocks when it hits the power limit. I am not using a custom curve. It does not, for example, drop down to 1860 MHz @ 1.081v (for example); rather it may drop to [email protected], even though 1.1v is locked in.
> 
> Are you sure you aren't referring to using a custom curve or something?


Depends on the stepping in your curve. So if you have 0.85v all the way up to 1950MHz, and then immediately go to 1v for 2100MHz, it should be sticking to 1V all the way down to 1965MHz and then if it hits 1950MHz, drop to 0.85v. Outside of minor voltage variance you shouldn't have any voltage/clock stepping as you throttle down from power limits. If you are seeing that happen, I'd be interested in getting more information because this isn't behavior I've seen.

edit: and yes, I'm speaking of custom curves. The whole topic the guy brought up was having a smooth linear curve as opposed to a giant step from the baseline to your desired voltage/clock speed. Essentially what he's saying is, you don't want a curve like this:










Because the next step down from 1V at 2100MHz is 0.993v at 2040MHz and any clock speed between 2041-2100MHz would stick to the 1V. Whereas if you do a nice linear curve, you'll get:










And this way your card will do 0.987v at 2085, 0.975v at 2070, 0.968v at 2055, and 0.956 at 2040.
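If it helps, that "nice linear curve" can be generated rather than dragged point by point. A hypothetical sketch (the 15MHz clock step matches Ampere's bin granularity; note Afterburner snaps voltages to roughly 6.25mV steps, so real points won't land exactly on these values):

```python
# Sketch: generate evenly spaced V/F curve points between two anchors
# instead of dragging each point by hand. Anchor values are illustrative.

def linear_curve(v_low, clk_low, v_high, clk_high, step=15):
    """Return (clock MHz, voltage) pairs from clk_low up to clk_high."""
    points = []
    clk = clk_low
    while clk <= clk_high:
        t = (clk - clk_low) / (clk_high - clk_low)   # 0.0 at the low anchor, 1.0 at the top
        points.append((clk, round(v_low + t * (v_high - v_low), 3)))
        clk += step
    return points

for clk, v in linear_curve(0.950, 2025, 1.000, 2100):
    print(f"{clk} MHz -> {v:.3f} V")
```

This gives you a point every 15MHz between 0.950V/2025MHz and 1.000V/2100MHz, so a power-limited downclock always has a nearby lower-voltage point to land on instead of sticking at the peak voltage.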


----------



## Falkentyne

HyperMatrix said:


> Depends on the stepping in your curve. So if you have 0.85v all the way up to 1950MHz, and then immediately go to 1v for 2100MHz, it should be sticking to 1V all the way down to 1965MHz and then if it hits 1950MHz, drop to 0.85v. Outside of minor voltage variance you shouldn't have any voltage/clock stepping as you throttle down from power limits. If you are seeing that happen, I'd be interested in getting more information because this isn't behavior I've seen.
> 
> edit: and yes I'm speaking of custom curves. the whole topic the guy brought up was on having a smooth linear curve as opposed to having a giant step from the baseline to your desired voltage/clockspeed.


No, my curve is stock, just +150 offset.


----------



## jura11

scaramonga said:


> Same here with my KFA2 SG 3090, severely power limited. It will sit quite happily in the 1850-1950 range, but any more, forget it, despite using the KFA2 390W BIOS. I can't be assed doing shunt mods and ripping the block off again, so it is what it is, I guess. I may try the undervolting route, but I just can't grasp the 'curve' thing lol


Hi there 

I'm running a Palit RTX 3090 GamingPro without the shunt mod, using the KFA2 390W BIOS with a Bykski waterblock, and my clocks in power-limited games or benchmarks usually don't go below 1950MHz. In Cyberpunk 2077 I'm running +100MHz on core and +1295MHz on VRAM, and my clocks are in the range of 1995MHz to 2100MHz, but in heavier scenarios it will usually bounce between 1995MHz and 2025MHz or 2040MHz

The GPU is pulling something like 375-384W all the time in Cyberpunk 2077

With undervolting it's the same; it won't hold 2100MHz in Cyberpunk. In Assassin's Creed Valhalla it holds 2115-2130MHz quite easily

Temperatures are the same in both games, in the 30s; I've not seen higher than 36°C

Hope this helps 

Thanks, Jura


----------



## HyperMatrix

Falkentyne said:


> No, my curve is stock, just +150 offset.
> 
> View attachment 2469475


Then it doesn't apply to you because offset applies the exact value to each stock stepping. I added pics to demonstrate what I mean to my previous post. 

edit: although apparently picture uploads aren't processing right now. check back in a few.


----------



## scaramonga

HyperMatrix said:


> I thought I explained it a few days back. I'll do a quick recap:
> 
> - When you're power limited and locked to a certain voltage, your card will keep the same voltage, and drop the clock speed (let's say you were set to 2100MHz at 1V and your clocks drop as low as 2025MHz, while still staying at 1V)
> 
> - If you set your curve to have points covering those lower clock speeds (for example 0.983 at 2085MHz, 0.975 at 2070MHz, 0.950 at 2055MHz, etc etc) then when the card is clocking down to those lower voltages, it no longer does so at the locked 1V and will drop down according to your curve. So when you're power limited, dropping both clock speed and voltage means you can actually hold a higher clock speed, because for example 2070MHz at 0.975v may require less power than 2055MHz at 1.0v.
> 
> - This doesn't change anything if you are shunted, and under water. Because you won't be hit with thermal or power throttling to begin with.
> 
> Anything else that is claimed is just false.


I know what you're saying buddy, or at least I think I do? lol  It's a hard thing to get your head around if one is thicker than yourself, and not everyone is gonna grasp it. To be quite honest, I still don't, despite looking at tutorials galore. I wish someone would just come here and do it for me


----------



## HyperMatrix

scaramonga said:


> I know what your saying buddy, or at least I think I do?, lol  It's a hard thing to get your head around, if one is* thicker than yourself *, and not everyone is *gonna grasp it*, and to be quite honest, I still don't, despite looking at tutorials galore. *I wish someone would just come here and do *it for* me *


Even if you lived down the street from me I don't think I'd come over to help you with all those creepy winky faces and tongue emojis and suggestive language...haha.


----------



## scaramonga

jura11 said:


> Hi there
> 
> I'm running a Palit RTX 3090 GamingPro without the shunt mod, using the KFA2 390W BIOS with a Bykski waterblock, and my clocks in power-limited games or benchmarks usually don't go below 1950MHz. In Cyberpunk 2077 I'm running +100MHz on core and +1295MHz on VRAM, and my clocks are in the range of 1995MHz to 2100MHz, but in heavier scenarios it will usually bounce between 1995MHz and 2025MHz or 2040MHz
>
> The GPU is pulling something like 375-384W all the time in Cyberpunk 2077
>
> With undervolting it's the same; it won't hold 2100MHz in Cyberpunk. In Assassin's Creed Valhalla it holds 2115-2130MHz quite easily
>
> Temperatures are the same in both games, in the 30s; I've not seen higher than 36°C
> 
> Hope this helps
> 
> Thanks, Jura



Hmm, so different games matter now: you could have a stable overclock, under the limits etc., but then come across the few that push the envelope. How times change.  Temps are 38 MAX here also, so although we will never be temp limited, it's irritating that we have all that headroom but are power limited, with absolutely no point in overvolting, as we have very little scope to do so


----------



## scaramonga

HyperMatrix said:


> Even if you lived down the street from me I don't think I'd come over to help you with all those creepy winky faces and tongue emojis and suggestive language...haha.


Jesus! My mind must be somewhere else lol 😂


----------



## MonnieRock

HyperMatrix said:


> Even if you lived down the street from me I don't think I'd come over to help you with all those creepy winky faces and tongue emojis and suggestive language...haha.


Dude, thanks for the good laugh !!!!


----------



## cakesg

Ok, so I realized I had forgotten to put a thermal pad on a part of my card that was supposed to have one. Will this cause any problems? It is a Strix, on the air cooler currently (as I'm waiting for a new waterblock). I attached a picture of where the missing thermal pads are supposed to go


----------



## HyperMatrix

cakesg said:


> Ok, so I realized I had forgotten to put a thermal pad on a part of my card that was supposed to have one. Will this cause any problems? It is a Strix, on the air cooler currently (as I'm waiting for a new waterblock). I attached a picture of where the missing thermal pads are supposed to go
> View attachment 2469501


I don’t know how to respond to this because I don’t know if you’re trolling or not. But I will say as a general rule that you should never remove thermal pads or thermal paste from a card that pulls 500W+.


----------



## Falkentyne

cakesg said:


> Ok, so I realized I had forgotten to put a thermal pad on a part of my card that was supposed to have one. Will this cause any problems? It is a Strix, on the air cooler currently (as I'm waiting for a new waterblock). I attached a picture of where the missing thermal pads are supposed to go
> View attachment 2469501


Those are part of the primary VRM (Vcore, VMEM (MVDDC)) banks...(they are on both sides of the core)
You had better have a thermal pad on them...


----------



## cakesg

HyperMatrix said:


> I don’t know how to respond to this because I don’t know if you’re trolling or not. But I will say as a general rule that you should never remove thermal pads or thermal paste in a card that pulls 500W+.


I'm not even joking. I don't do teardowns that much, but I legitimately didn't have the thermal pad on this part. All other pads/paste are in properly.


----------



## ShadowYuna

jura11 said:


> Hi there
> 
> I'm running a Palit RTX 3090 GamingPro without the shunt mod, using the KFA2 390W BIOS with a Bykski waterblock, and my clocks in power-limited games or benchmarks usually don't go below 1950MHz. In Cyberpunk 2077 I'm running +100MHz on core and +1295MHz on VRAM, and my clocks are in the range of 1995MHz to 2100MHz, but in heavier scenarios it will usually bounce between 1995MHz and 2025MHz or 2040MHz
>
> The GPU is pulling something like 375-384W all the time in Cyberpunk 2077
>
> With undervolting it's the same; it won't hold 2100MHz in Cyberpunk. In Assassin's Creed Valhalla it holds 2115-2130MHz quite easily
>
> Temperatures are the same in both games, in the 30s; I've not seen higher than 36°C
> 
> Hope this helps
> 
> Thanks, Jura


Hi Jura

Just wondering, does monitor resolution affect boost clock? Because when I watch YouTube or other reviews, 1440p shows a higher boost clock than 4K.

I am using a 4K OLED TV as a monitor and cannot go above 2000 on my Galax SG1 on water. My temp is around 45 because it's summer in Australia and I have my MORA fans at 800rpm.

What monitor are you using?

All I want is a steady boost clock around 2025 or 2040 in games.


----------



## bmgjet

ShadowYuna said:


> Hi Jura
> 
> Just wondering, does monitor resolution affect boost clock? Because when I watch YouTube or other reviews, 1440p shows a higher boost clock than 4K.
>
> I am using a 4K OLED TV as a monitor and cannot go above 2000 on my Galax SG1 on water. My temp is around 45 because it's summer in Australia and I have my MORA fans at 800rpm.
>
> What monitor are you using?
>
> All I want is a steady boost clock around 2025 or 2040 in games.


Res makes a massive difference since it uses more of the GPU.
On mine at 1440p there is nothing that drops under 2100MHz.
At 4K most things are around 2000-2050MHz.
That's while using 480-500W.
If I limited it to 390W you could easily cut 100MHz off that.



cakesg said:


> Ok, so I realized I had forgotten to put a thermal pad on a part of my card that was supposed to have one. Will this cause any problems? It is a Strix, on the air cooler currently (as I'm waiting for a new waterblock). I attached a picture of where the missing thermal pads are supposed to go


Don't even use it until you get those cooled.
They are the 2nd hottest part on the card.


----------



## Sheyster

HyperMatrix said:


> I thought I explained it a few days back. I'll do a quick recap:
> 
> - When you're power limited and locked to a certain voltage, your card will keep the same voltage, and drop the clock speed (let's say you were set to 2100MHz at 1V and your clocks drop as low as 2025MHz, while still staying at 1V)
> 
> - If you set your curve to have points covering those lower clock speeds (for example 0.983 at 2085MHz, 0.975 at 2070MHz, 0.950 at 2055MHz, etc etc) then when the card is clocking down to those lower voltages, it no longer does so at the locked 1V and will drop down according to your curve. So when you're power limited, dropping both clock speed and voltage means you can actually hold a higher clock speed, because for example 2070MHz at 0.975v may require less power than 2055MHz at 1.0v.
> 
> - This doesn't change anything if you are shunted, and under water. Because you won't be hit with thermal or power throttling to begin with.
> 
> *Anything else that is claimed is just false.*


Indeed, undervolting with a curve isn't rocket science. I'll repost the link I shared before, this covers it:









Undervolting the RTX 3080 and the RTX 3090 - Bjorn3D.com


----------



## ShadowYuna

bmgjet said:


> Res makes a massive difference since it uses more of the GPU.
> On mine at 1440p there is nothing that drops under 2100MHz.
> At 4K most things are around 2000-2050MHz.
> That's while using 480-500W.
> If I limited it to 390W you could easily cut 100MHz off that.
>
>
>
> Don't even use it until you get those cooled.
> They are the 2nd hottest part on the card.


I see. So you mean with my current setup (4K OLED) my boost clock is normal?

I normally get around 1850~1950 boost clock in Horizon and Cyberpunk 2077, even at 130/500 OC, on the Galax SG1

Is there any way I can make the boost clock steady?


----------



## cakesg

bmgjet said:


> Res makes a massive difference since it uses more of the GPU.
> On mine at 1440p there is nothing that drops under 2100MHz.
> At 4K most things are around 2000-2050MHz.
> That's while using 480-500W.
> If I limited it to 390W you could easily cut 100MHz off that.
>
>
>
> Don't even use it until you get those cooled.
> They are the 2nd hottest part on the card.


yep, just took the card apart and put some pads in.


----------



## bmgjet

ShadowYuna said:


> I see. So you mean with my current setup (4K OLED) my boost clock is normal?
>
> I normally get around 1850~1950 boost clock in Horizon and Cyberpunk 2077, even at 130/500 OC, on the Galax SG1
>
> Is there any way I can make the boost clock steady?


You're basically at the mercy of the power limit, then the temp limit, and lastly, if you get past those two, the voltage limit.
Your temps aren't going to get better; I hate that it's summer, since that's all that's holding me back.
All you can really do is risk losing your warranty doing the silver-paint shunt mod. That will get you a roughly 480W power limit, and it can be cleaned up to remove it.
But don't get too set on trying to hold a clock speed.


----------



## defiledge

Do you guys think I damaged the VRMs, since it now throttles me at 70C? I ordered a Bykski block for my TUF; should I use the TIM that comes with it or get new TIM for it?


----------



## Falkentyne

defiledge said:


> Do you guys think I damaged the VRMs because now it throttles me at 70C? I orderd a byski block for my tuf, should I use the TIM that comes with it or get new ones for it?


You never replied to my previous message...


----------



## defiledge

Falkentyne said:


> You never replied to my previous message...


The stock pads on the TUF measure at 1.5mm and 2.5mm and I replaced these with gelid 2mm and 3mm. They seem to be able to compress a lot and I had no issues with throttling for several weeks.


----------



## mattxx88

WilliamLeGod said:


> Ok, I see why you don't crash at that curve. It's a wrong OC curve; even if you see 1.0v/2100 on screen, the game doesn't actually run at that V/F all the time. The correct way is to set a uniform curve with a peak of 1.0v at 2100MHz, and I'm 100% sure you'd crash in no time in-game


What do you mean by "wrong OC"? I can't get what you are trying to explain.
I set a curve of 2130MHz at 1.000v; I'm on air with the card shunted and it drops because of temps to 2100MHz, barely stable, with sporadic drops to 2085MHz.
Today I'll get my block to put it under water.

edit: and yes, this OC profile is not stable in The Division 2. I still have to dig further, but that game makes freq and volts drop even with shunts


----------



## Falkentyne

defiledge said:


> The stock pads on the TUF measure at 1.5mm and 2.5mm and I replaced these with gelid 2mm and 3mm. They seem to be able to compress a lot and I had no issues with throttling for several weeks.


Which exact pads were 1.5mm and which exact pads were 2.5mm? The card is double sided so there's going to be VRAM pads on both sides.
2.5mm seems rather generous and I know 3mm is just too thick.

Did you check both the GPU side and the backplate side? On the founder's edition, the backplate side VRAM pads were 1.0mm and the GPU side were 1.5mm. They were not the same thickness.
But 1.5mm pads worked for both sides VRAM, as well as the VRM's as well (and PCB hotspots!).
Keep that in mind.

I'm assuming the VRAM pads were 1.5mm and VRM were 2.5mm?

Are you sure they were 2.5mm and not 2mm? Because the founder's edition VRM pads were stock 2mm...(but 1.5mm worked perfectly for both VRAM and VRM on the GPU side, as the 2mm pads were VERY (Like, big time) heavily compressed, which meant that was a bit too thick). 1.5mm pads also worked for backplate "PCB hotspots".

I'm going to assume that's the case because I've never seen 2.5mm VRAM pads before.
You also should be careful about increasing the pad thickness too far over the stock pads or you will hurt GPU die contact also.
There may also be a PCB hotspot issue, but again I have not seen that board, so I don't know for sure.

Try 1.5mm pads on the VRAM and 2mm on the VRM first. You can use Thermalright Odyssey pads for that. Also grab some 0.5mm Odyssey pads.
Do an application test with just a few pads to make sure you have contact (to avoid wasting pads).

I would check this and start with 1.5mm VRAM thermal pads on both sides. Do not use 2mm pads on the VRAM, as this may affect hotspot cooling on the backplate side (again, I do not know how your backplate is cooled). Then consider 2mm pads on the VRM and check the contact with a small strip before completing. I also do not know if there are even hotspot PCB pads on that card.

For the thermalright pads, you can buy 2mm and 0.5mm pads, and if you need 2.5mm someplace, then you can stack them.
If you need them quickly, the best place is Amazon but those only come in 85mm * 45mm size, which I found is NOT enough to do an entire GPU.

Yao Yue Shop on Aliexpress (Thermalright's chinese reseller) has 120mm * 120mm size pads, but of course shipping won't be fast.

_Edit_
I just looked at the GamersNexus teardown of the TUF 3080.
Assuming the 3090 is very similar, I can tell you directly that those pads were 1.5mm for VRAM and 2.0mm for VRM. They definitely aren't 2mm for VRAM and 100% are not 2.5mm for VRM! Not from what I saw...

I do see _something_ that looks like a 2.5mm pad, but I can't see what it's for. You can see it at 18:24 in this video. 






So if that's the case, there may be three different sized pads on your video card. So please check that.


----------



## defiledge

Falkentyne said:


> Which exact pads were 1.5mm and which exact pads were 2.5mm? The card is double sided so there's going to be VRAM pads on both sides.
> 2.5mm seems rather generous and I know 3mm is just too thick.
> 
> Did you check both the GPU side and the backplate side? On the founder's edition, the backplate side VRAM pads were 1.0mm and the GPU side were 1.5mm. They were not the same thickness.
> But 1.5mm pads worked for both sides VRAM, as well as the VRM's as well (and PCB hotspots!).
> Keep that in mind.
> 
> I'm assuming the VRAM pads were 1.5mm and VRM were 2.5mm?
> 
> Are you sure they were 2.5mm and not 2mm? Because the founder's edition VRM pads were stock 2mm...(but 1.5mm worked perfectly for both VRAM and VRM on the GPU side, as the 2mm pads were VERY (Like, big time) heavily compressed, which meant that was a bit too thick). 1.5mm pads also worked for backplate "PCB hotspots".
> 
> I'm going to assume that's the case because I've never seen 2.5mm VRAM pads before.
> You also should be careful about increasing the pad thickness too far over the stock pads or you will hurt GPU die contact also.
> There may also be a PCB hotspot issue, but again I have not seen that board, so I don't know for sure.
> 
> Try 1.5mm pads on the VRAM and 2mm on the VRM first. You can use Thermalright Odyssey pads for that. Also grab some 0.5mm Odyssey pads.
> Do an application test with just a few pads to make sure you have contact (to avoid wasting pads).
> 
> I would check this and start with 1.5mm VRAM thermal pads on both sides. Do not use 2mm pads on the VRAM, as this may affect hotspot cooling on the backplate side (again, I do not know how your backplate is cooled). Then consider 2mm pads on the VRM and check the contact with a small strip before completing. I also do not know if there are even hotspot PCB pads on that card.
> 
> For the thermalright pads, you can buy 2mm and 0.5mm pads, and if you need 2.5mm someplace, then you can stack them.
> If you need them quickly, the best place is Amazon but those only come in 85mm * 45mm size, which I found is NOT enough to do an entire GPU.
> 
> Yao Yue Shop on Aliexpress (Thermalright's chinese reseller) has 120mm * 120mm size pads, but of course shipping won't be fast.
> 
> _Edit_
> I just looked at the GamersNexus teardown of the TUF 3080.
> Assuming the 3090 is very similar, I can tell you directly that those pads were 1.5mm for VRAM and 2.0mm for VRM. They definitely aren't 2mm for VRAM and 100% are not 2.5mm for VRM! Not from what I saw...
> 
> I do see _something_ that looks like a 2.5mm pad, but I can't see what it's for. You can see it at 18:24 in this video.
> 
> 
> 
> 
> 
> 
> So if that's the case, there may be three different sized pads on your video card. So please check that.


I measured around 1.4mm for the left VRMs and 2.3mm for the right VRMs with a caliper. I didn't replace the backplate pads, because I didn't need to take the backplate off when shunt modding, so they shouldn't have been affected. I'll replace them with some cheap pads I have lying around and see if it's a contact problem. Hopefully I didn't damage any components, because the warranty is voided already


----------



## Nico67

ShadowYuna said:


> Since I change from 3090 Strix to 3090 Galax SG1 (cause horrible coil whine on Strix when custrom watercool) , I notice that there is boost clock difference on games.


How's the EK block on the Galax? Quiet, or still have whine issues? Assuming you just used the EK pad recommendations?


----------



## Hulk1988

I saw people starting to put a heat sink on the backplate to get better temperatures. What do you think about that? https://www.amazon.com/gp/product/B089QJQY17/ref=ppx_yo_dt_b_asin_title_o05_s00?ie=UTF8&th=1


----------



## kx11

A new ROG 3090 BIOS dropped

RTX3090 bios update tool 
- Further optimize the performance for 0dB fan feature
- Fixed motherboard “beeping” bug during computer start-up 

DL


https://dlcdnets.asus.com/pub/ASUS/Graphic%20Card/NVIDIA/BIOSUPDATE_TOOL/RTX3090/RTX3090_V2.zip


----------



## ShadowYuna

Nico67 said:


> How's the EK block on the Galax? Quiet, or still have whine issues? Assuming you just used the EK pad recommendations?


The Galax with the EK Reference Vector block works just fine.

The backplate fits just fine and works well. On my current setup, with the weather in Aus, max temp is around 45.

There is coil whine when the card is under heavy load, but it can only be heard if you put your ear next to the block. Pretty happy with the Galax and EK block, except it has only 2x 8-pin.


----------



## reflex75

Falkentyne said:


> No, my curve is stock, just +150 offset.
> 
> View attachment 2469475


It would be interesting to compare all cards' default boost clocks, because for instance we both have the same card (3090 FE), but my core is programmed to boost +100MHz higher by itself:
(+150 gives me 2220MHz instead of 2115MHz in your case)









The real question is: what is the meaning of all these different boost behaviors from one core to another?
Is it really based on silicon quality?
My core can do 2100MHz in games (if not power limited) and can go as high as 2.2GHz for a short bench run...
I miss the simple ASIC Quality % value from the Maxwell era...


----------



## pat182

I'm playing 4K HDR, DLSS Quality, NO ray tracing, WITH a 10% sharpen filter, on my Asus 4K 120Hz HDR monitor, locked at 80fps, with motion blur enabled to smooth it out.

I recommend the sharpen filter as follows at 4K:
Quality: 10% sharpen, 50% ignore film grain
Balanced: 25% / 50%
Performance: 50% / 50%
Ultra Performance: good luck


----------



## dante`afk

Hulk1988 said:


> I saw people starting to put a heat sink on the backplate to get better temperatures. What do you think about that? https://www.amazon.com/gp/product/B089QJQY17/ref=ppx_yo_dt_b_asin_title_o05_s00?ie=UTF8&th=1


I'm using that with a 120mm fan on top of it and 0.5mm Thermalright pads.

I think it gives 1-2C better temps at best, or nothing.


----------



## jura11

ShadowYuna said:


> Hi Jura
> 
> Just wondering, does monitor resolution affect boost clock? Because when I watch YouTube or other reviews, 1440p shows a higher boost clock than 4K.
>
> I am using a 4K OLED TV as a monitor and cannot go above 2000 on my Galax SG1 on water. My temp is around 45 because it's summer in Australia and I have my MORA fans at 800rpm.
>
> What monitor are you using?
>
> All I want is a steady boost clock around 2025 or 2040 in games.


Hi there 

My monitor is an Acer Predator X34P, which is a 3440x1440 120Hz monitor. I've not tried any 4K monitor yet, although I have a 4K TV that only does 4K at 30Hz, and I'm not sure that would help for testing.

But I assume on a 4K monitor you are bound by the GPU more than the CPU, and therefore your GPU will be bouncing off the power limit most of the time at such a resolution; I'm only guessing on that.

Sadly a 390W power limit is just not enough for Cyberpunk 2077; I would say 480-520W is the sweet spot for the RTX 3090.

2100MHz I only see when GPU load is around 85-90% max; if the GPU sees 95-97% load, clocks will drop to 1950-1995MHz.

Hope this helps 

Thanks, Jura


----------



## reflex75

Falkentyne said:


> Thank you. I saw the images but i didn't understand what to do.
> It looked like you were moving some points "up" on the graph, but you had different points "Highlighted" than what were being pulled up.
> So it didn't make sense to me. So I couldn't understand it.
> Like, for example, you had "1.0v" moved up, but 1.012v "selected"....so what were we supposed to do with 1.012v?
> That is just an example. I literally looked at it for 10 minutes and just gave up because I didn't understand what you did.
> 
> You must remember, what seems simple and basic to you, can be very confusing to others.


If it helps to understand how the voltage/frequency curve impacts performance, here are my findings about Nvidia's boost behavior after playing a lot with the voltage curves on my previous cards, a 1080 Ti and a 2080 Ti (I haven't checked it yet for Ampere).
You have to imagine the boost algorithm as a 3D box stacked with many 2D voltage-curve layers.
Each layer represents a curve applied only to a certain temperature range.
So boost is dynamic, and it changes its curve from one temperature to another (reducing at higher temperatures, but not linearly).
When you make a change to the curve, you change only one curve from the stack: the one currently active at your current temperature level (but the change is applied to the whole stack proportionally).
Now, the trick is: performance depends not only on the frequency/voltage point you are currently running, but also on the several previous voltage points leading up to it!
Which means the steeper the curve, the lower your performance!
With a good logarithmic curve, you can get better performance at a lower clock by doing more work, rather than an extra-high frequency doing nothing...
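To make that mental model concrete, here is a toy sketch in Python. All temperature bands and clock numbers below are made up for illustration; the real tables live in the VBIOS and differ per card.

```python
# Toy model of boost as a stack of 2D voltage/frequency curves,
# one layer per temperature band. Numbers are illustrative only.
STACK = {
    (0, 40):  {0.900: 1950, 0.950: 2010, 1.000: 2070},
    (40, 50): {0.900: 1935, 0.950: 1995, 1.000: 2055},
    (50, 99): {0.900: 1920, 0.950: 1980, 1.000: 2040},
}

def active_curve(temp_c):
    """Pick the layer whose temperature band contains the current temp."""
    for (lo, hi), curve in STACK.items():
        if lo <= temp_c < hi:
            return curve
    raise ValueError("temperature out of range")

def apply_offset(stack, offset_mhz):
    """A user clock offset shifts every layer of the stack at once."""
    return {band: {v: f + offset_mhz for v, f in curve.items()}
            for band, curve in stack.items()}

# At 45C the middle layer is active, so the 0.950v point runs 1995MHz:
print(active_curve(45)[0.950])                   # 1995
# A +30 offset raises all layers, e.g. the coolest band's 1.000v point:
print(apply_offset(STACK, 30)[(0, 40)][1.000])   # 2100
```

This is only the "stack of layers" idea from the post, not Nvidia's actual algorithm; the real boost also factors in power and current limits.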


----------



## Edgenier

I'm on air and decided to shunt my TUF 3090 OC (just the 2 resistors) with 4mOhm resistors, and repasted with Thermal Grizzly Kryonaut (X pattern), but kept the same thermal pads. It let me go from 0.85v at 1890MHz, with some games still managing to pull more than 375W and causing annoying downclocks, to a still-quiet 0.918v at 1965MHz, solid in Cyberpunk maxed out with Psycho RTX on, pulling what reads as 230W-240W on the heaviest loads (I think that's something like 480W real?). Being on air, temp downclocks are now the constraint, and I think I would need around 1 volt for 2000+MHz.
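For anyone doing the same back-of-envelope math: if you assume the stock shunts are 5 mΩ (a common value on 30-series boards, but verify on your own PCB), stacking a 4 mΩ resistor on top gives a parallel resistance of about 2.22 mΩ, so the card under-reads power by a factor of roughly 2.25. With those assumptions a 230-240W reading works out to roughly 520-540W real, a bit above the 480W guess; the exact factor depends on the true stock shunt value.

```python
def parallel_mohm(stock, added):
    """Equivalent resistance of the stock shunt with another stacked in parallel."""
    return stock * added / (stock + added)

def real_power_w(reported_w, stock_mohm=5.0, added_mohm=4.0):
    """The card computes current from the voltage drop across what it assumes
    is the stock shunt, so real draw is the reported figure scaled by
    stock / parallel resistance."""
    return reported_w * stock_mohm / parallel_mohm(stock_mohm, added_mohm)

# A reported 235W with a 4mOhm shunt stacked on an assumed 5mOhm stock shunt:
print(round(real_power_w(235)))  # 529
```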

Edit: anyone with the new Reverb G2, I would recommend checking out Vorpx. Cyberpunk in semi-VR is insanity, feels like you're there.


----------



## nycgtr

KPE arrives tmr. Hoping for the best with the lottery. Will report.


----------



## rocklobsta1109

I've scanned through about 40 pages of this thread but didn't find my answer... Is there a compatibility matrix with which bios are compatible with specific card styles? I have a PNY Revel 2x8pin reference PCB card under an EK waterblock and I'm looking for a compatible bios with a greater than 365w power limit


----------



## mirkendargen

rocklobsta1109 said:


> I've scanned through about 40 pages of this thread but didn't find my answer... Is there a compatibility matrix with which bios are compatible with specific card styles? I have a PNY Revel 2x8pin reference PCB card under an EK waterblock and I'm looking for a compatible bios with a greater than 365w power limit


2x8pin BIOSes work on all 2x8pin cards (you may lose ports if they differ), and 3x8pin BIOSes work on 3x8pin cards (same caveat). The exception: because of the way power delivery works on FTW3s, they do better with a 2x8pin BIOS.


----------



## rocklobsta1109

mirkendargen said:


> because of the way power delivery works on FTW3's they do better with a 2x8pin BIOS.


Cool, so is the consensus still that the KFA2 bios is the best bet for the 2x8 cards?


----------



## Gebeleisis

rocklobsta1109 said:


> I've scanned through about 40 pages of this thread but didn't find my answer... Is there a compatibility matrix with which bios are compatible with specific card styles? I have a PNY Revel 2x8pin reference PCB card under an EK waterblock and I'm looking for a compatible bios with a greater than 365w power limit


If you have a reference board with 2x8pin, get the 2x8pin 390W bios.
You have 2 options: one will disable a DisplayPort, the other will not.


----------



## cletus-cassidy

nycgtr said:


> KPE arrives tmr. Hoping for the best with the lottery. Will report.


Hey man - Wondering when you would show up here! Let us know!


----------



## ttnuagmada

Keninishna said:


> What is the consensus on liquid metal tim? Worth it? I am getting 60c on core at around 520w on my bykski block using TF8 for thermal paste. Water temps don't go above 31c so I am thinking the gpu could be dumping more heat into the loop.


That seems a little spicy. I have the EK block, and maxing out the 520w KPE limit in Furmark on my Strix, mine stays about 16C above water temp. I used Conductonaut.


----------



## Zogge

ttnuagmada said:


> That seems a little spicy. I have the EK block, and maxing out the 520w KPE limit in Furmark on my Strix, mine stays about 16C above water temp. I used Conductonaut.


I run a Strix 3090 with the KPE bios but it tends to crash games randomly even at +30 OC. My card does +190 and +220 on the 500W bios. Did you experience instability also? Do you have a link to the 520W bios you used? 
Also, with EK and water temp 30 degrees max, I am at 55-57 degrees at full load with Kryonaut non-conductive, 520W bios. Should I reapply? 150l/h flow


----------



## mirkendargen

Zogge said:


> I run a strix 3090 with kpe bios but it tends to crash games randomly even at +30 oc. My card does +190 and +220 on the 500w bios. Did you experience instability also? Did you have a link to the 520W bios you used?
> Also with ek and water temp 30 degrees max, I am at 55-57 degrees at full load with kryonaut non conductive, 520w bios. Shall I reapply ? 150l/h flow


I'm at more like 16-18C delta on a Strix with the Bykski block and 520W BIOS and regular Kryonaut. I think you should reapply. Maybe the Bykski block is a bit better than the EK block, but it can't be 7-10C better.


----------



## ttnuagmada

Zogge said:


> I run a strix 3090 with kpe bios but it tends to crash games randomly even at +30 oc. My card does +190 and +220 on the 500w bios. Did you experience instability also? Did you have a link to the 520W bios you used?
> Also with ek and water temp 30 degrees max, I am at 55-57 degrees at full load with kryonaut non conductive, 520w bios. Shall I reapply ? 150l/h flow


I got the one off of TPU (227017). I have not experienced any abnormal instability. Haven't played too much with it yet, but I scored 15207 in Port Royal with a 2160MHz@0.987v setting. Clocks averaged about 2145. It will clock higher with more volts, but so far this is the setting that got me the highest average clock due to the power limit (crazy that it's still power limited at 520W).

My water temp maxes at 31-32ish under heavy load with my fans at 500rpm. Flow rate is 0.7gpm (roughly the same as yours). I have not seen temps break 50C so far under any circumstance.

Here's my 3dmark run. I did have fans/pump cranked, which is why the temp is so low.








I scored 15 207 in Port Royal

Intel Core i7-10700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com





Haven't had a whole lot of time to dial things in yet, but for normal gaming with fans/pumps turned down, Cyberpunk is stable with a 2130/1.025v setting and seems to hold that clock speed.

I'm tempted to shunt mod; pretty sure it will do north of 2200 with enough volts.


----------



## cletus-cassidy

Zogge said:


> I run a strix 3090 with kpe bios but it tends to crash games randomly even at +30 oc. My card does +190 and +220 on the 500w bios. Did you experience instability also? Did you have a link to the 520W bios you used?
> Also with ek and water temp 30 degrees max, I am at 55-57 degrees at full load with kryonaut non conductive, 520w bios. Shall I reapply ? 150l/h flow


I have a very similar setup: Strix with EK block using regular Kryonaut. I'm getting about 50-51 degrees on the Strix with 32-33 degree water, so I think you might benefit from reapplication. I have 3 x 360mm rads (two 360GTX rads and an EK PE 360) in a dynamic XL with a heavily overclocked 10900K. Your water temps are a bit better than mine so curious what your setup is and if I need to change something.


----------



## mirkendargen

ttnuagmada said:


> I got the one off of TPU (227017). I have not experienced any abnormal instability. Haven't played too much with it yet, but I scored 15207 in Port Royal with a 2160MHz@0.987v setting. Clocks averaged about 2145. It will clock higher with more volts, but so far this is the setting that got me the highest average clock due to the power limit (crazy that its still power limited at 520w)
> 
> my water temp maxes at 31-32ish under heavy load with my fans at 500rpm. flowrate is .7gpm (roughly the same as yours). I have not seen temps break 50C so far under any circumstance.
> 
> here's my 3dmark run. I did have fans/pump cranked which is why the temp is so low.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 207 in Port Royal
> 
> Intel Core i7-10700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Haven't had just a whole lot of time to dial things in yet, but for normal gaming with fans/pumps turned down, Cyberpunk is stable with a 2130/1.025v setting and seems to hold that clock speed.
> 
> im tempted to shunt mod, pretty sure it will do north of 2200 with enough volts.


2160 at 0.987V is ****ing bananas, congrats on that card and shunt mod it ASAP, lol.


----------



## ttnuagmada

mirkendargen said:


> 2160 at 0.987V is ****ing bananas, congrats on that card and shunt mod it ASAP, lol.


Port Royal is the only thing I can really run at that setting. I tried Time Spy and it crashed almost immediately.


----------



## HyperMatrix

mirkendargen said:


> 2160 at 0.987V is ****ing bananas, congrats on that card and shunt mod it ASAP, lol.


I wanted to ask for proof but figured I'd be even more upset when he showed it. Haha. That's ridiculous especially for non-chilled water.


----------



## ttnuagmada

I'm running about an 18C ambient right now because it got freezing cold here yesterday and I haven't turned the heater on yet. 









I scored 15 210 in Port Royal

Intel Core i7-10700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## Edgenier

Can anyone who has fine-tuned their curve and tested it for stability under Metro Exodus / Control / Cyberpunk, tell me the voltage they're able to achieve 2000mhz at?


----------



## mirkendargen

ttnuagmada said:


> I'm running about an 18C ambient right now because it got freezing cold here yesterday and I haven't turned the heater on yet.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 210 in Port Royal
> 
> Intel Core i7-10700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2469630


My Strix can't do Port Royal at 2160 at 1.1V on chilled-ish (16C) water with the core never getting above 34C. Put some shunts on that card and get 2250, lol.


----------



## BigMack70

Cyberpunk continues to impress me. One of very few games that has really made me think "wow" when considering the visuals. Up there with Far Cry or Doom 3 back in 2004, Crysis in 2007, or Crysis 3 in 2013. 

The RT difference is amazing. 3090 is a heck of a card when it's firing on all cylinders with RT and DLSS.



Spoiler: RT Off

















Spoiler: RT max


----------



## WMDVenum

Edgenier said:


> Can anyone who has fine-tuned their curve and tested it for stability under Metro Exodus / Control / Cyberpunk, tell me the voltage they're able to achieve 2000mhz at?


I have only really tested CP2077, but I ran 2040/.9 and went up to 2055/.913 due to instability during temp stepping. I recommend always setting your curve at the lowest temp, because the GPU steps the frequency down by 15MHz for roughly every 10C increase in temps. Load temps drop the functioning performance to about 2040/.913. I set my curve at sub-30C and then lose 15MHz when temps go up to the high 30s/low 40s under GPU load.
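That rule of thumb can be written down directly. Note the 15MHz-per-10C bin is a forum observation, not a documented spec, and the exact bins vary per card and BIOS:

```python
def effective_clock_mhz(set_mhz, set_temp_c, load_temp_c,
                        step_mhz=15, band_c=10):
    """Clock actually held at load, if boost sheds ~15MHz per ~10C of warming
    above the temperature the curve was dialed in at."""
    bands = max(0, int((load_temp_c - set_temp_c) // band_c))
    return set_mhz - bands * step_mhz

# Curve dialed in at 2055MHz below 30C, game load in the low 40s:
print(effective_clock_mhz(2055, 29, 41))  # 2040
```

This matches the numbers in the post: a curve set for 2055 at sub-30C functions at about 2040 once the card warms into the low 40s.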


----------



## Nico67

ShadowYuna said:


> Galax with EK Reference Vector block works just fine.
> 
> Backplate fit just fine and works well. On my current setup and weather in Aus , max temp is around 45.
> 
> There is coil whine when the card is on heavy load, but it can be heard only if you put your ear next to the block. Pretty happy with Galax and the EK block, except it has only 2x 8Pin.


Awesome, just waiting on some shunt-mod bits and hoping to have mine in cool water next week. Also in Aus, and it gets pretty hot on air with the 390w bios.


----------



## defiledge

Edgenier said:


> Can anyone who has fine-tuned their curve and tested it for stability under Metro Exodus / Control / Cyberpunk, tell me the voltage they're able to achieve 2000mhz at?


I'm doing 2000MHz at 0.975V in Metro Exodus, everything maxed, 24/7 on 75C air.


----------



## changboy

BigMack70 said:


> Cyberpunk continues to impress me. One of very few games that has really made me think "wow" when considering the visuals. Up there with Far Cry or Doom 3 back in 2004, Crysis in 2007, or Crysis 3 in 2013.
> 
> The RT difference is amazing. 3090 is a heck of a card when it's firing on all cylinders with RT and DLSS.
> 
> 
> 
> Spoiler: RT Off
> 
> 
> 
> 
> View attachment 2469649
> 
> 
> 
> 
> 
> 
> Spoiler: RT max
> 
> 
> 
> 
> View attachment 2469650


Did you get HDR working, or are you still playing in SDR? I saw that it's true you get good blacks in HDR but it's not much better than normal; maybe we'll need to wait for a Dolby Vision patch for it to really look amazing.


----------



## HyperMatrix

Out of curiosity, is anyone seeing a pattern of newly released cards overclocking better than the first batches? I helped a guy pick up an FTW3 Hybrid and was helping him OC it. He was able to do 2100MHz at 0.975 and it didn't crash unless he went above 65C. And it would never hit the power limit at those clocks regardless of game. Well... we tested Port Royal, CoD, Red Dead 2, and Cyberpunk. He was able to complete a Port Royal run with an average of around 2145MHz. Is it coincidence that the 2 FTW3 Hybrids I've had access to have been so great at overclocking, or are we seeing a trend towards better clocks? Based on the numbers people have been posting lately, I'm not seeing any more truly dud cards.


----------



## bmgjet

HyperMatrix said:


> Out of curiosity, is anyone seeing a pattern with newly released cards overclocking better than first batches? I helped a guy pick up an FTW3 Hybrid and was helping him OC it. He was able to do 2100MHz on 0.975 and it didn't crash unless he went above 65C. And it would never hit the power limit at those clocks regardless of game. Well...tested port royal, cod, red dead 2, and cyberpunk. He was able to complete a port royal run with average of around 2145MHz. Is it coincidence that the 2 ftw3 hybrids that I've had access to have been so great at overclocking or are we seeing a trend towards better clocks? Based on the numbers people have been posting lately, I'm not seeing anymore truly dud cards.


That's how every release has been; cards get better as time goes on. 3090 stock is pretty stabilized now, where you can walk into a lot of stores and they have at least some of the less wanted cards on the shelves. The top-end ones still have waits though.

Nvidia is probably directing the lesser 3090 chips into 3080s, since that's what's screaming out for stock. Or maybe pulling them aside ready for a 3080 Ti launch.


----------



## bmgjet

Here's a dump of all the power-limit tables from all the RTX 3090 BIOSes on TechPowerUp.








RTX3090-PowerTables-Dump (docs.google.com)

3090 Vender,Date,Version,MD5,Min TDP,Default TPD,Max TDP,Default 8 Pin,Max 8 Pin,Default SRC,Max SRC,Default Chip,Max Chip,Default Slot,Max Slot,Defaul Vram,Max Vram,Unknown
GALAXY,11/03/20,94.02.26.C0.34,ab4ebfd7e60ef3838e7a3542f8a3835f,100000,350000,390000,121500,121500,150000,175000,226800,22...
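If you export that sheet as CSV, a few lines of Python will sort the BIOSes by power limit. The column names are taken from the preview above (assuming the export keeps them), the second sample row is invented to have something to sort, and the TDP fields appear to be stored in milliwatts:

```python
import csv
import io

# Made-up sample rows in the shape of the linked dump (second row invented).
SAMPLE = """3090 Vender,Date,Version,Min TDP,Default TPD,Max TDP
GALAXY,11/03/20,94.02.26.C0.34,100000,350000,390000
EXAMPLE,12/01/20,94.02.42.00.70,100000,450000,520000
"""

def bioses_by_max_tdp(csv_text):
    """Return (vendor, max TDP in watts) tuples, highest power limit first."""
    rows = csv.DictReader(io.StringIO(csv_text))
    out = [(r["3090 Vender"], int(r["Max TDP"]) / 1000) for r in rows]
    return sorted(out, key=lambda t: t[1], reverse=True)

print(bioses_by_max_tdp(SAMPLE))  # [('EXAMPLE', 520.0), ('GALAXY', 390.0)]
```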


----------



## Jared Pace

Looks like some BIOSes allow more slot power, chip power, and VRAM power than others. Could be interesting to test.


----------



## Keninishna

So I replaced the thermal paste on the card from TF8 to Conductonaut and the max temp went down to 52C. The max temp with TF8 was 60C, so a decent improvement, but I still think the temps are a bit high, and now I am wondering if 1.5mm pads for the front is wrong. I was reading a few pages back that one side of the VRMs needs 2mm pads on some cards.


----------



## changboy

Little video, and you can also read some comments from people on YouTube:


----------



## NCC-1701-A

Which BIOS can I use for the 3090 FE?


----------



## Sheyster

Edgenier said:


> Can anyone who has fine-tuned their curve and tested it for stability under Metro Exodus / Control / Cyberpunk, tell me the voltage they're able to achieve 2000mhz at?


Every card is different due to the lottery. Try at 1000mv and go up from there if it's not stable, or down if it is. My card will do 2085 stable at 1000mv.


----------



## Sheyster

NCC-1701-A said:


> which Bios can i use for the 3090 FE?


None, you will have to shunt the card if you want a higher power limit.


----------



## BigMack70

changboy said:


> Do you get hdr working or you keep playing it on normal ? I saw its true to get good black on hdr but normal not to much better, maybe will need to wait for dolby vision patch to really look amazing.


The game definitely needs a patch to fix HDR. It was clearly mastered/authored in SDR with some kind of low quality conversion to HDR.

I got HDR looking a bit better on my LG C9 by changing from auto black level to low, but the black level is still artificially raised, and the image slightly washed out, compared to SDR. In the end I decided I like it better, though. The SDR image is really dark and lacks vibrancy and pop. 

I think nobody at CDPR bothered to play this game on an OLED. The game feels like it was made for IPS monitors with garbage black levels and half-assed HDR capabilities.


----------



## khunpunTH

What do you guys think about my GPU temps?
MSI Gaming X Trio, 520w bios, Barrow water block.
10900K, EK CPU block, XII Apex board.

Info:
2x 360mm EK PE radiators, 6 TT Riing Plus at 1200rpm in push config (not including case fans).
Room temp 28C; quite hot here in Thailand.
Water idle 30, full load 40 (full load means Furmark for 15 mins, then record).
GPU idle 32, full load 56 (TF8 paste).
Radiators placed side exhaust, top intake, Level 20 GT case. I think this case is not suitable for a side radiator.
What radiator config should I set up? What should I do to bring temps down a bit?
My old 2080 Ti water block never reached 50C. It's a wonder how much heat the 3090 produces.


----------



## bmgjet

A 16C delta is decent; I'm getting an 18C delta at 520W on mine.
You need to get your computer somewhere cooler (by a window) and/or add more radiator.


----------



## WayWayUp

Will the 3090 benefit most from Resizable BAR, based on its memory bandwidth and GDDR6X?

I enabled it on my z490 mobo just waiting on nvidia driver


----------



## Keninishna

bmgjet said:


> 16C delta is decent, Im getting a 18C delta at 520W on mine.
> You need to get your computer some where cooler (by a window) and/or add more radiator.


Ah OK, thanks. I don't know what my temps should be, other than that they should be lower than air lol. These people getting temps in the low 40s and 30s must be using chilled water or a ton of rad space.


----------



## ttnuagmada

khunpunTH said:


> What you guy think about my gpu temp?
> msi gaming x trio 520w bios, barrow water block.
> 10900k ek cpu block . xii apex board.
> 
> info.
> 2 x 360mm ek PE radiator. 6 tt riing plus at 1200 rpm in push config.( not include case fan)
> room temp 28c. quite hot here in thailand.
> water idle 30 . Full load 40. ( full load mean furmark 15 mins then record )
> gpu idle 32 . Full load 56. ( tf8 paste )
> radiator place side exhaust, top intake. level 20 gt case. I think this case not suitable for side radiator.
> what radiator config should i set up? What should i do to bring temp a little bit down.
> my old 2080ti water block never reach 50c. Wonder 3090 produce so much heat.


Your delta seems on par with what most of us are getting.


----------



## Gebeleisis

khunpunTH said:


> What you guy think about my gpu temp?
> msi gaming x trio 520w bios, barrow water block.
> 10900k ek cpu block . xii apex board.
> 
> info.
> 2 x 360mm ek PE radiator. 6 tt riing plus at 1200 rpm in push config.( not include case fan)
> room temp 28c. quite hot here in thailand.
> water idle 30 . Full load 40. ( full load mean furmark 15 mins then record )
> gpu idle 32 . Full load 56. ( tf8 paste )
> radiator place side exhaust, top intake. level 20 gt case. I think this case not suitable for side radiator.
> what radiator config should i set up? What should i do to bring temp a little bit down.
> my old 2080ti water block never reach 50c. Wonder 3090 produce so much heat.


Did your old 2080ti consume 520w?


----------



## Shawnb99

Auto Notify for the KPE is still up for any interested


----------



## HyperMatrix

WayWayUp said:


> will the 3090 benefit most from resizeable bar? based on its memory bandwidth and gddr6x?
> 
> I enabled it on my z490 mobo just waiting on nvidia driver


They said similar gains to what AMD was seeing. From what I understand it only aids in the passing of data from system memory through the PCIe slot to GPU memory because the artificial limit of the system only seeing/accessing 256MB of VRAM would be gone. I'd expect it to become more important when RTX IO/DX12 Direct Storage come out. Particularly on PCIe 4.0 platforms. But I don't think GPU memory bandwidth or latency would be cause for additional improvements when enabling the feature.


----------



## Xel_Naga

Does anyone know the Fujipoly pad thickness needed for the Founders GDDR6X? I'm finding my GDDR6X is getting toasty.


----------



## ttnuagmada

BigMack70 said:


> The game definitely needs a patch to fix HDR. It was clearly mastered/authored in SDR with some kind of low quality conversion to HDR.
> 
> I got HDR looking a bit better on my LG c9 by changing from auto black level to low, but the black level is still artificially raised, and the image slightly washed out, compared to SDR. In the end I decided I like it better, though. The SDR image is really dark and lacks vibrancy and pop.
> 
> I think nobody at cdpr bothered to play this game on an oled. Game feels like it was made for ips monitors with garbage black levels and half-assed hdr capabilities.


I actually somehow tricked it into working correctly earlier, and I'm not 100% convinced it's not some combo of Windows/Nvidia drivers causing the problem. I switched on HDR mode in Windows, then switched from full to limited in the control panel, then switched back to full range, and suddenly it was working like it was supposed to with no washout. 

It hasn't worked every time though; it's pretty weird. From what I can tell it seems to be related to Windows HDR using limited range and needing to be messed with to get it to switch to full range. I can tell it's going to work by how the desktop looks before I enter the game.

Edit: OK, I think I figured out exactly how to make it work:

1. Set your Nvidia control panel to limited range.
2. Turn on HDR in Windows.
3. Switch the NV control panel back to full range.


----------



## Falkentyne

Xel_Naga said:


> Does anyone know the Fujipoly pad thickeness needed for the Founders GDDR6X? I'm finding my GDDR is GETTING TOASTY


1.5mm for VRAM (both sides), VRM's and PCB Hotspots works perfectly.
And Fujipoly pad packets are way too small. I would suggest Thermalright Odyssey 1.5mm pads (85mm x 45mm is bigger than 60mm x 50mm).
Or you can order the 120mm x 120mm Odyssey full-size pads from AliExpress from the Yao Yue Shop (seems to be an official supplier for Thermalright pads), which is a lot cheaper for the size you get.

Here's a good layout for the backplate side hotspots



http://imgur.com/Jx4IuqL

or


http://imgur.com/ttF7Nla


----------



## pat182

EVGA hybrid kit is up, who's gonna try to put it on a Strix?? hehe


----------



## Jordyn

New 3090 owner here. Have an Inno3D X4, so 2x8pin connectors. Worth flashing to the KFA2 bios if staying on air? Or will temps and power limits mean any gains will be minimal? Also worth noting that I am in Australia, so coming into a hot summer.


----------



## WayWayUp

Anyone concerned about buying a 3090 Kingpin? A $2000 GPU will look silly if Nvidia drops a 3080 Ti next month for half that price.


----------



## mirkendargen

WayWayUp said:


> anyone concerned about buying a 3090 kingpin? a $2000 gpu will look silly if nvidia drops a 3080ti next month for half that price


If that did happen, and EVGA did make a 3080Ti Kingpin (I don't think they would), it'd be $1500 so the difference isn't as big.


----------



## DrunknFoo

HyperMatrix said:


> Out of curiosity, is anyone seeing a pattern with newly released cards overclocking better than first batches? I helped a guy pick up an FTW3 Hybrid and was helping him OC it. He was able to do 2100MHz on 0.975 and it didn't crash unless he went above 65C. And it would never hit the power limit at those clocks regardless of game. Well...tested port royal, cod, red dead 2, and cyberpunk. He was able to complete a port royal run with average of around 2145MHz. Is it coincidence that the 2 ftw3 hybrids that I've had access to have been so great at overclocking or are we seeing a trend towards better clocks? Based on the numbers people have been posting lately, I'm not seeing anymore truly dud cards.


FWIW. 

I've been shunting a few cards for people in my area. From what I have tested, out of the 7 3090 cards I played with, they are pretty much all comparable, with the exception of two that performed poorly. That said, I can never know the full potential of those without going under water.


----------



## defiledge

Guys, I took apart my card and figured out why it was throttling. It turns out almost all of the thermal paste evaporated during the 2 weeks that I used it... I don't know if this was caused by the thicker pads I used or if there's something wrong with my retention spring.


----------



## Falkentyne

defiledge said:


> Guys I took apart my card and figured out why it was throttling. It turns out all most of the thermal paste evaporated during the 2 weeks that I used it... I don't know if this was caused by the thicker pads I used or there's something wrong with my retention spring.


What thermal paste did you use?
You should be using Thermalright TFX on that thing. Paste performs top tier and lasts forever.


----------



## defiledge

Falkentyne said:


> What thermal paste did you use?
> You should be using Thermalright TFX on that thing. Paste performs top tier and lasts forever.


I used some noctua stuff from a cpu cooler I bought


----------



## Falkentyne

defiledge said:


> I used some noctua stuff from a cpu cooler I bought


It's not good to use generic paste for high-amperage GPUs.
Just get some good stuff.
Kingpin KPx, Thermalright TFX, or Kryonaut Extreme (pink) should all work well.
I personally can vouch for TFX; it performs even better than Kryonaut Extreme, by a couple of C.


----------



## defiledge

Falkentyne said:


> Not good to use generic paste for high amp GPU's.
> Just get some good stuff.
> Kingpin KPx, Thermalright TFX or Kryonaut Extreme (pink) should all work well.
> I personally can vouch for TFX. Performs even better than Kryonaut Extreme, by a couple of C.


Alright thanks, I'll get one of these when my waterblock arrives. 

On another note, I'm actually getting pretty good scores now that I've repasted my card 









I scored 14 770 in Port Royal

AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## changboy

Falkentyne said:


> Not good to use generic paste for high amp GPU's.
> Just get some good stuff.
> Kingpin KPx, Thermalright TFX or Kryonaut Extreme (pink) should all work well.
> I personally can vouch for TFX. Performs even better than Kryonaut Extreme, by a couple of C.


How about liquid metal with a Thermal Grizzly shield around the chip? I think I will use this when I put my EK waterblock on my FTW3 Ultra, but I am not sure.


----------



## mirkendargen

changboy said:


> How about liquid metal with thermal grizzly shield around the chip ? I think i will use this when put my ek wb on my ftw3 ultra but iam not sure.


Why spend the money for Thermal Grizzly when nail polish works? Lol.


----------



## defiledge

https://www.3dmark.com/3dm/54986700 , so close to 15k. Does OCing the CPU help?


----------



## Falkentyne

changboy said:


> How about liquid metal with thermal grizzly shield around the chip ? I think i will use this when put my ek wb on my ftw3 ultra but iam not sure.


Nail polish is cheap and works (cellulose or nitrocellulose). 
Or you can use MG chemicals conformal coating.

Of course liquid metal would work the best, but besides the safety prep work, it's a huge pain to clean up if you need to disassemble the card and then clean any old hardened LM from the heatsink, because then it could get on the pads and then you would have to replace the pads too. A lot of work to do unless you're sure you won't be taking apart the card anymore. If that's worth another 4-5C, then go ahead.

Repasting with something like TFX or Kryonaut Extreme or KPX is quick and easy by comparison.


----------



## dr/owned

Hulk1988 said:


> I saw people starting to put a heat sink on the backplate to get better temperatures. What do you think about that? https://www.amazon.com/gp/product/B089QJQY17/ref=ppx_yo_dt_b_asin_title_o05_s00?ie=UTF8&th=1


The attachment method is key... if you're using thermal adhesive it's absolute trash, like 0.5W/mK, and then that's on top of an anodized backplate that's using thermal pads. Do I know for a fact that using screws + thermal paste is going to make more of a difference? Nope.


----------



## Falkentyne

dr/owned said:


> The attachment method is key...if you're using thermal adhesive it's absolute trash...like 0.5W/mK and then that's on top of an annodized backplate that's using thermal pads. Do I know for a fact that using screws + thermal paste is going to make more of a difference? Nope.


So what's the best way to attach something like this?


----------



## dante`afk

nycgtr said:


> KPE arrives tmr. Hoping for the best with the lottery. Will report.


any benches?


----------



## dr/owned

Falkentyne said:


> So what's the best way to attach something like this?


Drill it so you can insert screws through the backplate that you can then crank down to make a backplate-paste-heatsink sandwich. You have to make sure to drill where the PCB is blank, though, so it doesn't interfere with a capacitor or something.


----------



## Falkentyne

dr/owned said:


> Drill it so you can insert screws through the backplate that you can then crank down to make a backplate paste heatsink sandwich. Have to make sure to drill where the PCB is blank though so it doesn't interfere with a capacitor or something.


I'm sorry but I don't have any power tools. All I have are basic torx and philips and security bits.


----------



## changboy

I have liquid electrical tape too, like this (using it with my i9-7900X, with liquid metal):








Star brite Liquid Electrical Tape - LET Black 1 oz Tube | eBay





www.ebay.ca


----------



## dr/owned

Falkentyne said:


> I'm sorry but I don't have any power tools. All I have are basic torx and philips and security bits.


Dang... yeah, with no ability to drill, pretty much all you can do is the adhesive route. It would maybe still work to use thermal paste and press the heatsink down as hard as possible; it's not going to squeeze into a thin layer, but even a "thick" layer at 12.5W/mK is going to be better than a thin layer of adhesive at 0.5W/mK. Would be an interesting experiment to see if it does better and still "suctions" onto the backplate well enough.
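The W/mK comparison can be put in numbers: the conduction resistance of a flat TIM layer is R = t / (kA). With a made-up but plausible contact area, even a paste layer five times thicker than the adhesive still conducts five times better:

```python
def layer_resistance_c_per_w(thickness_mm, k_w_per_mk, area_cm2):
    """Conduction resistance of a flat TIM layer: R = t / (k * A), in C/W."""
    t_m = thickness_mm / 1000.0      # thickness in metres
    a_m2 = area_cm2 / 10000.0        # area in square metres
    return t_m / (k_w_per_mk * a_m2)

# Hypothetical 100 cm^2 contact patch on the backplate:
adhesive = layer_resistance_c_per_w(0.1, 0.5, 100)  # thin 0.5 W/mK adhesive
paste = layer_resistance_c_per_w(0.5, 12.5, 100)    # thick 12.5 W/mK paste
print(adhesive, paste)  # ~0.02 C/W vs ~0.004 C/W
```

This ignores contact resistance at each interface, which in practice often dominates, so treat it as a lower bound on the difference.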


----------



## DrunknFoo

Falkentyne said:


> Nail polish is cheap and works (cellulose or nitrocellulose).
> Or you can use MG chemicals conformal coating.
> 
> Of course liquid metal would work the best, but besides the safety prep work, it's a huge pain to clean up if you need to disassemble the card and then clean any old hardened LM from the heatsink, because then it could get on the pads and then you would have to replace the pads too. A lot of work to do unless you're sure you won't be taking apart the card anymore. If that's worth another 4-5C, then go ahead.
> 
> Repasting with something like TFX or Kryonaut Extreme or KPX is quick and easy by comparison.


Ahh, black goop. Electrical tape glue works well instead of conformal coating or nail polish... If for any reason you need to clean it up or remove it, simply peel and you're done. Any LM stuck on it won't really jump or come off of it either, so it's pretty safe.


----------



## Edge0fsanity

Got my kingpin today. Not impressed, will be keeping my ftw3 and putting it back in my pc once my optimus block ships.


----------



## changboy

Edge0fsanity said:


> Got my kingpin today. Not impressed, will be keeping my ftw3 and putting it back in my pc once my optimus block ships.


Lol, I clicked for auto-notify, but I don't think I'll buy it when I get the email notification. It seems useless unless you go LN2; power and frequency are directly tied to temperature, so there are no miracles on air or water.


----------



## bmgjet

Edge0fsanity said:


> Got my kingpin today. Not impressed, will be keeping my ftw3 and putting it back in my pc once my optimus block ships.


What's it bench like? Manage to crack 15K in Port Royal?

Seen a lot of people complaining on the EVGA forum.
There was also a rage thread about how my bottom-of-the-line XC3 was beating his KPE, with him demanding a response from EVGA, lol.
They delete negative threads pretty quickly though, so you miss a lot of it if you're not checking a few times a day.


----------



## Edge0fsanity

bmgjet said:


> What's it bench like? Manage to crack 15K in Port Royal?
> 
> Seen a lot of people complaining on the EVGA forum.
> There was also a rage thread about how my bottom-of-the-line XC3 was beating his KPE, with him demanding a response from EVGA, lol.
> They delete negative threads pretty quickly though, so you miss a lot of it if you're not checking a few times a day.


I need to spend more time dialing it in. Had it in my system for maybe an hour or so tonight just getting a feel for how it performs. Seems to run benches around 2115-2130mhz without much power throttling. Cp2077 at 2100mhz with no power throttling. Haven't touched mem yet.

I haven't touched any voltages outside of the voltage slider so I'm sure I can squeeze out a little more on the core.


----------



## mirkendargen

dr/owned said:


> The attachment method is key... if you're using thermal adhesive it's absolute trash, like 0.5 W/mK, and then that's on top of an anodized backplate that's using thermal pads. Do I know for a fact that using screws + thermal paste is going to make more of a difference? Nope.


I used Arctic Silver Alumina thermal adhesive (9 W/mK) to glue a RAM block onto my backplate and it works great; there's a ton of surface contact area for the amount of heat it needs to get rid of. I wouldn't have done it on a stock backplate, but doing it on a backplate that came with a waterblock? Sure.


----------



## cheddle

So I asked Galax for a better bios than the 390w they released, this was their response:

“
Dear Customer ,

Thank you for reaching GALAX customer service .

Sorry for keep you waiting .

We will not release XOC Bios publicly.

We hope you understand and sorry for any inconvenience caused .

Best Regards
GALAX
“

so they HAVE an XOC bios??

Perhaps if we all bombard their support desk with requests for a decently high-TDP bios for our reference PCBs, they will eventually give in?

Reading a bit about the 6900 XT guys' efforts with MorePowerTool, those guys are jacking their TDPs to the moon on two 8-pin connectors.


----------



## bmgjet

Any of the vendors can make a bios to whatever spec they want.
Then they just need to get Nvidia to sign the file.
None of them are going to risk losing their signing rights by releasing a real XOC bios with all the limiters turned off to the public.


----------



## HyperMatrix

bmgjet said:


> Any of the vendors can make a bios to whatever spec they want.
> Then they just need to get Nvidia to sign the file.
> None of them are going to risk losing their signing rights by releasing a real XOC bios with all the limiters turned off to the public.


Not even a signing-rights issue; it's a liability issue. If they release a bios that goes over the standard power specs and a computer happens to catch fire for whatever reason, and someone dies or a house burns down, etc., they'd be liable.

The only way they'd officially release a bios with a higher limit is if they made a card with 4 power connectors, which I think they should have done with the KingPin card. Maybe we'll see something like that with a 3080 Ti/3090 Ti model. But I doubt it.


----------



## mismatchedyes

Could I please ask a question about power usage with Cyberpunk to someone with a 3090?

On my 2080ti at 2100mhz I have noticed power usage with RTX off is approximately 430-450w but when RTX is on it drops to around 330-350w. I presume this is because the RT hardware is not fast enough to keep up and thus the rest of the GPU is less busy.

Is this similar on the 3090 or will it use the full ~500w with RT both on and off?

Thanks in advance.


----------



## mattxx88

mismatchedyes said:


> Could I please ask a question about power usage with Cyberpunk to someone with a 3090?
> 
> On my 2080ti at 2100mhz I have noticed power usage with RTX off is approximately 430-450w but when RTX is on it drops to around 330-350w. I presume this is because the RT hardware is not fast enough to keep up and thus the rest of the GPU is less busy.
> 
> Is this similar on the 3090 or will it use the full ~500w with RT both on and off?
> 
> Thanks in advance.


DLSS on? Because that renders at a lower resolution.
Try RTX on and DLSS off and see the wattage.


----------



## HyperMatrix

mismatchedyes said:


> Could I please ask a question about power usage with Cyberpunk to someone with a 3090?
> 
> On my 2080ti at 2100mhz I have noticed power usage with RTX off is approximately 430-450w but when RTX is on it drops to around 330-350w. I presume this is because the RT hardware is not fast enough to keep up and thus the rest of the GPU is less busy.
> 
> Is this similar on the 3090 or will it use the full ~500w with RT both on and off?
> 
> Thanks in advance.


From my understanding, since RT can run on non-RTX GPUs too, you get as much accelerated ray tracing performance as the card's RT cores can handle, and the rest is done at a slower rate on the normal cores. So you should never be stuck in a situation where you're at 20 fps, for example, because your RT cores can only render 20 frames a second while the rest of the card sits idle after matching that 20 fps target. And to answer your question: the 3090s do pull full power regardless of RTX on or off, when not limited by CPU performance.


----------



## mirkendargen

HyperMatrix said:


> From my understanding, since RT can be run on non-RTX GPUs, you get however much accelerated ray tracing performance as the card can handle, and the rest is done at a slower rate using your normal cores. So you should never be stuck at a situation where you're at 20 fps for example because your RT cores can only render 20 frames a second and the rest of your card just sits idly by after matching that 20 fps target. And to answer your question the 3090s do pull full power regardless of RTX on or off, when not being limited by CPU performance.


You'll probably max power either way, but RT on (and DLSS on) uses less power. With a 520W power limit you'll be in the low-mid 1900s at 4K with RT off/DLSS off and always power limited. If you enable RT and DLSS Quality, you'll be in the upper 2000s and power limited 50-75% of the time. With DLSS Balanced you'll be power limited ~25% of the time; with DLSS Performance you generally won't be power limited.
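If you want to measure "how often am I power limited" yourself rather than eyeball it, nvidia-smi can poll the software power-cap throttle reason alongside clocks and draw. A minimal sketch of tallying that output; the sample lines below are canned for illustration, not captured from a real run, and the exact query fields depend on your driver version:

```python
import csv
import io

# nvidia-smi can report whether the SW power cap is currently throttling, e.g.:
#   nvidia-smi --query-gpu=clocks.gr,power.draw,clocks_throttle_reasons.sw_power_cap \
#              --format=csv,noheader,nounits -l 1
# Canned sample output (clock MHz, power W, power-cap state), NOT a real capture:
sample_log = """1935, 518.42, Active
2010, 505.10, Not Active
1920, 519.80, Active
2055, 480.25, Not Active
"""

def power_limited_fraction(csv_text):
    """Fraction of polled samples where the power cap was active."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    active = sum(1 for row in rows if row[2].strip() == "Active")
    return active / len(rows)

print(f"power limited {power_limited_fraction(sample_log):.0%} of samples")
# -> power limited 50% of samples
```

Pipe a minute of real polling into a file and you get an actual percentage to compare DLSS Quality vs Balanced vs Performance.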


----------



## mattxx88

These were my tests; DLSS off draws more power.


----------



## Esenel

If you want your card to cry for mercy, just play Metro Exodus :-D

At 1440p, Extreme details, RTX Ultra, it is constantly showing the PL boolean with the 520W KPE bios.

Wall meter showing 720-760W :-D


----------



## JonnyV75

Edge0fsanity said:


> I need to spend more time dialing it in. Had it in my system for maybe an hour or so tonight just getting a feel for how it performs. Seems to run benches around 2115-2130mhz without much power throttling. Cp2077 at 2100mhz with no power throttling. Haven't touched mem yet.
> 
> I haven't touched any voltages outside of the voltage slider so I'm sure I can squeeze out a little more on the core.


I was one of the lucky 6 that got a card shipped from yesterday's auto-notify. I get mine today but won't install it until Saturday. What specifically were you not impressed by?


----------



## Edge0fsanity

JonnyV75 said:


> I was one of the lucky 6 that got a card shipped from yesterday's auto-notify. I get mine today but won't install it until Saturday. What specifically were you not impressed by?


I retract my previous statement; something I had open in the background was crashing the card at anything above a +100 offset, limiting my OC to 2100 MHz. I've been doing proper benching this morning to find the card's limit in PR. Currently hitting 2175 MHz at the start of the test, with temp/power throttling taking it down to 2145 MHz by the end. Still haven't found the upper limit on the core, and I'm running back-to-back runs with no cool-off. Previous run below; should be over 15k shortly. This is in a closed case, no tricks to cool it down more. Will have to get the portable AC unit out soon and see if 15.5k is possible.


----------



## JonnyV75

Edge0fsanity said:


> I retract my previous statement, something I had open in the background was crashing the card at anything above +100 offset limiting my OC to 2100mhz. Doing proper benching this morning finding the card's limit in PR. Currently hitting 2175mhz at the start of the test with temp/power throttling taking it down to 2145mhz by the end. Still haven't found the upper limit on the core and I'm running back to back runs with no cool off. Previous run below, should be over 15k shortly. This is in a closed case, no tricks to cool it down more. Will have to get the portable AC unit out soon and see if 15.5k is possible.
> View attachment 2469818


Nice. I see you have/had a FTW3 per your signature... so, a decent upgrade?

My 3090 Hybrid was a nice bump from the air-cooled FTW3, more temp-related than huge performance gains. I'm curious to see how the KPE benches against the previous two in the same system.


----------



## blockbusta

So, pretty new to these forums, and I just got the Asus Strix RTX 3090 OC. Is there a 500W bios similar to EVGA's? I believe the V2 bios on the Asus website is just a low-dB fan mode, but I couldn't really find the answer as the Asus forums are quite dead...

Also, is the MSI Afterburner OC Scanner working on 3090 cards? I kept getting an error even with the beta for some reason...

Thanks!


----------



## dante`afk

Edge0fsanity said:


> I retract my previous statement, something I had open in the background was crashing the card at anything above +100 offset limiting my OC to 2100mhz. Doing proper benching this morning finding the card's limit in PR. Currently hitting 2175mhz at the start of the test with temp/power throttling taking it down to 2145mhz by the end. Still haven't found the upper limit on the core and I'm running back to back runs with no cool off. Previous run below, should be over 15k shortly. This is in a closed case, no tricks to cool it down more. Will have to get the portable AC unit out soon and see if 15.5k is possible.
> View attachment 2469818


I wouldn't be happy with that; Vince said in one of the streams the card can do 15.5k easily in PR.

To anyone thinking the KPE is some kind of wonder card: no, it's not. It's for LN2 use.


----------



## defiledge

Has anybody used the new Alphacool 16 W/mK paste? I just ordered some, and it seems to be the highest conductivity for a non-conductive TIM.


----------



## Edge0fsanity

JonnyV75 said:


> Nice. I see you have/had a FTW3 as per your signature... so a decent upgrade?
> 
> My 3090 hybrid was a nice bump from the air FTW3. More temp related than huge performance gains. I’m curious to see how the KPE benches against the previous two In the same system.


Seems like it's around a 130 MHz gain over my FTW3 in PR so far. I hit a wall after my previous post, so that was my best run. The biggest difference in my mind right now is the efficiency of the Kingpin vs the FTW3. My FTW3 slams into the power limit with an offset and benches poorly with the XC3 bios (or any bios for that matter; my card is bugged on the 500W EVGA one). I run it undervolted at 1025 mV for 2085 MHz and it still throttles a bit in PR. I'm guessing with the Optimus block I'll be able to sustain 1025 mV, maybe a bit more, and clock around 2115-2130 MHz average. The card will sustain 2160 MHz at 1.1 V with a 60% load. It really needs shunts, but that's not worth doing on a chip that can't do 2200 MHz+.

The Kingpin doesn't seem to throttle much at all, and my board power draw is typically around 450W in PR and about the same in CP2077. I'm going to fine-tune with the VF curve next and see where that gets me.

Best runs on both cards so far:
Kingpin https://www.3dmark.com/pr/650607 14832
FTW3 https://www.3dmark.com/pr/509397 14466


----------



## Edge0fsanity

dante`afk said:


> I wouldn't be happy with that; Vince said in one of the streams the card can do 15.5k easily in PR.
> 
> To anyone thinking the KPE is some kind of wonder card: no, it's not. It's for LN2 use.


I believe the expectation is 15-15.5k. Seems like I lost the silicon lottery again, which I'm used to; I typically get average-to-trash chips in my GPUs and golden chips in my CPUs.


----------



## Fufuuu

Hey,

I changed the bios on my Zotac RTX 3090 to the KFA2 one. Right after flashing, I saw stuttering in all games without any change to power, core voltage, or frequency. Has anyone noticed behavior like that?

Thanks.


----------



## megahmad

Guys, I'm not home to check, but there's a new bios posted for the ASUS 3090 TUF OC on the ASUS site. Can anyone see if they increased the power limit after updating?









ASUS TUF Gaming GeForce RTX 3090 OC Edition 24GB GDDR6X | Graphics Card | ASUS Global






www.asus.com





Thanks.


----------



## EniGma1987

dr/owned said:


> Drill it so you can insert screws through the backplate that you can then crank down to make a backplate paste heatsink sandwich. Have to make sure to drill where the PCB is blank though so it doesn't interfere with a capacitor or something.


Do you think it would be better to simply leave the backplate off and attach these directly to each VRAM chip with some TIM? That seems like more efficient heat transfer to me.





Amazon.com: Alphacool 17427 GPU RAM Copper Heatsinks 14x14mm - 10pcs Air Cooling Passive Coolers: Computers & Accessories





www.amazon.com


----------



## xrb936

EK Strix block arrives tomorrow...


----------



## ttnuagmada

blockbusta said:


> So pretty new to these forums and just got the Asus strix rtx 3090 oc. Is there a bios for 500w similar to EVGA? I believe the bios V2 on the Asus website is just for a low fan db mode but I couldn't really find the answer as asus forums are quite dead...
> 
> Also, is MSI afterburner OC scanner working on 3090 cards? I kept getting an error even with the beta for some reason...
> 
> Thanks!


You can put the KPE 520W bios on it. It will give weird power readings, but it definitely gives a little headroom. Though I think I've read that the 500W bios works better for cards with fans.


----------



## JonnyV75

dante`afk said:


> I wouldn't be happy with that; Vince said in one of the streams the card can do 15.5k easily in PR.
> 
> To anyone thinking the KPE is some kind of wonder card: no, it's not. It's for LN2 use.


I'm not expecting mine to be. Also in Canada, buying a KPE direct from EVGA is the same price as a FTW3 bought locally.


----------



## xrb936

JonnyV75 said:


> I'm not expecting mine to be. Also in Canada, buying a KPE direct from EVGA is the same price as a FTW3 bought locally.


Not at all. You are still paying HST/GST.


----------



## Arbustok

Hi everyone!
Kind of new to overclocking, so I'm not really sure what I'm doing, but my Asus TUF OC can reach 1950 MHz average in Port Royal at 0.881 V, and spikes to 2010/2025 MHz at 0.893 V, but it won't use any more voltage (I'm assuming because of its 370W power limit?).

Does anyone have an idea of what I can expect if I flash the Gigabyte 390W bios?

Thank you!!!


----------



## mismatchedyes

Thank you very much!


mirkendargen said:


> You'll probably max power either way, but RT on (and DLSS on) uses less power. With a 520W power limit you'll be in the low-mid 1900's at 4k RT off/DLSS off and always power limited. If you enable RT and DLSS quality, you'll be in the upper 2000's and power limited 50-75% of the time. DLSS balanced you'll be power limited ~25% of the time, DLSS performance you generally won't be power limited.


----------



## geriatricpollywog

WayWayUp said:


> anyone concerned about buying a 3090 kingpin? a $2000 gpu will look silly if nvidia drops a 3080ti next month for half that price


Technically it's $1900 after the Associates discount.

No, the features are worth it. And unless Ampere gets dual-sourced to 7nm, the Kingpin will still be the fastest card out of the box. The ones who will be disappointed are those who bought a $1500 to $1800 3090 and are currently unhappy with their power limits.


----------



## xrb936

0451 said:


> Technically it's $1900 after the Associates discount.
> 
> No, the features are worth it. And unless Ampere gets dual-sourced to 7nm, the Kingpin will still be the fastest card out of the box. The ones who will be disappointed are those who bought a $1500 to $1800 3090 and are currently unhappy with their power limits.


And the performance. I have seen a few posts about its benchmarks, and I believe a Strix 3090 can beat them all on air cooling.


----------



## HyperMatrix

xrb936 said:


> Not at all. You are still paying HST/GST.


FTW3 sells for $2450 CAD at Memory Express here before taxes; the KingPin is $2420. Though you've gotta add another $70 for brokerage and shipping, that's still a significant card upgrade for barely anything more.


xrb936 said:


> And the performance. I have seen a few posts about its benchmarks, and I believe a Strix 3090 can beat them all on air cooling.


There is no reason at all to think a Strix out of the box would beat a KingPin out of the box. None. Unless the Strix in question won the silicon lottery and the KingPin you're comparing it to has a dud of a chip. But there is literally zero advantage for the Strix. The KingPin is a better-designed card in every way. That doesn't overcome the silicon lottery, but it means the Strix has zero advantage over it. And the KPE has a lot of added goodies for a reasonable price at $1900 USD.

I have no idea why people attack the card when it is objectively the best 3090 out there. Arguments of it being too expensive at $1900 vs $1800 for the Strix, when you could get a 3080 for $699? Come on, fellas. Get real.

There is one reason and one reason only to get a Strix over a KPE: the presence of a second HDMI port on the Strix.


----------



## WayWayUp

I'm more disappointed with the KPE bin.
Back in the day the bin was top notch!

Last gen the bin was lax, but they got away with it because each card came with Samsung memory. On top of that, most 2080 Tis came with dual 8-pin, but the KPE was triple with a 520W bios out of the box, which was significantly more than anything else out there.

This gen though... EVGA seems to be treating this card as a FTW3 with the Classified tool. They are making large batches (mass market) with a very lax bin. You're not promised a good card; the bin just prevents you from getting a dud.

You have people only scoring 14700-14800 in Port Royal with a 360 AIO and a 520W power limit! That's very standard and not impressive for water. Yeah, many can reach 15k when min-maxing their stats, and a few got 15,200+ when they used ice-cold air to drop water temps way down, but my FTW3 is beating this on air, and that's really upsetting.

I think any Strix on water with 520W is capable of reaching those scores, unless it lost the silicon lottery.

Honestly, not even slightly impressed with the Kingpin this generation. EVGA is making it in large batches as if it's a mass-market card. They are treating it as their new flagship as opposed to a specialty card.

The card needs to be binned better, much better, but they can't if they are making such large batches. EVGA is just going for max $ this gen, and to separate it from the FTW3 they made sure you can't volt-mod it or even add a higher-watt bios; they gimped that too. It's the only way to separate their tiers.


----------



## JonnyV75

xrb936 said:


> Not at all. You are still paying HST/GST.


It's the 5% associate discount that helps. You also need a CDN credit card that charges no currency-conversion fees.

Below was my estimate prior to the actual purchase. The actual purchase was $2500.61 + $300.56 for taxes. Not quite "exactly the same"... but damn close.


----------



## geriatricpollywog

HyperMatrix said:


> FTW3 sells for $2450 CAD at memory express here before taxes. KingPin is $2420. Though you gotta add another $70 for brokerage and shipping, that’s still a significant card upgrade for barely anything more.
> 
> There is no reason at all to think a Strix out of the box would beat a KingPin out of the box. None. Unless the Strix in question won out on silicon lottery and the kingpin you’re comparing it to has a dud of a chip. But there is literally 0 advantage for the Strix. The KingPin is a better designed card in every way. That doesn’t overcome silicon lottery. But it means the Strix has 0 advantage over it. And the KPE has a lot of added goodies for a reasonable price at $1900 USD.
> 
> I have no idea why people attack the card when it is objectively the best 3090 card out there. Arguments of It being too expensive at $1900 vs $1800 for the Strix when you could get a 3080 for $699? Come on fellas. Get real.
> 
> There is one reason and one reason only to get a Strix over a KPE. And that’s the presence of a second HDMI port on the Strix.


The Strix might have made sense if you had access to an XOC bios. But the Strix no longer makes sense on account of the Kingpin existing.

Plus, EVGA’s customer service is amazing. I called them yesterday because I ordered my KPE on Tuesday and UPS had not scanned the package yet (normally evga orders arrive by noon the next day). I immediately got through to US based customer service and the agent addressed me by name and asked if I was calling about my Kingpin card. He was legitimately concerned for me and he then spent the next 15 minutes going through the system trying to find it. I actually had to end the call because I had a meeting. Turns out UPS was delayed.


----------



## xrb936

HyperMatrix said:


> FTW3 sells for $2450 CAD at memory express here before taxes. KingPin is $2420. Though you gotta add another $70 for brokerage and shipping, that’s still a significant card upgrade for barely anything more.
> 
> There is no reason at all to think a Strix out of the box would beat a KingPin out of the box. None. Unless the Strix in question won out on silicon lottery and the kingpin you’re comparing it to has a dud of a chip. But there is literally 0 advantage for the Strix. The KingPin is a better designed card in every way. That doesn’t overcome silicon lottery. But it means the Strix has 0 advantage over it. And the KPE has a lot of added goodies for a reasonable price at $1900 USD.
> 
> I have no idea why people attack the card when it is objectively the best 3090 card out there. Arguments of It being too expensive at $1900 vs $1800 for the Strix when you could get a 3080 for $699? Come on fellas. Get real.
> 
> There is one reason and one reason only to get a Strix over a KPE. And that’s the presence of a second HDMI port on the Strix.


I was talking about the overclocking results.
Since EVGA made the worst cards in their history this gen (I got 2 dead 3090 FTW3 Ultras in the last month, hundreds of people are having the same issue, and EVGA still keeps silent), I doubt the KPE is worth another try. I'm still waiting on my KPE, but I have my doubts.


----------



## lokran88

Aquacomputer just posted these sexy ones. Hopefully will get mine before Christmas as I preordered it as soon as it was up in their store.


----------



## xrb936

WayWayUp said:


> I'm more disappointed with the KPE bin.
> Back in the day the bin was top notch!
> 
> Last gen the bin was lax, but they got away with it because each card came with Samsung memory. On top of that, most 2080 Tis came with dual 8-pin, but the KPE was triple with a 520W bios out of the box, which was significantly more than anything else out there.
> 
> This gen though... EVGA seems to be treating this card as a FTW3 with the Classified tool. They are making large batches (mass market) with a very lax bin. You're not promised a good card; the bin just prevents you from getting a dud.
> 
> You have people only scoring 14700-14800 in Port Royal with a 360 AIO and a 520W power limit! That's very standard and not impressive for water. Yeah, many can reach 15k when min-maxing their stats, and a few got 15,200+ when they used ice-cold air to drop water temps way down, but my FTW3 is beating this on air, and that's really upsetting.
> 
> I think any Strix on water with 520W is capable of reaching those scores, unless it lost the silicon lottery.
> 
> Honestly, not even slightly impressed with the Kingpin this generation. EVGA is making it in large batches as if it's a mass-market card. They are treating it as their new flagship as opposed to a specialty card.
> 
> The card needs to be binned better, much better, but they can't if they are making such large batches. EVGA is just going for max $ this gen, and to separate it from the FTW3 they made sure you can't volt-mod it or even add a higher-watt bios; they gimped that too. It's the only way to separate their tiers.


Exactly.


----------



## xrb936

lokran88 said:


> Aquacomputer just posted these sexy ones. Hopefully will get mine before Christmas as I preordered it as soon as it was up in their store.
> 
> View attachment 2469856
> 
> View attachment 2469857
> 
> View attachment 2469858


Better than EKWB one?


----------



## lokran88

Well, I'm assuming it will perform better, as Aquacomputer blocks always have in reviews of past cooler generations. And it does have the active backplate, which EK doesn't offer at all.

I will post my temp delta as soon as I get the block.


----------



## xrb936

lokran88 said:


> Well, I'm assuming it will perform better, as Aquacomputer blocks always have in reviews of past cooler generations. And it does have the active backplate, which EK doesn't offer at all.
> 
> I will post my temp delta as soon as I get the block.


Sounds great. Thank you!


----------



## xrb936

0451 said:


> The Strix might have made sense if you had access to an XOC bios. But the Strix no longer makes sense on account of the Kingpin existing.
> 
> Plus, EVGA’s customer service is amazing. I called them yesterday because I ordered my KPE on Tuesday and UPS had not scanned the package yet (normally evga orders arrive by noon the next day). I immediately got through to US based customer service and the agent addressed me by name and asked if I was calling about my Kingpin card. He was legitimately concerned for me and he then spent the next 15 minutes going through the system trying to find it. I actually had to end the call because I had a meeting. Turns out UPS was delayed.


Well, the only thing they have right now is better customer service. If they lose that, I will never buy their cards.


----------



## geriatricpollywog

xrb936 said:


> Well, the only thing they have right now is better customer service. If they lose that, I will never buy their cards.


And they have the ONLY XOC card.

Meanwhile Asus is charging $2600 for the Gundam Strix.


----------



## xrb936

JonnyV75 said:


> It's the 5% associate discount that helps. Also need to have a CDN credit card that charges no currency conversion fees.
> 
> Below was my estimate prior to actual purchase. Actual purchase was $2500.61 + 300.56 for taxes. Not quite "exactly the same" ... but damn close.
> 
> View attachment 2469853


Right, I forgot the discount.


----------



## xrb936

0451 said:


> And they have the ONLY XOC card.
> 
> Meanwhile Asus is charging $2600 for the Gundam Strix.


For now, yes. I think Galax is still working on one.
Also, EVGA is not really binning chips this year, and the KINGPIN no longer gets truly binned chips. Look how many KINGPINs they have produced.


----------



## alitayyab

Fufuuu said:


> Hey,
> 
> I changed the bios on my Zotac RTX 3090 to the KFA2 one. Right after flashing, I saw stuttering in all games without any change to power, core voltage, or frequency. Has anyone noticed behavior like that?
> 
> Thanks.


Uninstall and reinstall the Nvidia drivers (fresh install), including any ancillary stuff like Afterburner. Use DDU if a simple reinstall does not work for you.

Other things to consider:
1. Make sure your PSU is able to deliver enough power.
2. Run PCIe power cables from separate rails if at all possible. Don't use splitters to power your card.
3. If nothing else works, see if lowering the power limit in Afterburner helps.


----------



## Thanh Nguyen

Anyone here know how to fix a shorted 3090? I get no signal to the monitor. I tried to RMA it, but they will void it because the card is shunted. And for #7325: I have the Aquacomputer block for the reference card. It performs the same as my Alphacool block.


----------



## pat182

JonnyV75 said:


> I'm not expecting mine to be. Also in Canada, buying a KPE direct from EVGA is the same price as a FTW3 bought locally.


which province and what the total price with ups fees and all ?


----------



## Falkentyne

Thanh Nguyen said:


> Anyone here know how to fix a shorted 3090? I get no signal to the monitor. I tried to RMA it, but they will void it because the card is shunted. And for #7325: I have the Aquacomputer block for the reference card. It performs the same as my Alphacool block.


What's wrong with it?
Didn't your power supply blow up or something before? You haven't posted on the forums in a while. Did your card work after the PSU died? You were watercooling the whole time, right?
Did you mess with the shunts again?


----------



## HyperMatrix

xrb936 said:


> I was talking about the overclocking results.
> Since EVGA made the worst cards in their history this gen (I got 2 dead 3090 FTW3 Ultras in the last month, hundreds of people are having the same issue, and EVGA still keeps silent), I doubt the KPE is worth another try. I'm still waiting on my KPE, but I have my doubts.


FTW3 and KingPin are completely different cards with completely different components. You can't compare the 2.


----------



## Thanh Nguyen

Falkentyne said:


> What's wrong with it?
> Didn't your power supply blow up or something before? You haven't posted on forums for awhile. Did your card work after the PSU died? You were using watercooling the whole time right?
> Did you mess with the shunts again?


The card worked normally, but after I switched back to the Alphacool block (because I want to return the Aqua block), I can't boot up. With a new PSU I'm able to boot, but the card outputs no signal. I tried both blocks and nothing changed. I tried heating up the entire board with a hair dryer and it didn't help.


----------



## ThrashZone

Thanh Nguyen said:


> Anyone here know how to fix a shorted 3090? I get no signal to the monitor. Tried to RMA it, but they'll void the warranty because the card is shunt modded. And for # 7325, I have an Aquacomputer block for the reference card. It performs the same as my Alphacool block.


Hi,
You didn't solder it, so after cleanup they shouldn't be able to tell it was shunt modded.


----------



## Thanh Nguyen

ThrashZone said:


> Hi,
> You didn't solder it, so after cleanup they shouldn't be able to tell it was shunt modded.


I soldered it.


----------



## HyperMatrix

Guess we'll see how the card does compared to my FTW3 soon. Maybe tomorrow if I'm lucky with shipping.


----------



## ThrashZone

Thanh Nguyen said:


> I soldered it.


Hi,
Missed the op on this thread, damn.
Heck, I've done similar with a tiny bit of liquid metal, and hot glue works perfectly well.


----------



## JonnyV75

HyperMatrix said:


> Guess we'll see how the card does compared to my FTW3 soon. Maybe tomorrow if I'm lucky with shipping.
> View attachment 2469871


You should get it tomorrow. Mine shipped yesterday and is on the truck for delivery today.


----------



## HyperMatrix

JonnyV75 said:


> You should get it tomorrow. Mine shipped yesterday and is on the truck for delivery today.


The last card from them took 2 days, and UPS didn't even send it out for delivery until a day after, so 3 days after buying it. That's why I'm skeptical. Wish I could choose FedEx for shipping. UPS is garbage in Canada, at least out west.


----------



## Dreams-Visions

heya folks, does the KPE bios run fine on the XTrio?


----------



## Fufuuu

alitayyab said:


> Uninstall and reinstall the NVIDIA drivers (fresh install), including any ancillary stuff like Afterburner. Use DDU if a simple reinstall doesn't work for you.
> 
> Other things to consider:
> 1. Make sure your PSU is able to deliver enough power
> 2. Run PCIe power cables from separate rails if at all possible. Don't use splitters to power your card
> 3. If nothing else works, see if lowering the power limit in Afterburner helps


Hey,

Thank you for your help. My power supply is an HX1200i. I have the PCIe power cables plugged into separate rails. I'll try uninstalling the drivers with DDU and reinstalling them to see if I still have the issue.


----------



## megahmad

megahmad said:


> Guys I am not home to check but there's a new bios posted for the ASUS 3090 TUF OC on ASUS site, can anyone see if they increased the power limit after updating?
> 
> 
> 
> 
> __
> 
> 
> 
> 
> 
> ASUS TUF Gaming GeForce RTX 3090 OC Edition 24GB GDDR6X | Graphics Card | ASUS Global
> 
> 
> The ASUS TUF Gaming GeForce RTX 3090 OC Edition 24GB GDDR6X builds durability on NVIDIA Ampere Architecture, by featuring Axial-tech fan design, 0dB technology, Dual ball fan bearings, Auto-Extreme Technology, MaxContact, Military-grade Certification, and more.
> 
> 
> 
> 
> www.asus.com
> 
> 
> 
> 
> 
> Thanks.


nvm still 375w. bleh.


----------



## DrunknFoo

Thanh Nguyen said:


> Anyone here know how to fix a shorted 3090? I get no signal to the monitor. Tried to RMA it, but they'll void the warranty because the card is shunt modded. And for # 7325, I have an Aquacomputer block for the reference card. It performs the same as my Alphacool block.




The other day I worked on and fixed this card for someone. He hacked it up badly and made a mess... literally blew all 3 fuses and bridged the pads between shunts 2-3... You can't simply short it and assume it will work; the resistance across all 3 fuses must be near-equal, otherwise you get a black screen, either immediately or as soon as the BIOS posts.


----------



## pat182

DrunknFoo said:


> View attachment 2469892
> 
> 
> The other day I worked on and fixed this card for someone. He hacked it up badly and made a mess... literally blew all 3 fuses and bridged the pads between shunts 2-3... You can't simply short it and assume it will work; the resistance across all 3 fuses must be near-equal, otherwise you get a black screen, either immediately or as soon as the BIOS posts.


Louis Rossmann would not be proud


----------



## DrunknFoo

pat182 said:


> louis rossman would not be proud


Lol, that is not my work, that was the dude's attempt at a fix...

All that flux got scraped off and cleaned when I returned it... He was lost, assuming simply bridging would work.


----------



## HyperMatrix

DrunknFoo said:


> Lol that is not my work, that was the dudes attempt at a fix...
> 
> All that flux got scraped off n cleaned when i returned it... He was lost assuming simply bridging would work


Apparently there should be a license required for operating a soldering iron... beyond the idiotic thought process behind bridging components, the actual solder job itself is frightening. 

I don't know why people with zero soldering experience think it's a good idea to start learning by practicing on such an expensive video card.


----------



## Falkentyne

DrunknFoo said:


> View attachment 2469892
> 
> 
> The other day I worked on and fixed this card for someone. He hacked it up badly and made a mess... literally blew all 3 fuses and bridged the pads between shunts 2-3... You can't simply short it and assume it will work; the resistance across all 3 fuses must be near-equal, otherwise you get a black screen, either immediately or as soon as the BIOS posts.


What the hell did I just witness here? Yikes! What did he do? Dab solder on top of the shunt and presto? How did you fix this abomination?


----------



## DrunknFoo

HyperMatrix said:


> Apparently there should be a license required for operating a soldering iron....other than the idiotic thought process behind bridging components...the actual solder job itself is frightening.
> 
> I don't know why people with 0 soldering experience think it's a good idea to start learning by practicing on such an expensive video card.


Monkey see monkey do?



Falkentyne said:


> What the hell did I just witness here? Yikes! What did he do? Dab solder on top of the shunt and presto? How did you fix this abomination?


Cleaned up the dried flux.
Removed the broken remaining fuses he left on the PCB (apparently he couldn't get them off the pads, so he tried to pry them off).
Resoldered the pads between the two shunts.
Bridged the 3 resistor pads with solder.
3 attempts, going from a thick bridge down to a thin line (something like a 24 AWG wire's worth of solder).


----------



## HyperMatrix

DrunknFoo said:


> Removed the broken remaining resistors he left on the pcb (*apparently he couldnt get em off the pads so he tried to pry them off*)
> 
> Bridged the 3 resistor pads with solder.


He is so lucky he didn't rip the trace off the board. Also are you saying you fully bridged the resistors on the card?


----------



## DrunknFoo

HyperMatrix said:


> He is so lucky he didn't rip the trace off the board. Also are you saying you fully bridged the resistors on the card?



Yup, err, sorry, misread... We're talking about fuses here


----------



## Zurv

finally i can play cyberpunk without all the fan noise!


----------



## HyperMatrix

DrunknFoo said:


> Yup


I wouldn't recommend that unless you know something I don't. I've tried that in the past on my Pascal card, and it's not just a matter of "removing the power limit entirely" by bridging it. It messes up the entire power-source balancing; it caused overheating on my motherboard and system instability/freezes/crashes/reboots.


----------



## cletus-cassidy

Sorry if this has been asked before. Do any of the other EVGA compatible water blocks fit the Kingpin hybrid? Or do we need to use the EVGA Hydro Copper?


----------



## DrunknFoo

Thing is, he contacted me prior to this attempt. I offered to do it for $140 (CAD); guess he figured he'd tackle it himself... I ended up charging him $450 (CAD) for the repair... Was going to offer to buy the card for half the MSRP... Lol


----------



## DrunknFoo

HyperMatrix said:


> I wouldn't recommend that unless you know something I don't. I've tried that in the past on my Pascal card and it's not just a matter of "removing the power limit entirely" by bridging it. It messes up the entire power source balancing and caused overheating on my motherboard and caused system instability/freeze/crash/reboots.


What? One of us is mixing up fuse and resistor... I misread your question, my bad


----------



## DrunknFoo

All this talk about shunts, mixing up fuses n resistors lol


----------



## DrunknFoo

Zurv said:


> finally i can play cyberpunk without all the fan noise!


Nice, i gave up trying to get one from them

Do post your power draw, rad setup, ambient and temps for this block please


----------



## HyperMatrix

DrunknFoo said:


> What? You are mistaking fuse and resistor... I misread your question
> All this talk about shunts, mixing up fuses n resistors lol


You said "Bridged the 3 resistor pads with solder", so I asked if you bridged the resistors. Fuses are different. I still wouldn't run a card with bypassed fuses, because at that point, if anything happens, you end up with a possibly irreparable card.


----------



## dante`afk

xrb936 said:


> Better than EKWB one?


Aquacomputer has an actively cooled backplate, plus German manufacturing >>>

EKWB is garbage tier


----------



## Thanh Nguyen

DrunknFoo said:


> View attachment 2469892
> 
> 
> The other day i worked and fixed this card for someone, he hacked it up badly and made a mess... literally blew all 3 fuses, and bridged the pads between shunt 2-3... Cant simply short and assume it will work, resistence across all 3 fuses must be near or be equal, otherwise black screen, or as soon as bios post black screen.


It's hard to say, but my card looks like that. What can I do?


----------



## xrb936

Anyone want the EK block + backplate for a Strix 3080/90 in Canada? I just got a brand new set, but I think I will just stay with the KPE. If anyone wants it, please PM me. Otherwise I will just send them back.


----------



## edsontajra

Asus tuf 3090 arrived. Score:









I scored 13 513 in Port Royal


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Is it a good score, people? Should I flash the BIOS to test? Which one is the best? Help please!!


----------



## markuaw1

dangerSK said:


> disagree, Galax OC Lab are best











My Strix with an EK block is loving the 520W Kingpin bios, thank you EVGA: https://www.3dmark.com/pr/648784


----------



## defiledge

Honestly, it is seriously annoying how Aquacomputer and Corsair don't list the dimensions for their new blocks. I don't want to buy one and find out it won't fit in my case, like I did with my Bykski. On that note, I have a brand new Bykski TUF block that I'm selling for what I paid plus shipping, if anybody wants it.


----------



## defiledge

Zurv said:


> finally i can play cyberpunk without all the fan noise!
> 
> View attachment 2469904
> 
> View attachment 2469905
> 
> View attachment 2469906


that block is beautiful, could you tell me the brand and what the height of it is including the inlet/outlet ports?


----------



## warbucks

defiledge said:


> that block is beautiful, could you tell me the brand and what the height of it is including the inlet/outlet ports?


It's the OptimusPC FTW3 block. You can see the block dimensions here:









Absolute GPU Block - FTW3 3080, 3080 Ti, 3090


Optimus Absolute GPU Block designed for the EVGA FTW3 3080/3080Ti/3090 The Absolute block is our all-out performance design, created to achieve maximum cooling on all areas of the new NVIDIA RTX 3080 and 3090 FTW3 cards from EVGA. The FTW3 GPUs pull huge amounts of power and require top cooling...




optimuspc.com


----------



## Thanh Nguyen

warbucks said:


> It's the OptimusPC FTW3 block. You can see the block dimensions here:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Absolute GPU Block - FTW3 3080, 3080 Ti, 3090
> 
> 
> Optimus Absolute GPU Block designed for the EVGA FTW3 3080/3080Ti/3090 The Absolute block is our all-out performance design, created to achieve maximum cooling on all areas of the new NVIDIA RTX 3080 and 3090 FTW3 cards from EVGA. The FTW3 GPUs pull huge amounts of power and require top cooling...
> 
> 
> 
> 
> optimuspc.com


How does it perform?


----------



## warbucks

Thanh Nguyen said:


> How does it perform?


I wouldn't know, as I don't own one. I was just answering the question.


----------



## kot0005

Kingpin PCB, finally

How to DEEP CLEAN your PC Parts! - YouTube


----------



## Dreams-Visions

dante`afk said:


> EKWB is garbage tier


what brands come recommended these days, then? That are readily available in the United States?


----------



## GQNerd

HyperMatrix said:


> But there is literally 0 advantage for the Strix. The KingPin is a better designed card in every way.... But it means the Strix has 0 advantage over it...There is one reason and one reason only to get a Strix over a KPE. And that’s the presence of a second HDMI port on the Strix.


False.

Extra HDMI and better VRM design/power delivery..

Grab an EVC2S and it can beat the KPE..

Matter of fact, the top 2 HOF PR scores for single GPU are on a Strix.

You are correct that KPE is better out of the box, and worth the extra $100 though.


----------



## HyperMatrix

Miguelios said:


> better VRM design/power delivery..
> 
> Grab an EVC2S and it can beat the KPE..


A lot of people here would like for you to explain what that means. Please share.


----------



## bmgjet

Kingpin VRM: 23 stages plus 8 extra filter caps (70A parts)
Strix VRM: 22 stages (50A parts)

The Kingpin also has a lot more isolation of parts; it's the best 3090 PCB you can get at the moment.
The Classified software lets you do the same as an EVC2 can, without having to touch the hardware.
It also has DIP switches for setting voltage offsets on the back of the card.


The extra HDMI port and availability are what the Strix has going for it.


----------



## mirkendargen

bmgjet said:


> Kingpin VRM: 23 stages plus 8 extra filter caps (70A parts)
> Strix VRM: 22 stages (50A parts)
> 
> The Kingpin also has a lot more isolation of parts; it's the best 3090 PCB you can get at the moment.
> The Classified software lets you do the same as an EVC2 can, without having to touch the hardware.
> It also has DIP switches for setting voltage offsets on the back of the card.
> 
> 
> The extra HDMI port and availability are what the Strix has going for it.


Having proper waterblocks available is another plus for Strix.


----------



## bmgjet

mirkendargen said:


> Having proper waterblocks available is another plus for Strix.


Kingpin comes with either AIO or a full cover waterblock.


----------



## GQNerd

HyperMatrix said:


> A lot of people here would like for you to explain what that means. Please share.





bmgjet said:


> Kingpin VRM: 23 stages plus 8 extra filter caps (70A parts)
> Strix VRM: 22 stages (50A parts)


I stand corrected... thought they were using the same VRMs as the FTW3


----------



## mirkendargen

bmgjet said:


> Kingpin comes with either AIO or a full cover waterblock.
> View attachment 2469958


Yeah, the AIO is nice if you don't have a loop already, but who the hell is getting a Kingpin without a loop?

The EVGA Hydro Copper blocks haven't been good, and I doubt that's changing.


----------



## mardon

Hi experts.
I'm loving my 3090 so far, apart from crashing into the power limit all the time. The chip seems decent: 2000 MHz @ 0.875 V (in Cyberpunk).

I've got an EKWB block cooled by 2x 240mm radiators in an SFF Ncase M1.
I'm sure I read somewhere that there are incompatibility issues between the shunt mods and the EK block? Is that the case? There's a decent place near me that does PCB repairs, so I was considering having them swap the resistors completely. What resistance would I need to go without piggybacking?

Once done, I take it I wouldn't need to max out the power slider, as 390W + shunt would be too much for 2x 8-pin?
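For the replace-outright route being asked about here, the arithmetic is simple. A hypothetical sketch (my own helper, not part of bmgjet's calculator, and assuming the commonly cited 5 mOhm stock shunt value; check your own card):

```python
# Hypothetical helper for the "replace outright" shunt mod (not bmgjet's tool).
# The BIOS computes power from the voltage drop across a shunt of assumed
# resistance, so swapping in a smaller shunt under-reports power, and the
# real limit rises by the same ratio: real = bios_limit * R_stock / R_new.

def real_power_limit(bios_limit_w: float, r_stock_mohm: float, r_new_mohm: float) -> float:
    """Real power ceiling after replacing a stock shunt with a new value."""
    return bios_limit_w * r_stock_mohm / r_new_mohm

# Example: a 390 W BIOS with assumed 5 mOhm stock shunts replaced by 3 mOhm parts
print(real_power_limit(390, 5.0, 3.0))  # 650.0
```

The same ratio applies per rail, which is why the advice later in the thread is to mod all six shunts: an unmodded rail still reports true power and trips the limit first.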


----------



## bmgjet

mardon said:


> Hi experts.
> I'm loving my 3090 so far. Apart from crashing into the power limit all the time. Chip seems decent 2000mhz @ 0.875v (in Cyberpunk)
> ....


Would help if you said what model card you have.
You can use the tool I made to calculate which resistors to pick.








GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.


Work out what shunt values to use easily. Contribute to bmgjet/ShutMod-Calculator development by creating an account on GitHub.




github.com





Has a checkbox for replaced instead of stacked.


----------



## mardon

bmgjet said:


> Would help if you said what model card you have.
> You can use the tool I made to calculate which resistors to pick.
> 
> 
> 
> 
> 
> 
> 
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
> 
> 
> Work out what shunt values to use easily. Contribute to bmgjet/ShutMod-Calculator development by creating an account on GitHub.
> 
> 
> 
> 
> github.com
> 
> 
> 
> 
> 
> Has a checkbox for replaced instead of stacked.


Wow I didn't even know this existed! 

It's a reference KFA2 board.


----------



## mardon

bmgjet said:


> Would help if you said what model card you have.
> You can use the tool I made to calculate which resistors to pick.
> 
> 
> 
> 
> 
> 
> 
> 
> GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
> 
> 
> Work out what shunt values to use easily. Contribute to bmgjet/ShutMod-Calculator development by creating an account on GitHub.
> 
> 
> 
> 
> github.com
> 
> 
> 
> 
> 
> Has a checkbox for replaced instead of stacked.


OK, so looking at that, if I'm replacing them it's either going to be 3 or 4 mOhm; what do people commonly go for?

Also, should I shunt the PCIe slot? Is that common?

Also here is my current port royal:








I scored 14 190 in Port Royal


Intel Core i9-9900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32672 MB, 64-bit Windows 10




www.3dmark.com





Edit: found some more info. Think I'll leave the PCIe slot alone and go 3 mOhm on the 8-pins.


----------



## marti69

hi, anyone have a picture of the shunt mod on an RTX 3090 Gaming X Trio?


----------



## defiledge

mardon said:


> OK, so looking at that, if I'm replacing them it's either going to be 3 or 4 mOhm; what do people commonly go for?
> 
> Also should I shunt the PCI? Is that common?
> 
> Also here is my current port royal:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 190 in Port Royal
> 
> 
> Intel Core i9-9900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32672 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Edit: found some more info. Think I'll leave the PCIe slot alone and go 3 mOhm on the 8-pins.


Without a PCIe slot shunt mod, the highest power draw you can get from a 390W bios is about 450W. From my own experience this won't be enough for 0.925V+ without hitting the power limit. After I stacked a 15 mOhm on my PCIe slot shunt, I get around 570W max.


----------



## changboy

dante`afk said:


> Aquacomputer has an active cooled backplate, also German manufacturing >>>
> 
> EKWB is garbage tier


EKWB is not a garbage company; they make a lot of nice products in many areas, for many tastes.
It's like saying German cars are the only decent cars to buy, lol.


----------



## Nizzen

dante`afk said:


> Aquacomputer has an active cooled backplate, also German manufacturing >>>
> 
> EKWB is garbage tier


Does that "active" backplate actually work better than any other backplate, like Bykski, EK, and Alphacool?


----------



## changboy

No doubt it will work; that doesn't mean every other waterblock is garbage.
But the thing is: will you get better performance out of this? How much will this waterblock cost?
Usually you get what you pay for. I don't think this waterblock will beat the Optimus FTW3 waterblock in quality and performance.
I didn't buy the Optimus one because I found it too expensive, but the quality is just insane. I pre-ordered the EK one because the price was lower, and all in all the difference won't be more than 5°C, that's what I think.

Also, going from air to water, don't expect miracles, just better boost over long gaming sessions.


----------



## mardon

defiledge said:


> without a pcie slot shunt mod the highest power draw you can get from a 390W bios is about 450W. From my own experience this wont be enough for 9.25V + without hitting the powerlimit. After I stacked a 15ohm on my pcie slot I get around 570W max.


I'll be replacing rather than stacking. 
Going off the calculator shared above, it said using 3 mOhm on all but the PCIe will pull around 500W.

Am I missing something? 

Just spoke to the bloke locally and he said it's definitely something he could do and it won't cost much. I prefer this idea. The warranty on my card is only 2 years anyway.


----------



## bmgjet

mardon said:


> I'll be replacing rather than stacking.
> Going off the calculator shared above it said using 3ohm on all but the PCIe will pull around 500w.
> 
> Am I missing something?
> 
> Just spoke to the bloke locally and he said it's definitely something he could do and won't cost much. I prefer this idea. Warranty on my card is only 2 years anyway.


The VRM load-balances the inputs, so it will only pull a bit extra from the plugs before it thinks there's something wrong with the slot power draw and hits a power limit.
You need to mod all of the shunts so the balance of everything is maintained, which will be 6 shunts on your card.


----------



## mardon

bmgjet said:


> The VRM load balances the inputs. So it will only pull a bit more extra from the plugs until it thinks theres something wrong with the slot power draw and hits a power limit.
> You need to mod all of the shunts so the balance of everything is maintained. Which will be 6 shunts on your card.


Great stuff, thanks for that. I can then limit the power down if needed with the power slider.

Is it the same resistance for all of them? So I'm replacing all of them, including the PCIe one, with 6x 3 mOhm?


----------



## Esenel

dante`afk said:


> Aquacomputer has an active cooled backplate, also German manufacturing >>>
> 
> EKWB is garbage tier


Nah. I have the EK one here. It is doing fine on the Strix 3090.
And the RAM on the backside is totally fine:
5-10°C warmer than the GPU.
No need for an active backplate.

I will get the AquaComputer one and compare them, though.
But I doubt it will do much.


----------



## bmgjet

mardon said:


> Great stuff thanks for that. I can then limit the power down if I need with the power slider.
> 
> Is it the same ohm rating for all of them? So I'm replacing all of them Inc the pcie one with 6x 3ohm?


You will have enough room with them stacked.
Go with 15 mOhm stacked on top if you want a safe increase. 
That will give you about 450W at the default power limit, and you can turn it up to 510W on the 111% slider if you're on the 350/390W bios.
Or, if you want to push it, stack 10 mOhm, which is on the limit of what's safe for decent motherboards at default, and risks the slot if you push the slider higher.


----------



## Edgenier

Shunted my TUF 3090 with 4 mOhm. I'm pushing 2000 MHz at ~0.94 V and the frequency will be solid; however, Cyberpunk will only pull 220-230W (per GPU-Z), and if the game tries to pull more than 240W reported (around 500W real), GPU-Z shows "PWR" and I get throttling??? It's like there's another power limit around 500W despite me shunting? Huh?


----------



## changboy

I have 8 mOhm for my 3090 Ultra, but I've never done this, and after the pics from the guy who messed up his PCB I'm scared to do it now, lol.
Not sure I will do it when I install my EK block.


----------



## Rbk_3

I am having issues with G-Sync on my monitor (LG 27GL850) since I got my 3090, and I am trying to figure out if it is my card, my monitor, or just a driver issue. My monitor's Hz is not matching my FPS even when locked to a super stable framerate. 

G-Sync just doesn't feel smooth. It felt fine with my 2070S on this monitor. If I cap my framerate, even at a rock-solid stable FPS, my monitor's OSD will show the Hz fluctuating wildly when it should be pinned to around the capped framerate. This happens with any framerate cap: in-game, RTSS, NVCP, etc. If I cap anything above 135fps, my monitor keeps bouncing up to 144Hz. Here are a couple of videos I have recorded.



RDR2 locked to 65fps





Warzone locked to 100fps





Warzone locked to 140fps






I also have an LG C9 and it doesn't feel right either, but I can't confirm the display Hz as it doesn't have an OSD.
If anyone has this monitor, or any G-Sync compatible one, could you please test this for me and tell me how it behaves for you? I don't know if this is what is causing my issues, but like I said, it felt fine with the 2070S.


----------



## Edge0fsanity

The Kingpin LN2 bios is not functioning correctly on my Kingpin; it feels a lot like the bugged 500W bios on my FTW3. Max board power draw is roughly 460W before power throttling starts. I have to undervolt to 1075 mV to maintain a consistent clock in PR, which is holding back my score. Stuck at 14800 due to the power limit. 

I'm not the only one with the problem: RTX 3090 K|NGP|N Power Draw Issues - EVGA Forums


----------



## Edgenier

Can I use the Strix bios on a TUF if it's shunted? I need a bios that keeps all my ports working; I think the Strix one might be the only option?


----------



## mardon

bmgjet said:


> You will have enough room with them stacked.
> Go with 15 mOhm stacked on it if you want a safe increase.
> That will give you about 450W on default power limit and you can turn it up to 510W on 111% slider if your on the 350/390W bios.
> Or if you want to push it stack 10 mOhms. Which will be on the limit of safe for decent motherboards at default and will be risking the slot pushing the slider higher.


Is there any benefit to stacking versus replacing them entirely (excluding warranty)? I'm guessing they end up at the same power limit.


----------



## bmgjet

mardon said:


> Is there any benefit to stacking or replacing them entirely (excluding warranty)? I'm guessing they end up at the same point power limit wise.


No difference.
Stacking, I'd say, is safer, since it's easier to add something on than to remove it when soldering.
Removing them is a hot-air job, whereas adding them you can do with just a fine-tip soldering iron.
Or you can always stack them using a non-soldered method.


----------



## Edgenier

bmgjet said:


> No difference.
> Stacking Id say is safer since its easier to add something on then remove it when soldering.
> Its a hot air job to remove them. Where adding them you can just do it with a fine tip soldering iron.
> Or you can always stack them using a non-soldered method.


Well, stacking them gives you 2 resistors in parallel, which is not the same resistance as a lone new resistor. I think 5 stacked on 5 = 2.5, while replacing a 5 with a 10 = 10, so you may end up with a lower power limit than stock?


----------



## bwana

Edge0fsanity said:


> Got my kingpin today. Not impressed, will be keeping my ftw3 and putting it back in my pc once my optimus block ships.


What?! Can you elaborate on how the FTW3 > Kingpin?


----------



## bmgjet

Edgenier said:


> Well stacking them you have 2 resistors in parallel, which is not the same resistance as the lone new resistor. I think 5 + 5 = 2.5, and replacing a 5 with a 10 = 10, so may end up with a lower power limit than stock?


Look up the electrical law for resistors in parallel. That's why I made the shunt mod calculator, since people can't work out the maths, lol.

NewResistance = (R1 × R2) / (R1 + R2)

So that's:

5 × 10 = 50
5 + 10 = 15
50 / 15 = 3.33 mOhm as the new resistance if you stacked a 10 on a 5.
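The law quoted above can be sanity-checked in a couple of lines (plain Python of my own, not part of the ShutMod-Calculator repo):

```python
# Two stacked shunts are just resistors in parallel; quick check of the
# numbers quoted in this thread.

def stacked(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two stacked (parallel) shunts, in mOhm."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

print(round(stacked(5, 10), 2))  # 3.33, a 10 stacked on a 5, as above
print(stacked(5, 5))             # 2.5, the 5-on-5 example
```

Stacking can only lower the effective resistance, so the card under-reads current and the real power limit rises; replacing a shunt with a larger value raises resistance and lowers the limit instead.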


----------



## Edge0fsanity

bwana said:


> What?! Can u elaborate on how ftw3>kingpin?


I followed up on that post afterwards. I had something running in the background that was limiting the OC to 2100 MHz. Turned it off; the card is topping out at 2175 MHz in PR, but I'm stuck at 1075 mV due to a power limit problem. This Kingpin is much better than my FTW3, but I won't know for sure until I have the FTW3 blocked and in a loop.


----------



## Edgenier

bmgjet said:


> Look up the electrical law on resistors in parrell. Thats why I made the shunt mod calculator since people cant work out the maths lol.
> 
> NewResistance=(R1×R2)/(R1+R2)
> 
> So thats.
> 
> 5x10 = 50
> 5+10 =15
> 50/15=3.33 mOhm as the new resistance if you stacked a 10 on a 5.


I have used your calculator, thanks! And yes, I vaguely remember this much from EE classes. So yes, replacing a 5 with a 15 = way more resistance than stacking a 5 and a 15, which was my point; your power limit would actually be worse than stock.


----------



## Zurv

Sorry if I missed it.
How have people fared with flashing the Kingpin bios to other cards?
Does it help with voltage limits? I'm already using a shunt-modded FTW3 and my limit is "voltage".

Also, really happy with the Optimus block. Even with the tons of power going into it, the card runs really cool.
Under 40°C in Cyberpunk. (Note: this was a setup that back in the day was built for quad SLI, and it has a 420 and a 360 rad.)



(there is another 360 hiding in the back, plus 2 D5 pumps)


----------



## Falkentyne

Edgenier said:


> Shunted TUF 3090 with 4mohm, I’m pushing 2000mhz at ~.94 however cyberpunk will only pull 220-230w (gpu-z) and frequency will be solid, but if the game tries to pull more than 240w (around 500w), gpu-z shows “PWR” and i get throttling??? It’s like there’s another power limit around 500w despite me shunting? Huh?


Please post your GPU-Z screenshot with all wattage values on "Maximum" when you get this power limit, please. Thank you.


----------



## WayWayUp

I use the KPE bios on my shunted FTW3. Since it didn't require a firmware update, everything is working well. The only issue is that one of the fans is limited to 2000 RPM, while the other 2 go to 3000 RPM.

Just waiting on the optimus block to arrive


Zurv said:


> Sorry if i missed it.
> How have people faired with flashing the kingpin bios to other cards.
> Help with voltage limits? I'm already shunt using a shunt mod'd FTW3 and my limit is "voltage"
> 
> also, really happy with the optimus block. Even with the tons of power going into it that card is really cool.
> Under 40C in Cyberpunk. (note, this was a setup that in the day was setup for quad SLI.. and has a 420 and 360 rad.)


----------



## edsontajra

edsontajra said:


> Asus tuf 3090 arrived. Score:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 13 513 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Its a good score people? Should i flash the biox to test? What is the best? Help please!!


Can someone help please?


----------



## changboy

Esenel said:


> Nah. I have the EK one here. It is doing fine on the Strix 3090.
> And the RAM on the backside is totally fine.
> 5-10°C warmer than GPU.
> No need for active backplate.
> 
> Although I will get the AquaComputer one and will compare it.
> But I doubt it will do much.


Did you check the temps on the memory on the back side of the PCB, under the backplate? 
Or do you just mean the memory around the GPU chip, since it has memory on both sides?


----------



## Esenel

changboy said:


> Did you check the temp on the back side memory of the pcb where are the backplate ?
> Or you just mean those around the gpu chip coz have memory on both side.


I have Temp sensors there.


----------



## kx11

Undervolting is highly recommended.




http://imgur.com/WGABKac



My regular OC used to hit 78C after 2 hrs; my GPU is the ROG O24G.


----------



## Edgenier

Falkentyne said:


> Please post your GPU-Z screenshot with all wattage values on "Maximum" when you hit this power limit. Thank you.


Here it is; you can tell there's a wall at 240 W, which coincides with "PWR" in GPU-Z. Also, did I read that a 3-pin (Strix) BIOS could work now that I'm shunted? If so, do you think all the DisplayPorts would work? (As you can see, I need all of them for three monitors + a VR headset.) Thanks!


----------



## changboy

Esenel said:


> I have Temp sensors there.


OK, then the EK backplate is good. Maybe it will be the same result for my FTW3 Ultra, so I don't need to think about modding the backplate. I was thinking of putting some small heatsinks on it, but from what I see, it's unnecessary.
I have some temp sensors that came with my mobo and never used them lol.


----------



## Zurv

WayWayUp said:


> I use the KPE BIOS on my shunted FTW3. Since it didn't require a firmware update, everything is working well. The only issue is that one of the fans is limited to 2000 RPM while the other two go to 3000 RPM.
> 
> Just waiting on the optimus block to arrive


Any improvement over the FTW3 BIOS?
Or can you not tell because you don't have your waterblock yet?


----------



## WayWayUp

Increased power consumption, but I can't take advantage of it yet since I'm on air.


----------



## thegr8anand

I got a 3090 FE yesterday. What overclocks can be expected from it? Is there any benefit to increasing core voltage in Afterburner?

Also, is there an easy guide to undervolting?


----------



## Falkentyne

Edgenier said:


> Here it is; you can tell there's a wall at 240 W, which coincides with "PWR" in GPU-Z. Also, did I read that a 3-pin (Strix) BIOS could work now that I'm shunted? If so, do you think all the DisplayPorts would work? (As you can see, I need all of them for three monitors + a VR headset.) Thanks!


Please, next time, click "Maximum" in GPU-Z for each power rail. I did ask you to do that and you didn't; you have it on "Current", not "Maximum".
Anyway, 75 W PCIe slot power is limiting you. You didn't shunt the PCIe slot?
Also, it looks like you didn't shunt GPU chip power either. GPU chip power should be half of what's shown; your GPU chip power is almost higher than your TDP (board power).


----------



## Edgenier

Falkentyne said:


> Please, next time, click "Maximum" in GPU-Z for each power rail. I did ask you to do that and you didn't; you have it on "Current", not "Maximum".
> Anyway, 75 W PCIe slot power is limiting you. You didn't shunt the PCIe slot?
> Also, it looks like you didn't shunt GPU chip power either. GPU chip power should be half of what's shown; your GPU chip power is almost higher than your TDP (board power).


Oh, you must mean "Show Highest Reading"; that's useful, thanks, I never knew about it, so I just look at my HWiNFO64 graphs. I did not shunt the PCIe slot or anything other than the two shunts in the picture I posted. Which one is the "GPU chip power" resistor, and what's the benefit of shunting it? I've never heard of this resistor from what I've read. Does it matter that I didn't shunt it if PCIe is what's limiting me?

So it looks like with just the two shunts you basically only extend the limit to around 500 W, even with 4 mOhm resistors (theoretical limit ~700 W+). That's interesting. Do you see any benefit in trying a Strix BIOS (I know it's a 3-pin BIOS, but I read it could work if shunted)?
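For anyone following along, the reading scale-down from a stacked shunt works out like this. The 5 mOhm stock value and the 350 W reported limit below are illustrative assumptions for the sketch, not measured values for this card:

```python
# Stacked shunt: the card still computes power as if the stock shunt
# were present, so every reading shrinks by new_r / stock_r.
stock, stacked = 5.0, 4.0  # mOhm; the 5 mOhm stock value is an assumption
new_r = (stock * stacked) / (stock + stacked)  # parallel combination, ~2.22 mOhm
scale = new_r / stock                          # readings are ~0.444x of true draw
limit_reported = 350.0                         # W; illustrative vBIOS power limit
print(limit_reported * stock / new_r)          # true-draw ceiling, ~787.5 W
```

So a 4 mOhm stack on 5 mOhm stock should in theory more than double the ceiling; if the wall shows up well below that, something other than the modded rails is throttling, which is the point being made above.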


----------



## BigMack70

thegr8anand said:


> I got a 3090 FE yesterday. What overclocks can be expected from it? Is there any benefit to increasing core voltage in Afterburner?
> 
> Also, is there an easy guide to undervolting?


Increasing voltage on the FE is basically useless. 

As for OC, I wouldn't get your hopes up. Mine can do +150 core +800 mem in a lot of games/tests but only +60 / +500 is 100% stable.

And outside of benchmarks, the performance uplift is pretty lame. For 14% more power and the OC over stock I get 2-5% higher FPS depending on the title.

Everything I've seen suggests that to get a meaningful overclock that yields +10% performance or better, you need to be on water with a 600W or higher BIOS, which is not available as far as I know for the FE.


----------



## Falkentyne

Edgenier said:


> Oh, you must mean "Show Highest Reading"; that's useful, thanks, I never knew about it, so I just look at my HWiNFO64 graphs. I did not shunt the PCIe slot or anything other than the two shunts in the picture I posted. Which one is the "GPU chip power" resistor, and what's the benefit of shunting it? I've never heard of this resistor from what I've read. Does it matter that I didn't shunt it if PCIe is what's limiting me?
> 
> So it looks like with just the two shunts you basically only extend the limit to around 500 W, even with 4 mOhm resistors (theoretical limit ~700 W+). That's interesting. Do you see any benefit in trying a Strix BIOS (I know it's a 3-pin BIOS, but I read it could work if shunted)?


You should shunt both. The PCIe slot shunt is usually somewhere close to the PCIe slot; it may be on the opposite side of the board (backplate side). Chip power, I'm not sure exactly; maybe it's one of the shunts you didn't do. On FE boards, "SRC" power, chip power, and PCIe slot are all on the backplate side; I don't know about yours.
How many watts you gain from NOT shunting the PCIe slot depends on what the slot was pulling previously at the max power slider. At the absolute minimum, you should throw a 10 mOhm shunt on the PCIe slot.

A few pics are here:

GitHub - bmgjet/ShutMod-Calculator: Work out what shunt values to use easily.
github.com
Shunting the PCIe slot will give you at least "some" more power, because the next thing that will throttle you is chip power. It all depends on what the vBIOS limit is for it; on FE cards it's 280 W. However, if chip power exceeds TDP, that might also cause a "Normalized" TDP throttle (check HWiNFO64 for this; GPU-Z doesn't report normalized TDP).

I would mod all of the shunts you skipped the first time. Even 10 mOhm will work.


----------



## Falkentyne

BigMack70 said:


> Increasing voltage on the FE is basically useless.
> 
> As for OC, I wouldn't get your hopes up. Mine can do +150 core +800 mem in a lot of games/tests but only +60 / +500 is 100% stable.
> 
> And outside of benchmarks, the performance uplift is pretty lame. For 14% more power and the OC over stock I get 2-5% higher FPS depending on the title.
> 
> Everything I've seen suggests that to get a meaningful overclock that yields +10% performance or better, you need to be on water with a 600W or higher BIOS, which is not available as far as I know for the FE.


The voltage slider will unlock one more "tier" of voltage and +15 MHz on the V/F curve, usually 1.081 V -> 1.10 V. Which point gets used depends on the shape of the curve, which seems to be really random; it's like it keeps auto-adjusting the curve to cycle between 1.075 V-1.10 V until it decides to settle on 1.087 V-1.10 V. The left-most voltage point on the 1.10 V tier will get used, I believe. The slider determines under what conditions that tier gets used, with the exact conditions being unknown, but it isn't just temperature, as I've seen the 1.10 V set activated at 78C before at 100% slider, up to 600 W (where I start getting throttle warnings). This slider is utterly pointless if you are hitting a power limit; it only does anything when you are NOT.

I don't know what happens if you manually adjust the curve so only one single point and frequency is set from 1.10 V onwards; I do know some people had worse performance trying to mess with that on 2080 Tis...


----------



## Edgenier

Falkentyne said:


> The voltage slider will unlock one more "tier" of voltage and +15 MHz on the V/F curve, usually 1.081 V -> 1.10 V. Which point gets used depends on the shape of the curve, which seems to be really random. The left-most voltage point on the 1.10 V tier will get used, I believe. The slider determines under what conditions that tier gets used, with the exact conditions being unknown, but it isn't just temperature, as I've seen the 1.10 V set activated at 78C before at 100% slider, up to 600 W (where I start getting throttle warnings).


Dumb question: what's the point of any curve point past even 1.0 V, when ~0.95 V already seems equivalent to ~500 W and 90% of cards don't even allow past 450-500 W? What I mean is, any point past around 0.95-1.0 V is only a temporary "spike" before you get throttled hard back to ~0.92 V, right? Well, unless you're playing Fall Dudes or something and 2200 MHz barely uses 100 W, then I guess you could be happy to know it's pegged at 1.1 V...


----------



## EniGma1987

bmgjet said:


> Look up the electrical law on resistors in parallel. That's why I made the shunt mod calculator, since people can't work out the maths lol.
> 
> NewResistance = (R1×R2)/(R1+R2)
> 
> So that's:
> 
> 5×10 = 50
> 5+10 = 15
> 50/15 = 3.33 mOhm as the new resistance if you stacked a 10 on a 5.


Quick question, as I will be power-modding my card when the waterblock gets here. Is it OK to use one resistor value on the 8-pin connectors, like a second 5 mOhm, and a different value on the PCIe slot power, such as 10 mOhm? That would give the slot 3.33 like you said, and the 8-pins would be 2.5. I don't want to overdraw the motherboard, but I also don't want to be limited on the 8-pins by the slot power being at maximum. I'm just not sure what the card would do if it starts seeing resistances that different between rails.
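Working the quoted formula for both rails shows how differently the two readings get skewed (a quick sketch; the 5 mOhm stock value on every rail is an assumption):

```python
def parallel(r1, r2):
    # NewResistance = (R1*R2)/(R1+R2), from the quoted post
    return r1 * r2 / (r1 + r2)

stock = 5.0  # mOhm, assumed stock shunt value on each rail
for rail, stacked in [("8-pin", 5.0), ("PCIe slot", 10.0)]:
    new_r = parallel(stock, stacked)
    # readings shrink by new_r/stock, so true draw = reported * stock/new_r
    print(rail, round(new_r, 2), "mOhm -> readings scaled by", round(new_r / stock, 3))
```

With those numbers the 8-pins would read half of the true draw while the slot reads two-thirds of it, so the slot is under-reported less aggressively than the 8-pins; whether the controller objects to that mismatch is exactly the open question here.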


----------



## shiokarai

Anyone here with the Bykski Strix 3090 block? I'm somewhat doubtful there are enough thermal pads included... also, the so-called "manual" shows pads applied only to the memory chips + some caps, not the actual VRM?!?! Seems fishy...


----------



## mirkendargen

shiokarai said:


> Anyone here with the Bykski Strix 3090 block? I'm somewhat doubtful there are enough thermal pads included... also, the so-called "manual" shows pads applied only to the memory chips + some caps, not the actual VRM?!?! Seems fishy...


Cut the included strips to cover each VRAM chip individually; don't just lay a strip across each row. There was plenty with mine when I did that. The thin strips on the left/right are the VRMs; cut a strip in half lengthwise to cover them. The row to the right of the right thin row is the caps; they aren't even covered by the cold plate (there's a cutout in the plexi for them), so definitely don't put a pad on them. It doesn't have a strip from the back of the VRMs to the backplate like some cards do, but I haven't had any of the thermal throttling issues people get when they miss a VRM pad, with only the VRAM padded on the backplate, so it seems fine.


----------



## Falkentyne

Edgenier said:


> Dumb question: what's the point of any curve point past even 1.0 V, when ~0.95 V already seems equivalent to ~500 W and 90% of cards don't even allow past 450-500 W? What I mean is, any point past around 0.95-1.0 V is only a temporary "spike" before you get throttled hard back to ~0.92 V, right? Well, unless you're playing Fall Dudes or something and 2200 MHz barely uses 100 W, then I guess you could be happy to know it's pegged at 1.1 V...


Well, if you're shunt-modded or have a high-TDP vBIOS and can cool the chip, it's still +15 MHz.
What it is NOT is a +mV increase for the same MHz as you'd get without the mV increase; it's just another tier on the V/F curve.
Usually using this tier with the card running at full 3D clocks (or a V/F point locked with "L") will give you VOP+VREL, while without the slider you will only see VREL (unless the curve adjusts so you are at 1.081 V anyway).

Usually the max voltage allowed is 1.080 V without the slider.


----------



## Falkentyne

EniGma1987 said:


> Quick question, as I will be power-modding my card when the waterblock gets here. Is it OK to use one resistor value on the 8-pin connectors, like a second 5 mOhm, and a different value on the PCIe slot power, such as 10 mOhm? That would give the slot 3.33 like you said, and the 8-pins would be 2.5. I don't want to overdraw the motherboard, but I also don't want to be limited on the 8-pins by the slot power being at maximum. I'm just not sure what the card would do if it starts seeing resistances that different between rails.


If your card has a 10 A fuse on the PCIe slot, throw a 15 mOhm on the slot and be done with it.
If it has 20 A fuses or no fuses, you can throw 5 or 10 mOhm on the slot. I highly doubt you're going to be cooling 700 W of TDP without going sub-ambient anyway.

There seems to be another power limit somewhere that isn't shown in GPU-Z that sometimes causes throttling. You can see it happen if you run Superposition in 4K with shader quality at "Extreme" rather than Optimized, with both HWiNFO64 and GPU-Z open. You will see your TDP Normalized % go WAY past your TDP slider, like 120% when the TDP max is 114%, even though no power limit has been exceeded on the reported sensors. That doesn't happen on "4K Optimized".

@dante`afk saw it happen first and I replicated it easily


----------



## ttnuagmada

What are the chances of a power unlocked vbios of some sort finding its way to the internet like the 1080ti/2080ti?


----------



## reflex75

BigMack70 said:


> +150 core


Each chip has its own boost behavior baked in (probably depending on silicon quality).
The default boost at +0 can differ a lot even between two FEs with the same base clock.
The same applies to +150: the offset alone doesn't tell you anything about the resulting frequency...


----------



## BigMack70

reflex75 said:


> The same applies to +150: the offset alone doesn't tell you anything about the resulting frequency...


And yet, it's still the best way to compare general overclocks between identical SKUs, since the only way to do it accurately would be to run an identical test with all other variables (e.g. temperature) controlled and record the card's average clock frequency. That's not a realistic option for people comparing numbers over the internet.


----------



## dr/owned

This is sort of a tangent, but USPS is absolutely boning me on my waterblock arriving from China. It hit the US almost two weeks ago and still hasn't moved; even stuff I shipped Priority Mail last week hasn't left my city. I've got a 6-hour job of stripping two loops, moving a motherboard to another build, installing the new motherboard, cleaning blocks, etc., and it's all being held up by this waterblock.


----------



## Chamidorix

kot0005 said:


> kinpin pcb finally
> 
> How to DEEP CLEAN your PC Parts! - YouTube


Well, we can finally confirm the shunt situation on the Kingpin 3090: the standard 3x 5 mOhm 1 W shunts (red) for chip, source, and VRAM, and then 4x 5 mOhm wide 2 W(?) shunts (blue), like the 2080 Ti KPE, for the 8-pins + PCIe slot:

















So at the very least, even if this has the exact same load-balancing issues (NVVDD/MSVDD being balanced against the PCIe slot, which caps at 80 W way too early) as the FTW3, you should still be able to shunt to get around it, as an option besides using the 1000 W temp-unprotected BIOS. Really stupid that this sounds like what needs to be done, however, based on forum reports (RTX 3090 K|NGP|N Power Draw Issues - EVGA Forums). But it does seem that VRM-, trace-, filtering-, and capacitance-wise this PCB is better than the Strix, though still a far cry from the incredible 2080 Ti KPE. I mean, just look at all these caps + pads on the 2080 Ti:



https://xdevs.com/doc/_PC_HW/EVGA/E200/e200_bot.jpg



My KPE finally arrived today after much tribulation so this info is coming just in the nick of time. Time to make the hard decisions on what to do with it.


----------



## dr/owned

mirkendargen said:


> Cut the included strips to cover each VRAM chip individually; don't just lay a strip across each row. There was plenty with mine when I did that. The thin strips on the left/right are the VRMs; cut a strip in half lengthwise to cover them. The row to the right of the right thin row is the caps; they aren't even covered by the cold plate (there's a cutout in the plexi for them), so definitely don't put a pad on them. It doesn't have a strip from the back of the VRMs to the backplate like some cards do, but I haven't had any of the thermal throttling issues people get when they miss a VRM pad, with only the VRAM padded on the backplate, so it seems fine.


What was the thickness of the pads they use? I don't have any good 2.0mm ones on hand so I'm hoping they copied EK in using all 1mm thick.


----------



## thegr8anand

I have been testing my 3090 FE using Cyberpunk 2077: 1440p max settings, FOV 90, no grain, no DoF, no blur, no DLSS, and RT Psycho. I found I can consistently boost to 1950 MHz at 0.875 V without hitting the power limit; anything higher keeps hitting it. But I don't want to miss out on higher boosts when running less intensive games. How do I set that up?


----------



## shiokarai

dr/owned said:


> What was the thickness of the pads they use? I don't have any good 2.0mm ones on hand so I'm hoping they copied EK in using all 1mm thick.


It looks more like 1.5 mm, but honestly it could also be 1 mm. I did everything according to the "manual" (what a mess of a "manual" that is...), put pads only where they want me to, and everything seems fine, at least for now: 36-37C GPU core when playing Cyberpunk (KPX used on the GPU, stock thermal pads) with the default 480 W power limit and +120 on the core, clocks jumping between 2010-2100 MHz with RT Ultra everything, no DLSS, at 3840x1600 21:9. Getting about 30 fps lol, heavily power-limited. Time to try the KPE BIOS, Conductonaut, and then maybe shunting.


----------



## nycgtr

mirkendargen said:


> Yeah AIO is nice if you don't have a loop already, but who the hell is getting a Kingpin that doesn't already have a loop?
> 
> The EVGA Hydrocopper blocks haven't been good, and I doubt that's changing.


80% of the people buying it don't have a loop, hence they sell it with a stupid AIO. I got mine the other day; seems like a good chip out of the box. A block is going to be a while away. I wouldn't even bother with the EVGA one, which will just be trash.


----------



## geriatricpollywog

My display adapter arrived


----------



## mirkendargen

shiokarai said:


> It looks more like 1.5 mm, but honestly it could also be 1 mm. I did everything according to the "manual" (what a mess of a "manual" that is...), put pads only where they want me to, and everything seems fine, at least for now: 36-37C GPU core when playing Cyberpunk (KPX used on the GPU, stock thermal pads) with the default 480 W power limit and +120 on the core, clocks jumping between 2010-2100 MHz with RT Ultra everything, no DLSS, at 3840x1600 21:9. Getting about 30 fps lol, heavily power-limited. Time to try the KPE BIOS, Conductonaut, and then maybe shunting.


Yup, I don't have calipers, but comparing a scrap of it to a 0.5 mm Arctic pad, it looks like 1.5 mm. The material also looks just like Thermalright Odyssey, so I'm sure there's not much need to replace it with anything "better".


----------



## xrb936

I assume that some of you have already received your KINGPIN cards. I am so excited to see your overclocking result.


----------



## mirkendargen

Anyone like me with an OLED raging about the randomly raised HDR blacks in the last two Nvidia drivers: a hotfix driver is out that claims to fix it: GeForce Hotfix Driver Version 460.97 | NVIDIA


----------



## nycgtr

xrb936 said:


> I assume that some of you have already received your KINGPIN cards. I am so excited to see your overclocking result.


I just have it on the OC BIOS, +800 mem, +140 on the core. Seems to be fine running around in Cyberpunk; seems to hold 2140-2160 while gaming.


----------



## Shawnb99

nycgtr said:


> 80% of the people buying it don't have a loop, hence they sell it with a stupid AIO. I got mine the other day; seems like a good chip out of the box. A block is going to be a while away. I wouldn't even bother with the EVGA one, which will just be trash.



I'll never understand that. You've got $2000 for a card but you can't afford a custom loop? I wouldn't use an AIO if you gave it to me for free. I'll be removing it from my KPE before I ever install it.


----------



## geriatricpollywog

The KP is meant to be an XOC card, not a card you plumb into your hardline loop and casually overclock while you listen to Dave Matthews Band.


----------



## nycgtr

0451 said:


> The KP is meant to be an XOC card, not a card you plumb into your hardline loop and casually overclock while you listen to Dave Matthews Band.


There's always the chiller


----------



## EniGma1987

Falkentyne said:


> If your card has a 10 amp fuse on PCIE slot, throw a 15 mOhm on PCIE slot and be done with it.
> If it has 20 amp fuses or no fuses, you can throw 5 or 10 mOhm on PCIE slot. I highly doubt you're going to be cooling 700W of TDP without going sub-ambient anyway.


So just to be absolutely clear on this: having a 3.75 mOhm final resistance on the PCIe slot will not throw the card into fault mode when the 8-pin power has a final resistance of 2.5 mOhm?


----------



## mirkendargen

0451 said:


> The KP is meant to be an XOC card, not a card you plumb into your hardline loop and casually overclock while you listen to Dave Matthews Band.


Then why does it have a cooler at all? Why don't they sell a bare PCB? LN2 pot preinstalled? Lol...


----------



## geriatricpollywog

mirkendargen said:


> Then why does it have a cooler at all? Why don't they sell a bare PCB? LN2 pot preinstalled? Lol...


It’s like a street legal car prepped from the factory for track days.


----------



## Falkentyne

Can you guys with EITHER an EVGA Kingpin (LN2 520W BIOS) or an FTW3 (XOC BIOS) _PLEASE_ run Time Spy Extreme with BOTH GPU-Z and HWiNFO64 open? In GPU-Z, set each wattage rail to "Maximum" (I keep telling people this but they keep forgetting; use the mouse and choose "Maximum"), and in HWiNFO64 have "TDP Normalized %" and "TDP %" visible in the "Maximum" column.

Bonus points for actual Kingpin card owners doing this, as this "may" be relevant to XOC/FTW3 users also.

Follow this up by running either Port Royal or Time Spy (normal) and do the same thing (GPU-Z "Maximum" wattage values selected, and HWiNFO64 "TDP Normalized %" vs. "TDP %" values visible).

Finally, do the exact same thing with Superposition -> 4K -> Custom -> Shaders quality "Extreme", then compare it to the "4K Optimized" preset and post your results.

Trying to get some feedback for these threads.






RTX 3090 K|NGP|N Power Draw Issues - EVGA Forums
forums.evga.com





Thank you.


----------



## HyperMatrix

Falkentyne said:


> Can you guys with EITHER an EVGA Kingpin (LN2 520W BIOS) or an FTW3 (XOC BIOS) _PLEASE_ run Time Spy Extreme with BOTH GPU-Z and HWiNFO64 open? In GPU-Z, set each wattage rail to "Maximum" (I keep telling people this but they keep forgetting; use the mouse and choose "Maximum"), and in HWiNFO64 have "TDP Normalized %" and "TDP %" visible in the "Maximum" column.
> 
> Bonus points for actual Kingpin card owners doing this, as this "may" be relevant to XOC/FTW3 users also.
> 
> Follow this up by running either Port Royal or Time Spy (normal) and do the same thing (GPU-Z "Maximum" wattage values selected, and HWiNFO64 "TDP Normalized %" vs. "TDP %" values visible).
> 
> Finally, do the exact same thing with Superposition -> 4K -> Custom -> Shaders quality "Extreme", then compare it to the "4K Optimized" preset and post your results.
> 
> Trying to get some feedback for these threads.
> 
> RTX 3090 K|NGP|N Power Draw Issues - EVGA Forums
> forums.evga.com
> 
> Thank you.


Mine has this issue. Screenshot is from inefficient offset settings. My buddy picked one up too, and I'll help OC his shortly and see if it has the same problem. Also, do we know if this has been reported on the original KP batch as well, or just the latest batch?


----------



## Falkentyne

HyperMatrix said:


> Mine has this issue. Screenshot is from inefficient offset settings. My buddy picked one up too, and I'll help OC his shortly and see if it has the same problem. Also, do we know if this has been reported on the original KP batch as well, or just the latest batch?
> 
> View attachment 2470101


Thank you for the GPU-Z screenshot.
I need to see your TDP Normalized % and TDP % values in HWiNFO64, please; this is going to be useless without the normalized values. The GPU-Z max values are just something to compare against (the GPU-Z TDP value should match the HWiNFO TDP value, but what matters here is Normalized).


----------



## HyperMatrix

Falkentyne said:


> I need to see your TDP Normalized % values in HWinfo64, please. This is going to be useless without the normalized values.


I don't use that program. But after two hours of benching, the LN2 BIOS maxed out around 460-465 W under a variety of conditions; the OC BIOS was no better. PCIe slot power usage was at or under 60 W every time I checked.


----------



## Falkentyne

HyperMatrix said:


> I don't use that program. But after two hours of benching, the LN2 BIOS maxed out around 460-465 W under a variety of conditions; the OC BIOS was no better. PCIe slot power usage was at or under 60 W every time I checked.


Can you install it real fast and check? Install the latest version, then run it and choose "Sensors-only". Scroll down to where you see TDP Normalized, run your benchmark, then take a screenshot or post the two values (TDP vs. TDP Normalized). You can uninstall it afterwards.

None of the information we've been discussing is going to be useful without seeing Normalized. GPU-Z does not report it at all; even MSI Afterburner doesn't...

Trust me for once, please...


----------



## markuaw1

nycgtr said:


> I just have it on the OC bios. +800 mem + 140 on the core. Seems to be fine running around in cyberpunk. Seems to hold 2140-2160 gaming


That is a nice overclock. My 3090 Strix with an EK block does OK, 2220 gaming.


----------



## HyperMatrix

Falkentyne said:


> Trust me for once, please...


This is a hard ask. Haha. I'll do it after I do some OC'ing for my bud.


----------



## nycgtr

Timespy extreme.


----------



## geriatricpollywog

I can’t figure out how to install the latest HWInfo64 without it clumsily installing over the previous version and then running as the previous version.


----------



## Falkentyne

nycgtr said:


> View attachment 2470105
> 
> Timespy extreme.


What's your TDP slider set to?


----------



## nycgtr

Falkentyne said:


> What's your TDP slider set to?


It was 118; hang on, gonna do it again at 121. Too used to Afterburner.


----------



## Falkentyne

0451 said:


> I can’t figure out how to install the latest HWInfo64 without it clumsily installing over the previous version and then running as the previous version.


Try deleting program files/hwinfo then install it.


----------



## dante`afk

markuaw1 said:


> That is a nice overclock. My 3090 Strix with an EK block does OK, 2220 gaming.
> View attachment 2470104


Pretty sure you can't hold 1.1 V and 2200 constantly.


----------



## nycgtr




----------



## xrb936

Shawnb99 said:


> I'll never understand that. You've got $2000 for a card but you can't afford a custom loop? I wouldn't use an AIO if you gave it to me for free. I'll be removing it from my KPE before I ever install it.


Firstly, neither an EVGA waterblock nor any third-party waterblock is available yet. Secondly, a lot of users run AIOs on both CPU and GPU; it doesn't mean they can't afford a custom loop.


----------



## Falkentyne

nycgtr said:


> It was 118; hang on, gonna do it again at 121. Too used to Afterburner.


I probably should have clarified further:
Some people with Kingpin cards on the LN2 BIOS are getting throttling at <450 W when running TS Extreme (sort of like FTW3 users on the XOC BIOS), when they should be able to reach 520 W.
For those who are throttling far below their TDP slider %, we need to see whether TDP Normalized exceeds TDP % by a large amount (TDP Normalized is not in GPU-Z).


----------



## xrb936

markuaw1 said:


> That is a nice overclock. My 3090 Strix with an EK block does OK, 2220 gaming.
> View attachment 2470104


Wow, what’s your Strix performance under stock cooler?


----------



## Falkentyne

nycgtr said:


> View attachment 2470106


So you were pulling 480W when it should have been 520W, right?
(Am I following this correctly?).
Your TDP slider was set to 121%?

The only thing I see is TDP Normalized is 5% higher than TDP. But both are still below 121%.


----------



## HyperMatrix

Falkentyne said:


> What's your TDP slider set to?


The exact same tests on my buddy's card have TDP maxing out at around 425 W in Cyberpunk at 2160 MHz (supposedly) and 1.08 V. Port Royal on those same settings: 460 max.

Update: we decided to test some crazy clocks on his card. With a +180 offset and +1000 mem he hit 15000 points, starting off at 2205 MHz and settling down at 2190 MHz. During that run, his power usage actually did go up to 480-491 W.


----------



## nycgtr

Falkentyne said:


> So you were pulling 480W when it should have been 520W, right?
> (Am I following this correctly?).
> Your TDP slider was set to 121%?
> 
> The only thing I see is TDP Normalized is 5% higher than TDP. But both are still below 121%.


Correct. I do see I was power-capped.


----------



## Bakerman

My KP. Slider 121%. TS Extreme.


----------



## Shawnb99

xrb936 said:


> Firstly, neither an EVGA waterblock nor any third-party waterblock is available yet. Secondly, a lot of users run AIOs on both CPU and GPU; it doesn't mean they can't afford a custom loop.


If there were more demand for the custom route, the Hydro Copper model would be the priority over the Hybrid. They even stopped production of the Hydro Copper on the 2080 Tis the first chance they got. AIOs are the prebuilt computers of watercooling, and they rule the day.
It's just sad in my view: spend all that and settle for an AIO.
To each their own.


----------



## GAN77

There is an error in the specs on the first page: the MSI RTX 3090 Suprim X uses a Monolithic Power Systems MP2888A controller. Similarly for Asus.


----------



## bmgjet

Bakerman said:


> My KP. Slider 121%. TS Extreme.


Let's have a look at what power limits you're hitting:
GPU Chip and MVDDC.

I suspect the people that can't reach their max power limits have high-leakage chips, so the card hits that per-rail limit first, before it can even draw near the board power limit.


----------



## Falkentyne

Ok this is going to be a picture flood so I hope you guys can follow some of this stuff.

First, as you can see, the 3090 FE seems to cap out at 175 W on the 8-pins, and the rails sum to (175 + 170 + 65) = 410 W against the 400 W TDP. The max cap on the PCIe slot seems to be 79.9 W (a BIOS debug has this at 86 W, but when I was being hard-throttled by the slot, it tapped out at 79.9 W; this could also have been a balancing issue, as I was first being throttled by chip power at 290 W, and when I fixed that, PCIe was then reaching 79.9 W and I had to fix that too, etc.).
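The rail-by-rail bookkeeping in this post can be sketched as a small check; the cap values and the "Maximum" readings below are illustrative, pulled from the numbers above, not an official spec:

```python
# Per-rail power caps (W) as described above, and example "Maximum" readings.
caps = {"8-pin #1": 175.0, "8-pin #2": 170.0, "PCIe slot": 79.9}
readings = {"8-pin #1": 175.0, "8-pin #2": 158.0, "PCIe slot": 72.0}

board_power = sum(readings.values())  # total board power from the rails
# Any rail pinned at its cap is the one actually throttling the card.
limiting = [rail for rail, w in readings.items() if w >= caps[rail]]
print(board_power)  # 405.0
print(limiting)     # ['8-pin #1']
```

This is exactly the hunt the screenshots walk through below: fix whichever rail is pinned, re-run, and see which rail becomes the new limiter.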

You can clearly see this in my Superposition 1080 extreme run last October before any mods, at 114% TDP.










Easy to compare to a SP 1080p extreme run post shunt mod here.










I'm aware the two 8-pins aren't balanced perfectly, but since I can draw 600W, and temps get uncontrollable at 600W before TDP Normalized starts getting past 114% (the TDP% slider), that's just fine. Here's an Overwatch run at 1080p + 200% render scale (4x SSAA, basically 4K) to see where the limit is. This will be important when you see what comes after.
Note the "small" throttle bar, which is basically a throttle alert you get when "something" gets close to the TDP limit. There was a segment with a full bar for a short time, but that went away.
The large dip in the power draw was from an alt-tab.










So I managed to hit 600W here (GPU power in HWiNFO; checked my Kill-A-Watt too) without any rails actually tripping a power limit. 8-pin #1 seems to start triggering something around 170W, which turns into heavy throttling at 175W. OK, so no problems for a high-temp, high-power-draw run.

Time Spy (normal) run:









That "double throttle" bar seems to happen on shunt-modded 2080 Tis also (someone on the Nvidia sub checked that for me on his 2080 Ti) and may be related to cards that were "crashing" at the beginning of Graphics test 2 long ago, so most likely that's getting "drivered" there. Normalized is far below 114%, so no throttling except from being drivered.

But now we have a problem...

Timespy Extreme:










Anyone see something weird going on here?
No reported power rails are throttling, but TS Extreme test 1 has "some" throttling (left power-draw side) and test 2 has a lot of throttling (right power-draw side in GPU-Z).
Notice the TDP Normalized? It's MUCH higher than my max TDP slider (114%), so "something" is triggering a throttle alert here. Notice the balance between TDP% and TDP Normalized %: the midpoint isn't exactly 114%, which is why the throttling wasn't extremely heavy. But clearly, no power rail was exceeding any throttle point.

Now this one becomes even MORE obvious:

Superposition 4k--custom--Extreme shaders.









Now this is HEAVY throttling. Notice the bar is completely thick, and not only is it permanently thick, also notice:
1) The TDP slider (114%) is almost exactly in between TDP Normalized and TDP%.
2) TDP Normalized FAR exceeds the TDP slider here.
3) The throttling starts INSTANTLY, before the card has even ramped up the load. Notice the very beginning of the test: the throttle bar is 100% full-sized and in full force BEFORE the load has even ramped up! Zoom in and look for yourself.

Yet once again, no rail seems to be throttling by itself! But something is triggering a huge throttle and reporting it to TDP Normalized; something is leapfrogging the 114% target, and it's clearly not the 8-pins... but it's something that reports a value to Normalized. If it were just 8-pin #1, you would expect around 113% Normalized (look at the Overwatch screenshot again, please).

So yeah. That's why I want people who are being capped at 450W on the Kingpin LN2 bios to post their TDP slider %, TDP%, and TDP Normalized %. The Kingpin should have access to more power rails than normal boards...

Thank you to @dante`afk for being the first person to mention Superposition 4k Extreme custom throttling here.
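For anyone trying to follow the TDP% vs. TDP Normalized % distinction above, here is a rough sketch of one plausible way the two readouts could relate. This is an illustrative model only — the rail names, caps, and draws below are invented, and NVIDIA's actual firmware arbitration is not public: TDP% tracks total board power against the board limit, while TDP Normalized % tracks whichever single rail is closest to its own cap, so a hidden per-rail cap can push Normalized past the slider while every visible rail still shows headroom.

```python
# Illustrative model only: invented rail names, caps, and draws.
board_power = 430.0   # W, measured total board draw
board_limit = 450.0   # W, board power limit at the current slider setting

rails = {  # name: (measured draw W, per-rail cap W)
    "8pin_1":    (150.0, 175.0),
    "8pin_2":    (155.0, 175.0),
    "pcie_slot": (60.0,  79.9),
    "gpu_chip":  (300.0, 292.0),  # internal cap a typical overlay doesn't show
}

# "TDP %": total board power vs. the board limit -- still under 100% here
tdp_pct = 100.0 * board_power / board_limit

# "TDP Normalized %": the single most-loaded rail vs. its own cap -- over 100%
tdp_norm_pct = 100.0 * max(draw / cap for draw, cap in rails.values())
```

In this toy scenario the card throttles on the internal chip rail even though the slider and every monitored connector report headroom, which matches the pattern in the screenshots above.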


----------



## Falkentyne

bmgjet said:


> Lets have a look what power limits your hitting.
> GPU Chip and MVDDC.
> 
> I suspect the people that cant reach there max power limits have high leakage chips so the card hits that power limit first before it can even draw near the board power limit.
> View attachment 2470114


Why is max chip power 292W on some EVGA cards and 312W on others? And why is it 292W on a Kingpin card? Is that a bug?
I'm seeing a "default chip" and "max chip" of 292W on two EVGA cards (one of which is a Kingpin), and 292W default and 312W max on all other EVGA cards...

Well, looks like THAT already answers one Kingpin mystery...
Oh, and that 96W VRAM power limit isn't helping in the least either...

Sounds like someone at EVGA didn't do their homework... looks like they need to increase the MVDDC and Chip limits...
(This is sort of funny... I actually mentioned several weeks ago, when someone did a bunch of benchmark runs and was hitting 95W MVDDC, that it might be MVDDC and not the PCIE slot throttling the FTW3 cards... a few people thought about it, then it got completely forgotten...)

(This still doesn't explain the TS Extreme + Superposition high TDP Normalized reports however...)


----------



## bmgjet

No idea why they decided on those limits.
I guess Kingpin cards are meant to get higher-binned, low-leakage chips. But so far it's not really turned out like that, since there have been a few reports of real lemons that are worse than FTW3 cards by a fair bit, which makes it seem like they just put any chip on it with no binning. You have people complaining about not being able to get over 14.3K PR scores on the forums.
That would make sense with how many they are pumping out compared to last gen's KPE, which was a limited card with only 1-5 cards made out of every batch of 1000 chips.
I really need to get my hands on that 1000W bios Vince is sending out to people, though, to compare info from.


----------



## WMDVenum

Edgenier said:


> Shunted TUF 3090 with 4mohm, I’m pushing 2000mhz at ~.94 however cyberpunk will only pull 220-230w (gpu-z) and frequency will be solid, but if the game tries to pull more than 240w (around 500w), gpu-z shows “PWR” and i get throttling??? It’s like there’s another power limit around 500w despite me shunting? Huh?


How is that possible? I undervolted to 2100/0.975v and +1000 mem, and CP2077 only uses about 350-375 watts. I'm not shunt modded, but I don't get power limited in any of the games I have tried... I do have a 3090 FE and not a TUF.


----------



## WMDVenum

BigMack70 said:


> Increasing voltage on the FE is basically useless.
> 
> As for OC, I wouldn't get your hopes up. Mine can do +150 core +800 mem in a lot of games/tests but only +60 / +500 is 100% stable.
> 
> And outside of benchmarks, the performance uplift is pretty lame. For 14% more power and the OC over stock I get 2-5% higher FPS depending on the title.
> 
> Everything I've seen suggests that to get a meaningful overclock that yields +10% performance or better, you need to be on water with a 600W or higher BIOS, which is not available as far as I know for the FE.


Unless you are shunt modding, I find a good undervolt gives a larger performance improvement than increasing the offset. I am also now testing in actual games instead of a stress test, since games tend not to power limit you when a stress test will. The downside is you might need to adjust it if a more demanding game comes out.


----------



## HyperMatrix

Bit of info on those inquiring about the KingPin card. There does not appear to be an actual power limit at 460. Secondly, anyone who is scoring poorly is using the stock fans. On the stock fans at 100% speed, running port royal a couple times at 0.95v 2085MHz had my temps go up to 57C and a crash. The fans are apparently just 12v 0.2a whereas the FTW3 Hybrid has 12v 0.65a fans. So if you're using stock fans...your temps suck...and so does your OC potential. With good fans, you can keep temps under 50C with this card. My card isn't a great overclocker. But my friend got one too and I just finished doing his. He can pull 2205 to start and end at 2190 in port royal with max power draw hitting 491W. That's with +1000 on mem as well. My own card was able to do 0.95v at 2100 (dropping to 2085 during the run). It was a good start. But then crapped out before being able to hit the 2200+


----------



## Falkentyne

HyperMatrix said:


> Bit of info on those inquiring about the KingPin card. There does not appear to be an actual power limit at 460. Secondly, anyone who is scoring poorly is using the stock fans. On the stock fans at 100% speed, running port royal a couple times at 0.95v 2085MHz had my temps go up to 57C and a crash. The fans are apparently just 12v 0.2a whereas the FTW3 Hybrid has 12v 0.65a fans. So if you're using stock fans...your temps suck...and so does your OC potential. With good fans, you can keep temps under 50C with this card. My card isn't a great overclocker. But my friend got one too and I just finished doing his. He can pull 2205 to start and end at 2190 in port royal with max power draw hitting 491W. That's with +1000 on mem as well. My own card was able to do 0.95v at 2100 (dropping to 2085 during the run). It was a good start. But then crapped out before being able to hit the 2200+


We already determined the source of the problem.
The cards are not running into a power limit that can be adjusted.
They're running into the "Chip Power" and "MVDDC" power limits, which are set to the same values as on a basic 350W card... (292W and 94W...)


----------



## rocklobsta1109

Jordyn said:


> New 3090 owner here. Have Inno3d X4 so 2x8pin connector. Worth flashing to the KFA2 bios if on staying air? Or will temps and power limits mean any gains will be minimal. Also worth noting that I am in Australia so coming into a hot summer.


I flashed the KFA2 bios onto my card and noticed slightly higher average boost clocks, about 25-35MHz more than stock. I'd say it's worth it for a free mod.


----------



## BigMack70

HyperMatrix said:


> Bit of info on those inquiring about the KingPin card. There does not appear to be an actual power limit at 460. Secondly, anyone who is scoring poorly is using the stock fans. On the stock fans at 100% speed, running port royal a couple times at 0.95v 2085MHz had my temps go up to 57C and a crash. The fans are apparently just 12v 0.2a whereas the FTW3 Hybrid has 12v 0.65a fans. So if you're using stock fans...your temps suck...and so does your OC potential. With good fans, you can keep temps under 50C with this card. My card isn't a great overclocker. But my friend got one too and I just finished doing his. He can pull 2205 to start and end at 2190 in port royal with max power draw hitting 491W. That's with +1000 on mem as well. My own card was able to do 0.95v at 2100 (dropping to 2085 during the run). It was a good start. But then crapped out before being able to hit the 2200+


What kind of clocks can you hold in a really heavy game like cyberpunk?


----------



## HyperMatrix

BigMack70 said:


> What kind of clocks can you hold in a really heavy game like cyberpunk?


My friend is currently holding 2190 with his. He was doing 2205 but crashed after 15 minutes.


----------



## dante`afk

same here, cyberpunk can hold 2190 with my FE, 2205 or 2020 crashes after some time.

PR and Superposition can do 2260 to 2245


----------



## thegr8anand

WMDVenum said:


> How is that possible? I undervolted to 2100/.975v and 1000 mem and CP2077 only uses about 350-375 Watts. I'm not shunt moded but dont get power limited in any of the games I have tried....I do have a 3090 FE and not a TUF.


Does your OC stick at 2100 the whole time? If not what does it settle on when running CP2077?


----------



## thegr8anand

dante`afk said:


> same here, cyberpunk can hold 2190 with my FE, 2205 or 2020 crashes after some time.
> 
> PR and Superposition can do 2260 to 2245


What overclock are you using? Are you using liquid cooling?


----------



## geriatricpollywog

Falkentyne said:


> Can you guys with EITHER evga Kingpin (LN2 520W Bios) or FTW3 (XOC Bios) _PLEASE_ run "timespy extreme", with BOTH GPU-Z and HWinfo64 Open and have the "MAXIMUM" (I keep telling people this but they keep forgetting) value CHECKED in GPU-Z (use the mouse and choose "maximum") for each wattage rail, and in HWinfo64, please have "TDP Normalized %" and "TDP%" visible and showing in the "Maximum" field?
> 
> Bonus points for actual Kingpin card owners doing this as this "may" be relevant to XOC/FTW3 users also.
> 
> Follow this up by running either Port Royal or Timespy (normal) and do the same thing (GPU-Z "maximum" wattage values selected (remember--use the mouse and click "maximum" in GPU-Z!), and HWinfo64 'TDP normalized' vs "TDP%" values visible?
> 
> Finally, do the exact same thing with Superposition --> 4K--> Custom -->Shaders quality-->Extreme" and then compare it to '4k optimized' (preset) and post your results.
> 
> Trying to get some feedback for these threads.
> 
> 
> 
> 
> 
> 
> RTX 3090 K|NGP|N Power Draw Issues - EVGA Forums (forums.evga.com)
> 
> 
> 
> 
> 
> Thank you.


----------



## Falkentyne

Interesting.
So it may be just "GPU Chip Power" causing some people to hit a power limit.
This user ran into heavy throttling at 121% TDP in Timespy Extreme, and you can see his GPU Chip Power exceeded 290W.









[Official] NVIDIA RTX 3090 Owner's Club - www.overclock.net

Yet yours only reached 278W, which is why you didn't throttle. However you did exceed 120W MVDDC...interesting.
I wonder if MVDDC limit isn't applying here.


----------



## geriatricpollywog

Falkentyne said:


> Interesting.
> So it may be just "GPU Chip Power" causing some people to hit a power limit.
> This user ran into heavy throttling at 121% TDP in Timespy Extreme, and you can see his GPU Chip Power exceeded 290W.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club - www.overclock.net
> 
> 
> 
> 
> 
> Yet yours only reached 278W, which is why you didn't throttle. However you did exceed 120W MVDDC...interesting.
> I wonder if MVDDC limit isn't applying here.


Thanks for helping the community and looking into this.


----------



## Falkentyne

0451 said:


> Thanks for helping the community and looking into this.


I try.
To be honest, I wouldn't be surprised if the MVDDC limit were getting completely ignored. MVDDC also seems to have continuity with the PCIE slot shunt, if you take a multimeter to both of them.

That might explain this result from when anethema didn't shunt his PCIE slot on his FE way back: 258W MVDDC power draw (clearly it can't draw that much).









RTX 3090 Founders Edition working shunt mod - www.overclock.net





Then dropping to 139W after he shunted his PCIE slot...









RTX 3090 Founders Edition working shunt mod - www.overclock.net
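As background on why a shunt-modded card misreports, here is a generic sketch of the arithmetic. The 5 mΩ stock value below is an assumption for illustration — actual shunt values vary by board and rail. Stacking a second resistor on top of a shunt puts the two in parallel; the sense voltage drops accordingly, and the controller, still assuming the stock resistance, infers proportionally less current.

```python
def stacked_shunt(r_stock_mohm, r_added_mohm):
    """Parallel resistance of a stacked shunt, plus the reported/actual
    power ratio seen by a controller that still assumes the stock value."""
    r_eff = (r_stock_mohm * r_added_mohm) / (r_stock_mohm + r_added_mohm)
    return r_eff, r_eff / r_stock_mohm

# Example: stacking 5 mOhm on an assumed 5 mOhm stock shunt halves the reading
r_eff, ratio = stacked_shunt(5.0, 5.0)   # 2.5 mOhm effective, ratio 0.5
actual_w = 600.0
reported_w = actual_w * ratio            # the card believes it draws half this
```

This is also why rail readings on a modded card stop being trustworthy: the reported number is only as meaningful as the resistance the firmware assumes for that sense point.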


----------



## WMDVenum

thegr8anand said:


> Does your OC stick at 2100 the whole time? If not what does it settle on when running CP2077?


I have it set to 2100/0.975V at load. It sticks to that for both CP2077 and WD: Legion. I may try pushing it higher, but I'm starting to get close to the power limits. A friend offered to shunt mod it for me, but I'm not sure if it is worth it for an additional 100MHz or so. The issue I am experiencing now is that in gaming GPU-Z seems to throw a PWR limit at about 370 watts, while in Port Royal I was hitting it at 400W.


----------



## BigMack70

dante`afk said:


> same here, cyberpunk can hold 2190 with my FE, 2205 or 2020 crashes after some time.
> 
> PR and Superposition can do 2260 to 2245


That seems insane to me. My FE holds like 1950 with an OC in cyberpunk @ 4k maxed


----------



## Falkentyne

BigMack70 said:


> That seems insane to me. My FE holds like 1950 with an OC in cyberpunk @ 4k maxed


He's shunt modded.


dante`afk said:


> same here, cyberpunk can hold 2190 with my FE, 2205 or 2020 crashes after some time.
> 
> PR and Superposition can do 2260 to 2245


I bet you 100% you can't hold 2200 in Call of Duty without crashing!


----------



## bmgjet

Falkentyne said:


> I bet you 100% you can't hold 2200 in Call of Duty without crashing!


What about on 720 res with min settings lol.


----------



## thegr8anand

BigMack70 said:


> That seems insane to me. My FE holds like 1950 with an OC in cyberpunk @ 4k maxed


Yeah, same for me.

CP2077 max settings, RT Psycho, FoV 90, no DoF, no blur, no FG. No DLSS, as it seems to put less load on the card. I've been using this to find the lowest stable clock, which for me is 1950 @ 0.875V on my FE. Power hits around 350-360W according to Afterburner, and the power limit is not activated. Temps around 62-63 degrees.

I ran a simple +150 OC and it ran at a higher temp and kept throttling down to 1920-1935 boost at 0.931V because of the power limit.


----------



## WMDVenum

Unrelated note but does anyone know where to order replacement water block screws? Mine are starting to wear out due to removing them multiple times. I assume if I know the size I can just order some from any store right?


----------



## bmgjet

Take a screw down to a hardware store and you'll probably be able to match something up.


----------



## HyperMatrix

Falkentyne said:


> Interesting.
> So it may be just "GPU Chip Power" causing some people to hit a power limit.
> This user ran into heavy throttling at 121% TDP in Timespy Extreme, and you can see his GPU Chip Power exceeded 290W.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club - www.overclock.net
> 
> 
> 
> 
> 
> Yet yours only reached 278W, which is why you didn't throttle. However you did exceed 120W MVDDC...interesting.
> I wonder if MVDDC limit isn't applying here.



Just for your records, I was able to go over 500W by upping FBVDD and MSVDD. It helped make an OC that crashed in under a minute last for 8 minutes.


----------



## dr/owned

WMDVenum said:


> Unrelated note but does anyone know where to order replacement water block screws? Mine are starting to wear out due to removing them multiple times. I assume if I know the size I can just order some from any store right?


If you don't care about paying for shipping, McMaster-Carr will have absolutely every kind you could want. They're almost certainly metric... probably M3. (I've also never heard of screws wearing out... kinda weird.)


----------



## mirkendargen

WMDVenum said:


> Unrelated note but does anyone know where to order replacement water block screws? Mine are starting to wear out due to removing them multiple times. I assume if I know the size I can just order some from any store right?


The spring screws attaching the block to the die, or the bolt-through screws? I've never had this happen either... but if it's the bolt-through screws, maybe some Loctite Blue is in order.

Or by wearing out do you mean you are just torquing the hell out of them with a kinda mis-sized screwdriver that's stripping the heads?


----------



## Chamidorix

Falkentyne said:


> I try.
> To be honest, I wouldn't be surprised if MVDDC limit were getting completely ignored. MVDDC also seems to have "Continuity" with PCIE Slot shunt, if you take a multimeter to both of them.
> 
> Might explain this result when anethema didn't shunt his PCIE slot on his FE way back. 258W MVDDC power draw (clearly it can't draw that much).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> RTX 3090 Founders Edition working shunt mod - www.overclock.net
>
> Then dropping to 139W after he shunted his PCIE slot...
>
> RTX 3090 Founders Edition working shunt mod - www.overclock.net


It's great to see someone actually making an effort to figure this out. I've had my KP and been messing with it for some hours now, and chatting with some others. It's really problematic; there are just so many different things that all seem to be causing issues to varying degrees:

1. The bios dump indeed shows ludicrously low chip + vram power limits
2. Adjusting voltage via dip switches or Classified seems to remove or raise individual limits (hitting >520W when upping NVVDD)
3. The fans seem to consume a good amount of power (~50W) when running full speed off of the PCB power (vs. others who run the fans off the mobo or chill, etc.)
4. There is still load-balance variance, with some cards at 75W on the PCI slot and some below 60W at similar board loads
5. There seems to be high variance in performance between daisy-chained cables, a single-plugged daisy chain, and separate cables for the 8-pins, i.e. very tight voltage tolerances

For sure the vram values dumped by bmgjet seem to be irrelevant: pretty much any config (450W stock, 520W OCed) will exceed 96W to vram. Chip power is also very peculiar. At low res (1440p TS) you will hit 300W quite easily, but bump it up to 8K and the power draw to the chip plummets, and it refuses to move up the voltage curve even with headroom. Then you flip an NVVDD switch and it locks to that higher voltage with the associated ludicrous power draw.

My current theory is that a big missing piece we are all ignoring is dynamic SM clocking/power gating. At lower resolutions I suspect individual SMs are being heavily downclocked, allowing the primary core clock we all see in Afterburner etc. to climb easily and precisely hit power limits, since fewer SMs draw power per voltage step. As resolution increases, suddenly you are lighting up the entire chip, each voltage step adds a large chunk of power, and the control cannot be as finely tuned, on top of the clock/voltage curve dropping from the massively increased overall current.

Anyways, I've got a KP and your wish is my command. Let me know what you want me to run and what data to collect and I'll give it to you.
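A toy numerical sketch of the dynamic-SM theory in the post above. Everything here is invented for illustration — the per-step wattage constant and the active-SM counts are guesses, and GA102's actual clock/power management is not public:

```python
# Toy model: if dynamic power scales with the number of active SMs, each
# step up the voltage/frequency curve costs more watts when more of the
# chip is lit up, so the limiter has coarser control at high resolution.
TOTAL_SMS = 82          # SM count on a full RTX 3090 (GA102)
STEP_COST_FULL = 20.0   # invented: watts per V/F step with all SMs active

def watts_per_step(active_sms, step_cost_full=STEP_COST_FULL):
    """Extra board watts per voltage/frequency step, proportional to load."""
    return step_cost_full * active_sms / TOTAL_SMS

low_res_step  = watts_per_step(41)   # half the SMs busy: small, fine steps
high_res_step = watts_per_step(82)   # whole chip busy: coarse jumps
```

With coarse steps the limiter overshoots or undershoots the cap by whole chunks, which would look exactly like a clock that refuses to climb despite apparent power headroom.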


----------



## WMDVenum

mirkendargen said:


> The spring screws attaching the block to the die, or the bolt through screws? I've never had this happen either...but if it's the bolt through screws maybe some Loctite blue is in order.
> 
> Or by wearing out do you mean you are just torqueing the hell out of them and have a kinda missized screw driver that's stripping the heads?


Stripping the heads...


----------



## Falkentyne

Chamidorix said:


> Its great to see someone actually making some effort to figure this out. I've had my kp and been messing with it for some hours now and chatting with some others. Its really problematic, there are just so so so many different things that all seem to be causing issues to varying degrees
> 
> 1. Bios dump indeed shows ludicrously low chip +vram power limits
> 2. Adjusting voltage via dip switches or classified seems to remove or raise individual limits (hitting >520W when uppping NVVDD)
> 3. The fans seem to consume a good amount of power (~50W) when running full speed and off of the pcb power (vs others who run fans off of mobo or chill etc)
> 4. There still is still load balance variance with some at 75W pci-slot and some below 60W at similar board loads
> 5. Seems to be a high variance in performance with using daisy chained cables, single plugged daisy chain, and non daisy chain cables for 8 pins. I.E very tight voltage tolerances
> 
> For sure the dumped vram values from bmjet seem to be irrelevant. On pretty much any config (450W stock, 520W OCed) will exceed 96W to vram. Also, its very peculiar with chip power, as on low res (1440p TS) you will hit 300W quite easily, but then you bump it up to 8K and the power draw to chip plummets and it refuses to move up the voltage curve even with headroom. Then you flip an NVVDD switch and it locks to that higher voltage with the associated ludicrous power draw.
> 
> My current theory is a big missing piece we are all ignoring is dynamic SM clock/powering on/off. With lower resolutions I suspect there are individual SMs being heavily downclocked, allowing the primary core clock we all see in afterburner etc to easily raise and precisely hit power limits since it is less SMs drawing power per voltage step. As resolution increases suddenly you are turning on the entire chip and voltage steps result in large chunks of power increase and so the control can not be as fine tuned, in addition to lowering clock/v curve as a result of the massively increased overall current.
> 
> Anyways, I've got a KP and your wish is my command. Let me know what you want me to run and what data to collect and I'll give it to you.


If it's not too much trouble, can you run Superposition 4K → Custom → Extreme shaders, and then see if you get a nice giant fat green throttle bar in GPU-Z?
Also have HWiNFO64 open with "TDP%" and "TDP Normalized %" visible and see if they are far away from each other. (Note: I'm not talking about the "4K optimized" preset, but 4K custom with Extreme shaders; that REALLY eats your FPS!) Remember to check GPU-Z to see if it's nonstop throttling or not.

Hope you have Superposition though!

Thank you.


----------



## Patay Roland

Hey

Can the FTW3 XOC BIOS be used on the Suprim X?


----------



## defiledge

Does anybody use quick disconnects on their gpu block for future upgrades?


----------



## HyperMatrix

defiledge said:


> Does anybody use quick disconnects on their gpu block for future upgrades?


Yeah I switched my whole loop to quick disconnect a few years back. Makes it really easy to pop new cards in and out. Even switching to air cooled cards just requires a couple quick disconnect cables moved. It will lower your flow rate though. So 2+ pumps recommended.


----------



## defiledge

HyperMatrix said:


> Yeah I switched my whole loop to quick disconnect a few years back. Makes it really easy to pop new cards in and out. Even switching to air cooled cards just requires a couple quick disconnect cables moved. It will lower your flow rate though. So 2+ pumps recommended.


Awesome, I think I'll be doing exactly that.


----------



## WMDVenum

HyperMatrix said:


> Yeah I switched my whole loop to quick disconnect a few years back. Makes it really easy to pop new cards in and out. Even switching to air cooled cards just requires a couple quick disconnect cables moved. It will lower your flow rate though. So 2+ pumps recommended.


Any tips on where to put the QD and a brand you recommend?


----------



## JonnyV75

@Falkentyne - Here’s my results with a FTW3 hybrid at XOC OC bios at 119%. Core at +90, voltage at 0, memory at +250.

TS extreme










Port Royal









Superposition Custom










Superposition 4k Optimized


----------



## Edge0fsanity

Did some runs with TS extreme GPU test 2 looped and PR custom 4k looped with my Kingpin. I don't have the full version of Superposition. 

I reset HWiNFO and GPU-Z at the start of each run, then grabbed a screenshot at the end to get the average values as well. Seems my MVDDC power draw is closer to 105W on average, with GPU chip power closer to 260W average. Using EK Vardar fans on the rad connected to an Aquaero. +100 offset, +1000 mem, voltage slider maxed.

PR custom 4K looped









TS Extreme GPU test 2 looped


----------



## Shawnb99

WMDVenum said:


> Any tips on where to put the QD and a brand you recommend?



Koolance makes very good ones. The QDC4s are massive and offer the least flow resistance; make sure you end up with the flow going male to female for the best results.


----------



## EniGma1987

WMDVenum said:


> Unrelated note but does anyone know where to order replacement water block screws? Mine are starting to wear out due to removing them multiple times. I assume if I know the size I can just order some from any store right?


First icon on the top left:
www.mcmaster.com
Find out from your waterblock specs what screws it uses and order from that site. They have every option possible outside of the most obscure proprietary ones. Many PCs use standard M3 bolts for things, and sometimes M2 for smaller stuff and M4 for larger stuff.


----------



## greq333

Hey guys,

New 3090 Trinity owner here.
Which bios exactly should I flash to get the 390W or higher power limit without breaking additional functionality like fan control or DisplayPort?
There have been several mentions of the KFA2 bios, but there are several available on the TechPowerUp site; I just want to make sure before I flash.

Cheers


----------



## des2k...

greq333 said:


> Hey guys,
> 
> New 3090 Trinity owner here.
> Which bios exactly should I flash to get the 390W or higher power limit without breaking additional functionality like fan control or displayport?
> There has been several mentions of KFA2 bios, but there are several available on the techpowerup site, just want to make sure before I flash
> 
> Cheers


Using this one for mine, KFA2 RTX 3090 VBIOS


----------



## greq333

des2k... said:


> Using this one for mine,


Have you noticed no negative side effects?

That's a great help, thank you mate.


----------



## PhilipJFry

des2k... said:


> Using this one for mine, KFA2 RTX 3090 VBIOS


And what about this one with 1000W of max TDP?


----------



## des2k...

PhilipJFry said:


> And what about this one with 1000W of max TDP?


***, is that for real ? I need to try this.


----------



## Falkentyne

PhilipJFry said:


> And what about this one with 1000W of max TDP?


Does that vbios even work ?


----------



## PhilipJFry

Falkentyne said:


> Does that vbios even work ?


I haven't tried it


----------



## des2k...

Falkentyne said:


> Does that vbios even work ?


That would be a no; nvflash gives a "not a valid vbios" warning.


----------



## rocklobsta1109

I'd be curious to see what the deal is with that 1000W bios. Is it just some sort of glitch, or has someone custom-modded a bios file for that kind of power limit?


----------



## mirkendargen

rocklobsta1109 said:


> I'd be curious to see what the deal is with that 1000w bios, is that just some sort of glitch or has someone custom modded a bios file for that kind of power limit


Modifying the power limit in a BIOS file is easy. Signing it so NVFlash will flash it to a card (or modifying NVFlash to ignore signature checking) is what isn't.


----------



## Talon2016

Might give that a go on my 3090 KPE later. I have 3 BIOS switches so no harm in trying it out on one of them. 

My LN2 vBIOS can pull well north of 500w in TSE. I saw 551w peak on my last run.


----------



## greq333

Getting "No NVIDIA adapters found" from nvflash, any tips?


----------



## QueueCumb3r

Curious what people's experience with the KPE LN2 bios on the Asus Strix OC 3090 has been.

I have it on my Asus right now and can't seem to push it past +120 and +500 (14.2k in PR). Is this normal for the Strix without shunt modding it, or is it possible I just got an average card?

I'm trying to figure out if I should mod the card or flip it and try for another with better genetics... lol


----------



## des2k...

Talon2016 said:


> Might give that a go on my 3090 KPE later. I have 3 BIOS switches so no harm in trying it out on one of them.
> 
> My LN2 vBIOS can pull well north of 500w in TSE. I saw 551w peak on my last run.


There's an easier way.
Just use a 2x8-pin vbios on it: (390W - 75W) / 2 = 157W per pin.
Because the third pin's watts won't be reported, your card's total power will be 390W + 157W = 547W.
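Sketching the arithmetic in the post above (assuming the 2x8-pin vbios budgets 75W to the PCIe slot and splits the remainder evenly across the two monitored connectors; on a 3x8-pin card the third connector is invisible to that vbios):

```python
vbios_limit = 390.0    # W, total budget of the 2x8-pin vbios
slot_budget = 75.0     # W, assumed PCIe slot allocation
per_pin = (vbios_limit - slot_budget) / 2   # W per monitored 8-pin

# The unmonitored third connector can carry a similar load on top of the
# reported total, so the card can actually draw roughly:
effective_total = vbios_limit + per_pin     # W
```

This only estimates the headroom; the real split depends on how the board balances load across the connectors.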


----------



## HyperMatrix

WMDVenum said:


> Any tips on where to put the QD and a brand you recommend?


I personally went overboard and put quick disconnects on every cable and piece of equipment in my loop, so I can take a cable out anywhere and the liquid stays locked in the tube as well. Super easy to play around with. But that's excessive, so you may decide to just put them on the equipment you expect to change out often.

As for the brand, I got the Koolance ones, but they haven't been perfect. One issue I had shortly after getting them, and that's gotten worse to this day, is that the female ends have a metal plate that's supposed to spring itself back closed, and it doesn't seem like it wants to do that. It takes several seconds and doesn't close as tightly as when new. This could also be my fault, because I used nano coolant in my loop and that may have gunked them up, causing these issues.


----------



## geriatricpollywog

Did some benching this morning

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


----------



## sultanofswing

Just picked up an FTW3 Ultra Hybrid to test out and see how it does. It's constantly at the power limit even with the 500W BIOS and will not overclock at all; I'm seeing it hit the power limit at only 106% total TDP.
Reading over here, it seems to be an issue others are having too.


----------



## ttnuagmada

0451 said:


> Did some benching this morning
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)



Very nice. You running a chiller or just benching with the windows open?


----------



## ttnuagmada

des2k... said:


> Using this one for mine, KFA2 RTX 3090 VBIOS


How? It's telling me the firmware is invalid.


----------



## Falkentyne

des2k... said:


> that would be a no, not a valid vbios warning from nvflash


Is it possibly an out-of-date version of NVflash? The BIOS seems to at least be the correct size, so I assume someone dumped this with GPU-Z?


----------



## geriatricpollywog

ttnuagmada said:


> Very nice. You running a chiller or just benching with the windows open?


That particular run was with the radiator in an ice bath and a cold damp rag on the backplate. Here is a result with no tricks. Ambient temp is 19C.

I scored 15 338 in Port Royal

Intel Core i7-10700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## des2k...

ttnuagmada said:


> How? It's telling me the firmware is invalid.


The 390W one? I used nvflash.exe from the Asus TUF update


https://t.co/KnsWVFlk1l?amp=1


----------



## sultanofswing

I would assume it is an ID mismatch. I have a programmer here I could try to flash it with.


----------



## des2k...

Falkentyne said:


> Is it possibly an out to date version of NVflash? The bios seems to at least be the correct size, so I assume someone dumped this with GPU-Z?


Looks like the correct size. I'm using the nvflash from the 3090 TUF update package; it works with all other vBIOSes.
I don't have dual BIOS on my Zotac, so can someone else force flash the 1000W?


----------



## QueueCumb3r

Let me add to my earlier post that my Asus Strix OC seems to max out the same on the 480W, 500W, and 520W BIOSes (Asus, EVGA FTW3 Ultra, KPE) at 14.2k in PR. I get thermal throttling in GPU-Z (magenta Therm on PerfCap) but I'm not sure why exactly.


----------



## QueueCumb3r

Also, how many posts do I need before I can post without waiting for moderation each time?


----------



## dr/owned

mirkendargen said:


> Modifying the power limit in a BIOS file is easy. Signing it so NVFlash (or modifying NVFlash to ignore signature checking) will flash it to a card is what isn't.


On today's episode of "Stupid Questions": is it really just a NVFlash limitation that's preventing people from flashing any BIOS to these cards? What's stopping someone from using an SOIC8 clip and directly programming the bios chips? [I've done this at my job where the motherboard BIOS chips are socketed so we just pop them out and pop them in a programmer]


----------



## Falkentyne

dr/owned said:


> On today's episode of "Stupid Questions": is it really just a NVFlash limitation that's preventing people from flashing any BIOS to these cards? What's stopping someone from using an SOIC8 clip and directly programming the bios chips? [I've done this at my job where the motherboard BIOS chips are socketed so we just pop them out and pop them in a programmer]


The FE card doesn't seem to have an SOIC8 chip. There's some smaller multi-pin chip where the SOIC8 chip is 'supposed' to be. I couldn't find an SOIC8 chip anywhere on the board, unless someone else finds one and I'm blind? And without knowing exactly which chip is the BIOS chip, even if that smaller BGA-style chip seems to be it, you can't flash it in-line with a clip, and what sort of adapter would you even need for that chip?

The other cards seem to have 1-2 SOIC8 BIOS chips marked with a red or white dot.

If someone can identify the vBIOS chip exactly, then maybe one of these clips might work on it:





| Pomona Electronics
www.pomonaelectronics.com

So does anyone see a vbios chip anywhere?

By comparison, here's the Strix.


----------



## HyperMatrix

Anyone know if there's a way to disable the curve from shifting up/down as temperature changes? Most of my crashes are happening because of the curve. If I set the clock to 2115MHz at 35C, it'll go to 2100 in the 40s, then back to 2115 at 51C, before dropping to 2085 at 52C. That's literally the reason for 99% of my crashes: it shifts the clock speed to something I don't want, which is then unstable at the set voltage. It makes it impossible to actually "lock" to a specific voltage/frequency if the temperature changes. Being under water will help greatly, but outside of that I'm finding it very irritating.


----------



## Nizzen

dr/owned said:


> On today's episode of "Stupid Questions": is it really just a NVFlash limitation that's preventing people from flashing any BIOS to these cards? What's stopping someone from using an SOIC8 clip and directly programming the bios chips? [I've done this at my job where the motherboard BIOS chips are socketed so we just pop them out and pop them in a programmer]


Maybe stupid answer, but anyway:

If it was easy (with 2080ti too) this wouldn't be a question


----------



## sultanofswing

dr/owned said:


> On today's episode of "Stupid Questions": is it really just a NVFlash limitation that's preventing people from flashing any BIOS to these cards? What's stopping someone from using an SOIC8 clip and directly programming the bios chips?


I have a clip and all the necessary stuff to flash one myself.


HyperMatrix said:


> Anyone know if there's a way to disable the curve from shifting up/down as temperature changes? Most of my crashes are happening because of the curve. So if I set the clock to 2115MHz at 35C, it'll go to 2100 in the 40s. Then it'll go back to 2115 at 51C before dropping to 2085 at 52C. Literally the reason for 99% of my crashes because it shifts the clock speed to something I don't want which then is unstable with the set voltage. Makes it impossible to actually "lock" to a specific voltage/frequency if the temperature changes. Being under water will help greatly but outside of that I'm finding it to be very irritating.


No, that is how GPU boost works. Absolutely nothing you can do about it besides getting the temps as low as you can, preferably below 40c


----------



## Nizzen

ttnuagmada said:


> How? It's telling me the firmware is invalid.


Same here with 3090 strix OC:

ERROR: Invalid firmware image detected.


Code:


NVIDIA Firmware Update Utility (Version 5.670.0)
Copyright (C) 1993-2020, NVIDIA Corporation. All rights reserved.


Checking for matches between display adapter(s) and image(s)...

Adapter: GeForce RTX 3090     (10DE,2204,3842,3998) S:00,B:0A,D:00,F:00


EEPROM ID (EF,6015) : WBond W25Q16FW/JW 1.65-1.95V 16384Kx1S, page

WARNING: Firmware image PCI Subsystem ID (10DE.1454)
  does not match adapter PCI Subsystem ID (3842.3998).

Please press 'y' to confirm override of PCI Subsystem ID's:
Overriding PCI subsystem ID mismatch
Current      - Version:94.02.42.00.0F ID:10DE:2204:3842:3998
               GPU Board (Normal Board)
Replace with - Version:94.02.26.C0.34 ID:10DE:2204:10DE:1454
               GPU Board (Normal Board)

Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
EEPROM ID (EF,6015) : WBond W25Q16FW/JW 1.65-1.95V 16384Kx1S, page




BIOS Cert 3.0 Verification Error, Update aborted.


Nothing changed!



ERROR: Invalid firmware image detected.


C:\nvflash>

Looks like it was too good to be true...


----------



## ttnuagmada

sultanofswing said:


> I have a clip and all the necessary stuff to flash one myself.
> 
> No, that is how GPU boost works. Absolutely nothing you can do about it besides getting the temps as low as you can, preferably below 40c


I think he's talking specifically about a blip he's getting at 51C


----------



## reflex75

HyperMatrix said:


> Anyone know if there's a way to disable the curve from shifting up/down as temperature changes? Most of my crashes are happening because of the curve. So if I set the clock to 2115MHz at 35C, it'll go to 2100 in the 40s. Then it'll go back to 2115 at 51C before dropping to 2085 at 52C. Literally the reason for 99% of my crashes because it shifts the clock speed to something I don't want which then is unstable with the set voltage. Makes it impossible to actually "lock" to a specific voltage/frequency if the temperature changes. Being under water will help greatly but outside of that I'm finding it to be very irritating.


I guess your 2115 point is at a slightly higher voltage.
Your curve is borderline; there will always be some voltage/frequency points weaker than others that make it crash first.
But if you have a favorite point, you can trick the boost algorithm, which is always tracking the highest frequency achievable at the lowest voltage.

To trick the algorithm, you can make your favorite voltage point steeper compared to the previous points, in order to force the dynamic boost curve to always choose your point.

I don't know if I'm making myself clear, but it's like undervolting, only at a higher voltage...


----------



## Falkentyne

reflex75 said:


> I guess your 2115 point is at a little bit higher voltage.
> Your curve is border line, there always will be some voltage/frequency points weaker than others that will make it crash first.
> But if you have a favorite point, you can trick the boost algorithm which is always tracking the highest frequency achievable at lowest voltage:
> View attachment 2470230
> 
> 
> To trick the algorithm, you can increase your favorite voltage point to be steeper compare to previous points, in order to force the dynamic logarithm curve to always choose your point:
> View attachment 2470231
> 
> 
> I don't know if I make myself clear, but it's like undervolting but at higher voltage...


I could have sworn, when people tried this with 2080 Ti's all it did was -lower- performance...


----------



## dante`afk

thegr8anand said:


> What overclock are you using? Are you using liquid cooling?





BigMack70 said:


> That seems insane to me. My FE holds like 1950 with an OC in cyberpunk @ 4k maxed


yea water and shunt modded




Falkentyne said:


> I bet you 100% you can't hold 2200 in Call of Duty without crashing!


fun fact, I got a CoD code with my FE when I bought it over a month ago, never redeemed it


----------



## Falkentyne

dante`afk said:


> yea water and shunt modded
> 
> 
> 
> 
> fun fact, I got a cod code with my FE when I bought it over month ago, never redeemed it


OH MY GOD CAN I HAVE IT??????????


----------



## sultanofswing

Falkentyne said:


> I could have sworn, when people tried this with 2080 Ti's all it did was -lower- performance...


When I tried it, that is what it did for me. You could not have a point in the curve that was much higher than the rest of the curve, or the performance would drop off.


----------



## bmgjet

You're not going to flash that 1000W BIOS with an unpatched NVFlash.


----------



## sultanofswing

This FTW3 Ultra Hybrid I have here with zero overclock but the 500W XOC BIOS is constantly at the power limit. No overclock at all.


----------



## des2k...

bmgjet said:


> Your not going to flash that 1000W bios in a unpatched NVFlash.


Are you saying with a patched nvflash that 1000w will flash / work ?


----------



## mirkendargen

des2k... said:


> Are you saying with a patched nvflash that 1000w will flash / work ?


Probably or someone could make one that would if that one doesn't. But there hasn't been a patched NvFlash since Pascal or Maxwell, I forget which. The 2080Ti 1000W BIOSes were legit signed ones that extreme overclockers leaked.


----------



## WMDVenum

HyperMatrix said:


> Anyone know if there's a way to disable the curve from shifting up/down as temperature changes? Most of my crashes are happening because of the curve. So if I set the clock to 2115MHz at 35C, it'll go to 2100 in the 40s. Then it'll go back to 2115 at 51C before dropping to 2085 at 52C. Literally the reason for 99% of my crashes because it shifts the clock speed to something I don't want which then is unstable with the set voltage. Makes it impossible to actually "lock" to a specific voltage/frequency if the temperature changes. Being under water will help greatly but outside of that I'm finding it to be very irritating.


I get this happening as well. It seems to decrease 15MHz per 10C increase in temp. It just shifts the entire curve, and even loading a saved profile in MSI Afterburner is affected by it. I wish you could disable it.


----------



## WMDVenum

reflex75 said:


> I guess your 2115 point is at a little bit higher voltage.
> Your curve is border line, there always will be some voltage/frequency points weaker than others that will make it crash first.
> But if you have a favorite point, you can trick the boost algorithm which is always tracking the highest frequency achievable at lowest voltage:
> View attachment 2470230
> 
> 
> To trick the algorithm, you can increase your favorite voltage point to be steeper compare to previous points, in order to force the dynamic logarithm curve to always choose your point:
> View attachment 2470231
> 
> 
> I don't know if I make myself clear, but it's like undervolting but at higher voltage...


This isn't the problem. The problem is the entire curve shifts for roughly every 10C change in temperature. Undervolting for ampere is setting 2100 @ .95V but if you drop 10C it is now set to 2115 @ .95V and you can't stop it from doing that.


----------



## HyperMatrix

WMDVenum said:


> This isn't the problem. The problem is the entire curve shifts for roughly every 10C change in temperature. Undervolting for ampere is setting 2100 @ .95V but if you drop 10C it is now set to 2115 @ .95V and you can't stop it from doing that.


For me it’s not only the 15MHz drop. But at some points it also goes up momentarily which is what causes those crashes. Like my clock at 51C is always higher than my clock at 50C and 52C. It makes no sense.

Really wish it could be disabled. It seems like a really stupid feature to have enabled with a curve active.


----------



## Falkentyne

des2k... said:


> Are you saying with a patched nvflash that 1000w will flash / work ?


You can flash it with a Pomona 5250 clip, male to female jumper wires, a 1.8v adapter and a good hardware programmer like a Skypro 1/2/3 and just force flash it, after you backup the original bios first. Probably best to use the hardware programmer to backup the entire original vbios first and not GPU-Z. 

Not using a 1.8v adapter will damage or fry the bios chip (assuming it's 1.8v like it was on Pascal).

The FE doesn't seem to have an SOIC8 chip anywhere on the PCB, and I have no idea what chip it is. If it's that tiny 8 pin chip that's in the same overall location as the SOIC8 chips on the eVGA and Asus boards, well, good luck...


----------



## des2k...

WMDVenum said:


> This isn't the problem. The problem is the entire curve shifts for roughly every 10C change in temperature. Undervolting for ampere is setting 2100 @ .95V but if you drop 10C it is now set to 2115 @ .95V and you can't stop it from doing that.


You can. I'm using a large offset to boost frequency and limiting max clocks with nvidia-smi to prevent unstable frequency ramp-ups when temps drop.

+91 gives me 2185 max boost, but during 80% GPU load, for example, it never reaches that frequency.

+161 offset will boost to 2185 during load, but may go higher during low load / low core temps.

nvidia-smi -lgc 210,2185 fixes this

It also works with a VF point: for RTX loads, setting 912mV @ 2055 plus nvidia-smi -lgc 210,2040 holds 2040 more consistently than just the VF point at 2040.
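
For anyone scripting this, those -lgc calls could be wrapped like so (a minimal sketch; the function names are mine, not an NVIDIA API, and it assumes nvidia-smi is on your PATH and you're in an elevated prompt):

```python
# Sketch of the nvidia-smi clock-lock trick from Python.
# lgc_argv/lock_clocks are my own names, not part of any NVIDIA tool.
import subprocess

def lgc_argv(min_mhz, max_mhz):
    """Build the argv for `nvidia-smi -lgc <min>,<max>`."""
    return ["nvidia-smi", "-lgc", "%d,%d" % (min_mhz, max_mhz)]

def lock_clocks(min_mhz=210, max_mhz=2185):
    """Apply the lock; requires admin rights and nvidia-smi on PATH."""
    subprocess.run(lgc_argv(min_mhz, max_mhz), check=True)

print(lgc_argv(210, 2185))  # ['nvidia-smi', '-lgc', '210,2185']
```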


----------



## bmgjet

Falkentyne said:


> You can flash it with a Pomona 5250 clip, male to female jumper wires, a 1.8v adapter and a good hardware programmer like a Skypro 1/2/3 and just force flash it, after you backup the original bios first. Probably best to use the hardware programmer to backup the entire original vbios first and not GPU-Z.
> 
> Not using a 1.8v adapter will damage or fry the bios chip (assuming it's 1.8v like it was on Pascal).
> 
> The FE doesn't seem to have an SOIC8 chip anywhere on the PCB, and I have no idea what chip it is. If it's that tiny 8 pin chip that's in the same overall location as the SOIC8 chips on the eVGA and Asus boards, well, good luck...


You will also need to inject your own UID info into the BIOS, or you'll get a byte mismatch when trying to write the protected areas of the BIOS chip, which store the UID.
Or use a programmer that has a continue-on-error function, since most will just abort thinking something is wrong.

Best for people to just wait, since it's already being worked on and tested. People weren't meant to find that 1000W BIOS yet.


----------



## HyperMatrix

des2k... said:


> You can, I'm using a large offset to boost freq and limit max clocks with nvdia-smi to prevent unstable frequency or temp drops ramp up.
> 
> +91gives me 2185max boost, but during 80% gpu load for example it never reaches that freq.
> 
> +161 offset, will boost to 2185 during load but may go higher during low load, low core temps.
> 
> nvidia-smi -lgc 210,2185 fixes this
> 
> also works with VF point, for rtx load setting 912mv @ 2055 and nvidia-smi -lgc 210,2040 is more consistent holding 2040 vs just VF point at 2040


This was actually *incredibly *helpful in a slightly different way. I set 0.943V at 2160 (which it can't do). Then I set max clocks through smi to 2100. This way there's room for many temperature/curve-related downclocks before it goes below 2100, and it seems to be holding it rock steady atm. Setting a minimum clock wasn't respected, but this is going to make my OC life so much easier. Thanks a bunch!


----------



## Falkentyne

des2k... said:


> You can, I'm using a large offset to boost freq and limit max clocks with nvdia-smi to prevent unstable frequency or temp drops ramp up.
> 
> +91gives me 2185max boost, but during 80% gpu load for example it never reaches that freq.
> 
> +161 offset, will boost to 2185 during load but may go higher during low load, low core temps.
> 
> nvidia-smi -lgc 210,2185 fixes this
> 
> also works with VF point, for rtx load setting 912mv @ 2055 and nvidia-smi -lgc 210,2040 is more consistent holding 2040 vs just VF point at 2040


What does the 210 do?


----------



## geriatricpollywog

https://www.3dmark.com/pr/662518

https://www.3dmark.com/spy/16496046

More testing


----------



## HyperMatrix

Falkentyne said:


> What does the 210 do?


It's just the minimum clock.


----------



## zlatanselvic

Shunt still the only way to get more juice to the founders card?


----------



## changboy

Did you see this: ASUS ROG Strix GeForce RTX 3090 O24G GUNDAM Limited Edition Graphics Cards

ASUS ROG Strix GeForce RTX 3090 O24G GUNDAM Limited Edition Graphics Cards for sale | eBay

www.ebay.ca


----------



## WMDVenum

HyperMatrix said:


> This was actually *incredibly *helpful in a slightly different way. I set 0.943v at 2160 (which it can't do). Then I set max clocks through smi to 2100. This way there's room for many temperature/curve related downclocks before it goes below 2100. And it seems to be holding it rock stready atm. Setting a minimum clock wasn't respected. But this is going to make my OC life so much easier. Thanks a bunch!


I am going to try this and report back. Is 210 the default min clock for the GPU? Also why isn't this just in MSI afterburner? I just want it to respect my settings.


----------



## HyperMatrix

WMDVenum said:


> I am going to try this and report back. Is 210 the default min clock for the GPU? Also why isn't this just in MSI afterburner? I just want it to respect my settings.


Well the interesting part is that it doesn’t respect the minimum clock. Just the maximum clock. So with the method I mentioned you’re tricking it into doing what we actually want, which is to lock at a certain clock and only throttle down due to power limit. Not due to the BS temperature range curve shift. I’ve been testing it for a few hours and it works. Only thing is there are times where the voltage drops slightly below what you have set. So for maximum stability I recommend increasing the voltage in your curve by 1 or 2 spots.

Also to make this all easier, I’ve created batch files with various max clock settings in them. So I just double click and I have the max clock I want. Can also set this to run on startup for daily driving if you wanted.
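
Batch files like the ones described could be generated with something like this (my own guess at their contents, not HyperMatrix's actual files; the -lgc syntax comes from des2k...'s earlier post):

```python
# Generate one .bat per max clock; each file just locks the clock range.
# Run the resulting file from an elevated prompt (or Task Scheduler).
from pathlib import Path

MIN_MHZ = 210  # floor used in the -lgc examples earlier in the thread

def write_lock_bats(max_clocks, out_dir="."):
    names = []
    for mhz in max_clocks:
        bat = Path(out_dir) / ("lock_%d.bat" % mhz)
        bat.write_text("nvidia-smi -lgc %d,%d\n" % (MIN_MHZ, mhz))
        names.append(bat.name)
    return names

print(write_lock_bats([2040, 2100, 2160]))
```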


----------



## WMDVenum

HyperMatrix said:


> Well the interesting part is that it doesn’t respect the minimum clock. Just the maximum clock. So with the method I mentioned you’re tricking it into doing what we actually want, which is to lock at a certain clock and only throttle down due to power limit. Not due to the BS temperature range curve shift. I’ve been testing it for a few hours and it works. Only thing is there are times where the voltage drops slightly below what you have set. So for maximum stability I recommend increasing the voltage in your curve by 1 or 2 spots.
> 
> Also to make this all easier, I’ve created batch files with various max clock settings in them. So I just double click and I have the max clock I want. Can also set this to run on startup for daily driving if you wanted.


Do you need to run the command every startup? Also, do you mind posting the text from one of the batch files so I could copy it?

Nm, I figured it out. Also, running it via Task Scheduler as an elevated program on startup works for setting it every time you log on. Makes me wonder if I should keep undervolting or just set the offset again.


----------



## tgb87

Does anyone here have a Palit GamingPro OC? I bought the regular GamingPro with the OC's BIOS flashed. I am having issues with the fan-stop feature, and I can't get the BIOS updated with Palit's tool. What is the latest version of the BIOS? Mine is 94.02.26.40.AE, and the fans seem to spin up every time I open an app. This has apparently been fixed in the latest BIOS, so I want to know if my version is the latest one.


----------



## xrb936

changboy said:


> Did you see this : ASUS ROG Strix GeForce RTX 3090 O24G GUNDAM Limited Edition Graphics Cards
> 
> 
> 
> 
> 
> 
> ASUS ROG Strix GeForce RTX 3090 O24G GUNDAM Limited Edition Graphics Cards for sale | eBay
> 
> 
> Find great deals on eBay for ASUS ROG Strix GeForce RTX 3090 O24G GUNDAM Limited Edition Graphics Cards. Shop with confidence.
> 
> 
> 
> www.ebay.ca


There is 0% performance improvement, only better (IMO worse) looks.


----------



## Falkentyne

xrb936 said:


> There is 0% performances improving, but only better (IMO worse) looking.


For that asking price, that guy is too cheap to include free shipping. Wow.


----------



## ShadowYuna

How come my FTW3 Hybrid with the XOC BIOS doesn't draw up to the power limit properly? It only draws 102~105%...

Any idea when EVGA will fix the issue, or any suggested BIOS that can draw the proper power limit?


----------



## sultanofswing

ShadowYuna said:


> How come my FTW3 Hybrid with XOC bios does not draw power limit properly. Only draw power limit of 102~105%...
> 
> Any idea when EVGA will fix the issues and any suggestion bios that can draw proper power limit??


Mine does the exact same thing, Also have the FTW3 Ultra Hybrid.


----------



## HyperMatrix

This might seem like a strange question...but have any of you with a 3090 run into a situation where your card was reporting a certain clock speed but wasn't actually anywhere near that clock speed?

I’m running into a case where the card isn’t crashing...it’s showing exactly the voltage and clock speeds I’ve selected...but you can see the power usage and FPS is actually lower. And it happens under certain OC settings. Mostly curves. So for example doing superposition seemingly “locked” to 2160MHz ended up with the same score as another identical card at 2115MHz. Or 2160MHz locked in PR scoring less than 1V at 2145 with throttling down to 2115.

I’ve had cases in past generations where a card would go into limp mode after OC crashing. But never something like this. There’s no perfcap or anything else showing in gpu-z. It just shows a certain clock and voltage. But it’s not really at those settings...

Card tested in 2 different systems. DDU and all that jazz.


----------



## lolhaxz

bmgjet said:


> You will also need to inject your own UID info into the bios or your get byte miss match when trying to write the protected areas of the bios chip which store the UID.
> Or use a programer that has a contiune on error function since most will just abort thinking something is wrong.
> 
> Best for people to just wait since its already being worked on and tested. People wernt ment to find that 1000W bios yet.


What's being worked on?

Surely by now Falcon verifies the certificate before it executes the code already on the ROM... but who knows; if the reasoning, as NVIDIA claims, is malware/malicious-ROM mitigation, perhaps it's reasonable to expect they don't actually need to take it that far.

If you patch out the verification-failure branches in nvflash, it of course still fails to write anything... but it will behave as though it succeeded until the final compare step.

The Falcon-based co-processor on the GPU is responsible for permitting writes to protected areas, and by default it runs in heavy-secure mode: you send it the payload, and unless the certificates and checksums match, it won't write anything.

To take it out of HS mode you also need signed code (presumably part of the payload you send it), so there really isn't any winning.

I know years back you could flash the mobile GPU BIOS directly, but I never saw anyone try it on a desktop GPU... I just assumed that since it didn't become common practice, it probably didn't work. After reversing NVFlash for quite some time and determining it futile, I grabbed one of the SOIC clips but never got around to trying it either.


----------



## sultanofswing

HyperMatrix said:


> This might seem like a strange question...but have any of you with any 3090 ran into a situation where your card was reporting a certain clock speed but wasn’t actually anywhere near that clock speed?
> 
> I’m running into a case where the card isn’t crashing...it’s showing exactly the voltage and clock speeds I’ve selected...but you can see the power usage and FPS is actually lower. And it happens under certain OC settings. Mostly curves. So for example doing superposition seemingly “locked” to 2160MHz ended up with the same score as another identical card at 2115MHz. Or 2160MHz locked in PR scoring less than 1V at 2145 with throttling down to 2115.
> 
> I’ve had cases in past generations where a card would go into limp mode after OC crashing. But never something like this. There’s no perfcap or anything else showing in gpu-z. It just shows a certain clock and voltage. But it’s not really at those settings...
> 
> Card tested in 2 different systems. DDU and all that jazz.


This was normal behavior when I was on my old 2080 Ti and not using the curve correctly. If I had a specific point in the curve way higher than the rest, it would run that clock speed but draw less power and perform worse.
I could make the card run 2300MHz all day long but score worse than one running 2100MHz if I played around with it.


----------



## changboy

The new Cyberpunk patch resolves many issues, and HDR has been improved. I was stuck on the taxi mission and couldn't call anymore, so I stopped playing; when I restarted the game, that bug was gone and my HDR looks better, with different settings than before the patch.

About the latest NVIDIA driver: if I have my FPS counter ON for games, the counter keeps showing when I play MKV movies, which didn't happen before. I don't know if it's the driver, the latest Windows update, or just me, but it's bad.


----------



## ShadowYuna

sultanofswing said:


> Mine does the exact same thing, Also have the FTW3 Ultra Hybrid.


Have you heard anything or got any info on a fix?


----------



## defiledge

Has anyone tried to shunt a 3x8-pin card? Imagine the insane clocks you could get if the heat output can be managed.


----------



## changboy

On Discord, Slovak Killer wrote this: "I would not recommend doing the shunt mod if you don't bench."

If he writes this, I think he saw something, or something bad happened to his card. I don't know, but it's like he gave a warning. So for normal daily use it seems better not to shunt mod these cards. I don't have any experience with this, but he does, so I think I won't shunt mine, even though I have the resistors in hand.

Also, when Gamers Nexus shunted his FTW3 Ultra with the Optimus waterblock, he cut the video very short at the end and never really showed the performance increase on his card. He doesn't even talk about it, and when I watched that video I found it curious that it ends like that. If it's just to get 100MHz on the GPU under water, it's not worth losing the warranty on your card.

If someone can tell me the shunt is good to do and explain why, then let's go.


----------



## bmgjet

Slovak Killer's card died, but then he also used a bit of wire over the PCIe shunt, and didn't do the MSVDD, SRC, and chip-power ones, which is known to mess up the power-balance reading, so anything could have happened.

But I can agree: if you want to keep your warranty, then don't do the shunt mod.
If you want to push your warranty's luck, you could try the silver-paint shunt mod, since it can be cleaned up to look factory again.


----------



## 7empe

HyperMatrix said:


> This was actually *incredibly *helpful in a slightly different way. I set 0.943v at 2160 (which it can't do). Then I set max clocks through smi to 2100. This way there's room for many temperature/curve related downclocks before it goes below 2100. And it seems to be holding it rock stready atm. Setting a minimum clock wasn't respected. But this is going to make my OC life so much easier. Thanks a bunch!


How do you check stability with 2100 MHz at 0.943 V? Does it pass TSE stress test? Thanks.


----------



## changboy

bmgjet said:


> Slovak Killers card died, But then he also used a bit of wire over the pci-e shunt, And didnt do the MSVDD,SRC and Chip power ones which are known to mess up reading a power balance so anything could of happen.
> 
> But I can agree, If you want to keep your warranty then dont do the shunt mod.
> If you want to push your warrantys luck you could try doing the silver paint shunt mod since it can be cleaned up to look factory again.


If his card died, then he'd better delete that video from YouTube and elsewhere, because it can cause issues for people who want to try something, and it's bad to spread misinformation when you know it can cause trouble.
Anyway, not my business, but as you can see I come back many times and ask the same thing in different ways, because I'm really careful about what I read and hear; sadly, more than 50% of it isn't true, or is a trick to shine, hehehe.
Today more than ever we really need to be careful about all the info we find on the web. Sometimes it's good, but sometimes it's not, or not to everyone's taste.

I also read in many places that some people have problems with their card, but in the end it's the user who did something wrong, and most of the time they don't come back to say so; those comments stay on the forum where they were posted and give wrong feedback to users who read them. Today it's easy to dirty a name: it starts with one, and many follow. A lot of followers.


----------



## defiledge

bmgjet said:


> Slovak Killer's card died, but he also used a bit of wire over the PCIe shunt and didn't do the MSVDD, SRC, and chip-power ones, which are known to throw off the power-balance readings, so anything could have happened.
> 
> But I can agree: if you want to keep your warranty, then don't do the shunt mod.
> If you want to push your warranty's luck, you could try the silver-paint shunt mod, since it can be cleaned up to look factory again.


So that wire did absolutely nothing then, lol.



changboy said:


> On Discord, Slovak Killer wrote this: I would not recommend doing the shunt mod if you don't bench.
> 
> If he wrote that, I think he saw something, or something bad happened to his card. I don't know, but it reads like a warning, so for normal daily use it seems better not to shunt mod these cards. I don't have any experience with this myself, but he does, so I think I won't shunt mine even though I have the resistors in hand.
> 
> Also, when Gamers Nexus shunt modded his FTW3 Ultra with the Optimus waterblock, he cut the video very short at the end and never really showed the performance increase on his card; he doesn't even talk about it, and when I watched it I found it curious that the video ends like that. If it's just to gain 100 MHz on the GPU under water, it's not worth losing the warranty on your card.
> 
> If someone can tell me no, the shunt mod is good to do, and explain why, then let's go.


shunt mod + adequate cooling = free 10-15% performance increase.


----------



## Element77

des2k... said:


> nvidia-smi -lgc 210,2185 fixes this.
> 
> It also works with a V/F point: for an RTX load setting of 912 mV @ 2055, nvidia-smi -lgc 210,2040 is more consistent at holding 2040 than the V/F point at 2040 alone.


Do you need to install anything extra to use nvidia-smi commands like that? Are you guys just entering them in an admin command prompt? Does seem very useful.


----------



## changboy

> shunt mod + adequate cooling = free 10-15% performance increase.

If your card dies, your performance will decrease a lot.


----------



## des2k...

Element77 said:


> Do you need to install Nvidia SMI management to use the -smi commands like that? Are you guys just entering that in an admin command prompt? Does seem very useful.


It's part of the NVIDIA driver; the exe is in Windows\System32.
Run it in an admin command prompt or a batch file, or create shortcuts set to run as admin under Compatibility.

I have a few shortcuts for tweaking:
For a V/F point: nvidia-smi.exe -lgc 210,2040
Large offset for boost: nvidia-smi.exe -lgc 210,2175
Reset boost clocks to default: nvidia-smi.exe -rgc
Monitor power: nvidia-smi.exe --query-gpu=power.draw --format=csv --loop-ms=1000
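If you end up scripting these calls, they can be wrapped in a few lines of Python; this is just a sketch (the `lock_clocks` helper and its `dry_run` flag are my own invention, only the `nvidia-smi` flags come from the post above), and the real calls still need an elevated prompt:

```python
import subprocess

def lock_clocks(min_mhz=210, max_mhz=2040, reset=False, dry_run=False):
    """Build the nvidia-smi call that pins (or resets) the allowed GPU
    clock range. With dry_run=True, return the command instead of running it."""
    cmd = ["nvidia-smi", "-rgc"] if reset else [
        "nvidia-smi", "-lgc", f"{min_mhz},{max_mhz}"]
    if dry_run:
        return " ".join(cmd)
    # Needs an elevated (admin) prompt; raises on a non-zero exit code.
    subprocess.run(cmd, check=True)

# Preview the exact commands from the post without touching the GPU:
print(lock_clocks(210, 2040, dry_run=True))   # nvidia-smi -lgc 210,2040
print(lock_clocks(reset=True, dry_run=True))  # nvidia-smi -rgc
```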


----------



## st33L

Is the Zotac Trinity a good choice? Or rather the Inno3D, because of its mixed capacitors? I'd like to flash a 390 W BIOS afterwards; which one is recommended?


----------



## QueueCumb3r

Hello, curious what your experience with the KPE LN2 BIOS on the Asus Strix OC 3090 has been.

I have it on my Asus right now and can't seem to push past +120 and +500 (14.2k in PR). Is this normal for the Strix without shunt modding, or is it possible I just got an average/below-average card?

My Asus Strix OC seems to max out the same on the 480 W, 500 W, and 520 W BIOSes (Asus, EVGA FTW3 Ultra, KPE) at 14.2k in PR. I get thermal throttling in GPU-Z (magenta THRM on PerfCap) but I'm not sure why exactly.



https://www.overclock.net/attachments/asus-thermal-throttling-why-png.2470223/



Thanks!!!

Trying to figure out if the Asus needs shunts to really shine, or if capping out at 14.2k is a sign of a low-bin card.


----------



## des2k...

QueueCumb3r said:


> Hello, curious what your experience of the KPE LN2 Bios on Asus Strix OC 3090 has been.
> 
> I have it on my Asus right now and can't seem to push past +120 and +500 (14.2k in PR). Is this normal for the Strix without shunt modding, or is it possible I just got an average/below-average card?
> 
> My Asus Strix OC seems to max out the same on the 480 W, 500 W, and 520 W BIOSes (Asus, EVGA FTW3 Ultra, KPE) at 14.2k in PR. I get thermal throttling in GPU-Z (magenta THRM on PerfCap) but I'm not sure why exactly.
> 
> 
> 
> https://www.overclock.net/attachments/asus-thermal-throttling-why-png.2470223/
> 
> 
> 
> Thanks!!!
> 
> Trying to figure out if the Asus needs shunts to really shine or if capping out at 14.2k is a sign of a low bin card.


14.2k seems very low; I can pull 14,050 with my card by locking in 2040, though it mostly drops to 2010 due to power (~370 W) during the run.


----------



## geriatricpollywog

ShadowYuna said:


> How come my FTW3 Hybrid with the XOC BIOS doesn't draw the power limit properly? It only draws 102-105% of the power limit...
> 
> Any idea when EVGA will fix the issue, and any suggested BIOS that can draw the proper power limit?


My KPE only draws 450-480w in Port Royal, but when I increase NVVDD voltage it draws up to 520w. Try moving the Precision X1 voltage slider all the way to the right.


----------



## HyperMatrix

sultanofswing said:


> Normal behavior; I saw the same on my old 2080 Ti when not using the curve correctly. If I set a specific point in the curve way higher, it would run that clock speed but draw less power and perform worse.
> I could make the card run 2300 MHz all day long yet score worse than at 2100 MHz if I played around with it.


This is the sixth 3090 I've overclocked and the first to exhibit this behavior. The clock speed itself (2160) is one it does run in offset mode. With other cards, if I tried a clock it couldn't do, it would just crash. Just hours before playing with this card I used the same methodology on an FTW3 and it handled it properly as well. Do you know if it's just certain types of cards? Because I just can't wrap my head around it. Have you seen this behavior on other 3090 cards?


----------



## Falkentyne

QueueCumb3r said:


> Hello, curious what your experience of the KPE LN2 Bios on Asus Strix OC 3090 has been.
> 
> I have it on my Asus right now and can't seem to push past +120 and +500 (14.2k in PR). Is this normal for the Strix without shunt modding, or is it possible I just got an average/below-average card?
> 
> My Asus Strix OC seems to max out the same on the 480 W, 500 W, and 520 W BIOSes (Asus, EVGA FTW3 Ultra, KPE) at 14.2k in PR. I get thermal throttling in GPU-Z (magenta THRM on PerfCap) but I'm not sure why exactly.
> 
> 
> 
> https://www.overclock.net/attachments/asus-thermal-throttling-why-png.2470223/
> 
> 
> 
> Thanks!!!
> 
> Trying to figure out if the Asus needs shunts to really shine or if capping out at 14.2k is a sign of a low bin card.


That's a bug in all of the 457.xx drivers. Someone on the EVGA forums found out it's a 6,500 °C temp blip on a memory temp sensor.
It does NOT happen on the 456.98 hotfix.


----------



## geriatricpollywog

Falkentyne said:


> That's a bug in all of the 457.xx drivers. Someone on the EVGA forums found out it's a 6,500 °C temp blip on a memory temp sensor.
> It does NOT happen on the 456.98 hotfix.


I already have the latest driver; would rolling back to 456.98 fix it?


----------



## Redjester

Falkentyne said:


> Does the zotac have fully flat flush shunts or are the silver edges lower than the black middle?
> 
> Can you please set all of the watts to 'max' in gpu-z next time?
> Also in hwinfo64, can you have tdp% and tdp normalized% showing?
> 
> If normalized is <6% of TDP%, then the mod isn't done that poorly. Not ideal balance, but 25W is a bit too high of a delta in GPU-Z.
> The two 8 pins in GPU_Z should be less than 20W apart from each other if you used paint, and <10W if you soldered, so yes maybe you should re-do it.
> I need the other values too, even src and mvddc and pcie slot.
> Is the Zotac's max unshunted power limit 390 W (before multipliers)?


Sorry it's been a bit; I was out of town for work and haven't messed with this much. I upgraded my PSU from 850 W to 1000 W just to be safe, since I put this card in my new Unify Z490 / 10850K build, and they don't seem to be playing nice.

Here is my latest gpu-z screenshot:








To answer your questions, I'm running the 390 W BIOS on my Zotac. Yes, the shunts are flat. It seems stable around 2070 core / +1333 mem after hours of Warzone / FFXIV at highest settings.

TimeSpy scores seem decent for a Zotac as well. Temps max out at a stable 53 °C after a few hours; ambient room temp is 23 °C. (Not sure if this is a good delta, please advise; it's my first custom loop.)

So when I re-paste my 10850K for the fourth time this week, I'm going to re-shunt pin #2 and see if I get some balance. Is there anything else I should be looking at?


----------



## reflex75

HyperMatrix said:


> This is the 6th 3090 I’ve overclocked and the first to exhibit this behavior. The clock speed itself is something that it does run in offset mode (2160). With other cards if I tried a clock it couldn’t do, it would just crash. Just hours before playing with this card I used the same methodology with an FTW3 and it handled it properly as well. Do you know if it’s just certain types of cards? Because I just can’t wrap my head around it. Have you seen this behavior with other 3090 cards?


My FE does the same. It's the normal boost algorithm, based on a roughly logarithmic curve: at higher voltages (the right side) the curve is almost flat.
When temperature increases, the whole curve shifts down, but the amount of 'slide down' is not the same for all points (it's not a simple translation).
That means the region that was almost flat at low temperature becomes steeper from the left as temperature rises, and under certain conditions that can temporarily open up a higher voltage tier; the card jumps to it if it's reachable (because the line isn't flat anymore), hence higher frequency/voltage even while temperature climbs.
You can watch the curve change (press Ctrl+F repeatedly in Afterburner) while the temperature is rising...
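To make the curve-shift argument concrete, here is a purely illustrative toy model (my own made-up numbers, not NVIDIA's actual boost tables): frequency saturates with voltage, and the temperature penalty grows with voltage, so the flat right-hand end droops more than the left rather than the whole curve translating straight down.

```python
import math

def toy_vf_curve(voltage_mv, temp_c):
    """Illustrative only: a saturating (log-like) V/F curve whose
    high-voltage end sags more as temperature rises."""
    # Saturating curve: steep at low voltage, nearly flat near the top.
    base = 2300.0 * (1.0 - math.exp(-(voltage_mv - 600.0) / 200.0))
    # Penalty scales with voltage, so the flat region tilts rather
    # than sliding down uniformly.
    penalty = max(0.0, temp_c - 30.0) * (voltage_mv / 1100.0) * 1.5
    return base - penalty

# The hot curve sits below the cool one, and the gap is wider at the
# high-voltage end than at the low-voltage end (not a translation):
cool, hot = toy_vf_curve(1093, 35), toy_vf_curve(1093, 65)
print(cool - hot)  # droop at 1093 mV, in MHz
```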


----------



## Falkentyne

Redjester said:


> Sorry it's been a bit; I was out of town for work and haven't messed with this much. I upgraded my PSU from 850 W to 1000 W just to be safe, since I put this card in my new Unify Z490 / 10850K build, and they don't seem to be playing nice.
> 
> Here is my latest gpu-z screenshot:
> View attachment 2470330
> 
> 
> To answer your questions, I'm running the 390w bios on my Zotac. Yes the shunts are flat. It seems stable around 2070 clock / 1333 mem after hours of Warzone / FFXIV set to highest settings.
> 
> TimeSpy scores seem to be decent as well for a Zotac. Temps are stable at max at 53C after a few hours. Ambient room temp is 23'C. (Not sure if this is a good delta, please advise. My first custom loop.)
> 
> So when I re-paste my 10-850k for the 4th time this week, I'm going to reshunt Pin#2 and see if I get some balance. Is there anything else I should be looking at?


Yeah, shunt #2 is throttling you. It seems like it's reaching the cap of 180 W; very interesting that it even goes that high, since it usually caps out between 150 W and "Max SRC", which is 175 W.
Also shunt SRC and MVDDC; they should not be reporting values that high.


----------



## des2k...

Redjester said:


> Sorry it's been a bit; I was out of town for work and haven't messed with this much. I upgraded my PSU from 850 W to 1000 W just to be safe, since I put this card in my new Unify Z490 / 10850K build, and they don't seem to be playing nice.
> 
> Here is my latest gpu-z screenshot:
> View attachment 2470330
> 
> 
> To answer your questions, I'm running the 390w bios on my Zotac. Yes the shunts are flat. It seems stable around 2070 clock / 1333 mem after hours of Warzone / FFXIV set to highest settings.
> 
> TimeSpy scores seem to be decent as well for a Zotac. Temps are stable at max at 53C after a few hours. Ambient room temp is 23'C. (Not sure if this is a good delta, please advise. My first custom loop.)
> 
> So when I re-paste my 10-850k for the 4th time this week, I'm going to reshunt Pin#2 and see if I get some balance. Is there anything else I should be looking at?


Did you shunt the PCIe? Without that it won't pull any more power, regardless of the other shunts. It's very similar to the FE, where you need to shunt the PCIe, but I could be wrong.


----------



## Redjester

Falkentyne said:


> Yeah shunt #2 is throttling you. Seems like its reaching the cap of 180W. Very interesting that it even goes that high. Usually it caps out between 150W and "Max SRC" which is 175W.
> Also shunt SRC and MVDDC. They should not be reporting values that high.



Looks like I may have botched more than one shunt, if that's the case.

I followed this guide for two-pin cards: How To: Easy mode Shunt Modding. | Overclock.net, and shunted the 4 on the front and the PCIe one on the back.

So I may need to just redo the whole board.


----------



## des2k...

Redjester said:


> Looks like I may have botched more than one shunt if that is the case.
> 
> I followed this guide for two-pin cards: How To: Easy mode Shunt Modding. | Overclock.net, and shunted the 4 on the front and the PCIe one on the back.
> 
> So I may need to just redo the whole board.


This post shows the shunts; the guy said he did all of them except the GDDR6X one, and it worked:








MSI 3090 Ventus OC can´t shunt mod the card - need help
Problem fixed.
www.overclock.net





If it ends up working on the Zotac, let us know; I have the same board. It's supposed to hold 2100+ in Port Royal, as that will go 500 W+ for certain.


----------



## Carillo

des2k... said:


> From this post, shows the shunts, the guy did say all except the GDDR6X and it worked
> 
> 
> 
> 
> 
> 
> 
> 
> MSI 3090 Ventus OC can´t shunt mod the card - need help
> Problem fixed.
> www.overclock.net
> 
> 
> 
> 
> 
> If it ends up working on the Zotac, let us know; I have the same board. It's supposed to hold 2100+ in Port Royal, as that will go 500 W+ for certain.


That guy did something wrong; look at my 15,800+ score with my Ventus 3090. Same card, all shunts applied.


----------



## mardon

Are there any good shunt guides with pictures for the reference board?


----------



## Falkentyne

Redjester said:


> Looks like I may have botched more than one shunt if that is the case.
> 
> I followed this guide: How To: Easy mode Shut Modding. | Overclock.net for two pin cards and shunted the 4 on the front and PCI on the back.
> 
> So I may need to just redo the whole board.


Did you scrape the edges of the original shunts with a small flat blade before stacking the new ones, to remove the conformal coating?

MVDDC is high, but SRC is also linked to the 8-pins and Chip Power, and it may be linked to MVDDC as well. MVDDC is linked to the PCIe slot (a lower-resistance shunt on the PCIe can reduce MVDDC). So touch up shunt #2, then check whether SRC and MVDDC go down.



des2k... said:


> Did you shunt the PCIe? Without that it won't pull any more power, regardless of the other shunts. It's very similar to the FE, where you need to shunt the PCIe, but I could be wrong.


His PCIE seems to be okay. I do know that shunting everything EXCEPT the PCIE can make MVDDC skyrocket. Someone had 240W on MVDDC when they didn't shunt PCIE.


----------



## Redjester

Falkentyne said:


> Did you scrape the edges of the original shunts with a small flat blade before stacking the shunts, to remove the conformal coating?
> 
> MVDDC is high but SRC is also linked to the 8 pins and Chip Power and it may be linked to MVDDC also. MVDDC is linked to PCIE Slot (a lower resistance shunt on PCIE can reduce MVDDC). So touch up shunt #2 then check to see if SRC and MVDDC go down.
> 
> 
> 
> His PCIE seems to be okay. I do know that shunting everything EXCEPT the PCIE can make MVDDC skyrocket. Someone had 240W on MVDDC when they didn't shunt PCIE.


I did scrape the edges, but I may not have done a good enough job. This is my first attempt at this kind of thing, so I was being very gentle.

I'll try re-shunting #2 over Christmas week and report back with the results. Thanks for all the help!

(Wish I had found these boards sooner in my life, since I like building/tinkering with computers and have always built my own... but I don't think my wallet could have handled it.)


----------



## Falkentyne

changboy said:


> On Discord, Slovak Killer wrote this: I would not recommend doing the shunt mod if you don't bench.
> 
> If he wrote that, I think he saw something, or something bad happened to his card. I don't know, but it reads like a warning, so for normal daily use it seems better not to shunt mod these cards. I don't have any experience with this myself, but he does, so I think I won't shunt mine even though I have the resistors in hand.
> 
> Also, when Gamers Nexus shunt modded his FTW3 Ultra with the Optimus waterblock, he cut the video very short at the end and never really showed the performance increase on his card; he doesn't even talk about it, and when I watched it I found it curious that the video ends like that. If it's just to gain 100 MHz on the GPU under water, it's not worth losing the warranty on your card.
> 
> If someone can tell me no, the shunt mod is good to do, and explain why, then let's go.





bmgjet said:


> Slovak Killer's card died, but he also used a bit of wire over the PCIe shunt and didn't do the MSVDD, SRC, and chip-power ones, which are known to throw off the power-balance readings, so anything could have happened.
> 
> But I can agree: if you want to keep your warranty, then don't do the shunt mod.
> If you want to push your warranty's luck, you could try the silver-paint shunt mod, since it can be cleaned up to look factory again.


Wasn't his card an EVGA card with fuses?

Putting a wire across a shunt on a board that has a 10 A fuse on the PCIe slot is asking for trouble. And a 5 mΩ stacked shunt on the 8-pins can easily wind up pulling more than the 20 A fuse limit of 240 W per 8-pin if you aren't being throttled by something else.

Someone said there's a way to short the fuses to remove the 10 A (or 20 A) limit, but it was way back and I don't remember if it was here, in the shunt-mod thread, or on the EVGA forums.

However, with all the reports of UNSHUNTED EVGA cards dying after playing Cyberpunk 2077, or just black-screening in regular use and then getting the red PCIe-cable light of death, unless a fuse actually blew on the card it's hard to know whether the shunt mod killed it or it being an EVGA card killed it...






RTX 3080/3090 Black screen possible solution - EVGA Forums
Hello All, Ever since I installed my RTX3090 XC3 Ultra, I have been experiencing random black screen crashes whenever I am watching videos, doing web browsing or being idle at desktop. According to event viewer, these happened because the display driver stopped responding and successfully recov...
forums.evga.com

3080 FTW3 Ultra Single Red Light - EVGA Forums
After playing Cyberpunk 2077 for about 15 minutes, my computer crashed. When I tried to reboot, I noticed a single red light on my 3080 FTW3 Ultra which I just bought two weeks ago. I replugged everything in with no help. I've read that this may be a common problem and just wanted to make sure t...
forums.evga.com


----------



## Falkentyne

Redjester said:


> I did scrape the edges, but I may have not done a good enough job. This is my first attempt at this kind of thing, so I was being very gentle.
> 
> I'll try reshunting #2 over christmas week and report back with the results. Thanks for all the help!
> 
> (Wish I had found these boards sooner in my life since I like building/tinkering with computers and have always built my own.... but I don't think my wallet could have handled it.)


When scraping the edges, you want to scrape until bright silver shows rather than the dull layer; it will be obvious once it's done well. Being gentle is good, but you need to make sure the silver part starts becoming shinier/brighter. Although I can't be fully sure that's the issue.


----------



## QueueCumb3r

Falkentyne said:


> That's a bug in all of the 457.xx drivers. Someone on the EVGA forums found out it's a 6,500 °C temp blip on a memory temp sensor.
> It does NOT happen on the 456.98 hotfix.


OK, so I installed that driver, and now my scores only get up to 14.0k. There's no thermal PerfCap code, but I definitely can't OC as much as before. I'm assuming that means I was genuinely maxing out around the 14.3k I had previously on the KPE BIOS. For an Asus Strix OC 3090, is that a low bin or an average bin? Trying to figure out if I should flip it and try for another or not. My other cards are hitting better numbers for sure...


----------



## Falkentyne

QueueCumb3r said:


> OK, so I installed that driver, now my scores are only able to get up to 14.0k. There is no thermal PerfCap code, but definitely can't OC as much as before. I'm assuming that means that I'm sort of maxing out at the 14.3k I had previously on the KPE bios. With the Asus Strix OC 3090, is that a low bin or average bin OC for this card? Trying to figure out if I should flip and try for another or not. My other cards are hitting better numbers for sure...


Did you click "reset" in the NVCP to default?

Don't read too much into your scores after just one run. I've seen scores vary by as much as _300+ points_ in the same Windows session with nothing else running, just from running something else first.
One time I improved my Time Spy score by 300 points by running _Superposition 4K Optimized_ first, then running Time Spy again: bam, a 300-point increase...


----------



## HyperMatrix

reflex75 said:


> My FE do the same. It's the normal boost algorithm based on a logarithmic curve: at the higher voltage (right side) the curve is almost flat.
> When temperature increases, the curve is reduced, but the amount of 'slide down' is not the same for all points (it's not a translation).
> This means where the curve line is almost flat at the beginning (lower temperature), it starts to decrease from the left of the curve (becoming more steep) and in certain conditions, it can give temporarily access to higher tier voltage and jump to it if reachable (because the line is not flat anymore), thus higher frequency/voltage despite of increasing temperature at the same time.
> You can check the curve behavior (repeat ctrl-F) while the temperature is rising...



Hey. Yes, I know about the temperature-based curve shift. What I'm asking about is a card that shows it's running at a certain clock speed (for example 2190) but is actually running at a lower frequency. Both the voltage and clock speed displayed in Afterburner and GPU-Z show 2190, but the card is actually running slower than that.


----------



## HyperMatrix

Falkentyne said:


> Wasn't his card an eVGA card with fuses?
> 
> Putting a wire across a shunt on a board that has a 10 amp fuse on PCIE slot is asking for trouble. And a 5 mOhm stacked shunt on 8 pins can easily wind up pulling more than the 20A fuse limit of 240W per 8 pin if you aren't being throttled by something.


Yeah, I remember he was specifically asked about it, and they said they were prepared for the fuse to blow but didn't see any problem with it. It was a silly thing to do. The most I would do is go from a 5 mΩ to a 4 mΩ resistor. You definitely don't want your PCIe slot going over 100 W draw for extended use anyway.
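For anyone weighing resistor choices, the arithmetic behind these numbers is plain parallel-resistor math. This is only a back-of-envelope sketch (the function names are mine; it assumes the controller scales reported power linearly with shunt resistance, which matches how the mod is described in this thread, and a 12 V rail for the fuse figures):

```python
def parallel(r1_mohm, r2_mohm):
    """Effective resistance of a shunt stacked on top of the original."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def actual_power(reported_w, r_orig_mohm, r_new_mohm):
    """The controller still assumes r_orig, so the real draw is the
    reported value scaled by original/effective resistance."""
    return reported_w * (r_orig_mohm / r_new_mohm)

# Stacking a 5 mOhm shunt on a stock 5 mOhm shunt halves the resistance,
# so the card draws roughly twice what it reports:
r_eff = parallel(5.0, 5.0)              # 2.5 mOhm
print(actual_power(240.0, 5.0, r_eff))  # 480.0 W real for a reported 240 W

# Replacing 5 mOhm with 4 mOhm (as above) is a much gentler +25%:
print(actual_power(240.0, 5.0, 4.0))    # 300.0 W

# Fuse sanity check: a 20 A fuse on a 12 V 8-pin allows at most 240 W.
print(12.0 * 20.0)                      # 240.0
```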


----------



## QueueCumb3r

Falkentyne said:


> Did you click "reset" in the NVCP to default?
> 
> Don't read so much into your scores after just one run. I've seen scores vary as much as _300 points+_ in the same windows session with nothing running just by running something else first.
> One time I was able to improve my Timespy scores by 300 points by running _Superposition 4k_ optimized first, then running Timespy again: bam, 300 point increase...


Here are the Asus OC runs leading up to hitting the wall (from around 3 weeks ago):

GPU MEM 3DMARK
+0 +0 13315
+100 +250 14119
+100 +500 14137
+100 +750 14157
+100 +1000 14155
+100 +1100 14161
+100 +1200 14201
+100 +1300 14161
+100 +1400 14218
+100 +1500 0

+120 +1400 0
+120 +1300 14350
+130 +1300 0
+130 +1200 0
+130 +1100 0
+130 +1000 0
+130 +900 0
+130 +500 0
+130 +0 0
+120 +0 14219


EVGA BIOS
+145 +1300 14352

EVGA KPE BIOS 520W With Classified (not sure if this was even working)...

500 100 100 1.1000 1.4000 1.1000 14,419 372.9 29/80
500 100 100 1.11250 1.4000 1.1000 14,421 372 29/82
500 110 100 1.11250 1.4000 1.1000 CRASH VOp/vRel
500 105 100 1.15000 1.4000 1.11250 CRASH



I guess I did hit a little higher with the KPE BIOS on the Asus than I remembered, so it maxes out at 14.4k with 520 W available to it. My EVGA FTW3 Ultra maxes out at 14.6k, and my KPE hit just over 15k (the KPE card is being RMAed due to other voltage issues; it flickers and occasionally disconnects from the monitor).


----------



## QueueCumb3r

I'm trying to figure out if this is worth taking apart, shunting, and putting a waterblock on for sub-ambient overclocking, or if I should flip and maybe try to get a higher bin.


----------



## changboy

This guy put an MP5 cooler on the backplate of an RTX 3080, hahahaha:


----------



## changboy

QueueCumb3r said:


> Here are the Asus OC runs leading up to hitting the wall (from around 3 weeks ago):
> 
> GPU MEM 3DMARK
> +0 +0 13315
> +100 +250 14119
> +100 +500 14137
> +100 +750 14157
> +100 +1000 14155
> +100 +1100 14161
> +100 +1200 14201
> +100 +1300 14161
> +100 +1400 14218
> +100 +1500 0
> 
> +120 +1400 0
> +120 +1300 14350
> +130 +1300 0
> +130 +1200 0
> +130 +1100 0
> +130 +1000 0
> +130 +900 0
> +130 +500 0
> +130 +0 0
> +120 +0 14219
> 
> 
> EVGA BIOS
> +145 +1300 14352
> 
> EVGA KPE BIOS 520W With Classified (not sure if this was even working)...
> 
> 500 100 100 1.1000 1.4000 1.1000 14,419 372.9 29/80
> 500 100 100 1.11250 1.4000 1.1000 14,421 372 29/82
> 500 110 100 1.11250 1.4000 1.1000 CRASH VOp/vRel
> 500 105 100 1.15000 1.4000 1.11250 CRASH
> 
> 
> 
> I guess I did hit a little higher with the KPE bios on the Asus than I remembered. So I guess it is maxing out at 14.4k with 520W available to it. My evga ftw3 ultra maxes out at 14.6k and my kpe hit just over 15k (kpe card is being RMAed due to other voltage issues--card is flickering and disconnecting from monitor occasionally).


I've seen many guys with the Kingpin who don't really score better than the FTW3 Ultra; I was disappointed to see that. Yes, if you go LN2 it's different, but just on watercooling like this it's not really better. Like this guy:


----------



## des2k...

changboy said:


> This guy put a mp5 cooler on the backplate of a rtx-3080 hahahaha :


Oh, this video; all along I thought it was a 3090 🤣

Surprisingly, my EK backplate was hot to the touch (50 °C+). I added a temp sensor plus two 70 mm fans and it dropped to 34 °C under load. You definitely don't need a waterblock on the back; I'll keep one fan, as it seems to help a bit with core temps.


----------



## WMDVenum

des2k... said:


> oh this video, all along I thought it was a 3090🤣
> 
> surprisingly, my ek backplate was hot to touch(50c+), put a temp sensor + two 70mm fans, dropped to 34c under load. You definitely don't need a waterblock at the back, I'll keep one fan it seems to help a bit with core temps.


Can you post a pic of your setup?


----------



## changboy

des2k... said:


> oh this video, all along I thought it was a 3090🤣
> 
> surprisingly, my ek backplate was hot to touch(50c+), put a temp sensor + two 70mm fans, dropped to 34c under load. You definitely don't need a waterblock at the back, I'll keep one fan it seems to help a bit with core temps.


What card do you have? Because someone a few posts or pages ago wrote that the memory under their backplate was around 55 °C with an EK waterblock, so maybe there's no need to add a fan.


----------



## WMDVenum

Is the PCIe shunt resistor on the card or the mobo? I'm planning on just replacing mine with 3 mΩ resistors and want to know if following the front page is still the ideal approach.


----------



## markuaw1

3090 Strix running the Kingpin BIOS, EK block with an Alphacool x-flow 360 rad (60 mm), settings +170 core / +1250 mem, getting
*15 322 @ 34 °C https://www.3dmark.com/pr/648784*


----------



## Falkentyne

WMDVenum said:


> Is the PCIE resister on the card or mobo? I am planning on just replacing mine with 3ohm resisters and want to know if following the front page is still the ideal solution.


The PCIe slot shunt is on the card, usually on the backplate side of the PCB (not the GPU-core side), very close to the PCIe slot corner.
Make sure there are no 10 A fuses next to the PCIe slot shunt. A 10 A fuse limits the PCIe slot (or any rail that has one) to 120 W; 20 A = 240 W; no fuse = YEET.


----------



## mirkendargen

markuaw1 said:


> 3090 strix running kingpin bios, EK block with Alphacool x-flow 360rad 60mm, settings +170 clock +1250mem, getting
> *15 322 @ 34c https://www.3dmark.com/pr/648784*


Things like this make me think I need to find a Strix from Best Buy/Amazon that's easy to return and swap with my meh Strix for a 5% performance boost, heh.


----------



## QueueCumb3r

markuaw1 said:


> 3090 strix running kingpin bios, EK block with Alphacool x-flow 360rad 60mm, settings +170 clock +1250mem, getting
> *15 322 @ 34c https://www.3dmark.com/pr/648784*


What were you able to get on that card max with air cooling (without freezing air or anything of that sort)?


----------



## markuaw1

QueueCumb3r said:


> What were you able to get on that card max with air cooling (without freezing air or anything of that sort)?


Stock cooler and stock BIOS: *14 751* https://www.3dmark.com/pr/550121
With the EK block & stock BIOS: *15 022* https://www.3dmark.com/pr/607384
With the Kingpin BIOS: *15 322* https://www.3dmark.com/pr/648784


----------



## HyperMatrix

So I just finished what I think is the best score I'll be able to get out of the KingPin card without modification. Keep in mind I'm running a 6950X: running the original settings my friend used for the GPU on his computer (with a 5950X) got him 14900, while on mine they got 14590. So any score you see here would be higher on a better system.










For reference, I only got 14k with a Strix, 14.4k with my FTW3 Hybrid (2130 @ 1.025 V), and 14.6k with my friend's FTW3 (2130 @ 0.987 V).

This run was locked at 2190 the whole way, with +1350 on memory and just a little bit of power limit being hit.


----------



## sultanofswing

HyperMatrix said:


> So just finished with I think the best score I’d be able to do with the KingPin card without modification. Keep in mind I’m running a 6950x. Running the exact same settings for the GPU on my friends computer (with a 5950x) got him 14900 while for me it got 14590. So keep in mind any score you see would be higher under a better system.


Here is where I landed playing with an FTW3 Ultra Hybrid last night. The card, even with no overclock, is all over the 500 W "XOC" BIOS power limit (known issue).
Got tired of messing with it and threw the rad in an ice bucket.








I scored 14 647 in Port Royal
Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Took that 3090 out and put my 2080 Ti Kingpin back in. I was all for the 3090 when it came out, but after playing with one I can't recommend it unless you get a KINGPIN, which currently has the same power-limit issues as the FTW3.

If you don't overclock and just play games, you might as well save the coin and get a 3080, preferably a Strix.
EVGA just doesn't have it this year, sadly.


----------



## zkareemz

Question: a lot of BIOSes are mentioned in here (KPE, XOC, etc.); where can I download them?


----------



## HyperMatrix

sultanofswing said:


> Here is where I was playing with a FTW3 Ultra Hybrid last night. The card even with no overclock is all over the 500watt "XOC" BIOS power limit(known issue).
> Got tired of messing with it and threw the rad in an ice bucket.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 647 in Port Royal (www.3dmark.com)
> Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> 
> Took that 3090 out and put my 2080ti Kingpin back in. I was all for the 3090 when it came out but after playing with one now I cannot recommend it unless you get a KINGPIN which also currently is having the same Power Limit issues as the FTW3.
> 
> If you don't overclock and just play games might as well just save the coin and get the 3080. Preferably a Strix
> EVGA just doesn't have it this year sadly.



My score unfortunately hides how well the card is doing. My friend got 14960 with +1000 mem and 2160MHz. Those same settings got me 14590. But the card was doing locked 2190MHz at 1.1V. I did disable LLC in classified tool though.

Also for the FTW3, did you run it with the XC3 bios? That pushes it to somewhere around 500-520W. Sensors and fan controls still work. I didn't do any exotic cooling for any of my benches. Just my own fans on my own controller at 100% (not pulling power from the card). But definitely needed the XC3 bios for my scores on the FTW3s


----------



## QueueCumb3r

markuaw1 said:


> stock cooler and stock bios: *14 751* (https://www.3dmark.com/pr/550121) / with the EK block & stock bios: *15 022* (https://www.3dmark.com/pr/607384) / with the kingpin bios: *15 322* (https://www.3dmark.com/pr/648784)


Thanks. That is helpful. Most of the cards I've played with land 14.5k to 14.8k, all things being equal. I can see now that it is just not a higher-binned card. Not a big deal. It is not a terrible card (I had one FTW3 Ultra that wouldn't even hit 14k, and that went right back to the store). It will make someone else happy for everyday gaming...


----------



## sultanofswing

HyperMatrix said:


> My score unfortunately hides how well the card is doing. My friend got 14960 with +1000 mem and 2160MHz. Those same settings got me 14590. But the card was doing locked 2190MHz at 1.1V. I did disable LLC in classified tool though.
> 
> Also for the FTW3, did you run it with the XC3 bios? That pushes it to somewhere around 500-520W. Sensors and fan controls still work. I didn't do any exotic cooling for any of my benches. Just my own fans on my own controller at 100% (not pulling power from the card). But definitely needed the XC3 bios for my scores on the FTW3s


I did try the XC3 BIOS but really only saw maybe a little improvement. The only way I could overclock the card was to undervolt/overclock it with the curve.
I am going to set it up on my test bench and play with it some more at a later date.


----------



## changboy

HyperMatrix, this means your score could be around 15 300 with a newer cpu and board, it's evident.
But once in game you won't notice this if you game at 4k; maybe just the 0.1% low frames can be lower, and 42fps vs 44fps you can't see.

With a blind test no one could tell which system is playing, it's impossible. Maybe at 1080p you could see it, but who games at 1080p with a high end system?


----------



## des2k...

changboy said:


> Hyper Matrix this mean your score can be at around 15 300 with a newer cpu and board, its evident.
> But once in game you wont notice this if you game at 4k, maybe just the .1% low frame can be lower and 42fps vs 44fps you cant see it.
> 
> With a blind test no one can tell what system is playing, its impossible but maybe at 1080p you can see it but who game at 1080p with a high end system.


His card prob runs 600w+ but he's showing some impressive fps gains


----------



## Baasha

Finally got my Asus RoG Strix 3090 OC SLI setup:


----------



## pat182

Baasha said:


> Finally got my Asus RoG Strix 3090 OC SLI setup:


i have removed the GEFORCE RTX tag, looks much cleaner, try it !


----------



## MangoMunchaa

hey guys, I have the galax 3090 and was wondering if the 1000w bios would work? if anyone else has tried it what were your results?


----------



## geriatricpollywog

My best score so far. I don't think I'll be able to get past 15.6k until I get the XOC bios.


----------



## DrunknFoo

MangoMunchaa said:


> hey guys, I have the galax 3090 and was wondering if the 1000w bios would work? if anyone else has tried it what were your results?


1000w bios? 

That would be hardware modded; honestly a bios that is 1000w would solely be for ln2 or phase change


----------



## dr/owned

DrunknFoo said:


> 1000w bios?
> 
> This is hardware modded, honestly a bios that is 1000w would solely be for ln2 or phase


Power consumption goes down under LN2 as it starts becoming a superconductor. I don't think anyone can really get much over 800W out of the card under ambient...at some point throwing more voltage at it isn't going to matter -or- you'll just degrade the chip -or- you won't be able to handle that much heat load in such a small area.


----------



## long2905

i have replaced the paste on my ichill x4, from stock paste to mx4 and finally to TFX, and the difference is night and day. Granted it's winter season now so ambient is around 15-18°C. temp went down from 81-82c to 76-77c on the stock fan curve.

set a custom fan profile with uv [email protected]; it never breaches 70c. the fans are a tiny bit loud though.

i was tempted to switch for another card but kinda getting lazy


----------



## bmgjet

ABE 004

380.5 KB file on MEGA (mega.nz)

Editing disabled since not ready to deal with people spamming "My 9999W bios wont flash in NVFlash"


----------



## MangoMunchaa

DrunknFoo said:


> 1000w bios?
> 
> This is hardware modded, honestly a bios that is 1000w would solely be for ln2 or phase
> 
> 
> View attachment 2470372


There is an unverified 1000W Galax BIOS posted in the BIOS collection. I don't want to try and draw 1000W; my card is a 2x8pin, and as a result the highest BIOS is 390W. I would like to set the % slider to allow the power limit to draw around 450W without having to shunt mod. I'm just curious because it is the only 2x8pin BIOS above 390W, and it's the same card as mine.


----------



## mirkendargen

bmgjet said:


> ABE 004
> 
> 
> 
> 
> 
> 
> 
> 
> 380.5 KB file on MEGA
> 
> 
> 
> 
> 
> 
> 
> mega.nz
> 
> 
> 
> 
> 
> Editing disabled since not ready to deal with people spamming "My 9999W bios wont flash in NVFlash"


Whatever that is, it's throwing enough virus warnings that it definitely needs to be run in a VM lol. I have a feeling I know what it is, but I'm not checking without precautions.


----------



## Edge0fsanity

So close to the top 100 in pr with my kingpin. Ln2 bios. Ambient temps too.
15065








I scored 15 065 in Port Royal (www.3dmark.com)
Intel Core i9-10850K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




Will try to break 15100 soon.


----------



## Falkentyne

mirkendargen said:


> Whatever that is it's throwing enough virus warnings it definitely needs to be run in a VM lol.


It's not a freaking virus dude. Chill. I've tested the program. 
Most programs that deal with bios edits get flagged as injectors or other hacking tools. You need to learn to disable that crap.

The power limits are rather interesting. The BIOS has limits, which are based on the power TDP slider (it should be in sync with "default" and "max" TDP%), but it seems some limits aren't respected, and other limits "might" be bypassed by changing MSVDD (Uncore) voltage and MVDDC (FBVDD) / VRAM GDDR6X voltage (only possible with Kingpin and Classified exe tool).
Apparently the super hard throttle in Timespy Extreme can be avoided this way.

The FE has an 8 pin limit of 162W, but for some reason this seems to get ignored and goes up to 175W, which is also the SRC power limit. It almost looks like the 8 pin limits use the SRC as their power limit, as there is continuity between the SRC shunts and the 8 pins. There may be conditions where these limits get forcibly enforced (4k Superposition custom-Extreme shaders? Timespy Extreme?). Or maybe not, because the Kingpin has a 175W 8 pin max limit and there's still a large fat green throttle bar when running TS Extreme. Not to mention, this guy managed to pull 179W from an 8 pin that's limited to 175W...



http://imgur.com/QjrdPLF




http://imgur.com/EA88Fww


I already asked him if he can raise MVDDC to 1.4v and MSVDD to 1.12v to see if it gets rid of or reduces that nasty green TS Extreme throttle bar.

And the FE has an 86W PCIE Slot limit, yet it gets hard throttled at 79.9W. Yet it can exceed 162W on the 8 pins even though that's the max limit.
It's worth noting that the PCIE slot shunt has continuity with BOTH SRC (Power plane input source power) and MVDDC (Memory voltage) shunts.
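All of these per-rail limits come from shunt-based current sensing: the controller reads the small voltage drop across a known shunt resistance, converts it to current, and multiplies by the rail voltage. A minimal sketch of that arithmetic, using assumed values (5 mOhm shunt, nominal 12 V rail; these are typical figures, not specs from any particular card):

```python
# Rough sketch of how a board's power monitor derives rail power from a
# current-sense shunt. The 5 mOhm shunt and 12 V rail are assumed values
# for illustration, not pulled from any specific card.
R_SHUNT_OHM = 0.005      # assumed 5 mOhm sense resistor
RAIL_V = 12.0            # nominal 8-pin rail voltage

def rail_power_watts(v_shunt_mv: float) -> float:
    """Power inferred from the measured voltage drop across the shunt."""
    current_a = (v_shunt_mv / 1000.0) / R_SHUNT_OHM   # I = V / R
    return current_a * RAIL_V                          # P = I * V_rail

# A 175 W limit on one 8-pin corresponds to only ~73 mV across the shunt:
print(rail_power_watts(72.9))   # ~175 W
```

This also shows why the measurements are so sensitive: the difference between a 162 W and a 175 W reading is only about 5 mV at the shunt.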


----------



## DrunknFoo

Falkentyne said:


> Wasn't his card an eVGA card with fuses?
> 
> Putting a wire across a shunt on a board that has a 10 amp fuse on PCIE slot is asking for trouble. And a 5 mOhm stacked shunt on 8 pins can easily wind up pulling more than the 20A fuse limit of 240W per 8 pin if you aren't being throttled by something.
> 
> Someone said there's a way to short the fuses to remove the 10A (or 20A) limit but it was way way back and I don't remember if it was here, in the shunt mod thread or the evga forums.
> 
> However with all the reports of UNSHUNTED eVGA cards dying after playing Cyberpunk 2077 or just black screening in regular use and then getting the red PCIE cable light of death, unless a fuse actually blew on the card, hard to know if the shunt mod killed it or it being an eVGA card killed it....
> 
> 
> 
> 
> 
> 
> RTX 3080/3090 Black screen possible solution - EVGA Forums
> 
> 
> Hello All, Ever since I installed my RTX3090 XC3 Ultra, I have been experiencing random black screen crashes whenever I am watching videos, doing web browsing or being idle at desktop. According to event viewer, these happened because the display driver stopped responding and successfully recov...
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 3080 FTW3 Ultra Single Red Light - EVGA Forums
> 
> 
> After playing Cyberpunk 2077 for about 15 minutes, my computer crashed. When I tried to reboot, I noticed a single red light on my 3080 FTW3 Ultra which I just bought two weeks ago. I replugged everything in with no help. I've read that this may be a common problem and just wanted to make sure t...
> 
> 
> 
> forums.evga.com


Did his card die while testing under ln2? did he provide any conditions that may have caused his card to fail?

It has been a while now since he did that wire shunt; a total resistance of 1.3 to 1.4 mohm is what I estimated....
as for the fuses, ya he should be able to recover the board by bridging or replacing the fuses on the ftw3
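The 1.3 to 1.4 mohm estimate above is just the parallel-resistance formula: putting a wire across the stock shunt lowers the effective resistance, and since the controller still assumes the stock value, it under-reports power by the ratio of the two. A small sketch, where the stock shunt value and the wire's resistance are both assumptions for illustration:

```python
# Sketch of the shunt-mod math behind the ~1.3-1.4 mOhm estimate above.
# The stock shunt value and wire resistance are assumed for illustration.
R_STOCK = 0.005     # assumed 5 mOhm stock shunt
R_WIRE = 0.0018     # assumed resistance of the soldered wire

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

r_eff = parallel(R_STOCK, R_WIRE)   # ~1.32 mOhm, in the 1.3-1.4 range
scale = r_eff / R_STOCK             # controller sees this fraction of real power

print(round(r_eff * 1000, 2))       # effective shunt in mOhm
# A real 500 W draw would be reported as roughly:
print(round(500 * scale))           # ~132 W
```

That scale factor is also why such mods are risky: the fuses and connectors still see the real current even though the firmware thinks the card is well under its limit.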


----------



## DrunknFoo

bmgjet said:


> ABE 004
> 
> 
> 
> 
> 
> 
> 
> 
> 380.5 KB file on MEGA
> 
> 
> 
> 
> 
> 
> 
> mega.nz
> 
> 
> 
> 
> 
> Editing disabled since not ready to deal with people spamming "My 9999W bios wont flash in NVFlash"


eh what's this exe for?

nevermind


----------



## Falkentyne

DrunknFoo said:


> Did his card die while testing under ln2? did he provide any conditions that may have caused his card to fail?
> 
> It has been awhile now that he did that wire shunt, likely total resistance of 1.3 to 1.4 mohm is what i estimated....
> as for the fuses, ya he should be able to recover the board by bridging or replacing the fuses on the ftw3


He has a discord. Has anyone asked him? Seems like only bmgjet knows.
olrdtg also seems to have completely disappeared. He was going to do a complete shunt test with which shunts correspond to which pins (so you can know which is 8 pin #1 and #2 on a FE), but he said he got busy with family matters and would have it done soon, then vanished. He was reading the shunt mod thread 10 days ago but didn't post anything.

@Sky3900 how are you doing? 
Do you know which shunt is 8 pin #1 and 8 pin #2 on the FE? Is 8 pin #1 that one awkward shunt with the depressed edge right next to a choke, making it super hard to access without shorting the PCB or even getting a clean dab of paint (or solder) on it (and there's barely any room even for Kapton tape in that tiny crevice!!)


----------



## bmgjet

He discussed it all on the GG discord channel.
Black screen then nothing. Checked fuses, they were fine.
Bench testing, the die gets warm for a few sec then nothing during post.
Ill let him discuss it himself instead of passing on info from the group, or you can jump on there and read about it.

There's a link to the discord in the 2080 ti official owners thread.

But with EVGA cards this gen, who really knows. Recently we found another questionable parts choice: the bios chips they use are only rated to 80C, compared to other vendors that are using 105C rated chips. And with the placement being between the die and the pci-e slot, who knows how hot they are getting.

That was a test for the next person that has a dead EVGA card with dual bios, to see if it still works on the secondary chip before they send it back.

ABE = Ampere Bios Editor.
The editing bit is disabled in that build, but there have been enough requests from people who just wanted to read the info out of their current bios to justify releasing an early version with just viewing.


----------



## ttnuagmada

Baasha said:


> Finally got my Asus RoG Strix 3090 OC SLI setup:


I couldn't imagine doing this without water cooling.


----------



## defiledge

HyperMatrix said:


> My score unfortunately hides how well the card is doing. My friend got 14960 with +1000 mem and 2160MHz. Those same settings got me 14590. But the card was doing locked 2190MHz at 1.1V. I did disable LLC in classified tool though.
> 
> Also for the FTW3, did you run it with the XC3 bios? That pushes it to somewhere around 500-520W. Sensors and fan controls still work. I didn't do any exotic cooling for any of my benches. Just my own fans on my own controller at 100% (not pulling power from the card). But definitely needed the XC3 bios for my scores on the FTW3s


 your friend running stock cooler?


----------



## Jordyn

long2905 said:


> i have replaced the paste on my ichill x4 from stock paste to mx4 and finally to TFX and the difference is night and day. Granted its winter season now so ambient is around 15-18*C. temp went down from 81-82c to 76-77c on stock fan curve.
> 
> set a custom fan profile with uv [email protected] never breach 70c. the fans are a tiny bit loud though.
> 
> i was tempting to switch for another card but kinda getting lazy



Nice, what bios are you running? One of the 390w KFA2 ones? I have the same card and am also running towards 80 to 85c on the 1695mhz version and with ambient of mid to high 20's here in Australia. How was the disassembly to get to the paste?


----------



## HyperMatrix

defiledge said:


> your friend running stock cooler?


We’re both running EVGA Hybrid cards. So the FTW3 has a 240mm AIO cooler with a weak pump. But the Kingpin has a seriously wicked pump on a 360mm AIO. Even with 2190MHz at 1.1V and pulling over 500W it didn’t reach 50C at the end of port royal run. I think it was 47-48C at the end.


----------



## LVNeptune

No clue what's triggering it but Precision X1 causes microstuttering if I make ANY changes within that app. I have to leave it running for the OLED screen to properly update on the Kingpin though.

That said, Afterburner only goes up to 120%, not 121% like it should on the LN2 bios, any ideas why?


----------



## long2905

Jordyn said:


> Nice, what bios are you running? One of the 390w KFA2 ones? I have the same card and am also running towards 80 to 85c on the 1695mhz version and with ambient of mid to high 20's here in Australia. How was the disassembly to get to the paste?


i was using the KFA2 vbios but then reverted to the stock one for testing; no issue so far with the undervolt. yeah, the default fan curve and paste job is not that great.

the disassembly is pretty straightforward, just 4 main screws, 4-6 board screws and 2 screws on each end of the card. be gentle so as not to tear the warranty void sticker between the pcb and the backplate though.


----------



## bmgjet

LVNeptune said:


> No clue what's triggering it but Precision X1 causes microstuttering if I make ANY changes within that app. I have to leave it running for the OLED screen to properly update on the Kingpin though.
> 
> That said, Afterburner only goes up to 120%, not 121% like it should on the LN2 bios, any ideas why?


PX1 over-polls the sensors compared to Afterburner, which is why people get lower scores on it.
Also, you can set higher power limits with nvidia-smi. A lot of cards have power limits with a decimal point in them that software won't let you reach.
Like 104.3%: software will only let you set 104%, but in nvidia-smi you can tell it to set the maximum.
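The watts lost to a whole-percent slider are easy to work out: the percent maps linearly onto the board's default TDP, so a fractional maximum gets floored. A small sketch (the 450 W default TDP and 104.3% cap here are made-up example values, not any specific card's limits; on Windows, nvidia-smi typically lives in System32 or under Program Files\NVIDIA Corporation\NVSMI):

```python
# Sketch of how the power-limit percent slider maps to watts, and why a
# fractional BIOS maximum like 104.3% gets lost when a tool only accepts
# whole percents. DEFAULT_TDP_W and MAX_PCT are hypothetical examples.
import math

DEFAULT_TDP_W = 450.0    # hypothetical board default TDP
MAX_PCT = 104.3          # hypothetical BIOS maximum percent

def slider_watts(percent: float) -> float:
    """Watts corresponding to a given power-limit percent."""
    return DEFAULT_TDP_W * percent / 100.0

true_max = slider_watts(MAX_PCT)                   # the real BIOS ceiling
software_max = slider_watts(math.floor(MAX_PCT))   # what a 104% slider gives

print(round(true_max, 2), software_max)
# nvidia-smi sidesteps the slider by taking watts directly (needs admin):
#   nvidia-smi -pl 469.35
```

So with these example numbers, the integer slider leaves about 1.35 W on the table; setting the limit in watts via `nvidia-smi -pl` recovers it.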


----------



## LVNeptune

bmgjet said:


> PX1 over polls the sensors compared to Afterburner which is why people get lower scores on it.
> Also you can set higher power limits with nvidia-smi. A lot of cards have power limits with a decimal point in them that software wont let you reach.
> Like 104.3% software will only let you set 104%. But in nvidia-smi you can tell it to set the maximum.


Where is nvidia-smi located? And that makes sense. It looks like it's 120.9% which is why afterburner won't go higher.


----------



## HyperMatrix

LVNeptune said:


> No clue what's triggering it but Precision X1 causes microstuttering if I make ANY changes within that app. I have to leave it running for the OLED screen to properly update on the Kingpin though.
> 
> That said, Afterburner only goes up to 120%, not 121% like it should on the LN2 bios, any ideas why?





bmgjet said:


> PX1 over polls the sensors compared to Afterburner which is why people get lower scores on it.
> Also you can set higher power limits with nvidia-smi. A lot of cards have power limits with a decimal point in them that software wont let you reach.
> Like 104.3% software will only let you set 104%. But in nvidia-smi you can tell it to set the maximum.


I've been using afterburner on the kingpin with 120% PL slider. "nvidia-smi -q -d power" already shows 520W as enforced PL. At 119% PL it shows 511.70W. But at 120% it properly shows 520W. So I'd check before getting too worried about the slider bar difference in afterburner/precision.
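Since the enforced limit in `nvidia-smi -q -d POWER` is the ground truth here, it can be worth pulling that field out programmatically rather than eyeballing the report. A minimal sketch; the sample text below is a hypothetical excerpt in the report's general format, not output captured from a real card:

```python
import re

# Hypothetical excerpt of `nvidia-smi -q -d POWER` output, for illustration
# only; field spacing and values are assumptions, not a real capture.
SAMPLE = """\
    Power Readings
        Power Draw                  : 345.67 W
        Enforced Power Limit        : 520.00 W
        Min Power Limit             : 100.00 W
        Max Power Limit             : 520.00 W
"""

def enforced_limit_watts(report: str) -> float:
    """Extract the 'Enforced Power Limit' value (watts) from the report."""
    m = re.search(r"Enforced Power Limit\s*:\s*([\d.]+)\s*W", report)
    if m is None:
        raise ValueError("no enforced power limit found in report")
    return float(m.group(1))

print(enforced_limit_watts(SAMPLE))   # 520.0
```

Checking this value before and after moving the slider is a quick way to confirm whether a 120% vs 121% difference in Afterburner actually changes anything.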


----------



## ShadowYuna

Just flashed my FTW3 hybrid to the XC3 bios. It runs fine without any overclocking, but when I overclock the core by 50, Timespy freezes.

Am I missing something, or did I use the wrong bios?

I used the verified XC3 bios on TechPowerUp.


----------



## HyperMatrix

Slinky Supercomputer said:


> View attachment 2470399
> 
> if need highflow backplate cooling use the one from EK-FB KIT SR-X


Please don’t have your images linked to an EVGA associate code. You’re switching peoples saved codes in their account when they click to zoom in on the picture. 😡😡😡


----------



## Slinky Supercomputer

right-click the image, then View Image and zoom if you like.
for yours, the code was removed.. jk


----------



## defiledge

HyperMatrix said:


> Please don’t have your images linked to an EVGA associate code. You’re switching peoples saved codes in their account when they click to zoom in on the picture. 😡😡😡


Deserves a ban tbh


----------



## bmgjet

defiledge said:


> Deserves a ban tbh


Accounts 2 hours old so probably had his old one banned.


----------



## Benni231990

so in conclusion, do we have an XOC bios for 2x8pin cards, yes or no?

or is the 390watt bios still the highest power limit?


----------



## cheddle

Benni231990 said:


> in conclusion have we a XOC bios for 2x8pin cards yes or no?
> 
> or is still the 390watt bios the highest power limit?


im yet to try this one - 1000w









KFA2 RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com


----------



## bmgjet

cheddle said:


> im yet to try this one - 1000w
> 
> 
> 
> 
> 
> 
> 
> 
> 
> KFA2 RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


You can't flash it through nvflash though.


----------



## pdlr

And is there an XOC bios for 3x8-pin cards (1000W) available for download for normal users, or not?

The KPE 520W bios stops me too...


----------



## TK421

pdlr said:


> And a XOC bios for 3X8 PIN 1000W available for download for normal users there is or no?
> 
> KPE to 520W it stops me too...


kpe don't have unlocked bios yet?


----------



## Edge0fsanity

TK421 said:


> kpe don't have unlocked bios yet?


they do, have to request it with a serial number. Takes a while to get it.


----------



## TK421

Edge0fsanity said:


> they do, have to request it with a serial number. Takes awhile to get it.


do you write this email to evga or kingpin?

I assume once evga gets this request they void the card's warranty too?


----------



## Edge0fsanity

TK421 said:


> do you write this email to evga or kingpin?
> 
> I assume once evga gets this request they void the card's warranty too?


kingpin, and yes it voids the warranty. I haven't requested yet, waiting to see if someone will leak it.


----------



## TK421

Edge0fsanity said:


> kingpin, and yes it voids the warranty. I haven't requested yet, waiting to see if someone will leak it.


what if you say you didn't flash the bios and try to claim warranty?


----------



## Edge0fsanity

TK421 said:


> what if you say you didn't flash the bios and try to claim warranty?


Pretty sure you're agreeing your warranty is void if you request it which is why they need the serial. Not sure it matters if you never flashed it. If I destroyed the card with that bios flashed I wouldn't try to warranty it anyways but I need to keep my warranty intact for now. I have an FTW3 3090 with an optimus block ordered that should arrive soon. Might be selling the Kingpin depending on results, need the warranty intact until then.


----------



## pdlr

I don't physically have a KPE. I have an MSI TRIO X with the KPE 520W bios and an alphacool water block. But even with temperatures of 35 degrees, if you set the curve and lock the voltage at 1.082 or even 1.075v, voltages at which my card can run stable at frequencies above 2200MHz, the power limit kicks in, and it gets annoying...


----------



## TK421

Edge0fsanity said:


> Pretty sure you're agreeing your warranty is void if you request it which is why they need the serial. Not sure it matters if you never flashed it. If I destroyed the card with that bios flashed I wouldn't try to warranty it anyways but I need to keep my warranty intact for now. I have an FTW3 3090 with an optimus block ordered that should arrive soon. Might be selling the Kingpin depending on results, need the warranty intact until then.


sad... no reason to get KP this generation then




pdlr said:


> I don't physically have a KPE. I have an MSI TRIO X, with KPE 520W bios and alphacool water block. But even with temperatures of 35 degrees, if you bend and leave the voltage blocked, at 1,082 or even 1,075v that my card can make stable at those voltages frequencies above 2200MHZ, the power limit enters, and it gets annoyed ..


are you sure it positively scales at 2200? sometimes clock can be high but performance regressed


----------



## Nizzen

TK421 said:


> sad... no reason to get KP this generation then
> 
> 
> 
> 
> are you sure it positively scales at 2200? sometimes clock can be high but performance regressed



The last Evga card that was worth buying was 780ti classified/kingpin with evbot. No [email protected]$%, just fun 

My opinion


----------



## Benni231990

why can't we flash the 1000 watt bios to a 2x8pin card with nvflash?


----------



## dante`afk

because it needs to be signed or something like that.

didn't someone release a modded nvflash during 1080Ti times which allowed us to install the 1000w bios? where's that guy


----------



## Antsu

Anyone heard from their Aquacomputer Strix block? I ordered it almost 2 months ago and asked about the ETA of delivery about 2 weeks later. They replied and told me it should take two weeks. Problem is, it's been 5 weeks now and I don't really expect to get it during the holiday season, so we will be well into January most likely...


----------



## pdlr

TK421 said:


> are you sure it positively scales at 2200? sometimes clock can be high but performance regressed


Yes, it scales, sure..


----------



## markuaw1

TK421 said:


> sad... no reason to get KP this generation then
> 
> 
> 
> 
> are you sure it positively scales at 2200? sometimes clock can be high but performance regressed


I have a 3090 strix on an EK waterblock with that Kingpin 520w bios and I can definitely say that it scales


----------



## markuaw1

TK421 said:


> sad... no reason to get KP this generation then
> 
> 
> 
> 
> are you sure it positively scales at 2200? sometimes clock can be high but performance regressed


Have the 3090 strix with an EK waterblock and the Kingpin bios; it scales.
View attachment 2470478


----------



## pat182

markuaw1 said:


> I have a 3090 strix on a EK waterblock with that Kingpin 520ew bios and I can definitely say that it's scales
> View attachment 2470477


got the same power draw on the same setup: 370watt +150, so 520watts. can even go to 550watts


----------



## geriatricpollywog

I am seeing 520-540w on the OLED in Port Royal at 1.2 NVVDD, but it crashes. I can pass a run if the OLED reports 500-520w during a run.


----------



## shiokarai

Baasha said:


> Finally got my Asus RoG Strix 3090 OC SLI setup:


Which games do work with 3090s in SLI? I see you're streaming a lot of RDR2, is it working ok with the SLI? Cyberpunk is not working I assume? Other notable NEW titles?


----------



## HyperMatrix

0451 said:


> I am seeing 520-540w on the OLED in Port Royal,
> at 1.2 NVVD but it crashes. I can pass a run if the oled reports 500-520w during a run.


Know anyone who's shunt modded theirs yet? Wondering what they used since they're physically a different size. Or is silver paint + wait for XOC bios leak the best option right now? Not too worried on the current cooler since I can barely hit the power limit but will definitely need it with higher clocks under water. Also any word on blocks? That seems to be the biggest hurdle for the KingPin to overcome at the moment.


----------



## geriatricpollywog

HyperMatrix said:


> Know anyone who's shunt modded theirs yet? Wondering what they used since they're physically a different size. Or is silver paint + wait for XOC bios leak the best option right now? Not too worried on the current cooler since I can barely hit the power limit but will definitely need it with higher clocks under water. Also any word on blocks? That seems to be the biggest hurdle for the KingPin to overcome at the moment.


Nobody has shunt modded their KPE that I am aware of. Myself and a few others are waiting for the XOC bios to drop. You'd have to be pretty desperate to make any modifications to the card before that happens. There is a thread on the EVGA forums where Jacob announces when everything will become available, including the HydroCopper KPE block.

FWIW, I don't think water will make a big difference over the AIO. I am only seeing a 200-300 PR point increase when I submerge the AIO in an ice bath, which is way colder than any open loop.


----------



## HyperMatrix

0451 said:


> Nobody has shunt modded their KPE that I am aware of. Myself and a few others are waiting for the XOC bios to drop. You'd have to be pretty desperate to make any modifications to the card before that happens. There is a thread on the EVGA forums where Jacob announces when everything will become available, including the HydroCopper KPE block.
> 
> FWIW, I don't think water will make a big difference over the AIO. I am only seeing a 200-300 PR point increase when I submerge the AIO in an ice bath, which is way colder than any open loop.


When I have my fans cranked at max it does well enough, but I get instability when I hit 48C. Being able to keep it under 40C would help greatly, since this card uses so little power. My own kingpin was trash but my buddy traded me his and it's definitely a beast.

You done LM & repadding yet?


----------



## Edge0fsanity

HyperMatrix said:


> When I have my fans cranked out at max it does well enough but I get instability when I hit 48C. Being able to keep it under 40C would help greatly since this card uses such little power. My own kingpin was trash but my buddy traded me his and it's definitely a beast.
> 
> You done LM & repadding yet?


What was your best PR score on the trash kingpin? Curious where mine ranks, feel like it's low since it took extensive tweaking with the classified tool to break 15k.


----------



## HyperMatrix

Edge0fsanity said:


> What was your best PR score on the trash kingpin? Curious where mine ranks, feel like it's low since it took extensive tweaking with the classified tool to break 15k.


Only did 14600 and some change on the bad one. But remember my scores are going to be lower in general because of my system. So I'd expect that to be 14900-15000 on a 10900k. I consider it bad because the chip itself was essentially limited to that performance and would see minimal gains going under water. Otherwise the FTW3 I had that also scored 14600 (again...almost 15k on a 10900k) is considered great because it did that with 0.975v at 2130. So there's a ton of room to go up with shunts/water.


----------



## GAN77

pdlr said:


> I have an MSI TRIO X, with KPE 520W bios and alphacool water block.


Where did you get the block alphacool MSI TRIO X? It hasn't been released yet.
Can you take a photo?


----------



## Edge0fsanity

HyperMatrix said:


> Only did 14600 and some change on the bad one. But remember my scores are going to be lower in general because of my system. So I'd expect that to be 14900-15000 on a 10900k. I consider it bad because the chip itself was essentially limited to that performance and would see minimal gains going under water. Otherwise the FTW3 I had that also scored 14600 (again...almost 15k on a 10900k) is considered great because it did that with 0.975v at 2130. So there's a ton of room to go up with shunts/water.


My best without classy tool for the kingpin was 14832 on a 10850k @ 5.2ghz. My ftw3 on the xc3 bios is 14446, 975mV won't even break 2000mhz on it. Have an optimus block coming but kinda feel like it's wasted on this card.


----------



## HyperMatrix

Edge0fsanity said:


> My best without classy tool for the kingpin was 14832 on a 10850k @ 5.2ghz. My ftw3 on the xc3 bios is 14446, 975mV won't even break 2000mhz on it. Have an optimus block coming but kinda feel like it's wasted on this card.


Temperatures play a big role. My FTW3s are Hybrid cards, so I have a pretty big temp advantage there. But if your card can't do over 2000MHz with 0.975v at 52C and below, then I'd sell it and buy another for a better bin. Not worth investing further into it. Maybe just try for the notify list on the HydroCopper models.


----------



## sultanofswing

HyperMatrix said:


> Temperatures play a big game. My FTW3s are Hybrid cards so I have a pretty big temp advantage there. But if your card can't do over 2000MHz with 0.975v at 52C and below, then I'd sell it and buy another for better bin. Not worth investing further into it. Maybe just try for the notify list on the HydroCopper models.


My 3090 FTW3 ultra hybrid with a curve set for 2025MHz @ 0.975v is just all over the power limit. Even with the FTW3 XOC non-hybrid bios.


----------



## HyperMatrix

sultanofswing said:


> My 3090 FTW3 ultra hybrid with a curve set for 2025MHz @ 0.975v is just all over the power limit. Even with the FTW3 XOC non-hybrid bios.


Ouch. Sorry to hear that mate. That's terrible performance from an AIO. Does your card really need 0.975 for 2025? I'm assuming you've already tried with lower voltage for less heat and less power usage? Also you said you had tried the XC3 bios before (not XOC) and you were still having power limit issues with 0.975 @ 2025??

I'm on my 5th 3090 now so I understand the struggle in finding a good bin.


----------



## sultanofswing

HyperMatrix said:


> Ouch. Sorry to hear that mate. That's terrible performance from an AIO. Does your card really need 0.975 for 2025? I'm assuming you've already tried with lower voltage for less heat and less power usage? Also you said you had tried the XC3 bios before (not XOC) and you were still having power limit issues with 0.975 @ 2025??
> 
> I'm on my 5th 3090 now so I understand the struggle in finding a good bin.


I have not tried the XC3 bios yet. It may do lower voltage I just started at that point to see what happened.


----------



## lokran88

markuaw1 said:


> 3090 strix running kingpin bios, EK block with Alphacool x-flow 360rad 60mm, settings +170 clock +1250mem, getting
> *15 322 @ 34c https://www.3dmark.com/pr/648784*


I actually did not expect such low GPU temps with an EK block. 🤔


Antsu said:


> Anyone heard from their Aquacomputer Strix block? I ordered it almost 2 months ago, and asked about ETA of delivery about 2 weeks later. They replied to me and told me it should take two weeks. Problem is, it's been 5 weeks now and I don't really expect to get it during the holiday season, so we will be well in to January most likely...


First blocks have been shipped today.
They said that the blocks had been manufactured for a while now, but the final surface treatment is done by an external company which worked slower than expected, and there had been logistical delays due to Corona; that's what Aquacomputer said, at least.


----------



## Falkentyne

Now this, my dear friends, is what you call a DEAD VEGA 64.



















Made you look

Yeah this card's done. I guess the HBM died? At least the r9 290X works and can serve as a "can play Crysis" backup for the 3090 FE.


----------



## HyperMatrix

sultanofswing said:


> I have not tried the XC3 bios yet. It may do lower voltage I just started at that point to see what happened.


XC3 bios should get you somewhere between 480-520W. Highly recommend it. Generally speaking, at under 52C temps if a card can't do 2055 with 975mV then I'd consider that a hard pass. That you're hitting power limit already though isn't a great sign unfortunately.


----------



## sultanofswing

HyperMatrix said:


> Ouch. Sorry to hear that mate. That's terrible performance from an AIO. Does your card really need 0.975 for 2025? I'm assuming you've already tried with lower voltage for less heat and less power usage? Also you said you had tried the XC3 bios before (not XOC) and you were still having power limit issues with 0.975 @ 2025??
> 
> I'm on my 5th 3090 now so I understand the struggle in finding a good bin.


[email protected] timespy extreme


HyperMatrix said:


> XC3 bios should get you somewhere between 480-520W. Highly recommend it. Generally speaking, at under 52C temps if a card can't do 2055 with 975mV then I'd consider that a hard pass. That you're hitting power limit already though isn't a great sign unfortunately.


[email protected] passes Timespy Extreme but is still blipping power limit. Temp was 48c max, pretty sure the silicon just isn't all that great. Gonna try XC3 BIOS now.


----------



## geriatricpollywog

Falkentyne said:


> Now this, my dear friends, is what you call a DEAD VEGA 64.
> 
> Made you look
> 
> Yeah this card's done. I guess the HBM died? At least the r9 290X works and can serve as a "can play Crysis" backup for the 3090 FE.


My condolences. When my Vega died I donated it to Buildzoid for medical research and organ harvesting.

I have a lot of the same desktop shortcuts. You can always tell when someone’s favorite game is overclocking.


----------



## sultanofswing

HyperMatrix said:


> XC3 bios should get you somewhere between 480-520W. Highly recommend it. Generally speaking, at under 52C temps if a card can't do 2055 with 975mV then I'd consider that a hard pass. That you're hitting power limit already though isn't a great sign unfortunately.


I assume just the normal 366watt XC3 BIOS? Then use PX1 to update it if there is an update?


----------



## HyperMatrix

sultanofswing said:


> I assume just the normal 366watt XC3 BIOS? Then use PX1 to update it if there is an update?


Yeah just the standard XC3 bios on techpowerup. It won’t report power usage properly. But you should hit power limit less often. It uses less power on pcie slot and doesn’t report one pcie connector at all while reporting another at about 50%


----------



## sultanofswing

HyperMatrix said:


> Yeah just the standard XC3 bios on techpowerup. It won’t report power usage properly. But you should hit power limit less often. It uses less power on pcie slot and doesn’t report one pcie connector at all while reporting another at about 50%


Yea testing it now. Right now at [email protected] and not on power limit yet


----------



## ttnuagmada

lokran88 said:


> I actually did not expect such low GPU temps with an EK block. 🤔


I moved from 1080 Tis to a 3090 Strix OC. I had EK blocks on the Tis and went with one for the 3090 too. I personally think there is a very noticeable improvement in build quality on the 3090 block. Their newer CPU blocks also perform well, so it doesn't surprise me that the new GPU blocks do too.


----------



## des2k...

HyperMatrix said:


> Yeah just the standard XC3 bios on techpowerup. It won’t report power usage properly. But you should hit power limit less often. It uses less power on pcie slot and doesn’t report one pcie connector at all while reporting another at about 50%


The XC3 bios doesn't report power properly? Would that work on 2x8-pin cards?

It did have the highest OC on this bios before flashing the KFA2 390W one, but I never looked at 8-pin power.


----------



## bmgjet

des2k... said:


> the xc3 bios doesn't report power properly ? would that work on 2x8pin cards ?
> 
> It did have the highest oc on this bios before flashing kfa390w, but I never looked at 8pin power.


The reason people use it on 3-plug cards is that it only reports the power of two plugs, letting the third one do whatever it wants, which, because of the power balancing, is to copy the first plug.
So the card ends up drawing 512W when it reports 366W. People also use the KFA2 390W one on 3-plug cards for 535W while reporting 390W on the card.

On a 2-plug card it will just work normally, so 366W.
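To put rough numbers on that reporting gap, here's a small sketch (my own illustration; the per-plug wattage split is an assumed example, not a measurement from this thread):

```python
# Sketch of the XC3-bios reporting gap on a 3x8-pin card: the 2x8-pin
# bios sums the slot and only two of the three plugs, while power
# balancing makes the unmonitored third plug mirror the first.
# The per-plug wattage figures below are assumed examples.

def actual_draw(reported_total, mirrored_plug_watts):
    """Total the card really pulls: everything it reports, plus one
    extra copy of the plug the hidden connector mirrors."""
    return reported_total + mirrored_plug_watts

# Card reports its 366W limit while the mirrored plug carries ~146W:
print(actual_draw(366, 146))  # 512, matching the 512W figure above
# Same idea for the KFA2 390W bios with ~145W on the mirrored plug:
print(actual_draw(390, 145))  # 535
```

The same arithmetic reproduces both the 366W→512W and 390W→535W pairs quoted above.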


----------



## DrunknFoo

HyperMatrix said:


> Only did 14600 and some change on the bad one. But remember my scores are going to be lower in general because of my system. So I'd expect that to be 14900-15000 on a 10900k. I consider it bad because the chip itself was essentially limited to that performance and would see minimal gains going under water. Otherwise the FTW3 I had that also scored 14600 (again...almost 15k on a 10900k) is considered great because it did that with 0.975v at 2130. So there's a ton of room to go up with shunts/water.


Damn those numbers seem pretty sad for a KP card
What was the memory frequency avg for the scores? Poor mem or core?


----------



## chispy

bmgjet said:


> Here's where he is getting the images from.
> Old 2018 build
> 
> Slinky PC - Linus Tech Tips (linustechtips.com)


He was also banned for life at hwbot for cheating many, many times, up to and including photoshopping scores. He tried to make new accounts but kept getting banned. Read the threads here: "One day you will get caught" and "H2o vs. Ln2 (BANNED) @ HWBOT". @Slinky Supercomputer was H2OvsLN2 on hwbot; he also made a new account by the name slinkyman and was banned again for life.


----------



## HyperMatrix

DrunknFoo said:


> Damn those numbers seem pretty sad for a KP card
> What was the memory frequency avg for the scores? Poor mem or core?


This is on a 4.3GHz 6950x with 2933MHz memory. The 14888 score I got would be about 15300 on 10900k. It's actually quite good. 2190MHz locked. No dropping. And +1350 on memory.

As for the 14.6k one on my original bad card it would be about 14900-15000 under a proper system. That was with +1100 on memory and 2130-2145MHz on the core. Not a great card tbh. Because while that score may not seem terrible....you'd get almost no increase by water cooling it. The GPU itself is tapped out at 2146-2160 at most. Fortunately I already traded that first card with my buddy for this new card which is substantially better and he sold it off to a friend of his that wanted it.


----------



## chispy

Enough with the off topic; it is not worth my time and bandwidth to speak about people who do not deserve it.

Back on topic: I have found that the Bykski block for the Strix 3090 is very good, keeping the Strix at a low 38~44C while gaming. The backplate is a monster, very thick, and works very well; with only a small low-RPM fan the back temps are kept in check.

Guys, the thermal pads for the Bykski block are a weird 1.2mm; I know because I bought some extra pads from their store.


----------



## bmgjet

Thanks for pointing that stuff out @chispy, it was a lol to read that post.



Here's my slinky-approved new score.


----------



## geriatricpollywog

So Vince just sent me the XOC bios. More at 11.


----------



## des2k...

0451 said:


> So Vince just sent me the XOC bios. More at 11.


Hope it's more than the 520W one on techpowerup. Hope it's something crazy high so we can use it on 2x8-pin cards for some boost past 390W😁


----------



## dr/owned

Finally after a month got my Bykski TUF block. Few early comments before I assemble it:

Wow they give way too many screws and washers and crap. I'm guessing the spacers they give are if you aren't going to use the backplate to take up slack. Not sure why they give so many spring-screws when you only need 4.
The backplate is chonky AF...shouldn't have any problem mounting a DIMM waterblock to it.
The backplate didn't quite clear the right angle I2C header that I soldered on the back. Touch of dremel grinding on the header and it clears.
It fully clears the 8 pin power connectors so no issues soldering wires to them directly.

Quality looks great. It's probably not the best fin density design but ehhh we'll see how it does.

EDIT: actually I lied I need to measure the surface depths on the block to figure out what pads to use.


----------



## Falkentyne

des2k... said:


> Hope it's more than the 520W one on techpowerup. Hope it's something crazy high so we can use it on 2x8-pin cards for some boost past 390W😁


What the hell are you talking about, dude?


----------



## defiledge

dr/owned said:


> Finally after a month got my Bykski TUF block. Few early comments before I assemble it:
> 
> Wow they give way too many screws and washers and crap. I'm guessing the spacers they give are if you aren't going to use the backplate to take up slack. Not sure why they give so many spring-screws when you only need 4.
> The backplate is chonky AF...shouldn't have any problem mounting a DIMM waterblock to it.
> The backplate didn't quite clear the right angle I2C header that I soldered on the back. Touch of dremel grinding on the header and it clears.
> It fully clears the 8 pin power connectors so no issues soldering wires to them directly.
> 
> Quality looks great. It's probably not the best fin density design but ehhh we'll see how it does.
> 
> EDIT: actually I lied I need to measure the surface depths on the block to figure out what pads to use.


Can you let me know what size pads to use when u measure it? Also don't they supply the pads with the block?


----------



## dr/owned

defiledge said:


> Can you let me know what size pads to use when u measure it? Also don't they supply the pads with the block?


Sure, they did include pads but they're not going to include "good" ones with a block...probably 1-3W/mK instead of the 7W/mK phobya ones I'll try to apply. They're "dry" pads though so they don't want to stick...makes assembly a bit tricky. I'll get that measurement and edit in a minute.

EDIT: they're a nonstandard thickness. With the skins on they measure 1.35mm thick. Each skin is .05mm thick...which jibes roughly with the 1.25mm measurement I got on the pad itself (tricky because they're squishy).


----------



## des2k...

Falkentyne said:


> What the hell are you talking about, dude?


this: "So Vince just sent me the XOC bios. More at 11."
If it's 600W or 1000W for 3-pin cards, it will work on 2x8-pin cards too.


----------



## bmgjet

I'll lol if Vince ties the XOC bios power-table bitmask and max allowed powers to a check that the UID matches the serial number you provide when you request the bios.
Every card has a unique ID, stored in the bios zone 0x4000 to 0x4FFF, which is a write-protected area.
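On that note, reading the region back out of a saved dump is simple; a minimal sketch, assuming the 0x4000-0x4FFF range from the post above and a hypothetical dump-file name:

```python
# Minimal sketch: extract the per-card unique-ID region (0x4000-0x4FFF,
# per the post above) from a vbios image saved to disk. The offsets and
# the file name are assumptions for illustration only.

def read_uid_region(path, start=0x4000, end=0x5000):
    """Return the raw bytes of the write-protected UID zone."""
    with open(path, "rb") as f:
        f.seek(start)
        return f.read(end - start)

# Usage, assuming the bios was dumped first (e.g. `nvflash --save card.rom`):
# uid = read_uid_region("card.rom")
# print(uid[:16].hex())
```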


----------



## defiledge

best XOC bios is a shunt mod. I can draw 600W on my 2x8pin


----------



## geriatricpollywog

He sent it as a URL link.


----------



## des2k...

0451 said:


> He sent it as URL link.


Do you know what the limit is ?


----------



## geriatricpollywog

That was fun


----------



## HyperMatrix

0451 said:


> Have fun boys
> 
> 
> 
> http://overclockingpin.com/3998/NDA_%20xoc_tools_3998.zip


I love how NDA is right in the file name. Should perhaps host it on mega.nz or something. Haha. Thanks for sharing mate.


----------



## dr/owned

0451 said:


> Have fun boys
> 
> 
> 
> http://overclockingpin.com/3998/NDA_%20xoc_tools_3998.zip


I'll give a cookie to the first person to meltdown a 3 connector card.


----------



## long2905

wow, is this for real


----------



## Slinky Supercomputer

GAN77 said:


> Bitspower Classic VGA Water Block for ASUS ROG Strix GeForce RTX 30 series, Bitspower Classic VGA Water Block for ASUS TUF Gaming GeForce RTX 30 series are compatible with ASUS ROG Strix / TUF 3090?
> 
> Yes, the 3080 ASUS ROG Strix / TUF water block is compatible with 3090 ASUS ROG Strix / TUF. But the back plate is not compatible.
> At present, we will release the new back plate for 3090.
> 
> *Best regards!
> Lily Wong*


Only if you give space to 2 capacitors. The 3080 backplate is perfect, but it needs the watercooling block.








high flow wb








my self-watercooled RTX 3090 Strix @ -17C


----------



## Falkentyne

No one except Kingpin owners is going to be able to flash the Vbios.
And I'll eat a COW if a kingpin owner is able to flash the vbios without a hardware programmer or their UID being registered to Vince first...


----------



## HyperMatrix

Falkentyne said:


> No one except Kingpin owners is going to be able to flash the Vbios.
> And I'll eat a COW if a kingpin owner is able to flash the vbios without a hardware programmer or their UID being registered to Vince first...


I wasn’t going to flash since I’m not being power limited right now and I’m not under water. But now I am curious about whether it can be done or not. Will report back in a few.

Worked fine here.










Now...what kind of a COW are we talking about?


----------



## bmgjet

0451 said:


> Have fun boys
> 
> 
> 
> http://overclockingpin.com/3998/NDA_%20xoc_tools_3998.zip



Thanks. First thing of interest: the bios layout is completely different from any other Ampere card, so it won't open with the Ampere bios editor.
Going to take a few hours to reverse it and find all the tables.


----------



## des2k...

Flashed it on my 2x8-pin Zotac; it's 1000W, which makes about 700W for my 2x8-pin.
Thanks Vince, thanks 0451


----------



## des2k...

(Image: image-2020-12-21-232545, hosted on ibb.co)


----------



## long2905

des2k... said:


> Flashed on my 2x8pin zotac, it's 1000w, that makes about 700w for my 2x8pin
> Thanks Vince  Thanks 0451


nice! can you do some testing on port royal and games? can you limit the available power with afterburner? have you shunted your card?


----------



## Slinky Supercomputer

Falkentyne said:


> No one except Kingpin owners is going to be able to flash the Vbios.
> And I'll eat a COW if a kingpin owner is able to flash the vbios without a hardware programmer or their UID being registered to Vince first...


The XOC bios will make no difference if you're not using LN2. Even if I sign up for a KPE, I believe Asus delivered a better PCB this time around. It will be sad to see the first RTX 3090 on the market outperform the last one.
Unfortunately OGS is not smart enough to choose the right NV driver; congrats Vince.


----------



## HyperMatrix

des2k... said:


> Flashed on my 2x8pin zotac, it's 1000w, that makes about 700w for my 2x8pin
> Thanks Vince  Thanks 0451


Be very careful with this. Most likely various protections are disabled with it on as it's designed for LN2. Shunting is a far safer method of getting a higher power limit.


----------



## HyperMatrix

Slinky Supercomputer said:


> XOC bios will make no difference if not using ln2.


Well you just confirmed you have no idea what you're talking about. You about done here yet?


----------



## geriatricpollywog

How do you verify the power limit? Slider only goes to 100%


----------



## dr/owned

Some measurements of the Bykski 3090 TUF backplate in case anyone wants to go the route I'm going of drilling it out and putting in studs for a waterblock:

The design is relatively simple: the total "void" is 3.5mm between the standoff/surface of PCB and the inside face of the backplate. The thinnest cross-sections of the backplate (the biggest voids) are 2.5mm thick. Anywhere they want to make contact with something (memory chips, VRM, etc) is 2.0mm from the standoff...or 1.5mm from the inside face of the backplate. They left about 1.0mm gap between the memory chips and the backplate surface (memory chips = 1.0mm tall, surface is 2.0mm from the standoff...you do the math). You want to use 1.5mm pads to allow for squish pressure.

The highest component on the backside of the PCB is 1.75mm (the backside VRM capacitors) leaving 1.75mm of clearance for the head of a screw or something if you completely screw up and drill it where those components are.

The waterblock itself looks like they did an aggressive interference fit on the die by about -0.3mm of clearance...they basically shaped it to make contact with the metal frame which is lower than the silicon is.
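For anyone re-padding along at home, the clearance arithmetic in this post works out as follows (dimensions copied from the measurements above):

```python
# dr/owned's Bykski TUF backplate numbers, all in mm.
total_void = 3.5          # PCB standoff surface to inside face of backplate
contact_height = 2.0      # contact areas sit this far from the standoff
memory_chip_height = 1.0  # memory chips stand this tall off the PCB

# Contact areas measured from the inside face of the backplate instead:
from_inside_face = total_void - contact_height         # 1.5mm
# Air gap left between a memory chip and the backplate contact surface:
gap_over_memory = contact_height - memory_chip_height  # 1.0mm

# A 1.0mm gap is why 1.5mm pads are the pick: the extra 0.5mm allows
# for squish pressure.
print(from_inside_face, gap_over_memory)  # 1.5 1.0
```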


----------



## des2k...

0451 said:


> How do you verify the power limit? Slider only goes to 100%


lol yes MIN is 1000W and MAX is 1000W 
nvidia-smi -pl 600.00 limits to 600w for example


----------



## Falkentyne

HyperMatrix said:


> I wasn’t going to flash since I’m not being power limited right now and I’m not under water. But now I am curious about whether it can be done or not. Will report back in a few.
> 
> Worked fine here.
> 
> View attachment 2470552
> 
> 
> Now...what kind of a COW are we talking about?


The one that drops a legendary in the graveyard if you milk it


----------



## des2k...

des2k... said:


> lol yes MIN is 1000W and MAX is 1000W
> nvidia-smi -pl 600.00 limits to 600w for example


*edit
Default shows 1000W, that's prob why the slider is at 100%; MIN is 100W


----------



## HyperMatrix

0451 said:


> How do you verify the power limit? Slider only goes to 100%












Thermal protections are indeed disabled. Be very careful with this. OCP likely disabled as well. This is not a normal bios with a 1000W limit. If you don't know what that means, please don't use this bios. It's not a way to get around having to shunt mod. <--- just for randoms/google searchers that come across this.


----------



## Falkentyne

des2k... said:


> lol yes MIN is 1000W and MAX is 1000W
> nvidia-smi -pl 600.00 limits to 600w for example


What power supply do you have?
Please limit your power draw to 550W maximum unless you have 16 AWG cables.


----------



## DrunknFoo

loaded on my ftw
didn't put any load on the gpu, shunted card, disaster waiting to happen lol


----------



## Falkentyne

There are going to be some burnt 8 pin cables coming up if 2 cable users start trying to run Timespy Extreme or Superposition 4k Custom Extreme shaders on this thing...
Be careful you guys. And if anyone has disassembled a card and re-padded or repasted it, you had better make 100% sure you have PERFECT contact on everything if you're using a card with protections disabled...no "THERMAL" flag means burnt silicon....


----------



## DrunknFoo

Falkentyne said:


> What power supply do you have?
> Please limit your power draw to 550W maximum unless you have 16 AWG cables.


no slider, it's just 1000w without the added command line


----------



## des2k...

Falkentyne said:


> What power supply do you have?
> Please limit your power draw to 550W maximum unless you have 16 AWG cables.


It's an EVGA 850 with 14 AWG cables. Yeah, I'm going slowly, seeing what kind of power draw it puts on the 8-pins (GPU-Z) and looking at temps. Limited to 560W (400W for 2-pin), no OC, already reaching 44c on the EK block.


----------



## HyperMatrix

DrunknFoo said:


> no slider, it's just 1000w without the added command line


Power slider was still there. I already rebooted back to LN2 bios. You should be able to set PL slider to 55%.


----------



## defiledge

dr/owned said:


> The waterblock itself looks like they did an aggressive interference fit on the die by about -0.3mm of clearance...they basically shaped it to make contact with the metal frame which is lower than the silicon is.


Won't that crack the die?


----------



## DrunknFoo

Err i meant slider results in 1k regardless without command line...?


----------



## defiledge

honestly the 1000W bios is not worth it if it disables protections. Just shunt


----------



## changboy

Just read those lines and I am scared for you guys, omg you are really crazy and have a lot of guts.


----------



## BigMack70

defiledge said:


> honestly the 1000W bios is not worth it if it disables protections. Just shunt


But then how are you going to melt things in your PC?


----------



## keeph8n

Slinky Supercomputer said:


> XOC bios will make no difference if not using ln2. Even if I sing up for KPE I believe asus deliver a beater pcb this time around. Will be sad to se the first RTX 3090 on the market outperformt the last one.
> Unfortunately OGS is not that smart to choice the right nv-driver, congrats vince.


Definitely a lot smarter than a known cheater.......


----------



## HyperMatrix

DrunknFoo said:


> Err i meant slider results in 1k regardless without command line...?


Definitely works.


----------



## chispy

Guys, this is an LN2 1000W bios with no protections; all protections are off and it's very dangerous. It is meant for LN2 cooling, beware. Please play safe guys, and thanks for the bios @0451. Have fun, but beware of the risks involved.


----------



## geriatricpollywog

The bios didn’t magically make my score go up. You really do need LN2 to make the most of it.


----------



## des2k...

For 2x8pins if you guys need the info,
setting 562w, results in 372w to the card, pcie 52, pin1 190, pin2 130 PortRoyal score 12962 (no mem, no core offset)
setting 600w, results in 399w to the card, pcie 56, pin1 203, pin2 140 PortRoyal score 13110
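Those two data points suggest the card receives a roughly constant fraction of the requested limit; a quick sketch (my extrapolation, not measured) of what that implies for other settings:

```python
# des2k's 2x8-pin data under the 1000W bios: requested limit vs. what
# the card actually gets (watts).
points = [(562, 372), (600, 399)]

# Average delivered/requested ratio across the two runs (~0.663).
ratio = sum(got / req for req, got in points) / len(points)

# Extrapolating: an 800W setting should deliver roughly 531W, in line
# with the "about 550W" ballpark figure quoted in the thread.
estimate_800 = 800 * ratio
print(round(estimate_800))  # 531
```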


----------



## HyperMatrix

0451 said:


> The bios didn’t magically make my score go up. You really do need LN2 to make the most of it.


Well it would only help if you were power limited.


----------



## Falkentyne

des2k... said:


> For 2x8pins if you guys need the info,
> setting 562w, results in 372w to the card, pcie 52, pin1 190, pin2 130 PortRoyal score 12962 (no mem, no core offset)
> setting 600w, results in 399w to the card, pcie 56, pin1 203, pin2 140 PortRoyal score 13110


So setting 800W on that vbios should give you about 550W?


----------



## des2k...

Falkentyne said:


> So setting 800W on that vbios should give you about 550W?


yes


----------



## bmgjet

Found the power tables; they are offset 0xCB4 from type-1 bioses.
Here's how it looks.


----------



## Pandora's Box

Got the Hybrid kit today for my 3090 FTW3 - Wasn't going to install it until the weekend but you know, here we are 12:30AM on a weeknight...lol


Really impressed with how far EVGA has come with the hybrid kits; the last one I used was for a 1080Ti. I love how the fans are now fully controlled by the card and not plugged into the motherboard. Installation was painless. It barely clears my NH-D15 cooler; I had to remove one of the fans from it, but I'm not seeing a change in CPU temps on my 3900X so no loss there.


Quick run through the Red Dead 2 benchmark - 2080Mhz core clock at 52C. This is with the FTW3 Hybrid OC BIOS installed on the card. Was getting around 1950Mhz with the stock cooler at 76C


----------



## Slinky Supercomputer

HyperMatrix said:


> Well you just confirmed you have no idea what you're talking about. You about done here yet?


read my lips


----------



## HyperMatrix

Slinky Supercomputer said:


> read my lips


Do you know what a power limit is?


----------



## Falkentyne

Slinky Supercomputer said:


> read my lips


A SLINKY A SLINKY OH BOY WHAT A WONDERFUL TOY! FUN FOR A GIRL AND A BOY!


----------



## geriatricpollywog

I was hoping we could stop talking about power limits now.


----------



## HyperMatrix

0451 said:


> I was hoping we could stop talking about power limits now.


It’s more than just total board power mate. Look at bmgjet’s comparison. This is dangerous because of how much it unlocks.


----------



## Slinky Supercomputer

Falkentyne said:


> A SLINKY A SLINKY OH BOY WHAT A WONDERFUL TOY! FUN FOR A GIRL AND A BOY!











To improve performance you should use the right NV-Driver 27.21.14.5638


----------



## des2k...

HyperMatrix said:


> It’s more than just total board power mate. Look at bmgjet’s comparison. This is dangerous because of how much it unlocks.
> 
> View attachment 2470559


Well, those memory power limits are not playing nice with my 2x8-pin card. Maybe it's because this is a 3x8-pin bios, but it goes into safety clocks (400MHz GPU, 400MHz mem) if I start any compute load like Aida64; it registers 3GB/s read and horrible TFLOPS/IOPS.

It holds for games and Port Royal. Why would compute trigger safety?


----------



## bmgjet

Try the compute P-state bypass people use when they are mining. Memory has always dropped to below factory speeds there. Maybe that's the speed set in that bios for that P-state.


----------



## geriatricpollywog

Now I can't get the normal bioses working again. I only flashed the XOC to one switch position, and now it's stuck in XOC mode no matter which position I'm in.

Edit: Nevermind, I fixed it with DDU.


----------



## des2k...

bmgjet said:


> Try the compute P-state bypass people use when they are mining. Memory has always dropped to below factory speeds there. Maybe that's the speed set in that bios for that P-state.


Yep, fixed it. The Nvidia driver sets the P2 state for compute; this bios doesn't handle anything below 9751 for memory, it's always P0 state.
Sweet, back to tweaking with power


----------



## Zooms

Is this the right bios for flashing a 3090 Strix OC to the Kingpin 520W?









EVGA RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## markuaw1

HyperMatrix said:


> I wasn’t going to flash since I’m not being power limited right now and I’m not under water. But now I am curious about whether it can be done or not. Will report back in a few.
> 
> Worked fine here.
> 
> View attachment 2470552
> 
> 
> Now...what kind of a COW are we talking about?


OK, been running the 520W bios on my Strix with an EK block, but when I tried to flash this bios I get "Cert 3.0 Verification Error, Update aborted." What am I missing?


----------



## Falkentyne

markuaw1 said:


> OK, been running the 520W bios on my Strix with an EK block, but when I tried to flash this bios I get "Cert 3.0 Verification Error, Update aborted." What am I missing?


This?

NVIDIA NVFlash (5.792.0) Download (www.techpowerup.com)


----------



## Baasha

shiokarai said:


> Which games do work with 3090s in SLI? I see you're streaming a lot of RDR2, is it working ok with the SLI? Cyberpunk is not working I assume? Other notable NEW titles?


As of now, the following work great with MGPU:

1.) RDR2
2.) Strange Brigade
3.) SOTTR
4.) ROTTR
5.) Sniper Elite 4
6.) Zombie Army 4 - Dead War

Three of those games are made by the same company, Rebellion, and their implementation of MGPU is near perfect. I used to run 4-Way SLI with Titan Xp and got ~120fps in 8K with all settings maxed, in 2017!

If other companies would just pull up their pants and get to work, they'd follow Rebellion's lead and implement MGPU natively in ALL DX12/Vulkan games.

Cyberpunk 2077 NEEDS MGPU. I thought CDPR was different, in that they would listen to their customers.

Call of Duty, Battlefield, GTA, Cyberpunk etc. need native MGPU implementation; it would be amazing!

Seeing power draw of ~1500W consistently now. Finally stretching the legs of the Corsair AX1600i. hehe.


----------



## markuaw1

Falkentyne said:


> This?
> 
> NVIDIA NVFlash (5.792.0) Download (www.techpowerup.com)


OK well, now what? Still not working for me, getting:
Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
Results:
Index | Match | Flash | Name
<00> * Graphics Device (10DE,2204,3842,3998) S:00, B:01
Nothing changed!


----------



## Falkentyne

markuaw1 said:


> ok well, now what still not working for me, getting.
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> Results:
> Index | Match | Flash | Name
> <00> * Graphics Device (10DE,2204,3842,3998) S:00, B:01
> Nothing changed!


Did you do --protectoff and that override ID option? (I think it's -6 or something)
I have a FE so I'm not trying it.


----------



## Slinky Supercomputer

HyperMatrix said:


> Do you know what a power limit is?


more power for less?








100MHz, respectively 200MHz, less for the same result is what I call my overclocking: More3D4Less. Until you look up to me you should show the same respect or just shut up... hwbot!


----------



## markuaw1

Falkentyne said:


> Did you do --protectoff and that override ID option? (I think it's -6 or something)
> I have a FE so I'm not trying it.


----------



## HyperMatrix

Is this slinky guy another of Escapee's alts?


----------



## ViRuS2k

Hi guys, massive thread, so to save time I'm just wondering what the best bios is to use for my MSI Trio X 3090 so that I get no GPU-Z perf-cap issues. When I overclock I seem to see PWR most of the time when gaming and want to get rid of this...

The card's default power draw seems to be 370W with the slider maxed in MSI, so is there a bios with at least 450W or 500W that is safe to use to get rid of these issues?

I was thinking the MSI Suprim X bios, that way it does not break my card's RGB, and it seems it's 450W, so much higher than my 370W? Or would I need 500W?
My card is not shunt modded, so no extreme overclocking for me, but I want to hit what I currently hit with my card, which is 2100MHz, but without the PWR issue in GPU-Z.


----------



## bmgjet

HyperMatrix said:


> Is this slinky guy another of Escapee's alts?


Escapee was worse. He forgot to use Ctrl+L to lock his minimum clock speed in Afterburner, so he ended up with super-low clock speed readings in the reports, from the GPU having no load on it with all the RTX effects and textures disabled.


----------



## Nizzen

markuaw1 said:


> View attachment 2470563


Why not tell us more?


----------



## bmgjet

Nizzen said:


> Why not tell us more?











EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## Slinky Supercomputer

@HyperMatrix I believe you may be the only one who doesn't know me.
I have overclocked every single GeForce ever built, in max SLI, over the last 10 years.
I post here to help H2O builders, as LN2 became hopeless.
edit: BTW I don't sell T-shirts like hwbot, but you can become SLINKYMAN and save the world... jk, sold out!


----------



## Nizzen

bmgjet said:


> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Does nvflash work with this bios?


----------



## ViRuS2k

ViRuS2k said:


> Hi guys, massive thread, so to save time I'm just wondering what the best BIOS is to use for my MSI TRIO X 3090 so that I get no GPU-Z perf cap issues. When I overclock I seem to see PWR most of the time when gaming and want to get rid of this...
> 
> The card's default power draw seems to be 370W with the slider maxed in MSI, so is there a BIOS with at least 450W or 500W that is safe to use to get rid of these issues?
> 
> I was thinking the MSI Suprim X BIOS, that way it does not break my card's RGB... and it seems it's 450W, so much higher than my 370W? Or would I need 500W?
> My card is not shunt modded so no extreme overclocking for me, but I want to hit what I currently hit with my card, which is 2100MHz, but without the PWR issue in GPU-Z.


So does no one have any idea??? I'm sure someone will help.


----------



## bmgjet

Nizzen said:


> Does nvflash work with this bios?


Read back through the last 3 pages, you'll find answers to everything you're going to ask lol.



HyperMatrix said:


> I wasn’t going to flash since I’m not being power limited right now and I’m not under water. But now I am curious about whether it can be done or not. Will report back in a few.
> 
> Worked fine here.
> 
> View attachment 2470552


----------



## Sheyster

bmgjet said:


> People also use the KFA 390W one in 3-plug cards for 535W while reporting 390W on the card.


I was aware of people using the 366W BIOS with 3 x 8-pin cards but not the KFA 390W BIOS. Any idea what the fan behavior is like when using the KFA BIOS with a Strix 3090? With the KPE BIOS the middle fan won't run higher than 2000 RPM (~66%).


----------



## markuaw1

https://www.3dmark.com/pr/672176 Strix with EK block on Kingpin XOC 1000W


----------



## Slinky Supercomputer

markuaw1 said:


> https://www.3dmark.com/pr/672176 Strix with EK block on Kingpin XOC 1000W


Temps look OK, try driver 27.21.14.5638 for more.


----------



## Sheyster

DrunknFoo said:


> loaded on my ftw
> didn't put any load on the gpu, shunted card, disaster waiting to happen lol
> 
> 
> View attachment 2470554


I guess I'll try it on my Strix later today and set the PL slider to 60%.


----------



## dr/owned

defiledge said:


> wont that crack the die?


Ehh, mounting pressure is being applied pretty evenly and I think the purpose of the metal frame is to prevent the package from flexing and breaking. The surface of the cold plate and the package itself both will probably deform to absorb the interference. You always want some interference anyways so the thermal paste squeezes out....mounting pressure makes more of a difference to temperature of the die than high quality vs. low quality thermal paste.


----------



## Martin778

sultanofswing said:


> Here is where I was playing with a FTW3 Ultra Hybrid last night. The card even with no overclock is all over the 500watt "XOC" BIOS power limit(known issue).
> Got tired of messing with it and threw the rad in an ice bucket.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 647 in Port Royal
> 
> 
> Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Took that 3090 out and put my 2080ti Kingpin back in. I was all for the 3090 when it came out but after playing with one now I cannot recommend it unless you get a KINGPIN which also currently is having the same Power Limit issues as the FTW3.
> 
> If you don't overclock and just play games might as well just save the coin and get the 3080. Preferably a Strix
> EVGA just doesn't have it this year sadly.


My problem as well but I was stupid enough to sell my 2080Ti KP...then got a 3090 TUF which was rubbish, bouncing off PL all the time, so now I've returned it and am left with nothing. Seriously considering a 6900XT at this moment.


----------



## Antsu

markuaw1 said:


> https://www.3dmark.com/pr/672176 Strix with EK block on Kingpin XOC 1000W
> View attachment 2470568


Massive. Flashed fine on my Strix too, but I am still waiting for my block, so not much use for me atm. Did a quick run with an average core clock of 2118MHz and scored 14200. Seems a bit low to me? My PB atm is 14775 @ 2090 avg clock, same memory speed for both. Did a clean install of the latest drivers with DDU and no change.


----------



## ShadowYuna

Gigabyte 3090 Extreme received today. It is a huge card and the best card ever in my experience (used Strix, FTW3, Trinity, SG1, FE)










Stock Timespy(Power limit maxed up to 107%)









KPE Bios with +100 offset









I am not as heavy an overclocker as the experts in this forum, but I am very satisfied with the result on the Extreme. Can't wait for the Alphacool waterblock. Finally got a card that has no coil whine, with the best performance.


----------



## Sheyster

ViRuS2k said:


> so does no one have any idea ??? im sure someone will help.


EVGA 500W XOC BIOS is good with the Trio X if you're on the stock air cooler. I think you'll lose the RGB though. If you really want to keep that then just use the SUPRIM BIOS.


----------



## MangoMunchaa

Thanks to the new 1000W BIOS I was able to finally get my card to (mostly) stretch its legs!
Galax SG 3090 with EK Waterblock - https://www.3dmark.com/pr/672369


----------



## Benni231990

I thought we can't flash the 1000 watt BIOS?

How did you flash it?


----------



## dr/owned

Actively cooled backplate centered over the VRAM on the back. Drilled M3 holes based on a template I made of the Bitspower DIMM6 block... M2.5 screws, low profile heads. Took a PILE of thermal paste for this much surface area... probably about as much as I'd use installing 10 CPUs.

























No interference with backside VRM caps:


----------



## mardon

I'll be shunting my Reference 3090 in the new year. I just wanted to check a couple of things with you good folk.

Are these the resistors I need:








0805 SMD Resistors 0.125W 1% tolerance Full Range Available **UK Seller** | eBay


Resistance tolerance: 1%. Temperature coefficient: 200ppm/'c.



www.ebay.co.uk




10Ohm if Stacked
3Ohm if replacing

Are these the correct resistors to shunt?
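For reference, the arithmetic behind a stacked shunt mod can be sketched as below. The resistor values here are illustrative assumptions (Ampere boards are commonly said to use 5 mΩ current shunts), not a confirmation that the eBay parts linked above are suitable:

```python
# Sketch of stacked-shunt arithmetic; the 5 mOhm stock value is an assumption,
# not a verified spec for any particular 3090 board.
def stacked_shunt(r_stock_mohm: float, r_stacked_mohm: float) -> float:
    """Effective resistance of the stock shunt with another stacked in parallel."""
    return (r_stock_mohm * r_stacked_mohm) / (r_stock_mohm + r_stacked_mohm)

def reported_power(actual_w: float, r_stock_mohm: float, r_eff_mohm: float) -> float:
    """The controller senses voltage across the shunt, so reported power
    scales with the ratio of effective to stock resistance."""
    return actual_w * r_eff_mohm / r_stock_mohm

r_eff = stacked_shunt(5.0, 5.0)         # stacking equal shunts halves resistance: 2.5
print(reported_power(700, 5.0, r_eff))  # a real 700 W draw is reported as 350.0
```

Replacing (rather than stacking) the shunt works the same way: the reported power scales by the ratio of the new resistance to the stock one.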


----------



## mardon

MangoMunchaa said:


> Thanks to the new 1000W BIOS I was able to finally get my card to (mostly) stretch it's legs!
> Galax SG 3090 with EK Waterblock - https://www.3dmark.com/pr/672369
> View attachment 2470575


I'm on the same card and block!! Please detail how you flashed the 1000w BIOS! 
Saves me shunting.


----------



## mardon

MangoMunchaa said:


> Thanks to the new 1000W BIOS I was able to finally get my card to (mostly) stretch it's legs!
> Galax SG 3090 with EK Waterblock - https://www.3dmark.com/pr/672369
> View attachment 2470575


Is it this one? 

Galax 1000W


----------



## Slinky Supercomputer

dr/owned said:


> Actively cooled backplate centered over the VRAM on the back. Drilled M3 holes based on template I made of Bitspower DIMM6 block...M2.5 screws. Low profile heads. Took a PILE of thermal paste for this much surface area...probably about as much as I'd use installing 10 cpu's.
> View attachment 2470581
> 
> 
> View attachment 2470580
> View attachment 2470579
> 
> 
> No interference with backside VRM caps:
> View attachment 2470578


Nice! Don't use thermal paste, use a good 1mm thermal pad like Gelid GP-Ultimate (15 W/mK thermal conductivity). I assume it's too late; your thermal paste is fine.


----------



## bmgjet

mardon said:


> I'm on the same card and block!! Please detail how you flashed the 1000w BIOS!
> Saves me shunting.


Could have looked back one page, or even the last 4 pages; it's been getting talked about nonstop.








EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## mardon

bmgjet said:


> Could have looked back one page, or even the last 4 pages; it's been getting talked about nonstop.
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Yes, but the one I've linked is for a two-pin card. I will read back now. Mine is KFA2 so it may not straight-flash.

I'll try the -6 command. Having a working power slider would be fantastic. A 350W limit to 1000W!


----------



## VickyBeaver

mardon said:


> Yes, but the one I've linked is for a two-pin card. I will read back now. Mine is KFA2 so it may not straight-flash.
> 
> I'll try the -6 command. Having a working power slider would be fantastic. A 350W limit to 1000W!


I've been trying to get these ROMs to flash myself, but it just keeps spitting "ERROR: Invalid firmware image detected." I've been able to flash all other BIOSes fine. Flashed over to the Kingpin BIOS to see if that would help, but nada; rolled back to the 390W BIOS for now.


----------



## MangoMunchaa

mardon said:


> Is it this one?
> 
> Galax 1000W


Nah, that one doesn't work without using hardware to change the BIOS yourself. I used the Kingpin XOC one, which is a 3x8 BIOS but still gives you a total of 700W to play with. The sensors don't read correctly either. I kept it at 580W or so for benchmarking.

Was linked here - http://overclockingpin.com/3998/NDA_ xoc_tools_3998.zip


----------



## mardon

Yeah, just tried it, didn't work. Gutted.

I'll have to read back as I don't fancy having 1000W unlocked. I noticed people saying they could modify it to limit the power, as you say.


----------



## MangoMunchaa

mardon said:


> Yeah just tried it, didn't work. Gutted.
> 
> I'll have to read back as I don't fancy having 1000w unlocked. I noticed people saying they could modify it to limit the power as you say.


Make sure you have the latest version of NVFlash, then use nvflash --protect off followed by nvflash -6 "rom name".rom


----------



## mardon

MangoMunchaa said:


> Make sure you have the latest version of NVFlash, then use nvflash --protect off followed by nvflash -6 "rom name".rom


Yes, I think it will work now. Did you use that NVSMI program to limit your power, or the slider?
Where did you get the NVSMI program from, if you used that?


----------



## bmgjet

On the 2 plug cards you can estimate what the power limit will be by subtracting 30.4%

So some examples would be.
Default power limit 1000W - 30.4%= 696W
Set the power limit slider to 50%.
50% of 1000 = 500W - 30.4% =469.6W

Here's an older version of nvidia-smi.exe








560.4 KB file on MEGA







mega.nz





It comes with the cuda and tensor flow stuff.


----------



## AcidWeb

I just tweaked the backplate thermal pad layout on my Strix 3090 and I can confirm that the new layout provided by EKWB vastly decreases coil whine.


----------



## MangoMunchaa

mardon said:


> Yes, I think it will work now. Did you use that NVSMI program to limit your power, or the slider?
> Where did you get the NVSMI program from, if you used that?


I am just running these settings with MSI Afterburner on startup; could probably use that program, though I didn't bother.


----------



## dr/owned

Slinky Supercomputer said:


> Nice! Don't use thermal paste, use a good 1mm thermal pad like Gelid GP-Ultimate (15 W/mK thermal conductivity). I assume it's too late; your thermal paste is fine.


For fun I took it off to see the paste spread on the backplate...I already spatula'd it on the waterblock itself:










So I put some 0.5mm *Fujipoly SARCON X-HE Extreme* pads that I've had for a few years.


----------



## mardon

Thanks everyone, will give it a go now, many thanks.

Will probably try the MSI, see if it's worth it, then shunt to get protections back.


----------



## mardon

MangoMunchaa said:


> Make sure you have the latest version of NVFlash, then use nvflash --protect off followed by nvflash -6 "rom name".rom


I used this one and it failed:









NVIDIA NVFlash (5.792.0) Download


NVIDIA NVFlash is used to flash the graphics card BIOS on Ampere, Turing, Pascal and all older NVIDIA cards. NVFlash supports BIOS flashing on NVID




www.techpowerup.com





Do I need the board mismatch one?









NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download


This is a patched version of NVIDIA's NVFlash. On Turing cards, NVFlash no longer allows overriding of the "board ID mismatch" message through comm




www.techpowerup.com


----------



## MangoMunchaa

mardon said:


> I used this one and it failed:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA NVFlash (5.792.0) Download
> 
> 
> NVIDIA NVFlash is used to flash the graphics card BIOS on Ampere, Turing, Pascal and all older NVIDIA cards. NVFlash supports BIOS flashing on NVID
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> Do I need the board mismatch one?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download
> 
> 
> This is a patched version of NVIDIA's NVFlash. On Turing cards, NVFlash no longer allows overriding of the "board ID mismatch" message through comm
> 
> 
> 
> 
> www.techpowerup.com


You need 5.670.0; if that doesn't work then I don't know how to help, sorry.


----------



## Nizzen

Slinky Supercomputer said:


> Nice! Don't forget to use a good thermal pad like Gelid GP-Ultimate (15 W/mK thermal conductivity)


Thermal paste isn't good enough?


----------



## dr/owned

Nizzen said:


> Thermal paste isn't good enough?


I took a picture of the spread of the thermal paste a few posts back. When you have these giant surfaces it becomes problematic creating enough mounting pressure from just the corners. Especially when I can only use tiny M2.5 thread screws where if I crank them down any harder the hex-head strips out (one of them did...had to vise grip it off). A thermal pad guarantees contact across the entire surface...at a small sacrifice of more thickness. If I was being ridiculous I'd probably try the 0.2mm IC Diamond pads...but dang they're so expensive and I'd need 2 of them.


----------



## mardon

Can anyone see what I may have done wrong here:


----------



## greq333

Are you sure you got the EVGA 1000W BIOS, not the Galax one?


----------



## MangoMunchaa

mardon said:


> Can anyone see what I may have done wrong here:
> 
> View attachment 2470599


The new version is nvflash instead of nvflash64, as it's already 64-bit, so instead of typing nvflash64 for your command just use nvflash.


----------



## mardon

greq333 said:


> You sure you got the evga 1000w bios, not the galax one?


Ignore me. I still had the old version of NVFlash. Flash complete. Let's play!


----------



## defiledge

dr/owned said:


> I took a picture of the spread of the thermal paste a few posts back. When you have these giant surfaces it becomes problematic creating enough mounting pressure from just the corners. Especially when I can only use tiny M2.5 thread screws where if I crank them down any harder the hex-head strips out (one of them did...had to vise grip it off). A thermal pad guarantees contact across the entire surface...at a small sacrifice of more thickness. If I was being ridiculous I'd probably try the 0.2mm IC Diamond pads...but dang they're so expensive and I'd need 2 of them.


Wait, did you thermal pad your GPU die?


----------



## keeph8n

Slinky Supercomputer said:


> more power for less?
> View attachment 2470562
> 
> 100 MHz (respectively 200 MHz) less for the same result is what I call my overclocking: More3D4Less. Until you look up to me you should show the same respect or just shut up... hwbot!


We do know you. You were banned from HWBOT for cheating


----------



## pat182

haha damn, this 1000 watt BIOS is gonna be the hot topic for the holidays


----------



## Slinky Supercomputer

Nizzen said:


> Thermal paste isn't good enough?


Should be fine, better than nothing! I never tried it, as my waterblock is not screwed to the backplate; I use only the thermal pad, the right fittings, and gravity.








I also use Fujipoly Sarcon in this image because I change video cards constantly. I recommend 1mm Gelid Solutions GP-Ultimate; it costs less and sticks better. A glue gun around the waterblock is another option before you start making holes.


----------



## mardon

pat182 said:


> haha damn, this 1000 watt BIOS is gonna be the hot topic for the holidays


Yes, I think so.
For me, I'm mini-ITX so I'll be limiting to 470W and trying to find max perf at the lowest voltage. Depending on the jump, I'll then go shunt mod to get the protections back in place.


----------



## pat182

mardon said:


> Yes I think so.
> For me I'm mini itx so will be limiting to 470w and trying to find max perf at lowest voltage. Depending on the jump I'll then go shunt mod to get the protections back in place.


It seems it doesn't scale much on water after a certain point though; 15.5k seems to be the max on water, whatever the limit will be.

I don't think we will see more than 2.2GHz above 0C.


----------



## Slinky Supercomputer

keeph8n said:


> We do know you. You were banned from HWBOT for cheating


I make mistakes but I never cheated. I got suspended because I reposted my removed benchmarks where the CPU-Z version was not visible? After Peter left hwbot 3 years ago... buncha clowns!
I know how you guys feel after all these years: 1000+ gold cups, reaching Top 10 in Elite with multiple world records. The only cheating was using a chiller under the username H2o vs. LN2, and for that most of you hate me
BTW keeph8n, regarding overclocking I will improve your score in 3DMark 5-10% in less than 5 minutes. Let's start with WILD LIFE: post the screenshot result here, then send me a PM.


----------



## pat182

I guess computer beef is a thing now? MY score IS better than YOURS. Damn, y'all need a chill pill.


----------



## markuaw1

pat182 said:


> it seems it doesnt scale much tho on water after a certain point, 15.5k seems to be the max on water whatever the limit will be
> 
> i dont think we will see more than 2.2ghz above 0c


Yep I agree 👍


----------



## dante`afk

anyone with a shunt modded card can make a comparison bench shunt mod vs 1000w bios?


----------



## mardon

MangoMunchaa said:


> Thanks to the new 1000W BIOS I was able to finally get my card to (mostly) stretch it's legs!
> Galax SG 3090 with EK Waterblock - https://www.3dmark.com/pr/672369
> View attachment 2470575


Mate you're 6th in the world with that hardware combo! Good going for a 2x 8pin!


----------



## Apecos

Slinky Supercomputer said:


> Temps look OK, try driver 27.21.14.5638 for more.


Hi, why do you recommend this driver? Is it not the latest?


----------



## pat182

MangoMunchaa said:


> Thanks to the new 1000W BIOS I was able to finally get my card to (mostly) stretch it's legs!
> Galax SG 3090 with EK Waterblock - https://www.3dmark.com/pr/672369
> View attachment 2470575


any point of oc my 9900k pass 5ghz ? does it raise the score ?


----------



## Slinky Supercomputer

Apecos said:


> Hi, why you recomend this driver? is not the latest?


Because I test them all. Negative, it is the 1st one.
Nvidia knows how to do business: give you the best at first to make you go crazy.


----------



## long2905

Can't believe I would ever see such numbers on the iChill X4.

Can finally breach 14k lol









I scored 14 170 in Port Royal


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}




www.3dmark.com





An undervolt of [email protected] consumes about 520W in Cyberpunk 2077, which is totally fine for me. One less reason to swap this card for a 3-plug one. But I still want a Strix, to sync the ARGB LEDs and for the better stock air cooler lol

The card will consume less wattage when it is cooler, is that right?


----------



## Carillo

Here is some more Kingpin 1000 watt testing. This is my turd 3090 Strix. Pretty much maxed core and memory.


----------



## Carillo

pat182 said:


> it seems it doesnt scale much tho on water after a certain point, 15.5k seems to be the max on water whatever the limit will be
> 
> i dont think we will see more than 2.2ghz above 0c


It scales all the way. The only limit is your silicon.


----------



## jura11

Assuming this Kingpin 1000W should work with a 2x8-pin non-shunted RTX 3090? Or do I need to shunt the 2x8-pin RTX 3090 to take full advantage of that BIOS?

Will probably test it later when I'm back home 

Thanks, Jura


----------



## Tias

bmgjet said:


> On the 2 plug cards you can estimate what the power limit will be by subtracting 30.4%
> 
> So some examples would be.
> Default power limit 1000W - 30.4%= 696W
> Set the power limit slider to 50%.
> 50% of 1000 = 500W - 30.4% =469.6W
> 
> Heres a older version of nvidia-smi.exe
> 
> 
> 
> 
> 
> 
> 
> 
> 560.4 KB file on MEGA
> 
> 
> 
> 
> 
> 
> 
> mega.nz
> 
> 
> 
> 
> 
> It comes with the cuda and tensor flow stuff.


If it is -30.4% on 2x8 pin then:

50% of 1000W = 500W - 30.4% = 348W

65% of 1000w = 650W - 30.4% = 452W

82% of 1000w = 820w - 30.4% = 570W (which is prolly the max W you wanna use on a 2x8pin)

I guess you did a mistype?


----------



## long2905

jura11 said:


> Assuming this Kingpin 1000W should work with 2 8pin non shunted RTX 3090? Or do I need to have shunt 2 8pin RTX 3090 to take full advantage of that BIOS?
> 
> Will probably test it later when I'm back home
> 
> Thanks, Jura


I'm running it on my stock iChill X4, which is a 2x8-pin card. The max it can draw is 700W, if the power reading is correct. I ran one Port Royal test with the max power limit, but that didn't trip my 750W Corsair SFX PSU. Undervolting down to [email protected] for now to be on the safe side.

I notice the RAM clock is locked at max though; any way to bring that down at idle?


----------



## jura11

Carillo said:


> Here is som more kingpin 1000 watt testing. This is my turd 3090 strix. Pretty much maxed core and memory.
> View attachment 2470619


That's with a Bykski or EKWB waterblock?

26°C is a great temperature, and 2205MHz is quite a nice OC

Thanks, Jura


----------



## jura11

long2905 said:


> im running it on my stock ichill x4 which is a 2 8pin card. the max it can draw is 700w if the power reading is correct. i ran one port royal test with the max power limit but that didnt trip my 750w corsair sfx psu. undervolting down to [email protected] for now to be on the safe side.
> 
> i notice RAM clock is locked at max though, anyway to bring that down on idle?


Ohh that's great, will test it later when I'm back home and see what I can do with that BIOS. I'm running a Super Flower 8Pack 2000W PSU. Did you read that from the wall, or from HWiNFO or other software?

With most XOC BIOSes released in the past, VRAM has always run at full speed, or at least that's what I remember from the RTX 2080 Ti Strix XOC BIOS, where VRAM runs at full speed.

Downclocking VRAM at idle, I'm not sure if that's possible; only Vince or EVGA itself can do that. Sadly there is no BIOS tweaker.

Hope this helps

Thanks, Jura


----------



## Outcasst

Just tried this Kingpin 1000W BIOS that's circulating, on my 2x8-pin reference Palit GamingOC. No shunt mod. Zero power limiting in my usual Heaven benchmark run, constant 1980MHz clock at stock. What's promising though is that performance is right where it should be for that clock speed. I'm nearly approaching my maximum score and I haven't even touched memory or core clock yet.

The card is reporting 700W usage, but I assume that is wrong. Max GPU temp has shot up to 62C, up from 55C under water, so it's definitely using more power and is not hardware limited.


----------



## cletus-cassidy

AcidWeb said:


> I just tweaked backplate thermal pad layout on my Strix 3090 and I can confirm that new layout provided by EKWB vastly decrease coil whine.


Can you share new placement? Driving me nuts on mine.


----------



## markuaw1

Yes, 26°C is a great temperature! I ran it last night but it was more like 39C; it was warm in my room last night, which I don't think helps with my open loop. Still got good clocks though.


----------



## pat182

So to be clear, when would a fuse blow on a card? Ain't the 1000 watt gonna jump a fuse somewhere?


----------



## Sheyster

Tias said:


> If it is -30.4% on 2x8 pin then:
> 
> 50% of 1000W = 500W - 30.4% = 348W
> 
> 65% of 1000w = 650W - 30.4% = 452W
> 
> 82% of 1000w = 820w - 30.4% = 570W (which is prolly the max W you wanna use on a 2x8pin)
> 
> I guess you did a mistype?


Yes, that's right. If you don't want to deal with percentages, just multiply the power limit you have dialed in and applied by 0.696 and you'll get the approximate real PL for your 2 x 8-pin card. Taking your last example: 820 x 0.696 = 570.7.
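A quick sketch of that estimate, assuming the 1000W default limit and the ~30.4% reporting offset quoted in this thread (both are observed values reported by posters here, not official numbers):

```python
# Approximate real board power on a 2x8-pin card running the 1000 W BIOS:
# dialed-in limit times the 0.696 correction factor from this thread.
def effective_power(slider_percent: float, bios_limit_w: float = 1000.0) -> float:
    dialed_in_w = bios_limit_w * slider_percent / 100.0
    return round(dialed_in_w * 0.696, 1)

print(effective_power(50))   # 348.0
print(effective_power(82))   # 570.7
```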


----------



## HyperMatrix

pat182 said:


> haha damn, this 1000 watt BIOS is gonna be the hot topic for the holidays





pat182 said:


> so to be clear, when would a fuse blow up on a card ? aint the 1000watt gonna jump a fuse somewhere ?


Fuse blows when too much power comes in from a certain point. Once it’s past that point, the card has its own over current protection. That is now disabled. The card also disables thermal protections with this bios. So all those posts you see from people asking “hey why is my card thermal throttling at 50C” after a repaste/repad are going to be replaced with “why did my GPU melt?”

Too many people who weren’t even competent enough to shunt mod or simply replace their thermal pads/paste...are now going to be using the 1000W bios thinking it’s safer. Lol.


----------



## Sheyster

pat182 said:


> so to be clear, when would a fuse blow up on a card ? aint the 1000watt gonna jump a fuse somewhere ?


Most of us will want to limit the power down with this BIOS. It will scale all the way down to just 100W (10%) on the slider in AB. This said, apparently lots of limits have been removed (like OCP) so it can still be dangerous. I still have not tested it but plan to later today with a 600W limit on the slider.

EDIT - See @HyperMatrix explanation above my post, more detail there! Bottom line, be freaking careful folks!!


----------



## Spiriva

Tias said:


> 82% of 1000w = 820w - 30.4% = 570W (which is prolly the max W you wanna use on a 2x8pin)


Can't a 1x8-pin deliver ~324W?

324 + 324 = 648W, and +75W from the PCIe slot = 723W.

723W would be the max possible a 3090 with 2x8-pin can pull?

Maybe there are other reasons you don't wanna go over ~570W on a 2x8-pin 3090 though?
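As a sanity check on those sums, under the same assumptions (~324W per heavy-gauge 8-pin plus the 75W slot allowance; note the official spec only rates an 8-pin connector for 150W):

```python
# Connector power budget for a 2x8-pin 3090, using the assumed values above.
EIGHT_PIN_W = 324  # assumed heavy-gauge (16 AWG) cable capability, not spec
SLOT_W = 75        # PCIe slot allowance

max_draw_w = 2 * EIGHT_PIN_W + SLOT_W
print(max_draw_w)  # 723
```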


----------



## HyperMatrix

Sheyster said:


> Most of us will want to limit the power down with this BIOS. It will scale all the way down to just 100W (10%) on the slider in AB. This said, apparently lots of limits have been removed (like OCP) so it can still be dangerous. I still have not tested it but plan to later today with a 600W limit on the slider.
> 
> EDIT - See @HyperMatrix explanation above my post, more detail there! Bottom line, be freaking careful folks!!


Just remember, it’s not only the total board power draw that is increased. Individual components have had their limits increased as well. So if you use the 1000W BIOS and move the slider down to the same wattage as before, it doesn’t mean it’s going to behave the same as your old BIOS. Other than OCP and thermal being off, you now have other areas that will be unrestrained. Can that be an issue under water even if you’re not using the Classified tool or EVC2? I can’t say for sure.

I’m just trying to make sure people understand what the bios is and what it does because there seems to be a lot of confusion around it and I doubt anyone wants to have a $2000 paperweight.


----------



## pat182

HyperMatrix said:


> Fuse blows when too much power comes in from a certain point. Once it’s past that point, the card has its own over current protection. That is now disabled. The card also disables thermal protections with this bios. So all those posts you see from people asking “hey why is my card thermal throttling at 50C” after a repaste/repad are going to be replaced with “why did my GPU melt?”
> 
> Too many people who weren’t even competent enough to shunt mod or simply replace their thermal pads/paste...are now going to be using the 1000W bios thinking it’s safer. Lol.


Yea, I'm just keeping the normal 520 watt BIOS on my Strix and waiting for -15 or -20C here in Canada to put that ****er on my balcony and run the best air-cooled run ever made hahaha


----------



## sultanofswing

The chances of damaging the card are pretty slim without having voltage control, which only comes with the KINGPIN or one of Elmor's boards.
I've had my KINGPIN 2080 Ti at 2300MHz and the voltage at 1.3V for benchmark runs on ambient water and it's never had an issue.

The people who should NOT use this BIOS are those on standard air cooling with temps pushing over 50C.


----------



## Falkentyne

Spiriva said:


> Cant 1x8 pin deliver ~324W ?
> 
> 324 + 324 = 648W, and + 75w from the pcie = 723W
> 
> 723W would be the max possible a 3090 with 2x8pin can pull?
> 
> Maybe there are other reasons you dont wanna go over ~570W on a 2x8pin 3090 tho?


16 AWG cables can deliver about 320W per cable maximum, so yes. That's why the Seasonic 12-pin FE cable can supply 600W safely (it's rated officially for 500W).
18 AWG cables, hell no. You don't want to exceed 240W per 18 AWG 8-pin cable.


----------



## AcidWeb

cletus-cassidy said:


> Can you share new placement? Driving me nuts on mine.





https://www.ekwb.com/shop/EK-IM/EK-IM-3831109832479.pdf


----------



## Slinky Supercomputer

pat182 said:


> yea im just keeping the normal 520watt bios on my strix and waiting a -15 or -20c here in canada to put that ****er on my balcony and run the best air cooler run ever made hahaha


A 3000 watt chiller can do that in FL at +10C








+$400 monthly electricity bill.


----------



## defiledge

So all the PSUs are out of stock... Do you guys think I'm safe to run my 600W card with a 750W PSU?


----------



## warbucks

chispy said:


> Enough with the off topic , it is not worth my time and bandwidth to speak about people who does not deserve it.
> 
> Back on Topic , i have found that the Byski block for the Strix 3090 is very good at keeping the Strix at low 38~44c while gaming. the back plate is a monster very thick and works very well with only a small low rpm fan the back temps are kept under check .
> 
> Guys the thermal pads for the Byski block is a weird 1.2mm , i know because i bought some extra pads on their store.


The Bykski blocks and backplates are quite good. I know they've had some issues with a couple of different blocks, but for the most part the quality is excellent for the dollar value.


----------



## HyperMatrix

defiledge said:


> So all the PSUs are out of stock... Do you guys think I'm safe to run my 600W card with a 750W PSU?


No. Unless the PSU is powering only the GPU and you have another PSU for your system.


----------



## chispy

Slinky Supercomputer said:


> I make mistakes but I never cheating. Get suspended because I repost my benchmarks removed where cpuid version was not visible? After Peter left hwbot 3 years ago... buncha clowns!
> I know how you guys fill all this years after 1000+ Gold Cups, reach Top 10 in Elite with multiple Word Records. The only cheating was using a chiller under username H2o vs. Ln2 and for that most of you hate me
> BTW keeph8n, regarding overclacking I will improve your score in 3dmark 5-10% in less that 5 minutes. Let's start with WILD LIFE, post here the screenshot result then send me a PM.


You admitted and told at hwbot the way you were cheating. Remember the skewed timers on Windows on your 6950X Intel CPU to boost the scores, exploiting a bug in the 3DMarks, knowingly doing so to get higher scores? We all remember at hwbot the way you were cheating. Ronaldo (Rbuass), an overclocking legend and top overclocker from Brazil, even made a video showing everyone what you were doing and how you did it. Photoshopped scores? Of course, you did a lot of photoshopped scores too. Using obscure software to fool the benchmarks? Of course you did that too, and so on and on and on... infinite cheats!

You have a lot, and I mean a lot, of skeletons in the closet. If you want, I can keep posting the threads from hwbot and the way you CHEATED all the benchmarks, 3DMark and Catzilla etc. You will not have the respect of overclockers and techies ever again; you are a disgrace to the community. Go back to your cave and stop looking for attention, we do not care about you. I have no idea why you keep posting here boasting about how good you are at overclocking when you have cheated all your life. So no, you did not make any world records; those scores were all cheated scores. Stop it now, you keep making a fool of yourself.

Get off the high horse you are on, looking for attention and boasting about having the best hardware and scores; we do not care. This is overclock.net, my home, and home of the best overclocking team in the whole world, with real overclocking legends who do not cheat on their scores. Sore loser clown; stop hating on hwbot and on us, its members here at OCN, or you won't last long with your new made-up account a few hours old...

Anyone that has doubts about the way he cheated all the benchmarks at hwbot, all you have to do is have a quick read of this thread at hwbot as proof:






One day you will get caught


Short story 2nd time this month we experience lousy photoshopping. This member got reported numerous times for not respecting the new 2020 rules, meaning cpuz 1.91 version or newer, no clipping, etc.... On top of that his reputation did not help and many were watching his every move. I moderated ...



community.hwbot.org





Final word: You were banned for "CHEATING"...


----------



## Carillo

defiledge said:


> So all my the PSUs are out of stock... Do you guys think I'm safe to run my 600W card with a 750W gpu?


1000 watt PSU minimum


----------



## markuaw1


AcidWeb said:


> https://www.ekwb.com/shop/EK-IM/EK-IM-3831109832479.pdf


Well, that just sucks! *** EK, I already put my EK block on two weeks ago; I don't feel like pulling it back apart. This is BS.


----------



## Carillo

jura11 said:


> That's with the Bykski or EKWB waterblock?
> 
> 26°C is a great temperature and 2205MHz is quite a nice OC
> 
> Thanks, Jura


My chiller was running, water at 12°C. It's an Alphacool water block.


----------



## Conqi1985

Outcasst said:


> Just tried this Kingpin 1000W BIOS that's circulating around on my 2x8-pin reference Palit GamingOC. No shunt mod. Zero power limiting in my usual Heaven benchmark run, constant 1980MHz clock at stock. What's promising, though, is that performance is right where it should be for that clock speed. I'm nearly at my maximum score and I haven't even touched memory or core clock yet.
> 
> The card is reporting 700W usage, but I assume that is wrong. Max GPU temp has shot up to 62°C from 55°C under water, so it's definitely using more power and is not hardware limited.



700W would be correct, as the BIOS is 1000W with 3x8-pin and you are on 2x8-pin?


----------



## lokran88

Aquacomputer Strix Block with active backplate to be finally delivered tomorrow.


----------



## Outcasst

Conqi1985 said:


> 700W would be correct, as the BIOS is 1000W with 3x8-pin and you are on 2x8-pin?


Yep, looks like the power reading in MSI Afterburner is accurate. I attempted to run the OCCT power supply test and as soon as I started it, the OCP tripped. Not surprising, since the GPU was trying to pull 800W along with 200W for the 5900X on an 850W PSU.

Still not sure if I'll keep this BIOS. I'm getting no throttling, but performance is at most 3fps faster in all the games I've tested compared to the Gigabyte 390W. It seems like a good option for eking out the highest benchmark scores, but for gaming I don't think the extra power consumption and heat are worth it at all.
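The OCP trip described above is simple arithmetic; here is a minimal sketch. The 75W allowance for the rest of the system and the 80% sustained-load headroom are assumptions (and Ampere cards also have millisecond transient spikes well above their average draw):

```python
def psu_check(loads_w, psu_rating_w, headroom=0.8):
    """Return (fits, total_draw): whether sustained draw stays
    within a conservative fraction of the PSU rating."""
    total = sum(loads_w.values())
    return total <= psu_rating_w * headroom, total

# The scenario from the post: 800W GPU + 200W 5900X on an 850W PSU,
# plus an assumed 75W for drives, fans, and the rest of the board.
fits, total = psu_check({"gpu": 800, "cpu_5900x": 200, "rest": 75}, 850)
print(fits, total)  # False 1075
```

At 1075W of demand against an 850W rating, the OCP trip was inevitable; no headroom factor would have saved it.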


----------



## chispy

lokran88 said:


> Aquacomputer Strix Block with active backplate to be finally delivered tomorrow.


Let us know your temps and the quality of the block please. I have a Bykski on my Strix.


----------



## Slinky Supercomputer

chispy said:


> You admitted at HWBot how you were cheating. Remember the skewed Windows timers on your Intel 6950X, knowingly exploiting a 3DMark bug to get higher scores? We all remember at HWBot how you were cheating. Ronaldo (Rbuass), the legendary top overclocker from Brazil, even made a video showing everyone what you were doing and how you did it. Photoshopped scores? You did plenty of those too. Obscure software to fool the benchmarks? That as well, and on and on... endless cheats!
> 
> You have a lot, and I mean a lot, of skeletons in the closet. If you want, I can keep posting the threads from HWBot showing how you CHEATED all the benchmarks: 3DMark, Catzilla, and the rest. You will never have the respect of overclockers and techies again; you are a disgrace to the community. Go back to your cave and stop looking for attention; we do not care about you. I have no idea why you keep posting here boasting about how good you are at overclocking when you have cheated all your life. So no, you did not set any world records; those scores were all cheated. Stop now; you keep making a fool of yourself.
> 
> Get off your high horse, looking for attention and boasting about having the best hardware and scores; we do not care. This is overclock.net, my home and the home of the best overclocking team in the whole world, with real overclocking legends who do not cheat on their scores. Sore loser clown, stop hating on HWBot and on us, its members here at OCN, or you won't last long with your new made-up account a few hours old...
> 
> Anyone who doubts the way he cheated all the benchmarks at HWBot only has to take a quick read of this thread there as proof:
> 
> 
> 
> 
> 
> 
> One day you will get caught
> 
> 
> Short story 2nd time this month we experience lousy photoshopping. This member got reported numerous times for not respecting the new 2020 rules, meaning cpuz 1.91 version or newer, no clipping, etc.... On top of that his reputation did not help and many were watching his every move. I moderated ...
> 
> 
> 
> community.hwbot.org
> 
> 
> 
> 
> 
> Final word: You were banned for "CHEATING"...
> 
> View attachment 2470631


I released More3D4Less to HWBot and a 10% improvement was confirmed on any single run. As a result they decided to remove Catzilla from points. Regarding 3DMark, I make only live submissions; look them up, please. It takes more skill than just pouring LN2.


----------



## Sheyster

HyperMatrix said:


> Just remember, it's not only the total board power draw that is increased. Individual components have had their limits increased as well. So if you use the 1000W BIOS and move the slider down to the same wattage as before, it doesn't mean it's going to behave the same as your old BIOS. Other than OCP and thermal protection being off, you now have other areas that will be unrestrained. Can that be an issue under water even if you're not using the Classified Tool or EVC2? I can't say for sure.
> 
> I'm just trying to make sure people understand what this BIOS is and what it does, because there seems to be a lot of confusion around it, and I doubt anyone wants a $2000 paperweight.


Yep, no doubt from the ABE screenshots posted, the limits are all raised sky-high. This said, we have to assume there is still some hardware power balancing going on. Setting a lower limit on the slider (55%, for instance) _should_ be relatively safe. *At the end of the day, use this 1000W BIOS at your own risk.* I know you know this, but it needs to be said for others.


----------



## markuaw1

My two new favorite BIOS flashes for my Strix: #1, Kingpin 520W as a daily driver; #2, Kingpin XOC 1000W for mucking around with benchmarks.


----------



## Paperino super cuochino

Hi guys, I have a question: would it still be worth shunt-modding the GPU now that this new Kingpin 1000W BIOS is out? And if not, which is currently the best custom 3090 model for watercooling/OC purposes?


----------



## Sheyster

Carillo said:


> 1000 watt psu minimum


Or keep his current PSU and use an Add2PSU connector with a second PSU. Smaller power supplies are easy to find right now. I ran two of them during the Titan X Maxwell SLI days (1500W total).


----------



## Tias

Does the memory of the 3090 run at full speed all the time with the Kingpin XOC BIOS?


----------



## long2905

Tias said:


> Does the memory of the 3090 run at full speed all the time with the Kingpin XOC BIOS?


That's a yes. Though it seems, for me at least, that I can now run a higher memory OC, stable at +1100, where previously it was only +900.


----------



## Sheyster

markuaw1 said:


> My two new favorite BIOS flashes for my Strix: #1, Kingpin 520W as a daily driver; #2, Kingpin XOC 1000W for mucking around with benchmarks.


KPE 520W is fine if you're watercooled. Unfortunately if you're using an air-cooled Strix, the middle fan will top out at 2000 RPM (66% of max speed). Air-cooled Strix users should stick to the EVGA FTW3 500W BIOS for daily use, IMHO.


----------



## Element77

Slinky Supercomputer said:


> Because I tested them all. Negative, it is the first one.
> Nvidia knows how to do business: give you the best at first to make you go crazy.


Cheers for testing; interesting how much variance there is between them.

I don't know much about HWBot and I get that there is some controversy there, but your validated Wild Life score is pretty awesome; I get around 108k with a 3090. Is the secret just big clocks and cooling?

Also, I wonder if we should make a list of which cards support the KP 1000W so far. I get that the Strix does. What about the other 3x8-pin cards; has anyone tried?


----------



## Biscottoman

lokran88 said:


> Aquacomputer Strix Block with active backplate to be finally delivered tomorrow.


Would it be overkill to run an MP5Works backplate on top of that active one? Though the RTX cutout would probably be an issue for proper contact between the thermal pad and the backplate.


----------



## Slinky Supercomputer

Element77 said:


> Cheers for testing; interesting how much variance there is between them.
> 
> I don't know much about HWBot and I get that there is some controversy there, but your validated Wild Life score is pretty awesome; I get around 108k with a 3090. Is the secret just big clocks and cooling?
> 
> Also, I wonder if we should make a list of which cards support the KP 1000W so far. I get that the Strix does. What about the other 3x8-pin cards; has anyone tried?


Check your inbox in 20 seconds.


----------



## djchup

I've got a GIGABYTE GeForce RTX 3090 GAMING OC 24G on the way. It seems like top performance for that card would be with the 1000W BIOS (which is ~700W on 2x8-pin?), and the only waterblock option I can find is Bykski. Does that all sound right?


----------



## Nizzen

Sheyster said:


> Yep, no doubt from the screen shots posted from the ABE, the limits are all raised to the sky. This said, we have to assume there is still some hardware power balancing going on. Setting a lower limit on the slider (like 55% for instance) _should_ be relatively safe. *At the end of the day, use this 1000W BIOS at your own risk.* I know you know this, but it needs to be said for others.


My bet is it's as safe as the 2080 Ti XOC BIOS.

The most unsafe part is the users themselves 

Have a fun Christmas, all you good people of OCN <3


----------



## defiledge

djchup said:


> I've got a GIGABYTE GeForce RTX 3090 GAMING OC 24G on the way. It seems like top performance for that card would be with the 1000W BIOS (which is ~700W on 2x8-pin?), and the only waterblock option I can find is Bykski. Does that all sound right?


I would recommend Barrow over Bykski.


----------



## Sheyster

Element77 said:


> Also, I wonder if we should make a list on what cards support the kp 1000w so far. I get that the strix does. What about the other 3pin cards, anyone tried?


There is confirmation it works on 2x8-pin cards as well, but with a 30.4% loss in max power limit (it's effectively a 696W BIOS on these cards). I'm going to boldly predict it works on virtually any 3090 except the FE. This said, use it at your own risk!
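For anyone wanting to sanity-check that 696W number, here is a minimal sketch of how a per-rail power budget collapses when one rail is absent. The individual rail allocations are inferred from the reported 30.4% loss, not read out of the BIOS, so treat them as assumptions:

```python
# Hypothetical per-rail budget for the 1000W BIOS. The 304W per 8-pin
# figure is inferred from the reported 30.4% loss on 2x8-pin cards;
# the slot allocation is whatever remains to reach 1000W.
RAIL_BUDGET_W = {
    "pcie_slot": 88,
    "8pin_1": 304,
    "8pin_2": 304,
    "8pin_3": 304,  # this rail doesn't exist on 2x8-pin cards
}

def effective_limit(present_rails):
    """Sum the budgets of the rails the card actually has."""
    return sum(RAIL_BUDGET_W[r] for r in present_rails)

full = effective_limit(RAIL_BUDGET_W)                         # 3x8-pin card
two_pin = effective_limit(["pcie_slot", "8pin_1", "8pin_2"])  # 2x8-pin card
print(full, two_pin)  # 1000 696
```

Whatever the exact split, the point stands: losing one 8-pin rail costs the card that rail's entire allocation, which matches the ~700W figure people are reporting.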


----------



## jura11

djchup said:


> I've got a GIGABYTE GeForce RTX 3090 GAMING OC 24G on the way. It seems like top performance for that card would be with the 1000W BIOS (which is ~700W on 2x8-pin?), and the only waterblock option I can find is Bykski. Does that all sound right?


Bykski waterblocks are good for the money. I'm running one on my RTX 3090 with no issues, and temperatures are great.

With the BIOS I can't help you; people here have tried that BIOS on their 2x8-pin GPUs and reported better performance.

I'm running the KFA2 390W BIOS, but in your case I would try the Gigabyte 390W BIOS.

Just a bit of caution, as with everything.

Hope this helps.

Thanks, Jura


----------



## Falkentyne

I'm not risking trying to flash it on an FE, especially since no one to date has even identified the vbios chip on the FE (for hardware force flashing).

Anyway, another 550W+ Superposition 1080p run









UNIGINE Superposition benchmark score


UNIGINE Superposition detailed score page




benchmark.unigine.com


----------



## jura11

Nizzen said:


> My bet is it's as safe as the 2080 Ti XOC BIOS.
> 
> The most unsafe part is the users themselves
> 
> Have a fun Christmas, all you good people of OCN <3


I'm still running that BIOS on my Asus RTX 2080 Ti Strix, although I set its power limit to 44%, or 440W, and it's on the Quiet BIOS switch, which is fine for just benchmarks. As a daily BIOS, the Matrix BIOS seems best for the Asus RTX 2080 Ti Strix.

Thanks, Jura


----------



## djchup

jura11 said:


> Bykski waterblocks are good for the money. I'm running one on my RTX 3090 with no issues, and temperatures are great.
> 
> With the BIOS I can't help you; people here have tried that BIOS on their 2x8-pin GPUs and reported better performance.
> 
> I'm running the KFA2 390W BIOS, but in your case I would try the Gigabyte 390W BIOS.
> 
> Just a bit of caution, as with everything.
> 
> Hope this helps.
> 
> Thanks, Jura


Thanks for the reply. Shouldn't that just be the stock BIOS on my GIGABYTE GeForce RTX 3090 GAMING OC 24G?


----------



## ttnuagmada

What's the consensus on running the unlocked KPE BIOS 24/7 on something like a Strix with the TDP capped at 600W?


----------



## jura11

djchup said:


> Thanks for the reply. Shouldn't that just be the stock BIOS on my GIGABYTE GeForce RTX 3090 GAMING OC 24G?


Really not sure there. If your BIOS is 390W, then yes, you can try the Kingpin 1000W BIOS and see whether it's worth it, or you may be better off with a shunt mod.

Hope this helps 

Thanks, Jura


----------



## Sheyster

Falkentyne said:


> I'm not risking trying to flash it on an FE, especially since no one to date has even identified the vbios chip on the FE (for hardware force flashing).
> 
> Anyway, another 550W+ Superposition 1080p run
> 
> 
> 
> 
> 
> 
> 
> 
> 
> UNIGINE Superposition benchmark score
> 
> 
> UNIGINE Superposition detailed score page
> 
> 
> 
> 
> benchmark.unigine.com


You're shunted, right? Not much reason to risk it; I don't blame you.


----------



## ViRuS2k

warbucks said:


> The Bykski blocks and backplates are quite good. I know they've had some issues with a couple different blocks but for the most part the quality is quite good for the dollar value.


Anywhere in the UK to buy those? I'm desperate for a good backplate and waterblock for my MSI Trio 3090 :/


----------



## Sheyster

ttnuagmada said:


> What's the consensus on running the unlocked KPE BIOS 24/7 on something like a Strix with the TDP capped at 600W?


I'll let you know in a day or two! I am undervolting, though. The goal is 2100 core at 1V stable.


----------



## Tias

long2905 said:


> That's a yes. Though it seems, for me at least, that I can now run a higher memory OC, stable at +1100, where previously it was only +900.


Can you use this BIOS 24/7 with the memory running at full speed and never idling?


----------



## geriatricpollywog

ttnuagmada said:


> What's the consensus on running the unlocked KPE BIOS 24/7 on something like a Strix with the TDP capped at 600W?


Even if you capped the TDP to 450W, the temperature protection is disabled. It won't take much for your core to hit 120°C. I have loaded the XOC BIOS for benching on my 2080 Ti and 3090 Kingpins, but I constantly watch the temperature and voltage every time I load it. I have them displayed on the OLED. Do not load an XOC BIOS unless you are willing to pay more attention to the temperature of your card than to the content you are rendering.


----------



## ttnuagmada

Sheyster said:


> I'll let you know in a day or two! I am undervolting though. Goal is 2100 core at 1v stable.


I know this BIOS removes a lot of protections, but I'm curious whether there's still a danger if the power limit is capped conservatively. I'm mostly just looking to hit the clock speeds/voltages I can already hit, without power throttling.


----------



## SuperMumrik

Slinky Supercomputer said:


> BTW keeph8n, regarding overclocking, I will improve your score in 3DMark by 5-10% in less than 5 minutes. Let's start with WILD LIFE; post the screenshot result here, then send me a PM.


I would really like a 5-10% boost to my score 








I scored 129 496 in Wild Life


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## ttnuagmada

0451 said:


> Even if you capped the TDP to 450W, the temperature protection is disabled. It won't take much for your core to hit 120°C. I have loaded the XOC BIOS for benching on my 2080 Ti and 3090 Kingpins, but I constantly watch the temperature and voltage every time I load it. I have them displayed on the OLED. Do not load an XOC BIOS unless you are willing to pay more attention to the temperature of your card than to the content you are rendering.


I forgot to mention that I'm water cooled. Would I still have to worry?


----------



## Sheyster

ttnuagmada said:


> I know this BIOS removes a lot of protections, but I'm curious whether there's still a danger if the power limit is capped conservatively. I'm mostly just looking to hit the clock speeds/voltages I can already hit, without power throttling.


Indeed, my goal is also to avoid any and all power limits with a conservative, stable OC for gaming. Even with the 500W EVGA BIOS I still hit the power limit every now and then while undervolting.


----------



## geriatricpollywog

ttnuagmada said:


> I forgot to mention that I'm water cooled. Would I still have to worry?


Yes. Temperature protection is disabled for every part of the card. Besides, in my testing, the XOC BIOS offers no benefit to Port Royal scores over the Kingpin 520W LN2 BIOS. For the comparison of the two BIOSes, I had the 360mm AIO radiator dipped in an ice bath, which brings the temps lower than any open loop. My best score was 15,632 with the 520W BIOS, and I could not beat it with the XOC BIOS. This is one of the highest scores anybody has achieved without extreme cooling, and it was not with the XOC BIOS. There is no point to the XOC BIOS unless you have an LN2 pot.


----------



## pdlr

I am absolutely frustrated, and confused, by this 1000W XOC BIOS. I have gained absolutely nothing over the KPE 520W BIOS; in fact, I have lost. I only see higher clocks and higher power draw, but terrible results. I have an MSI TRIO X. In principle, the only way to keep clocks above 2200MHz is via the curve, and the moment I go past 1.087V the nonsense begins... At 1.10V I get 2235MHz clocks without problems, with no apparent drop in frequency and 600W of draw according to all the meters, but it is not real: I lose over 300 points versus my best result with the KPE BIOS. That makes me think the card is still being limited on power in some way, and whatever frequency you set, it is still throttled. My question is whether this has happened to other people, and whether the people who have achieved very good scores with this BIOS managed it because they also have the shunt mod done. I see no other explanation... something is limiting the card... it makes no sense that 2235MHz/1343MHz barely beats 2175/1300 on the previous BIOS, and there has to be an explanation.

Thank you, and apologies for the post.


----------



## Slinky Supercomputer

SuperMumrik said:


> I would really like a 5-10% boost to my score
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 129 496 in Wild Life
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


*** are you serious? With 10% you would take my throne! PM you in a minute.


----------



## geriatricpollywog

pdlr said:


> I am absolutely frustrated, and confused, by this 1000W XOC BIOS. I have gained absolutely nothing over the KPE 520W BIOS; in fact, I have lost. I only see higher clocks and higher power draw, but terrible results. I have an MSI TRIO X. In principle, the only way to keep clocks above 2200MHz is via the curve, and the moment I go past 1.087V the nonsense begins... At 1.10V I get 2235MHz clocks without problems, with no apparent drop in frequency and 600W of draw according to all the meters, but it is not real: I lose over 300 points versus my best result with the KPE BIOS. That makes me think the card is still being limited on power in some way, and whatever frequency you set, it is still throttled. My question is whether this has happened to other people, and whether the people who have achieved very good scores with this BIOS managed it because they also have the shunt mod done. I see no other explanation... something is limiting the card... it makes no sense that 2235MHz/1343MHz barely beats 2175/1300 on the previous BIOS, and there has to be an explanation.
> 
> Thank you, and apologies for the post.


Look at what I just wrote: I got 15,632 in Port Royal with the 520W BIOS and I could not beat it with the 1000W BIOS. The 1000W BIOS is for LN2 only, and any time temperature protection is disabled you need to constantly monitor the card. The XOC BIOS is NOT for people who are frustrated with the daily performance of their overclocked vanilla 3090.


----------



## Nizzen

ttnuagmada said:


> I forgot to mention that I'm water cooled. Would I still have to worry?


If you need to ask that question, you don't want that BIOS.

I'm running this BIOS on my watercooled 3090 Strix OC, and for daily use there is literally no difference from the EVGA 520W BIOS.

For gaming, the 520W BIOS is enough. If you are using a water chiller 24/7, then the 1000W BIOS will be a bit better.


----------



## ttnuagmada

0451 said:


> Look at what I just wrote: I got 15,632 in Port Royal with the 520W BIOS and I could not beat it with the 1000W BIOS. The 1000W BIOS is for LN2 only, and any time temperature protection is disabled you need to constantly monitor the card. The XOC BIOS is NOT for people who are frustrated with the daily performance of their overclocked vanilla 3090.


To be fair, 15,632 is already a stellar score.


----------



## Slinky Supercomputer

SuperMumrik said:


> I would really like a 5-10% boost to my score
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 129 496 in Wild Life
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


PM me please, I will be back soon.


----------



## pdlr

0451 said:


> Look at what I just wrote: I got 15,632 in Port Royal with the 520W BIOS and I could not beat it with the 1000W BIOS. The 1000W BIOS is for LN2 only, and any time temperature protection is disabled you need to constantly monitor the card. The XOC BIOS is NOT for people who are frustrated with the daily performance of their overclocked vanilla 3090.



I completely agree with you. What I don't understand is how there are people here reporting such big gains. Is it possible the results come from combining a shunt mod with this BIOS?


----------



## geriatricpollywog

pdlr said:


> I completely agree with you. What I don't understand is how there are people here reporting such big gains. Is it possible the results come from combining a shunt mod with this BIOS?


I would assume those cards are not getting close to 2160 without extra voltage and power, due to poor silicon quality, and the extra power is allowing them to hit 2160 at, say, 600W. Even with good silicon, voltage tuning, and unlocked power, it is difficult to scale past 2160-2205 without sub-zero temps.


----------



## geriatricpollywog

This is for the 2080 Ti, but the 3090 is very similar. Some cards might need lots of power to hit 2160; others can sustain it easily with a 500W BIOS. That might explain why some cards scale better with a 1000W BIOS. However, regardless of silicon quality, you need sub-zero temps to scale past 2205.











https://xdevs.com/guide/2080ti_kpe/


----------



## Sheyster

0451 said:


> This is for the 2080 Ti, but the 3090 is very similar. Some cards might need lots of power to hit 2160; others can sustain it easily with a 500W BIOS. That might explain why some cards scale better with a 1000W BIOS. However, regardless of silicon quality, you need sub-zero temps to scale past 2205.
> 
> View attachment 2470650
> 
> 
> 
> https://xdevs.com/guide/2080ti_kpe/


Indeed. Back in the day we had the low-ASIC cards that shined with lots of voltage, typically out-performing efficient cards when properly cooled. I think you nailed it.


----------



## DrunknFoo

ttnuagmada said:


> What's the consensus on running the unlocked KPE BIOS 24/7 on something like a Strix with the TDP capped at 600W?


Keep it cool and it'll be fine. I have been running my shunted FTW3 for about two weeks now; it pulls a max of roughly 800W.

(Well, 700W with the slider offset until I get a block.)


----------



## motivman

So will it be a disaster if the Kingpin 1000W BIOS is used on a shunt-modded (5mΩ everywhere) 2x8-pin card? lol


----------



## Sheyster

motivman said:


> So will it be a disaster if the Kingpin 1000W BIOS is used on a shunt-modded (5mΩ everywhere) 2x8-pin card? lol


Given that it's technically not a 1000W BIOS on a 2x8-pin card (it's ~696W), why not just roll with the shunt mod only and keep the protections intact?
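For context on why that combination is spicy, here is the usual shunt-mod arithmetic as a sketch. It assumes 5mΩ stock sense resistors with equal 5mΩ resistors stacked on top in parallel, both of which are assumptions about this particular card:

```python
def parallel(r1, r2):
    # Equivalent resistance of two resistors in parallel.
    return (r1 * r2) / (r1 + r2)

STOCK_SHUNT_OHM = 0.005  # assumed 5 mohm stock current-sense resistor
MOD_SHUNT_OHM = 0.005    # 5 mohm resistor stacked on top of it

effective = parallel(STOCK_SHUNT_OHM, MOD_SHUNT_OHM)  # 2.5 mohm
scale = STOCK_SHUNT_OHM / effective  # controller under-reads power by this factor

reported_limit_w = 696  # effective BIOS limit on a 2x8-pin card
true_draw_at_limit_w = reported_limit_w * scale
print(scale, true_draw_at_limit_w)  # 2.0 1392.0
```

In other words, with equal shunts stacked under this BIOS, the card could try to pull well over a kilowatt before the controller even thinks it has reached the limit.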


----------



## motivman

Sheyster said:


> Given that it's technically not a 1000W BIOS on a 2x8 card (it's ~696W), why not just roll with the shunt only and keep the protections intact?


Because even on the 520W Kingpin BIOS I am still power limited in some benchmarks (Port Royal, Time Spy Extreme test 2). I wonder if this BIOS will take care of the stealth power limits...

Does the power slider in Afterburner work with this BIOS? I could set it to 50% and test to see if I am still power limited in PR and Time Spy Extreme.


----------



## markuaw1

Sheyster said:


> KPE 520W is fine if you're watercooled. Unfortunately if you're using an air-cooled Strix, the middle fan will top out at 2000 RPM (66% of max speed). Air-cooled Strix users should stick to the EVGA FTW3 500W BIOS for daily use, IMHO.


----------



## Falkentyne

0451 said:


> Even if you capped the TDP to 450W, the temperature protection is disabled. It won't take much for your core to hit 120°C. I have loaded the XOC BIOS for benching on my 2080 Ti and 3090 Kingpins, but I constantly watch the temperature and voltage every time I load it. I have them displayed on the OLED. Do not load an XOC BIOS unless you are willing to pay more attention to the temperature of your card than to the content you are rendering.


I'm going to quote this because I don't think people are taking it seriously.

A while back, I disassembled my 3090 FE to see if I could improve VRAM cooling a bit. I decided to big-brain it and "improve" the RAM cooling by stacking a 1.5mm and a 1.0mm 12 W/mK pad on the GPU-side GDDR6X (I didn't have 0.5mm pads, except Arctic 6 W/mK pads I didn't want to use; they don't cool well enough), for 2.5mm total thickness. 2.0mm is the absolute maximum thickness on that side, and ONLY with a very compressible, NON-stacked pad (a 1.5mm and a 0.5mm stacked will be THICKER than a single 2.0mm pad AND less compressible).

I reassembled the card, and as soon as I got to the desktop: instant black screen and 100% fan speed.
I shut it off, turned it back on, and got into the BIOS with no problem. I booted Windows and got the black screen and 100% fan speed again as soon as the desktop loaded.
Thinking I had messed up and shorted something on the conductive paint from the shunt mod, I took the card apart, fearing the worst. When I got to the GPU side, I noticed the thermal paste had not spread at all; it had barely even made contact with the heatsink! The thermal pads were too thick, and the card had gone into protective shutdown from the thermal trip (probably 105-120°C).

I put on the proper 1.5mm pads and the card booted right up. No damage.

Now, if this had happened on a card with a 1000W BIOS with protections disabled, then unless thermal trip is hardwired into the silicon (#THERMTRIP cannot be disabled on Intel chips; it's a function of the chip itself, not the BIOS), it's pretty safe to say the card would have been fried, assuming the disabled thermal protections include thermal trip.

You had better make sure your cards are in tip-top shape. That vbios is supposed to be only for world-record overclocking on LN2. The 520W Kingpin BIOS is for daily use.
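The failure described in that post is pure stack-height arithmetic; a sketch using the numbers given there (the 2.0mm ceiling is the post's stated maximum, and pad compression is ignored):

```python
MAX_PAD_HEIGHT_MM = 2.0  # stated maximum usable pad height on the GPU side
stacked_mm = 1.5 + 1.0   # the two pads that were stacked

# Uncompressed, the cooler sits this far off the GPU die --
# enough that the paste never touched the heatsink.
gap_at_die_mm = stacked_mm - MAX_PAD_HEIGHT_MM
print(gap_at_die_mm)  # 0.5
```

Half a millimeter of air gap at the die is an eternity thermally, which is why the card hit its thermal trip within seconds of loading the desktop.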


----------



## Sheyster

motivman said:


> Because even on the 520W Kingpin BIOS I am still power limited in some benchmarks (Port Royal, Time Spy Extreme test 2). I wonder if this BIOS will take care of the stealth power limits...
> 
> Does the power slider in Afterburner work with this BIOS? I could set it to 50% and test to see if I am still power limited in PR and Time Spy Extreme.


Yes, the slider scales down all the way to 10%. Based on the ABE screenshots @bmgjet posted, all limits are raised, so I think you'll get what you want. Nothing like an XOC BIOS to bring a little excitement to life, eh?  Just don't get too crazy with it.


----------



## Falkentyne

motivman said:


> Because even on the 520W Kingpin BIOS I am still power limited in some benchmarks (Port Royal, Time Spy Extreme test 2). I wonder if this BIOS will take care of the stealth power limits...
> 
> Does the power slider in Afterburner work with this BIOS? I could set it to 50% and test to see if I am still power limited in PR and Time Spy Extreme.


You shouldn't be stealth power limited in Port Royal at all, unless you're running some custom 4K test in it.

It will, but expect to see up to _700W_ of power draw in those tests (I'm talking specifically about Time Spy Extreme and Superposition 4K Custom with Extreme shaders).


----------



## Spiriva

Is it okay to run the memory at full speed 24/7, though? It seems very important with the XOC BIOSes (it was like this for the 2080 Ti too) not to let the memory idle, for some reason.


----------



## Carillo

Anyone want a turd Strix?


----------



## Falkentyne

Spiriva said:


> Is it okay to run the memory at full speed 24/7, though? It seems very important with the XOC BIOSes (it was like this for the 2080 Ti too) not to let the memory idle, for some reason.


There's never been a problem running the memory at full speed. It's no different than setting "Prefer maximum performance" in the Nvidia drivers. If it were dangerous, Nvidia wouldn't have allowed it.


----------



## geriatricpollywog

Falkentyne said:


> I'm going to quote this because I don't think people are taking it seriously.
> 
> A while back, I disassembled my 3090 FE to see if I could improve VRAM cooling a bit. I decided to big-brain it and "improve" the RAM cooling by stacking a 1.5mm and a 1.0mm 12 W/mK pad on the GPU-side GDDR6X (I didn't have 0.5mm pads, except Arctic 6 W/mK pads I didn't want to use; they don't cool well enough), for 2.5mm total thickness. 2.0mm is the absolute maximum thickness on that side, and ONLY with a very compressible, NON-stacked pad (a 1.5mm and a 0.5mm stacked will be THICKER than a single 2.0mm pad AND less compressible).
> 
> I reassembled the card, and as soon as I got to the desktop: instant black screen and 100% fan speed.
> I shut it off, turned it back on, and got into the BIOS with no problem. I booted Windows and got the black screen and 100% fan speed again as soon as the desktop loaded.
> Thinking I had messed up and shorted something on the conductive paint from the shunt mod, I took the card apart, fearing the worst. When I got to the GPU side, I noticed the thermal paste had not spread at all; it had barely even made contact with the heatsink! The thermal pads were too thick, and the card had gone into protective shutdown from the thermal trip (probably 105-120°C).
> 
> I put on the proper 1.5mm pads and the card booted right up. No damage.
> 
> Now, if this had happened on a card with a 1000W BIOS with protections disabled, then unless thermal trip is hardwired into the silicon (#THERMTRIP cannot be disabled on Intel chips; it's a function of the chip itself, not the BIOS), it's pretty safe to say the card would have been fried, assuming the disabled thermal protections include thermal trip.
> 
> You had better make sure your cards are in tip-top shape. That vbios is supposed to be only for world-record overclocking on LN2. The 520W Kingpin BIOS is for daily use.


I removed the fans from my 3090 KP AIO for some ice bath overclocking. After the session, I left the fans off to let the radiator dry. Forgetting I had the fans off, I booted up RDR2 and saw that my core was throttling to 2100. Then I saw the core temp was 85c. Nowhere near 120, but good thing it was running the 520w bios. Another reason why you shouldn’t turn off temperature protection unless you are visually monitoring the temps.


----------



## Slinky Supercomputer

Carillo said:


> Anyone wants a turd Strix ?


PM me


----------



## bwana

Using the latest beta of afterburner but it doesn’t see kingpin. Is there a version that works?


----------



## motivman

So I flashed the KP 1000W BIOS on my 2x8-pin shunt-modded reference card with 5mOhm shunts everywhere. Set the power limit to 50%, or 500W. My card pulled about 662W, so it looks like all stealth power limits were removed. My normalized TDP reached 54%, so I have a little bit of power throttling in Timespy Extreme, but if you noticed, 8 pin #1 pulled around 367W!!! Now I will go check my cables to make sure they haven't melted... lolololol.


----------



## Paramedic10

I scored 15 238 in Port Royal


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





15,238 with included LN2 BIOS on KP.

Does anyone know how much a 9900k holds back the score? What would be an estimate on a 10900k, or would it not really change? 

I see the best score on my combination is high 15k, versus mid 18k on a 3090 + 10900k. Just curious, would be nice to know if this is a 15.5k card with mild tweaking on a newer gen CPU.


----------



## motivman

Paramedic10 said:


> I scored 15 238 in Port Royal
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> 
> 15,238 with included LN2 BIOS on KP.
> 
> Does anyone know how much a 9900k holds back the score? What would be an estimate on a 10900k, or would it not really change?
> 
> I see the best score on my combination is high 15k, versus mid 18k on a 3090 + 10900k. Just curious, would be nice to know if this is a 15.5k card with mild tweaking on a newer gen CPU.


10900k will not make a difference in PR.


----------



## geriatricpollywog

Paramedic10 said:


> I scored 15 238 in Port Royal
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> 
> 15,238 with included LN2 BIOS on KP.
> 
> Does anyone know how much a 9900k holds back the score? What would be an estimate on a 10900k, or would it not really change?
> 
> I see the best score on my combination is high 15k, versus mid 18k on a 3090 + 10900k. Just curious, would be nice to know if this is a 15.5k card with mild tweaking on a newer gen CPU.


That’s a good score. No, it won’t improve with a 10900k. Did you adjust the voltages or just the core and memory speeds?


----------



## Paramedic10

motivman said:


> 10900k will not make a difference in PR.


Ahh okay, got it.



0451 said:


> That’s a good score. No, it won’t improve with a 10900k. Did you adjust the voltages or just the core and memory speeds?


Well, to be honest I have no idea what I'm really doing, I'm super new to this so I'm sure I'm losing points in other places?

I had NVVDD and MSVDD at 1.125. Otherwise, just put sliders up in PX1. +150/+1350

Hitting Pwr, VOp and VRel in GPU-Z. 517W board power, about 46C at end of run. 18C ambient, max fans. PCI-E slot was like ~65.

*Edit: NVVDD was 1.1125, not 1.125*


----------



## motivman

Happy to report the Kingpin 1000W BIOS removes all stealth power limits on my shunted 2x8-pin card, but I drew 400W out of one of my 8-pin cables!!! LMAO... not looking to fry my card, flashing back to stock BIOS now. Was fun.


----------



## sultanofswing

The memory is set to run at full clocks on the XOC BIOS because, under LN2, the memory will have issues if it gets too cold, so this is a way of keeping the memory warmer.


----------



## geriatricpollywog

Paramedic10 said:


> Ahh okay, got it.
> 
> 
> 
> Well, to be honest I have no idea what I'm really doing, I'm super new to this so I'm sure I'm losing points in other places?
> 
> I had NVVDD and MSVDD at 1.125. Otherwise, just put sliders up in PX1. +150/+1350


Try Kingpin’s settings of nvvdd 1.125 and msvdd of 1.2.

Edit: and fbvdd of 1.40 (this will allow higher memory speeds)


----------



## Paramedic10

0451 said:


> Try Kingpin’s settings of nvvdd 1.125 and msvdd of 1.2.


I'll give it a go. I was hitting a wall with core OC. Anything over +150 was crashing me at the 38 second mark or whatever that really tough spot is to pass with the overhead view of the ship coming in.

*I also noted a few flickers with MEM past +1300, maybe more MSVDD will help that? Or is that the other slider (FBVDD)?*

It's also 0C or colder outside, I might open a window and put my PC on a table next to the screen LOL

*Edit: Is it safe to exceed 1.4 FBVDD for memory clocks? And by how much?*


----------



## ttnuagmada

So after playing hours of Cyberpunk and completing multiple 3Dmark runs, including the stress test, I've decided that the best way to test stability is COD Warzone. I had settings that I was sure were good after working in Cyberpunk and everything else, but Warzone crashed on me almost immediately after loading into a game.


----------



## markuaw1

https://www.3dmark.com/pr/672176 
*15 331 with NVIDIA GeForce RTX 3090(1x) and Intel Core i9-9900K Processor*


----------



## Paramedic10

markuaw1 said:


> https://www.3dmark.com/pr/672176
> *15 331 with NVIDIA GeForce RTX 3090(1x) and Intel Core i9-9900K Processor*


That's pretty good, about 100 over me, but your avg core speed was about 50 mhz faster, and MUCH colder average temps! <40 is great! Did you ice bucket the rad or cold ambient?

What were your offsets and volt settings?


----------



## markuaw1

Paramedic10 said:


> That's pretty good, about 100 over me, but your avg core speed was about 50 mhz faster, and MUCH colder average temps! <40 is great! Did you ice bucket the rad or cold ambient?
> 
> What were your offsets and volt settings?


Nope, just an open loop with one 360 rad, 60 mem. Man, I did that at about 4:30 AM. I think it was like +210 core and +1200 mem. Didn't mess with it too much, it was late as hell.


----------



## geriatricpollywog

Paramedic10 said:


> I'll give it a go. I was hitting a wall with core OC. Anything over +150 was crashing me at the 38 second mark or whatever that really tough spot is to pass with the overhead view of the ship coming in.
> 
> *I also noted a few flickers with MEM past +1300, maybe more MSVDD will help that? Or is that the other slider (FBVVD)??*
> 
> It's also 0C or colder outside, I might open a window and put my PC on a table next to the screen LOL


FBVDD is mem. Set it no higher than 1.43. Memory degrades faster than the core over time.

Interesting you say the memory flickers. I haven't noticed that with the 3090. JayzTwoCents mentioned during his most recent Kingpin LN2 video that the 3090 memory is error-correcting, which is why it doesn't artifact. My 2080 Ti would throw crazy artifacts (spikes coming off the ship) in PR at +1400 and higher.


----------



## Paramedic10

0451 said:


> FBVDD is mem. Set it no higher than 1.43. Memory degrades faster than the core over time.
> 
> Interesting you say the memory flickers. I haven't noticed that with the 3090. JayzTwoCents mentioned during his most recent Kingpin LN2 video that the 3090 memory is error-correcting, which is why it doesn't artifact. My 2080 Ti would throw crazy artifacts (spikes coming off the ship) in PR at +1400 and higher.


Could it be the core that's causing the occasional flickering then? I just figured it was the mem, but the core was also at the limit.


----------



## Sheyster

motivman said:


> happy to report kingpin 1000W bios removes all stealth power limits on my shunted 2X8 pin card, *but I drew 400W out of one of my 8 pin cables!!!!!!!!!!!!!!!! *LMAO... not looking to fry my card, flashing back to stock bios now. Was fun.


Anyone with a 2x8 card using this BIOS should take note of this before they decide to play.
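
For anyone wondering why 400W through a single connector is alarming: the PCIe spec rates an 8-pin auxiliary connector at 150W, delivered over three 12V pin pairs. Here's a rough sketch of the per-pin current (assumptions not from this thread: a 12V rail, three current-carrying pins, and perfectly even sharing; the ~9A figure is the common high-current Mini-Fit terminal rating):

```python
# Back-of-the-envelope: current through each 12V pin of a PCIe 8-pin
# connector, assuming a 12V rail, 3 current-carrying pin pairs, and
# perfectly even sharing (real cables won't share perfectly).

def per_pin_current(watts, volts=12.0, pins=3):
    """Amps through each 12V pin at a given connector draw."""
    return watts / volts / pins

print(f"150W (spec):     {per_pin_current(150):.1f} A/pin")
print(f"400W (observed): {per_pin_current(400):.1f} A/pin")
```

At 400W that's roughly 11A per pin, well past the ~9A high-current terminal rating, which is why melted cables are a real concern here.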


----------



## Falkentyne

ttnuagmada said:


> So after playing hours of Cyberpunk and completing multiple 3Dmark runs, including the stress test, I've decided that the best way to test stability is COD Warzone. I had settings that I was sure were good after working in Cyberpunk and everything else, but Warzone crashed on me almost immediately after loading into a game.


I've been saying this for months now. Warzone crashes anything you think you can bench stable.
I can play Overwatch at 200% render scale (1080p -> 4K) at +180 core. I can run Port Royal at +180 core / +650 memory for a 14730+ score.
But +180 core in Warzone literally insta-crashes. I need +135 for COD MW/Warzone.


----------



## Sheyster

ttnuagmada said:


> So after playing hours of Cyberpunk and completing multiple 3Dmark runs, including the stress test, I've decided that the best way to test stability is COD Warzone. I had settings that I was sure were good after working in Cyberpunk and everything else, but Warzone crashed on me almost immediately after loading into a game.


Warzone, CP 2077, BF5 are all good stability tests. Anytime I try a new locked OC or curve, I typically do a test run with SP 4K Optimized a few times first, then jump into Warzone for a final test.


----------



## VinnieM

Tried the XOC BIOS on my Gigabyte AORUS Master and it seemed to work, but the output connectors are all wrong. The HDMI 2.1 ports are recognized as DVI ports (only Full HD possible) and the DP ports (only 1 seemed to work) behave like HDMI 2.0 ports.
Anyone know of a BIOS that is compatible with these cards? I tried the AORUS Xtreme BIOS, but because that's a 3x8-pin card it resulted in even less power than the original BIOS.


----------



## Zurv

I've been using the KP XOC BIOS. Seems fine. Keeps my OC a little higher than it would be on the FTW3 BIOS.
I've been playing Cyberpunk for about 3 hours (4K, balanced DLSS, all RT on (Psycho)).
+1100 on mem and the core has settled at 2160.
It did make the card hotter. It was 39C before (after maybe hours of play) and now it is 43C.
Mem chips (however EVGA reads them) are 49C-51C.
How much power is the card using? No idea. The card has a shunt mod.

I'm using a 420 and 360 rad.
flow is about 6LPM.
I'm also using the Optimus FTW3 block.

pix of the setup:
[Official] NVIDIA RTX 3090 Owner's Club | Page 371 | Overclock.net


----------



## Falkentyne

Zurv said:


> I've been using the KP XOC bios. Seems fine. Keeps my OC a little higher than it would on the FTW3 bios.
> I've been playing cypberpunk for about 3 hours. (4k, balanced DLSS, all RT on (psycho)
> +1100 on mem and the core is settled at 2160.
> It did make the card hotter. It was 39C (after maybe hours of play) and it is 43C.
> mem chips (however EVGA reads them) are 49C-51C
> How much power is the card using? No idea. The card has a shunt mod.
> 
> I'm using a 420 and 360 rad.
> flow is about 6LPM.
> I'm also using the Optimus FTW3 block.


Wait. Didn't you have a founder's edition?


----------



## Arbustok

Zurv said:


> I've been using the KP XOC bios. Seems fine. Keeps my OC a little higher than it would on the FTW3 bios.
> I've been playing cypberpunk for about 3 hours. (4k, balanced DLSS, all RT on (psycho)
> +1100 on mem and the core is settled at 2160.
> It did make the card hotter. It was 39C (after maybe hours of play) and it is 43C.
> mem chips (however EVGA reads them) are 49C-51C
> How much power is the card using? No idea. The card has a shunt mod.
> 
> I'm using a 420 and 360 rad.
> flow is about 6LPM.
> I'm also using the Optimus FTW3 block.


What settings are you using?
I'm using a TUF OC +180 CORE +250 MEM and with DLSS on performance I'm around 55 fps avg.

I have every single setting at its maximum value, and it's unplayable with anything other than performance


----------



## mouacyk

Sounds like the XOC BIOS would be really good on a BIOS switch, for cards that have dual BIOSes. For anyone else, just don't leave any 3D workload running unattended, which I'm guilty of doing on my 1080 Ti with the Pascal XOC BIOS. I've just never had a water loop failure...


----------



## Zurv

Falkentyne said:


> Wait. Didn't you have a founder's edition?


Yep, I have two 3090 FEs in SLI too. They are in my desktop. The FTW3 is in my HTPC.


----------



## Zurv

Arbustok said:


> What settings are you using?
> I'm using a TUF OC +180 CORE +250 MEM and with DLSS on performance I'm around 55 fps avg.
> 
> I have every single setting at its maximum value, and it's unplayable with anything other than performance


The core offsets don't matter, it's what you end up with that does. What is your core running at?
I'm also running an 18-core Intel CPU at 5GHz. That might help. Yes, but under 50. Thank god for the G-Sync on the LG OLED 

Maybe the mem OC is helping a lot too? The Optimus block is doing a great job keeping the card cool.


----------



## motivman

motivman said:


> happy to report kingpin 1000W bios removes all stealth power limits on my shunted 2X8 pin card, but I drew 400W out of one of my 8 pin cables!!!!!!!!!!!!!!!! LMAO... not looking to fry my card, flashing back to stock bios now. Was fun.
> 
> 
> View attachment 2470669


Why in the world does this stupid card use so much power on 8 pin #1 and so little on #2? This is super annoying and makes this BIOS unusable for me. On the stock PNY BIOS, power usage is equal on both 8 pins, but on any 3x8-pin BIOS, pin #1 always uses so much more power than #2. I wish someone could write an unlimited 700W BIOS for 2x8-pin cards. SIGH. With the availability of the unlimited BIOS, I think I will sell my 2x8-pin card and either get a Gaming X Trio or a Strix TBH.


----------



## Arbustok

Zurv said:


> The core offsets don't matter, it is what you end up with that does.
> I'm also running a 18 core intel CPU at 5ghz. That might help. Yes, but under 50. Thank for the Gsync in the LG OLED


It averages 1950 MHz on the core while playing. Running a 9900K at 5.1 GHz on an LG C9, and yes, G-Sync makes it playable for me.

What's your avg fps with DLSS at balanced?


----------



## ttnuagmada

I think I've figured out how to make HDR work properly in Cyberpunk:

1. Set NV control panel to limited range.
2. Enable HDR in Windows.
3. Set NV control panel back to full range.

Voila!

This doesn't appear to crush any blacks, and the washed out look in windows/Cyberpunk is gone!


----------



## Zurv

Arbustok said:


> It avgs 1950 mhz on the core while playing, Running a 9900K at 5.1 GHZ on a LG C9 and yes Gsync makes it playable for me.
> 
> What's your avg fps with DLSS at balanced?


65-75? Having 200 MHz extra and 800-ish on the mem does have an impact. 
I did shut off all the stuff at the top of the graphics settings: blur, DoF, grain, etc.
Shadow resolution high and screen space reflections quality to high.


----------



## Sheyster

Some quick feedback regarding the KPE 1000W BIOS with an ASUS Strix: The fan curve supports full speed on all fans (3000 RPM) unlike the 520W KPE BIOS. This actually surprised me a bit.

So far I like the BIOS, I feel that as long as you fully test the limits and temps, you'll be just fine. I'm testing with a 60% PL currently. I have yet to see a power limit throttle in GPU-Z. Just getting started though.


----------



## Zurv

ttnuagmada said:


> I think I've figured out how to make HDR work properly in Cyberpunk:
> 
> 1. Set NV control panel to limited range.
> 2. Enable HDR in Windows.
> 3. Set NV control panel back to full range.
> 
> Voila!
> 
> This doesn't appear to crush any blacks, and the washed out look in windows/Cyberpunk is gone!


Boo, 4:4:4 is king 
What TV?
I'm using an OLED. That is "fine" for me (it still isn't great).
I'm setting max nits to 500 and the setting below it to 1.4.
I have tone mapping off (but sometimes I turn it on... I'm not clear which I like better. Note: HGIG doesn't work on Windows... yet).


----------



## Sheyster

ttnuagmada said:


> I think I've figured out how to make HDR work properly in Cyberpunk:
> 
> 1. Set NV control panel to limited range.
> 2. Enable HDR in Windows.
> 3. Set NV control panel back to full range.
> 
> Voila!
> 
> This doesn't appear to crush any blacks, and the washed out look in windows/Cyberpunk is gone!


Did you download the new Nvidia hotfix driver yet? That's supposed to fix HDR in CP 2077.


----------



## Arbustok

Zurv said:


> 65-75? having 200hz extra and 800-ish on the mem does have an impact.
> I did shut off all the stuff at the top of the grafix settings, blur, DoF, grain, etc.
> shadow rez high and screen space reflections quality to high


I'm gonna try lowering those settings, seems strange that 200 MHz more gives you about 40% more avg fps.


----------



## Falkentyne

Zurv said:


> yep, i have two 3090FEs in SLI too.  They are in my desktop. The FTW3 is in my HTPC.


If you "try" to flash the 1000W vbios on your FE with the --protectoff and -6 parameters using the December 13 version of Nvflash, please take a screenshot of the error message you get (since it won't flash), upload it, and give it to @bmgjet

DISCLAIMER: I TAKE NO RESPONSIBILITY FOR WHAT HAPPENS IF IT DOES FLASH


----------



## ttnuagmada

Zurv said:


> boo  4:4:4 is king
> what TV?
> I'm using a OLED. That is "fine" for me (it still isn't great)
> i'm setting max nits to 500 and the setting below it to 1.4
> I have tone mapping off. (but sometimes it turn it on.. i'm not clear what i like better. Note, HGIG doesn't work on windows.. yet.)


It's a Samsung CHG70 monitor. I wouldn't think this has any impact on chroma subsampling.


----------



## Zurv

Arbustok said:


> I'm gonna try and lower those settings, seems strange that 200 mhz more give you about 40% more avg fps


It is also the +1100 on mem.
Try lowering the other settings: shadow resolution and screen space reflections quality.
Oh, the game still tanks sometimes. At night, a lot of people, raining, a lot of lights... pretty... but ouch.


----------



## Arbustok

Zurv said:


> it is also the +1100 on mem.
> Try lowering the other settings. Shadow resolution and screen space reflections quality.
> oh the game still still tank sometimes. At night, a lot of people, raining, a lot of lights... pretty.. but ouch.


Will try, thanks for the info


----------



## Zurv

Falkentyne said:


> If you "try" to flash the 1000W vbios on your FE with --protectoff and -6 parameters with the december 13 version of Nvflash, please take a screenshot of the error message you get (since it won't flash) and upload it and give it to @bmgjet
> 
> DISCLAIMER: I TAKE NO RESPONSIBILITY FOR WHAT HAPPENS IF IT DOES FLASH


I don't plan on trying. The FE board is too different. I'd be really grumpy if I had to take apart the watercooling to switch the order of the cards to reflash.


----------



## Zurv

Arbustok said:


> Will try, thanks for the info


Don't try high mem settings like that. Even if it doesn't crash, an air-cooled card can't keep the mem cool enough. Maybe 500-750?
The Optimus comes with a nice ribbed backplate. Nice and thick, with a full Fujipoly pad covering the full card. (And I have a 120 blowing directly at it.)


----------



## Arbustok

Zurv said:


> don't try high mem settings like that. Even if it doesn't crash, a air cooled card isn't keep the mem cool enough. Maybe 500-750?
> the Optimus comes with a nice ribbed backplate. Nice and thick. With a full fujipoly pad coving the full card. (And i have a 120 blowing directly at it.)


I'm on water (EKWB) and I can't go any further than +250 on afterburner for mem or I get artifacts.
It seems the EK backplate is not that great


----------



## markuaw1

Arbustok said:


> I'm on water (EKWB) and I can't go any further than +250 on afterburner for mem or I get artifacts.
> It seems the EK backplate is not that great


FYI same strix on ek block with xoc 1000w bios +1250 mem NP


----------



## geriatricpollywog

Arbustok said:


> I'm on water (EKWB) and I can't go any further than +250 on afterburner for mem or I get artifacts.
> It seems the EK backplate is not that great


You can try a cool damp cloth on the backplate to see if it's the memory or the backplate.


----------



## ttnuagmada

Arbustok said:


> I'm on water (EKWB) and I can't go any further than +250 on afterburner for mem or I get artifacts.
> It seems the EK backplate is not that great


+1300 has been perfectly fine on my Strix/EK block.


----------



## DrunknFoo

_So what am I missing here... Or can anyone shed some light...

520W BIOS: TSE and FurMark peak up to 470W on the GPU-Z sensor (980-1000W system draw).

1000W BIOS @ 50% slider (albeit 500W): TSE and FurMark peak at 520W on the GPU-Z sensor (1020-1080W system draw).

Aside from everything being unlocked, what is the one thing that makes the two behave so differently?_


----------



## des2k...

motivman said:


> why in the world does this stupid card use so much power on 8 pin #1 and so little power on #2? This is super annoying, makes this bios unusable for me. On the stock PNY bios, power usage is equal on both 8 pins, but on any 3X8 pin bios, pin #1 always uses so much more power than #2. I wish someone could write an unlimited 700W bios for 2 pin cards. SIGH. With the availability of the unlimited bios, I think I will sell my 2 pin card and either get a gaming X trio or Strix TBH.


This doesn't surprise me; the 1080 Ti XOC on a 2x8-pin was the same, pin 1 was always 60W more than pin 2. Here's my Zotac 2x8-pin.


----------



## Falkentyne

motivman said:


> why in the world does this stupid card use so much power on 8 pin #1 and so little power on #2? This is super annoying, makes this bios unusable for me. On the stock PNY bios, power usage is equal on both 8 pins, but on any 3X8 pin bios, pin #1 always uses so much more power than #2. I wish someone could write an unlimited 700W bios for 2 pin cards. SIGH. With the availability of the unlimited bios, I think I will sell my 2 pin card and either get a gaming X trio or Strix TBH.


It's because you're shunt modded.
If you reverse the shunt mod, you should get even power draw.

There's some kind of weird power balancing going on. I still don't know the pattern, but it has to do with something with the SRC shunts, MVDDC, Slot and the 8 pins.
I still remember when my card was pretty well balanced (after my first shunt mod run), and it remained well balanced until I decided to completely re-pad the GPU side VRM's and VRAM with brand new pads.

You said it was because the VRM was running cooler, but it had absolutely NOTHING to do with the VRM's running cooler. The 8 pins are not linked to individual VRM banks at all. Anyway after my very first re-pad of the GPU side (putting 1.5mm Thermalright pads on both VRAM and VRM's), when I reassembled the card and ran Heaven (Note: I did NOT touch any of the shunts and I did not even physically stare at the 8 pin shunts either), I noticed instead of starting at 460W (which it did many times before I did the full backplate re-pad) and slowly climbing upwards (it would usually eventually get to around 530W, where I think that was where some throttling happened), the card INSTANTLY showed 540W, and was throttling slightly. I also had screenshots from before the re-pad, which also showed the two 8 pins being very close to each other (within 10W on GPU-Z, 15W on hwinfo with 1.36x multiplier).

Now of course the first thing I thought was..."Well maybe its drawing more power because of better pads". But after letting it run awhile, I noticed that the watts reading was _decreasing_. yes...Decreasing. Instead of pulling 540W (this is with GPU power * 1.36x), it was now pulling 511W and showing throttle. Eventually over a few more hours, I saw it only hitting 470W and showing throttle...

So I looked at the 8 pins, and saw 8 pin #1 was drawing a lot more than 8 pin #2. In fact, it seems like 8 pin #2 seemed to be decreasing with time and 8 pin #1 increasing.

So I took the card apart and applied a new layer of paint over the two shunts. Reassembled and it was back up to hitting 530W again (??)
...and then after awhile... decreasing... over several hours... just like before (it eventually stopped around 460W, after the 1.36x multiplier), with 8 pin #2 reporting a lot less than 8 pin #1, and 8 pin #1 reaching 174W... (which is the throttle point). 

Needless to say I had to wrestle with this for awhile. What I ended up doing was actually scraping the paint off (tedious) and reapplying new paint and then using the paint to stack a 5 mOhm shunt on the two 8 pins and MVDDC (I thought that shunt was GPU Chip Power at the time, but it was MVDDC), and then doing the same and putting 10mohm shunts on SRC, PCIE slot and Chip Power (which I thought was MVDDC) on the backplate side.

After doing this, I encountered the same issue. High draw on 8 pin #1 compared to 8 pin #2. So then I pried off the 5 mOhm shunts and noticed that there didn't seem to be good contact between the original shunt and the stacked shunt with the paste. But there was an amount of the dried paste in the "recessed" part of the original shunt on both sides. So I simply painted that over even better and bridged the shunts directly with silver paint (hoping that this would give better contact) and sure enough---that worked. Also at this time, I took the 10 mOhm shunts off chip power and SRC (but not PCIE slot) and repainted those.

The two 8 pins were now very close to each other. Extremely close. BUT I was still getting a throttle flag. _ON CHIP POWER_. It was reaching 290W! Yeah...

So I re-painted chip power and made sure there were no gaps anywhere. This helped a _LITTLE_, chip power was now down to 280W. But still signaling a minor throttling flag when it reached 280W. So I took the 10 mOhm shunt off SRC and PCIE slot and re-painted those. 

Well...NOW chip power was fine but now I was getting throttling AGAIN. on PCIE SLOT! It was reaching 79.9W. I noticed my paint job wasn't too good on PCIE slot so I covered around the shunt with Super 33+ tape and re-painted it extremely well. This worked. Well...first it was down to 74W. Then it slowly started dropping. 70...68...65...eventually it was down to 60W.

Cool right?

No...Now the 8 pins were acting up again!! 8 pin #1 was showing up at like 162W in GPU-Z and 8 pin #2 at 135W...and 8 pin #2 just kept dropping. Over a few days it then turned into 8 pin #1 showing 168W and 8 pin #2 115W....

Yeah.

So I took apart the card again, scraped the paint off the 8 pins and decided to instantly stack 5 mOhm shunts with the paint again.
This worked but the same problem after curing--8 pin #1 at 174, 8 pin #2 low. So I decided this time to paint the TOP of the stacked shunts and bridge them completely around the top and the sides carefully as best I could.

And that seemed to do the job. Now at max power draw (around 600W), I only have a 14 to 18W difference between the two 8 pins (ex: 170W vs 155W, or 148W vs 167W), and drawing up to 600W from the card when verifying with my kill-a-watt. And so far this has remained stable.

------------------------------------------
So I'm pretty sure there's some weird stuff going on with power balancing. If you notice, when the card is in 2D clocks (210 MHz, etc.), 8 pin #1 is always higher than 8 pin #2. So when shunted, something has to be in the right balance or the power draw of the pins gets balanced wrong.

So I'm guessing that your shunt mod is affecting the power balancing; it was balanced before, but with the MUCH higher allowance of the 1000W vbios, it starts favoring 8 pin #1. That is just a GUESS.

I take no responsibility if you un-do the shunt mod and still encounter the same problem. It's a theory, nothing more.
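
For anyone trying to follow the shunt saga above, the basic arithmetic of a stacked-shunt mod can be sketched numerically (a simplified model, not a claim about this exact card's sense circuitry; the 5 mOhm stock value is assumed). The controller reads the voltage drop across a shunt of assumed resistance, so lowering the real resistance lowers the reported power:

```python
# Simplified stacked-shunt model (illustrative only).
# The power controller computes power from V = I * R_shunt, assuming
# the stock shunt value. Stacking an equal shunt in parallel halves
# the effective resistance, halving the sensed drop and reported power.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def reported_power(actual_watts, stock_mohm, effective_mohm):
    """Power the firmware reports when the real shunt differs from stock."""
    return actual_watts * effective_mohm / stock_mohm

stock = 5.0                    # mOhm, assumed stock 8-pin shunt value
modded = parallel(stock, 5.0)  # 2.5 mOhm with an equal shunt stacked

print(reported_power(600.0, stock, modded))  # a real 600W reads as 300.0
```

Note that a bad joint on the stacked shunt pushes the effective resistance back up toward stock, which pushes the reported draw on that rail back up toward its throttle point, consistent with the slow drift described above as the paint cured.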


----------



## motivman

des2k... said:


> This doesn't surprise me, 1080ti xoc on 2x8pin was the same, pin1 was always 60w more than pin2. Here's my Zotac 2x8pin.
> View attachment 2470696


is your card shunt modded?


----------



## truehighroller1

I bought the asus ekwb 3090 today. Did I make a mistake or can I make this card a decent clocker / flash a different higher wattage BIOS?


----------



## motivman

Falkentyne said:


> It's because you're shunt modded.
> If you reverse the shunt mod, you should get even power draw.


I need to see what the power draw is like on a non shunt modded card (on 1000W kingpin bios) while running timespy extreme to determine if reversing my mod would be worth it. Anyone with a non shunt modded 2X8 pin card willing to run timespy extreme and post GPU-Z screenshots?


----------



## des2k...

motivman said:


> is your card shunt modded?


no


----------



## motivman

truehighroller1 said:


> I bought the asus ekwb 3090 today. Did I make a mistake or can I make this card a decent clocker / flash a different higher wattage BIOS?


Mistake. Get a 3x8-pin card, especially now that the 1000W BIOS with no limits is out in the wild.


----------



## motivman

des2k... said:


> no


Great. Do you mind running Timespy Extreme and posting your GPU-Z screenshot showing your max power draw on PCIE, pin 1 and pin 2? I want to see how much power you are using and if you are power limited at all.


----------



## pat182

ttnuagmada said:


> I think I've figured out how to make HDR work properly in Cyberpunk:
> 
> 1. Set NV control panel to limited range.
> 2. Enable HDR in Windows.
> 3. Set NV control panel back to full range.
> 
> Voila!
> 
> This doesn't appear to crush any blacks, and the washed out look in windows/Cyberpunk is gone!


will try it


----------



## motivman

pat182 said:


> will try it


Just download hotfix driver 460.97; HDR is fixed. I am using a PG27UQ and PG35VQ, and HDR looks perfect with this driver. Black levels are fixed.


----------



## pat182

Sheyster said:


> Did you download the new Nvidia hotfix driver yet? That's supposed to fix HDR in CP 2077.


this forum is a goldmine of info haha


----------



## pat182

motivman said:


> Just download hotfix driver 460.97, HDR is fixed. I am using PG27UQ and PG35VQ, and HDR looks perfect with this driver. Black levels fixed.


same monitor pg27uq WOW SICK trying asap


----------



## truehighroller1

motivman said:


> mistake, get a 3X8 pin card, especially now that the 1000W bios with no limits is out in the wild.


Damn it. Are you serious? Why even release a water-cooled GPU if it's power limited from the rip? Dumbest decision ever, seriously. I can take it back. Should I get the EVGA FTW3 Ultra, or shoot for the *ROG Strix GeForce RTX 3090 White OC Edition*?


----------



## HyperMatrix

0451 said:


> FBVVD is mem. Set no higher than 1.43. Memory degrades faster than the core over time.
> 
> Interesting you say the memory flickers. I haven’t noticed that with the 3090. Jays2cents mentioned during his most recent Kingpin LN2 video that the 3090 memory is error-correcting, which is why it doesn’t artifact. My 2080ti would throw crazy artifacts (spikes coming off the ship) in PR at +1400 and higher.


I’ve had artifacts on several RTX 3090 cards when memory was set too high, but they were less than crashy and got worse as temps went up. This is one case of it happening (screenshot attached).











mouacyk said:


> Sounds like the XOC BIOS would be really good on a BIOS switch, for cards that have dual BIOSes. For anyone else, just don't leave any 3D work load running unattended -- of which I'm guilty of doing on my 1080 TI with Pascal XOC BIOS. Just never had a water loop failure...


If you haven’t shunt modded and your warranty is still intact, I would NOT recommend keeping XOC on one of your BIOS switches, because even if you weren’t using it and your card dies, and they see that the BIOS was on there, your warranty would be void.



Falkentyne said:


> It's because you're shunt modded.
> If you reverse the shunt mod, you should get even power draw.


For what it’s worth (it may be unrelated to this situation), the XOC BIOS for me was pulling approximately 35% more on power connectors 1 and 2 than on 3. Not sure why. This is on a Kingpin card with no shunts/mods.


----------



## motivman

truehighroller1 said:


> Damn it. Are you serious? Why even release a water cooler gpu if it's limited from the rip??????????? Dumbest decision ever seriously. I can take it back, should I get the EVGA FTW3 Ultra, or shoot for the *ROG Strix GeForce RTX 3090 White OC Edition?*


Personally, I'm looking to sell my PNY reference card and get either a Strix or a Gaming X Trio. So between your two choices I would get the White Strix.


----------



## des2k...

motivman said:


> great. Do you mind running timespy extreme and posting your GPU-Z screenshot showing your max power draw on PCIE, Pin1 and Pin2? want to see how much power you are using and if you are power limited at all.


Oh, I don't think I ever tweaked for this bench on the RTX 3090. I had 2130 @ 993mV set from Port Royal with a 780W limit, but I noticed the XOC power limit gets triggered 30W early.

Here's the log:
*https://file.io/PhL5h3sBDsfC*


----------



## Falkentyne

motivman said:


> I need to see what the power draw is like on a non shunt modded card (on 1000W kingpin bios) while running timespy extreme to determine if reversing my mod would be worth it. Anyone with a non shunt modded 2X8 pin card willing to run timespy extreme and post GPU-Z screenshots?


I wrote a super long edit to the post you were replying to. Check it.


----------



## bmgjet

HyperMatrix said:


> If you haven’t shunt modded and your warranty is still intact, I would NOT recommend keeping XOC on one of your bios switches. Because even if you weren’t using it and your card dies, and they see that the bios was on there, your warranty would be void.


Could always externally flash it, if you wanted to be dishonest.


----------



## pat182

pat182 said:


> same monitor pg27uq WOW SICK trying asap


what midtone value are you guys using


----------



## des2k...

motivman said:


> great. Do you mind running timespy extreme and posting your GPU-Z screenshot showing your max power draw on PCIE, Pin1 and Pin2? want to see how much power you are using and if you are power limited at all.


not sure if the first link didn't work, but yeah the log 
*https://file.io/M1yjjM9a6zko*


----------



## HyperMatrix

bmgjet said:


> could always externally flash it if you wanted to be dishonest


I think anyone who can do that would have already shunt modded their cards and voided warranty. Haha.


----------



## motivman

pat182 said:


> what midtone value are you guys using



Maximum Brightness – 1000
Tone-Mapping Midpoint – 0.95
Paper White – 100


----------



## motivman

des2k... said:


> not sure if the first link didn't work, but yeah the log
> *https://file.io/M1yjjM9a6zko*


file is not available.


----------



## truehighroller1

motivman said:


> file is not available.



I get "The file you requested has been deleted."


----------



## pat182

motivman said:


> Maximum Brightness – 1000
> Tone-Mapping Midpoint – 0.95
> Paper White - 100


OK, I've downloaded it. Borders in HDR settings seem blacker, but I don't have any before/after pics. Do we still need to be in limited color, or should it work in full RGB 10-bit 98Hz HDR?


----------



## Falkentyne

HyperMatrix said:


> I’ve had artifacts on several RTX 3090 cards when it was set too high but less than crashy and got worse as temps went up. This is one case of it happening but appears in
> View attachment 2470708
> 
> 
> 
> If you haven’t shunt modded and your warranty is still intact, I would NOT recommend keeping XOC on one of your bios switches. Because even if you weren’t using it and your card dies, and they see that the bios was on there, your warranty would be void.
> 
> 
> 
> For what it’s worth, and it may be unrelated to this situation, but the XOC bios for me was pulling approximately 35% more on power connector 1 and 2 than it was on 3. Not sure why. This is on a kingpin card with no shunts/mods.


This is a good point, but I think there is SOME sort of relation between SRC, the 8-pins, and the OTHER shunts.
I think it was @DrunknFoo who tested that there is multimeter continuity (between the shunts themselves!) between 8-pin #1 and the SRC shunts; between 8-pin #2, SRC, and chip power shunts; and then between SRC and the PCIe slot, and between the MVDDC and PCIe slot shunts. And I am 100% sure that "Input PP Source Power" stands for "Input Power Plane Source Power".

Another reason is just by looking at the 8-pin rail limits in the early bios dump that @bmgjet did:

RTX3090-PowerTables-Dump (spreadsheet on docs.google.com)
You will see on the FE (Nvidia at the bottom), 8 pin default power limit is 145.8W, and max power limit is 162W.
_AND_, PCIE default power limit is 75W and max 86.2W.

HOWEVER, the 8 pins do NOT throttle at 162W. I've blown past 162W even before shunt modding.
It throttles at 175W !!! Which is the SAME VALUE as the SRC power limit!

Look at this superposition run before I ever modded my card.










114% TDP, notice both 8 pins leapfrogged 162W? (and since 148W is default and 162W is max, that would seem to imply that's from the TDP slider %).

Also notice something here:

162W + 162W + 75W= 399W...

But if the max PCIE reached is 69W, then the balance across the two 8 pins has to equal 331W. And clearly it isn't 165+165 if one topped out at 175W and the second topped out at 170W, so the peaks can't have happened at the same moment. Of course I didn't have GPU-Z running, so no idea when each pin peaked out... which would have been a lot more helpful...

Another thing: if you look in the shunt mod thread where people did NOT mod the PCIE slot shunt, and my post just earlier, PCIE throttles hard at 79.9W, even though the bios dump says 86W...

So I really think there's some big time shenanigans going on with the power balancing, like, if something is just slightly off, it starts favoring that rail for power or something....


----------



## motivman

pat182 said:


> ok, ive downloaded it, borders in HDR settings seems blacker, but i dont have any before after pics, do we still need to be in limited color or should it work in full RGB 10bit 98hz hdr


Personally, I use 144Hz HDR on my monitor. I don't really notice a difference in games AT ALL. On the desktop it's a different matter, but I usually turn off HDR when not gaming.


----------



## ttnuagmada

motivman said:


> Just download hotfix driver 460.97, HDR is fixed. I am using PG27UQ and PG35VQ, and HDR looks perfect with this driver. Black levels fixed.


the driver didn't fix the issue for me


----------



## des2k...

motivman said:


> file is not available.








TinyUpload link: s000.tinyupload.com


----------



## MangoMunchaa

mardon said:


> Mate you're 6th in the world with that hardware combo! Good going for a 2x 8pin!


haha I'm actually 2nd, the guy above me just did 5 runs so he's saturated the leaderboards! Thanks heaps, I'm very happy with it


----------



## MangoMunchaa

pat182 said:


> any point of oc my 9900k pass 5ghz ? does it raise the score ?


Nah, I don't think so. I am fairly confident your RAM latency does, though, so if you play around with your RAM you'll probably get an increase in score. In that test I was running 4133 16-16-16-34 with tuned subtimings.


----------



## motivman

des2k... said:


> TinyUpload link: s000.tinyupload.com


Thanks for the log file. It's a little hard to read, but it looks like your max power usage was around 550W. A GPU-Z screenshot with your max values shown would be more helpful, but thanks anyway.
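For anyone else squinting at these logs: GPU-Z sensor logs are plain CSV, so a few lines of Python can pull the peaks out instead of eyeballing. A minimal sketch; the column names below are assumptions, so check the header row of your own log for the exact labels your GPU-Z version writes.

```python
import csv

# Assumed column labels -- GPU-Z versions and cards vary, so check your
# log's first line and adjust these.
RAIL_COLUMNS = ["Board Power Draw [W]", "PCIe Slot Power [W]",
                "8-Pin #1 Power [W]", "8-Pin #2 Power [W]"]

def peak_readings(log_path, columns):
    """Scan a GPU-Z sensor log (CSV) and return the peak value per column."""
    peaks = {c: 0.0 for c in columns}
    with open(log_path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f, skipinitialspace=True):
            for c in columns:
                try:
                    peaks[c] = max(peaks[c], float(row[c]))
                except (KeyError, TypeError, ValueError):
                    continue  # missing column or non-numeric cell
    return peaks
```

Point it at the log and print the result, and you get the per-rail maxima that a GPU-Z screenshot would have shown.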


----------



## pat182

ttnuagmada said:


> the driver didn't fix the issue for me


From my testing, it looks like the game prefers YCbCr 12-bit 82Hz over RGB.


----------



## Sheyster

HyperMatrix said:


> For what it’s worth, and it may be unrelated to this situation, but the XOC bios for me was pulling approximately 35% more on power connector 1 and 2 than it was on 3. Not sure why. This is on a kingpin card with no shunts/mods.


I'm not sure if you saw my first feedback post about the 1000W KPE BIOS on my stock Strix. The fan curve works well, as opposed to the 520W KPE BIOS where the second fan is bugged at 66% max. Very odd if you ask me!

After some additional testing, IMHO the 1000W BIOS is remarkably tame on a stock Strix card. As expected, the third 8-pin is hardly registering anything, and the other two are within 20W of each other. PCI-E is low as well, just like the 520W BIOS. At a 60% limit and ~2100 core (drops some bins with temp increase) I see no throttling whatsoever with reasonable temps. Tested this in SP4KO and Warzone. I am now seriously considering keeping it as a daily driver. 😃


----------



## ViRuS2k

Sheyster said:


> I'm not sure if you saw my first feedback post about the 1000W KPE BIOS on my stock Strix. The fan curve works well, as opposed to the 520W KPE BIOS where the second fan is bugged at 66% max. Very odd if you ask me!
> 
> After some additional testing, IMHO the 1000W BIOS is remarkably tame on a stock Strix card. As expected, the third 8-pin is hardly registering anything, and the other two are within 20W of each other. PCI-E is low as well, just like the 520W BIOS. At a 60% limit and ~2100 core (drops some bins with temp increase) I see no throttling whatsoever with reasonable temps. Tested this in SP4KO and Warzone. I am now seriously considering keeping it as a daily driver. 😃


Is your card on air or water? If air, and it's safe as long as you set the correct power limit, then I see no reason not to use it. I might use it on my MSI Trio X 3090 with a target set @ 80C max for temps. I'm trying to find a BIOS that will give me no throttling lol, or bin jumps.

BTW, what power does this BIOS pull at stock, with no MSI Afterburner running? Does it pull the 1000W at default??? lol. Or does it have to be reduced after you flash, otherwise you might melt stuff lol.


----------



## Nizzen

ViRuS2k said:


> Is your card air or water ? if air and its safe as long as you set the correct power limit then i see no reason to not use it  might use it on my MSI trio X 3090 with a target set @80c max for temps.  im trying to find a bios that will give me no throttling lol or bin jumps.
> 
> btw what power does this bios pull when at stock with no msi running or at default does it pull the 1000w at default ??? lol or does it have to be reduced after you flash otherwise you might melt stuff lol


This BIOS doesn't magically draw 1.5V on the GPU. Max vGPU is still ~1.1V.


----------



## Aristotle

ViRuS2k said:


> Is your card air or water ? if air and its safe as long as you set the correct power limit then i see no reason to not use it  might use it on my MSI trio X 3090 with a target set @80c max for temps.  im trying to find a bios that will give me no throttling lol or bin jumps.
> 
> btw what power does this bios pull when at stock with no msi running or at default does it pull the 1000w at default ??? lol or does it have to be reduced after you flash otherwise you might melt stuff lol


I'm currently testing out the 1000W vbios on a stock Strix on air. It's incredible for performance. At 50% power limit it already outperforms the 520W vbios. My temps get up to 83C though. It's really scary. If Afterburner doesn't start up for some reason and it actually runs at 100% power limit, it'll fry the card for sure. I'm not sure I'm going to keep using it.

EDIT: it's 1000W by default.

EDIT 2: seems to be around 550W no matter what power limit you set. I ran the card at 100% and the temps and power usage were the same as when I ran at 50%.


----------



## bmgjet

You can always place a .bat file in the Windows startup folder that runs the "nvidia-smi -pl 500" command.
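The same idea as a Python sketch, for anyone scripting their startup differently (assumes `nvidia-smi` is on PATH and runs elevated; 500W is just an example cap, not a recommendation):

```python
import subprocess

POWER_LIMIT_W = 500  # example cap only; pick what your cables/cooling can handle

def build_power_limit_cmd(watts):
    # "nvidia-smi -pl <watts>" sets the board power limit; it needs admin
    # rights and resets on reboot, hence the startup-folder trick.
    return ["nvidia-smi", "-pl", str(watts)]

def apply_power_limit(watts=POWER_LIMIT_W):
    # call this from a login/startup script to re-apply the cap each boot
    subprocess.run(build_power_limit_cmd(watts), check=True)
```

A one-line .bat does the same job; the point is simply that the limit must be re-applied every boot.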


----------



## geriatricpollywog

Paramedic10 said:


> Could it be the core that's causing the occasional flickering then? I just figured it was the mem, but the core was also at the limit.


Flipping both LLC dipswitches added ~15-20mhz to my average core speed in Port Royal with no added core voltage. This neat trick increased my score by 150 pts.


----------



## Aristotle

Also, I have a question. GPU-Z says I pulled a max of 403W. That's because it isn't registering the third 8-pin, right? Do I just add another 150W to that to get the actual board power? So it's around 550W board power?


----------



## Keninishna

des2k... said:


> This doesn't surprise me, 1080ti xoc on 2x8pin was the same, pin1 was always 60w more than pin2. Here's my Zotac 2x8pin.
> View attachment 2470696


Where is it pulling pin 3 numbers from if it's just a 2x8-pin card? When I tried the 520W BIOS on my 2x8-pin Gigabyte card, pin 3 was identical to pin 1, and pin 2 was half of pin 1 in power draw. Although there were some odd things going on, like my MVDDC reporting something like 460W power draw lol, and SRC reporting 250W.


----------



## Aristotle

Interesting discovery: temps and power consumption stayed the same going from a 50% power limit to a 65% power limit. I'm going to keep increasing the power limit. Maybe the card is already fully pegged at 50%, in which case accidentally running at 100% might be fine.

EDIT: Just ran the card at 100% power limit. It had the exact same result as running the card at 50%. Can't speak for other cards, but for the Strix at least, this 1000W BIOS is basically a 550W BIOS with no temp restrictions. So it should be safe to run on Strix cards 24/7 (I'm not responsible if you fry your card).


----------



## long2905

The power consumption depends on your set clock and voltage; you can't expect [email protected] to consume 1000W, for example.

For now I'm setting [email protected], which boosts a bit more to 1905 in game, and power usage hovers around 440-460W if HWiNFO and FPSMon reports are correct.


----------



## Aristotle

long2905 said:


> the power consumption depends on your set clock and voltage. you cant expect [email protected] to consume 1000w for example.
> 
> for now im setting [email protected] which boost a bit more to 1905 in game and power usage hovers around 440-460w if hwinfo and fpsmon reports are correct.


You're right. Setting the power limit to 100% just makes sure that power is never the limiting factor. The next limitation must be that 1.1v limit. In which case, this bios might actually be pretty safe.


----------



## Falkentyne

Aristotle said:


> You're right. Setting the power limit to 100% just makes sure that power is never the limiting factor. The next limitation must be that 1.1v limit. In which case, this bios might actually be pretty safe.


The bios is pretty safe as long as you don't have a bad thermal pad or thermal hotspot on the card, and you need to watch the temps SERIOUSLY, especially on cards that don't have GPU VRAM and GPU VRM temp monitoring, since the protections are disabled. The problem is the power draw through 8-pin cables of insufficient gauge. 16 AWG PCIe cables can go up to 300W per cable, although I'm not sure I would want to daily that. The Seasonic 12-pin FE cable is rated up to 500W officially, but gauge-wise it can handle at least 600W without any issues whatsoever (been tested).
Theoretically, if a FE were flashable with the 1000W vbios, the cable should be able to withstand max power for _short_ bench runs (2/3 * 1000W, or 667W). So that should tell you the importance of wire gauge. Of course this vbios isn't flashable on FE cards yet (and even if it becomes "flashable", with Bmgjet's help, we still don't know if it will _WORK_ on a FE card--it might not even boot!), so the question is how people with regular 8-pin cables will do on their own cards. 18 AWG 8-pin cables--stay away from this vbios for sure!









Tester in danger: How many watts can the 12-pin Micro-Fit connector of the NVIDIA GeForce RTX 3080 and RTX 3090 survive without melting? | igor'sLAB (www.igorslab.de)


----------



## long2905

I'm using these custom Teflon 8-pin cables. Any idea about their durability?


----------



## mattxx88

long2905 said:


> im using these custom teflon 8pin cables. any idea about their durability?
> 
> View attachment 2470735


Are they one run direct to your PSU, or just extensions?
For my setup I opted for MDPC-X 15 AWG cables, direct to the PSU.


----------



## geriatricpollywog

I melted a CableMod 8 pin pushing 600w into a Vega64. Good luck with a 1000w bios on your dual 8-pins with custom cables.


----------



## mattxx88

0451 said:


> I melted a CableMod 8 pin pushing 600w into a Vega64. Good luck with a 1000w bios on your dual 8-pins with custom cables.


Omg, do you know what AWG CableMod uses?


----------



## Paramedic10

https://www.3dmark.com/pr/676918

15,302 with colder air. I kept bumping the memory speed up and it ran stable at 1500+. However, anything above +150 core seems to fail at 38 seconds. I was looking at the average core clocks, and I feel they should be higher at these temps compared to other high-end runs.

Do you think this is bin limited (the card just won't do it), or does it need more volts? Would 37-38C not allow 2160+ MHz? I see other runs at 2200MHz with the same temps, but I don't know what you guys are running for volts.

NVVDD 1.15
FBVDD 1.4
MSVDD 1.15
Level 1, On for both

The OLED was showing as high as 1.3V for NVVDD during the run despite setting it at 1.15 in classified. Not sure about that? Can you set it higher than 1.15 and still be safe at 35C? 
Was seeing 527w+ on the OLED screen. Wasn't running any other monitoring software, trying to squeeze everything out of it.


----------



## long2905

mattxx88 said:


> are they one way direct to your psu? or just extension?
> for mine setup i opted for mdpcx 15awg cables, direct to psu


Straight to my PSU. Not sure about the AWG, so checking here. Mine is an SFF PC with a 750W Corsair SFX PSU.


----------



## defiledge

The highest-rated SFX PSU I can get atm is 850W. I hope this will be enough for a 600W card.


----------



## motivman

0451 said:


> I melted a CableMod 8 pin pushing 600w into a Vega64. Good luck with a 1000w bios on your dual 8-pins with custom cables.


I have been playing with the 1000W bios on my 2X8 pin shunt modded reference card. I have pulled almost 450W from one 8 pin connector (running timespy extreme) on multiple occasions, checked my cables (on both ends) and they look just like new. I am using HX1200 Corsair PSU.


----------



## OC2000

Arbustok said:


> I'm on water (EKWB) and I can't go any further than +250 on afterburner for mem or I get artifacts.
> It seems the EK backplate is not that great


I've lost +300 (from 1200 on air to 900) on the Strix backplate. I don't even get artifacts; I just hard crash to desktop. I have noticed the manual is now completely different from the manual I used when preparing the backplate. All the pad thicknesses have changed and additional components are now covered. This EK block has been nothing short of disappointing. Hoping the Aqua Computer block with the active backplate does a better job.


----------



## long2905

Does a lower operating temperature decrease power consumption, or does it simply allow the chip to clock higher?


----------



## HyperMatrix

long2905 said:


> does lower operating temperature decrease power consumption? or it simply allow the chip to clock higher?


Both


----------



## motivman

I am guessing pulling 300W from a single 8-pin is no issue, right? With the existence of the 1000W BIOS, these cards can theoretically pull 300W from each 8-pin and 100W from the PCIe slot, but what is the absolute max an 8-pin can pull with 16 AWG cables? I have been benching my shunt-modded 2x8-pin card all day with the XOC BIOS, and one of my 8-pins has pulled upwards of 400W during Time Spy Extreme and seems to be fine? I inspected the cables on both ends and they look like new, and the cables do not get hot to the touch while benching either. My little research shows that each pin can handle 9A at 12V, so about 324W is the max safe pull for an 8-pin?
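The 324W figure is simple pin arithmetic. A sketch of the math; the three-power-pin layout and the ~9A-per-pin rating are assumptions based on common Mini-Fit Jr. HCS specs, so check your connector's datasheet:

```python
# An 8-pin PCIe connector carries three 12V pins (the rest are grounds/sense).
POWER_PINS = 3
AMPS_PER_PIN = 9.0   # approximate HCS terminal rating, per pin (assumed)
RAIL_VOLTS = 12.0

max_connector_watts = POWER_PINS * AMPS_PER_PIN * RAIL_VOLTS
print(max_connector_watts)  # 324.0
```

Note this is a connector-terminal ceiling, not a cable rating; the wire gauge and crimp quality can lower it.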


----------



## defiledge

motivman said:


> I am guessing pulling 300W from a single 8 pin is no issue right? with the existence of 1000W bios, these cards can theoretically pull 300W from each 8 pin and 100W from the PCIE slot, but really what is the absolute max an 8 pin can pull with 16awg cables? I have been benching my shunt modded 2X8 pin card all day with the XOC bios, and one of my 8 pins has pulled upwards of 400W during timespy extreme and seems to be fine? inspected the cables on both ends, and looks like new. Cables do not get hot to touch while benching either. My little research shows that each cable can handle 9A at 12v, so about 324W is the max safe pull an 8 pin can do?


I think the limit for 16 AWG at standard length is about 280W for an 8-pin; no guarantees beyond that. Edit: well, technically there's no guarantee above 150W with our voided warranties lol.


----------



## bmgjet

motivman said:


> I am guessing pulling 300W from a single 8 pin is no issue right? with the existence of 1000W bios, these cards can theoretically pull 300W from each 8 pin and 100W from the PCIE slot, but really what is the absolute max an 8 pin can pull with 16awg cables? I have been benching my shunt modded 2X8 pin card all day with the XOC bios, and one of my 8 pins has pulled upwards of 400W and seems to be fine? inspected the cables on both ends, and looks like new. Cables do not get hot to touch while benching either. My little research shows that each cable can handle 9A at 12v, so about 324W is the max safe pull an 8 pin can do?


It's too hard to give a number on what's safe, but the actual connections are going to be the weakest link: where the pins are crimped onto the wires, or where the slip joint goes over the pin in the plug.

You'll get a bad connection which creates resistance, and more resistance creates more heat, which raises the resistance further. That sets off a chain reaction of it getting worse and worse until something melts.

But going purely off the wires: 300W will create a 0.1V drop from resistance and raise the temperature 35C over ambient at standard length. With airflow it would be fine, if you don't mind the voltage drop. There are plenty of online calculators to work out that stuff.


----------



## lokran88

OC2000 said:


> I've lost +300 (from 1200 on air to 900) on the Strix back plate. I don't even get artifacts. II just hard crash to desktop. I have noticed now the manual is completely different to the manual I used when preparing the back plate. All the pad thicknesses have changed and additional components are now covered. This EK block has been nothing short of disappointing. Hoping the Aqua Computer block with active backplate does a better job.


Just unpacked it. Still got some work to do, but I'll assemble it tonight.
Feels and looks pretty good to me.


----------



## Gebeleisis

MangoMunchaa said:


> Thanks to the new 1000W BIOS I was able to finally get my card to (mostly) stretch it's legs!
> Galax SG 3090 with EK Waterblock - https://www.3dmark.com/pr/672369
> View attachment 2470575


Awesome result ! GRATS !


----------



## ViRuS2k

Falkentyne said:


> The bios is pretty safe as long as you don't have a bad thermal pad or thermal hotspot on the card, and you need to watch the temps SERIOUSLY, especially on cards that don't have GPU VRAM and GPU VRM temp monitoring, since the protections are disabled. The problem is the power draw through 8-pin cables of insufficient gauge. 16 AWG PCIe cables can go up to 300W per cable, although I'm not sure I would want to daily that. The Seasonic 12-pin FE cable is rated up to 500W officially, but gauge-wise it can handle at least 600W without any issues whatsoever (been tested).
> Theoretically, if a FE were flashable with the 1000W vbios, the cable should be able to withstand max power for _short_ bench runs (2/3 * 1000W, or 667W). So that should tell you the importance of wire gauge. Of course this vbios isn't flashable on FE cards yet (and even if it becomes "flashable", with Bmgjet's help, we still don't know if it will _WORK_ on a FE card--it might not even boot!), so the question is how people with regular 8-pin cables will do on their own cards. 18 AWG 8-pin cables--stay away from this vbios for sure!
>
> Tester in danger: How many watts can the 12-pin Micro-Fit connector of the NVIDIA GeForce RTX 3080 and RTX 3090 survive without melting? | igor'sLAB (www.igorslab.de)


Can you tell me if my cables are OK?
I have these cables: Super Flower Sleeve Cable Kit Pro - Black/White,

going directly from my PSU to the card as direct connections from my 2000W 8Pack PSU...

Cheers.
As for temp monitoring on GPU memory etc., it should be fine, as I won't be changing the memory voltage at all and I'm running a mild OC on it.
It's the GPU core that's most important, as the limiting factor on these cards is not bandwidth.


----------



## geriatricpollywog

mattxx88 said:


> omg, do you know what awg cablemod use?


Not sure, but they were the soft flexible cables with a fabric sheath. Copper is not soft and flexible. This should be your first clue that your vanity cable should not be used with a 1000W bios.

Your $1900 self-contained supercomputer is designed to protect itself in the event a bad cable is connected. The XOC bios potentially bypasses these protections and voids your warranty at the same time.


----------



## dante`afk

mattxx88 said:


> omg, do you know what awg cablemod use?


16awg


----------



## des2k...

Gebeleisis said:


> Awesome result ! GRATS !


So the average freq on this run was 2130. How are you guys pulling 15k+ in Port Royal? Is it a large offset / high voltage? I just run with 2130 @ 993mV +400 mem and only got 14,500.


----------



## Cholerikerklaus

It's the same problem for me. I did a run with a 2279MHz average in Port Royal, but only a 15,600 score. No one can explain the reason to me.


----------



## nyk20z3

Where is the HOF, Matrix and Lighting 3090?


----------



## Baasha

Will flashing the 1000W BIOS on my Strix GPUs help with OC'ing, or just throw out more heat? I'm on air on both cards. I want to try it first on my Z490 rig with just one GPU; it has a 1600W PSU, so it should handle the full 1000W.


----------



## Zurv

Baasha said:


> Will flashing the 1000W BIOS on my Strix GPUs help with OC'ing or just throw out more heat? I'm on air on both cards. I want to try it first on my Z490 rig with just one GPU - it has a 1600W PSU so it should handle the full 1000W.


Put them under water first, sir.

If you want to use something, maybe try this:
VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp
It is the 520W bios.

Remember, at some point you'll need voltage too, and your now maxed-out fans will be sucking up some of it.


----------



## pat182

nyk20z3 said:


> Where is the HOF, Matrix and Lighting 3090?


Probably won't ever be; margins are so low that making a good high-end card would cost too much and people wouldn't buy it.

Just look at the KPE this gen; it's only a glorified FTW3 Ultra with a minor power bump.

But I'd like a Matrix, so my Strix could show full power with a better ASUS bios.


----------



## bl4ckdot

There will be a Galax HOF


----------



## zkareemz

MSI Suprim X. I don't see a huge difference... except the fans are going crazy loud!

520W









620W (1000W @ 62%)


----------



## arvinz

Hey guys,

I have 2 Strix 3090s on the way to me today... what's the best and quickest method to test each to see which is the better overclocker?


----------



## Arbustok

I reapplied my EK backplate on my TUF OC and I'm able to get my mem OC to +1300, but the backplate gets extremely hot to the touch; I'm afraid to keep that OC.

Is anyone with an EK block monitoring the temps on those backside mem chips?

I got to 13,998 in PR with the stock bios. I'm guessing that's pretty decent, but I'm still tempted to sell this TUF, get a Strix, and get an Aquacomputer block with the active backplate whenever stock is stable.


----------



## Nizzen

Cholerikerklaus said:


> It's the same problem for me. I did a run with a 2279MHz average in Port Royal, but only a 15,600 score. No one can explain the reason to me.


Try a higher CPU speed and higher memory speed.


----------



## ttnuagmada

Arbustok said:


> I reapplied my EK backplate on my TUF OC and I'm able to get my mem oc to +1300 but the backplate gets extremely hot to the touch, I'm afraid to keep that oc.
> 
> Is anyone with an ek block monitoring the temps on those backside mem chips?
> 
> I got to 13998 in PR with the stock bios, I'm guessing that's pretty decent but still I'm tempted to sell this TUF and get a Strix and get an Aquacomputer Block with the active backplate whenever stock is stable


I checked after some benchmarking with an IR thermometer and was only reading about 40C with a ~20C ambient on my EK Strix backplate. I'll test it one night after an extended Warzone session whenever I get a chance. This was running +1300.


----------



## ttnuagmada

des2k... said:


> So average freq on this run was 2130. How are you guys pulling 15k+ in port royal? Is it a large a offset / high voltage ? I just run with 2130 @ 993mv +400 mem and got 14500 only.


I got 15,210 averaging 2160 with a +1325 mem overclock. I would think you have some memory headroom, but I also think there's a CPU clockspeed/memory-latency aspect to it. I noticed that people with similar or even lower GPU clocks were scoring higher than me with the same CPU, but they also had 0.1-0.2GHz higher clocks on their CPU.

I got mine by trying to find the VF curve sweet spot. [email protected] seems to be the sweet spot for me; I was averaging 2160. From that point I have to start upping the volts pretty significantly to get the clocks any higher, which causes power throttling that gives me an overall lower average.


----------



## Zogge

For me the 1000W BIOS at 60% feels very stable (better than the Kingpin BIOS). It always shows the limit as voltage or load now, never power.
It looks like the power draw is around 550-560W based on my Corsair AX1600i USB readings for the GPU 12V rails. With the PCIe slot at around 40W for the Strix, I believe it does indeed hit 600W.
I also doubt you can fry the card, as the voltage will set the limit for the processor. I do not know if the memory chips are at risk if they do not somehow share the 1.1V rail; otherwise it would in theory be possible to fry them going for, say, +2500 or something like that?
My temp is now around 58-60 degrees when gaming on this setting (+125 / +750) @600W, compared to around 55 degrees on the Kingpin 520W BIOS at the same settings and 48-49 degrees on the Strix 480W BIOS at the same settings.


----------



## EniGma1987

Falkentyne said:


> Wasn't his card an eVGA card with fuses?
> 
> Putting a wire across a shunt on a board that has a 10 amp fuse on PCIE slot is asking for trouble. And a 5 mOhm stacked shunt on 8 pins can easily wind up pulling more than the 20A fuse limit of 240W per 8 pin if you aren't being throttled by something.
> 
> Someone said there's a way to short the fuses to remove the 10A (or 20A) limit but it was way way back and I don't remember if it was here, in the shunt mod thread or the evga forums.


Power simply flows across the fuse, and when it blows, power can't flow anymore. So placing a large enough wire over it should bypass the fuse entirely; removing the fuse and replacing it with a wire would do the same thing.
While the PCIe slot should have a lower power draw and be fused to protect the motherboard, the 8-pin's 20A fuse is a little low. The fuse should be more like 25A, since you can draw 300W through each connector just fine as long as your PSU also uses 16-gauge wires.
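The fuse ratings being tossed around here reduce to simple P = I × V on the 12V rail. A quick sketch of the arithmetic (the amp ratings are the ones mentioned in this thread; real trip behavior depends on the fuse type and temperature):

```python
# Sanity-check of the fuse limits discussed above, using P = I * V.
# The amp ratings (10A slot fuse, 20A stock and 25A suggested 8-pin fuses)
# come from the posts in this thread, not from any board schematic.

RAIL_VOLTAGE = 12.0  # the PCIe slot and 8-pin connectors deliver 12V

def fuse_limit_watts(amps: float) -> float:
    """Continuous power a fuse of the given rating allows on the 12V rail."""
    return amps * RAIL_VOLTAGE

print(fuse_limit_watts(10))  # slot fuse: 120.0 W
print(fuse_limit_watts(20))  # stock 8-pin fuse: 240.0 W
print(fuse_limit_watts(25))  # suggested 8-pin fuse: 300.0 W
```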


----------



## Arbustok

ttnuagmada said:


> I checked after some benchmarking with an IR thermometer and was only reading about 40C with a 20C or so ambient on my EK Strix backplate. I'll test it one night after an extended Warzone session whenever I get a chance. This was running +1300.


Awesome! 
Thank you!

I think mine is a bit hotter than that; I don't think 40C would be as uncomfortable to touch as my backplate is when running +1300, but I'm guessing it can't be that far off.

Do you have any fans cooling the backplate or anything of that sort?

Mine has very little room to breathe and that might be an issue ?


----------



## ttnuagmada

Arbustok said:


> Awesome!
> Thank you!
> 
> I think mine is a bit hotter than that; I don't think 40C would be as uncomfortable to touch as my backplate is when running +1300, but I'm guessing it can't be that far off.
> 
> Do you have any fans cooling the backplate or anything of that sort?
> 
> Mine has very little room to breathe and that might be an issue ?
> 
> View attachment 2470780
> View attachment 2470781



I don't have a fan blowing directly on it, but i do have a pretty massive case (caselabs SMA8), so that might play a role.


----------



## geriatricpollywog

Wow, people are actually throwing caution to the wind and using the 1000W BIOS as a daily driver for no performance improvement. When testing this BIOS, I noticed the card idles at 100W. Perhaps this is to “keep it hot,” as Vince said during LN2 runs.


----------



## ttnuagmada

0451 said:


> Wow, people are actually throwing caution to the wind and using the 1000W BIOS as a daily driver for no performance improvement. When testing this BIOS, I noticed the card idles at 100W. Perhaps this is to “keep it hot,” as Vince said during LN2 runs.


Yeah, you scared me away from it. I might put it on my second BIOS to bench with or something, but that's about it.


----------



## geriatricpollywog

ttnuagmada said:


> Yeah, you scared me away from it. I might put it on my second BIOS to bench with or something, but that's about it.


I am only installing it for LN2 runs and removing it after. If your card dies with the bios installed the RMA dept might take notice.


----------



## Sheyster

0451 said:


> Flipping both LLC dipswitches added ~15-20mhz to my average core speed in Port Royal with no added core voltage. This neat trick increased my score by 150 pts.


More LLC typically means more heat due to less vdroop. I assume you will switch it back for gaming? Good tip though!


----------



## DrunknFoo

motivman said:


> I am guessing pulling 300W from a single 8-pin is no issue, right? With the existence of the 1000W BIOS, these cards can theoretically pull 300W from each 8-pin and 100W from the PCIe slot, but really, what is the absolute max an 8-pin can pull with 16AWG cables? I have been benching my shunt-modded 2x8-pin card all day with the XOC BIOS, and one of my 8-pins has pulled upwards of 400W during Time Spy Extreme and seems to be fine? I inspected the cables on both ends and they look like new. The cables do not get hot to the touch while benching either. My little research shows that each wire can handle 9A at 12V, so about 324W is the max safe pull for an 8-pin?


My cables warm up slightly, but it's no cause for concern (CableMod cables). GPU draw of roughly 800W. Check for warmth nearest the connectors to see if they heat up; the shielding or sleeving used may also play a factor...


----------



## Sheyster

0451 said:


> I am only installing it for LN2 runs and removing it after. If your card dies with the bios installed the RMA dept might take notice.


Yep, using the XOC BIOS is always accepting some risk. This said, I feel pretty safe using it because:

1. I'm not an idiot.

2. Based on my testing it's relatively tame with the Strix; the worst I would expect is a lockup, not a meltdown.

3. The default fan curve is quite aggressive and fully supports the Strix fans up to 3000+ RPM.

4. Even with the 520W KPE BIOS I was bumping the power limit while gaming in 4K120. With this BIOS I never do.

5. As @bmgjet said, use nvidia-smi to limit the PL at boot; it's super easy to do and fool-proof after that.
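A minimal sketch of the nvidia-smi approach from point 5, assuming a hypothetical 400W target (run from an elevated prompt; the value must sit inside the Min/Max range the flashed BIOS reports, and it has to be reapplied each boot, e.g. via a startup task):

```shell
# Check the enforceable range first (look for Min/Max Power Limit):
nvidia-smi -q -d POWER

# Cap the board power limit (400 W here is only an example value):
nvidia-smi -pl 400
```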


----------



## markuaw1

ttnuagmada said:


> Yeah, you scared me away from it. I might put it on my second BIOS to bench with or something, but that's about it.


Yep, running a Strix with an EK block. The first BIOS is the Kingpin 520W for daily use and the second is the Kingpin XOC 1000W BIOS for benchmarks. Best of both worlds. Merry Christmas to me.


----------



## DrunknFoo

0451 said:


> Wow, people are actually throwing caution to the wind and using the 1000W BIOS as a daily driver for no performance improvement. When testing this BIOS, I noticed the card idles at 100W. Perhaps this is to “keep it hot,” as Vince said during LN2 runs.


Yeah, it does. Does it help much with LN2 though?

It's not really needed since you can use a torch... unlike CPU warming plates for the back side of motherboards for cold-boot issues (is that even a thing anymore?)


----------



## sultanofswing

The reason it is at 100w at idle is because the MEMORY is at full clocks. This is done to keep the memory warm under LN2 as letting the memory get too cold causes issues as the Memory does not like being as cold as the core past a certain point.


----------



## DrunknFoo

sultanofswing said:


> The reason it is at 100w at idle is because the MEMORY is at full clocks. This is done to keep the memory warm under LN2 as letting the memory get too cold causes issues as the Memory does not like being as cold as the core past a certain point.



Ooh, I wasn't aware of this. Thanks!


----------



## Sheyster

ViRuS2k said:


> BTW, what power does this BIOS pull at stock, with no MSI Afterburner running? Does it pull 1000W at default??? lol Or does it have to be reduced after you flash, otherwise you might melt stuff lol


I'll do a run with all the stock settings while monitoring things closely. Since voltage won't exceed ~1.08v I don't expect a nuclear meltdown.


----------



## geriatricpollywog

sultanofswing said:


> The reason it is at 100w at idle is because the MEMORY is at full clocks. This is done to keep the memory warm under LN2 as letting the memory get too cold causes issues as the Memory does not like being as cold as the core past a certain point.


Interesting. I notice GDDR6 degrades pretty fast even with minimal OC. My 2080ti went from +1500 to +1250 and my 3090 is already down to +1350 from +1500. I can no longer pass PR at +1500 without adding memory voltage. Would keeping the memory at full clocks contribute to this degradation?


----------



## sultanofswing

0451 said:


> Interesting. I notice GDDR6 degrades pretty fast even with minimal OC. My 2080ti went from +1500 to +1250 and my 3090 is already down to +1350 from +1500. I can no longer pass PR at +1500 without adding memory voltage. Would keeping the memory at full clocks contribute to this degradation?


Well, in the past, on cards like Turing, the memory voltage always ran at 1.34V, so I would assume it is the same on Ampere. With the XOC stuff you are not only always at full voltage on the memory, you are now also always at full clocks as well.
As for whether that degrades the memory, that is hard to say without a few samples to test the theory.

I personally will not run an XOC BIOS 24/7, as I feel running the memory at full speed 24/7 is a complete waste. On my 2080 Ti KINGPIN I just flashed the XOC BIOS to the LN2 position and only used it when benchmarking; for 99% of gaming the card was run with zero overclock.


----------



## des2k...

Sheyster said:


> I'll do a run with all the stock settings while monitoring things closely. Since voltage won't exceed ~1.08v I don't expect a nuclear meltdown.


Well, I know what I'll use this winter to warm up the room: Time Spy Extreme, just [email protected]. The thing is continually pushing 550W+, 52C on the EK block.


----------



## Zurv

More playing with the XOC KPE BIOS.
Clearly tons of power isn't helpful without the voltage, i.e., I was looking for the point where the limiting factor was voltage. For me that was 47% on the Afterburner power slider.

Note, this is on a FTW3, shunt-modded and with an Optimus block.
I'd be interested what that point is for other non-shunt-modded people.


----------



## Keninishna

Is it weird my card draws 100 watts at idle and my memory never down clocks on stock bios? I did see there is a known driver bug where 4K gsync displays cause higher power draw at idle so that may be why.


----------



## sultanofswing

Keninishna said:


> Is it weird my card draws 100 watts at idle and my memory never down clocks on stock bios?


Take it out of performance mode in the nvidia control panel and restart the PC.


----------



## sultanofswing

Zurv said:


> more playing with the XOC KPE bios.
> clearly tons of power isn't helpful without the voltage. ie, i was looking for the point where the limiting factor was voltage. For me that was 47% on the afterburner power slider.
> 
> Note, this is on a FTW3, shunt-modded and with an Optimus block.
> I'd be interested what that point is for other non-shunt mod'd people.


The limiting factor will always be thermal headroom, voltage, and chip quality.
You need temps below 40C to start seeing any benefit from adding voltage; the colder you can get it, the better.


----------



## geriatricpollywog

sultanofswing said:


> The limiting factor will always be thermal headroom, voltage, and chip quality.
> You need temps below 40C to start seeing any benefit from adding voltage; the colder you can get it, the better.


This. I posted the bios so people would stop talking about their power limits. It’s not meant to be a daily driver. A 520w limit makes sense because this is the point on the graph where thermal, voltage, and chip quality limits intersect the power limit.


----------



## SolarBeaver

Hey guys, is it worth flashing the Suprim X BIOS (or any other 450W+) to a Trio if I'm interested in gaming only, particularly CP2077 performance? How much gain should I get on air, given I don't want it to run like a jet engine? Also, are the power phases on the PCB strong enough? I've read some worrying stuff about them being only 45A each and pretty much the lowest of all 3090s (only 18 phases total, where most high-wattage AIBs have 20+).
Would really appreciate any help!


----------



## Sheyster

Zurv said:


> Note, this is on a FTW3, shunt-modded and with an Optimus block.
> I'd be interested what that point is for other non-shunt mod'd people.


I have not determined a precise percentage yet. I just dial in at 60% on my stock Strix. 520W KPE was hitting the limit, now I can run SP4KO and play Warzone for an hour and never touch it. Feels just like the good old days again.


----------



## geriatricpollywog

Sheyster said:


> I have not determined a precise percentage yet. I just dial in at 60% on my stock Strix. 520W KPE was hitting the limit, now I can run SP4KO and play Warzone for an hour and never touch it. Feels just like the good old days again.


Can you track your memory degradation? Since the XOC bios runs memory at 100% (and 100w) I’m curious to see how that impacts max mem clock over time.


----------



## changboy

Zurv said:


> more playing with the XOC KPE bios.
> clearly tons of power isn't helpful without the voltage. ie, i was looking for the point where the limiting factor was voltage. For me that was 47% on the afterburner power slider.
> 
> Note, this is on a FTW3, shunt-modded and with an Optimus block.
> I'd be interested what that point is for other non-shunt mod'd people.


Can you tell me the score you can achieve with your FTW3 shunt-modded card in Port Royal, please?


----------



## Sheyster

0451 said:


> Can you track your memory degradation? Since the XOC bios runs memory at 100% (and 100w) I’m curious to see how that impacts max mem clock over time.


I could but my use case isn't ideal for that.

This is purely a gaming rig, I use it for literally nothing else. I have a MacBook and i7 Thinkpad for all my other computing needs. The gaming rig is off most of the day and turned on for 1 to 3 hours for gaming. Some days I don't game at all! I also typically use the "Prefer Maximum Performance" power mode in NVCP which forces core and memory speed up.

Perhaps someone else can do this with a more typical use case?


----------



## truehighroller1

I bought the EKWB ASUS 3090; can I flash a different BIOS to this card safely? I have a Corsair AX1600i PSU.


----------



## Zogge

Running the 1kW BIOS at 60% with no memory OC for daily use should not really degrade the memory. It is no different from a mining rig running 24/7 without an OC on the memory. Those are at 100% all the time as well, and I think that is what the card is built for.


----------



## mirkendargen

Zurv said:


> boo  4:4:4 is king
> what TV?
> I'm using a OLED. That is "fine" for me (it still isn't great)
> i'm setting max nits to 500 and the setting below it to 1.4
> I have tone mapping off. (but sometimes it turn it on.. i'm not clear what i like better. Note, HGIG doesn't work on windows.. yet.)


HGIG is confusing. No "support" is needed, there's nothing extra in the signal going to the TV to make it an "HGIG signal". HGIG is just dynamic tone mapping REALLY off. The TV will show (or attempt to show and clip) exactly whatever luminance is sent to it. This is what you want on games like Cyberpunk that let you specify a max luminance.

The "off" in "dynamic tone mapping off" is referring to the "dynamic" part, not the "tone mapping" part. It is still tone mapping, just using a static 4000nit as the peak luminance and scaling down from there rather than picking whatever is brightest on each frame and scaling off it (dynamic tone mapping on).


----------



## Falkentyne

So for some reason I was able to loop Heaven at +180 on the core this morning without the driver crashing. Core undegradation?
Is undegradation a thing? (+180 core, +100% voltage slider (1.10v-1.081v randomly), +500 memory. Max core temp: 77C).

I was never able to go higher than +165 before. Memory still likes +500 for daily and +600 max.

Of course, Warzone / Modern Warfare still insta-crashes on +180 and likes +135 ....


----------



## des2k...

0451 said:


> Can you track your memory degradation? Since the XOC bios runs memory at 100% (and 100w) I’m curious to see how that impacts max mem clock over time.


Nothing to worry about, mem at full speed idle is under 40w


----------



## HyperMatrix

0451 said:


> Interesting. I notice GDDR6 degrades pretty fast even with minimal OC. My 2080ti went from +1500 to +1250 and my 3090 is already down to +1350 from +1500. I can no longer pass PR at +1500 without adding memory voltage. Would keeping the memory at full clocks contribute to this degradation?


My memory is always at full clocks without the 1000W bios. Was the same with my Pascal Titan Xs that I had for 4 years with a memory OC as well. My GPU also never clocks down below 1700-1800. 144Hz 4K G-Sync nonsense. There are only 2 reasons you should ever see memory degradation over its normal life span. One would be too much heat, and we're not talking about an extra 10C or 20C here. The other would be too much voltage and electromigration. But if you're not running above stock voltage on the memory, and your memory temps are under 60C, you really shouldn't be seeing any "degradation."



Keninishna said:


> Is it weird my card draws 100 watts at idle and my memory never down clocks on stock bios? I did see there is a known driver bug where 4K gsync displays cause higher power draw at idle so that may be why.


My card is the same. Not sure if related to "prefer maximum performance" or "optimal power" setting in NVCP. Also hooked up to a 4K 144Hz G-Sync display. Mine draws a bit more than 100W at idle.


----------



## mirkendargen

HyperMatrix said:


> My memory is always at full clocks without the 1000W bios. Was the same with my Pascal Titan Xs that I had for 4 years with a memory OC as well. My GPU also never clocks down below 1700-1800. 144Hz 4K G-Sync nonsense. There are only 2 reasons you should ever see memory degradation over its normal life span. One would be too much heat, and we're not talking about an extra 10C or 20C here. The other would be too much voltage and electromigration. But if you're not running above stock voltage on the memory, and your memory temps are under 60C, you really shouldn't be seeing any "degradation."
> 
> 
> My card is the same. Not sure if related to "prefer maximum performance" or "optimal power" setting in NVCP. Also hooked up to a 4K 144Hz G-Sync display. Mine draws a bit more than 100W at idle.


I have an LG CX OLED running 4k/120hz/Gsync, an LG LCD 32" monitor running at 1440p/60hz/Gsync, and a cheapo Dell monitor running at 1080p/60hz. The LG LCD monitor supports up to 165hz, but if I run it above 60hz my GPU memory never clocks down. This is probably what's happening to you guys.


----------



## geriatricpollywog

mirkendargen said:


> I have an LG CX OLED running 4k/120hz/Gsync, an LG LCD 32" monitor running at 1440p/60hz/Gsync, and a cheapo Dell monitor running at 1080p/60hz. The LG LCD monitor supports up to 165hz, but if I run it above 60hz my GPU memory never clocks down. This is probably what's happening to you guys.


This is what I’m seeing: With default LN2 bios, my card idles at about 20w as reported by the little television on the shroud. With the 1000w bios my card idles at 100w.


----------



## mirkendargen

0451 said:


> This is what I’m seeing: With default LN2 bios, my card idles at about 20w as reported by the little television on the shroud. With the 1000w bios my card idles at 100w.


I meant the people with high idles on stock BIOS's. It wouldn't surprise me at all if the 1000W BIOS doesn't downclock the memory because why would it? Clock changes introduce instability, when nothing matters but stability at the highest speeds possible, you want it running balls to the wall at all times.


----------



## Aristotle

Just did 2 hours of cyberpunk 2077 on the 1000W bios. 100% powerlimit 165 core / 1000 mem. Max temp was 80C. Max power draw was around 380W according to gpuz. So I guess around 530W when you add pin 3. I don't think you can damage your card on the 1000W bios as long as you don't increase the voltage or more than 1 fan dies.


----------



## reflex75

Falkentyne said:


> So for some reason I was able to loop Heaven at +180 on the core this morning without the driver crashing. Core undegradation?
> Is undegradation a thing? (+180 core, +100% voltage slider (1.10v-1.081v randomly), +500 memory. Max core temp: 77C).
> 
> I was never able to go higher than +165 before. Memory still likes +500 for daily and +600 max.
> 
> Of course, Warzone / Modern Warfare still insta-crashes on +180 and likes +135 ....


Same Nvidia drivers?
Or your chip gets better with age like good wine


----------



## sultanofswing

The higher the core temp the faster it will degrade. On my 2080ti kingpin I absolutely will not let it go above 45c.
I've fed that card 1.3v and up to 2300mhz.


----------



## Aristotle

I'd use the 520W BIOS over the 1000W BIOS if the mid fan weren't limited to 2000 RPM. For some reason, it's difficult for me to find a stable overclock with the 500W BIOS. So for now I guess I'll stick with the 1000W BIOS and see how it goes. I'm going to apply liquid metal to my chip in a week; hopefully that'll help with the high temps.


----------






## Pepillo

Today I received my Bykski block from China, some photos:

Happy with the result: my maximum temperature has dropped from 70°C to less than 50°C with a silent profile on the fans, plus 30MHz more at the same settings after playing half an hour of Valhalla at 4K Ultra:


I'm on the 500W EVGA BIOS on my Trio; I have to try the 520W and even the 1000W BIOS now that I'm on water. Anyway, it's a PC to play on, not to bench.


----------



## Falkentyne

reflex75 said:


> Same Nvidia drivers?
> Or your chip gets better with age like good wine


Yes same drivers. Vulkan beta 457.44 since every other driver (besides the 456.98 hotfix) seems to suck for most people in some way.

May have something to do with the shunt mod and the old power balancing shenanigans since redoing the two 8 pins.

I've noticed that I seem to be slowly drawing more power from the wall (showing on the power meter and on gpu-z) and the SRC power draw has steadily gotten lower since I re-did the shunt mod on December 3rd.

SRC was showing 97W max looping Heaven right after the last 8 pin rework. Now it's down to 59W under exact same conditions (I didn't even retouch the SRC shunt paint at that time).


----------



## GAN77

Pepillo said:


> Today I received my Bykski block from China, some photos:


What's your chip-to-water delta? If I read the graphs correctly, 9 degrees?


----------



## Zurv

mirkendargen said:


> HGIG is confusing. No "support" is needed, there's nothing extra in the signal going to the TV to make it an "HGIG signal". HGIG is just dynamic tone mapping REALLY off. The TV will show (or attempt to show and clip) exactly whatever luminance is sent to it. This is what you want on games like Cyberpunk that let you specify a max luminance.
> 
> The "off" in "dynamic tone mapping off" is referring to the "dynamic" part, not the "tone mapping" part. It is still tone mapping, just using a static 4000nit as the peak luminance and scaling down from there rather than picking whatever is brightest on each frame and scaling off it (dynamic tone mapping on).


HGIG needs to be supported by the platform (in our case Windows) and the game.
With HGIG, metadata is sent to the TV and the computer/game does the tone mapping. Right now, using the HGIG setting on a TV with a PC is the same as tone mapping being off.


----------



## Sheyster

mirkendargen said:


> I meant the people with high idles on stock BIOS's. It wouldn't surprise me at all if the 1000W BIOS doesn't downclock the memory because why would it? Clock changes introduce instability, when nothing matters but stability at the highest speeds possible, you want it running balls to the wall at all times.


Based on the data bmgjet showed us, virtually every limit range is tweaked in the 1000W BIOS. Hell, even the FAN CURVE is different than the KPE 520W! No surprise at all to me regarding memory range being different.


----------



## Sheyster

Aristotle said:


> Just did 2 hours of cyberpunk 2077 on the 1000W bios. 100% powerlimit 165 core / 1000 mem. Max temp was 80C. Max power draw was around 380W according to gpuz. So I guess around 530W when you add pin 3. I don't think you can damage your card on the 1000W bios as long as you don't increase the voltage or more than 1 fan dies.


You saved me some time testing at 100% PL! 

I would still be worried about using it on a 2 x 8-pin card. Based on what we're hearing about one connector being overloaded, I don't think I'd do it.


----------



## Aristotle

Sheyster said:


> You saved me some time testing at 100% PL!
> 
> I would still be worried about using it on a 2 x 8-pin card. Based on what we're hearing about one connector being overloaded, I don't think I'd do it.


Oh yeah, I should have added that. It should be fine for the Strix specifically, since it's a 3x8-pin card and seems to have good mem and VRM cooling.

Can't say the same for other cards, especially 2x8-pin cards.


----------



## truehighroller1

I bought the EKWB ASUS 3090; can I flash a different BIOS to this card safely? I have a Corsair AX1600i PSU, so it can handle the higher power draw.


----------



## Carillo

Could we change the name of this thread to : "could i flash & is it safe" ?


----------



## HyperMatrix

Carillo said:


> Could we change the name of this thread to : "could i flash & is it safe" ?


As long as the answer is: "If you have to ask...you probably shouldn't."


----------



## Sheyster

Carillo said:


> Could we change the name of this thread to : "could i flash & is it safe" ?


I hear what you're saying here. BUT, 2 x 8-pin owners should be well aware of the risks associated with this new 1000W BIOS. It works but power balancing apparently goes out the window. It really could be a fire hazard, you should not minimize that risk. It is real.


----------



## Pepillo

GAN77 said:


> What's your delta chip water? If I read the graphs correctly, 9 degrees?


Yes, in that case it was nine degrees.


----------



## Falkentyne

Sheyster said:


> I hear what you're saying here. BUT, 2 x 8-pin owners should be well aware of the risks associated with this new 1000W BIOS. It works but power balancing apparently goes out the window. It really could be a fire hazard, you should not minimize that risk. It is real.


The only person who complained about power balancing was using it on a shunt modded 3090.
I haven't seen anyone mention bad power balancing when using it on a 2 pin NON-SHUNT MODDED 3090 (yet).

I already posted a huge wall of text that no one seemed to read (or maybe they didn't understand what I was saying) mentioning that shunt mods do REALLY weird things to the power balancing if even ONE shunt is slightly out of phase. And when you get it _IN_ Phase (that means, in phase with your current vbios) and then you throw in a Kingpin vbios that has all the limits out in outer space, yeah. Expect this to happen.

SRC has continuity with apparently ALL of the shunts, MVDDC has continuity with PCIE Slot, 8 pin #3 has continuity with GPU Chip Power, etc etc...

Need people with 2 pin stock 3090's to post their 8 pin power draws at 50-60% power limit on this vbios.


----------



## truehighroller1

I bought the EKWB ASUS 3090; can I flash a different BIOS to this card safely? I have a Corsair AX1600i PSU, so the rails won't have an issue.


----------



## GAN77

Pepillo said:


> Yes, in that case it was nine degrees.


Good result! Do you know your flow rate?


----------



## motivman

Falkentyne said:


> The only person who complained about power balancing was using it on a shunt modded 3090.
> I haven't seen anyone mention bad power balancing when using it on a 2 pin NON-SHUNT MODDED 3090 (yet).
> 
> I already posted a huge wall of text that no one seemed to read (or maybe they didn't understand what I was saying) mentioning that shunt mods do REALLY weird things to the power balancing if even ONE shunt is slightly out of phase. And when you get it _IN_ Phase (that means, in phase with your current vbios) and then you throw in a Kingpin vbios that has all the limits out in outer space, yeah. Expect this to happen.
> 
> SRC has continuity with apparently ALL of the shunts, MVDDC has continuity with PCIE Slot, 8 pin #3 has continuity with GPU Chip Power, etc etc...
> 
> Need people with 2 pin stock 3090's to post their 8 pin power draws at 50-60% power limit on this vbios.


The 1000W BIOS is perfectly fine on a non-shunt-modded 2x8-pin card. It uses about 270W max on pin #1, around 195W max on pin #2, and 80W max on PCIe, so the max power limit is around 545-550W. Still not enough IMHO, but perfectly safe. On a shunt-modded card I saw it peak around 450W on pin #1, so definitely a no-no, but all power limits were defeated.


----------



## Falkentyne

motivman said:


> The 1000W BIOS is perfectly fine on a non-shunt-modded 2x8-pin card. It uses about 270W max on pin #1, around 195W max on pin #2, and 80W max on PCIe, so the max power limit is around 545-550W. Still not enough IMHO, but perfectly safe. On a shunt-modded card I saw it peak around 450W on pin #1, so definitely a no-no, but all power limits were defeated.


Wait, who tested it on a non-shunt-modded card?
Did someone else post their power draw and I missed it? Or did you undo the shunt mod without telling me?
I also thought the max power limit (at 100% TDP / 1000W) was 667W on a 2x8-pin card? Where did you get 550W from?


----------



## scaramonga

motivman said:


> The 1000W BIOS is perfectly fine on a non-shunt-modded 2x8-pin card. It uses about 270W max on pin #1, around 195W max on pin #2, and 80W max on PCIe, so the max power limit is around 545-550W. Still not enough IMHO, but perfectly safe. On a shunt-modded card I saw it peak around 450W on pin #1, so definitely a no-no, but all power limits were defeated.


Hmm, so what are the pitfalls and pluses of using this BIOS in layman's terms? That is, what benefits for those with 2x8-pin, and what dangers for the same? All laid out in a nice neat table format, with the risks highlighted for all to see, ending with a final recommendation as to whether it's worth it in all reality.


----------



## pat182

Strix on air: just did a 2100MHz avg in Port Royal, curve 2100MHz @ 1.05V, never hit the power limit. Got 14,444 without a VRAM OC; might get a new PB, my last one was 14,646 with a VRAM OC.

edit: putting VRAM at +1200 got me to 550W on the 520W BIOS and got a crash because it was too hardcore lol


----------



## Alelau18

Falkentyne said:


> The only person who complained about power balancing was using it on a shunt modded 3090.
> I haven't seen anyone mention bad power balancing when using it on a 2 pin NON-SHUNT MODDED 3090 (yet).
> 
> I already posted a huge wall of text that no one seemed to read (or maybe they didn't understand what I was saying) mentioning that shunt mods do REALLY weird things to the power balancing if even ONE shunt is slightly out of phase. And when you get it _IN_ Phase (that means, in phase with your current vbios) and then you throw in a Kingpin vbios that has all the limits out in outer space, yeah. Expect this to happen.
> 
> SRC has continuity with apparently ALL of the shunts, MVDDC has continuity with PCIE Slot, 8 pin #3 has continuity with GPU Chip Power, etc etc...
> 
> Need people with 2 pin stock 3090's to post their 8 pin power draws at 50-60% power limit on this vbios.


You ask and you shall receive


Using the 1000W BIOS on a non-shunted 2x8-pin 3090 (ASUS TUF), no other changes than the power limit unless stated otherwise, and just using GPU-Z readings and Port Royal:

50% PL: ~165W 8pin#1, ~115W 8pin#2, ~47W PCIe
75% PL: ~250W 8pin#1, ~165W 8pin#2, ~67W PCIe
100% PL: ~255W 8pin#1, ~165W 8pin#2, ~70W PCIe (lots of oscillation, VRel limited)
100% PL: ~275W 8pin#1, ~180W 8pin#2, ~70W PCIe (maxed voltage slider, PerfCap: VRel VOp)
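Since total board power keeps coming up in this thread, here's a quick sketch of the arithmetic everyone is doing: summing the per-rail GPU-Z readings. The rail values are the maxima reported in the posts above; the 150W-per-8-pin / 75W-slot figures are the standard connector specs, included only for context.

```python
# Rough total-board-power estimate from per-rail GPU-Z readings.
# Rail values below are the ~max numbers reported in this thread
# for a non-shunted 2x8-pin card on the 1000W XOC BIOS.
def total_board_power(pin1_w, pin2_w, pcie_slot_w):
    """Sum the two 8-pin rails and the PCIe slot reading."""
    return pin1_w + pin2_w + pcie_slot_w

total = total_board_power(270, 195, 80)
print(total)  # 545 -> matches the "around 545-550W" ceiling quoted above

# For context: each 8-pin is spec'd for 150W and the slot for 75W,
# so 270W on one connector is ~1.8x over spec (cable quality matters).
print(round(270 / 150, 2))
```

This is why people report a practical cap around 545-560W even with the limit set to 1000W: the per-rail balancing, not the headline TDP, is what actually binds.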


----------



## Arbustok

Alelau18 said:


> You ask and you shall receive
> 
> 
> Using the 1000W BIOS on a not shunted 2x8pin 3090 (ASUS TUF) no other changes than the power limit unless stated otherwise and just using GPU-Z readings and Port Royal:
> 
> 50% PL: ~165W 8pin#1, ~115W 8pin#2, ~47W PCIe
> 75% PL: ~250W 8pin#1, ~165W 8pin#2, ~67W PCIe
> 100% PL: ~255W 8pin#1, ~165W 8pin#2, ~70W PCIe (lots of oscillation, VRel limited)
> 100% PL: ~275W 8pin#1, ~180W 8pin#2, ~70W PCIe (maxed voltage slider, PerfCap: VRel VOp)


Nice!!! thanks for the info!

Have you tried any other bios on your TUF? how do they compare?
Are you gonna keep that 24/7?
what are your temps like?

Gracias!!


----------



## Aristotle

pat182 said:


> Strix on air, just did 2100mhz AVG in port royal,curve 2100mhz @ 1.05v never went power limit, got 14 444 without Vram oc, might get a new PB , last one was 14646 with vram oc
> 
> edit: putting vram +1200 got me to 550watts on the 520bios and got a crash cause it was to hardcore lol


I have similar numbers as you. On the 520W bios I got 14644 PB. Using the 1000W bios I managed to get 14773 +195 core / +1120 mem.

Most importantly though, the 1000W bios provides game stability when overclocked.


----------



## Pepillo

GAN77 said:


> Good result! Do you know your fluid consumption?


EK-CryoFuel Clear


----------



## Falkentyne

Alelau18 said:


> You ask and you shall receive
> 
> 
> Using the 1000W BIOS on a not shunted 2x8pin 3090 (ASUS TUF) no other changes than the power limit unless stated otherwise and just using GPU-Z readings and Port Royal:
> 
> 50% PL: ~165W 8pin#1, ~115W 8pin#2, ~47W PCIe
> 75% PL: ~250W 8pin#1, ~165W 8pin#2, ~67W PCIe
> 100% PL: ~255W 8pin#1, ~165W 8pin#2, ~70W PCIe (lots of oscillation, VRel limited)
> 100% PL: ~275W 8pin#1, ~180W 8pin#2, ~70W PCIe (maxed voltage slider, PerfCap: VRel VOp)


So basically you're not at the max draw limit of the vbios yet, right?
So that means it's still possible to draw more.

What happens if you run Timespy Extreme (be careful, please) or Superposition 4k custom (extreme shaders)? 
Be careful with these two tests because these will draw far north of 600W if allowed to!


----------



## des2k...

motivman said:


> The 1000W BIOS is perfectly fine on a non-shunt-modded 2x8-pin card. It uses about 270W max on pin #1, around 195W max on pin #2, and 80W max on PCIe, so the max power limit is around 545-550W. Still not enough IMHO, but perfectly safe. On a shunt-modded card, I saw it peak around 450W on pin #1, so definitely a NO-NO, but all power limits were defeated.


Set mine to 850W, 2x8-pin, and set 2130 @ 1030mV; Time Spy Extreme triggers the power cap. Max pull was just like you said: I got 80W PCIe, 280W, and 200W, which is 560W, under the ~600W limit (850W with the slider at -30%).

I guess that's the max? Still a nice boost from 390W.


----------



## pat182

Well, I'm getting bottlenecked hard on my air Strix with the 520W BIOS. On water I'd easily break 15k, and probably 15.3k on the 1000W BIOS.

I won't try the 1000W BIOS on an air-cooled card lol, but maybe in the future I'll strap on the EVGA hybrid kit if I'm willing to improve, and I'm gonna wait for a cold day (-20C) to put the PC outside and sub-zero it hahaha...


----------



## xrb936

Guys, I have a dumb question: why can my Strix [email protected] do 14900 in PR while my [email protected] only does 14600? The memory clock is the same.


----------



## sultanofswing

What are the average clocks between the two?


----------



## pat182

xrb936 said:


> Guys, I have a dumb question, why my Strix [email protected] can do 14900 in PR but my [email protected] only do 14600? The memory clock is same.


Lost the silicon lottery; the Strix is probably more stable.


----------



## Falkentyne

des2k... said:


> set mine to 850w, 2x8pin and set 2130 1030mv, timespy extreme triggers the power cap, max pull was just like you said, I got 80pcie, 280w, 200w which is 560w, under 850 - 30% 600w limit.
> 
> I guess that's the max ? Still a nice boost from 390w.


You would have to set 1000W (full TDP) and then run Timespy Extreme to see the max since it probably won't be 1:1 scaling due to SRC balancing.
I've already seen that TSE can pull way over 600W and actually ends up throttling probably due to some vram current limit. Port Royal caps out around 550W or so.


----------



## truehighroller1

des2k... said:


> set mine to 850w, 2x8pin and set 2130 1030mv, timespy extreme triggers the power cap, max pull was just like you said, I got 80pcie, 280w, 200w which is 560w, under 850 - 30% 600w limit.
> 
> I guess that's the max ? Still a nice boost from 390w.



I bought the EKWB ASUS 3090; can I flash a different BIOS to this card safely? I have a Corsair AX1600i PSU, so the rails won't have an issue.


----------



## Falkentyne

xrb936 said:


> Guys, I have a dumb question, why my Strix [email protected] can do 14900 in PR but my [email protected] only do 14600? The memory clock is same.


You need to run _Superposition_ either 1080p extreme or 4k, before running Port Royal. This can increase your score by as much as 300 points. If you get a "low" PR score, running it again and again right after will do absolutely nothing. You need to run a completely different program (No, Time Spy does NOT work).

This helps reduce that "300 point" spread that you get in PR. It's not the video card, it may have something to do with what portion of VRAM is accessed.


----------



## ViRuS2k

Someone got a link to this 1000W 3090 BIOS?
My air-cooled MSI Trio X needs a workout.


----------



## WMDVenum

xrb936 said:


> Guys, I have a dumb question, why my Strix [email protected] can do 14900 in PR but my [email protected] only do 14600? The memory clock is same.


Is it actually maintaining that clock the entire run?


----------



## truehighroller1

ViRuS2k said:


> someone got a link to this 1000w 3090 bios
> my aircooled msi trio x needs a work out


Even though no one has responded to me for four or five pages, I will gladly help you.

EVGA RTX 3090 VBIOS


----------



## scaramonga

ViRuS2k said:


> someone got a link to this 1000w 3090 bios
> my aircooled msi trio x needs a work out





http://overclockingpin.com/3998/NDA_%20xoc_tools_3998.zip


----------



## domenic

*Check out this thread on their site - big screw-up with the 3090s. The capacitors are too tall to fit properly. They are recommending you either drill out the plexi or send it back.*

See attached pictures from one person that just implemented the "fix". My block will be here tomorrow and I will be doing the same. They are saying if you attempt the fix yourself and screw up, your warranty is still good.





lokran88 said:


> Just unpacked it. Still got some work but will assemble it tonite.
> Feels and looks pretty good to me.
> View attachment 2470747
> View attachment 2470748


----------



## Alelau18

Falkentyne said:


> So basically you're not at the max draw limit of the vbios yet, right?
> So that means it's still possible to draw more.
> 
> What happens if you run Timespy Extreme (be careful, please) or Superposition 4k custom (extreme shaders)?
> Be careful with these two tests because these will draw far north of 600W if allowed to!


I've run two Time Spy Extreme tests (I don't own Superposition so I can't edit the test) and it hovers around 550W total power consumption (8pin#1 ~275W, 8pin#2 ~195W, PCIe ~75W), with VRel, VOp as performance cap reasons. With the voltage slider moved all the way to 100 and PL at 100%, the card just would not draw more than that.


----------



## des2k...

truehighroller1 said:


> I bought the ekwb asus 3090, can I flash a different BIOS to this card safely? I have a corsair AX1600i PSU so the rails won't have an issue.


Yeah, might as well just flash the 1000W vbios; on 2x8-pin it's limited to ~560W tops, so it's very safe on a waterblock.


----------



## Falkentyne

Alelau18 said:


> I've ran 2 TimeSpy Extreme tests (I don't own Superposition so I can't edit the test) and it hovers around 550W total power consumption (8pin#1 ~275W, 8pin#2~195W PCIe ~75W), with VRel, VOp as performance cap reasons, with Voltage slider moved all the way to 100 and PL at 100% the card would just not draw more than that


Nice, so it's some sort of VRAM power limit that was limiting TSE on the other vbioses.
Superposition 4k Custom Extreme shaders puts that all to shame however. That test is just nuts.


----------



## des2k...

Alelau18 said:


> I've ran 2 TimeSpy Extreme tests (I don't own Superposition so I can't edit the test) and it hovers around 550W total power consumption (8pin#1 ~275W, 8pin#2~195W PCIe ~75W), with VRel, VOp as performance cap reasons, with Voltage slider moved all the way to 100 and PL at 100% the card would just not draw more than that


Why are you triggering that "VRel, VOp"? What freq/voltage are you at?


----------



## ViRuS2k

Alelau18 said:


> I've ran 2 TimeSpy Extreme tests (I don't own Superposition so I can't edit the test) and it hovers around 550W total power consumption (8pin#1 ~275W, 8pin#2~195W PCIe ~75W), with VRel, VOp as performance cap reasons, with Voltage slider moved all the way to 100 and PL at 100% the card would just not draw more than that


VRel and VOp? I thought with this 1000W BIOS there would be no issues, with the performance caps all removed and the limits maxed out.
Won't your clocks drop bins etc. with VRel and VOp reasons, or temps etc.?
Wish there was a BIOS that would just stick to the clocks no matter the temps; if they're under 80C I see no issue with this :/


----------



## HyperMatrix

xrb936 said:


> Guys, I have a dumb question, why my Strix [email protected] can do 14900 in PR but my [email protected] only do 14600? The memory clock is same.


I call it a desync. Happens with KPE far more than any other card I've tested. If you set the clock into unstable territory, it can still report the higher clocks, while internally it's actually running a lower clock. You can confirm this by the lower power usage as well. At one point my friend had 2220MHz running at 1.1V with 410W power usage. Obviously that wasn't happening. You have to be a bit more methodical with KPE overclocking. I know it's tempting to start your overclocking from the top and go lower as you crash. But with the KingPin you have to start low and go up 15MHz at a time and make a chart of all your scores at each setting you used.
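HyperMatrix's bottom-up method can be sketched as a simple sweep. This is only an illustration of the workflow, not a real automation: `fake_scores` is a hypothetical stand-in for launching Port Royal at a given core offset and reading the score back.

```python
# Sketch of the bottom-up KPE overclocking method described above:
# start at a low core offset, step up +15 MHz at a time, and chart the
# score at every step. A score that stops scaling (or drops) while the
# reported clock keeps rising is the "desync" symptom.
def sweep(run_benchmark, start_mhz=0, stop_mhz=180, step_mhz=15):
    """Return {core_offset: score} for each tested offset."""
    chart = {}
    for offset in range(start_mhz, stop_mhz + 1, step_mhz):
        chart[offset] = run_benchmark(offset)
    return chart

def best_real_offset(chart):
    """Pick the offset with the highest actual score, not the highest clock."""
    return max(chart, key=chart.get)

# Toy stand-in: scores rise until +120 MHz, then the card silently
# runs a lower internal clock and the score falls despite the higher offset.
fake_scores = lambda off: 14000 + off * 5 if off <= 120 else 14600 - (off - 120) * 10

chart = sweep(fake_scores)
print(best_real_offset(chart))   # 120
```

The point of the chart is exactly what the post says: trust the score, not the reported clock (or the MHz slider).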


----------



## geriatricpollywog

Falkentyne said:


> You need to run _Superposition_ either 1080p extreme or 4k, before running Port Royal. This can increase your score by as much as 300 points. If you get a "low" PR score, running it again and again right after will do absolutely nothing. You need to run a completely different program (No, Time Spy does NOT work).
> 
> This helps reduce that "300 point" spread that you get in PR. It's not the video card, it may have something to do with what portion of VRAM is accessed.


I didn't see any improvement when running PR after Superposition 4K Optimized.


----------



## des2k...

Falkentyne said:


> You would have to set 1000W (full TDP) and then run Timespy Extreme to see the max since it probably won't be 1:1 scaling due to SRC balancing.
> I've already seen that TSE can pull way over 600W and actually ends up throttling probably due to some vram current limit. Port Royal caps out around 550W or so.


Ran fine at 1000W, no more downclocks or flags. 2130 @ 1031mV, 560W in Time Spy Extreme; got to 52C on the waterblock. It was 55C, due to the up/down clocking, when using the slider below 1000W.

Mem +452, peak 115W, just like the stock Zotac vbios.


----------



## truehighroller1

This was my result from flashing the 1000W BIOS on my ASUS EKWB 3090.









Result: www.3dmark.com


----------



## Edgenier

Alelau18 said:


> I've ran 2 TimeSpy Extreme tests (I don't own Superposition so I can't edit the test) and it hovers around 550W total power consumption (8pin#1 ~275W, 8pin#2~195W PCIe ~75W), with VRel, VOp as performance cap reasons, with Voltage slider moved all the way to 100 and PL at 100% the card would just not draw more than that


What ports do you lose with the 1000W bios on a TUF?


----------



## Falkentyne

0451 said:


> I didn't see any improvement when running PR after Superposition 4K Optimized.


Oh I get a 200-300 point improvement (after a reboot).


----------



## Falkentyne

Alelau18 said:


> I've ran 2 TimeSpy Extreme tests (I don't own Superposition so I can't edit the test) and it hovers around 550W total power consumption (8pin#1 ~275W, 8pin#2~195W PCIe ~75W), with VRel, VOp as performance cap reasons, with Voltage slider moved all the way to 100 and PL at 100% the card would just not draw more than that





des2k... said:


> Why are you triggering that "VRel, VOp" , what freq / voltage are you at ?





ViRuS2k said:


> Vrel and Vop i thought with this 1000w bios there would be no issues with performance caps being all removed and limits being maxed out
> wont your clocks drop bins ect with vrel and vop for reasons or temps ect....
> wish there was a bios that would just stick to the clocks no matter the temps if there under 80c i see no issue with this :/


VRel and VOp have nothing to do with power limits, guys.
VRel=Boost clocks are being limited by voltage reliability.
VOp=Maximum Operating Voltage has been reached.

If you reach VRel and VOp, the only way to make the board draw more power is to increase the clocks 
And if you can't increase the clocks without crashing, you need to increase the voltages (Loadlines, MSVDD, NVVDD, etc).
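For reference, the perfcap flags can be read programmatically too. GPU-Z's VRel/VOp labels come from NVAPI and have no direct NVML equivalent, but NVML's documented throttle-reason bitmask covers the power and thermal caps being discussed here; a minimal decoder using the bit values from `nvml.h`:

```python
# Decode NVML's clocks-throttle-reasons bitmask (bit values from nvml.h).
# Note: GPU-Z's VRel/VOp perfcap reasons are NVAPI-specific; NVML only
# exposes this coarser documented set (power cap, thermal slowdown, etc).
THROTTLE_REASONS = {
    0x01: "GpuIdle",
    0x02: "ApplicationsClocksSetting",
    0x04: "SwPowerCap",          # the classic "power limit" flag
    0x08: "HwSlowdown",
    0x10: "SyncBoost",
    0x20: "SwThermalSlowdown",
    0x40: "HwThermalSlowdown",
    0x80: "HwPowerBrakeSlowdown",
}

def decode(mask):
    """Return the list of active throttle reasons for a bitmask."""
    return [name for bit, name in sorted(THROTTLE_REASONS.items()) if mask & bit]

# e.g. a card hitting both the software power cap and thermal slowdown:
print(decode(0x04 | 0x20))   # ['SwPowerCap', 'SwThermalSlowdown']
```

On a live card the mask would come from `nvmlDeviceGetCurrentClocksThrottleReasons` (pynvml), but the decoding is the same.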


----------



## bmgjet

des2k... said:


> Why are you triggering that "VRel, VOp" , what freq / voltage are you at ?


That means its reached the end of the VFR table.


----------



## HyperMatrix

Falkentyne said:


> VRel and VOp have nothing to do with power limits, guys.
> VRel=Boost clocks are being limited by voltage reliability.
> VOp=Maximum Operating Voltage has been reached.
> 
> If you reach VRel and VOp, the only way to make the board draw more power is to increase the clocks
> And if you can't increase the clocks without crashing, you need to increase the voltages (Loadlines, MSVDD, NVVDD, etc).



Just as a side note, you can also trigger VRel and VOp if you’re using too much voltage for your clocks. It’s not always a shortage. But can be an excess.

bmgjet's answer above seems the most appropriate. My Pascal Titan X is stable at 2100MHz with +50mV on the slider but crashes at +100mV. Same temps.


----------



## ViRuS2k

Well, I just ordered that bad boy. Now my problem is: please, for the love of god, let my original MSI Gaming Trio backplate work with this GPU waterblock, otherwise I have no idea how I'm going to keep the back mem chips cool.

*N-MS3080TRIO-X*
*Bykski GPU Water Block For MSI RTX3080/3090 Gaming X TRIO, 12V/5V RGB MB SYNC N-MS3080TRIO-X*


----------



## truehighroller1

truehighroller1 said:


> This was my result from flashing the 1000 watt bios on my asus ekwb 3090.
>
> Result - www.3dmark.com


Pushed it further while feeling everything to make sure nothing's melting etc. and so far so good, pushing it to the limit.









I scored 14 871 in Port Royal
Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Pushed it further............










I scored 14 983 in Port Royal
Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## markuaw1

truehighroller1 said:


> Pushed it further while feeling everything to make sure nothing's melting etc. and so far so good, pushing it to the limit.
>
> I scored 14 871 in Port Royal
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
>
> Pushed it further............
>
> I scored 14 983 in Port Royal
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


3dmark.com/pr/672176


----------



## truehighroller1

markuaw1 said:


> 3dmark.com/pr/672176
> View attachment 2470834


Your link is broken, but I think this is the furthest I can push my 2-pin GPU on this old 4-radiator, dual-pump X299 system...











I scored 14 997 in Port Royal
Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Great score btw.

On a side note, it's 71F in my office. I could always open the windows in here and close the door and get a better score, I'm sure, but I want it colder outside first.


----------



## WillP

Hi folks, first time posting here. I wanted to thank the community for this goldmine of information. I've recently got myself one of the KFA2 3090s. I've been enjoying a bit of light overclocking over the past 8 years or so with various hardware, but the information here has taken my interest to a new level. I'd never BIOS-flashed before, and have now done so with both the 390W and now the 1000W BIOS, thanks to the people here posting their results. I'm having to take it easy and keep a close eye on temps as my waterblock hasn't arrived yet, but I've seen some uplifts in gaming performance (4K CP2077) running at 72% power limit, which settles at 80 Celsius max.
Massive shout out to you all.


----------



## geriatricpollywog

I had some time on my hands...

Tool - YouTube


----------



## truehighroller1

WillP said:


> Hi folks, first time posting here. I wanted to thank the community for this goldmine of information. I've recently got myself one of the KFA2 3090s and I've been enjoying a bit of light overclocking over the past 8 years or so with various hardware, but the information here has taken my interest to a new level. I'd never bios flashed before, and have done now with both the 390W and now 1000W bios thanks to the posts here from people posting about their results. Having to take it easy and keep a close eye on temps as my waterblock hasn't yet arrived, but I've seen some uplifts in gaming performance (4k CP2077) running at 72% power limit which is settling at 80 Celsius max.
> Massive shout out to you all.



That's what it's all about, man: helping each other out in any way possible, and sometimes throwing the dice and saying heck with it, like I just did, to help the community and pique new interest in the overclocking game.

My last score was my highest possible with the temps in here right now.


----------



## WillP

truehighroller1 said:


> That's what it's all about man, helping each other out in anyway possible and sometimes throwing the dice and saying heck with it like I just did to help the community and peaking new interest in the overclocking game.
> 
> My last score was my highest possible with the temps in here right now.


Fantastic effort!
I'm trying to avoid the temptation to benchmark for now as I know that power slider will start moving up...


----------



## markuaw1

truehighroller1 said:


> Your link is broke but, I think this is the furthest I can push my 2 pin gpu on this old 4 radiator dual pump x299 system....
>
> I scored 14 997 in Port Royal
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
>
> Great score btw.
>
> On a side note it's 71F in my office I could always open the windows in here and close the door and get a better score I'm sure but, I want it colder outside first.


That's a good score, man. I didn't notice that the link was messed up. This is the best score I've gotten so far: https://www.3dmark.com/pr/672176


----------



## bmgjet

Lol PR HOF is just filled with new submissions since the XOC 1KW EVGA bios came out.


----------



## Biscottoman

Guys, what is, in your opinion, the best waterblock for a 3090 Strix? From a performance point of view, of course.


----------



## truehighroller1

markuaw1 said:


> That's a good score man didn't notice that the link was messed up this is the best score I've gotten so far https://www.3dmark.com/pr/672176
> View attachment 2470836



Pretty damn good score man.

Me and you side by side, link for reference.









Result: www.3dmark.com





This is my old dusty dirty system lol. I am close to cleaning her out, I know I know.


----------



## des2k...

Falkentyne said:


> Nice, so it's some sort of VRAM power limit that was limiting TSE on the other vbioses.
> Superposition 4k Custom Extreme shaders puts that all to shame however. That test is just nuts.


If you crank up the freq/voltage it still keeps drawing more; the wattage keeps increasing on the 2x8-pins. 586W in TSE at 2145 @ 1050mV avg in the high-load scene.


----------



## des2k...

Since I have to redo the loop, move my backup pump, and replace that old tube, do you guys think there's a problem with running the mobo + CPU from my 550W Seasonic Gold? Asking because the PCIe slot power would come from the Seasonic unit; would that cause issues / risk of something screwing up with the power (2x PCIe cables from the EVGA 850) for the RTX 3090?


----------



## des2k...

des2k... said:


> Since I I have to redo the loop, move my backup pump and replace that old tube, do you guy think there's a problem with running the mobo + cpu from my 550 seasonic gold? Asking because the PCIE will come from the seasonic unit, would that cause issues / risk of something screwing up with the power (2 pcie from EVGA 850 )for the RTX 3090 ?
> View attachment 2470843


Would moving just the CPU and SATA power to the 550W unit be safer, keeping the mobo and the 2x8-pins on the EVGA unit so the RTX 3090 stays on the same power supply?


----------



## Nico67

des2k... said:


> well I know what I'll use this winter to warm up the room, time spy extreme, just [email protected], thing is continually pushing 550w+ 52c on the EK block


I get similar results; 2x8-pin at 65% didn't seem much different to the 390W BIOS. Slowly stepping through locked voltages to find the highest clock, and then adjusting to avoid power limiting, I got [email protected] at 75% with very minor PL, but upping to 80% would crash. [email protected] with no mem OC doesn't power limit at 80%, but gradually limits a bit as you add mem.










I would hazard a guess it's around 550W, although I don't think the 8-pins (287W) and some of the other readings are correct, as the cables don't even get warm and there obviously is no 8pin#3. I think the readings are based on what the KP would pull, due to different phases and relationships to other readings. At any rate you would need a huge amount of power to be unlimited at 1.100v and 2200+MHz.


----------



## markuaw1

Nico67 said:


> I get similar results, 2 x 8pin at 65% didn't seem much different to 390w bios. slowly stepping thru locked voltages to find the highest clock and then adjust to no power limiting I got [email protected] 75% with very minor PL but upping to 80% would crash. [email protected] no mem doesn't power limit at 80% but gradually limits a bit as you add mem.
> 
> View attachment 2470846
> 
> 
> I would hazard a guess its around 550w, although I don't think the 8pins (287w) and some of the other readings are correct as the cables don't even get warm and there obviously is no 8pin#3. I think the readings are based on what the KP would pull due to different phases and relationships to other readings. At any rate you would need a huge amount of power to be unlimited at 1.100v and 2200+mhz.


----------



## mirkendargen

Zurv said:


> HGIG needs to be support by the platform (in our case windows) and the game.
> HGIG is sending meta data to the TV and doing the computer/gaming is doing the tone mapping. Right now using HGIG setting on a TV and using a PC is the same as tone mapping being off.


There's no metadata sent frame by frame. The TV must be able to send its capabilities to the device, the device must be able to receive those capabilities from the TV, and the device should include calibration to make use of them. The idea is that the device then handles all the tone mapping so the TV doesn't have to, so it is "tone mapping off" in all situations. It isn't like Dolby Vision, where there's frame-by-frame metadata as part of the signal to the TV.

So yeah, Windows can't currently get the metadata from the TV... but that doesn't matter when you can set the peak brightness in a game (you are the metadata); the end result is the same (a signal being sent to the TV that doesn't include brightness outside its capabilities). If you set your max brightness to, say, 800 nits in a game (approximately the peak brightness of a CX) and display it on an LG OLED with dynamic tone mapping set to "off" instead of "HGIG", the game will send peak brightness to the TV at 800 nits, and the TV will then do its static tone mapping assuming a 4000-nit peak and display the game's peak brightness at something like ~200 nits. If you have dynamic tone mapping set to HGIG, the TV will do no tone mapping, and that peak brightness will actually be 800 nits.
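The difference can be sketched numerically. The roll-off curve below is a toy Reinhard-style knee, purely illustrative, not LG's actual tone-mapping algorithm; the point is only that static mapping against an assumed 4000-nit master dims an 800-nit highlight, while HGIG passes it through.

```python
# Toy illustration of the HGIG point above. With HGIG, the TV does no
# tone mapping: the game already clamped its output to the panel peak,
# so the signal passes through. With static tone mapping, the TV assumes
# some mastering peak (e.g. 4000 nits) and compresses everything, which
# dims an 800-nit highlight well below 800. The curve here is a made-up
# Reinhard-style roll-off, not what any real TV implements.
def hgig(signal_nits, panel_peak_nits):
    """HGIG mode: passthrough up to panel peak, hard clip above."""
    return min(signal_nits, panel_peak_nits)

def static_tone_map(signal_nits, panel_peak_nits, assumed_mastering_nits):
    """Toy static curve: compress the assumed mastering range to the panel."""
    x = signal_nits / assumed_mastering_nits
    return panel_peak_nits * (x / (x + 0.2))   # simple roll-off knee

peak = 800   # roughly a CX-class OLED highlight peak
print(hgig(800, peak))                          # full 800-nit highlight
print(round(static_tone_map(800, peak, 4000)))  # heavily dimmed highlight
```

Same 800-nit signal, very different on-screen brightness, which is exactly the behavior described above.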


----------



## Mrip541

My 3090 gives me flickering textures. Full screen, 1x1 inch boxes, objects like a backpack. By spamming screenshots I've verified that the textures go completely black for a fraction of a second. I've tried different drivers. I've tried different cables. It reminds me of issues I had with system memory a long time back, but all memory tests come back normal. This is absolutely killing me for such an expensive purchase.


----------



## dante`afk

Aristotle said:


> Just did 2 hours of cyberpunk 2077 on the 1000W bios. 100% powerlimit 165 core / 1000 mem. Max temp was 80C. Max power draw was around 380W according to gpuz. So I guess around 530W when you add pin 3. I don't think you can damage your card on the 1000W bios as long as you don't increase the voltage or more than 1 fan dies.


please keep running that with your air cooler and let us know how your card is doing within 2 weeks.




truehighroller1 said:


> Pushed it further while feeling everything to make sure nothing's melting etc. and so far so good, pushing it to the limit.
>
> I scored 14 871 in Port Royal
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
>
> Pushed it further............
>
> I scored 14 983 in Port Royal
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Insane wattage pull; sadly your GPU chip is garbage.


----------



## Falkentyne

Nico67 said:


> I get similar results, 2 x 8pin at 65% didn't seem much different to 390w bios. slowly stepping thru locked voltages to find the highest clock and then adjust to no power limiting I got [email protected] 75% with very minor PL but upping to 80% would crash. [email protected] no mem doesn't power limit at 80% but gradually limits a bit as you add mem.
> 
> View attachment 2470846
> 
> 
> I would hazard a guess its around 550w, although I don't think the 8pins (287w) and some of the other readings are correct as the cables don't even get warm and there obviously is no 8pin#3. I think the readings are based on what the KP would pull due to different phases and relationships to other readings. At any rate you would need a huge amount of power to be unlimited at 1.100v and 2200+mhz.


You're probably right. I think I'm running slightly lower clocks than you, and judging from the "GT2" double throttle bar that most people seem to have (even shunt modded 2080 Ti's have that), that looks like a good estimate.


I just got this at +150 / +500, with shunt modded 3090 FE that can go up to around 600W or so from the shunts.
1.61x GPU Power multiplier in hwinfo only for that value (since I can't be exactly sure what the shunts are skewing, but I assume the SRC Power Plane, MVDDC, and PCIE Slot are all between 1.34x and 1.60x). Don't ask me what's up with MVDDC. I do know that MVDDC seems to have some continuity relationship with the PCIE slot AND with SRC.
SRC power limit is 175W, that seems to be what the 8 pins are limited to, even though the Vbios dump says it's 162W 8 pins, 175W SRC, at 114%











This is with +180 / +600.
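The ~1.61x correction factor used above is consistent with simple parallel-shunt math. A sketch, assuming a 5 mOhm stock sense shunt (an assumption; actual values vary by board) with an 8 mOhm resistor stacked on top, as discussed in this thread:

```python
# Why a shunt-modded card under-reports power. Stacking a resistor on a
# sense shunt puts the two in parallel; the controller still assumes the
# stock resistance, so it under-reads current by R_stock / R_parallel.
# The 5 mOhm stock value is an assumption for illustration.
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def report_multiplier(r_stock, r_stacked):
    """Factor to multiply reported power by to estimate real draw."""
    return r_stock / parallel(r_stock, r_stacked)

m = report_multiplier(5.0, 8.0)   # 5 mOhm stock, 8 mOhm stacked
print(round(m, 3))                # 1.625 -> close to the 1.61x above

# So a "reported" 350W on a modded rail would really be roughly:
print(round(350 * m))
```

This is also why Falkentyne suggests different stacked values per rail (8 mOhm vs 10 mOhm): the smaller the stacked resistor, the bigger the under-read, so mismatched stacks can be used to rebalance rails.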


----------



## defiledge

dante`afk said:


> insane wattage pull, sadly your gpu chip is garbage


What makes a chip good or garbage?


----------



## ViRuS2k

This 1000W BIOS is awesome.
No throttling, or very little due to heat, on my 3090 MSI Trio X.
975mV consistently @ 1950MHz with no dropping of bins, max temp 73C with max board power in Gears 5 of 512W,
and memory at +254 in MSI Afterburner (10005MHz).

At idle the memory is pulling 105W; that is the part I'm a little worried about, but I have run memory at 3D clocks for years in the past without issue, so I don't see this being an issue.
I think I am also going to run this BIOS as a daily driver. Once my block comes I will be going to water, but for now I don't see any issues with it.

Best part about this BIOS is I get no GPU-Z PerfCap flags and it stays at idle all the time. Also the power limit is set at default, which is 100% on the slider.


----------



## bmgjet

defiledge said:


> What makes a chip good or garbage?


Usually where it comes from on the silicon wafer.
The dies in the center are usually the best and lowest leakage, so they will do higher clocks for less power.
But as an end user it's just luck what you get.
For general users low leakage is best. But when going sub-zero, a high-leakage chip can be a bit of an advantage, since it takes more power and colder temps before it crashes.

------
If people are worried about clocks staying high on the XOC 1kW EVGA BIOS, make two profiles in Afterburner:
one with the minimum clocks you can get as your idle profile, then switch between them based on load.


----------



## cheddle

Falkentyne said:


> The only person who complained about power balancing was using it on a shunt modded 3090.
> I haven't seen anyone mention bad power balancing when using it on a 2 pin NON-SHUNT MODDED 3090 (yet).
> 
> I need people with 2 pin stock 3090's to post their 8 pin power draws at 50-60% power limit on this vbios.


My reference-board 2x8-pin (Galax SG) has terrible balance regardless of BIOS. Before and after my (failed) shunt mod I have a pretty serious delta between the two 8-pins (no change before/after shunts).
This only gets much worse on the 1000W BIOS.

Example:
75% TDP slider: 8-pin #1: 257W. 8-pin #2: 173W.

I pushed TDP as far as 90% and saw 300W on 8-pin #1 (estimated 570W actual draw).

I am on air with a deshroud and Noctua NF-A9's; I saw as high as 78C by the end of a PR run, and with temps that high I began to hit the power cap at the end of the run.

I monitored cable/connector temps with a laser thermometer; 50C was the highest I measured.


----------



## cheddle

Anyone with a 2x8-pin 1000w bios card have a current clamp handy to confirm the reporting is correct?


----------



## Falkentyne

cheddle said:


> My reference board 2x8-pin (Galax SG) has terrible balance regardless of BIOS. Before and after my (failed) shunt mod I have a pretty serious delta between the two 8-pins (no change before/after shunts)
> This only gets much worse on the 1000w bios.
> 
> example:
> 75% tdp slider. 8-pin #1: 257w. 8-pin #2: 173w.
> 
> I pushed TDP as far as 90% and saw 300w on 8-pin #1 (estimated 570w actual draw)
> 
> I am on air with a deshroud and noctua nf-a9’s, saw as high as 78c by the end of a PR run, with temps that high I began to hit pwr cap at the end of the run.
> 
> I monitored cable/connector temps with a laser thermometer; 50C was the highest I measured.


Did you scrape off the conformal coating layer from the edges of your original shunts with a flat blade, making them shiny silver, first?
Are the edges of your original shunts depressed lower than the black middle housing?

If you have a balance issue like that, that can usually be fixed by throwing an 8 mOhm shunt on 8 pin #1 and 10 mOhm shunt on 8 pin #2. OR if using MG 842AR paint only (after scraping the shunts like I mentioned), by applying a -thick- layer of paint on 8 pin #1 and a thin layer of paint on 8 pin #2 shunts.

For shunts with 'depressed' edges--I believe those all have that conformal coating on them. Also, getting contact with the depressed edges causes issues even for people who are soldering shunts and not using MG 842AR paint at all. Ask @dante`afk about that 
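To put rough numbers on the stacking idea, here is a minimal sketch of the arithmetic. Assumptions (mine, not stated in the thread): the stock 8-pin shunts are 5 mOhm, and the added shunt sits in parallel on top; the controller keeps assuming the stock value when converting the sense voltage to current, which is why reported power drops.

```python
# Sketch of shunt-stacking arithmetic. ASSUMPTIONS: stock 8-pin
# shunts are 5 mOhm, and the added shunt sits in parallel on top;
# the controller still assumes the stock value when converting the
# sense voltage to current, so reported power is scaled down.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two shunts stacked in parallel."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def reported_fraction(stacked_mohm: float, stock_mohm: float = 5.0) -> float:
    """Fraction of actual power the card reports after stacking."""
    return parallel(stock_mohm, stacked_mohm) / stock_mohm

# The 8 mOhm vs 10 mOhm suggestion gives the two 8-pins slightly
# different reported shares:
for added in (8.0, 10.0):
    print(f"{added:.0f} mOhm stacked -> card sees "
          f"{reported_fraction(added):.0%} of actual draw")
```

With those values the card under-reports one pin slightly more than the other, which is the lever being pulled to rebalance what the limiter sees.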



cheddle said:


> Anyone with a 2x8-pin 1000w bios card have a current clamp handy to confirm the reporting is correct?


What's a current clamp?


----------



## bmgjet

Falkentyne said:


> What's a current clamp?


Inductive DC amp meter.


----------



## Falkentyne

bmgjet said:


> Inductive DC amp meter.


Can you guys link me to one? I don't know much about these engineering tools.


----------



## bmgjet

Falkentyne said:


> Can you guys link me to one ? I don't know much about these engineering tools.


You sort of get what you pay for with them; you could get a really nice one for $500 or a piece of junk that's unusable for $20.

But the main thing to look at is making sure it does DC, since a lot only do AC.
Also check the minimum amount of current they can read. A lot will advertise massive numbers like 600A but they won't read below 60A.
Then it's just basic math:
A x V = Watts.

And lastly, the clamp size is something to think about. A big chunky clamp will have issues getting between the 12V and GND wires
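The A x V = Watts step in a few lines, as a sketch (the 12.0 V figure is nominal; measure your actual rail for accuracy):

```python
# Turning DC clamp readings into watts. Clamp ONE 12V wire at a
# time (a chunky clamp won't fit between the 12V and GND bundles),
# then sum the wires belonging to one connector.

RAIL_V = 12.0  # nominal; measure the real rail voltage if you can

def connector_watts(amps_per_wire):
    """Sum of clamp readings (A) from each 12V wire of one 8-pin."""
    return sum(amps_per_wire) * RAIL_V

# Example: three 12V wires of one 8-pin reading ~7 A each
print(connector_watts([7.1, 6.8, 7.3]))  # ~254 W
```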


----------



## markuaw1

bmgjet said:


> You sort of get what you pay for with them; you could get a really nice one for $500 or a piece of junk that's unusable for $20.
> 
> But the main thing to look at is making sure it does DC, since a lot only do AC.
> Also check the minimum amount of current they can read. A lot will advertise massive numbers like 600A but they won't read below 60A.
> Then it's just basic math:
> A x V = Watts.
> 
> And lastly, the clamp size is something to think about. A big chunky clamp will have issues getting between the 12V and GND wires


I just ended up running my power supply through a meter to see how many watts total it was putting out, to give me an idea what my 3090 was drawing with that 1000W bios


----------



## xrb936

pat182 said:


> Lost the silicon lottery, strix probly more stable





WMDVenum said:


> Is it actually maintaining that clock the entire run?





HyperMatrix said:


> I call it a desync. Happens with KPE far more than any other card I've tested. If you set the clock into unstable territory, it can still report the higher clocks, while internally it's actually running a lower clock. You can confirm this by the lower power usage as well. At one point my friend had 2220MHz running at 1.1V with 410W power usage. Obviously that wasn't happening. You have to be a bit more methodical with KPE overclocking. I know it's tempting to start your overclocking from the top and go lower as you crash. But with the KingPin you have to start low and go up 15MHz at a time and make a chart of all your scores at each setting you used.


The average clock on the Strix is 2085, and on the KPE it's 2145. I am not sure what happened; it looks so weird.


----------



## xrb936

Falkentyne said:


> You need to run _Superposition_ either 1080p extreme or 4k, before running Port Royal. This can increase your score by as much as 300 points. If you get a "low" PR score, running it again and again right after will do absolutely nothing. You need to run a completely different program (No, Time Spy does NOT work).
> 
> This helps reduce that "300 point" spread that you get in PR. It's not the video card, it may have something to do with what portion of VRAM is accessed.


Wow. Any reason why it happens like that?


----------



## Falkentyne

xrb936 said:


> Wow. Any reason why it happens like that?


I don't know. Could be a Windows issue. But what's strange is that you keep the same score if you run the bench over and over, but not if you run the bench, run Superposition if you got a low score, then run the bench again. (This is assuming a fresh reboot of Windows first.)


----------



## xrb936

Falkentyne said:


> I don't know. Could be a windows issue. But what's strange is you keep the same score if you run the bench over and over, but not if you run the bench, run superposition if you got a low score, then run the bench again. (This is assuming a fresh reboot of windows first).


That’s strange. Let me try.


----------



## HyperMatrix

xrb936 said:


> The average clock on Strix is 2085, and on KPE is 2145. I am not sure what happened, it looks so weird.


I know. As I mentioned, it's why I call it a desync. The clock speed being reported by the KPE is actually false when it desyncs. Internally, it starts running a much lower clock. But it won't tell you. But if you're monitoring power usage, you'll notice that the power being used isn't in line with the clock speed/voltage that's being reported. That's why I gave you those steps to follow.


----------



## xrb936

HyperMatrix said:


> I know. As I mentioned, it's why I call it a desync. The clock speed being reported by the KPE is actually false when it desyncs. Internally, it starts running a much lower clock. But it won't tell you. But if you're monitoring power usage, you'll notice that the power being used isn't in line with the clock speed/voltage that's being reported. That's why I gave you those steps to follow.


I see. Which steps were you talking about? They can fix that KPE issue?


----------



## HyperMatrix

xrb936 said:


> I see. Which steps were you talking about? They can fix that KPE issue?


The problem only occurs when you exceed the capabilities of the chip by more than just a little bit. If it exceeds it by a bit, it'll crash. But if you go up too much, it can cause the desync. Just start with 0 memory overclock and 0 on the voltage slider. Max out fans and power limit, test a 15MHz offset, and run Port Royal. Record your results. Keep going until you crash or see a lower bench score. That way you'll know your actual crash point without going over the desync point.

p.s. I'm waiting for someone with a better technical understanding of what's happening to pop up and explain in detail why this is happening. I'm only reporting based off of hands-on experience with 2 cards and hours and hours of benching.
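That routine can be sketched as a simple loop plus a power sanity check. The 15% power-shortfall threshold below is an arbitrary illustration (my number, not HyperMatrix's), and the expected-watts figure is whatever you'd predict from the reported clock/voltage:

```python
# Bottom-up KPE testing sketch: step the core offset +15 MHz at a
# time, record each Port Royal score, and sanity-check reported
# power against what the reported clock/voltage should be pulling.

def test_plan(start_offset=0, step=15, count=8):
    """Core-offset schedule, lowest first."""
    return [start_offset + step * i for i in range(count)]

def desynced(reported_w, expected_w, tolerance=0.15):
    """Flag a run whose power is far below what the clocks imply."""
    return reported_w < expected_w * (1 - tolerance)

print(test_plan())         # offsets to bench, in order
print(desynced(410, 550))  # True: 410 W at settings implying ~550 W
```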


----------



## Gebeleisis

Pepillo said:


> Today I received my Bykski block from China, some photos:
> 
> 
> Happy with the result, my maximum temperature has dropped from 70º to less than 50º with silent profile on the fans and 30 Mhz more with the same settings playing half an hour to Valhalla 4k Ultra:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm with the 500w EVGA bios on my Trio; I have to try the 520w and even the 1000w bios now that I'm on water. Anyway, it's a PC to play with, not to bench.


Nice build!
I'm waiting for my bykski waterblock.
What size are the thermal pads?
Do they come pre-applied? I'd like to get some good ones when the block arrives. Thank you!


----------



## Pepillo

Gebeleisis said:


> Nice build!
> I'm waiting for my bykski waterblock.
> What size are the thermal pads ?
> Do they come preapplied ? I would like to get some good ones when the block arrives. Thank you!


Thank you. There are five identical thermal pads that you have to cut and place where you think best, since the instructions are useless. In one of the photos I posted you can see how I placed them, and the result looks good. In this other photo you can see them before placing them: 










As for thickness, I don't know how to measure it accurately and I'm afraid of giving you the wrong measurement.


----------



## Gebeleisis

Thanks! 
I will order some more pads in different thicknesses then, just to be on the safe side


----------



## Gebeleisis

Alelau18 said:


> You ask and you shall receive
> 
> 
> Using the 1000W BIOS on a not shunted 2x8pin 3090 (ASUS TUF) no other changes than the power limit unless stated otherwise and just using GPU-Z readings and Port Royal:
> 
> 50% PL ~165W 8pin#1 -115W 8pin#2 ~47W PCIe
> 75% PL ~250W 8pin#1 -165W 8pin#2 ~67W PCIe
> 100% PL ~255W 8pin#1 -165W 8pin#2 ~70W PCIe (lots of oscillation, VRel limited)
> 100% PL ~275W 8pin#1 -180W 8pin#2 ~70W PCIe (maxed voltage slider, PerfCap: VRel VOp)


Using the 1000W BIOS on a non-shunted 2x8-pin Palit GamingPro, no other changes than the power limit unless stated otherwise, and just using GPU-Z readings:
Stock(ish) - KFA 390W bios: pin 1 / pin 2 / PCIe / total
111% PL: 168W + 155W + ~70W PCIe ~ 393W

1000W bios: pin 1 / pin 2 / PCIe / total
50% PL: 165W + 120W + 50W ~ 335W
75% PL: 250W + 180W + 68W ~ 498W
100% PL: 260W + 180W + 80W ~ 520W
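As a sanity check, the per-rail GPU-Z readings should sum to roughly the reported board power at each power-limit step; trivially:

```python
# Per-rail GPU-Z readings should add up to the board total.
def board_total(pin1_w, pin2_w, pcie_slot_w):
    return pin1_w + pin2_w + pcie_slot_w

# The 50% PL and 100% PL readings from the post above:
print(board_total(165, 120, 50))  # 335
print(board_total(260, 180, 80))  # 520
```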


----------



## defiledge

So I have a chance to buy a 3090 Suprim. I have a shunted TUF atm. Do you guys think these "premium" models will clock higher?


----------



## Keninishna

Gebeleisis said:


> Thanks!
> I will order some more different thickness pads then, just to be on the safe side


Bykski uses some non-standard pads, 1.3mm I think. You can order more from them or use some good compressible 1.5mm pads.


----------



## Gebeleisis

Keninishna said:


> Bykski uses some non standard pads at 1.3mm I think. You can order more from them or use some good compressible 1.5mm pads.


thank you !


----------



## Pepillo

I'm not going to break any records, but the Kingpin 520W bios and the Bykski block have given me the satisfaction of surpassing 15,000 points in Port Royal:









I scored 15 020 in Port Royal
Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32450 MB, 64-bit Windows 10
www.3dmark.com


----------



## defiledge

bmgjet said:


> Usually it comes down to where on the silicon wafer the chip came from.
> The ones in the center are usually the best and lowest leakage, so they will do higher clocks for less power.
> But as an end user it's just luck what you get.
> For general users low leakage is best. But when going sub-zero, a high-leakage chip can be a bit of an advantage since it takes more power and colder temps before it crashes.
> 
> ------
> If people are worried about clocks staying high on the XOC 1KW EVGA bios, make 2 profiles in Afterburner.
> One with the minimum clocks you can get as your idle profile, then switch between them based on load.


I mean in terms of clocks and stuff for a 3090


----------



## Keninishna

How are you all getting such good temps on water? I have to be doing something wrong because I’m getting up into the 50s at around 500 watts.


----------



## bmgjet

Keninishna said:


> How are you all getting such good temps on water? I have to be doing something wrong because I’m getting up into the 50s at around 500 watts.


What's your room temp and coolant temp?
A 15-20C delta is sort of what to expect with 500W.
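That delta corresponds to an effective coolant-to-die thermal resistance of roughly 0.03-0.04 C/W. A sketch assuming a simple linear steady-state model (an idealization, not a figure from this thread):

```python
# Rough steady-state estimate behind "15-20C delta at 500W":
# die temp = coolant temp + power * R, with R the effective
# block+TIM thermal resistance in C/W.

def die_temp(coolant_c, power_w, r_c_per_w):
    return coolant_c + power_w * r_c_per_w

# 31C coolant at 500W with a 0.03-0.04 C/W block:
for r in (0.03, 0.04):
    print(f"R={r} C/W -> ~{die_temp(31, 500, r):.0f}C die")
```

That puts the die in the high 40s to low 50s, so 50C at 500W may simply be normal rather than a mounting problem.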


----------



## Spiriva

This is what my PNY looks like with the 390W bios, after a few hours of Cyberpunk










(hwinfo can show the 8pins in OSD)


----------



## Keninishna

bmgjet said:


> Whats your room temp and coolant temp.
> 15-20C delta is sort of what to expect with 500W


Room temp is like 24C; coolant gets up to 31C. Last few pages it looks like everyone is low 40s in temp. I guess I can't complain too much; 50C isn't a bad temp all things considered.

I guess I can try different thermal pads on the VRMs. Anyone that has a Bykski block, what size pads do you use on your VRMs? The only other thing I can think of right now is that I might need better flow rate. I have one Freezemod pump going through three 360 rads, plus the GPU and CPU blocks.

36C is good for 390 watts too; I get around 41C at that power.


----------



## xrb936

defiledge said:


> So I have a chance to buy a 3090 suprim. I have a shunted tuf atm. Do you guys think these "premium" models will clock higher?


Considering how the KPE performs this year, no.


----------



## ViRuS2k

That's what I wanted to know also:
what are really good compressible thermal pads for a Bykski GPU block on an MSI 3090 Trio X Gaming GPU? They must be premium pads though, as you can also get cheap pads and premium pads with high W/mK.
Also, is it recommended to use a liquid metal thermal pad on the GPU die, or not?


----------



## dspboys

ViRuS2k said:


> Well i just ordered that bad boy  now my problem is please for the love of god let my original msi gaming trio back plate work with this gpu waterblock otherwise i have no idea how im going to keep the back mem chips cool.
> 
> *N-MS3080TRIO-X*
> *Bykski GPU Water Block For MSI RTX3080/3090 Gaming X TRIO, 12V/5V RGB MB SYNC N-MS3080TRIO-X*


Yes, you can reuse the original backplate. I have the same block on my 3090 Gaming X Trio. You can even use the same screws.


----------



## ViRuS2k

dspboys said:


> Yes, you can reuse the original backplate. I have the same block on my 3090 Gaming X Trio. You can even use the same screws.


Thank you 
What about the thermal pads that come with it? Do you think buying 1.5mm Grizzly pads would be better? And is there anything I can do to gain better cooling, like thermal pad placement in extra places that are not listed, hotspots etc.? I don't want to get hotspots haha 

Though being this late I don't think I will get my GPU until mid-January. I also ordered that Bykski temp and water temp OLED screen attachment as I think it looks pretty cool haha


----------



## ViRuS2k

BTW, has anyone noticed that this 1000W bios is very, very aggressive with the fan profiles,
and they're running at weird speeds hahaha  
Just on desktop my card is at 35C and the fans are reporting 51% on one and 61% on the other. I have tried MSI Afterburner to reduce the fan speeds but I'm having no luck getting them reduced, or even turning them off lol


----------



## defiledge

Spiriva said:


> This is what my PNY looks like with the 390W bios, after a few hours of Cyberpunk
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (hwinfo can show the 8pins in OSD)


4800 rpm? How do you deal with your pump sounding like a jet engine?


----------



## ViRuS2k

OK, managed to get a nice 2D profile with MSI Afterburner for desktop use and one for gaming...
This 2D profile is very low on the voltages and shouldn't be a problem with temps breaking out


----------



## sweepersc

ViRuS2k said:


> Thank you
> what about the thermal pads ? that come with it, you think buying 1.5mm grizzly pads would be better ? and is there anything i can do to gain better cooling like thermal pad placment ? extra places that are not listed with hotspots ect i dont want to get hotspots haha
> 
> though being this late i dont think i will get my gpu until mid january i also ordered that bykski temp and water temp oled screen attachment  as i think it looks pretty cool haha


Dspboys here, I thought I had already retired that username. The pads provided are thin, I think 0.05mm. Definitely worth it if you have extra 1mm and 1.5mm pads, as the 0.05mm pads aren't thick enough to touch the block. As for the pad placement, just take a picture of the original pad placement and you'll be fine. Mine was missing a pad on the 5 rightmost capacitors at the bottom.

I wanted to try the 1kW vbios but I need better pads, as I only have 1mm and 0.05mm at the moment.


----------



## ViRuS2k

sweepersc said:


> Dspboys here, I thought I already retired that username. The pads provided are thin, I think it's 0.05mm. Definitely worth it if you have extra 1 and 1.5mm as the 0.05mm pads arent't thick enough to touch the block. As for the pad placement, just take a picture of the original pad placement and you'll be fine. Mine was missing a pad on the 5 rightmost capacitors at the bottom.
> 
> I wanted to try the 1kw vbios but I need better pads as I only had 1mm and 0.05mm at the moment.


Yeah, thanks for that. Yes, I will buy those 1.5mm pads then. Also, is there any type of screwdriver that I need to remove the original cooler? Cheers.


----------



## sweepersc

ViRuS2k said:


> Yeah thanks for that yes i will buy those 1.5mm pads then also is there any type off screw driver that i need to remove the original cooler ??? cheers.


A standard Phillips screwdriver should work. Just be careful because the screws are easy to strip.


----------



## des2k...

ViRuS2k said:


> OK managed to get a nice 2d proile with msi afterburner for desktop use and one for gaming...
> this 2d profile is very low on the voltages and shouldnt be a problem with temps breaking out


that's not an idle profile, it's the safety clocks, performance will be crap
mem needs to run full speed


----------



## Edgenier

Can anyone tell me if the 1000W bios disables any ports on a TUF, and also what would happen if I put it on a shunted TUF with 4 mOhm on just the 2x8-pins?


----------



## des2k...

Edgenier said:


> Can anyone tell me if the 1000W bios disables any ports on a TUF, and also what would happen if i put it on a shunted TUF with 4mohm on just the 2 x 8-pins ?


EVGA's port order from the screw bracket is DP, DP, DP, HDMI.
It pulls 580 watts in Time Spy on the 2x8-pins at 1050mV; games with FPS caps stay below 580W.

At 100% power slider:
PCIe 80 watts, pin 1 280 watts, pin 2 210 watts


----------



## Edgenier

des2k... said:


> evga uses from screw bracket dp,dp,dp,hdmi
> so it pulls in timespy 580watts on 2x8pins for 1050mv, games with fps caps it's bellow 580w
> 
> at 100% power slider
> pcie 80watts pin1 280watts pin2 210wtts


With PCIe still capped at 80 watts, is this any different from my shunt situation (pin 1 and pin 2 but no PCIe)?


----------



## des2k...

Edgenier said:


> with pcie still capped at 80watts, is this any diffetent than my shunt situation (pin1 and pin2 but no pcie)?


PCIe is not capped; if you use the VF curve at 2145+ or 1050mV+, PCIe gets to 90W+.
It's supposed to be unlocked and goes to 100W max: 100 + 300 + 300 for 2x8-pin.
That's on the Zotac Trinity.

You need some serious cooling; with 3 rads it's 56C+ at 580W+ under benches


----------



## Spiriva

defiledge said:


> 4800 rpm? How do you deal with your pump sounding like a jet engine?


It's a D5 pump, I can't hear it at all


----------



## ViRuS2k

des2k... said:


> that's not an idle profile, it's the safety clocks, performance will be crap
> mem needs to run full speed


It's not meant to be used as a performance 3D gaming profile; it's a desktop/idle, internet/movie-watching, basically anything-but-gaming profile 
The performance is perfect for those needs and keeps the voltages super low across the board


----------



## Carillo

For those of you who are wondering if your 3090 will automatically pull 1000 watts with the Kingpin bios: I have now tested for a couple of days in games. My 3090 Strix is shunted, and has a theoretical maximum power of 960 watts with the stock bios. In 4K Ultra the card draws a maximum of 700 watts in games and 800 watts in synthetic benchmarks. I have now used the Kingpin 1000 watt bios, which in combination with shunts can in theory provide 2000 watts... Guess how much power the card pulls? 700 watts with PL set to 100%. There is no difference; the card never draws more power than it needs, unless for some reason you have a short circuit.


----------



## markuaw1

Carillo said:


> For those of you who are wondering if your 3090 will automatically pull 1000 watts with the Kingpin bios, I have now tested a couple of days in games. My 3090 strix is shunted, and has a theoretical maximum power of 960 watts with stock bios. In 4k ultra the card draws a maximum of 700 watts in games and 800 watt in syntetic benchmarks ... I have now used the Kingpin 1000 watt bios which in combination with shunts in theory can provide 2000 watts ... Guess how much power the card pulls? 700 watts with PL set to 100% ... There is no difference, the card never draws more power than it needs without you for some reason having a short circuit.


Strix EK block + XOC 1000W bios, no shunt mod: https://www.3dmark.com/pr/672176


----------



## xrb936

Guys, where can I find the 1000w BIOS? I cannot find it in TechPowerUp collection.


----------



## xrb936

markuaw1 said:


> strix EK block + xoc 1000W bios no shunt mod https://www.3dmark.com/pr/672176
> View attachment 2470900
> View attachment 2470901


Wow. You did have a good chip on your Strix.


----------



## des2k...

markuaw1 said:


> strix EK block + xoc 1000W bios no shunt mod https://www.3dmark.com/pr/672176
> View attachment 2470900
> View attachment 2470901


How's the core temp on the EK block? Is the backplate hot?


----------



## Zurv

No, that isn't what is going on. The point of HGIG is that metadata is being sent from the device to the screen.
Right now, tone mapping off and HGIG are the same when playing games from the PC. Zero difference.
The TV can't see what settings you set on the PC and will still assume it is getting 4000 nits. 
There is some use for it on a PS5 or an Xbox Series console even if a game itself doesn't support it: at the console level it can at least tell the TV what the max nits are. Clearly one still wants the game to support it too, so one gets real metadata. Windows 10 has zero support for HGIG right now.


----------



## Edgenier

For daily use (on air) I think I've settled on 1965MHz @ 0.90V, which gets me 14005 in PR and no higher than 71C. Any higher voltage and PCIe will throttle me a bit. Anybody getting 2000MHz+ stable on air (zero throttling) for daily use?


----------



## GQNerd

Last night was a perfectly chilly night to do some extreme overclocking. My Strix was already a beast, having achieved 15,454 in PR... but the KP is hitting 15.5-15.6k easily, and last night I hit *15,870!!*
https://www.3dmark.com/pr/683457

+245 core
+1400 or 1500 mem (I forget, it was past 1am lol)

Still have more to squeeze out of it, but will save that for a few weeks.. for now going to focus on highest stable clocks/temps for daily/gaming.

The KPE is definitely the way to go for tinkerers, lol.. also, someone mentioned the VRM was rated for 70A, but the manual says 60A.. just fyi


----------



## markuaw1

xrb936 said:


> Guys, where can I find the 1000w BIOS? I cannot find it in TechPowerUp collection.











EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## Sheyster

cheddle said:


> This only gets much worse on the 1000w bios.
> 
> example:
> 75% tdp slider. 8-pin #1: 257w. 8-pin #2: 173w.
> 
> I pushed TDP as far as 90% and saw 300w on 8-pin #1 (estimated 570w actual draw)
> 
> I am on air with a deshroud and noctua nf-a9’s, saw as high as 78c by the end of a PR run, with temps that high I began to hit pwr cap at the end of the run.
> 
> I monitored cable/connector temps with a laster thermometer, 50c was the highest I measured.


Just want to confirm that your card was NOT shunted when you ran these tests? The more info we have for 2 x 8-pin owners the better, as this issue is potentially a card killer.


----------



## markuaw1

des2k... said:


> How's the core temp on the EK block ? Backplate is hot ?


Average about 36C


----------



## markuaw1

Miguelios said:


> Last night was a perfectly chilly night to do some extreme over clocking. My Strix was already a beast having achieved 15,454 in PR... but the KP is hitting 15.5-6 easily, and last night I hit *15,870!! *
> https://www.3dmark.com/pr/683457
> 
> +245 core
> +1400 or 1500 mem (I forget, it was past 1am lol)
> 
> Still have more to squeeze out of it, but will save that for a few weeks.. for now going to focus on highest stable clocks/temps for daily/gaming.
> 
> The KPE is definitely the way to go for tinkerers, lol.. also someone mentioned the vrm was rated for 70a, but the manual says 60a.. just fyi


29C? Yeah, I would call that chilly; seems like ice bucket temperatures. Great score 👌


----------



## Sheyster

Edgenier said:


> For daily use (on air) I think i’ve settled on 1965mhz @ .90V which gets me 14005 PR and no higher than 71C . Any higher voltage and pcie will throttle me a bit. Anybody getting 2000mhz+ stable on air (zero throttling) for daily use?


Yes, doing that no problem here. ASUS Strix 3090 on air, 2055 @ 950mv for gaming.


----------



## GQNerd

markuaw1 said:


> 29c yeah I would call that chilly seems like ice bucket temperatures great score 👌


Ty! Had my PC on the patio, it was 6C outside lol


----------



## geriatricpollywog

Miguelios said:


> Last night was a perfectly chilly night to do some extreme over clocking. My Strix was already a beast having achieved 15,454 in PR... but the KP is hitting 15.5-6 easily, and last night I hit *15,870!! *
> https://www.3dmark.com/pr/683457
> 
> +245 core
> +1400 or 1500 mem (I forget, it was past 1am lol)
> 
> Still have more to squeeze out of it, but will save that for a few weeks.. for now going to focus on highest stable clocks/temps for daily/gaming.
> 
> The KPE is definitely the way to go for tinkerers, lol.. also someone mentioned the vrm was rated for 70a, but the manual says 60a.. just fyi


You have a very good core. My best on the KPE is 15632 with Classified tweaks. I'm in Los Angeles so it's a bit warmer. Just now pulled a 15307 using just the Precision X1 sliders. What Classified settings are you using?

I found that flipping both LLC dip switches adds another 15MHz of core stability.


----------



## Sheyster

Carillo said:


> For those of you who are wondering if your 3090 will automatically pull 1000 watts with the Kingpin bios, I have now tested a couple of days in games. My 3090 strix is shunted, and has a theoretical maximum power of 960 watts with stock bios. In 4k ultra the card draws a maximum of 700 watts in games and 800 watt in syntetic benchmarks ... I have now used the Kingpin 1000 watt bios which in combination with shunts in theory can provide 2000 watts ... Guess how much power the card pulls? 700 watts with PL set to 100% ... There is no difference, the card never draws more power than it needs without you for some reason having a short circuit.


I appreciate you doing this, but everyone here should realize that without more voltage (vcore) added, there simply won't be more watts with any XOC BIOS, even with the PL completely removed. It's basic electronics: W = V x A.
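To put numbers on that, the usual first-order CMOS approximation is P ~ f x V^2: at a fixed V/F point the draw is fixed no matter where the cap sits. A sketch (the example clocks and voltages are hypothetical, not measurements from this thread):

```python
# Why a bigger power limit alone doesn't add watts: to first order,
# dynamic power scales with frequency times voltage squared, so
# raising the cap without raising V/F changes nothing.

def scaled_power(p0_w, f0_mhz, v0, f1_mhz, v1):
    """First-order estimate of power after a clock/voltage change."""
    return p0_w * (f1_mhz / f0_mhz) * (v1 / v0) ** 2

# Hypothetical: a card pulling 700 W at 1965 MHz / 0.90 V, pushed
# to 2100 MHz / 1.05 V:
print(round(scaled_power(700, 1965, 0.90, 2100, 1.05)), "W")
```

Same settings, same watts; more volts and clocks, substantially more watts.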


----------



## Falkentyne

Keep in mind that unless you can monitor what the 8-pins are pulling directly from the cable or PSU with external equipment, you won't know for sure whether that's accurate or not.
If it says you're pulling 350W through 8-pin #1 and the connector isn't even getting warm, it may not really be pulling 350W at all. You would have to have a way to monitor on the PSU end what the cable is actually pulling. And I don't know who here has equipment expensive enough for that. 

Can the AX1600i even monitor its own PCIe draw on the PSU end?


----------



## markuaw1

Falkentyne said:


> Keep in mind that unless you can monitor what the 8 pins are pulling directly from the cable or PSU somehow with external equipment, you won't know for sure if that's accurate or not
> If it says you're pulling 350W through 8 pin#1 and the connector isn't even getting warm, it may not really be pulling 350W at all. You would have to have a way to monitor on the PSU end what the cable is actually pulling. And I don't know who here has equipment expensive enough for that.
> 
> Can the AX 1600i even monitor its own PCIE draw on the PSU end?


Picked this up to monitor the power draw straight from the PSU; seems to do the job


----------



## Falkentyne

markuaw1 said:


> picked this up to monitor the power draw straight from the PSU seems to do the job
> View attachment 2470934


But you can only see the combined power here.
You can't see if it's 380W + 380W or 600W + 160W (regardless of what is reported in GPU-Z)


----------



## Carillo

When it comes to using an XOC bios or hardware mods in general, I'd say you have already gone far past the line called safe, and you have accepted the possibility that your card may die at any time. At least I have. No one here knows the long-term effects of using an XOC bios or shunts. Two of my 3090s died without warning. Both were shunted... If you are AFRAID that your card will die, run the standard bios without shunts  Merry Christmas


----------



## Spiriva

Falkentyne said:


> Keep in mind that unless you can monitor what the 8 pins are pulling directly from the cable or PSU somehow with external equipment, you won't know for sure if that's accurate or not
> If it says you're pulling 350W through 8 pin#1 and the connector isn't even getting warm, it may not really be pulling 350W at all. You would have to have a way to monitor on the PSU end what the cable is actually pulling. And I don't know who here has equipment expensive enough for that.
> 
> Can the AX 1600i even monitor its own PCIE draw on the PSU end?


In corsairs app the 1600i looks like this:


----------



## Falkentyne

Yeah that's part of the problem.

For example, with shunt mods, various people have had problems with the 8-pins reporting wildly different values: say 175W (throttle point, signaling a TDP throttle flag) + 118W for the two 8-pins + 55W slot, which is 348W total reported to GPU-Z, when before the shunt mod it would be 175W + 170W + 69W (410W, so the 8-pin throttle point matches the TDP throttle point of 400-410W). But shunt mods change the power that is *reported* to the video card, not the ratio of what the power actually _IS_. So you see the problem already.

I explained in an earlier post that there's some weird stuff going on with SRC power and its effect on power balancing (this is, after all, called "Input Power Plane Source Power"). But all we have to go on is what the video card is _reporting_ to Windows, not what the cables are actually pulling! That's what I've been trying to get at.

To summarize, I had a case where after reworking a shunt mod, I had both 8-pins within 5W of each other, but GPU Chip Power was reaching 290W, causing a throttling flag before the TDP limit was reached. So I touched that up and brought it down to 270W, then reworked the PCIe slot. Then suddenly the PCIe slot was reaching 79.9W and throttling. So I touched up the PCIe slot big time, reducing it to 74W, which slowly went down as the paint cured, to 70W, 65W and 60W. And then I noticed that as chip power and PCIe power both dropped massively (chip power was now BELOW 200W), the two 8-pins were growing apart, with 8-pin #1 showing much higher than 8-pin #2...

Yeah...see?


----------



## GQNerd

0451 said:


> You have a very good core. My best on the KPE is 15632 with classified tweaks. I’m in Los Angeles so it’s a bit warmer. Just now pulled a 15307 just using the precision X1 sliders. What classified settings are you using?
> 
> I found that flipping both LLC dip switches adds another 15mhz core stability.


thanks, yea I’m in NorCal so it gets pretty cold overnight.
I do have the same dip switch settings, and using classy tool. 

*My settings:*
NVVDD 1.1625v
FBVDD 1.38125v
MSVDD 1.18125v
Load line 1 for both
Protection Off


----------



## Evabdis

Hello!

I have a question: I own a 3090 Strix, it's not shunted. So when I flash the 1000W bios, will it draw 580W max?


----------



## markuaw1

Falkentyne said:


> But you can only see the combined power here.
> You can't see if its 380W + 380W or 600W + 160W (regardless of what is reported in gpu-z)


Actually, the PCI slot power and 8-pins #1 and #2 are reporting correctly; the third 8-pin is not. But you can do the math yourself: add the PCIE slot power to the first two 8-pins, look at the power output of your PSU, and it's pretty easy to figure out how many watts are being pulled from the third 8-pin. Just saying.
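The subtraction described above can be sketched in a couple of lines. All figures below are made-up examples; on a real system you would also have to subtract CPU, fan, and drive draw from the PSU number, and account for PSU conversion efficiency.

```python
# Back-of-the-envelope estimate of the misreporting third 8-pin: whatever
# the PSU delivers to the GPU that isn't on the slot or the first two
# 8-pins must be coming through the third connector.

def estimate_third_8pin(psu_gpu_watts, slot_watts, pin1_watts, pin2_watts):
    """Subtract the correctly-reporting rails from the PSU-side GPU draw."""
    return psu_gpu_watts - (slot_watts + pin1_watts + pin2_watts)

# e.g. 500W to the GPU, 60W slot, 150W each on the first two 8-pins
print(estimate_third_8pin(500.0, 60.0, 150.0, 150.0))  # -> 140.0
```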


----------



## Falkentyne

markuaw1 said:


> Actually, the PCI slot power and 8-pins #1 and #2 are reporting correctly; the third 8-pin is not. But you can do the math yourself: add the PCIE slot power to the first two 8-pins, look at the power output of your PSU, and it's pretty easy to figure out how many watts are being pulled from the third 8-pin. Just saying.


There is a flaw in your thinking.
I'm referring to shunt mods here. I don't have a flashed vbios.
And I'm showing how imbalanced reporting after a shunt mod can get much worse after flashing the 1000W Vbios on a shunt modded card.
You are also again ignoring proper hardware measurement of the 8-pins; you need special equipment for that.

You also completely ignored my post where I said one guy's reported PCIE pin was showing 350W, and it wasn't even warm to the touch. If someone's PCIE plug were really pulling 350W, I guarantee you would be able to feel the heat on the connector.

AND ALSO--
People on FTW3s flashing the 1000W vbios and setting the power limit to 520W are drawing 88W through the PCIE slot.
That means the power balancing is based on the _BOARD_ PCB design itself, not on the vbios......

So I'll repeat myself.
You need EQUIPMENT to test this. Not rhetoric!


----------



## cletus-cassidy

markuaw1 said:


> Average about 36C


Chilled water of some kind? Mine is upper 40's and even 50 with three 360 rads.


----------



## markuaw1

Falkentyne said:


> There is a flaw in your thinking.
> I'm referring to shunt mods here. I don't have a flashed vbios.
> And I'm showing how imbalanced reporting after a shunt mod can get much worse after flashing the 1000W Vbios on a shunt modded card.
> You also are again ignoring proper hardware reporting of the 8 pin--you need special equipment for this.
> 
> You also completely ignored my post when I said that one guy's PCIE Pin he reported was showing 350W. And it wasn't even warm to the touch. if someone's PCIE plug were pulling 350W, I guarantee you would be able to feel the heat...


Yep, sorry, I missed that part about the shunt mod; you are 100% correct, sir. I was going to go with a shunt mod, but then the 1000W bios came out, so I am good with just the bios.


----------



## markuaw1

cletus-cassidy said:


> Chilled water of some kind? Mine is upper 40's and even 50 with three 360 rads.


No sir, just an open loop with one 360 rad, 60mm thick


----------



## danielstorlos

Here is some more info: Flashed my non-shunted 3090 Palit Gaming OC 2x8pins with the 1000W Bios. ( +100 core + 350 mem 100% powerlimit.).
EK Water.


----------



## Falkentyne

markuaw1 said:


> Yep sorry I missed that part about the shunt mod you are 100% correct sir, I was going to go with a shunt mod but then the 1000W bios came out so I am good with just the bios


It's sort of my fault. I should have clarified everything but I'm writing posts while playing video games, so there's that.

What I mean is, I think the power balancing of the 8 pins and PCIE slot are based on the board design or components themselves.

It's really obvious it's not the vbios. Look in the EVGA FTW3 3090 vbios thread. You see all the complaints about them flashing the 500W vbios and getting limited to 420-440W maximum at the 500W TDP slider percentage, with their PCIE slot reaching 75W and throttling on some boards, and 82W on others?

Then look at the people who flashed the same EVGA XOC FTW3 500W vbios on Asus Strix or MSI boards (both of them 3x8-pin boards, so no bad third 8-pin power reporting here).
What were they getting? The full 500W, and only 55-60W on the PCIE slot!!

That means it's not the vbios at all that's controlling the power balancing. It's the actual chips on the board. It's possible the vbios MAY request a certain ratio--but we don't have the disassembled code, so we don't know. All we can see are the "limits". And the limits don't even seem to be properly recognized.

Look for example.

Nvidia 3090 FE:


Vendor: NVIDIA | Date: 09/01/20 | Version: 94.02.27.00.0A | MD5: 25b8f739d8bdea69bd6e2592371b5fc2

TDP     | Min 100W      | Default 350W   | Max 400W
8-Pin   |               | Default 145.8W | Max 162W
SRC     |               | Default 150W   | Max 175W
Chip    |               | Default 243W   | Max 270W
Slot    |               | Default 75W    | Max 86.25W
VRAM    |               | Default 93.75W | Max 107.8W
Unknown | 3588

Notice the 8-pin limits? 162W max vs 145.8W default (the default should correspond to 100% TDP; 162W is about 111% of it, while the TDP slider itself maxes at 114%), and the PCIE slot limits (86.25W max, 75W default).

Now what do you see here (un-modded 3090 FE at max TDP)?
The 8-pins seemingly cap out at the SOURCE power limit (175W) while also reaching the TDP limit (400W).
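The raw limit fields in that FE row appear to be stored in milliwatts. A small helper to convert them and express each rail's max as a percentage of its default makes the mismatch visible: the rails do not all share one headroom ratio, which fits the argument that the limits aren't being applied uniformly. The dictionary values are taken directly from the row quoted above.

```python
# Convert the vbios power-limit fields (apparently milliwatts) and show
# each rail's max limit as a percentage of its default limit.

FE_LIMITS_MW = {
    "TDP":   (350_000, 400_000),   # (default, max)
    "8-pin": (145_800, 162_000),
    "SRC":   (150_000, 175_000),
    "Chip":  (243_000, 270_000),
    "Slot":  (75_000,  86_250),
    "VRAM":  (93_750,  107_800),
}

for rail, (default_mw, max_mw) in FE_LIMITS_MW.items():
    pct = 100.0 * max_mw / default_mw
    print(f"{rail}: {default_mw/1000:.1f}W default, {max_mw/1000:.2f}W max ({pct:.0f}%)")
```

Run it and the TDP headroom comes out at ~114%, but the 8-pin and chip rails only get ~111% while SRC gets ~117%, so the per-rail ceilings cannot all be hit at the same slider position.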










And about that 86W PCIE slot limit...

What do we have here from this guy who hadn't shunt-modded his PCIE slot yet?









RTX 3090 Founders Edition working shunt mod (www.overclock.net)

Hard throttle at 79.5W...(even though the vbios says 86W)...

Then he shunted the PCIE, and not only did the throttling stop, his _MVDDC_ reading dropped!! Which is proof the PCIE and MVDDC shunts have continuity between them!










This is exactly what I'm talking about.
You mod one shunt properly, it makes another shunt report lower power.
And then my weird 8 pin issues after I stopped GPU Chip Power and PCIE Slot from reaching their throttle points (290W and 79.9W)....

Shenanigans, man. There's some weird stuff going on with the shunts having continuity with each other.
So when I see someone posting that they're pulling 350W from 8-pin #1 and 150W from 8-pin #2 on the Kingpin vbios on a two-8-pin board, I don't believe it's really pulling 350W from that 8-pin.

See? (Otherwise you would have to believe that @anethema was REALLY drawing 250W through MVDDC.)


----------



## jl434

Another Galax 390w bios for 2*8 pins: Galaxy RTX 3090 VBIOS

Higher fan speed than KFA2 390w bios


----------



## Cholerikerklaus

Nizzen said:


> Try higher cpu speed and higher memoryspeed


We did it with a 10900KF at 5.5 all-core and 4700MHz mem.
I don't think we can get more frequency. Any other ideas?


----------



## Falkentyne

danielstorlos said:


> Here is some more info: Flashed my non-shunted 3090 Palit Gaming OC 2x8pins with the 1000W Bios. ( +100 core + 350 mem 100% powerlimit.).
> EK Water.
> 
> View attachment 2470945


Daniel, can you do me a favor and run Port Royal again, but with GPU-Z open? Please take a picture/screenshot of GPU-Z as soon as the PR run ends, with the "throttle bar" visible. (If you're late to screenshot it, just "extend" the GPU-Z window to the right; thanks to someone above who mentioned that protip!)


----------



## Huseyinbaykal

Hey all
I need your help, guys. I have to choose a card ASAP; my computer parts store is asking me to decide which card I want to buy: the Asus Strix OC 3090 or the Kingpin 3090. I had ordered the Asus Strix waterblock from EK to use with the Strix OC, or I could buy the Kingpin and go with the AIO.
Please help me decide which card is more powerful (with bios mods etc.) and which one has fewer problems. I must decide within a day. Thanks


----------



## danielstorlos

Falkentyne said:


> Daniel, can you do me a favor and run port royal again, but have GPU-Z open? Please take a picture /screenshot of GPU-Z as soon as the PR run ends, with the "throttle bar" visible. (if you're late to SS it, just "extend" the GPU-Z window to the right (thanks to someone above who mentioned that protip!).


----------



## Falkentyne

danielstorlos said:


> View attachment 2470955


Replied in PM.
Do you have Superposition (4k --custom --extreme shaders preset) or can you run Timespy Extreme if you don't?


----------



## Falkentyne

Huseyinbaykal said:


> Hey all
> I need your help guys. I have to choose a card asap. My computer parts store asks me to decide which card I want to buy. Asus strix oc 3090 or kingpin 3090. I had ordered asus strix waterblock from ek to use with strix oc or I can buy kingpin and go with aio.
> Please help me which card is more powerful (with bios mods etc..) and which one has less problems. I must decide in a day. Please help me decide. Thanks


Kingpin, obviously. Classified voltage control, more speed, more tweaks, dip switches onboard, an OLED, and full control with the 1000W vbios (just flash that to the #3 bios slot for benching; use the 520W for daily). Yeah. No contest.


----------



## geriatricpollywog

Is anybody else's backplate too hot to touch? Card is not overclocked at the moment.

Edit; Cyberpunk running


----------



## des2k...

I


Falkentyne said:


> Replied in PM.
> Do you have Superposition (4k --custom --extreme shaders preset) or can you run Timespy Extreme if you don't?


Here's mine if you need it, I'm running TE GT2 for few hours now. Also 2x8pin


----------



## danielstorlos

Falkentyne said:


> Replied in PM.
> Do you have Superposition (4k --custom --extreme shaders preset) or can you run Timespy Extreme if you don't?


Timespy Extreme:


----------



## Falkentyne

0451 said:


> Is anybody else's backplate too hot to touch? Card is not overclocked at the moment.
> 
> Edit; Cyberpunk running
> 
> View attachment 2470957
> 
> 
> View attachment 2470956


What's up with the "junior" throttle bar in GPU-Z? (That's what I call it). Are you on the LN2 Vbios or the 1000W one?


----------



## Falkentyne

des2k... said:


> I
> 
> Here's mine if you need it, I'm running TE GT2 for few hours now. Also 2x8pin
> 
> View attachment 2470964





danielstorlos said:


> Timespy Extreme:
> 
> View attachment 2470965


Nice. 571W and 611W power draws from 2x8 pin cards. Proof I was right that you can exceed 600W.

The only question is whether it's really drawing 303W through the pin, or drawing less through #1 and more through #2 but misreporting it...


----------



## des2k...

0451 said:


> Is anybody else's backplate too hot to touch? Card is not overclocked at the moment.
> 
> Edit; Cyberpunk running
> 
> View attachment 2470957
> 
> 
> View attachment 2470956


Yeah, I couldn't touch/hold mine. I put two 70mm fans on it. It used to be 34C with the fans at 390W; now it's 41C with the XOC bios.


----------



## geriatricpollywog

Falkentyne said:


> What's up with the "junior" throttle bar in GPU-Z? (That's what I call it). Are you on the LN2 Vbios or the 1000W one?


LN2 with no overclock and 100% on power slider. I just want to know if my memory temps are normal. Can someone tell me if their memory is getting close to 69C like mine is?


----------



## Nico67

Falkentyne said:


> Yeah that's part of the problem.
> 
> For example, with shunt mods, various people have had problems with the 8-pins reporting wildly different values: say 175W (the throttle point, signaling a TDP throttle flag) + 118W for the second 8-pin + 55W for the slot, which is 348W total reported to GPU-Z, when before the shunt mod it would have been 175W + 170W + 69W (≈414W, so the 8-pin throttle point matched the TDP throttle point of 400W-410W). But shunt mods change the power that is *reported* to the video card, not what the power actually _IS_, or the ratio between the rails. So you see the problem already.
> 
> I explained in an earlier post that there's some weird stuff going on with SRC power and its effect on power balancing (this is, after all, called "Input Power Plane Source Power"). But all we have to go on is what the video card is _reporting_ to Windows, not what the cables are actually pulling! That's what I've been trying to get at.
> 
> To summarize: after reworking a shunt mod, I had a case where both 8-pins were within 5W of each other, but GPU Chip Power was reaching 290W, raising a throttling flag before the TDP limit was reached. So I touched that up and brought it down to 270W, then re-worked the PCIE slot shunt. Suddenly the PCIE slot was reaching 79.9W and throttling. So I touched up the PCIE slot again, reducing it to 74W, which slowly dropped as the paint cured, to 70W, 65W, and 60W. Then I noticed that as chip power and PCIE power both dropped massively (chip power was now BELOW 200W), the two 8-pins were growing apart, with 8-pin #1 showing much higher than 8-pin #2...
> 
> Yeah...see?


Yep, I think it would be very hard to tell what the actual readings are. As you alluded to, the board design says a given shunt should measure "x" mV, so the bios applies a multiplier with a safety factor so it will power-limit at various points to keep the board within safe operating parameters. Each multiplier will be tailored to each board's VRM and 8-pin config. Also, the various points don't seem to be strictly direct measurements; they could be combinations of additive points with different multipliers, which is where it gets extremely messy.

Say chip power was PCIE x1.2 + 8PIN1 x1.5, versus PCIE x0.8 + 8PIN1 x0.75 + 8PIN3 x0.8, and it could have factors of PWR SRC as well? Note: these are just random numbers plucked out of the air to illustrate possible relationships.
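That hypothetical can be written out as a sketch: each reported "power point" would be a weighted sum of shunt readings rather than a direct measurement. The weights and readings below are the same made-up numbers from the post, purely to show how two different weight sets produce two very different "chip power" figures from identical shunts.

```python
# Illustrative only: a reported power point modeled as a weighted sum of
# raw shunt readings, with unknown per-board bios multipliers.

def combined_reading(readings: dict, weights: dict) -> float:
    """Sum each shunt's raw reading (watts) times its hypothetical multiplier."""
    return sum(readings[rail] * w for rail, w in weights.items())

readings = {"PCIE": 60.0, "8PIN1": 150.0, "8PIN3": 140.0}  # watts, made up

chip_guess_a = combined_reading(readings, {"PCIE": 1.2, "8PIN1": 1.5})
chip_guess_b = combined_reading(readings, {"PCIE": 0.8, "8PIN1": 0.75, "8PIN3": 0.8})
print(chip_guess_a, chip_guess_b)  # same shunts, very different "chip power"
```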

The only thing that should be fairly static is PCIE + all 8PINs = TBP, or close to it. However, there are extra or missing 8PIN readings, and we can't even say how accurate they are.

There are a couple of ways to get a reasonable approximation:

1/ Scale the default bios up to your predicted TBP. This will only be good for the inputs, though, as core power will probably go up faster than memory.

Mine should be about

93.1 / 239.2 / 227.3, not 68.8 / 282.5 / 171.2 / 282.5

The only thing that is close is the PCIE-to-8PIN2 relationship, but I have no idea whether a bios can change the physical balancing, or if that is purely down to board design and which rails/VRMs they feed.

I can get a clamp meter when I go back, but that's after the new year. It will be interesting to see if I can use it to prove anything.
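Approximation 1/ above can be sketched as a proportional scaling of the default bios's per-rail budget. As noted, this only approximates the input side, since core power likely scales faster than memory. The stock per-rail split below is illustrative, not taken from any particular bios.

```python
# Scale a stock per-rail power split proportionally to a predicted total
# board power (TBP). Inputs and outputs are in watts.

def scale_rails(default_rails: dict, target_tbp: float) -> dict:
    stock_tbp = sum(default_rails.values())
    factor = target_tbp / stock_tbp
    return {rail: round(w * factor, 1) for rail, w in default_rails.items()}

# e.g. an illustrative 2x8-pin card's 390W stock split, scaled to a 560W target
print(scale_rails({"PCIE": 66.0, "8PIN1": 162.0, "8PIN2": 162.0}, 560.0))
```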


----------



## GQNerd

0451 said:


> LN2 with no overclock and 100% on power slider. I just want to know if my memory temps are normal. Can someone tell me if their memory is getting close to 69C like mine is?


My mem hits 69C, but during extreme benching, not gaming. Will check that later today.

Was already planning on replacing the thermal pads and installing extra fan(s). Wish there was a waterblock available for the KP. Some day :/

*Side note:* Rocking the same PSU; sh!t's a beast.
Also, why not use the 1000W bios but limit the power to 520W and leave overcurrent protection on?


----------



## HyperMatrix

Falkentyne said:


> Can the AX 1600i even monitor its own PCIE draw on the PSU end?


iCue software lets you see current draw from each individual pin. Here's a screenshot:










And the corresponding "GPU/CPU #" from the PSU itself:


----------



## Falkentyne

HyperMatrix said:


> iCue software lets you see current draw from each individual pin. Here's a screenshot:
> 
> View attachment 2470968
> 
> 
> And the corresponding "GPU/CPU #" from the PSU itself:
> 
> View attachment 2470969


Now that's useful. Someone with the 1000W vbios and a 2x8 pin card and this PSU would be able to see exactly what they're pulling.
But I don't think anyone with a 2x8 pin card has this PSU. 

Does only the AX 1600i have this debug port?


----------



## HyperMatrix

0451 said:


> Is anybody else's backplate too hot to touch? Card is not overclocked at the moment.
> 
> Edit; Cyberpunk running
> 
> View attachment 2470957
> 
> 
> View attachment 2470956


I noticed memory temps on KPE getting up to 70C (maybe higher) after I crashed. I don't remember the memory on the FTW3 Hybrid getting this hot. Will have to get a couple fans blowing on it. Really hoping there's a good waterblock solution for this card.


----------



## geriatricpollywog

Miguelios said:


> Mine hits 69c, but during benching not gaming.. will check that later today
> 
> Was already planning on replacing the thermal pads/installing extra fan(s).. Wish there was a waterblock for the KP available. Some day :/
> 
> *Side note:* Rocking the same PSU, sh!ts a beast
> Also, why not use the 1000w bios but limit the power to 520w and leave Overcurrent protection on?


Thanks for confirming!

1) There is nothing to stop the core from going past 120c if a pump fails or a firmware / bios glitch causes the fans to stop.

2) My 15632 score was with the LN2 bios and I could not surpass with the XOC

3) If the card fails the RMA dept will not honor my warranty if the XOC is installed

4) It causes the memory to stay at max speed which makes the card IDLE at 100 watts. This will degrade the memory and hurt performance unless the card is frosted over from LN2. I don’t need my memory burning itself up while I’m watching Downton Abbey on my Kingpin.


----------



## HyperMatrix

Falkentyne said:


> Now that's useful. Someone with the 1000W vbios and a 2x8 pin card and this PSU would be able to see exactly what they're pulling.
> But I don't think anyone with a 2x8 pin card has this PSU.
> 
> Does only the AX 1600i have this debug port?


I have an AX1500i from 3 or 4 years ago, and it works for me.


----------



## Spiriva

Falkentyne said:


> Now that's useful. Someone with the 1000W vbios and a 2x8 pin card and this PSU would be able to see exactly what they're pulling.
> But I don't think anyone with a 2x8 pin card has this PSU.
> 
> Does only the AX 1600i have this debug port?


I've got a PNY 3090 (2x8-pin) and an AX1600i PSU, but I'm away from home over Xmas.


----------



## GAN77

HyperMatrix said:


> iCue software lets you see current draw from each individual pin. Here's a screenshot:


But the readings do not match. Take 13 amperes from the iCue screenshot and multiply by 12 volts: 156 watts. There are no such values in the GPU-Z screenshot.


----------



## HyperMatrix

GAN77 said:


> But the readings do not match. Take 13 amperes from the iCue screenshot and multiply by 12 volts = 156 watts. There are no such values in the screenshot GPU-Z.


I noticed that too. It could be a polling-rate difference between the two programs. There's also a logging option that may do a better job. Aside from that, the power-draw ratio between the rails should still be intact: as you may notice, #1 and #2 show the same draw in GPU-Z with #3 a bit behind, and the amperage in iCue shows the same difference. Another part of it is that iCue uses whole numbers instead of decimals, so 13 amps could really have been 12.5. And you also have to account for PSU power efficiency: the power used doesn't equal the power the card gets.
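GAN77's arithmetic plus the caveats above can be put in a few lines. The 0.97 delivery factor below is an assumption to illustrate losses between the PSU readout and the card's sense points, not a measured figure, and the 13 A whole-number readout could really be anywhere from 12.5 to 13.49 A.

```python
# PSU-side watts from iCue's whole-amp readout, and a rough card-side
# estimate after assumed delivery losses.

def psu_side_watts(amps_reported: float, volts: float = 12.0) -> float:
    """Naive P = I * V using the rounded amp figure iCue displays."""
    return amps_reported * volts

def card_side_estimate(psu_watts: float, delivery_efficiency: float = 0.97) -> float:
    """Assume a few percent is lost between the PSU and the card's sensors."""
    return psu_watts * delivery_efficiency

w = psu_side_watts(13)  # 13 A displayed could really be 12.5-13.49 A
print(w, card_side_estimate(w))
```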


----------



## GQNerd

0451 said:


> Thanks for confirming!
> 1) There is nothing to stop the core from going past 120c if a pump fails or a firmware / bios glitch causes the fans to stop.
> 2) My 15632 score was with the LN2 bios and I could not surpass with the XOC
> 3) If the card fails the RMA dept will not honor my warranty if the XOC is installed
> 4) It causes the memory to stay at max speed which makes the card IDLE at 100 watts. This will degrade the memory and hurt performance unless the card is frosted over from LN2.


Interesting..
1- Could I offload the pump/fan responsibility to the motherboard?
2- The LN2 bios works fine, but I feel like it was interfering with my Classified settings; my highest PR was 15,564.
3- Hope I don't have to RMA, but if I had to, I'd use a programmer to reflash the stock bios.
4- I'm aware; I've been running with the slider at 10% (100W) and then applying a -1000 mem offset. _Not sure if it really helps, but my board power reads 35-40W at idle._

I have my configurations for benching saved; today I will explore my daily-driver settings, including revisiting the LN2 bios.

Will report back!


----------



## Baasha

Man these RoG Strix 3090 OC in SLI sure do get hot:


online image hosting


----------



## Avacado

Baasha said:


> Man these RoG Strix 3090 OC in SLI sure do get hot:
> 
> 
> online image hosting


Hope they burn a hole in your mobo. Has 2... _Rollseyes_


----------



## xrb936

3090 KPE w/ XOC 1000W Bios
Mem+1200, Core+160: https://www.3dmark.com/3dm/55462781

Mem+1200, Core [email protected]: https://www.3dmark.com/3dm/55461987

That’s weird. Higher frequency w/ lower score. I am so frustrated with it now.


----------



## GQNerd

xrb936 said:


> 3090 KPE w/ XOC 1000W Bios
> Mem+1200, Core+160: https://www.3dmark.com/3dm/55462781
> Mem+1200, Core [email protected]: https://www.3dmark.com/3dm/55461987
> That’s weird. Higher frequency w/ lower score. I am so frustrated with it now.


After experimenting last night, you're limited by the *temps...*
Try running it tonight by a window, or lower your thermostat at home, and compare.


----------



## geriatricpollywog

xrb936 said:


> 3090 KPE w/ XOC 1000W Bios
> Mem+1200, Core+160: https://www.3dmark.com/3dm/55462781
> 
> Mem+1200, Core [email protected]: https://www.3dmark.com/3dm/55461987
> 
> That’s weird. Higher frequency w/ lower score. I am so frustrated with it now.


I bet your score would be higher with the LN2 bios.


----------



## sultanofswing

xrb936 said:


> 3090 KPE w/ XOC 1000W Bios
> Mem+1200, Core+160: https://www.3dmark.com/3dm/55462781
> 
> Mem+1200, Core [email protected]: https://www.3dmark.com/3dm/55461987
> 
> That’s weird. Higher frequency w/ lower score. I am so frustrated with it now.


Sounds like chip quality isn't all that great; see if you can get the temps down.
At 2200mhz your score should be over 15k.
Make sure you are not pushing the memory past where it has error corrected.


----------



## Falkentyne

xrb936 said:


> 3090 KPE w/ XOC 1000W Bios
> Mem+1200, Core+160: https://www.3dmark.com/3dm/55462781
> 
> Mem+1200, Core [email protected]: https://www.3dmark.com/3dm/55461987
> 
> That’s weird. Higher frequency w/ lower score. I am so frustrated with it now.


Eh. You've got something weird going on there.
My shunt-modded 3090 FE beat you at +180 core / +650 memory (I think it was 14,735 points), and that's on the stock air cooler.
I think @HyperMatrix was on to something when he said the core was dropping in performance (when usually it's the memory that drops in performance if you clock it too high).

Also, if you get low scores in Port Royal, try running Superposition and THEN run Port Royal.


----------



## sultanofswing

Not sure why my reply got screwed up.
At 2200MHz you should be 15k+.
So make sure you have not pushed the memory too far, and see if you can get the temps lower.


----------



## Nico67

des2k... said:


> I
> 
> Here's mine if you need it, I'm running TE GT2 for few hours now. Also 2x8pin
> 
> View attachment 2470964


Go "MAX" on the tabs, I want to hear it scream


----------



## markuaw1

Miguelios said:


> After experimenting last night, your limited by the temps...
> Try running it tonight by a window, or lower your thermostat at home and compare.


If you can drop that temp by 20C, you'll pick up a lot, I bet; going from 50C down to 30C would be something like 15,100 to 15,500.


----------



## dante`afk

Baasha said:


> Man these RoG Strix 3090 OC in SLI sure do get hot:
> 
> 
> online image hosting


your epeen gets crippled by not even having watercooling


----------



## dr/owned

Pretty much a throwaway post but _finally_ after getting all the thermal pads I needed to assemble this pig:

-The Bykski block uses PH1 screws. I tried a #1 JIS screwdriver, and I think it's PH, not JIS. The standoffs are M4 thread reducing to M1.6? or something.. so they're removable, not press-fit or glued in. The "nut" on the standoffs is 2.0mm thick, which I can't find anywhere commercially; 2.2mm is the lowest profile I could find. (I was considering "upgrading" to M4 screws instead of using the included ones.)

-Setting the oven to 170F and turning it off before inserting the assembled GPU for half an hour helped a LOT to relax the thermal pads and liquid metal, so I could get another 1/8-1/4 turn on the backplate screws. The IR camera says the backplate was around 140F, which is perfectly fine for all the plastics and such.

-The backplate needs 4mm thick thermal pads (absurdly thick) to make contact with the back of the PCB where the VRMs are.

-I put 1.5mm x2 (3mm) pads on the capacitors on the back of the silicon. The TUF uses all-ceramic caps that are 1mm thick, so 3.5mm void - 1mm = 2.5mm; 3mm pads give a nice 0.5mm compression.


----------



## arvinz

Hey guys,

Just installed the Strix 3090. Is there any protective film on the RGB strip area? Those darn films are everywhere, but I can't tell if there's one on the RGB area.


----------



## xrb936

Miguelios said:


> After experimenting last night, your limited by the *temps...*
> Try running it tonight by a window, or lower your thermostat at home and compare.


I will do it in patio later today.


----------



## HyperMatrix

xrb936 said:


> 3090 KPE w/ XOC 1000W Bios
> Mem+1200, Core+160: https://www.3dmark.com/3dm/55462781
> 
> Mem+1200, Core [email protected]: https://www.3dmark.com/3dm/55461987
> 
> That’s weird. Higher frequency w/ lower score. I am so frustrated with it now.



I'm pretty much done repeating the answer to you. I've already said it twice, yet you continue to be surprised and "frustrated" without listening, doing the same thing over and over again.


----------



## cletus-cassidy

markuaw1 said:


> No sir just a open loop with one 360rad 60mm


Impressive. I must have a bad contact cold plate.


----------



## markuaw1

arvinz said:


> Hey guys,
> 
> Just installed the Strix 3090...is there any protective film on the RGB strip area? Those darn things are everywhere..but I can't tell if there's any on the RGB area?


Yes, there is film on the RGB area; there was on my Strix 3090.


----------



## GQNerd

xrb936 said:


> I will do it in patio later today.


Don't knock it till you try it..
If you can keep the card below 45C (preferably 40C), you'll see an improvement. Promise, lol


----------



## arvinz

markuaw1 said:


> Yes there is film on the rgb area there was on my strix 3090


Ok, I couldn't see an edge to grab the film. I only saw film on the metal design on the face of the card where the fans are, and on the backplate. Any suggestions? I don't want to scratch this thing.


----------



## markuaw1

arvinz said:


> Ok I couldn't see the edge to grab the film...I only saw it on the metal design on the face of the card where the fans are and also the backplate....any suggestions? Don't want to scratch this thing.


Just try picking at a corner and see if you can get it started. If there's any film on it, it should come off; mine did.


----------



## ssti

Thanks to 3DMark's score-compare feature, let's meet some matched overclockers.
First episode: TIME SPY EXTREME
1 GPU








2 GPU








3 GPU








4 GPU. Unfortunately, the participant on the left doesn't appear in the TOP 100.








You can compare your results to someone whose scores are close to yours; you may find this feature helpful for your success.
Episode 2 coming soon. Merry Christmas and Happy Holidays!


----------



## arvinz

markuaw1 said:


> Just try and pick at a corner see if you can get it started if it's got any on it it should mine did


I must be losing my mind but I don't see any film in that zone. I just watched a bunch of unboxing videos and neither of them had a film there...


----------



## bmgjet

ssti said:


> ...


What's the point in comparing scores to someone that's been caught cheating, not once but three times now?
He uses programs in the background to remove textures and stuff to boost scores, photoshops images, and makes a new name each time he gets banned.





"One day you will get caught" (community.hwbot.org): the second time this month hwbot caught lousy photoshopping; the member had been reported numerous times for not respecting the new 2020 rules (CPU-Z 1.91 or newer, no clipping, etc.), and his reputation meant many were watching his every move.


----------



## xrb936

0451 said:


> Thanks for confirming!
> 
> 1) There is nothing to stop the core from going past 120c if a pump fails or a firmware / bios glitch causes the fans to stop.
> 
> 2) My 15632 score was with the LN2 bios and I could not surpass with the XOC
> 
> 3) If the card fails the RMA dept will not honor my warranty if the XOC is installed
> 
> 4) It causes the memory to stay at max speed which makes the card IDLE at 100 watts. This will degrade the memory and hurt performance unless the card is frosted over from LN2. I don’t need my memory burning itself up while I’m watching Downton Abbey on my Kingpin.


Hey man, could you share your overclock setting please? Did you just offset clock or curve?


----------



## xrb936

HyperMatrix said:


> Pretty much done repeating the answer to you. Already said it twice. Yet you continue to be surprised and “frustrated” without listening. Keep doing the same thing over and over again.


I know you pointed out the problem, but I don’t know how to fix it. I have already tried to lower the temp.


----------



## pantsoftime

0451 said:


> 4) It causes the memory to stay at max speed which makes the card IDLE at 100 watts. This will degrade the memory and hurt performance unless the card is frosted over from LN2. I don’t need my memory burning itself up while I’m watching Downton Abbey on my Kingpin.


Are you overvolting or something where this would be a concern? It seems strange to be concerned about memory running at its rated speed. Inefficient yes, damaging no.


----------



## Nico67

Just a quick question as I haven't seen it mentioned anywhere.

Did everybody with a Galax / KFA SG 3090 have to modify the white (I assume 6pin fan) header to get the EK reference water block to fit?

I also used some of the 1.5mm thermal pads on the coils, as 1mm wasn't even touching.


----------



## geriatricpollywog

xrb936 said:


> Hey man, could you share your overclock setting please? Did you just offset clock or curve?


For the 15632 I had +1350 mem, and I forgot what the core speed was. I had activated both NVVDD dip switches on the back of the card and did not change anything in the Classified tool.
NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)



pantsoftime said:


> Are you overvolting or something where this would be a concern? It seems strange to be concerned about memory running at its rated speed. Inefficient yes, damaging no.


Keeping the memory hot is bad for long-term performance. I was able to do +1500 on the mem when the 3090 Kingpin was new a week ago; now it won't pass PR at anything higher than +1400. My 2080 Ti Kingpin could also pass PR at +1500 on day 1, then degraded by 150MHz within a few weeks. After that it continued to degrade, but much more slowly; now it's stable at +1250. I imagine the GDDR would degrade MUCH faster if it were always hot.

Keeping the memory hot is also bad for short-term performance. It makes sense to keep the memory loaded for LN2 because the card is completely frosted over. It does not make sense for Port Royal runs using the AIO, since the memory would not cool down between runs; this lowers the max stable MHz during the benching session.


----------



## jura11

Tested the Kingpin XOC 1000W BIOS and I'm pleased with the results, although power draw is high. If you are concerned about power draw or you are scared, please don't flash it, but without question it is one of the best BIOS for 2x 8-pin RTX 3090 cards. Temperatures are of course higher; at 90% power limit the highest temperature I have seen has been 38C, usually I see 32-36C as max in benchmarks or in gaming. Not tried to run that BIOS at 100% yet.

And here are Port Royal results at the same OC (+145MHz on core / 1295MHz on VRAM); the only difference is the Kingpin XOC 1000W BIOS. With the KFA2/Galax 390W BIOS my score was 14279, and with the Kingpin XOC 1000W BIOS the score is 14908. I will probably try to break 15000 later during the Xmas hahaha with 100% power limit.

(3DMark result link: www.3dmark.com)

Merry Christmas guys and gals and please stay safe 

Here are a few screenshots of GPU-Z.
Hope this helps

Thanks,Jura


----------



## dr/owned

jura11 said:


> Tested the Kingpin XOC 1000W BIOS and I'm pleased with the results, although power draw is high. If you are concerned about power draw or you are scared, please don't flash it, but without question it is one of the best BIOS for 2x 8-pin RTX 3090 cards. Temperatures are of course higher; at 90% power limit the highest temperature I have seen has been 38C, usually I see 32-36C as max in benchmarks or in gaming. Not tried to run that BIOS at 100% yet.
> 
> Thanks,Jura


Presumably you lose some display connections though? I think that's what kills it for me cause I need all 4 outputs.


----------



## jura11

dr/owned said:


> Presumably you lose some display connections though? I think that's what kills it for me cause I need all 4 outputs.


No, I didn't lose any of the DP or HDMI ports, all are working. If your GPU is based on the reference PCB, or has a similar or the same I/O layout, then you should be okay there.

Hope this helps you 

Thanks, Jura


----------



## xrb936

0451 said:


> For the 16532 I had +1350 mem and I forgot what the core speed was. I had activated both NVVDD dip switches on the back of the card and did not change anything in the Classified tool.
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)
> 
> 
> Keeping the memory hot is bad for long-term performance. I was able to do +1500 on the mem when the 3090 Kingpin was new 1 week ago. Now it won't pass PR at anything higher than +1400. My 2080 Ti Kingpin could also pass PR at +1500 on day 1, then degraded by 150MHz within a few weeks. After that it continued to degrade, but much more slowly. Now it's stable at +1250. I imagine the GDDR would degrade MUCH faster if it was always hot.
> 
> Keeping the memory hot is also bad for short-term performance. It makes sense to keep the memory loaded for LN2 because the card is completely frosted over. It does not make sense for Port Royal runs using the AIO since the memory would not cool down between runs. This would lower the max stable MHz during the benching session.


Thanks man. What does the NVVDD sips do compared to the classified tool?


----------



## ssti

bmgjet said:


> Whats the point in comparing scores to some one thats been caught cheating. Not only once but 3 times now.....


Definitely not as remainder to update your CPU-Z or user H2o vs. Ln2 @hwbot, the point is to learn build, cooling, overclacking of tomorrow if you like.


----------



## geriatricpollywog

xrb936 said:


> Thanks man. What does the NVVDD sips do compared to the classified tool?


The NVVDD dips add +50mV each to core voltage. I'm not sure how much the FBVDD or MSVDD ones add. The LLC dips allow me another 15MHz of core stability so I leave them on all the time. The dip switches control the same parameters as the sliders in the Classified tool. Vince (Kingpin) said he doesn't use them, but I prefer them over the Classified tool because it's buggy. Sometimes I'll open up the Classified tool and it will set LLC to 0, other times it sets it to 1. It always sets MSVDD to 0.9, which is too low, and I have no idea what the default should be. The dip switches can also be used for changing the parameters during a run. If, for example, the first seconds of PR need higher voltage but this causes heat buildup later in the run, the dip switches can help.

Plus, I just like dip switches. They are a fun way to overclock. As a kid, my first custom build using the original Athlon required a dip switch tool to be attached directly to the CPU to enable overclocking.


----------



## DrunknFoo

Im surprised my score is still in the top 100 PR with a 60c avg, the leaderboard looks so fun to climb now... God damn ek and or watercool 😭


----------



## defiledge

I'm pretty sure @ssti is the same tard on a different account. Both of them are obviously "overclacking" pros. Can we ban this one too. 

Also the clown kept asking people to pm him, what was he sending?


----------



## bmgjet

defiledge said:


> I'm pretty sure ssti is the same tard on a different account. Both of them are obviously "overclacking" pros. Can we ban this one too.
> 
> Also what the clown kept asking people to pm him, what was he sending?


First post was attacking chispy, who called out the cheating. Then it has the same spelling mistakes as the other account.


----------



## ssti

DrunknFoo said:


> Im surprised my score is still in the top 100 PR with a 60c avg, the leaderboard looks so fun to climb now... God damn ek and or watercool 😭


keep pushing until hit the wall, your best result should show @70C-75C


----------



## chispy

ssti said:


> Definitely not as remainder to update your CPU-Z or user H2o vs. Ln2 @hwbot, the point is to learn build, cooling, overclacking of tomorrow if you like.


Slinky - H20 vs LN2 - ssti - Escapee: stop making new accounts to post BS. You are a 100% proven cheater at hwbot and got banned, end of story, it is true. One day you will get caught. You will leave this place sooner than you expect.


----------



## ssti

chispy said:


> Slinky - H20 vs LN2: stop making new accounts to post BS. You are a 100% proven cheater at hwbot and got banned, end of story. You will leave this place sooner than you expect.


don't miss second episode, can improve your skills.


----------



## chispy

defiledge said:


> I'm pretty sure @ssti is the same tard on a different account. Both of them are obviously "overclacking" pros. Can we ban this one too.
> 
> Also the clown kept asking people to pm him, what was he sending?


Yes he is the same user with many accounts , Slinky , H20 vs LN2 , ssti , Escapee , etc...


----------



## Falkentyne

xrb936 said:


> 3090 KPE w/ XOC 1000W Bios
> Mem+1200, Core+160: https://www.3dmark.com/3dm/55462781
> 
> Mem+1200, Core [email protected]: https://www.3dmark.com/3dm/55461987
> 
> That’s weird. Higher frequency w/ lower score. I am so frustrated with it now.





HyperMatrix said:


> Pretty much done repeating the answer to you. Already said it twice. Yet you continue to be surprised and “frustrated” without listening. Keep doing the same thing over and over again.


Ok relax, guys. 
Might be a combination of problems happening.
I think I pinpointed _one_ of them and it's not core clock for me.

What I noticed was, after several days of normal scores and then playing a bunch of games of Overwatch at 600W TDP, I suddenly saw my benchmark scores like 400 points below normal at my normal profile overclock, +150 / +500 (testing TS, FS). I was only getting 14,220 when I should be getting 14,580 or so.
And this time, running Superposition didn't fix it (and that was low, too).

So I reset NVCP to default, that helped superposition but did NOT help timespy or port royal.
But I noticed that when I raised core and memory, my scores still went up but were still 400-500 points below normal!! Tried +180/+650. Scores went up a little but still way too low. Got like 14,300 when it should have been 14,750

So I turned off the computer, unplugged the PSU, plugged it back in, rebooted and loaded my +150/+500 profile and ran PR again.

SAME low score!!! like 400 points too low!!!

Ran superposition (that was okay), PR, same low score...

So I underclocked the memory (still an overclock) from +500 to +250.
Ran port royal and my scores were suddenly 400 points higher!!!

Ran timespy, same thing!

So THEN I set the memory back to where it was before, +500, and ran PR again, and my scores were higher again, back to normal scores.
Then I clocked it up to +180 +650...got my 14765 port royal score back. What the hell?

So I have NO IDEA what happened. Nor do I know why after a complete power off and on, my scores remained low until I "downclocked" the memory, and then when I 're-clocked' it back, I kept the score increase! Clearly that wasn't heat...

I'm guessing there's either a bug somewhere, or if you underclock the memory by a certain amount, some memory strap gets set somewhere that fixes your scores...

I have no proof of anything. Just telling you what happened.

So if there's some weird thing going on with the core for some people, there seems to be a weird thing with the VRAM too. Try underclocking it, hit apply, test, then clock it up again...


----------



## geriatricpollywog

Falkentyne said:


> Ok relax, guys.
> Might be a combination of problems happening.
> I think I pinpointed _one_ of them and it's not core clock for me.
> 
> What I noticed was, after several days of normal scores and then playing overwatch a bunch of games at 600W TDP, I suddenly saw my benchmark scores like 400 points below normal at my normal profile overclock, +150 / +500 (testing TS, FS). I was only getting 14,220 when I should be getting 14,580 or so.
> And this time, running Superposition didn't fix it (and that was low, too).
> 
> So I reset NVCP to default, that helped superposition but did NOT help timespy or port royal.
> But I noticed that when I raised core and memory, my scores still went up but were still 400-500 points below normal!! Tried +180/+650. Scores went up a little but still way too low. Got like 14,300 when it should have been 14,750
> 
> So I turned off the computer, unplugged the PSU, plugged it back in, rebooted and loaded my +150/+500 profile and ran PR again.
> 
> SAME low score!!! like 400 points too low!!!
> 
> Ran superposition (that was okay), PR, same low score...
> 
> So I underclocked the memory (still an overclock) from +500 to +250.
> Ran port royal and my scores were suddenly 400 points higher!!!
> 
> Ran timespy, same thing!
> 
> So THEN I set the memory back to where it was before, +500, and ran PR again, and my scores were higher again, back to normal scores.
> Then I clocked it up to +180 +650...got my 14765 port royal score back. What the hell?
> 
> So I have NO IDEA what happened. Nor do I know why after a complete power off and on, my scores remained low until I "downclocked" the memory, and then when I 're-clocked' it back, I kept the score increase! Clearly that wasn't heat...
> 
> I'm guessing there's either a bug somewhere, or if you underclock the memory by a certain amount, some memory strap gets set somewhere that fixes your scores...
> 
> I have no proof of anything. Just telling you what happened.
> 
> So if there's some weird thing going on with the core for some people, there seems to be a weird thing with the VRAM too. Try underclocking it, hit apply, test, then clock it up again...


There is something weird going on with your setup. First, running superposition fixed your scores, then downclocking and upclocking VRAM fixed them. I'd be interested in full monitoring of a bad run and a good run.


----------



## Falkentyne

0451 said:


> There is something weird going on with your setup. First, running superposition fixed your scores, then downclocking and upclocking VRAM fixed them. I'd be interested in full monitoring of a bad run and a good run.


I monitored all the runs. Nothing was changed or reported different on any rail. Just low scores. And slightly less power pulled from the wall.

IDK. Someone else mentioned earlier in this same thread way back that he had these exact same problems (or maybe it was on eVGA forums--sorry I don't remember, but I think it was here!), and he said the ONLY thing that fixed his low scores was reinstalling windows :/

But if you read the eVGA FTW3 bios thread, you will see I'm not the only person complaining about inconsistent, varying scores. But an instant downclock of the VRAM from 10251 MHz to 10001 MHz INSTANTLY boosted my scores, and I had spent about an HOUR running benchmarks before that, getting low scores (changing core clock back and forth), which didn't help. Then I raised my VRAM clock back up and my scores kept increasing again.

Makes you wonder if Nvidia is doing something weird in the drivers or there is indeed some sort of "memory strap" thing going on...

For all I know, it could be a "bug" with saved profiles in MSI Afterburner. Wouldn't be the first time profiles cause problems. Remember the "Load saved profile on 2080 Ti Kingpin 1000W Vbios=BSOD" bug? That was fixed by an updated vbios (that apparently was not public).


----------



## geriatricpollywog

Falkentyne said:


> I monitored all the runs. Nothing was changed or reported different on any rail. Just low scores. And slightly less power pulled from the wall.
> 
> IDK. Someone else mentioned earlier in this same thread way back that he had these exact same problems (or maybe it was on eVGA forums--sorry I don't remember, but I think it was here!), and he said the ONLY thing that fixed his low scores was reinstalling windows :/
> 
> But if you read the eVGA FTW3 bios thread, you will see I'm not the only person complaining about inconsistent varying scores. But an instant downclock of the VRAM From 10251 mhz to 10001 mhz INSTANTLY boosted my scores---and I spent about an HOUR running benchmarks before that, getting low scores (changing core clock back and forth) which didn't help. Then I raised my VRAM Clock back up and my scores kept increasing again.
> 
> Makes you wonder if Nvidia is doing something weird in the drivers or there is indeed some sort of "memory strap" thing going on...
> 
> For all I know, it could be a "bug" with saved profiles in MSI Afterburner. Wouldn't be the first time profiles cause problems. Remember the "Load saved profile on 2080 Ti Kingpin 1000W Vbios=BSOD" bug? That was fixed by an updated vbios (that apparently was not public).


I am using Precision X1, not Afterburner so I didn't have that specific BSOD problem.

My last 4 cards before the 2080ti were AMD and the issues you are having would be par for the course for team red, and the reason I switched.

There is a point where increasing memory OC starts to hurt scores. However, since your issue is fixed by downclocking followed by upclocking, we can rule this out as a potential root cause.

Another potential root cause is "too many cooks in the kitchen" if you have several different programs running that are manipulating your GPU (Afterburner, Precision X1, etc).

Have you tried removing all Nvidia drivers using DDU and reinstalling? This fixed a problem I had, where after flashing back to the original bios from the 1000w bios, the 1000w bios would still be active.

If all else fails, then reinstalling Windows might help. There might be some janky AMD stuff in your registry left over from Vega. My previous card was a 2080 Ti, so I didn't need to reinstall Windows.


----------



## HyperMatrix

Falkentyne said:


> Ok relax, guys.
> Might be a combination of problems happening.
> I think I pinpointed _one_ of them and it's not core clock for me.
> 
> What I noticed was, after several days of normal scores and then playing overwatch a bunch of games at 600W TDP, I suddenly saw my benchmark scores like 400 points below normal at my normal profile overclock, +150 / +500 (testing TS, FS). I was only getting 14,220 when I should be getting 14,580 or so.
> And this time, running Superposition didn't fix it (and that was low, too).
> 
> So I reset NVCP to default, that helped superposition but did NOT help timespy or port royal.
> But I noticed that when I raised core and memory, my scores still went up but were still 400-500 points below normal!! Tried +180/+650. Scores went up a little but still way too low. Got like 14,300 when it should have been 14,750
> 
> So I turned off the computer, unplugged the PSU, plugged it back in, rebooted and loaded my +150/+500 profile and ran PR again.
> 
> SAME low score!!! like 400 points too low!!!
> 
> Ran superposition (that was okay), PR, same low score...
> 
> So I underclocked the memory (still an overclock) from +500 to +250.
> Ran port royal and my scores were suddenly 400 points higher!!!
> 
> Ran timespy, same thing!
> 
> So THEN I set the memory back to where it was before, +500, and ran PR again, and my scores were higher again, back to normal scores.
> Then I clocked it up to +180 +650...got my 14765 port royal score back. What the hell?
> 
> So I have NO IDEA what happened. Nor do I know why after a complete power off and on, my scores remained low until I "downclocked" the memory, and then when I 're-clocked' it back, I kept the score increase! Clearly that wasn't heat...
> 
> I'm guessing there's either a bug somewhere, or if you underclock the memory by a certain amount, some memory strap gets set somewhere that fixes your scores...
> 
> I have no proof of anything. Just telling you what happened.
> 
> So if there's some weird thing going on with the core for some people, there seems to be a weird thing with the VRAM too. Try underclocking it, hit apply, test, then clock it up again...



I never had heat issues, as in my runs temps were 48C and below. Memory also doesn’t affect it. Nor did I have to reboot. I’d simply set clocks back to default and OC again. It’s not random either. Once I found my safe OC limit, I could reboot and rerun and never run into the issues. It’s very specifically related to voltage/clocks/curve. How exactly, I don’t know. But as I said, start at a low OC and keep going up by increasing the offset 15MHz at a time and record your score. You’ll see what’s making your clock drop and at what point. Alternatively you can run something like Valley on a static scene and live OC while looking at power usage and FPS in that static scene.

Also important to note: I did NOT witness this happening with the FE/Strix/FTW3 cards but happened with both KPE cards I was OC’ing.
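For anyone wanting to script the "raise the offset 15MHz at a time and record your score" routine HyperMatrix describes, here's a rough Python sketch. To be clear, `apply_core_offset` and `run_benchmark` are hypothetical placeholders for whatever you actually do by hand in your OC tool and 3DMark; only the 15MHz step and the score-drop symptom come from the posts above:

```python
# Sketch of the step-and-record sweep: raise the core offset in
# 15MHz increments, record the score, and stop when a *higher*
# offset produces a *lower* score (the clock-drop/desync symptom).
def find_oc_knee(apply_core_offset, run_benchmark,
                 start=0, step=15, stop=240):
    results = []
    best = None
    for offset in range(start, stop + 1, step):
        apply_core_offset(offset)       # placeholder: set offset in your OC tool
        score = run_benchmark()         # placeholder: run PR/TS and read the score
        results.append((offset, score))
        if best is not None and score < best:
            # More offset but less score means clocks are being pulled down.
            print(f"score dropped at +{offset}MHz; safe limit is below here")
            break
        best = score if best is None else max(best, score)
    return results
```

Nothing fancy, but logging (offset, score) pairs like this makes the knee obvious instead of guessing from memory between runs.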


----------



## DrunknFoo

ssti said:


> keep pushing until hit the wall, your best result should show @70C-75C


Gave up till i get a block. Core downclocking due to temps


----------



## mirkendargen

Falkentyne said:


> Ok relax, guys.
> Might be a combination of problems happening.
> I think I pinpointed _one_ of them and it's not core clock for me.
> 
> What I noticed was, after several days of normal scores and then playing overwatch a bunch of games at 600W TDP, I suddenly saw my benchmark scores like 400 points below normal at my normal profile overclock, +150 / +500 (testing TS, FS). I was only getting 14,220 when I should be getting 14,580 or so.
> And this time, running Superposition didn't fix it (and that was low, too).
> 
> So I reset NVCP to default, that helped superposition but did NOT help timespy or port royal.
> But I noticed that when I raised core and memory, my scores still went up but were still 400-500 points below normal!! Tried +180/+650. Scores went up a little but still way too low. Got like 14,300 when it should have been 14,750
> 
> So I turned off the computer, unplugged the PSU, plugged it back in, rebooted and loaded my +150/+500 profile and ran PR again.
> 
> SAME low score!!! like 400 points too low!!!
> 
> Ran superposition (that was okay), PR, same low score...
> 
> So I underclocked the memory (still an overclock) from +500 to +250.
> Ran port royal and my scores were suddenly 400 points higher!!!
> 
> Ran timespy, same thing!
> 
> So THEN I set the memory back to where it was before, +500, and ran PR again, and my scores were higher again, back to normal scores.
> Then I clocked it up to +180 +650...got my 14765 port royal score back. What the hell?
> 
> So I have NO IDEA what happened. Nor do I know why after a complete power off and on, my scores remained low until I "downclocked" the memory, and then when I 're-clocked' it back, I kept the score increase! Clearly that wasn't heat...
> 
> I'm guessing there's either a bug somewhere, or if you underclock the memory by a certain amount, some memory strap gets set somewhere that fixes your scores...
> 
> I have no proof of anything. Just telling you what happened.
> 
> So if there's some weird thing going on with the core for some people, there seems to be a weird thing with the VRAM too. Try underclocking it, hit apply, test, then clock it up again...


Was your core clock and power limit doing what it normally does on a good run during the -400pt runs? I've seen (although usually after a crash from an unstable overclock) Afterburner claim to have all the right settings, but defaults (+0, +0, 100%PL) are actually applied until you make a change to each of the settings, hit apply, then change them to what they're supposed to be and hit apply again.

I've also noticed on my 3090 that for no apparent reason at all sometimes it's happy to do an entire PR run at 1.1V running balls to the wall at my highest curve bucket, other times it does lots of jumping around bouncing off vrel/vop with no obvious reason. Temps are the same, Afterburner settings are the same, NVCP settings are the same, etc.


----------



## cheddle

Sheyster said:


> Just want to confirm that your card was NOT shunted when you ran these tests? The more info we have for 2 x 8-pin owners the better, as this issue is potentially a card killer.


These exact measurements were taken *after* I attempted the shunt mod, however I was not able to get good enough contact on the surface of the 5 mOhm resistor to achieve any difference. I used a silver pen and that was inadequate.

Currently (heh) attached to my board are two 8 mOhm resistors poorly glued to the top of the two 5 mOhm resistors near the 8-pin connectors. Whilst these have continuity to the 8-pin connectors, the glue I have used is very high resistance and has not affected clock speeds or the balance between the two connectors.

I am going to buy a current clamp tomorrow and will do some testing once the Xmas madness is all done and dusted, to see if the reported draw reflects reality at all.

On the 390W BIOS I was seeing the exact same behaviour regardless of ****ing around with shunts, being a reported draw of:

on 8-pin #1: 173W peak (168W avg)
on 8-pin #2: 155W peak (150W avg)
on PCIE: 68W peak (65W avg)

I have an EVGA hybrid kit on the way that I am going to butcher and mod to fit this Galax SG, and I will remove the shunts at that time, also giving some pre/post comparison on balance to ensure these aren't in any way skewing the numbers.
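For anyone sanity-checking why a piggybacked shunt changes the *reported* draw at all, it's just the parallel-resistor formula. Quick Python sketch; the 5 mOhm and 8 mOhm values are the ones from the post above, the rest is generic illustration:

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

original = 0.005    # stock 5 mOhm shunt, in ohms
piggyback = 0.008   # 8 mOhm resistor glued on top, in ohms

combined = parallel(original, piggyback)  # ~3.08 mOhm

# The controller still assumes a 5 mOhm shunt, so it under-reads
# current (and therefore power) by the ratio combined/original.
reported_fraction = combined / original        # ~0.615
true_draw_multiplier = 1 / reported_fraction   # ~1.625

print(f"combined shunt: {combined * 1000:.2f} mOhm")
print(f"reported power is ~{reported_fraction:.1%} of actual draw")
```

Which is exactly why a clamp meter on the cables is the honest way to check: with a working 8-on-5 mod the card would report only ~62% of what it's really pulling.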


----------



## cheddle

Falkentyne said:


> Did you scrape off the conformal coating layer from the edges of your original shunts with a flat blade, making them shiny silver, first?
> Are the edges of your original shunts depressed lower than the black middle housing?
> 
> If you have a balance issue like that, that can usually be fixed by throwing an 8 mOhm shunt on 8 pin #1 and 10 mOhm shunt on 8 pin #2. OR if using MG 842AR paint only (after scraping the shunts like I mentioned), by applying a -thick- layer of paint on 8 pin #1 and a thin layer of paint on 8 pin #2 shunts.
> 
> For shunts with 'depressed' edges--i believe those all have that conformal coating on them. Also getting contact with the depressed edges causes issues even for people who are soldering shunts and not using MG 842AR paint at all. Ask @dante`afk about that
> 
> What's a current clamp?


My shunts have fairly pronounced ends, with the insulator being a little depressed into the SMD itself. I did scrape off the conformal coating, but perhaps not well enough... My shunt mod failed likely due to a poor choice of adhesive to affix the SMDs.

I definitely think going with an 8 and a 10 would fix the balance. However, I want to check the actual draw on the leads themselves.

A 'current clamp' is a device you thread the cable through that will measure the amperage being pulled through the cables... I just need to find a decent DC one locally.

You multiply the amperage by the voltage to work out the wattage. Fortunately the card reports the exact voltage (it can vary a little depending on GPU and load).

In theory, if I measure each 8-pin cable independently, I can get a real-world reading of the power going through the cables.
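The arithmetic is as simple as it sounds (watts per cable = clamp amps x the rail voltage the card reports). Trivial sketch below; the amp readings are made-up numbers for illustration, not real measurements:

```python
# Clamp-meter math from the post above: per-cable wattage is the
# clamp's amperage reading multiplied by the reported rail voltage.
def cable_watts(clamp_amps, rail_volts):
    return clamp_amps * rail_volts

# Hypothetical readings, one per 8-pin cable (illustration only).
readings = {"8-pin #1": 14.5, "8-pin #2": 13.0}  # amps
rail_volts = 12.1  # as reported by the card; varies a little with load

for cable, amps in readings.items():
    print(f"{cable}: {cable_watts(amps, rail_volts):.1f} W")
```

Comparing those numbers against what GPU-Z reports per connector is what tells you whether the shunts are skewing the sensors.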


----------



## Huseyinbaykal

Any more users have opinion on Kingpin Aio vs strix oc full waterblock? For daily gaming? Strix will have its own customloop with 480mm rad but kingpin with its aio. Scared about kingpin. Give me some advice please


----------



## cheddle

Huseyinbaykal said:


> Any more users have opinion on Kingpin Aio vs strix oc full waterblock? For daily gaming? Strix will have its own customloop with 480mm rad but kingpin with its aio. Scared about kingpin. Give me some advice please


My vote is with the KP AIO - although a good bin is not as 'guaranteed' as it has been in prior gens.


----------



## geriatricpollywog

Huseyinbaykal said:


> Any more users have opinion on Kingpin Aio vs strix oc full waterblock? For daily gaming? Strix will have its own customloop with 480mm rad but kingpin with its aio. Scared about kingpin. Give me some advice please


Kpe has dipswitches, proprietary software voltage control, oled screen for monitoring, better power delivery, and direct access to the man himself. Strix is a mid-range Asus motherboard, no?


----------



## HyperMatrix

Huseyinbaykal said:


> Any more users have opinion on Kingpin Aio vs strix oc full waterblock? For daily gaming? Strix will have its own customloop with 480mm rad but kingpin with its aio. Scared about kingpin. Give me some advice please





cheddle said:


> My vote is with the KP AIO - althought a good bin is not as 'guaranteed' as it has been in prior gens.



I would disagree. Even the worst cards can do about 2145-2160MHz under water. You won’t be able to sustain that for gaming on an AIO. Even though my KPE can do a full PR run at 2190MHz with the AIO, I can barely keep it stable for gaming long term at 2100-2115 unless I’m cranking the fans. I haven’t switched to liquid metal yet so that would help on the KPE. But a full block is still much better. My first KPE also only clocked to 2130-2145 for a PR run. So no “guarantee” you’re going to get a great chip.

If both are going under water, go KPE. If it’s under water vs AIO, it’s always under water.


----------



## xrb936

0451 said:


> The NVVDD dips add +50mv each to core voltage. I'm not sure how much the FBVDD or MSVDD add. The LLC dips allow me another 15mhz core stability so I leave them on all the time. The dip switches control the same parameters as the sliders in the Classified tool. Vince (Kingpin) said he doesn't use them, but I prefer them over the Classified tool because its buggy. Sometimes I'll open up the classified tool and it will set LLC to 0, other times it sets it to 1. It always sets MSVDD to 0.9, which is too low and I have no idea what the default should be. The dips switches can also be used for changing the parameters during a run. If for example the first seconds of PR need higher voltage, but this causes heat buildup later in the run, the dipswitches can help.
> 
> Plus, I just like dip switches. They are a fun way to overclock. As a kid, my first custom build using the original Athlon required a dip switch tool to be attached directly to the CPU to enable overclocking.


Great answer. Thank you so much.


----------



## xrb936

https://www.3dmark.com/3dm/55482634
Well, this is the best I can do for now. Looks like lowering temperature does help, but not as much as I expect. I will put the EK block on my Strix tomorrow and make some test later tomorrow. And I wanna say thanks to guys who helped me.


----------



## Huseyinbaykal

HyperMatrix said:


> I would disagree. Even the worst cards can do about 2145-2160MHz under water. You won’t be able to sustain that for gaming on an AIO. Even though my KPE can do a full PR run at 2190MHz with the AIO, I can barely keep it stable for gaming long term at 2100-2115 unless I’m cranking the fans. I haven’t switched to liquid metal yet so that would help on the KPE. But a full block is still much better. My first KPE also only clocked to 2130-2145 for a PR run. So no “guarantee” you’re going to get a great chip.
> 
> If both are going under water, go KPE. If it’s under water vs AIO, it’s always under water.


thanks for the help mate and all others answered. I dont wanna wait for kingpin block for ages. Also evga says they dont know if they will sell a hc block alone for kingpin hybrids. I dont want to gamble on a kingpin where I cant rma fast or even for low clock speeds...


----------



## geriatricpollywog

Huseyinbaykal said:


> thanks for the help mate and all others answered. I dont wanna wait for kingpin block for ages. Also evga says they dont know if they will sell a hc block alone for kingpin hybrids. I dont want to gamble on a kingpin where I cant rma fast or even for low clock speeds...


You should consider an Asus card.


----------



## xrb936

0451 said:


> You should consider an Asus card.


ASUS no longer binned their chips. I have already purchased 4 Strix till today, and my friends and I have tested over 30 of them, it’s really a silicon lottery now.


----------



## ssti

DrunknFoo said:


> Gave up till i get a block. Core downclocking due to temps


good choice... do the same with waterblock, 3090 strix "on air" reach best result @75-78C


----------



## Zogge

Spiriva said:


> I got a PNY 3090 (2x8pin) and a 1600i psu. But im away from home over xmas.


Strix 3090 with 1kW bios and Ax1600i here. I am home tomorrow evening if you want me to test something.


----------



## geriatricpollywog

xrb936 said:


> ASUS no longer binned their chips. I have already purchased 4 Strix till today, and my friends and I have tested over 30 of them, it’s really a silicon lottery now.


I should be more specific. He should consider an Asus because it’s more of a casual overclocking brand. EVGA is more enthusiast.


----------



## HyperMatrix

xrb936 said:


> ASUS no longer binned their chips. I have already purchased 4 Strix till today, and my friends and I have tested over 30 of them, it’s really a silicon lottery now.


My first KPE clocked worse than my Strix. KPE only binned for 2100. All cards can do about 2160 under water. Strix under water = higher constant clock than KPE Hybrid.



0451 said:


> I should be more specific. He should consider an Asus because it’s more of a casual overclocking brand. EVGA is more enthusiast.


I don’t think that’s fair. The biggest problem I have with the KPE is lack of blocks. You could argue that the true enthusiasts are getting proper AquaComputer blocks and evc2 hooked up. Currently with KPE all we have is waiting for maybe whenever EVGA sells us the Asetek blocks. 

I prefer the KPE over the Strix as a card. But not if the KPE has to be stuck with the hybrid cooler.


----------



## geriatricpollywog

HyperMatrix said:


> My first KPE clocked worse than my Strix. KPE only binned for 2100. All cards can do about 2160 under water. Strix under water = higher constant clock than KPE Hybrid.
> 
> 
> 
> I don’t think that’s fair. The biggest problem I have with the KPE is lack of blocks. You could argue that the true enthusiasts are getting proper AquaComputer blocks and evc2 hooked up. Currently with KPE all we have is waiting for maybe whenever EVGA sells us the Asetek blocks.
> 
> I prefer the KPE over the Strix as a card. But not if the KPE has to be stuck with the hybrid cooler.


The KPE is for LN2. It’s not intended for being plumbed into a hard loop to be casually overclocked while listening to the Dave Matthews Band. I think Asus has cards better suited for that.


----------



## HyperMatrix

0451 said:


> The KPE is for LN2. It’s not intended for being plumbed into a hard loop to be casually overclocked while listening to the Dave Matthews Band. I think Asus has cards better suited for that.



Haha well the definition was a little confusing. I wouldn’t consider LN2 users to be overclock enthusiasts. I’d consider it to be extreme overclocking. Enthusiast I would consider any multi-rad or chilled water setup. Anyone who shunt mods. But yes...if we’re talking about LN2 the KingPin is a better card.


----------



## cheddle

HyperMatrix said:


> I would disagree. Even the worst cards can do about 2145-2160MHz under water. You won’t be able to sustain that for gaming on an AIO. Even though my KPE can do a full PR run at 2190MHz with the AIO, I can barely keep it stable for gaming long term at 2100-2115 unless I’m cranking the fans. I haven’t switched to liquid metal yet so that would help on the KPE. But a full block is still much better. My first KPE also only clocked to 2130-2145 for a PR run. So no “guarantee” you’re going to get a great chip.
> 
> If both are going under water, go KPE. If it’s under water vs AIO, it’s always under water.


Agree, KPE under a custom loop is the best of both worlds


----------



## xrb936

cheddle said:


> Agree, KPE under a custom loop is the best of both worlds


Personally I will keep my Strix and wait for ASUS ROG and GALAX HOF. KINGPIN is no longer the best.


----------



## SoldierRBT

@HyperMatrix hey I’ve been following your posts about the 3090 KPE and I’ve been experiencing the “desync” issue you described with my KPE too.

On my card (I’m not sure if this applies to other KPEs, since every card is different) the desync happens only in 3DMark software. It seems the KPE was designed to keep 3DMark benchmarks (Port Royal, Time Spy, etc.) running no matter how unstable it is, and the only way to check whether it’s running correctly is power draw.

I found that using the classified tool and setting NVVDD to 1.10v removes part of this desync problem. Now, you’ll need to find the proper core voltage point in order to get high scores. So far I’ve found that 1.037-1.043v will use around 500W in PR and score 15k on avg every run with good OC. If I go higher in voltage, power draw goes down to 450W and of course the scores are mediocre. GPU-Z won’t detect any perfcap reason, benchmark will still run at the same voltage point just 60-70W lower power draw. 

There are a lot of people on the EVGA forum with power draw issues and low PR scores even though their cards are running 2160+ on average. This may be the problem.

Games don’t have this desync issue; the few I’ve tested work fine with just MSI Afterburner. RTX Quake 2 would pull 500W+ at 1.012V locked at 2145MHz at 2560x1440 and show high FPS.
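Since the only reliable tell here is power draw, the pattern described above (high reported clocks, quietly reduced power) can be logged from the command line. A minimal sketch, assuming `nvidia-smi` is on the PATH; the 2000 MHz / 400 W thresholds are made-up placeholders you would tune for your own card and workload:

```python
# Sketch: poll nvidia-smi and flag samples where the reported core clock
# is high but board power is suspiciously low (the "desync" signature).
import subprocess
import time

def parse_sample(line: str) -> tuple[float, float]:
    """Parse one 'clock, power' CSV line from nvidia-smi into floats."""
    clock_mhz, power_w = (float(x) for x in line.strip().split(", "))
    return clock_mhz, power_w

def is_suspect(clock_mhz: float, power_w: float,
               clock_floor: float = 2000.0, power_floor: float = 400.0) -> bool:
    """High reported clock but low power draw: worth a closer look."""
    return clock_mhz > clock_floor and power_w < power_floor

def log_forever(poll_s: float = 1.0) -> None:
    """Call this alongside a benchmark run to watch for desync."""
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=clocks.gr,power.draw",
             "--format=csv,noheader,nounits"],
            text=True)
        clock, power = parse_sample(out)
        flag = "  <-- possible desync" if is_suspect(clock, power) else ""
        print(f"{clock:.0f} MHz  {power:.1f} W{flag}")
        time.sleep(poll_s)
```

Comparing such a log against a known-good run at the same clocks makes a 60-70W drop like the one described above easy to spot.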


----------



## ssti

0451 said:


> I should be more specific. He should consider an Asus because it’s more of a casual overclocking brand. EVGA is more enthusiast.


I couldn't agree more. Just shunt-mod a STRIX 3090 in the right place and you won't need any of this:
1. XOC, XXOC LN2, XXXOC, or self-modded BIOSes
2. Testing every on/off combination of the 8 DIP switches (2^8 = 256 combinations)
3. A fresh OS install at least once a week
4. Testing every driver
5. Juggling EVBot or the Classified Tool
6. LN2
Only the top 10 overclockers around will do all of this for less than a 10% improvement. Sad to see someone waste his time on the wrong NVIDIA driver, but not surprising.
Thanks to Vince/EVGA for putting those 8 DIP switches in an accessible spot; can't wait to play with them next January 2021.









For years I'd have had to dismantle the loop to reach them; I'd be forced to build a switcher tool, and in 4x SLI it was just a NIGHTMARE.

Even if Vince doesn't use them himself (understandable), non-LN2 users should give them a try.


----------



## HyperMatrix

SoldierRBT said:


> @HyperMatrix hey I’ve been following your posts about the 3090 KPE and I’ve been experiencing the “desync” issue you described with my KPE too.
> 
> In my card, I’m not sure if this applies to others KPE since every card is different, the desync happens only in 3DMARK Software. It seems that the KPE was designed to keep 3Dmark benchmarks (Port royal, time spy, etc) running no matter how unstable it is and the only method to check if it’s not running correctly is power draw.
> 
> I found that using the classified tool and setting NVVDD to 1.10v removes part of this desync problem. Now, you’ll need to find the proper core voltage point in order to get high scores. So far I’ve found that 1.037-1.043v will use around 500W in PR and score 15k on avg every run with good OC. If I go higher in voltage, power draw goes down to 450W and of course the scores are mediocre. GPU-Z won’t detect any perfcap reason, benchmark will still run at the same voltage point just 60-70W lower power draw.
> 
> There’s a lot of people in EVGA forum with power draw issues and low PR scores even though their cards are running 2160+ on avg. This may be the problem.
> 
> Games don’t have this desync issue, the few I’ve tested work fine with just MSI afterburner. RTX Quake 2 would pull 500W+ at 1.012v locked 2145MHz 2560x1440 and show high FPS


My friend had this issue with Cyberpunk, where the game was showing him over 2200MHz, but when I connected over TeamViewer I noticed his FPS was lower than I remembered it being for me in some familiar scenes. That’s when I first became aware of it.

I also did live OC testing with unigine valley. I set a static scene. Checked FPS at stock. As well as FPS with a certain offset. Then tried that exact same clock speed but with a curve and locked voltage. I then continued to monitor power usage as well as FPS. I could easily desync and go back to normal without touching classified tool. 

It appears to be more often related to the voltage/frequency curve, and perhaps to setting too low a voltage. Also, the majority of the power draw complaints on the EVGA forum are from people who don’t know anything. They’re hitting their chip’s voltage/temperature limit, can’t clock any higher, aren’t trying more power-intensive workloads like Metro Exodus, and then claim their clocks are being held back because they’re not hitting the power limit. 😂

After all my testing and figuring out how to manage the problem I was able to do a legit Port royal run at 2190MHz. So I have no complaints other than about lack of a water block for the card.


----------



## Gebeleisis

I was just wondering if we could use some sort of 2-to-1 connector so that we split the power draw across the PCIe cables?
I know that Zotac had something like this bundled with their GPUs.


----------



## MangoMunchaa

Nico67 said:


> Just a quick question as I haven't seen it mentioned anywhere.
> 
> Did everybody with a Galax / KFA SG 3090 have to modify the white (I assume 6pin fan) header to get the EK reference water block to fit?
> 
> Also used some of the 1.5mm thermal pads on the coils as 1mm wasn't even touching.


Yes I did. I cut away at the plastic to make room and it works, but it was easily one of the worst things I have ever had to do in my life hahahaha. You just need to cut about half of the plastic off and it will fit, I think? I would have to check mine again. Big oversight on EK's behalf.


----------



## sultanofswing

HyperMatrix said:


> My friend had this issue with cyberpunk where the game was showing him over 2200MHz but when I connected over with team viewer I noticed his FPS was lower than I remember it being for me in some familiar scenes. That’s when I first became aware of it.
> 
> I also did live OC testing with unigine valley. I set a static scene. Checked FPS at stock. As well as FPS with a certain offset. Then tried that exact same clock speed but with a curve and locked voltage. I then continued to monitor power usage as well as FPS. I could easily desync and go back to normal without touching classified tool.
> 
> It appears to be more often related to voltage/frequency curve and perhaps setting too low of a voltage. Also the majority of the power draw complaints on the EVGA forum are by people who don’t know anything.  They’re from people who are hitting their chip’s voltage/temperature limit and can’t clock any higher and aren’t trying more power intensive workloads like metro exodus and then claiming their clocks are being held back because they’re not hitting the power limit. 😂
> 
> After all my testing and figuring out how to manage the problem I was able to do a legit Port royal run at 2190MHz. So I have no complaints other than about lack of a water block for the card.


This "desync" issue you are talking about sure sounds like improper use of the voltage/frequency curve to me. If you do not set up the voltage curve correctly you can have the high clocks, and the voltage will be the voltage you were aiming for, but the score will be way lower, and you will also be able to run higher clocks than you normally would.

*First image: the curve set incorrectly; the voltage points right before the "target" frequency step down too far.
This will run the card at the desired frequency and voltage but will produce low scores while still appearing stable.*

*Second image: how the curve should be set up for the clock/voltage you desire.*


----------



## Carls_Car

Looking at my GPU-Z I have the following readouts:

PWR_SRC Voltage: 11.9V
PCIe Slot Voltage: 12.0V
8pin #1: 11.8V
8pin #2: 11.9V
8pin #3: 11.9V

Should I be worried that these are not @ 12V? Is this too much droop?


----------



## mardon

What sort of power do you think would be safe to run through 30cm 18 Gauge wire?
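This one can be sized up with back-of-the-envelope numbers: 18 AWG copper is roughly 20.95 mΩ per metre, so for a given load you can estimate the voltage drop and resistive heating in a single 30 cm conductor. A rough sketch (connector pin ratings and safety derating are a separate, usually stricter, limit):

```python
# Estimate voltage drop and I^2*R heating in one 18 AWG conductor.
# 0.02095 ohm/m is the standard figure for 18 AWG copper near room temp.
RES_18AWG_OHM_PER_M = 0.02095

def wire_stats(power_w: float, length_m: float = 0.3,
               rail_v: float = 12.0) -> tuple[float, float, float]:
    """Return (current A, voltage drop V, heat W) for one conductor."""
    current = power_w / rail_v              # amps carried by the wire
    r = RES_18AWG_OHM_PER_M * length_m      # total conductor resistance
    v_drop = current * r                    # volts lost along the run
    heat = current ** 2 * r                 # watts dissipated in the wire
    return current, v_drop, heat

# e.g. 150 W carried on a single 12 V conductor
amps, drop, heat = wire_stats(150)
print(f"{amps:.1f} A, {drop * 1000:.0f} mV drop, {heat:.2f} W of heat")
```

In practice an 8-pin PCIe cable splits the current across three 12 V wires, so per-wire numbers are lower, and the connector pins (rather than the wire itself) are typically the limiting factor.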


----------



## Nico67

jura11 said:


> Tested the Kingpin XOC 1000W BIOS and I'm pleased with the results. Power draw is high, so if you're concerned about power draw or scared, please don't flash it, but it is without question one of the best BIOSes for a 2x8-pin RTX 3090. Temperatures are of course higher; at 90% power limit the highest I've seen has been 38C, and I usually see 32-36C max in benchmarks or gaming. Haven't tried running this BIOS at 100% yet.
> 
> And here are Port Royal results at the same OC (+145MHz on core / 1295MHz on VRAM); the only difference is the Kingpin XOC 1000W BIOS. With the KFA2/Galax 390W BIOS my score was 14279, and with the Kingpin XOC 1000W BIOS it's 14908. I'll probably try to break 15000 later during Xmas hahaha with 100% power limit (www.3dmark.com).
> 
> Merry Christmas guys and gals and please stay safe
> 
> Here are a few screenshots of GPU-Z
> 
> Hope this helps
> 
> Thanks, Jura


Wow, I would check your minimum voltages; all the rails are getting hammered under load. Possibly needs a larger PSU?



des2k... said:


> Here's mine if you need it, I'm running TE GT2 for a few hours now. Also 2x8pin





danielstorlos said:


> Timespy Extreme:
> 


You guys might want to check your minimum rail voltages too. It's a bit hard to tell from des2k's screenshot since it's all under load, but it does look a little low in that window?


----------



## bmgjet

Carls_Car said:


> Looking at my GPU-Z I have the following readouts:
> 
> PWR_SRC Voltage: 11.9V
> PCIe Slot Voltage: 12.0V
> 8pin #1: 11.8V
> 8pin #2: 11.9V
> 8pin #3: 11.9V
> 
> Should I be worried that these are not @ 12V? Is this too much droop?


It's within ATX spec, but it does put more work on the VRMs, since they have to draw more current to deliver the same power than they would at a higher voltage.
Mine sits at 12.6V idle and drops to 12.2-12.4V under a 500W load.
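For reference, the ATX spec allows ±5% on the 12 V rail, so anything from 11.4 V to 12.6 V is in tolerance. A quick sketch of that check, plus the extra current the VRMs must pull as the rail sags (assuming ideal, lossless conversion):

```python
# Check a measured 12 V rail against the ATX +/-5% tolerance window,
# and show how input current rises as the rail droops.
ATX_NOMINAL_V = 12.0
ATX_TOLERANCE = 0.05  # +/-5% per the ATX spec

def within_atx_spec(volts: float) -> bool:
    low = ATX_NOMINAL_V * (1 - ATX_TOLERANCE)   # 11.4 V floor
    high = ATX_NOMINAL_V * (1 + ATX_TOLERANCE)  # 12.6 V ceiling
    return low <= volts <= high

def input_current(power_w: float, rail_v: float) -> float:
    """Amps the VRMs pull from the rail for a given board power."""
    return power_w / rail_v

print(within_atx_spec(11.8))               # True: an 11.8 V reading is fine
print(round(input_current(500, 12.0), 1))  # 41.7 A at a flat 12.0 V
print(round(input_current(500, 11.4), 1))  # 43.9 A at the 11.4 V floor
```

So the 11.8-11.9 V readings above are comfortably in spec; the cost of the droop is a couple of extra amps through the VRMs at high load.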


----------



## Nico67

Gebeleisis said:


> I was just wondering if we could use some sort of 2 connector to 1 so that we split the power draw on each pcie cables?
> I know that zotac had something like this bundled with their gpus


Yeah, I had thought of that too: a dual 6-pin to 8-pin connector, so you could have two 6-pin PCIe cables out of the PSU going to a single 8-pin on the graphics card.



MangoMunchaa said:


> Yes I did, I cut away at the plastic to make room and it works but it was easily one of the worst things I have ever had to do in my life hahahahaha, you just need to cut about half of the plastic off and it will fit I think? I would have to check mine again. Big oversight on EK's behalf


Yeah, that's what I did, just cut the plastic ends away a bit, as the pins cleared OK. Wasn't happy about it, but it beats grinding away at the water block when it was all pasted and padded ready to go.


----------



## ssti

sultanofswing said:


> This "desync" issue you are talking about sure sounds like improper use of the voltage/frequency curve to me. If you do not set up the voltage curve correctly you can have the high clocks, and the voltage will be the voltage you were aiming for, but the score will be way lower, and you will also be able to run higher clocks than you normally would.
> 
> *First image: the curve set incorrectly; the voltage points right before the "target" frequency step down too far. This will run the card at the desired frequency and voltage but will produce low scores while still appearing stable.*
> 
> *Second image: how the curve should be set up for the clock/voltage you desire.*


OMG guys, I have never tried this **** in my entire life and I find it repulsive. This +X/-X fiddling is not supposed to be your second job and has nothing to do with overclocking.
The companies that take your money should get it right for best results, and if you encounter a problem, ask for an RMA.
Changing the frequency curve is cheating, and your system will fail 99% of the time over this +1/-1.
Do it at your own expense and pleasure; what a f*****g waste of time.
You guys should focus on cooling and run all your hardware as high as possible(2) at the lowest voltage(1) for best results.
(1) priority


----------



## jura11

Nico67 said:


> Wow, I would check your min voltages, all the rails are getting smashed under load. possibly needs a larger PSU?
> 
> 
> 
> 
> You guys might want to check your minimums rail voltages also, bit hard to tell with Des2k's screenshot as its all load, but it does look a little low in the window?


I'm running a Super Flower 8Pack 2000W PSU there, and I don't think there is a larger PSU. This PSU has powered a 4-GPU setup in the past and is now powering a 3-GPU setup (Asus RTX 2080 Ti Strix, Zotac RTX 2080 Ti AMP and Palit RTX 3090 GamingPro with the XOC BIOS).

Hope this helps 

Thanks, Jura


----------



## des2k...

Nico67 said:


> Wow, I would check your min voltages, all the rails are getting smashed under load. possibly needs a larger PSU?
> 
> 
> 
> 
> You guys might want to check your minimums rail voltages also, bit hard to tell with Des2k's screenshot as its all load, but it does look a little low in the window?


🙄 The EVGA 850 blew the thermistor on the vin/vout circuit; it exploded and pieces flew through the fan.
Really pissed; it's rated for 850W, and the card was 570W+ plus 90W for the CPU.

The house breaker saved it, everything survived, so f***ing lucky.

I'm running an undervolt now, 1860-1905MHz at 800mV for 300W, since my backup is only a 550W Seasonic.

No, I'm not gaming, just wanted to test the GPU.
The Seasonic has OCP, which triggers at 333W with the RTX 3090 🙄

Got to wait for the stores to open.
What 1000W+ PSU is the best?


----------



## Nizzen

des2k... said:


> 🙄 the evga 850 blew the thermister res on the vin/vout circuit, it exploded, pieces flew through the fan
> really pissed, it's rated for 850w, card was 570w+ 90w for cpu
> 
> house braker save it, everything survived, so f lucky
> 
> I'm with an under volt now, 1860-1905 at 800mv for 300w, since my backup is only 550 seasonic.
> 
> No, I'm not gamming, just wanted to test the GPU.
> The seasonic has OCP, triggers at 333w with the rtx 3090🙄
> 
> Got to wait for stores to open.
> What 1000w+ is the best ?


Seasonic Titanium 1000W or 1300W Platinum


----------



## jura11

Last night I tested XOC BIOS in Cyberpunk 2077, finally clocks are not bouncing from 1995MHz to 2010-2025MHz, clocks for most of the game staying in 2085MHz, with some drops to 2040-2055MHz that's with power limit set to 65%, used only thermal probe for measuring backplate temperature which on idle has been in 22-24°C, in actual gaming or Port Royal I have seen 34-36°C as max and GPU temperature never broke 38°C 

Hope this helps 

Thanks, Jura


----------






## jura11

des2k... said:


> 🙄 the evga 850 blew the thermister res on the vin/vout circuit, it exploded, pieces flew through the fan
> really pissed, it's rated for 850w, card was 570w+ 90w for cpu
> 
> house braker save it, everything survived, so f lucky
> 
> I'm with an under volt now, 1860-1905 at 800mv for 300w, since my backup is only 550 seasonic.
> 
> No, I'm not gamming, just wanted to test the GPU.
> The seasonic has OCP, triggers at 333w with the rtx 3090🙄
> 
> Got to wait for stores to open.
> What 1000w+ is the best ?


Sorry to hear that. Not sure which PSU I would recommend; in the past I ran a Seasonic 1250W XM2, I think, and that PSU powered a 3-GPU setup (a GTX 1080 Ti with dual GTX 1080s). Now I have a Super Flower 8Pack 2000W PSU which previously powered a 4-GPU setup, with no issues to date.

I owned only one EVGA PSU, a 1200W unit, which failed after 3 months of constant use.

Hope this helps and good luck there

Thanks, Jura


----------



## Edgenier

PSA: DLSS lowers power consumption substantially. For real stability testing, if you want to see whether you throttle, turn off DLSS in Control / Cyberpunk and watch your "stable" clocks throttle like crazy!


----------



## geriatricpollywog

xrb936 said:


> Personally I will keep my Strix and wait for ASUS ROG and GALAX HOF. KINGPIN is no longer the best.


Obviously you should keep the Strix if you already own one. I would argue that even though the KPE is no longer binned, all the extra hardware goodies are worth the extra $100 over the Strix. This card rips 15,300 back to back with no extra voltage, just LLC adjustments. No other card has these voltage and LLC adjustments, and flashing the KP BIOS doesn't magically grow them.


----------



## Sheyster

Falkentyne said:


> AND ALSO--
> People on FTW3's flashing the 1000Vbios and setting the power limit to 520W are drawing 88W through PCIE Slot
> That means the power balancing is based on the _BOARD_ PCB design itself, not on the vbios......


This is known at this point, and it's why the MSI Trio and Asus Strix actually work with the 500W EVGA XOC BIOS, up to 500W!

I agree with your theory about power balancing being screwy. If a PCI-E cable is pulling 300+ watts, it's gonna get warm!

EDIT - Just caught up to your other post mentioning MSI and ASUS.


----------



## Baasha

xrb936 said:


> ASUS no longer bins their chips. I have purchased 4 Strix cards so far, and my friends and I have tested over 30 of them; it's really a silicon lottery now.


This is quite true; my FE 3090s OC quite a bit better than my Strix 3090 OCs, but the base clock on the Strix is quite a bit higher so it kind of evens out lol. The only problem is that the Strix 3090 OC runs like a mini toaster oven.

On another note: is there a cryo-cooling solution for GPUs similar to Intel's Cryo Cooler for their CPUs?


----------



## Pinto

Tried the 1000W BIOS; a little better than the 520W obviously, but my card is definitely not using more than 600W:

I scored 15 556 in Port Royal (Intel Core i9-10980XE Extreme Edition, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com

I scored 22 159 in Time Spy (Intel Core i9-10980XE Extreme Edition, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com

Room temp 10°C :lol:

My Strix bin isn't the best one... memory is pretty decent, and the EK block is doing the job with a water temp of 10°C.


----------



## HyperMatrix

sultanofswing said:


> This "desync" issue you are talking about sure sounds like improper use of the voltage/frequency curve to me. If you do not set up the voltage curve correctly you can have the high clocks, and the voltage will be the voltage you were aiming for, but the score will be way lower, and you will also be able to run higher clocks than you normally would.
> 
> *First image: the curve set incorrectly; the voltage points right before the "target" frequency step down too far. This will run the card at the desired frequency and voltage but will produce low scores while still appearing stable.*
> 
> *Second image: how the curve should be set up for the clock/voltage you desire.*


Definitely not the issue as my curve is either like your second pic or done in a more linear fashion (both with same result). Also this issue only popped up with the 2 KPE cards and not the other 4 cards I OC’d.

Others have been reporting the same thing. It’s a KPE peculiarity.


----------



## truehighroller1

I took back my Asus EKWB 3090 and got the Suprim X 3090. My results with a little play time so far:

I scored 15 048 in Port Royal (Intel Core i9-7900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com


----------



## Falkentyne

des2k... said:


> 🙄 the evga 850 blew the thermister res on the vin/vout circuit, it exploded, pieces flew through the fan
> really pissed, it's rated for 850w, card was 570w+ 90w for cpu
> 
> house braker save it, everything survived, so f lucky
> 
> I'm with an under volt now, 1860-1905 at 800mv for 300w, since my backup is only 550 seasonic.
> 
> No, I'm not gamming, just wanted to test the GPU.
> The seasonic has OCP, triggers at 333w with the rtx 3090🙄
> 
> Got to wait for stores to open.
> What 1000w+ is the best ?


Corsair Ax1600i if you're rich.
Seasonic Prime PX-1000 if you're not.


----------



## xrb936

0451 said:


> Obviously you should keep the Strix if you already own one. I would argue that even though the KPE is no longer binned, all the extra hardware goodies are worth the extra $100 over the Strix. This card rips 15,300 back to back with no extra voltage, just LLC adjustments. No other card has these voltage and LLC adjustments, and flashing the KP BIOS doesn't magically grow them.


If the KINGPIN had its own water block right now, I would definitely keep it.


----------



## GAN77

truehighroller1 said:


> I took back my asus ekwb 3090 and got the suprim x 3090. My results with a little play time so far.


Excellent result. Is this a test on air with a 450-watt limit?

No, I see it's the EVGA 520-watt limit?


----------



## xrb936

truehighroller1 said:


> I took back my Asus EKWB 3090 and got the Suprim X 3090. My results with a little play time so far:
> 
> I scored 15 048 in Port Royal (Intel Core i9-7900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com


Is the Suprim X you got the new version? Or the old version before the recall?


----------



## truehighroller1

xrb936 said:


> Is the Suprim X you got the new version? Or the old version before the recall?



I'd say it has to be the new one because I got it right off the shelf right after it arrived.



GAN77 said:


> Excellent result. Is this a test on air with a 450-watt limit?
> 
> No, I see it's the EVGA 520-watt limit?


Thanks! Air, KPE 1000 W


----------



## Arbustok

Quick question.
Can anyone tell me which port I will lose by flashing Gigabyte's 390W BIOS to an Asus TUF?

Thanks


----------



## arvinz

Can someone link the 500/520/1000W BIOSes and quickly mention the pros and cons of each for either air or water cooling? Can't believe how long this thread is; it's a little confusing and time-consuming trying to hunt this info down.

I've just installed a Strix 3090 OC on air and would love to run it through its paces and see how it does.

Any help is appreciated.


----------



## xrb936

arvinz said:


> Can someone link the 500/520/1000W BIOSes and quickly mention the pros and cons of each for either air or water cooling? Can't believe how long this thread is; it's a little confusing and time-consuming trying to hunt this info down.
> 
> I've just installed a Strix 3090 OC on air and would love to run it through its paces and see how it does.
> 
> Any help is appreciated.


Just keep the stock BIOS; the Strix doesn't need those. You'd need a better cooling solution first.


----------



## Falkentyne

arvinz said:


> Can someone link the 500/520/1000W BIOSes and quickly mention the pros and cons of each for either air or water cooling? Can't believe how long this thread is; it's a little confusing and time-consuming trying to hunt this info down.
> 
> I've just installed a Strix 3090 OC on air and would love to run it through its paces and see how it does.
> 
> Any help is appreciated.


You can use either the eVGA 500W XOC FTW3 Vbios on your card, or the 1000W Kingpin Vbios.


----------



## Tias

What's the verdict on the 1000W BIOS and 2x8-pin cards? I've got the Asus TUF 3090, watercooled (EK block, 2x 480 rads), and a 1300W EVGA PSU.

Are there any problems running the 1000W BIOS as a 24/7 BIOS? Any problem with the memory running at full speed all the time?
I'm not looking to kill my card, but if the 1000W BIOS works fine and there aren't any problems using it, I don't wanna leave performance on the table either.


----------



## defiledge

truehighroller1 said:


> I took back my asus ekwb 3090 and got the suprim x 3090. My results with a little play time so far.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 048 in Port Royal
> 
> 
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


which one performs better?


----------



## truehighroller1

defiledge said:


> which one performs better?


The Suprim imo. I'm going to put it under water when the blocks for it get here ("January").


----------



## GAN77

truehighroller1 said:


> The suprim imo. I'm going to put it under water when the blocks get here for it " January ".



Which block did you choose?


----------



## truehighroller1

GAN77 said:


> Which block did you choose?


So far I've only read about one compatible block, which seems to be the EK Vector 2. Unless someone knows of a better compatible one that I'm not aware of, mind you.


----------



## GAN77

truehighroller1 said:


> So far I've only read about one compatible block, which seems to be the EK Vector 2. Unless someone knows of a better compatible one that I'm not aware of, mind you.


I also bought a Suprim.
I was looking for blocks. These are the three companies that mention Suprim compatibility:

BP-VG3090MT: Bitspower Classic VGA Water Block for MSI GeForce RTX 3090 Gaming Trio series (shop.bitspower.com)

EK-Quantum Vector Trio RTX 3080/3090 D-RGB - Nickel + Plexi: the 2nd-generation Vector GPU water block from the EK Quantum line, designed for MSI Trio RTX 3080 and 3090 graphics cards based on the latest NVIDIA Ampere architecture; for a precise compatibility match, EK recommends their Cooling Configurator (www.ekwb.com)

Alphacool Eisblock Aurora Acryl GPX-N RTX 3090/3080 Suprim X with backplate: combines style with performance and extensive digital RGB lighting, with over 17 years of experience behind the water block's design (www.alphacool.com)

Chinese manufacturers also claim compatibility, but I'm not sure.

I ordered the Bitspower.


----------



## Carillo

Wish the OP or someone else could pin answers and links on the main page for the questions spamming this thread 50 times every day. Just saying.


----------



## Falkentyne

Tias said:


> What's the verdict on the 1000W BIOS and 2x8-pin cards? I've got the Asus TUF 3090, watercooled (EK block, 2x 480 rads), and a 1300W EVGA PSU.
> 
> Are there any problems running the 1000W BIOS as a 24/7 BIOS? Any problem with the memory running at full speed all the time?
> I'm not looking to kill my card, but if the 1000W BIOS works fine and there aren't any problems using it, I don't wanna leave performance on the table either.


It works fine if: 1) you have 16 AWG power cables; 2) you have ABSOLUTELY ZERO problems with thermal pad contact, thermal paste contact, or hotspot heat dissipation; 3) you accept that the 1000W vBIOS is limited to 666W on 2x8-pin cards; and 4) it may be less than 666W depending on the power balancing across your SRC chip / PCIe slot / 8-pins and your PCB design (power balancing seems to be PCB-related: witness the FTW3 owners who get 88W through the PCIe slot at 500W, while Kingpin owners get 70W and MSI and Asus owners get even lower).

If your cooling is 100% top tier with NO mistakes anywhere, go ahead.


----------



## ssti

Falkentyne said:


> You can use either the eVGA 500W XOC FTW3 Vbios on your card, or the 1000W Kingpin Vbios.


A Strix on air doesn't need any EVGA BIOS, unless you run the single benchmark Port Royal all day.


----------



## Outcasst

I have noticed that since flashing the BIOS and having a benching session at 700 watts, after going back to the 390W BIOS there's definitely an increase in audible coil whine where there was absolutely zero before. The extra power draw must have loosened up the chokes or something.


----------



## Nico67

jura11 said:


> I'm running Superflower 8pack 2000w PSU there and don't think there is larger PSU, this PSU has powered in past 4*GPUs setup and now is powering 3*GPUs setup(Asus RTX 2080Ti Strix, Zotac RTX 2080Ti AMP and Palit RTX 3090 GamingPro with XOC BIOS)
> 
> Hope this helps
> 
> Thanks, Jura


Yeah, that should be plenty. I would still check the minimums though, as it looks like it's nearly halving the 12V rails under load.



des2k... said:


> 🙄 the evga 850 blew the thermister res on the vin/vout circuit, it exploded, pieces flew through the fan
> really pissed, it's rated for 850w, card was 570w+ 90w for cpu
> 
> house braker save it, everything survived, so f lucky
> 
> I'm with an under volt now, 1860-1905 at 800mv for 300w, since my backup is only 550 seasonic.
> 
> No, I'm not gamming, just wanted to test the GPU.
> The seasonic has OCP, triggers at 333w with the rtx 3090🙄
> 
> Got to wait for stores to open.
> What 1000w+ is the best ?


Ouch, sorry to hear that, but I guess it's better to have the PSU go than the video card.

Generally speaking I would go for a PSU that is double the expected power draw, as they usually work best at around 50% load; however, with the massive power draw increases over the last couple of GPU generations, that's getting harder to keep up with. You probably just need to look at max 12V load capability and single-rail PSUs.

I would agree with a Seasonic 1000W Titanium, 1300W Platinum, or Corsair 1000W Platinum or higher/better. I personally have a Seasonic 1000W Titanium, which doesn't seem to be drooping much, and although I have had a couple of other Seasonic PSUs with issues in the past, at least they make their own. Not sure which manufacturers use 16 AWG cable; Seasonic seems to use 18 AWG, but shorter cables are always better too.
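The 50%-load rule of thumb above is easy to turn into a quick calculator. A sketch with illustrative numbers (the 75 W "everything else" allowance is an assumption, not a measurement):

```python
# Size a PSU so the expected worst-case system draw sits near a target
# load fraction (~50% is where most units are quietest and most efficient).
def recommend_psu_watts(gpu_w: float, cpu_w: float, other_w: float = 75.0,
                        target_load: float = 0.5) -> float:
    """Total expected draw divided by the target load fraction."""
    total = gpu_w + cpu_w + other_w
    return total / target_load

# Example: a 3090 on an XOC BIOS at ~570 W plus a ~90 W CPU
print(recommend_psu_watts(570, 90))  # 1470.0 -> shop in the ~1500 W class
```

Which also shows why an 850 W unit is marginal for these cards: the same 660 W system puts it at roughly 85% load with zero headroom for transients.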


----------



## dr/owned

6 hours of teardown and reassembly, motherboard swap (9900KF -> 10850K), one leaking internal rotary o-ring (that's a new one), replumbing everything...

*It all worked*. Detaching PCIe slot power and putting the shunt on the 8 pin....connecting the two 8 pin +12V lines together so they're permanently balanced. 4mOhm shunts on top of the stock 5mOhm on everything. The header for the EVC2S. Everything. Works.

Here's Port Royal with 107% power and no OC. I'm not sure how to read the EVC2 IOUT yet:










Looks like the Bykski block and watercooled backplate are doing absurdly well. 40C in PR...and because of the size of my system (10 gallons) it never heatsoaks...it stays wherever it starts.


----------



## Falkentyne

Nico67 said:


> Yeah that should be plenty , I would still check the minimums though as it looks like its nearly halving the 12v's under load
> 
> 
> Ouch, sorry to hear that, but I guess its better to have the PSU go than the videocard.
> 
> Generally speaking I would go for a PSU that is double the expected power draw as they usually work best at around 50% load, however with the massive power draw increases over the last couple of GPU generations, that's getting harder to keep up with. Probably just need to look at max 12v load capabilities and single rail PSU's.
> 
> I would agree with Seasonic 1000w titanium, 1300w platinum or Corsair 1000w Platinum or higher/ better. I personally have a Seasonic 1000w titanium, which doesn't seem to be drooping much, and although I have had a couple of other seasonic PSU's with issues in the past, at least they make there own  Not sure which manufacturers use 16awg cable, but seasonic seem to use 18awg, but shorter cables is always better too.


Seasonic's 12-pin FE adapter is 16AWG, but I don't know about the regular 8-pins.


----------



## dante`afk

I feel inclined to buy a chiller because I can't put my system outside 



dr/owned said:


> 6 hours of teardown and reassembly, motherboard swap (9900KF -> 10850K), one leaking internal rotary o-ring (that's a new one), replumbing everything...
> 
> *It all worked*. Detaching PCIe slot power and putting the shunt on the 8 pin....connecting the two 8 pin +12V lines together so they're permanently balanced. 4mOhm shunts on top of the stock 5mOhm on everything. The header for the EVC2S. Everything. Works.
> 
> Here's Port Royal with 107% power and no OC..I'm not sure how to read the EVC2 IOUT yet:
> 
> View attachment 2471076
> 
> 
> Looks like the Bykski block and watercooled backplate are doing absurdly well. 40C in PR...and because of the size of my system (10 gallons) it never heatsoaks...it stays wherever it starts.


Your watercooling loop holds 10 gallons of water?


----------



## Spiriva

dr/owned said:


> 6 hours of teardown and reassembly, motherboard swap (9900KF -> 10850K), one leaking internal rotary o-ring (that's a new one), replumbing everything...
> 
> *It all worked*. Detaching PCIe slot power and putting the shunt on the 8 pin....connecting the two 8 pin +12V lines together so they're permanently balanced. 4mOhm shunts on top of the stock 5mOhm on everything. The header for the EVC2S. Everything. Works.
> 
> Here's Port Royal with 107% power and no OC..I'm not sure how to read the EVC2 IOUT yet:
> 
> View attachment 2471076
> 
> 
> Looks like the Bykski block and watercooled backplate are doing absurdly well. 40C in PR...and because of the size of my system (10 gallons) it never heatsoaks...it stays wherever it starts.



10 gallons is about 38 liters; do you use your bathtub as a reservoir?


----------



## defiledge

only big brain overclockers connect their loop to the sewage system


----------



## dr/owned

dante`afk said:


> Your watercooling holds 10gallons of water?





Spiriva said:


> 10 gallons is about 38 liter, do you use your bathtub as reservoir?


I've been doing iterations of "whole room watercooling" since before Linus ever attempted it. I have a 10-gallon tap-water reservoir that I put together myself (square plastic container; I had to make my own gasket and lid to make it water-tight, and mounted my own bulkhead fittings), a PMP-600 pump on it running @ 24V, and about 125 feet of hose to a water-water heat exchanger with my desktop. The radiator is a 20"x20" air-to-water heat exchanger with a box fan on it.
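The "never heat-soaks" behavior checks out against the sheer thermal mass of that much water; a back-of-the-envelope sketch (assuming ~600W dumped into the loop and, pessimistically, no radiator at all):

```python
# Back-of-envelope: how fast does a 10-gallon reservoir warm up under load?
GALLON_L = 3.785   # liters per US gallon
C_WATER = 4186.0   # specific heat of water, J/(kg*K); 1 L of water ~ 1 kg

def degrees_per_hour(heat_w: float, gallons: float) -> float:
    """Temperature rise rate of the reservoir with zero heat rejection."""
    mass_kg = gallons * GALLON_L
    return heat_w * 3600.0 / (mass_kg * C_WATER)

print(degrees_per_hour(600, 10))  # ~13.6 C/hour even with no radiator
```

So over a typical benchmark run the water barely moves off its starting temperature, which matches what dr/owned is seeing.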

My desktop loop itself is a normal 1L of coolant with a D5 pump in a soundproof box external to the case.


----------



## truehighroller1

Highest score yet with PR.










Time spy results that I posted in the timespy benchmark post.


----------



## Aristotle

What's the consensus on good waterblocks for the Strix 3090? EK, Alphacool, Bykski? Will the VRAM chips on the backside be cooled properly when switching to a waterblock?


----------



## reflex75

Edgenier said:


> For daily use (on air) I think i’ve settled on 1965mhz @ .90V which gets me 14005 PR and no higher than 71C . Any higher voltage and pcie will throttle me a bit. Anybody getting 2000mhz+ stable on air (zero throttling) for daily use?


I can play games around 2100Mhz when not power limited with a 3090FE stock air cooler (no shunt mod):





The 3090FE is a very nice card, cool and quiet for daily gaming, but it's heavily power limited for benchmark:
https://www.3dmark.com/spy/15738613
https://www.3dmark.com/pr/565338


----------



## Nico67

dr/owned said:


> I've been doing iterations of "whole room watercooling" since before Linus ever attempted it. I have a 10 gallon tap-water reservoir that I put together myself (square plastic container, had to make my own gasket and lid to make it water-tight, and mounted my own bulkhead fittings) , a PMP-600 pump on it running @ 24V, and about 125 feet of hose to a water-water heat exchanger with my desktop. The radiator is a 20"x20" air-to-water heat exchanger with a box fan on it.
> 
> My desktop loop itself is a normal 1L of coolant with a D5 pump in a soundproof box external to the case.


I use a 10-gallon Gatorade sports drink dispenser as my chiller res; it's thermally insulated, so it keeps the water cold. I modded fittings into the side, and the drink spout is handy as a drain tap (gets most of it out, at least).


----------



## xrb936

Aristotle said:


> Whats the consensus on good waterblocks for strix 3090? EK, alphacool, bykski? Will the vram chips on the backside be cooled properly when switching to a waterblock?


EKWB is good enough. Backside doesn’t really need to be water cooled, IMO.


----------



## motivman

Do you guys think it's worth dumping my shunt-modded 2x8-pin reference card for the Gaming X Trio? Also, what is the consensus on running the 1000W BIOS 24/7? I'm worried about memory degradation since it would be running at full speed 24/7. Or should I just get the Gaming X Trio and shunt mod it? Ultimately, with the Gaming X Trio I can defeat all power limits with either the shunt mod or the 1000W BIOS, but the question is which route is better.


----------



## Falkentyne

motivman said:


> Do you guys think its worth it dumping my shunt modded 2X8 pin reference card for the Gaming X trio? Also what is the consensus on running the 1000W bios 24/7? worried about memory degradation since it would be running full speed 24/7, or should I just get the gaming x trio and shunt mod it? Ultimately, with the gaming x trio, I can defeat all power limits with either the shunt mod or the 1000W bios, but the question is which route is better?


Why not just get the actual Kingpin card at that rate?
Also the MSI cards do have fuses on them, although that shouldn't matter with the Kingpin Bios--but I am not 100% sure about that.


----------



## dr/owned

Is this right when all the shunts are modded? It shows PerfCap = Pwr for Port Royal and seems to be dropping voltage. GPU temp doesn't go above 45C. Looks like all the wattages reported are "correct" for the shunts.










Is it like the GPU die has some sort of awareness of its power draw and caps out at around 550W no matter what?


----------



## Falkentyne

dr/owned said:


> Is this right when all the shunts are modded? It PerfCap = Pwr for Port Royal and seems to be dropping voltage. GPU temp doesn't go above 45C. Looks like all the W reported are "correct" for shunts.
> 
> View attachment 2471094
> 
> 
> Is this like the GPU die has some sort of awareness of its power draw and caps out at around 550W no matter what?


If I'm not being dumb here, please run that again with all the wattage values set to "Maximum". Also put PWR_SRC Voltage (edit) on "Maximum" as well.
Also please have HWiNFO64 open so I can see "TDP%" and "TDP Normalized". I want to see if your Normalized is far above 100% here. The green junior throttle bar is a throttle warning; it only becomes a throttle if it becomes a large throttle bar. You're also getting VRel and VOp at the same time, so I'm curious. A triple-color bar usually means a high Normalized %.
Are you running Port Royal at the same time you took that screenshot? I thought the benchmark exits out if you alt-tab?

What card is this again (AIB)? Your older posts are blocked on your profile--can't search them so I have to ask.


----------



## dr/owned

Falkentyne said:


> If I'm not being dumb here, please run that again with all the wattage values set to "Maximum". Also put PWR_SRC Voltage (edit) on "Maximum" also.
> Also please have HWINFO64 open so I can see "TDP%" and "TDP Normalized". I want to see if your Normalized is far above 100% here. The green junior throttle bar is a throttle warning. It only becomes a throttle if it becomes a large throttle bar. You're also getting VREL and vOP at the same time so I'm curious. a Triple color bar usually shows a high Normalized %.
> are you running port royal at the same time you took that screenshot? I thought the benchmark exits out if you alt tab?
> 
> What card is this again (AIB)? Your older posts are blocked on your profile--can't search them so I have to ask.


This is a TUF. Yeah the screenshot was with a Custom PR run so I could run it windowed. Here's a normal 14350 score run. Multiplier on these wattages should be 2.25x with the shunt configuration.
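The 2.25x figure falls out of soldering a 4mOhm shunt in parallel with the stock 5mOhm one: the card senses a lower resistance, so it under-reads current by a fixed ratio. A quick sketch of that arithmetic:

```python
# Stacking a shunt in parallel lowers the effective resistance the card's
# current-sense circuit sees, so reported power must be scaled back up.

def parallel(r1: float, r2: float) -> float:
    """Effective resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def true_power_multiplier(stock_mohm: float, added_mohm: float) -> float:
    """Ratio of actual power to the power the card reports after the mod."""
    return stock_mohm / parallel(stock_mohm, added_mohm)

print(true_power_multiplier(5.0, 4.0))        # -> 2.25
print(350 * true_power_multiplier(5.0, 4.0))  # a "350W" reading is really 787.5W
```

So a reported 390W limit with this shunt stack corresponds to roughly 877W of actual board power, which is why the stock power limit effectively disappears.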


----------



## Falkentyne

dr/owned said:


> This is a TUF. Yeah the screenshot was with a Custom PR run so I could run it windowed. Here's a normal 14350 score run. Multiplier on these wattages should be 2.25x with the shunt configuration.
> 
> View attachment 2471095


Ok I spent a good 10 minutes staring at this and I can't find anything wrong with any of your rails. Literally everything here is on point.
But just like I suspected, your TDP Normalized is in outer space. Something, somewhere is drawing more power than it's allowed to, and it may be one of those internal power limits that aren't being shown on sensors. 

Sort of like this garbage you see here with Superposition 4k Extreme shaders custom where nothing was hitting any power limit visible (Look at the normalized %).










The only thing that even looks slightly odd is the NVDD2 (SRAM/uncore, maybe related to MSVDD voltage, which may be the same thing as SRAM voltage) Input Power (sum) being way lower than NVVDD Output Power (sum), but I have absolutely no idea what those are or what "sum" means. The only thing I do know is that GPU Core NVVDD Input Power (sum) is the exact same thing as "GPU Chip Power" in GPU-Z.

That being said, I would not expect you to have any thick throttle bar at all in Port Royal. Timespy Extreme or Superposition 4k Extreme Shaders, sure, but not Port Royal...

I did this on my last run a little earlier.
(This was +180 core, +250 RAM).










I know that @bmgjet 's bios editor shows an Aux1, Aux2, Aux3 and Aux4 power limit in vbios. But these limits are not reported on any sensor and I doubt anyone here besides Kingpin knows what they are...

You could try opening your vbios in his editor and posting the screenshot of what it says....


----------



## dante`afk

dr/owned said:


> I've been doing iterations of "whole room watercooling" since before Linus ever attempted it. I have a 10 gallon tap-water reservoir that I put together myself (square plastic container, had to make my own gasket and lid to make it water-tight, and mounted my own bulkhead fittings) , a PMP-600 pump on it running @ 24V, and about 125 feet of hose to a water-water heat exchanger with my desktop. The radiator is a 20"x20" air-to-water heat exchanger with a box fan on it.
> 
> My desktop loop itself is a normal 1L of coolant with a D5 pump in a soundproof box external to the case.


pictures please


----------



## Krytecks

Hello guys, little problem here: I have a 3090 Phoenix, and when I try to flash any of the 3x8-pin BIOSes (like FTW3, Strix, or Kingpin) my GPU undervolts itself and I get worse performance than with a lower-power-limit BIOS. Right now I have the Xtreme BIOS, and in GPU-Z I can see that my GPU is using ~450W in burn mode, but only 0.8V... Does anyone have a solution?


----------



## dr/owned

Falkentyne said:


> Ok I spent a good 10 minutes staring at this and I can't find anything wrong with any of your rails. Literally everything here is on point.
> 
> I know that @bmgjet 's bios editor shows an Aux1, Aux2, Aux3 and Aux4 power limit in vbios. But these limits are not reported on any sensor and I doubt anyone here besides Kingpin knows what they are...
> 
> You could try opening your vbios in his editor and posting the screenshot of what it says....


Here's one more run with the VRM set to +0.15V. I see the total system power consumption increase by 80-100W, and the reported Output Voltage looks correct-ish (assuming some load line is going on), but PerfCap is freaking out:










EDIT: I forgot to add, thank you for helping me out 

Is there a link to his BIOS editor? I assume this ends with me flashing the 1000W BIOS and yolo-McSwaggins it?


----------



## Falkentyne

dr/owned said:


> Here's one more run with the VRM set to +0.15V....like I see the total system power consumption increase by 80-100W and the Output Voltage being reported looks correct-ish (assuming some load line going on), but then PerfCap is freaking out:
> 
> View attachment 2471099
> 
> 
> EDIT: I forgot to add, thank you for helping me out
> 
> Is there a link to his BIOS editor? I assume this ends with me flashing the 1000W BIOS and yolo-McSwaggins it?



Should still be here I hope.









[Official] NVIDIA RTX 3090 Owner's Club


3090 strix running kingpin bios, EK block with Alphacool x-flow 360rad 60mm, settings +170 clock +1250mem, getting 15 322 @ 34c https://www.3dmark.com/pr/648784 What were you able to get on that card max with air cooling (without freezing air or anything of that sort)?




www.overclock.net


----------



## jura11

Krytecks said:


> Hello guys, little problem here, i have a 3090 Phoenix, and when i try to flash all 3*8 pin bios ( like ftw3, strix, or kingpin ) my gpu is undervolting itself and i have worse perf than bios with lower powerlimit, like now i have a Xtreme bios and with gpu-z i can see that my gpu is using ~450W in burn mode, but only 0.8V ...  Someone have a solution ?


Don't use a 3x8-pin BIOS on a 2x8-pin card; performance will be a lot worse than with the stock BIOS. Use the KFA2 390W or Gigabyte 390W BIOS, and if you are under water then you can try the Kingpin 1000W BIOS with the power limit capped at 65%.

Hope this helps

Thanks, Jura


----------



## Krytecks

jura11 said:


> Don't use 3*8 pin BIOS on 2*8pin BIOS,performance will lot worse than with stock BIOS,use KFA2 390W or Gigabyte 390W BIOS,if you are under water then you can try Kingpin 1000W BIOS and limit power limit to 65%
> 
> Hope this helps
> 
> Thanks,Jura


I have a custom water loop for my 3090. I tried a 65% power limit with the Kingpin BIOS, but performance is worse.


----------



## dr/owned

jura11 said:


> Don't use 3*8 pin BIOS on 2*8pin BIOS,performance will lot worse than with stock BIOS,use KFA2 390W or Gigabyte 390W BIOS,if you are under water then you can try Kingpin 1000W BIOS and limit power limit to 65%
> 
> Hope this helps
> 
> Thanks,Jura


Alright...BRB, going to go melt some things. Not really though...I'm using 16AWG cables, have everything as heatsinked as I can get it, and have an IR camera to watch what's happening.


----------



## des2k...

Krytecks said:


> I have a custom WC for my 3090, i tried 65% powerlimit with kingpin bios but perf are worse


The 65% limit is about 420W on 2x8-pin, but it's way less in reality.
I would start at the 80% limit and add a core offset; that's when it registers a performance gain in benches.


----------



## Krytecks

If I can add something: at idle I'm at a constant 1920MHz, 0.8620V and 140W... With the Kombustor stress test, I'm at ~1780MHz, ~0.8V, and ~460W.


----------



## Falkentyne

Krytecks said:


> Hello guys, little problem here, i have a 3090 Phoenix, and when i try to flash all 3*8 pin bios ( like ftw3, strix, or kingpin ) my gpu is undervolting itself and i have worse perf than bios with lower powerlimit, like now i have a Xtreme bios and with gpu-z i can see that my gpu is using ~450W in burn mode, but only 0.8V ...  Someone have a solution ?


You can use the Kingpin 1000W vBIOS at your own risk, but you will need to set it to 80% TDP (about 550W) to show any big improvement over your existing BIOS. 100% TDP will draw at most 667W; whether you get close to that depends on how the power balancing is on your board, and you had better have 16AWG PCIe power cables if you even attempt that.

Other boards' 3x8-pin vBIOSes should NOT be used on yours.


----------



## jura11

dr/owned said:


> Alright...BRB going to go melt some things. Not really though...I'm using 16AWG cables and have everything as heatsinked as I can get it...and a IR Camera to watch what's happening.


If you are scared, don't use that BIOS; that's what I recommend there. I have run and am still running the XOC 1000W BIOS on my Asus RTX 2080 Ti Strix with no issues to date; I flashed it when it was released.

Personally I have a Superflower 8Pack 2000W PSU and no issues with the PSU or cables. I'm always measuring temperatures, and most of the time I have HWiNFO or SIV64 open on my second monitor; in my case temperatures stay in the 30s in gaming, benchmarks, etc.

Yup, I use an IR camera as well and check cable temperatures all the time; they're in the 20s (I've seen 32C on one of the 8-pin cables when running at 90% power limit).

Hope this helps

Thanks, Jura


----------



## dr/owned

jura11 said:


> If you are scared don't use that BIOS,that's what I recommend there,I have run and still running on my Asus RTX2080Ti Strix XOC 1000W BIOS and no issues to the date,I have flashed my Asus RTX2080Ti Strix with XOC BIOS when has been released
> 
> Personally I have Superflower 8pack 2000W PSU and no issues with PSU or cables,I always measuring temperatures and most of the time on my second monitor I have opened HWiNFO or SIV64,in my case temperatures in 30's in gaming or in benchmarks etc
> 
> Yup I use as well IR camera and checking all the time cables temperatures which are in 20's(seen 32C at one of the 8pin cable when I run 90% power limit)
> 
> Hope this helps
> 
> ThanksJura


Welp...bricked the Performance BIOS setting trying this one: EVGA RTX 3090 VBIOS

I'm not sure if the driver crashed or something while it was flashing. Maybe it didn't like that I had 4 monitors connected. Are there any steps I'm missing besides '--protectoff' and then '-6 227771.rom'?


----------



## jura11

Krytecks said:


> I have a custom WC for my 3090, i tried 65% powerlimit with kingpin bios but perf are worse


In my case, with the 65% power limit I gained about 100-150 points in Port Royal with the same OC (+145MHz core and +1295MHz on VRAM), and in Cyberpunk 2077 the GPU, which previously bounced from 1950-1995MHz up to 2010-2025MHz, now stays at 2085MHz with some downclocking to 2070MHz.

I would try 70% and then 80% or more and see if your scores get higher. If you want lower VRAM clocks at idle, create an idle profile in MSI Afterburner with 0MHz on core and -10MHz on VRAM and the GPU should idle as it should; also, in the NV Control Panel set the GPU to Normal or Optimal rather than Performance.

Hope this helps

Thanks, Jura


----------



## motivman

Falkentyne said:


> Why not just get the actual Kingpin card at that rate?
> Also the MSI cards do have fuses on them, although that shouldn't matter with the Kingpin Bios--but I am not 100% sure about that.


Anyone with the Gaming X trio try out the 1000W bios? I have one that I can buy locally for a good price, just wondering if it is a good investment, compared to my stock PNY.


----------



## jura11

dr/owned said:


> Welp...bricked the Performance bios setting trying this one: EVGA RTX 3090 VBIOS
> 
> I'm not sure if the driver crashed or something while it was flashing. Maybe it didn't like that I had 4 monitors connected. Is there any steps I'm missing besides '--protectoff' and then '-6 227771.rom'?


I think someone over here posted a direct link to the XOC BIOS. What I would do is rename 227771.rom to XOC.rom, and before any BIOS flash, remove or reset your OC settings in MSI Afterburner to default; or rather, I usually shut down MSI Afterburner entirely.

As for the driver crash, I got that as well last time when I wanted to flash the KFA2 390W BIOS; in my case it was caused by the OC profile on my RTX 3090. I reset my OC profile to default and the flash went smoothly.

Hope this helps

Thanks, Jura


----------



## Falkentyne

dr/owned said:


> Welp...bricked the Performance bios setting trying this one: EVGA RTX 3090 VBIOS
> 
> I'm not sure if the driver crashed or something while it was flashing. Maybe it didn't like that I had 4 monitors connected. Is there any steps I'm missing besides '--protectoff' and then '-6 227771.rom'?


Unhook all but your primary monitor.
Boot with the backup bios switch, switch it to standard after you're in windows, then flash.

Also this is the bios link.


http://overclockingpin.com/3998/NDA_%20xoc_tools_3998.zip


----------



## Krytecks

It seems to work with the BIOS dr/owned linked, but my card is screaming (even though it's at max 61°C), so do you think 80% PL is a good limit?


----------



## dr/owned

jura11 said:


> I think someone over here posted direct link on XOC,rename this 227771.rom to XOC.BIOS what I would do and if you do any flash BIOS,remove or reset to default yours OC settings in MSI Afterburner or rather I usually shut down MSI Afterburner
> 
> This driver crash,I got that as well last time when I wanted to flash KFA2 390W BIOS in my case this has caused OC profile on my RTX3090,reseted to default my OC profile and flash went smoothly
> 
> Hope this helps
> 
> Thanks,Jura





Falkentyne said:


> Unhook all but your primary monitor.
> Boot with the backup bios switch, switch it to standard after you're in windows, then flash.
> 
> Also this is the bios link.
> 
> 
> http://overclockingpin.com/3998/NDA_%20xoc_tools_3998.zip



Thanks guys, I got recovered back to the original Performance BIOS. I'm rusty; the last time I flashed was back in the 680 Lightning days, flashing an unlocked-voltage BIOS.

Good point on resetting Afterburner too. I'll report back shortly if I screw the pooch again.


----------



## Falkentyne

Krytecks said:


> It seems to work with the bios dr/owned linked, but my card is screaming  ( even tho it's at max 61°), so you think 80% PW is a good limit ?


At least your scores are higher now than before?


----------



## dr/owned

Didn't screw the pooch, but this Kingpin BIOS kills the second DP output on the TUF. Which I was sort of expecting, since it's a 3x8-pin BIOS on a 2x8-pin card...

I'll just do some benchmarking to see what sort of fun can be had.


----------



## Nico67

dr/owned said:


> Thanks guys, I got recovered back to the original Performance bios. I'm rusty since the last time I flashed was the 680 Lightning days flashing an unlocked voltage BIOS.
> 
> Good point on resettting Afterburner too. I'll report back shortly if I screw the pooch again.


The older guides going back a few generations used to say to disable the video driver in Device Manager, then do the flashing business. Never had any issues doing it that way.


----------



## Nico67

PSA: Cablemod cables are 20AWG, not even 18, or at least the ones I had for my old Seasonic Snow Silent 1000W were. Just found an old extra 6-pin tail I had cut off.


----------



## Martin778

Cablemod cables get just slightly warm in synthetic benchmarks with a [email protected]; I have a triple 8-pin 3090 and run 1x8 + 2x8 (single cable).
Still waiting for decent AIO 3090 models; the air-cooled ones are disappointing due to the immense power draw. 450W in Quake II RTX...


----------



## truehighroller1

Falkentyne said:


> I know that @bmgjet 's bios editor shows an Aux1, Aux2, Aux3 and Aux4 power limit in vbios. But these limits are not reported on any sensor and I doubt anyone here besides Kingpin knows what they are...
> 
> You could try opening your vbios in his editor and posting the screenshot of what it says....


Where can I find his editor, please? Been googling and looking through his posts etc. and can't find it.


----------



## Falkentyne

truehighroller1 said:


> Where can I find his editor please? Been googling and looking through his post etc. cAN't find it.


I just posted the link.


----------



## dr/owned

So slightly disappointing conclusion:

(for Google, this is the Asus 3090 TUF OC )

-The Kingpin 1000W BIOS basically makes no difference with my card being shunted. From the wall there's maybe 30W more total consumption (around 600-630W) in Port Royal but it sacrifices the second DP from the top of the card. On paper though this would be something like 2000W power limit with the shunt...where the VRM would collapse before it got there anyways.

-Port Royal Score didn't change by any meaningful amount.

-GPUz stops flagging Pwr limit and stays at VRel. I'm guessing there's some minor shunt/current sensing going on outside the 5 big ones that stops you from going into disco-inferno territory with this card. It doesn't seem to aggressively pull down voltage / clock though when it's triggered so not sure.

-Adding more voltage to the core doesn't really seem to improve the max OC at all with my card. I can do +175 core (around 2130 actual) with no +Voltage. Even +0.15V added via EVC2, it doesn't make +200 stable. The power consumption increases by about 80W or so with this voltage increase. I didn't mess with the uncore or memory voltages but I assume if I cranked those I could make this card get up to 800W.

-The Bykski block + backplate is good. +15C core temp over the water temp. The DIMM6 waterblock on the backplate works well and keeps it far cooler than it was with the air cooler. (EDIT: I should disclaim that I built the assembly as well as possible: Conductonaut on the die; instead of the included thermal pads I used Thermalright 12W/mK 1.5mm ones, plus generic 4mm ones to make PCB contact with the backplate; drilled the backplate to mount the waterblock with Kryonaut; baked the card in the oven to make everything squishier for screw tightening, etc.)


----------



## truehighroller1

Falkentyne said:


> I just posted the link.



Thank you, was disappointed it's just for viewing though. I thought that it was an actual editor. Oh well. Thanks again.


----------



## HyperMatrix

truehighroller1 said:


> Thank you, was disappointed it's just for viewing though. I thought that it was an actual editor. Oh well. Thanks again.


It is an editor. You can modify the bios and save the changes you make. But you can’t sign the bios. So you currently have no way of flashing it to your card.


----------



## Carls_Car

Hi all, looking for a bit of input... I've got an ASUS Thor 850W PSU (plat). Wondering if I'm getting good power from it? Do my voltages look alright? Most people I see have something @ or above 12V on the 8-pin readings under load... As you can see I've got some droop and I'm pretty much never at 12V on the 8pins. Will this affect my OC capabilities? I'm running an MSI SUPRIM X.


----------



## bmgjet

ABE005
392.5 KB file on MEGA
mega.nz





Support for EVGA 1KW Bios
More tables for the 3080FE found.


----------



## Falkentyne

bmgjet said:


> ABE005
> 392.5 KB file on MEGA
> mega.nz
> 
> Support for EVGA 1KW Bios
> More tables for the 3080FE found.


Thank you very much, Bmgjet!

I just realized that eventually, flashing the Founders Editions is going to be next to impossible.
There's no way I can determine to even force-flash the BIOS.
I asked Igor from Igor's Lab which chip is the vBIOS chip, but he hasn't got back to me yet.
I think it's that small mini 8-pin chip on the front bottom, because it's very close to the 1.8V circuit, so that's logical.
But if a flash were to fail, there's no chance at all of recovery if you can't reprogram the chip with a hardware programmer (and if flashing with another card, with the FE as a slave, fails, you're done for).


----------



## xrb936

Anybody bought EKWB Strix 30 series backplate? It says there are 9 thermal pads but I only found 6 in the box....


----------



## Krytecks

Falkentyne said:


> At least your scores are higher now than before?


My 3090 is dead 😅


----------



## Falkentyne

Krytecks said:


> My 3090 is dead 😅


What are you talking about?


----------



## bmgjet

Probably got hot-spotted, looking back at his posts.
He said he has custom water cooling on it, but his temps were 61C.
Then he said he was going to try a higher power limit on the 1KW BIOS.


----------



## cheddle

Krytecks said:


> My 3090 is dead 😅


awks, any detail on how this happened?


----------



## motivman

dr/owned said:


> So slightly disappointing conclusion:
> 
> (for Google, this is the Asus 3090 TUF OC )
> 
> -The Kingpin 1000W BIOS basically makes no difference with my card being shunted. From the wall there's maybe 30W more total consumption (around 600-630W) in Port Royal but it sacrifices the second DP from the top of the card. On paper though this would be something like 2000W power limit with the shunt...where the VRM would collapse before it got there anyways.
> 
> -Port Royal Score didn't change by any meaningful amount.
> 
> -GPUz stops flagging Pwr limit and stays at VRel. I'm guessing there's some minor shunt/current sensing going on outside the 5 big ones that stops you from going into disco-inferno territory with this card. It doesn't seem to aggressively pull down voltage / clock though when it's triggered so not sure.
> 
> -Adding more voltage to the core doesn't really seem to improve the max OC at all with my card. I can do +175 core (around 2130 actual) with no +Voltage. Even +0.15V added via EVC2, it doesn't make +200 stable. The power consumption increases by about 80W or so with this voltage increase. I didn't mess with the uncore or memory voltages but I assume if I cranked those I could make this card get up to 800W.
> 
> -The Bykski block + backplate is good. +15C core temp over the water temp. The DIMM6 waterblock on the backplate works well and keeps it far cooler than it was with the aircooler. (EDIT: I need to disclaim though that I built the assembly in the best way possible. Conductonaut on the die. I didn't use the included thermal pads, I used Thermalright 12W/mK 1.5mm ones, generic 4MM ones to make PCB contact with the backplate, drilled the backplate to mount the waterblock with Kryonaut, baked the card in the oven to make everything squishier for screw-tightening, etc etc)


My card drew 750+ watts in Time Spy Extreme test 2. Run that and report back. Also, how is your power balancing between 8-pin #1 and #2? The 1000W BIOS completely solves the power limit issue on my reference card; however, pin #1 was pulling as much as 450W and #2 only about 200W during Time Spy Extreme, so I don't really use that BIOS, afraid to cook my cables... I am using 5mOhm shunts. What value shunts are you using, and can you post a screenshot of your max power draw in GPU-Z while running Time Spy Extreme test 2?
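The 450W on a single 8-pin is the scary part of that imbalance; a quick sketch of the per-conductor current (assuming the usual three 12V conductors per 8-pin PCIe plug):

```python
# Why 450W down one 8-pin is worrying: an 8-pin PCIe plug carries 12V on
# three conductors (assumption used here), so per-wire current adds up fast.

def amps_per_wire(watts: float, volts: float = 12.0, wires: int = 3) -> float:
    """Current carried by each 12V conductor of one PCIe power plug."""
    return watts / volts / wires

print(amps_per_wire(450))  # -> 12.5 A per conductor on the hot plug
print(amps_per_wire(200))  # ~5.6 A per conductor on the lightly loaded plug
```

12.5A per conductor is well past the comfort zone of typical 18AWG PSU cabling, which is why the thread keeps coming back to 16AWG cables for these BIOSes.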


----------



## dante`afk

Nico67 said:


> PSA: CableMod cables are 20AWG, not even 18, or at least the ones I had for my old Seasonic Snow Silent 1000W were. Just found an old extra 6-pin tail I had cut off


I emailed them last month, they said it's 16awg


----------



## Nizzen

Krytecks said:


> If I can add something: at idle I'm at a constant 1920MHz, 0.8620V and 140W... With the Kombustor stress test, I'm at ~1780MHz, ~0.8V, and ~460W


Play a game...


----------



## cheddle

cheddle said:


> These exact measurements were taken *after* I attempted to shunt mod, however I was not able to get decent enough contact on the surface of the 5mohm resistor to achieve any difference. I used a silver pen and that was inadequate.
> 
> Currently (heh) attached to my board are two 8mohm resistors poorly glued to the top of the two 5mohm resistors near the 8-pin connectors. Whilst these have continuity to the 8-pin connectors, the glue I have used is very high resistance and has not affected clock speeds or balance between the two connectors.
> 
> I am going to buy a current clamp tomorrow and will do some testing once Xmas madness is all done and dusted to see if the reported draw reflects reality at all.
> 
> On the 390w BIOS I was seeing the exact same behavior regardless of ****ing around with shunts, being a reported draw of:
> 
> 
> on 8-pin#1, 173w peak (168w avg)
> on 8-pin#2, 155w peak (150w avg)
> on PCIE, 68w peak (65w avg)
> I have an EVGA hybrid kit on the way I am going to butcher and mod to fit this Galax SG, and I will remove the shunts at that time, also giving some pre/post comparison on balance to ensure these aren't in any way skewing the numbers.


Follow-up: using a fairly cheap current clamp (model QM1632), running the 390W Galax BIOS with a reported 380W draw in GPU-Z.

pcie 1: 12.45 amps (165w @ 12.1v (13.6 amps) reported in GPUz)
pcie 2: 6.05 amps (146w @ 12.0v (12.17 amps) reported in GPUz)
Combined: 18.50 amps

my card is running around 1980mhz @ 0.968v

Super strange that pcie 2 is reported so much higher than its reading on the clamp

For some even stranger readings, I have dropped the card to a 240w TDP limit and re-measured

pcie 1: 6.93 amps (100w @ 12.1v (8.26 amps) reported in GPUz)
pcie 2: 1.46 amps (95w @ 12.0v (7.91 amps) reported in GPUz)
Combined: 8.20 amps

I will swap the cables over and re-measure just to make sure there isn't something ****ey with one lead...

EDIT: I swapped the ports on the GPU that they are connected to and the readings follow the cable. I'll swap to a fresh spare set of HX1200 leads I have and report back in a few hours

EDIT2: Also, I am measuring the whole lead, not just the positive pins. I don't know how the **** to use this thing, clampy clamp go buurrr
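The clamp-vs-GPU-Z comparison above is just Ohm's law. A small sketch re-running the posted numbers, assuming the rail voltages GPU-Z reported; `clamp_watts` is a hypothetical helper, not a real tool:

```python
# Sanity-checking the clamp readings posted above: clamp amps times rail
# voltage should land near GPU-Z's reported wattage if the card's
# current sensing is honest. All figures are the ones from the post.

def clamp_watts(amps: float, volts: float) -> float:
    """Power implied by a clamp-meter reading on the 12V conductors."""
    return amps * volts

readings = [
    # (label, clamp_amps, rail_volts, gpuz_watts)
    ("pcie 1 @ 390W BIOS", 12.45, 12.1, 165),
    ("pcie 2 @ 390W BIOS",  6.05, 12.0, 146),
    ("pcie 1 @ 240W cap",   6.93, 12.1, 100),
    ("pcie 2 @ 240W cap",   1.46, 12.0,  95),
]

for label, amps, volts, gpuz in readings:
    w = clamp_watts(amps, volts)
    print(f"{label}: clamp implies {w:6.1f} W, GPU-Z reports {gpuz} W "
          f"(GPU-Z higher by {gpuz - w:+.1f} W)")
```

One caveat worth noting: if any ground-return wires sit inside the clamp jaw along with the 12V wires, their opposing current subtracts from the reading, which would explain clamp numbers far below the reported draw.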


----------



## bmgjet

motivman said:


> My card drew 750+ watts in Time Spy Extreme test 2. Run that and report back. Also, how is your power balancing between 8-pin #1 and #2? The 1000W BIOS completely solves the power limit issue on my reference card; however, pin #1 was pulling as much as 450W and #2 only about 200W during Time Spy Extreme, so I don't really use that BIOS, afraid to cook my cables... I am using 5mohm shunts. What value shunts are you using, and can you post a screenshot of your max power draw in GPU-Z while running Time Spy Extreme test 2?



You could always bridge the plugs with some wires between all the 12Vs.














cheddle said:


> Follow-up: using a fairly cheap current clamp (model QM1632), running the 390W Galax BIOS with a reported 380W draw in GPU-Z.
> ....
> 
> I will swap the cables over and re-measure just to make sure there isn't something ****ey with one lead...



Make sure you have separated the GND wires out from the 12V ones when you put the clamp around them.


----------



## Nizzen

xrb936 said:


> Anybody bought EKWB Strix 30 series backplate? It says there are 9 thermal pads but I only found 6 in the box....


You need to cut them into smaller pieces so they fit the components.


----------



## motivman

bmgjet said:


> You could aways bridge the plugs with some wires between all the 12Vs.
> View attachment 2471115
> 
> 
> 
> 
> 
> 
> 
> 
> Make sure you have seperated the gnd wires out from the 12V ones when you put the clamp around them.


Please go into more detail... will this solve the power balancing on the 8 pins?


----------



## cheddle

bmgjet said:


> Make sure you have separated the GND wires out from the 12V ones when you put the clamp around them.


Yeah, I will sacrifice some cables to do this.


----------



## defiledge

dr/owned said:


> So slightly disappointing conclusion:
> 
> (for Google, this is the Asus 3090 TUF OC )
> 
> -The Kingpin 1000W BIOS basically makes no difference with my card being shunted. From the wall there's maybe 30W more total consumption (around 600-630W) in Port Royal but it sacrifices the second DP from the top of the card. On paper though this would be something like 2000W power limit with the shunt...where the VRM would collapse before it got there anyways.
> 
> -Port Royal Score didn't change by any meaningful amount.
> 
> -GPUz stops flagging Pwr limit and stays at VRel. I'm guessing there's some minor shunt/current sensing going on outside the 5 big ones that stops you from going into disco-inferno territory with this card. It doesn't seem to aggressively pull down voltage / clock though when it's triggered so not sure.
> 
> -Adding more voltage to the core doesn't really seem to improve the max OC at all with my card. I can do +175 core (around 2130 actual) with no +Voltage. Even +0.15V added via EVC2, it doesn't make +200 stable. The power consumption increases by about 80W or so with this voltage increase. I didn't mess with the uncore or memory voltages but I assume if I cranked those I could make this card get up to 800W.
> 
> -The Bykski block + backplate is good. +15C core temp over the water temp. The DIMM6 waterblock on the backplate works well and keeps it far cooler than it was with the air cooler. (EDIT: I need to disclaim though that I built the assembly in the best way possible. Conductonaut on the die. I didn't use the included thermal pads, I used Thermalright 12W/mK 1.5mm ones, generic 4mm ones to make PCB contact with the backplate, drilled the backplate to mount the waterblock with Kryonaut, baked the card in the oven to make everything squishier for screw-tightening, etc etc)


Why drill the backplate? I thought it came with screw holes


----------



## bmgjet

motivman said:


> Please go into more detail... will this solve the power balancing on the 8 pins?


The balance is still there, since that part is measured after the shunts.

But what it will do is pull the same power over both plugs. So instead of plug 1 pulling 400W and plug 2 pulling 200W, both will pull 300W each. All the GNDs are already tied together, so you don't need to worry about them.

That's a somewhat tidy way to do it. I've also seen people cut up a 2x 6-pin to 1x 8-pin adapter and solder the wires straight onto it, so in a 2-plug card's case you'd have 2x 8-pin and then 2x 6-pin.
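The equal-share claim above can be sketched as a simple current divider: once the plugs are tied together at the card, the two cable runs are parallel resistances and share current inversely to their resistance. A rough sketch; the milliohm values are illustrative guesses for cable-run resistance, not measurements, and `split_current` is a made-up helper:

```python
# Current-divider sketch of why bridging the 12V pins equalizes the
# plugs: two tied-together cable runs share current in inverse
# proportion to their resistance (units cancel, so milliohms are fine).

def split_current(total_a: float, r1_mohm: float, r2_mohm: float):
    """Return (i1, i2): amps through each of two parallel cable paths."""
    i1 = total_a * r2_mohm / (r1_mohm + r2_mohm)
    return i1, total_a - i1

# 50 A total (~600 W at 12 V) over two identical cable runs:
print(split_current(50, 12, 12))  # -> (25.0, 25.0)
# A longer / higher-resistance second run carries proportionally less:
print(split_current(50, 12, 18))  # -> (30.0, 20.0)
```

With the bridge in place, any remaining imbalance comes from cable length and contact resistance rather than from how the card's VRM phases happen to load each input.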


----------



## Nizzen

defiledge said:


> Why drill the backplate? I thought it come with screwholes


The DIMM6 waterblock on the backplate


----------



## ttnuagmada

Aristotle said:


> What's the consensus on good waterblocks for the Strix 3090? EK, Alphacool, Bykski? Will the VRAM chips on the backside be cooled properly when switching to a waterblock?


I like the EK block. Seems like an upgrade in quality compared to the 1080 Ti blocks I came from. The backplate doesn't get hot at all.


----------



## geriatricpollywog

ttnuagmada said:


> I like the EK block. Seems like an upgrade in quality compared to the 1080 Ti blocks I came from. The backplate doesn't get hot at all.


Your backplate is not making contact then. What are your memory temps?


----------



## jomama22

Nizzen said:


> I'm dumb LOL
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 22 401 in Time Spy
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Yes, you are:

https://www.3dmark.com/spy/16713397

And I didn't need chilled water to do it either.

AMD has 0 issues pacing the 3090 with Intel.


----------



## jomama22

0451 said:


> Your backplate is not making contact then. What are your memory temps?


I have a 3090 Strix with block and backplate and definitely have issues with the rear plate making contact with the memory. I put 2mm pads on them instead of the 1.5mm in the instructions and it helped a bit. I am planning on just drilling a hole at the top, to the left of the NVLink connector, where there is a screw available through the PCB to the block, and anchoring the backplate more there. The backplate will bend from the NVLink connector all the way to the top-left backplate screw since there isn't enough support to counter the bottom screws.

There is no memory temp sensor to track, so there's no way of knowing what's going on there. Don't have a thermal gun either. I do have a RAM block drilled and tapped to the backplate though, so that's fun lol.


----------



## geriatricpollywog

jomama22 said:


> I have a 3090 Strix with block and backplate and definitely have issues with the rear plate making contact with the memory. I put 2mm pads on them instead of the 1.5mm in the instructions and it helped a bit. I am planning on just drilling a hole at the top, to the left of the NVLink connector, where there is a screw available through the PCB to the block, and anchoring the backplate more there. The backplate will bend from the NVLink connector all the way to the top-left backplate screw since there isn't enough support to counter the bottom screws.
> 
> There is no memory temp sensor to track, so there's no way of knowing what's going on there. Don't have a thermal gun either. I do have a RAM block drilled and tapped to the backplate though, so that's fun lol.
> 
> View attachment 2471120


This was at stock clocks. At least I know the backplate is making contact but damn that's hot. I am thinking about running a threadripper block with the jet plate removed to reduce restriction. I would just apply a tiny dab of superglue, since I don't plan on drilling into the backplate.


----------



## jomama22



0451 said:


> This was at stock clocks. At least I know the backplate is making contact but damn that's hot. I am thinking about running a threadripper block with the jet plate removed to reduce restriction. I would just apply a tiny dab of superglue, since I don't plan on drilling into the backplate.
> 
> View attachment 2471122


51* isn't really too bad tbh. You should try to get the thermocouple next to one of the groups of 4 memory modules and see what it reads there. I would imagine it would be much hotter.

That's a good idea with the Threadripper block, didn't even think of that. May be a bit more restrictive and costly than necessary though. I was just looking for something cheap with not much flow impact. That's a 6x RAM cooler from Bitspower on mine.


----------



## dr/owned

bmgjet said:


> The balance is still there since that parts measured after the shunts.
> 
> But what that will do is pull the same power over both plugs.
> So instead of having plug 1 pulling 400W and plug 2 pulling 200W both will pull 300W each, All the GNDs are already tied together so you dont need to worry about them.
> 
> Thats some what of a tidy way to do it. Also seen people cut up a 2X 6 plug to 1X 8plug adapter and solder the wires straight onto it. So in a 2 plug cards case youd have 2X 8pin and then 2X 6pin.


Unfortunately this picture is through the waterblock, but I balanced my two 8-pins by tying the three +12V pins of each connector together with "bus bars" of copper strands twisted and soldered together (5 strands from an 8AWG wire that had 19 of them), then bridging those bus bars with each other via a 14AWG wire. It's now physically impossible for the connectors to be imbalanced before the shunts.

The red wire is 18awg and goes around to the PCIe slot shunt and provides 100% of that power. The slot fingers are taped off.

Those are Mod One cables with 16awg wire.










The hottest spot in this whole setup seems to be the PCB power plane right next to the power connectors. I have a 4mm thick thermal pad touching the backplate near there to heatsink it a little.


----------



## jomama22

dr/owned said:


> Unfortunately this picture is through the waterblock, but I balanced my two 8-pins by tying the three +12V pins of each connector together with "bus bars" of copper strands twisted and soldered together (5 strands from an 8AWG wire that had 19 of them), then bridging those bus bars with each other via a 14AWG wire. It's now physically impossible for the connectors to be imbalanced before the shunts.
> 
> The red wire is 18awg and goes around to the PCIe slot shunt and provides 100% of that power. The slot fingers are taped off.
> 
> Those are Mod One cables with 16awg wire.
> 
> View attachment 2471126
> 
> 
> The hottest spot in this whole setup seems to be the PCB power plane right next to the power connectors. I have a 4mm thick thermal pad touching the backplate near there to heatsink it a little.


May want to measure how much the PCIe fingers would actually pull if you untaped them in this setup. It may be beneficial to at least have some power going through the PCIe slot to relieve the plugs from carrying everything on just the two connectors. Granted, if it's under 650 watts or so it shouldn't be too much of an issue, but I can imagine anything greater than that will really start pushing those Molex connectors to the limit.

And honestly, at this point in what you have done, you could just hook up a third pin to the rail you made. It would go through those shunts but you could just add more in parallel or desolder and put lower mohm shunts on to take advantage of the 3rd pin more.

It also looks like you have an evc2 connected. I get mine in the mail in just a few days (the sx model that is, since that's what Elmore is selling now). Can't wait to try it out on the strix. How is it working for you?


----------



## SoldierRBT

3090 KPE 620W avg. at 1.10v locked. Max temp: 52C Score: 15,710








I scored 15 710 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## dr/owned

jomama22 said:


> May want to measure how much the pcie fingers would actually pull if you untaped them in this setup. May be beneficial to at least have some power going through the pcie to alleviate the plugs from drawing everything on just the 2 pins. Granted, if it's under 650 watts or so it shouldn't be too much an issue, but I can imagine anything greater than that will really start pushing those molex connectors to the limit.
> 
> And honestly, at this point in what you have done, you could just hook up a third pin to the rail you made. It would go through those shunts but you could just add more in parallel or desolder and put lower mohm shunts on to take advantage of the 3rd pin more.
> 
> It also looks like you have an evc2 connected. I get mine in the mail in just a few days (the sx model that is, since that's what Elmore is selling now). Can't wait to try it out on the strix. How is it working for you?


I put extra length of wire in the PCIe line so I could put a current clamp on it... it's right at 7A. The absolute max I've been able to pull from the wall on the stock BIOS is 730W with no OC on the CPU at all. I'm not seeing any indications (IR camera) that the 8-pin connectors are even getting really warm, let alone to the point of melting something. They're also brand new, so the connectors are going to be very tight-fitting.

I'll flash the Kingpin BIOS again...seems like the TUF has some sort of restriction in the stock BIOS that caps it out at around 650W and isn't one of the big 5 shunts so that's limiting the true potential max power consumption when it starts pulling voltage. 

The EVC2 works but the software is a bit confusing in places. The output voltage seems to completely disagree with software monitoring and I can't correlate the two, and the DrMOS temp report is always -68C (it takes -75 plus the 7 it gets from the TEMP registers of the VRM). I didn't solder voltage check points on the PCB so I can't see what's truly happening. Right now I'm just using it to turn off all the OCP and increase the switching frequencies.


----------



## Sheyster

Falkentyne said:


> Corsair Ax1600i if you're rich.
> Seasonic Prime PX-1000 if you're not.


The EVGA SN P2 1200 is one of only a few power supplies to ever achieve a perfect 10 score at JonnyGuru. Good luck finding one though. I almost bought one on eBay today but decided to wait for more stock availability and a lower price. I want a little more headroom than my current 1000w PSU offers, and platinum efficiency will be nice as well.


----------



## Zogge

Anybody with the AX1600i who knows the AWG size of the 3 pin connectors ?


----------



## reflex75

SoldierRBT said:


> 3090 KPE 620W avg. at 1.10v locked. Max temp: 52C Score: 15,710
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 710 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Nice score, especially with a 48°C average.
Great chip you have!


----------



## motivman

Falkentyne said:


> Why not just get the actual Kingpin card at that rate?
> Also the MSI cards do have fuses on them, although that shouldn't matter with the Kingpin Bios--but I am not 100% sure about that.


Dang, did not realize the MSI had fuses. So Max I can safely pull before Fuse blows is about 820W? And If that Ever happens, I can just bridge with solder?


----------



## Spiriva

Zogge said:


> Anybody with the AX1600i who knows the AWG size of the 3 pin connectors ?












"Corsair gives you lots of cables and connectors, since 1600W has to be distributed among them. Six of the PCIe connectors are hosted on dedicated cables, while the other four are on two cables using a typical daisy-chain scheme. There are 16 SATA connectors and nine Molex ones, along with a couple of FDD adapters for anyone still requiring Berg connectors.

Cable length is satisfactory, though the distance between peripheral connectors is too small. Corsair should leave at least 15cm, especially between the four-pin Molex connectors.

As expected, the PCIe cables have some wires that are thicker than typical 18AWG ones for lower voltage drops under high loads. Both EPS connectors consist of only 16AWG wires, while the ATX connector also uses some 16AWG wires."













Corsair AX1600i PSU Review


It is tough to improve something that it is already excellent, so Corsair's desire to build an even better PSU than its current flagship, the AX1500i, is noteworthy. More impressive, the AX1600i succeeds in setting new performance records.




www.tomshardware.com


----------



## Zogge

Spiriva said:


> View attachment 2471143
> 
> 
> "Corsair gives you lots of cables and connectors, since 1600W has to be distributed among them. Six of the PCIe connectors are hosted on dedicated cables, while the other four are on two cables using a typical daisy-chain scheme. There are 16 SATA connectors and nine Molex ones, along with a couple of FDD adapters for anyone still requiring Berg connectors.
> 
> Cable length is satisfactory, though the distance between peripheral connectors is too small. Corsair should leave at least 15cm, especially between the four-pin Molex connectors.
> 
> As expected, the PCIe cables have some wires that are thicker than typical 18AWG ones for lower voltage drops under high loads. Both EPS connectors consist of only 16AWG wires, while the ATX connector also uses some 16AWG wires."
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Corsair AX1600i PSU Review
> 
> 
> It is tough to improve something that it is already excellent, so Corsair's desire to build an even better PSU than its current flagship, the AX1500i, is noteworthy. More impressive, the AX1600i succeeds in setting new performance records.
> 
> 
> 
> 
> www.tomshardware.com


Tusen tack / many thanks.


----------



## Zogge

Also, I have decided to reapply thermal pads and paste on my EKWB Strix 3090. Which thermal pads of better quality/thickness do you recommend? Also, should I use 2mm thickness on all of them to be safe? Is 11W/mK good enough?

(I will scan the thread further back too)


----------



## Carls_Car

Can anyone tell me if these voltages look odd? Is my PSU giving me weak output?


----------



## Nico67

dante`afk said:


> I emailed them last month, they said it's 16awg


They may vary between PSU kits to match the manufacturer's spec, but even the Seasonic originals are 18AWG.
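For rough context on the gauge numbers being debated, here is a sketch using approximate chassis-wiring ampacity figures from common AWG tables. Treat the numbers as ballpark assumptions; insulation temperature rating, bundling, and the Mini-Fit pins themselves (usually the weaker link) matter more in practice, and `ampacity_used` is a made-up helper:

```python
# Approximate copper ampacity per wire, chassis-wiring column of common
# AWG tables (ballpark values, not a connector or safety rating).
CHASSIS_AMPACITY_A = {20: 11.0, 18: 16.0, 16: 22.0}

def ampacity_used(awg: int, wires_12v: int, load_w: float,
                  volts: float = 12.0) -> float:
    """Fraction of per-wire ampacity used by a cable with N 12V wires."""
    amps_per_wire = load_w / volts / wires_12v
    return amps_per_wire / CHASSIS_AMPACITY_A[awg]

# 300 W through one 8-pin connector (three 12V conductors):
for awg in (20, 18, 16):
    print(f"{awg} AWG: {ampacity_used(awg, 3, 300):.0%} of ampacity per wire")
```

The point of the sketch: at the loads this thread deals with, the step from 20AWG down to 16AWG roughly halves the per-wire loading, which is why the gauge of aftermarket cables keeps coming up.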


----------



## Biscottoman

jomama22 said:


> Yes, you are:
> 
> https://www.3dmark.com/spy/16713397
> 
> And I didn't need chilled water to do it either.
> 
> Amd has 0 issues pacing the 3090 with intel.


Wow pretty dope score and GPU clocks, what's your cooling setup?


----------



## pat182

f-m-l, *STRIX ON AIR 520 BIOS*, my 850W PSU is shutting down my PC when I want to OC the VRAM lol... gonna have to wait for Intel Gen 11 for a complete build from the ground up with a 1200W PSU I guess 

good news is I'm now in the top 1% with a regular Strix with only a BIOS mod hahaha


----------



## ssti

Can anyone with an RTX 3090 KPE take a picture of the PCB? Also interested to know card performance in Time Spy/Extreme, Wild Life and Blue Room, not only Port Royal.
Would be nice to see Vince or any hwbot elite running Wild Life


----------



## menko2

I'm in an emergency here.

I'm doing a re-paste on my 3090 Strix and I have a problem with the thickness of the thermal pads for the VRAM.

I have Thermalright Odyssey thermal pads (12.8W/mK) of 0.5, 1.0 and 1.5mm.

I haven't gotten to the GPU yet, but on the backplate I think the 1.5mm doesn't put enough pressure, and I already threw away the foam the VRAMs had (it was wasted opening the card).

I could stack two thermal pads to make 2.0mm or 2.5mm thickness but I'm not sure if it's the "right" decision.

I can't use the card until I solve this and here it will take 5 days to get 2mm or 3mm thermal pads. So no PC at all.

Anyone know if 1.5mm will be thick enough, or any other solution?


----------



## Belcebuu

Hi guys, is the 3090 FE a good choice? Is there any BIOS you can flash to get extra perf?


----------



## Sheyster

pat182 said:


> f-m-l, *STRIX ON AIR 520 BIOS*, my 850W PSU is shutting down my PC when I want to OC the VRAM lol... gonna have to wait for Intel Gen 11 for a complete build from the ground up with a 1200W PSU I guess
> 
> good news is I'm now in the top 1% with a regular Strix with only a BIOS mod hahaha
> View attachment 2471171


We have similar builds. My 1000W PSU is doing fine but I'm also looking to upgrade to 1200W or possibly 1300W. I now feel I need the headroom, and I'm also looking for better efficiency with a new platinum-rated PSU.


----------



## Falkentyne

menko2 said:


> I'm in an emergency here.
> 
> I'm doing a re-paste on my 3090 Strix and I have a problem with the thickness of the thermal pads for the VRAM.
> 
> I have Thermalright Odyssey thermal pads (12.8W/mK) of 0.5, 1.0 and 1.5mm.
> 
> I haven't gotten to the GPU yet, but on the backplate I think the 1.5mm doesn't put enough pressure, and I already threw away the foam the VRAMs had (it was wasted opening the card).
> 
> I could stack two thermal pads to make 2.0mm or 2.5mm thickness but I'm not sure if it's the "right" decision.
> 
> I can't use the card until I solve this and here it will take 5 days to get 2mm or 3mm thermal pads. So no PC at all.
> 
> Anyone know if 1.5mm will be thick enough, or any other solution?


You need to measure the stock pads with calipers. No one has done this, so someone has to be the first; that's how the community benefits, unfortunately.
You can also buy large squares of 2mm thermal pads.









Thermalright Thermal Pad 120x120mm 12.8 W/mK 0.5mm 1.0mm 1.5mm 2.0mm - PC Components Cooling & Tools - AliExpress






www.aliexpress.com


----------



## Krytecks

Falkentyne said:


> What are you talking about?


I was stress testing at 80% for a few minutes, then blackout, and it won't boot anymore 🤕


----------



## Sheyster

Belcebuu said:


> Hi guys, is it the 3090 fe s good choice? Is there any bios you can use to flash it and get extra perf?


If you're gaming and don't intend to water cool and/or shunt it for benching, it's a great card! I owned one for 2 months. No BIOS is available but keep in mind it has up to a 400w power limit.


----------



## Nizzen

Krytecks said:


> I was stress testing at 80% for a few minutes, then blackout, and it won't boot anymore 🤕


Why stress test with a "power virus"? Why not just play a game? 🤔


----------



## Sheyster

Nizzen said:


> Why stress test with a "power virus"? Why not just play a game? 🤔


Personally I only use SP 4K and a few games to test. I am tempted to install Timespy Extreme though.


----------



## geriatricpollywog

Krytecks said:


> I was stress testing at 80% for a few minutes, then blackout, and it won't boot anymore 🤕


Can you share more details to help prevent other cards from dying? There are people who think it’s safe to run the 1000w bios 24/7.


----------



## pat182

Sheyster said:


> We have similar builds. My 1000w PS is doing fine but I'm also looking to upgrade to 1200w or possibly 1300w. I now feel I need the headroom and also looking for better efficiency with a new platinum rated PS.


Yeah, putting my PC in power mode makes the CPU lock its clock at 5GHz, so around 200 watts, and with the GPU munching 550 watts or more plus all the goodies, 1000 watts seems to be the minimum
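The headroom reasoning above can be put in rough numbers. A back-of-envelope sketch; the component wattages are this thread's own examples, and the ~80% sustained-load target is a rule-of-thumb assumption (to leave margin for transient spikes), not a PSU specification:

```python
# Ballpark system power budget behind the "1000 W seems to be the
# minimum" remark. Component figures mirror the examples in this
# thread; the 0.8 loading factor is a rule-of-thumb assumption.
budget_w = {
    "GPU (520W BIOS, OC)": 550,
    "CPU (5 GHz locked)": 200,
    "board / RAM / fans / pump / drives": 100,
}

sustained = sum(budget_w.values())
# Keep sustained draw near ~80% of the PSU rating so transient spikes
# (which Ampere cards are known for) don't trip over-current protection.
recommended_psu = sustained / 0.8

print(f"sustained ~{sustained} W, recommended PSU rating ~{recommended_psu:.1f} W")
```

On those assumptions an 850W unit sits at its rated limit under sustained load, which matches the shutdown reports in this thread.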


----------



## des2k...

pat182 said:


> f-m-l, *STRIX ON AIR 520 BIOS*, my 850W PSU is shutting down my PC when I want to OC the VRAM lol... gonna have to wait for Intel Gen 11 for a complete build from the ground up with a 1200W PSU I guess
> 
> good news is I'm now in the top 1% with a regular Strix with only a BIOS mod hahaha
> View attachment 2471171


I just blew my EVGA 850W; I was at 570W + 100W CPU for about 4 hours. Everything seems to be working. I would not run Ampere on anything below 1000W when OC'd.

Got a Thermaltake 1200W, good reviews + 16AWG cables. Installing it now.


----------



## jura11

Krytecks said:


> I was stress testing at 80% for a few minutes, then blackout, and it won't boot anymore 🤕


Assuming the PSU is okay there? Is the PC booting, or won't it boot at all? Did you try a different PSU?

Hope this helps 

Thanks, Jura


----------



## inedenimadam

Hey guys. Read up a bit through the posts. I am expecting delivery of a Zotac Trinity this afternoon, and a waterblock a few days later. I read that the Palit 350/365 BIOS is compatible but I will be losing a DP. I don't mind losing the DP, but if there is a better BIOS, or potentially one with the same power limits that keeps the DP, I would be a happy camper. Any suggestions? Should I just plan on doing the Palit BIOS?


----------



## jomama22

menko2 said:


> I'm in an emergency here.
> 
> I'm doing a re-paste on my 3090 Strix and I have a problem with the thickness of the thermal pads for the VRAM.
> 
> I have Thermalright Odyssey thermal pads (12.8W/mK) of 0.5, 1.0 and 1.5mm.
> 
> I haven't gotten to the GPU yet, but on the backplate I think the 1.5mm doesn't put enough pressure, and I already threw away the foam the VRAMs had (it was wasted opening the card).
> 
> I could stack two thermal pads to make 2.0mm or 2.5mm thickness but I'm not sure if it's the "right" decision.
> 
> I can't use the card until I solve this and here it will take 5 days to get 2mm or 3mm thermal pads. So no PC at all.
> 
> Anyone know if 1.5mm will be thick enough, or any other solution?


One page back I wrote up what needs to be done. Just use 2mm pads for them for now. I'll be drilling a hole in the backplate where there is a screw anchor on the block to the left of the NVLink connector. The reason it's not making good contact is that the backplate bends when fully tightened down on the bottom screws.

I'll just use a spacer between the backplate and PCB for that newly drilled hole. After that, I wouldn't be surprised if the 1.5mm makes good contact. The 2.0mm now is just to fill the gaps left by the bending backplate and to raise the bottom half slightly so it doesn't bend as aggressively.


----------



## arvinz

pat182 said:


> f-m-l, *STRIX ON AIR 520 BIOS*, my 850W PSU is shutting down my PC when I want to OC the VRAM lol... gonna have to wait for Intel Gen 11 for a complete build from the ground up with a 1200W PSU I guess
> 
> good news is I'm now in the top 1% with a regular Strix with only a BIOS mod hahaha
> View attachment 2471171


Could you share your Afterburner settings? I'm also on a Strix on air and just about to dive into the 520w bios. Would love to understand what settings you've chosen to see those results. Also fan settings, etc.


----------



## ttnuagmada

0451 said:


> Your backplate is not making contact then. What are your memory temps?


Strix doesn't have memory temp sensors. When I say "doesn't get hot at all" I mean that it stays under 45C. Pretty sure I couldn't maintain +1200 memory 24/7 if there were no contact.


----------



## Falkentyne

Krytecks said:


> I was stress testing at 80% for a few minutes, then blackout, and it won't boot anymore 🤕


Can you please test the card in another computer to make sure it wasn't the power supply or PCIe cables that decided it was time for the magic smoke?
And take note: you said you were having temp problems and were at 61C on a custom loop, right? Are you sure your hotspots were properly padded and fully accounted for? That 1000W VBIOS has no thermal protections. With any bad contact, something will happily go past 120C and just fry (with protections, the card will just black screen with 100% fan and shut down at the #thermtrip point).


----------



## Slackaveli

Falkentyne said:


> Modern Warfare/Warzone already did this exact thing a long time ago. But seems no one here except me plays that game.


I do, and you're exactly right; although CP77 required me to drop my memory OC as well, my Warzone core offset works.


----------



## Slackaveli

mirkendargen said:


> Metro Exodus and Control also. The common denominator is the ray tracing/DLSS double whammy. I've been stability testing my daily-driver overclock with the Metro Exodus benchmark all along.


I did see you mention that WAY back there. I just caught up on this thread yesterday. I'm here, fellaz.


----------



## Slackaveli

dante`afk said:


> don't destroy my dreams, I switched from a year of Intel at 5.2GHz with DDR4 4700 CL17 to AMD


Why? You could have just dropped in Rocket Lake next month for a super cheap 20% upgrade...


----------



## xrb936

Nizzen said:


> You need to cut them into smaller pieces so they fit the components.


The manual said there are 9 pieces. And 6 pieces are definitely not enough.


----------



## pat182

des2k... said:


> I just blew my EVGA 850W; I was at 570W + 100W CPU for about 4 hours. Everything seems to be working. I would not run Ampere on anything below 1000W when OC'd.
> 
> Got a Thermaltake 1200W, good reviews + 16AWG cables. Installing it now.


Yea, it's an EVGA G3 850 I got, won't push it


----------



## Nizzen

xrb936 said:


> The manual said there are 9 pieces. And 6 pieces are definitely not enough.


Ok


----------



## pat182

arvinz said:


> Could you share your Afterburner settings? I'm also on a Strix on air and just about to dive into the 520w bios. Would love to understand what settings you've chosen to see those results. Also fan settings, etc.


I did +160 core +1000 mem before my PSU shut down; also, ambient was -4C lol

I was waiting for a -20C day, but with that PSU limit it won't change much

Probably could do 15k on air with a new PSU, story to follow...


----------



## Falkentyne

Slackaveli said:


> Why? You could have just dropped in Rocket Lake next month for a super cheap 20% upgrade...


Depends on what motherboard.
Only a few boards would be fully PCIe 4.0 compliant. I think the EVGA Dark, Maximus XII Extreme and Apex, and a couple of Gigabyte's and MSI's high-end boards.
It's going to be interesting also what chipset and CPU changes there are. Going from 10 cores to 8 cores is going to raise some eyebrows, but the only game that I've seen that can even use all 20 threads of a 10900k is Death Stranding. Benchmarks will be very interesting to see on that game.


----------



## Slackaveli

HyperMatrix said:


> I've noticed other weird behavior. Like I will be on the sidewalk. I'll turn around and look at the building for a few seconds. And quickly do a 180 degree. Even cars that are like 2 meters away from me are massively LOD reduced. And not only that....sometimes the car will be completely different. So apparently in different scenarios it can completely delete and respawn a car into an entirely different one. In your apartment complex I've walked down a hallway and ran into 3 identical characters. One right after the other. Same build/clothing/etc.
> 
> ent guides for your specific card, and ordering all new pads (with the proper thickness) and just redoing the whole thing. 70C thermal throttling means something is heating up far beyond normal operating range.


I use Nvidia Inspector with max negative LOD (-3)


----------



## itssladenlol

Thinking About buying a zotac 3090 trinity (cause its cheap) and putting a heatkiller V on it with a Mora 420, Any downsides getting the zotac over a gainward or palit? 
Need reference pcb for heatkiller V


----------



## arvinz

pat182 said:


> I did +160core +1000 mem, before my psu shut down, also ambient was -4c lol
> 
> I was waiting for a -20c day but with that psu limit wont change much


Nice, I take it you have the PL and temp limit sliders maxed to the right? And the core voltage slider as well? Are you running fan speed at 100% during benches? I'm on the stock BIOS right now and hitting close to 14K with some overclocking, but it looks like the higher-wattage BIOSes are the way to get above that?


----------



## HyperMatrix

Slackaveli said:


> I use Nvidia Inspector with max negative LOD (-3)


I did that last week. Now if only that would stop things from spawning or despawning right in front of my eyes. Haha. Kill a single cop in a secluded area? 10 cops literally spawn right in front of you out of nowhere. Really wish they had delayed the game another year or two and done it right. Absolutely gorgeous game but you can tell it was rushed.


----------



## des2k...

itssladenlol said:


> Thinking About buying a zotac 3090 trinity (cause its cheap) and putting a heatkiller V on it with a Mora 420, Any downsides getting the zotac over a gainward or palit?
> Need reference pcb for heatkiller V


Zotac is fine, got it at MSRP; you can push it 500W+ with a waterblock. Depends on luck too, needs insane voltage to push to 2.1+

I'm looping Timespy extreme again, want to keep it under 1v 2.1core. It goes higher like 2.19 with no RTX in games and port royal just around 2.13 but I prefer worst case + 1 profile in afterburner.


----------



## Sheyster

pat182 said:


> Yea, putting my PC in power mode makes the CPU lock clocks at 5GHz, so around 200 watts, and the GPU munching 550 watts or more + all the goodies; 1000 watts seems to be the minimum


I'm actually considering a 1600w model now since the EVGA P2 1200 is not available anywhere. It was my first choice.


----------



## des2k...

Slackaveli said:


> I do, and you're exactly right; although CP77 required me to drop my memory OC as well, my Warzone core offset works.


You're better off testing with Timespy Extreme. CP2077 crashed for me at +500; dropped to +466, which was solid all day. Now +466 crashes Timespy in 5 mins 🙄


----------



## Krytecks

Falkentyne said:


> Can you please test the card in another computer to make sure it wasn't the power supply or PCIE cables that decided it was time for the magic smoke?
> And take note: you said you were having temp problems and were at 61C on a custom loop, right? Are you sure your hotspots were properly padded and fully accounted for? That 1000W vBIOS has no thermal protections. Any bad contact and something will happily go past 120C and just fry (with protections, the card would just black screen with 100% fan and shut down at the thermtrip point).


61° after hours of stress testing with different BIOSes, so I think it was OK at least for the core itself, but the backplate was insanely hot; I mostly used HWMonitor and GPU-Z for temps and didn't see anything very high... I tested both cables on my 2x8-pin and both work on one connector, but the other one makes the PC instantly shut down, so I'm pretty sure it's some kind of electrical short on one of the two 8-pins of the GPU. With one cable I can boot. I bought a GT 710 that should arrive tomorrow, so I will try to flash my GPU's factory BIOS and then test it again; I will try to send it back to where I bought it for maybe a trade or repair...


----------



## Falkentyne

Krytecks said:


> 61° after hours of stress testing with different BIOSes, so I think it was OK at least for the core itself, but the backplate was insanely hot; I mostly used HWMonitor and GPU-Z for temps and didn't see anything very high... I tested both cables on my 2x8-pin and both work on one connector, but the other one makes the PC instantly shut down, so I'm pretty sure it's some kind of electrical short on one of the two 8-pins of the GPU. With one cable I can boot. I bought a GT 710 that should arrive tomorrow, so I will try to flash my GPU's factory BIOS and then test it again; I will try to send it back to where I bought it for maybe a trade or repair...


Wait, you can boot your "dead" GPU with one 8 pin?
So the GPU is not dead, then.
So all you did was fry the PCIE connector?
Because you're saying two different things in your post and they conflict with each other.
1) you tested both cables on your 2*8 pin and both work (what??)
The second makes your PC instant shutdown...are you talking about the GPU's power connector itself?

And how were you able to boot with one cable plugged in? 
If you can boot with one cable plugged in, why not just flash the BIOS that way?


----------



## geriatricpollywog

ssti said:


> Can anyone with RTX 3090 KPE make a picture of the pcb? Interested also to know card performance on Time Spy/Extreme , Wild Life and Blue Room not only the PR.
> Will be nice to see Vince or any hwbot-elite running WILD LIFE


I'll need help turning down LOD first.

Edit: nevermind, I see it is a CPU limited test. I'll post a score later


----------



## Krytecks

Falkentyne said:


> Wait, you can boot your "dead" GPU with one 8 pin?
> So the GPU is not dead, then.
> So all you did was fry the PCIE connector?
> Because you're saying two different things in your post and they conflict with each other.
> 1) you tested both cables on your 2*8 pin and both work (what??)
> The second makes your PC instant shutdown...are you talking about the GPU's power connector itself?
> 
> And how were you able to boot with one cable plugged in?
> if you can boot with one cable plugged in, why not just flash the bios that way?


Yes, one of the GPU connectors (if plugged in with any cable to any port of my PSU) makes the PC instantly shut down; the other one lets the PC boot (but no display, of course). I tried another PCIe connector but it's the same result.


----------



## Zogge

Sheyster said:


> I'm actually considering a 1600w model now since the EVGA P2 1200 is not available anywhere. It was my first choice.


I had a Seasonic Focus Gold 1000W; the 3090 with fans and everything in watercooling plus the 10980XE OC crashed it to power-off. 1097W was the last log entry.
I got the AX1600i after that.


----------



## dante`afk

could someone with the KPE 1000w XOC bios run SP with these settings and let me know if they still see the green perfcap in gpuz?












Nico67 said:


> They may vary between PSU kits to keep the same spec as manufacturers, but even the seasonic original is 18awg.
> 
> View attachment 2471148


Possibly; it's 16AWG for the Corsair cables I have from CableMod.



Slackaveli said:


> Why? You could have just dropped in Rocket Lake next month for a super cheap 20% upgrade...


I don't think they'll beat the current AMD IPC, but if they do, I'll switch again.


----------



## mismatchedyes

Any downsides to the 520w bios on the Trio? Currently have the 500w bios and it's working well but I understand on the strix the middle fan speed is reduced on the 520w bios. Is this the same with the trio or can I run the 520 with no downsides ? Thank you.


----------



## SoldierRBT

3090 KPE 0.950v undervolt 2130MHz - Score: 15,012



https://www.3dmark.com/pr/696376


----------



## gecko991

Nice.


----------



## arvinz

SoldierRBT said:


> 3090 KPE 0.950v undervolt 2130MHz - Score: 15,012
> 
> 
> 
> https://www.3dmark.com/pr/696376


Is the RTX Classified Controller restricted to EVGA KPE cards, or does it work with any card? Or only with the EVGA KPE BIOS?


----------



## SoldierRBT

@arvinz 

It's restricted to KPE cards.


----------



## itssladenlol

SoldierRBT said:


> 3090 KPE 0.950v undervolt 2130MHz - Score: 15,012
> 
> 
> 
> https://www.3dmark.com/pr/696376


With Stock 360 Rad? 
Damn i think ill cancel my Mora 420 plans 😂


----------



## SoldierRBT

itssladenlol said:


> With Stock 360 Rad?
> Damn i think ill cancel my Mora 420 plans 😂


A Waterblock will be much better at cooling. Keep in mind, this run was locked at 0.950v so the heat wasn't that high. Full 520W it gets to 55-56C easy with 24C ambient.


----------



## des2k...

SoldierRBT said:


> A Waterblock will be much better at cooling. Keep in mind, this run was locked at 0.950v so the heat wasn't that high. Full 520W it gets to 55-56C easy with 24C ambient.


well.. you know there's timespy extreme, just 900mv, 1995 on core, still pulls 470w🙄 so all depends on the load


----------



## Martin778

Just slap 3 or 6 Noctua NF-A12x25s on it and it will run as cool as a custom loop. My 2080 Ti averaged 44-46°C with 2 of these Noctuas and roughly 18°C ambient.


----------



## SoldierRBT

des2k... said:


> well.. you know there's timespy extreme, just 900mv, 1995 on core, still pulls 470w🙄 so all depends on the load


In Quake II RTX, +120 core +1000 memory, it pulls 520W constantly, voltage around 0.975v-1.012v, 2040-2085MHz


----------



## des2k...

SoldierRBT said:


> In Quake II RTX, +120 core +1000 memory, it pulls 520W constantly, voltage around 0.975v-1.012v, 2040-2085MHz


Jesus! I had to install new drivers to start Quake II RTX. Guess what gets reset with driver updates?... Your Afterburner profile.

Imagine my surprise when I looked at my script on my second screen! And that's a 2x8-pin card


----------



## Magrecite

Zogge said:


> I had a seasonic focus gold 1000w and 3090 with fans and all in watercooling and 10980xe OC crashed it to power off. 1097 W was the last log entry.
> I got the AX1600i after that.


How're you liking the AX1600i? I actually have one on order, since it seemed like the Seasonic 1000W Prime Platinum isn't cutting it quite as well as I'd hoped for the 3090 K|NGP|N.


----------



## des2k...

des2k... said:


> Jesus !, I had to install new drivers to start RTX Quake. Guess what gets reset with driver updates ?... Your afterburner profile.
> 
> Imagine to my surprise when I looked at my script on my second screen ! and that's a 2x8pin card
> View attachment 2471206


I think I might have to do a script that always runs in the background: every 1 sec, if the max clock / PL doesn't match my Afterburner settings, just start Afterburner profile 1, then kill the process 🙄
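For anyone curious, a minimal sketch of that watchdog idea in Python. It leans on Afterburner's `-ProfileN` command-line switch to load a saved profile; the install path, the expected numbers, and the `read_gpu_state` callback (e.g. parsing `nvidia-smi -q`) are all assumptions for illustration:

```python
import subprocess
import time

# Expected values from the saved Afterburner profile
# (assumptions for illustration -- substitute your own).
EXPECTED_PL_W = 1000
EXPECTED_MAX_CLOCK_MHZ = 2130

def needs_reapply(pl_w, max_clock_mhz):
    """True if the card has drifted from the saved profile,
    e.g. after a driver update reset everything."""
    return pl_w < EXPECTED_PL_W or max_clock_mhz < EXPECTED_MAX_CLOCK_MHZ

def reapply_profile():
    # Afterburner loads profile slot 1 when launched with -Profile1;
    # the install path below is an assumption.
    exe = r"C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe"
    subprocess.Popen([exe, "-Profile1"])

def watchdog(read_gpu_state, interval_s=1.0):
    # read_gpu_state is a hypothetical helper returning (pl_w, clock_mhz),
    # e.g. parsed from `nvidia-smi -q` output.
    while True:
        pl, clk = read_gpu_state()
        if needs_reapply(pl, clk):
            reapply_profile()
        time.sleep(interval_s)
```

Starting the profile and killing the process, as described above, avoids keeping a second Afterburner window around; the loop itself just polls once per second.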


----------



## jomama22

des2k... said:


> I think I might have to do a script that always runs in the background every 1sec if max clock / PL doesn't match my afterburner just star afterburner profile 1 then kill the process🙄


I imagine that's the 1000w kp bios? Question, what does the pcie slot pull? Also, does pcie power usage drop with reduction of the power limit? And if so, do you know what the pcie slot power draw is at, say, 600w limit?

Also, I'm guessing the 1000w is when the power slider is at 100%?

Cheers!


----------



## des2k...

jomama22 said:


> I imagine that's the 1000w kp bios? Question, what does the pcie slot pull? Also, does pcie power usage drop with reduction of the power limit? And if so, do you know what the pcie slot power draw is at, say, 600w limit?
> 
> Also, I'm guessing the 1000w is when the power slider is at 100%?
> 
> Cheers!


I didn't pay attention to GPU-Z with Quake; I had it on Avg. In the past with 3DMark TE, for my 2x8-pin at around 570W (85% XOC) it was 84W+ over the PCIe slot. Yes, PCIe starts low at 20W and slowly climbs as you increase the PL.
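To put rough numbers on that scaling, here's a linear fit through the two observations above (20W slot draw at a low power limit, 84W at ~570W total). The total board power for the low anchor is my guess (100W), so treat this purely as back-of-envelope:

```python
def estimate_slot_draw_w(total_w, lo=(100.0, 20.0), hi=(570.0, 84.0)):
    """Linear interpolation/extrapolation between two observed
    (total board power, PCIe slot power) points. Both anchors are
    rough observations, not riser-card measurements."""
    (t0, s0), (t1, s1) = lo, hi
    return s0 + (total_w - t0) * (s1 - s0) / (t1 - t0)

# At a 600W power limit this guesses roughly 88W from the slot,
# already above the 66W the PCIe CEM spec allows on the slot's 12V rail.
```

A shunted card shifts that balance further, which is exactly the concern raised later in the thread.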


----------



## dr/owned

Couple of EVC2 comments with the TUF:

The one reporting 8 phase is the Core
The one reporting 6 is the Uncore
4 is the VRAM.

The voltage being applied is an offset on top of whatever the reported voltage is; it doesn't get reported in software. Output Voltage is the true voltage (probably; I can't measure it myself) coming from the regulator, with load line applied to it as well as the offset.

These are the "safe" settings to use to increase the switching frequency, turn off the OCP, and be able to change "Voltage Offset". Per-phase OCP is set as a voltage because it's a current measurement across a resistor; 2.55 is the max available:










It looks like the offset can be exploited a bit to get around the PWR PerfCap: the GPU thinks it's running a power virus in certain benchmarks and calls for a lower voltage/frequency point, but you can use the offset to force it back to, say, 1.10V when 1.0V is being called for.

What I think I'm going to do is lock the curve at 1.0V and then use a +0.1V offset to figure out what the max overclock is at 1.10V when PerfCap isn't being triggered, because it's a PITA trying to find the max overclock when it's dancing around all over the place getting pulled down to 1.03V.
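The offset-plus-loadline arithmetic can be sketched like this. `output_voltage_v` is a made-up helper and real regulator behavior is more involved, but the first-order model is just VID plus offset minus I*R droop:

```python
def output_voltage_v(vid_v, offset_v, load_a, loadline_mohm):
    """Approximate regulator output: the VID the GPU requests,
    plus the manual offset, minus load-line droop (I * R)."""
    return vid_v + offset_v - load_a * (loadline_mohm / 1000.0)

# GPU calls for 1.00V under a power-virus flag; a +0.10V offset
# forces it back toward 1.10V at idle, with droop under load:
#   output_voltage_v(1.00, 0.10, 0, 0)      # 1.10 at no load
#   output_voltage_v(1.00, 0.10, 200, 0.25) # 1.05 at 200A, 0.25mOhm LL
```

The load-line value here is illustrative; the actual figure is whatever the regulator is configured for.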


The datasheet for the uP9512 is available here: https://www.upi-semi.com/files/2228/e6374867-49f3-11e9-8d44-d30dc7bf67c5

It's pretty easy to look at the XML configuration in the EVC2 folder and line it up to the datasheet for what commands and values are being used.


----------



## jura11

dante`afk said:


> could someone with the KPE 1000w XOC bios run SP with these settings and let me know if they still see the green perfcap in gpuz?
> 
> View attachment 2471200
> 
> 
> 
> 
> possibly, its 16awg for the corsair I have from Cablemods.
> 
> 
> 
> I don't think they'll beat the current AMD IPC, but if they do, I'll switch again.


I will try these settings later today if that helps, but I assume I will see the green perfcap in GPU-Z; my Palit RTX 3090 GamingPro is not the best and won't clock like other guys' here. It will probably do 2160-2175MHz at best with 100% power limit on the KPE XOC BIOS.

Hope this helps 

Thanks, Jura


----------



## jomama22

If anyone with a three 8-pin card could run the KPE bios in, say, FurMark or something (or some power-heavy benchmark) with the PL set to 600W or so and lmk what kind of PCIe slot power it is pulling, that would be fantastic! Lol

Would like to try it on my shunted Strix (I only had 4mOhm available at the time and got impatient lol) and see what I may expect the PCIe slot to pull under specific power limits of the 1000W bios.

Thanks in advance!


----------



## Falkentyne

dante`afk said:


> could someone with the KPE 1000w XOC bios run SP with these settings and let me know if they still see the green perfcap in gpuz?
> 
> View attachment 2471200
> 
> 
> 
> 
> possibly, its 16awg for the corsair I have from Cablemods.
> 
> 
> 
> I don't think they'll beat the current AMD IPC, but if they do, I'll switch again.


Do you still have your FE or do you have another card?
I asked people before if they could run those Superposition custom extreme shaders 4k preset with the 1000W kingpin bios but only one person replied, saying he doesn't have superposition


----------



## dr/owned

jomama22 said:


> If anyone with a three pim card could run the kpe bios in, say, furmark or somthing (or some power heavy benchmark) with the PL set to be 600w or so and lmk what kind of pcie slot power it is pulling, that would be fantastic! Lol
> 
> Would like to try it on my shunted strix (I only had 4mohm available at the time and got impatient lol) and see what I may expect the pcie slot to pull under specific power limits of the 1000w bios.
> 
> Thanks in advance!


I’m pretty sure the driver blocks Furmark from going full power. Port Royal draws 200W more.


----------



## des2k...

SoldierRBT said:


> 3090 KPE 0.950v undervolt 2130MHz - Score: 15,012
> 
> 
> 
> https://www.3dmark.com/pr/696376


Must be a CPU thing; I'm also holding 2130 in Port Royal, but the score doesn't go higher than 14.5k with my 3900X.

Is 950mV 2130 stable in TE GT2 for 2h?


----------



## des2k...

dr/owned said:


> I’m pretty sure the driver blocks Furmark from going full power. Port Royal draws 200W more.


On the XOC bios? Pretty sure there are no driver or vBIOS limits, as you easily hit 640W+ (VRel/VOp) in Quake II RTX.
Limited Quake II RTX to 1875 @ 806mV; that was 384W and just 1fps loss from 640W.


----------



## jomama22

dr/owned said:


> I’m pretty sure the driver blocks Furmark from going full power. Port Royal draws 200W more.


Well port royal works as well then lol. Doesn't really matter what it is, just more interested in seeing what the pcie slot draws at specific total power draw.


----------



## Falkentyne

dr/owned said:


> I’m pretty sure the driver blocks Furmark from going full power. Port Royal draws 200W more.





des2k... said:


> on the XOC bios ? pretty sure there's no driver or vbios limits as you easily hit 640w+ vrel/vop on RTX quake.
> Limited RTX quake to 1875, 806mv that was 384w and just 1fps loss from 640w


Nvidia's drivers have power throttled _ALL_ doughnut-type stress tests for years, going back over 10 years now. The Power Limit isn't necessarily flagged; boost clocks and voltages are just limited by the driver.

In the OLD days, you could remove this throttling by renaming furmark.exe to Quake3.exe or UnrealTournament.exe. I do NOT know if this works now, and if it does, I take no responsibility if you try this and your $1500 video card goes byebye. Try it at your own risk.


----------



## geriatricpollywog

So I thought my VRAM degraded but that might not be the case. When I first got the 3090, it could run PR at +1500 mem, but now it crashes above 1350. However, when I look at historical results, the average memory clock has been around 1381mhz this whole time. I had thought my 2080ti experienced the same degradation, but it too had approximately the same average memory clock frequency the entire time I owned it.


----------



## jomama22

Took the day to work with the shunted strix and 5950x and such and came out with some good results:

11th overall in TimeSpy:
https://www.3dmark.com/spy/16749964
Graphics: 23729
CPU: 18105

7th overall in TimeSpy Extreme:
https://www.3dmark.com/spy/16740973
Graphics: 12375
CPU: 11700

No exotic cooling/chilled water. Just water-cooled. I'll take the rig outside soon enough and see what -5*C air can help with lol.

To whoever still thinks the new AMD 5xxx series can't keep up with Intel in the graphics score: well, it's just not true.


----------



## zkareemz

First time doing an overclock: is this a good result? How can I improve it?


----------



## Sygnano

Hey guys. Just received a PNY 3090 and started to fiddle with it. Got a few questions up my sleeve: the card seems constantly power starved, so I'm looking at the power draw. What's the BIOS to use for 2x8-pin cards? The best flashable one I can find is 390W, but that doesn't seem to help a lot.


----------



## rhyno

Falkentyne said:


> Nvidia's drivers have power throttled _ALL_ Doughnut* type stress tests for years, going back over 10 years ago now. Power Limit isn't necessarily flagged, just boost clocks and voltages are limited by the driver.
> 
> In the OLD days, you could remove this throttling by renaming furmark.exe as Quake3.exe or Unrealtournament.exe. I do NOT know if this works now, and if it does, I take no responsibility if you try this and your $1500 video card goes byebye. Try it at your own risk.


Not true, I just pulled over 900 watts no problem with the 1000 watt Kingpin bios on my FTW3









1000 watts is 100% power limit on this bios, so I set it at 666 watts because I can't cool 1000 watts. But it fully opened up the card; no power limit popping up in GPU-Z anymore


----------



## sultanofswing

zkareemz said:


> first time to do overclock, is this a good result? how I can improve it?
> View attachment 2471223


Your curve is set up incorrectly, which will produce a lower than normal score. You need to smooth the curve out more.


----------



## jomama22

rhyno said:


> not true, i just pulled over 900 watts no problem with the 1000 watt kingpin bios on my ftw3
> View attachment 2471225
> 
> 
> 1000 watts is 100% power limit on this bios. so i set it at 666 watts because i cant cool 1000 wats. but it fully opend up the card. no power limit poping up in gpuz anymore


When you have it set at 666, can you check and see what the PCIe slot is pulling in terms of power?


----------



## Avacado

rhyno said:


> so i set it at 666 watts because i cant cool 1000 wats.


Rookie...


----------



## rhyno

jomama22 said:


> When you have it set at 666, can you check and see what the PCIe slot is pulling in terms of power?


Think it maxed at 90 watts.


----------



## rhyno

rhyno said:


> Think it maxed at 90 watts.
> edit pic for results


----------



## SoldierRBT

des2k... said:


> must be a cpu thing, I'm also holding 2130 in port royal, doesn't go higher than 14.5k for score with my 3900x.
> 
> Is 950mv 2130 stable in TE gt2 for 2h ?


You need to tweak NVVDD/MSVDD in order to get better scores. Finding the perfect combination isn't easy but can boost score a lot. Vince said in the Overclock Invitational 2 Stream: The efficiency goes up in the score when the voltages are right.

Undervolt 0.950v 2130MHz without NVVDD/MSVDD tweaks - 14,624 - Max power draw: 407W
https://www.3dmark.com/pr/696258

Undervolt 0.950v 2130MHz NVVDD/MSVDD tweaked - 15,012 - Max power draw: 432W
https://www.3dmark.com/pr/696376


----------



## iunlock

Hello fellow 3090 owners... Finally got mine today and I've been tuning it all day. Pretty solid I must say... I do have a water block and back plate for it sitting right next to me, but just gathering some data with it in stock form first...

*Main Gaming Desktop / 9900KS @ 53x / RTX 3090 FE Stock Blower *

Not even on the test bench yet.

Fire Strike:









----------



## Cholerikerklaus

jomama22 said:


> Took the day to work with the shunted strix and 5950x and such and came out with some good results:
> 
> 11th overall in TimeSpy:
> https://www.3dmark.com/spy/16749964
> Graphics: 23729
> CPU: 18105
> 
> 7th overall in TimeSpy Extreme:
> https://www.3dmark.com/spy/16740973
> Graphics: 12375
> CPU: 11700
> 
> No exotic cooling/chilled water. Just water-cooled. I'll take the rig outside soon enough and see what -5*C air can help with lol.
> 
> To whomever still thinks the new AMD 5xxx series can't keep up with intel in the graphics score, well, it's just not true.


Can you do a Port Royal run? I also have the 5950X and a 3090. I am close to your scores, but in Port Royal my card sucks with the 5950X.


----------



## Gebeleisis

I have the same 5950X + 3090 combo and I can confirm that in Port Royal the results are low.
I am still on air with the 3090 and waiting for a waterblock to arrive.


----------



## Zogge

Magrecite said:


> How're you liking the AX1600i? I actually had one on order ready to get here since it seemed like the Seasonic 1000W Prime Platinum isn't cutting it quite as well as I'd hoped for the 3090 K|NGP|N.


I am really happy with it with no issues at all, well only the price.


----------



## Zogge

jomama22 said:


> If anyone with a three pim card could run the kpe bios in, say, furmark or somthing (or some power heavy benchmark) with the PL set to be 600w or so and lmk what kind of pcie slot power it is pulling, that would be fantastic! Lol
> 
> Would like to try it on my shunted strix (I only had 4mohm available at the time and got impatient lol) and see what I may expect the pcie slot to pull under specific power limits of the 1000w bios.
> 
> Thanks in advance!


I did it on the Strix. 26A, 22A, 24A on the 3 pins; the PCIe slot I didn't really check, but say 50ish W.
The power slider was at 60. Mild overclock, 120/750. A bit weird, as that is more than 600W.
In games with the same settings it is closer to 550W.

VRM 50 degrees, core 57 degrees after 3-4 min. Memory I cannot check, hence the very short run.


----------



## jomama22

Zogge said:


> I did it on the strix. 26A, 22A, 24A on the 3 pins and PCIE I didnt really check but say 50ish W.
> Power slider was at 60. Overclock mild 120/750. A bit weird as it is more than 600W.
> In games with same settings it is closer to 550W.
> 
> VRM 50 degrees, core 57 degrees after 3 -4 min. Memory I canot check hence the very short run.


Thanks, yeah, it's more of a concern with the shunt on the PCIe slot and using that bios. I would only need it at ~550-600W to get 900+, and it would just save me a bit of time instead of adding more shunts to do the same.

Last page someone posted a shot of 660W from that bios showing 100W being pulled from the PCIe slot alone, which makes me a bit hesitant to try it out tbh lol. Would mean 160W for me and the shunt.


----------



## xrb936

Guys, quick Q: What's the difference between 457.xx driver and 460.xx driver?


----------



## mismatchedyes

https://www.3dmark.com/pr/698623 

I have tested the 520w bios on the trio and one fan does run slower, 2000rpm instead of 3000rpm

Still, I got a better score with this bios: 14,600 with the 5950X in Port Royal. I am quite pleased with this. It is my best so far.


----------



## xrb936

I scored 14 725 in Port Royal


Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10




www.3dmark.com





LMFAO


----------



## cstkl1

Burn baby burn, 3090 Strix.
PSU blew up, taking out a PS5 next to it.
Seasonic TX-1000.
China.
Original post:



https://bbs.nga.cn/read.php?tid=24821510&_ff=334&fbclid=IwAR0eK6ftdg8OIWUlZObvGGe7oYDK3bf-u4JBZKbR0MkRFYdGavnpSXMnFjQ&rand=427



FB repost:



__ https://www.facebook.com/IMIKAipoh/posts/1896165023865605


----------



## cstkl1

Gut feeling says: KP bios.


----------



## Cholerikerklaus

xrb936 said:


> I scored 14 725 in Port Royal
> 
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> LFMAO


Good core clock. But this is the same problem I got. Very high clocks but low scores. I don't know why


----------



## xrb936

cstkl1 said:


> burn baby burn 3090 strix
> psu blew up. taking out a ps5 next to it
> seasonic tx1000
> china
> ori post
> 
> 
> 
> https://bbs.nga.cn/read.php?tid=24821510&_ff=334&fbclid=IwAR0eK6ftdg8OIWUlZObvGGe7oYDK3bf-u4JBZKbR0MkRFYdGavnpSXMnFjQ&rand=427
> 
> 
> 
> fb repost
> 
> 
> 
> __ https://www.facebook.com/IMIKAipoh/posts/1896165023865605


I read the whole post. I highly doubt the fire was caused by another appliance with that poor PC just a victim...


----------



## xrb936

Cholerikerklaus said:


> Good core clock. But this is the same problem I got. Very high clocks but low scores. I don't know why


There was some discussion earlier about it; we don't know why either, only suspicions.


----------



## cstkl1

xrb936 said:


> There was some discussion earlier about it; we don't know why either, only suspicions.


disable cstate


----------



## xrb936

cstkl1 said:


> disable cstate


Wut? How does C-State affect 3DMark?


----------



## bmgjet

xrb936 said:


> Wut? How does C-State affect 3DMark?


When the GPU is the bottleneck the CPU downclocks; then you get to an area with more CPU load and there's a slight delay before the CPU clock goes back to 100% speed.
You can see it in the CPU clock graph within 3DMark: in the places where it's changing clock speeds, you've lost some points.


----------



## defiledge

Does anybody know if asus motherboard bios allows the CPU fan header to be controlled by coolant temp?


----------



## OC2000

Cholerikerklaus said:


> Can you do a port royal run? I also have the 5950x and a 3090. Iam close to your scores but in port royal my card sucks with the 5950x


I'm the same. I can get similar scores in both Time Spys, though I can't OC the RAM as high as his, but Port Royal scores terribly with the 5950X.


----------



## Pepillo

I have dedicated the morning to benchmarking, to see how the block behaves; since I installed it I had only been playing. The core can be raised to between 2,175MHz and 2,190MHz (2,200MHz is out of my reach), and the memory to just over 21,000, around 21,200, without problems. Temperatures between 44° and 47° depending on the bench. Very happy with the toy, MSI Gaming X Trio with Bykski block and Kingpin 520W bios:

- More than 15,000 points in Port Royal:

http://www.3dmark.com/pr/683485

- 22,572 graphics in Time Spy:

http://www.3dmark.com/spy/16761076

- 11,794 graphics in Time Spy Extreme:

http://www.3dmark.com/spy/16646850

- Over 14,000 in Fire Strike Ultra:

http://www.3dmark.com/fs/24467931 

- And nearly 14,000 in Superposition 1080p Extreme:


----------



## sultanofswing

xrb936 said:


> I scored 14 725 in Port Royal
> 
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> LFMAO


What method are you using core clock offset or voltage curve editor?


----------



## GAN77

Pepillo said:


> I have dedicated the morning to benchmark, to see how the block behaved, which since I put it I had only dedicated myself to playing.


What chip-to-water delta do you get at 520 watts with the Bykski block?


----------



## Pepillo

GAN77 said:


> What chip-to-water delta do you get at 520 watts with the Bykski block?


Nine degrees. You can see it in this screenshot (Valhalla, 4K Ultra) after more than half an hour of playing: the water at 40°C, the graphics card at 49°C:


----------



## asdkj1740

No wonder the ASUS Strix cooler on the RTX 3000 series is that bad...
the copper base is simply a flat plate rather than a curved one like before,
and the heatpipes under the copper plate are not completely flat either.


----------



## reflex75

xrb936 said:


> I scored 14 725 in Port Royal
> 
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> LMFAO


Your score: 14 725
My score: 14 636

Your average clock: 2 258 MHz
My average clock: 2 031 MHz

Your frequency is useless...

To check the rest of the comparison (temp, vram...):
https://www.3dmark.com/compare/pr/687118/pr/565338#


----------



## Cholerikerklaus

reflex75 said:


> Your score: 14 725
> My score: 14 636
> 
> Your average clock: 2 258 MHz
> My average clock: 2 031 MHz
> 
> Your frequency is useless...
> 
> To check the rest of the comparison (temp, vram...):
> https://www.3dmark.com/compare/pr/687118/pr/565338#


Then tell us what is the reason for this?


----------



## reflex75

Cholerikerklaus said:


> Then tell us what is the reason for this?


I first need to know how he did his OC.
It could be a bad curve: a line that's too steep reaches high frequency when idle, but not under load.
The curve should be logarithmic; the flatter it is, the harder the card works,
because the lower voltage points also have an impact on overall performance...


----------



## SolarBeaver

Hi guys, I'm only getting +100-110 max on the core with my Strix OC. That's really bad for this card, right?


----------



## Nizzen

SolarBeaver said:


> Hi guys, I'm only getting +100-110 max on the core with my Strix OC. That's really bad for this card, right?


Depends on the temperature. +100 is about 2100 MHz @ ~50°C, so it's not bad
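Nizzen's point, that the same offset lands at different clocks depending on temperature, comes from how GPU Boost sheds clock bins as the core warms up. The ~15 MHz bin size is the well-known Ampere granularity, but the temperature thresholds in this sketch are illustrative assumptions, not published NVIDIA values:

```python
# Rough sketch of GPU Boost trimming clocks as the core warms up.
# BIN_MHZ is the known Ampere clock-bin step; the "one bin lost per
# ~5 C above ~35 C" rule below is an assumed illustration.

def estimated_clock(base_boost_mhz: int, offset_mhz: int, core_temp_c: float) -> int:
    BIN_MHZ = 15  # one GPU Boost clock bin
    bins_lost = max(0, int((core_temp_c - 35) // 5))  # assumed thresholds
    return base_boost_mhz + offset_mhz - BIN_MHZ * bins_lost

# A card that would sit at 2040 MHz cold with +100 has shed 3 bins by ~50 C:
print(estimated_clock(1940, 100, 50))  # → 1995
```

This is why comparing offsets between an air-cooled card at 70°C and a water-cooled one at 50°C says little about the silicon itself.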


----------



## SolarBeaver

Nizzen said:


> Depends on the temperature. +100 is about 2100 MHz @ ~50°C, so it's not bad


It's on air, so 65-75°C... So frustrated. I already had the Gaming X Trio, but after reading all the praise for the Strix I decided to get one at a terrible price and sell the Trio, and the Strix has only a slight edge over the Trio... tough luck, it seems


----------



## 414347

Sorry posted in the wrong thread

deleted


----------



## asdkj1740

NewUser16 said:


> View attachment 2471298
> 
> Strix 3080/3090 base plate is flat and heat pipes are covered, it doesn’t look anything like the one on the picture











I know. I am talking about the side facing/making contact with the heatpipes.
Thanks to the fire, we can now see what this side of the copper base looks like and the shape of the heatpipes under/covered by the copper base.
The heatpipes should be covered, of course; the problem is how well they make contact so as to pull the heat out of the copper base.

Some dude has tested them with a 500 W BIOS: the MSI Gaming Trio cooler performs significantly better than the ASUS Strix cooler at 450-500 W load.


----------



## 414347

asdkj1740 said:


> View attachment 2471299
> 
> I know. I am talking about the side facing/making contact with the heatpipes.
> 
> Thanks to the fire, we can now see what this side of the copper base looks like and the shape of the heatpipes under/covered by the copper base.


That's why I deleted my reply, I misunderstood you 
Sorry


----------



## man from atlantis

cstkl1 said:


> burn baby burn 3090 strix
> psu blew up. taking out a ps5 next to it
> seasonic tx1000
> china
> ori post
> 
> 
> 
> https://bbs.nga.cn/read.php?tid=24821510&_ff=334&fbclid=IwAR0eK6ftdg8OIWUlZObvGGe7oYDK3bf-u4JBZKbR0MkRFYdGavnpSXMnFjQ&rand=427
> 
> 
> 
> fb repost
> 
> 
> 
> __ https://www.facebook.com/IMIKAipoh/posts/1896165023865605


I think it's because of the fan controller, but who knows










The front of the motherboard doesn't seem to be burning; the smoke is coming from the back of the mobo, then spreading to the front.
Video


https://www.bilibili.com/video/BV1pX4y1T7rY?p=1&share_medium=iphone&share_plat=ios&share_source=COPY&share_tag=s_i&timestamp=1608917165&unique_k=BK611G


----------



## pat182

asdkj1740 said:


> no wonder why asus strix cooler on rtx3000 series is that bad...
> the copper base is simply a flat plate rather than curved one like before.
> and the heatpipes under the copper plate are not completely flat too.
> 
> View attachment 2471287


My Strix never passes 60°C no matter the power draw, not sure why people say it's bad


----------



## mismatchedyes

With 500 W it won't pass 60°C? That is really good. What is your ambient?


----------



## des2k...

man from atlantis said:


> I think it's because of the fan controller, but who knows
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The front of the motherboard doesn't seem to be burning; the smoke is coming from the back of the mobo, then spreading to the front.
> Video
> 
> 
> https://www.bilibili.com/video/BV1pX4y1T7rY?p=1&share_medium=iphone&share_plat=ios&share_source=COPY&share_tag=s_i&timestamp=1608917165&unique_k=BK611G


Seasonic refuses to use high-quality cables even on high-wattage units. Mining forums are full of burned cables and units with Seasonic power supplies.

I wouldn't run a 1000 W unit with their crap 18 AWG CPU/GPU cables and 20 AWG on the ATX24, SATA, and Molex cables.


----------



## Thanh Nguyen

Does the 1000 W BIOS work with the Strix? I tried, but it gives a PCI subsystem-ID mismatch error.


----------



## des2k...

Thanh Nguyen said:


> Does the 1000 W BIOS work with the Strix? I tried, but it gives a PCI subsystem-ID mismatch error.


Use the -6 option to flash.
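For reference, the `-6` switch tells NVIDIA's nvflash to override the PCI subsystem-ID mismatch check when cross-flashing a BIOS from a different board. A typical session (the .rom filenames here are placeholders) looks something like:

```shell
# Back up the current VBIOS first, then disable write protection and
# flash while overriding the subsystem-ID mismatch check ("-6").
# Run from an elevated prompt; the card will briefly drop off the bus.
nvflash64 --save backup_strix.rom
nvflash64 --protectoff
nvflash64 -6 kpe_1000w.rom
```

Keeping the backup .rom means a bad cross-flash can usually be reverted the same way.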


----------



## Pandora's Box

look at all the magic smoke escaping


----------



## pat182

mismatchedyes said:


> With 500w it won't pass 60c? That is really good. What is your ambient ?


20°C. At 1.006 V / 2055 MHz I'm always at 57°C in Cyberpunk, drawing 430 to 500 W depending on the scene.


----------



## man from atlantis

des2k... said:


> Seasonic refuses to use high-quality cables even on high-wattage units. Mining forums are full of burned cables and units with Seasonic power supplies.
> 
> I wouldn't run a 1000 W unit with their crap 18 AWG CPU/GPU cables and 20 AWG on the ATX24, SATA, and Molex cables.


That's sad, and there are still people daisy-chaining with 18 AWG cables. My 2011-model SS-1000XP has 16 AWG 12 V wires across the board: https://abload.de/img/img_20200916_143006ank3i.jpg


----------



## jomama22

18 AWG is just fine, gotta stop this nonsense. It's the physical connectors that fail 99% of the time, when the crimps become loose from multiple plugs and unplugs.

The crimps/connectors are only rated for about 30 connect/disconnect cycles, that's it.

A loose crimp connection will create a hilarious amount of heat, due to arcing and resistance, compared to what the wires themselves ever will at the lengths PC cables use.

And I almost guarantee that the 20 AWG used on those 24-pins is for low-current signal and 3.3/5 V lines. The 12 V lines would most certainly be 18 AWG.

mining melted pcie cable - Google Search

Every image is of melted connectors, not wires. The damaged part of the wire is caused by the crimp creating enough heat to melt the connector and then conducting heat down the wire.
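jomama22's point can be sanity-checked with a back-of-the-envelope I²R calculation. The wire resistance below is the standard copper value for 18 AWG; the loose-crimp contact resistance is an assumed illustrative figure, not a measurement:

```python
# Compare heat dissipated along a PCIe cable wire vs. at one bad crimp.
AWG18_OHM_PER_M = 0.021   # ~21 mOhm/m, standard annealed copper, 18 AWG
CABLE_LEN_M = 0.6         # typical PSU-to-GPU cable length
CURRENT_A = 4.2           # ~150 W / 12 V spread over three 12 V wires
LOOSE_CRIMP_OHM = 0.050   # assumed worn/loose contact resistance

p_wire = CURRENT_A**2 * AWG18_OHM_PER_M * CABLE_LEN_M   # spread along 60 cm
p_crimp = CURRENT_A**2 * LOOSE_CRIMP_OHM                # concentrated at one pin

print(f"wire: {p_wire:.2f} W, crimp: {p_crimp:.2f} W")
```

Under these assumptions, one bad crimp dissipates roughly four times the heat of the entire wire run, concentrated in a millimetre-scale contact, which is why the connector housing melts long before the wire insulation does.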


----------



## geriatricpollywog

Is there a guide for manually tuning the v/f curve in Precision X1?


----------



## des2k...

0451 said:


> Is there a guide for manually tuning the v/f curve in Precision X1?


I don't believe so, since it doesn't work with X1.

It works with Afterburner: Undervolting the RTX 3080 and the RTX3090 - Bjorn3D.com


----------



## jura11

0451 said:


> Is there a guide for manually tuning the v/f curve in Precision X1?


Use MSI Afterburner rather than Precision X1 for the V/F curve. I don't use X1 because of the UI hahaha, hate that software 

Thanks, Jura


----------



## SuperMumrik

defiledge said:


> Does anybody know if asus motherboard bios allows the CPU fan header to be controlled by coolant temp?


All headers except the cpu and cpu_opt on Maximus XII boards


----------



## xrb936

sultanofswing said:


> What method are you using core clock offset or voltage curve editor?


Offset.


----------



## xrb936

reflex75 said:


> I first need to know how he has done his OC.
> Could be a bad curve line too steep reaching high frequency but not load.
> The curve must be logarithmic, the flatter, the harder it works.
> Because the previous voltage points have an impact on the global performance...


I am using offset. And it's a common issue with the KPE. If you can fix it, thousands of people will thank you.


----------



## kanttori

Sygnano said:


> Hey guys. Just received a PNY 3090 and started to fiddle with it. Got a few questions up my sleeve: the card seems constantly power starved, so I'm looking at the power draw. What's the BIOS to use for a 2x8-pin card? The best I can find that's flashable is 390 W, but that doesn't seem to help a lot.


The best I have found so far is the Gigabyte Gaming OC model BIOS (Gigabyte RTX 3090 VBIOS). It has different outputs, and at least the middle DisplayPort stops working with it (no idea about the HDMI). With it you can adjust the power limit to 105% and get 390 W to the card; I'm running 0.875 V and 1950 MHz on the stock cooler while waiting for my waterblock. I will probably attempt to shunt-mod it when I get the block.


----------



## Glerox

I'm selling an unopened alphacool GPU block for the RTX 3090/3080 ROG strix. PM me if interested.


----------



## bogdi1988

Alright guys, I have been wanting to watercool my 3090FE. I see there are 5 options:
1. EK
2. Alphacool
3. Bitspower
4. Corsair
5. Bykski

I know that EK so far seems to be vaporware; it looks really nice, but it's like a mythical unicorn since it's not out in the wild. Corsair seems to have fixed the water leak on their fittings block. So I am a bit on the fence about which of the 4 (not counting EK) I should get, or should I wait a bit more for EK? This is going in a TR 3970X build with 3x 360 mm rads. Which one do you recommend for the best cooling?


----------



## xrb936

SolarBeaver said:


> Hi guys, I'm only getting +100-110 max on the core with my Strix OC. That's really bad for this card, right?


Compared to all Strix OCs, it's average. Compared to the first-batch Strix OCs, it's far below average.


----------



## jomama22

The Alphacool blocks seem to be performing really well this generation with their design. A number of people have the Bitspower block and it, to put it nicely, performs pretty poorly on the 3090 FE. The Corsair block looks pretty meh, with no water actively going over all the memory modules or VRMs (and it visibly looks pretty meh to me personally).

Haven't seen anyone with an actual Bykski 3090 FE block yet.

I'd get the Alphacool. Btw, if you need one, I have an unopened one lol. Bought it before I had my Strix and it just came in the mail from Germany last week.


----------



## reflex75

Dont forget to take a break and enjoy the beautiful ray tracing reflection in Cyberpunk 2077 😃


----------



## sultanofswing

xrb936 said:


> I am using offset. And it's a common issue with the KPE. If you can fix it, thousands of people will thank you.


Try using the 2 dipswitches to add voltage and see if the score increases a good bit, It's possible the core even though stable needs more voltage.


----------



## reflex75

xrb936 said:


> I am using offset. And it's a common issue with the KPE. If you can fix it, thousands of people will thank you.


I don't have a KPE, so please explain what's going on exactly?
I know the offset method should give you a straightforward increase in performance.
If that's not the case, then it could be a trick in the BIOS in order to run an average chip without crashing,
because the advertised 1920 MHz boost clock is very high.
For instance, my 3090 FE's boost clock is only 1695 MHz.
But when I start a game, it boosts as high as 2055!
Now imagine my chip in a KPE, which has a 225 MHz higher boost.
It would try to boost at 2280 MHz and crash immediately.
My point is that a higher boost clock does not automatically guarantee a higher-binned chip...


----------



## xrb936

Guys, if I am using the 1000 W BIOS on the Strix, do I need the shunt mod? Why can I only hit 400 W when using it without the shunt mod?


----------



## xrb936

reflex75 said:


> I don't have a KPE, so please explain what's going on exactly?
> I know the offset method should give you a straightforward increase in performance.
> If that's not the case, then it could be a trick in the BIOS in order to run an average chip without crashing,
> because the advertised 1920 MHz boost clock is very high.
> For instance, my 3090 FE's boost clock is only 1695 MHz.
> But when I start a game, it boosts as high as 2055!
> Now imagine my chip in a KPE, which has a 225 MHz higher boost.
> It would try to boost at 2280 MHz and crash immediately.
> My point is that a higher boost clock does not automatically guarantee a higher-binned chip...
> 
> View attachment 2471342


In simple words, all KPEs can hit very high clocks with very low scores. It won't crash during the test, though.


----------



## GAN77

Guys, what average chip-to-water delta do you get on water cooling at 450-500 watts of consumption?


----------



## Thanh Nguyen

Any waterblock for the FTW3 yet? And how do you determine how good the chip is when undervolting?


----------



## dante`afk

Falkentyne said:


> Do you still have your FE or do you have another card?
> I asked people before if they could run those Superposition custom extreme shaders 4k preset with the 1000W kingpin bios but only one person replied, saying he doesn't have superposition


still have my FE, but I'll probably look for a KPE once there is a proper water block for it.


----------



## Falkentyne

reflex75 said:


> I don't have a KPE, so please explain what's going on exactly?
> I know the offset method should give you a straightforward increase in performance.
> If that's not the case, then it could be a trick in the BIOS in order to run an average chip without crashing,
> because the advertised 1920 MHz boost clock is very high.
> For instance, my 3090 FE's boost clock is only 1695 MHz.
> But when I start a game, it boosts as high as 2055!
> Now imagine my chip in a KPE, which has a 225 MHz higher boost.
> It would try to boost at 2280 MHz and crash immediately.
> My point is that a higher boost clock does not automatically guarantee a higher-binned chip...
> 
> View attachment 2471342


You can't generalize like that.
My card will only boost to 1950 MHz max at +0 offset. Pretty sure the way the curve boosts is based on the silicon lottery, so no two of the same card will have the same boost.


----------



## dr/owned

(TUF card conclusion)

So it really doesn't look like I can get my card truly stable beyond about 2100 MHz core. In Afterburner I set the offset to +120 core and then touched up the curve a bit to make the 1.0 V - 1.08 V spread clock higher, to compensate for when PerfCap=PWR kicks in under heavy loads like Port Royal. It would have been nice to have a 3-connector card where the KP BIOS wouldn't have so many drawbacks, but realistically it's higher cost and probably isn't going to make much of a gaming-FPS difference anyway. It's slightly disappointing that there's still some sort of power limit around 650 W despite the shunts theoretically enabling 800 W+.

Here's the Afterburner. From what I can tell in Port Royal, the PWR perfcap doesn't ever pull the voltage below 1.0 V. The curve tool is pretty garbage when it comes to line-fit. It seems to have some sort of limitation where you can only place frequency points so close together, and anywhere the points are flat with each other those voltage points aren't usable... so if I try to make it flat from 1.0 V to 1.1 V it'll only bounce between those 2 voltages, nothing in between:










Thermals are incredibly good... +14°C delta over the water in Port Royal on a block that cost $100 with the backplate. I'd definitely say it's a waste of money spending more on EK or (incredibly dumb) $400 on Optimus; spend an extra $30 on upgraded thermal pads and get the Bykski block. Actively cooling the backplate is worth it: 160°F without -> 105°F with. Are the memory chips OK with those temperatures? Probably, but a cooler card is a more reliable card, and pulling heat through the PCB and backplate probably helps the frontside cooling too.

Oh, and GPU-Z is bugged: if it's running in the background, my 10850K will never downclock/downvolt from full speed.
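The curve shaping described above (a global offset, then manually leveling a chosen voltage range to one frequency) can be sketched like this; the curve points and numbers are placeholders, not dr/owned's actual card:

```python
# Build a simple Afterburner-style V/F curve: apply a global offset, then
# clamp every point at or above a target voltage to one fixed frequency.
# That clamp is what "flattening" the top of the curve means in practice.
def flatten_curve(curve, offset_mhz, clamp_from_mv, clamp_mhz):
    out = {}
    for mv, mhz in sorted(curve.items()):
        mhz += offset_mhz                 # global core offset
        if mv >= clamp_from_mv:
            mhz = clamp_mhz               # flat ceiling from this voltage up
        out[mv] = mhz
    return out

# Placeholder stock points (mV -> MHz):
stock = {900: 1905, 950: 1950, 1000: 1995, 1050: 2040, 1081: 2070}
tuned = flatten_curve(stock, offset_mhz=120, clamp_from_mv=1000, clamp_mhz=2100)
print(tuned)  # every point from 1000 mV up now sits at 2100 MHz
```

The benefit is that when a power-limit perfcap pulls voltage down within the flattened range, the clock stays put instead of dropping a bin per step.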


----------



## Sheyster

Falkentyne said:


> My card will only boost to 1950 mhz max at +0 offset. Pretty sure the way the curve boosts is based on silicon lottery, so no two of the same cards will have the same boost


The FE I previously had boosted to 1980 with no offset, so there is some VID check or something going on there for sure.


----------



## Carls_Car

Carls_Car said:


> Can anyone tell me if these voltages look odd? Is my PSU giving me weak output?
> 
> View attachment 2471145


Any help here..??


----------



## Falkentyne

dr/owned said:


> (TUF card conclusion)
> 
> So really doesn't look like I can get my card truly stable beyond about 2100Mhz core. In Afterburner I set the offset to +120 core and then touched up the curve a bit to make the 1.0V - 1.08V spread be higher clocked to compensate for when PerfCap=PWR kicks in under heavy loads like Port Royal. It'd have been nice if I had a 3 connector card where the KP bios wouldn't have so many drawbacks, but realistically it's higher cost and probably isn't going to make much of a gaming FPS difference anyways. It's slightly disappointing that there's still some sort of power limit around 650W despite the shunts theoretically enabling 800W+.
> 
> Here's the Afterburner. From what I can tell in Port Royal, PWR perfap doesn't ever pull the voltage below 1.0V. The curve tool is pretty garbage when it comes to line-fit. It seems to have some sort of limitation where you can only place frequency points so close together and anywhere the points are flat with each other those voltage points aren't usable....so if I try to make it flat from 1.0V to 1.1V it'll only bounce between those 2 voltages...nothing inbetween.:
> 
> View attachment 2471350
> 
> 
> Thermals are incredibly good...+14C delta over the water in Port Royal on a block that cost $100 with the backplate. I'd definitively say it's a waste of money spending more on EK or (incredibly dumb) $400 on Optimus...spend an extra $30 on upgraded thermal pads and get the Bykski block. Actively cooling the backplate is worth it. 160F without -> 105F with. Are the memory chips ok with those temperatures? Probably, but a cooler card is a more reliable card and pulling heat through the PCB and backplate probably helps the frontside cooling too.
> 
> Oh and GPUz is bugged where if it's running in the background my 10850K will never downclock/volt from full speed.



You can't use +150 MHz on the core?
You're only at 1000 mV. What happens at 1.087-1.10 V (usable only if the voltage slider is set to 100%)?

Here's mine at +150 / +100% / 55C.










Absolute highest I can go is +180 on the core and +500 on the memory.










COD MW / Warzone hates anything above +150, and sometimes hates +150 too.


----------



## pat182

I like my Strix so much, best thing since sliced bread


----------



## dr/owned

Falkentyne said:


> You can't use +150 MHz on the core?
> You're only at 1000 mV. What happens at 1.087-1.10 V (usable only if the voltage slider is set to 100%)?
> 
> 
> COD MW / Warzone hates anything above +150, and sometimes hates +150 too.


Similar to your findings in COD: Fortnite, being lightly loaded at the settings I use, crashes at around 2130 MHz (the game crashes, not the driver). I could get that speed to pass Port Royal all day long. I'm more interested in reliability than a couple % gain running on the ragged edge, so I just backed it off from +150 to +120 and tweaked the curve a little so that at full voltage (1.10 V) it runs about 2115 MHz, and 2100 down to 1.056 V.


----------



## jura11

Falkentyne said:


> You can't use +150 MHz on the core?
> You're only at 1000 mV. What happens at 1.087-1.10 V (usable only if the voltage slider is set to 100%)?
> 
> Here's mine at +150 / +100% / 55C.
> 
> View attachment 2471355
> 
> 
> Absolute highest I can go is +180 on the core and +500 on the memory.
> 
> View attachment 2471357
> 
> 
> COD MW / Warzone hates anything above +150, and sometimes hates +150 too.


Hi there 

In my case +145 MHz gives me 2160-2175 MHz, and +180 MHz would easily give me 2205 MHz, but sadly my Palit RTX 3090 GamingPro will do +145 MHz only in benchmarks. In actual gaming I'm running +100-115 MHz max, at 2115-2130 MHz, with the voltage bouncing from 1.05 V to 1.1 V; when I hit the power limit, clocks drop to 2010-2025 MHz in the best case and to 1950-1995 MHz in the worst.

I haven't yet tried creating a custom V/F curve in MSI Afterburner with the KPE XOC 1000 W BIOS and comparing the results in Port Royal and in gaming.

Hope this helps 

Thanks, Jura


----------



## bogdi1988

jomama22 said:


> The Alphacool blocks seem to be performing really well this generation with their design. A number of people have the Bitspower block and it, to put it nicely, performs pretty poorly on the 3090 FE. The Corsair block looks pretty meh, with no water actively going over all the memory modules or VRMs (and it visibly looks pretty meh to me personally).
> 
> Haven't seen anyone with an actual Bykski 3090 FE block yet.
> 
> I'd get the Alphacool. Btw, if you need one, I have an unopened one lol. Bought it before I had my Strix and it just came in the mail from Germany last week.


I definitely am interested LOL! PM incoming


----------



## dr/owned

dante`afk said:


> pictures please


Unrelated to a 3090, but here are a couple of pictures of this setup:

Water tank (it's LED-backlit so I can see the water moving/water level), PMP 600, UV sterilizer. The power supply is out of the picture. On the far left in the background is a heat exchanger for the servers that are out of picture to the left. This one loop services my server, router (Untangle), and desktop.
















The desktop side of things: heat exchanger (40-plate, 2000 W @ 5°C delta), soundproof box for the D5 pump, and then my desktop in its own soundproof box. I put filters on all my loops because it's way easier to clean those out than to disassemble blocks.


----------



## des2k...

GAN77 said:


> Guys, what average chip-to-water delta do you get on water cooling at 450-500 watts of consumption?


If I leave my case open, fans at 1400 rpm, the water stays at 25°C. At 515 W the chip was going to 47°C, then back down to 45°C, in the TE GT2 loop. 400 W is 37°C, so it's very hard to cool.


----------



## ttnuagmada

GAN77 said:


> Guys, what average chip-to-water delta do you get on water cooling at 450-500 watts of consumption?


I'm at about a 15-16°C delta with my Strix EK block.


----------



## Sheyster

dr/owned said:


> Unrelated to a 3090 but here's a couple of pictures of this setup:
> 
> Water tank (it's LED-backlit so I can see the water moving/water level), PMP 600, UV sterilizer. The power supply is out of the picture. On the far left in the background is a heat exchanger for the servers that are out of picture to the left. This one loop services my server, router (Untangle), and desktop.
> View attachment 2471363
> View attachment 2471364
> 
> 
> The desktop side of things. Heat exchanger (40 plate, 2000W @ 5C delta) , soundproof box for the D5 pump, and then my desktop in its own soundproof box. I put filters on all my loops because it's way easier to clean those out then to disassemble blocks.
> 
> View attachment 2471365


Nice setup, if I set something like that up my wife would kick me out of the house and into the garage in a New York minute.


----------



## dante`afk

Falkentyne said:


> Do you still have your FE or do you have another card?
> I asked people before if they could run those Superposition custom extreme shaders 4k preset with the 1000W kingpin bios but only one person replied, saying he doesn't have superposition


As per @xtremefunky on the hwluxx forum: no PerfCap power reason, even with the Superposition settings I mentioned, on the XOC KPE BIOS.



reflex75 said:


> Your score: 14 725
> My score: 14 636
> 
> Your average clock: 2 258 MHz
> My average clock: 2 031 MHz
> 
> Your frequency is useless...
> 
> To check the rest of the comparaison (temp, vram...):
> https://www.3dmark.com/compare/pr/687118/pr/565338#





Cholerikerklaus said:


> Then tell us what is the reason for this?


All of these "top scorers", and the 3DMark HOF as well, are using old-ass drivers that in some cases are not even supported and can't be installed right away. I assume the reason for this is that the older the driver, the fewer of the new GPU checks and features get utilized = more FPS for a higher score.

Essentially cheating the system.

Same as the other guy some pages back boasting about his score with "no special cooling", but his card is at 31°C. Yeah buddy, no window mod or balcony.


----------



## Thanh Nguyen

Is the 1000 W BIOS with a 60% power limit safe for daily usage, guys?


----------



## dr/owned

Sheyster said:


> Nice setup, if I set something like that up my wife would kick me out of the house and into the garage in a New York minute.


Yup, it's quite space-intensive, with hoses running through the house. I've seen another guy do a version without heat exchangers: he was able to drill through the floor into the basement below, where he kept his pumps and radiators, but since he didn't use heat exchangers he needed quite a volume of more expensive coolant. The only reason mine is set up this way is because an earlier iteration used water chillers that needed to vent heat out of a window... I dropped that when I concluded that there's no real point in chilling the water, because it doesn't ultimately improve overclocks. I left the bones of the setup intact, though, because it also lets me keep my radiator and fans in a separate room so I don't hear them.

The version that I would recommend for most people is just mounting external radiators. Life is so much better when you don't need to buy small, inefficient ones to fit in a case. My 20"x20" heat exchanger, which is meant for HVAC, only cost like $150 and is equivalent to something like 4x 480 mm radiators.



Thanh Nguyen said:


> Is the 1000 W BIOS with a 60% power limit safe for daily usage, guys?


On a waterblock, 650W is very manageable. On an air cooler? Probably not so much.


----------



## Falkentyne

I think I found out what the vbios chip is on 3080 FE and 3090 FE, courtesy of Elmor. But you guys aren't going to like it.


----------



## SoldierRBT

Testing voltage points in my 3090 KPE in Port Royal

1.000v 2175MHz Max power draw: 488W Score: 15,291


https://www.3dmark.com/pr/702293











1.025v 2205MHz Max power draw: 518W Score: 15,408


https://www.3dmark.com/pr/702708
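A quick way to judge which of the two voltage points above is the better daily setting is score per watt, using the numbers from both runs:

```python
# Efficiency of the two Port Royal runs above (score / peak board power).
runs = {
    "1.000 V / 2175 MHz": (15291, 488),
    "1.025 V / 2205 MHz": (15408, 518),
}
for name, (score, watts) in runs.items():
    print(f"{name}: {score / watts:.1f} pts/W")
```

The extra 25 mV buys roughly 0.8% more score for roughly 6% more power, so the 1.000 V point is clearly the more efficient one; the higher point only makes sense when chasing an absolute score.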


----------



## sultanofswing

SoldierRBT said:


> Testing voltage points in my 3090 KPE in Port Royal
> 
> 1.000v 2175MHz Max power draw: 488W Score: 15,291
> 
> 
> https://www.3dmark.com/pr/702293
> 
> 
> View attachment 2471392
> 
> 
> 1.025v 2205MHz Max power draw: 518W Score: 15,408
> 
> 
> https://www.3dmark.com/pr/702708
> 
> 
> View attachment 2471393


Looks like you got a winner.


----------



## Nizzen

Thanh Nguyen said:


> Is the 1000 W BIOS with a 60% power limit safe for daily usage, guys?


The BIOS is safe; whether you are safe, we don't know


----------



## GQNerd

dante`afk said:


> all of these "top scorers" and also 3dmark HOF are using old ass drivers
> essentially cheating the system
> as the other guy some pages back boasting his score with "no special cooling", but his card is at 31c, yea buddy, no window mod or balcony.


Unless someone is deliberately lowering texture settings via 3rd-party apps and such to score higher, it's not cheating..

Sometimes older drivers and versions of Windows are more stable or offer better performance... and for those not into liquid nitrogen, so what if they put their computer on the balcony or throw it in an ice bath to score a little higher? 

All’s fair in love and overclocking. Lol


----------



## bmgjet

Falkentyne said:


> I think I found out what the vbios chip is on 3080 FE and 3090 FE, courtesy of Elmor. But you guys aren't going to like it.


Is it part of the MCU flash memory?


----------



## MangoMunchaa

Hi everyone, final update for me with the 1000 W BIOS until it's winter over here in Australia. Did this with the Galax SG using the EKWB block in 10°C ambient. If I were able to keep it under 40°C, I think I could've got it stable at 2205, with a peak of 2250 MHz; it seemed to crash as soon as it hit that, no matter where that was in the bench. If anyone is wondering about the power draw, it was reporting 800 W (although that's not what it's actually drawing, obviously, being a 2x8-pin) and I am using 18 AWG cables.









I scored 15 369 in Port Royal


Intel Core i9-9900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Falkentyne

bmgjet said:


> Is it part of the MCU flash memory?


No,










Igor said this is the BIOS chip.
Elmor said he thinks it's "UDFN8, 3x2mm".

I can barely find an in-stock socket adapter for this thing...

BTW, you haven't replied to my private messages... I wasn't nagging you or anything, was I?


----------



## bmgjet

Falkentyne said:


> No,
> View attachment 2471398
> 
> 
> 
> Igor said this is the bios chip.
> Elmor said he thinks it's "UDFN8, 3x2mm"
> 
> I can barely find an in stock socket adapter for this thing...
> 
> BTW You haven't replied back to my private messages...I wasn't nagging you or anything was i?


No way you're getting a clip onto that lol.
There weren't any questions in your PMs to answer.


----------



## Falkentyne

bmgjet said:


> No way you're getting a clip onto that lol.
> There weren't any questions in your PMs to answer.


Yeah, that's why I'm looking for a socket adapter so I can desolder it. But it won't fit any I have, and this IC seems literally impossible to find. I don't even know which pin is pin 1. I could probably hot-glue jumper wires to the leads, hook it up to a 1.8 V adapter, and throw it into the programmer... IF I knew where pin 1 is...

Nvidia makes a board without fuses, then does this... :/


----------



## geriatricpollywog

Falkentyne said:


> Yeah, that's why I'm looking for a socket adapter so I can desolder it. But it won't fit any I have, and this IC seems literally impossible to find. I don't even know which pin is pin 1. I could probably hot-glue jumper wires to the leads, hook it up to a 1.8 V adapter, and throw it into the programmer... IF I knew where pin 1 is...
> 
> Nvidia makes a board without fuses, then does this... :/


Why don’t you sell the FE for a triple 8-pin model?


----------



## bmgjet

Falkentyne said:


> Yeah, that's why I'm looking for a socket adapter so I can desolder it. But it won't fit any I have, and this IC seems literally impossible to find. I don't even know which pin is pin 1. I could probably hot-glue jumper wires to the leads, hook it up to a 1.8 V adapter, and throw it into the programmer... IF I knew where pin 1 is...
> 
> Nvidia makes a board without fuses, then does this... :/


Pin 1 is always where the dot is on top of SMD/SMT chips.


----------



## sultanofswing

Has anyone investigated the Inductors on the FTW3 3090? I am wondering if all the dying cards are due to the inductors exploding from extremely high FPS situations.


----------



## Falkentyne

0451 said:


> Why don’t you sell the FE for a triple 8-pin model?


No need. Not trying to go for world records. 550W 3090 FE is fast enough for games. And I don't want to deal with the hassle and stress of selling it (keep in mind I have no car either, transportation is an issue for me, and my health problems are not good).



sultanofswing said:


> Has anyone investigated the Inductors on the FTW3 3090? I am wondering if all the dying cards are due to the inductors exploding from extremely high FPS situations.


That's the problem. They are NOT dying at high load. They're dying at LOW load!
Multiple cards have died from people playing games like League of Legends and Halo: MCC! Or after black screens while _browsing_!






FTW3 3090 died while playing League. - EVGA Forums


Luckily I still have my GTX 1080 because I was playing league and I heard a tick and my whole screen went out. Looked down and a red light was glowing. RIP. F in the chat for all the busted FTW3 cards. FYI: the PCI connections weren't originally daisy chained under normal operation, I daisy...



forums.evga.com










Two different 3090's have each failed for me! - EVGA Forums


I've had two different 3090 FTW3 Ultra's, and they've each failed on me under similar circumstances within 24 hours after getting them. For the first card, my screen went black when gaming, followed by the card's fans getting very loud (I'm assuming they were at 100%). My PC shutting off after a...



forums.evga.com


----------



## SolarBeaver

xrb936 said:


> Compared to all Strix OC cards, it's average. Compared to all first-batch Strix OC cards, it's far below average.


I see, thanks; so there's basically no point in selling this one to buy a new Strix, because I'd most likely get the same performance out of the new one.
And for non-binned chips, is +100-110 on the core on air average for most 3090 chips regardless of manufacturer, or is the Asus Strix now the worst?


----------



## sultanofswing

Falkentyne said:


> No need. Not trying to go for world records. 550W 3090 FE is fast enough for games. And I don't want to deal with the hassle and stress of selling it (keep in mind I have no car either, transportation is an issue for me, and my health problems are not good).
> 
> 
> 
> That's the problem. They are NOT dying at high load. They're dying at LOW load!
> Multiple cards have died from people playing games like League of Legends and Halo: MCC ! Or after black screens while _browsing_!
> 
> 
> 
> 
> 
> 
> FTW3 3090 died while playing League. - EVGA Forums
> 
> 
> Luckily I still have my GTX 1080 because I was playing league and I heard a tick and my whole screen went out. Looked down and a red light was glowing. RIP. F in the chat for all the busted FTW3 cards. FYI: the PCI connections weren't originally daisy chained under normal operation, I daisy...
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Two different 3090's have each failed for me! - EVGA Forums
> 
> 
> I've had two different 3090 FTW3 Ultra's, and they've each failed on me under similar circumstances within 24 hours after getting them. For the first card, my screen went black when gaming, followed by the card's fans getting very loud (I'm assuming thy were at 100%). My PC shutting off after a...
> 
> 
> 
> forums.evga.com


Yeah, that's what I'm saying. In a lightly loaded game on a setup that isn't configured to limit FPS, the framerate will be extremely high; that's when the inductors vibrate the most, producing coil whine, and if the coils are inferior they may simply be letting go.


----------



## Falkentyne

sultanofswing said:


> Yeah, that's what I'm saying. In a lightly loaded game on a setup that isn't configured to limit FPS, the framerate will be extremely high; that's when the inductors vibrate the most, producing coil whine, and if the coils are inferior they may simply be letting go.


The person playing Halo and the one playing I think GTA5(?) said they were using capped FPS, so it was not uncapped.


----------



## SolarBeaver

Guys, sorry for the newbie question, but this thread is so huge I can't really find the info I need without spending a whole day in it...
I'm wondering if I'll see a significant benefit going to a custom-loop setup with my Strix. My main purpose is gaming (finally having stable 60 fps in CP2077 at 4K/RTX/DLSS Balanced, in particular), not competitive overclocking. Right now I can only get about 2000-2025 MHz stable with fans at 100%, and scores around 21300-21400 graphics in Time Spy and 13800-13900 in Port Royal (stock BIOS, but I guess on air there's no point going 500 W+). So how much extra performance, if any, will I get from cooling this thing properly? Or am I doomed either way with my potato chip?


----------



## Falkentyne

SolarBeaver said:


> Guys, sorry for newbie question, but this thread is so huge, can't really find the info I need without spending whole day in...
> I'm wondering if I will see a significant benefit going to custom loop setup with my strix, my main purpose is to play games, (finally have stable 60fps in CP2077 on 4k/rtx/dlss balanced in particular) and not competitive overclocking, and now I can only get like 2000-2025mhz stable with 100% fans on, and scores like 21300-21400 graphics in TimeSpy and 13800-13900 in PR (stock bios, but I guess on air there's no point going 500w+), so how much extra performance, if any, I will get from cooling this thing properly? Or am I doomed either way with my potato chip?


A custom loop won't help your FPS very much unless you can cool the memory enough to push it past +1000 MHz without performance degradation. A few +15 MHz steps on the core will gain you a couple of FPS, and you'll also gain FPS by not dropping clocks from temps (temperature stabilization around 50C core should give you about +30 MHz of boost). You're still going to be power limited. That's outside my area of expertise, but I do know some people managed to clock the memory higher after greatly improving the VRAM cooling.


----------



## Pepillo

SolarBeaver said:


> Guys, sorry for newbie question, but this thread is so huge, can't really find the info I need without spending whole day in...
> I'm wondering if I will see a significant benefit going to custom loop setup with my strix, my main purpose is to play games, (finally have stable 60fps in CP2077 on 4k/rtx/dlss balanced in particular) and not competitive overclocking, and now I can only get like 2000-2025mhz stable with 100% fans on, and scores like 21300-21400 graphics in TimeSpy and 13800-13900 in PR (stock bios, but I guess on air there's no point going 500w+), so how much extra performance, if any, I will get from cooling this thing properly? Or am I doomed either way with my potato chip?


You get a little more overclocking with water: roughly 30-45 MHz more from about 20°C lower temperatures. And don't forget you can run the fans at a few hundred rpm, nothing like the 3,000 rpm these cards hit at 100%; I don't understand those who say "my graphics card is very quiet." Whether it's worth it or not is a personal decision; for me it's definitely much better. Don't expect many more fps either, but it will be a better graphics card in every way.


----------



## Gebeleisis

dr/owned said:


> Unrelated to a 3090 but here's a couple of pictures of this setup:
> 
> Water tank (it's LED backlit so I can see the water moving/water level), PMP 600, UV sterilizer. The power supply is out of the picture. On the far left in the background is a heat exchanger for the servers that are out of the picture to the left. This one loop services my server, router (Untangle), and desktop.
> View attachment 2471363
> View attachment 2471364
> 
> 
> The desktop side of things. Heat exchanger (40-plate, 2000 W @ 5C delta), soundproof box for the D5 pump, and then my desktop in its own soundproof box. I put filters on all my loops because it's way easier to clean those out than to disassemble blocks.
> 
> View attachment 2471365


Awesome!


----------



## Markus_

hi all,

the Waterforce card is now available in our region and it's completely different from the Xtreme with the air cooler.









AORUS GeForce RTX™ 3090 XTREME 24G Spezifikation | Grafikkarten - GIGABYTE Germany


Entdecke die AORUS Premium- Grafikkarten, ausgestattet mit WINDFORCE Kühlsystem, RGB-Beleuchtung, PCB Coating und VR Features für das beste Gaming- und VR-E...




www.gigabyte.com













AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G Spezifikation | Grafikkarten - GIGABYTE Germany


Entdecke die AORUS Premium- Grafikkarten, ausgestattet mit WINDFORCE Kühlsystem, RGB-Beleuchtung, PCB Coating und VR Features für das beste Gaming- und VR-E...




www.gigabyte.com





3x 8-pin vs 2x 8-pin

Looks like a downgrade for the water-cooled version.

Markus


----------



## iunlock

*Cyberpunk 2077: 9900KS | RTX 3090 | 21:9 - 4K (3840x1600)*

Here's a quick comparison of the FPS on the 3080 vs 3090. The screenshots below are from the 3090. 

Ultra Preset w/ DLSS: *AUTO - 3080 (60 FPS) vs 3090 (80 FPS) ~ Delta 25%*

Ultra Preset w/ DLSS: *QUALITY - 3080 (56 FPS) vs 3090 (70 FPS) ~ Delta 20%*

Ultra Preset w/ DLSS: *OFF - 3080 (35 FPS) vs 3090 (46 FPS) ~ Delta 31.4%*

I picked this specific location on purpose as it is pretty consistent overall with a lot of details on the screen.

Note the resolution that I'm gaming at. This is 21:9 - 4K, which is not very common. Your scaling may vary. 

*Ray Tracing Ultra: Preset*

Only changes:
Film Grain: *OFF*
RT: Reflections *ONLY*
RT Lighting: *ULTRA*
DLSS: *OFF*, *AUTO*, and *QUALITY*

*80 FPS* w/ DLSS: *AUTO*









*70 FPS* w/ DLSS: *QUALITY*









*46 FPS* w/ DLSS: *OFF*









BONUS:
42 FPS - Psycho w/ DLSS: OFF









Since I like to max everything out within reason (i.e. film grain = no bueno for me, which is why it's turned OFF) and play at my widescreen monitor's native resolution, the 3090 is the ideal choice for me over the 3080 when it comes to achieving a 60+ FPS gaming experience at 4K.

However, just to be clear, I'm fully aware how terrible the value is for the 3090 as a gaming GPU lol, especially for 1080p and 1440p gaming. The 3080 is more than adequate for most people.
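Worth noting: the quoted deltas mix bases. The 25% and 20% figures treat the 3090's FPS as the base (how much slower the 3080 is), while 31.4% treats the 3080 as the base (how much faster the 3090 is). A quick sketch, using only the FPS figures from the post above, reproduces both conventions:

```python
# Percent gap between two FPS readings, relative to a chosen base.
# The hypothetical helper name delta_pct is mine, not from the post.
def delta_pct(fps_a, fps_b, base):
    return abs(fps_a - fps_b) / base * 100

print(round(delta_pct(60, 80, base=80), 1))  # 25.0 -> quoted "Delta 25%" (3090 as base)
print(round(delta_pct(56, 70, base=70), 1))  # 20.0 -> quoted "Delta 20%" (3090 as base)
print(round(delta_pct(35, 46, base=35), 1))  # 31.4 -> quoted "Delta 31.4%" (3080 as base)
```

Relative to the slower card, the DLSS Auto gap is actually (80-60)/60 ≈ 33%, so pick one base when comparing across settings.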


----------



## MangoMunchaa

iunlock said:


> *Cyberpunk 2077: 9900KS | RTX 3090 | 21:9 - 4K (3840x1600)*
> 
> Here's a quick comparison of the FPS on the 3080 vs 3090. The screenshots below are from the 3090.
> 
> Ultra Preset w/ DLSS: *AUTO - 3080 (60 FPS) vs 3090 (80 FPS) ~ Delta 25%*
> 
> Ultra Preset w/ DLSS: *QUALITY - 3080 (56 FPS) vs 3090 (70 FPS) ~ Delta 20%*
> 
> Ultra Preset w/ DLSS: *OFF - 3080 (35 FPS) vs 3090 (46 FPS) ~ Delta 31.4%*
> 
> I picked this specific location on purpose as it is pretty consistent overall with a lot of details on the screen.
> 
> Note the resolution that I'm gaming at. This is 21:9 - 4K, which is not very common. Your scaling may vary.
> 
> *Ray Tracing Ultra: Preset*
> 
> Only changes:
> Film Grain: *OFF*
> RT: Reflections *ONLY*
> RT Lighting: *ULTRA*
> DLSS: *OFF *and *AUTO *and *QUALITY
> 
> 80 FPS* w/ DLSS: *AUTO
> View attachment 2471415
> 
> 
> 70 FPS *w/ DLSS: *QUALITY
> View attachment 2471416
> 
> 
> 46 FPS *w/ DLSS:* OFF
> View attachment 2471417
> 
> 
> BONUS:
> 42 FPS - Psycho w/ DLSS: OFF
> View attachment 2471418
> 
> 
> Since I like to max everything out within reason (ie... film grain = no bueno for me, hence why it's turned OFF) and play at the resolution that I am on the wide screen monitor, when it comes to achieving a 60 FPS+ gaming experience at 4K, the 3090 is the ideal choice for me over the 3080.
> 
> However, just to be clear I'm fully aware how terrible the value is for the 3090 as a gaming GPU lol, especially for 1080p and 1440p gaming. The 3080 is more than adequate for most people.*


Thanks for this! This is exactly the reason I got the 3090 over the 3080. Like you said, it's a horrible value proposition, but in the most taxing games it's the difference between hitting 165 fps for my monitor or not, and in the case of Cyberpunk, which is an extreme scenario, between staying sub-60 and maintaining over 60!


----------



## Gebeleisis

iunlock said:


> *Cyberpunk 2077: 9900KS | RTX 3090 | 21:9 - 4K (3840x1600)*
> 
> Here's a quick comparison of the FPS on the 3080 vs 3090. The screenshots below are from the 3090.
> 
> Ultra Preset w/ DLSS: *AUTO - 3080 (60 FPS) vs 3090 (80 FPS) ~ Delta 25%*
> 
> Ultra Preset w/ DLSS: *QUALITY - 3080 (56 FPS) vs 3090 (70 FPS) ~ Delta 20%*
> 
> Ultra Preset w/ DLSS: *OFF - 3080 (35 FPS) vs 3090 (46 FPS) ~ Delta 31.4%*
> 
> I picked this specific location on purpose as it is pretty consistent overall with a lot of details on the screen.
> 
> Note the resolution that I'm gaming at. This is 21:9 - 4K, which is not very common. Your scaling may vary.
> 
> *Ray Tracing Ultra: Preset*
> 
> Only changes:
> Film Grain: *OFF*
> RT: Reflections *ONLY*
> RT Lighting: *ULTRA*
> DLSS: *OFF *and *AUTO *and *QUALITY
> 
> 80 FPS* w/ DLSS: *AUTO
> View attachment 2471415
> 
> 
> 70 FPS *w/ DLSS: *QUALITY
> View attachment 2471416
> 
> 
> 46 FPS *w/ DLSS:* OFF
> View attachment 2471417
> 
> 
> BONUS:
> 42 FPS - Psycho w/ DLSS: OFF
> View attachment 2471418
> 
> 
> Since I like to max everything out within reason (ie... film grain = no bueno for me, hence why it's turned OFF) and play at the resolution that I am on the wide screen monitor, when it comes to achieving a 60 FPS+ gaming experience at 4K, the 3090 is the ideal choice for me over the 3080.
> 
> However, just to be clear I'm fully aware how terrible the value is for the 3090 as a gaming GPU lol, especially for 1080p and 1440p gaming. The 3080 is more than adequate for most people.*


Thank you!
I am running a 3440x1440 monitor, and this is why I got the 3090.


----------



## SolarBeaver

After fiddling with the curve, I've managed to squeeze 14200 out of my Strix in PR and 21500 in Time Spy.








I scored 14 192 in Port Royal


Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 130764 MB, 64-bit Windows 10




www.3dmark.com




Is it considered acceptable on air with no shunt mod? I've seen people here hit 14500+ all the time, but I'm just not sure whether they went through more rigorous efforts to get there.
And seeing it run at a 2040 MHz average, does that mean it's roughly a 180 MHz overclock from stock (1860 MHz rated boost clock)? So was I wrong assuming my Strix won't go past a 100 MHz OC? It's just that when I simply apply +110 core clock in Afterburner I get crashes, and only after fiddling with the curve did I manage to hit this, so is that considered normal behavior on these cards?
Here's my curve, just in case:


----------



## jomama22

dante`afk said:


> as per @xtremefunky on the hwluxx forum, there is no perfcap power reason even with the Superposition settings I mentioned, on the XOC KPE BIOS.
> 
> 
> 
> 
> 
> all of these "top scorers" and also the 3dmark HOF are using old-ass drivers that, in some cases, are not even supported and can't be installed right away. I assume the reason for this is: the older the driver, the worse new GPU features can be utilized = more fps for more score.
> 
> essentially cheating the system
> 
> as the other guy some pages back boasting his score with "no special cooling", but his card is at 31c, yea buddy, no window mod or balcony.


Lol, so I'm cheating the system by using the drivers Windows installs automatically during a fresh Windows install and subsequent Windows updates? It's the Sept. 17th driver/3xxx launch driver. I use it because I'm not going to mess with DDU or driver uninstalls on my bench drive. I can count the number of programs I put on that SSD on one hand.

As far as temperature, here is my setup:
4x 360 mm HWLabs Black Ice GTX Gen 2 rads (yes, the ones from 2012; I cleaned and reused them)
All push/pull with PWM Deltas running at 1700 rpm (max is 3400, but no need for that with such short runs)
That's a Mountain Mods Ascension case from 2012 as well. Basically a test bench that can house my rads too.
The rest you can see in the pics.
I also have a memory waterblock attached to the back of the Strix, which probably helps a tad.









































I am in a basement with no windows. The temperature down here is 66°F/18-19°C because of the aquarium.

If I were outside or near any windows, it would be laughably colder. I live in Maryland, and on Saturday the 26th it never got above 34°F/1°C all day.









There is something wrong with your scores. Look at your clock speeds compared to mine. I was running those Time Spy runs well below your clock speeds and far outpacing you.


----------



## Avacado

jomama22 said:


> Lol, so I'm cheating the system by using the drivers Windows installs automatically during a fresh Windows install and subsequent Windows updates? It's the Sept. 17th driver/3xxx launch driver. I use it because I'm not going to mess with DDU or driver uninstalls on my bench drive. I can count the number of programs I put on that SSD on one hand.
> 
> As far as temperature, here is my setup:
> 4x360 hwlabs gtx black ice gen 2 (yes, the ones from 2012, I cleaned and reused them)
> All push pull with pwm deltas running at 1700rpms (max is 3400 but have no need for that with such short runs)
> That is a mountain mods ascension case from 2012 as well. Basically a test bench I can house my rads in as well.
> The rest you can see in the pics.
> I also have a memory waterblock attached to the back of the strix which probably helps a tad.
> View attachment 2471444
> 
> View attachment 2471445
> 
> View attachment 2471447
> 
> View attachment 2471448
> 
> View attachment 2471449
> 
> 
> I am in a basement with no windows. Temperature down here is 66*F/18-19*C because of the aquarium.
> 
> If I was outside or near any windows, it would be laughably colder. I live in Maryland, and on Saturday the 26th, it was never above 34F/1C all day.
> View attachment 2471450
> 
> 
> There is somthing wrong with your scores. Look at your clockspeeds compared to mine. I was running those timespys well below your clockspeeds and far out pacing you.


That is a Mountain Mods Ascension; how pleased are you with it? I actually placed an order for one over the holidays, but sadly the 140.3 back panel and 720 front aren't available anymore. Sad face.

Looks like you have the Trinity front panel; they had that in place of the 720.


----------



## jomama22

SolarBeaver said:


> I see, thanks; so there's basically no point in selling this one to buy a new Strix, because I'd most likely get the same performance out of the new one.
> And for non-binned chips, is +100-110 on the core on air average for most 3090 chips regardless of manufacturer, or is the Asus Strix now the worst?


I went through 2 Strix cards. The first one was very similar to yours; the second one was much, much better, letting me do +175 in Time Spy benches on air. You could try your luck with a second one if you were up for it, and you could easily sell that one for probably what you paid, tbh. That's exactly what I did.


----------



## jomama22

Avacado said:


> That is a Mountain mods ascension, how pleased with it are you? I actually placed an order for one over the holidays, but sadly the 140.3 back panel and 720 front aren't available anymore, sad face.
> 
> Looks like you have the Trinity front panel, they had that in place of the 720.


I love this case! It's actually the extended Ascension. I've had it forever; I bought it back in 2011/2012. The back panel is the 120.3 with dual PSU and the horizontal motherboard tray. The MB tray is the SR-2 one. Back then I wasn't sure what I would ever do, so I just got the largest tray available lmao. I actually like it because it also separates the top and bottom more, and if you look at the pics, I used the extra tray space to put in 2 Swiftech dual-MCP35X pumps.

If you have the room for this case / don't mind a massive case you can put on the ground (they should come with wheels too, or you may have to order them separately, can't remember; the wheels are very convenient lol), it is awesome. Working on it is a breeze, especially with the horizontal tray.

The acrylic windows scratch very, very easily. My top lid acrylic is destroyed, but that's my own doing from years of abuse. You could always leave the film that comes on it until you are fully done with the build.

The paint is very durable. I think I got the powder-coated black with clear coat. I'd have to try and find my receipt in my email, one sec...


----------



## Avacado

jomama22 said:


> I love this case! It's actually the extended ascension. Iv had it forever, bought it back in 2011/2012. The back panel is the 120.3 with dual psu and horizontal motherboard tray. The MB tray is the sr2 one. Back then, I wasn't sure what I would ever do so I just got the largest tray available lmao. I actually like it because it also separates the top and bottom more and if you look in the pics, I used the extra tray space to put 2 swiftech 2xmcp35x pumps.
> 
> If you have the room for this case/don't mind a massive case you can put on the ground (they should come with wheels too, or you may have to order them separate, can't remember. The wheels are very convenient lol) it is awesome. Working on it is a breeze, especially with the horizontal tray.
> 
> The acrylic windows scratch very very easily. My top lid acrylic is destroyed but that's my own doing from years of abuse. You could always leave the film that come on it on until you are fully done with the build.
> 
> I the paint is very durable. I think I got the powder coated black with clear coat on it. I'd have to try and find my receipt in my email, one sec...
> View attachment 2471456


Ah cool beans, I actually bought 3x480mm rads for the side panel, hopefully I can get it worked out with them. 

P.S. I grew up in the burbs of Bmore (Columbia).


----------



## jomama22

Avacado said:


> Ah cool beans, I actually bought 3x480mm rads for the side panel, hopefully I can get it worked out with them.
> 
> P.S. I grew up in the burbs of Bmore (Columbia).


I do kinda regret not just getting the big 3x480 for the extra cooling capacity, but I think I just wanted to have side windows, if I remember correctly lol.

I grew up in northern Maryland above Bel Air, near the PA line. Middle-of-nowhere Amish country lol.


----------



## pat182

SolarBeaver said:


> After fiddling with the curve, I've managed to squeeze 14200 with my strix @ PR and 21500 time spy
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 192 in Port Royal
> 
> 
> Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 130764 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> Is it considered acceptable on air with no shunt mod? I've seen people here hit 14500+ all the time, but I'm just not sure whether they went through more rigorous efforts to get there.
> And seeing it run at a 2040 MHz average, does that mean it's roughly a 180 MHz overclock from stock (1860 MHz rated boost clock)? So was I wrong assuming my Strix won't go past a 100 MHz OC? It's just that when I simply apply +110 core clock in Afterburner I get crashes, and only after fiddling with the curve did I manage to hit this, so is that considered normal behavior on these cards?
> Here's my curve, just in case:
> View attachment 2471432


It's a bit low; you should be able to do 14.5-14.6k with an optimal OC and near 15k with a BIOS swap.


----------



## dante`afk

jomama22 said:


> There is somthing wrong with your scores. Look at your clockspeeds compared to mine. I was running those timespys well below your clockspeeds and far out pacing you.


I can see that, and that's possibly because, as you said, you have a fresh Windows installation.

I did not reinstall anything after switching from Intel to AMD.

But it's most likely the drivers at this point; there's a reason why everyone up there is using older drivers.


----------



## jomama22

dante`afk said:


> I can see that, and that's possibly as you said you have a fresh windows installation.
> 
> I did not reinstall anything after switching from intel to amd.
> 
> but it's most likely the drivers at this point, there's a reason why everyone up there is using older drivers.


DDU messes stuff up a lot of the time with Nvidia drivers. I don't know why, but it does. I made a whole post in the past about how merely doing a driver uninstall/reinstall with DDU, or installing over Nvidia drivers the normal way, screws with stuff, whether it's benchmarking or games.

It's pretty disingenuous to call people cheaters when your own system is set up in such a bad way. Just because you refuse to actually do a clean install and try it out for yourself doesn't mean all of us that have no issues doing so are somehow beating the system. Hell, we are doing what the majority of PC users would do with a new system build.

Going from Intel to AMD and not reinstalling Windows is just a terrible idea no matter how you want to slice it.

Here's what I'll do: I will install the newest Nvidia driver on this SSD, redo my bench, and see what happens, just for you. If I have to reinstall Windows then whatever, it takes 10 minutes.


----------



## dante`afk

I didn't call you specifically a cheater, but technically the way to get there, even if Futuremark allows it, is questionable. As I said, this version of the drivers can't even be installed manually; only Windows Update does that.

But yeah, thanks for doing the tests again with the newest drivers. It's just SO time-consuming to do a fresh install with how I have everything set up, and despite the scores, which don't make any sense given my clocks, everything runs splendidly.


Btw, what you'll find with the newest drivers, you can push clocks much higher.


----------



## jomama22

dante`afk said:


> I didn't call you specifically a cheater, but technically the way to get there as futuremark allows it, is questionable. As I said this version of drivers can't be even installed manually, only windows updates does that.
> 
> But yea thanks for doing the tests again with newest drivers. It's just SO time consuming to do a fresh install with how I have everything set up, and despite the scores, which don't make any sense if I look at my clocks, everything runs splendid.
> 
> 
> Btw, what you'll find with the newest drivers, you can push clocks much higher.


You can download the Win 10 driver that Windows installs during a fresh install right here:








Geforce Driver Results | NVIDIA





www.nvidia.com


----------



## 414347

pat182 said:


> I like my strix so much , best thing since sliced bread
> View attachment 2471358
> 
> View attachment 2471359


No wonder you like it, look at this thing. It looks great in that case in the vertical position.


----------



## SolarBeaver

jomama22 said:


> I went through 2 strix. The first one was very similar to yours, second one was much much better, letting me do +175 in timespy benches on air. You could try your luck with a second one if you were up for it and could easily sell that one for probably what you paid tbh. That's exactly what I did.


I see, what a gamble this card is atm...
I think I have better chances with the MSI Suprim X though; it should also be a little bit quieter, and I guess it's easier to get a decent one, assuming it's from one of the first batches so they should care about performance, but who knows...



pat182 said:


> its a bit low, you should be able to do 14.5 14.6k with optimal oc and near 15k with a bios swap


I've tried the best I could (but I must admit I'm still pretty bad at this); it just crashes whenever it hits the 2085-2100 MHz range, sometimes lower, even at high voltage. I've also tried undervolting to see how it performs overall, and the results weren't that great either. And on top of it all it's quite noisy coming from a Gaming X Trio, and the cooling isn't that great even at 100%. More than disappointed with this sample...


----------



## AllGamer

Hi everyone.

I'm still in the market for an RTX 3090; they're out of stock in Canada.
But the version I really, really want is the RTX3090-24G-EK.

Does anyone know of any other water-cooled RTX 3090 model besides the RTX3090-24G-EK?


----------



## GAN77

AllGamer said:


> Does anyone know of any other water-cooled RTX 3090 model besides the RTX3090-24G-EK?











AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G Key Features | Graphics Card - GIGABYTE Global


Discover AORUS premium graphics cards, ft. WINDFORCE cooling, RGB lighting, PCB protection, and VR friendly features for the best gaming and VR experience!




www.gigabyte.com


----------



## AllGamer

GAN77 said:


> AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G Key Features | Graphics Card - GIGABYTE Global
> 
> 
> Discover AORUS premium graphics cards, ft. WINDFORCE cooling, RGB lighting, PCB protection, and VR friendly features for the best gaming and VR experience!
> 
> 
> 
> 
> www.gigabyte.com


Nice! I didn't know Gigabyte also released their own version, but I still prefer the design of the ASUS + EK block version.
All my previous water-cooled Nvidia cards used EK blocks as well (MSI SeaHawk EK edition).

But it's good to have an alternative; at least now I can look for both the Asus and Aorus water editions.


----------



## WMDVenum

Avacado said:


> That is a Mountain mods ascension, how pleased with it are you? I actually placed an order for one over the holidays, but sadly the 140.3 back panel and 720 front aren't available anymore, sad face.
> 
> Looks like you have the Trinity front panel, they had that in place of the 720.


I have had a Mountain Mods Ascension BYO since 2014. The case is great. I keep wanting to get something smaller since I no longer have any spinning drives, but I can't get myself to pull the trigger, since the Ascension just works and can easily fit everything I have.


----------



## Sheyster

sultanofswing said:


> Has anyone investigated the Inductors on the FTW3 3090? I am wondering if all the dying cards are due to the inductors exploding from extremely high FPS situations.


Sheesh, that card seems cursed and is a PR nightmare for EVGA. Glad I stayed away. 🤨


----------



## andrvas

Using the KP 1000 W BIOS on a 2x8-pin card, I assume the power readings in Afterburner are wrong? It says the card is drawing 800 W, which I find odd. I haven't had time to do anything more than a quick test; I'm just curious about the readings I'm seeing.


----------



## des2k...

andrvas said:


> Using the KP 1000 W BIOS on a 2x8-pin card, I assume the power readings in Afterburner are wrong? It says the card is drawing 800 W, which I find odd. I haven't had time to do anything more than a quick test; I'm just curious about the readings I'm seeing.


It's -33% for a 2x8-pin, so a reported 800 W is about 536 W on your card.
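If the XOC BIOS assumes three 8-pin rails while the card only has two, the reading is overstated by roughly a third. A minimal sketch of that correction (the flat -33% factor is the thread's rule of thumb, not an exact calibration):

```python
# Rule-of-thumb correction for a 3x8-pin XOC BIOS flashed onto a 2x8-pin card:
# Afterburner's reported draw is overstated by ~33% per the post above.
def actual_draw(reported_w, correction=0.33):
    return reported_w * (1 - correction)

print(round(actual_draw(800), 1))  # 536.0 W, matching the figure above
```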


----------



## Redjester

andrvas said:


> Using the KP 1000 W BIOS on a 2x8-pin card, I assume the power readings in Afterburner are wrong? It says the card is drawing 800 W, which I find odd. I haven't had time to do anything more than a quick test; I'm just curious about the readings I'm seeing.


Is this at 100% slider?


----------



## andrvas

Redjester said:


> Is this at 100% slider?


Yeah, but I only did a short PR run


----------



## des2k...

andrvas said:


> Yeah, but I only did a short PR run


Start RTX Quake for fun, at 100% slider


----------



## Redjester

There's lots of information scattered across the last 30 or so pages about the 1000 W BIOS on a 2x8-pin. How does it compare, tuning-wise, to the shunt mod? I'm running about 15C deltas on a shunted 2x8-pin. Should I be looking at the 1000 W BIOS at ~60% instead? Nothing but a PWR perfcap in Time Spy and Port Royal.


----------



## des2k...

Redjester said:


> Lots of information scattered across the last 30 or so pages about 1000w on a 2x8 pin. Where does this compare to tuneability when compared to the shunt mod? I'm running about 15c deltas on a shunted 2x8. Should I be looking at the 1000w at 60%ish instead? Nothing but pwr cap in timespy and port royal.


The 1000 W BIOS, for example, pulls 640 W in RTX Quake on my 2x8-pin on the stock curve with no offset. What is the limit? Not sure; might be that or higher. I know at 640 W you'll be hitting the VOp/VRel (GPU-Z sensor) limit, so you'll need to use a custom V/F point to go past 640 W.


----------



## ttnuagmada

SolarBeaver said:


> Guys, sorry for newbie question, but this thread is so huge, can't really find the info I need without spending whole day in...
> I'm wondering if I will see a significant benefit going to custom loop setup with my strix, my main purpose is to play games, (finally have stable 60fps in CP2077 on 4k/rtx/dlss balanced in particular) and not competitive overclocking, and now I can only get like 2000-2025mhz stable with 100% fans on, and scores like 21300-21400 graphics in TimeSpy and 13800-13900 in PR (stock bios, but I guess on air there's no point going 500w+), so how much extra performance, if any, I will get from cooling this thing properly? Or am I doomed either way with my potato chip?


Well, I never tested this thing before putting a block on, but I know with my 1080 Ti's I got an extra 30-45mhz or so out of them, mostly from avoiding bin drops as temps went up, since they maxed out in the low 40's. I would imagine it's the same with my Strix. Going by that, it sounds like you may have just gotten a bad chip. That being said, with a good custom loop you won't have to crank the fans up, and imo that alone is enough to go with water cooling. I dumped a bunch of money into a monster loop 2-3 years back, and I can't even tell the thing is turned on, all while getting sub-50C GPU temps at all times.


----------



## ALSTER868

Hey guys, can anyone tell why pin 3 isn't being reported in GPU-Z? It's KPE 520W bios on Strix. During 3D runs I'm only seeing around 360W of total power consumption.


----------



## pat182

ALSTER868 said:


> Hey guys, can anyone tell why pin 3 isn't being reported in GPU-Z? It's KPE 520W bios on Strix. During 3D runs I'm only seeing around 360W of total power consumption.


It's normal; just add 150w to the total.


----------



## ALSTER868

pat182 said:


> its normal, just add 150watt to total


I understand that it's pulling something like 500+W, judging by the stable clocks and higher than usual temps. Is this happening to everybody running this bios on other 3-pin cards?


----------



## J7SC

Markus_ said:


> hi all,
> 
> the Waterforce card is now available in our region, and it's completely different from the Xtreme with the air cooler.
> 
> AORUS GeForce RTX™ 3090 XTREME 24G specification | Graphics cards - GIGABYTE Germany (www.gigabyte.com)
> 
> AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G specification | Graphics cards - GIGABYTE Germany (www.gigabyte.com)
> 
> 
> 
> 
> 3*8pin vs 2*8pin
> 
> Looks like a downgrade for the watercooled one
> 
> Markus


...noticed the same thing, as I have been waiting for the Aorus Xtreme WB release (I have been running dual 2080 Ti Aorus Xtreme WBs since Dec '18, and love them). Unlike in previous gens, the WB doesn't get the top PCB layout with the 3090, though it appears that Bykski has a water block for the 3x8-pin air-cooled model. Perhaps Gigabyte / Aorus will add a 3x8-pin WB model of their own later?





----------



## 7empe

ALSTER868 said:


> I understand that it's pulling something like 500+W due to stable clocks and higher than usually temps. Is that thing happening to everybody with this bios and different 3 pin cards?


Yep. Have the same. Strix OC here.


----------



## des2k...

ALSTER868 said:


> I understand that it's pulling something like 500+W due to stable clocks and higher than usually temps. Is that thing happening to everybody with this bios and different 3 pin cards?


Only on the Strix when cross-flashing: 1000w is actually 1300w. With that huge VRM I bet you'll be able to pull some insane LN2 numbers.


----------



## jomama22

des2k... said:


> only strix when cross flash, 1000w is actually 1300w, with huge VRM I bet you'll be able to pull some insane LN2 numbers


Wait what? Do the pcie slot still consume 100w like it does on the kp or does it only consume the 50w or so the stock strix card does?


----------



## GQNerd

jomama22 said:


> Wait what? Do the pcie slot still consume 100w like it does on the kp or does it only consume the 50w or so the stock strix card does?


I experienced the latter on my (shunted) Strix.. never passed 55w 

not sure where he’s pulling that 1300w # from


----------



## Nizzen

Testing a friend's MSI 3090 Suprim now with the Kingpin 1000w bios. Max load in Port Royal is ~550w (GPU-Z) with 100% power limit and +100 on the voltage slider. The card does 2070mhz max in Port Royal, at ~63-65c.
Original bios is 450w.


----------



## smonkie

Any BIOS recommended for 3090TUF?


----------



## jura11

Nizzen said:


> Testing a friends Msi 3090 suprim now with kingpin 1000w bios. Max load in port royal is ~550w (gpu-z) with 100% powerlimit and +100 on voltage slider. Card does 2070mhz max in port royal. ~ 63-65c.
> Orginal bios is 450w.


It's strange that it pulls just 550W at 100% on the MSI 3090 Suprim. Do you have a wall meter? Can you check how much it pulls from the wall?

Thanks, Jura


----------



## Nizzen

jura11 said:


> Its strange it pulls just 550W at 100% on MSI 3090 Suprim,do you have wall meter? Can you check how much it pulls from wall?
> 
> Thanks, Jura


I will test in a few days. Have a wattmeter in my "benchmark garage"


----------



## jura11

smonkie said:


> Any BIOS recommended for 3090TUF?


The KFA2 390W or Gigabyte 390W BIOS would probably be best, but I think you will lose one of the DP ports or the HDMI port, because Asus is usually only compatible with Asus hahaha 

The other option is to run the KPE XOC BIOS, which is 1000W 

Hope this helps 

Thanks, Jura


----------



## GAN77

Nizzen said:


> Testing a friends Msi 3090 suprim now with kingpin 1000w bios.


I wouldn't risk that. Each power connector has a 20 amp fuse.
240+240+240=720 w
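That arithmetic, as a tiny sketch (the 20 A per fuse and 12 V per 8-pin rail are the figures implied by the post; the PCIe slot's separate 75 W budget is ignored here):

```python
# Hypothetical fuse-headroom check: each 8-pin connector is cited as
# having a 20 A fuse on the 12 V rail, i.e. 240 W per connector.
FUSE_AMPS = 20
RAIL_VOLTS = 12

def max_connector_watts(n_connectors: int) -> int:
    """Total wattage the connector fuses allow before one should blow."""
    return n_connectors * FUSE_AMPS * RAIL_VOLTS

print(max_connector_watts(3))  # 720
```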


----------



## jura11

Nizzen said:


> I will test in a few days. Have a wattmeter in my "benchmark garage"


Okay mate, I will be testing my poor Palit RTX 3090 GamingPro with the KPE XOC BIOS. It won't do 15k hahaha with an offset, so I will try a manual V/F curve 

Hope this helps 

Thanks, Jura


----------



## TFP_FuzZ

Hey everyone, has anyone had any luck getting the EVGA XC3 Ultra to 400W+ without a shunt mod? I am planning on shunt modding eventually, but if a bios flash can get me closer, that is preferred. Everywhere I have searched, it seems only the 370w Gigabyte bios works, and that's only 4w over the stock EVGA bios.


----------



## incognito90

Hey, I'm new so sorry if this is a stupid question. Afterburner and GPU-Z both say I'm at 49 degrees, my temp limit is 91 degrees, yet I am still thermal throttled. What am I missing? 

Running a Gigabyte 3090 Gaming OC on a x570 Aorus Elite Wifi.


----------



## HyperMatrix

incognito90 said:


> Hey, I'm new so sorry if this is a stupid question. Afterburner and GPU-Z both say I'm at 49 degrees, my temp limit is 91 degrees, yet I am still thermal throttled. What am I missing?
> 
> Running a Gigabyte 3090 Gaming OC on a x570 Aorus Elite Wifi.


Did you dismantle your card? 99% of the time this happens after a bad repaste/repad of the card, with improperly sized pads used, or pads falling out and being lost. There are many components on the card that can cause thermal throttling. 

Just as a side note, in your case, if you use the 1000W bios, your card will literally melt/burn out/catch fire because that thermal protection will be disabled. Thermal throttling is your friend. If you didn't open the card and this is happening, it means there was an issue with assembly from the manufacturer. If you're comfortable enough opening it and fixing it yourself, that's an option. Otherwise you can have it swapped at the store or through the manufacturer depending on when you bought it.


----------



## bmgjet

TFP_FuzZ said:


> Hey everyone, has anyone had any luck with the EVGA XC3 Ultra getting 400w + without shunt mod? I am planning on shunt modding eventually but if a bios flash can get me closer then that is preferred. Everywhere I have searched it seems only 370w gigabyte bios works and that only 4w over the stock evga bios.


The KFA2 350-390W bios is the best one for it.
That card has an identical VRM, port and fan layout.

The Gigabyte card has a different VRM; you lose 1 DP and the fans won't run at full speed.

The only bios that will get you over 400W is the 1000W bios. That will get you to 696W, but it will also blow the fuses on the card if you go over 500W.


----------



## Falkentyne

bmgjet said:


> KFA2 350-390W bios is the best one for it.
> That card has identical VRMs,Ports and Fans layout.
> 
> Gigabyte bios has different VRM, you lose 1 DP and fans wont run full speed.
> 
> The only bios that will get you over 400W is the 1000W Bios. That will get you to 696W but it will also blow the fuses on the card if you go over 500W.


I thought someone said that the fuse protection was disabled on that vbios? Did anyone with an evga FTW3 card have the fuses blow when using the 1000W Kingpin bios?


----------



## incognito90

HyperMatrix said:


> Did you dismantle your card? 99% of the time this happens with a bad repaste/repad of the card with improper sized pads used or pads falling out and being lost. You have many components on the card that can cause the thermal throttling.
> 
> Just as a side note, in your case, if you use the 1000W bios, your card will literally melt/burn out/catch fire because that thermal protection will be disabled. Thermal throttling is your friend. If you didn't open the card and this is happening, it means there was an issue with assembly from the manufacturer. If you're comfortable enough opening it and fixing it yourself, that's an option. Otherwise you can have it swapped at the store or through the manufacturer depending on when you bought it.


I have not dismantled it, and I'm using the bios it came with. I have opened a GPU years ago to apply new paste. But this is a rather expensive card! How risky is the process?


----------



## TFP_FuzZ

bmgjet said:


> KFA2 350-390W bios is the best one for it.
> That card has identical VRMs,Ports and Fans layout.
> 
> Gigabyte bios has different VRM, you lose 1 DP and fans wont run full speed.
> 
> The only bios that will get you over 400W is the 1000W Bios. That will get you to 696W but it will also blow the fuses on the card if you go over 500W.


Hmm ok thanks, I will take a look at the KFA2 350-390W bios, I don't want to dabble with a BIOS that can blow out my card haha.


----------



## HyperMatrix

incognito90 said:


> I have not dismantled it, and I'm using the bios it came with. I have opened a GPU years ago to apply new paste. But this is a rather expensive card! How risky is the process?


I would swap it at the store in Canada if you haven't opened it yet. Unfortunately we don't have the same protections enjoyed in the United States with the Magnuson-Moss Warranty Act or in Europe with their amazing consumer protection policies. We're a 3rd world trash country. So if you open the card up for any reason, you can be denied warranty. Even if you opened it to fix a problem that was present from the moment you bought the card. So unless you have an amazing chip that can be clocked high (which may be hard to test with the current thermal throttling problem) I'd recommend swapping it out.


----------



## bmgjet

Falkentyne said:


> I thought someone said that the fuse protection was disabled on that vbios? Did anyone with an evga FTW3 card have the fuses blow when using the 1000W Kingpin bios?


Lol, how would you disable a fuse? It's a physical bit of wire in a ceramic package that sits between the plug's power plane and the shunt's input power plane.
All power has to flow through them.


----------



## HyperMatrix

Falkentyne said:


> I thought someone said that the fuse protection was disabled on that vbios? Did anyone with an evga FTW3 card have the fuses blow when using the 1000W Kingpin bios?


Overcurrent protection (OCP) is disabled. Fuses can't be _disabled_. Well...unless you bridge them.


----------



## Falkentyne

bmgjet said:


> Lol how would you disable a fuse. Its a phycial bit of wire in a ceramic package thats between plugs power plane and the shunts input power plane.
> All power has to flow though them.


I stand relieved, sir


----------



## incognito90

HyperMatrix said:


> I would swap it at the store in Canada if you haven't opened it yet. Unfortunately we don't have the same protections enjoyed in the United States with the Magnuson-Moss Warranty Act or in Europe with their amazing consumer protection policies. We're a 3rd world trash country. So if you open the card up for any reason, you can be denied warranty. Even if you opened it to fix a problem that was present from the moment you bought the card. So unless you have an amazing chip that can be clocked high (which may be hard to test with the current thermal throttling problem) I'd recommend swapping it out.


Thanks for all the information! I will give that a try, unfortunately there are no cards in stock at any Canada Computers stores.


----------



## bmgjet

HyperMatrix said:


> Overcurrent protection (OCP) is disabled. Fuses can't be disabled.


On the Kingpins OCP is still enabled. You have to go into the Classified tool to disable it.
It then does another check of the bios switch position, which can be read over I2C.



Code:


I2C connection to 56:01h.
Send unlock command 0x0E, data 0xE5.
Then check the status with command 0x0E, data 0xE9.

Then write command 0xBE, data 0x01.
Wait 1 sec.
Read register 0xBA, which holds the bios switch position:
0 = Normal
1 = OC
2 = LN2

It will only allow you to disable OCP and the resets if you have the switch in the LN2 position.
Here's a modded version that unlocks every feature in the Classified tool. The Test button opens a form that lets you send custom I2C commands and such.








2.13 MB file on MEGA (mega.nz)
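The sequence above can be modelled against a fake bus to check the logic before touching hardware. In this sketch only the command/register numbers (0x0E/0xE5 unlock, 0xBE/0x01 refresh, 0xBA switch register) come from the post; everything about the fake device's behaviour is an assumption for illustration, and real access would go through the Classified tool, not this class:

```python
import time

# Switch-position values reported in register 0xBA, per the post above.
SWITCH = {0: "Normal", 1: "OC", 2: "LN2"}

class FakeKPEBus:
    """Hypothetical stand-in for the card's I2C target."""
    def __init__(self, switch_pos: int):
        self.regs = {0xBA: switch_pos}
        self.unlocked = False
    def write(self, cmd: int, data: int) -> None:
        if cmd == 0x0E and data == 0xE5:      # unlock command
            self.unlocked = True
        elif cmd == 0xBE and not self.unlocked:
            raise PermissionError("send the unlock command first")
    def read(self, reg: int) -> int:
        return self.regs[reg]

def read_bios_switch(bus) -> str:
    bus.write(0x0E, 0xE5)   # unlock (status check via 0x0E/0xE9 omitted)
    bus.write(0xBE, 0x01)   # ask the device to latch the switch position
    time.sleep(0.01)        # the post says wait 1 sec; shortened here
    return SWITCH[bus.read(0xBA)]

print(read_bios_switch(FakeKPEBus(2)))  # LN2
```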


----------



## HyperMatrix

bmgjet said:


> On the kingpins OCP is still enabled. You have to go into the classified tool to disable it.
> It then does another check for the bios switch position which can be read with I2C.
> 
> 
> 
> Code:
> 
> 
> I2C connection to 56:01h.
> Sends unlock command 0x0E Data 0xE5.
> Then returns status by checking status with command 0x0E Data 0xE9
> 
> Then write command 0xBE Data 0x01
> Wait 1 sec
> Reads register 0xBA
> Which has bios switch data.
> 0 = Normal
> 1 = OC
> 2 = LN2
> 
> It will only allow you to disable it and the resets if you have it on LN2 position.
> Heres a modded version that unlocks every feature in classified tool. Test button opens up a form that lets you send custom I2C commands and stuff.
> 
> 
> 
> 
> 
> 
> 
> 
> 2.13 MB file on MEGA (mega.nz)
> 
> 
> 
> 
> 
> View attachment 2471526


This is one of the reasons I love you. ❤


----------



## HyperMatrix

incognito90 said:


> Thanks for all the information! I will give that a try, unfortunately there are no cards in stock at any Canada Computers stores.


If you weren’t able to swap it in store and your only option was a return, I may have an FTW3 Hybrid for sale if you were interested. A friend of a friend was interested in it and will let me know tonight if they want it or not. Otherwise it’ll be available. Pretty decent bin too. 0.95v for 2055MHz. 1V for 2130MHz (barely stable but indicator of what it could do under water).


----------



## SolarBeaver

SolarBeaver said:


> After fiddling with the curve, I've managed to squeeze 14200 with my strix @ PR and 21500 time spy
> 
> 
> 
> 
> 
> 
> 
> 
> "I scored 14 192 in Port Royal" (Intel Core i9-7980XE, NVIDIA GeForce RTX 3090 x 1, 130764 MB, 64-bit Windows 10) www.3dmark.com
> 
> 
> 
> 
> Is it considered acceptable on air with no shunt mod? I've seen people here hit 14500+ all the time, but I'm just not sure if they went through more rigorous efforts to get there.
> And seeing it run @ 2040mhz average, does it mean it's roughly a 180mhz overclock from stock (1860 base boost clock)? So was I wrong assuming my strix won't go past a 100mhz OC? It's just that when you apply 110+ core clock in afterburner I get crashes, and only after fiddling with the curve did I manage to hit this, so is this considered normal behavior on those cards?


So I've dug up my Trio Gaming X, which I almost sold, and ran some tests (didn't have time before, because the Strix arrived and I was so sure it would be much better), and here it is:








"I scored 14 525 in Port Royal" (Intel Core i7-6700K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) www.3dmark.com




I didn't even fiddle with the curve, just +135 core +1300 mem, and here we go, my best score yet. Pretty sure I can squeeze 14700, maybe even 14800, by adjusting the curve, and probably 15k+ with the 520 bios. Cooling also seems better, with the fans having a more pleasant sound as a bonus. Testing was on a different rig though, not sure if it matters; will check the Strix on this one tomorrow.

So for now I guess I'll just sell the Strix and keep the Trio, without any more gambling on getting a better chip with a new Strix or Suprim X; this one is decent enough I think.

Also, is running the 480w or 520w bios 24/7 on the Trio considered safe? I mean it has weaker PCB components, but from some brief surfing through the thread it seems to be adequate; won't hurt asking to make sure though.

p.s. It's still kind of disheartening ditching this beautiful Strix; I have to remind myself that performance is more important, every second 😂


----------



## geriatricpollywog

bmgjet said:


> On the kingpins OCP is still enabled. You have to go into the classified tool to disable it.
> It then does another check for the bios switch position which can be read with I2C.
> 
> 
> 
> Code:
> 
> 
> I2C connection to 56:01h.
> Sends unlock command 0x0E Data 0xE5.
> Then returns status by checking status with command 0x0E Data 0xE9
> 
> Then write command 0xBE Data 0x01
> Wait 1 sec
> Reads register 0xBA
> Which has bios switch data.
> 0 = Normal
> 1 = OC
> 2 = LN2
> 
> It will only allow you to disable it and the resets if you have it on LN2 position.
> Heres a modded version that unlocks every feature in classified tool. Test button opens up a form that lets you send custom I2C commands and stuff.
> 
> 
> 
> 
> 
> 
> 
> 
> 2.13 MB file on MEGA (mega.nz)
> 
> 
> 
> 
> 
> View attachment 2471526


Thanks! Can you explain what the unlocked features do?

I have a KPE and I flashed the 1000w bios to the “Normal” switch position. Should it be flashed to the “LN2” position or does it not matter?


----------



## Sheyster

Miguelios said:


> I experienced the latter on my (shunted) Strix.. never passed 55w
> 
> not sure where he’s pulling that 1300w # from


I think he's basing it off the bogus total TDP percentage being reported. Since pin 3 isn't registering anything, the TDP % is reported too low, which on paper should allow more than 1000w. Getting anywhere near that much wattage/heat would require voltage that would melt the card, so it's not practical by any means.


----------



## TFP_FuzZ

bmgjet said:


> KFA2 350-390W bios is the best one for it.
> That card has identical VRMs,Ports and Fans layout.
> 
> Gigabyte bios has different VRM, you lose 1 DP and fans wont run full speed.
> 
> The only bios that will get you over 400W is the 1000W Bios. That will get you to 696W but it will also blow the fuses on the card if you go over 500W.


Hmm ok, so I flashed the bios successfully, however now Port Royal crashes instantly, with no overclock yet. Everything else seems to be working; I can see the 390w max now.


----------



## incognito90

HyperMatrix said:


> I would swap it at the store in Canada if you haven't opened it yet. Unfortunately we don't have the same protections enjoyed in the United States with the Magnuson-Moss Warranty Act or in Europe with their amazing consumer protection policies. We're a 3rd world trash country. So if you open the card up for any reason, you can be denied warranty. Even if you opened it to fix a problem that was present from the moment you bought the card. So unless you have an amazing chip that can be clocked high (which may be hard to test with the current thermal throttling problem) I'd recommend swapping it out.


Any chance my memory is actually getting hot causing a legitimate throttle? Is there a way to monitor gpu memory temps?


----------



## Sheyster

bmgjet said:


> On the kingpins OCP is still enabled. You have to go into the classified tool to disable it.
> It then does another check for the bios switch position which can be read with I2C.


Very good to know this! Thanks.


----------



## jomama22

Miguelios said:


> I experienced the latter on my (shunted) Strix.. never passed 55w
> 
> not sure where he’s pulling that 1300w # from


So even with your shunt, the 1000w bios only pulls 55w through the slot? Do you know what the full card power draw was at the time? 

Mine's shunted as well, and I'm just trying to figure out how much the pcie slot would pull with it shunted and the 1000w bios.


----------



## des2k...

HyperMatrix said:


> If you weren’t able to swap it in store and your only option was a return, I may have an FTW3 Hybrid for sale if you were interested. A friend of a friend was interested in it and will let me know tonight if they want it or not. Otherwise it’ll be available. Pretty decent bin too. 0.95v for 2055MHz. 1V for 2130MHz (barely stable but indicator of what it could do under water).


Nice, my Zotac holds TE GT2 at 950mv 2040.
Not far from yours; it pulls 520w.

From *1995mhz 893mv* I needed *+7mv for 2010*, *+32mv for 2025* and *+57mv for 2040*.
That's where I stopped, as the mv increase per bin didn't make sense  

The average undervolt for 3080/3090 are:
2000 987mv
1950 931mv
1900 887mv
1850 850mv
1800 806mv

If you have a couple of bins higher per mv, your card is already pretty good.
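One way to use that table: interpolate the average curve and see how far above it your own card sits at a given voltage. A rough sketch, where linear interpolation between the listed points is my assumption, not des2k...'s method:

```python
# des2k...'s average undervolt points for 3080/3090, as (mV, MHz) pairs.
AVERAGE_UV = [(806, 1800), (850, 1850), (887, 1900), (931, 1950), (987, 2000)]

def average_clock_at(mv: float) -> float:
    """Linearly interpolate the average stable clock at a given voltage."""
    pts = sorted(AVERAGE_UV)
    if mv <= pts[0][0]:
        return float(pts[0][1])
    if mv >= pts[-1][0]:
        return float(pts[-1][1])
    for (v0, c0), (v1, c1) in zip(pts, pts[1:]):
        if v0 <= mv <= v1:
            return c0 + (c1 - c0) * (mv - v0) / (v1 - v0)

# e.g. a card holding 2040 MHz at 950 mV, as in the post above:
print(2040 - average_clock_at(950))  # roughly 73 MHz above the average curve
```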


----------



## bmgjet

TFP_FuzZ said:


> Hmm ok so I flashed the drivers successfully however now port royal crashes instantly with no overclock yet. Everything else seems to be working, I can see the 390w max now


Do a DDU to completely uninstall old drivers and reinstall them.
Also reinstall any overclocking software.




0451 said:


> Thanks! Can you explain what the unlocked features do?
> 
> I have a KPE and I flashed the 1000w bios to the “Normal” switch position. Should it be flashed to the “LN2” position or does it not matter?


It should be on the LN2 bios position if you want it to fully work.
Vince said that too in the stream with GN a few weeks back.




incognito90 said:


> Any chance my memory is actually getting hot causing a legitimate throttle? Is there a way to monitor gpu memory temps?


A GPU PerfCap reason of "Therm" means the VRAM is overheating.
Only aftermarket sensors, like the ICX3 ones on EVGA cards, can give you an idea of the temps, but even that's not accurate, since the sensors are only near the vram, not reading the temps inside the modules.


----------



## des2k...

Miguelios said:


> I experienced the latter on my (shunted) Strix.. never passed 55w
> 
> not sure where he’s pulling that 1300w # from


Well, on a Strix, if you don't run an Asus vbios your pin 3 will report 0 watts, so technically it's +33% power over the bios limit, assuming no limits elsewhere.


----------



## HyperMatrix

incognito90 said:


> Any chance my memory is actually getting hot causing a legitimate throttle? Is there a way to monitor gpu memory temps?


Always a chance. You can try underclocking your memory by -500MHz and see if it still happens. I’d be very concerned if you’re getting vram overheating to the point of thermal throttling if you’re not overclocking it. Unless there’s a serious issue with airflow in your case.


----------



## HyperMatrix

des2k... said:


> nice, my zotac holds TE gt2 at 950mv 2040
> Not far from yours, pulls 520w
> 
> From* 1995mhz 893mv* I needed *+7mv for 2010*, *+32mv for 2025* and *+57mv for 2040*.
> That's were I stopped, as mv increase per bin didn't make sense
> 
> The average undervolt for 3080/3090 are:
> 2000 987mv
> 1950 931mv
> 1900 887mv
> 1850 850mv
> 1800 806mv
> 
> If you have a couple of bins higher per mv, your card is already pretty good.


Yeah it’s definitely a decent bin. I’ve seen better but this isn’t bad. Was able to hit 2205MHz with 80% GPU load and 2220MHz at 40-60% load. My friend’s FTW3 Hybrid can do 2130MHz at 0.975v which in my opinion is incredible. But this card definitely has room for massive waterblock/shunt gains. I ended up getting an equally good kingpin card from a friend so decided to sell this off.


----------



## TFP_FuzZ

bmgjet said:


> Do a DDU to completely uninstall old drivers and reinstall them.
> Also reinstall any overclocking software.


Thanks, I should have thought of that, my bad! Working great so far, hitting my old Port Royal scores without pushing it too far yet =D


----------



## incognito90

HyperMatrix said:


> If you weren’t able to swap it in store and your only option was a return, I may have an FTW3 Hybrid for sale if you were interested. A friend of a friend was interested in it and will let me know tonight if they want it or not. Otherwise it’ll be available. Pretty decent bin too. 0.95v for 2055MHz. 1V for 2130MHz (barely stable but indicator of what it could do under water).


Thanks for the offer, that would be complicated unfortunately. The university has paid for this card for my AI PhD research. I would need approval to exchange for a different model, and second hand would be tough


----------



## Gandyman

Hey guys,

I have an inno3D iChill 3090 (as it was the only 30 series I could buy before Cyberpunk) and ... well, the 3090 is an interesting card I guess. This is my first non-Strix-level card, thanks to low availability. I have a few questions for you guys who would understand this way better than myself.

A: This card seems incredibly power limited; why are there so many models with only 2 8-pins? Running at 75% power limit and -300 on the core, or 110% power and +300 core, the boost clock stays exactly the same: 18xx - 19xx depending on temperature.

B: Does this mean I can undervolt without losing any performance? Or, because the card is cooler with less voltage (this cheap Chinese cooler leaves a lot to be desired acoustically), possibly even boost higher or more consistently?

C: I've worked in the industry for almost 2 decades and heard the term 'undervolt' many times, but I've never done it to a GPU. Is there a quick guide or link to undervolting Ampere that anyone can recommend?

D: Is there a way to make the boost clock stable (even if it's lower)? I would prefer a constant 1800mhz rather than it starting at 1910, making my gpu so hot the fans ramp up and get super loud, then lowering to 1810, fans spinning down due to less heat output, boosting back to 1910, heating up due to the extra 100mhz... and so on and so on.

E: I've had many GPUs over the years, as I run a small local PC store. I upgrade every generation (sometimes multiple times, e.g. 80 then Ti) and never before have I seen such odd GPU behaviour. From 2080ti to 3090, some games like AC:V gave me an extra 5 - 10 fps, while Deus Ex went from 85ish to 140ish. Sometimes lowering settings gives no fps increase: Low to Ultra in CP2077 @ 1440p gives me the exact same fps (100 - 120 depending on area), and going from quality to performance DLSS gives no FPS increase either. Is this due to Ampere only thriving when loaded to the brim @ 4k? Or my cheap-branded card with 2 8-pins and a mediocre cooler just being gimped? Or something wrong with my system? (10900k @ 5.0 + 32g 3600 cl16)

Edit - one more Q came to mind 

F: Would selling this one on and getting a 3090 Strix (or Strix-class) fix a lot of the strange boost behaviour? (I'm sure it would fix the lawnmower-sounding cooler.) Or is this a limitation of nVidia engineering, gimping their own flagship with power limits for some reason?
Thanks to all who read and/or answer, and happy new year to all

B


----------



## GQNerd

jomama22 said:


> So even with your shunt, the 1000w bios is only pulling 55w? Do you know what the full card power draw was at that time?
> Mines shunted as well and just trying to figure out how much the pcie slot would pull having it shunted and using the 1000w bios





des2k... said:


> well Strix, if you don't run asus vbios your pin3 will report 0watts, so technically, it's +33% power over the bios limit assuming no limits elsewhere.


I stacked 15mOhm resistors.
_***These are after multiplying GPU-Z's #'s by 1.33, based on bmgjet's calculator_

Pin 1 and 2: 155-162w (+ ~155 for pin 3)
Pcie: 52-55w
Board total power draw: 530-540w

I'm only getting an extra 20w overall vs running the 520w KPE bios on the Strix.

Side Note: I've pushed my KingPin card to 640w so far
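For anyone wondering where the 1.33 multiplier comes from: stacking a resistor on a shunt puts the two in parallel, so the controller sees less voltage drop and under-reports, and the readings get scaled back up by stock/effective resistance. A worked sketch, assuming 5 mOhm stock shunts (common on these boards, but an assumption here):

```python
# Shunt-stack correction: parallel resistance drops, readings scale down,
# so real power = reported power * (stock / effective).
STOCK_MOHM = 5.0     # assumed stock shunt value
STACKED_MOHM = 15.0  # resistor stacked on top, per the post above

def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

effective = parallel(STOCK_MOHM, STACKED_MOHM)  # 3.75 mOhm
correction = STOCK_MOHM / effective

print(round(correction, 2))  # 1.33, matching bmgjet's calculator
```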


----------



## Thanh Nguyen

Any waterblock for ftw3 yet? Alphacool will release it in 2-3weeks but I want it now.


----------



## jomama22

Miguelios said:


> I stacked 15mOhm resistors.
> _***These are after multiplying GPU-Z's #'s by 1.33 based on bmgjets' calculator_
> 
> Pin 1 and 2: 155-162w (+ ~155 for pin 3)
> Pcie: 52-55w
> Board total power draw: 530-540w
> 
> I'm only getting an extra 20w overall vs running the 520w KPE bios on the Strix.
> 
> Side Note: I've pushed my KingPin card to 640w so far


Interesting, I suppose the lowered power draw from the pcie slot on the strix is hardware-modded into the card then; not what I would have expected.

I feel safe then throwing the kpe bios on there with my 8mohm shunts. I was just gonna stack another 8mohm on them when I open the card up to solder in the evc2, but maybe I won't now.

Thanks!


----------



## truehighroller1

Nizzen said:


> Testing a friends Msi 3090 suprim now with kingpin 1000w bios. Max load in port royal is ~550w (gpu-z) with 100% powerlimit and +100 on voltage slider. Card does 2070mhz max in port royal. ~ 63-65c.
> Orginal bios is 450w.



I managed to pull 760 watts in Fire Strike Extreme on my Suprim X 3090


----------



## GQNerd

jomama22 said:


> Interesting, I suppose the lowered power draw from the pcie on the strix is hardware modded into the card then, not what I would of expected.
> Feel safe then to throw the kpe bios on there then with my 8mohm shunts. Was just gonna stack another 8mohm on them when I open the card up to solder in the evc2, but maybe I won't now.
> Thanks!


No prob.
Yes, that's what it seems like. The extra 20w I mentioned is coming from the 8 pins


----------



## dante`afk

Gandyman said:


> Hey guys,
> 
> I have an Inno3D iChill 3090 (as it was the only 30-series card I could buy before Cyberpunk) and ... well, the 3090 is an interesting card, I guess. This is my first non-Strix-level card, thanks to low availability. I have a few questions for you guys who understand this way better than I do.
> 
> A: This card seems incredibly power limited; why are there so many models with only two 8-pins? Running at 75% power limit and -300 on the core, or 110% power and +300 core, the boost clock stays exactly the same: 18xx - 19xx depending on temperature.
> 
> B: Does this mean I can undervolt without losing any performance? Or, because the card is cooler with less voltage (this cheap Chinese cooler leaves a lot to be desired acoustically), possibly even boost higher or more consistently?
> 
> C: I've worked in the industry for almost two decades and heard the term 'undervolt' many times, but I've never done it to a GPU. Is there a quick guide or link to undervolting Ampere that anyone can recommend?
> 
> D: Is there a way to make the boost clock stable (even if it's lower)? I would prefer a constant 1800MHz rather than it starting at 1910, making my GPU so hot the fans ramp up and get super loud, dropping to 1810, the fans spinning down due to less heat output, boosting back to 1910, heating up due to the extra 100MHz ... and so on and so on.
> 
> E: I've had many GPUs over the years, as I run a small local PC store. I upgrade every generation (sometimes multiple times, e.g. 80 then Ti) and never before have I seen such odd GPU behaviour. From the 2080 Ti to the 3090, some games like AC:V gave me an extra 5-10 fps, while Deus Ex went from 85ish to 140ish. Sometimes lowering settings gives no fps increase: Low to Ultra in CP2077 @ 1440p gives me the exact same fps (100-120 depending on area), and Quality vs. Performance DLSS gives no fps increase either. Is this due to Ampere only thriving when loaded to the brim @ 4K? Or my cheap-brand card with two 8-pins and a mediocre cooler just being gimped? Or something wrong with my system? (10900K @ 5.0 + 32GB 3600 CL16)
> 
> Edit - one more Q came to mind
> 
> F: Would selling this one on and getting a 3090 Strix (or Strix-class card) fix a lot of the strange boost behaviour? (I'm sure it would fix the lawnmower-sounding cooler.) Or is this a limitation of NVIDIA engineering, gimping their own flagship with power limits for some reason?
> Thanks to all who read and/or answer, and happy new year to all
> 
> B


Just shunt mod it and slap the XOC bios on, and it's a good card.



Thanh Nguyen said:


> Any waterblock for the FTW3 yet? Alphacool will release one in 2-3 weeks but I want it now.


Optimus has one for $400.
But if you can wait, get the best one, which is from Watercool.


----------



## Sheyster

des2k... said:


> The average undervolt for 3080/3090 are:
> 2000 987mv
> 1950 931mv
> 1900 887mv
> 1850 850mv
> 1800 806mv


What is the source of these averages? Also, you skipped 950mv, which is a popular undervolt.

My card is rock solid stable at 2055/950mv in benchmarks and Warzone.


----------



## long2905

Gandyman said:


> Hey guys,
> 
> I have an Inno3D iChill 3090 (as it was the only 30-series card I could buy before Cyberpunk) and ... well, the 3090 is an interesting card, I guess. This is my first non-Strix-level card, thanks to low availability. I have a few questions for you guys who understand this way better than I do.
> 
> A: This card seems incredibly power limited; why are there so many models with only two 8-pins? Running at 75% power limit and -300 on the core, or 110% power and +300 core, the boost clock stays exactly the same: 18xx - 19xx depending on temperature.
> 
> B: Does this mean I can undervolt without losing any performance? Or, because the card is cooler with less voltage (this cheap Chinese cooler leaves a lot to be desired acoustically), possibly even boost higher or more consistently?
> 
> C: I've worked in the industry for almost two decades and heard the term 'undervolt' many times, but I've never done it to a GPU. Is there a quick guide or link to undervolting Ampere that anyone can recommend?
> 
> D: Is there a way to make the boost clock stable (even if it's lower)? I would prefer a constant 1800MHz rather than it starting at 1910, making my GPU so hot the fans ramp up and get super loud, dropping to 1810, the fans spinning down due to less heat output, boosting back to 1910, heating up due to the extra 100MHz ... and so on and so on.
> 
> E: I've had many GPUs over the years, as I run a small local PC store. I upgrade every generation (sometimes multiple times, e.g. 80 then Ti) and never before have I seen such odd GPU behaviour. From the 2080 Ti to the 3090, some games like AC:V gave me an extra 5-10 fps, while Deus Ex went from 85ish to 140ish. Sometimes lowering settings gives no fps increase: Low to Ultra in CP2077 @ 1440p gives me the exact same fps (100-120 depending on area), and Quality vs. Performance DLSS gives no fps increase either. Is this due to Ampere only thriving when loaded to the brim @ 4K? Or my cheap-brand card with two 8-pins and a mediocre cooler just being gimped? Or something wrong with my system? (10900K @ 5.0 + 32GB 3600 CL16)
> 
> Edit - one more Q came to mind
> 
> F: Would selling this one on and getting a 3090 Strix (or Strix-class card) fix a lot of the strange boost behaviour? (I'm sure it would fix the lawnmower-sounding cooler.) Or is this a limitation of NVIDIA engineering, gimping their own flagship with power limits for some reason?
> Thanks to all who read and/or answer, and happy new year to all
> 
> B


iChill X4 user here. I'm currently using the XOC 1000W vbios with an undervolt of [email protected]
The stock air cooler runs very hot and the factory paste job is bad, very bad. After repasting with Thermalright TFX things improved considerably, but still not as good as other top-tier cards. The max PR score I can get out of the card on air with the XOC bios is 14300.

The strange boost behavior is the same on every card. To get consistent boost and performance you have to tweak the voltage curve. A good air cooler or waterblock will help immensely.

If you put the card under water most of the issues go away. Get the Alphacool block if you can, as that is what they used for the integrated-waterblock Frostbite version.

I plan to finish gathering numbers and make a user review of the card on YouTube, then sell it for a Strix or a Gaming X Trio if I can't find a Strix at a reasonable price.

Let me know if you have any more questions.


----------



## Falkentyne

long2905 said:


> iChill X4 user here. I'm currently using the XOC 1000W vbios with an undervolt of [email protected]
> The stock air cooler runs very hot and the factory paste job is bad, very bad. After repasting with Thermalright TFX things improved considerably, but still not as good as other top-tier cards. The max PR score I can get out of the card on air with the XOC bios is 14300.
> 
> The strange boost behavior is the same on every card. To get consistent boost and performance you have to tweak the voltage curve. A good air cooler or waterblock will help immensely.
> 
> If you put the card under water most of the issues go away. Get the Alphacool block if you can, as that is what they used for the integrated-waterblock Frostbite version.
> 
> I plan to finish gathering numbers and make a user review of the card on YouTube, then sell it for a Strix or a Gaming X Trio if I can't find a Strix at a reasonable price.
> 
> Let me know if you have any more questions.


How do you like the TFX? Very good and long-lasting paste (very durable), right? 
The highest you can get is 14300??

This is a 2x8-pin board, so max power will be limited to 66% of normal.
Can you set your TDP to 85%? That will give you about 500-525W of total power. Then try +105 core and +250 memory offsets (do not undervolt), and test your Port Royal score.


----------



## long2905

Falkentyne said:


> How do you like the TFX? Very good and long-lasting paste (very durable), right?
> The highest you can get is 14300??
> 
> This is a 2x8-pin board, so max power will be limited to 66% of normal.
> Can you set your TDP to 85%? That will give you about 500-525W of total power. Then try +105 core and +250 memory offsets (do not undervolt), and test your Port Royal score.


The paste itself is very thick and hard to spread, and I failed once or twice on my CPU before moving on to the GPU. But after laying an X-shaped line of paste on the die and letting it spread and cure for a while, the difference compared to, say, MX4 is obvious. I can't imagine going back lol. It does take some learning to get right.

I have always used the voltage curve to set the boost clock, and the best I can do is 14300 in PR with the Cyberpunk driver. Haven't updated to the latest Minecraft one yet.

The most power draw I saw from HWiNFO and FPS Monitor is 850W, so subtract about 30% from that I guess. The stock air cooler for this card is pretty bad, and it is only slightly less bad with the TFX paste.

When running PR, I already put fans at 100%, disable the ShadowPlay overlay and GPU acceleration in Windows 10, and close most unrelated processes. I can get a RAM offset of 1100 max in PR and complete the run without problems. This will artifact in games though, so I have to back it down to 1050 for testing. Anything else I can try? If I just set a normal offset on the core clock, it will fluctuate wildly along with voltage, and I will get a system reboot as well.


----------



## Falkentyne

long2905 said:


> The paste itself is very thick and hard to spread, and I failed once or twice on my CPU before moving on to the GPU. But after laying an X-shaped line of paste on the die and letting it spread and cure for a while, the difference compared to, say, MX4 is obvious. I can't imagine going back lol. It does take some learning to get right.
> 
> I have always used the voltage curve to set the boost clock, and the best I can do is 14300 in PR with the Cyberpunk driver. Haven't updated to the latest Minecraft one yet.
> 
> The most power draw I saw from HWiNFO and FPS Monitor is 850W, so subtract about 30% from that I guess. The stock air cooler for this card is pretty bad, and it is only slightly less bad with the TFX paste.
> 
> When running PR, I already put fans at 100%, disable the ShadowPlay overlay and GPU acceleration in Windows 10, and close most unrelated processes. I can get a RAM offset of 1100 max in PR and complete the run without problems. This will artifact in games though, so I have to back it down to 1050 for testing. Anything else I can try? If I just set a normal offset on the core clock, it will fluctuate wildly along with voltage, and I will get a system reboot as well.


Try just using the basic offset instead of the voltage curve. Try a +105 offset (and +250 to +500 memory). I have found from observation that if you change the V/F curve manually and don't make it perfect, your performance can actually go -down-, as the target clock's performance seems to be based on the V/F points before it, for some reason.

I've been telling everyone to use the diagonal X method for a while. It seems some people don't listen, keep trying to spread it, and get horrible results. The X method works very well though!
Is "Thermagic TF-EX" the same paste as TFX, or is it based on TF8? It looks almost exactly the same as one of the two and is apparently the exact same thickness. Both pastes seem to feel like putty when you try to remove them from the chip after they've been on it a week or longer (with a very small improvement in thermal performance after a few days of use), so I'm not sure if it's the same supplier. Thermagic TF-EX is well known in Vietnam (it's sold in several Vietnamese shops), so that's why I'm asking.


----------



## long2905

Falkentyne said:


> Try just using the basic offset instead of the voltage curve. Try a +105 offset (and +250 to +500 memory). I have found from observation that if you change the V/F curve manually and don't make it perfect, your performance can actually go -down-, as the target clock's performance seems to be based on the V/F points before it, for some reason.
> 
> I've been telling everyone to use the diagonal X method for a while. It seems some people don't listen, keep trying to spread it, and get horrible results. The X method works very well though!
> Is "Thermagic TF-EX" the same paste as TFX, or is it based on TF8? It looks almost exactly the same as one of the two and is apparently the exact same thickness. Both pastes seem to feel like putty when you try to remove them from the chip after they've been on it a week or longer (with a very small improvement in thermal performance after a few days of use), so I'm not sure if it's the same supplier. Thermagic TF-EX is well known in Vietnam (it's sold in several Vietnamese shops), so that's why I'm asking.


Yeah, I will try your suggestions in the next few days when it gets colder; it's heating up a bit here for now.

I have rotated between pasting methods, actually. I would say it depends on the paste; it's not one-size-fits-all. Definitely the X pattern for TFX though, as there is no way you can spread it evenly. I was favoring spreading the paste earlier, as I saw der8auer doing exclusively that with his Grizzly paste.

I think I've seen the TF-EX paste but haven't tried it yet; the one I bought is TFX. Based on my friend's feedback, TFX and TF8 are similar in performance but TFX is harder to spread. We can buy straight from Taobao relatively easily here, so I guess we've got that going for us lol. The Strix card is around $2600 here though.


----------



## sultanofswing

Falkentyne said:


> Try just using the basic offset instead of the voltage curve. Try a +105 offset (and +250 to +500 memory). I have found from observation that if you change the V/F curve manually and don't make it perfect, your performance can actually go -down-, as the target clock's performance seems to be based on the V/F points before it, for some reason.
> 
> I've been telling everyone to use the diagonal X method for a while. It seems some people don't listen, keep trying to spread it, and get horrible results. The X method works very well though!
> Is "Thermagic TF-EX" the same paste as TFX, or is it based on TF8? It looks almost exactly the same as one of the two and is apparently the exact same thickness. Both pastes seem to feel like putty when you try to remove them from the chip after they've been on it a week or longer (with a very small improvement in thermal performance after a few days of use), so I'm not sure if it's the same supplier. Thermagic TF-EX is well known in Vietnam (it's sold in several Vietnamese shops), so that's why I'm asking.


If the curve points before your target voltage/frequency point are spread out too far, that is when performance gets worse, but the card will still clock what you want.
I believe it has to do with something we cannot see; I feel when it happens it's due to interpolation properties. For 1093mv to work properly it needs to use the voltage point before it and after it, and if the point before it is too far away, the voltage is misread/miscalculated. Remember, the voltage that we see in MSI Afterburner/PX1 for normal cards is not the actual NVVDD; it's just a lookup table in the BIOS.
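The interpolation idea can be sketched numerically. This is a toy model only (the real V/F table and its interpolation scheme are not public), with made-up curve points; it just shows how a far-away preceding point changes the voltage a linear interpolation would deliver at an intermediate clock bin.

```python
def interp_voltage(points, freq):
    """Linearly interpolate voltage (mV) for a target clock (MHz)
    from (freq, voltage) curve points."""
    pts = sorted(points)
    for (f0, v0), (f1, v1) in zip(pts, pts[1:]):
        if f0 <= freq <= f1:
            return v0 + (freq - f0) / (f1 - f0) * (v1 - v0)
    raise ValueError("clock outside curve range")

# Same top point (2100 MHz @ 1093 mV), different neighbors below it:
tight  = [(2055, 1075), (2070, 1081), (2100, 1093)]  # close preceding point
sparse = [(1800, 900), (2100, 1093)]                 # far-away preceding point

# Voltage delivered if the card actually runs an intermediate 2070 MHz bin:
print(interp_voltage(tight, 2070))   # 1081.0 mV
print(interp_voltage(sparse, 2070))  # ~1073.7 mV, several mV lower
```

Under the sparse curve the intermediate bin effectively runs undervolted relative to the tight curve, which is one plausible reading of why performance drops when the preceding points are spread out too far.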


----------



## pewpewlazer

Falkentyne said:


> Try just using the basic offset instead of the voltage curve. Try a +105 offset (and +250 to +500 memory). I have found from observation that if you change the V/F curve manually and don't make it perfect, your performance can actually go -down-, as the target clock's performance seems to be based on the V/F points before it, for some reason.


Not too surprising, but good to know for sure that Ampere exhibits this same weird behavior as Turing. There's gotta be something going on that we can't see that's affected by the lower points on the curve.


----------



## rhyno

Falkentyne said:


> I thought someone said that the fuse protection was disabled on that vbios? Did anyone with an evga FTW3 card have the fuses blow when using the 1000W Kingpin bios?


I pulled 930 watts, no issues.


----------



## SoldierRBT

pewpewlazer said:


> Not too surprising, but good to know for sure that Ampere exhibits this same weird behavior as Turing. There's gotta be something going on that we can't see that's affected by the lower points on the curve.


The problem is that the NVVDD/MSVDD internal table at each voltage point is too low and doesn't supply enough power for the internal clocks. You may see it running at 2100MHz stable, but the actual internal clocks are lower. On my KPE at stock, the internal clocks are around 90-120MHz lower than the clocks MSI Afterburner reports, and the gap increases at higher voltage points. Every voltage point in the V/F curve uses a different NVVDD/MSVDD combination, and sometimes too much or too little won't work.


----------



## sultanofswing

SoldierRBT said:


> The problem is that the NVVDD/MSVDD internal table at each voltage point is too low and doesn't supply enough power for the internal clocks. You may see it running at 2100MHz stable, but the actual internal clocks are lower. On my KPE at stock, the internal clocks are around 90-120MHz lower than the clocks MSI Afterburner reports, and the gap increases at higher voltage points. Every voltage point in the V/F curve uses a different NVVDD/MSVDD combination, and sometimes too much or too little won't work.


This is exactly what Vince told me as well. It shows in all your monitoring programs it's running the speed you set but it isn't in reality.


----------



## Nizzen

truehighroller1 said:


> I managed to pull 760 watts in Fire Strike Extreme on my Suprim X 3090.


From the wall for the whole system, or in GPU-Z?

Drawing 1000W+ from the wall with my 7980XE + 3090 is no problem.


----------



## pewpewlazer

SoldierRBT said:


> The problem is that the NVVDD/MSVDD internal table at each voltage point is too low and doesn't supply enough power for the internal clocks. You may see it running at 2100MHz stable, but the actual internal clocks are lower. On my KPE at stock, the internal clocks are around 90-120MHz lower than the clocks MSI Afterburner reports, and the gap increases at higher voltage points. Every voltage point in the V/F curve uses a different NVVDD/MSVDD combination, and sometimes too much or too little won't work.


Let's say +165 offset = 2100mhz @ 1.05v (I made this up, just an example).

First, raise ONLY the 1.05v point on the curve to +165, and leave the rest at +0.

Next, raise the entire curve +150, then set the 1.05v point to +165.

In both instances, the card will show to be running 2100mhz @ 1.05v. But the second scenario will score much higher.

This magical "internal clock" your KPE card reports, what does it show during those two scenarios?


----------



## Falkentyne

SoldierRBT said:


> The problem is that the NVVDD/MSVDD internal table at each voltage point is too low and doesn't supply enough power for the internal clocks. You may see it running at 2100MHz stable, but the actual internal clocks are lower. On my KPE at stock, the internal clocks are around 90-120MHz lower than the clocks MSI Afterburner reports, and the gap increases at higher voltage points. Every voltage point in the V/F curve uses a different NVVDD/MSVDD combination, and sometimes too much or too little won't work.





sultanofswing said:


> This is exactly what Vince told me as well. It shows in all your monitoring programs it's running the speed you set but it isn't in reality.


Is there a tool you can use to check what the internal clocks are?

I found THIS but I have NO idea if it will work for any of you here, nor whether the magic smoke will come out.

Thermspy - "I found a link to a Nvidia program called Thermspy. If anyone is interested I will post a link to it if allowed. I have attached an image from my card's readout." (www.techpowerup.com)


----------



## Nizzen

rhyno said:


> I pulled 930 watts, no issues.


With 0.81v, I guess.


----------



## Falkentyne

Before anyone thinks I'm trolling, this is at 400W TDP (74% slider) running Heaven benchmark.


----------



## sultanofswing

Falkentyne said:


> Before anyone thinks I'm trolling, this is at 400W TDP (74% slider) running Heaven benchmark.


Yeah, that tool has been around a long time; it used to be how everyone forced the P0 P-state, using that program and a batch file.


----------



## rhyno

Nizzen said:


> With 0.81v i guess


Not sure, but it holds 1.1 volts at 600 watts no problem while playing Cyberpunk.


----------



## Falkentyne

sultanofswing said:


> Yeah, that tool has been around a long time; it used to be how everyone forced the P0 P-state, using that program and a batch file.


My clocks though  I want muh clocks=(
I think @HyperMatrix is going to like this.


----------



## SoldierRBT

pewpewlazer said:


> Lets say +165 offset = 2100mhz @ 1.05v (I made this up, just an example).
> 
> First, raise ONLY the 1.05v point on the curve to +165, and leave the rest at +0.
> 
> Next, raise the entire curve +150, then set the 1.05v point to +165.
> 
> In both instances, the card will show to be running 2100mhz @ 1.05v. But the second scenario will score much higher.
> 
> This magical "internal clock" your KPE card reports, what does it show during those two scenarios?


Because the card may be hitting power limits, it will go down a bin or two, and since you applied +150 to the other voltage points the average clocks will be higher than just setting +165 on one point. This is why I check the power draw of the voltage points before running any benchmarks. It depends on the load/game: Port Royal can go up to 1.025v while Time Spy Extreme reaches 993mv (this is with the 520W bios limit).
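That bin-dropping argument can be sketched with a toy model. Everything here is assumed for illustration (a made-up four-point curve, 15 MHz boost bins, and fixed dwell fractions at each voltage point under power limit); it only shows why raising the whole curve lifts the average clock even though both setups report the same top bin.

```python
# Hypothetical stock V/F points: (voltage mV, stock clock MHz), top point first.
stock = [(1050, 1935), (1031, 1920), (1012, 1905), (993, 1890)]

def tuned(curve, global_offset, top_offset):
    """Raise every point by global_offset, then override the top point."""
    out = [(v, c + global_offset) for v, c in curve]
    out[0] = (curve[0][0], curve[0][1] + top_offset)
    return out

only_top = tuned(stock, 0, 165)    # scenario A: +165 on the top point only
whole    = tuned(stock, 150, 165)  # scenario B: +150 everywhere, top at +165

def avg_clock(curve, dwell=(0.4, 0.3, 0.2, 0.1)):
    """Average clock if the power limiter makes the card spend these
    fractions of time at progressively lower voltage points."""
    return sum(f * c for f, (_, c) in zip(dwell, curve))

print(avg_clock(only_top))  # ~1986 MHz
print(avg_clock(whole))     # ~2076 MHz, higher average once bins drop
```

Both scenarios show "2100 MHz" at the top voltage point, but once the limiter pushes the card down the curve, scenario B's raised lower points keep the average (and the score) higher.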



Falkentyne said:


> Is there a tool you can use to check what the internal clocks are?
> 
> I found THIS but I have NO idea if it will work for any of you here, nor whether the magic smoke will come out.
> 
> Thermspy - "I found a link to a Nvidia program called Thermspy. If anyone is interested I will post a link to it if allowed. I have attached an image from my card's readout." (www.techpowerup.com)


That's exactly the program I use to monitor internal clocks. It allowed me to score 15k in Port Royal with just 430W (card is also supplying power to 4 fans and one pump).
Undervolt 0.950v 2130MHz without NVVDD/MSVDD tweaks - 14,624 - Max power draw: 407W
https://www.3dmark.com/pr/696258

Undervolt 0.950v 2130MHz NVVDD/MSVDD tweaked - 15,012 - Max power draw: 432W
https://www.3dmark.com/pr/696376


----------



## bmgjet

Seems to be some hidden stuff in that program.


----------



## sultanofswing

And here he goes, LOL.


----------



## Falkentyne

bmgjet said:


> Seems to be some hidden stuff in that program.


Ok don't scare me now.
Is this good hidden stuff or bad hidden stuff?


----------



## HyperMatrix

Falkentyne said:


> My clocks though  I want muh clocks=(
> I think @HyperMatrix is going to like this.
> 


I told you how I got around the problem, right? So I basically have two methods I use now: one for liquid cooling/constant temps, and one for air cooling. 

1) If you're going to end up in the 40s temperature range but are setting the curve while in the 20s, set the clock speed you want two clock steppings above. Say you want to lock 2100MHz at 0.975v: set your curve to 2130MHz at 0.975v, then set the clock cap with smi to 2100MHz. While the V/F table is trying to do 2130MHz at 0.975v, it will actually only do 2100MHz. When you hit temps in the 30s, it'll be trying to do 2115MHz at 0.975v, but you're capped at 2100MHz with smi so you're still fine. Same thing in the 40s, so you get a nice constant clock. You will need stable/lower temps for this method, otherwise you'll still get crashing in games.

2) If you're on air with more variable temps/clocks, do exactly what I said above, but don't use a flatline curve. Either use a smooth linear curve or just a global offset. It's not efficient at all, but basically the card will stick to the 2100MHz cap you've set, and as the V/F table shifts up and down with temperature it'll change how much voltage is required to maintain those clocks without changing the actual clocks themselves.

These methods have made my life infinitely easier. I'll definitely be interested in seeing what ThermSpy reports while I'm doing all this though. Got LM coming in on Thursday. Have like 10 packs of CLU left from before, but decided to try some of Thermaltake's true actual 79 W/mK liquid metal that is 100% accurate.  Will be nice to experiment and figure out a bit more about how this damn card works.
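The arithmetic behind method 1 can be sketched like this. The temperature model is an assumption of mine (one 15 MHz boost bin lost per 10C band above the 20s, roughly what the post describes); the cap stands in for a lock set with `nvidia-smi -lgc`.

```python
BIN = 15  # MHz per boost bin

def effective_clock(curve_point_mhz, smi_cap_mhz, temp_c):
    """Toy model: the V/F table's target drops one bin per 10 C band
    above the 20s, and the smi clock lock clamps whatever remains."""
    bins_down = max(0, (temp_c - 20) // 10)   # 20s -> 0, 30s -> 1, 40s -> 2
    return min(curve_point_mhz - bins_down * BIN, smi_cap_mhz)

target = 2100                    # the clock we actually want at 0.975 V
curve_point = target + 2 * BIN   # set the curve two bins above: 2130 MHz

for t in (25, 35, 45):
    print(t, effective_clock(curve_point, target, t))  # 2100 at all three
# Without the two-bin headroom (curve set to 2100), the same temps
# would give 2100 / 2085 / 2070 as the table shifts down.
```

The two bins of headroom absorb the temperature-driven downshift, so the capped clock stays constant across the 20s-40s range instead of stepping down with the table.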


----------



## sultanofswing

For me liquid metal worked great when I was air cooled, Drop the temps quite a bit.
Once I watercooled it only netted me at best 1-2c vs Kryonaut.


----------



## SoldierRBT

sultanofswing said:


> This is exactly what Vince told me as well. It shows in all your monitoring programs it's running the speed you set but it isn't in reality.


You're right. I also discovered that if the card is running very close to the PL, let's say 4-7W below it, the internal clocks will drop 15MHz for a few seconds; MSI Afterburner / Precision X1 / GPU-Z won't detect this.


----------



## jomama22

SoldierRBT said:


> The problem is that the NVVDD/MSVDD internal table at each voltage point is too low and doesn't supply enough power for the internal clocks. You may see it running at 2100MHz stable, but the actual internal clocks are lower. On my KPE at stock, the internal clocks are around 90-120MHz lower than the clocks MSI Afterburner reports, and the gap increases at higher voltage points. Every voltage point in the V/F curve uses a different NVVDD/MSVDD combination, and sometimes too much or too little won't work.





sultanofswing said:


> This is exactly what Vince told me as well. It shows in all your monitoring programs it's running the speed you set but it isn't in reality.


I figured this out more or less when doing my Time Spy and TSE runs. I actually set my scores using offsets and found that 1.075v gave the best possible scores for my chip. It seems to be the highest voltage point where MSVDD and NVVDD mesh really well; anything beyond that always nets me fewer points. 

It can be a bit of a pain, as touching the curve at all can really be detrimental to how well it actually runs. If you get lucky enough that the default curve applies an offset with the first point at the clock you want (or 15MHz above, since when things start to get wonky it will downclock you that much) at the voltage you want, you can just set the voltage slider to 0 and you should be stuck on that. At the upper range of voltages it's not bad, as there is usually only one more clock/voltage step it would take (which it would if you set the voltage slider to +100).

This is a big part of the reason we see people with massive clocks getting far fewer points in benchmarks. 

It's also a huge reason I can't wait to get my EVC2 in the mail for the Strix, so I can start messing around with it and see what needs to be done at the higher voltage levels.


----------



## bmgjet

Falkentyne said:


> Ok don't scare me now.
> Is this good hidden stuff or bad hidden stuff?


None of that hidden stuff is populated, so I guess that's why they hid it off screen. 
The code is just pulling info from nvapi.dll and only showing the public stuff, so nothing new to learn from this program.
You can get more info from the GPU just by browsing nvapi.dll in Visual Studio.


----------



## HyperMatrix

So I don't know about smashclock or whatever... but this seems to be doing the same thing. The question is... why is it reporting 2130MHz when it's doing 2081MHz? This is going to be very helpful for real OC troubleshooting instead of blind testing. Thanks again for the share, @Falkentyne









When I switched to a locked 1V/2100MHz, it dropped down to 2025MHz.









And then oddly enough...at around the 50-52C mark (which I previously mentioned would cause a crash because clocks would go up before they went back down)...it gets a boost:


----------



## SoldierRBT

HyperMatrix said:


> So I don't know about smashclock or whatever... but this seems to be doing the same thing. The question is... why is it reporting 2130MHz when it's doing 2081MHz? This is going to be very helpful for real OC troubleshooting instead of blind testing. Thanks again for the share, @Falkentyne


Open the Classified tool and add NVVDD until the internal clock matches the clocks shown in MSI. MSVDD usually doesn't need tweaking; the voltage on auto is enough.


----------



## incognito90

HyperMatrix said:


> Always a chance. You can try underclocking your memory by -500MHz and see if it still happens. I’d be very concerned if you’re getting vram overheating to the point of thermal throttling if you’re not overclocking it. Unless there’s a serious issue with airflow in your case.


I have five 120mm fans (two intake and three exhaust). I get the throttle with the side panel off as well, with all case fans and GPU fans set to 100%. I'm mining ETH, which is memory intensive, but I don't think it should throttle at standard clocks.


----------



## HyperMatrix

incognito90 said:


> I’m mining ETH which is memory intensive


I think you just solved the problem. Haha. Yes, Ethereum mining can be very stressful on the memory. You'll want to get a 3080 Ti, which will have 20GB with just 10 memory modules instead of the 24 modules on the 3090. Or add an additional finned heatsink to the top of the card with a fan blowing on it to keep it cool. Or watercool it. Get a model that is supported by Aqua Computer, since they make waterblocks with an actively cooled backplate.




SoldierRBT said:


> Open Classified tool and add NVVDD until the internal clock match the clocks showing in MSI. MSVDD usually doesn't need tweaking, the voltage on auto is enough.


Check the other pic. It stabilized when the temp hit 50C. 1V at 2160MHz. That doesn't make sense.











Here are pics better demonstrating what I'm saying. Nothing touched on the clock settings, just different temps. GPU-Z in the screenshot for verification.
47C:

51C:


----------



## jomama22

HyperMatrix said:


> So I don't know about smashclock or whatever... but this seems to be doing the same thing. The question is... why is it reporting 2130MHz when it's doing 2081MHz? This is going to be very helpful for real OC troubleshooting instead of blind testing. Thanks again for the share, @Falkentyne
> 
> When I switched to a locked 1V/2100MHz, it dropped down to 2025MHz.
> 
> And then oddly enough... at around the 50-52C mark (which I previously mentioned would cause a crash because clocks would go up before they went back down)... it gets a boost:


It's the same reason Ryzen CPUs need to have their effective clocks watched, not the reported ones. On that side of the pond it's called clock stretching, as what you see is just the clocks being requested, not the actual clocks in use.

That's how all these advanced boost behaviors work now.
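In that terminology, the effective clock is just cycles actually completed divided by wall time, while the reported clock is only the request. A minimal sketch (the cycle counts are made up; the 2130-vs-2081 figures echo the ThermSpy readings above):

```python
def effective_clock_mhz(cycles_completed, interval_s):
    """Effective clock = cycles the GPU actually completed / elapsed time."""
    return cycles_completed / interval_s / 1e6

# The card *requests* 2130 MHz, but under clock stretching it skips cycles
# when voltage sags, so fewer cycles complete in the sampling window.
requested_mhz = 2130
completed = 2081e6          # cycles actually finished in a 1-second window
print(effective_clock_mhz(completed, 1.0))  # 2081.0, the "real" clock
```

Monitoring tools that read the request see 2130; a tool measuring delivered cycles over time sees 2081.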


----------



## jomama22

HyperMatrix said:


> I think you just solved the problem. Haha. Yes, Ethereum mining can be very stressful on the memory. You'll want to get a 3080 Ti, which will have 20GB with just 10 memory modules instead of the 24 modules on the 3090. Or add an additional finned heatsink to the top of the card with a fan blowing on it to keep it cool. Or watercool it. Get a model that is supported by Aqua Computer, since they make waterblocks with an actively cooled backplate.
> 
> Check the other pic. It stabilized when the temp hit 50C. 1V at 2160MHz. That doesn't make sense.


Wouldn't be surprised if it was feeding less NVVDD because of the lower temps, assuming that amount of voltage would be enough. But I could be completely off base as well.


----------



## HyperMatrix

jomama22 said:


> Wouldn't be surprised if it was feeding less NVVDD because of the lower temps, assuming that amount of voltage would be enough. But I could be completely off base as well.





SoldierRBT said:


> Open Classified tool and add NVVDD until the internal clock match the clocks showing in MSI. MSVDD usually doesn't need tweaking, the voltage on auto is enough.


Interestingly enough... yes, there seems to be some NVVDD pullback at lower temps, because while hitting 50C+ corrects it, upping it in the Classified tool in the 40s seems to fix the problem as well. Meaning this is actually a pretty important issue for anyone under water/cooler temps. Just a KPE issue, or other cards as well?


----------



## jomama22

HyperMatrix said:


> Interestingly enough... yes, there seems to be some NVVDD pullback at lower temps, because while hitting 50C+ corrects it, upping it in the Classified tool in the 40s seems to fix the problem as well. Meaning this is actually a pretty important issue for anyone under water/cooler temps. Just a KPE issue, or other cards as well?


Once I get my EVC2 in I'll take a look. Should be here this week. I'm under water and was just thinking that lmao.


----------



## HyperMatrix

jomama22 said:


> Once I get my evc2 In I'll take a look. Should be here this week. I'm underwater and was just thinking that lmao.


Even without evc2, check right now to see if you're getting your actual selected clock speed in the 30s.


----------



## SoldierRBT

HyperMatrix said:


> I think you just solved the problem. Haha. Yes ethereum mining can be very stressful on the memory. You'll want to get a 3080 Ti which will have 20GB with just 10 memory modules instead of the 24 modules in the 3090. Or add an additional finned heatsink to the top of the card with a fan blowing on it to keep it cool. Or watercool it. Get a model that is supported by AquaComputer since they make waterblocks with an active cooled backplate.
> 
> 
> 
> 
> Check the other pic. It stabilized when the temp hit 50C. 1V at 2160MHz. That doesn't make sense.
> 
> View attachment 2471562
> 
> 
> 
> Here are pics better demonstrating what I'm saying. Nothing touched on the clock settings. Just at different temps. GPU-Z in the screenshot for verification.
> 47C:
> View attachment 2471563
> 
> 
> 51C:
> View attachment 2471564


Look at the second pic: the voltage dropped to 0.993v when hitting 51C and the internal clock went up. This means the internal NVVDD table is supplying enough voltage for the 993mv point to be efficient, while 1.0v wasn't getting enough. This is why I recommend testing each voltage point. They need different NVVDD/MSVDD combinations to be efficient/match internal clocks.

My 1.0v voltage point needs the same NVVDD to be efficient. Nice


----------



## SolarBeaver

Does the 500W bios work fine on the Gaming Trio? I think I read somewhere that the middle fan doesn't spin correctly with it.
For some reason the Strix bios gets lower scores than the Suprim X bios, even though the latter seems to be a little perf-capped by the power limit (I'm running 2025MHz @ 950mv, getting ~14400 in PR), but I have to tune the voltage up a little on the Suprim bios for it to be stable (I can go lower than 950mv with the Strix bios, but results are still worse regardless at the same frequencies).
So I thought I could try the EVGA 500W bios, but I'm not sure about the fans.
EDIT: I've tried the EVGA 500W bios (not sure which version I should use though) and it stabilizes @ 987mv @ 2025MHz. Such a heavy one; guess I'd better stick to the Strix or Suprim bios for better temps and similar performance at higher clocks.


----------



## originxt

With the optimus ftw3 block no shunts. Really hope they get that 500 watt bios sorted, want to break that 15k mark.


----------



## Slackaveli

HyperMatrix said:


> i use Nvidia Inspector with max negative LOD (-3)



> I did that last week. Now if only that would stop things from spawning or despawning right in front of my eyes. Haha. Kill a single cop in a secluded area? 10 cops literally spawn right in front of you out of nowhere. Really wish they had delayed the game another year or two and done it right. Absolutely gorgeous game but you can tell it was rushed.


Oh yeah. Game is fun and BEAUTIFUL but quite buggy.


----------



## Slackaveli

dante`afk said:


> View attachment 2471200
> 
> 
> 
> 
> 
> 
> 
> 
> I don't think they'll beat the current AMD IPC, but if they do, I'll switch again.


fair enough lol.


----------



## Slackaveli

0451 said:


> So I thought my VRAM degraded but that might not be the case. When I first got the 3090, it could run PR at +1500 mem, but now it crashes above 1350. However, when I look at historical results, the average memory clock has been around 1381mhz this whole time. I had thought my 2080ti experienced the same degradation, but it too had approximately the same average memory clock frequency the entire time I owned it.


Just that human memory degradation. As a 46 year old, I can confirm this is real lol.


----------



## pewpewlazer

SoldierRBT said:


> Because the card may be hitting power limits and it will go down a bin or 2, and since you applied +150 to the other voltage points the average clocks will be higher than just setting +165 to one point. This is why I check the power draw of the voltage points before running any benchmarks. This depends on the load/game. Port Royal can go up to 1.025v while Time Spy Extreme 993mv (this is with the 520W bios limit).


So you get identical performance with both methods?

EDIT: Whatever this ThermSpy program does, it's like a magic looking glass of some sort. (Yes, I know I'm a Turing peasant still, but it appears Ampere behaves similarly, so I'm sharing my findings).

+150 = 2113 (2130) @ 1.050v
+165 = 2128 (2145) @ 1.050v
1.050v point set to +165 (with the rest of the curve at +150) = 2119 (2145) @ 1.050v.
1.050v point set to +165 (with the rest of the curve at +0) = 2060 (2145) @ 1.050v.

So what exactly is this program looking at? And why are no other monitoring programs looking at it? Because whatever it is, these numbers seem to validate all the strange behavior I've observed.


----------



## SoldierRBT

pewpewlazer said:


> So you get identical performance with both methods?


You didn’t read what I wrote. The first scenario would have better score if power limit isn’t an issue. Card would be boosting +165 all the time. If you get better score in second scenario, you’re doing it wrong.
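A toy model of why the whole-curve offset wins under power limit: when the card drops to a lower voltage point, a single-point offset leaves that point at stock. The curve values below are made up for illustration:

```python
# Toy V/F curve: voltage point (mV) -> stock frequency (MHz).
# Values are made up; real curves have many more points.
curve = {1025: 1980, 1037: 1995, 1050: 2010}

def apply_offset(curve, offset, only_mv=None):
    """Offset the whole curve, or just one voltage point if only_mv is set."""
    return {mv: f + (offset if only_mv in (None, mv) else 0)
            for mv, f in curve.items()}

whole = apply_offset(curve, 150)          # +150 everywhere
single = apply_offset(curve, 165, 1050)   # +165 at 1.050 V only

# Power limit knocks the card from the 1050 mV point down to 1037 mV:
# the whole-curve card keeps its offset, the single-point card drops to stock.
print(whole[1037], single[1037])  # 2145 1995
```

So average clocks over a power-limited run favor the first method even though its peak point is lower.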


----------



## pewpewlazer

SoldierRBT said:


> You didn’t read what I wrote. The first scenario would have better score if power limit isn’t an issue. Card would be boosting +165 all the time. If you get better score in second scenario, you’re doing it wrong.


I read what you wrote. Let's try this another way:

Set a +165 offset to the entire curve. Run benchmark. Observe what voltage it runs. This is now your "desired voltage".

Open VF curve, drag ONLY that desired voltage point to +165. Leave the rest at +0. Run benchmark again.

Assume identical temperatures, and the card is NOT hitting power limit (it's shunt modded or some 1000w XOC BIOS)

Which one will score higher?


----------



## Bilco

Arbustok said:


> I got to 13998 in PR with the stock bios, I'm guessing that's pretty decent but still I'm tempted to sell this TUF and get a Strix and get an Aquacomputer Block with the active backplate whenever stock is stable


You'll be waiting for some time. My order from October still has no ETA. Every time I contact them I get a "shipping soon" or "should be shipping next week" for nearly 3 months now.


----------



## HyperMatrix

@Falkentyne that program was a huge help man...I was able to quickly fine tune my OC in valley and then do successful port royal runs. I did 2160MHz at 1.025v and +1250 mem. Also 2175MHz at 1.036v with +1250 mem. Never hit the power limit at all. Using the method I described a few pages back, clocks were fully locked. No up/down at all. And I got my highest score yet with 2175MHz on my 4.3GHz 6950x system with 2933MHz CL15 ram. Didn't push any higher as I was already peaking at 507W. But these are very promising results considering there's no liquid metal yet either. Even with my system being as outdated as it is, I think I could probably push it past 15K tomorrow with more tweaking.


----------



## xrb936

SolarBeaver said:


> I see, thanks, so there's basically no point in trying to buy a new Strix and sell this one, because I will most likely get the same performance out of the new ones
> And for non-binned chips, is +100-110 core on air average for most 3090 chips regardless of the manufacturer, or is the Asus Strix now the worst?


You are correct.


----------



## 7empe

xrb936 said:


> Compared to all Strix OC, it's average. Compared to all first-batch Strix OC, it's far below the average.


I have Strix OC from the October batch. It does +180 core.


----------



## Pepillo

A quick question. Does the RTX Classified Controller in this forum work with cards other than the Kingpin?

Thanks


----------



## 7empe

GAN77 said:


> Guys, what average delta chip water do you get on water cooling and 450-500 watts of consumption?


If you mean temps, I went down from 85C (520W bios) to 49C on average in the TSE stress test. Strix here.


----------



## aerotracks

Sheyster said:


> The FE I previously had boosted to 1980 with no offset, so there is some VID check or something going on there for sure.


FE here too - the stock boost seems quite volatile. I see the 1075mV step only available at 35C and lower. After that, it goes to 2025, 2010 etc. when warming up.


----------



## SolarBeaver

7empe said:


> I have Strix OC from the October batch. It does +180 core.


ENVYYYYYY

My Strix also does some kind of weird perf caps. For example, I undervolt to [email protected] and it occasionally hits Vrel, Vop and thermal caps (at temps below 60). My Gaming Trio on the Strix bios never has this; this Strix is just a mess...


----------



## GAN77

The MSI Suprim has a connector, J5007, that looks like an EVC2 connection point. The traces go to the Monolithic Power Systems MP2999A controllers. Will control work when an Elmor EVC2SX is connected? What would I be able to adjust?


----------



## SolarBeaver

There's definitely something wrong with this Strix. Here's how it scores totally stock; I would feel guilty trying to sell this abomination. Here's a screenshot of GPU-Z + my score. On the same PC I scored 13600+ with the Strix bios on the Trio, so it's not setup related. Maybe I could send it in for repairs, or is this kind of behavior not a warranty case? Need help please...










https://www.3dmark.com/3dm/55771272?


----------



## ALSTER868

7empe said:


> If you mean temps I went down from 85 C (520W bios) to 49 C on average in TSE stress test. Strix here.


The question was about the temp delta you're getting between your water temp and the chip temp. It would be interesting to hear and compare.



7empe said:


> I have Strix OC from the October batch. It does +180 core.


Mine does +190 in PR and +180 in TS though on stock bios. Should be above average chip it looks like.


----------



## long2905

Did some further tweaking: set a voltage curve of [email protected] and got a max temp of 82C. Turned off the ARGB LED on the card, set max fan and a +1000 memory clock offset. Got 14512 in PR.









I scored 14 512 in Port Royal


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





if i set a simple offset on the core clock then the auto voltage will consistently hover around 1.0-1.06v and heat up the card like crazy, upward of 100c before i cancel the run.


----------



## sultanofswing

long2905 said:


> did some further tweaking, set voltage curve of [email protected] and get a max temp of 82c. turned off argb led on the card, set max fan and +1000 offset memory clock. got 14512 PR.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 512 in Port Royal
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> if i set a simple offset on the core clock then the auto voltage will consistently hover around 1.0-1.06v and heat up the card like crazy, upward of 100c before i cancel the run.


That temp is way too hot for the voltage, I'd start looking at your cooling solution/Case airflow.


----------



## stryker7314

Anyone running the Kingpin 1000W bios that's in the wild?









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## truehighroller1

Nizzen said:


> From the wall of the whole system, or in gpu-z?
> 
> Drawing 1000w+ from the wall with my 7980xe + 3090 is no problem


Afterburner reported it. Check my post / scores I got up there. I have a corsair 1600w psu.


----------



## truehighroller1

Falkentyne said:


> Is there a tool you can use to check what the internal clocks are?
> 
> I found THIS but I have NO idea if it will work for any of you here nor whether the magic smoke will come out.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Thermspy
> 
> 
> I found a link to a Nvidia program called Thermspy. If anyone is interested i will post a link to it if allowed. i have attached a image from my cards readout.
> 
> 
> 
> 
> www.techpowerup.com



Interesting thanks for that.


----------



## rhyno

stryker7314 said:


> Anyone running the Kingpin1000w bios that's in the wild?


I am, on a FTW3 card.


----------



## Nizzen

stryker7314 said:


> Anyone running the Kingpin1000w bios that's in the wild?


Soon everybody in this very thread. Including me


----------



## Nizzen

ALSTER868 said:


> Question was about what temp delta are you getting between your water temp and the chip temp? It's interesting to hear and compare.
> 
> 
> Mine does +190 in PR and +180 in TS though on stock bios. Should be above average chip it looks like.


So do you have any Port Royal result with +190? Should be about ~2185MHz in Port Royal (without power limit)


----------



## Nizzen

truehighroller1 said:


> Highest score yet with PR.
> 
> View attachment 2471082
> 
> 
> Time spy results that I posted in the timespy benchmark post.
> 
> View attachment 2471083
> View attachment 2471084


Can't see any power draw here?

Must be the whole system?


----------



## ALSTER868

Nizzen said:


> So do you got any Port Royal result with +190? Should be about ~2185mhz in port Royal (without powerlimit)


On stock Strix bios and +190/+1000 it was 2175-2145 MHz and 15080 PR score.
On KPE 520W and +170/+1000 it was 2205 to 2190 MHz and 15280.









I scored 15 280 in Port Royal


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## jomama22

ALSTER868 said:


> Question was about what temp delta are you getting between your water temp and the chip temp? It's interesting to hear and compare.
> 
> 
> Mine does +190 in PR and +180 in TS though on stock bios. Should be above average chip it looks like.


You really need to be giving the voltage/frequency relationship, as just saying +whatever doesn't really define anything.

If you slap the 1000w bios on or shunt the card, you'll find that +whatever will begin to drop quite rapidly. A +180 where it's 0.900v @ 2050 or something could easily deteriorate to +120 @ 1.075v


----------



## Keninishna

Any idea why my score is low now on PR? I was getting 14.5k with these settings before. I tried clean reinstall of the drivers and still the same. Average clock is 2100mhz.


----------



## bl4ckdot

What's the best Strix waterblock? EK? Aquacomputer?


----------



## ALSTER868

jomama22 said:


> Really need to be giving voltage/frequency relationship as just saying +whatever doesn't really define anything.
> 
> If you slap the 1000w bios on or shunt the card, you'll find that +whatever will begin to drop quite rapidly. A +180 where it's 0.900v @ 2050 or something could easily deteriorate to +120 @ 1.075v


Yesterday I was trying my Strix for the first time after installing the waterblock on it. The +190 offset on stock bios was with +100/+123 voltage/PL, 2175-2145MHz at 1.0875-1.075v during the PR run.


----------



## SuperMumrik

Anyone know why my card won't boost to 1100mV unless I use the curve editor?
I get the "Limit: Voltage" in my OSD, but I can set 1100mV in the curve editor without any voltage limit (scaling seems to be **** with the curve though)

(Stock bios with 12 mOhm stacked shunts (680W ish))
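For reference, the shunt-mod math behind numbers like "680W ish": stacking a resistor in parallel with the stock shunt lowers the resistance the card measures current across, so it under-reads power by the resistance ratio. A sketch (the 5 mOhm stock value is an assumption for illustration; check your card's actual shunts):

```python
def stacked_shunt_scale(stock_mohm: float, stacked_mohm: float) -> float:
    """How much real power exceeds reported power after stacking a shunt.

    The card converts the shunt's voltage drop to current assuming the
    stock resistance, so real power = reported power * stock / parallel."""
    parallel = stock_mohm * stacked_mohm / (stock_mohm + stacked_mohm)
    return stock_mohm / parallel

scale = stacked_shunt_scale(5.0, 12.0)  # hypothetical 5 mOhm stock shunt
# A 480 W enforced limit would then allow roughly 480 * scale ≈ 680 W real.
```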


----------



## jomama22

Keninishna said:


> Any idea why my score is low now on PR? I was getting 14.5k with these settings before. I tried clean reinstall of the drivers and still the same. Average clock is 2100mhz.
> 
> View attachment 2471628


Reinstalling using DDU blows up the drivers. I don't know why, but it does. It's better to just use Nvidia's installer by itself right now. Don't know if it's just a 3xxx thing or what, but I have had the same issues.

Download ThermSpy and see if you can watch it while doing your run. Or try it with Heaven so you can put it in windowed mode. It will tell you the effective clock being used along with the set clock.


----------



## long2905

sultanofswing said:


> That temp is way too hot for the voltage, I'd start looking at your cooling solution/Case airflow.


Yeah, I was trying out Falkentyne's suggestions earlier, but as you can see that would not work with the iChill X4 stock air cooler. I went back to the voltage curve and still managed +200 on my previous best PR score.


----------



## jomama22

SuperMumrik said:


> Anyone know why my card won't boost to 1100mV unless I use the curve editor?
> I get the "Limit: Voltage" in my OSD, but I can set 1100mV in the curve editor without any voltage limit (scaling seems to be **** with cruve though)
> 
> (Stock bios with 12 mOhm stacked shunts (680W ish))
> View attachment 2471629


You can either hope to get lucky with an offset where 1.1v is the start of its own frequency step (for example, 1.093v is 2100 and 1.1v is 2115) while having the voltage slider at +100. Or you can manually set the curve to do that, but as you said, it seems to mess with stuff.

You can also just lock the voltage at 1.1v using Ctrl+L and then adjust the curve so that the idle voltage (the white dotted line) is also 1.1v

Those are your only options really.


----------



## Falkentyne

Keninishna said:


> Any idea why my score is low now on PR? I was getting 14.5k with these settings before. I tried clean reinstall of the drivers and still the same. Average clock is 2100mhz.
> 
> View attachment 2471628


Check thermspy clock vs effective clock please. Haven't you been reading the thread? Link is right above your posts.


----------



## stryker7314

rhyno said:


> i am , on a ftw3 card


 How ya like it?


----------



## rhyno

stryker7314 said:


> How ya like it?


Love it. It's the only bios that truly unlocks the card. Even the 520 watt bios has a power target of 430 so it will try to remain there. This 1000 watt bios will pull whatever the **** it wants haha. Highest I've seen while gaming is 700 watts peak but it's usually around 550-600 watts in game


----------



## Thanh Nguyen

So will the 1000w bios on a FTW3 card blow the fuse? Just hit 600w after a PR run.


----------



## rhyno

Thanh Nguyen said:


> So 1000w on ftw3 card will blow up the fuse? Just hit 600w after a PR run.


ive already posted pics of ftw3 card pulling 930 watts a couple pages back


----------



## dante`afk

jomama22 said:


> Reinstalling using DDU blows up the drivers. Don't know why but it does. It's better to just use Nvidias installer by itself right now. Don't know if it's just a 3xxx thing or what, but I have had the same issues.
> 
> Download thermspy and see if you can watch it while doing your run. Or try it with heaven so you can put it in windowed mode. Will tell you the effective clock being used along with the set clock.


how'd the new drivers go?


----------



## stryker7314

rhyno said:


> love it. its the only bios the truely unlocks the card. even the 520 watt bios has a power target of 430 so it will try to remain there. this 1000 watt bios will pull whatever the **** it wants haha. highest ive seen while gaming is 700 watts peak but its usually around 550-600 watts in game


That is awesome, thank you. Do you have a waterblock?


----------



## dante`afk

there you go... I guess benches do not report the proper clock then 











edit - then again this seems to be only in heaven benchmark, 3dmark nearly reports the right clocks.


----------



## rocklobsta1109

I don't understand how vanilla cards shipped with a 350w power target. Even the 390w KFA2 bios that plays nice with 2x8-pin cards rides the power limiter nearly 100% of the time. We really need a safer alternative to the 1000w bios; even a 500w option with some of the safety measures in place would be preferable.


----------



## Nizzen

rhyno said:


> ive already posted pics of ftw3 card pulling 930 watts a couple pages back


Show us an actual video of the gpu drawing 930w alone

Can't wait 😎


----------



## jomama22

dante`afk said:


> how'd the new drivers go?


Haven't done anything with the computer yet, been busy with work. Once I have time I'll let you know.


----------



## originxt

For the 1000w bios, what oc software you guys using? I'm assuming px1 is out.


----------



## des2k...

That ThermSpy is awesome, I can see a huge difference between active and requested clocks. It's up to two bins lower than what I see in Afterburner.

The positive thing is that increasing voltage/VF points does increase effective clocks, but re-doing my test shows that I have a pretty crappy bin!


----------



## ALSTER868

My best so far, but I'm not done yet of course.
Strix OC, KPE 520W bios, +180/+1250 offset, PR score 15412. Planning to hit 15600 and even more when it gets colder outside



https://www.3dmark.com/3dm/55791185?resultRegistrationOutcome=KEY_IN_ANOTHER_ACCOUNT_CURRENT_ACCOUNT_FREE


----------



## SoldierRBT

pewpewlazer said:


> I read what you wrote. Let's try this another way:
> 
> Set a +165 offset to the entire curve. Run benchmark. Observe what voltage it runs. This is now your "desired voltage".
> 
> Open VF curve, drag ONLY that desired voltage point to +165. Leave the rest at +0. Run benchmark again.
> 
> Assume identical temperatures, and the card is NOT hitting power limit (it's shunt modded or some 1000w XOC BIOS)
> 
> Which one will score higher?


Interesting. You're right. A smoother v/f curve does improve internal clocks. I let it run a few minutes to get the desired voltage which was 1.080v.
Then I added +165 to 1.080v. Target clock: 2190MHz Internal Clock: 2022MHz Temp: 41C









Then I added +150 to the whole curve and +165 to the desired voltage (1.080v). Target clock changed to 2175MHz and the internal clock went up to 2141MHz. Temp: 41C









So people who don't have access to NVVDD/MSVDD voltage can increase internal clocks by getting a smoother v/f curve.

UPDATE: Okay, so with the "smoother v/f curve" the card would only do +165 on the desired voltage (2141MHz internal clock); anything higher crashes. But if I adjust NVVDD to fix the internal clocks with +165 on the desired voltage and the whole curve at +0, it can sustain +195 (2200MHz internal clock) without crashing.
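Putting those two runs side by side in terms of bins (assuming Ampere's usual 15 MHz clock step; the helper below is just illustrative):

```python
BIN_MHZ = 15  # assumed Ampere clock step size

def bins_lost(target_mhz: int, internal_mhz: int) -> int:
    """How many 15 MHz bins the internal (effective) clock sits below target."""
    return round((target_mhz - internal_mhz) / BIN_MHZ)

# The two runs above:
print(bins_lost(2190, 2022))  # 11 bins down with the steep curve
print(bins_lost(2175, 2141))  # 2 bins down with the smoother curve
```

A lower target that actually delivers its clocks beats a higher target that silently loses eleven bins.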


----------



## geriatricpollywog

long2905 said:


>


The People’s Paste.


----------



## ALSTER868

Hey guys, is everybody getting a lower offset in TS than in PR? In PR mine is flying @ +180 but in TS it can hardly do +165


----------



## Sheyster

ALSTER868 said:


> Mine does +190 in PR and +180 in TS though on stock bios. Should be above average chip it looks like.


How far can you push it at 950mv? Just curious as I've been experimenting with undervolting my Strix.


----------



## pewpewlazer

SoldierRBT said:


> Interesting. You're right. A smoother v/f curve does improve internal clocks. I let it run a few minutes to get the desired voltage which was 1.080v.
> Then I added +165 to 1.080v. Target clock: 2190MHz Internal Clock: 2022MHz Temp: 41C
> 
> 
> Then I added +150 to the whole curve and +165 to the desired voltage (1.080v) Target clock change to 2175MHz and internal clock went up to 2141MHz Temp: 41C
> 
> 
> So people that doesn't have access to NVVDD/MSVVDD voltage can increase internal clocks by getting a smoother v/f curve.
> 
> UPDATE: Okay so with the "smoother v/f curve" the card would only do +165 on desired voltage (2141MHz internal clock) anything higher crash but If I adjust NVVDD to fix the internal clocks on the +165 desired voltage/whole curve +0, It can sustain +195 (2200MHz internal clock) without crashing.


Is the "2200mhz internal clock" configuration with tweaked voltages truly faster in benchmarks (Port Royal or similar)? If so, the ability to adjust NVVDD/MSVVDD is a huge deal. Very interesting.


----------



## HyperMatrix

pewpewlazer said:


> Is the "2200mhz internal clock" configuration with tweaked voltages truly faster in benchmarks (Port Royal or similar)? If so, the ability to adjust NVVDD/MSVVDD is a huge deal. Very interesting.


The internal clock is the real clock and affects real performance. Being able to view this data I was easily able to identify proper OC settings and boost my previous max score of 14880 at what I thought was a locked 2190MHz to 15059 at 2175MHz. At the previous setting with 2190MHz I had no throttling of any kind in GPU-Z. Not power, not voltage, nothing. And it was locked the entire run. Power usage was also high around 500W so I had no reason to suspect anything was wrong. But obviously being able to see exactly what the card is doing...I was able to squeeze out more from it at a lower clock.
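Rough sanity check on those two runs: if PR score scales roughly linearly with effective clock (a crude assumption, just for ballpark), the "locked 2190MHz" run's real effective clock can be back-estimated:

```python
# Back-estimate the effective clock of the "locked 2190 MHz" run,
# assuming score scales ~linearly with effective clock (crude, but
# enough to show the requested clock wasn't the real one):
good_mhz, good_score = 2175, 15059  # run with verified clocks
old_score = 14880                   # run that reported a locked 2190 MHz
implied_mhz = good_mhz * old_score / good_score  # ~2149 MHz effective
```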


----------



## ALSTER868

Sheyster said:


> How far can you push it at 950mv? Just curious as I've been experimenting with undervolting my Strix.


Haven't done extensive testing on all V/F points yet, but in CP2077 it seems stable @2115/0.962v; at least I've been able to play several game sessions of more than an hour. The rest of my games run ok @2130/0.956v (Apex Legends, CoD Warzone) and stuff like that.


----------



## Nizzen

Powerdraw in GPU-z with Msi Suprim 3090 (air ambient 25c) with Kingpin 1000w bios @ 100%
+70 on core and +1000mem
Will try "kill a watt" later.


----------



## HyperMatrix

_duplicate - deleted_


----------



## SoldierRBT

pewpewlazer said:


> Is the "2200mhz internal clock" configuration with tweaked voltages truly faster in benchmarks (Port Royal or similar)? If so, the ability to adjust NVVDD/MSVVDD is a huge deal. Very interesting.


It should. When the internal clock matches the target clock, the performance gains in PR are very good. I was able to break 15K in PR with just 0.950v 2130MHz (2115MHz internal clock), where before the NVVDD tweaks I was getting 14.6K consistently.


----------



## Falkentyne

It would be nice if someone wrote a tool for FE users to adjust these voltages


----------



## jomama22

Falkentyne said:


> It would be nice if someone wrote a tool for FE users to adjust these voltages


You could probably get an EVC2 and solder directly to the PWM controllers. You could look up the datasheet and see where the I2C data connections are located on it.

Would be a bit of a pain, but if you can identify those and then scan the I2C bus to find the correct address, you could get elmor to add it to the EVC2 firmware.


----------



## Falkentyne

jomama22 said:


> You could probably get an evc2 and solder directly to the pwm controllers. Could look up the data sheet and see where the i2c data connections are located on it.
> 
> Would be a bit of a pain, but if you can identify those things and then scan the I2C bus to find the correct address, you could get elmor to add it to the evc2 firmware.


Sorry but that's way beyond my level of skill. I can barely assemble a computer or even apply thermal paste with the metal bar in my spine. Soldering like that and doing that kind of work would put me in the hospital.


----------



## jomama22

SoldierRBT said:


> Interesting. You're right. A smoother v/f curve does improve internal clocks. I let it run a few minutes to get the desired voltage which was 1.080v.
> Then I added +165 to 1.080v. Target clock: 2190MHz Internal Clock: 2022MHz Temp: 41C
> View attachment 2471652
> 
> 
> Then I added +150 to the whole curve and +165 to the desired voltage (1.080v) Target clock change to 2175MHz and internal clock went up to 2141MHz Temp: 41C
> View attachment 2471653
> 
> 
> So people that doesn't have access to NVVDD/MSVVDD voltage can increase internal clocks by getting a smoother v/f curve.
> 
> UPDATE: Okay so with the "smoother v/f curve" the card would only do +165 on desired voltage (2141MHz internal clock) anything higher crash but If I adjust NVVDD to fix the internal clocks on the +165 desired voltage/whole curve +0, It can sustain +195 (2200MHz internal clock) without crashing.
> View attachment 2471654


So what is the relationship between the core voltage in AB and the NVVDD voltage slider you have with the Classified tool? Are they one and the same, and the slider just overrides AB? Or is it a separate voltage entirely?


----------



## jomama22

Falkentyne said:


> Sorry but that's way beyond my level of skill. I can barely assemble a computer or even apply thermal paste with the metal bar in my spine. Soldering like that and doing that kind of work would put me in the hospital.


Fair enough, I wouldn't want to do it either lol.


----------



## sultanofswing

jomama22 said:


> So what is the relationship between the core voltage in AB and the NVVDD voltage slider you have with the Classified tool? Are they one and the same, and the slider just overrides AB? Or is it a separate voltage entirely?


There is no relationship to the voltage MSI afterburner shows and the actual nvvdd.
MSI afterburner only shows the lookup voltage.


----------



## jomama22

sultanofswing said:


> There is no relationship to the voltage MSI afterburner shows and the actual nvvdd.
> MSI afterburner only shows the lookup voltage.


Ok, so in that case the nvvdd slider is a pure core voltage override then?


----------



## sultanofswing

jomama22 said:


> Ok, so in that case the nvvdd slider is a pure core voltage override then?


Correct, NVVDD slider in the classified tool is direct core voltage control.


----------



## SoldierRBT

jomama22 said:


> So what is the relationship between core voltage in AB and the NVVDD voltage setting slider you have with the classified too? Are they one in the same and the slider just overrides AB? Or is it a separate voltage entirely?


I'm still not sure what the relationship between them is, but I think they're independent of each other. What I've found in my testing is that each card has its own NVVDD/MSVDD table, and it adjusts itself depending on the voltage requested (the voltage curve in MSI Afterburner). According to Igor's Lab, the RTX 3090 FE NVVDD chips are rated for 0.70v-1.20v, and we only have access up to 1.10v in MSI Afterburner. My card's 1.10v voltage point requires 1.175v NVVDD to get an efficient internal clock at 2250MHz. If I lock, let's say, 1.050v 2250MHz with the same 1.175v NVVDD, the test insta-crashes.


----------



## truehighroller1

Nizzen said:


> Can't see any power draw here?
> 
> Must be the whole system?


It was logged in afterburner. I looked at it and then minimized it again. I was just saying look at my scores with my old cpu combo because they're good scores that's all.


----------



## HyperMatrix

SoldierRBT said:


> I'm still not sure what's the relationship between them but I think they're independent from each other. What I've found in my testing is that each card have their own NVVDD/MSVDD table and it would adjust itself depending on the voltage requested (the voltage curve in MSI Afterburner). According to Igor's lab the RTX 3090 FE NVVDD chips are rated to 0.70v-1.20v and we only have access up to 1.10v in MSI afterburner. My card 1.10v voltage point requires 1.175v NVVDD to get efficient internal clock with 2250MHz. If I lock let's say 1.050v 2250MHz with same 1.175v NVVDD the test would insta crash.


I'm definitely not an expert in this area but from some of the testing I've done, I'd also have to agree with your assessment that NVVDD appears to be related to but different from your standard core voltage as you'd set in afterburner. At one point I had thought it was the same thing but when I started setting my clock speed based off of manual NVVDD adjustments (with the assumption that it was the same thing), I ran into issues.


----------



## sultanofswing

HyperMatrix said:


> I'm definitely not an expert in this area but from some of the testing I've done, I'd also have to agree with your assessment that NVVDD appears to be related to but different from your standard core voltage as you'd set in afterburner. At one point I had thought it was the same thing but when I started setting my clock speed based off of manual NVVDD adjustments (with the assumption that it was the same thing), I ran into issues.


Like @SoldierRBT said the voltage in Afterburner is just the lookup voltage.


----------



## HyperMatrix

sultanofswing said:


> Like @SoldierRBT said the voltage in Afterburner is just the lookup voltage.


So then shouldn't NVVDD adjustments in the classified tool completely override it?


----------



## SoldierRBT

HyperMatrix said:


> I'm definitely not an expert in this area but from some of the testing I've done, I'd also have to agree with your assessment that NVVDD appears to be related to but different from your standard core voltage as you'd set in afterburner. At one point I had thought it was the same thing but when I started setting my clock speed based off of manual NVVDD adjustments (with the assumption that it was the same thing), I ran into issues.


Correct, just finding the correct NVVDD voltage for a specific voltage point in MSI Afterburner gives a huge boost in internal clock. If you exceed the NVVDD voltage needed, the actual internal clock goes lower because the die produces unnecessary heat. I still haven't found the relationship of MSVDD with core voltage. Vince said in the stream not to go higher than 1.20v MSVDD. I found that 1.0v-1.05v MSVDD is enough for 0.9v-1.0v core voltage, and 1.05v-1.10v MSVDD for 1.0v-1.10v core voltage. I still need to find the efficient curve for MSVDD.
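
SoldierRBT's working MSVDD windows can be restated as a small lookup. This is purely observational from his card, not a general rule, and the function name is made up:

```python
# MSVDD ranges SoldierRBT reports working for given core voltage points.
# Observed on one RTX 3090 KPE only; outside the tested window we know nothing.
def msvdd_range_for_core(core_v: float):
    """Return the (low, high) MSVDD window reported for a core voltage point,
    or None if the core voltage is outside what was tested."""
    if 0.90 <= core_v < 1.00:
        return (1.00, 1.05)
    if 1.00 <= core_v <= 1.10:
        return (1.05, 1.10)
    return None  # untested territory (and per Vince: never above 1.20v MSVDD)
```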


----------



## HyperMatrix

SoldierRBT said:


> Correct, just finding the correct NVVDD voltage to the specific voltage point in MSI afterburner has a huge boost in internal clock. If you exceed NVVDD voltage needed, the actual internal clock would go lower because the die would produce unnecessary heat. I still haven't found the relationship of MSVDD with core voltage. Vince said in the stream don't go higher than 1.20v MSSVDD. I found out that 1.0v-1.05v MSVDD is enough for 0.9v-1.0v core voltage and 1.05v-1.10v MSVDD for 1.0v-1.10v core voltage. Still need to find the efficient curve of MSVDD.


Just as a side note...I read that for FBVDD, you need to reboot after applying the new voltage for it to take (but not shut down, as that would wipe all settings). You can't do it on the fly. At least that was the case with the 2080 Ti. So if you're playing around with that, keep that possibility in mind. This is all new to me so almost any info is new info. Haha.


----------



## sultanofswing

HyperMatrix said:


> So then shouldn't NVVDD adjustments in the classified tool completely override it?


Correct. At least on Turing it did, and MSI Afterburner would still only show the lookup voltage and not the NVVDD.


----------



## HyperMatrix

sultanofswing said:


> Correct, At least on Turing it did and MSI Afterburner would still only show the lookup voltage and not the nvvdd.


So then why are we both crashing when setting a "lower than stable voltage" in afterburner but setting the appropriate voltage through NVVDD in classified tool? I'll have to test this further when I'm home as some of my data and recollection regarding the testing could be flawed. 

Just to be clear on what I'm going to test later, based on what you're saying I should be able to set 0.8v in afterburner for 2100MHz. And then set NVVDD to 0.950-1.000V in classified tool, and it should run 2100MHz properly without a crash.


----------



## SoldierRBT

HyperMatrix said:


> Just as a side note...I read that for FBVVD, you need to reboot after applying the new voltage for it to take (but not shutdown, as that would wipe all settings). You can't do it on the fly. At least that was the case with the 2080 ti. So if you're playing around with that, keep that possibility in mind. This is all new to me so almost any info is new info. Haha.


If you completely remove all power to the PC, the settings you applied in the Classified tool go back to stock when you turn it on again. I have settings applied that persist through a reboot/shutdown for daily use. Quick question: what are your memory temps? Mine at stock (1.368-1.375v) go up to 70C after 2 hours of gaming. I'm running an undervolt of 1.30v at stock memory speed and they top out at 55C now. I can't go any lower than 1.30v because the card goes crazy (white and red lights blinking) and the OLED screen says Warning too low FBVDD.


----------



## sultanofswing

HyperMatrix said:


> So then why are we both crashing when setting a "lower than stable voltage" in afterburner but setting the appropriate voltage through NVVDD in classified tool? I'll have to test this further when I'm home as some of my data and recollection regarding the testing could be flawed.
> 
> Just to be clear on what I'm going to test later, based on what you're saying I should be able to set 0.8v in afterburner for 2100MHz. And then set NVVDD to 0.950-1.000V in classified tool, and it should run 2100MHz properly without a crash.


I can't say that for sure, as I personally don't use the curve function and dislike it; I always use offsets when benchmarking.
If I'm daily gaming I leave the card at stock.
It could be crashing because MSI Afterburner doesn't have direct access to the voltage controller, so it doesn't show the actual NVVDD, the loadline, or how much the NVVDD moves around. If you set MSI Afterburner to 2100MHz@1.056v and you can maintain 1.056v and 2100MHz, the Afterburner log will just show a constant 1.056v, but if you look at the NVVDD, either in PX1 or on the OLED of the card, you'll see it moves around a lot.


----------



## HyperMatrix

SoldierRBT said:


> If you completely remove all power to the PC, the settings you have applied to Classified tool would go back to stock settings when you turn it on again. I've settings applied that stick with reboot/turn off for daily use. Quick question, what are your memory temps? Mine on stock (1.368-1.375v) go up to 70C after 2 hours of gaming. I'm running undervolt of 1.30v in memory stock speed and they get up to 55C now. I can't go any lower than 1.30v because the card goes crazy (white and red lights blinking) and OLED screen says Warning too low FBVVD.


Yeah, I also have memory going to ~70C in gaming sessions, which is odd because I didn't have this problem with the FTW3 Hybrid, which always stayed under 60C. The FTW3 had cold plate contact with the memory. I assume the Kingpin doesn't do that and leaves the cold plate strictly to the GPU die, which would also explain why I get so much better cooling with the Kingpin than with the FTW3 Hybrid. But I've played hours-long Cyberpunk gaming sessions with +1250 on memory without a crash. I'd definitely want some additional cooling on the backplate. I have an old RAM water block I could probably stick on top of there with some thermal paste. But really I'm just waiting for a block at this point.


----------



## des2k...

ALSTER868 said:


> Hey guys, everybody is getting lower offset in TS than in PR? So in PR it's flying @+180 but in TS hardly +165


TS


SoldierRBT said:


> Interesting. You're right. A smoother v/f curve does improve internal clocks. I let it run a few minutes to get the desired voltage which was 1.080v.
> Then I added +165 to 1.080v. Target clock: 2190MHz Internal Clock: 2022MHz Temp: 41C
> View attachment 2471652
> 
> 
> Then I added +150 to the whole curve and +165 to the desired voltage (1.080v) Target clock change to 2175MHz and internal clock went up to 2141MHz Temp: 41C
> View attachment 2471653
> 
> 
> So people that doesn't have access to NVVDD/MSVVDD voltage can increase internal clocks by getting a smoother v/f curve.
> 
> UPDATE: Okay so with the "smoother v/f curve" the card would only do +165 on desired voltage (2141MHz internal clock) anything higher crash but If I adjust NVVDD to fix the internal clocks on the +165 desired voltage/whole curve +0, It can sustain +195 (2200MHz internal clock) without crashing.
> View attachment 2471654


this sounds too complicated + extra steps

if you start from defaults, just drag the 1080mv point to 2220 (2175 +2 or +3 bins)
then set max clocks: nvidia-smi -lgc 210,2175
you'll get your 2175 in Afterburner and ~2145 effective
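
The lock-clock step can be wrapped in a few lines. Only the `-lgc` and `-rgc` nvidia-smi flags are real; the helper function itself is just an illustration, and actually executing it needs admin rights plus an NVIDIA driver:

```python
import subprocess

def lock_gpu_clocks(min_mhz: int, max_mhz: int, run: bool = False):
    """Build (and optionally execute) the nvidia-smi lock-gpu-clocks command
    used above: `nvidia-smi -lgc 210,2175`."""
    cmd = ["nvidia-smi", "-lgc", f"{min_mhz},{max_mhz}"]
    if run:
        subprocess.run(cmd, check=True)  # needs driver + elevated privileges
    return cmd

# lock_gpu_clocks(210, 2175) just builds the command;
# `nvidia-smi -rgc` resets the clock lock afterwards.
```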


----------



## geriatricpollywog

I don't like the Classified tool for daily use because 1) the settings get reset and 2) fixed NVVDD means high idle power draw and possible GPU degradation. I'm getting good results from just flipping one LLC dipswitch and setting the sliders to +1300 mem and +150 core.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


----------



## sultanofswing

0451 said:


> I don't like the classified tool for daily use because 1) the settings get reset 2) fixed NVVDD means high idle power draw and possible GPU degradation. I am getting good results from just flipping 1 LLC dipswitch and setting sliders to +1300 mem and +150 core.
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


Hopefully the KPE 3090 ends up like my KPE 2080 Ti. I had lots of help, but I was able to get direct access to the fbvdd and the mvvdd through Afterburner, which pretty much eliminated the Classy tool for me.


----------



## geriatricpollywog

sultanofswing said:


> Hopefully it gets like it was with my KPE 2080ti for the KPE 3090, I had lots of help but was able to get direct access to the nvvdd and the mvvdd through afterburner which pretty much eliminated the classy tool for me.



I'm monitoring the NVVDD using the little television attached to the GPU. At idle, NVVDD is 0.000 to 0.001v. When browsing or watching a video, it's 0.745v. Port Royal has it jumping around between 1.05 and 1.20v.

However, when I set NVVDD using the Classified tool, it's either 0.000 or whatever I set it to. It does not go to an intermediate voltage of 0.745v. Is there a way to set an NVVDD offset manually while maintaining the curve? As far as I know, only the dipswitches can do this.


----------



## sultanofswing

0451 said:


> I am monitoring the NVVDD using the little television attached to the GPU. At Idle, NVVDD is 0.000 to 0.001v. When browsing or watching a video, its 0.745v. Port Royal has it jumping around between 1.05 and 1.20v.
> 
> However, when I set NVVDD using the Classified tool, its either 0.000 or whatever I set it to. It does not go to an intermediate voltage of 0.745v. Is there a way to set an NVVDD offset manually while maintaining the curve? As far as I know, only the dipswitches can do this.


I can't say for sure. I never used the Classy tool and the curve together on Turing, because the XOC BIOS for that card blocked out the curve function, and I never used the Classy tool with the non-XOC BIOS. But if I'm not mistaken, whatever you set the NVVDD to with the Classy tool will override anything set in MSI Afterburner, which is why Vince always said not to use more than one program to try to control voltages, as it could cause issues. It's worth a shot to try, I guess.
At the end of the day, if you set the Classy tool to 1.1v, the card will lock in 1.1v. There are no offsets you can apply except for the dipswitches, as you said, which get applied on top of any other voltage you may have set.


----------



## SoldierRBT

0451 said:


> I don't like the classified tool for daily use because 1) the settings get reset 2) fixed NVVDD means high idle power draw and possible GPU degradation. I am getting good results from just flipping 1 LLC dipswitch and setting sliders to +1300 mem and +150 core.
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


Dipswitches and traditional OC in MSI Afterburner/PX1 work very well. Each dipswitch adds 0.0125v to NVVDD, and flipping both gives you an almost identical internal clock. The problem is that when you try to set a voltage point/frequency in MSI Afterburner, the dipswitches don't add any voltage to NVVDD; the card just uses the internal NVVDD/MSVDD table for that specific voltage point. For example, a 0.950v undervolt at 2100MHz gets 0.993v from the internal table, but it needs 1.043v to match internal clocks efficiently. The dipswitches don't work in this scenario, and the Classified tool must be used.
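
The dipswitch arithmetic is simple enough to write down. This is just a restatement of the behavior described in this thread (function and parameter names are made up):

```python
# Each of the two KPE NVVDD dipswitches adds a fixed 0.0125v (both on = +0.025v),
# but per this thread the offset stops applying once you edit a manual v/f
# curve point, where the internal table voltage wins instead.
DIP_STEP_V = 0.0125

def nvvdd_with_dips(base_nvvdd: float, dips_on: int, curve_touched: bool) -> float:
    if curve_touched:   # curve edited: internal table voltage, dips ignored
        return base_nvvdd
    return round(base_nvvdd + dips_on * DIP_STEP_V, 4)

# Thread example: the 0.950v point pulls 0.993v from the internal table, but
# matching internal clocks needs ~1.043v, which only the Classified tool can set.
```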


----------



## geriatricpollywog

SoldierRBT said:


> Dipswitches and traditional OC in MSI afterburner/PX1 works very good. Each dipswitch adds 0.0125v to NVVDD and switching both gives you almost identical internal clock. The problem is when you trying to set a voltage point/frequency in MSI afterburner, the dipswitches don't add any voltage to NVVDD. It just uses the internal NVVDD/MSVDD table for that specific voltage point. For example, 0.950v undervolt 2100MHz gets 0.993v from the internal table and it needs 1.043v to efficient/match internal clocks. Dipswitches in this scenario don't work and Classified tool must be used.


I thought the dipswitches add a fixed offset voltage, on top of any other tools being used. Are you saying that adjusting the V/F curve overrides the dip switch NVVDD offset?


----------



## sultanofswing

The dipswitches are hardware; I'm pretty sure they override anything.


----------



## SoldierRBT

0451 said:


> I thought the dipswitches add a fixed offset voltage, on top of any other tools being used. Are you saying that adjusting the V/F curve overrides the dip switch NVVDD offset?


Yeap, the dipswitch effect doesn't apply if you touch the v/f curve in MSI Afterburner. You can try it and let me know the results.
With a normal +120 core OC and both switches on, you can see 0.025v NVVDD added to each voltage point, and the internal clock gets a boost.
Undervolt 0.950v locked at 2130MHz with both switches off: 0.993v NVVDD. Both switches on, same scenario: NVVDD still 0.993v.


----------



## sultanofswing

SoldierRBT said:


> Yeap the dipswitch effect doesn't apply if you touch the v/f curve in MSI afterburner. You can try it and let me know the results.
> Normal +120 OC core with both switches on you can see 0.025v NVVDD added to each voltage point and internal clock gets a boost.
> Undervolt 0.950v locked 2130MHz both switches off 0.993v NVVDD. Both switches on same scenario NVVDD still 0.933v


I guess that's another reason I don't use the curve editor. Too many chiefs, not enough Indians, if you know what I mean.


----------



## geriatricpollywog

SoldierRBT said:


> Yeap the dipswitch effect doesn't apply if you touch the v/f curve in MSI afterburner. You can try it and let me know the results.
> Normal +120 OC core with both switches on you can see 0.025v NVVDD added to each voltage point and internal clock gets a boost.
> Undervolt 0.950v locked 2130MHz both switches off 0.993v NVVDD. Both switches on same scenario NVVDD still 0.933v


On default voltage, what is the NVVDD displayed on the oled when browsing or watching a video? It should be similar to mine. 0.745v. Now undervolt to 0.95. What is the NVVDD now?


----------



## SoldierRBT

0451 said:


> On default voltage, what is the NVVDD displayed on the oled when browsing or watching a video? It should be similar to mine. 0.745v. Now undervolt to 0.95. What is the NVVDD now?


At idle the NVVDD voltage on mine is the same, around 0.745v. With both switches on it idles at 0.800v.
With a 0.950v undervolt it still idles at 0.745v/0.800v with both switches on, but under load the max is 0.993v NVVDD.

The switch effect not applying when you touch the v/f curve could probably be fixed with a firmware update. For daily use I'd suggest just adding an offset in MSI Afterburner and having both switches on for high internal clocks. For benchmarks, v/f curve voltage points + the Classified tool is the way to go.


----------



## geriatricpollywog

0451 said:


> On default voltage, what is the NVVDD displayed on the oled when browsing or watching a video? It should be similar to mine. 0.745v. Now undervolt to 0.95. What is the NVVDD now?





SoldierRBT said:


> On idle the NVVDD voltage on mine is the same around 0.745v. With both switches on it idles at 0.800v.
> Undervolt 0.950v still idles at 0.745v/0.800v both switch on but on load max is 0.993v NVVDD.
> 
> The switches effect not applying when touching the v/f curve could be probably be fixed with a firmware. For daily I'd suggest just adding an offset on MSI afterburner and have both switches on for high internal clocks. For benchmarks, v/f curve voltage points + Classified tool is the way to go.


By offset in afterburner, you mean frequency offset, right?

I think I’ll leave voltage stock and only use the LLC dips for daily.


----------



## SoldierRBT

Yeah, just add the frequency offset on the core clock that is stable for your card (+90, +120, +135, +150, +165, etc.) and set both NVVDD switches on. The card will automatically choose the v/f point it needs, and the switch effect is applied correctly. MSI Afterburner and Precision X1 both work perfectly; I tested both. The voltage slider only unlocks up to 1.10v in the v/f table. Maxing out the power slider is important to get the full 520W.


----------



## Biscottoman

Guys, is there any safe way to get the Classified tool working on a non-KPE card, or another way to manually set the NVVDD, so it's easier to achieve the perfect ratio between the Afterburner voltages and NVVDD?


----------



## sultanofswing

Biscottoman said:


> Guys is there any safe way to get the classified tool working on a not-kpe card or another way to manually set the nvvdd so it should be easier to achieve the perfect ratio between msi voltages and nvvdd?


Nope. If you want NVVDD control you need a KPE or wait and hope someone reverse engineers the Classy tool which may never happen.


----------



## bmgjet

Biscottoman said:


> Guys is there any safe way to get the classified tool working on a not-kpe card or another way to manually set the nvvdd so it should be easier to achieve the perfect ratio between msi voltages and nvvdd?


Change the I2C address and commands to match the controllers used on your graphics card.
But then, if you knew those I2C addresses and commands, you could just do it in Afterburner.
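
To illustrate what "I2C address and commands" means in practice: VRM controllers typically expose a VID register, where writing a code sets the rail voltage in fixed steps. The base voltage, step size, and the conversion below are all hypothetical placeholders, not any real controller's values; wrong numbers on real hardware can kill a card, so treat this strictly as a sketch:

```python
# Hypothetical VID encoding for a voltage-controller register. Real base/step
# values and the SMBus command set come from the controller's datasheet.
VID_BASE_V = 0.300   # assumed voltage at VID code 0
VID_STEP_V = 0.005   # assumed volts per VID code

def volts_to_vid(volts: float) -> int:
    """Encode a target rail voltage as a (hypothetical) VID register value."""
    return round((volts - VID_BASE_V) / VID_STEP_V)

def vid_to_volts(vid: int) -> float:
    """Decode a (hypothetical) VID register value back to volts."""
    return round(VID_BASE_V + vid * VID_STEP_V, 3)

# Under these assumed constants, 1.050v encodes to VID 150. The actual I2C
# write would go to the controller's address on the card's bus.
```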


----------



## geriatricpollywog

Biscottoman said:


> Guys is there any safe way to get the classified tool working on a not-kpe card or another way to manually set the nvvdd so it should be easier to achieve the perfect ratio between msi voltages and nvvdd?


Buildzoid can make a core voltage potentiometer for your card.


----------



## Thanh Nguyen

Is 950mv-2040MHz a good or bad card?


----------



## jomama22

0451 said:


> Buildzoid can make a core voltage potentiometer for your card.


May as well just buy an EVC2 at that point. Most digital PWM controllers work with it, and even the analog ones on the FTW3 do as well.


----------



## bmgjet

Thanh Nguyen said:


> 950mv-2055mhz is good or bad card?


What sort of load?
If it's an RTX load, that's very, very good.
If it's a raster load, that's upper average.


----------



## Thanh Nguyen

bmgjet said:


> What sort of load?
> RTX load then thats very very good.
> Raster load then thats upper average.


Metro Exodus RTX, DLSS off. Card is on air, 1000W BIOS, power slider at 52%.


----------



## gecko991

I would say pretty good.


----------



## Falkentyne

Thanh Nguyen said:


> Metro exodus rtx dlss off. Card is on air. 1000w bios power slide is at 52.


So you fixed your card? You never posted after you said your FE died. What happened?


----------



## des2k...

Thanh Nguyen said:


> Metro exodus rtx dlss off. Card is on air. 1000w bios power slide is at 52.


Thermspy (embedded link preview: "I found a link to a Nvidia program called Thermspy", www.techpowerup.com)

if 850mv 2055MHz is the actual frequency during Time Spy Extreme GT2, then yeah, that's an awesome card

I have doubts your card is actually doing 2055 at 850mv

Port Royal, or games that mix raster+RTX, aren't the most demanding load (the raster cores wait on the RTX cores to finish, so there's lots of idle); you can push low voltage / high frequency there that won't be stable in games like Horizon Zero Dawn (non-RTX) or Control (heavy RTX).

There's also RTX Quake lol, but that I wouldn't even run past 1800MHz; I know at stock it goes to 640W lol

After some more testing on my side, I ended up with 2100 (goes 2050-2080 effective) at 1018mv for GT2 (2h pass). TSE GT2 load is very representative of game load.


----------



## des2k...

des2k... said:


> Thermspy (embedded link preview, www.techpowerup.com)
> 
> if 850mv 2055mhz is the actual frequency during timespy extreme gt2, then yeah that's an awesome card
> 
> I have doubts your card is actually doing 2055 at 850mv
> 
> port royal, or games that have raster+rtx (load here is not the most demanding, as raster cores wait on RTX cores to finish, lots of idle); you can push low voltage/ high freq that won't be stable in games like Horizon Dawn non-rtx or Control heavy rtx.
> 
> There's also RTX Quake lol, but that I wouldn't even run past 1800mhz  , I know on stock goes to 640w lol
> 
> After some more testing on my side, I ended up with 2100 ( goes 2050 - 2080 for effective freq) at 1018mv) for gt2 (2h pass). TE GT2 load is very representative of game load.


*950mv


----------



## Thanh Nguyen

_was_


Falkentyne said:


> So you fixed your card? You never posted after you said your FE died. What happened?


I sent it to drunkfoo and hope he will fix it. I got an FTW3 now, but no block. I got a Strix from a friend, but the coil whine was crazy, so I returned it to him.
So Metro Exodus is not as demanding as Control? What is Time Spy Extreme GT2? I only see Time Spy Extreme.


----------



## dr/owned

jomama22 said:


> May as well just buy an evc2 at that point. Most digital pwm controllers work with it and even the analog ones in the ftw3 do as well.


I didn't find that adding voltage really made any difference to the max overclock anyway. My card was just frequency limited, probably by the architecture/fab process. Reminds me of current Intel stuff, where it just hits a wall and no amount of additional voltage helps.


----------



## jomama22

dr/owned said:


> I didn't find that adding voltage really made any difference to the max overclock anyways. My card was just frequency limited probably based on the architecture / fab process. Reminds me of current Intel stuff where it just hits a wall where no amount of additional voltage helps.


Think it really depends on the chip. The first Strix I had would easily go to 1.05-1.063v at stock, all on its own, in every benchmark, but clock to a measly 2075. This current Strix wants all the voltage you can give it and keeps going higher. At stock it wouldn't break 0.93v @ 2085 in any benchmark.


----------



## HyperMatrix

des2k... said:


> *950mv


950mv at 2055 isn't crazy. My FTW3 Hybrid does it. My friend's FTW3 Hybrid does 950mv at 2100MHz. I scored 14700 with 0.975v at 2130 on his card, although that was in my system, so the score is lower than what you'd expect with a better CPU. But it's in line with the score my Kingpin is getting.

The Kingpin got 15059 today at 2175MHz locked/verified. The FTW3 got ~14700 at 0.975v/2130MHz. 2175MHz is 2.1% higher; the score is 2.4% higher. My Kingpin was also +250MHz higher on memory.
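
Those percentages check out; the quick sanity check is:

```python
# Arithmetic check of the deltas quoted above: Kingpin run (2175MHz, score
# 15059) vs FTW3 run (2130MHz, score ~14700).
clock_gain_pct = (2175 / 2130 - 1) * 100
score_gain_pct = (15059 / 14700 - 1) * 100
assert round(clock_gain_pct, 1) == 2.1
assert round(score_gain_pct, 1) == 2.4
```

Score scaling slightly ahead of clock scaling is consistent with the extra +250MHz on memory.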


----------



## Thanh Nguyen

Why does my friend's Strix use only 370W in PR at that clock? His score is below 15k too.


----------



## Falkentyne

Thanh Nguyen said:


> _was_
> 
> i sent it to drunkfoo and hope he will fix it. Got a ftw3 now but no block. Got a strix from a friend but the coil whine is crazy so I returned it to him.
> So metro exodus is not demanding as control? What is timespy extreme gt2? I only see timespy extreme.


GT2 = Graphics Test #2, the one that hammers the power limit of every card not using the 1000W VBIOS (even shunted cards!)


----------



## HyperMatrix

Thanh Nguyen said:


> why my friend strix uses only 370w in PR at that clock? His score is lower than 15k too.
> View attachment 2471725



Two things: either GPU load is low, or that's not his real clock speed. On a properly overclocked 9900K/10900K with fast memory, an "average clock speed" of ~2200 should get approximately 15500-15600. If he's getting under 15k, his internal/real clock speed is much lower. Use the Thermspy tool des2k linked to verify the actual clock speeds.


----------



## Christopher2178

rhyno said:


> love it. its the only bios the truely unlocks the card. even the 520 watt bios has a power target of 430 so it will try to remain there. this 1000 watt bios will pull whatever the **** it wants haha. highest ive seen while gaming is 700 watts peak but its usually around 550-600 watts in game


That's awesome! How's your FTW3 set up? Hybrid kit? Water block? Shunted? How much is the PCIe slot power draw, like 100-120W+? Thanks for more info!


----------



## HyperMatrix

sultanofswing said:


> There is no relationship to the voltage MSI afterburner shows and the actual nvvdd.
> MSI afterburner only shows the lookup voltage.





jomama22 said:


> Ok, so in that case the nvvdd slider is a pure core voltage override then?





SoldierRBT said:


> I'm still not sure what's the relationship between them but I think they're independent from each other. What I've found in my testing is that each card have their own NVVDD/MSVDD table and it would adjust itself depending on the voltage requested (the voltage curve in MSI Afterburner). According to Igor's lab the RTX 3090 FE NVVDD chips are rated to 0.70v-1.20v and we only have access up to 1.10v in MSI afterburner. My card 1.10v voltage point requires 1.175v NVVDD to get efficient internal clock with 2250MHz. If I lock let's say 1.050v 2250MHz with same 1.175v NVVDD the test would insta crash.


Quoted those involved in this conversation. Guess I'll tag @Falkentyne too. After further testing, 1 of 2 things is known:

*1)* Either NVVDD is related to but *not* the same as the voltage you set in afterburner

or

*2)* NVVDD is the same as core voltage, but there's some unknown phenomenon that I can't explain.

Now...as tempting as #2 is...I'm going to lean more towards #1. Here's what I did:


- Set voltage in Afterburner to 0.85v for 2130MHz.
- Manually set NVVDD in the Classified tool to 1.05V. Crashed within seconds.
- Started increasing the NVVDD up to 1.15V. As I increased NVVDD, it took longer to crash (about 20-30 seconds at 1.15V), but it still crashed.
- Keeping NVVDD at 1.08V, which is very crash happy, and changing the Afterburner voltage back to 1.05V (which is less than NVVDD, and shouldn't affect anything if they were really one and the same) removed the crashing and stabilized the OC.
It's not that NVVDD isn't doing anything and is broken. It is working. You can see it change the power draw as well as help bring your internal clocks up to where your requested clocks are.

Knowing that I was able to do back-to-back runs in Port Royal without crashing at 1.037V-1.050V in Afterburner and 1.125V NVVDD with 2175MHz locked (no downclocking at all), it would make no sense that I couldn't do 2130MHz at 1.05V, or only have it stable for 20-30 seconds at an excessive 1.15V.

Also from my benching this morning...I switched from 1.037V for 2175MHz that I was doing last night, to 1.050V. That increased my stability. This is while NVVDD was already higher at 1.125. So while NVVDD being higher does help stabilize your OC even when the voltage in afterburner is too low for the requested clock speeds, it is not the same as increasing your voltage in afterburner.

*So in conclusion*...while I may not understand the inner workings of the card...I do know for a fact that I wasn't able to replace afterburner voltages with an NVVDD voltage override. Each one is doing something seemingly different. And as such I'm going back to my original OC method of finding a general voltage/frequency that works, then trying to stabilize it by boosting NVVDD.
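
The combinations reported in this post can be tabulated. This just restates the observations above as data; it is not a general stability model:

```python
# (afterburner_v, nvvdd_v, clock_mhz) -> stable, per the testing described above.
results = {
    (0.850, 1.050, 2130): False,  # crashed within seconds
    (0.850, 1.150, 2130): False,  # lasted 20-30 seconds, still crashed
    (1.050, 1.080, 2130): True,   # raising the Afterburner point fixed it
    (1.050, 1.125, 2175): True,   # back-to-back Port Royal runs, clock locked
}

# Consistent with the conclusion: stability tracks the Afterburner voltage
# point first; extra NVVDD helps but cannot substitute for it.
assert not results[(0.850, 1.150, 2130)]
assert results[(1.050, 1.080, 2130)]
```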


----------



## Falkentyne

HyperMatrix said:


> Quoted those involved in this conversation. Guess I'll tag @Falkentyne too. After further testing, 1 of 2 things is known:
> 
> *1)* Either NVVDD is related to but *not* the same as the voltage you set in afterburner
> 
> or
> 
> *2)* NVVDD is the same as core voltage, but there's some unknown phenomenon that I can't explain.
> 
> Now...as tempting as #2 is...I'm going to lean more towards #1. Here's what I did:
> 
> 
> Set voltage in afterburner to 0.85v for 2130MHz
> Manually set NVVDD in classified tool to 1.05V. Crashed within seconds
> Started increasing the NVVDD until 1.15V. As I increased NVVDD, it look longer to crash (about 20-30 seconds at 1.15V) but it still crashed.
> Keeping NVVDD at 1.08V which is very crash happy, and changing afterburner voltage back to 1.05V which is less than NVVDD, and shouldn't affect anything if they were really one and the same, removed the crashing and stabilized the OC.
> It's not that NVVDD isn't doing anything and is broken. It is working. You can see it change the power draw as well as help bring your internal clocks up to where your requested clocks are.
> 
> Knowing that I was able to do back to back runs in port royal without crashing at 1.037V-1.050V in afterburner and 1.125V NVVDD with 2175MHz locked (no downclocking at all), it would make no sense that I wouldn't be able to do 2130MHz at 1.05V. Or only have it stable for 20-30 seconds at 1.15V which is excessive.
> 
> Also from my benching this morning...I switched from 1.037V for 2175MHz that I was doing last night, to 1.050V. That increased my stability. This is while NVVDD was already higher at 1.125. So while NVVDD being higher does help stabilize your OC even when the voltage in afterburner is too low for the requested clock speeds, it is not the same as increasing your voltage in afterburner.
> 
> *So in conclusion*...while I may not understand the inner workings of the card...I do know for a fact that I wasn't able to replace afterburner voltages with an NVVDD voltage override. Each one is doing something seemingly different. And as such I'm going back to my original OC method of finding a general voltage/frequency that works, then trying to stabilize it by boosting NVVDD.


You're forgetting about Loadline Calibration.
LLC will cause voltage to drop at load compared to idle.
I don't think LLC affects MSI Afterburner's voltage reporting.
Maybe it affects NVVDD. I don't have this card...so.....


----------



## sultanofswing

HyperMatrix said:


> Quoted those involved in this conversation. Guess I'll tag @Falkentyne too. After further testing, 1 of 2 things is known:
> 
> *1)* Either NVVDD is related to but *not* the same as the voltage you set in afterburner
> 
> or
> 
> *2)* NVVDD is the same as core voltage, but there's some unknown phenomenon that I can't explain.
> 
> Now...as tempting as #2 is...I'm going to lean more towards #1. Here's what I did:
> 
> 
> Set voltage in afterburner to 0.85v for 2130MHz
> Manually set NVVDD in classified tool to 1.05V. Crashed within seconds
> Started increasing the NVVDD until 1.15V. As I increased NVVDD, it look longer to crash (about 20-30 seconds at 1.15V) but it still crashed.
> Keeping NVVDD at 1.08V which is very crash happy, and changing afterburner voltage back to 1.05V which is less than NVVDD, and shouldn't affect anything if they were really one and the same, removed the crashing and stabilized the OC.
> It's not that NVVDD isn't doing anything and is broken. It is working. You can see it change the power draw as well as help bring your internal clocks up to where your requested clocks are.
> 
> Knowing that I was able to do back to back runs in port royal without crashing at 1.037V-1.050V in afterburner and 1.125V NVVDD with 2175MHz locked (no downclocking at all), it would make no sense that I wouldn't be able to do 2130MHz at 1.05V. Or only have it stable for 20-30 seconds at 1.15V which is excessive.
> 
> Also from my benching this morning...I switched from 1.037V for 2175MHz that I was doing last night, to 1.050V. That increased my stability. This is while NVVDD was already higher at 1.125. So while NVVDD being higher does help stabilize your OC even when the voltage in afterburner is too low for the requested clock speeds, it is not the same as increasing your voltage in afterburner.
> 
> *So in conclusion*...while I may not understand the inner workings of the card...I do know for a fact that I wasn't able to replace afterburner voltages with an NVVDD voltage override. Each one is doing something seemingly different. And as such I'm going back to my original OC method of finding a general voltage/frequency that works, then trying to stabilize it by boosting NVVDD.


Like I originally said before, if nothing has changed since Turing then the voltage displayed in MSI Afterburner is just the lookup voltage. You tell MSI Afterburner "I want 1.093v", it looks up 1.093v in its little table, and if that's a point in the table it thinks it can run, it spits out what you see in the log: a steady 1.093v for the whole run.
Now if you look at the NVVDD (the actual voltage going to the core), it will be something different, generally bouncing around trying to target 1.093v, so it may go from 1.025-1.120v as an example.

At the end of the day with the KPE don't use Afterburner to try and control or log the voltage, Use PX1 and log the NVVDD that way and control it with either the dipswitches or Classy tool.
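The lookup-table behaviour described above can be put into a toy model. Everything here is illustrative (the table entries, droop resistance, and currents are made-up numbers, not EVGA's or NVIDIA's actual values); it only shows why the logged value stays flat while the rail moves:

```python
# Toy model of the two numbers in play: Afterburner logs the V/F table entry
# verbatim, while the regulator output (NVVDD) moves with load. All values
# here are illustrative, not real table data.

VF_TABLE = {2130: 0.850, 2175: 1.037, 2205: 1.093}  # MHz -> requested volts

def reported_voltage(freq_mhz):
    """What Afterburner logs: the table entry, flat for the whole run."""
    return VF_TABLE[freq_mhz]

def regulator_voltage(freq_mhz, load_amps, droop_mohm=0.5):
    """What the rail does: the target minus an I*R droop across the loadline."""
    target = VF_TABLE[freq_mhz]
    return round(target - load_amps * droop_mohm / 1000, 3)

# The log shows a steady 1.093v; the rail itself sags as current rises.
print(reported_voltage(2205))       # 1.093 regardless of load
print(regulator_voltage(2205, 60))  # lower once 60 A is flowing
```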


----------



## HyperMatrix

Falkentyne said:


> You're forgetting about Loadline Calibration.
> LLC will cause voltage to drop at load compared to idle.
> I don't think LLC affects MSI Afterburner's voltage reporting.
> Maybe it affects NVVDD. I don't have this card...so.....


Forgot to mention I repeated the tests with loadline off as well.


----------



## sultanofswing

Falkentyne said:


> You're forgetting about Loadline Calibration.
> LLC will cause voltage to drop at load compared to idle.
> I don't think LLC affects MSI Afterburner's voltage reporting.
> Maybe it affects NVVDD. I don't have this card...so.....


Correct, Afterburner doesn't have direct access to the voltage controller, so its lookup voltage does not reflect the LLC being applied.


----------



## sultanofswing

Offset clocks are the best clocks!


----------



## HyperMatrix

sultanofswing said:


> Like I originally said before, If nothing has changed since Turing then the voltage displayed in MSI afterburner is just the lookup Voltage. You may tell MSI Afterburner "I want 1.093v" so it Looks up in it's little table and finds 1.093v and if it is at a point in the table it thinks it can run it then it spits out what you see in the log 1.093v and will just show 1.093v steady for the run.
> Now if you look at the NVVDD(Actual voltage going to the Core) it will be something different generally bouncing around to try and Target 1.093v so it may go from 1.025-1.120v as an example.
> 
> At the end of the day with the KPE don't use Afterburner to try and control or log the voltage, Use PX1 and log the NVVDD that way and control it with either the dipswitches or Classy tool.


Help me understand this then. No curve set in precision. Just offset of +110. And for some reason voltage reported is 1.32? As you can see, NVVDD is set to 1.125V. I have no problem with being wrong. But I will need information that I can verify/validate.


----------



## sultanofswing

HyperMatrix said:


> Help me understand this then. No curve set in precision. Just offset of +110. And for some reason voltage reported is 1.32? As you can see, NVVDD is set to 1.125V
> 
> View attachment 2471728


You have Loadline set to 0. On my 2080ti KINGPIN A loadline of 0 overshoots the NVVDD by quite a bit.
Try this. Set the Loadline to 10, go to the HWM tab of PX1 and log NVVDD.
Now start adjusting the NVVDD in Classy tool and see what it does as you will be able to see it in Real time.
Also, from what I remember, that 1.320 reading you see might not be the NVVDD; it's just a voltage reading. If you look in PX1 HWM you should have a voltage monitor along with an NVVDD monitor, and my bet is they show different values.


----------



## Thanh Nguyen

Anyone know why my PCIe is at 8x instead of 16x? Is there any performance loss at 8x?


----------



## HyperMatrix

sultanofswing said:


> You have Loadline set to 0. On my 2080ti KINGPIN A loadline of 0 overshoots the NVVDD by quite a bit.


How about now.


----------



## sultanofswing

HyperMatrix said:


> How about now.
> 
> View attachment 2471731


You are still at a loadline of 1, the lower the loadline number the more it overshoots.
Just log the NVVDD Voltage in the hardware monitor of PX1, don't pay attention to what it says on the main screen.

Also the Sync button in Classy Tool will lock P0 State.


----------



## Falkentyne

HyperMatrix said:


> Forgot to mention I repeated the tests with loadline off as well.


loadline OFF?
The last time I turned off loadline on a GPU, it got to 100C in FIVE SECONDS and black screened...(used MSI Afterburner, or that GUI tool that was made to interface with afterburner's i2c commands)
(AMD 7970 if I remember)


----------



## HyperMatrix

sultanofswing said:


> You are still at a loadline of 1, the lower the loadline number the more it overshoots.
> Just log the NVVDD Voltage in the hardware monitor of PX1, don't pay attention to what it says on the main screen.
> 
> Also the Sync button in Classy Tool will lock P0 State.


loadline 1 is card default. Hardware monitor just shows the same voltage as afterburner. Straight from the table. Not in line with NVVDD in classified tool.


----------



## HyperMatrix

Falkentyne said:


> loadline OFF?
> The last time I turned off loadline on a GPU, it got to 100C in FIVE SECONDS and black screened...(used MSI Afterburner, or that GUI tool that was made to interface with afterburner's i2c commands)
> (AMD 7970 if I remember)


Correction. Loadline 0.


----------



## sultanofswing

HyperMatrix said:


> loadline 1 is card default. Hardware monitor just shows the same voltage as afterburner. Straight from the table. Not in line with NVVDD in classified tool.
> 
> View attachment 2471733


PX1 should have NVVDD in the options of the Hardware Monitor.


----------



## Falkentyne

HyperMatrix said:


> How about now.
> 
> View attachment 2471731





sultanofswing said:


> You are still at a loadline of 1, the lower the loadline number the more it overshoots.
> Just log the NVVDD Voltage in the hardware monitor of PX1, don't pay attention to what it says on the main screen.
> 
> Also the Sync button in Classy Tool will lock P0 State.


You guys are making the same mistakes people made with the Super I/O chip on motherboards.
Using "no vdroop" loadline calibration on motherboards and using the Super I/O chip to read vcore will cause vcore to be reported 50-150mv higher than what you set in bios, with the vcore being HIGHER at load than at idle. VR VOUT (read by some Gigabyte motherboards) and Vcore (on Maximus 11/12 motherboards with voltage set to "Die sense") will read the true voltage, however transient spikes and dips will not be shown unless you have an oscilloscope.

But this isn't the real reading. Die-sense (VCC_Sense) readings show that the voltage set with the tool is exactly the same as the voltage read, but it remains there at load and at idle, which will cause EXCESSIVE HEAT!

Most likely, the voltage PX1 is reading is similar to super i/o, so there is resistance drop across the power plane from the VRM supplying the voltage, to the chip that reads the voltage. This resistance drop causes the voltage read back to be _HIGHER_ than what is actually set.

At level 0 LLC, if you take a multimeter to the board and probe the actual VRM or whatever pin supplies the voltage (NOT the controller that reads the voltage!), it would then show 1.075v (from that classified tool screenshot) or whatever you tried to set.
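The resistance-drop argument above is simple arithmetic. This sketch uses made-up current and plane-resistance numbers, chosen only to land near the ~0.12v overshoot reported in this thread; it shows why a sense point upstream of the power plane reads high:

```python
# Sketch of the resistance-drop effect: if the chip that reads the voltage
# sits on the VRM side of a resistive power plane, it sees the die voltage
# plus the I*R drop, so the read-back overshoots the set point under load.
# Current and plane resistance are made-up numbers for illustration.

def readback_voltage(v_set, load_amps, plane_mohm):
    """Voltage at a sense point upstream of the plane resistance."""
    return round(v_set + load_amps * plane_mohm / 1000, 3)

# 1.075v set in the Classified tool, heavy load, ~1 mOhm of plane resistance:
print(readback_voltage(1.075, 120, 1.0))  # reads about 0.12v above the set point
```

At idle (near-zero current) the same model predicts set and read voltages converging, which is consistent with the idle readings posted later in the thread.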


----------



## HyperMatrix

sultanofswing said:


> PX1 should have NVVDD in the options of the Hardware Monitor.


It does. At 1.273V. It appears to be roughly in sync with the voltage on the main Precision X window. Setting loadline to 15 still has the reported voltage 0.12v higher than what I set NVVDD to. With the offset back down to 0, it's still reportedly pulling 1.17V with NVVDD at 1.05V. Resetting the Classified tool back to default settings, as well as Precision X, it's reportedly pulling more than 1.2V at 1995MHz.


Falkentyne said:


> You guys are making the same mistakes people made with the Super I/O chip on motherboards.
> Using "no vdroop" loadline calibration on motherboards and using the Super I/O chip to read vcore will cause vcore to be reported 50-150mv higher than what you set in bios, with the vcore being HIGHER at load than at idle. VR VOUT (read by some Gigabyte motherboards) and Vcore (on Maximus 11/12 motherboards with voltage set to "Die sense") will read the true voltage, however transient spikes and dips will not be shown unless you have an oscilloscope.
> 
> But this isn't the real reading. Die-sense (VCC_Sense) readings show that the voltage set with the tool is exactly the same as the voltage read, but it remains there at load and at idle, which will cause EXCESSIVE HEAT!
> 
> Most likely, the voltage PX1 is reading is similar to super i/o, so there is resistance drop across the power plane from the VRM supplying the voltage, to the chip that reads the voltage. This resistance drop causes the voltage read back to be _HIGHER_ than what is actually set.
> 
> At level 0 LLC, if you take a multimeter to the board and probe the actual VRM or whatever pin supplies the voltage (NOT the controller that reads the voltage!), it would then show 1.075v (from that classified tool screenshot) or whatever you tried to set.


If there is such a big difference whether you're at loadline 0 or 15, what good is monitoring the measurement through precision x then?


----------



## bmgjet

Here's the code that PX1 is using to read the voltage.

So be careful with how much you're running, since it looks like that's what's really going into the chip.


----------



## sultanofswing

Yea, looks like yours is overshooting by quite a bit. This is what my 2080ti KPE does. The pic may be hard to see, as I am on a 4K display.


----------



## SoldierRBT

HyperMatrix said:


> How about now.
> 
> View attachment 2471731


Are you sure all dipswitches are off? 1256mv seems a little bit high for stock and manual 1.075v NVVDD. Mine with the same settings reports around 1160mv in X1.

Here's the same but with everything on auto. 1120mv

Both dipswitches on. 1232mv and higher internal clock.


----------



## Falkentyne

HyperMatrix said:


> It does. At 1.273V. It appears to be roughly in sync with the voltage on the main Precision X window. Setting loadline to 15 still has the reported voltage 0.12v higher than what I set NVVDD to. With the offset back down to 0, it's still reportedly pulling 1.17V with NVVDD at 1.05V. Resetting the Classified tool back to default settings, as well as Precision X, it's reportedly pulling more than 1.2V at 1995MHz.
> 
> View attachment 2471734
> 
> 
> 
> 
> If there is such a big difference whether you're at loadline 0 or 15, what good is monitoring the measurement through precision x then?


It's useful as a reference only, the same way Super I/O CPU VCore readings were.
Die-sense is not a thing on GPUs; just look at how long it took for die-sense to become relevant on CPUs. It took YEARS. And to answer your question:
the Z490 EVGA Dark STILL does not have die-sense vcore! It uses the Super I/O... that should tell you something.


----------



## sultanofswing

I don't have a 3090 KPE here to test with, but at least on my card, the lower the loadline number in the Classy tool, the more it overshot. I played with the loadline a lot, and if you let it overshoot too much it can cause just as much instability as letting it droop way too much.

For kicks, what does the voltage report if you set the NVVDD in the Classy tool and don't have the card in a P0 state (idle on the desktop)?


----------



## HyperMatrix

SoldierRBT said:


> Are you sure all dipswitches are off? 1256mv seems a little bit high for stock and manual 1.075v NVVDD. Mine with the same settings reports around 1160mv in X1.
> 
> View attachment 2471736
> 
> 
> Here's the same but with everything on auto. 1120mv
> 
> View attachment 2471738
> 
> 
> Both dipswitches on. 1232mv and higher internal clock.
> 
> View attachment 2471739


Yeah I haven't touched the dip switches at all. Just checked and they’re all off.


----------



## Falkentyne

Never mind. It doesn't work. "Device not supported"


----------



## SoldierRBT

sultanofswing said:


> Offset clocks are the best clocks!


Not true. Offset clocks are generally easy to set, but if you want your card to be efficient, you need to tweak the V/F curve. A proper undervolt will give you the same performance/clocks as an offset overclock that is pulling 50W more.
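The undervolt-vs-offset point follows from first-order dynamic power scaling, roughly f·V² at a fixed clock. A quick sketch, using illustrative wattages rather than any measured card:

```python
# First-order model: at a fixed clock, dynamic power scales roughly with V^2
# (frequency is constant, leakage is ignored), so dropping the curve point
# cuts wattage quadratically. Wattages and voltages here are illustrative.

def scaled_power(p_ref, v_ref, v_new):
    """Estimate power at the same clock after changing core voltage."""
    return round(p_ref * (v_new / v_ref) ** 2)

# A ~500 W offset OC at 1.050v, re-run undervolted to 0.943v at the same clock:
print(scaled_power(500, 1.050, 0.943))  # on the order of 100 W saved
```

Real cards add static/leakage power and clock-stretching effects, so treat this as a rule of thumb, not a prediction.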


----------



## sultanofswing

Falkentyne said:


> Does the classified tool work on ANY non eVGA card?
> If I try to run it on a 3090 FE, will the magic smoke come out?


It worked on my Toaster Oven so it should.


----------



## sultanofswing

SoldierRBT said:


> Not true. Offset clocks are generally easy to set but If you want your card to be efficient, you need to tweak the v/f curve. A proper undervolt will let you have the same performance/clocks as an offset overclock that is pulling 50W more.


I am talking about all-out overclocking, not efficiency. My card always got better scores using offset clocks and the Classy tool.


----------



## Keninishna

So I downloaded ThermSpy to see why my 3DMark scores were low and found out the card's PCIe is running at 4x link speed, lol. I tried moving the card to another slot on the mobo and unplugging all the M.2 SSDs, and it's still gimped at 4x speed. Any ideas on what to try next? Get a Gen4 board and run it at 4x Gen4? lol.
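For anyone chasing a downtrained link like this, nvidia-smi can report the negotiated PCIe width directly. A small sketch; the query fields are standard nvidia-smi properties, and the parser is split out so it can be checked against a sample string without a GPU present:

```python
# Checking the negotiated PCIe link width with nvidia-smi. The query fields
# (pcie.link.width.current / .max) are standard nvidia-smi properties; the
# parsing is separated out so it works on a sample string on any machine.

import subprocess

def parse_link_widths(csv_line):
    """Parse 'current, max' from nvidia-smi csv,noheader,nounits output."""
    cur, mx = (int(x) for x in csv_line.strip().split(","))
    return cur, mx

def query_link_widths():
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=pcie.link.width.current,pcie.link.width.max",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    return parse_link_widths(out)

# A card stuck at x4 in an x16 slot would report:
print(parse_link_widths("4, 16"))  # (4, 16)
```

Note that many cards downtrain the link at idle to save power, so the current width is only meaningful when read under load.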


----------



## HyperMatrix

sultanofswing said:


> I don't have a 3090KPE here to test with but at least on my card the lower the loadline number in the Classy tool the more it overshot. I would play with the loadline a lot and if you let it overshoot too much it can cause just as much instability as letting it droop way too much.
> 
> For kicks what does the Voltage report if you set the nvvdd in classy tool and not have the card in a P0 state(idle on Desktop).


Well I can't really idle. My card is always in a boosted state. 4K/144Hz G-Sync issue. Or maybe something else. But with card set back to full default settings, it appears to be 0.07v higher than NVVDD. So at 1.075 which was set under 'Auto' setting it's sitting at around 1.14V. If I set NVVDD down to 1.000V it'll read 1.070V in precision X. But yeah my card is at 2055MHz on the desktop. Which is interesting...because even when my previous cards would stay in a boosted state, it was usually in the 1700s or 1800s.


----------



## sultanofswing

HyperMatrix said:


> Well I can't really idle. My card is always in a boosted state. 4K/144Hz G-Sync issue. Or maybe something else. But with card set back to full default settings, it appears to be 0.07v higher than NVVDD. So at 1.075 which was set under 'Auto' setting it's sitting at around 1.14V. If I set NVVDD down to 1.000V it'll read 1.070V in precision X. But yeah my card is at 2055MHz on the desktop. Which is interesting...because even when my previous cards would stay in a boosted state, it was usually in the 1700s or 1800s.


Set refresh rate to 120hz


----------



## HyperMatrix

sultanofswing said:


> Set refresh rate to 120hz


It's normally set to 98Hz. I've tried dropping it down to 60Hz as well. No diff.


----------



## sultanofswing

HyperMatrix said:


> It's normally set to 98Hz. I've tried dropping it down to 60Hz as well. No diff.


You run the driver at prefer maximum performance by any chance? I know anytime I have and set it back to optimal i would have to reboot for the clocks to drop to normal.


----------



## HyperMatrix

sultanofswing said:


> You run the driver at prefer maximum performance by any chance? I know anytime I have and set it back to optimal i would have to reboot for the clocks to drop to normal.


I normally do. Tried changing it to normal 15 minutes ago and rebooted. No difference. Gonna DDU it for good measure right now.


----------



## bmgjet

Keninishna said:


> So I downloaded thermspy to see why my 3dmark scores were low and found out the card pcie is running at 4x link speed lol. I tried moving the card to another slot in the mobo and unplugging all the m2 ssds and its still gimped at 4x speed. Any ideas on what to try next? get a gen4 board and run it at 4x gen4? lol.


Get the manual for your motherboard and find which lanes are shared with other devices. You might need to turn off some USB/Wi-Fi, etc.





HyperMatrix said:


> Well I can't really idle. My card is always in a boosted state. 4K/144Hz G-Sync issue. Or maybe something else. But with card set back to full default settings, it appears to be 0.07v higher than NVVDD. So at 1.075 which was set under 'Auto' setting it's sitting at around 1.14V. If I set NVVDD down to 1.000V it'll read 1.070V in precision X. But yeah my card is at 2055MHz on the desktop. Which is interesting...because even when my previous cards would stay in a boosted state, it was usually in the 1700s or 1800s.


They fixed that ages ago. Mine idles fine with 2X pg27uq @ 4K 144hz with gsync on and 1X QX2710LED 1440P.
Change the power state to normal in the driver. And the adjust image settings to "Use advanced 3D image settings"


----------



## sultanofswing

I never set the driver to prefer maximum performance, In my case it has never made a single FPS difference in anything I have ever ran.


----------



## Falkentyne

Keninishna said:


> So I downloaded thermspy to see why my 3dmark scores were low and found out the card pcie is running at 4x link speed lol. I tried moving the card to another slot in the mobo and unplugging all the m2 ssds and its still gimped at 4x speed. Any ideas on what to try next? get a gen4 board and run it at 4x gen4? lol.


Try clearing bios CMOS.


----------



## Falkentyne

sultanofswing said:


> I never set the driver to prefer maximum performance, In my case it has never made a single FPS difference in anything I have ever ran.


It will if you try to run a VERY old game. If it is not at prefer maximum performance, the game will stutter if it runs at 300 core clock....


----------



## sultanofswing

Falkentyne said:


> It will if you try to run a VERY old game. If it is not at prefer maximum performance, the game will stutter if it runs at 300 core clock....


Yea in that case I would just set it up for just that game and not globally.


----------



## HyperMatrix

bmgjet said:


> Get the book for your motherboard and find what lines are shared with other stuff. You might need to turn off some USB/Wifi ect.
> 
> They fixed that ages ago. Mine idles fine with 2X pg27uq @ 4K 144hz with gsync on and 1X QX2710LED 1440P.
> Change the power state to normal in the driver. And the adjust image settings to "Use advanced 3D image settings"


DDU solved the not idling thing. At idle, auto NVVDD settings are 0.75V and Precision X is reporting 0.76-0.77 which is fine. Under load, with card at full stock. So no offset to gpu, or to memory, and power limit at default 100% and nothing changed in classified tool, it's still reporting over 1.2V being pulled.

sultanofswing said:


> I never set the driver to prefer maximum performance, In my case it has never made a single FPS difference in anything I have ever ran.



Under load, again with full stock card settings and NVVDD set to 1.000v precision X is reporting around 0.12V higher with load line 1. 

Load line at 15 helps a bit but still around 0.1V higher than NVVDD setting. The confusing part is that if these numbers are accurate, then even the starting voltage at stock settings is above the maximum manual/accessible voltage through afterburner and precision x.

At this point I'm not sure if this is a reporting issue specific to Kingpin cards, or if there's something wrong with this card. I mean it's operating properly. It's not heating up any higher than I'd expect. The only oddity is the fact that Precision X is reporting NVVDD to be that high.


----------



## sultanofswing

HyperMatrix said:


> DDU solved the not idling thing. At idle, auto NVVDD settings are 0.75V and Precision X is reporting 0.76-0.77 which is fine. Under load, with card at full stock. So no offset to gpu, or to memory, and power limit at default 100% and nothing changed in classified tool, it's still reporting over 1.2V being pulled.
> 
> View attachment 2471748
> 
> 
> 
> 
> 
> Under load, again with full stock card settings and NVVDD set to 1.000v precision X is reporting around 0.12V higher with load line 1.
> View attachment 2471749
> 
> 
> 
> 
> Load line at 15 helps a bit but still around 0.1V higher than NVVDD setting. The confusing part is that if these numbers are accurate, then even the starting voltage at stock settings is above the maximum manual/accessible voltage through afterburner and precision x.
> View attachment 2471750
> 
> 
> 
> At this point I'm not sure if this is a reporting issue specific to Kingpin cards, or if there's something wrong with this card. I mean it's operating properly. It's not heating up any higher than I'd expect. The only oddity is the fact that Precision X is reporting NVVDD to be that high.


Yea, it's overshooting quite a bit. Like I said, I don't have a 3090 KPE on hand; I missed the first two runs but should have one on the next run.
When I was messing with my 2080ti KPE I went through the same thing, trying to figure out what was what, and finally just said screw it and set the NVVDD where I needed it for the card to run the clock I desired and not crash.
I got tired of chasing the voltage curve table, stuck with offsets, and off I went.
With Ampere being as inefficient as it is, I can see why most want to use the voltage curve though.
I'll talk with Vince and get his opinion, but I know he is kinda the same way I am: set the NVVDD where you want and go for it.
The only way to know the actual true reading would be to use the probe header and a DMM and see where it lines up.


----------



## Keninishna

Falkentyne said:


> Try clearing bios CMOS.


I just tried resetting cmos and link speed still 4x. Thanks for the tip though.

The only device that can pull from the pcie lanes is one m2 ssd slot and I tried removing that and no change.

The card may just be damaged from reassembling it too many times. I dunno, I might be able to get some money if I sell it on eBay as partially working; it still scores 13k in PR at 4x, lol.


----------



## Falkentyne

Keninishna said:


> I just tried resetting cmos and link speed still 4x. Thanks for the tip though.
> 
> The only device that can pull from the pcie lanes is one m2 ssd slot and I tried removing that and no change.
> 
> The card may just be damaged from reassembling it too many times. I dunno I might be able to get some money if I sell it on ebay as partially working it still scores 13k in pr at 4x lol.


Try cleaning the entire gold fingers with Deoxit D5.


----------



## HyperMatrix

sultanofswing said:


> Yea it's overshooting quite a bit, Like I said I don't have a 3090KPE on hand, Missed the first 2 runs but should have one the next run.
> When I was messing with my 2080ti KPE I went through the same thing like trying to figure out what was what and finally just said screw it and just set the NVVDD where I needed it for the card to run the clock i desired and not crash.
> I got tired of chasing the Voltage curve table and stuck with offsets and off I went.
> With Ampere being as inneficient as it is I can see why most want to use the Voltage curve though.
> I'll Talk with Vince and get his opinion but I know he is kinda the same way I am, Set the NVVDD where you want and go for it.
> Only way to know the actual true reading would be use the Probe header and a DMM and see where it lines up.


See the main issue I have with the reading is that for my 2175MHz PR run, with actual 2171-2174MHz internal clocks, and +1350MHz memory, and Precision X reporting over 1.3V, I never once hit the power limit on the standard 520W bios. So how would it be possible to do 2175MHz at 1.3V in PR and not go over 520W? The FTW3 Hybrid cards I had would hit their 500W~ approximate power limits with 2130-2144 at 1V (although to be fair..I never checked their NVVDD reading...but I can guarantee I wasn't seeing anything close to what the Kingpin is doing when I played around with Precision).


----------



## sultanofswing

Wished I knew, my 2080ti kingpin at 1.3v and 2220mhz was 598w.


----------



## SoldierRBT

HyperMatrix said:


> See the main issue I have with the reading is that for my 2175MHz PR run, with actual 2171-2174MHz internal clocks, and +1350MHz memory, and Precision X reporting over 1.3V, I never once hit the power limit on the standard 520W bios. So how would it be possible to do 2175MHz at 1.3V in PR and not go over 520W? The FTW3 Hybrid cards I had would hit their 500W~ approximate power limits with 2130-2144 at 1V (although to be fair..I never checked their NVVDD reading...but I can guarantee I wasn't seeing anything close to what the Kingpin is doing when I played around with Precision).


I think each card has its own internal NVVDD/MSVDD table, similar to VID on Intel CPUs. From what I've seen, your card is fine and working perfectly. My chip is leaky compared to yours. At 1.025v 2205MHz it's already hitting 518W; 1.037v, I think, is not possible with the 520W BIOS.


----------



## SoldierRBT

2100MHz avg. is enough to break 15K in Port Royal.

0.943v locked at 2130MHz, max power draw: 426W

I scored 15,026 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## HyperMatrix

SoldierRBT said:


> I think each card has its own internal NVVDD/MSVVDD table. Similar to VID on Intel CPUs. From what I’ve seen your card is fine and it’s working perfectly. My chip is leaky compare to yours. At 1.025v 2205MHz its already hitting 518W. 1.037v I think it is not possible with the 520W BIOS


1.025v at 2205MHz? This with or without manual NVVDD in classified tool? Internal clocks confirmed with thermspy? What’s your NVVDD reading in precision X?


----------



## sultanofswing

@HyperMatrix


----------



## SoldierRBT

HyperMatrix said:


> 1.025v at 2205MHz? This with or without manual NVVDD in classified tool? Internal clocks confirmed with thermspy? What’s your NVVDD reading in precision X?


I posted the result a few days ago. Yes, the internal clocks were confirmed with ThermSpy. I got 15,402 with 2205MHz set and a 2164MHz average because of temps.



https://www.3dmark.com/pr/702708


----------



## HyperMatrix

sultanofswing said:


> @HyperMatrix


That's a huge relief. Thanks man. Much appreciated.


----------



## HyperMatrix

SoldierRBT said:


> I posted the result a few days ago. Yes, the internal clocks were confirmed with Thermspy. I got 15,402 2205MHz set and 2164MHz avg because of temps.
> 
> 
> 
> https://www.3dmark.com/pr/702708
> 
> 
> 
> View attachment 2471759


What was your selected NVVDD in classified tool and what was your reported voltage in Precision X1? Based on some additional testing and arguing with @sultanofswing I actually made some good discoveries. The reason I wasn't able to replace voltage in afterburner directly with the same voltage in the Classified tool is because the 2 voltages are essentially separate. NVVDD is auto-selected based on your chosen voltage and heat and goes up/down. So for example, just because I've selected 1V in afterburner, doesn't mean NVVDD is going to be set to 1V. It'll likely be higher. And more importantly, it'll likely need to be higher in order to get true clocks. Furthermore...because of loadline with KPE and what Vince said above...we're actually getting a lot more voltage into the card than what's selected in classified tool.

Essentially what I mean is that when I said I did 2175MHz with 1.037V....sure that's what afterburner was showing because that's what I chose...but it definitely isn't what the card was getting. So most likely the 1.025 you had at 2205MHz also wasn't actually 1.025V either. Would be nice to compare the actual voltage between KPE cards.


----------



## SoldierRBT

HyperMatrix said:


> What was your selected NVVDD in classified tool and what was your reported voltage in Precision X1? Based on some additional testing and arguing with @sultanofswing I actually made some good discoveries. The reason I wasn't able to replace voltage in afterburner directly with the same voltage in the Classified tool is because the 2 voltages are essentially separate. NVVDD is auto-selected based on your chosen voltage and heat and goes up/down. So for example, just because I've selected 1V in afterburner, doesn't mean NVVDD is going to be set to 1V. It'll likely be higher. And more importantly, it'll likely need to be higher in order to get true clocks. Furthermore...because of loadline with KPE and what Vince said above...we're actually getting a lot more voltage into the card than what's selected in classified tool.
> 
> Essentially what I mean is that when I said I did 2175MHz with 1.037V....sure that's what afterburner was showing because that's what I chose...but it definitely isn't what the card was getting. So most likely the 1.025 you had at 2205MHz also wasn't actually 1.025V either. Would be nice to compare the actual voltage between KPE cards.


Yeah, I already explained this a few pages back. NVVDD and core voltage are independent from each other. Each card has its own internal NVVDD/MSVDD table, and it adjusts itself depending on the voltage requested (the voltage points in the curve of MSI Afterburner). According to Igor's Lab, the RTX 3090 FE NVVDD chips are rated for 0.70v-1.20v, and we only have access up to 1.10v in MSI Afterburner.

My 1.025v voltage point is efficient/matches internal clocks when NVVDD is set to 1.075v. I'm not sure what the NVVDD was after LLC; I don't use Precision X1 and I have the OLED set to show power draw.
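The per-card table idea can be sketched as a simple clamped lookup. The table entries below are invented for one hypothetical card; only the 0.70-1.20v rated window comes from the post above:

```python
# Clamped-lookup sketch of a per-card NVVDD table: each requested curve
# voltage maps to a card-specific NVVDD, limited to the controller's rated
# 0.70-1.20v window. The table entries are invented for one hypothetical card.

CARD_NVVDD_TABLE = {0.943: 0.975, 1.025: 1.075, 1.100: 1.175}

def nvvdd_for_request(v_request, vmin=0.70, vmax=1.20):
    """Card-specific NVVDD for a requested curve point, clamped to spec."""
    return min(max(CARD_NVVDD_TABLE[v_request], vmin), vmax)

print(nvvdd_for_request(1.025))  # this card needs 1.075v to match its clocks
```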


----------



## HyperMatrix

SoldierRBT said:


> Yeah, I already explained this a few pages back. NVVDD and core voltage are independent from each other. Each card has its own internal NVVDD/MSVDD table, and it adjusts itself depending on the voltage requested (the voltage points in the curve of MSI Afterburner). According to Igor's Lab, the RTX 3090 FE NVVDD chips are rated for 0.70v-1.20v, and we only have access up to 1.10v in MSI Afterburner.
> 
> My 1.025v voltage point is efficient/matches internal clocks when NVVDD is set to 1.075v. I'm not sure what the NVVDD was after LLC; I don't use Precision X1 and I have the OLED set to show power draw.


1.075 for 2205 is pretty damn good man. Can’t wait to see what your card does under water with XOC.


----------



## SoldierRBT

HyperMatrix said:


> 1.075 for 2205 is pretty damn good man. Can’t wait to see what your card does under water with XOC.


I think since the NVVDD chips in these cards are rated up to 1.20v, that's enough voltage to make the Afterburner voltage table efficient from 0.70-1.10v. When I got 15.7k on the XOC BIOS, I was using 1.25v NVVDD at 1.10v and pulling 620W+. Now that I know how internal clocks work, my 1.10v voltage point is efficient/matches internal clocks with only 1.175v. Going from 1.25v to 1.175v NVVDD is probably 40W less, which would mean much less heat and probably higher average clocks in the benchmark.

Could you test on your card what NVVDD it needs to become efficient at 1.10v?
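For what it's worth, the ~40W figure is consistent with simple quadratic voltage scaling at a fixed clock, if you assume (purely for illustration, the actual split is unknown) that roughly 350W of the 620W board power is switched on the NVVDD rail:

```python
def rail_power_after_vdrop(rail_watts: float, v_old: float, v_new: float) -> float:
    """Dynamic power scales roughly with V^2 at a fixed clock, so the rail
    power after dropping the voltage is P * (v_new / v_old)^2."""
    return rail_watts * (v_new / v_old) ** 2

# Assumed split: ~350W of the 620W board power on the NVVDD (core) rail.
nvvdd_watts = 350.0
saved = nvvdd_watts - rail_power_after_vdrop(nvvdd_watts, 1.25, 1.175)
print(f"~{saved:.0f} W saved")  # roughly 40 W
```

Board power also includes the memory rail and VRM losses, so this is a back-of-envelope sanity check, not a measurement.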


----------



## lolhaxz

Just quickly hacked in some NVVDD output rail reporting since you guys were banging on about it haha...

NVVDD on my Strix just follows the core voltage... as you would expect, considering it is the core voltage... perhaps 15-20mV higher depending on load; the higher the load, the closer it reads to the expected core voltage (reading the output rail from NVAPI). When the power draw gets higher it does tend to sag a little, anywhere up to 25mV.

Not sure why the API reports three of them; only the first one tends to... do anything different.

DLSS Off: (screenshot)

DLSS On: (screenshot)

In a load like Forza Horizon 4, it tends to sag a lot less: (screenshot)


----------



## SoldierRBT

@lolhaxz Interesting. Thanks for sharing. So on your Strix, NVVDD can run up to 25mV above the core voltage reported in Afterburner. Have you checked with ThermSpy whether the internal clocks match the clocks reported in MSI Afterburner?


----------



## dante`afk

Thanh Nguyen said:


> Anyone know why my PCIe is at 8x instead of 16x? Any performance loss at 8x?


0.5-1fps


----------



## Krytecks

Hello guys, little question: my 3090 burned a few days ago and now I have to send it back for repair/refund, but I have a custom BIOS on it and I can't flash it back now... (The PC won't boot with both 8-pin power connectors plugged in; one of the two ports is "dead" and instantly shuts down the PC when I try to boot, the other is OK but still no display.) Nvflash doesn't see the 3090, but I have some "unknown" PCI components detected in Windows. Do you think they have a way to know? 😕 It's the Gainward 3090 Phoenix, only one BIOS. Thanks for your help guys


----------



## Pepillo

Krytecks said:


> Hello guys, little question: my 3090 burned a few days ago and now I have to send it back for repair/refund, but I have a custom BIOS on it and I can't flash it back now... (The PC won't boot with both 8-pin power connectors plugged in; one of the two ports is "dead" and instantly shuts down the PC when I try to boot, the other is OK but still no display.) Nvflash doesn't see the 3090, but I have some "unknown" PCI components detected in Windows. Do you think they have a way to know? 😕 It's the Gainward 3090 Phoenix, only one BIOS. Thanks for your help guys


I think they can find out; they could take the BIOS chip out of its socket and read it, for example. But I don't think they will: the normal thing would be for the card to be tested and, seeing that it doesn't work, replaced. That's if the warranty is handled by the store; if you go to the manufacturer, it's more likely they will locate the fault.


----------



## bmgjet

Takes like 30 seconds to read/write the BIOS with a SOP-8 clip.


----------



## Krytecks

bmgjet said:


> Takes like 30 seconds to read/write the BIOS with a SOP-8 clip.


Do you think I need to buy a SOP-8 kit and try it? Even if the GPU doesn't even boot?


----------



## devilhead

Got a waterblock for the Strix; now I just need a cherry-picked 3090 Strix. My previous one was a bad memory overclocker, +800 max.









Now I have a 3090 Gigabyte OC; the memory can do +1600 and more. I just don't dare to throw 1000W at it; I need a waterblock at least, and those stupid power extensions could still catch fire.


----------



## ALSTER868

devilhead said:


> My previous one was a bad memory overclocker, +800 max.


I tried performance in PR with different memory offsets. Raising from +850 to +1250 gave maybe 50 extra score points or so.
Did you manage to compare performance between +800 and +1600? Does it give any palpable benefit?
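On paper the bandwidth gain is easy to estimate. If (and this is an assumption, offset semantics vary by tool and generation) a +X Afterburner offset adds X MHz to the displayed ~9752MHz memory clock, and GDDR6X's effective data rate is 2x that displayed figure, then:

```python
def gddr6x_bandwidth_gbps(base_clock_mhz: float, offset_mhz: float,
                          bus_bits: int = 384) -> float:
    """Peak memory bandwidth in GB/s. Assumes the offset adds directly to
    the displayed clock and the effective data rate is 2x that clock."""
    effective_rate = (base_clock_mhz + offset_mhz) * 2  # MT/s
    return effective_rate * (bus_bits / 8) / 1000       # GB/s

stock = gddr6x_bandwidth_gbps(9752, 0)        # ~936 GB/s, matches the spec sheet
plus_800 = gddr6x_bandwidth_gbps(9752, 800)
plus_1600 = gddr6x_bandwidth_gbps(9752, 1600)
print(f"{stock:.0f} -> {plus_800:.0f} -> {plus_1600:.0f} GB/s")
```

So +800 is roughly an 8% bandwidth bump on paper, which fits the observation that it only pays off once the average core clock is high enough to be bandwidth-bound.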


----------



## bl4ckdot

I've already asked, but my message got lost I think. I have 2 Strix incoming; do we know what the go-to waterblock is for the Strix?


----------



## SoldierRBT

ALSTER868 said:


> I tried performance in PR with different memory offsets. Raising from +850 to +1250 gave maybe 50 extra score points or so.
> Did you manage to compare performance between +800 and +1600? Does it give any palpable benefit?


Memory OC in Port Royal improves the score depending on the average GPU clock. At 2100MHz, +1100 vs +1200 scored literally the same, but at a 2235MHz average, +1200 vs +1250 was like 100 extra score.

The CPU helps as well: 10900K 5GHz 4000 XMP vs 5.4GHz 4500C16 was around 120 extra score.


----------



## WillP

Gone back through the thread and found the answer to my question, deleted.


----------



## GAN77

bl4ckdot said:


> do we know what the go-to waterblock is for the Strix?







Bitspower Classic VGA Water Block for ASUS ROG Strix GeForce RTX 3090 (BP-VG3090AST), shop.bitspower.com

kryographics NEXT RTX 3080 Strix / RTX 3090 Strix, nickel-plated version, shop.aquacomputer.de: full-cover water block for the ASUS GeForce RTX 3080 ROG Strix and ASUS GeForce RTX 3090 ROG Strix; the cooler base, milled from a solid copper block, makes contact with all components that need cooling.

Phanteks (Phanteks Innovative Computer Hardware Design), www.phanteks.com

Alphacool Eisblock Aurora Acryl GPX-N RTX 3090/3080 ROG Strix with backplate, www.alphacool.com: combines style with performance and extensive digital RGB lighting.

Barrow LRC2.0 full coverage GPU Water Block for ASUS STRIX 3090 Aurora (BS-ASS3090-PA2)


----------



## truehighroller1

I've been undervolting this morning with my stock MSI Suprim X BIOS, and so far I've managed 983mV at 2115 as my best score in PR. Is that a good card? I feel like it is, but wondering what you guys think.

0.975 is now my best; that might be as low as I can go.









I scored 14 629 in Port Royal: Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)

----------



## dante`afk

pretty good, but is it game stable?


----------



## Apecos

deleted


----------



## Sheyster

dante`afk said:


> pretty good, but is it game stable?


Always a good question to ask but I would change that to: *Is it Warzone stable?*

I thought my 2055 @ 950mv was 100% gaming stable. It locked up in Warzone yesterday. Never crashed once in many SP 1080 Extreme and 4K Optimized runs. I bumped vcore up to 975mv and it appears to be good for 2070 in Warzone now with only slightly more heat.


----------



## Apecos

truehighroller1 said:


> I've been undervolting this morning with my stock MSI Suprim X BIOS, and so far I've managed 983mV at 2115 as my best score in PR. Is that a good card? I feel like it is, but wondering what you guys think.
> 
> 0.975 is now my best; that might be as low as I can go.
> 
> I scored 14 629 in Port Royal: Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


Sorry, but what is your score at stock in PR?


----------



## geriatricpollywog

Krytecks said:


> Do you think i need to buy some sop-8 kit and try to do it ? Even if the GPU dont even boot ?


You broke your 3090 while running an unlocked bios, you didn’t share with the community (when asked) how the card failed, and now you are asking for help with warranty fraud?


----------



## jura11

I've said it many times: don't use the KPE XOC BIOS if you are scared or you don't have proper cooling and proper airflow.

For a 2x8-pin card this KPE XOC BIOS is one of the best. Whether I would use it as a daily driver, I'm not sure; I'm running it, but with a 65% power limit, and I only use 80-85% for benchmarks.

Hope this helps 

Thanks, Jura


----------



## WMDVenum

Sheyster said:


> Always a good question to ask but I would change that to: *Is it Warzone stable?*
> 
> I thought my 2055 @ 950mv was 100% gaming stable. It locked up in Warzone yesterday. Never crashed once in many SP 1080 Extreme and 4K Optimized runs. I bumped vcore up to 975mv and it appears to be good for 2070 in Warzone now with only slightly more heat.


I currently run [email protected] stable in WZ. It is actually [email protected] but I lock my max frequency to 2100 to prevent temperature stepping from changing my frequency.
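The "temperature stepping" being locked out here is the GPU Boost behavior where the effective clock drops one bin (about 15 MHz on Ampere) every few degrees above a low threshold. The exact bin size and thresholds aren't published, so the numbers below are assumptions for illustration only:

```python
def boosted_clock(curve_clock_mhz: int, temp_c: float,
                  bin_mhz: int = 15, step_c: float = 5.0,
                  base_temp_c: float = 35.0) -> int:
    """Approximate GPU Boost temperature stepping: one bin lost per step_c
    above base_temp_c. Bin size and thresholds are assumed values."""
    bins_lost = max(0, int((temp_c - base_temp_c) // step_c))
    return curve_clock_mhz - bins_lost * bin_mhz

print(boosted_clock(2130, 34))  # cool card: full curve clock
print(boosted_clock(2130, 52))  # warmer card: a few bins lower
```

Capping the max frequency below the curve point means the card sits at the cap regardless of which temperature bin it's in, which is what keeps the in-game clock steady.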


----------



## Falkentyne

jura11 said:


> I've said it many times: don't use the KPE XOC BIOS if you are scared or you don't have proper cooling and proper airflow.
> 
> For a 2x8-pin card this KPE XOC BIOS is one of the best. Whether I would use it as a daily driver, I'm not sure; I'm running it, but with a 65% power limit, and I only use 80-85% for benchmarks.
> 
> Hope this helps
> 
> Thanks, Jura


I was literally the first to warn people about the danger of any BIOS with thermal protections disabled, back when I was saved by thermal shutdown on my 3090 FE (black screen + 100% fans) at Windows load: I had too-thick thermal VRAM pads on the GPU side, so there was NO contact between the GPU core and the heatsink! Seems like some people didn't take me seriously... imagine if I had no protection...


----------



## Sheyster

jura11 said:


> I said that many times, don't use KPE XOC BIOS if you are scared or you don't have proper cooling and proper airflow


I don't think it's necessarily about being scared, if anyone is truly scared they should not mod their card in any way. Everyone needs to ask themselves a simple question: Can I afford to replace a $1500+ video card if the warranty is void? If the answer is no, then why take the risk?


----------



## Sheyster

WMDVenum said:


> I currently run [email protected] stable in WZ. It is actually [email protected] but I lock my max frequency to 2100 to prevent temperature stepping from changing my frequency.


Which 3090 card are you running?


----------



## bwana

Does anyone know what the specific differences are between the three BIOSes on the KP card? There's the Normal, the OC, and the LN2.


----------



## HyperMatrix

@Falkentyne @jura11 @Sheyster @0451
Guy flashes the XOC BIOS despite many, many warnings. Runs it at 80% PL. Says his card is "screaming." Surprised that it doesn't work anymore. I really wonder what people are thinking... increasing the power limit by 50% and disabling all protections to get an extra 5% performance, then ending up losing 100% of it.

Don't think we should be giving any advice or suggestions in these scenarios.




Krytecks said:


> It seems to work with the BIOS dr/owned linked, but my card is screaming (even though it's at max 61°), so you think 80% PL is a good limit?





Krytecks said:


> I was stress testing at 80% for a few minutes, then blackout, and it won't boot anymore 🤕





Krytecks said:


> If I can add something: at idle I'm at a constant 1920MHz, 0.8620V and 140W... With a Kombustor stress test, I'm at ~1780MHz, ~0.8V, and ~460W



Best idea ever. "Stress testing" with Furmark on an unlocked/unregulated XOC BIOS. I have no sympathy after all the warnings we kept giving people. I have an actual Kingpin card with multi-BIOS and a solid AIO cooler connected to 3x 21W fans, and I haven't installed the XOC BIOS yet. And I definitely wouldn't be running a Furmark stress test with it. Christ...


----------



## truehighroller1

Apecos said:


> Sorry, but what is your score at stock in PR?











Result: www.3dmark.com


----------



## HyperMatrix

bwana said:


> Does anyone know what the specific differences are between the three bioses on the KP card? There is the normal, the OC, and the LN2.


450/480/520W. The LN2 BIOS isn't really an LN2 BIOS, just a normal BIOS with a 520W limit. You won't void the warranty by using it.


----------



## WMDVenum

Sheyster said:


> Which 3090 card are you running?


I currently have a 3090 FE. Hoping to shunt mod it tonight, replacing the resistors with 3mΩ parts on the card and the PCIe slot.
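For anyone new to shunt mods: the card measures power from the voltage drop across those shunt resistors, and the controller keeps assuming the stock value. Swapping a stock shunt (commonly 5mΩ on these cards, an assumption here, check your own board) for a 3mΩ part therefore makes it under-read by the resistance ratio:

```python
def actual_power(reported_watts: float, r_stock_mohm: float,
                 r_new_mohm: float) -> float:
    """The controller assumes the stock shunt value, so for the same current
    the measured drop (and reported power) scales by r_new / r_stock.
    Invert that ratio to recover the real draw."""
    return reported_watts * (r_stock_mohm / r_new_mohm)

# With assumed 5 mOhm stock shunts replaced by 3 mOhm parts:
print(actual_power(350, 5, 3))  # card reports 350 W while actually pulling ~583 W
```

That ratio is also why a modded card can blow well past its cable and connector ratings while the software readout still looks tame.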


----------



## Falkentyne

WMDVenum said:


> I currently have a 3090 FE. Hoping to shunt mod it tonight, replacing the resistors with 3mΩ parts on the card and the PCIe slot.


Replacing the resistors works perfectly, but good luck melting the solder to remove the originals! Multiple people have had problems getting the original solder off, so be careful and DO NOT DAMAGE YOUR CARD!

Protip: use this when soldering to help stop solder from getting someplace it doesn't want to be.



Amazon.com



(Super 33+ tape protection is for shunt mods with MG842AR paint)


----------



## Alelau18

Flashing a 1000W vBIOS, being careless about it and then asking for help to commit warranty fraud. To be honest that line of thought is quite impressive.


----------



## HyperMatrix

Falkentyne said:


> Replacing the resistors works perfectly, but good luck melting the solder to remove the originals! Multiple people have had problems with getting the original solder off, so be careful and DO NOT DAMAGE YOUR CARD!
> 
> Protip: Use this. when soldering to help stop solder from getting someplace it doesn't want to be.
> 
> 
> 
> Amazon.com
> 
> 
> 
> (Super 33+ tape protection is for shunt mods with MG842AR paint)


Highly recommend a tweezer-style soldering iron for removing SMDs. Not sure about this specific brand, but posting it as an example:


----------



## DrunknFoo

You just need a proper iron along with supplemental heating, because the PCB soaks up much of the heat, i.e. a reflow station, solder gun, heat gun / hair dryer, etc.


----------



## man from atlantis

_GPU SRAM output voltage_ in HWiNFO: does that correlate with NVVDD, MSVDD, etc.?


----------



## Falkentyne

man from atlantis said:


> _GPU SRAM output voltage_ in HWiNFO: does that correlate with NVVDD, MSVDD, etc.?


That should be MSVDD (aka Uncore).


----------



## Falkentyne

@bmgjet
Martin of HWiNFO needs your help. He's asking me a question I am unable to answer. If you can help him, it would be really nice for HWiNFO64.



> Hi,
> 
> I have made some progress with decoding that tool, but for some of the items I can't find definitions. This data is returned by NvAPI_GPU_GetAllClocks, which is a secret function and I could only find some very vague definitions of the buffer it returns. So I'm wondering where did you get this statement from:
> 
> 
> 
> "NVENC Clocks (Current Clocks)"
> string.format("{0} ({1}) MHz", nvapi.clocks.current.nvenc_clock_, nvapi.clocks.current.core_clock);_
> 
> 
> 
> It seems that whoever said that knows the definition of the _nvapi.clocks _structure and this would be very useful to know in full.
> Can you help with this?
> 
> Thanks,
> Martin


----------



## geriatricpollywog

HyperMatrix said:


> @Falkentyne @jura11 @Sheyster @0451
> Guy flashes the XOC BIOS despite many, many warnings. Runs it at 80% PL. Says his card is "screaming." Surprised that it doesn't work anymore. I really wonder what people are thinking... increasing the power limit by 50% and disabling all protections to get an extra 5% performance, then ending up losing 100% of it.
> 
> Don't think we should be giving any advice or suggestions in these scenarios.
> 
> 
> 
> 
> 
> 
> 
> Best idea ever. "Stress testing" with Furmark on an unlocked/unregulated XOC BIOS. I have no sympathy after all the warnings we kept giving people. I have an actual Kingpin card with multi-BIOS and a solid AIO cooler connected to 3x 21W fans, and I haven't installed the XOC BIOS yet. And I definitely wouldn't be running a Furmark stress test with it. Christ...


I applaud anyone brave enough to do everything you just mentioned, and I feel sympathy if they brick the card in the process. However, they should at least share a PSA with details on how the card failed. And my sympathy for people who attempt warranty fraud is right up there with scalpers. Cards are fairly unlocked right now and AIBs are very generous about approving RMAs even when the shroud is removed. Warranty fraud will lead to more locked-down cards and stricter RMA policies.


----------



## HyperMatrix

0451 said:


> I applaud anyone brave enough to do everything you just mentioned


Not sure I'd call running Furmark on an unlocked XOC BIOS with a 2x8-pin card "brave." A few other words come to mind though. I'd have sympathy if someone ran into an issue while shunt modding, or if they took responsibility for breaking their card with the XOC BIOS. But the problem is that a lot of people are flashing the XOC BIOS like it's nothing, not realizing it's actually far more dangerous than shunt modding. The least capable people are ignoring all the warnings and doing the most risky things.


----------



## ViRuS2k

Yeah, well, think of the guys/girls stress testing and testing and testing; even if there is only a 0.1% chance it's the BIOS that caused the burnout, they are testing and reporting risky stuff that's beneficial to us all.


----------



## bwana

HyperMatrix said:


> 450/480/520W. LN2 bios isn't really LN2 bios. Just a normal bios with 520W limit. You won't void warranty by using it.


Thank you HyperMatrix. But does anyone know if there are any other differences between them? If the LN2 is a 520W limit, then how are the Normal and the OC different? Is it the fan curve? Or is it something not accessible to the user, like idle voltage?


----------



## jura11

@Falkentyne 

With an 80-85% power limit you can definitely hear coil whine more than with any other BIOS I have tried to date.

I would probably use the KPE XOC BIOS only for benchmarks; as a daily BIOS I'm still not sure about it. A shunt mod is the safer option, or rather use the 390W BIOS, which is probably the safest option out there.

Personally, I always monitor temperatures: on my second monitor I have SIV64, HWiNFO and MSI Afterburner open, plus Aqua Computer Aquasuite running constantly, and once every 15 or 30 minutes I use an IR gun to measure the backplate temperatures, the 8-pin cables, etc.

Hope this helps 

Thanks, Jura


----------



## HyperMatrix

bwana said:


> Thank you HyperMatrix. But does anyone know if there are any other differences between them? If the LN2 is a 520W limit, then how are the Normal and the OC different? Is it the fan curve? Or is it something not accessible to the user, like idle voltage?


That's a solid question. From what I was reading a few days ago, the 2080 Ti Kingpin card did seem to have different fan speeds tied to the BIOS switch position; people were recommending flashing your XOC BIOS to the LN2 position for max fan speeds because the behavior wasn't based only on the BIOS itself, but also on the switch position. I don't have the answer you're looking for, sadly, but I wouldn't mind some light being shed on this as well.


----------



## jomama22

Falkentyne said:


> Replacing the resistors works perfectly, but good luck melting the solder to remove the originals! Multiple people have had problems with getting the original solder off, so be careful and DO NOT DAMAGE YOUR CARD!
> 
> Protip: Use this. when soldering to help stop solder from getting someplace it doesn't want to be.
> 
> 
> 
> Amazon.com
> 
> 
> 
> (Super 33+ tape protection is for shunt mods with MG842AR paint)


You do not need $33-a-roll Kapton tape lmao.

Just search Kapton tape on Amazon or wherever and buy some. It all works the same and you'll save yourself $25.


----------



## Falkentyne

jomama22 said:


> You do not need $33-a-roll Kapton tape lmao.
> 
> Just search Kapton tape on Amazon or wherever and buy some. It all works the same and you'll save yourself $25.


Some tape is terrible. There are tapes with less adhesion than a 10-year-old roll of Scotch tape. Trust me on that.
If you're trying to mod a $1500-$1800 video card, what is $30 for quality tape?


----------



## jomama22

Falkentyne said:


> Some tape is terrible. There are some tapes that have less adhesion than a 10 year old roll of scotch tape. Trust me on that.
> If you're trying to mod a $1500-$1800 video card, what is $30 on quality tape?


How much adhesion do you really need if you are only using it as a mask while soldering? All you're using it for is blocking solder.

Here's what I use:
https://www.amazon.com/dp/B07S1Z877Q/ref=cm_sw_r_cp_apa_fabc_KNp7FbMSKB195?psc=1

Works perfectly fine, good adhesion (and not too much, as too much can be a pain). Used it when shunting the Strix.

The whole motto of "well, if you spend $xxxx, then what's $xxxx more" is just lame. For some things I can understand it, but Kapton tape? Come on now. You're talking about spending 400% more on something most will use a handful of times.

$33 Kapton tape is a massive waste of money. I don't care what you're using it on.


----------



## jura11

RTX 3090 FTW3 waterblock from Aquacomputer 



http://imgur.com/a/EoXiyN5


Thanks, Jura


----------



## HyperMatrix

jura11 said:


> RTX 3090 FTW3 waterblock from Aquacomputer
> 
> 
> 
> http://imgur.com/a/EoXiyN5
> 
> 
> Thanks, Jura


Wow. So the actively cooled backplate is actually properly watercooled this time, instead of just being a heatpipe design. God, I wish I could buy this for the Kingpin card...


----------



## jura11

HyperMatrix said:


> Wow. So the active cooled backplate is actually properly watercooled this time instead of just being a heatpipe design. God I wish I could buy this for the Kingpin card....


Yes, this time it seems the active backplate really is a water-cooled backplate. Not sure if they will release a Kingpin version of the Kryographics block as well; maybe, if there are enough customers or people interested. Ideally EVGA should contact Aqua Computer and send them a GPU too, for making a proper waterblock. I won't touch the Hydrocopper even if it's the last block available.

Thanks, Jura


----------



## HyperMatrix

jura11 said:


> Yes, this time it seems the active backplate really is a water-cooled backplate. Not sure if they will release a Kingpin version of the Kryographics block as well; maybe, if there are enough customers or people interested. Ideally EVGA should contact Aqua Computer and send them a GPU too, for making a proper waterblock. I won't touch the Hydrocopper even if it's the last block available.
> 
> Thanks, Jura


Sadly, the Hydrocopper may be the only block available. Haha. Can't be worse than Bykski. Also, do you have a link to the product/order page for this block? I only see the basic style block when I go to their shop.


----------



## jura11

HyperMatrix said:


> Sadly, Hydrocopper may be the only block available. Haha. Can't be worse than Bykski. Also for this block do you have a link to the product/order page? I only see the basic style block when I go to their shop.


I'm running a Bykski now, which was a nice surprise; I expected worse temperatures, but I'm mighty impressed with Bykski. I used them on an RTX 2080 Ti too and was very impressed back then, mainly happy with the temperatures.

No, I don't have a link to the product page, but here is the announcement:

kryographics NEXT RTX 3080 3090 FTW3 / Guten Rutsch! - Wasserkühlung - Aqua Computer Forum

Hope this helps 

Thanks, Jura


----------



## sultanofswing

jura11 said:


> RTX 3090 FTW3 waterblock from Aquacomputer
> 
> 
> 
> http://imgur.com/a/EoXiyN5
> 
> 
> Thanks, Jura


This block will probably be 400 bucks or more, but it will also be the only one worth buying.


----------



## EniGma1987

It will be a couple of months before it releases, but Heatkiller is developing an active backplate with watercooling in it too.


----------



## jura11

sultanofswing said:


> This block will probably be 400 bucks or more but will be the only one worth buying also.


Hi there 

I think this waterblock should be in the £160-£200 range at most, which would be much cheaper than the Optimus block, and the backplate shouldn't cost more than £50-£80, maybe less. Add postage/shipping and taxes to that, and maybe it will be $400 landed.

I have already placed an order for the Aquacomputer Kryographics RTX 3090 Strix waterblock and active backplate; I just need to get a Strix as well 😞

Hope this helps 

Thanks, Jura


----------



## Biscottoman

devilhead said:


> Got a waterblock for the Strix; now I just need a cherry-picked 3090 Strix. My previous one was a bad memory overclocker, +800 max.
> 
> Now I have a 3090 Gigabyte OC; the memory can do +1600 and more. I just don't dare to throw 1000W at it; I need a waterblock at least, and those stupid power extensions could still catch fire.


Could you tell me what cooling performance you are able to reach with that waterblock (delta temperature above water)? I'm really interested in getting one too for a Strix.
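A quick way to reason about that number: the GPU-over-water delta is roughly the heat load times the block's core-to-water thermal resistance. The 0.02-0.04 K/W range below is an assumed ballpark for full-cover blocks, not measured data for this one:

```python
def delta_over_water(power_watts: float, r_th_k_per_w: float) -> float:
    """GPU core temperature above coolant temperature: dT = P * R_th."""
    return power_watts * r_th_k_per_w

# Assumed block thermal resistances, core-to-water, for illustration.
for r_th in (0.02, 0.03, 0.04):
    print(f"R_th={r_th}: {delta_over_water(450, r_th):.0f} C over water at 450 W")
```

Which is why comparing deltas only makes sense at a stated power draw; a "10C over water" figure at 350W and at 500W are very different blocks.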


----------



## jura11

EniGma1987 said:


> It will be a couple of months before it releases, but Heatkiller is developing an active backplate with watercooling in it too.


It will probably be released much sooner than the Heatkiller; Heatkiller is planning to release its block in January next year, and I expect the Aquacomputer block will be out by February or March. Maybe they will surprise us hahaha.

Hope this helps 

Thanks, Jura


----------



## Biscottoman

jura11 said:


> RTX 3090 FTW3 waterblock from Aquacomputer
> 
> 
> 
> http://imgur.com/a/EoXiyN5
> 
> 
> Thanks, Jura


If this thing will only be available for the FTW3 card, it probably makes the FTW3 an even better purchase than the Strix. I would really like to get a block like that, and maybe even put an mp5works on that active plate, even if it would be total overkill.


----------



## HyperMatrix

Biscottoman said:


> If this thing will only be available for the FTW3 card, it probably makes the FTW3 an even better purchase than the Strix. I would really like to get a block like that, and maybe even put an mp5works on that active plate, even if it would be total overkill.


The backplate on this design is fully actively cooled; you have actual water flow up there making direct contact with the memory. What on earth could you hope to gain by sticking that tiny mp5works low-flow unit, cooled by the same water, on top of it? Haha.


----------



## Slackaveli

sultanofswing said:


> Yea it's overshooting quite a bit. Like I said, I don't have a 3090 KPE on hand; missed the first 2 runs but should have one the next run.
> When I was messing with my 2080 Ti KPE I went through the same thing, trying to figure out what was what, and finally just said screw it and set the NVVDD where I needed it for the card to run the clock I desired and not crash.
> I got tired of chasing the voltage curve table, stuck with offsets, and off I went.
> With Ampere being as inefficient as it is, I can see why most want to use the voltage curve though.
> I'll talk with Vince and get his opinion, but I know he is kinda the same way I am: set the NVVDD where you want and go for it.
> The only way to know the actual true reading would be to use the probe header and a DMM and see where it lines up.


So, I'm on the Kingpin list too and haven't gotten any info other than "You have already entered for a notify on the following date/time: 12/16/2020 2:03:28 PM." Any idea if I'll make the next batch, and when that might be? I'm not rich enough to just always have $2K lying around, so I need an ETA, especially since they give you a very short-lived link. I missed out on the FTW3 Hybrid because of that time limit and not being able to get my money together that particular day.


----------



## sultanofswing

Slackaveli said:


> So, I'm on the Kingpin list too and haven't gotten any info other than "You have already entered for a notify on the following date/time: 12/16/2020 2:03:28 PM." Any idea if I'll make the next batch, and when that might be? I'm not rich enough to just always have $2K lying around, so I need an ETA, especially since they give you a very short-lived link. I missed out on the FTW3 Hybrid because of that time limit and not being able to get my money together that particular day.


You just have to keep up with Jacob's Twitter to see when the next drops are; he usually announces it a day before.


----------



## dr/owned

HyperMatrix said:


> Not sure I'd call running furmark on an unlocked XOC bios with a 2 pin card "brave." A few other words come to mind though. I'd have sympathy if someone ran into an issue with shunt modding. Or if they took responsibility for breaking their card with the XOC bios. But the problem is a lot of people are flashing the XOC bios like it's nothing. Not realizing it's actually far more dangerous than shunt modding. The least capable people are ignoring all warnings and doing the most risky things.


The TUF, as I've found, still has some sort of power limit protection with the stock BIOS that isn't removed with shunts. The 1000W BIOS, though, has absolutely no limits. I'd say people are being straight idiots using it on a non-waterblocked card... and even if you have a waterblock, if it's the first one you've ever assembled, maybe start a bit more gently, because there are very tight tolerances on the thermal pad gaps and it's easy to mess up the assembly. Even experienced people will do things like putting 2mm 17W/mK pads where 1mm pads are supposed to be, when those high-conductivity pads are rock hard and incapable of squishing 50%.

That's all my soapboxing



jura11 said:


> I'm running a Bykski now, which was a nice surprise; I expected worse temperatures, but I'm mighty impressed with Bykski. I used them on an RTX 2080 Ti too and was very impressed back then, mainly happy with the temperatures.


Bykski makes good stuff. I've used their 1080 Ti block (because they were the only ones that made one for my Gigabyte Gaming G1 card with a non-reference PCB) with no issues for 3(?) years, and now their 3090 block has had zero issues (aside from them using 1.2mm pads, but 1.5mm upgrade pads seem to work fine). The 3090 Bykski backplate was a pleasure to modify with active cooling; so much meat on it and a great finish.

That Aquacomputer block is cool... good to see the manufacturers are finally getting the active backplate message.


----------



## dante`afk

My backplate, btw, without a fan: (screenshot)

With a fan, measured directly after taking the fan off: 40°C, exactly the same as my GPU temp.

45-minute run with the Bright Memory benchmark:
5120x1440p
RTX ultra
DLSS quality

My card pulls between 530W and 700+W depending on the application.


----------



## Biscottoman

HyperMatrix said:


> The backplate on this design is fully actively cooled; you have actual water flow up there making direct contact with the memory. What on earth could you hope to gain by sticking that tiny mp5works low-flow unit, cooled by the same water, on top of it? Haha.


Dunno lol, probably something between 1-2°C thanks to the bigger total cooling surface on the backplate. Worth the 90€ 🤥


----------



## sultanofswing

The biggest thing that sucks about the AQ block for the 3090 FTW3 is, for example, mine: it's a pretty poor bin; I think I can see the core leaking as we speak.
I don't feel like going through however many purchases to find a good one and then still not have full control over the card.
I feel I'd rather just slap a block on a KPE and be done; I hate not having my 2080 Ti KPE in my loop right now.


----------



## Nico67

That Aquacomputer block is nice and all, but it could only be done for the FTW3: if you look at the water flow front to back, they have it going through those air vent holes in the PCB.

I'm sure they could do something else for other cards, but it would have to join above the card at the terminal.


----------



## Biscottoman

Unlucky that the FTW3 is the only card with that kind of PCB. I still don't know if this block makes it a better pick over competitors such as the Strix.


----------



## Nizzen

jura11 said:


> I'm running a Bykski now, which was a nice surprise; I expected worse temperatures, but I'm mighty impressed with Bykski. I used them on an RTX 2080 Ti too and was very impressed back then, mainly happy with the temperatures.
> 
> No, I don't have a link to the product page, but here is the announcement:
> 
> kryographics NEXT RTX 3080 3090 FTW3 / Guten Rutsch! - Wasserkühlung - Aqua Computer Forum
> 
> Hope this helps
> 
> Thanks, Jura


No 3090 Strix block = epic fail.

Well, looks like I'll stick with the EK block with a DIMM cooler on the backplate.


----------



## jura11

Nizzen said:


> No 3090 strix block = epic fail
> 
> Well, looks like I'll stick with Ek block with dimmcooler on the backplate.


Hi @Nizzen, do you mean the Bykski RTX 3090 Strix waterblock? Is that one bad on the Strix? Or did you mean the Aquacomputer Kryographics RTX 3090 Strix waterblock?

Thanks, Jura


----------



## Shawnb99

HyperMatrix said:


> Sadly, Hydrocopper may be the only block available. Haha. Can't be worse than Bykski. Also for this block do you have a link to the product/order page? I only see the basic style block when I go to their shop.


Optimus is confirmed to have a block for the KPE, said to be out end of January...


----------



## Nizzen

jura11 said:


> Hi @Nizzen you mean Bykski RTX 3090 Strix waterblock? Is that bad on Strix? Or you meant Aquacomputer Kryographics RTX 3090 Strix waterblock?
> 
> Thanks, Jura


Meant the Aquacomputer Kryographics RTX 3090 Strix waterblock.

I have the EK and Bykski Strix blocks, yes.

Had the Alphacool Strix too, but sold it to Carillo.


----------



## jura11

Shawnb99 said:


> Optimus is confirmed to have a block for the KPE, said to put end of January...


And actually, when can we get it? I mean, when will I receive it? If it's the same sheetshow as the FTW3, then good luck to whoever preorders something from Optimus hahaha

Won't touch them until they have everything in stock

Thanks, Jura


----------



## jura11

Nizzen said:


> meant Aquacomputer Kryographics RTX 3090 Strix waterblock.
> 
> I have EK and Bykski strix blocks yes
> 
> Had alphacool strix too, but sold it to Carillo.


Ohh okay mate 😔 Let's see what I can do with their block; if not, then I think my brother will be getting an upgraded PC very soon hahaha

Thanks, Jura


----------



## Biscottoman

Nizzen said:


> meant Aquacomputer Kryographics RTX 3090 Strix waterblock.
> 
> I have EK and Bykski strix blocks yes
> 
> Had alphacool strix too, but sold it to Carillo.


Nizzen, could you share a pic of your backplate watercooling setup?


----------



## Slackaveli

HyperMatrix said:


> @Falkentyne @jura11 @Sheyster @0451
> Guy flashes XOC bios despite many many warnings. Runs it at 80% PL. Says his card is "screaming." Surprised that it doesn't work anymore. Really wonder what people are thinking....increasing power limit by 50% and disabling all protections to get an extra 5% performance. Ends up losing 100% performance.
> 
> Don't think we should be giving any advice or suggestions in these scenarios.
> 
> 
> 
> 
> 
> 
> 
> Best idea ever. "Stress test" with furmark on an unlocked/unregulated XOC bios. I have no sympathy after all the warnings we kept giving to people. I have an actual kingpin card with multi-bios and a solid AIO cooler connected to 3x 21W fans and I haven't installed XOC bios yet. And I definitely wouldn't be running a furmark stress test with it. Christ...


Why on Earth would somebody STRESS test their 1000w OC? LOL


----------



## jura11

Slackaveli said:


> Why on Earth would somebody STRESS test their 1000w OC? LOL


No stress test, just some benchmarks 🙄

But I will never be running any power virus on my PC, I just refuse to run it hahaha

Thanks, Jura


----------



## HyperMatrix

Biscottoman said:


> Dunno lol, probably something between 1-2°C degree thanks to the bigger total cooling surface on the backplate, worth It 90€ 🤥


I would doubt even that. Haha. Because there is already a block and water flow between the MP5 and the memory. So best you could do is “cool” the water in the Aquacomputer block. But you can’t cool it with its own water temperature. If this new design by Aquacomputer actually gets built, you should 100% eBay your mp5 or save it for a future project. Really no use for it with the block they’re showing off.


----------



## truehighroller1

New high for Port Royal with a fresh install of drivers and Afterburner, on the XOC KP BIOS.



https://www.3dmark.com/pr/717148


----------



## Slackaveli

HyperMatrix said:


> Wow. So the active cooled backplate is actually properly watercooled this time instead of just being a heatpipe design. God I wish I could buy this for the Kingpin card....


Sell me the Kingpin and keep the FTW3...


----------



## HyperMatrix

jura11 said:


> No stress test just some benchmarks 🙄
> 
> But I will never will be running any power virus on my PC, I just refuse run it hahaha
> 
> Thanks, Jura


It’s like stress testing with Intel Burn Test. Good for a couple minutes of short testing with regular OC settings, but you can’t run it for hours like you do Prime95. And definitely not with your CPU thermal protections disabled. Haha.


----------



## HyperMatrix

Shawnb99 said:


> Optimus is confirmed to have a block for the KPE, said to put end of January...


I’m hoping man. Both in terms of release date and availability.


----------



## Slackaveli

sultanofswing said:


> You just have to keep up with Jacob's Twitter to see when the next drops are, He usually announces it a day before.


This guy, right? https://twitter.com/evga_jacobf?lang=en


----------



## HyperMatrix

Slackaveli said:


> Sell me the Kingpin and keep the FTW3...


I would probably do it if the block was actually available now. Haha.


----------



## Slackaveli

sultanofswing said:


> The biggest thing that sucks about the AQ block for the 3090 FTW3 is for example mine. It's a pretty poor bin, I think I can see the Core leaking as we speak.
> I don't feel like going through how ever many purchases to find a good one and then still not have full control over the card.
> I feel i'd rather just slap a block on a KPE and be done, I hate not having my 2080ti KPE in my loop right now.


Sell it to an LN2 OCer? Don't they still value the leakiest cards or is that history now?


----------



## Slackaveli

jura11 said:


> No stress test just some benchmarks 🙄
> 
> But I will never will be running any power virus on my PC, I just refuse run it hahaha
> 
> Thanks, Jura


I feel you, man. I'm just as OCD as any of y'all, but I'm also a single parent with 3 teenagers. No way on Earth am I doing more than a simple 390W BIOS on a 2-pin 3090; especially a budget joint. Uh uh. I'm already reaching just to BE ON a 3090, period lol. Damn sure couldn't just buy another one.


----------



## Slackaveli

HyperMatrix said:


> I would probably do it if the block was actually available now. Haha.


Yeah IKR. That Aquacomputer block is a sexy bish.


----------



## Biscottoman

HyperMatrix said:


> I would doubt even that. Haha. Because there is already a block and water flow between the MP5 and the memory. So best you could do is “cool” the water in the Aquacomputer block. But you can’t cool it with its own water temperature. If this new design by Aquacomputer actually gets built, you should 100% eBay your mp5 or save it for a future project. Really no use for it with the block they’re showing off.


Don't worry dude, I don't have an MP5Works or a 3090 either. I'm just planning my next build (buying components one by one as I find them at a decent price), and I'm still very doubtful about which custom setup to choose. I was planning a shunted Strix + Kryographics + MP5Works, but as I said, I'm still not totally sure about the choice.


----------



## Nizzen

dante`afk said:


> my backplate btw without fan
> 
> View attachment 2471927
> 
> 
> with fan measuring directly after taking the fan off 40c, exactly as my gpu temp
> 
> 45minute run with bright memory benchmark
> 5120x1440p
> rtx ultra
> dlss quality
> 
> my card pulls depending on the appliation between 530w - 700+w


Do you have a link to this cooler?


----------



## lolhaxz

Falkentyne said:


> @bmgjet
> Martin of Hwinfo needs your help. He's asking me a question I am unable to answer. If you can help him, it would be really nice for Hwinfo64.


@Falkentyne, I've uploaded a release build of my (very) WIP Ampere monitoring/overclocking tools.... I've added the clock that I think ThermSpy is reporting....

It's here if you're brave enough to give it a try: GitHub - l0lhax/AmpereOC (github.com)

Written in C++ using ImGui as the front-end... only ever tested on my 3090 Strix.

There are a lot of additional clocks reported, but this is the only one that is close to the shader clock; I haven't looked at the others yet.


----------



## dr/owned

HyperMatrix said:


> I would doubt even that. Haha. Because there is already a block and water flow between the MP5 and the memory. So best you could do is “cool” the water in the Aquacomputer block. But you can’t cool it with its own water temperature. If this new design by Aquacomputer actually gets built, you should 100% eBay your mp5 or save it for a future project. Really no use for it with the block they’re showing off.


That block is probably going to be pretty expensive, since we're now talking about 4(?) CNC operations to make the front and back sides, plus the extra assembly cost. So the DIY route of an active backplate is going to be more cost-effective.



jura11 said:


> And actually when we can get it, I meant when I will receive it, if it will be same sheetshow as FTW3 then good luck who will preorder something from Optimus hahaha
> 
> Won't touch them until they've in stock everything
> 
> Thanks, Jura


I just don't like Optimus, in the same way I don't like EK. They're charging $400 for something that doesn't innovate _at all_ and is in fact a reused design where they just bolt their CPU coldplate to a GPU base, and then somehow take months to ship too. And the way they hype the crap out of their materials annoys me.


----------



## dr/owned

lolhaxz said:


> @Falkentyne, I've uploaded a release build of my (very) WIP Ampere monitoring/overclocking tools.... I've added the clock that I think ThermSpy is reporting....
> 
> It's here if you are a brave enough to give it a try:
> 
> GitHub - l0lhax/AmpereOC (github.com)


Couple of suggestions off the bat: overlay all the "limits" graphs on top of each other with different colors, and fix the scale max to 1 instead of 2.
I'd love it if the boost curve had a CSV import/export function. Afterburner is completely dumb with how it does line-fits: it makes you move each point individually, and then when you hit apply it decides "ah, I think I'll just move these points together". I'd rather just mess with it in Excel and then import.

I like it though. Clean like Afterburner instead of the GUI disco-vomit that X1 is.
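For what it's worth, the import/export idea is simple to sketch. Below is a minimal, hypothetical round-trip in Python; the CSV column names and the (voltage mV, frequency MHz) point representation are assumptions for illustration, since neither Afterburner nor AmpereOC exposes such a format today.

```python
import csv
import io

def curve_to_csv(points):
    """Serialize a V/F curve (list of (voltage_mV, frequency_MHz) pairs) to CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["voltage_mv", "frequency_mhz"])
    writer.writerows(points)
    return buf.getvalue()

def curve_from_csv(text):
    """Parse CSV text back into a voltage-sorted list of (mV, MHz) tuples."""
    rows = csv.DictReader(io.StringIO(text))
    return sorted((int(r["voltage_mv"]), int(r["frequency_mhz"])) for r in rows)

# A few points of a made-up Ampere-style boost curve, as edited "in Excel":
curve = [(737, 1395), (850, 1695), (1000, 1950), (1093, 2055)]
assert curve_from_csv(curve_to_csv(curve)) == curve  # lossless round trip
```

The tool would still have to push the imported points through the driver afterwards, which is where the curve-mangling described elsewhere in the thread comes in.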


----------



## stryker7314

dr/owned said:


> That block is probably going to be pretty expensive since we're now talking about 4? operations in the CNC to make the front and back sides. Plus the extra assembly cost. So the DIY route of active backplate is going to be cost-effective.
> 
> 
> 
> I just don't like Optimus in the same way I don't like EK. They're charging $400 for something that doesn't innovate _at all_ and in fact is a reuse design where they just bolt their CPU coldplate to a GPU base and then somehow take months to ship too. And the way they hype the crap out of their materials annoys me.


That's the reason I went with Alphacool this time for my FTW3 waterblock.


----------



## lolhaxz

dr/owned said:


> Couple of suggestions off the bat: overlay all the "limits" graphs on top of each other with different colors and fix the scale max to 1 instead of 2.
> I'd love if the boost curve had a CSV import-export function. Afterburner is completely dub with how it does line-fits and makes you move each point individually and then when you hit apply it decides "ah I think I'll just move these points together". I'd rather just mess with it in Excel and then import.
> 
> I like it though. Clean like Afterburner instead of the GUI disco-vomit that X1 is.


The whole curve overclocking situation is... troublesome. The driver makes those changes, and on top of all that it is temperature based (which I haven't found a way to stop). So yes, you can modify a curve and hit apply; then when you read it back from the driver again, the driver _will_ likely make mincemeat out of the curve...

The entire premise of the app was to poke around for hidden stuff and do automatic per-point (every point) overclocking with a CUDA-based load. I just haven't gotten there yet in terms of the front-end; figuring out the NVAPI private stuff was in itself challenging.

It's also pretty CPU heavy at the moment. Querying NVAPI at any rate of knots is actually quite costly on the driver/kernel side; you see the same effect when you increase the polling rate in any of the other tools.
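The polling cost described above is usually tamed by decoupling the UI refresh rate from the sensor query rate: cache the last reading and only hit the driver when it goes stale. A minimal sketch of that pattern in Python, with a stub standing in for the expensive driver call (the class name and interval are made up for illustration):

```python
import time

class ThrottledSensor:
    """Serve a cached reading so a fast UI loop doesn't hammer the driver."""

    def __init__(self, query, min_interval_s=0.5):
        self._query = query            # the expensive driver call
        self._min_interval = min_interval_s
        self._last_time = float("-inf")
        self._cached = None

    def read(self):
        now = time.monotonic()
        if now - self._last_time >= self._min_interval:
            self._cached = self._query()   # only re-query when stale
            self._last_time = now
        return self._cached

calls = 0
def fake_clock_query():                    # stand-in for a real NVAPI clock query
    global calls
    calls += 1
    return 2055                            # MHz

sensor = ThrottledSensor(fake_clock_query, min_interval_s=10.0)
readings = [sensor.read() for _ in range(100)]  # e.g. 100 UI frames
assert readings == [2055] * 100 and calls == 1  # one driver query, not 100
```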


----------



## bmgjet

lolhaxz said:


> @Falkentyne, I've uploaded a release build of my (very) WIP Ampere monitoring/overclocking tools.... I've added the clock that I think ThermSpy is reporting....
> 
> It's here if you are a brave enough to give it a try:
> 
> GitHub - l0lhax/AmpereOC (github.com)
> 
> Written in C++ using ImGUI as the front-end.
> 
> There is alot of additional clocks reported, but this is the only one that is close to the shader clock, haven't looked at the others yet.
> 
> View attachment 2471942


Will you open source it?





Falkentyne said:


> @bmgjet
> Martin of Hwinfo needs your help. He's asking me a question I am unable to answer. If you can help him, it would be really nice for Hwinfo64.


Not sure what language HWiNFO is using, but a lot of it is in the NVIDIA API docs: NVAPI Reference Documentation (docs.nvidia.com)
In C#, using the wrapper, I've played around with it; it's nvenc_clock, or ID 7. There's an array of 30 different clocks in there, but only 7 of them have valid-looking values.


----------



## dante`afk

Nizzen said:


> Do you have a link to this cooler?





https://www.amazon.com/Awxlumv-Aluminum-Heatsinks-Cooler-Motherboard/dp/B089QJQY17/ref=redir_mobile_desktop?ie=UTF8&aaxitk=6s4f6LJ98b4Fl1v3zL7WbQ&hsa_cr_id=3749509920601&ref_=sbx_be_s_sparkle_mcd_asin_2



The Optimus block for the KPE will probably cost more than $400, as it does for their FTW3 model.

Ridonkulous.


----------



## HyperMatrix

dr/owned said:


> That block is probably going to be pretty expensive since we're now talking about 4? operations in the CNC to make the front and back sides. Plus the extra assembly cost. So the DIY route of active backplate is going to be cost-effective.


Of course. But the MP5 isn’t cost-effective. And he was specifically talking about sticking an MP5 on top of that already over-the-top expensive Aquacomputer block.


----------



## Falkentyne

bmgjet said:


> Will you open source?
> 
> 
> 
> 
> 
> Not sure what language HWInfo is using but a lot of its in the Nvidia API docs: NVAPI Reference Documentation (docs.nvidia.com)
> 
> In C# using the wrapper iv played around with its nvenc_clock. or ID 7. Theres an array of 30 different clocks in there but only 7 of them have valid looking values.


Pinging @Mumak


----------



## Falkentyne

lolhaxz said:


> @Falkentyne, I've uploaded a release build of my (very) WIP Ampere monitoring/overclocking tools.... I've added the clock that I think ThermSpy is reporting....
> 
> It's here if you are a brave enough to give it a try:
> 
> GitHub - l0lhax/AmpereOC (github.com)
> 
> Written in C++ using ImGUI as the front-end.... only ever tested on my 3090 Strix.
> 
> There is alot of additional clocks reported, but this is the only one that is close to the shader clock, haven't looked at the others yet.
> 
> View attachment 2471942


Nice! I'll be happy to look at it when it's WIP and not VERY WIP, haha.
Not in a rush. You're doing a great job here. Keep it up! Got it bookmarked.


----------



## cletus-cassidy

jura11 said:


> Hi there
> 
> I think this waterblock should be in £160-£200 range as max which will be much cheaper than Optimus block and backplate I think shouldn't cost more than £50-£80 maybe less add postage/shipping to that and taxes maybe it will be 400usd landed
> 
> I have already placed an order for the Aquacomputer Kryographics RTX 3090 Strix waterblock and active backplate, just need get Strix as well 😞
> 
> Hope this helps
> 
> Thanks, Jura


Didn't someone here report a problem with the Aquacomputer Strix block not fitting the card correctly?


----------



## long2905

Watching GamersNexus, I know well enough to stay far away from FurMark and its kind in any scenario.

Currently went back to the stock vBIOS and am putting the card up for sale. Things are still running fine, but the power limit is hard to ignore now that I've had a taste of the 1000W vBIOS lol. The card can't go past 1860-1875MHz over 360W.


----------



## Thanh Nguyen

dante`afk said:


> https://www.amazon.com/Awxlumv-Aluminum-Heatsinks-Cooler-Motherboard/dp/B089QJQY17/ref=redir_mobile_desktop?ie=UTF8&aaxitk=6s4f6LJ98b4Fl1v3zL7WbQ&hsa_cr_id=3749509920601&ref_=sbx_be_s_sparkle_mcd_asin_2
> 
> 
> 
> the optimus block for the KPE will probably cost more then 400$ as it does for their FTW3 model.
> 
> ridonkolous


Does this work for a vertically mounted GPU?


----------



## long2905

Thanh Nguyen said:


> Does this work for vertical mount gpu?


If there is enough space between the card and the motherboard, I don't see why not. You just need an adhesive thermal pad.


----------



## dante`afk

Yeah, I'm using Thermalright Odyssey 0.5mm and it sticks pretty well.


----------



## jomama22

Biscottoman said:


> Nizzen could u share me a pic of your backplate watercooling setup?


This is mine. I just drilled and tapped holes for M2.5 socket hex screws, 5mm length so as to not go past the inside of the backplate.

Did it this way since it's mounted vertically and I wanted to use thermal paste as opposed to a pad.

Got the block on sale for like $28. It's a Bitspower 6x DIMM block, so it's nice and chunky.


----------



## truehighroller1

I just noticed that I'm number one in the world with my CPU and this GPU ***** not bad if I may say so myself.

https://www.3dmark.com/search#advanced?test=spy%20P&cpuId=2239&gpuId=1339&gpuCount=1&deviceType=ALL&memoryChannels=0&country=&scoreType=overallScore&hofMode=false&showInvalidResults=false&freeParams=&minGpuCoreClock=&maxGpuCoreClock=&minGpuMemClock=&maxGpuMemClock=&minCpuClock=&maxCpuClock=

I scored 21 127 in Time Spy
Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)

----------



## bmgjet

truehighroller1 said:


> I just noticed that I'm number one with my cpu and this gpu in the world *** not bad if I may say so myself.
> 
> https://www.3dmark.com/search#advanced?test=spy%20P&cpuId=2239&gpuId=1339&gpuCount=1&deviceType=ALL&memoryChannels=0&country=&scoreType=overallScore&hofMode=false&showInvalidResults=false&freeParams=&minGpuCoreClock=&maxGpuCoreClock=&minGpuMemClock=&maxGpuMemClock=&minCpuClock=&maxCpuClock=
> 
> I scored 21 127 in Time Spy
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


What about Port royal?


----------



## zAnimal

Hey oh! First, let me apologize if this isn't the place for this post. It just seems this is definitely the most active 3090 thread, with the most info/folks in one place. If I should start my own thread, I'll gladly delete this incoming wall of text and do that, just lemme know.

So I'm getting back into overclocking after being away from it for like 20 years. Just built a whole new rig, and planning a custom loop over the next few weeks. I have a couple of questions though, relating to my 3090(s), if I may!

So I have an FE that I've had well over a month now. The highest I could OC it was 2070 on the boost clock, with it averaging 1949, at 71 degrees. IIRC, anything above that crashed out, and I'm not even sure I could replicate that without crashing thereafter. Then I had an opportunity to grab a Strix OC a couple weeks ago, and have been messing around with that one ever since. So far I've pushed that one to a 2130 boost clock, with an average of 2008, at 72 degrees. I tried replicating that yesterday, but it crashed (I was also playing with the PBO curve optimizer on my CPU, so that probably had something to do with it). I haven't tried pushing beyond that 2130 yet, since I'm just using the stock cooler, which leads me to my first question.

Question 1: Can I do any damage trying to push it like this on the stock air coolers? My instinct says no, but hey, I'm wrong a lot (just ask the lady, lol).
Question 2/3: Why would I be able to complete a PR run one day, and not be able to the next day with the exact same clock speed? Would putting it under water guarantee at least that clock that passed once, and failed once?

Question 4: I'm trying to figure out which card to keep. My instinct also says just keep the Strix, since it has a higher power limit/different bios options, and so then it could be pushed harder underwater, right? It should theoretically be able to perform better than the FE regardless of the "lottery" unless I have a real crapper of a Strix, and an all-star FE, which I don't think I have either. Maybe even the opposite, maybe not all-star, but decent I think, ya think?

Question 5: When I'm finding the upper limit of what I can run right now with the stock cooler on both these cards, does that have anything to do with how hard they can be pushed under water? I obviously want the best card I can get... and if that means returning the Strix (assuming that's the one I should keep for best future watercooled performance) and rolling the dice again, I'm down to do that. Plus I can use this stupid vibrating fan at 80-90% speed as an excuse to get an exchange free of charge...

Bonus question: Should I be trying to find the limits of these cards on stock RAM/CPU settings? I'm currently using just stock PBO (no undervolt on any cores) and RAM I've tightened timings on (3600 14-14-14-34, from 16-16-16-36), stable af.

So I think that about wraps it up. Sorry if this is a rambling, hard-to-read mess. I did my best....

Thanks in advance for any help, it's much appreciated. This was quite a daunting thread to go through, lol.


----------



## truehighroller1

bmgjet said:


> What about Port royal?



Good question. I've been benching all night; running Time Spy Extreme right now, I'll check shortly.


----------



## dr/owned

(As a reminder, I'm on the TUF with 4mOhm stacked shunts on everything) So Asus _did something_ with the BIOS they offer via their BIOS updater tool for the 3090 TUF and it isn't just something related to the fan curve.

My power consumption went up from about 600W to 650W, Furmark stopped being capped at 450W, and PerfCap went from being solid "Pwr" in Port Royal to only occasionally flickering it.

Weirdly the Vrel, VOp went from 1.1V to 1.087V and I can't get it to lock 1.1V anymore. 

Uploaded both here.


----------



## truehighroller1

bmgjet said:


> What about Port royal?



Close, #4..

https://www.3dmark.com/pr/717148


----------



## bmgjet

dr/owned said:


> (As a reminder, I'm on the TUF with 4mOhm stacked shunts on everything) So Asus _did something_ with the BIOS they offer via their BIOS updater tool for the 3090 TUF and it isn't just something related to the fan curve.
> 
> My power consumption went up from about 600W to 650W, Furmark stopped being capped at 450W, and PerfCap went from being solid "Pwr" in Port Royal to only ocassionally flickering it.
> 
> Weirdly the Vrel, VOp went from 1.1V to 1.087V and I can't get it to lock 1.1V anymore.
> 
> Uploaded both here.


How's your RAM overclock?


----------



## jomama22

zAnimal said:


> Hey oh! First let me apoligize if this isn't the place for this post. It just seems this is defintely the most active 3090 thread, with the most amount of info/folks in one place. If I should start my own thread, I'll gladly delete this incoming wall of text, and do that, just lemme know.
> 
> So I'm getting back in to overclocking after being away from it for like 20yrs. Just built a whole new rig, and planning a custom loop over the next few weeks. I have a couple questions though, relating to my 3090(s), if I may!
> 
> So I have an FE that I've had well over a month now. The highest I could oc it was 2070 on the boost clock, with it averaging 1949, at 71 degrees. If iirc, I think anything above that crashed out, and I'm not even sure I could replicate that without crashing there after. So then I had an opportunity to grab a Strix OC a couple weeks ago, and have been messing around with that one ever since. So far I've pushed that one to 2130 boost clock, with an average of 2008, at 72 degrees. I tried replicating that yesterday, but it crashed (but I was playing with my pbo curve optimizer on my cpu too, so that probably had something to do with it). I haven't tried pushing beyond that 2130 yet, since I'm just using the stock cooler, which leads me to my first question.
> 
> Question 1: Can I do any damage trying to push it like this on the stock air coolers? My instinct says no, but hey, I'm wrong a lot (just ask the lady, lol).
> Question 2/3: Why would I be able to complete a PR run one day, and not be able to the next day with the exact same clock speed? Would putting it under water guarantee at least that clock that passed once, and failed once?
> 
> Question 4: I'm trying to figure out which card to keep. My instinct also says just keep the Strix, since it has a higher power limit/different bios options, and so then it could be pushed harder underwater, right? It should theoretically be able to perform better than the FE regardless of the "lottery" unless I have a real crapper of a Strix, and an all-star FE, which I don't think I have either. Maybe even the opposite, maybe not all-star, but decent I think, ya think?
> 
> Quesiton 5: When I'm finding the upper limit of what I can run right now with the stock cooler on both these cards, does that have anything to do with how hard they can be pushed underwater? I obviously want the best card I can get...and if that means returning the Strix (assuming that's the one I should keep for best future watercooled performance), and rolling the dice again, I'm down to do that. Plus I can use this stupid vibrating fan at 80-90% speed, as an excuse to get an exchange free of charge...
> 
> Bonus question: Should I be trying to find the limits of these cards on stock ram/cpu settings? I'm currently using just stock PBO (no undervolt on any cores) and ram I've tightened timings on (3600 14-14-14-34, from 16-16-16-36), stable af.
> 
> So I think that about wraps it up. Sorry if this is a rambling hard to read mess. I did my best....
> 
> Thanks in advance for any help, it's much appreciated. This was quite a daunting thread to go through, lol.


1. No, not with the 480W stock BIOS or the 520W KP BIOS. Yes with the 1000W BIOS.
2. Because it's not stable at that offset with whatever voltage is being fed to the chip.
3. Maybe? Not something anyone could guarantee, but it gives you a higher probability.
4. I'd keep the Strix for flexibility. But it's really up to you. I was in the same boat; kept the Strix for easy shunt modding and flexibility.
5. You can determine what kind of headroom you probably have by tracking what voltage/clock pair each seems to have at its max overclock when using the same power limit. Put the Strix at 400W (will be like 105-107%; stock BIOS @ 100% is 390W) and benchmark both of them.
6. Doesn't really matter so long as the CPU is stable. Use the same CPU settings for both cards when comparing.


----------



## HyperMatrix

truehighroller1 said:


> I just noticed that I'm number one with my cpu and this gpu in the world *** not bad if I may say so myself.
> 
> View attachment 2471971
> 
> https://www.3dmark.com/search#advanced?test=spy%20P&cpuId=2239&gpuId=1339&gpuCount=1&deviceType=ALL&memoryChannels=0&country=&scoreType=overallScore&hofMode=false&showInvalidResults=false&freeParams=&minGpuCoreClock=&maxGpuCoreClock=&minGpuMemClock=&maxGpuMemClock=&minCpuClock=&maxCpuClock=
> 
> I scored 21 127 in Time Spy
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


Not hard to do. Most people who buy $2000 GPUs also have better CPUs. Haha. I’m still rocking a 6950X and scored 15059 a couple days ago in Port Royal, but the highest score recorded so far on the site with that CPU is just 14346.


----------



## zAnimal

jomama22 said:


> 5. You can determine what kind of headroom you probably have by tracking what voltage/clock pair each seem to have at their max overclock when using the same power limit. Put the strix at 400w (will be like 105-107%, stock bios @ 100% IS 390W.) and bench mark both of them.


So that's just with the power slider in AB, ya? What does the voltage slider do, like, how is it different from the power slider? Should I just crank both and leave it when trying to max out my OC?


----------



## truehighroller1

HyperMatrix said:


> Not hard to do. Most people who buy $2000 GPUs also have better CPUs. Haha. I’m still rocking a 6950x and scored 15059 a couple days ago in port royal but highest score recorded so far on the site with that cpu is just 14346.


Apparently you're not rocking a better CPU either then, are you? Lol, come on man. Good score btw.


----------



## dr/owned

bmgjet said:


> Hows your ram overclock?
> 
> View attachment 2471978


Thank you for running it through your tool. I was going to go digging shortly to find it so you saved me a step 

The vram behavior is weird. I was previously running at +1000 on it and as far as I could tell (OCCT VRAM stress, Superposition 10 hour loop, Time Spy Extreme/regular stress, Firestrike Extreme/Ultra stress, etc) it was legit stable. I just turned it down to +500 and find that the PerfCap goes from flickering Pwr and jumping around 1.06V at +1000 to Pwr, Vrel, VOp and stays locked at 1.093V the entire Port Royal run at +500. Score went from 14,400 to 14,300.

EDIT: here are the before and after screenshots in PR. Looks like it increased the maxes across the board (and with the shunts there's a 2.25x multiplier to get actuals):
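For anyone following along, the 2.25x figure falls out of the shunt math: the power controller now senses the parallel combination of the stock resistor and the one soldered on top, so it under-reads by a fixed ratio. A quick sketch (assuming 5 mΩ stock shunts, which is what a 2.25x factor implies for 4 mΩ stacked):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock = 5.0    # mOhm: assumed stock shunt value
stacked = 4.0  # mOhm: resistor stacked on top of each stock shunt

effective = parallel(stock, stacked)   # what the power controller now senses
multiplier = stock / effective         # scale reported power by this to get actuals

assert abs(effective - 20 / 9) < 1e-9        # ~2.22 mOhm
assert abs(multiplier - 2.25) < 1e-9

# e.g. a card "reporting" 290 W is actually drawing about 652.5 W
assert abs(290 * multiplier - 652.5) < 1e-6
```

The same formula gives the multiplier for any other stacked-resistor combination; only the two resistance values change.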


----------



## HyperMatrix

truehighroller1 said:


> Apparently you're not rocking a better cpu either then are you? Lol, come on man. Good score btw.


Haha well it doesn’t make much difference at 4K gaming tbh. I was on a wait list for a 5950x but decided I’m doing a rebuild with the 11900K. Liking the IPC and clocks on it. And it would boost my score to probably around 15600. Not that benchmark settings/scores do anything to improve my in game performance.


----------



## D3LTA KING

markuaw1 said:


> 3090 strix running kingpin bios, EK block with Alphacool x-flow 360rad 60mm, settings +170 clock +1250mem, getting
> *15 322 @ 34c https://www.3dmark.com/pr/648784*


Just wondering, you said that you are running a Kingpin BIOS on an Asus Strix card? Where could I find that BIOS? And is it the 520 watt one?

Thanks


----------



## WayWayUp

Optimus waterblock came in today.

Just barely getting started with this thing. I know I can go much higher, as this run was only with +1,000 memory and I'm not done maxing out my core yet.

This block is unbelievably good.


----------



## Thanh Nguyen

WayWayUp said:


> optimus waterblock came in today
> 
> View attachment 2471987
> 
> 
> 
> View attachment 2471988
> 
> 
> 
> 
> 
> 
> View attachment 2471983
> 
> 
> 
> just barely getting started with this thing. I know i can go much higher as this run was only with +1,000 memory and im not done maxing out my core yet.
> 
> This block is unbelievably good


What is the gpu delta? Running 1000w bios?


----------



## WayWayUp

520w bios shunted
Don't have a water temp yet until I install the new temp sensor, but these temps are very low as my room temp is 22C and I just did a Port Royal run where the core maxed at 31C.
Did a bunch of loops of Heaven at 33C max.
Didn't have enough time to really test it out.


----------



## GAN77

Shawnb99 said:


> put end of January...


Joker. Maybe in August?


----------



## sultanofswing

WayWayUp said:


> 520w bios shunted
> Dont have water temp yet until i install new temp sensor but these temps are very low as my room temp is 22C and i just did a port royal run where core maxed at 31C
> Did a bunch of loops of heaven at 33c max
> Didnt have enough time to really test it out


Whats your Radiator setup?
My current GPU loop is 3 360's.


----------



## dr/owned

WayWayUp said:


> 520w bios shunted
> Dont have water temp yet until i install new temp sensor but these temps are very low as my room temp is 22C and i just did a port royal run where core maxed at 31C
> Did a bunch of loops of heaven at 33c max
> Didnt have enough time to really test it out


That's pretty much the same delta I see with my Bykski block. 29-30C water, 43-44C in Port Royal with ~550W on the GPU. Throttling doesn't happen until the 50s anyway.


----------



## geriatricpollywog

WayWayUp said:


> optimus waterblock came in today
> 
> 
> just barely getting started with this thing. I know i can go much higher as this run was only with +1,000 memory and im not done maxing out my core yet.
> 
> This block is unbelievably good


 Nice score! Is that CLU on the die? Did you get the heat sink backplate?


----------



## WMDVenum

WayWayUp said:


> optimus waterblock came in today
> 
> 
> just barely getting started with this thing. I know i can go much higher as this run was only with +1,000 memory and im not done maxing out my core yet.
> 
> This block is unbelievably good


+1 for LM on a 3090. What kind of GPU temp are you holding during Port Royal? Also what kind of core clock and mem offset are you rocking? My shunted FE was doing 2205 core and 1250 mem but scoring 15,108. I do have a 9900K but not sure how much that is holding it back.


----------



## lolhaxz

Falkentyne said:


> Nice! I'll be happy to look at it when it's WIP and not VERY WIP  haha
> Not in a rush. You're doing a great job here. Keep it up! got it bookmarked.


The entire premise of mentioning it and tagging yourself was to see if you could determine that was indeed the clock in question, then I would have provided you all the details (structs/DLL interface ID) to give to whomever you like. (ie, hwinfo)

There are something like 15-20 clocks being returned by that private function.

But I'm quite suspicious of how useful any of them are anyway, including this "thermspy"-reported clock... bit of a red herring, I suspect. It's technically an OLD (not particularly "secret") deprecated function which has been around for a very long time and is not recommended for use any more.


----------



## Falkentyne

lolhaxz said:


> The entire premise of mentioning it and tagging yourself was to see if you could determine that was indeed the clock in question, then I would have provided you all the details (structs/DLL interface ID) to give to whomever you like. (ie, hwinfo)
> 
> There is something like 15-20 clocks being returned by that private function.
> 
> But I'm quite suspicious of how useful any of them are anyway, including this "thermspy" reported clock... bit of a red herring i suspect... It's technically a OLD (not particularly "secret") deprecated function which has been around for a very long time and not recommended to be used any more.


Sorry, I'll look at it. I am not feeling well (I am very sick). Sorry, any amount of programming is confusing to me right now.


----------



## Pepillo

truehighroller1 said:


> I just noticed that I'm number one with my cpu and this gpu in the world *** not bad if I may say so myself.
> 
> View attachment 2471971
> 
> 
> 
> 
> 
> https://www.3dmark.com/search#advanced?test=spy%20P&cpuId=2239&gpuId=1339&gpuCount=1&deviceType=ALL&memoryChannels=0&country=&scoreType=overallScore&hofMode=false&showInvalidResults=false&freeParams=&minGpuCoreClock=&maxGpuCoreClock=&minGpuMemClock=&maxGpuMemClock=&minCpuClock=&maxCpuClock=
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 127 in Time Spy
> 
> 
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Hey, you've brightened my day, the third score is mine.


----------



## HyperMatrix

lolhaxz said:


> The entire premise of mentioning it and tagging yourself was to see if you could determine that was indeed the clock in question, then I would have provided you all the details (structs/DLL interface ID) to give to whomever you like. (ie, hwinfo)
> 
> There is something like 15-20 clocks being returned by that private function.
> 
> But I'm quite suspicious of how useful any of them are anyway, including this "thermspy" reported clock... bit of a red herring i suspect... It's technically a OLD (not particularly "secret") deprecated function which has been around for a very long time and not recommended to be used any more.


Using thermspy to match up my requested clocks with my real clocks allowed me to increase my score in port royal by 180 points, and at 15MHz lower. So I know it’s not all voodoo. It’s also allowed me to go from maximum 2100MHz stable game clocks in cyberpunk to 2145MHz for about 2 hours now without a crash. I find it to be very helpful in diagnosing my OC.


----------



## lolhaxz

HyperMatrix said:


> Using thermspy to match up my requested clocks with my real clocks allowed me to increase my score in port royal by 180 points, and at 15MHz lower. So I know it’s not all voodoo. It’s also allowed me to go from maximum 2100MHz stable game clocks in cyberpunk to 2145MHz for about 2 hours now without a crash. I find it to be very helpful in diagnosing my OC.


So what specifically am I looking for? Because as I raise the clock, the "Unknown Clock" follows it fairly linearly...

I also note NVIDIA must have recently changed the way OC settings work, seems you now have to be running as administrator to set clocks >_< - that's new.


----------



## WMDVenum

So I have some odd behavior. I am undervolted to [email protected] with +1200 on the memory. When I run Port Royal with a 0 Core Voltage % slider my Port Royal score is ~14,900, but when I change the slider to 100 it goes to 15,135. Throughout this the voltage reported by MSI Afterburner doesn't change, and shouldn't, since I am locking it in with an undervolt. Both sets of runs have the GPU stay at 2200 in MSI Afterburner.

Might try thermspy to see what is going on.


----------



## Daniel M

I got my PNY 3090 today. First day of testing and gaming and it's a huge step up from the 1080 Ti. But man is the card loud.
Can't wait for the EK block to be delivered so I can game in silence again.

Here's my best Time Spy result of the day


----------



## WMDVenum

HyperMatrix said:


> Using thermspy to match up my requested clocks with my real clocks allowed me to increase my score in port royal by 180 points, and at 15MHz lower. So I know it’s not all voodoo. It’s also allowed me to go from maximum 2100MHz stable game clocks in cyberpunk to 2145MHz for about 2 hours now without a crash. I find it to be very helpful in diagnosing my OC.


Can you give some advice on how you attempted to achieve this?


----------



## Zogge

markuaw1 said:

3090 strix running kingpin bios, EK block with Alphacool x-flow 360rad 60mm, settings +170 clock +1250mem, getting
15 322 @ 34c https://www.3dmark.com/pr/648784



D3LTA KING said:


> Just wondering you said that you are running a kingpin bios on a Asus Strix card? Where could I find that bios? and is it the 520 watt?
> 
> thanks


EK block at 34 degrees and 520+W? How is that possible? Chiller? Flow? Ambient 10 degrees?
My ambient is 24C now and I'm still at 50 degrees. I will reseat it but I doubt it will be 16 degrees colder due to that.


----------



## lolhaxz

Zogge said:


> markuaw1 said:
> 
> 3090 strix running kingpin bios, EK block with Alphacool x-flow 360rad 60mm, settings +170 clock +1250mem, getting
> 15 322 @ 34c https://www.3dmark.com/pr/648784
> 
> 
> 
> Ek block at 34 degrees and 520+ W ? How is that possible, chiller ? Flow ? Ambient 10 degrees or ?
> My ambient is 24 C now and still at 50 degrees. I will reseat it but I doubt it will be 16 degrees colder due to that.


Pump speed is also very important with GPU blocks, I've noted... i.e. you probably have delta left on the table all the way up to 3000rpm on a D5. If you're running the pump at 1500-2000rpm, temps will be a lot higher... also difficult to control based on water temp/cpu temp...

I'm looking into a way to use the GPU PWM output to trigger the pump PWM to a higher value when the fans on the GPU would have come on, because otherwise my CPU often sits at 30-35C gaming... kinda tough to have a nice idle/load curve over such a tight temp range... and I do not trust software-based fan/pump control, recipe for disaster.


----------



## WilliamLeGod

Since the 2080 Ti Strix black has a 325W PL and the white has a 360W PL, does anyone have info on the 3090 Strix White PL?


----------



## Zogge

lolhaxz said:


> Pump speed is also very important I've noted with GPU blocks... ie, you probably have delta left on the table all the way upto 3000rpm on a D5.... if you're running the pump at 1500-2000rpm temps will be alot higher... also difficult to control based on water temp/cpu temp...
> 
> I'm looking into a way to use the GPU PWM output to trigger the pump PWM to a higher value when the fans on the GPU ... would have come on, because otherwise, my CPU often sits at 30-35C gaming... kinda tough to have a nice idle/load curve over such a tight temp range... and I do not trust software based fan/pump control, recipe for disaster.


I tried with 5000rpm on my dual D5s. 150l/h and 2000rpm on 24x fans. Rad size 3360x140 + 1080x120. Delta T ambient to water is under 1 degree, but water-to-block delta T is 25 degrees on full load. Must be a bad seating.
I bought a Bykski block and MP5 to compare with.


----------



## Nizzen

WilliamLeGod said:


> Since 2080ti strix black has 325W PL and white has 360W PL, does any1 have info on the 3090 strix White PL?


I have 2x 2080ti strix white and ordered 3090 strix white oc. Hoping for a binned card like the 2080ti white was 

Thought it was 350w bios on white 2080ti? Maybe I remember wrong


----------



## dr/owned

lolhaxz said:


> Pump speed is also very important I've noted with GPU blocks... ie, you probably have delta left on the table all the way upto 3000rpm on a D5.... if you're running the pump at 1500-2000rpm temps will be alot higher... also difficult to control based on water temp/cpu temp...
> 
> I'm looking into a way to use the GPU PWM output to trigger the pump PWM to a higher value when the fans on the GPU ... would have come on, because otherwise, my CPU often sits at 30-35C gaming... kinda tough to have a nice idle/load curve over such a tight temp range... and I do not trust software based fan/pump control, recipe for disaster.


Relevant to this: this is a D5 pump running at full speed with a flowrate of 145 l/h (about 0.5 gpm), which is a pretty low amount of flow since my loop is very restrictive. With Furmark making the GPU consume about 600W on its own (730W total from the wall), the delta is only 3 degrees across the block, and that includes it cooling the backplate too. You'll also notice the GPU is 45C vs. the 28.7C water.


Here's an illustration of how restrictive my loop is on the flow curve for this pump... it's basically almost maxing out and can only handle 10% more restriction before the flowrate goes to zero (4.0 mH2O on the left axis):



Basically a lot of explanation to say: unless you have a completely clogged loop or are running the pump at 2/5...it shouldn't really matter for the delta across the block. @Zogge


----------



## Zogge

dr/owned said:


> Relevant to this, this is a D5 pump running at full speed flowrate of 145 l/h (about 0.5 gpm) which is a pretty low amount of flow since my loop is very restrictive. With Furmark making the GPU consume about 600W on its own (730W total from the wall). the delta is only 3 degrees across the block and that includes it cooling the backplate too. You'll also notice the GPU is 45C vs. the 28.7C water.
> 
> View attachment 2472047
> 
> 
> Here's illustrating how restrictive I am on the flow curve for this pump...it's basically almost maxing out and can only handle 10% more restriction before the flowrate goes to zero (4.0mH20 on the left axis):
> 
> View attachment 2472048
> 
> 
> 
> Basically a lot of explanation to say: unless you have a completely clogged loop or are running the pump at 2/5...it shouldn't really matter for the delta across the block. @Zogge


E.g. this is a pure block/seating issue for my GPU block, if I understand you right? Or should I add one more D5 to increase flow? Money isn't really an issue within "reasonable" limits.


----------



## lolhaxz

dr/owned said:


> Relevant to this, this is a D5 pump running at full speed flowrate of 145 l/h (about 0.5 gpm) which is a pretty low amount of flow since my loop is very restrictive. With Furmark making the GPU consume about 600W on its own (730W total from the wall). the delta is only 3 degrees across the block and that includes it cooling the backplate too. You'll also notice the GPU is 45C vs. the 28.7C water.
> 
> ...
> 
> Here's illustrating how restrictive I am on the flow curve for this pump...it's basically almost maxing out and can only handle 10% more restriction before the flowrate goes to zero (4.0mH20 on the left axis):
> 
> ...
> 
> Basically a lot of explanation to say: unless you have a completely clogged loop or are running the pump at 2/5...it shouldn't really matter for the delta across the block. @Zogge


I believe he was referring to the difference between the water temperature and the GPU die sensor temperature; I suppose the in/out delta is somewhat irrelevant otherwise.

My loop:
3x PE360
1x Formula Z490 VRM Block
1x EK Velocity CPU Block
1x EK 3090 Strix Block
1x EK-Annihilator EX/EP Narrow (GPU Backplate, split flow on EK GPU Terminal)
14mm Hardline

1x EK D5/Res Pump

Don't have the fancy graphs, but all I can say is: at 1500rpm I will see almost 60C GPU temps, at 4000rpm 43-44C, at 3000rpm perhaps 47C, at approx 500-600W... the delta between the water and the GPU die sensor keeps reducing as the RPM rises, and the benefit does slow down as it approaches 3200rpm+, but there's still benefit even at 4000rpm.

Water temps typically settle around 28-29C with fans at 900RPM... so a good 16-20C delta between water temp and GPU temp in the upper pump RPM range... ambient typically 23C or thereabouts.

A) The less radiator area you have and the slower your fans spin, the higher your water temperature delta to ambient will be.
B) The lower the pump RPM (flow rate, I suppose? .. to a point) the lower the overall efficiency of the GPU waterblock, incremental on top of A.
C) Thermal paste effectiveness, incremental on top of A and B

So many variables... 600W is not a trivial amount of heat to move really.

Pinching off the parallel flow to the GPU backplate block (which is silicone hose) only nets ~1C reduction in temps, yet this in theory is stealing a fair bit of flow from the GPU.

That was the only point I was trying to get across...

When someone says "my delta is 15C"... yeah OK, there are at least 5 other variables I need to know before I can draw any conclusions relationally (like a lot of people will try to do) and wonder why their setup "sucks".

While I'm not going to pretend I know anything specific about thermodynamics, the relationship seems a little more complex than simply the flow rate alone... although arguably the flow rate alone determines everything else (i.e. turbulence)... but possibly you get what I mean.


----------



## dr/owned

Zogge said:


> E g this is a pure block / seating issue for my GPU block if I understand you right ? Or should I add one more d5 to increase flow ? Money isnt really an issue within "reasonable" limits.


For how much of a delta there is between ambient and the GPU die, I'd say yeah, there's either a seating issue or EK didn't design the block with enough clamping force. I measured my Bykski block to have -0.3mm of clearance between the waterblock and the surface of the die... I don't have your block to measure for comparison, but if they put only -0.1mm of interference then it's not going to be as good at squeezing the paste out to an ultra-thin layer, in which case you're stuck. Ambient vs. water temperature is a bit tricky, but it's probably about 3-4C higher water temp vs. the ambient air temp. If you had an infinitely large radiator you'd have a 0C difference.

2 other points: a) you may try liquid metal, since that is known to be worth a couple degrees on the GPU die. b) You may just want to live with it instead of chasing a lower delta that probably doesn't affect the clocking anyway. I'm not sure where 3090s start pulling clocks because of temperature, but I think it is in the 55-60C range?



lolhaxz said:


> While I'm not going to pretend I know anything specific about thermodynamics, the relationship seems a little more complex than simply the flow rate alone... although arguably the flow rate alone... determines everything else  ... ie turbulence... but possibly you get what I mean.


Increasing flow rate better approximates an "ideal scenario" of the entire loop being an equilibrium in all areas, instead of the actual scenario of heat behaving discretely like cars driving on a road going from place to place. However, the total system heat transfer doesn't care about flow rate in a loop (because if the water has longer to absorb heat by moving slower, it also has longer to dissipate in the radiator). If you go to some insanely high flowrate you start creating turbulence and friction, but in PC watercooling the difference is pretty trivial...like 1-2C difference at most in component temperatures. The general rule of thumb for a while has been "just have more than a trickle of water" and everything is fine, but the 3000 series presents a new challenge of >500W heat load from a single component which hasn't been seen mainstream in a long time.

Bonus fun fact: if you have a delta of 15C between water and GPU and it's consuming 600W...the math is as simple as the delta would be 30C if the GPU consumed 1200W or 7.5C if it was 300W. Heat transfer is a linear function of the delta T. (There is some real world nuance though like the block warping more at higher temperatures or the paste changing consistency).
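The "bonus fun fact" above is just Newton's law of cooling with a fixed effective thermal resistance. Here's a minimal sketch of that arithmetic; the 0.025 K/W value is purely illustrative, chosen to reproduce the 15C-at-600W example, and is not a measured figure from anyone's loop:

```python
# Steady-state die-to-water delta scales linearly with power:
#   delta_T = P * R_th
# where R_th is an effective thermal resistance (die -> TIM -> cold plate -> water).
# R_th here is an illustrative guess, not a measurement.

def die_to_water_delta(power_w: float, r_th_k_per_w: float = 0.025) -> float:
    """Return the die-to-water temperature delta in kelvin for a given load."""
    return power_w * r_th_k_per_w

if __name__ == "__main__":
    # 600 W gives ~15 C; halving or doubling the power scales the delta in step.
    for watts in (300, 600, 1200):
        print(f"{watts} W -> ~{die_to_water_delta(watts):.1f} C delta")
```

As the post notes, real blocks deviate a little from this (warping, paste behavior), but the first-order scaling holds.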


----------



## Shawnb99

dr/owned said:


> The general rule of thumb for a while has been "just have more than a trickle of water" and everything is fine,


What? Umm the general rule of thumb is you should have at minimum 0.5GPM and ideally 1GPM.
I couldn't help but laugh at "lower flow means more time to spend in the radiators."


----------



## sultanofswing

I target 1gpm (227lph).
My loop has 3 D5's in series; this nets me right at 500lph, but I run the pumps at 76% which gets me right at my 227lph.
A big thing with D5's that people don't know is that the flow rate drops off really fast with less PWM signal.
Like 100% could be 250lph but 95% might only be 130lph, so when people say their flow rate is fine but don't have a flow meter to know, I just look the other way.


----------



## devilhead

So I tested the 3090 GAMING OC cooler with 100% fans and the 1000W BIOS. At the end of the test (Port Royal) the temp was already 79C 🔥 and the wall power meter was showing ~820W


----------



## robertr1

TimeSpy:


https://www.3dmark.com/3dm/55925538



3090 FE with manual tuning. Cracking 21,700 was a goal for me so happy to break that. Outside of a bios update, that's about as good as it gets for this card.

PR: https://www.3dmark.com/3dm/55926325 Couple of points off 14,400 so happy there as well given the power draw limitations.


----------



## DarkHollow

So I've seen some screenshots of some here using a program called ABE to read bios info, is that available somewhere?

I've got the MSI Gaming X Trio 3090 and was considering flashing it to the Suprim X bios, especially after I get my block. I was looking at the vbioses on TPU and see a Suprim X bios with a version of 94.02.26.88.98 but then there's an unofficial one with the version of 94.02.42.00.F5. Do we know what the differences are for any bios revisions?


----------



## defiledge

Will be changing to a 3090 FE soon. Planning to shunt mod, but can the 12pin adapter handle the powerdraw?


----------



## des2k...

With all this discussion on water temps and block mounting, just FYI: you may have a board or GPU die that is not flat, which can make it difficult to get perfect temps when mounting a waterblock.









GeForce RTX 3080 and RTX 3090 with bended package - why water and air coolers have such a hard time | Investigative | igor'sLAB
www.igorslab.de


----------



## geriatricpollywog

devilhead said:


> so tested 3090 GAMING OC cooler 100% fans with 1000w bios  at the end of the test (port royal) temp was already 79C 🔥 the wall power meter was showing ~820W
> View attachment 2472071


You are 100% going to kill that card, if you haven't already.

ANY card should need no more than 450 watts board power on the 520 watt BIOS to achieve 14.7k in PR. Your card is pulling double that power and it's all being wasted as heat.


----------



## HyperMatrix

devilhead said:


> so tested 3090 GAMING OC cooler 100% fans with 1000w bios  at the end of the test (port royal) temp was already 79C 🔥 the wall power meter was showing ~820W
> View attachment 2472071


Do this a few more times and your card could die. Just warning you before it happens.


----------



## defiledge

devilhead said:


> so tested 3090 GAMING OC cooler 100% fans with 1000w bios  at the end of the test (port royal) temp was already 79C 🔥 the wall power meter was showing ~820W
> View attachment 2472071


With a couple more attempts you might break 15k! Keep trying!!!


----------



## HyperMatrix

Shawnb99 said:


> What? Umm the general rule of thumb is you should have at minimum 0.5GPM and ideally 1GPM.
> I couldn’t help but laugh at lower flow means more time to spend in the radiators.


Yeah... the importance of flow really can't be overstated. Just a quick primer for some:

- ambient temperature = coldest your water can get if there was no heat being generated

- radiator size and fan speeds = how fast you can dissipate heat and get your water to ambient temperature by essentially increasing the contact between your warm water and the cooler ambient temps.

- reservoir/total liquid capacity = how long it takes for the heat from your gpu/cpu to thoroughly warm up the liquid. More water = more time you’ll have to dissipate heat through your radiator and lower overall liquid temperature.

- flow rate = affects everything except ambient temperatures. The delta between water temperature and your gpu/cpu, as well as between your water temperature and your ambient temperature affect the rate of thermal transfer. The higher the flow rate, the more heat is being pulled from your gpu/cpu. It’s the same as increasing fan speed on your radiator. You’re basically increasing the contact between the hot stuff and the cool stuff. So if water travels faster over your gpu/cpu, the water gets warmer. The faster that water gets through the radiator which is in contact with colder ambient air, the more heat you dissipate there as well.

Bottom line = flow rate is incredibly important.
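One piece of the primer above can be put in numbers: the water's temperature rise from block inlet to outlet follows the textbook relation dT = P / (m_dot * c_p). A quick back-of-the-envelope sketch (the 600W load and the flow rates are illustrative, not anyone's measurements):

```python
# Coolant temperature rise across a block: dT = P / (m_dot * c_p).
# c_p of water ~4186 J/(kg*K); 1 US gpm ~3.785 L/min; density ~1 kg/L.

WATER_CP = 4186.0      # J/(kg*K)
WATER_DENSITY = 1.0    # kg/L, close enough for loop coolant

def coolant_rise(power_w: float, flow_gpm: float) -> float:
    """Water temperature rise (K) from block inlet to outlet at a given flow."""
    flow_kg_s = flow_gpm * 3.785 / 60.0 * WATER_DENSITY
    return power_w / (flow_kg_s * WATER_CP)

if __name__ == "__main__":
    # A 600 W GPU at 0.5 gpm warms the water roughly twice as much per pass
    # as at 1 gpm, which is part of why flow rate shows up in die temps.
    for gpm in (0.5, 1.0, 2.0):
        print(f"{gpm} gpm -> {coolant_rise(600, gpm):.2f} C rise across the block")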


----------



## geriatricpollywog

CPU blocks have tiny fins and low flow rate is unable to force water between the fins.


----------



## Keninishna

Isn't the Gaming OC a 2x8pin card? So it's drawing 516 watts, not 820.


----------



## HyperMatrix

Keninishna said:


> Isn't the gaming oc card a 2x8pin card? so its drawing 516 watts not 820.


He's getting 820W total system draw from a wall power meter. So 820W minus CPU and other system components/fans/etc. would be his GPU power draw. Still way too high, especially when the card is hitting 80C after an under-2-minute Port Royal run. I have no idea why people do this with a 1000W BIOS with all protections disabled and talk about it as though they're really impressed with themselves, when in reality they're a few runs away from burning their card.


----------



## Falkentyne

defiledge said:


> Will be changing to a 3090 FE soon. Planning to shunt mod, but can the 12pin adapter handle the powerdraw?


The Seasonic 12 pin power cable is good up to 600W.
The stock FE adapter? That I don't know. It had better be good up to 500W considering max stock TDP is 400W...


----------



## HyperMatrix

lolhaxz said:


> So what specifically am I looking for? because as I raise the clock the "Unknown Clock" follows it fairly linerally....
> 
> I also note NVIDIA must have recently changed the way OC settings work, seems you now have to be running as administrator to set clocks >_< - that's new.
> 
> View attachment 2472015


Assuming the unknown clock is the same clock shown in ThermSpy, then under various OC conditions the numbers can be closer together or farther apart. I've seen cases where it's been 100MHz lower. Using it to know when your requested clocks are translating into real clocks can help you get more performance and more stability out of your card, and skip over 90% of the trial-and-error runs in Port Royal. Before, we'd make adjustments to OC settings and then try to "validate" the OC by running Port Royal to see if our score went up or down with the new settings. Now with this, you're almost guaranteed to know whether your score is going to go up or down (assuming no voltage/power/thermal throttling).



WMDVenum said:


> Can you give some advice on how you attempted to achieve this?


So from my testing, here's what I've seen:


- NVVDD is not exactly the same as the voltage you set/see in Afterburner.
- Setting manual NVVDD isn't the only thing that matters in getting your real clocks to match your requested clocks.

So in one of my tests, I had NVVDD locked at a certain voltage and just played with the GPU offset. Now here's the interesting part. I had locked the maximum clock speed to 2130MHz. So now we had locked NVVDD (core voltage), and locked the requested clock speed at 2130MHz with nvidia-smi. That means whether the offset was trying to hit 2145MHz or 2235MHz, it was actually only requesting 2130MHz. But... changing the offset shifted the entire V/F curve table, and in doing so it actually changed what my real clocks were. Technically that shouldn't happen. If you have a locked voltage and a locked clock speed, your real clocks shouldn't shift. But they did.

Now, I'm not sure how non-KPE cards operate. I know they're more sensitive to this "desync" behavior. But my method for testing is to run Unigine Valley in 4K with 8x MSAA in windowed mode. Then I have GPU-Z, Afterburner/Precision X, ThermSpy, and the Classified tool open, and I adjust the settings while it's under load to see the relationship between the real and requested clocks. When they're within approximately 5-10MHz of each other, that's a good point to leave it. Card temperature increasing could affect this as well; you'd have to test that out. But yes, I've had greater stability and greater performance, with less trial and error, using this method.
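For anyone wanting to try the clock-lock part of this workflow, nvidia-smi's lock-gpu-clocks switch does what's described. A sketch (needs an elevated prompt; 2130 is just the example value from the post, and query fields may vary by driver version):

```shell
# Lock the requested core clock to 2130 MHz so only the offset/curve moves:
nvidia-smi -lgc 2130,2130

# Watch what the driver reports while a load (e.g. Unigine Valley) runs,
# refreshing once per second:
nvidia-smi --query-gpu=clocks.sm,clocks.mem,power.draw --format=csv -l 1

# Reset to default clock behavior when done:
nvidia-smi -rgc
```

Note this locks the *requested* clock; comparing it against the ThermSpy-reported clock is still a separate step.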


----------



## tbrown7552

truehighroller1 said:


> Close, #4..
> View attachment 2471975
> 
> https://www.3dmark.com/pr/717148


----------



## SoldierRBT

lolhaxz said:


> The entire premise of mentioning it and tagging yourself was to see if you could determine that was indeed the clock in question, then I would have provided you all the details (structs/DLL interface ID) to give to whomever you like. (ie, hwinfo)
> 
> There is something like 15-20 clocks being returned by that private function.
> 
> But I'm quite suspicious of how useful any of them are anyway, including this "thermspy" reported clock... bit of a red herring i suspect... It's technically a OLD (not particularly "secret") deprecated function which has been around for a very long time and not recommended to be used any more.


Thermspy does work. 

I scored 14 624 in Port Royal - Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
https://www.3dmark.com/pr/696376



400+ points difference by just matching internal clocks with Afterburner clocks. Even Vince said in the KPE stream to use a similar program to maximize performance.



devilhead said:


> so tested 3090 GAMING OC cooler 100% fans with 1000w bios  at the end of the test (port royal) temp was already 79C 🔥 the wall power meter was showing ~820W
> View attachment 2472071


3090s shouldn't need more than 450W to break 15K in Port Royal. I'd suggest using the MSI v/f curve to stabilize the clockspeed at lower watts. 79C is affecting the avg clocks and reducing your score.


----------



## des2k...

What's a good value for NVVDD again? For some reason, it's always 25mv higher than Afterburner.


----------



## jomama22

des2k... said:


> What's a good value for NVVDD again ? For some reason, it's always 25mv higher then afterburner.
> View attachment 2472084


Whatever makes your thermspy effective clock match your requested clock. No real magic number.


----------



## SoldierRBT

des2k... said:


> What's a good value for NVVDD again ? For some reason, it's always 25mv higher then afterburner.
> View attachment 2472084


It depends on how much your card needs at each voltage point to stabilize the internal clock. I've tried all the voltage points on my KPE and it needs about 50mv-75mv above the MSI Afterburner voltage point to match internal clocks. My card at stock also uses around 25mv more NVVDD than MSI Afterburner shows, and the internal clock is about 45MHz lower.

Keep in mind that adding a lot of NVVDD voltage to a specific voltage point won't make your card magically able to OC higher; it just adds more heat/watts at the same clockspeed. My 950mv point can only do 2130MHz; 2145MHz isn't possible no matter if I add 100-150mv more NVVDD. The OC capability of each card depends on silicon quality. NVVDD only stabilizes internal clocks.


----------



## WMDVenum

defiledge said:


> Will be changing to a 3090 FE soon. Planning to shunt mod, but can the 12pin adapter handle the powerdraw?


There is an FE shunt mod post discussing this, but mine maxed at 550W without any noticeable issues using the stock 12pin cable. Plus you will only really ever hit that in benchmarks; I can still only pull like 400W when playing CP2077 maxed. Games just don't use as much power as a benchmark.

As for the hidden clocks, I suppose with a 3090 FE there isn't much I can do, since I don't think I can change any voltages to try to get them to match. Oh well.


----------



## HyperMatrix

WMDVenum said:


> There is an FE shunt mod post discussing this, but mine maxed at 550W without any noticeable issues using the stock 12-pin cable. Plus you will only really ever hit that in benchmarks. I can still only pull about 400W playing CP2077 maxed. Games just don't use as much power as a benchmark.
> 
> As for the hidden clocks: I suppose with a 3090 FE there isn't much I can do, since I don't think I can change any voltages to try to get them to match. Oh well.


Different clocks at different voltages affect it. You may find that a lower requested clock is giving you higher real clocks than a higher requested clock that was unstable and giving you lower real clocks. But if your requested/real clocks are very close already, and don't change based on your OC settings, you're probably fine. Never hurts to look though.


----------



## mirkendargen

SoldierRBT said:


> It depends on how much your card needs at each voltage point to stabilize the internal clock. I've tried all the voltage points on my KPE and it needs about 50mV-75mV above the MSI Afterburner voltage point to match internal clocks. My card at stock also uses around 25mV more NVVDD than MSI Afterburner shows, and the internal clock is about 45MHz lower.
> 
> Keep in mind that adding a lot of NVVDD voltage to a specific voltage point won't magically make your card able to OC higher; it just adds more heat/watts at the same clockspeed. My 950mV point can only do 2130MHz, and 2145MHz isn't possible no matter if I add 100-150mV more NVVDD. The OC capability of each card depends on silicon quality. NVVDD only stabilizes internal clocks.


I messed around with all this on an unshunted Strix (not much to really adjust, just observe what happens) using offset overclocks. Temperatures were ~33C on all the tests so no thermal throttling of any kind.

With the 520W BIOS, my card is basically locked at 1.1V on NVVDD and core (not powerlimited in general) but the shader clock is perpetually 30mhz above the unknown clock. Doesn't matter where I put the offset, they're always 30mhz apart.

With the stock 480W BIOS it's tough to tell what's going on because it's power limited basically the entire run. NVVDD is generally 20mV higher than the core voltage, but it's hard to see the delta between the clocks because they're both constantly jumping around.

With the 500W FTW3 BIOS, NVVDD and core are also both the same and pegged at 1.093-1.1V, and this time the shader and unknown clocks are locked 24mhz apart. Also didn't matter what the offset was.

Once again, not really anything I could attempt to do about any of this without soldering on an Elmor, just observing for science.


----------



## Falkentyne

WMDVenum said:


> There is an FE shunt mod post discussing this, but mine maxed at 550W without any noticeable issues using the stock 12-pin cable. Plus you will only really ever hit that in benchmarks. I can still only pull about 400W playing CP2077 maxed. Games just don't use as much power as a benchmark.
> 
> As for the hidden clocks: I suppose with a 3090 FE there isn't much I can do, since I don't think I can change any voltages to try to get them to match. Oh well.


Can you check your private message if you're not too busy?


----------



## truehighroller1

tbrown7552 said:


>



DAMN YOU TBROWN~~~!!!!!!!


LOL, Nice scores man though seriously.


----------



## Nizzen

HyperMatrix said:


> He's getting 820W total system draw from a wall power meter. So 820W minus CPU and other system components/fans/etc...would be his GPU power draw. Still way too high. Especially when the card is hitting 80C after an under 2 minute port royal run. I have no idea why people do this with a 1000W bios with all protections disabled and talk about it as though they're really impressed with themselves when in reality they're a few runs away from burning their card.


This is not safeclock.net. Maybe why people don't care


----------



## HyperMatrix

Nizzen said:


> This is not safeclock.net. Maybe why people don't care


There’s a difference between you or I doing it and someone who has no idea what they’re doing pulling 1000W on an air cooled card with no protections because they don’t realize how unsafe it is. It’s not like we’re telling other experienced members not to do it. We just see too many random newbies coming in here and posting this stuff and if we don’t post a warning about it each time, random googlers will see it and think it’s a good idea and break their cards too.

I never complained about the 1000W bios being posted here. I just don’t want people to think it’s a safe option. It’s by far the most unsafe option. But because it’s easier to flash a bios than it is to shunt mod a card, people think it’s also a safer method. Doesn’t help that EVGA calls their normal 500/520W bios “XOC” as well. So people can easily misunderstand.


----------



## truehighroller1

HyperMatrix said:


> There’s a difference between you or I doing it and someone who has no idea what they’re doing pulling 1000W on an air cooled card with no protections because they don’t realize how unsafe it is. It’s not like we’re telling other experienced members not to do it. We just see too many random newbies coming in here and posting this stuff and if we don’t post a warning about it each time, random googlers will see it and think it’s a good idea and break their cards too.
> 
> I never complained about the 1000W bios being posted here. I just don’t want people to think it’s a safe option. It’s by far the most unsafe option. But because it’s easier to flash a bios than it is to shunt mod a card, people think it’s also a safer method. Doesn’t help that EVGA calls their normal 500/520W bios “XOC” as well. So people can easily misunderstand.


It's called Darwinism. Let it happen. Quit wasting your energy man.


----------



## Mumak

bmgjet said:


> Not sure what language HWInfo is using but a lot of its in the Nvidia API docs.
> 
> NVAPI Reference Documentation — docs.nvidia.com
> 
> In C#, using the wrapper, I've played around with it: it's nvenc_clock, or ID 7. There's an array of 30 different clocks in there, but only 7 of them have valid-looking values.


Thanks. But the function used to report the effective clock is not even documented in the NDA version of NVAPI.
I haven't seen _nvenc_clock_ documented anywhere, so I thought you might know something more about that function and the layout of the clocks it reports. The first half of the buffer seems to consist of multiple clock entries with flags, but the second part seems a bit more complex, with some additional items.
Nevertheless, I think I was able to determine which item is the effective clock; at least it should match the one reported by ThermSpy.
So here's a test build of HWiNFO64 that should report this value: www.hwinfo.com/beta/hwi64_641_4340.zip
Let me know how it works, I haven't been able to test it thoroughly yet.

Happy New Year to everyone !


----------



## SoldierRBT

@Mumak Thank you very much! Effective clock matches internal clock. Now there is no need to use Thermspy.


----------



## Falkentyne

@Mumak : Awesome thank you and happy new year!

Everyone else:

Does anyone (anyone on the planet, even) know which exact physical shunt is " 8 pin #1 " and "8 pin #2" on the Founder's Edition 3090?

Seems like no one knows anywhere.

Furthermore (for you smart people out there): I do NOT have a multimeter, but if I did get one, how would you even find out which is 8 pin #1 and which is 8 pin #2 (as reported in GPU-Z)?

And olrdtg seems to have disappeared (I really hope it isn't because he broke his card or something...seems like that's why some people vanish)


----------



## jomama22

Falkentyne said:


> @Mumak : Awesome thank you and happy new year!
> 
> Everyone else:
> 
> Does anyone (anyone on the planet, even) know which exact physical shunt is " 8 pin #1 " and "8 pin #2" on the Founder's Edition 3090?
> 
> Seems like no one knows anywhere.
> 
> Furthermore (for you smart people out there): I do NOT have a multimeter, but if I did get one, how would you even find out which is 8 pin #1 and which is 8 pin #2 (as reported in GPU-Z)?
> 
> And olrdtg seems to have disappeared (I really hope it isn't because he broke his card or something...seems like that's why some people vanish)


If you are having issues with the power from whatever pin 1 is, it would probably be better to get a multimeter that can measure down to the milliohm, or get an ohmmeter, and check each shunt you modded.

Figuring out which pin each shunt is would involve checking the continuity of each 12V pin to its respective shunt, then checking the current through each 8-pin cable and using that knowledge, with your GPU-Z reading, to determine which shunt is which.

I'd just go the ohmmeter route and check the shunts; you can gauge which one is which based on that and on what you have going on in GPU-Z with the power.

I should add that milliohm meters are quite expensive. It would be easier to just buy a decent multimeter and a benchtop power supply with adjustable current and use a four-probe (Kelvin) approach.

Here's a fun video explaining it:






You could use a spare shunt you have to calibrate the multimeter readout to give yourself accurate results.

Btw, the fun thing about this is that the current sense circuit (which the shunt is a part of) is basically doing this same thing to determine the current coming through the 8-pin connector. It's just looking at the voltage drop caused by a resistor of known value. As that voltage drop changes (as it does with the input current), it knows the current. That's why adding a resistor in parallel works: we are just reducing the voltage drop across the shunt, tricking the current sense circuit.
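To put rough numbers on that trick (purely illustrative values, not measured from any particular card):

```python
# Toy model of a shunt-based current sense circuit (illustrative values only).

def sensed_current(v_drop_mv: float, r_assumed_mohm: float) -> float:
    """What the controller computes: I = V / R, assuming the stock shunt value."""
    return v_drop_mv / r_assumed_mohm  # amps

R_STOCK = 5.0  # mOhm, hypothetical stock shunt value
R_MOD = 5.0    # mOhm, hypothetical resistor soldered in parallel

# Two resistors in parallel: R = (R1 * R2) / (R1 + R2)
r_eff = (R_STOCK * R_MOD) / (R_STOCK + R_MOD)  # 2.5 mOhm

actual_amps = 30.0                    # real current flowing through the rail
v_drop = actual_amps * r_eff          # 75 mV across the shunt instead of 150 mV
reported = sensed_current(v_drop, R_STOCK)  # controller still assumes 5 mOhm

print(f"actual: {actual_amps:.0f} A, reported: {reported:.0f} A")  # reported is half
```

Same idea regardless of the exact values: halve the effective resistance and the card reports half the real current.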

But I digress.

It genuinely may be cheaper (and easier) to probe the pins of the 12-pin plug and the shunts, find the continuity among them and the 8-pin plugs, then use a current clamp on each PCIe cable while running a bench and looking at GPU-Z. Then you can determine which 8-pin goes to which shunt, and which shunt is #1 and #2. Just remember when using the current clamp that only the 12V wires should go through it. If you just put the whole cable through it, well, you'll get a current of 0 lol.


----------



## DrunknFoo

Nizzen said:


> This is not safeclock.net. Maybe why people don't care


i fall under this category, only care when something goes wrong /knockonwood


----------



## Nizzen

Happy new year from Norway ❤


----------



## dr/owned

Falkentyne said:


> @Mumak : Awesome thank you and happy new year!
> 
> Everyone else:
> 
> Does anyone (anyone on the planet, even) know which exact physical shunt is " 8 pin #1 " and "8 pin #2" on the Founder's Edition 3090?
> 
> Seems like no one knows anywhere.
> 
> Furthermore (for you smart people out there): I do NOT have a multimeter, but if I did get one, how would you even find out which is 8 pin #1 and which is 8 pin #2 (as reported in GPU-Z)?
> 
> And olrdtg seems to have disappeared (I really hope it isn't because he broke his card or something...seems like that's why some people vanish)


You could buy a $5 multimeter with continuity checking. On the TUF anyway, they use power planes that separate the 8-pins from each other (one layer of the PCB being vcore, one layer being uncore, sandwiched between ground layers... at least that's how I think it works), so you can put one probe on the +12V connector of the 8-pin and then touch all the shunts until it beeps. It doesn't matter which side of the shunt, because it's such little resistance.

This is the symbol you're looking for for continuity checking... the speaker-looking thing. You can also do it by looking at resistance, but it's easier for the thing to beep at you when the resistance approaches zero.


----------



## Falkentyne

dr/owned said:


> You could buy a $5 multimeter with continuity checking. On the TUF anyway, they use power planes that separate the 8-pins from each other (one layer of the PCB being vcore, one layer being uncore, sandwiched between ground layers... at least that's how I think it works), so you can put one probe on the +12V connector of the 8-pin and then touch all the shunts until it beeps. It doesn't matter which side of the shunt, because it's such little resistance.
> 
> This is the symbol you're looking for for continuity checking...the speaker looking thing:
> View attachment 2472115


Hi, thanks for the reply! 
I'll get a multimeter with my next check. I'm dead broke right now.

Yes, I know this, but that still doesn't tell you which 8-pin is which.
Do you see the problem?
It can tell you which shunt goes to which physical 8-pin, but how do you know which 8-pin is #1 and #2? The easy part is testing 12-pin to shunt; the hard part is finding which one is reported in Windows...

Or are you saying, if I'm following you correctly:

You would literally have to remove 1 of 2 shunts, then check in Windows for the shunt that reports super high power?
And that takes extra thermal pads, paste and everything. I don't have all those materials. That's a lot of work and money invested, especially with how expensive good thermal pads are.


----------



## dr/owned

Falkentyne said:


> Hi thanks for the reply!
> I'll get a multimeter with my next check. I'm dead broke right now.
> 
> Yes I know this, but that still doesn't tell you which 8 pin is which.
> Do you see the problem?
> It can tell you which shunt goes to which physical 8 pin but how do you know which 8 pin is #1 and #2? The easy part is testing 12 pin to shunt, but the hard part is finding which one is reported in windows...
> 
> Or are you saying:
> If I'm following you correctly:
> 
> You would literally have to remove 1 of 2 shunts, then check in windows for the shunt that reports super high power?
> And that takes extra thermal pads, paste and everything. I don't have all those materials  That's a lot of work and money invested especially with how expensive good thermal pads are.


Ah, I misunderstood and thought you were asking how to trace each shunt to its connector. To tie the "software" #1/#2 to the physical connector, you'd have to go based on the difference between a current clamp measurement and the software-reported power draw. Or trace which one is connected to the core VRM section and assume that's #1, but I have no idea how good an assumption that would be.


----------



## Carillo

Palit Gaming pro OC, KingPin 1000w bios, NO shunts, 2220MHZ 1000mV curve + 1350 mem using ThermSpy


----------



## jomama22

dr/owned said:


> Ah, I misunderstood and thought you were asking how to trace each shunt to its connector. To tie the "software" #1/#2 to the physical connector, you'd have to go based on the difference between a current clamp measurement and the software-reported power draw. Or trace which one is connected to the core VRM section and assume that's #1, but I have no idea how good an assumption that would be.


Literally just said this last page lmao.


----------



## jomama22

Carillo said:


> Palit Gaming pro OC, KingPin 1000w bios, NO shunts, 2220MHZ 1000mV curve + 1350 mem using ThermSpy
> View attachment 2472116


The whole "no shunts" seems a bit redundant there with the 1000w bios yeah? Lol


----------



## Falkentyne

dr/owned said:


> Ah, I misunderstood and thought you were asking how to trace each shunt to its connector. To tie the "software" #1/#2 to the physical connector, you'd have to go based on the difference between a current clamp measurement and the software-reported power draw. Or trace which one is connected to the core VRM section and assume that's #1, but I have no idea how good an assumption that would be.


That's okay. I figure the easiest method is just to remove one shunt and then check the power draw balance. I was just trying to avoid another disassembly unless I know for certain, so I don't end up doing two or three disassemblies! My Thermalright TFX seems to be lost in transit somewhere between China, Singapore (??) and the USA and never arrived. Which means I have to pay highway robbery prices through Amazon, and I don't want to wait another month for another shipment of 120mm x 120mm x 1.5mm Odyssey pads on the slow plane from China, which means more highway robbery prices for 85mm x 45mm ones... So yeah.

I guess I'll just have to use some throwaway pads until the work is done. Not urgent now anyway. I just want to get the 8-pins a bit closer together. I don't hit the power limit in Port Royal anyway (since I can't do more than +500 to +600 VRAM or it just black screens then reboots)... all you guys have lottery winners.


----------



## jomama22

Falkentyne said:


> Hi thanks for the reply!
> I'll get a multimeter with my next check. I'm dead broke right now.
> 
> Yes I know this, but that still doesn't tell you which 8 pin is which.
> Do you see the problem?
> It can tell you which shunt goes to which physical 8 pin but how do you know which 8 pin is #1 and #2? The easy part is testing 12 pin to shunt, but the hard part is finding which one is reported in windows...
> 
> Or are you saying:
> If I'm following you correctly:
> 
> You would literally have to remove 1 of 2 shunts, then check in windows for the shunt that reports super high power?
> And that takes extra thermal pads, paste and everything. I don't have all those materials  That's a lot of work and money invested especially with how expensive good thermal pads are.


Edit: too many replies at the same time. Ignore the redundancy.
You wouldn't have to remove any shunts. I'm guessing you have some difference in power between pin 1 and pin 2 that you can't figure out, yeah?

You can find which shunt goes to which PCIe cable by doing a continuity check of the 12V pins on each 8-pin connector on the adapter cable while it's attached to the card (not turned on; just unplug the 8-pins from the adapter with everything powered off).

Once you know which 8-pin goes to which shunt, then, while running benchmarks or whatever is giving you strange values, check the current on the PCIe cables and check GPU-Z. See if the power GPU-Z is displaying (after whatever you think the resistance of your shunt should be) matches the power through a PCIe cable. GPU-Z also tells you the input voltage, so you could take that voltage (let's say it's 12.1V), multiply it by the current through the PCIe cable you're checking, and see if it lines up with GPU-Z's power reading; if it doesn't, do the same with the other power cable.

Do that for each of pin 1 and pin 2 and see if you can discover which shunt is which.
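To make that cross-check concrete (made-up numbers; the rail names just mirror GPU-Z's "8-Pin #1"/"8-Pin #2" labels):

```python
# Matching a clamp-meter reading to GPU-Z's per-connector rails.
input_voltage = 12.1   # V, as reported by GPU-Z
clamp_amps = 16.5      # A, measured on the 12V wires of one PCIe cable

measured_watts = input_voltage * clamp_amps  # ~199.7 W through that cable

# Hypothetical GPU-Z per-rail readings, in watts
gpuz_rails = {"8-Pin #1": 200.0, "8-Pin #2": 150.0}

# The rail whose software reading is closest to the clamp measurement is the
# one physically wired to the cable you clamped.
closest = min(gpuz_rails, key=lambda rail: abs(gpuz_rails[rail] - measured_watts))
print(closest)  # -> 8-Pin #1
```

Repeat with the clamp on the other cable and the two rails should swap, confirming the mapping.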


----------



## Falkentyne

jomama22 said:


> You wouldn't have to remove any shunts. I'm guessing you have some difference in power between pin 1 and pin 2 that you can't figure out, yeah?
> 
> You can find which shunt goes to which PCIe cable by doing a continuity check of the 12V pins on each 8-pin connector on the adapter cable while it's attached to the card (not turned on; just unplug the 8-pins from the adapter with everything powered off).
> 
> Once you know which 8-pin goes to which shunt, then, while running benchmarks or whatever is giving you strange values, check the current on the PCIe cables and check GPU-Z. See if the power GPU-Z is displaying (after whatever you think the resistance of your shunt should be) matches the power through a PCIe cable. GPU-Z also tells you the input voltage, so you could take that voltage (let's say it's 12.1V), multiply it by the current through the PCIe cable you're checking, and see if it lines up with GPU-Z's power reading; if it doesn't, do the same with the other power cable.
> 
> Do that for each of pin 1 and pin 2 and see if you can discover which shunt is which.


Sounds good. I'll have a multimeter on my short list for the future!


----------



## jomama22

Falkentyne said:


> Sounds good. I'll have a multimeter on my short list for the future!


Np, just make sure to get one that has a current clamp attached. It doesn't have to be high end or anything for this purpose. 

It may be easiest to do with something that creates a constant load, like just sitting at the desktop, so the current values wouldn't be bouncing around a ton.


----------



## Carillo

jomama22 said:


> The whole "no shunts" seems a bit redundant there with the 1000w bios yeah? Lol


Well, it's a 2x8-pin card, so that makes it a 660-watt bios. Noob


----------



## jomama22

Carillo said:


> Well, its a 2x8 pin card, so that makes it a 660 watt bios. Noob


Again, a bit redundant, as shunting a 2x8-pin card and then using the 1000W bios would probably blow up your card. Heard that happened to you in the past.

Btw, feel free to check the TS and TSE hall of fame; you can see how much of a noob I am there.


----------



## dr/owned

Falkentyne said:


> That's okay. I figure the easiest method is just to remove one shunt and then check the power draw balance. I was just trying to avoid another disassembly unless I know for certain, so I don't end up doing two or three disassemblies! My Thermalright TFX seems to be lost in transit somewhere between China, Singapore (??) and the USA and never arrived. Which means I have to pay highway robbery prices through Amazon, and I don't want to wait another month for another shipment of 120mm x 120mm x 1.5mm Odyssey pads on the slow plane from China, which means more highway robbery prices for 85mm x 45mm ones... So yeah.
> 
> I guess I'll just have to use some throwaway pads until the work is done. Not urgent now anyway. I just want to get the 8-pins a bit closer together. I don't hit the power limit in Port Royal anyway (since I can't do more than +500 to +600 VRAM or it just black screens then reboots)... all you guys have lottery winners.


Being a broke boi, one thing you could try instead of a current clamp (which is 4x more expensive than a normal multimeter) is to de-pin one of the 8-pins and put the multimeter in line doing a current measurement. The current draw on all three 12V wires should be the same within each connector, because they're tied together on the PCB. Have a steady load running, and if you probe each connector the same way you may be able to see that one is lower. Don't do it in Furmark or something, because the probes aren't going to be making the best contact with the connector and pin.

Worth a shot.


----------



## Carillo

jomama22 said:


> Again, a bit redundant as shunting a 2 pin card and then using the 1000w bios would probably blow up your card. Heard that happened to you in the past.
> 
> Btw, feel free to check the ts and tse hall of fame, you can see how much a noob I am there.


"Blow up" Yeah, that comment defines you  Only hall i found you, was hall of shame. LOL , yeah ? right ? yeah


----------



## DrunknFoo

jomama22 said:


> Again, a bit redundant as shunting a 2 pin card and then using the 1000w bios would probably blow up your card. Heard that happened to you in the past.
> 
> Btw, feel free to check the ts and tse hall of fame, you can see how much a noob I am there.


Not sure if it is redundant per se. FWIW I'll only be using the 1000W bios for benching and will go back to the 450W bios for regular use (shunted FTW3), as the draw for RAM appears to be less. (I run a silent PC, so I'd rather keep my water temps nice and low.)


----------



## jomama22

DrunknFoo said:


> Not sure if it is redundant per se. FWIW I'll only be using the 1000W bios for benching and will go back to the 450W bios for regular use (shunted FTW3), as the draw for RAM appears to be less. (I run a silent PC, so I'd rather keep my water temps nice and low.)


You have 3 pins, he has 2. For a 3x8-pin card that's fine; you just gotta be careful of the PCIe slot, depending on how much you shunted it for.

Someone had posted a screenshot of the PCIe slot pulling 100W on the 1000W bios at 550W total board power. So if you are shunted (let's say 5mΩ) and use that bios, on your FTW3 the slot that normally pulls 75W is reporting only half its real draw; once you really start getting up there, around 120W+ actual through the slot, you get near the fuse limits on those cards.
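The arithmetic behind the slot risk, as a sketch (assuming a 5mΩ resistor paralleled over a 5mΩ stock slot shunt; your card's values may differ):

```python
# Why a shunted slot rail can overdraw: the sensed power scales with the
# effective shunt resistance, but the limiter still enforces the same number.
r_stock = 5.0  # mOhm, assumed stock slot shunt
r_added = 5.0  # mOhm, resistor paralleled on top

# Reported power / actual power = effective resistance / stock resistance
scale = ((r_stock * r_added) / (r_stock + r_added)) / r_stock  # 0.5

reported_slot_w = 75.0                   # what software shows (the PCIe spec limit)
actual_slot_w = reported_slot_w / scale  # real draw when the limiter thinks it's at 75 W
print(actual_slot_w)  # 150.0
```

So a slot rail that looks pinned at its normal 75W can really be pulling double that, well past where the slot fuses are comfortable.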

Let's not forget what happened to steve at GN when using an unlocked bios on his ftw.


----------



## dr/owned

bmgjet said:


> Hows your ram overclock?
> 
> View attachment 2471978


After looking at every TUF BIOS that TechPowerUp has... they're pretty much all the same as the one on the left, except 94.02.26.48.5D, which has the AUX limits at 50W but the 8-pins still at 121.5W.

So it looks like Asus has done some progression of increasing the Aux and then the 8-pin limits while decreasing the VRAM power limit (the VRAM power limit I don't think I noticed in benchmarking anyway). Practically, what I'm seeing is that the total board power limit has increased by about 60W with my shunted card moving to 94.02.42.00.B4, which comes from their update executable. *So I'd tell TUF owners they should definitely apply that update for more power. *I tried other 2-connector card BIOSes like the Gigabyte Vision and KFA2, but both of them disabled the middle DisplayPort just like the 1000W bios did.

What's interesting is I see the exact same 121.5W limits with the Strix BIOSes too, except with the extra 3rd-connector values populated. I'd be curious if someone with a Strix card updated their BIOS and found the same overall power limit increase: https://dlcdnets.asus.com/pub/ASUS/Graphic Card/NVIDIA/BIOSUPDATE_TOOL/RTX3090/RTX3090_V2.zip . Unfortunately the BIOS library TechPowerUp has is only the first versions. (And probably anybody who cares about maxing out would just drop in the 1000W BIOS anyway.)

Also: it's not worth messing with the V/F curve on this card... it seems to cause funky behavior with the voltage cap (Vrel, Vop PerfLimit) being lowered from 1.10V to 1.07 or 1.06V. If I just do a straight offset on the core frequency, it'll stay at 1.093V in Port Royal.


----------



## Carillo

DrunknFoo said:


> Not sure if it is redundant per se. FWIW I'll only be using the 1000W bios for benching and will go back to the 450W bios for regular use (shunted FTW3), as the draw for RAM appears to be less. (I run a silent PC, so I'd rather keep my water temps nice and low.)


Be careful? He doesn't know what he's talking about. I ran this bios with 4mΩ shunts, no problem.


----------



## jomama22

Carillo said:


> "Blow up" Yeah, that comment defines you  Only hall i found you, was hall of shame. LOL , yeah ? right ? yeah


Good one? How's that blown up 3090 working for you?


----------



## Carillo

jomama22 said:


> Good one? How's that blown up 3090 working for you?


You're not that bright are you? I can understand it based on the way you write. What's your point?


----------



## jomama22

Carillo said:


> You're not that bright are you? I can understand it based on the way you write. What's your point?


Your comments speak for themselves.

Not sure taking advice from someone who has blown up a card is wise for anyone.


----------



## mirkendargen

dr/owned said:


> After looking at every TUF BIOS that TechPowerUp has...they're pretty much all the same as the one on the left except 94.02.26.48.5D which has the AUX limits at 50W but the 8 pins were still at 121.5W.
> 
> So it looks like Asus has done some progression of increasing Aux and then 8 pin limits and decreasing the VRAM power limit (VRAM power limit I don't think I noticed in benchmarking anyways). Practically what I'm seeing is the total board power limit has increased by about 60W with my shunted card moving to 94.02.42.00.B4 that comes from their update executable. *So I'd tell TUF owners they should definitely apply that update for more power. *I tried other 2 connector card BIOS's like Gigabyte Vision and KFA2 but both of them disabled the middle displayport just like the 1000W bios did.
> 
> What's interesting is I see the exact same 121.5W limits with the Strix bios's too except having the extra 3rd connector values populated. I'd be curious if someone with a Strix card updated their BIOS and found the same overall power limit increase: https://dlcdnets.asus.com/pub/ASUS/Graphic Card/NVIDIA/BIOSUPDATE_TOOL/RTX3090/RTX3090_V2.zip . Unfortunately the bios library techpowerup has is only the first versions. (And probably anybody that cares about maxing out would just drop in the 1000W BIOS anyways)
> 
> Also: it's not worth messing with the VF curve on this card...it seems to cause funky behavior of the voltage cap (Vrel, Vop PerfLimit) being lowered from 1.10V to 1.07 or 1.06V. If I just do a straight offset on the core frequency it'll stay at 1.093V in Port Royal.


Latest Strix BIOS:


----------



## jomama22

mirkendargen said:


> Latest Strix BIOS:
> View attachment 2472121


Could you view the original bios to compare?


----------



## mirkendargen

jomama22 said:


> Could you view the original bios to compare?


New on the left, old on the right.


----------



## Falkentyne

jomama22 said:


> Again, a bit redundant as shunting a 2 pin card and then using the 1000w bios would probably blow up your card. Heard that happened to you in the past.
> 
> Btw, feel free to check the ts and tse hall of fame, you can see how much a noob I am there.


I never blew up a card with a vbios! That's the one thing I haven't done.
Or were you referring to someone else?
In your above post I see you talking to someone who "blew up" a card, but I don't see their posts...that isn't Nizzen, is it? I have him blocked.

Do you mean my Vega 64?
I don't know how that died. Something apparently happened to the HBM.
All I did was change the P0 and P1 power state voltages by 75mv...it was working just fine for months, until a couple of days after I ordered my RTX 3090 from Best Buy...then the V64 suddenly dropped to 15 FPS while playing Fortnite, then went back up to 120 FPS...It did that two times during that game session...

So I took the card apart again, repasted it with LM (molded die), didn't tighten the X-bracket quite as much, reinstalled, and the card ran fine for 2 days. Then... black screen while playing Fortnite.
When I tried to boot again, there were green vertical color bars in the BIOS. RIP.


----------



## dr/owned

mirkendargen said:


> New on the left, old on the right.
> View attachment 2472122


Yup, so they did the exact same thing on the Strix that they did on the TUF and raised the power limit since the initial release. Although weirdly the Strix VRAM always had a higher power limit than the TUF VRAM. Can't really explain that one, since the TUF's 4 phases could handle something like 100W at high efficiency, so maybe they just wanted to free up more budget for the core by clamping the VRAM on the TUF.

(I don't really know the behavior on non-shunted cards though... I just know that my shunted card went from about a 500W limit to a 600W limit under Furmark with the update. It's probably being stopped by some Aux shunt or something that isn't as obvious as the 6 big ones on the PCB. It's possible that with the stock 480W limit it wasn't triggering the 121W 8-pin limits anyway.)


----------



## bmgjet

mirkendargen said:


> New on the left, old on the right.
> View attachment 2472122


Are they both bios type 1?
I've noticed a pattern: the new BIOSes they are releasing are mostly type 2 now (type 1 is a bios based off the FE bios), and they're lowering the power limits on VRAM and slot while raising the plug and aux limits.
Which gives you a higher offset on mem, but if you watch your logs during a stress test, the VRAM is downclocking, which just makes it seem like you're getting a higher offset because it's stable.


----------



## jomama22

Falkentyne said:


> I never blew up a card with a vbios! That's the one thing I haven't done.
> Or were you referring to someone else?
> In your above post I see you talking to someone who "blew up" a card, but I don't see their posts...that isn't Nizzen, is it? I have him blocked.
> 
> Do you mean my Vega 64?
> I don't know how that died. Something apparently happened to the HBM.
> All I did was change the P0 and P1 power state voltages by 75mv...it was working just fine for months, until a couple of days after I ordered my RTX 3090 from Best Buy...then the V64 suddenly dropped to 15 FPS while playing Fortnite, then went back up to 120 FPS...It did that two times during that game session...
> 
> So I took apart the card again, repasted it with LM (molded die), didn't tighten the x-bracket quite as much, and reinstalled. The card ran fine for 2 days, then... black screen while playing Fortnite.
> When I tried to boot again, there were green vertical color bars in the BIOS. RIP.


If I quoted you, my bad. It wasn't meant for you.


----------



## bwana

HyperMatrix said:


> This was actually *incredibly *helpful in a slightly different way. I set 0.943V at 2160 (which it can't do). Then I set max clocks through smi to 2100. This way there's room for many temperature/curve-related downclocks before it goes below 2100, and it seems to be holding it rock steady atm. Setting a minimum clock wasn't respected. But this is going to make my OC life so much easier. Thanks a bunch!


I am missing the point here. Why would you want to limit your upside?


----------



## bwana

@des2k... where did you find 'ampere tools 0.1'?


----------



## mirkendargen

bmgjet said:


> Are they both BIOS type 1?
> I've noticed a pattern in the new BIOSes they're releasing: they're mostly type 2 now (type 1 is a BIOS based on the FE BIOS), and they're lowering the power limits on the VRAM and slot while raising the plug and aux limits.
> That gives you a higher offset on the memory, but if you watch your logs during a stress test the VRAM is downclocking, which just makes it seem like you're getting a higher offset because it's stable.


Both Type 2.


----------



## WillP

Few questions. I've read a great deal on this thread but I'm a bit confused on a few things, and navigating it is a complete nightmare.

1. Regarding the "safety" of the KPE 1000W bios the concerns are regarding the lack of thermal monitoring and throttling/shut down, yes?
2. The advantage of shunt modding over bios flashing a 2 plug card is that thermal protections are maintained?
3. While the temperature readout provided by the card may be limited to GPU only, are thermal protections in place for other components, VRMs, VRAM, etc.? Are these "behind the scenes", and does the KPE bios also remove those protections if they are present normally?
4. I have seen a few references to the KFA2/GALAX 1000W bios. I'm only aware of KFA2 cards with 2 8 pins, reference designs. Is this bios for a different card? Does anyone have experience of using it?

Cheers guys. At present I'm running my KFA2 card under-water in its own discrete loop with a 280mm rad, and power limited to 75% with the KPE 1000W bios. Max temp in prolonged 4K CP 2077 is 69 Celsius. I'm monitoring temperatures the whole time under load, and won't be asking for warranty fraud advice if it does fry. I'm also not after any guarantees from anyone, but am enjoying the more stable clocks without being power limited, for now. I could take the hit if it does break, but if there are any suggestions people have for maintaining the performance without bricking it I would appreciate them. Oh, clocks are +169 and VRAM +450. If benchmarking data with this setup would be useful to anyone I'm happy to provide 3D Mark and GPU-Z stuff.


----------



## bmgjet

The Kingpin 1000W BIOS is a real XOC BIOS, so all temp protections are disabled. VRM, VRAM, and core will all fry with no warning or throttling.
They need to be disabled for XOC overclocking: if you get the card cold enough, the temp readings only go so low before they flip the scale, going from the lowest negative reading to the maximum positive one, which would instantly switch your computer off with that protection in place.

At 69C you'll be losing a lot of performance; you should fix your loop, since that's way too hot. 50C vs 69C would be about a 75MHz clock difference.
I'd be worried about your pump too: if you have a big delta like 20C between die and water temp, that puts your water temp at 49C.


----------



## des2k...

bwana said:


> @des2k... where did you find 'ampere tools 0.1'?


https://github.com/l0lhax/AmpereOC


----------



## des2k...

bwana said:


> I am missing the point here. Why would you want to limit your upside?


It helps with two things. First, it prevents auto boost from boosting higher (where it's not stable) when you hit low GPU load or game menus.

Second, I use it for locking the frequency at high temps; it seems to disable the temperature downclocks.
I have a VF point +1 or +2 bins higher, like 2160 @ 1037mV, and nvidia-smi -lgc 210,2115 will lock you into 2115 from 22C to 57C+, no up or down with GPU temps.
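For reference, nvidia-smi's -lgc flag takes a "min,max" pair in MHz and -rgc removes the lock again. A minimal Python sketch of the commands involved (the 210/2115 values are just the example clocks from this post, not recommendations, and actually running these needs an NVIDIA driver and usually admin rights):

```python
# Sketch of locking GPU core clocks with nvidia-smi, as described above.
# -lgc pins the core clock range to "<min>,<max>" MHz; -rgc resets it.
import subprocess

def lock_gpu_clocks_cmd(min_mhz: int, max_mhz: int) -> list:
    """Build the nvidia-smi invocation that pins core clocks to [min, max]."""
    return ["nvidia-smi", "-lgc", f"{min_mhz},{max_mhz}"]

def reset_gpu_clocks_cmd() -> list:
    """Build the invocation that removes the clock lock again."""
    return ["nvidia-smi", "-rgc"]

def run(cmd: list) -> None:
    # Only attempt this on a machine with an NVIDIA GPU and driver installed.
    subprocess.run(cmd, check=True)

# The example from this post: floor of 210 MHz (idle), ceiling of 2115 MHz.
print(" ".join(lock_gpu_clocks_cmd(210, 2115)))  # nvidia-smi -lgc 210,2115
```

The lock persists until reset or reboot, which is why a reset command is worth keeping handy.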


----------



## dr/owned

WillP said:


> Few questions. I've read a great deal on this thread but I'm a bit confused on a few things, and navigating it is a complete nightmare.
> 
> 1. Regarding the "safety" of the KPE 1000W bios the concerns are regarding the lack of thermal monitoring and throttling/shut down, yes?
> 2. The advantage of shunt modding over bios flashing a 2 plug card is that thermal protections are maintained?
> 3. While the temperature readout provided by the card may be limited to GPU only, are thermal protections in place for other components, VRMs, VRAM, etc.? Are these "behind the scenes", and does the KPE bios also remove those protections if they are present normally?
> 4. I have seen a few references to the KFA2/GALAX 1000W bios. I'm only aware of KFA2 cards with 2 8 pins, reference designs. Is this bios for a different card? Does anyone have experience of using it?


It's hard to say how much is truly disabled, because different cards use different VRM implementations.

Ultimately the DrMOS (power stages) likely have built-in protection that can't be shut off even if the VRM fault protection is turned off. However, that only triggers at 55A+ from each stage, which can translate to an OBSCENE amount of power. Similar story with temperature protection... it's built in. There are layers of protection, from the VRM controller generating the phases, to the power stages, to the die itself... but when you're cross-flashing a BIOS you can't really say how many of those layers get thrown away.

However, a high-end card may have smart power stages that are software-configured, where literally every protection gets disabled. I'd have to look at a datasheet to say whether there are some ultimate safety catches (there probably are), but they may be set so high that the PCB gets damaged, or other components nearby get killed, before they trip.

Shunt modding also "solves" problems with the 1000W BIOS disabling output ports or preventing the card from entering lower power states (which kind of matters if you're running a desktop 24x7).
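To put that per-stage current limit in perspective, a rough back-of-envelope calculation (the phase count and core voltage here are illustrative assumptions, not figures for any specific card):

```python
# Rough estimate of the power level at which built-in DrMOS over-current
# protection (~55 A per stage, per the post above) would trip.
# Phase count and Vcore are illustrative assumptions; real cards vary.
def ocp_trip_power_w(stages: int, amps_per_stage: float, vcore: float) -> float:
    return stages * amps_per_stage * vcore

# e.g. a hypothetical 10-stage core VRM at ~1.0 V core voltage:
print(ocp_trip_power_w(10, 55.0, 1.0))  # 550.0 W on the core rail alone
```

Since the core rail runs at ~1V rather than 12V, even a modest phase count puts the hardware OCP threshold far above any sane power target.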


----------



## WillP

bmgjet said:


> The Kingpin 1000W BIOS is a real XOC BIOS, so all temp protections are disabled. VRM, VRAM, and core will all fry with no warning or throttling.
> They need to be disabled for XOC overclocking: if you get the card cold enough, the temp readings only go so low before they flip the scale, going from the lowest negative reading to the maximum positive one, which would instantly switch your computer off with that protection in place.
> 
> At 69C you'll be losing a lot of performance; you should fix your loop, since that's way too hot. 50C vs 69C would be about a 75MHz clock difference.
> I'd be worried about your pump too: if you have a big delta like 20C between die and water temp, that puts your water temp at 49C.


Thanks for the advice. It's my first custom loop. It takes a good hour to get to 69C, and it cools to 40C in a matter of a minute or so when not under gaming load, e.g. in a menu or on a map. I have the capacity to add another 280mm rad, and a backplate cooler to add as well, but I'll take your advice until those changes are made.

Yeah, the water can get up to 41C or so I think; I'll keep a closer eye on that.


----------



## WillP

dr/owned said:


> It's hard to say how much is truly disabled because different cards are different VRM implementations.
> 
> Ultimately the DrMOS (power stages) likely have built in protection that can't be shut off even if the VRM fault protection is turned off. However, this is only going to be triggered at 55A+ from each stage which can translate to an OBSCENE amount of power. Similar story with temperature protection...it's built in. There's layers of protection from the VRM controller that is generating the phases, to the power stages, to the die itself...but when you're cross flashing a BIOS you can't really say how many of those layers get thrown away.
> 
> However a high end card may have smart power stages that are software configured and literally every protection gets disabled.
> 
> Shunt modding also "solves" problems with the 1000W bios disabling output ports or preventing the card from entering lower power states (which kinda matters if you're running a desktop 24x7)


Thanks, that makes a lot of sense. I have shunts and a soldering iron ready to try that option; not sure quite which way to go for now. Wish I'd just bought a 3x 8-pin card...


----------



## dr/owned

WillP said:


> Thanks, makes a lot of sense. I have shunts and a soldering iron ready to try this option, not sure quite where to go for now. Wish I'd just bought a 3 8pin card...


I'm satisfied with my 2-connector card. Sure, I could probably squeeze another 100MHz or so out of a Strix with even more power (not guaranteed either), but ultimately that's not going to translate to meaningful FPS increases anyway. I'll take the $200 savings and apply it to something else.

Different mentalities though for different people. Some really don't care about the cost and want to sit on the bleeding edge of stability for the series. I've been there before and I'm over it.

Look at me being all "responsible adult"


----------



## des2k...

bmgjet said:


> The Kingpin 1000W BIOS is a real XOC BIOS, so all temp protections are disabled. VRM, VRAM, and core will all fry with no warning or throttling.
> They need to be disabled for XOC overclocking: if you get the card cold enough, the temp readings only go so low before they flip the scale, going from the lowest negative reading to the maximum positive one, which would instantly switch your computer off with that protection in place.
> 
> At 69C you'll be losing a lot of performance; you should fix your loop, since that's way too hot. 50C vs 69C would be about a 75MHz clock difference.
> I'd be worried about your pump too: if you have a big delta like 20C between die and water temp, that puts your water temp at 49C.


Are you sure about this? Water temp is 25C or 28C on my side; if I push 600W into the card the temps will be 57C.
If I limit to 400-500W, GPU temps are lower, like 42C, which tracks my 1080 Ti XOC at 400W.

Being 8nm, with a lot of heat in a small area, you'll always have a big delta, just like 7nm AMD CPUs.

Looking at multiple YouTube reviews, temps are not amazing.

Are you telling us we're supposed to be below a 20C delta with Ampere waterblocks? I know my block is mounted well. The only thing I'm thinking is that maybe my waterblock (GPU core) is not flat; maybe lapping would help?


----------



## mirkendargen

WillP said:


> Thanks for the advice. It's my first custom loop. It takes a good hour to get to 69C, and it cools to 40C in a matter of a minute or so when not under gaming load, e.g. in a menu or on a map. I have the capacity to add another 280mm rad, and a backplate cooler to add as well, but I'll take your advice until those changes are made.
> 
> Yeah, the water can get up to 41C or so I think; I'll keep a closer eye on that.


With proper contact your GPU should idle at 0-1C above your coolant temp, so if it's idling at 40C you don't have proper contact between your block and your GPU. If your temperatures keep increasing for an hour, you need more radiator area or more airflow over your radiators. My coolant only goes up 6C from ambient, but I have a fairly ridiculous loop; I think 10-12C is a good goal. Then your GPU shouldn't be more than 20C above your coolant temp under load; if it is, your block still isn't making good contact.




des2k... said:


> Are you sure about this? Water temp is 25C or 28C on my side; if I push 600W into the card the temps will be 57C.
> If I limit to 400-500W, GPU temps are lower, like 42C, which tracks my 1080 Ti XOC at 400W.
> 
> Being 8nm, with a lot of heat in a small area, you'll always have a big delta, just like 7nm AMD CPUs.
> 
> Looking at multiple YouTube reviews, temps are not amazing.
> 
> Are you telling us we're supposed to be below a 20C delta with Ampere waterblocks? I know my block is mounted well. The only thing I'm thinking is that maybe my waterblock (GPU core) is not flat; maybe lapping would help?


My Bykski Strix block with plain old Kryonaut has a delta of 16C with the 520W BIOS. Other people have similar deltas.
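The rules of thumb here stack additively: die temperature under load is roughly ambient, plus the coolant's rise over ambient, plus the block's delta over coolant. A quick sketch (the example numbers are the targets mentioned above, not measurements from any one system):

```python
# Die temperature under load is roughly a stack of three deltas:
#   ambient + (coolant rise over ambient) + (block delta over coolant).
def estimated_die_temp_c(ambient_c: float,
                         coolant_rise_c: float,
                         block_delta_c: float) -> float:
    return ambient_c + coolant_rise_c + block_delta_c

# A 22C room, the suggested 10-12C coolant-rise goal, and a healthy ~16C
# block delta (similar to the Bykski figure above):
print(estimated_die_temp_c(22.0, 10.0, 16.0))  # 48.0
```

Working backwards through the same stack is how you diagnose a loop: if the die number is high, check which of the three deltas is out of line before blaming the block.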


----------



## HyperMatrix

So with +1350 memory my temps were going up to 72-73C and then crashing. I found an old RAM block I had and Kryonaut-pasted it to the back of my card. Temps on memory #2 dropped 10C and memory #1 dropped nearly 30C. Cheap and easy method; you don't need an MP5, any CPU or RAM block with paste is fine. The only problem with my setup is that I couldn't position it properly, as it was blocked by other tubing, but it's still a substantial benefit and stopped the 72C+ crashes.


----------



## des2k...

mirkendargen said:


> With proper contact your GPU should idle at 0-1C above your coolant temp, so if it's idling at 40C you don't have proper contact between your block and your GPU. If your temperatures keep increasing for an hour, you need more radiator area or more airflow over your radiators. My coolant only goes up 6C from ambient, but I have a fairly ridiculous loop; I think 10-12C is a good goal. Then your GPU shouldn't be more than 20C above your coolant temp under load; if it is, your block still isn't making good contact.
> 
> 
> 
> My Bykski Strix block with plain old Kryonaut has a delta of 16C with the 520W BIOS.


Where are you guys getting this 20C-above-water figure for the GPU? What load is the card under, and for how long?
I'm sure you won't be at 20C above water if you push 600W into the card and run Furmark / Time Spy Extreme for hours.


----------



## mirkendargen

des2k... said:


> Where are you guys getting this 20C-above-water figure for the GPU? What load is the card under, and for how long?
> I'm sure you won't be at 20C above water if you push 600W into the card and run Furmark / Time Spy Extreme for hours.


Sure it will. Your water will heat up (a lot if you don't have enough radiator area/airflow), but the card won't suddenly have a larger delta with the same load.


----------



## dr/owned

des2k... said:


> Where are you guys getting this 20C-above-water figure for the GPU? What load is the card under, and for how long?
> I'm sure you won't be at 20C above water if you push 600W into the card and run Furmark / Time Spy Extreme for hours.


My gpu delta is about 15-16C above water temperature with Furmark blasting ~650W into the card (760W reading from the wall). I have 20" x 20" of radiator so while the water temperature does eventually climb it's only a couple degrees.


----------



## bmgjet

13C delta at 400W and 16C delta on my card at 520W, so I'd assume around 20C at 600W.
If your coolant temp continues to increase, you need more radiator.
I've got 2x 360 and 1x 280. Coolant temp gets to 6C above air temp after 4 hours in CP2077 with the CPU in the loop as well.
Room temps are usually around 24-28C, it being summer where I am; the max I see playing all day is 50C on the 520W BIOS.
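The ~20C guess at 600W can be sanity-checked by extrapolating the two measured points linearly (a simplification: delta vs. power isn't perfectly linear, but it's close for a fixed mount):

```python
# Extrapolate GPU-to-water delta from two measured (power, delta) points,
# assuming delta grows roughly linearly with dissipated power.
def extrapolate_delta_c(p1_w: float, d1_c: float,
                        p2_w: float, d2_c: float,
                        target_w: float) -> float:
    slope = (d2_c - d1_c) / (p2_w - p1_w)  # degrees C per watt
    return d1_c + slope * (target_w - p1_w)

# The measured points above: 13C @ 400W and 16C @ 520W, extrapolated to 600W:
print(extrapolate_delta_c(400, 13, 520, 16, 600))  # ~18C, near the ~20C guess
```

The slope here (~0.025 C/W) is effectively the thermal resistance of the paste-plus-block interface, which is why a bad mount shows up as a steeper line, not just a higher offset.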


----------



## WMDVenum

So when testing CP2077, my 3090 FE was about 48C on water with a water temp of about 37C after an hour of stress testing. The card is on a Bitspower waterblock with liquid metal, shunt modded, and was pulling about 450-480W during this testing. Overall under load the card is typically about 10-12C above water. I am currently stability testing 2205/1100mV and +1150 on the memory, and it seems stable in Port Royal, Warzone, CP2077, and Warframe. I am able to run 2220 in Port Royal but not in gaming; likewise, +1200 memory is stable in Port Royal but +1175 crashes while gaming.

My next two projects are to direct-die the CPU and reverse the rear radiator fans, since the 480 on the rear is being "cooled" by inside-case air that was already pulled through a 360 rad. My exterior-to-water delta was 14C during the above testing; I am hoping that by reversing it I can drop my water temp a bit. It was never a problem before, and really isn't now, because I never had a 500W card pushing heat into the water. I also imagine direct-die cooling a 9900K will dump more heat into the loop than using an IHS.


----------



## Nico67

HyperMatrix said:


> Yeah... the importance of flow really can't be overstated. Just a quick primer for some:
> 
> - ambient temperature = coldest your water can get if there was no heat being generated
> 
> - radiator size and fan speeds = how fast you can dissipate heat and get your water to ambient temperature by essentially increasing the contact between your warm water and the cooler ambient temps.
> 
> - reservoir/total liquid capacity = how long it takes for the heat from your gpu/cpu to thoroughly warm up the liquid. More water = more time you’ll have to dissipate heat through your radiator and lower overall liquid temperature.
> 
> - flow rate = affects everything except ambient temperatures. The delta between water temperature and your gpu/cpu, as well as between your water temperature and your ambient temperature affect the rate of thermal transfer. The higher the flow rate, the more heat is being pulled from your gpu/cpu. It’s the same as increasing fan speed on your radiator. You’re basically increasing the contact between the hot stuff and the cool stuff. So if water travels faster over your gpu/cpu, the water gets warmer. The faster that water gets through the radiator which is in contact with colder ambient air, the more heat you dissipate there as well.
> 
> Bottom line = flow rate is incredibly important.


The only thing I would add is that pumps add heat to a loop, and the more pumps and the faster they run, the more heat. Mr Sultan's 3x D5s, while not at full bore, would nonetheless probably add 75W+.


----------



## Falkentyne

@bmgjet @jomama22 @dante`afk @SoldierRBT @dr/owned @HyperMatrix
Ok I know some of you may not care (since you're using Kingpin cards), but there's something weird going on.

I applied another layer of paint over the GPU Chip Power shunt so I could stick a 5 mOhm shunt on top of the old paint, then I painted the top of the new shunt also.
This reduced GPU Chip Power by about 20W.

HOWEVER
This also RAISED SRC power by 20W! And I could freaking swear this ALSO made 8 pin #2 report a LITTLE less than before (like 2W), and I didn't touch the 8 pins because I only removed the backplate.

I know what changed because I ran a test before.

And I remember @DrunknFoo clearly saying on the eVGA forums (hey man, you here?) that he found "continuity" between GPU Chip Power and SRC shunts (like they were linked somehow?), MVDDC and PCIE Slot shunts, and GPU Chip Power and 8 pin #3 (huh?). So I'm wondering if I'm onto something or just on something...is there some freaky power balancing going on ? I may have to find out what has continuity with 8 pin #1 shunt and see what happens if I mess with that (MVDDC or SRC maybe?)

_Edit:_ removed that shunt. It was actually freaking TOUCHING the backplate!! I was getting random "super low FPS" stutters... yikes... could have destroyed my card.
I don't want to be like those fools who destroyed two video cards with 1000W vBIOSes...

Anyway, I wonder what has continuity with 8 pin #1...I can't find @DrunknFoo 's post because it's buried on eVGA forums in the complain about the FTW3 Vbios thread 



WillP said:


> Few questions. I've read a great deal on this thread but I'm a bit confused on a few things, and navigating it is a complete nightmare.
> 
> 1. Regarding the "safety" of the KPE 1000W bios the concerns are regarding the lack of thermal monitoring and throttling/shut down, yes?
> 2. The advantage of shunt modding over bios flashing a 2 plug card is that thermal protections are maintained?
> 3. While the temperature readout provided by the card may be limited to GPU only, are thermal protections in place for other components, VRMs, VRAM, etc.? Are these "behind the scenes", and does the KPE bios also remove those protections if they are present normally?
> 4. I have seen a few references to the KFA2/GALAX 1000W bios. I'm only aware of KFA2 cards with 2 8 pins, reference designs. Is this bios for a different card? Does anyone have experience of using it?
> 
> Cheers guys. At present I'm running my KFA2 card under-water in its own discrete loop with a 280mm rad, and power limited to 75% with the KPE 1000W bios. Max temp in prolonged 4K CP 2077 is 69 Celsius. I'm monitoring temperatures the whole time under load, and won't be asking for warranty fraud advice if it does fry. I'm also not after any guarantees from anyone, but am enjoying the more stable clocks without being power limited, for now. I could take the hit if it does break, but if there are any suggestions people have for maintaining the performance without bricking it I would appreciate them. Oh, clocks are +169 and VRAM +450. If benchmarking data with this setup would be useful to anyone I'm happy to provide 3D Mark and GPU-Z stuff.


Hi,
The thermal protections are for the GPU, VRAM, VRMs and hotspots (not sure if that is VRAM or VRM related).
Any one of those can trigger #thermal.
Note: I do NOT know the specifics of a GPU #ThermTrip, which shuts down the GPU at dangerous heat like the corresponding #Thermtrip on Intel CPUs, where the CPU drops all clock signals at 125C. I assume the GPU has a similar hardwired protection, but I don't know for sure, and even if it does, the people who killed their cards with the 1000W vBIOS weren't frying the GPU... they were frying the VRM or VRAM or something else, as they didn't report 90C on the GPU (as far as I know?).

For example, one person had a torn VRM pad and got a thermal flag even though his chip was only 61C.
Another person was mounting a water block, had bad contact on the backplate, and got a thermal flag even though his chip was cool.
So yes, using the 1000W Kingpin BIOS means that if ANY of your thermals, ANYWHERE, are not covered and you get thermal runaway, your card can die.
It seems two people here have already had cards die from that BIOS, because they were not responsible enough.
The Galax (HOF?) BIOS is, afaik, for Galax's 3x 8-pin card. It was dumped by someone, but it has some sort of board code attached to it, meaning you aren't flashing it without a 1.8V adapter and a hardware programmer.


----------



## des2k...

bmgjet said:


> 13C delta at 400W and 16C delta on my card at 520W, so I'd assume around 20C at 600W.
> If your coolant temp continues to increase, you need more radiator.
> I've got 2x 360 and 1x 280. Coolant temp gets to 6C above air temp after 4 hours in CP2077 with the CPU in the loop as well.
> Room temps are usually around 24-28C, it being summer where I am; the max I see playing all day is 50C on the 520W BIOS.


I'm at 22C ambient; after 2 hours of Time Spy Extreme GT2, water is max 28C regardless of whether I use 400W or 550W+ on the card. Adding 200W from Prime95 on the CPU will make the water climb to 30C+.

2x 360 and 1x 240, with push/pull at 1400rpm+; so, for example, an aggressive OC at 595W gets the GPU to 56C max in GT2.

That makes it a delta of 28C. Is my EK block bad? I already re-mounted the thing. Maybe add washers (more force on the core)? Or maybe lap the damn thing?


----------



## dr/owned

des2k... said:


> I'm at 22C ambient; after 2 hours of Time Spy Extreme GT2, water is max 28C regardless of whether I use 400W or 550W+ on the card. Adding 200W from Prime95 on the CPU will make the water climb to 30C+.
> 
> 2x 360 and 1x 240, with push/pull at 1400rpm+; so, for example, an aggressive OC at 595W gets the GPU to 56C max in GT2.
> 
> That makes it a delta of 28C. Is my EK block bad? I already re-mounted the thing. Maybe add washers (more force on the core)? Or maybe lap the damn thing?


@Zogge was reporting a similar delta with his EK block:



> I tried with 5000rpm on my dual D5s, 150l/h, and 2000rpm on 24x fans. Rad size 3360x140 + 1080x120. Delta T ambient-to-water is under 1 degree, but water-to-block delta T is 25 degrees at full load. Must be bad seating.


My comment repeated:

I measured my Bykski block to have -0.3mm of clearance between the waterblock and the surface of the die... I don't have your block to measure for comparison, but if they put in only -0.1mm of interference then it's not going to be as good at squeezing the paste out to an ultra-thin layer, in which case you're stuck.

But yes, trying to raise the mounting pressure may help a bit. If the block just isn't a great design, there's nothing you can really do. If you have calipers you should be able to measure the standoff height that mates with the PCB and the part that makes contact with the GPU. On the Bykski block the standoff is 3.12mm and the die-contact surface is 0.43mm (so 2.69mm of void on a die that's 2.90mm tall). Regardless of the make/model of 3090, the GPU package height should be the same, which is why I can give you those numbers.
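The measurement logic above reduces to simple arithmetic: the pocket between the standoff plane and the die-contact surface, versus the die's own height (numbers are the Bykski figures quoted above):

```python
# How much a block "interferes" with the die: the void between the PCB
# standoff plane and the die-contact surface, versus the die's own height.
def die_interference_mm(standoff_mm: float,
                        contact_surface_mm: float,
                        die_height_mm: float) -> float:
    void_mm = standoff_mm - contact_surface_mm  # gap the die has to fill
    return die_height_mm - void_mm  # positive = block presses into the die

# Bykski figures quoted above: 3.12mm standoff, 0.43mm contact surface,
# 2.90mm-tall GPU package:
print(round(die_interference_mm(3.12, 0.43, 2.90), 2))  # 0.21
```

More interference means more mounting pressure squeezing the paste to a thin layer; a block machined with too deep a pocket simply can't clamp down, no matter how tight the screws.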

EDIT:


----------



## bmgjet

I'd be checking the block with a delta like that.
My EK block was terrible on my first test mount. I had to shave down 2 of the standoffs for it to even make contact with the top edge of the die.
And that was with the screws done up tight enough that they undid the standoffs when removing the block again.


----------



## Falkentyne

bmgjet said:


> I'd be checking the block with a delta like that.
> My EK block was terrible on my first test mount. I had to shave down 2 of the standoffs for it to even make contact with the top edge of the die.
> And that was with the screws done up tight enough that they undid the standoffs when removing the block again.
> 
> View attachment 2472132


Wow that's bad.
Why do people still buy EK stuff? Is it because it's the only one available?


----------



## bmgjet

Falkentyne said:


> Wow that's bad.
> Why do people still buy EK stuff? Is it because it's the only one available?


I just got what was available first.
I got my card on 26th Sep and messaged all the water cooling manufacturers to see if they had a pre-order mailing list.
EKWB messaged with pre-orders first, at the start of Oct, with an ETA for delivery of mid Oct, so I bought it.
Then they delayed it 3 weeks and shipped only the block; the backplate took another 2 weeks after that to arrive.


----------



## ViRuS2k

dr/owned said:


> The TUF as I've found still has some sort of power limit protection with the stock BIOS that isn't removed with shunts. The 1000W BIOS though has absolutely no limits. I'd say people are being straight idiots using it on a non waterblocked card...and even if you have a waterblock if it's the first one that you've ever assembled...maybe start a bit more gentle cause there's very tight tolerances on thermal pad gaps where it's easy to mess up the assembly. Even experienced people will do stuff like putting 2mm 17W/mK pads where 1mm pads are supposed to be when those high conductivity pads are rock hard and incapable of squishing 50%.
> 
> That's all my soap boxing
> 
> 
> 
> Bykski makes good stuff. I've used their 1080Ti block (because they were the only ones that made one for my Gigabyte Gaming G1 card with non-ref PCB) with no issues for 3? years and now their 3090 block had zero issues (aside from them using 1.2mm pads but 1.5mm upgrade pads seem to work fine). The 3090 Bykski backplate was a pleasure to modify with active cooling. So much meat on it and great finish.
> 
> That aquacomputer block is cool...good to see the mfg's are finally getting the active backplate message.


The 3090 Bykski backplate?
Where did you get the backplate from? I have this waterblock and OLED display coming in the post and would love a matching backplate for my MSI 3090 Gaming X Trio card. Cheers.


----------



## ViRuS2k

DarkHollow said:


> So I've seen some screenshots of some here using a program called ABE to read bios info, is that available somewhere?
> 
> I've got the MSI Gaming X Trio 3090 and was considering flashing it to the Suprim X bios, especially after I get my block. I was looking at the vbioses on TPU and see a Suprim X bios with a version of 94.02.26.88.98 but then there's an unofficial one with the version of 94.02.42.00.F5. Do we know what the differences are for any bios revisions?


Just flash the F5 BIOS; it's what I did, as the other BIOS is a review-sample BIOS.
I've had no issues running it on my Trio X, and I'm currently running it on my 3090 until I get my waterblock. RGB lighting still works and can be set with MSI Center.
It also has a slightly higher fan curve for better temps, and at default the card will run at 1995MHz stock, with temps around 70-74C depending on case airflow.


----------



## Belcebuu

What do you guys think about the 3090 GameRock OC? I haven't seen that many people posting about it.

It seems good, doesn't it? 3x 8-pin, good VRMs; the only problem is the horrible RGB? But I can live with it in these broken-stock times.


----------



## dr/owned

ViRuS2k said:


> The 3090 Bykski backplate?
> Where did you get the backplate from? I have this waterblock and OLED display coming in the post and would love a matching backplate for my MSI 3090 Gaming X Trio card. Cheers.


AliExpress. It took a month to get to me, with the current corona situation killing air shipments and USPS being jammed with Christmas stuff.

It seems to be their "home base" store... the US one, I think, is just a reseller and not actually affiliated with them.

Judging by this it seems they don't make a backplate and want you to reuse the factory one?


----------



## Zogge

dr/owned said:


> @Zogge was reporting similar delta with his EK block:
> 
> 
> 
> My comment repeated:
> 
> My Bykski block I measured to have -0.3mm of clearance between the waterblock and the surface of the die...I don't have your block to measure it for comparison but if they put only -0.1mm of interference then it's not going to be as good squeezing out the paste to an ultra thin layer in which case you're stuck.
> 
> But yes trying to raise the mounting pressure may help a bit. If the block isn't that great of a design then there's nothing you can really do. If you have calipers you should be able to measure the standoff height that mates with the PCB and the part that makes contact with the GPU. Bykski block the standoff is 3.12mm and the die surface is 0.43mm (so 2.69mm of void on a die that's 2.90mm tall). Regardless of the make/model of 3090 the gpu package height should be the same so I can give you those numbers.
> 
> EDIT:


True, and I decided last night to update the loop, as a 25-degree delta T chip-to-water isn't what I want.

Ordered:
Bykski block
MP5 block
2 more D5 pumps (for pressure/flow; I have 2 already)

Will try parallel, serial and separate loops and evaluate.


----------



## dr/owned

Zogge said:


> True, and I decided last night to update the loop, as a 25-degree delta T chip-to-water isn't what I want.
> 
> Ordered:
> Bykski block
> MP5 block
> 2 more D5 pumps (for pressure/flow; I have 2 already)
> 
> Will try parallel, serial and separate loops and evaluate.


I have to warn you not to put more than 2 D5 pumps in series. There is no scenario of loop restriction where you would need that much pump, and even if you did have that much restriction (maybe some hypothetical system with 7 GPUs and 4 CPUs), you're liable to generate so much pressure that you start blowing tubing off barbs or pushing fluid past seals that are only meant for very low pressure. I've done it with 3 D5 pumps in series.

When you start thinking you need a whole lot of pump, it's either unnecessary or there's a clog somewhere in the loop, as happens with pastel / nanoparticle fluids.


----------



## Zogge

dr/owned said:


> Have to warn you to not put more than 2 D5 pumps in series. There is no scenario of loop restriction where you would need that much pump and even if you did have that much restriction (maybe some hypothetical system with 7 GPU's and 4 CPUs), you're liable to generate so much pressure you can start blowing tubing off barbs or pushing fluid past seals that are only meant for very low pressure. I did it with 3 D5 pumps in series.
> 
> When you start thinking you need a whole lot of pump it's either unnecessary or there's a clog somewhere in the loop as happens with pastel / nanoparticle fluids.


Okay this is my loop.

Radiators : Airplex Gigant 3360, 3x360mm ST30 Alphacool, 1x240mm. All with 140/120mm fans 
Pumps : 2xD5 next vision
Reservoir : Aqualis 450 with fill level indicator base
Soft tubing 10/13 mm about 4 m in total
Filter from Aquacomputer
High flow mps from Aquacomputer
High flow next vision from Alphacool
Quick fittings one set
Mix of 45 and 0 deg compression fittings
Gpu block EKWB strix
Cpu block Techn 2066
Kryonaut Grizzly on blocks, standard pads from suppliers

Heat load: CPU max 450W, GPU say 520W. Pumps draw around 20W each at 5000 rpm.

On both pumps:
5000 rpm gives 150l/h 
4000 rpm gives 125l/h

Ambient is 25 deg and water stays at max 1 deg delta T to ambient at 1400+ rpm on fans and 150l/h flow. At fans 900 rpm it is around 3-4 degrees delta T to ambient with 125 l/h flow. 

GPU maxes out at 58 to 59 degrees at 520W load in the max cooling scenario above. CPU draws around 300W in these benchmarks and maxes out at 68 to 70 degrees on the hottest core (10980XE, 2x5.1, 4x5.0, 12x4.8).

Pumps and system above is one loop with serial pump setup. 

I have more reservoirs, pumps etc. as mentioned, but what should I do to improve? Suggestions?
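As a quick sanity check on those flow numbers: the water's temperature rise per trip through the loop follows from Q = ṁ·c·ΔT. A minimal sketch, taking the ~970 W combined heat load and the two quoted flow rates from the post above (water properties are standard textbook values):

```python
# Per-pass coolant temperature rise: dT = P / (m_dot * c)
# Assumes plain water: c ≈ 4186 J/(kg·K), density ≈ 1 kg/L.

def per_pass_rise(power_w: float, flow_lph: float) -> float:
    """Temperature rise of the coolant in one trip through the loop (°C)."""
    m_dot = flow_lph / 3600.0   # mass flow in kg/s (1 L of water ≈ 1 kg)
    c = 4186.0                  # specific heat of water, J/(kg·K)
    return power_w / (m_dot * c)

# ~970 W total (450 W CPU + 520 W GPU) at the two quoted flow rates:
print(per_pass_rise(970, 150))   # ≈ 5.6 °C per pass
print(per_pass_rise(970, 125))   # ≈ 6.7 °C per pass
```

This is the rise across one full pass of the loop, not the block-to-water delta, but it shows why going from 125 to 150 l/h is only worth a degree or so of water temperature.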


----------



## Shawnb99

dr/owned said:


> Have to warn you to not put more than 2 D5 pumps in series. There is no scenario of loop restriction where you would need that much pump and even if you did have that much restriction (maybe some hypothetical system with 7 GPU's and 4 CPUs), you're liable to generate so much pressure you can start blowing tubing off barbs or pushing fluid past seals that are only meant for very low pressure. I did it with 3 D5 pumps in series.
> 
> When you start thinking you need a whole lot of pump it's either unnecessary or there's a clog somewhere in the loop as happens with pastel / nanoparticle fluids.


You’re doing something wrong then. I ran two MCP35X’s in my loop and never had tubing blow off barbs. Didn’t you say all the flow you need is just a trickle so why would you run three D5’s? 
Again can’t help but laugh at these claims


----------



## Pepillo

Would it be helpful to put passive heatsinks on the backplate where the memory chips are? Would temperatures drop? It is a very easy mod to perform, but since the memory has no temperature sensors I will not be able to check the result. What do you think? Something like a few of these:









CTRICALVER L40mm X W40mm X H11mm 4pcs Disipador de Calor Negro Radiador de Aluminio Fuego TV Kit de Bricolaje Placa de Circuito de Chip IC Chip Mosfet Adecuado para Amplificador LED de Alta Potencia : Amazon.es: Informática


----------



## Nico67

Zogge said:


> Okay this is my loop.
> 
> Radiators : Airplex Gigant 3360, 3x360mm ST30 Alphacool, 1x240mm. All with 140/120mm fans
> Pumps : 2xD5 next vision
> Reservoar : Aqualis 450 with fill level indicator base
> Soft tubing 10/13 mm about 4 m in total
> Filter from Aquacomputer
> High flow mps from Aquacomputer
> High flow next vision from Alphacool
> Quick fittings one set
> Mix of 45 and 0 deg compression fittings
> Gpu block EKWB strix
> Cpu block Techn 2066
> Kryonaut Grizzly on blocks, standard pads from suppliers
> 
> Heatload cpu max 450w, gpu say 520w. Pumps drop around 20w each on 5000rpm speed.
> 
> On both pumps:
> 5000 rpm gives 150l/h
> 4000 rpm gives 125l/h
> 
> Ambient is 25 deg and water stays at max 1 deg delta T to ambient at 1400+ rpm on fans and 150l/h flow. At fans 900 rpm it is around 3-4 degrees delta T to ambient with 125 l/h flow.
> 
> Gpu max out on 58 to 59 degrees on 520W load and max cooling scenario above. Cpu gives around 300w in these benchmarks and max on 68 to 70 degrees on hottest core (10980xe 2x5.1, 4x5.0, 12x4.8).
> 
> Pumps and system above is one loop with serial pump setup.
> 
> I have more reservoars, pumps etc as mentioned but what should I do to improve ? Suggestions ?


Run two loops, fewer restrictions per loop and a pump on each?


----------



## Shawnb99

Zogge said:


> Okay this is my loop.
> 
> Radiators : Airplex Gigant 3360, 3x360mm ST30 Alphacool, 1x240mm. All with 140/120mm fans
> Pumps : 2xD5 next vision
> Reservoar : Aqualis 450 with fill level indicator base
> Soft tubing 10/13 mm about 4 m in total
> Filter from Aquacomputer
> High flow mps from Aquacomputer
> High flow next vision from Alphacool
> Quick fittings one set
> Mix of 45 and 0 deg compression fittings
> Gpu block EKWB strix
> Cpu block Techn 2066
> Kryonaut Grizzly on blocks, standard pads from suppliers
> 
> Heatload cpu max 450w, gpu say 520w. Pumps drop around 20w each on 5000rpm speed.
> 
> On both pumps:
> 5000 rpm gives 150l/h
> 4000 rpm gives 125l/h
> 
> Ambient is 25 deg and water stays at max 1 deg delta T to ambient at 1400+ rpm on fans and 150l/h flow. At fans 900 rpm it is around 3-4 degrees delta T to ambient with 125 l/h flow.
> 
> Gpu max out on 58 to 59 degrees on 520W load and max cooling scenario above. Cpu gives around 300w in these benchmarks and max on 68 to 70 degrees on hottest core (10980xe 2x5.1, 4x5.0, 12x4.8).
> 
> Pumps and system above is one loop with serial pump setup.
> 
> I have more reservoars, pumps etc as mentioned but what should I do to improve ? Suggestions ?


What are you looking to improve? Water temps? Add more radiator space. Better GPU temps? Other than changing blocks you'll get a small improvement from lower water temps, so again add more radiator space or better cooling on the ones you have.
Adding flow so you’re at 1GPM might give you another degree


----------



## Markus_

bmgjet said:


> Id be checking the block with a delta like that.
> My EK block was terrible for my first test mount. I had to shave down 2 of the standoffs for it to even make contact with the top edge of the die.
> And that was with the screws done up tight enough that it undid the standoffs when removing the block again.
> 
> View attachment 2472132


I ordered the same block, damn. 
So there was no contact without that fix?

Thx 
Markus


----------



## dr/owned

Zogge said:


> I have more reservoars, pumps etc as mentioned but what should I do to improve ? Suggestions ?


This is a solid loop, I'll be interested to see if just the different GPU block fixes the delta on it. Would be nice to get some calipers on it which would give us some good clues.

Your flowrate matches mine and it's not a problem in my loop (and I have water temp sensors on every component to confirm). You _could_ go dual loops like ^^ suggested but I've been there and transitioned away from it. Just not much benefit to the pain of having to double up on the components needed for a loop.


----------



## ALSTER868

Zogge said:


> Gpu max out on 58 to 59 degrees on 520W load and max cooling scenario above.


That's way too much man. You've got a pretty serious loop, your temps and T deltas seem to be fine in almost all aspects except your GPU temp and T delta GPU/water.
I'd check the waterblock seating first. I'd also increase the flow rate, in my case going from 150 l/h to 200 l/h gives me a 2-3C decrease in GPU temp.

I have a Bitspower waterblock for the Strix 3090 here; my GPU/water delta at 500+W load is like 13-16C, and when the load is up to 450W the delta doesn't go above 12-13C.
Ambient air/water delta is like yours, doesn't exceed 3-4C at 750 rpm fans on my MO-RA3, with 2xD5 running at 100% and 75% speed.


----------



## Zogge

ALSTER868 said:


> That's way too much man. You've got a pretty serious loop, your temps and T deltas seem to be fine in almost all aspects except your GPU temp and T delta GPU/water.
> I'd check the waterblock seating first. I'd also increase the flow rate, in my case going from 150 l/h to 200 l/h gives me a 2-3C decrease in GPU temp.
> 
> Bitspower waterblock for Strix 3090 there, my GPU/water delta in 500+W load is like 13-16C, when the load is up to 450W then delta is not going above 12-13C.
> Ambient air/water is like yours doesn't exceed 3-4C @750 rpm fan on my MO-RA3 with 2xD5 running at 100% and 75% speed.


Ok all thanks for your inputs!!

Looks like I need to try this as a first step:

1. Try to reseat the EKWB on the Strix and add a 3rd pump for flow if needed. Not sure I dare to go the liquid metal route as I have never done it. 

2. Try the Bykski and MP5 instead, and add a 3rd pump too if needed for flow.

I will report once this is done.

Two other questions :

Does it make any sense to introduce parallel flow just over the CPU/MP5/GPU for less restriction, with everything else in serial?

Will 3 or 4 pumps in series really burst fittings and stuff like someone said?
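On the series-pump question: pumps in series add head at a given flow, so against a fixed loop restriction each extra pump buys less flow while the dead-head pressure keeps climbing linearly. A rough, idealized sketch; the pump curve shape and the restriction constant here are made-up illustrative numbers chosen so that two pumps land near the ~150 l/h quoted above, not measured D5 data:

```python
# Idealized D5-like pump curve H(Q) = h_max * (1 - (Q/q_max)^2) working
# against a quadratic loop restriction H = k * Q^2.
# n pumps in series: heads add, so n * H(Q) = k * Q^2 at the operating point.

def operating_point(n_series: int, h_max=3.9, q_max=1500.0, k=3.4e-4):
    """Flow (L/h) and head (m) where n series pumps meet the loop curve.

    Solve n*h_max*(1 - (Q/q_max)^2) = k*Q^2 for Q (closed form).
    """
    a = n_series * h_max
    q = (a / (k + a / q_max**2)) ** 0.5
    return q, k * q * q

for n in (1, 2, 3):
    q, h = operating_point(n)
    print(f"{n} pump(s): ~{q:.0f} L/h at ~{h:.1f} m head")
```

With these numbers a third pump adds only ~20% more flow, while at zero flow (a blockage, or a closed quick-disconnect) the static pressure is simply n × h_max, which is where the concern about seals and barbs comes from.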


----------



## ViRuS2k

dr/owned said:


> AliExpress: Buy Products Online from China Wholesalers at Aliexpress.com . It took a month to get to me with the current corona situation killing air shipments and USPS being jammed with christmas stuff.
> 
> Seems to be their "home base" store...the US one I think is just a reseller and not actually affiliated with them.
> 
> Judging by this it seems they don't make a backplate and want you to reuse the factory one?
> View attachment 2472146


Yeah, you might be right. I guess I can reuse the original backplate; the waterblock takes up the whole PCB length of the card, so using the original backplate would work and would still look good...

This is what I ordered, I hope it's the right block for my card - US $48.17 |Bykski GPU Water Block For MSI RTX3080/3090 Gaming X TRIO, 12V/5V RGB MB SYNC N MS3080TRIO X|Fluid DIY Cooling| - AliExpress

It says 3080/3090 in the description so I hope it's the correct block. I also ordered the POM module for reading GPU/water temps directly, or I can hook it up to the backplate for direct backplate monitoring.


----------



## Esenel

Falkentyne said:


> Wow that's bad.
> Why do people still buy EK stuff? Is it because it's the only one available?


Because EK provides solid quality early.
I have here the EK one for Strix and it is doing a great job and making good contact.

I had the Aqua Computer one for the Strix for three days, only to figure out it has a wrongly bent heatpipe and therefore a leaking terminal.

So in my case I am better off with the EK one.


----------



## ALSTER868

Zogge said:


> Try to reseat ekwb on strix and add a 3rd pump for flow if needed. Not sure I dare to go the liquid metal route as I have never done it.


Yep, trying to reseat your waterblock and probably trying other thermal pads would be step one.
I don't think a 3rd pump is a must; you'd better increase your pumps' speed, if that doesn't add much noise of course, or try to reduce your loop's restrictiveness. BTW I worked on that and achieved better temps and less noise from the pumps.
Using LM is not as scary as you might think. Just make sure to isolate the SMDs around the chip with simple nail polish, then apply a thin layer of LM to the chip and waterblock, and that's it. It would help you shave off a couple of degrees as well.


----------



## bwana

des2k... said:


> I'm at 22c ambient, after 2hour of TE GT2 , water is max 28c regardless if I use 400w or 550w plus on the card. Adding 200w from Prime95 cpu, will make the water climb to 30c+.
> 
> 2 x 360 1x 240,with push/pull 1400rpm+, so for example an aggressive OC at 595w GPU gets to 56c max in GT2.
> 
> This makes it a delta of 28c, is my EK block bad ? I already re-mounted the thing. Maybe add washers (more force on the core? or maybe lap the damn thing ?


you have TWO 360 rads for the KPE? I also read someone here is using THREE 360 rads. Is this really needed? It seems the KPE needs its own chiller.


----------



## bwana

ViRuS2k said:


> Yeah you might be right,  guess i can reuse the original back plate as the waterblock takes up the hole PCB length of the card so using the original back plate would be pretty good and would still look good. ...
> 
> this is what i ordered i hope its the right block for my card - US $48.17 |Bykski GPU Water Block For MSI RTX3080/3090 Gaming X TRIO, 12V/5V RGB MB SYNC N MS3080TRIO X|Fluid DIY Cooling| - AliExpress
> 
> says 3080/3090 in the discription so i hope its the correct block + i ordered the POM module aswell for reading GPU temps/Water temps directly or i can hook it up to the back plate for direct back plate monitoring


Interesting site. Their water block for the EVGA FTW Ultra is only $30 but the one for the XC is $179.

I wonder why?


----------



## jura11

Zogge said:


> Okay this is my loop.
> 
> Radiators : Airplex Gigant 3360, 3x360mm ST30 Alphacool, 1x240mm. All with 140/120mm fans
> Pumps : 2xD5 next vision
> Reservoar : Aqualis 450 with fill level indicator base
> Soft tubing 10/13 mm about 4 m in total
> Filter from Aquacomputer
> High flow mps from Aquacomputer
> High flow next vision from Alphacool
> Quick fittings one set
> Mix of 45 and 0 deg compression fittings
> Gpu block EKWB strix
> Cpu block Techn 2066
> Kryonaut Grizzly on blocks, standard pads from suppliers
> 
> Heatload cpu max 450w, gpu say 520w. Pumps drop around 20w each on 5000rpm speed.
> 
> On both pumps:
> 5000 rpm gives 150l/h
> 4000 rpm gives 125l/h
> 
> Ambient is 25 deg and water stays at max 1 deg delta T to ambient at 1400+ rpm on fans and 150l/h flow. At fans 900 rpm it is around 3-4 degrees delta T to ambient with 125 l/h flow.
> 
> Gpu max out on 58 to 59 degrees on 520W load and max cooling scenario above. Cpu gives around 300w in these benchmarks and max on 68 to 70 degrees on hottest core (10980xe 2x5.1, 4x5.0, 12x4.8).
> 
> Pumps and system above is one loop with serial pump setup.
> 
> I have more reservoars, pumps etc as mentioned but what should I do to improve ? Suggestions ?


Hi there 

Have a look: in my loop I've got 4x 360mm radiators (HWLabs SR-2 360mm and 2x HWLabs GTS 360) plus a MO-RA3 360mm, housed in a Caselabs M8 with pedestal; total tubing run is, I think, close to 3m. 

This loop is cooling a 3-GPU setup (RTX 3090 GamingPro with Bykski waterblock and backplate, Asus RTX 2080 Ti Strix with EKWB Vector RTX 2080 Ti Strix waterblock, Zotac RTX 2080 Ti AMP with EVGA FTW3 BIOS and Aquacomputer Kryographics RTX 2080 Ti with active backplate); for the CPU I have a 3900X at a 4.35GHz OC with an Aquacomputer Kryos Next. 

Previously I ran a single D5 pump with dual 18W DDC pumps in an Aquacomputer dual pump top; flow rate in that loop was 125-150 LPH. Recently I swapped my DDC pumps for another 3x D5 pumps, so I'm currently running 4x D5 pumps (3 at full speed and another at just the lowest speed) and my flow rate now is in the 215 LPH range; with all pumps at full speed it's 225-230 LPH. 

My GPU temperatures are in the 30s (the RTX 3090 usually sits at 32-33°C during benchmarks, about the same in rendering, and 33-36°C in gaming; the highest temperature I have seen to date has been 40°C, and that's with the KPE XOC BIOS and 75% power limit; the other GPUs in rendering are usually 30-32°C max). Fans I usually run at 650-750RPM, and water delta T in gaming is usually 2-4°C, staying under 5°C with all fans spinning around 700RPM.

What case do you have? Improve the cooling; you have enough radiators, it just seems the loop isn't translating into actually lower temperatures.

I'm using the supplied thermal pads on the GPUs, on the Bykski, EKWB and Aquacomputer blocks alike, with no issues. For TIM I use Kryonaut on the GPUs and ZF-EX on the Bykski. 

A picture would help for sure, and I wouldn't use any filter in the loop; I don't think you need that there.

Hope this helps 

Thanks, Jura


----------



## Biscottoman

Has anyone been able to improve their memory overclock with the new Strix BIOS?


----------



## jomama22

Falkentyne said:


> @bmgjet @jomama22 @dante`afk @SoldierRBT @dr/owned @HyperMatrix
> Ok I know some of you may not care (since you're using Kingpin cards), but there's something weird going on.
> 
> I applied another layer of paint over the GPU Chip Power shunt so I could stick a 5 mOhm shunt on top of the old paint, then I painted the top of the new shunt also.
> This reduced GPU Chip Power about 20W.
> 
> HOWEVER
> This also RAISED SRC power by 20W! And I could freaking swear this ALSO made 8 pin #2 report a LITTLE less than before (like 2W), and I didn't touch the 8 pins because I only removed the backplate.
> 
> I know what changed because I ran a test before.
> 
> And I remember @DrunknFoo clearly saying on the eVGA forums (hey man, you here?) that he found "continuity" between GPU Chip Power and SRC shunts (like they were linked somehow?), MVDDC and PCIE Slot shunts, and GPU Chip Power and 8 pin #3 (huh?). So I'm wondering if I'm onto something or just on something...is there some freaky power balancing going on ? I may have to find out what has continuity with 8 pin #1 shunt and see what happens if I mess with that (MVDDC or SRC maybe?)
> 
> _Edit_ removed that shunt. It was actually freaking TOUCHING the backplate!! I was getting random "Super low FPS" stutters...yikes...could have destroyed my card.
> I don't want to be like those fools who destroyed two video cards with 1000W Vbioses...
> 
> Anyway, I wonder what has continuity with 8 pin #1...I can't find @DrunknFoo 's post because it's buried on eVGA forums in the complain about the FTW3 Vbios thread


I would guess the SRC sources are connected to specific pins; you could check with a multimeter on every shunt and see what they connect to. Take a look at most AIB PCBs, the extra shunts by the pins are most likely those, and der8auer also more or less confirmed this in his video of shunting the TUF (near the end of the video, if I remember correctly).


----------



## escapee

Hello All!

I have been busy holidaying over the past few weeks. In that time, I have theorized many advancements in overclocking Jensen's mighty rocket chip. Upon returning from my holiday and testing said theories, it appears that the 3090 can in fact perform very well and has exceeded my expectations!!!

I have created a video to share my knowledge with you all. I have also included some bonus content including a world record run just for @chispy and @Falkentyne






Have a wonderful new year everyone xx !!!


----------



## jomama22

escapee said:


> Hello All!
> 
> I have been busy holidaying over the past few weeks. In that time, I have theorized many advancements in overclocking Jensens mighty rocket chip. Upon returning from my holiday and testing said theories; it appears that the 3090 can in fact perform very well and has exceeded my expectations !!!
> 
> I have created a video to share my knowledge with you all. I have also included some bonus content including a world record run just for @chispy and @Falkentyne
> 
> 
> 
> 
> 
> 
> Have a wonderful new year everyone xx !!!


What's unfortunate is that since you cheated multiple times before, even if this was legit, no one cares nor should believe you.


----------



## inedenimadam

escapee said:


> Hello All!
> 
> I have created a video to share my knowledge with you all.


Did I miss it? 

Get best hardware.
Get KPE vBIOS.

I don't think I can slog through that video a second time. Can you just tell me if I missed something important?


----------



## HyperMatrix

inedenimadam said:


> Did I miss it?
> 
> Get best hardware.
> Get KPE vBIOS.
> 
> I don't think I can slog through that video a second time. Can you just tell me if I missed something important?


He didn't get the best hardware. He got a Strix. KPE FTW.  I'm confused by the video because it seems to be a troll on...himself? I mean video starts off calling out people who were questioning him. He sets it up as though he's about to show proof of his expertise in overclocking in order to discredit his detractors. But then simply says buy Strix and 5950x (which is suboptimal) and jumps into a clip of port royal with nothing regarding his setup or settings or method.


----------



## SoldierRBT

HyperMatrix said:


> So with +1350 memory my temps were going up to 72-73C and then crashing. Found an old ram block I had. Kryonaut pasted it to the back of my card. Temps on memory #2 dropped 10C and memory #1 dropped nearly 30C. Cheap and easy method. Don’t need mp5. Any cpu or ram block with paste is fine. Only problem with my setup is I couldn’t position it properly as it was blocked by other tubing. But still substantial benefit and stopped the 72C+ crashes.
> 
> View attachment 2472126
> 
> View attachment 2472127
> 
> View attachment 2472128


That's awesome results. My memory also gets in the 70s while gaming. Hopefully the KPE block has memory cooling on the backplate.


----------



## jomama22

HyperMatrix said:


> He didn't get the best hardware. He got a Strix. KPE FTW.  I'm confused by the video because it seems to be a troll on...himself? I mean video starts off calling out people who were questioning him. He sets it up as though he's about to show proof of his expertise in overclocking in order to discredit his detractors. But then simply says buy Strix and 5950x (which is suboptimal) and jumps into a clip of port royal with nothing regarding his setup or settings or method.


Yeah, it makes it that much more unbelievable tbh. No setup pics or anything? A random stock photo of 3090 strix boxes?

Has to be a real lonely person to make a video like that.

But anyway. 5950x is fine. Just like a 10900k, so long as you have it set up correctly it's not a big deal. I get the same gfx score in TS and TSE with my 5950x as any intel counterpart. Same with port royal.

To put it into perspective, I only gained 5% in graphics score going from an intel 3960x (yes, the one from 2011) to the 5950x.
It all comes down to the effective clock matching the requested input clock.

Someone had a video of them doing port royal with a 3090 on their 10900k and then their amd 8350 and it scored within 2 points lol.

I'm genuinely not sure where this whole idea comes from that 5xxx AMD parts can't match Intel in benchmarks for graphics scores. It's just incorrect.


----------



## HyperMatrix

jomama22 said:


> I'm genuinely not sure where this whole idea comes from that 5xxx AMD parts can't match Intel in benchmarks for graphics scores. It's just incorrect.


Because even while having higher IPC than intel, the majority of GPU-Bound game benchmarks show the intel CPUs in the lead. That's why I canceled my 5950x preorder and will be picking up the new 11900K when it launches. Not sure what the difference is when both CPUs are OC'd with LN2 for extreme benching. But at least under standard water loops, Intel seems to have the lead in those scenarios. And even a 1% difference in performance at 18k points is another 180 points. Not entirely insignificant if you're actually pushing for world records.


----------



## des2k...

well... after a re-mount, re-paste, double washers , I finally fixed the delta on the EK block
with water temps 21c, gpu is only 35c with 460w load

huge difference from 46c before  I guess all of you were right lol !


----------



## ALSTER868

Got a Bitspower waterblock for my Strix, with a backplate that has a small radiator with short fins on it.
My memory can do +1500; the backplate during benchmarks warms up as high as 45-48C. As a temporary solution I set a 90mm fan to blow on the backplate. No problems so far.
Because of that small radiator I am not sure whether I could fit a waterblock on the backplate, but now it seems unnecessary to me.


----------



## stryker7314

How much better is the KPE in board components than a FTW3 Ultra?


----------



## geriatricpollywog

escapee said:


>


All that for an invalid score. What's weird is the amount of effort he puts into this. It takes "model train" or "ship in a bottle" level of dedication and skill to LN2 overclock a 3090 past 2.7ghz. He could clearly get into the top 5 on merit alone, but posting seething videos of invalid results shows he doesn't want to be respected for his primary hobby and doesn't care if people think he's cheating by turning down the LOD. He just wants to be #1. Kind of makes you re-think your priorities in life.


----------



## jomama22

HyperMatrix said:


> Because even while having higher IPC than intel, the majority of GPU-Bound game benchmarks show the intel CPUs in the lead. That's why I canceled my 5950x preorder and will be picking up the new 11900K when it launches. Not sure what the difference is when both CPUs are OC'd with LN2 for extreme benching. But at least under standard water loops, Intel seems to have the lead in those scenarios. And even a 1% difference in performance at 18k points is another 180 points. Not entirely insignificant if you're actually pushing for world records.


You can look at my TS graphics score on the HOF with my 5950x @4.8ghz and compare against 5.6ghz 10900k's. Same with the graphics score in TSE.

I'm not sure what gaming benchmarks you are referring to? Can you link?

GPU-bound scenarios, like 4K and such, more or less put everything within 1/2%, whether Intel or AMD come out on top, all the way down the product stack to 6-core CPUs.


----------



## jomama22

stryker7314 said:


> How much better is the KPE in board components than a FTW3 Ultra?


Way better. The FTW3 is basically a tall reference card, as buildzoid would say.


----------



## HyperMatrix

stryker7314 said:


> How much better is the KPE in board components than a FTW3 Ultra?


Infinitely better. FTW3 is kind of a budget build. Although honestly even that's good enough. But KPE is an entirely different beast than the FTW3.


----------



## jomama22

escapee said:


> Hello All!
> 
> I have been busy holidaying over the past few weeks. In that time, I have theorized many advancements in overclocking Jensens mighty rocket chip. Upon returning from my holiday and testing said theories; it appears that the 3090 can in fact perform very well and has exceeded my expectations !!!
> 
> I have created a video to share my knowledge with you all. I have also included some bonus content including a world record run just for @chispy and @Falkentyne
> 
> 
> 
> 
> 
> 
> Have a wonderful new year everyone xx !!!


Lolololol your score got removed from 3dmark already.

Once a cheater always a cheater.


----------



## Thanh Nguyen

Any kingpin owner wanna sell?


----------



## HyperMatrix

jomama22 said:


> You can look at my TS graphics score on the HOF with my 5950x @4.8ghz and compare against 5.6ghz 10900k's. Same with the graphics score in TSE.
> 
> I'm not sure what gaming benchmarks you are referring to? Can you link?
> 
> Gpu bound scenario's, like 4k and such, more or less put everything within 1/2%, whether intel or amd come out on top, all the way down the product stack to 6 core CPUs.


Yes I'm referring to 4K GPU bound benchmarks. The difference as you mentioned is usually just 1-3% but that's still 1-3%. And if that translates to GPU bound bench scores, then that is significant. And from my recollection, the ratio was over 4:1 in favor of Intel and sometimes a higher percentage when comparing 1% lows as opposed to average FPS. 

In your case, you're telling me to compare your score with a 5950x to someone with a 5.6GHz 10900K. But that's not an accurate comparison because we're dealing with different GPUs. If you had the exact same GPU with the exact same _real_ clocks, system memory speed/timings, GPU memory clocks, and temperatures, then the 10900K should be scoring higher. 

Also I didn't buy a 10900k for a reason. It was only barely better gaming, but severely lacking in multitasking performance compared to the 5950x. The new 11900k on the other hand...is definitely on my buy list. Great clocks, great IPC, and should greatly improve Intel's lead in 4K gaming. And doesn't hurt that Photoshop is sadly and notoriously single threaded and could benefit from it.


----------



## VinnieM

Managed to get at least 1 HDMI output working on my AORUS Master with the 1000w BIOS. I'm limiting it at 75% though, 500 watts is already almost too much for air cooling 🥵
I've set up MSI Afterburner so that it's automatically switching 2D/3D profiles so it's lowering the memory clocks when not running a 3D program.
And for anyone with a 2*8 pin card, you can enter a correction formula in the monitoring settings so that it automatically converts the value * 0.667:









I've tested this with a power meter at the wall. Peak was 648w at the wall, peak power of the graphics card was 509w so that sounds plausible (CPU was at 50w, PSU losses at 90% efficiency ~60w).
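The correction and the wall-meter check can be written out explicitly. A small sketch: the 0.667 factor and the 509 W / 50 W / 90% figures are from the post above, while the ~25 W for fans, drives and motherboard is an assumed number used here just to close the gap, and 763 W is a hypothetical uncorrected reading:

```python
# Sanity-checking a reported GPU power draw against a wall meter.
# 0.667 ≈ 2/3 undoes the over-reporting of a BIOS that assumes three
# 8-pin inputs when the board only has two.

def corrected_gpu_power(reported_w: float, factor: float = 0.667) -> float:
    """Scale the sensor-reported board power down to the real draw."""
    return reported_w * factor

def expected_wall_power(gpu_w, cpu_w, other_w=0.0, psu_eff=0.90):
    """Total DC load divided by PSU efficiency gives AC draw at the wall."""
    return (gpu_w + cpu_w + other_w) / psu_eff

print(corrected_gpu_power(763))                  # ≈ 509 W actual
print(expected_wall_power(509, 50, other_w=25))  # ≈ 649 W at the wall
```

That lands right on the 648 W the meter showed, so the corrected 509 W peak is plausible.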


----------



## stryker7314

jomama22 said:


> Way better. The FTW3 is basically a tall reference card, as buildzoid would say.





HyperMatrix said:


> Infinitely better. FTW3 is kind of a budget build. Although honestly even that's good enough. But KPE is an entirely different beast than the FTW3.


Sooo no one knows, I could get more from EVGA marketing lol.

I have seen posts on EVGA community that they are barely different at all.


----------



## Esenel

@jomama22 
Intel 10900K is faster in Timespy with the same GPU.
Hypermatrix is correct.
@ivans89 tested it.
He can repost the scores again.


----------



## HyperMatrix

stryker7314 said:


> Sooo no one knows, I could get more from EVGA marketing lol.
> 
> I have seen posts on EVGA community that they are barely different at all.


Gotta say I am surprised at the lack of full high resolution shots of the board. All we've seen are from video grabs.


----------



## jomama22

HyperMatrix said:


> Yes I'm referring to 4K GPU bound benchmarks. The difference as you mentioned is usually just 1-3% but that's still 1-3%. And if that translates to GPU bound bench scores, then that is significant. And from my recollection, the ratio was over 4:1 in favor of Intel and sometimes a higher percentage when comparing 1% lows as opposed to average FPS.
> 
> In your case, you're telling me to compare your score with a 5950x to someone with a 5.6GHz 10900k. But that's not an accurate comparison because we're dealing with different GPUs. If you had the exact same GPU with the exact same _real_ clocks, system memory speed/timings, gpu memory clocks temperatures, then the 10900K should be scoring higher.
> 
> Also I didn't buy a 10900k for a reason. It was only barely better gaming, but severely lacking in multitasking performance compared to the 5950x. The new 11900k on the other hand...is definitely on my buy list. Great clocks, great IPC, and should greatly improve Intel's lead in 4K gaming. And doesn't hurt that Photoshop is sadly and notoriously single threaded and could benefit from it.


I mean, my 3090 clock speeds are lower than those of the 10900Ks I'm comparing against. Obviously it's difficult to tell what their real GPU clock is, but that's not really something anyone could know.

I'm not seeing any 1-3% advantage at 4K on any review site?









AMD Ryzen 9 5900X and 5950X review (Performance - Gaming RTX 3090 - 3840x2160 UHD) - www.guru3d.com

Ryzen 9 5950X and 5900X Review: AMD Unleashes Zen 3 Against Intel's Last Performance Bastions - www.extremetech.com

AMD Zen 3 Ryzen Deep Dive Review: 5950X, 5900X, 5800X and 5600X Tested - www.anandtech.com





The AnandTech article is the most detailed, with gaming benchmarks at 4K showing both averages and 95th percentiles. They just trade blows at 4K as expected, with minimal difference across the whole product stack.

I can run some Fire Strike 4K benches over the weekend and see.


----------



## jomama22

Esenel said:


> @jomama22
> Intel 10900K is faster in Timespy with the same GPU.
> Hypermatrix is correct.
> @ivans89 tested it.
> He can repost the scores again.


Then there is something wrong with the AMD setup. I can help if someone is getting worse scores on AMD with the same GPU.

A GPU average of 2164 core / 1400 memory nets me a 23740 GPU score in Time Spy and 12370 in Time Spy Extreme.

I'm not touting my scores because I care about where they land, merely showing them as proof for my statements.


----------



## des2k...

VinnieM said:


> Managed to get at least 1 HDMI output working on my AORUS Master with the 1000w BIOS. I'm limiting it at 75% though, 500 watts is already almost too much for air cooling 🥵
> I've set up MSI Afterburner so that it's automatically switching 2D/3D profiles so it's lowering the memory clocks when not running a 3D program.
> And for anyone with a 2*8 pin card, you can enter a correction formula in the monitoring settings so that it automatically converts the value * 0.667:
> View attachment 2472230
> 
> 
> I've tested this with a power meter at the wall. Peak was 648w at the wall, peak power of the graphics card was 509w so that sounds plausible (CPU was at 50w, PSU losses at 90% efficiency ~60w).


That's good to know about the monitor adjustment; I was using a batch file with nvidia-smi to convert the power for 2x8-pin.
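The correction described above is just linear scaling; a minimal sketch (assuming, per the posts, that a 2x8-pin card on the 3x8-pin XOC bios over-reports board power by roughly 3:2, i.e. the same 0.667 factor entered into Afterburner's correction formula):

```python
# Hypothetical helper, not EVGA/Galax tooling: scales the over-reported
# power reading on a 2x8-pin card down by the 0.667 correction factor
# mentioned above.

def corrected_power(reported_watts: float, factor: float = 0.667) -> float:
    """Estimated real board power from the over-reported software value."""
    return reported_watts * factor

# e.g. a reported ~763 W corresponds to roughly 509 W at the card
print(round(corrected_power(763)))  # 509
```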


----------



## dante`afk

@escapee why are you going through this when everyone knows you're cheating? Do the right thing and show your rig, LN2, everything. You want to be valid? Spend your time showing the right thing, not memeing.


has anyone a KPE to sell or is in the queue? @HyperMatrix "wink" 


Someone wants to buy my FE for a lot more than I bought it for. I believe it goes really well with LN2, and I guess that person sees that too.


----------



## HyperMatrix

jomama22 said:


> I mean, my 3090 clock speeds are lower than those of the 10900k systems I'm comparing against. Obviously it's difficult to tell what their real GPU clock is, but that's not really something anyone could know.
> 
> I'm not seeing any 1-3% advantage at 4K on any review site?
> 
> AMD Ryzen 9 5900X and 5950X review (www.guru3d.com)
> Ryzen 9 5950X and 5900X Review: AMD Unleashes Zen 3 Against Intel's Last Performance Bastions (www.extremetech.com)
> AMD Zen 3 Ryzen Deep Dive Review: 5950X, 5900X, 5800X and 5600X Tested (www.anandtech.com)
> 
> The AnandTech article is the most detailed, with gaming benchmarks at 4K showing both averages and 95th percentiles. They just trade blows at 4K as expected, with minimal difference across the whole product stack.
> 
> I can run some Fire Strike 4K benches over the weekend and see.


Well, Guru3D is just showing the same rounded-off numbers, so nothing to report there. But ExtremeTech shows the 10900K 0.875% faster across 10 games (I didn't include Ashes because it's a CPU crapfest). And AnandTech didn't have any real 4K benches; they had "1080p maxed out" and "4K low graphics," which doesn't capture the GPU-bound situation I was referring to. Furthermore, for gaming purposes, 1% lows also tend to favor Intel at an even higher rate.


----------



## HyperMatrix

dante`afk said:


> has anyone a KPE to sell or is in the queue? @HyperMatrix "wink"


Haha. Still in the queue for a Hydrocopper version.


----------



## jomama22

HyperMatrix said:


> Well Guru3d is just showing the same rounded off numbers so nothing to report there. But Extremetech shows the 10900k 0.875% faster across 10 games (I didn't include ashes because it's a cpu crapfest). And Anandtech didn't have any 4K benches. Well they had "1080p maxed out" and "4k low graphics" which doesn't help push the GPU bound situation I was referring to. Furthermore, for the purposes of gaming, 1% lows also tend to favor Intel at an even higher rate.


Then again, links please?

You're being very specific about the whole scenario. I am providing proof of my statements. I just want to see where you got your info from, that's all.


----------



## Cholerikerklaus

Esenel said:


> @jomama22
> Intel 10900K is faster in Timespy with the same GPU.
> Hypermatrix is correct.
> @ivans89 tested it.
> He can repost the scores again.


Same with Port Royal


----------



## HyperMatrix

jomama22 said:


> Then again, links please?
> 
> You're being very specific about the whole scenario. I am providing my proof of my statements. I just want to see where you got your info from, that's all.


Nearly impossible to find articles that did actual 4K benching, but here are three. Wow. 90% of them stuck to 320p-1440p in order to show the strength of the CPU in CPU-bound scenarios and not show its weakness at 4K.

And I'm only being specific because that scenario is the one I game at. I game at 4K. I couldn't care less if the 5950X got 100% more FPS at 1080p, or at 4K with all settings on low, if it gets 1-3% less at maxed-out 4K, since that's the only resolution/setting I play at. Haha.

AMD’s Ryzen 9 5950X 1080p & 4K Gaming Performance (techgage.com)
AMD Ryzen 9 5950X och Ryzen 9 5900X "Vermeer" - Test (www.sweclockers.com)
Ryzen 9 5950X vs i9-10900K 5.3 GHz en 16 juegos (1080p, 1440p, 2160p) (www.xanxogaming.com)

I just looked for the first three with 4K benches on AMD Ryzen 5000 Vermeer (Zen3) Review Roundup | VideoCardz.com

Also for quick reference, this is the odd behavior with the 5950X that I'm speaking of:

At 1080p the 10900K is just 0.46% faster
At 1440p the 10900K is 1.17% faster
At 2160p the 10900K is 2.97% faster

I don't know enough about CPU architectures to understand why a CPU with clearly better IPC, one that can often completely run away with performance in high-FPS games with low graphics demands, can start to lose that advantage at 4K. Unless, under GPU-bound scenarios, the clock speed itself ends up mattering more than IPC. Because who cares if you can do more per cycle when you're not being asked to do more per cycle? It's the difference between latency and bandwidth. That's my theory, anyway.
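A quick helper for the percentage comparisons above (the inputs here are hypothetical illustration values, not the review's actual FPS numbers):

```python
# How much faster, in percent, one average FPS is than another.
# This is the same math behind the 0.46% / 1.17% / 2.97% figures above.

def percent_faster(a_fps: float, b_fps: float) -> float:
    """Percentage by which a_fps exceeds b_fps."""
    return (a_fps / b_fps - 1) * 100

# hypothetical example values
print(round(percent_faster(103.0, 100.0), 2))  # 3.0
```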


----------



## Falkentyne

HyperMatrix said:


> Gotta say I am surprised at the lack of full high resolution shots of the board. All we've seen are from video grabs.
> 
> View attachment 2472238
> 
> View attachment 2472234
> 
> View attachment 2472235
> 
> View attachment 2472236



Welp, the Kingpin card doesn't have any fuses on its strange-looking shunts, at least. So that's something.


----------



## Falkentyne

Is this a good multimeter to buy?
and I know nothing about current clamps. Can this use a current clamp?



https://www.amazon.com/Fluke-117-Electricians-True-Multimeter/dp/B000O3LUEI/


----------



## jomama22

HyperMatrix said:


> Nearly impossible to find articles that did actual 4K benching, but here are three. Wow. 90% of them stuck to 320p-1440p in order to show the strength of the CPU in CPU-bound scenarios and not show its weakness at 4K.
> 
> And I'm only being specific because that scenario is the one I game at. I game at 4K. I couldn't care less if the 5950X got 100% more FPS at 1080p, or at 4K with all settings on low, if it gets 1-3% less at maxed-out 4K, since that's the only resolution/setting I play at. Haha.
> 
> AMD’s Ryzen 9 5950X 1080p & 4K Gaming Performance (techgage.com)
> AMD Ryzen 9 5950X och Ryzen 9 5900X "Vermeer" - Test (www.sweclockers.com)
> Ryzen 9 5950X vs i9-10900K 5.3 GHz en 16 juegos (1080p, 1440p, 2160p) (www.xanxogaming.com)
> 
> I just looked for the first three with 4K benches on AMD Ryzen 5000 Vermeer (Zen3) Review Roundup | VideoCardz.com
> 
> Also for quick reference, this is the odd behavior with the 5950X that I'm speaking of:
> View attachment 2472256
> View attachment 2472257
> View attachment 2472258
> 
> At 1080p the 10900K is just 0.46% faster
> At 1440p the 10900K is 1.17% faster
> At 2160p the 10900K is 2.97% faster
> 
> I don't know enough about CPU architectures to understand why a CPU with clearly better IPC, one that can often completely run away with performance in high-FPS games with low graphics demands, can start to lose that advantage at 4K. Unless, under GPU-bound scenarios, the clock speed itself ends up mattering more than IPC. Because who cares if you can do more per cycle when you're not being asked to do more per cycle? It's the difference between latency and bandwidth. That's my theory, anyway.


The SweClockers one has a nice collection on this page:

AMD Ryzen 9 5950X och Ryzen 9 5900X "Vermeer" - Test - Speltest: Tolv spel från SweClockers GPU-svit (www.sweclockers.com)

Which, again, is just trading blows whether it's avg or 99th percentile. 

Dunno, just seems like a wash no matter which you choose tbh.


----------



## jomama22

Falkentyne said:


> Is this a good multimeter to buy?
> and I know nothing about current clamps. Can this use a current clamp?
> 
> 
> 
> https://www.amazon.com/Fluke-117-Electricians-True-Multimeter/dp/B000O3LUEI/


You could honestly get something like this for your purpose and be totally fine.



https://www.amazon.com/dp/B01N014USE/ref=cm_sw_r_cp_apa_fabc_wn77FbXVH72F2


----------



## DrunknFoo

Carillo said:


> Be careful? He doesn't know what he is talking about. I ran this bios with 4 mOhm shunts, no problem


Well, I have drawn up to roughly 850W on the FTW on air; haven't really tried going any higher until I can get my hands on a block.


----------



## Falkentyne

jomama22 said:


> You could honestly get somthing like this for your purpose and be totally fine.
> 
> 
> 
> https://www.amazon.com/dp/B01N014USE/ref=cm_sw_r_cp_apa_fabc_wn77FbXVH72F2


I think I'll get the Fluke 117 first.
It's been something I've needed for a long long time anyway and I've always been putting it off.
Ok I'll order the fluke now. I'll look at that clamp you mentioned later.


----------



## itssladenlol

The 1000W XOC bios on my 2x8-pin Galax 3090 (non-shunted) should be maxed at 660W, right? 
Slider at 100% minus the 33% from the missing 3rd 8-pin should equal 660W.
Planning to run it with a Mora 420 and Heatkiller V block.


----------



## GQNerd

KPE noods


----------



## HyperMatrix

itssladenlol said:


> 1000w xoc bios on my 2x8pin galax 3090 non shunt should be maxed At 660w right?
> Slider 100% -33% from missing 3rd 8Pin should equal 660w.
> Planning to run it with a Mora 420 and heatkiller V Block.


It depends on how it distributes power to the PCIe slot with one of the connectors missing. On the FTW3, using a 2-pin bios would drop PCIe slot usage down to about 66%. This bios has, I think, a 95 or 98W PCIe slot max limit. Normally you're looking at around 300W draw per connector, so you'd have 600W from just the two PCIe 8-pin connectors, plus whatever you get from the PCIe slot, whether that's the full 98W or 66% of max slot draw like the FTW3 does. Either way you'd be looking at roughly 666-700W, most likely around the 666W mark.

Can anyone pitch in with info on PCIe slot draw with the 1000W bios on 2-pin cards?
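The arithmetic above can be written out; a rough sketch, where the ~300 W-per-connector and 98 W slot figures are assumptions taken from the post, not measured values:

```python
# Rough estimate of max board power for a 2x8-pin card on the 1000 W XOC
# bios, following the reasoning above.

PER_8PIN_W = 300   # approximate draw allowed per 8-pin connector (assumed)
SLOT_MAX_W = 98    # assumed PCIe slot limit in this bios

def estimated_max_power(connectors: int = 2, slot_fraction: float = 1.0) -> float:
    """Total draw = connectors * per-connector limit + a fraction of slot max."""
    return connectors * PER_8PIN_W + slot_fraction * SLOT_MAX_W

print(estimated_max_power(2, 1.0))          # full slot draw: 698 W
print(round(estimated_max_power(2, 0.66)))  # FTW3-style 66% slot draw: ~665 W
```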


----------



## jomama22

Falkentyne said:


> I think I'll get the Fluke 117 first.
> It's been something I've needed for a long long time anyway and I've always been putting it off.
> Ok I'll order the fluke now. I'll look at that clamp you mentioned later.


That's fine. Just make sure the DC current range is correct for our use and you should be good.

Just get yourself the clamp meter as well and you'll be set.


----------



## HyperMatrix

Miguelios said:


> KPE noods
> View attachment 2472270
> View attachment 2472271


Thanks for sharing! Any idea on thermal pad thickness? Any of the pads get damaged and need replacement after opening?


----------



## itssladenlol

HyperMatrix said:


> It depends on how it distributes power to the PCIe slot with one of the connectors missing. On the FTW3, using a 2-pin bios would drop PCIe slot usage down to about 66%. This bios has, I think, a 95 or 98W PCIe slot max limit. Normally you're looking at around 300W draw per connector, so you'd have 600W from just the two PCIe 8-pin connectors, plus whatever you get from the PCIe slot, whether that's the full 98W or 66% of max slot draw like the FTW3 does. Either way you'd be looking at roughly 666-700W, most likely around the 666W mark.
> 
> Can anyone pitch in with info on PCIe slot draw with the 1000W bios on 2-pin cards?


I'm probably capping it at 550W anyway.
More than enough for me.
The 1000W bios is a blessing: not being power limited on 2x8-pin, with the benefit of less coil whine.
All the 3x8-pin cards I had were like chainsaws in terms of coil whine (Strix, FTW3, Suprim).
My Galax 2x8-pin is dead silent.
Saw people with non-shunted 2x8-pins getting 15700 in Port Royal with the 1000W bios.


----------



## stryker7314

Falkentyne said:


> Welp the Kingpin card doesn't have any fuses on its strange looking shunts, at least. So that's something.


This is important, because if a fuse blows it can be replaced and the main components have a chance. Situation dependent, of course.

Though my KPE number is coming up, I may pass, because I'd rather waterblock the FTW3 and have protections in place, even though I may throw the 1000W bios on it. Have fans for the backplate; Alphacool waterblock coming in a few days.


----------



## GQNerd

HyperMatrix said:


> Thanks for sharing! Any idea on thermal pad thickness? Any of the pads get damaged and need replacement after opening?


Np

The pads look to be about 2mm; I can't accurately measure them. There's also some paste being used as well. Would really like to know what they use so I can order some in reserve.
I was able to open the card twice and haven't damaged any of the pads so far 🤞


----------



## dante`afk

HyperMatrix said:


> Haha. Still in the queue for a Hydrocopper version.


KPE Hydrocopper version? That isn't even listed?


----------



## HyperMatrix

dante`afk said:


> KPE hydrocopper version? this isnt even listed?


I realized after I posted that it’s the FTW3 HydroCopper I was on the list for and just hoped nobody would realize my mistake. Thanks man. 😂


----------



## ivans89

@jomama22 

I scored 21 727 in Time Spy (www.3dmark.com): Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
I scored 21 387 in Time Spy (www.3dmark.com): AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

The AMD score was with the vmod; the Intel one without...


----------



## lolhaxz

stryker7314 said:


> This is important, because if a fuse blows then it can be replaced and the main components have a chance. Situation dependant of course.
> 
> Though my KPE number is coming up I may pass because I rather waterblock the FTW3 and have protections in place even though I may throw the 1000w on it. Have fans for the backplate. Alphacool waterblock coming in a few days.


In this particular case, the purpose of the fuse(s) is not really to save the device - _generally_ it will not... it's to prevent it going up in flames because something continued to draw too much current.... while you can blow a fuse by marginally exceeding its spec, that's not the normal scenario in the case of typical devices like this, I would argue it normally blows because something failed and is drawing too much current (ie. already failed _causing_ the fuse to blow... ie. already too late.).


----------



## Sheyster

jomama22 said:


> Gpu bound scenario's, like 4k and such, more or less put everything within 1/2%, whether intel or amd come out on top, all the way down the product stack to 6 core CPUs.


Indeed. I was on the verge of pulling the trigger on a 5900x, the local Microcenter had a bunch of them on release day. Then I looked at some newer 4K gaming benchmarks and decided there was no point in upgrading.


----------



## Sheyster

Biscottoman said:


> Has anyone been able to improve his memory overclock with the new strix BIOS ?


What new Strix BIOS are you referring to?


----------



## mirkendargen

Sheyster said:


> What new Strix BIOS are you referring to?


https://dlcdnets.asus.com/pub/ASUS/Graphic Card/NVIDIA/BIOSUPDATE_TOOL/RTX3090/RTX3090_V2.zip I assume.

That's packaged in their updater tool; here it is dumped with NVflash after updating. I don't seem to be allowed to upload most file formats, so rename this to .zip.


----------



## bmgjet

You can just open their update exe file directly with WinRAR and extract all its contents, saving you from having to flash and dump it.


----------



## GzZ

Stock cooler, MSI Gaming X Trio with the 1000W bios, my best so far.


https://www.3dmark.com/pr/728482



Compared to the KPE 520W bios I was able to gain another 80 points.
Best I can do right now with my old 6700K...


----------



## WilliamLeGod

GzZ said:


> Stock Cooler, MSI Gaming X Trio with 1000w Bios, my best so far.
> 
> 
> https://www.3dmark.com/pr/728482
> 
> 
> 
> Compared to KPE 520w Bios I was able to get another 80 points.
> Best right now with my old 6700K...


Any issue with Fan speed and VRM heat using the 1000w bios?


----------



## jomama22

ivans89 said:


> @jomama22
> 
> I scored 21 727 in Time Spy (www.3dmark.com): Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> I scored 21 387 in Time Spy (www.3dmark.com): AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> The AMD score was with the vmod; the Intel one without...


V-mod? Like an EVC2?

So I'm guessing you use NVIDIA Control Panel and change the power and texture settings to performance? Let's try something:

Uninstall Afterburner and reset NVIDIA Control Panel to default for the 3D settings.
Reboot.
Start 3DMark, reinstall Afterburner, now set your settings.
Change NVIDIA Control Panel to what you had for 3D.
Run the bench, see if there is any change.


----------



## GzZ

WilliamLeGod said:


> Any issue with Fan speed and VRM heat using the 1000w bios?


Fans at 100%, 3200-3400 RPM.
Didn't check VRM heat.
Only use the 1000W bios for benchmarks.
KPE bios for daily.


----------



## stryker7314

lolhaxz said:


> In this particular case, the purpose of the fuse(s) is not really to save the device - _generally_ it will not... it's to prevent it going up in flames because something continued to draw too much current.... while you can blow a fuse by marginally exceeding its spec, that's not the normal scenario in the case of typical devices like this, I would argue it normally blows because something failed and is drawing too much current (ie. already failed _causing_ the fuse to blow... ie. already too late.).


I like those odds and the fact that the rest of my rig will be spared lol


----------



## Sheyster

bmgjet said:


> You can just open their update exe file directly with WinRAR and extract all its contents, saving you from having to flash and dump it.





mirkendargen said:


> https://dlcdnets.asus.com/pub/ASUS/Graphic Card/NVIDIA/BIOSUPDATE_TOOL/RTX3090/RTX3090_V2.zip I assume.
> 
> That's packaged in their uploader tool, here it is dumped with NVflash after updating. I don't seem to be allowed to upload most file formats so rename this to .zip


Thanks guys. Do we have any idea what this new Strix BIOS brings to the table? Last I heard ASUS only had some updates for fan curves on some cards.

EDIT - Found this on the ASUS support page:

Version V2
2020/12/14 10.4 MBytes
RTX3090 bios update tool

Further optimize the performance for 0dB fan feature
Fixed motherboard “beeping” bug during computer start-up

ROG-STRIX-RTX 3090-O24G-GAMING | Graphics Cards (rog.asus.com)


----------



## inedenimadam

What options do I have for BIOS flashing on a Zotac Trinity? As I understand it, the 1000W KPE BIOS should be compatible. Does this mean the more modestly limited KPE BIOSes should also be compatible? I was planning on doing shunts, but with it requiring so many, and the apparent ease of BIOS compatibility, flashing a different BIOS may make more sense for a gaming-only card on water. I would like to keep some minimum amount of thermal protection in place if possible. 

Thanks in advance, guys. I've been out of the loop for a while.


----------



## bwana

zAnimal said:


> So that's just with the power slider in AB, ya? What does the voltage slider do, like, how is it different than the power slider? Should I just crank both, and leave it, when trying to max out my oc?


This fellow explains how the various voltages can be achieved on a Kingpin (embedded video).

It looks like the PX1 slider (or the voltage slider in Afterburner) allows a voltage increase of 0.02V. Notice I said 'allows'. Whether the card will actually get that voltage (and what frequency it will therefore boost to) depends on a super-secret algorithm that seems to use temperature (anything over 40 degrees starts to cut voltage) and power limit, plus who knows what else.

The power slider allows a slight increase in power limit; for the stock bios it is like 104% of the max coded in the bios. Other bioses allow the power slider to read 110% or 120% at max. And of course with the Strix and EVGA cards there is a 1000W bios that will allow you to fry your card, since over-current protections are disabled.

EVGA has a 'Classified Tool' that allows access to the various voltages and switching frequencies. Although this was used and explained extensively in the 8-hour YouTube video where Vince, Jay and Steve overclocked their Kingpin cards, I do not know if it works for other cards. I don't even know what some of the parameters it changes do (switching frequency?), but it does allow something like vdroop compensation for CPU vCore.

In addition, there is something going on at high power limits that people report as instability. I think it refers to the fact that the card is boosting to higher voltages and frequencies so fast, and coming back down again, that you never see it registered in any of the tools we have. When the algorithm takes the card into this warp speed, it can crash, and you never really know what the values were when it did. Even using Afterburner's Ctrl+L feature to lock the frequency seems not to prevent this.

To get around this, people are using a command-line tool to limit the maximum clocks allowed; search for nvidia-smi and you'll find out how to use it. It's NVIDIA's tool to specifically lock the max frequency. At least that's what I've been able to understand, or perhaps misunderstand. I hope I am not bastardizing the truth, but hypermatrix, des2k.., falkentyne, or even shamino (who has posted in this thread) can correct me and I will edit this post.
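The nvidia-smi clock cap mentioned above can be scripted; a sketch, assuming the standard `-lgc min,max` (lock GPU clocks) and `-rgc` (reset) flags, which need an elevated prompt. The 210 MHz floor here is an arbitrary example value, not a recommendation:

```python
import subprocess

# Build the nvidia-smi commands that cap / restore the GPU core clock range.
# Building the argument lists separately makes them easy to inspect before
# actually running anything.

def lock_clocks_cmd(max_mhz: int, min_mhz: int = 210) -> list:
    """Command that locks GPU clocks to the given MHz range."""
    return ["nvidia-smi", "-lgc", "%d,%d" % (min_mhz, max_mhz)]

def reset_clocks_cmd() -> list:
    """Command that removes the clock lock."""
    return ["nvidia-smi", "-rgc"]

print(lock_clocks_cmd(1800))  # ['nvidia-smi', '-lgc', '210,1800']
# On a real system (run as admin):
# subprocess.run(lock_clocks_cmd(1800), check=True)
```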


----------



## Sygnano

itssladenlol said:


> Im probably capping it at 550w anyway.
> More than enough for me.
> 1000w bios is a blessing, Not being Power limited on 2x8pin with the Benefit of less coilwhine.
> All 3x8pin i had where like chainsaws in Terms of coilwhine (strix, FTW3,suprim)
> My galax 2x8pin is Dead silent.
> Saw people with non shunted 2x8pins getting 15700 Port royal with 1000w bios.


Could you share your settings ? I've been fiddling with the 1000w bios on my 2x8 PNY and I have yet to find a sweet spot. Having some refs would be very useful


----------



## asdkj1740

it seems 3090 aorus master rev2.0 is coming very soon, as 3080 master rev2.0 has already launched/updated.

What an extraordinary 2020: ****ed up the power extension cable, ****ed up the number of power connectors, ****ed up the caps combination (it has changed from 6 SP-Caps to 4 SP-Caps + 2 MLCC arrays recently).

Z490 weak, B550 sucks, B460 terrible. What else! Full of surprises!


----------



## EniGma1987

Miguelios said:


> KPE noods
> View attachment 2472270
> View attachment 2472271


I love how the internet was up in arms about "poscaps" being so bad and cards should not use them. Then the Kingpin edition comes absolutely covered in Poscaps


----------



## jomama22

bwana said:


> This fellow explains how the various voltages can be achieved on a Kingpin (embedded video).
> 
> It looks like the PX1 slider (or the voltage slider in Afterburner) allows a voltage increase of 0.02V. Notice I said 'allows'. Whether the card will actually get that voltage (and what frequency it will therefore boost to) depends on a super-secret algorithm that seems to use temperature (anything over 40 degrees starts to cut voltage) and power limit, plus who knows what else.
> 
> The power slider allows a slight increase in power limit; for the stock bios it is like 104% of the max coded in the bios. Other bioses allow the power slider to read 110% or 120% at max. And of course with the Strix and EVGA cards there is a 1000W bios that will allow you to fry your card, since over-current protections are disabled.
> 
> EVGA has a 'Classified Tool' that allows access to the various voltages and switching frequencies. Although this was used and explained extensively in the 8-hour YouTube video where Vince, Jay and Steve overclocked their Kingpin cards, I do not know if it works for other cards. I don't even know what some of the parameters it changes do (switching frequency?), but it does allow something like vdroop compensation for CPU vCore.
> 
> In addition, there is something going on at high power limits that people report as instability. I think it refers to the fact that the card is boosting to higher voltages and frequencies so fast, and coming back down again, that you never see it registered in any of the tools we have. When the algorithm takes the card into this warp speed, it can crash, and you never really know what the values were when it did. Even using Afterburner's Ctrl+L feature to lock the frequency seems not to prevent this. To get around this, people are using a command-line tool to limit the maximum clocks allowed; search for nvidia-smi and you'll find out how to use it. It's NVIDIA's tool to specifically lock the max frequency. At least that's what I've been able to understand, or perhaps misunderstand.


The voltage slider just allows the option of the next step in the voltage/frequency curve to be used, if power allows. At +0 it will use the second-to-highest voltage/frequency step, which could be whatever you set it to. As an example, if you were to set [email protected]/1.068, 2050/1.075-1.093 and [email protected], +0 would use 1.075v and +100 would use [email protected]; if you instead set [email protected]/1.075, 2050/1.081-1.093 and [email protected], +0 would use 1.081 and +100 would use [email protected]
Again, this depends on the power limit not being reached. If you are hitting the power limit, this does not apply.

The power limit percentages do not correspond to a fixed power limit. They are merely ratios of whatever the bios' default 100% power limit is. 100% on a Strix != 100% on an FTW3.

The Classified Tool will only work on the Kingpin card.

Switching frequency is the PWM frequency, in Hz, that the VRMs (more specifically, the MOSFETs) use when operating. A higher frequency can give you better transient response (lower peak-to-peak voltage during load transitions).

LLC is responsible for the voltage droop you are referring to. It changes the spread between idle and load voltage depending on the current the VRMs are outputting between those scenarios.

nvidia-smi is just a driver configuration tool. You can do much more with it than set a maximum frequency, but for our purposes, yeah, that's what it's used for.


----------



## jomama22

EniGma1987 said:


> I love how the internet was up in arms about "poscaps" being so bad and cards should not use them. Then the Kingpin edition comes absolutely covered in Poscaps


They are being used for input/output filtering for the VRMs, where they are completely suitable. Replacing them with MLCCs would be hilariously expensive with little to no gain in performance or stability.

The only caps worth comparing for VRM input/output filtering are POSCAPs and through-hole electrolytics, and POSCAPs are preferred.


----------



## Spiriva

Sygnano said:


> Could you share your settings ? I've been fiddling with the 1000w bios on my 2x8 PNY and I have yet to find a sweet spot. Having some refs would be very useful


On a 2x8 pin 3090 with the 1000w XOC bios, the slider in exemple msi afterburner could be set to the following:

Default power limit 1000W - 30.4%= 696W

696W = 100% (for a 2x8-pin card this is the max it can pull with this bios; I would not use 100% on my card)
550W = 79%
500W = 71% (here is probably where I would be)
450W = 64%
400W = 57%

Of course, use at your own risk.
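
The arithmetic above can be sketched in Python. This follows the post's numbers: it treats 696W (the 1000W default derated by 30.4% for a 2x8-pin card) as the effective 100% point and truncates to a whole percent, which reproduces the table:

```python
# Map a target board power to an Afterburner slider percentage,
# following the post above: the 1000W XOC bios ceiling is derated
# by 30.4% on a 2x8-pin card, giving a ~696W effective 100% point.
DEFAULT_LIMIT_W = 1000
DERATE = 0.304                                     # 30.4% off for 2x8-pin
EFFECTIVE_MAX_W = DEFAULT_LIMIT_W * (1 - DERATE)   # ~696W

def slider_percent(target_w: float) -> int:
    """Slider %, truncated to a whole number as in the table."""
    return int(target_w / EFFECTIVE_MAX_W * 100)

for w in (696, 550, 500, 450, 400):
    print(f"{w}W = {slider_percent(w)}%")
```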


----------



## Slackaveli

jomama22 said:


> You can look at my TS graphics score on the HOF with my 5950x @4.8ghz and compare against 5.6ghz 10900k's. Same with the graphics score in TSE.
> 
> I'm not sure what gaming benchmarks you are referring to? Can you link?
> 
> GPU-bound scenarios, like 4k and such, more or less put everything within 1/2%, whether intel or amd come out on top, all the way down the product stack to 6 core CPUs.


I mean, even if you are technically correct today, the 11900k is still going to wreck it next month. Double the L2 cache, a 20% IPC bump, and likely faster clocks, at least clock parity with Comet Lake. The 11900k is going to be a turbo-charged racecar of a chip.


----------



## Pandora's Box

asdkj1740 said:


> it seems 3090 aorus master rev2.0 is coming very soon, as 3080 master rev2.0 has already launched/updated.
> 
> what an extraordinary 2020, ****ed up the power extension cable, ****ed up the number of power connectors, ****ed up the caps combination (has changed from 6sp to 4sp+2mlcc array recently).
> 
> z490 weak, b550 suck, b460 terrible. what else! full of surprises!


Gigabyte have been doing rev 2.0/3.0 on their cards for years; some revisions shift from the reference design to a custom PCB with cheaper parts. I avoid Gigabyte like the plague.


----------



## jomama22

Slackaveli said:


> I mean, even if you are technically correct today, the 11900k is still going to wreck it next month. Double the L2 cache, a 20% IPC bump, and likely faster clocks, at least clock parity with Comet Lake. The 11900k is going to be a turbo-charged racecar of a chip.


Ok?


----------



## bogdi1988

Slackaveli said:


> I mean, even if you are technically correct today, the 11900k is still going to wreck it next month. Double the L2 cache, a 20% IPC bump, and likely faster clocks, at least clock parity with Comet Lake. The 11900k is going to be a turbo-charged racecar of a chip.





jomama22 said:


> Ok?


@jomama22 I think @Slackaveli means that Intel 11900k is going to wreck itself. Really why would anyone want to buy old recycled designs over what AMD is doing?


----------



## rocklobsta1109

and all this on the 14nm process, it will only pull like 300 watts...


----------



## geriatricpollywog

bogdi1988 said:


> @jomama22 I think @Slackaveli means that Intel 11900k is going to wreck itself. Really, why would anyone want to buy old recycled designs over what AMD is doing?


Because the 11900k will have higher FPS averages in games and will not BSOD at stock clocks.


----------



## BiLLbOuS

Is it me or is there no MLCC count for the 3090 KPE?

edit: stoopid, thanks for post *Miguelios*


----------



## Falkentyne

BiLLbOuS said:


> Is it me or is there no MLCC count for the 3090 KPE?
> 
> edit: stoopid


Any of you guys have Path of Exile?

Anyone here getting power limited instantly (with vsync off) on shunt modded and any other 3090, except the Kingpin 1000W Vbios?
This game seems to be triggering some internal power limit instantly. It's not one of the main rails. I saw this as soon as the game loaded: the game was drawing 515W, triggering a power-limit downclock of core and voltage, but none of the main power rails were at their limit, so the power draw kept increasing as temps went up (rising slowly to 520, 525, 535, up to 570W before it capped out when 8 pin #1 reached 175W, which is its power cap per GPU-Z, and total power draw reached 390W in GPU-Z, very close to the 400W max reading).

I noticed that Power Plane Input Source (SRC) power drops DRASTICALLY when this game is running....down to like 24W! When I alt tab, it goes back up to 45W...
The other programs that trigger a "Normal" power limit, like Heaven (never reaches max TDP), Port Royal (never reaches it), or Time Spy (reaches it on that weird double instance on GT2) don't cause SRC to drop like this.

Can someone with a Kingpin card or the 1000W bios run Path of Exile also and tell me if your SRC drops super low (compared to anything else?).


----------



## Slackaveli

0451 said:


> Because the 11900k will have higher FPS averages in games and will not BSOD at stock clocks.


I'm not sure why people think AMD's 20% IPC gain from Zen 2 to Zen 3 on the same node was revolutionary, but Intel's 20% gain from Comet Lake to Rocket Lake on the same node is "recycled". Well, actually I do, it's called fanboyism.

I'll gladly take my 20% bump for a $400 chip and resell my Comet Lake for $300+. It's basically a free upgrade. Added bonus is you'll rule the benchmarks with a "recycled" node. LOL!


----------



## geriatricpollywog

Slackaveli said:


> I'm not sure why people think AMD's 20% IPC gain from Zen 2 to Zen 3 on the same node was revolutionary, but Intel's 20% gain from Comet Lake to Rocket Lake on the same node is "recycled". Well, actually I do, it's called fanboyism.
> 
> I'll gladly take my 20% bump for a $400 chip and resell my Comet Lake for $300+. It's basically a free upgrade. Added bonus is you'll rule the benchmarks with a "recycled" node. LOL!


Yeah, Intel resale value is ridiculous. I sold my 7700k for $270 in July and I should get at least $300 for my 10700k. Intel may have fallen behind in some ways, but when you see "Intel Inside" or "Genuine Intel" you know the product has been extensively tested and all the bugs have been worked out. I've run bargain brands like Cyrix and AMD but the experience was always broken in some way.


----------



## Slackaveli

Yep. I just sold my old 5775-c for $300.


----------



## Falkentyne

Anyone have the skills to determine what internal power rail Path of Exile is flagging?
Is it one of the "AUX" rails?

Take a look at the SRC.
This is Path of Exile when I'm alt tabbed out. SRC is about 40W here. Notice I'm still at VREL.
This is where the mouse cursor was when I took the SS, so you can see the point on the graph and look up and down vertically aligned. 39.5W SRC here. POE minimized.

Here is what happens when I alt tab into the game.

SRC instantly drops drastically. It's at 12.6W where the arrow is drawn, and keeps rising slowly as the 8 pins, PCIE, and chip power keep increasing. Board power draw is only about 363W as GPU-Z reports it here (real power was about 525W); 8 pin #1 was at 161W, 8 pin #2 at 147W, slot at 52W, and chip power at 255W. They just kept rising up to the "max" values shown for them in the end, when I alt-tabbed again. 8 pin #1 capped out at its max power limit of 175.7W, with total board power at 393W when the power draw stopped climbing (not bad balance there, only lost 7 total watts). Notice there was a full fat green throttle bar throughout the entire session. What's even stranger is that SRC voltage started at 135V!!! Yes, 135V. And it just slowly dropped, showing as 21.4V before the alt-tab (this HAS to be a glitch or a false reading by GPU-Z btw; there's nothing corresponding to it in HWiNFO64).

I'll let you pros chew your brains out with this. Happy new year.


----------



## rocklobsta1109

Falkentyne said:


> Any of you guys have Path of Exile?
> 
> Anyone here getting power limited instantly (with vsync off) on shunt modded and any other 3090, except the Kingpin 1000W Vbios?
> This game seems to be triggering some internal power limit instantly. It's not one of the main rails. I saw this as soon as the game loaded. Game was drawing 515W, triggering power limit downclock of Core and Voltage, but none of the main power rails were at their limit, so the power draw kept increasing as temps went up....(kept rising slowly to 520, 525, 535, and got up to 570W before it capped out when 8 pin #1 reached 175W, which is its power cap (GPU-Z), and total power draw reached 390W (GPU-Z), very close to the 400W max reading.
> 
> I noticed that Power Plane Input Source (SRC) power drops DRASTICALLY when this game is running....down to like 24W! When I alt tab, it goes back up to 45W...
> The other programs that trigger a "Normal" power limit, like Heaven (never reaches max TDP), Port Royal (never reaches it), or Time Spy (reaches it on that weird double instance on GT2) don't cause SRC to drop like this.
> 
> Can someone with a Kingpin card or the 1000W bios run Path of Exile also and tell me if your SRC drops super low (compared to anything else?).


It has something to do with the Shadows or Shadows + Global Illumination option. My power draw goes through the roof with global illumination enabled and then drops to like 175-180W when it's off. The shadows settings overall do something weird to the load on the GPU, making it go nuts.


----------



## jomama22

Slackaveli said:


> Im not sure why people think AMD's 20% ipc gain from zen2 to zen3 on the same node was revolutionary but Intel's gain of 20% from Comet Lake to Rocket Lake on teh same node is "recycled". Well, actually I do, it's called fanboyism.
> 
> l'll gladly take my 20% bump for a $400 chip and resell my comet lake for $300+ . I'ts basically a free upgrade. Added bonus is you'll rule the benchmarks with a "recycled" node. LOL!


Personally, I don't really care what brand I get so long as it fills my needs. I ran an intel 3960x for 8 years then grabbed the 5950x with a 3090 as it did just that.

Your comments are ironic; you throw shade at people for being fanboys while it's quite clear that's what you are.

I don't care what anyone prefers or uses; it has no bearing on me or my decisions. I buy what I want and need and am done with it. If you prefer Intel and hate AMD? That's cool. Vice versa? That's cool.

You're the one who stumbled in here, made a completely irrelevant comment about next-gen Intel, and then got hyped up when fanboys on the other side of the fence said something.

Pretty funny tbh.


----------



## Nizzen

I want the fastest, and then overclock it.
New season, then repeat 

I don't care about color, rgb or name


----------



## mirkendargen

0451 said:


> ...when you see "Intel Inside" or "Genuine Intel" you know the product has been extensively tested and *all the bugs have been worked out*.


Uhhh....not gonna fanboy here, but that statement is hilariously incorrect to the point I'm not sure you aren't being sarcastic...

You can make an argument that Intel processors are better for gaming than Ryzen processors, but bug free? Lol...


----------



## Falkentyne

rocklobsta1109 said:


> It has something to do with the Shadows or Shadows + global illuminations option. My power draw goes through the roof with global illumination enabled and then down to like 175-180w when its off. The shadows settings overall do something weird to the load on the GPU making it go nuts.


YOU MY FRIEND ARE A GENIUS!

It is EXACTLY that setting!
With Shadows+GI set to "high", my video card performs like a video card!
If I set it to Ultra... it performs like a three-legged horse trying to climb Mt. Everest! (OK, it draws more power, but it hits some strange internal power limit instantly and SRC drops drastically.)


----------



## BiLLbOuS

Falkentyne said:


> Anyone have the skills to determine what internal power rail Path of Exile is flagging?
> Is it one of the "AUX" rails?
> 
> Take a look at the SRC.
> This is Path of Exile when I'm alt tabbed out. SRC is about 40W here. Notice I'm still at VREL.



5950x/3090 Kingpin: when I sit at the desktop it's 83, and it goes down to 38 in game.

EDIT: adjusted same setting A++


----------



## geriatricpollywog

mirkendargen said:


> Uhhh....not gonna fanboy here, but that statement is hilariously incorrect to the point I'm not sure you aren't being sarcastic...
> 
> You can make an argument that Intel processors are better for gaming than Ryzen processors, but bug free? Lol...


So what’s a few security flaws that require performance reducing microcode updates?


----------



## bmgjet

0451 said:


> So what’s a few security flaws that require performance reducing microcode updates?


Lol, Intel spun them very well in their advertising.
They lost 10% IPC from the patches, then advertise a 25% IPC increase next gen with hardware security fixes.
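
Taking those marketing numbers at face value (and assuming, purely as an illustration, that the +25% is measured against the already-patched parts), the compounding works out to a much smaller net gain:

```python
# Illustrative compounding of the numbers above: -10% IPC from the
# mitigation patches, then a claimed +25% for the next generation,
# assumed here to be measured against the patched baseline.
patched = 1.00 * (1 - 0.10)   # 0.90x of the original, unpatched IPC
next_gen = patched * 1.25     # 1.125x of the original
print(f"net gain over the unpatched chip: {next_gen - 1.0:+.1%}")
```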


----------



## Slackaveli

jomama22 said:


> Personally, I don't really care what brand I get so long as it fills my needs. I ran an intel 3960x for 8 years then grabbed the 5950x with a 3090 as it did just that.
> 
> Your comments are ironic; you throw shade at people for being fanboys while it's quite clear that's what you are.
> 
> I don't care what anyone prefers or uses; it has no bearing on me or my decisions. I buy what I want and need and am done with it. If you prefer Intel and hate AMD? That's cool. Vice versa? That's cool.
> 
> You're the one who stumbled in here, made a completely irrelevant comment about next-gen Intel, and then got hyped up when fanboys on the other side of the fence said something.
> 
> Pretty funny tbh.


How am I a fanboy lol? Psssht. tf outta here with that. If AMD had been superior when I was building, I would have bought one, but Comet Lake destroyed Zen 2. Why would I switch a complete platform in a lateral move instead of waiting, on the same mobo, for Rocket Lake to drop in? That's fanboyism?


----------



## jomama22

0451 said:


> So what’s a few security flaws that require performance reducing microcode updates?


If you genuinely care about the performance degradation for Intel and AMD from the speculative-execution vulnerability mitigations, I suggest you read these.

Comparison of 9th gen Intel and Zen 2:
Spectre Mitigation Performance Impact Benchmarks On AMD Ryzen 3700X / 3900X Against Intel - Phoronix (www.phoronix.com)

For Zen 3:
The Spectre Mitigation Performance Impact On AMD Ryzen 5000 "Zen 3" Processors - Phoronix (www.phoronix.com)

Intel 10th gen:
The Ongoing CPU Security Mitigation Impact On The Core i9 10900K Comet Lake - Phoronix (www.phoronix.com)

You will find Intel suffers much more, as no real mitigations were made at an architectural level until this upcoming generation.

Both suffer regressions because of these mitigations. That was bound to happen when you have to protect a massive part of what makes an architecture efficient at higher IPC: the instruction pipeline and predictive fetch and execution.


----------



## jomama22

Slackaveli said:


> How am I a fanboy lol? Psssht. tf outta here with that. If AMD had been superior when I was building, I would have bought one, but Comet Lake destroyed Zen 2. Why would I switch a complete platform in a lateral move instead of waiting, on the same mobo, for Rocket Lake to drop in? That's fanboyism?


Your comments speak for themselves. No need to spar with you.


----------



## dr/owned

Sheyster said:


> Thanks guys, do we have any idea what this new Strix BIOS brings to the table? Last I heard ASUS only had some updates for fan curves on some cards.


I don't know exactly when, but they increased the 8-pin power limits (leaving total board power the same), which gave me higher effective power usage, i.e. sustaining higher voltage/frequency.



bmgjet said:


> You can just open their update exe file straight with WinRAR and get all its contents, to save you having to flash and pull it.


They don't open in ABE though ("Unsupported Device?"). I haven't had coffee yet so maybe I'm being an idiot.


----------



## Falkentyne

rocklobsta1109 said:


> It has something to do with the Shadows or Shadows + global illuminations option. My power draw goes through the roof with global illumination enabled and then down to like 175-180w when its off. The shadows settings overall do something weird to the load on the GPU making it go nuts.





BiLLbOuS said:


> 5950x/3090 kingpin when i sit at desktop its 83 and goes down to 38 in game
> 
> EDIT: adjusted same setting A++


So this game actually made my 3090 FE _black screen_ with BEYOND 100% fans (!!!) several times from that GI Ultra setting + super high power draw! Like something got overloaded, almost like thermal protection. And when I say 100% fans, the fans were actually spinning FASTER than the "100%" speed you can set in MSI Afterburner! Sounded like the fans were at 3000 RPM (max fan speed is 2600). Temps were about 78C when this happened. I had the GPU chip crash before from a too-high overclock in Watch Dogs Legion at +180 on the core, but the card didn't go to 100% fan... it just hard-locked with the screen frozen (the card wasn't even past 65C).

The first black screen+screaming fans was at +150 core.
Then I lowered the memory overclock to +250, no effect. So it wasn't memory.

I tried core +135 and then +120 and both "eventually" black screened with a screaming 100% fan. It didn't seem to do it at +105, but I closed the game out because I got scared when the core temp reached 77C and power draw was SO HIGH that 8 pin #1 reached 177W! I've never seen the 8 pin go above 175W before...
I also KNOW it's not JUST the core temp, because I've had the GPU at 82C before playing Overwatch at 4k messing around with max TDP (it was throttling, but still).

I wonder if this game triggered a failsafe protection trip (black screen + SCREAMING fans is usually a GPU emergency shutdown, like if your GPU gets to 100C or the heatsink isn't making proper contact with the GPU core or something).

Yeah. No more "Ultra" GI shadows for me. Leaving it on High now.

Here's what it got to on the +105 core run.

Be careful with the "Ultra" GI Shadows setting, guys...


----------



## mirkendargen

0451 said:


> So what’s a few security flaws that require performance reducing microcode updates?


A few off the top of my head:

Pentium FDIV bug - Wikipedia (en.wikipedia.org)

Spectre (security vulnerability) - Wikipedia (en.wikipedia.org)

Meltdown (security vulnerability) - Wikipedia (en.wikipedia.org)

5 years of Intel CPUs and chipsets have a concerning flaw that's unfixable
Converged Security and Management Engine flaw may jeopardize Intel's root of trust. (arstechnica.com)

I'm sure there are more, and when did the goalpost move from "Intel releases bug-free processors" to "Intel can patch their released bugs without performance impact" anyway?


----------



## bmgjet

dr/owned said:


> They don't open in ABE though "Unsupported Device?". I haven't had coffee yet so maybe I'm being an idiot.



Get the latest version of ABE 006:

File on MEGA (mega.nz)

That should open them.


----------



## rocklobsta1109

Falkentyne said:


> So this game actually made my 3090 FE _black screen_ with BEYOND 100% fans (!!!) several times from that GI Ultra setting+Super high power draw! Like something got overloaded or something almost like thermal protection. And when I mean 100% fans, the fans were actually spinning FASTER Than the "100%" speed you can set in MSI Afterburner! Sounded like the fans were at 3000 RPM (max fan speed is 2600). Temps were about 78C when this happened. I had the GPU chip crash before from a too high overclock, in Watch Dogs Legion...+180 on the core, but the card didn't 100% fan.....it just hard locked with the screen frozen (the card wasn't even past 65C).
> 
> The first black screen+screaming fans was at +150 core.
> Then I lowered the memory overclock to +250, no effect. So it wasn't memory.
> 
> I tried core +135 and then +120 and both "eventually" black screened with screaming 100% fan. It didn't seem to do it at +105, but I closed the game out because I got scared, when the core temp reached 77C and power draw was SO HIGH that 8 pin #1 reached 177W! I've never seen the 8 pin go above 175W before...
> I also KNOW it's not JUST the core temp because I had the GPU at 82C before playing Overwatch at 4k messing around with max TDP (it was throttling but still).
> 
> I wonder if this game triggered a failsafe protection trip (black screen + SCREAMING fans is usually a GPU emergency shutdown, like if your GPU gets to 100C or you didn't get proper heatsink contact on the GPU core or something)
> 
> Yeah. No more "Ultra" GI shadows for me. Leaving it on High now.
> 
> Here's what it go to on the +105 core run.
> 
> [SNIP}
> 
> Suggest you guys be careful about the "Ultra" GI Shadows setting, guys...


My 390W-BIOS-flashed card touched at least 405 watts with global illumination on. It's the only game I've played, Cyberpunk 2077 included, that made it meaningfully overshoot the max TDP.


----------



## Nizzen

Get a room or get on topic


----------



## rocklobsta1109

Hey man its 3090 related


----------



## dr/owned

bmgjet said:


> Get the latest version of ABE 006:
> 
> File on MEGA (mega.nz)
> 
> That should open them.


Yup, working. 

Didn't see anything particularly interesting, except that the file includes both TUF and Strix BIOSes, and I think both performance and quiet versions... and a few revisions.

No secret ones where they mistyped a power limit as 1000.


----------



## Nizzen

rocklobsta1109 said:


> Hey man its 3090 related


Not you


----------



## stryker7314

EniGma1987 said:


> I love how the internet was up in arms about "poscaps" being so bad and cards should not use them. Then the Kingpin edition comes absolutely covered in Poscaps


Yep the KPE is looking less and less appealing...


----------



## bmgjet

stryker7314 said:


> Yep the KPE is looking less and less appealing...


----------



## jomama22

dr/owned said:


> Yup, working.
> 
> Didn't see anything particularly interesting except the file includes both TUF and Strix bios's and I think both performance and quiet versions...and a few revisions.
> 
> No secret ones where they mistyped a power limit to 1000


Lol if only.

I am a bit concerned by the reduction in memory power allowance. I'm not 100% sure whether a shunt is used for it on the Strix or not, and during my testing I have seen MVDDC get as high as 85W at a +1500 memory offset. Once the EVC2 is attached, I'll see what happens when touching up the memory voltage with respect to MVDDC.


----------



## Falkentyne



bmgjet said:


> Lol intel spinned them very well in there advertising.
> Lost 10% IPC from patches. Then they advertise next gen 25% IPC increase with hardware security fixes.


Hey Bmgjet I just wanted to say thank you for all the help you've done for the community. I really hope your work gets more appreciation, because you sure deserve it. Happy new year!


----------



## bmagnien

stryker7314 said:


> This is important, because if a fuse blows then it can be replaced and the main components have a chance. Situation dependent, of course.
> 
> Though my KPE number is coming up, I may pass, because I'd rather waterblock the FTW3 and have protections in place, even though I may throw the 1000w bios on it. Have fans for the backplate. Alphacool waterblock coming in a few days.


Where have you ordered an Alphacool FTW3 block that's coming in a few days, may I ask? I preordered from Performance-PCs and from Aquatuning, but everything on those sites just gives vague references to being available in the second week of January, which I'm skeptical of. Thanks!


----------



## HyperMatrix

bogdi1988 said:


> @jomama22 I think @Slackaveli means that Intel 11900k is going to wreck itself. Really why would anyone want to buy old recycled designs over what AMD is doing?


I am. I cancelled my 5950x preorder to get the 11900K. It’s going to be the fastest clock and highest IPC cpu out there and give the highest gaming performance at 4K. Why wouldn’t I want it?



rocklobsta1109 said:


> and all this on the 14nm process it will only pull like 300watts...


If I cared about power efficiency I wouldn’t have a 3090. Lol. The only reason people made fun of AMD power usage before is because the power usage was high, but the performance wasn’t there. If performance is there, who cares about power efficiency? Not many in this thread according to the interest in a 1000W bios.


----------



## des2k...

Falkentyne said:


> YOU MY FRIEND ARE A GENIUS!
> 
> It is EXACTLY that setting!
> With Shadows+GI set to "high", my video card performs like a video card!
> If I set it to Ultra... it performs like a three-legged horse trying to climb Mt. Everest! (OK, it draws more power, but it hits some strange internal power limit instantly and SRC drops drastically.)


what game ?


----------



## dante`afk

path of exile

can confirm; with GI on ultra it literally cuts my fps in half and triggers the power perfcap.


----------



## dante`afk

Btw, Wallpaper Engine with certain wallpapers also triggers vRel, vOp, and sometimes even the power limit (high quality 60fps settings).


----------



## des2k...

Well, I tried it. I have a 63fps cap for my 65Hz 4K display; it was only 300W with my 2130 MHz / 1050mV OC.

With no FPS cap (no idea of the FPS, since the performance monitor decided not to work), power is about the same as 3DMark TSE GT2: 584 watts.


----------



## jomama22

des2k... said:


> well I tried it, I have 63fps cap for my 65hz 4k display, was only 300w with my 2130,1050mv OC
> 
> no FPS cap (no idea of the FPS since performance monitor decided not to work), power is about 3dmark TE GT2, 584watts


Yeah, for TSE I think the peak I have seen from GT2 when testing my loop was [email protected] 2190. Average was probably more around 590 or so.


----------



## Thanh Nguyen

1000w bios, 60% power slider. 1081mV / 2205MHz curve. 100% fan speed and my PC is about to go airborne. Bad or OK, guys?








I scored 15 020 in Port Royal

Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


----------



## geriatricpollywog

Thanh Nguyen said:


> 1000w bios, 60% power slider. 1081mv-2205mhz curve. Bad or ok guys?
> 
> I scored 15 020 in Port Royal
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


There might be something holding back performance. CPU and memory averages are the same, but there is a 200+ point difference. It might have to do with AIO vs air cooling.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


----------



## Exilon

HyperMatrix said:


> If I cared about power efficiency I wouldn’t have a 3090. Lol. The only reason people made fun of AMD power usage before is because the power usage was high, but the performance wasn’t there. If performance is there, who cares about power efficiency? Not many in this thread according to the interest in a 1000W bios.


You're right, it's overclock.net, not energystar.gov. Plus the 250W is only going to show up in heavily vectorized benchmarks, and the power difference in web and gaming loads is going to be 20-30W, just like Skylake vs Zen 2.

That said, the 3090's surge power is quite something, especially on the EVGA 3090 FTW3. There's a whole 17-page discussion on the EVGA forums about 750-850W PSUs from 3-5 years ago shutting down under gaming loads from the 440W 3090. Having to crowdsource a list of working PSUs because no one tested these kinds of transients before kind of sucks for the consumer.


----------



## DrunknFoo

Falkentyne said:


> So this game actually made my 3090 FE _black screen_ with BEYOND 100% fans (!!!) several times from that GI Ultra setting+Super high power draw! Like something got overloaded or something almost like thermal protection. And when I mean 100% fans, the fans were actually spinning FASTER Than the "100%" speed you can set in MSI Afterburner! Sounded like the fans were at 3000 RPM (max fan speed is 2600). Temps were about 78C when this happened. I had the GPU chip crash before from a too high overclock, in Watch Dogs Legion...+180 on the core, but the card didn't 100% fan.....it just hard locked with the screen frozen (the card wasn't even past 65C).
> 
> The first black screen+screaming fans was at +150 core.
> Then I lowered the memory overclock to +250, no effect. So it wasn't memory.
> 
> I tried core +135 and then +120 and both "eventually" black screened with screaming 100% fan. It didn't seem to do it at +105, but I closed the game out because I got scared, when the core temp reached 77C and power draw was SO HIGH that 8 pin #1 reached 177W! I've never seen the 8 pin go above 175W before...
> I also KNOW it's not JUST the core temp because I had the GPU at 82C before playing Overwatch at 4k messing around with max TDP (it was throttling but still).
> 
> I wonder if this game triggered a failsafe protection trip (black screen + SCREAMING fans is usually a GPU emergency shutdown, like if your GPU gets to 100C or you didn't get proper heatsink contact on the GPU core or something)
> 
> Yeah. No more "Ultra" GI shadows for me. Leaving it on High now.
> 
> Here's what it go to on the +105 core run.
> 
> 
> Suggest you guys be careful about the "Ultra" GI Shadows setting, guys...


Are you using DX11 or Vulkan? I haven't played since Harvest, and wasn't a fan of Vulkan with the 2080 Ti.


----------



## dr/owned

dante`afk said:


> path of exile
> 
> can confirm with GI on ultra, literally cuts my fps to half and triggers power perfcap.


I messed around with ray tracing settings in Fortnite... pretty much anything RTX is just an FPS murderer. At 1440p it can only maintain 144fps if I keep global illumination and shadows off. Reflections can go up to medium, but that's it. Ray tracing is still basically "not happening for 2 more generations". Global illumination set to low cuts the FPS down to like 40fps.


----------



## HyperMatrix

dr/owned said:


> I messed around with ray tracing settings in Fortnite...pretty much anything RTX is just a FPS murderer. 1440p it can only maintain 144fps if I keep global illumination and shadows off. Reflections can go up to medium but that's it. Ray tracing is still basically "not happening for 2 more generations". Global Illumination set to low cuts the FPS down to like 40fps.


RTX with DLSS Quality is probably 2 generations away at 4K60 with effects more meaningfully enabled. Right now RTX is a hack job. A lot of features disabled. Blending SSR with RT. Limiting range and bounces. Etc etc. RT is super expensive. But it’s also super worth it. Here’s hoping the move to 5nm next gen is used to push more performance and not to pull back power consumption.


----------



## Nico67

Try BDO Online if you want a power-hungry game that will shut down your system. It's my go-to stability test now: just running around on Remastered will not quite power cap at 65%, but do a solo dark rift and it'll pull around 520W real at 1980 MHz, 1.03V. My PC will actually shut down on one of them consistently, yet be fine everywhere else. Not surprising, as it had issues with my last 2080 Ti as well.
I think badly coded games, or some game settings, really show up badly with 3000 series cards.


----------



## Exilon

Nico67 said:


> My PC will actually shut down on one of them consistently, yet be fine everywhere else. Not surprising, as it had issues with my last 2080 Ti as well.


What PSU are you running? I can get my PC to shutdown if I lift the power limit to 500W on my Ultra FTW3 without undervolting.

Currently it seems that [email protected] is the limit on my GPU to be stable in all stress tests at 4K, resulting in 420W max power in Time Spy Extreme stress test loop. I can probably get it to hit [email protected] but I'm waiting for my waterblock to try it.


----------



## Nico67

Exilon said:


> What PSU are you running? I can get my PC to shutdown if I lift the power limit to 500W on my Ultra FTW3 without undervolting.
> 
> Currently it seems that [email protected] is the limit on my GPU to be stable in all stress tests at 4K, resulting in 420W max power in Time Spy Extreme stress test loop. I can probably get it to hit [email protected] but I'm waiting for my waterblock to try it.


1000W Seasonic Titanium Ultra; seems Seasonic might have an aggressive overcurrent trip. A Silverstone SFX-L800 Titanium would last 15 mins in BDO before rebooting on the 2080 Ti, so I don't think it's PSU quality, just an aggressive current trip.


----------



## Falkentyne

Nico67 said:


> Try BDO online if you want a power-hungry game that will shut down your system. It's my go-to stability test now; just running around on remastered won't quite power cap at 65%, but do a solo dark rift and it'll pull around 520w real at 1980 1.03v. My PC will actually shut down on one of them consistently yet be fine everywhere else. Not surprising, as it had issues with my last 2080Ti as well.
> I think badly coded games or some game settings really trip up 3000 series cards.


Path of Exile with Vsync off and Global Illumination + Shadows set to Ultra with modded cards/shunts/vbioses _will_ shut down your system. Try it. 1000W vBios owners had better be careful... real Kingpins might have no problem, but if you're using another card with the 1000W Kingpin bios, you had better say your prayers...


----------



## Chamidorix

Regarding the temperature protection on the XOC bios with the KPE: just like with the 2080 Ti (https://xdevs.com/guide/2080ti_kpe/#biost), it appears the actual bios switch position is also queried to determine whether temp protections are turned on/off (just like the OCP only disabling while on the LN2 switch, per BMJet's dump a while back).

Have a buddy who does LN2 who says the XOC bios on the Normal or OC switch trips when the temp sensor flips. It's only disabled with the XOC bios on the LN2 bios switch.

Would be great to get an I2C dump confirming/denying this, haven't had time to get to it with the busy holidays.

But if so should mean the 1000W bios on OC bios slot should be quite safe for daily use, on kingpin cards anyways.


----------



## Exilon

Nico67 said:


> 1000w Seasonic Titanium Ultra; seems Seasonic might have an aggressive overcurrent trip. The Silverstone SFX-L800 Titanium would last 15 mins in BDO before rebooting on the 2080Ti, so I don't think it's PSU quality, just an aggressive current trip.


Yeah, it's the overcurrent trip. A normal 3090 would be fine but the 440W default power target Ultra FTW3 lets the voltage ramp too high and the PSU trips as well.
When did you get the PSU? It's the SSR-1000TR right? 
Per the EVGA FTW3 thread, some people have gotten cross-ship RMAs with SeaSonic for their older SSR-*TD and SSR-*TR models to address this problem. One person got their SSR-850TR swapped out with a TX-850 and no longer suffers shutdowns, but the model name for the "TX-850" still seems to be SSR-850TR so Seasonic probably has multiple revisions with the same name...


----------



## Zogge

Thanh Nguyen said:


> 1000w bios, 60% power slider. 1081mv-2205mhz curve. 100% fan speed and my PC is about to go airborne. Bad or ok guys?
> 
> I scored 15 020 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


My 3DMark score of 15084 was with the Strix, 500w bios. Core 2220, avg 2175 and mem +1200. Volt 1.081. So based on that, if we eliminate CPU and RAM inputs, the bios did not do much in your case? Or do you have a 2-pin card now?
I did not run any PR with the 1kW bios as I have to reseat the EKWB first.


----------



## xrb936

Does anyone know what's going on with this situation: 



In general, the GPU usage in some games, such as GTA V, is extremely low, and the game is totally unplayable. However, in other games, such as Cyberpunk 2077, everything is fine.
The driver version is 460.89, everything else is stock on my Strix OC 3090.


----------



## bmgjet

xrb936 said:


> Does anyone know what's going on with this situation:
> 
> 
> 
> In general, the GPU usage in some games, such as GTA V, is extremely low, and the game is totally unplayable. However, in other games, such as Cyberpunk 2077, everything is fine.
> The driver version is 460.89, everything else is stock on my Strix OC 3090.


You need to set Max Performance or Balanced in the driver, or turn the graphics up more, to trigger it to go to the P0 power state.
Optimised has issues with some games, like what you're getting, where the power saving is too aggressive, to make the idle and low power states look more energy efficient in reviews.


----------



## xrb936

bmgjet said:


> You need to set Max Performance or Balanced in the driver, or turn the graphics up more, to trigger it to go to the P0 power state.
> Optimised has issues with some games, like what you're getting, where the power saving is too aggressive, to make the idle and low power states look more energy efficient in reviews.


It was on Max Performance. Otherwise it wouldn't stay at 1860mhz with 3% usage.


----------



## HyperMatrix

xrb936 said:


> It was max performance. Otherwise it won't stay at 1860mhz with 3% usage.


Have you tried locking your clocks to your max boost? Both precision x and afterburner can do it.


----------



## mardon

What sort of temps would you expect from a 390w 0.90v 2000mhz 3090 with the following:

240 40mm thick with Noctuas
240 20mm thick with slim Noctuas
Small reservoir/DDC pump combo
EK reference 3090 block
10900k @ 1.32v 5.2ghz in the loop
In an ITX case with restricted airflow


----------



## JonnyV75

Nico67 said:


> 1000w Seasonic Titanium Ultra; seems Seasonic might have an aggressive overcurrent trip. The Silverstone SFX-L800 Titanium would last 15 mins in BDO before rebooting on the 2080Ti, so I don't think it's PSU quality, just an aggressive current trip.





Exilon said:


> Yeah, it's the overcurrent trip. A normal 3090 would be fine but the 440W default power target Ultra FTW3 lets the voltage ramp too high and the PSU trips as well.
> When did you get the PSU? It's the SSR-1000TR right?
> Per the EVGA FTW3 thread, some people have gotten cross-ship RMAs with SeaSonic for their older SSR-*TD and SSR-*TR models to address this problem. One person got their SSR-850TR swapped out with a TX-850 and no longer suffers shutdowns, but the model name for the "TX-850" still seems to be SSR-850TR so Seasonic probably has multiple revisions with the same name...


I have a KPE with an ASUS Strix X570-E and I've had the KPE up to 550w via the LN2 switch and overvolting. Surprisingly enough my Seasonic 850 Focus Gold FX hasn't shut down yet, and this is supposed to be one of the ones to avoid.

However, it did have shutdown issues with my old ASRock x470 Taichi and EVGA 2080Ti. I thought it was the MB, which is why I switched to the ASUS.


----------



## tankyx

I am the owner of a PNY Uprising, and it is acting weird:

Boosts up to 2050mhz in games, but won't boost past 1830mhz in benchmarks
Stuck at drawing 350w instead of 365w. In MSI Afterburner, the power draw is limited to 100%.

Any ideas?


----------



## stryker7314

bmagnien said:


> Where have you ordered an Alphacool FTW3 block that's coming in a few days, may I ask? I preordered from Performance-PCs and from Aquatuning but everything on those sites just gives vague references to being available in the second week of January, which I'm skeptical of. Thanks!


Aquatuning, I preordered a while back which could be why.


----------



## Zooms

Hello, is it normal with the Kingpin 520w bios on a 3090 Strix OC to see 359.8W max board power draw right after Port Royal? Why not ~520w?


----------



## MacMus

Hello,

I have purchased a 3090 Asus Strix OC and I have connected all 3 pins.
However, in games Afterburner constantly stops it at ~350W with a power limit flag.

What gives?


----------



## rhyno

Zooms said:


> Hello, is it normal with the Kingpin 520w bios on a 3090 Strix OC to see 359.8W max board power draw right after Port Royal? Why not ~520w?


Because the power TARGET on those BIOSes is still 430 watts; 520 is the max it will boost to for short periods. If you want to actually hit 520, use the Kingpin 1000 watt bios and cap it at 52 percent. I have been using the 1000 watt bios capped at 66%; it hits a max of 670 watts but I'm only seeing 550-600 in games, with about 70c max temps on the stock air cooler.
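The slider math here is just a ratio of the BIOS's rated maximum board power. A quick sketch using the wattages quoted in this post (not official EVGA figures):

```python
# Power-limit slider math for a flashed vbios: the slider is a
# percentage of the BIOS's rated maximum board power.
# Wattages below are the ones quoted in the post, not official specs.

def slider_percent(target_watts: float, bios_max_watts: float) -> float:
    """Slider setting that caps sustained draw at target_watts."""
    return round(target_watts / bios_max_watts * 100, 1)

def watt_cap(slider_pct: float, bios_max_watts: float) -> float:
    """Sustained power cap implied by a slider setting."""
    return bios_max_watts * slider_pct / 100

print(slider_percent(520, 1000))  # 52.0 -> "cap it at 52 percent"
print(watt_cap(66, 1000))         # 660.0 -> why a 66% cap peaks near 670W
```

Transient spikes can still briefly overshoot the sustained cap, which is why a 66% cap shows ~670W peaks.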


----------



## Zooms

Hmm... I read a long time ago that there is a power-detection error, and you need to add +150w to know the real consumption?
Right?


----------



## originxt

Is it safe to run the 1000w bios on the ftw3 if you cap it at 50-60% PL? On custom water loop.


----------



## caki

Answers for some of the questions above (based on what I've read in the previous pages of this thread):

Getting less than the TDP allowed by the bios: check that you have all the power cables connected directly to your PSU.
The 1000w vbios seems to be safe to use "at your own risk", which roughly means that if you know what you are doing you should be okay with it. If not sure, stick with the 520w vbios, as the chances that you'll see enough benefit from 1000w are slim.

Btw, I spent the last few days trying to solve my frametime spikes, so I'll share it here in case someone falls into the same pit as I did.

Symptoms: frametime spikes, micro stutters in all games
Culprit: asci.sys with high ISR and DPC counts and high execution times (>3 ms), caused by the Thermaltake RGB Plus software. It also looks like ASUS's own Aura RGB controller software creates some spikes, though not as dramatic as asci.sys from the Thermaltake software. That stuff is utter trash, but well, that's the bane of people who love RGB like me.


----------



## rocklobsta1109

tankyx said:


> I am the owner of a PNY Uprising, and it is acting weird:
> 
> 
> Boosts up to 2050mhz in games, but won't boost past 1830mhz in benchmarks
> Stuck at drawing 350w instead of 365w. In MSI Afterburner, the power draw is limited to 100%.
> 
> Any ideas?


I also have a PNY 3090 and you'd be doing yourself a favor to flash it with the KFA2 390 watt bios and max your power limit to 111% in Afterburner. The card will pull every bit of the 390 watts and should maintain low-mid 1900s easily in benchmarks.


----------



## MacMus

caki said:


> answers for some of the questions above (based on what i've read in the previous pages of this thread):
> 
> 
> getting less than tdp allowed by bios: check if you have all the power cables connected directly to your psu.


I have all 3 connected; if one is disconnected there's a red LED on top of the pin connector to indicate it's not connected.


----------



## caki

I'd say expect north of 50C during GPU-intensive game sessions.


----------



## des2k...

caki said:


> answers for some of the questions above (based on what i've read in the previous pages of this thread):
> 
> 
> getting less than tdp allowed by bios: check if you have all the power cables connected directly to your psu.
> vbios with 1000w seems to be safe to use "at your own risk". which roughly means that if you know what you are doing you should be okay with it. if not sure, stick with 520w vbios as chances that you'll see enough benefit from 1000w are slim.
> 
> btw i spent last few days trying to solve my framerate spikes so i'll share it here in case someone falls in the same pit as i did.
> 
> symptoms: frametime spikes, micro stutters in all games
> culprit : asci.sys with high ISR and DPC count with high execution times (>3 ms) caused by Thermaltake RGB plus software. Also looks like ASUS's own aura rgb controller softwares are creating some spikes however not as dramatic as asci.sys as in the case of thermaltake RGB software. That stuff is utter trash but well that's the bane of people who love rgb like me







Yeah, if you want realtime, double-digit system latency:
MSI-X enabled, everything low, GPU & audio normal.

Unload anything that you don't need; even HWiNFO, CPUID and AIDA64 will install some low-level driver, and uninstalling the apps doesn't actually remove/clean those up.

Disable power management on the timer. I usually run with HPET On (bios), HPET Off (windows) (10mhz):
bcdedit /set disabledynamictick yes

Some nvidia drivers are better than others, but you should be pretty low for latency.


----------



## HyperMatrix

MacMus said:


> Hello,
> 
> I have purchased a 3090 Asus Strix OC and I have connected all 3 pins.
> However, in games Afterburner constantly stops it at ~350W with a power limit flag.
> 
> What gives?


Edited comment: completely misread what was happening. My apologies.

Which bios are you running? Your Strix is showing power limit at 350W in GPU-Z? What are your OC settings (clocks/voltage/temps). What PSU are you using? Does this problem happen in every gpu load situation or is it just some games?


----------



## pewpewlazer

originxt said:


> Is it safe to run the 1000w bios on the ftw3 if you cap it at 50-60% PL? On custom water loop.


This has been discussed multiple times in this thread over the past week. At least one guy toasted his card running the 1000w BIOS. Go back and read (or at least skim) the last 10-20 pages of this thread. If doing a bit of research is too much work, then the answer is NO, the 1000w BIOS is not "safe".


----------



## jura11

mardon said:


> What sort of temps would you expect a 390w 0.90v 2000mhz 3090 to be with the following:
> 
> 240 40mm thick with Noctuas
> 240 20mm thick with slim Noctuas
> Small Reservoir DDC pump combo
> EK reference 3090 block
> 10900k @ 1.32v 5.2ghz in the loop.
> In an ITX case with restricted air flow


Hi there

I would expect temperatures in the high 40's in the best case and in the 50's in the worst case. I have run a single RTX 2080Ti with a 380W BIOS on a single 360mm radiator and temperatures have been in the 40's in a fairly restrictive Phanteks Evolv ATX.

Hope this helps 

Thanks, Jura


----------



## jura11

originxt said:


> Is it safe to run the 1000w bios on the ftw3 if you cap it at 50-60% PL? On custom water loop.


On FTW3 you should be safe, just personally I would monitor temperatures and monitor power usage etc, I use KPE XOC BIOS only for benchmarks 

Hope this helps 

Thanks, Jura


----------



## geriatricpollywog

The 1000w bios is only intended for liquid nitrogen on Kingpin cards. Anything else is a risk. The ones who take the risk can afford to blow a $1500 card. The Kingpin lacks the fuses that would otherwise trip at high current. It also has more power stages, capacitors, and better input/output filtering than lesser cards. You can flash it to a lesser card like a Strix or a Colorful but that lesser card will not magically grow the extra hardware.


----------



## Chamidorix

One question from Hyper that I realize still hasn't been explicitly answered: why you get better performance when you are on a higher v/f curve point in the driver, even when overriding nvvdd, and how this raises the actual clock (as seen via ThermSpy) versus a lower driver v/f point:










Remember that msvdd is powering the lower quarter of the chip, the L2 cache and "gigathread"/scheduler primarily, as well as the memory buses around the edge. All the GPCs=TPCs/SMs+Geo/ROP that hold fp32/int32, tensor, and BVH (raytracing) are on the top 3/4 of chip and are powered by nvvdd. 

There are technically thousands of clocks across all the various transistor clumps that reference dozens of domain clocks. But these domain clocks will be split into referencing the primary power rail clock (msvdd/misc or nvvdd/shader). 

So, to bring this full circle, there is a clock for msvdd and currently the only user-accessible way to raise it is to select a higher v/f point on the driver-provided table. If you have an nvvdd core clock of 2200mhz with 1.2v manually set nvvdd, but the driver is stuck on the 0.8v point, your msvdd clock will be looking up the msvdd clock value for 0.8v nvvdd (since there is an accompanying msvdd voltage for every nvvdd voltage you see in afterburner). Obviously lower voltages will have lower driver-requested clocks, and so everything attached to the msvdd clock above will run slower, limiting how fast the other interdependent parts of the chip (computation/shaders require access to memory/cache) on nvvdd will actually clock.

The whole process of getting the actual core clock to match the driver-requested clock is basically eliminating an msvdd bottleneck that causes nvvdd to downclock: since it is waiting on something from msvdd, it might as well save power. You see this more on Kingpin cards since it is so easy to mess with nvvdd voltage.

All these clocks will auto-adjust based on individual voltage-to-stability tables and temperature. So if you are on the highest 1.1V driver entry, and therefore the highest possible msvdd driver clock, you can still bump it up a bit higher by oversupplying voltage via i2c/classified.

So there is still real work to be done in allowing users to manually adjust the msvdd clock, since right now we are basically limited by the 1.1V entry on your card's table. It appears just using the offset in Precision does the best at automatically offsetting the msvdd clock in parallel with the nvvdd clock, which pairs great with manual voltage setting of both nvvdd and msvdd via classified.


TLDR: being at a higher point on the curve in afterburner means your msvdd is going to clock higher, which makes it easier for your nvvdd to actually clock to your requested value. For max performance you always want to be at your 1.1V entry, assuming no thermal bottleneck (which will happen below 1.1V for most people).
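A toy model of that lookup behavior, with every voltage and clock invented purely for illustration (the real v/f tables are per-card and not public): the msvdd-domain clock follows the driver's selected v/f point, so a manually locked nvvdd clock on a low driver point still ends up throttled.

```python
# Toy model of the msvdd bottleneck described above. All numbers are
# made up for illustration; real v/f tables are per-card and hidden.

# Hypothetical driver table: nvvdd v/f point (V) -> msvdd-domain clock (MHz)
MSVDD_CLOCK_AT_POINT = {0.800: 1600, 0.900: 1800, 1.000: 1950, 1.100: 2150}

def effective_core_clock(requested_mhz: int, driver_point: float) -> int:
    """nvvdd can't usefully run much faster than the msvdd domain
    feeding it, so the effective clock is capped by the driver point."""
    msvdd_mhz = MSVDD_CLOCK_AT_POINT[driver_point]
    return min(requested_mhz, msvdd_mhz + 100)  # toy coupling margin

# Locking 2200 MHz while the driver sits at the 0.8 V point: throttled.
print(effective_core_clock(2200, 0.800))  # 1700
# Same lock with the driver at the 1.1 V point: full clock reachable.
print(effective_core_clock(2200, 1.100))  # 2200
```

The "+100 MHz margin" is purely a stand-in for whatever coupling the hardware actually enforces; the point is only that the cap moves with the driver's v/f selection, not with the forced nvvdd value.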


----------



## bmagnien

stryker7314 said:


> Aquatuning, I preordered a while back which could be why.


Cool. Did yours actually ship yet?


----------



## HyperMatrix

0451 said:


> You can flash it to a lesser card like a Strix or a Colorful but that lesser card will not magically grow the extra hardware.


I see you paraphrasing Tin. Haha.

_”Extra hardware present on KPE card will not magically grow itself out on PCB, when you flash BIOS.”_


----------



## HyperMatrix

Chamidorix said:


> One question from Hyper I realize still hasn't been explicitly answered: why you get better performance when you are on a higher v/f curve point in the driver even when overriding nvvdd and how this gets actual clock via thermspy to raise vs a lower driver v/f point:
> 
> View attachment 2472540
> 
> 
> Remember that msvdd is powering the lower quarter of the chip, the L2 cache and "gigathread"/scheduler primarily, as well as the memory buses around the edge. All the GPCs=TPCs/SMs+Geo/ROP that hold fp32/int32, tensor, and BVH (raytracing) are on the top 3/4 of chip and are powered by nvvdd.
> 
> There are technically thousands of clocks across all the various transistor clumps that reference dozens of domain clocks. But these domain clocks will be split into referencing the primary power rail clock (msvdd/misc or nvvdd/shader).
> 
> So, to bring this full circle, there is a clock for msvdd and currently the only user-accessible way to raise it is to select a higher v/f point on the driver-provided table. If you have an nvvdd core clock of 2200mhz with 1.2v manually set nvvdd, but the driver is stuck on the 0.8v point, your msvdd clock will be looking up the msvdd clock value for 0.8v nvvdd (since there is an accompanying msvdd voltage for every nvvdd voltage you see in afterburner). Obviously lower voltages will have lower driver-requested clocks, and so everything attached to the msvdd clock above will run slower, limiting how fast the other interdependent parts of the chip (computation/shaders require access to memory/cache) on nvvdd will actually clock.
> 
> The whole process of getting the actual core clock to match the driver-requested clock is basically eliminating an msvdd bottleneck that causes nvvdd to downclock: since it is waiting on something from msvdd, it might as well save power. You see this more on Kingpin cards since it is so easy to mess with nvvdd voltage.
> 
> All these clocks will auto adjust based on individual voltage provided:stability tables, and temperature. So if you are on the highest 1.1V driver entry, and therefore the highest possible msvdd driver clock, you can still bump it up a bit higher by oversupplying voltage via i2c/classified.
> 
> So there is still real work to be done in allowing users to manually adjust the msvdd clock, since right now we are basically limited by the 1.1V entry on your card's table. It appears just using the offset in Precision does the best at automatically offsetting the msvdd clock in parallel with the nvvdd clock, which pairs great with manual voltage setting of both nvvdd and msvdd via classified.
> 
> 
> TLDR: being at a higher point on the curve in afterburner means your msvdd is going to clock higher, which makes it easier for your nvvdd to actually clock to your requested value. For max performance you always want to be at your 1.1V entry, assuming no thermal bottleneck (which will happen below 1.1V for most people).


Interesting that you brought this up, because from what I had noticed, locking the clock to a set number like 2100MHz with smi, but increasing the requested clock whether through offset or curve, would actually increase the real clocks without having to just bump up NVVDD. So for those who don't have Kingpin cards, this may be an option. Based on what you've said, it seems selecting a higher point on the table, even while your clocks are capped lower, does cause your card to use a higher MSVDD point as well.

Will do some more testing with this later. Thanks for sharing.


----------



## Slackaveli

Man, I just hope EVGA ships a batch of Kingpins this week. Im so close on the list.


----------



## jomama22

HyperMatrix said:


> Interesting that you brought this up because from what I had noticed, locking the clock to a set number like 2100MHz with smi, but increasing the requested clock whether through offset or curve, would actually increase the real clocks without having to just bump up NVVDD. So for those who don’t have kingpin cards, this may be an option. Based on what you’ve said, it seems selecting a higher point on the table, even while your clocks are capped lower, does cause your card to use a higher point MSVDD as well.
> 
> Will do some more testing with this later. Thanks for sharing.


I imagine this works because core and uncore (just what I'm calling everything else) clocks seem to be tied amongst themselves to some degree. I have noticed adjusting the power limit to within the same v/f curve will keep a pretty consistent difference between requested and effective core clocks. 

I personally have never seen a larger spread than 30 mhz between requested and effective core clocks but others have reported much larger differences. I would imagine adjusting nvvdd first to raise the effective clock as high as it will go at a specific requested clock and then adjusting MSVDD if stability isn't fully reached would be the first way I would go about it.

I would then inherently think that the clock cap in nvidia-smi is being executed after any driver instructions that are produced from the selection from the v/f curve. In other words, the uncore clock would remain at the value tied to the v/f selected, ignoring the core clock cap set by nvidia-smi. 

Could be wrong though, and without some way of seeing what the uncore clock is actually doing under different circumstances, it's hard to nail down. But, as others have discovered, it is possible to make more efficient use of core clock at lower voltages using this method of nvidia-smi capping. So I would surmise that uncore "overclocking" (let's just stick with that, as in some sense, if what I said above is true, you would more or less be doing that at a reduced core frequency) can help stability of the core clock at lower voltages.

I do have a question though, as I'm not at home for a bit. Does using nvidia-smi retain the voltage an offset or fixed curve point gives without the cap? I.e. if power is not a limiting factor, does fixing at the 1.1v point, or using an offset that would naturally pick the 1.1v point, still output 1.1v on the core/nvvdd while capping with nvidia-smi?


----------



## J-Dog1980

Sheyster said:


> I've been following that thread since they released the 500W BIOS. I passed on the FTW3 when I got the invite to buy and went with the Strix. It's doing just fine with their 500W BIOS installed.


Curious if you could share the bios flashing details or point me to a decent walkthrough. Do they still have the file available on the EVGA website?


----------



## jomama22

J-Dog1980 said:


> Curious if you could share the bios flashing details or point me to a decent walkthrough. Do they still have the file available on the EVGA website?


First post in this thread.


----------



## originxt

pewpewlazer said:


> This has been discussed multiple times in this thread over the past week. At least one guy toasted his card running the 1000w BIOS. Go back and read (or at least skim) the last 10-20 pages of this thread. If doing a bit of research is too much work, then the answer is NO, the 1000w BIOS is not "safe".


I have read them, and iirc he ran some stupid PL on a 2-pin card while his card was "screaming." The only reason I ask about 50-60% is because it seems like a relatively safe draw level, safe being what seems to be drawn on shunted cards. I don't think the Kingpin 520w bios works on the FTW3, nor the 500w, while the actual XOC bios seems to report correct power numbers. I'll just wait for the 500w bios to actually work then.



jura11 said:


> On FTW3 you should be safe, just personally I would monitor temperatures and monitor power usage etc, I use KPE XOC BIOS only for benchmarks
> 
> Hope this helps
> 
> Thanks, Jura


Gotcha, I'll keep an eye out. I'm around 43c in a shared loop with a 10980xe after a couple hours of gaming. Unsure if I'll be daily driving it with the bios anymore, probably just some benches.


----------



## Daniel M

Tried to do some undervolting on my PNY 3090 (390W KFA2 BIOS). 750/1695, 800/1800, 850/1890.

Got my best PR score of 13,380 - https://www.3dmark.com/pr/735349. The card seems to average 1,853mhz during PR. I'm PWR limited according to GPU-Z.

Still waiting for EK to ship my waterblock so I can get the temps down.


----------



## jura11

originxt said:


> I have read them and iirc he ran some stupid PL on a 2-pin card while his card was "screaming." Only reason I ask about 50-60% is because it seems like a relatively safe draw level, safe being what seems to be drawn on shunted cards. I don't think the kingpin 520w bios works on the ftw3, nor the 500w while the actual xoc bios seems to operate fine for drawing correct numbers. I'll just wait for the 500w bios to actually work then.
> 
> 
> Gotcha, I'll keep an eye out. I'm around 43c in a shared loop with a 10980xe after a couple hours of gaming. Unsure if I'll be daily driving it with the bios anymore, probably just some benches.


Hi there 

I have run the KPE XOC BIOS on my RTX 3090 GamingPro, which is a 2*8-pin GPU, and personally I never ran that BIOS at 100% power limit. On my loop temperatures never broke 38°C; I literally monitored my temperatures every 30 minutes, used an IR gun to measure the backplate temperature and the 8-pin cables, and checked the wall power meter every 15 or so minutes.

With the EVGA RTX 3090 FTW3 you at least have the option to monitor VRAM and VRM temperatures, which will help you, and depending on the block, 43°C I think is okay there.

Just keep in mind, with this BIOS you have all protections disabled.

Hope this helps and good luck mate 

Thanks, Jura


----------



## jura11

Daniel M said:


> Tried to do some undervolting on my PNY 3090 (390W KFA2 BIOS). 750/1695, 800/1800, 850/1890.
> 
> Got my best PR score of 13,380 - https://www.3dmark.com/pr/735349. The card seems to average 1,853mhz during PR. I'm PWR limited according to GPU-Z.
> 
> Still waiting for EK to ship my waterblock so I can get the temps down.


Sadly yes, 2*8-pin GPUs are power limited, and the best BIOS is the KFA2 390W or Gigabyte 390W BIOS.

A shunt mod is really needed for raising the power limit of the RTX 3090; I would say the sweet spot for the RTX 3090 is around 450-500W.

Hope this helps 

Thanks, Jura
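For anyone curious why a shunt mod raises the limit: the card infers current from the voltage drop across a sense resistor, so lowering the effective shunt resistance (e.g. soldering a second shunt on top of the stock one) makes the controller under-read. A sketch of that arithmetic with illustrative resistor values, not measured from any particular 3090:

```python
# Shunt-mod arithmetic: the controller converts the shunt's voltage drop
# to current assuming the STOCK resistance, so a lowered shunt under-reads.
# Resistor values are illustrative, not from a specific 3090 PCB.

def parallel_mohm(r1: float, r2: float) -> float:
    """Equivalent resistance of two shunts in parallel (milliohms)."""
    return r1 * r2 / (r1 + r2)

def real_watts(reported: float, stock_mohm: float, actual_mohm: float) -> float:
    """Actual draw when the card still assumes the stock shunt value."""
    return reported * stock_mohm / actual_mohm

stock = 5.0                            # hypothetical stock shunt
modded = parallel_mohm(stock, 5.0)     # stack an equal shunt -> 2.5 mOhm
print(modded)                          # 2.5
print(real_watts(350, stock, modded))  # 700.0: reads 350W, pulls ~700W
```

This is also why a shunted card's reported power numbers should be taken with a grain of salt: the scaling factor applies to every reading.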


----------



## ttnuagmada

Is it pretty standard for internal clocks to be 10-20mhz lower than the requested clock speed? My Strix seems to do this regardless of whether I'm running stock settings or OC'd.


----------



## WMDVenum

ttnuagmada said:


> Is it pretty standard for internal clocks to be 10-20mhz lower than the requested clock speed? My Strix seems to do this regardless of whether or not im running stock settings or OC'd.


My requested clocks on a 3090 FE are set to 2205, but the effective clocks displayed by HWiNFO are 2050-2100. If I lower the requested clock to 2050, the effective clock is 1950-2000. Not really anything I can do about it on an FE, so I just figure out the max requested clock my GPU can support.


----------



## pewpewlazer

WMDVenum said:


> My requested clocks, on a 3090 FE are set to 2205 but the effective clocks, displayed by hardware info, is 2050-2100. If I lower the requested clock to 2050 the effective clock is 1950-2000. Not really anything I can do about it on an FE so I just figure out what is the max requested clocks my GPU can support.


Are you running that beta version of hwinfo64? Have you crosschecked with ThermSpy? Or are you just referencing your reported clocks after GPU Boost 5.0 (or whatever version of this abortion Nvidia is up to) thermal & power throttles them into oblivion?



ttnuagmada said:


> Is it pretty standard for internal clocks to be 10-20mhz lower than the requested clock speed? My Strix seems to do this regardless of whether or not im running stock settings or OC'd.


On my 2080 Ti, I typically see this so-called "internal clock" running ~15mhz below the reported clock, even at stock.

I wouldn't worry about 10-20mhz. If you're within 10-20mhz, you're probably as good as you're going to get. Whatever this "internal clock" is, it's not THE ONLY CLOCK.



http://imgur.com/YBf6oA7

2015/2175 (ThermSpy clocks/Afterburner clocks) = 10,264



http://imgur.com/YELCaQB

2025/2040 (ThermSpy clocks/Afterburner clocks) = 10,063

Yes, I'm running a Turing card, not Ampere. But they seem to behave rather similarly in regards to this so-called "internal clock". 

If this "internal clock" is the exact same thing as the "requested clock", then there's zero reason I should score 201 points HIGHER with a 10mhz LOWER "internal clock".


----------



## SoldierRBT

WMDVenum said:


> My requested clocks, on a 3090 FE are set to 2205 but the effective clocks, displayed by hardware info, is 2050-2100. If I lower the requested clock to 2050 the effective clock is 1950-2000. Not really anything I can do about it on an FE so I just figure out what is the max requested clocks my GPU can support.


Are you setting a custom v/f curve? My KPE has around 50MHz effective clocks difference when setting a custom v/f curve (flat line). This can be fixed by adjusting voltages in classified tool. Offset overclock seems to be the best OC for now (Classified tool not needed). I have 3-7MHz difference in benchmarks and 10-15MHz in games.


----------



## Falkentyne

WMDVenum said:


> My requested clocks, on a 3090 FE are set to 2205 but the effective clocks, displayed by hardware info, is 2050-2100. If I lower the requested clock to 2050 the effective clock is 1950-2000. Not really anything I can do about it on an FE so I just figure out what is the max requested clocks my GPU can support.


Mine is 10-25 mhz difference on my 3090 FE (sometimes more if power draw drops a lot)
If you're using a custom curve it tanks your performance by causing a larger delta. Several people earlier explained how to avoid this if you can't use the classified tool, which can't be used on any non-kingpin card.


----------



## WMDVenum

I set my frequency to about [email protected] and then limit it to 2205 using the nvidia-smi command.

I'll try playing around with offset and see how it impacts performance and effective clock vs v/f tuning.
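For anyone wanting to replicate the nvidia-smi part, this is roughly what the clock lock looks like (needs an elevated shell; the 2205 value is just the one from this post, and the supported clock range varies by card and driver):

```shell
# Lock the allowed GPU core clock range (min,max in MHz) so GPU Boost
# never requests above 2205 MHz:
nvidia-smi --lock-gpu-clocks=210,2205      # short form: nvidia-smi -lgc 210,2205

# Check what the driver is currently running / allows:
nvidia-smi --query-gpu=clocks.gr,clocks.max.gr --format=csv

# Undo the lock and return to stock boost behavior:
nvidia-smi --reset-gpu-clocks              # short form: nvidia-smi -rgc
```

Note this only caps the requested clock; the effective clock can still sit below it, which is the whole problem being discussed here.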


----------



## Falkentyne

WMDVenum said:


> I set my frequency to about [email protected] and then limit it to 2205 using the nvidia-smi command.
> 
> I'll try playing around with offset and see how it impacts performance and effective clock vs v/f tuning.


I believe you can't have any voltage point more than 15 mhz more than the previous voltage point, if I understood @HyperMatrix and that other guy properly.
E.g. if the 1.062v voltage point is 2050 mhz, then if you set the next voltage point (1.069v) higher than 2065 mhz, it screws up the clocks.
Even if you lock the 1.10v voltage point and move just that point up 15 mhz and adjust absolutely nothing else on the curve, it can still separate the clocks a little more than if you just used offset and didn't touch the curve.
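If that rule holds, a curve export can be sanity-checked by walking adjacent points and flagging any step over 15 MHz. A quick sketch (the point values below are made up just to illustrate the check):

```python
# Each entry is (voltage_mV, frequency_MHz) along the v/f curve, low to high.
curve = [(1050, 2040), (1062, 2050), (1069, 2080), (1075, 2090)]

MAX_STEP_MHZ = 15  # supposed limit between adjacent voltage points

def bad_steps(points):
    """Return adjacent point pairs whose clock step exceeds the limit."""
    return [(a, b) for a, b in zip(points, points[1:])
            if b[1] - a[1] > MAX_STEP_MHZ]

for a, b in bad_steps(curve):
    print(f"{a[0]}mV -> {b[0]}mV jumps {b[1] - a[1]} MHz (>15): expect clock separation")
```

On the hypothetical curve above, only the 1062mV to 1069mV step (+30 MHz) gets flagged.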


----------



## WayWayUp

Was able to get a new high score on my ftw3.
It's a shame about my memory. I can run +1000, but I'm hearing ppl run 1200 as normal, with some ppl pushing as high as +1600. Definitely disappointed there.

The core though? It's an oh so sweet chip, definitely a lottery winner.
I was so close to hitting 2,280MHz but failed 3 times at the end. I guess I need a few degrees colder. Maybe in a couple weeks I'll try for 16k.


----------



## Falkentyne

WayWayUp said:


> View attachment 2472584
> 
> 
> was able to get a new high score on my ftw3.
> It’s a shame about my memory. I can run +1000 but im hearing ppl run 1200 as normal with ppl some pushing as high as +1600. Definitely disappointed there
> 
> the core though? Its an oh so sweet chip definitely a lottery winner.
> i was so close to having 2,280Mhz but failed 3 times at the end. I guess i need a few degrees colder. Maybe in a couple weeks ill try for 16k


You're complaining about +1000 when I can only run +600 max (+650 is sometimes benchable). Jeez...


----------



## des2k...

WayWayUp said:


> View attachment 2472584
> 
> 
> was able to get a new high score on my ftw3.
> It’s a shame about my memory. I can run +1000 but im hearing ppl run 1200 as normal with ppl some pushing as high as +1600. Definitely disappointed there
> 
> the core though? Its an oh so sweet chip definitely a lottery winner.
> i was so close to having 2,280Mhz but failed 3 times at the end. I guess i need a few degrees colder. Maybe in a couple weeks ill try for 16k


does going high on memory make any difference in games ?

I'm only doing +452 on mem on mine. I haven't tried higher, but after a repaste my core temps did drop. The backplate also went from 39c to 34c after the repaste.


----------



## WayWayUp

des2k... said:


> does going high on memory make any difference in games ?
> 
> I'm only doing +452 on mem on mine. I haven't tried higher but since a repaste I did drop on core temps. The backplate also went from 39c to 34c after repaste.


It makes a difference but its relatively small compared to core


----------



## geriatricpollywog

WayWayUp said:


> View attachment 2472584
> 
> 
> was able to get a new high score on my ftw3.
> It’s a shame about my memory. I can run +1000 but im hearing ppl run 1200 as normal with ppl some pushing as high as +1600. Definitely disappointed there
> 
> the core though? Its an oh so sweet chip definitely a lottery winner.
> i was so close to having 2,280Mhz but failed 3 times at the end. I guess i need a few degrees colder. Maybe in a couple weeks ill try for 16k


The coldest I can get my core is 29C and that’s with the AIO radiator in an ice bucket. How did you manage 16C?


----------



## Nico67

Back to work today, so I got the clampmeter 

2 x 8-pin, 1000w bios @ 80%: some power limiting, but saw max board power around 780w, so somewhere around 520w real.

8pin1 = 17.70A x 12.1v = 214w
8pin2 = 18.24A x 12.1v = 221w

so it's definitely a lot more balanced than GPU-Z would suggest at 282w/174w.

Still hard to gauge absolute power, and therefore what's coming off the PCIe slot, but I would guess it's up there around 80w+.
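For reference, the arithmetic behind those numbers (the 520w total is the rough estimate from above, not a measurement):

```python
# Clamp-meter readings at each 8-pin connector: (amps, volts)
readings = {"8pin1": (17.70, 12.1), "8pin2": (18.24, 12.1)}

connector_w = {name: round(a * v) for name, (a, v) in readings.items()}
# -> {'8pin1': 214, '8pin2': 221}

total_real_w = 520  # rough estimate of real board power from above
slot_w = total_real_w - sum(connector_w.values())
print(f"implied PCIe slot draw: ~{slot_w} W")  # ~85 W, in line with the 80w+ guess
```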


----------



## WayWayUp

It's because AIOs are not as efficient as a full loop. I'm using an Optimus block with LM. Right off the bat I idle at 20c with room temp 70-71F.
Delta with the Optimus block is 8C.
With LM it's a little better.
So just a normal run without doing anything, my core will be just around or slightly over the AIO with ice bucket, for reference.

The 16c, in complete honesty, was just the window open in Illinois.
Today was around 30F.
Anyways, over 90% of the scores ahead of me list "N/A" for temps, so I guess my temp is relatively warm for the score range.


----------



## WMDVenum

So if I run [email protected] my effective clock is around 2050. If I set an offset to target 2190 @ 1.075v my effective clock is near 2160. I managed to get my effective clock to 2160 while using a v/f curve as well, but it didn't stay long. Now the issue is that occasionally the effective clock will bump up to 2190 and crash.

Also, on a fresh boot the [email protected] setting had an effective clock of 2190 that crashed soon after; upon launching Port Royal again it dropped to 2040.

I seem to get instability if the effective clock goes above 2170. Is there any way to limit that clock, similarly to locking the set clock?

I wonder if the reason some games are unstable where Port Royal is stable is because the effective clock in Port Royal is lower, whereas in a less demanding game it climbs higher.

I have been doing some stability testing in CP2077 and Warframe.

Warframe
[email protected] (2000 effective)- 362 FPS
+175 offset (2080 effective) - 366FPS
CP2077
[email protected] (2060 effective)- 62 FPS
+175 offset (2113 effective) - 65 FPS


Still trying to figure out how to prevent the temp stepping. If I set the offset to +175, by the time I get to my load temps the frequency drops from 2160 to 2130, and effective drops similarly. But if I set my target at the load temp, then if the load temp decreases and it steps to a higher clock, it will crash due to instability. My next goal will be to limit my frequency to 2145 using nvidia-smi and then increase the offset to try to keep the effective clock as close to the set clock as possible. So far initial testing allows 2145 set and 2131 effective, but this crashes fairly quickly in CP2077. I think I will just give up and let the temp stepping do whatever it wants.
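The stepping itself follows a fairly predictable pattern: GPU Boost pulls the requested clock down in ~15 MHz steps as the core heats up. A toy model of that (the 15 MHz step size matches GPU Boost behavior; the 35c starting point and 5c bin width are guesses that vary per card and bios):

```python
def thermal_offset_mhz(temp_c, start_c=35, bin_c=5, step_mhz=15):
    """Rough estimate of clock lost to temp stepping at a given core temp."""
    if temp_c <= start_c:
        return 0
    bins = -(-(temp_c - start_c) // bin_c)  # ceiling division
    return bins * step_mhz

# e.g. a +175 offset targeting 2160 at various temps:
for t in (30, 40, 50):
    print(f"{t}c -> ~{2160 - thermal_offset_mhz(t)} MHz requested")
```

Which is why a curve tuned at idle temps ends up 30+ MHz optimistic once the card is heat-soaked.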


----------



## geriatricpollywog

WayWayUp said:


> It’s because Aio’s are not as efficient as a full loop. I’m using an optimus block with LM. Right off the bat i idle at 20 c with room temp 70-71 F.
> Delta with optimus block is 8C
> With LM its a little better
> So just a normal run without doing anything my core will be just around or slightly over the aio with icebucket; for reference
> 
> 16c in complete honesty was just window open in illinois
> Today was around 30 F
> Anyways, over 90% of the scores ahead of me listed as “N/A” temps so i guess
> temp is relatively warm for the score range


That temperature might be plausible if your PC was completely outside at 30F. I wouldn’t know as the record low for my city is 33F. I am just calling out cropping out the temperature without linking the score and calling the GPU core a “lottery winner.” A good score is a good score though.


----------



## WayWayUp

There you go: https://www.3dmark.com/pr/736189
I'm sorry you got that impression.

I've been able to score over 15.1k even with the stock air cooler, with the back end of the benchmark in the 60s C.


https://www.3dmark.com/pr/490499



This is def a good chip and I'm just excited to finally have a waterblock after waiting for so long.


----------



## Chobbit

Hey, new to the 30 series and just got the only one available, the Palit GameRock 3090 (non-OC, but got it for £1500 which is about £300 cheaper than any other card, and those still have no stock).

Anyway, anyone use its ThunderMaster software?

I can get the card to sit around 2100/10850 by just increasing the power/temp slider to max with auto fans. However, if the game's not demanding and the card is running cool (or I manually increase the fan speeds to keep it below 60 degC), it sometimes tries to boost clocks to 2130-2200mhz and crashes the driver.

Now, there's a voltage slider in ThunderMaster which, if I could add some more volts, would probably help with stability, but it goes from 0% - 100% and that scares me: if that is 1-to-1, surely an extra 100% voltage will kill the card. Can someone with a bit more understanding let me know what steps that slider goes up by? And is 100% within Palit's safe limits, seeing as it's available in their OC software? Thanks.


----------



## Biscottoman

WayWayUp said:


> There you go https://www.3dmark.com/pr/736189
> Im sorry you got that impression
> 
> Ive been able to score over 15.1k even with the stock air cooler with backend of benchmark in the 60s C
> 
> 
> https://www.3dmark.com/pr/490499
> 
> 
> 
> This is def a good chip and im just excited to finally have waterblock after waiting for so long


Pretty nice results dude. Even if you can't achieve over 1k on memory, your card is one of the best-binned in this thread for sure. Are you running any shunt mod on that?


----------



## man from atlantis

Chobbit said:


> Hey new to the 30 series and just got the only one available the Palit GameRock 3090 (none OC but got it for £1500 which is about £300 cheaper than any other card that have no stock still)
> 
> Anyway anyone use it's Thundermaster software?
> 
> I can get the card to goto sit around 2100/10850 by just increasing the power/temp slider to max with auto fans, however if the games not demanding and the cards are running cool (or I manually increase the fan speeds to keep it below 60 degC) it sometimes tries to boost clocks to 2130-2200mhz and crashes the driver.
> 
> Now there's a voltage slider in Thundermaster which if I could add some more volts will probably help with stability but it goes from 0% - 100% and that scares me as if that is 1-1 surely an extra 100% voltage will kill the card. Can someone with a bit more understanding let me know what steps that slider goes up by? And if 100% is within Palits safe limits? seems as it's available in their OC software, Thank.


The voltage slider just opens up vcurve voltages higher than 1081mV; it stops at 1100mV. I have the 3080 GameRock vanilla, flashed with the GameRock OC bios to get a 440W power limit. I believe the 3090 GameRock OC has a 470W power limit, compared to 420W on the GameRock vanilla.


----------



## Chobbit

man from atlantis said:


> Voltage slider just opens up higher vcurve voltages than 1081mV, it stops at 1100mV. I have the 3080 GameRock vanilla, flashed the GameRock OC bios to have 440W power limit. I believe 3090 GameRock OC has 470W power limit, compared to 420W GameRock vanilla.


Thank you for the info, glad to know I'm not going to blow it up using that slider 😄. I would be interested in flashing the OC bios (since it has dual bios) if possible.


----------



## bl4ckdot

Hydro X Series XG7 RGB 30-SERIES STRIX GPU Water Block (3090, 3080, 3070) (corsair.com) 
Thoughts ?


----------



## man from atlantis

Chobbit said:


> Thank you for the info, glad to know I'm not going to blow it up using that slider 😄, if it's possible I would be interested in flashing the OC bios (since it has dual bios) if possible


Yeah, just follow the guide at the first page;








Palit RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory): www.techpowerup.com


----------



## Zurv

Note to anyone having coil whine. My 3090 FTW3 was SUPER bad. This was a big problem for me as the audio was coming from the card too. (HDMI to AVR) The computer is far away so some local noise from there wouldn't have been a problem - but buzzing from all my speakers (when on max load on the GPU) is a huge problem. 7.1.2 buzz isn't a good thing. 


What I tried...
ran a new 20amp dedicated circuit (I ran 3 for home theater stuff)
tried an AudioQuest Niagara 5000 and 1200
tried a surgeX SA-20
all kinds of crazy and stunningly costly cables (magic audiophile level of crazy)

None of it worked.
I'm using the Corsair AXi 1600. I wasn't having problems with my SLI 2080 Ti Kingpins.. or 3090 FE or 2080 Ti FE SLI. Just this 3090 FTW3.
Ugh.. do I need to remove the shunt mod? _sigh_

Nope! The fix was ferrite beads. Putting one on the power coming into the PSU and one on each of the PCI-E connectors at the video card side.
Whine is 99% gone.
The odd part is the Corsair cables already have ferrite beads... _shrug_ maybe they were too small, but adding more fixed the issue.

I used this package of them: Amazon.com: eBoot 20 Pieces Clip-on Ferrite Ring Core RFI EMI Noise Suppressor Cable Clip for 3mm/ 5mm/ 7mm/ 9mm/ 13mm Diameter Cable, Black: Industrial & Scientific 

Hopefully this info is helpful to others. It is also cheap - if it didn't help you aren't out much.
My next step would have been removing the shunts. I had also ordered a pure-sinewave UPS, but cancelled it after the beads fixed the issue.


----------



## WMDVenum

Falkentyne said:


> I believe you can't have any voltage point more than 15 mhz more than the previous voltage point, if I understood @HyperMatrix and that other guy properly.
> E.g. if the 1.062v voltage point is 2050 mhz, than if you set the next voltage point (1.069v) higher than 2065 mhz, it screws up the clocks.
> Even if you lock the 1.10v voltage point and move just that point up 15 mhz and adjust absolutely nothing else on the curve, it can still separate the clocks a little more than if you just used offset and didn't touch the curve.


After further testing I was able to reproduce what you described, but I was not able to get it to a level I was satisfied with. I did additional testing using the V/F curve method and did multiple sets of Port Royal runs on my 3090 FE.

-3090 FE had a 2205 limit set with nvidia-smi.
-3090 had a V/F curve set to 2300 frequency at various voltages to make sure it never goes below 2205 during temp stepping.
-During all testing the requested frequency was always 2205 and did not throttle.

-at 1.1v the effective frequency was around 2040 with a Port Royal score of 14687.
-at 1.075v the effective frequency was ~2125 with a Port Royal score of 15014.
-at 1.062v the effective frequency was ~2150 with a Port Royal score of 15060.
-at 1.050v the effective frequency was ~2155 with a Port Royal score of 15079.
-at 1.042v the effective frequency was ~2160 with a Port Royal score of 15095.
-at 1.035v the run was not stable and crashed.

I have no idea why the effective frequency increased with reduced voltage, but the Port Royal scores seem to support the effective frequency readings.


----------



## Chobbit

Hit a first world problem: added some more voltage and got my clocks to sit around 2170-2200mhz, stable in benchmarking with no issues, although temps hit 80 degC.

Went to play some games and the computer kept freezing. Backed the voltage off and went back to older clocks and it was still freezing, just not as quickly. Turns out there's so much heat coming off the card into the case and radiator that I can't run my i9 @ 5.2ghz without hitting thermal limits 😂

So voltage backed off on the CPU and just running it at 5.1 now with no issues. Ah well, the CPU might last longer now, and 5.1ghz ain't bad and not really any more of a bottleneck when the GPU is going to be doing most of the rendering.

The compromises we have to make 😂


----------



## mirkendargen

WMDVenum said:


> After further testing I was able to reproduce what you discussed but I was not able to get it to a level I was satisfied with. I did additional testing using the V/F curve method and did multiple sets of port royale runs on my 3090 FE.
> 
> -3090 FE had a 2205 limit set with -smi.
> -3090 had a V/F curve set to 2300 frequency at various voltages to make sure it never goes below 2205 during temp stepping.
> -During all testing the requested frequency was always 2205 and did not throttle.
> 
> -at 1.1v the effective frequency was around 2040 with a port royale score of 14687.
> -at 1.075v the effective frequency was ~2125 with a port royale score of 15014
> -at 1.062v the effective frequency was ~2150 with a port royale score of 15060.
> -at 1.050v the effective frequency was ~2155 with a port royale score of 15079.
> -at 1.042v the effective frequency was ~2160 with a port royale score of 15095.
> -at 1.035v the run was not stable and crashed.
> 
> I have no idea why the effective frequency increased with reduced voltage but the port royale score seems to support the effective frequency reports.


That just sounds like the power limit in action. Those are just the frequency/voltage combinations that bring you to your power limit.
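That reading fits the usual first-order power model, P ≈ k·f·V². Plugging the five runs above into it, they all land within about 5% of the same relative power, i.e. each voltage/frequency pair is riding the same power ceiling (arbitrary units, just to show the ordering):

```python
# (voltage_V, effective_MHz) pairs from the Port Royal runs above
runs = [(1.100, 2040), (1.075, 2125), (1.062, 2150), (1.050, 2155), (1.042, 2160)]

def rel_power(v_volts, f_mhz):
    # First-order dynamic power: P ~ f * V^2 (arbitrary units)
    return f_mhz * v_volts ** 2

powers = [rel_power(v, f) for v, f in runs]
spread = (max(powers) - min(powers)) / max(powers)
print(f"relative power spread across all five runs: {spread:.1%}")  # ~5.0%
```

Notably the 1.1v run is the highest-power one despite the lowest effective clock, which matches its worst Port Royal score.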


----------



## Esenel

Zurv said:


> Note to anyone having coil whine.
> ...
> Nope! The fix was ferrite beads. Putting one for the power coming in to the PS and one on each of the PCI-E connectors at the video card side.
> whine is 99% gone.


Thanks for the hint.
I might steal the one from my old N64


----------



## pat182

bl4ckdot said:


> Hydro X Series XG7 RGB 30-SERIES STRIX GPU Water Block (3090, 3080, 3070) (corsair.com)
> Thoughts ?


interesting


----------



## Redjester

I'll be re-padding / repainting shunts on my Zotac today. Are there any hot spots on this card that I should add a thermal pad to? I found the one on the FE, but can't seem to locate anything for reference cards. The EKWB waterblock / backplate are a bit lackluster.


----------



## mardon



jura11 said:


> Hi there
> 
> I would expect temperatures in high 40's in best case and in worst case scenario in 50's, I have run single RTX 2080Ti with 380W BIOS with single 360mm radiator and temperatures been in 40's in fairly restrictive Phanteks Evolv ATX
> 
> Hope this helps
> 
> Thanks, Jura


You're pretty much on the money. I have to set quite an aggressive fan curve to keep it under 50c @ 0.906 2000mhz.
Once my 850w SFX Psu arrives I'll have a play with the 1000w bios. That should be.. Interesting.


----------



## WMDVenum

mirkendargen said:


> That just sounds like the power limit in action. Those are just the frequency/voltage combinations that bring you to your power limit.


While that may be the case, I don't believe so. My card is shunted and my max power draw was about 550 watts in all the testing I have done. None of these tests went above 535 watts, so overall board power, PCIe, and rail power aren't hitting their limits. It may be a different limit that isn't shown in HWiNFO or GPU-Z. In the end, for my setup none of this matters, because anything under 1.082 volts isn't actually stable in gaming, so while a high PR score is cool I need to do more testing in real-world use.

Somewhat related but I did collect enough data to make a PR score vs effective frequency chart for my setup.


----------



## Falkentyne

Redjester said:


> I'll be re-padding / repainting shunts on my Zotac today. Are there any hot spots on this card that I should add thermal pad too? I found the one on the FE, but cant seem to locate anything on reference cards. EKWB waterblock / backplate are a bit lackluster.


If you don't know where the hotspots (backplate side) are, a good place to randomly pad is to pad directly behind the VRM locations. You lose nothing by doing that.
One thing to watch out for when doing hotspots is--NEVER compromise VRAM pad contact for hotspot pads, so be careful with the pad thickness and check for proper contact with VRAM.
The good news is that the backplate side is more forgiving than the GPU side with regard to thermal pads, and you can also work with compressing them if they are squishy enough. The GPU side has literally no leeway on pad contact because you're limited by the GPU core contact bracket, and any small difference results in either VRAM not getting contact or GPU not getting contact.


----------



## Slackaveli

ttnuagmada said:


> Is it pretty standard for internal clocks to be 10-20mhz lower than the requested clock speed? My Strix seems to do this regardless of whether or not im running stock settings or OC'd.


So, I haven't been paying much attention to the Strix bc I assumed I'd be on a FTW3 or Kingpin (still might), but I managed to catch a Strix OC in stock at MSRP and jumped on it. I will probably still buy the Kingpin and flip one of the two when my name pops up, but I'm going to test out this Strix first for sure now. So, I do remember you guys had a consensus best BIOS for the Strix, but I'm not sure which BIOS it was. Was it the 520-watt LN2 Kingpin bios?


----------



## ttnuagmada

Just out of curiosity, what kind of game-stable clocks have you been able to achieve? And at what settings?


Slackaveli said:


> So, I havent been paying much attention tot he strix bc i assumed Id be on a ftw3 or kingpin (still might) but i managed to catch a Strix OC in stock at msrp and jumped on it. I will probably still buy the kingpin and fip one of the two when my name pops up but Im going to test out this strix first for sure now. So, I do remember you guys had a consensus bext BIOS for the strix but Im not sure which BIOS it was. Was it the 520-watt LN2 kingpin bios?



I think the 500w is best for air-cooled and 520w for water cooled. My understanding is that the 520w bios doesn't play nice with the fans.


----------



## Slackaveli

ttnuagmada said:


> Just out of curiosity, what kind of game-stable clocks have been able to achieve? The settings
> 
> 
> 
> I think the 500w is best for air-cooled and 520w for water cooled. My understanding is that the 520w bios doesn't play nice with the fans.


Thanks, man. I knew there was something.

I just ordered today but i'll report my clocks. 

Anybody care to link me that best bios, por favor? Also will repaste and repad, so iirc I need the 1.5mm pads? I'll be staying on air.


----------



## originxt

How is the voltage slider affected with the 1000w bios? Are we still capped at 1.1v?

Edit: Nvm, got it. It does not. 55% and no perf cap. Will edit after some more benching. Doesn't seem to stay consistent or get close to 550w though, at least in windowed mode.

Edit2: Hmm, had a run where the ICX monitoring for memory freaked out in GPU-Z. Unsure if it was just a sensor error or it actually spiked hard in temps. _nvm, isolated incident_


----------



## WayWayUp

new dxr feature test
clocks run crazy in this test


----------



## ttnuagmada

So, what kind of game-stable core clocks/voltages is everyone seeing? I can average [email protected] in PR, but my game-stable settings are nothing like that at all. It looks like [email protected] is a sweet spot; I can get higher with more voltage, but the power limit starts becoming an issue in the more demanding games. What does everyone else look like?


----------



## jomama22

[


Zurv said:


> Note to anyone having coil whine. My 3090 FTW3 was SUPER bad. This was a big problem for me as the audio was coming from the card too. (HDMI to AVR) The computer is far away so some local noise from there wouldn't have been a problem - but buzzing from all my speakers (when on max load on the GPU) is a huge problem. 7.1.2 buzz isn't a good thing.
> 
> 
> what i tried...
> ran a new 20amp dedicated circuit. (i ran 3 for home theater stuff.)
> tried audioquest Niagara 5000 and 1200
> tired surgeX SA-20
> all kinda crazy and stunning costly cables (magic audiophile level of crazy.)
> 
> none of it worked.
> I'm using the Corsair axi 1600. I wasn't having problems with my SLI 2080 ti kingpins.. or 3090FE or 2080 it FE SLI. Just this 3090FTW3.
> ugh.. do i need to remove the shuntmod? _sigh_
> 
> Nope! The fix was ferrite beads. Putting one for the power coming in to the PS and one on each of the PCI-E connectors at the video card side.
> whine is 99% gone.
> The odd part is the corsair cables already have ferrite beads... _shrug_ maybe they were to small - but adding more fixed the issues.
> 
> I used this package of them: Amazon.com: eBoot 20 Pieces Clip-on Ferrite Ring Core RFI EMI Noise Suppressor Cable Clip for 3mm/ 5mm/ 7mm/ 9mm/ 13mm Diameter Cable, Black: Industrial & Scientific
> 
> Hopefully this info is helpful to others. It is also cheap - if it didn't help you aren't out much.
> My next step would have been removing then shunts. I had also ordered a pure-sinewave UPS.. but cancelled it after the beads fixed the issue.


I'd be curious as to how much voltage drop you incur at higher current draws with the chokes installed vs no chokes. Genuinely just curious.


----------



## des2k...

ttnuagmada said:


> So, what kind of game stable core clocks/voltages are everyone seeing? I can average [email protected] in PR, but my game stable settings are nothing like that at all. It looks like [email protected] is a sweet spot, I can got higher with more voltage, but the power limit starts becoming an issue on the more demanding games. What's everyone else look like?


Usually anything that's stable for 2 hours in Time Spy Extreme GT2 shouldn't crash in any games; maybe a pure path-traced RTX game like Quake II RTX is different.

I'm at 2130 @ 1050mv, +452 mem, which gives me the FPS boost in games without going too crazy on power.

I'm already at 95fps in Horizon Zero Dawn, ultimate settings 4k; 2145 is only 96fps.

5:12, he's 95fps at 2160 and using a lot of power


----------



## ttnuagmada

des2k... said:


> Usually anything that's stable for 2hours in timespy extreme gt2 shouldn't crash in any games, maybe pure RTX path is different like RTX Quake.
> 
> I'm at 2130 @ 1050mv, +452mem which gives me the FPS boost in games, without going too crazy on power.
> 
> I'm already at 95fps in Horizon Dawn ultimate 4k, 2145 is only 96fps.


Warzone has been my go-to for stability testing, it will crash in settings that even maxed out Cyberpunk is OK with.


----------



## originxt

I scored 15 023 in Port Royal (Intel Core i9-10980XE, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10): www.3dmark.com





Well, 15k score goal accomplished, so I'm satisfied. I'm assuming I could get better scores if I adjusted the curve. However, I am unhappy with the oddities in the ICX sensor data. Unsure if it's from the bios or just some freak anomaly, but definitely not daily-driving this bios. Usually the sensor displays mid 40c. Really happy with a flat, consistent 2205 though.


----------



## des2k...

ttnuagmada said:


> Warzone has been my go-to for stability testing, it will crash in settings that even maxed out Cyberpunk is OK with.


Yeah... but I would have to play battle royale, which is so trash🤣

Looking at the wiki, that's the IW8.0 engine, and Cold War is based on IW8.0 with extra features. I know I finished the story on some really unstable freq/voltages without a crash 🙄


----------



## Christopher2178

originxt said:


> I scored 15 023 in Port Royal
> 
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Well, 15k score goal accomplished so I'm satisfied. I'm assuming I can get better scores if I adjusted curve. However, I am unhappy with the oddities from icx sensor data. Unsure if its from the bios or just some freak anomoly but definitely not daily driving this bios. Usually the sensor displays mid 40c. Really happy with a flat, consistent 2205 though.
> 
> View attachment 2472690


I also have an FTW3 Ultra and I get those weird sensor spikes on the ICX sensors as well, to like 6000c, on the XC3 bios. I have seen others report it and blame it on nvidia drivers, as it maybe didn't happen on previous drivers?
But I'm not sure, so I just basically ignore them. From what I can tell they don't mean anything; the sensor just goes wacky, temps aren't actually going high or anything, it just randomly happens on mine.

Or else the core of the sun is inside your card, which has also been suggested as a possible cause.


Sent from my iPhone using Tapatalk


----------



## Biscottoman

WayWayUp said:


> View attachment 2472681
> 
> 
> new dxr feature test
> clocks run crazy in this test


Bro is your ftw3 shunted?


----------



## originxt

Biscottoman said:


> Bro is your ftw3 shunted?


He mentioned on the Optimus thread that it is shunted.



Christopher2178 said:


> I also have an FTW3 Ultra and I get those weird sensor spikes on icx sensors as well to like 6000c on the XC3 bios and have seen others report it and blame it on nvidia drivers as it maybe didn’t happen on previous drivers? But I’m not sure so I just basically ignore them from what I can tell they don’t mean anything just that the sensor goes wacky temps aren’t actually going high or anything just randomly happens on mine. Or else the core of the sun is inside your card that has also been suggested as a possible cause. Sent from my iPhone using Tapatalk


Are you having issues with the other bioses as well? I can't get the XC3 bios to work on my card either. Can't even get this 1000w bios to saturate all the power I'm allowing it. I think my GPU is capped at 2205, since it crashes any time it's over that.


----------



## Falkentyne

Christopher2178 said:


> I also have an FTW3 Ultra and I get those weird sensor spikes on icx sensors as well to like 6000c on the XC3 bios and have seen others report it and blame it on nvidia drivers as it maybe didn’t happen on previous drivers?
> But I’m not sure so I just basically ignore them from what I can tell they don’t mean anything just that the sensor goes wacky temps aren’t actually going high or anything just randomly happens on mine.
> 
> Or else the core of the sun is inside your card that has also been suggested as a possible cause.
> 
> 
> Sent from my iPhone using Tapatalk


I said this many times. This overtemp blip happens on all cards (not just eVGA; it's just that you can see the temp on eVGA cards, while other cards throw a thermal flag), starting with the very first 457.xx drivers.
This did NOT occur on the 456.98 hotfix, ever.


----------



## Zurv

jomama22 said:


> [
> I'd be curious as to how much voltage drop you incur at higher current draws with the chokes installed vs no chokes. Genuinely just curious.


Hrmm.. maybe when I'm not lazy. The case is crammed full of stuff and I'd need to get the USB cable to the PSU to check power draw.
I sadly can't use the GPU to read power because of the shunt mod. The 18-core Intel CPU can also randomly suck a lot of power too.
The OC wasn't impacted at all, but I would have run it at stock speed to get rid of the EMI hum/buzz.

But my guess is there shouldn't be any impact. PCI-E cables can pump a TON of power through them, and nvidia is heavily limiting them.

But it would be an interesting test for someone that isn't shunted.

I also assumed the cables had beads in them already because there is something fat in them.. but that turned out to be capacitors.

(The PSU is a monster too... the system was built for HEDT and quad SLI.)


----------



## originxt

Falkentyne said:


> I said this many times. This overtemp blip happens on all cards (not just eVGA, just you can see the temp on evga, other cards throw a Thermal flag), starting with the very first 457.xx drivers.
> This did NOT occur in 456.98 hotfix, ever.


Fair enough. I assumed it was a sensor error.


----------



## Apecos

Falkentyne said:


> I said this many times. This overtemp blip happens on all cards (not just eVGA, just you can see the temp on evga, other cards throw a Thermal flag), starting with the very first 457.xx drivers.
> This did NOT occur in 456.98 hotfix, ever.


Falkentyne, which driver from nvidia are you using, or which version do you prefer?

Thanks!


----------



## Falkentyne

Apecos said:


> Falkentyne what driver from nvidia you are using, or wich version you prefer?
> 
> Thanks!


I'm using the 457.44 vulkan driver. I wouldn't touch anything newer than that one with a 50-meter pole, with all the reports of various cards black screening at idle with 100% fans, or completely dying in DirectX 11 or light-load games (EVGA FTW3 mostly), with the red LED light of death, or smoke and sparks.


----------



## jomama22

Zurv said:


> hrmm.. maybe when i'm not lazy  The case is all crammed full of stuff and i'd need to get the usb cable to the PS to check power draw.
> I sadly can't use the GPU to get power because of the shunt mod. The 18 core intel CPU can also randomly suck a lot of power too.
> The OC wasn't impacted at all - but i would have run it at stock speed to get rid of the EMI hum/buz.
> 
> But my guess is there shouldn't be any impact. PCI-E cables can pump a TON of power through them and nvidia is heavily limiting them.
> 
> But it would be an interesting test for someone that isn't shunted.
> 
> I also assumed the cables had beads in them already because there is something fat in them.. but that turned out to be capacitors
> 
> (The PS is a monster too... the system was build for HEDT and quad SLI.)


I mean just checking what gpu-z reads on the 12v for each plug during a high load scenario (especially with your shunts), not the power, the actual voltage reading.
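For reference, the kind of droop GPU-Z would show on the 12V rails can be ballparked with Ohm's law. A rough sketch, with assumed numbers (16 AWG at ~13.2 mOhm/m, 0.6 m run, three 12V conductors per 8-pin; none of this is measured from anyone's actual setup):

```python
# Rough Ohm's-law estimate of 12V droop across one 8-pin PCIe cable.
# Assumed figures (not from this thread): 16 AWG copper at ~13.2 mOhm
# per meter, 0.6 m cable, three 12V conductors per 8-pin connector,
# and the return path through the ground wires doubling the length.

def cable_vdrop(watts, length_m=0.6, mohm_per_m=13.2, conductors=3):
    """Return (current_A, vdrop_V) for one 8-pin connector at a given load."""
    current = watts / 12.0                           # A drawn on this connector
    r_wire = (mohm_per_m / 1000.0) * length_m * 2    # out + return path, ohms
    r_eff = r_wire / conductors                      # parallel 12V wires
    return current, current * r_eff

amps, drop = cable_vdrop(150)   # ~150 W per connector under heavy load
print(f"{amps:.1f} A, {drop * 1000:.0f} mV droop")
```

Even at 150 W per connector that works out to tens of millivolts, which is why the cables themselves are rarely the limit; the interesting part would be whether the beads add anything measurable on top.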


----------



## motivman

Falkentyne said:


> I'm using 457.44 vulkan driver. I wouldn't touch the newer drivers than that one with a 50 meter pole with all the reports of various cards black screening at idle with 100% fans or dying in directX 11 or light load games (eVGA FTW3 mostly).


wow, my card has been randomly black screening at idle the past few days, thought it was in the process of dying or something. Guess ****ty drivers are the cause huh?


----------



## Falkentyne

motivman said:


> wow, my card has been randomly black screening at idle the past few days, thought it was in the process of dying or something. Guess ****ty drivers are the cause huh?


Try 457.44 vulkan beta (you have to get it off a google search) or 456.98 hotfix drivers. The 457.44 vulkan beta drivers do have the temp blip bug, the 456.98 hotfix do not.
I have never encountered the idle black screen yet.


----------



## bmagnien

originxt said:


> I scored 15 023 in Port Royal
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> Well, 15k score goal accomplished, so I'm satisfied. I'm assuming I can get better scores if I adjusted the curve. However, I am unhappy with the oddities from the ICX sensor data. Unsure if it's from the bios or just some freak anomaly, but definitely not daily driving this bios. Usually the sensor displays mid 40C. Really happy with a flat, consistent 2205 though.
> 
> View attachment 2472690


This may be a dumb question; I just assumed I could always look it up easily when I needed it, but I can't seem to find it. Is there a list of where all those ICX sensors are physically located on the board? Are MEM 1-3 all on the front? Are PWR 1-5 all on the front? And what's the difference between GPU 1 and 2? I have an MP5Works BPC incoming and want to use these to do direct comparisons in temps with and without the BPC installed. Thanks!


----------



## aagerius

Well, after tinkering a few days with my newly built system, these are the results.
Still can't wrap my head around the fact that the stock V2 TUF OC bios gave me the best results; I tried several 2x8-pin bioses (KFA, Gigabyte, ...).
What do you guys think? Good enough for what it is?

I scored 19 649 in Time Spy
Intel Xeon W-1290 Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

I scored 13 642 in Port Royal
Intel Xeon W-1290 Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

----------



## Arbustok

That's about as high as I can get with that same card and vbios.



aagerius said:


> Well after tinkering a few days with my new build system, these are the results:
> Still can't wrap my head around the fact that the stock V2 TUF OC bios gave me the best results, tried to use several 2x8pin bios, KFA, Gigabyte, ...
> What do you guys think? Good enough for what it is?


----------



## Slackaveli

aagerius said:


> Well after tinkering a few days with my new build system, these are the results:
> Still can't wrap my head around the fact that the stock V2 TUF OC bios gave me the best results, tried to use several 2x8pin bios, KFA, Gigabyte, ...
> What do you guys think? Good enough for what it is?
> 
> [SNIP]

Gotta remember that this is the penthouse in here; don't worry about being tops. That looks solid to me. For gaming you have a nice card, no doubt. The envy of any other thread besides this one lol. This is a tough crowd.


----------



## ALSTER868

ttnuagmada said:


> So, what kind of game stable core clocks/voltages are everyone seeing? I can average [email protected] in PR, but my game stable settings are nothing like that at all. It looks like [email protected] is a sweet spot, I can get higher with more voltage, but the power limit starts becoming an issue on the more demanding games. What's everyone else look like?


I have 3 profiles in AB for my Strix OC with the KPE 520W bios. One is for power-demanding games like PUBG/Metro @2055/0.925/+1250, another one is for RT games (CP2077, Control) @2130/0.993/+1250, and the rest of the games like Warzone, Apex Legends, BFV I run @2205/1.056/+1250.
All profiles are a good balance between power consumption and temps for me, like 430-500W and an 11-15C water-to-chip delta.
For benchmarks I use an offset.


----------



## Gebeleisis

aagerius said:


> Well after tinkering a few days with my new build system, these are the results:
> Still can't wrap my head around the fact that the stock V2 TUF OC bios gave me the best results, tried to use several 2x8pin bios, KFA, Gigabyte, ...
> What do you guys think? Good enough for what it is?
> 
> [SNIP]


I have similar results with an air-cooled card. This is about the limit you can get on air.
This is my result:

I scored 13 580 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 131072 MB, 64-bit Windows 10
www.3dmark.com

And with the window opened, not much difference in score:

I scored 13 638 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 131072 MB, 64-bit Windows 10
www.3dmark.com

I am waiting for my waterblock and will try again then.


----------



## Slackaveli

Anybody need an at-cost 3090? NVidia GeForce RTX 3090 Graphic Cards


----------



## geriatricpollywog

Pointing a fan at my backplate completely fixed memory clock throttling. I can rip Port Royal back to back and average memory temperature will be the same as a cold run each time.


----------



## 86Jarrod

I just picked up a Strix 3090. Any ideas on the best waterblocks to buy and what to avoid?


----------



## 7empe

86Jarrod said:


> I just picked up a Strix 3090. Any ideas on the best waterblocks to buy and what to avoid?


I have a waterblock from EKWB and I'm really happy with it.


----------



## Zogge

86Jarrod said:


> I just picked up a Strix 3090. Any ideas on the best waterblocks to buy and what to avoid?


I have an EKWB if you would like to buy mine (Sweden). I decided to change to another type.


----------



## WayWayUp

Does anyone have info regarding Resizable BAR support from Nvidia?
I thought they would have released drivers by now.

I would imagine it would wipe out all the 3dmark scores.

My theory is nvidia will release it with the 3080 Ti driver pack so reviewers test it out and show gains over AMD.


----------



## Gebeleisis

If that is so then our 3090 will be wiped by 3080ti )


----------



## rocklobsta1109

aagerius said:


> Well after tinkering a few days with my new build system, these are the results:
> Still can't wrap my head around the fact that the stock V2 TUF OC bios gave me the best results, tried to use several 2x8pin bios, KFA, Gigabyte, ...
> What do you guys think? Good enough for what it is?
> 
> [SNIP]


I have the same bios on my card and we're within 100 points on Port Royal and Time Spy graphics score, so I think that's about the limit of performance for the 390W bios.


----------



## WayWayUp

Gebeleisis said:


> If that is so then our 3090 will be wiped by 3080ti )


Well, we would have the same driver support on our 3090s.
I just expect it with the 3080 Ti launch.


----------



## 7empe

Guys, just flashed my Strix OC with the 1000W bios. No shunt mod. It seems that I am power limited to 520W total. Am I right in thinking that to go beyond the 150W PCIe limit I need to shunt mod the card?

Btw, I just summed up max PCIe slot power, 8-pin connector #1, and 2x 8-pin connector #2. Readings from #3 seem to be invalid, just like with the KPE 520W bios.
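That summing is easy to script against a GPU-Z log export. A quick sketch (the CSV column names here are made-up placeholders, check them against your own log; #2 is counted twice because #3 reads invalid):

```python
import csv, io

# Sketch of summing per-rail power from a GPU-Z sensor log to get total
# board power. The column names below are made-up placeholders; GPU-Z's
# real headers vary by card and version. Connector #2 is counted twice
# because connector #3 reads invalid on this bios.

RAILS = ("PCIe Slot Power [W]", "8-Pin #1 Power [W]", "8-Pin #2 Power [W]")

def max_board_power(log_text):
    peak = 0.0
    for row in csv.DictReader(io.StringIO(log_text)):
        slot, p1, p2 = (float(row[c]) for c in RAILS)
        peak = max(peak, slot + p1 + 2 * p2)   # double-count #2 for dead #3
    return peak

sample = """PCIe Slot Power [W],8-Pin #1 Power [W],8-Pin #2 Power [W]
62.1,148.5,151.0
65.0,150.2,152.9
"""
print(max_board_power(sample))   # highest per-row sum across the log
```

Beats eyeballing four sensor graphs at once, anyway.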


----------



## Antsu

Are you guys able to keep 1.100V locked without using the V/F curve? It seems impossible on my card with just an offset; maybe my die is not flat or something. I've mounted the card 3 times now, first with Kryonaut and now with liquid metal. It always starts at 1.100V, but after an hour or two the voltage will start dropping. Tightening the screws to ridiculous levels will "fix" the issue for a while, but after an hour or two it's back to square one. Anyone else? It's not like my whole mount is ****ed either, because my temps are quite nice: about a 10C delta to water @ 500W load and an 11C delta @ 600W.


----------



## pat182

7empe said:


> Guys, just flashed my Strix OC with 1000W bios. No shunt mod. It seems that I am power limited up to 520W total. Am I right thinking that to go beyond 150W/PCIe limit I need to shunt mod the card?
> 
> Btw. I just sum up max PCIe slot power, 8-pin connector #1 and 2x 8-pin connector #2. Readings from #3 seems to be invalid just like for KPE 520W bios.


I can do 600 watts with the KPE 520-watt bios; maybe your load wasn't high enough.


----------



## WMDVenum

Antsu said:


> Are you guys able to keep 1.100V locked without using the V/F curve? It seems impossible on my card with just an offset, maybe my die is not flat or something. I've mounted the card 3 times now, first with Kryonaut and now with liquid metal. It always starts at 1.100V, but after an hour or two the voltage will start dropping. Tightening the screws to ridiculous levels will "fix" the issue for a while, but after an hour or two it's back to square 1. Anyone else? It's not like my whole mount is ****ed either, because my temps are quite nice: about 10C delta to water @ 500W load and 11C delta @ 600W.


Every 10C change in temperature shifts the achievable clock by about 15 MHz. So eventually your requested frequency will no longer require 1.1v to reach based on the V/F curve. I can get around this by setting a frequency limit using nvidia-smi and undervolting to like 2300 @1.1v so it never falls below my requested frequency, but that does odd things to effective clocks. I have not found a way to do this using an offset.
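As a toy model of that rule (the 15 MHz per 10C figure is this thread's rule of thumb, not an official NVIDIA number):

```python
# Toy model of Ampere's V/F temperature shift: the boost table gives up
# roughly one 15 MHz bin per 10C of warming. The 15 MHz / 10C numbers
# are a community rule of thumb, not an official NVIDIA spec.

def expected_clock(cold_clock_mhz, cold_temp_c, temp_c, step_mhz=15, step_c=10):
    """Clock the curve grants at temp_c, given cold_clock_mhz at cold_temp_c."""
    bins_dropped = (temp_c - cold_temp_c) // step_c   # whole 10C steps warmed
    return cold_clock_mhz - bins_dropped * step_mhz

# Card boosts to 2130 MHz at 30C; after warming to 55C it has dropped two bins:
print(expected_clock(2130, 30, 55))   # 2100
```

This is also why a voltage lock via offset alone drifts over a long session: the point you offset from ends up asking for less voltage once the curve shifts. The nvidia-smi limit I mentioned is, iirc, `nvidia-smi -lgc <min>,<max>` (lock GPU clocks).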


----------



## Antsu

WMDVenum said:


> Every 10C change in temperature will reduce performance by 15mhz. So eventually your requested frequency will no longer require 1.1v to reach based on the V/F curve. I can get around this by setting a frequency limit using -smi and undervolting to like 2300 @1.1v so it never falls below my requested frequency but that does odd things to effective clocks. I have not found a way to do this using an offset.


I can get around this by locking the voltage to 1.100V, but as you said yourself the internal clocks will be trash if you do that. This is why I am trying to keep the voltage at 1.100V without forcing it, and it's proving rather difficult. I wish there was some solid data on when the clocks/voltage drop.


----------



## originxt

What's the effective way to OC Ampere? Trying to find my game-stable clocks and struggling. I've tried Heaven, but unsure if it applies anymore since it's DX11.

Do you guys just set a curve? Undervolt? Pick a point and adjust around that?


----------



## des2k...

originxt said:


> What's the effective way to oc ampere? Trying to find my game stable clocks and struggling. I've tried heaven but unsure if it applies anymore since it's dx11.
> 
> Do you guys just set a curve? Under volt? Pick a point and adjust around that?


The 3dmark Time Spy Extreme GT2 loop works; it's pretty hard on Ampere.


----------



## Hibbing

Have a Tuf 3090 and I'll be damned if I can find the switch to change from quiet to performance bios. It says quiet and performance with arrows but there's no switch.


----------



## Gebeleisis

Hibbing said:


> Have a Tuf 3090 and I'll be damned if I can find the switch to change from quiet to performance bios. It says quiet and performance with arrows but there's no switch.


I had a 1080ti strix that had the markers but no switch. Then a 2080ti strix with the markers and the switch. 
So I guess you got a series without the switch


----------



## Hibbing

Gebeleisis said:


> I had a 1080ti strix that had the markers but no switch. Then a 2080ti strix with the markers and the switch.
> So I guess you got a series without the switch


Well that wouldn't be acceptable.


----------



## GAN77

Hibbing said:


> but there's no switch


Can not be


----------



## Hibbing

GAN77 said:


> Can not be


That's a switch? How do you use it?


----------



## Slackaveli

Gebeleisis said:


> If that is so then our 3090 will be wiped by 3080ti )


No we won't; our far superior bandwidth will make it more effective if anything. The 3080 Ti is a bandwidth-starved hog. I can't wait to see 3090 Resizable BAR results. My ASUS ROG Hero has the updated BIOS and is ready.


----------



## itssladenlol

Hibbing said:


> That's a switch? How do you use it?


You serious? 😂


----------



## Slackaveli

itssladenlol said:


> You serious? 😂


I'm so confused.
Clearly there's a spot for a switch, but it's blocked by that white cap thing.


----------



## Spiriva

Slackaveli said:


> im so confused.
> clearly there's a spot for a switch but it's blocked by that white cap thing.


The switch is in front of the white thing.


----------



## GAN77

Slackaveli said:


> clearly there's a spot for a switch but it's blocked by that white cap thing.


Mini toggle switch under the cover


----------



## Falkentyne

Hibbing said:


> That's a switch? How do you use it?


Do you see that small black tab on the cap?
Do you have that tab? If so, you just move it to the other position. Use something small and non-conductive if you can't do it with your fingernails.
Now if you don't have a small black tab on that cap, contact Asus. We can't help you with that.


----------



## Slackaveli

GAN77 said:


> Mini tumbler under cover
> 
> View attachment 2472840


"Awwww, i see" said the blind man.


----------



## WillP

Hadn't seen this mentioned elsewhere but those looking to get a Strix might want to hurry and get a firm price, or look elsewhere. New price $1980... ASUS officially increases graphics cards and motherboards pricing - VideoCardz.com


----------



## 86Jarrod

Zogge said:


> I have an EKWB if you would like to buy mine (Sweden). I decided to change to another type.


I appreciate the offer but I already ordered one that I can get the soonest. I should have a Phanteks in 3 days. Hopefully it performs alright.


----------



## originxt

Slackaveli said:


> No we won't- our far superior bandwidth will make it more effective if anything. 3080ti is a bandwidth starved hog. I can't wait to see 3090 resizable BAR results. My ASUS ROG Hero has the updated BIOS and is ready.


If only it was coming to X299. RIP Cascade Lake-X.


----------



## 7empe

pat182 said:


> i can do 600watts with the kpe 520watt bios, maybe your load wasnt high enough


How is it possible that you pull much over 520W? What tool did you use to find out that it is 600W? I have tried Port Royal up to 2205 MHz and max power readings from GPU-Z and MSI AB were 518-521W (as the sum of PCIe power, connector #1 power, and 2x connector #2 power). Also, temperature did not reach 50C. Power limit set to 650W. What am I doing wrong?


----------



## Dr Mad

Hello,

I have the opportunity to get Asus 3090 strix or MSI 3090 Trio.

I see that MSI is power limited to 380W.
What is the best bios for this card?

Also, is the Suprim X really better than Trio?

I plan to watercool the card.

Thanks


----------



## 7empe

7empe said:


> How it is possible that you pull much over 520W? What tool did you use to find out that it is 600W? I have tried Port Royal up to 2205 MHz and max power readings from GPU-Z and MSI Ab were 518-521 W (as sum of PCIe power, connector #1 power and 2x connector #2 power). Also temperature did not reach 50C. Power Limit set to 650W. What am I doing wrong?


I've re-flashed it and now it works... strange thing.


----------



## Slackaveli

WillP said:


> Hadn't seen this mentioned elsewhere but those looking to get a Strix might want to hurry and get a firm price, or look elsewhere. New price $1980... ASUS officially increases graphics cards and motherboards pricing - VideoCardz.com


Yeah, I got incredibly lucky bc I bought mine yesterday (same day as the press release about the price hike) for $1999 after tax, shipped.


----------



## Slackaveli

originxt said:


> If only it was coming to x299. Rip cascade lake x


You guys have been beating our heads in with your superior system bandwidth bc of your quad-channel RAM for years on that platform lol.

Let us have this lol.


----------



## Slackaveli

Dr Mad said:


> Hello,
> 
> I have the opportunity to get Asus 3090 strix or MSI 3090 Trio.
> 
> I see that MSI is power limited to 380W.
> What is the best bios for this card?
> 
> Also, is the Suprim X really better than Trio?
> 
> I plan to watercool the card.
> 
> Thanks


Kingpin>StrixOC>Suprim/AorusX>StrixNon-OC/MSITrio/FTW3>the rest>Zotac


----------



## geriatricpollywog

Slackaveli said:


> Kingpin>StrixOC>Suprim/AorusX>StrixNon-OC/MSITrio/FTW3>the rest>Zotac


I thought Strix OC and non OC were the exact same card with a different bios?

Agree Kingpin rules all, but I thought the Strix/Tuf had a better PCB than the FTW3.


----------



## truehighroller1

0451 said:


> I thought Strix OC and non OC were the exact same card with a different bios?
> 
> Agree Kingpin rules all, but I thought the Strix/Tuf had a better PCB than the FTW3.


Now my Suprim X is better than the Asus card because of their insane price hikes lol.

Had to say it, sorry.


----------



## geriatricpollywog

truehighroller1 said:


> Now My Suprim x is better then asus card because of their insane price hikes lol.
> 
> Had to say it sorry.


I haven’t seen a PCB analysis for the SuprimX, just the Tuf and FTW3 3080 versions. Asus openly scalps. EVGA is more sneaky about it, since they have apparently shipped 0 units of their $1500 black edition cards.


----------



## AvengedRobix

I can flash galax 1000w BIOS on strix?


----------



## Falkentyne

Slackaveli said:


> Kingpin>StrixOC>Suprim/AorusX>StrixNon-OC/MSITrio/FTW3>the rest>Zotac


I would put the Founders Edition far ahead of the FTW3 at this moment.
I've seen a grand total of three reports of 3090 FEs that died or were defective and were not modded: one died shortly after install, one had a cut fan ribbon cable (wrapped around the PCB in the wrong spot), one other random report of a dead card, and then a fourth report of a random desktop black-screen loop (except the card restored itself automatically). I'm sure there are more if I actually try to look.

Now, random FTW3's dying in light load games completely or dying with the red LED light of death, after browsing/youtube, tons of posts about that on the evga forums.



AvengedRobix said:


> I can flash galax 1000w BIOS on strix?


You can with a hardware programmer + 1.8v adapter. Not with nvflash.


----------



## WMDVenum

7empe said:


> How it is possible that you pull much over 520W? What tool did you use to find out that it is 600W? I have tried Port Royal up to 2205 MHz and max power readings from GPU-Z and MSI Ab were 518-521 W (as sum of PCIe power, connector #1 power and 2x connector #2 power). Also temperature did not reach 50C. Power Limit set to 650W. What am I doing wrong?


I have a shunt-modded 3090 FE and my max ever recorded power was around 570 watts. In my current config of +160 core/1125 mem, Port Royal maxes at about 545-550. CP2077 uses about 440 watts, while Warframe without a capped frame rate goes up to 495 watts. Warzone with an uncapped frame rate sits around 350 watts.


----------



## geriatricpollywog

Falkentyne said:


> I would put the Founder's Edition far ahead of the FTW3 at this moment.
> I've seen a grand total of three reports of 3090 FE's that died or were defective, that were not modded. One died shortly after install, one had a cut fan ribbon cable (wrapped around PCB in the wrong spot), and I think one other random report of a dead card and then a 4th report of one desktop random black screen loop (except the card restored itself automatically).. I'm sure there are more if I actually try to look.
> 
> Now, random FTW3's dying in light load games completely or dying with the red LED light of death, after browsing/youtube, tons of posts about that on the evga forums.
> 
> 
> 
> You can with a hardware programmer + 1.8v adapter. Not with nvflash.


It would be nice if we could have a FAQ on the front page with info about each card. It seems that the OP started threads for each new GPU for whatever reason and did not bother to maintain each thread. @Falkentyne I nominate you to start a new official 3090 thread and we can all migrate over.


----------



## Falkentyne

0451 said:


> It would be nice if we could have a FAQ on the front page with info about each card. It seems that the OP started threads for each new GPU for whatever reason and did not bother to maintain each thread. @Falkentyne I nominate you to start a new official 3090 thread and we can all migrate over.


Huh? what would I do? 
I'm the worst person in the world for maintaining a thread. Trust me on that. I'm surprised I'm still alive with how many problems I have with myself. I do appreciate the kind words though.


----------



## Sheyster

0451 said:


> It would be nice if we could have a FAQ on the front page with info about each card. It seems that the OP started threads for each new GPU for whatever reason and did not bother to maintain each thread. @Falkentyne I nominate you to start a new official 3090 thread and we can all migrate over.





Falkentyne said:


> Huh? what would I do?
> I'm the worst person in the world for maintaining a thread. Trust me on that. I'm surprised I'm still alive with how many problems I have with myself. I do appreciate the kind words though.


The OP did a better job with the 2080 Ti thread. He seems to have gone AWOL on this one. I agree he's not the best person to manage it. AFAIK he does not own a 3090, so it's an awkward situation at best.


----------



## Slackaveli

Falkentyne said:


> I would put the Founder's Edition far ahead of the FTW3 at this moment.
> I've seen a grand total of three reports of 3090 FE's that died or were defective, that were not modded. One died shortly after install, one had a cut fan ribbon cable (wrapped around PCB in the wrong spot), and I think one other random report of a dead card and then a 4th report of one desktop random black screen loop (except the card restored itself automatically).. I'm sure there are more if I actually try to look.
> 
> Now, random FTW3's dying in light load games completely or dying with the red LED light of death, after browsing/youtube, tons of posts about that on the evga forums.
> 
> 
> 
> You can with a hardware programmer + 1.8v adapter. Not with nvflash.


Yeah, didn't mean to leave the FE off. I'd put it right there above the MSI Trio for sure, somewhere right around the Aorus Master, just a smidge under the Suprim and Xtreme. Really a great card and objectively as good as any of them.


----------



## Slackaveli

0451 said:


> It would be nice if we could have a FAQ on the front page with info about each card. It seems that the OP started threads for each new GPU for whatever reason and did not bother to maintain each thread. @Falkentyne I nominate you to start a new official 3090 thread and we can all migrate over.


Somebody transfer this: RTX 3090 Aftermarket Card List - 35 Cards Compared


----------



## Slackaveli

Sheyster said:


> The OP did a better job with the 2080 Ti thread. He seems to have gone AWOL on this one. I agree he's not the best person to manage it. AFAIK he does not own a 3090, so it's an awkward situation at best.


Why on Earth would a non-3090 owner control the 3090 Owner's Thread? LOL! We need that fixed.


----------



## geriatricpollywog

Slackaveli said:


> somebody transfer this RTX 3090 Aftermarket Card List - 35 Cards Compared


That’s just a plagiarized version of TechPowerUp’s database.

NVIDIA GeForce RTX 3090 Specs
NVIDIA GA102, 1695 MHz, 10496 Cores, 328 TMUs, 112 ROPs, 24576 MB GDDR6X, 1219 MHz, 384 bit
www.techpowerup.com


----------



## jura11

0451 said:


> Pointing a fan at my backplate completely fixed memory clock throttling. I can rip Port Royal back to back and average memory temperature will be the same as a cold run each time.


I did the same thing with my RTX 3090 GamingPro with a Bykski waterblock; I have put a fan on my backplate till I get a RAM waterblock which I can use there.

Hope this helps 

Thanks, Jura


----------



## geriatricpollywog

jura11 said:


> I done same thing with my RTX 3090 GamingPro with Bykski Waterblock, I have put fan on my backplate till I get RAM waterblock which I can use it there
> 
> Hope this helps
> 
> Thanks, Jura


Here are before and after temps for adding the 80mm fan. Mem1 went down 12C but Mem2 only went down 5C. Anybody know where Mem2 is? This was a mining load, so ignore everything but memory temps.


----------



## mirkendargen

Zurv said:


> Note to anyone having coil whine. My 3090 FTW3 was SUPER bad. This was a big problem for me as the audio was coming from the card too. (HDMI to AVR) The computer is far away so some local noise from there wouldn't have been a problem - but buzzing from all my speakers (when on max load on the GPU) is a huge problem. 7.1.2 buzz isn't a good thing.
> 
> 
> what i tried...
> ran a new 20amp dedicated circuit. (i ran 3 for home theater stuff.)
> tried audioquest Niagara 5000 and 1200
> tried surgeX SA-20
> all kinds of crazy and stunningly costly cables (magic audiophile level of crazy)
> 
> none of it worked.
> I'm using the Corsair AXi 1600. I wasn't having problems with my SLI 2080 Ti Kingpins... or 3090 FE or 2080 Ti FE SLI. Just this 3090 FTW3.
> ugh.. do i need to remove the shuntmod? _sigh_
> 
> Nope! The fix was ferrite beads: putting one on the power coming into the PSU and one on each of the PCI-E connectors at the video card side.
> The whine is 99% gone.
> The odd part is the corsair cables already have ferrite beads... _shrug_ maybe they were too small, but adding more fixed the issues.
> 
> I used this package of them: Amazon.com: eBoot 20 Pieces Clip-on Ferrite Ring Core RFI EMI Noise Suppressor Cable Clip for 3mm/ 5mm/ 7mm/ 9mm/ 13mm Diameter Cable, Black: Industrial & Scientific
> 
> Hopefully this info is helpful to others. It is also cheap - if it didn't help you aren't out much.
> My next step would have been removing the shunts. I had also ordered a pure sine wave UPS... but cancelled it after the beads fixed the issue.


I just did this on my Strix. It didn't have what I would call "bad" coil whine, but it was definitely noticeable. Now I can't hear it with my ear to the case over the faint buzz of the transformer in my pure sine wave UPS next to it; it's virtually eliminated. Awesome tip.

I put them at the PSU side of the 8-pins because it's much cleaner down there, and I figured the noise was coming from the PSU, not being introduced in the cable run, so it wouldn't matter which end it was on. I put one on the power into the PSU also, but that one probably isn't actually necessary since I have a pure sine wave UPS already, as mentioned.

A pro tip for anyone else trying this: the 13mm ones in the linked set were too big for 8-pin cables (they could slide around), so I used the 9mm ones, but they were REALLY tight; I actually crushed one of them using pliers to clamp it down. I'd recommend finding 10mm ones for 8-pin cables if I were doing it over again.


----------



## Thanh Nguyen

Anyone interested in an Aqua block with active backplate for a reference board, please pm me.


----------



## Slackaveli

Good tip man.


----------



## Gebeleisis

Thanh Nguyen said:


> Anyone interested in Aqua block with active backplate for reference board please pm me.


I am in Europe and can get one here. Can you please tell me the part number? Tks


----------



## dr/owned

mirkendargen said:


> I put them at the PSU side of the 8pins because it's much cleaner down there, and I figured the noise was coming from the PSU not being introduced in the cable run so it wouldn't matter which end it was on. I put one on the power into the PSU also, but that one probably isn't actually necessary since I have a pure sine wave UPS already as mentioned.


I'd be curious whether you'd still get noise if you took the beads off and ran solely off the UPS. Your UPS likely isn't an online (double-conversion) unit, which means it basically does nothing to the output waveform unless there's a power outage. I'm guessing it's not, because you really only see those in datacenters and the starting price is $$$$.


----------



## bmgjet

The GXT-3000MTPLUS is the UPS I use, which seems to do a good job.


----------



## 7empe

WMDVenum said:


> I have a shunt modded 3090FE and my max ever recorded power was around 570 watts. In my current config of +160 core/1125 mem port royale maxes at about 545-550. CP2077 uses about 440 watts while warframe without a capped frame rate goes up to 495 watts. Warzone with an uncapped frame rate sits around 350 watts.


That is true. I ran the Superposition benchmark and it easily pulled over 700W.


----------



## reflex75

Hi to you all!
May I ask what PSU you use and if it can handle the strong transient load spikes, especially when OC the 3090?
Thanks and happy new year!


----------



## 7empe

reflex75 said:


> Hi to you all!
> May I ask what PSU you use and if it can handle the strong transient load spikes, especially when OC the 3090?
> Thanks and happy new year!


Happy New Year! I use Super Flower Leadex Platinum 2000W.


----------



## Zogge

reflex75 said:


> Hi to you all!
> May I ask what PSU you use and if it can handle the strong transient load spikes, especially when OC the 3090?
> Thanks and happy new year!


Corsair AX 1600i


----------



## reflex75

Most powerful PSUs there!
Some people with lower PSU?


----------



## Gebeleisis

AX1000i , but i'm on a 2 pin card.


----------



## ALSTER868

1000W Seasonic PX-1000 there


----------



## long2905

SF750 here lol with 9900k and 2 pin card


----------



## megahmad

Hello guys. Just got a Strix 3090 OC. Is it normal for the temp to reach 68C (max I saw was 70C) at gaming loads with everything on stock (performance BIOS at 100% power limit)? Room temp is around 18C/64F. It's on the default fan curve, which gets a bit loud at 75% or 2000 RPM. I just want to know if it has a bad factory thermal paste application and if I should re-paste it. From the reviews I saw, some got 64C max at load with default settings. My case airflow is somewhat good and I have one Noctua 200mm fan blowing on the card.


----------



## maw784

Anyone found a good aftermarket 16 gauge power supply extension for the 3090 FE? I can only find 18 gauge. Anybody have luck using the 18 gauge extensions?


----------



## ALSTER868

megahmad said:


> Hello guys. Just got a Strix 3090 OC. Is it normal for the temp to reach 68c (max I saw was 70c) at gaming loads with everything on stock


Yes, that's quite OK for the Strix. I had the same temps on stock settings on air cooling.
To get better temps it should be undervolted, though.


----------



## Pepillo

EVGA Supernova 1600 T2


----------



## dante`afk

maw784 said:


> Anyone found a good after marekt 16 guage power supply extension for the 3090 fe? Only can find 18 guage, anybody have luck using the 18guage extensions?


cablemods are 16awg, for my psu at least


----------



## des2k...

megahmad said:


> Hello guys. Just got a Strix 3090 OC. Is it normal for the temp to reach 68c (max I saw was 70c) at gaming loads with everything on stock (performance bios at 100% power limit)? Room temp is around 18C/64F. It's on default fan curve which gets a bit loud at 75% or 2000 RPM. Just want to know if it has bad factory thermal paste application and if I should re-paste it. From what I saw from reviews, I see some got 64c max at load with default settings. My case airflow is good somewhat and I have one noctua 200mm fan blowing on the card.


I would re-paste. Ampere is a big die and everywhere I look the die is not flat. If that's the case for you, use thicker paste.

My Zotac at 390W, 100% load didn't go past 60C at 60% fans, and it's a small card. But when I removed the stock cooler it was a perfect paste / torque on the screws: 100% coverage and a thin layer.

Something I can't reproduce with my EK water block, but temps are OK: 44C, 14C water delta at 400-500W.


----------



## WilliamLeGod

megahmad said:


> Hello guys. Just got a Strix 3090 OC. Is it normal for the temp to reach 68c (max I saw was 70c) at gaming loads with everything on stock (performance bios at 100% power limit)? Room temp is around 18C/64F. It's on default fan curve which gets a bit loud at 75% or 2000 RPM. Just want to know if it has bad factory thermal paste application and if I should re-paste it. From what I saw from reviews, I see some got 64c max at load with default settings. My case airflow is good somewhat and I have one noctua 200mm fan blowing on the card.


It's running freaking hot. At 390W, 18C ambient, and 75% fan, you should be hovering around 60C.


----------



## megahmad

Alright thanks guys. I thought the same, I should be mid 60s max especially with this fan speed.

From what I've seen, it seems like almost every Strix is different with temps. Bad QC and factory paste?
I mean, they invested a lot in their cooler, components and everything, so why cheap out on thermal paste..


----------



## Gebeleisis

All manufacturers cheap out on paste... why use $0.90 per application when you can use $0.09 per application?
It's a cost reduction 90% of the customers won't notice.


----------



## Sheyster

megahmad said:


> Alright thanks guys. I thought the same, I should be mid 60s max especially with this fan speed.
> 
> From what I've seen, it seems like almost every strix is different with temps. Bad QC and factory paste?
> I mean they invested a lot in their cooler, components and everything, why cheap out on thermal paste..


There seem to be mixed reviews here regarding re-pasting the Strix cards. Many don't notice any difference at all with new paste while some have reported better temps.


----------



## mirkendargen

dr/owned said:


> I'd be curious if you took the beads off and ran solely off the UPS if you get noise. Your UPS likely isn't line-interactive which means it basically does nothing unless there's a power outage. I'm guessing it's not because you really only see them in datacenters and the starting price is $$$$.


It's line-interactive (this is why I said I could hear the transformer on it): an HP T1500 Gen4 (which is a rebadged Eaton 5SC). I'd been running solely off the UPS before I put them on.

And my PSU that's apparently making a bit of noise is an EVGA 1000W Supernova P2.

HP T1500 Gen4's are $80-$150 on eBay btw, and take 3 standard UPS lead-acid batteries for a $60 retrofit, if anyone else with these giant-TDP GPUs wants an affordable UPS that can handle up to 1080W...


----------



## dr/owned

WillP said:


> Hadn't seen this mentioned elsewhere but those looking to get a Strix might want to hurry and get a firm price, or look elsewhere. New price $1980... ASUS officially increases graphics cards and motherboards pricing - VideoCardz.com


Nice, so now our resale value just got bumped up by another $200. Sucks for people that don't have one yet though.


----------



## Rena Ryugu

I still can't flash the Kingpin's 1000W BIOS to my 3090 Strix. Can anyone flash that bios to his/her Strix?


----------



## GQNerd

EVGA SuperNOVA 1200 P2
runs my 10900k and 3090 perfectly (tested Kingpin and Strix)



Rena Ryugu said:


> I still can't flash the Kingpin's 1000W BIOS to my 3090 Strix. Can anyone flash that bios to his/her Strix?


nvflash --protectoff
nvflash -f -6 NAME.rom


----------



## originxt

Miguelios said:


> EVGA SuperNOVA 1200 P2
> runs my 10900k and 3090 perfectly (tested Kingpin and Strix)
> 
> 
> 
> nvflash --protectoff
> nvflash -f -6 NAME.rom


I was having issues on my FTW3 initially, with an error message about the PCIe connector versions being different. I initially had the 520W Kingpin BIOS. Resolved by updating nvflash to its newest version.


----------



## jura11

7empe said:


> Happy New Year! I use Super Flower Leadex Platinum 2000W.


Same PSU (Super Flower 8Pack 2000W) I'm running here, powering a 3-GPU setup with no issues. Just wish the PSU were quieter

Hope this helps 

Thanks, Jura


----------



## J7SC

Speaking of PSU(s), a quick question re. 'wiring': looking at a potential upgrade to 2x 3x8-pin Amperes in a few months. The current system has 2x 2x8-pin 2080 Tis, a Threadripper on a mobo with 2x 8-pin and 1x 24-pin for the CPU, and is powered by an Antec HPC 1300W Platinum. The current build with OC'ed GPUs and an OC'ed CPU can exceed 1150W in certain apps.

I have two of those PSUs and they have a proprietary OC Link to connect them together. What would be the best way to distribute the PCIe cables from two PSUs to two GPUs with triple 8-pins each?


----------



## Nizzen

Rena Ryugu said:


> I still can't flash the Kingpin's 1000W BIOS to my 3090 Strix. Can anyone flash that bios to his/her Strix?


Working here in my strix 3090 oc. Remember --protectoff


----------



## stryker7314

Super Flower Leadex Titanium 750W, working like a champ with a FTW3 and 520w bios at 121%.


----------



## Slackaveli

stryker7314 said:


> Super Flower Leadex Titanium 750W, working like a champ


I'm glad you reported in. I have an Asus 3090 Strix OC en route and only an EVGA G2 1000W, and these guys have me feeling like a peasant with their replies lol.


----------



## jura11

Slackaveli said:


> Im glad you reported in. I have an Asus 3090 Strix OC en route and only a Evga G2 1000w and these guys have me feeling like a peasant with their replies lol.


Hi there 

1000W should be okay there. Peasant hahaha, trust me, I feel like a pleb here too, with guys scoring an easy 15k while I literally need the KPE XOC BIOS to score close to 15k

What CPU do you have?

Hope this helps 

Thanks, Jura


----------



## SoldierRBT

I have an Asus Thor 1200W and the max I've seen is 680W running RTX Quake 2. The total PC power draw is around 850W with a 10900K.


----------



## des2k...

SoldierRBT said:


> I have an Asus Thor 1200W and the max I've seen is 680W running RTX Quake 2. The total PC power draw is around 850W with a 10900K.
> 
> View attachment 2473026


I doubt that power is put to good use; I measured a 1fps difference between 340W and 640W.
Talking about power usage, Path of Exile is up there as a power virus.


----------



## Slackaveli

jura11 said:


> Hi there
> 
> 1000W should be okay there, peasant hahaha, trust me I feel here too like pleb or peasant here with guys here scoring easy 15k and me I literally need to have KPE XOC BIOS to score close to 15k
> 
> What CPU do you have?
> 
> Hope this helps
> 
> Thanks, Jura


I mean, I'm sure I'm one of the more broke ones in here. Me and Falk lol. It's all good, though. I'm still HERE.


----------



## megahmad

First OC try with strix. +100core / +200mem / 123% PL on air








I scored 14 007 in Port Royal (Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com

I know the result isn't that good for a Strix, but I'll keep trying until I get 14.5K or so.


----------



## Slackaveli

megahmad said:


> First OC try with strix. +100core / +200mem / 123% PL on air
> 
> I scored 14 007 in Port Royal (Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> I know result isn't that good for a strix but will keep trying until I get 14.5K or so.


I wouldn't be mad at it. 14k, sub-70C, over 2000MHz. Check, check, check.


----------



## megahmad

Slackaveli said:


> I wouldnt be mad at it. 14k, sub-70c, over 2000mhz. check, check, check.


Thanks. As for the temp, that's the average though; I hit 78C at the end of the benchmark 😀 The card was pulling ~482W and the fans were at 97%, which is unbearable.


----------



## SoldierRBT

des2k... said:


> I doubt that power is put to good use I measured 1fps difference between 340w and 640w.
> Taking about power usage, Path of Exile is up there as a power virus.


I see 9fps difference 340w vs 660w at 3440x1440 and 11fps difference at 2K.


----------



## Rena Ryugu

Nizzen said:


> Working here in my strix 3090 oc. Remember --protectoff


I finally got the 1000w bios working on my strix. However, gpuz shows that the 3rd 8pin isn't getting any power at all. Is that a bug or a misreading?


----------



## WilliamLeGod

SoldierRBT said:


> I see 9fps difference 340w vs 660w at 3440x1440 and 11fps difference at 2K.
> 
> View attachment 2473031
> 
> 
> View attachment 2473032


That's why we should all use %fps instead of raw fps.


----------



## Falkentyne

Rena Ryugu said:


> I finally got the 1000w bios working on my Strix. However, gpuz shows that the 3rd 8pin doesn't receive any power at all. Is that a bug or just a misreading?


8-pin #3 will not report correct power on some cards, or will be a copy of 8-pin #1, depending on which card it is flashed on.
So you won't be able to measure it accurately without a clamp meter that can measure the amps going into the PCIe 8-pin plugs (Watts = V * A, so if you know the volts and you know the amps, you can get the watts!)

The 8-pin #3 is drawing power. The power management of the board is based on the board design, not on which BIOS is used.
That's why FTW3 owners with the "440W" XOC BIOS bug can flash the XC3 (a 2x 8-pin PCIe card) vBIOS and gain more power: that BIOS ignores the third 8-pin for reporting but still draws power from the connector. This is also why the FTW3 may hit the 75W/82W PCIe slot throttling bug when using the XOC vBIOS, which limits you to 440W and stops TDP past 107% from working at all, but if you flash that exact same vBIOS on a Strix you can pull 500W, because the PCIe slot only reads 65W instead!

Anyway, be careful with that vBIOS. Make sure ALL your cooling is 100% PERFECT, as there are NO TEMP PROTECTIONS on that vBIOS, then set your TDP to 55% and run some Port Royal 3DMark benches at +90 core, +500 memory. You should score over 14,600 points.
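As a back-of-the-envelope illustration of the Watts = V * A point above, here's a small Python sketch that sums clamp-meter readings into total board power. The amp values are made-up example numbers, not measurements from any real card:

```python
# Estimate true board power from clamp-meter readings (W = V * A),
# since software can misreport 8-pin #3 after a cross-flash.

RAIL_VOLTAGE = 12.0  # nominal 12V rail


def rail_watts(amps, volts=RAIL_VOLTAGE):
    """Power on one 12V input: watts = volts * amps."""
    return volts * amps


def board_power(clamp_amps):
    """Sum clamp-meter readings across all 12V inputs."""
    return sum(rail_watts(a) for a in clamp_amps.values())


readings = {           # hypothetical amps measured with a DC clamp
    "8pin_1": 12.5,
    "8pin_2": 12.1,
    "8pin_3": 11.8,    # may read 0 W in software, but it IS loaded
    "pcie_slot": 4.0,  # slot 12V draw
}

print(f"{board_power(readings):.0f} W")  # (12.5+12.1+11.8+4.0)*12 -> "485 W"
```

So even a card that "reports" zero on one connector is trivially measurable at the wires.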


----------



## des2k...

SoldierRBT said:


> I see 9fps difference 340w vs 660w at 3440x1440 and 11fps difference at 2K.
> 
> View attachment 2473031
> 
> 
> View attachment 2473032


I'll recheck mine, I possibly had some bad CPU/mem settings.

What's your fps at that scene in 4K? I remember looking at the first door / first guy coming out, it's about 51-54fps, and that was at 1875 core or 2130, same fps.

Are you getting higher than 54fps?


----------



## Neon Lights

If there's some isopropyl alcohol between the die and the thermal paste (or, described in a different way: if you put thermal paste on the die if there is still some isopropyl alcohol on it) after you've removed the previous thermal paste, is that an issue? What happens with the alcohol?


----------



## des2k...

SoldierRBT said:


> I see 9fps difference 340w vs 660w at 3440x1440 and 11fps difference at 2K.
> 
> View attachment 2473031
> 
> 
> View attachment 2473032


Re-tested, so yeah, there's an FPS difference; I didn't see it because I was at 4K.

I don't have that super wide resolution (5 megapixels), 1600p is 4.1 megapixels.

1600p
1875MHz was 96fps, 2130 was 103fps (600W)

4K
1875MHz was 50fps, 2130 was 54fps (635W)
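For anyone who wants the percentage view suggested earlier in the thread, here's a quick Python sketch using the fps numbers from this post (taken as-is; the clock gain is just 1875 MHz vs 2130 MHz):

```python
# Express clock-vs-fps scaling as percentages (the "%fps" idea),
# using the 1600p and 4K runs posted above.

def pct_gain(old, new):
    """Percentage increase going from `old` to `new`."""
    return (new - old) / old * 100


runs = {
    "1600p": (96, 103),  # fps at 1875 MHz vs 2130 MHz
    "4k":    (50, 54),
}
clock_gain = pct_gain(1875, 2130)  # ~13.6% more core clock

for res, (low, high) in runs.items():
    print(f"{res}: +{pct_gain(low, high):.1f}% fps for +{clock_gain:.1f}% clock")
```

Which makes the point nicely: roughly 13.6% more clock buys only about 7-8% more fps in these runs.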


----------



## bmgjet

Neon Lights said:


> If there's some isopropyl alcohol between the die and the thermal paste (or, described in a different way: if you put thermal paste on the die if there is still some isopropyl alcohol on it) after you've removed the previous thermal paste, is that an issue? What happens with the alcohol?


It evaporates.


----------



## SoldierRBT

des2k... said:


> re-tested so yeah there's a FPS difference, didn't see it because I was at 4k
> 
> I don't have that super wide resolution(5mpixel), 1600p is 4.1mpixel
> 
> 1600p
> 1875mhz was 96fps, 2130 103fps (600w)
> 
> 4k
> 1875mhz was 50fps, 2130 54fps (635w)


What are your settings at 4K? I'm getting 46fps 690W


----------



## Neon Lights

bmgjet said:


> It evaporates.


It evaporates through the thermal paste? Because I think that it'd be "trapped" beneath it.


----------



## des2k...

SoldierRBT said:


> What are your settings at 4K? I'm getting 46fps 690W


defaults, temporal anti-aliasing set to upscaling and illumination set to medium


----------



## SoldierRBT

des2k... said:


> defaults, temporal anti-aliasing set to upscaling and illumination set to medium
> View attachment 2473045


Thank you. I'm getting 56fps at those settings 2190MHz 700W


----------



## geriatricpollywog

My PC normally sits in front of a sliding door to a covered patio, so I simply moved it outside. It's 13C outside which is not that cold, yet my core temp dropped significantly. I might just leave it there for a while. The settings I used were +1325 mem, +165 core, voltage slider maxed, LN2 bios (121%), and both NVVDD and LLC dipswitches on. No classified tool.










NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


----------



## Slackaveli

megahmad said:


> Thanks. As for the temp that's the average tho, I hit 78c at the end of the benchmark 😀 Card was pulling ~482w and fans were at 97% which is unbearable.


may need a repaste


----------



## f04m1n4t0r

reflex75 said:


> Most powerful PSUs there!
> Some people with lower PSU?


RM750x with Suprim X (and 9900K).


----------



## Daniel M

Finally got my EK waterblock for the PNY 3090. Best scores yet with undervolting. Never went over 43c during the benchmarks.

Time Spy 19,416 - https://www.3dmark.com/spy/17156368

Port Royal 13,799 - https://www.3dmark.com/pr/747601


----------



## geriatricpollywog

Daniel M said:


> Finally got my EK waterblock for the PNY 3090. Best scores yet with undervolting. Never went over 43c during the benchmarks.
> 
> Time Spy 19,416 - https://www.3dmark.com/spy/17156368
> 
> Port Royal 13,799 - https://www.3dmark.com/pr/747601


What is the core and memory offset? I feel like there is some more performance waiting to be unlocked.


----------



## long2905

Daniel M said:


> Finally got my EK waterblock for the PNY 3090. Best scores yet with undervolting. Never went over 43c during the benchmarks.
> 
> Time Spy 19,416 - https://www.3dmark.com/spy/17156368
> 
> Port Royal 13,799 - https://www.3dmark.com/pr/747601


you gotta push it more. you can even try the 1000w vbios if you are confident.


----------



## EDORAM

Hi guys, I got my EK waterblock for the 3090 Strix OC. I've only tested Port Royal.
Max score AIR: 14808 (1000W KPE BIOS)
Max score WATER: 15231 (stock BIOS)

I'm going to try the 1000W KPE BIOS again on my Strix; are there any known problems?
I'd like to get a 15400-15500 PR score, is that possible?

Regards
EDORAM


----------



## megahmad

Slackaveli said:


> may need a repaste


I don't know. I won't be using this OC anyway; I'm not even sure it's stable in games, I just wanted to see the potential of the Strix with the 123% PL. My current OC is 105% PL and +50 core, which is fine for temps (max 70C and average 68C at a reasonable fan speed). As for repasting, I don't have any thermal pads to replace with; can I keep using the same pads if they aren't damaged during disassembly?
Another question: is Thermal Grizzly Kryonaut good for GPUs, or should I use another paste?


----------



## 7empe

EDORAM said:


> Hi guys , I got my EK waterblock for 3090 strix OC, I'v only tested Port Royal.
> Max score AIR 14808 ( 1000w kpe bios )
> Max score WATER 15231 ( stock bios )
> 
> I'm going to try 1000w kpe bios again on my strix, are there problems?
> I'd like to get 15400-15500 PR Score, is it possible?
> 
> Regards
> EDORAM


On the same card and waterblock I did 15438 on 520W KPE bios.


----------



## 7empe

jura11 said:


> Same PSU(Superflower 8pack 2000w) I've ir running, powering 3*GPUs setup and no issues with that, just wish that PSU is quieter
> 
> Hope this helps
> 
> Thanks, Jura


I also had issues with fan loudness. It seems the fan's RPM is maxed out all the time. I simply removed the stock fan from the PSU and mounted an NZXT 140mm fan instead, with the cable routed to my motherboard for RPM control. The fan blows at its max 1000 RPM all the time (no need to run it lower). It is soooo silent right now...


----------



## reflex75

f04m1n4t0r said:


> RM750x with Suprim X (and 9900K).


Is it stable?
No shutdowns or reboots?
What about OC?
Because the Suprim X is a 3x8-pin card with average power above 400W, and spikes above 600W...


----------



## EDORAM

7empe said:


> On the same card and waterblock I did 15438 on 520W KPE bios.


Great, with a shunt mod? What do you think of shunt modding via silver pen? I'm considering it...
PS: do you prefer the 520W over the 1000W BIOS?

Regards


----------



## stryker7314

reflex75 said:


> Is it stable?
> Not shutdown or reboot?
> What about OC?
> Because Suprim X is a 3x8pin card with average power above 400w, and spikes above 600w...


Quality PSUs can handle much higher wattage than they are rated for; the RM750x can handle up to 925W: Corsair RMx Series RM750x 750W

The Super Flower Leadex Platinum 750 can do up to 1051W: SuperFlower Leadex Platinum 750W & Titanium 1000W Review

Now, I wouldn't recommend doing that sustained, but transients shouldn't be an issue.


----------



## 7empe

EDORAM said:


> Great, with shunt? What do you think shunt via silver pen? I'm thinking...
> ps: do you prefer 520w instead 1000w bios?
> 
> Regards


No shunt. Just +180 on the core (from the 1920 MHz base on that BIOS) and +1200 on memory. For benchmarks the 1000W is nice to have; I just started playing with it and it seems I'll be able to get 15600-15700 points in PR, maybe more. For daily use I prefer the 520W. You can also use the 1000W and cap it at 520-550W. Stability of both is the same; the difference is in the base clock (and protection, of course).


----------



## dante`afk

just use 1000w for daily 24/7 use

nothing will happen.

🤡


----------



## 7empe

stryker7314 said:


> Quality PSU's can handle much higher wattage than they are rated for, the RM750x can handle up to 925W: Corsair RMx Series RM750x 750W
> 
> The superflower leadex platinum 750 can do up to 1051w: SuperFlower Leadex Platinum 750W & Titanium 1000W Review
> 
> Now I wouldn't recommend doing that sustained but transient shouldn't be an issue.


That's just peak power capability. I would not rely on it for long-term loads; the PC will shut down due to lack of power, or the voltage/current/thermal protection will trip. Not to mention the PSU efficiency at that point.
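To put rough numbers on the headroom question, here's a small Python sketch. The 90% efficiency figure is an assumed ballpark for a Platinum-rated unit at high load, and the 850W / 750W figures are illustrative, not measurements of anyone's system:

```python
# Rough PSU headroom check: wall draw = DC load / efficiency,
# and sustained DC load as a fraction of the rated capacity.

def wall_draw(dc_load_w, efficiency=0.90):
    """AC power pulled from the outlet for a given DC load (assumed efficiency)."""
    return dc_load_w / efficiency


def load_fraction(dc_load_w, rated_w):
    """Sustained DC load as a fraction of the PSU's rating."""
    return dc_load_w / rated_w


dc = 850.0     # e.g. an OC'ed 3090 + hot CPU under load (illustrative)
rated = 750.0  # a 750W-class unit

print(f"wall: {wall_draw(dc):.0f} W")           # ~944 W pulled from the outlet
print(f"load: {load_fraction(dc, rated):.0%}")  # 113% of rating: sustained overload
```

Peak ratings cover the transient spikes; a sustained 113% load is exactly the case where protections start tripping.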


----------



## Biscottoman

Guys, I need some advice on which custom 3090 model gets the *BEST* overclock on a custom loop + 5 mΩ shunt mod. I'm stuck between the FTW3 Ultra and the Strix. The Strix was my main target, but right now it's really hard to find one at a "good" price (thanks to the Asus price increase). The FTW3 Ultra would be OK, especially paired with the upcoming Kryographics waterblock with watercooled backplate, but I've heard many people complaining about the quality of this card and its PCB (especially under shunt). I can't go for the Kingpin because it's sadly almost impossible to find and there's still no waterblock on the way. Any advice on these two models, or any other option?


----------



## GAN77

Biscottoman said:


> many people complaining about the quality of this card and its PCB (especially under shunt)


I do not understand what there is to praise in this card, honestly.
It's the usual reference design plus a few additional phases.


----------



## Shawnb99

Biscottoman said:


> Guys I need an advice on what 3090 custom model to get the *BEST *overclock on custom loop+5ohm shunt mod. I'm stucked between ftw3 ultra or strix, strix was my main target but right now is really hard to find one at a "good" price (thanks to asus price increase), the ftw3 ultra would be ok especially paired with the new upcoming kryographics waterblock with watercooled backplate but i've heard many people complaing on the quality of this card and his pcb (especially under shunt). I can't go for the kingpin cause it's sadly almost impossible to find and there's still no waterblock on the way. Any advice on these two models or any other option?


FTW3 and an Optimus block would be my vote. As for the KPE, you can still sign up for the auto-notify and hope to get one; Optimus is making a block for that as well. It's said to be out around the end of the month, but this being Optimus, take that date with a grain of salt.
I'm holding out for the KPE and an Optimus block myself.


----------



## SoldierRBT

KPE waterblock


----------



## Shawnb99

SoldierRBT said:


> KPE waterblock
> View attachment 2473122


Oooo, where did you find this? Any ETA on when it'll be out? I assume this is the Hydro Copper?


----------



## WayWayUp

Biscottoman said:


> Guys I need an advice on what 3090 custom model to get the *BEST *overclock on custom loop+5ohm shunt mod. I'm stucked between ftw3 ultra or strix, strix was my main target but right now is really hard to find one at a "good" price (thanks to asus price increase), the ftw3 ultra would be ok especially paired with the new upcoming kryographics waterblock with watercooled backplate but i've heard many people complaing on the quality of this card and his pcb (especially under shunt). I can't go for the kingpin cause it's sadly almost impossible to find and there's still no waterblock on the way. Any advice on these two models or any other option?


A lot of people's FTW3 cards are outperforming their Kingpin cards. It's still a really good card, it's just annoying because you need to shunt it if you want more power:
1) it doesn't play nice with other BIOSes unless it's shunted
2) you can't power mod it

Other than that the build quality is good; the VRMs are overbuilt.
Honestly though, despite what anyone tells you, it's all about the silicon lottery.

Mine came out of the box with the 450W BIOS, stock fan cooler, and a warm room temp, and could score almost 15k in Port Royal.
The other day with my waterblock I was able to score over 15.8k, and with some cold coming this weekend, with both windows open, I will try again for 16k.

It's really hit or miss though. I would take the card apart regardless, change out the gunk for some Fujipoly/nice pads, and possibly repaste as well. I'm not a big fan of the backplate, but you're going for a custom loop anyway, which would be a big upgrade for the backside.

The FTW3 Hydro Copper is the best deal though, since it comes with a block for only a $50 premium, and you can get 5% off.
Personally I'm using an Optimus block.

That said, if you don't mind a price premium you could always get a Kingpin; that way you're guaranteed at least a good card (no promise of a great card though, as the bin is lax).


----------



## itssladenlol

dante`afk said:


> just use 1000w for daily 24/7 use
> 
> nothing will happen.
> 
> 🤡


I used a 1000W BIOS for over 2 years on a 2080 Ti.
Doing the same now with a KFA2 3090 (2x8-pin, no shunt) and a Heatkiller V with a Mora 420.
Almost 16k Port Royal, and using it daily.


----------



## WayWayUp

whats your 3dmark handle?


----------



## dante`afk

itssladenlol said:


> Used 1000w bios for over 2years on 2080Ti
> Doing Same now with kfa2 3090 2x8pin non shunt and heatkiller V with a Mora 420.
> Almost 16k Port royal and using it for daily.


Yep, me too with the 2080 Ti; however, the 3090 is a different monster in terms of power usage.

Show us your 3DMark link if you're going to boast like that.


----------



## Biscottoman

WayWayUp said:


> A lot of ppl's ftw3 cards are outperforming their kingpin cards. It's still a really good card its just annoying because you need to shunt it if you want more power
> 1) doesnt play nice with other bios unless it's shunted
> 2) cant power mod it
> 
> Other than that the build quality is good. vrm's are overbuilt
> Honestly though despite what anyone tells you its all about silicon lottery
> 
> My came out the box with 450w bios stock fan cooler and warm room temp being able to score almost 15k in port royal
> The other day with my waterblock i was able to score over 15.8k and with some cold coming this weekend with both windows open i will try again for 16k
> 
> It's really hit or miss though. I would take the card apart regardless and change out the gunk with some fujipoly/nice pads and possibly repaste as well. not a big fan of the backplate but your going for a custom loop anyway which would be a big upgrade to the backside
> 
> ftw3 hydrocopper is the best deal though since it comes with block for only $50 premium and you can get 5% off
> personally using an optimus block
> 
> that said, if you dont mind a price premium you could always get a kingpin and that way your guaranteed at least a good card (no promise on a great card though as the bin is lax)


What shunt setup are you running on your card? I will surely run LM + Gelid 15W/mK pads + a Hardware Labs 560GTX for the card alone, so cooling shouldn't be a problem if the silicon is good enough.


----------



## rhyno

WayWayUp said:


> A lot of ppl's ftw3 cards are outperforming their kingpin cards. It's still a really good card its just annoying because you need to shunt it if you want more power
> 1) doesnt play nice with other bios unless it's shunted
> 2) cant power mod it
> 
> Other than that the build quality is good. vrm's are overbuilt
> Honestly though despite what anyone tells you its all about silicon lottery
> 
> My came out the box with 450w bios stock fan cooler and warm room temp being able to score almost 15k in port royal
> The other day with my waterblock i was able to score over 15.8k and with some cold coming this weekend with both windows open i will try again for 16k
> 
> It's really hit or miss though. I would take the card apart regardless and change out the gunk with some fujipoly/nice pads and possibly repaste as well. not a big fan of the backplate but your going for a custom loop anyway which would be a big upgrade to the backside
> 
> ftw3 hydrocopper is the best deal though since it comes with block for only $50 premium and you can get 5% off
> personally using an optimus block
> 
> that said, if you dont mind a price premium you could always get a kingpin and that way your guaranteed at least a good card (no promise on a great card though as the bin is lax)


Points 1 and 2 are wrong. I use the 1000-watt BIOS and it will pull 1000 watts no problem on my FTW3: no shunts, no custom hardware, just a download of a BIOS.


----------



## geriatricpollywog

SoldierRBT said:


> KPE waterblock
> View attachment 2473122


This is going to cost more than some display adapters. Too bad the pretty side faces down and the ugly backplate faces up. I hope EK makes a $150 black acetal version.


----------



## Shawnb99

0451 said:


> This is going to cost more than some display adapters. Too bad the pretty side faces down and the ugly backplate faces up. I hope EK makes a $150 black acetal version.


Vertical mount is the only true way to show it off.
We need a motherboard that makes that standard, then everyone else can copy. If only it worked that way and more copied right-angle connectors ☹


----------



## jura11

7empe said:


> I had also issues with fan loudness. It seems that RPM of the fan are maxed out all the time. I simply removed the stock fan from the PSU and mounted NZXT 140mm fan instead with cable routing to my motherboard for RPM control. Fan blowing at the max 1000 rpm speed all the time (no need to running it at lower speed). It is soooo silent right now...


The fan profile on the Super Flower 8Pack 2000W is way too aggressive. In colder temperatures you won't hear the fan, or it's not so loud, but when it's warm or hot outside the fan ramps up like crazy. At 800-900W power draw you won't hear the fan as much, but when you break 1200W power draw the fan will take off hahaha

This is what I was thinking too: replace the Super Flower fan with a Noctua or Arctic Cooling P14 fan or something like that, because the loudest thing in my loop is that bloody PSU

Hope this helps 

Thanks, Jura


----------



## WayWayUp

Biscottoman said:


> What shunts setup are you running on your card? I will surely run LM + gelid 15W/mK pads + hardware labs 560gtx for the card only so the cooling shouldn't be a problem if the silicon is good enough


7 mΩ is more than plenty. It's already coming from a high power limit BIOS.
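For anyone curious about the arithmetic behind a parallel-shunt mod, here's a small Python sketch. The 5 mΩ stock shunt and 7 mΩ added resistor are illustrative assumptions, not values for any specific board:

```python
# Parallel-shunt math: soldering a resistor across the stock current-sense
# shunt lowers the sensed resistance, so the card under-reads current and
# allows proportionally more real power draw.

def parallel(r1_mohm, r2_mohm):
    """Equivalent resistance of two resistors in parallel (milliohms)."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)


def actual_power(reported_w, stock_mohm, added_mohm):
    """Real draw when the card believes it sees `reported_w`."""
    return reported_w * stock_mohm / parallel(stock_mohm, added_mohm)


r_eq = parallel(5.0, 7.0)  # ~2.92 mOhm sensed instead of 5 mOhm
print(f"sensed: {r_eq:.2f} mOhm")
print(f"{actual_power(450, 5.0, 7.0):.0f} W")  # a "450 W" reading is really ~771 W
```

That is why a relatively large parallel resistor (like 7 mΩ) is already "more than plenty" on top of a high power limit BIOS: even it pushes real draw over 1.7x the reported figure.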


----------



## des2k...

Anybody tried playing 4K videos (NVDEC) during a stress test? For some reason, when using NVDEC during a stress test, power usage drops by 50-100W while effective clocks remain the same; I even get a 15MHz boost due to lower temps.

I can reproduce this on my side by playing YouTube 4K 60fps while benching.
2145 at 1075mV uses up to 630W in TE GT2 with effective clocks at 2119.
With YouTube 4K playing, it's down to 540W, 2-5C cooler on the GPU, and effective clocks don't drop.

Also, those huge down spikes (2000MHz effective clock) are from loading NVDEC the first time.
Here's 10 sec at the start to highlight the power difference,









[Image: Decoder-off (ibb.co)]

[Image: Decoder-on (ibb.co)]


----------



## scaramonga

itssladenlol said:


> Used 1000w bios for over 2years on 2080Ti
> Doing Same now with kfa2 3090 2x8pin non shunt and heatkiller V with a Mora 420.
> Almost 16k Port royal and using it for daily.


You almost got me there, and made me want to shove it on my KFA2, but I'm not believing you


----------



## Sheyster

Curious to know how people here are setting up their NVCP color output (under Display > Change Resolution, Use Nvidia color settings) with 10-bit G-Sync monitors. I'm using Highest/32-bit, 10bpc, RGB Full. My setup is primarily for gaming, I don't use this PC for much else.


----------



## Sheyster

jura11 said:


> This what I was thinking too, replace Superflower fan for Noctua or Arctic Cooling P14 fan or something like that because loudest thing on my loop is that bloody PSU


Definitely go with the Noctua for reliability. Since it's a hidden fan, color won't be an issue!


----------



## bmagnien

Anyone have a line on an FTW3 waterblock, or have one for sale? I believe Alphacool just pushed theirs back another 2 weeks; not feeling the Bykski


----------



## jomama22

des2k... said:


> anybody tried playing 4k videos (nvdec) during stress test ? For some reason, using nvdec during stress test, power usage drops by 50w - 100w and effective clocks remain the same, even get 15mhz boost due to lower temps
> 
> I can reproduce this on my side by playing YouTube 4k 60fps during benching.
> 2145 1075mv using up to 630w in TE GT2 with effective clocks 2119
> with youtube 4k playing, it's down to 540w, 2-5c cooler on the gpu and effective clocks don't drop
> 
> Also those huge down spikes (2000mhz effective clock), is when loading nvdec the first time.
> Here's 10sec at the start to highlight this power difference,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Image: Decoder-off (ibb.co)]
> 
> [Image: Decoder-on (ibb.co)]


I imagine your benchmark score would drop though yeah? Running nvdec forces it to share resources on the core. Nvdec doesn't require nearly the power to run at the same clocks as a benchmark like that, so you are going to get reduced power draw because of it.


----------



## Slackaveli

megahmad said:


> I don't know, I won't be using this OC anyway, I am not even sure if it's stable in games, just wanted to see the potential of the strix with this 123% PL. My current OC is 105% PL and +50 core which is fine for temps (max 70c and average 68c at reasonable fan speed). As for repasting, I don't have any thermal pads to replace, can I keep using the same pads if they don't get any damage with disassembly?
> Another question, is grizzly kryonaut good for GPUs or should I use another paste?


Yep, you can reuse them. I almost always have; just be careful not to shred them when you pull it apart.

I think this time I'm going to have some pads on hand, though, when I do my Strix. Just make sure they are the right size.

I've asked for a Strix owner to confirm that 1.5mm pads are correct but haven't gotten confirmation on that.


----------



## Slackaveli

Biscottoman said:


> Guys I need an advice on what 3090 custom model to get the *BEST *overclock on custom loop+5ohm shunt mod. I'm stucked between ftw3 ultra or strix, strix was my main target but right now is really hard to find one at a "good" price (thanks to asus price increase), the ftw3 ultra would be ok especially paired with the new upcoming kryographics waterblock with watercooled backplate but i've heard many people complaing on the quality of this card and his pcb (especially under shunt). I can't go for the kingpin cause it's sadly almost impossible to find and there's still no waterblock on the way. Any advice on these two models or any other option?


Fateka .com has FTW3 at msrp right now


----------



## Slackaveli

Sheyster said:


> Curious to know how people here are setting up their NVCP color output (under Display > Change Resolution, Use Nvidia color settings) with 10-bit G-Sync monitors. I'm using Highest/32-bit, 10bpc, RGB Full. My setup is primarily for gaming, I don't use this PC for much else.


I'm not sure why anyone would use any other settings.


----------



## Falkentyne

megahmad said:


> I don't know, I won't be using this OC anyway, I am not even sure if it's stable in games, just wanted to see the potential of the strix with this 123% PL. My current OC is 105% PL and +50 core which is fine for temps (max 70c and average 68c at reasonable fan speed). As for repasting, I don't have any thermal pads to replace, can I keep using the same pads if they don't get any damage with disassembly?
> Another question, is grizzly kryonaut good for GPUs or should I use another paste?


Pads can be reused as long as they are not too squished in, or dried out, but some stock pads lose heat transfer if you try to reuse them, due to their material, causing higher temps than before disassembly. Some people saw this after repasting a Founders Edition, and had to remove the stock pads and replace them to get temps back down.


----------



## itssladenlol

scaramonga said:


> You almost got me there, and made me want to shove it on my KFA2, but I'm not believing you


Couldn't care less if you believe me or not.
The 1000W BIOS is maxed out at 660W on 2x8pin (100% on the power target slider).
At 79% my card pulls 550W and destroys Port Royal.
Shunts are not needed anymore.
You could just flash the BIOS in a matter of 5 minutes and see I'm right.

Here's another example:


----------



## geriatricpollywog

Can’t you blow a fuse if you run the 1000w bios and don’t shunt?


----------



## Nizzen

itssladenlol said:


> Couldn't care less if you believe me or not.
> 1000w bios is maxed out at 660w on 2x8pin (100% on Power target slider).
> At 79% my card pulls 550Watts and destroys Port royal.
> Shunts are not needed anymore.
> You could just Flash the bios in a matter of 5 minutes and see im right.
> 
> Heres another example:
> 
> View attachment 2473137


Yep, I can confirm, because we are almost neighbors. He is using a water chiller and the water temp was under 10C


----------



## WMDVenum

des2k... said:


> anybody tried playing 4k videos (nvdec) during stress test ? For some reason, using nvdec during stress test, power usage drops by 50w - 100w and effective clocks remain the same, even get 15mhz boost due to lower temps
> 
> I can reproduce this on my side by playing YouTube 4k 60fps during benching.
> 2145 1075mv using up to 630w in TE GT2 with effective clocks 2119
> with youtube 4k playing, it's down to 540w, 2-5c cooler on the gpu and effective clocks don't drop
> 
> Also those huge down spikes (2000mhz effective clock), is when loading nvdec the first time.
> Here's 10sec at the start to highlight this power difference,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Image: Decoder-off (ibb.co)]
> 
> [Image: Decoder-on (ibb.co)]


I noticed similar behavior just watching YouTube in 1080p. In CP2077 with YouTube paused I had 66fps, but with it running it drops below 60 in a stationary scene. Not sure if it always happens since I haven't looked into it too much.


----------



## dr/owned

WMDVenum said:


> I had a similar performance just watching YouTube in 1080p. In CP2077 with YouTube paused I had 66fps but with it running it drops below 60 in a stationary scene. Not sure if it always happens since I haven't looked into it too much.


I have to kill my GPU accelerated remote desktop software when I'm gaming because it causes hard FPS lag spikes. Like we're talking 300 FPS in the game down to 140 and then back up. I don't know why this is (maybe Win10 gpu scheduling) but it happened on my 1080Ti too.


----------



## Tias

0451 said:


> Can’t you blow a fuse if you run the 1000w bios and don’t shunt?


I've been running the 1000W BIOS on my 3090 TUF (2x8pin) for about 14 days now, with the power limit at around 70% for about 500W.
Watercooled with an EK block. It's been working just fine.


----------



## DrunknFoo

GAN77 said:


> I do not understand what can be praised in this card. Fair.
> The usual reference plus a few additional phases.
> 
> View attachment 2473121


As an owner of an ftw3 and someone who shunted numerous 3000 series cards, the only benefit of the ftw3 is not requiring kapton tape when soldering lol


----------



## Thanh Nguyen

Anyone know how to fix colors in Windows looking like crap when HDR is turned on?


----------



## Biscottoman

DrunknFoo said:


> As an owner of an ftw3 and someone who shunted numerous 3000 series cards, the only benefit of the ftw3 is not requiring kapton tape when soldering lol


Would full shunting the FTW3 with 5 mOhm resistors be too much for the PCIe?


----------



## geriatricpollywog

Shawnb99 said:


> Vertical mount is the only true way to show it off.
> We need a MB that makes that standard, then every else can copy. If only it worked that way and more copied right angle connectors ☹


I need space to work under the hood so a vertical mount is not for me. I want to be able to remove the CPU block without removing the display adapter. Besides, nobody pays attention to the PC but me so I’m not into vanity mods. A vertical mount would nicely hide a backplate cooling solution though.


----------



## Falkentyne

Biscottoman said:


> Full shunting the ftw3 with 5mohm resistors would be too much for the pcie?


The FTW3 pulling 82W from the PCIE slot with just the 420W vbios would probably blow the PCIE fuse if you tried to do a full 5 mOhm shunt. Better to throw a 10 or even better, 15 mOhm shunt on the Slot resistor. MSI has 20 amp fuses on the slot. Gigabyte, 10 amp fuses on the slot (20 for others). FE and Asus cards don't have fuses. Not sure about other AIB's.
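A quick back-of-the-envelope for that fuse headroom, assuming a typical 5 mOhm stock slot shunt and the 82W reported draw mentioned above (illustrative only; actual shunt values vary by board):

```python
def stacked(r_stock, r_added):
    """Effective resistance (mOhm) of a resistor stacked on the stock shunt."""
    return r_stock * r_added / (r_stock + r_added)

STOCK = 5.0           # assumed stock slot shunt, mOhm
reported_slot_w = 82  # slot draw reported with the 420W vbios (from the post)

for added in (5.0, 10.0, 15.0):
    r_eff = stacked(STOCK, added)
    actual_w = reported_slot_w * STOCK / r_eff  # controller under-reads by r_eff/STOCK
    amps = actual_w / 12.0                      # PCIe slot power rides the 12V pins
    print(f"{added:4.0f} mOhm stack -> ~{actual_w:.0f}W actual, ~{amps:.1f}A")
```

A full 5 mOhm stack would push the real slot draw toward ~164W (~14A), well past a 10A fuse, whereas a 10 or 15 mOhm stack stays much closer to a 10A fuse's rating.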


----------



## DrunknFoo

Biscottoman said:


> Full shunting the ftw3 with 5mohm resistors would be too much for the pcie?


Nope, I have 2.86 mOhm total resistance on my FTW3 right now.

I should clarify, I have tested the PCIe as low as 1.875 mOhm, but after reading that slovak killer fried his card somehow, I swapped my resistors.
I think 2.5 mOhm should be fine, at least safer than going lower than 2.0.

(That said, it may depend on the mobo; I have melted PCIe slots in the past.)
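For reference, totals like 2.86 / 2.5 / 1.875 mOhm fall out of the parallel-resistor formula when stacking a second shunt on the stock one. A small sketch, assuming a 5 mOhm stock shunt:

```python
def parallel(r1, r2):
    """Two shunt resistors in parallel (same units in, same units out)."""
    return r1 * r2 / (r1 + r2)

def stack_for_target(r_stock, r_target):
    """Resistor to stack on r_stock so the pair measures r_target overall."""
    return r_stock * r_target / (r_stock - r_target)

STOCK = 5.0  # assumed stock shunt, mOhm

print(parallel(STOCK, 5.0))                     # 5-on-5 stack -> 2.5 mOhm
print(parallel(STOCK, 3.0))                     # 3-on-5 stack -> 1.875 mOhm
print(round(stack_for_target(STOCK, 2.86), 2))  # ~6.68 mOhm gets you to 2.86
```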


----------



## Biscottoman

So in case I would take the FTW3 Ultra, should I go for a 15 mOhm on the PCIe + 5 mOhm on the rest to be safe (motherboard Dark Hero)? That shouldn't change the power limit of the card much, so it shouldn't be an issue


----------



## DrunknFoo

Biscottoman said:


> So in case i would take ftw3 ultra should i go for a 15mohm on the pice+5mohm rest to be safe (motherboard dark hero)? That shouldn't change much the power limit of the card so it shouldn't be an issue


that should be more than safe


----------



## WillP

scaramonga said:


> You almost got me there, and made me want to shove it on my KFA2, but I'm not believing you


I'm running it on a KFA2, have been since about December 20th. Using an Alphacool waterblock now, but did run it on air up until the 24th. I need to add more radiator to my loop, and keep my power limit to 70% at the moment, but it works well. I have RealTemp configured to shut down Windows should the temp go past 80 Celsius as some sort of replacement for thermal protection. I'm not posting my benchmarks until I complete my loop, but roughly 14800 on Port Royal at present with an i9 9900K.


----------



## Daniel M

0451 said:


> What is the core and memory offset? I feel like there is some more performance waiting to be unlocked.


+1000 on the mem. Here's my Core curve - mostly based on other posts here about the PNY 3090: 









This weekend I'll spend some time adjusting for water.


----------



## bmagnien

2100 at .95? Is that common? I can barely hit 2100 max at full voltage


----------



## rocklobsta1109

bmagnien said:


> 2100 at .95? Is that common? I can barely hit 2100 max at full voltage


I just tested this same curve on my 390W-flashed PNY and the card never sees these frequencies in common benchmarks. It's always power limited into the 1950ish range. Perhaps in a non-GPU-bound scenario when it's voltage limited, but I haven't personally tested these settings enough to know if it's going to work.


----------



## Neon Lights

I shunt modded (with 842-AR silver paint) and water cooled my PNY RTX 3090 Epic X Uprising (stock BIOS), but I'm still power limited, getting only between ~ 1850MHz and 2000MHz at +250MHz in MSI Afterburner. If you look at the shunt mod, did I do anything wrong?


----------



## motivman


Neon Lights said:


> I shunt modded (with 842-AR silver paint) and water cooled my PNY RTX 3090 Epic X Uprising (stock BIOS), but I'm still power limited, getting only between ~ 1850MHz and 2000MHz. If you look at the shunt mod, did I do anything wrong?
> View attachment 2473186


You have to shunt all six resistors. I see one "r005" that is not shunted, and also shunt the one on the other side next to the PCIE slot.


----------



## dr/owned

Biscottoman said:


> So in case i would take ftw3 ultra should i go for a 15mohm on the pice+5mohm rest to be safe (motherboard dark hero)? That shouldn't change much the power limit of the card so it shouldn't be an issue


I found with my TUF that the PCIe slot never consumed more than about 100W (12V only), even with a 4 mOhm resistor, which should have enabled 150W. So at least for that PCB, whatever they're using the slot for doesn't scale up much in power consumption when it's "unlocked".


----------



## Neon Lights

motivman said:


> You have to shunt all six resistors. I see one "r005" that is not shunted, and also shunt the one of the other side next to the PCIE slot.


I watched the der8auer video about shunt modding the RTX 3090 (he did it on a TUF in the video, I believe) and he showed the effect of every shunt, and there was a useful difference only up to the third (he used the resistor solder stack method, I should maybe add). Where do you know the all six resistor thing from?


----------



## motivman

Neon Lights said:


> I watched the der8auer video about shunt modding RTX 3090 (he did it on a TUF in the video I believe) and he showed the effect of every shunt and there only was a useful difference only up to the third (he used the resistor solder stack method, I should maybe add). Where do you know the all six resistor thing from?


Hey man, don't listen, DO WHAT YOU WANT, you obviously know everything, SMH.


----------



## WillP

@Neon Lights Having read through vast swathes of this thread, and watched der8auer's video multiple times trying to reconcile the different information, I'm still confused by the same thing. It seems the folks on this page have come to a different conclusion, and I'm inclined to trust it given the scrutiny, but der8auer's data then doesn't make much sense. One thing I would say is that I thought the silver paint method didn't work on more recent Nvidia cards unless it was used as part of adding 8 mOhm resistors to the 5s? My understanding was that the application had to be so perfect in terms of resistance with paint alone that it was impractical to do now, with the danger that the card would go into 2D mode @ 300MHz. I have no experience of shunt modding personally though; I have resistors ready to solder, but went the route of BIOS flashing with the XOC BIOS instead as that became available pretty much as my order arrived.
I have a couple of older cards (2080 and 970) I'm thinking of practicing the technique on to see if I can improve their performance, and am keen to learn more, but for now the BIOS route is working for me.
I'm sure you'll get a more helpful response from someone else with time, as well as clarification on what resistor should be used on the PCIe power, as I've seen a variety of opinion on that too.

The first page of this thread lays things out very clearly for the FE card: RTX 3090 Founders Edition working shunt mod | Overclock.net


----------



## Neon Lights

motivman said:


> Hey man, don't listen, DO WHAT YOU WANT, you obviously know everything, SMH.


I don't know what you thought I was "trying to do", but everything I said/asked was meant in a genuine way.


----------



## bmgjet

Probably just getting sick of the same questions being asked over and over.
There are hundreds of posts with people showing you have to do all of them to maintain the balance. But really, with the 1KW BIOS being out in public, there's no reason to mess with shunt mods anymore.


----------



## Neon Lights

bmgjet said:


> Probably just getting sick of same questions being asked over and over.
> Theres hundreds of posts with people showing you have to do all of them to maintain the balance, But really with the 1KW bios being out in the public theres no reason to mess with shunt mods anymore.


Where is the 1KW BIOS? It's hard for me to find things in this thread


----------



## bmgjet

Again, something you could have done in 10 seconds by searching using the search bar at the top of the page.
3rd result searching for 1000W 3090.








EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


----------



## Neon Lights

bmgjet said:


> Again something you could of done in 10 seconds by searching using the search bar at the top of the page.
> 3rd result searching for 1000W 3090.
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> View attachment 2473192


Thank you. Well, I just remember looking for things on this forum (although that was before the new interface) and it being hard to find anything.
And could I use it with my graphics card without problems, despite it having only two 8-pins?


----------



## itssladenlol

Nizzen said:


> Yep I can confirm, because we are almost neighbors  He is using waterchiller and watertemp was under 10c





Neon Lights said:


> I watched the der8auer video about shunt modding RTX 3090 (he did it on a TUF in the video I believe) and he showed the effect of every shunt and there only was a useful difference only up to the third (he used the resistor solder stack method, I should maybe add). Where do you know the all six resistor thing from?


Shunt all 6 or get Power limited. Easy as that.


----------



## Neon Lights

itssladenlol said:


> Shunt all 6 or get Power limited. Easy as that.


Where do you know the all 6 shunts thing from?


----------



## Falkentyne

Neon Lights said:


> Where is the 1KW BIOS? It's hard to find something in this thread for me





Neon Lights said:


> Where do you know the all 6 shunts thing from?


No offense man but you NEED TO STOP ASKING STUPID QUESTIONS and start READING MORE.
We know all six (or 7, on triple 8 pin cards) must be shunted because of OTHER USERS LIKE YOU WHO ENCOUNTERED THROTTLING WHEN YOU DIDN'T SHUNT ALL OF THEM.

You're being lazy and refusing to read the dang threads and you want other people to do your lazy work for you. With this attitude, you would be user #2 to blow up your card with the 1000W Bios, like that one other guy did, who had a hot spot on the 1000W bios (which has ALL thermal protections disabled) and his card died.

Have you even BOTHERED to read the mega shunt thread?









RTX 3090 Founders Edition working shunt mod - www.overclock.net





Look what happens if you do NOT shunt the PCIE Slot.









RTX 3090 Founders Edition working shunt mod - www.overclock.net





Look at his screenshot. Notice PCIE Slot is at the throttle point of 79.9W? But look at MVDDC!! (Memory) It's reporting HIGHER than the total board power!
BOTH are throttling him and not shunting PCIE caused MVDDC to skyrocket.

Oh look what happened when he went back and shunted PCIE Slot!









RTX 3090 Founders Edition working shunt mod - www.overclock.net





There is also someone in the 3090 thread who tried to get by without shunting SRC. He said that not shunting SRC caused the 8 pins to have VERY imbalanced power draw (which caused throttling when one reached its max internal power limit). When he shunted SRC, it balanced out the 8 pins.

It was @ExDarkxH. I don't know why he was banned from the forum, but whatever: (this was a PM to me but he also posted this in the main 3090 thread)


> Hey,
> Thats correct i shunted 5 resistors + the pcie and i noticed the board wasnt fully open to me
> The first pin was pulling the most the second was pulling about ~40watts less and the third was pulling about ~80watts less
> Thats why i kept asking about the src i was very curious
> Now, after shunting that resistor i see more equal draw from all the pins
> 
> There wasnt an exact number since it varied but if i was to give an example, it would be 240-200-160
> 
> Keep in mind that im using the 500w bios and shunted with 7mo
> 
> Today i looked at it again and i was around 750w in heaven. The first and second were roughly equal. The third was a little lower but it could be that i simiply didnt need more
> 
> In timespy extreme i can definitely pull over 800watts now and the third pin is more equal
> 
> I will post gpuz/hw info in chat sometime soon with an update


So you see, if you took the time to do your research, you would know the answers to your questions.

The reason der8auer didn't shunt the PCIE Slot is because, afaik, Asus boards draw a rather low amount from the PCIE Slot, but it was still throttling him, so der8auer should have shunted PCIE. It was discussed several times in this thread that der8auer made a mistake. Just because someone is a well known tech tuber doesn't make them immune to screwing up.

As far as the 1000W Kingpin bios:
It was posted several times earlier in the thread and linked to techpowerup, if you would bother to take the time to read back.
Anyway, I'm assuming you didn't shunt the SRC and the PCIE slot shunts.

The PCIE slot shunt is probably on the backside of the card. If you didn't shunt that, the PCIE slot will reach 75-85W (depends on the board layout) and trigger a power limit.
The SRC has continuity with all other shunts (there are TWO chips that control current monitoring, but I do not know how that relates to there only being one SRC shunt), and seems to be directly linked on some boards to GPU Chip Power. I've seen that shunting GPU Chip power will make SRC report higher wattage (SRC power in GPU-Z, Input Power Plane source power in HWinfo64), which is why SRC must also be shunted, as it has its own power limit as well (usually between 125W-175W).

On some boards, the 8 pin power limits seem to default to the SRC power limits, and the SRC has multiple power limits--equal to the number of 8 pins. So a dual 8 pin board has two SRC power limits (may be a fixed value, may be adjustable, like 150W default, 175W Max), a triple 8 pin board has three SRC power limits (on a dual 8 pin board, the third SRC power limit is set to "zero").
I have no idea about other boards, but on the 3090 FE, the 8 pin power limits are the same as the SRC power limits: at 100%, the 8 pin throttles at 150W reported through the pin, and at 114% it throttles at 175W reported through the pin (in addition to "TDP", which is simply all the 8 pins + PCIE Slot Power added together in milliwatts or watts).
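A toy illustration of that bookkeeping: board TDP is just the 8 pins plus the slot summed, and whichever rail's maximum sits at its own cap is the throttle culprit. All limits and readings below are invented for the example, not from a real card:

```python
# hypothetical per-rail power limits (W) vs. observed GPU-Z maxima (W)
limits = {"8-pin #1": 175, "8-pin #2": 175, "PCIe slot": 75, "SRC": 150}
maxima = {"8-pin #1": 168, "8-pin #2": 171, "PCIe slot": 74.9, "SRC": 131}

# "TDP" is simply the 8 pins + PCIe slot power added together
tdp = maxima["8-pin #1"] + maxima["8-pin #2"] + maxima["PCIe slot"]
print(f"board power ~{tdp:.0f}W")

# a rail pinned within 2% of its cap is the throttling suspect
for rail, limit in limits.items():
    if maxima[rail] >= 0.98 * limit:
        print(f"{rail}: {maxima[rail]}W of {limit}W limit -> throttling suspect")
```

With these numbers only the PCIe slot gets flagged, which matches the "slot sitting at 74.9W" pattern in the screenshots linked above.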

Anyway, it would help us (instead of you asking the same question a million times), if you run "Port Royal" benchmark, or Heaven (but with vsync off/max settings please), and then have GPU-Z running in the background. And PLEASE CLICK THE MOUSE ON ALL THE GPU-Z WATTAGE VALUES and have them show the "maximum" values (for TDP also), then post a screenshot. This will help tell us what power rail is throttling you. (Only takes a few minutes).

Then yes, you would need to shunt the remaining shunts (SRC and PCIE Slot).

I am NOT sure what happens if you don't shunt the SRC on your exact card (if you decided to shunt PCIE Slot (on the other side of your card) but not the 005 shown in your picture, which is the SRC shunt resistor). It would probably end up showing as 150W and throttling you, or jacking up any number of the other readings. This depends purely on the card's PCB layout.

If you made it this far, you probably forgot already what I asked you to do.
Run Heaven benchmark or Port Royal or TimeSpy, have GPU-Z open with ALL THE WATTS CLICKED TO MAXIMUM for ALL RAILS (USE THE MOUSE, CLICK THEM UNTIL A GREEN 'MAX' APPEARS!), then take a screenshot of the benchmark when done. Make sure the "THROTTLE" green bar is still visible (you can always move the GPU-Z window wider ---> to extend the reading--protip!), and make sure "TDP%" is set to MAXIMUM also. This will show us which primary rail is throttling you.


----------



## dr/owned

bmgjet said:


> Probably just getting sick of same questions being asked over and over.
> Theres hundreds of posts with people showing you have to do all of them to maintain the balance, But really with the 1KW bios being out in the public theres no reason to mess with shunt mods anymore.


For 3 connector cards, yeah. But us 2 connector bois get some side effects on the 1000W BIOS. In my case I lost the power states and got a disabled DP connector.


----------



## Neon Lights

dr/owned said:


> For 3 connector cards, yeah. But us 2 connector bois get some side effects on the 1000W BIOS. Me anyways I saw no more power states and a disabled DP connector.


Is it working fine in games and all?


----------



## geriatricpollywog

The best way to tell if the 1000w bios is working is if it bricks your card.


----------



## Slackaveli

Falkentyne said:


> [long shunt-mod write-up snipped]


Well done, Falk
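For anyone following the shunt discussion, the way a stacked (parallel) shunt skews what the card *reports* can be sketched in a few lines. The resistor values below are illustrative, not from any specific board:

```python
# Stacking a second shunt in parallel lowers the sense resistance, so the
# controller (which still assumes the stock value) under-reports power.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two shunts in parallel, in milliohms."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def reported_power(actual_watts: float, stock_mohm: float, modded_mohm: float) -> float:
    """Reported power scales with the ratio of modded to stock resistance."""
    return actual_watts * (modded_mohm / stock_mohm)

stock = 5.0                      # assumed 5 mOhm current-sense shunt
modded = parallel(stock, stock)  # identical shunt soldered on top -> 2.5 mOhm
print(reported_power(500.0, stock, modded))  # 500 W actually drawn reads as 250.0
```

With numbers like these, a 150W per-rail limit only trips at ~300W of real draw, which is the whole point (and the whole risk) of the mod.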


----------



## WMDVenum

In addition to what Falk said: if you are debating turning your $1,500 card into a paperweight, it may be worth spending 4-5 hours reading this entire thread and the two shunt-mod threads so you know what you are getting into.


----------



## xarot

Nizzen said:


> Yep I can confirm, because we are almost neighbors  He is using waterchiller and watertemp was under 10c


Which block on the card?


----------



## Nizzen

xarot said:


> Which block on the card?


Alphacool


----------



## dr/owned

Neon Lights said:


> Is it working fine in games and all?


I went back to the stock "Performance" BIOS because I didn't really see any tangible improvement in PR. It seems like the TUF caps at ~650W whether you're shunted or use the 1000W BIOS. My card is also frequency-capped at about 2130 MHz; anything higher crashes PR, and 2100 is really the only stable clock for normal games. I also saw no difference messing with the VRM settings over I2C.


----------



## WMDVenum

dr/owned said:


> I went back to the stock "Performance" bios because I didn't really see any tangible improvement in PR. It seems like the TUF caps at ~650W whether you're shunted or use the 1000W BIOS. My card is also frequency capped at about 2130Mhz where any higher crashes PR, and 2100 is really the only stable clock for normal games. I also saw no difference messing with the VRM settings over I2C.


I have a similar observation with my 3090 FE. Effective clocks seemed to max out at around 2130 at best so trying to push higher requested clocks didn't give performance gains.


----------



## motivman

dr/owned said:


> I went back to the stock "Performance" bios because I didn't really see any tangible improvement in PR. It seems like the TUF caps at ~650W whether you're shunted or use the 1000W BIOS. My card is also frequency capped at about 2130Mhz where any higher crashes PR, and 2100 is really the only stable clock for normal games. I also saw no difference messing with the VRM settings over I2C.


Honestly, the 1000W BIOS is inferior to shunt modding. I score higher on my shunted card with the 520W Kingpin BIOS than with the XOC BIOS. It seems my card maxes out at 2235 MHz; anything higher crashes PR, so maybe I need to get my temperatures down even further. Still, my lonely 2x8-pin PNY reference card is now #49 on the PR hall of fame, so I am satisfied for now. Eventually I want to sell this thing and get me a Strix... too bad ASUS increased the prices, SMH.









I scored 15 566 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## stryker7314

motivman said:


> Honestly, 1000W bios is inferior to shunt modding. I score higher on my shunted card with 520W kingpin bios compared to XOC bios, plus it seems my card maxes out at 2235mhz, anything higher crashes PR, maybe I need to get my temperatures down even further, but however, my lonely 2X8pin PNY reference card is now #49 on the PR hall of fame, I am satisfied for now, but Eventually I wanna sell this thing, and get me a Strix... too bad Asus increased the prices, SMH.
> 
> I scored 15 566 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Could that be because the 1000w bios still only lets the 2x8pin card pull 2/3 or so of the 1000w while the 520w+shunt lets it pull more?

Maybe a 3x8pin card won't have the same problem.


----------



## Tias

Shunting your 3090 is pointless now that the XOC 1000W BIOS is out. There is NOTHING that makes it "safer" to start soldering on your €1,500-2,000 card.
My 2080 Ti (2x8) has been running the XOC BIOS since it launched, and now my TUF 3090 (2x8-pin) runs the XOC BIOS too.

In MSI Afterburner or similar, I put the slider at ~70%, which puts the card in the neighborhood of 500W.
You can push the slider to 79-80% for 550W too.

There are no "fuses that will blow" on a 2x8-pin 3090 that will magically not blow on a 3x8-pin card like the Strix.

You can set an MSI Afterburner profile with -500 MHz on the memory and apply it when you're done gaming/benching for the day, forcing your 3090 to downclock its memory if you are worried about running it at max speed while idling in Windows.

I use an EK waterblock on my 3090 TUF, and I also have a 120mm Vardar fan blowing directly on my backplate (backplate from EK too).


----------



## long2905

I'm this close to slapping a waterblock on my 2x8-pin 3090 and calling it a day. I looked up this mining stuff and wow, I can recover the cost of the card in about six months by itself (not counting electricity bills yet).


----------



## bwana

long2905 said:


> im this close to slapping a waterblock on my 2x8pin 3090 and calls it a day. looked up this mining stuff and wow, i can recover the cost of the card in 6 months time just by itself (not counting electric bills yet)


I don't get it. I thought ASICs were way better than GPUs for mining.


----------



## jomama22

Tias said:


> To shunt your 3090 is useless now when the XOC 1000W bios is out. There is NOTHING that would make it "safer" to start soldering on your €15-2000 card.
> My 2080ti (2x8) still runs the XOC bios and been doing that since that bios was launched. Now my TUF 3090 (2x8pin) runs the XOC bios.
> 
> In MSI Afterburner or alike, i put the slider to ~70% which will put the card in the neighborhood around 500W
> You can push the slider to 79-80% for 550W too.
> 
> There are no "fuses that will blow" on a 2x8pin 3090 that will magical not blow on a 3x8pin for exemple strix.
> 
> You can set a MSI profile to -500mhz on the memory and just apply that when your done gaming/benching for the day, and force your 3090 to downclock its memory if you are worried to run them at max speed while idleing in windows.
> 
> I use a EK waterblock on my 3090 TUF, i also have a 120mm vardar fan blowing directly on my backplate (backplate from EK too).


There aren't fuses to blow on a Strix, just FYI; the Strix will simply die outright from too much power or voltage at some point. There are fuses on MSI and EVGA products though, and people have blown them.


----------



## Alelau18

Just as a heads up: with the 1000W BIOS, any kind of compute workload (for example, just opening Adobe Premiere) puts my card into the P8 state (around 400 MHz core clock). I assume the BIOS just doesn't have a P2 state and defaults back to P8. You need to force the P2 state off via NVIDIA Profile Inspector to keep it at P0 during compute workloads.


----------



## mismatchedyes

Could I ask for some advice regarding memory overclocking on an air-cooled card? I gather it is not possible to see memory temperature unless you have an EVGA card?

When overclocking the memory, how do you know if it is too hot? Assuming you are running a non-XOC BIOS, if it gets too hot is it likely to crash or just throttle, and is there any risk of damage?

Thank you!


----------



## wirx

I am thinking of watercooling my 3090 Trio with the 500W BIOS, but the only game where I actually need this at 4K is Cyberpunk, and it takes a lot of work and a little money to watercool a 3090. So before I do, I wanted to ask what GPU clocks you see when playing Cyberpunk.
With the default fan curve and DLSS Performance at 4K I get about 2000-2050 MHz with temps of 75-80C at 1.07V. When I undervolt to 1V, clocks are 1970-2020 MHz and temps 70-75C. I'd like to know how much better the results are with watercooling after 30 minutes of play.


----------



## Tias

jomama22 said:


> There aren't fuses to blow on a strix just fyi. if the strix will just die from too much power or voltage at some point. There are fuses on msi and evga products though and people have blown them.


Sure there are. Always with the "there are people who have". Absolutely, there are people who have wings and fly, too...


----------



## dante`afk

I guess I should be able to get near 16k with an Intel CPU; sadly, Rocket Lake won't be much of an upgrade, I think.

no KPE bios, just my 3090 FE

I scored 15 614 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





also snagged this:
Still super eager to get a 3x8-pin card for the KPE BIOS, but I think I'll wait for a Kingpin card with a dedicated waterblock; I think that is the best option because of the voltage control that's possible.


----------



## jomama22

Tias said:


> Sure there is, allways this "there are ppl who have". Absolutly there are ppl who have wings and fly too......


?
You can talk to @bmgjet about fuses blowing; I believe he popped one early on.
Steve from Gamers Nexus did on his FTW3 as well.

You can think what you want, but it has happened to a few people, as I said.


----------



## ttnuagmada

Question: I see some of you recommending looping TSE GT2 as a stability test, but I easily hit the power limiter with the 520W BIOS even when I'm targeting [email protected] and drop clocks. Will that really be a good test for stability?


----------



## VinnieM

Hmm, I've just had my first spontaneous shutdown using the 1000w bios on dual 8-pin card. I was running AC:Odyssey benchmark in 8k with power limit adjusted to about 535w. It was probably the overcurrent protection from my PSU, a Corsair RM850x. I'm still using only a single PCIe power cable with dual connector though. Would it be better to use 2 separate PCIe power cables from the PSU to the graphics card?


----------



## Spiriva

VinnieM said:


> Hmm, I've just had my first spontaneous shutdown using the 1000w bios on dual 8-pin card. I was running AC:Odyssey benchmark in 8k with power limit adjusted to about 535w. It was probably the overcurrent protection from my PSU, a Corsair RM850x. I'm still using only a single PCIe power cable with dual connector though. Would it be better to use 2 separate PCIe power cables from the PSU to the graphics card?


Yes, it's recommended to use two separate cables with the 3090 on any BIOS.


----------



## long2905

I didn't know any better and used a shared cable for a while, but that was on a 2080. I've known better since switching to a 2080 Ti.


----------



## itssladenlol

VinnieM said:


> Hmm, I've just had my first spontaneous shutdown using the 1000w bios on dual 8-pin card. I was running AC:Odyssey benchmark in 8k with power limit adjusted to about 535w. It was probably the overcurrent protection from my PSU, a Corsair RM850x. I'm still using only a single PCIe power cable with dual connector though. Would it be better to use 2 separate PCIe power cables from the PSU to the graphics card?


Are you ****ing serious?!?!?
And that, people, is why you hear horror stories about the 1000W BIOS (user error, nothing else) 😂
Running 470W (535 minus PCIe slot power) through a single cable on an 850W power supply is complete suicide, and you can be happy you haven't had a fire hazard yet.
Jesus ****ing Christ.
Running the 1000W BIOS as a daily with air cooling... you can't be serious...
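A rough sketch of the arithmetic behind that warning; the slot figure and connector rating are nominal spec values, the rest comes from the post above:

```python
# How much current one daisy-chained cable carries at a 535 W limit (12 V rail).
PCIE_SLOT_W = 66.0      # ~12 V power the PCIe slot itself can supply (5.5 A spec)
card_draw_w = 535.0     # power limit from the post above

through_cables_w = card_draw_w - PCIE_SLOT_W  # what the 8-pin cabling must carry
amps_one_cable = through_cables_w / 12.0      # single daisy-chained cable
amps_per_cable = amps_one_cable / 2.0         # two separate cables

print(f"one cable: {amps_one_cable:.1f} A, two cables: {amps_per_cable:.1f} A each")
# One 8-pin connector is specced for 150 W (~12.5 A), so ~39 A down a single
# cable run is far outside spec -- hence the PSU's protection tripping.
```

Splitting the load across two cables halves the current in each run, which is why the two-cable advice keeps coming up.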


----------



## mirkendargen

VinnieM said:


> Hmm, I've just had my first spontaneous shutdown using the 1000w bios on dual 8-pin card. I was running AC:Odyssey benchmark in 8k with power limit adjusted to about 535w. It was probably the overcurrent protection from my PSU, a Corsair RM850x. I'm still using only a single PCIe power cable with dual connector though. Would it be better to use 2 separate PCIe power cables from the PSU to the graphics card?


I feel like this has to be a troll...


----------



## itssladenlol

mirkendargen said:


> I feel like this has to be a troll...


Checked the post history; sadly, it's not...


----------



## mirkendargen

bwana said:


> I don't get it. I thought ASICs were way better than GPUs for mining.


No ASICs for Dagger-Hashimoto (ETH). A 3090 can currently bring in about $10/day. If Ampere supply picks up that will probably drop, if ETH price goes up and Ampere supply stays limited it might go up.
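The six-month payback mentioned above roughly checks out on the back of an envelope; every input here is an assumption, not a measurement:

```python
# Back-of-envelope mining payback; all numbers are illustrative assumptions.
card_cost = 1500.0       # USD, rough 3090 MSRP
revenue_per_day = 10.0   # USD/day, the ETH figure quoted above
power_kw = 0.30          # assumed ~300 W draw while mining
usd_per_kwh = 0.12       # assumed electricity price

net_per_day = revenue_per_day - power_kw * 24 * usd_per_kwh
print(f"payback in about {card_cost / net_per_day:.0f} days")
```

At these inputs it comes out to roughly 164 days, between five and six months, consistent with the estimate above; electricity barely dents the figure at $10/day revenue.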


----------



## VinnieM

Spiriva said:


> Yes its recommended that you use two separate cables with any bios and the 3090.


Thanks, I've added a second cable and no more shutdowns.



itssladenlol said:


> Are you ****ing serious?!?!?
> And that people, is why you hear Horror stories About the 1000w bios (User errors, nothing else) 😂
> Running 470w (535 minus pcie Power) through a single cable and on a 850w Power supply is completely suicide and You can be happy that You had no fire hazard yet.
> Jesus ****ing Christ.
> Running 1000w bios as daily with air cooling... You cant be serious...
> ...


Yeah, I thought as long as it's working, no problem, right...
Most of the time I'm running an undervolt of around 1920 MHz at 914 mV, but depending on the game it still manages to pull 400-450W.



mirkendargen said:


> I feel like this has to be a troll...


No troll. I never had any problems with a single cable, but maybe it was naive of me to think that pulling 400+ watts through a single cable was no issue. Anyway, it's fixed now without any damage done.


----------



## motivman

stryker7314 said:


> Could that be because the 1000w bios still only lets the 2x8pin card pull 2/3 or so of the 1000w while the 520w+shunt lets it pull more?
> 
> Maybe a 3x8pin card won't have the same problem.


I pulled over 750W with the XOC BIOS and about 620W max with the 520W BIOS, but my card performs better on the 520W BIOS. I still hit the power limit with the Kingpin BIOS and never hit it with the XOC BIOS, yet I still get better performance with the Kingpin 520W BIOS.


----------



## stryker7314

motivman said:


> I pulled over 750w with the xoc bios, about 620w max with the 520w bios, but my card performs better with the 520w bios.... I still hit power limit with kingpin bios, but no power limit with xoc bios, but still get better performance with the kingpin 520w bios.


I know the performance equation on these bad boys is complicated, but could it be the cooling throttling performance when temps go up at that 750W? Not sure what cooling you're running.


----------



## BiLLbOuS

Does that 1000w xoc still get rid of all protection on the Strix?


----------



## Falkentyne

BiLLbOuS said:


> Does that 1000w xoc still get rid of all protection on the Strix?


There's no protection at all, except the hardwired chip thermal-trip protection, as I'm assuming a BIOS can't disable that (it can't be disabled on Intel CPUs).
It's the VRM, GDDR6X, and hotspot that you have to worry about, because the thermal-throttling flag is disabled.

I've already seen that the 1000W vBIOS "might" require MSVDD adjustment (and possibly NVVDD along with it) to get proper performance, and you can only do that on a Kingpin or FTW3 board; that's why some people found better stock performance with the 520W BIOS. I mean, who knows what else was changed in it? It was designed for world records on LN2. Maybe even the memory timings are different; we just don't know...


----------



## BiLLbOuS

Yeah, flash-frying the memory is no good. I have the 520 on mine, but GPU-Z does not recognize power on pin 3; is that a glitch?


----------



## motivman

stryker7314 said:


> I know the equation to performance on these bad boys is complicated but could it be the cooling throttling the performance when temps go up at said 750w? Not sure what cooling you're running.


Watercooled with a MO-RA3 360 sitting in the window, drawing 12-15C air from outside. The GPU maxes at 31C with both BIOSes.


----------



## BobertCole

Hi everyone,
Long time reader of this thread, first time poster .

I'm relatively new to GPU OC and hoping to get some advice:
Firstly, what is the opinion on the quality of my chip? I got a TimeSpy score of 18,843 stock on air (Gamerock OC), then ramped the power slider up to 111% to get the full 470W my BIOS allows and also set the fans to max (case and GPU) and got 19,307. I had left all my background processes running etc and not made any adjustments to clock or mem frequency.

Secondly, GPU-Z says I was power limited on the second run with all of the cooling ramped up (peaked at 70C rather than 81C before), but during the first run it mostly said I was limited by Vrel.
From what I understand, Vrel means the GPU couldn't get a higher stable voltage, but it obviously did on the second run when the card was cooler, as the frequency was higher. Presumably this is due to heat interfering with voltage stability, so is it telling me in a roundabout way that the card was temperature throttling? I'm trying to work out if there is headroom in the card to make it worth putting under water (if anyone ever releases a Gamerock block).

Your advice would be appreciated, thank you in advance.


----------



## Thanh Nguyen

Anyone has an extra ftw3 block and want to sell?


----------



## inedenimadam

BobertCole said:


> Hi everyone,
> Long time reader of this thread, first time poster
> 
> Your advice would be appreciated, thank you in advance.


19,000 on air is a good result. I mean, it's not breaking any records, but it's better than my Trinity was at stock. The card is absolutely worth putting under water. The way boost works on these cards, you are rewarded with higher clocks for lower temperatures. You also get the benefit of not listening to a jet engine... the card is likely to last longer... and the card becomes more efficient, meaning higher clocks at the same voltage and power draw. We are talking expensive tech here, and you are obviously interested in pushing it... so block that sucker and push it!

Edit to add: Welcome to the fold!
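That boost behavior can be illustrated with a toy model; the bin size, temperature step, and base clock below are assumptions for illustration only (NVIDIA doesn't publish the exact table):

```python
# GPU Boost drops the clock in small fixed bins as temperature rises; this toy
# model uses assumed numbers purely to show the cooler-card advantage.
BIN_MHZ = 15          # assumed size of one boost bin
DEG_PER_BIN = 5       # assumed degrees C per bin lost
BASE_BOOST = 2100     # assumed boost clock at or below 35 C

def boost_clock(temp_c: int) -> int:
    bins_lost = max(0, (temp_c - 35) // DEG_PER_BIN)
    return BASE_BOOST - bins_lost * BIN_MHZ

print(boost_clock(40), boost_clock(75))  # -> 2085 1980
```

In this toy model, going from an air-cooled 75C to a waterblocked 40C is worth roughly a hundred MHz without touching the offset slider, which is the "free clocks from low temps" effect described above.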


----------



## dr/owned

inedenimadam said:


> 19000 on air is a good result. I mean, its not breaking any records, but its better than my trinity was at stock. The card is absolutely worth putting under water. The way boost works on these cards, you are rewarded with higher clocks for lower temperatures. You also get the benefit of not listening to a jet engine...the card is likely to last longer...and the card becomes more efficient...meaning higher clocks at the same voltage and power draw. We are talking expensive tech here, and you are obviously interested in pushing it...so block that sucker and push it!
> 
> Edit to add: Welcome to the fold!


It's probably been said a few times, but basically power and thermal limits are extra variables you want to eliminate. If you have no power limit and a waterblock, you don't have to worry about the card pulling down voltage (and thereby clock speed), which creates a bunch of instability when it happens a million times under load as the card fights to boost as much as it can. You can also figure out what your true max overclock is.

@BobertCole Port Royal is really what most people are using to gauge GPU performance because it's not very CPU sensitive. Anyway, my shunted Time Spy graphics score was 21,555; my scores are fairly low across the board because I'm not on a fresh Win10 install. I upgraded my mobo, CPU, and GPU all at once and have a bunch of background processes and whatnot.


----------



## Herald

BobertCole said:


> Hi everyone,
> Long time reader of this thread, first time poster .
> 
> I'm relatively new to GPU OC and hoping to get some advice:
> Firstly, what is the opinion on the quality of my chip? I got a TimeSpy score of 18,843 stock on air (Gamerock OC), then ramped the power slider up to 111% to get the full 470W my BIOS allows and also set the fans to max (case and GPU) and got 19,307. I had left all my background processes running etc and not made any adjustments to clock or mem frequency.
> 
> Secondly, GPUZ says I was power limited on the 2nd run with all of the cooling ramped up (peaked at 70C rather than 81C prior), but during the first run it mostly said I was limited by Vrel.
> From what I understand Vrel means the GPU couldn't get a higher stable voltage, but obviously it did on the 2nd run when the card was cooler as the frequency was higher. Presumably this is due to the heat interfering with the voltage stability, so is it telling me that the card was temp throttling in a round about way? - I'm trying to work out is there is headroom in the card to make it worth putting it under water (if anyone ever comes out with a Gamerock block).
> 
> Your advice would be appreciated, thank you in advance.


Also got the gamerock OC. Im getting ~23k on timespy with the stock bios








I scored 21 487 in Time Spy
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com





The kingpin bios is another story 

My temps are insanely low for an air-cooled card: they max at 60C in SUPO 4K with the 470W BIOS and 65C with the Kingpin 520W BIOS (around 22-24C ambient).








UNIGINE Superposition benchmark score (detailed score page)
benchmark.unigine.com





Rank 9 on SUPO. I can do better; maybe I'll try on the weekend. Pretty confident I can break past 20k.


----------



## scaramonga

Is it possible to put the thermal protections back in place on the 1000W BIOS?


----------



## geriatricpollywog

dante`afk said:


> I guess I should be able to get near 16k with an Intel CPU, sadly RocketLake won't be any upgrade I think.
> 
> no KPE bios, just my 3090 FE
> 
> 
> I scored 15 614 in Port Royal
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> also snagged this:
> 
> Still super eager to get a 3pin card for the KPE bios, but I think I'll wait to get a kingpin card with dedicated waterblock, think this is the best due to the voltage control that is possible.





motivman said:


> I pulled over 750w with the xoc bios, about 620w max with the 520w bios, but my card performs better with the 520w bios.... I still hit power limit with kingpin bios, but no power limit with xoc bios, but still get better performance with the kingpin 520w bios.


I have no idea why people keep making excuses for why the 1000W BIOS is safe to use when there are two examples right here of it not helping PR scores on air or water. What is the point?

15,600 on 520w bios pulling 500w on water = smart
15,750 on 1000w bios pulling 700w = dumb, pointless and dangerous for the card 
17,500 on 1000w bios on LN2 = smart


----------



## motivman

0451 said:


> I have no idea why people keep making excuses why the 1000w bios is safe to use when here are 2 examples of why it doesn’t help PR scores on air and water. What is the point?
> 
> 15,600 on 520w bios pulling 500w on water = smart
> 15,750 on 1000w bios pulling 700w = dumb, pointless and dangerous for the card
> 17,500 on 1000w bios on LN2 = smart


I was scoring less on my card with the 1000w bios, did everything I could to get it to beat my 520W score, but couldn't.


----------



## geriatricpollywog

motivman said:


> I was scoring less on my card with the 1000w bios, did everything I could to get it to beat my 520W score, but couldn't.


Same. I had the BIOS flashed with the switch in the "Normal" position, but Kingpin said to flash it in the "LN2" position for it to work properly, so I'll try that and see if it makes a difference.


----------



## Falkentyne

motivman said:


> I was scoring less on my card with the 1000w bios, did everything I could to get it to beat my 520W score, but couldn't.





0451 said:


> Same. I had the bios flashed to the “normal “ switch position, but Kingpin said to flash it to the “LN2” position for it to work properly so I’ll try that and see if it makes a difference.


Could be related to memory timings or other things designed to let you overclock higher. It's not unheard of for BIOSes built for world-record clocking to be lax on some system timings, to allow pushing very high clocks when you increase the NVVDD and MVDDC voltages.

On the LN2 bios and on the 1000W bios (assuming KINGPIN owners of course), were the max stable clocks identical for core/memory?


----------



## geriatricpollywog

Falkentyne said:


> Could be related to memory timings or other things that are designed to let you overclock higher. It's not unheard of that bioses designed for world record clocking might be lax on some system timings, to allow for pushing very high clocks when you increase NVVDD and MVDDC voltages.
> 
> On the LN2 bios and on the 1000W bios (assuming KINGPIN owners of course), were the max stable clocks identical for core/memory?


I’ll let you know after I test again knowing what I know now. I am waiting for low temperature and humidity.

I think the low score has to do with the 1000w bios keeping the memory at 100% and keeping the card at 100w idle. That causes the entire card to heat soak and possibly trigger some kind of thermal throttling. Remember the bios assumes your card is frosted over.


----------



## bmgjet

On my card:
15 mOhm shunts for 520W = 15,008 in PR.
1000W BIOS set to 520W = 15,007 in PR.

That's within the margin of error.
The real difference, though: the highest peak PCIe slot draw I've seen on the 1000W BIOS is 88W, while with the shunts modded I hit 104W for the same total power draw.
I have a script running in the background that switches between Afterburner profiles based on GPU load so everything downclocks, as well as a thermal safety that shuts the computer off if the GPU reaches 65C. (The max I see is 50C.)
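A thermal watchdog like the one described can be sketched in a few lines of Python. The `nvidia-smi` query flags are real; the threshold, poll interval, and the Windows `shutdown` invocation are assumptions, not the actual script:

```python
import subprocess
import time

LIMIT_C = 65  # assumed threshold, matching the 65 C mentioned above
POLL_S = 2    # assumed poll interval

def parse_temp(nvidia_smi_output: str) -> int:
    """Parse the output of:
    nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits"""
    return int(nvidia_smi_output.strip().splitlines()[0])

def watchdog_step(temp_c: int, shutdown) -> bool:
    """Trigger the shutdown callback and return True once the limit is hit."""
    if temp_c >= LIMIT_C:
        shutdown()
        return True
    return False

def run_watchdog():  # not invoked here; call this on the machine being protected
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader,nounits"], text=True)
        if watchdog_step(parse_temp(out),
                         lambda: subprocess.run(["shutdown", "/s", "/t", "0"])):
            break
        time.sleep(POLL_S)
```

Since the XOC BIOS disables the thermal-throttling flag, an out-of-band kill switch like this is about the only safety net left.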


----------



## geriatricpollywog

bmgjet said:


> On my card,
> 15 mohm shunts for 520W = 15008 in PR.
> 1000W bios set to 520W = 15007 in PR.
> 
> Its with in the magin or error.
> The real difference is tho.
> 1000W bios peak pci-e slot draw iv seen is 88W, With shunts modded I hit 104W for the same power draw.
> I have a script running in the back ground that changes between afterburner profiles based on GPU load so everything downclocks. As well as a thermal safty that shunts the computer off if GPU gets to 65C. (Max I see is 50C)


What’s your board power draw at idle with the 1000w bios?


----------



## bmgjet

40-45W.
Memory at 555 MHz (0.695V), core at 210 MHz (0.738V).
That's the lowest idle profile I could make, but it makes YouTube videos stutter when fullscreened.

The 2D profile I have is memory at 1000 MHz (0.869V) and core at 1695 MHz (0.831V), which uses 80-90W.

Then the full-load profile is memory at 2500 MHz (1.075V) and core at 2125 MHz (1.056V), which uses 480-520W.


----------



## DocratesCR

Tested the EVGA 3090 Kingpin 520W BIOS on my EVGA 3090 FTW3 Ultra.
Install OK, reboot...

EVGA Precision X1 asked me to update the BIOS... I did, and the fans went to full speed and stayed there even after a reboot. EVGA kept looping about "new bios firmware".

I can't use Precision because of the loop, so I just went back to the previous BIOS.

Any tips? Anyone with a success story?


----------



## bmgjet

DocratesCR said:


> Tested the EVGA 3090 Kingpin 520w Bios on my EVGA 3090 FTW3 Ultra.
> Install ok, reboot...
> 
> EVGA Precision X1 asked me to update bios... I did it and fans went full speed, and stay full speed even on reboot. EVGA was looping about "new bios firmware".
> 
> Can't use Precision because of loop, and I just turned back to previous Bios.
> 
> Any tip? anyone with a success history?



If you're really set on using PX1, I can patch a version for you that doesn't run the update.
But really, outside of using it just for fans and LEDs, you're better off using Afterburner.


----------



## dante`afk

Zurv said:


> Note to anyone having coil whine. My 3090 FTW3 was SUPER bad. This was a big problem for me as the audio was coming from the card too. (HDMI to AVR) The computer is far away so some local noise from there wouldn't have been a problem - but buzzing from all my speakers (when on max load on the GPU) is a huge problem. 7.1.2 buzz isn't a good thing.
> 
> 
> what i tried...
> ran a new 20amp dedicated circuit. (i ran 3 for home theater stuff.)
> tried audioquest Niagara 5000 and 1200
> tired surgeX SA-20
> all kinda crazy and stunning costly cables (magic audiophile level of crazy.)
> 
> none of it worked.
> I'm using the Corsair axi 1600. I wasn't having problems with my SLI 2080 ti kingpins.. or 3090FE or 2080 it FE SLI. Just this 3090FTW3.
> ugh.. do i need to remove the shuntmod? _sigh_
> 
> Nope! The fix was ferrite beads. Putting one for the power coming in to the PS and one on each of the PCI-E connectors at the video card side.
> whine is 99% gone.
> The odd part is the corsair cables already have ferrite beads... _shrug_ maybe they were to small - but adding more fixed the issues.
> 
> I used this package of them: Amazon.com: eBoot 20 Pieces Clip-on Ferrite Ring Core RFI EMI Noise Suppressor Cable Clip for 3mm/ 5mm/ 7mm/ 9mm/ 13mm Diameter Cable, Black: Industrial & Scientific
> 
> Hopefully this info is helpful to others. It is also cheap - if it didn't help you aren't out much.
> My next step would have been removing then shunts. I had also ordered a pure-sinewave UPS.. but cancelled it after the beads fixed the issue.


I gave this a shot and it's actually 200% worse, lol. Start CS:GO and stay in the menu at 1300+ fps and even the PSU makes noise with these.

without ferrite rings:

with rings:

Anyway, I don't mind, since my coil whine is barely noticeable in normal usage.


----------



## mirkendargen

dante`afk said:


> I gave this a shot and it's actually 200% worse lol. start cs:go and stay in menu with 1300+ fps. Even the PSU makes noise with these.
> 
> without ferrite rings
> 
> with rings:
> 
> Anyway I don't mind since my coilwhine is barely noticeable in normal usage


The bottom video sounded like a fan was hitting a cable, lol. I didn't test anything at 1300fps, I have a 117fps cap globally anyway.


----------



## WMDVenum

dante`afk said:


> I gave this a shot and it's actually 200% worse lol. start cs:go and stay in menu with 1300+ fps. Even the PSU makes noise with these.
> 
> without ferrite rings
> 
> with rings:
> 
> Anyway I don't mind since my coilwhine is barely noticeable in normal usage


I tried the ferrite rings and they made the noise much worse for me as well. I don't have much coil whine to begin with.


----------



## BiLLbOuS

Hey guys, I have the 520 BIOS on my 3090 Strix, and pin 3 is not showing voltage while the Vrel flag is pinned. Can I adjust voltage, or should I be worried about pin 3?


----------



## ttnuagmada

BiLLbOuS said:


> Hey guys have the 520 bios on my 3090 Strix, and pin 3 not showing voltage and vrel pinned, can i adjust voltage or be worried about the pin 3


3pin not showing anything is normal behavior


----------



## SolarBeaver

Just got a new Strix, which somehow seems to be from the first batch (it had the oldest BIOS versions by default), and the silicon seems very good: [email protected] is stable in CP2077 and the TS stress test. The weird thing is that it constantly hits the PWR limit in GPU-Z and generates more heat than the old Strix, which could only do [email protected] but didn't hit any perfcap at 950 and ran much cooler. What's the deal with this; could it be software related? I've used DDU and done a clean install, and now I'm wondering if I should just format C: and install a fresh Windows 10 instead.
p.s. it's on the latest Strix BIOS.


----------



## BobertCole

Herald said:


> Also got the gamerock OC. Im getting ~23k on timespy with the stock bios
> 
> I scored 21 487 in Time Spy: Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> My temps are insane for an air cooled card, maxes at 60 on SUPO 4k with the 470w bios and 65 with the kingpin 520w bios (around 22-24 ambient)


Wow! Yeah, those temps are crazy low on air compared to mine. Have you undervolted, or is it just an amazing chip, or maybe a great cooler mount?


----------



## ALSTER868

SolarBeaver said:


> the silicon seem to be very good with [email protected] CP2077


My Strix needs only 0.918v for 2070 in CP2077, and I don't consider that to be very good... somewhere a little bit above average.


----------



## SolarBeaver

ALSTER868 said:


> My Strix needs only 0.918v for 2070 for CP2077 and I don't consider it to be very good.. somewhere a little bit above average.


I only got it yesterday, so not much time testing. I guess it could go higher, but I was already glad it was much better than my previous Strix. Anyway, judging from what I've read here, [email protected] is super great, not just "above average". Could be wrong though.


----------



## WMDVenum

If you are undervolting, you should monitor effective clocks using HWiNFO. The clock you set in MSI Afterburner isn't the whole story. My 3090 FE can do 2205 @ 1.1V, but the effective clock is only 2050. If I set the offset to +160, my regular clock is only 2145 and effective is 2115, but this nets me a higher Port Royal score than the 2205. Neither hits the power limit since I am shunted.

Something about undervolting really messes with the effective clock, and it seems performance is more dependent on effective clock than set clock.
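For anyone wondering what "effective clock" actually measures: as I understand it, HWiNFO derives it from cycles the GPU actually completed over the polling window, so it is roughly a time average that includes every brief dip and clock-gated moment, while the "set" clock is just the instantaneous target. A toy Python sketch of that averaging (sample values are hypothetical, not measurements):

```python
# Illustrative sketch: the "effective" clock is roughly a time average of the
# instantaneous clock, which dips whenever the GPU sheds a bin or clock-gates,
# so it sits below the set clock even when the set clock looks stable.

def effective_clock(samples_mhz):
    """Average clock over a sampling window, in MHz."""
    return sum(samples_mhz) / len(samples_mhz)

# A card "set" to 2205 MHz that spends part of each frame a few bins lower:
samples = [2205, 2190, 2205, 1950, 2205, 2100, 2205, 2160]
set_clock = 2205
print(f"set {set_clock} MHz, effective {effective_clock(samples):.0f} MHz")
```

This is why a lower set clock with fewer dips can outscore a higher set clock that keeps dropping bins.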


----------



## BiLLbOuS

WMDVenum said:


> If you are undervolting, you should monitor effective clocks using HWiNFO. The clock you set in MSI Afterburner isn't the whole story. My 3090 FE can do 2205 @ 1.1V, but the effective clock is only 2050. If I set the offset to +160, my regular clock is only 2145 and effective is 2115, but this nets me a higher Port Royal score than the 2205. Neither hits the power limit since I am shunted.
> 
> Something about undervolting really messes with the effective clock, and it seems performance is more dependent on effective clock than set clock.


What shunts are you using? Are you stacking, or desoldering and replacing? Strix?


----------



## ALSTER868

SolarBeaver said:


> I only got it yesterday, so not much time testing. I guess it could go higher, but I was already glad it was much better than my previous Strix. Anyway, judging from what I've read here, [email protected] is super great, not just "above average". Could be wrong though.


Mmm, your Strix is on air, right? Then the comparison between yours and mine is not quite correct. On water, the GPU requires less voltage and pulls less power.


----------



## Herald

BobertCole said:


> wow! yeah those temps are crazy low on air compared to mine. Have you undervolted or just an amazing chip or happens to be great cooler mount?


No, not undervolted.


----------



## Gebeleisis

Herald said:


> No, not undervolted.


Did you put it outside the window?

I have the same card as yours and can barely break 20k.
If I don't undervolt, at 24°C ambient it can easily hit 74°C.


----------



## BTK

Is 75°C okay for a load temp? ASUS TUF 3090 OC.


----------



## WayWayUp

was able to get a pretty decent score in timespy








I scored 22 208 in Time Spy: Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10 (www.3dmark.com)





Over 22k, so I'm satisfied. Not gonna rerun this benchmark; I've never cared for Time Spy, as I'm more interested in the 4K benches / Port Royal.


----------



## Gebeleisis

BTK said:


> Is 75°C okay for a load temp? ASUS TUF 3090 OC.


Stock, on air, with the stock BIOS? Ambient temps at 23-24°C? Then yeah, normal temps.


----------



## BTK

Everything stock, on air, and yes, those ambients.


----------



## rhyno

bmgjet said:


> On my card:
> 15 mOhm shunts for 520W = 15008 in PR.
> 1000W BIOS set to 520W = 15007 in PR.
> 
> It's within the margin of error.
> The real difference, though:
> With the 1000W BIOS the peak PCIe slot draw I've seen is 88W; with the shunts modded I hit 104W for the same power draw.
> I have a script running in the background that switches between Afterburner profiles based on GPU load so everything downclocks, as well as a thermal safety that shuts the computer off if the GPU gets to 65C. (Max I see is 50C)


I pull over 100 watts from the PCIe slot with just the BIOS, so that's not true.


----------



## geriatricpollywog

rhyno said:


> I pull over 100 watts from the PCIe slot with just the BIOS, so that's not true.
> 
> View attachment 2473372


1000w bios is completely useless for air and water and stupid to run daily. If your goal is to consume power and shorten the life of your card it’s perfect. If your goal is daily performance, the 520w bios is the best.


----------



## jomama22

rhyno said:


> I pull over 100 watts from the PCIe slot with just the BIOS, so that's not true.
> 
> View attachment 2473372


Well, considering his is shunted, the power balancing will assume his power usage is lower than it actually is. Thus, if PCIe slot power draw isn't a linear function of total power draw, it's entirely possible that the card uses less slot power for a given power output when shunted.

There are also PCB design considerations, like the Strix, which seems to hit a PCIe slot draw max of ~65W no matter what BIOS or shunts you put on there and no matter the total board power, so it never appears to hit the PCIe slot power limit.


----------



## itssladenlol

0451 said:


> 1000w bios is completely useless for air and water and stupid to run daily. If your goal is to consume power and shorten the life of your card it’s perfect. If your goal is daily performance, the 520w bios is the best.


The 1000W BIOS is the best for 2x8pin.
The 520W BIOS doesn't work on 2x8pin, because 520W minus the 33% from the missing 3rd 8-pin cripples you down to ~350W.
The 1000W BIOS is fine for daily use on 2x8pin, as you can adjust it to pull just 500W.
I have yet to see more than 70W pulled from the PCIe slot; that only happens on ****ty EVGA cards (yes, EVGA dropped the ball this gen: PCIe slot draw is almost 100W without shunts on the stock BIOS, there are hundreds of dead FTW3 cards on the forums, and the board layout is a joke, the reference board with additional power stages, nothing custom like the Strix or Gaming X Trio).
Some people are on their 4th 3090 FTW3 in 8 weeks because they just die within a few days.

That's for waterblock use, that is; anyone running the 1000W BIOS on air deserves a fried card.


----------



## Nizzen

rhyno said:


> I pull over 100 watts from the PCIe slot with just the BIOS, so that's not true.
> 
> View attachment 2473372


And 101W is not true for every card... some "magical" cards pull way more than others too. So what is "true"?


----------



## jura11

From what I remember, the Galax 380W BIOS on the RTX 2080 Ti has loose VRAM timings, and I think the XOC BIOS from Asus has loose timings as well, but maybe I'm wrong on this one.

I scored better results with the FTW3 BIOS on my Zotac RTX 2080 Ti AMP than with the Galax 380W BIOS.

Hope this helps 

Thanks, Jura


----------



## BiLLbOuS

Just wondering what the best way to monitor watts would be with a 3090 Strix running the 520W LN2 BIOS. Also, any idea what the best shunt config for it would be, for say 700W on water?


----------



## WMDVenum

BiLLbOuS said:


> What shunts are you using? Are you stacking, or desoldering and replacing? Strix?


I have a 3090 FE and replaced all six 5 mOhm resistors with 3 mOhm resistors.
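The arithmetic behind that swap, as a quick sketch: the card computes power from the voltage drop across each shunt while assuming the stock resistance, so a lower-value shunt makes it under-read by the ratio of the resistances. Numbers below are examples, not measurements:

```python
# Sketch of the shunt-mod math (illustrative only): the card infers current
# from the voltage drop across a shunt, assuming the stock resistance.
# Replacing a 5 mOhm shunt with 3 mOhm makes it under-read by R_new/R_old.

def reported_power(actual_w, r_stock_mohm, r_new_mohm):
    """Power the card *thinks* it draws after a shunt swap."""
    return actual_w * (r_new_mohm / r_stock_mohm)

def actual_limit(bios_limit_w, r_stock_mohm, r_new_mohm):
    """Real draw at which the card hits its BIOS power limit."""
    return bios_limit_w * (r_stock_mohm / r_new_mohm)

print(reported_power(500, 5, 3))  # 500W real reads back as 300W
print(actual_limit(350, 5, 3))    # a 350W limit now allows ~583W real
```

Same idea for stacking: two equal shunts in parallel halve the resistance, so the card under-reads by 2x.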


----------



## shredy44

Has anyone tried the 2x8pin 1000w bios from galax?









KFA2 RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

350W base, 1000W max.

I tried the BIOS but I couldn't flash it; this came up: "bios cert 3.0 verification error".

Any workaround, or is it fake?


----------



## itssladenlol

shredy44 said:


> Has anyone tried the 2x8pin 1000w bios from galax?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> KFA2 RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> 350W base, 1000W max.
> 
> I tried the BIOS but I couldn't flash it; this came up: "bios cert 3.0 verification error".
> 
> Any workaround, or is it fake?


Nobody can use this BIOS, because it's not certified by Nvidia.

Unless you are an Nvidia employee with access to the tools, you can't use it.

The only other way to get it on is a BIOS programmer, flashing the BIOS chip directly over cables soldered onto the chip.
But no one has done that yet.
I did that with some older cards, but I don't want to brick my 3090.


----------



## SolarBeaver

ALSTER868 said:


> mmm, your Strix is on air, right? Then the comparison between yours and mine is not quite correct. When on water, the GPU requires less voltage though and is pulling less power.


Yeah, air. Gotta get into a custom loop as well, it seems...


----------



## Falkentyne

itssladenlol said:


> Nobody can use this BIOS, because it's not certified by Nvidia.
> 
> Unless you are an Nvidia employee with access to the tools, you can't use it.
> 
> The only other way to get it on is a BIOS programmer, flashing the BIOS chip directly over cables soldered onto the chip.
> But no one has done that yet.
> I did that with some older cards, but I don't want to brick my 3090.


Inline flashing would definitely have a good chance of working, as long as you have a 1.8V adapter with the flasher. All you have to do is make a complete dump of the entire stock BIOS first with your flasher.
And if it black screens or gives Code 43, just flash your original BIOS back, which will 100% work since you made a full hardware backup. You have a 0% chance of a brick with this.
That's assuming you have a standard SOP8 BIOS chip. All of the main cards have a standard package, but unfortunately the FE does not, which means inline flashing is impossible (unless you solder or hot glue wires to the pins), as desoldering that nonstandard chip is next to impossible (and then you would need a special adapter to attach it to the flasher; better off just hot gluing wires to the pins in the 1-8 positions if someone has the guts to try that).
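One habit worth adding to the "backup first" step above: read the chip twice and verify both dumps match before you write anything, since a flaky clip contact can give you a corrupt backup without any error. A minimal Python sketch (file names are hypothetical):

```python
# Minimal sketch of verifying a hardware BIOS dump before flashing anything:
# two independent reads of the chip should hash identically. If they don't,
# re-seat the clip and dump again; a corrupt backup is worse than none.
import hashlib

def sha256_of(path):
    """SHA-256 hex digest of a dump file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def dumps_match(dump_a, dump_b):
    """True when two reads of the chip produced identical contents."""
    return sha256_of(dump_a) == sha256_of(dump_b)

# e.g. dumps_match("stock_read1.bin", "stock_read2.bin") before writing
```

Keep the verified dump somewhere safe; it's the file you restore if the flash goes south.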


----------



## BiLLbOuS

On the Strix, do the 500W FTW3 BIOS and the KPE 520W BIOS function any differently, besides the extra 20 watts?


----------



## mirkendargen

BiLLbOuS said:


> On the Strix, do the 500W FTW3 BIOS and the KPE 520W BIOS function any differently, besides the extra 20 watts?


Different fan profiles or something I hear if you're still on air.


----------



## BiLLbOuS

mirkendargen said:


> Different fan profiles or something I hear if you're still on air.


Yeah, till EK sends me my blocks. I even got one of those backplate coolers that YouTuber was on about. I want to shunt this thing to 700W+ too, but I'm new to all this and can't get a straight answer on which shunts to stack / glue / silver-paint on.


----------



## itssladenlol

Falkentyne said:


> Inline flashing would definitely have a good chance to work as long as you have a 1.8v adapter with the flasher. All you have to do is make a complete bios dump of the entire stock bios first with your flasher.
> And if it black screens or gives code 43, just flash your original bios back, which will 100% work since you made a full hardware backup. You have 0% of a brick with this.
> That's assuming you have a standard SOP8 bios chip. All of the main cards have a standard package, but unfortunately the FE does not, which means inline flashing is impossible (unless you solder or hot glue wires to the pins), as desoldering that nonstandard chip is next to impossible (and then you would need a special adapter to attach it to the flasher--better off just hot gluing wires to the pins in pin 1-8 positions if someone has the guts to try that)


I've already killed BIOS chips with a flasher, so just flashing back is not always possible.
I had to desolder the chip and solder on a blank one, which I had flashed with the stock BIOS beforehand.


----------



## Falkentyne

itssladenlol said:


> I've already killed BIOS chips with a flasher, so just flashing back is not always possible.
> I had to desolder the chip and solder on a blank one, which I had flashed with the stock BIOS beforehand.


Did you use a 1.8v adapter with your flasher?
Did you have the pins wired up correctly if you were inline flashing?
The most common reason people kill bios chips with a hardware flasher is by pumping 2.5v-3.5v into the chip, which is a 1.8v chip.


----------



## itssladenlol

Falkentyne said:


> Did you use a 1.8v adapter with your flasher?
> Did you have the pins wired up correctly if you were inline flashing?
> The most common reason people kill bios chips with a hardware flasher is by pumping 2.5v-3.5v into the chip, which is a 1.8v chip.


I probably just used the wrong adapter.
Last time I did this was years ago.
I can't even remember the flasher; it was something like ezpx, and I had to manually build an adapter with hot glue and wires.
That's why I'm also not doing it on my 3090, because I've forgotten almost everything.


----------



## shredy44

itssladenlol said:


> Nobody can use this BIOS, because it's not certified by Nvidia.
> 
> Unless you are an Nvidia employee with access to the tools, you can't use it.
> 
> The only other way to get it on is a BIOS programmer, flashing the BIOS chip directly over cables soldered onto the chip.
> But no one has done that yet.
> I did that with some older cards, but I don't want to brick my 3090.







Lucky guy


----------



## Falkentyne

itssladenlol said:


> I probably just used the wrong adapter.
> Last time I did this was years ago.
> I can't even remember the flasher; it was something like ezpx, and I had to manually build an adapter with hot glue and wires.
> That's why I'm also not doing it on my 3090, because I've forgotten almost everything.


That's probably why the BIOS chip died, then.

I hardware flashed my GTX 1070 (MXM) in my laptop with a TDP-modded BIOS multiple times, using a Skypro (Coright) programmer and 1.8V adapter, with no problems (using the high-quality Pomona 5250 clip). I also made a backup of my GT73VR laptop BIOS successfully with it (without the 1.8V adapter, as that's a 3.3V system BIOS chip), which I later deleted, but at least it worked.

I only know of one programmer that has native support for 1.8V flashing without the adapter. It was linked in the Pascal TDP flashing thread on the notebookreview forums (I think Coolane had the flasher).
A couple of people actually flashed their chips with the full 3.3V before the chip eventually fried and they had to buy a new one (mentioned in that thread some time back as well).


----------



## itssladenlol

shredy44 said:


> Lucky guy


That is fake; I saw this video weeks ago.
The one comment saying it's fake is mine.


----------



## itssladenlol

Falkentyne said:


> That's probably why the BIOS chip died, then.
> 
> I hardware flashed my GTX 1070 (MXM) in my laptop with a TDP-modded BIOS multiple times, using a Skypro (Coright) programmer and 1.8V adapter, with no problems (using the high-quality Pomona 5250 clip). I also made a backup of my GT73VR laptop BIOS successfully with it (without the 1.8V adapter, as that's a 3.3V system BIOS chip), which I later deleted, but at least it worked.
> 
> I only know of one programmer that has native support for 1.8V flashing without the adapter. It was linked in the Pascal TDP flashing thread on the notebookreview forums (I think Coolane had the flasher).
> A couple of people actually flashed their chips with the full 3.3V before the chip eventually fried and they had to buy a new one (mentioned in that thread some time back as well).


That's the one I was using, with a self-made glue-and-wire adapter.
I just had an older model, not the 2019 one.


----------



## Falkentyne

itssladenlol said:


> Nobody can use this BIOS, because it's not certified by Nvidia.
> 
> Unless you are an Nvidia employee with access to the tools, you can't use it.
> 
> The only other way to get it on is a BIOS programmer, flashing the BIOS chip directly over cables soldered onto the chip.
> But no one has done that yet.
> I did that with some older cards, but I don't want to brick my 3090.


That 1000W Galax BIOS isn't properly made.
The checksum is completely wrong. You're never going to flash that with NVFlash.
You can open it in the Ampere editor and see for yourself. I don't think this was a proper BIOS dump; probably someone just edited "1000W" in and called it a day.

It's "possible" hardware flashing (a 1.8V adapter is REQUIRED) may make it work. Just make sure you BACK UP and save the entire BIOS region of your original vBIOS first so you can restore it if things go south.
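For context on the checksum part: the legacy PCI option-ROM portion of a VBIOS carries a simple 8-bit checksum, where all bytes of the image must sum to 0 mod 256. That is only one of several checks (Ampere BIOSes are also cryptographically signed, which the hardware flash route sidesteps but the cert verification does not), yet it's exactly the kind of thing that breaks when someone hand-edits a value in a hex editor. A sketch on a toy image:

```python
# Legacy PCI option-ROM checksum: all bytes of the image sum to 0 mod 256.
# This is one of several checks a VBIOS must pass (signatures are separate),
# and it breaks as soon as someone edits a byte without recomputing it.

def rom_checksum_ok(image: bytes) -> bool:
    return sum(image) % 256 == 0

def fix_checksum(image: bytearray) -> bytearray:
    """Recompute the final byte so the image sums to zero mod 256."""
    image[-1] = (256 - sum(image[:-1])) % 256
    return image

rom = bytearray([0x55, 0xAA, 0x10, 0x03, 0x00])  # toy image, not a real VBIOS
print(rom_checksum_ok(bytes(rom)))  # False before fixing
rom = fix_checksum(rom)
print(rom_checksum_ok(bytes(rom)))  # True after
```

Note the 0x55AA marker at the start is the standard option-ROM signature; the rest of the toy image is arbitrary filler.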


----------



## zhrooms

EVGA RTX 3090 Kingpin / K|NGP|N PCB
EAN: 4250812438232 / Product Number: 24G-P5-3998-KR
Source: @vamps#0001


----------



## bmagnien

Thanh Nguyen said:


> Anyone has an extra ftw3 block and want to sell?


PMed you.


----------



## bmagnien

What's the best way to soften/remove silver paint to take off a silver-painted-on stacked shunt? I'm wary of applying a lot of alcohol to dissolve it, in case the alcohol/silver mixture runs out onto the PCB...


----------



## Falkentyne

bmagnien said:


> What's the best way to soften/remove silver paint to take off a silver-painted-on stacked shunt? I'm wary of applying a lot of alcohol to dissolve it, in case the alcohol/silver mixture runs out onto the PCB...


Use Super 33+ tape and cover the PCB around the shunts. I assume Kapton tape will work too, but I've never used or owned Kapton tape.
You're still going to have to scrape slowly and work it off with alcohol; there's no other good way to remove it. This will take a lot of time, so DO NOT RUSH, be very careful when you're scraping, and GO SLOW.



https://www.amazon.com/Scotch-Super-Vinyl-Electrical-Tape/dp/B00004WCCL/



If some alcohol+paint still gets in there after you're done, a Q tip with alcohol or a dull narrow point (in case it gets between something very small) covered with lint free cloth + Alcohol will clean it up easily.


----------



## bmagnien

Falkentyne said:


> Use Super 33+ tape and cover the PCB around the shunts. I assume Kapton tape will work but I've never used or owned kapton tape before.
> 
> 
> 
> https://www.amazon.com/Scotch-Super-Vinyl-Electrical-Tape/dp/B00004WCCL/
> 
> 
> 
> If some alcohol+paint still gets in there after you're done, a Q tip with alcohol or a dull point covered with lint free cloth + Alcohol will clean it up easily.


Thanks falk


----------



## WilliamLeGod

BTK said:


> is 75c for a load temp okay? asus tuf 3090 oc


It's 15°C hotter than a normal 3090 TUF at 375W; you're supposed to hit 60°C max. Check your V/F curve.


----------



## dr/owned

Does the GPU verify the BIOS signature when it starts, such that you can't even hardware flash an unsigned one?


----------



## Jordyn

What's the recommended bios for MSI Gaming Trio X these days?


----------



## Dreams-Visions

Jordyn said:


> What's the recommended bios for MSI Gaming Trio X these days?


Kingpin or FTW3 bios.


----------



## Jordyn

Dreams-Visions said:


> Kingpin or FTW3 bios.


Thanks, I was reading back through at the same time... FTW3 over Suprim if using air only?


----------



## Falkentyne

bmagnien said:


> Thanks falk


I've gotten "alcohol silver paint" mix on the PCB after scraping and cleaning before (because it got on the Super 33+ tape when I was scraping, and eventually wet the tape and it lost some adhesion) and it came right up with a tool with some tissue/q-tip/microfiber cloth wrapped around a toothpick edge, no problem at all. It's annoying but it works. And probably a hell of a lot easier than trying to clean off a solder bridge that got on the PCB (like what happened to Sky3900 if you look at his post around page 6 of the shunt mod thread!). Super 33+ tape saves you from 80% of the work, so you only have a small cleanup after.

Just be VERY VERY careful when scraping. You want to go as slow as possible. And trust me, you will be very glad you bought that super 33+ tape as it helps protect against slips that might scratch the PCB (it's saved me several times). Just DON'T rush. The hardened paint will come off.

I assume you're removing the paint to flash the 1000W Bios? Or are you selling the card?


----------



## Falkentyne

Jordyn said:


> Thanks, was reading back through at the same time....FTW3 more than Suprim if using air only?


I'd get the Suprim. There are just too many reports of FTW3s dying randomly, especially in "light load" games: something 'pops' on the PCB and then the red LED light of death.
I don't know if it's some bad interaction with the Nvidia drivers, or if it's related to the "6500C" temp bug/thermal alert temp bug that appeared after the 456.98 hotfix (the last driver without this bug), e.g. a combination of EVGA's fuse system, power balancing issues with too-aggressive PCIe slot power, and Nvidia's drivers, but I would not buy an FTW3 at all.


----------



## Jordyn

Yikes, yeah, ok. Thanks for the advice. I'm guessing an RMA would be tough if it's flashed to a different BIOS, too.


----------



## bmagnien

Falkentyne said:


> I've gotten "alcohol silver paint" mix on the PCB after scraping and cleaning before (because it got on the Super 33+ tape when I was scraping, and eventually wet the tape and it lost some adhesion) and it came right up with a tool with some tissue/q-tip/microfiber cloth wrapped around a toothpick edge, no problem at all. It's annoying but it works. And probably a hell of a lot easier than trying to clean off a solder bridge that got on the PCB (like what happened to Sky3900 if you look at his post around page 6 of the shunt mod thread!). Super 33+ tape saves you from 80% of the work, so you only have a small cleanup after.
> 
> Just be VERY VERY careful when scraping. You want to go as slow as possible. And trust me, you will be very glad you bought that super 33+ tape as it helps protect against slips that might scratch the PCB (it's saved me several times). Just DON'T rush. The hardened paint will come off.
> 
> I assume you're removing the paint to flash the 1000W Bios? Or are you selling the card?


I bought a second FTW3 which, from early testing, seems to be better silicon, running the 1000W BIOS with zero mods at 55% (which pulls almost 100W through the PCIe slot... **** EVGA lol). I'm gonna pull the shunt off the first 3090 and put the 1000W BIOS on to do a direct comparison just to be sure, then sell that one with a hybrid kit I picked up. Then the long wait for a freaking waterblock begins.

Also, what's a good score on the ray tracing feature test? I feel like you (Falk) or dark put together a little rough benchmark for that: 62ish decent bin, 65+ great bin, 67+ golden sample.


----------



## Falkentyne

bmagnien said:


> I bought a second FTW3 which upon early testing seems to be better silicon, running 1000w with zero mods at 55% (which pulls almost 100w through the pcie slot...**** evga lol). I’m gonna pull the shunt off the first 3090 and put the 1000w on to do a direct comparison just to be sure, then sell that one with a hybrid kit I picked up. Then start the long wait for a freaking waterblock.
> 
> also - what’s a good score on the ray tracing feature test? I feel like you (falk) or dark put together a little rough benchmark for that, like 62ish decent bin, 65+ great bin, 67+ golden sample


I don't know what a good score is. I only scored 60 fps. I can only go to +600 on the memory, that's why.


----------



## bmagnien

Falkentyne said:


> I don't know what a good score is. I only scored 60 fps. I can only go to +600 on the memory, that's why.


Must’ve been dark then. My other FTW3 hit 59.1 with hybrid, this one just hit 63.2 on air (outside cold air)


----------



## Falkentyne

And yet another EVGA FTW3 just died, from the same person who had one die earlier and got an RMA replacement.
Same problem: random black screen + 100% fans during a game, tried to power off and on, then a "pop" sound.

Wonder if it's a fuse that blew?





3090 FTW - Red Light of Death (Page 3) - EVGA Forums: "Launch day 1 owner of 3090 FTW. Running perfectly fine until I installed the hybrid kit and upped my standard oc by an extra 100mem because it was super stable during benchmarks. Apparently during holidays and covid their phone support is closed weekends now so woooo. Was gaming a few hours..." (forums.evga.com)


----------



## Nico67

Has anybody seen AB reporting different steps from the curve? Currently set at [email protected], and it should be [email protected], but I've had driver resets, and upon checking it's still running 2025, but at 0.943 during those times, which it shouldn't be able to do.
I think it might be bouncing close to a temperature point that shifts the curve, so I've dropped 0.943 down to 1995 to see how it goes.
I've noticed these cards are very particular about voltages; one frequency step will usually be the difference in stability.


----------



## lolhaxz

Nico67 said:


> Has anybody seen AB reporting different steps from the curve? Currently set at [email protected], and it should be [email protected], but I've had driver resets, and upon checking it's still running 2025, but at 0.943 during those times, which it shouldn't be able to do.
> I think it might be bouncing close to a temperature point that shifts the curve, so I've dropped 0.943 down to 1995 to see how it goes.
> I've noticed these cards are very particular about voltages; one frequency step will usually be the difference in stability.


The curve is applied/displayed based on the temperature of the GPU at the time AB read it... if you load/apply the same curve at a different temperature (even 5°C can be enough), the result will be different and the curve will "auto" change: some points may flatten, some may rise, etc. It's effectively the result of writing that curve to the GPU and then reading it back again; the drivers modify it.

Temperature/clock changes start happening even at 20C.
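A rough model of that behavior, for intuition only: GPU Boost is commonly observed to shed about one 15 MHz bin per roughly 5°C step above a low threshold (this is an approximation from community testing, not an Nvidia spec, and the exact threshold and step vary by card):

```python
# Rough model of temperature-dependent boost offset (illustrative, not a
# spec): the card sheds about one 15 MHz bin per ~5 C step above a low
# threshold, so a saved curve reads back differently at different temps.

BIN_MHZ = 15
STEP_C = 5
THRESHOLD_C = 20  # offsets start even at low temperatures, as noted above

def boost_offset_mhz(temp_c):
    """Approximate clock offset applied by the driver at a given temp."""
    if temp_c <= THRESHOLD_C:
        return 0
    steps = (temp_c - THRESHOLD_C) // STEP_C
    return -BIN_MHZ * steps

for t in (20, 35, 50):
    print(t, boost_offset_mhz(t))
```

This is why a point you set at 2025 MHz cold can read back as 1980 MHz once the card warms up, and why saving/loading a curve at different temperatures shifts it.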


----------



## Thanh Nguyen

Which one is heavier for testing, guys: Cyberpunk 2077 or Metro Exodus?


----------



## maw784

Just in case anyone is in the same boat I was in and doesn't want to wait for a custom-made 16-gauge adapter: I'm using this 18-gauge cable and it works so far for the 3090 FE, coming off the 2x 6+2-pin non-modular 16-gauge wires of my 760W PSU.

I'm using a 9-year-old PC Power and Cooling PSU; it's a rebranded Seasonic unit with a single rail. I'm surprised, I thought I was finally going to be forced to upgrade my PSU, but so far so good.

I will update if the cable starts acting up or melts, etc.

https://www.amazon.com/Asiahorse-Ex...pY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU&th=1
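A back-of-envelope check on why an 18-gauge adapter can hold up here (idealized: even current sharing and a nominal 12 V rail, which real cables won't match exactly): a 3090 FE drawing ~350 W through two 8-pin leads puts about 175 W on each lead, spread across that lead's three 12 V wires.

```python
# Idealized per-wire current for a dual 8-pin to 12-pin adapter.
# Assumes perfectly even current sharing and a nominal 12 V rail.

def amps_per_wire(total_w, leads=2, wires_per_lead=3, volts=12.0):
    """Current carried by each 12 V conductor, in amps."""
    return total_w / leads / wires_per_lead / volts

print(round(amps_per_wire(350), 2))  # ~4.86 A per wire at 350 W
```

Roughly 5 A per conductor is comfortably inside what 18 AWG hookup wire is commonly rated for, which matches the "works so far" experience; the margin shrinks fast if the card is shunted or flashed to a higher power limit.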


----------



## WilliamLeGod

Thanh Nguyen said:


> Which one is heavier for testing, guys: Cyberpunk 2077 or Metro Exodus?


Pretty sure Cyberpunk for less +offset on the core, and Metro Exodus for more power draw.


----------



## marti69

Hello guys, reporting back after many days of testing to figure out why the 1000W BIOS was giving me a lower Port Royal score than the 520W BIOS. I finally figured out the issue: it turned out to be my cable mod extensions from Formula Mod. Using OCCT, I noticed that my 12V, 3.3V, and 5V rails were all low (11.96V, 3.15V, and 4.8V at idle) and dropped even lower under stress testing, so I took the extensions out and used the stock cables (I have a ROG Thor 1200 Platinum PSU). I checked the voltages again and they were fine (12.26V, 3.3V, and 5V, all stable even under stress), so I decided to try the 1000W BIOS again. My best score before with this BIOS on my RTX 3090 Trio with a Bykski water block was 14900; now with stock PSU cables it's 15548 @ 2235MHz GPU and +1200 on VRAM.


----------



## Herald

Gebeleisis said:


> did you put it ouside the window ?
> 
> I have the same card as yours and can barely break 20k
> If i do not undervolt at 24 C ambient it can hit 74 C easily


Nope, sitting on my desk next to the room heater. Mine also hits 74+C with the stock fan curve obviously, I'm just not using the stock fan curve when benching. Is the airflow of your case decent?


----------



## Gebeleisis

Yes, good airflow; probably a poor chip. I'm waiting for a Bykski waterblock atm.


----------



## Herald

Gebeleisis said:


> yes, good airflow, probably poor chip. I am waiting for a bykski waterblock atm.


What OC are you using for Time Spy? Try OCing your RAM a lot. My core is pretty average, but the VRAM on this card is probably golden; I haven't seen anyone on the SUPO leaderboard running their RAM as high as mine. I'm doing +1775 on the Indigo bench, +1400 on Time Spy, and +1600 on SUPO 4K.


----------



## itssladenlol

marti69 said:


> Hello guys, reporting back after many days of testing to figure out why the 1000W BIOS was giving me a lower Port Royal score than the 520W BIOS. I finally figured out the issue: it turned out to be my cable mod extensions from Formula Mod. Using OCCT, I noticed that my 12V, 3.3V, and 5V rails were all low (11.96V, 3.15V, and 4.8V at idle) and dropped even lower under stress testing, so I took the extensions out and used the stock cables (I have a ROG Thor 1200 Platinum PSU). I checked the voltages again and they were fine (12.26V, 3.3V, and 5V, all stable even under stress), so I decided to try the 1000W BIOS again. My best score before with this BIOS on my RTX 3090 Trio with a Bykski water block was 14900; now with stock PSU cables it's 15548 @ 2235MHz GPU and +1200 on VRAM.
> View attachment 2473470
> View attachment 2473471


I have the same Asus Thor 1200W power supply and the 1000W BIOS, but my CableMod cables aren't extensions; they go directly into the power supply, and I have no problems.
Seems extensions cause problems at high power draw.


----------



## Canson

Any 500W+ BIOS out there that works with my 3090 Suprim X? I want to try it for fun and see what performance I get.

Right now undervolted @850mV at 1920MHz and +1000 memory (Micron overclocks well, right?).


----------



## WilliamLeGod

Got 15k on my 3090 trio 520W stock cooler 








I scored 15 083 in Port Royal: Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


----------



## GzZ

Canson said:


> Any 500W+ BIOS out there that works with my 3090 Suprim X? I want to try it for fun and see what performance I get.
> 
> Right now undervolted @850mV at 1920MHz and +1000 memory (Micron overclocks well, right?).


KPE 520w should work.
Runs fine on Gaming X Trio.


----------



## Canson

GzZ said:


> KPE 520w should work.
> Runs fine on Gaming X Trio.


got any link to download it?


----------



## Gebeleisis

Herald said:


> What OC are you using for timespy? Try OCing your ram a lot, my core is pretty average, but the Vram on this card are probably golden, havent seen anyone in the supo leaderboard running their ram as high as mine. Im doing +1775 on the Indigo bench, +1400 on timespy and +1600 on supo 4k


Anything over +650 and it crashes...


----------



## Sheyster

marti69 said:


> hello guys repoting back after many days testing to figure out 1000w was giving me less score on port royal vs 520w bios i finaly figure out what was the issue


I believe @bmgjet has shown that, with everything else equal, the results are exactly the same at a given power limit, like 520W set on the 1000W BIOS. He posted scores that were ONE point apart to illustrate this.

The main benefit of the 1000W BIOS for Strix owners is the fan support: the 520W BIOS simply won't run the middle fan past 66% speed. This is not an issue with the 1000W BIOS or the 500W FTW3 BIOS.


----------



## Chobbit

Hey, how do you undervolt these cards? My Palit Gamerock either doesn't show any voltage slider (Afterburner) or has 0-100% and starts at 0 (X1 & ThunderMaster).

I can't see any way to undervolt.


----------



## Gebeleisis

Chobbit said:


> Hey how do you undervolt these cards? My Palit Gamerock just either doesn't show any voltage slider (Afterburner) or has 0-100% and starts at 0 (X1 & Thunder Master).
> 
> I can't see anyway to undervolt


in msi afterburner, press CTRL + F

There you get to the voltage curve, where you can undervolt it.
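For anyone unfamiliar with what the Ctrl+F curve editor is actually doing: the usual undervolt is to "flatten the curve", i.e. raise the clock at your chosen voltage point and clamp every higher-voltage point to that same clock, so the card never boosts past that voltage. A toy sketch of the rule (the voltage/clock numbers are illustrative, not from any real card):

```python
# Toy model of the "flatten the curve" undervolt done in MSI Afterburner's
# Ctrl+F editor. Curve points are (millivolts, MHz). Every point at or above
# the chosen voltage cap is clamped to the target clock, so under load the
# card settles at cap_mv instead of boosting to higher voltages.
def flatten_curve(points, cap_mv, target_mhz):
    flattened = []
    for mv, mhz in points:
        if mv >= cap_mv:
            flattened.append((mv, target_mhz))            # clamp the high-voltage tail
        else:
            flattened.append((mv, min(mhz, target_mhz)))  # lower points stay monotonic
    return flattened

# Illustrative stock-ish curve (not real values for any card):
stock = [(800, 1800), (850, 1860), (900, 1920), (1000, 2010), (1081, 2070)]
undervolted = flatten_curve(stock, cap_mv=850, target_mhz=1920)
print(undervolted)  # every point from 850 mV up now requests 1920 MHz
```

Afterburner applies the clamp for you when you drag a point up and hit apply; the sketch just makes the rule explicit.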


----------



## Chobbit

Gebeleisis said:


> in msi afterburner, press CTRL + F
> 
> There you get to the voltage curve, where you can undervolt it.


Didn't even know that was there ha, thanks


----------



## geriatricpollywog

I did some testing with the XOC bios yet my card still doesn't want to pull more than 520w, so I probably would have had the same result with the LN2 bios. I learned that when I increase the core LLC from Level 0 to Level 3 in the Classified tool, power consumption goes down to 450-480w, average clock speed doesn't change much (reported in Port Royal), but the PR score is 300 points lower (15371). For those who have mysteriously low scores relative to their reported clock speeds, the voltage could be drooping too much.

LLC0: 15684
LLC3: 15371


----------



## ALSTER868

So if the 520W KPE bios doesn't report the 3rd 8-pin's power, how much should we add to the displayed consumption to know the real one?


----------



## wirx

WilliamLeGod said:


> Got 15k on my 3090 trio 520W stock cooler
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 083 in Port Royal
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> www.3dmark.com


Nice score, can you share what your settings were and maybe a screenshot of the voltage curve?
An average temperature of 55 °C is quite good on air with the stock cooler.


----------



## gfunkernaught

jomama22 said:


> Whatever makes your thermspy effective clock match your requested clock. No real magic number.


Hi, I drifted here from the 2080 Ti owners thread, but I have a question about thermspy. I know this is possible in thermspy alone using the test clk and test pwr functions. But anything I set in AB, for example, always results in the effective clock being one bin, or 15mhz, lower than requested if I do the offset>curve method for OC. Is it even possible, in real-world applications, to get the requested clock and effective clock to match?


----------



## indicajones

Falkentyne said:


> And yet another eVGA FTW3 just died. From the same person who had one die earlier and got a RMA replacement.
> Same problem. Random black screen+100% fans during a game, tries to power off and on them "Pop" sound.
> 
> Wonder if it's a fuse that blew?
> 
> 
> 
> 
> 
> 3090 FTW - Red Light of Death (Page 3) - EVGA Forums
> 
> 
> Launch day 1 owner of 3090 FTW. Running perfectly fine until I installed the hybrid kit and upped my standard oc by an extra 100mem because it was super stable during benchmarks. Apparently during holidays and covid their phone support is closed weekends now so woooo. Was gaming a few hours...
> 
> 
> 
> forums.evga.com


I haven't reported it in the forums, but I also had the same thing happen to me on my first hybrid FTW3. EVGA sent me another one about a month ago.


----------



## DrunknFoo

Falkentyne said:


> And yet another eVGA FTW3 just died. From the same person who had one die earlier and got a RMA replacement.
> Same problem. Random black screen+100% fans during a game, tries to power off and on them "Pop" sound.
> 
> Wonder if it's a fuse that blew?
> 
> 
> 
> 
> 
> 3090 FTW - Red Light of Death (Page 3) - EVGA Forums
> 
> 
> Launch day 1 owner of 3090 FTW. Running perfectly fine until I installed the hybrid kit and upped my standard oc by an extra 100mem because it was super stable during benchmarks. Apparently during holidays and covid their phone support is closed weekends now so woooo. Was gaming a few hours...
> 
> 
> 
> forums.evga.com


probably an isolated issue with their own system


----------



## Falkentyne

DrunknFoo said:


> probably isolated issues to their own system


I'm not so sure.
I've been reading those threads for a -long- time and almost every one has the same basic failure.
black screen+100% fans crash.
Followed by either red LED light by a PCIE plug, or a pop sound followed by red LED light.
I'd bet anyone here $100 it's something related to that board's component design that is failing. The problem is no one has actually looked to see if a fuse blew or not when that pop happened.
The most I saw was an earlier post where someone saw a puff of smoke, which seemed to come from behind the area where the "Power Source" shunt resistor was located.

Now it's my job to find strange things in hardware--that's what I do and that's why some people trust me with their work.
What I want to know is, is it really a substandard flaw with a component on eVGA cards? Or is it a combination of the Nvidia drivers and something onboard?
Is it related to the FTW3 cards showing extremely high PCIE Slot power draw, in comparison to other cards?
Is it related to the 6500C/thermal throttling bug that has appeared in *EVERY* single Nvidia driver released after 456.98 Hotfix? (this driver was not affected).
Did this bug happen in the first four original driver releases? (The launch driver crashed a lot of systems, including some FE cards, due to over-aggressive boosting, although apparently the Studio driver was far more stable; then came the next two game ready drivers, which crashed Warzone for many people and failed to run "For Honor"; and then the 456.98 Hotfix, which fixed For Honor and was stable in Warzone.) After that series of drivers, the "thermal throttle" temp blip appeared.

Note that apparently all cards show the random thermal throttle flag, which you can see in Hwinfo and gpu-z. But only eVGA Cards have RAM/VRM temp sensors on them, so the non eVGA cards just show a random "thermal+power" blip.

The 456.98 hotfix driver was the last driver that seemed to be stable, and may have also been the last driver that didn't bomb on 1080 Ti cards (I do NOT know this for a fact, I was not paying attention to the 1080 Ti driver crashes!).

So again, I want to know if there is a relationship between the thermal blip and some eVGA cards dying randomly (especially at low load).


----------



## BTK

WilliamLeGod said:


> Its 15C hotter than a normal 3090 tuf 375w, u r supposed to hit max 60C. Check yr v/f


I made a more aggressive fan curve and got the load down to 69c


----------



## WMDVenum

Falk, would you recommend staying with driver version 456.98 then?

Recently I got a weird issue where my third (left) monitor "reconnects" occasionally when I am gaming on my main (middle) monitor. It will lose connection and connect again immediately. I have tried:

Change the refresh rate to 59hz on the affected monitor.
Remove the overclock on the GPU.
Change the DP ports used on the card.
Disable G-Sync.
None of these seem to matter. I am probably going to roll back drivers to see if that helps. This also seems to happen when running the RealBench stress test, so I think it is related to GPU load and not gaming specifically.

My monitors are:
Left - 1080P - HDMI to DP cable
Middle - 1440P - DP cable
Right - 1080P - HDMI to DP cable


----------



## Falkentyne

WMDVenum said:


> Falk, Would you recommend staying with driver version 456.98 then?
> 
> Recently I got a weird issue where my third (left) monitor "reconnects" occasionally when I am gaming on my main (middle) monitor. It will lose connection and connect again immediately. I have tried:
> 
> Change the refresh rate to 59hz on the affected monitor.
> Remove the overclock on the GPU.
> Change the DP ports used on the card.
> Disable G-Sync.
> None of these seem to matter. I am probably going to roll back drivers to see if that helps. THis also seems to happen when running real bench stress test so I think it is related to GPU load and not gaming specifically.
> 
> My monitors are :
> Left - 1080P - HDMI to DP cable
> Middle - 1440P - DP cable
> Right - 1080P - HDMI to DP cable


Whatever works. I never saw the overtemp blip on 456.98 and I never had a random crash either (besides the normal warzone hates core overclocks thing). I'm on 457.44 vulkan beta and still have never had a youtube/browsing crash on it, and I refuse to update to anything newer because I don't want to see them. But I do get the random thermal bug on it but I haven't seen any performance issues from that bug. But it doesn't happen at all on 456.98 which is why I recommend people try it unless they need support for a game.

and frankly if people get mad at me for trying to help them on eVGA forums I'll block every single one of them. None of them are my friend and I do not care about any of them--nor should I. I work hard trying to help people and if they don't respect my work and time I put in--they can do without me.


----------



## stryker7314

marti69 said:


> hello guys, reporting back after many days of testing to figure out why the 1000w bios was giving me a lower score in Port Royal vs the 520w bios. i finally figured out what the issue was: it turned out to be my cable-mod extensions from FormulaMod. when using OCCT i noticed that my 12v, 3.3v and 5v rails were all low, 11.96v, 3.15v and 4.8v at idle, and they went even lower when stress testing, so i took them out and used the stock cables
> (i have a ROG Thor 1200 Platinum PSU). i checked the voltages again and they were fine, 12.26v, 3.3v and 5v, all stable even under stress test. i decided to try the 1000w bios again; my best score before with this bios on my rtx 3090 trio with a bykski water block was 14900, now with stock psu cables it's 15548 @ 2235mhz gpu and +1200 on vram.


What kind of temps is the bykski getting, and is that 2 360mm rads?


----------



## jomama22

gfunkernaught said:


> Hi, I drifted here from the 2080 Ti owners thread but I have question about thermspy. I know this is possible in thermspy alone using the test clk and test pwr functions. But anything I set in AB for example, always results in the effective clock one bin or 15mhz lower than requested if I do the offset>curve method for oc. Is it even possible, in real world applications, to get requested clock and effective clock to match?


By requested clock I mean whatever is being reported as the clock in use vs the effective clock shown in thermspy. My understanding of why we always get some 15mhz drop from what's set in AB (I've seen random drops of 30mhz as well when the curve wants to go wacky) is the temp difference between when you set the clock and when load is actually applied, but it could also be some other quirk of the boost algo. It doesn't drop a voltage bin on me when this happens, just the 15mhz as you said.

Thermspy shows me what the current requested clock is, i.e., the 15mhz drop from what you have set on the curve. If that's not what it's showing you, I would look at what AB or another program is saying the current clock is and then compare the effective clock to that.

For instance, if my 1.1v bin is set to 2220, it will, almost always, show 2205 as the requested clock across all programs once load is applied.

I will get random runs where it does stay constant at the requested clock from AB even though temps are exactly the same as a run that drops me 15mhz, but it's hard to decipher whether the effective clock is dropping as well, as the score will be exactly the same.


----------



## gfunkernaught

jomama22 said:


> By requested clock I mean whatever is being reported at the clock being used vs the efficient clock shown in thermspy. My understanding of why we always get some drop of 15mhz from what's set in AB (iv seen random drops of 30 mhz as well when the curve wants to go wacky) is because of the temp difference between when you set the clock to when load is actually applied but this could also be some other quirk of the boost algo. It doesn't drop a voltage bin on me when this happens, just the 15mhz as you said.
> 
> Thermspy shows me what the current requested clock is, i.e., the 15mhz drop from what you have set on the curve. If that's not what it's showing you, I would look at what AB or another program is saying the current clock is and then compare the effective clock to that.
> 
> For instance, if my 1.1v bin is set to 2220, it will, almost always, show 2205 as the requested clock across all programs once load is applied.
> 
> I will get random runs where it does stay constant at the requested clock from AB even though temps are exactly the same as a run that will drop me 15mhz, but it's hard to decipher if the effective clock is dropping as well as the score will be the exact same.


As I write this my 2080 ti has passed away _sob_. A moment of silence 
But back to the boost algos. I noticed that when I had 2145mhz set at the 1093mv point after setting an offset to 2130mhz, while running the bright memory bench, the curve still showed 2145, AB showed 2130, and Thermspy showed 2115. This is all at a temp of 38-39c. 

This is all for naught at this point in time but it can't hurt to understand the mystery of gpu boost.


----------



## sultanofswing

You guys seem to be forgetting about bins vs temp: at 39-40c there is a 15mhz bin drop. There will be another 15mhz bin drop around 45-46c, and so on and so forth.
This thermspy program is just getting everyone confused in the long run.
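The 15mhz steps being discussed are easy to model. Here's a toy model of GPU Boost temperature binning: the step size matches what's reported in the thread, but the exact thresholds below are assumptions, not documented values.

```python
# Toy model of temperature binning: the boost clock steps down 15 MHz each
# time the core crosses another temperature threshold. The 15 MHz step
# matches thread reports; the thresholds themselves are assumptions.
BIN_STEP_MHZ = 15
TEMP_THRESHOLDS_C = [39, 45, 51, 57, 63]   # assumed ~6 C apart

def boosted_clock(set_mhz, temp_c):
    bins_dropped = sum(1 for t in TEMP_THRESHOLDS_C if temp_c >= t)
    return set_mhz - bins_dropped * BIN_STEP_MHZ

# A 1.1 V point set to 2220 MHz in the curve editor:
print(boosted_clock(2220, 35))   # no bins crossed -> 2220
print(boosted_clock(2220, 40))   # one bin         -> 2205
print(boosted_clock(2220, 46))   # two bins        -> 2190
```

This is why the requested clock in a monitoring tool usually sits a bin below what's set in AB once the card warms up; since drops reportedly start even below 30c, shift the thresholds to taste.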


----------



## Falkentyne

Saw someone complaining about the newest Nvidia 461.09 driver causing extra thermal throttling on VRAM during mining.

Never mind that this guy is WAY past the internal power limits for both 8 pin and MVDDC (VRAM), which seems to get picked up by the SRC, which then hits its own power limit and then it's lights out.









NVIDIA 461.09 Driver on a 3090 and the secret memory PerfCap

Something changed with this driver and my ability to overclock the memory on my 3090. I believe they changed the thermal limit of the memory, reducing it significantly. With previous drivers, I was able to run my memory at 10350mhz under full stress-test load. With the new driver, I pretty...

www.techpowerup.com





Of course it doesn't explain the thermal cap...


----------



## SoldierRBT

To the people that water-cooled their 3090. What were the stable before/after clocks and memory OC in games? 30-45MHz+ average core improvement? How about memory? Thanks


----------



## motivman

SoldierRBT said:


> To the people that water-cooled their 3090. What were the stable before/after clocks and memory OC in games? 30-45MHz+ average core improvement? How about memory? Thanks


My memory overclock did not improve at all with WC. Max stable memory is +800; +900 is stable for some games; +1000 is not stable and sometimes passes benchmarks but most of the time crashes. I even used Fujipoly extreme pads, and still no improvement in my memory overclocking. For core clocks, I gained at least 3 bins on water compared to air, and of course load temps went from 85+ to max 55C on my PNY 3090 EPIC-X on an EKWB block.


----------



## mirkendargen

Falkentyne said:


> Saw someone complaining about the newest Nvidia 461.09 driver causing extra thermal throttling on VRAM during mining.
> 
> Never mind that this guy is WAY past the internal power limits for both 8 pin and MVDDC (VRAM), which seems to get picked up by the SRC, which then hits its own power limit and then it's lights out.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA 461.09 Driver on a 3090 and the secret memory PerfCap
> 
> Something changed with this driver and my ability to overclock the memory on my 3090. I believe they changed the thermal limit of the memory, reducing it significantly. With previous drivers, I was able to run my memory at 10350mhz under full stress-test load. With the new driver, I pretty...
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> Of course it doesn't explain the thermal cap...


Can confirm that memory clocks drop 250mhz when mining ETH (it drops 250mhz automatically from wherever it's set; it's not that it becomes unstable unless you drop it 250mhz), but I don't think it's thermal throttling. It happens instantly when I fire up a Dagger-Hashimoto miner, I have a waterblock glued to my backplate so my memory is definitely not getting hot, and GPU-Z never shows a thermal PerfCap. It also drops 250mhz no matter what you set your memory clock to. I think it's just something baked into the driver to try and prevent people with bad VRAM cooling from cooking their cards.


----------



## pat182

im happy my card is vertically mounted so the exhausted air goes around the back and cools the backplate a bit. warm air is better than no air i guess


----------



## rocklobsta1109

motivman said:


> my memory overclock did not improve at all with WC. max stable memory is +800, +900 is stable for some games, +1000 is not stable, and sometimes passes benchmarks, but most of the time crashes. I even used fugipoly extreme pads, and still no improvement in my memory overclocking. For core clocks, I gained at least 3 bins on water compared to air, and of course loads temps went from 85+ to max 55C on my pny 3090 epic-x on EKWB.


Curious to get a little more detail on your clocks and WC setup. I too have a PNY 3090 with an EK block. My card averages around 1930mhz in Time Spy and I have mem at +750 currently, load temps around 50C max. What kind of clocks are you seeing in Time Spy?


----------



## arvinz

@mirkendargen I'm also experiencing a strange thermal limit with that driver... I didn't think it was the driver until you mentioned it. I'll revert to the previous one for now.

Does changing the 'Power management mode' in the Nvidia Control panel to 'Prefer maximum performance' have any bearing at all on either synthetic benchmarks or gaming?


----------



## SolarBeaver

Hey, is it normal for a Strix to draw only ~40w from the pcie slot? Stock bios, full 123% TDP.


----------



## jomama22

sultanofswing said:


> you guys seem to be forgetting about bins vs temp. at 39-40c is a 15mhz bin drop. There will be another 15mhz bin drop around 45-46c and so on and so forth.
> This thermspy program is just getting everyone confused in the long run.


I mentioned temps in my explanation. This also occurs going from ~17C when set to ~31C under load, so I imagine it's any 10C+ interval from the set temperature, or even smaller temp differences.


----------



## WayWayUp

bmagnien said:


> I bought a second FTW3 which upon early testing seems to be better silicon, running 1000w with zero mods at 55% (which pulls almost 100w through the pcie slot...**** evga lol). I’m gonna pull the shunt off the first 3090 and put the 1000w on to do a direct comparison just to be sure, then sell that one with a hybrid kit I picked up. Then start the long wait for a freaking waterblock.
> 
> also - what’s a good score on the ray tracing feature test? I feel like you (falk) or dark put together a little rough benchmark for that, like 62ish decent bin, 65+ great bin, 67+ golden sample


I believe the target for the 3090 in this test is 60fps;
anything over 61fps is good, but if you can get 62fps+ that is great.

If it's watercooled you will want to target 64fps. If you have a Kingpin card and can throw a ton of voltage at it, you should target 65fps.










NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10900KF Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XII APEX (3dmark.com)


I was able to get around 68fps, but my card is strong, I wouldn't use it as a goalpost.

I wish there was some kind of leaderboard for comparison's sake, so you can see how you're doing, relatively speaking, or at least understand what constitutes a good score.


----------



## bmagnien

WayWayUp said:


> I believe the target for the 3090 in this test is 60fps
> anything over 61fps is good but if you can get 62fps + that is great
> 
> if it's watercooled you will want to target 64fps. if you have a kingpin card and can throw a ton of voltage you should target 65fps
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10900KF Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XII APEX (3dmark.com)
> 
> 
> I was able to get around 68fps but my card is strong i wouldnt use it a goal post
> 
> I wish there was some kind of leaderboard for comparison sake so you can see how your doing; relatively speaking. At least to understand what constitutes a good score


makes sense. i managed 65.3 on outside cold air with a hybrid-kitted evga ftw3.


----------



## bmagnien

just wanted to add some recent experience for the good of the group. i currently have 2 ftw3 3090s, one with a hybrid kit installed and one stock air. i had previously pcie slot silver stack shunted the hybrid (to bypass EVGAs power balancing issue), but as I will be selling one, I decided to remove the shunt. I then put the 1000w xoc bios on both to be able to simulate the power draw I could achieve with the shunt (~550w) within my system (sfx 750w power supply, CPU PPT limited to 150w (5950x)). The goal was to test both to try to figure out the better bin, but this is made complicated given that one has the hybrid kit and one doesn't. Tried both on 3c ambient outside temps to push for some good scores just to see what was what.

I got really great scores using the 1000w bios, passing all synthetic benches I tried that pushed both GPU and CPU. And I was happy I wouldn't need to worry about shunting with these new bios.

BUT, when I actually tried to game on this bios, I got instant shut downs on my computer. I first thought it was power spikes over my 750w sfx PS, so I limited my CPU PPT, and dropped vbios down to 50%, 45%, 40%, but was still getting shutdowns. Mainly in CP2077 as soon as I'd load a save, but also in RDR2 about 10 mins in, and sometimes SOTTR wouldn't even start the program.

Reset my system BIOS to stock settings to eliminate any other issues (XMP, PBO, etc.). Still got shutdowns. DDU'd and tried several different drivers, still shutdowns. The only way I could make the 1000w bios stable for actual GAMING was limiting it to 30% power, which obviously wasn't viable for achieving the performance I was getting with just the shunted PCIE slot and the 520w KP bios. I was also testing in windowed mode with GPU-Z and HWiNFO up, checking real-time full-system power draw and temps across all GPU and ICX sensors to rule out power delivery spikes or temp spikes being the cause. Never saw anything even close to out of spec leading up to the shutdowns.

Long story short, I'll be silver stack shunting both cards and testing with the KP 520 bios and keeping the better performer. I currently have one reshunted and am pulling 560w again, and it's performing great in both gaming and synthetics. So there must be some actual performance variation between the KP520 and the 1000w bios that was causing my system instability. I don't know enough about the inner workings of the bios (voltage, advanced settings, etc.) to say what the problem was, but I saw others above questioning the difference and wanted to share my experience between the two other than just 1% differences in 3dmark scores (i.e. full-on system instability).

Just wanted to share my odd results regarding the 1000w bios in synthetics vs gaming. If anyone reads this and thinks they know what the issue is, I'm all ears. But just wanted to share all the details in case either I missed something, or there's some info here that could help the community. Cheers!
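For readers wondering why the "silver stack" shunt mod mentioned above raises real power draw: the card infers current from the voltage drop across a tiny shunt resistor, so lowering that resistance makes the card under-read its own power and keep boosting. A back-of-the-envelope sketch; the resistor values are assumptions, not measured from any card:

```python
# Why stacking a shunt raises real power draw: the controller computes current
# from the voltage drop across a known shunt resistance. Adding a resistor (or
# conductive material) in parallel lowers the effective resistance, so the same
# real current produces a smaller drop and power is under-read. Values below
# are assumptions; stock shunts are typically in the low-milliohm range.
def parallel(r1, r2):
    return (r1 * r2) / (r1 + r2)

def reported_power(real_power_w, r_assumed, r_effective):
    # The controller still divides the measured drop by r_assumed,
    # so its reading scales with r_effective / r_assumed.
    return real_power_w * (r_effective / r_assumed)

r_stock = 0.005                      # assumed 5 mOhm stock shunt
r_mod = parallel(r_stock, r_stock)   # equal shunt stacked on top -> 2.5 mOhm
print(reported_power(560, r_stock, r_mod))  # 560 W real reads as roughly half
```

Which is exactly why a shunted card sails past its BIOS power limit: the limiter only ever sees the under-read number.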


----------



## bmagnien

x


----------



## sultanofswing

jomama22 said:


> I mentioned temps in my explanation. This also occurs going from ~17c when set to ~31 under load, so I imagine it's any 10+*C interval from the set temperature or even smaller temp differences.


It starts bin drops even lower than 30c.


----------



## Falkentyne

bmagnien said:


> just wanted to add some recent experience for the good of the group. i currently have 2 ftw3 3090s, one with a hybrid kit installed and one stock air. i had previously pcie slot silver stack shunted the hybrid (to bypass EVGAs power balancing issue), but as I will be selling one, I decided to remove the shunt. I then put the 1000w xoc bios on both to be able to simulate the power draw I could achieve with the shunt (~550w) within my system (sfx 750w power supply, CPU PPT limited to 150w (5950x)). The goal was to test both to try to figure out the better bin, but this is made complicated given that one has the hybrid kit and one doesn't. Tried both on 3c ambient outside temps to push for some good scores just to see what was what.
> 
> I got really great scores using the 1000w bios, passing all synthetic benches I tried that pushed both GPU and CPU. And I was happy I wouldn't need to worry about shunting with these new bios.
> 
> BUT, when I actually tried to game on this bios, I got instant shut downs on my computer. I first thought it was power spikes over my 750w sfx PS, so I limited my CPU PPT, and dropped vbios down to 50%, 45%, 40%, but was still getting shutdowns. Mainly in CP2077 as soon as I'd load a save, but also in RDR2 about 10 mins in, and sometimes SOTTR wouldn't even start the program.
> 
> Reset my system BIOS to stock settings to eliminate any other issues (XMP, PBO, etc.). Still got shutdowns. DDU'd and tried several different drivers, still shutdowns. The only way I could make the 1000w bios stable for actual GAMING was limiting it to 30% power, which obviously wasn't viable for achieving the performance I was getting with just the shunted PCIE slot and the 520w KP bios. I was also testing in windowed mode with GPU-Z and HWiNFO up, checking real-time full-system power draw and temps across all GPU and ICX sensors to rule out power delivery spikes or temp spikes being the cause. Never saw anything even close to out of spec leading up to the shutdowns.
> 
> Long story short, I'll be silver stack shunting both cards and testing with the KP 520 bios and keeping the better performer. I currently have one reshunted and am pulling 560w again, and it's performing great in both gaming and synthetics. So there must be some actual performance variation between the KP520 and the 1000w bios that was causing my system instability. I don't know enough about the inner workings of the bios (voltage, advanced settings, etc.) to say what the problem was, but I saw others above questioning the difference and wanted to share my experience between the two other than just 1% differences in 3dmark scores (i.e. full-on system instability).
> 
> Just wanted to share my odd results regarding the 1000w bios in synthetics vs gaming. If anyone reads this and thinks they know what the issue is, I'm all ears. But just wanted to share all the details in case either I missed something, or there's some info here that could help the community. Cheers!


You would need a better PSU in order to test properly. A complete shutdown (if you mean PSU actually turning itself off) is from overload from a transient event.
The 1000W Vbios has internal power limits removed that you have no access to with shunt mods. Have you noticed that if you run Timespy with your PCIE Slot silver shunt, you get a 'double' throttle bar event during GT2? (set core and RAM clocks to stock when doing it to prevent any other normal TDP limits from triggering, then look at GPU-Z after you run the test). 

Then do the same test on a non-shunted card with the power limit set to 55% on the 1000W Vbios and you will not see that throttle happen despite the same approximate power target being used.

Anyway I highly recommend getting a decent PSU for those cards. A Seasonic Prime PX-1000 (OneSeasonic model) or Prime TX-1000 (which is more expensive) will work well. But not the older "Prime Ultra Platinum" or "Prime Ultra Titanium" as those have the old OCP circuit.


----------



## Sheyster

Falkentyne said:


> Anyway I highly recommend getting a decent PSU for those cards. A Seasonic Prime PX-1000 (OneSeasonic model) or Prime TX-1000 (which is more expensive) will work well. But not the older "Prime Ultra Platinum" or "Prime Ultra Titanium" as those have the old OCP circuit.


I agree 100%. I'm not comfortable with a 1000W PSU anymore. I'm having a heck of a time finding anything good in the 1200-1300 watt range, platinum or titanium efficiency. Everything is sold out it seems.


----------



## bmagnien

Falkentyne said:


> You would need a better PSU in order to test properly. A complete shutdown (if you mean PSU actually turning itself off) is from overload from a transient event.
> The 1000W Vbios has internal power limits removed that you have no access to with shunt mods. Have you noticed that if you run Timespy with your PCIE Slot silver shunt, you get a 'double' throttle bar event during GT2? (set core and RAM clocks to stock when doing it to prevent any other normal TDP limits from triggering, then look at GPU-Z after you run the test).
> 
> Then do the same test on a non-shunted card with the power limit set to 55% on the 1000W Vbios and you will not see that throttle happen despite the same approximate power target being used.
> 
> Anyway I highly recommend getting a decent PSU for those cards. A Seasonic Prime PX-1000 (OneSeasonic model) or Prime TX-1000 (which is more expensive) will work well. But not the older "Prime Ultra Platinum" or "Prime Ultra Titanium" as those have the old OCP circuit.


this makes sense, thanks for the explanation falk. im building out a heavily modded custom sffpc closed loop. im completely comfortable being limited to the best currently available sfx psu (which is the corsair sfx750). I was just trying to push the OC limits within the constraints I've set for myself, and wanted to understand exactly what was happening. 

If a transient event is actually pushing power beyond what my PSU is capable of, that means my card is asking for well more than 550w. Considering that an UNSHUNTED evga card pulls close to 100w through the pcie slot on the 1000w bios set to 55%, I'm frankly happy to have my computer turn off rather than pull even more power through the card, and thus through the PCIE slot, during those transient spikes. The transient spike would've had to be over 650w for my PSU to fail, meaning PCIe slot power draw would have been dangerously close to the 120w fuse limit, and frankly to the mobo's safety limit and my riser cable's safety threshold. This has to be why GN's Steve fried his fuse.

So...given that I've resigned myself to EVGA's dodgy power balancing and to a 'sub-par' 750w PSU (but certainly not a low-quality SFX PSU), I think I'll just settle for the 520w KP bios with a silver-shunted pcie slot and call it a day.

Thanks all!
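A back-of-the-envelope budget shows why this setup sits right on the edge. The 750w PSU, 150w CPU PPT and ~550w GPU figures are from the posts above; the "rest of system" figure and the size of the GPU excursion are assumptions for illustration.

```python
# Rough power budget for the setup described above: 750 W SFX PSU, CPU capped
# at 150 W PPT, GPU pulling ~550 W. "Rest of system" and the 20% transient
# excursion are illustrative assumptions.
PSU_W = 750
budget = {"gpu_sustained": 550, "cpu_ppt": 150, "rest_of_system": 40}

sustained = sum(budget.values())
headroom = PSU_W - sustained
print(sustained, headroom)   # sustained draw leaves almost no headroom

# Even a modest 20% GPU excursion blows well past the PSU rating:
spike = sustained - budget["gpu_sustained"] + budget["gpu_sustained"] * 1.2
print(spike)
```

With essentially zero sustained headroom, any transient at all lands on the PSU's protection circuitry, which matches the instant shutdowns described.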


----------



## SoldierRBT

@bmagnien 

Best SFX PSU is SilverStone Technology SX1000 80 Plus Platinum 1000W Fully Modular SFX-L Power Supply. Currently out of stock. 






www.amazon.com


----------



## bmagnien

SoldierRBT said:


> @bmagnien
> 
> Best SFX PSU is SilverStone Technology SX1000 80 Plus Platinum 1000W Fully Modular SFX-L Power Supply. Currently out of stock.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> www.amazon.com


this is an sfx-L form factor, larger than SFX.

there's a coolermaster 850 sfx but I prefer corsair's build and cable quality. there's also a great wall 850 sfx, which is corsair's OEM, that I'm looking into as well.

another question is, if this was a transient event, why did I never have this transient event in a vast array of synthetic benches across multiple days, but only in games? Again, my lack of understanding on this topic is on full display so forgive the stupid question. Thanks!


----------



## Falkentyne

bmagnien said:


> this makes sense, thanks for the explanation falk. im building out a heavily modded custom sffpc closed loop. im completely comfortable being limited to the best currently available sfx psu (which is the corsair sfx750). I was just trying to push the OC limits within the constraints I've set for myself, and wanted to understand exactly what was happening.
> 
> If a transient event is actually pushing power beyond what my PSU is capable, that means that my card is asking for well more than 550w. Considering that an UNSHUNTED evga card pulls close to 100w through the pcie slot on the 1000w bios set to 55%, I frankly am happy to have my computer turn off rather than pull even more power through the card, and thus through the PCIE slot, during those transient spikes. The transient spike would've had to have been over 650w for my PSU to fail, meaning that PCIe slot power draw would have been dangerously close to the 120w fuse limit. And frankly mobo safety limit and my riser cable safety threshold. This has to be why GN's steve fried his fuse.
> 
> So...given that I've resigned myself to EVGA's dodgy power balancing, and to a 'sub par' 750w PSU (but certainly not a low quality SFX PSU), I think I'll just settle for the 520w KP bios with a silver shunted pcie slot and call it a day.
> 
> Thanks all!


It's not overloading the PSU. It's the PSU thinking it's being overloaded.
Basically, a sudden rise in current makes the PSU think something is going to exceed a threshold (even though it actually isn't), so it shuts itself off. It doesn't even have to be a rise in wattage close to the PSU limit; what matters is how fast the wattage changes and gets 'close' to a limit.
Think of how you react when you drop a glass bowl a foot onto carpet: you panic like it's going to shatter all over the place, but it actually won't. You go into panic mode the moment you drop it, see?
People were tripping the old Seasonic SS-1050XP Platinum PSU (I think the 2014 model) with stock 3090s. And that's a 1050W PSU. All because of an overly strict transient OCP circuit.
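My glass-bowl analogy as a toy sketch in Python, if it helps: the supply trips not on sustained overload but on how fast the current rises toward its limit. Every threshold and sample value here is invented for illustration, not taken from any real PSU's datasheet.

```python
# Toy model of an over-sensitive transient OCP circuit on the 12 V rail.
# All numbers are illustrative, not from any real PSU specification.

def ocp_trips(samples, limit_amps=62.0, slew_limit_amps_per_ms=40.0):
    """Return True if the (time_ms, amps) samples would trip this toy OCP.

    Trips on either a genuine overcurrent, or a fast rise that merely lands
    'close' to the limit (>80%), which is the transient false-trip case.
    """
    for (t0, i0), (t1, i1) in zip(samples, samples[1:]):
        slew = (i1 - i0) / (t1 - t0)            # amps per millisecond
        if i1 > limit_amps:
            return True                          # genuine overload
        if slew > slew_limit_amps_per_ms and i1 > 0.8 * limit_amps:
            return True                          # transient false trip
    return False

# A 3090-style spike: 30 A -> 52 A in 0.25 ms. The peak is only ~84% of the
# limit, yet the rate of rise alone trips the supply.
spike = [(0.0, 30.0), (0.25, 52.0), (0.5, 33.0)]
print(ocp_trips(spike))     # True: tripped by slew, not by sustained load

# A high but slow-moving load near the same level is fine.
steady = [(0.0, 50.0), (1.0, 55.0), (2.0, 55.0)]
print(ocp_trips(steady))    # False
```

The point of the sketch: neither trace ever exceeds the actual current limit, yet the spiky one shuts the system down anyway.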


----------



## bmagnien

Falkentyne said:


> It's not overloading the PSU. It's the PSU thinking it's being overloaded.
> Basically, a sudden rise in current causes the PSU to think that something is going to exceed a threshold (even though it actually isn't) and then it shuts itself off.
> Think of how you react when you drop a glass bowl 1 foot onto a carpet and you panic like it's going to shatter all over the place but it actually won't. You go into panic mode when you drop it, see?
> People were tripping the old Seasonic SS-1050 XP Platinum PSU (I think the year 2014 model) with stock 3090's. And that's a 1050W PSU. All because of an overly strict transient OCP circuit.


Very interesting, and a great explanation. Thanks again. I've sort of painted myself into a corner with this sffpc goal, but that's what I wanted and I'm comfortable accepting the limitations of my choices lol. But I'm always open to alternatives, and this community is great for that.


----------



## Falkentyne

bmagnien said:


> this is an sfx-L form factor, larger than SFX.
> 
> there's a coolermaster 850 sfx but I prefer corsairs build and cable quality. there's also a great wall 850 sfx which is corsairs OEM that I'm looking into as well.
> 
> another question is, if this was a transient event, why did I never have this transient event in a vast array of synthetic benches across multiple days, but only in games? Again, my lack of understanding on this topic is on full display so forgive the stupid question. Thanks!


Run the OCCT (latest beta version) PSU test and you'll get your shutdown.
Games draw a lot more out of the CPU than a synthetic benchmark does, and the loads can be a lot more erratic.


----------



## bmagnien

Falkentyne said:


> Run the OCCT (latest beta version) PSU test and you'll get your shutdown
> Games draw a lot more out of the CPU than a synthetic benchmark does, and the loads can be a lot more erratic.


got it. thanks for taking the time to answer all my questions. you've provided pretty much all the transparency i could've asked for on what was actually happening, and have given me all the info I need to make the right decisions going forward. thanks again!


----------



## Exilon

Falkentyne said:


> Anyway I highly recommend getting a decent PSU for those cards. A Seasonic Prime PX-1000 (OneSeasonic model) or Prime TX-1000 (which is more expensive) will work well. But not the older "Prime Ultra Platinum" or "Prime Ultra Titanium" as those have the old OCP circuit.


Seasonic is taking RMAs for their older Prime flagship PSUs and shipping out equivalent TX replacements. A contact of mine got their RMA approved 15 minutes after filing at 2AM on NYE so I assume there's some automatic approval process for those PSUs which should all still be in warranty.


----------



## bmagnien

Exilon said:


> Seasonic is taking RMAs for their older Prime flagship PSUs and shipping out equivalent TX replacements. A contact of mine got their RMA approved 15 minutes after filing at 2AM on NYE so I assume there's some automatic approval process for those PSUs which should all still be in warranty.


Must’ve been one wild NYE party!


----------



## pat182

Everyone say hi to my new arrival. I'm pulling the 850watt evga before it implodes; I was getting shutdowns past +1400 mem clock.


----------



## Falkentyne

pat182 said:


> everyone says hi for the arrival of my new child, im pulling off my 850watt evga before it implode. i was getting shutdown pass +1400 mem clock


Doesn't that PSU have the old OCP transient system?











The OneSeasonic GX-1300 hasn't been released yet.
That being said it should have enough leeway to work well.


----------



## bmagnien

Is there a way to do a low power consumption, high temperature test to compare two cards to see which has better silicon? I have a hybrid kit on one and stock air cooler on the other so I won't be able to control temps very well to make it a fair comparison from a thermal standpoint. I could limit power consumption to make that a fair comparison though.

Could i just try to set a really aggressive curve like 1v 2100mhz and run Heaven benchmark and see what card can do the highest mhz at the lowest voltage? Or will temp affect that?


----------



## pat182

Falkentyne said:


> Doesn't that PSU have the old OCP transient system?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The OneSeasonic GX-1300 hasn't been released yet.
> That being said it should have enough leeway to work well.


The model is SSR-1300GD; isn't that the TBA one?

Anyhow, I won't get anywhere near that. I'm gonna be at 50% on normal gaming loads, which should be plenty.


----------



## Nizzen

Good bye 2080ti strix oc white, hello 3090 strix oc white. Workstation upgrade 🤓


----------



## reflex75

I like the community, so here is my experience with the PSU shutdown problem.
I have a 750w Seasonic Prime from 2017, bought for my previous MSI 2080 Ti Gaming X Trio.
I used that GPU with its 406w bios and never had a single shutdown!!
Since I installed my new 3090 FE, the same PSU started shutting down even at the default 350w!!
My point is, the culprit is not the PSU but Ampere's strange power behavior.
After doing some tests, I found that the shutdown occurs most often when I [Alt-Tab] during heavy-load games like Cyberpunk 2077.
In this game, at default power (100%, 350w FE), voltage is in the 0.900v-0.950v range.
Now, when doing several [Alt-Tab]s, I can sometimes see crazy high voltage spikes at 1.08v in the monitoring tool (with a faster 0.2s refresh)!
After these instant spikes, the voltage drops back to where it should be, around ~0.90v, to stay within the max allowed power (350w) during the heavy scenes of CP2077!
The real problem is not the power target but Ampere's out-of-control voltage spikes.
I don't know if Nvidia will/can fix it with a driver update, but in the meantime I have found a workaround.
With this workaround, my 3090 FE at maximum power (114%, 400w) no longer triggers the PSU safety shutdown (stable again, like my previous 2080 Ti at 406w).
The goal is to prevent the voltage from rising to its max of 1.07v or 1.08v (depending on temp), even for an instant, between rapid load changes.
In the case of Cyberpunk 2077, 400w of power puts my voltage around 0.950v.
So if I limit my voltage for this game to 0.950v while targeting 400w power consumption, I no longer face any shutdowns, while I did at the lower default 350w!
Crazy, no? Higher power but more stable!
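The spike-hunting I did can be sketched as a simple filter over fast-polled voltage samples. The log below is made up to mirror the numbers above (~0.90-0.95v steady, spikes to ~1.08v); a real log would come from your monitoring tool's fast-refresh export.

```python
# Flag GPU voltage samples that jump well above the steady-state level.
# The sample values are illustrative, mirroring the post, not a real capture.
from statistics import median

def find_spikes(volts, headroom=0.05):
    """Return indices of samples more than `headroom` volts above the median."""
    baseline = median(volts)
    return [i for i, v in enumerate(volts) if v - baseline > headroom]

log = [0.91, 0.93, 0.92, 1.08, 0.90, 0.94, 1.07, 0.92]
print(find_spikes(log))   # [3, 6]: the alt-tab spikes stand out
```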


----------



## bmagnien

reflex75 said:


> I like the community, so here is my sharing regarding the problem about the PSU shutdown.
> I have myself a 750w Seasonic Prime from 2017 bought for my previous MSI 2080ti Trio Gaming X.
> I used this GPU with its 406w bios, and never had any single shutdown!!
> Since I installed my new 3090FE, my same PSU started to shutdown even at default 350w!!
> My point is, the culprit is not the PSU, but ampere GPU strange power behavior.
> After doing some tests, I found that shutdown occurs more often when [Alt-Tab] during heavy load games like Cyberpunk 2077.
> In this game, at default power (100% 350W FE) voltage is in the range 0.900v-0.950v
> Now, when doing several [Alt-Tab], I can see sometimes in the monitoring tool (with faster refresh every 0.2s) crazy high voltage spikes at 1.08v!
> After instant spikes, the voltage reduces where it should be around ~0.90v to be in the max power allowed (350w) during the heavy scene of CP2077!
> The real problem is not the target power, but ampere out of control voltage spikes.
> I don't know if Nvidia will/can fix it with driver update, but in the meantime I have found a workaround.
> With this workaround, my 3090FE at maximum power 114% 400w does not trigger the PSU safety shutdown (stable again like my previous 2080ti 406w)
> The goal is to prevent the voltage to rise to its max 1.07v or 1.08v (depending on temp) even for an instant between rapid load changes.
> In the case of Cyberpunk 2077, 400w power put my voltage around 0.950v.
> Now if I limit my voltage for this game at 0.950v to target 400w power consumption, then I won't face any more shutdown, while it was the case at lower default 350w!
> Crazy no? Higher power but more stable!


Wait...what’s your method for limiting voltage to a predetermined limit? Can you describe it in detail? What program, what steps?


----------



## Exilon

Power limiting alone is not enough to prevent the spikes. The power limit is not instant; when it's exceeded, the GPU pulls back on the boost table to reduce power, rather than preventing the boost from happening in the first place.



bmagnien said:


> Wait...what’s your method for limiting voltage to a predetermined limit? Can you describe it in detail? What program, what steps?


Use MSI Afterburner and Ctrl-F to bring up the V-F boost table. Go look at an undervolting guide to see how to modify the table easily.


----------



## reflex75

bmagnien said:


> Wait...what’s your method for limiting voltage to a predetermined limit? Can you describe it in detail? What program, what steps?


A simple undervolt can do the trick.
The problem is that different games put different loads on power, which means you need to adjust your UV to the current game/application.
We're lacking some sort of dynamic undervolt (like a negative voltage %).


----------



## reflex75

Exilon said:


> Power limiting alone is not enough to prevent the spikes.. Power limit is not instant; when the power limit is exceeded, the GPU pulls back on the boost table to reduce power, rather than preventing the boost from happening in the first place.


That was my point: the power-limit algorithm doesn't work fast enough to prevent spikes, while limiting voltage prevents spikes and improves stability, even at higher power!


----------



## bmagnien

Right, I know how to edit the voltage/frequency curve; I didn't know there was a way to put a hard lock on the upper limit of the allowed voltage though. I know you can lock it to a specific frequency/voltage point, but I don't want to do that; I'd want it to use the standard curve up until a set voltage threshold.


----------



## bmagnien

Lol wow, I just tried your method, reflex. Locking it to 2055mhz at 1v solved the 1000w bios instant crash in Cyberpunk...


----------



## Exilon

bmagnien said:


> Right I know how to edit the voltage/frequency curve, I didn’t know there was a way to put a hard lock on the upper limit of the allowed voltage though. Like I know you can lock it to a specific frequency/voltage point, but I don’t want to do that, I would want it to use standard curve up until a set voltage threshold


Flatten it after a specific VF point by dragging all the points after it down below the selected point. The boost algorithm uses the lowest voltage for a specific frequency so it will not exceed the VF point you specified.
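My flattening trick is easy to picture as plain data manipulation. The curve values below are hypothetical; the real editing happens visually in Afterburner's Ctrl-F window.

```python
# Flatten a voltage (mV) -> frequency (MHz) boost table past a chosen point so
# the boost algorithm, which picks the lowest voltage for a given frequency,
# never exceeds that voltage. Table values are made up for illustration.

def flatten_after(curve, max_mv):
    """Clamp every point above `max_mv` to the best frequency at or below it."""
    cap_mhz = max(mhz for mv, mhz in curve if mv <= max_mv)
    return [(mv, mhz if mv <= max_mv else cap_mhz) for mv, mhz in curve]

curve = [(850, 1800), (900, 1905), (950, 1995),
         (1000, 2055), (1050, 2100), (1081, 2130)]
print(flatten_after(curve, 1000))
# Points past 1.000 V now offer no extra frequency, so boost stops at 1.000 V / 2055 MHz.
```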


----------



## bmagnien

I love this community. Thanks reflex and exilon!


----------



## pat182

reflex75 said:


> I like the community, so here is my sharing regarding the problem about the PSU shutdown.
> I have myself a 750w Seasonic Prime from 2017 bought for my previous MSI 2080ti Trio Gaming X.
> I used this GPU with its 406w bios, and never had any single shutdown!!
> Since I installed my new 3090FE, my same PSU started to shutdown even at default 350w!!
> My point is, the culprit is not the PSU, but ampere GPU strange power behavior.
> After doing some tests, I found that shutdown occurs more often when [Alt-Tab] during heavy load games like Cyberpunk 2077.
> In this game, at default power (100% 350W FE) voltage is in the range 0.900v-0.950v
> Now, when doing several [Alt-Tab], I can see sometimes in the monitoring tool (with faster refresh every 0.2s) crazy high voltage spikes at 1.08v!
> After instant spikes, the voltage reduces where it should be around ~0.90v to be in the max power allowed (350w) during the heavy scene of CP2077!
> The real problem is not the target power, but ampere out of control voltage spikes.
> I don't know if Nvidia will/can fix it with driver update, but in the meantime I have found a workaround.
> With this workaround, my 3090FE at maximum power 114% 400w does not trigger the PSU safety shutdown (stable again like my previous 2080ti 406w)
> The goal is to prevent the voltage to rise to its max 1.07v or 1.08v (depending on temp) even for an instant between rapid load changes.
> In the case of Cyberpunk 2077, 400w power put my voltage around 0.950v.
> Now if I limit my voltage for this game at 0.950v to target 400w power consumption, then I won't face any more shutdown, while it was the case at lower default 350w!
> Crazy no? Higher power but more stable!


My 3090 with the 520watt bios on an evga 850watt would only trip if I went full mega OC on mem and cpu; I never had a shutdown even alt-tabbing at 520watt in a game. Maybe the evga was a little more forgiving, but it was still at its limit, and I'm not sure it's wise to push a psu to its limit for hours of gaming.


----------



## chiny1990

Hello, I bought a Zotac Trinity 3090 and saw that its BIOS is only 350w. On TechPowerUp I found the KFA2 RTX 3090 390watt vBIOS, and I also found a 1000w bios.

I can't flash the 1000w bios with nvflash. Why not? How can I flash it on my 3090 Trinity?


----------



## bmgjet

chiny1990 said:


> Hello i bought a zotac trinity 3090, i saw that the BIOS have only 350w, i found in techpowerup that bios.
> KFA2 RTX 3090 VBIOS 390watts
> And i found a 1000w bios
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I cant flash the 1000w bios with nvflash, why? How can i flash it on my 3090 trinity?


That 1000W one won't work with NVFlash since it's not signed, so you need to use an external programmer.
Then you'll have problems with it, since the checksum mismatch in the power tables will lock your clocks at the 250mhz safe-mode clocks.
It was made in ABE 001, before I had that checksum worked out, and it was uploaded by an idiot friend who thought TPU would re-sign it, since they modify all the bioses you upload through GPU-Z to black out the UID.
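To illustrate why an edited table needs its checksum recomputed: many firmware images guard tables with a simple byte-sum checksum where the whole table must sum to 0 mod 256. To be clear, this is a generic convention for illustration, not the actual Ampere power-table scheme I worked out.

```python
# Generic byte-sum checksum repair: after editing table bytes, recompute the
# checksum byte so the table sums to 0 mod 256 again. Illustrative only; this
# is NOT the Ampere power-table algorithm.

def fix_checksum(table: bytearray, csum_index: int = -1) -> bytearray:
    """Recompute a byte-sum checksum so the whole table sums to 0 mod 256."""
    body = sum(b for i, b in enumerate(table) if i != csum_index % len(table))
    table[csum_index] = (-body) % 256
    return table

# Edited power values followed by a now-stale checksum byte (hypothetical data)
table = bytearray([0x02, 0x58, 0x01, 0x90, 0x00])
fix_checksum(table)
print(sum(table) % 256)   # 0: the table verifies again
```

Without this step, a verifier that re-sums the table sees a mismatch, which is exactly the condition that drops the card into 250mhz safe-mode clocks.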


----------



## chiny1990

bmgjet said:


> That 1000W one wont work in NVFlash since its not signed so you need to use a external programer.
> Then youll have problems with it since checksum mismatch for power tables so your clocks will be locked at safe mode clocks 250mhz.
> It was made in ABE 001 before I had that checksum worked out and uploaded by a idiot friend that thought TPU would resign it since they modifiy all the bioses you upload though gpuz to black out the UID.


Thanks a lot. Is the 390w bios the best one for this card?


----------



## Gandyman

Hey quick question for you lads, 

So in RT titles, my iChill X3 3090 goes to 100% fan speed after about 5 minutes and thermal throttles down to about 1300mhz, then further down to 1100, while being pegged at 84c. And the noise is COMPLETELY UNBEARABLE. It makes the R9 290 stock blower cooler seem whisper quiet.

Taking the advice of someone in this thread, I changed the thermal paste. After seeing no change, I opened it again to check whether my method worked. For GPUs I don't like doing a 'pea' or 'x', as I'm paranoid that with no IHS a small pocket without coverage may burn a hole, so I do a generous 'spread' (someone tell The Verge I actually use the thermal paste applicator for something), but nothing changed thermally.

Undervolting doesn't seem to do anything. I have it set to 750mv @ 1700mhz and it doesn't change the card's behaviour at all.

_Note that in non-RT workloads it follows the voltage curve more closely: it sits at 1648-1695 at about 70-75c with fans at roughly 60%, which is still SUPER loud._

So the actual question: is this expected behavior from a cheap Chinese card, and am I reaping the consequences of my impatience by buying the first 30-series card that became available in my country? Or could something be wrong with my unit? Or is this just 3090 behaviour in general?

Bonus question: would a top-end 3080 that can run 2000+ mhz outperform the worst 3090 in the world running at ~1400mhz? I could probably sell this 3090, at quite a loss, to a watercooler or professional who could use it, and get a 3080 Strix or something when they become available. Although by then we'll probably be looking at the 3080 Ti -_-


----------



## Exilon

Yes, a 3080 at 2GHz would beat a 3090 at 1.4GHz.
It's hard to believe a cooler can be that bad; reviews of it look fine.
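A rough sanity check on that, using CUDA cores x clock as a proxy for FP32 throughput. This ignores memory bandwidth, where the 3090's 384-bit bus still wins, so it's only a ballpark.

```python
# Back-of-envelope FP32 throughput: 2 FLOPs per CUDA core per cycle (FMA).
def tflops_fp32(cuda_cores, clock_mhz):
    return 2 * cuda_cores * clock_mhz * 1e6 / 1e12

rtx3080_2ghz = tflops_fp32(8704, 2000)        # 3080 has 8704 CUDA cores
throttled_3090 = tflops_fp32(10496, 1400)     # 3090 stuck at ~1.4 GHz
print(f"3080 @ 2.0 GHz: {rtx3080_2ghz:.1f} TFLOPS")   # ~34.8
print(f"3090 @ 1.4 GHz: {throttled_3090:.1f} TFLOPS") # ~29.4
```

So the 2GHz 3080 comes out roughly 18% ahead on raw shader throughput, before bandwidth even enters the picture.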


----------



## Gandyman

Exilon said:


> Yes, 3080 at 2Ghz would beat a 3090 at 1.4GHz
> It's hard to believe that a cooler can be so bad. Reviews of it look fine.


To be fair to it, with fans on the default noise profile and not playing an RT title, it sits around 1810mhz; RT titles lower clocks by a good 500mhz to stay cool, however.


----------



## SolarBeaver

SolarBeaver said:


> Hey, is it normal for strix to draw only ~40w from pcie slot? Stock bios, full 123% tdp.


help me please...


----------



## WMDVenum

SolarBeaver said:


> help me please...


What do the pins pull?


----------



## pat182

reflex75 said:


> A simple undervolt can do the trick.
> The problem is different games have different loads on power, which means you need to adjust your UV to the current game/application.
> We are lacking some sort of dynamic undervolt (like negative voltage %)


If you tune for the hardest games, like CP2077 or Control without DLSS with full RT, any custom curve will be OK for other apps, because those two are the worst.


----------



## SolarBeaver

WMDVenum said:


> What do the pins pull?


1st pin 130-135w, 2nd 150-160w, 3rd 135-140w


----------



## Falkentyne

pat182 said:


> the model is SSR 1300GD, isnt the TBA one ?
> 
> anyhow, i wont get anywhere near that, im gonna be at 50% on normal gaming loads, should be plentty


No, that's the older version. The TBA one wouldn't have a "2017 year awards" sticker on it.
It would also say "Prime GX-1300" directly on the box, not "Prime Gold / Gold Ultra", etc.

Here is the GX-1000 box for reference.



https://cdna.pcpartpicker.com/static/forever/images/product/c430f6a50502b8127832d06d7e21dfa8.1600.jpg


----------



## Falkentyne

bmgjet said:


> That 1000W one wont work in NVFlash since its not signed so you need to use a external programer.
> Then youll have problems with it since checksum mismatch for power tables so your clocks will be locked at safe mode clocks 250mhz.
> It was made in ABE 001 before I had that checksum worked out and uploaded by a idiot friend that thought TPU would resign it since they modifiy all the bioses you upload though gpuz to black out the UID.


Can you check your PM?


----------



## Herald

I have an 850 Corsair HXi platinum with a 3090 @ 520w bios and an OCed 10900k. Never had shutdowns, even with extreme +1700mhz ram / 2150+ core frequencies.


----------



## pat182

Falkentyne said:


> No, that's the older version. The TBA one wouldn't have a "2017 year awards" sticker on it
> It would also say directly on the box "Prime GX-1300", and not "Prime Gold / Gold Ultra", etc.
> 
> Here is the GX-1000 box for reference.
> 
> 
> 
> https://cdna.pcpartpicker.com/static/forever/images/product/c430f6a50502b8127832d06d7e21dfa8.1600.jpg
> 
> 


Oh well, I just finished some quick benches with it, and nothing's wrong so far. I think I'll be OK, but the Seasonic stock cables have HUGE rubber sleeves, it's unbelievable. I have trouble bending the 24-pin and 3x 8-pin into a somewhat decent look; my evga ones were much more flexible. Anyway..


----------



## pat182

Herald said:


> I have an 850 Corsair HXi platinum with a 3090 @ 520w bios and an OCed 10900k. Never had shutdowns, even with extreme +1700mhz ram / 2150+ core frequencies.


You live in Greece; maybe the 240v system is more forgiving? Either way, I don't trust running a psu near its max tolerance; it sounds like a fire waiting to happen long term.
Also, the closer you get to max psu power, the less efficient it is; ideally you want it loaded at 50% for maximum efficiency.
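The 50% rule of thumb is easy to sanity-check. The wattages below are illustrative round numbers for a 520w-BIOS card with an overclocked cpu plus drives/fans, not measurements from my system.

```python
# Rough PSU load check: 80 Plus efficiency curves peak near 50% load, and
# living near 100% is both less efficient and harder on the supply.
def psu_load_pct(gpu_w, cpu_w, rest_w, psu_w):
    return 100.0 * (gpu_w + cpu_w + rest_w) / psu_w

# Illustrative: 520 W GPU BIOS + OC'd CPU + drives/fans on an 850 W unit
print(f"{psu_load_pct(520, 250, 60, 850):.0f}% load")    # ~98%: right at the limit
# The same system on a 1300 W unit sits much closer to the sweet spot
print(f"{psu_load_pct(520, 250, 60, 1300):.0f}% load")   # ~64%
```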


----------



## Falkentyne

I took the liberty of fixing that checksum in the Galax 1000W vBIOS. The 8-pin, SRC, and PCIe slot power limits are also set to values that make sense (230W per 8-pin, 120W slot power, 230W SRC 1+2, 175W MVDDC; on most bioses I've seen, the 8 pins actually use the SRC power limit), but AUX1 max is not changeable right now. Rename the attached file to the RAR file extension (example: 227671.rar) rather than txt. I TAKE NO FREAKING RESPONSIBILITY IF IT DOESN'T WORK AND YOUR CARD EXPLODES. I also take no responsibility if the file got corrupted by the rename+upload. I think 580W max is enough for any sane person.

THIS REQUIRES A HARDWARE PROGRAMMER LIKE A SKYPRO OR SIMILAR DEVICE, AND A 1.8V ADAPTER. DO NOT use one without a 1.8v adapter unless the flasher supports NATIVE 1.8v live flashing, unless you want a crispy bios chip. If you're smart, you will do a full backup of your existing bios chip with the hardware programmer. DO NOT RELY ON GPU-Z dumps.

(edit: updated with correct power limit for aux1 max PL)


----------



## GoldCartGamer

Just ordered an Asus TUF OC 3090. Looking at doing some overclocking. Any recommendations for bios? Looking at water-cooling solutions now.


----------



## chiny1990

Falkentyne said:


> I took the liberty of fixing that checksum in the Galax 1000W vBIOS. The 8 pin power limits and SRC and PCIE Slot power limits are also set to values that make sense. (230W per 8 pin, 120W Slot power, 230W SRC 1+2, 175W MVDDC (on most bioses I've seen, the 8 pins actually use the SRC power limit), but AUX1 max is not changeable right now. Rename the attached file to RAR file extension (example: 227671.rar) rather than txt. I TAKE NO FREAKING RESPONSIBILITY IF IT DOESN'T WORK AND YOUR CARD EXPLODES. I also take no responsibility if the file got corrupted by the rename+upload. I think 580W max is enough for any sane person.
> 
> THIS REQUIRES A HARDWARE PROGRAMMER LIKE A SKYPRO OR SIMILAR DEVICE AND A 1.8V ADAPTER. DO NOT use one without a 1.8v adapter unless the flasher supports NATIVE 1.8v live flashing unless you want a crispy bios chip. If you're smart you will do a full backup of your existing bios chip with the hardware programmer. DO NOT RELY ON GPU-Z dumps.


Hello, do you know which BIOS for the 3090 Trinity has a higher power limit?
I need one I can flash with nvflash. I read that the KFA2 390w is the best; is that true?


----------



## Falkentyne

chiny1990 said:


> Hello do you know what It the BIOS to the 3090 Trinity With more Powerlimit
> I need one to flash With nvflash. I red thar kfa 390w is the BEST, is It true?


I don't know. If it's a 3 PCIE Plug card, the XOC FTW3 500W vBIOS is best.
If it's a 2 PCIE Plug card, sorry I don't know.
If you have a SPI programmer (hardware flasher) and a 1.8v adapter and a Pomona 5250 clip and male to female jumper wires, you can try the file I posted (need to rename it to .RAR, then you can extract it).


----------



## inedenimadam

chiny1990 said:


> Hello do you know what It the BIOS to the 3090 Trinity With more Powerlimit
> I need one to flash With nvflash. I red thar kfa 390w is the BEST, is It true?


The PNY 390W bios flashes to the Trinity without any negative side effects. Can't speak for the KFA2, as I haven't tried it. The 1000W KPE also flashes to it, but there are some caveats, some of which have a workaround and some don't: VRAM doesn't downclock, compute workloads are stuck at 410mhz, power readings are completely off, and thermal protections are removed(?). If you aren't watercooling, stick to one of the reference-board 390W bioses for the Trinity.


----------



## WMDVenum

SolarBeaver said:


> 1st pin 130-135w, 2nd 150-160w, 3rd 135-140w


So your card is pulling about 450 watts. What is your bios power limit? 450 watts seems about right for most of the higher-end bioses. Also, is this in Port Royal? Are you power limited, and is that why you're asking?
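Taking the midpoints of the ranges you posted and adding the ~40w slot draw gives a quick total. Illustrative arithmetic only; real per-rail numbers come from your monitoring software.

```python
# Total board power is just the three 8-pin rails plus the PCIe slot draw.
# Midpoints of the reported ranges (watts):
pins = [132.5, 155.0, 137.5]   # the three 8-pin connectors
slot = 40.0                    # unusually low slot draw for a Strix
total = sum(pins) + slot
print(f"{total:.0f} W total board power")   # 465 W, in the ~450-470 W ballpark
```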


----------



## long2905

Gandyman said:


> Hey quick question for you lads,
> 
> So in RT titles my ichill x3 3090 with RT on goes to 100% fan speed after about 5 minutes and thermal throttles down to about 1300 mhz then further down to 1100, while being pegged at 84c. And the noise is COMPLETELY UNBEARABLE. It makes r9 290 stock blower cooler seem whisper quiet.
> 
> Taking the advice of someone in this thread I changed thermal paste, after seeing no change I opened it again to check if my method worked, as for GPUs I don't like doing a 'pea' or 'x' as im paranoid with no IHS a small pocket without coverage may burn a hole. So I do a generous 'spread' (someone tell the verge i actually use the thermal paste applicator for something) but nothing changed thermally.
> 
> Undervolting doesn't seem to do anything. I have it set to 750mv @ 1700 mhz and it doesn't change the cards behaviour at all.
> 
> _note that in non RT workloads it seems to follow more to what the voltage curve dictates, sits at 1648 -1695 at about 70 - 75c with fans on rougly 60% which is still SUPER loud_
> 
> So the actual question: Is this expected behavior from a cheap Chinese card and I'm reaping the consequences of my impatience by buying the first 30 series that became available in my country? Or could something be wrong with my unit? Or is this just 3090 behaviour in general?
> 
> Bonus question: Would a top end model 3080 that can run 2000+ mhz be more performance than the worst 3090 in the world going at ~1400 mhz? I could probably sell this 3090 at quite a loss to a watercooler or professional that could use it and get a 3080 Strix or something when they become avaliable. Although by then we will prbably be looking at 3080ti -_-


You're the first person with an iChill X3 card I've seen so far. Your case makes it look like the fourth fan does wonders on the X4 version lol. Did you check whether the paste is evenly applied? Tried undervolting? I think your best bet (if you can't get another card) is to slap a waterblock on it; then the issue is solved.

The reviews I could find for these Inno3D cards are very conservative, or read just like a PR post.


----------



## cheddle

Just in case anyone was curious: the EVGA FTW3 hybrid kit 'fits' the Nvidia 'reference' board (i.e. the GALAX SG), or the AIO does, at least...

The RGB header plastics needed to be removed from the board, and an insulator such as 'sticky tape' used to avoid a short; that, or grinding around 1mm of copper away from the VRAM copper heat spreader.
I'm using a regular fan cable, stripped, with pins inserted into the pump header (I could use the GPU header, but I'd prefer the extra few watts go to the GPU, not the pump/fans).

The VRMs are naked, and I'm using three 92mm fans crudely zip-tied to the board to cool them.

On a 390w BIOS the VRMs are around 75c per a laser thermometer, basically the same temperature as the rear of the card/VRAM.
I'm not keen to try a 1000w BIOS with naked VRMs, so I'll try to sort something out for that.

My temps are great: very similar to running 5,000rpm 92x32mm server fans on the stock heatsink, but without the ear-splitting noise.


----------



## EDORAM

So the best bios for the Strix is the 520w bios?
I got 15551 in PR with the 1000w, but I think I can increase efficiency...


----------



## thorswrath91

Gandyman said:


> Hey quick question for you lads,
> 
> So in RT titles my ichill x3 3090 with RT on goes to 100% fan speed after about 5 minutes and thermal throttles down to about 1300 mhz then further down to 1100, while being pegged at 84c. And the noise is COMPLETELY UNBEARABLE. It makes r9 290 stock blower cooler seem whisper quiet.
> 
> Taking the advice of someone in this thread I changed thermal paste, after seeing no change I opened it again to check if my method worked, as for GPUs I don't like doing a 'pea' or 'x' as im paranoid with no IHS a small pocket without coverage may burn a hole. So I do a generous 'spread' (someone tell the verge i actually use the thermal paste applicator for something) but nothing changed thermally.
> 
> Undervolting doesn't seem to do anything. I have it set to 750mv @ 1700 mhz and it doesn't change the cards behaviour at all.
> 
> _note that in non RT workloads it seems to follow more to what the voltage curve dictates, sits at 1648 -1695 at about 70 - 75c with fans on rougly 60% which is still SUPER loud_
> 
> So the actual question: Is this expected behavior from a cheap Chinese card and I'm reaping the consequences of my impatience by buying the first 30 series that became available in my country? Or could something be wrong with my unit? Or is this just 3090 behaviour in general?
> 
> Bonus question: Would a top end model 3080 that can run 2000+ mhz be more performance than the worst 3090 in the world going at ~1400 mhz? I could probably sell this 3090 at quite a loss to a watercooler or professional that could use it and get a 3080 Strix or something when they become avaliable. Although by then we will prbably be looking at 3080ti -_-


Maybe try flashing another vbios and check its behaviour while running an RT-intensive workload? Maybe the Gigabyte Gaming OC one, or the KFA2.
If the temps are still unbelievable, slap a waterblock on it and see.


----------



## Herald

pat182 said:


> you live in greece , maybe the 240v system is more forgiving ? either way, im not trusting running a psu near its max tolerance, sound like a fire to happen in long term
> also the more you get near max psu power, the less efficient it is, idealy you want it to be loaded at 50% for maximum efficiency


Well, for my daily usage I usually use the stock bios at the stock power limit, meaning 410w, and the CPU is at ~100w maximum, so I'm at 550-600w max consumption while gaming. I'm not at 50% for sure, but I think I'm way below what's considered dangerous, or so I hope. Thing is, I couldn't find one of the bigger Corsair HXi's, so I was stuck with the 850. Ideally I would like the 1k, but it's too late now; I'm not going to change a newly bought PSU.
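For reference, the load math on that PSU works out like this (a rough Python sketch; the GPU and CPU wattages are the ones stated above, while the "rest of system" figure is an illustrative assumption):

```python
# Rough load estimate for the setup described above (stock 410 W bios, 850 W PSU).
gpu_w = 410     # stock power limit on this bios (from the post)
cpu_w = 100     # stated CPU maximum (from the post)
rest_w = 60     # assumed: fans, pump, drives, motherboard (illustrative)
psu_w = 850     # Corsair HX850i rating

load = (gpu_w + cpu_w + rest_w) / psu_w
print(f"Worst-case load: {load:.0%} of the PSU's rating")  # ~67%
```

So even in the worst case this sits around two-thirds load, past the ~50% efficiency sweet spot but well inside the PSU's continuous rating.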


----------



## Jordyn

Gandyman said:


> Hey quick question for you lads,
> 
> So in RT titles my ichill x3 3090 with RT on goes to 100% fan speed after about 5 minutes and thermal throttles down to about 1300 mhz then further down to 1100, while being pegged at 84c. And the noise is COMPLETELY UNBEARABLE. It makes r9 290 stock blower cooler seem whisper quiet.
> 
> Taking the advice of someone in this thread I changed thermal paste, after seeing no change I opened it again to check if my method worked, as for GPUs I don't like doing a 'pea' or 'x' as im paranoid with no IHS a small pocket without coverage may burn a hole. So I do a generous 'spread' (someone tell the verge i actually use the thermal paste applicator for something) but nothing changed thermally.
> 
> Undervolting doesn't seem to do anything. I have it set to 750mv @ 1700 mhz and it doesn't change the cards behaviour at all.
> 
> _note that in non RT workloads it seems to follow more to what the voltage curve dictates, sits at 1648-1695 at about 70-75c with fans on roughly 60% which is still SUPER loud_
> 
> So the actual question: Is this expected behavior from a cheap Chinese card and I'm reaping the consequences of my impatience by buying the first 30 series that became available in my country? Or could something be wrong with my unit? Or is this just 3090 behaviour in general?
> 
> Bonus question: Would a top end model 3080 that can run 2000+ mhz be more performance than the worst 3090 in the world going at ~1400 mhz? I could probably sell this 3090 at quite a loss to a watercooler or professional that could use it and get a 3080 Strix or something when they become available. Although by then we will probably be looking at 3080ti -_-


I had an iChill X4, and at stock it would average around 81c with fans at 100%, with ambients in the mid to high 20's. With the KFA2 390w bios that would increase to the mid 80's. Clocks would bounce between 1700 and 1850.

I managed to score a "good" deal on the Gaming X Trio which just arrived today and it is night and day different with the MSI card averaging in the high 60s with fans at around 60% at stock. I have just flashed to the Suprim bios and still only hitting around 70 despite the increased power draw.

In short, it looks like the iChill cooling is no good at all, and if you can source another card you would be better off.


----------



## man1ac

So I just got my EVGA 3090 block from Bykski and have a question: the LR22 parts had a ton of thermal pads on them, but on the Bykski cooler there is a 4mm gap to the cooler and no pads for me to put there.

Is that correct? I don't want to damage the card...


----------



## bmgjet

man1ac said:


> So I just got my EVGA 3090 block from Bykski and have a question: the LR22 parts had a ton of thermal pads on them, but on the Bykski cooler there is a 4mm gap to the cooler and no pads for me to put there.
> 
> Is that correct? I don't want to damage the card...


EVGA says you should have them; if you message them, their response is that whatever is cooled from the factory should stay cooled.
But EKWB's block is the same and they say the pads aren't needed.


----------



## man1ac

Okay, that's good to know, so I can use it without them.

I could stack 3x 1.5mm of the pads I have. Would that help? It can't hurt, can it?


----------



## rix2

Alphacool's answer to this question was:
"All components which need cooling will be cooled by our water blocks.
Often the manufacturers use thermal pads as placeholders or to stabilize the air cooler.
With a water block the GPU doesn't perform better; it will be cooler and the GPU can hold the boost longer."

I don't see a big difference between waterblocks...


----------



## SolarBeaver

WMDVenum said:


> So your card is pulling about 450 watts. What is your bios power limit? 450 watts seems about right most the higher end bios. Also is this in port royale? Are you power limited and that is why you are asking?


It's a Strix bios, so the power limit should be 480w with 123% tdp, but for some reason it seems to be lower, like you said, about 450w, so I'm hitting it playing CP2077 in 4k or in benchmarks. I had another Strix and it never hit those limits with the same settings; this one hits them even at lower voltages like 925mv sometimes, which I find odd. It's also a much warmer card (like 7c+) than the old one at the same voltages, but it has a way more stable chip.


----------



## pat182

does the rom chip register the date of a flash? like, could Asus, EVGA or MSI refuse an RMA because they could read that the bios chip was modified later than the manufacturing date?

ex: you put the 1000 watt bios on, then revert back to stock, but need to rma after


----------



## bmgjet

pat182 said:


> does the rom chip register the date of a flash? like, could Asus, EVGA or MSI refuse an RMA because they could read that the bios chip was modified later than the manufacturing date?
> 
> ex: you put the 1000 watt bios on, then revert back to stock, but need to rma after


If they wanted to get down to the nitty gritty, they could check that bios protection has been disabled, since using nvflash you have to disable it.
But if you use their own flashing update tools it stays enabled.
There's no date though; the closest thing would be the flash-wear level count, which goes up each time you rewrite the bios, but they won't check that. At most they will check that the bios is the one for that card and maybe that bios protection is still enabled.


----------



## jomama22

SolarBeaver said:


> It's a Strix bios, so the power limit should be 480w with 123% tdp, but for some reason it seems to be lower, like you said, about 450w, so I'm hitting it playing CP2077 in 4k or in benchmarks. I had another Strix and it never hit those limits with the same settings; this one hits them even at lower voltages like 925mv sometimes, which I find odd. It's also a much warmer card (like 7c+) than the old one at the same voltages, but it has a way more stable chip.


A better chip is going to hit the limit earlier as you have higher clocks at lower voltages. Also, you are more than likely hitting some other power limit, be it memory or a single 8pin, that is causing the power limit to hit earlier.

I had noticed the same thing on my strix before shunting it. The v1 bios has a limit of 121.5w per pin, which is easy to hit especially with the pcie slot power being limited by hardware and not the bios, so it pulls extra from the pins to compensate.
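To make the per-rail vs. total-limit point concrete, here's a quick sketch. The 121.5w per-pin figure is the one from the post above; the slot and board power numbers are illustrative assumptions, not values dumped from a real Strix bios:

```python
# Rough sketch of why a per-rail limit can trip before the total board limit.
PER_8PIN_LIMIT_W = 121.5   # per-connector cap in the v1 Strix bios (per the post)
NUM_8PIN = 3
PCIE_SLOT_LIMIT_W = 66.0   # slot power, hardware-limited (assumed figure)

total_rail_budget = PER_8PIN_LIMIT_W * NUM_8PIN + PCIE_SLOT_LIMIT_W
print(f"Sum of per-rail caps: {total_rail_budget:.1f} W")  # 430.5 W, under the 480 W board limit

# If the slot only delivers, say, 40 W, the connectors must cover the rest,
# so a single 8-pin can hit its cap while total board power is well under 480 W:
board_power = 420.0   # assumed board power draw
slot_draw = 40.0      # assumed actual slot contribution
per_pin_draw = (board_power - slot_draw) / NUM_8PIN
print(f"Per 8-pin draw at {board_power:.0f} W board power: {per_pin_draw:.1f} W")  # ~126.7 W
```

That's how a card can show PerfCap PWR at ~450w on a "480w" bios: one of the individual rails saturates first.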


----------



## SolarBeaver

jomama22 said:


> A better chip is going to hit the limit earlier as you have higher clocks at lower voltages. Also, you are more than likely hitting some other power limit, be it memory or a single 8pin, that is causing the power limit to hit earlier.
> 
> I had noticed the same thing on my strix before shunting it. The v1 bios has a limit of 121.5w per pin, which is easy to hit especially with the pcie slot power being limited by hardware and not the bios, so it pulls extra from the pins to compensate.


I see, thanks a lot!

So worse thermals are also possible due to higher average clocks, even though the voltage is the same, I guess.

It seems I should complement this chip with proper water cooling and a 520w bios; I'm just a bit hesitant because the card will lose its looks and warranty that way.


----------



## Gebeleisis

I installed the Bykski AIO for the GPU yesterday. There is no manual whatsoever. There is no backplate, so I used the one from the card.
At idle I get an 8-9c drop and at max a 5-7c drop, while the noise levels dropped a lot.
I can play games at 55c now with almost no noise at all.
I am waiting for the new case so I can install all of this in a custom loop.


----------



## long2905

Gebeleisis said:


> I installed the Bykski AIO for the GPU yesterday. There is no manual whatsoever. There is no backplate, so I used the one from the card.
> At idle I get an 8-9c drop and at max a 5-7c drop, while the noise levels dropped a lot.
> I can play games at 55c now with almost no noise at all.
> I am waiting for the new case so I can install all of this in a custom loop.


nice, that's a similar temp to the ID-Cooling kit I used on a 2080 in the past. but I don't see a pump in there, is there one?


----------



## Gebeleisis

long2905 said:


> nice, thats similar temp to the id cooling kit i used on a 2080 in the past. but i dont see a pump in there, is there one?


There is a pump inside the radiator. It's connected directly to my Asus mobo's water pump header.
The fans are connected to the Bykski fan controller and controlled by a remote. Although it has an argb header that I have also connected to the motherboard, it does not do anything.


----------



## mischi7

bmgjet said:


> Get the latest version of ABE 006
> 
> File on MEGA
> mega.nz
> 
> That should open them.


Thanks, but this time MS Defender blocks this file as Program:Win32/Wacapew.C!ml


----------



## WMDVenum

SolarBeaver said:


> It's a Strix bios, so the power limit should be 480w with 123% tdp, but for some reason it seems to be lower, like you said, about 450w, so I'm hitting it playing CP2077 in 4k or in benchmarks. I had another Strix and it never hit those limits with the same settings; this one hits them even at lower voltages like 925mv sometimes, which I find odd. It's also a much warmer card (like 7c+) than the old one at the same voltages, but it has a way more stable chip.


When you open GPU-Z does it say it is throttling for anything?


----------



## SolarBeaver

WMDVenum said:


> When you open GPU-Z does it say it is throttling for anything?


Yeah, it throttles with PerfCap PWR quite often if the voltage is 925mv or higher.
Also, if I check the real frequencies with ThermSpy, they can actually be like 50-100 MHz lower than what AB shows, but that could be because of my voltage curve and thermals, I guess.


----------



## stryker7314

What size are the tubes and fittings on the kingpin aio? If I get one I'll modify it and go from there before the waterblocks are available. Hell may just get the aio for the FTW3 and do the same before getting the kingpin.


----------



## bmagnien

Resizable BAR vbios coming in March. Hopefully we’ll be able to get updated versions of the 520w KP, 1000w, and any other of the ‘best’ current vbios once those changes go live, to get the best of both worlds.


----------



## pat182

bmagnien said:


> resizable bar vbios coming in March. Hopefully we’ll be able to get updated versions of the 520w KP, 1000w, and any other of the ‘best’ current vbios once those changes go live to get the best of both worlds.


whut, isnt it just a driver thing , doubt all the people gonna need to flash a bios for that


----------



## bmagnien

pat182 said:


> whut, isnt it just a driver thing , doubt all the people gonna need to flash a bios for that


Confirmed to be vbios, not just drivers. Will require flash. Confirmed by Jacob from evga on Twitter and specifically mentioned by Nvidia in their PR.


----------



## HyperMatrix

Thank God. New bios required to enable resizable bar support. Shunting > 1000W bios for multiple reasons. 








GeForce RTX 30 Series Performance Accelerates With Resizable BAR Support | GeForce News | NVIDIA


Support available now for all GeForce RTX 30 Series Founders Edition graphics cards, and select GeForce RTX 30 Series laptops.



www.nvidia.com


----------



## bmagnien

HyperMatrix said:


> Thank God. New bios required to enable resizable bar support. Shunting > 1000W bios for multiple reasons.
> 
> 
> 
> 
> 
> 
> 
> 
> GeForce RTX 30 Series Performance Accelerates With Resizable BAR Support | GeForce News | NVIDIA
> 
> 
> Support available now for all GeForce RTX 30 Series Founders Edition graphics cards, and select GeForce RTX 30 Series laptops.
> 
> www.nvidia.com


Just out of curiosity, what’s your personal reason for being happy about this being a vbios update? Meaning shunters will be in a good position to reap both benefits immediately, but those relying on the 1000w bios will potentially have to pick one or the other at first. Am I missing something? Not trying to imply anything, just genuinely curious.


----------



## mardon

Praying for a 390w KFA with resizable bar!


----------



## pat182

bmagnien said:


> Confirmed to be vbios, not just drivers. Will require flash. Confirmed by Jacob from evga on Twitter and specifically mentioned by Nvidia in their PR.


o ****, that a whole new lvl of logistic upgrade for average joe user


----------



## HyperMatrix

bmagnien said:


> Just out of curiosity, what’s your personal reason for being happy about this being a vbios update? Meaning shunters will be in a good position to reap the both benefits immediately but those relying on the 1000w bios will have to potentially pick one or the other at first. Am I missing something? Not trying to imply anything just genuinely curious


Since the 1000W bios came out, more than half the posts here have been by new people with no experience and no ability to shunt, asking things along the lines of "is it safe to flash 1000W bios on my air cooled card with piggy backed pcie power connectors? btw my pcie power connectors are white. is that ok?"

Kind of tired of going over the same answer about it being a very unsafe bios over and over and over again.



pat182 said:


> o ****, that a whole new lvl of logistic upgrade for average joe user


I think the notification can be pushed through Precision X. Either way it's just a single EXE file to run. No need for command prompts and nvflash for official bios flashing.


----------



## mardon

HyperMatrix said:


> Since the 1000W bios came out, more than half the posts here have been by new people with no experience and no ability to shunt, asking things along the lines of "is it safe to flash 1000W bios on my air cooled card with piggy backed pcie power connectors? btw my pcie power connectors are white. is that ok?"
> 
> Kind of tired of going over the same answer about it being a very unsafe bios over and over and over again.
> 
> 
> 
> I think the notification can be pushed through Precision X. Either way it's just a single EXE file to run. No need for command prompts and nvflash for official bios flashing.


In addition you'll need a motherboard bios update too.
Precision only pushes EVGA updates, doesn't it? So if you have cards from other vendors, you're in their hands as to whether they release an update.

I agree it's not the most accessible for everyday users. Fine for us though.


----------



## HyperMatrix

mardon said:


> In addition you'll need a motherboard bios update too.
> Precision only pushes EVGA updates, doesn't it? So if you have cards from other vendors, you're in their hands as to whether they release an update.
> 
> I agree it's not the most accessible for everyday users. Fine for us though.


I'm still rocking an old 6950x. Haha. But in the middle of a new build right now. Will have case/radiators/reservoirs/pumps ready this week and waiting for z590/11900k.

And yeah EVGA only pushes their own updates. But it's something MSI could do through afterburner if they wanted. And ASUS does have GPU Tweak. But if you're experienced enough to know not to use something like GPU tweak, you shouldn't have an issue downloading an exe file. Will be interesting to see how it's marketed/pushed though, I'll give you that.


----------



## mardon

HyperMatrix said:


> I'm still rocking an old 6950x. Haha. But in the middle of a new build right now. Will have case/radiators/reservoirs/pumps ready this week and waiting for z590/11900k.


I think the 11900k is going to be a gaming beast.


----------



## Exilon

Exilon said:


> Seasonic is taking RMAs for their older Prime flagship PSUs and shipping out equivalent TX replacements. A contact of mine got their RMA approved 15 minutes after filing at 2AM on NYE so I assume there's some automatic approval process for those PSUs which should all still be in warranty.


Update: the new PSU is the refreshed TX-* series as expected, and so far it hasn't shut down in normal usage with a 3090 FE's power slider maxed out.

On my end, my 3090 FTW3 seems to be hitting a wall at 1.86GHz 0.9V at 420W in the Time Spy Extreme stress test. I have the voltage headroom to go higher, but it's too loud, so I will wait for the waterblock to come in first.


----------



## HyperMatrix

mardon said:


> I think the 11900k is going to be a gaming beast.


Yeah. I mean I would have preferred a 16 core version on intel's 10nm with PCIe 5 and DDR5 support, but honestly....for the purpose of gaming...I'm definitely excited. Between the clock speed increase and IPC boost it should give me about a 65% increase in single threaded performance. I'll give up a few NVMe drives and quad channel memory for that.

Only thing I'm curious about though with the new boards is whether there is a performance difference between running PCIe 4.0 x8 vs x16. We know there's a slight uplift when going from PCIe 3.0 x16 to 4.0 x16, and we know that the 3090 is not currently fully saturating PCIe 3.0 x16, so the gains should be mostly from latency reduction. So theoretically you should be able to run at 4.0 x8 and still get the benefits, but nobody's done that test yet. So it's an unknown. It would be nice to have more than 1 NVMe in the system, which isn't possible if you're running x16.


----------



## bmagnien

HyperMatrix said:


> Since the 1000W bios came out, more than half the posts here have been by new people with no experience and no ability to shunt, asking things along the lines of "is it safe to flash 1000W bios on my air cooled card with piggy backed pcie power connectors? btw my pcie power connectors are white. is that ok?"
> 
> Kind of tired of going over the same answer about it being a very unsafe bios over and over and over again.
> 
> 
> 
> I think the notification can be pushed through Precision X. Either way it's just a single EXE file to run. No need for command prompts and nvflash for official bios flashing.


What’s your current ‘daily driver’ vbios for gaming?


----------



## HyperMatrix

bmagnien said:


> What’s your current ‘daily driver’ vbios for gaming?


Kingpin 520W.


----------



## bmagnien

HyperMatrix said:


> Kingpin 520W.


Word me too. My only concern about the requirement for a vbios update to reap the benefits of resizable bar would be for folks with maybe a PNY or Zotac card where there’s more unknowns about the company’s ease of access to vbios releases and accompanying tools. I think we’ll be in good shape in that the updated 520 KP bios _should_ be easy to access (but still not guaranteed) but not so much for others, regardless of their skill set at shunting.


----------



## HyperMatrix

bmagnien said:


> Word me too. My only concern about the requirement for a vbios update to reap the benefits of resizable bar would be for folks with maybe a PNY or Zotac card where there’s more unknowns about the company’s ease of access to vbios releases and accompanying tools. I think we’ll be in good shape in that the updated 520 KP bios _should_ be easy to access (but still not guaranteed) but not so much for others, regardless of their skill set at shunting.


If you read the tweet from Nvidia, they said they will be providing the vbios to partners, not that you’ll have to wait for whenever the partners decide to make them. But yes, I’m expecting all standard bios to be updated immediately. That 1000W bios was great for many of us, but it got very annoying to deal with, and predictably you had people burning their cards while some others continued touting it as the safest and easiest way to get more power. It definitely is the easiest. But certainly not the safest.


----------



## pat182

mardon said:


> In addition you'll need a motherboard bios update too.
> Precision only pushes EVGA updates, doesn't it? So if you have cards from other vendors, you're in their hands as to whether they release an update.
> 
> I agree it's not the most accessible for everyday users. Fine for us though.


yea, imma need to retire my 9900k for a 11900k


----------



## Exilon

A whole platform change for an "up to" 19% SPEC2017 ST gain seems like a huge hassle... I say, as I prepare to drain my watercooling loop to replace a 280mm radiator and a GPU waterblock.

The only thing giving me doubt about buying Rocket Lake is Alder Lake being right on its heels. That one will be a new 2021 architecture rather than an Ice Lake core backported to 14nm, and the usual suspects that have been leaking Rocket Lake scores on Twitter are saying it will have huge IPC gains.


----------



## mardon

HyperMatrix said:


> Yeah. I mean I would have preferred a 16 core version on intel's 10nm with PCIe 5 and DDR5 support, but honestly....for the purpose of gaming...I'm definitely excited. Between the clock speed increase and IPC boost it should give me about a 65% increase in single threaded performance. I'll give up a few NVMe drives and quad channel memory for that.
> 
> Only thing I'm curious about though with the new boards is whether there is a performance difference between running PCIe 4.0 x8 vs x16. We know there's a slight uplift when going from PCIe 3.0 x16 to 4.0 x16, and we know that the 3090 is not currently fully saturating PCIe 3.0 x16, so the gains should be mostly from latency reduction. So theoretically you should be able to run at 4.0 x8 and still get the benefits, but nobody's done that test yet. So it's an unknown. It would be nice to have more than 1 NVMe in the system, which isn't possible if you're running x16.


Hold on.. Am I losing performance running 2x m.2 drives on a z490 10900k?


----------



## Exilon

The 10900K has a PCIe 3.0 x4 link to the PCH, which hosts all the chipset lanes. This can simultaneously support ~4GB/s PCH->CPU and ~4GB/s CPU->PCH.
Therefore it's impossible to have full unidirectional bandwidth between the CPU and both PCIe 3.0 x4 m.2 drives simultaneously, because that'd be ~8GB/s in one direction, assuming the drives can hit 4GB/s.
If you're reading from one m.2 and dumping it to the other, it won't be a bottleneck, since that uses the read and write directions together.

Basically, I wouldn't worry about it unless you know you have the sequential read/write performance requirements and the drives to support it.
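The DMI bottleneck above is easy to sanity-check with the standard PCIe 3.0 line-rate figures (8 GT/s per lane, 128b/130b encoding); this is back-of-envelope math, not a measurement:

```python
# Back-of-envelope PCIe bandwidth math for the DMI bottleneck described above.
def pcie3_bandwidth_gbps(lanes):
    """Usable unidirectional bandwidth in GB/s for a PCIe 3.0 link."""
    gt_per_s = 8.0           # gigatransfers per second, per lane
    encoding = 128 / 130     # 128b/130b line-coding overhead
    return gt_per_s * encoding * lanes / 8  # bits -> bytes

dmi = pcie3_bandwidth_gbps(4)     # the x4 CPU<->PCH link
drive = pcie3_bandwidth_gbps(4)   # one PCIe 3.0 x4 NVMe drive

print(f"DMI link: {dmi:.2f} GB/s each direction")                      # ~3.94 GB/s
print(f"Two drives reading at once want {2 * drive:.2f} GB/s one way") # ~7.88 GB/s
```

So two x4 drives reading simultaneously want roughly double what the DMI link can carry in one direction, which is exactly the constraint described above.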


----------



## HyperMatrix

mardon said:


> Hold on.. Am I losing performance running 2x m.2 drives on a z490 10900k?


I will fully admit that the whole PCIe lane issue confuses the hell out of me, so don't listen to me on the matter. For all I know, the 11900k actually has more lanes available to it than the 10900k. We know the CPU lanes are increased to 20 (from 16 on the 10900k), and I assume the chipset lanes are still there at 24, distributed to various components. I'd love for someone to answer this for me. But from what I've read, I won't be able to get the GPU running at PCIe 4.0 x16 and upgrade my 3 current NVMe PCIe 3.0 x4 drives to PCIe 4.0 x4 drives without running short of lanes, because the chipset lanes are all PCIe 3.0. So I'd only be able to have 1 GPU at x16 and 1 NVMe at x4 on PCIe 4.0. Everything else would have to stay on PCIe 3.0. But yet again... I'll state that I don't fully understand it, so I'm very open to any new information regarding this.


----------



## mouacyk

HyperMatrix said:


> I will fully admit that the whole PCIe lane issue confuses the hell out of me so don't listen to me on the matter. For all I know, the 11900k actually has more lanes available to it than the 10900k. We know the CPU lanes are increased to 20 (from 16 on the 10900k) and I assume the chipset lanes are still there at 24, distributed to various components. I'd love for someone to answer this for me. But from what I've read, I won't be able to get the GPU running at PCIe 4.0 x16 and upgrade my 3 current NVMe PCIe 3.0 x4 drives PCIe 4.0 x4 drives without running short of lanes because the chipset lanes are all PCIe 3.0. So I'd only be able to have 1 GPU at x16 and 1 NVMe at x4 on PCIe 4.0. Everything else would have to stay on PCIe 3.0. But yet again...I'll state that I don't fully understand it so I'm very open to any new information regarding this.


How you understood it seems to be how it's reported: the traditional 16x lanes for the GPU, plus an additional dedicated 4x lanes for an NVMe drive. Previously, you had to use 4x lanes from the chipset in order to not eat into the 16x lanes dedicated to the GPU.


----------



## mirkendargen

HyperMatrix said:


> Yeah. I mean I would have preferred a 16 core version on intel's 10nm with PCIe 5 and DDR5 support, but honestly....for the purpose of gaming...I'm definitely excited. Between the clock speed increase and IPC boost it should give me about a 65% increase in single threaded performance. I'll give up a few NVMe drives and quad channel memory for that.
> 
> Only thing I'm curious about though with the new boards is whether there is a performance difference between running PCIe 4.0 x8 vs x16. We know there's a slight uplift when going from PCIe 3.0 x16 to 4.0 x16, and we know that the 3090 is not currently fully saturating PCIe 3.0 x16, so the gains should be mostly from latency reduction. So theoretically you should be able to run at 4.0 x8 and still get the benefits, but nobody's done that test yet. So it's an unknown. It would be nice to have more than 1 NVMe in the system, which isn't possible if you're running x16.


I tested PCIE 4.0 16x, PCIE 4.0 8x and PCIE 3.0 16x on my 3090 when I was making sure my riser cable wasn't causing any issues. They were all identical in benchmarks. But I have 64 4.0 lanes so....not an issue.


----------



## bmagnien

mirkendargen said:


> I tested PCIE 4.0 16x, PCIE 4.0 8x and PCIE 3.0 16x on my 3090 when I was making sure my riser cable wasn't causing any issues. They were all identical in benchmarks. But I have 64 4.0 lanes so....not an issue.


You tested all 3 of those against each other directly connected to the mobo, just switching settings in the bios? Or via the riser?

Because I did similar testing and saw repeatable differences between 3.0x16 and 4.0x16:








Riser Comparison Benchmark Results
docs.google.com

                     No Riser         White Linkup 4.0 Riser
                     PCIe 3.0 / 4.0   PCIe 3.0 / 4.0
 Bandwidth (GB/s)    13.03 / 26.24    13.06 / 25.93
 Heaven High FPS*    6980  / 7043     7003  / 6804
 Port Royal          13834 / 13855    13768 / 13679
 Time Spy Extreme    10136 / 10101    ...

*450w bios, +1250/+125 (Black Sliger 3.0 riser column truncated in the sheet preview)


----------



## mirkendargen

bmagnien said:


> You tested all 3 of those against each other directly connected to the mobo and just switching settings in bios? Or via the riser?
> 
> because I did similar testing and saw repeatable differences between 3.0x16 and 4.0x16:
> 
> 
> Riser Comparison Benchmark Results
> docs.google.com


Just changing the settings in BIOS. You also show a riser IMPROVING performance at PCIe 3.0 16x vs. no-riser PCIe 3.0 16x, which is... odd, and shows something else is going on between the tests. I can believe a riser not decreasing performance, but I see no way it increases performance.


----------



## bmagnien

mirkendargen said:


> Just changing the settings in BIOS.


I think you should’ve seen between a 1-5% improvement from 3.0 to 4.0, in games at least. I saw little difference in synthetics but more in gaming benchmarks. You can see the % difference in the spreadsheet. And this is at 1080p, where the numbers are higher and thus more granular, allowing smaller differences to be seen.


----------



## geriatricpollywog

If you run your card in PCIE 4.0 x8, will bandwidth be the same as PCIE 3.0 x16?


----------



## mirkendargen

0451 said:


> If you run your card in PCIE 4.0 x8, will bandwidth be the same as PCIE 3.0 x16?


Yes.


----------



## HyperMatrix

0451 said:


> If you run your card in PCIE 4.0 x8, will bandwidth be the same as PCIE 3.0 x16?


Bandwidth would be the same but latency benefits of PCIe 4.0 would still be there. More PCIe bandwidth may be beneficial once direct storage/RTX IO comes out though.
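The raw numbers back this up: both generations use 128b/130b encoding, and 4.0 simply doubles the line rate, so a quick sketch with the spec rates shows 4.0 x8 and 3.0 x16 landing on identical bandwidth:

```python
# Quick check that PCIe 4.0 x8 matches PCIe 3.0 x16 on raw bandwidth.
def pcie_bandwidth_gbps(gen, lanes):
    """Approximate unidirectional bandwidth in GB/s (gen 3 = 8 GT/s, gen 4 = 16 GT/s)."""
    rate = {3: 8.0, 4: 16.0}[gen]       # GT/s per lane
    return rate * (128 / 130) * lanes / 8  # 128b/130b encoding, bits -> bytes

print(f"PCIe 3.0 x16: {pcie_bandwidth_gbps(3, 16):.2f} GB/s")  # ~15.75 GB/s
print(f"PCIe 4.0 x8:  {pcie_bandwidth_gbps(4, 8):.2f} GB/s")   # ~15.75 GB/s
```

Same ~15.75 GB/s either way; any difference between the two configs would come from link behaviour, not bandwidth.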


----------



## mirkendargen

Also, if there really was a 1-5% improvement with PCIe 4.0 on current cards, you'd see a whole lot more AMD in the HOF and in LN2 overclocking, because that would be a huge advantage comparatively.


----------



## bmagnien

mirkendargen said:


> Also if there really was a 1-5% improvement with PCIE4.0 on current cards, you'd see a whole lot more AMD in the HOF and LN2 overclocking, because that's a huge advantage comparatively.


I’m not sure what you’re implying. Are you saying my data is incorrect? Or that Gamers Nexus’s methodology was wrong? 




Can you share your data points for validation?


----------



## mirkendargen

bmagnien said:


> I’m not sure what you’re implying. Are you saying my data is incorrect? Or that Gamers Nexus’s methodology was wrong?
> 
> 
> 
> 
> Can you share your data points for validation?


I'm definitely saying that all of your tests are within the margin of error, or something else is different between them, because you show a similar performance increase going from PCIE 3.0 no riser->PCIE 3.0 with riser as PCIE 3.0 no riser->PCIE 4.0 no riser. A riser isn't actually increasing your performance, so there's variability between your tests.


----------



## bmagnien

mirkendargen said:


> you'd see a whole lot more AMD in the HOF


have you checked the FS HOF lately?


----------



## bmagnien

mirkendargen said:


> I'm definitely saying that all of your tests are within the margin of error, or something else is different between them, because you show a similar performance increase going from PCIE 3.0 no riser->PCIE 3.0 with riser as PCIE 3.0 no riser->PCIE 4.0 no riser. A riser isn't actually increasing your performance, so there's variability between your tests.


The riser allowed for better thermals by moving the card off the SSD heatsink stack on my ITX board, hence the degradation from the lower bandwidth was offset by the thermal improvement.


----------



## mardon

Exilon said:


> 10900K has a PCIe3x4 link to the PCH which hosts all the chipset lanes. This can simultaneously support ~4GB/s PCH->CPU and ~4GB/s CPU->PCH
> Therefore it's impossible for it to have full unidirectional bandwidth between the CPU and both PCIe3x4 m2 drives simultaneously because that'd be ~8GB/s in one direction, assuming the drives can hit 4GB/s
> If you're reading from one m2 and dumping it to the other, it will not since it'll use the read and write lanes together.
> 
> Basically I wouldn't worry about it unless you know you have the serial read/write performance requirements and the drives to support it.


Thanks for the detailed response.


----------



## HyperMatrix

mirkendargen said:


> Also if there really was a 1-5% improvement with PCIE4.0 on current cards, you'd see a whole lot more AMD in the HOF and LN2 overclocking, because that's a huge advantage comparatively.


No, because Intel with PCIe 3.0 was still outperforming AMD with PCIe 4.0, while AMD did show a few percent performance uplift running PCIe 4.0 compared to 3.0. My personal theory on Intel's superiority in GPU-bound situations is that clock speed matters more than better IPC when the CPU is not being hit heavily.


----------



## mirkendargen

bmagnien said:


> have you checked the FS HOF lately?


AMD CPU's with Nvidia GPU's.


----------



## bmagnien

mirkendargen said:


> AMD CPU's with Nvidia GPU's.


Uh, what? Dude, the FS HOF is like half 6800 and 6900 GPUs. Are you just going to keep denying the facts presented by myself and others to support your own conclusions, or will you take into account any of the data being presented here by others far more qualified than myself?


----------



## mirkendargen

HyperMatrix said:


> No because intel with PCIe 3.0 was still outperforming AMD with PCIe 4.0, while AMD did show a few percent performance uplift when running in PCIe 4.0 compared to 3.0. My personal theory on intel superiority in GPU bound situations is due to the importance of clock speed as opposed to better IPC, when CPU is not being hit heavily.


Maybe...but someone would just throw down a 6GHz+ 5950X on LN2 if records were the goal, and that still isn't what the pros are doing.


----------



## mirkendargen

bmagnien said:


> Uh what? Dude the FS HOF is like half 6800 and 6900 gpus. Are you just going to keep denying the facts presented by myself and others to support your own conclusions or take into account any of the data being presented here and by others far more qualified than myself?


My point is that in benchmarks where Nvidia GPUs are superior (not low-res DX11), if PCIe 4.0 made a sizeable difference, AMD CPUs would be all over the top, and they aren't.

I'm not disputing your data; I'm just disputing the conclusion that the only variable is PCIe 3.0 vs. 4.0, given how close the results are and the sketchy riser differences.


----------



## HyperMatrix

mirkendargen said:


> Maybe...but someone would just throw down a 6Ghz+ 5950X on LN2 if records are being gone for, and that still isn't what the pros are doing.


Yes, but if you can get a 16-core 5950X to 6GHz, you could get the 11900K even higher than that with the same effort.


----------



## bmagnien

mirkendargen said:


> My point is that in benchmarks where Nvidia GPUs are superior (not low res DX11), if PCIE4.0 made a sizeable difference, AMD CPUs would be all over the top, and they aren't.
> 
> I'm not disputing the data, I'm just disputing the conclusion that the only variable is PCIE 3.0 vs. 4.0 with how close they are and sketchy riser differences.


Ifs and buts, my man. You started the conversation by saying "there was no difference between 3.0 and 4.0". If that was your conclusion, you either tested for the wrong things or had flawed testing conditions, because your conclusion is demonstrably false. That's my only point. It's not conducive to the community to make broad sweeping conclusions like that, especially without any data to back them up.

To keep this amicable, I agree with you that there _are_ certain scenarios where there is truly no difference, or such a small difference that it's within margin of error or not worth accounting for. But that's very different from the broad-stroke statement you made previously.


----------



## mirkendargen

bmagnien said:


> Ifs and buts my man. You started the conversation by saying “there was no difference between 3.0 and 4.0” If that was your conclusions, you either tested for the wrong things or had flawed testing conditions. Becaause your conclusion is demonstrably false. That’s my only point. It’s not conducive to the community to make broad sweeping conclusions like that, especially without any data to back it up.
> 
> to keep this amicable, I agree with you that there _are_ certain scenarios where there is truly no difference, or such a small difference that it's within margin of error or not worth accounting for. But that's very different from the broad-stroke statement you made previously.


One thing I do notice is your riser cable had a sizable drop going 3.0 -> 4.0, and that's the specific situation I tested for, because it's the specific thing I worried about and didn't see. I have a Linkup cable too, but it's black. Do you know if the black ones are any different/newer? The picture looks the same. What length do you have? I have the 15cm one plugged into my second PCIe x16 slot (the choice was a longer riser cable to reach the first slot, or a shorter one to reach the second, and I trusted the shielding of the PCB more, so I went with the shorter cable).


----------



## bmagnien



mirkendargen said:


> One thing I do notice is your riser cable had a sizable drop going 3.0->4.0, and that's the specific situation I tested for, because it's the specific thing I worried about and didn't see. I have a Linkup cable too but it's black. Do you know if the black ones are any different/newer? The picture looks the same. What length do you have? I have 15cm plugged into my second PCIE16x slot (the choice was longer riser cable to reach the first slot, or shorter to reach the second, and I trusted the shielding of the PCB more and went shorter cable).


Happy to help man, I did all this testing for the very same reason as you did. Here’s my Reddit post on the subject, I owe a lot of people a follow up and some case manufacturers have even quoted my results:

https://www.reddit.com/r/sffpc/comments/jmozjs


----------



## mirkendargen

bmagnien said:


> 
> Happy to help man, I did all this testing for the very same reason as you did. Here’s my Reddit post on the subject, I owe a lot of people a follow up and some case manufacturers have even quoted my results:
> 
> https://www.reddit.com/r/sffpc/comments/jmozjs


This is making me glad I got the 15cm one. I bet 25cm is just too long...


----------



## bmagnien

bmagnien said:


> 
> Happy to help man, I did all this testing for the very same reason as you did. Here’s my Reddit post on the subject, I owe a lot of people a follow up and some case manufacturers have even quoted my results:
> 
> https://www.reddit.com/r/sffpc/comments/jmozjs


Something _has_ changed, though, that I haven't been able to put my finger on, and I do believe now that the benefits are there with the 4.0 riser; I just haven't had the time to write up all my data yet. Whether it's the upgrade to the 5950X with a new mobo and chipset drivers or something else, I would not revert back to 3.0 mode on my mobo at this point, using this same riser cable.


----------



## bmagnien

mirkendargen said:


> This is making me glad I got the 15cm one. I bet 25cm is just too long...


It’s an odd beast to be sure. Linkup says specifically that with their manufacturing technique the lengths shouldn’t matter...but who knows


----------



## mirkendargen

bmagnien said:


> It’s an odd beast to be sure. Linkup says specifically that with their manufacturing technique the lengths shouldn’t matter...but who knows


I mean...when X470 mobo makers sometimes won't promise 4.0 will work, or say it will only work on the first slot...you know distance definitely matters. I'd trust them more than a riser cable maker; I barely trust a riser cable maker to ensure that it will negotiate 4.0 and not bluescreen/have obvious issues...


----------



## bmagnien

mirkendargen said:


> I mean...when x470 mobo makers are sometimes not promising 4.0 will work, or saying it will only work on the first slot...you know distance definitely matters. I'd trust them more than a riser cable maker, I barely trust a riser cable maker to ensure that it will negotiate 4.0 and not bluescreen/have obvious issues...


On this we can agree!


----------



## bogdi1988

HyperMatrix said:


> Thank God. New bios required to enable resizable bar support. Shunting > 1000W bios for multiple reasons.
> 
> GeForce RTX 30 Series Performance Accelerates With Resizable BAR Support | GeForce News | NVIDIA
> Support available now for all GeForce RTX 30 Series Founders Edition graphics cards, and select GeForce RTX 30 Series laptops.
> www.nvidia.com
> 
> View attachment 2473823


LOL! Should we start a betting pool on whether the bastard child called the 3090 FE will be getting it? Seems like the FE series got no attention from Nvidia at all.


----------



## Lord of meat

Just as a reference for others:
I have the MSI 3090 Gaming X Trio, flashed the Suprim X BIOS, and performance slightly increased; I could pull 1995-2015 on the core in Cyberpunk without immediate crashes, at 65-70C.
I then flashed the EVGA XOC BIOS with the 500W power limit. It now runs at 2085 with no crashes, the game seems to have less stutter, and temps are 65-68C.
I am on the stock air cooler.
TO ME it seems like the card runs a lot better, and it might be worth a shot if you're willing to take the risk.
*I do not recommend this and I am not responsible for any damages.* *I am a meat popsicle.*


----------



## Trevbev

Can anyone tell me the max power limit you can get on a Gigabyte Xtreme Waterforce AIO without shunt mods?
Also, any idea why the air-cooled Xtreme has 3 PCIe 8-pin connectors but the Waterforce only has 2?


----------



## WMDVenum

Trevbev said:


> Can anyone tell me what the max power limit you can get on a gigabyte extreme water force AIO without shunt mods?
> Also any idea why the air cooled extreme has 3 pcie pin but the water force only have 2 pin?


I think that card with stock bios is 450 watts.


----------



## Trevbev

Thanks. Can you flash it to anything higher than 450?


----------



## HyperMatrix

Trevbev said:


> Also any idea why the air cooled extreme has 3 pcie pin but the water force only have 2 pin?


Asus did the same thing. There are 2 possibilities:

1) Keeping the price within a moderate range.
2) Knowing that their AIO won't be as good as a custom loop, which people who are after more power can buy and install on their card of choice.


----------



## Trevbev

Yeah. Even the waterblock version only has 2. I wondered if it was a space thing.


----------



## cletus-cassidy

Falkentyne said:


> You would need a better PSU in order to test properly. A complete shutdown (if you mean PSU actually turning itself off) is from overload from a transient event.
> The 1000W Vbios has internal power limits removed that you have no access to with shunt mods. Have you noticed that if you run Timespy with your PCIE Slot silver shunt, you get a 'double' throttle bar event during GT2? (set core and RAM clocks to stock when doing it to prevent any other normal TDP limits from triggering, then look at GPU-Z after you run the test).
> 
> Then do the same test on a non-shunted card with the power limit set to 55% on the 1000W Vbios and you will not see that throttle happen despite the same approximate power target being used.
> 
> Anyway I highly recommend getting a decent PSU for those cards. A Seasonic Prime PX-1000 (OneSeasonic model) or Prime TX-1000 (which is more expensive) will work well. But not the older "Prime Ultra Platinum" or "Prime Ultra Titanium" as those have the old OCP circuit.


Do we know if the Corsair AX1000 (which is Seasonic OEM) has the older sub-optimal OCP circuit or the newer better version?


----------



## HyperMatrix

Trevbev said:


> Yeah . Even the water block version only has 2. I wondered if it was a space thing.


Yup, not sure why my mind went to AIO there. I was referring to the Asus RTX 3090 with the EK block.


----------



## Trevbev

Just checked the first page of the thread, and the Waterforce is only 390 watts.


----------



## pat182

mirkendargen said:


> Also if there really was a 1-5% improvement with PCIE4.0 on current cards, you'd see a whole lot more AMD in the HOF and LN2 overclocking, because that's a huge advantage comparatively.


I bet we're gonna see a bigger improvement on Intel, for some reason.


----------



## mirkendargen

pat182 said:


> i bet we gonna see a bigger improvement on intel, for some reasons


Meh, I wouldn't bet on it. Bus bandwidth hasn't really been a big deal for GPUs since the move from PCI to AGP 20+ years ago. The big bandwidth needs are on the card (VRAM bandwidth), not between the card and the rest of the computer. Now for SSDs and NICs... for those, PCIe 4.0 is great.

Stuff like RTX I/O (which I still haven't seen a non-marketing-buzzword description of) might use lots of PCIe bandwidth... but it's about streaming assets from RAM to VRAM. No need for special sauce there when you have 24GB of VRAM. I can see it driving a paradigm shift in VRAM usage in games going forward, possibly leading to less VRAM on future GPUs, but right now? Meh.


----------



## dante`afk

Today I wanted to test how well my card does at undervolting; my goal was to check how low I can go for 1800 and 1905.
Instead of using my rig only for browsing/working, and as it's running the whole day anyway, I figured I could just as well start to mine with it.

1800 core - +1000 mem - 100% PL - 0.750v - 20059
1905 core - +1000 mem - 100% PL - 0.800v - 20909
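Those two points line up with the usual P ~ f·V² dynamic-power approximation; a hypothetical back-of-the-envelope check:

```python
def relative_power(f1_mhz, v1, f2_mhz, v2):
    """Rough dynamic-power ratio between two GPU operating points,
    using the P ~ f * V^2 approximation for CMOS logic."""
    return (f2_mhz * v2**2) / (f1_mhz * v1**2)

# The two undervolt points above: 1800MHz @ 0.750V vs. 1905MHz @ 0.800V
ratio = relative_power(1800, 0.750, 1905, 0.800)
print(round(ratio, 2))  # prints 1.2 -> ~20% more power for ~4% more score
```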


----------



## BiLLbOuS

Anyone getting scratchy audio while playing games, or when Chrome windows flash, after flashing the 3090 KPE BIOS on the Strix?


----------



## mirkendargen

BiLLbOuS said:


> Anyone find scratchy audio while playing games or flashing chrome windows flashing 3090 KPE bios on the Strix?


Audio from an HDMI port on the card? Or another device? Or the card is making sounds? I haven't had any of the above happen, but they're all different things.


----------



## BiLLbOuS

I'm using the DP port on the Strix and get it playing Cyberpunk; when I flash back to the 480W Asus BIOS, it's all gone.


----------



## mirkendargen

BiLLbOuS said:


> I am using DP port on the strix, get it playing cyberpunk, i flash back to 480w asus bios, all gone


Cyberpunk has audio issues in general, mostly mitigated if you pick 96khz sampling or lower. I noticed this too, but it happened to me no matter what BIOS I used.


----------



## BiLLbOuS

Yeah, for some reason my symptoms went away twice after flashing back to the stock BIOS... weird. Also, any way to get GPU-Z to recognize the third-pin power with the KPE BIOS?


----------



## mirkendargen

BiLLbOuS said:


> yeah for some reasons my symptoms went away twice flashing back to stock bios .... weiiird. also anyways to get gpuz to recognize the third pin power with kpe bios?


I bet your audio settings defaulted back to 44.1khz when you flashed BIOS and that's what actually fixed it.


----------



## BiLLbOuS

Where are these settings in cyberdouche?

edit: stupid


----------



## mirkendargen

BiLLbOuS said:


> where are these setting in cyberdouche?
> 
> edit: stupid


General Windows sound device settings, not in Cyberpunk.


----------



## UdoG

man1ac said:


> So just got my EVGA 3090 block from Bykski and have a question: on the LR22 parts there were a ton of cooling pads, but on the Bykski cooler there is a 4mm gap to the cooler and there are no pads for me to put there.


Are you finished with the installation? Does the Bykski fit perfectly on the EVGA FTW3? Could you please tell us the temps?

Thanks!


----------



## man1ac

UdoG said:


> Are you finished with the installation? Does the Bykski fits perfectly on the EVGA FTW3? Could you please tell us the temp?
> 
> Thanks!


Sure thing. This was my first attempt at watercooling a GPU:
It fitted quite nicely (overall), I'd say. At least it works for now.

There are just a few things I would like to mention: there are 2 rows of capacitors (if you view the card from the PSU direction towards the PCIe connectors). The left ones fit perfectly and you have about 1mm of clearance on top. The right ones don't fit that perfectly.

I tried to take a picture. The first 2 capacitors clearly touch the GPU cooler and I can't fit a sheet of paper between them. I loosened the GPU screws a bit and got clearance immediately, so even when fully tightened I assume it only just touches the capacitors a bit.
Another thing:

This is how the spacers look when you fully tighten the GPU screws. So when you put the backplate on and tighten it fully, the backplate and PCB will bend quite hard. I don't know if that's supposed to happen. I only tightened them halfway, so it looks okay and the cooling pads get squeezed a bit.

Overall it seems to work perfectly(ish).
The setup has 2x 420mm rads (Nexxxos XT45) and a 5900X in the loop. Temps stayed at 50° max over all runs. (Nothing special: Fractal Meshify 2 XL, 6x Silent Wings 3 fixed at 600rpm, and a D5 Next fixed at 50%.)

The current best result I had in Port Royal was 13900 with an average of 2000 MHz (it's quite low, but I lose about 400-500 points through my connected simracing rig (4x screens and 6 USB ports)). I easily increased to a 2060 MHz average and went up 400 points.

So how can I get the max out of the card?! I always have the problem that when the card goes to 2115 MHz, Port Royal instantly crashes. I don't know what causes this, but it feels like the card could push higher if that wouldn't happen...


----------



## rix2

New bios...


----------



## man1ac

I'm just downloading all the stuff (XC3 BIOS) and have downloaded the latest nvflash; I want to try the XC3 BIOS.
Any tips?


----------



## UdoG

Thanks for your feedback regarding the Bykski.
Regarding nvflash:

1. nvflash --protectoff
2. nvflash -6 nameofthebios.rom


----------



## rix2

I think the temp is a little bit high for 2x 420; I have the same on my Ventus 3090 with only one 360 rad.

What thermal paste do you use?


----------



## HyperMatrix

man1ac said:


> Sure thing. This was my first attempt on watercooling a GPU:
> It fitted quite nicely (overall)Id say. At least it works for now
> 
> There are just a few things I would like to mention: There are 2 rows of condesators. (If you view the card from the PSU direction to the PCIE connectors) The left ones fit perfectly and you have about 1mm clearance on top. The right ones dont fit that perfectly.
> View attachment 2473917
> 
> I tried to make a picture. The first 2 condensators clearly touch the GPU cooler and I cant fit a sheet of paper between them. I loosened the GPU Screws a bit and I got clearance immediately - so even when fully tightened I assume it will only just touch the condensators a bit.
> Anoother thing
> View attachment 2473918
> 
> This is how the spacers look then you fully tightened the GPU Screws. So when you put the backplate on and tightened it fully the backplate and pcb will bend quite hard. Dont know if thats supposued to. I only tightend them half so it looks okay and the cooling pads geht squeezed a bit.
> 
> Overall it seems to work perfectly (ish )
> Setup has 2x 420mm Rads (Nexxxos XT45) and a 5900x in the Loop. Temps stayed at 50° max over all runs. (Just nothing special, Fractal Meshify 2 XL, 6x Silent Wings 3 fixed at 600rpm and a D5 Next fixed at 50%)
> 
> The current best result in Port RoyalI had was 13900 with an average of 2000 Mhz (its quite low, but I loose about 500-400 Points through my connected simracing rig (4x Screens and 6 USB Ports). I easily increased to 2060 Mhz Average and went up 400 Points.
> 
> So how can I get the max out the card?! I always have the problem, when the card goesto 2115 Mhz Port Royal instantly crashes. I dont know what causes this, but It feels like the cardcould push higher if that wouldnt happen...


Is your ambient temperature very high? Or did you not use liquid metal? My Kingpin with the stock AIO cooler stays at 45C and below at 500W draw. With 400W draw it's around 38-41C. I would expect a full block with 2x 420mm rads to do a better job than that.


----------



## man1ac

I used Kryonaut paste (so no liquid metal). Ambient is 23/24°...

I just flashed the XC3 BIOS, and at the moment in PR where you view the ship from the outside top view, I jump to 2115MHz and the card crashes.
It feels like a hard cap I just hit...


----------



## rix2

I see the only way is to buy a Bykski or Barrow waterblock for the FTW3.
The EK block comes only in March, and Alphacool says "delivery date exceeded".


----------



## rix2

man1ac said:


> I used Kryonaut paste (so no Liquid Metal). Ambient is 23/24°...
> 
> I just flashed the XC3 Bios and the moment in PR where you view the ship from the outside top view I jump to 2115mhz and the card crashes.
> It feels like a hard cap I just hit...


I had the same yesterday: stock cooling and default BIOS, 2088MHz.


----------



## man1ac

rix2 said:


> I had the same yesterday, stock cooling and def bios 2088mhz


What did you have? The "hit this MHz and crash" thing, or what?


----------



## rix2

man1ac said:


> What did you have? The "hit this MHz and Crash" or what?


Hit and crash.

And after the BIOS flash, still not the best results?


----------



## GAN77

rix2 said:


> I see only way is buy Bykski ore Barrow waterblock to FTW3


*Bitspower Classic VGA Water Block for EVGA GeForce RTX 3090 FTW3 series
BP-VG3090EVFTW
shop.bitspower.com*


----------



## man1ac

So, talking about game-stable: I just got Metro 2033 stable for 10 runs, which for me is 99% stable.
I can get my FTW3 with the Bykski block (2x 420mm), a 5900X, and everything set up to be silent, to stay above 2000MHz all the time.
After the 10 runs the temps start to settle:
GPU 59°C
CPU 60-70°C with peaks to 85 (damn Ryzen)
Water temp 41.6° (the D5 Next's own measurement)

Is that okay? Ambient is 23°, under a desk.
I use Kryonaut paste for everything...


----------



## rix2

GAN77 said:


> *Bitspower Classic VGA Water Block for EVGA GeForce RTX 3090 FTW3 series
> 
> 
> 
> 
> 
> BP-VG3090EVFTW
> 
> 
> Bitspower Classic VGA Water Block for EVGA GeForce RTX 3090 FTW3 series
> 
> 
> 
> 
> shop.bitspower.com
> 
> 
> 
> 
> *


Nice, 208 EUR, but no picture of the back side, and it's also pre-order...


----------



## rix2

man1ac said:


> So talking about Gamestable: I just got Metro 2033 10 Runs stable, which for me is 99% stable.
> I can get my FTW3 with the Bykski Block (2x420mm), 5900X, and all set to be silent to stay above 2000mhz all the time.
> After the 10 Runs temps start to settle:
> GPU 59°C
> CPU 60-70°C with peaks to 85 (damn Ryzen)
> Water Temp 41,6° (D5 Next self measurement)
> 
> Is that okay? Ambient is 23° under a desk.
> I use Kryonaut Paste for everything....


What do you want? If silence, then it's OK; if performance, it's not.

What is the power draw after the BIOS flash?


----------



## man1ac

rix2 said:


> What do you want? If silence then ok, if performance is not ok
> 
> what is power draw after bios flash?


Since I'm completely new to water-cooled GPUs I can't tell if that's 'normal' or not (maybe the system works but isn't set up right).
I use Wing Boost 3 fans and none of them goes above 700rpm. The D5 pump is at 50% (I don't know the l/min).

So if you say it's okay for silence, I'm happy...


----------



## motivman

My memory is really limiting the score on my card. My goal is to reach 15800+ in PR, but it seems I can't, because my damn memory will not go past +950... I see a lot of folks above me on the leaderboards with memory in the +1250-1400 range. How in the world do they achieve this? I need an expert opinion on how to get to at least +1200 with my memory overclock. My card is on the EKWB waterblock with backplate. I even used Fujipoly Extreme 17.0 W/mK thermal pads on the memory modules and pointed a fan at the backplate, but still my memory will not clock past +950 without crashing. Looks like, due to the silicon lottery, the highest my core will clock is 2235MHz (anything higher crashes), no matter how cold I get it. This card is a reference 2x 8-pin PNY 3090 Epic-X, shunt modded and running the Kingpin 520W BIOS. Any suggestions or advice will be greatly appreciated. Right now, this is the best score I can get in PR.









I scored 15 626 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





I guess I lost the lottery on my memory modules, huh? Would getting a waterblock with an active backplate help, like the Aqua Computer one?
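For what the memory offset is actually worth, bandwidth scales linearly with the effective data rate. A sketch (the 21400 MT/s figure is just a hypothetical heavy overclock, and Afterburner offset numbers don't map 1:1 onto MT/s):

```python
def vram_bandwidth_gbs(data_rate_mtps, bus_bits=384):
    """Memory bandwidth in GB/s from effective data rate (MT/s)
    and bus width in bits (384-bit on the 3090)."""
    return data_rate_mtps * bus_bits / 8 / 1000

print(round(vram_bandwidth_gbs(19504), 1))  # prints 936.2 -> stock 3090
print(round(vram_bandwidth_gbs(21400), 1))  # prints 1027.2 -> hypothetical heavy OC
```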


----------



## rix2

man1ac said:


> Since I am completely new to Water-cooled GPUs I can't tell if that's "normal" or not (maybe the system works but is not setted up right).
> I use Wing Boost 3 Fans and no one of them is above 700rpm. Pump D5 is at 50% (don't know the l/min).
> 
> So if you say: for silence it's okay I am happy...


For me, an OK temp is less than 50 degrees.


----------



## motivman

rix2 said:


> for me ok temp is less than 50 degrees


what is so magical about 50 degrees?


----------



## EarlZ

Due to very limited availability, my 3090 options as of now are an MSI Ventus 3X or a Gigabyte Gaming OC; which is the better pick?


----------



## Zogge

Both are 2x 8-pin cards; it depends on what you want to use the 3090 for. If it is to get into the HoF in PR, those are not the cards to go for, in my view.


----------



## rix2

motivman said:


> what is so magical about 50 degrees?


every 10 degrees less is magical


----------



## man1ac

So I just talked to two other friends with a 3090, and they were really wondering why my card is topping out at 2100. I can't get even 1 step further, even with a quick and dirty curve at a 150MHz offset. Once I hit anything more than 2100 the card crashes. I can "easily" get to a 2070MHz average, so the card can't be that bad. I was just wondering what that is... is this known? Common?
I have the FTW3 @ stock 450W BIOS.
I tested the XC3 BIOS and the FTW3 500W BIOS. Always the same...


----------



## changboy

EK shipped the waterblock for my FTW3 Ultra, so I will get it next week in Canada. Scoring 14.3k in Port Royal on air; I will see if I can increase this on liquid.


----------



## des2k...

man1ac said:


> So I just talked to two other friends with a 3090 and they really were wondering why my card is topping out at 2100. I cant get even 1 step further. Even with quick and dirty curve at 150mhz offset. Once I hit anything more than 2100 the card crashes. I can "easily" get to 2070Mhz average, so the card cant be that bad. Was just wondering what that is...is this know? common?
> Have the FTW @ Stock BIOS 450W.
> I tested the XC3 BIOS and the FTW3 500W Bios. Always the same...


what voltage are you using for 2100 ?


----------



## motivman

man1ac said:


> So I just talked to two other friends with a 3090 and they really were wondering why my card is topping out at 2100. I cant get even 1 step further. Even with quick and dirty curve at 150mhz offset. Once I hit anything more than 2100 the card crashes. I can "easily" get to 2070Mhz average, so the card cant be that bad. Was just wondering what that is...is this know? common?
> Have the FTW @ Stock BIOS 450W.
> I tested the XC3 BIOS and the FTW3 500W Bios. Always the same...


u lost the silicon lottery bud.


----------



## motivman

Zogge said:


> Both are 2-pin cards, depends on what you want to use the 3090 for. If it is to get into HoF in PR, those are not the cards to go for in my view.


Not true; I am currently #47 in the PR HOF with a reference 2x 8-pin card. He would just have to shunt mod and flash the KP 520W BIOS. Knowing what I know now, though, if I could do it all over again, the only 3090 I would get would be the STRIX. lol


----------



## motivman

EarlZ said:


> Due to very limited availability my 3090 option as of now is an MSI Ventus 3X or a Gigabyte Gaming OC, which is the better pick ?


The one that has waterblock available... lol


----------



## Zogge

Yes, shunting of course, true, but I was referring to standard overclocking on water, perhaps with a 500W+ BIOS.


----------



## Zogge

I am about to put the Bykski WB on the Strix instead of the EK WB, but I am wondering if there is anything specific in thermal pad placement, thickness, etc. that I should consider, or if I should just apply Bykski's standard pads and go with it. I have an MP5 ready as well that is not mounted. My Strix is close to 55-57 degrees on the 520W KP BIOS at full load, hence I want to get something a bit cooler. Before I swap, I will try the MP5 and report how much, if anything, it lowers my temps with the EKWB and the standard pads I used.


----------



## man1ac

des2k... said:


> what voltage are you using for 2100 ?


I just used the stock curve with an offset. I think it's about 1V.



motivman said:


> u lost the silicon lottery bud.


Having a 2060 average, with 2070 possible, makes me Top 60 in Germany. I don't need more. I got a freakishly fast and silent gaming PC...


----------



## des2k...

man1ac said:


> I just used stock curve with offset. I think it's about 1V.


Anything after 2100 requires a lot of voltage (use VF points). For 2115 on mine, I needed 1031mV to pass the TE GT2 loop, but that spikes to 590W, so that testing only works with the XOC 1000W vBIOS. I know PR needs way less voltage for 2100+ frequencies.

Certain frequency steps will increase the effective frequency (new HWiNFO64 beta) by small amounts, while other steps might increase the effective frequency by a lot, needing more voltage.


----------



## dante`afk

man1ac said:


> So talking about Gamestable: I just got Metro 2033 10 Runs stable, which for me is 99% stable.
> I can get my FTW3 with the Bykski Block (2x420mm), 5900X, and all set to be silent to stay above 2000mhz all the time.
> After the 10 Runs temps start to settle:
> GPU 59°C
> CPU 60-70°C with peaks to 85 (damn Ryzen)
> Water Temp 41,6° (D5 Next self measurement)
> 
> Is that okay? Ambient is 23° under a desk.
> I use Kryonaut Paste for everything....


Pretty high water temps and GPU temps; what is your water flow?

Mine maxes out at 40c (27c water temp) or 37-38c (with 24-26c water temp) depending on the room temp. Flow is 232 l/h.


----------



## pat182

Resizable BAR will be available for the 9th gen platform!!! MSI has confirmed.


----------



## des2k...

dante`afk said:


> pretty high water temps and gpu temps, what is your water flow?
> 
> mine maxes out at 40c (27c water temp) or 37-38c (with 24-26c water temp) depending on the room temp. Flow is 232 l/h.


at 40c for gpu, what's the average power draw ?


----------



## man1ac

dante`afk said:


> pretty high water temps and gpu temps, what is your water flow?
> 
> mine maxes out at 40c (27c water temp) or 37-38c (with 24-26c water temp) depending on the room temp. Flow is 232 l/h.


That's like after 30 mins of 380-420W power draw, so pretty much the max I could see in real-life gaming sessions. On a few PR runs it won't go higher than 31° water and 50° GPU. Still pretty high, but lots of guys just have huge coolers, fan speeds above 1200rpm, and a flow rate that's insanely higher than mine.


----------



## leegoocrap

woops wrong spot


----------



## dante`afk

des2k... said:


> at 40c for gpu, what's the average power draw ?


~550W.
Can't estimate exactly because it's shunt modded, but my KAW shows 450W without the PCIe slot calculation.


----------



## des2k...

man1ac said:


> That like after 30mins of 380-420W Powerdraw. So pretty much max what I could have in real life gaming sessions. On a few PR runs it wont go higher than 31° Water and 50° GPU. Still pretty high but lots of guys just have huge coolers, possible rpm fan speeds above 1200 and a flowrate thats insanly higher than mine


Here are some recent temps from mine, EK block, with 2x 360 and 1x 240 thick rads:

around 400-500W: 27c water, 42-44c GPU, about a 17c delta (this is really good, I think)
max load, TE GT2, 600-620W: 27c water, 52-54c GPU, about a 24c delta (this is not the best, but I doubt I can fix this big delta)
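Dividing the water-to-die delta by power gives a rough block thermal resistance, which makes the two load points comparable (a sketch using the numbers above; it treats the reported power as all flowing through the block):

```python
def thermal_resistance(gpu_c, water_c, power_w):
    """Approximate die-to-coolant thermal resistance in C/W."""
    return (gpu_c - water_c) / power_w

print(round(thermal_resistance(44, 27, 450), 3))  # prints 0.038
print(round(thermal_resistance(53, 27, 610), 3))  # prints 0.043 -> contact/TIM limits show at high load
```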


----------



## stryker7314

rix2 said:


> I see only way is buy Bykski ore Barrow waterblock to FTW3
> EKW came only march and Alphacool "delivery date exceeded"


Just ordered a Barrow waterblock for FTW3 from formulamod, picked it over Bykski because it allows the use of the stock EVGA backplate.

Had an Alphacool on back order but canceled due to their dates pretty much indicating "no idea when these will be ready," and I was annoyed that they charged up front so took my money back out of principle.

EK is backed up wayyy too far.


----------



## Hawkjoss

for those who are looking for a 3090 - 2 are available at PPCS








EVGA GeForce RTX 3090 FTW3 Ultra Gaming Graphic Card


The EVGA GeForce RTX 3090 is colossally powerful in every way imaginable, giving you a whole new tier of performance at 8K resolution. It's powered by the NVIDIA Ampere architecture, which doubles down on ray tracing and AI performance with enhanced RT Cores, Tensor Cores, and new streaming...




www.performance-pcs.com


----------



## man1ac

des2k... said:


> here's some recent temps from mine, ek block, with 2 360 and 1 240 thick
> 
> around 400w-500w 27water, 42-44gpu, about 17c delta (this is really good I think)
> max load, TE GT2 600w-620w, 27water, 52-54gpu, about 24c delta (this not the best, but I doubt I can fix this big delta)


Sounds as if I am not far off. What flow and fan rpms?


----------



## des2k...

New active EK backplate, this looks awesome; just use the new terminal, no extra tubes

Once out, I need to get one 









EK to offer Backplate Cooling Solution


With liquid cooling you are cooling just one side, why not the other one as well EK must have figured. They have the first proper active backplate cooling solution that is integrated seamlessly, in ...




www.guru3d.com


----------



## Esenel

des2k... said:


> new active EK backplate , this looks awesome, just use the new terminal, no extra tubes
> 
> Once out, I need to get one


Better than AquaComputer's ****ty heatpipe with a failure rate of roughly 50%


----------



## HyperMatrix

Esenel said:


> Better than AquaComputer's ****ty heatpipe with a failure rate of roughly 50%


I prefer this one from aquacomputer:



But at this point, I can't cool my card with a render.


----------



## Chobbit

Is VR more memory intensive than non-VR gaming?

I can almost get +1200MHz stable on the memory overclock but settled for +1100 (I could obviously go a bit higher stable, but I like rounder numbers/steps) and that games fine all day.

However in VR, +1100 causes artifacting/flickering and I have to drop it to +900 to be completely stable in VR. The max stable core overclock I have doesn't need to change or cause issues in or out of VR.

I had a 5700XT in the interim between selling my Kingpin 1080 Ti and getting my 3090, and that was similar: outside of VR I could overclock the memory, while in VR anything more than the stock memory clock caused flickering.


----------



## Esenel

HyperMatrix said:


> I prefer this one from aquacomputer:


With their current rate of faulty devices (Strix), I wouldn't buy it.

They were the only manufacturer which forgot the "Elkos"; you had to drill yourself or send your newly received block back. And likewise another user again got the wrong one..
Either the manual or the pads are incorrectly specified for the backplate
wrongly bent heatpipe for the active backplate
leaking terminal with the active backplate
worse cooling performance than EK, Alphacool and Bykski

But I wish you good luck with their blocks


----------



## pat182

I mean, if a WC 3090 gets to 50c, is it even worth it to slap a block on it?! My Strix is at 1.006v @ 2085MHz and runs like 57-59c at 100% fan speed, and got 14,863 in PR, all on air. I wouldn't spend money just to get 10c cooler with little to no benefit in real-world scenarios (besides noise)


----------



## HyperMatrix

Esenel said:


> With there current rate of faulty devices (Strix), I wouldn't buy it.
> 
> They were the only manufacturer which forgot the "Elkos" you had to drill yourself, or send your newly gotten block back. And like another user which again got the wrong one..
> Either their manual or the pads are incorrectly added for the backplate
> wrongly bended heatpipe for active backplate
> leaking terminal with the active backplate
> Worse cooling performance than EK, Alphacool and Byksi
> 
> But I wish you good luck with their blocks


I've bought 7 blocks with active cooled backplate from them. Been happy so far. Great cooling performance at least in my loop. But the things you're mentioning are definitely of concern.


----------



## Esenel

HyperMatrix said:


> But the things you're mentioning are definitely of concern.


It seems it is just this generation's Strix which is causing issues.
You can read their thread.
I am not the only one having those issues 🙃


----------



## WMDVenum

pat182 said:


> I mean, if a WC 3090 gets to 50c, is it even worth it to slap a block on it?! My Strix is at 1.006v @ 2085MHz and runs like 57-59c at 100% fan speed, and got 14,863 in PR, all on air. I wouldn't spend money just to get 10c cooler with little to no benefit in real-world scenarios (besides noise)


I mean, in most scenarios water cooling is for looks/noise and not performance unless you are doing more high end overclocking (shunt/direct die).


----------



## pat182

WMDVenum said:


> I mean, in most scenarios water cooling is for looks/noise and not performance unless you are doing more high end overclocking (shunt/direct die).


Looks like waterblock companies got blindsided by the power usage. Would a thicker block that allows more water in the GPU area cool better? Should 2-slot waterblocks be a thing?


----------



## HyperMatrix

WMDVenum said:


> I mean, in most scenarios water cooling is for looks/noise and not performance unless you are doing more high end overclocking (shunt/direct die).


I would disagree with that. Noise, sure. Looks? Really doubt it. If that were true, we wouldn't have 3000+ rpm fans or people sticking their computers by their windows or getting chillers or AC blowing on their radiators. If there are people going primarily for looks....you won't find them in this thread.


----------



## WMDVenum

For me, I water cool because I like water cooling, but on the 3090 I feel like I only really needed it when I shunt modded the 3090 FE. I would expect the stock cooler to have a hard time handling 500+ watts. Below is my progression on my 3090 FE; not all of these were on the same day and water temp wasn't controlled, so direct comparisons between conditions can't be made. Also, all numbers were in the Port Royal benchmark.

3090 on Air - 77C
3090 on Air UV to 0.85V - 73C
3090 on Water - 54C
3090 on Water UV to 0.9V - 48C
3090 on Water UV to 0.9V with LM - 42C
3090 on Water with LM Shunt Modded - 52C
I imagine if I didn't LM and water cool, my temp would be near 90-100C with the same configuration on air, assuming no temp limits existed.

@pat182 I don't think there is a drastic difference between blocks. From what I have read, all of them are within a couple degrees of each other, and at that point it won't make a performance difference. I think what could be limiting on a water block is how well it cools the non-GPU components, since we know those can overheat and thermal limit your card.


----------



## Falkentyne

WMDVenum said:


> For me I water cool because I like water cooling but on the 3090 I feel like I only really needed it when I shunt modded the 3090 FE. I would expect the stock cooler to have a hard time handling 500+ watts. Below is my progression on my 3090 FE, not all of these were on the same day and water temp wasn't controlled so direct comparisons between each condition can't be done. Also all numbers were in the port royale benchmark.
> 
> 3090 on Air - 77C
> 3090 on Air UV to .85V - 73C
> 3090 on Water - 54C
> 3090 on Water UV to 0.9V - 48C
> 3090 on Water UV to 0.9V with LM - 42C
> 3090 on Water with LM Shunt Modded - 52C
> I imagine if I didn't LM and water cool my temp would be near 90-100C, assuming no temp limits existed with the same configuration on air.
> 
> @pat182 I don't think there is a drastic difference between blocks. From what I have read all of them are within a couple degrees of each other and at that point it won't make a performance difference. I think what could be limiting on a water block is how well it cools the non GPU components, since we know those can over heat and thermal limit your card.


I highly doubt that. Unless you're nonstop looping Port Royal, there's no way you can get that hot if you repasted and repadded. My shunt modded air cooled 3090 FE gets to about 69-71C, depending on ambient, after a regular standard run. If I let it loop a few times it gets up to 76-77C, but I don't just leave it running. I mean I 'could' check...I have the core repasted with Thermalright TFX (actually using Thermagic ZF-EX on it right now which is the same paste), and thermal pad mods/hotspots done with Thermalright Odyssey 1.5mm pads on both sides.


----------



## GAN77

Falkentyne said:


> .I have the core repasted with Thermalright TFX


Have you noticed the difference with stock thermal paste?


----------



## ttnuagmada

EDORAM said:


> So the best bios for the Strix is the 520w bios?
> I got 15551 PR with the 1000w but i think I can increase efficiency...


500w for air cooled, 520w for water. My understanding is that the 520w bios doesn't play nice with the fans on the stock cooler.


----------



## Falkentyne

GAN77 said:


> Have you noticed the difference with stock thermal paste?


No, because I shunt modded when I repasted.
But I think if I limit power to about 400W, it's a few degrees cooler for sure compared to stock paste at 400W max. I just don't remember the exact temps with stock paste.


----------



## Falkentyne

WMDVenum said:


> For me I water cool because I like water cooling but on the 3090 I feel like I only really needed it when I shunt modded the 3090 FE. I would expect the stock cooler to have a hard time handling 500+ watts. Below is my progression on my 3090 FE, not all of these were on the same day and water temp wasn't controlled so direct comparisons between each condition can't be done. Also all numbers were in the port royale benchmark.
> 
> 3090 on Air - 77C
> 3090 on Air UV to .85V - 73C
> 3090 on Water - 54C
> 3090 on Water UV to 0.9V - 48C
> 3090 on Water UV to 0.9V with LM - 42C
> 3090 on Water with LM Shunt Modded - 52C
> I imagine if I didn't LM and water cool my temp would be near 90-100C, assuming no temp limits existed with the same configuration on air.
> 
> @pat182 I don't think there is a drastic difference between blocks. From what I have read all of them are within a couple degrees of each other and at that point it won't make a performance difference. I think what could be limiting on a water block is how well it cools the non GPU components, since we know those can over heat and thermal limit your card.


Ok, 18 minutes of looping Port Royal got to 80C.
Some nice throttle warning bars for you too, since I got past 380W in GPU-Z out of 400W (about 560W after the multiplier).
So yeah, the stock heatsink can handle a shunt mod. Barely.
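For reference, the "multiplier" above comes from how a shunt mod works: the power controller computes current as shunt voltage divided by its programmed resistance, so a resistor soldered in parallel makes it under-read by a fixed ratio. A minimal sketch; the 5 mΩ values are illustrative, not the FE's actual shunt resistances:

```python
# Hedged sketch of how a shunt mod skews software power readings. The
# controller assumes the stock shunt resistance, so a parallel resistor
# lowers the sensed voltage and the card under-reports by r_stock/r_parallel.
def actual_power(reported_w: float, r_stock: float, r_added: float) -> float:
    """True power draw given the reported value and the two shunt resistors."""
    r_parallel = (r_stock * r_added) / (r_stock + r_added)
    return reported_w * (r_stock / r_parallel)

# Stacking an equal-value resistor halves the sensed resistance, so the card
# really draws about twice what software reports (~760 W for a 380 W reading).
print(actual_power(380, 0.005, 0.005))
```

A 380 W → ~560 W multiplier as quoted above implies an added resistor somewhat larger than the stock shunt rather than an equal-value stack.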


----------



## Proson

Any good custom bios for Asus 3090 tuf?
Want to increase the power limit from stock.


----------



## itssladenlol

pat182 said:


> I mean, if a WC 3090 gets to 50c, is it even worth it to slap a block on it?! My Strix is at 1.006v @ 2085MHz and runs like 57-59c at 100% fan speed, and got 14,863 in PR, all on air. I wouldn't spend money just to get 10c cooler with little to no benefit in real-world scenarios (besides noise)


100% fan speed lol...
My card runs 45C at 2160MHz and you can't tell the PC is turned on because my Mora 420's fans run at 350 rpm.
Watercooling is for performance with zero noise.


----------



## des2k...

itssladenlol said:


> 100% Fan speed lol...
> My card runs 45C at 2160MHz and You cant tell the PC is turned on cause my Mora 420s Fans run 350 rpm.
> Watercooling is for performance with zero noise.


350 rpm with low temps is pretty impressive; even 600 rpm on 120mm fans doesn't move much air.

Not counting the 3 fans on the backplate of the GPU, I have 20 fans in my O11D with the glass panels replaced with filters, but yeah, so much air pressure that the thing needs a daily vacuuming of the filters regardless of fan RPM


----------



## pat182

itssladenlol said:


> 100% Fan speed lol...
> My card runs 45C at 2160MHz and You cant tell the PC is turned on cause my Mora 420s Fans run 350 rpm.
> Watercooling is for performance with zero noise.


All my GPUs are at 100% fan speed; I've got a surround system blasting my games so the noise just loses itself in the game, not an issue for me


----------



## itssladenlol

des2k... said:


> 350rpm is pretty impressive you get low temps, 600rpm on 120mm doesn't move much air
> 
> Not counting the 3 fans on the backplate of the GPU, I have 20 fans in my o11d, glass panels replaced with filters, but yeah so much air pressure the thing needs daily vacuum of the filters regardless of fan RPM


The Mora 420 is a 1260 rad and I've got 4x 200mm Noctuas on it.
Got an O11 Dynamic XL as well, but no rads inside.
Replaced the glass as well on my O11 for 3 Noctuas in front.


----------



## EarlZ

Zogge said:


> Both are 2-plug cards; depends on what you want to use the 3090 for. If it is to get into the HoF in PR, those are not the cards to go for, in my view.


HoF? PR?

Gonna be using it only for gaming with maybe mild overclocks


----------



## WMDVenum

Falkentyne said:


> I highly doubt that. Unless you're nonstop looping Port Royal, there's no way you can get that hot if you repasted and repadded. My shunt modded air cooled 3090 FE gets to about 69-71C, depending on ambient, after a regular standard run. If I let it loop a few times it gets up to 76-77C, but I don't just leave it running. I mean I 'could' check...I have the core repasted with Thermalright TFX (actually using Thermagic ZF-EX on it right now which is the same paste), and thermal pad mods/hotspots done with Thermalright Odyssey 1.5mm pads on both sides.


I believe my water was around 35-37 during those runs. Right now playing Warzone my water is 35C and GPU is at 42-45C (but limited to 140 FPS, at roughly 60% usage, pulling 330 watts).


----------



## Falkentyne

WMDVenum said:


> I believe my water was around 35-37 during those runs. Right now playing Warzone my water is 35C and GPU is at 42-45C (but limited to 140 FPS, at roughly 60% usage, pulling 330 watts).


Did you see my temps screenshot?


----------



## WMDVenum

Falkentyne said:


> Did you see my temps screenshot?


Makes me think NVIDIA could have included an OC toggle to get even more performance from the FE version to look better in benchmarks.


----------



## bmagnien

Anyone happen to have an extra FTW3 block they’d want to sell? EK or otherwise?


----------



## mattskiiau

Lord of meat said:


> Just as a reference to others,
> I have Msi Trio x 3090, flashed


Hey just looking for a further update. Do you recommend the SuprimX or the EVGA BIOS for the Trio x?
Any drawbacks from them?


----------



## Menno

Got my 3090 Suprim X. Port Royal stress test succeeded with +105 / +748 so far at 107% (450w). Anything above (anything with +15 increments extra on core) will pass benching but not stress testing. With mid-70s temps it hovers between 1980-2070, always power limited. Will do more when the waterblock arrives.

Also tried the FTW3 500 watt bios for air with fan speed locked at 90% for testing; it seems to gain another 50 or so on the core when it's less power limited. It will bench between 2070-2130 when applying +165 / +748 on Port Royal. Highest score was around ~14,700 when pushing +200 on core. Highest clock was 2175 when the load is a bit less.

Reflashed the Suprim X bios and am waiting for the waterblock before trying anything else. Happy with a stable small overclock for now. Need to get stable temps before finding a stable clock.


----------



## hayame

Might be a really stupid question, however I flashed a 3x8-pin bios (Asus Strix OC) onto a 2x8-pin card (Asus TUF OC).

GPU-Z/HWiNFO64 reports that there is a 3rd 8-pin pulling some wattage even though there physically isn't a 3rd 8-pin on the card itself; is this how it's reported when flashing a 3x8-pin bios onto a 2x8-pin card?

Is this safe? Der8auer has a video where a shunt mod on a TUF 3090 seems to let it pull a pretty hefty amount, hence me just flashing a higher power limit bios.

thx


----------



## bmgjet

hayame said:


> Might be a really stupid however I flashed a 3x8pin bios (asus strix oc) onto a 2x8pin card (asus tuf oc).
> 
> GPU-Z/HWiNFO64 reports that there is a 3rd 8 pin that is pulling some wattage even if there physically isn't a 3rd 8pin on the card itself, is this how it's reported when flashing a 3x8pin bios onto a 2x8pin card?
> 
> Is this safe? Der8auer has a video where shunt mod on a TUF 3090 card seems to pull a pretty hefty amount hence me just flashing a higher power limit bios.
> 
> thx


You just lowered your power limit by using a 3 plug card bios on a 2 plug card.
This has been covered dozens of times.
520W 3 plug bios = 330W on a 2 plug.
1000W 3 plug bios = 696W on a 2 plug card.
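One hedged way to read those numbers: the BIOS appears to budget power per input rail, so the rail tied to the missing third connector never contributes, and the limit drops by a fixed chunk rather than a simple 2/3 scaling. A sketch using per-rail shares back-calculated from the figures quoted above (the split itself is an assumption, not read out of any BIOS):

```python
# Hedged sketch of why a 3x8-pin BIOS caps lower on a 2x8-pin card: the
# budget tied to the absent connector is simply unavailable. The missing-rail
# figures are back-calculated from the limits quoted in the thread.
def effective_limit(bios_limit_w: int, missing_rail_w: int) -> int:
    """Power limit a 2-plug card actually gets from a 3-plug BIOS."""
    return bios_limit_w - missing_rail_w

print(effective_limit(520, 190))   # 330, as reported for the 520W bios
print(effective_limit(1000, 304))  # 696, as reported for the 1000W bios
```

Note that 330/520 and 696/1000 are different fractions, so the lost rail's share differs between BIOSes — flashing a bigger 3-plug BIOS onto a 2-plug card never recovers the full headroom.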


----------



## hayame

bmgjet said:


> You just lowered your power limit by using a 3 plug card bios on a 2 plug card.
> This has been covered dozens of times.
> 520W 3 plug bios = 330W on a 2 plug.
> 1000W 3 plug bios = 696W on a 2 plug card.


oo ok, thank you for the quick reply; that's what I couldn't figure out due to not knowing how to word the Google search


----------



## mattskiiau

Has anyone not flashed their MSI X Trio to another bios?
I’m not sure if I want to stick with a 500w bios or just return it and get a higher model with a higher PL.
Any opinions on what you would do and why?


----------



## GAN77

Msi Suprim. Chip 43 weeks. Have you learned to make it even?


----------



## Biscottoman

des2k... said:


> new active EK backplate , this looks awesome, just use the new terminal, no extra tubes
> 
> Once out, I need to get one
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK to offer Backplate Cooling Solution
> 
> 
> With liquid cooling you are cooling just one side, why not the other one as well EK must have figured. They have the first proper active backplate cooling solution that is integrated seamlessly, in ...
> 
> 
> 
> 
> www.guru3d.com
> 
> 
> 
> 
> View attachment 2473975


That's dope, but why do they have to develop this kind of high-performance waterblock for the reference models only... That makes no sense to me, especially considering models with a reference PCB are very few this gen


----------



## Apecos

GAN77 said:


> Msi Suprim. Chip 43 weeks. Have you learned to make it even?
> 
> View attachment 2474127
> View attachment 2474128
> View attachment 2474129


Hi, do you have some pics (the full pcb board)?


----------



## GAN77

Apecos said:


> Hi, do you have some pics (the full pcb board)?


Detailed photos in this review.








MSI GeForce RTX 3090 Suprim X Review


The MSI RTX 3090 Suprim X is the company's new flagship card. It is highly overclocked, to 1860 MHz rated boost, and ticks at a power limit of 420 W. In our review, it was the quietest RTX 3090 we've ever tested, quieter than the EVGA FTW3 Ultra, almost whisper-quiet.




www.techpowerup.com


----------



## lifes931

Just got an XC3 Ultra Gaming. Tested the stock cooler and bios: 13450 in PR with fans at 80%, max power limit, +100/800.
Temps hit 84 deg. I am returning it when my Suprim X comes in; this thing is atrocious, do not buy.


----------



## Lord of meat

cenumis said:


> Hey just looking for a further update. Do you recommend the SuprimX or the EVGA BIOS for the Trio x?
> Any drawbacks from them?


I had it for 3 days flashed to the EVGA; the only thing you can't change is the RGB. Once you set it and flash to EVGA you can't change it, but it stays on.
They are both great, but the EVGA works better for what I'm trying to do; it can hold 2050-2080.
I will let you know if the card breaks, but I doubt it.


----------



## mattskiiau

Lord of meat said:


> I had it for 3 days flashed to the EVGA only thing you cant change is the rgb. once you set it and flash to EVGA you cant change it but its on.
> They are both great but the EVGA works better for what im trying to do, it can hold 2050-2080.
> i will let you know if the cards breaks but i doubt it.


Thanks!
I'll stick with EVGA bios on my Trio X since it seems to be running a lot better than stock.


----------



## Lord of meat

cenumis said:


> Thanks!
> I'll stick with EVGA bios on my Trio X since it seems to be running a lot better than stock.


Don't forget to set a custom fan curve in Afterburner; I don't know what values are set in the EVGA bios and I just run it at 100 until the Eiswolf 2 comes out🤓.


----------



## DrunknFoo

changboy said:


> Ek ship my waterblock for my ftw3 ultra, so i will get it next week in Canada, scoring 14.3k in port royal on air i will see if i can increase this on liquid.


Nice, what was your order number if you dont mind me asking?


----------



## jomama22

Finally had time to hook up the evc2 to the strix. Just messing around a bit, will have to take it outside to really push it but got some decent results in port royal using the 5950x:
Score: 15735
https://www.3dmark.com/pr/776950


----------



## Biscottoman

jomama22 said:


> Finally had time to hook up the evc2 to the strix. Just messing around a bit, will have to take it outside to really push it but got some decent results in port royal using the 5950x:
> Score: 15735
> https://www.3dmark.com/pr/776950
> 
> View attachment 2474233


Wow GREAT score man, did you have issues achieving such scores using a 5950x? Do you think there would be a big difference in scores between the 5950x and the 5900x?


----------



## jomama22

Biscottoman said:


> Wow GREAT score man, did you have issues achieving such scores using a 5950x? Do you think there would be a big different in scores between the 5950x and the 5900x?


I don't see why not. Others have had issues using AMD's 5xxx series for whatever reason in Port Royal/Time Spy. Port Royal isn't really CPU dependent for the most part though.


----------



## 86Jarrod

jomama22 said:


> I don't see why not. Others have had issues using amd's 5xxx series for whatever reason for port royal/timespy. Port royal isn't really cpu dependent for the most part though.


Nice! I'm still waiting on my evc to come in stock and ship. Did you figure out a way to change nvvdd without changing msvdd?


----------



## jomama22

86Jarrod said:


> Nice! I'm still waiting on my evc to come in stock and ship. Did you figure out a way to change nvvdd without changing msvdd?


I have not, but I think I know a way to do it. I would need to take the card apart again to double check (unless we can find better closeup pictures of the two MP2888A controllers). The controllers have an address pin (ADDR) that, when hooked up to a voltage divider between the 1.8V output pin (VDD18) and common ground, lets you set the PMBus/I2C address during boot.

Looking at the few pictures I can find where I can actually see the traces, there are pads for the resistors that would make up this voltage divider. If that is the case, you would just need to solder on those two resistors (there is a chart in the MP2888A datasheet with the resistor values you need for the various addresses you can use).

So if you have your card apart by chance (or if anyone else does!) and can take pictures of the two MP2888As where we could see the traces, we could probably determine what's what.

Also, the pictures from TPU seem to be different from the other ones I have found, as the spot for those resistors already has them on there lol. So it may have been a pre-production unit. Not really sure why they would have them both on the same address, maybe just a mistake? Who knows.


----------



## Noaphex

Sheyster said:


> Hey guys, does anyone know for sure if the Zotac Trinity 3090 has build quality issues and tends to CTD under full load?


What is CTD?? I own the 3090 Trinity and it has run great btw!


----------



## Falkentyne

Noaphex said:


> What is CTD?? I own the 3090 Trinity and it has ran great btw!


Crash to Desktop.


----------



## mirkendargen

I don't remember anyone reporting a Zotac failing; the perception is just coming from a Youtuber (I forget which) criticizing its power design at release.


----------



## mattskiiau

What are people's 24/7 preferred OC settings for everyday use?

I'm currently testing 0.975V @ 2100MHz in Cyberpunk and Port Royal. Seems stable in CP but not Port.

What should I be trying to achieve for everyday usage?

(Trio X with 500w bios)


----------



## Falkentyne

mirkendargen said:


> I don't remember anyone reporting a Zotac failing, the perception is just coming from a Youtuber (I forget which) criticizing it's power design on release.


You know the world is coming to an end when a 3090 FTW3 dies with the red LED light of death when playing GTA5 / League of Legends / Halo 3 MCC / some other low load game, and Zotacs last longer...


----------



## Nizzen

jomama22 said:


> Finally had time to hook up the evc2 to the strix. Just messing around a bit, will have to take it outside to really push it but got some decent results in port royal using the 5950x:
> Score: 15735
> https://www.3dmark.com/pr/776950
> 
> View attachment 2474233


You don' use the "hotwire" holes to hook up the Evc2?


----------



## Exilon

cenumis said:


> What are peoples 24/7 preferred OC settings for everyday use?
> 
> I'm currently testing .975mv @ 2100mhz in cyberpunk and port royal. Seems stable in CP but not port.
> 
> What should I be trying to achieve for everyday usage?
> 
> (Trio X with 500w bios)


Currently doing 900mV @ 1850MHz to be stable in all uses.
I think my 3090 would need maxed-out voltage to hit 2100MHz.
Temperature derating of boost is a big problem at higher voltages and it's hard to dial in a fully stable curve. Hopefully I'll have my waterblock next week.

Anyone have luck with the OC scanner? Whenever I try it, it locks up the system on the 875mV point.


----------



## jomama22

Nizzen said:


> You don' use the "hotwire" holes to hook up the Evc2?


I did, I soldered wires with pin plugs on the other end (bottom of picture) into the I2C holes there. Couldn't solder pins in there as they would have hit the backplate, even the 90° ones.


----------



## mattskiiau

Exilon said:


> Currently doing 900mV @ 1850MHz to be stable in all uses.
> I think my 3090 would need maxed out voltage to hit 2100MHz
> Temperature derating of boost is a big problem with higher voltages and it's hard to dial in a fully stable curve. Hopefully I'll have my waterblock next week.
> 
> Anyone have luck with the OC scanner? Whenever I try it it locks up the system doing the 875mV point.


I've settled on reflashing the 380w stock bios for the Trio X instead of keeping the 500w EVGA one, since stock can maintain 0.925V @ 2000MHz. It pulls around 370-380w which is fine.
I'm a bit worried that the cheaper VRMs of the Trio X can't hold 500w for daily use.

I tried OC Scanner but it gives bad results. It didn't crash or lock up my system on either BIOS.


----------



## WMDVenum

cenumis said:


> What are peoples 24/7 preferred OC settings for everyday use?
> 
> I'm currently testing .975mv @ 2100mhz in cyberpunk and port royal. Seems stable in CP but not port.
> 
> What should I be trying to achieve for everyday usage?
> 
> (Trio X with 500w bios)


I have a shunted 3090 FE and found +160/1125 to be the highest I can go and be stable; the requested clock is about 2145/2130 under load and effective is about 10-15 lower than requested. I can undervolt at 2205/1.1v, but the performance is lower than the +160 offset due to reduced effective clocks.

The highest I can pass Port Royal at is +200/1250, but that isn't stable for gaming.


----------



## khunpunTH

520w full load with a 1-rad test.

After I bought a new case, the result is very satisfying. This is the TT Distrocase 350P.
It reduces temperature a lot. I assume the huge distro plate acts like a radiator to reduce water temperature.

System is a 10900k at 5.0,
MSI Gaming X Trio 3090, EVGA 520w bios, Barrow water block, TF8 silicone,
one EK PE 360 radiator in push-pull config, 1200 rpm fans.
After 20 mins of Furmark, max temp is 52c, water temp 36c, ambient temp 27c.

I also tried a 2-rad config with a 3D-printed top mount for the rad; running Furmark for 20 mins at 520w, temp is around 48c, water temp max 33c.
FYI, for you guys considering buying a new case.

ps. I am waiting for an EK XE360 to run hard tube.


----------



## jomama22

86Jarrod said:


> Nice! I'm still waiting on my evc to come in stock and ship. Did you figure out a way to change nvvdd without changing msvdd?


Figured out a way just now. No hardmod needed.
(This first part may not be necessary; Elmor suggested it when talking to him about the hard mod and needing to make sure the EEPROM would check the ADDR pin and not the register for the address)
1. Write 0xBE [7] = 0 (address from ADDR pin)
2. Write 0x13 = 0x00 (disable EEPROM write protection)
3. Write 0x15 (cmd only, data length = 0, store user config to EEPROM)
4. Power cycle the controller and see if 0xBE [7] = 0

What is necessary:
1. Write 0x04 = 0x0000 (sets password to 0000/unprotected mode)
2. Write 0xBE = 0x20 (sets use of the ADDR pin, with addresses starting at 2)
Then I tried the 0x13 command but got no ack, so I unplugged and replugged the evc2 and bam, 3 controllers show up in the evc2.


NVVDD on top, MSVDD bottom right. 0x21 = GPU/NVVDD, 0x22 = MSVDD
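For anyone wanting to replay that unlock sequence with their own I2C tooling, the two necessary writes can be laid out as data first and inspected before anything is sent to hardware. This is only a sketch of the sequence quoted above — verify every register against the MP2888A datasheet before sending anything:

```python
# Hedged sketch of the MP2888A re-address sequence described in the post,
# expressed as plain data so it can be reviewed (or fed to smbus2, an EVC2,
# etc.) rather than typed register-by-register. Register meanings are taken
# from the post, not independently verified.
def mp2888a_readdress_sequence():
    """Return (register, value, description) tuples for the unlock sequence."""
    return [
        (0x04, 0x0000, "password 0000 -> unprotected mode"),
        (0xBE, 0x0020, "use ADDR pin, addresses starting at 2"),
    ]

for reg, val, why in mp2888a_readdress_sequence():
    print(f"write 0x{reg:02X} = 0x{val:04X}  # {why}")
```

Keeping the sequence as data also makes it easy to log exactly what was sent if the controller stops ACKing, as happened with the 0x13 command above.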


----------



## mattskiiau

Do our 3090s have something like VRM thermal protection?
Still concerned about going back to pushing 500w daily on an X Trio. Don't want to hear a pop


----------



## long2905

cenumis said:


> Do our 3090's have something like VRM thermal protection?
> Still concerned about going back to pushing 500w daily on an X Trio.. Don't want to hear a pop


500w is fine for even a 2 x 8pin card vrm


----------



## mattskiiau

long2905 said:


> 500w is fine for even a 2 x 8pin card vrm


Thanks. Is there any info or source that could back that? I'm just paranoid.


----------



## nievz

Lord of meat said:


> dont forget to set a custom fan curve in afterburner, dont know what values are set in the evga and i just run it at 100 until eiswolf2 comes out🤓.


Hi, can you please send me the link to the EVGA BIOS that you're using on your X Trio 3090?


----------



## pat182

Falkentyne said:


> You know the world is coming to an end when a 3090 FTW3 dies with the red LED light of death when playing GTA5 / League of Legends / Halo 3 MCC / some other low load game, and Zotacs last longer...


the evga forum is a ****show


----------



## des2k...

cenumis said:


> Thanks. Is there any info or source that could back that? I'm just paranoid.


well, for 500w board power, if you subtract the memory power (the memory has its own VRM), about 100w, you're actually only pulling 400w from the core VRM (the bigger VRM)

so technically speaking, even if the core VRM is weak, there's no way it can't handle 400w, as that's just 40w-60w in power loss from the VRM (usually easy to cool with air)
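The 40-60W loss figure is consistent with typical buck-converter efficiency. A quick check, assuming a 90-92% efficient core VRM (the efficiency range is an assumption typical of modern power stages, not a measurement of this card):

```python
# Quick sanity check of the reasoning above: at a given conversion
# efficiency, how much heat does the core VRM itself dissipate while
# delivering a given output power?
def vrm_loss_w(output_w: float, efficiency: float) -> float:
    """Heat dissipated by a VRM delivering output_w at the given efficiency."""
    return output_w * (1.0 / efficiency - 1.0)

for eff in (0.90, 0.92):
    print(f"{eff:.0%} efficient: {vrm_loss_w(400, eff):.0f} W of VRM heat")
```

At 90% efficiency a 400 W core load dissipates roughly 44 W in the VRM, squarely inside the 40-60 W estimate above and easily spread across a dozen-plus power stages.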


----------



## WilliamLeGod

cenumis said:


> Do our 3090's have something like VRM thermal protection?
> Still concerned about going back to pushing 500w daily on an X Trio.. Don't want to hear a pop


What "pop" do you mean exactly? I'm having random pops on my Trio too on 500w/520w, same issue, and back at stock 370w I still hear a pop sound occasionally. Performance is still the same though


----------



## Zogge

I have upgraded my 3090 Strix OC (520W BIOS) from an EKWB/MP5Works block to a Bykski/MP5Works block.
With everything else the same - Kryonaut paste, Grizzly 8 pads, flow rate, water temp 25-31 degrees idle/load, all block screws at maximum torque etc - the Bykski is running a lot colder.

Now it never goes beyond delta T +18 degrees (max 49), while the EK could hit delta T +29 degrees (max 60) on long runs. +120MHz / +1200MHz during gaming.

The paste was spread nicely on the EK when I dismantled it, so everything was done as per the instructions.

Anyone who wants the EK block cheap, pm me.
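For reference, the delta T figures in this thread are just core temperature minus coolant temperature; a trivial sketch (31°C is the load water temp given above, so the max temps and deltas line up):

```python
def delta_t(gpu_temp_c, water_temp_c):
    """Delta T over water: GPU core temperature minus coolant temperature."""
    return gpu_temp_c - water_temp_c

# With the ~31C load water temp from the post:
print(delta_t(49, 31))  # 18 -> the Bykski figure
print(delta_t(60, 31))  # 29 -> the EK figure
```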


----------



## des2k...

Zogge said:


> I have upgraded my 3090 Strix OC 520W bios from EKWB/MP5Works to Bykski/MP5Works block now.
> With everything else the same - Kryonaut paste, Grizzly 8 pads, flow rate, water temp 25-31 degrees idle/load, all screws on the block to the maximum torque etc - the Bykski is running a lot colder.
> 
> Now it never goes beyond delta T +18 degrees (max 49), while the EK could hit delta T +29 degrees (max 60) on long runs. +120MHz / +1200MHz during gaming.
> 
> The paste was spread nicely on the EK when I dismantled it so everything was done as per instruction.
> 
> Anyone want the EK block cheap, pm me.


prob a bad mount on the EK or the paste application
I'm delta 17-19C if I keep my Zotac under 500W. At 600W+ the delta is 24-26C with the EK block.

I can prob fix this with some paper washers (on the block standoffs), because I know I don't have an even paste spread on the core.

I was like you at 29C delta even at 400W; I used double washers on the core and an X method for paste, and that fixed it.


----------



## itssladenlol

WilliamLeGod said:


> What "pop" do u mean exactly? Im having random pop on my trio too on 500w/520w same issue and back to stock 370w still hearing pop sound occasionally. Performance is still the same though


Fuse exploding pop


----------



## des2k...

itssladenlol said:


> Fuse exploding pop


It wouldn't be a fuse! That would mean a dead card with no power, lol
maybe coil whine?


----------



## nievz

motivman said:


> go to device manager, right click on your card, click disable device. The screen will black out and video comes back.
> 
> open cmd prompt in admin mode
> 
> chdir C:\nvflash (or wherever your nvflash folder is located)
> 
> Type nvflash --protectoff
> 
> then nvflash -6 "bios".rom
> 
> Y
> 
> Y
> 
> re-enable your card in device manager, restart computer... done!


After disabling the GPU in Device Manager, will the screen still go black during the update process or not?


----------



## itssladenlol

des2k... said:


> It wouldn't be a fuse ! that would mean a dead / no power card lol
> maybe coil whine ?


Dude... he just asked what the other person was talking about when saying "I'm afraid of using over 500W and hearing a pop". He was talking about being afraid that the fuse pops.

I wasn't talking about the other person's card making weird noises


----------



## WilliamLeGod

des2k... said:


> It wouldn't be a fuse ! that would mean a dead / no power card lol
> maybe coil whine ?


I'm 100% sure it is not coil whine. It pops more during cool-down from load back to idle, and vice versa


----------



## mattxx88

Any feedback on the ModDIY 12-pin custom GPU cable for the FE?


----------



## dante`afk

great job @*jomama22*
I wish I knew my way around soldering like that


with an Intel CPU you should be able to break 16k+ easily


----------



## VerySugoi

Just bought a KFA2/GALAX 3090 SG and was wondering if anyone here has that card and doesn't mind sharing their opinion on it. I'm gonna run it on air for a month(ish) to test for defects, then watercool it.


----------



## Thanh Nguyen

Zogge said:


> I have upgraded my 3090 Strix OC 520W bios from EKWB/MP5Works to Bykski/MP5Works block now.
> With everything else the same - Kryonaut paste, Grizzly 8 pads, flow rate, water temp 25-31 degrees idle/load, all screws on the block to the maximum torque etc - the Bykski is running a lot colder.
> 
> Now it never goes beyond delta T +18 degrees (max 49), while the EK could hit delta T +29 degrees (max 60) on long runs. +120MHz / +1200MHz during gaming.
> 
> The paste was spread nicely on the EK when I dismantled it so everything was done as per instruction.
> 
> Anyone want the EK block cheap, pm me.


My friend has an 11-12C delta with his EK block on a Strix and runs the 1000W BIOS daily. You must have mounted it wrong somewhere. I will have a Bykski for the FTW3 next week and hope it does well. The seller said he has a 20C delta without liquid metal.


----------



## Canson

Fans acting weird after the KPE 520W BIOS flash on my Suprim X: the left and middle fans stay at 0 dB and the right one spins at 40% at idle. I went back to the stock BIOS.
Is there any other BIOS that works?


----------



## Canson

cenumis said:


> What are peoples 24/7 preferred OC settings for everyday use?
> 
> I'm currently testing .975mv @ 2100mhz in cyberpunk and port royal. Seems stable in CP but not port.
> 
> What should I be trying to achieve for everyday usage?
> 
> (Trio X with 500w bios)


.850mV @ 1920MHz seems stable in Port Royal and Cyberpunk 2077 for me.
But for whatever reason it jumps to 1905 or 1935 sometimes after a reboot.
I don't know why. Seems I can't lock it properly to 1920MHz


----------



## Nizzen

jomama22 said:


> I did, I soldered wires with pin plugs on the other end (bottom of picture) into the i2c holes there. Couldn't solder pins in there as they would have hit the backplate, even the 90* ones.


Tnx for answer


----------



## Zogge

des2k... said:


> prob a bad mount on the ek or paste application
> I'm delta 17c-19c if I keep my zotac under 500w. At 600w+ delta is 24,26c with the EK block.
> 
> I can prob fix this, with some paper washers (on the block stand offs)because I know I don't have even paste spread on the core.
> 
> I was like you at 29c delta even at 400w, I used double washers on the core and not X method for paste and that fixed it.


I talked with EK support and they said 25 degrees delta T was normal.


----------



## Redjester

I'm running an EKWB block with the MP5Works BIOS and have a 20-22C delta over liquid on my shunted Zotac with liquid metal. The card stabilizes around 56C at 2070MHz/+1005 mem after hours of War Thunder (which holds the card at 100%). My PC is pulling 750W from the wall.

One thing I have noticed with my EKWB block is that the standoffs will loosen if you do a lot of adjusting with the backplate. Make sure they stay tight.


----------



## Gebeleisis

Zogge said:


> I have upgraded my 3090 Strix OC 520W bios from EKWB/MP5Works to Bykski/MP5Works block now.
> With everything else the same - Kryonaut paste, Grizzly 8 pads, flow rate, water temp 25-31 degrees idle/load, all screws on the block to the maximum torque etc - the Bykski is running a lot colder.
> 
> Now it never goes beyond delta T +18 degrees (max 49), while the EK could hit delta T +29 degrees (max 60) on long runs. +120MHz / +1200MHz during gaming.
> 
> The paste was spread nicely on the EK when I dismantled it so everything was done as per instruction.
> 
> Anyone want the EK block cheap, pm me.


can you post some pictures?
I got a Bykski AIO for the 3090 and its waterblock can be reused in a custom loop.
I just need a visual inspection of yours if you can.
Also, did it come with a backplate?


----------



## SolarBeaver

Hey guys, I know this has been discussed a million times, but if someone is willing to answer I'd greatly appreciate it, because this has been bothering me for quite some time.

Is it worth going water for performance in games only, particularly in CP2077?
For now, on an air-cooled 3090 Strix I'm getting around 50-55 fps average at 4K / RTX high / DLSS balanced with [email protected] on the silent BIOS, temps at 79-80, and real clocks sitting around 1940-1960 per thermspy.
I already ordered an EK block with a 420mm rad, but I'm having second thoughts before it ships. The main reason is not the money but voiding the warranty in my country, and I really like the looks of the air-cooled Strix, as silly as that sounds.

So what should the average performance gain from going to water be?


----------



## GoldCartGamer

Are there any AIOs for the 3090 or are we still waiting on products to launch?


----------



## Gebeleisis

just had one installed


----------



## GoldCartGamer

Gebeleisis said:


> just had one installed
> 
> View attachment 2474342


Is that the bykski AIO? I haven't been able to find it available anywhere online.


----------



## Zogge

Gebeleisis said:


> can you post some pictures ?
> I got a bykski aio for 3090 and the waterblock can be reused in a custom loop.
> I just need some visual inspection of yours if you can.
> Also , did it came with a backplate ?


I go more for function than looks hence please do not judge my setup based on looks.


----------



## WMDVenum

SolarBeaver said:


> Hey guys, I know it has been discussed million times, but if someone has some will to answer it I'd greatly appreciate it, because this is bothering me for quite some time already.
> 
> Is it worth going water for performance in games only, particularly in CP2077?
> For now on air strix 3090 I'm getting around 50-55 fps average on 4k/rtx high/dlss balanced with [email protected] with silent bios and the temps 79-80, real freqs are smoothing like 1940-1960 with thermspy.
> I already ordered a EK block with 420mm rad, but having second thoughts before it got shipped, the main reason is not the money, but voiding the warranty in my country and I really like the looks of the air-cooled strix, as silly as it sounds.
> 
> So what's the average gain in performance for going water should be?


I doubt going to water will give you a noticeable performance improvement unless you plan on going 500+ watts or want a quieter machine. You can probably even handle higher power on stock cooling, with the tradeoff of noise and heat. Water really shines when you want to daily-drive very aggressive overclocks, but for standard use performance isn't the main driving force.


----------



## Gebeleisis

GoldCartGamer said:


> Is that the bykski AIO? I haven't been able to find it available anywhere online.


I got it from here:

US $202.66 | AIO GPU Water Cooling Kit for Nvidia RTX 2060 2080 3060 3070 3080 3090 AIC Reference Graphics Card VGA Radiator 5V 3-pin A-RGB - AliExpress
www.aliexpress.com

It does not have a backplate!
Other than that it works really, really well.
I pointed a fan at my backplate and got another 5C drop in temps.


----------



## SolarBeaver

WMDVenum said:


> I doubt going to water will give you noticeable performance improvements unless you plan on going 500+ watts or want a quieter machine. You can probably even handle higher power with stock cooling with the tradeoff of noise and heat. Water really shines when want to daily drive very aggressive overclocking. but for standard use performance isn't the main driving force.


Thanks a lot!
I want to overclock it as much as possible, and the chip seems to be OK for it, as it clocks 2145 max in PR.
But I'm just unsure of the actual gain in games: does going from 2010MHz requested to, say, 2100MHz+ get something like 5-10% actual performance, assuming real clocks are also higher with the boost algorithm not throttling back on thermals?


----------



## jomama22

Zogge said:


> I talked with EK support and 25 deg delta T was normal they said.


Those guys are full of it. Not sure about your setup, but mine will steady-state at ~20C delta over ambient at a constant 600-630W load (TS GT2 looping for 2 hours). At 500W (using the power limit to restrict the same loop as above), it will steady-state at ~15C over ambient.

This is with liquid metal and 2x HWLabs GTX 360s in push/pull, fans at 1600rpm.

They are just covering their ass for their inconsistent production. I don't have any issues with contact on the die except the lower-left corner.

You make me want to try a bykski block and see if it performs better.


----------



## jomama22

SolarBeaver said:


> Hey guys, I know it has been discussed million times, but if someone has some will to answer it I'd greatly appreciate it, because this is bothering me for quite some time already.
> 
> Is it worth going water for performance in games only, particularly in CP2077?
> For now on air strix 3090 I'm getting around 50-55 fps average on 4k/rtx high/dlss balanced with [email protected] with silent bios and the temps 79-80, real freqs are smoothing like 1940-1960 with thermspy.
> I already ordered a EK block with 420mm rad, but having second thoughts before it got shipped, the main reason is not the money, but voiding the warranty in my country and I really like the looks of the air-cooled strix, as silly as it sounds.
> 
> So what's the average gain in performance for going water should be?


Your biggest advantage will be during gaming, when you won't drop clocks as much. You'll also be able to use higher power-limit BIOSes or shunts and still keep it cool.

If you have been using the Strix as-is and haven't had problems, the chance it will go belly up and need to be RMA'd is really low.


----------



## SolarBeaver

jomama22 said:


> Your biggest advantage will be during gaming when you won't drop clocks as much. Will also be able to use higher power limit bios' or shunts and still keep it cool.
> 
> If you have been using the strix as it is and havnt had problems, the chance it will go belly up and need to be RMA'd is really low.


Thanks a lot!
So the overall 5-10%+ gain in longer gaming sessions is definitely achievable, I'd guess...
Also yeah, I should be able to run the 520W BIOS and higher voltages without power/thermal capping. The 1000W BIOS is tempting too, but I'm kinda afraid to run it tbh.
Never done any shunt mods, so I'll probably pass for now.


----------



## des2k...

SolarBeaver said:


> Thanks a lot!
> I want to overclock it as much as possible and the chip seems to be ok for it, as it clocks 2145 max in PR.
> But I'm just unsure of the actual gain in games, is going from 2010mhz requested to say 2100mhz+ gets something like 5-10%+ actual performance, assuming real clocks also should be higher with boosting algorithm not throttling back because of thermals, right?


yeah, with a water block you can easily hold 2145 in any type of load, even 600W+ with the 1000W BIOS
In CP2077 I'm holding 2130 at about 480W (no FPS cap at 4K), ~45C for temps
Between 1900 core and 2160 core at 4K, for example, you're just looking at 6-10fps more; that's with HZ Dawn & RTX Quake. Not sure about other games, but I assume it's very similar at 4K.
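As a rough sanity check on those numbers: 1900 to 2160 MHz is about a +13.7% clock bump, and how much of that shows up as FPS depends on the baseline, which the post doesn't give. A sketch with an assumed 60 fps baseline and the quoted gain's +8 fps midpoint:

```python
def relative_gains(base_clock, oc_clock, base_fps, oc_fps):
    """Compare the relative clock increase to the relative FPS increase."""
    clock_gain = oc_clock / base_clock - 1
    fps_gain = oc_fps / base_fps - 1
    return clock_gain, fps_gain

clock_gain, fps_gain = relative_gains(1900, 2160, 60, 68)  # 60 fps baseline is assumed
print(f"clock +{clock_gain:.1%}, fps +{fps_gain:.1%}")     # clock +13.7%, fps +13.3%
```

At a higher baseline the same 6-10 fps is a smaller relative gain, which is why 4K results vary by title.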


----------



## Thanh Nguyen

Zogge said:


> I go more for function than looks hence please do not judge my setup based on looks.
> 
> View attachment 2474345
> View attachment 2474346
> View attachment 2474347


Damn, the Gigant is so huge, but a 0.5C delta is worth it. I don't know why my two Mora 420s can't get my delta down, even though my space is as tiny as yours.


----------



## WayWayUp

is it true that you can't shunt mod the Kingpin because of its design?


----------



## SolarBeaver

des2k... said:


> yeah water block, you can easily hold 2145 in any type of load even 600w+ with 1000w bios
> CP 2077, I'm holding 2130 about 480w (no FPS cap at 4k), ~45c for temps
> Between 1900 core and 2160 core, for example at 4k, you're just looking at 6-10fps more, that's with HZ Dawn & RTX Quake. Not sure about other games, but I'm assuming it's very similar at 4k.


Thanks!
You've been really helpful guys, now I'm pulling the trigger without any regrets!


----------



## sippo

Hey,
I have an 800W Enermax PSU with a 3090 Strix + Intel 9920X.
Right now I can't go beyond this:
https://www.3dmark.com/pr/779663 (is this OK for the stock Strix BIOS?)

I'm at the power limit all the time. I can probably flash a different BIOS on the card - but then I'd need to change the PSU, right?


----------



## HyperMatrix

WayWayUp said:


> is it true that you can't shunt mod the kinpin based on it's design?


It uses different-sized resistors, but I haven't heard anything about it not being shuntable. Personally I was planning to just silver-paint it once I got a block for it, since I can do 2145MHz without hitting the 520W cap as-is. Got any links to the claims about it being unshuntable? I'd be interested in reading up on it.



SolarBeaver said:


> Hey guys, I know it has been discussed million times, but if someone has some will to answer it I'd greatly appreciate it, because this is bothering me for quite some time already.
> 
> Is it worth going water for performance in games only, particularly in CP2077?
> For now on air strix 3090 I'm getting around 50-55 fps average on 4k/rtx high/dlss balanced with [email protected] with silent bios and the temps 79-80, real freqs are smoothing like 1940-1960 with thermspy.
> I already ordered a EK block with 420mm rad, but having second thoughts before it got shipped, the main reason is not the money, but voiding the warranty in my country and I really like the looks of the air-cooled strix, as silly as it sounds.
> 
> So what's the average gain in performance for going water should be?


Well, my Kingpin with the included 360mm AIO cooler and liquid metal can maintain about 42-45C under full load (~500W) at 2145MHz with the fans cranked, without any drops in Cyberpunk. If I drop the fans to about 50-60% I've got to go down to 2130MHz, otherwise it'll eventually crash. Memory is at +1361 as well (22223MHz). So yes, water cooling can help a lot with reaching and maintaining higher clock speeds. And most importantly... with OC stability.


----------



## HyperMatrix

sippo said:


> Hey,
> I have 800W PSU Ennermax with 3090 Strix + Intel 9920x.
> Right now I can't go for more than this:
> https://www.3dmark.com/pr/779663 (is this ok for stock Strix bios?)
> 
> Im on Power limit all the time, probably I can change bios on card - but I will need to change PSU right?


I hit the 900W draw limit on my UPS when my GPU and CPU go full load together. I wouldn't recommend an 800W PSU: ~500W from the GPU, 200-300W from the CPU, not including fans/pumps/peripherals. If you can afford a $2000 GPU, you can afford a better PSU. Haha.
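The headroom math behind that advice, as a rough sketch (the 75W for fans/pumps/peripherals and the 92% PSU efficiency are assumptions for illustration, not measurements):

```python
def wall_draw(gpu_w, cpu_w, other_w=75, psu_efficiency=0.92):
    """Estimate wall draw: total DC load divided by PSU efficiency."""
    dc_load = gpu_w + cpu_w + other_w
    return dc_load, dc_load / psu_efficiency

dc, wall = wall_draw(500, 300)
print(f"DC load {dc} W -> ~{wall:.0f} W at the wall")  # DC load 875 W -> ~951 W at the wall
```

Which is right around the 900W UPS limit mentioned above, and well past an 800W PSU's rating.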


----------



## UdoG

Which thermal paste did you use?
Should I use Thermal Grizzly Kryonaut or Arctic MX-4?


----------



## sippo

HyperMatrix said:


> I hit the 900W draw limit on my UPS when my GPU and CPU are going full load together. I wouldn't recommend an 800W PSU. 500W~ from GPU, 200-300W from CPU. Not including fans/pumps/peripherals. If you can afford a $2000 GPU, you can afford a better PSU. Haha.


that's true, I'm searching for an AX1600 but with no luck


----------



## WayWayUp

no claims, it's just what I heard from another user. I was concerned considering the Resizable BAR update will require a vBIOS update. The majority of Kingpin users are relying on the 1000W vBIOS, but that BIOS may become obsolete if Resizable BAR shows decent gains


----------



## HyperMatrix

UdoG said:


> Which thermal paste did you use?
> Should I use Thermal Grizzly Kryonaut or Arctic MX-4?


Not sure who this was for, but I use liquid metal. I normally use Conductonaut, but this time grabbed some Thermaltake Silver King as it was on sale. Immediate and huge difference in cooling performance. Highly recommend it.



WayWayUp said:


> no claims it's just what i heard from another user. I was concerned considering the resizable bar update will require a vios update. Majority of kingpin users are relying on the 1000w vbios but that bios may become obsolete if resizable bar shows decent gains


Well, Resizable BAR is hit or miss in current games, but I think if the game engine was designed with it in mind it will start to be a lot more impactful, especially with DirectStorage/RTX IO. Anyone who legitimately obtained the 1000W BIOS from Vince should be able to request an updated version. Those who were using it without signing up to void their warranty will have to wait for someone like 0451 to get the new BIOS from Vince and share it online.

Also, I'm not sure how much power others are pulling on their Kingpin cards, but I was able to do a 2190MHz PR run with almost zero power throttling with the AIO cooler. So even if stacked/replaced shunts are off the table, and Vince doesn't provide an updated XOC BIOS (which I doubt), silver paint should still provide enough of a boost to allow over 2200MHz.


----------



## bl4ckdot

Hello,
I just got my white Strix this morning; it's performing a bit better than my black one. I'll keep the white for sure, it's a beauty. Sadly it has more coil whine, but heh.
I did a Port Royal run: https://www.3dmark.com/3dm/56782197
I noticed the white one comes with the new BIOS already on it.

I also tried the Corsair XG7 on a third Strix for a friend; it was sort of nice tbh. I felt the backplate lacked some pressure though.


----------



## 86Jarrod

Just ran a Time Spy Extreme stress loop on the 1000W XOC BIOS. Phanteks block on my Strix, and I saw a max delta over loop temp of 11C.


----------



## bl4ckdot

86Jarrod said:


> Just ran a stress loop timespy extreme on 1000w xoc bios. Phanteks block on my strix and saw a max delta over loop temp of 11c.


That's the block I want for my Strix, so I guess you recommend it? Anything special to note?


----------



## 86Jarrod

bl4ckdot said:


> That's the block I want for my Strix, I guess you recommend it ? Anything special to note ?


Yeah, I'd recommend it on temps, and it looks alright. One thing: I broke the damn connector on the LED strip with very little pressure.


----------



## Zogge

Thanh Nguyen said:


> Dam it the Gigan is so huge and 0.5c delta is worth it. I dont know why my 2 mora420 cant get my delta down even my space is tiny like your space.


I would say normal is around a 2.5-3.0 degree delta T. To reach 0.5 delta T when benching, I need a floor fan to blow the hot air out of the corner and to max the fan speed on the radiators; they run at 900rpm when gaming.
I tried it again today as we had -7 degrees C in Sweden. With an open window I got the ambient in that corner down to 14 degrees, with the floor fan blowing the cold air in from the window and pushing the hot air away from the rads. Delta T was 0.59 degrees, and the 3090 under a full 520W gaming load at +120/+750 was running at ~28 degrees with the Bykski block. Flow rate 210 l/h.

Then my wife came back home and the party was over.


----------



## Zogge

86Jarrod said:


> Yeah I'd recommend it on temps and it looks alright. One thing is I broke the damn connection on the led strip with very little pressure.


I think I shorted my Bykski LED strip as well when I installed it, stupid as I am.


----------



## 86Jarrod

Zogge said:


> I think I shorted my bykski led strip as well when I installed it stupid as I am.


I didn't even get it in my PC yet lol. Just talked to Phanteks; they're sending a new LED strip. Hopefully I don't fat-finger this one too.


----------



## GoldCartGamer

Gebeleisis said:


> I got it from here:
>
> US $202.66 | AIO GPU Water Cooling Kit for Nvidia RTX 2060 2080 3060 3070 3080 3090 AIC Reference Graphics Card VGA Radiator 5V 3-pin A-RGB - AliExpress
> www.aliexpress.com
>
> It does not have a backplate !
> other than that it works really really well.
> I pointed a fan to my backplate and got another 5C drop in temps.


Thank you for the link!

Which card are you using with the AIO? Trying to find out if my card will be compatible. I have a few spare fans laying around I could have one or two pointing at the backplate.


----------



## jomama22

86Jarrod said:


> Yeah I'd recommend it on temps and it looks alright. One thing is I broke the damn connection on the led strip with very little pressure.


Just solder it back on if you want it. It will give you a bit of practice for the EVC2 lol. Also, see my reply to you about the MSVDD/NVVDD setting. I was able to figure it out and get them controlled separately/independently from each other. The EVC2 software now shows a controller for both.


----------



## Gebeleisis

GoldCartGamer said:


> Thank you for the link!
> 
> Which card are you using with the AIO? Trying to find out if my card will be compatible. I have a few spare fans laying around I could have one or two pointing at the backplate.


I am using a reference-design board - Palit Gaming Pro, 2x 8-pin.
This kit will fit any reference board design.

For the backplate I have a heatsink that I took from an Arctic GPU cooler, and I will fit an 80mm fan on it.

I would like a watercooled backplate, but hey, I'll make do with whatever I have around until one is available


----------



## WMDVenum

Zogge said:


> I think I shorted my bykski led strip as well when I installed it stupid as I am.


I think I broke my Bitspower LEDs the last time I opened my case. They only display a faint green light now. Not really worried about it - at least not enough to tear the block apart to try to fix it.


----------



## 86Jarrod

jomama22 said:


> Just solder it back on if you want it. Will give you a bit of practice for the evc2 lol. Also, see my reply to you about the msvdd/nvvdd setting. Was able to figure it out and get them to be controlled separately/independently from each other. Evc2 software now shows a controller for both.


I would, but it pulled some traces with it, plus it's a flimsy strip. I did see that you got it working, that's awesome! I hope the new EVC2, or whatever the new one is called, comes in soon. I really want to play with it.


----------



## GoldCartGamer

Gebeleisis said:


> I am using a reference design board - palit gaming pro 2*8 pin
> This kit will fit any reference board design.
> 
> For the backplate I have a heatsink that I took out from a artic gpu cooler and will kmfit an 80mm on that one.
> 
> I would like to have a watercooled backplate but hey, I will do with whatever I have around until one is available


Aw darn. I just looked, and the 3090 I have is a custom PCB. I will just keep running on air.

Thank you for all of the help and knowledge. I have saved it for future reference.


----------



## Canson

Can't overclock my 3090 Suprim X because it hits the 450W power limit all the time and the core drops.
Any 500W BIOS out there that I can try? One that doesn't mess with my fans, please.

I scored 14,159 in Port Royal (Intel Core i9-9900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com

That's what I got in PR.


----------



## mattskiiau

des2k... said:


> well for 500W board power, if you subtract the memory power (the memory has its own VRM), about 100W, you're actually only pulling 400W from the core VRM (the bigger VRM)
> so technically speaking, even a weak core VRM can handle 400W, as that's only 40-60W of power loss in the VRM (usually easy to cool with air)


Thanks. I read somewhere that the 45A power stages on the X Trio couldn't handle pushing a 500W BIOS, but all of this is over my head.



WilliamLeGod said:


> What "pop" do u mean exactly? Im having random pop on my trio too on 500w/520w same issue and back to stock 370w still hearing pop sound occasionally. Performance is still the same though


I was just referring to the card not being able to handle 500W and going "pop", as in, blowing up! Sorry for the confusion.


----------



## Chamidorix

There is zero problem shunting the Kingpin. It has the standard 7x 5mOhm shunts, and the 3x 8-pin and 1x PCIe-slot resistors are double-width to accommodate the wider space between traces on 12V.
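For anyone wondering why shunt mods change the power reading at all: the controller senses current as the voltage drop across those 5mOhm shunts, so soldering an equal-value resistor on top halves the effective resistance and the card reports roughly half the true power. A sketch of that arithmetic (standard parallel-resistor formula; the 700W figure is just an example):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def reported_power(actual_w, stock_mohm=5.0, stacked_mohm=5.0):
    """Power the card reports after stacking an extra shunt on the stock one."""
    effective = parallel(stock_mohm, stacked_mohm)
    return actual_w * (effective / stock_mohm)  # sensed voltage scales with resistance

print(parallel(5.0, 5.0))    # 2.5 (mOhm)
print(reported_power(700))   # 350.0 -> a true 700 W draw reads as ~350 W
```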


----------



## des2k...

GoldCartGamer said:


> Aw darn. Just looked and the 3090 I have is a custom PCB. I will just keep running on Air.
> 
> Thank you for all of the help and knowledge. I have saved it for future reference.


The PCB looks reference to me; an EK reference block should fit


----------



## GoldCartGamer

des2k... said:


> The pcb looks reference to me, EK reference block should fit


The Asus TUF does look similar to reference; I compared it. I wonder if I could order a different block and use it with the Bykski AIO, since that particular unit can be used in a custom loop setup.


----------



## gfunkernaught

New 3090 owner here... MSI Gaming X Trio to be precise.
I've read that flashing the EVGA FTW3 BIOS will work on these boards and increase the power limit to 500W. I'm still on the stock cooler and don't plan to OC manually just yet; I'm waiting for the right block to be released. I just want the power limit increased, and I'll use whatever stock clocks the EVGA BIOS uses. On TPU's VGA BIOS database I see FTW3 and FTW3 Ultra. Which one should I use?


----------



## mattskiiau

gfunkernaught said:


> New 3090 owner here...MSI Gaming X Trio to be precise.
> I've read that flashing the EVGA FTW3 bios will work on these boards and increase the power limit to 500w. I am still on the stock cooler and don't plan to OC manually just yet, waiting for the right block to be released. I just want the power limit to be increased and use whatever stock clocks the evga bios use. On TPU/vgabios I see FTW3 and FTW3 Ultra. Which one should I use?


I used this one on my X Trio:

EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## gfunkernaught

cenumis said:


> I used this one on my X trio.
>
> EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com


Any issues I should be aware of?


----------



## galsdgk

Trio owners (@gfunkernaught, @cenumis, @GoldCartGamer), it can be safe to use the 1000W BIOS and run on air if you aren't irresponsible: https://www.3dmark.com/spy/17361878

I limited my power to 650W and used approximately the settings in the above benchmark (+ whatever gets 2,130 MHz, +1190 MHz on the RAM). Your voltage may vary.
During benchmarks I see a max of 75 degrees C with the fans at 100%. At boot I have to make a fan change to cue the card back to 100% fan speed... my MSI Afterburner settings don't stick even with the apply-at-startup setting enabled (?).

I'm running all the fans in my computer at 100% until I get all of my EKWB parts... all computer parts are so hard to get these days...!

I can run +1500 MHz RAM stable with a fan on the back, but I actually get better performance at +1190.

I'm running max voltage until I start seeing errors in GPU-Z, instability, or crazy temps.

If that's too much to stomach, on the stock BIOS you can get some good gains at 2040 MHz. Someone detailed the settings way back in this thread. The card likes to keep pushing past that OC (not stable), which you can mitigate with nvidia-smi.exe -lgc 210,2040.
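The `nvidia-smi` clock lock mentioned above can be sketched like this (run from an elevated prompt; 210 and 2040 are the min/max clocks from the post, and `-rgc` undoes the lock):

```shell
# Lock the GPU core clock range to 210-2040 MHz (needs admin/root)
nvidia-smi -lgc 210,2040

# Check what got applied
nvidia-smi -q -d CLOCK

# Revert to default clock behaviour when done
nvidia-smi -rgc
```

`-lgc`/`-rgc` are the short forms of `--lock-gpu-clocks`/`--reset-gpu-clocks`; on Windows the binary is `nvidia-smi.exe` in the driver folder.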


----------



## ttnuagmada

Zogge said:


> I have upgraded my 3090 Strix OC 520W bios from EKWB/MP5Works to Bykski/MP5Works block now.
> With everything else the same - Kryonaut paste, Grizzly 8 pads, flow rate, water temp 25-31 degrees idle/load, all screws on the block to the maximum torque etc - the Bykski is running a lot colder.
> 
> Now it never goes beyond delta T + 18 degrees degrees (max 49) and EK could hit delta T + 29 degrees degrees (max 60) on long runs. +120Mhz / +1200Mhz during gaming.
> 
> The paste was spread nicely on the EK when I dismantled it so everything was done as per instruction.
> 
> Anyone want the EK block cheap, pm me.


Something was up there. I'm at about a 16C delta with my Strix/EK block under stress testing.


----------



## Falkentyne

galsdgk said:


> Trio owners (@gfunkernaught, @cenumis, @GoldCartGamer), it can be safe to use 1000w bios and run on air if you aren't irresponsible. https://www.3dmark.com/spy/17361878
> 
> I limited my power to 650 and used approx settings in above benchmark (+ whatever to get 2,130 MHz, +1190 mhz on RAM). Your voltage may vary.
> During benchmarks I get max 75 degrees C with fans at 100%. At boot I have to make a fan change to queue the card to go back to 100% fan speed... my MSI afterburner settings don't stick even when I have my startup setting enabled (?).
> 
> I'm running all my fans in my computer at 100% until I get all of my EKWB parts... all computers parts are so hard to get these days...!
> 
> I can run +1500 mhz ram stable with a fan on the back, but I actually get better performance at +1190 ram.
> 
> I'm running max voltage until I start seeing errors from GPU-Z or instability or crazy temps.
> 
> If that's too much to stomach, on the stock bios, you can get some good gains at 2040 mhz. Someone detailed the settings way back in this thread. The card likes to keep pushing over that OC though (not stable), which you can mitigate with nvidia-smi.exe -lgc 210,2040.


It's not "safe".
The 1000W BIOS has all thermal protections disabled.
If you mess up ONE thermal pad somewhere, on a VRM or GDDR6X... just one... you can kiss your card goodbye.
It doesn't matter if you limit it to 500W. The card is still going to die.
You people need to get this into your heads. Seriously.
And I've seen people with overheating cards before who were saved by the thermal power limit, because they had a torn pad somewhere.

A 1000W vBIOS with all protections enabled IS safe if you overclock responsibly and don't exceed your PSU cable safe limits. That simply doesn't exist yet.

A 1000W vBIOS with all protections disabled is NOT safe unless you do a perfect job on your re-pad/re-paste, or the factory (stock heatsink) didn't mess up a pad somewhere (and I've seen pads messed up from the factory too!)


----------



## mattskiiau

Falkentyne said:


> It's not "safe"
> The 1000W Bios has all thermal protections disabled.


So does this mean, for example, that the FTW3 500W BIOS has OCP enabled and should theoretically be safe, even with the cheaper X Trio PCB components?
As long as we're not using a 1000W BIOS with OCP disabled, etc.?


----------



## Falkentyne

cenumis said:


> So does this mean, for example, the FTW3 500w bios has OCP enabled and should theoretically be safe, even with the cheaper X Trio PCB components?
> As long as we're not using a 1000w bios with OCP disabled etc?


Yeah, you pretty much nailed it.

Unless someone can give you the total max draw of the Trio X VRMs and demonstrate otherwise. This is something I'm unfamiliar with, but these boards are often overbuilt anyway. How many power stages does the Trio X have, and how many amps per power stage? If the VRMs are rated for (volts × amps = watts) 750W max, and you stay at 75% of that and keep it well cooled, you should be completely fine since you're NOT doing a vcore mod. For example, look at the 90A VRMs on the Maximus 12 Extreme motherboard; I forget how many power stages it has, I think it's 12 (or phases, though some boards run two power stages per phase). That means it can handle 1000W of power to the CPU. Good luck getting anywhere close to 1000... even on LN2 you're not going to exceed 500W (no one stress tests on LN2, since you would just burn through your entire pot instantly).

My point is, you can use a 500W FTW3 BIOS on the 3×8-pin card and be fine as long as you don't have any fan or performance issues. But you could use the Kingpin 1000W BIOS, set it to 400W, and still blow up your video card if you have a torn thermal pad. That's my point. The 1000W BIOS was designed for LN2 / world-record overclocking runs only.


----------



## jomama22

Was able to break 24000 on the Time Spy graphics score:

I scored 22 695 in Time Spy
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

Just barely lol


----------



## mattskiiau

Falkentyne said:


> (volts * amps=watts), 750W max


I believe the X Trio is 14 GPU power phases @ 45A, with 3 separate memory power phases. This would mean the GPU side can handle 630W in total?

If the card is pulling a maximum reading of 500W in GPU-Z, I would imagine that's the full package, and some of that 500W is going towards the memory as well. Or am I wrong about that?


----------



## changboy

For those who want to know: ordering a waterblock + backplate for the FTW3 Ultra direct from EK in Canada, with currency exchange and duty tax, will cost you around $410 CAD. It's not cheap for a waterblock.

The duty fee alone was $61.94 on top of the already expensive price lol.

Not long ago I paid around $200 for this; prices are getting crazy for watercooling parts.


----------



## gfunkernaught

Falkentyne said:


> Yeah you pretty much nailed it.
> 
> Unless someone can give you the total max draw of the Trio X VRM's and demonstrate otherwise. This is something I am unfamiliar with, but these boards are often overbuilt anyway. How many power stages does the TrioX have and what are the # of amps per power stage? If the VRM's are rated for (volts * amps=watts), 750W max, and you stay at 75% of that and keep it well cooled, you should be completely fine since you're NOT doing a vcore mod. For example look at the 90A VRM's on the Maximus 12 Extreme motherboard, and I forgot how many power stages it has, I think it's 12 (or phases, but some boards are often two power stages per each phase)? That's means it can handle 1000W of power from the CPU. Good luck getting anywhere close to 1000...even on LN2 you're not going to even be exceeding 500W...(no one stress tests on LN2 since you will just burn through your entire pot instantly).
> 
> My point is, you can use a 500W FTW3 bios on the 3 pin card and be fine as long as you don't have any fan or performance issues. But you could use the Kingpin 1000W bios, set it to 400W and still blow up your video card if you have a torn thermal pad. That's my point. The 1000W Bios was designed for LN2 / world record overclocking runs only.


I used to run the KPXOC BIOS on my 2080 Ti only for benches, and temps never exceeded 41°C, and I only benched with it once in a while. Must have done something right with the block install. One time I ran that BIOS with my PC in the garage for a mid-winter XOC bench.

So just to confirm, because I just got this card today and don't want it overworked at all: the FTW 500W BIOS is safe to use on this card even on air and default fan profiles, WITHOUT a manual OC? Like I said in my previous post, I don't want to move any sliders other than the power limit slider, not until I put it under water.


----------



## Falkentyne

gfunkernaught said:


> I used to run KPXOC bios on my 2080 TI only for benches and temps never exceeded 41c, and I only benched with those once in a while. Must have done something right with the block install. One time I ran those bios with my PC in the garage for that mid-winter xoc bench
> 
> So just to confirm because I just got this card today don't want it to be overworked at all, the ftw 500w bios are safe to use on this card even on air and default fan profiles WITHOUT manual OC? Like I said in my previous post, I don't want to move any sliders other than the power limit slider, not until I put it under water.


Remember, I don't have that card and I don't know its technical specs, so I can't exactly answer your question. You would be better off asking Buildzoid or Elmor if you want such a precise question answered.

You've seen plenty of people cross-flash other BIOSes here. The only dead cards we've seen that were not killed by hardware mods (or FTW3 red-light-of-death cards) have been those two guys who killed their cards using the 1000W BIOS.


----------



## Nico67

VerySugoi said:


> Just bought a KFA2/GALAX 3090 SG and was wondering if anyone here has that card and don't mind sharing their opinions on it. I'm gonna let it run on air for a month(ish) to test for defects and watercool it.


I got a Galax 3090 SG because I would otherwise still be waiting for a 3080/6800 XT. The standard air cooler was chunky but didn't cool very well, easily getting 80+°C with high fans and not very high clocks. Mine is now watercooled with an EK Vector block and dropped a good 50°C, but it's still pretty power limited.
One thing that's not standard: although it's a reference design, it has a white 6-pin header (for fans, I assume), so you will need to chop at the outer edges of the white connector to get the EK block to fit.


----------



## wheatpaste1999

jomama22 said:


> Figured out a way just now. No hardmod needed.
> (This first part made not be necessary, Elmor suggested it when talking to him about the hard mod and needing to make sure the eeprom would check the addr pin and no the register for the address)
> 1. Write 0xBE [7] = 0 (address from ADDR pin)
> 2. Write 0x13 = 0x00 (disable EEPROM write protection)
> 3. Write 0x15 (cmd only, data length = 0, store user config to EEPROM)
> 4. Power cycle the controller and see if 0xBE [7] = 0
> 
> What is necessary:
> 1. Write 0x04 = 0x0000 (sets password to 0000/unprotected mode)
> 2. Write 0xBE = 0x20 (sets use addr pin at addresses starting at 2)
> Then tried the 0x13 command but got no ack, so unplugged and replugged the evc2 and bam, 3 controllers show up in the evc2.


Can you explain your process here in a little more detail? I have a Strix and an EVC2SX and am trying to follow what you did, but I can't get it to work like you have.

I have connected PCON1 to I2C1, PCON2 to I2C2, and SCON2 to I2C3 on the EVC2. Using the "find devices" button in the EVC2 software lists uP9512 and MP2888 at all 3 I2C headers with the same addresses (20 and 27). I can monitor and control voltage, but not GPU and MSVDD independently. This seems to match up with your earlier findings and also what Elmor posted on his forum. 

On your read/write settings in the EVC2 software, I assume you are using I2C for the mode and 20 as the address (I can only select 20 for a MP2888 and 27 for a uP9512 in the drop down)? Are you connecting both PCON1 and PCON2 to separate I2C headers? Does it matter if you are trying to write to the I2C header connected to PCON1 or PCON2 (or SCON2)?

If I write 0x04 = 0x0000 and 0xBE = 0x20 to the I2C header connected to PCON1 or PCON2 I get acknowledgements that it works but no devices show up at address 21 even after power cycling the EVC2. 

Sorry for the dumb questions but this is new to me so any help would be greatly appreciated.


----------



## jomama22

wheatpaste1999 said:


> Can you explain your process here in a little more detail? I have a Strix and EVC2SX and trying to follow what you did but can't get it to work like you have.
> 
> I have connected PCON1 to I2C1, PCON2 to I2C2, and SCON2 to I2C3 on the EVC2. Using the "find devices" button in the EVC2 software lists uP9512 and MP2888 at all 3 I2C headers with the same addresses (20 and 27). I can monitor and control voltage, but not GPU and MSVDD independently. This seems to match up with your earlier findings and also what Elmor posted on his forum.
> 
> On your read/write settings in the EVC2 software, I assume you are using I2C for the mode and 20 as the address (I can only select 20 for a MP2888 and 27 for a uP9512 in the drop down)? Are you connecting both PCON1 and PCON2 to separate I2C headers? Does it matter if you are trying to write to the I2C header connected to PCON1 or PCON2 (or SCON2)?
> 
> If I write 0x04 = 0x0000 and 0xBE = 0x20 to the I2C header connected to PCON1 or PCON2 I get acknowledgements that it works but no devices show up at address 21 even after power cycling the EVC2.
> 
> Sorry for the dumb questions but this is new to me so any help would be greatly appreciated.


So I would just use PCON1 for running the commands.

Address = 0x20, command = 0x04 (length = 1), data = 0x0000 (length = 2), write.
Address = 0x20, command = 0xBE (length = 1), data = 0x20 (length = 1), write. Close out of the EVC2 software, wait a few seconds, reopen it, and scan the PCON1 connector; you should now have the 3 controllers at 0x21, 0x22, 0x27.

If that doesn't work, you'll have to do the first commands in that post and then do this again.

For now, you have to do the two commands above after every shutdown.

I only have PCON1 connected to the EVC2. I'm surprised you are seeing 0x20 and 0x27 on the SCON, tbh; when I checked continuity on that port, it didn't seem to be connected to either PCON, which is what I would expect if it does in fact connect to the same I2C bus.
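For readers following along, the two writes above can be expressed as plain data. This is only an illustration of the byte sequence from the post; the helper name is made up and is not a real EVC2 API:

```python
def mp2888_addr_unlock(base_addr=0x20):
    """Return the (device_addr, command, data_bytes) writes to replay
    after every shutdown, per the procedure above."""
    return [
        (base_addr, 0x04, [0x00, 0x00]),  # password 0x0000 -> unprotected mode
        (base_addr, 0xBE, [0x20]),        # take address from the ADDR pin, starting at 2
    ]

for dev, cmd, data in mp2888_addr_unlock():
    print(f"write dev=0x{dev:02X} cmd=0x{cmd:02X} data=" +
          " ".join(f"0x{b:02X}" for b in data))
```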


----------



## galsdgk

Falkentyne said:


> "Unless someone can give you the total max draw of the Trio X VRM's and demonstrate otherwise. "
> 
> It's not "safe"
> The 1000W Bios has all thermal protections disabled.
> If you mess up on ONE thermal pad somewhere, on a VRM or GDDR6....just one...you can kiss your card goodbye.
> It doesn't matter if you limit it to 500W. The card is still going to die.
> You people need to get this into your head. Seriously.


Thanks for shedding light on this.
The Trio has 14 phases [1]. Each phase seems good for 50 amps [2].
There are fuses on the board that trip at 800W.

I'm running at 1.068 V (but you can go to about 1.1 V before errors) × 50 A × 14 stages, so the max should be 770W (based on 1.1 V and your post a few posts back in [Official] NVIDIA RTX 3090 Owner's Club).

Given the 75% safe guidance with appropriate cooling, it seems a daily 577W is perfectly OK.

[1] MSI GeForce RTX 3090 Gaming X Trio Review
[2] https://www.onsemi.com/pub/Collateral/NCP302150-D.PDF

Bonus: the memory is rated for up to 1313 MHz [1].
Bonus 2: I believe the MSI Suprim has 2 additional phases (I read it somewhere and remember because a Trio owner said they considered just soldering them on, and people berated them).
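The arithmetic above as a quick sketch (numbers from this post; the 75% figure is the rule-of-thumb quoted a few posts back, not a spec):

```python
def vrm_limit_w(volts, amps_per_stage, stages):
    # volts x amps x number of power stages = theoretical VRM ceiling
    return volts * amps_per_stage * stages

peak = vrm_limit_w(1.1, 50, 14)    # worst-case ~1.1 V across 14x 50 A stages
print(round(peak, 1))              # 770.0 W theoretical max
print(round(0.75 * peak, 1))       # 577.5 W "75% with good cooling" daily figure
```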


----------



## galsdgk

cenumis said:


> I believe the X Trio is 14 GPU power phase's @ 45A and 3 memory power phases separate. This would mean the GPU can handle 630w in total?
> 
> If the card is pulling at 500w maximum reading on GPUZ, I would imagine that is the full package and some of that 500w is going towards the memory as well or am I wrong about that?


Yes, 14 stages. Looks like 50 A [1-2]. Memory is rated at 1313 MHz.

Given voltage × amps × stages, I believe there's an acceptable range from 1.0 × 50 × 14 to 1.1 × 50 × 14, or 700W to 770W.

[1] MSI GeForce RTX 3090 Gaming X Trio Review
[2] https://www.onsemi.com/pub/Collateral/NCP302150-D.PDF


----------



## marti69

stryker7314 said:


> What kind of temps is the bykski getting, and is that 2 360mm rads?


Yes, it's dual 360 rads with a Bykski water block and liquid metal on the GPU. I get around 36-38°C in Port Royal; ambient room temp is 16-18°C, and it never exceeds 40°C during hours of gaming. Pump and fans at max, of course (1600 rpm).


----------



## marti69

itssladenlol said:


> I have the same Asus thor 1200w Power supply and 1000w bios, but my cable mod cables arent Extensions, they go directly in the Power supply and I got no problems.
> Seems Extensions do Problems on high Power draw.


I have used direct mod cables too. They don't drop as badly as extension cables, but they still droop a little: 3.312 V vs 3.29 V, and 5 V vs 4.96 V. Under heavy stress it's 3.29 vs 3.24 and 4.96 vs 4.8.


----------



## Zogge

marti69 said:


> yes is dual 360 rad bykski water block with liquid metal on gpu i get around 36 38c on port royal ambient temp on room is 16 to 18c and never exeed 40c during hours of gaming pump and fans at max ofc 1600rpm.


Like I mentioned earlier, I have Kryonaut only and get around a 14-15°C delta T with the 520W BIOS and the Bykski Strix + MP5. Slow fans and 27°C ambient give 30°C water and around 45°C GPU; 14°C ambient with max fans, open windows etc. gives 14.5-15°C water and around 29°C GPU for me.
I think the max I saw was 49°C GPU at 31.5°C water in a long stress test with fans at 800-900 rpm.


----------



## Pepillo

cenumis said:


> I believe the X Trio is 14 GPU power phase's @ 45A and 3 memory power phases separate. This would mean the GPU can handle 630w in total?
> 
> If the card is pulling at 500w maximum reading on GPUZ, I would imagine that is the full package and some of that 500w is going towards the memory as well or am I wrong about that?





galsdgk said:


> Yes 14 stages. Looks like 50 Amps [1-2]. Memory is rated at 1313 mhz.
> 
> Given voltage * amps * stages I believe there's an acceptable range from 1 * 50 * 14 to 1.1 * 50 * 14 or 700W to 770W.
> 
> [1] MSI GeForce RTX 3090 Gaming X Trio Review
> [2] https://www.onsemi.com/pub/Collateral/NCP302150-D.PDF


I'm curious, since I have a Trio with the 520W BIOS and the Bykski block. I have also read about the 14 phases plus 3 for memory, but in the photos you can clearly see that it's 14+4...


----------



## ViRuS2k

Hahah, I had a 3080 Gaming X Trio on back order with OCUK for ages; in the meantime I had a Gaming X Trio 3090 that I was using in my computer.
I just got shipping confirmation from OCUK. Seems I got a free upgrade to the MSI Suprim X 3080, and it has been shipped.

I will be taking out my 3090 when it arrives on Monday to test it. It's pretty good, as it has a higher power limit and OC boost out of the box, better cooling, dual BIOS, and it looks pretty.
If it overclocks higher than my stock 3090, I might just sell the 3090, make a profit, and keep the Suprim X.

Does anyone know the power phases on a Suprim X compared to the Gaming X Trio?


----------



## ViRuS2k

Pepillo said:


> I'm curious since I have a Trio with the bios of 520w and the Bykski block. I have also read about the 14 phases and 3 for memory, but in the photos you can clearly see that they are 14+4 ..........


Off topic: I have a pretty similar setup to yours, 2× 360 rads (CoolStream XT versions), and I bought the waterblock also. What temps do you get when max overclocked or stock?
And are you running those 2× 360s on your CPU as well?
I need to know because something is screwy with my system at the minute. I seem to be getting really high Ryzen 5950X temps, and this is without a GPU added yet; plus my radiators are not even warm for some reason lol.


----------



## martinhal

I have a water-to-GPU delta of 10°C when running the Time Spy stress test or FS 2020, on the 390W BIOS with a Palit and an EK block. I presume that's OK?


----------



## Pepillo

ViRuS2k said:


> Off topic i have a pretty simular setup as you 2x 360 rads cool stream XT versions and and i bought the waterblock aslo whats the temps you get when maxed overclocked or stock
> and are you running those 2x 360`s on your cpu also ?
> need to know cause something is screwy with my ssytem at the minute i seem to be getting really high ryzen 5950x temps and this is without a gpu added yet + my radiators are not even warm for some reason lol


Well, I have two 360 EK PEs with a distro plate and a lot of 90° angles, with a 3.2 DDC for the CPU block and the 3090. The fans are Lian Li Uni Fans at about 1,000-1,200 rpm. The result is a water temperature delta of about 12-15°C and another 10-12°C on the GPU; under high load with the 520W BIOS I stay at 50°C maximum.


----------



## aagerius

Just installed the EK block and custom loop. Trying out the 1000W BIOS at a 60% power limit and keeping it at stock voltages.
At least there is less coil whine on my TUF 3090 OC. It seems I'm now limited by my 750W Seasonic Connect PSU, but I'm quite happy with the results.
+200 core and +1180 MHz memory, stable.

I scored 14 245 in Port Royal
Intel Xeon W-1290 Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

I scored 20 003 in Time Spy
Intel Xeon W-1290 Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## GAN77

ViRuS2k said:


> does anyone know the power phases on a suprimx compaired to the gaming x trio ?


For GPU voltage, a 17-phase VRM configuration. 
MSI is using OnSemi NCP303151A DrMOS chips throughout the VRM capable of 50 A.
Memory voltage uses a 4-phase design and is generated by a UPI uS5650Q.


----------



## ViRuS2k

GAN77 said:


> For GPU voltage, a 17-phase VRM configuration.
> MSI is using OnSemi NCP303151A DrMOS chips throughout the VRM capable of 50 A.
> Memory voltage uses a 4-phase design and is generated by a UPI uS5650Q.


Nice, so it's higher than the Trio X version.


----------



## des2k...

ViRuS2k said:


> Off topic i have a pretty simular setup as you 2x 360 rads cool stream XT versions and and i bought the waterblock aslo whats the temps you get when maxed overclocked or stock
> and are you running those 2x 360`s on your cpu also ?
> need to know cause something is screwy with my ssytem at the minute i seem to be getting really high ryzen 5950x temps and this is without a gpu added yet + my radiators are not even warm for some reason lol


Because it's a 7nm CPU and the surface isn't flat, the only way to get good temps is to lap the IHS and water block. On my 3900X, stock was hitting 95°C at the start of Prime95 lol.

Just a quick lap (I still have traces of the nickel plating), but they are flat now; PBO at 200W in Prime95 doesn't go above 80°C with Noctua NT-H2 paste.


----------



## ViRuS2k

Pepillo said:


> Well, I have two 360 EK PEs with a distro plate and a lot of 90º angles with a 3.2 DDC for the CPU block and the 3090. The fans are the Lian Li Unifan at about 1,000-1,200 rpm. The result is a delta in the water temperature of about 12º-15º and another 10º-12º on the GPU, under high load with the bios of 520w I go at 50º maximum:


Nice 
I'm using the same case and distro plate with higher-spec radiators (XT versions) and Lian Li fans, push-pull on the bottom rad and push on the top rad. The GPU block is the same one; mine should be here next week, so I will definitely be happy with anything under 50°C on the GPU, as I lose too many GPU bins if I go over 55°C lol. Though I do have a powerful 5950X that generates so much heat it's unreal lol.

In fact, I think my CPU block is not mounted properly, even though I fully tightened the screws all the way down.

It's an EK Velocity block, and when I set the motherboard power limit and 4.6 GHz all-core @ 1.23V, temps under load can reach 85°C. But the 85°C happens instantly: as soon as I press the Cinebench button, temps go from 46°C to 85°C within 5 seconds. Does that mean the block is not on right? Shouldn't there be a slow increase, not a jump like that, or am I missing something?
The CPU under load is drawing around 240W.


----------



## Pepillo

I know what you're talking about; the 7900X at 4,800 MHz easily goes near 300W.


----------



## Gebeleisis

des2k... said:


> Because it's a 7nm cpu and not flat surface, the only way to get good temps is to lap the ihs and water block. On my 3900x, stock was hitting 95c on start of prime95 lol
> 
> Just a quick lap(I still have traces of nickel plating) but they are flat now, PBO 200w on Prime95 doesn't go higher than 80c with noctua nt-h2 paste.


Do you have any pictures of how it looked when lapped?


----------



## Dreams-Visions

Would someone mind linking that 1000W BIOS? Getting ready to put my X Trio under water (Bykski) and I'd like to try it at about 60%.

tyvm


----------



## Pepillo

Dreams-Visions said:


> Would someone mind linking that 1000W bios? Getting ready to put my XTrio under water (Bykski) and I'd like to try it at about 60%.
> 
> tyvm


Honestly, I don't recommend that BIOS. I have the same card, and the 520W Kingpin is more than enough without taking the unnecessary risks of that 1,000W BIOS with no temperature protections, which doesn't even perform well in comparison. If you still want it, here it is:

VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp


----------



## Gebeleisis

Pepillo said:


> Honestly, I don't recommend that bios. I have the same card and the 520w of the Kipping is more than enough without taking the unnecessary risks of those 1.000w without temperature protections that above do not perform well in comparison. If you still want it, here it is:
> 
> VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp


Can you link the 520w bios please?


----------



## Pepillo

Gebeleisis said:


> Can you link the 520w bios please?


VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp


----------



## Gebeleisis

Pepillo said:


> VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp


Thank you!


----------



## Pepillo

The 520W Kingpin BIOS is only suitable with liquid cooling; on air it doesn't control the fans well (the center fan only goes up to a maximum of 2,000 rpm). If you're on air, the 500W is better:

VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp


----------



## Gebeleisis

I have a 2×8-pin card, so I guess I'll stick to the 390W BIOS for now.
I'm on an AIO atm, so cooler than air.


----------



## jura11

Gebeleisis said:


> i have a 2 pin card so i guess i'll stick to the 390w bios for now.
> on aio atm so cooler than air


For a 2×8-pin GPU, the 390W BIOS is the safest and best BIOS; if you are under water, I would still use the XOC BIOS with extra caution.

Hope this helps.

Thanks, Jura


----------



## Gebeleisis

jura11 said:


> For 2*8-pin GPU 390W BIOS is safest and best BIOS, if you are under water then XOC I would use with extra caution there
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the input.

I like the 390W, so I will just keep it.


----------



## jomama22

ViRuS2k said:


> Nice
> im using the same case and distro plate and higher spec radiators XT versions and Lian Li fans push pull on the bottom rad and push on the top rad, the gpu block is the same one i have that should be here next week so i will defiantly be happy with anything under 50c on the gpu as i loose to meny gpu bins if i go over 55c lol though i do have a powerfull 5950x that generates so much heat its unreal lol
> 
> infact i think my cpu block is not on properly even though i fully tightened the screws all the way down
> 
> EK velocity block and when i set motherboard powerlimit and 4.6ghz all core @1.23v temps under load can reach 85c  but the 85c happens instantly like as soon as i press the cinebench button temps go from 46c to 85c within 5 seconds  does that mean the block is not on right cause shouldnt there be a slow increse not a jump like that or am i missing something.
> cpu under load is drawing around 240w


Something seems wrong here. My 5950X is on its own loop: 2 HWLabs GTX 360s, push-pull fans, MCP35X2. Running 4.825 all-core @ 1.325V under load, I'll see a max CPU temp of 68°, [email protected]° [email protected]° running Cinebench. This is with ambient around 66°F.


----------



## kx11

Do you guys think a 240mm rad is enough for a 3090 ROG OC? I won't OC it a lot; I just want silent operation.


----------



## sultanofswing

kx11 said:


> do you guys think a 240mm rad is enough for 3090 ROG OC gpu ? i won't OC it a lot i just want a silent operation


For silent operation you'll want more rad. 360 minimum.


----------



## geriatricpollywog

kx11 said:


> do you guys think a 240mm rad is enough for 3090 ROG OC gpu ? i won't OC it a lot i just want a silent operation


Yes, in the sense that the FTW3 Hybrid comes with a 240mm AIO, so 240mm on a custom loop is plenty; but it won't be silent unless you sacrifice performance. Go with at least a 280mm.


----------



## kx11

Thnx guys


----------



## BiLLbOuS

Hey, was wondering: using the shunt mod calculator, I'm looking to add 7× 8 mΩ shunts, painted or glued, on top of the existing 5 mΩ ones on the 3090 Strix. Just double-checking that this should give me 780W. Anyone here who has shunted theirs know any different?

thanks
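For reference, a quick sketch of the stacking math behind that 780W figure: standard parallel-resistor math, assuming a 480W stock limit and the usual shunt-mod reasoning that the card scales its power reading by the assumed shunt resistance (illustrative numbers, not a measured spec):

```python
def parallel_mohm(r1, r2):
    # effective resistance of two shunts stacked in parallel
    return r1 * r2 / (r1 + r2)

def real_power_w(reported_w, r_stock, r_added):
    # The card still assumes the stock shunt value, so the true draw is the
    # reported value scaled by stock / effective resistance.
    return reported_w * r_stock / parallel_mohm(r_stock, r_added)

# 8 mOhm stacked on the stock 5 mOhm shunts, against a 480 W stock limit
print(round(real_power_w(480, 5.0, 8.0)))  # 780
```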


----------



## Falkentyne

BiLLbOuS said:


> Hey was wondering was using the shunt mod calculator looking to add 7x 8ohm shunts painted or glued to the existing 5ohm's on the 3090 strix, just double checking should give me 780w, anyone here shunted theirs know any different?
> 
> thanks


It may not give you the full 780W, because there are internal power limits that are not accessible via shunts (only by a BIOS mod, which is currently not possible). So while you may get close to it under certain load conditions, you might wind up hitting an unknown throttling flag from somewhere when you exceed 550W. This depends purely on your vBIOS, and it's quite possible, considering it's a Strix, that the power limits are set high enough.

I don't know enough about the vBIOS to say whether these are all of the internal limits or if there are others, so I can't tell you what will happen.
What I can tell you is that on the 3090 Founders, the 8-pin power limits seem to get "ignored" and the 8-pins use the SRC power limits (SRC1 and SRC2) for each 8-pin. And even though there is an 86.2W max PCIe slot limit, it seems to hard-throttle at 79.9W (if you don't mod the slot shunt), causing MVDDC to somehow skyrocket (which makes it completely unclear whether it's MVDDC or the PCIe slot triggering the actual throttle!). This is all in addition to the common "TDP" limit, which is simply the SUM of all the 8-pins plus slot power.

tl;dr: Go ahead and stack the 8 mΩ shunts. BEFORE STACKING THEM, run something like the Heaven benchmark with Vsync on at a 60Hz refresh rate (or cap the FPS to 60) and record the GPU power draw shown in HWiNFO64 or GPU-Z. This is important so you don't hit the total board power limit; it gives you a control variable to compare against.

After stacking, run the same benchmark capped at 60Hz / 60 FPS and note the % reduction in GPU board power draw. That reduction will tell you how "effective" your shunt mod is. Also look at the individual power rails (8-pin #1, #2, #3 and slot power).

Then try running something at higher TDP, or uncapped, set your power limit, and see if you get power-limit throttling BEFORE you reach the total "raw" power limit (example: 480W reported to GPU-Z). If you get power-limit throttling at, say, 400W reported to GPU-Z, look in HWiNFO64 at "TDP%" and "TDP Normalized %" and post a screenshot of the Normalized vs TDP values.

This will tell us if a hidden rail is limiting you from reaching "max" TDP.


----------



## BiLLbOuS

Awesome, will do. I have 7s, 9s, 10s, and 12s, but will start with the 8s: just paint them on and try this. So there's no real way to monitor actual GPU power without a calculator or watching wattage from a power meter? Also, after the shunt mod, how will I set my power limit with Afterburner?

thanks for the reply


----------



## ViRuS2k

jomama22 said:


> Somthing seems wrong here. My 5950x is on its own loop. 2 hwlabs gtx 360s, push pull fans, mcp35x2. Running 4.825 all core @ 1.325v under load, I'll see a max cpu temp of 68*, [email protected]* [email protected]* running cinabench. This is with ambiant around 66*f.


Do you have the motherboard power limit selected so that the CPU draws as much power as possible?
I stuck some liquid metal on my 5950X and it dropped temps by about 8°C at max load, so I'm down to 75°C max. Still not as great as I wanted, but I think this is normal with the motherboard limit set, or I could be wrong :/ This is also at 1.25V-1.35V.


----------



## Spiriva

kx11 said:


> do you guys think a 240mm rad is enough for 3090 ROG OC gpu ? i won't OC it a lot i just want a silent operation


One 360 would prolly put your 3090 at around 65°C or so. I would get two 360 rads and run the fans slower.


----------



## khunpunTH

Spiriva said:


> One 360 would prolly put your 3090 at around 65C or such. I would get two of 360 rads and run the fans slower.


I ran Valhalla at 4K, 2085 MHz / mem +1000, under 39°C, with one medium-thick EK PE 360. No chilled water; room temp 27°C.
Delta between room and water is 4°C.
Delta between water and GPU is 8°C.
The key is the Distrocase 350P: an open case with a huge distro plate that acts like a radiator. Also a Barrow water block, which I think compares to, or betters, EK and Bykski.


----------



## Spiriva

khunpunTH said:


> The key is distrocase 350p. open case with hugh distroplate that act like radiator. also barrow water block that i think can be compare or better than ek and byski.


I would guess most ppl who want "silent operation" don't have an open case, tho.


----------



## WilliamLeGod

khunpunTH said:


> I ran vahalla 4k 2085mhz/ mem+1000 in under 39c with one medium thick ek pe360. no chilled water , room temp 27c.
> delta room and water is 4c.
> delta water and gpu is 8c.
> The key is distrocase 350p. open case with hugh distroplate that act like radiator. also barrow water block that i think can be compare or better than ek and byski.
> 
> View attachment 2474536


Valhalla uses very low GPU power (all AC titles do, to be exact)


----------



## cstkl1

WilliamLeGod said:


> Valhalla uses very low GPU power (all AC titles do, to be exact)


FC New Dawn and FC5 as well.

Only RDNA2 runs them at 99% usage.


----------



## nievz

Pepillo said:


> The 520W Kipping is only useful with liquid cooling; on air it does not control the fans well, the center fan only runs at a maximum of 2,000 RPM. If you're on air, the 500W is better:
>
> VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp


Why do you use the FTW3 Ultra normal BIOS instead of the OC one over here: EVGA RTX 3090 VBIOS?


----------



## jomama22

khunpunTH said:


> I ran vahalla 4k 2085mhz/ mem+1000 in under 39c with one medium thick ek pe360. no chilled water , room temp 27c.
> delta room and water is 4c.
> delta water and gpu is 8c.
> The key is distrocase 350p. open case with hugh distroplate that act like radiator. also barrow water block that i think can be compare or better than ek and byski.
> 
> View attachment 2474536


You are only pulling 287W... everyone will have low temps with that, lmao.

Max the power limit and run Time Spy GT2 on a loop (the second graphics test, custom run). Then let us know.

And no, a distro plate doesn't act like a radiator...


----------



## Pepillo

nievz said:


> Why do you use the FTW3 Ultra normal BIOS instead of the OC one over here EVGA RTX 3090 VBIOS ?


No, I don't use those; I use the Kipping 520W because I have a Bykski water block.


----------



## nievz

Pepillo said:


> No, I don't use those, I use the Kipping 520w because I have a Bykski water block.


Have you tried either in the past? Which one should I use, normal or OC?


----------



## Pepillo

nievz said:


> Have you tried either in the past? Which one should i use, normal or OC?


Yes, I used the 500W beta they put on the official EVGA forums, but since I mounted the block I moved on to the 520W Kipping. I guess any of the 500W BIOSes on TechPowerUp will work just as well.


----------



## khunpunTH

WilliamLeGod said:


> Valhalla uses very low GPU power (all AC titles do, to be exact)





jomama22 said:


> You are only pulling 287w.....everyone will have low temps with that lmao.
> 
> Max power limit and go run time spy gt2 on a loop (the second graphics test, custom run). Then let us know.
> 
> And no, a distro plate doesn't act like a radiator....


OK. I pulled 520W for 15 minutes: max temp 47°C, room temp 26°C, water max temp 33°C.










And for the TS GT2 custom loop, 15 minutes: max temp 48°C, max freq 2160 MHz, mem +1000, 520W power.


----------



## WilliamLeGod

khunpunTH said:


> OK. I pulled 520W for 15 minutes: max temp 47°C, room temp 26°C, water max temp 33°C.
> 
> View attachment 2474554


100% fan tho


----------



## khunpunTH

WilliamLeGod said:


> 100% fan tho


100% fan? Lmao. I use a water block. Don't tell me you think 0% or 100% GPU fan will make any difference; the plug is not even connected. Please see: the fan is at 0 RPM.


----------



## gfunkernaught

I'm seeing Bykski and Alphacool blocks available for the 3090 X Trio. I have one 360mm rad (pull) and another 240mm rad (push/pull). Should I wait for the EKWB blocks for this card? Anyone have good/bad experience with the existing blocks? I would really like to wait for the kryographics block for the Trio, since they seem to be the best blocks, but I don't see any prospects for those just yet. I also want to put one of those 500W BIOSes on the card after I drop a block on it.


----------



## Pepillo

gfunkernaught said:


> I'm seeing Bykski and Alphacool blocks available for the 3090 X Trio. I have one 360mm rad (pull) and another 240mm rad (push/pull). Should I wait for the EKWB blocks for this card? Anyone have good/bad experience with the existing blocks? I would really like to wait for the kryographics block for the Trio, since they seem to be the best blocks, but I don't see any prospects for those just yet. I also want to put one of those 500W BIOSes on the card after I drop a block on it.


Happy with the Bykski; it's a plus for me to be able to keep the original backplate. Delta temperature between 10°-15°C, which is fine in my opinion.


----------



## LuckyTheWolf

Hey, I just got my 3090. It's a Gaming X Trio. For right now I am gonna keep it on air. Are there any suggested BIOSes for this card on air? Honestly just looking for something stable for everyday use.


----------



## jomama22

khunpunTH said:


> OK. I pulled 520W for 15 minutes: max temp 47°C, room temp 26°C, water max temp 33°C.
> 
> View attachment 2474554
> 
> 
> And for the TS GT2 custom loop, 15 minutes: max temp 48°C, max freq 2160 MHz, mem +1000, 520W power.
> 
> View attachment 2474562


That's more in line with what's expected.

Btw, FurMark doesn't really work well as a test since it's limited by the NVIDIA driver (it's detected as a power virus), hence why the Time Spy test resulted in slightly higher temps.


----------



## Gebeleisis

How can I check the temps of the memory chips?


----------



## Canson

Why is my 3090 Suprim X downclocking and hitting the power limit under 'PerfCap Reason'? The power slider maxes at 107%, which should be 450W, right?

I'm trying to figure out why it can't stay at the 2040 MHz I locked in at 0.950V. Any tips, guys? It goes all the way down to 1965-2025 MHz.

Cyberpunk 2077 running in the background as a benchmark.


----------



## Zogge

Monitoring software, with quiet mode on the radiator fans. Full speed gives a <1°C delta T.


----------



## ALSTER868

Gebeleisis said:


> How can I check the temps of the memory chips?


By installing a temp sensor between the rad and the mem chip. You can also use a pyrometer pointed at the naked chip.
AFAIK only EVGA cards have software memory temp monitoring.


----------



## SoldierRBT

Canson said:


> Why is my 3090 Suprim X downclocking and hitting the power limit under 'PerfCap Reason'? The power slider maxes at 107%, which should be 450W, right?
>
> I'm trying to figure out why it can't stay at the 2040 MHz I locked in at 0.950V. Any tips, guys? It goes all the way down to 1965-2025 MHz.
>
> Cyberpunk 2077 running in the background as a benchmark.


It's hitting the PL; that's why it's downclocking. I'm not sure why it's doing it before hitting 430W; it could be temperature related. You can try the EVGA 450W BIOS for more headroom, and lower the temp below 65°C to maintain higher average clocks.


----------



## jomama22

Canson said:


> Why is my 3090 Suprim X downclocking and hitting the power limit under 'PerfCap Reason'? The power slider maxes at 107%, which should be 450W, right?
>
> I'm trying to figure out why it can't stay at the 2040 MHz I locked in at 0.950V. Any tips, guys? It goes all the way down to 1965-2025 MHz.
>
> Cyberpunk 2077 running in the background as a benchmark.


Because you're hitting a power limit somewhere else, whether it be the 8-pin, memory, or whatever. The "450W" is just the total board power allowed if no other limit is reached first.

Without looking at the power limits of the BIOS in ABE, I would guess that the 150W limit on 8-pin #1 is what is capping it.

This goes for every single BIOS out there. The "450W", "480W", "500W", etc. is just the total board power allowed if no other limit is hit first. If the 8-pins are limited to 150W in the BIOS, it doesn't matter at all what the total board power limit is if you are hitting that first. The same goes for memory, SRC, and the PCIe slot.

I should add that there is always going to be an imbalance of power across the 8-pins and slot, as they have different power roles on the card. If a game or benchmark is memory heavy, the pin(s) used for that will get used more. If it's core/physics heavy, the pin(s) used for that will get used more.

A good example of this is to go into GPU-Z and run the PCIe slot tester in full screen, then take a look at the sensors after about 30 seconds. You are going to hit a power limit, yet almost all the power will come from one pin.
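The per-rail behavior described above can be sketched as a toy model. The function and figures below are illustrative, not read from any real vBIOS (the 150W 8-pin and 90W memory caps echo the limits discussed in this thread):

```python
# Toy model of per-rail power limiting: the card throttles on whichever
# individual rail limit trips first, not only on the total board power.
def first_limit_hit(draw_w, limits_w):
    """Return the name of the first rail at/over its limit, or None."""
    for rail, draw in draw_w.items():
        if draw >= limits_w[rail]:
            return rail
    return None

limits = {"8pin_1": 150, "8pin_2": 150, "memory": 90, "slot": 66, "total": 450}
draw   = {"8pin_1": 140, "8pin_2": 135, "memory": 98, "slot": 50, "total": 423}

# Memory trips its 90 W cap at 98 W even though total board power (423 W)
# is still below the 450 W limit.
print(first_limit_hit(draw, limits))  # memory
```

This is why flashing a higher "total watt" BIOS often changes nothing: whichever individual rail limit you were tripping is still there.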


----------



## Gebeleisis

ALSTER868 said:


> By installing a temp sensor between the rad and the mem chip. You can also use a pyrometer pointed at the naked chip.
> AFAIK only EVGA cards have software memory temp monitoring.


Thanks.
I did search for it before, and it seems that NVIDIA is deliberately hiding this info.


----------



## Canson

SoldierRBT said:


> It's hitting PL that's why is downclocking. I'm not sure why is doing it before hitting 430W could be temperature related. You can try the EVGA 450W BIOS for more headroom and lower the temp below 65C to maintain higher avg clocks.


Yeah, I don't know why it's hitting the PL. Can it be because my memory might get hot with a +1000 MHz clock? My stock BIOS is 450W already, I believe. The EVGA 500W BIOS doesn't really give me any boost, and the 520W BIOS doesn't work with my fans.





jomama22 said:


> Because you're hitting a power limit somewhere else, whether it be the 8-pin, memory, or whatever. The "450W" is just the total board power allowed if no other limit is reached first.
>
> Without looking at the power limits of the BIOS in ABE, I would guess that the 150W limit on 8-pin #1 is what is capping it.
>
> This goes for every single BIOS out there. The "450W", "480W", "500W", etc. is just the total board power allowed if no other limit is hit first. If the 8-pins are limited to 150W in the BIOS, it doesn't matter at all what the total board power limit is if you are hitting that first. The same goes for memory, SRC, and the PCIe slot.


So is there a way I can check what power limits are in my Suprim X stock BIOS?


----------



## Pepillo

jomama22 said:


> Because you're hitting a power limit somewhere else, whether it be the 8-pin, memory, or whatever. The "450W" is just the total board power allowed if no other limit is reached first.
>
> Without looking at the power limits of the BIOS in ABE, I would guess that the 150W limit on 8-pin #1 is what is capping it.
>
> This goes for every single BIOS out there. The "450W", "480W", "500W", etc. is just the total board power allowed if no other limit is hit first. If the 8-pins are limited to 150W in the BIOS, it doesn't matter at all what the total board power limit is if you are hitting that first. The same goes for memory, SRC, and the PCIe slot.
>
> I should add that there is always going to be an imbalance of power across the 8-pins and slot, as they have different power roles on the card. If a game or benchmark is memory heavy, the pin(s) used for that will get used more. If it's core/physics heavy, the pin(s) used for that will get used more.
>
> A good example of this is to go into GPU-Z and run the PCIe slot tester in full screen, then take a look at the sensors after about 30 seconds. You are going to hit a power limit, yet almost all the power will come from one pin.


Interesting, I had also observed this problem, with even lower theoretical-power BIOSes giving better results. With the BIOS editor you can check those limits and compare them against GPU-Z values to try to find out what is applying the brake. Complicated, but not impossible.


----------



## jomama22

Canson said:


> Yeah, I don't know why it's hitting the PL. Can it be because my memory might get hot with a +1000 MHz clock? My stock BIOS is 450W already, I believe. The EVGA 500W BIOS doesn't really give me any boost, and the 520W BIOS doesn't work with my fans.
>
> So is there a way I can check what power limits are in my Suprim X stock BIOS?


Download ABE .06 (file on MEGA: mega.nz).

Get your Suprim X BIOS from GPU-Z (or TPU).

Read the .rom of the BIOS with ABE.

----------



## Canson

jomama22 said:


> Download ABE .06 (file on MEGA: mega.nz).
>
> Get your Suprim X BIOS from GPU-Z (or TPU).
>
> Read the .rom of the BIOS with ABE.


Thanks for the link; it took a while to be able to download it. Windows Defender was blocking it because it thought the file was a virus, so I had to disable Defender.

Does this tell you anything about my problem?


----------



## gfunkernaught

Pepillo said:


> Happy with the Bykski, it's a plus for me to be able to keep the original backplate. Delta temperature between 10º-15º, correct in my opinion.


How do you have your loop set up? Number/size of rads? Also, what paste did you use on the GPU?
What is the "Kipping 520w" BIOS? Is that code for Kingpin?


----------



## Pepillo

gfunkernaught said:


> How do you have your loop set up? # of rads/size? Also what paste did you use on the gpu?
> What is "kipping 520w" bios? Is that code for Kingpin?


The 520w Kipping bios:

VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp

Two EK 360 PEs with a distro plate in the O11 XL and a DDC 3.2 pump. EK Velocity on the CPU and Bykski on the GPU with Thermal Grizzly _Kryonaut_:


----------



## des2k...

Gebeleisis said:


> Thanks .
> I did search for it before and it seems that nvidia is deliberately hiding this info.


GDDR6X has a temperature sensor, and Igor's Lab has the internal tool that can read it! So yeah...

It should be public, but that's going to freak out people with FE cards that have memory at 105°C on the back modules.


----------



## GAN77

Canson said:


> Does this tell you anything with my problem?


Turn off the backlight and you'll see 450 watts.

Bitspower MSI Trio/Suprim


----------



## Falkentyne

Canson said:


> Thanks for the link; it took a while to be able to download it. Windows Defender was blocking it because it thought the file was a virus, so I had to disable Defender.
>
> Does this tell you anything about my problem?


Might be the VRAM PL. Min and Max are set to the same value, so it will ignore the power limit slider.
Very strange that the 3090 FE has a higher VRAM power limit than your Suprim X (and min/max are not set to the same value either).
I don't know if this is the answer, though, because, for example, I've exceeded the "Chip power limit" without throttling, and I've seen others exceed the VRAM (MVDDC) power limit as well.
We do know there is some sort of power balancing going on with the SRC chip, but no one knows how it works. The only thing known is, with shunt mods, shunting certain shunts (but not the SRC) will make the SRC report _higher_, not lower, and will cause strange power balancing; then shunting the SRC will drop the high SRC power draw and improve balancing someplace else.

However, I can tell you right now that your Suprim correctly has the 8-pin PLs set to the same value as their corresponding SRC power limits, so you won't reach an 8-pin PL until it reports 175W. The 3090 FE does not; its SRC limits are higher, and the SRC takes priority over the 8-pin rails (which are still 175W).

And some PLs shown in the editor get completely ignored (8-pin PLs below SRC get ignored) and others get overruled by something unknown (like PCIe Slot Power triggering a PL before it actually reaches the limit).


----------



## GAN77

Which BIOS is better? Both are for the MSI Suprim.


----------



## jomama22

Canson said:


> Thanks for the link; it took a while to be able to download it. Windows Defender was blocking it because it thought the file was a virus, so I had to disable Defender.
>
> Does this tell you anything about my problem?


Yeah, you're hitting the memory power limit. MVDDC = 98W in your screenshot; the limit is 90W.


----------



## Falkentyne

GAN77 said:


> Which BIOS is better? Both are for MSI suprim.
> 
> View attachment 2474581


You should have dual BIOS on your board, right?
Just switch to the backup BIOS, flash the second one, and test your game again with GPU-Z running.


----------



## gfunkernaught

Pepillo said:


> The 520w Kipping bios:
> 
> VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp
> 
> Two EK 360 PE with a distro plate on the 011XL and a DDC 3.2. EK Velocity on CPU and Bykski on GPU with Thermal Grizzly _Kryonaut_ :


Aren't there official 520W BIOSes from EVGA? Any reason you chose this "Kipping" BIOS over the official one?


----------



## motivman

Anyone with a shunt-modded Strix care to share your experience and benchmark results for Port Royal? Does your card still hit the power limit with the shunt mod? Thinking about selling my reference shunt-modded card for a Strix.


----------



## jomama22

motivman said:


> Anyone with a shunt-modded Strix care to share your experience and benchmark results for Port Royal? Does your card still hit the power limit with the shunt mod? Thinking about selling my reference shunt-modded card for a Strix.


No, I don't hit a power limit, both with just a single 8 mOhm on each and after adding a second one when I attached the EVC2 (making it a 4 mOhm effective shunt on each).

My current score before really railing it later this week/next weekend:

I scored 15 735 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Pepillo

gfunkernaught said:


> Aren't there official 520W BIOSes from EVGA? Any reason you chose this "Kipping" BIOS over the official one?


I don't know if there's an official one or not; I follow this forum and get the BIOSes right here and from the TechPowerUp collection. I don't see any need to visit each brand's official pages for this. Besides, all BIOSes are "official": if they are not signed, they do not work, by definition.


----------



## motivman

jomama22 said:


> No, I don't hit a power limit, both with just a single 8 mOhm on each and after adding a second one when I attached the EVC2 (making it a 4 mOhm effective shunt on each).
>
> My current score before really railing it later this week/next weekend:
>
> I scored 15 735 in Port Royal
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Thanks for the report, man. Is there a reason why you went down to 4 mOhm if you were not hitting a power limit with 8 mOhm? What waterblock are you using, if I may ask, and what is your water-to-GPU delta T?


----------



## Canson

GAN77 said:


> Turn off the backlight and see 450 watts.
> 
> Bitspower MSI Trio/Suprim
> 
> View attachment 2474580


not sure exactly what you mean? 



Falkentyne said:


> Might be VRAM PL. Min and Max are set to the same value so it will ignore the power limit slider.
> Very strange that the 3090 FE has a higher vRAM power limit than your Suprim X (and min/max are not set to the same value either).
> I don't know if this is the answer, though, because for example, I've exceeded the "Chip power limit" without throttling, and I've seen others exceed the VRAM (MVDDC) power limit also.
> We do know there is some sort of power balancing going on with the SRC chip, but no one knows how it works. The only thing known is, with shunt mods, shunting certain shunts (but not the SRC) will make the SRC report _higher_, not lower, and will cause strange power balancing, and then shunting the SRC will drop the high SRC power draw and improve balancing someplace else.
> 
> However i can tell you right now that your Suprim correctly has the 8 pin PL's set to the same value as its corresponding SRC power limit, so you won't reach an 8 pin PL until it reports 175W. The 3090 FE does not; the SRC limits are higher and the SRC takes priority over the 8 pin rails (which is still 175W).
> 
> And some PL's shown in the editor get completely ignored (8 pin PL's being below SRC gets ignored) and others get overruled by something unknown (like PCIE Slot Power triggering a PL before it actually reaches the limit)


Hi Falken, nice to see you around here. I've seen you on the Z390 Aorus forum and the DRAM memory forum as well.
This forum needs more people like you; you help so many people and come up with so much helpful information.
We all appreciate it, really.

I think my problem is related to GPU VRAM, power limit or thermal. As soon as I remove my +1000 MHz, my clock stays stable at 2040 MHz, no throttle.



jomama22 said:


> Yeah, you're hitting the memory power limit. MVDDC = 98W in your screenshot; the limit is 90W.


jomama22, my man, you definitely got a point here. It has to be the power limit; as soon as I remove my +1000 MHz mem clock, the core clock locks at 2040 MHz with no throttle.
I wonder what the VRAM power limit is for other BIOSes, since I haven't seen other people experience the same issue as me.


----------



## Falkentyne

motivman said:


> Thanks for the report, man. Is there a reason why you went down to 4 mOhm if you were not hitting a power limit with 8 mOhm? When I finally get my hands on a Strix, I plan to shunt with 5 mOhm and use the Phanteks waterblock. What waterblock are you using, if I may ask, and what is your water-to-GPU delta T?


You have the Palit card, right?
The Palit seems to have low "AUX" power limits, but no one seems to know what they are for.
The Strix seems to have jacked-up GPU Chip and AUX power limits.


----------



## Falkentyne

Canson said:


> Not sure exactly what you mean?
>
> Hi Falken, nice to see you around here. I've seen you on the Z390 Aorus forum and the DRAM memory forum as well.
> This forum needs more people like you; you help so many people and come up with so much helpful information.
> We all appreciate it, really.
>
> I think my problem is related to GPU VRAM, power limit or thermal. As soon as I remove my +1000 MHz, my clock stays stable at 2040 MHz, no throttle.
>
> jomama22, my man, you definitely got a point here. It has to be the power limit; as soon as I remove my +1000 MHz mem clock, the core clock locks at 2040 MHz with no throttle.
> I wonder what the VRAM power limit is for other BIOSes, since I haven't seen other people experience the same issue as me.


You have to look at a vBIOS dump of each card to find out.
Looks like the Strix has massive VRAM and Chip power limits (162W and 640W). But AUX is still lower than the FE's (and no one knows what AUX does).


----------



## gfunkernaught

Pepillo said:


> I don't know if there's an official or not, I follow the forum and I'll get the bios right here and the Techpowerup collection. I don't see any need to visit each brand's official pages for this. In addition, all bios are "official", if they are not signed they do not work by definition.


Well, by official I meant verified, the way TechPowerUp filters them.

So it looks like the consensus is that the 500-520W EVGA BIOSes will work fine on the MSI Trio.


----------



## Canson

Falkentyne said:


> You have to look at a vBIOS dump of each card to find out.
> Looks like the Strix has a massive VRAM and Chip power limit (162W and 640W). But AUX is still lower than FE (and no one knows what AUX does).




Here are 3 different BIOSes for the Suprim X. The left one is mine (P-mode BIOS, not found on the TechPowerUp website); the middle and right ones are from TechPowerUp.
I will test them, since they have a higher VRAM power limit, and see if I notice any difference in my Cyberpunk 2077 benchmark with a +1000 memory clock.


----------



## Falkentyne

Canson said:


> Here are 3 different BIOSes for the Suprim X. The left one is mine (P-mode BIOS, not found on the TechPowerUp website); the middle and right ones are from TechPowerUp.
> I will test them, since they have a higher VRAM power limit, and see if I notice any difference in my Cyberpunk 2077 benchmark with a +1000 memory clock.


Try that and then report your results.
If you don't get favorable results, try the Strix OC vBIOS and see if that helps. You can always flash back, since you should have dual BIOS.


----------



## jomama22

Canson said:


> Here are 3 different BIOSes for the Suprim X. The left one is mine (P-mode BIOS, not found on the TechPowerUp website); the middle and right ones are from TechPowerUp.
> I will test them, since they have a higher VRAM power limit, and see if I notice any difference in my Cyberpunk 2077 benchmark with a +1000 memory clock.


You don't need to link the photos from an external website... just drag and drop them from your PC so they show up in full res in the thread.


----------



## des2k...

Canson said:


> Here are 3 different BIOSes for the Suprim X. The left one is mine (P-mode BIOS, not found on the TechPowerUp website); the middle and right ones are from TechPowerUp.
> I will test them, since they have a higher VRAM power limit, and see if I notice any difference in my Cyberpunk 2077 benchmark with a +1000 memory clock.


Well, on my 2x8-pin, the original vBIOS memory uses 100W; if I add a +200 offset it adds 3W to that.

To me it looks like this AIB is literally following the NVIDIA design sheet, which makes no sense; I have not seen a 3090 use only 60W for memory in AAA games!

*Total Graphics Power (TGP): 350 Watts*
- 24 GB GDDR6X Memory (GA_0180_P075_120X140, 2.5 Watts per module): -60 Watts
- MOSFET, Inductor, Caps NVDD (GPU Voltage): -26 Watts
- MOSFET, Inductor, Caps FBVDDQ (Framebuffer Voltage): -6 Watts
- MOSFET, Inductor, Caps PEXVDD (PCI Express Voltage): -2 Watts
- Other Voltages, Input Section (AUX): -4 Watts
- Fans, Other Power: -7 Watts
- PCB Losses: -15 Watts
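Summing that design sheet, the overhead items account for 120 W of the 350 W reference TGP, which leaves roughly 230 W for the GPU core rail. A quick check, with the figures copied from the sheet above:

```python
# NVIDIA's 350 W reference TGP budget from the design sheet above:
# everything not listed as overhead is left for the GPU core rail.
tgp_w = 350
overheads_w = {
    "GDDR6X (24 GB, 2.5 W per module)": 60,
    "NVDD VRM (GPU voltage)":           26,
    "FBVDDQ VRM (framebuffer)":          6,
    "PEXVDD VRM (PCIe)":                 2,
    "AUX / input section":               4,
    "fans, other power":                 7,
    "PCB losses":                       15,
}
core_budget_w = tgp_w - sum(overheads_w.values())
print(core_budget_w)  # 230 W left for the core at reference TGP
```

Which also shows why the 60 W memory figure matters: every watt the GDDR6X draws beyond the sheet comes straight out of the core's budget at a fixed TGP.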


----------



## zkareemz

Canson said:


> Here are 3 different BIOSes for the Suprim X. The left one is mine (P-mode BIOS, not found on the TechPowerUp website); the middle and right ones are from TechPowerUp.
> I will test them, since they have a higher VRAM power limit, and see if I notice any difference in my Cyberpunk 2077 benchmark with a +1000 memory clock.


I'd appreciate it if you could share the original Suprim X BIOS with me, as I forgot to back up mine before flashing.


----------



## Canson

Falkentyne said:


> Try that and then report your results.
> If you don't get favorable results, try the Strix OC vbios and see if that helps. You can always flash back since you should have dual bios.


Just tried the Suprim X MSI RTX 3090 VBIOS: rock stable at 2040 MHz in Cyberpunk 2077 running more than 30 minutes in the background as a benchmark, with +1000 MHz on memory. No core throttle.

Not sure if I'd get any benefit from the Strix BIOS for an additional 30W? My temps are already high at 0.950V @ 2040 MHz.











jomama22 said:


> You don't need to link the photos from an external website...just drag and drop them from your pc so the show up in full res in the thread.


I didn't know that, thanks for the info. Like this, right?












zkareemz said:


> I'd appreciate it if you could share the original Suprim X BIOS with me, as I forgot to back up mine before flashing.


I can upload it for you if you want, but to be honest I don't recommend it. As you can see, people here just helped and found out for me that my BIOS is VRAM power limited compared to other BIOSes.

So if you want to overclock your memory like I did, then you should use this MSI RTX 3090 VBIOS (this one is the Performance BIOS).
It's still the same Suprim X BIOS as mine, just with a higher VRAM power limit. I don't know why they changed it on the new batches.


----------



## GAN77

Canson said:


> not sure exactly what you mean?


Turn off the RGB backlight of the video card.


----------



## Canson

GAN77 said:


> Turn off the RGB backlight of the video card.


why?


----------



## GAN77

Canson said:


> why?


I noticed that the backlight reserves 15-20 watts. I turned it off and got the full 450 watts in monitoring.


----------



## Falkentyne

Canson said:


> Just tried the Suprim X MSI RTX 3090 VBIOS: rock stable at 2040 MHz in Cyberpunk 2077 running more than 30 minutes in the background as a benchmark, with +1000 MHz on memory. No core throttle.
>
> Not sure if I'd get any benefit from the Strix BIOS for an additional 30W? My temps are already high at 0.950V @ 2040 MHz.
> View attachment 2474592
>
> I didn't know that, thanks for the info. Like this, right?
>
> View attachment 2474593
>
> I can upload it for you if you want, but to be honest I don't recommend it. As you can see, people here just helped and found out for me that my BIOS is VRAM power limited compared to other BIOSes.
>
> So if you want to overclock your memory like I did, then you should use this MSI RTX 3090 VBIOS (this one is the Performance BIOS).
> It's still the same Suprim X BIOS as mine, just with a higher VRAM power limit. I don't know why they changed it on the new batches.


Thanks for confirming that the VRAM power limit actually matters in the vBIOS.
@bmgjet will be happy to know this information.

So here is what I can confirm:

1) 8-pin Power Limit 1 and Power Limit 2 (and PL3) are useless except for the TDP limit. All they are used for is calculating TDP by adding them to Slot Power. Default 8-pin 1 + 8-pin 2 + (8-pin 3 if available) + Slot Power should equal the default TDP limit, if the BIOS values weren't written by monkeys. (Disclaimer: this may not apply to all vBIOSes; it does apply to the Founders Edition.)

2) 8-pin PL1/PL2/PL3 seem to use the corresponding SRC power limit values, and Input Power Plane Source Power (SRC) has its own single power draw and shunt. (Disclaimer: this may not apply to all vBIOSes. It's unknown whether some use the 8-pin values and then throttle, or whether some use the SRC values instead. Take note that the number of SRC power limits equals the number of 8-pin inputs.)

3) PCIe Slot Power has a strange link to MVDDC: if MVDDC is shunt modded and Slot Power is NOT shunt modded, MVDDC skyrockets (and you get throttled).
4) GPU Chip Power seems to be able to exceed the BIOS limit (when, why, and by how much, no one knows).
5) No one on the planet knows what the AUX power limits do. May be related to people getting throttling in Superposition 4K custom with extreme shaders, or massive throttling in Path of Exile with GI+Shadows set to Ultra at all max settings (but not if set to "High").
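Point 1 above can be expressed as a one-line sanity check. The figures below are illustrative: the 128 W example is made up, and the 121.5 W / "86 W" pair echoes the Strix v1 numbers quoted elsewhere in this thread:

```python
# Sanity check for point 1: on a sanely written vBIOS, the default per-input
# limits should sum to the default TDP. All values here are illustrative.
def tdp_consistent(pin_limits_w, slot_w, tdp_w):
    return sum(pin_limits_w) + slot_w == tdp_w

# A 3x8-pin card with 128 W per pin and 66 W slot power adds up to 450 W:
print(tdp_consistent([128, 128, 128], 66, 450))  # True
# The Strix v1 figures quoted in this thread (121.5 W per pin + "86 W" slot)
# do NOT add up to its 480 W board limit:
print(tdp_consistent([121.5, 121.5, 121.5], 86, 480))  # False
```

When the check fails, the labeled "max board power" is effectively unreachable, because the per-input limits will always trip first.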


----------



## Canson

GAN77 said:


> I noticed that the backlight reserves 15-20 watts. I turned it off and got a full 450 watts in monitoring.


I had a problem with the VRAM power limit and fixed it by using another BIOS, so I am fine; I don't need to disable my backlight.



Falkentyne said:


> Thanks for confirming that VRAM power limit actually matters in the vBIOS.
> @bmgjet will be happy to know this information.
> 
> So here is what I can confirm:
> 
> 1) 8 pin Power Limit 1 and Power Limit 2 (and PL3) are useless except for TDP limit. All they are used for is calculating TDP by adding it with Slot Power. Default 8 pin 1+8pin2+(8pin 3 if available)+Slot power should equal TDP default limit if the BIOS values weren't written by monkeys. (disclaimer: this may not apply to all vBioses. does apply to Founder's Edition)
> 
> 2) 8 pin PL1/PL2/PL3 seem to use the corresponding SRC power Limit values, and Input Power Plane Source Power (SRC) has its own single power draw and shunt.
> (disclaimer: This may not apply to all vbioses. It's unknown if some use the 8 pin values then throttle, or if some use the SRC values instead. Take note that the # of SRC power limits equal the # of 8 pin inputs).
> 
> 3) PCIE Slot Power has a strange link to MVDDC if MVDDC is shunt modded and Slot Power is NOT shunt modded--MVDDC skyrockets (and you get throttled).
> 4) GPU Chip Power seems to be able to exceed the Bios limit (when, why and by how much, no one knows).
> 5) No one on the planet knows what the AUX power limits do. May be related to people getting throttling in Superposition 4k--custom--extreme shaders, or massive throttling in Path of Exile with GI+Shadows set to Ultra, at all max settings (but not if set to "High").


Yes, the VRAM power limit matters in the vBIOS, because the core frequency never throttled for me with this other BIOS I'm using now: my +1000 MHz memory overclock never reached the 103W VRAM power limit. The max it reached was, I believe, 98W.


----------



## jomama22

Falkentyne said:


> Thanks for confirming that VRAM power limit actually matters in the vBIOS.
> @bmgjet will be happy to know this information.
> 
> So here is what I can confirm:
> 
> 1) 8 pin Power Limit 1 and Power Limit 2 (and PL3) are useless except for TDP limit. All they are used for is calculating TDP by adding it with Slot Power. Default 8 pin 1+8pin2+(8pin 3 if available)+Slot power should equal TDP default limit if the BIOS values weren't written by monkeys. (disclaimer: this may not apply to all vBioses. does apply to Founder's Edition)
> 
> 2) 8 pin PL1/PL2/PL3 seem to use the corresponding SRC power Limit values, and Input Power Plane Source Power (SRC) has its own single power draw and shunt.
> (disclaimer: This may not apply to all vbioses. It's unknown if some use the 8 pin values then throttle, or if some use the SRC values instead. Take note that the # of SRC power limits equal the # of 8 pin inputs).
> 
> 3) PCIE Slot Power has a strange link to MVDDC if MVDDC is shunt modded and Slot Power is NOT shunt modded--MVDDC skyrockets (and you get throttled).
> 4) GPU Chip Power seems to be able to exceed the Bios limit (when, why and by how much, no one knows).
> 5) No one on the planet knows what the AUX power limits do. May be related to people getting throttling in Superposition 4k--custom--extreme shaders, or massive throttling in Path of Exile with GI+Shadows set to Ultra, at all max settings (but not if set to "High").


It depends highly on the card.

8-pin power 100% matters in some scenarios. A good example is what I asked him to try before: just load up the GPU-Z PCIe gen tester in full screen. You will hit a power limit. It's not SRC, mem, or slot... it's one 8-pin that gets hammered that will flag the power limit. SRC will not be high at all, along with everything else except one pin (this is on the Strix, mind you, with three 8-pins. How this acts for a "two pin" card like the FE I'm not 100% on).

Yes, on your FE, SRC seems to be your problem once it's shunted, but before it's shunted I would imagine you would be hitting the pin limit before SRC, unless SRC is limited below the 8-pin power from the get-go.

Also, the total TDP limit is not merely the sum of the pins and slot (not referring to what GPU-Z is reporting, but to what the limit is labeled as, e.g. 450W, 480W...). Many BIOSes (my v1 Strix BIOS, for example) have a limit of 121.5W per pin + "86W" (I put this in quotes because it would never hit that limit, as hardware prevents the slot from ever going above ~65W under any circumstance), which is well below the max board power of 480W. I would not be surprised if other BIOSes have this same layout.

As to why MVDDC goes up when the PCIe isn't shunted, who knows; it doesn't really matter tbh lol. Possibly, since you are messing with the ratio of power from the slot and the pins (by only shunting the pins), it's going to pull more than it needs: the pin shunt is telling it it's not getting enough, while the slot tells it it's getting what it requested, and since it probably wants to be safe, it will believe the pin shunt, thus requesting more power from everything until everything is happy with the power it's getting. When both are shunted, they would be happy at the same time, so the power draw will be overall lower. But again, who knows; why it happens doesn't really matter.


----------



## Falkentyne

jomama22 said:


> It depends highly on the card.
> 
> 8 pin power 100% matters in some scenarios. A good example is what I asked him to try before: just load up gpu-z pcie gen tester in full screen. You will hit a power limit. It's not src, mem, slot...it's one 8pin that get hammered that will flag the power. Src will not be high at all, along with everything else except one pin (this is on the strix mind you, with 3 8pins. How this acts for a "two pin" like the fe I'm not 100% on).
> 
> Yes, on your fe, src seems to be your problem once it's shunted, but before its shunted I would imagine you would be hitting the pin limit before src unless src is limited below the 8 pin power from the get go.
> 
> Also, total tdp limit is not merely an additive of the pins and slot (not referring to what gpu-z is reporting but what is labeled at the limit, ex:450w, 480w...). Many bios' (my v1 strix bios for example) has a limit of 121.5w per pin + "86w" (I put this in quotes because it would never hit that limit as hardware prevents the slot from ever going above ~65w under any circumstance), which is well below the max board power of 480w. I would not be surprised if other bios's have this same layout.
> 
> As to why mvddc goes up when the pcie isn't shunted, who knows, doesn't really matter tbh lol. Possible that since you are messing with some ratio of power from the slot and the pins (by only shunting the pins) it's going to pull more than it needs because the pin shunt is telling it it's not getting enough while the slot tells it it's getting what it requested and since it probably wants to be safe, it will believe the pin shunt, this requesting more power from everyone until everyone is happy with the power it's giving. When they are both shunted, they would equally be happy at the same time so the power draw will be overall lower. But again, who knows and why it happens doesn't really matter.


IDK it's weird.
First I was referring to his post:









RTX 3090 Founders Edition working shunt mod — "Yes idea would be to stack them. Should calculate that later in the tool...." — www.overclock.net




You can see here he didn't shunt his PCIE Slot, and MVDDC skyrocketed to the Space Station.

Then when he shunted it, it went way down.

Also, I found something else bizarre. One time my PCIe wasn't shunted "well enough" and I had the same thick throttle bar at 79.9W PCIe! But MVDDC was 0.0W (due to the shunt), so it was PCIe throttling hard. Yet the BIOS says default is 75W and max is 86.2W... so I do not know where that 79.9W comes from; it's not in the Ampere BIOS editor anywhere.

When I applied more paint on the shunt, Slot Power dropped slowly down as it cured, to 75W, 70W, 65W, eventually down to 55W.

Anyway:
-----------------------------------


My 8-pin default power limit is 145.8W, max is 162W; default SRC1 and SRC2 is 150W, max is 175W.

But I just did what you said, ran GPU-Z fullscreen for 10 minutes, and 8-pin #1 got up to 182W! 8-pin #2 was 159.5W.
Slot was, I forget, 55W or something.

I was getting repeated (not consistent, off and on) green throttle flags in GPU-Z, but I don't know if that's because of the 8-pin or because of the total board power being reported as 385W+ (usually it starts doing minor throttling if total board power is reported as 375W or higher, with 400W reported being the max (114% TDP)). Note: real power was 560W.


----------



## mattskiiau

Having a look at stock X Trio bios vs EVGA's 500w bios using ABE, interesting to see how low MSI locked VRAM PL down compared to EVGA.

X Trio:
Max 7500mW

EVGA:
Max 13406mW

I wonder if this calls for concern? Unsure what X Trio's VRM's can handle.

Screenshot if anyone is curious:


----------



## Falkentyne

cenumis said:


> Having a look at stock X Trio bios vs EVGA's 500w bios using ABE, interesting to see how low MSI locked VRAM PL down compared to EVGA.
> 
> X Trio:
> Max 7500mW
> 
> EVGA:
> Max 13406mW
> 
> I wonder if this calls for concern? Unsure what X Trio's VRM's can handle.
> 
> Screenshot if anyone is curious:
> View attachment 2474602


Default TDP: 370W.
Default 8 pins: 150W + 150W + 150W + 66W Slot=516W.

Max TDP: 380W (WTH?)
Max 8 pins: 175W + 175W + 175W + 75W (78W)=600W

FE 3090:
Default TDP: 350W
Default 8 pins: 145.8+145.8+75 Slot=366W (this was the original default spec btw).
8 Pin throttle @ 100% TDP: 150W.

Max TDP: 400W
Max 8 pins: 162+162+75 Slot=399W
8 pin throttle @ 114% TDP: 175W+

I think I don't need to say more.

These companies like watching the world burn.
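For anyone wanting to run the same sanity check on their own card, the arithmetic in this post is just a sum of per-rail limits compared against the TDP. A minimal sketch in Python, using the example values quoted above (the helper name and the treatment of every rail as a plain sum are my own assumptions; per-rail limits are BIOS-specific, so read yours out of a BIOS editor first):

```python
# Sanity-check whether a vBIOS's per-rail limits leave headroom over its TDP.
# Values below are the ones quoted in the post, not universal constants.

def rail_headroom(tdp_w, rail_limits_w):
    """Sum the per-rail power limits and report the slack over the TDP."""
    total = sum(rail_limits_w)
    return total, total - tdp_w

# MSI X Trio defaults: 3x 8-pin at 150W each + 66W slot, 370W TDP
total, slack = rail_headroom(370, [150, 150, 150, 66])
print(f"X Trio: rails sum to {total}W ({slack}W above TDP)")

# FE 3090 defaults: 2x 8-pin at 145.8W each + 75W slot, 350W TDP
total, slack = rail_headroom(350, [145.8, 145.8, 75])
print(f"FE: rails sum to {total:.1f}W ({slack:.1f}W above TDP)")
```

On the FE the rails sum to roughly the TDP, while on the X Trio they sum to well above it, which is exactly the mismatch being complained about here.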


----------



## jomama22

Falkentyne said:


> IDK it's weird.
> First I was referring to his post:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> RTX 3090 Founders Edition working shunt mod — "Yes idea would be to stack them. Should calculate that later in the tool...." — www.overclock.net
> 
> 
> 
> 
> You can see here he didn't shunt his PCIE Slot, and MVDDC skyrocketed to the Space Station.
> 
> Then when he shunted it, it went way down.
> 
> Also I found something else bizarre. One time my PCIE wasn't shunted "well enough" and I had the same thick throttle bar at 79.9W PCIE! But MVDDC was 0.0W (due to the shunt), so it was PCIE throttling hard. Yet the Bios says default is 75W and max is 86.2W....so I do not know where that 79.9W comes from...it's not in the ampere editor anywhere.
> 
> When I applied more paint on the shunt, Slot Power dropped slowly down as it cured, to 75W, 70W, 65W, eventually down to 55W.
> 
> Anyway:
> -----------------------------------
> 
> 
> My 8 pin default power limit is 145.8, Max is 162, default SRC1 and 2 is 150, Max is 175.
> 
> But I just did what you said, ran GPU-Z fullscreen for 10 minutes, and 8 pin #1 got up to 182W! 8 pin #2 was 159.5W.
> Slot was I forgot. 55W or something.
> 
> I was getting repeated (not consistent, like off and on) green throttle flags in GPU-Z, but I don't know if that's because of the 8 pin, or because of the Total Board Power being reported as 385W+ (Usually it starts doing minor throttling if total board power is reported as 375W or higher, with 400W reported being the max (114% TDP)). Note: Real power was 560W.


Yeah, the GPU-Z power limit bar I'm talking about just happens sporadically during the PCIe test, not as a constant thing. Just an example of how the pin can be the limiter in some edge cases.


----------



## Falkentyne

jomama22 said:


> Yeah, the gpu-z power limit bar I'm talking about just happens sporadically during the pcie test, not a constant thing. Just an example of how the pin can be a limiter is some edge cases.


I just got this, even though the PCIE 8 pin limit at 114% TDP is 162W (but throttles at the SRC1/SRC2 limits though) and SRC limit is 175W.


----------



## jomama22

Falkentyne said:


> I just got this, even though the PCIE 8 pin limit at 114% TDP is 162W (but throttles at the SRC1/SRC2 limits though) and SRC limit is 175W.
> 
> View attachment 2474608


Are you talking about SRAM? That's MSVDD power (so msddc, I guess). I don't believe that's what SRC power would be, but I don't know. I would expect some overshoot of any power limit as well.


----------



## Falkentyne

jomama22 said:


> Are you talking about sram? That's msvdd power (so msddc I guess). I don't believe that is what src power would be but I don't know. I would expect some overshoot of any power limit as well.


No, I'm talking about SRC (SRC power in GPU-Z is the same thing as "Input Power Plane Source" in HWiNFO64),
not SRAM input power.

The number of SRC rails equals the number of 8-pin power connectors.

RTX 3090 FE (two 8-pin connectors). Notice that the SRC 3 power limit is "0", as it doesn't have a third 8-pin.










3090 ROG Strix.
Notice there are three SRC power limits--for each of the 8 pins!









I do not know how the "SRC" power draw itself relates to this, as the SRC controls the power balancing. I do know there are two current-sensing (US5650?) chips on the card, or maybe two MP2886 (6-phase) chips, but only one SRC shunt itself (this stuff is over my head).

For example, throwing a shunt on GPU Chip power reduces Chip Power reporting but causes SRC power draw to _RISE_ higher, which is why the SRC shunt must also be modded.


----------



## jomama22

Falkentyne said:


> No I'm talking about SRC (SRC power in GPU-Z is the same thing as "Input Power Plane Source" in Hwinfo64).
> Not Sram input power.
> 
> There are # of SRC rails which are equal to the # of 8 pin power connectors.
> 
> RTX 3090 FE (2 eight pin connectors). Notice that SRC 3 power limit is "0" as it doesn't have a third 8 pin.
> View attachment 2474618
> 
> 
> 
> 3090 ROG Strix.
> Notice there are three SRC power limits--for each of the 8 pins!
> View attachment 2474619
> 
> 
> I do not know how the "SRC" power draw itself relates to this, as the SRC controls the power balancing. I do know there are two "Current sensing" (US5650?) chips on the card or maybe two MP2886 (6 phase) chips, but only one SRC shunt itself. (this stuff is over my head).
> 
> For example, throwing a shunt on GPU Chip power reduces Chip Power reporting but causes SRC power draw to _RISE_ higher, which is why the SRC shunt must also be modded.


I misread your comment; I thought you were implying the SRC limit was being reached.


----------



## Gebeleisis

des2k... said:


> gddr6x has temperature sensor, igorlabs has the internal tool that can read this ! so yeah...
> 
> it should be public, but that's going to freakout people with FE that have memory at 105c on the back modules


Yes, I saw that, but it's an internal NVIDIA tool and all the data was blurred except the temp ))


----------



## robertr1

Anyone find a way to get the 520w bios working on the FE?


----------



## bmgjet

robertr1 said:


> Anyone find a way to get the 520w bios working on the FE?


That would give you a lower power limit, since it's for a 3-plug card.
The FE BIOS is the 2nd-highest-power-limit BIOS.

The highest is the 1KW XOC BIOS, which will get you about 696W on a 2-plug (12-pin NVIDIA connector) card (I wouldn't really call it safe for the stock cooler, though).
Using the FE BIOS on a 3-plug card gets you 600W.
You need to use the patched NVFlash. It's been tested using the FE BIOS on AIB cards, but no one's wanted to risk flashing an FE card with the patched version, since you don't have a 2nd BIOS to fall back on if it's a bad flash, and you can't even hardware-flash it back.


----------



## Dreams-Visions

Pepillo said:


> Honestly, I don't recommend that BIOS. I have the same card, and the 520W Kingpin BIOS is more than enough without taking the unnecessary risks of that 1,000W BIOS, which has no temperature protections and on top of that doesn't perform well in comparison. If you still want it, here it is:
> 
> VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp


Understood. I'll leave it be. Thanks for the insight.


----------



## Vikingcools

I have an Asus RTX 3090 EKWB on the way. I see this has two power connectors, 20 power stages, and a 375W BIOS. I'm sure it could benefit from more power, though. I have complete overkill on rads (2x420 + 1x480), so temps won't be an issue. What is the performance expectation for this particular card compared with "the best"? Anyone actually running this SKU and have some stats to share?


----------



## Jpmboy

Optimus water block for my FTW3 arrived today...


----------



## Gamester.fr

Hello everyone,
I bought a Gigabyte RTX 3090 Eagle OC in September last year: what would be your advice for overclocking it? I use AORUS Engine but it is rather limited. Thank you for your help.

PS: is there a good waterblock for this graphics card?


----------



## changboy

The EK waterblock for my FTW3 Ultra arrived today; it's the day of waterblock deliveries.


----------



## Rbk_3

I have tried 2 hybrid kits on my FTW3 3090 and the pumps on both are unbearably loud. Has anyone gotten one with a silent pump? Here are the 2 I had. 

First one




 
Second one


----------



## changboy

From your video I didn't hear it as very loud lol.


----------



## o1dschoo1

Rbk_3 said:


> I have tried 2 hybrid kits on my FTW3 3090 and the pumps on both are unbearably loud. Has anyone gotten one with a silent pump? Here are the 2 I had.
> 
> First one
> 
> 
> 
> 
> 
> Second one


lmao if you call that loud you would hate to sit next to my pc ..


----------



## GAN77

Which BIOS is better in your opinion, EVGA 500 or 520 watts? I have highlighted the significant differences.


----------



## Rbk_3

changboy said:


> From ur vidéo i didnt ear it verry loud lol.


Depends on what you listen to the video on. On my laptop it isn't very loud, but if I listen on my phone it is, and that's more true to life.

My EVGA CLC cooler makes hardly any noise at all. This literally sounds like a refrigerator.


----------



## gfunkernaught

All this talk about VRM PLs and RAM PLs has my head spinning. So flashing a BIOS from one custom PCB to another custom PCB from a different manufacturer isn't quite as straightforward as I thought. Take my MSI 3090 Trio for example: the BIOS was programmed in accordance with how the PCB was built, with all of its components. So throwing an EVGA 520W BIOS on my card would throw off the balance, wouldn't it? Like, all of those extra power limits for each component matter. I still want to water-cool my card, but flashing? There must be a reason MSI chose 380W as the power limit for this card, right?


----------



## Falkentyne

GAN77 said:


> Which BIOS is better in your opinion? Evga 500 and 520 watts. I have highlighted the significant differences.
> 
> View attachment 2474688


The first.
VRAM PLs matter, as proven a few pages back with the MSI Suprim X.


----------



## WayWayUp

Interesting. So on a shunted FTW3 card I'm better off using the 500W BIOS? I have been using the 520W.

Currently disappointed with my memory overclock; perhaps this will help.


----------



## jomama22

WayWayUp said:


> interesting. So on a shunted ftw3 card im better off using the 500w bios? I have been using the 520w
> 
> currently disappointed with memory overclock perhaps this will help


If you're shunted, I would only care about the memory power limit.

Granted, that depends on what you shunted with and whether the FTW3 has a memory shunt...


----------



## jomama22

gfunkernaught said:


> All this talk about vrm PL's and RAM PL's got my head spinning. So flashing a bios from one custom PCB to another custom PCB from a different mfg isn't quite as straight forward as I thought. So take my MSI 3090 Tri for example, the BIOS was programmed in accordance with how the PCB was built with all of its components. So throwing an EVGA 520W bios on my card would throw off the balance wouldn't it? Like all of those extra power limits of each component matter. I still want to water cool my card but flashing? There must be a reason MSI chose 380w as the power limit for this card right?


No, you will be fine. A 500W/520W BIOS will be OK. You also have to consider that many BIOS limits are used to differentiate the product line: they wouldn't want a reviewer to have their Trio beat their Suprim, and limiting the BIOS is the cheapest and fastest way to do it (because binning is all over the place anyway).


----------



## changboy

WayWayUp, can you tell me your score in Port Royal with your shunted FTW3, and also how many shunts you did?


----------



## gfunkernaught

jomama22 said:


> No, you will be fine. A 500w/520w bios will be ok. You also have to consider that many bios limits are used to differentiate their product line. They wouldn't want a reviewer to have their trio beat their supreme and limiting the bios is the cheapest and fastest way to do it (because binning is all over the place anyway).


So are the Trio and Suprim identical PCBs, just one clocked higher? Or are they different? Sure, the lower-tier card with a higher power limit can "beat" a higher-tier card with a lower power limit, but what about long-term effects? The 500W BIOS from EVGA won't cause any of the components on the Trio to behave differently than what they were designed for?


----------



## Benni231990

Hi,

I have a question: I ordered a Gainward 3090 Phantom GS.

Has anybody flashed the 500W BIOS on this card and can say how many DP ports are lost? And does the HDMI port still work after the flash?


----------



## ALSTER868

WayWayUp said:


> interesting. So on a shunted ftw3 card im better off using the 500w bios? I have been using the 520w
> 
> currently disappointed with memory overclock perhaps this will help


I'm on the 520W BIOS and my memory can do +1500.
Does that mean I don't need that 500W BIOS?


----------



## Falkentyne

WayWayUp said:


> interesting. So on a shunted ftw3 card im better off using the 500w bios? I have been using the 520w
> 
> currently disappointed with memory overclock perhaps this will help


If you're shunted, why are you trying to use a different vBIOS?
You did apply a shunt over the GDDR6X, right? So there's no need to worry about the memory power limit, AFAIK. But you can test it.

The FTW3 has connector 1 solo at the top; the others are the same.
(The PCIe slot shunt is on the back side.)


----------



## jomama22

gfunkernaught said:


> So are the trio and suprim identical PCB's just one is clocked higher? Or are they different? Sure, the lower-tier card with a higher power limit can "beat" a higher-tier card with a lower power limit. But long term effects? The 500w bios from EVGA won't cause any of the components on the Trio to behave differently than what they were designed for?


They are slightly different PCBs/components, but at that power level you will be fine.


----------



## WayWayUp

changboy said:


> WayWayUp, can you tell me ur score on port royal with ur shunt ftw3 and also how many shunt you did ?











I scored 15 811 in Port Royal (Intel Core i9-10900KF, NVIDIA GeForce RTX 3090, 64-bit Windows 10) — www.3dmark.com





This was with +1000 memory. I know there is something voltage-related going on, as I can run +1100 at room temp, but when I lower the temps a bit and ramp up my GPU core I crash with the same +1100 memory and have to scale back.

I shunted with, I believe, 7 mΩ shunts, including the SRC.
I can't remember exactly since it was a few months back; I believe it's 5 + 1 + 1 slot, so 7 total if I remember correctly. I used 10 mΩ on the slot.

It's weird having a golden chip but extremely average memory; usually chips match better. If my memory matched my core I would be scoring 16k+ and have the highest above-freezing score in Port Royal. It's a shame, but don't get me wrong, I don't mean to complain. Very happy with my chip; it's excellent.
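As a side note on the arithmetic behind these mods: stacking a resistor in parallel with a stock shunt lowers the sense resistance, so the card under-reports power by the ratio of the modded resistance to the stock one. A rough sketch, assuming a 5 mΩ stock shunt (a commonly cited value; the real value varies by card and rail) with an 8 mΩ resistor stacked on top, as described above:

```python
# Why a stacked shunt makes the card under-report power: the controller
# measures the voltage drop across the sense resistor, so lowering the
# resistance lowers the reported power by the same factor.
# The stock shunt value here is an assumption, not a measured spec.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005   # assumed stock shunt (ohms); varies by card and rail
R_ADD = 0.008     # stacked Panasonic resistor (ohms), as mentioned above

r_mod = parallel(R_STOCK, R_ADD)
scale = r_mod / R_STOCK   # fraction of real power the card now reports

print(f"effective shunt: {r_mod * 1000:.2f} mOhm")
print(f"card reports about {scale:.0%} of real power")
print(f"a reported 385W would really be about {385 / scale:.0f}W")
```

With these assumed values the card would report roughly 62% of real power; the actual factor depends on the true stock shunt values, which is why reported and at-the-wall numbers drift apart after a mod.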


----------



## boli

OK, so I got a PNY 3090 Uprising, because my patience waiting for the Asus EKWB 3080 ordered weeks ago ran out.

A heatkiller V waterblock is on its way, but for now it's aircooled. I like to give them a test run before I put a block on them anyway.

Unfortunately, it appears to be defective. Its performance is abysmal (4 fps in AC: Valhalla @ UHD), and its clock starts at 1350 and quickly degrades to around 800 then 210 MHz, while under load. It doesn't show any power draw in GPU-Z whatsoever. Wall power changes from 80W on desktop to 220W during Furmark - way too little for this card. Still it claims power limit all the time.

So far the only things I've done are a clean install of the NVIDIA drivers (which included 2 restarts) and changing the single dual-head PCIe power cable to two separate power cables, just because. The dual-head setup worked fine for a Titan X Pascal, a 2080 Ti (380W), and an RTX 3070, the last of which I just removed from this PC working absolutely fine.

Seasonic Prime Titanium 750W PSU.

Do you guys have any ideas before I attempt to return the card as DOA?









PS: This is my old i7-6700K build. The card is supposed to go in my new Ryzen build I finished last weekend, but will not fit with the original air cooler.


----------



## Falkentyne

boli said:


> OK, so I got a PNY 3090 Uprising, because my patience waiting for the Asus EKWB 3080 ordered weeks ago ran out.
> 
> A heatkiller V waterblock is on its way, but for now it's aircooled.
> 
> Unfortunately, it appears to be defective. Its performance is abysmal, and its clock starts at 1350 and quickly degrades to around 800 then 210 MHz, while under load. It doesn't show any power draw in GPU-Z whatsoever.
> 
> So far the only thing I've done is a clean install of the NVidia drivers (which included 2 restarts), and changed the single dual-head PCI power cable to two separate power cables, just because. The dual-head setup worked fine for a Titan X Pascal, 2080 Ti and RTX 3070 (which I just removed from this PC working absolutely fine)), but you never know.
> 
> Seasonic Prime Titanium 750W PSU.
> 
> Do you guys have any ideas before I attempt to return the card as DOA?
> 
> View attachment 2474703


Looks like one of the shunts or current sensing is busted. RMA it.


----------



## WayWayUp

boli said:


> OK, so I got a PNY 3090 Uprising, because my patience waiting for the Asus EKWB 3080 ordered weeks ago ran out.
> 
> A heatkiller V waterblock is on its way, but for now it's aircooled.
> 
> Unfortunately, it appears to be defective. Its performance is abysmal, and its clock starts at 1350 and quickly degrades to around 800 then 210 MHz, while under load. It doesn't show any power draw in GPU-Z whatsoever. Wall power changes from 80W on desktop to 220W during Furmark - way too little for this card. Still it claims power limit all the time.
> 
> So far the only thing I've done is a clean install of the NVidia drivers (which included 2 restarts), and changed the single dual-head PCI power cable to two separate power cables, just because. The dual-head setup worked fine for a Titan X Pascal, 2080 Ti (380W) and RTX 3070 – the last of which I just removed from this PC working absolutely fine.
> 
> Seasonic Prime Titanium 750W PSU.
> 
> Do you guys have any ideas before I attempt to return the card as DOA?
> 
> View attachment 2474703
> 
> 
> PS: This is my old i7-6700K build. The card is supposed to go in my new Ryzen build I finished last weekend, but will not fit with the original air cooler.


Don't be shy to RMA; it happens to many of us. Good luck with your next card. I doubt there is any quick fix. I don't understand dishing out for ultra-extreme GPUs but then matching them with barely passable PSUs. If you leave everything at stock you might be fine with 750W, but when you start pushing overclocks you will find yourself disappointed. The PNY BIOS isn't too crazy though, so hopefully you will be fine.


----------



## gfunkernaught

jomama22 said:


> They are slightly different pcbs/components but at that power level you will be fine.


Fine as long as all components are cooled properly, right? Thermal pads have to be perfect, etc. If I were to put the 500W EVGA BIOS on my Trio, which still has the stock cooler on it, and not do any manual overclocking, would it be safe? Sorry for the repetitive question, but I do have legitimate concerns about pushing this VERY expensive card.


----------



## WayWayUp

ALSTER868 said:


> I'm on 520W bios and my memory can do +1500.
> Does that mean I don't need that 500W bios?


That's a phenomenal OC; I would say you're good to go.


----------



## boli

Thanks for the answers, guys.



WayWayUp said:


> I dont understand dishing out for ultra extreme gpus but then matching up with barely passable psus. If you leave everything at stock you might be fine with 750w but when you start pushing overclocks you will find yourself disappointed. I pny bios isnt too crazy though so hopefully you will be fine


Hmm, I thought a Seasonic Prime Titanium 750W was one of the finest available quality-wise, but yes I'm aware 750W is at the limit of what's officially recommended. But realize this PSU is in my *old* PC.

For the *new* one I actually wanted to buy an 850W+ to be safe. Unfortunately no titanium or platinum PSUs with more than 650W in stock anywhere around here, so I'll have to get by with 750W for a while… according to tests by Igor it should be fine with original BIOS.

I'm also somewhat confident because it ran an OC'd 2080 Ti (380W) without issues, and this particular 3090 has a 365W BIOS, so it's in about the same ballpark. If history is any indication, I will not try anything crazy power-wise. The 2080 Ti I eventually returned to the undervolted stock 320W BIOS, because it ran a lot cooler using less power with barely any fps loss. It still died the space-invader death after 1.5 years of service, like so many others.


----------



## mattskiiau

gfunkernaught said:


> Fine as long as all components are cooled properly right? Thermal pads have to be perfect, etc. If I were to put the 500w evga bios on my trio which still has the stock cooler on it, and not do any manual overclocking, would it be safe? Sorry for my repetitive question, but I do have legitimate concerns about pushing this VERY expensive card.


I've been kinda poking at the same thing for a few pages on this forum now. Nobody really knows. I'm not too concerned about the PL, since I'm locking my card at 2050 MHz, which it can't do on the stock ROM. It doesn't really go higher than a ~400W PL, which is like 20W higher than stock.
However, the VRM is still my concern.

I'm actually thinking of changing to the 520W BIOS, as someone posted the ABE comparison a few pages ago. The VRM limit is closer to the stock X Trio BIOS. Very strange how the 500W BIOS has a much higher VRM PL than the 520W?


----------



## GAN77

cenumis said:


> Very strange how the 500w BIOS has a much higher VRM PL over the 520w?


If we compare BIOSes from different manufacturers, on version 94.02.26 the memory limit is higher than on version 94.02.42.
Version 94.02.42 is newer.


----------



## changboy

I will soon put my waterblock on my FTW3 Ultra, and I have here 10x 8 mΩ Panasonic resistors for the shunt mod, but I only see 3 shunts by the PCIe power connectors and maybe 1 on the other side for the PCIe slot. Where do you put the other shunts?

I count 4 shunts and I don't know where you put the other 3?


----------



## gfunkernaught

cenumis said:


> I've been kinda poking the same thing for a few pages on this forum now. No body really knows. I'm not too concerned about the PL since I'm locking my card at 2050mhz which can't do that on stock ROM. Doesn't really go higher than 400w~ PL which is like 20w higher than stock.
> However, the VRM is my concern still.
> 
> I'm actually thinking of changing to the 520w bios as someone posted the ABE few pages ago. The VRM limit is closer to the stock X Trio bios. Very strange how the 500w BIOS has a much higher VRM PL over the 520w?
> View attachment 2474712


Looking at the VRAM PL of the 500W versus the stock Trio BIOS, the 500W BIOS has almost twice the power limit. Just because the limit for VRAM power was raised doesn't mean no harm will come to the VRAM, right? Like, what is stopping the RAM from using more power than it was designed for, even at stock clocks? If a card was built for a higher power limit, doesn't that mean component voltages will be higher than on a card built with a power limit of 380W, even if their clocks are the same or within margin?


----------



## Falkentyne

gfunkernaught said:


> Looking at the VRAM PL 500w vs TRIO stock bios, the 500w bios has almost twice the power limit. Just because the limit for vram power was raised doesn't mean no harm will come to the vram right? Like, what is stopping the ram from using more power than it was designed to? Even at stock clocks? If a card was built with a higher power limit, doesn't that mean that component voltages will be higher than a card that was built with a power limit of 380w? Even if their clocks are either the same or within margin?


This has no relevance for gaming. Higher VRAM PL=good.
Such a thing would matter for mining, however.


----------



## gfunkernaught

Falkentyne said:


> This has no relevance for gaming. Higher VRAM PL=good.
> Such a thing would matter for mining, however.


Electrical load is what I'm referring to, not usage. A lot of my concern comes from all the power/stability issues with the 3090 that I've read about. NVIDIA released a driver fix to effectively lower the power limit/boost clocks to restore stability. That to me sounds like a design flaw, like the Spectre thing with Intel (although in Intel's case it wasn't a "flaw").

If you put a 20-power-stage BIOS on a card that only has 18 power stages, how does the card compensate?


----------



## GRABibus

Hi,
I received my 3090 EVGA ultra FTW3 and I can see in osd during gaming that the voltage goes up to 1,1V.

I thought the max was 1,093V.

am I wrong ?


----------



## Falkentyne

gfunkernaught said:


> Electrical load is what I'm referring to, not usage. A lot of my concern comes from all these power/stability issues with the 3090 that I've read about. Nvidia releases a driver fix to effectively lower the power limit/boost clocks to restore stability. That to me sounds like a design flaw, like the specter thing with Intel (although in Intel's case it wasn't a "flaw").
> 
> If you put a 20 power stage bios on a card that only has 18 power stages, how does the card compensate?


There's no problem with electrical load. Don't worry about things you don't need to worry about.
FTW3s exploding / catching fire / dying is a hardware flaw on those cards themselves. This issue doesn't seem to be happening with other cards.

If you're worried about load, just remove the crappy 1.2 W/mK thermal pads, measure them with calipers at original thickness, then buy some nice replacement 12 W/mK or 12.8 W/mK Gelid or Thermalright Odyssey pads, repad your VRAM, and repaste your GPU with Thermalright TFX.


----------



## Jpmboy

GRABibus said:


> Hi,
> I received my EVGA 3090 FTW3 Ultra, and I can see in the OSD during gaming that the voltage goes up to 1.1V.
> 
> I thought the max was 1.093V.
> 
> Am I wrong?


uh - 1.093 and 1.1 are the same voltage


----------



## gfunkernaught

Falkentyne said:


> There's no problem with electrical load. Don't worry about things you don't need to worry about.
> FTW3's exploding / catching fire / dying is a hardware flaw on the cards themselves. This issue doesn't seem to be happening with other cards.
> 
> If you're worried about load, just remove the crappy 1.2 w/mk thermal pads, measure them with a calipers at original thickness, then buy some nice replacement 12 w/mk or 12.8 w/mk Gelid or Thermalright Odyssey pads and repad your VRAM and then repaste your GPU with Thermalright TFX.


Wasn't just FTW3s, the issue affected FE as well as various AIB cards.


----------



## Falkentyne

gfunkernaught said:


> Wasn't just FTW3s, the issue affected FE as well as various AIB cards.


I haven't seen any FE's go up in smoke.
Link me to this?


----------



## changboy

Why do you recommend the Thermalright TFX? From all the reviews I've seen it's not the best, and many said it's really hard to apply because it's kind of dry. I won't be using it, for sure.


----------



## des2k...

WayWayUp said:


> That's a phenomenal OC i would say your g2g


that's a nice memory OC, but it's not really +1500, as those modules are already rated for 21 (10500MHz), so at +750 you're still running at Micron spec

I haven't pushed mine higher than +745, but I would have been mad if they didn't get to at least +745.

I got some Thermalright Odyssey pads (12.8 W/mK), not sure how good they are, but they should be better than EK (5 W/mK or less).


----------



## GRABibus

Jpmboy said:


> uh - 1.093 and 1.1 are the same voltage


No.
They are two different points on the V/F curve, and you can adjust both separately.

Until now, 1.093V was always a known limit.
Only a custom bios could help in going beyond it, up to 1.1V.

This is why I'm asking, just to understand.


----------



## GAN77

I looked at all the available bios 94.02.*42*.XX from different vendors. All have a memory limit of less than 100 watts. The older bios version 94.02.*26*.XX has a higher memory limit. Why were these changes made?


----------



## gfunkernaught

Falkentyne said:


> I haven't seen any FE's go up in smoke.
> Link me to this?


No smoke, just stability issues.








Manufacturers respond to GeForce RTX 3080/3090 crash to desktop issues - videocardz.com





Also this

I'm not gonna act like I have a deep understanding of all these intricate bios power limits, but I've never bought a GPU that was already at its power limit with no OC. Just seems weird.

I see at least three newer bios for my card on techpowerup. My BIOS was released on Sept 1 2020, version 94.02.42.00.F1. After going through the frustration of trial and error with a 2080 ti and various bios, I'm not exactly eager to start flashing or modding this card. Eager to water cool it yes. But I want to see how all this unfolds over the next few months.


----------



## Canson

Guys, how do you stress test your GPU to find a stable overclock? I am starting to give up soon lol
+150MHz core or more in Port Royal and Superposition 4K, no problem, but it kept crashing in Cyberpunk 2077 all the way down to +110MHz core (haven't tested this for a longer period to know if it's stable or not).

Is Cyberpunk 2077 the best tool to stress test, or does it just hate overclocks?


Played with 0.850V @ 1905MHz for 1 hour at least, no crashes. Changed the game to Warzone: crash after 15-20 min with some weird error (not the usual error 6068).

I see a lot of people compare Port Royal and Time Spy results here, but does anyone actually game here with stable overclocks?


----------



## Falkentyne

Warzone / COD MW Ground War is still the gold standard for stress testing.
Or if you really hate your video card, you can try Path of Exile at Ultra detail + Global Illumination, shadows = Ultra, with unlocked FPS.


----------



## des2k...

Well, for 1905 at 850mV you can loop the Timespy Extreme GT2 test. If it holds 1905MHz and passes 2 hours, then games shouldn't crash at that frequency.

CP2077 you can play with an overclock; I'm at 2130 core, +745 mem, no crashes.


----------



## des2k...

Falkentyne said:


> Warzone / COD MW Ground War is still the gold standard for stress testing.
> Or if you really hate your video card you can try Path of Exile Ultra detail + GI+Global Illumination shadows=Ultra with unlocked FPS.


"if you hate your card", well there's Quake RTX too lol


----------



## bmgjet

There are 4 points you need to get stable:
Raster
Raytracing
Raytracing + AI (DLSS)
Then all of them combined.

For raster only, most cards will happily get +150-200. More if you're starting from a low base boost card at 1695MHz.
Ray tracing only, most cards will do the +140-170 sort of range.
Ray tracing and AI is where it starts getting hard for the card, since AI can't have any errors at all in the maths or it crashes.
Compare that to raytracing and raster, where small errors will just give visual artifacts. +100-120 is sort of the aim for that load.
Then combining them all, you'd usually just hit the power limit first.

But of course, if you're starting from a factory overclock already, you might only get +50-100 when doing RT and AI.
But that's all part of the fun of overclocking: finding what parts of your card can do what.
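If you want the four-point plan above as a starting-point table, here's a tiny illustrative sketch. The function name and the idea of subtracting the factory OC are mine, and the offset ranges are just the rough figures from this post for cards near the 1695MHz reference boost, not guarantees; every card differs.

```python
# Rough sketch of the four-point stability plan above. Offsets are in MHz
# over stock boost; a factory-OC card effectively eats into that headroom.

STAGES = [
    ("raster",           (150, 200)),
    ("ray tracing",      (140, 170)),
    ("ray tracing + AI", (100, 120)),
]

def starting_offsets(factory_oc_mhz=0):
    """Suggest a conservative starting core offset (MHz) per load type."""
    return {name: max(low - factory_oc_mhz, 0) for name, (low, _high) in STAGES}

print(starting_offsets(0))   # reference-clocked card: start at the low ends
print(starting_offsets(75))  # factory-overclocked card: start lower everywhere
```

The "all combined" stage is left out of the table because, as noted above, you usually hit the power limit before instability there.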


----------



## Paperino super cuochino

Hi guys, I'm looking for an RTX 3090 to shunt and watercool (aiming for high OC levels), but I don't know what model I should get, since a lot of them are also really hard to find in European stores. At the moment I'm considering several models: Strix OC, GameRock OC, Suprim X, EVGA FTW3 Ultra, and Gigabyte Xtreme. Which one is better than the others in your opinion? I know that some of them still don't have any available waterblock at the moment, so the choice is pretty hard considering even that. Thanks in advance for the help.


----------



## des2k...

bmgjet said:


> Theres 4 points you need to get stable.
> Raster
> Raytracing
> Raytracing + AI (DLSS)
> Then all of them combined.
> 
> For raster only most cards will happily get 150-200. More if your starting from a low base boost card at 1695mhz.
> Ray tracing only most cards will do 140-170 sort of range
> Ray tracing and AI is where it starts getting hard for the card since AI cant have any errors at all in the maths or it crashes.
> Compared to raytracing and raster where small errors will just give visual artifacts. 100-120 is sort of the aim for that load.
> Then combining them all youd just usually hit power limit first.
> 
> But of course if your starting from a factory overclock already you could only get 50-100 when doing RT and AI.
> But thats all part of the fun of overclocking finding what parts of your card can do what.


Not sure why you want to test all those combinations out.
Raster at 4K with a heavy power load is basically Prime95 for GPUs. If it doesn't crash, no DLSS/RTX combination will crash your OC.

DLSS and RTX have a small die area on Ampere; they require less voltage than pure raster and always hold back the raster engine (the biggest part of the die) from working at 100%.


----------



## mattskiiau

gfunkernaught said:


> No smoke, just stability issues.
> 
> 
> 
> 
> 
> 
> 
> 
> Manufacturers respond to GeForce RTX 3080/3090 crash to desktop issues - videocardz.com
> 
> 
> 
> 
> 
> Also this
> 
> I'm not gonna act like I have a deep understanding of all these intricate bios power limits, but I've never bought a GPU that was already at its power limit with no oc. Just seems weird.
> 
> I see at least three newer bios for my card on techpowerup. My BIOS was released on Sept 1 2020, version 94.02.42.00.F1. After going through the frustration of trial and error with a 2080 ti and various bios, I'm not exactly eager to start flashing or modding this card. Eager to water cool it yes. But I want to see how all this unfolds over the next few months.


So.. interesting find. I compared the 3 official bios for the X Trio. Left is default, middle is low temp, right is low noise.
It seems even the low temp version increased the VRAM PL to 100W max? I'm suddenly not too concerned about running the 500W bios now.


----------



## Canson

Falkentyne said:


> Warzone / COD MW Ground War is still the gold standard for stress testing.
> Or if you really hate your video card you can try Path of Exile Ultra detail + GI+Global Illumination shadows=Ultra with unlocked FPS.


I don't have Path of Exile. Do you think it would work to run Warzone in the background, sitting at the menu without an FPS cap, and stress test it this way?
I tried that with 1950MHz at 0.850V, 20 mins, no crash. This would crash pretty fast in Cyberpunk 2077.



des2k... said:


> Well for 1905 at 850mv you can loop Timespy Extreme GT2 test. If it holds 1905mhz and passes 2hours then games shouldn't crash at frequency.
> 
> cp2077, you can play with overclock, I'm at 2130 core +745mem no crashes


Is Timespy Extreme GT2 more demanding than Port Royal? I will test it now and see how it goes with 1950MHz at 0.850V (this one crashes in Cyberpunk 2077 pretty fast, 5-15 min).
Should I disable G-Sync and V-Sync when running it? It does tell me to do that for the benchmark.



bmgjet said:


> Theres 4 points you need to get stable.
> Raster
> Raytracing
> Raytracing + AI (DLSS)
> Then all of them combined.
> 
> For raster only most cards will happily get 150-200. More if your starting from a low base boost card at 1695mhz.
> Ray tracing only most cards will do 140-170 sort of range
> Ray tracing and AI is where it starts getting hard for the card since AI cant have any errors at all in the maths or it crashes.
> Compared to raytracing and raster where small errors will just give visual artifacts. 100-120 is sort of the aim for that load.
> Then combining them all youd just usually hit power limit first.
> 
> But of course if your starting from a factory overclock already you could only get 50-100 when doing RT and AI.
> But thats all part of the fun of overclocking finding what parts of your card can do what.


So what program or game do you recommend for a GPU stress test to find a stable overclock?


----------



## gfunkernaught

Bright Memory Benchmark is a great way to test your OC for heat and stability. Does both RT and Raster. I've had OCs that were perfectly stable for raster but as soon as I introduced RT+DLSS, not so stable. Testing the entire die is the way to go.


----------



## Falkentyne

Canson said:


> I dont have Path of Exile. Do you think it would work to run Warzone in background running at menu without fps cap and stress test it this way?
> I tried that with 1950mhz at .850mV 20 mins no crash. this would crash pretty fast in Cyberpunk2077
> 
> 
> 
> Is Timespy Extreme GT2 more demanding than Port Royal? I will test it now and see how it goes with 1950mhz at .850mV (this one crash in Cyberpunk2077 pretty fast 5-15min)
> Should i disable g-sync and v-sync when running it? It does tell me to do it for benchmark
> 
> 
> 
> So what program or game you recommend for gpu stress test to find stable overclock?


Running it in the menu doesn't work. It has to actually be on a map.
Ground War crashes it faster because there's a lot more stuff going on close to you.
Running it with render scale > 100% (if you're not above 4k) is the fastest way to crash your computer if you're not stable!


----------



## gfunkernaught

cenumis said:


> So.. interesting find. I compared the 3 official BIOS for X Trio. Left is default, middle is low temp, right is low noise.
> It seems even the low temp version increased the VRAM PL to 100 max? I'm suddenly not too concerned about running the 500w bios now.
> 
> View attachment 2474728


I was looking for that ABE utility to do the comparisons myself, but the one I found from this forum was flagged as a virus, so I didn't bother. Maybe a false positive?

Still, running a bios made for one custom PCB on a completely different PCB worries me. Like the KP XOC bios on a reference 2080 Ti (watercooled): it ran benchmarks fine, but games would crash instantly.


----------



## Falkentyne

Paperino super cuochino said:


> Hi guys, i'm looking for an rtx 3090 to shunt and watercool (aim to achieve high oc levels), but i don't know what model shoudl i get since a lot of them are also really hard to find in european stores. So at the moment i'm considering many models: strix oc, gamerock oc, suprim x, evga ftw3 ultra and gigabyte xtreme. Which one is better of the other in your opinion? I know that some of them don't even still have any avaible wb at the moment so the choice is pretty hard considering even that. Thanks in advice for the help


Shunting? Strix for sure!
No fuses, very high Chip Power Limit, the AUX limits seem to be high (no one knows what they do yet), and the MOST mod-friendly shunts in existence @bmgjet
The shunts have flush, flat edges down to the middle housing, so there are none of the contact issues you get on FE card shunts trying to bridge depressed edges (just look at a high-res picture of an FE and you'll see how painful that board is to mod safely).

All the shunts are positioned far enough away from ICs that you would have to be a gorilla to get MG842AR paint bridging an IC, or solder bridging a component, so you have a decent amount of space to work with. There are also solder points onboard for an EVC2 controller (I think).

(It's still recommended to use Super 33+ tape if you're painting the shunts, or high-temp kapton tape if you're solder stacking!)



gfunkernaught said:


> I was looking for that ABE utility to do the comparisons myself, but the one I found from this forum came up with a virus so I didn't bother. Maybe a false-positive?
> 
> Still though, running a bios made for one custom PCB on another completely different PCB worries me. Like the KP XOC bios on a reference 2080 ti (watercooled), ran benchmarks fine, but games would crash instantly.


No virus. Bmgjet's files are safe.
Many hack tools/modding tools cause AVs to run amok. Malwarebytes tested clean.


----------



## inedenimadam

I can't seem to find a definitive answer in this thread or elsewhere on the internet. Is the thermal/power blip bug anything to worry about? Is Nvidia aware of it being a driver based bug?


----------



## Canson

Falkentyne said:


> Running it in the menu doesn't work. It has to actually be on a map.
> Ground War crashes it faster because there's a lot more stuff going on close to you.
> Running it with render scale > 100% (if you're not above 4k) is the fastest way to crash your computer if you're not stable!


Like on a map in an actual game, playing it, or can I do it in practice mode and just stay AFK?



gfunkernaught said:


> Bright Memory Benchmark is a great way to test your OC for heat and stability. Does both RT and Raster. I've had OCs that were perfectly stable for raster but as soon as I introduced RT+DLSS, not so stable. Testing the entire die is the way to go.


Ok, we might have found the best stress test program to find a stable overclock the fastest and most accurate way.

I tried settings I know crash in Cyberpunk 2077, to see whether they also crash in benchmark programs.

TEST: 1950MHz at 0.850V (not stable in Cyberpunk 2077)

Timespy Extreme GT2 loop in the background for 25 mins: stable, no crashes.
Bright Memory Benchmark loop: crash after 5 mins; tried again, this time running it in the background, and it crashed after 1-2 mins.


----------



## jomama22

Falkentyne said:


> Shunting? Strix for sure!
> No fuses, very high Chip Power Limit, AUX limits seem to be high (no one knows what they do yet) and the MOST mod friendly shunts in existence @bmgjet
> Shunts are flush flat edges to middle housing so no contact issues like you get on FE card shunts trying to bridge depressed edges (just look at a high res picture of a FE and you'll see just how painful that board is to mod safely).
> 
> All the shunts are positioned well enough away from IC's that you would have to be a gorilla to get MG842AR paint bridging an IC, or solder bridging a component so you have a decent amount of space to work with. Also solder points for EVC2 controller onboard also (I think).
> 
> (still recommended to use Super 33+ tape if you're painting the shunts, or high temp kapton tape if you're solder stacking!)
> 
> 
> 
> No virus. Bmgjet's files are safe.
> Many hack tools/modding tools cause AVs to run amok. Malwarebytes tested clean.


Just make sure you have a small tip for the iron on the Strix; the upper-left two by the PCIe connector are a bit of a pain to get to well. They are smooshed between an inductor and the connector. The gap in the pic looks much larger than it actually is.









Also, just make sure to go with the correct mOhm shunts you want first if using a waterblock/backplate. The PCIe slot shunt and the shunt on the back of the PCB will need some improvisation if you have to double-shunt like I did (I was mostly just impatient lol).

This was my creative solution for them lmao:


----------



## gfunkernaught

I did a quick comparison of my bios, the Gaming X Trio, the MSI Suprim, and the EVGA 500W FTW3 bios. I noticed the suprim will pull less watts from the 8-pin than the trio, and pull the rest from PCIe slot (?). Am I reading that correctly?


----------



## jomama22

gfunkernaught said:


> I did a quick comparison of my bios, the Gaming X Trio, the MSI Suprim, and the EVGA 500W FTW3 bios. I noticed the suprim will pull less watts from the 8-pin than the trio, and pull the rest from PCIe slot (?). Am I reading that correctly?
> View attachment 2474739


Those are just max limits. How much it will pull and from where is dependent on the type of load and what those inputs are feeding. Also depends on what type of mechanisms they have in place for power balancing. The bios doesn't really control much of that. It's the reason you can throw any bios on the strix and never pull over 70w on the pcie slot no matter the actual power draw. Even with my shunted strix I don't pull over 70w on the slot.


----------



## WillP

@Canson I'm on my 3rd playthrough of Cyberpunk. My card (KFA2, Kingpin XOC bios, water cooled 280mm * 2) is stable at +147 clock and 450 mem at 4k max settings DLSS performance. Gets 55-60 fps.


----------



## Falkentyne

gfunkernaught said:


> I did a quick comparison of my bios, the Gaming X Trio, the MSI Suprim, and the EVGA 500W FTW3 bios. I noticed the suprim will pull less watts from the 8-pin than the trio, and pull the rest from PCIe slot (?). Am I reading that correctly?
> View attachment 2474739


Those limits aren't what you think they are.

TDP limit is based on adding the active values of all of the individual 8 pins power draw and PCIE Slot Power together. Unfortunately, almost every AIB (except Nvidia) has a lower default (or max) TDP compared to the "potential" values you get by adding the default 8 pin values + default slot power together. Usually drastically lower.

100% TDP seems to throttle at _either_ all 8 pins+Slot power being close to or equaling default TDP, or any one of the 8 pins equaling the default "SRC#" power limit related to that pin. It looks like every single card has a 150W default SRC power limit. I'm not sure which rail power limit has more priority on any individual card: the individual 8 pin or the SRC. It looks like the SRC has higher priority on the FE. There's also VRAM power limit you have to worry about, and some use the same value for default and max VRAM.

Max TDP throttle seems to throttle at all 8 pins+slot power at max, or at individual 8 pin value =max SRC power limit. The scaling gets weird on the 8 pins past 100%. For example on 3090 FE, 150W=8 pin throttle at 100% TDP, 156W=8 pin throttle at 105% TDP, 166W=8 pin throttle at 110%, 175W=8 pin throttle at 114% (same as max SRC).
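To make that two-tier behaviour concrete, here's a small illustrative sketch of how the checks might combine. The function is hypothetical (this is not how the vbios actually implements it), and the 350W/150W defaults are the 3090 FE figures mentioned above:

```python
# Hypothetical illustration of the throttle logic described above: the board
# throttles when either the summed rails hit TDP, or any single 8-pin hits
# its per-rail "SRC" limit.

def throttle_reason(pin_watts, slot_watts, tdp_limit=350, src_limit=150):
    """Return which limit trips first for a given snapshot of rail draws."""
    total = sum(pin_watts) + slot_watts
    if total >= tdp_limit:
        return "TDP"                        # all 8-pins + slot hit board power
    for i, w in enumerate(pin_watts):
        if w >= src_limit:
            return f"SRC (8-pin #{i + 1})"  # one rail hit its own limit
    return "none"

print(throttle_reason([149, 120], 60))  # 329W total, all rails under 150W
print(throttle_reason([151, 120], 60))  # one 8-pin over its SRC limit
print(throttle_reason([150, 150], 70))  # 370W total: TDP trips first
```

Which check really has priority on a given card is, as said above, still an open question; the sketch just shows the shape of the two limits.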

The MSI Gaming X Trio is the worst one you posted...absolutely highway robbery VRAM power limit...


----------



## GQNerd

Been swapping between KPE and Strix to decide which to keep. - both on 1000w BIOS


KPE hybrid kills it in PR scoring 300+ pts higher than anything I can hit on the watercooled Strix, but it’s less stable at gaming... 

Max clocks (only useful for benching):
KPE - 2250 (2295 chilled)
Strix - 2205 (2235 chilled)

Both can mem overclock to +1400-1500 for benches


Sustained gaming clocks: 
Cyberpunk 4k, RT on, DLSS perf.. without crashing:
KPE - 2160 , +400 mem
Strix - 2175, +800 mem


Tried to get the KPE to run CP2077 at 2200+, but just can’t seem to hold it longer than 10mins before taking a dump... Tried different BIOS, diff voltages, dip switches, Classy Tool, etc.. 

Open to suggestions, otherwise seems like I’m keeping the Strix, as benching is fun, but I need stability for daily use.


----------



## gfunkernaught

Falkentyne said:


> Those limits aren't what you think they are.
> 
> The MSI Gaming X Trio is the worst one you posted...absolutely highway robbery VRAM power limit...


I see. So to play it safe, I could flash the Suprim bios onto my Gaming X Trio with the stock cooler? Just to bump it up a bit, at the very least so it doesn't run power-limited in just about every game I play. Why would MSI be so conservative with this card/bios?


----------



## mattskiiau

gfunkernaught said:


> Why would MSI be so conservative with this card/BIOS?


To get you to upgrade to the next model. 
Aka, $$$$$$.


----------



## Falkentyne

Miguelios said:


> Been swapping between KPE and Strix to decide which to keep. - both on 1000w BIOS
> 
> 
> KPE hybrid kills it in PR scoring 300+ pts higher than anything I can hit on the watercooled Strix, but it’s less stable at gaming...
> 
> Max clocks (only useful for benching):
> KPE - 2250 (2295 chilled)
> Strix - 2205 (2235 chilled)
> 
> Both can mem overclock to +1400-1500 for benches
> 
> 
> Sustained gaming clocks:
> Cyberpunk 4k, RT on, DLSS perf.. without crashing:
> KPE - 2160 , +400 mem
> Strix - 2175, +800 mem
> 
> 
> Tried to get the KPE to run CP2077 at 2200+, but just can’t seem to hold it longer than 10mins before taking a dump... Tried different BIOS, diff voltages, dip switches, Classy Tool, etc..
> 
> Open to suggestions, otherwise seems like I’m keeping the Strix, as benching is fun, but I need stability for daily use.



The Strix is fine. Plus you can just shunt mod it and use the stock vbios, and as long as you don't short anything on the board, you'll get your higher power limits and still have full thermal protection (you'll still see that really STRANGE throttle with Superposition 4K Custom -> Extreme shaders, which you won't have with the KP 1000W bios). Or just keep the 1000W vbios on your Strix and enjoy life (just please remember: be careful. But it sounds like you already did your homework).

Are you returning the KP or selling it? They won't charge a restocking fee?


----------



## Jpmboy

Okay - I was able to mount the Optimus waterblock this evening... eh, I'll deal with the family fallout later. Anyway, the block is typical Optimus: massive, well made, great backplate, and really handsome. It comes with Kingpin cooling TIM and Fujipoly pads! It dropped the core temp by 20C easily (vs 100% on all fans and crunching numbers). Memory temps are down 10-15C (high 50s vs low 40s). Once I realized that all the GPUs I have here would produce $1200/month net doing some mining (ETH), that's what they are doing this month. The snip on the left is with the stock air cooler; the right is the Optimus block. The younglings here will give it a gaming workout on the weekend (as usual).
If you need a block for the FTW3, the Optimus block is a good one for sure.


----------



## GQNerd

Falkentyne said:


> The Strix is fine. Plus you can just shunt mod it and use the stock vbios and as long as you don't short anything on the board, you'll get your higher power limits and still have full thermal protection (you'll still have that really STRANGE Throttle you will see with Superposition 4k Custom -->Extreme shaders, that you won't have with the KP 1000W Bios). Or just keep the 1000W VBIOS on your strix and enjoy life (just please remember: be careful. But sounds like you already did your homework).
> 
> Are you returning the KP or selling it? They won't charge a restocking fee?


Hey Falk, the Strix is already shunted (15mohm stacked), thought about reversing it after the 1000w bios came out, but it’s running fine.. might remove them when I clean out the loop. 

Still trying to get the KP to run CP77 at 2200+, might be the card, might be the sh*tty recent nvidia drivers?!?!.. Hoping to figure it out/decide in the next week. Then selling the “losing card”


----------



## mattskiiau

Anyone know if Suprim BIOS retains RGB and fan speeds on the X trio?


----------



## jomama22

Miguelios said:


> Hey Falk, the Strix is already shunted (15mohm stacked), thought about reversing it after the 1000w bios came out, but it’s running fine.. might remove them when I clean out the loop.
> 
> Still trying to get the KP to run CP77 at 2200+, might be the card, might be the sh*tty recent nvidia drivers?!?!.. Hoping to figure it out/decide in the next week. Then selling the “losing card”


I mean, you're splitting hairs here lol. Just pick the one that performs best. You're not gonna wish the KP into performing better in games if it doesn't. Also, you could always buy an EVC2 and have some more fun with the Strix as well.


----------



## gfunkernaught

Has anyone had trouble getting DSR to work above native res with your 3090? I have a 4K TV that, according to Windows, has a native res of 3840x2160, but there's one more above that at 4096x2160, I guess for cinema. I checked the 4x box and hit apply, and I can see 8192x4320 now in games, but when I set it, I get a black screen. Can't create a custom res because Nvidia Control Panel says my display does not support that res, even with the GPU scaling override. I can change the desktop res to the DSR 8192, but games no worky.


----------



## mattskiiau

This BIOS stuff is so strange.

I tested the lowtemp X Trio BIOS vs the 500W EVGA BIOS using Port Royal, setting both tests at 2050MHz @ 1.0V.
For some reason, I get higher MVDDC out of the lowtemp BIOS vs the 500W one?

Max VRAM PL:
Lowtemp = 100W
EVGA = 130W


The MVDDC goes well beyond 100W as per my screenshot, ***?








*(LEFT Lowtemp, RIGHT EVGA 500w)*


----------



## Jordyn

gfunkernaught said:


> Has anyone had trouble getting DSR to work above native res with your 3090? I have a 4k tv that according to Windows has a native res of 3840x2160, but one more above that is 4096x2160, I guess for cinema. I checked off the 4x button and hit apply, and I can see 8192x4320 now in games, but when I set it, I get a black screen. Cant create a custom res because nvidia control panel says my display does not support that res. Even with the gpu scaling override. I can change the desktop res to the DSR 8192, but games no worky.


Yes, I had something similar with my 3090. This happened for me after I flashed my GPU bios; afterwards, whenever I tried to use a DSR resolution above my native res, the game would flicker as it tried to set the resolution, then give up and minimize to desktop. I tried everything, including reinstalling drivers and resetting the displays using CRU, without any success. Then, I'm not sure why or how, but after resetting my CPU bios and reapplying my overclock etc., things started working again as normal, and I can now use DSR again without issue. Try that and see how you go.


----------



## Falkentyne

cenumis said:


> This BIOS stuff is so strange.
> 
> I tested lowtemp X Trio BIOS vs 500w EVGA BIOS using port royal, setting both tests at 2050mhz @ 1.0v.
> For some reason, I get higher MVDDC out of the lowtemp BIOS vs 500w?
> 
> Max VRAM PL:
> Lowtemp = 100w
> EVGA = 130w
> 
> 
> The MVDDC goes well beyond 100w as per my screenshot, ***?
> View attachment 2474747
> 
> *(LEFT Lowtemp, RIGHT EVGA 500w)*


No one really knows exactly how the power balancing works on these bioses. Nor do people know how the power delivery changes if you flash another vbios. You can see that 8 pin #1 is highest on the first screenshot and 8 pin #2 is highest on the second. You would need a clamp meter to read the amps directly on the PSU line, to determine whether that's accurate or not (Watts=volts * Amps).

Were you throttling in the first screenshot? If so, how much? You didn't show the throttle bar in GPU-Z.
Were you throttling in the second screenshot? And if so, how much throttle?

I do know that some rails can exceed the values in the bios, but I don't know under what conditions. For example, the GPU Chip Power max limit is 270W on the 3090 FE in the bios, but I've had it as high as 285W after a Timespy run, where 8 pin #1 got to 174W (the most I've seen it at in the past was 290.2W, which caused a nice big throttle; that was a while back). I know the 8 pin can skyrocket past its max "SRC" value based on TDP%, if you let the GPU-Z power test loop for a while, even though you get a big throttle bar from it.
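For the clamp-meter check mentioned above, the arithmetic is just P = V x I. A minimal sketch with made-up example readings (none of these numbers are real measurements):

```python
# P = V * I: compare a clamp-meter reading against the software-reported
# rail power. All numbers below are hypothetical example readings.

def rail_watts(volts, amps):
    return volts * amps

clamp_amps = 14.2      # hypothetical clamp-meter reading on one 8-pin line
rail_volts = 12.1      # hypothetical voltage measured at the connector
reported = 174.0       # hypothetical value GPU-Z shows for that rail

measured = rail_watts(rail_volts, clamp_amps)
print(f"measured {measured:.1f} W vs reported {reported:.1f} W "
      f"(delta {abs(measured - reported):.1f} W)")
```

A large, consistent delta would suggest the bios's rail reporting (not the actual draw) changed after a flash.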


----------



## gfunkernaught

Jordyn said:


> Yes I had something similar with my 3090. This happened for me after I flashed my GPU bios after which whenever I tried to use a DSR resolution above my native res the game flicker as it tried to set the resolution then give up and minimize to desktop. I tried everything including reinstalling drivers and resetting the displays using CRU without any success. Then i'm not sure why or how but after resetting my CPU bios and reapplying my overclock etc things started working again as normal and I can now use DSR again without issue. Try that and see how you go.


Secure Boot/EFI related, maybe? That's weird. Someone else elsewhere suggested disabling hardware-accelerated GPU scheduling; that didn't help. I really want to play the original Crysis in 8K, I just can't do it.


----------



## mattskiiau

Falkentyne said:


> Were you throttling in the first screenshot? If so, how much? You didn't show the throttle bar in GPU-Z.
> Were you throttling in the second screenshot? And if so, how much throttle?


I was throttling at 380W board power in the first screenshot. In the second screenshot, there was no throttle.

Either way, my concern was the lower quality VRMs on the X Trio, but for some reason they run at lower wattage on the 500W bios.
I think I can put my mind at rest, for now lol.


----------



## motivman

Anyone has a strix to sell to me please?


----------



## Thanh Nguyen

motivman said:


> Anyone has a strix to sell to me please?


Ebay has so many strix.


----------



## GQNerd

motivman said:


> Anyone has a strix to sell to me please?


Give me till next week to decide which one I’m keeping. Might have a Strix w/phanteks block for sale


----------



## motivman

Miguelios said:


> Give me till next week to decide which one I’m keeping. Might have a Strix w/phanteks block for sale


That is exactly the combo I am looking for. Strix with phanteks block. Plan to shunt with 5mohm. What is your max score in PR with the strix? What is your gpu temp, water temp delta with the phanteks block?


----------



## motivman

Thanh Nguyen said:


> Ebay has so many strix.


Yeah, for $2500 before tax? No thank you; I'm not that rich.


----------



## GQNerd

motivman said:


> That is exactly the combo I am looking for. Strix with phanteks block. Plan to shunt with 5mohm. What is your max score in PR with the strix? What is your gpu temp, water temp delta with the phanteks block?


Already shunted (15 mΩ), but I'd remove them before sale.

15,459 on the stock BIOS: "I scored 15 459 in Port Royal" (Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) on www.3dmark.com

I've hit 15.5K on the 520W/1000W EVGA BIOSes, but I don't have those runs saved.

Custom loop with 240mm + 360mm rads, shared with the 10900K:
21°C water temp at idle.
GPU idle temp 26°C; full load never exceeded 48°C.

That's without chilling it or running it outdoors, lol.


----------



## Thanh Nguyen

Miguelios said:


> Already shunted(15mohm) but I'd remove them before sale.
> 
> 15,459 on stock bios: "I scored 15 459 in Port Royal" (Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) on www.3dmark.com
> 
> I've hit 15.5 on the 520/1000w evga bioses but don't have em saved.
> 
> Custom loop with 240 + 360 rads, shared with 10900k:
> 21c water temp at idle.
> GPU idle temp 26, full load never exceeded 48c
> 
> That's without chilling/running it outdoors lol


Water temp at full load?


----------



## motivman

Miguelios said:


> Already shunted(15mohm) but I'd remove them before sale.
> 
> 15,459 on stock bios: "I scored 15 459 in Port Royal" (Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) on www.3dmark.com
> 
> I've hit 15.5 on the 520/1000w evga bioses but don't have em saved.
> 
> Custom loop with 240 + 360 rads, shared with 10900k:
> 21c water temp at idle.
> GPU idle temp 26, full load never exceeded 48c
> 
> That's without chilling/running it outdoors lol


That's nice. What's your PR score with your everyday-stable overclock for CP2077? I can hit 15.6K with my reference card (max overclock, obviously not stable for everyday gaming), but with my everyday-stable overclock (+165/+800) I hit 14.8K.


----------



## gfunkernaught

cenumis said:


> I was throttling at board power 380w on the first screenshot. Second screenshot, there was no throttle.
> 
> Either way, My concerns were the lower quality VRM's on the X trio but for some reason they run on lower wattage on the 500w BIOS.
> I think I can put my mind at rest, for now lol.


What clocks/temps are you seeing on your Trio with the stock vs the 500W BIOS? Also, what block are you using, if water-cooled?


----------



## WayWayUp

Miguelios said:


> Been swapping between KPE and Strix to decide which to keep. - both on 1000w BIOS
> 
> 
> KPE hybrid kills it in PR scoring 300+ pts higher than anything I can hit on the watercooled Strix, but it’s less stable at gaming...
> 
> Max clocks (only useful for benching):
> KPE - 2250 (2295 chilled)
> Strix - 2205 (2235 chilled)
> 
> Both can mem overclock to +1400-1500 for benches
> 
> 
> Sustained gaming clocks:
> Cyberpunk 4k, RT on, DLSS perf.. without crashing:
> KPE - 2160 , +400 mem
> Strix - 2175, +800 mem
> 
> 
> Tried to get the KPE to run CP2077 at 2200+, but just can’t seem to hold it longer than 10mins before taking a dump... Tried different BIOS, diff voltages, dip switches, Classy Tool, etc..
> 
> Open to suggestions, otherwise seems like I’m keeping the Strix, as benching is fun, but I need stability for daily use.


That's very confusing to me.

You can overclock the memory to +1400-1500 in benches, but in actual gaming you max out at +400-800? That's so inconsistent.
I can bench at +1000 memory, but I game at +975 all day.


----------



## gfunkernaught

Benchmarks and games are not the same. Expect different results.


----------



## WayWayUp

Yes... expect to go from +1400 in benches to +400 in games... yeah, okay.

Nobody else is dropping 1000 off their memory offset.


----------



## GQNerd

WayWayUp said:


> thats very confusing to me.
> 
> you can overclock memory to 1400-1500 in benches but in actual gaming you max at 400-800? thats so inconsistent
> I can bench at +1000 memory but i game at +975 all day


This only happens in Cyberpunk; I think it has to do with the memory error correction.

I can run other games at +1000-1200.


----------



## Canson

Miguelios said:


> This only happens in Cyber Punk, think it has to do with the memory correction.
> 
> I can run other Games +1000-1200


How do you know it's the memory? Have you tried, say, stock core with +1000 MHz memory and gotten a crash, or?


----------



## GQNerd

Canson said:


> How do you know its memory? have you like tried stock core and +1000mhz memory and got crash or?


That is my assumption, although I have not tried stock core with a high memory offset, since in my opinion core clock matters more for overall FPS.

My technique:
I start with the core overclock and dial that in, then I start going up on the memory in +200 increments.

I can run CP77 at +1000-1200 for about 5-10 minutes before it crashes, so then I dial it back. If it can run for 45 minutes to an hour, I consider that stable.
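That step-up-then-back-off routine amounts to a simple search; a minimal sketch, where is_stable is a hypothetical stand-in for "run CP77 at this memory offset for 45-60 minutes without a crash":

```python
def find_stable_offset(is_stable, start=0, step=200, backoff=100):
    """Raise the memory offset in +step increments until a test run fails,
    then retreat in smaller backoff steps until a run survives again.
    is_stable is a hypothetical callback: it runs the game/benchmark at the
    given offset (MHz) and returns True if no crash occurred."""
    offset = start
    while is_stable(offset + step):
        offset += step
    # The last +step increment failed; back off from it until stable.
    candidate = offset + step - backoff
    while candidate > offset and not is_stable(candidate):
        candidate -= backoff
    return max(offset, candidate)

# Toy stand-in: pretend anything up to +1000 passes the long gaming test.
print(find_stable_offset(lambda mhz: mhz <= 1000))
```

With a +200 step and a +100 back-off, a card whose real limit is somewhere between +1000 and +1200 lands on +1000 or +1100, matching the dial-back described above.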


----------



## Canson

Miguelios said:


> That is my assumption.. Although I have not tried stock core and a high mem offset, since in my opinion Core is more important to overall FPS.
> 
> my technique:
> I start with core overclock, and dial that in.. Then I start going up on the mem, in +200 increments.
> 
> I can run CP77 at +1000-1200 for about 5-10 mins before it crashes.. so then I dial it back. If I can run 45mins-1hr, I consider that stable.


For me, +1000 MHz memory works, but at +1100 or more I crash.
The game seems more core-sensitive for me.

Right now I'm running 0.850 V @ 1905 MHz and haven't had a crash yet in my short test. If it crashes, I'll set the memory to +0 and retry the same core clock to see whether the memory or the core caused it.


----------



## WayWayUp

I had issues with CP as well, but they seemingly resolved after revalidating the game files. I thought it was my overclocks and kept needing to mess with them, but it seems that every couple of days all I need to do is revalidate the files and the issues go away.


----------



## Canson

WayWayUp said:


> i had issue with CP as well but it seemingly resolved after revalidating files. thought it was my overclocks and kept needing to mess with them but it seems every couple days all i need to do it revalidate files and it resolves the issues


And what kind of issues did you have?


----------



## WayWayUp

Crashing. Lots of crashing, and constantly thinking it was due to my overclocks. I would reduce the OC, see it running fine, and falsely conclude that my OC was too high. Now I just revalidate automatically every 3 days, haha.
CP has its issues, unfortunately.
It had me tweaking out; I started checking memory and thinking maybe my RAM OC was unstable, but stability testing said otherwise. I run 4533 CL15, so it was easy to pin it on the memory.


----------



## changboy

Many of those game crashes are also related to system RAM overclocks. My G.Skill 3200 MHz kit now runs at 3200 MHz, since I had some crashes when overclocking it to just 3400 MHz; the same can be true of too high an OC on the mesh.
Some games have no problem, and other games that use more than one stick, like Cold War, will crash.


----------



## WayWayUp

changboy said:


> Also many of those game crash is related to memory RAM overclock, my gskill 3200mhz run at 3200mhz since have some crash when just oc at 3400mhz, this also can be true with too high oc on the mesh.
> Some game no problem and other game will crash if it use more then 1 stick like Cold War.


That's true. I've found it's easy to pin it on the GPU when it's often the RAM.


----------



## ttnuagmada

WayWayUp said:


> crashing. lots of crashing and constantly thinking it was due to my overclocks. then i would reduce the OC and see it running fine and falsely conclude that my OC was too high. now every 3 days i just revalidate automatically haha.
> CP has it's issues unfortunately
> it had me tweaking out i started checking memory and thinking maybe my mem oc was unstable but stability testing said otherwise. i run 4533 cl15 so it was easy to pin it on the memory


One way to get some insight is to check Event Viewer and see whether any events with the source "Display" are logged whenever you crash.


----------



## gfunkernaught

Is there a different version of nvflash that works with 3090s? I ran nvflash64 --list in an elevated prompt and no adapters were found.

Wait, never mind; I think I need a newer version that supports Ampere.


----------



## gfunkernaught

Thinking of flashing my Trio with the Suprim BIOS to test it out. It would jump the power limit by 60W. Think the stock cooler can handle it? I'm not overclocking it, I just want it to breathe a little. My load temps right now are 67°C, so maybe I have some headroom?


----------



## ysalienz

I have an ASUS TUF (non-OC). Under load it sits at 67-68°C @ 375W.
I would like to repaste it with Kryonaut Extreme.

Do I have to order new thermal pads if I open it?
Does anyone know which ones, i.e. size and thickness?


----------



## changboy

WayWayUp, I believe you put shunts on all 6 resistors on the right and 1 on the back for PCIe slot power, right?
The SRC one has a lot of small components around it, lol.


----------



## GQNerd

WayWayUp said:


> thats true. I've found it's easy to pin it on gpu when it's often ram


Haha, I thought it was my RAM at first as well; dialed it back from 4600 to 4400.
Even dropped my CPU from 5.3 to 5.2 to be sure.
Ultimately, MY CRASHES were due to my GPU OC.

I've revalidated the files, but not every few days, lol.
I'll give this a try over the next week or so and report back.


----------



## gfunkernaught

Some interesting results. I flashed my Trio with the Suprim BIOS and went from 380W to 450W: a 100 MHz boost without any interaction. I ran Time Spy Extreme and noticed the PL flag was still pinned (according to AB's OSD) even during GT1. If the PL is pinned, doesn't that mean I won't be able to raise the clocks, even with the 500W BIOS?


----------



## gfunkernaught

And some Port Royal results: left is 380W, right is 450W.


----------



## Sheyster

Falkentyne said:


> This has no relevance for gaming. Higher VRAM PL=good.
> Such a thing would matter for mining, however.


What's the story with the VRAM PL using the KPE 520W BIOS? I'm not using it, just curious if it's similar to the 500W FTW3 BIOS in that regard.


----------



## Falkentyne

gfunkernaught said:


> Some interesting results. I flashed my Trio with Suprim bios, went from 380w to 450w. 100mhz boost without interaction. Ran Timespy Extreme and noticed the PL was still pinned (according to AB's OSD) even during GT1. If the PL is pinned, doesn't that mean I won't be able to raise the clocks? Even with the 500w bios?


Time Spy Extreme is going to power limit you even at 600W. I believe someone earlier said or tested that you need at least 650W to not get limited in Time Spy Extreme (and that's also assuming you aren't tripping another internal power rail that isn't shown in GPU-Z).


----------



## gfunkernaught

Falkentyne said:


> Time Spy Extreme is going to power limit you even at 600W. I believe someone said or tested earlier that you need at least 650W to not get limited in Timespy Extreme (and that's also assuming you aren't tripping another internal power rail that isn't shown on GPU-Z).


Graphics Test 1 never slammed the power limit on my previous card, a 2080 Ti; Graphics Test 2 is what I think you're referring to. It's odd that this 3090, no matter what game I play (as long as it was released within the last five years), will pin the PL, or at least keep the watts close to the limit. I've gotta do more testing.

I will say, though, that for a 1.5-generation leap, double the performance at the same power is pretty impressive. I haven't used a 384-bit GPU since the GTX 580. I do notice some stuttering here and there when my fps drops below 55. Could be driver-related.


----------



## des2k...

gfunkernaught said:


> Graphics Test 1 never slammed the power limit on my previous card, 2080 ti. Graphics Test 2 is what I think you're referring to. It's odd that this 3090 no matter what game I play, as long as it was released within the last five years, will pin the PL, or at least keep the watts close to the limit. I gotta do more testing.
> 
> I will say though, for just a 1.5 generational leap, double the performance using the same amount of power is pretty impressive. I haven't used a 384-bit gpu since the GTX 580. I do notice some stuttering here and there when my fps drops below 55fps. Could be driver related.


Well, TSE GT2, Quake 2 RTX, and Path of Exile all push 600W+. On the 1000W vBIOS there's no clock throttle.


----------



## mirkendargen

gfunkernaught said:


> Graphics Test 1 never slammed the power limit on my previous card, 2080 ti. Graphics Test 2 is what I think you're referring to. It's odd that this 3090 no matter what game I play, as long as it was released within the last five years, will pin the PL, or at least keep the watts close to the limit. I gotta do more testing.
> 
> I will say though, for just a 1.5 generational leap, double the performance using the same amount of power is pretty impressive. I haven't used a 384-bit gpu since the GTX 580. I do notice some stuttering here and there when my fps drops below 55fps. Could be driver related.


You must not play Assassin's Creed games


----------



## gfunkernaught

des2k... said:


> Well TE GT2, RTX Quake, Path of exile all push 600W+. On the 1000w vbios there's no clock throttle.





mirkendargen said:


> You must not play Assassin's Creed games


Well, I forgot to mention that with my 2080 Ti, I clocked it so the power limit rarely got touched. I played Control maxed out at 4K DLSS and never saw the power limit on the KFA 380W BIOS. I played Witcher 3 at 4K with no PL indicator, all clocked at [email protected]. I also played Cyberpunk with [email protected] and never got the PL warning.

How do you figure Quake 2 RTX, for example, pushes 600W+? That's another game I never saw push more than 380W, even before dynamic resolution was added to it. GT2 from Time Spy Extreme looks like it was designed to push watts.


----------



## mirkendargen

gfunkernaught said:


> Well I forget to mention that with my 2080 Ti, I clocked it so the power limit didn't get touched often. I've played Control maxed out at 4k DLSS and never saw the power limit on the KFA 380w BIOS. I've played Witcher 3 at 4k, no PL indicator, all clocked at [email protected] I also played cyberpunk with [email protected] and never got the PL warning.
> 
> How do you figure Quake 2 RTX for example pushes +600w? That's another game I never saw push more than 380w. Even before dynamic resolution was added to the game. GT2 from timespy extreme looks like it was designed to push watts.


Yeah, I meant that AC games, starting with at least Origins, are notorious for low power usage even under 100% GPU load.

I haven't tried it, but it also seems odd to me that Quake 2 RTX would use a ton of power. It's pure ray tracing, just like the 3DMark DXR test, and that test never touches the power limit.


----------



## changboy

Jmpboy, do you have a Port Royal score with your waterblock?
You installed that block very fast, hehehe. I want to do mine today but don't want to start that big job this morning; we'll see tomorrow, lol. It's a big job: drain the coolant, install the block, and refill the loop. Most of the time when I change my coolant I also change all my tubing, an 8-hour job.


----------



## des2k...

"How do you figure Quake 2 RTX for example pushes +600w?"

It's pure path tracing; at stock it will usually sit at base clock, ~1700 MHz, while using your card's max TDP.

With an undervolt, 1900 on the core is over 450W.

At 2130 it's about 600-640W on my Zotac XOC vBIOS. Another user here with a Strix runs higher on the core and uses 700W+.

Try the demo! You'll see the insane power usage / downclocking.


----------



## mirkendargen

des2k... said:


> "How do you figure Quake 2 RTX for example pushes +600w?"
> 
> It's pure path tracing, usually will sit at base clock ~1700mhz using max tdp of your card at stock.
> 
> with undervolt 1900core is over 450w
> 
> at 2130 it's about 600-640w on my zotac xoc vbios. Another user here with strix is higher on the core and uses 700w+.
> 
> Try the demo ! you'll see the insane power usage / downclock.


Maybe the difference between it and the 3DMark DXR test is that the 3DMark test only uses the RT cores, while Quake 2 also does more ray tracing on the shaders, the way Pascal cards supported.


----------



## gfunkernaught

des2k... said:


> "How do you figure Quake 2 RTX for example pushes +600w?"
> 
> It's pure path tracing, usually will sit at base clock ~1700mhz using max tdp of your card at stock.
> 
> with undervolt 1900core is over 450w
> 
> at 2130 it's about 600-640w on my zotac xoc vbios. Another user here with strix is higher on the core and uses 700w+.
> 
> Try the demo ! you'll see the insane power usage / downclock.


Right, I know how heavy it is on the card; I own the game and have played through it many times. Once you started throwing clock numbers out, I understood what you meant.

I just played Quake 2 RTX with the 450W Suprim BIOS, power slider at 107%, clocks averaging mid-to-low 1800s, and my PC just crashed. No event logs, nothing about the NVIDIA driver stopping and recovering, nada. The power supply? A Corsair RM750x, 62.5 A on the +12V rail, and 450W / 12V = 37.5 A... Could it be that my PSU has only two PCIe cable connections and one of them is supplying two 8-pin connectors?
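A quick sanity check on that rail math, as a sketch with assumed numbers: the 600 W transient magnitude and the 200 W CPU-plus-board draw below are hypothetical, and the RM750x is a single-rail unit, so the CPU, the PCIe slot, and the GPU cables all share the same 62.5 A.

```python
def rail_current(watts: float, volts: float = 12.0) -> float:
    """Steady-state current drawn from the +12 V rail."""
    return watts / volts

RAIL_LIMIT_A = 62.5            # RM750x +12 V rating (from the post)

gpu_sustained = rail_current(450)   # the BIOS power limit
gpu_spike     = rail_current(600)   # assumed millisecond transient overshoot
cpu_and_rest  = rail_current(200)   # hypothetical CPU + board + fans draw

headroom = RAIL_LIMIT_A - (gpu_spike + cpu_and_rest)
print(f"{gpu_sustained=:.1f} A, {headroom=:.1f} A")
```

Sustained 450 W is only 37.5 A, well inside the rating, but a transient spike stacked on CPU load can momentarily exceed the rail, which would trip the PSU's protection and power the machine off with nothing in the event log. Daisy-chaining two 8-pins from one cable concentrates that load further.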


----------



## RosaPanteren

Belcebuu said:


> What do you guys think about the 3090 GameRock OC? I haven't seen that many people posting about it.
> 
> It seems good, doesn't it? 3x8-pin, good VRMs; the only problem is the horrible RGB? But I can live with it in this broken-stock time.


I got one.

It's the non-OC version, but I flashed the OC BIOS and it works well. https://www.3dmark.com/spy/17569245

Most of the scores above mine on the 3DMark list for 5800X + 3090 were run at an average temp at least 16-20°C lower. Next week there'll be some cold nights, so it will be interesting to see what 20 degrees less will do...

I was going for as little RGB as possible, but that went south with the limited availability of cards. In the end I'm happy with this one: the RGB can just be switched off, and the card's horrible front faces downwards, so I'm not bothered by it.

What made me go for it were the 22 power phases (one more for the memory compared to the FE), 3x 8-pin PCIe, the same MLCC/SP-cap configuration as the FE PCB, and dual BIOS, which is nice to have.

In games it runs stable at 2055-2070 for hours, and I've had it boost to 2150 while benching. I haven't had much time to tweak the OC because the last couple of weeks have gone to testing the PBO curve optimizer, so I'm looking forward to doing some more tinkering.

I haven't had a Palit GPU before, so I was a bit sceptical, but so far it seems solid.


----------



## des2k...

gfunkernaught said:


> Right I know how heavy it is on the card, I own the game and played through it many times. Once you started throwing clock numbers out then I understood what you meant.
> 
> I just played Quake 2 RTX with the 450w Suprim bios, power slider was at 107%, clocks on average mid to low 1800s, and my PC just crashed. No event logs, nothing about the nvidia driver stopping and recovering, nada. Power Supply? Corsair RM750x, 62.5a on the +12v rail. 450w/12v=37.5a...Could it be the fact that my PSU only has two PCIe connections and one of them is supplying power to two 8-pin connectors?


Yeah, separate PCIe cables are better; at minimum, put the daisy-chain cable on the last connectors (2, 3).
The thing with Ampere is that even if you limit it to 450W, you can still get spikes to 500W.


----------



## gfunkernaught

Not sure what just happened, but after flashing the latest Trio "low temp" BIOS from TechPowerUp and setting the PL to 102% (the max) in AB, I ran Quake 2 RTX, played for about 5 minutes, then crashed to desktop. Event Viewer showed that the driver stopped and recovered successfully. I know an unstable OC can cause that, but I didn't even manually OC; I just raised the PL from 100 to 102%. I've flashed back to the BIOS that originally came on the card and am going from there.


----------



## gfunkernaught

des2k... said:


> yeah separate pcie cables are better, at least you should have the daisy chain cable on the last pins (2,3)
> the thing with ampere even if you limit to 450w, you can still have spikes to 500w


I figured. Looking at the card from the top (I/O shield on the left, power pins on the right), are 2 and 3 the two rightmost connectors?


----------



## jomama22

Falkentyne said:


> Time Spy Extreme is going to power limit you even at 600W. I believe someone said or tested earlier that you need at least 650W to not get limited in Timespy Extreme (and that's also assuming you aren't tripping another internal power rail that isn't shown on GPU-Z).


It depends on your chip quality how much power is needed to avoid being limited in whatever bench/game it may be (at the stock voltage range). It also depends on how well the chip clocks. If you have a lossy chip that dumps a lot of the power into heat, you will need more power over any given voltage/frequency range. If you have a card that clocks really well at low voltages, you will again need more power to fully max out the card (within the stock voltage range). These two are independent of each other: you could have a great clocker that's also a lossy chip, requiring lots of power, or a garbage clocker that isn't lossy at all and uses much less power at the top of the stock voltage range.

My Strix would never pull over 630W fully maxed with shunts during any part of TSE (the 630W happens in GT2, near the beginning).

That dweeb Framechaser on YouTube has his card pulling 730W or so in TSE with shunts, and low clocks to show for it.


----------



## des2k...

"That dweeb framechaser " 
lol,
Well it's just luck, I think his chip is very lossy or he needs high voltage, in HZ Dawn 4k he get 95fps at 2160 core. I'm at 2130 +745mem and I get 96fps for that benchmark(490w peaks)


----------



## des2k...

gfunkernaught said:


> I figured. The last two looking at the card from the top (I/O shield on the left. power pins on the right), 2,3 are the right most pins?


Yeah, pin 1 is the one closest to the GPU core.


----------



## gfunkernaught

Quake 2 RTX in its current state can't be a good stability test. Since the most recent update it has sometimes been crashing when loading a new level. I wanted to quickload from a save I made, and it crashed to desktop. Event Viewer showed "Display driver nvlddmkm stopped responding and has successfully recovered." Could that just be something wrong with the way the game uses the driver, or is it my PSU?


----------



## Edge0fsanity

Thoughts on RMAing this GPU?

A fan died on my FTW3; I'd noticed it was running hot lately. I have an Optimus block ordered that should ship soon, so I won't need a working heatsink much longer. My fear with an RMA is getting a potato in return, and my luck in the GPU silicon lottery is very bad. The card I have will run most games at 2085 MHz / 1.025 V with the XC3 BIOS, and the memory does +1000 in games as well. Only CP2077 drops my core down to 2055 MHz. Not a great card, but not bad either.

So do I roll again for a better chip from an RMA? It will piss me off to no end if I get another bottom-10% chip from EVGA.


----------



## mirkendargen

gfunkernaught said:


> Quake 2 RTX in its current state can't be a good test for stability. Since the most recent update, it has been crashing when loading a new level, sometimes. I wanted to do a quickload from a save I made, and it crashed to Desktop. Even Viewer showed "Display driver nvlddmkm stopped responding and has successfully recovered." That could just be something wrong with the way game uses/handles the driver right? Or is it my PSU?


Display driver crashes have historically been a symptom of overclock instability. With Ampere I've mostly seen individual applications crash when the core is unstable, and the entire computer reboot when the VRAM is unstable. I wouldn't rule out display driver crashes, though.


----------



## gfunkernaught

mirkendargen said:


> Display driver crashes have historically been a symptom of overclock instability. With Ampere I've mostly just seen individual applications crash if the core is unstable and the entire computer reboot if the VRAM is unstable. I wouldn't count out display driver crashes though.


Thing is, I didn't OC it; I just raised the PL from 100 to 102%, which is the highest my card supports on the stock BIOS.


----------



## Christopher2178

bmgjet said:


> That would give you less power limit since its a 3 plug card.
> FE Bios is the 2nd highest power limited bios.
> 
> Highest being 1KW XOC bios which will get you about 696W on a 2 plug (12pin Nvidia plug) card. (I wouldnt really call it safe for the stock cooler tho)
> Using FE bios on a 3 plug card gets you 600W.
> You need to use the patched NVFlash, Its been tested using FE bios on AIBs cards. But no ones wanted to risk flashing a FE card with patched version since you dont have 2nd bios to fall back into if its a bad flash and cant even hardware flash it back.


I would like to try the FE BIOS on my water-cooled FTW3. I'm currently using the XC3 2-pin BIOS, since without shunting it gives me the best results on this power-crippled card.

Does anyone know where we can find a newer version of NVFlash patched for the board ID mismatch? I have the old 5.590 Turing-patched version, but that won't work for us anymore, correct?


----------



## Gbone

Are there any AIO water coolers for the 3090 on the horizon? I'd like to be able to close my case's side panel one day without my Gaming X Trio overheating...


----------



## gfunkernaught

Christopher2178 said:


> Does anyone know where can we find a newer version of NVFlash that is patched for Board ID Mismatch? I have the old 5.590 turing patched version but that won't work for us anymore correct?





https://www.techpowerup.com/forums/attachments/nvflash-5-660-0-rar.170249/


----------



## des2k...

Christopher2178 said:


> I would like to try the FE bios on FTW3 watercooled card currently using the XC3 2 pin bios as without shunting it gives me the best results for this power crippled card.
> 
> Does anyone know where can we find a newer version of NVFlash that is patched for Board ID Mismatch? I have the old 5.590 turing patched version but that won't work for us anymore correct?


What exactly is the purpose of flashing the 2-pin 400W BIOS on the FTW3? You'll get to 550W, and EVGA already has a 550W vBIOS.


----------



## ratzofftoya

Hi folks! Hopefully quick question. Money no object, yada yada, this is a 3090 thread after all:

I have a custom loop. Do I want the Founders Edition, because then I can get the awesome new EK waterblock and I care about aesthetics? Or do I want the Strix for its performance gains (but comparatively lamer waterblocks)? In other words, is the performance gain significant enough to warrant the compromised looks?


----------



## defiledge

Is anyone still hitting the power limit in Port Royal with 600W?


----------



## Thanh Nguyen

des2k... said:


> what exactly is the purpose of flashing 2pin 400w bios on the FTW3 ? you'll get to 550w and evga already has 550w vbios


Where is the 550W BIOS?


----------



## pat182

Almost broke 15K on air (Strix).


----------



## arvinz

pat182 said:


> Almost broke 15k on air  strix


Nice, can you provide some details on what you did to achieve that?


----------



## pat182

arvinz said:


> Nice, can you provide some details on what you did to achieve that?


520W BIOS, PC outside at -10°C. Weirdly, my PC hard-crashed and rebooted when OCing the memory to +1400. I swapped my PSU for a 1300W one because I thought my 850W was causing it, but it seems the memory alone makes the PC crash at that speed. +1200 is fine; at +1300 I got some black squares on the screen, freaked out, and reverted to the stock BIOS. It seems OK so far; I won't try that again. I'll call it 15K at this point.


----------



## Christopher2178

des2k... said:


> what exactly is the purpose of flashing 2pin 400w bios on the FTW3 ? you'll get to 550w and evga already has 550w vbios


EVGA has a 500W XOC FTW3 BIOS that basically doesn't work for many of us due to a well-documented power-balancing issue with PCIe slot power: a lot of people max out at 420-460W, with higher-than-desired slot power draws of 75-90W. So flashing a 2-pin BIOS is a way to get a lower slot draw while still maintaining a higher total board power, thanks to the third pin going unmonitored. Not an ideal solution, but so far a practical one until shunting, and even then you're still going to run higher-than-desired PCIe slot power. Thanks, EVGA!




----------



## jomama22

pat182 said:


> 520watt bios ,pc outside at -10c weirdly my pc reboot when ocing mem at +1400, hardcrash ( changed my psu for a 1300watt cause i tought my 850watt was causing this, but seems that the mem just make the pc crash at this speed, +1200 is fine , +1300 got some black squares on the screen , freaked out reverted to stock bios, seems ok so far, wont try again, ill say its 15k alright at this point


Lol, the "on air" part is kinda unnecessary when you put it in -10c weather.

Also, nothing is going wrong if you see black squares with a bad overclock. That bios isn't going to damage your card.


----------



## Loucypher

Hi everybody,

I built a new system a few days ago, including an MSI RTX 3090 Suprim X. Other specs: an ASUS ROG Strix Z490 Gaming-E mobo, an AIO-water-cooled i7-10700K, and 32 GB of G.Skill Trident Z DDR4-3200 CL14 RAM.
The PSU is a Corsair HM1200i Platinum. All three 8-pins are on separate lines, no cable splitting. Everything runs stock except the memory, which is on the XMP1 profile.
My problem is that whatever I fire at the GPU (Cyberpunk 2077 and CoD BOCW, both maxed out in UWQHD, or the Superposition benchmark), the GPU caps at 360 watts according to HWMonitor. That's nowhere near the advertised 420 (450) watts. The result is that I'm 1500-2000 points behind people with similar PC specs in the aforementioned benchmark. The CPU hits 5100 MHz in the 60°C range, while the GPU tops out in the low 70s temperature-wise.
The Silent/Gaming vBIOS modes change nothing in that regard. I checked the vBIOS versions and got two numbers I can find nothing about from Dr. Google.
The versions are:
Silent: 94.2.42.0.f4 and Gaming: 94.2.42.0.f5
I did a second clean Win10 Pro install, because with the first install I had installed MSI Dragon Center and had the feeling that messed things up even more... but still no change.
Anyone have an idea, or similar experiences with the 3090 Suprim X? Could this be vBIOS-related? I don't think the Corsair is failing to provide enough power to run the GPU properly.

Every input is highly appreciated!

Regards

Lou


----------



## Canson

Loucypher said:


> Hi everybody,
> 
> I built a new system a few days ago including an MSI RTX 3090 Suprim X. Other specs: an ASUS ROG Strix Z490 Gaming-E mobo, an i7-10700K AIO water-cooled, and 32 GB G.Skill Trident Z DDR4-3200 CL14 RAM.
> The PSU is a Corsair HM1200i Platinum, with all three 8-pins on separate lines and no cable splitting. Everything runs stock except the memory, which is on the XMP1 profile.
> My problem is that whatever I throw at the GPU (Cyberpunk 2077 and CoD BOCW, both maxed out in UWQHD, plus the Superposition benchmark), HWMonitor shows the GPU capping at 360 W. That's nowhere near the advertised 420 (450) W. As a result I'm 1,500-2,000 points behind people with similar PC specs in the aforementioned benchmark. The CPU hits 5100 MHz in the 60°C range, while the GPU tops out in the low 70s temperature-wise.
> The Silent/Gaming vBIOS modes change nothing in that regard. I checked the vBIOS versions and got two numbers that Dr. Google turns up nothing about.
> The versions are:
> Silent: 94.2.42.0.f4 and Gaming: 94.2.42.0.f5
> I did a second clean Win10 Pro install, because I had installed MSI Dragon Center on the first one and had the feeling it messed things up even more... but still no change.
> Does anyone have an idea, or similar experiences with the 3090 Suprim X? Could this be vBIOS-related? I don't think the Corsair is failing to provide enough power to run the GPU properly.
> 
> Every input is highly appreciated!
> 
> Regards
> 
> Lou


Download GPU-Z and MSI Afterburner, set the power limit and temp limit sliders in Afterburner to max, then open the GPU-Z sensors tab and show us the max values.
Run the Superposition 4K Optimized benchmark, take a screenshot, and post it here so we can compare against my Suprim X; I have the same card as you.

Make sure to disable G-Sync and V-Sync in the NVIDIA settings, and close background apps.

And please check your PM; I have a question about fan behavior.


----------



## Loucypher

Canson said:


> Download GPU-Z and MSI Afterburner, set the power limit and temp limit sliders in Afterburner to max, then open the GPU-Z sensors tab and show us the max values.
> Run the Superposition 4K Optimized benchmark, take a screenshot, and post it here.


Will do so when I'm home again.
Thank you

Lou


----------



## mattskiiau

gfunkernaught said:


> What clocks/temps are you seeing on your Trip with stock vs 500w bios? Also what block are you using if water-cooled?


I tested with the core locked at 2050 MHz @ 1.0 V on both firmwares.
The 380 W BIOS throttles down to 1950 MHz due to the power limit; temps don't go above 55°C.
The 500 W BIOS sits rock solid at 2050 MHz with no limits hit. It draws a lot more power but still stays under 60°C.


----------



## robertr1

bmgjet said:


> That would give you a lower power limit, since it's a 3-plug card.
> The FE BIOS has the second-highest power limit.
> 
> The highest is the 1 kW XOC BIOS, which will get you about 696 W on a 2-plug (12-pin NVIDIA plug) card (I wouldn't really call it safe for the stock cooler, though).
> Using the FE BIOS on a 3-plug card gets you 600 W.
> You need to use the patched NVFlash; the FE BIOS has been tested on AIB cards. But no one has wanted to risk flashing an FE card with the patched version, since you don't have a second BIOS to fall back on if the flash goes bad, and you can't even hardware-flash it back.


You're right, the 1000 W BIOS would make more sense, as you described.

Yeah, the lack of a dual BIOS could make for an expensive test.


----------



## Zogge

bmgjet said:


> ABE = Ampere BIOS Editor.
> The editing part is disabled in that build, but there were enough requests from people who just wanted to read the info out of their current BIOS to justify releasing an early version with viewing only.


Is there a version with editing enabled available somewhere?


----------



## Benni231990

Hello,

I flashed the 500 W BIOS to my Gainward Phantom GS and it works great.

But I have to set the fans to 47%: below 47%, one fan constantly starts and stops, while at 47% all fans run perfectly.

And I didn't lose any DP or HDMI ports.


----------



## pat182

jomama22 said:


> Lol, the "on air" part is kinda unnecessary when you put it in -10°C weather.
> 
> Also, nothing is going wrong if you see black squares with a bad overclock. That BIOS isn't going to damage your card.


Hahaha, yeah, it starts at -2°C but ramps up hard halfway through; the fin stack can't stay cool enough and it ends the run near 60°C, still averaging 2160 MHz.

I think on my last runs I was getting condensation from the temperature difference, because the score was tanking. Anyway, fun experience.


----------



## TechnoPeasant

Question for you guys about running high-power BIOSes on cards that run low power in stock form. I've got a Zotac 3090 on the way, and I'm weighing which BIOS to flash, since it seems even another 350 W BIOS manages voltage and temps much better. One thing I've noticed is that some people run a 500 W BIOS on this 350 W card. Are they doing this only in short bursts, i.e., only for benchmarking? I'd imagine the VRM and caps on the card need to be able to handle the added power, even if it's only a 390 W BIOS. Can anyone shed some light on the risks of running higher power?

Also, I see people recommending the KFA2 BIOS for Zotac cards. Will that BIOS cause any issues with the HDMI or DisplayPort outputs? Thanks!


----------



## des2k...

TechnoPeasant said:


> Question for you guys about running high-power BIOSes on cards that run low power in stock form. I've got a Zotac 3090 on the way, and I'm weighing which BIOS to flash, since it seems even another 350 W BIOS manages voltage and temps much better. One thing I've noticed is that some people run a 500 W BIOS on this 350 W card. Are they doing this only in short bursts, i.e., only for benchmarking? I'd imagine the VRM and caps on the card need to be able to handle the added power, even if it's only a 390 W BIOS. Can anyone shed some light on the risks of running higher power?
> 
> Also, I see people recommending the KFA2 BIOS for Zotac cards. Will that BIOS cause any issues with the HDMI or DisplayPort outputs? Thanks!


EVGA and KFA2 use the same display config as Zotac.
390 W on the air cooler is fine; it runs about 60°C, but I was at 70% fan speed (loud).

I have the 1000 W vBIOS with an EK block, so it tops out around 500 W for daily gaming and 620 W for stress testing. No need to push higher.

The absolute max on the VRM (you usually want to stay below this) is 3 × 50 = 150 A for memory and 13 × 50 = 650 A for core, so about 800 W board power max.
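The phases-times-stage-rating arithmetic above can be sketched in a few lines of Python. The 50 A-per-power-stage figure comes from the post; the rail voltages are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope VRM headroom from the phase counts quoted above:
# 13 core phases and 3 memory phases, each built on 50 A power stages.

def vrm_limit_amps(phases: int, stage_amps: float = 50.0) -> float:
    """Combined current limit of a multi-phase VRM rail."""
    return phases * stage_amps

core_amps = vrm_limit_amps(13)   # 650.0 A on the core rail
mem_amps = vrm_limit_amps(3)     # 150.0 A on the memory rail

# Rail voltages under load are assumptions for illustration
# (~1.0 V core, ~1.35 V GDDR6X), not measured values.
approx_watts = core_amps * 1.0 + mem_amps * 1.35

print(core_amps, mem_amps, round(approx_watts))
```

That lands in the same ballpark as the ~800 W ceiling quoted above once you leave some margin below the absolute stage limits.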


----------



## TechnoPeasant

Thanks for the info! Yeah, I'm only looking to OC on the stock cooler, so 390 W should be fine. I noticed people saying they got better temps on non-Zotac BIOSes, which is interesting; I wonder what the reason for that could be?


----------



## gfunkernaught

pat182 said:


> Hahaha, yeah, it starts at -2°C but ramps up hard halfway through; the fin stack can't stay cool enough and it ends the run near 60°C, still averaging 2160 MHz.
> 
> I think on my last runs I was getting condensation from the temperature difference, because the score was tanking. Anyway, fun experience.


I did one of those too: 2220 MHz on a 2080 Ti, water-cooled though. I think I scored close to 7k in Time Spy Extreme and 10.3k in Port Royal. That was fun, though I probably should have set up a remote session, because sitting in the garage at 20°F wasn't.


----------



## Canson

Loucypher said:


> Hi everybody,
> 
> I built a new system a few days ago including an MSI RTX 3090 Suprim X. Other specs: an ASUS ROG Strix Z490 Gaming-E mobo, an i7-10700K AIO water-cooled, and 32 GB G.Skill Trident Z DDR4-3200 CL14 RAM.
> The PSU is a Corsair HM1200i Platinum, with all three 8-pins on separate lines and no cable splitting. Everything runs stock except the memory, which is on the XMP1 profile.
> My problem is that whatever I throw at the GPU (Cyberpunk 2077 and CoD BOCW, both maxed out in UWQHD, plus the Superposition benchmark), HWMonitor shows the GPU capping at 360 W. That's nowhere near the advertised 420 (450) W. As a result I'm 1,500-2,000 points behind people with similar PC specs in the aforementioned benchmark. The CPU hits 5100 MHz in the 60°C range, while the GPU tops out in the low 70s temperature-wise.
> The Silent/Gaming vBIOS modes change nothing in that regard. I checked the vBIOS versions and got two numbers that Dr. Google turns up nothing about.
> The versions are:
> Silent: 94.2.42.0.f4 and Gaming: 94.2.42.0.f5
> I did a second clean Win10 Pro install, because I had installed MSI Dragon Center on the first one and had the feeling it messed things up even more... but still no change.
> Does anyone have an idea, or similar experiences with the 3090 Suprim X? Could this be vBIOS-related? I don't think the Corsair is failing to provide enough power to run the GPU properly.
> 
> Every input is highly appreciated!
> 
> Regards
> 
> Lou


I recommend you uninstall MSI Dragon Center; it installs a lot of junk apps.
On my PC it installed something called cFosSpeed, a LAN manager, and other garbage.

Those destroyed my internet connection and stability: my upload went from 500 Mbit/s to 75 Mbit/s, and I got a latency spike every 5 seconds when playing online games over the 5 GHz Wi-Fi network.
I can tell you it took me a while to figure this out.

Once you've installed the program, just configure your Mystic Light settings.
When you're done, uninstall cFosSpeed and the MSI SDK.


----------



## Loucypher

Canson said:


> I recommend you uninstall MSI Dragon Center; it installs a lot of junk apps.


Yeah, I noticed that. That's why I did a fresh install of Windows without touching Dragon Center again. I did an 8K Optimized Superposition run and attached some screenshots. In the HWMonitor shot you can clearly see the GPU capping out at ~360 W. The GPU-Z screenshot shows the power distributed more or less equally across the three 8-pin lines. The last image shows the sad bench result.
One more thing I noticed: only the first two CPU cores run up to 5100 MHz; the others only reach 5000. You see me really confused right now...


----------



## Canson

Loucypher said:


> Yeah, I noticed that. That's why I did a fresh install of Windows without touching Dragon Center again. I did an 8K Optimized Superposition run and attached some screenshots. In the HWMonitor shot you can clearly see the GPU capping out at ~360 W. The GPU-Z screenshot shows the power distributed more or less equally across the three 8-pin lines. The last image shows the sad bench result.
> One more thing I noticed: only the first two CPU cores run up to 5100 MHz; the others only reach 5000. You see me really confused right now...
> 
> 
> View attachment 2475030
> View attachment 2475031
> View attachment 2475032
> View attachment 2475033


Use HWiNFO64 instead; it's a better program and shows more info, with values that are on point. Looking at your GPU-Z sensor values, I don't see a problem.

Try the same settings as me and run the benchmark so we can compare results:

MSI Afterburner power limit and temp limit sliders to the max, fans at 100%, all background apps closed, G-Sync and V-Sync disabled, then run Superposition 4K Optimized.

Here are my results for you.


----------



## man from atlantis

Good news to Palit GameRock owners who are looking for liquid cooler options, Alphacool is preparing to launch a block for the 3080/3090 GameRock series.






'Gainward Phantom, Phantom GS, Palit GameRock, GameRock OC RTX 3090, 3080 waterblocks' (forum.alphacool.com)


----------



## martinhal

man from atlantis said:


> Good news to Palit GameRock owners who are looking for liquid cooler options, Alphacool is preparing to launch a block for the 3080/3090 GameRock series.
> 
> 
> 
> 
> 
> 
> 'Gainward Phantom, Phantom GS, Palit GameRock, GameRock OC RTX 3090, 3080 waterblocks' (forum.alphacool.com)


I wonder if they are going to use the same design language?


----------



## Biscottoman

man from atlantis said:


> Good news to Palit GameRock owners who are looking for liquid cooler options, Alphacool is preparing to launch a block for the 3080/3090 GameRock series.
> 
> 
> 
> 
> 
> 
> 'Gainward Phantom, Phantom GS, Palit GameRock, GameRock OC RTX 3090, 3080 waterblocks' (forum.alphacool.com)


Finally! That's a great-quality card with a solid PCB, very close to the Strix. Not having a waterblock available for cards like that, while plenty existed for reference/Zotac Trinity boards, was a real shame.


----------



## RosaPanteren

man from atlantis said:


> Good news to Palit GameRock owners who are looking for liquid cooler options, Alphacool is preparing to launch a block for the 3080/3090 GameRock series.
> 
> 
> 
> 
> 
> 
> 'Gainward Phantom, Phantom GS, Palit GameRock, GameRock OC RTX 3090, 3080 waterblocks' (forum.alphacool.com)


Yes!

For those interested in a waterblock for the GameRock, now or in the future, please leave a comment in the Alphacool thread showing your interest. The more the merrier!






'Gainward Phantom, Phantom GS, Palit GameRock, GameRock OC RTX 3090, 3080 waterblocks' (forum.alphacool.com)


----------



## motivman

Anyone having issues with random black screen on 3090? This is getting frustrating


----------



## gfunkernaught

Has anyone here with a Trio put a Bykski block on yet? If so, what are your temps? I'm debating Bykski, Alphacool, or EK, or maybe waiting for the Kryographics from Aqua Computer.


----------



## ttnuagmada

So what is this I'm reading about the FE BIOS giving 600 W on 3-plug cards?


----------



## Canson

motivman said:


> Anyone having issues with random black screen on 3090? This is getting frustrating


I think I had this problem with the new drivers; I even had the PC restart after a black screen.

I'm running 456.71 now and it seems to be fine. The only black screen I see is when I overclock my GPU memory too much.


----------



## motivman

Canson said:


> I think I had this problem with the new drivers; I even had the PC restart after a black screen.
> 
> I'm running 456.71 now and it seems to be fine. The only black screen I see is when I overclock my GPU memory too much.


I am currently on 461.33, so far no issues (knock on wood), 461.09 was so bad though.


----------



## Canson

motivman said:


> I am currently on 461.33, so far no issues (knock on wood), 461.09 was so bad though.


Which driver did you have black-screen issues with?


----------



## motivman

Canson said:


> Which driver did you have black-screen issues with?


461.09


----------



## Thanh Nguyen

ttnuagmada said:


> So what is this I'm reading about the FE BIOS giving 600 W on 3-plug cards?


Where did you read that?


----------



## Christopher2178

ttnuagmada said:


> So what is this I'm reading about the FE BIOS giving 600 W on 3-plug cards?











We need a newer NVFlash version with board-ID mismatch allowed in order to flash the FE BIOS. I haven't found one yet, but I'm sure someone has it.

Sent from my iPhone using Tapatalk


----------



## HyperMatrix

WayWayUp said:


> That's very confusing to me.
> 
> You can overclock memory to 1400-1500 in benches but in actual gaming you max out at 400-800? That's so inconsistent.
> I can bench at +1000 memory, but I game at +975 all day.





gfunkernaught said:


> Benchmarks and games are not the same. Expect different results.





WayWayUp said:


> Yes... expect to go from 1400 in a bench to 400 in games... yeah, okay.
> 
> Nobody else is dropping 1000 memory.





Miguelios said:


> This only happens in Cyberpunk; I think it has to do with the memory error correction.
> 
> I can run other games at +1000-1200.



There is no Cyberpunk-specific RAM crash issue, unless it's just that more heat = more instability = quicker crash. I run +1350 mem and was getting the odd crash at 2145 and 2130 MHz. I decided to pull it down to 2115 until I get a block for the card, and there were no crashes after hours. But I did stick a RAM block on the back of my card to keep memory temps down, because several of the times I crashed, the ICX sensors were reporting over 70°C on the memory.


----------



## defiledge

Can anybody with a 3090 FE show me a pic of their GPU-Z power draw? Mine is showing one of my 8-pins drawing way more power than the other.


----------



## Falkentyne

defiledge said:


> Can anybody with a 3090 FE show me a pic of their GPUZ power draw? It is showing one of my 8pins drawing way more power than the other one


I answered you in the other thread but you completely (missed or ignored?) the photo I made just for you... :/


----------



## Benni231990

Guys, can you help me please?

I bought a Gainward Phantom GS and every game crashes after a few minutes. I've tried so many other BIOSes: Strix OC, 500 W EVGA, Suprim X.

Always the same. Can anybody help me?

I don't overclock the card; I only put the slider in MSI Afterburner to the max.


----------



## Falkentyne

Benni231990 said:


> Guys, can you help me please?
> 
> I bought a Gainward Phantom GS and every game crashes after a few minutes. I've tried so many other BIOSes: Strix OC, 500 W EVGA, Suprim X.
> 
> Always the same. Can anybody help me?
> 
> I don't overclock the card; I only put the slider in MSI Afterburner to the max.


Try driver 456.98 (an old hotfix driver) and see if that fixes the problem.
If it does, you can try the driver that was released today (another hotfix) by Nvidia.


----------



## mattskiiau

Falkentyne said:


> If it does, you can try the driver that was released today (another hotfix) by Nvidia.


Isn't 461.09 the latest? I can't see any new drivers on their site.


----------



## Falkentyne

cenumis said:


> Isn't 461.09 the latest? Can't see any new drivers on their site?


461.33 Hotfix (nvidia.custhelp.com)


----------



## Nico67

motivman said:


> Anyone having issues with random black screen on 3090? This is getting frustrating


I get that a bit playing games. I think it happens when the load drops to the point where crossing a temperature point shifts the curve and it briefly boosts a little higher. Dropping the curve 15 MHz seems to fix it.
I can see the frequency change in Afterburner at the same moment the GPU load drops. I kind of wish they'd made the temperature behavior follow a fixed curve like the power limit does, rather than shifting the entire curve up and down as it trips over temperature points.
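The mechanism described above can be sketched as a toy model: GPU Boost moves in 15 MHz steps, and each temperature threshold the card crosses shifts the whole V/F curve by a step. The threshold values here are made up for illustration; only the 15 MHz step size comes from observed Boost behavior:

```python
# Toy model of temperature-shifted boost: crossing each temperature
# trip point pulls the effective clock down by one 15 MHz boost step.

STEP_MHZ = 15
TEMP_BINS = [35, 45, 55, 60]  # hypothetical trip points, not real values

def effective_clock(base_clock: int, temp_c: float) -> int:
    """Clock after temperature-based curve shifts."""
    bins_crossed = sum(1 for t in TEMP_BINS if temp_c >= t)
    return base_clock - bins_crossed * STEP_MHZ

# A load drop that cools the card re-adds steps, which is the brief
# over-boost the post describes; lowering the curve 15 MHz adds margin.
print(effective_clock(2100, 58))  # 2055
print(effective_clock(2100, 30))  # 2100
```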


----------



## Benni231990

I tried both drivers, 456 and the new fix from today; nothing changed.


----------



## marti69

gfunkernaught said:


> Has anyone here with a Trio put a bykski block on yet? If so what are your temps? I'm debating bykski, alpha cool, or ek, or really waiting for the kryographics from aquacomputer.


I'm running a Bykski on my 3090 Trio X; temps are around 38-42°C when gaming at 16°C ambient.


----------



## geriatricpollywog

WayWayUp said:


> I had issues with CP as well, but they seemingly resolved after revalidating the game files. I thought it was my overclocks and kept messing with them, but it seems every couple of days all I need to do is revalidate the files and the issues go away.


I tried this after reading your post and it actually worked. Cyberpunk was unstable above +500 memory. I tried +1200 after revalidating files and it didn’t crash.


----------



## Gbone

Loucypher said:


> Hi everybody,
> 
> built a new system a few days ago including a MSI RTX 3090 Suprim X. Other specs include an ASUS ROG Strix Z490 Gaming-E Mobo, an i7-10700k AIO water-cooled, 32 GB G.Skill Trident Z DDR4 3200 CL14 RAM.
> The PSU is a Corsair HM1200i Platinum. All 3 8pins on seperate lines, no cable splitting. Everything runs stock except the memory which is on XMP1 Profile.
> My problem is that, whatever I fire at the GPU (Cyberpunk2077, CoD BOCW both in UWQHD maxed out, Superposition Benchmark) according to HWMonitor the GPU caps at 360 watts. That's nowhere near the promoted 420 (450) watts. Results are that I'm 1500-2000 points behind people with similar PC specs in the before mentioned benchmark. The CPU hits 5100 mhz in the 60°C range, while the GPU tops in the low 70's temperature wise.
> The vBIOS modes Silent/Gaming change nothing in that regard. I checked the vBIOS versions and got 2 figures which I can find nothing about on dr. google.
> The Versions are:
> Silent: 94.2.42.0.f4 and Gaming: 94.2.42.0.f5
> Did a second clean Win10 Pro install after I installed MSI Dragon Center with the first install, because I had the feeling that this messed things up even more.....but still no changes.
> Anyone an idea or similar experiences with the 3090 Suprim X? Could this be vBIOS related? I don't think that the Corsair is not providing enough power to run the GPU properly.
> 
> Every input is highly appreciated!
> 
> Regards
> 
> Lou


You installed Dragon Center?? Are you crazy, bro? The bloatware in that app should get MSI a lawsuit... Uninstall every single trace of it ASAP! It can really mess with your system.


----------



## Falkentyne

Benni231990 said:


> i tried the both driver 456 and the new fix from today nothing changed


Did you uninstall the old driver, run "Display Driver Uninstaller" in windows safe mode (with your internet unplugged/turned off), reboot to windows (still with internet unplugged), and then install the new driver?


----------



## Gbone

What BIOS is everyone using for the MSI Gaming X Trio?


----------



## bmagnien

Loucypher said:


> Yeah, I noticed that. That's why I did a fresh install of Windows without touching Dragon Center again. I did an 8K Optimized Superposition run and attached some screenshots. In the HWMonitor shot you can clearly see the GPU capping out at ~360 W. The GPU-Z screenshot shows the power distributed more or less equally across the three 8-pin lines. The last image shows the sad bench result.
> One more thing I noticed: only the first two CPU cores run up to 5100 MHz; the others only reach 5000. You see me really confused right now...
> 
> 
> View attachment 2475030
> View attachment 2475031
> View attachment 2475032
> View attachment 2475033


You see how GPU-Z reports the bus interface as "PCIe x16 3.0 @ x16 1.1"? That should be "PCIe x16 3.0 @ x16 3.0". Are you using a riser, and do you have the PCIe slots in your motherboard BIOS configured for Gen 3?


----------



## gfunkernaught

marti69 said:


> I'm running a Bykski on my 3090 Trio X; temps are around 38-42°C when gaming at 16°C ambient.


Wow, really? OK. What are your idle temps? Which BIOS do you run on your card, and how many rads, at what size? I have one 360x55 and another 240x55.


----------



## Canson

Benni231990 said:


> guys can you help me pls
> 
> i bought a gainward phantom gs and every game crash after a few minutes i tried so many other bios strix OC , 500 watt EVGA , Suprime X
> 
> every the same can anybody help me
> 
> i dont overclock the card i only put the silder in msi afterburner to the max


Next time you crash, open the MSI Afterburner hardware monitor, look at the core clock, and see at what MHz your GPU crashed.
It might be trying to spike high on the core, which causes the crash.

What PSU do you have?




bmagnien said:


> You see how GPU-Z reports the bus interface as "PCIe x16 3.0 @ x16 1.1"? That should be "PCIe x16 3.0 @ x16 3.0". Are you using a riser, and do you have the PCIe slots in your motherboard BIOS configured for Gen 3?


GPU-Z reports the bus interface as "PCIe x16 3.0 @ x16 1.1" because the GPU is idle. When he games, it will change to "PCIe x16 3.0 @ x16 3.0".


----------



## mattskiiau

Gbone said:


> What BIOS is everyone Using for the MSI Gaming X Trio??


EVGA 500w


----------



## GQNerd

HyperMatrix said:


> There is no cyberpunk related ram crash issue unless it's just due to more heat = more instability = quicker crash. I run +1350 mem and was getting the odd crash at 2145 and 2130. Decided to pull it down to 2115 until I get a block for the card. No crashes after hours. But I did stick a ram block on the back of my card to keep memory temps down. Because several times that I did crash, I noticed the ICX sensors were reporting over 70C on the memory.


I think you're on the money regarding the heat being the cause of the crashes, more specifically the VRAM. Since my comment I've started testing with a fresh install of Windows, drivers (downgraded to NVIDIA 460.89), and CP2077.

I'm now able to maintain *2190 core, +1200 mem on the Strix*, and *2250 core, +1000 mem on the KP*. - _still fine-tuning_

*Here's what changed:*

Repasted the Strix and upgraded pads on the front and back, remounted and now can clock mem higher.

I was using the Classified tool on the KP and was pumping way too much voltage into the memory (1.39 V) and/or using the DIP switches... so this time I just set the switches off and skipped Classified, configuring my offsets in Afterburner instead.

In order to hit the mentioned clocks, I lock the core voltage at 1.093v and power target at 600w.. so naturally that's a lot of heat.

Despite the GPU die never surpassing 50°C on either card, and not being limited by PWR/VREL/VOP/THRM... I still get occasional crashes. I've been logging the data from those crashes, and while the Strix doesn't have ICX sensors, the KP sh*ts the bed right when the memory surpasses 70-75°C.

I need a KP block ASAP!


----------



## WilliamLeGod

cenumis said:


> EVGA 500w


Any issues with the 520 W?


----------



## Gbone

cenumis said:


> EVGA 500w


Thanks. I'd assume the 500 W is the most stable.


----------



## Lord of meat

Gbone said:


> Thanks. I'd assume the 500 W is the most stable.


It gives the most performance but also the most heat.


----------



## Lord of meat

Does anyone know if the EVGA Hybrid kit for the 3090 fits the MSI Trio?


----------



## mattskiiau

WilliamLeGod said:


> Any issue with the 520w?





Gbone said:


> Thanks. I'd assume the 500 W is the most stable.


I'm using the 500 W BIOS because the 380 W stock one won't let me lock the core at 2000 MHz+. On stock it power-throttles down to about 1900 MHz, even at 50°C.
The heat difference is only about 5-10°C, since I top out around 400 W by capping the core at 2050 MHz @ 1.0 V in games.
Long sessions in Cyberpunk, Battlefront 2, and Tarkov don't see me going past ~60°C.


----------



## geriatricpollywog

Miguelios said:


> Think you're on the money regarding the heat being the cause for the crashes, more specifically coming from the VRAM. Since my comment I've started testing with a fresh install of Windows, drivers (downgraded to nvidia 460.89), and CP2077.
> 
> I'm now able to maintain *2190 core, + 1200 mem on the Strix*, and *2250 core +1000 mem on the KP*. - _still fine tuning_
> 
> *Here's what changed:*
> 
> Repasted the Strix and upgraded pads on the front and back, remounted and now can clock mem higher.
> 
> I was using the Classified tool on the KP and was pumping wayy too much voltage to the memory (1.39v) and/or using the dipswitches.. so this time I just set the switches off, and not using Classified, just configuring my offsets in Afterburner.
> 
> In order to hit the mentioned clocks, I lock the core voltage at 1.093v and power target at 600w.. so naturally that's a lot of heat.
> 
> Despite the GPU die never surpassing 50c on either card, and not being limited by PWR/VREL/VOP/THRM... I still get occasional crashes. I've been logging the data from those crashes, and while the Strix doesn't have ICX sensors, the KP sh*ts the bed right when the memory surpasses 70-75C.
> 
> I need a KP block ASAP!


You could wait several months for a block, or you could point a 92mm fan at the backplate. This reduced my "MEM1" temperature by over 10C.


----------



## Lord of meat

cenumis said:


> I'm using the 500 W BIOS because the 380 W stock one won't let me lock the core at 2000 MHz+. On stock it power-throttles down to about 1900 MHz, even at 50°C.
> The heat difference is only about 5-10°C, since I top out around 400 W by capping the core at 2050 MHz @ 1.0 V in games.
> Long sessions in Cyberpunk, Battlefront 2, and Tarkov don't see me going past ~60°C.


Try running Red Dead 2 on it. Mine goes up to 68°C, plus the power draw is high, but I might have a poop card.


----------



## gfunkernaught

I have a question about my power supply. I've mentioned this before but never formulated a proper question. I have a Corsair RM750x, which can supply 62.5 A on the +12 V rail. There are two PCIe connectors at the PSU; both cables go to the GPU, but only one is split. Right now the last two 8-pin connectors (left to right) share one PCIe cable, and the first has its own cable. Considering the 500 W BIOS: 500 W / 12 V ≈ 41.7 A, so would I need a new PSU? I don't know how power distribution works with PSUs and multiple connections. Is the circuit feeding the two PCIe connectors at the PSU fused at 62.5 A? I don't even know if I'm asking this correctly. Basically, should I get a new PSU with more PCIe connectors so I can use a dedicated cable for each 8-pin on the GPU, or am I good with what I've got?
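For what it's worth, the amperage side of the question above is just Ohm's-law arithmetic. A quick sketch in Python, where the connector figures are the usual PCIe CEM spec ratings, not anything Corsair-specific:

```python
# Sanity-check the current figures in the question above.

RAIL_VOLTS = 12.0
PCIE_SLOT_WATTS = 75.0   # max a card may draw through the slot (PCIe CEM)

def amps(watts: float, volts: float = RAIL_VOLTS) -> float:
    """Current drawn at a given wattage on the 12 V rail."""
    return watts / volts

# Whole-card draw with a 500 W BIOS:
total_amps = amps(500.0)                              # ~41.7 A

# If the three 8-pins shared the load evenly after the slot's 75 W:
per_8pin_amps = amps((500.0 - PCIE_SLOT_WATTS) / 3)   # ~11.8 A each

print(round(total_amps, 1), round(per_8pin_amps, 1))
```

So the card alone sits comfortably under the 62.5 A rail limit; the open questions are how much the CPU and the rest of the system add on that same rail, and whether the doubled-up cable is happy carrying two connectors' worth of current.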


----------



## GQNerd

0451 said:


> You could wait several months for a block, or you could point a 92mm fan at the backplate. This reduced my "MEM1" temperature by over 10C.


Yeah, seems like it's gonna be a while... but sheesh, -10°C? I can definitely do that. What's the model of the fan, if you don't mind me asking? Save me some research, lol.

It's that or mount a RAM block on the back somehow.


----------



## geriatricpollywog

Miguelios said:


> Yea seems like it's gonna be a while.. but sheesh -10c? I can definitely do that, what's the model of the fan if you don't mind me asking? - save me some research, lol
> 
> It's that or mount a ram block on the back somehow








Blade Master 92 | Cooler Master (www.coolermaster.com)


----------



## HyperMatrix

Miguelios said:


> I need a KP block ASAP!


Sadly, the HydroCopper blocks for the KP got pushed back another two months. The only other option I'm aware of is rumors of Optimus making one, but their communication seems lacking, so who knows.


----------



## SoldierRBT

Hopefully they pushed it back to add active cooling to the backplate. The memory gets hot on my KPE too and goes unstable at 70-75°C; more memory voltage doesn't help. +1100 on mine is 100% stable at stock voltage even at 70°C+, but +1200 only works when it's cool.


----------



## Thanh Nguyen

Is it normal to see a 20°C delta with a Bykski block on an FTW3 card?


----------



## Canson

motivman said:


> Anyone having issues with random black screen on 3090? This is getting frustrating


I just had this black-screen issue now. I was trying mining for the first time and ran it for 30 minutes with a +1000 MHz memory overclock.
When I stopped the program, my screen flickered 2-3 times, went black, and stayed there. I had to hold the power button and restart my PC.
I wonder if the VRAM got too hot and made the system unstable, or whether it was just a coincidence and something else triggered the black screen.


----------



## HyperMatrix

_deleted. nvm. probably not relevant._


----------



## Zogge

Thanh Nguyen said:


> Is it normal for 20c delta with bykski block for ftw3 card?


10-12 deg delta on my Bykski with Strix.


----------



## ALSTER868

Zogge said:


> 10-12 deg delta on my Bykski with Strix.


What power draw are you seeing with that delta?


----------



## Zogge

508-520 W. At idle (<20% load) it's 8 degrees or so; at pure idle (0% load) it's more or less the same as the loop temp.


----------



## Biscottoman

Guys, which waterblock should I get for my Strix? Which one has the best performance? The Aquacomputer kryographics?


----------



## motivman

Biscottoman said:


> Guys which wb should i get for my strix? Which is the one with the best performance? Aquacomputer kryographics?


from my research, looks like phanteks might be the best option.


----------



## marti69

gfunkernaught said:


> Wow really? Ok. Idle temps? And which bios do you run on your card? How many rads and what size are they? I have one 360x55 and another 240x55.


Idle temp is 25C. I'm running dual 360 EK slim radiators with the KP 520W bios.


----------



## Sheyster

Has anyone successfully flashed the FE BIOS to a Strix card? If yes, are the fans able to run at 100% speed (~3000 RPM)?


----------



## long2905

What settings did you guys use to get 15k+ on Port Royal? I slapped on the reference waterblock and put on the XOC vbios but can't do any better than on air. Temps are better though, obviously.


----------



## shalafi

gfunkernaught said:


> I have a question about power supply. I've mentioned this before but never formulated a proper question. I have the Corsair RM750x which can supply 62.5a on the +12v rail. There are two PCIe connectors at the PSU, both cables go to the gpu, but only one is split. Right now the last two 8-pin connectors (left to right) are connected to one PCIe cable, and the first one has its own cable. Considering the 500w bios, 500w/12v=41.6a, does that mean I would need a new PSU? I don't know how the power distro works with PSUs and multiple connections. Is the circuit connecting the two PCIe connectors at the PSU fused for 62.5a? I don't even know if I'm asking this correctly. Basically, should I get a new PSU that has more PCIe connectors this way I can use dedicated cables for each 8-pin on the gpu, or am I good with what I got?


I have the RM750i, which is basically the exact same thing + Corsair Link. With an MSI Suprim X @ 450W (stock OC bios) and a 9700k @ 5.1GHz, I have no power issues at all. I think the max reported power draw was 610-620W for the whole rig.


----------



## gfunkernaught

shalafi said:


> I have the RM750i, which is basically the exact same thing + Corsair Link. With an MSI Suprim X @ 450W (stock OC bios) and a 9700k @ 5.1GHz, I have no power issues at all. I think the max reported power draw was 610-620W for the whole rig.


That's what I calculated between the theoretical max load of the CPU and GPU, not accounting for RAM and other motherboard components. I also have 20W worth of fans, including the pump. So with that I'm closer to 650W, about 87% load, which drops efficiency. So I'm wondering if I should get an 850W or 1kW unit with at least 6 PCIe ports, so I'm not splitting anything, and with the extra headroom I can maintain efficiency.
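The headroom math above can be sanity-checked quickly. A minimal sketch, assuming rough per-component wattages (the CPU and misc figures are guesses for illustration, not measurements):

```python
# Rough PSU headroom check for the build discussed above.
# All wattages are estimates pulled from the posts, not measurements.
def psu_load(draw_w: float, capacity_w: float) -> float:
    """Return load as a fraction of PSU capacity."""
    return draw_w / capacity_w

gpu_w = 500       # 500W bios, worst case
cpu_w = 100       # rough overclocked 8-core under gaming load (guess)
fans_pump_w = 20  # fans + pump, per the post
misc_w = 30       # RAM, motherboard, drives (guess)

total = gpu_w + cpu_w + fans_pump_w + misc_w
print(f"total draw ~ {total}W")
print(f"load on 750W PSU ~ {psu_load(total, 750):.0%}")  # ~87%
print(f"load on 850W PSU ~ {psu_load(total, 850):.0%}")  # ~76%
```

On those assumed numbers, stepping up to 850W drops the load back near the efficiency sweet spot, which matches the reasoning in the post.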


----------



## Vikingcools

So I got my ASUS RTX 3090 EKWB today. A quick undervolt to [email protected] gives me 20500 in Time Spy and better-than-stock FPS in games at 280-320W power use. Good temps too, around 40-41C with 23C ambient.

Is there any point loading a different bios on this card? And which one would be best? Would I stand to gain anything?


----------



## Rhadamanthys

Trying to decide which card to get. I want to use the 500W EVGA bios and slap a waterblock on it. Will there be any difference between the Suprim and the Strix, other than losing an HDMI port on the Strix (bringing it down to one, same as the Suprim)?


----------



## ttnuagmada

Vikingcools said:


> So I got my ASUS RTX 3090 EKWB today. A quick undervolt to [email protected] gives me 20500 in timespy and better than stock FPS in games with 280-320W power use. Good temps too, around 40-41 with 23C ambient.
> 
> Is there any point loading up a different bios in this card? And which one would be the best? Would I stand to gain anything?


Which ASUS card? The Strix? If so, the 520W bios will give you a little more power headroom, but if undervolting is your goal, there's probably no need for it.


----------



## Biscottoman

Which undervolt do you recommend for testing a Strix on air (without pushing it too much) to see whether it's a well- or badly-binned one, while waiting for the waterblock? And what would be a good vcore/core-clock ratio to aim for?


----------



## Sheyster

Biscottoman said:


> Which undervolt do you reccomend to test a strix on air (without pushing it too much) to see if it is a good/bad binned one, while waiting for the waterblock? What would be a good vcore/coreclock ratio to achieve too?


2100 @ 950mv stable would be great IMHO.


----------



## Vikingcools

ttnuagmada said:


> Which Asus card? the Strix? if so, the 520w bios will give you a little more power headroom, but if undervolting is your goal, then there's probably not a need for it.


Thanks for the reply! No, this is the one with the preinstalled EK block. 370W bios, 104% PL. Two power connectors.


----------



## Biscottoman

Sheyster said:


> 2100 @ 950mv stable would be great IMHO.


Isn't that too much on air? I mean, unless you have a golden sample it should be really hard to reach that ratio, especially since the Strix doesn't have the best air cooling either.


----------



## ttnuagmada

Vikingcools said:


> Thanks for the reply! No this is the one with the preinstalled EK-block. 370W bios, 104% PL. Two powerconnectors.


I'm not 100% sure what the best bios would be for that card; I've really only been paying attention to bioses for 3-pin cards. As a general rule though, if you aren't throttling due to the power limit, I wouldn't mess with flashing another bios, since a higher power limit is the only thing you'd gain from it.


----------



## Vikingcools

ttnuagmada said:


> I'm not 100% sure what the best bios would be for that card. I've really only been paying attention to bioses for 3 pin cards. As a general rule though, if you aren't throttling due to the power limit, I wouldn't mess with flashing another bios, as that's the only thing you're going to gain from it.


Well, that's the thing though: it is throttling.


----------



## ttnuagmada

Vikingcools said:


> Well thats the thing though, it is throttling.


You can probably find the consensus best 2 pin bios by searching through the thread. If the throttling is due to PL then you do stand to gain from flashing the bios.


----------



## jura11

Vikingcools said:


> Thanks for the reply! No this is the one with the preinstalled EK-block. 370W bios, 104% PL. Two powerconnectors.


The Gigabyte 390W BIOS or KFA2 390W BIOS would probably be the best and safest.

But I'm not sure whether you'll lose one of the DP or HDMI ports, because the I/O layout on the Gigabyte and KFA2 is different.

Hope this helps.

Thanks, Jura


----------



## Vikingcools

jura11 said:


> Gigabyte 390W BIOS or KFA2 390W BIOS would be probably best and probably safest BIOS
> 
> But I'm just not sure if you will don't loose one of DP and HDMi port because of I/O layout on these Gigabyte or KFA2 are different
> 
> Hope this helps
> 
> Thanks, Jura


Thanks! Well, I only need 1 hdmi and 1 DP. I'll give those a look.


----------



## gfunkernaught

I was just running the Bright Memory benchmark on a loop with the 450W bios on my Trio. Clocks hovered around 1925-1950MHz and power draw was 430-448W. After the temp reached 69C, the clock came down to around 1905MHz, but power draw stayed the same. If the board is at its power limit, will water cooling even help? These cards throttle on power before temp, right? What would I actually gain from water cooling my card other than just lower temps?


----------



## dante`afk

Thanh Nguyen said:


> Is it normal for 20c delta with bykski block for ftw3 card?


delta to what, ambient? water temp?


----------



## Thanh Nguyen

dante`afk said:


> delta to what, ambient? water temp?


It was a 20C delta from the water temp, but I repasted and remounted and the delta is now 12C at 560W.


----------



## des2k...

All these posts with super low deltas

Re-mounted the block, used Noctua NT-H2 and new thermal pads; I could already see a difference, the block was sitting much better even before the screws went in.

Installed a small 240mm rad; my O11 is packed now: 2x360, 1x240 thick front, 1x240 bottom.

A quick 5 min of CP2077 at 4K: water was at 24-25C, core at 38-39C. So about a 14C delta for 480 watts.
Back of the card: 37C.

*What do you guys think of the EK block? Good enough at ~500W with a 14C delta?*


----------



## Gebeleisis

des2k... said:


> All these post with super low deltas
> 
> re-mounted the block, used noctua nt-h2, used new thermal pads; I already saw a difference the block was sitting much better even before the screws
> 
> Installed a small 240mm rad, my o11 is packed now, 2x360, 1x240 thick front, 1x240 bottom.
> 
> A quick 5min, cp2077 4k, water was going 24,25c core was 38,39. So about 14c delta for 480watts.
> Back of the card 37c.
> 
> *What do you guys think for the EK block ? Good enough ~500w 14c delta ?*
> 
> View attachment 2475255


That sounds good ! 
What was the ambient temp ?


----------



## ALSTER868

des2k... said:


> What do you guys think for the EK block ? Good enough ~500w 14c delta ?


Almost the same delta here with the Bitspower block at a 500W load, maybe a degree or so higher.
Reports of a 10-12C delta at 500+W look unreal.


----------



## des2k...

Gebeleisis said:


> That sounds good !
> What was the ambient temp ?


21c


----------



## Antsu

Situation getting out of hand...


----------



## Bobbylee

Hi guys, I'm reading this thread and everyone's temps are far lower on waterblocks. With 22C ambient I'm peaking at 45C at 390W on a PNY XLR8 Epic with an EK Vector block and a 360x60mm rad. Do you think I have a bad mount? Also, I was playing with the 1000W KP bios; my card is 2x8-pin. Do you guys know which power pins in GPU-Z are active? It shows what it expects to be drawing from all three, so I'm not sure how to work out the actual board power draw, as there is a big difference between pin 2 and pins 1 + 3.

I have also ordered an EK RAM cooler I plan on mounting to my backplate for some active VRAM cooling on the back side of the PCB. Anyone have any tips for doing this?


----------



## PhuCCo

Bobbylee said:


> Hi guys, im reading this post and everyones temps are far lower on waterblocks. with 22c ambient im peaking at 45c at 390w on pny xlr8 epic, ek vactor block 360x60mm rad. Do you think i have a bad mount? also I was playing with the 1000w kp bios, my card is 2x8 pin. do you guys know which power pins on gpuz are active? it shows what it expects to be drawing from all three so im not sure how to work out actual board power draw as theyre is a big difference between pin 2 and pins 1 +3.
> 
> I have also ordered an ek ram cooler i plan on mounting to my backplate for some active vram cooling on the backside of pcb, any one have any tips when doing this?


What is your water temperature when your GPU is peaking at 45C? My 3090FE with the Corsair XG7 gets to about 53C after running a couple of Port Royal/Time Spy Stress Tests back to back. Ambient air temp at 23C. My water temp peaks at 33C during that time, so my GPU to water temp delta is about 20C, which I think is terrible. I see people with the same block have a very similar delta to mine. 
I also have a ram block mounted to the backplate, I used the 6 DIMM Alphacool one. The Corsair backplate is only 1mm thick which made mounting extremely tedious. I ended up countersinking four M3 screws through the underside of the backplate and capturing them with nuts on the top side. I used a 1mm thermal pad between the ram block and the backplate to fill in the gap (The backplate was bowing due to how it is mounted and how thin it is, otherwise I would have used paste)
I would imagine the EK backplate is thicker than 1mm so you shouldn't be so limited in ways you can mount it. I've seen people using thermal conductive epoxy to mount the block, so maybe you could look into that instead of drilling. Or if the backplate is thick enough, you could just tap threads directly into the backplate.


----------



## Gebeleisis

Bobbylee said:


> Hi guys, im reading this post and everyones temps are far lower on waterblocks. with 22c ambient im peaking at 45c at 390w on pny xlr8 epic, ek vactor block 360x60mm rad. Do you think i have a bad mount? also I was playing with the 1000w kp bios, my card is 2x8 pin. do you guys know which power pins on gpuz are active? it shows what it expects to be drawing from all three so im not sure how to work out actual board power draw as theyre is a big difference between pin 2 and pins 1 +3.
> 
> I have also ordered an ek ram cooler i plan on mounting to my backplate for some active vram cooling on the backside of pcb, any one have any tips when doing this?


45C water temp or 45C GPU temp?
Guys in this thread refer to the following temps:

loop water temp
GPU/CPU max temp
delta, the difference between the two above
ambient temp (usually they don't mention it)
chilled water, by submerging one rad in a bucket of ice, or by placing the rig/rad outside in cold air/weather


What is important:

AMBIENT TEMP - this dictates how low you can go with your cooling for a normal rig used in an office, for example
GPU/CPU temp
the delta between the two above, which dictates how good/efficient your cooling really is
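To make numbers from different rigs comparable, it can help to reduce them to a delta and a delta-per-watt figure, since posters here measure at different power draws. A small sketch of that arithmetic (the sample values are just example figures like those quoted in recent posts):

```python
# Core-to-water delta, plus an effective thermal resistance (C per watt),
# which lets you compare block mounts measured at different power draws.
def delta(core_c: float, water_c: float) -> float:
    """Core-to-water delta in degrees C."""
    return core_c - water_c

def c_per_watt(core_c: float, water_c: float, power_w: float) -> float:
    """Delta per watt: lower means a better block/mount."""
    return delta(core_c, water_c) / power_w

# e.g. core 38.5C, water 24.5C at 480W:
print(delta(38.5, 24.5))                       # 14.0
print(round(c_per_watt(38.5, 24.5, 480), 3))   # 0.029
```

By that measure, a 14C delta at 480W and a 12C delta at 410W are essentially the same mount quality, even though the raw deltas differ.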


----------



## Bobbylee

PhuCCo said:


> What is your water temperature when your GPU is peaking at 45C? My 3090FE with the Corsair XG7 gets to about 53C after running a couple of Port Royal/Time Spy Stress Tests back to back. Ambient air temp at 23C. My water temp peaks at 33C during that time, so my GPU to water temp delta is about 20C, which I think is terrible. I see people with the same block have a very similar delta to mine.
> I also have a ram block mounted to the backplate, I used the 6 DIMM Alphacool one. The Corsair backplate is only 1mm thick which made mounting extremely tedious. I ended up countersinking four M3 screws through the underside of the backplate and capturing them with nuts on the top side. I used a 1mm thermal pad between the ram block and the backplate to fill in the gap (The backplate was bowing due to how it is mounted and how thin it is, otherwise I would have used paste)
> I would imagine the EK backplate is thicker than 1mm so you shouldn't be so limited in ways you can mount it. I've seen people using thermal conductive epoxy to mount the block, so maybe you could look into that instead of drilling. Or if the backplate is thick enough, you could just tap threads directly into the backplate.


I haven't actually got a temp sensor, I guess I'll grab one of those. But yes, 45C is my max GPU temp after a long gaming session or a Time Spy stress test. I have a 9900K at 5.2GHz / 1.38V in the same loop too, so I guess it's hard to say until I find out my water temps. The EK one is thicker than 1mm for sure, maybe 3mm. I was planning on doing the countersunk-hole method too. Thanks for the information, very useful.


----------



## Bobbylee

Gebeleisis said:


> 45c water temp or 45c gpu temp ?
> guys in this thread refer to the following temps :
> 
> loop water temp
> gpu / cpu max temp
> delta , difference between the two above
> ambient temp - usually they do not mention it
> chilled water by submerging one rad in a bucket of ice / or by placing the rig / rad outside in cold air / cold weather
> 
> 
> What is important :
> 
> AMBIENT TEMP - this dictates how low you can go with your cooling for a normal rig used in an office for eg
> gpu/cpu temp
> delta between the two above dictates how good / efficinet your cooling really is.


45C GPU temp. I need to get a temp sensor then to find this out, thanks.


----------



## Thanh Nguyen

Why is it that when I set 1.1V / 2220MHz it crashes in Metro with RTX, but it will run at 1.068V / 2220MHz? In game, the clock comes down to 2190MHz.


----------



## pat182

So I was wondering why switching from an O11 to an O11 XL was getting me worse temps, just to discover that in my O11 Dynamic I had 9 Corsair ML120s, but in this case it's 9 LL120s, and they have about HALF the CFM of the ML120. I wish I knew this before buying them...


----------



## gfunkernaught

des2k... said:


> All these post with super low deltas
> 
> re-mounted the block, used noctua nt-h2, used new thermal pads; I already saw a difference the block was sitting much better even before the screws
> 
> Installed a small 240mm rad, my o11 is packed now, 2x360, 1x240 thick front, 1x240 bottom.
> 
> A quick 5min, cp2077 4k, water was going 24,25c core was 38,39. So about 14c delta for 480watts.
> Back of the card 37c.
> 
> *What do you guys think for the EK block ? Good enough ~500w 14c delta ?*
> 
> View attachment 2475255


Which block did you use?


----------



## mattskiiau

pat182 said:


> so i was wondering why switching from a 011 to a 011 XL getting me worst temp ,just to discovert that in my 011 dynamic, i had 9 corsair ML120 , but in this case its 9 LL120, and they are like HALF the CFM vs the ML120 , i wish i knew this before buying them..


Sorry off topic, what case is that?


----------



## schoolofmonkey

So what's the general consensus on the Inno3D RTX 3090 iCHILL X4? I just ordered one for a new 5900X build; it's all I can get at the moment.
Can't really find much info about them other than "don't bother trying to OC them".
Any other owners here that can give me some feedback?

It's going in a O11 Dynamic with 3 bottom intake fans, so plenty of air.

It's just got to do "for now", seeing I've always bought ROG cards, I've still got my heart set on the Strix.


----------



## Bobbylee

I have another question, guys. Looking at GPU-Z on my 2x8-pin PNY XLR8 Epic with the 1000W KP bios, can anybody tell me which two 8-pin rails are actually the ones used on my card? i.e. 1 and 3, or 1 and 2, etc.? I'm not sure which one to subtract to work out my total board power draw.


----------



## motivman

Bobbylee said:


> I have another question guys.. if we look at my gpu-z on my 2x8 pin pny xlr8 epic with 1000w kp bios. Can anybody tell me which 8-pins are actually the two that would be used on my card? ie 1 and 3 or 1 and 2 etc? I'm not sure which one to subtract to work out my total board power draw.
> 
> View attachment 2475283


Pins 1 and 2. Make sure you have quality PCIe cables... pulling 284W from a single 8-pin will melt some cables (bad-quality ones). Also make sure that both 8-pins are on separate cables, not a single daisy-chained cable.
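For anyone doing the same subtraction, here's a minimal sketch of the arithmetic. The rail readings and slot wattage below are hypothetical, and which rails are live is an assumption that varies by board:

```python
# Summing board power from GPU-Z's per-rail readings on a 2x8-pin card
# running a 3-connector bios: one reported 8-pin rail is not physically
# present, so only the live rails should be counted. Per the reply
# above, rails 1 and 2 are the live ones on this card (an assumption;
# check your own board). All readings are made-up illustration values.
def board_power(slot_w: float, eight_pin_w: list[float], active: list[int]) -> float:
    """PCIe slot power plus only the physically connected 8-pin rails (0-indexed)."""
    return slot_w + sum(eight_pin_w[i] for i in active)

readings = [284.0, 260.0, 150.0]  # hypothetical GPU-Z 8-pin #1/#2/#3 values
print(board_power(70.0, readings, active=[0, 1]))  # 614.0
```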


----------



## des2k...

gfunkernaught said:


> Which block did you use?


EK-Quantum Vector Trinity


----------



## Bobbylee

motivman said:


> pins 1 and 2. Make sure you have quality PCIE cables... pulling 284W from a single 8 pin will melt some cables (bad quality ones). Also make sure that both 8 pins are separate cables, and not a single cable.


Thank you. I am running an EVGA semi-modular PSU (18 AWG) with certified 16 AWG cable extensions. Do you think that would be suitable? And yes, two individual cables.


----------



## Nammi

For those interested in the Strix AC kryographics block, here's how it performs on my end.

Running TS Extreme on the 520W bios for about 30 min after heating up the loop gave an average water-to-core delta of 16.3C, with blips as low as 14C and as high as 19C.


----------



## Biscottoman

Nammi said:


> For those interested in the Strix AC kryographics block, here's how it performs on my end.
> 
> Running TS Extreme on the 520w bios for about 30min after heating up the loop gave avg water to core delta of 16.3C, with lowest blips to 14C and highest 19C.


I was expecting a little more from this block tbh, something like an 11-12C average core delta. What radiator setup are you running?


----------



## Nammi

Biscottoman said:


> I was expecting a little more from this block tbh, something like 11/12 average core delta. What radiator setup are you running?


Yeah, it does seem a bit underwhelming...
2x 420 HWL SR2


----------



## Biscottoman

Nammi said:


> Yeah it does seem abit underwhelming...
> 2x 420 HWL SR2


Right now I'm so doubtful about which waterblock to pick for the Strix, even if the difference between manufacturers is something like 3-4 degrees. Btw, you'd probably get a little more cooling performance running LM on your card.


----------



## LancerVI

cenumis said:


> Sorry off topic, what case is that?


Looks like a Lian Li PC-O11 Dynamic XL ROG. I have the same case.


As for me, looks like I'm going to buy a 3090. I hate to do so, so "late" in the game, especially off of eBay and Amazon marketplace. 

Any consensus on card and where to get? I've narrowed my choices down to Asus / EVGA / MSI. Leaning more ASUS/EVGA.

Thoughts, and any inside info on where I can get my paws on one? Almost ordered an FTW3 Ultra Gaming from Amazon Marketplace for $2350 before tax. OUCH!


----------



## Ironcobra

Massive Best Buy drop happening now. Just got a 3090 Strix, I am in the club!!


----------



## mouacyk

Antsu said:


> Situation getting out of hand...


Tariffs, price hikes, scalping -- not really an obstacle, because BTC is f'ing inflated from fake Tethers. Unregulated crypto pumping is running amok in the world, and too many people are going on shopping sprees that normally would not happen.


----------



## long2905

schoolofmonkey said:


> So what's the general consensus on the Inno3D RTX 3090 iCHILL X4, I just ordered one for a new 5900x build, it's all I can get at the moment.
> Can't really find much info about them other than don't bother trying to OC them.
> Any other owners here that can give me some feedback.
> 
> It's going in a O11 Dynamic with 3 bottom intake fans, so plenty of air.
> 
> It's just got to do "for now", seeing I've always bought ROG cards, I've still got my heart set on the Strix.


Slap a reference waterblock on it, an Alphacool one for example, and it should be fine. At stock it won't clock as high as other cards, but if you shunt or use the 1000W XOC vbios you should be fine, especially in an O11.

I just slapped a block on mine but have yet to test it out properly. However, mine is on a single thin 280 rad for both CPU and GPU (ITX case), so the results will be limited compared to others'.


----------



## gfunkernaught

des2k... said:


> EK-Quantum Vector Trinity


Right, I just reread your initial post. Since the EK is more expensive (plus backplate) than the Bykski, do you think it's worth it?


----------



## tiefox

Does anyone have the specs on the 4 screws around the GPU for the Aquacomputer block for the Strix 3090? Mine arrived with them missing, and I can't get support to reply.


----------



## gfunkernaught

tiefox said:


> Does anyone has the specs on the 4 screws around the GPU for the Aquacomputer Block for the Strix 3090 ? Mine arrived with them missing, cant get support to reply
> View attachment 2475300


Did you try using the stock screws?


----------



## mattFE

Hey everyone - I've been really enjoying this forum for info and would love your help if possible. I picked up a second 3090 FE today and was able to get an NVLink as well but it's not going to work with my motherboard (Z490 Taichi). Any suggestions on motherboards that will allow for the 4 slot link? Many many thanks!


----------



## arvinz

tiefox said:


> Does anyone has the specs on the 4 screws around the GPU for the Aquacomputer Block for the Strix 3090 ? Mine arrived with them missing, cant get support to reply
> View attachment 2475300


Did you get the passive or active backplate? I'm debating whether to get the Phanteks block or get this + active cooler...any thoughts anyone?


----------



## Biscottoman

arvinz said:


> Did you get the passive or active backplate? I'm debating whether to get the Phanteks block or get this + active cooler...any thoughts anyone?


I'm exactly in your same position right now


----------



## tiefox

gfunkernaught said:


> Did you try using the stock screws?


The stock ones are completely different, as the card has a bracket attached to the 4 screws in the back, and the thread size is much smaller than the threads in the waterblock.


----------



## tiefox

arvinz said:


> Did you get the passive or active backplate? I'm debating whether to get the Phanteks block or get this + active cooler...any thoughts anyone?


I got the active backplate


----------



## Esenel

Nammi said:


> For those interested in the Strix AC kryographics block, here's how it performs on my end.
> 
> Running TS Extreme on the 520w bios for about 30min after heating up the loop gave avg water to core delta of 16.3C, with lowest blips to 14C and highest 19C.


At what flow?
In the link at the bottom we've already tested several blocks under different conditions; feel free to contribute there as well.

The best blocks for the Strix so far seem to be Bykski and EK with liquid metal.
Alphacool is also very solid, but someone mentioned it's very restrictive to flow.

Performance from the AquaComputer one is mediocre.









[Übersicht] - RTX 30x0 Wasserkühlervergleich | GPU Block Comparison
www.hardwareluxx.de
(A central, transparent collection of performance data for the various manufacturers' RTX 30-series GPU blocks.)


----------



## arvinz

Biscottoman said:


> I'm exactly in your same position right now


Ya, it's a tough one... There also aren't too many solutions that cool the back of the card... I know the Kryographics block has the active-back option... EK will be releasing a backplate solution as well, but I'm not a fan of their block... The Phanteks block looks great but has no back solution.

Kinda wish Heatkiller had a solution for the Strix, but it doesn't look like it. And I doubt we'll see anything from Optimus for at least a few months.


----------



## arvinz

Strix waterblock options:

Aquacomputer - Yes
Alphacool - Yes
Barrow - Yes
Bitspower - Yes
Bykski - Yes
Corsair - Yes
EK - Yes
Optimus - No
Phanteks - Yes
Watercool - No

Did I miss any? I'm still undecided...


----------



## Ironcobra

So, in anticipation of receiving my Strix: what is the general consensus on bios, and to shunt or not to shunt? I'm planning on running air until later in the year and then possibly water. I'm using an O11 XL and a Focus Gold 850W with a 5800X.


----------



## Biscottoman

arvinz said:


> Ya it's a tough one...There also aren't too many solutions that cool the back of the card...I know the Kryographics block has the active back option...EK will be releasing a backplate solution as well but not a fan of their block...The Phanteks block looks great but no back solution.
> 
> Kinda wish heatkiller had a solution for Strix but it doesn't look like it. And I doubt we'll see anything from Optimus for at least a few months.


For backplate cooling you almost always have to go with MP5Works. The only con of the Phanteks is that its backplate is only 1mm thick, which doesn't seem like much to me tbh. I have no data on Bitspower's performance, while many people say EK and Bykski are good-performing blocks.


----------



## Antsu

Biscottoman said:


> I was expecting a little more from this block tbh, something like 11/12 average core delta. What radiator setup are you running?


IIRC I had a delta (core to water) of ~11C @ 600W and ~9C @ 450W but this was with LM. I can do some testing with regular paste tomorrow or the day after. Loop is single D5 at full tilt and 560x55 HWLabs Black Ice Nemesis GTR.


----------



## Biscottoman

Antsu said:


> IIRC I had a delta (core to water) of ~11C @ 600W and ~9C @ 450W but this was with LM. I can do some testing with regular paste tomorrow or the day after. Loop is single D5 at full tilt and 560x55 HWLabs Black Ice Nemesis GTR.


I'm building almost the same loop for the GPU only, an EKWB D5 + Hardware Labs 560 GTX, and I'm going LM for sure, and probably MP5Works if I don't get an active backplate.


----------



## arvinz

Biscottoman said:


> For backplate cooling you have almost always to go with mp5works, the only cons of the phanteks is that backplate is only 1mm thick which seems not too much to me tbh. I have no data regarding bitspower performance, while many people says are ek and bysksi are good performing blocks


Would a thinner backplate be better if you're using something like mp5works on it?


----------



## WayWayUp

I mean, it might make the MP5Works more effective since a thinner backplate would run hotter, but that's the only way it would be "better".

By all measures a thinner backplate will run hotter, and MP5Works + a thick plate will have better overall temperatures than pairing it with a thin plate.

If you have a large backplate you can buy the larger clips off their website for a couple of bucks. It will fit just as well.


----------



## Biscottoman

Yeah, what I meant to say regarding the Phanteks backplate is that a thicker one would have been better from a cooling-performance point of view.


----------



## Thanh Nguyen

Can anyone who has an Optimus block confirm a delta of less than 10C with liquid metal applied to the core under a 550-650W load? Optimus will open orders next week, so I want to see whether it's worth it or not. Thanks.


----------



## jura11

I'm thinking of putting a spare Alphacool XP3 CPU block on the backplate, which in theory should work, although flow will suffer with such a block. The other block I have here is a Bykski Ryzen waterblock, which I could mount there.

I'll need to think about the best way to do it, but with the Bykski waterblock and backplate, the backplate can quite easily reach 45-50°C (measured with an IR gun) during rendering (GPU usage close to 109-110% of the power limit) with only the 390W BIOS.

Hope this helps.

Thanks, Jura


----------



## jomama22

jura11 said:


> I'm thinking putting spare CPU Alphacool Xp3 block on backplate which in theory should work although flow will suffer with such block, other block I have here is Bykski Ryzen waterblock whuch I can mount there
> 
> Will need to think about it and see what is best way to do it, but with Bykski waterblock and backplate, backplate with IR gun can reach quite easily 45-50°C during the rendering(GPU usage is close to 109-110% of power limit) with 390W BIOS only
> 
> Hope this helps
> 
> Thanks, Jura


Just use a 6x ram water block. Cheaper and will cool just fine for it's purpose. No reason to add that kind of restriction to your loop.


----------



## gfunkernaught

jura11 said:


> I'm thinking putting spare CPU Alphacool Xp3 block on backplate which in theory should work although flow will suffer with such block, other block I have here is Bykski Ryzen waterblock whuch I can mount there
> 
> Will need to think about it and see what is best way to do it, but with Bykski waterblock and backplate, backplate with IR gun can reach quite easily 45-50°C during the rendering(GPU usage is close to 109-110% of power limit) with 390W BIOS only
> 
> Hope this helps
> 
> Thanks, Jura


Hey when you measured the backplate temp at 45-50c, what was the gpu temp with the bykski? The ek seems to be unavailable mostly and too expensive with the backplate. I'm leaning towards the bykski.


----------



## Damaged__

Thanh Nguyen said:


> Can anyone who has Optimus block confirm the delta less than 10c with liquid metal applied to the core during 550w-650w load? Optimus will open order next week so I want to see if its worth or not. Thanks.


It's quite hard to imagine the block not being able to achieve a 10c delta with a proper mount and cooling


----------



## Thanh Nguyen

Damaged__ said:


> It's quite hard to imagine the block not being able to achieve a 10c delta with a proper mount and cooling


Less than 10.


----------



## Biscottoman

jomama22 said:


> Just use a 6x ram water block. Cheaper and will cool just fine for it's purpose. No reason to add that kind of restriction to your loop.
> View attachment 2475327


If I ran something like that, could I use the first bottom thread as the inlet for the GPU waterblock, then the top thread as the inlet for the RAM block, the second top thread to come back into the GPU, and finally the last bottom thread as the outlet from the GPU? Sorry for the bad description, I hope it's understandable


----------



## mirkendargen

Biscottoman said:


> If I ran something like that, could I use the first bottom thread as the inlet for the GPU waterblock, then the top thread as the inlet for the RAM block, the second top thread to come back into the GPU, and finally the last bottom thread as the outlet from the GPU? Sorry for the bad description, I hope it's understandable


I do pump->RAM block on the backplate->GPU block->rest of the loop.


----------



## Biscottoman

mirkendargen said:


> I do pump->RAM block on the backplate->GPU block->rest of the loop.


If I have to run that setup I would like to do Pump->GPU->RAM block->GPU->Radiator->Pump, if this kind of "parallel" loop is possible to run


----------



## mirkendargen

Biscottoman said:


> If I have to run that setup I would like to do Pump->GPU->RAM block->GPU->Radiator->Pump, if this kind of "parallel" loop is possible to run


If I understand what you're saying correctly (use the rear ports on the GPU block to connect to the RAM block on the backplate) this would be a very bad idea unless you put a pretty extreme flow restrictor on the inlet to the RAM block. The RAM block is far less restrictive than the GPU block, so if you put them in parallel the RAM block would get a far greater percentage of the coolant flow.

The reason the MP5 block is plumbed this way is it has super tiny tubing to increase flow restriction to it.
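To put rough numbers on it (the restriction coefficients below are made up purely for illustration, not measured block restrictions):

```python
# Model each path as dP = k * Q**2 (turbulent flow); parallel paths
# see the same pressure drop, so flow splits toward the low-k path.
# The k values are illustrative only, not measured block restrictions.

def parallel_split(k_gpu, k_ram, q_total):
    """Split q_total between two parallel paths with equal dP."""
    # k_gpu * q_gpu**2 == k_ram * q_ram**2  and  q_gpu + q_ram == q_total
    ratio = (k_gpu / k_ram) ** 0.5      # q_ram / q_gpu
    q_gpu = q_total / (1 + ratio)
    return q_gpu, q_total - q_gpu

# A GPU block ~16x as restrictive as the RAM block:
q_gpu, q_ram = parallel_split(k_gpu=16.0, k_ram=1.0, q_total=1.0)
print(f"GPU path: {q_gpu:.2f}, RAM path: {q_ram:.2f}")  # 0.20 vs 0.80
```

With 80% of the coolant bypassing the GPU block in that scenario, core temps would suffer badly, which is why series plumbing (or a deliberate restrictor) is the sane option.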


----------



## Biscottoman

mirkendargen said:


> If I understand what you're saying correctly (use the rear ports on the GPU block to connect to the RAM block on the backplate) this would be a very bad idea unless you put a pretty extreme flow restrictor on the inlet to the RAM block. The RAM block is far less restrictive than the GPU block, so if you put them in parallel the RAM block would get a far greater percentage of the coolant flow.
> 
> The reason the MP5 block is plumbed this way is it has super tiny tubing to increase flow restriction to it.


Ok, so basically to run this setup properly I should run each waterblock in series in the loop, right?


----------



## mirkendargen

Biscottoman said:


> Ok, so basically to run this setup properly I should run each waterblock in series in the loop, right?


That's the simple answer, yes.


----------



## dante`afk

Thanh Nguyen said:


> Can anyone who has an Optimus block confirm a delta of less than 10c with liquid metal applied to the core during a 550w-650w load? Optimus will open orders next week, so I want to see if it's worth it or not. Thanks.


At this point every block is the same; the only thing you can do is more radiator area / cooler water.

A $100 Bykski block cools equally well (or badly) as a $400 Optimus block. They just look different.


----------



## Falkentyne

dante`afk said:


> At this point every block is the same; the only thing you can do is more radiator area / cooler water.
> 
> A $100 Bykski block cools equally well (or badly) as a $400 Optimus block. They just look different.


Can I ask you something?


----------



## geriatricpollywog

dante`afk said:


> At this point every block is the same; the only thing you can do is more radiator area / cooler water.
> 
> A $100 Bykski block cools equally well (or badly) as a $400 Optimus block. They just look different.


Eh, the Bykski block for the 2080 Ti Kingpin required the acrylic to be dremeled to clear one of the ports. Bykski is possibly designing these blocks based on engineering diagrams, but not test-fitting every SKU. I like knowing things will fit perfectly, and brands like Aquacomputer and Optimus would give me peace of mind in every aspect.


----------



## dante`afk

Falkentyne said:


> Can I ask you something?


let's go



0451 said:


> Eh, the Bykski block for the 2080 Ti Kingpin required the acrylic to be dremeled to clear one of the ports. Bykski is possibly designing these blocks based on engineering diagrams, but not test-fitting every SKU. I like knowing things will fit perfectly, and brands like Aquacomputer and Optimus would give me peace of mind in every aspect.


I think you are right, but so far I've had no issues with them or Bitspower. I only ended up buying "more expensive" blocks from AC or WC for aesthetics; cooling quality was always equal.


----------



## Falkentyne

dante`afk said:


> let's go
> 
> 
> 
> I think you are right, but so far I've had no issues with them or Bitspower. I only ended up buying "more expensive" blocks from AC or WC for aesthetics; cooling quality was always equal.


Check PM!


----------



## geriatricpollywog

I want to know the secret.


----------



## jura11

gfunkernaught said:


> Hey, when you measured the backplate temp at 45-50c, what was the GPU temp with the Bykski? The EK seems to be mostly unavailable and too expensive with the backplate. I'm leaning towards the Bykski.


Hi there

GPU temperatures have been in the 36-38°C range, with water temperature at 26.7°C in / 26.9°C out against 23.7°C ambient, which gives you a delta of around 12-14°C. Now I have an 80mm Noiseblocker fan and another 120mm Arctic Cooling P12 PWM fan pointed at the backplate, and backplate temps are now in the 40s during very long rendering sessions
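For anyone checking the arithmetic, a trivial sketch (treating the 23.7°C reading as the baseline, which is an assumption on my part):

```python
# Delta between core temp and the 23.7C baseline reading (assumed ambient).
gpu_low, gpu_high = 36.0, 38.0   # C, core under load
baseline = 23.7                  # C
print(f"delta: {gpu_low - baseline:.1f}-{gpu_high - baseline:.1f} C")
# delta: 12.3-14.3 C, i.e. the "around 12-14C" figure
```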

Hope this helps

Thanks, Jura


----------



## Gandyman

Jordyn said:


> I had an iChill x4, and at stock, temps would average around 81 with fans at 100%, with ambients in the mid to high 20's. With the KFA 390w bios that would increase to the mid 80's. Clocks would bounce between 1700 and 1850.
> 
> I managed to score a "good" deal on the Gaming X Trio which just arrived today and it is night and day different with the MSI card averaging in the high 60s with fans at around 60% at stock. I have just flashed to the Suprim bios and still only hitting around 70 despite the increased power draw.
> 
> In short it looks like the ichill cooling is no good at all and if you can source another card you would be better off.


Managed to get my hands on one of the first batch of 3090 STRIX OC to hit Australia; should arrive next week. I think the power limit is 500W with the extra 23% on the slider? I'm just hoping it's quieter, because lately I'm not even gaming anymore; I just can't stand the noise. I took quite a loss reselling the iChill just to be rid of it.


----------



## Thanh Nguyen

Those PWR 1/2/3/4 temps are in the 70s. Are they safe? What are they?


----------



## WillP

WayWayUp said:


> I mean it might make the mp5works more effective, as a thinner backplate would run hotter, but that's the only way it would be "better"
> 
> but by all measures a thinner backplate will be hotter, and the mp5works + thick plate will have better overall temperatures as opposed to pairing with a thin plate
> 
> If you have a large backplate you can buy the larger clips off their website for a couple of bucks. It will fit just as well


I found the supplied clips too thin for my alphacool backplate but didn't want to wait to get their larger clips. I've found zip-ties to work just fine.


----------



## schoolofmonkey

Gandyman said:


> Managed to get my hands on one of the first batch of 3090 STRIX OC to hit Australia; should arrive next week. I think the power limit is 500W with the extra 23% on the slider? I'm just hoping it's quieter, because lately I'm not even gaming anymore; I just can't stand the noise. I took quite a loss reselling the iChill just to be rid of it.



Reading that doesn't give me hope. I just ordered the iChill x4 from PCCGear; starting to think I'll just wear the restocking fee and look at an MSI Suprim X from Umart, just saw they have a couple in stock.

But I am seriously thinking about water cooling this new 5900x build.


----------



## Esenel

0451 said:


> brands like Aquacomputer and Optimus would give me piece of mind in every aspect.


Haha.
AquaComputer had to be dremeled too, as they forgot space for the electrolytic caps (Elkos).
The cheap Bykski you didn't have to this gen ;-)


----------



## ALSTER868

Thanh Nguyen said:


> Those pwr


Can you please show your V/F curve you were using for that test? Thanks


----------



## Gandyman

schoolofmonkey said:


> Reading that doesn't give me hope, just ordered the ichill x4 from PCCGear, starting to think I'll just wear the restocking fee and look at a MSI Suprim X from Umart, just saw they have a couple in stock.
> 
> But I am seriously thinking about water cooling this new 5900x build.


I'm not one to just cry about stuff on the internet, but in 20 years of PC building (recreational and professional) I have never been so severely disappointed by a GPU. My PC is wall mounted in a Core P3 directly in front of a split system aircon outlet. At stock, within 5 minutes of running Heaven, the fans ramp to 100%, clock speed falls to 1500ish, and temperature sits at 84c. If temperature starts to get worse it will easily fall to 1300 or 1400 MHz. Playing Metro Exodus with RT on, it was so hot I observed an 1100 MHz core clock at one point. Keep in mind that this is at 100% fan speed, which is UNBEARABLY loud. I realize I do have an open-air case, however previous to this I owned an MSI 2080 Ti Gaming Z Trio, prior to that a Gigabyte 2080 Aorus, and previous to that a 1080 Ti Strix, all in the Core P3. The Gigabyte was by far the loudest of the 3, but I would only hear it very late at night if all else was exceptionally still. This iChill is the loudest card I have ever owned, and I have owned a reference GTX 580. How any $3000 card could be allowed to leave the factory and be sold in this state blows my mind.

TLDR: tell PC Case Gear to shove their garbage iChill trash where the sun don't shine; a 15% restocking fee on 3k is rough, but it's money well spent.


----------



## schoolofmonkey

Gandyman said:


> I'm not one to just cry about stuff on the internet, but in 20 years of PC building (recreational and professional) I have never been so severely disappointed by a GPU. My PC is wall mounted in a Core P3 directly in front of a split system aircon outlet. At stock, within 5 minutes of running Heaven, the fans ramp to 100%, clock speed falls to 1500ish, and temperature sits at 84c. If temperature starts to get worse it will easily fall to 1300 or 1400 MHz. Playing Metro Exodus with RT on, it was so hot I observed an 1100 MHz core clock at one point. Keep in mind that this is at 100% fan speed, which is UNBEARABLY loud. I realize I do have an open-air case, however previous to this I owned an MSI 2080 Ti Gaming Z Trio, prior to that a Gigabyte 2080 Aorus, and previous to that a 1080 Ti Strix, all in the Core P3. The Gigabyte was by far the loudest of the 3, but I would only hear it very late at night if all else was exceptionally still. This iChill is the loudest card I have ever owned, and I have owned a reference GTX 580. How any $3000 card could be allowed to leave the factory and be sold in this state blows my mind.
> 
> TLDR: tell PC Case Gear to shove their garbage iChill trash where the sun don't shine; a 15% restocking fee on 3k is rough, but it's money well spent.


So it seems what you're saying is the cooler isn't really good.
I'm interested how bad the card would be under water with the XOC BIOS; I know it won't be like a 3x 8-pin card.
Regardless of what card I get it won't have the stock cooling on it.

I found when I vertically mounted my RTX 2080ti Strix temps went down and the annoying buzzing fan stopped buzzing, yeah the middle fan would buzz so loudly when it hit 60% when mounted horizontally, warranty deemed no fault found.

Shame there's no good PCIe Gen 4 risers yet.


Watching this video leaves me with a lot of questions, like why the clock drops lower when the card is running cooler.


----------



## Gandyman

schoolofmonkey said:


> So it seems what you're saying is the cooler isn't really good.
> I'm interested how bad the card would be under water with the XOC Bios, I know it won't be like a 3*8 Pin card.
> Regardless of what card I get it won't have the stock cooling on it.
> 
> I found when I vertically mounted my RTX 2080ti Strix temps went down and the annoying buzzing fan stopped buzzing, yeah the middle fan would buzz so loudly when it hit 60% when mounted horizontally, warranty deemed no fault found.
> 
> Shame there's no good PCIe Gen 4 risers yet.
> 
> 
> Watching this video leaves me with a lot of questions, like why the clock drops lower when the card is running cooler.


Yeah, I suppose it is the cooler that is the issue. As for how it would go without stock cooling, I can't say; I've always been a stock cooler guy, so I can't comment on how it performs in that regard. I do know the memory can run at +1000 without issue, as when I first got it, before experiencing the poor cooler, I was trying to find its OC limits. The cooler soon put an end to that. I don't pretend to know much about how it works, but the boost clock doesn't change from 75% power limit to 100%. At 70% I can observe it boosting lower. As for the undervolt, I have it set to 1700 MHz at 0.725v, but it still thermal throttles to 1550-1600ish, and with RT way, way below that.


----------



## schoolofmonkey

Gandyman said:


> Yeah, I suppose it is the cooler that is the issue. As for how it would go without stock cooling, I can't say; I've always been a stock cooler guy, so I can't comment on how it performs in that regard. I do know the memory can run at +1000 without issue, as when I first got it, before experiencing the poor cooler, I was trying to find its OC limits. The cooler soon put an end to that. I don't pretend to know much about how it works, but the boost clock doesn't change from 75% power limit to 100%. At 70% I can observe it boosting lower. As for the undervolt, I have it set to 1700 MHz at 0.725v, but it still thermal throttles to 1550-1600ish, and with RT way, way below that.


Well, I was just talking to someone on one of our local forums; he said his iChill x3 is peaking at 249W (under water), so I found that interesting seeing the standard x3 is meant to be 350W and the iChill x3 is 370W.
Guess I can only try it and see what the card is like; nearly $500 is a fair chunk of change for a restocking fee.


----------



## Gandyman

schoolofmonkey said:


> Watching this video leaves me with a lot of questions like why when the card is running cooler the clock drops lower.


Seems because RT was enabled the clock went down a few hundred. How he is playing Odyssey at 78c and 1800 MHz is beyond me though. I have the x3, not the x4, but surely there can't be that big a difference between the two. Or perhaps there is? Or perhaps my card is just b0rked? As you can see here, I just threw on Heaven real fast; after a few minutes I was at the 82c it likes to sit at, and clocks were slowly dropping but already down to 1630.


----------



## schoolofmonkey

Gandyman said:


> Or perhaps my card is just b0rked? As you can see here, I just threw on Heaven real fast; after a few minutes I was at the 82c it likes to sit at, and clocks were slowly dropping but already down to 1630.
> 
> View attachment 2475381


That to me is odd, having the temps peak so quickly; even the crappiest cards I've had took a while to hit 82c, and we're talking about old NVIDIA reference blower cards.
I guess when mine shows up next week I'll test it and we'll know for sure.

If you see here, they put the Inno3D x4 against the MSI Gaming X, the Inno3D x4 was about 3fps behind.


----------



## mattFE

Motherboard options for SLI?


----------



## long2905

Gandyman said:


> Im not one to just cry about stuff on the internet, but in 20 years of pc building (recreational and professionally) I have never been so severely dissapointed by a GPU. My PC is wall mounted in a Core P3 directly infront of a split system aircon outlet. At stock, within 5 minutes of running Heaven, the fans ramp to 100%, clock speed falls to 1500ish, and temperature sits at 84c. If temperature starts to get worse it will easily fall to 1300 or 1400 mhz. Playing Metro Exodus with RT on, it was so hot I observed 1100mhz core clock at one point. Keep in mind that this is at 100% fan speed, which is UNBEARABLY loud. I realize I do have a open-air case, however previous to this I owned a MSI 2080ti gaming Z trio, prior to that a Gigabyte 2080 Aorus, And previous to that a 1080ti Strix, all in the Core p3. The gigabyte was by far the loudest of the 3, but I would only hear it very late at night if all else was exceptionally still. This ichill is the loudest card I have ever owned, and I have owned a reference gtx 580. How any $3000 card could be allowed to leave the factory and be sold in this state blows my mind.
> 
> TLDR; tell PC Case Gear to shove their garbage iChill trash where the sun don't shine, a 15% restocking fee on 3k is rough .. but its money well spent.


It sounds to me like yours has a bad paste job from the factory. You can try a repaste yourself if you feel like it; otherwise just trade it in for another card and save a lot of headache. The stock air cooler is of course awful, but it's not to that extent for me.


----------



## WilliamLeGod

schoolofmonkey said:


> That to me is odd, having the temps peak so quickly; even the crappiest cards I've had took a while to hit 82c, and we're talking about old NVIDIA reference blower cards.
> I guess when mine shows up next week I'll test it and we'll know for sure.
> 
> If you see here, they put the Inno3D x4 against the MSI Gaming X, the Inno3D x4 was about 3fps behind.


U believe in these **** benchmark videos?


----------



## Nammi

Esenel said:


> At what flow?
> At the link at the bottom we tested several blocks already under different conditions. Just contribute there as well.
> 
> Best blocks for the Strix so far seem to be Bykski and EK with liquid metal.
> Alphacool is also very solid, but someone mentioned it's very restrictive for flow.
> 
> Performance from the AquaComputer one is mediocre.
> 
> 
> [Übersicht] - RTX 30x0 Wasserkühlervergleich | GPU Block Comparison
> 
> www.hardwareluxx.de


I don't have a flowmeter, ran the D5 at max.

Sure I'll drop by that thread at some point.


----------



## WayWayUp

Can anyone help me figure out what's going on, please?
Shunted FTW3 using the 520W BIOS.

At ambient temps I'm able to run +1,100 memory without issue. What I noticed however was that when I introduced a little bit of cold, I had to actually _reduce_ my memory offset to +1,000

Yesterday I introduced a little bit more cold. This was the coldest run yet. What surprised me was that I kept crashing at offsets that should have easily run?!?

Finally I realized that I had to reduce my memory offset AGAIN because the temps were colder. So I kept lowering memory until I arrived at a 900 offset









I scored 15 820 in Port Royal

Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10

www.3dmark.com




I was able to score nearly the same at 16c as I am at 11c, unfortunately. I'm afraid even the 900 is at the edge and possibly making me score lower.

I even game at +1000 memory in CP for hours with a heated up loop, so you can understand my confusion 

Managed to score 15,820, but I was very disappointed with my result due to memory instability at colder temps. I feel like I would be at 16,000 by now if the memory were responding to colder temps as it should, increasing instead of decreasing.










Is the power limit of my BIOS reserving more for the core and robbing the memory? What could possibly be the reason?


----------






## geriatricpollywog

WayWayUp said:


> Can anyone help me figure out what's going on, please?
> shunted ftw3 using 520w bios
> 
> At ambient temps I'm able to run +1,100 memory without issue. What I noticed however was that when I introduced a little bit of cold, I had to actually _reduce_ my memory offset to +1,000
> 
> Yesterday I introduced a little bit more cold. This was the coldest run yet. What surprised me was that I kept crashing at offsets that should have easily run?!?
> 
> Finally I realized that I had to reduce my memory offset AGAIN because the temps were colder. So I kept lowering memory until I arrived at a 900 offset
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 820 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> I was able to score nearly the same at 16c as I am at 11c, unfortunately. I'm afraid even the 900 is at the edge and possibly making me score lower.
> 
> I even game at +1000 memory in CP for hours with a heated up loop, so you can understand my confusion
> 
> Managed to score 15,820, but I was very disappointed with my result due to memory instability at colder temps. I feel like I would be at 16,000 by now if the memory were responding to colder temps as it should, increasing instead of decreasing.
> 
> 
> 
> Is the power limit of my BIOS reserving more for the core and robbing the memory? What could possibly be the reason?


Your memory clocks seem very low compared to other results in that score range. I am seeing 1370-1385. Don’t pay attention to the stable offset, pay attention to the average memory frequency in PR. Sometimes lowering the offset can improve stability and average speed. Do you have a fan on the backplate?
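For context on why the average frequency matters more than the offset, here's the usual conversion from the reported GDDR6X memory clock to effective bandwidth (standard math for a 384-bit card; the 16x PAM4 multiplier is the commonly used rule of thumb, so treat it as an assumption):

```python
# Effective GDDR6X bandwidth from the memory clock monitoring tools report.
def bandwidth_gbs(mem_clock_mhz, bus_bits=384):
    data_rate_gbps = mem_clock_mhz * 16 / 1000   # per-pin data rate (PAM4)
    return data_rate_gbps * bus_bits / 8         # GB/s across the whole bus

print(bandwidth_gbs(1219))   # stock 3090: 936.192 GB/s (matches the spec sheet)
print(bandwidth_gbs(1385))   # ~1064 GB/s at the 1385 MHz seen in good PR runs
```

A run that crashes less but averages a lower clock (from error re-reads) can still score worse, which is why the averaged frequency in the result is the number to watch.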


----------



## WayWayUp

No fan, just the Optimus backplate.


----------



## WayWayUp

Check out this score with very warm temps, before I received my water block:
62c average, with temps reaching nearly 70c by the end of the run.








I scored 14 991 in Port Royal


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com





Notice the memory clock at 1341... this was with the stock FTW3 backplate.
The clock frequency is actually higher here than it is on my 11c run with the thick Optimus backplate and a massive Fujipoly pad covering the entire backside...

Something fishy is going on. My theory is that the core is requesting more power limit in the cold (due to the high frequencies) and it's taking it away from the memory power budget?


----------



## SoldierRBT

@WayWayUp 

I read a discussion a few pages back about the 520W BIOS having a low power limit for memory. Can you try the 500W BIOS and see if it improves your memory OC?

That's a nice result. Your score is being affected by your memory OC that's for sure. Going from +1200 to +1250 memory I got 100 extra score with an avg core of 2223MHz and 48C.


----------



## McFluff

I wanted to share some info on trying to cool the backplate on my 3090 FE. I have found a pretty effective way to cool it, but I'm looking for a permanent solution.

I first discovered this issue while mining eth. I bought this card for gaming, but seeing that Nicehash was paying ~$10 a day, I figured that I'd leave it running when I wasn't using the computer. The issue that I ran into was one that others in this thread have described: while performing the memory-intensive task of Eth mining, the fans would ramp way up even though the GPU is nice and cool at 42c. I assume this is due to the card reading high temperatures from the memory on the back.

Since I can't read the memory temperatures on the FE and the fan was way too loud to justify the annoyance of making about 25 cents an hour, I started to fiddle around with trying to cool the backplate to see if I could lower the percentage that the fans automatically ramp up to. I had some marginal success by lowering my case temperatures, leaving the side panel off and aiming a fan directly at the backplate, but it was still much louder than I'd consider bearable.

Moving past simply trying to cool the air around the card, I started to look into ways to actively cool the backplate. Unfortunately, I didn't have any spare heatsinks, but I did have a spare AIO that I was able to MacGyver into position by using a kitchen utensil as a mounting device.










Not even using any sort of thermal pad or paste, just pressing the coldplate against the backplate, here is the result: the fans quickly drop down to 58%, and when removing the coldplate they shoot right back up to 100%.










In terms of the real-world effect this has had on mining, here are some results I had while trying to find a good overclock combo for Nicehash. Highlighting what I finally settled on as a good hash rate and fan speed.










In terms of gaming benchmarks, I was able to complete Port Royal with an extra 150 memory overclock when using the AIO: +1100 was the max I could get without the AIO, +1250 with.

An AIO seems like overkill for cooling a backplate; anyone have a better solution? I am considering ordering a CPU heatsink and sticking it to the card, but I sort of want to find a solution that maintains aesthetics.


----------






## GAN77

How does a 1000 watt BIOS behave? Do the chip and memory frequencies drop back down at idle?


----------



## des2k...

GAN77 said:


> How does a 1000 watt bios behave? Is the chip and memory frequency reset during idle time?


The chip drops frequency; the memory runs at full speed only.


----------



## des2k...

McFluff said:


> I wanted to share some info on trying to cool the backplate on my 3090 FE. I have found a pretty effective way to cool it, but I'm looking for a permanent solution.
> 
> I first discovered this issue while mining eth. I bought this card for gaming, but seeing that Nicehash was paying ~$10 a day, I figured that I'd leave it running when I wasn't using the computer. The issue that I ran into was one that others in this thread have described, while performing the memory intensive task of Eth mining, the fans would ramp way up even though the GPU is nice and cool at 42c. I assume this is due to the card reading high temperatures from the memory on the back.
> 
> Since I can't read the memory temperatures on the FE and the fan was way too loud to justify the annoyance of making about 25 cents an hour, I started to fiddle around with trying to cool the backplate to see if I could lower the percentage that the fans automatically ramp up to. I had some marginal success by lowering my case temperatures, leaving the side panel off and aiming a fan directly at the backplate, but it was still much louder than I'd consider bearable.
> 
> Moving past simply trying to cool the air around the card, I started to look into ways to actively cool the backplate. Unfortunately, I didn't have any spare heatsinks, but I did have a spare AIO that I was able to MacGyver into position by using a kitchen utensil as a mounting device.
> 
> View attachment 2475460
> 
> 
> Not even using any sort of thermal pad or paste, just pressing the coldplate against the backplate, here is the result: the fans quickly drop down to 58%, and when removing the coldplate they shoot right back up to 100%.
> 
> View attachment 2475461
> 
> 
> In terms of the real-world effect this has had on mining, here are some results I had while trying to find a good overclock combo for Nicehash. Highlighting what I finally settled on as a good hash rate and fan speed.
> 
> View attachment 2475462
> 
> 
> In terms of gaming benchmarks, I was able to complete Port Royal with an extra 150 memory overclock when using the AIO, +1100 was the max I could get without the AIO, +1250 with.
> 
> An AIO seems like overkill for cooling a backplate; anyone have a better solution? I am considering ordering a CPU heatsink and sticking it to the card, but I sort of want to find a solution that maintains aesthetics.


I would use paste or a thermal pad; usually you don't want copper and aluminum in direct contact


----------



## mirkendargen

des2k... said:


> I would use paste or a thermal pad; usually you don't want copper and aluminum in direct contact


This only matters when there's something between them that can carry ions. Just aluminum and copper touching isn't a problem; the vast majority of CPU coolers and a lot of radiators are copper bases/tubing/heat pipes connected to aluminum fins.


----------



## Canson

There was a guy who talked about audio crackling in this thread; I actually experience this too now (not talking about the Cyberpunk 2077 audio crackling).

I hear it in Twitch or YouTube when skipping through videos. I don't remember having this issue before lol

Removed the overclock and still had crackling.

Does anyone have audio crackling?

Test this video: skip around or change the volume down and up and see if you get crackling


----------



## bmgjet

Canson said:


> There was a guy who talked about audio crackling in this thread; I actually experience this too now (not talking about the Cyberpunk 2077 audio crackling).
> 
> I hear it in Twitch or YouTube when skipping through videos. I don't remember having this issue before lol
> 
> Removed the overclock and still had crackling.
> 
> Does anyone have audio crackling?
> 
> Test this video: skip around or change the volume down and up and see if you get crackling


I'm on my phone, and changing volume or skipping on that video causes a pop from the speaker, like old YouTube videos do because of their codec.


----------



## truehighroller1

Guys, quick question about my Bykski waterblock and my 3090 Suprim X, if you could please; I just need your opinions on this one.

Okay, I noticed something with my 3090 Suprim X and this block, and it makes me wonder whether it should cause concern. I would think it would, considering that MSI had these spots cooled on the stock cooler.


View attachment 2475494


Sorry for the potato spelling, and maybe the quality too; it wouldn't let me upload a high-quality picture like I wanted, and I edited it too quickly. The point is that this stuff is either not getting cooled at all or barely anymore. Wouldn't that be a concern for these areas?

Edit:

This is the card so you can see what it was cooling as well I suppose.

View attachment 2475498


----------



## Nizzen

truehighroller1 said:


> Guys, quick question about my Bykski waterblock and my 3090 Suprim X, if you could please; I just need your opinions on this one.
> 
> Okay, I noticed something with my 3090 Suprim X and this block, and it makes me wonder whether it should cause concern. I would think it would, considering that MSI had these spots cooled on the stock cooler.
> 
> 
> View attachment 2475494
> 
> 
> Sorry for the potato spelling, and maybe the quality too; it wouldn't let me upload a high-quality picture like I wanted, and I edited it too quickly. The point is that this stuff is either not getting cooled at all or barely anymore. Wouldn't that be a concern for these areas?


Some pads are just for stability/support. Some components don't get very hot when the other components are watercooled.

The PCB itself works as a heatspreader.


----------



## GAN77

truehighroller1 said:


> Guy's quick questions about my Bykski waterblock in regards to my 3090 suprim


You are right to be concerned about those spots. The Bykski VRM cooling zone is shorter than the Suprim's.


----------



## RosaPanteren

McFluff said:


> I wanted to share some info on trying to cool the backplate on my 3090 FE. I have found a pretty effective way to cool it, but I'm looking for a permanent solution.
> 
> I first discovered this issue while mining eth. I bought this card for gaming, but seeing that Nicehash was paying ~$10 a day, I figured that I'd leave it running when I wasn't using the computer. The issue that I ran into was one that others in this thread have described, while performing the memory intensive task of Eth mining, the fans would ramp way up even though the GPU is nice and cool at 42c. I assume this is due to the card reading high temperatures from the memory on the back.
> 
> Since I can't read the memory temperatures on the FE and the fan was way too loud to justify the annoyance of making about 25 cents an hour, I started fiddling with cooling the backplate to see if I could lower the percentage that the fans automatically ramp up to. I had some marginal success by lowering my case temperatures, leaving the side panel off and aiming a fan directly at the backplate, but it was still much louder than I'd consider bearable.
> 
> Moving past simply trying to cool the air around the card, I started to look into ways to actively cool the backplate. Unfortunately, I didn't have any spare heatsinks, but I did have a spare AIO that I was able to MacGyver into position by using a kitchen utensil as a mounting device.
> 
> View attachment 2475453
> 
> 
> Not even using any sort of thermal pad or paste, just pressing the coldplate against the backplate, here is the result. The fans quickly drop down to 58%, and when I remove the coldplate they shoot right back up to 100%.
> 
> View attachment 2475455
> 
> 
> In terms of the real-world effect this has had on mining, here are some results I had while trying to find a good overclock combo for Nicehash. Highlighting what I finally settled on as a good hash rate and fan speed.
> 
> View attachment 2475456
> 
> 
> In terms of gaming benchmarks, I was able to complete Port Royal with an extra 150 memory overclock when using the AIO: +1100 was the max I could get without the AIO, +1250 with.
> 
> An AIO seems like overkill for cooling a backplate; does anyone have a better solution? I am considering ordering a CPU heatsink and sticking it to the card, but I'd rather find a solution that maintains aesthetics.


I have been concerned about the same thing; you could fry a steak on those backplates...

Igor's Lab had an article about memory temps on the 3080, measured at 104C... and I think I read somewhere that the recommended operating temperature range for GDDR6X is between *0C* and *95C* (*edit: it might be a max of 110C for GDDR6X*): *GDDR6X at the limit? Over 100 degrees measured*

As an intermediate solution I connected a spare CPU block. It's a bit messy with the tubing now, but I'll clean that up when my RAM coolers come next week. The block is only pressed against the backplate by gravity and the force from the tubing, hence the arch.









The MP5Works unit is one solution, but I went for two Alphacool RAM coolers instead, as they were in stock and also give you the option of running them in serial with normal G1/4 fittings for a higher flow rate. Compared to the CPU block, these RAM coolers give 3x the cooling surface, effectively covering all memory modules. There are thermal pads between the backplate and the memory modules, so this should work okay. I'm not sure how I'm going to attach the coolers to the backplate; I might try to solder the edges.

There has also been some speculation about the last couple of drivers secretly nerfing the memory clock to prevent excessive heating: NVIDIA 461.09 Driver on a 3090 and the secret memory PerfCap

I have no idea if this is a thing, but the memory is running crazy hot, that's for sure.


----------



## truehighroller1

GAN77 said:


> You are right to be concerned about those spots. The Bykski VRM cooling zone is shorter than the Suprim's.
> 
> View attachment 2475502
> 
> 
> View attachment 2475503



Yep, and not only that, but I just wasted a ton of time finding out that the hole isn't CNC'd out deep enough to let the capacitors sit inside it, so this block isn't working for me either....












I really hate going through this, but I guess I'll just put everything back together and purchase a different waterblock later down the road, unfortunately.


----------



## motivman

WayWayUp said:


> Can anyone help me figure out please whats going on.
> shunted ftw3 using 520w bios
> 
> At ambient temps I'm able to run +1,100 memory without issue. What I noticed, however, was that when I introduced a little bit of cold, I had to actually _reduce_ my memory offset to +1,000
> 
> Yesterday I introduced a little bit more cold. This was the coldest run yet. What surprised me was that I kept crashing at offsets that should have run easily?!
> 
> Finally I realized that I had to reduce my memory offset AGAIN because the temps were colder. So I kept lowering memory until I arrived at a 900 offset.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 820 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> I scored nearly the same at 16C as I did at 11C, unfortunately. I'm afraid even the 900 offset is at the edge and possibly making me score lower.
> 
> I even game at +1000 memory in CP for hours with a heated up loop, so you can understand my confusion
> 
> I managed to score 15,820, but I was very disappointed with my result due to memory instability at colder temps. I feel like I would be at 16,000 by now if the memory responded to colder temps as it should, increasing instead of decreasing.
> 
> View attachment 2475425
> 
> 
> Is the power limit of my BIOS reserving more for the core and robbing the memory? What could possibly be the reason?


Same issue with my reference card: the colder I go, the more unstable my memory gets.
At normal ambient temps, i.e. 48C load, I can run +1000 on memory.
At colder temps, i.e. 26-30C load, my memory is stable at a maximum of +850; sometimes I have to go down to +800.


----------



## schoolofmonkey

Nammi said:


> I don't have a flowmeter, ran the D5 at max.
> 
> Sure I'll drop by that thread at some point.


I don't believe anything until I try it for myself, which hopefully by Wednesday I'll be able to.


----------



## truehighroller1

truehighroller1 said:


> Yep, and not only that, but I just wasted a ton of time finding out that the hole isn't CNC'd out deep enough to let the capacitors sit inside it, so this block isn't working for me either....
> 
> 
> 
> View attachment 2475507
> 
> 
> I really hate going through this, but I guess I'll just put everything back together and purchase a different waterblock later down the road, unfortunately.



It's about 3/32" too small for them to fit down into the hole...


----------



## Canson

bmgjet said:


> Im on my phone and changing volume or skipping on that video causes a pop from the speaker like old youtube videos do because of there codec.


You're right about that video; I had the pop on my phone too. 

But this video demonstrates it better; this is exactly the problem I have. Could you please try it on your PC and see if yours sounds the same?


----------



## mattFE

mattFE said:


> Motherboard options for SLI?


Thanks guys!


----------



## Thanh Nguyen

I have a question. Currently I'm running 1.087V at 2250MHz; the effective clock is around 2190MHz. I tried pumping the voltage up to 1.093V or 1.1V so I can get 2265MHz and a 2205MHz effective clock, because 1.087V can't run 2265MHz in Metro. However, when I use 1.093V-1.1V, the effective clock is only 2085-2100MHz. What is going on?


----------



## SoldierRBT

@Thanh Nguyen 

When I see internal clocks drop a lot, it could mean the core is unstable, you may be hitting a PL somewhere, or the NVVDD table isn't providing enough voltage. Have you tried an offset OC? Internal clocks should be 15-20MHz below requested clocks.
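The rule of thumb above (effective clocks sitting roughly 15-20MHz under requested) is easy to turn into a quick sanity check when eyeballing monitoring logs. A minimal sketch; the 20MHz threshold is just the figure from this post, not anything official:

```python
def clock_sag(requested_mhz, effective_mhz, healthy_gap_mhz=20):
    """Compare requested vs effective (internal) core clock for one sample.

    Effective clocks normally sit ~15-20 MHz below requested; a much larger
    gap suggests the core is unstable, power limited, or voltage starved.
    Returns (gap_in_mhz, True if the gap looks suspicious).
    """
    gap = requested_mhz - effective_mhz
    return gap, gap > healthy_gap_mhz

# Healthy: 2265 requested vs 2250 effective is a normal 15 MHz gap.
print(clock_sag(2265, 2250))  # (15, False)
# The symptom described above: 2250 requested vs ~2100 effective.
print(clock_sag(2250, 2100))  # (150, True)
```

Feeding it per-sample values exported from a monitoring tool makes it obvious when the card starts silently pulling clocks.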


----------



## truehighroller1

truehighroller1 said:


> It's about 3/32" too small for them to fit down into the hole...
> 
> View attachment 2475512
> View attachment 2475513



Side note: my card is running about 8C cooler now that I had to tear it down for no reason and put it all back together, lol.


----------



## whitewizard

Vikingcools said:


> Thanks! Well, I only need 1 hdmi and 1 DP. I'll give those a look.


I am torn between the Asus EKWB 3090 and the Strix 3090; both were reserved for me. I really like the one-slot design of the EKWB, but I am worried about the 2x 8-pin power connectors and how much overclocking can be done. Between a maxed-out Strix and a maxed-out EKWB, what kind of performance difference (both stock and OC) should I expect? Any advice, as I have to go pick one or the other? Thanks


----------



## geriatricpollywog

WayWayUp said:


> check out this score with very warm temps before i received my water block.
> 62c average with temps reaching nearly 70c by end of run
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 991 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> notice the memory clock at 1341.... this was with stock ftw3 backplate
> The clock frequency is actually higher here than it is in my 11C run with the thick Optimus backplate and a massive Fujipoly pad covering the entire backside...
> 
> Something fishy is going on. My theory is that the core is requesting more power limit in the cold (due to the high frequencies) and it's taking it away from the memory power budget?


Do you notice any difference between your first run (cold) and your last (hot) run when running PR in series?


----------



## whitewizard

Can someone tell me if there are any benchmarks, BIOS flash successes, or overclocking results for the Asus EKWB card, or is it not worth it and better to go for a Strix OC version? Thanks


----------



## jomama22

WayWayUp said:


> Can anyone help me figure out please whats going on.
> shunted ftw3 using 520w bios
> 
> At ambient temps I'm able to run +1,100 memory without issue. What I noticed, however, was that when I introduced a little bit of cold, I had to actually _reduce_ my memory offset to +1,000
> 
> Yesterday I introduced a little bit more cold. This was the coldest run yet. What surprised me was that I kept crashing at offsets that should have run easily?!
> 
> Finally I realized that I had to reduce my memory offset AGAIN because the temps were colder. So I kept lowering memory until I arrived at a 900 offset.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 820 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> I scored nearly the same at 16C as I did at 11C, unfortunately. I'm afraid even the 900 offset is at the edge and possibly making me score lower.
> 
> I even game at +1000 memory in CP for hours with a heated up loop, so you can understand my confusion
> 
> I managed to score 15,820, but I was very disappointed with my result due to memory instability at colder temps. I feel like I would be at 16,000 by now if the memory responded to colder temps as it should, increasing instead of decreasing.
> 
> View attachment 2475425
> 
> 
> Is the power limit of my BIOS reserving more for the core and robbing the memory? What could possibly be the reason?


Are you using just straight water? Or some sort of premix? Curious as to how you can get down to 11c under load.


----------



## Nammi

schoolofmonkey said:


> I don't believe anything until I try it for myself, which hopefully by Wednesday I'll be able to.


Well, hopefully you'll have a better experience... Performance aside, I was part of the early batch, so I needed to drill the acrylic, and now I've ordered extra thermal pads to hopefully quiet down the banshee (coil whine) that the block turned my card into. =D


----------



## pantsoftime

Looking for some help on retrieving memory temps. Does anyone have a tool for it? I'd like to do some experiments with backplate cooling and it would be nice to get some hard data.
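For what it's worth, GDDR6X junction temperature isn't exposed through nvidia-smi/NVML on GeForce cards (tools like HWiNFO read it by other means), but for before/after backplate experiments you can at least log what the driver does expose. A minimal polling sketch, assuming `nvidia-smi` is on the PATH:

```python
import subprocess
import time

def parse_smi_row(line):
    """Parse one 'temp, power' CSV row from nvidia-smi into floats."""
    temp_c, power_w = (float(x) for x in line.split(", "))
    return temp_c, power_w

def log_gpu(samples=5, interval_s=2.0):
    """Poll core temperature and board power while an experiment runs.

    Uses only documented nvidia-smi query fields; memory junction temp is
    not among them on these cards, so treat this as a baseline logger.
    """
    for _ in range(samples):
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=temperature.gpu,power.draw",
             "--format=csv,noheader,nounits"],
            text=True)
        for line in out.strip().splitlines():
            print(parse_smi_row(line))
        time.sleep(interval_s)
```

Running it once before and once after a cooling change gives comparable numbers, even if the interesting memory-junction reading still has to come from HWiNFO.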


----------



## long2905

pantsoftime said:


> Looking for some help on retrieving memory temps. Does anyone have a tool for it? I'd like to do some experiments with backplate cooling and it would be nice to get some hard data.


You can always stick a temperature sensor on the backplate right next to a memory chip


----------



## Vikingcools

Vikingcools said:


> Thanks! Well, I only need 1 hdmi and 1 DP. I'll give those a look.


However, the FE BIOS is 350W -> 400W. Any reason this won't work? The 12-pin thing?


----------



## Damaged__

whitewizard said:


> Can someone tell me if there are any benchmarks, BIOS flash successes, or overclocking results for the Asus EKWB card, or is it not worth it and better to go for a Strix OC version? Thanks


The Asus EKWB is limited to a 366W max power limit at stock, so your only option is to shunt it or flash the KP BIOS for more power, whereas with the Strix you can daily the 520W BIOS just fine.


----------



## TechnoPeasant

Anyone know if the Zotac 3090 is dual BIOS?

Have one coming tomorrow and looking to bump the power limit using the KFA2 BIOS, but I’m curious if there’s a way to reflash the card in case of issue. Thanks!


----------



## Vikingcools

Damaged__ said:


> Asus EKWB is limited at stock to 366W max power limit, so your only option is to shunt it or flash the KP bios for more power, whereas with the strix you can daily the 520w bios just fine.


Why the KP bios and not the FE bios? Any reason the FE bios won't work on this card?


----------



## Damaged__

Vikingcools said:


> Why the KP bios and not the FE bios? Any reason the FE bios won't work on this card?


You can't flash the FE BIOS without a cert-modded nvflash, which isn't available yet (to my knowledge).


----------



## Vikingcools

Damaged__ said:


> You can't flash the FE bios without a cert modded nvflash which isn't available yet (to my knowledge).


I'm in no hurry, and the card is absolutely awesome. Having a few extra watts to play around with would be nice, but I can wait. 

Some other dude asked about the performance. From my perspective it's fantastic. It has absolutely zero coil whine, and all the other cards I have tested so far, including 6800, 6900 XT and 3080/3090s, have had it. Even my 2080 Ti had a bit of it. The 6900 XT was HORRIBLE. This card is a dream. I have it running at 1950MHz at 0.912V and it's doing great. I don't care much about Timespy stuff, but I'll run a round of Timespy at my daily-driver settings and report back.


----------



## Thomas73

Zotac RTX 3090 Trinity


----------



## jura11

Vikingcools said:


> Why the KP bios and not the FE bios? Any reason the FE bios won't work on this card?


You can't flash an FE with any other BIOS to date; this applies to previous RTX generations too. FE BIOSes are only compatible with FE cards.

And you can't use the FE BIOS on AIB GPUs.

Hope this helps.

Thanks, Jura


----------



## Vikingcools

jura11 said:


> You can't flash an FE with any other BIOS to date; this applies to previous RTX generations too. FE BIOSes are only compatible with FE cards.
> 
> And you can't use the FE BIOS on AIB GPUs.
> 
> Hope this helps.
> 
> Thanks, Jura


Thanks. There is no chance a version of NVflash that makes this possible will surface, then?

I did a Port Royal test where I removed my curve and just set +150 core and +1000 mem. It ran fine: 13500, which I guess is completely average. It did so at 44C without making a sound though, so I'm happy with that.

Unsure if I want to test the KFA2 BIOS. Same number of DP and HDMI ports, so that should be OK. My card only has that single BIOS though, and that's a bit scary. I think I'll just be happy with the performance I'm getting and try some more curve settings with undervolt+overclock instead of messing with the BIOS for now.


----------



## Vikingcools

Nammi said:


> For those interested in the Strix AC kryographics block, here's how it performs on my end.
> 
> Running TS Extreme on the 520w bios for about 30min after heating up the loop gave avg water to core delta of 16.3C, with lowest blips to 14C and highest 19C.


That is interesting! I considered Kryographics or Optimus for an FTW3 Ultra or Strix OC, but eventually went with the readily available "premade" Asus EKWB instead, via a super-coil-whiny 6900 XT with an EKWB block.

Running with my gaming curve, which is a 370W PL (obviously quite a bit less than your card) and [email protected], this prebuilt EKWB held a steady delta of between 17-18C. No real "spikes". Obviously after warming up the loop for many hours. When I get around to it I will replace the paste on this thing with Kryonaut Extreme, as I am sure they are using something that performs worse.

Pretty sure your graphics score there puts mine to shame though. I got 10,251 for graphics on that run...


----------



## motivman

Anyone interested in buying a shunt-modded (5mOhm) reference card with an EK waterblock? It's running the stock PNY BIOS. Benchmark results below. Looking to let this one go and "upgrade" to a Strix. The card has literally zero coil whine. Send offers to my PM; USA ONLY.

https://www.3dmark.com/3dm/57294221 --- stock (no overclock)
https://www.3dmark.com/3dm/57294627 ---- CP2077 stable overclock +150/+800
https://www.3dmark.com/3dm/57294806 ---- Highest stable for PR, not stable in games +195/+1000 (stock bios)
https://www.3dmark.com/pr/810269 ---- on 520W bios (currently #54 in the world for single cards)


----------



## Nizzen

motivman said:


> Anyone interested in buying a shunt modded (5mohm) reference card with EK waterblock? it is running on stock PNY bios. Benchmark results below. Looking to let this go and "upgrade" to a STRIX. Card has literally Zero coil-whine. Send offers to my PM, USA ONLY.
> 
> https://www.3dmark.com/3dm/57294221 --- stock (no overclock)
> https://www.3dmark.com/3dm/57294627 ---- CP2077 stable overclock +150/+800
> https://www.3dmark.com/3dm/57294806 ---- Highest stable for PR, not stable in games +195/+1000 (stock bios)
> https://www.3dmark.com/pr/810269 ---- on 520W bios (currently #54 in the world for single cards)


Nice Port Royal run. Nice and cold! You almost beat my 15671 score


----------



## motivman

Nizzen said:


> Nice Port Royal run. Nice and cold! You almost beat my 15671 score


Haha. The card is pretty nice for a 2x8-pin. I might end up getting a Strix and have it turn out to be a dud, or even worse than my reference, lol. I think I could score higher on this card, but the memory is holding me back. What card do you have?


----------



## Nizzen

motivman said:


> Haha. The card is pretty nice for a 2X8 pin, I might end up getting a Strix and it turns out to be a DUD or even worse than my reference, lol. I think I would score higher on this card, but the memory is holding me back. What card do you have?


Average bin 3090 Strix oc


----------



## jura11

Vikingcools said:


> Thanks. There is no chance a version of NVflash that makes this possible will surface then?
> 
> I did a port royal test where I removed my curve and just set +150 core and +1000 mem. It ran fine. 13500, which I guess is completely average. It did so at 44C without making a sound though, so I'm happy with that.
> 
> Unsure if I want to test the KF-BIOS. Same number of DP and HDMI, so that should be ok. My card only has that single bios though, and thats a bit scary. I think I'll just be happy with the performance I'm getting, and try some more curve settings with undervolt+overclock instead of messing with the BIOS for now.


Hard to say if there will ever be an NvFlash that allows that; maybe one will surface.

My best result with the KFA2 390W BIOS is 14345 points in Port Royal, and with the XOC BIOS at 75 or 80% power limit my best score is 14908. With the stock BIOS I think my score was 13492 without any OC, at stock power limit and VRAM, and with +150MHz on the core and +1250MHz on the VRAM my score was 14009 points.

BIOS flashing is easy, but if you are scared of it then I wouldn't do it. I've done it, and I think I have tried most of the BIOSes on my RTX 3090 by now.

Hope this helps.

Thanks, Jura


----------



## changboy

I've installed my EK waterblock on my FTW3 Ultra and temps are very good: the GPU is at 44C and boosts to 2145MHz with +150 on the core, but I haven't modded my PCB, so I'm limited by power. From 14256 in Port Royal on air I'm now at 14600, and I can OC to +200MHz on the core without crashing. I haven't tried any BIOS other than the XOC and didn't gain much with it; maybe my board is limiting the power.
This GPU-Z pic is after playing Assassin's Creed Odyssey for an hour.


----------



## Exilon

changboy said:


> I've installed my EK waterblock on my FTW3 Ultra and temps are very good: the GPU is at 44C and boosts to 2145MHz with +150 on the core, but I haven't modded my PCB, so I'm limited by power. From 14256 in Port Royal on air I'm now at 14600, and I can OC to +200MHz on the core without crashing. I haven't tried any BIOS other than the XOC and didn't gain much with it; maybe my board is limiting the power.
> This GPU-Z pic is after playing Assassin's Creed Odyssey for an hour.


What temperatures are you getting on the ICX sensors?


----------



## changboy

This is while playing Watch Dogs: Legion. The pic isn't great because the game was paused while I took it; I'll post another one with FurMark.


----------



## Exilon

You can access the ICX sensors in latest GPU-Z as well. No need to swap to Precision.
Those PCB temperatures look great.


----------



## bmgjet

Built-in shunt-mod jumpers.


----------



## changboy

This is with the FurMark benchmark. I used liquid metal as thermal paste.


----------



## changboy

My temps dropped 40C from air cooling! And I found the original thermal paste application was barely enough: some spots on the die had no paste, and it was nearly dry too.

Now that I have installed my waterblock I will start playing with some BIOSes to see if I can improve my performance, and in the end I'll see if I want to mod my PCB; I'm not sure it's worth doing for the small improvement.


----------



## pat182

bmgjet said:


> Built in shuntmod jumpers.
> View attachment 2475660


Looks sick


----------



## Falkentyne

pat182 said:


> look sick


What brand is this?


----------



## bmgjet

Falkentyne said:


> What brand is this?











GALAX GeForce RTX 3090 HOF PCB pictured with monstrous VRM design - VideoCardz.com


GALAX preparing GeForce RTX 3090 HOF We received pictures of the upcoming GALAX GeForce RTX 3090 flagship graphics card known as HOF. Back in November, Galax began teasing its HOF series through a virtual expo, the manufacturer’s innovative attempt to showcase new products during the unfortunate...




videocardz.com


----------



## Sheyster

Is anyone actually using the 3090 FE BIOS on a 3 x 8-pin card in here? Just wondering how well it's working. It should net ~600W power limit.


----------



## mirkendargen

Sheyster said:


> Is anyone actually using the 3090 FE BIOS on a 3 x 8-pin card in here? Just wondering how well it's working. It should net ~600W power limit.


What's the reasoning behind thinking this should work? I tried the TUF BIOS on my Strix and it maintained the TUF power limit. Is it something specific about FE's?


----------



## Falkentyne

Sheyster said:


> Is anyone actually using the 3090 FE BIOS on a 3 x 8-pin card in here? Just wondering how well it's working. It should net ~600W power limit.


You need the Ampere NVflash with ID checks bypassed for that and I don't think anyone has uploaded it anywhere.


----------



## Sheyster

mirkendargen said:


> What's the reasoning behind thinking this should work? I tried the TUF BIOS on my Strix and it maintained the TUF power limit. Is it something specific about FE's?


It should be similar to the 2x8-pin XC3 BIOS working on 3x8-pin cards. Power limits are under-reported which increases the real power limit the card can pull.
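The under-reporting described here follows the same arithmetic as a hardware shunt mod and is easy to sanity-check. A small sketch; the 5 mOhm stock value and the 350W reported figure are just illustrative assumptions, not measurements from any specific card:

```python
def parallel_mohm(r1, r2):
    """Effective resistance of two shunt resistors in parallel, in mOhm."""
    return r1 * r2 / (r1 + r2)

def actual_power_w(reported_w, stock_mohm, real_mohm):
    """Real draw when the card's sense shunts read lower than it assumes.

    The card derives power from the voltage drop across its shunts assuming
    the stock resistance, so it under-reports by the ratio stock/real.
    """
    return reported_w * stock_mohm / real_mohm

# Classic mod: a second 5 mOhm shunt soldered on top of the stock one.
r_eff = parallel_mohm(5.0, 5.0)           # 2.5 mOhm effective
print(actual_power_w(350.0, 5.0, r_eff))  # reported 350 W -> real 700 W
```

The same ratio explains why a BIOS that assumes different shunt values can shift the real limit without the reported numbers changing.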


----------



## mirkendargen

Sheyster said:


> It should be similar to the 2x8-pin XC3 BIOS working on 3x8-pin cards. Power limits are under-reported which increases the real power limit the card can pull.


I think this is only an FTW3 thing, but I've only tried the TUF BIOS on the Strix, not the XC3 on the Strix. The limit wasn't under-reported: I checked total system power draw and eyeballed benchmarks and the frequency/voltage it hits the power limit at.

If this really worked like that, everyone would be using the KFA2 390W BIOS on 3x8-pin cards for what, 550-575W, instead of the 520W Kingpin BIOS.


----------



## sultanofswing

On my FTW3 Ultra even the Kingpin 520w BIOS has me all over the power limit.


----------



## mirkendargen

sultanofswing said:


> On my FTW3 Ultra even the Kingpin 520w BIOS has me all over the power limit.


Yeah, I think something is different with FTW3s specifically that makes the higher-wattage BIOSes (even the 500W one literally designed for them) not work without shunting the PCIe resistor, while 2x8-pin BIOSes give only them more power.


----------



## Thanh Nguyen

Can any card here run Metro Exodus RTX at a 2175MHz internal clock or above? Don't know why my card can run Timespy Extreme GT2 at a 2180-2190MHz internal clock but crashes instantly in Metro. It can only run 2160 in Metro.


----------



## Nizzen

Thanh Nguyen said:


> Any card here can run metro exodus rtx at 2175mhz internal clock or above? Dont know why my card can run timespy extreme gt2 at 2180-2190 internal clock but crash instantly in metro. It can run only 2160 in metro.


8nm sux, that's why 
Be happy with ~ 2100-2150mhz and wait for next generation


----------



## WMDVenum

@Falkentyne, I was crashing in WZ with a 150 core and 500 mem offset and reduced my videomemoryscale from 0.85 to 0.75. So far 3 hours and no crash. I think what happens is that when it is at 0.85 (the default) it leaks until 0.95 and crashes, according to my HWiNFO observations. Maybe this can help a bit with WZ stability. Testing +150/1150 again with the 0.75 limit.

I can also do +200/1200 in Port Royal, but gaming seems to max out at 150/1150. After all, a benchmark only needs 2 minutes of stability.


----------



## Falkentyne

WMDVenum said:


> @Falkentyne, I was crashing in WZ with a 150 core and 500 mem offset and reduced my videomemoryscale from 0.85 to 0.75. So far 3 hours and no crash. I think what happens is when it is at 0.85 (default) it leaks until 0.95 and crashes, according to my hwinfo observations. Maybe this can help a bit with WZ stability. Testing +150/1150 again with the 0.75 limit.
> 
> I can also do +200/1200 on Port royale but gaming seems to max at 150/1150. After all a benchmark only needs 2 minutes of stability.


You won the VRAM lottery. I can't even do +700 in Port Royal. It just black screens and then the entire computer reboots. 
I've had videomemoryscale at 0.75 for months now.

I also think I know what is stopping all cards (except either a Strix, or the 1000W Kingpin bios) from exceeding 600W, btw.


----------



## WMDVenum

Falkentyne said:


> You won the VRAM lottery. I can't even do +700 in Port Royale. It just black screens and then the entire computer reboots
> I had videomemoryscale already at 0.75 for months now.
> 
> I also think I know what is stopping all cards (except either a Strix, or the 1000W Kingpin bios) from exceeding 600W, btw.


Do tell. I also finally stuck a temp probe on the back of the 3090 FE, I think near the VRAM, between the backplate and PCB. It holds about 60C in CP2077; it was 50C in Warzone but went up to 60C if I changed the render scale to 200% (I run 1440p native). Trying to decide if I want to cool it more, but I doubt it will OC better, and 60C isn't bad for the back of the card, I think.


----------



## Falkentyne

I think it's something related to NVVDD2 or MSVDD power rails.

I noticed that before this throttle happens, the effective clocks drop far below the requested clocks, without any power flag being triggered....

This limit doesn't seem to scale at all below 100% TDP (it behaves as if it has a hard lower cap at whatever it is set to at 100% TDP), and has a max cap at whatever the "maximum" value for it is in the BIOS, which corresponds to the max TDP % slider position. In other words, it has a fixed minimum value and a fixed maximum value.
I don't think it corresponds to any of the shunts or primary power rails that can be adjusted by shunt mods (then again, I don't know).

I'll know for sure after I solder a 5 mOhm shunt onto the GPU Chip Power shunt and see whether I still get the same behavior from the effective clocks. Right now it's a total guess whether it's related to GPU Chip Power or whether it's just a hardwired fixed "default" and "maximum" value.


----------



## changboy

I managed to update my Port Royal score with my FTW3 on liquid, without modding my PCB: 14,766











----------



## GRABibus

sultanofswing said:


> On my FTW3 Ultra even the Kingpin 520w BIOS has me all over the power limit.


My EVGA FTW3 Ultra is throttling like hell in Timespy and the Heaven benchmark; whatever OC I set in MSI AB, the card won't boost beyond 1995MHz.


Sheyster said:


> Is anyone actually using the 3090 FE BIOS on a 3 x 8-pin card in here? Just wondering how well it's working. It should net ~600W power limit.





With my EVGA FTW3 Ultra, I'm throttling hard in Timespy and the Heaven benchmark. I didn't test Port Royal yet, but I assume it will be the same.
I mean, whatever OC clocks I set in MSI AB (even with the PL at 107% and max fans), my core clocks are always below 1995MHz and the max voltage is 0.987V...

Is this the thing you are talking about?

i have found this thread :








EVGA 3090 FTW POWER LIMIT BYPASS


Hello guys, Just trying some things, Ive managed to fix my card somewhat, its not downclocking or randomly undervolting to 987mv anymore. During TimeSpy run its holding fullboost as long as youre under 60C, so 2000mhz+ and 1.1Vcore. Performance also went up for me, same settings I gained 10fps...




www.overclock.net





I flashed the mentioned Gigabyte BIOS (390W 2x8-pin BIOS, Aorus Master) and now I can sustain clocks of 2050MHz and higher, and I increased my Time Spy scores.


----------



## WayWayUp

I scored 0 in DirectX Raytracing feature test


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com




Anyone try out the DXR feature test?


----------



## changboy

Updated my score again, not far from 15,000:


----------



## des2k...

WayWayUp said:


> I scored 0 in DirectX Raytracing feature test
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> anyone try out the Dxr feature test?


62.6fps for mine, with 2160 core


----------



## changboy

WayWayUp, I just scored 64 fps in that test, but my CPU is at 4.9GHz and memory at 3200MHz; maybe that has something to do with it. You're really way, way up there.


----------



## SoldierRBT

WayWayUp said:


> I scored 0 in DirectX Raytracing feature test
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> anyone try out the Dxr feature test?


Nice score and clocks. I get 66.57fps 2310MHz peak 2262MHz avg with 3x your avg temp.









I scored 0 in DirectX Raytracing feature test


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## changboy

WayWayUp, I just saw your OC at 2340MHz and temp at 14c, I understand now lol.


----------



## WayWayUp

I think it would respond well to a Kingpin since you can pump a lot of voltage in this test. It doesn't run very hot or use a lot of power.


----------



## WayWayUp

SoldierRBT said:


> Nice score and clocks. I get 66.57fps 2310MHz peak 2262MHz avg with 3x your avg temp.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 0 in DirectX Raytracing feature test
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Can you increase your memory more than that? You should try if possible.
Great clocks on the core. Very satisfying to see the magic 2300MHz.


----------



## ivans89

Is there any chance to get a block for a Kingpin? Has any company planned a block?


----------



## SoldierRBT

WayWayUp said:


> i think it would respond well to a kingpin since you can pump a lot of voltage in this test. It doesnt run very hot or use a lot of power


You can pump the necessary NVVDD/MSVDD it needs to make internal clocks match requested clocks, but it won't OC better with more voltage; it just adds unnecessary heat. The KPE, like any other card, is still limited by what it can do at 1.10v (silicon lottery). 2310MHz is the best mine can do starting at 21c, then it drops as temperature rises. 2325MHz would crash at the start of the test. It needs colder temperatures. Hoping the KPE waterblock releases soon.



WayWayUp said:


> can you increase your memory more than that? you should try if possible
> great clocks on the core. very satisfying to see the magic 2300mhz


Memory was running at stock speed. I didn't OC it since the test specifically stresses the RT cores. I'll check if a memory OC adds more FPS.


----------



## WayWayUp

I think the market for the Kingpin is so small that companies are not really interested. At least the others share a PCB with the 3080, which expands the potential market for blocks. Also, given that it already comes as a hybrid as standard, and that EVGA will release the next one with a block as standard, I don't see many companies being interested in designing a Kingpin block.

I heard Optimus would like to make one, but they are so unreliable. If they do make it, expect it to cost $400, and you probably wouldn't get your hands on it until May.


----------



## changboy

That raytracing test is really strange, also ugly hehehe, I got 64.48fps:








I scored 0 in DirectX Raytracing feature test


Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Shawnb99

Damn this is a big ass card


----------



## changboy

Shawnb99 said:


> Damn this is a big ass card
> 
> View attachment 2475764


You've got a collection of EVGA RTX 3090s lol


----------



## Shawnb99

changboy said:


> You do a collection of EVGA rtx-3090 lol


Just a lowly 2080ti up top


----------



## changboy

Shawnb99 said:


> Just a lowly 2080ti up top


Ah ok, I thought they were both RTX 3090s hehehe, still a nice collection.


----------



## stryker7314

Shawnb99 said:


> Damn this is a big ass card
> 
> View attachment 2475764


Mine came in today too, now I just gotta sell the FTW3 Ultra..


----------



## Shawnb99

Waiting for an Optimus block is going to be a *****.


----------



## GAN77

Shawnb99 said:


> Waiting for an Optimus block is going to be a ***.


Endless :)

I'd wager the HOF and Bitspower blocks will be released half a year earlier :)


----------



## stryker7314

Shawnb99 said:


> Waiting for an Optimus block is going to be a ***.


Anyone know if they already have the measurements? I sent them an email offering to let them borrow my 3090 kingpin for it if they needed it but haven't heard back.


----------



## Shawnb99

stryker7314 said:


> Anyone know if they already have the measurements? I sent them an email offering to let them borrow my 3090 kingpin for it if they needed it but haven't heard back.


I think so. I offered mine as well but as usual communication sucks. I hope it’s not a long wait


----------



## stryker7314

Shawnb99 said:


> I think so. I offered mine as well but as usual communication sucks. I hope it’s not a long wait


With the poor comms it wouldn't be worth the trouble and risk.


----------



## chispy

Finally, I was able to break the elusive 16k on Port Royal. It took me a complete week to tweak and find the correct combination of voltages and a whole lot of settings on my EVC2; I tried one setting at a time until it was tweaked to perfection with optimal clocks.

16035 port royal - https://www.3dmark.com/pr/815332


----------



## jomama22

chispy said:


> Finally i was able to break the elusive 16k+ on Port Royal. It took me a complete week to tweak and find the correct combination of voltages and a whole lot of settings on my evc2 , tried one setting at a time until it was tweak to perfection and optimal clocks.
> 
> 16035 port royal - https://www.3dmark.com/pr/815332


What evc settings did you end up using?


----------



## Exilon

I got my Alphacool 3090 FTW3 block in. The thing is 1.5kg of 25mm plexiglass and nickel plated copper. Backplate is 6mm of anodized aluminum with a smooth finish.
Quite pleased with the quality for $150 including backplate.


----------



## mouacyk

Exilon said:


> I got my Alphacool 3090 FTW3 block in. The thing is 1.5kg of 25mm plexiglass and nickel plated copper. Backplate is 6mm of anodized aluminum with a smooth finish.
> Quite pleased with the quality for $150 including backplate.


I have never seen so many o-rings on a GPU. Complexity is epic.


----------



## Thanh Nguyen

Exilon said:


> I got my Alphacool 3090 FTW3 block in. The thing is 1.5kg of 25mm plexiglass and nickel plated copper. Backplate is 6mm of anodized aluminum with a smooth finish.
> Quite pleased with the quality for $150 including backplate.


And the temp?


----------



## Exilon

mouacyk said:


> I have never seen so many o-rings on a GPU. Complexity is epic.


There are two layers of plexiglass to form a sealed flow path. It's a different approach from single o-ring blocks where the nickel sits flush with plexi to form the (mostly sealed) flow channel.



Thanh Nguyen said:


> And the temp?


I'll update soon. Still waiting for Amazon to drop by with copper shims today and need to rebuild my loop.


----------



## Chamidorix

Optimus got a Kingpin for block measurements from a guy right at release. I don't think he's got it back yet.


----------



## geriatricpollywog

WayWayUp said:


> check out this score with very warm temps before i received my water block.
> 62c average with temps reaching nearly 70c by end of run
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 991 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> notice the memory clock at 1341.... this was with stock ftw3 backplate
> the clock frequency is actually higher here than it is at my 11c run with thick optimus backplate and massive fujipoly pad covering the entire backside......
> 
> something fishy is going on. My theory is that the core is requesting more of the power limit in the cold (due to the high frequencies) and it's taking it away from the memory power budget?





Chamidorix said:


> Optimus got a kingpin for block measurement right on release from a guy. I don't think hes got it back yet.


Wow. I hope he at least gets a free block out of it. The first cards shipped in early December.


----------



## changboy

People who buy an RTX 3090 after April 16 will see a price increase of $350 USD on each card. The 3060 Ti will increase by $100 USD. This news is bad, bad, bad. On the EVGA web page it says this:


*SPECIAL EVGA 30 SERIES UPDATE NOTICE:*
Due to ongoing events, EVGA has made price adjustments on the GeForce RTX 30 Series products. This change was necessary due to several factors and will be effective January 11, 2021. EVGA has worked to reduce and minimize these costs as much as possible. For those who are currently in the EVGA.com Notify Queue system or Step-Up Queue, EVGA will honor the original MSRP pricing through April 16th, 2021 if your purchase position is processed before this date.


----------



## dr/owned

stryker7314 said:


> Anyone know if they already have the measurements? I sent them an email offering to let them borrow my 3090 kingpin for it if they needed it but haven't heard back.


I had better luck pinging them via PM here instead of email. It got handled immediately vs. days of no reply.

I wonder if their inbox is just not set up right or something.


----------



## changboy

The EK for ftw3 ultra with backplate :


----------



## pat182

changboy said:


> People who will buy rtx-3090 after the 16 april will see an increase price of 350 usd on each card. The 3060 ti will increase of 100 usd. This news is bad bad bad. On EVGA web page its write this :
> 
> 
> *SPECIAL EVGA 30 SERIES UPDATE NOTICE:*
> Due to ongoing events, EVGA has made price adjustments on the GeForce RTX 30 Series products. This change was necessary due to several factors and will be effective January 11, 2021. EVGA has worked to reduce and minimize these costs as much as possible. For those who are currently in the EVGA.com Notify Queue system or Step-Up Queue, EVGA will honor the original MSRP pricing through April 16th, 2021 if your purchase position is processed before this date.


rofl, so the same price as if someone had scalped it. People who paid launch MSRP and are selling at the new MSRP are loling. My Strix is gonna skyrocket.


----------



## mirkendargen

****ing Zotac raised their MSRP to $1900 hahahahahahahaha.









ZOTAC GAMING GeForce RTX 3090 Trinity


Get Amplified with the ZOTAC GAMING GeForce RTX™ 30 Series based on the NVIDIA Ampere architecture. Built with enhanced RT Cores and Tensor Cores, new streaming multiprocessors, and superfast GDDR6X memory, the ZOTAC GAMING GeForce RTX 3090 Trini




www.zotacstore.com


----------



## geriatricpollywog

mirkendargen said:


> ****ing Zotac raised their MSRP to $1900 hahahahahahahaha.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> ZOTAC GAMING GeForce RTX 3090 Trinity
> 
> 
> Get Amplified with the ZOTAC GAMING GeForce RTX™ 30 Series based on the NVIDIA Ampere architecture. Built with enhanced RT Cores and Tensor Cores, new streaming multiprocessors, and superfast GDDR6X memory, the ZOTAC GAMING GeForce RTX 3090 Trini
> 
> 
> 
> 
> www.zotacstore.com


Same price as the KP


----------



## Falkentyne

So it's the MSVDD (NVVDD2) current/power limit capping (most) non-Kingpin cards to 600W maximum. This shows up initially as EFFECTIVE clocks dropping (with requested clocks NOT dropping) without a power limit flag; then, eventually, when you go high enough, you get a power limit flag and requested clocks drop.

So there you go.

(it's pretty much the same thing that happens when you mess with the curve and put the 1.10v point more than 15mv higher than the previous point and lock it...you get 1.10v but the effective clocks drop drastically--because of MSVDD not liking the curve).


----------



## WMDVenum

Falkentyne said:


> You won the VRAM lottery. I can't even do +700 in Port Royale. It just black screens and then the entire computer reboots
> I had videomemoryscale already at 0.75 for months now.
> 
> I also think I know what is stopping all cards (except either a Strix, or the 1000W Kingpin bios) from exceeding 600W, btw.


Little premature on my end. Was running +150/1125 for 4 hours yesterday with no crash and got two within 15 minutes today. Starting fresh. Going to test +150/0 by letting warzone "spectate" overnight and see if I crash.


----------



## geriatricpollywog

Falkentyne said:


> So it's MSVDD (NVVDD2) current/power limiting (most) non kingpin cards to 600W maximum. This shows up initially as EFFECTIVE clocks dropping (with requested clocks NOT dropping) without a power limit flag, then eventually when you high enough, you get a power limit flag and requested clocks drop.
> 
> So there you go.
> 
> (it's pretty much the same thing that happens when you mess with the curve and put the 1.10v point more than 15mv higher than the previous point and lock it...you get 1.10v but the effective clocks drop drastically--because of MSVDD not liking the curve).


Just observational information: MSVDD definitely has a "sweet spot." On my card it's 1.100 in the Classified tool, or the default switch position. Adding more or less in Classified, or activating one or both DIP switches, hurts performance. This is an extreme overclocking card, so MSVDD tuning might work for LN2.


----------



## Falkentyne

Note I'm talking about shunt modded cards topping out at about 600W, regardless of what shunts you throw on them.
I do not know if this applies to the Strix. The Strix has a MASSIVE 648W GPU Chip Power Limit and a giant 162W VRAM power limit too. But all the other rails look normal. And MSVDD rail isn't shown in the last ABE editor.

Anyone here have a shunt modded Strix? Try running Timespy graphics test 2 and see if you get two throttle bars at the beginning of the test.
Alternatively, try running Superposition 4k Custom--Extreme shaders. If you get a nonstop super thick throttle bar way below your TDP limit, that's MSVDD.

You'll know it's MSVDD's limit outscaling the power limit when your "Normalized" TDP exceeds your regular TDP, even though none of the primary (shunted) rails are anywhere close to their power limits. Then when the draw gets too high, first effective clocks drop. Then requested clocks drop (causing a PWR flag despite TDP being low).
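The detection rule above can be sketched as a small heuristic. This is just an illustration of the logic, not a real tool; the sample format (TDP %, normalized TDP %, list of per-rail %) is made up, as is the 90% headroom threshold:

```python
def msvdd_limit_suspected(sample, rail_headroom_pct=90):
    # sample = (tdp_pct, normalized_tdp_pct, [per-rail utilization %])
    tdp, norm, rails = sample
    # Heuristic from the post: normalized TDP exceeds regular TDP while
    # none of the shunted rails are near their own limits.
    return norm > tdp and all(r < rail_headroom_pct for r in rails)

# Normalized TDP well above regular TDP, all rails with headroom -> suspect MSVDD
print(msvdd_limit_suspected((78.0, 104.0, [70.0, 65.0, 60.0])))  # True
# Regular power limiting: normalized below TDP, rails loaded -> not MSVDD
print(msvdd_limit_suspected((98.0, 96.0, [92.0, 88.0, 80.0])))   # False
```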


----------



## bmgjet

Falkentyne said:


> Note I'm talking about shunt modded cards topping out at about 600W, regardless of what shunts you throw on them.
> I do not know if this applies to the Strix. The Strix has a MASSIVE 648W GPU Chip Power Limit and a giant 162W VRAM power limit too. But all the other rails look normal. And MSVDD rail isn't shown in the last ABE editor.
> 
> Anyone here have a shunt modded Strix? Try running Timespy graphics test 2 and see if you get two throttle bars at the beginning of the test.
> Alternatively, try running Superposition 4k Custom--Extreme shaders. If you get a nonstop super thick throttle bar way below your TDP limit, that's MSVDD.
> 
> You'll know it's MSVDD's limit outscaling the power limit when your "Normalized" TDP exceeds your regular TDP, even though none of the primary (shunted) rails are anywhere close to their power limits. Then when the draw gets too high, first effective clocks drop. Then requested clocks drop (causing a PWR flag despite TDP being low).



If you've seen the HOF 3090 PCB shot, it has shunt jumpers for the seven main large shunts we have all been messing with,
then 3 more shunt jumpers for the smaller shunts.

So that 600W-ish limit could be getting triggered by those smaller shunts.


----------



## Zirc92

Hello, can anyone advise me a little? I did a BIOS update/change many years ago, so I don't remember everything. Can I change my Inno3D 3090 X3 Gaming BIOS to one like the EVGA XC3's or the ASUS TUF's? I mainly want those because they have fan stop below about 50C, and my Inno3D's fans just go on and off when browsing, which is rather annoying.

How can I tell if a BIOS is compatible with my card? The only thing I can think of is that they have the same power connectors on the card..?


----------



## 86Jarrod

Falkentyne said:


> Note I'm talking about shunt modded cards topping out at about 600W, regardless of what shunts you throw on them.
> I do not know if this applies to the Strix. The Strix has a MASSIVE 648W GPU Chip Power Limit and a giant 162W VRAM power limit too. But all the other rails look normal. And MSVDD rail isn't shown in the last ABE editor.
> 
> Anyone here have a shunt modded Strix? Try running Timespy graphics test 2 and see if you get two throttle bars at the beginning of the test.
> Alternatively, try running Superposition 4k Custom--Extreme shaders. If you get a nonstop super thick throttle bar way below your TDP limit, that's MSVDD.
> 
> You'll know it's MSVDD's limit outscaling the power limit when your "Normalized" TDP exceeds your regular TDP, even though none of the primary (shunted) rails are anywhere close to their power limits. Then when the draw gets too high, first effective clocks drop. Then requested clocks drop (causing a PWR flag despite TDP being low).


Here are 2 runs, first GT2 and second GT2 Extreme, on a shunt-modded Strix with 5mOhm on everything except PCIe.
I do get a power perf cap on Superposition 4K Extreme shaders though.


----------



## Exilon

Thanh Nguyen said:


> And the temp?


This is only a ~320W ETH mining load on both tests. I'll try to push this card when I have more time to mess with it.

Before (GPU was 60C):









After (water):


----------



## Bobbylee

Exilon said:


> This is only a ~320W ETH mining load on both tests. I'll try to push this card when I have more time to mess with it.
> 
> Before (GPU was 60C):
> 
> 
> After (water):


What's your cooling solution? I'm at 43c on a 360mm x 60mm rad (also with a 9900K in the loop) with the same power draw while mining, at 23c ambient air temp. I'd be interested to hear how you achieve 10c lower.


----------



## Exilon

Bobbylee said:


> whats your cooling solution? im at 43c on 360mmx60mm rad (also 9900k in loop) with same power draw while mining, 23c ambient air temp. id be interested to hear how you acheive 10c lower


420mm HWLabs GTS push + 280mm HWLabs GTR push/pull, all with Arctic P14 fans. It's all crammed into a Define S, so the GTR is feeding the GTS warm air.
I too have a 9900K in the loop. I'm not doing anything special I think... buy block, put NT-H2 on block, tighten block down, install into loop.

Max temperature in Time Spy Extreme at 450W power limit (107%) with liquid peak
















The Alphacool FTW3 block does a ~16C delta at 450W in a 'typical' core/mem power split, which is decent. My old EK block on a 320W 1080 Ti did ~15C with a 25% smaller die, so the temperature rise per unit of power density is roughly the same.
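That comparison works out numerically. A rough sketch of the arithmetic, using the public die areas (GA102 ≈ 628 mm², GP102 ≈ 471 mm²); treat the numbers as back-of-envelope, not measured values:

```python
def thermal_resistance(delta_c, watts):
    # Block thermal resistance: die-to-liquid temperature delta per watt
    return delta_c / watts

# ~16C delta at 450W on the 3090, ~15C at 320W on the old 1080 Ti
r_3090 = thermal_resistance(16, 450)    # ~0.036 K/W
r_1080ti = thermal_resistance(15, 320)  # ~0.047 K/W

# Normalizing by die area shows both blocks handle power density
# about equally well (units: K*mm^2/W):
print(round(r_3090 * 628, 1), round(r_1080ti * 471, 1))  # prints: 22.3 22.1
```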


----------



## Bobbylee

Exilon said:


> 420mm HWlabs GTS push + 280mm HWlabs GTR push pull all with Artic P14 fans. It's all crammed into a Define S so the GTR is feeding the GTS warm air.
> I too have a 9900K in the loop. I'm not doing anything special I think... buy block, put NT-H2 on block, tighten block down, install into loop.
> 
> Max temperature in Time Spy Extreme at 450W power limit (107%) with liquid peak
> View attachment 2475873
> View attachment 2475874
> 
> 
> The Alphacool FTW3 block does a ~16C delta at 450W in a 'typical' core/mem power split, which is decent. My old EK block on a 320W 1080 Ti did ~15C with a 25% smaller die, so the temperature rise per unit of power density is roughly the same.


Hmm, ok, so you've got around double the surface area; maybe I'll add a 280 when I add the RAM block to my backplate. I'm using Arctic MX-4 paste; do you think it's worth replacing, or are we talking just a 1c difference?


----------



## Exilon

Bobbylee said:


> hmm ok so youve got around double the surface area, maybe I'll add a 280 when i add the ramblock to my backplate. Im using arctic mx-4 paste, think its worth replacing or are we talking just 1c difference you think?


Well, what's your liquid temp? If the delta between block and liquid is in the same ballpark, the paste isn't going to change much.


----------



## Bobbylee

Exilon said:


> Well, what's your liquid temp? If the delta between block and liquid in in the same ballpark, the paste isn't going to change much.


Great question. I need to get a temp sensor, then I will come back and answer that.


----------



## Benni231990

Hey, look here, now we can read the RAM temperature on the 3090:






HWiNFO v6.42 released


HWiNFO v6.42 available. Changes: Improved support of some future AMD CPUs and APUs. Fixed a possible hang on some systems with Intel Thunderbolt controller. Enhanced sensor monitoring on ASUS ROG STRIX B550-XE. Added reporting of DIMM module location if BIOS provides correct data. Fixed VRM...




www.hwinfo.com





I have 86°C in games.


----------



## GAN77

Benni231990 said:


> hey look here now we can read the ram temperature from the 3090


Great news!


----------



## Mumak

Make sure to read my comment about it: HWiNFO v6.42 released


----------



## GAN77

Mumak said:


> Make sure to read my comment about it: HWiNFO v6.42 released


Thank you for the update!
In the case of RTX3090, is this the temperature of the hottest chip? Or is it average across all memory chips?


----------



## Falkentyne

bmgjet said:


> If you seen the HOF 3090 pcb shot it has shunt jumpers for the main large 7 shunts we have all been messing with.
> Then 3 more shunt jumpers for the smaller shunts.
> 
> So that 600Wish limit could be getting triggered by those smaller shunts.


Are those smaller shunts also 005?
Because the FE has three very small 005 shunts also, however one of them is connected to the RGB LED (huh?), one is connected to PCIE slot (why?) and one is connected to GPU Chip Power.
olrdtg modded the two that were not on RGB, but he said he saw no difference. Perhaps he didn't know what to look for.
He also seems to have vanished.


----------



## WilliamLeGod

I got 86C at 500W constant used after 10min (3090 trio air cooled)


----------



## Bobbylee

Exilon said:


> Well, what's your liquid temp? If the delta between block and liquid in in the same ballpark, the paste isn't going to change much.


So I've put a fairly accurate temperature probe into my res. The PC has been mining all week at 320W, +200 core, +1200 mem. Water temp 34.5c, air temp 23.3c, GPU die temp 44c, mem temp 102c. Any thoughts on this? I've ordered a block for the backplate to help cool the memory on the back. Other than that, would you be happy with those temps?


----------



## bmgjet

58C with an IR temp gun on the backplate.


----------



## Mumak

GAN77 said:


> Thank you for the update!
> In the case of RTX3090, is this the temperature of the hottest chip? Or is it average across all memory chips?


I'm inquiring about this, will let you know if I know more. For the time being I assume this is the hottest value among the chips.


----------



## man from atlantis

Gaming doesn't really push the GDDR6X memory. Try mining Ethereum for half an hour.


----------



## rix2

changboy said:


> The EK for ftw3 ultra with backplate :
> View attachment 2475813
> 
> View attachment 2475809
> 
> View attachment 2475810
> 
> View attachment 2475811
> 
> View attachment 2475812


It would be better to take pictures of the other side of the block and backplate...


----------



## Exilon

Bobbylee said:


> So I've put a fairly accurate temperature probe into my res. Pc has been mining all week 320w +200 core +1200 mem. water temp 34.5c air temp 23.3c gpu die temp 44c mem temp 102c. any thoughts on this? i've ordered a block for the backplate to help cool that memory on the back. Other than that, would you be happy with those temps?


GPU die temp looks fine. It's a little higher K/W than what I'm getting, probably because of the paste and mount pressure.
I'm only getting 78C memory junction on the latest HWiNFO though, at 360W (I cranked up the voltage since my last run, since I use this PC primarily for gaming).

















I did replace the stock memory and VRM pads with 17W/mK Fujipoly pads on the front side, which is probably making the difference for PCB and memory temperatures. I might point a fan at the backplate and see how it goes, but I'm not too fussed about an 80C worst-case memory temperature.


----------



## man from atlantis

Exilon said:


> GPU die temp looks fine. It's a little higher K/W than what I am getting probably because of the paste and mount pressure.
> I'm only getting 78C memory junction on the latest HWiNFO though at 360W (I cranked up the voltage since my last run since I use this PC primarily for gaming).
> 
> View attachment 2475908
> 
> View attachment 2475909
> 
> I did replace the stock memory pads VRM pads with 17W/mK fujipoly pads on the front side which is probably making the difference for PCB and memory temperature. I might point a fan at the backplate and see how it goes, but not too fussed about 80C worst case memory temperature.


Nice temps. I read GDDR6X starts throttling at 110C but can't confirm. Do you have an actively cooled waterblock backplate?


----------



## Exilon

man from atlantis said:


> Do you have active wc backplate?


No, just the stock 6mm aluminum backplate from the Alphacool block and the stock 2mm 3W/mK pads. I'm too lazy to run a thermal probe right now, but I'd estimate the backplate hotspot at 50C based on the ouch scale. Quick napkin math with 2x 15x15mm 3W/mK pads suggests the Tcase on the memory is 10-15C higher than the backplate (60-65C), so an 80C Tj is reasonable for the backside memory.
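The napkin math above is 1-D conduction through the pad, dT = P·t/(k·A). A minimal sketch; the ~4W of backside memory heat flowing through one pad is a hypothetical figure chosen for illustration:

```python
def pad_delta_t(watts, thickness_mm, k_w_mk, area_mm2):
    # Temperature drop across a thermal pad via 1-D conduction:
    # dT = P * t / (k * A), with t in meters and A in square meters
    t = thickness_mm / 1000.0
    a = area_mm2 / 1e6
    return watts * t / (k_w_mk * a)

# 2mm-thick 3 W/mK pad over a 15x15mm contact patch, assuming ~4W
# of backside memory heat flows through it:
print(round(pad_delta_t(4, 2, 3, 15 * 15), 1))  # prints: 11.9
```

That ~12C drop per pad lands right in the quoted 10-15C Tcase-over-backplate range; thinner or higher-k pads shrink it proportionally.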

According to the ICX sensors, the PCB around the memory is 50C, so I'd expect a lot of the backside heat to be going through the solder bumps into the PCB and out through the front block.
Plopping a 4x8cm heatsink with 1mm pads on the hotspot did nothing, probably because it's simply not hot enough to get any decent convection going.


----------



## Mumak

Mumak said:


> I'm inquiring about this, will let you know if I know more. For the time being I assume this is the hottest value among the chips.


Confirmed, it should be the hottest temperature among all chips.


----------



## Bobbylee

Exilon said:


> GPU die temp looks fine. It's a little higher K/W than what I am getting probably because of the paste and mount pressure.
> I'm only getting 78C memory junction on the latest HWiNFO though at 360W (I cranked up the voltage since my last run since I use this PC primarily for gaming).
> 
> View attachment 2475908
> 
> View attachment 2475909
> 
> I did replace the stock memory pads VRM pads with 17W/mK fujipoly pads on the front side which is probably making the difference for PCB and memory temperature. I might point a fan at the backplate and see how it goes, but not too fussed about 80C worst case memory temperature.


So I've just played a bit of Cyberpunk until temps leveled out; the GPU core is at 50c and mem is at 76c. I run a different OC when gaming: 390W power draw, +150 core, +650 mem. Ambient is around 25c now though. Not too dissimilar from yours; I guess the active backplate cooling isn't needed for gaming, just mining.


----------



## ttnuagmada

So 110C is the mem throttle point, but does anyone have any idea what a safe 24/7 range would be? Probably under 80C?


----------



## STAWKEYE

As a newb to watercooling, are there any suggestions for the FE 3090? Anything unique to the 3090 to watch out for? To combat the memory on the backside (which is only passively cooled by the backplate), I have this less-than-ideal setup. It helped immensely, but I'd like to get a second card, and that won't be possible without putting them on water, unless the rear exhaust fan of the upper GPU actually manages to keep the backplate of the lower GPU cool while still having enough overhead for itself. I highly doubt that. Gigabyte has a blower version that would be great for multi-GPU setups that need to remain on air, but I'd imagine it has to absolutely scream at these power levels.


----------



## Benni231990

I get 74°C max in Warzone and 88°C max in Overwatch.

I would say everything under 98-100°C is safe.


----------



## Shawnb99

For those with the Kingpin are we not able to purchase the extended warranty for it? Been trying for over a day now, keep being denied


----------



## des2k...

cp2077 4k, mem temps


----------



## Thanh Nguyen

Will a shunted FTW3 have more than a 600W power limit? I see a 635W peak with the 1000W BIOS on my FTW3.


----------



## changboy

Thanh Nguyen said:


> Shunt a ftw3 will have more than 600w power limit? I see 635w peak with 1000w bios on my ftw3.


So the 1000W BIOS is working on the FTW3; did you do something else or just flash that BIOS?


----------



## BiLLbOuS

Guys, after the EK waterblock/backplate, with the MP5WORKS backplate cooling the Strix, would you figure I'd see gains in allowable core clock in Port Royal and other benches? I still can't seem to pass with anything over +149 core.


----------



## Thanh Nguyen

changboy said:


> So 1000w bios working on the ftw3, did you do something else or just put that bios ?


Just flash it.


----------



## Benni231990

@des2k... 

can you please explain how you got it to show power in watts?

I only see power in %?


----------



## Gebeleisis

HWInfo version 6.42 can now also display the GDDR6X temperatures of NVIDIA’s new Ampere cards - Out now | igor'sLAB


A pre-tested HWInfo version of today's release now also reliably shows GDDR6X temperatures. As the programmer told me, there is a lot of searching, trial and error and therefore a lot of work involved.




www.igorslab.de





I tested it and I can confirm that the mem temp is now shown.










This is with the GPU under full crypto-mining load and with a heatsink on the back with an 80mm fan blowing on it.


----------



## Spiriva

Gebeleisis said:


> HWInfo version 6.42 can now also display the GDDR6X temperatures of NVIDIA’s new Ampere cards - Out now | igor'sLAB
> 
> 
> A pre-tested HWInfo version of today's release now also reliably shows GDDR6X temperatures. As the programmer told me, there is a lot of searching, trial and error and therefore a lot of work involved.
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> I tested it and I can confirm that the mem temp is now shown.
> 
> This is with the GPU in full load crypto mining and with an heatsink on the back with an 80mm fan blowing on it.












Works for me too: PNY 3090 (2x 8-pin), XOC 1000W BIOS (~550W).


----------



## changboy

But with this 1000W BIOS, what is your steady boost clock on the GPU? Is it higher than 2100MHz when you're in game?


----------



## WMDVenum

I have a 3090 FE with a bitspower waterblock. I have a temp sensor sitting near the back VRAM on the card, which I will likely remove at this point. All testing was done while spectating in warzone while the game was using roughly 20gb of VRAM. I have a frame limit of 140 FPS so the 1440p setting didn't max GPU usage while the 5k maxed it. I am trying to decide if I want to cool the back more but the bitspower backplate isn't flat. I may try to put a fan on there to see what happens.

Warzone 5K (200% render scale)
Water Temp 35C
GPU Temp - 49C
GPU Mem Temp - 78C
GPU Probe Temp - 67C

Warzone 1440P (100% render scale)
Water Temp 34C
GPU Temp - 41C
GPU Mem Temp - 64C
GPU Probe Temp - 51C


----------



## mirkendargen

Nice to see validation (beyond touching the backplate) the RAM block I glued on my backplate with Arctic Silver Alumina thermal epoxy does indeed do its job. This is mining ETH, the hardest the VRAM will ever work.


----------



## Gebeleisis

I have a Bykski waterblock, but these 3090s definitely need an active backplate, so I'm looking for one.


----------



## Spiriva

changboy said:


> But with this 1000w bios what is ur steady boost clock on gpu ? Is it higher then 2100mhz when you in game ?


Yes, 2190-2205mhz with the XOC bios while gaming for me.


----------



## dante`afk

my mem temps after several hours of PoE
5120x1440p

~560w, approx. 2100/1000
26c ambient
27c water temp


----------



## PhuCCo

Here are my temps after running a game of TDM on Shoothouse (Modern Warfare @ 1440p), a pass each of Port Royal and Time Spy, and then a full Port Royal Stress Test. 400W limit, +500 on memory. I have an Alphacool 6 DIMM RAM waterblock on the backplate.


----------



## McFluff

McFluff said:


> I wanted to share some info on trying to cool the backplate on my 3090 FE. I have found a pretty effective way to cool it, but I'm looking for a permanent solution.
> 
> I first discovered this issue while mining eth. I bought this card for gaming, but seeing that Nicehash was paying ~$10 a day, I figured that I'd leave it running when I wasn't using the computer. The issue that I ran into was one that others in this thread have described, while performing the memory intensive task of Eth mining, the fans would ramp way up even though the GPU is nice and cool at 42c. I assume this is due to the card reading high temperatures from the memory on the back.
> 
> Since I can't read the memory temperatures on the FE and the fan was way too loud to justify the annoyance of making about 25 cents an hour, I started to fiddle around with trying to cool the backplate to see if I could lower the percentage that the fans automatically ramp up to. I had some marginal success by lowering my case temperatures, leaving the side panel off and aiming a fan directly at the backplate, but it was still much louder than I'd consider bearable.
> 
> Moving past simply trying to cool the air around the card, I started to look into ways to actively cool the backplate. Unfortunately, I didn't have any spare heatsinks, but I did have a spare AIO that I was able to MacGyver into position by using a kitchen utensil as a mounting device.
> 
> View attachment 2475460
> 
> 
> Not even using any sort of thermal pad or paste, just pressing the coldplate against the backplate, here is the result. The fans quickly drop down to 58%, and when removing the coldplate the shoot right back up to 100%.
> 
> View attachment 2475461
> 
> 
> In terms of the real-world effect this has had on mining, here are some results I had while trying to find a good overclock combo for Nicehash. Highlighting what I finally settled on as a good hash rate and fan speed.
> 
> View attachment 2475462
> 
> 
> In terms of gaming benchmarks, I was able to complete Port Royal with an extra 150 memory overclock when using the AIO, +1100 was the max I could get without the AIO, +1250 with.
> 
> An AIO seems like an overkill for cooling a backplate, anyone have a better solution? I am considering ordering a CPU Heatsink and sticking it to the card, but I sort of want to find a solution that maintains aesthetics.


So I’ve managed to get a similar result by removing the AIO and keeping just a kitchen utensil! There is some benefit from the cooling the AIO was providing, but it seems like most of the benefit was from additional pressure on the backplate. Maybe a thermal pad problem? I’ve ordered some new ones to try replacing the stock ones.


----------



## dante`afk

lol


----------



## Thanh Nguyen

changboy said:


> But with this 1000w bios what is ur steady boost clock on gpu ? Is it higher then 2100mhz when you in game ?





dante`afk said:


> my mem temps after several hours of PoE
> 5120x1440p
> 
> View attachment 2475955
> 
> 
> 560~w approx. 2100/1000
> 26c ambient
> 27c water temp


1c delta. What is your cooling setup?


----------



## changboy

My cooling setup? D5 pump, 3 rads (2x 360mm + [email protected]), an Optimus block, and my EK block on my FTW3 Ultra. Noctua fans.


----------



## motivman

Spiriva said:


> Yes. 2190-2205mhz with the XOC bios. While gaming for me.


how many watts are each of your 8 pins drawing on average? What is your max PR score with the 1000W bios?


----------



## motivman

what is the consensus on the gaming X trio? Anyone running it shunt modded or on the 1000w bios? what is the maximum power it can support, since MSI cheaped out on the VRM's? been looking for a strix, but looks like the gaming x trio are easier to get.


----------



## ALSTER868

Tried Heaven, then CoD Warzone and ETH mining to see what the temps will look like. Gaming seems to be OK, but the mem temps during mining look to be on the edge.
Bitspower waterblock with a 90 mm fan pointed to the backplate.


----------



## ALSTER868

del


----------



## Falkentyne

McFluff said:


> So I’ve managed to get a similar result by removing the AIO and keeping just a kitchen utensil! There is some benefit from the cooling the AIO was providing, but it seems like most of the benefit was from additional pressure on the backplate. Maybe a thermal pad problem? I’ve ordered some new ones to try replacing the stock ones.
> 
> View attachment 2475964


You're on the stock pads? Yikes. They are garbage, and they become useless once the card has been disassembled. Try using Thermalright Odyssey 1.5mm (exact thickness) pads and add pads for the VRM 'backside' hotspots and you won't have the problem anymore. You can also use 2mm pads if you wish, but 1.5mm is enough. (DO NOT use 2mm pads on the GPU CORE SIDE!!!! 1.5mm works fine, 2mm stops contact.)


----------



## Falkentyne

bmgjet said:


> If you seen the HOF 3090 pcb shot it has shunt jumpers for the main large 7 shunts we have all been messing with.
> Then 3 more shunt jumpers for the smaller shunts.
> 
> So that 600Wish limit could be getting triggered by those smaller shunts.


As far as I can tell, it looks like GPU Core NVVDD2 Input Power (sum) is the sum of GPU Chip Power (GPU Core NVVDD Input Power (sum)) and GPU SRAM Input Power (sum).
Well, we know SRAM is NVVDD2 (Uncore) already, as it has its own voltage in hwinfo (SRAM output voltage). The question here is whether SRAM Input Power (sum) has its own power limit that we don't have access to; if it does, that could be what is limiting 2x8 pin cards. Since the first symptom is effective clocks dropping (you can test this by setting a lower TDP slider, when _NOT_ TDP power limited, and seeing if effective clocks suddenly go way lower than requested clocks), I think this is the mythical power limit that @motivman and @dante`afk were running into, causing TDP Normalized to rise way above TDP%. I strongly doubt GPU Core NVVDD2 Input Power (sum) would have any power limit at all if it's the sum of Core and SRAM, since then super-shunting GPU Chip Power with a 2 mOhm shunt would bypass it permanently, which makes no sense (it being part of a main rail and a sub rail).

SRAM Input Power (sum) also seems to be a sum of something else but I don't know what. SRAM Output Power seems to always be slightly lower than SRAM Input Power.

GPU Chip Power may also be a sum of something else but I don't know what, or it may not. I'll ask @Mumak . Martin do you know what the "sums" are sums of?
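The summing relationship described above can be sanity-checked with toy numbers. A minimal sketch, where the wattages are invented for illustration and only the NVVDD2 = NVVDD + SRAM relationship is taken from the post:

```python
# Hypothetical HWiNFO-style readings, in watts. The specific numbers are
# made up; only the summing relationship observed above is assumed.
nvvdd_input = 320.0   # "GPU Core NVVDD Input Power (sum)" = GPU Chip Power
sram_input = 55.0     # "GPU SRAM Input Power (sum)"

# Observed relationship: NVVDD2 Input Power (sum) = NVVDD + SRAM
nvvdd2_input = nvvdd_input + sram_input
print(nvvdd2_input)  # 375.0

# SRAM Output Power always reads slightly below SRAM Input Power,
# consistent with VRM conversion loss between input and output side.
sram_output = 53.5
assert sram_output < sram_input
```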


----------



## Mumak

The interface doesn't say which values are included in a particular summed value; it just says that it's a sum of something.


----------



## dante`afk

Thanh Nguyen said:


> 1c delta. What is your cooling setup?


dual mora420, 2x d5, 2x ddc


----------



## HyperMatrix

@Falkentyne Just watched this video and thought of you:


----------



## motivman

guys, about to pull the trigger on a gaming X trio... is this a good or bad purchase?


----------



## Spiriva

motivman said:


> how many watts are each of your 8 pins drawing on average? What is your max PR score with the 1000W bios?


Dunno, as it probably reports wrong; 230ish according to hwinfo. A bit over 15k.


----------



## Rhadamanthys

motivman said:


> guys, about to pull the trigger on a gaming X trio... is this a good or bad purchase?


I'm about to order one, too. I think it's a good deal since it's cheaper than the Strix or Suprim, but it's still 3x8 pin and you can use it with a 500/520W bios.


----------



## Sheyster

motivman said:


> what is the consensus on the gaming X trio? Anyone running it shunt modded or on the 1000w bios? what is the maximum power it can support, since MSI cheaped out on the VRM's? been looking for a strix, but looks like the gaming x trio are easier to get.


If you can get one for the old price ($1589) it's a great deal for a 3 x 8-pin card. The perfect gaming BIOS for it is the EVGA FTW3 500W BIOS.


----------



## motivman

Sheyster said:


> If you can get one for the old price ($1589) it's a great deal for a 3 x 8-pin card. The perfect gaming BIOS for it is the EVGA FTW3 500W BIOS.


I want a card where I can pull upwards of 750W with no issues; looks like the Strix might be the only choice, since the Trio has garbage VRMs and stupid 20A fuses.


----------



## des2k...

Benni231990 said:


> @des2k...
> 
> can you pls explain how did you make it what you can see the watt power?
> 
> i only see the power in %?


It's Afterburner monitoring; I just added a correction because it's a 2x8-pin card (1000w vbios).


----------



## schoolofmonkey

From my information gathering, it seems the best and safest BIOS for the Inno3D iChill X4 is the 390w Gigabyte Gaming X 2x8 one.
The Inno is capped at 370w, so using the Gigabyte BIOS will put it on par with the rest of the 2x8-pin cards.

I figure it's not a good idea to push too much power through 2x8 pins; different story if it was 3x8.

Correct me if I'm wrong though.


----------



## shredy44

Can anyone provide a link to the 520watt bios?
I checked the sticky on page 1 but can only find the 1000w. Are people using the 500 or 520 on the Strix 3090, or is the 1000w the way to go?

Thanks in advance


----------



## changboy

I ran the Heaven benchmark for 30 minutes and my GPU + memory look like this with my EK block:


----------



## Sheyster

motivman said:


> I want a card where I can pull upward of 750Watts with no issues, looks like the strix might be the only choice, since trio has garbage vrms and stupid 20A fuses.


That's A LOT of power draw. KPE is your best bet for that. If you can't get one then you'll have to settle for a Strix, but they're overpriced now and still hard to get.


----------



## Sheyster

shredy44 said:


> can anyone provide a link to the 520watt bios?
> i checked the sticky on page 1 but can only find the 1000w, are people using the 500 or 520 on strix 3090? or is the 1000w the way to go.
> 
> thanks in advanced











EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


----------



## motivman

Sheyster said:


> That's A LOT of power draw. KPE is your best bet for that. If you can't get one then you'll have to settle for a Strix, but they're overpriced now and still hard to get.


honestly, do you think it is worth it to sell my 2X8 pin and get a Strix? I was able to achieve #55 in the world in PR with this card, so not sure If I will really see a big improvement going to a Strix. I just hate seeing my card power limited in GPU-Z though... haha.









I scored 15 634 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com





I can defeat ALL power limits running the 1000W Kingpin bios with my 2x8-pin card; however, one 8-pin sometimes pulls upwards of 450W in Timespy Extreme... cables haven't melted yet though, lol. My card's power draw running the 1000w bios in Timespy Extreme is about 765W!!!!


----------



## changboy

motivman said:


> honestly, do you think it is worth it to sell my 2X8 pin and get a Strix? I was able to achieve #55 in the world in PR with this card, so not sure If I will really see a big improvement going to a Strix. I just hate seeing my card power limited in GPU-Z though... haha.
> 
> I scored 15 634 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090, www.3dmark.com)
> 
> I can defeat ALL power limits running the 1000W kingpin bios with my 2x8 pin card, however, one 8 pin sometimes pulls upward of 450w in timespy extreme... cables haven't melted yet though, lol. My card power draw running the 1000w bios with timespy extreme is about 765W!!!!


Yup, you really need to buy a Strix.


----------



## motivman

changboy said:


> Yup you really need buy a Strix one


lol


----------



## Sheyster

motivman said:


> honestly, do you think it is worth it to sell my 2X8 pin and get a Strix? I was able to achieve #55 in the world in PR with this card, so not sure If I will really see a big improvement going to a Strix. I just hate seeing my card power limited in GPU-Z though... haha.
> 
> I scored 15 634 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090, www.3dmark.com)
> 
> I can defeat ALL power limits running the 1000W kingpin bios with my 2x8 pin card, however, one 8 pin sometimes pulls upward of 450w in timespy extreme... cables haven't melted yet though, lol. My card power draw running the 1000w bios with timespy extreme is about 765W!!!!


If you're really a hardcore bencher you should probably get a KPE and feed it all the power it can handle, as long as your cooling can keep up. Otherwise, it sounds like you have a decent card for gaming. Use the 1000W BIOS and lock voltage at something reasonable like 1050mv, and push core/mem as high as it'll go in games like Warzone and CP 2077.

EDIT - To elaborate a little more, I ran a 3090 FE for about 2 months before I got the Strix. My goal was to get a 3 x 8-pin card that had a higher power limit than 400W (max power limit for the FE). I was getting tired of constantly being up against the power limit in games. I had the same issue with the Strix and the stock 480W BIOS. The KPE 1000W resolved that. I can run at 2100 MHz (4K120 screen res) all day long in games and never hit the power limit.


----------



## Damaged__

Sheyster said:


> If you're really a hardcore bencher you should probably get a KPE and feed it all the power it can handle, as long as your cooling can keep up. Otherwise, it sounds like you have a decent card for gaming. Use the 1000W BIOS and lock voltage at something reasonable like 1050mv, and push core/mem as high as it'll go in games like Warzone and CP 2077.
> 
> EDIT - To elaborate a little more, I ran a 3090 FE for about 2 months before I got the Strix. My goal was to get a 3 x 8-pin card that had a higher power limit than 400W (max power limit for the FE). I was getting tired of constantly being up against the power limit in games. I had the same issue with the Strix and the stock 480W BIOS. The KPE 1000W resolved that. I can run at 2100 MHz (4K120 screen res) all day long in games and never hit the power limit.


Honestly for daily use I'd be quite careful with the KP bios even on a waterblock. Even with proper preparation, cards have died on that bios regardless of power/volt limits set by the user.


----------



## geriatricpollywog

motivman said:


> honestly, do you think it is worth it to sell my 2X8 pin and get a Strix? I was able to achieve #55 in the world in PR with this card, so not sure If I will really see a big improvement going to a Strix. I just hate seeing my card power limited in GPU-Z though... haha.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 634 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090, www.3dmark.com)
> 
> I can defeat ALL power limits running the 1000W kingpin bios with my 2x8 pin card, however, one 8 pin sometimes pulls upward of 450w in timespy extreme... cables haven't melted yet though, lol. My card power draw running the 1000w bios with timespy extreme is about 765W!!!!


That’s a lot of power. I was seeing 480-520w during my personal best runs. I think the Kingpin doesn’t need as much power to get the same scores. If your goal is simply to draw as much power as possible without liquid nitrogen using a bios designed for liquid nitrogen, then maybe get a Strix?


----------



## motivman

Damaged__ said:


> Honestly for daily use I'd be quite careful with the KP bios even on a waterblock. Even with proper preparation, cards have died on that bios regardless of power/volt limits set by the user.


I personally hardly ever run that bios. My card is shunted with 5mohm, so when I saw it drawing upwards of 450W from one 8-pin in Timespy Extreme, I quickly flashed back to the stock reference bios. With the stock bios, the power draw between both my 8-pins is equal most of the time, but whenever I run any 3x8-pin bios, pin 1 will pull almost double the power of pin 2, which is so damn frustrating. With the PNY bios and my card shunted, I average about 500w to 520w max, but still get power limited. On the 520W bios, I can draw as much as 600W, but pin 1 pulls over 300W, which kinda makes me concerned about melting my cables. Forget the 1000W bios, one pin pulling 450W is insanity... this is the reason why I want to dump this card and get a Strix... but at the end of the day, I know I will see minimal performance difference going with the Strix; at least I know the power draw will be more balanced between all 3 pins on the Strix, though.


----------



## Falkentyne

motivman said:


> I personally hardly ever run that bios. My card is shunted with 5mohm, so when I saw it drawing upwards of 450W from one 8 pin in timespy extreme, I quickly flashed back to the stock reference bios. With the stock bios, the power pull between both my 8 pins are equal most of the time, but whenever I run any 3 pin bios, pin 1 will pull almost double the power of pin 2, so damn frustrating. With the PNY bios and my card shunted, I average about 500w to 520w max, but still get power limited. On the 520W bios, I can draw as much as 600W, but pin 1 pulls over 300W, which kinda makes me concerned about melting my cables. Forget the 1000W bios, one pin pulling 450W is insanity... this is all the reason why i want to dumb this card and get a strix... but at the end of the day, I know i will see minimal performance difference going with the strix, but at least I know watt power pull will be more balanced between all 3 pins on the strix.


Did you see my message?


----------



## changboy

Which of you have killed your card with some experiment? I'm pretty sure many have burned one but never came back to talk about it.


----------



## motivman

Falkentyne said:


> Did you see my message?


Yes, thanks man. I would be satisfied with this card, if I could get the power draw between both pins to be close to equal on a 3 pin bios.


----------



## bmgjet

changboy said:


> Who of you have killed there card with some experiment ? I'am pretty sure many have burn it but never came back to talk about it.


There have been 3 of them in this thread that never followed up.


----------



## ttnuagmada

Strix with an EK block/backplate: Memory at +800 tops out at 80c after several hours of GT2


----------



## itssladenlol

mirkendargen said:


> Nice to see validation (beyond touching the backplate) the RAM block I glued on my backplate with Arctic Silver Alumina thermal epoxy does indeed do its job. This is mining ETH, the hardest the VRAM will ever work.
> 
> View attachment 2475954


Can you link the block you put on the backplate? ty


----------



## des2k...

ttnuagmada said:


> Strix with an EK block/backplate: Memory at +800 tops out at 80c after several hours of GT2


That seems high; it's mostly 60c - 62c here in GT2 with a 600w loop and +930 mem.
Fans will help.


----------



## ShadowYuna

lol finally 3090 xtreme Alphacool waterblock is on the way from PPCS.

Can't wait to change the block from Bykski


----------



## changboy

ttnuagmada said:


> Strix with an EK block/backplate: Memory at +800 tops out at 80c after several hours of GT2


Sounds like you have 0 airflow in your case. If I touch my backplate, it just feels a bit warm, nothing like the original FTW3 backplate, which almost burned my finger. The EK backplate tops out at 62c here, but I have my 280mm rad in the front of my case and the upper 140mm fan blows air directly at the backplate. I didn't put a fan on the backplate.


----------



## Thanh Nguyen

Is a 5mohm stacked shunt on the FTW3 safe, guys?


----------



## changboy

Is this safe for you ? hahahaha


----------



## Falkentyne

Thanh Nguyen said:


> 5mohm stacked shunt on ftw3 is safe guys?


Yes, but throw a 10 mOhm on the PCIE; you have to be careful about the 10A fuse.
Do keep in mind that you're not going to get anywhere near 800W even if you go sub-ambient. The SRAM power limit is going to throttle you at some point, and there's no shunt for that.
The SRAM limit isn't visible in any current bios editor. The only bioses known to have a high SRAM limit are the Strix (which also has high chip and VRAM power limits) and the Kingpin 1000W bios.
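For anyone weighing these mods, the arithmetic behind shunt stacking is simple: the card computes power from the voltage drop across what it believes is the stock shunt, so stacking a second shunt in parallel makes it under-read by the resistance ratio. A rough sketch (assuming 5 mOhm stock shunts, as with the mods discussed in this thread; the function names are just for illustration):

```python
def parallel(r1, r2):
    """Resistance of two shunts stacked (in parallel); same units in/out."""
    return (r1 * r2) / (r1 + r2)

def actual_power(reported_w, r_stock, r_stacked):
    """The card converts shunt voltage drop to power assuming the stock
    resistance, so real power is the reported value scaled up by the
    ratio of stock resistance to the new parallel resistance."""
    return reported_w * r_stock / parallel(r_stock, r_stacked)

# 5 mOhm stacked on a 5 mOhm stock shunt -> 2.5 mOhm, card under-reads 2x
print(parallel(5, 5))              # 2.5
print(actual_power(350, 5, 5))     # 700.0

# 10 mOhm stacked on the 5 mOhm PCIe slot shunt -> milder 1.5x scaling,
# which is why a larger value is suggested there (10A slot fuse)
print(round(actual_power(60, 5, 10), 1))  # 90.0
```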


----------



## mirkendargen

itssladenlol said:


> Can you link the Block you Put on backplate? ty











Bitspower Galaxy Freezer DIMM4 RAM Liquid Cooling Block - Ice Red BP-WBDM4AC-IRD (www.highflow.nl)

It's this one, but I just got it because it was the cheapest one on ebay at the time; it isn't visible, so nothing about it really matters. I just wanted at least a 4-DIMM one, not a 2-DIMM one.


----------



## des2k...

Thanh Nguyen said:


> 5mohm stacked shunt on ftw3 is safe guys?


I think gamer nexus did 8ohm + he shorted all fuses, I guess he must of burned a few free ftw3 in the past lol


----------



## des2k...

mirkendargen said:


> Bitspower Galaxy Freezer DIMM4 RAM Liquid Cooling Block - Ice Red BP-WBDM4AC-IRD (www.highflow.nl)
> 
> It's this one, but I just got it because it was the cheapest one on ebay at the time, it isn't visible so nothing about it really matters. I just wanted at least a 4 DIMM one, not a 2 DIMM one.


What was the difference in temps with this thing on? Did it drop core temps?


----------



## mirkendargen

des2k... said:


> what was the difference in temps with this thing on ? Did it drop core temps after ?


Never ran the card on water without it on so I don't know, but I highly doubt it's dropping core temps. I don't even have a thermal pad from the back of the core to the backplate.


----------



## ttnuagmada

changboy said:


> Like you have 0 air flow into ur case, me if i touch my backplate its just feel a bit warm, nothing have to do with the original ftw3 backplate wich i almost burn my finger. Ek backplate top at 62c here but i have my 280mm rad on the front of my case and the upper 140mm fan blow air direct at backplate. I didnt put fan on the backplate.


There is air movement, but the fans are all running 400 rpm. I may pull the backplate off and see if contact is good on everything. 

Hopefully EK releases their active backplate for the Strix.


----------



## changboy

About shunt modding a RTX 3090: I think pushing this card over 2100mhz is a waste of time because it brings too much heat. Unless you use a chiller or something like that, your card will drop to 2100-2085mhz anyway.

It's good for a bench run, that's it, but for myself I don't see any benefit doing this on my card since I'm on a normal watercooling loop. Don't expect to game at 2200mhz for hours and manage that amount of heat.


----------



## Chamidorix

I'd just like to remind everyone with Kingpins for daily use to try flashing the 1000W bios on the 2nd bios slot. There are strong indications that the temperature throttling on the smart power stages is only disabled when you are on the ln2 bios switch. And the temperature throttling on the chip and VRAM ICs is impossible to turn off; it's built into the chip, and it's not a problem under ln2 since the sensors read down to ln2 temps.

For actual ln2 overclocking, the ln2 bios slot obviously enables a number of ln2-friendly features that bmgjet has touched on in previous posts, and you can tweak a bit more with his custom Classified tool.

People who kill cards on the 1000W bios are most likely just blowing fuses, which everything except the Strix and KPE have. Product segmentation at its finest.

This is all far from guaranteed without more data and more detailed decompilation and i2c tracing, and the usual disclaimers apply, but at the very least it seems a decent way to hedge your bets if you run the KP 1000W for gaming etc.


----------



## Falkentyne

Chamidorix said:


> I'd just like to remind everyone with kingpins for daily use to try flashing the 1000W bios on the 2nd bios slot. Strong indications that the temperature throttling on smart power stages is only disabled when you are on ln2 bios switch. And that temperature throttling on chip and vram ics is impossible to turn off; built into the chip and not a problem with ln2 since they read down to ln2 temps.
> 
> For actual ln2 overclocking obviously the ln2 bios slot enables of number of ln2 friendly features that bmjet has touched on it previous posts, and you can touch a bit more with his custom classified tool.
> 
> People who kill cards on 1000W bios are most likely just blowing fuses that everything except strix and kpe have. Product segmentation at its finest.
> 
> This is all far from from guaranteed without more data and more detailed decompilation and i2c tracking, and the usual disclaimers apply, but at the very least it seems a decent way to help hedge your bets if you run kp 1000W for gaming etc.


That's not possible. The first person who blew up his card with the 1000W Bios was on a Strix....no fuses....


----------



## geriatricpollywog

Chamidorix said:


> I'd just like to remind everyone with kingpins for daily use to try flashing the 1000W bios on the 2nd bios slot. Strong indications that the temperature throttling on smart power stages is only disabled when you are on ln2 bios switch. And that temperature throttling on chip and vram ics is impossible to turn off; built into the chip and not a problem with ln2 since they read down to ln2 temps.
> 
> For actual ln2 overclocking obviously the ln2 bios slot enables of number of ln2 friendly features that bmjet has touched on it previous posts, and you can touch a bit more with his custom classified tool.
> 
> People who kill cards on 1000W bios are most likely just blowing fuses that everything except strix and kpe have. Product segmentation at its finest.
> 
> This is all far from from guaranteed without more data and more detailed decompilation and i2c tracking, and the usual disclaimers apply, but at the very least it seems a decent way to help hedge your bets if you run kp 1000W for gaming etc.


The 1000w bios makes the card run hot and pull 100w at idle. Why anyone with a KP would need it for gaming is beyond me. The fans would need to be at 100%.


----------



## sultanofswing

0451 said:


> The 1000w bios makes the card run hot and pull 100w at idle. Why anyone with a KP would need it for gaming is beyond me. The fans would need to be at 100%.


AFAIK it only makes the memory run at max clocks at idle which is a non issue.


----------



## Chamidorix

Yea, the Strix guy is the standout mystery. Wish we had more info; he wasn't particularly verbose. There are all sorts of ways board components can fail under normal use with bad QA luck, however. The point is that it is extremely hard to kill a modern GPU/CPU IC itself without static overvoltage into high transient current draw plus a temp spike before throttling. Such as when Kingpin's paste cracked in the middle of the PR spike on LN2 volts.


----------



## mirkendargen

sultanofswing said:


> AFAIK it only makes the memory run at max clocks at idle which is a non issue.


The memory idling at max clocks is about 75w of power, on top of the normal 25w idle. Not a big deal to everyone, but certainly something.


----------



## JonnyV75

Shawnb99 said:


> For those with the Kingpin are we not able to purchase the extended warranty for it? Been trying for over a day now, keep being denied


I was able to purchase the 10 year at the end of December for my Kingpin without issue.


----------



## Bobbylee

motivman said:


> honestly, do you think it is worth it to sell my 2X8 pin and get a Strix? I was able to achieve #55 in the world in PR with this card, so not sure If I will really see a big improvement going to a Strix. I just hate seeing my card power limited in GPU-Z though... haha.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 634 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090, www.3dmark.com)
> 
> I can defeat ALL power limits running the 1000W kingpin bios with my 2x8 pin card, however, one 8 pin sometimes pulls upward of 450w in timespy extreme... cables haven't melted yet though, lol. My card power draw running the 1000w bios with timespy extreme is about 765W!!!!


Are you sure one cable is pulling 450w? When I run the 1000w bios on my 2x8-pin, total power draw is around 575w and the highest 8-pin draw is 284, which is fine. Show me where you saw 450w on one 8-pin please?

Edit: saw you also shunted; that's like doubling up, so I can believe the 450w power draw now.


----------



## geriatricpollywog

JonnyV75 said:


> I was able to purchase the 10 year at the end of December for my Kingpin without issue.


I am able to, but the warranty is non-transferrable and I am not sure if I will keep it more than 3 years. For perspective, 3 years ago the 1080 Ti was top, 5 years ago it was the 980 Ti, and 10 years ago it was the GTX 580. EVGA has good customer service, so if you don't have the option to buy the warranty just call them and they'll sort it out.


----------



## V I P E R

Hello guys,
Does anybody with the EVGA 3090 FTW3 Ultra Gaming know what thickness thermal pads should be used for the VRM and memory? My card will arrive in a few days and I want to replace the pads with ThermalGrizzly Minus Pads, but I don't know what thickness I should buy.


----------



## motivman

Bobbylee said:


> are you sure one cable is pulling 450w? when i run 1000w bios on my 2x8 pin total power draw is around 575w and highest 8 pin draw is 284 which is fine. Show me where you saw 450w on one 8-pin please?
> 
> Edit: saw you also shunted, thats like doubling up so i can beleive the 450w power draw now


Yep, surprised my cable didn't turn to goo on that Timespy run. The card peaked at over 750W, SMH. Timespy Extreme is such a power hog.


----------



## geriatricpollywog

V I P E R said:


> Hello guys,
> Does anybody with EVGA 3090 FTW3 Ultra Gaming knows what thikness thermalpads should be used for the VRM and Memory? My card will arrive in a few days and I want to replace the pads with ThermalGrizzly minus pads, but don't know what thikness I should buy?


Optimus makes a block for the FTW3, so I'd email them and ask. Personally, I wouldn't suggest it. All 3090s have ridiculously overkill VRMs that will never reach 120c, which is the thermal limit of the power stages. You might see 70-80c on an air-cooled card. The memory is best cooled by pointing a fan at the backplate.


----------



## Thanh Nguyen

Can we modify the Kingpin AIO and attach it to a custom loop?


----------



## EarlZ

I'm looking at getting the Gigabyte Aorus Master 3090 Rev 1 (2x8-pin) and will be running it on stock cooling. Might do some undervolting as I've done on my 2080 Ti. How much performance potential do I lose with just 2x8-pins vs a 3x8-pin Rev 2 card?


----------



## Spiriva

changboy said:


> Who of you has killed their card with some experiment? I'm pretty sure many have burned one but never came back to talk about it.


I used the XOC bios on my 2080 Ti FE (EK block, XOC is still on it) and never had any problem. Now I'm running the 3090 XOC on my PNY 3090 (2x8-pin).
I haven't seen any problem with using the XOC bios (yet?), and I promise I will let you know if my 3090 dies. It's running at ~550W atm (79% in MSI Afterburner)

Default power limit 1000W - 30.4%= 696W

550W = 79%
500W = 71%
450W = 64%
400W = 57%

To be clear, I'm not saying "it won't hurt the card", since I have no clue what happens in the long run. It didn't seem to hurt my 2080 Ti FE; it's still running fine with the XOC.

I wouldn't use the XOC bios if I felt I could not afford to buy another 3090 if this one died.
Tbh I wouldn't touch the 3090 at all (installing a waterblock etc.) if I felt I could not afford a new one if it failed, whether from the XOC bios, a leaking waterloop, etc.

I've got a 120mm fan blowing on the EK backplate of my 3090, and according to HWiNFO the memory is at 51C after a few hours of gaming. Although I'm going to get the new EK water-cooled backplate when they release it; I'm rebuilding the loop anyhow with a Z590 & 11900K.

Anyhow, I will let you know if something happens to my 3090 with the XOC bios on it, whether it just instantly dies, or somehow degrades, runs slower, starts crashing, etc.
I will not flash any other bios, and I will leave it at 79% (~550W). I made a profile in Afterburner that sets the memory to -500MHz when I'm not playing, although I don't think it matters if it runs at max speed 24/7.


----------



## Shawnb99

Thanh Nguyen said:


> Can we modify and attach the kingpin aio to the custom loop ?


No and why would you want to?


----------



## des2k...

"
Default power limit 1000W - 30.4%= 696W

550W = 79%
500W = 71%
450W = 64%
400W = 57% 
"

It's not really that: the ~30% cut for the missing 3rd 8-pin also takes 30% off the PCIe slot share, so 1000w is more like 667w for 2x8-pins.

And if you're not at 100% TDP of the 1000w XOC default, you'll also throttle early (SRAM limit maybe), about 9 points early.

*So if you want 550w, it's 82.5% + 9%, so about 91.5% (using nvidia-smi) and about 92% with the slider in Afterburner, not 79%*
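If it helps anyone sanity-check their own target, here is the arithmetic above as a tiny script. The 2/3 ceiling and the ~9-point early-throttle margin are just the observed numbers from this post, not anything official:

```python
# Sketch of the slider math above for the 1000 W XOC BIOS on a 2x8-pin card.
# Assumed from the post: dropping the 3rd 8-pin scales the whole budget
# (PCIe slot share included) to ~2/3, and the card throttles ~9 slider
# points early when below 100% TDP.

XOC_LIMIT_W = 1000
TWO_PLUG_CEILING_W = XOC_LIMIT_W * 2 / 3   # ~667 W usable on 2x8-pin
EARLY_THROTTLE_PTS = 9                     # observed early-throttle margin

def slider_percent(target_watts: float) -> float:
    """Afterburner power-limit slider setting for a wattage target."""
    return round(target_watts / TWO_PLUG_CEILING_W * 100 + EARLY_THROTTLE_PTS, 1)

print(slider_percent(550))  # 91.5, matching the post (vs the naive 79%)
```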


----------



## gfunkernaught

So does anyone else regret getting 3090 now that those 4080 Ti leaks are out? 🤣


----------



## Zogge

gfunkernaught said:


> So does anyone else regret getting 3090 now that those 4080 Ti leaks are out? 🤣


There is always something faster around the corner, or do you buy a GPU and expect it to be a top performer forever?

Will it cost 71% more than the 3090 (as some say it has 71% more CUDA cores, for example)? Or does it follow the same performance-vs-cost scaling as 2080 Ti to 3080 to 3090? If so it could be around $5k+ USD...


----------



## motivman

gfunkernaught said:


> So does anyone else regret getting 3090 now that those 4080 Ti leaks are out? 🤣


lololololololol, 4080ti really???? really????? you can barely even buy a 30 series card, and they are leaking 4080ti, got to be a damn joke...SMH


----------



## Sheyster

0451 said:


> I am able to, but the warranty is non-transferable and I am not sure if I will keep it more than 3 years. For perspective, 3 years ago the 1080 Ti was top, 5 years ago it was the 980 Ti, and 10 years ago it was the 580. EVGA has good customer service, so if you don't have the option to buy the warranty just call them and they'll sort it out.


I don't think I've kept a video card for more than 3 years, ever. Usually 2 years is the longest time I'll go without upgrading.


----------



## stryker7314

motivman said:


> lololololololol, 4080ti really???? really????? you can barely even buy a 30 series card, and they are leaking 4080ti, got to be a damn joke...SMH


You'll be able to buy a 4080ti when the 5080ti releases...


----------



## gfunkernaught

Love how this thread was silent for 4 days then poof! Lol.

I fell off the "buy and keep a GPU for at least 5 years" wagon when I went from a 1080 Ti to a 2080 Ti. I thought my 2080 Ti was dead and brought it back to Microcenter under the extended warranty. Thinking about it hard after the fact, I realized it wasn't actually dead and I had messed up. That mistake cost me around $800, on top of the $1250 credit from the 2080 Ti, to upgrade to a 3090 with a warranty. Now I don't even think water cooling it will be worth it since these don't clock very well. Being impulsive is f**king expensive.


----------



## WayWayUp

The interesting thing about those Lovelace numbers is that they assume a 1.8GHz clock speed.

No clock increase going from 8nm to 5nm? They are probably understating projected performance.


----------



## GQNerd

mirkendargen said:


> The memory idling at max clocks is about 75w of power, on top of the normal 25w idle. Not a big deal to everyone, but certainly something.





0451 said:


> The 1000w bios makes the card run hot and pull 100w at idle. Why anyone with a KP would need it for gaming is beyond me. The fans would need to be at 100%.


FALSE.. I've already mentioned this earlier in the thread, but if you apply -500 to the memory on Afterburner, while on 1000w BIOS, the mem won't be running at max clocks, and the card is only pulling between 30-40w TOTAL while idle.

I've swapped to the KP full time (from Strix) and run it on 1000w bios as my daily.

I have 3 profiles saved: 1 for gaming, 1 for benching, and most importantly the -500 profile for general use.


----------



## long2905

Anyone able to run the 1000w bios for mining? Mine doesn't get any hashrate while on this bios.

Now I'm on a waterblock and somehow I cannot beat my personal best PR score of 14650. The card shows 2170MHz in Port Royal but the result is far from reflecting that.


----------



## des2k...

long2905 said:


> Anyone able to run the 1000w bios for mining? Mine doesn't get any hashrate while on this bios.
> 
> Now I'm on a waterblock and somehow I cannot beat my personal best PR score of 14650. The card shows 2170MHz in Port Royal but the result is far from reflecting that.


You have to disable P2 compute with NV Inspector; the 1000w vbios only runs P1 for memory.
If you don't disable that, core/mem clocks break for compute workloads.


----------



## gfunkernaught

So for the Trio owners, how far past stock are you getting with water-cooling and 500w bios?


----------



## WMDVenum

Miguelios said:


> FALSE.. I've already mentioned this earlier in the thread, but if you apply -500 to the memory on Afterburner, while on 1000w BIOS, the mem won't be running at max clocks, and the card is only pulling between 30-40w TOTAL while idle.
> 
> I've swapped to the KP full time (from Strix) and run it on 1000w bios as my daily.
> 
> I have 3 profiles saved: 1 for gaming, 1 for benching, and most importantly the -500 profile for general use.


What do you use to make these profiles and do you know if there is a way for say MSI Afterburner to swap profiles based on the active app?


----------



## defiledge

Can we increase the voltage without a mod? My slider is locked in afterburner


----------



## GQNerd

WMDVenum said:


> What do you use to make these profiles and do you know if there is a way for say MSI Afterburner to swap profiles based on the active app?


Just set your offsets and other settings in Afterburner, and hit the save button to the right. I don't have any of the profiles loading automatically, though there might be an option to apply one at system boot.

Any other auto switching would need to be done via scripting/task scheduler.
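For anyone wanting the scripting route, a rough sketch of per-app profile switching. MSIAfterburner.exe accepts -Profile1..-Profile5 on the command line; the install path and the process-to-profile mapping below are placeholders (my assumptions), and you'd still need Task Scheduler or a watcher loop to call it when the foreground app changes:

```python
# Rough sketch of per-app Afterburner profile switching.
# The path and mapping are hypothetical; adapt them to your own setup.
import subprocess

AFTERBURNER = r"C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe"

# hypothetical mapping: foreground process name -> saved profile slot
PROFILE_FOR_APP = {
    "Cyberpunk2077.exe": 1,  # gaming profile
    "3DMark.exe": 2,         # benching profile
}
DEFAULT_PROFILE = 3          # e.g. the -500 MHz memory idle profile

def profile_for(process_name: str) -> int:
    return PROFILE_FOR_APP.get(process_name, DEFAULT_PROFILE)

def apply_profile(process_name: str) -> None:
    # fire-and-forget; Afterburner applies the saved profile slot and exits
    subprocess.run([AFTERBURNER, f"-Profile{profile_for(process_name)}"])
```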


----------



## GQNerd

defiledge said:


> Can we increase the voltage without a mod? My slider is locked in afterburner











MSI Afterburner 4.6.5 (Beta 2) Download — www.guru3d.com


----------



## Sheyster

motivman said:


> lololololololol, 4080ti really???? really????? you can barely even buy a 30 series card, and they are leaking 4080ti, got to be a damn joke...SMH


I don't know what the real timeline is for 4090/4080Ti, but it sure sounds like it's going to be another big upgrade over the 3090. Cuda core difference alone is huge!


----------



## Falkentyne

defiledge said:


> Can we increase the voltage without a mod? My slider is locked in afterburner


Can't increase voltage without a mod. The Afterburner slider only allows +1 higher tier bin which is also linked to +15 mhz higher clocks. Unlocking this bin won't give you any higher stability because the clocks must respond to the jump. This allows 1.069v-1.10v at +15 mhz, instead of 1.056v-1.081v at the previous bin max. The voltage that is selected on the bin depends on the shape of the V/F curve (the voltage on the very leftmost point of the current tier is always the one selected).
Also playing with the curve to try to force 1.10v all the time just trashes your effective clocks because MSVDD won't like the desync.


----------



## Thanh Nguyen

What does the V/F curve have to look like to call it good silicon?


----------



## SoldierRBT

Thanh Nguyen said:


> How is the curve to call its a good silicon?


I like to use ATITool to test silicon. It's a low-wattage test to see how far you can push the 1.10v voltage point. I've tested 3 cards so far.
3080 FTW3 1.10v 2355MHz - 32C holds 2340MHz
3080 FE 1.10v 2280MHz - 32C holds 2265MHz
3090 KPE 1.10v 2325MHz - 32C holds 2310MHz


----------



## gfunkernaught

When you guys oc your 3090s, are you checking your actual clocks with Thermspy? I'm seeing lots of 2100mhz+ overclock claims but have they been verified with Thermspy?


----------



## Nizzen

gfunkernaught said:


> When you guys oc your 3090s, are you checking your actual clocks with Thermspy? I'm seeing lots of 2100mhz+ overclock claims but have they been verified with Thermspy?


15500+ in port royal is verifying enough


----------



## GAN77

SoldierRBT said:


> I like to use ATI tool to test silicon.


Please give a link to this tool.


----------



## motivman

Nizzen said:


> 15500+ in port royal is verifying enough


yes, but what is your gaming-stable PR score, i.e. in CP2077??? lol. I can score 15600+ in PR, but my gaming-stable OC is in the 15.1-15.2K range.


----------



## Falkentyne

gfunkernaught said:


> When you guys oc your 3090s, are you checking your actual clocks with Thermspy? I'm seeing lots of 2100mhz+ overclock claims but have they been verified with Thermspy?





GAN77 said:


> Please give a link to this tool.


There is no need to use Thermspy anymore (unless you want to mess with the P-states, which did nothing but glitch me when I tried). The latest HWinfo64 has support for "effective" clocks now, so just add effective clocks to the RTSS OSD in the hwinfo RTSS options plugin.


----------



## SoldierRBT

GAN77 said:


> Please give a link to this tool.











ATITool is an overclocking utility designed for ATI and NVIDIA video cards — www.techpowerup.com





This is how I test silicon:


----------



## WMDVenum

motivman said:


> yes, but what is your gaming stable PR score ie CP2077??? lol. I can score 15600+ in PR, but my gaming stable OC is in the 15.1 -15.2K range.


I can pass PR with +200/+1200, but my highest stable in games is about +135/+1125 (Warzone), though I can go to +150/+1125 (CP2077). I am staying with +135/+1125 since it ran Warzone for 12 hours (spectating overnight) without crashing. Typically my effective clock sits 15-20 MHz below the requested clock this way. I can do a [email protected] V/F curve and be stable, but my effective clock is near 2055 in that case, which is why I didn't use a V/F curve adjustment when overclocking the 3090 FE.

My PR score with [email protected] and +1125 on the memory was 14654, while +150/+1125 was 14939. My highest ever score was 15135 with [email protected] and +1200 memory, but I can't reproduce it on the current set of drivers.


----------



## changboy

14,600 vs 15,300 in Port Royal is about the same once in game, 3090 vs 3090, yeah.


----------



## Thanh Nguyen

So the Kingpin has an option to increase voltage to make the internal clock match the requested clock? I bet 15k and 16k PR give the same fps in games.


----------



## des2k...

WMDVenum said:


> I can pass PR with +200/+1200 but my highest stable in games is about +135/+1125 (warzone) but I can go to +150/1125 (cp2077). I am staying with +135/+1125 since it ran warzone for 12 hours (spectating overnight) without crashing. Typically my effective clock will sit at 15-20 mhz below requested clock using this process. I can do a [email protected] V/F curve and be stable but my effective clock is near 2055 in that case, this is the reason I didn't do a V/F curve adjustment when overclocking with the 3090FE.
> 
> My PR score with with [email protected] with +1125 on the memory was 14654 while +150/1125 was 14939. My highest ever score was 15135 when doing [email protected] with +1200 memory but I can't reproduce it with the current set of drivers I have.


I'm around 14690 with [email protected] and +853 mem. Effective frequency is usually anywhere from 2117-2155 depending on the game / GPU load.


----------



## changboy

Thanh Nguyen said:


> So kingpin has an option to increase voltage to make internal clock matches the requested clock? I bet 15k-16k PR has the same fps in games.


16k in Port Royal... you're probably not going to play at that setting because you will crash and crash. Maybe those who own a chiller can do it, but on a watercooling loop forget about it.


----------



## GAN77

Falkentyne said:


> The latest HWinfo64 has support for "effective" clocks now


Big frequencies - big corn)

Port Royal:


----------



## mouacyk

What's the history behind the effective clock, and why is this needed on Ampere, whilst it wasn't on prior generations?


----------



## motivman

changboy said:


> 16k in port royal.....you wont probably not going to play with this setting coz you will crash and crash. Or maybe those who own a chiller can do it but on watercooling loop forget about it.


16K PR --- 74.18 FPS https://www.3dmark.com/pr/735499
15.4K PR --- 71.32 FPS https://www.3dmark.com/pr/681933
difference --- 2.86 FPS aka negligible
14.86K --- 68.69 FPS https://www.3dmark.com/3dm/57452301
difference --- 5.5 FPS aka somewhat negligible!!!!!
If your stable gaming clock in PR hits 14.8-15K, you have a very good chip... everything else is just for e-peen
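For what it's worth, the FPS gaps in those linked runs work out like this (just redoing the numbers above as percentages):

```python
# Redoing the FPS deltas from the linked Port Royal runs as percentages.
def pct_gain(hi_fps: float, lo_fps: float) -> float:
    return round((hi_fps - lo_fps) / lo_fps * 100, 1)

print(pct_gain(74.18, 71.32))  # 4.0  -> 16K run vs 15.4K run
print(pct_gain(74.18, 68.69))  # 8.0  -> 16K run vs 14.86K run
```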


----------



## jura11

Did a few quick runs.


motivman said:


> 16K PR --- 74.18 FPS https://www.3dmark.com/pr/735499
> 15.4K PR --- 71.32 FPS https://www.3dmark.com/pr/681933
> difference --- 2.86 FPS aka negligible
> 14.86K --- 68.69 FPS https://www.3dmark.com/3dm/57452301
> difference --- 5.5 FPS aka somewhat negligible!!!!!
> If your stable gaming clock in PR hits 14.8-15K, you have a very good chip... everything else is just for e-peen


14908 is my record on my Palit RTX 3090 GamingPro; of course, I used the Kingpin XOC 1000W BIOS.









I scored 14 908 in Port Royal — AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090, 64-bit Windows 10 — www.3dmark.com





Recently I tested 3 other RTX 3090s: one KFA2 RTX 3090 SG and two Palit RTX 3090 GamingPros.

On my Palit RTX 3090 GamingPro I could easily run +1250MHz on the VRAM, but on my friend's KFA2 RTX 3090 SG anything above 1100MHz would make the display flicker like crazy, every 3 seconds or so. Other than that the KFA2 gets nice results in Fire Strike; it's just the Port Royal results that have been quite disappointing. I couldn't pass 13200 points with the KFA2 390W bios, and that's with a 10900K.

Hope this helps.

Thanks, Jura


----------



## mirkendargen

motivman said:


> 16K PR --- 74.18 FPS https://www.3dmark.com/pr/735499
> 15.4K PR --- 71.32 FPS https://www.3dmark.com/pr/681933
> difference --- 2.86 FPS aka negligible
> 14.86K --- 68.69 FPS https://www.3dmark.com/3dm/57452301
> difference --- 5.5 FPS aka somewhat negligible!!!!!
> If your stable gaming clock in PR hits 14.8-15K, you have a very good chip... everything else is just for e-peen


What you're actually saying there is game FPS scales pretty closely with PR scores...which is the opposite of saying PR scores are just epeen, lol. 7.5% is 7.5%, the question is whether you think 7.5% is a lot or not.


----------



## motivman

mirkendargen said:


> What you're actually saying there is game FPS scales pretty closely with PR scores...which is the opposite of saying PR scores are just epeen, lol. 7.5% is 7.5%, the question is whether you think 7.5% is a lot or not.


really, 7.5% is negligible. I doubt there is anyone out there that is gaming stable at whatever overclock produces 16K, lol. Most of the very best cards out there are probably gaming stable at around 15.3k max, and these are the guys with either the kingpin or strix cards.


----------



## Gandyman

Has there been any definitive word on whether using one single 8-pin plus one daisy-chained 2x8-pin cable is OK for the 3090 Strix OC? I have an HX1000. And if so, is there one connector with higher power draw that should get the dedicated single 8-pin, or is the load spread evenly?


----------



## mirkendargen

Gandyman said:


> Has there been any definitive word if using one 8 pin and one daisy chain 2x8 pin is ok for 3090 strix oc? I have a HX1000. And if so is there that has a higher power draw that should be the single 8 pin? or is it evenly spread?


It completely depends on the connector to the power supply (assuming it's modular) and the cabling in between. You're running double the power through that section and any problem with it (bad connection between connectors, bad crimping of the connector pins to the wires, etc.) is that much more likely to melt something/catch on fire. If you're running the stock BIOS, you're probably not exceeding the amperage rating of the wire or the connectors. The 1000W BIOS...just don't do that.
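Back-of-envelope numbers for why the daisy-chained section matters: assuming 3 live 12V conductors per 8-pin lead and roughly a 13A rating for typical 16AWG PSU wire (both assumptions, check your own cable gauge):

```python
# Back-of-envelope per-wire current on a PCIe power lead.
# Assumes 3 live 12 V conductors per 8-pin and ~13 A rating for 16 AWG wire.

WIRE_RATING_A = 13  # assumed typical 16 AWG rating, check your cables

def amps_per_wire(total_watts: float, live_wires: int = 3) -> float:
    return round(total_watts / 12 / live_wires, 1)

print(amps_per_wire(300))  # 8.3 A  -> one connector's worth, within rating
print(amps_per_wire(600))  # 16.7 A -> both connectors on one daisy-chain lead
```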


----------



## changboy

The thing is, 7% of a billion and 7% of 50 cents are not the same thing in the end.
For me it's more like 7% of 50 cents hehehe.


----------



## mirkendargen

changboy said:


> The thing is for 7% on a billion or 7% on 50 cent not doing the same thing at the end


I must have misclicked and ended up in the 3080 owners thread on accident to find people not caring about 7.5%...


----------



## Thanh Nguyen

motivman said:


> really, 7.5% is negligible. I doubt there is anyone out there that is gaming stable at whatever overclock produces 16K, lol. Most of the very best cards out there are probably gaming stable at around 15.3k max, and these are the guys with either the kingpin or strix cards.


My 15.2k FTW3 does 20 loops of Metro and 30 minutes of GT2.


----------



## changboy

mirkendargen said:


> I must have misclicked and ended up in the 3080 owners thread on accident to find people not caring about 7.5%...


You can't even notice it when you play Cyberpunk 2077: after V turns the next street corner, you're asking yourself, "Do I see better fps now??????" 😂


----------



## Gandyman

mirkendargen said:


> It completely depends on the connector to the power supply (assuming it's modular) and the cabling in between. You're running double the power through that section and any problem with it (bad connection between connectors, bad crimping of the connector pins to the wires, etc.) is that much more likely to melt something/catch on fire. If you're running the stock BIOS, you're probably not exceeding the amperage rating of the wire or the connectors. The 1000W BIOS...just don't do that.


My issue is that the stock Corsair cables include ZERO single 8-pin cables. The $250 CableMod C-Series kit comes with two single 8-pins only (zero daisy chains, which is pathetic; older CableMod kits came with 4 of each AND were cheaper!!). The $150 Corsair premium cable kit comes with 2x single 8-pin and 2x daisy-chained 8x2. I can't find any individual cables for sale in my country. Buying another overpriced cable kit just for a single 8-pin seems very not worth it... especially after finding every last penny to get a 3090 Strix (which is $3500 AUD), I don't really want to inflate the cost of this GPU any more.

Not undervolted, at +150 core / +1000 memory, the highest wattage HWiNFO has reported is 369.9w.

My plan is to get the $150 Corsair premium kit and use 1x 8x2 and 1x 8.


----------



## gfunkernaught

Falkentyne said:


> There is no need to use Thermspy anymore (unless you want to mess with the P-states, which did nothing but glitch me when I tried). The latest HWinfo64 has support for "effective" clocks now, so just add effective clocks to the RTSS OSD in the hwinfo RTSS options plugin.


Would be great if AB read the effective clock. Ever since I learned about effective clock, it has thrown a wrench in my whole approach to overclocking.


----------



## gfunkernaught

Speaking of power supplies, I'm a fan of Corsair. Which of their PSUs would be more than enough for a 3x8-pin 3090 with a 500w bios for overclocking? Preferably something with at least 4 PCIe connectors and single 8-pin cables, not the split ones.


----------



## des2k...

What are the tricks for bringing up effective frequency?

For me: I use a high V/F point in AB, which seems to help with effective clocks, and nvidia-smi to limit max frequency. So for 2160 @ 1068mv, setting the point at 2250 @ 1068mv seems to get the best effective clocks (~2135, and 2150 for low-load games).

Any +1 bin from stock seems to add very little, like +10, max +15, to the effective frequency.

Bins at/after 2175 add double that in effective clocks for some reason, +20 to +30MHz. I didn't try to get this stable, as I'm sure I would need +35mv on the voltage.

Offset mode doesn't seem to improve effective clocks on my card, and a custom curve in AB doesn't really apply. Maybe the points are not placed close enough together?


----------



## mattskiiau

So where is the explanation of effective clock? Is that what we should be looking at now and not AB's clock?


----------



## Falkentyne

mattskiiau said:


> So where is the explanation of effective clock? Is that what we should be looking at now and not AB's clock?


If you have to choose one or the other, effective clock is the best
It's still nice to see both.


----------



## PhatSV6

Hello all 3090 TUF owners,

Let us rise up for better power limits and push Asus for at least a 400w-420w power limit for our cards here: Better BIOS. We all know our 2x8-pin TUF cards are more than capable of handling that power, from both a cooling and a power-delivery standpoint.

We have seen many other GPU series with 2x8-pin pulling in excess of 450w in the past, from both AMD and Nvidia, at stock.

Every mention counts, guys, even if it's a thumbs up or a simple "yes, more power please".

The XOC bios for the 3-pin cards will work and give you more power, however using it means you will have issues with compute-workload clock speeds, as a memory P-state is missing.

Gigabyte and other bioses at the 390w mark are OK, but I'm sure we would all like to stay within warranty and have all of our display outputs working.

Given the higher average price point and the better power delivery on these cards, I would say we are entitled to a better power limit.


----------



## bmgjet

PhatSV6 said:


> Hello all 3090 TUF owners,
> 
> let us rise up for better power limits and push asus for at lest a 400w - 420w power limit for our cards here Better BIOS as we all know that our 2 x 8pin TUF cards are more than capable of dealing with that power in a cooling and power delivery standpoint.
> 
> We have seen *many many other gpu series with 2 x 8pin pulling in excess of 450w* in the past from both amd and nvidia at stock.
> 
> Every mention counts guys even if its a thumbs up or a simple yes more power please.
> 
> XOC bios for the 3 pin will work and give you more power however using it means that you will have issues with compute work load clock speeds as a memory P state is missing.
> 
> Gigabyte and other bios at 390w mark are ok but im sure we would all like to stay within warranty and have all of our peripheral outputs working.
> 
> Given the average higher price point and the better power delivery on these cards I would say we are entitled to a better power limit.


Can you list these many other GPUs pulling over 450W over 2x8-pins at stock, since I can't think of any.
You're not going to get a bios over 390W on a 2-plug card, since that's already ~6% over the PCI Express spec of 66W on 12V from the slot and 150W per 12V 8-pin.
The FE got around that by making their own plug and not advertising it as PCI Express approved in the PCB stamps.
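The spec math above, spelled out (66W on the slot's 12V rail plus 150W per 8-pin, per the PCIe CEM limits being cited):

```python
# Spec power budget for a 2x8-pin card: slot 12 V rail plus two 8-pin connectors.
SLOT_12V_W = 66   # PCIe CEM limit on the slot's 12 V rail
PIN8_W = 150      # per 8-pin connector

budget = SLOT_12V_W + 2 * PIN8_W           # 366 W total by spec
overshoot = (390 - budget) / budget * 100  # how far a 390 W BIOS exceeds it

print(budget)               # 366
print(round(overshoot, 1))  # 6.6
```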


----------



## motivman

PhatSV6 said:


> Hello all 3090 TUF owners,
> 
> let us rise up for better power limits and push asus for at lest a 400w - 420w power limit for our cards here Better BIOS as we all know that our 2 x 8pin TUF cards are more than capable of dealing with that power in a cooling and power delivery standpoint.
> 
> We have seen many many other gpu series with 2 x 8pin pulling in excess of 450w in the past from both amd and nvidia at stock.
> 
> Every mention counts guys even if its a thumbs up or a simple yes more power please.
> 
> XOC bios for the 3 pin will work and give you more power however using it means that you will have issues with compute work load clock speeds as a memory P state is missing.
> 
> Gigabyte and other bios at 390w mark are ok but im sure we would all like to stay within warranty and have all of our peripheral outputs working.
> 
> Given the average higher price point and the better power delivery on these cards I would say we are entitled to a better power limit.


shunt mod the card, flash 520W bios, get a nice waterblock = profit, up to 600W power draw and performance comparable to 3X8 pin cards. No need for 1000W bios and all the downsides that come with it. Unless someone comes up with a way to edit and sign the bios, no mfr is releasing more than a 390W bios for any 2 pin card, that is just a pipe dream TBH.


----------



## schoolofmonkey

So I got my Inno3D RTX 3090 iChill X4 today.
Man, this card is huge.

What's your take on this? Good soldering job, you think?


----------



## Falkentyne

schoolofmonkey said:


> So I got my Inno3D RTX 3090 iChill x4 today,
> Man this card is huge.
> 
> What's your take on this, good soldering job you think.
> 
> 
> View attachment 2476199
> 
> View attachment 2476200


Not a fan.
@bmgjet what do you think?


----------



## bmgjet

Falkentyne said:


> Not a fan.
> @bmgjet what do you think?


2nd pic: the SP-Caps have exploded lol.
The MLCCs are fine though; they just need to connect the middle row to the outside row, and nothing says they need to be perfectly straight.


----------



## schoolofmonkey

bmgjet said:


> 2nd pic the SP caps have exploded lol.
> The MLCC are fine tho they just need to connect from middle row to outside row nothing says they need to be perfectly straight.


Yeah, there were what looked like little manufacturing marks that wiped off after I saw the picture; they only showed up with the flash.
I thought the exact same thing you did, so I examined it closer; they looked like lead-pencil lines, gone now.
Hey, the pics are taken with a Google Pixel 5, never gonna take the best shots.

I just had a laugh when I saw the MLCC soldering job.


----------



## long2905

schoolofmonkey said:


> So I got my Inno3D RTX 3090 iChill x4 today,
> Man this card is huge.
> 
> What's your take on this, good soldering job you think.


may i ask where did you get this card from? if you want to stick with it for the long haul, i suggest getting a reference waterblock asap


----------



## schoolofmonkey

long2905 said:


> may i ask where did you get this card from? if you want to stick with it for the long haul, i suggest getting a reference waterblock asap


Already sorted, waiting on delivery,
I got it from a local store in Australia called PC Case Gear, they still have them in stock.


----------



## long2905

schoolofmonkey said:


> Already sorted, waiting on delivery,
> I got it from a local store in Australia called PC Case Gear, they still have them in stock.


looking forward to comparing results and learning from you lol. mine is not great even on 1000w xoc bios


----------



## Falkentyne

gfunkernaught said:


> Would be great if AB read effective clock. Ever since I learned about this effective clock it threw a wrench in my whole approach to overclocking.


Effective clock is an old interface from long ago, already supported but undocumented.
You can add it via PLLClockMonitoring in .cfg
Big thanks to Unwinder for posting this.
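For anyone hunting for it, the tweak looks roughly like this in MSIAfterburner.cfg (in Afterburner's Profiles folder); the exact file and section placement are my assumption from Unwinder's post, so back up the file first:

```ini
; enables Afterburner's older, undocumented effective-clock monitoring
; interface, per the Unwinder post referenced above
[Settings]
PLLClockMonitoring = 1
```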


----------



## PhatSV6

motivman said:


> shunt mod the card, flash 520W bios, get a nice waterblock = profit, up to 600W power draw and performance comparable to 3X8 pin cards. No need for 1000W bios and all the downsides that come with it. Unless someone comes up with a way to edit and sign the bios, no mfr is releasing more than a 390W bios for any 2 pin card, that is just a pipe dream TBH.



One word ( WARRANTY )


----------



## PhatSV6

PhatSV6 said:


> Hello all 3090 TUF owners,
> 
> let us rise up for better power limits and push asus for at lest a 400w - 420w power limit for our cards here Better BIOS as we all know that our 2 x 8pin TUF cards are more than capable of dealing with that power in a cooling and power delivery standpoint.
> 
> We have seen many many other gpu series with 2 x 8pin pulling in excess of 450w in the past from both amd and nvidia at stock.
> 
> Every mention counts guys even if its a thumbs up or a simple yes more power please.
> 
> XOC bios for the 3 pin will work and give you more power however using it means that you will have issues with compute work load clock speeds as a memory P state is missing.
> 
> Gigabyte and other bios at 390w mark are ok but im sure we would all like to stay within warranty and have all of our peripheral outputs working.
> 
> Given the average higher price point and the better power delivery on these cards I would say we are entitled to a better power limit.






Additionally for those who need it Unlocking Voltage in MSI Afterburner


----------



## jomama22

PhatSV6 said:


> Additionally for those who need it Unlocking Voltage in MSI Afterburner


You don't need to do anything to "unlock" voltage control for any 3090. Just go into afterburner settings and enable voltage control. Select msi extended (though reference would probably be just fine).

You aren't going to get any more than +100 on the slider or over 1.1v when not power limited.

Btw fun fact, the 1.1v in afterburner is for msvdd voltage. Nvvdd (gpu core) voltage will run at some ratio of the msvdd voltage. The highest I have seen at pure stock is 1.18v nvvdd and 1.12 nvvdd when set to lock at 1.1 in afterburner and running time spy gt2 test.
Top is nvvdd, bottom right is msvdd.


----------



## ViRuS2k

A question for this awesome community:

My Bykski waterblock arrived for my MSI 3090 Gaming X Trio.
I remember someone saying they had to order 3mm pads, as the ones that come with the block are not thick enough. Can someone clarify please what pads I should order, and where to put them, so that the block gets completely covered in pads lol and I get no hot-spot issues...


please
thanks.


----------



## Zogge

gfunkernaught said:


> Speaking of power supplies. I'm a fan or corsair. Which of their psu would be more than enough for 3x8pin 3090 with a 500w bios for overclocking? Preferably something with at least 4 PCIe connectors and single 8pin cables not the split ones.


I use the AX1600i. With it you can monitor individual 8-pin amps too.


----------



## rix2

New EVGA FTW3 Ultra card: with the 450W OC BIOS I get 13890 in Port Royal, and with the 500W BIOS: https://www.3dmark.com/pr/823371

No change at all; it draws 430W max and 78-79W on the PCIe slot... I'm not very happy. Is it possible the power supply is limiting it? Corsair 850W, used for 2 years of mining.


----------



## Falkentyne

rix2 said:


> New EVGA FTW3 Ultra card: with the 450W OC BIOS I get 13890 in Port Royal, and with the 500W BIOS: https://www.3dmark.com/pr/823371
> 
> No change at all; it draws 430W max and 78-79W on the PCIe slot... I'm not very happy. Is it possible the power supply is limiting it? Corsair 850W, used for 2 years of mining.


No.
It's a hardware flaw with these cards. A power delivery problem causes too-high PCIe slot power draw instead of drawing from the 8-pins. So these cards are limited to 420W maximum unless you use the "XC3" BIOS instead of the FTW3 XOC BIOS.


----------



## geriatricpollywog

rix2 said:


> New EVGA FTW3 Ultra card: with the 450W OC BIOS I get 13890 in Port Royal, and with the 500W BIOS: https://www.3dmark.com/pr/823371
> 
> No change at all; it draws 430W max and 78-79W on the PCIe slot... I'm not very happy. Is it possible the power supply is limiting it? Corsair 850W, used for 2 years of mining.





Falkentyne said:


> No
> Hardware flaw with these cards. Power delivery problem causes too high PCIE power draw instead of the 8 pins. So these cards are limited to 420W maximum unless you use "XC3" bios instead of FTW3 XOC Bios.


This is why you need to manage this thread.


----------



## rix2

Falkentyne said:


> No
> Hardware flaw with these cards. Power delivery problem causes too high PCIE power draw instead of the 8 pins. So these cards are limited to 420W maximum unless you use "XC3" bios instead of FTW3 XOC Bios.


Should I test 520w bios?


----------



## mardon

Mining last night my 3090 touched 82C on the mem. Core @ 1500MHz, memory @ +1500MHz, power limit 87% (390W BIOS). 125.9 MH/s.
I've got an MP5 but I think it's starved for flow. Got a better pump in the mail and will order some Gelid pads to try and bring temps down.

Think I'll mine $350's worth and call it quits. Don't fancy baking my card.


----------



## Gebeleisis

mardon said:


> Mining last night and my 3090 touched 82c on the mem. Core @ 1500mhz memory @ +1500mhz power limit 87% (390w bios). 125.9 hash.
> I've got an MP5 but I think it's starved for flow. Got a better pump in the mail and will order some Gelid pads to try and bring temps down.
> 
> Think I'll mine $350s worth and call it quits. Don't fancy baking my card.


Have the same problem with mine. We need an active backplate.
I'll mine just enough to cover the cost of the 3090 over the 3080 and call it a day


----------



## Zogge

I talked with MP5Works and they say it is designed for parallel flow only. When I tried to put it in my loop, I dropped from 190 l/h to 50 l/h, so it is really restrictive, but at 50 l/h through their block it got really cold. I am going to try a separate pump/loop with that flow at some point. With a separate 50 l/h loop, for instance, it is pretty close to an active backplate, compared to the EK showcase for instance.

With my current setup my memory never goes over 66 degrees in long benchmarks.


----------



## mardon

Gebeleisis said:


> Have the same problem with mine. We need an active backplate.
> I'll mine just enough to cover the cost of the 3090 over the 3080 and call it a day


That's the thing... I've got an active backplate, the MP5Works one. I'm pretty sure flow is restricted at the moment though.


----------



## PhatSV6

jomama22 said:


> You don't need to do anything to "unlock" voltage control for any 3090. Just go into Afterburner settings and enable voltage control. Select MSI Extended (though Reference would probably be just fine).
> 
> You aren't going to get any more than +100 on the slider, or over 1.1V when not power limited.
> 
> Btw, fun fact: the 1.1V in Afterburner is for MSVDD voltage. NVVDD (GPU core) voltage will run at some ratio of the MSVDD voltage. The highest I have seen at pure stock is 1.18V NVVDD, and 1.12V NVVDD when set to lock at 1.1 in Afterburner and running the Time Spy GT2 test.
> Top is NVVDD, bottom right is MSVDD.
> View attachment 2476226


Thanks for the info, although you're wrong, at least in my case with my Asus TUF 3090. Afterburner did not show voltage out of the box regardless of the settings toggle; I tried all the reinstall and new-driver bull crap. This is the first card I've ever had where I had to do this, so this is just for people who may be having the same issue as me, where voltage wasn't showing even after enabling voltage settings. Thank you for your input though.


----------



## Benni231990

Hey guys, I need your help.

I use MSI Afterburner with a custom curve for the core clock, and I set 1.025V at 2055MHz, but Afterburner changes the curve by itself, higher and lower.

Today on first start Afterburner set the core clock to 2100, so I changed it to 2055; after I started a game the clock was 2025, so I need to change it manually every time. This sucks so hard.

Can you help me? I'm using the new beta 5.


----------



## PhuCCo

PhatSV6 said:


> Thanks for the info, although you're wrong, at least in my case with my Asus TUF 3090. Afterburner did not show voltage out of the box regardless of the settings toggle; I tried all the reinstall and new-driver bull crap. This is the first card I've ever had where I had to do this, so this is just for people who may be having the same issue as me, where voltage wasn't showing even after enabling voltage settings. Thank you for your input though.


Did you download the beta version of Afterburner? I don't think the normal version shows voltage on Ampere. If I'm understanding correctly, it's just Afterburner not showing a voltage value.


----------



## PhuCCo

Benni231990 said:


> Hey guys, I need your help.
> 
> I use MSI Afterburner with a custom curve for the core clock, and I set 1.025V at 2055MHz, but Afterburner changes the curve by itself, higher and lower.
> 
> Today on first start Afterburner set the core clock to 2100, so I changed it to 2055; after I started a game the clock was 2025, so I need to change it manually every time. This sucks so hard.
> 
> Can you help me? I'm using the new beta 5.


It does the same to me; I think it has to do with temperature. When rebooting my PC, the custom curve will shift and I have to mess with it. I've just learned to deal with it lol, I cannot find a solution.


----------



## Benni231990

Yes, this sucks so hard. I fixed the curve but Afterburner changes it by itself.


----------



## PhuCCo

Benni231990 said:


> Yes, this sucks so hard. I fixed the curve but Afterburner changes it by itself.


I ended up using like three of the Afterburner presets with the same manual curve and one of them usually applies correctly if it moved, and yeah it's pretty annoying


----------



## long2905

I need the active cooling backplate so bad. Anyone know when EK will release it?


----------



## Gebeleisis

mardon said:


> That's the thing.. I've got an active backplate. The MP5 works. I'm pretty sure flow is restricted at the moment though.


and you get 88 C with that back plate ?


----------



## motivman

long2905 said:


> i need the active cooling backplate so bad. anyone know when will EK release it?


Buy Fujipoly 17 W/mK thermal pads and use them on all the components covered by your backplate. My memory does not exceed 60C after hours of gaming. I have the EK block also, and no fan blowing directly on my backplate, just case airflow.


----------



## long2905

Gebeleisis said:


> and you get 88 C with that back plate ?


88C on a mining load is far better than stock, where you can hit 110C easily.



motivman said:


> Buy fujipoly thermal pad 17 w/mk pads and use it on all the components used by your backplate. My memory does not exceed 60C after hours of gaming. I have EK block also, and no fan blowing directly on my backplate, except case airflow.


Can you share the thickness and the area you covered on the backplate? Did you put the whole pad on, or still follow the manufacturer guideline? I have access to Thermalright Odyssey pads, not sure if that's enough. I would need the 2mm one though.


----------



## ALSTER868

SoldierRBT said:


> This is how I test silicon:


Could you please share the way you do the silicon testing with this ATI tool? Just increasing the offset while the embedded 3D test is running until it starts to artefact?


----------



## motivman

long2905 said:


> 88c on a mining load is far better than stock where you can get 110c easily
> 
> 
> can you share the thickness and area you covered on the backplate? did you put the whole pad on or still follow the manufacturer guideline? i have access to thermalright odyssey pad not sure if thats enough. I would need the 2mm one though.


Bought pads based on the thickness provided in the manual; replaced every generic EK pad with Fujipoly Extreme.


----------



## Gebeleisis

long2905 said:


> 88c on a mining load is far better than stock where you can get 110c easily
> 
> 
> can you share the thickness and area you covered on the backplate? did you put the whole pad on or still follow the manufacturer guideline? i have access to thermalright odyssey pad not sure if thats enough. I would need the 2mm one though.


I have a heatsink with an 80mm fan on it and I get 89C.

I was hoping that an active backplate would do better. That is why I am asking.

Thanks for the info!


----------



## SoldierRBT

ALSTER868 said:


> Could you please share the way you do the silicon testing with this ATI tool? Just increasing the offset while the embedded 3D test is going on until it's starting to artefact?


Open the V/F curve and add offset to the 1.10V point, then start the 3D test in ATITool. If it doesn't crash, close it, add more offset, and repeat until it crashes. Then lower it 15MHz and run the test again. I let it run for about 10 secs, then check the clocks reported at 32C, which are one bin below the speed you set on the V/F curve. Good chips can hold 2250+ at 1.10V.

Keep in mind this is just a silicon test; it gives you an idea of how good the chip is and how far it can be pushed with proper cooling and voltages. For stability testing, Time Spy Extreme and Quake II RTX are very good.
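The binning loop described above amounts to a simple search. A sketch in Python, where `is_stable` is a hypothetical stand-in for a manual ATITool run at a given offset (not a real API):

```python
def find_max_offset(is_stable, step=15, backoff=15, limit=400):
    """Sketch of the silicon-binning loop described above.

    is_stable: hypothetical callback that applies a core-clock offset
    (MHz) at the 1.10V point and returns True if the ~10s 3D test
    survives without artifacts. Purely illustrative.
    """
    offset = 0
    while offset <= limit and is_stable(offset):
        offset += step       # keep adding offset until the test crashes
    return offset - backoff  # then back off 15 MHz and retest there

# Example: a fake chip that artifacts above +180 MHz
print(find_max_offset(lambda off: off <= 180))  # 180
```

In practice you would rerun the test at the returned offset to confirm it holds, exactly as the post describes.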


----------



## gfunkernaught

Falkentyne said:


> Effective clock is an old interface from long ago, already supported but undocumented.
> You can add it via PLLClockMonitoring in .cfg
> Big thanks to Unwinder for posting this.


Wait I can add that to AB?? Or are you referring to hwinfo?


----------



## mardon

long2905 said:


> 88c on a mining load is far better than stock where you can get 110c easily
> 
> 
> can you share the thickness and area you covered on the backplate? did you put the whole pad on or still follow the manufacturer guideline? i have access to thermalright odyssey pad not sure if thats enough. I would need the 2mm one though.


Well that's good to know. I thought it wasn't working. Remember that's at +1500MHz on the mem too.

I've got an Alphacool DDC that does 200 l/h at the moment. That's feeding a GPU, CPU, MP5, 2 rads and quite a few 90° fittings. Got an EK DDC 3.2 on the way that does 1000 l/h, so I expect it'll be much improved!

I'm also going to relocate the MP5 connections from the EK block to two T-fittings elsewhere in the loop.

Also going for some Gelid thermal pads for the backplate shortly.


----------



## PiotrMKG

Sorry for the dumb question, but where are the fuses, if there are any, on the 3090 STRIX? And can I leave the PCIe shunt unmodded? Has anyone tried to populate the caps on the back side? There are just two on the left side.

BTW, I have Vishay Dale 5 mOhm current-sensing resistors. Are they equivalent to those on the PCB? They are way thinner than the Asus ones.


----------



## Rhadamanthys

motivman said:


> since trio has garbage vrms and stupid 20A fuses.


Is that from a review or where do you get this from? Just curious since I'm about to order one.


----------



## motivman

PiotrMKG said:


> Sorry for dumb Q but where are breakers if there are any on 3090 STRIX? and can I leave unmodded PCI shunt? Have anyone tried to populate caps on back side? there are just two on the left side.
> 
> BTW. I have Vishay Dale 5m ohm current sensing resistors are they equivalent of those on PCB? they are way thinner than Asus ones.
> View attachment 2476269


it has NONE, that is why this is one of the absolute best cards for shunt modding!!!!!!!!!!


----------



## motivman

Rhadamanthys said:


> Is that from a review or where do you get this from? Just curious since I'm about to order one.


Common knowledge. Look at the PCB, you will see the 20A fuses.


----------



## Falkentyne

PiotrMKG said:


> Sorry for dumb Q but where are breakers if there are any on 3090 STRIX? and can I leave unmodded PCI shunt? Have anyone tried to populate caps on back side? there are just two on the left side.
> 
> BTW. I have Vishay Dale 5m ohm current sensing resistors are they equivalent of those on PCB? they are way thinner than Asus ones.
> View attachment 2476269


There are no fuses.
You should not use those vishay shunts. Just buy some Panasonic current sensing 5 mOhm shunt resistors and use those if you are stack soldering, or 3 mOhms if you are desoldering to replace.
It's best to mod all the shunts. Skipping some throws off the power balancing, which can cause a higher or imbalanced draw on a different power rail and lead to throttling earlier than expected. Although you can get away with stacking 5 mOhm shunts on all the shunts and 8 mOhm or 10 mOhm on the PCIE Slot shunt.

The issue with this isn't just that the PCIE Slot power will limit the max TDP first. It's also that it will limit you even if you adjust the power slider in MSI AB for a lower power target, because the slot won't be in sync with the 8 pins. While putting a higher resistance shunt on PCIE Slot is required on cards with fuses, you simply should not do it on a Strix. Just use the same shunts on all of them and don't skip any.
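For intuition on why mismatched shunt values skew the balancing: the controller infers current from the voltage drop across each shunt, so changing the resistance rescales what each rail reports. A rough sketch of that ideal-sense model (the resistor values are just the examples from this thread, not measurements):

```python
def parallel_mohm(r1, r2):
    # Two shunts stacked on top of each other act as resistors in parallel.
    return r1 * r2 / (r1 + r2)

def reported_watts(actual_watts, stock_mohm, modded_mohm):
    # Sense voltage = I * R, so for the same real current the controller's
    # reading scales by modded/stock resistance.
    return actual_watts * modded_mohm / stock_mohm

stacked = parallel_mohm(5.0, 5.0)             # 5 on 5 -> 2.5 mOhm
print(stacked)                                # 2.5
print(reported_watts(500.0, 5.0, stacked))    # 500 W real reads as 250 W
# A 10 mOhm stack on a 5 mOhm slot shunt gives a different ratio (3.33 mOhm),
# which is the kind of rail imbalance described above.
print(round(parallel_mohm(10.0, 5.0), 2))     # 3.33
```

This is why stacking the same value on every shunt keeps the rails in sync with each other, while mixing values shifts where the limiter kicks in.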


----------



## Rhadamanthys

motivman said:


> common knowledge. look at the PCB, you will see the 20A fuses.


I see. I remember several people mention in here that the Trio should be able to stand 600ish W. Any indications that it won't, considering the "****ty" VRMs / fuse?


----------



## Falkentyne

Rhadamanthys said:


> I see. I remember several people mention in here that the Trio should be able to stand 600ish W. Any indications that it won't, considering the "****ty" VRMs / fuse?


The card won't blow up at 600W, especially since you aren't modding the vcore. You are also going to run into the NVVDD Output / SRAM limit before you hit the card limit anyway after you shunt mod because that rail doesn't respond to shunts. The MSI card has 20 amp fuses on the Slot and the 8 pins. That's 240W per input. You aren't exceeding that unless you go on LN2.
It's the Gigabyte and Red-LED-Light-of-Death FTW3 cards that have the 10 amp slot fuses.


----------



## jomama22

Finally had a moment to use the evc2 on the strix while it was cold. Only had about an hour and a half to mess around but was able to get some decent results:

Port royal: 15916
https://www.3dmark.com/pr/825726

Time spy Extreme 6th place:
Overall 12651
Graphics 12767
https://www.3dmark.com/spy/17838197

Time spy 8th place:
Overall 23111
Graphics 24405
https://www.3dmark.com/spy/17838149

Hopefully can have time to actually tweak stuff more and not have to try and do all three in such a short time lol.


----------



## Beagle Box

gfunkernaught said:


> Wait I can add that to AB?? Or are you referring to hwinfo?


Afterburner has a text configuration file named MSIAfterburner.cfg in the main Afterburner folder.

Find: PLLClockMonitoring = 0 under the [NVAPIHAL] section

Change to: PLLClockMonitoring = 1

** mistake fixed, further info added. **
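If you'd rather script the change than edit by hand: the .cfg is INI-style, so Python's configparser can flip the flag. A minimal sketch, assuming the file parses cleanly as INI (back it up first, and close Afterburner before editing so it doesn't overwrite the change):

```python
import configparser

def enable_pll_monitoring(cfg_path):
    """Set PLLClockMonitoring = 1 in [NVAPIHAL] of MSIAfterburner.cfg.

    Assumes the file is plain INI; the path and backup handling
    are up to you.
    """
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # keep key capitalization as Afterburner writes it
    cfg.read(cfg_path)
    if not cfg.has_section("NVAPIHAL"):
        cfg.add_section("NVAPIHAL")
    cfg.set("NVAPIHAL", "PLLClockMonitoring", "1")
    with open(cfg_path, "w") as f:
        cfg.write(f)
```

If your copy of the file has sections configparser chokes on, a plain text search-and-replace of the one line works just as well.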


----------



## gfunkernaught

All these overclocking results are making me want to get the ek block when it comes back for the trio.


----------



## PiotrMKG

Falkentyne said:


> There are no fuses.
> You should not use those vishay shunts. Just buy some Panasonic current sensing 5 mOhm shunt resistors and use those if you are stack soldering, or 3 mOhms if you are desoldering to replace.
> It's best to mod all the shunts. Skipping some throws off the power balancing, which can cause a higher or imbalanced draw on a different power rail and lead to throttling earlier than expected. Although you can get away with stacking 5 mOhm shunts on all the shunts and 8 mOhm or 10 mOhm on the PCIE Slot shunt.
> 
> The issue with this isn't just that the PCIE Slot power will limit the max TDP first. It's also that it will limit you even if you adjust the power slider in MSI AB for a lower power target, because the slot won't be in sync with the 8 pins. While putting a higher resistance shunt on PCIE Slot is required on cards with fuses, you simply should not do it on a Strix. Just use the same shunts on all of them and don't skip any.


I will try to get the Panasonic ones, but I will have to import them because where I live they aren't in stock. What's wrong with the Vishay ones? I've used them for other projects. Since the warranty goes out the window the moment I unscrew the backplate, I will replace the shunts instead of stacking them.


----------



## kuutale

I upgraded from my 2080 Ti to an Asus 3090 with EKWB block. I have a two-monitor setup connected with DP cables. My problem is when gaming: at first the game works like a charm, but then I see this. I have an LG 34" monitor and an Acer Predator. My concern is that my GPU is broken. I reset drivers and tried other things, but it's not helping; temps look fine. I have a Ryzen setup with an Asus Crosshair VIII Hero. Can someone advise?


----------



## Falkentyne

PiotrMKG said:


> I will try to get the Panasonic ones, but I will have to import them because where I live they aren't in stock. What's wrong with the Vishay ones? I've used them for other projects. Since the warranty goes out the window the moment I unscrew the backplate, I will replace the shunts instead of stacking them.


Can you post a picture of these Vishay shunts?
Are they the same size and just thinner? Are the conductive edges the same?


----------



## PiotrMKG

Falkentyne said:


> Can you post a picture of these Vishay shunts?
> Are they the same size and just thinner? Are the conductive edges the same?





https://eu.mouser.com/productdetail/vishay-dale/wsl25122l000fea?qs=ViWNInbc%252beXvfGOPKnFAiw==


----------



## Falkentyne

PiotrMKG said:


> https://eu.mouser.com/productdetail/vishay-dale/wsl25122l000fea?qs=ViWNInbc%252beXvfGOPKnFAiw==
> 
> 
> 
> View attachment 2476293


I guess they will work, then.


----------



## LancerVI

May I join your fancy club???


----------



## McFluff

McFluff said:


> I wanted to share some info on trying to cool the backplate on my 3090 FE. I have found a pretty effective way to cool it, but I'm looking for a permanent solution.
> 
> I first discovered this issue while mining eth. I bought this card for gaming, but seeing that Nicehash was paying ~$10 a day, I figured that I'd leave it running when I wasn't using the computer. The issue that I ran into was one that others in this thread have described, while performing the memory intensive task of Eth mining, the fans would ramp way up even though the GPU is nice and cool at 42c. I assume this is due to the card reading high temperatures from the memory on the back.
> 
> Since I can't read the memory temperatures on the FE and the fan was way too loud to justify the annoyance of making about 25 cents an hour, I started to fiddle around with trying to cool the backplate to see if I could lower the percentage that the fans automatically ramp up to. I had some marginal success by lowering my case temperatures, leaving the side panel off and aiming a fan directly at the backplate, but it was still much louder than I'd consider bearable.
> 
> Moving past simply trying to cool the air around the card, I started to look into ways to actively cool the backplate. Unfortunately, I didn't have any spare heatsinks, but I did have a spare AIO that I was able to MacGyver into position by using a kitchen utensil as a mounting device.
> 
> View attachment 2475460
> 
> 
> Not even using any sort of thermal pad or paste, just pressing the coldplate against the backplate, here is the result. The fans quickly drop down to 58%, and when removing the coldplate they shoot right back up to 100%.
> 
> View attachment 2475461
> 
> 
> In terms of the real-world effect this has had on mining, here are some results I had while trying to find a good overclock combo for Nicehash. Highlighting what I finally settled on as a good hash rate and fan speed.
> 
> View attachment 2475462
> 
> 
> In terms of gaming benchmarks, I was able to complete Port Royal with an extra 150 memory overclock when using the AIO, +1100 was the max I could get without the AIO, +1250 with.
> 
> An AIO seems like overkill for cooling a backplate; anyone have a better solution? I am considering ordering a CPU heatsink and sticking it to the card, but I sort of want to find a solution that maintains aesthetics.





McFluff said:


> So I’ve managed to get a similar result by removing the AIO and keeping just a kitchen utensil! There is some benefit from the cooling the AIO was providing, but it seems like most of the benefit was from additional pressure on the backplate. Maybe a thermal pad problem? I’ve ordered some new ones to try replacing the stock ones.
> 
> View attachment 2475964


Just a final update, the thermal pad replacement worked:









Possibly widespread cooling issues on RTX 3090 FE (www.overclock.net)


----------



## LancerVI

kuutale said:


> I upgraded from my 2080 Ti to an Asus 3090 with EKWB block. I have a two-monitor setup connected with DP cables. My problem is when gaming: at first the game works like a charm, but then I see this. I have an LG 34" monitor and an Acer Predator. My concern is that my GPU is broken. I reset drivers and tried other things, but it's not helping; temps look fine. I have a Ryzen setup with an Asus Crosshair VIII Hero. Can someone advise?
> View attachment 2476291


Did you DDU by chance before installing the new card? I highly, highly, highly recommend doing so for ANY GPU replacement.


----------



## motivman

Falkentyne said:


> The card won't blow up at 600W, especially since you aren't modding the vcore. You are also going to run into the NVVDD Output / SRAM limit before you hit the card limit anyway after you shunt mod because that rail doesn't respond to shunts. The MSI card has 20 amp fuses on the Slot and the 8 pins. That's 240W per input. You aren't exceeding that unless you go on LN2.
> It's the Gigabyte and Red LED Light of DeathFTW3 cards that have the 10 amp Slot fuses.


But can the crap VRMs on the Trio really support north of 600W without blowing up?


----------



## des2k...

motivman said:


> But can the crap VRM's on the trio really support north of 600W without blowing up?


14 phases for core x 50A = 700W for core
3 phases for mem x 50A = 150W for mem
Total board power: 850W

So at 600W, if you subtract 100W for mem, you're using 500W on the core VRM out of a possible 700W. Not sure why this wouldn't hold up on water.
My Zotac has fewer phases and holds 600W just fine, and it's only a 2x8pin.
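As a sanity check on the arithmetic above (phase count × per-phase current × rail voltage, taking roughly 1V for the core rail as an assumption, not a measured value):

```python
def vrm_ceiling_w(phases, amps_per_phase, rail_volts=1.0):
    # Rough upper bound: phases * per-phase current limit * rail voltage.
    # 1.0 V is an assumed core voltage under load; real thermal limits
    # will sit below this ceiling.
    return phases * amps_per_phase * rail_volts

core = vrm_ceiling_w(14, 50)  # 700 W for the core rail
mem = vrm_ceiling_w(3, 50)    # 150 W for memory
print(core, mem, core + mem)
```

It is only a ceiling estimate; derating for temperature and transient load is what actually decides how much headroom a given card has.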


----------



## motivman

des2k... said:


> 14 phases for core x 50a = 700w for core
> 3 phases for mem x 50a = 150w for mem
> Total board power 850w
> 
> So 600w, if you substract 100w mem, you're using 500w on the core VRM out of 700w possible. Not sure why this wouldn't hold on water.
> My Zotac has less phases and holds 600w just fine but it's a 2x8pin.


How many phases does the reference PCB have? I have drawn north of 750W from my reference card with no issues. So would you say the Gaming X Trio is a good alternative to the Strix?


----------



## Rhadamanthys

des2k... said:


> 14 phases for core x 50a = 700w for core
> 3 phases for mem x 50a = 150w for mem
> Total board power 850w
> 
> So 600w, if you substract 100w mem, you're using 500w on the core VRM out of 700w possible. Not sure why this wouldn't hold on water.
> My Zotac has less phases and holds 600w just fine but it's a 2x8pin.


Wait, I read the Trio is using 45A power stages, or am I confusing something there?


----------



## des2k...

Rhadamanthys said:


> Wait I read the Trio is using 45A power stages or am I confusing something there?


"MSI uses OnSemi NCP302150 low RDS (on) DrMOS components throughout."

Says 50a on spec sheet
*


https://www.onsemi.com/pub/Collateral/NCP302150-D.PDF


*


----------



## des2k...

motivman said:


> how many phases does the reference PCB have? I have drawn north of 750W from my reference card, with no issues. So will you say the gaming x trio is a good alternative to the Strix?


I believe they don't go below 13 phases for core; not sure if all of them use 50A stages.

TechPowerUp usually has specs and pictures.

What card do you have?


----------



## motivman

des2k... said:


> I believe they don't go bellow 13phases for core, not sure if all of them use 50a stages
> 
> techpowerup usually have specs, pictures
> 
> what card do you have ?


PNY reference


----------



## des2k...

motivman said:


> PNY reference


Interesting that you brought this up, because the PNY XLR8 is listed as "There are 18 phases in total, 4 memory, 8 Vcore, 6 Uncore".
If you count the phases, this matches.

If I look at TechPowerUp for my Zotac 3090, I also count 18 phases, but they label it as 13 core and 3 mem, so they forgot to add 2 for uncore... my calculation for total board power was way off.

For the Zotac, core VRM + cache is 750W, plus 3 phases for mem at 150W, so total board is 900W.
For the PNY, core VRM + cache is 700W, plus 4 for mem at 200W, so total board is 900W. *Edit

That explains why you can run 750W no problem. Is yours 2x8pin or 3x8pin?
I'm not too worried about using 600W then, not that I use that in games.










PNY RTX 3090 XLR8 Revel teardown video (www.techpowerup.com)
ZOTAC GeForce RTX 3090 Trinity Review (www.techpowerup.com)


----------



## bmgjet

Has anyone thought of using a 120mm AIO for backplate cooling?
You can pick them up cheap, and it should screw into the exhaust-fan spot on most cases above the GPU.
Mounting to the backplate is going to be the only real issue, since you'll need to drill some holes.


----------



## Falkentyne

bmgjet said:


> Has anyone thought of using a 120mm AIO for backplate cooling?
> You can pick them up cheap, and it should screw into the exhaust-fan spot on most cases above the GPU.
> Mounting to the backplate is going to be the only real issue, since you'll need to drill some holes.


I have good news. please check your pm


----------



## des2k...

bmgjet said:


> Has anyone thought of using a 120mm AIO for backplate cooling?
> You can pick them up cheap, and it should screw into the exhaust-fan spot on most cases above the GPU.
> Mounting to the backplate is going to be the only real issue, since you'll need to drill some holes.


I almost installed my unused CM 120 AIO but went the lazy way and put fans on it.
For that to make any difference on mem temps you'll need good thermal pads / pressure from the backplate.

Post your before/after temps. I have some thin double-sided tape (heat resistant, just enough for paste pressure), and if your temps are good I might try it.

The backplate heat, I'm just guessing, would be half the memory (50W) + the back VRM + the back of the core (whatever doesn't get to the front cooler).


----------



## mirkendargen

bmgjet said:


> Has anyone thought of using a 120mm AIO for backplate cooling?
> You can pick them up cheap, and it should screw into the exhaust-fan spot on most cases above the GPU.
> Mounting to the backplate is going to be the only real issue, since you'll need to drill some holes.


Better than nothing, but pretty low surface area compared to a RAM block and overkill on the pump/rad part of it.


----------



## gfunkernaught

Wait the Trio is crap?!? No!!!!!!!


----------



## gfunkernaught

Beagle Box said:


> Afterburner has a text configuration file named afterburner.cfg in the main afterburner folder.
> 
> Find: PLLClockMonitoring = 0
> 
> Change to: PLLClockMonitoring = 1


Don't see a file called afterburner.cfg. I have a file called MSIAfterburner.cfg that does not have PLLClockMonitoring.


----------



## des2k...

gfunkernaught said:


> Don't see a file called afterburner.cfg. I have a file called MSIAfterburner.cfg that does not have PLLClockMonitoring.


Well, the release notes for MSI Afterburner 4.6.3 beta 5 (build 16041) show this:

"Unlocked old alternate clock monitoring functionality from original RivaTuner era. Power users may switch to PLL clock monitoring mode instead of default target clock monitoring mode on NVIDIA GPUs"


----------



## Beagle Box

gfunkernaught said:


> Don't see a file called afterburner.cfg. I have a file called MSIAfterburner.cfg that does not have PLLClockMonitoring.


You are correct. 
The name of the file is MSIAfterburner.cfg.
My mistake.

PLLClockMonitoring = 0 should be found under the [NVAPIHAL] section. 
If it isn't there, you can probably add it to the end of that section.


----------



## Falkentyne

gfunkernaught said:


> Don't see a file called afterburner.cfg. I have a file called MSIAfterburner.cfg that does not have PLLClockMonitoring.


You need beta 5 and follow the instructions others have posted.


----------



## motivman

des2k... said:


> Interesting that you brought this up, because the PNY XLR8 shows as "There are 18 phases in total, 4 memory, 8 Vcore, 6 Uncore"
> If you count the phases this matches.
> 
> If I look at techpowerup for mine zotac 3090, I count also 18 phases but they label as 13 core 3 mem, so they forgot to add 2 for uncore so.... my calculation for total board power was way off
> 
> For Zotac, core VRM + cache are 750w + 3 for mem 150w, so total board is 900w
> PNY, core VRM + cache are 700w + 4 for mem 200w, so total board 900w *Edit
> 
> That explains why you can run 750w no problem. Is yours 2x8pin or 3x8pin ?
> I'm not too worried about using 600w then, not that I use that in games.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> PNY RTX 3090 XLR8 Revel teardown video (www.techpowerup.com)
> ZOTAC GeForce RTX 3090 Trinity Review (www.techpowerup.com)


2X8 pin


----------



## bmgjet

des2k... said:


> I almost installed my unused CM 120 aio but went the lazy way and put fans.
> For that to make any difference on mem temps you'll need good thermal pads / pressure from the backplate.
> 
> Post your before / after temps. I have some thin double-sided tape (heat resistant, just enough for paste pressure), and if your temps are good I might try it.
> 
> The backplate heat, I'm just guessing, would be half the memory (50w) + back VRM + back of the core (what doesn't get to the front cooler)





mirkendargen said:


> Better than nothing, but pretty low surface area compared to a RAM block and overkill on the pump/rad part of it.


I've already got a Chinese RAM water block on my backplate; I just really hate how it's plumbed up since I only had straight fittings.
That gives me 70-80C on the junction temp and a 50-58C backplate with an IR temp gun.

I've got an old Antec 620 AIO that, from a quick measurement, would work and would probably look a bit nicer. I'm just too lazy to pull my loop apart again until I need to do some maintenance in a couple months.


----------



## gfunkernaught

Just want some opinions: my Trio, stock cooler, stock 380w low-temp bios, PL set to 102% in AB, no overclock applied. While playing Cold War @4k, everything maxed, DLSS disabled, 55-60fps (Vsync), Reflex+Boost enabled, I'm seeing core clocks from 1905-1950mhz, temps 64-66c... does that look like good silicon overclocking potential to anyone? It does to me, but I don't know Ampere well, so maybe I can get some opinions.


----------



## Falkentyne

gfunkernaught said:


> Just want some opinions: my trio, stock cooler, stock 380w low-temp bios, PL set to 102% in AB, no overclock applied, while playing cold war @4k everything maxed, DLSS disabled, 55-60fps (Vsync), Reflex+Boost enabled, I'm seeing core clocks from 1905-1950mhz, temps 64-66c...does that look like good silicon overclocking potential to anyone? It does to me but I don't know ampere well so maybe I can get some opinions.


Can't tell from that alone. Try the ATITool test.


----------



## ViRuS2k

ViRuS2k said:


> a question for this awesome community:
> 
> my Bykski waterblock arrived for my MSI 3090 Gaming X Trio
> i remember someone saying they had to order 3mm pads, as the ones that come with the block are not thick enough. can someone clarify please what pads i should order and where to put them so that the block gets completely covered in pads lol, so i get no hot spot issues...
> 
> please
> thanks.


BUMP for the information above. Can someone who has this block and a Gaming X Trio 3090 please answer my question, as I need to know the exact thermal pad thicknesses for this card's placement on this waterblock, front and back. I'm sure I read something in this thread
about someone having this exact combo; I'm interested in what pads he used for the front and the back of the card.


----------



## gfunkernaught

Falkentyne said:


> Cant know from such a thing. Try the ATItool test.


Oh ok. I thought maybe some speculation could occur based on initial clocks and temps.


----------



## PhatSV6

PhuCCo said:


> Did you download the beta version of Afterburner? I don't think the normal version works with Ampere showing voltage. If I'm understanding correctly it's just Afterburner not showing a voltage value


That is possibly it, as I'm not using the beta version.


----------



## Nico67

motivman said:


> Yes, thanks man. I would be satisfied with this card, if I could get the power draw between both pins to be close to equal on a 3 pin bios.


I checked this about three weeks back, and using a clamp meter they read closer to equal than any shunt can show. Shunt readings are only good for triggering throttling




Gandyman said:


> My issue is that the stock corsair cables have ZERO single 8 pin cables. The $250 cablemod c-series kit comes with two 8 pins only (zero daisy chain which is pathetic, older cablemod kits came with 4 of each AND were cheaper!!) The $150 corsair premium cable kits, come with 2x 8 and 2x 8x2. I can't find any individual cables for sale in my country. Buying another overpriced cable kit for a single 8 pin seems very not worth .. especially after finding every last penny to get a 3090 strix (which is $3500 AUD) I don't really want to inflate the cost of this GPU anymore.
> 
> Not undervolted at +150 core +1000 memory the highest HWINFO has reported my wattage at is 369.9w
> 
> My plan is to get the $150 corsair premium kit, and use 1x 8x8 and 1x 8


Get some PCIE cables from Moddiy. I got two 8pin-to-8pin and asked for solid PCIE ends and 16awg cable. Took a few weeks as it was free post, but they are really good and you can get the length you want. 



Benni231990 said:


> hey guys, i need your help
> 
> i use msi afterburner with a curve for the core clock, and i set 1.025v with 2055mhz, but msi changes the curve by itself to higher and lower
> 
> today on first start msi set the core clock to 2100, so i changed it to 2055; after i started a game the clock was 2025, so i need to change it manually every time, this sucks so hard
> 
> can you help me? i use the new beta 5


It's all to do with temp. I set 2085 / 1.000v at idle (23c, curve "L"), it drops to 2070 / 0.993 over 25c, and at game temp (32c) it runs 2055 / 0.993. Nothing you can do about that unless you can keep the temp from changing. Possibly change fan speeds to keep it within one step? Temp changes are just weird; makes me wonder if only some points change at certain temps, and why.


----------



## WillP

Just to add I couldn't control voltage on my KFA2 card with the normal Afterburner release, and installing the beta detailed above has solved the problem. Thanks again to the users of this forum, I thought it was specific to my card as KFA2's own OC program worked with voltage, but seeing others post about the issue and the helpful responses has fixed it. So glad I found this forum, cheers guys.


----------



## Rhadamanthys

des2k... said:


> "MSI uses OnSemi NCP302150 low RDS (on) DrMOS components throughout."
> 
> Says 50a on spec sheet
> *
> 
> 
> https://www.onsemi.com/pub/Collateral/NCP302150-D.PDF
> 
> 
> *


Nope, according to Kitguru.



> Moving onto the PCB, here MSI has kept the same board design as the RTX 3080 Gaming X Trio, but it has stepped up the power delivery. This 3090 now sports a total of 18 power stages, which are likely split with 15 for the GPU and 3 for the memory, though a 14+4 configuration is also possible.
> 
> Here, MSI is using ON Semi’s NCP302045 MOSFETs, rated for average current up to 45A and peak current up to 70A.


MSI RTX 3090 Gaming X Trio Review - KitGuru

Assuming it's 14 stages for the GPU, x45 leaves it with 630W and peak draw of 980W (x70). Meaning the Trio should be fine running the 500W bios. Someone on Reddit was worried about longevity with peak spikes, though. What do you guys think?
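A back-of-envelope sketch of that math, for anyone who wants to plug in their own stage count. It assumes every core stage delivers its rated current at roughly 1 V output, which is a simplification (real Vcore varies with load and clocks):

```python
# Rough VRM headroom from the NCP302045 ratings quoted above.
# Assumption: ~1 V core output, so watts ~= stages x amps-per-stage.
STAGE_AVG_A = 45   # rated average current per stage
STAGE_PEAK_A = 70  # rated peak current per stage

def vrm_watts(stages: int, amps_per_stage: float, vout: float = 1.0) -> float:
    """Deliverable power across parallel stages at output voltage vout."""
    return stages * amps_per_stage * vout

print(vrm_watts(14, STAGE_AVG_A))   # sustained headroom, 14-stage split
print(vrm_watts(14, STAGE_PEAK_A))  # transient peak headroom
```

With 14 core stages that gives the 630W sustained / 980W peak figures above; a 15+3 split would land slightly higher.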


----------



## rix2

I flashed my FTW3 Ultra with a 320w bios, and a firmware upgrade is not possible; the red light on the power connectors is lit and the fans run at 100% with the card in safe mode. Nothing helps. Is the bios linked here ok, or is this card protected?


----------



## rix2

Rhadamanthys said:


> Nope, according to Kitguru.
> 
> 
> 
> MSI RTX 3090 Gaming X Trio Review - KitGuru
> 
> Assuming it's 14 stages for the GPU, x45 leaves it with 630W and peak draw of 980W (x70). Meaning the Trio should be fine running the 500W bios. Someone on Reddit was worried about longevity with peak spikes, though. What do you guys think?


Works fine: 14600 PR air cooled https://www.3dmark.com/compare/pr/815933/pr/545057/pr/524216#
First is the OC'd Trio on the EVGA 500W bios, second is default clocks at 500w, and 13800 is my beaten EVGA Ultra card at max OC


----------



## Rhadamanthys

rix2 said:


> work fine, 14600 PR air cooled


That's with the 500W bios? Have you tried the 520?


----------



## rix2

Rhadamanthys said:


> That's with the 500W bios? Have you tried the 520?


No; his is air cooled, and it's not my card. My EVGA FTW3 Ultra only does 430w max, no matter which bios...


----------



## schoolofmonkey

Holy heck, this Inno3D iChill X4 card hits 83c but sits on 1755Mhz; seriously, it doesn't go under 83c running Heaven.
I've got 3 120mm fans as intake right underneath the card, and I even manually set the fan speed to 100%. Why is this card's heatsink so bad at cooling when it's so massive?

Water here I come.


----------



## bmgjet

schoolofmonkey said:


> Holy heck this Inno3D iChill x4 card hits 83c but sits on 1755Mhz, seriously it doesn't go under 83c running Heaven.
> I've got 3 120mm fans as intake right underneath the card, I even manually set the fan speed to 100%, why is this card heatsink so bad at cooling when it's so massive.
> 
> Water here I come.


Only reason it's not going higher is because 83C is the temperature target.


----------



## schoolofmonkey

bmgjet said:


> Only reason its not going higher is because 83C is the tempture target.


I figured that, ram is hitting 86-90c, thought about a fan on the backplate.


----------



## Rhadamanthys

Anyone with a Gaming X Trio have an EK block on it? Seen bad reviews about massive coil whine with the Strix block. This is one of the things that keep me from ordering a Strix, actually.

Anyway, I was wondering if there are similar issues with the Trio (block).


----------



## motivman

anyone care to ponder on how this might have happened?????









Faulty RTX 3090 | ASUS ROG STRIX OC 24GB RTX 3090 | Faulty | eBay


EK water block and backplate installed and included with sale.



www.ebay.com


----------



## motivman

Nico67 said:


> I check this about three weeks back and using a clampmeter they reading closer to equal than any shunt can read. Shunt readings are only good for triggering throttle


really? very interesting... now I have to test this out myself. What clamp meter are you using, and how do you go about testing the power draw? sorry, I am a guy in the medical field, who happens to have an interest in pc gaming and hardware...


----------



## ttnuagmada

motivman said:


> anyone care to ponder on how this might have happened?????
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Faulty RTX 3090 | ASUS ROG STRIX OC 24GB RTX 3090 | Faulty | eBay
> 
> 
> EK water block and backplate installed and included with sale.
> 
> 
> 
> www.ebay.com


Probably that guy from earlier in the thread who fried his being a moron with the 1000w bios.


----------



## motivman

Nico67 said:


> Back to work today, so I got the clampmeter
> 
> 2 x 8pin 1000w @ 80% some power limiting but saw max board power around 780w, so somewhere around 520w real,
> 
> 8pin1 = 17.70A x 12.1v = 214w
> 8pin2 = 18.24A x 12.1v = 221w
> 
> so its definitely a lot more balanced than GPU-Z would suggest at 282/174.
> 
> Still hard to gauge absolute power and therefore whats coming off the PCIE slot, but I would guess its up there around 80w+


what clamp meter model are you using and do you just clamp the entire 8 pin cable or have to isolate cables from the bunch?


----------



## jomama22

ttnuagmada said:


> Probably that guy from earlier in the thread who fried his being a moron with the 1000w bios.


It would have had to be over 1000w, or even possibly a defect of some kind. Maybe testing Furmark for some extended time, or Quake RTX lmao. Who knows. It's honestly strange to see that on a Strix.

And that cap is on the MSVDD rail, not the NVVDD, which is even stranger.


----------



## jomama22

motivman said:


> what clamp meter model are you using and do you just clamp the entire 8 pin cable or have to isolate cables from the bunch?


You would clamp just the 12v wires of the pcie cable.


----------



## jomama22

schoolofmonkey said:


> Holy heck this Inno3D iChill x4 card hits 83c but sits on 1755Mhz, seriously it doesn't go under 83c running Heaven.
> I've got 3 120mm fans as intake right underneath the card, I even manually set the fan speed to 100%, why is this card heatsink so bad at cooling when it's so massive.
> 
> Water here I come.


I would try reapplying paste and cranking down the cooler with some washers between the screw and retention bracket. That seems abnormally high.

But if you're going water then meh lol


----------



## motivman

jomama22 said:


> You would clamp just the 12v wires of the pcie cable.


so using the image below as a reference, I would have to isolate the 3 "yellow" cables in the picture and clamp them? Thinking this might be a little hard for me to do since my cable is sleeved all the way up to the connector.... we shall see though, I just bought this clamp meter off of amazon.



https://www.amazon.com/gp/product/B0721MKXBC/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1


----------



## jomama22

motivman said:


> so using the image below as a reference, I would have to isolate the 3 "yellow" cables in the picture and clamp them? Thinking this might be a little hard for me to do since my cable is sleeved all the way up to the connector.... we shall see though, I just bought this clamp meter off of amazon.
> 
> 
> 
> https://www.amazon.com/gp/product/B0721MKXBC/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1


Yeah, that's it. Was gonna say it would be annoying if they are fully sleeved together lol. You can just slice the sleeving in a small area, pull it apart and clamp there, and then just get some shrink tubing to reseal the sleeving together. If you have a hair dryer, it may do the job, though a heat gun would be better.


----------



## motivman

jomama22 said:


> Yeah, that's it. Was gonna say it would be annoying if they are fully sleeved together lol. You can just slice the sleeving in a small area, pull it apart and clamp there, and then just get some shrink tubing to reseal the sleeving together. If you have a hair dryer, it may do the job, though a heat gun would be better.


This is a game changer. Can't wait to get the clamp meter. If it turns out my power draw is balanced on both cables, then I have zero need to get a STRIX. On the 1000W bios, while running Timespy Extreme, one of my cables was reporting upward of 450W power draw... but the cable was cold to the touch... no way an 8 pin cable was pulling that much power without literally melting or getting hot.


----------



## motivman

jomama22 said:


> Yeah, that's it. Was gonna say it would be annoying if they are fully sleeved together lol. You can just slice the sleeving in a small area, pull it apart and clamp there, and then just get some shrink tubing to reseal the sleeving together. If you have a hair dryer, it may do the job, though a heat gun would be better.


If the other wires on the 8 pin cable are for sense and ground, won't clamping the entire thing give inaccurate amperage results???


----------



## jomama22

motivman said:


> This is a game changer. Can't wait to get the clamp meter. If it turns out my power draw is balanced on both cables, then I have zero need to get a STRIX. on the 1000W bios, while running timespy extreme, one of my cables was reporting upward of 450W power draw...but cable was cold to touch...no way an 8 pin cable was pulling so much power without literally melting or getting hot.


Tbf, with the gauge of pc cables, short distance and multiple 12v lines, I wouldn't expect them to get warm. I would be more concerned with the physical connector melting on either the gpu side or the psu side from the pins/crimps inside them. Just one needs to have a poor connection and you create a large resistance and arcing. You never see a melted or damaged pcie cable when they go; it's always the connectors, and it happens because of the above.

The pins/crimps are only rated for 30 connections/disconnections, as they get looser each time.


----------



## jomama22

motivman said:


> If the other cables on the 8 pin cable are for sense and ground, won't clamping the entire thing still not give accurate results for amperage???


No, a clamp meter works by measuring the magnetic field created by a current moving in one direction (look up Lenz's and Faraday's laws of induction). If you put the ground cables in there with it, you would effectively create two opposing magnetic fields that cancel each other out.

You can try this out yourself when you get the clamp. I don't expect there to be a reading of exactly 0 when clamping the whole cable, as I'm not sure if the entire ground plane is all connected or if each pin has its own ground plane to deal with (very doubtful), so the current going through the ground wires may be more or less than the current going through the 12v.

Also, there would be some loss of current through heat as well.

Anyway, yeah. Just the 12v wires lol.
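To illustrate the cancellation with made-up numbers (these are illustrative, not measurements):

```python
# A clamp meter reads the net current through its jaws. Clamp the whole
# bundle and the 12 V supply currents are cancelled by the equal and
# opposite return currents on the grounds; clamp only the 12 V wires
# and you see the actual draw.
twelve_v_amps = [6.0, 6.1, 5.9]    # three 12 V supply wires
return_amps = [-6.0, -6.1, -5.9]   # same current flowing back on grounds

whole_bundle = sum(twelve_v_amps) + sum(return_amps)  # nets to ~0 A
supply_only = sum(twelve_v_amps)                      # the real ~18 A

print(whole_bundle)
print(supply_only)
```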


----------



## motivman

jomama22 said:


> No, a clamp meter works by measuring the magnetic field created by a current moving in one direction (look up lenz's and faraday's induction current law). If you put the ground cables in there with it, you would effectively create two opposing magnetic fields that would cancel each other out.


ahhh makes sense. So my PSU is EVGA, 1200P2 to be exact. The PCIE cable terminates in a 6+2 configuration. So just to make sure the +2 is just sense b and ground correct? I am just gonna clamp pins 1, 2 and 3 on the 6 pin cable?


----------



## RosaPanteren

Are there any universal die blocks that will fit the 3090, since the stock bracket holes are spaced differently in width and depth?


----------



## jomama22

motivman said:


> ahhh makes sense. So my PSU is EVGA, 1200P2 to be exact. The PCIE cable terminates in a 6+2 configuration. So just to make sure the +2 is just sense b and ground correct? I am just gonna clamp pins 1, 2 and 3 on the 6 pin cable?


Yeah, that seems to be what the diagram indicates. I'm not sure how evga cables are wired up on the psu side. I'd say to trace the wires from the gpu plugs but with it sleeved, that would be difficult. You can just measure each wire individually while the card is running. With the meter facing the same way for each wire you test (the clamp will have some indication for the direction of flow it wants + current to be flowing) you will get a reading. The 12v lines will give you a positive amperage reading and the ground will give you a negative amperage reading. The sense wires, meh, who knows, probably some negligible negative current.
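For reference, once you have the clamp readings the per-connector math is just amps times rail voltage. A minimal sketch, using Nico67's earlier readings (17.70A / 18.24A at 12.1v) as example inputs:

```python
# Per-connector power from clamp-meter readings: clamp only the 12 V
# wires, then watts = measured amps x measured rail voltage.
def connector_watts(amps: float, volts: float = 12.1) -> float:
    """Power delivered through one 8-pin connector."""
    return amps * volts

# Example inputs mirroring Nico67's readings earlier in the thread.
readings = {"8pin1": 17.70, "8pin2": 18.24}
for name, amps in readings.items():
    print(f"{name}: {connector_watts(amps):.0f} W")
```

That reproduces the ~214w / ~221w split he reported, far more balanced than the shunt sensors suggest.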


----------



## Z0eff

Which waterblock do people recommend for the 3090 FTW3 Ultra?


----------



## ttnuagmada

jomama22 said:


> It would have had to be over 1000w or even possibly a defect of some kind. Maybe testing furmark for some extended time, or quake rtx lmao. who knows. It's honestly strange to see that on a strix.


Running Furmark is exactly what he did I'm pretty sure lol.


----------



## long2905

schoolofmonkey said:


> Holy heck this Inno3D iChill x4 card hits 83c but sits on 1755Mhz, seriously it doesn't go under 83c running Heaven.
> I've got 3 120mm fans as intake right underneath the card, I even manually set the fan speed to 100%, why is this card heatsink so bad at cooling when it's so massive.
> 
> Water here I come.


on stock air cooler i have to undervolt to [email protected] to keep it at around 70c

its much better on water obviously but i havent been able to crack 15000 port royal yet.



ttnuagmada said:


> Running Furmark is exactly what he did I'm pretty sure lol.


thats exactly it. i remember someone posted running furmark on 1000w bios and got chided on this exact thread lol.


----------



## Thanh Nguyen

Z0eff said:


> Which waterblock do people recommend for the 3090 FTW3 Ultra?


Optimus block if u want a Gucci or Bykski if you want to shop at Dollar Tree.


----------



## motivman

ttnuagmada said:


> Running Furmark is exactly what he did I'm pretty sure lol.


funny he sold it for a really great price at the end of the day.


----------



## Thanh Nguyen

motivman said:


> funny he sold it for a really great price at the end of the day.


How much?


----------



## motivman

Thanh Nguyen said:


> How much?


the thing sold for $2126!!!!!! lmao


----------



## Sheyster

motivman said:


> the thing sold for $2126!!!!!! lmao


Hard to believe that. A lot of anti-scammers are up-bidding these cards on eBay to win then backing out of it. Could be that's what happened here.


----------



## motivman

Sheyster said:


> Hard to believe that. A lot of anti-scammers are up-bidding these cards on eBay to win then backing out of it. Could be that's what happened here.


for the technically inclined with god-like soldering skills, how hard would it be to replace the blown caps though? I would guess an easy fix????


----------



## Sheyster

motivman said:


> for the technically inclined with god-like soldering skills, how hard would it to replace the blown caps though? I would guess an easy fix????


I suppose, but damage plus no warranty plus over $2100 cost... Sheesh! I'd just pay a scalper if I was that desperate for a card.


----------



## Z0eff

Thanh Nguyen said:


> Optimus block if u want a Gucci or Bykski if you want to shop at Dollar Tree.


I'm guessing Dollar Tree is a cheap store in the US?

What about the EK one? Availability issues?


----------



## toxicnerve

Hi Everyone, 

I've read most of this thread (well skimmed) and I've got a daft question that I suspect will be answered with "It's up to you" but I wanted to ask anyway.

I have a 3090 Aorus Master. This card is 2 x 8-pin design and so I'm finding I am PerfCapped by power limit 99% of the time. Core clocks seem to cap out at around 1800-1950 MHz in benchmarks and heavy gaming loads. It's gone higher in some instances but usually when it's a lighter load and the card is able to push silly high frames (though then it usually goes PerfCap No Load).

Obviously, I should have done more research before buying but it's done now and I've had the card a while (so no distance selling regs option to return it). 

Given that I'm on air with no intention of going water cooling is there any real reason to flog this card (probably taking a hit on fees) and get something with 3 x 8-pin like a FTW3 Ultra?

Any help appreciated.


----------



## long2905

toxicnerve said:


> Hi Everyone,
> 
> I've read most of this thread (well skimmed) and I've got a daft question that I suspect will be answered with "It's up to you" but I wanted to ask anyway.
> 
> I have a 3090 Aorus Master. This card is 2 x 8-pin design and so I'm finding I am PerfCapped by power limit 99% of the time. Core clocks seem to cap out at around 1800-1950 MHz in benchmarks and heavy gaming loads. It's gone higher in some instances but usually when it's a lighter load and the card is able to push silly high frames (though then it usually goes PerfCap No Load).
> 
> Obviously, I should have done more research before buying but it's done now and I've had the card a while (so no distance selling regs option to return it).
> 
> Given that I'm on air with no intention of going water cooling is there any real reason to flog this card (probably taking a hit on fees) and get something with 3 x 8-pin like a FTW3 Ultra?
> 
> Any help appreciated.


depends on your use. if you want to bench and get the highest score possible, then yes, swap it for something else. otherwise even the cheapest model is more than enough for gaming. yours is near the top of the 2x8pin cards anyway.


----------



## Rhadamanthys

Z0eff said:


> I'm guessing Dollar Tree is a cheap store in the US?
> 
> What about the EK one? Availability issues?


Been looking at EK blocks recently. Availability seems to be good, at least according to their own stock indicators, which I have no reason to doubt. In general, they do seem to take a bit longer for shipping these days. Their delivery estimations range from 2 to 3 weeks (Europe).


----------



## motivman

toxicnerve said:


> Hi Everyone,
> 
> I've read most of this thread (well skimmed) and I've got a daft question that I suspect will be answered with "It's up to you" but I wanted to ask anyway.
> 
> I have a 3090 Aorus Master. This card is 2 x 8-pin design and so I'm finding I am PerfCapped by power limit 99% of the time. Core clocks seem to cap out at around 1800-1950 MHz in benchmarks and heavy gaming loads. It's gone higher in some instances but usually when it's a lighter load and the card is able to push silly high frames (though then it usually goes PerfCap No Load).
> 
> Obviously, I should have done more research before buying but it's done now and I've had the card a while (so no distance selling regs option to return it).
> 
> Given that I'm on air with no intention of going water cooling is there any real reason to flog this card (probably taking a hit on fees) and get something with 3 x 8-pin like a FTW3 Ultra?
> 
> Any help appreciated.


if you were going water (Bykski makes a waterblock for the Aorus Master btw), I would say shunt mod it and flash the 520W bios. but if you have no intention of going water, I would suggest getting an ftw3, trio, strix or KP hybrid. Personally, I wouldn't touch a gaming x trio or ftw3, since MSI and evga cheaped out on their hardware. A Suprim X seems to be a good buy also; it has better VRMs than the gaming x trio. Unless you decide to run the 1000W bios daily or shunt mod, you will always run into power limit issues, but at least you will have access to over 520W with both cards above, compared to 390W max with a 2x8pin card.


----------



## Z0eff

I can't seem to find this Optimus FTW3 block anywhere in europe. Ordering from the US will probably take a long time and I'd like to avoid that


----------



## Zogge

+ the 30% on the price for import duty etc to Europe.


----------



## jomama22

Z0eff said:


> I can't seem to find this Optimus FTW3 block anywhere in europe. Ordering from the US will probably take a long time and I'd like to avoid that


Just grab an alphacool block if they are available. Seem to be performing really well for many cards. The ek blocks seem pretty hit or miss this generation. I don't have coil whine issues on mine but that's with their modified thermalpad layout (posted on reddit somewhere) and I had to drill a hole in the backplate to add another place for it to attach to the block so it would actually make contact with the rear top vmem modules (and use 2mm pads instead of 1.5mm).


----------



## Gebeleisis

any active backplates for europeans ?


----------



## Arbustok

Gebeleisis said:


> any active backplates for europeans ?


Aquacomputer or soon Ekwb


----------



## Gebeleisis

AquaComputer is sold out...


----------



## PiotrMKG

Gebeleisis said:


> AquaComputer is sold out...


It didn't make it to the store yet; there are guys who ordered them day one and are still waiting (90+ days now).


----------



## Arbustok

I doubt you'll see it in stock for a while; better to order now and wait.


----------



## Gebeleisis

Or that... Pfff

I might need to improvise something for my watercooling loop by the time they start selling those.

Ram blocks are the best atm

Also found some copper cooling blocks on aliexpress made to order. But they look cheap and not trustworthy


----------



## mirkendargen

Gebeleisis said:


> Or that... Pfff
> 
> I might need to improvise something for my watercooling loop by the time they start selling those.
> 
> Ram blocks are the best atm
> 
> Also found some copper cooling blocks on aliexpress made to order. But they look cheap and not trustworthy


4 or 6 DIMM RAM block from Alphacool or Bykski/Barrow via Aliexpress and some thermal epoxy. DIY active backplate, available worldwide


----------



## Gebeleisis

mirkendargen said:


> 4 or 6 DIMM RAM block from Alphacool or Bykski/Barrow via Aliexpress and some thermal epoxy. DIY active backplate, available worldwide


That is my plan! 🎉


----------



## Falkentyne

@bmgjet @HyperMatrix @chispy I think I found out what controls GPU Chip Power in GPU-Z (also known as GPU Core NVVDD Input Power (sum)) in HWinfo64.

GPU Chip Power = GPU Misc1 Input Power + GPU Misc3 Input Power + GPU Core NVVDD1 Input Power (sum).

GPU Core NVVDD2 Input Power(sum) = GPU Chip Power + GPU Sram Input Power (sum)

GPU Core NVVDD Output Power (sum) = GPU Core NVVDD Output Power (the very bottom one in Hwinfo64) + GPU SRAM Output Power.

Not sure if anyone cares about this.
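As a sanity check, those relationships can be written out with numbers (the sensor names follow HWiNFO64; all wattages here are invented purely for illustration):

```python
# Encoding the rail relationships above. Values are made up; the point
# is only that the sums should reconcile the same way the sensors do.
misc1_in = 40.0    # GPU Misc1 Input Power
misc3_in = 25.0    # GPU Misc3 Input Power
nvvdd1_in = 280.0  # GPU Core NVVDD1 Input Power (sum)
sram_in = 15.0     # GPU SRAM Input Power (sum)

# GPU-Z "GPU Chip Power" = Misc1 + Misc3 + NVVDD1 (sum)
gpu_chip_power = misc1_in + misc3_in + nvvdd1_in

# GPU Core NVVDD2 Input Power (sum) = GPU Chip Power + SRAM Input Power
nvvdd2_in = gpu_chip_power + sram_in

print(gpu_chip_power)
print(nvvdd2_in)
```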


----------



## Nico67

motivman said:


> so using the image below as a reference, I would have to isolate the 3 "yellow" cables in the picture and clamp them? Thinking this might be a little hard for me to do since my cable is sleeved all the way up to the connector.... we shall see though, I just bought this clamp meter off of amazon.
> 
> 
> 
> https://www.amazon.com/gp/product/B0721MKXBC/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1


Yep, as jomama22 said, only clamp the three yellow wires, 1, 2 and 3. I used a spare cable and separated the wires; it helps to have them well separated to get a clamp meter on. The one I used was from work, as we get asked to check rack or router power draw for card adds / power supply limits or cooling limits. Not sure what brand it is, but nothing we have is cheap


----------



## changboy

CPU water block - water-cooling head with copper plate - integrated sealing ring, for 4 memory heatsinks: Amazon.ca: Electronics

www.amazon.ca


----------



## Trevbev

motivman said:


> anyone care to ponder on how this might have happened?????
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Faulty RTX 3090 | ASUS ROG STRIX OC 24GB RTX 3090 | Faulty | eBay
> 
> 
> EK water block and backplate installed and included with sale.
> 
> 
> 
> www.ebay.com


I think it might be this guy
There were more pictures earlier.


----------



## Falkentyne

I think the rail that is limiting a bunch of cards to ~600W when shunted is

GPU Core (NVVDD) Output Power (sum), which is the sum of SRAM Output Power (not the problem) + the bottommost GPU Core NVVDD Output Power.

In fact, I'm almost sure of it. It's the only rail that seems to ignore shunt mods, and it ends up reporting MORE power than "Total Board Power"...


----------



## jomama22

Trevbev said:


> I think it might be this guy
> There were more pictures earlier.


Yeah, it didn't make sense that it would explode like that even with a 1000w bios, especially on the MSVDD rail. He could probably have gotten it fixed for pretty cheap. The pads probably ripped off, but nothing too crazy to fix. 

To glue them back on... like what?


----------



## Falkentyne

So this guy was able to bypass the 580W internal power limit from shunt mods somehow.
I attempted to PM him to ask which mod combination he did which allowed it. I'm 100% sure it's that power rail I mentioned above.






I broke it! | RTX 3090 Water block install gone wrong


Having just seen the ebay listing, it's not actually as bad as I expected from some of the descriptions. The trace damage to the PCB doesn't look that extreme and most of the damaged components can be sourced from RS Components for a few pounds. I've seen worse examples on other forums that came...




www.overclockers.co.uk


----------



## PiotrMKG

Arbustok said:


> I doubt you see it in stock for a while, better order now and wait


Ordered like 10 days ago, patiently waiting in line


----------



## ShadowYuna

Final setup for my Xtreme 3090. Very satisfied with Alphacool block which is way better than Bykski.

No more upgrade until 4000 series.


----------



## gfunkernaught

ShadowYuna said:


> Final setup for my Xtreme 3090. Very satisfied with Alphacool block which is way better than Bykski.
> 
> No more upgrade until 4000 series.


Temps?! I'm waiting on the ek block.


----------



## ShadowYuna

gfunkernaught said:


> Temps?! I'm waiting on the ek block.


Boost clock stable at 2100 at 4K resolution with a max temp of 49~50C using a Mora 360 Pro. All fans running at 700rpm. It's summer in Australia now.

My target temp is below 55C with very quiet fan noise, so it is higher than others' in the forum.


----------



## schoolofmonkey

long2905 said:


> on stock air cooler i have to undervolt to [email protected] to keep it at around 70c
> 
> its much better on water obviously but i havent been able to crack 15000 port royal yet.


It's interesting. I did manage to undervolt and temps are a tad better, but clocks drop to 1680MHz; it never went that low even when I hit 83C...
Clocks jump around so much, even when the temps have equalized.

Still running the Inno3D bios though.
Which, from what I'm seeing, is odd: even at defaults, if I drop the temps by redirecting my aircon onto the card, the clocks go down, not up.
So I got temps to sit at 80C with everything at default and it dropped to 1670MHz; removed the cooling, and it shot back to 83C at 1755MHz... Huh????


----------



## gfunkernaught

ShadowYuna said:


> Boost clock stable at 2100 at 4K resolution with a max temp of 49~50C using a Mora 360 Pro. All fans running at 700rpm. It's summer in Australia now.
> 
> My target temp is below 55C with very quiet fan noise, so it is higher than others' in the forum.


Hmm, that seems high even for high ambient temp and low fan speeds no? Still impressive though, based on what I'm reading about 3090s in general. Hopefully I will be able to hit 2100mhz with my trio once I get a good block.


----------



## bmgjet

schoolofmonkey said:


> It's interesting. I did manage to undervolt and temps are a tad better, but clocks drop to 1680MHz; it never went that low even when I hit 83C...
> Clocks jump around so much, even when the temps have equalized.
> 
> Still running the Inno3D bios though.
> Which, from what I'm seeing, is odd: even at defaults, if I drop the temps by redirecting my aircon onto the card, the clocks go down, not up.
> So I got temps to sit at 80C with everything at default and it dropped to 1670MHz; removed the cooling, and it shot back to 83C at 1755MHz... Huh????


I'd say something is wrong with the thermal compound job on your card, since feeding it AC should have made a massive difference to temps: you would have gone from ~30C ambient hitting the cooler to ~20C from the AC, so you'd expect at least a 10C drop, not just 3C.


----------



## schoolofmonkey

bmgjet said:


> I'd say something is wrong with the thermal compound job on your card, since feeding it AC should have made a massive difference to temps: you would have gone from ~30C ambient hitting the cooler to ~20C from the AC, so you'd expect at least a 10C drop, not just 3C.


I think I worked it out: it's the Inno3D bios.
Just flashed the Gigabyte Gaming OC 390W bios and left everything at default: way higher and more stable clocks, a much more aggressive fan curve, and the card still hit 83C but with none of the strangeness of the Inno3D bios.
One thing though: with the Gigabyte bios I can't read the VRAM temps like I could with the default bios.

TuneIT flashed it for me; seriously, it just let me pick the bios file I wanted to use and flashed it.


----------



## long2905

Yeah, you can flash any bios save for the 1000W one with the TuneIT software. I did have to repaste mine with TFX to get better temps.


----------



## schoolofmonkey

long2905 said:


> Yeah can flash any bios save for the 1000w one with the tuneit software. I did have to repaste mine with tfx to get better temp.


I'll give it a few days and then repaste it with some Kryonaut.
Saw 1900Mhz in the Hitman 3 benchmark, didn't see that before with the Inno3D bios.


----------



## Celcius

I finally had a chance to upgrade my videocard from an EVGA GTX 1080 Ti SC2 to an EVGA RTX 3090 FTW3.
Performance is amazing and here are my benchmarks before and after:
3dmark timespy
1080Ti: 10129
3090: 18428

3dmark firestrike
1080Ti: 24525
3090: 33199

ff14 shadowbringers benchmark
1080Ti: 9525
3090: 17408

ff15 benchmark
1080Ti: 4732
3090: 8790

superposition
1080Ti: 5752
3090: 12899


----------



## galsdgk

Rhadamanthys said:


> Anyone with a Gaming X Trio have an EK block on it? Seen bad reviews about massive coil whine with the Strix block. This is one of the things that keep me from ordering a Strix, actually.
> 
> Anyway, I was wondering if there are similar issues with the Trio (block).


I've got the gaming x trio and an ek block and backplate... trouble is I haven't finished setting up my loop yet. lol. I'll finish probably by next weekend.


----------



## galsdgk

motivman said:


> I want a card where I can pull upward of 750Watts with no issues, looks like the strix might be the only choice, since trio has garbage vrms and stupid 20A fuses.


I've pulled 675 watts no problem on my trio in Time Spy Extreme.


----------



## jomama22

motivman said:


> I want a card where I can pull upward of 750Watts with no issues, looks like the strix might be the only choice, since trio has garbage vrms and stupid 20A fuses.


To put it into perspective, using my evc2 @ 1.3v in tse gt2, I wouldn't pull over 680w on my strix.

It's really down to chip quality how much power you are going to pull.


----------



## motivman

jomama22 said:


> To put it into perspective, using my evc2 @ 1.3v in tse gt2, I wouldn't pull over 680w on my strix.
> 
> It's really down to chip quality how much power you are going to pull.


run timespy extreme and report back...lol


----------



## motivman

So I got the clamp meter from Amazon, flashed the XOC bios to my 2x8-pin reference card, and ran Time Spy Extreme on a loop with +120/+800. Happy to report the power pull from both 8-pins is about equal. The max I recorded was about 22 amps on each pin. Not sure how to accurately measure voltage (yet), but GPU-Z was showing the voltage at 12V, so essentially both 8-pins maxed out at about 264W each.
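For anyone repeating this measurement, the sanity check is just P = V x I per connector; a minimal sketch, assuming the ~12 V GPU-Z reading is trustworthy (a clamp meter only gives current):

```python
# Estimate per-connector power from a clamp-meter current reading.
# Assumes the rail voltage reported by GPU-Z (~12 V) is accurate; the
# clamp meter itself only measures current.

def connector_power(current_a: float, voltage_v: float = 12.0) -> float:
    """Power in watts through one connector: P = V * I."""
    return voltage_v * current_a

per_pin = connector_power(22.0)   # ~264 W through one 8-pin
total = 2 * per_pin               # two 8-pin connectors on the reference card
print(f"{per_pin:.0f} W per 8-pin, {total:.0f} W total")
```

For scale, the nominal rating of an 8-pin PCIe connector is 150 W, so 264 W is well past spec, which is why cable quality and temperature are worth checking.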


----------



## Falkentyne

motivman said:


> So I got the clampmeter from amazon. Flashed the xoc bios to my 2X8 pin reference card, and ran timespy extreme on a loop, with +120/+800. Happy to report power pull from both 8 pins are about equal. Max I recorded was about 22 amps from both pins. not sure how to accurately measure voltage (yet), but GPU-z was showing voltage at 12V, so essentially both 8 pins maxed out at about 264W.


That's what we were trying to tell you all along. You were never pulling 400W from a connector. The power draw on these cards is _hard wired_. The shunts simply deal with the power reporting. It's why the 3x8 pin FTW3 cards that are being throttled by reaching PCIE limit too soon can't be fixed by a vbios, except by one that allows even more power to be drawn by "ignoring" one of the 8 pins (XC3 bios, etc). Also jomama22 said he ran tse for you already. tse=time spy extreme, gt2=graphics test 2.
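As an aside on the shunt mods discussed throughout this thread, the reason a stacked shunt changes the reported (not the actual) power is plain Ohm's-law arithmetic; a minimal sketch with illustrative values (the 5 mOhm stock shunt resistance is an assumption, not taken from any specific card):

```python
# Why a parallel ("stacked") shunt fools the power controller: the
# controller computes current from the voltage drop across what it
# assumes is the stock shunt resistance. Paralleling an equal shunt
# halves the real resistance, so the same true current produces half
# the expected drop, and reported power is halved too.

def parallel_r(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

R_STOCK = 0.005           # ohms, assumed stock current-sense shunt
true_current = 30.0       # amps actually flowing through the rail

r_effective = parallel_r(R_STOCK, R_STOCK)   # half the stock resistance
v_drop = true_current * r_effective          # real drop across the pair
reported_current = v_drop / R_STOCK          # what the controller computes
print(f"true {true_current:.0f} A, reported {reported_current:.0f} A")
```

By the same logic, any rail whose shunt is left unmodded keeps reporting true power, which may be how a single rail can still end up limiting an otherwise shunted card.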


----------



## Nico67

motivman said:


> So I got the clampmeter from amazon. Flashed the xoc bios to my 2X8 pin reference card, and ran timespy extreme on a loop, with +120/+800. Happy to report power pull from both 8 pins are about equal. Max I recorded was about 22 amps from both pins. not sure how to accurately measure voltage (yet), but GPU-z was showing voltage at 12V, so essentially both 8 pins maxed out at about 264W.


Awesome, good to see confirmation of what I was seeing. I just used the GPU-Z rail minimums, as that's what the voltage is at high load.


----------



## Cpgeek

Lord of meat said:


> Yes!
> make sure u use afterburner and set fan curves.
> My settings:
> power limit 107
> core clock +60
> mem clock +730
> i set the fan curve to 50 and when the temp hits 65c it goes to 100.
> hope this help ya.


When using the 3090 Suprim X bios on the Gaming X Trio, has it been reasonably stable? Do all of your display outputs work? (I've heard people were having issues with the Asus and EVGA bios disabling one of the display outputs.) And while it doesn't really matter in the end, does the RGB still work? (I'm probably going to get a waterblock for my card anyway, but it'd still be nice to know.)
Thank you very much!


----------



## jomama22

motivman said:


> run timespy extreme and report back...lol


What do you think tse gt2 is? Lol. I'll just leave this here:









I scored 12 651 in Time Spy Extreme

AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## Nico67

Tried HWiNFO 6.42, and my mem was 38C idle @ 10000MHz (20000 doubled) with XOC, and 52C max in BDO after some hours. It's harsh on the GPU, but I don't think it uses a lot of mem. GPU was max 34C at 2055MHz / 0.993V.


----------



## Gebeleisis

ShadowYuna said:


> Final setup for my Xtreme 3090. Very satisfied with Alphacool block which is way better than Bykski.
> 
> No more upgrade until 4000 series.


Is there any temp improvement from the Bykski?


----------



## ShadowYuna

gfunkernaught said:


> Hmm, that seems high even for high ambient temp and low fan speeds no? Still impressive though, based on what I'm reading about 3090s in general. Hopefully I will be able to hit 2100mhz with my trio once I get a good block.





Gebeleisis said:


> Is there any temp improvement from the Bykski?


Yes, the Alphacool block keeps better temps than the Bykski. The Alphacool block also contacts the coils, so there's no more coil whine on my Xtreme. Quiet as can be.


----------



## PhuCCo

I apologize if this has already been covered, but is there a way to force Afterburner to keep the fans at 0% at all times?
My 3090 FE is water-cooled, but the fan percentage is stuck at 30% no matter what I do with the curve. The strange thing is that it sometimes does actually go to 0%, but it never stays.
Is this a VBIOS limitation, or can I edit something in Afterburner? Would I gain any power-limit headroom if the fans actually read 0% rather than 30%? Thank you


----------



## Pepillo

PhuCCo said:


> I apologize if this has already been covered, but is there a way to force Afterburner to keep the fans at 0% at all times?
> My 3090 FE is water-cooled, but the fan percentage is stuck at 30% no matter what I do with the curve. The strange thing is that it sometimes does actually go to 0%, but it never stays.
> Is this a VBIOS limitation, or can I edit something in Afterburner? Would I gain any power-limit headroom if the fans actually read 0% rather than 30%? Thank you


The fan percentage shouldn't matter; if there are no fans connected, there is no consumption.


----------



## PhuCCo

Pepillo said:


> The fan percentage shouldn't matter; if there are no fans connected, there is no consumption.


Makes sense. That's what I had thought, I just wasn't sure if maybe the card would see the fans "running" at whatever percentage and take away some of my power budget for allocation. Thank you for the response


----------



## Rhadamanthys

galsdgk said:


> I've got the gaming x trio and an ek block and backplate... trouble is I haven't finished setting up my loop yet. lol. I'll finish probably by next weekend.


Please report back with results when you're done.

EK pushed my Trio block back to end of February. 😩 These are my options (haven't been able to snag a card yet):

Trio, wait for the EK block, but risk it being delayed further
Trio, get an Alphacool block (nope these aren't in stock either)
Strix (OC or non, suppose it won't matter), get the EK block, but risk unbearable coil whine according to reports
Strix, get an Alphacool block
In any case, I'll be mounting vertically. Anyone know if that has any effect on coil whine? What if I just went without backplate?


----------



## Trevbev

Anyone got a FTW3 Hybrid?
What temps are you getting?


----------



## GoldCartGamer

I have an opportunity to buy a Gigabyte Aorus Master card. I'm unsure of the revision until it arrives. How does it compare to the Asus TUF?


----------



## WilliamLeGod




----------



## Thanh Nguyen

ShadowYuna said:


> Yes Alphacool block keeps better temp than Bykski. Also the Alphacool block conduct the coil so no more coil whine on my Xtreme. Quite as silent


It's the Bykski design: you can't screw the block all the way down. I used double washers and the temp is a lot better. It was a 20C delta at 500W load before; after the double washers it's a 12C delta.


----------



## Celcius

Trevbev said:


> Anyone got a FTW3 Hybrid?
> What temps are you getting?


I have a FTW3 non-hybrid and I’m getting 75C-76C on full load during benchmarks with vsync turned off and the card going full throttle


----------



## Beagle Box

ShadowYuna said:


> Final setup for my Xtreme 3090. Very satisfied with Alphacool block which is way better than Bykski.
> 
> No more upgrade until 4000 series.
> 
> ---snip--
> 
> 
> View attachment 2476434


The block looks nice. Got an Alphacool block arriving today.
What were your Time Spy scores like before the water block?


----------



## long2905

WilliamLeGod said:


> View attachment 2476510


do you have more details on this?


----------



## Trevbev

Celcius said:


> I have a FTW3 non-hybrid and I’m getting 75C-76C on full load during benchmarks with vsync turned off and the card going full throttle


Thanks. 
Anyone with the hybrid model/hybrid kit?


----------



## motivman

Conclusion time, boys... someone should send me a drink or something for these two months of research and info gathering I've done to save a lot of people headaches... lol

Guess the best card to get if you are watercooling? My vote goes to any card with the reference PCB. Why, you may ask? Because the Nvidia reference design really does not cheap out on any components: VRMs that can handle 900W, no coil whine, and a short PCB with short waterblocks, so very good compatibility with a lot of cases. You can get a reference card and DEFEAT literally all power limits if you shunt mod and run the 1000W bios; again, the only requirement is watercooling. This path will not work on air cooling because of too much heat. Yes, the 1000W bios is safe if you PROPERLY install your waterblock and use good aftermarket thermal pads (e.g. Fujipoly).

On any 3-pin bios, the reference PCB will show a huge difference between pin 1 and 2. Do not look at HWiNFO or GPU-Z; these values are false. Both pins pull about the same power (verified with my clamp meter). For example, in Time Spy Extreme I measured on average 22A max on both 8-pins, which translates to about 265W each. I looped Time Spy Extreme GT2 for 3 hours last night and my card survived; the cables did not even get hot to the touch and are intact, no melting here, sir! GPU temp went up to 54C (usually I sit at around 45-47C) and memory temp went up to 72C (usually it maxes at 62C in other games). If my card can loop a load as heavy as TSE GT2 for 3 hours, the 1000W bios is safe. I set the max power limit in Afterburner to 70% just in case, but my max power draw per HWiNFO was around 60%.

Even if you do not shunt the card (I still think shunting is a good idea, unless you are worried about warranty...), running the 1000W bios on an unshunted reference card will give you at least 550W. However, the benefit of shunting the reference card is that you can run other bioses, like your stock bios, the 390W bios, or even the EVGA 500W or 520W bios, and still have access to over 500W. Even though the 1000W bios is safe, I think you should only run it for benching.
In fact, the 520W bios performs better than the 1000W bios for me in all cases.

Best card if you are on air cooling? My vote goes to the Gaming X Trio (only run the 520W bios max) or maybe the Strix (you can run the 1000W bios (if you watercool it) or the 520W bios, but the coil whine, ugh)... JUST DON'T buy the Strix if you watercool. What is the point of watercooling if you listen to coil whine all day? The reference card is a better value and will provide a better watercooling experience. It will perform just as well as the Strix and will be quiet as a mouse... lol

Best card for extreme overclocking? The Kingpin on LN2, the KP Hybrid, or a KP with a waterblock (if you can find one).

At this point, I think the reference design is a better pick than the Founders Edition. You cannot flash any other bios on Founders cards, they are hard to disassemble, and waterblocks for them are also very expensive compared to reference-design blocks.

Again, I do not think the Strix or Gaming X Trio are worth it if you are gonna watercool; just get the reference card, shunt mod it (very easy), and run the 520W bios daily. You will get a card with good VRMs, no coil whine, and a short, compact, sexy block and PCB that can fit literally any case on the market.

In conclusion, these are the only 3090 cards worth getting...

1. I want to watercool and have blazing performance second only to the KP cards = reference PCB card with the 1000W bios, or reference PCB, shunt mod, and the 520W bios
2. I want to run air cooling but also get good performance = Suprim X, Gaming X Trio, or Strix, and flash the 520W bios
3. I want to break records = get a KP card

Nothing else is worth getting IMHO. Hope this does not offend anybody...
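Since the Afterburner power-limit slider is just a percentage of the flashed bios's default power target, the numbers above are easy to sanity-check; a quick sketch, assuming the XOC bios's default target really is 1000 W:

```python
# Convert an Afterburner power-limit percentage into watts, relative to
# the flashed vbios's default power target (assumed 1000 W here).

BIOS_TARGET_W = 1000.0  # assumed default target of the XOC bios

def slider_to_watts(percent: float, target_w: float = BIOS_TARGET_W) -> float:
    """Watts allowed at a given power-limit slider percentage."""
    return target_w * percent / 100.0

cap = slider_to_watts(70.0)        # slider at 70% -> 700 W ceiling
observed = slider_to_watts(60.0)   # HWiNFO showed ~60% -> ~600 W actual draw
print(f"cap {cap:.0f} W, observed ~{observed:.0f} W")
```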


----------



## des2k...

Thanh Nguyen said:


> Its Bykski design that you cant screw the block all the way down. I use double washer and the temp is a lot better. Its 20c delta at 500w load before and after double washer its 12c delta.


I also used double washers for my EK block, but I didn't tighten them too much this time.
It's about a 14-15C delta up to 500W; close to 600W, the delta will climb to 19-20C.

I noticed Zotac doesn't include the back retention clip. I'm sure this would add some pressure, but I can't find where to buy it for the RTX 3090.







----------



## motivman

des2k... said:


> I also used double washer for my ek block but I didn't tighten them too much this time.
> It's about 14c,15c delta up to 500w. Close to 600w delta will climb 19c,20c.
> 
> I noticed Zotac doesn't include the back retention clip. I'm sure this will add some pressure but I can't find where to buy this for rtx 3090.


I can confirm your finding about the EK block: with 500W, my delta is 14-16C, and while looping Time Spy Extreme last night with the 1000W bios, which was drawing a little over 600W, my delta rose to 20C.
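Those deltas collapse into one mount-quality number, the core-to-coolant thermal resistance R = delta-T / P; if R climbs with load, the mount (pressure, paste spread) rather than the loop is the likely limit. A rough sketch using the figures quoted above:

```python
# Core-to-coolant thermal resistance as a mount-quality metric: R = dT / P.
# The deltas are the ones reported in-thread for an EK block with doubled
# washers; a good, flat mount should keep R roughly constant with load.

def thermal_resistance(delta_c: float, power_w: float) -> float:
    """Thermal resistance in degrees Celsius per watt."""
    return delta_c / power_w

r_500 = thermal_resistance(15.0, 500.0)   # ~0.030 C/W at 500 W
r_600 = thermal_resistance(20.0, 600.0)   # ~0.033 C/W at 600 W
print(f"{r_500:.3f} C/W at 500 W vs {r_600:.3f} C/W at 600 W")
```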


----------



## WilliamLeGod

Tmall Taobao Global: spend less, get more!

Tmall Taobao Global is a cross-border e-commerce platform serving Chinese-speaking consumers in over 200 countries and regions; its core markets include Hong Kong, Macau, Taiwan, Singapore, Malaysia, Australia, and Canada.

m.intl.taobao.com


----------



## des2k...

WilliamLeGod said:


> Tmall Taobao Global: spend less, get more!
> 
> m.intl.taobao.com
> 
> View attachment 2476517


These should have thermal pads; they need cooling. Not sure if it matters since he's below 400W. Temps are good because of the low wattage, and the AIO only deals with core heat.

Since this block has coverage to the end of the board, might as well cool this part too.


----------



## mardon

des2k... said:


> I also used double washer for my ek block but I didn't tighten them too much this time.
> It's about 14c,15c delta up to 500w. Close to 600w delta will climb 19c,20c.
> 
> I noticed Zotac doesn't include the back retention clip. I'm sure this will add some pressure but I can't find where to buy this for rtx 3090.


Hi, can you explain the double washers a little more? Is it simply doubling up on the washers on the back of the PCB before screwing down?


----------



## galsdgk

Rhadamanthys said:


> Please report back with results when you're done.
> 
> EK pushed my Trio block back to end of February. 😩 These are my options (haven't been able to snag a card yet):
> 
> Trio, wait for the EK block, but risk it being delayed further
> Trio, get an Alphacool block (nope these aren't in stock either)
> Strix (OC or non, suppose it won't matter), get the EK block, but risk unbearable coil whine according to reports
> Strix, get an Alphacool block
> In any case, I'll be mounting vertically. Anyone know if that has any effect on coil whine? What if I just went without backplate?


You can get the Bykski block that's in stock on eBay; Cellphone-something is the seller. Or overnight it from AliExpress. Yeah, the Alphacool folks are way behind; I think the estimates are bogus. Performance-PCs should have some Alphacool in stock in a few weeks. ¯\_(ツ)_/¯


----------



## GQNerd

motivman said:


> 1. I want to watercool and having blazing performance only second to the KP cards = Reference PCB card and run 1000W bios or Reference PCB, shunt mod and run 520W bios
> 2. I want to run air cooling, but also get good performance = Suprim X, Gaming x Trio or Strix and flash 520W bios
> 3. I want to break records = get a KP card
> 
> Nothing else is worth getting IMHO. Hope this does not offend anybody....


‘Conclusions’ can only be drawn if you actually test each model, not by aggregating what people report online. That aside, the real *rankings* (from actual testing, minus the 2-pin Ref.) are:

Performance-wise:
KP>>Strix>3pin Reference>2pin Reference>FE

AirCooled:
3pin Reference>Strix>2pin Reference>FE (because it’s damn loud)

Shunted:
Strix>3pin Reference>2pin Reference>FE

*Models tested:*
KP, Strix OC, Gaming Trio, and FE.


----------



## geriatricpollywog

Miguelios said:


> ‘Conclusions’ can only be drawn if you actually tested each model, not based on aggregating what ppl report online.. That aside, the real rankings (with actual testing minus the 2 pin Ref.) are:
> 
> Performance-wise:
> KP>>Strix>3pin Reference>2pin Reference>FE
> 
> AirCooled:
> 3pin Reference>Strix>2pin Reference>FE(cause it’s damn loud)
> 
> Shunted:
> Strix>3pin Reference>2pin Reference>FE
> 
> Models tested:
> KP, Strix OC, Gaming Trio, and FE.


These recommendations fall apart when you take into account the price of the KPE: it's cheaper than the Strix, Suprim X, and Gaming Trio. The only reason to get a non-KP is if you want to watercool and need a block today.

The only other card worth considering is the FE, since it's a legendary drop at Best Buy at $1500.


----------



## GQNerd

0451 said:


> These recommendations fall apart when you take into account the price of the KPE. It's cheaper than the Strix, SuprimX, and Gaming Trio . The only reason to get a non-KP is if you want to watercool and need a block today.
> The only other card worth considering is the FE since its a legendary drop at Best Buy at $1500.


First, these are not recommendations; they are performance rankings based on my testing..
And I didn't consider cost, because this is the "3090 E-peen" thread.. lol

Agreed, the only reason not to buy the KP is saving a few hundred / if you want to watercool *NOW*.

_And I wouldn't grab the FE again, UNLESS it's going under a block and shunted. It's too power-deprived, and the stock cooler is the LOUDEST of the bunch, by far._


----------



## jomama22

Miguelios said:


> ‘Conclusions’ can only be drawn if you actually tested each model, not based on aggregating what ppl report online.. That aside, the real *rankings *(with actual testing minus the 2 pin Ref.) are:
> 
> Performance-wise:
> KP>>Strix>3pin Reference>2pin Reference>FE
> 
> AirCooled:
> 3pin Reference>Strix>2pin Reference>FE(cause it’s damn loud)
> 
> Shunted:
> Strix>3pin Reference>2pin Reference>FE
> 
> *Models tested:*
> KP, Strix OC, Gaming Trio, and FE.


Really, if we're going to get into it like this, it just boils down to the chip-quality lottery anyway. Plenty of reference cards will beat Kingpins if you slap the 1000W bios on them.

The only thing you are buying with anything beyond a reference design is PCB quality/limitations, at the end of the day.

Take a look at the Kingpin thread over at EVGA to see how unbinned those cards are; you still have to get lucky enough to get a good chip, and a PCB/voltage control isn't going to save you from that.

You need to bin chips to find the best of the bunch, no matter what card it is. I went through 2 Strixes to get the chip I have that performs well.

And at that, do you have any performance metrics to show such a thing? I.e. 3DMark scores?


----------



## des2k...

mardon said:


> Hi can you explain the double washers a little more? Is it simply doubling up on the washers on the back of the PCB before screwing down?


Yes, but just on the 4 core screws; that way you can apply a bit more pressure (even less than a 1/4 turn), which helps get even pressure on the big die.


----------



## jura11

Thanh Nguyen said:


> Its Bykski design that you cant screw the block all the way down. I use double washer and the temp is a lot better. Its 20c delta at 500w load before and after double washer its 12c delta.


I haven't tried the double washer method on my block, but I can try it later when I do a coolant change.

Can you please explain the double washer method you mentioned?

Thanks, Jura


----------



## GQNerd

jomama22 said:


> Really, if we're going to get into it like this it just boils down to chip quality lottery anyway. Plenty of reference cards that will beat kingpins if you slap the 1000w bios on it.
> The only thing you are buying with anything beyond a reference design is pcb quality/limitations at the end of the day.
> Take a look at the kingpin thread over at evga to see how unbinned those cards are, you still have to get lucky enough to get a good chip, a pcb/voltage control isn't going to save you from that.
> You need to bin chips to find the best out of the bunch, doesn't matter what card it is. Went through 2 strix to get the chip I have that performs well.
> And at that, you have any performance metrics to show such a thing? I.e. 3dmark scores?


Of course it *comes down to chip quality/lottery*.. and yes, given the limited availability, it seems manufacturers aren't binning much, if at all. Hence why I went through so many cards doing it myself..

As for my results, you can search my posts in this thread; I have a ton of 3DMark scores.. user: gqnerd

While it's true someone can get a great chip even from a Zotac, in my testing, the top of the 3DMark HOF is basically Kingpins and flashed/shunted Strixes.. with a couple of random cards ‘sprinkled’ in between.

I didn't comment with my findings to start a debate; it was in response to how the OP ranked the cards, which I disagree with. To each their own.


----------



## J7SC

...went to the store to 'save some money' on the Arctic P12 PWM fan 5-pack, and I did!  I had been looking for a Strix RTX 3090 OC 3x8-pin for months and had almost given up, but there it was by its lonesome self (it just came into the store this morning). Fortunately, Asus imports directly into Canada, so MSRP and no tariff.

...a couple of quick questions:

1.) My testbench is Win 7 64 Pro; the RTX 3090 should still work, I take it? I just want to test it out; eventually I'm getting a waterblock for it and it will move into a Win 10 system.

2.) Said Win 10 system already has 2x 2080 Ti water-cooled. I think @jura11 has been running 2x 2080 Ti + an RTX 3090? Any issues? I'll add that my 2080 Tis are in NVLink.


----------



## jomama22

Miguelios said:


> Of course it *comes down to chip quality/lottery*.. and yes, due to the limited availability, seems like MFR’s aren’t binning as much if at all. Hence why I went thru so many cards doing it myself..
> 
> As for my results, you can search for my posts in this thread, and I have a ton of 3dmark scores.. user: gqnerd
> 
> While true someone can get a great chip even if it’s Zotac, my testing and the top of the 3dmark HOF is basically Kingpins and Flashed/shunted Strix’s.. couple of random cards ‘sprinkled’ in between.
> 
> Didn’t comment with my findings to start a debate, it was in response to how OP ranked the cards.. which I disagree with. To each their own.


Just pointing out that if a card is flashed with the Kingpin bios, it will report as an EVGA card regardless, hence why you see EVGA for the majority of the HOF.


----------



## GQNerd

jomama22 said:


> Just pointing out that if a bios is flashed with the kingpin bios it will report as an evga card regardless, hence why you see evga for the majority of the hof.


Yes, fully aware that cards report whatever bios they’re running. My tests were with the 520w bios and then 1000w when it became available.

While you can’t pinpoint exactly what card someone is using, a lot of the more known OC’ers are either on this forum, EVGA, or Hwbot. So you can usually find out what they’re using.

Btw, I respect your input as you’re 3 spots ahead of me in the HOF... for now. lol


----------



## geriatricpollywog

J7SC said:


> ...went to the store to 'save some money' on the Arctic P12 pwm fan 5-pack - and I did !  I had been looking for a Strix RTX 3090 OC 3x 8 pin for months and almost gave up - but there it was by its lonesome self (just came into the store this morning). Fortunately, Asus imports directly into Canada, so MSRP - and no tariff.
> 
> ...couple of quick questions:
> 
> 1.) My testbench is Win 7 64 Pro, RTX 3090 should still work I take it ? I just want to test it out - eventually, I'm getting a waterblock for it and it will move into a Win 10 system
> 
> 2.) Said Win 10 system already as 2x 2080 TI w-cooled. I think @jura11 has been running 2x 2080 Ti + RTX 3090 ? Any issues ? I add that my 2080 TIs are in NVLink.
> 
> View attachment 2476569


Nice find! I can’t wait for your performance comparison in FS2020.


----------



## J7SC

0451 said:


> Nice find! I can’t wait for your performance comparison in FS2020.


tx  ...yeah, MSFS 2020 will be interesting, as I run it in SLI-CFR with the 2080 Tis... even if I get a 2nd 3090 Strix OC later (maybe), I don't think RTX 3090s would work with the CFR driver.

btw, any folks here with the Strix 3090 OC on water? What blocks?


----------



## Redlurkeraite

J7SC said:


> MSFS 2020 will be interesting as I run it in SLI-CFR w/ the 2080 Tis...even if I would have a 2n


Bykski 3090 Strix
Idle - 32 Degrees
Load - 65 Degrees
1x 360mm and 1x 560mm, with 3 fans on the 360mm, as I am still waiting on fans for the rest of the rad.

I'm interested in the Phanteks G30; I recently placed an order. It covers the entirety of the GPU, which I like.


----------



## J7SC

Thanks, I will check those out


----------



## motivman

Redlurkeraite said:


> Bykski 3090 Strix
> Idle - 32 Degrees
> Load - 65 Degrees
> 1x 360mm and 1x 560mm with 3 fans on the 360mm as I am still waiting on fans for the rest of the rad.
> 
> I'm interested in the Phanteks G30 recently placed an order. Covers the entirety of the GPU which I like.


good lord, those temps are terrible.... how fast are you running the fans on the 360 rad?


----------



## ShadowYuna

Beagle Box said:


> The block looks nice. Got an Alphacool block arriving today.
> What were your Time Spy scores like before the water block?


Before, it was about the same. Because the Xtreme air cooler performs well, it managed quite a good score even on air. But in real-world gaming the boost won't stay stable, due to heat over time.

With the waterblock it keeps the boost stable at 2100 no matter how long I play.


----------



## ShadowYuna

Thanh Nguyen said:


> Its Bykski design that you cant screw the block all the way down. I use double washer and the temp is a lot better. Its 20c delta at 500w load before and after double washer its 12c delta.


I didn't know about using double washers, but one thing I hate about the Bykski is that the block does not contact the coils on the left and right, which allows a little coil whine.


----------



## Thanh Nguyen

ShadowYuna said:


> Did not know about using double washer but one thing I hate on Bykski is that the block does not contact the coil on left and right which makes littile coil whine.


My card has 0 coil whine.


----------



## motivman

Miguelios said:


> ‘Conclusions’ can only be drawn if you actually tested each model, not based on aggregating what ppl report online.. That aside, the real *rankings *(with actual testing minus the 2 pin Ref.) are:
> 
> Performance-wise:
> KP>>Strix>3pin Reference>2pin Reference>FE
> 
> AirCooled:
> 3pin Reference>Strix>2pin Reference>FE(cause it’s damn loud)
> 
> Shunted:
> Strix>3pin Reference>2pin Reference>FE
> 
> *Models tested:*
> KP, Strix OC, Gaming Trio, and FE.


The Strix is a sexy card, and I lusted after one for a long time, but the reality is that when watercooled, the reference card might be the better buy: no coil whine issues, shorter PCB, and ultimately better compatibility with a lot of cases. You mentioned that the max you could hit with your Strix was about 15.5k with the 1000W BIOS. Framechasers has a highly modified Strix, and he isn't even in the Port Royal HOF. Compare the PR score of my reference card with that of NZZ (he is running a Strix). Even though he is running his 10900K @ 5.6GHz and running 10C cooler, he only beat my score by "37" 3DMark points, or 0.17 FPS, lol. Carillo, who scored 15.8k in PR, is running a reference card from what I gather. Your KP card, which scored 15870, beats my reference card by only 1.09 FPS. At the end of the day, I stand by my recommendation: the KP is the best extreme overclocking card, but for watercooling, the Strix is not better than a good reference card. It really all comes down to silicon lottery, but given that the PCB is shorter on the reference card and it has no coil whine issues, I will choose a reference PCB over the Strix every time for water. Aircooling is another story though; that is where the Strix, Suprim X, and Gaming X Trio really shine, IMHO.



https://www.3dmark.com/pr/451869


- STRIX








I scored 15 634 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com




- Reference PCB


----------



## GQNerd

motivman said:


> Strix is a sexy card, I lusted after one for a long time, but the reality is that when watercooled, the reference card might be the better buy. no coil whine issues, shorter pcb and ultimately better compatibility with a lot of cases. You mentioned that the max u could hit with your strix was about 15.5k with the 1000w bios. framechasers has a highly modified strix, and he isn't even in the port royal hof. compare the pr score of my reference card with that of NZZ (he is running a strix). Even though he is running his 10900k @ 5.6ghz and running 10C cooler, he only beat my score by "37" 3dmark points or 0.17 FPS, lol. Carillo, who scored a 15.8k in PR is running a reference card from what I gather. Your KP card which scored 15870 beats my reference card by only 1.09 FPS. At the end of day, I stand by my recommendation... KP is the best extreme overclocking card, but watercooling wise, the STRIX is not better than a good reference card. It really all comes down to silicon lottery, but the fact that the PCB is shorter on the reference card and it has no coil whine issues, I will choose reference PCB over strix everytime for water... Aircooling is another story though... that is where the STRIX, suprim X and gaming X trio really shines, IMHO.
> 
> 
> 
> https://www.3dmark.com/pr/451869
> 
> 
> - STRIX
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 634 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> - Reference PCB


i like the sound of 200pts higher, rather than just 1.09 fps.. lol

..and I disagree that the Strix is not better than reference cards, especially under water cooling.. Take a look at the power tables and you can see how much more tolerant towards voltage the Strix is than ref design.

At the end of the day this is an overclocking forum, so I’m not here looking for the best deal, I’m looking for the best performance


----------



## DOOOLY

Just a heads-up to MSI Trio 3080/3090 owners: EK shipped my block on Friday, and I will have it on Wednesday.


----------



## jura11

@J7SC 

Yes, I'm running 2x RTX 2080 Tis with an RTX 3090 GamingPro in one loop, although the 2080 Tis aren't in SLI or connected through NVLink. Sadly there is no way to connect the Asus RTX 2080 Ti Strix with the Zotac RTX 2080 Ti AMP: different heights, and there is no flexible NVLink. Plus X570 runs the bottom GPU at x4 only in my case. I use my PC for rendering most of the time...

Congratulations on getting the Asus RTX 3090 Strix OC. I gave up on getting one because of poor availability here; I'm around 500th in the queue hahaha, so I'll probably get mine in summer.

For waterblocks, I personally recommend the Bykski block. On my Palit RTX 3090 GamingPro, which runs a 390W BIOS (and the KPE XOC BIOS for some benchmarks and gaming), temperatures are under 40°C, usually 36-38°C. One recommendation: get a RAM waterblock for the VRAM on the back of the GPU, or try a 120mm or 80mm fan as I do; my VRAM temperatures stay under 60°C during heavy workloads, rendering, or even heavy gaming. A RAM waterblock will probably lower the core temperature as well as the VRAM temperature.

Aquacomputer Kryographics seem to have some issues with their waterblocks right now; otherwise I would choose that block. I have tested their RTX 2080 Ti block with active backplate and love it, it performs great in my loop.

EKWB I won't touch until they fix their issues. I have their Strix waterblock and went through a small hell last time: the GPU won't boot if you tighten the screws down, and the backplate is literally hot to the touch on the EKWB RTX 2080 Ti Strix block.

Phanteks waterblocks I have tested only on the 2080 Ti and 1080 Ti, and they have always been middle of the pack: not bad, but not the best.

I wish Heatkiller or Optimus would release their blocks for the Strix, but I think Heatkiller won't, and Optimus probably in summer hahaha.

For the money the Bykski is a great waterblock; it performs quite well in my loop, and many people here are running their blocks and are happy. I would expect temperatures in your case in the same range as on my loop.

Arctic Cooling P12 or P14 fans are great there. I run them in my loop and am very happy: on the MO-RA3 360mm they run at full speed and I can't hear them, and on the other radiators they run at 750-800RPM and are still quiet. Great fans for the money.

Hope this helps and best of luck to you 

Thanks, Jura


----------



## J7SC

jura11 said:


> @J7SC
> 
> Yes there I'm running 2*RTX 2080Ti's with RTX 3090 GamingPro in one loop, although I don't running 2080Ti's in SLI or they're not connected through the NvLink, sadly there is no way how to connect Asus RTX 2080Ti Strix with Zotac RTX 2080Ti AMP, different height and there is no flexible NvLink plus X570 in my case running bottom GPU in x4 only, I use my PC for rendering most of the time...
> 
> Congratulations getting Asus RTX 3090 Strix OC,I resigned getting one because of poor availability here and I'm around 500 in queue hahaha and when I get my one probably in summer
> 
> For waterblocks, I personally recommend Bykski waterblock, on my Palit RTX 3090 GamingPro which is running 390W BIOS and for some benchmarks and gaming I use KPE XOC BIOS too and temperatures are under 40°C, usually they're in 36-38°C, just one recommendation get RAM waterblock for VRAM which are at back of GPU or you can try use 120mm or 80mm fan as I'm using and VRAM temperatures are under 60°C during the heavy workload or rendering or even heavy gaming, RAM waterblock probably will lower as well core temperatures and VRAM temperature
> 
> Seems Aquacomputer Kryographics right now have some issues with their waterblocks and personally I would choose that block, I have tested their RTX 2080Ti block with active backplate and love it, performs great on my loop
> 
> EKWB I won't touch until they fix their issues, have their Strix waterblock and went through the small hell last time, GPU won't boot if you tighten screw down and backplate literally is hot on touch on EKWB RTX 2080Ti Strix waterblock
> 
> Phanteks waterblock, I tested only on 2080Ti and 1080Ti too and they been always in middle pack, not bad but not the best
> 
> Wish Heatkiller or Optimus released their blocks for Strix but think Heatkiller won't release block for Strix and Optimus probably in summer hahaha
> 
> Bykski for money is great waterblock, performs quite well on my loop and many people here are running their blocks and they're happy, I would expect in your case temperatures in same range as on my loop
> 
> Arctic Cooling P12 or P14 fans are great there, running them on my loop and very happy, on MO-ra3 360mm they're running at full speed and I can't hear them and on other radiators they're running in 750-800RPM and still they're quiet fans, great fans for money
> 
> Hope this helps and best of luck to you
> 
> Thanks, Jura


Wow, lots of real-world useful details, as always! Thanks Jura11!

For the record, I'm in the software business and apart from some development and back-up servers, I have an additional 4 systems running in my home office (which is getting quite a work out these days, what with Covid 19 beer virus)....Two of those four are on big 4K monitors...and while one is well-served by the 2x w-cooled 2080 Tis, the other 4K system has been running on 2x 980 Classifieds..._4 GB effective VRAM is not so hot for 4K / 55 inch IPS HDR_...

...thus the Strix 3090 OC...just not sure yet whether I just integrate the 3090 OC into the 2x 2080 Ti dedicated GPU loop, or do a true free-standing system on the 2nd 4K monitor...As to Bykski, I have no aversion to their w-cooling equipment, even if I usually go for Heatkiller, Aquacomputer etc...


----------



## changboy

Will memory at 4500MHz vs 3200MHz increase my Port Royal result?


----------



## Testier

Can someone post a high res PCB front picture of a 3090 XC3 Ultra? I need to check a few components from there.

Thanks


----------



## dante`afk

SoldierRBT said:


> I like to use ATI tool to test silicon. It's a low wattage test to see how far you can push 1.10v voltage point. I've tested 3 cards so far.
> 3080 FTW3 1.10v 2355MHz - 32C holds 2340MHz
> 3080 FE 1.10v 2280MHz - 32C holds 2265MHz
> 3090 KPE 1.10v 2325MHz - 32C holds 2310MHz


I think my FE is pretty good










i'm getting the evc2 the following days.


----------



## bmgjet

Testier said:


> Can someone post a high res PCB front picture of a 3090 XC3 Ultra? I need to check a few components from there.
> 
> Thanks


----------



## bmgjet




----------



## ShadowYuna

Thanh Nguyen said:


> My card has 0 coil whine.


Well, that's good for you, but no card has zero coil whine; your ears just can't hear it. Since a graphics card is full of electrical components, there is coil whine on 100% of high-end cards.


----------



## schoolofmonkey

ShadowYuna said:


> Well thats good for you. but there is no such card has 0 coil whine. Your ear just can not hear it. Since graphic card has all electrical component there is coil whine 100% on all high end card.


I can't hear any from the Inno3D, and I have tried listening for it. Coil whine and I are worst enemies; I've had many cards I returned because of it, one you could hear across the room. Still haven't heard that distinctive sound from the Inno3D... yet...


----------



## Bobbylee

Has anyone tried mining using the 1000W KP BIOS? For some reason it will not work for me, just 0 MH/s. Other BIOSes work fine. I'm using Phoenix to test this. What do you guys think is causing this?


----------



## bmgjet

Bobbylee said:


> Has anyone tried mining using 1000w kp bios? For some reason it will not work for me. Just 0mh/s. Other bioses work fine. I’m using Phoenix to test this. What do you guys think is causing this?


It doesn't have the compute P-state, so the card just sits there. You need to force it to P-state 0.


----------



## ShadowYuna

schoolofmonkey said:


> I can't hear any from the Inno3d, I have tried listening for it, coil whine and I are worst enemies, had many card I have return because of it, one you could hear across the room, still haven't heard that distinctive sound from the Inno3D.....yet...


If you undervolt the GPU, the coil whine can't be heard, and with an air cooler it can be covered by fan noise as well. So if you don't have coil whine, try not to find it, because once you hear it, it makes you crazy.


----------



## Bobbylee

bmgjet said:


> It doesnt have the compute p-state so card just sits there. You need to force it to p state 0.


Thanks for the rapid response. Do you know of any good resources that will teach me how to do this? And I should be doing this in Nvidia inspector right?


----------



## ALSTER868

dante`afk said:


> I think my FE is pretty good


Tried mine; seems 2325MHz is possible. 2340 crashes after 2 minutes.


----------



## geriatricpollywog

Bobbylee said:


> Has anyone tried mining using 1000w kp bios? For some reason it will not work for me. Just 0mh/s. Other bioses work fine. I’m using Phoenix to test this. What do you guys think is causing this?


Comments like this make me regret sharing the 1000W BIOS.


----------



## Bobbylee

0451 said:


> Comments like this that make me regret sharing the 1000w bios.


I would like to run at a 70% power limit on my 2x8-pin, and then a different OC for gaming. I'm aware of the risks, but thanks for sharing your feeling of superiority.


----------



## Bobbylee

Bobbylee said:


> I would like to run at 70% power limit on my 2x8 pin and then a different oc for gaming. I’m aware of the risks but thanks for sharing your feeling of superiority. I’m just here trying to learn.


----------



## schoolofmonkey

ShadowYuna said:


> If you undervolt the GPU coil whine can not be hear also with air cooler it can be cover by fan noise as well. So if you dont have coil whine try not to find it cause once you hear it , it makes you crazy.


Yeah I know mate, once you hear it time to sell the card 
Been messing with undervolting, you've probably seen my other posts on OCAU (Supersize).

I managed to get the card to stay around 70C with 69% fan, locked at 1770MHz (fine for now), which made it a lot quieter, and I still didn't hear the annoying whine/buzz. The only game that found instability was Quake II RTX, so I had to raise some of the lower-end voltages.


----------



## geriatricpollywog

Bobbylee said:


> I would like to run at 70% power limit on my 2x8 pin and then a different oc for gaming. I’m aware of the risks but thanks for sharing your feeling of superiority


That you have a 2x8-pin and are OK with 700W only deepens my regrets. Consider learning more about your hardware before destroying it. Consider learning more about mining too. I am at 280W and 104 MH/s. Is an extra 15 MH/s worth melting an 8-pin or blowing a fuse on a card that has fuses?
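
The efficiency trade-off here is easy to put in numbers. A quick sketch (Python; the wattage and hashrate figures are just the operating points quoted in this thread, not measurements of mine):

```python
# Compare Ethash mining efficiency (MH/s per watt) for a few
# operating points quoted in this thread.

def efficiency(mh_per_s: float, watts: float) -> float:
    """Hashrate delivered per watt of board power."""
    return mh_per_s / watts

points = {
    "conservative (280W)": (104, 280),
    "stock-ish   (320W)":  (124, 320),
    "undervolted (270W)":  (118, 270),
}

for name, (mh, w) in points.items():
    print(f"{name}: {efficiency(mh, w):.3f} MH/s per watt")
```

The undervolted point delivers nearly the same hashrate at far less power, which is why miners chase low core voltage rather than raw power limit.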


----------



## Bobbylee

0451 said:


> That you have a 2 x 8 and are OK with 700w only deepens my regrets. Consider learning more about your hardware before destroying it. Consider learning more about mining too. I am at 280w and 104mh/h. Is an extra 15 mh/h worth melting an 8 pin or blowing a fuse on a card that has fuses?


So during a Port Royal run at 2200MHz and a +1200 mem offset, my card draws 575W at the 100% limit, not 700. That's a max of ~284W through each 16 AWG cable. Absolutely fine. My temps reach 42C after 24/7 mining at 320W (124 MH/s). I intend to keep it at 320W by having one overclock profile for mining, then a separate overclock profile with increased power limits for gaming. Explain what the problem with this is? Plenty of 2080 Tis run 550W 24/7 safely; I fail to see why this generation is different, or what information you know that I don't.

Edit: I have disabled force P2 state and locked voltage at 0.731V. Power limit 60%, so power draw is around 270W at 118 MH/s; mem junction temp has come down 5C, die temp down 2C. Lost 6 MH/s. I feel like I'm missing something here: how is the performance still so high?
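
A rough sanity check of the cabling math above (a sketch; it assumes a nominal 12V rail, the PCIe slot supplying up to its 75W spec limit, and three 12V hot pins per 8-pin connector, which are nominal values rather than measurements of any particular card):

```python
# Estimate current per 12V pin for a 2x8-pin card at a given board power.

RAIL_V = 12.0     # nominal 12V rail
SLOT_W = 75.0     # PCIe slot spec limit
HOT_PINS = 3      # 12V supply pins in an 8-pin PCIe connector

def per_pin_amps(total_board_w: float, n_connectors: int = 2) -> float:
    """Amps through each 12V pin, splitting cable power evenly."""
    cable_w = max(total_board_w - SLOT_W, 0.0) / n_connectors
    return cable_w / RAIL_V / HOT_PINS

# 575W total board power, as in the Port Royal run described above:
print(f"{per_pin_amps(575):.1f} A per pin")  # → 6.9 A per pin
```

Around 7A per pin is within the common Mini-Fit Jr terminal ratings, but it does not leave a lot of headroom, which is the concern being raised here.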


----------



## changboy




----------



## motivman

Miguelios said:


> i like the sound of 200pts higher, rather than just 1.09 fps.. lol
> 
> ..and I disagree that the Strix is not better than reference cards, especially under water cooling.. Take a look at the power tables and you can see how much more tolerant towards voltage the Strix is than ref design.
> 
> At the end of the day this is an overclocking forum, so I’m not here looking for the best deal, I’m looking for the best performance


Just beat NZZ's strix score with my 2X8 pin card... lol. Reference cards are BEASTS...









I scored 15 672 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## changboy

Lol, it's great. I didn't mod my PCB, but those who did have crazy scores. I was also able to break the 15k barrier, and I can clock the GPU memory at +1475MHz, lol.









Result not found







www.3dmark.com


----------



## jomama22

dante`afk said:


> I think my FE is pretty good
> 
> View attachment 2476648
> 
> 
> i'm getting the evc2 the following days.


Here's my go. I don't really think this is much of an indication of anything if i'm being honest...










Also, when you get your EVC2, there is someone in Elmor's Discord who has already wired one up and could help you with that end of it. The person using it can't get it to actually alter any voltages yet, though I do think it's possible. Once you have it all together, LMK and we can try to get it to accept commands.


----------



## jomama22

Please ignore


----------



## pewpewlazer

Bobbylee said:


> Plenty of 2080tis running 550w 247 safely, I fail to see why this generation is different and what information you know that I don’t?


I can't say I've ever seen someone claim to be running a Turing card at 550w 24/7...


----------



## changboy

motivman said:


> Just beat NZZ's strix score with my 2X8 pin card... lol. Reference cards are BEASTS...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 672 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


How did you score 15 672 in Port Royal? And with what card?

I never hit 2200MHz and I don't know why; if I try more than +180 on the core, I crash.


----------



## Thanh Nguyen

Effective clock is 2240, but the score is low. Not sure if it's the internal voltage or 3DMark on Steam gimping me down.








I scored 15 565 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## changboy

Thanh Nguyen said:


> Effective clock is 2240 but the score is low. Not sure its the internal voltage or the 3dmark on steam gimps me down.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 565 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


How do you pass 2200MHz with temps so low? What are you guys doing?


----------



## GoldCartGamer

My TUF OC hashes approx 115Mh/s at 280w. I'm happy with that performance.


----------



## changboy

GoldCartGamer said:


> My TUF OC hashes approx 115Mh/s at 280w. I'm happy with that performance.


Hehehe, funny 

I didn't say I'm not happy, I always laugh hehehehe.


----------



## GQNerd

motivman said:


> Just beat NZZ's strix score with my 2X8 pin card... lol. Reference cards are BEASTS...
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 672 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Cool beans.. 
Grats on the exceptional Ref card, but my comments/rankings still stand. 

It comes down to:

Chip quality, system specs, cooling components, ambient conditions, and ultimately the overclocker's SKILL.

I haven’t done any serious PR runs in weeks, but I will soon and try to crack into the 16k+ club


----------



## GQNerd

Thanh Nguyen said:


> Effective clock is 2240 but the score is low. Not sure its the internal voltage or the 3dmark on steam gimps me down.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 565 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Perhaps it’s the internal voltage, but your CPU/RAM overclocks might not be as stable as you think..

See a lot of cards hitting 2250-2295 but scoring 15.5ish... You should be in the 15.8-16k range


----------



## ZealotKi11er

changboy said:


> How you pass 2200mhz and temp so low ? What you doing guys ?


You are from Canada and you don't know?


----------



## jomama22

changboy said:


> How you pass 2200mhz and temp so low ? What you doing guys ?


He's putting it near a window/outside


----------



## reflex75

Miguelios said:


> Perhaps it’s the internal voltage, but your CPU/RAM overclocks might not be as stable as you think..
> 
> See a lot of cards hitting 2250-2295 but scoring 15.5ish... You should be in the 15.8-16k range


Indeed, a bad (too steep) curve favors higher frequency over efficiency.


----------



## geriatricpollywog




----------



## changboy

Ya, I'm in Canada lol, but my PC is inside with the heater at full load; the temperature in my room is 24.5C hahaha.

Maybe if I put my PC outside for 30 minutes I could OC past 2300MHz hahaha. It's crazy cold outside now, but the thing is I will get the same score in summer hehehe.


----------



## changboy

My build :


----------



## Falkentyne

Is this worth buying?









Extech Instruments 380460-NIST Precision Milliohm Meter (110 VAC) with NIST, 6.3" x 2.8" x 3.4" Size: Amazon.com: Industrial & Scientific





www.amazon.com


----------



## motivman

changboy said:


> How you score 15 672 in port royal ? And what card ?
> 
> Me i never hit 2200 mhz and i dont know why, if i try more then +180 on core i crash.


PNY 3090 Epic-X on an EKWB block and backplate. Just noticed the MSRP on these has increased to over 2000 USD, compared to the 1650 USD I paid last year. SMH.



https://www.bestbuy.com/site/pny-geforce-rtx-3090-24gb-xlr8-gaming-epic-x-rgb-triple-fan-graphics-card/6432656.p?skuId=6432656


----------



## changboy

OK, but as for me: from 14,600 to 15,000 in Port Royal, I don't see a difference in games.


----------



## moccor

Does anyone know the thermal pad sizes for the Zotac 3090 Trinity? It uses some super-soft black thermal pads and I can't find anything like them; they're about 1.6-1.7mm or so, I'm pretty sure.


----------



## wiznillyp

Bobbylee said:


> So during a port royal run at 2200mhz and +1200 mem offset draws 575w on my card at 100% limit not 700. That’s a max of 284 through 16 awg cable. Absolutely fine. My temps reach 42c after 247 mining at 320w (124mh/s). I intend to keep it at 320w by having one overclock set for mining, then a seperate overclock profile with increased power limits for gaming. Explain what the problem with this is? Plenty of 2080tis running 550w 247 safely, I fail to see why this generation is different and what information you know that I don’t?
> 
> Edit: have disabled force p2 state and locked voltage at .731v. Power limit 60% so power draw around 270W. 118mh/s mem junction temp has come down 5c, die temp down 2c. Lost 6mh/s. I feel like im missing something here, how is the performance still so high?


Ethash is memory intensive. I get 115 MH/s at 1.2GHz core / 5.25GHz mem at 275W, similar to you.
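
"Memory intensive" can be made concrete: each Ethash hash reads about 8KB from the DAG (64 accesses of 128 bytes), so hashrate is roughly capped by memory bandwidth regardless of core clock. A back-of-envelope sketch (Python; the 936 GB/s figure is the 3090's spec-sheet bandwidth at stock memory clocks):

```python
# Upper bound on Ethash hashrate implied by memory bandwidth alone.

BYTES_PER_HASH = 64 * 128   # 64 DAG accesses of 128 bytes each

def max_hashrate_mh(bandwidth_gb_s: float) -> float:
    """Bandwidth-limited hashrate ceiling in MH/s."""
    return bandwidth_gb_s * 1e9 / BYTES_PER_HASH / 1e6

print(f"{max_hashrate_mh(936):.0f} MH/s")  # → 114 MH/s at stock 936 GB/s
```

That ceiling lines up with the ~115 MH/s figures reported in this thread (memory overclocks raise the effective bandwidth), which is why dropping the core to 1.2GHz barely costs any hashrate.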


----------



## wiznillyp

Does anyone know what BIOS would give me the highest power limit (trying to avoid Kingpin right now, if I can) on a 2x power adapter card? 

I have the Asus EKWB, which is limited to 104% (365W) in its BIOS; the number listed in the first post of this thread is incorrect. I flashed to a Gigabyte one that is supposedly 390W, but it seems I am a bit under that.

Thanks.


----------



## Thanh Nguyen

Miguelios said:


> Perhaps it’s the internal voltage, but your CPU/RAM overclocks might not be as stable as you think..
> 
> See a lot of cards hitting 2250-2295 but scoring 15.5ish... You should be in the 15.8-16k range


CPU and RAM passed TM5 Extreme and can run Linpack and Prime95. I remember Lumnni said the internal voltage gimps the performance.


----------



## GQNerd

Thanh Nguyen said:


> Cpu and ram passed tm5 extreme and can run linpack and prime95. I remember Lumnni said internal voltage gimps down the performance.


Hmm, then try bringing your voltage curve down a bit, maybe 2235-2250 at 1.1V. You can also try decreasing the mem offset.

As the other user mentioned, efficiency matters as well


----------



## cheddle

So it's been about 100 pages of posts since I was last here. Any BIOS higher than 390W for 2x8-pin reference boards at this point (aside from the 1000W one)?


----------



## des2k...

cheddle said:


> so its been about 100 pages of posts since I have been here. Any higher than 390w bios for 2x8pin reference boards at this point? (aside from the 1000w one)


Just flash the 1000W vbios and set limits with the slider; on 2x8-pin, actual draw is about -33% of the power you set.

390W is still pretty low for a 3090, since NVIDIA keeps it around 380W on average.

If you keep your fans at 60-70% you can manage 450W, unless the air cooler is complete garbage.
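
If that -33% rule of thumb holds, working out where to set the slider is simple arithmetic. A sketch (Python; the 0.33 derating is the anecdotal observation above for 2x8-pin boards, not documented vbios behavior):

```python
# Translate a desired actual board power into a slider setting for a
# 1000W vbios on a 2x8-pin card, assuming the card draws roughly 33%
# less than the limit you set (anecdotal observation, not a spec).

VBIOS_MAX_W = 1000.0
DERATE = 0.33

def slider_percent(target_watts: float) -> float:
    """Slider % that should produce about target_watts of actual draw."""
    set_watts = target_watts / (1 - DERATE)
    return 100 * set_watts / VBIOS_MAX_W

print(f"{slider_percent(450):.0f}%")  # → 67% slider for ~450W actual
```

So a target of ~450W actual draw lands around a two-thirds slider setting, rather than the 45% you might naively set.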


----------



## des2k...

wiznillyp said:


> EthHash is memory intensive. I get 115MH/s at 1.2 GHz core - 5.25 GHz mem at 275W, similar to you.


So limit the core to 1.2GHz at stock non-boost voltage?
I might just try this when I don't game.

What's the average $ you can make at 12h a day?


----------



## jomama22

Thanh Nguyen said:


> Cpu and ram passed tm5 extreme and can run linpack and prime95. I remember Lumnni said internal voltage gimps down the performance.


What is internal voltage? Link to where he said this to have some context?


----------



## changboy

Result not found







www.3dmark.com






http://gpuz.techpowerup.com/21/02/01/w27.png



Look at my gpuz info lol.


----------



## Thanh Nguyen

jomama22 said:


> What is internal voltage? Link to where he said this to have some context?


6:40 msvdd


----------



## arvinz

Can someone confirm that you can take off the backplate of the Strix card without taking off the front fan/heatsink piece? I'd like to upgrade the thermal pads on the backplate of my card, and I'm hoping I can do that relatively easily. If you've replaced the pads on yours, can you mention what thickness you used?


----------



## changboy

The bad thing with the 1000W BIOS is that the memory is always at full load.


----------



## wiznillyp

des2k... said:


> so limit core to 1.2ghz at stock non-boost voltage ?
> I might just try this when I don't game.
> 
> What's the average $ you can make for 12h a day ?


Yes. I use MSI Afterburner.

I have two profiles saved. One for gaming, and another for mining. The mining one does not use the curve - I locked the clock rates and voltages using Afterburner. Click "L" in the curve editor.

Right now, I just manually swap between the profiles because I don't know a good way to automate them yet (perhaps a script that presses a hotkey when idle?? Hmmm). Works pretty well.

Mining ETH at the current difficulty and exchange rate: about 100-110 USD a month, before your power bill.
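
For the 12-hours-a-day question, the arithmetic scales linearly with uptime. A sketch (Python; the USD-per-MH/s-per-day rate is a hypothetical snapshot you would look up from a profitability calculator, since it moves with difficulty and the ETH price):

```python
# Rough ETH mining revenue estimate, before electricity costs.
# usd_per_mh_day is a hypothetical snapshot rate, not a constant.

def monthly_revenue(hashrate_mh: float, usd_per_mh_day: float,
                    hours_per_day: float = 24.0) -> float:
    """Estimated revenue over a 30-day month, scaled by daily uptime."""
    return hashrate_mh * usd_per_mh_day * (hours_per_day / 24.0) * 30

# Example: 115 MH/s at an assumed 0.03 USD per MH/s per day
print(f"24h/day: ${monthly_revenue(115, 0.03):.0f}/month")
print(f"12h/day: ${monthly_revenue(115, 0.03, 12):.0f}/month")
```

At that assumed rate, 24/7 operation lands right in the 100-110 USD/month range quoted above, and 12h/day simply halves it.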


----------



## Redlurkeraite

motivman said:


> good lord, those temps are terrible.... how fast are you running the fans on the 360 rad?


To reiterate, the radiators currently do not have all of their fans, since I am still awaiting delivery.
I don't think it's particularly bad considering that I have shunt-modded my Strix 3090 and it easily pushes ~600-700W.


----------



## Falkentyne

So I think the rail throttling shunted 2x8-pin cards is SRAM Input Power; there might be a 150W cap on it.
It is probably NOT the GPU Core NVVDD Output Power (sum). SRAM Input Power and SRAM Output Power are related, and that sum is the bottom NVVDD rail + output power... and the common factor seems to be SRAM Input Power being too high.

I also think I know how to avoid this cap but I can't do it myself. Someone else would need to try.


----------



## moccor

Falkentyne said:


> Did you do the PCIE slot shunt? You did all six shunts, right? One shunt out of the six is hidden in the middle of nowhere. You said you desoldered the original shunts and put on 3 mOhm ones? Can you post your hwinfo screenshot showing with the "1.67x multiplier" for all the power limits added, while you are pulling max power you can before getting Perfcap: Power, for all the power rails?
> 
> For fixing your thermal issues:
> 
> Just follow my guide and repad your card. Trust me on this. This will fix _ALL_ percap Thermal flags as well as prevent "stuttering" bugs that I had on my card before I ever even opened the card up. It may even let you push the clocks higher!
> This works. Perfectly.
> 
> 1) Buy four packs of 1.5mm thickness, Gelid 12 w/mk pads, Thermalright Odyssey 12 w/mk pads or Fujipoly 11 w/mk pads. You will need them for re-padding the entire card (which you probably should do if you're repasting anyway. If you spent $1500+ on a FE, you can spend a little bit more on quality pads. I _believe_ all of these pads are from the same supplier (at least the Thermalright and Fujipoly pads are). You need 4 packs because the amount of pad you get is extremely small. The fujipoly 60mm * 50mm pad is barely larger than a saltine cracker. Don't buy the 17 w/mk pads, they literally crumble if you even open the card after a repad once--even though they offer the best thermal performance, they are not re-usable and are expensive.
> 
> 1.5mm pads work on both sides of the card and make fine contact.
> 
> 2) The heatsink side of the card is pretty self explanatory. Just use what is shown on Igor's lab page here. BTW make sure the entire left side of the strip is covered on the heatsink picture.
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything | Page 2 | igor'sLAB
> 
> 
> With the GeForce RTX 3090, NVIDIA is rounding out its graphics card portfolio at the top end today, for now. Much more is not possible with the GA102-300 anyway and so one may see the current…
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> 3) The backplate side of the card is where the problems happen. Hotspots and stuttering galore as well as hot GDDR6X. To do this, look at my picture carefully.
> Pay close attention to the red outlines. This is exactly where you pad.
> 
> View attachment 2464901
> 
> 
> 
> I got this pad layout by comparing the 3090 and 3080 pages on Igor's lab, and also noticed that my 3090 didn't even have pads on one side of the "V", when I saw another user in the shunt mod thread that did have a pad on that side (it was just a square). And since the hotspots at the V should be the same on FE 3090 and 3080, I came up with that placement. Pay attention to igor's hotspot drawing for 3080 FE with Nvidia's V pads, that don't seem to be on both V sides of the 3090....
> 
> 
> NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X | igor'sLAB (www.igorslab.de)
> 
> End result: No more "Thermal" flag, no more weird "Fast sync enabled" + Scanline Sync (RTSS) massive microstuttering, no more retrace lines with RTSS Scanline Sync + Vsync disabled, etc. And I can overclock the core higher too!
> 
> Remember to use 1.5mm pads, and choose the ones I listed. The Arctic 6 W/mK giant 145mm x 145mm x 1.5mm pads do work, but they are not ideal for this card and I was not able to overclock as high with them, even though my first re-pad with the Arctic pads (where I took that picture, before doing ANOTHER re-pad) is what fixed the weird microstuttering problem!
> 
> I now have Fujipoly and Thermalright Odyssey 1.5mm pads covering the entire backplate side for the RAM and hotspots, and the stock pads (which seem to be high quality, better than Arctic's) on the GPU-chip-side GDDR6X RAM (GPU re-pasted with Thermalright TFX). I did replace the original VRM pads with Arctic 1.5mm ones, which seems to be working fine, since the original pads are still on the RAM.


So I kind of ruined my Zotac 3090 Trinity by re-padding it. The pads were definitely bigger than 1.5mm, maybe 1.7-1.8mm; they were some super-soft foamy black pads. But I tried 2mm, 1.6mm (1.5+0.1 stacked), and 1.5mm, and my 3090 thermal throttles the memory very fast unless I drop the power limit to around 40-45%. Do you have any recommendation for pads?


----------



## Falkentyne

moccor said:


> So I kind of ruined my Zotac 3090 Trinity by re-padding it. The pads were definitely bigger than 1.5mm, maybe 1.7-1.8mm; they were some super-soft foamy black pads. But I tried 2mm, 1.6mm (1.5+0.1 stacked), and 1.5mm, and my 3090 thermal throttles the memory very fast unless I drop the power limit to around 40-45%. Do you have any recommendation for pads?


Which pads did you use?
Did you re-pad the backplate side or the GPU core side?
How do you know the memory was throttling? Was the memory junction temp 110C in hwinfo64?
And 1.6mm? What? There's no such thing as a 0.1mm thermal pad. I've never seen such a pad, nor would it be necessary. The smallest "normal" pad is 0.5mm. If you had a pad that needed 1.8mm thickness, you should be able to use 2mm and just compress it. MOST pads have about 0.3mm of leeway in them. Some have as much as 0.5mm but those are the really squishy ones.

You need to do a fit test. Find out which pads make proper contact on the backplate side (usually just one thickness). All you have to do is assemble the card, then immediately disassemble it and look for the RAM "impression" print on the thermal pad. That's really easy for the backplate side.

On the GPU Core side, you have to be more careful. Too thick pads will stop the heatsink from touching the GPU chip. Too thin pads won't contact the VRAM at all. There's also the VRM (voltage regulator) pads which may or may not be the same thickness as the VRAM pads.

If you aren't sure what pads to buy, try buying a collection of these.



https://www.amazon.com/gp/product/B086VYKZN4/



These aren't the best pads, but you get a GIANT square and they will at least make the card work. And you get enough to experiment with, since they are 200mm x 200mm in size!
I would grab one of each size: 0.5mm, 1mm, and 1.5mm. I'm a bit doubtful you would need 2mm, but at least you would have pads to last a lifetime if you did.

Testing the backplate is easy. You have a lot of leeway there.

When testing proper thickness on the GPU Core side, you need to look for 3 things.

1) VRAM pad impressions on the chip side when tightening down (no chip impression=pads are not thick enough). NOTE: YOU USUALLY ONLY HAVE 0.5mm OF LEEWAY BETWEEN VRAM THICKNESS AND 'GPU CORE WON'T TOUCH THE HEATSINK' !!

2) VRM pad impressions---if the VRM doesn't make an 'impression' on the pad after you tighten and remove the heatsink, the pad isn't thick enough.

3) GPU Core contact -- to test this, apply a small drop of thermal paste in the very middle of the GPU die once you have found the proper VRAM and VRM contact thickness, then tighten and release, and make sure the paste spread. If it spread, you're good. If not, then ONE of the pad sections is too thick.

Once you've found the perfect pad sizes, you can use those until you can get something of much higher quality, like Gelid 12 W/mK pads or Thermalright Odyssey 12.8 W/mK pads in non-tiny, non-postage-stamp sizes.
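The fit-test logic above boils down to simple arithmetic. Here's a toy sketch; the gap heights and the ~0.3mm compression leeway are illustrative assumptions, not measured values for any specific card:

```python
# Toy sketch of the pad "fit test" described above. The gap value and
# the ~0.3mm compression leeway are illustrative assumptions only.

def makes_contact(gap_mm, pad_mm, leeway_mm=0.3):
    """True if the pad is thick enough to bridge the gap, but not so
    oversized that it can't compress down to it (which would hold the
    heatsink off the GPU die)."""
    return pad_mm >= gap_mm and (pad_mm - leeway_mm) <= gap_mm

# Example: a hypothetical 1.7mm VRAM-to-heatsink gap
for pad in (1.0, 1.5, 2.0):
    print(f"{pad}mm pad -> contact: {makes_contact(1.7, pad)}")
```

With those made-up numbers, only the 2.0mm pad both reaches the 1.7mm gap and can still compress into it; the real test is always the impression print described above.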









7.48US $ 32% OFF|Thermalright Odyssey Thermal Pad On-conductive Silicone Grease Pad 12.8w/mk For Gpu/ram/motherborad/ssd 120x120mm Original Pad - AliExpress (www.aliexpress.com)

8.65US $ 6% OFF|Thermalright Thermal Pad 120x120mm 12.8 W/mk 0.5mm 1.0mm 1.5mm 2.0mm High Efficient Thermal Conductivity Original Authentic - AliExpress (www.aliexpress.com)

15.51US $ |Gelid Tp-gp02 120x120x0.5mm 1.0mm 1.5mm Graphics Processor Cooling Radiator Conductive Silicone Pad Thermal Pad High Quality - AliExpress (www.aliexpress.com)


----------



## changboy

What I want you to tell me is: why does the memory never downclock to idle with the 1000W BIOS, and what can I do to resolve this?


----------



## GQNerd

changboy said:


> The bad thing with the 1000W BIOS is that the memory is always at full load.





changboy said:


> What I want you to tell me is: why does the memory never downclock to idle with the 1000W BIOS, and what can I do to resolve this?


Not the case if you set the mem to a negative offset.

I'm using it as a daily: I set a negative offset when doing regular tasks, and let it run at full speed when gaming or rendering, etc.


----------



## changboy

Yeah, OK, but I already changed it, and now I will try the 520W BIOS.
You know what I saw?

- In-game, things looked fine: 525W board draw and 88W on the PCIe slot.
- But then I wanted to watch a 4K movie with MPC-HC, so I started the movie, checked GPU-Z, and WTF! Board power draw was at 688W, with 108W from the PCIe slot!

What I did: I changed that BIOS very fast and put on the 520W BIOS. The 1000W BIOS is very dangerous; I won't use that thing again.


----------



## GQNerd

changboy said:


> Yeah, OK, but I already changed it, and now I will try the 520W BIOS.
> You know what I saw?
> 
> - In-game, things looked fine: 525W board draw and 88W on the PCIe slot.
> - But then I wanted to watch a 4K movie with MPC-HC, so I started the movie, checked GPU-Z, and WTF! Board power draw was at 688W, with 108W from the PCIe slot!
> 
> What I did: I changed that BIOS very fast and put on the 520W BIOS. The 1000W BIOS is very dangerous; I won't use that thing again.


That’s pretty weird... and yes, the 1000W BIOS is dangerous without proper cooling. I always cap the power limit to 580-620W either way.

I’m on a KingPin, so I can always just flip the BIOS switch, but that still requires a reboot, so setting a software limit is easier.

You’ll be fine with the 520W BIOS.


----------



## des2k...

Falkentyne said:


> So I think the throttling rail limiting shunted 2x8 pin cards is SRAM Input Power. Might be a 150W cap on it.
> It is probably NOT the GPU Core NVVDD Output Power (sum). SRAM Input Power and SRAM Output Power are related and that sum is the bottom NVVDD rail+output power....and the common factor seems to be SRAM Input power being too high.
> 
> I also think I know how to avoid this cap but I can't do it myself. Someone else would need to try.


Wouldn't make much difference on a reference 2x8-pin, though?
I think all reference 2x8-pin boards from NVIDIA run at most 2 or 3 phases for uncore, so that's why you see that 150W (hardware) limit.

Maybe on custom cards they forgot to adjust the limit when they run more phases on uncore, but those are usually 3x8-pin cards and use 400W+.


----------



## des2k...

Miguelios said:


> Not the case if you set the mem to a negative offset..
> 
> I’m using it as a daily, and have it set to negative offset when doing regular tasks, and let it run full speed when gaming or rendering etc..
> View attachment 2476796


Yeah, you're stuck at 315 MHz core and 405 MHz mem, basically safety clocks for the GPU and memory.
Not sure why you would run that; certain things you do in Windows (outside gaming) need a minimum frequency on the core/mem.

No load on the memory at max freq is still idle. It just happens to consume a bit more wattage.


----------



## changboy

I just tried Port Royal with the 520W BIOS and scored 14,860. It's a bit less, but I'm not worried anymore. The 1000W BIOS only gave me +200 on my score anyway; not worth the risk.

The thing with the 1000W BIOS is that if you don't keep checking and use a program that isn't really hard on the GPU, you can still see ridiculous power draw at the end. Even though I'm on water now, I won't use that thing.


----------



## Falkentyne

des2k... said:


> Wouldn't make much difference on reference 2x8pin ?
> I think all reference 2x8pin from nvidia are 2 or 3 phases max for uncore so that's why you see that 150w limit(hardware).
> 
> Maybe custom cards they forgot to adjust the limit if they run more phases on uncore, but usually those are 3x8pin cards and use 400w+.


Again, I don't have confirmation yet.
But at least on the FE cards, shunting the PCIE Slot (possibly in combination with SRC; this step may be required) at a lower resistance than the other shunts may bypass this limit.
Example: (8-pins, MVDDC, GPU Chip Power) = 5 mOhm stack, (PCIE Slot, PWR_SRC) = 3 mOhm stack should work.

It's already confirmed that just shunting PCIE Slot by itself more aggressively without matching it with SRC allows a -little- higher power. SRC and Slot power are linked, so both may need to be done.

I believe the culprit rail is "SRAM Input Power". There is no way to shunt this rail directly. But the SRC has access to it and PCIE is linked to SRC...
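For reference, the arithmetic behind "stacking" shunts is just parallel resistance. A small sketch, assuming a 5 mOhm stock shunt (the controller keeps assuming the stock value, so the power it reports scales down by the resistance ratio):

```python
# Parallel-resistance arithmetic behind shunt stacking. Soldering a
# second shunt on top of the stock one puts the two in parallel; the
# controller still assumes the stock resistance, so reported power
# scales by R_effective / R_stock. The 5 and 3 mOhm values mirror the
# stacks mentioned above, treated here as the added shunt's resistance.

def parallel(r1_mohm, r2_mohm):
    """Effective resistance of two shunts in parallel, in mOhm."""
    return r1_mohm * r2_mohm / (r1_mohm + r2_mohm)

R_STOCK = 5.0  # mOhm, assumed stock shunt value

for r_added in (5.0, 3.0):
    r_eff = parallel(R_STOCK, r_added)
    print(f"{r_added} mOhm stacked: {r_eff:.3f} mOhm effective, "
          f"reported power = {r_eff / R_STOCK:.1%} of actual")
```

So a 5-on-5 stack makes the card report half the real power, and a 3-on-5 stack makes it report 37.5%, which is why a 3 mOhm stack on Slot/SRC shifts the balance more aggressively than the 5 mOhm stacks on the other rails.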


----------



## des2k...

changboy said:


> I just tried Port Royal with the 520W BIOS and scored 14,860. It's a bit less, but I'm not worried anymore. The 1000W BIOS only gave me +200 on my score anyway; not worth the risk.
> 
> The thing with the 1000W BIOS is that if you don't keep checking and use a program that isn't really hard on the GPU, you can still see ridiculous power draw at the end. Even though I'm on water now, I won't use that thing.


Well, you know, there's the power limit slider too; you can set it to 60% (600W). That shouldn't be much less safe if 520W already works for you.

I was playing Control at 4K max on mine, and it was using up to 590W. A delta of 10-14C with 25C water temps wasn't too much of a concern.


----------



## J7SC

After reshuffling components in three separate systems, I finally got a brand-new Win 10 Pro install for the Asus Strix 3090...that card is definitely a keeper, judging by some (early, light) benching. Temps and fan noise are quite good, but I will get a water block as soon as I have done a bit more research on them. The main function of the 3090 will be to run secondary 4K productivity and games on a 55-inch screen.


----------



## moccor

Falkentyne said:


> Which pads did you use?
> Did you re-pad the backplate side or the GPU core side?
> How do you know the memory was throttling? Was the memory junction temp 110C in hwinfo64?
> [snip]


I bought a bunch of 6.0 W/mK pads (US $1.43 | New 6.0 W/mK GPU CPU Heatsink Cooling Conductive Silicone Pad 100mm*100mm*1mm Thermal Pad high quality | Fans & Cooling - AliExpress) in November 2019 and kept them in a ziplock Thermal Grizzly thermal paste pouch. I first tried the 2.0mm, thinking worst case it would squish the pads a bit. Checking with my fingers, I guess they are misleadingly squishier on the edges than in the middle of the pads, but it's hard to check when my finger is covering them, haha. I know the memory is throttling since, even stock, the GPU would get extremely loud very quickly while mining (we all know how GDDR6X is). It was hitting 106-108C stock and throttling nearly instantly. But I know it is worse now because it gets to 110C, and I exit anything as soon as I see that. I am sure it would get higher if I let it.

You're right, I don't know why I was thinking 0.1mm; that was dumb of me. Which means I tried 2.0mm and then 1.5mm+0.5mm, both of which didn't produce good results. I also tried just 1.5mm Arctic pads, which didn't help temps. I did try the 2.0mm on the front too: I put a tiny bit of paste on the die and reassembled it, then disassembled it and saw the heatsink never touched the die, so 2.0mm is definitely too big for the front. All the pads were the same black kind and size, though. Maybe the pads I used were just bad, then. Is it possible they quickly go bad even when kept in their plastic packages and in a zip baggy?

I did take note of the backplate leaving an imprint on all of the 2mm pads, as well as on the front. I forget if the 1.5mm left imprints, since I re-padded it about 3-4 times, not understanding how it could do so badly temp-wise. I think the pads I used might just be bad: the Arctic pads were about 5-6 years old (unopened in their plastic bag) and the others were pretty much never opened and always in a ziplock bag. Maybe 6 W/mK just isn't enough for the 3090s? I will have to take it apart once more and check for impressions on the 1.5mm, then just buy good 1.5mm pads if there are impressions. There isn't a chance they could be some weird proprietary black kind? I found some company that sells ultra-soft thermal pads cut to specific sizes. I saw they sold 2mm on Amazon, but it was like $12 plus $42 shipping.

Thanks a lot for all of that info too I really appreciate it.


----------



## changboy

des2k... said:


> Well you know, there's the power limit slider too, you can use it at 60% (600w). Shouldn't be that much unsafe if 520w already works for you.
> 
> Was playing Control 4k max on mine, was using up to 590w. Delta from 10c - 14c with water temps 25c wasn't too much concern.


Yeah, I know, and my power slider was at 100%, lol. But I found it a bit of a mess to always adjust everything for different applications/programs/games versus the benefit of this BIOS. Yes, I get slightly better performance, but if I forget to adjust those settings after playing a game and something gets damaged, I will feel very bad. So I'm on 520W for now.

The 2x8-pin Gigabyte 390W BIOS gave me nearly the same result as the 1000W BIOS, but the power usage was messy: I saw 175W on 8-pin #2 and 75W on 8-pin #1, with no reading on the third pin, while the PCIe slot was just at 68W.

So all in all, I think the 520W BIOS is best for me for now. I will use it until I want to try something else, or the 1000W again, but not for now, because after seeing 108W from the PCIe slot I'm a bit scared, hehehe.


----------



## Falkentyne

moccor said:


> I bought a bunch of 6.0 W/mK pads in November 2019 and kept them in a ziplock Thermal Grizzly thermal paste pouch. [snip]
> 
> Thanks a lot for all of that info too, I really appreciate it.


Try the pads I linked. There is feedback on Amazon saying they helped people with Vega 64 and EVGA RTX 2080 Super cards, so they should at least be better than what you have.
Once you get everything working, post your findings here.


----------



## moccor

Falkentyne said:


> Try the pads I linked. There is feedback on Amazon saying they helped people with Vega 64 and EVGA RTX 2080 Super cards, so they should at least be better than what you have.
> Once you get everything working, post your findings here.


Will do. I took the HSF off just now, and 1.5mm is definitely too small on the front; there are no indents. 2mm is too big, though. Would they ever use something weird like 1.6mm or 1.7mm thermal pads on a GPU?

Edit: I guess that means it has to be 2mm, and that the pads I was using were simply too hard. I will order some 2mm and try them and report back. Thanks again.


----------



## des2k...

Falkentyne said:


> Again I don't have confirmation yet.
> But at least on the FE cards, shunting PCIE Slot (possibly in combination with SRC, this step may be required) at a lower resistance than the other shunts may bypass this limit.
> Example: (8 pins, MVDDC, GPU Chip Power)= 5 mOhm stack, (PCIE Slot, PWR_SRC)=3 mOhms stack should work.
> 
> It's already confirmed that just shunting PCIE Slot by itself more aggressively without matching it with SRC allows a -little- higher power. SRC and Slot power are linked, so both may need to be done.
> 
> I believe the culprit rail is "SRAM Input Power". There is no way to shunt this rail directly. But the SRC has access to it and PCIE is linked to SRC...


For some reason uncore is 6 phases on the FE, so you have tons available, lol.

I'm not sure why NVIDIA goes crazy on uncore phases for the FE models, then uses only 3 or fewer phases on the reference PCB.


----------



## GQNerd

des2k... said:


> yeah you're stuck 315mhz core and 405 mem, basically safety clocks for the GPU & mem
> not sure why you would run that, certain things you do in windows (not gaming) need a min freq on the core / mem.
> No load on the memory at max freq is still idle. It just happens to consume a bit more wattage.


Yes, that’s the mem safety clock, but the core still boosts... the screenshot is from when it’s literally at idle.

I only run it when I’m not doing anything intensive on my PC (movies and occasional browsing, maybe Photoshop), and that’s not often. I’m usually on my Razer more than my desktop.

Also, per my previous comment, it’s easier to set it like this for lower consumption when necessary than to flip the BIOS switch. The latter requires me to reboot the system to fully apply; if I don’t, the card still pulls 100W minimum at idle.

_my recommendation is for people that don’t want to keep flashing/rebooting, but want to run the 1000w as their daily_


----------



## mirkendargen

Miguelios said:


> Yes that’s the mem safety clock, but the core still boosts.. screenshot is when it’s literally at Idle.
> 
> I only run it when I’m not doing anything intensive on my PC (Movies and occasional browsing, maybe photoshop), and it’s not that often. I’m usually on my Razer more often than my desktop.
> 
> Also per my previous comment it’s easier to set it like this for lower consumption when necessary rather than flipping the bios switch. The latter requires me to reboot the system to fully apply.. If I don’t, card still pulls 100w minimum at idle
> 
> *my recommendation is for people that don’t want to keep flashing/rebooting.*


Yeah I'm a fan of dual (or triple) BIOS with a switch as protection in case something goes horribly wrong during a flash...but in terms of practicality it's easier to just flash a different BIOS and reboot than to open your case up, fiddle with the tiny tiny switch that you'll end up breaking one of these days, then reboot.


----------



## Glerox

Hey guys, need some help from my fellow overclockers.

I had the FTW3 on air in my 9900K system. I managed to get 2070MHz in Port Royal with +120 on the core. Temps are around 65C. I hit the power limit.

I switched it to my Ryzen 3970X system under water and shunted the 3 PCIe resistors with silver paint. Temps are now 33C. I no longer hit the power limit (max 104% power).
However, the frequency is now all over the place in Port Royal: +120 on the core gets me 1980-2070MHz.
+220 gets me 2055MHz with spikes of 2130MHz, and then it crashes...
My score is the same...

How do you explain that my frequency doesn't get higher with lower temps?
Can shunt mod change the frequency curve to power/temp?
Can threadripper change the frequency curve? I don't see how the CPU can change the GPU core frequency... it changes the FPS but the frequency?
I tried also game mode on my 3970x and the same problem occurs.
I tried normal bios, OC bios and 500w EVGA beta bios...

Thanks!


----------



## J7SC

...I am slowly tiptoeing my way up to find the 'speed limit' for GPU and VRAM using Superposition 4K with the Strix, and have yet to run into it / don't know top speed yet. However, I noticed more variance on the 12V PCIe rail than I'd like, though after looking through quite a few posts here, that seems to happen at times...still, I'm using an older AX1200 PSU and have other PSUs I can switch to; I just don't fancy changing all the cables without good reason. What do you guys think about the 12V variance? Any other comments re: power distribution? Thanks


----------



## Falkentyne

Glerox said:


> Hey guys, need some help from my fellow overclockers.
> 
> I had the FTW3 on air in my 9900K system. I managed to get 2070MHz in Port Royal with +120 on the core. Temps are around 65C. I hit the power limit.
> 
> I switched it to my Ryzen 3970X system under water and shunted the 3 PCIe resistors with silver paint. Temps are now 33C. I no longer hit the power limit (max 104% power).
> However, the frequency is now all over the place in Port Royal: +120 on the core gets me 1980-2070MHz.
> +220 gets me 2055MHz with spikes of 2130MHz, and then it crashes...
> My score is the same...
> 
> How do you explain that my frequency doesn't get higher with lower temps?
> Can shunt mod change the frequency curve to power/temp?
> Can threadripper change the frequency curve? I don't see how the CPU can change the GPU core frequency... it changes the FPS but the frequency?
> I tried also game mode on my 3970x and the same problem occurs.
> I tried normal bios, OC bios and 500w EVGA beta bios...
> 
> Thanks!


This means you're still at the power limit.
You can't ONLY shunt the three 8-pins. You know that, right?
There is power balancing going on. The SRC shunt has access to all the other shunts and reads power from them.

1) Shunting the three 8-pins drops the SRC power reading. The lower the resistance of the 8-pin shunts (3 mOhm is lower than 5 mOhm), the more the SRC reading will drop.
2) Shunting GPU Chip Power RAISES SRC. The better chip power is shunted, the more SRC is raised.
3) Not shunting PCIE Slot Power on some cards causes MVDDC to skyrocket, and both rails then throttle you. MVDDC and PCIE Slot Power seem linked somehow.
4) There is a link between PCIE Slot Power and SRC on at least some cards.
5) NOT shunting GPU Chip Power seems to affect the power draw, especially if the TDP is set lower than 101%, causing you to not reach the full TDP.
6) SRC has some weird power balancing. The card draws power from the motherboard slot and the 8-pins in a 'fixed' ratio, but how this power is distributed seems to be related to what the SRC reads.
7) All shunts must be modded.


----------



## sultanofswing

J7SC said:


> ...I am slowly tiptoeing my way up to find the 'speed limit' for GPU and VRAM using Superposition 4K with the Strix and have yet to run into them / don't know top speed yet. However, I noticed more variance in the 12v PCIe than I like, though after looking through quite a few posts here, that seems to happen at times...still, I'm using an older AX1200 PSU and have other PSUs I can switch to, I just don't fancy changing all the cables w/o good reason. What do you guys think about the 12v variance ? Any other comments re. power distribution ? Thanks
> 
> View attachment 2476802


Looks like you got a keeper for sure. The 8-pin numbers seem pretty normal; your PCI-E input is surprisingly low, assuming I am seeing that correctly, at only 48W, which is pretty crazy.


----------



## J7SC

sultanofswing said:


> Looks like you got a keeper for sure. The 8-pin numbers seem pretty normal; your PCI-E input is surprisingly low, assuming I am seeing that correctly, at only 48W, which is pretty crazy.


Thanks - so far so good with this card...a good waterblock will also come in handy when the time comes. I added an Arctic P12 PWM angled over the backplate; it's not running at full tilt, and thus quiet, but it seems to help move some of the heat off the backplate.


----------



## sultanofswing

J7SC said:


> Thanks - so far so good with this card...a good waterblock will also come in handy when the time comes. I added an Arctic P12 PWM angled over the backplate; it's not running at full tilt, and thus quiet, but it seems to help move some of the heat off the backplate.


I have a temp sensor on the backplate of my FTW3 and it gets to about 65C. Hell, that Arctic P12 could be cranked to full speed and still be quiet.


----------



## wiznillyp

Bobbylee said:


> Thanks for the rapid response. Do you know of any good resources that will teach me how to do this? And I should be doing this in Nvidia inspector right?


Yup.









NVIDIA CUDA Force P2 State - Performance Analysis (Off vs. On) (babeltechreviews.com)

NVIDIA CUDA Force P2 State Feature Performance Analysis (Off vs. ON) - 15 games benchmarked using a Gigabyte AORUS RTX 3080 MASTER.


----------



## Zogge


sultanofswing said:


> Looks like you got a keeper for sure. The 8-pin numbers seem pretty normal; your PCI-E input is surprisingly low, assuming I am seeing that correctly, at only 48W, which is pretty crazy.


It is the same on my Strix: PCIe slot power never exceeds 50W. I can run 2220/22100 in 4K Optimized, with effective clocks around 2160-2190 as temps go to 45C on the 520W BIOS.


----------



## WilliamLeGod

Wth


----------



## Julia1989

Hello, I have a big problem. I was clever enough to remove the original thermal pads from the backplate of my ASUS ROG Strix 3090 OC, and now it gets too hot; something is throttling the core clock down to 600 MHz. Which thermal pads should I buy, and in what thickness (mm)? The chip itself is quite cool, about 56°C under load... could somebody give me a hint?


----------



## Glerox

Falkentyne said:


> This means you're still at the power limit.
> You can't ONLY shunt the three 8-pins. You know that, right?
> There is power balancing going on. The SRC shunt has access to all the other shunts and reads power from them.
> 
> 1) Shunting the three 8-pins drops the SRC power reading. The lower the resistance of the 8-pin shunts (3 mOhm is lower than 5 mOhm), the more the SRC reading will drop.
> 2) Shunting GPU Chip Power RAISES SRC. The better chip power is shunted, the more SRC is raised.
> 3) Not shunting PCIE Slot Power on some cards causes MVDDC to skyrocket, and both rails then throttle you. MVDDC and PCIE Slot Power seem linked somehow.
> 4) There is a link between PCIE Slot Power and SRC on at least some cards.
> 5) NOT shunting GPU Chip Power seems to affect the power draw, especially if the TDP is set lower than 101%, causing you to not reach the full TDP.
> 6) SRC has some weird power balancing. The card draws power from the motherboard slot and the 8-pins in a 'fixed' ratio, but how this power is distributed seems to be related to what the SRC reads.
> 7) All shunts must be modded.


Thanks! I'll try to mod the other shunts. Do you know how many shunts there are on the FTW3? I don't want to miss any.


----------



## Beagle Box

Redlurkeraite said:


> As I will reiterate, the radiators currently do not have any fans on them since I am awaiting their delivery.
> I don't think it's particularly bad considering that I have shunt modded my strix 3090 and it easily pushes 600/700~w


If you don't mind my asking, which shunts did you mod?


----------



## stryker7314

WilliamLeGod said:


> Wth
> View attachment 2476816


No kidding, I saw that this weekend too and just assumed it was a 3090, only noticed it was a 6900XT when you posted it! I wanna know more about this!


----------



## nyk20z3

GALAX introduces GeForce RTX 3090 Hall of Fame series clocked up to 1905 MHz - VideoCardz.com


GALAX announces three Hall of Fame graphics cards Just a day after posting a teaser of the upcoming flagship series, the manufacturer formally introduces the graphics cards. GALAX HOF (OC Lab, Premium/Limited Ed, Base), Source: Galax It’s finally here, the ultra-premium graphics card by GALAX...




videocardz.com


----------



## jomama22

WilliamLeGod said:


> Wth
> View attachment 2476816


It's a glitch with the 6xxx series. Has happened a few times recently. It's actually 2x 6900XTs lol. They will fix it soon.


----------



## des2k...

*1000w on 2x8pin power reporting*

I was tired of adding values in GPU-Z, and a general multiplier on total power wasn't very accurate; the ratio moved between .66, .67 and .68 depending on game load, and idle power was always off

HWiNFO64 has custom sensors; this works really well for adding power values together, and you can even factor your shunts into those values 





Custom user sensors in HWiNFO


HWiNFO since version 6.10 introduces a new feature - ability to display custom user sensors in the sensors window. This feature allows users with basic programming skills to show any sensor values. This might be useful in case of custom-made devices, or sensor values not implemented in HWiNFO by...




www.hwinfo.com
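If you would rather script the totaling than set up HWiNFO custom sensors, the same sum can be computed offline from a GPU-Z sensor log, which is plain CSV. A rough sketch; the column names below are made up for illustration, so match them to the headers in your own log file:

```python
import csv

# Hypothetical rail columns - real GPU-Z headers vary by card and version.
RAIL_COLUMNS = ["PCIe Slot Power [W]", "8-Pin #1 Power [W]", "8-Pin #2 Power [W]"]

def total_power(rows, columns=RAIL_COLUMNS):
    """Sum the per-rail wattages for each logged sample."""
    return [sum(float(row[c]) for c in columns) for row in rows]

def read_log(path):
    """Yield one dict per logged sample, with padded cells stripped."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f, skipinitialspace=True):
            yield {k.strip(): (v or "").strip() for k, v in row.items()}

# Inline sample data instead of a real log file:
samples = [{"PCIe Slot Power [W]": "48.0",
            "8-Pin #1 Power [W]": "150.25",
            "8-Pin #2 Power [W]": "149.75"}]
print(total_power(samples))  # [348.0]
```

Unlike a fixed multiplier on total board power, this sums the actual per-rail readings for every sample, so it tracks load changes the way the custom-sensor approach does.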


----------



## changboy

I have a low score on Time Spy Extreme and I am #48 lol:








I scored 11 689 in Time Spy Extreme


Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## J7SC

stryker7314 said:


> No kidding, I saw that this weekend too and just assumed it was a 3090, only noticed it was a 6900XT when you posted it! I wanna know more about this!





jomama22 said:


> It's a glitch with the 6xxx series. Has happened a few times recently. It's actual 2x6900xt's lol. They will fix it soon.


Apart from that particular result and potential 'glitch', the 6900XT is still a _very impressive_ card at rasterization (ray tracing not so much, until further improvements). However, 6900XT memory bus / bandwidth limitations - even with InfinityCache - become an issue at 4K. I settled on the 3090 Strix because it will be doing 4K exclusively once the setup / testbench is complete. 

Speaking of finalizing the setup, I now have to get Steam and especially Microsoft Store / FlightSim2020 loaded onto this new install, while keeping the original machine and account (not running concurrently). I just know this will be a torture trip through the Microsoft account universe


----------



## stryker7314

J7SC said:


> Apart from that particular result and potential 'glitch', the 6900XT is still a _very impressive_ card at rasterization (ray tracing not so much, until further improvements). However, 6900XT memory bus / bandwidth limitations - even with InfinityCache - become an issue at 4K. I settled on the 3090 Strix because it will be doing 4K exclusively once the setup / testbench is complete.
> 
> Speaking of finalizing the setup, I now have to get Steam and especially Microsoft Store / FlightSim2020 loaded onto this new install, while keeping the original machine and account (not running concurrently). I just know this will be a torture trip through the Microsoft account universe


Ah that explains it! Same reason I went 3090 because I'm running 4k, otherwise woulda got a 6900XT.


----------



## jomama22

J7SC said:


> Apart from that particular result and potential 'glitch', the 6900XT is still a _very impressive_ card at rasterization (ray tracing not so much, until further improvements). However, 6900XT memory bus / bandwidth limitations - even with InfinityCache - become an issue at 4K. I settled on the 3090 Strix because it will be doing 4K exclusively once the setup / testbench is complete.
> 
> Speaking of finalizing the setup, I now have to get Steam and especially Microsoft Store / FlightSim2020 loaded onto this new install, while keeping the original machine and account (not running concurrently). I just know this will be a torture trip through the Microsoft account universe


Not a knock against it, was merely pointing out what was happening on the scoreboard.



changboy said:


> I have a low score on time spy extreme and iam #48 lol :
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 11 689 in Time Spy Extreme
> 
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Yes, that is what will happen with a 10980XE. You should be comparing your graphics score against others'.


----------



## changboy

The 6900XT is in fact a good card, but the bad part is the buggy driver; sometimes it runs well and sometimes you have more problems than you could dream of. Sometimes old problems come back, and it's always like that.

I always used AMD cards before, so I know what I'm talking about. Also, after selling my 1080 Ti and before getting my RTX 3090, I used my R9 290 and got so many bugs it's incredible. I won't buy another AMD because of this.


----------



## changboy

jomama22 said:


> Not a knock against it, was merely pointing out what was happening on the scoreboard.
> 
> 
> Yes, that is what will happen with a 10980xe. Should be comparing your graphics score against others.


What's your score in Time Spy Extreme?


----------



## motivman

des2k... said:


> *1000w on 2x8pin power reporting*
> 
> I was tired of adding values in gpuz and a general times X from total power wasn't very accurate, moving from .66 .67 .68 depending on the game load & idle power was always off
> 
> hwinfo64 has custom sensors, this works really well for adding power together , you can even add your shunts to those values
> 
> 
> 
> 
> 
> Custom user sensors in HWiNFO
> 
> 
> HWiNFO since version 6.10 introduces a new feature - ability to display custom user sensors in the sensors window. This feature allows users with basic programming skills to show any sensor values. This might be useful in case of custom-made devices, or sensor values not implemented in HWiNFO by...
> 
> 
> 
> 
> www.hwinfo.com
> 
> 
> 
> 
> View attachment 2476832


This is awesome, but you know power reporting for pins 1 and 2 is inaccurate on a 2x8-pin card with any 3-pin BIOS. Pin 1 always reports way higher than pin 2, but I verified with a clamp meter that they both pull about the same power.


----------



## J7SC

jomama22 said:


> Not a knock against it, was merely pointing out what was happening on the scoreboard.
> (...)


...didn't take it as a knock. I had looked at both the 3090 and the 6900XT carefully re. various tests at 1440p vs 4K, and the 4K VRAM bus on the 6900XT (and RTX performance, for now) is clearly a factor on an otherwise very nice card, IMO


----------



## jomama22

changboy said:


> What your score in time spy extreme ?


https://www.3dmark.com/spy/17838197


----------



## changboy

jomama22 said:


> https://www.3dmark.com/spy/17838197
> View attachment 2476843


Wow nice score dude also at 20c hehehe


----------



## Falkentyne

Is this worth $198 USD?









Extech Instruments 380460-NIST Precision Milliohm Meter (110 VAC) with NIST, 6.3" x 2.8" x 3.4" Size: Amazon.com: Industrial & Scientific





www.amazon.com


----------



## xarot

Anyone with a >18c/36t CPU here? Looks like on my W-3175X I am getting some really poor performance unless I disable HT to make it a 28c/28t CPU. Looks like a potential driver bug. Getting dips to 63% GPU usage in Time Spy, resulting in a 13-14K Time Spy score. Disable HT and I'm getting 19-20K easily... I tried disabling 8 cores (20c/40t) with HT on as well and the result improved quite a bit, though not nearly as much as with HT fully disabled.

This did not happen with the RTX 2080 Ti from what I recall. Might go back to the 2080 Ti over this


----------



## Zogge

10980XE here at 4.7 to 5.0 GHz. No issues with dips in GPU usage or anything on 18 cores / 36 threads. Encore board with 8x8GB 4000 CL16, cache at 3200 MHz. I will check my score and post with/without HT. 4K Time Spy?


----------



## xarot

Zogge said:


> 10980XE here at 4.7 to 5.0 GHz. No issues with dips in GPU usage or anything on 18 cores / 36 threads. Encore board with 8x8GB 4000 CL16, cache at 3200 MHz. I will check my score and post with/without HT. 4K Time Spy?


Thanks! However, I forgot to write that on my 7980XE I have no issues at all with my 3090 Strix OC. Only on the W-3175X, which is 28c/56t (unless I disable HT).


----------



## changboy

xarot said:


> Thanks! However, I forgot to write that on my 7980XE I have no issues at all with my 3090 Strix OC. Only on the W-3175X, which is 28c/56t (unless I disable HT).


You can ask 
*der8auer*


----------



## wiznillyp

Falkentyne said:


> Is this worth $198 USD?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Extech Instruments 380460-NIST Precision Milliohm Meter (110 VAC) with NIST, 6.3" x 2.8" x 3.4" Size: Amazon.com: Industrial & Scientific
> 
> 
> 
> 
> 
> www.amazon.com


Why do you think you need a milliohm meter? Just for the shunt mod?


----------



## Falkentyne

wiznillyp said:


> Why do you think you need a milliohm meter? Just for the shunt mod?


I don't know. I like buying cool toys.


----------



## xarot

changboy said:


> You can ask
> *der8auer*


Thanks, I already asked him another question some time ago about the CPU but never got a reply.


----------



## changboy

xarot said:


> Thanks, I already asked him another question some time ago about the CPU but never got a reply.


Ask him in German hehehehe 😂


----------



## changboy

*MSI RTX 3090 Suprim X*


----------



## Lord of meat

Cpgeek said:


> When using the 3090 Suprim X bios on the Gaming X Trio has it been reasonably stable? Do all of your display outputs work? (I've heard people were having issues with the asus and evga bios disabling one of the display outputs). and while in the end it doesn't really matter, but does the RGB still work? (i'm probably going to get a waterblock for my card anyway, but it'd still be nice to know)
> Thank you very much!


Sorry for the late reply, I work shifts.
If you use the Suprim BIOS, the RGB works and the display outputs work.
I swap between the Suprim and the EVGA 500W BIOS.

Everything works fine in both besides the RGB on the EVGA 500W. You can always set it to the color you want first and then flash the EVGA BIOS if you really want to draw 500W.

If you are getting a water block I would go with the EVGA BIOS. Make sure you have a good PSU too.
BTW I'm waiting for the Icewolf 2 for the 3090 to come out, hopefully soon.


----------



## Thanh Nguyen

Does anyone have an extra Kingpin they'd want to sell so I can have some fun? Thanks.


----------



## changboy

Thanh Nguyen said:


> Does anyone have an extra Kingpin they'd want to sell so I can have some fun? Thanks.


How much would you pay for it? I mean a brand new one that's never been opened.


----------



## motivman

Today was my lucky day guys. I was able to secure a Kingpin for $150 above MSRP (on the way to me), and a Strix from my local Microcenter (got lucky)... anyways, do you guys think this Strix is worth a waterblock? This is what I achieved on air. If not, I will be throwing it on eBay or returning it to Microcenter... lol. This was on the 520W EVGA BIOS... this thing runs hot as hell on air.


----------



## changboy

motivman said:


> Today was my lucky day guys. I was able to secure a kingpin for $150 above MSRP (on the way to me), and a Strix from my local microcenter (got lucky)... anyways, do you guys think this Strix is worth a waterblock? this is what I achieved on air, If not, I will be throwing this on ebay or return to microcenter... lol. This was on 520W evga bios... this thing runs hot as hell on air.


What's happening with your Strix card? 3.7W max power draw on the third 8-pin???


----------



## motivman

changboy said:


> What's happening with your Strix card? 3.7W max power draw on the third 8-pin???


520W EVGA bios is what happened, lol.


----------



## J7SC

I think I keep my Strix on the stock bios for the foreseeable future - according to GPUz, it already pulls 495W+ while staying under 67 C on full blast (27 C idle, ambient 19 C). Once I w-cool it, I _might_ fool around with the XOC bios, but for now, I'm really happy with this sample as is...


----------



## motivman

J7SC said:


> I think I keep my Strix on the stock bios for the foreseeable future - according to GPUz, it already pulls 495W+ while staying under 67 C on full blast (27 C idle, ambient 19 C). Once I w-cool it, I _might_ fool around with the XOC bios, but for now, I'm really happy with this sample as is...


what is your max PR score on air?


----------



## J7SC

motivman said:


> what is my max PR score on air?


I don't know, you tell me...or do you mean my card ? New Win 10 pro install, and I'm still downloading Steam as we speak which has my 3DM...currently adding Cyberpunk 2077 (only 40 GB to go...)


----------



## motivman

J7SC said:


> I don't know, you tell me...or do you mean my card ? New Win 10 pro install, and I'm still downloading Steam as we speak which has my 3DM...currently adding Cyberpunk 2077 (only 40 GB to go...)


Just tryna determine if this Strix I got is worth keeping.


----------



## J7SC

motivman said:


> Just tryna determine if this Strix I got is worth keeping.


I've only done a couple of quick Superposition 4K runs (yesterday, really just to make sure the card worked after unpacking it) and was up to 2145 MHz, then 2160 MHz for the second 4K Superposition run, with more headroom to go...once I finish this build, I'll do the usual battery of tests.

I can only talk about the Strix sample I got...I am really impressed with it so far right out of the box


----------



## SoldierRBT

motivman said:


> Today was my lucky day guys. I was able to secure a kingpin for $150 above MSRP (on the way to me), and a Strix from my local microcenter (got lucky)... anyways, do you guys think this Strix is worth a waterblock? this is what I achieved on air, If not, I will be throwing this on ebay or return to microcenter... lol. This was on 520W evga bios... this thing runs hot as hell on air.


Memory OC looks very good, but Core OC is hard to tell since it was running at 83°C max. You could try running ATITool to check the silicon. The advantage of the Strix is that there is already a waterblock available. The Kingpin could be better, but there's no waterblock available for it at the moment.


----------



## Sheyster

motivman said:


> This was on 520W evga bios... this thing runs hot as hell on air.


The middle fan is stuck at 66% max with the 520W KPE BIOS on the Strix, which is probably why it ran hot for you. This isn't a problem with the FTW3 500W BIOS.


----------



## J7SC

I usually use MSI AB for OC'ing etc. - is there any advantage to PrecX on Ampere if it is NOT an EVGA card?


----------



## Sheyster

J7SC said:


> I usually use MSI AB for OC'ing etc. - is there any advantage to PrecX on Ampere if it is NOT an EVGA card?


None whatsoever. Continue to use AB, it is far superior in every conceivable way.


----------



## J7SC

Sheyster said:


> None whatsoever. Continue to use AB, it is far superior in every conceivable way.


Thanks


----------



## sultanofswing

Think I'm gonna get rid of the FTW3 and try my luck with a Strix. I originally wanted to get a KPE since I came from a KPE 2080 Ti to the 3090 FTW3, but I'm gonna pass.
This FTW3 is a certified dud. Power balancing is all screwed up, and even using the XC3 BIOS it struggles to make it over 2100 MHz.
It's all a crap shoot; I could get a dud Strix also.


----------



## motivman

J7SC said:


> I don't know, you tell me...or do you mean my card ? New Win 10 pro install, and I'm still downloading Steam as we speak which has my 3DM...currently adding Cyberpunk 2077 (only 40 GB to go...)


sorry, typo


----------



## changboy

What's happening with Hyper Matrix? 

Did you get your Kingpin card?


----------



## GQNerd

motivman said:


> Just tryna determine if this Strix I got is worth keeping.


thought Ref cards are BEASTS? lmaoo.. I digress.

On a serious, more helpful note, try re-mounting/repasting... 83°C is ridiculous, especially if you were on max fans.
Mine had a terrible mount from the factory, but still never passed 75°C.
Kryonaut and a remount will go a long way, but a waterblock will let you know its true potential...

Also, keep it until you receive the 2nd hand KP cause YMMV from what the EVGA forums say.


----------



## J7SC

sultanofswing said:


> Think I'm gonna get rid of the FTW3 and try my luck with a Strix. I originally wanted to get a KPE since I came from a KPE 2080 Ti to the 3090 FTW3, but I'm gonna pass.
> This FTW3 is a certified dud. Power balancing is all screwed up, and even using the XC3 BIOS it struggles to make it over 2100 MHz.
> It's all a *crap shoot*; I could get a dud Strix also.


...re. crap shoot, I can only think of Forrest Gump's line 'Life is like a box of chocolates. You never know what you're gonna get'. I was getting a bit worried, as my last GPU purchase (2x 2080 Ti, still in use in my main system) were also really good clockers, so I wasn't sure what to expect with the 3090 Strix...then again, the 5960X in the setup below isn't the world's greatest OC CPU, but it does have a very good IMC (DDR4 3400 tight) in quad-channel, and that's really more important for 4K.

Anyway, I still haven't crashed the Strix a single time as I'm ramping up this sample...some more room on GPU and VRAM left. I also got in a quick round of Cyberpunk 2077 on this secondary system, wow. As mentioned before, as quiet and temp-controlled this Strix 3090 is on air, almost 500W is a lot on air, so time for a w-block soon...


----------



## marti69

Can anyone show me the points for the shunt mod on the MSI 3090 Trio please? And how many mOhm should the resistors be?


----------



## OC2000

motivman said:


> Today was my lucky day guys. I was able to secure a kingpin for $150 above MSRP (on the way to me), and a Strix from my local microcenter (got lucky)... anyways, do you guys think this Strix is worth a waterblock? this is what I achieved on air, If not, I will be throwing this on ebay or return to microcenter... lol. This was on 520W evga bios... this thing runs hot as hell on air.


My Strix on air topped out at 2130 MHz too. My best PR score with a Ryzen 3950X at the time was 14500. Adding a water block got it running 3DMark benchmarks at around 2235 MHz, and others at around 2250 MHz. I still can't pass 15100 on PR though, but that will be down to Ryzen performing worse than Intel in PR. My RAM overclock went down from +1200 to +1000, so I think I need to redo the mount to get better mem overclocking.


----------



## Rhadamanthys

Reasonable Trio drops seem to be rare over here. Need to widen my options. What's the latest view on the reference cards? @motivman pointed out that they would fare quite well, however, I'm not good with soldering and won't dare shunt modding them. Do they work with the 500W bios anyway?


----------



## motivman

Rhadamanthys said:


> Reasonable Trio drops seem to be rare over here. Need to widen my options. What's the latest view on the reference cards? @motivman pointed out that they would fare quite well, however, I'm not good with soldering and won't dare shunt modding them. Do they work with the 500W bios anyway?


Reference cards are good, as long as you watercool. No need to shunt mod them anymore; as long as you install your waterblock correctly, you should be able to run the XOC BIOS on them daily. Just remember to limit power in Afterburner to 70%.


----------



## motivman

Miguelios said:


> thought Ref cards are BEASTS? lmaoo.. I digress.
> 
> On a serious, more helpful note, try re-mount/repasting... 83c is ridiculous especially if you were on max fans.
> Mine had a terrible mount from the factory, but still never passed 75c.
> Kryonaut and remount will go a long way, but waterblock will let you know it's true potential..
> 
> Also, keep it until you receive the 2nd hand KP cause YMMV from what the EVGA forums say.


Yup, they are beasts... it will be hard to let go of the current one I have; it's truly a special card. However, the overclocker in me couldn't resist when I walked into my local Microcenter and they had one (a Strix) in stock... I just had to buy it and play with it. We shall see what this baby can do; just ordered the Phanteks waterblock from Newegg, should be here Friday.


----------



## Rhadamanthys

motivman said:


> reference cards are good, as long as you watercool. No need to shunt mod them anymore, as long as you install your waterblock correctly, should be able to run the XOC bios daily on them, just remember to limit power in afterburner to 70%.


Cheers! Yep, water cooling I will do. What ref card do you have?

Also, why set AB PL to 70 %? If I run the 500W bios, wouldn't that restrict me to 350W?


----------



## motivman

Rhadamanthys said:


> Cheers! Yep, water cooling I will do. What ref card do you have?
> 
> Also, why set AB PL to 70 %? If I run the 500W bios, wouldn't that restrict me to 350W?


PNY reference on an EKWB block, replaced all the EK garbage pads with Fujipoly Extreme; the card is shunted with 5 mOhm resistors. Most likely selling it because I got the KP and Strix; I think I will end up keeping the Kingpin and selling the Strix and this reference... If you are interested in buying, send me a PM. I would rather this baby end up in the hands of another OCN member... it's a great card, currently number 49 in the PR HOF.

unless the card is shunted, 500W bios will give you less power. Below is what the card pulls with different bios...

reference PNY bios = around 500W max
evga 520W bios = around 550W max
XOC bios with power capped at 70% = over 750W in TSE GT2...

lol.


----------



## Rhadamanthys

motivman said:


> PNY reference on an EKWB block, replaced all the EK garbage pads with Fujipoly Extreme; the card is shunted with 5 mOhm resistors. Most likely selling it because I got the KP and Strix; I think I will end up keeping the Kingpin and selling the Strix and this reference... If you are interested in buying, send me a PM. I would rather this baby end up in the hands of another OCN member... it's a great card, currently number 49 in the PR HOF.
> 
> unless the card is shunted, 500W bios will give you less power. Below is what the card pulls with different bios...
> 
> reference PNY bios = around 500W max
> evga 520W bios = around 550W max
> XOC bios with power capped at 70% = over 750W in TSE GT2...


Thanks for the offer, but I'm across the pond unfortunately. Sigh, so no 500W on a 2x8 pin without shunting?

Maybe I can find someone in town offering their soldering skills in exchange for a few quid. If I had a few throwaway cards, I might just practice myself and eventually get around to doing it on a 3090.


----------



## motivman

Rhadamanthys said:


> Thanks for the offer, but I'm across the pond unfortunately. Sigh, so no 500W on a 2x8 pin without shunting?
> 
> Maybe I can find someone in town who is offering his solder skills in exchange for a few quid. If I had a few throwaway cards, I might just practice myself and eventually get around to do it on a 3090.


If you run the 1000W BIOS, you can get more than 500W on 2x8-pin without shunting. Are you in the UK?


----------



## Rhadamanthys

motivman said:


> If you run the 1000W BIOS, you can get more than 500W on 2x8-pin without shunting. Are you in the UK?


What makes the 1000W bios give more power on an unshunted 2x8 pin card than say the 500W?

I'm in Berlin btw.


----------



## motivman

Rhadamanthys said:


> What makes the 1000W bios give more power on an unshunted 2x8 pin card than say the 500W?
> 
> I'm in Berlin btw.


All power limits are removed with the 1000W BIOS, so your 8-pins can pull as much power as they need. With the 500W BIOS, you are limited to 165-175W max per 8-pin. On the 1000W BIOS, each 8-pin can pull upward of 250W on an unshunted card, and over 335W on a shunted card.
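As a sanity check on those caps, the totals work out like this. The per-pin numbers are the ones quoted in this thread, not official specs; the slot figure is the usual ~66W available from the 12V pins of a 75W PCIe slot:

```python
# Rough per-rail ceilings discussed in this thread (watts).
PCIE_SLOT = 66             # PCIe slot is 75W total, ~66W of it on 12V
PER_8PIN_500W_BIOS = 175   # upper end of the 165-175W cap quoted above
PER_8PIN_1000W_BIOS = 250  # unshunted card, limits effectively removed
PER_8PIN_SHUNTED = 335     # shunted card on the 1000W BIOS

def board_power(n_8pins: int, per_pin: float, slot: float = PCIE_SLOT) -> float:
    """Approximate total board power as slot budget plus connector draw."""
    return slot + n_8pins * per_pin

# A 2x8-pin card on the 500W BIOS tops out well short of 500W:
print(board_power(2, PER_8PIN_500W_BIOS))   # 416
# The 1000W BIOS lets the same two connectors clear 500W unshunted:
print(board_power(2, PER_8PIN_1000W_BIOS))  # 566
```

Which is why a 2x8-pin reference card needs the unlocked BIOS (or a shunt mod) to get past the ~500W mark.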


----------



## Rhadamanthys

motivman said:


> All power limits are removed with the 1000W BIOS, so your 8-pins can pull as much power as they need. With the 500W BIOS, you are limited to 165-175W max per 8-pin. On the 1000W BIOS, each 8-pin can pull upward of 250W on an unshunted card, and over 335W on a shunted card.


Gotcha, thanks. I remember seeing reports about worse performance and mem oc (or was it mem power draw) with the 1000W bios, though, so I'm gonna do some more research before looking at them 2x8 pins.


----------



## motivman

Rhadamanthys said:


> Gotcha, thanks. I remember seeing reports about worse performance and mem oc (or was it mem power draw) with the 1000W bios, though, so I'm gonna do some more research before looking at them 2x8 pins.


The 1000W BIOS performs worse than the 520W BIOS on my reference card. Also, memory runs at full speed all the time. I do not recommend the 1000W BIOS for 24/7.


----------



## changboy

This is with my crappy Evga FTW3 Ultra :


----------



## long2905

des2k... said:


> These should have thermal pads, need cooling. Not sure if it matters since he's bellow 400w. Temps are good because low wattage and AIO only deals with core heat
> View attachment 2476523


Sorry for digging up an old post. I'm going to take my card apart again to repaste and put in proper thermal pads.

May I ask why you think these parts need to be cooled when neither the stock air configuration nor Alphacool's watercooling guideline includes them? I remember other posts further down saying some pads are only there for stabilization, with no thermal benefit.

Do you see a tangible benefit to padding these highlighted components?


----------



## des2k...

long2905 said:


> sorry for digging up an old post. im going to take apart my card again to repaste and put in proper thermal pad.
> 
> May i ask why do you think these parts need to be cooled when both stock air configuration and guideline for watercool from alphacool does not include them? i remember other posts further down saying some pads are only there for stabilization with no thermal benefit.
> 
> Do you see tangible benefit padding these highlighted components?


On the right side, it's the choke for that memory phase.
On the left is the PEX chip for PCIe.


----------



## Thanh Nguyen

Should I keep or sell this FTW 3?


----------



## changboy

Thanh Nguyen said:


> Should I keep or sell this FTW 3?
> 
> View attachment 2476938


Lol, I looked at the wrong test; you ran 1080p Extreme, and your score is a bit better than mine lol. We ran the 4K Optimized test.
But you can sell your card if you want, since it's yours hehehe.

Also, you ran that test because your CPU performs well at 1080p; what's your 4K Optimized score, dude?


----------



## Falkentyne

The two small shunt resistors (1206) are in parallel with two of the larger shunts (2512). They are for the NVVDD and MSVDD rails. They must be modded to bypass the 550W barrier.


----------



## RainRedPurple

Is the Zotac Trinity still a bad card? I've heard there's a BIOS update for increasing the power limit but haven't confirmed it yet. Also, it's the only card in my country that has 5 years of warranty; everyone else only has 3, aside from Gigabyte which has 4. To be honest though, the warranty isn't really something I'm super interested in, as I'm gonna change video card every 2 years, but the 5 years from Zotac does potentially give it better resale value. If these are the prices of the 3090 in my country (they're the only ones I could find, the rest are out of stock), which one would you guys prefer?

$1,711 - Zotac Trinity
$1,732 - PNY XLR8
$1,857 - MSI Ventus 3X
$1,857 - Palit Gaming Pro
$1,972 - MSI Gaming Trio X
$2,045 - Galax 
$2,088 - Suprim X


----------



## changboy

RainRedPurple said:


> Is the Zotac Trinity still a bad card? I've heard there's a BIOS update for increasing the power limit but haven't confirmed it yet. Also, it's the only card in my country that has 5 years of warranty; everyone else only has 3, aside from Gigabyte which has 4. To be honest though, the warranty isn't really something I'm super interested in, as I'm gonna change video card every 2 years, but the 5 years from Zotac does potentially give it better resale value. If these are the prices of the 3090 in my country (they're the only ones I could find, the rest are out of stock), which one would you guys prefer?
> 
> $1,711 - Zotac Trinity
> $1,732 - PNY XLR8
> $1,857 - MSI Ventus 3X
> $1,857 - Palit Gaming Pro
> $1,972 - MSI Gaming Trio X
> $2,045 - Galax
> $2,088 - Suprim X


Is this in USD ?


----------



## Thanh Nguyen

changboy said:


> Lol, I looked at the wrong test; you ran 1080p Extreme, and your score is a bit better than mine lol. We ran the 4K Optimized test.
> But you can sell your card if you want, since it's yours hehehe.
> 
> Also, you ran that test because your CPU performs well at 1080p; what's your 4K Optimized score, dude?


1080p extreme is more demanding than 4k optimized.


----------



## RainRedPurple

changboy said:


> Is this in USD ?


Yup, I've converted them from my prices in the Philippines.


----------



## changboy

Thanh Nguyen said:


> 1080p extreme is more demanding than 4k optimized.


Then what's your score in the 4K Optimized bench? I want to see how much higher than me you got.

I did that test with my CPU at 4.9 GHz, but I can run it at 5.1 GHz; I don't know if that would increase the final score.


----------



## changboy

RainRedPurple said:


> Yup, I've converted them from my prices in the Philippines.


Prices seem high compared to the USA, but wouldn't it be cheaper for you to buy from Japan or China since you're in the Philippines?


----------



## RainRedPurple

changboy said:


> Prices seem high compared to the USA, but wouldn't it be cheaper for you to buy from Japan or China since you're in the Philippines?


I've considered this but I'll have a few challenges up ahead:
1. Foreign transaction fee - Let's say I purchase something for $1500; I'll have to pay an extra $50 for converting my PHP currency to a foreign currency
2. Shipping fee - By my calculations, a duties paid shipping (so I don't have to worry about import tax) would cost me around $100-$150
3. Warranty - The major concern I have, only Gigabyte services an international card in my country. EVGA claims to have an "international warranty" but I still have to ship it to Taiwan which would cost $200 back and forth in the unfortunate event my gpu craps out.

Basically I have to add an extra $150-200 for anything I want to import here plus I'm forfeiting warranty and quite possibly damaging my item's resalability as it won't be covered with any warranty.


----------



## changboy

Don't you have a big PC parts store in Manila? Also, someone who builds crazy PCs may be in your area, but I'm not sure. This guy:



https://www.youtube.com/c/DeclassifiedSystems/featured


----------



## RainRedPurple

changboy said:


> Don't you have a big PC parts store in Manila? Also, someone who builds crazy PCs might be in your area, but I'm not sure. This guy:
> 
> 
> 
> https://www.youtube.com/c/DeclassifiedSystems/featured


Unfortunately Manila is pretty far from my area, and I'm not really into custom builds since I want to build it myself. For now I just need a 3090 recommendation from the price list I posted.


----------



## J7SC

changboy said:


> Lol, I looked at the wrong test: you ran 1080p Extreme, so your score is a bit better than mine. We ran the 4K Optimized test.
> But you can sell your card if you want, since it's yours hehehe.
> 
> Also, you ran that test because your CPU performs well at 1080p. What's your 4K Optimized score, dude?


What clocks, per the GPU-Z tabs you had open, for your test? Also, I agree that Superposition 4K (and 8K) is a nice setup test, and good for checking clock consistency via the graphs when increasing speeds. For now, the NV panel is still set to 'optimal power, quality' etc. on mine until I find max stable GPU and VRAM clocks; full benching starts after the system config is complete. I'm just about to download Microsoft Flight Sim 2020 onto this new system, all 100GB+ of it, with patches



RainRedPurple said:


> Is the Zotac Trinity still a bad card? I've heard there's a BIOS update for increasing the power limit but haven't confirmed it yet. Also, it's the only card in my country with 5 years of warranty; everyone else only has 3, aside from Gigabyte, which has 4. To be honest, the warranty isn't really something I'm super interested in, as I'm going to change video cards every 2 years, but the 5 years from the Zotac does potentially give it better resale value. If these are the prices of the 3090 in my country (they're the only ones I could find; the rest are out of stock), which one would you guys prefer?
> 
> $1,711 - Zotac Trinity
> $1,732 - PNY XLR8
> $1,857 - MSI Ventus 3X
> $1,857 - Palit Gaming Pro
> $1,972 - MSI Gaming Trio X
> $2,045 - Galax
> $2,088 - Suprim X


Price for my 3090 Strix came in at US $1,869 at today's exchange rate (no tariff here). And as changboy pointed out, 'Declassified Systems' in the Philippines does some incredible custom builds and every once in a while mentions local suppliers in his YT vids; they might also sell online if you're further from Manila


----------



## pat182

I'm seeing the Galax HOF benches; it rips hard. Have any of you with an Asus card got a Galax BIOS? Does the 3rd pin not show power, like on the EVGA BIOS? I'd like to change my Strix 520W flash to a BIOS where I can see the power draw


----------



## J7SC

pat182 said:


> I'm seeing the Galax HOF benches; it rips hard. Have any of you with an Asus card got a Galax BIOS? Does the 3rd pin not show power, like on the EVGA BIOS? I'd like to change my Strix 520W flash to a BIOS where I can see the power draw


...I saw a few of the Galax 3090 HoF (OCL) benches at HWBot earlier - no Galax imported here, and HoF OCLs are unobtanium anyhow, but still nice to look at (the PCB, for now...)



Spoiler


----------



## mirkendargen

pat182 said:


> I'm seeing the Galax HOF benches; it rips hard. Have any of you with an Asus card got a Galax BIOS? Does the 3rd pin not show power, like on the EVGA BIOS? I'd like to change my Strix 520W flash to a BIOS where I can see the power draw


According to the Chinese site, the HOF BIOS maxes out at 500W, so it could be a slight downgrade (but might show the 3rd 8-pin properly).

Also, from what I understand, the entire HOF line this time uses the same PCB; the only differences are whether it has that giant screen and the base clock. So it could plausibly be only almost impossible, instead of impossible, to get a non-OC Lab one that's still great.


----------



## pat182

mirkendargen said:


> According to the Chinese site, the HOF BIOS maxes out at 500W, so it could be a slight downgrade (but might show the 3rd 8-pin properly).
> 
> Also, from what I understand, the entire HOF line this time uses the same PCB; the only differences are whether it has that giant screen and the base clock. So it could plausibly be only almost impossible, instead of impossible, to get a non-OC Lab one that's still great.


I'd take the 20W hit if my card could show the right values.
I guess I'll wait and see when the BIOS gets uploaded.


----------



## WayWayUp

How does one go about buying a Galax HOF 3090?
Where are they even sold?

I'm surprised how badly it's spanking the Kingpin cards. Well, maybe not surprised, as EVGA downgraded everything this generation for max profit.


----------



## mouacyk

One does not simply... go about buying a Galax HOF 3090


----------



## mirkendargen

WayWayUp said:


> How does one go about buying a Galax HOF 3090?
> Where are they even sold?
> 
> I'm surprised how badly it's spanking the Kingpin cards. Well, maybe not surprised, as EVGA downgraded everything this generation for max profit.


They list them on their own store, not sure if they sell through any 3rd parties or not.


----------



## Falkentyne

Use these to stack on the small shunt resistors (the two in parallel with the BIG shunt resistors; the big shunts should be right next to the small ones) to remove the MSVDD/NVVDD power limits on your shunt-modded cards.

https://www.mouser.com/ProductDetail/71-WSLP12063L000FEA/ for replacing (desolder the pads, resolder; use high-quality 3M high-temp polyimide tape, not cheap Chinese tape).

https://www.mouser.com/ProductDetail/71-WSLP12065L000FEA/ for stacking (easier for beginners; use high-QUALITY Kapton tape. 3M high-temp polyimide tape is best).
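For reference, stacking works because two shunts in parallel halve the sense resistance, so the controller reads a smaller voltage drop and allows proportionally more real power. A minimal sketch of the math, assuming the stock shunts on these rails are 5 mOhm (an assumption; check your own card):

```python
# Parallel-resistor math behind shunt stacking (resistance values are assumptions).
def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock = 0.005                      # assumed 5 mOhm stock shunt
stacked = parallel(stock, 0.005)   # 5 mOhm part stacked on top -> 2.5 mOhm
scale = stacked / stock            # sensed voltage drop shrinks by this factor
print(stacked, scale)              # 0.0025 0.5 -> card reports half its real power
```

In other words, a 1:1 stack makes the card's power telemetry read roughly half of the true draw, which is exactly why the firmware power limit stops binding.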


----------



## J7SC

mirkendargen said:


> They list them on their own store, not sure if they sell through any 3rd parties or not.


...With the 2080 Ti HoF, there were two versions (HoF and HoF OCL); not sure whether they're doing that again with the 3090. The OCL would technically only be available to a small group of known XOCers at, e.g., HWBot (along with 'special' OC software), though some popped up later on eBay. The PCBs of the two versions looked 'similar', but the binning obviously was not.

AFAIK, Galax generally does sell in the USA (though not Canada). In some other markets the brand is called KFA2 (same thing)


----------



## changboy

J7SC: When I bench, I usually set my OC in MSI Afterburner to +175 core and +1450 memory. In some tests I can push the core higher, near +200, or sometimes lower, at +150; it varies with the test. Port Royal: +175 or +180.

If I set +1500 memory (the max), I sometimes see artifacts, so I drop my memory by 50MHz and all is fine.

- For gaming I use +120 core and +800 memory so I never get a crash in-game. I've played for hours like this.
- But I use a 2x 8-pin BIOS on my 3x 8-pin card: Port Royal gains +400 on the score, and games are more stable with higher FPS. With the original BIOS I can't game higher than +115 on the core.


----------



## Thanh Nguyen

Falkentyne said:


> Use these to stack on the small shunt resistors (the two in parallel with the BIG shunt resistors; the big shunts should be right next to the small ones) to remove the MSVDD/NVVDD power limits on your shunt-modded cards.
> 
> https://www.mouser.com/ProductDetail/71-WSLP12063L000FEA/ for replacing (desolder the pads, resolder; use high-quality 3M high-temp polyimide tape, not cheap Chinese tape).
> 
> https://www.mouser.com/ProductDetail/71-WSLP12065L000FEA/ for stacking (easier for beginners; use high-QUALITY Kapton tape. 3M high-temp polyimide tape is best).


Does this work with the 1000W BIOS, without shunting the big ones?


----------



## J7SC

changboy said:


> J7SC: When I bench, I usually set my OC in MSI Afterburner to +175 core and +1450 memory. In some tests I can push the core higher, near +200, or sometimes lower, at +150; it varies with the test. Port Royal: +175 or +180.
> 
> If I set +1500 memory (the max), I sometimes see artifacts, so I drop my memory by 50MHz and all is fine.
> 
> - For gaming I use +120 core and +800 memory so I never get a crash in-game. I've played for hours like this.
> - But I use a 2x 8-pin BIOS on my 3x 8-pin card: Port Royal gains +400 on the score, and games are more stable with higher FPS. With the original BIOS I can't game higher than +115 on the core.


Thanks - but what are the 'base numbers' you add those MSI AB OC offsets to? I'm also asking regarding my temp planning for water-block vs. air comparisons (I think yours is water-cooled, per your earlier screenshot)?


----------



## Falkentyne

Thanh Nguyen said:


> Does this work with the 1000W BIOS, without shunting the big ones?


Not needed on the 1kW BIOS.


----------



## changboy

J7SC said:


> Thanks - but what are the 'base numbers' you add those MSI AB OC offsets to? I'm also asking regarding my temp planning for water-block vs. air comparisons (I think yours is water-cooled, per your earlier screenshot)?


- As I said, I have 2 presets: 1) Gaming: +120 core and +800 memory;
2) Bench: +175 core and +1450 memory.
- I'm always on preset 1 (Gaming) unless I want to bench; then I click preset 2 and manage my core clock per benchmark.

- Yes, my card is under water with an EK block. When I play any game, my boost clock in GPU-Z is 2085-2100MHz and doesn't drop even after 2 hours, though I can feel the heat in my room after I've played 4 hours. My card's temp never goes higher than 49°C, usually 44-45°C; I think when it hits 49°C it's my D5 pump slowing down a bit, so I see a peak. But my BIOS pulls more power than the normal BIOS: even on the EK block, with the normal BIOS my boost clock drops to 1990MHz, same as on air, so this is BIOS-related.

- The thing is, EVGA doesn't want you to get the same power as the Kingpin, because I score 15,074 in Port Royal and the Kingpin isn't much better than that on liquid cooling, maybe 15,300.
- If I put my PC on the balcony at -10°C I could score a lot higher, maybe 15,600; I don't know, because I've never tried it.


----------



## J7SC

...Thanks; with the 2080 Ti, THW had a nice little comparison chart of MHz vs. temp (below), and I'm trying to come up with something similar for 3090s. Once the FlightSim 2020 download is complete, I get to mess with chipset and MEI drivers. I tried to update them yesterday evening, but there's some sort of conflict I need to track down. The Rampage V Extreme and 5960X are quite old but should still work fine in Win 10 Pro; my main system (TR 2950X + 2080 Tis) was quicker to set up for Win 10, though...



Spoiler: EDIT: Galax 3090 HoF specs and pic links



Galax Launches Record-Chasing RTX 3090 HOF GPUs

GALAX GeForce RTX™ 3090 HOF Limited Edition - GeForce RTX™ 3090 Series - GeForce RTX™ 30 Series - Graphics Card


----------



## changboy

*J7SC*
On the Asus website you should get all the updates for your mobo; after that, Windows 10 will update what it needs.
And sometimes there are other Asus programs that help keep things updated on your system.

For me, the Armoury Crate program tells me if I need updates on my X299 mobo, and I frequently check the Asus website for new BIOSes and such. The latest BIOS on my mobo gives me less performance, but other things can work better after updating, like compatibility issues. Sometimes other things can't be updated if you're not on the latest mobo BIOS.

The audio chipset is also important to avoid problems with games. I also open Windows Update and click on optional updates to see what's there.

Sometimes you just need to unplug your PC from the wall for 10 minutes to get a real reset, and things will work and update after that.


----------



## J7SC

changboy said:


> *J7SC*
> On the Asus website you should get all the updates for your mobo; after that, Windows 10 will update what it needs.
> And sometimes there are other Asus programs that help keep things updated on your system.
> 
> For me, the Armoury Crate program tells me if I need updates on my X299 mobo, and I frequently check the Asus website for new BIOSes and such. The latest BIOS on my mobo gives me less performance, but other things can work better after updating, like compatibility issues. Sometimes other things can't be updated if you're not on the latest mobo BIOS.
> 
> The audio chipset is also important to avoid problems with games. I also open Windows Update and click on optional updates to see what's there.
> 
> Sometimes you just need to unplug your PC from the wall for 10 minutes to get a real reset, and things will work and update after that.


...Thanks - yeah, I've run into this before in other builds and usually manage to sort it all out; I just have to wait now while those huge downloads are running, in case of reboot requirements. BTW, I added an edit to my previous post with the Galax 3090 HoF story, pics, and model chart! Not a 'single slot' card


----------



## changboy

Did you just buy Flight Simulator? I saw a special price at Christmas but didn't buy it. Where did you buy it?


----------



## J7SC

changboy said:


> Did you just buy Flight Simulator? I saw a special price at Christmas but didn't buy it. Where did you buy it?


I bought it (premium edition) when it first came out last fall (at the Microsoft Store). But I am going to run it on two separate, non-concurrent systems...didn't have to pay again (same with Cyberpunk 2077 on Steam) but still have to install it from scratch on the 2nd machine...I could have perhaps used drive-ghosting but didn't want to mix up identities etc.


----------



## mouacyk

J7SC said:


> I bought it (premium edition) when it first came out last fall (at the Microsoft Store). But I am going to run it on two separate, non-concurrent systems...didn't have to pay again (same with Cyberpunk 2077 on Steam) but still have to install it from scratch on the 2nd machine...I could have perhaps used drive-ghosting but didn't want to mix up identities etc.


It's better with a joystick, isn't it? Not much fun trying to fly with numpad keys.


----------



## geriatricpollywog

mouacyk said:


> It's better with a joystick, isn't it? Not much fun trying to fly with numpad keys.


The aircraft are always pitching up and down and getting knocked around by the wind. A joystick is tiring because I'm constantly moving my entire arm to make corrections; this is much easier with an Xbox controller. A yoke would be perfect, but they're expensive and I'm not trying to be a pilot or anything.


----------



## motivman

Miguelios said:


> Thought ref cards are BEASTS? lmaoo.. I digress.
> 
> On a serious, more helpful note, try a re-mount/repaste... 83c is ridiculous, especially if you were on max fans.
> Mine had a terrible mount from the factory, but still never passed 75c.
> Kryonaut and a remount will go a long way, but a waterblock will show you its true potential.
> 
> Also, keep it until you receive the 2nd-hand KP, because YMMV from what the EVGA forums say.


Do you remember how high your card scored in PR on air? I don't want to take off the cooler till I'm sure this is a good bin... The memory seems great, though; it can reach +1400 but starts artifacting, I guess because it gets too hot. Currently the memory is stable at +1250 in CP2077.


----------



## J7SC

mouacyk said:


> It's better with a joystick, isn't it? Not much fun trying to fly with numpad keys.





0451 said:


> The aircraft are always pitching up and down and getting knocked around by the wind. A joystick is tiring because I'm constantly moving my entire arm to make corrections; this is much easier with an Xbox controller. A yoke would be perfect, but they're expensive and I'm not trying to be a pilot or anything.


...Probably personal preference, but I do prefer my simple Logitech 3d Xtr joystick for FlightSim 2020. I also use it for car racing games, and I wish Cyberpunk 2077 would recognize joysticks for driving as well; I keep wrecking cars I just stole and getting into unwanted fights with pedestrians I just ran over... _"Sorry, I'm just here for the free-roaming"_


----------



## muzzer

Inno3D GeForce RTX 3090 iChill Frostbite 24GB GDDR6X PCI-Express Graphics Card


C3090-246XX-1880FB, Boost Clock: 1755MHz, Memory 24GB 19500MHz GDDR6X, Cuda Cores: 10496, VR Ready, PhysX/CUDA Enabled, NVIDIA Ampere, 8nm Process, Real-Time Ray Tracing, H20 Cooled, 3 Years Warranty.




www.overclockers.co.uk




And I've got a stable 2145MHz core and 10005MHz memory, with temps of 58°C GPU and 68°C memory. Is this any good?


----------



## Thanh Nguyen

changboy said:


> Lol, I looked at the wrong test: you ran 1080p Extreme, so your score is a bit better than mine. We ran the 4K Optimized test.
> But you can sell your card if you want, since it's yours hehehe.
> 
> Also, you ran that test because your CPU performs well at 1080p. What's your 4K Optimized score, dude?


Gaming setting +135 core + 1250 mem.


----------



## HyperMatrix

WayWayUp said:


> How does one go about buying a Galax HOF 3090?
> Where are they even sold?
> 
> I'm surprised how badly it's spanking the Kingpin cards. Well, maybe not surprised, as EVGA downgraded everything this generation for max profit.


How are they "spanking" the KingPin cards? The highest card I see on the PR HOF is scoring 0.47% higher than the KingPin card under LN2, although the HOF cards all appear to be listing incorrect clock speeds (or rather, not the real thermspy clock). Either that, or the scoring discrepancy is due to the low memory clocks they all seem to share. Either way, the best HOF card atm is just 0.47% faster than the best KingPin card.

And we can't compare general performance under water or air, since the KingPin doesn't come with air cooling and there are no blocks for it yet. I haven't read the last 30 or so pages, so there may be something I missed. Are the air-cooled HOF cards outperforming the AIO KingPin cards? Just trying to get the details behind the "spanking" comment.


----------



## gfunkernaught

DOOOLY said:


> Just a heads up to Msi Trio 3080/3090 owners. Ek shipped my block on Friday I will have it on Wednesday.


The site says it will ship out 2/12. Maybe they got a few in and shipped them out. I'm actually strongly considering splurging; it's in my cart right now.


----------



## gfunkernaught

I saw some interesting potential on my Trio. I flashed the Suprim 450W BIOS, set the PL to 107% (the BIOS max), and played Cold War with Reflex+Boost (max performance); while the GPU was at 50°C, the clock boosted to 1980MHz and then settled at 1965MHz, and that's without using any offsets. I'm thinking if I put a good block on and use liquid metal (with all the precautions), I could easily pass 2100MHz. Who knows.


----------



## Fire2

Is there a modded KP 520W BIOS with a decent middle fan speed out? It seems to be only 2000RPM, not 3000/3350.

Maybe the Galax 1000W? Does that work on 3x 8-pin cards?


----------



## Chamidorix

Falkentyne said:


> The two small shunt (1206) resistors are parallel to two of the larger shunts (2512). They are for the NVVDD and MSVDD rails. They must be modded to bypass the 550W barrier.


Ah, did you finally get a good multimeter to test the traces yourself? I messed around with it a bit myself, but my cheapo meter wasn't up to the task. This is what I've suspected for a while; nice to see some evidence since olget's initial reports months ago (and it's also basically confirmed by the shunt jumpers on the Galax HOF).

Are they parallel on 12V? Or are they after the VRM step-down?


----------



## Falkentyne

Chamidorix said:


> Ah, did you finally get a good multimeter to test the traces yourself? I messed around with it a bit, but my cheapo meter wasn't up to the task. This is what I've suspected for a while; nice to see some evidence since the initial reports months ago.


I didn't. I'm not good enough for that. Someone found a design sheet and tested it.


----------



## Falkentyne

Chamidorix said:


> Ah, did you finally get a good multimeter to test the traces yourself? I messed around with it a bit myself, but my cheapo meter wasn't up to the task. This is what I've suspected for a while; nice to see some evidence since olget's initial reports months ago (and it's also basically confirmed by the shunt jumpers on the Galax HOF).
> 
> Are they parallel on 12V? Or are they after the VRM step-down?


I have absolutely no idea. I can't understand spec sheets like this. I have zero engineering knowledge. I just learned how to solder a shunt three days ago...


----------



## geriatricpollywog

HyperMatrix said:


> How are they "spanking" the KingPin cards? The highest card I see on the PR HOF is scoring 0.47% higher than the KingPin card under LN2, although the HOF cards all appear to be listing incorrect clock speeds (or rather, not the real thermspy clock). Either that, or the scoring discrepancy is due to the low memory clocks they all seem to share. Either way, the best HOF card atm is just 0.47% faster than the best KingPin card.
> 
> And we can't compare general performance under water or air, since the KingPin doesn't come with air cooling and there are no blocks for it yet. I haven't read the last 30 or so pages, so there may be something I missed. Are the air-cooled HOF cards outperforming the AIO KingPin cards? Just trying to get the details behind the "spanking" comment.


The HOFs are hitting just under 2900MHz under LN2 as reported by 3DMark, which is insane and on par with the 6900XT. I'd also be curious to know the actual clock speed. Galax must have binned the hell out of those GPUs. In all likelihood, you'll need to be a member of the CPP to obtain one.


----------



## changboy

HyperMatrix said:


> How are they "spanking" the KingPin cards? The highest card I see on the PR HOF is scoring 0.47% higher than the KingPin card under LN2, although the HOF cards all appear to be listing incorrect clock speeds (or rather, not the real thermspy clock). Either that, or the scoring discrepancy is due to the low memory clocks they all seem to share. Either way, the best HOF card atm is just 0.47% faster than the best KingPin card.
> 
> And we can't compare general performance under water or air, since the KingPin doesn't come with air cooling and there are no blocks for it yet. I haven't read the last 30 or so pages, so there may be something I missed. Are the air-cooled HOF cards outperforming the AIO KingPin cards? Just trying to get the details behind the "spanking" comment.


You were in jail hehehe. Did you get your Kingpin, or are you still on the FTW3 Hydro Copper?


----------



## HyperMatrix

Question regarding manual NVVDD. I can set NVVDD manually in the Classified tool to something that can run 2130-2145MHz for hours without issue. However, that same voltage causes instability, and I've had the card/system hang on me in a low-power state (desktop/web browsing). I've combated that by setting a boost lock to keep it pinned, because too much voltage at too low a clock = bad.

Now I'm wondering about potential solutions. The only thing I can think of is to set a semi-boost state with SMI: other than just capping the max boost clock with SMI, which I already do, or locking the boost clock at 100%, I could also set the minimum boost clock to maybe 800 or 1000MHz, to see if that's a high enough clock to not be a problem at the NVVDD I've set. But I thought I'd check whether there's a simpler solution I'm missing.

To summarize: the card crashes when it clocks down at idle with a manually set high NVVDD. I'm wondering if there's a solution to this other than boost-clock locking or raising the SMI minimum clock speed.
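For anyone wanting to try the minimum-clock idea, nvidia-smi's clock-locking switch accepts a min,max pair, so the floor and ceiling can be set in one command. A small sketch that just builds the commands (the MHz values are illustrative examples, not tested recommendations):

```python
# Build the nvidia-smi commands for pinning a clock floor under idle.
# Run the printed commands as admin; the MHz values below are examples only.
MIN_MHZ = 1000   # hypothetical floor, high enough for the fixed NVVDD to be safe
MAX_MHZ = 2130   # the daily max the card holds at that voltage

lock_cmd = f"nvidia-smi --lock-gpu-clocks={MIN_MHZ},{MAX_MHZ}"
reset_cmd = "nvidia-smi --reset-gpu-clocks"   # revert to default clock behavior
print(lock_cmd)
print(reset_cmd)
```

The idea is that with a floor in place, the card never drops into the deep idle state where the fixed voltage causes the hang; whether 1000MHz is a safe floor for a given NVVDD is card-specific.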


----------



## changboy

Thanh Nguyen said:


> Gaming setting +135 core + 1250 mem.


Nice card and score !


----------



## HyperMatrix

0451 said:


> The HOFs are hitting just under 2900MHz under LN2 as reported by 3DMark, which is insane and on par with the 6900XT. I'd also be curious to know the actual clock speed. Galax must have binned the hell out of those GPUs. In all likelihood, you'll need to be a member of the CPP to obtain one.


I see the clocks, but I'm not seeing the relative performance gain. If a card clocked to 1500MHz and scored higher, I'd be impressed. If those clocks are actually real, then yeah, that'd be incredibly impressive. But it's surprising that despite such high clocks, it's barely scoring higher than the KingPin.



changboy said:


> You were in jail hehehe. Did you get your Kingpin, or are you still on the FTW3 Hydro Copper?


Still on the KingPin, still waiting on a block. A friend took the FTW3 Hydro. I'm willing to buy a block from anyone, Optimus or EVGA, whoever releases one first. Just putting the final touches on an entirely new rig tonight. It's just too bad about the KingPin AIO cooler.


----------



## Falkentyne

Chamidorix said:


> Ah, did you finally get a good multimeter to test the traces yourself? I messed around with it a bit myself, but my cheapo meter wasn't up to the task. This is what I've suspected for a while; nice to see some evidence since olget's initial reports months ago (and it's also basically confirmed by the shunt jumpers on the Galax HOF).
> 
> Are they parallel on 12V? Or are they after the VRM step-down?


Sent a PM.


----------



## J7SC

0451 said:


> The HOFs are hitting just under 2900mhz under LN2 as reported by 3DMark which is insane and on-par with the 6900XT. I’d also be curious to know the actual clockspeed. Galax must have binned the hell out of those GPUs. In all likelihood, you will need to be a member of the CPP to obtain one.


...Well, the per-region availability of the top bin (table below is from THW) is certainly 'interesting', though as OGS in Greece already showed, top HWBot XOCers will get access outside China. And who knows how many 'China-only' HoF OCLs will show up on eBay, though it will be tough to figure out which SKU is which, IMO


----------



## Chamidorix

I've been looking more into modding the existing AIO for the Kingpin, especially as a contingency if the Optimus block is practically unavailable. Basically just plug the tubes into a Mora3; you really just need more than a 360 rad when upping volts. LM on the die -> AIO copper block, plus fans on the VRM heatsink, plus a RAM block on the backplate seems to really be all you need on the card; you just need more rad area for dissipation.

It's just a shame I think it voids the warranty if you do that...


----------



## J7SC

...FlightSim 2020 loaded just fine; the first brief flight on the Strix system worked great. Now I just have to finish the Win 10 setup after downloading 170+ gigs from the net, plus transfer some more files directly from the primary system (top left in pic), complete the build, and clean up the mess...


----------



## GQNerd

motivman said:


> Do you remember how high your card scored in PR on air? I don't want to take off the cooler till I'm sure this is a good bin... The memory seems great, though; it can reach +1400 but starts artifacting, I guess because it gets too hot. Currently the memory is stable at +1250 in CP2077.


Before the remount, I was hitting 14.3-14.6k
@ 2130-2145, +1300 mem, 75°C

After the remount and repaste, but before the shunt mod + block, it was 14.6-14.9k @ 2160 (avg 2145), mem +1300, and 68°C

Just crack that b*tch open and fully remove the warranty sticker on one of the screws. If you decide to return it, I don't think they'll notice the sticker, and there's no other seal to worry about..


----------



## motivman

motivman said:


> Just beat NZZ's Strix score with my 2x 8-pin card... lol. Reference cards are BEASTS...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 672 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Hey @Nizzen, is your Strix shunt-modded currently, or are you running the XOC BIOS daily?


----------



## motivman

Miguelios said:


> Before the remount, I was hitting 14.3-14.6k
> @ 2130-2145, +1300 mem, 75°C
> 
> After the remount and repaste, but before the shunt mod + block, it was 14.6-14.9k @ 2160 (avg 2145), mem +1300, and 68°C
> 
> Just crack that b*tch open and fully remove the warranty sticker on one of the screws. If you decide to return it, I don't think they'll notice the sticker, and there's no other seal to worry about..


Thanks, bro. I think I'll just repaste it and run it on air (in my wife's rig); the stock cooler is almost too beautiful to replace with a waterblock... lol


----------



## PhatSV6

bmgjet said:


> Can you list these many other GPUs pulling over 450W over 2x 8-pins at stock? I can't think of any.
> You're not going to get a BIOS over 390W on a 2-plug card, since that's already 6% over the PCI Express spec of 66W for the 12V slot and 150W per 12V 8-pin.
> The FE got around that by making their own plug and not advertising it as PCI-Express approved in the PCB stamps.


I'll list 3 for you: the Vega 64, the R9 295X2, and the Titan Z, plus the Founders 3090, which we all know uses a 12-pin that's no better than 2x 8-pin, given your PSU can handle the load over them.
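The spec figures quoted above can be sanity-checked with a quick 12V budget for a 2x 8-pin card (numbers as cited in the quote, attributed there to the PCIe spec):

```python
# 12V power budget for a 2x 8-pin card per the quoted spec figures.
slot_12v = 66                    # watts available from the slot's 12V rail
eight_pin = 150                  # watts per 8-pin PCIe connector
budget = slot_12v + 2 * eight_pin
overshoot = 390 / budget - 1     # how far a 390W BIOS exceeds the spec budget
print(budget, round(overshoot * 100, 1))   # 366 6.6 -> roughly 6% over, as quoted
```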


----------



## PhatSV6

motivman said:


> shunt mod the card, flash 520W bios, get a nice waterblock = profit, up to 600W power draw and performance comparable to 3X8 pin cards. No need for 1000W bios and all the downsides that come with it. Unless someone comes up with a way to edit and sign the bios, no mfr is releasing more than a 390W bios for any 2 pin card, that is just a pipe dream TBH.


What's wrong with dreaming, lol? If no one had pipe dreams, we'd still be living in the Stone Age, lol.


----------



## PhatSV6

FYI:
No shunt mod or repaste done.
Core +130, Mem +1100, core temp max 65°C, mem Tjunction max 88°C.
Custom fan curve.
Stock OC TUF BIOS.
Please excuse the camera lines.


----------



## geriatricpollywog

PhatSV6 said:


> I'll list 3 for you: the Vega 64, the R9 295X2, and the Titan Z, plus the Founders 3090, which we all know uses a 12-pin that's no better than 2x 8-pin, given your PSU can handle the load over them.


I melted a CableMod wire on a Vega 64 with the liquid BIOS and a modded PowerPlay table; the card was pulling about 450W. I wouldn't go above 400W on 2x 8-pin for safety reasons. There are people with mobility problems living in my building, so the consequences of a fire would be devastating.


----------



## tyw214

Hey guys, I have a Strix 3090 OC.

I was wondering if I could hit a stable 2000MHz with relative ease, without much BIOS fiddling or extra cooling? I use MSI Afterburner; using OC Scanner, it only seems to OC by about +80MHz on average...

BTW, I'm a relative noob at OC. XD


----------



## Falkentyne

PhatSV6 said:


> What's wrong with dreaming, lol? If no one had pipe dreams, we'd still be living in the Stone Age, lol.


We're working on a way to exceed the 600W limit.
Dante'afk is going to get his 1206 shunts before me.
So I'm practicing soldering and desoldering on my dead Vega 64 PCB. I just soldered a random 5mOhm shunt onto two empty solder pads and got a clean connection.
I also successfully re-flowed a super tiny green resistor and got new high-quality solder on the edges, so it's "prepared" to be stacked (if I had something to stack on it, which I don't). Unfortunately, I was unable to DESOLDER it. WAY too tiny.

I think I'm ready to battle these small 1206 (MSVDD and NVVDD) shunts now and stack 5mOhm 1206 shunts on top (after careful application of 3M polyimide high-temp tape all around!) without destroying my RTX 3090 FE.
(These pictures were taken from the shunt mod thread.)

YAY, MORE SHUNTS WITH DEPRESSED LOWER SILVER EDGES!!! AWESOME!!! NVIDIA HATES ALL OF US!!!!


----------



## motivman

Miguelios said:


> Before the remount, I was hitting 14.3-14.6k
> @ 2130-2145, +1300 mem, 75°C
> 
> After the remount and repaste, but before the shunt mod + block, it was 14.6-14.9k @ 2160 (avg 2145), mem +1300, and 68°C
> 
> Just crack that b*tch open and fully remove the warranty sticker on one of the screws. If you decide to return it, I don't think they'll notice the sticker, and there's no other seal to worry about..


It looks like my sample is doing the same (I haven't remounted yet...). This is on the stock BIOS; I already hit 14.9k on the 520W BIOS.


----------



## J7SC

So far I've found actually-available-now water blocks for the 3090 Strix from Phanteks, EK, and Bykski. Of those three, which do you folks think is the best combination of fit, quality, and performance?


----------



## motivman

J7SC said:


> So far, I found actually-available-now water blocks for the 3090 Strix from Phanteks, EK and Bykski..of those three, which one do you folks think is the best combo of fit, quality and performance ?


I went with the Phanteks block for my Strix, ordered from Newegg. Too many reports of the EK block having excessive coil whine with the backplate on... plus I'm personally tired of EK blocks and want to try something else.


----------



## motivman

Falkentyne said:


> We're working on a way to exceed the 600W limit.
> Dante'afk is going to get his 1206 shunts before me.
> So I'm practicing soldering and desoldering stuff on my dead Vega 64 PCB. I just soldered a random 5 mOhm shunt to two empty solder pads. Got a clean connection.
> I also re-flowed a super tiny green resistor successfully and got new high quality solder on the edges so it's "prepared" to be stacked (if I had something to stack on it, which I don't). Unfortunately, I was unable to DESOLDER it. WAY too tiny.
> 
> I think I'm ready to battle these small 1206 (MSVDD, and the other NVVDD) shunts now and stack 5 mOhm 1206 shunts on (after careful application of 3M polyimide high-temp tape all around!!) without destroying my RTX 3090 FE.
> (these pictures were taken from the shunt mod thread).
> 
> View attachment 2477032
> 
> 
> View attachment 2477033
> 
> 
> YAY, MORE SHUNTS WITH DEPRESSED LOWER SILVER EDGES!!! AWESOME!!! NVIDIA HATES ALL OF US!!!!


I do not consider myself an expert in soldering, but I have done this many times. The easiest way to stack, in my opinion, is to dab a little solder on the contact points of your shunt, lay the shunt carefully on the resistor, then use a heat gun to heat and flow the solder. Literally looks factory new, IMHO.


----------



## changboy

I'm also interested to know what temperature each block can do and which one is the best... nobody knows this. I haven't found any test on it, and from one user to another many things can affect the final result: ambient temp, paste, and the OC of the card itself.
Usually we see reviews on YouTube or around the web, but I've found nothing so far about water-cooling performance on the RTX 3090.
We just know the Optimus block is really good, but there's no data anywhere.


----------



## motivman

....


----------



## mirkendargen

J7SC said:


> So far, I found actually-available-now water blocks for the 3090 Strix from Phanteks, EK and Bykski..of those three, which one do you folks think is the best combo of fit, quality and performance ?


If price doesn't really matter, Phanteks. Bykski will be cheaper with nearly the same performance, and fit should be fine assuming they've fixed the sweet plexi header problem from the very first ones some of us got.

There's a lot of FUD around the "performance" of blocks that seems to come from mediocre paste jobs, dies that are extremely unlevel, people who don't know the difference between "ambient" and their coolant temperature when quoting block performance, or some combination of all of these. Multiple people have reported great performance with every block (12-16C delta), and multiple people have reported **** performance with every block (20+C delta).
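The coolant-vs-ambient point can be put as a quick sketch (hypothetical numbers and helper names, my own illustration): judge a block by its die-to-coolant delta, and ideally normalize by power so cards on different BIOSes compare fairly.

```python
# Minimal sketch: the block/mount is only responsible for the rise
# above COOLANT temperature, not above room ambient.

def die_to_coolant_delta(die_c: float, coolant_c: float) -> float:
    """Temperature rise across the paste, die, and block cold plate."""
    return die_c - coolant_c

def thermal_resistance(die_c: float, coolant_c: float, power_w: float) -> float:
    """Delta normalized by power draw, in C/W, for fair comparisons
    between cards running different power limits."""
    return (die_c - coolant_c) / power_w

# Same 60 C die reading, very different block performance:
print(die_to_coolant_delta(60.0, 46.0))       # 14.0 C -> good mount
print(die_to_coolant_delta(60.0, 38.0))       # 22.0 C -> suspect mount/paste
print(thermal_resistance(60.0, 46.0, 400.0))  # 0.035 C/W at 400 W
```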


----------



## Falkentyne

motivman said:


> Strix owners that have shunt modded, do you still hit power limit in TSE GT2 after your shunt? Tryna figure out if this issue only affects the reference and FE cards....


Can someone with a Strix who is shuntmodded please run TS extreme GT2 and regular TS graphics test 2?
Have GPU-Z open and see if you get a "double" throttle bar in regular Timespy graphics test 2, and a lot of throttle bars in Timespy Extreme graphics test (1 and 2).


----------



## J7SC

mirkendargen said:


> If price doesn't really matter, Phanteks. Bykski will be cheaper with nearly the same performance, and fit should be fine assuming they've fixed the sweet plexi header problem from the very first ones some of us got.
> 
> There's a lot of fud around the "performance" of blocks that seems to be coming from mediocre paste jobs, dies that are extremely unlevel, people that don't know the difference between "ambient" and the temperature of their coolant to give an accurate measure of block performance or some combination of all of these. Multiple people have reported great performance with every block (12-16C delta), multiple people have reported **** performance with every block (20+C delta).


Thanks. My 2080 Tis (Aorus Xtr) are the WB factory-block version and their OEM _might_ be Bykski (not sure). I know from various older (i.e. 1080 Ti) GPU block tests that the differences in review graphs are often dramatized just by the y-axis scaling. I'm more concerned about getting the right kind of thermal strips in the package, etc. Phanteks does look a bit more substantial, though.

BTW, is Asus still doing that 'warranty void' thing when the stock cooler has been removed ? I know there are some laws governing that in the US recently, but I'm wondering in general (well, including Canada). Seems to me a well-cooled 3090 lasts longer


----------



## SoldierRBT

dante`afk said:


> I think my FE is pretty good
> 
> View attachment 2476648
> 
> 
> i'm getting the evc2 the following days.


We have twins!


----------



## mouacyk

^^ Insanity...


----------



## KCDC

J7SC said:


> BTW, is Asus still doing that 'warranty void' thing when the stock cooler has been removed ? I know there are some laws governing that in the US recently, but I'm wondering in general (well, including Canada). Seems to me a well-cooled 3090 lasts longer


I was curious as well, and it looks like Asus calmed down a while back, as long as you don't eff anything up yourself.


----------



## SaccoSVD

Hi guys.

I wonder if anyone knows if there is a modded BIOS with a higher power limit for the 3090 Eagle


----------



## geriatricpollywog

HyperMatrix said:


> I see the clocks but I'm not seeing the relative performance gain. If a card clocked to 1500MHz and scored higher, I'd be impressed.  If those clocks are actually real, then yeah that'd be incredibly impressive. But it's surprising that despite such high clocks, it's barely scoring any higher than the KingPin.


What you are seeing could be due to lower memory clock speeds. The top Kingpin result, #5 Luumi, is 90 MHz behind in core speed and 50 MHz ahead in memory speed compared to the #4 result, which is a Galax. In fact, the top 4 single-card results are all Galax. Results 5-8 are all Kingpin. The memory speed for the #1 to #4 Galax cards ranges from 1350 to 1379 MHz. The memory speeds for the #5 to #8 Kingpin cards are 1425 to 1457 MHz (my KP does 1381). According to Buildzoid's PCB breakdown, the Galax HOF VMEM is basically reference. It's possible the Galax cards are not as capable as the Kingpins at memory overclocking; 1350-1379 is pretty average.

Regardless of the memory clock disadvantage, the top 4 Galax cards are all within 81 points of each other. I am not going to crunch the data, but there is a big difference in average score between 1-4 and 5-8. These cards are all fast, but Galax vs Kingpin is looking like Mercedes vs Williams. I hate to say this as a Kingpin owner, but it's not even competitive.


----------



## mirkendargen

J7SC said:


> Thanks - my 2080 Tis (Aorus xtr) are the WB factor block version and their OEM _might_ be Bykski (not sure). I know from various older (ie 1080 Ti) GPU block tests that the differences in the review graphs are often dramatized just by the y-axis scaling. I'm more concerned about getting the right kind of thermal strips in the package, etc. Phanteks does look a bit more substantial, though.
> 
> BTW, is Asus still doing that 'warranty void' thing when the stock cooler has been removed ? I know there are some laws governing that in the US recently, but I'm wondering in general (well, including Canada). Seems to me a well-cooled 3090 lasts longer


Yeah I've read before that the OEM blocks for Gigabyte are Bykski or Barrow too. I had an MSI 2080Ti with their EK factory block and it was god awful due to making poor die contact, it was always like 25-30C delta no matter what I did (and looking at the Newegg reviews showed this was a common problem). It was still better than air (and silent of course) but not by much... That along with the Threadripper EK fails have soured me on them as a company and I go with Bykski. Even if their stuff is 5-10% worse (which it honestly doesn't seem to be) as long as there isn't an increased risk of catastrophic failure, paying half the price is worth it to me.

There is a "warranty void" sticker on one of the main four screws that mounts the block to the die, but no idea whether Asus is actually being strict or not.


----------



## GQNerd

motivman said:


> looks like my sample is doing the same (haven't remounted yet...), This is on the stock bios, I already hit 14.9k on the 520W bios. After your shunt mod, were you hitting power limit in TSE GT2? I do not like running the 520W bios because pin #3 does not show power draw....when I get this phanteks waterblock in, I would rather shunt mod and stick with the stock strix bios. lets see if this can beat my "BEAST" reference card... lol


Two things I forgot.
The scores I listed for air were on a Strix Z490-E board with a 10900K @ 5.1 and 4266 RAM; I never ran it on air on the Apex @ 5.4 with 4500-4600 RAM... wonder what it could've done.

Second, I hit 14,668 on the stock 480W BIOS.

After the shunt, I never hit the power limit in TSE or PR (520W BIOS).


----------



## PhatSV6

SaccoSVD said:


> Hi guys.
> 
> I wonder if anyone knows if there is a modded BIOS with a higher power limit for the 3090 Eagle


Currently I don't believe so, except for the XOC BIOS, but you'll have a memory disadvantage with that, and the numbers reported in the OS will differ from what the card is actually pulling, since that BIOS is for a 3x 8-pin card.

I believe someone on the OC3D forums hit up Galax support and they gave him a beta BIOS for their 2x 8-pin cards at 390W, so contacting support for other brands may net you a beta BIOS above that. It would be ideal to see a 400W, or even better a 420W, BIOS for 2x 8-pin cards, especially ones that can handle it like the Asus TUF, with no trade-offs.


----------



## 86Jarrod

Falkentyne said:


> Can someone with a Strix who is shuntmodded please run TS extreme GT2 and regular TS graphics test 2?
> Have GPU-Z open and see if you get a "double" throttle bar in regular Timespy graphics test 2, and a lot of throttle bars in Timespy Extreme graphics test (1 and 2).


Just ran them for you.


----------



## Falkentyne

86Jarrod said:


> Just ran them for you.
> View attachment 2477046
> View attachment 2477047


Wow,
So that guy was right.
The Asus Strix DOES have 3 mOhm shunts on the NVVDD and MSVDD power rails, while all other cards have 5 mOhm (small 1206-size) shunts...
That's assuming this isn't vBIOS related, given the Strix has a 648W GPU Chip Power limit and who knows what the limits are for the voltage rails (the limits for MSVDD and NVVDD are not shown anywhere, nor are they reported in HWiNFO64; just some 'summed' rails appear as NVVDD1 and NVVDD2).

No "double" throttle bar on TS GT2, and no throttling on TS Extreme.

So that means solder-stacking the 5 mOhm 1206 shunts with 5 mOhm (or desoldering and replacing with 3 mOhm) should remove the hidden throttle on Founders Edition cards, as well as all reference models (again, assuming this isn't from vBIOS limits).
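The two options mentioned (stacking an extra 5 mOhm vs replacing with a single 3 mOhm) can be compared with a quick sketch; the 350 W figure is just an example baseline and the helper is my own, not anything from this thread.

```python
# Compare shunt-mod options against a controller that assumes 5 mOhm.
# The effective power limit rises by R_stock / R_new.

STOCK = 5.0  # mOhm, the shunt value the controller assumes

def effective_limit_scale(new_mohm: float) -> float:
    """Factor by which the real power limit increases after the mod."""
    return STOCK / new_mohm

stacked = (5.0 * 5.0) / (5.0 + 5.0)    # two 5 mOhm in parallel -> 2.5 mOhm
print(effective_limit_scale(stacked))  # 2.0x: a 350 W cap becomes ~700 W
print(effective_limit_scale(3.0))      # ~1.67x: a 350 W cap becomes ~583 W
```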


----------



## SaccoSVD

PhatSV6 said:


> currently I don't believe so except for the xoc bios but you will have memory disadvantage with that and you will see numbers being reported in os different to what the card is actually pulling as the bios is for a 3 x 8pin card.
> 
> I know that someone on oc3d forums I believe hit up Galax support and they gave him a beta bios for their 2 x 8pin cards at 390w so maybe hitting up support for different brands may net you a beta bios above that. it would be ideal to see a 400w or even better a 420w bios for 2 x 8pin cards especially ones that can handle it like the Asus TUF with no trade offs


I just sent them a question on their tech support website: "Are you guys gonna release a new BIOS with a higher power limit?"


----------



## motivman

86Jarrod said:


> Just ran a stress loop timespy extreme on 1000w xoc bios. Phanteks block on my strix and saw a max delta over loop temp of 11c.


what thermal paste did you use?


----------



## 86Jarrod

motivman said:


> what thermal paste did you use?


conductonaut


----------



## motivman

86Jarrod said:


> conductonaut


figured as much, that's a crazy good delta.


----------



## J7SC

mirkendargen said:


> Yeah I've read before that the OEM blocks for Gigabyte are Bykski or Barrow too. I had an MSI 2080Ti with their EK factory block and it was god awful due to making poor die contact, it was always like 25-30C delta no matter what I did (and looking at the Newegg reviews showed this was a common problem). It was still better than air (and silent of course) but not by much... That along with the Threadripper EK fails have soured me on them as a company and I go with Bykski. Even if their stuff is 5-10% worse (which it honestly doesn't seem to be) as long as there isn't an increased risk of catastrophic failure, paying half the price is worth it to me.
> 
> There is a "warranty void" sticker on one of the main four screws that mounts the block to the die, but no idea whether Asus is actually being strict or not.


...The 2080 Ti factory OEM blocks have been completely trouble-free for over two years now...same with the Heatkiller Pro TR CPU block. I did have some minor issues in the past with EK on custom GPU blocks, but the rest of their stuff I have such as CPU blocks for other builds is ok. I would love to get a Watercool Heatkiller or Aquacomputer Kryographics block for the 3090 Strix, but Watercool doesn't make one yet and Aquacomputer is sold out ('60 days or longer').

As to Asus warranty and water-cooling, I might contact them and ask; after all, they already offer an EK-blocked 3090 (though non-Strix, 2x 8-pin)... either way, I might just get the Phanteks.


Spoiler: Phanteks


----------



## 86Jarrod

motivman said:


> figured as much, that's a crazy good delta.


For what it's worth I've seen around 13 to 14 delta now that its been heat cycled a bunch from 5c to 38c and had some higher voltages thrown at it.


----------



## J7SC

...Still working on the setup, as syncing some Microsoft apps w/o the cloud is a spot of trouble... but I tried out Cyberpunk 2077 and Flight Simulator 2020 for a good chunk, all ultra settings for both but on a smaller monitor (4K later when the build is finished). I haven't pushed the clocks higher than yesterday yet, but wanted to know real-world gaming performance... also, while the clocks look different below, the actual MSI AB settings were the same.

Cyberpunk 2077 w/ RTX Psycho is a power-mongering monster, while FS 2020 (DX11) is a walk in the park. I really like the 3090 Strix; even on air it seems very solid... the backplate does heat up with Cyberpunk 2077, though, even with a helper fan... seriously considering a GentleTyphoon I have for the backplate, if the noise is OK, as the system will ultimately end up about 8 feet away (next to a 55-inch IPS HDR). Water-cooling will also help, but the backplate needs something custom.

CP 2077 on the left, MS FlightSim2020 on the right


----------



## SolarBeaver

Hey guys, first time into water-cooling: is 60C an alright temp at full load with 26C ambient, for an EKWB block on a Strix?


----------



## motivman

86Jarrod said:


> For what it's worth I've seen around 13 to 14 delta now that its been heat cycled a bunch from 5c to 38c and had some higher voltages thrown at it.


you need to teach me how to use the evc2sx with the strix, about to order one also.


----------



## SoldierRBT

Best score my 3090 KPE can do with 49C avg temp.

I scored 15 728 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Bobbylee

SolarBeaver said:


> Hey guys, first time into water-cooling, is 60c alright temp with 26 ambient full load for EKWB block with strix?



Seems quite high; you might want to recheck your mount. When I run the 1000W BIOS I peak at 51C with similar ambient temps, with fluid temp around 35C. I'm also using EKWB.


----------



## motivman

SoldierRBT said:


> Best score my 3090 KPE can do with 49C avg temp.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 728 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Nice!


----------



## SolarBeaver

Bobbylee said:


> Seems quite high, you might want to recheck your mount. When I run 1000w bios I’ll peak at 51c with similar ambient temps with fluid temp Around 35c. I’m also using ekwb


Is that with max fan/pump speed or silent? With max fan/pump I get 54-55C on the 520W BIOS with a heavy overclock; I have a 420mm rad with just the GPU, btw. If that's not normal, I guess I should re-apply thermal paste; quite possibly I used too much. But draining the loop would be such a pain for a newbie like me.


----------



## motivman

SolarBeaver said:


> Is it with max fan/pump speed or silent? With max fan/pump I get 54-55c. 520w bios, heavy overclock, I have a 420mm rad with just a gpu btw. If it's not normal I guess I should re-apply thermal paste, quite possible I've used too much. But it would be such a pain to empty the loop for a newbie like me


This is why you should build a modular water-cooling setup with Koolance QDCs. I swap my hardware like it's nothing... lol.


----------



## HyperMatrix

0451 said:


> What you are seeing could be due to lower memory clock speeds. The top Kingpin result, #5 Luumi, is 90 mhz behind in core speed and 50 mhz ahead in memory speed compared to the #4 result, which is a Galax. In fact, the top 4 single card results are all Galax. Results 5-8 are all Kingpin. The memory speed for #1 to #4 Galax cards ranges from 1350 to 1379 mhz. The memory speeds for #5 to $8 Kingpin cards are 1425 to 1457 mhz (My KP does 1381). According to Buildzoid's PCB breakdown, Galax HOF VMEM is basically reference. It's possible the Galax cards are not as capable as the Kingpins at memory overclocking. 1350-1379 is pretty average.
> 
> Regardless of the memory clock disadvantage, the top 4 Galax cards are all within 81 points of each other. I am not going to crunch the data, but there is a big difference in average score between 1-4 and 5-8. These cards are all fast, but Galax vs Kingpin is looking like Mercedes vs Williams. I hate to say this as a Kingpin owner, but it's not even competitive.


I wouldn't feel too bad about it. These aren't exactly retail cards that you can buy; they're highly binned cards going only to select buyers. And despite their high core clocks, they're lagging behind in memory overclocking, meaning those high clocks don't really matter much.

LN2 numbers aside, I would actually be very interested to see how well these ultra-binned chips do under normal water cooling. I will give them props for doing far better binning than Vince does even for himself. And also props for getting over 3GHz to shut the AMD fanboys up. But this isn't a consumer card, so I don't feel too bad about it.


----------



## motivman

Falkentyne said:


> Can someone with a Strix who is shuntmodded please run TS extreme GT2 and regular TS graphics test 2?
> Have GPU-Z open and see if you get a "double" throttle bar in regular Timespy graphics test 2, and a lot of throttle bars in Timespy Extreme graphics test (1 and 2).


----------



## StullenAndi

I2C access on Founders Edition PCB.


----------



## edsontajra

Guys, I had the opportunity to get an FTW3 Ultra 3090 at a VERY good price; it's coming on Friday.

But I already have an Asus TUF 3090 OC.

What do you think of these two cards? Which one should I keep? I've seen several reports of FTW3 Ultras with problems.


----------



## stryker7314

edsontajra said:


> Guys, I had the opportunity to get an FTW3 ULTRA 3090 at a VERY good price, its coming at friday.
> 
> But I already have an ASUS TUF 3090 OC.
> 
> What do you think of these two card´s? Which one should I keep? I saw several cases of FTW3 Ultra with a problem.


SLi


----------



## truehighroller1

changboy said:


> Price seems high compared to the USA, but isn't it cheaper for you to buy from Japan or China since you're in the Philippines?


The Suprim X 3090 is now $2230 at the Micro Center by me. I paid $1753 the other month at the same store.

As a side note, I have a Bitspower water block on her now too, running the XOC BIOS. She runs 22C idle, 20C if I hit profile one in Afterburner, which I set to -400 memory so it downclocks. Max 51C running Cyberpunk for hours, overclocked to 2190 / +1000 at 1.1V. Could probably drop the voltage, but I've been busy playing.


----------



## Sheyster

Fire2 said:


> Is there a a modded kp520watt bios with decent middle fan speed out? Seems to be only 2000rpm not 3000/3350.


Nope.


----------



## J7SC

stryker7314 said:


> SLi


...why stop there ?


----------



## stryker7314

J7SC said:


> ...why stop there ?
> View attachment 2477093


Totally would but I like to custom make my water cooling system 😁


----------



## J7SC

stryker7314 said:


> Totally would but I like to custom make my water cooling system 😁


Agree on the loop ! Also, thank you for not asking '...but will it play Crysis' ?


----------



## Biscottoman

Guys, in your opinion, is it worth waiting for the EKWB active-backplate block for the Strix 3090, or should I go right now for a normal block (still have to choose which one performs best)? Would pairing it with the MP5WORKS let me achieve almost the same cooling performance? I'd like to avoid a RAM block drilled through the backplate, even if it performs better from a flow point of view. But since I'm going to overclock as high as I can, I'll need every degree I can get on the core, and especially the best VRAM temperature, to reach at least 1200/1250MHz, if the bin allows it too, of course.


----------



## long2905

des2k... said:


> Right side, it's the choke for that memory phase.
> Left is PEX for PCIE


The Alphacool reference block I have does not cover these 2 parts at all. Does that mean it's not a good block to pick?


----------



## motivman

long2905 said:


> the alphacool reference block i have does not cover these 2 parts at all. does that mean its not a good block to pick?


I originally had the Alphacool block, did not like it, and replaced it with the EK block. Performance on the core was similar with both, but EK cools more components and looks better, IMHO.


----------



## changboy

But why do people want to add water cooling on the backplate? I play for 2 or 3 hours, touch the backplate of my EK block, and it's barely warm.
If the memory were hot, it would be like the original backplate with air cooling, which felt nearly hot enough to burn my finger.
Is it because some say there is a problem with the memory on the back?


----------



## mirkendargen

changboy said:


> But why do people want to add water cooling on the backplate? I play for 2 or 3 hours, touch the backplate of my EK block, and it's barely warm.
> If the memory were hot, it would be like the original backplate with air cooling, which felt nearly hot enough to burn my finger.
> Is it because some say there is a problem with the memory on the back?


Your backplate being barely warm could be because you have poor contact with your memory, not because your memory isn't hot. Memory temps can be monitored now, just do that and decide for yourself if you think you need it or not. It throttles at 110C.


----------



## Alex24buc

Guys, a piece of advice please. Is it worth swapping my Palit Gaming Pro OC (used) for a new EVGA FTW3 Ultra and paying an extra 600 dollars? One more thing: the EVGA doesn't have any invoice or valid warranty, unlike my Palit, which has a 3-year warranty at the local reseller. Thanks in advance. I paid 1600 dollars in October for my Palit 3090.


----------



## edsontajra

Alex24buc said:


> Guys, a piece of advice please. Is it worth swapping my Palit Gaming Pro OC (used) for a new EVGA FTW3 Ultra and paying an extra 600 dollars? One more thing: the EVGA doesn't have any invoice or valid warranty, unlike my Palit, which has a 3-year warranty at the local reseller. Thanks in advance. I paid 1600 dollars in October for my Palit 3090.


Do not do this. See over here: 3090 FTW - Red Light of Death - EVGA Forums

Don't risk the swap without a warranty.


----------



## martinhal

Alex24buc said:


> Guys, a piece of advice please. Is it worth swapping my Palit Gaming Pro OC (used) for a new EVGA FTW3 Ultra and paying an extra 600 dollars? One more thing: the EVGA doesn't have any invoice or valid warranty, unlike my Palit, which has a 3-year warranty at the local reseller. Thanks in advance. I paid 1600 dollars in October for my Palit 3090.


Only if you're chasing benchmark scores. I would just hold onto the Palit. Without the warranty the EVGA is not a good buy; my guess is the owner is trying to get rid of a dud.


----------



## Falkentyne

Alex24buc said:


> Guys, a piece of advice please. Is it worth swapping my Palit Gaming Pro OC (used) for a new EVGA FTW3 Ultra and paying an extra 600 dollars? One more thing: the EVGA doesn't have any invoice or valid warranty, unlike my Palit, which has a 3-year warranty at the local reseller. Thanks in advance. I paid 1600 dollars in October for my Palit 3090.


Considering how many FTW3 3090s are dying (just go read the EVGA forums), I would keep your Palit, repaste and mod it, and solder-stack 5 mOhm shunts onto the 5 mOhm shunt resistors (make sure you do ALL of them) for a higher power limit. You will need the original pad sizes, and this may require pressure and contact testing with multiple sizes (to make sure the GPU core still makes contact with the heatsink after a pad change!) unless you can get the exact original specifications (maybe email Palit and ask?). Soldering with flux and stacking 5 mOhm is more reliable than MG842AR silver paint. Solder properly and you never have to worry about the contacts degrading.


----------



## Alex24buc

I have the 390W BIOS on the Palit and it is pretty solid. I don't want to get high scores in benchmarks, just gaming. But I thought with the EVGA I would get better temps and higher FPS. Thanks!


----------



## 86Jarrod

motivman said:


> you need to teach me how to use the evc2sx with the strix, about to order one also.


Yeah, I can definitely help a little. I've only played with it for a little more than an hour; I have yet to touch LLC or switching frequency. MSVDD is weird too, I had to mess with it a bit to get any higher clocks to run. It's going to take hours to figure out what my card likes lol. Should be fun though. The only issue is getting the damn controller to read without getting a bus-low error; I have to fully shut down my PC for hours and enter the EVC software immediately after boot. I'm sure there's an easier way but I haven't found it yet. 70 more points and I'd have the highest on-water Port Royal, I think.


----------



## martinhal

Alex24buc said:


> I have the 390w bios on Palit and it is pretty solid. In don`t want to get high scores in benchmarks, just gaming. But I thought with the evga I would get better temps and a higher fps. Thanks!


I have the same BIOS on my Palit and am happy in games. I doubt it would be more than a few FPS at most.


----------



## Alex24buc

I will keep my Palit; I don't want to take the risk considering the EVGA doesn't have a warranty.


----------



## DrunknFoo

Falkentyne said:


> Considering how many FTW3 3090's are dying (just go read the eVGA forums), I would keep your Palit, thermal paste and mod it (you will need the original pad sizes and this may require pressure and contact testing (to make sure the GPU core makes contact with the heatsink after a pad change!) with multiple sizes unless you can get the original exact specifications (maybe email Palit and ask?), and just solder stack some 5 mOhm shunts onto the 5 mOhm shunt resistors (make sure you do ALL of them) for a higher power limit. Soldering with flux and stacking 5 mOhms is more reliable than MG842AR silver paint. Solder properly and you never have to worry about the contacts degrading.



You can add my FTW3 to the list: 2 fuses blown (1 previously repaired and blew again) on power pins 1 and 2. No new mods, no changes to the shunts or power slider... Still trying to fix it (sigh)


----------



## des2k...

changboy said:


> But why do people want to add water cooling on the backplate? I play for 2 or 3 hours, touch the backplate of my EK block, and it's barely warm.
> If the memory were hot, it would be like the original backplate with air cooling, which felt nearly hot enough to burn my finger.
> Is it because some say there is a problem with the memory on the back?


It depends a lot on the memory controller load + memory OC.
If you really want to test the backplate, run phoenixminer.exe -bench for 45 mins; it will climb to 60C+. Control at 4K maxed is also close to this, at 50C+.

I have 3 fans + two heatsinks covering the memory (saved 2C on the mem) and it's about 64-66C with PhoenixMiner.


----------



## changboy

des2k... said:


> Depends alot on the mem controler load + mem oc.
> If you really want to test the backplate, run phoenisminer.exe -bench for 45mins it will climb to 60c+. Control 4k max is also close to this at 50c+.
> 
> I have 3fans + two heatsinks covering the mem(saved 2c on the mem) and it's about 64 -66c with phoenixminer.


How do you see the memory temperature? GPU-Z has a bug and shows 6554C lol


----------



## Thanh Nguyen

changboy said:


> How do you see the memory temperature? GPU-Z has a bug and shows 6554C lol


It's a 520W BIOS bug. I use the 1000W one and the temp shows normally. Hopefully my card's fuses won't blow; I run at least 20 minutes of Metro every day.


----------



## HyperMatrix

changboy said:


> But why do people want to add water cooling on the backplate? I play for 2 or 3 hours, touch the backplate of my EK block, and it's barely warm.
> If the memory were hot, it would be like the original backplate with air cooling, which felt nearly hot enough to burn my finger.
> Is it because some say there is a problem with the memory on the back?


Depends how hot your RAM gets. On the Kingpin with +1350 memory, temps would get close to 80C and crash. If you're not OC'ing the memory, then you're fine. But from memory performance testing I've done in a game like Horizon Zero Dawn, going from +500 on memory (20500MHz) to +1350 on memory (22000MHz), which is a 7.3% increase in memory bandwidth, resulted in about a 2.5-3% increase in FPS. Basically, in that situation every ~300 or so you add to the memory offset is another 1% FPS gain. Or, against a baseline of a 2205MHz GPU clock, every 300MHz you can add to the memory offset is roughly equivalent to a 22MHz increase in core clock speed.

So depending on your card, if cooling the back lets you go from +500MHz stable to +1500MHz stable, you're looking at roughly the same performance boost as going from 2205MHz to 2280MHz on your GPU. Does this apply to everyone, every card, and every block? Not at all. But if the card in question is affected this way, then backplate cooling ends up being one of the cheapest ways to increase performance on this card.
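The scaling described above can be sketched with rough numbers from the post; the 0.35 FPS-per-bandwidth ratio is my own fit to the quoted "~2.5-3% FPS for ~7.3% bandwidth" figure, so treat it as an assumption, not a measured constant.

```python
# Back-of-envelope: estimate FPS gain from a GDDR6X memory overclock,
# assuming FPS scales at a fixed fraction of the bandwidth gain.

def fps_gain_from_mem_oc(base_mhz: float, oc_mhz: float,
                         fps_per_bw: float = 0.35) -> float:
    """Fractional FPS gain for an effective memory clock increase.
    fps_per_bw is the assumed ratio of FPS gain to bandwidth gain."""
    bw_gain = oc_mhz / base_mhz - 1.0
    return bw_gain * fps_per_bw

gain = fps_gain_from_mem_oc(20500, 22000)  # +500 offset -> +1350 offset
print(f"{gain:.1%}")                       # roughly 2.6% FPS
```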


----------



## J7SC

mirkendargen said:


> Your backplate being barely warm could be because you have poor contact with your memory, not because your memory isn't hot. Memory temps can be monitored now, just do that and decide for yourself if you think you need it or not. It throttles at 110C.


Yeah, you want the backplate to accept and transfer the heat - and to get rid of it efficiently after. I have an open-case ('naked frame') build for the 3090 Strix, so I can feel the heat escape with the help of a fan, but the backplate is 'toasty' to the touch. Per below, memory junction temps are ok though; the highest I have seen so far (yesterday) is 73C in HWiNFO. Finally got a 4K monitor hooked up and tried Cyberpunk 2077 and Flight Simulator 2020 again. In 4K, FS2020 starts pulling the watts as well now, but stays a lot cooler (DX11) compared to Cyberpunk 2077 at 4K Ultra RTX (DX12).


----------



## des2k...

changboy said:


> How do you see the memory temperature? GPU-Z has a bug and shows 6554C lol


The latest version of HWiNFO64 shows the memory junction temperature (highest internal memory temp).


----------



## Nico67

J7SC said:


> ...still working on the setup, as syncing some Microsoft apps w/o the cloud is a spot of trouble...but tried out Cyberpunk 2077 and Flight Simulator 2020 for a good chunk...all ultra settings for both but on a smaller monitor (4K later when the build is finished). I haven't pushed the clocks higher yet from yesterday but wanted to know real world / gaming performance...also, while the clocks look different below, the actual MS AB settings were the same.
> 
> Cyberpunk 2077 w/ RTX Psycho is a power-mongering monster, while FS2020 (DX11) is a walk in the park. I really like the 3090 Strix; even on air it seems very solid...the backplate does heat up with Cyberpunk 2077, though, even with a helper fan...seriously considering a GentleTyphoon I have for the backplate, if the noise is ok, as the system will ultimately end up about 8 feet away (next to a 55 inch IPS HDR). W-cooling will also help, but the backplate needs something custom
> 
> CP 2077 on the left, MS FlightSim2020 on the right
> 
> View attachment 2477058


Curious as to why your Flight Sim GPU clocks vary so much when you have no perf limits and temp is flat? It's almost like there's a throttle limit that GPU-Z doesn't see.


----------



## Falkentyne

Thanh Nguyen said:


> It's the 520W BIOS bug. I use the 1000W one and the temp is normal. Hopefully my card's fuses won't blow. I run at least 20 minutes of Metro every day.


This is not a 520W BIOS bug. The same problem happens on Founders Edition cards too; on those you just can't see the 6500C temp. You just see a single "thermal throttle" blip in GPU-Z, and the flags in HWiNFO change to "Yes".
It doesn't appear on the memory junction temp. It's a sensor bug with something the driver is doing (it seems to happen when the vcore changes).
This bug first appeared in the first 457 driver branch (456.98 was not affected).


----------



## J7SC

Nico67 said:


> Curious as to why your Flight Sim GPU clocks vary so much when you have no perf limits and temp is flat? It's almost like there's a throttle limit that GPU-Z doesn't see.


Could be, but the numbers at 4K are now getting close to my FS2020 numbers on the 2080 Ti SLI-CFR system. I'll do a lot more testing when this second system with the 3090 Strix is finished and moved over to another 4K monitor. Also, FS2020 in this 2nd build hasn't built up any texture cache yet, so the GPU will be 'waiting' a lot more, even if FPS = 60 capped (max)


----------



## SolarBeaver

Guys, sorry for asking this again, but I couldn't find any definitive info myself.

Is it normal to have 54-55C at full load with max fans on the rads / max pump speed, on an EKWB block with a Strix 3090 @ 520W heavy OC? There's only the GPU in the loop, and the rad is a 420mm EKWB CE; ambient is 26-27C, and I don't have a temp sensor for the liquid.
When I use it in relatively silent mode (50% pump, 50% fans) it sits @ 61-62C, same ambient temps.

I would really appreciate it if anyone could share their results with waterblocks and what is normal to expect with this setup, cheers.


----------



## mouacyk

SolarBeaver said:


> Guys, sorry for asking this again, but I couldn't find any definitive info myself.
> 
> Is it normal to have 54-55C at full load with max fans on the rads / max pump speed, on an EKWB block with a Strix 3090 @ 520W heavy OC? There's only the GPU in the loop, and the rad is a 420mm EKWB CE; ambient is 26-27C, and I don't have a temp sensor for the liquid.
> When I use it in relatively silent mode (50% pump, 50% fans) it sits @ 61-62C, same ambient temps.
> 
> I would really appreciate it if anyone could share their results with waterblocks and what is normal to expect with this setup, cheers.


The whole internet is getting temps like that and they all seem to think it's normal. Mine is a 3080, but I discovered that my standoffs were not tightened fully from Bykski and I got a 19C drop after simply tightening them all up. Also when mounting, torque the screws slowly and completely.


----------



## HyperMatrix

mouacyk said:


> The whole internet is getting temps like that and they all seem to think it's normal. Mine is a 3080, but I discovered that my standoffs were not tightened fully from Bykski and I got a 19C drop after simply tightening them all up. Also when mounting, torque the screws slowly and completely.


Might be normal if you're not using liquid metal. My Kingpin using the AIO cooler it comes with does 480-500W full load at max of about 45-46C after hours of gaming and I'm not happy with this. This is with fans at around 50-60%. My previous cards on a block would be around 34-36C with 21-22C ambient. Liquid metal is key. So if you have a 420mm radiator and a proper D5 pump, and your card is on a block, and you're getting mid 50s with max fans, it means you need to do a few things:

1) re-paste or switch to liquid metal. it's a huge difference
2) get better fans. your fans may be at 100% but they also might suck both at airflow as well as static pressure. 
3) get better ambient temperatures.


----------



## mirkendargen

SolarBeaver said:


> Guys, sorry for asking this again, but I couldn't find any definitive info myself.
> 
> Is it normal to have 54-55C at full load with max fans on the rads / max pump speed, on an EKWB block with a Strix 3090 @ 520W heavy OC? There's only the GPU in the loop, and the rad is a 420mm EKWB CE; ambient is 26-27C, and I don't have a temp sensor for the liquid.
> When I use it in relatively silent mode (50% pump, 50% fans) it sits @ 61-62C, same ambient temps.
> 
> I would really appreciate it if anyone could share their results with waterblocks and what is normal to expect with this setup, cheers.


No one can tell you without knowing your coolant temp, and therefore the delta between your coolant and your GPU core. I don't understand why so many people leave this out, or talk about "ambient" temps instead. No one knows if you have 45C coolant because you're trying to run your whole system on a 240mm rad, or 3C coolant because you have a rad sitting in an ice water bucket. 55C at full load means VERY different things in those situations.

In my personal setup I have ~18-20C coolant, and 55C core would be god awful. Lots of people have ~35C coolant, 55C would be ok-but-not-great. Some people have even warmer coolant and 55C would be pretty good.
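The point here is that the useful number is the core-to-coolant delta, not the absolute core temperature. A minimal sketch, with rough cutoffs inferred from the figures in this post (not a spec):

```python
# Judge a waterblock mount by how far the GPU core sits above the coolant.
# The 15C/25C cutoffs below are assumptions loosely based on the post:
# ~18-20C coolant + 55C core (delta ~36C) is bad, ~35C coolant (delta
# ~20C) is ok-but-not-great, warmer coolant (delta ~10C) is fine.

def core_delta(core_c, coolant_c):
    """Delta between GPU core and loop coolant, in degrees C."""
    return core_c - coolant_c

def verdict(delta_c):
    """Rough classification of a core-to-coolant delta."""
    if delta_c <= 15:
        return "good"
    if delta_c <= 25:
        return "ok-but-not-great"
    return "check mount / paste"

for coolant in (18, 35, 45):
    d = core_delta(55, coolant)
    print(f"coolant {coolant}C, core 55C -> delta {d}C: {verdict(d)}")
```

The same 55C core reading lands in all three buckets depending on the coolant temperature, which is why reporting only the core temp is uninformative.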


----------



## jura11

SolarBeaver said:


> Guys, sorry for asking this again, but I couldn't find any definitive info myself.
> 
> Is it normal to have 54-55C at full load with max fans on the rads / max pump speed, on an EKWB block with a Strix 3090 @ 520W heavy OC? There's only the GPU in the loop, and the rad is a 420mm EKWB CE; ambient is 26-27C, and I don't have a temp sensor for the liquid.
> When I use it in relatively silent mode (50% pump, 50% fans) it sits @ 61-62C, same ambient temps.
> 
> I would really appreciate it if anyone could share their results with waterblocks and what is normal to expect with this setup, cheers.


54-55°C with a 420mm radiator and D5 pump, that's quite high for such a loop.

Take my friend's loop as an example: he is running slim 30mm 360mm and 240mm radiators for a 10900K at 5.3GHz and a KFA2 RTX 3090 SG with a Bykski waterblock. His temperatures in Cyberpunk 2077 are usually 46-50°C with a 190MHz OC on the core and 1200MHz on the VRAM. We used MX4 thermal paste on the GPU (I ran out of Kryonaut and NT-H1 or ZF-EX); with better thermal paste I think temperatures would be better by 2-5°C, maybe less. I will tear down his loop in the coming days and see how much we can improve temperatures. For the pump we are using a Barrow 18W DDC, and for fans Arctic Cooling P12 PWM running at 1200RPM; the loop is quite quiet, the pump makes more noise than the fans hahaha

What case do you have? I would get a MO-RA3 360mm or 420mm; it's an external radiator and helps with temperatures a lot.

Seeing 61-62°C with a waterblock and 420mm radiator, I would check everything twice. Can you please check whether the tubes are warm or hot to the touch?

Hope this helps

Thanks, Jura


----------



## changboy

Ok, I just played Cyberpunk for 1 hour with +120 core and +1000 memory, and I'm also on the 1000W BIOS at 52% heheh, doing some tests. This is my temperature, is it ok?


----------



## PhatSV6

0451 said:


> I melted a CableMod wire on a Vega64 with the liquid bios and a modded power play table. The card was pulling about 450w. I wouldn’t go above 400w on a 2x8 for safety reasons. There are people with mobility problems living in my building so the consequences of a fire would be devastating.


Yeah agreed, although 2x 8-pin will still handle 400W on nearly every cable I would imagine, unless it's a cheap and nasty rip-off from somewhere like Wish lol


----------



## SolarBeaver

mouacyk said:


> The whole internet is getting temps like that and they all seem to think it's normal. Mine is a 3080, but I discovered that my standoffs were not tightened fully from Bykski and I got a 19C drop after simply tightening them all up. Also when mounting, torque the screws slowly and completely.


Thanks, will double check them.


HyperMatrix said:


> Might be normal if you're not using liquid metal. My Kingpin using the AIO cooler it comes with does 480-500W full load at max of about 45-46C after hours of gaming and I'm not happy with this. This is with fans at around 50-60%. My previous cards on a block would be around 34-36C with 21-22C ambient. Liquid metal is key. So if you have a 420mm radiator and a proper D5 pump, and your card is on a block, and you're getting mid 50s with max fans, it means you need to do a few things:
> 
> 1) re-paste or switch to liquid metal. it's a huge difference
> 2) get better fans. your fans may be at 100% but they also might suck both at airflow as well as static pressure.
> 3) get better ambient temperatures.


Yeah, I think the thermal paste is most likely the reason I'm getting those temps; I've read that you should apply more paste on a GPU and probably went overkill.
Fans are good; Corsair ML140s should be adequate I think.
As for the ambient temperatures, well, one of the reasons for going with a water loop was not having to open the window every time I play.
Thanks for the suggestions!



mirkendargen said:


> No one can tell you without knowing your coolant temps and therefore delta between your coolant and your GPU core. I don't understand what people are thinking when so many people leave this out, or talk about "ambient" temps. No one knows if you have 45C coolant because you're trying to run your whole system on a 240mm rad, or 3C coolant because you have a rad sitting in an ice water bucket. 55C full load max means VERY different things in those situations.
> 
> In my personal setup I have ~18-20C coolant, and 55C core would be god awful. Lots of people have ~35C coolant, 55C would be ok-but-not-great. Some people have even warmer coolant and 55C would be pretty good.


I thought there would be some kind of sensor included in the loop or rad, my bad. Sorry for the dumb question, but surely I can't put the thermal sensor that came with the motherboard in the loop, it will just fry, right? And what's the preferred way to check coolant temps, any sensor recommendations? The pump is PWM, so I'd like to control it with coolant temps in mind.



jura11 said:


> 54-55°C with a 420mm radiator and D5 pump, that's quite high for such a loop.
> 
> Take my friend's loop as an example: he is running slim 30mm 360mm and 240mm radiators for a 10900K at 5.3GHz and a KFA2 RTX 3090 SG with a Bykski waterblock. His temperatures in Cyberpunk 2077 are usually 46-50°C with a 190MHz OC on the core and 1200MHz on the VRAM. We used MX4 thermal paste on the GPU (I ran out of Kryonaut and NT-H1 or ZF-EX); with better thermal paste I think temperatures would be better by 2-5°C, maybe less. I will tear down his loop in the coming days and see how much we can improve temperatures. For the pump we are using a Barrow 18W DDC, and for fans Arctic Cooling P12 PWM running at 1200RPM; the loop is quite quiet, the pump makes more noise than the fans hahaha
> 
> What case do you have? I would get a MO-RA3 360mm or 420mm; it's an external radiator and helps with temperatures a lot.
> 
> Seeing 61-62°C with a waterblock and 420mm radiator, I would check everything twice. Can you please check whether the tubes are warm or hot to the touch?
> 
> Hope this helps
> 
> Thanks, Jura


Thanks, I should probably wait for a thermal sensor to check the coolant, then buy another set of thermal pads (the ones that came with the block would be too damaged, right?). Are Thermal Grizzly pads good? Also I should probably apply less paste next time; liquid metal is attractive, but the chances of me ****ing it up are way too high.

I have the ASUS ROG Helios case.
The tubes in the loop are quite warm, borderline hot. The backplate on the block is very hot.

Thanks a lot!


----------



## PhatSV6

changboy said:


> Ok, I just played Cyberpunk for 1 hour with +120 core and +1000 memory, and I'm also on the 1000W BIOS at 52% heheh, doing some tests. This is my temperature, is it ok?
> View attachment 2477151


Those temps look mint for both core and mem imo, compared to air at least.


----------



## Pinnacle Fit

Hey guys, I'm gonna pore through the thread for this further, but is there a fix for the TJ memory issue? I just got a 3090 Extreme and noticed this was an issue.


Sent from my iPhone using Tapatalk


----------



## mirkendargen

SolarBeaver said:


> I thought there would be some kind of sensor included in the loop or rad, my bad. Sorry for the dumb question, but surely I can't put the thermal sensor that came with the motherboard in the loop, it will just fry, right? And what's the preferred way to check coolant temps, any sensor recommendations? The pump is PWM, so I'd like to control it with coolant temps in mind.


The silly simple way is with a meat thermometer in your reservoir 

For a more permanent solution, most motherboards have a 2-pin header for a thermistor that you put in your loop. Some of them are shaped like a fitting, some are shaped like a plug, etc.


----------



## DOOOLY

So has anyone received the EK block for MSI 3080/3090 yet? Mine will be delivered tomorrow for my 3090 Trio.


----------



## Pinnacle Fit

mirkendargen said:


> The silly simple way is with a meat thermometer in your reservoir
> 
> More permanent solutions, most motherboards have a 2pin header for a thermistor that you put in your loop. Some of them are shaped like a fitting, some are shaped like a plug, etc.


Lol I use an IR temp gun for this. 




----------



## J7SC

...changed the Strix 3090 backplate 'helper fan' from an Arctic P12 to a GentleTyphoon (4K RPM); HWiNFO memory junction temps dropped a further 4C (guess which one is, ahem, less quiet). Once final assembly at its location is finished, I'll decide which fan to keep, given the distance to my ears.

Also a bit of a weird thing...in FS2020, V-Sync had turned itself on, but the monitor doesn't have V-Sync? I turned it off and that improved things; actually smoother, oddly enough...getting around 60-70 fps at 4K/Ultra in low-flight city scenes on full clocks now


----------



## changboy

Ok, then 70C is fine for the memory. I've been playing games since this morning and the temperature in my room has increased; it's now 25.5C in my PC room. The more I play, the hotter the room gets, and my coolant also increases by 1 or 2C.

I have a [email protected] and this RTX 3090, so together it's crazy hot. I feel like I'm gaming at the edge of the beach lol.


----------



## des2k...

SolarBeaver said:


> Thanks, will double check them.
> 
> Yeah, I think the thermal paste is most likely the reason I'm getting those temps; I've read that you should apply more paste on a GPU and probably went overkill.
> Fans are good; Corsair ML140s should be adequate I think.
> As for the ambient temperatures, well, one of the reasons for going with a water loop was not having to open the window every time I play.
> Thanks for the suggestions!
> 
> 
> I thought there would be some kind of sensor included in the loop or rad, my bad. Sorry for the dumb question, but surely I can't put the thermal sensor that came with the motherboard in the loop, it will just fry, right? And what's the preferred way to check coolant temps, any sensor recommendations? The pump is PWM, so I'd like to control it with coolant temps in mind.
> 
> 
> Thanks, I should probably wait for a thermal sensor to check the coolant, then buy another set of thermal pads (the ones that came with the block would be too damaged, right?). Are Thermal Grizzly pads good? Also I should probably apply less paste next time; liquid metal is attractive, but the chances of me ****ing it up are way too high.
> 
> I have the ASUS ROG Helios case.
> The tubes in the loop are quite warm, borderline hot. The backplate on the block is very hot.
> 
> Thanks a lot!


Odyssey thermal pads work too; you don't need liquid metal. I'm using Noctua NT-H2 and have a 10C delta at 400W and 14C at 500W with my EK block.

For the paste, use a medium dot in the middle and 4 small dots in the corners. The trick is to get all 4 core screws to tighten equally; you can use double washers too (a lot of paste doesn't help with this).


----------



## Pinnacle Fit

J7SC said:


> ...changed the Strix 3090 backplate 'helper fan' from an Arctic P12 to a GentleTyphoon (4K RPM); HWiNFO memory junction temps dropped a further 4C (guess which one is, ahem, less quiet). Once final assembly at its location is finished, I'll decide which fan to keep, given the distance to my ears.
> 
> Also a bit of a weird thing...in FS2020, V-Sync had turned itself on, but the monitor doesn't have V-Sync? I turned it off and that improved things; actually smoother, oddly enough...getting around 60-70 fps at 4K/Ultra in low-flight city scenes on full clocks now


Ok so I’m not the only one with this issue. I have a 3090 extreme. Even underpowered I still have the issue. 




----------



## rush2049

I've got a 3090 FE heading my way. Should arrive tomorrow.
Also have that EK block on pre-order.

What's the latest BIOS for it that unlocks the power limits? I keep seeing the 1000W one mentioned, but I run a lot of different OSes and do not want to rely on a negative offset in software to save me.....


----------



## SolarBeaver

des2k... said:


> Odyssey thermal pads work too; you don't need liquid metal. I'm using Noctua NT-H2 and have a 10C delta at 400W and 14C at 500W with my EK block.
> 
> For the paste, use a medium dot in the middle and 4 small dots in the corners. The trick is to get all 4 core screws to tighten equally; you can use double washers too (a lot of paste doesn't help with this).


Thanks a lot, I will try your method of pasting. I have Kryonaut, which I think should be very good.
Also, is it possible to disassemble without taking the block off the loop, and with the same thermal pads? Really wanna do it today and don't have spare pads atm...


----------



## Sheyster

rush2049 said:


> I've got a 3090 FE heading my way. Should arrive tomorrow.
> Also have that EK block on pre-order.
> 
> What's the latest BIOS for it that unlocks the power limits? I keep seeing the 1000W one mentioned, but I run a lot of different OSes and do not want to rely on a negative offset in software to save me.....


3090 FE requires shunt modding. The 500/520/1000W XOC BIOS are not an option for the FE.


----------



## rush2049

Sheyster said:


> 3090 FE requires shunt modding. The 500/520/1000W XOC BIOS are not an option for the FE.


Roger that, I will start looking for soldering equipment.


----------



## J7SC

...after settling on the GPU OC, I am slowly zeroing in on the best (= efficient, not outright clocks) speed for the VRAM...seems to be at 1259 MHz or below, at least until I watercool the setup...and even then, the custom backplate will need some modded universal GPU cooler slapped on to improve upon my current fans. I have several Swiftech Universal GPU MCW82s laying around from yesteryear...so that might be an option.

I can't believe that I'm hitting 503W+ peak on the stock BIOS with Win 10 in power saving mode and the NV panel still set to 'normal' instead of 'prefer maximum performance'. Then again, in FS2020 for example, the 2x 2080 Tis in SLI-CFR suck back at least 200W more for the same settings


----------



## Falkentyne

rush2049 said:


> Roger that, I will start looking for soldering equipment.


Do yourself a favor and grab a really good quality soldering iron like a TS100.
This is the one I have, and it is so much better than the starter piece of crap I had before.






UY CHAN Upgraded Original TS100 Digital OLED Programmable Pocket-size Smart Mini Outdoor Portable Soldering Iron Station Kit Embedded Interface DC5525 Acceleration Sensors STM32 Chip Fast Heat (B2) - Amazon.com




This iron is very nice and it heats up quickly. The package comes with a 24V power supply so you get the full 65W.

This tip is very useful for soldering shunts (you can use the default tip also but this makes it easier to bridge the two shunts).





Use this to tin the iron tip and keep it clean.


https://www.amazon.com/gp/product/B076X1NYBB



Here is high quality flux (always, always use flux).


https://www.amazon.com/gp/product/B008ZIV85A/



And finally here is some good quality 60/40 solder.


https://www.amazon.com/gp/product/B00068IJNQ/


----------



## changboy

Your power draw is really good on your card. I put the normal BIOS back on my FTW3 and I see around 420W after playing 1 hour of Red Dead 2, but the GPU core temp dropped from 52C on the 1000W BIOS to 45C, and the memory from 70C to 66C. In game I don't see much better perf, it's not so evident, but the heat coming from the PC is less for sure.


----------



## Thanh Nguyen

Does anyone know whether Port Royal on Steam or the standalone version is better? Any difference? I don't know why my Steam version keeps crashing before I've even run it.


----------



## changboy

Thanh Nguyen said:


> Does anyone know whether Port Royal on Steam or the standalone version is better? Any difference? I don't know why my Steam version keeps crashing before I've even run it.


I have the Steam version and never had this problem; maybe uninstall it and install it again.


----------



## changboy

With the normal BIOS, when I watch a 4K movie with MPC-HC I draw more power than in Red Dead 2!


----------



## Falkentyne

Thanh Nguyen said:


> Does anyone know whether Port Royal on Steam or the standalone version is better? Any difference? I don't know why my Steam version keeps crashing before I've even run it.


Disable all monitoring programs except Gpu-z or hwinfo. Disable RTSS. If you have a crashed javaw.exe process, end task on it.


----------



## sultanofswing

Need to get rid of this turd.


----------



## J7SC

sultanofswing said:


> Need to get rid of this turd.


...is that with the FTW3 stock bios ? Or KPE 520, 1K ?


----------



## sultanofswing

J7SC said:


> ...is that with the FTW3 stock bios ? Or KPE 520, 1K ?


It's on the 1000w Bios with power slider set to 60%.
Core clock was only 2085mhz.


----------



## Thanh Nguyen

sultanofswing said:


> It's on the 1000w Bios with power slider set to 60%.
> Core clock was only 2085mhz.


I run at 100% power slider with the 1000W BIOS and have not seen the card hit 700W. The max spike was only 650W or so within 1s, then it dropped to around 600W. Don't worry much about those numbers; just make sure the card is cool.


----------



## J7SC

sultanofswing said:


> It's on the 1000w Bios with power slider set to 60%.
> Core clock was only 2085mhz.


...I used to think that my 2080 Tis' stock BIOS (380W, but x2) was a lot, but perceptions change. Still, on an 'fps' basis, the 3090 is actually more efficient than 2x 2080 Tis...On another note, remember those 'mineral oil' cooled PCs with the CPU and GPU(s) fully submerged? That would be quite a 3090 project


----------



## sultanofswing

Thanh Nguyen said:


> I run at 100% power slider with the 1000W BIOS and have not seen the card hit 700W. The max spike was only 650W or so within 1s, then it dropped to around 600W. Don't worry much about those numbers; just make sure the card is cool.


I wasn't talking about board power, was talking about PCI-E slot power.


----------



## motivman

rush2049 said:


> Roger that, I will start looking for soldering equipment.


----------



## lolhaxz

Falkentyne said:


> Do yourself a favor and grab a really good quality soldering iron like a TS100.
> This is the one I have, and it is so much better than the starter piece of crap I had before.
> 
> 
> 
> 
> 
> 
> UY CHAN Upgraded Original TS100 Digital OLED Programmable Pocket-size Smart Mini Outdoor Portable Soldering Iron Station Kit Embedded Interface DC5525 Acceleration Sensors STM32 Chip Fast Heat (B2) - Amazon.com
> 
> 
> 
> 
> This iron is very nice and it heats up quickly. The package comes with a 24V power supply so you get the full 65W.
> 
> This tip is very useful for soldering shunts (you can use the default tip also but this makes it easier to bridge the two shunts).
> 
> 
> 
> 
> 
> Use this to tin the iron tip and keep it clean.
> 
> 
> https://www.amazon.com/gp/product/B076X1NYBB
> 
> 
> 
> Here is high quality flux (always, always use flux).
> 
> 
> https://www.amazon.com/gp/product/B008ZIV85A/
> 
> 
> 
> And finally here is some good quality 60/40 solder.
> 
> 
> https://www.amazon.com/gp/product/B00068IJNQ/


A 65W iron is _WAY_ overkill and a great way to potentially wreck the pads on the board - especially if you don't have experience. There is really no need to exceed a typical 25W hobbyist iron for this (assuming you don't already have something "better")... although with a temperature-controlled iron like the one you suggested, the wattage arguably becomes quite irrelevant. I would just hate to see a beginner go ham on a board with a conventional 65W iron and no experience.

Solder generally already has flux; for this application it's really not necessary to add more... if you need additional flux (for this particular operation) you are most likely doing it incorrectly... but of course, you can use flux if you want...

The mistake people make is loading up the tip and the target with solder, then re-heating it... all the flux burns away when you do this. You should flow the solder into the joint in a single action, not dab it on there; if you need to re-heat it multiple times, your technique is questionable. [If you do it this way, then yes, flux will help immensely]

Takeaway: any random 25W iron and proper technique will work just fine... spending $200 on gear won't make it any prettier or easier... practise.


----------



## Shawnb99

Alex24buc said:


> I will keep my Palit, I don`t want to take the risk considering that the Evga doesn`t have warranty.


Why wouldn't the EVGA have a warranty? The warranty follows the card, so it would still have the 3 years.


----------



## mardon

martinhal said:


> I have the same bios on my Palit. Happy in games. I doubt it would be more than a few FPS at most


This, 100%.
I was playing with the 1000W BIOS: 2000MHz @ 390W vs 2100MHz @ 470W (PSU shuts off after this in Cyberpunk) was 3fps.

However, 1970MHz @ 390W vs 2100MHz @ 500W got me around 10fps in RDR2.

It all depends on the game. I'm thinking of shunt modding and having a few different profiles.

It seems that in games with RTX it's more like a power virus: the 3090 will suck power but really not gain that many FPS. I observed similar behaviour in Control.
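The trade-off in those numbers can be made explicit by computing the extra watts paid per extra frame (figures taken from the post above):

```python
# Marginal cost of an overclock: extra watts per extra frame per second.

def watts_per_fps(watts_low, watts_high, fps_gain):
    """Extra board power paid for each additional fps between two settings."""
    return (watts_high - watts_low) / fps_gain

# Cyberpunk: 390W -> 470W bought only 3 fps.
print(watts_per_fps(390, 470, 3))
# RDR2: 390W -> 500W bought about 10 fps.
print(watts_per_fps(390, 500, 10))
```

Cyberpunk comes out at roughly 27W per extra frame versus 11W in RDR2, which quantifies the "depends on the game" point: the same power headroom buys very different returns.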


----------



## SolarBeaver

Hey, what are considered acceptable memory junction temps? Mine can go as high as 85+ in the Time Spy stress test whether overclocked to 1200MHz or stock (2C difference): 520W BIOS, +125 core, max fan/pump speed, ~15C ambient, side panel open. Bad thermal pad mounting, I guess? VRM temps sit in the mid 50s, which I think is fine.


----------



## StullenAndi

lolhaxz said:


> A 65W iron is _WAY_ overkill and a great way to potentially screw the pads on the board


Maybe if you're soldering diodes to a one-layer PCB. 65W is needed to get solder melting on these pads; the PCB is full of massive ground planes.

Please make us a video of soldering a shunt with a 25W iron....


----------



## SaccoSVD

Guys.

I'm curious to see how far my Eagle (non OC) can go if I flash a different BIOS. (I just feel this card has some more untapped potential)

What would you do in your case? And by that I mean, which BIOS would you flash?

Also, would I be forced to do a shunt mod if I want to go further? What about just flashing the Eagle OC BIOS onto the non-OC card?

How safe is it to flash a BIOS? Can I brick my 3090 by doing so? (I see I can use a backup GPU if something goes wrong)


----------



## des2k...

SolarBeaver said:


> Hey, what are considered acceptable memory junction temps? Mine can go as high as 85+ in the Time Spy stress test whether overclocked to 1200MHz or stock (2C difference): 520W BIOS, +125 core, max fan/pump speed, ~15C ambient, side panel open. Bad thermal pad mounting, I guess? VRM temps sit in the mid 50s, which I think is fine.


I think it depends on the quality of the thermal pads and the airflow in the case.
I have two heatsinks (on the memory section of the EK backplate) + 3 Noctua fans at max RPM; it's 64C max using phoenixminer.exe -bench. Games and benchmarks are usually below that, around 54C.


----------



## changboy

On my FTW3 Ultra, the PCIe slot power draw with the EVGA 500W BIOS can be 83W. On the 1000W BIOS with the slider at 52%, the card pulled 525W with 91W from the slot, and with the slider at 100% I saw the card pulling 688W with the slot at 108W.

I have a molex plug on my Rampage Omega to help with PCIe power draw, and I use it.

Do you think the PCIe slot power draw on my FTW3 is normal? When I was on the 1000W BIOS and saw 108W on the slot, was that dangerous for my graphics card or my system?

How much power can the PCIe slot deliver?


----------



## long2905

motivman said:


> I originally had the alphacool block, did not like it and replaced it with the EK block. Performance on the core was similar with both, but ek cools more components, and looks better IMHO.


think i will wait for the EK fullcover backplate and switch


----------



## Zogge

...


----------



## dante`afk

I'm laughing at all the people buying their EK block and then having 50c on the GPU asking if that is a good temp.

No, it's not lol.


----------



## Zogge

If ambient is 30 and water temp 40 I disagree. But I know what you mean.


----------



## mardon

dante`afk said:


> I'm laughing at all the people buying their EK block and then having 50c on the GPU asking if that is a good temp.
> 
> No, it's not lol.


Doesn't it depend on the radiators also? I have a slim 20x240 and a 48x240 in my SFF Ncase M1 and get around 53°C in Cyberpunk at 1.0V/390W (46°C with the sides off) on my EK block. I'd say that's pretty good. Ambient is 22°C (ish).


----------



## Arbustok

Are you saying that a better block would make a significant difference?


----------



## Thanh Nguyen

Does anyone know a quiet, small fan to blow ambient air into the corner of the room? I put my PC there and it needs fresh air.


----------



## Shawnb99

Arbustok said:


> Are you saying that a better block would make a significant difference?


Yes, the Optimus block is worlds better than EK.


----------



## mouacyk

Shawnb99 said:


> Yes the Optimus block is worlds away better then EK


It's that good, you don't even need to qualify it? That is amazing.


----------



## Shawnb99

mouacyk said:


> It's that good, you don't even need to qualify it? That is amazing.


It truly is


----------



## Arbustok

How many degrees are we talking about?


----------



## changboy

In normal gaming usage with my GPU + EK block, I see a max of 47°C on the GPU with my room at 24.5°C.
What temp can the Optimus block reach?
Even if it's 40°C, is that doing any real benefit?


----------



## Jpmboy

Yeah, the optimus block on this 3090FTW3 is working really well. Core temps are <10C above water temp, regardless of the ambient. I'm cooling it with a single fat 360 in push-pull along with the 5.2GHz 8086K where it's found a home. It is a very handsome and well made GPU block.

On another note, after mounting the block I moved the card around between several rigs (7980XE/R6Apex at 4.6GHz, 10980XE/R6EO at 4.8GHz and a 8086K/R10apex at 5.2) and with the exception of benchmarks, the 5.2 8086K, 4400c16 apex is the better gaming configuration on everything that's been tried by the gamers here.


----------



## Thanh Nguyen

Jpmboy said:


> Yeah, the optimus block on this 3090FTW3 is working really well. Core temps are <10C above water temp, regardless of the ambient. I'm cooling it with a single fat 360 in push-pull along with the 5.2GHz 8086K where it's found a home. It is a very handsome and well made GPU block.
> 
> On another note, after mounting the block I moved the card around between several rigs (7980XE/R6Apex at 4.6GHz, 10980XE/R6EO at 4.8GHz and a 8086K/R10apex at 5.2) and with the exception of benchmarks, the 5.2 8086K, 4400c16 apex is the better gaming configuration on everything that's been tried by the gamers here.


Can you flash the 1000W BIOS and run Metro or Time Spy Extreme GT2 and report the delta? I want to see if it's worth it. Thanks.


----------



## changboy

I want to use this 1000W BIOS, but the memory never idles down, and having to change the setting every time I want to do something keeps me from using it.

Is there a way to get the memory to drop at idle and boost normally with this BIOS?
I'm not sure it's good for my card if the memory is always boosted 24/7.


----------



## Esenel

Jpmboy said:


> Yeah, the optimus block on this 3090FTW3 is working really well. Core temps are <10C above water temp, regardless of the ambient.


At which power consumption and which water flow?
These variables are essential.

If anybody wants to compare their block under similar conditions to other blocks, just pay a visit here:








[Übersicht] - RTX 30x0 Wasserkühlervergleich | GPU Block Comparison

Introduction: Hello everyone. Since information on this topic already exists in various places, I would like to start collecting all the data centrally and in one clear overview. The goal is to transparently list the performance of the GPU blocks from the various manufacturers. The form...

www.hardwareluxx.de


----------



## stryker7314

Has anyone posted thermal pad measurements for the 3090 Kingpin? I want to replace them with fancy-pants Fujipoly ones, and also liquid-metal the GPU.


----------



## ttnuagmada

dante`afk said:


> I'm laughing at all the people buying their EK block and then having 50c on the GPU asking if that is a good temp.
> 
> No, it's not lol.


A 15°C delta seems par for the course for waterblocks on 8nm. 16/12nm had much lower heat density, so seeing half of that wasn't uncommon. So if your water temp is at 35°C, hitting 50°C is to be expected.


----------



## itssladenlol

changboy said:


> Me i wanna use this 1000w bios, but the memory thing never go at idle or need change the setting everytime you want do something break me to use it.
> 
> Is there a way for the memory drop at idle and boost normal with this bios ???
> Not sure its good for my card if my memory always boost 24/24.


Want to know this aswell.


----------



## Falkentyne

stryker7314 said:


> Anyone post thermal pad measurements for 3090 Kingpin? Want to replace them with fancy pants fujipoly ones, also liquid metal the gpu.


Might be better asking this in the Kingpin thread on the EVGA forums. You would need precise measurements, since you'd be working with postage-stamp-sized samples of Fujipoly pads.


----------



## J7SC

After the Arctic P12 PWM and GentleTyphoon (4k RPM), I am going to try 2x Arctic F8 (80mm) fans on the backplate of the Strix. The first two options work, but are bulkier, and I also like being able to 'split' the airflow a little bit, though the 31 CFM (x2) of the Arctic F8 certainly can't match the GT. The backplate of the 3090 Strix is actually pretty good for heat transfer and has some fine-grain grey-black crackle paint and 'some' fins. On the other hand, I could just wait for the Phanteks GPU block, which comes with a separate backplate I could mod with a universal GPU block.

@stryker7314 ...you might want to ask Jay2C the question re. pads...he just broke another PCB item and seems to have a direct line to Vince / KP for all the PCB fixing he needs


----------



## dante`afk

Shawnb99 said:


> Yes the Optimus block is worlds away better then EK


anything is better than EK.



ttnuagmada said:


> 15C delta seems pretty par for the course for waterblocks on 8nm. 16/12nm had a much lower heat density, so getting half of that wasn't uncommon. So if your water temp is at 35C, hitting 50C is to be expected.


I agree with that; a 10-15°C delta to water temp is in a tolerable range.


----------



## des2k...

J7SC said:


> After Arctic P12 pwm and GentleTyphoon (4k rpm), I am going to try 2x Arctic F8 (80mm) fans on the backplate of the Strix...the first two options work, but are bulkier, and I like to also be able to 'split' the air flow a little bit...though the 31 CFM (x2) of the Arctic F8 certainly can't match the GT. The backplate of the 3090 Strix is actually pretty good for heat-transfer and has some fine-grain grey-black crackle paint and 'some' fins. On the other hand, I could just wait for the Phantek GPU block which comes with a separate backplate I could mod w/universal GPU block
> 
> @stryker7314 ...you might want to ask Jay2C the question re. pads...he just broke another PCB item and seems to have a direct line to Vince / KP for all the PCB fixing he needs


What works for my Zotac with the EK backplate is 60mm, 80mm, 80mm fans, all Noctua at max RPM. I have them in pull, push (this works best for memory temps), pull (so as not to dump heat on the PCIe 8-pins, given how much they pull each).

The backplate is already good with the Odyssey pads; adding heatsinks only dropped memory Tj by 2-4°C.


----------



## gfunkernaught

Are the ek blocks really that bad? Or is it a mount/paste issue?


----------



## motivman

Man, so disappointed with the Phanteks block for my Strix: a 26°C delta T between GPU temp and water temp. I have remounted it three times and used Gelid Extreme. I double-checked that the thermal paste is spreading and making good contact with the die. Not sure what I might be doing wrong. With my reference card and EK block, my delta T is 15°C. I think I will order the EK block and see what my results are. Stumped by this one...


----------



## des2k...

dante`afk said:


> anything is better than EK.
> 
> 
> 
> i agree with that, 10-15c delta to water temp is in a tolerable range


Isn't the Optimus block FTW3-only? And it's too expensive.

As for the EK block: for normal usage it doesn't matter, because you'll max out your memory and core OC before taking advantage of that 10°C delta at a 500W+ load, lol.

If you already reach a 2160 MHz core at a 14°C delta and 500W, Optimus at a 10°C delta will get you one bin, max two bins, so 15-30 MHz. That's about 1 fps more, lol.
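As a back-of-envelope check of the bin math above: Ampere's GPU Boost steps the core clock in 15 MHz bins, and a common community rule of thumb (an assumption here, not an NVIDIA spec) is roughly one bin per ~5°C of core temperature:

```python
# Back-of-envelope: boost bins recovered by improving the core-to-water delta.
# GPU Boost on Ampere steps the core clock in 15 MHz bins; the ~5 C-per-bin
# figure below is a community rule of thumb, not an official spec.
BIN_MHZ = 15
DEG_PER_BIN = 5.0  # assumption: community estimate

def bins_gained(delta_before_c: float, delta_after_c: float) -> int:
    """Whole boost bins recovered by shrinking the water-to-core delta."""
    return round((delta_before_c - delta_after_c) / DEG_PER_BIN)

# A 14 C delta on the EK block vs. an assumed 10 C delta on the Optimus:
gain = bins_gained(14, 10)
print(f"{gain} bin(s), about +{gain * BIN_MHZ} MHz")  # 1 bin(s), about +15 MHz
```

Which lines up with the "one bin, maybe two" estimate: a few degrees of block improvement buys very little clock.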


----------



## Esenel

dante`afk said:


> anything is better than EK.
> i agree with that, 10-15c delta to water temp is in a tolerable range


Why this grudge against EK?
They have one of the best-performing Strix blocks.
Better than Aqua Computer.

Nobody has yet posted a proper analysis of the Optimus block, so how could it be better?


----------



## lolhaxz

StullenAndi said:


> maybe if you solder diodes to a one layer pcb. 65w is needed to get solder melting on these pads. pcb is full of massive ground planes.
> 
> please make us a video soldering a shunt with 25w iron....


Yeah, a bit late now.

I'll refer you to this previous post -> [Official] NVIDIA RTX 3090 Owner's Club


----------



## changboy

When I play, my coolant is at 35°C and my GPU at 47°C with the EK waterblock; I don't think that's bad for a 500W power draw.
If my ambient temp were 21°C, then my GPU would be at 42°C, which is entirely normal. EK waterblocks are OK, and it's not true to say they're bad.

It's more about availability in your region.


----------



## gfunkernaught

changboy said:


> When i play my coolant at 35c and my gpu at 47c with EK waterblock, i dont think its bad for a 500w power draw.
> If my ambiant temp were at 21c then my gpu will be at 42c then its really normal. Ek waterblock are ok and its not true to say its bad.


What paste did you use? I have conductonaut that I want to use, still have an ek trio block in the cart, dunno if I should jump.


----------



## mouacyk

motivman said:


> man, so disappointed with phanteks blocks for my strix. 26C delta T between GPU temp and water temp. I have remounted like 3 times, used gelid extreme. Double checked and thermal paste is spreading and making good contact with the die. Not sure What I might be doing wrong. With my reference card and EK block, my delta T is 15C. I think I will order the EK block and see what my results are. Stumped with this one...


I had to tighten the standoffs on my Bykski block to get a delta of 17°C instead of the initial 36°C, even though it made about 90% contact before. Afterward, contact paper showed 97-99% contact. @bmgjet had to file down some of the standoffs on his EK block.


----------



## des2k...

mouacyk said:


> I had to tighten standoffs on my Bykski block to get a delta of 17C, instead of 36C initially, even though it made like 90% contact before. Afterward, contact paper was showing around 97-99% contact. @bmgjet had to file some of his standoffs on his EK block


When you need 100% contact on the die


----------



## mouacyk

des2k... said:


> When you need 100% contact on the die


Saw that yesterday and was tempted. I just lapped my Raystorm Pro for a direct-die mount, so...
I swear the pesky NVIDIA lettering makes up the 1% of lost contact.


----------



## motivman

Esenel said:


> Why this grudge against EK?
> They have one of the best performing Strix block.
> Better than AquaComputer.
> 
> Nobody yet posted a proper analysis of the Optimus block. So how could this one be better?


what is your delta T with the strix block? not sure why my phanteks block is performing like trash...


----------



## dante`afk

Esenel said:


> Why this grudge against EK?
> They have one of the best performing Strix block.
> Better than AquaComputer.
> 
> Nobody yet posted a proper analysis of the Optimus block. So how could this one be better?


Well, you're on hwluxx; you know what the general response to EK is there. I too have had some pretty bad customer service experiences with them in the past.


----------



## Pinnacle Fit

Sheyster said:


> 3090 FE requires shunt modding. The 500/520/1000W XOC BIOS are not an option for the FE.


Are these BIOSes good for all the triple 8-pin cards? I have the 3090 Extreme.




----------



## changboy

gfunkernaught said:


> What paste did you use? I have conductonaut that I want to use, still have an ek trio block in the cart, dunno if I should jump.


I use Conductonaut, but I apply protection on the parts around the chip, and have had no problem. I've seen that most of those who bought a waterblock for the RTX 3090 use liquid metal too. But don't use Coollaboratory liquid metal, because it dries fast and is a pain to take off; Conductonaut is fine even after 2 years.


----------



## tyw214

Is it normal that my Strix 3090 doesn't clock down at idle? I am running two 4K LG OLEDs and a 2K IPS; I heard it's intended behavior when running multiple monitors?


----------



## unyx

On a 3090 FE during a mining session, it's impossible to go below 102°C (while keeping a decent hashrate).
I've seen that many people put heatsinks + fans directly on the back of the FE, near the VRAM (I don't know the name of that part).
Can someone advise whether it's possible/useful to go with a Noctua heatsink like the NH-U14S TR4-SP3, attached with a thermal pad?
Is that a valid choice, or would aluminum heatsinks plus a basic fan do the same?
(I don't want to use watercooling, just air! Or maybe an AIO?)


----------



## mirkendargen

tyw214 said:


> Is it normal that my strix 3090 doesn't clock down during idle?? I am running double 4k LG OLED, and a 2K IPS; i heard it's intended behavior for running multiple monitors?


More than one screen above 60hz keeps my Strix from clocking down the memory.


----------



## tyw214

mirkendargen said:


> More than one screen above 60hz keeps my Strix from clocking down the memory.


my core clock also just stays the same all the time


----------



## stryker7314

J7SC said:


> @stryker7314 ...you might want to ask Jay2C the question re. pads...he just broke another PCB item and seems to have a direct line to Vince / KP for all the PCB fixing he needs


He's not on this forum, is he? Not sure where to find him, lol. Though someone posted a teardown of the Kingpin recently; was that on this forum or EVGA's? Can't find it anymore, ugh.


----------



## 86Jarrod

motivman said:


> man, so disappointed with phanteks blocks for my strix. 26C delta T between GPU temp and water temp. I have remounted like 3 times, used gelid extreme. Double checked and thermal paste is spreading and making good contact with the die. Not sure What I might be doing wrong. With my reference card and EK block, my delta T is 15C. I think I will order the EK block and see what my results are. Stumped with this one...


26C is crazy different from mine. That's weird af. I changed my pads right away; I wonder if that made a diff, or if it's just ****ty QC?


----------



## J7SC

stryker7314 said:


> Not on this forum is he? Not sure where to find him lol. Though someone posted a teardown of the kingpin recently was that on this forum or evga? Can't find it anymore ugh

...a year ago, I would have just said to join and email xedev.com...that's TiN's site (as in TiN from KPE), but he recently left EVGA / KPE / Taiwan...still, he would certainly know, and he might still monitor the site. Another fellow who would either know or could get in touch with Vince is Steve from GN, via his website or YT/Patreon. Steve and Jay2C both have 3090 KPEs they disassembled for LN2 comps, with Vince on live video conference.


----------



## Esenel

dante`afk said:


> Well you're in hwluxx you know how the response the general response to EK is there. and me as well had some pretty bad customer service experience in the past.


All of the water cooling companies have ****ty support.
So it comes down to performance and quality.
And there it looks pretty good for Bykski and EK,
and not so good for Aqua Computer and Alphacool.


----------



## Esenel

motivman said:


> what is your delta T with the strix block? not sure why my phanteks block is performing like trash...


520W - 210 l/h flow - 30°C water before GPU - 46°C GPU - Memory Junction 64°C
Delta ~16K

At 400W it is around an 11K delta.

So I do not see why some say EK has ****ty performance.
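Incidentally, the two data points above (~16 K at 520 W, ~11 K at 400 W) pin down the block's effective core-to-water thermal resistance, which lets you estimate the delta at other power levels. A minimal sketch, assuming the delta scales roughly linearly with dissipated power:

```python
# Estimate a GPU block's core-to-water thermal resistance from two
# (power, delta-T) measurements, then predict the delta at another load.
# Assumes delta-T scales roughly linearly with dissipated power.

def thermal_resistance(power_w: float, delta_k: float) -> float:
    """Thermal resistance in K/W from one (power, delta-T) data point."""
    return delta_k / power_w

r_high = thermal_resistance(520, 16)  # ~0.031 K/W
r_low = thermal_resistance(400, 11)   # ~0.028 K/W
r_avg = (r_high + r_low) / 2

# Predicted core-to-water delta for a 450 W load on this block:
print(f"~{r_avg * 450:.1f} K at 450 W")  # ~13.1 K at 450 W
```

The two resistances agreeing within a few percent is a sign the mount is consistent; a block whose apparent resistance jumps with load usually has a contact problem.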


----------



## gfunkernaught

Esenel said:


> 520W - 210l/h flow - 30°C water before GPU - 46°C GPU - Memory Junction 64°C
> Delta ~16K
> View attachment 2477253
> 
> 
> At 400W it is around 11K delta.
> 
> So I do not see why some say EK would have ****ty performance.


I guess they're comparing it to previous-gen GPUs, where running a 380W BIOS would put the GPU at 38-40°C. 46°C does seem high to me too. My Trio with the stock cooler and the Suprim 450W BIOS runs around 67°C with the default fan curve, and is not very loud at all. I still don't know if I should mount the EK block or wait for the Kryographics block, if they even make one.


----------



## motivman

86Jarrod said:


> 26c is crazy different than mine. That's weird af. I changed my pads right away I wonder if that made a diff or just ****ty QC?


whats your Delta T with your phanteks block?


----------



## motivman

Esenel said:


> 520W - 210l/h flow - 30°C water before GPU - 46°C GPU - Memory Junction 64°C
> Delta ~16K
> View attachment 2477253
> 
> 
> At 400W it is around 11K delta.
> 
> So I do not see why some say EK would have ****ty performance.


Yeah, I ordered the EK block; this Phanteks block is just not cutting it. I have remounted it six times now and made sure all the standoffs are screwed all the way into the block, and delta T is still 25°C? Unacceptable. I know my mount is good because the GPU idles at my water temp, but put any load on it and temps shoot up real quick!


----------



## Esenel

gfunkernaught said:


> 46c does seem high to me too.


It is about the delta, not the absolute temp.
At the moment water temp is around 25°C.
So GPU would be under full load 40-41°C ;-)


----------



## mirkendargen

motivman said:


> yeah, I ordered the EK block, this phanteks block is just not cutting it. I have remounted like 6 times now, made sure all the standoffs are screwed all the way into the block, and Delta T is still 25C???? unacceptable. I know my mount is good because the GPU idles at my water temp, but put any load on it, and temps shoot up real quick!


There was talk of the dies themselves having fairly extreme variations in flatness. How were your temps with the stock cooler still on? This could be affecting you.


----------



## 86Jarrod

motivman said:


> whats your Delta T with your phanteks block?


13-14°C, TSE GT2 looped for just under 15 minutes at over 600 watts.


----------



## motivman

86Jarrod said:


> 13c to 14c TSE GT2 loop just under 15 min. Over 600 watts
> View attachment 2477276


Man, I don't get it; this is pissing me off. The good news is that I can overclock the memory to the max, +1500, stable, but GPU delta T is literally sitting at 30°C even though it idles at my water temp. Can't wait to get this EK block on Monday... gonna be a long weekend. Not sure what is going on with this Phanteks block, to be honest.


----------



## 86Jarrod

motivman said:


> Man, I don't get it, this is pissing me off. Good news is that I can overclock the memory to max +1500 stable, but GPU delta T is literally sitting at 30C, but it idles at my water temp? Can't wait to get this EK block on monday... gonna be a long weekend, not sure what is going on with this phanteks block to be honest.


Who knows, probably bad QC. EK's blocks look like they perform just as well too. This was just the block I could get delivered fastest. Good luck!


----------



## StullenAndi

lolhaxz said:


> I'll refer you to this previous post -> [Official] NVIDIA RTX 3090 Owner's Club


It's the opposite of a good solder joint. The solder forms a blob because your low-wattage iron can't transfer enough energy into the component. If the component and the surrounding PCB are heated properly, the solder will flow in between the components and won't form a blob. But with 25W that is not possible. If the iron is powerful enough, then no matter how much solder you load onto your tip, only the right amount will stick to the components.


----------



## Esenel

gfunkernaught said:


> 46c does seem high to me too.


Extra for you.
Timespy GPU 2 Loop delta 13-15K with 520W bios showing Power Limit.
GPU 38-40°C. Happy ;-)


----------



## motivman

Esenel said:


> Extra for you.
> Timespy GPU 2 Loop delta 13-15K with 520W bios showing Power Limit.
> GPU 38-40°C. Happy ;-)
> View attachment 2477279


just in case I missed it, does the phanteks block have a dedicated inlet and outlet?


----------



## motivman

86Jarrod said:


> Who know's probably bad QC. EK's look like they perform just as good too. This was just the block I could get delivered fastest. Good luck!


just in case I missed it, does the phanteks block have a dedicated inlet and outlet?


----------



## HyperMatrix

sultanofswing said:


> It's on the 1000w Bios with power slider set to 60%.
> Core clock was only 2085mhz.


XC3 bios and shunt mod. Uses less PCIe slot power than normal bios.


----------



## 86Jarrod

motivman said:


> just in case I missed it, does the phanteks block have a dedicated inlet and outlet?


I'm not sure, but mine is in closest to the NVLink and out closest to the 8-pin.


----------



## HyperMatrix

lolhaxz said:


> A 65W iron is _WAY_ overkill and a great way to potentially screw the pads on the board - especially true if one does not have experience, there is really no need to exceed typical 25W hobbyist iron for this (assuming you don't already have something "better")... although with a temperature controlled iron such as you suggested, arguably, the wattage becomes quite irrelevant. I would just hate to see a beginner go ham on a board with a conventional 65W iron with no experience.
> 
> Solder generally already has flux - (for this application) its really not necessary to add more... if you need additional flux (for this particular operation) you are most likely doing it incorrectly... but of course - you can use flux if you want...
> 
> The mistake people make is loading up the tip and the target with solder, then re-heating it... all the flux burns away when you do this, you should flow the solder between the joint in a single action, not dab it on there, if you need to re-heat it multiple times .. your technique is questionable. [if you do it this way, then yes, flux will help immensely]
> 
> Take away: Any random 25W iron and proper technique will work just fine... spending $200 on gear won't make it any prettier or easier... practise.


Disagree on 2 points. Flux is important. The amount you get in your solder is really not enough in many cases. Flux is the difference between thinking you have a good connection and knowing you have a good connection.

Secondly, the wattage depends on the quality of your tip. Lower quality tips require more power to heat up to the same level. So just saying 25W is enough doesn't provide enough info. Best bet is a temperature regulated soldering iron.


----------



## gfunkernaught

Esenel said:


> It is about the delta, not the absolute temp.
> At the moment water temp is around 25°C.
> So GPU would be under full load 40-41°C ;-)


Delta does matter when judging cooling performance of your loop. GPU temp will affect your max OC. So I look at both.


Esenel said:


> Extra for you.
> Timespy GPU 2 Loop delta 13-15K with 520W bios showing Power Limit.
> GPU 38-40°C. Happy ;-)
> View attachment 2477279


You're covering your current gpu voltage lol. Is it 1.1v?


----------



## motivman

86Jarrod said:


> I'm not sure but mine is in closest to nv link and out closest to 8pin.


THIS FIXED THE ISSUE FOR ME!!!!! Good lord... I knew something was off. Delta T is now at 15°C... shame on Phanteks for not mentioning this in the manual.


----------



## Thanh Nguyen

motivman said:


> THIS FIXED THE ISSUE FOR ME!!!!! Good lord... I knew something was off. Delta T is now at 15C... shame on phanteks for not mentioning this in the manual.


Use liquid metal and enjoy at least 3c cooler .


----------



## Esenel

gfunkernaught said:


> Delta does matter when judging cooling performance of your loop. GPU temp will affect your max OC. So I look at both.
> 
> You're covering your current gpu voltage lol. Is it 1.1v?


There is nothing to cover.
It is a real turd GPU compared to what people are showing here.


----------



## 86Jarrod

motivman said:


> THIS FIXED THE ISSUE FOR ME!!!!! Good lord... I knew something was off. Delta T is now at 15C... shame on phanteks for not mentioning this in the manual.


Hell yeah nice!


----------



## changboy

HyperMatrix said:


> XC3 bios and shunt mod. Uses less PCIe slot power than normal bios.


When I use a 2x 8-pin BIOS the card performs well, but the GPU-Z power meter is a mess. Do you know what the real power draw is on each of the 3x 8-pins, and whether the PCIe value of 68W is correct?

Also, after checking my memory temps again: if I scroll down there is another set of memory temps, and on the MEM2 ICX value mine reaches 92°C while the others are a lot lower. Where is that memory located, and do you think I have a thermal pad not making contact somewhere under my backplate? See the pic:


----------



## jura11

motivman said:


> THIS FIXED THE ISSUE FOR ME!!!!! Good lord... I knew something was off. Delta T is now at 15C... shame on phanteks for not mentioning this in the manual.


Phanteks waterblocks usually performed pretty much on par with EKWB, at least on the Pascal and Turing generations of GPUs; I have tested almost every block on the RTX 2080 Ti and GTX 1080 Ti as well.

On the GTX 1080 Ti, Phanteks wasn't the best but performed pretty much on par with EKWB, though VRM temperatures were around 5-7°C better on the Phanteks block than on EKWB; Aquacomputer and Heatkiller were in their own league. On Turing / RTX 2080 Ti I tested the EKWB Vector (Strix and reference PCB), the Heatkiller IV Pro (Strix and reference), the Phanteks Glacier, and a Bykski RTX 2080 Ti.

On Turing, the Aquacomputer Kryographics was the best in my loop, the Heatkiller IV Pro was pretty much on par with it, Phanteks was around 1-2°C worse than Heatkiller, and EKWB came dead last, with even the Bykski overtaking my EKWB, hahaha.

IN and OUT should always be followed on every block; that's my view.

Hope this helps.

Thanks, Jura


----------



## gfunkernaught

Esenel said:


> There is nothing to cover.
> It is a real turd GPU compared to what people are showing here.
> View attachment 2477284


In the image you posted before, where you circled the GPU temp, the circle was covering the voltage. What frequency did you get with those temps at 1.075V?


----------



## changboy

I think I discovered why MEM2 is way hotter than the others: that memory is near the PCIe slot, and on the 1000W BIOS the PCIe slot power draw is 92W, so maybe that brings heat to that spot, but I am not sure about it.


----------



## motivman

Memory on this Strix is insane... never have I been able to max out the memory overclock in Afterburner, and this thing keeps passing benchmarks. How can I try +1600? lol


----------



## Thanh Nguyen

Can someone show the delta with the optimus block?


----------



## changboy

motivman said:


> Memory on this strix is insane... never have I been able to max out memory overclock on afterburner, and this thing keeps passing the benchmarks. How can I try +1600? lol


I think they all oc like this on memory hehehe.


----------



## changboy

Thanh Nguyen said:


> Can someone show the delta with the optimus block?


The thing is: if the guy shows you a nice temp but only 400W of board power, it doesn't mean anything.
If on the other hand he shows you a board power draw of 525W with a GPU temp of 40°C at an ambient of 23°C, then OK, WOW.


----------



## jura11

changboy said:


> The thing is : if the guy show you a nice temp but just 400w power board its not mean anything.
> If on the other hand show you a board power draw of 525w with temp of 40c on the gpu with ambiant temp of 23c then ok WOW.


You can't compare one loop with another, because of the radiators used, flow rate, ambient, CPU, etc.

On the RTX 2080 Ti I usually tested waterblocks with the 380W Galax BIOS or with the XOC BIOS.

Previously Thermalbench, TechPowerUp, and Igor's Lab tested waterblocks, but now it's hard to find any good waterblock reviews.

Hope this helps.

Thanks, Jura


----------



## Thanh Nguyen

changboy said:


> The thing is : if the guy show you a nice temp but just 400w power board its not mean anything.
> If on the other hand show you a board power draw of 525w with temp of 40c on the gpu with ambiant temp of 23c then ok WOW.


GPU temp minus water temp is the delta I want to see, running TSE GT2.


----------



## DrunknFoo

Well, I got my EK block for my FTW3, and was finally able to fix the fuses on the card and run Port Royal without something popping. Now off to slapping the block on and figuring out how to run my lines (my previous setup had dual pumps; now one has to go to fit the card).


----------



## J7SC

changboy said:


> I think they all oc like this on memory hehehe.


GDDR6X oc'ing drives me a bit batty... every time I think I've found the most efficient number with one app on the Strix, another app will go higher, and then so does the first one (may be temp related?). Apart from the usual VRAM-heavy tests (such as GT2 in TS), is there an actual test that checks for errors in GDDR6X, not unlike ye ol' ATITool, but modern?


----------



## Falkentyne

motivman said:


> Memory on this strix is insane... never have I been able to max out memory overclock on afterburner, and this thing keeps passing the benchmarks. How can I try +1600? lol


Motivman, what's the difference between your TDP% and TDP Normalized % in hwinfo64?


----------



## motivman

Falkentyne said:


> Motivman, what's the difference between your TDP% and TDP Normalized % in hwinfo64?


less than 1% on average.


----------



## Falkentyne

motivman said:


> less than 1% on average.


Well, I was practicing soldering those super tiny 1206 shunts.
I just successfully soldered a 4 mOhm shunt onto a small green resistor (which is the same size) on my dead Vega 64.
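For anyone curious about the arithmetic behind these mods: a shunt mod stacks a second sense resistor in parallel with the stock one, lowering the effective resistance so the card under-reports power by the same ratio. A minimal sketch, assuming a typical 5 mOhm stock shunt (an assumption, not a measured value for any specific card):

```python
# Parallel-shunt arithmetic behind a typical shunt mod.
# The card infers power from the voltage drop across a sense (shunt) resistor,
# so halving the effective resistance makes it report half the real power.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two shunts in parallel, in milliohms."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

stock = 5.0   # assumption: typical 5 mOhm stock shunt
added = 5.0   # soldering an identical shunt on top of it
combined = parallel(stock, added)   # 2.5 mOhm
report_ratio = combined / stock     # the card now reports 50% of real power

real_power_at_reported_350w = 350 / report_ratio
print(f"{combined} mOhm, reads {report_ratio:.0%}, "
      f"350 W shown = {real_power_at_reported_350w:.0f} W real")
```

This is also why shunt-modded cards show nonsense in GPU-Z: every power sensor behind a modded shunt is scaled down by the same ratio.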


----------



## SoldierRBT

3090 KPE 1.018v 2175MHz 520W BIOS Max temp: 49C








I scored 21 784 in Time Spy

Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## changboy

I put a fan on the backplate of my EK waterblock, and memory temp dropped 4°C after playing Cyberpunk at a 525W board power draw. The high temp I saw on ICX MEM2 is the bug, because temps there are at 52°C, lol. So a fan helps a bit, and I will keep it like this.


----------



## changboy

Falkentyne said:


> Well I was practicing soldering those super tiny 1206 shunts.
> Just successfully soldered a 4 mOhm shunt onto a green small resistor (which is the same size) on my dead Vega 64, successfully.
> View attachment 2477299


It's very small; just finding the part made my eyes hurt, lol.


----------



## edsontajra

Guys, *** is going on with the FTW3 Ultra 3090 cards? There are tons of threads on the EVGA forum and Reddit from people with the ~red light of death~. The card suddenly dies.


----------



## Falkentyne

edsontajra said:


> Guys, *** is goin on with the FTW3 ULTRA 3090 Cards? A tons of threads on evga forum and reddit of people with ~red light of death~. The card suddenly dies.


No one knows. Until someone donates a dead card to Buildzoid (who can fix it as long as all the power stages aren't blown) or Steve at GamersNexus, no one will really know.


----------



## Biscottoman

jura11 said:


> Phanteks waterblocks usually performed pretty much on par with EKWB at least on Pascal and Turing generations of GPU's, I have tested almost every block on RTX 2080Ti and GTX1080Ti as well
> 
> On GTX1080Ti Phanteks hasn't been best but performed pretty much on par with EKWB, just VRM temperatures has been around 5-7°C better on Phanteks blocks than EKWB, Aquacomputer or Heatkiller been in their own league, on Turing or RTX 2080Ti tested EKWB Vector RTX 2080Ti Strix and Reference PCB, Heatkiller IV Pro Strix and Reference and Phanteks Glacier and Bykski RTX 2080Ti
> 
> On Turing Aquacomputer Kryographics been best on my loop, Heatkiller IV Pro has been pretty much on par with Kryographics, Phanteks has been around 1-2°C worse than Heatkiller and EKWB has been dead last and Bykski overtaken my EKWB hahaha
> 
> IN and OUT should be always followed on every block that's my view
> 
> Hope this helps
> 
> Thanks, Jura


What do you think is the best block between the Phanteks and the EKWB for the Strix 3090? Have you been able to try many blocks on this gen as well?


----------



## changboy

edsontajra said:


> Guys, *** is going on with the FTW3 ULTRA 3090 cards? Tons of threads on the EVGA forum and Reddit of people with the ~red light of death~. The card suddenly dies.


I follow that forum too and I haven't seen many posts about this; the thread is nearly 200 pages and I have read it all.
Many ask for an RMA because of high power draw on the PCIe slot, that's it.
If you put a bios not made for this card, like I do with the 1000W or a 2x8-pin bios, then it's at your own risk. Same with modding the PCB; who knows what will happen a year after doing this.
Every brand has production returns; that doesn't mean all the cards are bad or will die.


----------



## DrunknFoo

edsontajra said:


> Guys, *** is going on with the FTW3 ULTRA 3090 cards? Tons of threads on the EVGA forum and Reddit of people with the ~red light of death~. The card suddenly dies.


Just repaired 2 fuses, 1 for power pin 1 and power pin 2
(power pin 2 fuse was replaced prior)

/shrug


----------



## Falkentyne

DrunknFoo said:


> Just repaired 2 fuses, 1 for power pin 1 and power pin 2
> (power pin 2 fuse was replaced prior)
> 
> /shrug


Isn't there a way to bridge the dead fuses so you can get by without a fuse?


----------



## geriatricpollywog

Are recently produced FTW3 Ultras dying? You’d think EVGA would have fixed this by now.


----------



## changboy

DrunknFoo said:


> Just repaired 2 fuses, 1 for power pin 1 and power pin 2
> (power pin 2 fuse was replaced prior)
> 
> /shrug


Why did the fuse blow? The 1000W bios?


----------



## DrunknFoo

changboy said:


> Why did the fuse blow? The 1000W bios?


It blew basically on a load screen; at the time max power draw would have been 480-520W on the stock bios (450W) (shunted with slider at 50%). Had no block and the fan noise was pissing me off, hence the low power.


I have benched it prior anywhere between 850-920W without issues, and once over 950W using the 1000W bios.... For one reason or another it just blew 2 fuses (maybe static discharge? I really don't know).

First time fuse 2 blew, I accidentally bridged something as a piece of solder ran off...
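For perspective on what those bench numbers mean at the connectors, here is a back-of-envelope sketch. It assumes an FTW3-style 3x 8-pin layout and ~60 W through the slot; those splits are illustrative assumptions, not measurements from this card:

```python
def per_connector_amps(total_watts, slot_watts=60, connectors=3, rail_volts=12.0):
    """Estimate current per 8-pin connector.

    Assumes the PCIe slot supplies slot_watts and the remainder
    splits evenly across the 8-pin connectors on the 12 V rail.
    """
    per_conn_watts = (total_watts - slot_watts) / connectors
    return per_conn_watts / rail_volts

print(round(per_connector_amps(900), 1))  # ~23.3 A per 8-pin at a 900 W bench
print(round(per_connector_amps(450), 1))  # ~10.8 A at the stock 450 W limit
```

With the nominal 8-pin spec around 150 W (~12.5 A), a 900 W run is pushing roughly double the rated current through each connector and its fuse, which makes gradual fuse fatigue unsurprising.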


----------



## changboy

DrunknFoo said:


> It blew basically on a load screen; at the time max power draw would have been 480-520W on the stock bios (450W) (shunted with slider at 50%). Had no block and the fan noise was pissing me off, hence the low power.
> 
> 
> I have benched it prior anywhere between 850-920W without issues, and once over 950W using the 1000W bios.... For one reason or another it just blew 2 fuses (maybe static discharge? I really don't know).
> 
> First time fuse 2 blew, I accidentally bridged something as a piece of solder ran off...


You benched the FTW3 at 900W, ****! What's the PCIe slot power draw? So am I fine if I see 100W power draw on the PCIe slot (peak)?


----------



## DrunknFoo

Falkentyne said:


> Isn't there a way to bridge the dead fuses so you can get by without a fuse?


Ya, I did it in the past, but what happens is the solder ends up breaking from too high a power draw... The problem with bridging is that the resistance must be close to the other fuses (I can't remember what the reading was), not sure why. I spent 2 days on and off trying out different fuses and finally managed to run benchmarks without something popping.


----------



## Falkentyne

DrunknFoo said:


> Ya, I did it in the past, but what happens is the solder ends up breaking from too high a power draw... The problem with bridging is that the resistance must be close to the other fuses (I can't remember what the reading was), not sure why. I spent 2 days on and off trying out different fuses and finally managed to run benchmarks without something popping.


Are you going to try modding the small 1206 shunts on your card linked to MSVDD and NVVDD or just using the 520W or 1000W bios?


----------



## DrunknFoo

Falkentyne said:


> Are you going to try modding the small 1206 shunts on your card linked to MSVDD and NVVDD or just using the 520W or 1000W bios?


My card is already shunted.
With the EK block on, my revived FTW3 is doing a bit better now.













Not planning to push this card anymore; going back to the 450W bios and dialing in a stable OC for gaming.


----------



## Biscottoman

Mp5works has just introduced a new variant in their store with two G1/4 ports, to run in series like people have done using a RAM waterblock with holes drilled in the backplate.


----------



## jomama22

DrunknFoo said:


> It blew basically on a load screen; at the time max power draw would have been 480-520W on the stock bios (450W) (shunted with slider at 50%). Had no block and the fan noise was pissing me off, hence the low power.
> 
> 
> I have benched it prior anywhere between 850-920W without issues, and once over 950W using the 1000W bios.... For one reason or another it just blew 2 fuses (maybe static discharge? I really don't know).
> 
> First time fuse 2 blew, I accidentally bridged something as a piece of solder ran off...


My guess is that the fuse just wore out and finally popped. They usually won't go at their rating, but running over that limit again and again weakens the filament. The same thing happens with household breakers: you can usually run them over by about 10-15%, but they weaken and finally go at a random time. Once that happens, it will usually start to flip below its rating.


----------



## gfunkernaught

I pulled the trigger on the EK Trio block and backplate; hopefully I'll get it at the end of Feb. I'm excited and nervous. I want to try the 500W bios, but I'm def afraid to push past that even under water. Hopefully my silicon is a good sample and the board is of good quality. Fingers crossed!


----------



## moccor

Falkentyne said:


> Try the pads I linked. There is feedback on amazon saying they helped people with Vega 64 and eVGA RTX 2080 super cards, so they should be better than what you have at least.
> Once you get everything working, post your findings here.


So I re-padded my Zotac 3090 Trinity today with the Gelid GP Extreme 2.0mm pads, and it seems to have worked well. It is for sure better than the stock pads. I think I am now being limited by the VRM though; I noticed the VRM pads are def thicker than 2.0mm, so 2.5 or 3.0 may be needed there. And since I removed and replaced the heatsink a bunch of times, I am sure that negatively impacted the 2 VRM thermal pads, even if only a little bit. But I really appreciate the help. I think these 3090s and 3080s truly had decent stock thermal pads, because they were necessary by design, and basic 6.0W/mK pads just weren't enough for the GDDR6X. I am going to order some extras off Aliexpress since I ordered a couple 3080s. Do thermal pads truly 'expire'? I've seen some with an expiration date of like 1 year, though Gelid doesn't mention one. I plan to keep them in a ziplock bag to keep moisture out, just in case I need them 1 or 5 years from now, since Aliexpress takes over 30 days for deliveries to my house.

Stock Phoenixminer - 108C
Gelid GP Extreme Phoenixminer - 102C


----------



## J7SC

0451 said:


> Are recently produced FTW3 Ultras dying? You’d think EVGa would have fixed this by now.


yeah, it must take some of the fun out of a new purchase like that for folks just having that in the back of their mind - even if their card is actually fine. I wonder if Steve from GN will do a special about the 'red light of death' with those cards (as he has done before on other recurring GPU issues).

I am still grappling with the new Win 10 pro install on an old X99 mobo w/ some device manager PCI and other warnings (not MEI or chipset drivers), and that platform is not ideal to max out the 3090 potential. But I won't pull apart the newer systems I have (productivity - gaming hybrids) such as the one with the dual 2080 Tis...besides, this thing will run 4K exclusively. Still, I wouldn't mind a Ryzen 59xx, but no CPUs in sight (or the mobos I would want). The 3090 Strix seems to be a good sample though, knocking on 15,000 in Port Royal after just a few tries. Best results with VRAM so far at 1325...may be a bit more left, but for a stock bios and air-cooled card, I couldn't be happier. VRAM should be lower though for daily use..and looking forward to w-cooling both sides of this thing.




----------



## Falkentyne

moccor said:


> So I re-padded my Zotac 3090 Trinity today with the Gelid GP Extreme 2.0mm pads, and it seems to have worked well. It is for sure better than the stock pads. I think I am now being limited by the VRM though; I noticed the VRM pads are def thicker than 2.0mm, so 2.5 or 3.0 may be needed there. And since I removed and replaced the heatsink a bunch of times, I am sure that negatively impacted the 2 VRM thermal pads, even if only a little bit. But I really appreciate the help. I think these 3090s and 3080s truly had decent stock thermal pads, because they were necessary by design, and basic 6.0W/mK pads just weren't enough for the GDDR6X. I am going to order some extras off Aliexpress since I ordered a couple 3080s. Do thermal pads truly 'expire'? I've seen some with an expiration date of like 1 year, though Gelid doesn't mention one. I plan to keep them in a ziplock bag to keep moisture out, just in case I need them 1 or 5 years from now, since Aliexpress takes over 30 days for deliveries to my house.
> 
> Stock Phoenixminer - 108C
> Gelid GP Extreme Phoenixminer - 102C


Sorry I don't know the answer to these questions.


----------



## geriatricpollywog

J7SC said:


> yeah, it must take some of the fun out of a new purchase like that for folks just having that in the back of their mind - even if their card is actually fine. I wonder if Steve from GN will do a special about the 'red light of death' with those cards (as he has done before on other recurring GPU issues).
> 
> I am still grappling with the new Win 10 pro install on an old X99 mobo w/ some device manager PCI and other warnings (not MEI or chipset drivers), and that platform is not ideal to max out the 3090 potential. But I won't pull apart the newer systems I have (productivity - gaming hybrids) such as the one with the dual 2080 Tis...besides, this thing will run 4K exclusively. Still, I wouldn't mind a Ryzen 59xx, but no CPUs in sight (or the mobos I would want). The 3090 Strix seems to be a good sample though, knocking on 15,000 in Port Royal after just a few tries. Best results with VRAM so far at 1325...may be a bit more left, but for a stock bios and air-cooled card, I couldn't be happier. VRAM should be lower though for daily use..and looking forward to w-cooling both sides of this thing.


I'd like to see a killshot video on the FTW3, but Steve tends to reserve those for safety and ethical issues. GN doesn't have the best technical content, but they are killing it with case reviews and industry news.

I'm seeing some main thread throttling in FS2020 so I know what you mean about wanting a CPU upgrade. I plan on getting an 11900k.

15,000 is definitely a great score for air-cooled. What are your temps?


----------



## Zogge

motivman said:


> Man, I don't get it, this is pissing me off. Good news is that I can overclock the memory to max +1500 stable, but GPU delta T is literally sitting at 30C, but it idles at my water temp? Can't wait to get this EK block on monday... gonna be a long weekend, not sure what is going on with this phanteks block to be honest.


If you remember my earlier post, I was so unhappy with the EK temps on my 3090 Strix, peaking at 60C in a 520W load!! I remounted several times and still saw 57-59C with 26C ambient and water at 32-33C. I asked EK, who said up to 25C delta T core-to-water was acceptable??
In fairness I upgraded my loop afterwards and now have about 27C water at the same 26C ambient, so the EK would probably land at 52-53C.
But I ordered the Bykski, added some pads on the VRMs and the back of the core, and double washers to tighten the screws even more. 3 of them are tight like a mountain but one refuses to get a good grip and is borderline loose, so the mount is not perfect but ok.
Still, my Bykski never goes above 45C under the same load.
The MP5 backplate brought it down to 44C, which is not a lot, but the memory is now down to a 52-54C peak.

My theory: either I suck at mounting coolers, or the samples vary in quality regardless of brand, since some people with EK are happy. If anyone wants to buy my EK Strix wb cheap, just raise your hand.


----------



## J7SC

0451 said:


> I'd like to see a killshot video on the FTW3, but Steve tends to reserve those for safety and ethical issues. GN doesn't have the best technical content, but they are killing it with case reviews and industry news.
> 
> I'm seeing some main thread throttling in FS2020 so I know what you mean about wanting a CPU upgrade. I plan on getting an 11900k.
> 
> 15,000 is a definitely a great score for air-cooled. What are your temps?


...max temps at 97% fan speed were at 67 C...which is quite good for 500+ W. The backplate for this run had an Arctic P12 and an Arctic F8. The 4k-rpm GentleTyphoon I tried out before was just too hard to keep from moving around for now, until I find a permanent mounting arrangement for it. Regarding the GPU stock fans on the front, they only really get annoyingly noisy once past 70%, IMO

Your 10700K at 5.4 giggles is limiting you in FS2020 ? Then again, FS2020 / dev mode indicator is always showing limits for s.th., though even with just a few runs in FS2020 and that X99 (8c/16T), I got a lot of 'green', perhaps because it is an HEDT quad channel which might help. FYI, I plan to do a FS2020 head-to-head with both the 2080 Ti SLI-CFR and the 3090 in a while..


----------



## geriatricpollywog

J7SC said:


> ...max temps at 97% fan speed were at 67 C...which is quite good for 500+ W. The backplate for this run had an Arctic P12 and an Arctic F8. The 4k-rpm GentleTyphoon I tried out before was just too hard to keep from moving around for now, until I find a permanent mounting arrangement for it. Regarding the GPU stock fans on the front, they only really get annoyingly noisy once past 70%, IMO
> 
> Your 10700K at 5.4 giggles is limiting you in FS2020 ? Then again, FS2020 / dev mode indicator is always showing limits for s.th., though even with just a few runs in FS2020 and that X99 (8c/16T), I got a lot of 'green', perhaps because it is an HEDT quad channel which might help. FYI, I plan to do a FS2020 head-to-head with both the 2080 Ti SLI-CFR and the 3090 in a while..


I think I cracked the substrate on my CPU because it’s now thermally limited to 5.2ghz, even at 1.2v at LLC5. Before that, I was able to run FS2020 at 5.5ghz. It sucks going from golden silicon quality to mere average. I don’t have rolling cache enabled, so that might be why I’m seeing main thread limit.


----------



## J7SC

0451 said:


> I think I cracked the substrate on my CPU because it’s now thermally limited to 5.2ghz, even at 1.2v at LLC5. Before that, I was able to run FS2020 at 5.5ghz. It sucks going from golden silicon quality to mere average. I *don’t have rolling cache enabled,* so that might be why I’m seeing main thread limit.


re. the substrate on your 10700K CPU, I am sorry to hear that. I have wrecked a superb 3970X once- and obviously still regret that.

...rolling cache *definitely* helps - a lot - in FS2020 ! I transferred my 200 GB rolling cache over from the other system today so 'all things equal' comparisons are possible. The ideal FS2020 to chase is a 4K / Ultra / Dense w/ extra traffic and over a big metro area showing all green, with rapidly alternating main thread and GPU limitations at 60 fps (which means there isn't a single bottleneck to address). Of course there are folks who have tried FS2020 at 8K (back to the drawing board  ). The Strix 3090 will get a real work-out with FS 2020 and Cyberpunk 2077...


----------



## Falkentyne

0451 said:


> I think I cracked the substrate on my CPU because it’s now thermally limited to 5.2ghz, even at 1.2v at LLC5. Before that, I was able to run FS2020 at 5.5ghz. It sucks going from golden silicon quality to mere average. I don’t have rolling cache enabled, so that might be why I’m seeing main thread limit.


How did you crack the substrate? Are you direct die?


----------



## Nizzen

motivman said:


> Memory on this strix is insane... never have I been able to max out memory overclock on afterburner, and this thing keeps passing the benchmarks. How can I try +1600? lol


EVGA "Precision X"


----------



## GQNerd

Nizzen said:


> Evga "precision x"


This.. Strix can hit 1800+ in some 3dmark runs with proper cooling
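To translate those offsets into bandwidth: GDDR6X is double data rate, so on a 384-bit bus the math is straightforward. This sketch assumes the common interpretation that the Afterburner offset adds directly to the ~9,752 MHz memory clock (so +1000 lands around the "10700 on the ram" figures quoted later in the thread); treat that mapping as an assumption, not gospel:

```python
def bandwidth_gbs(offset_mhz=0, base_clock_mhz=9752, bus_bits=384):
    """Theoretical memory bandwidth in GB/s for a GDDR6X card.

    Effective data rate is 2x the memory clock (double data rate);
    bytes per transfer = bus width / 8.
    """
    data_rate_mts = 2 * (base_clock_mhz + offset_mhz)  # mega-transfers/s
    return data_rate_mts * (bus_bits / 8) / 1000       # GB/s

print(round(bandwidth_gbs(0)))      # ~936 GB/s stock (matches the spec sheet)
print(round(bandwidth_gbs(1600)))   # ~1090 GB/s at a +1600 offset
```

So a +1600 offset, if it holds without error-correction looping, is roughly a 16% bandwidth bump over stock.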


----------



## Nizzen

Miguelios said:


> This.. Strix can hit 1800+ in some 3dmark runs with proper cooling


I benchmarked on cold water with 1700 😂🤟


----------



## geriatricpollywog

Falkentyne said:


> How did you crack the substrate? Are you direct die?


Yes, direct die. I don’t know if I actually cracked the substrate but I asked Buildzoid and that was his theory.


----------



## Nizzen

motivman said:


> hey @Nizzen, is you strix shunt modded currently? or are you running the XOC bios for daily.


I sold my shunt-modded Strix to Carillo.
The daily bios on my 3090 Strix OC White is stock LOL.
My other Strix, the black one, is on the stock bios too when I'm not benchmarking. Good enough performance in the only game I play, BF V 

The MSI 3090 Suprim X is on the 1000W bios on air. Using 550W max


----------



## Spiriva

Reporting in, still using the 1000w bios (~75% msi afterburner) on my PNY 2x8pin. It works perfectly so far.
Gaming around 5h/day.

For me the 1000w bios is very good.


----------



## rix2

edsontajra said:


> Guys, *** is going on with the FTW3 ULTRA 3090 cards? Tons of threads on the EVGA forum and Reddit of people with the ~red light of death~. The card suddenly dies.


Tons? I've read that red-light club thread on the EVGA forum; maybe 10-15 cards, and the same people post this across all the forums. And cards from other companies never die?


----------



## SolarBeaver

Guys, is this a CNC machining error, or is it acceptable? Upper right corner of the plate that covers the chip die: a circle-like mark with an uneven surface, and also a square-like mark on the upper left. I'm thinking that's why I'm getting such bad temps, but who knows... Already contacted support, but I don't want them to fool me into thinking it's normal if it's not.


----------



## mouacyk

SolarBeaver said:


> Guys, is this a CNC machining error, or is it acceptable? Upper right corner of the plate that covers the chip die: a circle-like mark with an uneven surface, and also a square-like mark on the upper left. I'm thinking that's why I'm getting such bad temps, but who knows... Already contacted support, but I don't want them to fool me into thinking it's normal if it's not.


If it looks like a duck, quacks like a duck, it's a ... defect. How do you find the courage to put something like that on a $2000 GPU? I suppose with thermal protection and that being the last thing to make contact can only result in bad temps and not something breaking.


----------



## motivman

Miguelios said:


> This.. Strix can hit 1800+ in some 3dmark runs with proper cooling


*@86Jarrod*
@Miguelios
do you remember how hot your memory junction temp got with the Phanteks block? While playing Control with no DLSS, mine gets up to 92C (+1500 on mem), and the backplate of the Phanteks gets very hot to the touch. Just touching the backplate with my fingers leads to a 2C reduction in temp. I need either a waterblock on the backplate or to switch the pads to Fujipoly. Jarrod, do you remember the thermal pad sizes for the Phanteks block? I remember you said you replaced yours with something better. This Phanteks block doesn't seem to like me, lol.


----------



## SolarBeaver

mouacyk said:


> If it looks like a duck, quacks like a duck, it's a ... defect. How do you find the courage to put something like that on a $2000 GPU? I suppose with thermal protection and that being the last thing to make contact can only result in bad temps and not something breaking.


Thanks, I just found it when I decided to re-assemble because of bad temps; didn't check it the first time because I was too excited and tbh didn't expect something like this to happen. First time assembling a custom water loop, tough luck I guess...


----------



## gfunkernaught

Spiriva said:


> Reporting in, still using the 1000w bios (~75% msi afterburner) on my PNY 2x8pin. It works perfectly so far.
> Gaming around 5h/day.
> 
> For me the 1000w bios is very good.


Does the 1kw bios down clock to 2D mode?


----------



## WayWayUp

The 1000w bios everyone is using is from Galax?
I heard it's not as good as the one from Vince?


----------



## ttnuagmada

Zogge said:


> If you remember my earlier post, I was so unhappy with the EK temps on my 3090 Strix, peaking at 60C in a 520W load!! I remounted several times and still saw 57-59C with 26C ambient and water at 32-33C. I asked EK, who said up to 25C delta T core-to-water was acceptable??
> In fairness I upgraded my loop afterwards and now have about 27C water at the same 26C ambient, so the EK would probably land at 52-53C.
> But I ordered the Bykski, added some pads on the VRMs and the back of the core, and double washers to tighten the screws even more. 3 of them are tight like a mountain but one refuses to get a good grip and is borderline loose, so the mount is not perfect but ok.
> Still, my Bykski never goes above 45C under the same load.
> The MP5 backplate brought it down to 44C, which is not a lot, but the memory is now down to a 52-54C peak.
> 
> My theory: either I suck at mounting coolers, or the samples vary in quality regardless of brand, since some people with EK are happy. If anyone wants to buy my EK Strix wb cheap, just raise your hand.


Something's not right there. I get about a 16C max delta on my EK/Strix combo with the 520w bios under stress testing.


----------



## motivman

ttnuagmada said:


> Something's not right there. I get about a 16C max delta on my EK/Strix combo with the 520w bios under stress testing.


how about your memory junction temp? what is your peak?


----------



## edsontajra

rix2 said:


> Tons? I read this red light club in evga forum, maybe 10-15 cards, same people write this all forums and the card of other companies will not die?


Sorry dude, that's not the case. There are people who are on their 3rd RMA. All died.

Do a little research and you will see.


----------



## des2k...

Well this is weird. I was playing some Control this morning and noticed my memory temps higher than usual. I normally sit at 56-58C for extended play; this morning memory climbs to 62C after 5 mins of play.

Ran 'phoenixminer -bench' and it looks like the memory is working harder than before: 120.5MH/s vs 117.8MH/s.
Same mem OC, a 2.5MH/s gain at +853.

*Anyone know what is happening here? *Reminds me of old Pascal GPUs with memory steps losing perf and needing to go up/down bins.

*120MH/s*









*117MH/s*










----------



## ttnuagmada

motivman said:


> how about your memory junction temp? what is your peak?


I peak at 80C at +1000. At some point I'm either going to try to re-seat it and maybe switch out the thermal pads, or hopefully EK will release the active backplate for the Strix. I also run my fans at about 400 RPM, so I'm sure case airflow could be better.

I think memory could be better, but it's not high enough for me to go through the effort of draining my loop to deal with it right now. It's hardline and fairly complicated.


----------



## Jpmboy

Esenel said:


> *At which power consumption and which water flow?*
> These variables are essential.
> 
> If anybody wants to compare their block under similar conditions to other blocks, just pay a visit here:
> 
> 
> 
> 
> 
> 
> 
> 
> [Übersicht] - RTX 30x0 Wasserkühlervergleich | GPU Block Comparison
> 
> 
> Einleitung Hallo zusammen, da es an verschiedenen Stellen bereits Informationen zu diesem Thema gibt, würde ich gerne anfangen alle Daten zentral und übersichtlich zu sammeln. Es geht darum, die Leistung der jeweiligen GPU-Blöcke der verschiedenen Hersteller transparent auszulisten. Die Form...
> 
> 
> 
> 
> www.hardwareluxx.de


The 1000w bios on an FTW3 is meaningless for gaming IMO.
This is with the 500W bios, under a continuous 375-425 watt load. Flow is ~3LPM as measured using an Aquaero and an AQ MPS High Flow sensor. Inlet and outlet temps are measured at the rad in, CPU block in, and GPU block(s) out. Three rigs here have Aquaeros with MPS, pressure and thermal sensors; one has the Koolance TM5 (inboard) card with an AQ GiGant rad tower and 3 D5 pumps (and 3 Titan Vs) with flow sensors; two others have the Koolance ERM-3K3U rad systems (flow sensor, etc.) with temp sensors for hot- and cold-side coolant.


----------



## des2k...

Jpmboy said:


> The 1000w bios on an FTW3 is meaningless for gaming IMO.
> This is with the 500W bios, under a continuous 375-425 watt load. Flow is ~3LPM as measured using an Aquaero and an AQ MPS High Flow sensor. Inlet and outlet temps are measured at the rad in, CPU block in, and GPU block(s) out. Three rigs here have Aquaeros with MPS, pressure and thermal sensors; one has the Koolance TM5 (inboard) card with an AQ GiGant rad tower and 3 D5 pumps (and 3 Titan Vs) with flow sensors; two others have the Koolance ERM-3K3U rad systems (flow sensor, etc.) with temp sensors for hot- and cold-side coolant.


I'm curious about water flow now. For example, D5s are rated at 1000L/h+, but people with flow sensors show as little as 200L/h of actual flow.

I only have an XPC 600L/h pump and added another no-name 600L/h in series when flow got pretty low with the 3rd rad. Now I'm running a 4th rad and don't know my actual flow. I'm guessing it's probably not 200L/h like a D5, but it does move the water fast when filling up, and air bubbles also move out of the loop pretty fast.
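One thing worth noting about flow numbers: the steady-state water temperature rise across the loop is just heat load divided by (mass flow x water's specific heat). A quick sketch with illustrative numbers (not anyone's measured loop):

```python
def coolant_delta_t(watts, flow_lph, c_p=4186.0):
    """Steady-state water temperature rise (deg C) across the loop.

    flow_lph: coolant flow in litres/hour; water is ~1 kg/L and
    c_p is its specific heat in J/(kg*K).
    """
    kg_per_s = flow_lph / 3600.0
    return watts / (kg_per_s * c_p)

print(round(coolant_delta_t(500, 200), 2))  # ~2.15 C at a "slow" 200 L/h
print(round(coolant_delta_t(500, 600), 2))  # ~0.72 C at 600 L/h
```

Even at a "bad" 200 L/h the water itself only rises ~2C across a 500 W loop, which is why extra flow mostly matters for block cold-plate performance (as discussed below) rather than for bulk water temperature.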


----------



## Jpmboy

des2k... said:


> Well this is weird, I was playing some Control this morning and noticed my memory temps higher then usual. I sit at 56c,58c for extended play. This morning memory climbs to 62c after 5mins of play.
> 
> Ran 'poenixminer -bench' and looks like memory is working harder than before, it's 120.5MH/s vs 117.8MH/s.
> Same mem OC, 2.5MH/s gain at +853.
> 
> *Anyone know what is happening here ? *Reminds me of old pascal gpu with mem steps loosing perf and need to go up / down bins.
> 
> *120MH/s
> 
> 
> 117MH/s*
> 
> 
> 
> 
> 
> 
> 
> 
> image-2021-01-30-163851
> 
> 
> Image image-2021-01-30-163851 hosted in ImgBB
> 
> 
> 
> 
> ibb.co


120-ish is about right for a reported hash rate. The memory in these cards has an error-check protocol with matching checksums; if it is borderline "stable" and the checksums do not match, the procedure call is repeated until they do match, or the hash is dropped as incorrect (rejected). That looping lowers the hashrate (loss of efficiency) and makes the ram work harder = more heat.
Question: why are you running a pumped-up gpu clock? For Cuda compute, unless you disable the P2 cuda lockout, that clock is not the actual one. I get about 121MH/s at -200 core, 10700 on the ram.
Also - check out the Cuda 11.1 miner in T-rex. It's a better bench for this card and driver series.
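The replay behaviour described above can be sketched as a toy model (this is an illustration of the retry effect, not GDDR6X's actual replay logic): if a fraction p of transfers fails the checksum and is retried until it passes, the expected attempts per useful transfer is 1/(1-p), so throughput scales by (1-p) while the chips do extra work (hence extra heat):

```python
def effective_hashrate(raw_mhs, error_rate):
    """Toy model of error-detect-and-replay memory behaviour.

    Each transfer independently fails with probability error_rate
    and is retried until it succeeds, so useful throughput scales
    by (1 - error_rate).
    """
    return raw_mhs * (1.0 - error_rate)

def relative_memory_work(error_rate):
    """Expected transfers issued per useful transfer: 1 / (1 - p)."""
    return 1.0 / (1.0 - error_rate)

print(effective_hashrate(121.0, 0.02))       # ~118.6 MH/s at a 2% replay rate
print(round(relative_memory_work(0.02), 3))  # ~1.02x the memory traffic (heat)
```

This is why a memory OC can "pass" while quietly hashing slower and running hotter: the errors are corrected, but not free.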


----------



## Jpmboy

des2k... said:


> I'm curious about water flow now, for example D5 are rated at 1000l/h+ but people with flow sensors show as little as 200l/h for flow.
> 
> I only have a xpc 600l/h and added another no name brand 600l/h in series when flow got pretty low with 3rd rad. Now running a 4th rad and don't know my actual flow. I'm guessig it's prob not 200l/h like d5 but it does move the water fast when filling up and air bubbles also move out pretty fast from the loop.


the rated flow is with zero head pressure. These pump types are very susceptible to reduced flow with slight increases in head pressure (this is why you can run them in series and even turn off the last pump in the train without blocking flow; there is no "one-way valve").

Depending on the cold plate, flow rate to a single block can have counter-intuitive effects. Fine-fin cold plates need high flow to penetrate the fins (which tend to clog in the long run), whereas blocks like the Bykskis on the three Titan Vs here work well with lower flow than the EKs (jet-plate type) on the two 2080 Tis. The fine fin cuts on the Optimus block benefit from higher flow rates (flow pressure).
Frankly, I personally do not like jet-plate-style cold plates - they tend to collect crap over the long term (like years running 24/7), leading to degraded cooling performance over time.


----------



## Gebeleisis

SolarBeaver said:


> Guys, is it cnc-machining error or is it acceptable? Upper right corner of the thing that covers chip die, circle-like thing which has uneven surface, and also some square-like thing on the upper left. I'm thinking that's why I'm getting such bad temps, but who knows... Already contacted support, but don't want them to fool me into thinking that's normal if it's not.


that looks a lot worse than my bykski block


----------



## des2k...

Jpmboy said:


> 120-ish is about right for a reported hash rate. The memory in these cards has an error-check protocol with matching checksums; if it is borderline "stable" and the checksums do not match, the procedure call is repeated until they do match, or the hash is dropped as incorrect (rejected). That looping lowers the hashrate (loss of efficiency) and makes the ram work harder = more heat.
> Question: why are you running a pumped-up gpu clock? For Cuda compute, unless you disable the P2 cuda lockout, that clock is not the actual one. I get about 121MH/s at -200 core, 10700 on the ram.
> Also - check out the Cuda 11.1 miner in T-rex. It's a better bench for this card and driver series.


P2 Cuda is disabled. I was just testing memory temps and didn't want to change the oc profile. But yeah, you want negative offset, it's all about memory speed.

Will try T-rex later. 
I'm going to re-test for stability first. Not sure if going up in MH/s for the same oc could fail stress test or heavy load games.


----------



## Jpmboy

des2k... said:


> P2 Cuda is disabled. I was just testing memory temps and didn't want to change the oc profile. But yeah, you want negative offset, it's all about memory speed.
> 
> Will try T-rex later.
> I'm going to re-test for stability first. Not sure if going up in MH/s for the same oc could fail stress test or heavy load games.


Yeah, games tend to bork memory stability, and it varies by code and the actual coders 
Just as an FYI, I pulled the rigs here off other work and dedicated them to ETH for the past few weeks. I'm currently at $2400 per month with 630MH/s combined. Figured since I hadn't mined for a few years, ETH seemed at a sweet spot even this late in its life.
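Scaling those figures down to a single card is simple proportion (pool revenue is linear in hashrate, so a 3090's share is just its fraction of the combined rate):

```python
def per_card_monthly(total_monthly_usd, total_mhs, card_mhs):
    """Pro-rate pool mining revenue to one card by its hashrate share."""
    return total_monthly_usd * (card_mhs / total_mhs)

# A single ~120 MH/s 3090 out of the 630 MH/s combined above:
print(round(per_card_monthly(2400, 630, 120), 2))  # ~457 USD/month
```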


----------



## GQNerd

motivman said:


> *@86Jarrod*
> @Miguelios
> do you remember how hot your memory junction temp got with the Phanteks block? While playing Control with no DLSS, mine gets up to 92C (+1500 on mem), and the backplate of the Phanteks gets very hot to the touch. Just touching the backplate with my fingers leads to a 2C reduction in temp. I need either a waterblock on the backplate or to switch the pads to Fujipoly. Jarrod, do you remember the thermal pad sizes for the Phanteks block? I remember you said you replaced yours with something better. This Phanteks block doesn't seem to like me, lol.


Max I’ve ever seen was 82C after replacing the default pads with some 15 W/mK Gelid Ultimate ones.

FYI, the Phanteks website lists the thickness of each pad. Did the homework for ya; scroll to the specs at the bottom:


https://www.phanteks.com/salestools/PH-GB3090ASSRX.jpg



They use some odd sizes in some areas (1.75mm?), so you’ll have to buy thicker pads and compress them and/or stack them (2.5mm).


----------



## J7SC

Jpmboy said:


> the rated flow is with zero head pressure. These pump types are very susceptible to reductions in flow with slight increases in head pressure (this is why you can run them in series and even turn off the last pump in the train and not block flow; there is no "one-way valve").
> 
> Depending on the cold plate, flow rate to a single block can have counter-intuitive effects. Fine-fin cold plates need high flow to penetrate the fins (which tend to clog in the long run), whereas blocks like the Bykski(s) on the three TVs here work well with lower flow than the EKs (jet-plate type) on the two 2080 Tis. The fine fin cuts on the Optimus block benefit from higher flow rates (flow pressure).
> Frankly, I personally do not like jet-plate-style cold plates - they tend to collect crap over the long term (like years running 24/7), leading to degraded cooling performance over time.


...yeah, this seems reminiscent of a comparison test DerBauer did for Caseking re. the D5 and DDC a while back (both pumps are very good). Long story short: 'on paper', the flow specs of the D5 are better, but because of its larger diameter it is more susceptible to sudden pressure drops (for example air bubble pockets). However, when two D5s are used in series that problem goes away and it becomes near unbeatable. D5s also run a bit cooler than DDCs to begin with. I always use 2x D5s for fail-over reasons on work-play hybrids anyway.

@Jpmboy and others: quick question, not directly related to cooling a 3090, but to making it run even better. When you have a sim like FS2020, which is intensive on the network chip, the CPU and the GPU, is it better to have the rolling cache on a separate SSD, or on the same SSD (= 'C' root / programs drive)? I am trying to max the performance of an old X99 setup, which does have some nice quad-channel RAM speed, until I can either get and build a new Zen 3 X570, or even a Zen 3 Threadripper, which should be out soon. Thanks


----------



## motivman

Miguelios said:


> Max I’ve ever seen was 82c after replacing the default pads with some 15w/mk Gelid Ultimate ones.
> 
> FYI, the Phanteks website lists the thickness of each pad.. Did the homework for ya, scroll to specs at the bottom:
> 
> 
> https://www.phanteks.com/salestools/PH-GB3090ASSRX.jpg
> 
> 
> 
> They use some weird sizes in some areas (1.75mm?), so you’ll have to buy thicker pads and compress them and/or stack them (2.5mm).


Thanks man. I already opened the backplate and replaced the pads on the memory modules with Minus Pad 8 2mm; temperatures went down 10C.


----------



## SolarBeaver

Gebeleisis said:


> that looks a lot worse than my bykski block


Have you RMA'd it? I've already filled out the form; hope they'll accept it without any trouble. I've had enough of this 3090 thing already...


----------



## Jpmboy

J7SC said:


> ...yeah, this seems to reminiscent of a comparison test DerBauer did for Caseking re. D5 and DDC a while back (both pumps are very good). Long story short: 'On paper', flow specs of the D5 are better but because of its larger diameter, it is more susceptible to sudden pressure drops (for example air bubble pockets). However, when two D5s are used in series, that problem goes away and it becomes near unbeatable. D5s also running a bit cooler to begin with than DDC. I always use 2x D5s for fail-over reasons re. work-play hybrids anyway.
> 
> @Jpmboy and others. Quick question not directly related to cooling a 3090, but making it run even better: When you have a sim like FS2020 which is intensive on the network chip, the CPU and the GPU, is it better to have the rolling cache on a separate SSD, or the same SSD (= 'C' root / programs drive) ? I am trying to max performance of an old X99 setup which does have some nice quad-channel RAM speed until I can either get and build a new Zen 3 X570, or a even Zen-3 Threadripper which should be out soon. Thanks


Sorry bud, I have little experience with FS2020 per se, but AFAIK, keeping the cache on the fastest block-read source should be optimal.
D5s are the better Laing pump for sure. I have to say tho, I have DDC pumps here that have been running continuously for almost 10 years now. And an old Eheim that's been at it even longer!


----------



## gfunkernaught

Hey anyone here with a Trio and ek block having any issues with memory temps? Or any issues at all with the block?


----------



## mirkendargen

J7SC said:


> ...yeah, this seems to reminiscent of a comparison test DerBauer did for Caseking re. D5 and DDC a while back (both pumps are very good). Long story short: 'On paper', flow specs of the D5 are better but because of its larger diameter, it is more susceptible to sudden pressure drops (for example air bubble pockets). However, when two D5s are used in series, that problem goes away and it becomes near unbeatable. D5s also running a bit cooler to begin with than DDC. I always use 2x D5s for fail-over reasons re. work-play hybrids anyway.
> 
> @Jpmboy and others. Quick question not directly related to cooling a 3090, but making it run even better: When you have a sim like FS2020 which is intensive on the network chip, the CPU and the GPU, is it better to have the rolling cache on a separate SSD, or the same SSD (= 'C' root / programs drive) ? I am trying to max performance of an old X99 setup which does have some nice quad-channel RAM speed until I can either get and build a new Zen 3 X570, or a even Zen-3 Threadripper which should be out soon. Thanks


D5s are quieter than DDCs too, and I believe more reliable, although I've never had either fail. I personally run 2 D5s in series to cover a failure, and have a flow sensor and HWiNFO set up to trigger a shutdown if there's 0 flow. I'm paranoid of something happening when I'm not home, etc.


----------



## J7SC

Jpmboy said:


> Sorry bud, I have little experience with FS2020 per se, but AFAIK, keep in the cage on the fastest block-read source should be optimal.
> D5s are the better Laing pump for sure. I have to say tho, I have DDC pumps here that have been running continuously for almost 10 years now. And an old Eheim that's been at it even longer!


Thanks. I was just wondering about the chipset bottlenecks and CPU usage w/ two identical-make/model SSDs. On pumps, I have had multiple D5s running for 8+ years now - and I only lost one due to my own '''genius''': I was pre-testing a giant system w/ 20 server fans, confused a D5 Molex with a fan Molex, then went to answer the phone. When I came back into the room, I heard a banshee-like sound and there was a funny smell... what you get when you run a D5 for 10 min with no fluids  . BTW, I also have an ancient TT240 / Asetek AIO which I rebuilt last year (the fluid had partially turned into crud solids), but the pump was fine and it is still going strong..


----------



## geriatricpollywog

des2k... said:


> Well this is weird. I was playing some Control this morning and noticed my memory temps higher than usual. I sit at 56-58C for extended play; this morning the memory climbed to 62C after 5 mins of play.
> 
> Ran 'phoenixminer -bench' and it looks like the memory is working harder than before: it's 120.5MH/s vs 117.8MH/s.
> Same mem OC, a 2.5MH/s gain at +853.
> 
> *Anyone know what is happening here? *Reminds me of the old Pascal GPUs with mem steps losing perf and needing to go up / down bins.
> 
> *120MH/s*
> View attachment 2477378
> 
> 
> *117MH/s*
> 
> 
> 
> 
> 
> 
> 
> 


I wouldn’t be surprised, since you are mining with a full overclock and presumably an unlocked BIOS. I am seeing 107 MH/s with the power slider at 65% and 280W, +0 mem and core. Is the extra 10-13 MH/s worth pulling double the power and thermal load?
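The trade-off above is easy to put in numbers as hashes per watt. A quick sketch; 107 MH/s at 280W is from the post, while the 560W figure for the overclocked case is only an assumption taken from "pulling double the power":

```python
# Mining efficiency (MH per watt) for the two operating points discussed.

def mh_per_watt(hashrate_mhs: float, watts: float) -> float:
    """Hashrate delivered per watt of board power."""
    return hashrate_mhs / watts

capped = mh_per_watt(107.0, 280.0)     # power slider at 65%, stock clocks
unlocked = mh_per_watt(120.5, 560.0)   # assumed doubled draw, full OC

print(f"capped: {capped:.3f} MH/W, unlocked: {unlocked:.3f} MH/W")
```

Under that assumption the power-limited profile is well ahead per watt; the exact numbers depend on the real draw of the overclocked card, which the thread doesn't state.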


----------



## motivman

What is the consensus on double-stacking thermal pads? I cannot find any 2mm Fujipoly; the thickest they seem to sell is 1.5mm. Would it be OK if I stack two 1mm thermal pads?


----------



## Jpmboy

J7SC said:


> Thanks. I was just wondering about the chipset bottle necks and CPU usage w/ two identical-make/model SSDs. On pumps, I have multiple D5s running for 8+ years now - and I only lost one due to my own '''genius''' - was pre-testing a giant system w/ 20 server fans and confused a D5 Molex with a fan molex, then went to answer the phone. When I came back into the room, I heard a banshee like sound and there was a funny smell...what you get when you run a D5 for 10 min with no fluids  . BTW, I also have an ancient TT240 / Asetek AIO which I rebuild last year (fluids partially turned into crud solids) but the pump was fine, and now it is still going strong..


Yeah, we've all had those "oh-sheet" moments! 
Getting back to the cache question... if the SSDs are NVMe, aside from a permanent ramdisk, it likely would not matter which disk it is on. Now, a CPU-attached NVMe drive may be somewhat faster (vs chipset PCIe NVMe), but as you know, some of the speed increase can be readily measured but not really noticed in real time. Kinda like one of those wine-tasting things where the bottles are masked...


----------



## Jpmboy

motivman said:


> what is the consensus on double stacking thermal pads? I cannot find any 2mm fujipoly, thickest they seem to sell is 1.5mm. Would it be ok if I stack two 1mm thermal pads?


Yes for the 17 W/mK pads (they are essentially a thermal putty).


----------



## Falkentyne

motivman said:


> what is the consensus on double stacking thermal pads? I cannot find any 2mm fujipoly, thickest they seem to sell is 1.5mm. Would it be ok if I stack two 1mm thermal pads?


It's better to stack than to have improper contact. Stacking lowers the effectiveness, though. And the 17 W/mK Fujipolys are highway robbery. 60mm x 50mm? It takes like four for an entire card.
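A rough back-of-the-envelope for why stacking costs you something: two 1mm pads conduct exactly like one 2mm pad of the same material, but the extra pad-to-pad contact adds an interface resistance. The interface value below is an assumed illustrative number, not a datasheet figure:

```python
# One 2mm pad vs two stacked 1mm pads of the same material: conduction
# resistance (thickness / conductivity) is identical, but the stack adds
# a pad-to-pad contact resistance on top.

def pad_resistance(thickness_mm: float, k_w_per_mk: float) -> float:
    """Conduction resistance per unit area, in m^2*K/W."""
    return (thickness_mm / 1000.0) / k_w_per_mk

K = 17.0          # W/(m*K), high-end Fujipoly class
R_IFACE = 1e-4    # m^2*K/W, assumed extra pad-to-pad interface

single_2mm = pad_resistance(2.0, K)
stacked_1mm = 2 * pad_resistance(1.0, K) + R_IFACE

print(single_2mm < stacked_1mm)  # True: the stack is always worse
```

So a stack works, just a bit worse than a single pad of the right thickness, and compressing the pads firmly (as suggested earlier in the thread) shrinks that interface penalty.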


----------



## Shawnb99

mirkendargen said:


> D5's are quieter than DDC too, and I believe more reliable although I've never had either fail. I personally do 2 D5's in series to cover a failure, and have a flow sensor and hwinfo setup to trigger a shutdown if there's 0 flow. I'm paranoid of something happening when I'm not home, etc.


Both are just as reliable. I’ve had MCP35X2s last 10+ years running 24/7, and I've had a few D5s die. Another difference: at max speed a D5 will dump about 25-35W into the loop, whereas the DDC doesn’t, and it benefits from a heatsink. It's smaller as well, so it takes up less space. It can be a bit loud, but I found that under 37% or so they are dead quiet.
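For perspective on that 25-35W heat dump, the single-pass temperature rise of the coolant is dT = P / (mass flow x specific heat). The 3 L/min flow rate below is an assumed typical loop value, not from the thread:

```python
# Single-pass coolant temperature rise from pump heat: dT = P / (m_dot * c_p).

WATER_CP = 4186.0   # specific heat of water, J/(kg*K)
WATER_RHO = 1.0     # density of water, kg/L

def loop_delta_t(pump_watts: float, flow_l_per_min: float) -> float:
    """Temperature rise of the water in one pass through the pump, in K."""
    mass_flow_kg_s = flow_l_per_min * WATER_RHO / 60.0
    return pump_watts / (mass_flow_kg_s * WATER_CP)

# 30 W from a flat-out D5 into a 3 L/min loop:
print(round(loop_delta_t(30.0, 3.0), 2))  # 0.14 K per pass
```

The per-pass rise is tiny; what the extra pump wattage actually costs at steady state is a small offset in loop water temperature that depends on radiator capacity, which is why it rarely shows up as more than a degree or so.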


----------



## Pepillo

MacGyver strikes again!!!!

I remembered that I had some slim hard-drive fans stored away, a couple of this particular Scythe model:





Startseite - Scythe EU GmbH - www.scythe-eu.com





So with some thermal pads I mounted one of them on the backplate of my 3090:










And it works very well: 10º to 15º less on the memory. Aesthetically it looks tailor-made, and it is a silent fan of only 1,000 rpm. Problem solved.


----------



## jura11

Biscottoman said:


> What do you think is the best block between phanteks and ekwb for strix 3090? Have you been able to try many blocks even on this gen?


With the current generation of GPUs I have tested only two blocks, Bykski and Alphacool. On the same loop the Bykski outperformed the Alphacool by 2°C: VRAM temperatures with the Bykski have been at 52-60°C most of the time, while with the Alphacool I have seen high 60s to high 70s. I tested both blocks in rendering with Blender and some 8-hour folding in Folding@Home. For sure I will be getting Heatkiller and EKWB blocks, just for some testing, and will use them later on other projects and builds.

Which block is best for the Strix I really don't know; some people have reported issues with the EKWB block, while for Alphacool I haven't heard of any problems with their blocks. Personally I like Bykski this generation, I'm just not sure how it performs on the Strix. I think @Nizzen tested EKWB and Bykski on his Strix 

Hope this helps 

Thanks, Jura


----------



## kx11

Pepillo said:


> MacGyver strikes again!!!!
> 
> I remembered that I had slim hard drive fans stored around, a couple of this particular Scythe model:
> 
> 
> 
> 
> 
> Startseite - Scythe EU GmbH - www.scythe-eu.com
> 
> 
> 
> 
> 
> So with some thermal pads I mounted one of them on the backplate of my 3090:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And it works very well, 10º to 15º less on memory. Aesthetically it looks tailor-made, looks good, and is a silent fan of only 1,000 rpm. Subject solved.


I might consider this too. Where do you connect the fan exactly? I have an Aqua Computer controller with lots of fans connected to it; do I use that, or just settle for a mobo fan header?


----------



## J7SC

Pepillo said:


> MacGyver strikes again!!!!
> 
> I remembered that I had slim hard drive fans stored around, a couple of this particular Scythe model:
> 
> 
> 
> 
> 
> Startseite - Scythe EU GmbH - www.scythe-eu.com
> 
> 
> 
> 
> 
> So with some thermal pads I mounted one of them on the backplate of my 3090:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And it works very well, 10º to 15º less on memory. Aesthetically it looks tailor-made, looks good, and is a silent fan of only 1,000 rpm. Subject solved.


I've had that very fan since 2013 or so (though not the heatsink). The ultra-thin fan is now on one of my systems (top in pic), cooling VRM and RAM. I've never had any issues with that fan in all these years; the quality seems great and they cool well for their size / speed.


----------



## motivman

Hey, can someone assist with this? So I lock my clock speed to 2175 @ 1.1V in Afterburner, however my effective clock is only 2000. What gives?


----------



## gfunkernaught

motivman said:


> Hey, can someone assist with this. So I lock my clock speed to 2175 @ 1.1v in afterburner, however, my effective clock is only 2000? what gives?


You have to use the offset+curve method: set the offset slider to one or two bins below the desired clock and apply, then in the curve find your desired clock/voltage point, set it, and apply.


----------



## 86Jarrod

motivman said:


> *@86Jarrod*
> @Miguelios
> do you remember how hot your memory junction temp got with the Phanteks block? While playing Control with no DLSS, mine gets up to 92C (+1500 on mem), and the backplate of the Phanteks gets very hot to the touch. Just touching the backplate with my fingers leads to a 2C reduction in temp. I need either a waterblock on the backplate or to switch the pads to Fujipoly. Jarrod, do you remember the thermal pad sizes for the Phanteks block? I remember you said you replaced yours with something better. This Phanteks block doesn't seem to like me, lol.


I have a fan that blows on the backplate. I don't think I ever see above 70. I'll put it in the OSD while I game today to make sure that's right, but I'm pretty sure. I left the thick-ass stock pads on the back 'cause I only bought 0.5 and 1mm, so I only replaced the 0.5 and 1mm pads.


----------



## Esenel

Jpmboy said:


> This is with the 500W bios, under a continuous 375-425 watt load. Flow is ~3LPM as measured using Aquaero and an AQ MPS High Flow sensor.


With 400W and 210 l/h the EK block has a delta of 11K on the Strix, and the Optimus has a delta of 10K? But it is much more expensive.

I guess it's much more accessible in the US though.


----------



## Pepillo

J7SC said:


> I have that very fan since 2013 or so (though not the heat-sink) The ultra-thin fan is now on one of my systems (top in pic), cooling VRM and RAM. Never had any issues with that fan for all these years, quality seems great and they cool well for their size / speed


Yes, the fan is good quality; mine also has many years of life on it.



kx11 said:


> I might consider this too, where do you connect the fan exactly?? i have AquaComputers controller with lots of fans connected to it, do i use it or just settle with the Mobo fan header?


I use a mobo header near the PCIe.


----------



## marti69

Pepillo said:


> MacGyver strikes again!!!!
> 
> I remembered that I had slim hard drive fans stored around, a couple of this particular Scythe model:
> 
> 
> 
> 
> 
> Startseite - Scythe EU GmbH - www.scythe-eu.com
> 
> 
> 
> 
> 
> So with some thermal pads I mounted one of them on the backplate of my 3090:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And it works very well, 10º to 15º less on memory. Aesthetically it looks tailor-made, looks good, and is a silent fan of only 1,000 rpm. Subject solved.


I was doing the same before I got the EK backplate; now temps are much better. With a fan on the back of the stock Trio backplate I used to get 75 to 80C; now with the EK backplate and no fan on it, temps are around 60C max on the DDR.


----------



## gfunkernaught

marti69 said:


> I was doing the same before I got the EK backplate; now temps are much better. With a fan on the back of the stock Trio backplate I used to get 75 to 80C; now with the EK backplate and no fan on it, temps are around 60C max on the DDR.


When people here mention RAM temps, they're referring to the RAM on the back of the board, right? Guess backplates and passive cooling aren't enough? I'm not looking to OC my RAM much. I remember with my 2080 Ti I OC'd the RAM to +1200 (8200MHz) and was fine with that.


----------



## iunlock

Finally arrived...










Into what first? Hmm... I'm thinking I'll run it through the 9900KS rig first, then the 10900K, the 7980XE, then the 3970X.

Really hoping that it's a good bin... 🤞


----------



## J7SC

Does anyone know exactly what & where the HWiNFO 'Memory Junction Temperature' is measured? Is it an average? Does it include front & back VRAM chips? ...on my Strix 3090, that temp is typically in the mid-70s, even when the GPU is shown at mid-60s. Tx.


----------



## des2k...

J7SC said:


> Does anyone know exactly what & where the HWInfo 'Memory Junction Temperature' is measured ? Is it an average ? Does it include front & back VRAM chips ? ...on my Strix 3090, that temp is typically in the mid-70s, even when the CPU is shown at mid-60s. Tx.


Not an average; it's the highest among the mem chips.


----------



## J7SC

des2k... said:


> not average, it's the highest among those mem chips


Thanks, so 75C max or so on 'Memory Junction Temperature' is nothing to be concerned about (of course colder would be better)?


----------



## galsdgk

DOOOLY said:


> So has anyone received the EK block for MSI 3080/3090 yet? Mine will be delivered tomorrow for my 3090 Trio.


@Rhadamanthys , getting close. Doing the tubing and leak testing this weekend. I used liquid metal on the GPU, which was a little sketch. I got some globbing on some of the components around the GPU chip, but luckily I had them coated with conformal coating and checked my work before adding power.


----------



## des2k...

J7SC said:


> Thanks, so 75 C max or so on 'Memory Junction Temperature' is nothing to get concerned about (of course colder would be better) ?


Should be fine; it depends on the quality of the thermal pads and whether you have a fan on the backplate.

Unless you go waterblock for the backplate, it's pretty hard to keep it below 70C.

Control 4K DLSS was 64C+ and the mining bench is 70C for me, with fans, at +853 mem.

On air alone they reach 105C, so yeah...


----------



## HyperMatrix

J7SC said:


> Thanks, so 75 C max or so on 'Memory Junction Temperature' is nothing to get concerned about (of course colder would be better) ?


I find anything higher than around 73-75C to be a crash point for my memory, depending on how much I'm OC'ing it. But every card will be different. If you're not crashing, 75C itself isn't a worrying temperature for your memory. It would be nice for it to be lower, but it's not something I would be actively concerned about.


----------



## gfunkernaught

anyone here using seasonic TX series PSUs with their 3090s?


----------



## J7SC

des2k... said:


> should be fine, depends on the quality of the thermal pads and if you have a fan on the backplate
> 
> Unless you go waterblock for the backplate it's pretty hard to keep it bellow 70c.
> 
> Control 4k dlss was 64c+ and mining bench is 70c for me with fans +853mem.
> 
> On air only they reach 105c, so yeah...





HyperMatrix said:


> I find getting higher than around 73-75C to be a crash point for my memory depending on how much I'm OC'ing it. But every card will be different. If you're not crashing, 75C itself isn't a worrying temperature for your memory. Would be nice for it to be lower, but it's not something I would be actively concerned about.


Thanks guys  No crashing yet... still on stock air & BIOS, but looking forward to the waterblock + backplate active cooling. Below are the temps for a Superposition 4K run just now (_starting/max_ GPU clock 2220, VRAM +1325, same as Port Royal before). GPU voltage rarely if ever exceeds 1.069V, which is decent. 

The GT 4k fan did get VRAM temps lower by about 4C, and I plan to fix it in place on the weekend (or just go w/ 2x Arctic P12s) once I finish 'setup / calibration' next to my other systems


----------



## Falkentyne

gfunkernaught said:


> anyone here using seasonic TX series PSUs with their 3090s?


I'm using a Seasonic PX-1000 with my 600W-shunted 3090 FE and the Seasonic 12-pin cable. No problems.
And that's a PX-1000, so the TX-1000 should be even higher spec.


----------



## gfunkernaught

Falkentyne said:


> I'm using a Seasonic PX-1000 with my 600W shunted 3090 FE with the Seasonic 12 pin cable. No problems.
> And that's a PX-1000 so the TX-1000 should be even higher spec.


I found a Seasonic Prime Ultra Titanium 1000W (SSR-1000TR) but I can't tell if it's a TX or not. What appeals to me about the TX is the single 8-pin PCIe cables; my Trio is a 3x 8-pin card. I've read that Seasonic hasn't joined the DSP game yet like Corsair has with their "i" series. Hopefully I'll get my block from EK soon. Until I get a new PSU, I'll have to settle for the 450W BIOS.

Does anyone have any issues with the EVGA SuperNOVA PSUs?


----------



## gfunkernaught

I have a PSU efficiency question. Say my full system load is 650W, meaning all components are at 100% load. If I use a 1000W Titanium PSU that has, say, 93% efficiency at 50% load (500W), then 650W would probably land at around 85% efficiency, right? Is there a half-load rule with PSUs? Like, to maintain efficiency you want a PSU rated for at least twice the max load of your components. Is that a thing? I don't want to waste power either, especially if I will be gaming at 500W on the GPU alone.


----------



## Falkentyne

gfunkernaught said:


> I found a Seasonic Prime Ultra Titanium Series 1000W SSR-1000TR but I can't tell if it's a TX or not. What appeals to me about the TX is the single 8pin PCIe cables. My trio is a 3x8pin card. I've read that Seasonic hasn't joined the DSP game yet like Corsair with their "i" series. Hopefully will get my block soon from ek. Until I get a new psu, I'll have to settle with the 450w bios.
> 
> Does anyone have an issue with the EVGA Supernova PSU's?


That's neither. But it is a Titanium PSU.
It uses the old naming scheme so it "might" have the "old" OCP circuit. It all depends on when it was manufactured. 
I would not buy it if you don't already own it to begin with.

The new naming scheme is PX-xxx for Prime Platinum and TX-xxx for Prime Titanium and GX-xxx for Prime Gold.
All of these "oneSeasonic" names have the new OCP circuit. These are the ones you want.


----------



## motivman

So my reference card is clocking about 3 bins above my Strix... The Strix memory, however, is insane and can max out at +1500 (+1250 stable in games); the reference card maxes out memory at +1000 (+800 stable in games)... Both score about the same in benchmarks. Which should I keep? Can I close the core-overclocking gap on the Strix with an EVC2SX? Max overclock in Control, maxed out at 3440x1440, no DLSS:

STRIX --- 2100/11002
Reference --- 2150/10552

I tested a particular scene in Control, and they both get exactly the same FPS - 64 FPS, lol.

Not sure which one to keep; leaning towards the reference card. What do you guys think?


----------



## des2k...

motivman said:


> So my reference card is clocking about 3 bins above my strix... Strix memory however is insane and can max out +1500 (+1250 stable in games), reference card maxes out memory at +1000 (+800 stable in games)... Both score about the same in benchmarks, which should I keep? can I close the gap in core overclocking on the Strix with an evc2sx? max overclock in control maxed out 3440X1440, no DLSS
> 
> STRIX --- 2100/11002
> Reference --- 2150/10552
> 
> I tested with a particular scene in control, and they both get the exact same FPS - 64 FPS, lol.
> 
> not sure which one to keep, leaning towards reference card. What do you guys think?


Control with everything maxed, 4K DLSS (1440p render res)? What control point was that FPS at? I'm just curious to test my reference card's FPS  

If I had a choice I wouldn't keep a card that only does 2100; that seems like a really bad bin. What voltage?


----------



## Thanh Nguyen

motivman said:


> So my reference card is clocking about 3 bins above my strix... Strix memory however is insane and can max out +1500 (+1250 stable in games), reference card maxes out memory at +1000 (+800 stable in games)... Both score about the same in benchmarks, which should I keep? can I close the gap in core overclocking on the Strix with an evc2sx? max overclock in control maxed out 3440X1440, no DLSS
> 
> STRIX --- 2100/11002
> Reference --- 2150/10552
> 
> I tested with a particular scene in control, and they both get the exact same FPS - 64 FPS, lol.
> 
> not sure which one to keep, leaning towards reference card. What do you guys think?


Keep the one that beats my FTW3 in Metro Exodus 1440p, RTX on, DLSS off.


----------



## gfunkernaught

Falkentyne said:


> That's neither. But it is a Titanium PSU.
> It uses the old naming scheme so it "might" have the "old" OCP circuit. It all depends on when it was manufactured.
> I would not buy it if you don't already own it to begin with.
> 
> The new naming scheme is PX-xxx for Prime Platinum and TX-xxx for Prime Titanium and GX-xxx for Prime Gold.
> All of these "oneSeasonic" names have the new OCP circuit. These are the ones you want.


----------



## motivman

des2k... said:


> Control that's everything max, 4k dlss(1440p render res)?, what control point was that for that fps ? I'm just curious to test my reference fps
> 
> If I had a choice I wouldn't keep a card that does 2100 only, that seems like really bad bin. What voltage ?


That's the max stable overclock in Control. I flashed the 1000W BIOS on both of them; currently testing, but results are below.

Reference (max overclock) --- 2190/10552 (max effective clock - 2184), PR score -- 15345
STRIX (max overclock) --- 2160/11252 (max effective clock --- 2151), PR score --- 15257

The reference card will do 2235 with cold temps; haven't tested the Strix yet, but I'm about to see what it can do with cold temps tonight.

I told y'all the reference card was a better buy... lol. Looks like the Strix might be going back to Micro Center.


----------



## gfunkernaught

I think I answered my own question about efficiency. Turns out there is a 2x rule for load/efficiency: to get the highest efficiency at your load, you want a PSU rated for at least twice your system's total (theoretical) load. In my case, my system's max load is 650W (once I put the 500W BIOS on my 3090), so I would have to get a PSU of at least 1300W to maintain max efficiency at 650W. Looking at the AX1000: about 93% @ 650W, and it only improves with higher-wattage PSUs. This reminds me of the half-load capacity rule in astrophotography for deep-space objects. Balance; become water, my friend.
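The arithmetic behind that is just wall draw = DC load / efficiency, with the difference showing up as heat. A quick sketch; the efficiency points are illustrative, actual values come from the measured curve of a specific unit:

```python
# AC wall draw and wasted heat for a DC-side load at a given PSU efficiency.

def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """AC watts pulled from the wall to deliver dc_load_w to the system."""
    return dc_load_w / efficiency

load = 650.0
for eff in (0.93, 0.90, 0.87):
    ac = wall_draw(load, eff)
    print(f"{eff:.0%} efficient: {ac:.0f} W from the wall, {ac - load:.0f} W as heat")
```

So dropping a few efficiency points at the same 650W load costs a few tens of watts, all of it as extra case/room heat rather than performance.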


----------



## changboy

motivman said:


> that's the max stable overclock in control. I flashed the 1000W bios on both of them, currently testing but results are below
> 
> Reference (Max Overclock) --- 2190/10552 (max reference clock - 2184) PR score -- 15345
> STRIX (Max Overclock) --- 2160/11252 (max reference clock --- 2151) PR Score --- 15257
> 
> reference card will do 2235 with cold temps, haven't test strix yet, but about to see what it can do with cold temps tonight.
> 
> I told y'all the reference card was a better buy... lol. Looks likes strix might be going back to microcenter.


What do you mean by REFERENCE, is it the Founders Edition?


----------



## motivman

changboy said:


> What you mean by REFERENCE, its the founder edition ?


No, the Founders Edition does not have the reference PCB. The PNY Epic-X, Galax SG and Palit Gaming Pro all have the reference PCB.


----------



## motivman

Looks like the Strix is going back to Micro Center; no chance of it beating my reference card... the reference card really is special. Someone make me a good offer on the reference and I might just keep the Strix, lol...


----------



## J7SC

motivman said:


> no, founders edition does not have reference PCB. PNY epic-x, Galax SG, palit gaming pro, these all have reference PCB.


well, to each his own. I will definitely keep my 3090 Strix, it seems to be a good sample (and I have seen 2235 on it, mind you when cold). For most benches and games, I have set it to 2220 (start, before bin drop due to temps) and VRAM to a 1325 for now, but will slow it down a little until I can w-cool it.

Here is my 'quick and dirty' method to check a new GPU and its quality, and potentially compare it to another one...as far as I remember, you already flashed the Strix and you would have to return it to full stock in order to use these quick-valuation methods.

1.) With the card at bone-stock (no oc, no PL), I run a quick Superposition 4k w/ GPUz open and see where the natural max GPU MHz boost settles. In my 3090 Strix's case, it was around 2045 - 2060 as I recall. The second thing I check is max watt level at stock in Superposition.

2.) In the old days, we used to get ASIC quality values (%, higher is usually better) in GPUz, but that led to a lot of 'open box returns' and RMA 'claims'...though sub-zero XOCers liked some of the lower-ASIC% cards, but that is a special case.

As ASIC values are no longer published, there is another way I use to choose between cards if I have the option - and that is comparing the TOTALLY STOCK (incl. bios) V/F curves in MSI AB, with MSI AB freshly installed for each card. Obviously, it makes no sense to change anything in that curve manually, as you want to compare 'as is'. That curve is sort of like an ASIC value, as it shows you the voltage at a given MHz for that specific chip. I take screenshots of each instance and compare them. I used this method with my 2x 2080 Aorus XTR WBs and found it to be accurate. Now, VRAM is trickier, especially with GDDR6X, with which I have exactly 4 days of use experience...but I run a battery of tests (including TS GT2, Firestrike Ultra, Superposition, and now FS2020 and CP 2077).

If you want to make up your mind between the Strix and whatever 'reference' card you have, I would do the above two steps and compare them (both on their respective stock bios, of course). The other thing you might want to consider is that if you're planning heavy OC, hard mods and such, a solid 3x 8-pin with a very strong VRM is preferable. Finally, Asus ROG may very well come up with a Matrix version, and/or an XOC bios (like they did for the 2080 Ti...)
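For anyone who wants to script step 1 instead of eyeballing GPUz, here's a rough Python sketch of the same idea: poll nvidia-smi while Superposition loops and record where the natural boost clock and board power settle. The query flags are standard nvidia-smi; the loop itself (single GPU, 60 one-second samples) is just an illustration, not a validated tool.

```python
# Sketch of the "where does stock boost settle" check: poll nvidia-smi during
# a stock Superposition 4K run and keep the peak SM clock and power draw seen.
# Assumes one GPU and that nvidia-smi is on PATH.
import subprocess
import time

QUERY = ["nvidia-smi", "--query-gpu=clocks.sm,power.draw",
         "--format=csv,noheader,nounits"]

def parse_sample(line):
    """Parse one CSV line like '2055, 487.3' into (mhz, watts)."""
    clock_str, power_str = line.split(",")
    return int(clock_str.strip()), float(power_str.strip())

def watch_boost(samples=60, interval=1.0):
    """Poll for `samples` intervals; return peak clock (MHz) and power (W)."""
    peak_clock, peak_power = 0, 0.0
    for _ in range(samples):
        out = subprocess.check_output(QUERY, text=True).strip()
        mhz, watts = parse_sample(out.splitlines()[0])
        peak_clock = max(peak_clock, mhz)
        peak_power = max(peak_power, watts)
        time.sleep(interval)
    return peak_clock, peak_power
```

Run `watch_boost()` while the bench loops and compare the pair it returns between the two cards, same as comparing GPUz screenshots.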


----------



## changboy

From what I see you all have good cards, guys, hehehe, play games!

Me, I've tried about 10 times to beat the big boss Aaron in The Division 2, I hate this guy hehehehe. I give up for now lol.
I don't know if I set the difficulty too high, but it seems I can't change it.


----------



## changboy

This is the most beautiful build I've ever seen, it's amazing, I never saw a job like this:


----------



## motivman

...


----------



## long2905

Anyone else mining on the 1000W vbios? Just want to check whether the wattage reported here is accurate, or should I reduce it by ~30% since I'm using a reference-based card? I already set the limit to 40% in Afterburner.


----------



## GQNerd

motivman said:


> looks like strix is going back to microcenter, no chance of beating my reference card... looks like reference card is really special. Someone make me a good offer on the reference and I might just keep the strix, lol...


Silicon lottery strikes again.. 
Keep the Ref card dude.

Unless u ever get that KP


----------



## changboy

I've never mined, what program do I need to download to do this?


----------



## J7SC

Miguelios said:


> Silicon lottery strikes again..
> Keep the Ref card dude.
> 
> Unless u ever get that KP


..or Galax HoF +...with either, it is not only about the bios but the HW (such as VRM filtering) and also the specific control software that comes with those cards - if you chase benchmarks over anything else, that is probably the way to go...and/or consider sub-zero XOC

I am very grateful for my Strix sample, not only because it just ran 2250 / 1325 / 400 W (light rendering) but because it is perfect for my intended use case for this card once I complete the setup. I am still surprised and thrilled that I could even get a 3090...I had posted in this thread early on when the 3090s released re. my interest in a 3x 8-pin card such as the Strix or Aorus XTR, but the supply situation up here was and still is atrocious. I went to two large computer stores today to pick up another network switch, and their Ampere shelves were empty, with long waiting lists (never mind their online stores). I literally ended up at the right place, at the right time, last Saturday...


----------



## Rhadamanthys

galsdgk said:


> View attachment 2477468
> 
> @Rhadamanthys , getting close. Doing the tubing and leak testing this weekend. I used liquid metal on the GPU, which was a little sketch. I got some globbing on some of the components around the GPU chip, but luckily I had them coated with conformal coating and checked my work before adding power.


Nice one! I broke down and ordered a reference card. Availability is much better over here compared to the Trio.


----------



## GQNerd

J7SC said:


> ..or Galax HoF +...with either, it is not only about the bios but the HW (such as VRM filtering) and also the specific control software that comes with those cards - if you chase benchmarks over anything else, that is probably the way to go...and/or consider sub-zero XOC


It was a msg to OP as he mentioned he was going to get a KP soon. I wasn’t suggesting it was the best card available.. Although it WAS until the GxHOF released, but they’re even harder to obtain...

I had a monster Strix (PR 15.6k), but got rid of it because I have an even better KP with more granular control over the voltage.

For now I don’t plan to go sub-zero, despite my avy.. 

I’m married, have a teenager and newborn, + demanding ass job.. I bench and game for fun. Someday I’ll have time and space to get into LN2


----------



## geriatricpollywog

long2905 said:


> View attachment 2477497
> 
> Anyone else mining on the 1000W vbios? Just want to check whether the wattage reported here is accurate, or should I reduce it by ~30% since I'm using a reference-based card? I already set the limit to 40% in Afterburner.


Hell no, I'm mining with the normal bios at 65% power limit for 107 MH/s. I don't want to overclock and degrade my card for an extra 20 MH/s.

Keep it up though for science. I want to see how long your card lasts.
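Just to put numbers on the efficiency argument: a rough sketch, assuming a 350W stock limit for the reference card for illustration and taking the hashrates and limits quoted in this thread at face value (illustrative, not measured):

```python
# Illustrative MH/s-per-watt comparison. The 350W stock limit is an assumed
# reference TDP; 107 MH/s @ 65% PL and +20 MH/s @ a 40%-capped 1000W bios
# (= 400W) are the figures quoted in the thread, not measurements.
def mh_per_watt(hashrate_mh, power_w):
    """Mining efficiency: hashrate divided by board power."""
    return hashrate_mh / power_w

stock_bios = mh_per_watt(107, 0.65 * 350)   # 65% power limit on a 350W card
xoc_bios = mh_per_watt(107 + 20, 400)       # 1000W bios capped at 40%
```

That works out to roughly 0.47 MH/s per watt on the capped stock bios versus about 0.32 on the 400W setup, which is the degradation trade-off in a nutshell.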


----------



## des2k...

long2905 said:


> View attachment 2477497
> 
> Anyone else mining on the 1000W vbios? Just want to check whether the wattage reported here is accurate, or should I reduce it by ~30% since I'm using a reference-based card? I already set the limit to 40% in Afterburner.


That's not really accurate on 2x 8-pin with the 1000W bios. The best method so far is using HWiNFO64 custom sensors:
pcie + pin1 + pin2 for the total
pin1 and pin2 are balanced, so (pin1 + pin2) / 2 per connector

You can rename & import that into the registry
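If you'd rather sanity-check the math outside HWiNFO, the formula above boils down to this little Python sketch (the sensor values are made-up numbers for illustration):

```python
# Minimal sketch of the custom-sensor math described above: total board power
# is the PCIe slot rail plus both 8-pin rails, and since the two 8-pins share
# load evenly, their average gives the per-connector draw.
def total_board_power(pcie_w, pin1_w, pin2_w):
    """Total = PCIe slot + 8-pin #1 + 8-pin #2 (all in watts)."""
    return pcie_w + pin1_w + pin2_w

def per_connector_power(pin1_w, pin2_w):
    """The two 8-pins are balanced, so average them."""
    return (pin1_w + pin2_w) / 2
```

For example, total_board_power(60, 145, 150) gives 355 watts total, and per_connector_power(145, 150) gives 147.5 per 8-pin.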


----------



## long2905

0451 said:


> Hell no, I'm mining with the normal bios at 65% power limit for 107 MH/s. I don't want to overclock and degrade my card for an extra 20 MH/s.
> Keep it up though for science. I want to see how long your card lasts.


Yup, still going strong so far lol. I will be sure to keep you posted, and appreciate the kind words.


des2k... said:


> That's not really accurate on 2x 8-pin with the 1000W bios. The best method so far is using HWiNFO64 custom sensors:
> pcie + pin1 + pin2 for the total
> pin1 and pin2 are balanced, so (pin1 + pin2) / 2 per connector
> 
> You can rename & import that into the registry
> 
> View attachment 2477517


Thanks man, I will try this and see how it goes.

Max 270W: this is even better than what I get on the stock and KFA2 vbios, where I can get it down to 300W for about the same hashrate.


----------



## J7SC

Miguelios said:


> It was a msg to OP as he mentioned he was going to get a KP soon.* I wasn’t suggesting it was the best card available*.. Although it WAS until the GxHOF released, but they’re even harder to obtain...
> 
> I had a monster Strix (PR 15.6k), but got rid of it because I have an even better KP with more granular control over the voltage.
> 
> For now I don’t plan to go sub-zero, despite my avy..
> 
> I’m married, have a teenager and newborn, + demanding ass job.. I bench and game for fun. Someday I’ll have time and space to get into LN2


Yeah, I was making a more general comment about the various strengths of the KPE and Galax compared to just picking any old card, modding it and flashing an XOC bios its VRM was not designed for - with some folks wondering why their cards don't perform well, or even blow up. With cards such as the Galax and KPE, on the other hand, it is not only the binned GPUs, specific VRMs and bios but also the control software (akin to the physical EVBot from EVGA). The Strix is sort of a hybrid from that perspective, IMO, if cooled properly, and it too actually has NDA/hidden XOC software.

I like benching and modding as well, and did a fair amount of sub-zero some years back with good results, but then set that aside and skipped a few GPU gens before upgrading to 2080 Tis for both work + play. I actually wanted a set of Galax RTS 2080 Tis, yet Galax was/is more or less unobtanium (not sold in my country, and no warranty if bought via 3rd parties). But I ended up getting a very nice SLI set from Aorus, hitting position #15 in 3DM HoF PortRoyal (spoiler) on stock bios, and stayed in PR HoF for 1.5 yrs. 

The thing with sub-zero though in particular is that it can be a slippery slope down a rabbit hole with a snake's open jaw at the other end re. time spent (not only benching but preparation, including for safety of family). Like you, with a family and demanding job (congrats on the newborn btw), this is supposed to be a net-fun benefit, not a consuming obsession. 


Spoiler


----------



## Jpmboy

long2905 said:


> View attachment 2477497
> 
> Anyone else mining on the 1000w vbios? Just want to check if the wattage reported here is accurate or should i subtract it by 30% as im using a reference based card. i already set the limit to 40% in Afterburner.


There is no reason to mine with the 1000W bios. Hashing is basically a memory controller issue, and unless we get control of the SOC voltage rail, overclocking the core is meaningless for hashing. Additionally, unless you are using a miner that can use CUDA 11.1, the true (at-pool) hashrate will be below the reported rate.
Here's a snip using a 3090FTW3 Ultra with optimus block, 500W bios, -200 core, + 1200 on the ram. T-Rex miner using cuda 11.1.


----------



## Gebeleisis

Jpmboy said:


> There is no reason to mine with the 1000W bios. Hashing is basically a memory controller issue, and unless we get control of the SOC voltage rail, overclocking the core is meaningless for hashing. Additionally, unless you are using a miner that can use CUDA 11.1, the true (at-pool) hashrate will be below the reported rate.
> Here's a snip using a 3090FTW3 Ultra with optimus block, 500W bios, -200 core, + 1200 on the ram. T-Rex miner using cuda 11.1.
> View attachment 2477529


what is the power draw ?


----------



## long2905

Jpmboy said:


> There is no reason to mine with the 1000W bios. Hashing is basically a memory controller issue, and unless we get control of the SOC voltage rail, overclocking the core is meaningless for hashing. Additionally, unless you are using a miner that can use CUDA 11.1, the true (at-pool) hashrate will be below the reported rate.
> Here's a snip using a 3090FTW3 Ultra with optimus block, 500W bios, -200 core, + 1200 on the ram. T-Rex miner using cuda 11.1.
> View attachment 2477529


Yeah, so I'm not using the XOC bios specifically for mining, but rather to extend the power limit of my reference card. I use it mostly for gaming like the majority of people, but during downtime might as well get something back, you know?

Have you tried using nbminer? Any reason why you opted for T-Rex? I just tried it and found the server latency is slightly worse than on nbminer. Hashrate is the same.


----------



## zayd

Esteemed forum members, I have the opportunity to purchase an MSI GeForce RTX 3090 Suprim X. Can anyone provide any feedback on whether this will be a good GPU, amongst the many other AIB cards that are available? Most of the reviews are good for this card, but there's nothing like the experience of fellow PC enthusiasts like myself.

Also, is it the general consensus amongst this fraternity to use separate PCI-E cables from my PSU to the GPU? Do you guys have any recommendations for high-quality cables, and what length would you go for, as the distance from the PSU to the GPU is not that much?


----------



## gfunkernaught

zayd said:


> Esteemed forum members. I have the opportunity to purchase an MSI Geforce RTX 3090 Suprim X. Can anyone provide any feedback if this will be a good GPU, amongst the many other AIB cards that are available. Most of the reviews are good for this card, but there's nothing like the experience of fellow PC enthusiasts like myself.


If this helps at all, I have a Trio that auto boosts to 1980mhz at 50c with the 450 Suprim Bios and the PL set to 107%, and that is on the stock air cooler. Waiting on my ek block to see how far I can push it. I didn't touch the core/memory offset sliders.


----------



## Falkentyne

zayd said:


> Esteemed forum members. I have the opportunity to purchase an MSI Geforce RTX 3090 Suprim X. Can anyone provide any feedback if this will be a good GPU, amongst the many other AIB cards that are available. Most of the reviews are good for this card, but there's nothing like the experience of fellow PC enthusiasts like myself.
> 
> Also, is it the general consensus amongst this fraternity, to use separate PCI-E cables from my PSU to the GPU. Do you guys have any recommendations for high quality cables and what length would you go for, as the distance from the PSU to the GPU is not that much.


Does that card have 20A fuses on the 8-pins and PCIE? Fuses are garbage if you intend to mod or use a custom/1000W vbios.
I'd rather get a Strix or a 3x 8-pin reference card like a Palit, which doesn't have fuses. Ask @motivman about his card. Maybe he will sell you one, as he has two of them.


----------



## des2k...

Spending some time in Paint to find the best position for those heatsinks.
I already had fans running on the backplate; the thermal tape is surprisingly good, a 4C+ drop in temps.
The correct size for full coverage would have been better, but I had these mosfet heatsinks around.
Control 4K DLSS was 54C core, 52C mem.
Idle is 34C core, 36C mem.


----------



## Sheyster

Has any card vendor released a resizable BAR capable VBIOS yet? Apparently some mobo vendors have already released them. Yes, I know, we will still have to wait for Nvidia to release the new driver to fully enable it.


----------



## Sheyster

zayd said:


> Esteemed forum members. I have the opportunity to purchase an MSI Geforce RTX 3090 Suprim X. Can anyone provide any feedback if this will be a good GPU, amongst the many other AIB cards that are available. Most of the reviews are good for this card, but there's nothing like the experience of fellow PC enthusiasts like myself.
> 
> Also, is it the general consensus amongst this fraternity, to use separate PCI-E cables from my PSU to the GPU. Do you guys have any recommendations for high quality cables and what length would you go for, as the distance from the PSU to the GPU is not that much.


If you're just gaming, I highly recommend the 3090 FE. 

If you want to push the card, it's going to boil down to the silicon lottery.

Yes, use separate PCI-E cables always, if you can. I'm just using EVGA's stock cables (3 of them) and they work just fine with my Strix.


----------



## Jpmboy

Gebeleisis said:


> what is the power draw ?


Says right in the snip. And according to iCue it's within a few %.


long2905 said:


> yeah so im not using the XOC bios specifically for mining but rather to extend the power limit of my reference card. I use it for gaming mostly like the majority of people but during downtime might as well get something back you know?
> have you tried using nbminer? any reason why you opted for t-rex? I just tried it and found the server latency is slightly worse than on nbminer. hashrate is the same.


Yep, use the downtime wisely. 
I've tried a bunch but not nb. I'll give it a spin.
reported hash rate and actual (at pool) are different, right?

Here's a recent comparo: Best Ethereum Mining Software for Nvidia and AMD - Crypto Mining Blog (2miners.com)


----------



## Jpmboy

Sheyster said:


> Has any card vendor released a resizable BAR capable VBIOS yet? Apparently some mobo vendors have already released them. Yes, I know, we will still have to wait for Nvidia to release the new driver to fully enable it.


Good question since the Ampere hardware is all set for rBAR...


----------



## J7SC

I think I'm done with pre-testing and it's time to get this thing off the floor (pic) and put it into 'the thing' - a dual-mobo contraption for our master br which also holds a Z170 SOC Force. The trick is to plan ahead for the GPU w-cooling peripherals in the build now as it is still on stock air at assembly today...I also did find a good solution for the stock backplate cooling - two ancient TT-AIO fans which are very quiet yet move a lot of air. Final setup benches below, noting that Superpos 4k ran one bin higher. I also ordered some heatsinks for the custom w-cooling backplate for when it arrives. With the new fan arrangement, we don't have to listen to the 'sound' of the GentleTyphoon, but per HWInfo below, it dropped the temp delta between GPU and VRAM from 8 C to 4 C


----------



## Chamidorix

Jpmboy said:


> There is no reason to mine with the 1000W bios. Hashing is basically a memory controller issue, and unless we get control of the SOC voltage rail, overclocking the core is meaningless for hashing. Additionally, unless you are using a miner that can use CUDA 11.1, the true (at-pool) hashrate will be below the reported rate.
> Here's a snip using a 3090FTW3 Ultra with optimus block, 500W bios, -200 core, + 1200 on the ram. T-Rex miner using cuda 11.1.
> View attachment 2477529


My friend my friend this is called the MSVDD rail.


----------



## changboy

The RTX 3090 gives a similar result:


----------



## Falkentyne

Chamidorix said:


> My friend my friend this is called the MSVDD rail.


Do you think I should man up and mod those two 1206 shunts?


----------



## Sheyster

Falkentyne said:


> Do you think I should man up and mod those two 1206 shunts?


You've done so much already and been practicing your soldering skillz, so why not? 

This said, if you're not totally comfortable doing it yet, then don't!


----------



## mardon

Would anyone be so kind as to post an example of an MSI frequency curve which gives close reported and actual clocks? I was running 2140MHz earlier but the actual clock was about 2070MHz.

Is there a specific technique I should be using compared to Turing?


----------



## des2k...

mardon said:


> Would anyone be so kind to post an example of an MSI frequency curve which will give close reported and actual clocks. I was running 2140mhz earlier but actual clock was about 2070mhz.
> 
> Is there a specific technique I should be using over Turing?


Try this to see if you get better effective clocks:
nvidia-smi.exe -lgc 210,2145
Then, at whatever voltage you're stable, e.g. on the default curve, drag the 1056mV point to 2235MHz and double-check it's actually set there after you hit Apply.
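For reference, the lock-clocks one-liner wrapped as a tiny Python helper: -lgc takes "min,max" in MHz and -rgc undoes the lock. Building the command is all this sketch does; actually running it needs admin rights and the NVIDIA driver installed.

```python
# Small helpers mirroring the nvidia-smi one-liner above. -lgc locks the GPU
# clock range to "min,max" MHz; -rgc resets the lock to driver defaults.
def lock_gpu_clocks_cmd(min_mhz, max_mhz):
    """Build the argv that locks GPU clocks to [min_mhz, max_mhz]."""
    return ["nvidia-smi", "-lgc", f"{min_mhz},{max_mhz}"]

def reset_gpu_clocks_cmd():
    """Build the argv that resets the clock lock."""
    return ["nvidia-smi", "-rgc"]
```

Pass the result to subprocess.run (or paste the joined command into an elevated prompt) to apply it.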


----------



## mardon

des2k... said:


> Try this to see if you have better effective clock,
> nvidia-smi.exe -lgc 210,2145
> Whatever voltage your stable, ex. from default curve, drag 1056mv point to 2235mhz and double check it's actually set at that after you hit apply


Thanks for the reply. I'm really sorry, can you take it back one step? Where do I input the nvidia-smi command? Is it a separate app?
I've seen it referenced on this thread a few times too.
Is there somewhere I can go to read about it? My Google search didn't actually bring up any program.


----------



## Falkentyne

mardon said:


> Thanks for the reply. I'm really sorry, can you take it back one step? Where do I input the Nvidia SMI? Is this a different app?
> I've seen it referenced on this thread a few times too.
> Is there somewhere I can go to read about it? My Google search didn't actually bring up any program.


Anywhere in command prompt. I already know the windows/system32 folder works. Not sure if it registers globally, depends where the file is located.


----------



## mardon

Falkentyne said:


> Anywhere in command prompt. I already know the windows/system32 folder works. Not sure if it registers globally, depends where the file is located.


OK cool, sorry, I thought I needed a specific program.


----------



## Jpmboy

Chamidorix said:


> My friend my friend this is called the MSVDD rail.


That's the ram voltage, yeah, I know. SoC is a different bus and VRM set.


----------



## Jpmboy

mardon said:


> OK cool sorry thought I needed a specific program.


it's here: C:\Program Files\NVIDIA Corporation\NVSMI
And ya need to do a "open command prompt here" in the folder.


----------



## Falkentyne

Jpmboy said:


> That's the ram voltage, yeah, I know. SoC is a different bus and VRM set.


RAM is MVDDC (FBVDD). Uncore is MSVDD (also known as SRAM).


----------



## Nizzen

zayd said:


> Esteemed forum members. I have the opportunity to purchase an MSI Geforce RTX 3090 Suprim X. Can anyone provide any feedback if this will be a good GPU, amongst the many other AIB cards that are available. Most of the reviews are good for this card, but there's nothing like the experience of fellow PC enthusiasts like myself.
> 
> Also, is it the general consensus amongst this fraternity, to use separate PCI-E cables from my PSU to the GPU. Do you guys have any recommendations for high quality cables and what length would you go for, as the distance from the PSU to the GPU is not that much.


Good GPU, with a bit of extra backplate cooling integrated.

I'm using this 3090 Suprim X with the 1000W bios.


----------



## changboy

I just tried the curve in MSI Afterburner and played Cyberpunk with the curve set at 2140MHz and memory +1000.
The card boosts to 2115MHz when playing Cyberpunk.
If I set 2160MHz, the game crashes after a few minutes.


----------



## motivman

Falkentyne said:


> Does that card have 20A fuses on the 8 pins and PCIE? Fuses are garbage if you intend to mod or use a custom/1000w vbios.
> I'd rather get a strix or a 3 8 pin reference card like a Palit, which doesn't have fuses. Ask @motivman about his card. Maybe he will sell you one as he has two of them.


20A fuses shouldn't be an issue IMHO. Unlike the Gaming X Trio, the Suprim has better VRMs.


----------



## J7SC

changboy said:


> I just try the curve in msi afterburner and i played cyberpunk with curve set at 2140mhz and memory +1000.
> Card is boosting at 2115mhz when play cyberpunk.
> If i put 2160mhz game crash after some minute.


Yeah I find Cyberpunk 2077 is a real core stress tester (w/RTX / 4K Psycho), while FS2020 is tough on VRAM... 



Spoiler: ...from the horse's mouth



...Strix 3090 in the top system, starting to lay things out for measurements while setting up space for dual 360/60s once the Strix GPU block / backplate is settled on


----------



## geriatricpollywog

The toughest core stress test I've tried is Quake II RTX with everything enabled and maxed. 520-540W on 520W bios.


----------



## changboy

If I could increase the voltage a bit higher than 1.1V, maybe I could game at a 2200MHz boost clock.
Why can't we go higher than this? Maybe 1.15V to 1.2V would be good.


----------



## SoldierRBT

changboy said:


> If i can increase voltage a bit higher then 1.1v maybe i can game at 2200mhz boost clock.
> Why we cant increase higher then this ? Maybe 1.15v to 1.2v can be good


NVVDD/MSVDD voltages only improve effective/internal clocks (more wattage/heat), but the card is still limited by what it can do at 1.10v. 



0451 said:


> The toughest core stress test I've tried is Quake II RTX with everything enabled and maxed. 520-540W on 520W bios.


That's true. 2190-2200MHz effective clocks at 1.10v were using 700W at 56C.


----------



## J7SC

0451 said:


> The toughest core stress test I've tried is Quake II RTX with everything enabled and maxed. 520-540W on 520W bios.


btw, how do you cool the backplate of your KPE...stock w/ fans, or other ?


----------



## geriatricpollywog

J7SC said:


> btw, how do you cool the backplate of your KPE...stock w/ fans, or other ?



I am using a 92mm fan plugged into the fan header on the card. A 3rd fan slider shows up in Precision X1 when you plug a fan into the card.


----------



## J7SC

Thanks, my card also has a couple of onboard PWM headers for fan-out, but I'm not sure regular MSI AB (or even non-KPE PrecX) would pick those up for control...got to try that out. For now, those former AIO fans for the backplate are controlled through a ROG mobo PWM header and software, since I can't bring myself to add GPU Tweak.


----------



## CallsignVega

What do you guys consider a "decent" overclock on air? My Suprim X, even though it has a great cooler and temps max out around 60C, is maxing out at about 2050 MHz core, no matter where I put the voltage and power sliders. That seems pretty low; wonder if I got a crappy chip.


----------



## Falkentyne

CallsignVega said:


> What do you guys consider a "decent" overclock on air? My Suprim X, even though it has a great cooler at temps max out around 60C , is maxing out about 2050 MHz core, no matter where I put the voltage and power sliders. That seems pretty low, wonder if I got a crappy chip.


I'm surprised you don't have an A6000, Vega! You usually yeet those high end cards.

Are you power limited or not? 
If you're not power limited at all, that's low for 60C. 2085 mhz is average for a +105 mhz offset. This of course depends on the offset you're using as I heard cards with better silicon may boost higher with a lower offset. I can't verify that, but people have said better silicon has a different V/F curve than worse silicon!

If you're power limited, well, all bets are off because the card is going to throttle to 1850-2000 mhz.


----------



## zayd

Thanks for the advice guys, but I missed out on buying a 3090 for now, so the hunt continues. For me its just going to be a gaming card with some capture for uploads.


----------



## dante`afk

long2905 said:


> View attachment 2477497
> 
> Anyone else mining on the 1000w vbios? Just want to check if the wattage reported here is accurate or should i subtract it by 30% as im using a reference based card. i already set the limit to 40% in Afterburner.



You are best served by underclocking your core and overclocking your memory. I'm running -300 core and +1500 memory and get a minimum of 125 MH/s if I leave my PC alone.


__
https://www.reddit.com/r/NiceHash/comments/kw0k6u


----------



## changboy

I downloaded MinerGate but Kaspersky flagged it as a virus lol, I want to try mining and buy a DVD or HDD with bitcoin lol.


----------



## CallsignVega

Falkentyne said:


> I'm surprised you don't have an A6000, Vega! You usually yeet those high end cards.
> 
> Are you power limited or not?
> If you're not power limited at all, that's low for 60C. 2085 mhz is average for a +105 mhz offset. This of course depends on the offset you're using as I heard cards with better silicon may boost higher with a lower offset. I can't verify that, but people have said better silicon has a different V/F curve than worse silicon!
> 
> If you're power limited, well, all bets are off because the card is going to throttle to 1850-2000 mhz.


I have the Kingpin 550w BIOS installed on my Suprim X and my watt draw per MSI Afterburner is ~510 watts or so. That seems like a lot of power for only 2050 core in 4K Optimized Superposition.


----------



## J7SC

zayd said:


> Thanks for the advice guys, but I missed out on buying a 3090 for now, so the hunt continues. For me its just going to be a gaming card with some capture for uploads.


Well, you never know - there seems to be a slightly 'faster trickle' of new Ampere cards coming in. When I was looking to buy one last fall, I couldn't find the model I wanted (or for that matter any model), and when I wasn't looking to buy one, I ran across a single one last Saturday, at MSRP no less...I wasn't on any pre-order / waiting lists or some such, just right time, right place...hope you find one soon


----------



## ViRuS2k

Biscottoman said:


> Mp5works has just introduced a new variant on his store with two g1/4 ports to run on series like people has done using ram waterblock drilled on the backplate


Yeah, problem is, does that store ever have anything in stock? Everything on there is out of stock most of the time... and you can't preorder, which makes it a pain in the ass.


----------



## Falkentyne

CallsignVega said:


> I have the Kingpin 550w BIOS installed on my Suprim X and my watt draw per MSI Afterburner is ~510 watts or so. That seems like a lot of power for only 2050 core in 4K Optimized Superposition.


That's how much Superposition pulls.
On my shunted 3090 FE, it pulls around 520-540W.

If you want to see pure hell, try 4k custom--Extreme shaders. 

This will cause all shunt modded cards to trip an internal power rail (MSVDD probably) and throttle far below the TDP% limit (Normalized TDP% will be sky high).
The throttling will be large, consistent and a nice thick green bar in GPU-Z!

The only exception is the Asus Strix, as we know shunted Strixes will not throttle in Timespy Extreme. Not sure if it will in 4K Custom-extreme shaders however.
It's still unknown whether shunting the two 1206 shunts (these are inline with the 2512 shunts) will circumvent this. Me and @dante`afk are going to shunt our FE's and find out. Apparently these two smaller shunts have something to do with the MSVDD and NVVDD rail limits. I've seen all reference boards have these small 5 mohm shunts...the only board that doesn't seem to have them is the Strix.


----------



## mirkendargen

dante`afk said:


> you are best served by underclocking your core and overclocking your memory.. i'm running -300 core and 1500 memory and min 125mh/s if i leave my pc alone
> 
> 
> __
> https://www.reddit.com/r/NiceHash/comments/kw0k6u


I lock my curve at 742mV/1230MHz core on a profile for mining, works great.


----------



## Sheyster

CallsignVega said:


> What do you guys consider a "decent" overclock on air? My Suprim X, even though it has a great cooler at temps max out around 60C , is maxing out about 2050 MHz core, no matter where I put the voltage and power sliders. That seems pretty low, wonder if I got a crappy chip.


Seems a little low, silicon lottery most likely. My 3090 FE was similar to yours, 2055 max stable in Warzone from what I recall. This Strix I have now is a bit better. I'm running it slightly undervolted at 1v/2100 in Warzone/MW2019/CW. Since it's air-cooled I have not bothered to push it past 2160 in SP 4KO.


----------



## J7SC

CallsignVega said:


> I have the Kingpin 550w BIOS installed on my Suprim X and my watt draw per MSI Afterburner is ~510 watts or so. That seems like a lot of power for only 2050 core in 4K Optimized Superposition.





Falkentyne said:


> That's how much Superposition pulls.
> On my shunted 3090 FE, it pulls around 520-540W.
> 
> If you want to see pure hell, try 4k custom--Extreme shaders.
> 
> This will cause all shunt modded cards to trip an internal power rail (MSVDD probably) and throttle far below the TDP% limit (Normalized TDP% will be sky high).
> The throttling will be large, consistent and a nice thick green bar in GPU-Z!
> 
> The only exception is the Asus Strix, as we know shunted Strixes will not throttle in Timespy Extreme. Not sure if it will in 4K Custom-extreme shaders however.
> It's still unknown whether shunting the two 1206 shunts (these are inline with the 2512 shunts) will circumvent this. Me and @dante`afk are going to shunt our FE's and find out. Apparently these two smaller shunts have something to do with the MSVDD and NVVDD rail limits. I've seen all reference boards have these small 5 mohm shunts...the only board that doesn't seem to have them is the Strix.


...yeah, that's why I like to use Superposition 4K (or even 8K) as an early test to get an idea about the overall power envelope of a newly installed card. Strix / air / stock bios / max PL in Superposition 4K optimized:


----------



## CallsignVega

So, doing more testing... using 4K Superposition my max stable core is like 2015 MHz, barely over stock boost. On an expensive Suprim X. They definitely don't bin these chips; looks like a big lottery failure. I wonder if this is one of the lowest max overclocks in this thread.

Probably have Zotac Trinitys beating my card haha.

Max 4K Optimized score ~18,500. May return this POS.


----------



## des2k...

Sheyster said:


> Seems a little low, silicon lottery most likely. My 3090 FE was similar to yours, 2055 max stable in Warzone from what I recall. This Strix I have now is a bit better. I'm running it slightly undervolted at 1v/2100 in Warzone/MW2019/CW. Since it's air-cooled I have not bothered to push it past 2160 in SP 4KO.


2160+ doesn't really give you much in terms of fps outside benches.

In RTX/DLSS games, going from 2160 to 2040 for example, you only lose 2 fps at 4K.

Raster-only games: a 3 fps loss at 4K.

And that's about 100 W+ less power draw, so it's not worth pushing a heavy OC in games!
Pretty pointless on my side anyway, since I run an fps cap for my old 4K FreeSync display.


----------



## Falkentyne

CallsignVega said:


> So doing more testing.. using 4K Superposition my max stable core is like 2,015 MHz, barely over stock boost. On an expensive Suprim X. They definitely don't bin these chips, looks like a big lottery failure. I wonder if this is one of the lowest ever max overclocks in this thread.
> 
> Probably have Zotac Trinity's beating my card haha.
> 
> Max 4K optimized ~18,500 score. May return this POS.


Vega, can you do a 4k Superposition Custom--4k resolution--Extreme shaders quality, and see if you get throttling in GPU-Z?

All shunt-modded cards are getting destroyed by this test due to massive MSVDD/NVVDD loads. I think only the Strix and Kingpin may escape without a power-limit throttle here.


----------



## TK421

any way to get >480w on the strix?

kp bios is the only way I assume?


----------



## Sheyster

des2k... said:


> 2160+ doesn't really give you much in terms of fps outside benches.


Indeed. I'm perfectly happy with 2100 in gaming with the lower voltage.


----------



## Sheyster

TK421 said:


> any way to get >480w on the strix?
> 
> kp bios is the only way I assume?


You have 3 choices: 500W FTW3 BIOS, 520W KPE BIOS, 1000W KPE XOC BIOS.

Note that the 520W BIOS will not run the middle fan over 66% of max speed (~2000 RPM) no matter what you do. The other two run the 3 fans at full speed (~3000 RPM). Obviously this is not an issue if you're on water.


----------



## Chamidorix

Jpmboy said:


> That's the ram voltage, yeah, I know. SoC is a different bus and VRM set.


No, it's not.

SoC (system-on-a-chip, AMD's system-agent nomenclature, i.e. the memory-controller portion of the uncore) corresponds to MSVDD, the miscellaneous voltage domain on GA102. It powers the memory controller along the bottom of the die and the memory buses around the chip's perimeter, among other things, and yes, it has its own VRM stages. FBVDD, the frame-buffer voltage domain, is the rail (and VRM stages) for the Micron memory chips themselves.

I can increase the stable phoenix-miner memory overclock, and therefore hashrate, at any given NVVDD voltage/clock and FBVDD voltage (say, 1.4 V) solely by raising MSVDD. It's thoroughly akin to raising VCCSA on Intel or VSOC on AMD for higher RAM frequency.


----------



## J7SC

Sheyster said:


> Indeed. I'm perfectly happy with 2100 in gaming with the lower voltage.


...agree. The only two applications where a bit of extra clock 'makes a difference' (at 4K Ultra) for me are FS2020 and CP2077. They're right on the cusp of 60 fps with everything maxed, and there a few extra fps can matter. That said, my main concern is to water-cool the Strix, as remarkable as the air-cooling setup on this card is (apart from the typical 3090 backplate temps).

re. earlier posts on the resizable-BAR update by Nvidia, I wonder how much difference that will make...the AMD version of it has some impact on some apps, but not others. I also wonder how many popular but older mobos will get the required resizable-BAR support through UEFI firmware updates.


----------



## Sheyster

J7SC said:


> re. earlier posts on the resizable-BAR update by Nvidia, I wonder how much difference that will make...the AMD version of it has some impact on some apps, but not others. I also wonder how many popular but older mobos will get the required resizable-BAR support through UEFI firmware updates.


Based on some 6800 XT testing I read in another thread, it ranged from 0 to just over 4%. I believe SuperPosition 4K showed the most benefit for that card.


----------



## Thanh Nguyen

The curve OC has a higher clock but is only a few points ahead of the offset, even with the window open.
+180 offset + 1300 mem

Offset:

I scored 15 545 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

Curve:

I scored 15 565 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## J7SC

Sheyster said:


> Based on some 6800 XT testing I read in another thread, it ranged from 0 to just over 4%. I believe SuperPosition 4K showed the most benefit for that card.


Thanks. I hope I don't end up having to switch mobos between the 2x 2080 Ti (bottom left in sig) and this new build with the Strix 3090 over available UEFI firmware support for resizable BAR. That would be a bummer... probably easier to pick up a decent mid-range X570 mobo and a Ryzen 59xx (if I can find one...)


----------



## lolhaxz

Falkentyne said:


> If you want to see pure hell, try 4k custom--Extreme shaders.
> 
> The only exception is the Asus Strix, as we know shunted Strixes will not throttle in Timespy Extreme. Not sure if it will in 4K Custom-extreme shaders however.


The Strix is also power limited to around its stock power limit (480 W?); a shunt-modded card with 10 mΩ stacked on all but the PCIe shunt can pull ~600 W+ in other extreme scenarios.

Power draw in Optimized 4K is about 530 W @ 2160 MHz at 1.093 V; SRAM ~70 W
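For anyone wondering why a shunt-modded card reads so far under its real draw: stacking a second shunt in parallel lowers the effective sense resistance, but the controller still divides the measured voltage drop by the stock resistance, so it under-reports by a fixed ratio. A quick sketch of that arithmetic (the function names are mine, and the 5 mΩ stock / 10 mΩ stacked values are just the figures mentioned in this thread; treat the numbers as illustrative):

```python
def parallel(r1, r2):
    """Effective resistance of two shunts stacked in parallel (ohms)."""
    return (r1 * r2) / (r1 + r2)

def true_power(reported_w, r_stock=0.005, r_added=0.010):
    """Actual draw when the controller still assumes the stock shunt value.

    The regulator measures the voltage drop across the shunt and divides by
    the resistance it thinks is there, so the reading shrinks by
    r_eff / r_stock and the real power is reported * (r_stock / r_eff).
    """
    r_eff = parallel(r_stock, r_added)
    return reported_w * (r_stock / r_eff)

# A card reporting 400 W against its "480 W" limit is really pulling:
print(round(true_power(400)))  # 600
```

So a card that reads ~400 W with this mod is really pulling ~600 W, which lines up with the "~600 W+ in extreme scenarios" observed above.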


----------



## SoldierRBT

Thanh Nguyen said:


> Curve has higher clock but only a few points aheah of offset even I did it with opened window.
> +180 offset + 1300 mem
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 545 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> Curve
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 565 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Internal clocks drop a lot when you modify the v/f curve. I believe the NVVDD/MSVDD table isn't providing enough voltage; that's why the score is low despite the high requested clocks. Your offset OC matches my findings on Port Royal and internal-clock performance:
2100 MHz avg: 15.0K
2130 MHz avg: 15.2K
2160 MHz avg: 15.4K
2190 MHz avg: 15.5K
2220 MHz avg: 15.7K
My guess is that breaking 16K would take about a 2265 MHz average.
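Those five data points are close to linear, so a quick least-squares fit (just napkin math in Python; real scores also move with memory clock, CPU, and run-to-run variance) lands near the same estimate:

```python
# Least-squares line through the Port Royal numbers above to estimate
# the average core clock needed for a 16K score.
clocks = [2100, 2130, 2160, 2190, 2220]          # avg core MHz
scores = [15000, 15200, 15400, 15500, 15700]     # Port Royal score

n = len(clocks)
mx = sum(clocks) / n
my = sum(scores) / n
slope = sum((x - mx) * (y - my) for x, y in zip(clocks, scores)) \
        / sum((x - mx) ** 2 for x in clocks)
intercept = my - slope * mx

target = (16000 - intercept) / slope
print(f"{slope:.1f} points per MHz, ~{target:.0f} MHz avg for 16K")
# -> 5.7 points per MHz, ~2273 MHz avg for 16K
```

About 2273 MHz, so the 2265 guess above is right in the ballpark.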


----------



## Zogge

ViRuS2k said:


> Yeah problem is, does that store have anything ever in stock, ? everything on there most of the time is out of stock..... and you cant preorder and that makes it a pain in the ass..


I bought a serial top, it was in stock.


----------



## TK421

Sheyster said:


> You have 3 choices: 500W FTW3 BIOS, 520W KPE BIOS, 1000W KPE XOC BIOS.
> 
> Note that the 520W BIOS will not run the middle fan over 66% of max speed (~2000 RPM) no matter what you do. The other two run the 3 fans at full speed (~3000 RPM). Obviously this is not an issue if you're on water.


any oddities with the 520/1000w kpe bios?


----------



## Chamidorix

Ah, finally an AHOC video for the Kingpin. The most interesting thing imo was pointing out the fuse on the PCI Express slot. Mass red line of death on KPEs incoming?
It would also be great to figure out whether there's a way to get the individual rail current readings via i2c commands through BMGJet's custom Classified tool (which makes it simple to send i2c commands from the OS).


----------



## Canson

Nizzen said:


> Good gpu with a bit extra backplate cooling integrated
> 
> I'm using this 3090 Suprim x with 1000w bios.


Do the fans work properly with that BIOS? And the RGB?



CallsignVega said:


> What do you guys consider a "decent" overclock on air? My Suprim X, even though it has a great cooler at temps max out around 60C , is maxing out about 2050 MHz core, no matter where I put the voltage and power sliders. That seems pretty low, wonder if I got a crappy chip.


Wow, only 60 °C? I've got the same card as you and I'm not even close to that lol.

What case do you have and how many fans? Which BIOS do you run on your Suprim X, and what's your fan speed?


----------



## motivman

Thanh Nguyen said:


> Curve has higher clock but only a few points aheah of offset even I did it with opened window.
> +180 offset + 1300 mem
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 545 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> Curve
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 565 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


2265 average core clock and only 15.5K? You should be scoring in the 15.7-15.8K range. Do you have too many background apps running or something? Maybe reduce the overclock on that 10900K to 5.1-5.2 GHz and try again. Are you sure your memory is completely stable? What are your timings? My reference card at 2220 MHz scores 15.6K; my **** Strix at 2175 was scoring in the 15.5K range.


----------



## DrunknFoo

Well, back at it with the FTW3. Thought I'd give the card a few more runs; maybe there's a bit more potential left. Still holding back on the power slider for fear of blowing fuses again: 55% on the 1000 W BIOS (whatever that equates to with my shunts).

I scored 15 691 in Port Royal
Intel Core i9-9900KS Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
www.3dmark.com


----------



## PowerK

Does anybody here have a Gigabyte 3090 Aorus Master and good OC results from cross-flashing the vBIOS?


----------



## Thanh Nguyen

Can anyone pull more than 600 W with the 1000 W BIOS? Don't know why my card peaks at 650 W or so for 1 s and then stays around 600 W max all the time.


----------



## cazpy

SoldierRBT said:


> Internal clocks drop a lot when you modify the v/f curve. I believe the NVVDD/MSVVDD table isn’t providing enough voltage that’s why the low score with high requested clocks. Your offset OC matches with my findings in Port Royal and internal clocks performance.
> 2100MHz avg 15K
> 2130MHz avg 15,2K
> 2160MHz avg 15,4K
> 2190MHz avg 15,5K
> 2220MHz avg 15,7K
> My guess to break 16K+ would be 2265MHz avg.


Yeah, might be true. I got 15,913 pts with an average of 2250 MHz. I want that magic 16K somehow tho 😅

I scored 15 913 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

And yeah, the curve doesn't give me the performance the offset does, so I keep playing with pure offset.


----------



## reflex75

CallsignVega said:


> What do you guys consider a "decent" overclock on air? My Suprim X, even though it has a great cooler at temps max out around 60C , is maxing out about 2050 MHz core, no matter where I put the voltage and power sliders. That seems pretty low, wonder if I got a crappy chip.


Decent overclock on air?
My 3090 FE, completely stock, boosts to 2055 MHz when I start a game:

Then it stabilizes around 2 GHz after warming up.

If I add OC on top of that, I can reach almost 2.2 GHz on air (when not power limited):

Still, that's considered a good chip, but not a golden one, according to the criteria there (2.3 GHz+).
So I'd guess your Suprim X maxing out at 2050 MHz is below average.

As I've said before, an expensive GPU with a higher rated boost clock tells you nothing about the quality of the chip underneath.
Sometimes it's the opposite! Imagine my chip in your Suprim X, with its 1860 MHz boost clock instead of my 1695 MHz (165 MHz higher): my chip would try to boost to 2220 MHz (2055 + 165) and crash immediately.

Having said that, what matters most is performance, not frequency.
I consider my stock 3090 FE the perfect gaming GPU (fast and quiet), but it can be heavily limited by its 400 W power cap, especially in benchmarks like Port Royal:
I scored 14 645 in Port Royal
Time Spy: 22 435 (I scored 20 426 in Time Spy)

That's why, even with a below-average chip, you should be able to do better in benchmarks thanks to your higher power limit...
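The boost arithmetic in that example, written out (the clock numbers come from the post above; the point is that a factory overclock shifts the whole boost curve, so identical silicon lands at different absolute clocks):

```python
# Why the same silicon can crash in a higher-binned card: GPU Boost adds
# roughly the same headroom on top of whatever the rated boost clock is,
# so a higher factory boost pushes the chip to a higher absolute clock.
fe_boost, suprim_boost = 1695, 1860   # rated boost clocks (MHz)
fe_observed = 2055                    # what this particular FE chip reaches

headroom = fe_observed - fe_boost     # 360 MHz of GPU Boost headroom
in_suprim = suprim_boost + headroom   # same chip on the Suprim's curve

print(in_suprim)  # 2220 -> past this chip's stable range, instant crash
```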


----------



## changboy

CallsignVega said:


> So doing more testing.. using 4K Superposition my max stable core is like 2,015 MHz, barely over stock boost. On an expensive Suprim X. They definitely don't bin these chips, looks like a big lottery failure. I wonder if this is one of the lowest ever max overclocks in this thread.
> 
> Probably have Zotac Trinity's beating my card haha.
> 
> Max 4K optimized ~18,500 score. May return this POS.


Your card is fine dude.


----------



## Beagle Box

lolhaxz said:


> The Strix is also power limited to around stock power limit (480W?), shunt modded card with 10mohm on all but PCIE shunt... can pull ~600W+ in other extreme scenarios.
> 
> View attachment 2477673
> 
> 
> Power draw on Optimized 4K is about 530W @ 2160MHz at 1.093v - SRAM ~70w


Questions: 
1. Is the 3702 score with the stock curve?
2. Air cooled or water?
3. What is that software Ampere Tools, and where can I get a copy?


----------



## Sheyster

TK421 said:


> any oddities with the 520/1000w kpe bios?


Just the fan issue I mentioned earlier with the 520W BIOS. Otherwise it's pretty normal.

The 1000W XOC BIOS runs memory at full speed all the time by default, and also has various protections removed. I believe you also need to force P0 state if you're using Compute (e.g. mining) and want the highest Compute performance.


----------



## sippo

Card: 3090 Strix OC, stock Asus BIOS, overclock values in the image, card under an EKWB block.

Now a few questions:
1) Once I receive a better PSU (AX1600i ordered; at 800 W right now I'm at the limit), should I change the BIOS to the 520 W Kingpin?
2) Are the memory temperature/clock OK?
3) Is the GPU temperature OK?


----------



## changboy

Far Cry 5, 4K Ultra + HD texture pack benchmark and GPU-Z screenshot:


----------



## J7SC

Can Strix 3090 / OC owners who watercooled their cards with a custom block (such as Phanteks, Bykski, EK) please post what thicknesses the required thermal pads are for the front and backplate? I seem to recall two sizes from earlier posts, such as 1.2 mm and 1.5 mm. I want to order some Fujipoly in preparation for switching my card from air to water. Thanks.


----------



## motivman

J7SC said:


> Can Strix 3090 / OC owners who watercooled their cards with a custom block (such as Phanteks, Bykski, EK) post what thickness the required thermal pads are for front and backplate, please ? I seem to recall two sizes from earlier posts, such as 1.2mm and 1.5mm. I want order some Fujipoli in preparation to switch my card from air to water. Thanks.


For the Phanteks block: make sure you use the correct inlet/outlet, or your temps will be beyond terrible...



https://www.phanteks.com/salestools/PH-GB3090ASSRX.jpg


----------



## motivman

Strix owners with an Elmor EVC2SX: how much extra voltage are you able to pump into the card beyond 1.1 V, and did it increase your maximum stable core overclock? There's a 50 MHz difference between my reference card and my Strix; hoping to close the gap with the EVC2SX. It will be really hard to let go of this Strix, since +1500 is stable on the memory, even while playing CP2077.


----------



## cazpy

J7SC said:


> Can Strix 3090 / OC owners who watercooled their cards with a custom block (such as Phanteks, Bykski, EK) post what thickness the required thermal pads are for front and backplate, please ? I seem to recall two sizes from earlier posts, such as 1.2mm and 1.5mm. I want order some Fujipoli in preparation to switch my card from air to water. Thanks.





https://www.ekwb.com/shop/EK-IM/EK-IM-3831109832455.pdf


----------



## changboy

Some models are available at Amazon Canada:

GeForce RTX 3090
www.amazon.ca


----------



## HyperMatrix

WTB: water block. Can’t put on any of my case panels like this. lol.


















Was definitely a tight fit. I'll probably remove that distro plate in a month or two.









The only RGB I haven't been able to turn off completely is on the RAM. But yeah, at this point I'm waiting on either EVGA or Optimus to come out with a block for the Kingpin. Don't even care which one; whoever launches first.


----------



## changboy

This is really full lol. Also a lot of money in fittings in there. I saw the quick-connect fittings but didn't buy them; they're a great addition to a watercooling build. Fittings are really expensive, and even just a drain valve is pretty pricey, so I haven't bought one yet. I need to tip my reservoir over to flush my liquid hehehe.
You can add some HDDs on the back like this


----------



## mirkendargen

changboy said:


> This is really full lol, also a lot of money in fitting in this, i saw the quick connect fitting but didnt bought them, it really a great add to a watercooling built. Fitting are really expensive and just a drain valve is pretty expensive and not yet buy one, i need reverse my reservoir to flush my liquid hehehe.
> You can add some hdd on the back like this
> View attachment 2477751


This picture used to be me, and it's the reason I switched to a NAS in the basement with an iSCSI extent to my desktop instead of spinning drives locally, lol.


----------



## changboy

mirkendargen said:


> This picture used to be me and is the reason I switched to a NAS in the basement with an ISCSI extent to my desktop instead of spinning drives locally, lol.


Hehehe, can you show me what you bought to hold your HDDs, and what iSCSI is too?
Currently I have an LSI x8 card with 2 SATA ports expanded to 2x 4 SATA connectors.
Here are my HDDs in Windows:


----------



## mirkendargen

changboy said:


> Hehehe can you show me what did you buy to put ur hdd and show me what is a ISCSI too ?
> Actually i have a lsi card X8 with 2 sata extend to 2x 4 sata connector.


I got a cheap 4U 24-bay (room for growth) SuperMicro server on eBay for like $300, two 40 Gbit NICs (one in the server, one in my desktop) for like $40 each, and a fiber direct-attach cable to connect them. Install FreeNAS on the server. iSCSI is a way to present a chunk of drive from one computer to another over a network; Windows and FreeNAS support it natively. I also use PrimoCache to cache the iSCSI drive on a fast SSD, so there are no performance problems if you're using it for more than bulk storage.


----------



## Zogge

Guys, if my 3090 Strix with WB and MP5 BPC never goes above 48 °C core, 60 °C memory junction and 42 °C VRM, shouldn't the 1000 W BIOS be "safe" at, say, 650 W or so? What could you possibly fry if all temps are in order? It's also on an AX1600i PSU with standard cables, which should be fine with the higher load (and can be monitored in Corsair Link).


----------



## HyperMatrix

changboy said:


> This is really full lol, also a lot of money in fitting in this, i saw the quick connect fitting but didnt bought them, it really a great add to a watercooling built. Fitting are really expensive and just a drain valve is pretty expensive and not yet buy one, i need reverse my reservoir to flush my liquid hehehe.
> You can add some hdd on the back like this
> View attachment 2477751


Damn, that cable management...lol. This is me with 3 fan controllers and 16 fans atm. And I thought mine was bad. Haha.


----------



## Zogge

HyperMatrix said:


> damn that cable management...lol...this is me with 3 fan controllers and 16 fans atm And I thought mine was bad. Haha.
> 
> View attachment 2477757


I gave up on fitting it inside the case.


----------



## DrunknFoo

HyperMatrix said:


> WTB: water block. Can’t put on any of my case panels like this. lol.
> 
> Was definitely a tight fit. Probably remove that Distro plate in a month or two
> Only RGB I haven’t been able to turn off completely is the one on the ram. But yeah at this point waiting on either EVGA or Optimus to come out with a block for the Kingpin. Don’t even care which one. Whoever launches first.


lol, this is how I managed mine; there's a hidden 120mm fan behind the GPU. 20 fans + 1 in the PSU.
Cramming all the connections into that PSU cubby was a PITA! So I decided to remove all the RGB fans and go with Corsair ML120s.
(Damn, the 3000 series is too large! I ended up having to remove my second pump and reroute a bit.)

Great case, but if it were just 0.5 inches wider and longer, it'd be so much better.


----------



## changboy

HyperMatrix said:


> damn that cable management...lol...this is me with 3 fan controllers and 16 fans atm And I thought mine was bad. Haha.
> 
> View attachment 2477757


Hehehe, yup: add 12 HDDs inside your case, just try it  This is what I've done with my cooling loop + 3 radiators, all inside my Corsair 750D! The thing is, I don't have any space left to do cable management lol.


----------



## Falkentyne

Zogge said:


> Guys, if the 3090 strix with WB and MP5 BPC never go above core 48c, tjunction mem 60c and vrm 42c, shouldn't the 1000W bios be "safe" at say 650W or so ? What could you possibly fry if all temps are in order ? Also a AX1600i PSU with standard cables - which should be okay to take a higher load (and can be monitored in corsair link).


As long as all your VRM thermal pads and VRAM pads are 100% absolutely perfect, you should be fine.


----------



## changboy

Zogge said:


> I gave up to fit it inside the case.


hehehe funny


----------



## changboy

Also, I have 3 HDDs above my PSU, 2 more on the side of the front 280mm radiator, and 1 above my Blu-ray burner. That's why I have a small reservoir: I need to plug in all those HDDs and manage them. Cabling inside my build is so hard to do, near impossible. But everything is working well so far lol.


----------



## changboy

BTW, if someone wants to do my cable management, I can give out my address heheheheh. I can tell you, every time I have to do something in my build, like last time when I put my card under a waterblock, I want to cry lol.


----------



## J7SC

changboy said:


> Hehehe can you show me what did you buy to put ur hdd and show me what is a ISCSI too ?
> Actually i have a lsi card X8 with 2 sata extend to 2x 4 sata connector.
> Look my hdd windows :
> View attachment 2477754


...you're almost running out of space. How about 100 TB w/ far fewer cables ?


Spoiler


----------



## changboy

Lol, I want to own this. I actually have 175 TB, but I'd exchange all of it for 2 of what you showed me.
Maybe the only downside will be the price, yeah.

Ouch, I just checked: it's $40,000 for 1x 100 TB!

www.newegg.com


----------



## J7SC

changboy said:


> Lol i wanna own this, i actually have 175tb but i want exchange all this for 2 like you show me.
> Maybe the only downside it *will be the price* yeah.


yup, enterprise level $$$tuff 

I also like Intel U.2 drives (and have mobos here set up for them), but they aren't exactly cheap either (such as the PCIe 4 P5800X Optane). Used PCIe 3 drives on eBay are more affordable, but you don't really know just how used they are.


----------



## changboy

Like someone just told me, we'll have this capacity in a USB stick in a few years lol.


----------



## HyperMatrix

changboy said:


> Hehehe yup add 12 hdd inside ur case, just try it  This is what i have done with my cooling loop + 3 radiator, all this inside my corsair 750d ! The thing is i dont have any space to do cable management lol.


I previously ran 10 HDDs/SSDs, but I decided it was smarter to run them off a NAS. I currently have a 5-bay NAS but plan to upgrade to an 8-bay 10GbE model later this year. That way you get RAID-protected 1 GB/s transfer speeds at HDD pricing.

I recently got 1.5 Gbps fiber put in, so I'm going to set up the NAS behind a gigabit WireGuard VPN server for torrents/IPTV/Plex/cloud-drive work in the basement next to my modem. I highly recommend moving anything you don't need at NVMe speeds to network storage. So: 1 NVMe drive for your system, then 1, 2, or 3 NVMe drives in RAID for your games, and then up to 100 TB of HDD space at up to 1.1 GB/s on a NAS, whether you buy something like a QNAP/Synology or build your own.

Also, if you plan to go that route, use the WD external drives and shuck them. Some of them contain helium HDDs; just check the model before buying. Last week I checked and it was a 14 TB helium drive for $340 CAD. In total you could do about 90 TB with an 8-bay NAS for roughly $4,500 CAD. And keep in mind you can also add expansion units on top of that, or even plug external USB 3.0 drives into the NAS directly or through a hub.
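Quick back-of-napkin check on those 8-bay figures (the $340/drive and 14 TB numbers come from the post above; the NAS price and the one-drive-redundancy RAID level are my placeholder assumptions):

```python
# Rough capacity/cost math for an 8-bay NAS filled with shucked 14 TB drives.
drives, tb_each, cad_each = 8, 14, 340
nas_cad = 1700                      # assumed price of an 8-bay 10GbE unit

raw_tb = drives * tb_each           # total raw capacity
usable_tb = (drives - 1) * tb_each  # with one-drive redundancy (RAID 5 / SHR)
total_cad = drives * cad_each + nas_cad

print(raw_tb, usable_tb, total_cad)  # 112 98 4420
```

112 TB raw, ~98 TB usable, ~$4,400 CAD: consistent with the "about 90 TB for roughly $4,500 CAD" estimate above once you allow for filesystem overhead.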


----------



## motivman

Falkentyne said:


> That's how much Superposition pulls.
> On my shunted 3090 FE, it pulls around 520-540W.
> 
> If you want to see pure hell, try 4k custom--Extreme shaders.
> 
> This will cause all shunt modded cards to trip an internal power rail (MSVDD probably) and throttle far below the TDP% limit (Normalized TDP% will be sky high).
> The throttling will be large, consistent and a nice thick green bar in GPU-Z!
> 
> The only exception is the Asus Strix, as we know shunted Strixes will not throttle in Timespy Extreme. Not sure if it will in 4K Custom-extreme shaders however.
> It's still unknown whether shunting the two 1206 shunts (these are inline with the 2512 shunts) will circumvent this. Me and @dante`afk are going to shunt our FE's and find out. Apparently these two smaller shunts have something to do with the MSVDD and NVVDD rail limits. I've seen all reference boards have these small 5 mohm shunts...the only board that doesn't seem to have them is the Strix.


Ran this test from hell on my shunted 2x 8-pin reference card on the XOC BIOS... did not hit the power limit. Power draw = 770 W, LMAO. So this is confirmed: reference card + 5 mΩ shunts + XOC BIOS = unlimited power.


----------



## gfunkernaught

des2k... said:


> 2160+ doesn't really give you much in terms of fps outside benches.
> 
> With RTX/dlss games going from 2160 to 2040 for example you only loose 2fps at 4k.
> 
> Raster only games, 3fps loss at 4k


You might not see a huge gain. When I had my 2080 Ti, going from 2085 to 2130 gained me 5-10 fps depending on the game. That means the difference between 55 and 60 fps, or 45 and 50 fps, in heavy scenes. Not everyone games with vsync off or uncapped fps. I game on my 60 Hz 4K TV, so keeping minimum fps as high as possible without turning graphics settings down is key.


----------



## changboy

HyperMatrix said:


> I previously ran 10 HDD/SSDs but I decided it was smarter running it off a NAS. Currently have a 5 bay NAS but planning to upgrade to an 8-bay 10Gbe model later this year. That way you can have raid protected 1GB transfer speeds with the value of HDD pricing.
> 
> Recently got 1.5Gbe fiber put in so I’m going to set up the NAS behind a gigabit wireguard VPN server for torrents/IPTV/Plex/cloud drive work in the basement next to my modem. highly recommend moving anything you don’t need at NVMe speeds to network storage. So 1 NVMe for your system drive, then 1, 2, or 3 MVMe in raid for your games. And then up to 100TB of HDD space at up to 1.1GB/s on a NAS. Whether you buy something like snap/synology or just build your own.
> 
> Also if you plan to go that route, use the WD external drives and shuck them. Some of them are using helium HDDs. Just check the model before buying. Last week I checked and it was 14TB helium drive for $340 CAD. In total you could do about 90TB with an 8-bay NAS for roughly $4500 CAD. And keep in mind you can also add extension units on top of that or even just plug in external USB 3.0 drives into the NAS directly or through a hub.


Ya, I know I could do this, running my HDDs outside my case, but that has a cost too. I was thinking about it last summer but went with 2x 14 TB helium drives instead, and at Christmas I was thinking about it again but bought 2x 18 TB lol.
And that's not counting my new graphics card + waterblock, and not long ago my 10980XE. So now I'm done for the next 2 years minimum, I think. Yup, I always say this to the people around me


----------



## 86Jarrod

motivman said:


> Strix owners with elmor evc2sx, how much extra voltage are you able to pump into the card beyond 1.1v? and did this increase your maximum stable core overclock? there is a difference of 50mhz between my reference and strix, looking to close the gap with the evc2sx hopefully. Will be really hard to let go of this strix, since +1500 is stable for the memory, even while playing cp2077


I can get an extra 30 to 45 MHz staying below 35 °C, but I don't think it'd be worth it for daily use. I was playing Metro yesterday with the EVC. Not only is the performance gain pretty low, but it's a pain to change voltages every time. It also locks voltages, so you'd want to change them back manually every time you stop gaming, unless you like YouTube at 1.17 V+ NVVDD. One thing I did notice is that I could get the real clock within a few points of the set clock. Maybe changing MSVDD or the switching frequency did it.


----------



## Sambone

Does anyone here have a 3090 Strix with the Corsair XG7 waterblock? With it installed I'm getting 59 °C while playing Warzone, and FPS is not that great. When I run GPU-Z it shows the GPU as thermal throttling, and I don't understand why. Am I getting bad performance on this card? My temps are 30 °C at idle.


----------



## Rangerscott

Got my evga 3090 ftw3 ultra plugged in. 

Now I have to figure out why I can't switch resolution or refresh rate without black screening, or the picture coming back with a "poor digital signal".


----------



## changboy

Rangerscott said:


> Got my evga 3090 ftw3 ultra plugged in.
> 
> Now have to figure out why I cant switch resolution or hz without black screening or it coming back with a poor digital signal picture.


I don't have this problem with my FTW3 Ultra plugged into my OLED C8. I've also tried 2 cables, 1x HDMI 2.0 and now HDMI 2.1; no problem like this.


----------



## KCDC

Howdy, has anyone had to adjust their memory or CPU OCs to accommodate their 3090? I ask because a Houdini designer friend of mine had to take his 128 GB of RAM below its rated clocks just to get his 3090 TUF to work without BSODs, after he had tried everything else (Crosshair VIII Hero with a 5950X). He knows what he's doing, but he's used to building Xeon nodes, not new AMD stuff. His trial and error came down to the RAM clock: the only way he could get it to work without issue was to take the 128 GB kit below its rated 3200 MHz. I know this seems weird; he even tried PCIe 3 vs 4, same issue. Perhaps it's just that amount of RAM at that frequency, and somehow the 3090 is drawing too much power? Then again, he was running two 1080 Tis on this build without issue and now just the 3090, so that doesn't make sense to me either, since it's the same amount of power, only through one slot.

I'm still waiting on mine to show up, so I wanted to get as much info as possible beforehand, since it will go into a watercooled production workstation. It doesn't have the space to fit the 3-slot heatsink, or I'd test it as-is. My build is totally different and the card will go in alongside one of my 2080 Tis (build in my sig). Since I have half his RAM, I'm hoping it will work out with mine at 3800.


----------



## Rangerscott

changboy said:


> I dont have this problem with my ftw3 ultra plug into my oled C8. Also try with 2 cable 1x hdmi 2.0 and now hdmi 2.1. no problem like this.


Gonna do a factory bios reset on the mobo and mess around.


----------



## jura11

KCDC said:


> Howdy, has anyone had to adjust their memory or CPU OCs to accommodate their 3090? I ask because a Houdini designer friend of mine had to take their 128gb ram down below rated clocks just to get his 3090 TUF to work without BSOD after he tried everything else, cross hair viii hero with 5950x. He knows what he's doing but is used to building xeon nodes, not new AMD stuff. His trial and error came down to the ram clock. Might just be his build, just curious. Only way he could get it to work without issue was take the 128gb set below its rated 3200 clock. I know this seems weird, he even tried using pci3 vs4 but same issue, perhaps its just that amount of ram trying to run at that freq and somehow the 3090 is taking too much power? He was running two 1080tis on this build without issue and now just the 3090 so that also doesn't make sense, to me, since it's using the same amount of power only through one slot.
> 
> I'm still waiting on mine to show up, so wanted to get as much info as possible beforehand since it will be going into a production workstation watercooled. It doesn't have the space to fit the 3slot heatsink, or I'd test it as-is. My build is totally different and will be added to one of my 2080tis with the build in my sig. Since I have half his ram, I am hoping it will work out with my ram at 3800.


Hi there 

I'm running a 3-GPU setup (RTX 3090 GamingPro with an RTX 2080 Ti Strix and an RTX 2080 Ti AMP) for rendering, on an Asus ROG Crosshair VIII Hero with a 3900X and 64GB of G.SKILL TridentZ Neo 3600MHz. I run the RAM at 3600MHz with no issues at all, and haven't had a BSOD in a very long time. I had far more BSODs on X99 (almost one every day).

My PSU is a Super Flower 8Pack 2000W, also no issues.

Hard to say why he is getting those BSODs. I don't have any problems, and I use Houdini too, plus Cycles, V-Ray, Octane and many other renderers.

Hope this helps 

Thanks, Jura


----------



## des2k...

for 2x8pin on the 1000w
added core, cache, mem correction


----------



## PhatSV6

des2k... said:


> for 2x8pin on the 1000w
> added core, cache, mem correction
> View attachment 2477826


How? You fixed the memory P-state issue and the incorrect power readings?

Please show us GPU-Z and Afterburner settings under load if you don't mind.
Very excited if it's been corrected for the 2x8-pin cards.


----------



## HyperMatrix

KCDC said:


> Howdy, has anyone had to adjust their memory or CPU OCs to accommodate their 3090? I ask because a Houdini designer friend of mine had to take their 128gb ram down below rated clocks just to get his 3090 TUF to work without BSOD after he tried everything else, cross hair viii hero with 5950x. He knows what he's doing but is used to building xeon nodes, not new AMD stuff. His trial and error came down to the ram clock. Might just be his build, just curious. Only way he could get it to work without issue was take the 128gb set below its rated 3200 clock. I know this seems weird, he even tried using pci3 vs4 but same issue, perhaps its just that amount of ram trying to run at that freq and somehow the 3090 is taking too much power? He was running two 1080tis on this build without issue and now just the 3090 so that also doesn't make sense, to me, since it's using the same amount of power only through one slot.
> 
> I'm still waiting on mine to show up, so wanted to get as much info as possible beforehand since it will be going into a production workstation watercooled. It doesn't have the space to fit the 3slot heatsink, or I'd test it as-is. My build is totally different and will be added to one of my 2080tis with the build in my sig. Since I have half his ram, I am hoping it will work out with my ram at 3800.


I initially had to go from 8 sticks of ram down to 4 sticks when I got the 3090. No issues when I previously ran 2x Pascal Titan X cards in SLI. But the 3090 didn’t like it. Made some adjustments and was finally able to boot up with all 8 sticks of ram but still had instability and had to drop CPU clock by 100MHz. So there’s definitely something there. This was on an X99 platform with a 6950x.


----------



## des2k...

PhatSV6 said:


> HOW? you fixed the memory pstate issue? and incorrect power reading ?
> 
> pls show us under load gpu-z and afterburner settings if you don't mind ??
> very excited if its been corrected for the 2 x 8 pin


Mem still runs P0 only; I have Prefer Maximum Performance set, so it idles at 80W for me. This is just HWiNFO custom sensors (see txt file) with adjusted values, since it's a 3x8-pin vBIOS and it over-reports all values on my card.


----------



## gfunkernaught

Guys, while I was running the Suprim 450W BIOS on my Trio and running the Bright Memory benchmark, I heard serious coil whine coming from the back of the card. I have the stock cooler on there. Is that normal?


----------



## motivman

des2k... said:


> for 2x8pin on the 1000w
> added core, cache, mem correction
> View attachment 2477826



This is great, but the total of pin 1 and pin 2 in GPU-Z, as well as Board Power Draw, is not accurate in HWiNFO/GPU-Z for the 1000W BIOS or any 3x8-pin BIOS on 2x8-pin cards. I verified with a separate power supply feeding the card: adding the GPU-Z values for pins 1 and 2 gave about 35-50W above the real draw measured at a power meter for the two 8-pin cables. Here is how to get the real, more accurate power draw for your 2x8-pin card:

1. Connect the 8-pin cables to a separate power supply, and plug that power supply into a power meter (at the wall).
2. Start any game and leave it idling. Note the pin 1 and pin 2 values in GPU-Z. For this example, say pin 1 reports 280W and pin 2 reports 120W.
3. Note the draw of the separate power supply on the power meter; say 360W for this example. Divide by 2: each 8-pin should really be drawing 180W (real value based on the meter, and on the fact that with a clamp meter on 2x8-pin cards I verified both 8-pins pull almost exactly the same amps).
4. Real multiplier for pin #1 = 180/280 = 0.64; for pin #2 = 180/120 = 1.5.
5. Use these as your multipliers in HWiNFO for 8-pin #1 (0.64) and 8-pin #2 (1.5).
6. PCIe slot power is reported correctly, so use whatever value is shown; for this example, 60W.
7. Real total power draw = 60W + 360W = 420W.
8. For real GPU power, note the Board Power Draw value in GPU-Z, in this example 460W; then 420W/460W = 0.91. Use 0.91 as your multiplier for GPU power in HWiNFO.
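The eight steps above boil down to three ratios. Here is a minimal Python sketch of that arithmetic, using only the worked example's numbers from the post; the function names are mine, not from GPU-Z or HWiNFO:

```python
# Sketch of the calibration method for 2x8-pin cards running a 3x8-pin
# vBIOS. All numbers are the worked example from the post above, not
# measurements from any specific card.

def hwinfo_multipliers(reported_pin1_w, reported_pin2_w, meter_total_w):
    """Per-connector correction multipliers for HWiNFO.

    Assumes (as verified with a clamp meter in the thread) that both
    8-pin cables carry the same current, so each really draws
    meter_total_w / 2.
    """
    true_per_pin = meter_total_w / 2
    return true_per_pin / reported_pin1_w, true_per_pin / reported_pin2_w

def board_power_multiplier(pcie_slot_w, meter_total_w, reported_board_w):
    """Correction factor for the 'Board Power Draw' sensor."""
    true_total = pcie_slot_w + meter_total_w  # slot power is reported correctly
    return true_total / reported_board_w

# Example values from the post: GPU-Z reports 280W / 120W on the pins,
# the kill-a-watt shows 360W for both cables, slot power reads 60W,
# and GPU-Z board power reads 460W.
m1, m2 = hwinfo_multipliers(280, 120, 360)   # ~0.64 and 1.5
mb = board_power_multiplier(60, 360, 460)    # ~0.91
print(round(m1, 2), round(m2, 2), round(mb, 2))
```

Once the multipliers are entered as HWiNFO custom-sensor corrections, the corrected pin sums should track the wall meter, as motivman confirms a few posts down.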


----------



## motivman

PhatSV6 said:


> HOW? you fixed the memory pstate issue? and incorrect power reading ?
> 
> pls show us under load gpu-z and afterburner settings if you don't mind ??
> very excited if its been corrected for the 2 x 8 pin


For memory running at full speed with the 1000W BIOS, just set Afterburner to apply -500MHz on memory at startup.


----------



## des2k...

motivman said:


> This is great, but the total sum of pin 1 and 2 in gpu-z, as well as Board power draw is not accurate in HWinfo/GPU-Z for 1000w bios or any 3X8 pin bios on 2X8 pin cards. I verified with a separate power supply plugged into the card, and adding gpu-z value of pin 1 and 2 was reporting about 35W-50W above real power draw from a power meter for the 2 8 pin cables. This is how to get the real, and more accurate power draw for your 2X8 pin card.....
> 
> 1. connect the 8 pin cables to a seperate power supply, and plug in that power supply into a power meter (plugged into the wall)
> 2. Start any game, and leave the game idling. Note the value of pin 1 and 2 in GPU-Z. Let use for example pin 1 reporting 280W, pin 2 report 120W
> 3. Note power draw from separate power supply plugged into power meter. lets use for this example 360W. Now divide this value by 2, each 8 pin should be drawing 180W (real value based on power meter, and based on the fact that with a clamp meter on 2 pin cards, I verified both 8 pins pulls about exactly the same amps)
> 4. Now to get real power draw for pin #1 = 180/280 = 0.64, for pin #2 = 180/120 = 1.5
> 5. Use these values for your multipliers in hwinfo for 8 pin #1 (0.64) and 8 pin #2 (1.5)
> 6. PCIE power draw is correct, so we use whatever value is being reported... for this example, we use 60W
> 7. To get real (total) power draw value, add 60W+360W = 420W
> 8. To get real GPU power draw, note "Board power draw" value in GPU-Z... in this example 460W, then 420W/460W = 0.91. Use 0.91 as your multiplier for GPU power in Hwinfo.


*edit

A few here said they clamped the 12V on the 8-pins and the total power was accurate, just balanced across the 2x8-pins.

I have all-single-cable sleeved extensions for the 8-pins, just need to get a clamp meter to check myself.

Question for you: if you change the wattage (lower or higher vs what you tested), does the multiplier for the power correction also change?


----------



## motivman

des2k... said:


> *edit
> 
> A few here said, they clamped the 12v on the 8pins and the total power was accurate just balanced for 2x8pins.
> 
> I have 2 all single cable sleeved ext for 8pins , just need to get a clamp reader to check myself.
> 
> Question for you, if you change the wattage (lower, higher vs what you tested), does the mutilpier for power correction also change ?


No, the multiplier remains the same. With this method, when I add the corrected pin 1 and pin 2 values in HWiNFO and verify against the power meter reading, they're always within 2-5 watts of each other.


----------



## dante`afk

motivman said:


> Ran this test from hell on my shunted 2 pin reference on XOC bios... did not hit power limit. Power draw = 770W, LMAO. So this is confirmed. Reference card + 5mohm shunts + XOC bios has unlimited power.....
> 
> View attachment 2477782


are you on water?


----------



## KCDC

jura11 said:


> Hi there
> 
> Running 3*GPUs(RTX 3090 GamingPro with RTX 2080Ti Strix and RTX 2080Ti AMP) setup for rendering with Asus ROG Crosshair VIII Hero with 3900X and 64GB of G.SKILL TridentZ Neo 3600MHz and I'm running them at 3600MHz and no issues at all,didn't have any problems with BSOD for very long time, I have more BSOD with X99(almost every day I got one BSOD)
> 
> Although my PSU is Superflower 8pack 2000w and no issues too
> 
> Hard to day why he is getting such BSOD, I don't have any issues and I use like Houdini too or Cycles or V-RAY or Octane and many other renderers
> 
> Hope this helps
> 
> Thanks, Jura


Thanks, Jura. I believe he is reading all this now, so this info will help.



HyperMatrix said:


> I initially had to go from 8 sticks of ram down to 4 sticks when I got the 3090. No issues when I previously ran 2x Pascal Titan X cards in SLI. But the 3090 didn’t like it. Made some adjustments and was finally able to boot up with all 8 sticks of ram but still had instability and had to drop CPU clock by 100MHz. So there’s definitely something there. This was on an X99 platform with a 6950x.


Also interesting, thank you. Might be a mobo power issue or something else there... the plot thickens.


----------



## motivman

dante`afk said:


> are you on water?


yes, on water... card pulled 635W


----------



## Thanh Nguyen

I saw a build with a tablet in the case showing all the temps and info of the system. Anyone know what that is and where to get it? Thanks.


----------



## motivman

motivman said:


> Ran this test from hell on my shunted 2 pin reference on XOC bios... did not hit power limit. Power draw = 770W, LMAO. So this is confirmed. Reference card + 5mohm shunts + XOC bios has unlimited power.....
> 
> View attachment 2477782


Edit: real power draw was about 635W. This was before I used my clamp meter and separate power supply to measure the draw.


----------



## Damaged__

J7SC said:


> well, to each his own. I will definitely keep my 3090 Strix, it seems to be a good sample (and I have seen 2235 on it, mind you when cold). For most benches and games, I have set it to 2220 (start, before bin drop due to temps) and VRAM to a 1325 for now, but will slow it down a little until I can w-cool it.
> 
> Here is my 'quick and dirty' method to check a new GPU and its quality, and potentially compare it to another one...as far as I remember, you already flashed the Strix and you would have to return it to full stock in order to use these quick-valuation methods.
> 
> 1.) With the card at bone-stock (no oc, no PL), I run a quick Superposition 4k w/ GPUz open and see where the natural max GPU MHz boost settles. In my 3090 Strix's case, it was around 2045 - 2060 as I recall. The second thing I check is max watt level at stock in Superposition.
> 
> 2.) In the old days, we used to get ASIC quality values (%, higher is usually better) in GPUz, but that led to a lot of 'open box returns' and RMA 'claims'...other than sub-zero XOCers liked (some) of the lower ASIC % cards, but that is a special case.
> 
> As ASIC values are no longer published, there is another way I use to choose between cards if I have the option - and that is comparing the TOTALLY STOCK (incl. bios) v-curves in MSI AB, with MSI AB freshly installed for each card. Obviously, it makes no sense to change anything in that vCurve manually as you want to compare 'as is'. That curve is sort of like an ASIC value as it shows you the voltage at a given Mhz of that specific chip. I take screenshots of each instance and compare them. I used this method with my 2x 2080 Aorus XTR WBs and I find it to be accurate. Now, VRAM is trickier, especially with GDDR6X with which I have exactly 4 use days of experience...but I run a battery of tests (including TS GT2, Firestrike Ultra, Superposition, and now FS2020 and CP 2077).
> 
> If you want to make up your mind between the Strix and whatever 'reference' card you have, I would do the above two steps and compare them (both on their respective stock bios of course). The other thing you might want to consider that if you're planning heavy oc, hard mods and such, a solid 3x 8 Pin with a very strong VRM is preferable. Finally, Asus Rog may very well come up with a Matrix version, and/or XOC bios (like they did for the 2080 Ti...)


Asus already has an XOC bios but it doesn't really work properly - at least not right now.


----------



## Zogge

KCDC said:


> Thanks, Jura, I believe he is now reading all this now, so this info will help
> 
> 
> 
> Also interesting, thank you, might be a mobo power issue or something else there... plot thickens..


10980XE here at 4.7-5.0GHz with CL16 4000MHz 8x8GB on X299. Working like a charm with the 3090 Strix. No OC issues.


----------



## motivman

Thanh Nguyen said:


> I saw a build with a tablet in the case and it shows all the temp and info of the system. Anyone know what is that and where to get it? Thanks.


I have the same setup in my build, learned how to do it from Jayztwocents


----------



## 86Jarrod

Highest above-freezing water score so far: I scored 16,075 in Port Royal.


----------



## sweepersc

KCDC said:


> Howdy, has anyone had to adjust their memory or CPU OCs to accommodate their 3090? I ask because a Houdini designer friend of mine had to take their 128gb ram down below rated clocks just to get his 3090 TUF to work without BSOD after he tried everything else, cross hair viii hero with 5950x. He knows what he's doing but is used to building xeon nodes, not new AMD stuff. His trial and error came down to the ram clock. Might just be his build, just curious. Only way he could get it to work without issue was take the 128gb set below its rated 3200 clock. I know this seems weird, he even tried using pci3 vs4 but same issue, perhaps its just that amount of ram trying to run at that freq and somehow the 3090 is taking too much power? He was running two 1080tis on this build without issue and now just the 3090 so that also doesn't make sense, to me, since it's using the same amount of power only through one slot.
> 
> I'm still waiting on mine to show up, so wanted to get as much info as possible beforehand since it will be going into a production workstation watercooled. It doesn't have the space to fit the 3slot heatsink, or I'd test it as-is. My build is totally different and will be added to one of my 2080tis with the build in my sig. Since I have half his ram, I am hoping it will work out with my ram at 3800.


Yes, I have experienced this as well. My previous memory OC profile for 2x16GB, which worked fine for the 2080 Ti and below, doesn't work with the 3090. I had to dial down to 4400MHz.


----------



## ViRuS2k

I'm happy with this: fully stable at almost 2000MHz Infinity Fabric with a high clock and as tight timings as I can get. If anyone can help with tightening the timings further, let me know.
Memory is at 1.47V, which is perfectly fine on B-die.

Tips are welcome.

Also, does anyone know where to buy really premium thermal pads for my Bykski block, to put on my 3090 MSI Trio X? Thanks.
Someone said I need 2.0mm and 3.0mm pads.


----------



## gfunkernaught

motivman said:


> I have the same setup in my build, learned how to do it from Jayztwocents
> 
> 
> 
> 
> 
> 
> View attachment 2477848
> View attachment 2477849


Love seeing high end hardware crammed into mid cases like these. Reminds me of the old days. Especially like the black tubing.


----------



## des2k...

motivman said:


> no, multiplier remains the same. With this method, when I add the values of pin 1 and 2 (once corrected) in hwinfo, and verify the reading on the power meter, its always within 2-5 watts of each other.


Calculated with your multiplier vs the reported HWiNFO numbers, the difference comes out to 5W under load for my Zotac.
I won't know if I have the same multiplier till I get my clamp meter.


----------



## motivman

des2k... said:


> Calculated with your multiplier vs reported hwinfo numbers, difference comes out as 5w under load for my Zotac.
> I won't know if I have the same multiplier till I get my clamp.
> View attachment 2477892


You really don't need a clamp meter to figure out the multipliers for your specific card, just a second power supply and a kill-a-watt power meter. Several others in the thread and I have verified with a clamp meter that power is balanced between both 8-pins. The multipliers I posted earlier are just made-up examples to show how to do the calculations; you need actual numbers from GPU-Z/HWiNFO and from your own card to get its correct multiplier values.


----------



## motivman

86Jarrod said:


> Highest above freezing water score PR 16075. I scored 16 075 in Port Royal


How are you getting temps that cold? Water chiller?


----------



## 86Jarrod

motivman said:


> How are you getting temps that cold? waterchiller?


Minnesota. All I have to do is crack the window and slide my pc over 6in. I froze the loop twice yesterday... Haha


----------



## Beagle Box

Damaged__ said:


> Asus already has an XOC bios but it doesn't really work properly - at least not right now.


How is it buggy?
They should look at the MSI Suprim BIOS and just bump it to 700W.
The Suprim BIOS is the smoothest-running one I've tried on my Strix OC so far. I'm back on the stock BIOS for the extra 30 watts.


----------



## motivman

86Jarrod said:


> Minnesota. All I have to do is crack the window and slide my pc over 6in. I froze the loop twice yesterday... Haha


lololololol


----------



## gfunkernaught

I'm picturing GN Steve with the blow torch.


----------



## des2k...

If you're curious about the XOC 1000W BIOS on a reference 2x8-pin card (4-phase memory, 14-phase core): at a 530W load, vcore1 and vcore2 run at 73% capacity, about 36.5A per 50A stage. The memory VRM is at 12.4A per 50A stage.
That assumes 1.35V for memory and 0.944V vcore for this run.
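Those utilization figures are quick to sanity-check. A small sketch of the math, using the per-stage currents and voltage estimates from the post above (the 50A stage rating is the one des2k quotes):

```python
# Per-stage VRM load math behind the figures above: amps per stage versus
# the 50A rating of each power stage, plus the implied rail power.
# Currents (36.5A core, 12.4A mem) and voltages (0.944V core, 1.35V mem)
# are the poster's numbers, not measurements of mine.

STAGE_RATING_A = 50

def utilization(amps_per_stage):
    """Fraction of a stage's current rating in use."""
    return amps_per_stage / STAGE_RATING_A

def rail_power_w(amps_per_stage, phases, volts):
    """Total rail power if current splits evenly across the phases."""
    return amps_per_stage * phases * volts

print(f"core: {utilization(36.5):.0%} of rating")        # 73%
print(f"core rail: {rail_power_w(36.5, 14, 0.944):.0f}W")  # ~482W
print(f"mem: {utilization(12.4):.0%} of rating")         # ~25%
print(f"mem rail: {rail_power_w(12.4, 4, 1.35):.0f}W")     # ~67W
```

The two rails together come to roughly 550W, in the same ballpark as the quoted 530W board load, which is consistent given the losses and estimates involved.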


----------



## NBrock

Anyone have any details on the EVGA 3090 FTW Ultra "red light of death"? I have a buddy that just had his card die and was wondering if there was any info. He has started the RMA process but is pretty bummed.


----------



## Falkentyne

NBrock said:


> Anyone have any details on the EVGA 3090 FTW Ultra "red light of death"? I have a buddy that just had his card die and was wondering if there was any info. He has started the RMA process but is pretty bummed.


Yeah, it's something fubar'd with the power delivery.
No one knows what.

But one person has been able to prevent crashes in League of Legends for two days now by adjusting the V/F curve so the card never passes the 1.050V point.
He did some extended testing and said that every time he got the black screen + 100% fan crash in League, the voltage was either 1.081V or 1.075V.

This looks suspiciously like the bug that affected the MSI MXM GTX 1070 Ver 1.0 PCB cards in several GT-series Skylake laptops, where any voltage past 0.950V would just crash instantly. It was completely fixed on the Ver 1.2 PCB cards, and MSI released a custom BIOS for the Ver 1.0 cards that limited the maximum allowed voltage to 0.881V.

The difference is, those cards didn't catch fire, smoke, or stop working. They just crashed.


----------



## toxicnerve

Any BIOS recommendations for a Suprim X?


----------



## bmgjet

NBrock said:


> Anyone have any details on the EVGA 3090 FTW Ultra "red light of death"? I have a buddy that just had his card die and was wondering if there was any info. He has started the RMA process but is pretty bummed.


It's failing one of the MCU's tests; the MCU then throws a halt to the GPU and stops it POSTing.
I decompiled the MCU firmware for the FTW3, and this is the stuff that triggers the LEDs.
Technically you can mod the firmware to remove all the checks, but then you lose your warranty. And it's probably failing a check because something has gone wrong with the hardware, causing it to read too low a voltage on one of the inputs.
A mate's FTW3 failed, which is why I looked into it. It would just sit with 1 red LED by the first plug, so he brought it over to me to test and see if any parts were broken.
His card worked fine in my PC, lol.
The check it was failing on his computer was slot voltage below 11.8V during POST, where on my computer it was 12.3V.
I suspect that's the biggest issue people are having: these cards already overdraw on the slot, so if you have a few PCIe cards or weak 24-pin power, the card is going to pull a lot of power through that input and drag the voltage down.
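To make the failure mode concrete, here is a hypothetical reconstruction of the kind of POST check described: halt if the PCIe slot's 12V rail reads below a fixed threshold. The 11.8V figure is from the decompile above; the function and constant names are made up for illustration, not taken from the firmware.

```python
# Sketch of the slot-voltage POST check described above (illustrative
# names; the 11.8V trip point is the value reported from the decompile).

SLOT_12V_MIN = 11.8          # firmware trip point from the decompile
ATX_12V_FLOOR = 12.0 * 0.95  # ATX spec allows the 12V rail down to 11.4V

def post_check(slot_voltage):
    """Return (ok, reason). Note the trip point sits well inside ATX tolerance."""
    if slot_voltage < SLOT_12V_MIN:
        return False, f"slot 12V rail at {slot_voltage:.2f}V, below {SLOT_12V_MIN}V"
    return True, "ok"

print(post_check(12.3))   # the healthy board in the story: passes
print(post_check(11.75))  # the failing board: 1 red LED, no POST
```

This is why the same card can POST fine in one PC and halt in another: the check depends on the host's slot voltage, not on the card alone.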


If you do mod the firmware, I released a tool over on the EVGA forum for force-flashing the MCU, since a few people's cards have been bricked by PX1 trying to update the wrong MCU firmware.


----------



## Wihglah

My Optimus Waterblock came in. It's epic. Way better performance than anything else I have used.


----------



## Thanh Nguyen

Weird, I throw a 600W load at my FTW3 every day and haven't seen my card die. The RMA process was easy for me too: one week and I got another brand-new card. I just told them my card can't reach 500W with the 500W BIOS.


----------



## des2k...

...


bmgjet said:


> Its failing one of the MCUs test, then MCU throws a halt to the GPU and stops it posting.
> I decompiled the mcu firmware for the ftw3 and this is the stuff that triggers the leds.
> View attachment 2477953
> 
> 
> View attachment 2477954
> 
> View attachment 2477955
> 
> 
> Techonally you can mod the firmware to remove all the checks. But then you lose your warranty. And its probably failing a check because something gone wrong with the hardware causing it to read too low a voltage on one of the inputs.
> Had a mates FTW3 fail which is why I looked into it, would just sit on 1 red led by the first plug so he bought it over to me to test out and see if any parts were broken.
> His card worked in my PC fine lol.
> The check it was failing on his computer was slot voltage below 11.8V during post. Where on my computer it was 12.3V.
> I suspect thats the biggest issue people are having since they already over draw on the slot so if you have a few pci-e cards or weak 24 plug power its going to suck up a lot of power though that input and pull the voltage down.
> 
> 
> If you do mod the firmware I released a tool over on EVGA forum for force flashing the MCU since there have been a few peoples cards get bricked by PX1 trying to update the wrong mcu firmware.


The 12V spec on a PC is ±5%; what is EVGA doing with that stupid voltage check!?


----------



## Falkentyne

des2k... said:


> ...
> 
> Spec is +-5% on 12v on PC, what is Evga doing with that stupid voltage check !?


The hell?
It's failing at about 1.7% below nominal? Uh....
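The incredulity checks out arithmetically. A two-line sketch of the margin in question, using the 11.8V trip point quoted from the decompile and the ±5% ATX tolerance:

```python
# How far below the nominal 12V rail is the reported 11.8V trip point,
# versus the +/-5% tolerance the ATX spec permits?
nominal = 12.0
trip = 11.8              # POST-check threshold quoted from the decompile
atx_floor = nominal * 0.95

print(f"trip point is {(nominal - trip) / nominal:.1%} below nominal")  # 1.7%
print(f"ATX spec floor: {atx_floor:.1f}V")  # 11.4V, well under the trip point
```

So a rail that is entirely legal per ATX (anywhere from 11.4V up) can still fail the firmware's check.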


----------



## mirkendargen

bmgjet said:


> Its failing one of the MCUs test, then MCU throws a halt to the GPU and stops it posting.
> I decompiled the mcu firmware for the ftw3 and this is the stuff that triggers the leds.
> View attachment 2477953
> 
> 
> View attachment 2477954
> 
> View attachment 2477955
> 
> 
> Techonally you can mod the firmware to remove all the checks. But then you lose your warranty. And its probably failing a check because something gone wrong with the hardware causing it to read too low a voltage on one of the inputs.
> Had a mates FTW3 fail which is why I looked into it, would just sit on 1 red led by the first plug so he bought it over to me to test out and see if any parts were broken.
> His card worked in my PC fine lol.
> The check it was failing on his computer was slot voltage below 11.8V during post. Where on my computer it was 12.3V.
> I suspect thats the biggest issue people are having since they already over draw on the slot so if you have a few pci-e cards or weak 24 plug power its going to suck up a lot of power though that input and pull the voltage down.
> 
> 
> If you do mod the firmware I released a tool over on EVGA forum for force flashing the MCU since there have been a few peoples cards get bricked by PX1 trying to update the wrong mcu firmware.


I like the variable name there acting like it's a 6-pin and two 8-pins, heh. Being that picky about the PCIe voltage is dumb... but it also doesn't explain the people getting pop sounds in low-load situations. Oh, EVGA...


----------



## Falkentyne

mirkendargen said:


> I like the variable name there acting like it's a 6pin and 2 8pins, heh. Being that picky about the PCIE voltage is dumb...but it also doesn't explain the people getting pop sounds in low load situations. Oh Evga...


It does explain that they really messed up that board, though. You don't see any other cards, even Lord Zotacs, dying like that...


----------



## bmgjet

mirkendargen said:


> I like the variable name there acting like it's a 6pin and 2 8pins, heh. Being that picky about the PCIE voltage is dumb...but it also doesn't explain the people getting pop sounds in low load situations. Oh Evga...


Lol, they probably originally intended it to be 2x8-pin and 1x6-pin, but then just went with all 8-pins since everyone else did.
2x8-pin plus 1x6-pin would have given the card a 480W max power limit, which is also roughly in line with the hardware design.

A PSU switching on and off makes the same sort of pop sound some people have reported.
But I've also seen a few cards with exploded MCUs on them, though those were followed by smoke coming out.
Fuses shouldn't make any sound or smoke; you'd just hear the PSU switch off.
Either way it's been a bad launch. I'd love to see the real RMA numbers, and whether EVGA actually makes a proper announcement on the issue instead of keeping it quiet and just saying send the card back.


----------



## HyperMatrix

For anyone who may be curious... this is what EVGA said for the thermal pad thicknesses of the 3090 Kingpin:

2.85mm VRM
2.25mm Memory (front of card)
2mm Memory (back of card)

No wonder I have the memory hitting over 70°C even without an OC; those are some super-thick pads. And I'm not sure they're the same quality as the FTW3 ones, which were reportedly just 1.2 W/mK. Is that thickness common for stock coolers? I normally don't replace pads unless I'm switching to a water block, and I'm used to either 0.5mm or 1mm thickness. 2.85mm seems very excessive and inefficient.


----------



## DrunknFoo

HyperMatrix said:


> For anyone who may be curious....this is what EVGA said for the thermal pad thickness of the 3090 Kingpin:
> 
> 2.85mm VRM
> 2.25mm Memory (front of card)
> 2mm Memory (back of card)
> 
> No wonder I have the memory hitting over 70C even without an OC. Those are some super thick pads. And I'm not sure if they're the same quality as the FTW3 ones, which were reportedly just 1.2 W/mK. Is that thickness pretty common for stock coolers? I normally don't replace pads unless I'm switching to a water block and I'm used to either 0.5mm or 1mm thickness. 2.85mm seems very excessive and inefficient.


Unconventional sizing: if you look at aftermarket thermal pads, they come in 0.5mm increments, with the thickest being 3mm. It's cost-cutting manufacturing from EVGA and the other AIBs; they could easily have machined the contact sections into the plate if they were willing to spend a little extra time per piece of copper.


----------



## gfunkernaught

It's crazy how much detailed and complex info there is about the power sensitivity of the 3090; I don't remember seeing any of this in the 2080 Ti Owners thread. Is it because Turing wasn't as sensitive, or just didn't have the sensors the 3090 has?


----------



## TK421

Sheyster said:


> Just the fan issue I mentioned earlier with the 520W BIOS. Otherwise it's pretty normal.
> 
> The 1000W XOC BIOS runs memory at full speed all the time by default, and also has various protections removed. I believe you also need to force P0 state if you're using Compute (e.g. mining) and want the highest Compute performance.


Ah, I misread what you said.

AFAIK the EVGA FTW3 has a messed-up power-balancing scheme; is it fine to use the 500W BIOS with the Strix then?


Is there no way to get that 1000W BIOS to run fans properly, or downclock properly?


What's the issue with mining having to force P0? Does the 1000W BIOS make the card unable to enter P0 automatically if it's only a compute load?


----------



## mirkendargen

HyperMatrix said:


> For anyone who may be curious....this is what EVGA said for the thermal pad thickness of the 3090 Kingpin:
> 
> 2.85mm VRM
> 2.25mm Memory (front of card)
> 2mm Memory (back of card)
> 
> No wonder I have the memory hitting over 70C even without an OC. Those are some super thick pads. And I'm not sure if they're the same quality as the FTW3 ones, which were reportedly just 1.2 W/mK. Is that thickness pretty common for stock coolers? I normally don't replace pads unless I'm switching to a water block and I'm used to either 0.5mm or 1mm thickness. 2.85mm seems very excessive and inefficient.


Are they actual conventional pads or more like a crumbly putty? The Strix "pads" weren't pad-like at all. I hear the super expensive Fujipoly stuff has the crumbly consistency though.


----------



## HyperMatrix

mirkendargen said:


> Are they actual conventional pads or more like a crumbly putty? The Strix "pads" weren't pad-like at all. I hear the super expensive Fujipoly stuff has the crumbly consistency though.


I was able to take it apart and apply LM without any of it tearing or crumbling. I've used the Fujipoly 17 W/mK stuff before, but I'm done with those prices. The Thermalright 12.8 W/mK pads are probably the best compromise: an 85mm x 45mm sheet of 1mm thickness for about $12.50 USD on Amazon.ca with free Prime delivery.

I bought an Alphacool X6 Ram cooler from DrunknFoo but can't mount it until I get a block because drilling mounting holes in the backplate would void my warranty. So I was looking for a way to keep the backplate cool. I've had to go to stock memory clocks on the card in my new build because of how hot it gets. Hit over 80C at +1350MHz and crashed. And that's with half of a 120mm fan blowing on the backside too.


----------



## inedenimadam

EK needs to hurry up to market with that rear waterblock. I need it like...a month ago.


----------



## J7SC

...Until more professional water-cooled solutions are available via custom blocks, I mounted 2x TT 120 AIO fans I've had for years. They work great and I can bench at 4K with VRAM at 1350 consistently (VRAM at mid-60s °C max). Those fans have a pitch and speed that doesn't require flush mounting while staying very quiet, and I don't want flush mounting anyway, so the hot air can escape. I also changed the angle from the one in the pic. The system is off the floor and getting integrated into a complex build, but the 2x120mm backplate fans stay until a custom water-cooling solution is available (or home-made, if I manage).


----------



## bmgjet

gfunkernaught said:


> Crazy how all of this detailed and complex info about power sensitivity of the 3090, I don't remember seeing it in the 2080 Ti Owners thread. Is it because Turing wasn't as sensitive or didn't have the sensors the 3090 has?


Each gen they add more to try to stop people modding them.
Also, Ampere has the highest TDP of any NVIDIA card.
We went from flagships being 250W (1080 Ti and 2080 Ti) to what we have now (350-520W).


But on the hardware side, this is how it's changed:
1080 Ti (could bypass power limits easily by blobbing a bit of solder between the LMA2218 IC legs, so pre- and post-shunt voltages read the same, meaning 0W usage).

So then in Turing they added a safe mode if it detected the power limit not moving or too low.
2080 Ti (so you shunt mod the plug shunts only and it was good).

So then on Ampere they added power balancing and extra voltage checks to make sure it's pulling the expected ratio from each input, meaning you can't just shunt the plugs only.
3090 (you need to shunt mod every shunt on it).
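For anyone new to shunt mods, the arithmetic behind why a stacked shunt fools the power limiter can be sketched like this. A minimal sketch, assuming a 5 mOhm stock shunt (typical, but values vary by board) and a 15 mOhm shunt stacked on top; all numbers are illustrative:

```python
# Sketch of why a stacked shunt lowers reported power.
# Assumptions: 5 mOhm stock shunt, 15 mOhm stacked shunt (illustrative values).

def parallel(r1, r2):
    """Resistance of two shunts stacked (soldered) on top of each other."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005   # 5 mOhm stock shunt
R_STACK = 0.015   # 15 mOhm shunt stacked on top

r_eff = parallel(R_STOCK, R_STACK)   # 0.00375 Ohm
scale = r_eff / R_STOCK              # 0.75

# The controller computes power from the voltage drop across the shunt,
# so it now sees only 75% of the true draw:
true_draw = 466.7                    # W actually pulled
reported = true_draw * scale         # ~350 W shown to the limiter
print(round(reported, 1))
```

This is also why Ampere's per-input balance checks matter: if only some shunts are modded, the ratios between inputs no longer match what the board expects.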


----------



## Rangerscott

I love having a $2k card that plays games at 21 fps. Lol. This has been the crappiest "upgrade" I've ever done.


----------



## Falkentyne

bmgjet said:


> Each gen they add more to try to stop people modding them.
> Also, Ampere has the highest TDP of any NVIDIA card.
> We went from flagships being 250W (1080 Ti and 2080 Ti) to what we have now (350-520W).
> 
> 
> But on the hardware side, this is how it's changed:
> 1080 Ti (could bypass power limits easily by blobbing a bit of solder between the LMA2218 IC legs, so pre- and post-shunt voltages read the same, meaning 0W usage).
> 
> So then in Turing they added a safe mode if it detected the power limit not moving or too low.
> 2080 Ti (so you shunt mod the plug shunts only and it was good).
> 
> So then on Ampere they added power balancing and extra voltage checks to make sure it's pulling the expected ratio from each input, meaning you can't just shunt the plugs only.
> 3090 (you need to shunt mod every shunt on it).


Dante and I shunted the small shunts (we don't even know what package they are; they are NOT 1206s). The reference cards, except the Asus cards, actually have 1206s. The EVGA FTW3 has four of them. The Asus doesn't have any. The Founders Edition, well... its small shunts are as wide as a 1206, but slightly longer. If you put a 1206 on top of them, the edges of the 1206 only fit on the black portion of the shunt, so it's just as annoying as the larger 'depressed edges' of the 2512 shunts--just even WORSE to deal with.


First picture: a 1206 shunt from Mouser, sitting on top of the Ngreedia whatever-size-it-is mini 005 mOhm shunt.
If you look carefully, you can see that the edges of the Mouser shunt are only as long as the black housing. More obvious in the 2nd picture.
Please don't make fun of my soldering job.











Second pic: shunts side by side. Might be clearer here.











anyway, I modded the two shunts next to "GPU Chip Power" and "PCIE Slot Power", and @dante`afk modded all three.

Not a single power rail reading changed, nor the sub/misc rails. Not even by one watt....

Yeah baby. Epic waste of time?

Ok but we found something else out here.
1) The throttling _IS_ related to the MSVDD/NVVDD rails. We know this because of what happens at 100% TDP on the shunted card: effective clocks drop drastically, followed by a power limit from "Normalized TDP". Some rail is throttling and it isn't being reported!

2) Take a look at this.
This is Dante's 4K Superposition custom extreme shaders run.

Notice that ALL of his input rails are low because he shunted them all extremely well.
Also notice that GPU Chip Power (GPU Core NVVDD Input Power (sum), at the top) equals Misc1 Input Power + Misc3 Input Power + GPU Core NVVDD1 Input Power (sum).
AND NVVDD2 Input Power (sum) equals GPU Core NVVDD (sum) + SRAM Input Power (sum).

No problems.
All rails are shunted. 

Now look at the OUTPUT RAILS. There are three of them plus the "sum".
Look carefully. The "Sum" one you see equals the bottom NVVDD Output Power + SRAM Output Power.










Notice anything?

THESE RAILS AREN'T RESPONDING TO SHUNTS!

They're reporting power as if it's coming from unshunted rails.
So the board is somehow getting a 'pre-shunt' amps value from someplace, and it isn't from the shunts; otherwise the output rails would report MUCH MUCH lower.
But clearly the card is getting rail data from somewhere, and it isn't in HWiNFO64. _BUT_ TDP Normalized % can see it. That's why TDP Normalized is so high.
TDP Normalized is basically the TDP of ANY individual rail, with respect to a "default" limit (100%) and a "max" limit, which can be the main rails (that you can shunt) or ANY of the sub-rails that has a limit attached to it.

This limit is probably stored in the vbios somewhere, but no one knows what it is. But my guess is it's related to the NVVDD/MSVDD or PLL rails.
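The two sum relationships described above can be checked mechanically against a sensor log. A hedged sketch: the rail names follow the HWiNFO labels quoted in the post, but the wattages here are invented purely for illustration:

```python
# Verify the two rail identities described above, using made-up readings.
# Real values would come from HWiNFO sensor logging.
rails = {
    "Misc1 Input Power": 12.0,
    "Misc3 Input Power": 8.0,
    "GPU Core NVVDD1 Input Power (sum)": 280.0,
    "GPU Core NVVDD Input Power (sum)": 300.0,   # aka "GPU Chip Power"
    "SRAM Input Power (sum)": 40.0,
    "NVVDD2 Input Power (sum)": 340.0,
}

def close(a, b, tol=1.0):
    """Allow a small tolerance for polling jitter."""
    return abs(a - b) <= tol

# Identity 1: GPU Chip Power = Misc1 + Misc3 + NVVDD1
assert close(rails["GPU Core NVVDD Input Power (sum)"],
             rails["Misc1 Input Power"]
             + rails["Misc3 Input Power"]
             + rails["GPU Core NVVDD1 Input Power (sum)"])

# Identity 2: NVVDD2 = GPU Chip Power + SRAM
assert close(rails["NVVDD2 Input Power (sum)"],
             rails["GPU Core NVVDD Input Power (sum)"]
             + rails["SRAM Input Power (sum)"])
print("rail sums consistent")
```

On a shunted card the input rails satisfy these identities at artificially low values, which is exactly why the unmoving output rails stand out.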


----------



## SoldierRBT

@Falkentyne 
I’ve noticed a similar problem with the KPE in some benchmarks like PR and Time Spy. It starts to hit a PWR limit around 470W even though the BIOS is 520W. My guess is that the internal clocks can’t keep up with the requested clocks due to the lack of NVVDD/MSVDD voltage, and GPU-Z reads this as a PWR limit somewhere.

I’ve solved this by adjusting the NVVDD/MSVDD voltages to raise the internal clocks, and now the card can use the full 520W BIOS (just need to make sure it doesn’t hit 520W, because then it’ll show PWR limit). Is it possible to connect an Elmor EVC2 to the Founders Edition? It may need a slight bump in voltages to eliminate those green bars.

The 1000W BIOS doesn’t have any limit. It’ll try to boost to 1.10V and pull 600W+


----------



## Falkentyne

SoldierRBT said:


> @Falkentyne
> I’ve noticed a similar problem with the KPE in some benchmarks like PR and Time Spy. It starts to hit a PWR limit around 470W even though the BIOS is 520W. My guess is that the internal clocks can’t keep up with the requested clocks due to the lack of NVVDD/MSVDD voltage, and GPU-Z reads this as a PWR limit somewhere.
> 
> I’ve solved this by adjusting the NVVDD/MSVDD voltages to raise the internal clocks, and now the card can use the full 520W BIOS (just need to make sure it doesn’t hit 520W, because then it’ll show PWR limit). Is it possible to connect an Elmor EVC2 to the Founders Edition? It may need a slight bump in voltages to eliminate those green bars.
> 
> The 1000W BIOS doesn’t have any limit. It’ll try to boost to 1.10V and pull 600W+


You can connect an EVC2 to the Founders Edition, but it seems to be read-only. It's the same problem that happens on the Strix with the newer NVIDIA drivers. But on the Strix, you can just install the 456.xx drivers and there's no problem at all. The read-only attribute also resets after a hard power off. But there was one person who tried it on an FE, and he couldn't get write access. He could read everything fine. It acted the same as the Strix did on the newer drivers, except in this case the older drivers didn't help. I don't have this device, nor do I know much about it, so I'm useless for this.


----------



## J7SC

Falkentyne said:


> You can connect an EVC2 to the founder's edition but it seems to be read only. It's the same problem that happens on the Strix with the newer Nvidia drivers. But on the Strix, you can just install the 456.xx drivers and no problem at all. The read only attribute also resets after a hard power off. But there was one person who tried it on a FE, and he couldn't get write access. He could read everything fine. It acted the same as the Strix did on the newer drivers, but except in this case, the older drivers didn't help. I don't have this device nor do I know much about it so I'm useless for this.


How much difference do the 456.xx drivers make in terms of performance in Port Royal, Superposition, etc.? When I got the Strix last week and installed a fresh Win 10, I just downloaded the latest NVIDIA drivers, 460.xx.


----------



## martinhal

Rangerscott said:


> I love having a $2k card that plays games at 21 fps. Lol. This has been the crappiest "upgrade" I've ever done.


Sounds about right for Cyberpunk and FS2020


----------



## changboy

Wihglah said:


> My Optimus Waterblock came in. It's epic. Way better performance than anything else I have used.


Yup its really epic, the backplate is insane


----------



## changboy

I just ran the Superposition bench at 4K Extreme and my board power draw is 568W; the card boosts to 2205MHz and drops to 2160MHz. If I try to hold 2175MHz the bench crashes at some point; maybe my temp is getting too high (48C) for that frequency. So, my result:


----------



## mardon

I'm shunting my reference 3090 this week (well, I'm not; I'm paying an expert to do it). I've decided on a modest 15mOhm stacked shunt. My question is, has anyone else stack-shunted a reference board with an EK block and backplate? Did you have any clearance issues?

I've got 4mOhm shunts too in case there are any clearance issues, so I can do a direct replacement. I'd rather it not pull that much power without software intervention, so I'd rather go the stacked route.

I've got 20cm custom cables in my SFX build. They're only 16 gauge. I'm guessing that because of the length, 183W per 8-pin should be fine?

Thanks in advance for any replies.
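As a rough sanity check on the cable question, here is the per-wire current at 183 W. A sketch under stated assumptions: a 12 V rail and three current-carrying 12 V wires per 8-pin connector (standard for PCIe 8-pin):

```python
# Rough per-wire current for 183 W through one 8-pin connector.
# Assumptions: 12 V rail, 3 current-carrying 12 V wires per PCIe 8-pin.
power_per_connector = 183.0                  # W, the figure from the post
amps_total = power_per_connector / 12.0      # ~15.25 A per connector
amps_per_wire = amps_total / 3               # ~5.1 A per 16 AWG wire
print(round(amps_per_wire, 2))
```

At roughly 5 A per wire, 16 AWG over 20 cm is comfortably within typical chassis-wiring ratings, so the cable length and gauge shouldn't be the limiting factor here.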


----------



## motivman

mardon said:


> I'm shunting my Reference 3090 this week (well i'm not, i'm paying an expert to do it). I've decided on a modest 15mOhm stacked shunt. My question is, has anyone else stack shunted a reference board with an EK block and backplate? Did you have any clearance issues?
> 
> I've got 4mOhm shunts too in case there are any clearance issues so can do direct replacement. I'd rather it not pull that much power without software intervention so would rather go stacked route.
> 
> I've got 20cm custom cables on my SFX build. They're only 16Gauge. I'm guessing because of length 183w per 8pin should be fine?
> 
> Thanks in advance for any replies.


no clearance issues for shunts on the GPU side... for the shunt on the backplate side, just put a really, really thin thermal pad over it and you should be OK.
As for which shunt to use, it doesn't matter: 5mOhm, 15mOhm, even 3mOhm, the card will pull about 525W max due to a hidden power limit that no one can defeat except with the 1000W BIOS.


----------



## Exilon

Swapping out the 2mm pads on the 6mm Alphacool backplate for 1mm pads + a 1mm copper shim + TIM dropped memory junction temps by 15C.
Backplate temperature rose considerably in exchange; it is now burning hot. My 80mm fan + 40x40x10mm heatsink + 1mm pad sitting on the backplate is being bottlenecked at the thermal interface.
Going to strap a 150x90x10mm heatsink to it using 0.5mm pads to the backplate and see how it goes.
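The reasoning behind the pad/shim swap can be sketched as a series thermal-resistance estimate. Hedged assumptions: ~3 W/mK pads and ~400 W/mK copper, per cm² of contact; the numbers are purely illustrative, not measurements of these parts:

```python
# Series thermal resistance per cm^2 of contact for the two stacks.
# Assumptions: 3 W/mK pads, 400 W/mK copper (illustrative values).

def r_th(thickness_mm, k_w_per_mk, area_cm2=1.0):
    """Thermal resistance of one layer, in K/W."""
    return (thickness_mm / 1000.0) / (k_w_per_mk * area_cm2 * 1e-4)

stock  = r_th(2.0, 3.0)                          # single 2 mm pad
modded = r_th(1.0, 3.0) + r_th(1.0, 400.0)       # 1 mm pad + 1 mm copper shim

# The copper layer is nearly free thermally, so halving the pad
# thickness roughly halves the stack's resistance -- and pushes more
# heat into the backplate, which is why it now runs hot.
print(round(stock, 2), round(modded, 2))
```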


----------



## long2905

Exilon said:


> Swapping out the 2mm pads on the 6mm Alphacool backplate with 1mm pads + 1mm copper shim + TIM dropped memory junction temps by 15C.
> Backplate temperature rose considerably in exchange. It is now burning hot. My 80mm fan + 40x40x10mm + 1mm pad heatsink sitting on the backplate is being bottlenecked at the thermal interface.
> Going to strap a 150x90x10mm heatsink to it using 0.5mm pads to the backplate and see how it goes.


did you take a picture of your pad job? did you put liquid metal in there or regular paste?


changboy said:


> Yup its really epic, the backplate is insane


the Optimus block is only for the FTW3 card, right? can you take a picture of the card front and back with the waterblock on?


----------



## Esenel

Wihglah said:


> My Optimus Waterblock came in. It's epic. Way better performance than anything else I have used.





changboy said:


> Yup its really epic, the backplate is insane


Could both of you please do the following:
Run Time Spy GPU Test 2 in a loop.

Take a picture after 10 minutes of HWiNFO with water temp, water flow, GPU core temp, memory junction temp and power consumption?

So that we can properly see the delta.

Because until now everybody says it is awesome, but no data has been provided yet.

Thanks!


----------



## mardon

motivman said:


> no clearance issues for shunts on the GPU side... for the shunt on the backplate side, just put a really, really thin thermal pad over it and you should be OK.
> As for which shunt to use, it doesn't matter: 5mOhm, 15mOhm, even 3mOhm, the card will pull about 525W max due to a hidden power limit that no one can defeat except with the 1000W BIOS.


Thanks for the response. Do you think a couple of layers of electrical tape will work?

525W is totally fine for me. I'd only ever run that for a benchmark. I'm considering going the full replacement route now! I just feel it would be neater.


----------



## rowo

I have the normal 3090 Palit GameRock with the 370W BIOS. Is it safe to flash the 420W BIOS from the OC version? When I download the OC BIOS from the Palit homepage I get it in a BIOS update utility; I'm guessing this won't work?


----------



## gfunkernaught

Rangerscott said:


> I love having a $2k card and it plays games at 21 fps. Lol. This has been the crappiest "upgrade" Ive ever done.


Welcome to the new world order. I thought I was good playing cyberpunk with everything maxed at 4k then I found that central park area that crunched the frame rate down to 29fps. Yeah...


----------



## mardon

rowo said:


> I have the normal 3090 Palit GameRock with the 370W BIOS. Is it safe to flash the 420W BIOS from the OC version? When I download the OC BIOS from the Palit homepage I get it in a BIOS update utility; I'm guessing this won't work?


Is the GameRock the 3-pin?

I doubt Palit's own utility will let you flash the new BIOS. You'd be better off getting the latest NVFlash off TechPowerUp and using nvflash -6 to put the 420W BIOS on there.


----------



## rowo

mardon said:


> Is the GameRock the 3-pin?
> 
> I doubt Palit's own utility will let you flash the new BIOS. You'd be better off getting the latest NVFlash off TechPowerUp and using nvflash -6 to put the 420W BIOS on there.


Yes, it's 3-pin. What does -6 mean? Never used NVFlash, tbh...

Edit: I found the OC BIOS here: Palit RTX 3090 VBIOS — will this work when I flash it with NVFlash? I don't want to brick my 3090..


----------



## changboy

Esenel: I don't have the Optimus block, I have the EK, but GN Steve showed it on YouTube and you can see it clearly; it's epic.


----------



## mardon

rowo said:


> Yes, its 3 pin. What does -6 mean? Never used Nvflash tbh..
> 
> Edit: I found the OC bios here: Palit RTX 3090 VBIOS will this work when i flash it with Nvflash? I dont want to RIP my 3090..


Follow this guide and use the latest version of Nvflash and you'll be fine. 



How-To Flash RTX Video Card BIOS To A Different Series


----------



## rowo

mardon said:


> Follow this guide and use the latest version of Nvflash and you'll be fine.
> 
> 
> 
> How-To Flash RTX Video Card BIOS To A Different Series


Ok, so I wanted to flash it with "NVIDIA NVFlash 5.527 (ID Mismatch Disabled)", but when I try to back up my BIOS I get the message "ERROR: No NVIDIA display adapters found," no matter whether I enable or disable the card in Device Manager..

Edit: The non-"Mismatch Disabled" version finds the card. Will this work to flash?


----------



## Nizzen

rowo said:


> Ok, so i wanted to flash it with "NVIDIA NVFlash 5.527 (ID Mismatch Disabled) " but when i want to backup my bios i get the message "ERROR: No NVIDIA display adapters found." no matter if i enable or disable the card in device manager..
> 
> Edit: The not "Mismatch Disabled" finds the card. Will this work to flash?


With *NVIDIA NVFlash with Board Id Mismatch Disabled v5.590.0*,
can we flash modified BIOSes like the Galax 1000W BIOS?


----------



## rowo

mardon said:


> Follow this guide and use the latest version of Nvflash and you'll be fine.
> 
> 
> 
> How-To Flash RTX Video Card BIOS To A Different Series


Ok, thank you, it worked, with nvflash_5.670.0 🤘


----------



## changboy

Me, I don't like to see my board power go higher than 525W, so I stop it there. I've already seen 688W, but I really don't think that's good for the long term, so I put the slider at 52% for every day.

Only if I want to do a benchmark do I put it at around 70% on the 1000W BIOS.
The thing is, on the FTW3 the PCIe slot power draw is a bit scary:

- 525W board power: 88W
- 600W board power: around 100W
- 680W board power: 108W
- Normal 480W BIOS: 78W

All values are peak levels.
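For context on why those slot numbers look scary: the PCIe CEM spec allots 5.5 A on the 12 V slot pins (66 W; the familiar 75 W figure includes the 3.3 V rail). A quick sketch of how far each reported peak sits above that allocation:

```python
# Percent over the 66 W 12 V slot allocation for each reported peak.
SLOT_12V_W = 5.5 * 12.0   # 66 W on the 12 V slot pins per the PCIe CEM spec
peaks = {480: 78, 525: 88, 600: 100, 680: 108}   # board power -> slot watts

over = {bp: round((w / SLOT_12V_W - 1) * 100) for bp, w in peaks.items()}
print(over)   # the 108 W peak is ~64% over the 12 V allocation
```

Even the stock 480W BIOS peak is already above the 12 V allocation, which matches the sense in the post that the FTW3 leans on the slot harder than boards like the Strix.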


----------



## Wihglah

Esenel said:


> Could both of you please do the following:
> Run Time Spy GPU Test 2 in a loop.
> 
> Take a picture after 10 minutes of HWiNFO with water temp, water flow, GPU core temp, memory junction temp and power consumption?
> 
> So that we can properly see the delta.
> 
> Because until now everybody says it is awesome, but no data has been provided yet.
> 
> Thanks!












TMPIN4 is my water temp.
500W BIOS.
All the fan/pump profiles are auto PWM controlled, so when it benches it ramps. Chassis #4 is the GPU loop D5; I don't have a flow rate meter, but it was giving it the beans. Loop details are an XSPC RX480 rad and the Optimus (the CPU is on a separate loop).
This is my 24/7 overclock right now. I only have an 860W PSU, so there isn't much point pushing it further.


----------



## Exilon

long2905 said:


> you put liquid metal in there or regular paste?


Regular paste. LM would be too adventurous on the back with all the bits around.


----------



## sippo

changboy said:


> Yup its really epic, the backplate is insane


Could you post some photos of it? (in/out)


----------



## Esenel

changboy said:


> Esenel : I dont have the optimus block, i have the EK but GN Steve show it on youtube and we see it clear, its epic.


Could you share link?
I didn't find anything about the block on his channel.

Thanks.



Wihglah said:


> View attachment 2478033
> 
> 
> TMPIN4 is my Water temp.
> 500W BIOS.
> Chassis #4 is the GPU loop D5.
> Loop details are XSPC RX480 rad and the Optimus (CPU is on a separate loop).
> This is my 24/7 overclock right now. I only have a 860W PSU, so there isn't much point pushing it further.


So at ~350W the Optimus block has a delta of 7K.
Which test was it?
Why only 350W?
Could you do a 400W and a 500W run as well?

Could you measure it with HWInfo?
There we would see the Memory Junction temp as well.

Thanks!


----------



## HyperMatrix

Esenel said:


> Because until now everybody says it is awesome but no data provided yet.
> 
> Thanks!


$80 bottle of wine almost always tastes better than a $10 bottle of wine. Even if those bottles of wine are identical. People get a better sense of quality when they pay more. Not saying quality isn't there. But I've never seen anything in benchmarks to indicate that the Optimus blocks would be 5-10C better temps than other quality blocks, despite it being claimed by some.


----------



## J7SC

changboy said:


> Me, I don't like to see my board power go higher than 525W, so I stop it there. I've already seen 688W, but I really don't think that's good for the long term, so I put the slider at 52% for every day.
> 
> Only if I want to do a benchmark do I put it at around 70% on the 1000W BIOS.
> The thing is, on the FTW3 the PCIe slot power draw is a bit scary:
> 
> - 525W board power: 88W
> - 600W board power: around 100W
> - 680W board power: 108W
> - Normal 480W BIOS: 78W
> 
> All values are peak levels.


...yeah, that's up there. My Strix OC always seems to peak at 48.x W PCIe power draw no matter what, even at 503 W board power peak. With that in mind, I don't feel compelled to load a 520W bios (never mind the 1000W with protections disabled). Not only am I still on air until w-cooling is settled, the one real difference would be in benches, and I primarily use those just to find the limits below which I should safely settle 'daily' clocks, PL for gaming, productivity etc. Speaking of air, the two TT120mm AIO fans I use for the Strix backplate work very well, but with a constant load I noticed that the angle the fans hit the backplate matters...w/ everything else the same, the delta can be 3 C to 4 C in max VRAM temps just by varying the angles through a 15 degree arc.


----------



## gfunkernaught

Don't let this happen to your 3090


----------



## J7SC

gfunkernaught said:


> Don't let this happen to your 3090


...a picture (or video with many successive pictures) is worth a thousand words ?!  ...makes you wonder though why they didn't turn it off after the first few sparks, or put a fire extinguisher on it. Also, am I just imagining things, or is that TiN's voice in the background ?


----------



## Wihglah

Esenel said:


> So at ~350W the Optimus block has a delta of 7K.
> Which test was it?
> Why only 350W?
> Could you do a 400W and a 500W run as well?
> 
> Could you measure it with HWInfo?
> There we would see the Memory Junction temp as well.
> 
> Thanks!




















Most power I could push on this BIOS. Headline temps remained the same. GPU-z reported 447W. It's probably a polling rate discrepancy.

I looped Timespy as requested for about 10 mins.


----------



## Falkentyne

HyperMatrix said:


> $80 bottle of wine almost always tastes better than a $10 bottle of wine. Even if those bottles of wine are identical. People get a better sense of quality when they pay more. Not saying quality isn't there. But I've never seen anything in benchmarks to indicate that the Optimus blocks would be 5-10C better temps than other quality blocks, despite it being claimed by some.


Want to buy some Monster Displayport cables from me? I promise they will let you see 12 bit color from your 10 bit monitor. No lies.


----------



## Chamidorix

HyperMatrix said:


> For anyone who may be curious....this is what EVGA said for the thermal pad thickness of the 3090 Kingpin:
> 
> 2.85mm VRM
> 2.25mm Memory (front of card)
> 2mm Memory (back of card)
> 
> No wonder I have the memory hitting over 70C even without an OC. Those are some super thick pads. And I'm not sure if they're the same quality as the FTW3 ones, which were reportedly just 1.2 W/mK. Is that thickness pretty common for stock coolers? I normally don't replace pads unless I'm switching to a water block and I'm used to either 0.5mm or 1mm thickness. 2.85mm seems very excessive and inefficient.


For eth mining I have removed the backplate and have just been blowing fans directly on the modules, lol.


----------



## Chamidorix

Falkentyne said:


> THESE RAILS AREN'T RESPONDING TO SHUNTS!
> 
> They're reporting power as if they are coming from unshunted rails.
> So the board is somehow getting a 'pre-shunted' amps value from someplace and it isn't from the shunts, otherwise the output rails would report MUCH MUCH lower.
> But clearly the card is getting rail data from somewhere, and it isn't on Hwinfo64. _BUT_ TDP Normalized % can see it. That's why TDP Normalized is so high.
> TDP Normalized is basically the TDP of ANY individual rail, with respect to a "default' limit (100%) and a "max" limit, which can be the main rails (that you can shunt), or ANY of the sub rails that has a limit attached to it.
> 
> This limit is probably stored in the vbios somewhere, but no one knows what it is. But my guess is it's related to the NVVDD/MSVDD or PLL rails.


Thanks for doing this. 

Oh man, so my assumption was wrong, and I guess the driver can/will poll the current reading from the VRM stages... that has to be where it's getting the pull for individual rails... There must be some variation between different cards with different power stages and voltage controllers, but obviously on the Founders Edition it's sufficiently digital.


----------



## Zogge

Wihglah said:


> View attachment 2478063
> 
> View attachment 2478064
> 
> 
> Most power I could push on this BIOS. Headline temps remained the same. GPU-z reported 447W. It's probably a polling rate discrepancy.
> 
> I looped Timespy as requested for about 10 mins.


At 350W my Bykski is sub-10C delta T as well. It goes up to 14C delta T or so at 510-520W.
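Since these posts compare deltas at different wattages, normalizing to delta-T per watt makes the blocks directly comparable. A simple sketch using the figures quoted in this exchange (taking 515 W as the midpoint of Zogge's 510-520 W range):

```python
# Core-to-water thermal resistance (K/W) from the reported numbers.
def r_block(delta_c, watts):
    return delta_c / watts

optimus_350 = r_block(7, 350)     # Wihglah's Optimus run
bykski_350  = r_block(10, 350)    # Zogge's Bykski at the same power
bykski_515  = r_block(14, 515)    # Zogge at 510-520 W (midpoint assumed)

print(round(optimus_350, 3), round(bykski_350, 3), round(bykski_515, 3))
```

On these numbers the blocks are within a few thousandths of a K/W of each other, which supports the "mount quality matters more than brand" point made elsewhere in the thread.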


----------



## HyperMatrix

Chamidorix said:


> For eth mining I have removed the backplate and have just been blowing fans directly on the modules, lol.


RAM blocks work great if you happen to have any lying around: a 30C+ drop on memory temp just from thermal-paste-gluing one to the backplate. I just can't fit it in my new case until I get a block for the card.


----------



## gfunkernaught

J7SC said:


> ...a picture (or video with many successive pictures) is worth a thousand words ?!  ...makes you wonder though why they didn't turn it off after the first few sparks, or put a fire extinguisher on it. Also, am I just imagining things, or is that TiN's voice in the background ?


I think they were purposely blowing it up. You can tell they intentionally ramped up the power even after everything blew. I hope that doesn't happen to my 3090 once I get my block and use the 500W BIOS, lol.
For the backside RAM, have people been cooling it successfully using those small heatsinks?


----------



## gfunkernaught

HyperMatrix said:


> Ram blocks work great if you happen to have any lying around. 30C+ drop on memory temp just by thermal paste gluing it to the back plate. I just can't fit it in my new case until i get a block for the card.


Could I use ram blocks or heat sinks in conjunction with a back plate?


----------



## des2k...

So I clamped the three 12V wires on each 8-pin for my Zotac under the 1000W vBIOS.

Just a quick test, because the extensions are really sketchy; they don't seem to clip, and they move easily / could fall out.
Didn't have this issue with my 1080 Ti FTW3.

*So, idle:*
HWiNFO reports pin1 *23W*, pin2 *42W*
clamp reports pin1 1.83A (*21W*), pin2 2A (*24W*)
_Difference: -2W and -18W; card is over-reporting_

*Load:*
HWiNFO reports pin1 *173W*, pin2 *141W*
clamp reports pin1 10.7A (*128.4W*), pin2 12.5 to 12.9A (*154.8W*)
_Difference: -44.6W and +13.8W; card is over-reporting pin1, under-reporting pin2_

So the power draw is not really balanced. What exactly would the formula be for an accurate HWiNFO reading?


----------



## Jpmboy

HyperMatrix said:


> $80 bottle of wine almost always tastes better than a $10 bottle of wine. Even if those bottles of wine are identical. People get a better sense of quality when they pay more. Not saying quality isn't there. But I've never seen anything in benchmarks to indicate that the Optimus blocks would be 5-10C better temps than other quality blocks, despite it being claimed by some.


10C better? Some YouTuber said that, I'm guessing. Unless the block is mounted poorly or the fitment is obstructed, I've never seen that much spread across many, many dozens of GPU blocks on many dozens of GPUs.
The Optimus block is very well made but expensive, as are all their products. Right now I'm running Bykski (4), EK (3), Optimus (1), Swiftech (1), Koolance (1) on 9 GPUs. The vast majority of variation between block comparisons usually comes down to mount quality... unless the manufacturer screwed up. None have so far this generation, AFAIK.


----------



## motivman

des2k... said:


> So I clamped the three 12V wires on each 8-pin for my Zotac under the 1000W vBIOS.
> 
> Just a quick test, because the extensions are really sketchy; they don't seem to clip, and they move easily / could fall out.
> Didn't have this issue with my 1080 Ti FTW3.
> 
> *So, idle:*
> HWiNFO reports pin1 *23W*, pin2 *42W*
> clamp reports pin1 1.83A (*21W*), pin2 2A (*24W*)
> _Difference: -2W and -18W; card is over-reporting_
> 
> *Load:*
> HWiNFO reports pin1 *173W*, pin2 *141W*
> clamp reports pin1 10.7A (*128.4W*), pin2 12.5 to 12.9A (*154.8W*)
> _Difference: -44.6W and +13.8W; card is over-reporting pin1, under-reporting pin2_
> 
> So the power draw is not really balanced. What exactly would the formula be for an accurate HWiNFO reading?


hmmmm... those load amps are really low for a card under load. What game were you testing with? Normally, in games like CP2077, I get at least 17A; in games like Control at 4K with no DLSS, I can get up to 21A.

But for pin 1, your ratio would be 128.4/173 = 0.74;
pin 2 would be 154.8/141 = 1.10
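Those ratios generalize to a per-connector correction table. A hedged sketch, assuming the clamp readings are the ground truth and the rail sits at exactly 12 V (in practice it sags a little under load):

```python
# Per-connector correction factors: clamp-measured watts / HWiNFO watts.
# Assumption: clamp readings are ground truth at exactly 12 V.
reported = {"pin1": 173.0, "pin2": 141.0}   # HWiNFO under load
clamped  = {"pin1": 128.4, "pin2": 154.8}   # clamp amps * 12 V

ratio = {k: clamped[k] / reported[k] for k in reported}
# pin1 ~0.74 (card over-reports), pin2 ~1.10 (card under-reports)

def corrected(hwinfo_watts, pin):
    """Scale a live HWiNFO reading by the calibration ratio."""
    return hwinfo_watts * ratio[pin]

print(round(ratio["pin1"], 2), round(ratio["pin2"], 2))
```

The calibration only holds near the load point where it was measured; shunt-modded sensors aren't necessarily linear across the whole range.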


----------



## des2k...

motivman said:


> hmmmm... those load amps are really low for a card under load. What game were you testing with? Normally, in games like CP2077, I get at least 17A; in games like Control at 4K with no DLSS, I can get up to 21A.
> 
> But for pin 1, your ratio would be 128.4/173 = 0.74;
> pin 2 would be 154.8/141 = 1.10


Control save point, 4K RTX, DLSS [email protected]; HWiNFO was showing ~380W

*edit:
for cache, mem, core I used the corrected 2x 8-pin power / 3x 8-pin (pin3 copies pin1 × 0.74) for the correction


----------



## HyperMatrix

gfunkernaught said:


> Could I use ram blocks or heat sinks in conjunction with a back plate?


Yes of course. I used thermal paste to kinda glue the ram block to the backplate.



Jpmboy said:


> 10C better? Some u-tuber said that, I'm guessing.


Slight exaggeration on my part. I just remembered the reviews from their own website for their AMD cpu block.


----------



## changboy

Esenel said:


> Could you share link?
> I didn't find anything about the block on his channel.
> 
> Thanks.


This is it; you can see this crazy waterblock (Optimus) at 8:43, and yes it's epic. Look at that backplate too hehehe:


----------



## J7SC

changboy said:


> This is it, you can see this crazy waterblock(optimus) at 8:43 and yes its epic and look also this backplate hehehe :


I watched that video before, but those were clearly prototype blocks, with those giant thermal sheets.... I'm sure Optimus makes great stuff from what I've read, but even if a competing GPU block is 3-4C behind (and available now for shipment, at half the price), that's still a win compared to air. Also, as others have stated, a lot comes down to mounting 'tolerances'. I know from my own early experience with custom GPU blocks way back that a good mount of an average block can beat a so-so mount of a top block, including thermal strips etc.


----------



## changboy

J7SC said:


> ...yeah, that's up there. My Strix OC always seems to peak at 48.x W PCIe power draw no matter what, even at 503 W board power peak. With that in mind, I don't feel compelled to load a 520W bios (never mind the 1000W with protections disabled). Not only am I still on air until w-cooling is settled, the one real difference would be in benches, and I primarily use those just to find the limits below which I should safely settle 'daily' clocks, PL for gaming, productivity etc. Speaking of air, the two TT120mm AIO fans I use for the Strix backplate work very well, but with a constant load I noticed that the angle the fans hit the backplate matters...w/ everything else the same, the delta can be 3 C to 4 C in max VRAM temps just by varying the angles through a 15 degree arc.


Yup, I know the Strix PCIe power draw is low and you don't have to worry about it; on the other hand, the engineers who built the PCB of the FTW3 had better find another way in life than building graphics card PCBs hehehehe.

Don't get me wrong, my card works fine and performs well, but this is something to worry about. If I run my card with the normal BIOS, PCIe won't exceed 78W, and that's a peak level, but when you push it, this thing climbs even higher. That's not the way it's supposed to run, over the spec, but I run mine over a bit; maybe I'll be fine on the waterblock, though I didn't test this before installing my block. 85W on my PCIe is a bit high, but I think it's still safe. Maybe over 120W you could run into problems, but under that it should be fine.


----------



## J7SC

changboy said:


> Yup, I know the Strix PCIe power draw is low and you don't have to worry about it; on the other hand, the engineers who built the PCB of the FTW3 had better find another way in life than building graphics card PCBs hehehehe.
> 
> Don't get me wrong, my card works fine and performs well, but this is something to worry about. If I run my card with the normal BIOS, PCIe won't exceed 78W, and that's a peak level, but when you push it, this thing climbs even higher. That's not the way it's supposed to run, over the spec, but I run mine over a bit; maybe I'll be fine on the waterblock, though I didn't test this before installing my block. 85W on my PCIe is a bit high, but I think it's still safe. Maybe over 120W you could run into problems, but under that it should be fine.


super burn: _...the engineers who built the PCB of the FTW3 had better find another calling in life than building graphics card PCBs hehehehe._

FYI, I JUST saw two FTW3 Ultra pop up at a branch about 20km from here for the equivalent of US$ 1960 and told a friend of mine about it...either way, I'm sure they will be gone in about 2 hrs...or less...much less


----------



## Rangerscott

gfunkernaught said:


> Welcome to the new world order. I thought I was good playing cyberpunk with everything maxed at 4k then I found that central park area that crunched the frame rate down to 29fps. Yeah...



This is with any game.


----------



## Ginola

bmgjet said:


> I'd be checking the block with a delta like that.
> My EK block was terrible for my first test mount. I had to shave down 2 of the standoffs for it to even make contact with the top edge of the die.
> And that was with the screws done up tight enough that removing the block undid the standoffs.
> 
> View attachment 2472132


Think I have a similar problem. Just got my EK Vector block for my 3090 FTW3 today, and temps are higher than when it was air cooled: 80+ on full load while water temps are in the 20s. Though low-load stuff seems cooler? I've removed it and remounted, which made no difference. Doesn't make much sense; I previously used Aquacomputer blocks and had no issues. Loop has a 360 Aqua rad and a 240 Black Ice, DDC pump, 8086K at 5.2GHz (no temp issues).


----------



## des2k...

Well, I guess my EK block (Zotac) is not that awesome for delta.
I still have to re-check the 8-pins with the clamp meter for amps and voltage,
but after some corrections it was over-reporting a lot:

350w-370w: delta 10C
430w-470w: delta 14C
550w+: delta 19C
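For reference, those deltas imply a fairly consistent core-to-water thermal resistance. A quick sketch (the wattages and deltas are the figures above; picking a single wattage per quoted power range is my own simplification):

```python
# Core-to-water thermal resistance implied by the deltas above; the
# single wattage per row is my own pick from each quoted power range.
def thermal_resistance(power_w, delta_c):
    """Block thermal resistance in C per watt."""
    return delta_c / power_w

readings = [(360, 10), (450, 14), (550, 19)]  # (watts, delta C)
for watts, delta in readings:
    print(f"{watts} W -> {thermal_resistance(watts, delta):.3f} C/W")
```

The three points land around 0.028-0.035 C/W, so the delta scaling roughly linearly with power is what you'd expect from a healthy mount rather than a sign of a bad one.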


----------



## Jpmboy

HyperMatrix said:


> Yes of course. I used thermal paste to kinda glue the ram block to the backplate.
> Slight exaggeration on my part. I just remembered the reviews from their own website for their AMD cpu block.
> 
> View attachment 2478077


Ah... the CPU block... well, as for their Foundation block, at least the one I have does a better job of cooling this 10980XE than the EK (2 versions) and even their own V2 sample I have (the V2 is amazing, but did not fit this IHS as well as the Foundation, and I tried two different cold plates on it). Ya just never know. And this was with three mounts each using TGK, Kingpin and Tim-Mate TIM2 (lol - and a freakin digital torque screwdriver!)
The Optimus FTW3 GPU block backplate comes with this humungous thermal "bed" on it, and it really needs to be tightened down very slowly to allow this thermal mattress to conform to the PCB. Largest thermal "thing" I've ever seen come from a manufacturer! Bottom line is, it is a beautiful and very heavy block... keep that in mind. Most of my GPUs are vertical mount so the weight is no problem. It may need support mounted horizontally.



J7SC said:


> super burn: _...the engineers who built the PCB of the FTW3 had better find another calling in life than building graphics card PCBs hehehehe._
> 
> FYI, I JUST saw two FTW3 Ultra pop up at a branch about 20km from here for the equivalent of US$ 1960 and told a friend of mine about it...either way, *I'm sure they will be gone in about 2 hrs*...or less...much less


No doubt. If these things burning up is as common as it appears in this thread, talk about lotteries.


----------



## gfunkernaught

Rangerscott said:


> This is with any game.


Not true. Not every game will run in the 50-60fps range then tank to the 20s. This is a ray tracing thing. The central park area in Cyberpunk has a lot of trees and thus a lot of shadows and global illumination. Could just be poor optimization. But this massive gap between min and max framerates is something I've noticed is common in games that use ray tracing.


----------



## Jpmboy

changboy said:


> Yup, I know the Strix PCIe power draw is low and you don't have to worry about it. On the other side, the engineers who built the PCB of the FTW3 had better find another calling in life than building graphics card PCBs hehehehe.
> 
> Don't get me wrong, my card works fine and performs well, but this is something to worry about. If I run my card with the normal BIOS, PCIe won't exceed 78w, and that's a peak level, but when you push on it this thing increases even more. It's not the way it's supposed to run, over the spec, but I run mine over a bit; maybe I will be fine on the waterblock, and I didn't check before installing my block. 85w on my PCIe is *a bit high but I think it's still safe. Maybe over 120w you can run into problems, but under that it should be fine.*


It's pretty easy to get an idea of the draw... just gotta shoot the ATX connector wires with an IR thermometer. There was a time when extreme benching (especially "franken-gpus") would actually melt the ATX power connectors. 200-250W through the slot was more than capable of overheating the ATX pins of the day (and power-plane tracing).
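For a rough sanity check on slot draw: as I recall the PCIe CEM spec, a x16 card gets about 5.5 A (66 W) on the slot's +12 V pins plus roughly 10 W on +3.3 V. A minimal sketch, assuming worst case that the reported slot wattage is all on the 12 V rail:

```python
# Back-of-envelope slot current check: the PCIe CEM spec budgets a x16
# card about 5.5 A on the +12V slot pins (66 W), plus ~10 W on +3.3 V.
# Worst-case assumption: the reported slot wattage is all on the 12 V rail.
SLOT_12V_LIMIT_A = 5.5

def slot_current(watts, volts=12.0):
    """Current implied by a given slot power draw."""
    return watts / volts

for watts in (66, 85, 120):
    amps = slot_current(watts)
    status = "over spec" if amps > SLOT_12V_LIMIT_A else "within spec"
    print(f"{watts} W -> {amps:.1f} A ({status})")
```

So the 85w being discussed above is already a bit over the spec current, and 120w would be nearly double it; whether the slot's pins and the board's power planes tolerate that is exactly the lottery being described.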


----------



## changboy

Jpmboy said:


> It's pretty easy to get an idea of the draw... just gotta shoot the ATX connector wires with an IR thermometer. There was a time when extreme benching (especially "franken-gpus") would actually melt the ATX power connectors. 200-250W through the slot was more than capable of overheating the ATX pins of the day (and power-plane tracing).


So do you mean the PCIe slot can very easily handle 90w-100w? It's my bad English, sometimes I'm not sure about what I read lol.


----------



## motivman

des2k... said:


> Well, I guess my EK block (Zotac) is not that awesome for delta.
> I still have to re-check the 8-pins with the clamp meter for amps and voltage,
> but after some corrections it was over-reporting a lot:
> 
> 350w-370w: delta 10C
> 430w-470w: delta 14C
> 550w+: delta 19C


Roughly the same with my PNY and EK block... seems normal IMO. Unless you use liquid metal, I don't think you can get the delta lower.


----------



## changboy

Who wants to lap their Kingpin hehehe:


----------



## Rangerscott

gfunkernaught said:


> Not true. Not every game will run in the 50-60fps range then tank to the 20s. This is a ray tracing thing. The central park area in Cyberpunk has a lot of trees and thus a lot of shadows and global illumination. Could just be poor optimization. But this massive gap between min and max framerates is something I've noticed is common in games that use ray tracing.


I'm talking about my 3090. It won't go out of idle clock speed and it's pulling 385-420 watts at idle.


----------



## HyperMatrix

Jpmboy said:


> Ah... the CPU block... well, as far as their Foundation block, at least the one I have does a better job at cooling this 10980XE than the EK (2 versions) and even their own V2 sample I have (the V2 is amazing, but did not fit this IHS as well as the foundation and I tried two different cold plates on it). Ya just never know. And this was with three mounts each using TGK, Kingpin and Tim-Mate TIM2 (lol - and a freakin digital torque screwdriver!)
> The optimus FTW3 GPU block backplate comes with this humungous thermal "bed" on it and it really needs to be tightened down very slowly to allow for this thermal mattress to conform to the pcb. Largest thermal "thing" I've ever seen come from a manufacturer! Bottom line is, it is a beautiful and very heavy block... keep that in mind. MOst of my GPUs are vertical mount so the weight is no problem. It may need support mounted horizontally.
> 
> 
> no doubt. If these things are burning up is as common as it appears on this thread, talk about lotteries.


5-6C is too much, even vs. the EK, but especially against the Heatkiller. I'm not saying their products are bad, but some of the claims others make are a bit exaggerated - and they're happy to republish them on their website, in essence saying those are the performance metrics they're delivering.

I can imagine creating those conditions in a lab environment, where you pick the flow rate, water temperature, and CPU power consumption to favor your block. But in general I think some of the claims are a bit much.

Having said that...I am planning to buy their GPU block for my Kingpin card. Though not because I think it’s going to be 5-6C better than other blocks.


----------



## J7SC

Rangerscott said:


> I'm talking about my 3090. It won't go out of idle clock speed and it's pulling 385-420 watts at idle.


...yeah, I saw your separate thread on that...needless to add that 'there's something very wrong', either on the hardware or software side. What with pulling 385-420 watts at idle clock speeds, it might be time for a complete fresh install and/or RMA. I can tell you that in FS2020 / 4K Ultra, my 3090, even w/o oc, is around +- 60 fps at low level / metro; and when it slows down from that, it usually is the 'limited by main thread' instead of GPU, per FS2020 / dev mode / options / fps. Also, I play Cyberpunk 2077 at 4K / highest textures / & RTX either on 'ultra' or 'psycho', w/ DLSS at quality, and certainly at a lot more than 20 fps...

Given the supply situation of Ampere and the 3090 variant, this must be frustrating for you. But maybe it is time to RMA your card and get what you paid for...


----------



## ViRuS2k

Does anyone know what size thermal pads I need for the Bykski waterblock for the MSI Gaming X Trio 3090, please?
Someone said the waterblock does not come with the correct size pads? :/ I can't remember who said it lol


----------



## DOOOLY

ViRuS2k said:


> Does anyone know what size thermal pads I need for the Bykski waterblock for the MSI Gaming X Trio 3090, please?
> Someone said the waterblock does not come with the correct size pads? :/ I can't remember who said it lol


Not sure, but with an EK block they are 1mm pads. Maybe email them if the info is not on their product page.


----------



## gfunkernaught

I noticed that the 3090 is actually really good with the Low Latency setting in the NV Control Panel. When I had my 2080 Ti, it didn't work too well, except in Cold War. I play Titanfall 2 a lot and need the mouse latency as low as I can get it without introducing tearing, so I would use RTSS scanline sync -30 and maintain 60fps. For whatever reason, scanline sync still tears and stutters with the 3090. Fast sync stutters as well. Not sure why. So I set the Low Latency setting to Ultra and enabled vsync in-game, and sure enough: no stuttering, no tearing, and very, very low mouse latency, only slightly slower than with vsync off. Thought I'd share.


----------



## dante`afk

slap the 520w or 1000w bios on the evga card, let it pull 550w for 30 minutes and then show water/delta temps.

if they really believed their claim, they'd show such results.


----------



## geriatricpollywog

changboy said:


> who want lap is kingpin hehehe :


I've come to respect Luumi a lot. He explains in detail how to overclock and achieve his scores. SlinkyPC and Escapee brag about their scores in this very thread, are not transparent about how they are achieved, yet are somehow less successful than Luumi.


----------



## Zogge

ViRuS2k said:


> Does anyone know what size thermal pads I need for the Bykski waterblock for the MSI Gaming X Trio 3090, please?
> Someone said the waterblock does not come with the correct size pads? :/ I can't remember who said it lol


On the Strix block, Bykski uses 1.8mm.


----------



## rix2

Is this shunt mod safe for the motherboard? He says the motherboard PCIe is shunted...
And is there any difference when I add 10 ohm?


----------



## Wihglah

dante`afk said:


> slap the 520w or 1000w bios on the evga card, let it pull 550w for 30 minutes and then show water/delta temps.
> 
> if they really believed their claim, they'd show such results.


I have the optimus and I can tell you there is a 7C delta between the water temp and GPU temp on the 500W BIOS with Timespy looping.


----------



## Falkentyne

rix2 said:


> Is this shunt mode safe to motherboard? He says that is motherboard pci-e is shunted...
> And is there any difference when I add 10ohm?


Framechasers is a complete doddering power hungry mentally ill idiot. Like there's something REALLY weird about that guy.
However, he is right that shunting the PCIe slot shunt on an EVGA FTW3 will avoid the early power limit throttle caused by the messed-up EVGA power balancing. But you do NOT use hot glue or liquid electrical tape to shunt the freaking 10 mOhm shunt (better to put a 15 mOhm on it anyway). Just solder the freaking thing. I was once afraid to solder, but it's a lot easier if you are NOT using a crappy 'starter' 25W soldering iron. A TS100 iron is very nice and makes soldering a shunt easy. And 3M Kapton tape is good for beginners, so they can protect the PCB from accidents. Then you just need some nice Kester solder wire, a brass tip cleaner, and flux. Flux is critical in helping the solder flow to the edges of the shunt for stacking. The main hard part is heating up the edges of the original shunt so solder can flow to it; you get solder on both edges, then flux again, apply the new shunt on top of the solder joints you created, then bond them together.

Just don't homeless bum it with hot glue or electrical tape. Please.

BTW, I used to like MG 842AR for shorting shunts instead of shunt stacking, but 842AR paint can degrade if the contact isn't perfect and draws a lot of current, and it is a HUGE PAIN to clean off the shunt too. It's a lot easier to desolder a shunt and clean it off using flux + wick if necessary, and much, much faster than scraping old paint off a shunt.
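For anyone wondering why stacking works at all: the two shunts end up in parallel, and the card's power controller still divides the sensed voltage drop by the original resistance, so it under-reports current and power. A minimal sketch with the 10 mOhm stock / 15 mOhm stacked values mentioned above:

```python
# Shunt stacking in a nutshell: the stacked shunt sits in parallel with
# the original, but the controller still divides the sensed voltage by
# the ORIGINAL resistance, so it under-reports current (and power).
def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

ORIGINAL_MOHM = 10.0  # stock shunt value mentioned above
STACKED_MOHM = 15.0   # value suggested for stacking

effective = parallel(ORIGINAL_MOHM, STACKED_MOHM)  # 6.0 mOhm
scale = ORIGINAL_MOHM / effective                  # real W per reported W
print(f"effective: {effective:.1f} mOhm, real draw = reported x {scale:.2f}")
```

That ~1.67x factor is why a 15 mOhm stack is a gentler mod than shorting the shunt outright: the limit moves, but the card still meters and throttles.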


----------



## rix2

Falkentyne said:


> Framechasers is a complete doddering power hungry mentally ill idiot. Like there's something REALLY weird about that guy.
> However, he is right that shunting the PCIe slot shunt on an EVGA FTW3 will avoid the early power limit throttle caused by the messed-up EVGA power balancing. But you do NOT use hot glue or liquid electrical tape to shunt the freaking 10 mOhm shunt (better to put a 15 mOhm on it anyway). Just solder the freaking thing. I was once afraid to solder, but it's a lot easier if you are NOT using a crappy 'starter' 25W soldering iron. A TS100 iron is very nice and makes soldering a shunt easy. And 3M Kapton tape is good for beginners, so they can protect the PCB from accidents. Then you just need some nice Kester solder wire, a brass tip cleaner, and flux. Flux is critical in helping the solder flow to the edges of the shunt for stacking. The main hard part is heating up the edges of the original shunt so solder can flow to it; you get solder on both edges, then flux again, apply the new shunt on top of the solder joints you created, then bond them together.
> 
> Just don't homeless bum it with hot glue or electrical tape. Please.
> 
> BTW, I used to like MG 842AR for shorting shunts instead of shunt stacking, but 842AR paint can degrade if the contact isn't perfect and draws a lot of current, and it is a HUGE PAIN to clean off the shunt too. It's a lot easier to desolder a shunt and clean it off using flux + wick if necessary, and much, much faster than scraping old paint off a shunt.


Why not do it with hot glue?


----------



## KedarWolf

Here's a link with the latest NVFlash and the FTW3 500w, Kingpin 520w, and Kingpin 1000w BIOSes.

I strongly recommend only running them on 3x8-pin power connector cards like the ASUS Strix OC or EVGA FTW3 Ultra 3090, and with a waterblock and backplate.

An AIO cooler is not recommended, as only the GPU core itself is water-cooled; the memory and VRMs are not.

I'm pre-ordering my ASUS Strix 3090 OC from a local store March 1st. Hope it doesn't take three months to get like my 5950X CPU nearly did.






3090BIOS.zip







drive.google.com


----------



## toxicnerve

KedarWolf said:


> Here's a link with the latest NVFlash and the FTW3 500w, Kingpin 520w, and Kingpin 1000w BIOSes.
> 
> I strongly recommend only running them on 3x8-pin power connector cards like the ASUS Strix OC or EVGA FTW3 Ultra 3090, and with a waterblock and backplate.
> 
> An AIO cooler is not recommended, as only the GPU core itself is water-cooled; the memory and VRMs are not.
> 
> I'm pre-ordering my ASUS Strix 3090 OC from a local store March 1st. Hope it doesn't take three months to get like my 5950X CPU nearly did.
> 
> 
> 
> 
> 
> 
> 3090BIOS.zip
> 
> 
> 
> 
> 
> 
> 
> drive.google.com


Are you saying the 500W / 520W are NOT safe to use on air? 

And does one of them have an issue with triple fan air coolers? I thought I saw that somewhere.


----------



## WillP

Not really an overclocking post, more of a 3090 owner post: anyone else get a smug satisfaction when Riva reports VRAM use going above 10 or 11GB or is it just me?


----------



## Nizzen

toxicnerve said:


> Are you saying the 500W / 520W are NOT safe to use on air?
> 
> And does one of them have an issue with triple fan air coolers? I thought I saw that somewhere.


The Strix 3090 has a 480w bios, so 500 and 520w isn't unsafe. It gets hot, but it depends on the ambient air the GPU is getting. If the GPU gets too hot, it throttles down more bins.

I benchmarked a shunt-modded 3090 Strix with cold air. Literally "unlimited" power.
No problem.


----------



## rix2

KedarWolf said:


> Here's a link with the latest NVFlash and the FTW3 500w, Kingpin 520w, and Kingpin 1000w BIOSes.
> 
> I strongly recommend only running them on 3x8-pin power connector cards like the ASUS Strix OC or EVGA FTW3 Ultra 3090, and with a waterblock and backplate.
> 
> An AIO cooler is not recommended, as only the GPU core itself is water-cooled; the memory and VRMs are not.
> 
> I'm pre-ordering my ASUS Strix 3090 OC from a local store March 1st. Hope it doesn't take three months to get like my 5950X CPU nearly did.
> 
> 
> 
> 
> 
> 
> 3090BIOS.zip
> 
> 
> 
> 
> 
> 
> 
> drive.google.com


EVGA FTW3 Ultra with the 500w bios, no problems at all on air; anyway it doesn't go above 430w.


----------



## mardon

My shunts arrived today. I got a selection to give me some options. However, some are 1W and some are 2W. Does it make a difference? I was hoping to go with 15 mOhm stacked.
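On the 1W vs 2W question, a rough I²R estimate suggests either rating has headroom at sane currents. A sketch assuming a 15 mOhm shunt stacked on a 10 mOhm original; the 12.5 A figure (roughly 150 W through one 8-pin at 12 V) is my own illustrative number, not from the thread:

```python
# Rough I^2*R check on shunt dissipation for the 1 W vs 2 W question.
# Assumes a 15 mOhm shunt stacked on the 10 mOhm original; the 12.5 A
# figure (~150 W through one 8-pin at 12 V) is an illustrative guess.
def branch_currents(i_total, r1, r2):
    """Current divider: each branch's share is set by the OTHER resistor."""
    i1 = i_total * r2 / (r1 + r2)
    i2 = i_total * r1 / (r1 + r2)
    return i1, i2

i_orig, i_stack = branch_currents(12.5, 0.010, 0.015)
p_orig = i_orig**2 * 0.010    # dissipation in the stock 10 mOhm shunt
p_stack = i_stack**2 * 0.015  # dissipation in the stacked 15 mOhm shunt
print(f"stock: {p_orig:.2f} W, stacked: {p_stack:.2f} W")
```

Both parts stay well under 1 W at that current, so a 1W rating should already have margin; the 2W parts just buy extra headroom if a rail ends up carrying much more current than this example assumes.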


----------



## KedarWolf

toxicnerve said:


> Are you saying the 500W / 520W are NOT safe to use on air?
> 
> And does one of them have an issue with triple fan air coolers? I thought I saw that somewhere.


I don't know if any of them have issues with triple fan cards; I haven't gotten my Strix yet to test them.

You can likely run the 500w and 520w on air, but I'd like to see someone with an FTW3 Ultra post their VRM temps etc. if they try them. I think you can see those temps on those cards, and I've heard the VRMs and memory run hot even on the stock BIOS, hence the water-cooling recommendation.


----------



## J7SC

toxicnerve said:


> Are you saying the 500W / 520W are NOT safe to use on air?
> 
> And does one of them have an issue with triple fan air coolers? I thought I saw that somewhere.


My Strix 3090 OC is still air-cooled for now, and I have seen it go beyond 503 W on stock bios, with GPU temps in the mid-to-high 60s C. I was more concerned about the backplate, which due to double-sided VRAM runs 6 C - 8 C hotter out of the box unless you mount some extra fans. VRM temps are quite good though, btw. Still, 500W is a lot for air-cooling, even for a really good solution like the Strix seems to have. Best to water-cool the GPU and, if possible, add additional cooling for the backplate, i.e. with RAM coolers as several folks here have shown.


----------



## changboy

I tried all the BIOSes with my FTW3 Ultra, and on the 500w and 520w BIOSes as well as the original bios the card is still limited in board power draw.
The only BIOSes that unlock that power are the ones for 2x8-pin cards, which I don't recommend because the 3x8-pins are not well balanced, and the 1000w bios, which works perfectly; the 3x8-pins are well balanced. PCIe power draw is a bit high but it's fine.
With the 520w bios in Port Royal the card draws only 425w, and on the 1000w the card draws 525w in the same benchmark.

I don't know why some cards fail, but maybe because they were using a 2x8-pin bios and the power draw from one of the 8-pins was very high. When I tried this on my card I had a bad feeling; the card generated a lot of heat and the power was badly balanced over the 3x8-pins. Maybe not really good to force it that way.
Actually, to me the 1000w is the only way to go for the FTW3 Ultra if you want to use the power of this card, and you can set the slider to a safe place to avoid surprises.


----------



## KedarWolf

J7SC said:


> My Strix 3090 OC is still air-cooled for now, and I have seen it go beyond 503 W on stock bios, with GPU temps in the mid-to-high 60s C. I was more concerned about the backplate, which due to double-sided VRAM runs 6 C - 8 C hotter out of the box unless you mount some extra fans. VRM temps are quite good though, btw. Still, 500W is a lot for air-cooling, even for a really good solution like the Strix seems to have. Best to water-cool the GPU and, if possible, add additional cooling for the backplate, i.e. with RAM coolers as several folks here have shown.


Maybe just replace the thermal pads with Fujipoly 17 W/mK pads and use decent thermal paste on the core, like MasterGel Maker or something?

I'm going water eventually with those pads, but active water-cooling backplates are impossible to find; no one has them in stock.


----------



## HyperMatrix

WillP said:


> Not really an overclocking post, more of a 3090 owner post: anyone else get a smug satisfaction when Riva reports VRAM use going above 10 or 11GB or is it just me?


Honestly I wish I could remove half the memory modules for gaming. There’s no benefit to having it. It creates a lot of heat. And eats away from your power limit. Literally 0 benefit in 99% of gaming.


----------



## KedarWolf

HyperMatrix said:


> Honestly I wish I could remove half the memory modules for gaming. There’s no benefit to having it. It creates a lot of heat. And eats away from your power limit. Literally 0 benefit in 99% of gaming.


Maybe get a 3080 that you can flash a 3090 BIOS on it? I've read there are a few models you can.


----------



## J7SC

KedarWolf said:


> Maybe just replace the thermal pads with Fujipoly 17 wm/k pads and use decent thermal paste on the core like Master Gel Maker or something?
> 
> I'm going water eventually with those pads, but active water cooling backplates are impossible to find, no-one has them in stock.


...the MSI Suprim X seems to have a decent backplate solution, with integrated flattened copper heat pipes, though overall I much prefer the Strix. One thing I don't quite get is why several vendors' top PCBs and speed bins this time around with Ampere don't include factory full-waterblock options, as was the case with 2080 Ti models. They do have full waterblock options for 3090s (Asus, Aorus, others), but not for the top bins, afaik. Maybe they'll come out later?

...anyway, below in the spoiler are the temps for my pair of 2080 Ti XTR Waterforce WB (factory full waterblock) for a Port Royal run followed directly by the DLSS feature test, which includes but is not limited to Port Royal, so triple runs all told. The system is dual loop, and the GPU loop has 2x D5s, a total of 1080/60 rad space and GentleTyphoons, all cobbled together from prior builds. The point is that even factory blocks (the OEM for Aorus might be Bykski, with regular thermal pads, not sure) can do a great job as long as the rest of the loop is well-sized. The stock-bios GPUs' combined peak power for these runs was at *760 W*. I never had any trouble or deterioration with those factory OEM-installed blocks since I got them back in December '18.

All that said, the 'toasty' GDDR6X in a double-sided config is something to watch out for and deal with on 3090s. I do use the 3090 for productivity as well - 16 GB would have been enough, but a bit of future-proofing doesn't hurt...



Spoiler


----------



## rix2

changboy said:


> I tried all the BIOSes with my FTW3 Ultra, and on the 500w and 520w BIOSes as well as the original bios the card is still limited in board power draw.
> The only BIOSes that unlock that power are the ones for 2x8-pin cards, which I don't recommend because the 3x8-pins are not well balanced, and the 1000w bios, which works perfectly; the 3x8-pins are well balanced. PCIe power draw is a bit high but it's fine.
> With the 520w bios in Port Royal the card draws only 425w, and on the 1000w the card draws 525w in the same benchmark.
> 
> I don't know why some cards fail, but maybe because they were using a 2x8-pin bios and the power draw from one of the 8-pins was very high. When I tried this on my card I had a bad feeling; the card generated a lot of heat and the power was badly balanced over the 3x8-pins. Maybe not really good to force it that way.
> Actually, to me the 1000w is the only way to go for the FTW3 Ultra if you want to use the power of this card, and you can set the slider to a safe place to avoid surprises.


Only the 1000w works? The 520w also doesn't work? These are my card's results with the window open, -13 outside: I scored 14,021 in Port Royal.
And before that I had a 3090 Ventus, water-cooled, 350w bios: 13,890 in PR.


----------



## WayWayUp

Awesome, got my Kingpin notification.
Will need to test it of course... I have a weird dilemma in that my 3090 FTW3 already scores around 15.6k Port Royal at ambient temps.

Will probs sell the Kingpin unless it's golden.
Already troubled myself with an Optimus block on the FTW3.

Either way I will sell one of them, but I would want a premium for the FTW3 since it has a sweet block on it and I went through the trouble of shunting it already. Would be much easier selling the Kingpin, I reckon.


----------



## changboy

rix2 said:


> Only the 1000w works? The 520w also doesn't work? These are my card's results with the window open, -13 outside: I scored 14,021 in Port Royal.
> And before that I had a 3090 Ventus, water-cooled, 350w bios: 13,890 in PR.


Yes, if you put on the 500w or the 520w you will see a little more perf, but nothing compared to the 1000w bios.

In Port Royal with the 500w or 520w bios, board power draw is around 420w, and with the 1000w bios board power draw is 518w! You get at least +600 in Port Royal.
In RTX games you can gain 7-8% more perf.

That means with the 500w or 520w the FTW3 is still limiting board power, and I don't know why; they want to protect the card from drawing more power. The 1000w bios unlocks everything, and on my card I saw 688w power draw, but I don't use it like that: I put the slider at 52% so the card stops at around 525w, and that's enough for all games and you feel it in game. But the GPU runs a bit hotter with my EK block, from 43c to 48c.

Not all games will reach 525w, but you have it if they need it. So if you score 14,021 in Port Royal, you should score at least 14,650 with the 1000w bios; just be careful using it and all should be fine... this is what I think.
Me, I score 15,074 with the 1000w bios in Port Royal, and with my game settings I score around 14,800.
With the 500w bios my card doesn't boost to 2100mhz when I game, and with the 1000w it always stays at 2100-2115mhz. Sometimes higher.


----------



## changboy

EK has the new waterblock for the Founders Edition... not cheap.






EK-Quantum Vector FE RTX 3090 D-RGB - Special Edition






www.ekwb.com


----------



## Wihglah

changboy said:


> EK has the new waterblock for the Founders Edition... not cheap.
> 
> 
> 
> 
> 
> 
> EK-Quantum Vector FE RTX 3090 D-RGB - Special Edition
> 
> 
> 
> 
> 
> 
> www.ekwb.com


Wow - that's 90% of the price of an Optimus.


----------



## J7SC

WayWayUp said:


> awesome got my kingpin notification
> Will need to test it of course.. I have a weird dilemma in that my 3090ftw3 already scores around 15.6k port royal at ambient temps
> 
> Will probs sell the kingpin unless its golden
> Already troubled myself with an optimus block on the ftw3
> 
> either way i will sell one of them but i would want a premium for the ftw3 since it has a sweet block on it and i went through the trouble of shunting it already. would be much easier selling the kingpin i reckon


...I would keep the Kingpin (if at all competitive re. chip quality) and sell the ftw3 / Optimus block, but that's just me...either way, 'grats on the Kingpin . I know one other person who got his Kingpin notification today and is hoping for delivery this Friday.


----------



## WayWayUp

Appreciate it, I've been waiting a long time for the Kingpin. Would you sell the FTW3 with the Optimus block on it, or with the original cooler?
I've had trouble selling waterblocked cards in the past, but at the same time I spent a lot on the block and it only fits the FTW3.


----------



## J7SC

WayWayUp said:


> appreciate it I've been waiting a long time for the kingpin. Would you sell the ftw3 with optimus block on it or would you sell it with original cooler?
> I've had trouble selling waterblocked cards in the past but at the same time i spent a lot on the block and it only works for the ftw3


...w/ 3090s, folks might actually appreciate that it was water-cooled. You also have the 3rd option of selling the Optimus separately...still, I would first try to sell it as a package, stating the reason: 'my Kingpin arrived...' you should be able to get really good dollars for the ftw3 and Optimus package.


----------



## changboy

Maybe I will get my notification for the Kingpin soon too; I also don't know what to do when that happens. I could also just not answer the notification.


----------



## KedarWolf

Oh, EKWB is coming out with an active backplate soon. They have the Strix OC 3090 block and it might take a few months for me to get my pre-order, so here's hoping.


----------



## geriatricpollywog

I upgraded my Kingpin AIO fans to Phanteks PH-F120MP. The core went from 53C to 51C in Quake II RTX with fans at 100%, but that included cleaning the dust out of the fins. I don't think they cool any better, but they are much, much quieter. My comfort threshold for the stock finger-cutters was 75%; the new fans are quiet at 100%.


----------



## HyperMatrix

KedarWolf said:


> Maybe get a 3080 that you can flash a 3090 BIOS on it? I've read there are a few models you can.


If you can give me a bios that somehow magically enables the additional 20.5% cores that are available on the 3090 and provides the additional memory bandwidth too, I'll do it.



WayWayUp said:


> awesome got my kingpin notification
> Will need to test it of course.. I have a weird dilemma in that my 3090ftw3 already scores around 15.6k port royal at ambient temps
> 
> Will probs sell the kingpin unless its golden
> Already troubled myself with an optimus block on the ftw3
> 
> either way i will sell one of them but i would want a premium for the ftw3 since it has a sweet block on it and i went through the trouble of shunting it already. would be much easier selling the kingpin i reckon


Remember to swap out the fans on the radiator when testing the kingpin. The stock fans are absolute trash. Secondly....silicon lottery is still king. My first Kingpin card was one of those 2130-2160MHz MAX under full water block cards regardless of how much voltage you pushed. So test very carefully. If it can match your FTW3, then keep it. Because then you'll get to have a card you can tweak even further, and still maintain your warranty. But that's not guaranteed.

I got lucky with my second Kingpin. I'm able to do a run at 2190MHz through port royal almost without ever hitting the power limit (a few slim bars and that's it). Very little leakage. Although still requires a lot of voltage. So I decided to keep this one. But my first one....yeah....no thank you. The person I sold it to has now sold it yet again and the new owner apparently messaged him today and was very angry about the poor performance of the card. Lol. A good FTW3 with a good block is superior to a bad Kingpin with no block.  The only exception again being lack of warranty due to shunting your FTW3. So more peace of mind with the Kingpin.

Also don't forget to get some massive airflow or a ram block over the backplate of the Kingpin. Keep an eye on the memory temperatures through EVGA Precision's ICX sensor page. A lot of my instability/crashing turned out to be from the memory getting toasty. This was never an issue with the FTW3 Hybrid card which kept memory temperatures under 70C even with an OC, while the Kingpin gets to 80C+.


----------



## geriatricpollywog

HyperMatrix said:


> If you can give me a bios that somehow magically enables the additional 20.5% cores that are available on the 3090 and provides the additional memory bandwidth too, I'll do it.
> 
> 
> 
> Remember to swap out the fans on the radiator when testing the kingpin. The stock fans are absolute trash. Secondly....silicon lottery is still king. My first Kingpin card was one of those 2130-2160MHz MAX under full water block cards regardless of how much voltage you pushed. So test very carefully. If it can match your FTW3, then keep it. Because then you'll get to have a card you can tweak even further, and still maintain your warranty. But that's not guaranteed.
> 
> I got lucky with my second Kingpin. I'm able to do a run at 2190MHz through port royal almost without ever hitting the power limit (a few slim bars and that's it). Very little leakage. Although still requires a lot of voltage. So I decided to keep this one. But my first one....yeah....no thank you. The person I sold it to has now sold it yet again and the new owner apparently messaged him today and was very angry about the poor performance of the card. Lol. A good FTW3 with a good block is superior to a bad Kingpin with no block.  The only exception again being lack of warranty due to shunting your FTW3. So more peace of mind with the Kingpin.
> 
> Also don't forget to get some massive airflow or a ram block over the backplate of the Kingpin. Keep an eye on the memory temperatures through EVGA Precision's ICX sensor page. A lot of my instability/crashing turned out to be from the memory getting toasty. This was never an issue with the FTW3 Hybrid card which kept memory temperatures under 70C even with an OC, while the Kingpin gets to 80C+.


How did you buy a second? I got my auto-notify today and it wouldn’t let me because I already have one.


----------



## J7SC

KedarWolf said:


> Oh, EKWB is coming out with an active backplate soon. They have the Strix OC 3090 block and it might take a few months for me to get my pre-order, so here's hoping.


...maybe I'll get the Phanteks Strix block (comes w/ backplate) and just order the EK active backplate by itself if experimenting with the Phanteks backplate w/ active cooling fails...per pic below, there isn't much room between an elevated mobo piece and the Strix backplate, especially re. the lowest VRAM chip on the backside closest to the PCIe slot











from techpowerup


----------



## Xeq54

I just finished attaching an EK Supremacy CPU block to an EK backplate. Looks funny, but my GPU is vertical, so you can't even see it. Had to lap the block to flatten it.

Memory junction temp went from 103°C stock to 70°C while eth mining.


----------



## HyperMatrix

0451 said:


> How did you buy a second? I got my auto-notify today and it wouldn’t let me because I already have one.


You can place the order to another address. But in my case I traded Kingpins with a friend; he was planning to sell his anyway and keep his awesome FTW3.


----------



## Exilon

Xeq54 said:


> I just finished attaching an EK Supremacy CPU block to an EK backplate. Looks funny, but my GPU is vertical, so you cannot even see it. Had to lap the block to flatten it.
> 
> Memory junction temp went from 103°C stock to 70°C while eth mining.
> 
> View attachment 2478206


That's what I'm getting with an _uncooled_ backplate after replacing the front pads with 1mm Fujipoly Ultra Extreme and the backside pads with 1mm Arctic thermal pads and copper shims.








You should be able to do a lot better with an actively cooled backplate. My guess is the pads under the backplate are severely bottlenecking heat transfer and/or your PCB temperature is too high due to low quality frontside pads. The memory can transfer significant amounts of heat through the PCB so cooling it helps a lot.


----------



## gfunkernaught

Has anyone here tried liquid metal with their water blocks? What are your temps like?


----------



## changboy

gfunkernaught said:


> Has anyone here tried liquid metal with their water blocks? What are your temps like?


Me, I just use liquid metal. I have maybe 25 tubes of paste but I don't use them, so I can't tell you temps with paste vs. liquid metal, but liquid metal is always better. Some are scared to use it; me, I've never had a problem with it.
But if you have zero experience with it, you'd better watch many videos before using it.
I think most 3090 users use it, at least with a waterblock.

Btw, it's really hard to compare temps from one user to another; everyone has a different system, pump, rads, fans, etc.


----------



## gfunkernaught

changboy said:


> Me, I just use liquid metal. I have maybe 25 tubes of paste but I don't use them, so I can't tell you temps with paste vs. liquid metal, but liquid metal is always better. Some are scared to use it; me, I've never had a problem with it.
> But if you have zero experience with it, you'd better watch many videos before using it.
> I think most 3090 users use it, at least with a waterblock.


I've used it before on my cpu. I was just wondering what temps liquid metal yields with water blocks.


----------



## geriatricpollywog

HyperMatrix said:


> You can place the order to another address. But in my case I traded my kingpin with a friend who also had a kingpin as he was planning to sell his kingpin and keep his awesome FTW3.


My friend wants a Kingpin, so I gave him my EVGA login and forwarded the auto-notify purchase email. It wouldn’t let him order using his credit card and address. Same message.


----------



## mardon

Ek or heatkiller for the reference 3090? Both are a similar price.


----------



## J7SC

mardon said:


> Ek or heatkiller for the reference 3090? Both are a similar price.


I would go w/ Watercool's Heatkiller, though not a super-strong preference. I have both Heatkiller and EK w-cooling products and the Heatkiller tends to perform a bit better and also seems to be better quality, IMO.


----------



## mardon

J7SC said:


> I would go w/ Watercool's Heatkiller, though not a super-strong preference. I have both Heatkiller and EK w-cooling products and the Heatkiller tends to perform a bit better and also seems to be better quality, IMO.


That's what I thought. I can't find any videos or definitive reviews on the 3090 version this time around.


----------



## gfunkernaught

Is the XSPC G1/4" Inline 10k Sensor any good? Should I look for a different one?


----------



## KedarWolf

mardon said:


> Ek or heatkiller for the reference 3090? Both are a similar price.


EK is releasing an active backplate soon, and 3090s have dual-sided RAM with chips on the back of the card; an active backplate really helps.


----------



## gfunkernaught

KedarWolf said:


> EK is releasing an active backplate soon and 3090s have dual sided RAM with RAM on the back of the card, active backplate really helps.


Great I ordered a passive backplate lol


----------



## mardon

KedarWolf said:


> EK is releasing an active backplate soon and 3090s have dual sided RAM with RAM on the back of the card, active backplate really helps.


Unfortunately the new EK backplate won't fit in my Ncase M1. I do have a new style MP5 works to go on though


----------



## HyperMatrix

0451 said:


> My friend wants a Kingpin, so I gave him my EVGA login and forwarded the auto-notify purchase email. It wouldn’t let him order using his credit card and address. Same message.


Try contacting customer service. Jacob had previously tweeted that this could be done.


----------



## changboy

gfunkernaught said:


> Is the XSPC G1/4" Inline 10k Sensor any good? Should I look for a different one?


I use two of them, the ones with the LCD temp display (red), and they both read correctly. The only thing: be careful when you screw in the plug, because it has two mini wires and you must turn the wire at the same time or you can break it.
They've both been in my system for 4+ years with no problem.


----------



## Chamidorix

.


----------



## HyperMatrix

Chamidorix said:


> Another day, another FC hating post. Sigh.
> 
> Gluing the shunts works perfectly fine and is quite moraley justifiable if you frame it as preserving RMA status on a fundamentally poorly designed and overpriced card. Obviously soldering is better, but only because it is truly fire and forget. With glue, you must periodically check and validate the contact since glue is so much more heat malleable and and it is also harder to get conductive contact in the first place without the intermediate conductive solder. But getting conductive metal on metal contact is not rocket science or precision engineering or something.


I wouldn’t recommend using a glue gun to stack resistors. I mean you can use silver paint to get a boost. Or use silver paint as the glue to hold 2 shunts together. But glue gun? You’d have to scrape off the top of the shunts to make sure they were getting proper contact anyway. If they were specifically checking for shunt modding, they’d be able to see it, just as they would from evidence of you having cleared up silver paint from them.

The majority of what I’ve seen that guy say/do is absolutely moronic. He’s like the Moore’s law is dead guy. He’s right sometimes. But he’s wrong so often that it’s best to avoid using him as a source of knowledge or expertise.


----------



## changboy

Why so much for those active backplates? Does the passive backplate not work, or is something wrong with the temps?
Or is it just to get lower temps, with passive working too? Because cooling the GPU with liquid keeps temps lower, so the memory doesn't get as hot as before and a passive backplate should be enough?


----------



## Chamidorix

HyperMatrix said:


> I wouldn’t recommend using a glue gun to stack resistors. I mean you can use silver paint to get a boost. Or use silver paint as the glue to hold 2 shunts together. But glue gun? You’d have to scrape off the top of the shunts to make sure they were getting proper contact anyway. If they were specifically checking for shunt modding, they’d be able to see it, just as they would from evidence of you having cleared up silver paint from them.
> 
> The majority of what I’ve seen that guy say/do is absolutely moronic. He’s like the Moore’s law is dead guy. He’s right sometimes. But he’s wrong so often that it’s best to avoid using him as a source of knowledge or expertise.


Lol, I removed my post 3 minutes after I posted it because I realized I'm just pouring oil onto a fire here. Elitism is pretty dumb, that's all I will say.


----------



## changboy

Chamidorix said:


> Lol, I removed my post 3 minutes after I posted it because I realized I'm just pouring oil onto a fire here. Elitism is pretty dumb, that's all I will say.


Maybe you were still too slow 😂


----------



## Falkentyne

Chamidorix said:


> .





HyperMatrix said:


> I wouldn’t recommend using a glue gun to stack resistors. I mean you can use silver paint to get a boost. Or use silver paint as the glue to hold 2 shunts together. But glue gun? You’d have to scrape off the top of the shunts to make sure they were getting proper contact anyway. If they were specifically checking for shunt modding, they’d be able to see it, just as they would from evidence of you having cleared up silver paint from them.
> 
> The majority of what I’ve seen that guy say/do is absolutely moronic. He’s like the Moore’s law is dead guy. He’s right sometimes. But he’s wrong so often that it’s best to avoid using him as a source of knowledge or expertise.


Not only that, using a glue gun or even liquid electrical tape is _impossible_ on Founders Edition or Gigabyte shunts!
The shunts you stack won't even touch the original shunts! The Gigabyte and Founders Edition cards "follow" spec by using 2W shunts, which have depressed edges that sit lower than the middle. Oddly enough, Asus, MSI, eVGA and a few other AIBs use 1W shunts (edges flush with the middle).

You have no option except desoldering and replacing the shunts with 3 mOhm ones (ridiculously hard unless you are very experienced and have top-tier desoldering gear!), stack soldering (best), or MG 842AR paint, either as its own shunt or as a 'bridge' between the edges of the original and stacked shunts (may be fine on the PCIe slot, MVDDC and SRC shunts, but can slowly degrade on the Chip Power and 8-pin shunts due to very high current draw; paint works best as a temporary method). I had to deal with 8-pin #1 and GPU Chip Power degrading slowly on mine; maybe I didn't scrape enough, I don't know. But I moved up to soldering and was rewarded with stable, steady shunted power draw.

Another thing: removing paint is a VERY messy and tedious affair that takes a long time. It requires insulating the board (both against paint residue and accidental screwdriver slips!), alcohol + paint residue will still get underneath the adhesive, and there is a risk of damage if the flat blade slips and scratches or breaks something. Most people overlook this. Desoldering a stacked shunt is MUCH easier once you know how and have a decent soldering iron, since desoldering wick helps as well. And as long as you apply Kapton tape first, cleanup is much smoother.
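For anyone new to shunt mods, the effect of stacking can be sketched numerically. This is a minimal illustration; the 5 mOhm stock value and 25 A current are assumptions for the example, not measurements from any specific card:

```python
# Sketch of why stacking a shunt fools the power limit.
# Illustrative values only: 5 mOhm stock shunt, 25 A true current.
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

r_stock = 0.005                          # assumed 5 mOhm stock shunt
r_stacked = parallel(r_stock, r_stock)   # equal-value shunt stacked on top
print(round(r_stacked, 6))               # 0.0025 -> half the resistance

# The controller computes current as V_drop / R using the stock value,
# so at the same true current it reads half the power:
true_current = 25.0                      # amps actually flowing
v_drop = true_current * r_stacked        # voltage across the stacked pair
reported_current = v_drop / r_stock      # what the card believes
print(round(reported_current, 3))        # 12.5 -> card sees half the draw
```

This is also why a glued shunt that loses contact is dangerous in the other direction: if the stacked shunt lifts, the resistance snaps back to stock and the reported power doubles mid-load.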


----------



## Falkentyne

Chamidorix said:


> Lol, I removed by post 3 minutes after I posted it because I realized I'm just pouring oil onto a fire here. Elitism is pretty dumb, that's all I will say.


You can't hot-glue 2W shunts anyway; it won't work. eVGA has 1W shunts. Liquid electrical tape may work on 1W shunts (easier to work with and obtain than a hot glue gun anyway), but that still requires scraping both shunts, since proper low-resistance contact isn't guaranteed, and you may still have issues from expansion/contraction as the board heats up (similar to thermal paste 'pump-out').


----------



## changboy

Using glue is also dangerous: the resistance of a poor contact will produce heat. Nobody serious would ever use glue for an electrical contact.


----------



## gfunkernaught

changboy said:


> I use 2 of them but those with the lcd temp (red) and they read both right. The only thing you need be carefull when you screew the plug coz have the 2 mini wire and you must turn the wire in same times coz you can broke this wire.
> They are both in my system for 4 years + and no problem.


Yeah, I read a review on Amazon about that. That person said the cable came loose or something. I saw the pic, and it does look like it can come loose easily if one isn't careful. Cool.

Any suggestions on where to place it? I was thinking of putting it on the pump/res outlet.


----------



## J7SC

...I'm dreaming about that mineral oil aquarium setup for my mobo and Strix again...seriously ...2 gallons + of externally-cooled mineral oil surrounding the GPU and GDDR6X sounds like a fair fight to me


----------



## KedarWolf

changboy said:


> Why so much for those active backplate ? Is the passive backplate dosent work or something wrong with the temp ?
> Or this to get a lower temp but the passive work too ? Coz by cooling the gpu with liquid keep temp lower then memory not get hot like before so passive backplate should be enough ?


The RAM on 3090s is dual-sided, with chips on the back of the card, and it gets hot, which is why an active backplate is better.


----------



## Jpmboy

I kinda got a kick out of this discussion... especially the part about reversing any power mod resistance reduction (eg, stacking) and not scratching the resistor surface so "they" can't tell there was a mod. Ridiculous. There's been NVRAM registering max power (current) through the VRMs for generations. You gotta flash the card and leave no physical evidence. ✌


----------



## Exilon

changboy said:


> Or this to get a lower temp but the passive work too ?


I can get it down to [email protected] MVDDC with 1mm shims and better pads for a total of $30 in passive components. 

It's just mediocre thermal design from too much space and low quality pads from the block manufacturers because they weren't expecting to have to dissipate this much non-GPU power.


----------



## Falkentyne

Has anyone even tried flashing this FE vbios listed on techpowerup? Does it work? Is it an official dump or some crappy hack that won't flash, like the Galax one?

NVIDIA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## des2k...

After some adjustments (amp clamp), around 612W max on a 2x8-pin reference card.
Not sure if TDP or normalized TDP is enforced; 1.7% left, not adding much after corrections.
I'm going to re-mount this weekend with Kryonaut to see if the delta is better.


----------



## changboy

des2k... said:


> after some adjustments (amp clamp), around 612w max on 2x8pin reference
> not sure if TDP or normalized TDP is enforced, 1.7% left, not adding much after corrections
> I'm going to re-mount this weekend with Kryonaut to see if delta is better
> View attachment 2478226


Can a card run like this all the time without problems? I mean, the 8-pins can support a lot then?


----------



## des2k...

changboy said:


> Can a card run like this all the time without problem ? I mean 8pins can support a lot then ?


Not sure; the only game that pushes 600W is Quake II RTX, which I don't play, and the block delta is 22°C, which I don't really like. My other games are 470W max, and I did play a lot of Control and CP2077.

The 8-pins are 16 AWG, and some airflow does help; they feel very warm to the touch around 500W without case airflow.


----------



## motivman

changboy said:


> Can a card run like this all the time without problem ? I mean 8pins can support a lot then ?


I tested my 2x8-pin cables for 4 hours running a scene in 4K (The Dark Pictures Anthology: Little Hope), maxed out. The card was pulling over 700W, and my cables got a little warm, but no melting or anything. Each 8-pin was pulling over 300W, and the PCIe slot was pulling about 94W. PSU was an EVGA 1200 P2.


----------



## J7SC

For folks with a Kingpin Hybrid, what are your VRAM junction temps overall, and more importantly the delta to GPU temp during load ? Tx


----------



## Jpmboy

Has anyone asked Martin which exact device sensor HWiNFO is getting the temperature signal from? He's on this forum...


----------



## changboy

*200 Watts* / *12 Volts* = *16.667 Amps*


----------



## changboy

This is about temp vs. amps too: the colder it is, the more current it can take, while the hotter it gets, the higher the resistance, which produces more heat. That's why superconductivity happens near -270°C.

The wire itself is not the biggest problem; the connections are where you see the highest temps, because a poor connection adds resistance, and that's where fires usually start.
It's the same as an old, loose outlet: when you touch the plug near the wall it feels very hot because the contact is bad. The better the contact, the less heat, but it's always hottest at the connector. So on a graphics card I don't think you'll see the wire melt or catch fire; the connection is the most critical point.


----------



## HyperMatrix

J7SC said:


> For folks with a Kingpin Hybrid, what are your VRAM junction temps overall, and more importantly the delta to GPU temp during load ? Tx


Temps are ridiculous. The GPU maxes out at 45-46°C after hours of 450-480W load. VRAM gets above 70°C within minutes with no OC if there's no fan/RAM block on it, and goes up to 80°C+ and crashes with an OC. I've been running Precision X instead of Afterburner just so I can see all 3 memory sensor temperatures during gaming to troubleshoot crashes. With a RAM block on there previously, temps stayed below 50°C in the areas the block made contact with. A lot of stability added by doing that.


----------



## J7SC

changboy said:


> This is about temp vs. amps too: the colder it is, the more current it can take, while the hotter it gets, the higher the resistance, which produces more heat. That's why superconductivity happens near -270°C.
> 
> The wire itself is not the biggest problem; the connections are where you see the highest temps, because a poor connection adds resistance, and that's where fires usually start.
> It's the same as an old, loose outlet: when you touch the plug near the wall it feels very hot because the contact is bad. The better the contact, the less heat, but it's always hottest at the connector. So on a graphics card I don't think you'll see the wire melt or catch fire; the connection is the most critical point.


...somehow, all that talk about wire gauge, amps and that sort of thing brought this to mind: ...interesting name, interesting *3450 W* PSU - why should miners have all the fun 










@HyperMatrix Tx for the info on the KPE temps


----------



## changboy

HyperMatrix said:


> Temps are ridiculous. GPU maxes out at 45-46C after hours of 450-480W load. VRAM gets above 70C within minutes with no OC if there's no fan/ram block on it. Goes up to 80C+ and crashes with an OC. Been running Precision X instead of Afterburner just so I can see all 3 memory sensor temperatures during gaming to trouble shoot crashes. With a ram block on there previously, the temps would stay below 50C for the areas the ram block made contact with. Lot of stability added by doing that.


It's curious, because I play for hours with my GPU RAM OC'd at +1000 and I never get a crash from it; I get some crashes if I push my core too much, but not from the memory. Are you sure all your memory makes contact with the pads and everything?

It's been more than a week that I game with my memory at +1000 and my core at 2100MHz, and I don't get any crashes.
If I bench, I put my memory at +1500 and it never crashes from that either; it's always when I push the GPU core a bit too much.

With my card, if I play many hours at 2100MHz, no problem in any game, but at 2130MHz some games can crash after 5 minutes. Maybe just try reducing your core clock a bit and see if it still crashes. The core is sensitive to crashes in some games.


----------



## HyperMatrix

changboy said:


> It's curious, because I play for hours with my GPU RAM OC'd at +1000 and I never get a crash from it; I get some crashes if I push my core too much, but not from the memory. Are you sure all your memory makes contact with the pads and everything?
> 
> It's been more than a week that I game with my memory at +1000 and my core at 2100MHz, and I don't get any crashes.
> If I bench, I put my memory at +1500 and it never crashes from that either; it's always when I push the GPU core a bit too much.
> 
> With my card, if I play many hours at 2100MHz, no problem in any game, but at 2130MHz some games can crash after 5 minutes. Maybe just try reducing your core clock a bit and see if it still crashes. The core is sensitive to crashes in some games.


It’s strictly a memory issue on the Kingpin cards. On the FTW3 Hybrid the memory stayed around 60-65°C with an OC; the Kingpin goes higher with no OC. Sticking a RAM block on there to cool it down fixed my crashes at +1350MHz on mem.


----------



## changboy

HyperMatrix said:


> It’s a strictly memory issue on the Kingpin cards. FTW3 Hybrid the memory stayed around 60-65C With an OC. Kingpin goes higher with no OC. Sticking a ram block on there to cool it down fixed my crashes with +1350MHz on mem.


Ah ok, I understand now hehehe, sorry


----------



## geriatricpollywog

J7SC said:


> For folks with a Kingpin Hybrid, what are your VRAM junction temps overall, and more importantly the delta to GPU temp during load ? Tx


Here are my idle and load results with PR running in a loop with fans at 100% (AIO), 51% (VRM) and 65% (backplate). AIO fans are Phanteks PH-F140MP. Room temp is 21C.


----------



## changboy

Banggood, I didn't know that place hehehe. I found you can cool your backplate for $5 hehehe:

40x40x12mm aluminum water cooling block for CPU/graphics radiator heatsink — only €3.37
www.banggood.com


----------



## J7SC

@0451 - Thanks also for the KPE temp info

... fyi, I was checking out caseking.de re. inventory on CPUs and did a quick side-trip to the 3090 GPU section just for fun...the price differential of the Strix OC to other models is the reverse of what's going on in most places in North America...mind you, the FTW3 Ultra is the only one in stock right now (at least they have stock...), so maybe the prices of the others will be 'adjusted' as new stock comes in...also no tariff in the EU, but posted prices include VAT / sales tax.


----------



## gfunkernaught

changboy said:


> *200 Watts*16.667 *Amps**12 Volts*
> 
> View attachment 2478237


Apparently 18 AWG is the standard wire gauge for PSUs; I emailed Corsair to verify this. Isn't the limit for PCIe connections on a GPU 150W? So that's 12.5A. Are the PCIe cables 16 AWG? 18 AWG seems like it would get warm if more than 11.4A is being pulled.
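Putting that arithmetic in one place, as a rough sketch (the even current split across the three 12V wires is an idealizing assumption; real cables won't balance perfectly):

```python
# Rough per-wire current for an 8-pin PCIe power cable with 3 live 12 V wires,
# assuming the load splits evenly across them (an idealization).
def amps_per_wire(watts, volts=12.0, live_wires=3):
    """Current per live wire if the load is shared evenly."""
    return watts / volts / live_wires

print(round(amps_per_wire(150), 2))  # 4.17 A each at the official 150 W spec
print(round(amps_per_wire(300), 2))  # 8.33 A each at 300 W, still under ~10 A
```

So even a cable pulling double the 150W spec stays under the commonly quoted ~10A conservative rating for 18 AWG, which matches why these cables get warm but don't melt.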


----------



## changboy

2452 euro to cad = 3773$ !!!


----------



## J7SC

changboy said:


> 2452 euro to cad = 3773$ !!!


...yup, though as mentioned, that_ includes_ VAT sales tax (15% or higher)


----------



## mirkendargen

J7SC said:


> ...somehow, all that talk about wire gauge, amps and that sort of thing brought this to mind: ...interesting name, interesting '''3450 W''' PSU - why should miners have all the fun
> 
> View attachment 2478240
> 
> 
> @HyperMatrix Tx for the info on the KPE temps


I wonder how you're supposed to have your house wired to even plug that in... A single 20 amp circuit wouldn't cut it.


----------



## changboy

gfunkernaught said:


> Apparently 18AWG is the standard wire gauge for PSUs. I emailed corsair to verify this. Isn't the limit for PCIe connections on a GPU 150w? So that 12.5a. Are the PCIe cables 16AWG? 18AWG would seem like it would get warm if more than 11.4a is being pulled.


A cable getting warm is nothing to worry about; copper melts around 1000°C. But the hotter the cable gets, the harder the PSU has to push current, and at some point the voltage drops from 12V; you'll see 11.9, 11.8... and the connector can get hotter than the cable. I think those 8-pins can take more than we think, because the ratings carry a safety margin, as with house wiring and everything else. Many connectors in a line (extensions) make it worse: you'll usually measure more loss on a cable with an extension than one without, because of the added resistance.


----------



## changboy

J7SC said:


> ...yup, though as mentioned, that_ includes_ VAT sales tax (15% or higher)


Myself, I find the cards are going for crazy prices, even more than when they came out. I'm happy I got one some months ago; today I'd think twice at these crazy prices. It's more than I paid for my card + waterblock lol. You're looking at $4000 CAD with the waterblock now lol.


----------



## J7SC

mirkendargen said:


> I wonder how you're supposed to have your house wired to even plug that in... A single 20 amp circuit wouldn't cut it.


...it probably is geared towards non-residential users...if it has two inputs, separate circuits in your house could also do it



changboy said:


> Myself, I find the cards are going for crazy prices, even more than when they came out. I'm happy I got one some months ago; today I'd think twice at these crazy prices. It's more than I paid for my card + waterblock lol. You're looking at $4000 CAD with the waterblock now lol.


...yeah, waiting for prices to go down _in the near future _is probably not such a good idea

btw, 'Banggood' has some nifty stuff ! I was just checking them out a bit more


----------



## changboy

Banggood, the name makes me laugh hehehe. I just don't know if it's a lot of junk hehehe, with the cheapest parts, but sometimes I've paid less and gotten more; that doesn't happen many times hehehe. A lot of stuff there hehehe.


----------



## HyperMatrix

mirkendargen said:


> I wonder how you're supposed to have your house wired to even plug that in... A single 20 amp circuit wouldn't cut it.


Could be for 220V euro voltage. Would require a 16 amp fuse.
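As a rough sanity check on the wall-side current (the ~90% efficiency figure is my own assumption, not a spec for this PSU):

```python
# Mains-side current for a hypothetical 3450 W PSU, assuming ~90% efficiency.
def wall_amps(dc_watts, mains_volts, efficiency=0.90):
    """Current drawn from the wall at full DC load."""
    return dc_watts / efficiency / mains_volts

print(round(wall_amps(3450, 120), 1))  # 31.9 -> no single 15/20 A US circuit
print(round(wall_amps(3450, 230), 1))  # 16.7 -> marginal even on a 16 A EU breaker
```

Which is consistent with the guess that it's aimed at 230V regions (or non-residential wiring): on 120V it would need a dedicated 30A+ circuit.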


----------



## gfunkernaught

changboy said:


> Cable come warm its nothing to be worry coz copper melt at 1000c but more the cable come hot more the psu will force to push current then at some point voltage will drop from 12v, then u can see 11.9 and 11.8.... and connector can come hot more then the cable. I think those 8 pins can take more then we think, coz those number are for safety electric for everything, house and els. If many connector in line its worst too.(extension). Usually you will measure more power on the cable with extention then the one with no extention coz of restriction.


It's not the copper I'm worried about, it's the cable's sheathing or jacket. That's just plastic, which can melt or at least warp. Then again, I've seen some CAT6 cables just get really soft and sometimes a bit gooey from high temps.


----------



## changboy

gfunkernaught said:


> Its not the copper I'm worried about, its the cable's sheathing or jacket. That is just plastic, which can melt or at least warp. Then again I've seen some CAT6 cables just get really soft and sometimes a bit goo-e from high temp.


The insulation on the cables isn't just plastic; it's usually rated 105°C for a normal cable. It insulates so the current doesn't short to exterior metal or other parts; they can't just put ordinary plastic on those cables. Some aerial cables aren't insulated, but they're kept at a distance from other cables and are never on or in the ground.


----------



## motivman

Finally broke 15.7k on my reference card. Now I just need to secure a 3x8-pin card that matches it, aka a Strix (the only 3x8-pin I think is worth buying, IMHO). The FTW3 has red-light-of-death issues, the Gaming X Trio has **** VRMs, the Suprim X has fuses...etc. The only issue I have with my reference card is that in some games it pulls over 300W per 8-pin (verified with a power meter); I would rather have a Strix that pulls 200W per 8-pin max. My first Strix was a dud; hoping the next one overclocks better so I can retire/sell my reference card. I ended up selling my Kingpin because I got a local offer I couldn't refuse. Besides, the Kingpin isn't the best choice for me at the moment since there are no waterblocks available for it right now, so no way to integrate it into my loop.









I scored 15 713 in Port Royal

Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## toxicnerve

motivman said:


> Suprim X has fuses...


Can you elaborate on this please? I am running a Suprim X. What / where are the fuses and what limitation do they impose?


----------



## Beagle Box

motivman said:


> Finally broke 15.7k on my reference card. Now, I just need to secure a 3X8 pin card that matches this card aka a Strix (the only 3x8pin I think is worth buying IMHO). FTW3 has red light of death issues, Gaming X trio has **** VRM's, Suprim X has fuses...etc. The only issue I have with my reference is the fact that in some games, the thing pulls over 300W per pin (verified with power meter). I would rather have a strix that pulls 200W per 8 pin max. My first strix was a dud, hoping that the next one I get overclocks better, so I can retire/sell my reference card. I ended up selling my kingpin, because I got an offer locally that I couldn't refuse. In addition to that, kingpin is not the best choice for me at the moment since there are no waterblocks available for it right now, so no way to integrate it into my loop.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 713 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Nice. Which BIOS are you using?


----------



## gfunkernaught

Speaking of power and homes, I remember dimming the lights in my old apartment while playing Crysis 3 with GTX 580 SLI. The lights would flicker when GPU usage was above 90%. I wonder what my neighbors thought.


----------



## J7SC

gfunkernaught said:


> Speaking of power and homes, I remember dimming the lights in my old apartment while playing crysis 3 with GTX 580 SLI. The lights would flicker when the gpu usage was above 90%. I wonder what my neighbors thought


...yeah SLI 580s, 780 Tis, 980 / Ti CL/KPE all did that, mind you mostly just on the circuit they were on, not the other outlets. My 2x 2080 Ti / SLI-CFR still do it in FS2020 (w/Threadripper oc as well, that system will pull more than 1150W). Even the single 3090 does it in FS2020...

Speaking of 3090 power, check out HWbot XoCer OGS' latest Port Royal...up to *2895 MHz* on the core (likely full-pot). HeHe, I can match the VRAM at 1381 MHz w/ my Strix on air, GPU speed, not so much...

source


----------



## dk10438

gfunkernaught said:


> Apparently 18AWG is the standard wire gauge for PSUs. I emailed Corsair to verify this. Isn't the limit for PCIe connections on a GPU 150W? So that's 12.5A. Are the PCIe cables 16AWG? 18AWG seems like it would get warm if more than 11.4A is being pulled.


FWIW, each 8-pin connector has three live wires, and the standard is 18-gauge. Conservatively, each wire should be able to handle 10A × 12V = 120W, or 360W per connector.
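The arithmetic works out as a quick sketch (the three-live-wire count and the conservative 10 A per 18 AWG conductor figure are taken from the post above, not from the ATX spec):

```python
# Conservative capacity of an 8-pin PCIe power connector, per the post:
# three live 12 V conductors, each 18 AWG and rated ~10 A conservatively.
LIVE_WIRES = 3          # 12 V conductors in an 8-pin PCIe connector
AMPS_PER_WIRE = 10.0    # conservative 18 AWG rating used in the post
RAIL_VOLTS = 12.0       # PCIe power rail voltage

def per_wire_watts(amps=AMPS_PER_WIRE, volts=RAIL_VOLTS):
    return amps * volts                      # 10 A x 12 V = 120 W

def per_connector_watts(wires=LIVE_WIRES):
    return wires * per_wire_watts()          # 3 x 120 W = 360 W

print(per_wire_watts(), per_connector_watts())  # 120.0 360.0
```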


----------



## motivman

toxicnerve said:


> Can you elaborate on this please? I am running a Suprim X. What / where are the fuses and what limitation do they impose?


The Suprim X and Gaming X Trio have 20A fuses on each 8-pin connector, so the max power you can pull per connector before one blows is 240W. The chance of you pulling that much power is nil, though. The max power draw I have seen on a 3x8-pin card on the 1000W BIOS is about 190-200W per connector, so I wouldn't worry about it. But what if there is a spike? I just prefer to have a card with no fuses at all.
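The fuse math sketches the same way (the 20 A fuse value and the ~200 W observed peak are figures from this post, not verified board specs):

```python
# A 20 A fuse on a 12 V input opens at 20 * 12 = 240 W through that
# connector; the ~200 W worst case observed leaves ~40 W of headroom.
FUSE_AMPS = 20.0
RAIL_VOLTS = 12.0

def fuse_limit_watts(amps=FUSE_AMPS, volts=RAIL_VOLTS):
    return amps * volts

def headroom_watts(observed_max=200.0):
    """Margin between the fuse's limit and an observed per-connector peak."""
    return fuse_limit_watts() - observed_max

print(fuse_limit_watts(), headroom_watts())  # 240.0 40.0
```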


----------



## motivman

Beagle Box said:


> Nice. Which BIOS are you using?


520W EVGA BIOS. It outperforms the 1000W BIOS for me in all scenarios. The 1000W BIOS IMHO just wastes power for no reason. For example, in a particular scene in Control, running max settings at 4K with no DLSS:

520W BIOS = 64 FPS, 540W power draw
1000W BIOS = 64 FPS, 712W power draw

The only benefit of the 1000W BIOS is that in GPU-Z my perfcap reason doesn't say "Pwr"... lol


----------



## gfunkernaught

motivman said:


> 520W bios = 64 FPS 540W power draw
> 1000W bios = 64 FPS 712W power draw


Wait at the same clock speed?


----------



## changboy

motivman said:


> 520W evga bios. It outperforms 1000W bios for me in all scenarios. 1000W bios imho just wastes power for no reason. For example, In a particular scene in control, running max setting, at 4k, with no dlss;
> 
> 520W bios = 64 FPS 540W power draw
> 1000W bios = 64 FPS 712W power draw.
> 
> The only benefit of 1000w bios, is that in GPU-Z, my perfcap does not say "pwr".... lol


What card do you have again ?


----------



## motivman

gfunkernaught said:


> Wait at the same clock speed?


1000W bios was running maybe 2 bins higher, due to the fact that it wasn't "power" limited.... haha


----------



## motivman

changboy said:


> What card do you have again ?


2x8-pin reference


----------



## Beagle Box

motivman said:


> 520W evga bios. It outperforms 1000W bios for me in all scenarios. 1000W bios imho just wastes power for no reason. For example, In a particular scene in control, running max setting, at 4k, with no dlss;
> 
> 520W bios = 64 FPS 540W power draw
> 1000W bios = 64 FPS 712W power draw.
> 
> The only benefit of 1000w bios, is that in GPU-Z, my perfcap does not say "pwr".... lol


Thx.

While I haven't experimented with the 1000W KP BIOS just yet, I have seen differences among the BIOSes I've tried.

Both the MSI Suprim 450W BIOS and the KP 520W BIOS perform better on my Strix OC, and both give better benchmark scores in Port Royal than the stock Strix 480W BIOS. The Suprim one seems to run really smoothly on my GPU for some reason; fps doesn't swing as much. The stock Strix OC BIOS pulls a constant 460-470W, but it's harsh and unforgiving, and the most prone to lockup during benching if the card is pushed too far.

I was finally able to reach my personal goal of 15200 in PR this morning with the KP 520W BIOS. That still puts me more than 100 pts low in my weight class. May have to resort to opening a window to get over 15300 with my rig. 🥶


----------



## Pepillo

Have you tried the new 3dMark Mesh Shader?









I scored 0 in Mesh Shader feature test


Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32446 MB, 64-bit Windows 10




www.3dmark.com





This technology promises a lot: gains of more than 800%........


----------



## HyperMatrix

Pepillo said:


> Have you tried the new 3dMark Mesh Shader?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 0 in Mesh Shader feature test
> 
> 
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32446 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> This technology promises a lot: gains of more than 800%........


----------



## J7SC

I think I made further progress on cooling the Strix OC backplate, which is heated up by the double-sided GDDR6X VRAM ...after some trial and error, also necessitated by this old mobo having an issue with the primary PCIe slot, I moved the card to slot #3 and turned all the others off (via the Rampage XTR switches). My TT 240s did well and were quiet, but then I mounted a single GentleTyphoon 4K rpm: even better temps, but too loud...finally I put on 2x Arctic P12 PWM PST and also moved them closer to the backplate, as with the other fans. With 2x 120mm fans side by side, one thing to watch out for is not to cover the VRM backplate cutout of the Strix OC, as that is cooled by one of the three GPU fans blowing in the opposite direction.

Delta under load is actually the best of all the fan tests, and they are quiet even at this level. Idle temps on top, load temps on the bottom. Load was just over 400 W (500 W later...)


----------



## gfunkernaught

HyperMatrix said:


> View attachment 2478352


Is that Cyberpunk running on an N64?


----------



## changboy

gfunkernaught said:


> Is that Cyberpunk running on an N64?


nintendo switch


----------



## geriatricpollywog

Just bought the 10 year warranty. The card will be as old as the GTX 590 when the warranty expires.









The Almighty GTX 590 - Is Nvidia's $1000 Dual GPU Beast From 2011 Still Worth It? - YouTube


----------



## wesley8

Pepillo said:


> Have you tried the new 3dMark Mesh Shader?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 0 in Mesh Shader feature test
> 
> 
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32446 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> This technology promises a lot: gains of more than 800%........


http://www.3dmark.com/ms/3674


----------



## changboy

0451 said:


> Just bought the 10 year warranty. The card will be as old as the GTX 590 when the warranty expires.
> 
> View attachment 2478423
> 
> 
> The Almighty GTX 590 - Is Nvidia's $1000 Dual GPU Beast From 2011 Still Worth It? - YouTube


What the cost of this ?


----------



## geriatricpollywog

changboy said:


> What the cost of this ?


$30 for the 5 year (+2) and $60 for the 10 year (+7). It’s stupid cheap for a $2000 card with a short MTBF.


----------



## HyperMatrix

0451 said:


> Just bought the 10 year warranty. The card will be as old as the GTX 590 when the warranty expires.
> 
> View attachment 2478423
> 
> 
> The Almighty GTX 590 - Is Nvidia's $1000 Dual GPU Beast From 2011 Still Worth It? - YouTube


If I’m still using the RTX 3090 after 3 years please shoot me.


----------



## geriatricpollywog

HyperMatrix said:


> If I’m still using the RTX 3090 after 3 years please shoot me.


I thought the same thing since this has been my 5th unique GPU in the past 9 months. But man, $60… Imagine all the GPU shortages I’ll weather.


----------



## geriatricpollywog

Pepillo said:


> Have you tried the new 3dMark Mesh Shader?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 0 in Mesh Shader feature test
> 
> 
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32446 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> This technology promises a lot: gains of more than 800%........


I'm getting an error message. I am pretty sure my GPU has mesh shaders.

*Your PC is unable to run this test*
You need a graphics card with drivers that support mesh shaders to run this test.


----------



## Falkentyne

0451 said:


> I'm getting an error message. I am pretty sure my GPU has mesh shaders.
> 
> *Your PC is unable to run this test*
> You need a graphics card with drivers that support mesh shaders to run this test.


Maybe your drivers are as old as a dinosaur's poop?


----------



## changboy

HyperMatrix said:


> If I’m still using the RTX 3090 after 3 years please shoot me.


hehehehe funny 😂


----------



## changboy

I think I checked for the EVGA extended warranty, but I'm not sure if we can buy it in Canada. If I remember right, I clicked to try and wasn't able to buy it.


----------



## geriatricpollywog

changboy said:


> I think i have check for extended warranty evga but not sure if we can buy this in Canada. If i remeber i clic for try and i was not able to buy this.


It didn’t work for me the first time. Try it on your phone, tablet, different browser etc. If all else fails, call them.


----------



## changboy

OK, you're in the USA, but maybe it's too late for me, because I bought it in November 2020 I think, lol. Also, I use the 1000W BIOS on my FTW3 Ultra, lol 😂


----------



## geriatricpollywog

changboy said:


> OK, you're in the USA, but maybe it's too late for me, because I bought it in November 2020 I think, lol. Also, I use the 1000W BIOS on my FTW3 Ultra, lol 😂


You have 3 months. So if you bought it after Nov 11, there is still time.


Falkentyne said:


> Maybe your drivers are as old as a dinosaur's poop?


I just updated Windows. That adds the latest drivers, right?





Just kidding, I am running 461.40. This also happened when the Ray Tracing feature test came out, but I forgot what fixed it.


----------



## changboy

0451 said:


> You have 3 months. So if you bought it after Nov 11, there is still time.


Purchase date: 11/02/2020: it's too late, hehehe. I remember I tried but it didn't work; like I told you, I think it's just for United States citizens, but I'm not sure.


----------



## Zogge

changboy said:


> Purchase date: 11/02/2020: it's too late, hehehe. I remember I tried but it didn't work; like I told you, I think it's just for United States citizens, but I'm not sure.


I could buy it in Sweden too when I had the EVGA 3090, but I returned it for the Strix. So it's not US-only.


----------



## toxicnerve

Beagle Box said:


> Both the MSI Suprim 450W BIOS and the KP520W BIOS perform better on my Strix OC and both give better benchmark scores in Port Royal than the stock Strix 480W BIOS .


Interesting that the lower wattage Suprim BIOS nets you better results than the stock 480W BIOS. 

Any theories as to why that may be the case?


----------



## rix2

Has anyone tested the Asus Strix 480W BIOS on an FTW3 Ultra?


----------



## Beagle Box

toxicnerve said:


> Interesting that the lower wattage Suprim BIOS nets you better results than the stock 480W BIOS.
> 
> Any theories as to why that may be the case?


I haven't heard anyone else mention this behavior, so no theories. Could be just my card. Maybe it is slightly different somehow.


----------



## changboy

EVGA is working on a BIOS fix for the PCIe power draw and such; we'll see in the future.


----------



## Wihglah

changboy said:


> EVGA is working on a BIOS fix for the PCIe power draw and such; we'll see in the future.


They have been working on it for months. I wouldn't hold your breath.


----------



## changboy

Wihglah said:


> They have been working on it for months. I wouldn't hold your breath.


I hear it will be ready for 2023


----------



## jomama22

toxicnerve said:


> Interesting that the lower wattage Suprim BIOS nets you better results than the stock 480W BIOS.
> 
> Any theories as to why that may be the case?


The Strix 480W BIOS is just the total TDP. If you're using the original BIOS from launch (there are two versions), the 8-pins are limited to 121.5W each. Meanwhile the PCIe slot, though limited to 89W or something like that, will only ever pull ~70W max no matter what, even shunted. So you will only hit that 480W when spiking; it will usually average around 430-450W during use.


----------



## changboy

I received a new error at startup with MSI Afterburner: "Direct3D12 components cannot be hooked right now, we strongly recommend you restart the application."

I didn't have this 2 days ago........


----------



## KedarWolf

Anyone wanting Fujipoly 17 W/mK pads, I find this is the cheapest place to get them. They had free standard shipping last time I checked, even to Canada.
You'll have to pay the taxes when you receive them in Canada, though.



Ultra Extreme Thermal Pads | Page 1 | Sort By: Product Title A-Z - FrozenCPU.com



If your card has left you broke but you want decent cheap thermal pads, you can get Thermalright 12.8 W/mK pads on amazon.com, 85x45mm starting at $12.99.



https://www.amazon.com/nkomax-Thermalright-Conductive-Resistance-Temperature/dp/B08CGR62QX?th=1



You can also get them on AliExpress, though you might wait a month for them to actually reach you from China; the 120x120mm starts at under $7.

Thermalright 120x120mm

Edit: As well, you can get Thermalright TFX rebranded for Vietnam as Thermagic ZF-EX for under $6.

Thermagic ZF-EX

And if you Google, you can get a coupon code for $4 off for new buyers or if you just register with a new email.


----------



## mardon

motivman said:


> 520W evga bios. It outperforms 1000W bios for me in all scenarios. 1000W bios imho just wastes power for no reason. For example, In a particular scene in control, running max setting, at 4k, with no dlss;
> 
> 520W bios = 64 FPS 540W power draw
> 1000W bios = 64 FPS 712W power draw.
> 
> The only benefit of 1000w bios, is that in GPU-Z, my perfcap does not say "pwr".... lol


This is very interesting and mirrors my experience of the 1000w bios.

I've just shunted my 3090 with 15mOhm resistors; I don't need over 550W since I'm SFF.

My 850W PSU would turn off at about 450W power draw during Cyberpunk with the 1000W BIOS. I wonder if it will do the same now that the card is shunted? It only does it in Cyberpunk and Control, not in any other games.


----------



## changboy

mardon said:


> This is very interesting and mirrors my experience of the 1000w bios.
> 
> I've just shunted my 3090 with 15mohm resistors as I don't need over 550w as I'm SFF.
> 
> My 850w Psu would turn off at about 450w power draw during Cyberpunk with the 1000w bios. I wonder if it will do the same when the card is shunted? It only does it in Cyberpunk and Control not any other games.


Buy a 1600w hehehehe


----------



## mouacyk

mardon said:


> This is very interesting and mirrors my experience of the 1000w bios.
> 
> I've just shunted my 3090 with 15mohm resistors as I don't need over 550w as I'm SFF.
> 
> My 850w Psu would turn off at about 450w power draw during Cyberpunk with the 1000w bios. I wonder if it will do the same when the card is shunted? It only does it in Cyberpunk and Control not any other games.


What makes you think shunting the GPU would allow your PSU to maintain the same power that shut it down? Your only option is to get more watts on your PSU.


----------



## martinhal

changboy said:


> I recive a new error at start up with msi afterburner : direct3D12 compoments cannot be hooked right now , we strongly recommended you restart application.
> 
> I didnt have this 2 days ago........


It has to do with the Windows 10 update. Uninstalled it and the error message is gone.


----------



## mardon

mouacyk said:


> What makes you think shunting the GPU would allow your PSU to maintain the same power that shut it down? Your only option is to get more watts on your PSU.


Because there is more pull from the PCIe slot with a shunt.
I don't see why 850W wouldn't be enough at 450W GPU load; my CPU is at 150W max. I'm sure I read it was something to do with power-draw spikes in Cyberpunk.


----------



## mardon

changboy said:


> Buy a 1600w hehehehe


If they did one in SFX I would!
Trying to work out if the new 1000w Silverstone SFX-L will fit.


----------



## Falkentyne

mardon said:


> Because there is more pull from the pcie slot with a shunt.
> I don't see why 850w wouldn't be enough at 450w GPU load. CPU is at 150w max. I'm sure I read it was something to do with spikes in power draw on Cyberpunk.


Older PSUs have a sensitive OCP circuit that gets tripped by high transient loads. Most of the older Seasonic PSU platforms, from the X-1050 to the early Prime and Focus models, are affected by this. The OneSeasonic rebranded and refreshed models have fixed this flaw, but many OEMs used the SS platform as well...
So yes, he needs a new, modern PSU.


----------



## mardon

Falkentyne said:


> Older PSU's have a sensitive OCP circuit that gets tripped during high transient loads. Most of the older Seasonic PSU platform, from the X-1050 to the early Prime and Focus models are affected by this. The OneSeasonic rebranded and refreshed models have fixed this flaw. But many OEM's also used the SS platform as well...
> So yes, he needs a new, modern PSU.


Damn, I had a Corsair SF750 and upgraded to a TF850 (made by Corsair's OEM); both are designs under two years old. I guess it must be a limitation of the SFX form factor?


----------



## changboy

martinhal said:


> It has to do with the Windows 10 update. Uninstalled and error message gone


You mean uninstall the latest Windows 10 update? Yeah, I got one update yesterday, I think.
About that, I found this on the Microsoft website:
*KB4601319*


Microsoft and Discord have found incompatibility issues with some games using Direct3D 12 when the in-game overlay feature of Discord is enabled. When attempting to open affected games you might receive an error, or the game might close silently.


----------



## J7SC

jomama22 said:


> The strix 480w bios is just the total tdp. If you're using the original bios from launch (there are two versions) the 8pins are limited to 121.5w each. Meanwhile, the pcie slot, though limited to 89w or somthing like that, will only ever pull ~70w max no matter what, even shunted. So you will only hit that 480w when spiking. It will usually average around 430-450w during usage.


...good to know. I must have the updated version then, per below. I got the card (Strix OC) less than two weeks ago and didn't think they'd had those lingering on shelves for months on end. BTW, is there a difference between the Strix and Strix OC BIOS, or just the binning?


----------



## HarbingerOfLive

Tried going through the thread, but does anyone have specific experience on the MSI X Trio with either the 450w Suprim bios, the 480w Strix OC bios, or the 500/520w EVGA bios? Specifically trying to see which ones disable certain Displayports since I currently run 3 monitors and no iGPU to recover myself out of a fluke, as well as just generally curious how they perform comparatively.


----------



## gfunkernaught

My motherboard is an MSI Z370 Gaming M5 and it has a 2-pin header near the CPU and it isn't labeled (or I'm not seeing it). Manual doesn't mention it. Could I use that to plug my water temp sensor into? If not, what could I use? I bought a water temp sensor which is coming next week and it has a 2-pin connector.


----------



## Beagle Box

gfunkernaught said:


> My motherboard is an MSI Z370 Gaming M5 and it has a 2-pin header near the CPU and it isn't labeled (or I'm not seeing it). Manual doesn't mention it. Could I use that to plug my water temp sensor into? If not, what could I use? I bought a water temp sensor which is coming next week and it has a 2-pin connector.


No. The M5 has no temperature header. You'll need a Corsair Commander Pro or something like an Aquaero controller to use that sensor.


----------



## gfunkernaught

Beagle Box said:


> No. The M5 has no temperature header. You'll need a Corsair Commander Pro or something like an Aquaero controller to use that sensor.


That 2-pin header seems to be labeled "J1". 

Are there any stand-alone temp controller/header that would plug directly into an available USB header on my motherboard? Those two that you mentioned are full controllers that I don't want to spend that kind of money on.


----------



## jomama22

J7SC said:


> ...good to know. I must have the updated version then, per below. I got the card less than two weeks ago (Strix OC) and didn't think they had those linger on their shelves for months on end. BTW, is there a difference between the Strix and Strix OC bios, or just the binning ?
> 
> View attachment 2478525


I'm not sure a non-OC BIOS exists, honestly. Yeah, it seems you are on the v2 BIOS, where the pin limits are 150W I believe. You may want to test out overclocking the mem and running some 4K benchmarks: the v2 BIOS cuts max memory wattage to 9x watts (96, I think it is) from 150W or so on the v1 BIOS. That's really the reason I stuck with v1 (I'm shunted with 4mOhm, so the pin limit isn't really a big deal).


----------



## Beagle Box

gfunkernaught said:


> Are there any stand-alone temp controller/header that would plug directly into an available USB header on my motherboard? Those two that you mentioned are full controllers that I don't want to spend that kind of money on.


The Quadro or Octo. I have the same board and I need one, too. Good luck finding one.


----------



## Beagle Box

jomama22 said:


> I'm not sure a non oc bios exists honestly. Yeah seems you are on v2 bios where the pin limits are 150w I believe. May want to test out overclocking the mem and running some 4k benchmarks. The v2 bios cuts max memory wattage to 9x watts (96 I think it is) from 150w or so on the v1 bios. Really the reason I stuck with v1 (I'm shunted with 4mohm so the pin limit isn't really a big deal).


How do you know which BIOS version a card has? Got my card in early Jan.


----------



## J7SC

jomama22 said:


> I'm not sure a non oc bios exists honestly. Yeah seems you are on v2 bios where the pin limits are 150w I believe. May want to test out overclocking the mem and running some 4k benchmarks. The v2 bios cuts max memory wattage to 9x watts (96 I think it is) from 150w or so on the v1 bios. Really the reason I stuck with v1 (I'm shunted with 4mohm so the pin limit isn't really a big deal).


Thanks. So far I haven't noticed any memory misbehavior (past 1350 on the VRAM in Superposition 4K now), but I will do some more benching over the (long, here) weekend...and it's getting cold and snowy

Can you tell from the GPU-Z screenie in the spoiler which BIOS version it is (v1 or v2)? I know it says Sept. 1, but with other GPUs I have, an updated BIOS often kept the initial release date.



Spoiler


----------



## gfunkernaught

Beagle Box said:


> The Quadro or Octo. I have the same board and I need one, too. Good luck finding one.


I found this, which piqued my DIY interest. Maybe it is possible to connect the temp sensor to this thing? I see it has a Type-C connector, but that can easily be wired to a USB header.


----------



## jomama22

Beagle Box said:


> How do you know which BIOS version a card has? Got my card in early Jan.











[Official] NVIDIA RTX 3090 Owner's Club






www.overclock.net





These are the two versions; you can check in GPU-Z which BIOS version you are on by comparing the version number.



J7SC said:


> Thanks. So far I haven't noticed any memory misbehavior (past 1350 / VRAM in Superposition 4K now) but I will do some more benching on the (here long) weekend...and it's getting cold and snowy
> 
> Can you tell from the GPUz screenie in the spoiler what the bios version is (v1 or v2) ? I know it says Sept. 1, but with other GPUs I have, updated bios often kept the initial release date
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2478533


You're on the v2. Check the link above. You would need to keep an eye on MVDDC to see what your memory is pulling. If it's bouncing off 90W or so, you are being throttled and the memory clock will just be reduced behind the scenes.


----------



## J7SC

jomama22 said:


> [Official] NVIDIA RTX 3090 Owner's Club
> 
> 
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> These are the two versions, you can just check in gpu-z for the bios version your are on by comparing the version.
> 
> 
> 
> You're on the v2. Check link above. You would need to keep an eye on mvddc to see what you memory is pulling. If it's bouncing off 90w or so, you are being throttled and the memory clock will just be reduced behind the scenes.


Thanks again! Yeah, I really haven't watched MVDDC yet at all; I was (still am) fighting a mobo issue over the past week. So far I just locked in the GPU MHz and PL in Superposition 4K, varying only the VRAM speed...the score is still going up. I'll try to run Superposition 8K on the weekend with sensors open...that should give me some good feedback on MVDDC.


----------



## martinhal

changboy said:


> Yu mean unistall the latest update on windows 10, ya i got 1 update yesterday i think ?
> About that i found this on microsoft web site :
> *KB4601319*
> 
> 
> Microsoft and Discord have found incompatibility issues with some games using Direct3D 12 when the in-game overlay feature of Discord is enabled. When attempting to open affected games you might receive an error, or the game might close silently.


Yes I think that was the one


----------



## HiLuckyB

changboy said:


> I recive a new error at start up with msi afterburner : direct3D12 compoments cannot be hooked right now , we strongly recommended you restart application.
> 
> I didnt have this 2 days ago........


I just installed the newest beta of RivaTuner and this went away.


----------



## Beagle Box

J7SC said:


> Thanks. So far I haven't noticed any memory misbehavior (past 1350 / VRAM in Superposition 4K now) but I will do some more benching on the (here long) weekend...and it's getting cold and snowy
> 
> Can you tell from the GPUz screenie in the spoiler what the bios version is (v1 or v2) ? I know it says Sept. 1, but with other GPUs I have, updated bios often kept the initial release date
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2478533


My BIOS is earlier version "A8". Note the RAM speed I run. I could go faster, but this is the high-end 'sweet spot'. 
Max Board Power Draw stated as 481.4W.
Max MVDDC stated as 123.4W
Max PWR_SRC Power Draw 142.1W


----------



## ViRuS2k

Store – MP5WORKS







mp5works.com





Just got myself the serial kit. Grab 'em while they're hot, guys, they're back in stock  but not for long lol


----------



## changboy

ViRuS2k said:


> Store – MP5WORKS
> 
> 
> 
> 
> 
> 
> 
> mp5works.com
> 
> 
> 
> 
> 
> Just got myself the serial kit. Grab 'em while they're hot, guys, they're back in stock  but not for long lol


It's $156 CAD before import fees, so maybe $200 for this thing. Very expensive, I think, when you can get the same result very cheaply; that's just my opinion.


----------



## motivman

mardon said:


> This is very interesting and mirrors my experience of the 1000w bios.
> 
> I've just shunted my 3090 with 15mohm resistors as I don't need over 550w as I'm SFF.
> 
> My 850w Psu would turn off at about 450w power draw during Cyberpunk with the 1000w bios. I wonder if it will do the same when the card is shunted? It only does it in Cyberpunk and Control not any other games.


Shunting will not make a difference. Power draw = power draw, shunted or not. You need a better PSU.
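One way to see why: a shunt mod only changes what the card *senses*, not what leaves the PSU. A rough sketch of the idea, assuming a hypothetical 5 mOhm stock sense shunt (the 15 mOhm resistor is from mardon's post; the stock value varies by board and is an assumption here):

```python
# Shunt-mod model: soldering a resistor in parallel with the current-sense
# shunt lowers the sensed voltage, so the card *reports* only a fraction of
# the true power. The PSU still supplies the full amount, so OCP trips anyway.
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005   # ohms, hypothetical stock sense shunt (assumption)
R_MOD = 0.015     # ohms, the 15 mOhm resistor from the post

scale = parallel(R_STOCK, R_MOD) / R_STOCK   # fraction of power reported
actual = 450.0                               # W actually drawn from the PSU
reported = actual * scale                    # what the card's limiter sees

print(round(scale, 3), round(reported, 1))   # 0.75 337.5
```

So with these example values the card thinks it is pulling ~337W while the PSU still sees the full 450W, which is why the shut-offs would continue.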


----------



## Beagle Box

gfunkernaught said:


> I found this which peaked my DIY interest. Maybe it is possible to connect the temp sensor to this thing? I see it is a type-c connector, but that can easily be wired to a USB header.
> View attachment 2478534


If you're good at programming and want to write some drivers and an interface, then sure. Otherwise, you should probably just pay the $45 for the Quadro, something known to actually work.


----------



## gfunkernaught

ViRuS2k said:


> Store – MP5WORKS
> 
> 
> 
> 
> 
> 
> 
> mp5works.com
> 
> 
> 
> 
> 
> Just got myself the serial kit, grab em while there hot guys there back in stock  but not for long lol


Serial, huh... so in theory, if that block is in series with the GPU block, it will probably make temps worse, no? If this thing picks up the heat from the back and then feeds that warm water into the GPU block, that would hurt performance.


----------



## gfunkernaught

Beagle Box said:


> Quadro


It was just a thought. 

Can HWiNFO read data from any module, like the Quadro, Corsair Commander, and others?


----------



## J7SC

jomama22 said:


> [Official] NVIDIA RTX 3090 Owner's Club
> 
> 
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> (...)
> You're on the v2. Check link above. You would need to keep an eye on mvddc to see what you memory is pulling. If it's bouncing off 90w or so, you are being throttled and the memory clock will just be reduced behind the scenes.


...seems like my Strix OC's MVDDC is not limited to 90W; it shows over 158W per below.

I'm still trying to deal with base-system-device issues in a brand-new Win 10 Pro install, including PCI; MEI and the correct chipset drivers are not solving it... I also moved the 3090 from PCIe slot 1 to slot 3 (both 16x PCIe 3.0), as the GPU POST on the Rampage Extreme / X99 would not show the card in slot 1 (though the card worked fine; selective boot issues, now solved with the move to slot 3).

This board has seen LN2 and DICE XOC, and I recall a bit of acetone bubbling onto it back in the day, but it has been working fine in other configs since then. I have much more modern stuff, but that's also being used for productivity and I really don't want to pull those systems apart... perhaps transfer it to the unused TR4 mobo here, but currently no extra TR chip for it... I have been holding out for a Ryzen 5950X / X570 combo... you folks all know the current market craziness...


----------



## KedarWolf

J7SC said:


> ...seems like my Strix OC's mvddc is not limited to 90 W, shows over 158 W per below
> 
> I'm still trying to deal with base system device issues in a brand new Win 10 Pro install, including PCI. MEI and correct chipset drivers are not solving it...I also moved the 3090 from PCIe slot 1 to 3 (both16x PCIe 3.0) as the GPU post in the Rampage Extreme / X99 would not show the card in slot 1 (though the card worked fine, but selective boot issues, now solved with that move to slot 3).
> 
> This board has seen LN2 and DICE XOC and I recall a bit of acetone bubbling onto it back in the day, but it has been working fine in other configs since then. I have much more modern stuff, but that's also being used for productivity and I really don't want to pull those system apart...perhaps transfer it to unused TR4 mobo here, but currently no extra TR chip for it...have been holding out for a Ryzen 5950X /X570 combo..you folks all know the current market craziness...
> 
> View attachment 2478562


Instead of X570 for a motherboard, check out the MSI MEG B550 Unify-X: monstrous VRMs, the best you can get on a Ryzen board. It's dual-DIMM rather than four-DIMM, which is what you want; X570 and B550 use a daisy-chain memory topology, so two DIMMs overclock better. You can still run 2x Gen 4.0 M.2s.

Last I heard it holds the world memory overclocking record.

And it's only 299 USD.

Just sacrifice a goat to the gods so you don't get one with coil whine, if noise is an issue for you.

I got one on pre-order from B&H Photo, supposed to ship mid-Feb.

Buildzoid Unify-X Breakdown


----------



## Shark00n

For a watercooled PNY 3090 Uprising (this one @ techpowerup)
Could I flash the vBIOS for the PNY 3090 XLR8 for +15W TDP increase? (this one @ techpowerup)

Or is there any more powerful vBIOS for reference PCB 3090s? (2x8pin)
PNY is pretty 'basic' but quality control looks killer and I'm having great core and mem junction temps with my setup.

Thanks!

P.S.
Where's the 'Search this thread' option? Can't find it anywhere.


----------



## changboy

J7SC said:


> ...seems like my Strix OC's mvddc is not limited to 90 W, shows over 158 W per below
> 
> I'm still trying to deal with base system device issues in a brand new Win 10 Pro install, including PCI. MEI and correct chipset drivers are not solving it...I also moved the 3090 from PCIe slot 1 to 3 (both 16x PCIe 3.0) as the GPU post in the Rampage Extreme / X99 would not show the card in slot 1 (though the card worked fine, but selective boot issues, now solved with that move to slot 3).
> 
> This board has seen LN2 and DICE XOC and I recall a bit of acetone bubbling onto it back in the day, but it has been working fine in other configs since then. I have much more modern stuff, but that's also being used for productivity and I really don't want to pull those system apart...perhaps transfer it to unused TR4 mobo here, but currently no extra TR chip for it...have been holding out for a Ryzen 5950X /X570 combo..you folks all know the current market craziness...
> 
> View attachment 2478562


This is my power draw in gpuz after gaming 1 hour on battelfield 1 :


----------



## EarlZ

Does the Asus ROG Strix 3090 require a thermal pad replacement to keep the GDDR6X temps in check, preferably under 90c at max gaming loads?


----------



## geriatricpollywog

Luumi destroyed the OC Lab tuned Galax cards. Long live the K|NG.









I scored 18 710 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## J7SC

0451 said:


> Luumi destroyed the OC Lab tuned Galax cards. Long live the K|NG.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 18 710 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Time to get some popcorn !

EDIT - FINALLY solved the Win 10 Device Manager issue above...thanks to guru3D...turns out that the driver Asus has for Win 10 download is problematic


----------



## changboy

J7SC said:


> Time to get some popcorn !
> 
> EDIT - FINALLY solved the Win 10 Device Manager issue above...thanks to guru3D...turns out that the driver Asus has for Win 10 download is problematic


Which driver from Asus is problematic?


----------



## J7SC

changboy said:


> Which driver from Asus is problematic?


...finally had a bit more time tonight to follow down some rabbit holes...found a few posts by folks with the same Rampage Ext X99 'base device' Win 10 Pro issue; one post had a link to Intel software (non-Asus) which pulled the official Asus set (which was the latest on their site for Win 10 64) and corrected that problem at least...still have to look at slot 1, though.


----------



## changboy

J7SC said:


> ...finally had a bit more time tonight to follow down some rabbit holes...found a few posts by folks with the same Rampage Ext X99 'base device' Win 10 Pro issue; one post had a link to Intel software (non-Asus) which pulled the official Asus set (which was the latest on their site for Win 10 64) and corrected that problem at least...still have to look at slot 1, though.


You can take a look here : RE: [Guide] X99: SATA controllers and drivers - 3


----------



## motivman

why does my memory suck so bad? will putting a waterblock on the backplate improve my memory overclocking? I need 15.8k in PR!!!!!!









I scored 15 734 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Zogge

gfunkernaught said:


> It was just a thought.
> 
> can HWInfo read data from any module like the Quadro, Corsair Commander, and others?


I have a full system of aquasuite and aquacomputer products. Yes hwinfo can be integrated there.
For commander pro I do not think so.


----------



## geriatricpollywog

motivman said:


> why does my memory suck so bad? will putting a waterblock on the backplate improve my memory overclocking? I need 15.8k in PR!!!!!!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 734 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


For those temps I’d expect both core and mem to be higher.


----------



## Zogge

EarlZ said:


> Does the Asus ROG Strix 3090 require a thermal pad replacement to keep the GDDR6X temps in check, preferably under 90c at max gaming loads?


I watercool mine with standard pads from bykski.

After 2h bf5 with +105/+1100 with 520w KP bios. 

Memory Tjunction 58c
Backplate temp 39c
Core 40c
VRM 43c


----------



## J7SC

changboy said:


> You can take a look here : RE: [Guide] X99: SATA controllers and drivers - 3


Tx, but it's fixed per the above post


EarlZ said:


> Does the Asus ROG Strix 3090 require a thermal pad replacement to keep the GDDR6X temps in check, preferably under 90c at max gaming loads?


...80 C for the memory would make me nervous already, never mind 90 C. I just got my Strix two weeks ago, so still stock air-cooled (w-cooling soon). I tried out a few fan combos and settled on 2x Arctic P12 pst blowing on the backplate with great results


----------



## mardon

motivman said:


> shunting will not make a difference. power draw = power draw, shunted or not. You need a better PSU.


There isn't one in SFF. I've literally got the highest wattage SFX you can buy @ 850w unless I'm missing something.


----------



## mardon

Shark00n said:


> For a watercooled PNY 3090 Uprising (this one @ techpowerup)
> Could I flash the vBIOS for the PNY 3090 XLR8 for +15W TDP increase? (this one @ techpowerup)
> 
> Or is there any more powerful vBIOS for reference PCB 3090s? (2x8pin)
> PNY is pretty 'basic' but quality control looks killer and I'm having great core and mem junction temps with my setup.
> 
> Thanks!
> 
> P.S.
> Where's the 'Search this thread' option? Can't find it anywhere.


You can get a Galax 390w BIOS for that card.


----------



## EarlZ

J7SC said:


> Tx, but it's fixed  per above post
> 
> 
> ...80 C for the memory would make me nervous already, never mind 90 C. I just got my Strix two weeks ago, so still stock air-cooled (w-cooling soon). I tried out a few fan combos and settled on 2x Arctic P12 pst blowing on the backplate with great results


What is your VRAM tjunction temp in HWinfo64, and can you share a photo of how you've got this set up?


----------



## motivman

mardon said:


> There isn't one in SFF. I've literally got the highest wattage SFX you can buy @ 850w unless I'm missing something.


Before I upgraded, my EVGA 850G2 could handle my shunt modded 3090 just fine. It even pulled up to 900W one time without shutting the entire PC down. I "upgraded" to the Corsair HX1200i and had issues with the 3090 randomly losing video signal, where I had to restart the PC to get the signal back. I then decided to try the EVGA 1200P2, and have no more signal loss issues with my 3090. I really thought my 3090 was dying... but it turns out it was the power supply. It is possible you have a defective PSU; maybe try another PSU (same model) and test to see if the issue still persists.
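(For anyone wondering why a shunt-modded card murders PSUs: stacking a resistor in parallel with the stock shunt makes the controller under-read current, so the real draw scales by R_stock / R_parallel. A quick back-of-envelope sketch; the 5 mOhm values are illustrative, not measured from any specific card.)

```python
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def actual_power(reported_w, r_stock, r_added):
    """Real draw of a shunt-modded card.

    The controller still divides the sensed shunt voltage by the stock
    resistance, so it under-reads by the factor r_stock / r_combined.
    """
    return reported_w * r_stock / parallel(r_stock, r_added)

# Stacking an identical shunt on top of the stock one halves the sensed
# resistance, so the card really pulls double what software reports:
print(actual_power(450, 5.0, 5.0))  # -> 900.0
```

So a card "reporting" 450 W under a 2x shunt mod is actually pulling around 900 W from the PSU, which is exactly the transient territory where marginal units trip their protection.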


----------



## motivman

0451 said:


> For those temps I’d expect both core and mem to be higher.


The problem is that my memory gets more unstable the colder it gets. For example, at normal ambient temps I can run +1000 on the memory, but as it gets colder and my clock speeds get higher, the memory becomes more unstable. The highest memory offset I can run at 2280MHz is +875; anything higher, even +880, insta-crashes. I guess 15.7k is the highest I can push this reference PCB.. oh well.


----------



## mardon

motivman said:


> Before I upgraded, my EVGA 850G2 could handle my shunt modded 3090 just fine. It even pulled up to 900W one time without shutting the entire PC down. I "upgraded" to the Corsair HX1200i and had issues with the 3090 randomly losing video signal, where I had to restart the PC to get the signal back. I then decided to try the EVGA 1200P2, and have no more signal loss issues with my 3090. I really thought my 3090 was dying... but it turns out it was the power supply. It is possible you have a defective PSU; maybe try another PSU (same model) and test to see if the issue still persists.


I have tried a Corsair SF750 and it did the same thing, so I upgraded to the TF850 and was gutted to see it doing it too. Apparently the TF850 is very similar internally, so it's probably the same weakness. I really don't want to move away from my Ncase M1.

I'm hoping I can find a balance of around 450w which won't shut down. I didn't play with the 1000w bios for that long.


----------



## Rhadamanthys

mardon said:


> You can get a Galax 390w Bios foe that card.


Are you sure it's a Galax, not a Gigabyte?


----------



## mardon

Rhadamanthys said:


> Are you sure it's a Galax, not a Gigabyte?


For your card you want this one, as it's the same reference board and won't mess with your display outputs.









KFA2 RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## zayd

I have finally joined the 3090 club. I managed to get an MSI Suprim X and, as you can all imagine, the first thing I did after installing was to commence several benchmarks. I already knew it was going to be a significant upgrade from my 5700XT Red Devil, but damn this GPU is a beast. The only intensive task I do with my PC is play FPS games, mainly Warzone at the present time. I'm a little worried that my CPU may struggle to keep up with the GPU, being an i7-9700K overclocked to 5GHz @ 1.37 Vcore, 4.6GHz uncore, no AVX offset and all Multicore Enhancements disabled.

In HWinfo, the CPU is averaging 75% load whilst playing Warzone, with temperatures in the upper 70s C. It must be doing OK, as I'm getting around 170 FPS average with all settings at high on a 1440p 170Hz monitor. I'm just concerned that maybe the CPU is working too hard and whether I would see any tangible benefits in upgrading to the only candidate for my Z390 platform, an i9-9900K. Many reviews have stated the lesser i7-9700K is better for gaming, as hyperthreading is currently not used in most games.

I'll post some of my benchmark results soon, but notably at stock clocks on the GPU I got 16,900 in Time Spy (8,450 CPU score and 20,800 GPU score).

I will be looking to upgrade my entire platform in 2022, once AMD introduce Zen 4 with their proposed 5nm advanced node, multi-chiplet CPUs. The new Zen 4 boards will purportedly also support PCIe 5.0 and DDR5. So maybe my current rig will suffice until then.

My system is:

Gigabyte Z390 Aorus Ultra (BIOS F10h, Resizable BAR activated, and it worked with my RX 5700XT 😳)
i7-9700K (Corsair H80i GT CPU cooler, 4x Corsair AF140 case fans)
MSI RTX 3090 Suprim X
Corsair Vengeance 32GB 3200MHz RAM
Samsung 980 Pro 1TB
All housed inside my Corsair Carbide 300R (fantastic mid-size case that can take GPUs up to 450mm in length!)
Asus ROG Strix XG279Q monitor - WQHD 2560x1440 - 170Hz


----------



## mardon

zayd said:


> I have finally joined the 3090 club. I managed to get an MSI Suprim X and as you can all imagine, the first thing I did after installing was to commence several benchmarks. I already knew it was going to be a significant upgrade from my 5700XT Red Devil, but damn this GPU is a beast. The only intensive task I do with my PC is to play FPS games, with that mainly being Warzone at the present time. I'm a little worried that my CPU may struggle to keep up with the GPU, being an i7-9700K, overclocked to 5 ghz @ 1.37 Vcore, 4.6 ghz Uncore, No AVX offset and all Multicore Enhancements disabled.
> 
> In HWinfo, the CPU is averaging 75% load whilst playing Warzone, temperatures are in the upper 70c. It must be doing ok, as I'm getting around 170 FPS average with all settings at high, transmitting to a 1440p 170hz monitor. I'm just concerned that maybe the CPU is working too hard and if I will see any tangible benefits in upgrading to the only candidate for my Z390 platform, an i9-9900K. Many reviews have stated the lesser, i7-9700k is better for gaming, as hyperthreading is not being currently used in most games.
> 
> I'll post some of my benchmark results soon, but notably at stock clocks on the GPU, I got 16,900 in Timespy, (8450 CPU score and 20,800 GPU score)
> 
> I will be looking to upgrade my entire platform in 2022, once AMD introduce Zen 4 with their proposed 5nm Advanced node, multi chiplet CPU's. The new Zen 4 boards will purportedly also support PCIe 5.0 and DDR5. So maybe my current rig will suffice until then.
> 
> My system is:
> 
> Gigabyte Z390 Aorus Ultra (BIOS F10h, Resizable BAR activated, and it worked with my RX 5700XT 😳)
> i7-9700K (Corsair H80i GT CPU cooler, 4x Corsair AF140 case fans)
> MSI RTX 3090 Suprim X
> Corsair Vengeance 32GB 3200MHz RAM
> Samsung 980 Pro 1TB
> All housed inside my Corsair Carbide 300R (fantastic mid-size case that can take GPUs up to 450mm in length!)
> Asus ROG Strix XG279Q monitor - WQHD 2560x1440 - 170Hz


If you turn on your latency indicator in Warzone, report back what it's sitting at. My guess is 6/7ms.
Despite what the tech tubers will tell you, 3200MHz RAM is a bottleneck in Warzone. Get some Patriot Viper Samsung B-die at 4000MHz+ and you'll gain between 10 and 20fps.


----------



## Shark00n

mardon said:


> For your card you want this one, as it's the same reference board and won't mess with your display outputs.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> KFA2 RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com


Thanks mate!


----------



## zayd

mardon said:


> If you turn on your latency indicator in Warzone, report back what it's sitting at. My guess is 6/7ms.
> Despite what the tech tubers will tell you, 3200MHz RAM is a bottleneck in Warzone. Get some Patriot Viper Samsung B-die at 4000MHz+ and you'll gain between 10 and 20fps.


You were correct, it does indicate 6/7ms CPU latency. The GPU sits at 5ms.


----------



## gfunkernaught

Can someone help me figure out where to put the water temp sensor? I'm thinking of putting it directly at the outlet of my pump/res. Figured that would be the best place to see the true delta between water temp and GPU/CPU temps. Does that sound right? It is an inline sensor.


----------



## J7SC

EarlZ said:


> What is your VRAM tjunction temp in HWinfo64, and can you share a photo of how you've got this set up?


...here you go:


Spoiler



...pic 1 - setup still needs some cabling attention, twin Arctic P12 pst about 1.5 inches from the Strix OC backplate
...pic 2 - HWInfo tables per earlier post...idle, then 400 W (out of 500 W possible) load. Ambient temp was around 18 C - 19 C


----------



## changboy

gfunkernaught said:


> Can someone help me figure out where to put the water temp sensor? I'm thinking of putting it direct to the outlet of my pump/res. Figured that would be the best place to see true delta between water temp and gpu/cpu temps. Does that sound right? It is an inline sensor.


I have 2 sensors at different points and they track each other within 2c-3c, so the temp of your loop is a global temp; you can put it wherever it's easiest for you


----------



## gfunkernaught

changboy said:


> I have 2 sensors at different points and they track each other within 2c-3c, so the temp of your loop is a global temp; you can put it wherever it's easiest for you
> View attachment 2478667


Right, I keep forgetting about the equalizing of the temp throughout the loop. The sensor won't be hooked up to the PC; I run my fans and pump based on GPU temp. Just ordered an LCD display to use with the sensor that should be coming soon. Thanks.


----------



## s14det

Anyone know why my MSI RTX 3090 Trio (on water with a watercooled backplate) will not take the 520w bios? I can check in GPU-Z and it shows 520w, but in RTX Quake it limits power to around 490-500w. I just loaded the 1000w bios and limited it to 550w (same GPU/mem overclock as before) and RTX Quake hovered right around 540-550w. I reloaded the 520w bios and after a few reboots it shows back at 520w in GPU-Z, but the max it will pull is still 500w. The 500w and 520w bios behave the same on this card for some reason. I have also tried reloading the 520w bios many times with many reboots and it still will not allow over 500w. Just really confused about what is going on.


----------



## ALSTER868

Tried today to reseat my waterblock and to change the thermal pads that came with the block to Arctic ones.
After installing the card back in its place there is no boot (no signal on the monitor) and a 97 debug error code on the mobo: 'Console output devices connect'.
The card is seated properly in the slot, and the PCIe connectors seem to be OK.
Plugged and unplugged the connectors several times; once, after trying to start the system, the red LEDs on the connectors were blinking, with the same 97 error code.
What can be the problem? Any ideas where to look?


----------



## mardon

zayd said:


> You were correct it does indicate 6/7 ms cpu latency. The GPU sits at 5ms.


Defo go the B-die route. My 10900k sits at 4/5ms with 4300MHz CL16 2x16GB. I get between 180/195fps @ 3440x1440 in Warzone.
My 3090 is @ 2100MHz core, +249MHz RAM.


----------



## Falkentyne

s14det said:


> Anyone know why my MSI RTX 3090 Trio (on water with a watercooled backplate) will not take the 520w bios? I can check in GPU-Z and it shows 520w, but in RTX Quake it limits power to around 490-500w. I just loaded the 1000w bios and limited it to 550w (same GPU/mem overclock as before) and RTX Quake hovered right around 540-550w. I reloaded the 520w bios and after a few reboots it shows back at 520w in GPU-Z, but the max it will pull is still 500w. The 500w and 520w bios behave the same on this card for some reason. I have also tried reloading the 520w bios many times with many reboots and it still will not allow over 500w. Just really confused about what is going on.


MSVDD current/power limits.
The 1000W Bios has these limits set extremely high. You can't do a thing about this without changing loadline calibration or MSVDD voltage. The only exception seems to be a Strix.


----------



## changboy

What are high and normal MSVDD values?


----------



## Beagle Box

ALSTER868 said:


> Tried today to reseat my waterblock and to change the thermal pads that came with the block to Arctic ones.
> After installing the card back in its place there is no boot (no signal on the monitor) and a 97 debug error code on the mobo: 'Console output devices connect'.
> The card is seated properly in the slot, and the PCIe connectors seem to be OK.
> Plugged and unplugged the connectors several times; once, after trying to start the system, the red LEDs on the connectors were blinking, with the same 97 error code.
> What can be the problem? Any ideas where to look?


Is it possible the new pads are so thick that they are preventing proper waterblock contact with the GPU die?


----------



## Thanh Nguyen

Is it normal for the memory junction temp to be at 94c when 620w is on the card? I may repad it.


----------



## changboy

Thanh Nguyen said:


> Is it normal for the memory junction temp to be at 94c when 620w is on the card? I may repad it.


Hehehe, I believe it's a joke 😂


----------



## Falkentyne

changboy said:


> What is a high and a normal MSVDD value ?


No one knows. These are not shown in any bios editor. What is known is that the Strix seems to have these limits set much higher than reference boards.


----------



## dilster97

Falkentyne said:


> These are not shown in any bios editor.


Does an Ampere BIOS editor exist? Couldn't find any results while googling and I'm fairly sure there wasn't a public Pascal or Turing BIOS editor. So it seems a bit odd to me that an Ampere BIOS editor would exist.


----------



## gfunkernaught

s14det said:


> Anyone know why my MSI RTX 3090 Trio (on water with a watercooled backplate) will not take the 520w bios? I can check in GPU-Z and it shows 520w, but in RTX Quake it limits power to around 490-500w. I just loaded the 1000w bios and limited it to 550w (same GPU/mem overclock as before) and RTX Quake hovered right around 540-550w. I reloaded the 520w bios and after a few reboots it shows back at 520w in GPU-Z, but the max it will pull is still 500w. The 500w and 520w bios behave the same on this card for some reason. I have also tried reloading the 520w bios many times with many reboots and it still will not allow over 500w. Just really confused about what is going on.


Try setting the power management mode of the Quake II RTX profile in the NVIDIA Control Panel to 'Prefer maximum performance'.


----------



## ALSTER868

Beagle Box said:


> Is it possible the new pads are so thick that they are preventing proper waterblock contact with the GPU die?


No, they're actually the same size as recommended. And I checked the mounting pressure on the chip and memory/VRMs before finally assembling the card.
I thought maybe it's shorting somewhere. If the LED connectors are blinking red (they were once), what does that mean?


----------



## Falkentyne

dilster97 said:


> Does an Ampere BIOS editor exist? Couldn't find any results while googling and I'm fairly sure there wasn't a public Pascal or Turing BIOS editor. So it seems a bit odd to me that an Ampere BIOS editor would exist.


Rename from TXT to ZIP then extract.

It won't open the Strix 1000W bios posted by Elmor this morning, but it will open the Strix 999W bios (no, this is not the Kingpin bios) which was also posted. There seem to be two XOC Bioses for the Strix.
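(The rename-and-extract step Falkentyne describes can be scripted; a minimal standard-library sketch. The `ABE008.txt` name below is just the example attachment mentioned later in the thread; any forum-attached .txt whose bytes are really a zip works the same way.)

```python
import shutil
import zipfile

def extract_txt_as_zip(txt_path, dest_dir="."):
    """Copy a forum-attached .txt back to .zip and extract it.

    Forum file filters force the .txt extension, but the bytes are a
    zip archive. Returns the list of extracted member names.
    """
    zip_path = txt_path.rsplit(".", 1)[0] + ".zip"
    shutil.copyfile(txt_path, zip_path)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()

# e.g. extract_txt_as_zip("ABE008.txt")
```

Same caveat as doing it by hand: if the file isn't actually a zip (or is a mislabeled attachment), `zipfile.BadZipFile` is raised instead of extracting garbage.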


----------



## marti69

Falkentyne said:


> Rename from TXT to ZIP then extract.
> 
> It won't open the Strix 1000W bios posted by Elmor this morning, but it will open the Strix 999W bios (no, this is not the Kingpin bios) which was also posted. There seem to be two XOC Bioses for the Strix.


Strix XOC bios posted? I can't find them? Where???


----------



## Beagle Box

Falkentyne said:


> Rename from TXT to ZIP then extract.
> 
> It won't open the Strix 1000W bios posted by Elmor this morning, but it will open the Strix 999W bios (no, this is not the Kingpin bios) which was also posted. There seem to be two XOC Bioses for the Strix.


Thank you for this.
There are Strix XOC BIOSs? Where are these posted?


----------



## SoldierRBT

s14det said:


> Anyone know why my MSI RTX 3090 Trio (on water with a watercooled backplate) will not take the 520w bios? I can check in GPU-Z and it shows 520w, but in RTX Quake it limits power to around 490-500w. I just loaded the 1000w bios and limited it to 550w (same GPU/mem overclock as before) and RTX Quake hovered right around 540-550w. I reloaded the 520w bios and after a few reboots it shows back at 520w in GPU-Z, but the max it will pull is still 500w. The 500w and 520w bios behave the same on this card for some reason. I have also tried reloading the 520w bios many times with many reboots and it still will not allow over 500w. Just really confused about what is going on.


The 520W KPE BIOS seems to have a limit of around 500W (with and without OC). I believe it reserves the extra 20W for the pump and fans. This can be solved with NVVDD/MSVDD tuning on the KPE. Maybe the 520W BIOS for the KPE HC won't have this problem. On the 1000W BIOS at stock settings, it'd pull 600W and up to 700W with OC in RTX Quake.


----------



## gfunkernaught

SoldierRBT said:


> The 520W KPE BIOS seems to have a limit of around 500W (with and without OC). I believe it reserves the extra 20W for the pump and fans. This can be solved with NVVDD/MSVDD tuning on the KPE. Maybe the 520W BIOS for the KPE HC won't have this problem. On the 1000W BIOS at stock settings, it'd pull 600W and up to 700W with OC in RTX Quake.


What will/can an extra 20w net you other than heat?


----------



## Zogge

Beagle Box said:


> Thank you for this.
> There are Strix XOC BIOSs? Where are these posted?


Virus warning (trojan) from Windows on that file you posted. Anyone else get this?


----------



## Beagle Box

Zogge said:


> Virus warning (trojan) from Windows on that file you posted. Anyone else get this?


On a file I posted? I posted no files.
I have no windows warnings on any files.


----------



## Zogge

The one Falkentyne posted. Did you download the ABE008 txt file? No warnings? Trojan:Win32/Wacatac


----------



## Beagle Box

Zogge said:


> The one Falkentyne posted. Did you download the ABE008 txt file? No warnings? Trojan:Win32/Wacatac


I got no warnings, but that doesn't mean there isn't something there. Yikes!


----------



## Zogge

Post the zip file you downloaded here, I can check it for viruses.


----------



## geriatricpollywog

SoldierRBT said:


> The 520W KPE BIOS seems to have a limit to around 500W (with and without OC). I believe it reserves the extra 20W to the pump and fans. This can be solved with NVVDD/MSSVVD tuning on the KPE. Maybe the 520W for the KPE HC won't have this problem. On the 1000W BIOS at stock settings, It'd pull 600W and up to 700W with OC on RTX Quake.


I think it depends on the application. I use the 520w bios exclusively on my Kingpin and the OLED reports 510-530w in Quake II RTX.


----------



## Zogge

Falkentyne said:


> Rename from TXT to ZIP then extract.
> 
> It won't open the Strix 1000W bios posted by Elmor this morning, but it will open the Strix 999W bios (no, this is not the Kingpin bios) which was also posted. There seem to be two XOC Bioses for the Strix.


Links please...


----------



## DarkHollow

So I've got an MSI 3090 Gaming X Trio which of course, the RAM is hot as ****. I had already planned on a waterblock which is why I preordered the Alphacool block/backplate combo for it.

I've been mining on it because, well... the plan was for a 3080 not a 3090 so I need to make a bunch of that back. During mining temps have been apparently (according to HWInfo's new GPU Memory Junction Temperature functionality) reaching 100+C. I decided I should fix that so I bought a cheap RAM waterblock from China ($20) and mounted it to the backplate like I've seen some others here do.

I finally got around to taking apart my card yesterday to install the waterblock, modify the backplate, and add the RAM waterblock to it. Decided while I was at it that I should make a light bar diffuser, since the factory one is built in to the backplate.


Looks like after ~12 hours or so of mining, the max RAM temp is 88C. GPU core is running 43C while mining. It's not perfect; I could have gotten a larger RAM block or a different block, but overall I'm happy with it. Gaming so far it's 61C core and similar temps on the memory. Haven't tried to push it any harder yet.

Also, I checked the power stages; they are 45A on my particular card. Does this mean MSI downgraded them from the initial review sample? Are they dual-sourcing? I dunno, others would need to check their cards so we can determine that.


----------



## Beagle Box

Zogge said:


> Post the zip file you downloaded here, I can check it for viruses.


It was on my other pc and I deleted it before opening it. Defender saw nothing, but I trashed it anyway.


----------



## J7SC

Falkentyne said:


> MSVDD current/power limits.
> The 1000W Bios has these limits set extremely high. You can't do a thing about this without changing loadline calibration or MSVDD voltage. The only exception seems to be a Strix.


I need some feedback from you folks on Strix 3090 OC mvddc. As established before, it is on 'v2' stock bios and air-cooled, but does have helper fans for the back-VRAM and temps are ok. Mvddc is NOT bouncing off a 90 W limit, but now I'm wondering about the other direction - what the max safe value for peak watts is for mvddc ? While I have oc'ed GPUs for eons, Ampere & GDDR6X are new to me, and feedback re. below is very welcome.

On my Strix, mvddc is typically in the 140 W - 155 W range, but per setup run for Superposition 8K below, it can hit over 173 W when at or near max oc. Tx


----------



## zayd

mardon said:


> Defo go the Bdie route. My 10900k sits at 4/5 m/s with 4300mhz CL16 2x16Gb. I get between 180/195fps @ 3440x1440 in warzone.
> My 3090 is @ 2100mhz +249mhz RAM.


Cheers for the advice everyone. I managed to bag a brand new i9-9900K for a good price. This should give me a performance uplift with this GPU.


----------



## gfunkernaught

When you guys mount ram heatsinks on the backplate, do you use thermal tape or paste? Do you also put a fan to blow on the heatsinks?


----------



## Falkentyne

Zogge said:


> Post the zip file you downloaded here, I can check it for viruses.


Can't post zip files on this forum anymore. And of course the editor/viewer is going to get reported as a trojan. It's a BIOS hacking tool; you guys should know by now what hacking tools do to antivirus algorithms.
Malwarebytes reports it as clean. Stop worrying. We've been using these editors for months.
You just can't flash anything you change due to HULK certification protection, so writing / editing is disabled.

Strix bioses. Apparently XOC and XOC2. I know nothing about these.

RENAME to ROM.
I take NO responsibility if it doesn't work.


----------



## Falkentyne

J7SC said:


> I need some feedback from you folks on Strix 3090 OC mvddc. As established before, it is on 'v2' stock bios and air-cooled, but does have helper fans for the back-VRAM and temps are ok. Mvddc is NOT bouncing off a 90 W limit, but now I'm wondering about the other direction - what the max safe value for peak watts is for mvddc ? While I have oc'ed GPUs for eons, Ampere & GDDR6X are new to me, and feedback re. below is very welcome.
> 
> On my Strix, mvddc is typically in the 140 W - 155 W range, but per setup run for Superposition 8K below, it can hit over 173 W when at or near max oc. Tx
> 
> View attachment 2478704


MSVDD has nothing to do with MVDDC. MSVDD = uncore power rail (SRAM voltage in HWinfo64).
MVDDC = GDDR6X memory power.
Apparently, a TDP Normalized % power limit gets triggered if MSVDD voltage / current drops too low for the requested core clocks, causing effective clocks to drop too low. There is also an NVVDD rail. People with the Classified tool can increase MSVDD, which brings the effective clocks back up and appears to circumvent this.


----------



## SoldierRBT

gfunkernaught said:


> What will/can an extra 20w net you other than heat?


The overall clocks are more consistent. 



0451 said:


> I think it depends on the application. I use the 520w bios exclusively on my Kingpin and the oled reports 510-530w in Quake II RTX.


I wouldn't trust the OLED readings. They aren't accurate and report higher than actual. Luumi confirmed this in his latest KPE OC video, where the OLED was reporting 1.33-1.36v NVVDD while the multimeter was reporting a steady 1.30v NVVDD. In RTX Quake it also reports 515-524W, but GPU-Z and HWINFO64 report 500-508W.


----------



## J7SC

double


----------



## J7SC

J7SC said:


> Thanks  ok, so mvddc (GDDR6X) at 173 W or so per above is nothing to worry about then, also as I do not see any throttling or score drop. Is there a utility to monitor MSVDD ? Don't seem to be able to find it in GPUz and/or HWInfo
> 
> edit...found it in HWInfo


----------



## gfunkernaught

SoldierRBT said:


> The overall clocks are more consistent.


So is that a problem with the way the Trio handles the bios?


----------



## Beagle Box

J7SC said:


> I need some feedback from you folks on Strix 3090 OC mvddc. As established before, it is on 'v2' stock bios and air-cooled, but does have helper fans for the back-VRAM and temps are ok. Mvddc is NOT bouncing off a 90 W limit, but now I'm wondering about the other direction - what the max safe value for peak watts is for mvddc ? While I have oc'ed GPUs for eons, Ampere & GDDR6X are new to me, and feedback re. below is very welcome.
> 
> On my Strix, mvddc is typically in the 140 W - 155 W range, but per setup run for Superposition 8K below, it can hit over 173 W when at or near max oc. Tx
> 
> View attachment 2478704


I really think ASUS screwed this up. 
My older ASUS Strix OC BIOS MVDDC maxes at 115W when running Superposition 8KO.
I don't think it ever actually needs to go over 115W. 
Just a reporting error?


----------



## mirkendargen

Falkentyne said:


> Can't post zip files on this forum anymore. And of course the editor/viewer is going to get reported as a trojan. It's a BIOS hacking tool. You guys should know by now what hacking tools do to antivirus algorithms.
> Malwarebytes reports it as clean. Stop worrying. We've been using these editors for months.
> You just can't flash anything you change due to HULK certification protection, so writing / editing is disabled.
> 
> Strix bioses. Apparently XOC and XOC2. I know nothing about these.
> 
> RENAME to ROM.
> I take NO responsibility if it doesn't work.


I can confirm these are for a Strix device ID, and the XOC2 one is a newer BIOS version than the 999 one but not quite as new as the latest Strix stock BIOS (latest stock BIOS is 94.02.42.00.A9, XOC2 is 94.02.38.00.26.)

Flashing it now...I'll report back!


----------



## mirkendargen

mirkendargen said:


> I can confirm these are for a Strix device ID, and the XOC2 one is a newer BIOS version than the 999 one but not quite as new as the latest Strix stock BIOS (latest stock BIOS is 94.02.42.00.A9, XOC2 is 94.02.38.00.26.)
> 
> Flashing it now...I'll report back!


Flash is a success, GPU downclocks at idle, memory does not. 3rd 8pin reports power correctly. I'm firing up 3dMark to see what 60% power limit draws and I'll go from there.


----------



## Falkentyne

mirkendargen said:


> Flash is a success, GPU downclocks at idle, memory does not. 3rd 8pin reports power correctly. I'm firing up 3dMark to see what 60% power limit draws and I'll go from there.


Maybe now people will stop accusing me of posting viruses.


----------



## gfunkernaught

Anyone else spending more time studying, researching, and benchmarking this card more than just playing games? Lol


----------



## Zogge

Falkentyne said:


> Maybe now people will stop accusing me of posting viruses.


Sorry and excuse my temporary stupidity.


----------



## mirkendargen

mirkendargen said:


> Flash is a success, GPU downclocks at idle, memory does not. 3rd 8pin reports power correctly. I'm firing up 3dMark to see what 60% power limit draws and I'll go from there.


Well good news and bad news.

Good news, the BIOS works and the power limit does indeed seem to be 1000W.

Bad news, I never drew more than this at 60%, 75% or 100% power limit and I had a PWR perfcap reason virtually 100% of a PR run, so something else is up. It's *possible* I need to do a clean install of my GPU drivers, but I haven't had this problem flashing between BIOSes previously.


----------



## Zogge

Thanks for the reports. I will try too tomorrow morning.


----------



## Falkentyne

mirkendargen said:


> Well good news and bad news.
> 
> Good news, the BIOS works and the power limit does indeed seem to be 1000W.
> 
> Bad news, I never drew more than this at 60%, 75% or 100% power limit and I had a PWR perfcap reason virtually 100% of a PR run, so something else is up. It's *possible* I need to do a clean install of my GPU drivers, but I haven't had this problem flashing between BIOSes previously.
> 
> View attachment 2478707


Can you kindly go to HWinfo64 and please post your TDP Normalized % and TDP % values? (not visible in GPU-Z).

Also, please try running Superposition 4k-->Custom-->Extreme shaders, and Timespy Extreme (assuming you have them) and post your throttle limit bar and TDP Normalized % values.


----------



## J7SC

Beagle Box said:


> I really think ASUS screwed this up.
> My older ASUS Strix OC BIOS MVDDC maxes at 115W when running Superposition 8KO.
> I don't think it ever actually needs to go over 15W.
> Just a reporting error?


...I am quite happy w/ stock v2 bios and the Strix's overall performance, at least while I'm on air. Once it is fully watercooled front and back, I might go for the Strix custom XOC...just in case, I decided to pull the old Corsair AX1200 and give the system a newer Antec HPC 1300 W PSU. I want to finish the cable routing / hiding and all that and might as well get set up for later...


----------



## Falkentyne

mirkendargen said:


> Well good news and bad news.
> 
> Good news, the BIOS works and the power limit does indeed seem to be 1000W.
> 
> Bad news, I never drew more than this at 60%, 75% or 100% power limit and I had a PWR perfcap reason virtually 100% of a PR run, so something else is up. It's *possible* I need to do a clean install of my GPU drivers, but I haven't had this problem flashing between BIOSes previously.
> 
> View attachment 2478707


The XOC2 bios can't be opened in the ABE v008 editor. Probably the limits are in the wrong location.
The XOC1 bios shows default TDP 800W, Max 999W, but the SRC power rails are still 150W default and 175W maximum, just as they are in the normal bioses.
As far as I know, the 8 pin wattage values per rail can NOT exceed the limits for the SRC. But I could be wrong.

The Kingpin 1000W Bios has the SRC set to 900W....


----------



## mirkendargen

Falkentyne said:


> Can you kindly go to HWinfo64 and please post your TDP Normalized % and TDP % values? (not visible in GPU-Z).
> 
> Also, please try running Superposition 4k-->Custom-->Extreme shaders, and Timespy Extreme (assuming you have them) and post your throttle limit bar and TDP Normalized % values.


Timespy Extreme GT2:

It *appears* that an 8pin can exceed SRC based on that....but dunno. I'll try the other BIOS.


----------



## Beagle Box

gfunkernaught said:


> Anyone else spending more time studying, researching, and benchmarking this card more than just playing games? Lol


Guilty as charged. 🥺


----------



## mardon

zayd said:


> Cheers for the advice everyone. I managed to bag a brand new i9-9900K for a good price. This should give me a performance uplift with this GPU.


The 9900K will help, but not as much as faster RAM in Warzone, trust me.


----------



## gfunkernaught

Beagle Box said:


> Guilty as charged. 🥺


They did say that the 3090 is not exactly a gaming card...._sigh_....


----------



## Falkentyne

mirkendargen said:


> Timespy Extreme GT2:
> 
> It *appears* that an 8pin can exceed SRC based on that....but dunno. I'll try the other BIOS.
> 
> View attachment 2478709
> 
> 
> View attachment 2478710


What is your TDP slider set to?
Max SRC is 175W. Default is 150W.

Were you getting throttling bars in GT2?
Your TDP Normalized is sky high. It looks like that bios doesn't have the internal rail limits maxed out like the Kingpin bios does as TDP Normalized can see every individual rail wired to the current sensing chip. Please note that the individual rail limits for the 8 pins are also part of Normalized. The individual 8 pin rail limits can exceed what is sets in BIOS for them, but from what I know, they can not exceed the max SRC value without hitting a normalized throttle limit. There are individual SRC rails for each 8 pin, that's why (all of them: default 150W, max: 175W).

I can't open the XOC2 one.

You can see the difference here.
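A toy model of the distinction described above, with all readings and limits illustrative rather than taken from a real BIOS dump:

```python
# TDP % compares total board power to the board power limit, while
# TDP Normalized % follows whichever single monitored rail is closest
# to ITS OWN limit. So one nearly-pegged 8-pin/SRC rail can throttle
# the card even when total power looks fine. Values are illustrative.
rails = {  # name: (reading_w, limit_w)
    "PCIE":   (40.0, 66.0),
    "8pin_1": (150.0, 175.0),
    "8pin_2": (174.0, 175.0),  # nearly pegged -> drives Normalized %
    "8pin_3": (140.0, 175.0),
}
board_limit_w = 1000.0
tdp_pct = 100 * sum(r for r, _ in rails.values()) / board_limit_w
norm_pct = 100 * max(r / lim for r, lim in rails.values())
print(f"TDP {tdp_pct:.1f}%, TDP Normalized {norm_pct:.1f}%")
```

With these made-up numbers the board is only at ~50% TDP, yet Normalized sits above 99% because one 8-pin rail is 1 W under its cap, which is exactly the kind of throttle a raised board limit alone can't fix.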


----------



## mirkendargen

Falkentyne said:


> What is your TDP slider set to?
> Max SRC is 175W. Default is 150W.
> 
> Were you getting throttling bars in GT2?
> Your TDP Normalized is sky high. It looks like that bios doesn't have the internal rail limits maxed out like the Kingpin bios does as TDP Normalized can see every individual rail wired to the current sensing chip. Please note that the individual rail limits for the 8 pins are also part of Normalized. The individual 8 pin rail limits can exceed what is sets in BIOS for them, but from what I know, they can not exceed the max SRC value without hitting a normalized throttle limit. There are individual SRC rails for each 8 pin, that's why (all of them: default 150W, max: 175W).
> 
> I can't open the XOC2 one.
> 
> You can see the difference here.
> 
> View attachment 2478713
> 
> 
> View attachment 2478714


It was set to 100% on that run; it was PWR limited everywhere you'd expect during GT2.

I tested the other one. It seems to behave exactly like a stock Strix BIOS (memory downclocks, TBP peaks at just over 480W, power limited for the bulk of a PR run), with the exception of reporting a lower TDP % all the time.


----------



## Exilon

DarkHollow said:


> Looks like temps after ~12 hours or so running mining the max RAM temp is 88C. GPU core is running 43C mining. It's not perfect, I could have gotten a larger RAM block or a different block but overall I'm happy with it. Gaming so far it's 61C core and similar temps on the memory. Haven't tried to push it any harder yet.


What's your coolant temperature? 61C is very high unless you're looking at >40C coolant temperature.
Alphacool's stock pads are not that good and also too thick on the back. I don't have any backplate cooling but my RAM temp is 70C mining after replacing front side pads with 17W/mK and backside pads with 1mm 6W/mK + 1mm copper shim on a coolant temperature of 32C


----------



## DarkHollow

Exilon said:


> What's your coolant temperature? 61C is very high unless you're looking at >40C coolant temperature.
> Alphacool's stock pads are not that good and also too thick on the back. I don't have any backplate cooling but my RAM temp is 70C mining after replacing front side pads with 17W/mK and backside pads with 1mm 6W/mK + 1mm copper shim on a coolant temperature of 32C


Yea that's probably about where my coolant is. Right now my PC is kinda forced to sit under my desk though that should be changing soon. It is what it is for now. I imagine if it wasn't underneath it would run 5 - 10 degrees cooler.


----------



## Falkentyne

mirkendargen said:


> It was set to 100% on that run, it was PWR limited everywhere you'd expect during GT2.
> 
> I tested the other one, It seems to behave exactly like a stock Strix BIOS (memory downclocks, TBP peaks at just over 480W, power limited the bulk of a PR run) with the exception of reporting a lower TDP % all the time.


What happens if you yeet the % slider?

So the XOC2 bios does allow more power than the first one, then?
Since I can't open it in ABE, I can't see the limits.


----------



## mirkendargen

Falkentyne said:


> What happens if you yeet the % slider?
> 
> So the XOC2 bios does allow more power than the first one, then?
> Since I can't open it in ABE, I can't see the limits.


This is correct, XOC2 is ~20W higher effective usage than the other one. I cranked the slider to 124% on the 999W one, still peaked at 484W.

I still honestly kinda like the XOC2 one and might use it if the memory downclocked. Higher power limit than stock while keeping all the outputs working.

Any sneaky private BIOSes like this I figure are on borrowed time though for when resizable BAR official BIOSes start rolling out later this month. Then it will be back to the KPE 520W that (we can pretty safely assume) will get updated.


----------



## J7SC

mirkendargen said:


> This is correct, XOC2 is ~20W higher effective usage than the other one. I cranked the slider to 124% on the 999W one, still peaked at 484W.
> 
> I still honestly kinda like the XOC2 one and might use it if the memory downclocked. Higher power limit than stock while keeping all the outputs working.
> 
> Any sneaky private BIOSes like this I figure are on borrowed time though for when resizable BAR official BIOSes start rolling out later this month. Then it will be back to the KPE 520W that (we can pretty safely assume) will get updated.


I would think Asus will update the stock Strix bios as well for resizable BAR. Also, if I understand your posts above correctly on the "Strix XOC/2", there really isn't any peak gain at all? I regularly pull 490 W - 504 W on the stock one.

...got the Antec 1300 W hooked up...its PCIe cables are very heavy gauge, even tough to bend...and the Strix's recessed 3x 8 Pin didn't make that switch-over any easier...I hate 8 pin PCIe almost as much as the old Molex...


----------



## mirkendargen

J7SC said:


> I would think Asus will update the stock Strix bios as well for resizable BAR. Also, if I understand your posts above correctly on the "Strix XOC/2", there really isn't any peak gain at all? I regularly pull 490 W - 504 W on the stock one.
> 
> ...got the Antec 1300 W hooked up...its PCIe cables are very heavy gauge, even tough to bend...and the Strix's recessed 3x 8 Pin didn't make that switch-over any easier...I hate 8 pin PCIe almost as much as the old Molex...


Correct, I'm sure Asus will update the official Strix BIOS as well, but the KPE 520W one definitely leaves more power headroom (possibly more than the ~40W it seems, since power can't easily be measured properly with it on a Strix).

And correct, I didn't notice any serious improvement with either of the "XOC" Strix BIOSes.


----------



## jura11

Will try both XOC BIOSes on my RTX 3090 later tomorrow and see how good they are, or if I see any improvement over the KPE XOC BIOS.

I assume with the Asus XOC BIOS you will lose a DP or HDMI port, @mirkendargen?

Thanks, Jura


----------



## mirkendargen

jura11 said:


> Will try both XOC BIOSes on my RTX 3090 later tomorrow and see how good they are, or if I see any improvement over the KPE XOC BIOS
> 
> I assume with the Asus XOC BIOS you will lose a DP or HDMI port, @mirkendargen?
> 
> Thanks, Jura


I didn't test, but I assume since it's a Strix BIOS running on a Strix the ports would have all worked.


----------



## changboy

J7SC said:


> I need some feedback from you folks on Strix 3090 OC mvddc. As established before, it is on 'v2' stock bios and air-cooled, but does have helper fans for the back-VRAM and temps are ok. Mvddc is NOT bouncing off a 90 W limit, but now I'm wondering about the other direction - what the max safe value for peak watts is for mvddc ? While I have oc'ed GPUs for eons, Ampere & GDDR6X are new to me, and feedback re. below is very welcome.
> 
> On my Strix, mvddc is typically in the 140 W - 155 W range, but per setup run for Superposition 8K below, it can hit over 173 W when at or near max oc. Tx
> 
> View attachment 2478704


I never saw my MVDDC over 100W, and I run the 1000W BIOS. The max in GPU-Z today, after a lot of gaming, is 99W, but it's usually lower; that's my peak value.
But my memory OC today was +1000 for gaming.


----------



## jura11

mirkendargen said:


> I didn't test, but I assume since it's a Strix BIOS running on a Strix the ports would have all worked.


Ahh okay, I thought you had a reference PCB or an EVGA FTW3 or something like that.

Yes, if you have a Strix then you shouldn't lose any DP or HDMI port.

Will test that BIOS later tomorrow or when I have time.

Hope this helps 

Thanks, Jura


----------



## J7SC

changboy said:


> I never saw my MVDDC over 100W, and I run the 1000W BIOS. The max in GPU-Z today, after a lot of gaming, is 99W, but it's usually lower; that's my peak value.
> But my memory OC today was +1000 for gaming.


...I'm running well past +1000 VRAM for gaming such as FS2020 etc. as well. VRAM is great on this card (highest benchmark VRAM so far well over '11000' in MSI AB speak), as long as it gets not only good direct backplate cooling but also swift evacuation of the hot air...it is in a semi-open build anyway...

Somewhat related, I did try out the card in both Slot 1 and Slot 3 after the OS fix I posted on before. Scores were indistinguishable (no other PCIe item plugged in, just the Strix). That means that I'll keep the card in slot 3 - allows for way easier cooling now, and also later for a w-cooled backplate.


----------



## jura11

VRAM OC on my RTX 3090 GamingPro will easily do 1250-1300MHz; beyond that it's just unstable or will crash. No artifacts, errors or lower results, just a straight crash and black screen.

VRAM temperatures maxed at 60°C; I haven't seen higher than that. Cranked the central heating to 26-28°C and VRAM temperatures topped out at 72°C, and that's with just a backplate plus 80mm and 120mm fans for cooling. Ordered the Alphacool D-RAM waterblock for the VRAM; I hope it arrives by the end of the month, and we'll see how it compares to my fan cooling.

Hope this helps 

Thanks, Jura


----------



## Nico67

changboy said:


> I have 2 sensors at different points and they track each other within 2-3°C, so your loop temp is a global temp; you can put the sensor where it's easiest for you
> View attachment 2478667


It looks like you are running the water in reverse through the GPU block and then into the CPU?


----------



## des2k...

J7SC said:


> ...I'm running well past 1000+ VRAM for gaming such as FS2020 etc as well. VRAM is great on this card (highest benchmark VRAM so far well over '11000' in MSI AB speak), as long as it can get not only good direct backplate cooling, but the hot air can be evacuated swiftly...it is in a semi-open build anyway...
> 
> Somewhat related, I did try out the card in both Slot 1 and Slot 3 after the OS fix I posted on before. Scores were indistinguishable (no other PCIe item plugged in, just the Strix). That means that I'll keep the card in slot 3 - allows for way easier cooling now, and also later for a w-cooled backplate.


All GDDR6X is rated at +750; NVIDIA downclocks because of heat / air coolers.

Since I've mounted my block so many times, I can tell you that if the block doesn't make contact at the edge of the die, you can't overclock the memory that much.
On GA102, an x-ray shows the memory controller all around the edge of the die, and it's not the usual below-the-silicon placement; it sits very close to the top of the silicon (don't lap the die 🤣)

I got +953 and 2175, 2155+ effective clocks by using very little paste / double washers, but it's still not perfect.
I would need to lap the EK block; paste on the edge still doesn't show a compressed pattern vs the center.

So for me, reaching +1000 on the memory was just a remount / more pressure, not the memory being a poor sample.


----------



## J7SC

...yeah, I'm looking forward to getting the w-cooling setup for this card w/ custom backplate and new thermal pads soon. Even though this is not a 'bencher' but a hybrid for gaming and productivity, the card has a nice combo of GPU and VRAM performance... but IMO anything over 350 W - 400 W (= heat energy) really should be w-cooled, much less the 500 W or so I have seen


----------



## gfunkernaught

I just ran the Bright Memory benchmark for about 5 minutes to see where my card is at as far as detailed power draw and temps; since I've been reading so much about that on this thread, I got curious. This was with the stock Trio (low temp) BIOS with the PL set to 102% and no overclock applied to either core or memory. I'm just too afraid to OC this thing on air past the 2% power limit increase.

I can't believe how hot the memory got. I hope the EK backplate with a fan will help.


----------



## changboy

Nico67 said:


> It looks like you are running the water in reverse through the GPU block and then into the CPU?


It goes from the pump to the front 280mm rad, then the GPU, directly to the CPU block, then 2x 360mm rads. There is no fixed in and out on those EK GPU waterblocks; the inlet can be on the left or right side. It does the same job, at least that's what I think.

About this block, EK writes this:
The jet plate and fin structure geometry have been optimized to provide even flow distribution with minimal losses and optimal performance when used in any given coolant flow orientation, unlike some products that are currently available on the market.


----------



## Thanh Nguyen

Is this temp OK for memory? I let it run until the temp stayed steady. Putting a fan on the backplate reduces it by a few °C. Ambient is 21.5°C.


----------



## mardon

Thanh Nguyen said:


> Is this temp OK for memory? I let it run until the temp stayed steady. Putting a fan on the backplate reduces it by a few °C. Ambient is 21.5°C.
> View attachment 2478770


It's good for a mining temperature but on the high side for gaming.


----------



## Zogge

Tried the XOC2 bios now with 600W, 750W, 1000W settings on the Strix. Same result: max 510W achieved.
GPU clock 2220 MHz / memory 21500 MHz gave:
SRC 141W
PCIE 45.8W
Pin1 149.9W
Pin2 165.3W
Pin3 146.3W
Power TDP normalized 95.1%
Power TDP 50.4%

I reinstalled drivers and MSI AB, no change. Superposition 4k standard run above.
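As a rough sanity check on those numbers (assuming board power is the PCIe slot plus the three 8-pins, with SRC being an input-side reading rather than an extra consumer, and using the 175 W per-rail cap mentioned earlier in the thread), the readings are self-consistent:

```python
# Zogge's readings from the post above. Assumptions: board power =
# PCIe slot + three 8-pin rails, TDP % = board power / 1000 W limit,
# and TDP Normalized % is driven by the busiest 8-pin vs its 175 W cap.
rails_w = {"PCIE": 45.8, "Pin1": 149.9, "Pin2": 165.3, "Pin3": 146.3}
board_power = sum(rails_w.values())     # ~507 W, matching the ~510 W max seen
tdp_pct = 100 * board_power / 1000      # against the XOC2 1000 W board limit
norm_pct = 100 * rails_w["Pin2"] / 175  # hottest 8-pin vs its own cap
print(f"{board_power:.1f} W, TDP {tdp_pct:.1f}%, normalized ~{norm_pct:.1f}%")
```

Both computed values land within a point of the reported TDP 50.4% and normalized 95.1%, which supports the reading that a per-rail SRC cap, not the 1000 W board limit, is what stops the card around 510 W.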


----------



## J7SC

Zogge said:


> Tried the XOC2 bios now with 600W, 750W, 1000W settings on strix. Same result, max 510W achieved.
> GPU Clock 2220mhz/Memory 21500Mhz gave:
> SRC 141W
> PCIE 45.8W
> Pin1 149.9W
> Pin2 165.3W
> Pin3 146.3W
> Power TDP normalized 95.1%
> Power TDP 50.4%
> 
> I reinstalled drivers and MSI AB, no change. Superposition 4k standard run above.


...good to have confirmation. BTW, 21500 Mhz for the memory is excellent


----------



## ALSTER868

Can anyone tell me what blinking red LEDs at the power connectors mean? The card does not boot (no image). Strix OC.
It happened after I reseated the waterblock. Any ideas?


----------



## Zogge

Power problem or connection problem. I would reseat the card (make sure it is all the way into the PCIe slot) and unplug/replug the 8-pin connectors.
I had that too once, some months back, and after doing the above it disappeared.
I could still boot on it, though.

I would also do a complete power down, e.g. unplug it from the wall, wait some time and retry.


----------



## J7SC

ALSTER868 said:


> Can anyone tell me what blinking red LEDs at the power connectors mean? The card does not boot (no image). Strix OC.
> It happened after I reseated the waterblock. Any ideas?


First, I would make sure that the PCIe connectors are correctly seated. There are other more serious issues which could be at play, but check the cables thoroughly


----------



## Zogge

J7SC said:


> ...good to have confirmation. BTW, 21500 Mhz for the memory is excellent


Thanks, it can go to 22200; after that it starts to crash.
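For anyone mapping between the "+1000" Afterburner offsets and the "21500 MHz" figures in this thread, here's a minimal sketch. It assumes stock GDDR6X on the 3090 is 19,504 MT/s effective (1219 MHz x16, per the spec table) and that an Afterburner offset counts at half the effective rate:

```python
# Convert an MSI Afterburner memory offset to effective GDDR6X data rate.
# Assumption: AB's memory clock/offset is half the effective (MT/s) rate.
STOCK_EFFECTIVE_MTS = 19504  # RTX 3090 stock: 1219 MHz x16

def effective_rate(ab_offset: int) -> int:
    return STOCK_EFFECTIVE_MTS + 2 * ab_offset

print(effective_rate(1000))  # +1000 in AB -> 21504, the "21500 MHz" above
print(effective_rate(1348))  # ~22200, about where this card starts to crash
```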


----------



## ALSTER868

Zogge said:


> Power problem or connection problem, I would reseat the card (make sure it is all the way into the PCI slot) and unplug/replug the 8 pin connectors.
> I had that too once some months back and did the above, after that it disappeared.
> I could still boot on it though.
> 
> Also I would do a complete power down e g unplug it from the wall, wait some time and retry.





J7SC said:


> First, I would make sure that the PCIe connectors are correctly seated. There are other more serious issues which could be at play, but check the cables thoroughly


I did all this - plugged and unplugged the card from the MoBo, replugged the connectors several times. These LEDs are not blinking every time I turn on the PC; once or twice only one of the connectors was blinking.. could the card be dead? I'll try to disassemble the card and see what's underneath the waterblock. :(


----------



## Nizzen

For people not having the XOC 2 strix bios:

XOC2 Strix 3090 bios


----------



## Zogge

"This attachment is not available. It may have been removed or the person who shared it may not have permission to share it to this location."

Anyway, Nizzen did you see any improvement on this bios ?


----------



## Nizzen

Zogge said:


> "This attachment is not available. It may have been removed or the person who shared it may not have permission to share it to this location."
> 
> Anyway, Nizzen did you see any improvement on this bios ?


Haven't tested yet. Nice weather, so it's skiing time here in Norway 

Will try tonight


----------



## cazpy

492w peak with the strix xoc v2... sadge


----------



## Nico67

changboy said:


> It goes from the pump to the front 280mm rad then the gpu and direct to the cpu block then 2x360mm rad, there is no in and out on those gpu ek waterblock, in can be on left or right side. Its doing the same job, at least this is what i think.
> 
> About this block EK write this :
> The jet plate and fin structure geometry have been optimized to provide even flow distribution with minimal losses and optimal performance when used in any given coolant flow orientation, unlike some products that are currently available on the market.


The inlet should always flow down through the restrictor plate, like their CPU coolers do. It will work in both flow directions, but I'm sure you would get better results the conventional way, regardless of what EK says. There's bound to be an uneven low-pressure point directly below that opening, rather than the water being forced through and into the fins.
Motiveman got a 15°C improvement on a Phanteks block by changing the flow.


----------



## Wihglah

Did I miss something? I cannot flash the 1000W BIOS to my FTW3.

Already have the 520W BIOS in there. Is there a step I don't know about?

nvflash64 --protectoff
nvflash64 -6 KP1000.rom

I get an invalid BIOS warning and it refuses to flash.


----------



## Thanh Nguyen

Wihglah said:


> Did I miss something? I cannot flash the 1000W BIOS to my FTW3.
> 
> Already have the 520W BIOS in there. Is there a step I don't know about?
> 
> nvflash64 --protectoff
> nvflash64 -6 KP1000.rom
> 
> I get an invalid BIOS warning and it refuses to flash.


Try the latest nvflash?


----------



## Wihglah

Thanh Nguyen said:


> Try the latest nvflash?


Good call - I was on 5.66. Updated to 5.67 and it worked. Thx.


----------



## Nizzen

Zogge said:


> "This attachment is not available. It may have been removed or the person who shared it may not have permission to share it to this location."
> 
> Anyway, Nizzen did you see any improvement on this bios ?


530w max board power draw in Timespy extreme with powerslider at 100%. 3090 strix oc on water. 30c watertemp. 52c max gpu temp.









Edit: 526w in Battlefield V.

I think it will draw more if you have a better-binned GPU and a way colder GPU, so the GPU will call for a higher frequency


----------



## itssladenlol

Nizzen said:


> For people not having the XOC 2 strix bios:
> 
> XOC2 Strix 3090 bios


New 1000w bios? 
Where the Hell does this come from?


----------



## Nizzen

itssladenlol said:


> New 1000w bios?
> Where the Hell does this come from?


Our friend Elmor


----------



## Gebeleisis

Finally broke the 14k barrier - a new PR in Port Royal with 14,464.
Palit GamingPro 3090, with the 1000W BIOS capped at 520W.








I scored 14 464 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 131072 MB, 64-bit Windows 10




www.3dmark.com





Just finished my new loop, and it was a cold day outside; opened the window and got this score.
Alphacool waterblock, 1x 420mm rad, 1x 360mm rad.
This is my work computer, so it's back to the 390W BIOS, and I won't try to break any records anytime soon.
Cheers !


----------



## ttnuagmada

Zogge said:


> Tried the XOC2 bios now with 600W, 750W, 1000W settings on strix. Same result, max 510W achieved.
> GPU Clock 2220mhz/Memory 21500Mhz gave:
> SRC 141W
> PCIE 45.8W
> Pin1 149.9W
> Pin2 165.3W
> Pin3 146.3W
> Power TDP normalized 95.1%
> Power TDP 50.4%
> 
> I reinstalled drivers and MSI AB, no change. Superposition 4k standard run above.


So, this sounds like it's probably the best one to use for the strix? basically same max as the 520w, but proper readings and all ports work.


----------



## cazpy

VRAM doesn't downclock and it doesn't give the power it should, so I won't use it; looks scuffed.


----------



## Zogge

ttnuagmada said:


> So, this sounds like it's probably the best one to use for the strix? basically same max as the 520w, but proper readings and all ports work.


I think the same.


----------



## jura11

Every XOC BIOS I can remember, including the Strix XOC, runs the VRAM at full speed. Not sure about the GTX 1080 and GTX 1080 Ti ones, whether the VRAM downclocked on those or not.

Hope this helps 

Thanks, Jura


----------



## changboy

Nico67 said:


> Inlet would always be flowing down thru the restrictor plate, like there CPU coolers do. It will always work both flow directions, but I'm sure you would get better results the conventional way, regardless of what EK says  theres bound to be uneven low pressure point directly below that opening rather than forcing water thru and into the fins.
> Motiveman got a 15c Improvement on a Phanteks block changing the flow.


My temps are fine: on the 520W it's 50°C, at 480W it's 48°C and at 440W it's 44°C, so the flow direction is working fine. Also, my CPU block looks like it's mounted the wrong way, but it's not; I reversed my Optimus CPU block and added a light around it.


----------



## Xdrqgol

Could I please get some advice on which BIOS I should use for a 3090 FTW3 Ultra?
Would really appreciate it.


----------



## Wihglah

Xdrqgol said:


> Could I please get some advice on which BIOS should I use for 3090 FTW3 Ultra?
> would really appreciate


If you are interested in benching the Kingpin 1000W is the way to go - just be careful of the PCIe power level. If you don't know why this is an issue, then it's not for you.

For 24/7 I would stick with the 500W FTW3 Bios.


----------



## 8GIR8

mardon said:


> I have tried a SF750 from corsair and it did it so I upgraded to the TF850 and was gutted to see it doing the same thing. Apparently the TF850 is very similar so it's probably the same weakness. I really don't want to move away from my Ncase M1
> 
> I'm hoping I can find a balance of around 450w which won't shut down. I didn't play the 1000w bios for that long.


I had the ncase and sf750 with a KPE and Asus x570 Impact. The sf750 would work fine at a mild OC but would fail at anything over 112% power limit. I’m not sure what I’ll end up doing in the long run but for now I removed the sf750 and Frankensteined an EVGA1600w t2 to the system for the time being.


----------



## Falkentyne

ttnuagmada said:


> So, this sounds like it's probably the best one to use for the strix? basically same max as the 520w, but proper readings and all ports work.


The Asus XOC2 bios increases the SRC limits; the XOC1 bios does not. However, the AUX limits are not touched. I have no idea what effect the AUX values have: 50,000 mW default, 70,000 mW max... even the FE has higher limits (30,000 mW default but 100,000 mW max).

The Kingpin bios has the AUX limits set sky high.


----------



## devilhead

Don't know why my 3DMark scores are so low after the update, more than 1,000 points lower than usual...  Stock 3090 Strix OC, watercooled: 13,000 in Port Royal. Overclocked: 14k - I scored 14 081 in Port Royal
Before, on air, the same card without the 3DMark update managed 15k. The waterblock looks like it has good contact, and Superposition 1080p Extreme got 15k, so performance is OK  UNIGINE Superposition benchmark score
All tests on stock bios.


----------



## mirkendargen

ttnuagmada said:


> So, this sounds like it's probably the best one to use for the strix? basically same max as the 520w, but proper readings and all ports work.


I disagree unless you really need the ports. It might read 520W at times (it never did for me, and I have significantly colder temps), but it was power limited more of the time than with the KPE 520W BIOS, whose exact draw on a Strix we don't really know.


----------



## mirkendargen

Falkentyne said:


> The Asus XOC2 bios increases the SRC limits. The XOC1 bios does not. However the AUX limits are not touched. I have no idea what effect the AUX values have. 50000mw default, 70000 mw max...even the FE has higher limits (30000 mw default but 100,000 max).
> 
> The Kingpin bios has the AUX limits set sky high.
> View attachment 2478814


What's up with SRC3 being the default? Is that + some kind of power balancing what is making this not work as expected?


----------



## Falkentyne

mirkendargen said:


> What's up with SRC3 being the default? Is that + some kind of power balancing what is making this not work as expected?


Oh wow. That I hadn't noticed. That will limit 8-pin #3.


----------



## mardon

8GIR8 said:


> I had the ncase and sf750 with a KPE and Asus x570 Impact. The sf750 would work fine at a mild OC but would fail at anything over 112% power limit. I’m not sure what I’ll end up doing in the long run but for now I removed the sf750 and Frankensteined an EVGA1600w t2 to the system for the time being.


When we get the new office built next month I'm thinking of running dual power supplies.
How did you fit the Kingpin in there??


----------



## LancerVI

I'm sure this will get a "No duh!?!?" type of response, but I figured I'd add my thoughts on how important your PCIe cables and PSU are, and on checking your connections.

I just installed an EVGA 3090 FTW3 Ultra and Ryzen 5900x a couple of weeks ago on my Asus ROG CHVIII Hero WiFi.

I was happy with the performance in games for the most part, but something was off with my benches. CTR 2.0 RC3 was reporting my 5900X as a Bronze Sample, with diagnostics showing vdroop up near 4%. My CPU-Z scores were 600 to 620-ish and my TimeSpy CPU score was in the 11k range. Waaaaaay too low. My 3090 wouldn't complete the EVGA Precision X1 OC Scan and would reboot the computer every time at 29%. *EDIT:* My idle GPU temps on the 3090 were in the 50 to 60 range and near 80 at load, on air, stock.

I have CableMod extension cables on my EVGA SuperNova p2 1200 psu. I removed them. That's the only change I made.

CTR now reports a Silver Sample CPU, the OC Scan in Precision X1 completes, my scores are up big (20-30% in some cases) across the board on benches, and my vdroop is now down to about 3%. It may not seem like much, but still: a full point recovered just by taking off the extensions. *EDIT2:* My idle GPU temps dropped to the low 30s, with 60s to 70s at load.

TL;DR: Duh!!! Cable extensions are bad, m'kay! At least in my brief experience with them, purrrrrtty as they may be!

If you're having problems with your new 3090 and you have cable extensions or a vertical GPU mount.....get rid of them.

EDIT: I forgot to add my GPU temps


----------



## 8GIR8

mardon said:


> When we get the new office built next month I'm thinking of running dual power supplies.
> How did you fit the Kingpin in there??


I haphazardly Dremeled away the part of the frame that would block the GPU bracket where the I/O is located. I also removed the case top and 3D-printed an adapter to hold the rads and push-pull fans on top. It's outright hilarious looking and has more or less turned into a vertical open bench.


----------



## 8GIR8

mardon said:


> When we get the new office built next month I'm thinking of running dual power supplies.
> How did you fit the Kingpin in there??


Also looked into using both PSU simultaneously but couldn’t find a solution that I was confident was safe for the hardware. Any recommendations?


----------



## Beagle Box

devilhead said:


> don't know why mine 3dmark scores are so low after update, more than 1000 scores lower than usually...  stock 3090 Strix OC watercooled - 13 000 port royal. Overclocked - 14k - I scored 14 081 in Port Royal
> Before on air, same card without 3dmark update managed to get 15k. Looks like waterblock has good contact and Superposition 1080p Extreme got 15K - performance are ok  UNIGINE Superposition benchmark score
> all tests stock bios.


My rig can't do 15,000 in SP @ 1080p extreme, but can still do over 15000 in Port Royal. 
And my GPU is just average.
Maybe you should delete and re-install 3dmark.


----------



## des2k...

mesh shaders on, 1000% fps boost lol
I still don't know what this dx12 feature does. Any games supporting this ?


----------



## mirkendargen

des2k... said:


> mesh shaders on, 1000% fps boost lol
> I still don't know what this dx12 feature does. Any games supporting this ?
> 
> View attachment 2478847


TL;DR version: it's a more efficient geometry pipeline that lets the GPU cull whole clusters of polygons that aren't visible in the scene, so no time is wasted rendering stuff that is out of view.
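To make that concrete, here's a rough sketch of the idea in plain Python, not the actual DX12 mesh-shader API: geometry is pre-grouped into "meshlets" with bounding spheres, and whole clusters get rejected against the view frustum before any per-triangle work happens. All names and numbers below are made up for illustration.

```python
# Illustrative sketch of meshlet culling (the idea behind mesh shaders),
# NOT real DX12/Vulkan API code. A "meshlet" is a small cluster of
# triangles with a precomputed bounding sphere; clusters entirely
# outside the view frustum are skipped before any per-triangle work.

from dataclasses import dataclass

@dataclass
class Meshlet:
    center: tuple       # bounding-sphere center (x, y, z)
    radius: float       # bounding-sphere radius
    triangle_count: int

def visible(meshlet, planes):
    """Keep the meshlet only if its bounding sphere is inside (or
    straddles) every frustum plane, each given as (nx, ny, nz, d)."""
    cx, cy, cz = meshlet.center
    for nx, ny, nz, d in planes:
        # signed distance from the sphere center to the plane
        if nx * cx + ny * cy + nz * cz + d < -meshlet.radius:
            return False    # entirely outside this plane
    return True

def cull(meshlets, planes):
    survivors = [m for m in meshlets if visible(m, planes)]
    skipped = (sum(m.triangle_count for m in meshlets)
               - sum(m.triangle_count for m in survivors))
    return survivors, skipped

# A single plane x >= 0 (normal pointing +x, d = 0) culls everything
# far enough to the left of it:
planes = [(1.0, 0.0, 0.0, 0.0)]
meshlets = [
    Meshlet((5.0, 0.0, 0.0), 1.0, 128),    # well inside, kept
    Meshlet((-5.0, 0.0, 0.0), 1.0, 128),   # well outside, culled
    Meshlet((0.2, 0.0, 0.0), 1.0, 64),     # straddling, kept
]
survivors, skipped = cull(meshlets, planes)
print(len(survivors), skipped)  # 2 meshlets kept, 128 triangles skipped
```

On real hardware the amplification/mesh shader stages do this per cluster on the GPU, which is why scenes with lots of off-screen geometry see such big gains.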


----------



## changboy

mirkendargen said:


> TL;DR version is it's a more efficient method of determining what polygons are visible in the scene to not waste time rendering stuff that is out of view.


Here we call this cheating lol 😂


----------



## geriatricpollywog

I’m paying for those polygons and I want my GPU to draw them all.


----------



## J7SC

...I've noticed several times now that switching the NV control panel setting from 'prefer maximum performance' to 'normal' doesn't really cost me any fps / bench performance... perhaps because these cards spool up so quickly, yet also appreciate a split-second cool-down??


----------



## mirkendargen

changboy said:


> Here we call this cheating lol 😂


DLSS is cheating, this is just being lazy


----------



## Xdrqgol

Wihglah said:


> If you are interested in benching the Kingpin 1000W is the way to go - just be careful of the PCIe power level. If you don't know why this is an issue, then it's not for you.
> 
> For 24/7 I would stick with the 500W FTW3 Bios.


I flashed the Kingpin 520W BIOS and the card runs better and colder (with one fan less) 🤦‍♂️🤔. Thanks for the help.

I'd like to flash the 1000W BIOS to run some Port Royal; with the stock BIOS I got 14,800... maybe I can get more with the 1000W, what do you say?

Cooling-wise, I'm using the stock cooler and the temperature outside 😅!


----------



## mirkendargen

I played around with the Strix XOC2 BIOS some more. I can manage to get it to draw 556W peak in Godfall.

What's interesting is the (reported) power balance when doing that.










It's dropping down to 1006mV while doing this, so it's very much power limited.


----------



## Talon2016

Anyone have a link to the latest version of ABE?


----------



## J7SC

As much as I like the stock Strix BIOS while the card is still on air, the power limit, even at 500 W, throttles this card... I did some Superposition 8K benching today with 'normal' power and yesterday with 'prefer maximum performance'; the former is on the left, the latter on the right in the pic below. For comparison, the pic also has my 2x 2080 Ti water-cooled results for 8K: that was a combined 760 W for those GPUs. Even with slightly varying temps, the results are always within the margin of error for the Strix... same story with Port Royal, 15024 as posted before, 15032 today.

The question then is, once water-cooled: the 520 W KP or the Strix XOC, assuming they all get the resizable BAR updates?


----------



## gfunkernaught

J7SC said:


> As much as I like the stock Strix bios while the card is still on air, the power limit, even at 500 W, throttles this card...did some Superpos 8K benching today w/ 'normal' power and yesterday with 'prefer maximum performance'. The former on the left, the latter on the right in the pic below. For comparison, pic below also has my 2x 2080 Ti w-cooled results for 8K...that was a combined 760 W for those GPUs.  Even w/ slightly varying temps, the results are always within the margin of error for the Strix...same story with Port Royal, 15024 as posted before, 15032 today.
> 
> The question is then once w-cooled, the 520 W KP or the Strix XOC, assuming they will all get the resizable BAR updates ?
> 
> View attachment 2478886


I think the way GPU Boost works is power limit first, then temp throttle. So if you're no longer power limited, the temp-throttle scaling kicks in. Can someone confirm? I remember that was the deal with Turing. What I don't get about the Normal / Prefer Max Performance thing is that both settings will pin my power limit, even with no OC, in heavy games like Control, Quake 2 RTX, Cyberpunk, Cold War, and Titanfall 2 when I run with vsync off at 144fps OR use in-game supersampling. The 3090 is definitely the weirdest GPU I've ever used. From my first video card, a Voodoo 3 PCI, up until the GTX 580, it was all pretty simple. Then came GPU Boost with my 1080 Ti, weirder still on the 2080 Ti, and finally next to impossible with the 3090, with power checks and limits all over the board. What's next? Will the 4090 or 5090 just be a metal block that explodes if you open it?


----------



## des2k...

gfunkernaught said:


> I think the way GPU boost works is power limit>temp throttle. So if you're no longer power limited, the temp throttle scaling kicks in. Can someone confirm this? I remember that was the deal with Turing. What I don't get about the Normal/Prefer Max Performance thing is both settings will pin my power limit even with no OC in heavy games like Control, Quake 2 RTX, Cyberpunk, Cold War, and Titanfall 2 when I run the game with vsync off @144fps OR use in-game supersampling. The 3090 is definitely the weirdest GPU I've ever used. I remember going from my first video card a Voodoo 3 PCI up until the GTX 580, it all was pretty simple. Then GPU boost with my 1080 Ti, make it even weirder on the 2080 Ti, and then finally next to impossible with the 3090 with power checks and limits all over the board. What's next? Will the 4090 or 5090 just be a metal block that will explode if you open it?


Different game engines will use different parts of the 3090. You get light / heavy use of each, or combined: raster cores, RT cores, DLSS (tensor) cores.

Of course there are extreme cases like Quake 2 RTX, which will use 640W+, and Superposition, which runs its ray-traced global lighting on the normal raster cores.

You can always flash the 1000W Kingpin vBIOS. It won't drop your OC; the wattage just goes pretty high.


----------



## gfunkernaught

des2k... said:


> Different game engines will use different part of the 3090. You have light / heavy of each or combined: raster core, rt cores, dlss cores.
> 
> Of course there's extremes cases like Quake RTX that will use 640w+ and Superposition that runs raytracing for global light on normal raster cores.
> 
> You can always flash the 1000w kingpin vbios. It will not drop your OC, just wattage goes pretty high.


I use afterburner with the RTSS osd, and it shows the power limit always on even though usage is slightly below 380w. I've read that other power limits can trigger that. Quake RTX is a different animal. But I'm aware of different games using different parts of the die. Power limit is being triggered regardless. Once I get my water block to remove the thermal issue then I'll see where I'm at.


----------



## long2905

Tried the Strix XOC BIOS v2 on my Inno3D waterblocked card, and it's the same as the other 3x8-pin BIOSes: incorrect power readings and a 1860 max core clock. Max RAM clock too, and you have to turn the P2 state off to run an ETH hash.

In case anyone using a reference card is curious.


----------



## Ironcobra

So no one has a water block with active backplate cooling yet?


----------



## Zogge

Depends on whether you count the MP5works as active backplate cooling. I do, and yes, it works well. Will I try the EK version? Maybe, as I already have an EK block.


----------



## mardon

8GIR8 said:


> Also looked into using both PSU simultaneously but couldn’t find a solution that I was confident was safe for the hardware. Any recommendations?


I'll be honest, I thought I'd be able to get a half-decent 600W PSU, run two decent 8-pins from it to the GPU, hook it up to the wall, and turn it on. I take it it's more complex than that?
I'll keep an eye out for the SilverStone SFX-L 1000W to see if that solves my issues.


----------



## mardon

Ironcobra said:


> So no one has a water block with active backplate cooling yet?


I had the old style MP5 works and it dropped temps mining from 102C to 76C. I'm swapping from EK to Heatkiller block this week and also going Gelid thermal pads and have the new style cold plate for the MP5 ready to go on. I'll report back if those temps drop further under a mining load.


----------



## Xdrqgol

mardon said:


> I had the old style MP5 works and it dropped temps mining from 102C to 76C. I'm swapping from EK to Heatkiller block this week and also going Gelid thermal pads and have the new style cold plate for the MP5 ready to go on. I'll report back if those temps drop further under a mining load.


Seems like lots of people are mining on their cards... is it worth it on a 3090?
Asking as I have no idea...


----------



## WayWayUp

As a pure investment it's not the best bang for the buck (the 3080 is a better value proposition; I think you can mine 75-80% as efficiently), but if you already have the card and only plan to keep it for 2 years, it's a pretty good idea, especially with the way ETH has been surging in price.
And if 10 years from now it does a 10x like Bitcoin and is worth something like 18k, you will be very happy about your mining.


----------



## mardon

Xdrqgol said:


> Seems like lots of people are mining their cards...is this worth it on 3090?
> Asking as I have no idea...


I didn't purchase the card to mine; I got it for work and gaming purposes. But if I can mine now and again to help cover its cost, it's defo worth it. I'm getting about £60 a week off it: 125 MH/s @ 290W.


----------



## pat182

J7SC said:


> As much as I like the stock Strix bios while the card is still on air, the power limit, even at 500 W, throttles this card...did some Superpos 8K benching today w/ 'normal' power and yesterday with 'prefer maximum performance'. The former on the left, the latter on the right in the pic below. For comparison, pic below also has my 2x 2080 Ti w-cooled results for 8K...that was a combined 760 W for those GPUs.  Even w/ slightly varying temps, the results are always within the margin of error for the Strix...same story with Port Royal, 15024 as posted before, 15032 today.
> 
> The question is then once w-cooled, the 520 W KP or the Strix XOC, assuming they will all get the resizable BAR updates ?
> 
> View attachment 2478886


wait, i was away for a week, whats this strix xoc ? new strix bios ?


----------



## pat182

mirkendargen said:


> I played around with the Strix XOC2 BIOS some more. I can manage to get it to draw 556W peak in Godfall.
> 
> What's interesting is the (reported) power balance when doing that.
> 
> View attachment 2478876
> 
> 
> It's dropping down to 1006mV while doing this, so it's very much power limited.


whats this new strix xoc bios people are talking about, i was away for a while


----------



## Biscottoman

Guys, I'm struggling between the EKWB and Phanteks waterblocks for the 3090 Strix. Which is, in your opinion, the best one to pair with an MP5works active backplate? There are many tests of the EKWB but almost none of the Phanteks. Which one performs better?


----------



## J7SC

Biscottoman said:


> Guy i'm struggling between ekwb and phanteks waterblock for the 3090 strix which is in your opinion the best one to pair with an mp5works as active backplate. There are many tests regarding the ekwb but almost none about the phanteks. Which one does perform better?


I'm waiting to see the answers to your query myself, though I would also throw in the Bykski option. Separate but related question: will the EKWB active backplate cooling block be available as a stand-alone purchase?


----------



## J7SC

pat182 said:


> wait, i was away for a week, whats this strix xoc ? new strix bios ?


...see post 11902 (and subsequent)


----------



## Gebeleisis

mardon said:


> I didn't purchase the card to mine. I got it for work and gaming purposes. But if I can mine now and again to help cover its costs its defo worth it. I'm getting about £60 a week off it. 125m/h @ 290w.


what settings are you using ?


----------



## Biscottoman

mardon said:


> I didn't purchase the card to mine. I got it for work and gaming purposes. But if I can mine now and again to help cover its costs its defo worth it. I'm getting about £60 a week off it. 125m/h @ 290w.


As I'll be watercooling my 3090 Strix I'd like to start mining too; I've never tried this before. How many hours a week do you run your GPU mining, and what program do you use?


----------



## Gebeleisis

Biscottoman said:


> As i will watercool my 3090 strix i would like to start mining too, i never tried this practise. How many hours a week do you run your GPU on mining and what program do you use?


You run it 24 hours a day, 7 days a week.

You can use WhatToMine (whattomine.com), a mining profit calculator, to estimate earnings. With a 3090 you will get around 118-121 MH/s at 290-295W consumption.

At current prices that is about $13.50 USD per day after you take electricity costs into account.
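For anyone wanting to sanity-check numbers like that, the arithmetic behind such a calculator is simple. A minimal sketch, assuming a hypothetical revenue rate per MH/s per day and a hypothetical electricity price (real calculators like WhatToMine pull live network difficulty and coin prices):

```python
# Back-of-envelope mining profit estimate. The revenue rate and the
# electricity price below are hypothetical placeholders, not live data.

def daily_profit(hashrate_mhs, card_watts, usd_per_mh_day, usd_per_kwh):
    revenue = hashrate_mhs * usd_per_mh_day
    energy_kwh = card_watts / 1000 * 24          # kWh consumed per day
    cost = energy_kwh * usd_per_kwh
    return revenue - cost

# 120 MH/s at 292 W, assuming $0.12 revenue per MH/s per day
# and $0.10/kWh electricity:
profit = daily_profit(120, 292, 0.12, 0.10)
print(f"${profit:.2f}/day")  # ~$13.70/day, in the ballpark quoted above
```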


----------



## Xdrqgol

mardon said:


> I didn't purchase the card to mine. I got it for work and gaming purposes. But if I can mine now and again to help cover its costs its defo worth it. I'm getting about £60 a week off it. 125m/h @ 290w.


Thanks for the insight!


----------



## mardon

Gebeleisis said:


> what settings are you using ?


1450 MHz core, with a straight-line (flat) voltage curve right the way across. 86% power limit, +1500 mem.


----------



## gfunkernaught

Gebeleisis said:


> You run it 24 hours a day, 7 days a week.
> 
> You can use WhatToMine (whattomine.com), a mining profit calculator, to estimate earnings. With a 3090 you will get around 118-121 MH/s at 290-295W consumption.
> 
> At current prices that is about $13.50 USD per day after you take electricity costs into account.


That's assuming you have a titanium-rated PSU?


----------



## mardon

gfunkernaught said:


> That's assuming you have a titanium rated psu?


Mine is platinum.
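For what it's worth, the rating matters less than you'd think at these loads. A rough comparison using typical 80 PLUS efficiency figures at around half load (assumptions, not measurements of any specific unit):

```python
# Wall draw and running-cost difference between Platinum and Titanium
# PSUs for a steady 300 W DC mining load. The efficiencies are typical
# 80 PLUS figures near 50% load, not measurements of a specific unit.

def wall_watts(dc_load, efficiency):
    return dc_load / efficiency

platinum = wall_watts(300, 0.92)   # ~326 W at the wall
titanium = wall_watts(300, 0.94)   # ~319 W at the wall

extra_kwh_per_day = (platinum - titanium) * 24 / 1000
print(f"{extra_kwh_per_day:.2f} kWh/day extra on Platinum")
# ~0.17 kWh/day: pennies, so Platinum vs Titanium barely moves the math
```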


----------



## Zogge

J7SC said:


> I'm waiting to see the answers to your query myself, though I would also throw in the Bykski option. Separate but related question: Will the EKWB active backplate cooling block be available separately as a stand-alone purchase options ?


I had EK and swapped to Bykski with mp5. Really happy and much lower temps.


----------



## gfunkernaught

Zogge said:


> I had EK and swapped to Bykski with mp5. Really happy and much lower temps.


Did you have the mp5 with the EK block? Or did you move to bykski and then added the mp5?


----------



## Zogge

Went to the Bykski plus MP5 directly, but core temps went from 59°C max at 520W to 47°C max. I reseated the EK with the same crappy results; first try on the Bykski gave a solid delta-T of 8-13°C depending on load.

All with stock pads. The only difference was that on the Bykski I added a 30x30x2mm Grizzly Minus 8 pad behind the core, on the caps.


----------



## changboy

I got a kingpin notification from evga, dont know what to do, i score 15 074(room temp) port royal with my ftw3 ultra with EK block.


----------



## J7SC

Zogge said:


> To bykski and mp5 but core temps went from 59 max on 520w to 47 max. I reseated ek with same crappy results. First try on bykski and solid delta T 8-13c pending load.
> 
> All with stock pads. Only difference was that on bykski I added a 30x30x2mm grizzly minus 8 behind the core on the caps.


I take it the Bykski kit for the Strix comes with a backplate and pads? Or is the backplate a separate purchase? Also, what material is the backplate (i.e., metal)?


----------



## des2k...

Biscottoman said:


> Guy i'm struggling between ekwb and phanteks waterblock for the 3090 strix which is in your opinion the best one to pair with an mp5works as active backplate. There are many tests regarding the ekwb but almost none about the phanteks. Which one does perform better?


I have the EK for my Zotac. I replaced the EK pads and re-mounted a few times.
Tried EKWB paste, Noctua NT-H2, and Kryonaut; all give the same delta. That's after some adjustments (measured with an amp clamp), since the XOC BIOS over-reports wattage.

Delta is 13-15°C at a 470W load; Quake 2 RTX is a 22°C delta at a 650W load.
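Deltas like these can be collapsed into a single thermal-resistance number, which makes comparing blocks across different loads easier. A quick sketch using the figures above:

```python
# Core-to-water thermal resistance R = deltaT / power lets you compare
# waterblocks across different loads. The inputs are the figures quoted
# above (~14 C delta at a 470 W load).

def thermal_resistance(delta_c, watts):
    return delta_c / watts    # degrees C per watt

r_block = thermal_resistance(14, 470)     # ~0.030 C/W
predicted_650w = r_block * 650            # expected delta at 650 W
print(f"R = {r_block:.3f} C/W, predicted delta at 650 W: {predicted_650w:.1f} C")
# ~19 C predicted vs ~22 C observed in Quake 2 RTX: resistance creeps up
# at higher heat flux, but the linear estimate gets close
```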


----------



## gfunkernaught

Zogge said:


> To bykski and mp5 but core temps went from 59 max on 520w to 47 max. I reseated ek with same crappy results. First try on bykski and solid delta T 8-13c pending load.
> 
> All with stock pads. Only difference was that on bykski I added a 30x30x2mm grizzly minus 8 behind the core on the caps.


Right so your temps dropped significantly probably because of the active backplate cooler. Did you compare core temps without the mp5?


----------



## gfunkernaught

changboy said:


> I got a kingpin notification from evga, dont know what to do, i score 15 074(room temp) port royal with my ftw3 ultra with EK block.


Oh man...
Get it!


----------



## J7SC

changboy said:


> I got a kingpin notification from evga, dont know what to do, i score 15 074(room temp) port royal with my ftw3 ultra with EK block.


...tough choice, but I would go for the Kingpin...or if your budget allows, keep both (FTW3 w/KPE bios) and do 'SLI' and/or a bit of mining (> NOT overclocked to your usual levels... )


----------



## changboy

This Kingpin can cost $3,300 CAD after import fees and tax.


----------



## changboy

Well, where can I put this Kingpin inside this:


----------



## kunit13

Been reading this thread off and on for 3 weeks!
Anyway, I see a lot of questions around the EK waterblock and MP5.
This is my first water loop build, but below are my temps gaming (Warzone, around the 2-hour mark).

Specs:
5950X
3090 Strix OC
560 rad (4x BioniX P14 fans, push)
Both CPU and GPU on the same loop
Room temp: 22-23°C
Water temp: 29-31°C
Backplate temp: 37°C (heat gun)

After reading this thread, I guess my GPU is reverse flow. I'm going to switch this weekend and add 4 more fans for push-pull. I'm also adding a heat probe to the backplate; I will report the difference.
The MP5 is in parallel.
Running Unigine in a loop my GPU gets up to 46-47°C; gaming temps are 38-42°C.


----------



## Gebeleisis

mardon said:


> 1450mhz core straight line voltage curve right the way across. 86% Power limit +1500 mem.


Thanks. I get 121 MH/s at 290W (80% on the 390W BIOS). Mem between +1100 and +1400 doesn't seem to impact the hashrate much.


----------



## Gebeleisis

gfunkernaught said:


> That's assuming you have a titanium rated psu?


I've built mining rigs in the past. Still running after 3 years,
with some gold-rated Segotep PSUs running at 90% capacity all that time. I have yet to see a failure.

In my 3090 rig I have an HX1000i, and at a 300W power draw it doesn't even break a sweat.


----------



## changboy

There's no way to reverse the flow.


kunit13 said:


> View attachment 2478981
> View attachment 2478982
> Been reading this thread off and on for 3 weeks!
> Anyway I see a lot of questions around the ek wb and mp5.
> This is my first water loop build but below are my temps gaming (Warzone around the 2hr mark)
> 
> Specs
> 5950x
> 3090 strix oc.
> 560 rad (x4 bionix p14 fans push)
> Both cpu and gpu on the same loop.
> Room temp - 22-23c
> Water temp 29-31c
> Backplate temps 37c (heat gun)
> 
> After reading this thread, I guess my gpu is reverse flow. I’m going to switch this weekend and Add 4 more fans to do a push pull. Also adding a heat probe to the back plate. I will report the difference.
> MP5 is in parrellel
> Running Unigen in a loop my gpu gets up to 46-47c Gaming temps (38-42c).


There is no way to reverse the flow on the EK block, but if you try it we'll see what you get.


----------



## Gebeleisis

kunit13 said:


> View attachment 2478981
> View attachment 2478982
> Been reading this thread off and on for 3 weeks!
> Anyway I see a lot of questions around the ek wb and mp5.
> This is my first water loop build but below are my temps gaming (Warzone around the 2hr mark)
> 
> Specs
> 5950x
> 3090 strix oc.
> 560 rad (x4 bionix p14 fans push)
> Both cpu and gpu on the same loop.
> Room temp - 22-23c
> Water temp 29-31c
> Backplate temps 37c (heat gun)
> 
> After reading this thread, I guess my gpu is reverse flow. I’m going to switch this weekend and Add 4 more fans to do a push pull. Also adding a heat probe to the back plate. I will report the difference.
> MP5 is in parrellel
> Running Unigen in a loop my gpu gets up to 46-47c Gaming temps (38-42c).


You have my exact setup 5950x+3090
The 3090 on the 390w bios
. 
I am running a dual rad, 420 and 360.

I think I have some lower Temps than you.


----------



## mardon

Gebeleisis said:


> Thanks. I get 121mhs, 290w(80% on 390w bios) mem 1100-1400 seems to not impact that much mhs.


I get around that on my gaming boot drive. I've got a second boot drive with the Nvidia Studio drivers, and it seems to hash better for some reason??


----------



## des2k...

kunit13 said:


> View attachment 2478981
> View attachment 2478982
> Been reading this thread off and on for 3 weeks!
> Anyway I see a lot of questions around the ek wb and mp5.
> This is my first water loop build but below are my temps gaming (Warzone around the 2hr mark)
> 
> Specs
> 5950x
> 3090 strix oc.
> 560 rad (x4 bionix p14 fans push)
> Both cpu and gpu on the same loop.
> Room temp - 22-23c
> Water temp 29-31c
> Backplate temps 37c (heat gun)
> 
> After reading this thread, I guess my gpu is reverse flow. I’m going to switch this weekend and Add 4 more fans to do a push pull. Also adding a heat probe to the back plate. I will report the difference.
> MP5 is in parrellel
> Running Unigen in a loop my gpu gets up to 46-47c Gaming temps (38-42c).


that's 12-15c gpu delta, what's the wattage during games / unigen ?


----------



## kunit13

Gebeleisis said:


> You have my exact setup 5950x+3090
> The 3090 on the 390w bios
> .
> I am running a dual rad, 420 and 360.
> 
> I think I have some lower Temps than you.


I’m running the strix bios (480w)


----------



## kunit13

des2k... said:


> that's 12-15c gpu delta, what's the wattage during games / unigen ?


Gaming


changboy said:


> There no way to the flow
> 
> There is no way to the flow on EK block but if you try it we will see what you will get.


I read that when I put the system together, but after seeing some comments in this thread I did a little more research. I guess it's 1-2°C less running it the correct way.


----------



## kunit13

des2k... said:


> that's 12-15c gpu delta, what's the wattage during games / unigen ?


Max watts in Unigine: 470-ish.
Gaming: ~350-ish. I will double-check tonight.
I run 1440p (240Hz), med-low settings, for FPS.
That leads me to my next question:
if the only game I play is Warzone and I'm not maxing out the power limit, will I gain any performance with an XOC BIOS?
Also, my overclock (Afterburner) is
+115 core
+800 mem
and it's Warzone stable.


----------



## des2k...

kunit13 said:


> Max watts in unigen (470ish)
> Gaming ( ~350ish) I will double check tonight.
> I run 1440p (240hz) med-low settings for FPS.
> That leads to me my next question?
> If the only game I play is Warzone and I’m not maxing out powerlimit, will I gain any performance with a xoc bios?
> Also my overclock is (Afterburner)
> +115 core
> +800 mem
> It’s Warzone stable.


I just re-tested Control: not moving was 488-490W; the delta was 15-16°C while moving, back to 15°C after, for my EK block with normal flow.
Looking at your delta you're already better with reverse flow, but my Zotac's VRM also dumps tons of heat into the block; the core VRM (9-phase) is already at 37A per 50A stage.

I like the XOC vBIOS; I'm pretty much locked at 2175 core / +945 mem regardless of load.

It stops at about 640W due to the PCIe slot hitting 100W (the max for this vBIOS), and Quake 2 RTX uses that much, which I don't really play at a 22°C delta.

Went from 12.9k in Port Royal to 14.8k+ with the XOC vBIOS, so I'm happy.


----------



## KedarWolf

changboy said:


> This Kingpin can cost $3,300 CAD after import fees and tax.


A Strix OC 3090 at retail is about $2710 with tax.


----------



## Zogge

Bykski comes with a flat metal backplate, which is also thicker than EK's. I never tried the Bykski without the MP5, but I can check if you want.


----------



## Zogge

They sold Kingpins in Sweden last week through a store: 27,000 SEK, or 3,262 USD.


----------



## marti69

kunit13 said:


> View attachment 2478981
> View attachment 2478982
> Been reading this thread off and on for 3 weeks!
> Anyway I see a lot of questions around the ek wb and mp5.
> This is my first water loop build but below are my temps gaming (Warzone around the 2hr mark)
> 
> Specs
> 5950x
> 3090 strix oc.
> 560 rad (x4 bionix p14 fans push)
> Both cpu and gpu on the same loop.
> Room temp - 22-23c
> Water temp 29-31c
> Backplate temps 37c (heat gun)
> 
> After reading this thread, I guess my gpu is reverse flow. I’m going to switch this weekend and Add 4 more fans to do a push pull. Also adding a heat probe to the back plate. I will report the difference.
> MP5 is in parrellel
> Running Unigen in a loop my gpu gets up to 46-47c Gaming temps (38-42c).


What did you use to cool the backplate?


----------



## kunit13



marti69 said:


> what did you use to coolback plate?


MP5 in parallel. I'm also using an EK backplate.


----------



## KedarWolf

kunit13 said:


> In
> 
> mp5 in parrelel. I’m also using a ek backplate.



EK is coming out with active backplates for the Founders 3090 soon, and other cards not long after.


----------



## kunit13

KedarWolf said:


> EK is coming out with active backplates for the Founders 3090 soon, and other cards not long after.


Yeah, I've seen that. I'll probably just stick with the MP5 for now. I'm going to have to drain my loop to make some changes this weekend, and I'd like to leave it alone for a while after that (it's only a month old).


----------



## J7SC

Zogge said:


> Bykski comes with a flat metal backplate which was also thicker than EK. I never tried Bykski without mp5 but I can check if you want.


Thanks, no need to disassemble.

Somewhat related: we have a cold snap here plus a long weekend, so I decided to move the system with the Strix into our sun room (no heat of its own) and open all the windows... brrr. What really surprised me is how little the results differed (per my earlier post > here) from a cozy heated room... when I did that with my 2x 2080 Tis before, the fps gain from the temp drop was much more pronounced; then again, they are water-cooled via a big dedicated loop, and the heat is spread out across two GPU blocks.

With the 8nm Strix and its GDDR6X, you have a much higher watt-per-square-inch situation. As already suggested, this is why I also think leaving the card on 'normal' vs. 'prefer max performance' in the NV panel is useful, given the sheer speed these Ampere cards spool up anyhow: overall much better temps on 'normal', and more or less identical results to 'max performance', even at 13°C lower ambient! The only real solution is a waterblock plus a cooled backplate, IMO.


----------



## geriatricpollywog

Does anybody use "Boost Lock" in Precision X1 or the Afterburner equivalent for locking the highest p-state?


----------



## marti69

kunit13 said:


> In
> 
> mp5 in parrelel. I’m also using a ek backplate.


Thanks for the reply. Can you please post a pic?


----------



## kunit13

marti69 said:


> Thanks for the reply, can you please post a pic?


----------



## Gebeleisis

des2k... said:


> I just re-tested control, not moving was 488-490w, delta was moving 15-16 back to 15c for my ek block normal flow.
> Looking at your delta you're already better with reverse flow, but my Zotac VRM also dumps tons of heat into the block. The core VRM (9-phase) is already at 37A per 50A stage.
> 
> I like the XOC vBIOS; I'm pretty much locked at 2175 core / +945 mem regardless of load.
> 
> It stops at about 640w due to the PCIe hitting 100w (the max for this vBIOS), and RTX Quake uses that much, which I don't really play, at a 22c delta.
> 
> Went from 12.9k in Port Royal to 14.8k+ with the XOC vBIOS, so I'm happy


That XOC BIOS is valid for 3x8-pin cards, right? 

I have a 2x8-pin card, so I think it will not pull all those watts


----------



## des2k...

Gebeleisis said:


> That XOC BIOS is valid for 3x8-pin cards, right?
> 
> I have a 2x8-pin card, so I think it will not pull all those watts


It works on 2x8-pin: it pulls 100w from the PCIe slot and 270w per pin, so 100% TDP is 640w across 2x8-pin + PCIe
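Those per-rail numbers add up exactly; here is a quick sanity check (a sketch, the constant names are mine and just restate the figures from this post):

```python
# Power budget implied by the XOC vBIOS figures quoted above:
# 100 W from the PCIe slot plus 270 W per 8-pin on a 2x8-pin card.
PCIE_SLOT_W = 100   # slot rail cap reported for this vBIOS
PER_8PIN_W = 270    # per-connector draw at 100% TDP
NUM_8PIN = 2        # 2x8-pin reference board

total_w = PCIE_SLOT_W + PER_8PIN_W * NUM_8PIN
print(total_w)  # 640
```

Note the slot figure is well above the 75 W the PCIe CEM spec allows, which is why people ask about the slot fuse later in the thread.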


----------



## Gebeleisis

des2k... said:


> It works on 2x8-pin: it pulls 100w from the PCIe slot and 270w per pin, so 100% TDP is 640w across 2x8-pin + PCIe


Wow. Can I get a link to that BIOS please?


----------



## Falkentyne

Rename it to ROM.

Good luck trying to flash it on a 3090 Founders Edition. No one has been successful with that, and most likely no one ever will be 

Note there is a bug in the BIOS which will limit power draw. The SRC3 limits are set to the defaults of 150W (max 175W), and PCIe plug #3 is linked to SRC3 and cannot exceed it. The SRC1 and SRC2 limits are set to 475W.


----------



## Gebeleisis

Thank you! 

I have a reference-design 3090 Palit Gaming with a 2x8-pin connector. 
So, will it work with my card? 

Other than this bios what would you recommend for daily driving? 

I am on water cooling.


----------



## Thanh Nguyen

16c-17c delta at 620w-635w in quake 2 rtx. Bykski block with liquid metal.


----------



## mirkendargen

Falkentyne said:


> Rename it to ROM.
> 
> Good luck trying to flash it on a 3090 founder's edition. No one has been successful with that and most likely no one ever will be
> 
> Note there is a bug in the BIOS which will limit power draw. The SRC3 limits are set to the defaults of 150W, and max 175W, and PCIE plug #3 is linked to SRC3, and cannot exceed SRC3. SRC1 and SRC2 limits are set to 475W.


In actual use I'm seeing 8pin #2 peak to 200W but #1 and #3 never getting above 150W. It seems like there's some sort of power balancing going on between #1 and #3 (or what's reported to GPU-z isn't accurate). Could be an awesome BIOS if a new version with the SRC3 limit raised emerges.


----------



## des2k...

Gebeleisis said:


> Thank you!
> 
> I have a reference design 3090 palit gaming with 2 pin connector.
> So, will it work with my card?
> 
> Other than this bios what would you recommend for daily driving?
> 
> I am on water cooling.


xoc kingpin 1000w for 2x8pin works, don't use strix xoc


http://overclockingpin.com/3998/NDA_%20xoc_tools_3998.zip


----------



## Gebeleisis

des2k... said:


> xoc kingpin 1000w for 2x8pin works, don't use strix xoc
> 
> 
> http://overclockingpin.com/3998/NDA_%20xoc_tools_3998.zip


So the xoc kingpin will work? 

What do you recommend for daily driving? 
I've been running the Gigabyte 390w BIOS since it came out and haven't been able to keep an active eye on this thread, so I don't know what new BIOSes have come out lately. 

Thanks!


----------



## gfunkernaught

Gebeleisis said:


> I've built mining rigs in the past. Still running after 3 years.
> With some gold Segotep psu running at 90% capacity all this time. I have yet to see a failure.
> 
> In my 3090 rig I have an HX1000i, and at 300w power draw it does not even break a sweat.


Oh no I wasn't concerned with ability, but efficiency. Plat vs Titanium will certainly change the profitability of mining. Small margins add up over time.


----------



## Biscottoman

J7SC said:


> Thanks..no need to disassemble
> 
> Somewhat related, we have a cold snap here + long weekend, so I decided to move the system w/ the Strix into our sun room (no heat by itself) and open all the windows...brrr. What I found really surprising is how little the results differed (per my earlier post > here) from a cozy heated room ...when I did that with my 2x 2080 Tis before, the temp drop gain in fps was much more pronounced - then again, they are water-cooled via a big dedicated loop, and the heat energy is spread out via two GPU blocks.
> 
> With the 8nm Strix and the GDDR6X, a much higher 'Watt per square inch' situation is present. As already suggested, this is why I also think leaving the card on 'normal' vs 'prefer max power' in the NV tab is useful, given the sheer speed these Ampere cards spool up anyhow...overall much better temps with 'normal', and more or less identical results to 'max power', even at 13 C ambient less ! The only real solution is to use a waterblock + cooled backplate, IMO


Shouldn't a thinner backplate like the Phanteks one (which is also flat) help the heat transfer between the GPU back and the mp5works block, improving overall backplate cooling? A thicker backplate would work better when not actively cooled, or with passive cooling, since its thickness gives it a higher total heat capacity.


----------



## des2k...

Gebeleisis said:


> So the xoc kingpin will work?
> 
> What do you recommend for daily driving?
> I am. Running the gigabyte 390w bios since it came out and haven't been able to keep an active eye on this thread. So I do not know what new bioses came out lately.
> 
> Thanks!


Going over 390w will give you some fps boost, but you'll increase the power draw by a lot.

In the Horizon Zero Dawn benchmark at 4K max, it's 89fps at 1900 core / 350w, and 97fps at 2175 core, but it will peak at 500w.
So for 150w more in heat, you get 8fps more. Most games will follow this rule with a heavy OC.

Some games scale pretty badly:
in Control at 4K with RTX and DLSS, going from 500w to 350w you lose about 2-4 fps.

On the positive side, it's very nice in winter to warm up the entire floor gaming at 500w GPU + 100w CPU lol
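Putting a rough number on that scaling (a sketch using only the Horizon Zero Dawn figures from this post; the variable names are mine):

```python
# Marginal cost of the heavy OC in Horizon Zero Dawn at 4K max,
# per the figures above: 89 fps at ~350 W vs 97 fps peaking at ~500 W.
fps_stock, w_stock = 89, 350
fps_oc, w_oc = 97, 500

extra_w = w_oc - w_stock        # 150 W more heat
extra_fps = fps_oc - fps_stock  # 8 fps more
print(extra_w / extra_fps)      # 18.75 W per extra fps
```

Nearly 19 W per extra frame, versus roughly 4 W per frame for the stock operating point, which is why the "heavy OC" tail of the curve is so inefficient.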


----------



## changboy

kunit13 said:


> Gaming
> 
> i read that when I put the system together. But after reading some comments in this thread I did a little more research. I guess it’s 1-2c less running it the correct way.


It's impossible! There's no in/out marked! How can you tell which is the in and which is the out? They're not specified lol.


----------



## kunit13

changboy said:


> It's impossible! There's no in/out marked! How can you tell which is the in and which is the out? They're not specified lol.


Hey man I’m not trying to argue. I’ll let you know MY experience after this weekend.

good read
Ek

To me 1-2c might drop me into a better bin. For you it might not matter.


----------



## des2k...

kunit13 said:


> Hey man I’m not trying to argue. I’ll let you know MY experience after this weekend.
> 
> good read
> Ek
> 
> To me 1-2c might drop me into a better bin. For you it might not matter.


I'm interested to know if there's a temp difference with the EK block. 
Maybe running reverse flow is better for the new rtx 30 blocks ?


----------



## kunit13

des2k... said:


> I'm interested to know if there's a temp difference with the EK block.
> Maybe running reverse flow is better for the new rtx 30 blocks ?


Man I hope not.... a lot of extra work... lol


----------



## changboy

kunit13 said:


> Hey man I’m not trying to argue. I’ll let you know MY experience after this weekend.
> 
> good read
> Ek
> 
> To me 1-2c might drop me into a better bin. For you it might not matter.


Hehehe ok, then I'll wait to hear from you and I won't argue hehehe.


----------



## changboy

But one thing to check: after you drain the coolant, set it back up, and restart the system, don't forget the temps will have dropped from sitting idle, so for a real comparison you need to wait 24-48 hours hehehe.


----------



## J7SC

...did another quick Superposition 8K run (Strix 3090 OC, stock BIOS, air, at room temp, power set to 'normal' in the NV tab)...added some GPU-Z screenshots for a quick question: would the KPE 520, and later (for w-cooling) the KPE XOC, change the relative relationships of the power parameters (in the orange box) significantly, or is that hardwired into the PCB? I'd like to see at least 70w or so on the PCIe slot, and a more balanced 3x 8-pin


----------



## changboy

Your GPU-Z looks perfect to me lol. I wanna see my PCIe like this and not at 85w hehehe.


----------



## J7SC

changboy said:


> Your gpuz look perfect to me lol. I wanna see my pcie like this and not see 85w hehehe.


...want full powar


----------



## changboy

Then look at my GPU-Z pic: I can do this all day but it scares me a lot, just did it to show you


----------



## J7SC

changboy said:


> Then look at my GPU-Z pic: I can do this all day but it scares me a lot, just did it to show you
> View attachment 2479029


 I can smell the wire sleeves melting from here


----------



## changboy

J7SC said:


> I can smell the wire sleeves melting from here


Hehehe I don't like doing it; maybe there's nothing to worry about, but I don't want to break anything, and 110w on the PCIe, I really don't know. And on this test I put the slider at 80%, still 20% left lol.


----------



## des2k...

J7SC said:


> I can smell the wire sleeves melting from here


Well, there are two 12v lines on the 24-pin. I doubt 100w will do anything; mobo traces should be ok.

It's the tiny wires on the PCIe connector / card.

Has anyone run 100w on the PCIe for like 2-3 years? Does it survive?


----------



## changboy

but in game it will never draw that power.


----------



## des2k...

changboy said:


> Hehehe I don't like doing it; maybe there's nothing to worry about, but I don't want to break anything, and 110w on the PCIe, I really don't know. And on this test I put the slider at 80%, still 20% left lol.


Unless you have shunts on it, it doesn't go past 100w on the XOC vBIOS, as that will hit 100% on normalized TDP and cap power.


----------



## des2k...

changboy said:


> but in game it will never draw that power.


rtx quake on 2x8pin past 2130core will use 100w on the pcie


----------



## J7SC

changboy said:


> Hehehe I don't like doing it; maybe there's nothing to worry about, but I don't want to break anything, and 110w on the PCIe, I really don't know. And on this test I put the slider at 80%, still 20% left lol.


Does your mobo have an auxiliary PCIe slot power connection (either 6/8 pin PCIe or Molex) ? Might not do anything with that GPU PCB and bios, but worth a test ?


----------



## changboy

J7SC said:


> Does your mobo have an auxiliary PCIe slot power connection (either 6/8 pin PCIe or Molex) ? Might not do anything with that GPU PCB and bios, but worth a test ?


Yes, I have a Molex and it's plugged in on my X299 Rampage Omega. I'm not worried about the mobo, but about the card itself.


----------



## changboy

des2k... said:


> rtx quake on 2x8pin past 2130core will use 100w on the pcie


Ok but i dont have this game hehehe.


----------



## changboy

And I decided not to buy the Kingpin because I don't have any place to put that 360mm rad, but I'm waiting for another card.
It's a secret


----------



## J7SC

changboy said:


> And I decided not to buy the Kingpin because I don't have any place to put that 360mm rad, but I'm waiting for another card.
> It's a secret


...just buy 2x Galax HoF or 2x KPE and...and...that Banggood 3450W psu


----------



## changboy

J7SC said:


> ...just buy 2x Galax HoF or 2x KPE and...and...that Banggood 3450W psu


I doubt that 3450w PSU has good voltage regulation hehehe, but it's possible.


----------



## changboy

For me this card is a better choice :


----------



## J7SC

changboy said:


> I doubt that 3450w psu have a good voltage regulation hehehe, but its possible.


....can you say 'rrrripple'? Actually, I'd like to see it tested


----------



## WayWayUp

Do I open it up and kill the value? I'm so tempted to take it out and bench it, but I don't want to take a $1,000 value hit on it being “used”.
My thought process is there is maybe a 10% chance it will actually be better than my golden 3090 FTW3.

Then again, Optimus will make blocks for this, and the memory OC will surely smoke my FTW.
Decisions, decisions


----------



## gfunkernaught

@WayWayUp 
GPU prices are still climbing, so that used price hit will shrink. Unless Kingpins are super hard to find and people are willing to pay double what you paid. One thing I regret about buying a custom 3090 is that everything for it is custom, like the blocks and backplates, unlike reference boards.

EK's new active backplate cooling raises my brow. Regardless of whether it's in series or parallel, your overall board temp will be higher, because the backplate block is feeding warm water to the front block. It doesn't sound like it will be all that worth it just to keep the RAM cool, especially since that can be done with those heatsinks and a fan.


----------



## gfunkernaught

What kind of clocks/voltages are yallz getting with 3x8-pin PCBs and a 500-520w BIOS? Please include core temp too. Like gaming in general.


----------



## changboy

gfunkernaught said:


> @WayWayUp
> GPU prices are still climbing so that used price hit will shrink. Unless Kingpins are super hard to find and people are willing to pay double what you paid. One thing I regret about buying a custom 3090 is everything for it is custom like the blocks and backplates unlike reference boards.
> 
> EK's new active backplate cooling raises my brow. Regardless of if it is in series or parallel, you overall board temp will be higher because if the backplate block is feeding warm water to front block, it doesn't sound like it will be all that worth it just to keep the ram cool. Especially since it can be done with those heatsinks and a fan.


So why has a Kingpin been sitting on eBay for 4100 CAD for a month now?









EVGA GeForce RTX 3090 FTW3 ULTRA HYBRID 24GB *FREE SHIPPING IN US & NO RESERVE!! | eBay
www.ebay.ca


----------



## GQNerd

WayWayUp said:


> View attachment 2479046
> 
> 
> do i open it up and kill the value? Im so temped to take it out and bench it but i dont want to take a $1,000 value hit on it being “used”
> My thought process is there is maybe a 10% chance it will actually be better than my golden 3090ftw3
> 
> then again, optimus will make blocks for this and the memory oc will surely smoke my ftw.
> Decisions decisions


Used or not you’re going to get more than MSRP.. 

So did u buy it cause you wanted it, or always intended to scalp and just wanted to flex real quick? lol


----------



## Thanh Nguyen

25c water temp:

I scored 15 603 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

8c water temp:

I scored 15 946 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## J7SC

I noticed that on my dual Arctic P12 PWM PST backplate cooling setup, there was a dead spot right where it counts...so I flipped one of the Arctic P12s around (pull instead of push) so that I could move the whole assembly, aligning one fan (push) dead center with the back VRAM area...the other, now-pull fan is right in front of the VRM exhaust slot of the air-cooled Strix OC...still on stock BIOS of course until water-cooling, but that re-arrangement seems to have helped...as before, I seem to get better results with the NV tab on 'normal' than 'prefer maximum performance'... that 2235 MHz GPU clock was the starting speed, certainly not at 6x C, but I am impressed with this card in stock config and can only wonder what it'll do on water with a higher PL BIOS


----------



## Biscottoman

J7SC said:


> I noticed that on my dual Arctic P12 pwm PST backplate cooling setup, there was a dead spot right where it counts...so I flipped one of the Arctic P12s around (pull instead of push) so that I could move the whole assembly, aligning one of the fans (push) dead center with all of the back VRAM area...the other 'now pull' fan' is right in front of the VRM exhaust slot of the air cooled Strix OC...still on stock bios of course until water-c, but that re-arrangement seems to have helped...as before, I seem to get better results with NV tap on 'normal' than 'prefer maximum performance'... that 2235 MHz GPU was starting speed, certainly not at 6x C, but I am impressed with this card in stock config and can only wonder what it'll do on water and a higher PL bios
> 
> View attachment 2479057


What Port Royal score can you achieve on air with that OC?


----------



## Nizzen

changboy said:


> Then look my pic of gpuz : i can do this all day but i scare a lot, just do it to show u
> View attachment 2479029


Show us powerdraw from the wall


----------



## changboy

Nizzen said:


> Show us powerdraw from the wall


Why power draw from the wall? You don't believe my GPU-Z pic?? I don't understand why you ask me this. Everyone posts a pic of their GPU-Z; nobody asks for power draw from the wall lol.

I can tell you how to do it with your card + the 1000w BIOS, but it's dangerous hehehe.


----------



## Nizzen

changboy said:


> Why power draw from the wall ? You dont belive my gpuz pic ??I dont understand why you ask me this. Everyone who post a pic of there gpuz, nobody ask powerdraw from the wall lol.
> 
> I can tell you how to do it with your card + the 1000w bios but its dangerous hehehe.


I just want to see the power draw from the wall: total power for the whole computer.

I ran the 1000w Kingpin BIOS on a 3090 Suprim X. It drew about 550w (GPU-Z) max on air.

When I was benchmarking on cold water with a shunt-modded Strix 3090, the total draw with a 5.5GHz 10900K was about 750w from the wall in Time Spy Extreme.

Just curious about the total draw from the wall. Sometimes GPU-Z isn't 100% accurate 

PS: I don't care about dangerous, I'm running the 1000w BIOS on my Strix daily


----------



## dante`afk

Thanh Nguyen said:


> 25c water temp:
> 
> I scored 15 603 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090) - www.3dmark.com
> 
> 8c water temp:
> 
> I scored 15 946 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090) - www.3dmark.com


Did it allow you to push the clock manually higher the lower the temps were? What I noticed with mine is that while it could maintain higher clocks during benching, actually manually increasing the clocks did not work (in MSI AB)


----------



## mattxx88

My system, a 9900KS at 5.3GHz with a shunt-modded 3090 FE: the max draw I saw from the wall was around 700w (670-680w)


----------



## J7SC

Biscottoman said:


> What Port Royal score can you achieve on air with that OC?


...low 15000s for the Strix OC stock / air, which I am happy with. I haven't yet found the most efficient VRAM speed for this card, but have narrowed the range to somewhere between the mid and high 1300s MHz. I thought that the oldish 5960X / X99 combo would hold me back, but as Port Royal is less dependent on the CPU setup, it doesn't seem to be a big issue, apart from IPC maybe. As with Superposition 8K, best results come with the NV tab on 'normal' rather than 'prefer max'. That could change w/ w-cooling though. I also blended in my best 2x 2080 Ti w-cooled results...I don't think I'll reach those even on water with a custom BIOS on the 3090


----------



## Esenel

Nizzen said:


> When I was benchmarking on cold water with a shuntmodded Strix 3090, the total draw with 5.5Ghz 10900k was about 750w from the wall in timespy extreme.


810W peak from the wall with 10900K OC and the Strix with 1000W bios :-D

The 800W chiller is on another power line 🙃


----------



## Thanh Nguyen

dante`afk said:


> did it allow you to push the clock manually higher the lower the temps were? what I noticed with mine is, while it could maintain higher clocks during benching, actually manually increasing clocks did not work (in MSI AB)


I use the effective clock in the Afterburner settings, so I set the clock I want: +185 offset, then lock 1.1v and slide it up 1 bin.


----------



## Nizzen

Esenel said:


> 810W peak from the wall with 10900K OC and the Strix with 1000W bios :-D
> 
> The 800W chiller is on another power line 🙃


Tested my 5900X @ 4600 all-core 1.25v and the 3090 Strix with the 1000w BIOS:
about 850w draw from the wall in Time Spy Extreme.


----------



## WayWayUp

I'll try my luck again with a Kingpin Hydro Copper later. Incredibly disappointed with the memory overclock of this Kingpin sample. I used the LN2 BIOS switch and left everything at stock settings; I just wanted to see what I could get out of it and whether it was worth keeping.

The core was...good.

It's hard to judge properly without using the Classified tool and unlocking the power limit, but as-is it's not as good as my FTW3. I could potentially make it as good with a waterblock / liquid metal / heavy tweaking, but I will never get over the very average memory overclock


----------



## J7SC

WayWayUp said:


> I'll try my luck again with a kingpin hydrocopper later. Incredibly disappointed with the memory overclock of the kingpin sample. used ln2 bios switch and left everything at stock settings. Just wanted to see what i could get out of it and whether it was worth keeping.
> 
> the core was...good
> 
> Its hard to judge properly without using classified tool and unlocking power limit but as is it's not a good as my ftw3. I could potentially make it as good with a waterblock/ liquid metal/ heavy tweaking but i will never get over the very average memory overclock


...Luumi has a series of OC tips for the KPE 3090, including the 'back switches' etc. on air/water, VRAM settings...here's just one of the vids; check for the others (earlier ones; later he gets into LN2 etc.)


----------



## changboy

When I showed my GPU-Z power draw I was at 80% on the slider with a GPU power draw of 696w; I can go higher with the GPU.
My CPU is the 10980XE and I can OC it to 5.0 or 5.1GHz for benching, with a daily OC at 4.9GHz.
I believe total system power draw was high, maybe more than 1000w.

The thing is, my PCIe was at 110w on my FTW3, so maybe the PCIe fuse was close to popping, what do you think?
How many watts can that fuse on the PCIe handle?


----------



## Thanh Nguyen

850w-930w draw from the wall.


----------



## gdias92

I just purchased a ZOTAC RTX 3090 (ZT-A30900D-10P), the only one available.

How do I know which BIOS I can use to flash it?


----------



## Nizzen

changboy said:


> When i show my gpuz power draw i was at 80% of the slider gpu power draw 696w, i can go higher with the gpu.
> My cpu is the 10980xe and i can oc it at 5.0 or 5.1ghz for bench, and daily oc at 4.9ghz.
> I belive total system power draw was high, maybe more then 1000w.
> 
> The thing is my pcie was at 110w on my ftw3 so maybe the pcie fuse was near to pup up, what you think ?
> How much W can handle that fuse on the pcie ?


Maybe your EVGA 3090 card is drawing much more than a 3090 Strix with that 1000w BIOS? With almost 700w GPU power draw, you must be in the 2300mhz range?


----------



## Thanh Nguyen

Nizzen said:


> Maybe your EVGA 3090 card is drawing much more than 3090 strix with that 1000w bios? With almost 700w gpu powerdraw, you must be in the 2300mhz range?


A low-quality chip requires more power to run.


----------



## SoldierRBT

Interesting video about effective clocks vs target clocks


----------



## changboy

Nizzen said:


> Maybe your EVGA 3090 card is drawing much more than 3090 strix with that 1000w bios? With almost 700w gpu powerdraw, you must be in the 2300mhz range?


I didn't check the MHz, just the power draw, while watching a 4K Blu-ray with MPC-HC in a specific way that can use very high GPU power. I can draw more power, but I don't want to kill my card; maybe I could draw 800w on the GPU doing this, but I stop at 700w lol. 
Just use the madVR filter + LAV video filter set to DXVA2 copy-back and you will see how high this card can go.


----------



## mouacyk

SoldierRBT said:


> Interesting video about effective clocks vs target clocks


Does the reduced gap between effective and target clocks actually result in higher scores?

Wow, never knew about the voltage locking. Been overclocking like a n00b. Thanks.


----------



## gfunkernaught

Thanh Nguyen said:


> Low quaility chip requires more power to run.


My trio wants 1068mv for 1980mhz so I'm screwed.


----------



## des2k...

gfunkernaught said:


> My trio wants 1068mv for 1980mhz so I'm screwed.


1980 effective frequency? Or just 1980mhz on the freq curve?


----------



## des2k...

mouacyk said:


> Does the reduced gap between effective and target clocks actually result in higher scores?


You can get effective clocks as little as 5-8mhz below the set clock by shift-dragging the entire curve up first (like +152), then picking your voltage point and dragging that up another +50 or +100, and limiting the max clock with the smi tool.
It's pretty hard to get stable on my side with 2145 set and 2140 effective.

Now, if I set the OC differently, where the effective clock and voltage are allowed to drop, I can set 2175 and be around 2145-2155 effective, and it's stable.

2VF Point, for 2175 uses 1075mv - 1087mv, smi limit to 2175
No shift drag

Point 2









Point 1


----------



## Snoopy69

What is the best result without a hard mod on a Strix OC? 
I get this result with the KPE BIOS...








3DMark Time Spy Benchmark Top 30
www.overclock.net


----------



## Snoopy69

Esenel said:


> 810W peak from the wall with 10900K OC and the Strix with 1000W bios :-D
> 
> *The 800W chiller is on another power line* 🙃


Which one do you have?


----------



## Esenel

Snoopy69 said:


> Which one do you have?


This one:








Durchlaufkühler Hailea Ultra Titan 1500 (HC500 = 790W cooling capacity)
www.aquatuning.de





But I would recommend going one tier up.
I am not completely satisfied with the cooling capability.

But it is still a good water reservoir :-D


----------



## Snoopy69

dante`afk said:


> I'm laughing at all the people buying their EK block and then having 50c on the GPU asking if that is a good temp.
> 
> No, it's not lol.


Hehe, the cheapest block (Bykski) achieved better results than the expensive EK 
(Greetings from Luxx )


----------



## Snoopy69

Esenel said:


> This one:
> 
> 
> 
> 
> 
> 
> 
> 
> Durchlaufkühler Hailea Ultra Titan 1500 (HC500 = 790W cooling capacity)
> www.aquatuning.de
> 
> 
> 
> 
> 
> But I would recommend going one tier up.
> I am not completly satisfied with the cooling capability.
> 
> But it is still a good water reservoir :-D


The same as mine (bought in 2007), but it is too expensive (very poor cooling power).
Last month I bought a better, bigger one for half the price


----------



## Snoopy69

Esenel said:


> But I would recommend going one tier up.


Which one?


----------



## des2k...

Snoopy69 said:


> Hehe, the cheapest block (BykSky) reached better results than the expansive EK
> (Grüße ausm Luxx )


better than ek ? by how much ?


----------



## Snoopy69

Max 12k

But never (ever) believe the sensors, GPU-Z, or HWiNFO without measuring manually


----------



## des2k...

Snoopy69 said:


> Max 12k


I'm curious if I can drop my delta that much.
So about 470-500w I get 13c-16c delta with my ek.
Going over that, I get a 22c delta at 600w+

Are you saying I could be 10c cooler on the gpu by switching ?


----------



## Thanh Nguyen

des2k... said:


> I'm curious if I can drop my delta that much.
> So about 470-500w I get 13c-16c delta with my ek.
> It's going over that, I get 22c delta at 600w+
> 
> Are you saying I could be 10c cooler on the gpu by switching ?


Use liquid metal and you will see at least a 5c reduction.


----------



## Snoopy69

des2k... said:


> I'm curious if I can drop my delta that much.
> So about 470-500w I get 13c-16c delta with my ek.
> It's going over that, I get 22c delta at 600w+
> 
> Are you saying I could be 10c cooler on the gpu by switching ?


Maybe yes, but wait...
Do you use the EK backplate?
I'm using the Bykski block with the Strix backplate (the Bykski backplate is still in the box).
It is not possible to use the steel cross (from the Strix) with the Bykski backplate.
The steel cross increases the contact pressure much more; that's why I use the Strix backplate. 

Or maybe your waterblock and the GPU die don't match (convex to convex etc).
And check your thermal paste thickness: too much is fatal to the GPU die


----------



## Biscottoman

What delta should I expect on a full-cover WB like the Phanteks one with LM?


----------



## des2k...

Snoopy69 said:


> Maybe yes, but wait...
> Do you use the EK-Backplate?
> I´m using the BykSky-Block with the Strix-BP (the BykSky-BP is still in the box)
> It is not possible to use the steel cross (from the Strix) with the BykSky-BP
> This Steel cross increase the contact pressure much more. Thats why i ´m use the Strix-BP
> 
> Or maybe your waterblock and the GPU-Die do not match (convex to convex etc)


I use the EK backplate. I can't use my stock (Zotac) backplate and also don't have the cross. Zotac uses 4 screws with springs; I can't use those either.

I can't find a place that sells the square pressure bracket. I suspect it's a lack of pressure on the edge of the die.

A razor blade comes out flat on the EK GPU core, no light going through. I know the board / GPU die is flat, as the Zotac cooler makes perfect edge contact (the paste is compressed, unlike with my EK at the edge).
I'm seeing exactly the same pattern with the EK as JayzTwoCents had before he lapped his LN2 pot. The stock cooler doesn't have this issue.


----------



## des2k...

Found a reliable way to lock the freq / voltage with Afterburner. Not Ctrl+L, that doesn't work at all.

On Ampere, any given frequency on the curve is only valid for 3 voltages; then the frequency has to move up.
The effective frequency also doesn't drop when forcing these 4 voltage points: you get locked into the 4th voltage point regardless of GPU load or temps.

Using two points, forcing the 2160 bin (2175 on the curve) to use 1069/1075/1081mv, the next bin (2175) gets locked to the 4th point.
smi limit to 2175
prefer max performance

right point: 2253 @ 1087mv (the voltage you want to run at) (apply first)
left point: 2175 @ 1068mv (-3 bins on the voltage) (apply second)

*On the 2253 point, -15mhz or +15mhz will +/- the effective frequency

Not sure if this is useful, but writing it down anyway.
I had some trouble getting a voltage lock on Ampere due to load and temp changes
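The "smi limit" step above refers to nvidia-smi's locked-clocks feature. A minimal sketch of wrapping that call (the helper name, `dry_run` flag, and GPU index are my own; only `nvidia-smi -i <idx> -lgc <min>,<max>` itself is a real option):

```python
import subprocess

def lock_gpu_clocks(max_mhz: int, gpu_index: int = 0, dry_run: bool = True):
    # nvidia-smi -lgc (--lock-gpu-clocks) takes a min,max pair in MHz;
    # locking 0..max_mhz caps the core clock while still allowing idle.
    cmd = ["nvidia-smi", "-i", str(gpu_index), "-lgc", f"0,{max_mhz}"]
    if dry_run:
        return " ".join(cmd)  # show the command without needing a GPU
    return subprocess.run(cmd, check=True)

print(lock_gpu_clocks(2175))  # nvidia-smi -i 0 -lgc 0,2175
```

`nvidia-smi -rgc` resets the lock afterwards; note locking clocks needs admin/root privileges.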


----------



## gfunkernaught

des2k... said:


> 1980 effective frequency ?or just 1980mhz on the freq curve?


Stock and on air. All I did was raise the PL to 102%, I have the Trio with the stock 380w bios. That was the highest I saw it go during the bright memory benchmark.


----------



## des2k...

...


----------



## des2k...

gfunkernaught said:


> Stock and on air. All I did was raise the PL to 102%, I have the Trio with the stock 380w bios. That was the highest I saw it go during the bright memory benchmark.


Well yeah, that's an RTX/DLSS load with the 380W vBIOS, so the frequency will be super low.

Just use Afterburner to cut off / flatten the curve; you don't need more than 900mV for that low a frequency.


----------



## Snoopy69

des2k... said:


> I use the EK backplate. I can't use my stock Zotac backplate, and I don't have the cross either. Zotac uses 4 screws with springs; those won't work here either.
> 
> I can't find anywhere that sells the square pressure bracket. I suspect it's a lack of pressure at the edge of the die.
> 
> A razor blade comes up flat against the EK GPU core, no light showing through. I know the board / GPU die is flat because the Zotac cooler makes perfect edge contact (the paste is compressed, unlike with my EK at the edge).
> I'm seeing exactly the same pattern JayzTwoCents had before he lapped his LN2 pot with the EK. The stock cooler doesn't have this issue.


OK, that is the problem...
The EK block and the GPU die don't match.


----------



## Snoopy69

Thanh Nguyen said:


> Use liquid metal and you will get at least a 5°C reduction.


Never use LM when the waterblock and GPU die don't match.


----------



## mirkendargen

Snoopy69 said:


> Never use LM, when Waterblock and GPU-Die doesnt match


GPUs are bare silicon, so LM is fine.


----------



## sultanofswing

Every time I have used liquid metal on a watercooled GPU, the temps were within 2-3°C of the Kryonaut I normally use.


----------



## Esenel

Snoopy69 said:


> The same as mine (bought in 2007), but it is too expensive (very poor cooling power).
> Last month I bought a better, bigger one for half the price.


Yes.
I was thinking about doubling the cooling power.
Same brand.
But now I'm curious which one you bought ;-)
Can you share a link?
Thanks


----------



## Snoopy69

You mean this one?

Durchlaufkühler Hailea Ultra Titan 2000 (HC1000 = 1650W cooling capacity) - www.aquatuning.de

I bought this one in January on Amazon for 299€:

Aohuada 8L CW-5200 industrial water chiller (for CNC engraving machines / CO2 lasers) - www.amazon.de

btw:
You know about this one?

Alphacool Eiszeit 2000 Chiller - Black (compressor chiller, max. 1500W cooling capacity, with an integrated pump) - www.aquatuning.de

Then look at this one. It's the same as "Scalpacool":

CW-5200 water chiller for laser engravers (genuine S&A) - www.ebay.de

btw2:
You have a message


----------



## Rhadamanthys

Could someone please run the PCI Express feature test in 3DMark (takes about a minute) and tell me their bandwidth? I get 22.86 GB/s (prefer max performance, no max framerate, G-Sync off) on a reference 3090. However, I've seen a 3080 go up to 26.28 GB/s. The 3090 should be up there too, no? I'm afraid it's my riser cable (LINKUP PCIe 4.0). It's clearly doing more than PCIe 3.0, but could the cable still be limiting the bandwidth?

Card's under water so I can't be bothered to take it apart and slide it directly into the MB.


----------



## EarlZ

Rhadamanthys said:


> Could someone please run the PCI Express feature test in 3DMark (takes about a minute) and tell me their bandwidth? I get 22.86 GB/s (prefer max performance, no max framerate, G-Sync off) on a reference 3090. However, I've seen a 3080 go up to 26.28 GB/s. The 3090 should be up there too, no? I'm afraid it's my riser cable (LINKUP PCIe 4.0). It's clearly doing more than PCIe 3.0, but could the cable still be limiting the bandwidth?
> 
> Card's under water so I can't be bothered to take it apart and slide it directly into the MB.


I am only getting 13 GB/s on my 3080. This is on a Z390, PCIe Gen3. Should I be worried?


----------



## Rhadamanthys

EarlZ said:


> I am only getting 13 GB/s on my 3080. This is on a Z390, PCIe Gen3. Should I be worried?


No, that's totally fine and as expected, since you're running PCIe 3. I need someone with a PCIe 4 board to do the test.

EDIT: Found the culprit. It's a fresh system and my memory was still running at default (2000 MHz, I believe). After setting XMP, I get the 26 GB/s I should.
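For reference, the theoretical ceilings back these numbers up: PCIe 3.0 x16 tops out near 15.75 GB/s and 4.0 x16 near 31.5 GB/s, so ~13 GB/s measured on Gen3 and ~26 GB/s on Gen4 are both in the expected range. A quick sketch of the arithmetic (128b/130b encoding; real tests always measure somewhat below the ceiling):

```python
# Theoretical PCIe link bandwidth per generation, to sanity-check 3DMark results.
def pcie_gbs(gen, lanes=16):
    """Approximate usable GB/s for PCIe gen 3 or 4 at the given lane count."""
    gt_per_s = {3: 8.0, 4: 16.0}[gen]     # raw transfer rate per lane (GT/s)
    per_lane = gt_per_s * 128 / 130 / 8   # GB/s per lane after 128b/130b encoding
    return lanes * per_lane

print(f"Gen3 x16: {pcie_gbs(3):.2f} GB/s")  # 15.75
print(f"Gen4 x16: {pcie_gbs(4):.2f} GB/s")  # 31.51
```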


----------



## Alex24buc

For undervolting, I want to make sure I have it right. I read the previous posts and everything else I could find. If I want to run my Palit at [email protected], is it enough to move the slider in the curve to the desired value and then hit apply? Others say an offset should be used, like decreasing the core clock first. Thanks!


----------



## kx11

Rhadamanthys said:


> No, that's totally fine and as expected, since you're running PCIe 3. I need someone with a PCIe 4 board to do the test.
> 
> EDIT: Found the culprit. I do have a fresh system and my memory was still running at default (2000 mhz I believe). After setting XMP, I get the 26 GB/s as I should.


I have a 3090 on an X570 Ryzen board and I'm getting 13 GB/s too; my RAM is running at 3600 MHz. Should I be worried? I tried setting the PCIe speed to Gen4 and got the same results.















I do have 2 more PCIe slots in use btw, so that makes 3 slots occupied.


----------



## gfunkernaught

des2k... said:


> well yeah, that's rtx,dlss load with 380w vbios, freq will be super low,
> 
> just use afterburner to cut off/flat the curve, you don't need more than 900mv for that low frequency


Assuming I have a good sample. I got my card from Micro Center, and I've never gotten good silicon from them. The waterblock _should_ be coming today. Why would the BIOS default to such a high voltage for such a low frequency?


----------



## zyalon

Just use the EVGA power boost and connect it to the next PCIe slot. It's just 6 bucks on Amazon.



changboy said:


> When I was at 80% on the slider, GPU-Z showed a GPU power draw of 696W; I can go higher with the GPU.
> My CPU is the 10980XE and I can OC it to 5.0 or 5.1 GHz for benching, with a daily OC at 4.9 GHz.
> I believe total system power draw was high, maybe more than 1000W.
> 
> The thing is, the PCIe slot was at 110W on my FTW3, so maybe the PCIe fuse was close to popping. What do you think?
> How many watts can that fuse on the PCIe slot handle?


----------



## Shawnb99

For those with the KPE, what sort of sag, if any, are you guys seeing? I finally installed mine yesterday and so far I'm not seeing any sag at all. We'll see after a few more days, but it seems more solid than my 2080 Ti, which sagged badly.
It will be interesting to see whether the Optimus block changes that.


----------



## jomama22

Snoopy69 said:


> Hehe, the cheapest block (Bykski) got better results than the expensive EK
> (Greetings from Luxx)


I think it's more a block consistency thing and what the loop setup actually is. My EK Strix block has a delta of 18°C over air (not water) at 620-630W when heat-soaked for 6+ hours. At 500-520W the delta is 14°C over the same period. I imagine there are people whose rads get overwhelmed and can't effectively dissipate the heat past a given power level.

It also depends on how the case is set up. If you're hotboxing the PC with more intake (especially if all the intake goes through rads) than exhaust, you will have higher temps, and vice versa.

Those setups alone can vary GPU die temperatures by 3-5°C, let alone whether there are independent loops for the CPU and GPU or just a single loop.

This is why trying to aggregate block data between different setups is genuinely pointless; it really is a crapshoot. Also, the only way to really have a fair test is for everyone to use Furmark and adjust the power slider to create a strict max power draw. Someone's "600-630W" (like mine above) could be a very different load compared to someone else reaching the same power draw by another method.


----------



## changboy

I have updated my Port Royal score hehehe: 15 458

I scored 15 458 in Port Royal
Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## J7SC

...replacing the old X99 setup I had for the Strix 3090 OC with an X570 Asus Crosshair Hero VIII & 3950X...X99 had and still has some issues stemming from sub-zero use way back, but if properly resurrected, it will get reborn as a Linux...in the meantime X570 + 3090 = PCIe 4.0  

...'It's alive' (on the testbench, 1st boot)...modded AIO for now with six Arctic P12s in push/pull, full custom loop when 3090 gets the w-cool treatment


----------



## changboy

J7SC said:


> ...replacing the old X99 setup I had for the Strix 3090 OC with an X570 Asus Crosshair Hero VIII & 3950X...X99 had and still has some issues stemming from sub-zero use way back, but if properly resurrected, it will get reborn as a Linux...in the meantime X570 + 3090 = PCIe 4.0
> 
> ...'It's alive' (on the testbench, 1st boot)...modded AIO for now with six Arctic P12s in push/pull, full custom loop when 3090 gets the w-cool treatment
> 


You will need a 5950X hehehe, I'm kidding


----------



## J7SC

changboy said:


> You will need a 5950x hehehe, i kidding


I tried...but they had no 5950X, just 1x 3900X and 2x 3950X (good prices, though). After days, weeks and months of 'out of stock, out of stock, out of stock', I was happy to find something decent at non-scalper prices on short notice...


----------



## marti69

Hello guys, I just switched from a 10900K on a Maximus XII Formula to a Ryzen 5950X on an ASRock X570 Taichi. My RTX 3090 Trio is performing about 10% slower than with the 10900K, even with the 5950X OC'd to 4.7GHz. Is anyone else seeing lower scores on AMD?


----------



## KedarWolf

marti69 said:


> Hello guys, I just switched from a 10900K on a Maximus XII Formula to a Ryzen 5950X on an ASRock X570 Taichi. My RTX 3090 Trio is performing about 10% slower than with the 10900K, even with the 5950X OC'd to 4.7GHz. Is anyone else seeing lower scores on AMD?


You probably want to set up Curve Optimizer rather than an all-core overclock on the 5950X. People are getting 5.0GHz+ single-core with it, and I get 4.883GHz effective multicore.


----------



## Thanh Nguyen

changboy said:


> I have updated my Port Royal score hehehe: 15 458
> 
> I scored 15 458 in Port Royal
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Did you put your radiator in the snow? It was 28°F outside and I could only bring my water temp down to 8°C with my radiator in the window.


----------



## gfunkernaught

Any thoughts and experience with liquid metal on the 3090? I'm reading about it drying out, or not being necessary since the waterblock makes direct die contact. I've also read about a 3-5°C drop in temps, which I want, because that's the difference between, say, 45°C and 40°C. If I can't do LM, I do have IC Diamond, which has worked best for GPUs for me in the past. For whatever reason, Kryonaut didn't perform as well as ICD.


----------



## mirkendargen

gfunkernaught said:


> Any thought's and experience with liquid metal on the 3090? I'm reading about it drying out or not being necessary since the water block is direct-die contact. I've also read about a 3-5c drop in temps, which I want, because that means the difference between say 45c and 40c . If I can't do LM, I do have IC Diamond which has worked the best for GPUs in the past for me. For whatever reason, Kryonaut didn't perform as good as ICD.


It depends on your coolant temps really. You have two dials for getting better GPU temps: lower your coolant temps via more/faster rad/fan, or get a better block/TIM. It depends on your case/setup/budget to know which of those dials is easier to adjust. But if you can keep your GPU at 40C or less already, you aren't really going to gain anything performance-wise anyway.


----------



## changboy

Thanh Nguyen said:


> Did u put your radiator in the snow? It was 28f outside and I can only bring my water temp down to 8c when I put my radiator on the window.


I put my PC outside on the balcony at -15°C lol. I did one Port Royal bench and my coolant froze hehehe, so I brought it back inside and my PC is happy now hehehe. I also unplugged all my HDDs before doing this and just kept my 4x M.2 online.


----------



## J7SC

changboy said:


> I put my pc outside on balcon at -15c lol. I just done 1 port royal bench and my coolant freez hehehe, so i bring back inside and my pc happy now hehehe. I also unplug all my hdd before doing this, just keep my 4x m.2 online.


...one of the rainX windshield wiper fluids has been successfully used in computers at sub-0 ..they have several types, it is the 'orange' one I believe. Just make sure to double-check the label re. compatibility with tubes, seals etc. but again, one of their products is ok for w-cooled PCs, the others, not so much...


----------



## changboy

J7SC said:


> ...one of the rainX windshield wiper fluids has been successfully used in computers at sub-0 ..they have several types, it is the 'orange' one I believe. Just make sure to double-check the label re. compatibility with tubes, seals etc. but again, one of their products is ok for w-cooled PCs, the others, not so much...


I just did that once for fun hehehe. I saw ice forming in my reservoir and then got a blue screen; I think my CPU went over 100°C because the coolant wasn't moving lol. After restarting it in my room, everything was fine. Since it was too cold outside I wanted to do more benches but only got 5 minutes hehehe.
I will try again when it's not so cold; I saw my LCD coolant temp sensor show -3°C 😂


----------



## gfunkernaught

mirkendargen said:


> It depends on your coolant temps really. You have two dials for getting better GPU temps: lower your coolant temps via more/faster rad/fan, or get a better block/TIM. It depends on your case/setup/budget to know which of those dials is easier to adjust. But if you can keep your GPU at 40C or less already, you aren't really going to gain anything performance-wise anyway.


See my sig for my loop setup. I have the Corsair Graphite 760T. My EK block came today, and I have to assemble the loop (new tubes) and run Sysprep for at least 15 or so hours; there was a lot of algae buildup. I don't think my loop can keep the 3090 at 40°C or below. My 2080 Ti with the 380W BIOS (shunt-modded to ~425-450W) pushed beyond 40°C when I looped Time Spy Extreme GT2; the temp reached 43°C after about 15 minutes. The thing is, I don't want to keep draining my loop just to repaste the GPU. I want to set it and forget it and do it right the first time. Is LM better for air cooling, since there is more heat involved? It did wonders for my CPU: 62°C was the hottest any core got at [email protected], and that's with basically passive fan curves; my fans are controlled by GPU temp via Argus Monitor.

Here's a pic of my case currently. Top and front fans are intake; the two big 200mm fans are exhaust, and the one 140mm fan at the rear is also exhaust.

Excuse the messy appearance. It will look very different by tonight.


----------



## ttnuagmada

Hey, I may have missed it being posted already, but the newest HWiNFO beta with the 3090 hotspot sensor is out. The hotspot on mine peaked about 13°C higher than the GPU temp peak (55°C compared to 42°C in a GT2 loop). What is everyone else getting?









Free Download HWiNFO Software | Installer & Portable for Windows, DOS
www.hwinfo.com


----------



## Style68

des2k... said:


> Found a reliable way to lock the frequency / voltage with Afterburner. Not Ctrl+L; that doesn't work at all.
> 
> On Ampere, any given frequency on the curve is only valid for 3 voltage points, then the frequency has to move up a bin.
> Effective frequency also doesn't drop when you force these 4 voltage points; you get locked to the 4th voltage point regardless of GPU load or temps.
> 
> Using two points: force the 2160 bin (2175 on the curve) to use 1069/1075/1081 mV, then the next bin gets locked to the 4th point.
> smi limit to 2175
> prefer max performance
> 
> right 2253 @ 1087mV (the voltage you want to run at) (apply first)
> left 2175 @ 1068mV (-3 bins on the voltage) (apply second)
> 
> *Moving the 2253 point by -15MHz or +15MHz will add/subtract the same from the effective frequency.
> 
> Not sure if this is useful, but writing it down anyway.
> I had trouble getting a voltage lock on Ampere due to load and temp changes


Thanks for this. It did help me to maintain a constant frequency.


----------



## changboy

ttnuagmada said:


> Hey i may have missed it being posted already, but the newest HWINFO Beta with 3090 hotspot sensor is out. The hotspot difference on mine peaked about 13C higher than the GPU temp peak (42C comapred to 55C in a GT2 loop) What is everyone else getting?:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Free Download HWiNFO Sofware | Installer & Portable for Windows, DOS
> 
> 
> Start to analyze your hardware right now! HWiNFO has available as an Installer and Portable version for Windows (32/64-bit) and Portable version for DOS.
> 
> 
> 
> 
> www.hwinfo.com


I just played Shadow of the Tomb Raider and got this temp:


----------



## ttnuagmada

That's the memory temp. They added a "hotspot" temp in the new beta.


----------



## mirkendargen

ttnuagmada said:


> That's the memory temp. They added a "hotspot" temp in the new beta.
> 


I read about this but couldn't find anywhere what it's ACTUALLY measuring. Is it a VRM temp or what? "Hotspot" is a pretty freaking vague metric name...


----------



## ttnuagmada

mirkendargen said:


> I read about this but couldn't find anywhere what it's ACTUALLY measuring. Is it a VRM temp or what? "Hotspot" is a pretty freaking vague metric name...


Yeah I didn't see any clarification on it either. I figure it's the hottest part of the chip maybe?


----------



## des2k...

Here's mine: for a 38°C GPU, the hotspot is 50.7°C. 4 minutes of Control at 485W.
Not sure if this is normal until others post their temps.


----------



## des2k...

mirkendargen said:


> I read about this but couldn't find anywhere what it's ACTUALLY measuring. Is it a VRM temp or what? "Hotspot" is a pretty freaking vague metric name...


Looks like it's the highest temp reported on the die, versus GPU temp, which is the average of all sensors on the die.
Not memory and not VRM temps.


----------



## geriatricpollywog

Shawnb99 said:


> For those with the KPE what sort of sag if any are guys seeing? Just finally installed mine yesterday and so far I'm not seeing any sag at all, we'll see after a few more days but it seems more solid then my 2080ti, that sagged so badly.
> Will be interesting if the Optimus blocks changes that.


No sag on my 3090 Kingpin.

My first 2080ti Kingpin did not sag, but I had to RMA it and the 2nd one had minor sag/wiggle.


----------



## changboy

OK, I have downloaded the latest beta; this temp is at 500W power draw:


----------



## des2k...

At around the same wattage, with something heavy on the memory controller (edge of the die), the delta increases by a lot:
35°C GPU, 56°C hotspot, a 21°C delta vs. the 12.7°C delta while gaming.

Can you guys try a mining bench vs. a gaming load? Is your average/hotspot delta bigger?


----------



## Beagle Box

After 15 minutes of looping Port Royal:


----------



## changboy

des2k... said:


> Around the same wattage with something heavy on the memory controller (edge of the die) the delta increases by alot.
> 35c gpu, hotspot 56c, 21 delta vs gaming 12.7 delta.
> 
> Can you guys try mining bench vs gaming load ? Is your average/hotspot delta bigger ?
> 


Can you just show me a link to learn how to mine with my GPU, and also what program are you using?


----------



## des2k...

changboy said:


> Can you just show me a link to learn how minning with my gpu and also what program you are using ?


I don't know where to find those tutorials lol; I use it as a memory controller / memory stress tool.

I just downloaded PhoenixMiner_5.5c_Windows.zip and run PhoenixMiner.exe -bench. Ctrl+C stops the test.


----------



## mirkendargen

changboy said:


> Can you just show me a link to learn how minning with my gpu and also what program you are using ?








NiceHash - Leading Cryptocurrency Platform for Mining and Trading
www.nicehash.com


----------



## ttnuagmada

Bigger delta with the mining benchmark, but the max hotspot was about the same. This was after going from +800 to +1200 on the memory. Not sure what to make of that, really.


----------



## Beagle Box

So GPU hotspot temperature seems to have no real meaning for now.
With temps all under 60°C, what's the point of knowing this number?


----------



## des2k...

Beagle Box said:


> So GPU Hotspot Temperature seems to have no real meaning for now.
> With temps all less than 60*, what's the point of knowing this number?


Looking at the numbers, the hotspot temp will increase even with a lower load on the GPU but a heavy load on the memory.
So to me, this is the memory controller temp.


----------



## Falkentyne

Beagle Box said:


> So GPU Hotspot Temperature seems to have no real meaning for now.
> With temps all less than 60*, what's the point of knowing this number?


Not everyone here is on water cooling, you know?


----------



## Beagle Box

Falkentyne said:


> Not everyone here is on water cooling you know?
> _snip_


Sure. Okay. 70°C. What part of the GPU is likely to fail at 70°C?


----------



## ttnuagmada

des2k... said:


> Looking at the numbers, the hotspot temp will increase even with lower load on the gpu but a heavy load on the memory.
> So to me, this is the memory controller temp.


My hotspot temp didn't really increase going from GT2 to PhoenixMiner, and that's with a +400 bump to memory.


----------



## Falkentyne

Beagle Box said:


> Sure. Okay. 70*. What part of the GPU is likely to fail at 70*?


And you're missing the point.
Hotspot temperature helps tell you whether you have a proper cooling mount or not. An 11-12°C delta means you did a good job with your thermal paste.
People were getting 60°C GPU temps and 95°C hotspots on Vega 64s because they had insufficient thermal paste contact on part of the interposer, so that told them to remount it properly (e.g. by screwing in the two x-bracket screws furthest from the slot first, then the two bottom ones, to resistance, then going back to the first two to fully tighten, followed by the bottom two).
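That rule of thumb is easy to automate if you log both sensors. A minimal sketch, assuming the 11-12°C "good mount" threshold from the post (a community heuristic, not an official NVIDIA figure):

```python
# Mount-quality check from average GPU temp vs. hotspot, per the rule above.
def hotspot_delta(gpu_c, hotspot_c):
    """Difference between hotspot and average die temperature, in C."""
    return hotspot_c - gpu_c

def mount_looks_ok(gpu_c, hotspot_c, threshold_c=12.0):
    """True if the hotspot/average delta is within the 'good mount' range."""
    return hotspot_delta(gpu_c, hotspot_c) <= threshold_c

print(mount_looks_ok(42, 55))  # 13C delta: borderline remount territory
print(mount_looks_ok(60, 95))  # the Vega 64 case, 35C delta: clearly bad
```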


----------



## des2k...

ttnuagmada said:


> my hotspot temp didnt really increase going from GT2 to phoenixminer and that's with a +400 bump to memory.


Yes, but your delta is bigger with mining, since the average core temp drops but the hotspot stays the same.

For me it confirms I don't have enough pressure at the edge of the die, since the delta doubles when hitting the memory controller with a heavy load.


----------



## ttnuagmada

des2k... said:


> yes, but your delta is bigger with mining since avg core temp drops but hotspot stays the same.
> 
> For me it confirms I don't have enough pressure on the edge of the die since delta doubles when hiting the mem controller with heavy load.


I don't think it's conclusive either way; there may be another variable we aren't considering. The delta is bigger because the GPU isn't being hit as hard, yet the memory controller is being pushed much harder and still doesn't increase the hotspot temp.


----------



## J7SC

des2k... said:


> Looking at the numbers, the hotspot temp will increase even with lower load on the gpu but a heavy load on the memory.
> So to me, this is the memory controller temp.


It's still an HWiNFO beta release, so things may yet change, but after some quick GT2 runs stressing the VRAM, hotspot is actually 3°C lower than junction temp, unlike under regular operation. FYI, after the mobo/CPU swap today, I haven't put my backplate fans back on yet.


----------



## Beagle Box

Falkentyne said:


> And you're missing the point.
> Hotspot temperature helps tell you if you have a proper cooling mount done or not. 11-12C delta means you did a good job with your thermal paste.
> People were getting 60C GPU temps and 95C hotspots on Vega 64's because they had insufficient thermal paste contact on part of the interposer, so that told them to remount it properly (e.g. by screwing in the two x-bracket screws furthest from the slot first, then the two bottom ones, to resistance, then going back to the first 2 to full tighten+bottom 2.


So what points on the card are being measured on the RTX 3090 so that the hotspot can be determined?


----------



## Falkentyne

Beagle Box said:


> So what points on the card are being measured on the RTX 3090 so that the hotspot can be determined?


You would have to ask Nvidia that question. None of us know yet. Only the AMD Hotspots are known.


----------



## J7SC

...had half a day to finish setting up the X570/3950X and I'm making some progress. I haven't OC'd the CPU yet, but RAM went to 3800 at 1.35V with Infinity Fabric at 1900 without a hiccup on the first try...still need to get a feel for OC'ing this CPU itself.

A couple of early tests with the Strix 3090 and the new CPU/mobo in the spoiler. The GPU in PCIe 4 mode is almost twice as fast in the 3DMark PCI Express feature test compared to its prior PCIe 3 home. No idea what to make of the mesh shader test, first time I have run it  Superposition 8K is getting close to my previous best run; I'm looking forward to more when the OC'ing and tuning is complete



Spoiler


----------



## Zogge

My hotspot, idle.

Load 490-500W gaming 30 min.

VRMs:


----------



## gfunkernaught

Just got the new EK block installed with LM, two coats of nail polish on the surrounding resistors, and new tubing. A pain in the arse this was. I had to use the support bracket; the thing is heavier than the stock cooler.

Right now the loop is running cleaning fluid, and will be until tomorrow. I can finally have a cigarette.


----------



## Lord of meat

All done, right before I added more coolant.
I now get 2100-2115 @ 1.05V (the curve is 1.05V @ 2160) and memory +848. Power limit all the way up, and I did not touch the voltage slider.
Max temp is 52°C.
Stable in RDR2, cybercrap and AC Valhalla.
The card is an MSI Trio using an EVGA 520W BIOS.


----------



## captn1ko

Hi guys,

I've got two 3090 Strix OCs at home. How should I test their overclocking potential? The better card will be on air for 4 weeks, then get a waterblock.

Thanks for your advice.

Cheers


----------



## Nizzen

captn1ko said:


> Hi guys,
> 
> ive got 2 3090 Strix OC at home. How should i test their overclocking potential? The better card will be 4 weeks on air, then get an waterblock.
> 
> Thanks for your advice.
> 
> cheers


Check both at stock settings and the same ambient temperature. Open the voltage curve (Ctrl+F) in MSI Afterburner to see which card has the lowest voltage set for a given frequency.
Many look at 950mV and check the frequency there. Example: 950mV = 1965MHz.
The card with the lower voltage for, say, 2000MHz is often the better card for air/water.
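The comparison can be scripted once you've read both curves out of Afterburner. A sketch with made-up sample points; the curve values below are hypothetical, not measured from real cards:

```python
# Binning helper: which card needs less voltage to reach a target frequency.
def voltage_at(curve, target_mhz):
    """Lowest voltage (mV) on the curve that reaches target_mhz, else None."""
    candidates = [mv for mv, mhz in curve.items() if mhz >= target_mhz]
    return min(candidates) if candidates else None

card_a = {950: 1965, 1000: 2040, 1050: 2100}  # mV -> MHz (hypothetical)
card_b = {950: 2010, 1000: 2070, 1050: 2130}

better = "B" if voltage_at(card_b, 2000) < voltage_at(card_a, 2000) else "A"
print(f"Better bin at 2000 MHz: card {better}")  # card B: 950 mV vs 1000 mV
```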


----------



## ViRuS2k

Made the biggest mistake; got everything ready to install today.

Had my 3090 Gaming Trio X,
had my Bykski waterblock + water temp add-on for it,
had my MP5 serial GPU memory block.

Then I proceeded to start tearing down and draining the system,
and then I found out that the waterblock doesn't even fit. It fits perfectly left to right and top to bottom, but the problem is I can't put the glass panel back on the front of my case.
That's what I get for doing my build in an O11 Dynamic XL case.

I'm so, so annoyed  

I guess I will have to build this in the future when I upgrade my case.
I could fit everything but would have to run a permanently open case :/ and I'm not willing to sacrifice that, as the glass panels are what makes the O11 Dynamic so good looking....

I guess I could try a vertical mount option, but then I would lose my sound card :/ I wonder if I can get a dual vertical mount for this case with 2 slots and 2 riser cables so I can have the card + the sound card vertically, hmmm, need to check that. :/ Still, I'm pretty pissed off.


----------



## changboy

If I were you, I'd install it like this and not put the glass on the case, because at the price of your waterblock and backplate cooling I'd want to use it. Maybe you can add a spacer to create a gap for the front glass so everything fits?


----------



## Nizzen

ViRuS2k said:


> Made the biggest mistake, got eveything ready to install today
> 
> had my 3090 gaming trio x
> had my byski waterblock + water temp addon for it
> had my MP5 serial gpu memory block
> 
> then i proceeded to start tairing down and draining system
> then i found out that the waterblock does not even fit , it fits perfectly left to right and perfectly top to bottom but the problem is i cant put the glass pannel back on the front of my case,
> thats what i get for installing my build in a 011 Dynamic Xl case
> 
> im so so annoyed
> 
> guess i will have to build this in the future when i upgrade my case
> i could fit everything but would have to have a permanently open case :/ im not willing to sacrifice that as the glass panels are what makes the 011 dynamic so good looking....
> 
> i guess i could try a vertical mount option but then i would loose my sound card :/ wonder if i can get a dual vertical mount for this case with 2 slots on it with 2 riser cables so i can have the card + the sound card vertically hmmmm need to check that. :/ still im pritty pissed off.


Lol, I tried the same with a Bykski and a 3090 Strix 🤣

Had to use another case. Using the ROG Strix Helios now.

Small cases are often too small 

There's a reason the LD Cooling V8 case is still my main computer 😅


----------



## changboy

I'm still using my Corsair 750D, for 9 years now I think lol. When I got that case I put a CrossFire of R9 290s inside it lol; so many years with that case now.
I've looked at many other cases but didn't find one; sometimes the case is too tall, sometimes there's not enough space for everything I have to put inside, so I've stuck with this Corsair case.


----------



## Shawnb99

ViRuS2k said:


> Made the biggest mistake; got everything ready to install today.
> 
> Had my 3090 Gaming X Trio,
> had my Bykski waterblock + water-temp add-on for it,
> had my MP5 serial GPU memory block.
> 
> Then I proceeded to start tearing down and draining the system,
> and then I found out that the waterblock doesn't even fit. It fits perfectly left to right and perfectly top to bottom, but the problem is I can't put the glass panel back on the front of my case.
> That's what I get for installing my build in an O11 Dynamic XL case.
> 
> I'm so annoyed.
> 
> Guess I'll have to do this build in the future when I upgrade my case.
> I could fit everything, but I'd have to leave the case permanently open :/ I'm not willing to sacrifice that, as the glass panels are what make the O11 Dynamic so good looking...
> 
> I guess I could try a vertical mount option, but then I would lose my sound card :/ I wonder if I can get a dual vertical mount for this case, with two slots and two riser cables, so I can have the GPU + the sound card mounted vertically. Need to check that. :/ Still pretty annoyed.


The Bitspower vertical mount is a dual-slot one, so it could work. Or just scrap the sound card and buy a DAC and headphone amp; if it's for surround sound, get an AV receiver. If you need upgraded onboard sound that badly, spend the sound-card money on a better motherboard with better audio, or go external.

Issues like this make me so thankful for my CaseLabs; I'm forced to add extra stuff to it or it looks empty.


----------



## J7SC

First-world problems! I only use open 'cases' such as the TT Core P5, etc. ...I'm ready for NVIDIA's next card, the 3-foot-long, 1-foot-deep Hxxx.


----------



## Shawnb99

J7SC said:


> First-world problems! I only use open 'cases' such as the TT Core P5, etc. ...I'm ready for NVIDIA's next card, the 3-foot-long, 1-foot-deep Hxxx.


🤬 TT


----------



## Wihglah

ViRuS2k said:


> Made the biggest mistake; got everything ready to install today.
> 
> Had my 3090 Gaming X Trio,
> had my Bykski waterblock + water-temp add-on for it,
> had my MP5 serial GPU memory block.
> 
> Then I proceeded to start tearing down and draining the system,
> and then I found out that the waterblock doesn't even fit. It fits perfectly left to right and perfectly top to bottom, but the problem is I can't put the glass panel back on the front of my case.
> That's what I get for installing my build in an O11 Dynamic XL case.
> 
> I'm so annoyed.
> 
> Guess I'll have to do this build in the future when I upgrade my case.
> I could fit everything, but I'd have to leave the case permanently open :/ I'm not willing to sacrifice that, as the glass panels are what make the O11 Dynamic so good looking...
> 
> I guess I could try a vertical mount option, but then I would lose my sound card :/ I wonder if I can get a dual vertical mount for this case, with two slots and two riser cables, so I can have the GPU + the sound card mounted vertically. Need to check that. :/ Still pretty annoyed.


Can you not go with a vertical mount?


----------



## rocklobsta1109

ttnuagmada said:


> Bigger delta with the mining benchmark, but the max hotspot was the same-ish. This was after going from +800 to +1200 on the mem. Not sure what to make of that really.
> 
> View attachment 2479310


Are you mining to mine, or mining to benchmark temps? If you're mining to mine, then you can turn the power down a TON and still get the same or better hashrate. 290-300W seems to be the sweet spot for my card; it will do 120 MH/s right at 300W.
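To put a number on why the power limit barely costs hashrate, here is a quick efficiency sketch. The 120 MH/s @ 300W point is from the post above; the 350W comparison simply assumes the same hashrate at the card's rated TDP, which is an illustrative assumption rather than a measured result.

```python
# Hashrate-per-watt for the power-limited mining point described above.
# 350 W is the 3090's rated TDP, used only as a hypothetical comparison.
def efficiency_mhs_per_watt(hashrate_mhs: float, power_w: float) -> float:
    """Hashrate delivered per watt of board power."""
    return hashrate_mhs / power_w

tuned = efficiency_mhs_per_watt(120, 300)   # power-limited sweet spot
stock = efficiency_mhs_per_watt(120, 350)   # same hashrate at full TDP
print(f"tuned: {tuned:.3f} MH/s per W")
print(f"stock: {stock:.3f} MH/s per W")
```

Since GDDR6X bandwidth, not core power, bounds the Ethash rate, the tuned point delivers the same work for roughly 15% less electricity.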


----------



## changboy

rocklobsta1109 said:


> Are you mining to mine, or mining to benchmark temps? If you're mining to mine, then you can turn the power down a TON and still get the same or better hashrate. 290-300W seems to be the sweet spot for my card; it will do 120 MH/s right at 300W.


What does 120 MH/s come to in USD per hour or per day?


----------



## mirkendargen

changboy said:


> What does 120 MH/s come to in USD per hour or per day?


It varies, but $12-$15/day, not factoring in electricity.
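The "not factoring in electricity" caveat is easy to fold in. The $12-$15/day revenue and 300W draw are from the posts above; the $0.12/kWh electricity price is an assumption for illustration, and real revenue moves with coin price and network difficulty.

```python
# Net daily mining profit: quoted gross revenue minus the electricity
# burned running the card 24 hours at the given board power.
def daily_profit(revenue_usd: float, power_w: float, usd_per_kwh: float) -> float:
    kwh_per_day = power_w * 24 / 1000          # 300 W -> 7.2 kWh/day
    return revenue_usd - kwh_per_day * usd_per_kwh

low  = daily_profit(12.0, 300, 0.12)   # pessimistic revenue estimate
high = daily_profit(15.0, 300, 0.12)   # optimistic revenue estimate
print(f"net: ${low:.2f} to ${high:.2f} per day")
```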


----------



## Pepillo

ViRuS2k said:


> Made the biggest mistake; got everything ready to install today.
> 
> Had my 3090 Gaming X Trio,
> had my Bykski waterblock + water-temp add-on for it,
> had my MP5 serial GPU memory block.
> 
> Then I proceeded to start tearing down and draining the system,
> and then I found out that the waterblock doesn't even fit. It fits perfectly left to right and perfectly top to bottom, but the problem is I can't put the glass panel back on the front of my case.
> That's what I get for installing my build in an O11 Dynamic XL case.
> 
> I'm so annoyed.
> 
> Guess I'll have to do this build in the future when I upgrade my case.
> I could fit everything, but I'd have to leave the case permanently open :/ I'm not willing to sacrifice that, as the glass panels are what make the O11 Dynamic so good looking...
> 
> I guess I could try a vertical mount option, but then I would lose my sound card :/ I wonder if I can get a dual vertical mount for this case, with two slots and two riser cables, so I can have the GPU + the sound card mounted vertically. Need to check that. :/ Still pretty annoyed.


I had that problem too, also with an MSI Trio plus the Bykski block in the Lian Li O11 Dynamic XL: I couldn't close the glass side window. After a few minutes of panic, I managed to get the window seated on three of the four attachment points, the lower two and the upper front. With the top rear left unsecured, the window stays in place and you can't tell it isn't fully seated. The part where the block fittings go presses against the glass, but it's no problem.


----------



## OC2000

My Strix with the EK block fits in the O11D XL with plenty of room; is the Bykski one much bigger?


----------



## Nizzen

OC2000 said:


> My Strix with the EK block fits in the O11D XL with plenty of room; is the Bykski one much bigger?


Yes, it's taller.


----------



## ttnuagmada

rocklobsta1109 said:


> Are you mining to mine, or mining to benchmark temps? If you're mining to mine, then you can turn the power down a TON and still get the same or better hashrate. 290-300W seems to be the sweet spot for my card; it will do 120 MH/s right at 300W.


That was purely to benchmark.


----------



## ViRuS2k

Pepillo said:


> I had that problem too, also with an MSI Trio plus the Bykski block in the Lian Li O11 Dynamic XL: I couldn't close the glass side window. After a few minutes of panic, I managed to get the window seated on three of the four attachment points, the lower two and the upper front. With the top rear left unsecured, the window stays in place and you can't tell it isn't fully seated. The part where the block fittings go presses against the glass, but it's no problem.
> 
> View attachment 2479422



Nice. My card's block sticks out even further than that, though, as I bought the little LCD screen that attaches to it, so it's about an inch sticking out of the side of the case. I might just run the system without the side window; I mean, I lose a lot of cash if I don't install the stuff, lol.
I also might be able to make little extenders so that the glass sits an inch out but still looks good, lol. Will figure something out, I hope, hahah.


----------



## edsontajra

Guys, can someone post Port Royal results with the Strix OC 480W BIOS on air?

I already have an FTW3, but I was lucky enough to find a Strix that is on the way. I'm curious about the performance.


----------



## jura11

I recently built a loop in a Lian Li O11 Dynamic XL using a Bykski RTX 3090 waterblock with no issues, although to date I have only used reference-PCB cards (KFA2, Zotac, Palit) in Lian Li cases, and had no problems fitting those blocks.

Hope this helps.

Thanks, Jura


----------



## J7SC

edsontajra said:


> Guys, can someone post Port Royal results with the Strix OC 480W BIOS on air?
> 
> I already have an FTW3, but I was lucky enough to find a Strix that is on the way. I'm curious about the performance.


...Strix 3090 OC on air, stock bios from about a week ago...


----------



## captn1ko

Hey Guys 

What is your experience of a "good" chip?
2000mhz @900mv? [email protected]?


----------



## gfunkernaught

Just got my Trio under water and flashed to the EVGA 500W BIOS; been running the Bright Memory benchmark for about 10 minutes. Messing around with clocks, core temp peaked at 41C. The effective clock is still way too low: when AB shows 2100MHz, the effective clock is 2065MHz. The weird thing is that the PCIe pin #3 power usage in HWiNFO isn't available. Which 500W or 520W BIOS should I be using?


----------



## Cyclops

Does anyone know if the EVGA 3090 XOC BIOS works on the MSI Suprim X, or whether it's even compatible?


----------



## gfunkernaught

Never mind; flashed to the KP 520W BIOS, and so far so good OC-search-wise. Running Heaven just to see where it will crash. Been running 2190MHz (2170MHz effective) @1093mv. Is that too high a voltage? I haven't touched the curve yet.
Update: Heaven crashed. These BIOSes do seem to keep the core and effective clocks 1 bin apart, which is nice.


----------



## gfunkernaught

Just ran a Port Royal bench using the KP 520W BIOS. Anyone know why the PL is getting triggered even though my average power usage was 480W? I have +130 on the core, +1200 on the memory. I've been seeing some really high clocks in this thread. Not that I'm not impressed with going from stock upper-1700s to upper-1800s MHz with a max temp of 43C. I don't have a water-temp readout yet; the sensor is in place, though. Ambient temp is 19C. So, basically consistent with what I've been reading about this card: it runs very hot and hungry.


----------



## Falkentyne

gfunkernaught said:


> Just ran a Port Royal bench using the KP 520W BIOS. Anyone know why the PL is getting triggered even though my average power usage was 480W? I have +130 on the core, +1200 on the memory. I've been seeing some really high clocks in this thread. Not that I'm not impressed with going from stock upper-1700s to upper-1800s MHz with a max temp of 43C. I don't have a water-temp readout yet; the sensor is in place, though. Ambient temp is 19C. So, basically consistent with what I've been reading about this card: it runs very hot and hungry.
> View attachment 2479451


The power limit gets triggered whenever the thermal "blip" happens (you know, the one that causes temps to report 6500C on EVGA cards, but also a "thermal throttle: Yes" or a blue single bar in GPU-Z).

Also, please post your GPU-Z readout from the Port Royal run so we can see if you were really power-limit throttling. A TDP Normalized % that is higher than 100% and higher than TDP % can also cause a power alert (this can appear even if it doesn't actually throttle; it shows as a "mini" green bar in GPU-Z, rather than a large green bar).

We also need to see your TDP Normalized % and TDP % in HWiNFO. Normalized affects all individual rails, including the MSVDD limits.


----------



## gfunkernaught

@Falkentyne


----------



## Falkentyne

gfunkernaught said:


> @Falkentyne
> View attachment 2479452


Looks about right. At those clocks and memory speeds, you're going to draw more power.
Nice score.


----------



## Lord of meat

captn1ko said:


> Hey Guys
> 
> What is your experience of a "good" chip?
> 2000mhz @900mv? [email protected]?


I would keep it. Mine does 2100-2130 at 1.05V.


----------



## gfunkernaught

@Falkentyne 
Thanks. So it's normal for power-limit throttling to occur even though I'm far from 520W? I wonder if the power mode in the NV control panel would/could affect that. I remember that Normal vs. Prefer Maximum Performance would move the clocks from the 1800s (Normal) to the mid-1900s (Prefer Max).


----------



## Thanh Nguyen

gfunkernaught said:


> @Falkentyne
> Thanks. So it's normal for power-limit throttling to occur even though I'm far from 520W? I wonder if the power mode in the NV control panel would/could affect that. I remember that Normal vs. Prefer Maximum Performance would move the clocks from the 1800s (Normal) to the mid-1900s (Prefer Max).


Just use the 1000W BIOS and forget all the limits. Set aside an emergency fund in case your card dies. 😅 I have been using the 1000W BIOS since it appeared on the website and don't have any issues.


----------



## Falkentyne

gfunkernaught said:


> @Falkentyne
> Thanks. So it's normal for power-limit throttling to occur even though I'm far from 520W? I wonder if the power mode in the NV control panel would/could affect that. I remember that Normal vs. Prefer Maximum Performance would move the clocks from the 1800s (Normal) to the mid-1900s (Prefer Max).


Your PCIe slot power draw is a bit high. I don't have an EVGA card, so I don't know what's limiting you, whether it's the PCIe slot power or some rail that isn't visible. I also don't know what the MSVDD rail limits are, as they are not visible in any BIOS editor from what I can see.

However, from your HWiNFO screenshot, it looks like the 8-pin rail is limiting you.

You're at 173W on the 8-pin. This rail throttles at 175W, as the 8-pin rail is limited by the SRC rail that is linked to it (there are three SRC rails, one for each 8-pin, on most non-1000W 3090 BIOSes; these seem to be set to 150W default and 175W maximum).

If that is the case, the ONLY solution for you is to shunt mod (the large primary shunts, all seven of them).

Supposedly, you can stack-shunt the "small" (1206) shunts to increase the hidden MSVDD and NVVDD power limits (not part of the 7 main 2512 shunts) on _reference_ boards, but this did NOT work on a Founders Edition. It's completely unknown what the small shunts (the two of them not hooked up to the RGB connector) control on an FE card; both dante`afk and I tried modding them and it did absolutely nothing, yet there is continuity between the two stacked small shunts...
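For readers unfamiliar with why stacking a shunt raises the power ceiling: the controller measures the voltage drop across a known shunt resistance to infer current, so soldering a second shunt in parallel lowers the real resistance and makes the card under-read its own power. A rough sketch, assuming 5 mΩ shunts (a common value, not a measured one for any specific board):

```python
# Effect of a stacked (parallel) shunt on the card's reported power.
# The controller assumes the stock resistance, so reported power scales
# by R_effective / R_stock. The 5 mOhm figure is an assumption.
def reported_power(actual_w: float, r_stock: float, r_stacked: float) -> float:
    r_eff = (r_stock * r_stacked) / (r_stock + r_stacked)  # parallel combination
    return actual_w * r_eff / r_stock

# Stacking an identical 5 mOhm shunt halves the sensed resistance,
# so a real 500 W draw is reported as 250 W:
half_read = reported_power(500.0, 0.005, 0.005)
print(half_read)
```

This is also why shunt-modded cards need careful monitoring: every power readout (TDP %, per-rail watts) is skewed by the same factor.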


----------



## gfunkernaught

Thanh Nguyen said:


> Just use the 1000w bios and forget all the limit. Set aside emergency fund in case your card is dead.😅. I have been using 1000w bios since its on the website and dont have any issue.


That's my next step, lol. I just tried with the latest nvflash64 using -6 and it said invalid firmware.


----------



## gfunkernaught

Thanh Nguyen said:


> Just use the 1000w bios and forget all the limit. Set aside emergency fund in case your card is dead.😅. I have been using 1000w bios since its on the website and dont have any issue.


I just tried to flash and got the output below. I also tried --protectoff and it said:
"Setting EEPROM software protect setting...

Setting EEPROM protection complete."

E:\Drivers\nvBios>nvflash64.exe -6 EVGA.RTX3090.24576.201124.rom
BIOS Cert 3.0 Verification Error, Update aborted.
Nothing changed!
ERROR: Invalid firmware image detected.
E:\Drivers\nvBios>


----------



## Falkentyne

gfunkernaught said:


> I just tried to flash and got the output below. I also tried --protectoff and it said:
> "Setting EEPROM software protect setting...
> 
> Setting EEPROM protection complete."
> 
> E:\Drivers\nvBios>nvflash64.exe -6 EVGA.RTX3090.24576.201124.rom
> BIOS Cert 3.0 Verification Error, Update aborted.
> Nothing changed!
> ERROR: Invalid firmware image detected.
> E:\Drivers\nvBios>


You're using the wrong nvflash. Use the latest one on techpowerup.


----------



## gfunkernaught

Falkentyne said:


> You're using the wrong nvflash. Use the latest one on techpowerup.


Yup just noticed that. My bad. Flashed.


----------



## Beagle Box

del


----------



## gfunkernaught

Heh, so far the XOC BIOS works well... like REALLY well... almost too well...?
I can lock my core clock to 2145MHz (2130MHz effective) at 1062mv by capping the PL to 55%.
While running Bright Memory, one of the 8-pins reached 180W! I could tell which cable it was; the connector was warm, not hot. I have an HX1200, which has 100A on the +12V rail shared by four PCIe ports, so 25A each, and that one cable is pulling 15A. Whew! The core temp reached 48C, LOL. So much for expensive water blocks. Then again, it could just be my cooling capacity: one 360x55 and one 240x55, which was great for a 2080 Ti at 400W. Times are a-changing.


----------



## gfunkernaught

XOC 1kW Port Royal run: 14 979, not too shabby. I'm sure once I play a game it will crash within seconds.

I scored 14 979 in Port Royal (Intel Core i7-8700K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com


----------



## Nizzen

J7SC said:


> ...Strix 3090 OC on air, stock bios from about a week ago...
> 
> View attachment 2479440


This looks like a shuntmodded cold air result


----------



## J7SC

Nizzen said:


> This looks like a shuntmodded cold air result


....well, thanks. Originally posted it a few days ago, along with another one. Stock Strix OC on air... though I did have the window open. Wait till I water-cool this beast.


----------



## gfunkernaught

Sure enough, Quake 2 RTX crashed within minutes, at around 540W and 44C. I'm beginning to suspect that this was all for naught. I don't even have enough faith in shunt mods at this point.


----------



## J7SC

gfunkernaught said:


> Sure enough, Quake 2 RTX crashed within minutes, at around 540W and 44C. I'm beginning to suspect that this was all for naught. I don't even have enough faith in shunt mods at this point.


On the STOCK BIOS and totally stock settings (no OC / no PL+), what does your card boost to as 'max' in GPU-Z when running a quick 3D render?


----------



## Nizzen

J7SC said:


> ....well, thanks. Originally posted it a few days ago, along with another one. Stock Strix OC on air... though I did have the window open. Wait till I water-cool this beast.


Looks like you got a nice bin.
Could you post the V/F curve in Afterburner? Stock settings and ambient temp?


----------



## changboy

With my benchmark settings I scored 15 458 in Port Royal, but my daily settings for all my games give me 14 800, and that's when my PC is hot after playing BF1 for 2 hours.
I checked this to see how my performance drops between benching and gaming. It's good to know.
Sure, some games can run with more OC, but others will crash, so I apply one setting for every game I own.
A constant 2100MHz gaming boost clock = 14 800 in Port Royal on my system.


----------



## J7SC

Nizzen said:


> Looks like you got a nice bin.
> Could you post the V/F curve in Afterburner? Stock settings and ambient temp?


....will look at that on the weekend. Right now I'm finishing the upgrade to the new mobo/CPU; it looks like a good bin as well. No OC on the CPU yet, just efficiency tuning; also, it runs Infinity Fabric at 1900 / RAM at 3800 right out of the box... should be a nice combo with the Strix.


----------



## gfunkernaught

J7SC said:


> On the STOCK BIOS and totally stock settings (no OC / no PL+), what does your card boost to as 'max' in GPU-Z when running a quick 3D render?


I have to flash back and check now that I'm on water. I'll get back to you shortly.


----------



## J7SC

gfunkernaught said:


> 1980MHz IIRC on water. I had to move on from XOC as it was starting to cause instant BSODs. Now I'm on the latest EVGA 500W BIOS from TechPowerUp. How can I get the effective and requested clocks close to each other? The offset-then-curve method isn't working.


As far as I remember, my card was at around 2040 plus/minus when I first plugged it into the (old X99) mobo and did that test. That's on air, but it shouldn't make a difference with GPU-Z on max. Also, I haven't downloaded Quake II RTX yet on this new install, but I understand from others here that it is a monster and a 'clock depressor'.


----------



## gfunkernaught

@J7SC
2010MHz (1770MHz effective) max clock while running the Heaven benchmark
1905MHz (1665MHz effective) max clock while running the Bright Memory benchmark

I think I bought a poor overclocker. I had a feeling once I got the card and noticed how pretty much any modern game pins the PL, even without an OC.


----------



## changboy

@gfunkernaught, when you say the latest EVGA 500W BIOS, do you mean BIOS version 94.02.26.48.F7?


----------



## J7SC

gfunkernaught said:


> @J7SC
> 2010mhz(1770mhz effective) max clock while running heaven benchmark


...effective clock per HWiNFO? Yeah, that divergence is steep. On mine it's usually just the speed-bin loss due to temps/air. I presume you already tried the KPE 520W BIOS? What is it showing in Superposition 4K/8K as peak watts with either/both the KPE 520W and stock Trio BIOS? BTW, it's getting late here, so apologies if I don't immediately answer back...


----------



## gfunkernaught

J7SC said:


> ...effective clock per HWiNFO? Yeah, that divergence is steep. On mine it's usually just the speed-bin loss due to temps/air. I presume you already tried the KPE 520W BIOS? What is it showing in Superposition 4K/8K as peak watts with either/both the KPE 520W and stock Trio BIOS? BTW, it's getting late here, so apologies if I don't immediately answer back...


Yes, HWiNFO. I tried both the EVGA 500W and KP 520W BIOSes; both of them peaked at around 480W. I use the Bright Memory benchmark with the highest settings to push the whole chip. I'm about to see where the Suprim 450W BIOS peaks.
It's late here too, so no worries. I should let it go for the night and resume the battle with GPU Boost tomorrow. Thanks, man.


----------



## Falkentyne

gfunkernaught said:


> Sure enough, Quake 2 RTX crashed within minutes, at around 540W and 44C. I'm beginning to suspect that this was all for naught. I don't even have enough faith in shunt mods at this point.


Not surprising Q2 RTX crashed, because you're yeeting the core and memory clocks.
Looks like you have it at +200 / +1000. That's not going to be stable on many systems.

Try +150 / +600 in Quake 2 RTX. I can only do +120 in Q2 on my 3090 FE.

If you can use the Classified Tool, you may be able to increase the MSVDD voltage or loadline, but I think that's only possible on an actual Kingpin card. I don't know if the Classified Tool works on the FTW3.


----------



## gfunkernaught

Falkentyne said:


> Not surprising Q2 RTX crashed, because you're yeeting the core and memory clocks.
> Looks like you have it at +200 / +1000. That's not going to be stable on many systems.
> 
> Try +150 / +600 in Quake 2 RTX. I can only do +120 in Q2 on my 3090 FE.
> 
> If you can use the Classified Tool, you may be able to increase the MSVDD voltage or loadline, but I think that's only possible on an actual Kingpin card. I don't know if the Classified Tool works on the FTW3.


I have the Trio. I like to push things.
Here's a quick Superposition run with the Suprim 450W BIOS. I think the issue with the 480W cap when I use EVGA BIOSes on my MSI card has to do with the fact that BIOSes are written for specific PCBs; just because they "work" doesn't mean much anymore. I could bench the XOC BIOS all day and night, but I can't play a game on it. I may give the 520W BIOS another shot if I can figure out how to keep the effective clock close to the curve I set; any time I touched the curve with the 520W BIOS, the gap widened. I was watching the clocks/voltages while running Superposition to get an idea of what V/F curve I should hunt for. I highly doubt I will get past 2100MHz and sustain it. Didn't der8auer say that 2100MHz on a 3090 is very difficult due to physical limitations? Shunt mods, changing out caps, etc...


----------



## Lord of meat

Triple posted. Ignore


----------



## captn1ko

OK, I had a few minutes to test the 2 GPUs.

Nr. 1's boost is a bit higher than Nr. 2's,
its power is a bit lower than Nr. 2's,
and its chip is a bit warmer than Nr. 2's.
[email protected] fails on both.


----------



## PhuCCo

I bit the bullet and applied liquid metal to my 3090 FE. I am using the Alphacool Eisblock. After stressing the card through a bunch of Time Spy and Port Royal stress tests (consistently 380-400W for 1.75 hours), with Thermal Grizzly Kryonaut I was getting a GPU-to-water delta of 17C. I now have a consistent 10C delta: 41C instead of 48C. I am beyond impressed. The mount with normal paste must have been somewhat poor, though the paste spread looked fine when I tore it apart; I am very happy regardless. The hotspot reading from the new HWiNFO beta went from a delta of 14C to 11C.

I also replaced all of the stock thermal pads that came with the Alphacool block with Thermalright Odyssey pads. For the backplate, I replaced the 2mm pads on the memory with 1mm pads plus 1mm copper shims. I dropped 15C according to the HWiNFO hotspot temperature (50C vs. 65C) using the same stress tests for load. I already have a RAM waterblock mounted to the backplate, so this further reduction in temps is very nice. I'm sure mining would produce much higher temps, but again, I am very happy, and I recommend trying it if you're bored.


----------



## jomama22

gfunkernaught said:


> I have the Trio. I like to push things.
> Here's a quick Superposition run with the Suprim 450W BIOS. I think the issue with the 480W cap when I use EVGA BIOSes on my MSI card has to do with the fact that BIOSes are written for specific PCBs; just because they "work" doesn't mean much anymore. I could bench the XOC BIOS all day and night, but I can't play a game on it. I may give the 520W BIOS another shot if I can figure out how to keep the effective clock close to the curve I set; any time I touched the curve with the 520W BIOS, the gap widened. I was watching the clocks/voltages while running Superposition to get an idea of what V/F curve I should hunt for. I highly doubt I will get past 2100MHz and sustain it. Didn't der8auer say that 2100MHz on a 3090 is very difficult due to physical limitations? Shunt mods, changing out caps, etc...
> View attachment 2479457


Der8auer said in his video that he had no idea if 2100 was good at his voltage (1.06V, on air, at the beginning of a benchmark), and his video was from very early on. 2100 is kind of on the average side of things for a shunt mod, if I had to guess. I run 2205 set with 2175-2190 actual for all the games I play (Cyberpunk with all RTX maxed out), but that's with my own binning of cards (on a Strix).

Also, it doesn't matter what BIOS you use; you will get clock drops due to temperature and/or hitting a power limit. It seems to drop a step every 7-10C, and it may even change depending on the actual temperature itself.

Also understand that the way you have the curve set up will determine which voltage/frequency point it drops to when it decides to drop. It's entirely possible the 520W BIOS gave you trouble because you were hitting temp/power changes that made it drop another step.

If you want it to stay fixed at whatever clock and voltage you set, go back a few pages and see the smi method. All that method does is set an artificially high clock (like 2250), then use smi to limit the actual clock to 2175 or so; when boost decides to clock down, it does so off the 2250 and requests 2235, but smi has it capped at 2175, so that's what is still being used.
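The "smi" in that method is NVIDIA's `nvidia-smi` tool, whose lock-gpu-clocks switch caps the clock the driver will grant. A minimal sketch of the commands involved, using the example numbers from the post (run the printed commands in an elevated prompt on a machine with a recent NVIDIA driver; the GPU index 0 is an assumption):

```python
# Sketch of the "smi method": set an artificially high point on the
# Afterburner V/F curve, then cap what the driver may actually grant.
SET_CLOCK = 2250   # artificially high curve target (set in Afterburner)
CAP_CLOCK = 2175   # ceiling enforced by the driver via nvidia-smi

lock_cmd  = f"nvidia-smi -i 0 -lgc 0,{CAP_CLOCK}"  # lock GPU 0's graphics clock range
reset_cmd = "nvidia-smi -i 0 -rgc"                 # remove the lock afterwards
print(lock_cmd)
print(reset_cmd)
```

With the cap in place, GPU Boost's downward steps come off the inflated 2250 target, but anything above 2175 is clamped, so the effective clock stays pinned at the cap.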


----------



## Zogge

Has anyone tried this? Would it work instead of paste for the core?

GCS-025-A10 | Thermal Interface Sheet, Graphite, 1500W/m·K, 225 x 225mm 0.025mm, Self-Adhesive | RS Components


----------



## Zogge

Also, is it possible to stack 2 x 1mm pads to reach 2mm thickness? 2mm pads are hard to find, but 1mm ones are quite common, hence my question. Any risks or issues in doing so?


----------



## long2905

Zogge said:


> Also, is it possible to stack 2 x 1mm pads to reach 2mm thickness? 2mm pads are hard to find, but 1mm ones are quite common, hence my question. Any risks or issues in doing so?


The heat-transfer efficiency will decrease; that's the only issue.
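The reason is that thermal resistances add in series, and the pad-to-pad joint contributes an extra interface resistance that a single 2mm pad doesn't have. A rough sketch; the conductivity, contact area, and interface term below are all illustrative assumptions, not measurements of any particular pad:

```python
# Conduction resistance of a pad slab: R = thickness / (k * area), in K/W.
# Assumed values: k = 12.8 W/m-K pad, 10 x 10 mm memory-chip contact
# patch, and a guessed 0.05 K/W penalty for the pad-to-pad joint.
def pad_resistance(thickness_m: float, k: float, area_m2: float) -> float:
    return thickness_m / (k * area_m2)

area = 0.01 * 0.01                 # 10 mm x 10 mm contact patch
single = pad_resistance(0.002, 12.8, area)
interface = 0.05                   # extra resistance of the pad-to-pad joint
stacked = 2 * pad_resistance(0.001, 12.8, area) + interface
print(f"one 2 mm pad : {single:.2f} K/W")
print(f"two 1 mm pads: {stacked:.2f} K/W")
```

The two slabs alone are equivalent to one 2mm pad; only the added joint makes the stack slightly worse, which matches the answer above: it works, it just transfers heat a little less efficiently.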


----------



## Zogge

Yes, air pockets, I guess.


----------



## gfunkernaught

jomama22 said:


> Der8auer said in his video that he had no idea if 2100 was good at his voltage (1.06V, on air, at the beginning of a benchmark), and his video was from very early on. 2100 is kind of on the average side of things for a shunt mod, if I had to guess. I run 2205 set with 2175-2190 actual for all the games I play (Cyberpunk with all RTX maxed out), but that's with my own binning of cards (on a Strix).
> 
> Also, it doesn't matter what BIOS you use; you will get clock drops due to temperature and/or hitting a power limit. It seems to drop a step every 7-10C, and it may even change depending on the actual temperature itself.
> 
> Also understand that the way you have the curve set up will determine which voltage/frequency point it drops to when it decides to drop. It's entirely possible the 520W BIOS gave you trouble because you were hitting temp/power changes that made it drop another step.
> 
> If you want it to stay fixed at whatever clock and voltage you set, go back a few pages and see the smi method. All that method does is set an artificially high clock (like 2250), then use smi to limit the actual clock to 2175 or so; when boost decides to clock down, it does so off the 2250 and requests 2235, but smi has it capped at 2175, so that's what is still being used.


Is your Strix shunt modded?


----------



## gfunkernaught

@jomama22
I saw that post about locking the V/F curve; I will try it out once I repaste. My core hit 48C with the XOC BIOS. I used LM, so that doesn't sound right; I probably applied too much. The Conductonaut syringe was very stiff and put a giant blob on the die, and then I used the removal nozzle, which barely worked. Do I have to apply and spread a very small amount on both the die and the waterblock?


----------



## gfunkernaught

changboy said:


> @gfunkernaught, when you say the latest EVGA 500W BIOS, do you mean BIOS version 94.02.26.48.F7?


Yes, that's the 500W BIOS I tried.


----------



## dante`afk

Thanh Nguyen said:


> Just use the 1000W BIOS and forget all the limits. Set aside an emergency fund in case your card dies. 😅 I have been using the 1000W BIOS since it appeared on the website and don't have any issues.


Don't forget to let us know once it's dead; I remember your last card died as well.


----------



## Thanh Nguyen

dante`afk said:


> Don't forget to let us know once it's dead; I remember your last card died as well.


I did not use the 1000W BIOS on the PNY. It died because of water or something else. But I got it sold for $1500.


----------



## gfunkernaught

des2k... said:


> Found a reliable way to lock the freq/voltage with Afterburner. Not Ctrl+L; that doesn't work at all.
> 
> On Ampere, any given frequency on the curve is only valid for 3 voltage points; then the frequency has to move up.
> The effective frequency also doesn't drop when forcing these 4 voltage points; you get locked into the 4th voltage point regardless of GPU load or temps.
> 
> Using two points, forcing the 2160 bin (2175 on the curve) to use 1069, 1075, 1081, the next bin (2175) gets locked to the 4th point.
> smi limit to 2175
> prefer max performance
> 
> right point: 2253 @ 1087mv (the voltage you want to run at) (apply first)
> left point: 2175 @ 1068mv (-3 bins on the voltage) (apply second)
> 
> *2253 -15MHz or +15MHz will +/- the effective frequency
> 
> Not sure if this is useful, but writing it down anyway.
> I had some trouble getting a voltage lock on Ampere due to load and temp changes.
> 
> View attachment 2479150


What card and bios are you using for this method?


----------



## gfunkernaught

dante`afk said:


> Don't forget to let us know once it's dead; I remember your last card died as well.





Thanh Nguyen said:


> I did not use the 1000W BIOS on the PNY. It died because of water or something else. But I got it sold for $1500.


I think the Trio's PCB is capable of up to 480W. Both the EVGA 500W and 520W BIOSes on my Trio trigger the PL at 474W max. I tried the KP XOC 1kW BIOS and set the PL to 50%; benches passed, but games crashed, peaking at 525W in Quake 2 RTX. I'm going to keep exploring this. I want to try using nvidia-smi to set the power limit to exactly 480W to see if I can get better effective clocks.
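Setting an exact wattage under the 1kW BIOS is just a percentage calculation for the Afterburner slider, or a direct `nvidia-smi -pl` call. A sketch of both, with the caveat that whether the driver honors `-pl` on a given vendor BIOS (and its min/max bounds) varies by card:

```python
# Afterburner slider value needed to land on an exact wattage, given the
# BIOS's 100% power limit; plus the equivalent direct nvidia-smi command.
def slider_percent(target_w: float, bios_max_w: float) -> float:
    return 100.0 * target_w / bios_max_w

pct = slider_percent(480, 1000)          # 48% of a 1000 W BIOS = 480 W
pl_cmd = "nvidia-smi -i 0 -pl 480"       # set the limit directly (run as admin)
print(f"{pct:.0f}%")
print(pl_cmd)
```

Either way, the goal is the same: keep the card just under the throttle point so GPU Boost stops dropping bins mid-run.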


----------



## gfunkernaught

Update:
Using the KP 1kW BIOS on my Trio, running the Bright Memory bench on a loop, I set the PL to 55% in AB and set two points on the curve:
1. [email protected], apply
2. [email protected], apply

Now my effective clock is [email protected]; I've been running the bench on a loop for about 5 minutes or so, max temp 46C (ugh, need to repaste, dreading it). Take a look.


----------



## jomama22

gfunkernaught said:


> Is your Strix shunt modded?


Yes, shunt modded and volt modded.


----------



## J7SC

gfunkernaught said:


> I think the Trio's PCB is capable of up to 480W. Both the EVGA 500W and 520W BIOSes on my Trio trigger the PL at 474W max. I tried the KP XOC 1kW BIOS and set the PL to 50%; benches passed, but games crashed, peaking at 525W in Quake 2 RTX. I'm going to keep exploring this. I want to try using nvidia-smi to set the power limit to exactly 480W to see if I can get better effective clocks.


...I tried to read the follow-up in this thread before both coffee and work... my poor head. @gfunkernaught: 480W should be plenty, subject to other safeties not spoiling the party and the actual power distribution. My Strix always comes in between 480+W and 503W. For power-source distribution comparisons, check the spoiler, noting that that Superposition 8K run was on the X99 mobo, on air, with the NV panel set to 'normal' instead of 'prefer max performance' (something else that might help, though likely more on air-cooled cards than water; still, worth a try).


Spoiler
Also have a look at Buildzoid's YT channel reviews of the 3080 MSI Trio and 3090 Suprim X. It's been a while since I watched those, but I seem to recall that the 3rd 8-pin is less important on those cards...just guessing while half asleep, but what is the strongest (XOC?) BIOS for 2x 8-pin out there? It may / may not help. Finally, I think it stands to reason that MSI might launch a Lightning version...that might be another good BIOS to try if/when it appears.


----------



## gfunkernaught

J7SC said:


> ...I tried to read the follow up in this thread before both coffee and work..my poor head  @gfunkernaught : 480W should be plenty, subject to other safeties not spoiling the party and the actual power distribution. My Strix always comes in between 480+ W and 503 W. For power source distribution comps, check the spoiler, noting that that Superpos8K run was on the X99 mobo, air, and NV tab set to 'normal' instead of 'prefer max power' (s.th. else that might help, though likely more on air-cooled cards than water. Still, worth a try)
> 
> 
> Spoiler
>
> View attachment 2479489
> View attachment 2479490
>
> Also have a look at Buildzoid's YT channel reviews of the 3080 MSI Trio and 3090 Suprim X. It's been a while since I watched those, but I seem to recall that the 3rd 8-pin is less important on those cards...just guessing while half asleep, but what is the strongest (XOC?) BIOS for 2x 8-pin out there? It may / may not help. Finally, I think it stands to reason that MSI might launch a Lightning version...that might be another good BIOS to try if/when it appears.


Ha, good morning sir. I think the Strixes are just better cards than the Trios. What was your effective clock during that run? Unfortunately, without shunt modding I have to use the 1kW BIOS to keep the PL at around 520w so that I can shoot for an effective clock of [email protected] I do need to repaste though, which is a pain. Did you just move the slider and make an offset overclock for that?


----------



## J7SC

gfunkernaught said:


> Ha, good morning sir. I think the Strixes are just better cards than the Trios. What was your effective clock during that run? Unfortunately, without shunt modding I have to use the 1kW BIOS to keep the PL at around 520w so that I can shoot for an effective clock of [email protected] I do need to repaste though, which is a pain. Did you just move the slider and make an offset overclock for that?


...about to leave for work  , but here's a quickie comp of 'early' HWInfo nominal vs effective clocks, albeit with only mild oc's from day 1 when I got the card to feel the Strix out a bit..and yes, I do wonder about the one on the right  in a pleasant sort of way


----------



## gfunkernaught

@J7SC
How did you get that effective clock to [email protected]?


----------



## J7SC

gfunkernaught said:


> @J7SC
> How did you get that effective clock to [email protected]?


...no idea. Just ran the brand new card, initially at stock speed, then w/ some mild ocs...thought that there was s.th wrong as the PCIe slot 'only' pulled 48 W or so...then I thought there was s.th wrong as the card exceeded 500 W at times...those were some of my first posts / questions when I first picked up the Strix OC (...less than 2 weeks ago).

This weekend should allow me to finish the new mobo / CPU setup with the Strix and dive deeper afterwards...later, I'll do it all over again once the w-cooling stuff is finalized and ordered (still wondering which active cooling solution to build for the backplate - stick with my decent fan setup or go RAM w-block)


----------



## gfunkernaught

I think I'm getting the hang of this. I set the curve to [email protected], and it settles at an effective clock of around 2115mhz. I will keep searching for the lowest possible voltage I can get at each bin above 2100mhz.
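That per-bin voltage hunt is basically a walk down the voltage steps until something fails; a toy sketch, where `is_stable` and the demo rule are invented placeholders for a real stability test (a bench loop, an hour of gaming):

```python
# Toy sketch of the per-bin undervolt search. `is_stable` is a stand-in
# for a real stability test (benchmark loop, an hour of a game); the demo
# rule below is invented purely for illustration.

VOLTAGE_STEP_MV = 6  # roughly the size of one voltage step on these cards

def lowest_stable_voltage(clock_mhz, is_stable, v_min=900, v_max=1100):
    """Walk down from v_max and return the last voltage that still passes."""
    best = None
    for v in range(v_max, v_min - 1, -VOLTAGE_STEP_MV):
        if is_stable(clock_mhz, v):
            best = v
        else:
            break  # assume stability is monotonic in voltage
    return best

# Fake rule: pretend each +15mhz bin over 2100 needs another 12mv over 1025mv.
demo = lambda clk, v: v >= 1025 + ((clk - 2100) // 15) * 12
print(lowest_stable_voltage(2100, demo))  # 1028 with this fake rule
```

In practice each "pass" is hours of gaming, so the walk is slow, but the structure is the same.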


----------



## J7SC

^^


----------



## Falkentyne

Thanh Nguyen said:


> I did not use the 1000w BIOS on the PNY. It's dead coz water or something else. But I got it sold for $1500.


Oh


----------



## changboy

gfunkernaught said:


> I think I'm getting the hang of this. I set the curve to [email protected], and it settles at an effective clock of around 2115mhz. I will keep searching for the lowest possible voltage I can get at each bin above 2100mhz.
> View attachment 2479499


If you have Shadow of the Tomb Raider, try that game with this setting and see if you get frame drops. When I play it at 2100mhz and drop the voltage, I get serious frame drops in game; if I put the voltage back to 1.1v the frame drops are gone. Also, for some games to play at 2100mhz you need more than 500w, maybe around 540w, like in Red Dead Redemption 2.
For this reason I set my voltage at 1.1v to avoid issues with all games, but your card may act differently than my FTW3.


----------



## pat182

82c Jtemp mining at 100mh/s 70%PL


----------



## Canson

pat182 said:


> 82c Jtemp mining at 100mh/s 70%PL
> View attachment 2479508


Running your fan as push or pull? Is it 120mm or 140mm, and what rpm do you run it at?
I like your low temps.


----------



## pat182

Canson said:


> running your fan as push or pull? is it 120mm or 140mm? and what rpm you run it?
> i like your low temps.


Corsair ML120 @ 80% on top of the backplate in push. The Strix backplate looks to be plastic :/ not the best heat spreader.


----------



## gfunkernaught

changboy said:


> If you have Shadow of the Tomb Raider, try that game with this setting and see if you get frame drops. When I play it at 2100mhz and drop the voltage, I get serious frame drops in game; if I put the voltage back to 1.1v the frame drops are gone. Also, for some games to play at 2100mhz you need more than 500w, maybe around 540w, like in Red Dead Redemption 2.
> For this reason I set my voltage at 1.1v to avoid issues with all games, but your card may act differently than my FTW3.


I've been playing cyberpunk for about two hours with no framedrops, but it did crash, 44c. 1025mv is too low. Trying 1032mv now.


----------



## changboy

gfunkernaught said:


> I've been playing cyberpunk for about two hours with no framedrops, but it did crash, 44c. 1025mv is too low. Trying 1032mv now.


Do you play at 2100mhz steady ?


----------



## Canson

pat182 said:


> corsair ML120 @ 80% on top of the backplate in push, the strix backplate looks to be plastic :/ not the best heat spreader


OK, I just found a 120mm Noctua fan at home and tried it like you did, but it made only a 4c difference.

My 3090 Suprim X is at 98-100c now, man, I don't understand how yours can be so low lol. Wish I could drop mine at least to 90c or something.

Sent from my SM-N975F via Tapatalk


----------



## des2k...

pat182 said:


> 82c Jtemp mining at 100mh/s 70%PL
> View attachment 2479508


The mem chips on the back start right at the end of the SLI connector; you should get better temps by shifting the fan.


----------



## gfunkernaught

changboy said:


> Do you play at 2100mhz steady ?


I was playing cyberpunk for about an hour at [email protected] and once the gpu hit 46c it crashed. I might have to repaste.

Side question: Is 550w sustained safe for the Trio?


----------



## J7SC

gfunkernaught said:


> I was playing cyberpunk for about an hour at [email protected] and once the gpu hit 46c it crashed. I might have to repaste.


which bios are you currently using for gaming ?


----------



## Thanh Nguyen

Falkentyne said:


> You killed a Founder's Edition too. You had a FE before you had the PNY. You're on your third card now.


You're remembering the wrong guy. I've just had the PNY and this FTW3.


----------



## Falkentyne

Thanh Nguyen said:


> You're remembering the wrong guy. I've just had the PNY and this FTW3.


I thought you had a Founder's Edition first...hmm. I think it was because you kept asking for help with the shunt calculator in the 3090 FE shunt mod thread, and then motivman and someone else were trying to shunt mod a TUF or PNY also.

I apologize. I did remember you killed a card though  Or maybe it was because your dad soldered 2x 5 mOhm shunts at once? I forgot. But I know olrdtg vanished! I think he killed his card too.
How did you sell a broken card for $1,500? Was it fixable?


----------



## changboy

We have a name for this here, but I don't want to say it here lol!


----------



## des2k...

gfunkernaught said:


> I was playing cyberpunk for about an hour at [email protected] and once the gpu hit 46c it crashed. I might have to repaste.
> 
> Side question: Is 550w sustained safe for the Trio?


Check HWInfo or GPU-Z; what do they report for core wattage at 550w board power?

Usually mem/cache don't stress the mem VRM and uncore VRM that much.


----------



## gfunkernaught

J7SC said:


> which bios are you currently using for gaming ?


Currently the KP 1kW BIOS with the PL set to 55%, only because it allows me to sustain [email protected] effective while gaming. The 500w and 520w BIOSes cap me at 480w, and thus I can't even sustain 2100mhz. I was playing around before with trying to find the lowest voltage for beyond 2115mhz. Sadly, I need more than 1032mv for 2115mhz and up.


----------



## gfunkernaught

des2k... said:


> check hwinfo or gpu-z , what does it report for core wattage at 550w board power?
> 
> Usually mem,cache don't stress the mem vrm and uncore vrm that much.


You mean this?

That is right now, [email protected]


----------



## des2k...

gfunkernaught said:


> You mean this?
> View attachment 2479561
> 
> That is right now, [email protected]


I believe it's a 9-phase VRM with OnSemi NCP302150, so 50A per phase.
9 x 50A x 1.032v = 464.4w max
So your usage is 286.2w / 464.4w max, about 30A per stage, or 61% capacity.
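That headroom math can be reproduced in a few lines; a sketch using the numbers from this post (the 9-phase / 50 A per stage figures are taken from the post, not verified against the Trio's PCB):

```python
# Reproduce the VRM headroom estimate above. Phase count and the 50 A
# per-stage rating are taken from the post, not verified on the PCB.

def vrm_capacity_w(phases: int, amps_per_phase: float, vcore_v: float) -> float:
    """Theoretical max core power the VRM can deliver at this voltage."""
    return phases * amps_per_phase * vcore_v

def vrm_headroom(core_power_w: float, phases: int, amps: float, vcore_v: float):
    cap = vrm_capacity_w(phases, amps, vcore_v)
    amps_per_stage = core_power_w / vcore_v / phases
    return cap, amps_per_stage, core_power_w / cap

cap, per_stage, frac = vrm_headroom(286.2, 9, 50, 1.032)
print(f"{cap:.1f}w max, {per_stage:.1f}A per stage, {frac:.1%} used")
```

Note this is a theoretical ceiling at nominal stage ratings; real sustained capacity is lower once the stages heat up.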


----------



## kunit13

Quick follow-up. I switched my GPU to the “proper” flow direction:
Unigine loop 42-43c (from 46-47).

Gaming 36-38 (from 40-42).

I installed a temp probe on my backplate earlier this week; the temp is 32c after heavy gaming. MP5 in parallel.

A few pages back is my first post showing my temps with an EK water block on a Strix OC.
I attached a pic (only about an hour). I have a Warzone tourney tonight so gotta get focused lol.

Next up, waiting on an XOC BIOS that enables me to use both HDMI ports (I dual-PC stream so I need those).

Thanks!


----------



## gfunkernaught

@kunit13 
So that MP5 cooler really helps, huh? Especially in parallel.


----------



## gfunkernaught

des2k... said:


> I believe it's a 9-phase VRM with OnSemi NCP302150, so 50A per phase.
> 9 x 50A x 1.032v = 464.4w max
> So your usage is 286.2w / 464.4w max, about 30A per stage, or 61% capacity.


Oh ok. Well I had to once again stop using the 1kw bios since it always makes my PC act weird. Now on 520w bios and just accepting/dealing with the bouncing clocks. Running bright memory bench, 2077-2100mhz effective, temp is around 44c at the moment.


----------



## changboy

gfunkernaught said:


> Oh ok. Well I had to once again stop using the 1kw bios since it always makes my PC act weird. Now on 520w bios and just accepting/dealing with the bouncing clocks. Running bright memory bench, 2077-2100mhz effective, temp is around 44c at the moment.


What do you mean by "makes my PC act weird"?


----------



## des2k...

gfunkernaught said:


> Oh ok. Well I had to once again stop using the 1kw bios since it always makes my PC act weird. Now on 520w bios and just accepting/dealing with the bouncing clocks. Running bright memory bench, 2077-2100mhz effective, temp is around 44c at the moment.


I have a Zotac 2x 8-pin, so about +945 mem; 2175 at 1086mv is the limit for my card with the XOC BIOS. Around 2155 effective freq for heavy loads.

Outside benchmarks and Quake 2 RTX it holds 2175 and power is always below 520w.


----------



## J7SC

Well, the weekend is here, warmly welcomed after a hyper-busy week. I'm still trying to figure out which waterblock for the Strix 3090 I should go for, Bykski or Phanteks, and I'm also trying to get more info on EK's active-cooling backplate. Apart from a few media kit photos, I can't find anything on that...anybody got any links, and/or know about pricing and availability / dates of the active backplate EK setup? I'm also wondering whether it can be purchased stand-alone and made to work with other vendors' Strix blocks - it actually doesn't look like it from the few pics I've seen.

If not, it will be some w-cooled RAM block(s) for the custom backplates that come with either Phanteks or Bykski, or even 2x the Swiftech Universal GPU coolers I have a pile of.


----------



## gfunkernaught

des2k... said:


> I have a Zotac 2x 8-pin, so about +945 mem; 2175 at 1086mv is the limit for my card with the XOC BIOS. Around 2155 effective freq for heavy loads.
>
> Outside benchmarks and Quake 2 RTX it holds 2175 and power is always below 520w.


I definitely bought a dud overclocker. Can barely do 2100mhz effective. Cyberpunk crashed on me again, this time with the 520w evga bios and an offset of +130 and the mem was +1200. Temp hit 46c.


----------



## gfunkernaught

changboy said:


> what you mean by : make my pc act weird ?


Timespy crashed during the CPU test, which never happens. Idk, I think my card just sucks at overclocking; that's the kind of luck I have. What voltage should I try for an effective clock of, say, 2160mhz? 1.1v?


----------



## Falkentyne

gfunkernaught said:


> I definitely bought a dud overclocker. Can barely do 2100mhz effective. Cyberpunk crashed on me again, this time with the 520w evga bios and an offset of +130 and the mem was +1200. Temp hit 46c.


Did you do what I told you to do yet? I told you to try it, but I don't know if you ignored it or just didn't see it.

Try +120 / +500 in Cyberpunk.
Also, core clock offsets are in 15mhz increments; +120 and +130 are the same clock. So it's 105 / 120 / 135 / 150, etc.
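That 15mhz bucketing can be sketched in a few lines (rounding toward zero here is an assumption that matches the +120/+130 example; the exact driver rule may differ):

```python
# Core clock offsets land on 15mhz bins, so +120 and +130 request the
# same clock. Rounding toward zero here is an assumption that matches
# the +120/+130 example above.

BIN_MHZ = 15

def effective_offset(requested_mhz: int) -> int:
    """Snap a requested offset to the 15mhz bin the driver will actually use."""
    sign = 1 if requested_mhz >= 0 else -1
    return sign * (abs(requested_mhz) // BIN_MHZ) * BIN_MHZ

for req in (105, 120, 130, 135, 150):
    print(req, "->", effective_offset(req))
```

So there is no point testing offsets between multiples of 15; step straight from +120 to +135.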


----------



## gfunkernaught

@Falkentyne 
You mean this?
"Try +150 / +600 in Quake 2 RTX. I can only do +120 on Q2 on my 3090 FE." You mentioned the classified tool, which led me to believe you thought I had an EVGA card, which I don't. I'll try what you suggested with Cyberpunk; what should I set the PL to? (back on the 1kW BIOS to find max voltage)


----------



## ViRuS2k

Well guys, got around to installing the waterblock and I think I installed it correctly. Used hard-as-nails varnish on the GPU, with LM on the die.
Used Gelid Extreme thermal pads on the front-side memory, as for some reason I could not get the waterblock to screw down tight enough on the GPU with the stuff they supplied.

I think I installed it well, because the water temp is 28c and the GPU temp in Windows is 29c lol, and after 45 minutes running, water temp 30c, GPU temp in Windows 30c.
Tried playing a few games; Hellblade: Senua's Sacrifice is pretty demanding, and temps were 37c lol. Completely baffled and surprised at the temps, I was so used to seeing 78c lol

Now I should be able to push the card a little and see what she can do 

Just hope the block was installed correctly.
Still have my MP5 memory block (serial) to install when I get the time. I spent so long draining and getting this waterblock fitted lol, took my time and everything seems to have worked out I think lol


----------



## gfunkernaught

@Falkentyne
This is from the Quake 2 run, +150 core / +600 mem with the 1kW BIOS, PL set to 55%. The sustained clocks were 2025-2055mhz.


----------



## changboy

des2k... said:


> I have a Zotac 2x 8-pin, so about +945 mem; 2175 at 1086mv is the limit for my card with the XOC BIOS. Around 2155 effective freq for heavy loads.
>
> Outside benchmarks and Quake 2 RTX it holds 2175 and power is always below 520w.


So you're saying you game at 2175mhz steady at normal room temp? Have you modded your PCB? I've never seen many people claim this on an RTX 3090, or am I missing something?


----------



## Falkentyne

gfunkernaught said:


> @Falkentyne
> This is from the Quake 2 run, +150 core +600 mem with the 1kW BIOS, PL set to 55%. The sustained clocks were from 2025-2055mhz
> View attachment 2479593


Right, but did you test Cyberpunk with that?
You need to eliminate VRAM as a crash factor, and it's been known for a long time that yeeting the RAM can sometimes limit the core clock.


----------



## des2k...

....


changboy said:


> So you're saying you game at 2175mhz steady at normal room temp? Have you modded your PCB? I've never seen many people claim this on an RTX 3090, or am I missing something?


It's 2175 with a VF point, so it could be as low as 2135, with 2155 as the effective clock for gaming.

I didn't game much at 2155 since I repasted with Kryonaut (seems to be worse than Noctua by 1-2c).

I played a lot at 2135-2145 effective frequency. I doubt it's a good chip; water is about 25-26c and GPU 38-42c.


----------



## kunit13

gfunkernaught said:


> @kunit13
> So that mp3 cooler really helps huh? Especially in parallel.


Yea... I've never run the series config for comparison.
When I had a Gigabyte OC 3090, I put a fan and some heatsinks on it to keep it cool.


----------



## gfunkernaught

Falkentyne said:


> Right, but did you test Cyberpunk with that?
> You need to eliminate VRAM as a crash factor, and it's been known for a long time that yeeting the RAM can sometimes limit the core clock.


Welp...played Cyberpunk for an hour straight, no crash...probably was the RAM. Thanks man lol, faith restored. I'm wondering if I can keep pushing the core though. Cyberpunk sustained 2100-2105mhz, 2115mhz at times when load was low, and the voltage stayed at 1068mv the whole time. These 1kW BIOSes default to 1088mv, then 1068mv with load and temp. Do you think I should keep trying to push at 1068mv or use the curve?


----------



## gfunkernaught

kunit13 said:


> Yea.. I’ve never ran the series one for comparison?
> When I had a gigabyte oc 3090 I put a fan and some heat sinks to keep it cool


I was worried that in series would just feed the gpu (front) block warm water. What kind of heatsinks did you use? those square ones that come in an assortment on amazon?


----------



## Falkentyne

gfunkernaught said:


> Welp...played Cyberpunk for an hour straight, no crash...probably was the RAM. Thanks man lol, faith restored. I'm wondering if I can keep pushing the core though. Cyberpunk sustained 2100-2105mhz, 2115mhz at times when load was low, and the voltage stayed at 1068mv the whole time. These 1kW BIOSes default to 1088mv, then 1068mv with load and temp. Do you think I should keep trying to push at 1068mv or use the curve?


I have a FE, remember? I can't help you tweak your card. I know at my absolute limits (around 550W, sometimes close to 600W in games that don't hammer everything), my max stable core can be anywhere between +120 and +150, with Q2 RTX and Path of Exile (GI + Shadows = Ultra + Vsync off) being the worst offenders, followed by COD MW (Ground War) and Warzone. I don't have all the other cool games you have, like Metro (unless it was free on the Epic store before?) and SOTTR or Bright Memory.

It's been a thing ever since Pascal that pushing the RAM too high can limit core clocks on some boards. You need access to the classified tool to increase the MVDDC, MSVDD and NVVDD voltages, or to bump the loadline calibration up slightly (and I'm not sure if non-Kingpin boards can use the classified tool, even an FTW3 flashed with the 1000W Kingpin BIOS).

And maintaining 2100 effective mhz core clocks requires very good silicon, sometimes golden chips. Unless you're on sub-ambient cooling, most of those godly 2200 mhz bench runs you see are nowhere near game stable. How many 2200 mhz effective clock runs have you seen that can pass Warzone / COD MW? (You can pass Cold War at higher clocks than Warzone/MW because of its ancient engine.)

And Quake 2 RTX is even worse; that brings a card to its knees. The only thing worse is Path of Exile with Global Illumination + Shadows set to Ultra, Vsync off and max detail. (That can actually cause your card to emergency trip with a black screen + fans beyond 100% speed, if power draw and temps get too high.)


----------



## changboy

The FTW3 can't use the classified tool; you need the Kingpin PCB. I also saw info that the 3090 Kingpin will get a revised PCB, so the Kingpin will get an update in the future.

I find this bad for those who already bought a Kingpin. Like me: this week I got notified by EVGA to buy one and I didn't, but I know some people just bought it.
The new Kingpin will also be available in a Hydro Copper option.


----------



## gfunkernaught

@Falkentyne
You are correct about the classified tool. I remember my 2080 Ti's mem clocked to 1200mhz rock solid, even 1250mhz for benches. I used to think that it couldn't go past 2100mhz, so I settled on [email protected], but I didn't know about effective clock back then.

The Metros, all of them, are awesome games even without ray tracing. I remember the first one melted my 8800 GTX, so I got another one and watched them melt together, in SLI, but at 1080p!

My 2080 Ti hit [email protected] and scored like 10.5k in Port Royal and a little over 7000 or something crazy in Timespy Extreme. That was a PC-in-the-garage-in-winter bench. Shadow of the Tomb Raider crashed at the main menu.

So yea, I need to count my blessings.


----------



## J7SC

changboy said:


> ftw3 cant use the classified tool, need the kingpin pcb. I also see info of kingpin will get a revise pcb of the 3090 kingpin, so the kingpin will get an update for the future kingpin.
> 
> I found this bad for those who already bought a kingpin, and like me this week got notify by evga to buy one and i didnt buy it. But i know some just buy this kingpin.
> New kingpin will also be available for hydro copper option.


...have been watching HWBot and the trading of blows between Galax HoF OCL, KPEs and even some Asus (2x SLI) on several 3DMarks...fascinating, and better to watch than those idiot reality TV reruns...who has got a USB w/ some held-back scores? 

I also had just a bit of time to enjoy a glass of wine and do some more serious Strix VRAM testing tonight, after the new (X570) build...three runs per speed setting, same temps & other conditions equal, using TSEx GT2 as the benchmark....and the winner (for my Strix OC in GT2) is VRAM at 1377 MHz on air w/ fans on the backplate. The question is whether this will apply to Cyberpunk 2077 and 8K Superposition as well, and/or how much w-cooling will change that. More wine is needed to answer that...

Overall, with anything but a truly unlimited BIOS PL and exceptional cooling, there will always be a trade-off between 'taking from Peter and giving to Paul'...with a fixed max power budget, giving more to VRAM will ultimately limit core, as also suggested by others here. Even with a '1kw' XOC, other parameters in this single equation with multiple unknowns can rain on your parade. For now, max cooling and a max-efficiency-per-FPS comp is probably the best approach for daily high-oc gaming


----------



## mattxx88

Zogge said:


> Anyone tried this ? Would it work ? Instead of paste for core.
> 
> GCS-025-A10 | Thermal Interface Sheet, Graphite, 1500W/m·K, 225 x 225mm 0.025mm, Self-Adhesive | RS Components


this reminds me of the one used on the Radeon VII

it should work fine; der8auer did a vid comparison between it and LM, if I'm not wrong


----------



## Zogge

Thanks, I will buy one, try it, and report back here. This could also be put on top of the thermal pads for the memory. Maybe some improvement...


----------



## mattxx88

Zogge said:


> Thanks I will buy one and try and report here. Also this could be put on top of the thermal pads for memory. Maybe some improvement...


Pay attention though, because it is really thin; I don't think it's a good idea to use it on the memory (maybe you'd need many layers).


----------



## KedarWolf

God-tier thermal pad review. The 1950 W/mK one.

It didn't go well.


----------



## KedarWolf

Zogge said:


> Thanks I will buy one and try and report here. Also this could be put on top of the thermal pads for memory. Maybe some improvement...








Another Graphite Thermal Pad Test: Panasonic PGS

TLDR: IC Graphite is Panasonic Soft PGS. Both perform identically. Soft PGS can be bought in large sheets and runs ~10% the cost of IC Graphite. Soft PGS (IC Graphite) performs best under the most clamping force and can safely be used for any application (for my laptop, increased clamp force resulted...

linustechtips.com


----------



## Zogge

Looks like a dead end, forget that then. I will pass on it.


----------



## J7SC

Zogge said:


> Looks like a dead end, forget that then. I will pass on it.


...yeah, I'll stick w/ my Gelid Xtr (or NH2, MX4) and use the little Gelid spatula that came w/ their paste to spread it all on GPUs & CPUs


----------



## kunit13

gfunkernaught said:


> I was worried that in series would just feed the gpu (front) block warm water. What kind of heatsinks did you use? those square ones that come in an assortment on amazon?


I wanted to use those, but what I had was a CPU heatsink from a 3600. I just took the fan off (then used a 120mm case fan on it). I never measured temps with that setup, but going by touch it seemed to cool it more.


----------



## Thanh Nguyen

gfunkernaught said:


> @Falkentyne
> You are correct about the classified tool. I remember my 2080 Ti's mem clocked to 1200mhz rock solid, even 1250mhz for benches. I used to think that it couldn't go past 2100mhz, so I settled on [email protected], but I didn't know about effective clock back then.
>
> The Metros, all of them, are awesome games even without ray tracing. I remember the first one melted my 8800 GTX, so I got another one and watched them melt together, in SLI, but at 1080p!
>
> My 2080 Ti hit [email protected] and scored like 10.5k in Port Royal and a little over 7000 or something crazy in Timespy Extreme. That was a PC-in-the-garage-in-winter bench. Shadow of the Tomb Raider crashed at the main menu.
>
> So yea, I need to count my blessings.


11c delta in CP2077 with the Bykski block. 1000w BIOS, +135 core / +1250 mem, which is my daily setting. This setting gets me 15.3k in PR.


----------



## changboy

Thanh Nguyen said:


> 11c delta in CP2077 with the Bykski block. 1000w BIOS, +135 core / +1250 mem, which is my daily setting. This setting gets me 15.3k in PR.
> View attachment 2479633


How is your water temp at 24c when you're gaming at this frequency? Mine is around 34c.

Is your room at 14c? lol, do you play in your jacket hehehe.


----------



## Thanh Nguyen

changboy said:


> How is your water temp at 24c when you're gaming at this frequency? Mine is around 34c.
>
> Is your room at 14c? lol, do you play in your jacket hehehe.


Ambient is at 22c. Heater set to 70f (21.1c).


----------



## changboy

Thanh Nguyen said:


> Ambient is at 22c. Heater set to 70f (21.1c).


You use a chiller? That would explain your stats, coz I never saw a miracle till today 😂


----------



## gfunkernaught

Thanh Nguyen said:


> 11c delta in CP2077 with the Bykski block. 1000w BIOS, +135 core / +1250 mem, which is my daily setting. This setting gets me 15.3k in PR.
> View attachment 2479633


That's pretty good. What is your loop setup?


----------



## Thanh Nguyen

We


changboy said:


> You use a chiller? That would explain your stats, coz I never saw a miracle till today 😂


----------



## gfunkernaught

@Thanh Nguyen 
Ha! that explains it....


----------



## gfunkernaught

Anyone here with the EK Quantum Trio block have issues with contact? I noticed it took quite a bit of effort to get the card to not look like a banana. I think there could be a contact issue: the GPU idles at 20-24c, then as soon as there is a heavy load it shoots up to 38c, then eventually reaches 47-48c, but going from 40c to 48c is a slow-ish process. Is that normal for these cards? I'm using a 2080 Ti as a reference; that card would slowly make its way to 38c max even at full load in the same loop.


----------



## Wihglah

Thanh Nguyen said:


> We
> 
> View attachment 2479636


My OCD is twitching - the top MoRa is backwards


----------



## gfunkernaught

Wihglah said:


> My OCD is twitching - the top MoRa is backwards


I thought about building an external cooling system like that. Would need a strong pump and larger tubes, but that would be cool.


----------



## des2k...

gfunkernaught said:


> Anyone here with the EK Quantum Trio block have issues with contact? I noticed it took quite a bit of effort to get the card to not look like a banana. I think there could be a contact issue: the GPU idles at 20-24c, then as soon as there is a heavy load it shoots up to 38c, then eventually reaches 47-48c, but going from 40c to 48c is a slow-ish process. Is that normal for these cards? I'm using a 2080 Ti as a reference; that card would slowly make its way to 38c max even at full load in the same loop.


I think the cheap blue pads from EK don't compress much; combine that with too much paste and you get horrible temps and board bending.

The Odyssey pads + very little paste that spreads easily (NT-H2) will be best for that EK block.

But those EK blocks have something wrong with them past 500w.

I have 4 rads in my O11, no glass panels, so water is 24c max; the GPU gets to 37-42c up to 500w at 2175 core with my Zotac.


----------



## Sheyster

gfunkernaught said:


> I think I'm getting the hang of this. I set the curve to [email protected], and it settles at an effective clock of around 2115mhz. I will keep searching for the lowest possible voltage I can get at each bin above 2100mhz.


I settled on [email protected] with the curve. 100% stable in Warzone for 4 hours straight, that's the longest stretch I've played in one Warzone session!


----------



## samcheekin

Hi, I'm new here and not sure whether this is the right place to ask.
I'm looking for a guide to flash my current 3090 Gaming X Trio to the Suprim X 450w BIOS.
I have nvflash downloaded, but I have no idea what to do next. Can anyone guide me along?


----------



## gfunkernaught

Sheyster said:


> I settled on [email protected] with the curve. 100% stable in Warzone for 4 hours straight, that's the longest stretch I've played in one Warzone session!


That's your effective clock?

And that setting didn't last for me. Now I can run [email protected] sustained for the most part, with some dips down to 2088mhz and the occasional rise to 2115mhz, while playing Cyberpunk. Thanks to @Falkentyne for the tip!


----------



## gfunkernaught

samcheekin said:


> Hi, I'm new here and not sure whether this is the right place to ask.
> I'm looking for a guide to flash my current 3090 Gaming X Trio to the Suprim X 450w BIOS.
> I have nvflash downloaded, but I have no idea what to do next. Can anyone guide me along?


nvflash.exe -6 biosname.rom, then say yes at the prompts. Make sure the BIOS file is in the same folder as nvflash.exe.

Monitor your temps. I tried the Suprim BIOS on my Trio with the stock cooler and the temps did rise a bit. Also, make sure your power supply can keep up.
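If it helps, the flash sequence can be dry-run as plain command strings first; a sketch — the `--save` backup flag is assumed from common nvflash usage, so check `nvflash --help` for your version, and the .rom filenames here are placeholders:

```python
# Dry-run sketch: build the nvflash command strings without executing
# anything; run them yourself from an elevated prompt in nvflash's folder.
# `--save` (BIOS backup) is assumed from common nvflash usage -- verify
# against `nvflash --help` for your version. Filenames are placeholders.

def flash_plan(bios_rom: str, backup_rom: str = "backup.rom") -> list:
    return [
        f"nvflash --save {backup_rom}",  # back up the current BIOS first
        f"nvflash -6 {bios_rom}",        # -6 overrides the subsystem ID mismatch check
    ]

for cmd in flash_plan("suprimx450.rom"):
    print(cmd)
```

Keeping the backup .rom means you can always flash back if the cross-vendor BIOS misbehaves.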


----------



## gfunkernaught

des2k... said:


> I think the cheap blue pads from EK don't compress much; combine that with too much paste and you get horrible temps and board bending.
>
> The Odyssey pads + very little paste that spreads easily (NT-H2) will be best for that EK block.
>
> But those EK blocks have something wrong with them past 500w.
>
> I have 4 rads in my O11, no glass panels, so water is 24c max; the GPU gets to 37-42c up to 500w at 2175 core with my Zotac.


How big are your rads? What thickness are the pads you used? The EK backplate came with 1mm and 2mm pads; the 2mm pads were for the back of the VRMs and the 1mm for the RAM and GPU backside. I'm curious about these thinner pads. I only have one 360mm and one 240mm rad; not sure how much cooler I can get at 450-520w. My 2080 Ti ran at like 38-45c with the 380w BIOS + single shunt mod, so around 425-450w.
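For a rough feel of rad headroom, the common community rule of thumb of ~100w per 120mm rad section at a 10c water-to-air delta can be plugged in (this is a heuristic, not a spec; fan speed and rad thickness change it a lot):

```python
# Back-of-envelope coolant delta-T estimate. The ~100w per 120mm rad
# section at a 10c water-to-air delta is a community rule of thumb,
# not a spec; fan speed and rad thickness move it a lot.

W_PER_120MM_AT_10C = 100.0

def coolant_delta_c(load_w: float, rad_120mm_sections: int) -> float:
    """Estimated water temperature over ambient under sustained load."""
    return 10.0 * load_w / (W_PER_120MM_AT_10C * rad_120mm_sections)

# One 360 + one 240 = 5x 120mm sections at ~520w sustained:
print(f"{coolant_delta_c(520, 5):.1f}c over ambient")  # 10.4c over ambient
```

So a 360 + 240 at ~520w would sit roughly 10c over ambient by this estimate, before adding CPU heat to the loop.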


----------



## des2k...

gfunkernaught said:


> How big are your rads? What thickness pads did you use? The EK backplate came with 1mm and 2mm pads; the 2mm pads were for the backs of the VRMs and the 1mm for the RAM and GPU backside.


2x 360 Freezemod, 1x 240 Alphacool 40mm, 1x 240 Corsair.

1mm for the front; on the back, 2mm for the core, 1.5mm for the VRMs, 1mm for the mem.

The Odyssey pads seem to stretch when you peel them off the plastic, so I'd say they end up about 0.1mm thinner.


----------



## gfunkernaught

des2k... said:


> 2x 360 Freezemod, 1x 240 Alphacool 40mm, 1x 240 Corsair.
> 
> 1mm for the front; on the back, 2mm for the core, 1.5mm for the VRMs, 1mm for the mem.
> 
> The Odyssey pads seem to stretch when you peel them off the plastic, so I'd say they end up about 0.1mm thinner.


I thought it was odd that the instructions for the backplate said to use 1mm for the RAM and GPU but 2mm for the VRMs. The GPU backside is recessed, lower than the raised RAM plates, so common sense says to put 2mm there to make sure there's contact. That could explain the banana.


----------



## changboy

So you need a lot of radiator to keep temps down like this. It's nice to see you can play without the coolant temperature climbing, hehehe.


----------



## J7SC

changboy said:


> So you need a lot of radiator to keep temps down like this. It's nice to see you can play without the coolant temperature climbing, hehehe.


...yeah, rad space (along w/ type of fans etc) can make life a lot easier with oc'ed RTX...my other sys has dual w-cooled 2080 Tis on their own dedicated loop w/ 1080x55 rad space and 9x GT 4K rpm fans...Even with a total of 760W pumping into the GPU loop, it stays in the mid to high 30 C range, even after hours of stress (FS2020, CP 2077).


----------



## changboy

J7SC said:


> ...yeah, rad space (along w/ type of fans etc) can make life a lot easier with oc'ed RTX...my other sys has dual w-cooled 2080 Tis on their own dedicated loop w/ 1080x55 rad space and 9x GT 4K rpm fans...Even with a total of 760W pumping into the GPU loop, it stays in the mid to high 30 C range, even after hours of stress (FS2020, CP 2077).


Like me, I have just one D5 pump. Is that enough to push through and come back from all those rads?


----------



## Falkentyne

Someone posted this a while back, and even though I found the program useless (games black-screen and the computer reboots at memory speeds the tester says are error-free), someone may find it useful.

RENAME THE FILE TO .ZIP.


----------



## toxicnerve

Don't know if these scores are good or bad, opinions welcome.

Time Spy 20,026 (result link: www.3dmark.com)

Port Royal 14,278 (result link: www.3dmark.com)

MSI Suprim X with 520W BIOS, +60 core & +898 mem. Stock cooler.


----------



## des2k...

Talking about memory: the slider stops at +1500. Is that a hardware limit?


----------



## marti69

Need a little help, guys. I've been searching a lot and still can't figure out the issue. I upgraded from a 10900K to a 5950X, and now my RTX 3090 Trio, watercooled with the KP 520W BIOS, gives a lower score at the same clocks. I used to get a 15,300 GPU score in Port Royal with the 10900K OC'd to 5.1GHz; now my best with the 5950X is 14,800, and the GPU score doesn't change whether I OC the CPU from 4.5 to 4.7GHz. I reinstalled the 10900K to test, and even at just 5GHz I still get a better PR score than with the AMD CPU. Anyone know what's wrong with the AMD platform? (The GPU is at 99/98% usage on both platforms with the same core OC, 2220 to 2205MHz.) This is driving me crazy; I can't believe Ryzen 5000 has lower GPU performance even with a 4.7 OC.


----------



## Nizzen

des2k... said:


> Talking about memory: the slider stops at +1500. Is that a hardware limit?
> 
> View attachment 2479688


Software limit. Use EVGA Precision X1.

I had to use Precision X1 to bench under chilled water, +1700 on mem.


----------



## Nizzen

marti69 said:


> Need a little help, guys. I've been searching a lot and still can't figure out the issue. I upgraded from a 10900K to a 5950X, and now my RTX 3090 Trio, watercooled with the KP 520W BIOS, gives a lower score at the same clocks. I used to get a 15,300 GPU score in Port Royal with the 10900K OC'd to 5.1GHz; now my best with the 5950X is 14,800, and the GPU score doesn't change whether I OC the CPU from 4.5 to 4.7GHz. I reinstalled the 10900K to test, and even at just 5GHz I still get a better PR score than with the AMD CPU. Anyone know what's wrong with the AMD platform? (The GPU is at 99/98% usage on both platforms with the same core OC, 2220 to 2205MHz.) This is driving me crazy; I can't believe Ryzen 5000 has lower GPU performance even with a 4.7 OC.


"Upgraded" 😅

How many AMD CPUs do you see?








3DMark Port Royal Hall of Fame (the official home of 3DMark world record scores): www.3dmark.com





It's possible to get good Port Royal scores with AMD, but it's way harder.

You need to know all the benchmark "tricks".


----------



## GRABibus

I have flashed my ASUS ROG Strix OC with the 520W Kingpin BIOS.

The result is very bad: one of the 8-pin power inputs is stuck at 2W and the max power draw is 350W (Time Spy, Heaven, etc.).

Has anyone tried the Kingpin BIOS on an ASUS Strix OC? Same issue?


----------



## des2k...

marti69 said:


> need liitle help guys i have been searching lot still cant figure out the issue i uograded from 10900k to 5950x now my rtx 3090 trio watercooled with kp520w bios gives lower score with same clocks i use to have 15300 15300 in port royal gpu score with 10900k oc to 5.1ghz now my best score with 5950x is 14800 same score if i oc from 4.5 to 4.7ghz cpu score dont change even if oc cpu i reinstalled 10900k to test and even with just 5ghz i still get better pr score then amd cpu anyone know whats wrong with amd platforme ? (gpu is 99 98 usage on both platforme and same core oc 2220 to 2205 mhz) this thing is driving me crazy i cant believe ryzen 5000 have lower gpu perfomance even with 4.7oc.


Well, with my 3900X it goes to 14,870+ in Port Royal, boosting 4.3 all-core and 4.5 single-core.

Zen 3 is much better than Zen 2 for 1440p (Port Royal), especially with that OC.

Are you running 1900+ IF? What memory?


----------



## marti69

des2k... said:


> Well, with my 3900X it goes to 14,870+ in Port Royal, boosting 4.3 all-core and 4.5 single-core.
> 
> Zen 3 is much better than Zen 2 for 1440p (Port Royal), especially with that OC.
> 
> Are you running 1900+ IF? What memory?


Thank you for the reply. I'm running 3800 memory and 1900 IF, yes, with 4.7GHz on all cores. The GPU score is the same whether I OC the CPU from 4.4 to 4.7GHz; the 10900K gives a much better GPU score at 5GHz.


----------



## marti69

Nizzen said:


> "Upgraded" 😅
> 
> How many AMD CPUs do you see?
> 
> 3DMark Port Royal Hall of Fame (the official home of 3DMark world record scores): www.3dmark.com
> 
> It's possible to get good Port Royal scores with AMD, but it's way harder.
> 
> You need to know all the benchmark "tricks".


So AMD 5000 is still behind Intel on gaming after all.


----------



## Falkentyne

GRABibus said:


> I have flashed my ASUS ROG Strix OC with the 520W Kingpin BIOS.
> 
> The result is very bad: one of the 8-pin power inputs is stuck at 2W and the max power draw is 350W (Time Spy, Heaven, etc.).
> 
> Has anyone tried the Kingpin BIOS on an ASUS Strix OC? Same issue?


Did you actually benchmark it and test the score?
Ignore the power readings. They mean nothing. The power pins are actually balanced. This has been verified by multiple people with a current clamp.
Also you shouldn't flash the 520W Kingpin bios on your Strix. That's a useless bios for your board.

You can try the 1000W Kingpin bios (3998-xoc.rom I think it's called) but be careful as VRAM/VRM overtemp protection limits are disabled on it. 

All that matters is the score, not how much power the KP bios is reporting. Test your scores and then see if they're better or not.

Use the Asus Strix XOC2 bios that was posted earlier here (if it was a .TXT file attachment, rename it to .ROM). See if this helps.


----------



## J7SC

changboy said:


> Like me i have just 1 D-5 pump, is this enough to go and comeback for all rad like this ?


...I always use two D5s in series per loop for a variety of reasons...in the TR / 2x 2080 Ti system, a total of 4x D5s for two loops


----------



## jomama22

Nizzen said:


> "Upgraded" 😅
> 
> How many AMD CPUs do you see?
> 
> 3DMark Port Royal Hall of Fame (the official home of 3DMark world record scores): www.3dmark.com
> 
> It's possible to get good Port Royal scores with AMD, but it's way harder.
> 
> You need to know all the benchmark "tricks".


No real tricks, tbh. Just knowing how to tune memory is all. My PR score is comparable to those from a 10900K @5.4+ with 4400+ memory.

At the end of the day it's just Port Royal, which has pretty much no bearing on gaming and whatnot. Though I do agree going from a 10900K to a 5950X seems a bit strange unless you need the extra cores for work or other multitasking.


----------



## changboy

J7SC said:


> ...I always use two D5s in series per loop for a variety of reasons...in the TR / 2x 2080 Ti system, a total of 4x D5s for two loops


Do you mean this: EK-XTOP Revo Dual D5 Serial?


----------



## GRABibus

marti69 said:


> So AMD 5000 is still behind Intel on gaming after all.


Not really...


----------



## Beagle Box

toxicnerve said:


> Don't know if these scores are good or bad, opinions welcome.
> 
> Time Spy 20,026 (result link: www.3dmark.com)
> 
> Port Royal 14,278 (result link: www.3dmark.com)
> 
> MSI Suprim X with 520W BIOS, +60 core & +898 mem. Stock cooler.


Looks great for using a stock cooler.


----------



## GRABibus

Falkentyne said:


> Did you actually benchmark it and test the score?
> Ignore the power readings. They mean nothing. The power pins are actually balanced. This has been verified by multiple people with a current clamp.
> Also you shouldn't flash the 520W Kingpin bios on your Strix. That's a useless bios for your board.
> 
> You can try the 1000W Kingpin bios (3998-xoc.rom I think it's called) but be careful as VRAM/VRM overtemp protection limits are disabled on it.
> 
> All that matters is the score, not how much power the KP bios is reporting. Test your scores and then see if they're better or not.
> 
> Use the Asus Strix XOC2 bios that was posted earlier here (if it was a .TXT file attachment, rename it to .ROM). See if this helps.


Thank you.
With the 1000W BIOS, I'm afraid of burning my card.
I'm on the stock cooler.

I'll try the BIOS you posted first. How should it help?


----------



## J7SC

changboy said:


> Do you mean this: EK-XTOP Revo Dual D5 Serial?


nope, just 4x Swiftech MCP655s (D5) I've had for years & years


----------



## jomama22

marti69 said:


> So AMD 5000 is still behind Intel on gaming after all.


Let's ignore all reviews and only focus on the Port Royal benchmark *****.

You can go take a look at the graphics scores in the Time Spy and Time Spy Extreme HoF and see that it's simply not true.


----------



## GRABibus

Falkentyne said:


> Did you actually benchmark it and test the score?
> Ignore the power readings. They mean nothing. The power pins are actually balanced. This has been verified by multiple people with a current clamp.
> Also you shouldn't flash the 520W Kingpin bios on your Strix. That's a useless bios for your board.
> 
> You can try the 1000W Kingpin bios (3998-xoc.rom I think it's called) but be careful as VRAM/VRM overtemp protection limits are disabled on it.
> 
> All that matters is the score, not how much power the KP bios is reporting. Test your scores and then see if they're better or not.
> 
> Use the Asus Strix XOC2 bios that was posted earlier here (if it was a .TXT file attachment, rename it to .ROM). See if this helps.


OK.

ASUS ROG Strix stock BIOS at stock settings:
=> TS graphics score = 19,800

ASUS ROG Strix with the KP 520W BIOS at stock settings:
=> TS graphics score = 20,600 => a nice +800 pts

If I overclock as follows:
2050MHz boost clock, +1000 on memory, power and voltage sliders maxed out, and fans at 100%

=> Same score with the Kingpin BIOS and the ASUS Strix stock BIOS.

What I gained at stock settings in Time Spy from the extra power, I lose to temperature throttling with the KP BIOS once overclocked.

Definitely, the ASUS ROG Strix must live on water.

Thank you for your advice.


----------



## J7SC

Nizzen said:


> "Upgraded" 😅
> 
> How many AMD CPUs do you see?
> 
> 3DMark Port Royal Hall of Fame (the official home of 3DMark world record scores): www.3dmark.com
> 
> It's possible to get good Port Royal scores with AMD, but it's way harder.
> 
> You need to know all the benchmark "tricks".


...depends what you mean by "tricks" - some people might take offense at that. I spent a year and a half in the Port Royal HoF, as high as 15th (stock BIOS, stock w-cooling on the cards), until those pesky 3090s came out  ...My CPU was (is) an AMD TR 2950X, which isn't exactly the world's best-known bench CPU. Running Port Royal takes a lot of setup work and understanding. While it is not super sensitive to CPUs / speed per se, both VRAM and system RAM latencies play a major role in Port Royal.

Almost done rebuilding the Win 10 Pro install with the 3950X...not a 5950X, but it runs IF 1900 / RAM 3800 tight right out of the box



Spoiler


----------



## Nizzen

jomama22 said:


> Let's ignore all reviews and only focus on the Port Royal benchmark ***.
> 
> You can go take a look at the graphics scores in the Time Spy and Time Spy Extreme HoF and see that it's simply not true.


It depends on the game. More games seem to love low latency than a big cache.
I have both, so I don't care.

5900x with 3090 and 10900k with 3090.


----------



## Nizzen

jomama22 said:


> Let's ignore all reviews and only focus on the port royal benchmark ***.
> 
> You can go take a look at the graphics score for time spy and time spy extreme hof and see that it's simply not true.


Did you try your voltmodded 3090 with 10900k?


----------



## Falkentyne

GRABibus said:


> OK.
> 
> ASUS ROG Strix stock BIOS at stock settings:
> => TS graphics score = 19,800
> 
> ASUS ROG Strix with the KP 520W BIOS at stock settings:
> => TS graphics score = 20,600 => a nice +800 pts
> 
> If I overclock as follows:
> 2050MHz boost clock, +1000 on memory, power and voltage sliders maxed out, and fans at 100%
> 
> => Same score with the Kingpin BIOS and the ASUS Strix stock BIOS.
> 
> What I gained at stock settings in Time Spy from the extra power, I lose to temperature throttling with the KP BIOS once overclocked.
> 
> Definitely, the ASUS ROG Strix must live on water
> 
> Thank you for your advice.


How much were you drawing on your kingpin bios?
Here is my shunt modded 3090 FE, at 100% power limit, stock clocks.

Not sure how I beat you here. What were your temps?


----------



## GRABibus

Falkentyne said:


> How much were you drawing on your kingpin bios?
> Here is my shunt modded 3090 FE, at 100% power limit, stock clocks.
> 
> Not sure how I beat you here. What were your temps?
> 
> View attachment 2479702



Here I am :










Kingpin BIOS 520W, all at stock settings (fans on auto).
Please note I had set the NVIDIA driver texture quality to high performance (only a ~100-pt difference, maybe...).

My GPU max power is 340W....

How can it be so low??


----------



## changboy

J7SC said:


> nope, just 4x Swiftech mcp655 (D5) I had for years & years


This, with the two pumps installed in the same top: Amazon.com: EKWB EK-XTOP Revo Dual D5 PWM Pumps: Computers & Accessories
If you don't use something like this, I don't understand how you can push with two pumps on the same loop. You mean you just put one pump in, and then another pump at another point in the same loop?


----------



## GRABibus

Falkentyne said:


> Did you actually benchmark it and test the score?
> Ignore the power readings. They mean nothing. The power pins are actually balanced. This has been verified by multiple people with a current clamp.
> Also you shouldn't flash the 520W Kingpin bios on your Strix. That's a useless bios for your board.
> 
> You can try the 1000W Kingpin bios (3998-xoc.rom I think it's called) but be careful as VRAM/VRM overtemp protection limits are disabled on it.
> 
> All that matters is the score, not how much power the KP bios is reporting. Test your scores and then see if they're better or not.
> 
> Use the Asus Strix XOC2 bios that was posted earlier here (if it was a .TXT file attachment, rename it to .ROM). See if this helps.


And here is the result with the ASUS Strix stock BIOS at stock settings (fans on auto):

Max power is higher (390W), and temps are lower than with the Kingpin.
But the score is lower.

So the max power in HWiNFO with the Kingpin BIOS is false.


----------



## Falkentyne

GRABibus said:


> Here I am:
> 
> View attachment 2479705
> 
> 
> Kingpin BIOS 520W, all at stock settings (fans on auto).
> Please note I had set the NVIDIA driver texture quality to high performance (only a ~100-pt difference, maybe...).
> 
> My GPU max power is 340W....
> 
> How can it be so low??


At least your score is about right this time.
GPU power draw reporting is messed up because the KP BIOS doesn't report 8-pin power draw accurately on non-Kingpin cards; the actual draw isn't really like that.
You have to ignore the reading. The card's built-in power balancing has it drawing correctly; you can verify that with a current clamp on the PCIe cables (don't buy one just for that purpose unless you actually need one or are rich).
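For anyone who does check with a clamp, the conversion is straightforward: each 8-pin feed sits on the 12V rail, so per-cable wattage is roughly 12V times the clamped current. A rough sketch (it ignores voltage droop under load, and the readings below are made-up examples):

```python
RAIL_VOLTAGE = 12.0  # nominal 12V rail; real rails droop slightly under load

def cable_watts(clamped_amps):
    """Approximate power through one PCIe feed from a clamp-meter current reading."""
    return RAIL_VOLTAGE * clamped_amps

# Hypothetical clamp readings (amps) for two 8-pin cables and the slot
readings = {"8pin_1": 14.2, "8pin_2": 13.8, "pcie_slot": 4.5}

total = sum(cable_watts(amps) for amps in readings.values())
print(round(total, 1))  # 12 * (14.2 + 13.8 + 4.5) = 390.0
```

Summing the per-feed numbers this way is what lets a clamp measurement confirm (or refute) what the BIOS is reporting.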


----------



## GRABibus

Falkentyne said:


> At least your score is about right this time.
> GPU power draw reporting is messed up because the KP BIOS doesn't report 8-pin power draw accurately on non-Kingpin cards; the actual draw isn't really like that.
> You have to ignore the reading. The card's built-in power balancing has it drawing correctly; you can verify that with a current clamp on the PCIe cables (don't buy one just for that purpose unless you actually need one or are rich).


Yeah, but my score is in fact 20,600, which is the score without the NVIDIA driver textures set to high performance (the first one I posted).


----------



## GRABibus

Falkentyne said:


> At least your score is about right this time.
> GPU power draw reporting is messed up because the KP BIOS doesn't report 8-pin power draw accurately on non-Kingpin cards; the actual draw isn't really like that.
> You have to ignore the reading. The card's built-in power balancing has it drawing correctly; you can verify that with a current clamp on the PCIe cables (don't buy one just for that purpose unless you actually need one or are rich).


And how can you have 520W max power in HWiNFO if the value is false?


----------



## J7SC

changboy said:


> This with the 2 pump install in the same top : Amazon.com: EKWB EK-XTOP Revo Dual D5 PWM Pumps: Computers & Accessories
> If you dont use a thing like this i dont understand how you can push with 2 pump on the same loop. You mean you just put 1 pump and you put another pump at another point on the same loop ?


The EKWB EK-XTOP Revo Dual D5 is serial as well, anyhow... and yes, you install 2x D5s at different points in the same loop (usually before restrictions such as a block, making sure the pumps are fed by the res)...pressure will equalize in the loop anyway. I've used dual D5s per loop for eons, including on a workstation / quad-GPU build...dual D5s give you fail-over insurance. What's more, well, just read > this


----------



## des2k...

well... EVGA X1 (with the V/F curve window) seems to apply memory past +1500, and I don't lose the Afterburner curve.

+1718 mem


----------



## Falkentyne

GRABibus said:


> And how can you have 520W max power in HWiNFO if the value is false?


Shunt mod. And as I told you earlier, I have an FE. I can NOT use a Kingpin BIOS or a Strix BIOS; I can't flash them. I'm not using a custom BIOS. And I changed the power multiplier to 1.91x in HWiNFO64 (1.91x the GPU-Z reading, which is an approximation based on my Time Spy scores at a 400W power limit).

While I have 5 mOhm stacked shunts on 8-pin #1, #2 and GPU chip power, it's not a 2x multiplier because I still have MG842AR paint on the PCIe slot and SRC shunts, which throws off the total reading a little (because that's higher resistance).
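The correction factor can be sanity-checked with shunt arithmetic: the card senses current as the voltage drop across a shunt divided by its assumed resistance, so stacking a second 5 mOhm shunt on the stock 5 mOhm one halves the effective resistance and halves the reported power. A minimal sketch of the ideal case (which gives exactly 2x; the 1.91x used above is lower because some rails only got conductive paint rather than a full stacked shunt):

```python
def parallel(r1, r2):
    """Combined resistance of two shunts stacked in parallel (ohms)."""
    return (r1 * r2) / (r1 + r2)

STOCK = 0.005  # stock shunt resistance, 5 mOhm
ADDED = 0.005  # stacked shunt, also 5 mOhm

r_effective = parallel(STOCK, ADDED)  # roughly 2.5 mOhm

# Sensed power scales with shunt resistance, so true draw is the
# reading times (stock / effective) -- exactly 2x in the ideal case.
correction = STOCK / r_effective

print(round(r_effective * 1000, 3), round(correction, 3))  # 2.5 2.0
```

A partial mod (paint instead of a stacked shunt on some rails) lowers those rails' resistance by less, which is why the fitted overall multiplier lands between 1x and 2x.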


----------



## ViRuS2k

Well guys, got my MSI 3090 + waterblock fitted, with a small bubble in my CPU cooler that I hope will work its way out, hahaha...
Temps are really good: with a max overclock of 2100+ I'm getting 49C, and 30C idle with water temps of 30C.
The little LCD on the graphics waterblock is pretty cool; it shows me the water temp, and there's only 1C between the water temp reading and the GPU core temp in Windows, so that's a good mount on the graphics card, with liquid metal.

I do have some board bend, which is totally annoying but couldn't be helped; for some reason this waterblock comes with too-thick thermal pads, but I seem to have gotten a good mount, as the temps speak for themselves.

I also run the card in performance mode in Windows, so no downclocking, hence the 30C idle temp.
So far so good: played Gears of War 5 at 99% GPU usage for 3+ hours and it ran perfectly, no crashes or anything, and the GPU temp hovered around 49C.

That's pretty good for 2x 360mm radiators (CoolStream XT versions) with my 5950X at 5.1 single-core and 4.6 multi-core; games run sweet.
Temps on the memory seem to have improved too; it's maxing out at 82C, where before it maxed out at 86C. Not a lot of difference, I know, but better than nothing, and the memory is overclocked at 20,502MHz (+500 in Afterburner)...

Though I plan on cooling that memory down with the serial MP5, I can't install it yet until I can get a very strong backplate without the hole in the middle, as I need a flush surface and for it not to bend, lol.

Also, water temps max out at 38C with the GPU running at 99% usage at 49C, so only a delta of 11C; I think that's good...


----------



## des2k...

A small bump to the Port Royal score: 2175, +1718 mem.
Going by history, I think 2190 adds another 50 or so points.
Past +1200 mem the gains are very small.


----------



## J7SC

First full TimeSpy run ever w/ this card and/or mobo/CPU... Setup is AIO 3950X IF1900/RAM3800 and Strix OC / stock bios / air. Room ambient at 24 C was too hot to kick in full oc, but promising via the new setup. As before, NV power tab on 'normal' rather than 'prefer high performance' as that keeps the card from getting hotter...


----------



## GRABibus

J7SC said:


> First full TimeSpy run ever w/ this card and/or mobo/CPU... Setup is AIO 3950X IF1900/RAM3800 and Strix OC / stock bios / air. Room ambient at 24 C was too hot to kick in full oc, but promising via the new setup. As before, NV power tab on 'normal' rather than 'prefer high performance' as that keeps the card from getting hotter...
> 
> View attachment 2479720


22616 graphics score on Asus strix oc at stock settings ???
😳


----------



## WillP

samcheekin said:


> Hi, I'm new here and not sure whether this is the right place to ask.
> I'm looking for a guide to flash my current 3090 Gaming X Trio to the Suprim X 450W BIOS.
> I have nvflash downloaded, but I have no idea what to do next. Can anyone guide me along?





How-To Flash RTX Video Card BIOS To A Different Series


The above link gives a basic guide to flashing a BIOS, but it's a little out of date. The basic principles still apply: back up your existing BIOS, be prepared for some risk (and some holding your breath to see if it takes), and I'd suggest looking through these pages to see whether the BIOS you're hoping to use has been successfully flashed onto your card model by someone else.
The out-of-date bit is that the command is no longer nvflash64; it's plain nvflash now, as listed by gfunkernaught in his answer to you.
I've flashed my BIOS a grand total of twice, so I'm no expert, but I did it after a few attempts using the guide I've linked and the advice in these pages.
I wouldn't normally give advice on something I know so little about, but I can see you haven't had much of an answer from anyone else beyond the command instruction, which I guess may not be enough if you're a complete novice.
Good luck if you choose to go ahead with it.


----------



## J7SC

GRABibus said:


> 22616 graphics score on Asus strix oc at stock settings ???
> 😳


...stock _bios_, air, 'not full oc' yet (see GPUz tab above)


----------



## GRABibus

J7SC said:


> ...stock _bios_, air, 'not full oc' yet (see GPUz tab above)


Here is my best score with the Strix OC stock BIOS and my highest stable OC: stock air cooling, fans at 100%, and the NVIDIA driver texture setting on high performance:

3DMark Time Spy Benchmark Top 30: www.overclock.net

We have a 450-pt difference in graphics score... that's huge!


----------



## changboy

I'm back on the 500W from the 1000W; I think the 1000W draws too much power for nothing. At nearly the same OC I draw 100W less on the board. Maybe slightly lower perf, but nothing to worry about.

I feel that constantly drawing 500W+ is hard on the Ampere GPU.


----------



## Sheyster

gfunkernaught said:


> That's your effective clock?


Yes, it does drop a bin or two with temps; it just depends on the fan curve I use.


----------



## Falkentyne

WillP said:


> How-To Flash RTX Video Card BIOS To A Different Series
> 
> 
> The above link gives a basic guide to flashing bios, but it is a little out of date. Basic principles still apply, back up your existing bios, be prepared for some risk, and some holding your breath to see if it takes, and I'd suggest looking through these pages to see if the bios you're hoping to use has been successfully used on your card model by someone else.
> The bit that is out of date is that it is no longer nvflash64 in the commands, but the commands as listed by Gfunkernaught in his answer to you, plain nvflash now.
> I've flashed my bios a grand total of twice, so I'm no expert, but I did it after a few efforts from the guide I've linked to and the advice in these pages.
> I wouldn't normally give advice on something I know so little about, but I can see that you haven't had much of an answer from anyone else other than the command instruction which I guess may not be enough if you're a complete novice.
> Good luck if you choose to go ahead with it.


There is no ID-mismatch-patched version of NVFlash for Ampere cards. Someone posted one briefly, then deleted the post.
That article is for Turing cards but otherwise applies to Ampere reference cards. The -6 override only works on reference 3090 cards.
So you can't flash an FE with a reference or XOC BIOS, because the -6 override doesn't work.


----------



## changboy

Me, I use -6 and it works on my FTW3 Ultra.


----------



## J7SC

changboy said:


> I'm back on the 500W from the 1000W; I think the 1000W draws too much power for nothing. At nearly the same OC I draw 100W less on the board. Maybe slightly lower perf, but nothing to worry about.
> 
> I feel that constantly drawing 500W+ is hard on the Ampere GPU.


Your card has a bios switch anyhow, right ? You can keep the XOC for those special cold winter days for benching, but for gaming etc, I would stick with stock bios as well, with a mild oc...when I play FS2020 or CP2077, I don't even use the full oc, or PL, but usually run up to 115% out of 123%. 

I used to flash custom XOC bios years back for GTX series, but for RTX (2080 Ti, 3090) I not only leave the bios alone, but also never touch the voltage slider either...with a good setup, I prefer to max efficiency and HD cooling to gain as much in the PL budget as I can. RTX seem to be optimized much more already straight out of the box. Also, this 3090 build is 1/3rd productivity and 2/3rd entertainment as a secondary 4K system in a quiet area. All that said, these 3090s cry out for w-cooling, even w/ a great air cooler like the Strix has...once I have done that, I 'might' kick it up a bit to the KPE 520W non-XOC or similar...we'll see. The Strix OC though pulls up to 500W on full song anyway...


----------



## changboy

J7SC said:


> Your card has a bios switch anyhow, right ? You can keep the XOC for those special cold winter days for benching, but for gaming etc, I would stick with stock bios as well, with a mild oc...when I play FS2020 or CP2077, I don't even use the full oc, or PL, but usually run up to 115% out of 123%.
> 
> I used to flash custom XOC bios years back for GTX series, but for RTX (2080 Ti, 3090) I not only leave the bios alone, but also never touch the voltage slider either...with a good setup, I prefer to max efficiency and HD cooling to gain as much in the PL budget as I can. RTX seem to be optimized much more already straight out of the box. Also, this 3090 build is 1/3rd productivity and 2/3rd entertainment as a secondary 4K system in a quiet area. All that said, these 3090s cry out for w-cooling, even w/ a great air cooler like the Strix has...once I have done that, I 'might' kick it up a bit to the KPE 520W non-XOC or similar...we'll see. The Strix OC though pulls up to 500W on full song anyway...


Yes; like, I was playing BF1 and I can draw 580W on the 1000W BIOS, but I don't need that. Not worth it to me, and I can also feel the heat coming off my radiator. It's winter now, but in summer this can be insane in a room, even with A/C.


----------



## gfunkernaught

Sheyster said:


> Yes, it does drop a bin or two with temps, just depends on the fan curve I use.


Nice. What did you set the curve to? Which bios?


----------



## WillP

Falkentyne said:


> There is no ID-mismatch-patched version of NVFlash for Ampere cards. Someone posted one briefly, then deleted the post.
> That article is for Turing cards but otherwise applies to Ampere reference cards. The -6 override only works on reference 3090 cards.
> So you can't flash an FE with a reference or XOC BIOS, because the -6 override doesn't work.


Thanks for clarifying. He's trying to flash a Gaming X Trio. I flashed a reference card, so perhaps I didn't run into that problem; it's been a while since I did it. I know I worked out the "64" bit somehow; I remember that.


----------



## J7SC

...3DM TS with just a few degrees cooler (ambient 22C now) and one speed bin up. I may have one or two steps left at room temp on air, but I'll try those later. As before, Strix OC on stock BIOS.

The 3950X continues to impress...hoping to optimize the RAM a bit more



Spoiler


----------



## rix2

What is wrong with my FTW3 Ultra? I installed an EK waterblock and the card's power draw fell to a max of 408W on the XOC BIOS; it was 440W before, and the PR result is the same?!? The core offset was +130, now +150, and nothing changed... I scored 14,085 in Port Royal; on air I scored 14,021.


----------



## KedarWolf

rix2 said:


> What is wrong with my FTW3 Ultra? I installed an EK waterblock and the card's power draw fell to a max of 408W on the XOC BIOS; it was 440W before, and the PR result is the same?!? The core offset was +130, now +150, and nothing changed... I scored 14,085 in Port Royal; on air I scored 14,021.


After much research, I'm going with a Strix OC. Was originally going with the Ultra.

I've even read of peeps here selling their Ultras and getting the Strix card.


----------



## Esenel

marti69 said:


> can't figure out the issue; I upgraded from a 10900K to a 5950X and now my RTX 3090 Trio, watercooled with the KP 520W BIOS, gives a lower score at the same clocks


Going from a 10900K to a 5950X is only a sidegrade.
As @jomama22 already showed, a 5950X can compete head to head.
In PR and TS, though, you are faster with the same GPU on Intel at max OC.

But


marti69 said:


> so amd 5000 is still behind intel on gaming after all


Depending on the game.
As written before.
Some like Cache.
Some low Latency.


----------



## rix2

Precision X1 stops working when I flash the KP 520W BIOS?


----------



## Nizzen

rix2 said:


> Precision x1 not working when I flash kp 520w bios?


Try DDU and install new gpu driver?


----------



## rix2

Nizzen said:


> Try DDU and install new gpu driver?


X1 finds new firmware and needs a restart; after the restart it's the same thing again, over and over....


----------



## rix2

rix2 said:


> What is wrong with my FTW3 Ultra? I installed an EK water block and the card's power draw now maxes out at 408W on the XOC BIOS, where it was 440W before, yet the PR result is the same?!? Core offset was +130, now +150, and nothing changed... I scored 14,085 in Port Royal on water versus 14,021 on air.


And the default OC BIOS works better: 420W draw and the same PR score.


----------



## Beagle Box

rix2 said:


> What is wrong with my FTW3 Ultra? I installed an EK water block and the card's power draw now maxes out at 408W on the XOC BIOS, where it was 440W before, yet the PR result is the same?!? Core offset was +130, now +150, and nothing changed... I scored 14,085 in Port Royal on water versus 14,021 on air.


Your block is working perfectly. Same score pulling less power. 
My experiments have shown the ASUS BIOSs are crap, at least the three I tried. 
On my ASUS Strix OC:
The best OC BIOS is the KP1000W. 
The second best for OC is the KP520W BIOS.
The best every-day running BIOS is the MSI Suprim BIOS.
YMMV.
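For what it's worth, rix2's numbers actually show a win in efficiency terms, not a wash. A quick sanity-check sketch (Python; the scores and wattages are the figures from the posts above):

```python
# Same Port Royal score on ~32 W less power means better points-per-watt.
def points_per_watt(score, watts):
    return score / watts

air = points_per_watt(14021, 440)    # air cooling, ~440 W max draw
water = points_per_watt(14085, 408)  # EK block + XOC BIOS, ~408 W max draw
print(f"{(water / air - 1) * 100:.1f}% more points per watt")  # → 8.3% more points per watt
```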


----------



## Zogge

What makes the Suprim 450W BIOS better than the original Strix one?


----------



## samcheekin

gfunkernaught said:


> nvflash.exe -6 biosname.rom, then say yes at the prompts. Make sure the bios file is in the same folder as nvflash.exe.
> 
> Monitor your temps. I tried the suprim bios on my trio with the stock cooler and the temps did rise a bit. Also, make sure your power supply can keep up.


Hi, good day. Thank you for your reply. Some say the -6 command only works on FE cards. Is yours a Gaming X Trio 3090?

Also, may I know which BIOS you are using? There are a few of them listed on the TechPowerUp website.

Temp-wise I wouldn't worry, as I have it watercooled at the moment. I just want a higher power limit for OC and benchmarking purposes.


----------



## samcheekin

WillP said:


> How-To Flash RTX Video Card BIOS To A Different Series
> 
> 
> The above link gives a basic guide to flashing bios, but it is a little out of date. Basic principles still apply, back up your existing bios, be prepared for some risk, and some holding your breath to see if it takes, and I'd suggest looking through these pages to see if the bios you're hoping to use has been successfully used on your card model by someone else.
> The bit that is out of date is that it is no longer nvflash64 in the commands, but the commands as listed by Gfunkernaught in his answer to you, plain nvflash now.
> I've flashed my bios a grand total of twice, so I'm no expert, but I did it after a few efforts from the guide I've linked to and the advice in these pages.
> I wouldn't normally give advice on something I know so little about, but I can see that you haven't had much of an answer from anyone else other than the command instruction which I guess may not be enough if you're a complete novice.
> Good luck if you choose to go ahead with it.


Hi, good day. Thanks for your reply. I will ask around here and see how it goes. Thank you.
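To summarize the quoted guide, the flash procedure boils down to three nvflash commands: back up, unprotect, flash. A dry-run sketch that only prints the command sequence (Python; the .rom filenames are placeholders, and actually flashing a vBIOS carries real risk of bricking the card):

```python
# Builds the nvflash command sequence without executing anything.
def flash_commands(bios_rom, backup_rom="backup.rom"):
    return [
        ["nvflash", "--save", backup_rom],  # 1) back up the current vBIOS first
        ["nvflash", "--protectoff"],        # 2) lift the EEPROM write protection
        ["nvflash", "-6", bios_rom],        # 3) flash, overriding the subsystem ID check
    ]

for cmd in flash_commands("suprim520.rom"):
    print(" ".join(cmd))
```

Run each printed line from an elevated prompt in the folder containing nvflash and the .rom file, and answer yes at the prompts.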


----------



## Beagle Box

Zogge said:


> What makes the Suprim 450W BIOS better than the original Strix one?


It just runs the smoothest on my GPU. Dunno why. 
The stock ASUS BIOS is the least stable; it fluctuates too widely at high clocks, which makes it more prone to crashing.
I hope MSI comes out with a 600W version of that BIOS. I might then be able to reach 15500 in PR, so I can stop trying. 😉


----------



## GRABibus

Beagle Box said:


> Your block is working perfectly. Same score pulling less power.
> My experiments have shown the ASUS BIOSs are crap, at least the three I tried.
> On my ASUS Strix OC:
> The best OC BIOS is the KP1000W.
> The second best for OC is the KP520W BIOS.
> The best every-day running BIOS is the MSI Suprim BIOS.
> YMMV.


Which MSI Suprim version BIOS do you use, please?

This one?

MSI RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## Beagle Box

GRABibus said:


> Which MSI Suprim version BIOS do you use, please?
> 
> This one?
> 
> MSI RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Yes. BIOS 94.02.42.00.F5


----------



## rix2

Finally! With the KP 520W BIOS I scored 14,287 in Port Royal.

Max power draw was only 414W, PCIe slot 78.9W.


----------



## Sheyster

gfunkernaught said:


> Anyone else spending more time studying, researching, and benchmarking this card more than just playing games? Lol


Not any more!










My WZ Duos buddy is an old friend on a 3090 FE and doesn't OC it whatsoever. We play a highly tactical game of WZ. I've been playing PC games with this guy since the Quake 2 days (1998?).


----------



## J7SC

KedarWolf said:


> After much research, I'm going with a Strix OC. Was originally going with the Ultra.
> 
> I've even read of peeps here selling their Ultras and getting the Strix card.


...good choice (if I do say so myself ). 
When 3090s first came out, I made a shortlist of what to get, and that included the Strix OC as one of three models. Then NOTHING was available to buy, and I do have decent systems w/ RTX cards already, so I didn't bother registering anywhere on a waiting list. Then I completely lucked out three weeks ago when I went to buy some Arctic P12 pwm pst fan value packs out here in BC...one single Asus Strix OC had just come in...and left with me, at a very good MSRP.

I obviously only have this one sample, and am running it on stock bios, yet it is incredibly smooth and does what I want it to: hit a consistent 60+ fps / 4K Ultra in the most demanding games, as well as do some professional work that requires more than 11 GB VRAM. This card will be w-cooled soon, but on air the fans are quite bearable at up to 70%. I use benching to find the limits and then dial back to my daily stuff. Now, with RTX2K, THW had a nice comparison table re. MHz vs temps (haven't seen a reliable one yet for air-cooled 3090s):










Obviously, w-cooled RTX are the way to go. Still, even on air the Strix OC is a joy in all apps, and I for one love the stock bios, at least while the card is on air. 

There's a selection of some benches in the spoiler re. what you could expect if you get a decent sample. Some of those are re-posts, while others are new. Importantly, the card ran in both an ancient Intel X99 / 5960X HEDT and now in a brand new AMD X570/3950X setup (not finished tuning yet) as the X99 has some left-over '''issues''' from LN2/DICE years back, yet the benches are very close (as you might expect at 4+K). I also included the bone-stock MSI AB voltage curve of my Strix OC. I am sure you will enjoy the Strix, as much as there are other great 3090 cards out there.



Spoiler


----------



## KedarWolf

J7SC said:


> ...good choice (if I do say so myself ).
> When 3090s first came out, I made a shortlist of what to get, and that included the Strix OC as one of three models. Then NOTHING was available to buy, and I do have decent systems w/ RTX cards already, so I didn't bother registering anywhere on a waiting list. Then I completely lucked out three weeks ago when I went to buy some Arctic P12 pwm pst fan value packs out here in BC...one single Asus Strix OC had just come in...and left with me, at a very good MSRP.
> 
> I obviously only have this one sample, and am running it on stock bios, yet it is incredibly smooth and does what I want it to: hit a consistent 60+ fps / 4K Ultra in the most demanding games, as well as do some professional work that requires more than 11 GB VRAM. This card will be w-cooled soon, but on air the fans are quite bearable at up to 70%. I use benching to find the limits and then dial back to my daily stuff. Now, with RTX2K, THW had a nice comparison table re. MHz vs temps (haven't seen a reliable one yet for air-cooled 3090s):
> 
> View attachment 2479782
> 
> 
> Obviously, w-cooled RTX are the way to go. Still, even on air the Strix OC is a joy in all apps, and I for one love the stock bios, at least while the card is on air.
> 
> There's a selection of some benches in the spoiler re. what you could expect if you get a decent sample. Some of those are re-posts, while others are new. Importantly, the card ran in both an ancient Intel X99 / 5960X HEDT and now in a brand new AMD X570/3950X setup (not finished tuning yet) as the X99 has some left-over '''issues''' from LN2/DICE years back, yet the benches are very close (as you might expect at 4+K). I also included the bone-stock MSI AB voltage curve of my Strix OC. I am sure you will enjoy the Strix, as much as there are other great 3090 cards out there.
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2479783
> 
> 
> View attachment 2479784
> 
> 
> View attachment 2479785


I found a local store I can pre-order the card. Hopefully, it won't take three months to actually get it like my 5950x did.


----------



## GRABibus

KedarWolf said:


> I found a local store I can pre-order the card. Hopefully, it won't take three months to actually get it like my 5950x did.


I really got lucky.
I first bought an EVGA FTW3 Ultra, new, for $2280.
Then I found a ROG Strix Gaming OC on eBay, nearly new, at $2220. Nice price!

I bought it and resold my used EVGA for $2700, $420 more than I paid for it.

So in the end, it's as if I got the Strix for $1800 😊

Look at eBay; sometimes there are incredibly good deals.


----------



## des2k...

Well, +1500 on mem passes a 3-hour GT2 loop after the remount (I wasn't getting past +800/900 on mem before; the paste wasn't compressed at the edge of the die).

PR benches at +1700, D2 crashed earlier at +1680.

What new game is the hardest on rtx 3090 mem?


----------



## gfunkernaught

samcheekin said:


> Hi, good day. Thank you for your reply. Some say the -6 command only works on FE cards. Is yours a Gaming X Trio 3090?
> 
> Also, may I know which BIOS you are using? There are a few of them listed on the TechPowerUp website.
> 
> Temp-wise I wouldn't worry, as I have it watercooled at the moment. I just want a higher power limit for OC and benchmarking purposes.


Ah ok good. I am using the gaming x trio, and I've been using the -6 switch for years without issue. 

I'm currently using the kingpin 1kW bios.


----------



## gfunkernaught

Sheyster said:


> Not any more!
> 
> View attachment 2479768
> 
> 
> My WZ Duos buddy is an old friend on a 3090 FE and doesn't OC it whatsoever. We play a highly tactical game of WZ. I've been playing PC games with this guy since the Quake 2 days (1998?).


That's awesome. After water-cooling and good OC, my minimum fps went up drastically in cold war. I can play at native 4k without DLSS. Same with Control and Shadow of the Tomb Raider.


----------



## J7SC

des2k... said:


> Well, +1500 on mem passes a 3-hour GT2 loop after the remount (I wasn't getting past +800/900 on mem before; the paste wasn't compressed at the edge of the die).
> 
> PR benches at +1700, D2 crashed earlier at +1680.
> 
> What new game is the hardest on rtx 3090 mem?


...FS2020 and Cyberpunk 2077 over longer play time can get nasty on VRAM, depending on backplate temps. Others here suggested Quake II RTX as a torturer, but I haven't tried that yet.


----------



## Beagle Box

J7SC said:


> ...good choice (if I do say so myself ).
> When 3090s first came out, I made a shortlist of what to get, and that included the Strix OC as one of three models. Then NOTHING was available to buy, and I do have decent systems w/ RTX cards already, so I didn't bother registering anywhere on a waiting list. Then I completely lucked out three weeks ago when I went to buy some Arctic P12 pwm pst fan value packs out here in BC...one single Asus Strix OC had just come in...and left with me, at a very good MSRP.
> 
> I obviously only have this one sample, and am running it on stock bios, yet it is incredibly smooth and does what I want it to: hit a consistent 60+ fps / 4K Ultra in the most demanding games, as well as do some professional work that requires more than 11 GB VRAM. This card will be w-cooled soon, but on air the fans are quite bearable at up to 70%. I use benching to find the limits and then dial back to my daily stuff. Now, with RTX2K, THW had a nice comparison table re. MHz vs temps (haven't seen a reliable one yet for air-cooled 3090s):
> 
> View attachment 2479782
> 
> 
> Obviously, w-cooled RTX are the way to go. Still, even on air the Strix OC is a joy in all apps, and I for one love the stock bios, at least while the card is on air.
> 
> There's a selection of some benches in the spoiler re. what you could expect if you get a decent sample. Some of those are re-posts, while others are new. Importantly, the card ran in both an ancient Intel X99 / 5960X HEDT and now in a brand new AMD X570/3950X setup (not finished tuning yet) as the X99 has some left-over '''issues''' from LN2/DICE years back, yet the benches are very close (as you might expect at 4+K). I also included the bone-stock MSI AB voltage curve of my Strix OC. I am sure you will enjoy the Strix, as much as there are other great 3090 cards out there.
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2479783
> 
> 
> View attachment 2479784
> 
> 
> View attachment 2479785


If you think you like it now, just wait 'til you've water cooled it. 😲


----------



## J7SC

Beagle Box said:


> If you think you like it now, just wait 'til you've water cooled it. 😲


...looking forward to that !

*edit: * ...speaking of watercooling, @changboy ...this is a build @iamjanco has done, note the two EK-XTOP Revo Dual D5 (for a total of 4x D5 in that pic). Dual MoRas are nice, too  ...IAJ also just got his KPE 3090...


----------



## Sheyster

gfunkernaught said:


> That's awesome. After water-cooling and good OC, my minimum fps went up drastically in cold war. I can play at native 4k without DLSS. Same with Control and Shadow of the Tomb Raider.


I also play at 4K120 without DLSS. I have applied some graphics settings tweaks from this link:









Why FPS Matters when Playing Call of Duty: Warzone (www.nvidia.com): FPS give you a competitive gaming advantage. Learn why.





I have very little FPS fluctuation; in fact it's been so steady that I've been playing without G-Sync turned on lately, just with an FPS cap just below 120. Slightly less input lag this way.


----------



## gfunkernaught

Sheyster said:


> I also play at 4K120 without DLSS. I have applied some graphics settings tweaks from this link:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Why FPS Matters when Playing Call of Duty: Warzone
> 
> 
> FPS give you a competitive gaming advantage. Learn why.
> 
> 
> 
> www.nvidia.com
> 
> 
> 
> 
> 
> I have very little FPS fluctuation; in fact it's been so steady that I've been playing without G-Sync turned on lately, just with an FPS cap just below 120. Slightly less input lag this way.


I would play at 120fps, but screen tearing is an issue for me, very distracting, unless I'm playing at 144fps, in which case tearing is barely noticeable. I prefer to play with the highest quality I can get on my 4k60hz tv.


----------



## gfunkernaught

J7SC said:


> ...looking forward to that !
> 
> *edit: * ...speaking of watercooling, @changboy ...this is a build @iamjanco has done, note the two EK-XTOP Revo Dual D5 (for a total of 4x D5 in that pic). Dual MoRas are nice, too  ...IAJ also just got his KPE 3090...
> 
> View attachment 2479832


Is that a fridge?


----------



## J7SC

gfunkernaught said:


> Is that a fridge?


...> nope


----------



## changboy

J7SC said:


> ...looking forward to that !
> 
> *edit: * ...speaking of watercooling, @changboy ...this is a build @iamjanco has done, note the two EK-XTOP Revo Dual D5 (for a total of 4x D5 in that pic). Dual MoRas are nice, too  ...IAJ also just got his KPE 3090...
> 
> View attachment 2479832


Insane watercooling rig! Also expensive 
The thing is, you buy this once and you're good to go for life; just change the fans and pump after years, when they fail.


----------



## gfunkernaught

J7SC said:


> ...> nope






I see. That seems like a bit much for benching or trying to hit high scores... unless you live in a warm climate that doesn't have COLD winters. I could take my PC out to the garage on a night that's like 15°F and bench the crap out of it, using remote desktop. I placed 54th on the Port Royal leaderboards a while ago doing that.


----------



## Zogge

changboy said:


> Insane watercooling rig! Also expensive
> The thing is, you buy this once and you're good to go for life; just change the fans and pump after years, when they fail.


I think it is tiny  only 2520x140 rad size and like 50% of mine.


----------



## J7SC

changboy said:


> Insane watercooling rig! Also expensive
> The thing is, you buy this once and you're good to go for life; just change the fans and pump after years, when they fail.


I've only ever lost one D5 since I got them in early '14, and that's because I broke it (advanced dummy class 2.0). As long as you don't have any particulates in your liquids or let them run dry, D5s tend to last and last and last...



gfunkernaught said:


> I see. That seems like a bit much for benching or trying to hit high scores...unless you live in a warm climate that doesn't have COLD winters. I could take my PC out the garage on a night that is like 15F and bench the crap out of it, and use remote desktop. I placed 54th on the Port Royal leaderboards a while ago doing that.


...I sometimes do the open window thing, but after a period of LN2 / DICE some years back, I'm over that part... I just like to 'compete against myself & the machine' to extract max performance from the components I've got, whatever their bin, and do so with 24/7/365 reliability. Also, the owner of that system I linked earlier probably enjoys designing and building it more than anything else.


----------



## Zogge

I agree with that; planning/designing and building a large water cooling solution is fun, and I also just compete against myself and my parts now. 
Stability as high as possible in games and 24/7 usage.


----------



## long2905

those things are insane. no wonder you guys get such ridiculous temp and clock


----------



## J7SC

^^...my Strix is still in air (for now... )


----------



## captn1ko

J7SC said:


> ^^...my Strix is still in air (for now... )


same here... until friday 😈


----------



## mardon

My Heatkiller V block arrived today. I noticed that there is no requirement to put thermal pads on the areas I've circled in blue below, but I did on my EK block. Is this a problem? Should I add some 0.5mm ones anyway? I presumed Heatkiller would be better than EK?


----------



## dante`afk

Why do you ask here and not WC support directly?

These aren't cooled on the FE either.


----------



## Beagle Box

mardon said:


> My heatkiller V block arrived today. I noticed that there is no requirement to put thermal pads on the areas i've circled in blue below but I did on my EK block. Is this a problem? Should I add some 0.5mm ones anyways? I presumed heatkiller would be better than EK?
> 
> View attachment 2479905


Personally, I wouldn't add the pads. 
My Alphacool doesn't have them and doesn't need them. You run the risk of reducing pad pressure on the parts that really need cooling, or maybe even of transferring additional heat to parts that are already cool. The only change I made when installing my block on my Strix was using a slightly thicker pad behind the processor itself, because the provided pad wouldn't reach the backplate.


----------



## Zogge

Same on the Bykski Strix. Only the memory needed pads as per the design, if I recall correctly. The EK block wanted pads everywhere but had worse temps anyway lol.


----------



## gfunkernaught

I guess I should have said it seems a bit much for me lol. Not downplaying that massive rad setup. I enjoy games more than benching; benching feels like a rabbit hole. All I want is a locked 4K60 and the lowest latency. Some older games like Titanfall 2 I play at 144fps with no vsync, just because I like fast-paced shooters that way; blame Quake 3 for that.


----------



## Sheyster

gfunkernaught said:


> I guess I should have said it seems a bit much for me lol. Not downplaying that massive rad setup. I enjoy games more than benching. I feel like benching is a rabbit hole. All I want is 4k60 locked and lowest latency. Some older games like Titanfall 2 I play at 144fps no vsync just cuz I like those fast paced shooters without vsync, blame quake 3 for that.


Highly competitive FPS shooters really require in-game graphics settings tweaks, and no V-Sync or G-Sync if you truly want minimal lag. Frame caps are generally okay, but not always: Counter-Strike: Source and GO come to mind; cap FPS too low and you'll have trouble with hit reg. I haven't played CS:GO since around 2015, but I still do everything I can to get smooth FPS in competitive games like Warzone.


----------



## Gary2015

Turbo | Graphics Cards | ASUS Global (www.asus.com): ASUS Turbo graphics cards are designed from the ground up for systems with restricted airflow. A host of subtle optimizations increase intake and allow a large 80mm blower-fan to propel more air through the heatsink and out of the chassis, delivering lower temperatures in cramped quarters.

Can anyone recommend a suitable waterblock for this blower edition?


----------



## des2k...

Zogge said:


> Same on bykski strix. Only memory needed pads as per design if I recall correct. Ek block wanted pads everywhere but with worse temps anyway lol.


I would put pads on VRM phases & chokes if the block allows it. I'm running reference 2x8pin and those parts dump tons of heat into the PCB.

The EK delta is horrible, but I would prefer a high delta to a dead card.


----------



## Thanh Nguyen

Can anyone with a chiller answer me this? Does a chiller cool down the liquid instantly, or does it need time? I don't use my PC all the time, so I just need it when I turn the PC on. Is there any model that can cool 1000W at 20°C, and how loud is it?


----------



## mardon

Beagle Box said:


> Personally, I wouldn't add the pads.
> My Alphacool doesn't have them and doesn't need them. You run the risk of reducing pad pressure on the parts that really need cooling, or maybe even of transferring additional heat to parts that are already cool. The only change I made when installing my block on my Strix was using a slightly thicker pad behind the processor itself, because the provided pad wouldn't reach the backplate.


That's a good point. Maybe I'll use a bit of thermal paste.


----------



## Nizzen

Thanh Nguyen said:


> Can anyone with a chiller answer me this? Does a chiller cool down the liquid instantly, or does it need time? I don't use my PC all the time, so I just need it when I turn the PC on. Is there any model that can cool 1000W at 20°C, and how loud is it?


It takes a bit of time, and it depends on the cooling capacity. Most chillers are loud, so having it in a different room is best. Or a closed headset 😆

Koolance has some good ones, and there are many aquarium chillers out there. You can go as big as you want; only your budget is the limit.
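To put rough numbers on "it takes a bit of time": the net cooling power is the chiller's capacity minus the heat the PC dumps into the loop. A back-of-the-envelope sketch (Python; the 5 L loop volume and the wattages are hypothetical example figures, not anyone's actual setup):

```python
# Time to pull a water loop down by delta_t_c degrees, ignoring losses.
def cooldown_seconds(litres, delta_t_c, chiller_w, load_w):
    SPECIFIC_HEAT = 4186          # J/(kg*K) for water
    mass_kg = litres              # ~1 kg per litre of water
    net_w = chiller_w - load_w    # cooling left over after the PC's heat load
    if net_w <= 0:
        raise ValueError("chiller cannot keep up with the load")
    return mass_kg * SPECIFIC_HEAT * delta_t_c / net_w

# e.g. a 5 L loop, 30 °C -> 20 °C, 1500 W chiller, PC dumping 1000 W:
print(round(cooldown_seconds(5, 10, 1500, 1000)), "seconds")  # → 419 seconds
```

So "instant" it is not, but a few minutes of head start before gaming is usually enough.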


----------



## Nizzen

Thanh Nguyen said:


> Can anyone with a chiller answer me this? Does a chiller cool down the liquid instantly, or does it need time? I don't use my PC all the time, so I just need it when I turn the PC on. Is there any model that can cool 1000W at 20°C, and how loud is it?


My friend Devilhead is using this:






Aqua Medic Kühlaggregat Titan 4000 (105.400) - Mrutzek Meeresaquaristik GmbH


Aqua Medic Titan 4000 chiller (105.400): Aqua Medic's Titan chillers build on the SK flow-through coolers, proven for over 12 years. The heat exchangers of the Titan series are made from pure seawater-resistant titanium. That makes the chillers suitable for all...




www.shop-meeresaquaristik.de


----------



## samcheekin

gfunkernaught said:


> Ah ok good. I am using the gaming x trio, and I've been using the -6 switch for years without issue.
> 
> I'm currently using the kingpin 1kW bios.


Mind PMing me which version of the BIOS you are using? There are a few Suprim X BIOSes available on the website. Many thanks!


----------



## des2k...

Nizzen said:


> My friend Devilhead is using this:
> 
> 
> 
> 
> 
> 
> Aqua Medic Kühlaggregat Titan 4000 (105.400) - Mrutzek Meeresaquaristik GmbH
> 
> 
> Aqua Medic Titan 4000 chiller (105.400): Aqua Medic's Titan chillers build on the SK flow-through coolers, proven for over 12 years. The heat exchangers of the Titan series are made from pure seawater-resistant titanium. That makes the chillers suitable for all...
> 
> 
> 
> 
> www.shop-meeresaquaristik.de


1500W of power, damn! You might as well run a line from the sink with no rads; it would be cheaper and keep the water temp at ambient.


----------



## gfunkernaught

des2k... said:


> I would put pads on VRM phases & chokes if the block allows it. I'm running reference 2x8pin and those parts dump tons of heat into the PCB.
> 
> The EK delta is horrible, but I would prefer a high delta to a dead card.


Have you tried the thermalright pads with the ek block?


----------



## des2k...

gfunkernaught said:


> Have you tried the thermalright pads with the ek block?


Yes, using them now, front and back. Those pads helped a lot with making the block sit 100% level over the core.
For the back, they helped a lot with mem temps: +1500 mem, only 52-54°C for tjmax.


----------



## gfunkernaught

des2k... said:


> Yes, using them now, front and back. Those pads helped a lot with making the block sit 100% level over the core.
> For the back, they helped a lot with mem temps: +1500 mem, only 52-54°C for tjmax.


Nice! Is that distance to tjmax, or the actual memory junction temp?


----------



## mirkendargen

Zogge said:


> Same on bykski strix. Only memory needed pads as per design if I recall correct. Ek block wanted pads everywhere but with worse temps anyway lol.


Bykski Strix instructions definitely tell you to put pads on the VRMs...I hope you actually did and don't remember, heh.


----------



## GRABibus

J7SC said:


> ^^...my Strix is still in air (for now... )


I've had mine for a week now. Still on air currently, of course


----------



## J7SC

GRABibus said:


> I've had mine for a week now. Still on air currently, of course


...which block did you get? I'm leaning towards the Aquacomputer, Phanteks or Bykski (available now); a Heatkiller 3090 Strix block would be nice, but it's not yet available. Speaking of Aquacomputer, if I didn't already have enough D5s, that D5 Next would be on my list, or this...

@changboy


----------



## itssladenlol

des2k... said:


> Yes, using them now, front and back. Those pads helped a lot with making the block sit 100% level over the core.
> For the back, they helped a lot with mem temps: +1500 mem, only 52-54°C for tjmax.


Can you tell me exactly which pads and what thicknesses for the front and back?
I've got a 3090 TUF with an EKWB block here and want to know which Thermalright pad thicknesses to put where exactly on the PCB (front and back).


----------



## des2k...

itssladenlol said:


> Can you tell me exactly which pads and what thicknesses for the front and back?
> I've got a 3090 TUF with an EKWB block here and want to know which Thermalright pad thicknesses to put where exactly on the PCB (front and back).


pads are thermalright odyssey
front is 1mm for my ek block
back is 1mm for mem, 1.5mm vrm, 2mm for core

my ek block 








EK-Quantum Vector Trinity RTX 3080/3090 D-RGB - Nickel + Plexi (www.ekwb.com): This is the 2nd generation Vector GPU water block from the EK® Quantum Line, designed for Zotac® Trinity RTX 3080 and 3090 graphics cards based on the latest NVIDIA® Ampere™ architecture. For a precise compatibility match of this water block, we recommend you refer to the EK Cooling Configurator.


----------



## changboy

J7SC said:


> ...which block did you get? I'm leaning towards the Aquacomputer, Phanteks or Bykski (available now); a Heatkiller 3090 Strix block would be nice, but it's not yet available. Speaking of Aquacomputer, if I didn't already have enough D5s, that D5 Next would be on my list, or this...
> 
> @changboy


This dual pump is very nice; I think this is the one I will get next


----------



## gfunkernaught

changboy said:


> This dual pump is very nice; I think this is the one I will get next


Speaking of pumps, is the EK d5 pump enough for this loop:
Pump/res combo>cpu block>240mm rad>gpu block>360mm rad>pump/res


----------



## jura11

gfunkernaught said:


> Speaking of pumps, is the EK d5 pump enough for this loop:
> Pump/res combo>cpu block>240mm rad>gpu block>360mm rad>pump/res


More than enough for such a loop

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

jura11 said:


> More than enough there for such loop
> 
> Hope this helps
> 
> Thanks, Jura


Ok thanks, just wanted to make sure.

I still have an extra 360mm rad that I want to add to the loop, just don't know where the best place to put it would be. Any suggestions?


----------



## changboy

gfunkernaught said:


> Speaking of pumps, is the EK d5 pump enough for this loop:
> Pump/res combo>cpu block>240mm rad>gpu block>360mm rad>pump/res


Actually, I run one D5 pump for my loop: reservoir > pump > 280mm rad > GPU > CPU > 360mm rad > 360mm rad.

I've run it like this for 5 years.


----------



## jura11

gfunkernaught said:


> Ok thanks, just wanted to make sure.
> 
> I still have an extra 360mm rad that I want to add to the loop, just don't know where the best place to put it would be. Any suggestions?


If your case allows you to fit an extra 360mm radiator, then fit it 

You have a 760T case, and I'm not sure you can fit 2x 360mm radiators plus a 240mm radiator in there; I would personally go with a MO-RA3 360 

Hope this helps 

Thanks, Jura


----------



## changboy

gfunkernaught said:


> Ok thanks, just wanted to make sure.
> 
> I still have an extra 360mm rad that I want to add to the loop, just don't know where the best place to put it would be. Any suggestions?


I put an external 360mm rad on the back of my case with a Koolance bracket; it works great:
Radiator Mounting Bracket with Quick-Release


----------



## mirkendargen

I use this one; cheap AliExpress gets the job done. It also mounts nicely to an external rad box.


----------



## jura11

changboy said:


> I put an external 360mm rad on the back of my case with a Koolance bracket; it works great:
> Radiator Mounting Bracket with Quick-Release


This one I used on my Phanteks Enthoo Primo too 

Hope this helps 

Thanks, Jura


----------



## Zogge

You are right, I did put pads on the VRMs (and VRM temps are only in the high 40s under load for me). But EK also had pads on several individual chips besides the memory: under the 8-pin power pins and further down, as well as on the chip row next to the VRMs and the chip in the bottom-left corner. I did not do all those on the Bykski, only the VRMs and mems as per the manual.

I did add a 2mm-thick 30x30 pad behind the core to connect it to the backplate, though.


----------



## Zogge

On top of this, I should report that I tried the MP5Works serial setup now: it gave ~4°C lower on the core at full load (now 44°C) and ~3°C lower tjunction (now 53°C). Hot spot is 4°C lower as well, at 54°C. 26.5°C water and 227 L/h flow.
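As an aside, the flow figure lets you estimate the coolant's in-to-out temperature rise across the loop. A sketch (Python; the 500 W heat load is a hypothetical example figure, not a measured draw):

```python
# Coolant delta-T across the loop: power / (mass flow * specific heat).
def coolant_delta_t(power_w, flow_l_per_h):
    SPECIFIC_HEAT = 4186              # J/(kg*K) for water
    kg_per_s = flow_l_per_h / 3600    # ~1 kg per litre of water
    return power_w / (kg_per_s * SPECIFIC_HEAT)

# e.g. a 500 W combined CPU+GPU load at 227 L/h:
print(round(coolant_delta_t(500, 227), 2), "°C")  # → 1.89 °C
```

At 200+ L/h the water only warms a degree or two per pass, which is why loop order barely matters at these flow rates.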


----------



## changboy

mirkendargen said:


> I use this one; cheap AliExpress gets the job done. It also mounts nicely to an external rad box.
> 
> View attachment 2479953


Is this a dual D5 pump ?
Can you link me this on ali ?

Ok found it


----------



## gfunkernaught

@changboy @jura11 
Thanks for the suggestions. I used to mount a 240mm rad outside my Antec 1200 back in the day.


----------



## mirkendargen

changboy said:


> Is this a dual D5 pump ?
> Can you link me this on ali ?
> 
> Ok found it


Yeah it's dual D5, but beware that it doesn't come with pumps (the price would be too good to be true then). Get 2 bare Laing D5 pumps separately to put in it.


----------



## itssladenlol

des2k... said:


> pads are thermalright odyssey
> front is 1mm for my ek block
> back is 1mm for mem, 1.5mm vrm, 2mm for core
> 
> my ek block
> 
> 
> 
> 
> 
> 
> 
> 
> EK-Quantum Vector Trinity RTX 3080/3090 D-RGB - Nickel + Plexi
> 
> 
> This is the 2nd generation Vector GPU water block from the EK® Quantum Line, designed for Zotac® Trinity RTX 3080 and 3090 graphics cards based on the latest NVIDIA® Ampere™ architecture. For a precise compatibility match of this water block, we recommend you refer to the EK Cooling Configurator.
> 
> 
> 
> 
> www.ekwb.com


Thank you so much, I have the same block but the TUF version; it should be the same I guess.


----------



## gfunkernaught

Alphacool Eisschicht Thermal Pad 17 W/mK, 100mm x 100mm x 1.5mm
$187 on titanrig.com

***?


----------



## jura11

gfunkernaught said:


> @changboy @jura11
> Thanks for the suggestions. I used to mount a 240mm rad outside my Antec 1200 back in the day.


On friend Enthoo Primo I used that mount too, that time we run 2*360mm radiators and 240mm radiator which cooled 4*GPUs setup 

Hope this helps 

Thanks, Jura


----------



## jura11

gfunkernaught said:


> Alphacool Eisschicht Thermal Pad 17W/mk, 100mmx100mmx1.5mm
> $187 on titanrig.com
> 
> ***?


Rather get Thermalright Odyssey pads; Alphacool Eisschicht are way too overpriced, mate.

Hope this helps 

Thanks, Jura


----------



## Falkentyne

gfunkernaught said:


> Alphacool Eisschicht Thermal Pad 17W/mk, 100mmx100mmx1.5mm
> $187 on titanrig.com
> 
> ***?


That's exactly how much they cost elsewhere too: 17 W/mK, 100mm x 100mm.
And those 17 W/mK pads often tend to survive only one use before flaking to death (eventually the 12 W/mK ones will do that too, but it takes longer).
Unless you're rich, don't buy some crap like that.

Buy this for large size pads:

15.51US $ | Gelid TP-GP02 120x120x0.5mm 1.0mm 1.5mm Graphics Processor Cooling Thermal Pad - AliExpress
www.aliexpress.com

Or one of these:

7.48US $ 32% OFF | Thermalright Odyssey Thermal Pad 12.8 W/mK 120x120mm - AliExpress
www.aliexpress.com

8.65US $ 6% OFF | Thermalright Thermal Pad 120x120mm 12.8 W/mK 0.5mm 1.0mm 1.5mm 2.0mm - AliExpress
www.aliexpress.com


----------



## shenzabre

They're good for everyone.... i enter the Zotac club with the 3090 trinity.
As is known, this card, despite being valid, has the problem of having a power limit, in my opinion, too low and this reduces its true potential. Having said that I decide to try flash the bios of the gigabyte eagle oc or the galaxy with pl of 370/390 on the zotac and here arises the problem: there is no possibility to eliminate the write protec from the eeprom of the same. Disabled the disposive management board,cmd in administrator mode and latest version of nvflash 5.670.0 .... giving the command nvflash --protectoff..... the command answers me: setting eeprom protection complete and not eeprom unprotected. So no unprotect.... x test I also gave the protecton command with the same result. This involves the impossibility of flah a new bios...or not?...... while others I heard about have flash different bios on the zotac without any problems....
Does anyone know what is happening or do they have what solution to recommend?I tried the same procedure also the w10 in safe mode with the same result.....beginner grappling with flashing bios.... here I am


----------



## J7SC

changboy said:


> This dual pump is verry nice, i think this is the one i will get next


...I still have a few ancient but functional Swiftech D5s laying around that I'll use for the upcoming 3950X/RTX 3090 loops. I usually just use XSPC RX360 rads for every loop, but also have two Black Ice NEMESIS M160GTX I might plumb into each loop...probably going to skip the copper tubing and homemade joints this time, though, just ZMT or similar


----------



## changboy

J7SC said:


> ...I still have a few ancient but functional Swiftech D5s laying around that I'll use for the upcoming 3950X/RTX 3090 loops. I usually just use XSPC RX360 rads for every loop, but also have two Black Ice NEMESIS M160GTX I might plumb into each loop...probably going to skip the copper tubing and homemade joints this time, though, just ZMT or similar
> 
> View attachment 2479960


Those are the same rads as I have on the back, hehehe, XSPC. I like that rad because of the many G1/4 ports on it; you can do everything you need with it.


----------



## gfunkernaught

jura11 said:


> Get rather Thermalright Odyssey pads, Alphacool Eisschicht are way too overpriced mate
> 
> Hope this helps
> 
> Thanks, Jura


Oh no I wasn't buying them lol.


----------



## des2k...

shenzabre said:


> Greetings everyone... I enter the Zotac club with the 3090 Trinity.
> As is known, this card, despite being a valid one, has a power limit that is, in my opinion, too low, and this holds back its true potential. That said, I decided to try flashing the BIOS of the Gigabyte Eagle OC or the Galax, with a power limit of 370/390W, onto the Zotac, and here the problem arises: there seems to be no way to remove the write protection from its EEPROM. I disabled the device in Device Manager, ran cmd in administrator mode with the latest version of nvflash (5.670.0), and gave the command nvflash --protectoff... the command answers me "setting eeprom protection complete" and not "eeprom unprotected". So no unprotect... As a test I also gave the --protecton command, with the same result. Does this make flashing a new BIOS impossible, or not? Others I have heard from have flashed different BIOSes onto the Zotac without any problems...
> Does anyone know what is happening, or have a solution to recommend? I tried the same procedure in W10 safe mode, with the same result... a beginner grappling with BIOS flashing, here I am.


did you run cmd terminal as an admin ?


----------



## gfunkernaught

Falkentyne said:


> That's exactly how much they cost elsewhere too. 17 w/mk 100mm * 100mm.
> And often those 17 w/mk pads tend to be only one use before flaking to death (eventually the 12 w/mk ones will do that but it takes longer).
> Unless you're rich, don't buy some crap like that.
> 
> Buy this for large size pads:
> 
> 15.51US $ | Gelid TP-GP02 120x120x0.5mm 1.0mm 1.5mm Thermal Pad - AliExpress
> www.aliexpress.com
> 
> Or one of these:
> 
> 7.48US $ 32% OFF | Thermalright Odyssey Thermal Pad 12.8 W/mK 120x120mm - AliExpress
> www.aliexpress.com
> 
> 8.65US $ 6% OFF | Thermalright Thermal Pad 120x120mm 12.8 W/mK 0.5mm 1.0mm 1.5mm 2.0mm - AliExpress
> www.aliexpress.com


Right, I've been looking at the Odyssey pads. Is AliExpress trustworthy?


----------



## gfunkernaught

This is my spare ek 360mm rad. Luckily my case has water tube ports.


----------



## des2k...

gfunkernaught said:


> Right, I've been looking at the Odyssey pads. Is AliExpress trustworthy?


I had to buy the 2mm off AliExpress; got them pretty fast, same as the 1mm and 1.5mm ones I got from Amazon


----------



## gfunkernaught

changboy said:


> Me, I put an external 360mm rad on the back of my case; it works great, with a Koolance bracket:
> Radiator Mounting Bracket with Quick-Release
> View attachment 2479956


I can't use that bracket, my rear fan port is 140mm. Might have to summon MacGyver.


----------



## jura11

gfunkernaught said:


> This is my spare ek 360mm rad. Luckily my case has water tube ports.
> 
> 
> View attachment 2479968


I have been using this on a friend's loop:

Koolance PCI slot bracket with 2x G1/4" ports - silver [WAZU-752] from WatercoolingUK

Aqua Computer makes something similar, but I think the Koolance should be easier to get in the US than in the UK.

Just get QDCs from Koolance, or Colder CPC QDCs.

Hope this helps 

Thanks, Jura


----------



## J7SC

changboy said:


> Those are same rad as mine on the back, hehehe xspc, i like that rad coz many g1/4 on it. You can do all ur need with this rad.


...yeah, every custom loop I have _ever_ done is based on multiple XSPC RX360s. There are other good rads out there, but the quality of these is great and they also give you some leeway re. fan types and speeds, given the fin density etc.


----------



## jura11

gfunkernaught said:


> Right, I've been looking at the Odyssey pads. Is AliExpress trustworthy?


Personally I'm using AliExpress all the time with no issues; I've never had a problem with sellers on AliExpress.

Hope this helps 

Thanks, Jura


----------



## Falkentyne

gfunkernaught said:


> Right, I've been looking at the Odyssey pads. Is AliExpress trustworthy?


AliExpress is as big as eBay; that's like asking whether eBay is trustworthy.
It's about the individual sellers, not AliExpress. And if you don't receive an item, you can just apply for a refund. Use PayPal for orders.
All of the thermal pads I ordered from the sellers got shipped and all have been fine. There's one still stuck in limbo between Pitney Bowes and USPS, but that's on the shipping couriers.
Yao Yue Shop never seems to reply to messages, but they do ship. Try to find sellers that have above-average communication ratings.


----------



## des2k...

Nizzen said:


> Software limit. Use evga px
> 
> I had to use evga px to bench under chilled water +1700 on mem


After many 2h loops of TE GT2 and many hard crashes: +1590 mem passes, +1620 black screens. Didn't bother trying in between; it's only a 30MHz difference.


----------



## gfunkernaught

shenzabre said:


> They're good for everyone.... i enter the Zotac club with the 3090 trinity.
> As is known, this card, despite being valid, has the problem of having a power limit, in my opinion, too low and this reduces its true potential. Having said that I decide to try flash the bios of the gigabyte eagle oc or the galaxy with pl of 370/390 on the zotac and here arises the problem: there is no possibility to eliminate the write protec from the eeprom of the same. Disabled the disposive management board,cmd in administrator mode and latest version of nvflash 5.670.0 .... giving the command nvflash --protectoff..... the command answers me: setting eeprom protection complete and not eeprom unprotected. So no unprotect.... x test I also gave the protecton command with the same result. This involves the impossibility of flah a new bios...or not?...... while others I heard about have flash different bios on the zotac without any problems....
> Does anyone know what is happening or do they have what solution to recommend?I tried the same procedure also the w10 in safe mode with the same result.....beginner grappling with flashing bios.... here I am


What if you just did nvflash.exe -6 bios.rom?
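For anyone following along, the usual cross-flash sequence is sketched below. This is a hedged outline, not Zotac-specific advice: the filenames are placeholders, and -6 is the nvflash override for the PCI subsystem ID mismatch check that fires when flashing another vendor's BIOS.

```shell
# Always back up the current vBIOS first (backup.rom is a placeholder name)
nvflash --save backup.rom

# Try to disable EEPROM write protection (reportedly fails on some cards)
nvflash --protectoff

# Flash the new BIOS; -6 overrides the subsystem ID mismatch prompt
# that appears when cross-flashing a different vendor's BIOS
nvflash -6 newbios.rom
```

If the flash goes wrong, that saved backup.rom is what lets you recover from safe mode or with a second GPU.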


----------



## gfunkernaught

Is it OK to stack thermal pads in case you can't find a specific thickness?
Also, I just found this on Amazon:

Fujipoly/mod/smart Ultra Extreme XR-m Thermal Pad - 100 x 15 x 1.0 - Thermal Conductivity 17.0 W/mK
www.amazon.com

17 W/mK. Does a higher thermal conductivity rating always mean better? Because there are also compression ratings.


----------



## gfunkernaught

Falkentyne said:


> Aliexpress is as big as ebay. That's like asking is ebay trustworthy.
> It's the individual sellers, not aliexpress. And if you don't receive an item, you can just apply for a refund. Use paypal for orders.
> All of the thermal pads I ordered from the sellers got shipped and all have been fine. There's one still stuck in Limbo between Pitney Bowes and USPS but that's on the shipping couriers.
> Yao Yue Shop never seems to reply to messages however, but they do ship. Try to find sellers that have above average communication ratings.


I never looked into aliexpress so I didn't know they're like ebay.


----------



## Falkentyne

gfunkernaught said:


> I never looked into aliexpress so I didn't know they're like ebay.


AliExpress is a marketplace. I believe they are the English/international storefront, while Taobao is the Chinese-only one. With Taobao, you either need to be able to read Chinese or have a "taobao agent" that can buy from them and then ship to you. With AliExpress, you just buy from the seller that lists there. And since they accept PayPal, they are required to uphold proper merchant standards, otherwise PayPal drops them as a payment option.


----------



## changboy

gfunkernaught said:


> I can't use that bracket, my rear fan port is 140mm. Might have to summon MacGyver.


It's weird because usually cases have both holes, 120mm and 140mm; my Corsair 750D has both.
You can also use this adapter:

Bitspower Fan Adapter 140mm To 120mm, Black
www.amazon.ca

----------



## gfunkernaught

changboy said:


> Its weird coz usually case have both hole, 120mm and 140mm, my corsair 750d have both.
> You can also use this adaptor :
> 
> Bitspower Fan Adapter 140MM To 120MM, Black
> www.amazon.ca


You are correct, sir. My case has both holes. But that bracket takes a fan spot. I'm thinking of just using 4 long screws that go through the fan and thread into the rad; 4 screws should be enough to hold the rad, I think. This will be fun, making my case look like something out of Mad Max.


----------



## gfunkernaught

Falkentyne said:


> Aliexpress is a marketplace. I believe they are an english/international storefront, while Taobao is the chinese only one. With Taobao, you either need to be able to read chinese or have a "taobao" agent that can buy from them, then ship it to you. With Aliexpress, you just buy from the seller that lists there. And since they accept paypal, they are required to uphold to proper merchant standards otherwise paypal kicks them off as a payment option.


I always knew that AliExpress was based in China, but didn't know they were a marketplace like eBay.


----------



## GRABibus

gfunkernaught said:


> Anyone else spending more time studying, researching, and benchmarking this card more than just playing games? Lol


More gaming now


----------



## changboy

gfunkernaught said:


> You are correct sir. My has case has both holes. But that bracket takes a fan spot. I'm thinking just to use 4 long screws to go through the fan and thread into the rad. 4 screws should be enough to hold the rad I think. This will be fun, making my case look like something out of Mad Max.


OK, but the thing you didn't consider is that the weight of the rad will be held by just the 2 upper screws, while the bracket has 4 screws and a frame. If you just use 2 screws it will not support the rad without bending. This is what I think, hehehe.


----------



## gfunkernaught

GRABibus said:


> More gaming now


Try capping your framerate to 144fps, might lower input latency.


----------



## gfunkernaught

changboy said:


> Ok but the thing you didnt think its the weight of the rad will be hold by just 2 upper screw but the bracket have 4 screw and a frame. If you just use 2 screew it will not support the rad without fold. This is what i think hehehe.


I did consider the weight of the rad, 3 fans, and the water in the rad, and I said 4 screws. They'd be holding the middle of the rad.


----------



## changboy

gfunkernaught said:


> I did consider the weight of the rad, 3 fans, and the water in the rad, and I said 4 screws. They'd be holding the middle of the rad.


Oki


----------



## J7SC

changboy said:


> Oki


... :


----------



## gfunkernaught

J7SC said:


> ... :


I'd have to make new holes for that


----------



## changboy

This is also a nice one, like some members have in here:

Watercool MO-RA3 420 PRO - Black

The MO-RA3 is a heat exchanger featuring excellent cooling performance and various possibilities of application. Its reliability allows for usage in workstation and server environments and it is sturdy enough for industrial applications.

www.performance-pcs.com


----------



## gfunkernaught

I found this site that sells thermal pads, up to 16 W/mK, but high conductivity pads don't last as long, right?

https://timtronics.com/wp-content/uploads/2016/12/Thermal-Gap-Filler-and-Pads_-Reference-Guides.pdf

I'm gonna keep searching; it looks like special order, but might be something to consider. I can't find all the sizes I want at a conductivity of at least 11 W/mK. Sorry, not doing AliExpress, don't feel like waiting that long lol

Found another site:

Custom Silicone Gaskets, Custom Gaskets Manufacturing | Stockwell Elastomerics
www.stockwell.com


----------



## Falkentyne

gfunkernaught said:


> I found this site that sells thermal pads, up to 16W/mk, but high conductivity pads don't last as long right?
> 
> https://timtronics.com/wp-content/uploads/2016/12/Thermal-Gap-Filler-and-Pads_-Reference-Guides.pdf
> 
> I'm gonna keep searching, it looks like special order, but might be something to consider. I can't find all the sizes I want at the conductivity above at least 11W/mk. Sorry, not doing aliexpress, don't feel like waiting that long lol
> 
> Found another site
> 
> Custom Silicone Gaskets, Custom Gaskets Manufacturing | Stockwell Elastomerics
> www.stockwell.com


Yeah. I requested a sample from Timtronics, as a "video game player" for an RTX video card. LOL. Good luck with that... they're not going to care :/
They'll probably charge like $100 for a 100mm x 100mm x 1.5mm, 16 W/mK pad sample...


----------



## shenzabre

des2k... said:


> did you run cmd terminal as an admin ?


Yes, of course... on W10 in both safe mode and normal mode... but nothing


----------



## shenzabre

I didn't know the -6 command, I'll try it


----------



## shenzabre

gfunkernaught said:


> what if you just did nvflash.exe -6 bios.rom?


Thank you... that was just what was needed. Now the Zotac has become a GALAX RTX 3090...


----------



## Wihglah

J7SC said:


> ...I still have a few ancient but functional Swiftech D5s laying around that I'll use for the upcoming 3950X/RTX 3090 loops. I usually just use XSPC RX360 rads for every loop, but also have two Black Ice NEMESIS M160GTX I might plumb into each loop...probably going to skip the copper tubing and homemade joints this time, though, just ZMT or similar
> 
> View attachment 2479960


RX series rads are my favourite. Very high performance with the right fans.


----------



## gfunkernaught

Just had a chat with a sales rep over at NEDC.com, a thermal solutions distributor, asking about their pads and thermal conductivity. I told him what my application is and which pads I have and am looking at, the Thermalright and Gelid, and when I mentioned the 13 W/mK rating he said that's probably not true. So I sent him links, and he said that there isn't a spec or test method listed next to the conductivity rating, which is a red flag according to him. He sent me a datasheet from his company, see attached. He also said he'd send me some samples to test out.

But this got me thinking: my 2080 Ti did not have contact issues, and those pads were the same pads EK always uses. So I will try a remount with all the same thickness pads, except I will use 2mm at the back of the GPU instead of the 1mm as instructed by EK. I'll also cut the front pads to the exact dimensions of the VRMs, MOSFETs, and RAM, and I will use washers for the four main GPU screws. If I do request samples I might use them for the backplate. Unfortunately, this company doesn't sell to individuals.

Low Modulus Thermal Pads - High Thermal Conductivity Thermal Pads - New England Die Cutting (NEDC)

Thermal Gap Filling Pads are used in electronics globally. These low-modulus thermal pads feature some of the highest thermal performances in industry.

www.nedc.com


----------



## Falkentyne

gfunkernaught said:


> Just had a chat with a sales rep over at NEDC.com, a thermal solutions distributor, asking about their pads and thermal conductivity. Told him about what my application is and pads I have and looking at, the thermalright and gelid, and when I mentioned the 13W/mk he said that's probably not true. So I sent him links and he said that there isn't a spec or test method listed next to the conductivity rating, which is a red flag according to him. He sent me a datasheet from his company, see attached. He also said he'd send me some samples to test them out. But this got me thinking, my 2080 Ti did not have contact issues, and those pads were the same pads ek always use, so I will try a remount with all the same thickness pads, except I will use the 2mm at the back of the gpu instead of the 1mm as instructed by ek. I'll also cut the front pads to the exact dimensions of the VRMs, mosfets, and ram. I will also use washers for the four main gpu screws. If I do request samples I might use them for the backplate. Unfortunately, this company doesn't sell to individuals


There's no link in your post.

Can I get the email or contact info so I can request samples from him too? PM if necessary. He seems like a cool guy.


----------



## Evostance

Anyone managed to cool the VRAM down yet, without using a waterblock?

The heat coming off this thing is ridiculous. Even with new thermal pads, playing Tarkov for a prolonged period makes the junction temp hit 90°C. Any attempt at memory-intensive work goes straight to 106°C.


----------



## J7SC

Evostance said:


> Anyone managed to cool the VRAM down yet, without using a waterblock?
> 
> The heat coming off this thing is ridiculous. Even with new with thermal pads, playing tarkov for a prolonged period makes the junction temp hit 90c. Any attempt at memory intensive work is straight to 106c


I use a carefully angled 120mm Arctic P12 pointing at the backplate area where the VRAM sits... helped by about 4°C. Using a 4K RPM Gentle Typhoon or even a 5K RPM Sunon 120mm server fan instead gained an additional 3°C, but too loud for now... waiting for 3090 water-cooling parts, then I'll settle on a permanent backplate cooling solution (either a fan or a RAM water block).


----------



## gfunkernaught

@Falkentyne
Updated my post with a link. Wish they sold to individuals. Told me there's a $125 minimum purchase, but in a way, if I could get preem thermal pads in large sheets, I could have them for a very long time.


----------



## des2k...

gfunkernaught said:


> @Falkentyne
> Updated my post with a link. Wish they sold to individuals. Told me there's a $125 minimum purchase, but in a way, if I could get preem thermal pads in large sheets, I could have them for a very long time.


ok, how much are the pads ?


----------



## gfunkernaught

des2k... said:


> ok, how much are the pads ?


He can't give me a quote because I'm not a company. They don't do business with individuals.*

*CORRECTION
They do, but it will be expensive.


----------



## gfunkernaught

@Falkentyne @des2k... 
They do sell to individuals, but it will be expensive. Just confirmed that with the sales rep; sorry about the misinformation. Honestly, I don't mind paying for it if it's worth it. I'd rather have large sheets than small ones; I don't want to use multiple cuts to cover the VRMs and MOSFETs.


----------



## gfunkernaught

Evostance said:


> Anyone managed to cool the VRAM down yet, without using a waterblock?
> 
> The heat coming off this thing is ridiculous. Even with new with thermal pads, playing tarkov for a prolonged period makes the junction temp hit 90c. Any attempt at memory intensive work is straight to 106c


When I had the stock cooler on mine, memory hit 88°C, and that's without overclocking. Might want to check your airflow. Do you have warm air hitting the back of the card?


----------



## des2k...

Well, you don't need a waterblock on the back; just fans, or heatsinks (thermal tape) plus fans, will do the job.

52-54°C here with +1590 mem.

I'm still waiting for the EK back waterblock; that way I can get rid of the fans + heatsinks.


----------



## gfunkernaught

des2k... said:


> Well you don't need a waterblock on the back, just fans or heatsinks(thermal tape) and fans will do the job
> 
> 52,54c here with +1590 mem
> 
> I'm still waiting for the EK back waterblock that way I can get rid of the fans+heatsinks


Will that work with a passive backplate? Or do I need to remove the backplate and place the heatsinks directly onto the VRAM?


----------



## des2k...

gfunkernaught said:


> Will that work with a passive backplate? Or do i need to remove the backplate then place the heatsinks directly onto the vram?


Yes, it works with a passive plate; use thermal tape from Amazon.

I would not remove the EK plate, as it still cools the back of the core and the back of the VRM.

Something like this + a fan will drop temps a lot


----------



## J7SC

des2k... said:


> yes works with passive plate, use thermal tape from amazon
> 
> I would not remove the EK plate as that still cools the back of the core + back of the VRM
> 
> Something like this + fan will drop temps alot


My mobo is mounted horizontally with the 3090 mounted vertically... is that thermal tape strong enough to hold a total of 2x 4 inches of copper heatsinks? Any link on Amazon to the specific type of thermal tape, or a product name? Tx.


----------



## gfunkernaught

des2k... said:


> yes works with passive plate, use thermal tape from amazon
> 
> I would not remove the EK plate as that still cools the back of the core + back of the VRM
> 
> Something like this + fan will drop temps alot
> View attachment 2480077


So I have a small fan that could probably sit over those heatsinks. Could you please link those heatsinks? Are those two whole pieces or multiple small ones?


----------



## Xeq54

Hi guys, just got my FTW3 Ultra 3090. Looking for a waterblock, however I am not able to find any in stock for this card in the EU.

Any EU members got any tips for shops where I could look ?

Thanks


----------



## Nizzen

Xeq54 said:


> Hi guys, just got my FTW3 Ultra 3090. Looking for a waterblock, however I am not able to find any in stock for this card in the EU.
> 
> Any EU members got any tips for shops where I could look ?
> 
> Thanks


You need to preorder and just wait in line, like on alphacool.com.

I bought the 3090 Strix because it was easy to get a waterblock for it.


----------



## T.Sharp

Evostance said:


> Anyone managed to cool the VRAM down yet, without using a waterblock?



Someone should try 15x15mm copper shims stuck directly to the chips with thermal paste (assuming they don't contact any caps), and 0.5mm pads between the shim and cooler/backplate. Even high performance pads are poor thermal conductors because the gap is significant. Copper is 400 W/mK. If you can spread the heat from the memory die (much smaller than the plastic package) to a 15x15mm shim, and reduce the distance filled by the pad to 0.5mm, you should be able to greatly improve the cooling efficiency.

You can calculate the temperature differential across a material using the formula:

(thickness in meters x heat in watts) / (heat source area in square meters x W/mK)

For a crude example: (0.001m x 5W) / ((0.005m x 0.005m) x 5 W/mK) = 40°C temp gradient

So a 1mm 5 W/mK pad, cooling a 5x5mm die dissipating 5W of heat, would sit at 90°C if the heatsink is 50°C.

For the sake of simplicity, if you ignore the inefficiency between a 15x15mm copper shim attached to the chip with quality paste (which should be relatively small), you increase the area by 9x, which means 1/9th the temp gradient. And since you'd use a 0.5mm pad, you cut that in half again; so in this simplified example you have an 18x improvement in thermal transfer over stock. If you used 10 W/mK pads you'd cut the temp difference in half again... and so on.

Don't know if anyone has mentioned it in here yet, but you might want to check out TG-PP10 thermal putty from Digi-Key. 10 W/mK, designed for 0.25-8mm gaps. No need to worry about pads being too thick and taking pressure off the die, which can be an issue with some higher-end pads because they can be quite hard compared to OEM. It's about $35 for 50g, which should go a long way.
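The arithmetic above is easy to sanity-check in a few lines of Python (the function name is my own; the numbers are the crude example from this post):

```python
def pad_gradient_c(thickness_m, power_w, area_m2, conductivity_w_mk):
    """Temperature drop in degrees C across a pad: (t * P) / (A * k)."""
    return (thickness_m * power_w) / (area_m2 * conductivity_w_mk)

# Stock case: 1mm 5 W/mK pad on a 5x5mm die dissipating 5W -> 40 C gradient
stock = pad_gradient_c(0.001, 5, 0.005 * 0.005, 5)

# Shimmed: a 15x15mm shim spreads the heat (9x the area) and a 0.5mm pad
# halves the gap -> roughly 2.2 C, i.e. an 18x smaller gradient
shimmed = pad_gradient_c(0.0005, 5, 0.015 * 0.015, 5)

print(round(stock, 1), round(stock / shimmed, 1))  # 40.0 18.0
```

This ignores the shim-to-chip paste joint, as the post says, so treat it as an upper bound on the improvement.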


----------



## Falkentyne

T.Sharp said:


> Someone should try 15x15xmm copper shims stuck directly to the chips with thermal paste (assuming that they don't contact any caps), and 0.5mm pads between the shim and cooler / backplate. Even high performance pads are poor thermal conductors because the gap is significant. Copper is 400W/mK. If you can spread the heat from the memory die (much smaller than the plastic package) to a 15x15mm shim, and reduce the distance filled by the pad to 0.5mm, you should be able to greatly improve the cooling efficiency.
> 
> You can calculate the differential across a material by using the formula :
> 
> (thickness in meters X heat in watts) / (heat source area in sqr meters X W/mK)
> 
> For a crude example : (0.001m x 5W) / ((0.005m x 0.005m) x 5W/mK) = 40c temp gradient
> 
> So a 1mm 5W/mK pad, cooling a 5x5mm die dissipating 5W of heat, would be 90c if the heatsink is 50c.
> 
> For the sake of simplicity, if you ignore the inefficiency between a 15x15mm copper shim attached to the chip with quality paste (should be relatively small), you increase the area by 9x, which means 1/9th the temp gradient. But since you'd use a 0.5mm pad, you cut that in half again. so in the simplified example, you have an 18x improvement in thermal transfer over stock. If you used 10W/mK pads you cut the temp difference in half again... and so on.
> 
> Don't know if anyone has mentioned it in here yet, but you might want to check out TG-PP10 thermal putty from digikey. 10W/mK, designed for 0.25-8mm gaps. No need to worry about pads being too thick and taking pressure off the die, which can be an issue with some higher end pads because they can be quite hard compared to OEM. It's about $35 for 50g, which should go a long way.


That has a great deal of risk involved. If that shim should move for any reason when you're assembling the card together, you can kiss your card goodbye.
This isn't the same as using a shim on a CPU, or even a GPU core...


----------



## T.Sharp

Falkentyne said:


> That has a great deal of risk involved. If that shim should move for any reason when you're assembling the card together, you can kiss your card goodbye.
> This isn't the same as using a shim on a CPU, or even a GPU core...


Indeed, if any caps near the edge of the memory are higher than the chip, it would be very bad. So maybe not as fantastic of an idea as I thought.

But, if you use high performance 0.5mm pads on the chips and paste / shims on the cooler / backplate, I think the risk would be less, and cooling would still be improved.

EDIT: just stuck a copper shim to a flat surface with Kryonaut. lol
The viscosity and surface tension hold it in place well. It doesn't move without quite a bit of force. 

And of course there's conformal coating / kapton / nail polish for insulation. Not like it would be that hard to make it work if you had the gumption


----------



## Evostance

gfunkernaught said:


> When I had the stock cooler on mine, memory hit 88c, that without overclocking. Might want to check your air flow. Do you have warm air hitting the back of the card?


It's an FE so those temps are normal. Cool air in from bottom and sides with exhaust at top.

I might have a look for some heatsinks and see if I can find a blower fan I can use in my PC to push the air across it


----------



## pat182

any chance the strix backplate with a heatsink will change something? looks like it's a plastic backplate 

edit: techpowerup says it's metal, ill give it a try


----------



## des2k...

"So a 1mm 5W/mK pad, cooling a 5x5mm die dissipating 5W of heat, would be 90c if the heatsink is 50c. "

I don't think it's that bad, at least with the EK backplate and EK pads, which are around 3W/mK.

I had a temp sensor on the backplate near the memory, it was like 47c+ with no fans. When hwinfo released mem temps it was around 67c+ for tjmax. So about a 20c difference between the metal / internal mem temps.

At a 50c delta you have bad contact, a bad backplate, or zero airflow in the case.

It's about 66w for memory while gaming, but that's with a heavy OC (+1590). So about 33w for the back modules / 12 = 2.75w per module. Very low wattage per module, you just need airflow really.
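Spelled out as a throwaway calc (the 50/50 front/back split is an assumption on my part, not a measurement):

```python
# Rough per-module VRAM power: total VRAM rail power split
# front/back, then across the 12 modules on the back of the PCB.
mem_power_w = 66.0      # VRAM rail while gaming, heavy OC (+1590)
back_share = 0.5        # assumption: half the heat is in the back modules
modules_on_back = 12

per_module = mem_power_w * back_share / modules_on_back
print(per_module)  # 2.75 W per module
```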


----------



## J7SC

T.Sharp said:


> (...)
> 
> Don't know if anyone has mentioned it in here yet, but you might want to check out TG-PP10 thermal putty from digikey. 10W/mK, designed for 0.25-8mm gaps. No need to worry about pads being too thick and taking pressure off the die, which can be an issue with some higher end pads because they can be quite hard compared to OEM. It's about $35 for 50g, which should go a long way.


 Thanks for the tip on the TG-PP10 thermal putty. 

...one of these days, I'll get a 3D metal printer to make my own blocks, after a professional laser survey


----------



## Zogge

If you want to have a metal printer with really good resolution you need to pay up some sweet amounts.
Cheaper to have it custom made somewhere every generation for life.


----------



## gfunkernaught

J7SC said:


> Thanks for the tip on the TG-PP10 thermal putty.
> 
> ...one of these days, I'll get a 3D metal printer to make my own blocks, after a professional laser survey


You know I did look at putty too. Could get messy when removing the block. But compression is probably better than pads right? Assuming the putty doesn't crack...


----------



## Ironcobra

Has anyone tried the Aquacomputer Next yet on their Strix? Building my first water cooling build and can't decide on a gpu block. I have Heatkiller for everything else but they're saying they're not making a Strix block. Is the active backplate really necessary? I haven't heard much about Aquacomputer.


----------



## gfunkernaught

Kryographics had the best blocks for the 2080 ti so I think their quality continues with ampere


----------



## T.Sharp

des2k... said:


> "So a 1mm 5W/mK pad, cooling a 5x5mm die dissipating 5W of heat, would be 90c if the heatsink is 50c. "
> 
> I don't think it's that bad, well at least with the EK backplate and EK pads which are around 3w/mk.


Yes, that was a very rough example that doesn't account for dissipation into the PCB, or for the memory chip package spreading the heat over a larger area than the die. It also assumes a die size of 5x5mm, and GDDR6X could be larger, idk. 5x5mm is roughly the size of a GDDR5 die. Your calculation of 2.75W per chip is more accurate too. 

But the equation still shows that every halving of the gap, or doubling of the surface area, halves the temperature gradient. So filling as much of the gap as possible with 400W/mK copper shims should give a considerable improvement either way. 
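Plugging the earlier numbers into that formula as a quick sketch (`pad_delta_t` is just a throwaway name, and the shim case ignores the paste interface between die and shim, as before):

```python
def pad_delta_t(thickness_m, watts, area_m2, w_per_mk):
    """Temperature drop across a gap filler:
    (thickness * power) / (area * conductivity)."""
    return (thickness_m * watts) / (area_m2 * w_per_mk)

# Crude example from before: 1mm 5 W/mK pad on a 5x5mm die at 5W
print(pad_delta_t(0.001, 5, 0.005 * 0.005, 5))   # ~40 C gradient

# Same 5W, but spread over a 15x15mm shim with a 0.5mm pad:
# 9x the area and half the thickness -> ~18x smaller gradient
print(pad_delta_t(0.0005, 5, 0.015 * 0.015, 5))  # ~2.2 C
```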

I'm waiting on some TG-PP10 to come in the mail so I can test this on a 1060. Hopefully increase the max mem OC.


----------



## gfunkernaught

T.Sharp said:


> Yes that was a very rough example that doesn't account for dissipation into the PCB and the memory chip package spreading the heat to a larger area than the die. It also assumes a die size of 5x5mm, which GDDR6X could be larger, idk. 5x5mm is roughly the size of a GDDR5 die. Your calculation of 2.7W per chip is more accurate too.
> 
> But the equation still shows that for every halving of the gap or doubling of the surface area, you double the thermal efficiency. So having as much of the gap filled by 400W/mK copper shims as possible, should give a considerable improvement either way.
> 
> I'm waiting on some TG-PP10 to come in the mail so I can test this on a 1060. Hopefully increase the max mem OC.


You think the putty will perform better than the ek pads?


----------



## Esenel

gfunkernaught said:


> Kryographics had the best blocks for the 2080 ti so I think their quality continues with ampere


But it doesn't.
Quality and performance are the worst for 3090 Strix.


----------



## T.Sharp

gfunkernaught said:


> You think the putty will perform better than the ek pads?


Yeah, if EK pads are 3-5W/mK I would expect the TG putty to outperform them. It's designed for long-term OEM applications, so I'm hoping it won't crack (like I've seen with K5 Pro). Not sure how much it would improve junction temp irl, but might be worth a shot if you don't mind the mess. I think it's like soft Play-Doh, but I'm still waiting on my shipment from digikey.


----------



## bogdi1988

Looks like resize bar driver and vbios is coming out very soon:









Resizable Bar on Asus Dual 621 Sage, success


So, I've had this build since September 2020. It succeeded the prior build that included 4 Titan V's, including the Titan V 32GB CEO Ed. The updated build uses 2 nVidia 3090 RTX in SLI/nvLink. Current build: 2x 3090 rtx Founders Edition & SLI bridge 2x 8180M (56/112 cores) 1.5 TB ram Asus...




forums.servethehome.com


----------



## des2k...

bogdi1988 said:


> Looks like resize bar driver and vbios is coming out very soon:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Resizable Bar on Asus Dual 621 Sage, success
> 
> 
> So, I've had this build since September 2020. It succeeded the prior build that included 4 Titan V's, including the Titan V 32GB CEO Ed. The updated build uses 2 nVidia 3090 RTX in SLI/nvLink. Current build: 2x 3090 rtx Founders Edition & SLI bridge 2x 8180M (56/112 cores) 1.5 TB ram Asus...
> 
> 
> 
> 
> forums.servethehome.com


did he run nvidia-smi -q? I'm curious what the BAR is set to with the new vbios.


----------



## mouacyk

bogdi1988 said:


> Looks like resize bar driver and vbios is coming out very soon:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Resizable Bar on Asus Dual 621 Sage, success
> 
> 
> So, I've had this build since September 2020. It succeeded the prior build that included 4 Titan V's, including the Titan V 32GB CEO Ed. The updated build uses 2 nVidia 3090 RTX in SLI/nvLink. Current build: 2x 3090 rtx Founders Edition & SLI bridge 2x 8180M (56/112 cores) 1.5 TB ram Asus...
> 
> 
> 
> 
> forums.servethehome.com





> ... may or may not include the resizable bar and may or may not also included the FE 3090 bios ...





> A side note, nvidia disabled the 3090 from doing SLI for pre DX12, but I was able to reverse engineer it again


----------



## gfunkernaught

Esenel said:


> But it doesn't.
> Quality and performance are the worst for 3090 Strix.


They had the best block for the reference 2080 Ti according to reviews. Guess every gen is different.


----------



## gfunkernaught

Build (almost, rear external rad is next) complete. Finally got a water temp sensor. After playing Control at 4k native (no DLSS) for about 20min, GPU core [email protected] (curve, effective), vram offset +650, the gpu core topped at 48c, and water temp was at 33.8c, ambient air temp was 19-20c, while the gpu was using a little over 500w the whole time. So that puts me at 14.2c delta T. I've heard of the 15c delta rule for the gpu, but idk if that still applies to the 3090. I think I could do better.
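Splitting those numbers into rough steady-state thermal resistances (a simplification: it ignores the CPU heat dumped into the same loop, and `resistance_c_per_w` is just a throwaway helper):

```python
# Rough split of those numbers into two steady-state thermal
# resistances: GPU-to-water (block + TIM) and water-to-ambient (rads).
def resistance_c_per_w(t_hot, t_cold, watts):
    return (t_hot - t_cold) / watts

gpu, water, ambient, load = 48.0, 33.8, 19.5, 500.0
print(resistance_c_per_w(gpu, water, load))     # block + TIM: ~0.028 C/W
print(resistance_c_per_w(water, ambient, load)) # rads + fans: ~0.029 C/W
```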


----------



## changboy

gfunkernaught said:


> Build (almost, rear external rad is next) complete. Finally got a water temp sensor. After playing Control at 4k native (no DLSS) for about 20min, GPU core [email protected] (curve, effective), vram offset +650, the gpu core topped at 48c, and water temp was at 33.8c, ambient air temp was 19-20c, while the gpu was using a little over 500w the whole time. So that puts me at 14.2c delta T. I've heard of the 15c delta rule for the gpu, but idk if that still applies to the 3090. I think I could do better.
> 
> 
> View attachment 2480174


OMG you copy me....well done hehehehe 😂

btw your temps are fine, same as me. You can do better by adding more rads, that's it.


----------



## gfunkernaught

changboy said:


> OMG you copy me....well done hehehehe 😂
> 
> btw your temp are fine, same then me. You can do better by adding more rad that's it.


Ha na man I've had this design for a few years now  Similar minds think alike!
I think I can remount/repaste, that should improve temps a bit. 48c seems a bit high.


----------



## des2k...

gfunkernaught said:


> Ha na man I've had this design for a few years now  Similar mind think alike!
> I think I can remount/repaste, that should improve temps a bit. 48c seems a bit high.


A 14c delta at 500w is already good for the EK block; unless you try liquid metal or reduce your water temp it's not going to improve much.

There's really no difference running your max oc at 38c gpu vs 48c gpu. Air coolers already hit 60c+ and those cards still work after years of gaming.


----------



## jura11

Hi guys


gfunkernaught said:


> Build (almost, rear external rad is next) complete. Finally got a water temp sensor. After playing Control at 4k native (no DLSS) for about 20min, GPU core [email protected] (curve, effective), vram offset +650, the gpu core topped at 48c, and water temp was at 33.8c, ambient air temp was 19-20c, while the gpu was using a little over 500w the whole time. So that puts me at 14.2c delta T. I've heard of the 15c delta rule for the gpu, but idk if that still applies to the 3090. I think I could do better.
> 
> 
> View attachment 2480174


Add a MO-RA3 360mm or 420mm to the loop and your water delta T should improve by a good margin, or even make your own 3x360mm external radiator

Your water delta T is in the 14-15°C range, which is not bad if you take into account that you are pulling around 700W+ with 360mm and 240mm radiators in a fairly restrictive case too. I expected you would be in the region of 42-45°C max, maybe a bit lower around the 40°C mark 

What are your VRAM temperatures? 

Hope this helps 

Thanks, Jura


----------



## dante`afk

Ironcobra said:


> Has anyone tried the Aquacomputer Next yet on their Strix? Building my first water cooling build and can't decide on a gpu block. I have Heatkiller for everything else but they're saying they're not making a Strix block. Is the active backplate really necessary? I haven't heard much about Aquacomputer.











Aqua Computer Kryographics Next for GeForce RTX 3080 and RTX 3090 Reference Design Review - Solid GPU water block for a hot card | igor'sLAB


A proper water cooling makes sense for power losses over 300 watts and thus creates a real added value. With the Kryographics Next, Aqua Computer continues on its path of precisely crafted coolers and…




www.igorslab.de


----------



## gfunkernaught

jura11 said:


> Hi guys
> 
> 
> Add a MO-RA3 360mm or 420mm to the loop and your water delta T should improve by a good margin, or even make your own 3x360mm external radiator
> 
> Your water delta T is in the 14-15°C range, which is not bad if you take into account that you are pulling around 700W+ with 360mm and 240mm radiators in a fairly restrictive case too. I expected you would be in the region of 42-45°C max, maybe a bit lower around the 40°C mark
> 
> What are your VRAM temperatures?
> 
> Hope this helps
> 
> Thanks, Jura


+700w? Where did you get that from? 

I know for a fact that around 400w in this setup was a 15c delta compared to ambient air temp, and that was with my 2080ti back when I didn't have a water temp sensor. So the gpu-to-water delta must have been even smaller. With rad fans capped at 60%, the 2080 ti would be 40c max, so an extra 100w can shoot the temps up by 10c? That run I did with Control had the fans capped at 50%, so it was quiet but warm. 

Just played BF1 at 8k and the temp reached 50c, idk if it would keep climbing or not, didn't want to find out lol, one of the pcie power inputs was 188w!

My vram temp reached 72c.


----------



## gfunkernaught

Who remembers these things? Seems to be working, vram went from 70c to 64-66c. Need to raise it up a bit. I just plopped it on there to see what would happen.


----------



## gfunkernaught

Here is Port Royal running on a loop for about 20min. Water temp has been a steady 34.7c. Note in Argus Monitor (great program) where it says "rear fan": that is actually the ram fan cooling the back of the card. Interesting.


----------



## bogdi1988

des2k... said:


> did he run nvidia-smi -q ? I'm curious what the BAR is set for the new vbios.
> View attachment 2480144


I asked. I'll reply back if I hear more on it


----------



## jura11

gfunkernaught said:


> +700w? Where did you get that from?
> 
> I know for a fact that around 400w in this setup was 15c delta when comparing to ambient air temp, that was with my 2080ti back when I didn't have a water temp sensor. So that means the water temp delta between gpu and water must have been closer. With rad fans capped at 60%, the 2080 ti would be 40c max, so an extra 100w can shoot the temps up by 10c? That run I did with Control had the fans capped at 50%, so it was quiet but warm.
> 
> Just played BF1 at 8k and the temp reached 50c, idk if it would keep climbing or not, didn't want to find out lol, one of the pcie power inputs was 188w!
> 
> My vram temp reached 72c.


I assume you have the CPU in the loop as well? If yes, then you have an extra 100-200W at least, which adds up; you can try running Prime95 plus a GPU stress test and see how your loop performs 

Yes, an extra 100W can shoot temperatures up a lot, depending on the loop, radiators and other factors. The RTX 2080Ti was itself a hot chip, and the current RTX 3090 and 3080 are very hot chips because of the 12nm and 8nm processes too

Those VRAM temperatures are okay

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

jura11 said:


> Assuming you have in loop as well CPU? If yes then you have extra 100-200W at least which usually add up, you can try running Prime95 and some GPU stress test and see how your loop will perform
> 
> Yes extra 100W can shoot temperatures a lot, depending on the loop, radiators and more factors, RTX 2080Ti itself has been hot chip and current RTX 3090 or 3080 are very hot chips because of 12nm and 8nm too
> 
> VRAM temperatures are okay for that temperatures
> 
> Hope this helps
> 
> Thanks, Jura


Right, ok. When I'm gaming the cpu never goes above 50% usage, so that is probably 60w. I still think I should remount/repaste though. I didn't cut the front pads for the VRMs to the exact size, so there is extra pad hanging off the side right above the mosfets, which leads me to believe it is interfering with proper contact. Plus the pcb is slightly bending, which could be the backplate pads as well. 
I'm not complaining about the temps I swear lol! Haven't had issues with thermal throttling at all. Plus when the gpu hits 50c, the PC is still very quiet.


----------



## sultanofswing

Wish I had a waterblock for my 3090 so I could test temps myself.
I know with my 2080ti Kingpin at 600 watts (1.2 volts/2235mhz) during Timespy the temp just barely hits 42c with a water temp of 29c.


----------



## J7SC

...this could possibly be of help: My 2x 2080 Ti -w-wb, GPU-dedicated loop w/ total of 1080x55 core rads, 2x D5s running PortRoyal + DLSS test sequentially...GPUs peak at a combined 760W, my temps stayed at 38 C or below


----------



## gfunkernaught

sultanofswing said:


> Wished I had a waterblock for my 3090 so I could test temps myself.
> I know with my 2080ti Kingpin at 600 watts (1.2 volts/2235mhz) during Timespy the temp just barely hits 42c with a water temp of 29c.


What about an hour of 600w usage?


----------



## Ironcobra

dante`afk said:


> Aqua Computer Kryographics Next for GeForce RTX 3080 and RTX 3090 Reference Design Review - Solid GPU water block for a hot card | igor'sLAB
> 
> 
> A proper water cooling makes sense for power losses over 300 watts and thus creates a real added value. With the Kryographics Next, Aqua Computer continues on its path of precisely crafted coolers and…
> 
> 
> 
> 
> www.igorslab.de


Thanks, I picked it up along with their active backplate and their viaro cpu block for my 5800x. Should be a good setup for my first water build.


----------



## des2k...

J7SC said:


> ...this could possibly be of help: My 2x 2080 Ti -w-wb, GPU-dedicated loop w/ total of 1080x55 core rads, 2x D5s running PortRoyal + DLSS test sequentially...GPUs peak at a combined 760W, my temps stayed at 38 C or below
> 
> View attachment 2480193


so this is one of those sub 10c deltas 😁 ?
I can tell you, 7c-10c deltas are very rare and difficult to achieve for 99% of users lol

we need a youtube video on how to achieve these super low deltas, along with a list of gpus, waterblocks and possible workarounds to get a lower delta.

I know I've tried 10+ remounts with my EK, and a 12c delta at 400w is the best I can get with 24c water, 22c ambient.


----------



## J7SC

des2k... said:


> so this is one of those sub 10c deltas 😁 ?
> I can tell you, 7c-10c deltas are very rare and difficult to achieve for 99% of users lol
> 
> we need a youtube video on how to achieve these super low deltas, along with a list of gpus, waterblocks and possible workarounds to get a lower delta.
> 
> I know I've tried 10+ remounts with my EK, and a 12c delta at 400w is the best I can get with 24c water, 22c ambient.


...check 'Orca' in sig for build details  ...about to do s.th. similar for the 3950X / 3090 Strix OC combo


----------



## Niju

Ironcobra said:


> Has anyone tried the Aquacomputer Next yet on their Strix? Building my first water cooling build and can't decide on a gpu block. I have Heatkiller for everything else but they're saying they're not making a Strix block. Is the active backplate really necessary? I haven't heard much about Aquacomputer.


I have the AC block with the active backplate on the 3090 Strix for a couple of weeks now. I'm getting a delta of 21C between water temp and GPU core temp. Hotspot temp is +12C from the core. The memory junction temp maxes out around 70C. The backplate feels considerably cooler to the touch compared to when the card was air cooled. This is with +100mhz core and the power slider to max.
Take these results with a grain of salt, I'm not 100% confident about how good the mount is; I did not try remounting it or checking contact.


----------



## gfunkernaught

des2k... said:


> so this is one of those sub 10c deltas 😁 ?
> I can tell you, 7c-10c deltas are very rare and difficult to achieve for 99% of users lol
> 
> we need a youtube video on how to achieve these super low deltas, along with a list of gpus, waterblocks and possible workarounds to get a lower delta.
> 
> I know I've tried 10+ remounts with my EK, and a 12c delta at 400w is the best I can get with 24c water, 22c ambient.


Now when you achieved that 12c delta, did you notice the time it took for the gpu temp to rise to equilibrium? Based on my experience, quick temp rise is indicative of bad contact. 

I ordered that rear rad mount bracket and more coolant. I don't want to remount just yet and have to drain twice. As you can see, my case is tight and pain in the arse to get things in and out.


----------



## J7SC

gfunkernaught said:


> Now when you achieved that 12c delta, did you notice the time it took for the gpu temp to rise to equilibrium? Based on my experience, quick temp rise is indicative of bad contact.
> 
> I ordered that rear rad mount bracket and more coolant. I don't want to remount just yet and have to drain twice. As you can see, my case is tight and pain in the arse to get things in and out.


...I can't recall off-hand whether you're running separate CPU and GPU loops - it would help greatly...I usually do run separate loops for CPU and GPU, and each loop gets two D5 in series, per earlier posts w/ @changboy ...as to fans, I use a mix, but the majority for the 3x360x55 rads for the GPU loop for the 2x 2080 Ti are GentleTyphoon AP29...183 cfm per, according to the manufacturers site, afair....


----------



## des2k...

gfunkernaught said:


> Now when you achieved that 12c delta, did you notice the time it took for the gpu temp to rise to equilibrium? Based on my experience, quick temp rise is indicative of bad contact.
> 
> I ordered that rear rad mount bracket and more coolant. I don't want to remount just yet and have to drain twice. As you can see, my case is tight and pain in the arse to get things in and out.


Well, with water at 22c, if I start Control (after loading a save) the gpu will jump to a 12c delta, 34c from 24c idle temps. 

Are you saying it's not supposed to do that?
It's supposed to ramp up slowly for gpu temps?


----------



## Thanh Nguyen

I'm curious to know what block gives you a sub 10c delta with a 600w load on the 3090 too. Waiting on my Optimus block which may take a few months.


----------



## jomama22

des2k... said:


> Well with water at 22c If I start control(after load save) gpu will jump to 12c delta 34c from 24c idle temps.
> 
> Are you saying it's not suppose to do that ?
> It's suppose to ramp up slow for gpu temps ?


No, you shouldn't expect a slow ramp up for gpu core temps when load is first applied. They will spike and then slowly creep as heat soak begins on the block and water. The temperature rise rate of the gpu core will be much much faster than the ability for that heat to transfer to the block and water.

Just for reference, I have a 8C water to gpu core delta between 400w-800w+ once gpu temperature stabilizes. Using 2x360 hwlabs gtx black Ice 2's, an mcp35x2, with fans in push/pull at 1700rpms in their own loop. Every 100w adds about 3C to the gpu core and water temp.
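As a toy first-order model of what that looks like (all numbers illustrative, not measurements; `water_temp` and the 600s time constant are made up for the sketch):

```python
import math

# Toy model of heat soak: the block sets the core-to-water delta
# almost instantly, while the water creeps toward equilibrium with
# a long time constant.
def water_temp(t_s, t_ambient=22.0, rise_c=12.0, tau_s=600.0):
    """Water temperature t_s seconds after load is applied."""
    return t_ambient + rise_c * (1 - math.exp(-t_s / tau_s))

block_delta = 8.0  # C, water-to-core delta once load is applied
for t in (0, 60, 600, 3600):
    w = water_temp(t)
    print(f"t={t:4d}s  water={w:5.1f}C  core={w + block_delta:5.1f}C")
```

The core reading spikes immediately (water + block delta), then creeps up for the better part of an hour as the loop soaks, which matches the behavior described above.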

Also, running the mining bench and letting it soak for a while using +1500 mem:


----------



## J7SC

While the TR / 2x 2080 Ti w-cooling loops have worked out great, I hope my planning for the separate loops for the 3950X and the 3090 Strix OC is comparable in performance. One key difference is of course that the GPU loop for the 2080 Tis handles 760W via 2x 12nm chips, each with its own block and rads and pumps in between, while the 3090 system has to handle 500W+ concentrated on one block with an 8nm 'hotspot'... ☕


----------



## gfunkernaught

J7SC said:


> ...I can't recall off-hand whether you're running separate CPU and GPU loops - it would help greatly...I usually do run separate loops for CPU and GPU, and each loop gets two D5 in series, per earlier posts w/ @changboy ...as to fans, I use a mix, but the majority for the 3x360x55 rads for the GPU loop for the 2x 2080 Ti are GentleTyphoon AP29...183 cfm per, according to the manufacturers site, afair....


I don't have separate loops because the cpu barely puts a dent in my loop; when I'm gaming, the cpu temp sometimes matches the gpu temp because of the water temp, and that's with anywhere from 10-50% load on the cpu cores. I use the ek fans that came with the 360mm rad, and Thermaltake Riing SP fans for the 240mm push/pull.


----------



## gfunkernaught

jomama22 said:


> No, you shouldn't expect a slow ramp up for gpu core temps when load is first applied. They will spike and then slowly creep as heat soak begins on the block and water. The temperature rise rate of the gpu core will be much much faster than the ability for that heat to transfer to the block and water.
> 
> Just for reference, I have a 8C water to gpu core delta between 400w-800w+ once gpu temperature stabilizes. Using 2x360 hwlabs gtx black Ice 2's, an mcp35x2, with fans in push/pull at 1700rpms in their own loop. Every 100w adds about 3C to the gpu core and water temp.
> 
> Also, running the mining bench and letting it soak for a while using +1500 mem:
> View attachment 2480206


My 2080 ti did not exhibit this behavior you describe, not exactly. When a 99% load goes onto that gpu, the temp would go from 26c idle to 30-32c load, then slowly makes its way up to 38-39c, that's with 380-425w. So I guess that 26-32c jump would be the "spike". But my 3090 idles at 20c, then spikes to 38c, then rises to a max of 50c. That 18c spike is what leads me to believe there's a contact issue.


----------



## T.Sharp

gfunkernaught said:


> My 2080 ti did not exhibit this behavior you describe, not exactly. When a 99% load goes onto that gpu, the temp would go from 26c idle to 30-32c load, then slowly makes its way up to 38-39c, that's with 380-425w. So I guess that 26-32c jump would be the "spike". But my 3090 idles at 20c, then spikes to 38c, then rises to a max of 50c. That 18c spike is what leads me to believe there's a contact issue.


I think a big part of the issue is that GPU temp is an average of multiple sensors, or an edge reading, not sure which. If you watch CPU core temp when going from idle to load, the spike is instant, and then it might creep up a few C after heat soak. With a GPU it usually has a small spike, but the temp will continue to climb quite a bit higher as the cooler heat soaks. Not sure how much is due to the nature of direct die cooling and how much is due to sensor location / averaging.

When I tested LM on a 1060, it only appeared to Drop the temps ~3c according to the GPU temp sensor, but I was able to OC 60-80MHz higher. Implying that the actual core was much cooler than previously, but this wasn’t apparent from looking at GPU temp.

I’m not sure how core hotspot temp reading works though, is it an average of the core or the hottest sensor in the core?


----------



## gfunkernaught

@T.Sharp maybe the LM was able to transfer the heat much faster, so while the gpu core temp reading didn't show much difference, the heat is actually leaving the core at a higher rate I guess?


----------



## T.Sharp

gfunkernaught said:


> @T.Sharp maybe the LM was able to transfer the heat much faster, so while the gpu core temp reading didn't show much difference, the heat is actually leaving the core at a higher rate I guess?


Yeah the LM was definitely cooling the core much better than paste. Must have been a lot more than the 3c difference that the temp sensor showed though. Point is, GPU temp isn't a direct core temp measurement, like CPU core temp.


----------



## Nizzen

*New drivers 461.72 released with ReBAR support*


> New drivers have been released today with resizable BAR support:
> https://www.nvidia.com/en-us/geforce...-ready-driver/
> 
> Still need an updated VBIOS for all cards except the 3060 which does come with ReBAR support already included in its VBIOS:
> https://www.nvidia.com/en-us/geforce...e-bar-support/


----------



## jura11

Here are my temperatures after 2 hours of Control at 3440x1440 with everything maxed etc. Used my normal profile, +115MHz on core and +1200MHz on VRAM. VRAM temperatures have been in the 56-60C range with 21C ambient, fans spinning at 600-620RPM on the main radiators and the MO-RA3 fans running at 1000RPM max

















Hope this helps

Thanks,Jura


----------



## des2k...

Nizzen said:


> *New drivers 461.72 released with ReBAR support*


ReBAR is only enabled in these games for now, no global on/off switch:

Battlefield V
Assassin's Creed Valhalla
Gears 5
Borderlands 3
Red Dead Redemption 2
Metro Exodus
Watch Dogs Legion
Forza Horizon 4


----------



## des2k...

jura11 said:


> Here are my temperatures after 2 hours of Control at 3440x1440 and everything maxed etc,used my normal profile +115MHz on core and 1200MHz on VRAM,VRAM temperatures have been in 56-60C and ambient 21C with fans spinning 600-620RPM on main radiators and MO-ra3 fans been running at 1000RPM as max
> 
> View attachment 2480221
> 
> 
> Hope this helps
> 
> Thanks,Jura


that's only like 388w, but good temps.
Even the **** Corsair with its 16c delta at 350w would work


----------



## jura11

des2k... said:


> that's like 388w only, but good temps,
> even the **** corsair 16c delta at 350w will work


Currently running just the KFA2 390W BIOS and already hitting the power limit in that game, but normally no issues in gaming or rendering with that BIOS 

Agree the temperatures are good; I will probably do tests with the XOC BIOS and compare temperatures

Hope this helps 

Thanks, Jura


----------



## Chamidorix

Ah, late March is when AIBs start releasing new vBioses, and thus begins the wait for a Kingpin 1000W bios with BAR support.

Also stupid that we are going to have to rename .exes etc. as usual to test this on non-sanctioned titles.


----------



## gfunkernaught

I just watched a pcb breakdown speedrun by buildzoid for the gaming x trio, and he didn't get into the board's physical power limit like he did with the 2080 ti reference, where he found it can handle 600w. How would I find out the actual limit of the PCB without blowing the card up?


----------



## lolhaxz

gfunkernaught said:


> I just watched a pcb breakdown speedrun by buildzoid for the gaming x trio, and he didn't get into the boards physical power limit, like he did with the 2080 ti reference and found it can handle 600w. How would I find out without blowing the card up what the actual limit of the PCB is?


It is not rocket science: look up the mosfets for the board, count them (remember, some are core, some mem), find the optimal efficiency point +-10%... or whatever you are comfortable with, and add up the current... extrapolate from there and try not to exceed whatever magical number is presented... either way it's going to be a guess anyway.

You'll be hard pressed to find (that many) real world loads that sit above 600W anyway.

My literal 30 second conclusion would be: NCP302150 power stages, rated at 50A average continuous at 300kHz @ 1V; call it 40-45A for margin's sake. It's a 14 phase design... 14 * 45A = 630A, 630A * 1.00v = 630W... or 14 * 40A = 560A, 560A * 1.00v = 560W (output side)... so input will be a little higher...

Current figures are purely an example of "a specific scenario", ie. the GPU won't draw 560A @ 1.0v ... the current drawn will be a result of the voltage, temperature (to a very limited extent) and the load.... for example, at 1.0v and a typical load, 3090 may draw say ~300A at 1v on the GPU core rail(s)... as the voltage rises the current draw (typically) will rise... ie, perhaps its ~375A @ 1.093v for that same load.

That is all assuming you can keep the MOSFETs cool. You can get more specific by deriving exact values from the curves provided in the datasheet, but it won't always have all the information you require. Typically, as current rises, efficiency drops and heat output increases, so you'll either A) exceed the maximum current rating of the device or B) exceed your ability to maintain thermal control.

This is entirely ignoring the input stage (i.e. 2x 8-pin vs 3x 8-pin) and the max current per cable/plug, which is equally a consideration.

A TBP of 600W is probably a nice practical target for this board without much "risk" if you can keep it cool, but I am not sitting here claiming 600W continuous for extended periods of time is "safe"; that depends on so much more than just the MOSFETs.

I suppose anywhere up to 100W+ of the "total board power" draw most people would be looking at could be the GDDR6X, on a different set of MOSFETs.
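The back-of-envelope math above can be sketched as follows. All inputs are illustrative assumptions taken from the post (phase count, rating, derate, voltage), not measurements of any specific board:

```python
# Rough VRM output-power estimate from phase count, as described above.
# All inputs are illustrative assumptions, not measurements.

PHASES = 14        # total power stages counted on the board
RATED_A = 50.0     # NCP302150 rated continuous output current per stage
DERATE = 0.85      # keep ~15% margin below the rating
VCORE = 1.00       # example rail voltage

per_stage_a = RATED_A * DERATE        # derated amps per stage (42.5 A)
total_a = PHASES * per_stage_a        # combined output current (595 A)
output_w = total_a * VCORE            # output-side power at 1.00 V

print(f"~{total_a:.0f} A total -> ~{output_w:.0f} W output side")
# Input-side draw will be somewhat higher due to conversion losses.
```

As the post says, this is a guess either way; the real limit depends on cooling and the input stage, not just the stage ratings.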


----------



## Beagle Box

gfunkernaught said:


> I just watched a PCB breakdown speedrun by Buildzoid for the Gaming X Trio, and he didn't get into the board's physical power limit like he did with the 2080 Ti reference, where he found it can handle 600W. How would I find out, without blowing the card up, what the actual limit of the PCB is?


How much power do you need? 
Everything I've run with my Strix suggests that even under water, 590W-600W is the point where performance degrades. 
Unless LN2 benching is the goal?


----------



## gfunkernaught

@Beagle Box Not doing LN2. I don't know how much I "need", but I just want to see how high I can get the core clock for a few benches. Is it possible for the card to blow a fuse before it would crash due to an unstable clock? I'm using the KP 1kW BIOS btw, with the PL at 52%.

@lolhaxz Based on many reads through forums and Reddit, 600W seems to be the general safe limit for most of these cards, even the 2x 8-pin versions. But I don't plan on gaming at 600W, that's just ridiculous. Benching is a different story. I honestly didn't see much of a difference in minimum FPS (which is what I'm after for regular gaming) going from 490ish to 540ish watts.


----------



## PhuCCo

I recommend the copper shims on the rear memory. I dropped 15C from my memory junction peak temp after doing this. I already had a ram waterblock mounted to the backplate.

Last week I took apart my 3090 FE with the Alphacool block and replaced all of the thermal pads with Thermalright Odyssey. Instead of the stock 2mm pads on the rear memory, I used 1mm pads with 1mm copper shims on top of the pads. I had to cut a few of them to smaller sizes in order to not overhang the memory modules, but it was very easy since they're just soft copper. Then I put a layer of Kryonaut on the shims to fill gaps between the copper and the backplate. I did a test fit and saw that it was making great contact and squeezing the paste. The shims are held in place insanely well from sticking to the pads and paste; they are not going to move. I went from a memory junction peak of 65C to 50C in two hours of running Port Royal and Time Spy stress tests.

I've never mined before, but I wanted to check the temps during mining as a lot of people are doing that now. I kept restarting the miner because I didn't understand how the payout works and wasn't sure if I had it set up correctly. After about three hours of mining, I hit a peak of 58C. 114-120MH/s (had a browser and discord open so it fluctuated) with +1000 on memory @ 280W. Ambient was 21-23C and water peaked at 30C during that time.

3090FE Shim Mod - Google Drive


----------



## Zogge

I just talked to Bykski; their standard pads are 2 W/mK. So I guess it is worth replacing the standard pads for sure.


----------



## T.Sharp

PhuCCo said:


> I recommend the copper shims on the rear memory. I dropped 15C from my memory junction peak temp after doing this. I already had a ram waterblock mounted to the backplate.


Awesome, glad to see some real results with this method! Copper is ~400 W/mK, so the more of the gap you can fill with it, the better. I ordered some TG-PP10 10 W/mK thermal putty so I can use thicker shims and have a ~0.25mm gap filled with the putty. The plan is to stick the shims directly to the memory chips with Kryonaut, to act as a heat spreader, and then fill the gap between shim and cooler with the TG putty. It should be more effective that way. My only concern is shorting out the capacitors around the edge of the memory chips, but I'll probably put some nail polish on them for good measure.

Curious to find out how much temps will affect the max stable mem OC.
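For a rough sense of why the pad/shim conductivity matters here, the thermal resistance of a uniform gap layer is R = t / (k * A). A quick sketch, using a hypothetical ~12 x 14 mm memory-chip footprint and a 1 mm layer thickness (both assumed for illustration):

```python
# Thermal resistance R = thickness / (k * area) of a 1 mm layer
# over one memory chip. Footprint and thickness are assumed values.

def r_th(thickness_m: float, k_w_per_mk: float, area_m2: float) -> float:
    """Conductive thermal resistance (K/W) of a uniform layer."""
    return thickness_m / (k_w_per_mk * area_m2)

AREA = 0.012 * 0.014  # ~12 x 14 mm chip footprint, in m^2

for name, k in [("2 W/mK stock pad", 2.0),
                ("10 W/mK putty", 10.0),
                ("copper shim (~400 W/mK)", 400.0)]:
    print(f"{name}: {r_th(0.001, k, AREA):.3f} K/W per mm of thickness")
```

Roughly 3 K/W per mm for a stock pad versus ~0.015 K/W for copper over the same footprint, which is why swapping pad thickness for shim thickness drops the junction temperature.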


----------



## des2k...

*Here's the reference 2x8pin, you can prob figure things out for the gaming x trio*

The memory runs on 4 stages of 50A each; usually even at +1700 mem those stages will run below 30% capacity, so you won't stress the VRM here even on air.

The cache runs on the vc2 VRM, 5 stages of 50A each. Using the 1000W XOC BIOS (about ~660W board power at 100% TDP) those stages will also be at around 30% capacity; again, you won't stress the VRM here.

The core runs on vc1, 9 stages of 50A each. This one, at 660W board power, will use ~75% capacity, ~38A per stage.
Quake 2 RTX with no limit will use that much.

This one will generate a lot of heat; probably a good idea to water cool at 38A per stage.

*What is safe for the Gaming X Trio? I'm guessing you'll reach the core limit before over-stressing the VRM / worrying about board power, assuming you're water cooled.
If it's 3x 8-pin with the 1000W XOC BIOS I would probably limit it to 600W-700W max and not run Quake 2 RTX.*
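The stage counts above can be turned into a quick per-rail utilization check. The per-stage current draws below are illustrative numbers in line with the post, not measurements:

```python
# Per-rail VRM utilization at ~660 W board power, using the stage
# counts quoted above. Current draws are illustrative assumptions.

RATED_A = 50  # amps per power stage

rails = {
    # rail: (stage count, assumed amps per stage under heavy load)
    "core (vc1)":  (9, 38),
    "cache (vc2)": (5, 15),
    "memory":      (4, 15),
}

for rail, (stages, amps_per_stage) in rails.items():
    draw = stages * amps_per_stage
    capacity = stages * RATED_A
    print(f"{rail}: {draw} A / {capacity} A = {draw / capacity:.0%}")
```

This reproduces the ~75% core and ~30% cache/memory figures quoted above.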











If you care about PCIe connector power: at 100W (no fans, single GPU) you're using about 50W per cable on the 24-pin ATX and 20W per connector at the card.
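For context on the input-stage side, the commonly cited PCIe spec ratings give a simple power budget per connector configuration (these are spec figures, not the physical limits of good-quality cables):

```python
# Input-side power budget from commonly cited PCIe spec ratings.
# Spec figures, not the physical limits of the connectors/cables.

SLOT_W = 75         # PCIe x16 slot
EIGHT_PIN_W = 150   # per 8-pin PCIe power connector

def spec_budget(n_eight_pin: int) -> int:
    """Total spec-rated board power for a card with n 8-pin inputs."""
    return SLOT_W + n_eight_pin * EIGHT_PIN_W

print(spec_budget(2))  # 2x 8-pin card: 375 W
print(spec_budget(3))  # 3x 8-pin card: 525 W
```

This is part of why running 600W+ through a 2x 8-pin board leans on connector headroom well beyond spec.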


----------



## ViRuS2k

jomama22 said:


> No, you shouldn't expect a slow ramp up for gpu core temps when load is first applied. They will spike and then slowly creep as heat soak begins on the block and water. The temperature rise rate of the gpu core will be much much faster than the ability for that heat to transfer to the block and water.
> 
> Just for reference, I have a 8C water to gpu core delta between 400w-800w+ once gpu temperature stabilizes. Using 2x360 hwlabs gtx black Ice 2's, an mcp35x2, with fans in push/pull at 1700rpms in their own loop. Every 100w adds about 3C to the gpu core and water temp.
> 
> Also, running the mining bench and letting it soak for a while using +1500 mem:
> View attachment 2480206


Hi pal, how are you getting such high hashrates?
My 3090 is maxing out at around 117 MH/s with +1100 on the memory. I'm just curious, any tips lol.
Or could it be the mining application I'm using? I use nanominer, not sure if that's the best or not lol


----------



## des2k...

ViRuS2k said:


> Hi pal, how are you getting such high hashrates?
> My 3090 is maxing out at around 117 MH/s with +1100 on the memory. I'm just curious, any tips lol.
> Or could it be the mining application I'm using? I use nanominer, not sure if that's the best or not lol


At +1590 mem it's about 129 MH/s with PhoenixMiner; using apps like YouTube will lose you 2 or so. It's all memory clock: you can drop your board to 280W and it won't affect it much.


----------



## gfunkernaught

@des2k...
Dude, awesome, thanks for that breakdown. I really need to do my homework on circuitry and power delivery to really understand all that. I totally forgot about one important fact that you mentioned: the core limit. When I used the XOC BIOS on my reference 2080 Ti the core voltage maxed out at 1125mv; even if I tried to set it higher it wouldn't go beyond that. This is that 600W-ish limit that the board will handle without catching fire, assuming it is being cooled properly. The same applies here as well. Thanks again for explaining that.
If I limit to 600W though, why can't I run Quake 2 RTX? Are you saying it will use 660W even if dynamic resolution is enabled and FPS is capped at 60?


----------



## Thanh Nguyen

Anyone know why I got an error while running NiceHash? Can't mine.


----------



## des2k...

gfunkernaught said:


> @des2k...
> Dude awesome thanks for that breakdown. I really need to do my homework on circuitry and power delivery to really understand all that. I totally forgot about one important fact that you mentioned: core limit. When I used the xoc bios on my reference 2080 ti the core voltage maxed out at 1125mv, even if I tried to set it higher it wouldn't go higher than that, this that 600w-ish limit that the board will handle without catching fire, assuming it is being cooled properly. Same applies here as well. Thanks again for explaining that.
> If I limit to 600w though, why can't I run quake 2 RTX? Are you saying it will use 660w even if dynamic resolution is enabled and fps is capped at 60?


You can run Quake 2 RTX; I've seen people use 700W for this game alone. It just doesn't make sense to me unless you have a perfect waterblock with a 10C delta. For me it's a 22C delta at that wattage.

It's pure path tracing, so it will use a lot; 4k DLSS uses over 660W as I hit the power limit at 2190 core.
The demo is free on Steam so you can try it. Start from 1900 core and you'll see how much power it uses.

At the stock 400W vBIOS you'll run 1700 or lower on core for this game.


----------



## gfunkernaught

des2k... said:


> You can run quake 2 rtx. I've seen people use 700w for this game alone. Just doesn't make sense to me unless you have a perfect waterblock with 10c delta. For me it's 22c delta at that wattage.
> 
> It's a pure path ray tracing, so it will use alot, 4k dlss uses over 660w as I hit powerlimit at 2190core.
> The demo is free on Steam so you can try. Start from 1900 core you'll see how much power it uses.
> 
> At stock 400w vbios you'll run 1700 or lower for core for this game.


When I had the 1kW BIOS, Quake 2 RTX pulled about 520-540W with the PL set to 50%; clocks were in the 2055-2077 range and the temp was around 47C.


----------



## gfunkernaught

@changboy I followed your advice lol. Waiting for fittings and more coolant to arrive.

Also need to get some fans. I already have one Corsair SP120; wondering if I should get another two, or get three Thermaltake Riing SP fans. Not looking for lighting outside the case tbh. What fans would you suggest? Noctua maybe? The rad is a 360 EK CoolStream PE.


----------



## mirkendargen

gfunkernaught said:


> @changboy I followed your advice lol. Waiting for fittings and more coolant to arrive.
> 
> Also need to get some fans. I already have one corsair sp120, wondering if I should get another two or get three thermal take ring sp fans. Not looking for lighting outside the case tbh. What fans would you suggest? Noctua maybe? The rad is a 360 ek LC Solution Coolstream PE.
> View attachment 2480320


If you don't care about lighting, just get Arctic fan 5-packs off Amazon.


----------



## jura11

Here are my temperatures after 2 hours of Control at 3440x1440 with everything maxed, using the XOC BIOS at 90% power limit with a +160MHz core OC and +1295MHz memory OC. VRAM temperatures have been in the 60s and ambient temperatures in the low to mid 20s, with fans spinning at 600-650RPM on the main rads; the MO-RA3 fans have been running at 1000RPM max










Hope this helps

Thanks,Jura


----------



## gfunkernaught

jura11 said:


> Here are my temperatures after 2 hours of Control at 3440x1440 with everything maxed, using the XOC BIOS at 90% power limit with a +160MHz core OC and +1295MHz memory OC. VRAM temperatures have been in the 60s and ambient temperatures in the low to mid 20s, with fans spinning at 600-650RPM on the main rads; the MO-RA3 fans have been running at 1000RPM max
> 
> View attachment 2480327
> 
> 
> Hope this helps
> 
> Thanks,Jura


What were your framerates like? Without DLSS right?


----------



## gfunkernaught

mirkendargen said:


> If you don't care about lighting, just get Arctic fan 5-packs off Amazon.


Are those Arctic fans good? I also don't want a lot of noise. Those Thermaltake fans are actually very quiet, same with the EK Vardars. Just wondering about options and inquiring about experience with other fans.


----------



## des2k...

jura11 said:


> Here are my temperatures after 2 hours of Control at 3440x1440 with everything maxed, using the XOC BIOS at 90% power limit with a +160MHz core OC and +1295MHz memory OC. VRAM temperatures have been in the 60s and ambient temperatures in the low to mid 20s, with fans spinning at 600-650RPM on the main rads; the MO-RA3 fans have been running at 1000RPM max
> 
> View attachment 2480327
> 
> 
> Hope this helps
> 
> Thanks,Jura


That's pretty awesome for temps. I don't remember now, what card/block do you have? Did you use LM?

5 mins of Control at 4k, no FPS cap, 2190 core (effective 2148), about 540W on my side. 26C water temp, 43C GPU, a 17C+ delta :-(


----------



## jura11

des2k... said:


> That's pretty awesome for temps. I don't remember now, what card / block do you have ? Did you use LM ?
> 
> 5mins, Control 4k no fps cap, 2190 core, eff 2148, about 540W on my side. 26c water temp, 43c gpu, 17c+ delta :-(


Hi there 

That's with a Palit RTX 3090 GamingPro and a Bykski waterblock and backplate, plus an 80mm and a 120mm fan pointed at the backplate. For TIM I used I think NT-H1, because at that time I didn't have Kryonaut or ZF-EX; I will need to repaste the GPU with ZF-EX or Kryonaut later on and maybe temperatures will improve a bit

Your temperatures are not bad; it depends on the loop and radiator space. I have 4x 360mm radiators plus a MO-RA3 360mm which helps a lot

Hope this helps 

Thanks, Jura


----------



## jura11

gfunkernaught said:


> What were your framerates like? Without DLSS right?


Hi there 

FPS has been above 100, and yes, without DLSS. I don't use DLSS, I hate DLSS hahaha

Hope this helps 

Thanks, Jura


----------



## des2k...

...


jura11 said:


> Hi there
> 
> That's with Palit RTX 3090 GamingPro and Bykski waterblock and backplate and 80mm and 120mm fan pointed out on backplate, for TIM I have used I think NT-H1 because that time I didn't have Kryonaut or ZF-EX, will need to repaste GPU with ZF-EX or Kryonaut later on and maybe temperatures will improve a bit
> 
> Yours temperatures are not bad, depending on the loop and radiator space, I have 4*360mm radiators plus MO-ra3 360mm which helps a lot
> 
> Hope this helps
> 
> Thanks, Jura


They list the ZT-A30900J-10P (OC) as compatible; mine is the ZT-A30900D-10P (non-OC).

Bykski N-RTX3090H-X-V2 GPU Water Cooling Block For GALAXY Palit KFA2 Maxsun Leadtek Gainward RTX 3080 3090 (www.formulamod.com)

For $90, if it works with the Zotac, I wouldn't mind seeing if I get a lower delta than EK.
Anybody else have experience with the V2 of this block?


----------



## jura11

@des2k... 

I think the V2 is an improved version of the waterblock I have now; mine is for sure the V1 version and it works okay. Temperatures are good and there are no issues; maybe later on I will replace the thermal pads with Thermalright Odyssey, but for now I'm very happy with the temperatures

I bought my Bykski waterblock back when they were first released

For the money the Bykski waterblock is one of the best; you can hardly find a better one for the price

Hope this helps 

Thanks, Jura


----------



## mirkendargen

gfunkernaught said:


> Are those arctic fans good? Also don't want a lot of noise either. Those thermalrake fans are actually very quiet, same with the ek vardars. Just wondering about options and inquiring about experience with other fans.


Yup Arctic stuff is good.


----------



## J7SC

mirkendargen said:


> Yup Arctic stuff is good.


...yeah, after testing a single one out for a bit, I bought 3x value packs over the last couple of weeks...need one more pack (for now...)


----------



## gfunkernaught

jura11 said:


> Hi there
> 
> FPS have been above 100 and yes without the DLSS, I don't use DLSS, I hate DLSS hahaha
> 
> Hope this helps
> 
> Thanks, Jura


Uh 4k native, above 100fps, with Ray tracing on?? What was your minimum fps?


----------



## jura11

gfunkernaught said:


> Uh 4k native, above 100fps, with Ray tracing on?? What was your minimum fps?


Hi there 

I have an Acer Predator X34P, which is ultrawide 3440x1440, and yes, RT on, everything maxed. Minimum FPS I don't remember now; I will do a few more tests later

In 4k I think FPS should be similar; can't confirm that, sorry

Yes, Arctic Cooling P12 or F12 are great fans; I'm using them on my radiators alongside Phanteks PH-F120MP and a few other fans

Hope this helps

Thanks, Jura


----------



## gfunkernaught

jura11 said:


> Hi there
> 
> I have Acer Predator X34p which is Ultrawide 3440x1440 and yes RT on, everything maxed and minimum FPS don't remember now there, will do later on few more tests
> 
> In 4k I think FPS should be similar, can't confirm that, sorry
> 
> Yes Arctic Cooling P12 or F12 are great fans, using them on my radiators with Phanteks PH-F120MP and few other fans
> 
> Hope this help
> 
> Thanks, Jura


100fps is insane. I was getting about 48-50fps average at 4k native with everything maxed, about 540W max usage, 2100MHz average clock.

Thanks for the fan tip! I read a bunch of Amazon reviews for the P12s, and watched one video of the fan spinning at 1170 or so RPM where it produced a loud tone that I'm not down with. Do you guys experience a tone or hum at certain RPMs with the Arctic fans?


----------



## jura11

gfunkernaught said:


> 100fps is insane. I was getting about 48-50fps average at 4k native with everything maxed, about 540W max usage, 2100MHz average clock.
> 
> Thanks for the fan tip! I read a bunch of Amazon reviews for the P12s, and watched one video of the fan spinning at 1170 or so RPM where it produced a loud tone that I'm not down with. Do you guys experience a tone or hum at certain RPMs with the Arctic fans?


Hi there 

I've done a few tests in Control and will do a few more later tonight and post my minimum and maximum

I can't comment on 4k FPS; I can try that on my TV and see what FPS I get there. My clocks are usually 2145-2160MHz and the average is around 2130-2145MHz

No loud tone on my P12 PWMs. I'm running them at 850RPM max on my radiators; only on the MO-RA3 360mm do I run these fans at 1000-1200RPM, and in some situations at max RPM, and they're literally so quiet on the MO-RA3 360mm that I can't hear them at all

Hope this helps 

Thanks, Jura


----------



## des2k...

gfunkernaught said:


> 100fps is insane. I was getting about 48-50fps average at 4k native with everything maxed, about 540w max usage, 2100MHz average clock.
> 
> Thanks for the fan tip! I read a bunch of Amazon reviews for the P12s, and watched one video of the fan spinning at 1170 or so RPM where it produced a loud tone that I'm not down with. Do you guys experience a tone or hum at certain RPMs with the Arctic fans?


A single Arctic fan, no; multiple Arctic fans, yes. If you run them at different RPMs they don't seem to hum/tone.
I have a few of them in my O11; I separated them into groups to get rid of this annoying hum/tone.

P12 PST
Front at 1260rpm
Side at 1150rpm
Top at 1050rpm

But that might be just my setup: too many fans close together, or the PST fan design.


----------



## changboy

gfunkernaught said:


> @changboy I followed your advice lol. Waiting for fittings and more coolant to arrive.
> 
> Also need to get some fans. I already have one corsair sp120, wondering if I should get another two or get three thermal take ring sp fans. Not looking for lighting outside the case tbh. What fans would you suggest? Noctua maybe? The rad is a 360 ek LC Solution Coolstream PE.
> View attachment 2480320


Ok, I painted the bracket black before installing it. Just wanted to tell you this because afterwards it's harder if you don't like the chrome around your black.

I also bought 3 other fans, and after checking everything, for the best quality, durability, and performance at a cheap price I bought these Noctuas:

Noctua NF-P12 Redux-1700 PWM high-performance cooling fan, 4-pin, 1700 RPM (120 mm, grey) (www.amazon.ca)

And if you don't want to break your fans (they're external), you can add grills so wires and other things won't get into them. You can find whatever model you prefer:

Apevia G-120 120 mm fan grill accessory (www.amazon.ca)


----------



## mirkendargen

changboy said:


> Ok, I painted the bracket black before installing it. Just wanted to tell you this because afterwards it's harder if you don't like the chrome around your black.
> 
> I also bought 3 other fans, and after checking everything, for the best quality, durability, and performance at a cheap price I bought these Noctuas:
> 
> Noctua NF-P12 Redux-1700 PWM high-performance cooling fan, 4-pin, 1700 RPM (120 mm, grey) (www.amazon.ca)
> 
> And if you don't want to break your fans (they're external), you can add grills so wires and other things won't get into them. You can find whatever model you prefer:
> 
> Apevia G-120 120 mm fan grill accessory (www.amazon.ca)


Definitely add grills. Your fingers/toes/pets will thank you in the future, heh.


----------



## changboy

@gfunkernaught I know all that was already in your planning anyway 🤣!

I also screwed the bracket into the upper position so the rad doesn't extend past the PC case. Do this if you can, or if the bottom goes down too far, leave it as it is. My rad on the back is a 360mm.


----------



## gfunkernaught

changboy said:


> Ok, I painted the bracket black before installing it. Just wanted to tell you this because afterwards it's harder if you don't like the chrome around your black.
> 
> I also bought 3 other fans, and after checking everything, for the best quality, durability, and performance at a cheap price I bought these Noctuas:
> 
> Noctua NF-P12 Redux-1700 PWM high-performance cooling fan, 4-pin, 1700 RPM (120 mm, grey) (www.amazon.ca)
> 
> And if you don't want to break your fans (they're external), you can add grills so wires and other things won't get into them. You can find whatever model you prefer:
> 
> Apevia G-120 120 mm fan grill accessory (www.amazon.ca)


Aw man I do kinda want it black now lol. I already mounted it. I bought those fans but the 3-pin version since I already have a 4-way 3-pin splitter. Not too worried about putting 3 fans on one header. If it becomes an issue down the road I may end up getting the corsair commander pro. The rad on the back does look kinda silly tbh, but if it helps my temps then to hell with the looks! I found a few grills I had in my supply box. 

Man this 3090 upgrade really did a number on my PC. Starting to look like a Mad Max PC.


----------



## changboy

gfunkernaught said:


> Aw man I do kinda want it black now lol. I already mounted it. I bought those fans but the 3-pin version since I already have a 4-way 3-pin splitter. Not too worried about putting 3 fans on one header. If it becomes an issue down the road I may end up getting the corsair commander pro. The rad on the back does look kinda silly tbh, but if it helps my temps then to hell with the looks! I found a few grills I had in my supply box.
> 
> Man this 3090 upgrade really did a number on my PC. Starting to look like a Mad Max PC.


If your tubing isn't mounted yet, there's still time to paint it black hehehe. My rad really disappears behind my PC hihihi.
I showed this to someone else who wanted another external rad, but he ended up adding another rad outside, above the top rad, lol, so the heat of the first rad goes into the second rad. His temps went down a bit, but not like if you mount it on the back. Anyway, everyone does what they believe is right for them hehehe.

It's also fun adding stuff, and one day you realize your PC weighs a ton lol.


----------



## J7SC

changboy said:


> If your tubing isn't mounted yet, there's still time to paint it black hehehe. My rad really disappears behind my PC hihihi.
> I showed this to someone else who wanted another external rad, but he ended up adding another rad outside, above the top rad, lol, so the heat of the first rad goes into the second rad. His temps went down a bit, but not like if you mount it on the back. Anyway, everyone does what they believe is right for them hehehe.
> 
> *It's also fun adding stuff, and one day you realize your PC weighs a ton lol.*


...you got that right! I thought it was a brilliant idea to mount 5x thick 360s, 4x D5s, 2x GPU blocks and a heavy Threadripper block in my last build...that's before the case, PSU, 20x 120mm fans etc...the thing is close to 100 pounds. Watercooling stuff is heavy; it all adds up...


----------



## jura11

gfunkernaught said:


> 100fps is insane. I was getting about 48-50fps average at 4k native with everything maxed, about 540w max usage, 2100mhz average clock.
> 
> Thanks for the fan tip? I read a bunch of amazon reviews for the P12's, watched on video of the fan spinning at 1170 or so rpm, and it produced a loud tone that I'm down with. Do you guys experience a tone or hum at certain rpms with the arctic fans?


Hi there

I have just done a 4 hour test in Control and tested the OC etc. Max stable clocks in Control are 2130-2145MHz; anything above that will crash the game, or rather the Nvidia driver will crash. With such clocks I'm getting pretty much around 100FPS; during fights I'm getting 88-93FPS. That's with Nvidia Freestyle filters (Freestyle filters usually cost you around 5-10FPS). The lowest I have seen is when you enter the Control menu and go back to the game, where FPS will drop to 65-70FPS, but over the whole gaming session I can say I'm getting a nice and steady 100FPS, and in fights with the Hiss FPS drops to 88-93FPS

If you are going with any kind of fan controller, get an Aquacomputer Quadro or OCTO; they're among the best fan controllers, plus Aquasuite is easy to use etc

Hope this helps

Thanks,Jura


----------



## changboy

You know it's heavy when you need to clean your PC and have to bring it outside to blow air through the rads and fans.
I've used this for maybe 8 years and it's a really nice addition for cleaning your system. For some reason the price is much higher than what I paid (I paid around $80 CAD); maybe it's not from the same company:

www.newegg.ca


----------



## jura11

Hi guys 

I'm asking on behalf of a friend; he is looking for an RTX 3090 for his build, doesn't matter which RTX 3090

If someone has an RTX 3090 for sale, please PM me or let me know here

Thanks in advance, Jura


----------



## captn1ko

jura11 said:


> max stable clocks in Control are 2130-2145MHz anything


Is 2130-2145 game-stable good, average, or below average for a watercooled 3090?


----------



## Nizzen

captn1ko said:


> Is 2130-2145 game-stable good, average, or below average for a watercooled 3090?


Maybe average is 2100? No one knows, and some results are more stable than others. Too many factors: cooling, type of game, benchmark, and duration.


----------



## jura11

captn1ko said:


> Is 2130-2145 game-stable good, average, or below average for a watercooled 3090?


It depends on the game, I would say; in some games you can run higher clocks than in others

For example, in Superposition 4k or 1080p Extreme or other benchmarks I can run 2175-2190MHz; there is a first spike at 2205MHz which usually drops to 2190MHz and then 2175MHz

Control maxes the power limit of the KPE XOC BIOS at 75-77%. I haven't done lots of tests, as I use my PC for rendering most of the time, and in rendering I'm maxing the power limit of the KPE XOC BIOS at 56-60%

I will do a few tests later in Cyberpunk 2077, Assassin's Creed Valhalla and a few others and see what the stable clocks are

I would say in my opinion 2130-2145MHz is average, maybe slightly below average, under water

Hope this helps 

Thanks, Jura


----------



## jura11

Nizzen said:


> Maybe average is 2100? No one knows, and some results are more stable than others. Too many factors: cooling, type of game, benchmark, and duration.


I agree with that too; there are lots of variables and too many factors. My old Asus RTX 2080Ti Strix OC would do a stable 2175-2190MHz in raster games and 2130-2145MHz in RT games (2160MHz I could use only with BFV)

The average I literally have no idea about. I would say my Palit RTX 3090 GamingPro is a below average clocker; I just did a 4 hour gaming session last night (didn't sleep yet hahaha) and the GPU has been stable at 2130-2145MHz

Hope this helps 

Thanks, Jura


----------



## captn1ko

Thanks for the answers  I've had my 3090 Strix watercooled since Friday, now with the 520W vBIOS. 2130MHz is Metro Exodus stable so far, 2145MHz CP2077 stable, and 2175MHz Rust, BFV and Sea of Thieves stable.

Temps are great so far: 1800MHz @ 0.8V is 29-31°C and 2130MHz is 42°C. Ambient is ~22°C.


----------



## J7SC

captn1ko said:


> Thanks for the answers  I've had my 3090 Strix watercooled since Friday, now with the 520W vBIOS. 2130MHz is Metro Exodus stable so far, 2145MHz CP2077 stable, and 2175MHz Rust, BFV and Sea of Thieves stable.
> 
> Temps are great so far: 1800MHz @ 0.8V is 29-31°C and 2130MHz is 42°C. Ambient is ~22°C.


Which w-block are you using for your Strix ? I'm about to place some w-cooling orders for my Strix and also the CPU on a new build...


----------



## wuttman

captn1ko said:


> Temps are great so far. 1800mhz @0.8v 29-31°C and 2130mhz 42°C. Ambient is ~22°C.


What about vram?


----------



## captn1ko

J7SC said:


> Which w-block are you using for your Strix ? I'm about to place some w-cooling orders for my Strix and also the CPU on a new build...


It's the EK Strix Vector block.



wuttman said:


> What about vram?


Mem is running at +1000MHz so far. Passive backplate, max temp 84°C @ 520W.


----------



## des2k...

captn1ko said:


> Is 2130-2145 game-stable good, average, or below average for a watercooled 3090?


For a heavy load, raster or RTX/DLSS:
2190 (2155 effective) is the upper limit for game stable on mine, but it needs 1093mv.

Higher than that, you need a card with a big VRM plus voltage control, since the limit is 1100mv, and the higher the voltage, the more stress on the VRM.

Mem OC is important: you gain 5fps+ with +1500 mem for just 10W more in power, where it could take you 60W with a core OC to gain those FPS.

I settled on 2130 @ 1018mv and +1590 mem for mine. If the game is around 450W usage I usually load the 2190 profile.
All tested for under 2h with TE GT2; no crash in game so far.
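The memory-vs-core trade-off above can be put in frames-per-watt terms, using the illustrative figures from the post (+5 fps for +10 W of memory OC versus +60 W of core OC):

```python
# FPS gained per extra watt for memory OC vs core OC, using the
# illustrative figures quoted above (not measurements).

options = {
    "memory OC (+1500)": {"fps_gain": 5, "extra_w": 10},
    "core OC":           {"fps_gain": 5, "extra_w": 60},
}

for name, o in options.items():
    eff = o["fps_gain"] / o["extra_w"]
    print(f"{name}: {eff:.3f} fps per extra watt")
```

By these numbers, memory OC delivers roughly six times the frames per extra watt, which is why it is the cheaper first lever.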


----------



## jura11

For me as well, 2130-2145MHz is stable in any game. I just did tests with Cyberpunk 2077, Detroit: Become Human and Assassin's Creed Valhalla, but my RTX 3090 GamingPro needs a +135MHz offset for such clocks. I've literally done a whole day and night of testing and everything seems fine and stable with these clocks

Effective clocks are 2130MHz with +135MHz. For VRAM I tried +1200MHz, which is stable in gaming and rendering; +1400MHz is also stable, and with such clocks I finally broke 15k in Port Royal hahaha. The VRAM on the Palit RTX 3090 GamingPro won't pass +1400MHz; I would say +1435MHz will pass but the scores are lower. Port Royal won't pass 2145-2160MHz on core; it will crash with no result

As I said, my Palit RTX 3090 GamingPro is a below average clocker and won't OC as much as others here

Newer Nvidia drivers are just ...; I got strange flickering every 10 seconds in 3DMark Port Royal, disabled G-Sync and it seems everything is now okay...


Hope this helps 

Thanks, Jura


----------



## kx11

Temps go up to 80c when I push the power target slider to 123% without touching the voltage and clocks; reducing PT to 100% and OCing the clocks helps the card stay under 70c.


----------



## jura11

kx11 said:


> Temps go up to 80c when i push the power target slider to 123% without touching the voltage and clocks, reducing PT to 100% and OC clocks helps the card maintain temps under 70c


What GPU do you have there? I think you would benefit from watercooling; temperatures should drop by a good margin. Bykski makes waterblocks for most of the GPUs, I think (only for the GameRock OC they don't have one).

Personally, I would be disappointed to see temperatures above 45C on my Palit RTX 3090 GamingPro hahaha

Hope this helps

Thanks,Jura


----------



## kx11

jura11 said:


> What GPU you have there? I think you will benefit from watercooling there,temperatures should drop by good margin,Bykski makes waterblock I think for most of the GPUs(I think only for GameRock OC they don't have )
> 
> Personally I would be disappointed if I would see on my Palit RTX3090 GamingPro temperatures above 45C hahaha
> 
> Hope this helps
> 
> Thanks,Jura


Mine is the ROG Strix OC. Not interested in watercooling now, I'll settle with this GPU for now.


----------



## jura11

My best result in Port Royal: I just broke 15014 points and that's it. Tried a few hours to bench; usually I'm hitting something like 82% of the KPE XOC BIOS power limit in Port Royal, although I have set the power limit to 100%.









I scored 15 014 in Port Royal
AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10
www.3dmark.com





Hope this helps

Thank,Jura


----------



## captn1ko

This graphics card benefits from water cooling like few cards before it...

The waterblock dropped my temps with UV to 31°C, or lets me run ca. 150mhz - 200mhz higher ingame. Crazy...


----------



## Beagle Box

jura11 said:


> My best result in Port Royal: I just broke 15014 points and that's it. Tried a few hours to bench; usually I'm hitting something like 82% of the KPE XOC BIOS power limit in Port Royal, although I have set the power limit to 100%.
>
> I scored 15 014 in Port Royal
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10
> www.3dmark.com
>
> Hope this helps
>
> Thanks, Jura


Is the "KPE XOC BIOS" = the 1000W EVGA King Pin BIOS?
If so, you have the KP1K bios set to 100%? 😱 
From my experience with my Strix OC, you shouldn't need more than ~60% (on water). 
Anything more just adds heat.


----------



## jura11

Beagle Box said:


> Is the "KPE XOC BIOS" = the 1000W EVGA King Pin BIOS?
> If so, you have the KP1K bios set to 100%? 😱
> From my experience with my Strix OC, you shouldn't need more than ~60% (on water).
> Anything more just adds heat.


Yes, that's with the KPE XOC BIOS, and yes, that's the 1000W BIOS, and yes, I have the power limit set to 100%.

Sadly I don't have the Asus ROG RTX 3090 Strix; I have the Palit RTX 3090 GamingPro, which is a 2x8-pin GPU, and this BIOS is the best one for a 2x8-pin GPU. The KFA2 390W BIOS works as well, but with that BIOS I can only score 14206-14300 points.

Hope this helps 

Thanks, Jura


----------



## SoldierRBT

Anyone else excited for Resizable BAR? 5-11% improvement seems great. I wonder if Intel PCIe 4.0/ RTX 3090 + Resizable BAR would show more gains.


----------



## mardon

Hi all. Finally got my rig back up and running with my shunt modded 3090 (Reference). I went for a modest power mod of stacked 15mohm shunts. So far I'm really impressed with it. I can run locked at 2130mhz in Warzone and Cyberpunk, which are my two big stability tests. Also got to grips with nvidia-smi, which is really useful for additional stability. My card's not a great clocker and can't really run much past 2130mhz in anything other than Port Royal.

Anyway, using the fantastic shunt mod calculator, it gives me a correction factor of 1.33. I just wanted to check I'm filling it in correctly.

The slot limit of 75W: how do I check this?
My BIOS is 350w @ 100% and 390w @ 111%. I take it I've used the correct setting of 350w here, not the 390w max BIOS power?

As a side note, the Heatkiller V block blows the EK block out of the water. Having had problems with 3 out of the 4 products I've purchased from EK, they're officially on my **** list. The Heatkiller manages lower temperatures at higher clocks and wattage!!
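For anyone following along, the 1.33 correction factor falls out of the parallel-resistor math, assuming the reference board's stock shunts are 5 mOhm (check your own card before trusting this); a sketch:

```python
# Shunt-mod correction factor, a sketch. Assumption: the reference 3090
# uses 5 mOhm current-sense shunts (verify on your own board); stacking
# a 15 mOhm shunt on top puts the two resistors in parallel.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

STOCK = 5.0     # mOhm, assumed stock shunt
STACKED = 15.0  # mOhm, the shunt stacked on top

effective = parallel(STOCK, STACKED)  # 3.75 mOhm
correction = STOCK / effective        # ~1.33, matching the calculator

# Software still believes the shunt is 5 mOhm, so real power is
# reported power times the correction: a "350W" reading is ~467W.
real_watts = 350 * correction
print(round(correction, 2), round(real_watts))
```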


----------



## Beagle Box

jura11 said:


> Yes there, that's with KPE XOC BIOS and yes that's 1000W BIOS and yes I have set power limit to 100%
> 
> Sadly I don't have Asus ROG RTX 3090 Strix, I have Palit RTX 3090 GamingPro which is 2*8-pin GPU and this BIOS is one best for 2*8-pin GPU, KFA2 390W BIOS works as well but with that BIOS I can score 14206-14300 points
> 
> Hope this helps
> 
> Thanks, Jura


What I'm asking is: have you tried the 1000W BIOS at lower Power Limit settings?
I have run 5 or 6 different BIOSes and have scored 15k with the MSI 450W BIOS, and my PC only has 6 cores. If I run the KP 1000W BIOS at 100%, my Port Royal score will be at or below 15000. What I'm trying to say is that sliding the Power Limit and/or Core Voltage sliders to the left might actually raise your score when using the KP1K BIOS.


----------



## mardon

jomama22 said:


> No, you shouldn't expect a slow ramp up for gpu core temps when load is first applied. They will spike and then slowly creep as heat soak begins on the block and water. The temperature rise rate of the gpu core will be much much faster than the ability for that heat to transfer to the block and water.
> 
> Just for reference, I have a 8C water to gpu core delta between 400w-800w+ once gpu temperature stabilizes. Using 2x360 hwlabs gtx black Ice 2's, an mcp35x2, with fans in push/pull at 1700rpms in their own loop. Every 100w adds about 3C to the gpu core and water temp.
> 
> Also, running the mining bench and letting it soak for a while using +1500 mem:
> View attachment 2480206


You should be able to hit 125MH/s @ 300w btw.

I have mine set core 1450mhz ram +1400mhz (Can do +1500mhz but makes little difference) and it'll do 125 all day.
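For reference, mardon's settings work out to roughly 0.42 MH/s per watt; trivial arithmetic on the (anecdotal) numbers in the post:

```python
# Mining efficiency from the figures above: 125 MH/s at 300W
# with the core locked to 1450 MHz. Numbers are anecdotal.

hashrate_mhs, power_w = 125, 300
efficiency = hashrate_mhs / power_w  # ~0.417 MH/s per watt
print(round(efficiency, 3))
```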


----------



## T.Sharp

kx11 said:


> Temps go up to 80c when i push the power target slider to 123% without touching the voltage and clocks, reducing PT to 100% and OC clocks helps the card maintain temps under 70c


You tried manually undervolting and overclocking with the curve editor? I like to leave PL at max and set the V/F point on the curve. That way the card stays cool but the frequency isn't constantly changing as the card bounces off power limit. Should get more consistent frame times that way too.


----------



## Falkentyne

mardon said:


> Hi all. Finally got my rig back up and running with my shunt modded 3090 (Reference). I went for a modest power mod of stacked 15mohm shunts. So far i'm really impressed with it. I can run locked at 2130mhz in Warzone and Cyberpunk which are my two big stability tests. Also got to grips with Nvidia SMI which is reallly useful for additional stability. My Cards not a great clocker and cant really run much past 2130mhz in anything other than port royal.
> 
> Anyways.. using the fantastic shunt mod caluculator it gives me a correction factor of 1.33 I just wanted to check i'm filling it in correctly.
> 
> The slot Limit of 75W how do I check this?
> My Bios is 350w @ 100% and 390w @ 111% I take it i've used the correct setting of 350w here not 390w max Bios power?
> 
> As a side note the Heatkiller V block blows the EK block out the water. Having had problems with 3 out of 4 products i've purchased from them they're officially on my **** list. The heatkiller manages lower temperatures at higher clocks and wattage!!
> 
> 
> View attachment 2480489


The PCIE slot limit is the PCIE slot power at your original default power limit (e.g. 350W or 400W). Some boards draw less from the PCIE slot; the Strix, for example, only draws like 50W or something.


----------



## Esenel

captn1ko said:


> The waterblock dropped my temps with UV to 31°C or allows me to run ca. 150mhz - 200mhz higher ingame. Crazy...


Yeeeeah this math doesn't check out.
Going down to *31°C* is about *6 boost bins*, one for every *5°C* dropped.
As the *increment* is *15MHz* you will end up with *90MHz.*

Far away from a 200MHz gain just due to water cooling
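Esenel's bin math as arithmetic, assuming GPU Boost gains one 15 MHz bin per 5°C the core drops (the exact thresholds vary by card and BIOS, so this is a rule-of-thumb sketch):

```python
# Boost-bin rule of thumb, sketched. Assumption: one 15 MHz bin is
# gained per 5C drop in core temperature (varies by card/BIOS).

BIN_MHZ = 15
DEG_PER_BIN = 5

def boost_gain_mhz(temp_drop_c: int) -> int:
    """Automatic clock gain from cooling alone, in whole bins."""
    return (temp_drop_c // DEG_PER_BIN) * BIN_MHZ

# Dropping from ~60C on air to ~31C on water is about a 30C drop:
print(boost_gain_mhz(30))  # 90 MHz -- well short of a 200 MHz gain
```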


----------



## T.Sharp

@Esenel lower temps greatly increase OC potential. I don't think he means the card just boosts 200MHz higher automatically.


----------



## captn1ko

Esenel said:


> Yeeeeah this math doesn't check out.
> *31°C* results in *6 boost bins* for every 5°C.
> As the *increment* is *15MHz* you will end up with *90MHz.*
> 
> Far away from 200MHz gain just due to water cooling


1. Lower temp = lower power consumption = more boost at the same power limit.

2. Air cooling reaches the thermal limit before the chip is at max OC.

With a waterblock you get the higher boost bins due to the lower temp, the card can boost higher within the same power limit, and you eliminate the thermal limit so you can crank the power limit up even more. All together, my card can boost 150-200mhz higher under water.

On air my card was barely able to hold 2ghz ingame. Most of the time it was 1950 or less. Now I can set the PL to 520w and play my games at 2130-2145mhz.


----------



## Esenel

If you reached thermal limit on air before, your air cooler was ****ty.

But on a proper card this doesn't happen.
On Strix you will not see 200MHz by going on water.


----------



## J7SC

Esenel said:


> If you reached thermal limit on air before, your air cooler was ****ty.
> 
> But on a proper card this doesn't happen.
> On Strix you will not see 200MHz by going on water.


The highest GPU temp I've ever seen on my Strix is 70 C w/ stock bios and stock air cooler @ room temp ambient with full PL and max oc...usually it's just below that in benches. Also, I did an hour of FS2020 at slightly lower oc and PL and temps were in the high 50 C to low 60 C range throughout...


----------



## jomama22

mardon said:


> You should be able to hit 125MH/s @ 300w btw.
> 
> I have mine set core 1450mhz ram +1400mhz (Can do +1500mhz but makes little difference) and it'll do 125 all day.
> 
> View attachment 2480492


Wasn't really concerned with the MH/s tbh, don't mine with the card. Was doing it for the memory junction temp and hotspot.


----------



## WillP

Beagle Box said:


> Have you tried the 1000W BIOS at lower Power Limit settings is what I'm asking.
> I have run 5 or 6 different BIOSs and have scored 15k with the MSI 450W BIOS and my PC only has 6 cores. If I run the KP1000W BIOS at 100%, my Port Royal score will be at or below 15000. Sliding the Power Limit and/or Core Voltage sliders to the left might actually raise your score when using the KP1K BIOS is what I'm trying to say.


The 1000w XOC bios on a 2x8-pin card will draw "only" ~690w at 100% power limit (as I understand it, and my mental arithmetic is rubbish at this time of night), bringing it much more in line with your 60% power limit. From my experience with my 2x8-pin card, going beyond 85% achieved nothing and I was never power limited beyond that point. Actually, I run the card at a 75% limit to keep it closer to the ~500w bios on the 3x8-pin cards.
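WillP's ~690w figure works out if you assume the XOC BIOS budgets 75W for the PCIe slot and splits the rest evenly across three 8-pin rails, with one rail simply missing on a 2x8-pin board. That split is an assumption on my part, not documented here, so treat this as a plausibility check only:

```python
# Why a 1000W BIOS might only draw ~690W on a 2x8-pin card -- a sketch.
# ASSUMPTION: 75W budgeted for the PCIe slot, remainder split evenly
# across three 8-pin rails; a 2x8-pin board lacks one rail.

TOTAL_W = 1000
SLOT_W = 75
RAILS = 3

per_rail = (TOTAL_W - SLOT_W) / RAILS  # ~308W per 8-pin rail
two_pin_max = SLOT_W + 2 * per_rail    # ~692W, close to the ~690W seen
print(round(two_pin_max))
```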


----------



## gfunkernaught

Got my 3rd rad hooked up and did a repaste/remount of the gpu block and backplate. I'm able to hit [email protected] effective while running the Bright Memory benchmark, power usage is around 550w. Core temp peaked at 45c. Water temp has been sitting at just below 28c for about 5min now. I noticed the gpu temp slowly rise while the water temp stayed the same, around 27.7c. It has been running for 20min now. IC Diamond does take time to cure, so we will see how it performs later on. Bright Memory didn't really make the core as hot as Cyberpunk did, or Quake 2 RTX. I'm just seeing where the clocks are. Back to work I go.


----------



## des2k...

gfunkernaught said:


> Got my 3rd rad hooked up and did a repaste/remount of the gpu block and backplate. I'm able to hit [email protected] effective while running the Bright Memory benchmark, power usage is around 550w. Core temp peaked at 45c. Water temp has been sitting at just below 28c for about 5min now. I noticed the gpu temp slowly rise while the water temp stayed the same, around 27.7c It has been running for 20min now. IC Diamond does take time to cure so we will see how it performs later on. Bright mem didnt really make the core as hot as cyberpunk did, or quake 2 rtx. I'm just seeing where the clocks are. Back to work I go.
> View attachment 2480550


Are you willing to run the TE GT2 loop? You can cap fps to 66, windowed mode; usually that gives the highest effective frequency on it, and a bit less power draw too.

Does it pass or crash after like 30m or 1h at 2150 eff, 1088mv?
Just curious, since mine needs 1093mv.


----------



## gfunkernaught

des2k... said:


> Are you willing to do TE GT2 loop, you can cap fps to 66, window mode, usually that gives the highest effective frequency on it, a bit less power draw too.
> 
> Does it pass crash / pass for like 30m or 1h at 2150 eff 1088mv ?
> Just curious since mine needs 1093mv.


You're talking about Time Spy Extreme graphics test 2, right? Sure. That's a good stress test anyway; Port Royal on loop is good too.
Waiting for the Noctuas to arrive. Right now I have a Corsair SP120 and two regular airflow fans, because I'm impatient lol.

Crashed at 5.5min. I just tried 1093mv and the GT test was doing this weird flickering thing: it looks like there is a vertical bar going across the screen left to right, and anything in the scene that touches the bar almost disappears. I even put the clocks back to stock and the same thing happened.


----------



## des2k...

Voltage slider 100%
nvidia-smi.exe -lgc 210,2190
Prefer maximum performance

I use a 2-point V/F curve to force freq/voltage at the 4th step.

E.g. this one forces 2190 at 1093mv, because the previous 3 voltage steps are flat and set for 2190 (2175 real step).
So any given step (2175 in this case) can only use 3 voltage points, then it has to jump to the next frequency bin.

Right point, you apply first,









Left point, you apply second,










You'll need to re-check those / apply multiple times if AB doesn't set it properly first.
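The flattening trick above can be sketched as data: represent the curve as voltage-to-frequency points and pin the target frequency onto the target point plus the three points below it. The specific mV/MHz values here are illustrative assumptions, and `flatten_curve` is a hypothetical helper, not anything Afterburner actually exposes:

```python
# Sketch of the two-point V/F flattening trick described above.
# ASSUMPTIONS: 15 MHz frequency bins, toy mV values; a frequency bin
# can span at most three voltage points before Boost must step up.

def flatten_curve(curve: dict, target_mv: int, target_mhz: int) -> dict:
    """Pin target_mhz onto target_mv and the three voltage points
    below it, so the card must run target_mhz by that voltage."""
    out = dict(curve)
    mv_points = sorted(v for v in out if v <= target_mv)[-4:]
    for mv in mv_points:
        out[mv] = target_mhz
    return out

# Toy curve: voltage (mV) -> frequency (MHz)
curve = {1068: 2130, 1075: 2145, 1081: 2160, 1087: 2175, 1093: 2190}
pinned = flatten_curve(curve, target_mv=1093, target_mhz=2190)
print(pinned[1075], pinned[1093])  # both 2190 once flattened
```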


----------



## mirkendargen

Anyone else had a problem with the plating starting to flake on a Bykski block? I was one of the first people here with a Strix Bykski block and noticed today I have a bit of plating flaking off. I also have a Bykski TR4 CPU block in the same loop that has been in use 2 years longer that is perfectly fine. Not sure if I got one from a bad batch or something changed or what.

It isn't the end of the world, just annoying that I'll have to clean out my loop.


----------



## jura11

mirkendargen said:


> Anyone else had a problem with the plating starting to flake on a Bykski block? I was one of the first people here with a Strix Bykski block and noticed today I have a bit of plating flaking off. I also have a Bykski TR4 CPU block in the same loop that has been in use 2 years longer that is perfectly fine. Not sure if I got one from a bad batch or something changed or what.
> 
> It isn't the end of the world, just annoying that I'll have to clean out my loop.


Hi there

I have, I think, the first batch of Bykski RTX 3090 waterblocks and no issues. Tomorrow I will be pulling out my RTX 2080Ti's and will check the RTX 3090 waterblock too, and probably replace the TIM as well.

What coolant and what fittings did you use?

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

des2k... said:


> voltage slider 100%
> nvidia-smi.exe -lgc 210,2190
> prefer max performance
> 
> I use 2 points VF curve to force freq / voltage at the 4th step.
> 
> ex. this one forces 2190 at 1093mv because the previous 3 voltage steps are flat & set for 2190 (2175 real step).
> So any given step (2175 in this case) can only use 3 voltage points then it has to jump to the next frequency bin.
> 
> Right point, you apply first,
> View attachment 2480555
> 
> 
> Left point, you apply second,
> View attachment 2480556
> 
> 
> 
> You'll need to re-check those / apply multiple times if AB doesn't set it properly first.


That didn't work for me. Instead I set an offset of +165, then raised the 1093mv point up to 2160mhz, and it settled around 2146-2149mhz, only to heat up to 46c; now it drops to 2130mhz effective, but still at 1093mv. Doesn't look like adding another 360mm rad did much, only gained a 35mhz overclock.


----------



## mirkendargen

jura11 said:


> Hi there
> 
> I have I think first batch of Bykski RTX 3090 waterblocks and no issues, tomorrow I will be pulling out my RTX 2080Ti's and will be checking too RTX 3090 waterblock and probably will replace TIM as well
> 
> What coolant did you used and what fittings did you used?
> 
> Hope this helps
> 
> Thanks, Jura


Mayhem's Pastel V2 UV white premix, a variety of fittings that are all brass. Like I said, I have a Bykski CPU block in the same loop that's been in use two years longer, so I don't think it's anything about my loop, probably just a sub par plating batch that I got unlucky with. I'll probably just strip the remaining plating with vinegar when I clean the loop out so I don't have to worry about any more flaking off and gunking up my radiators.


----------



## KedarWolf

mirkendargen said:


> Mayhem's Pastel V2 UV white premix, a variety of fittings that are all brass. Like I said, I have a Bykski CPU block in the same loop that's been in use two years longer, so I don't think it's anything about my loop, probably just a sub par plating batch that I got unlucky with. I'll probably just strip the remaining plating with vinegar when I clean the loop out so I don't have to worry about any more flaking off and gunking up my radiators.


I'd go Bykski for my ASUS OC 3090 but I think I'm going EKWB, just because they are coming out with active backplates sometime soon. For FE cards first, then others later.


----------



## gfunkernaught

Del


----------



## captn1ko

Esenel said:


> If you reached thermal limit on air before, your air cooler was ****ty.


Only if you set the fans to a (for me) annoying level. Otherwise you can't cool 520w on air.




Esenel said:


> But on a proper card this doesn't happen.
> On Strix you will not see 200MHz by going on water.


See post above. My card was not able to cool even 480w without jet noise or getting too hot. So in fact I got ca. 200mhz more usable in games with my water cooler.

With 100% fan my best avg clock in Timespy was 1995mhz. On water it's 2115. Still a 120mhz difference.


----------



## Falkentyne

captn1ko said:


> Only if you set the fans to an (for me) anoying level. Otherwise you cant cool 520w in air.
> 
> 
> 
> 
> See post above. My card was not able to cool even 480w without jet noise or getting to hot. So in fact i got ca. 200mhz more usable in games with my water cooler.


What? Says who?
My 3090 FE can cool 550W on air. I can keep it under 77C (with some difficulty).


----------



## captn1ko

Falkentyne said:


> My 3090 FE can cool 550W on air. I can keep it under 77C (with some difficulty)


Without annoying fan speed and for 24/7 usage? If so, perfect for you. My Strix was not able to use 480w at decent noise levels for gaming. What are the difficulties?

I'm talking about everyday usage, and in that case I don't want full speed fans all the time.


----------



## reflex75

PhuCCo said:


> I recommend the copper shims on the rear memory. I dropped 15C from my memory junction peak temp after doing this. I already had a ram waterblock mounted to the backplate.
> 
> Last week I took apart my 3090 FE with the Alphacool block and replaced all of the thermal pads with Thermalright Odyssey. Instead of the stock 2mm pads on the rear memory, I used 1mm pads with 1mm copper shims on top of the pads. I had to cut a few of them to smaller sizes in order to not overhang the memory modules, but it was very easy since they're just soft copper. Then I put a layer of Kryonaut on the shims to fill gaps between the copper and the backplate. I did a test fit and saw that it was making great contact and squeezing the paste. The shims are held in place insanely well from sticking to the pads and paste; they are not going to move. I went from a memory junction peak of 65C to 50C in two hours of running Port Royal and Time Spy stress tests.
> 
> I've never mined before, but I wanted to check the temps during mining as a lot of people are doing that now. I kept restarting the miner because I didn't understand how the payout works and wasn't sure if I had it set up correctly. After about three hours of mining, I hit a peak of 58C. 114-120MH/s (had a browser and discord open so it fluctuated) with +1000 on memory @ 280W. Ambient was 21-23C and water peaked at 30C during that time.
> 
> 3090FE Shim Mod - Google Drive


58°C max for the memory junction temp while mining is an amazing result!
It's the hardest test for checking max memory temp.
Usually, almost all stock 3090s are above 100°C, or even worse, throttle at 110°C...


----------



## gfunkernaught

Thinking about using these for the back of my card, I have some extra 2mm thermal pads.

Micro Connectors M.2 2280 SSD Heat Sink Kit - Black - Micro Center
www.microcenter.com


----------



## Style68

Hi Guys,
I have been getting valuable information from this forum and have learned alot about overclocking because of it. After trying every recommendation on this forum I will have to assume that both my ASUS RTX 3090 Strix's are duds. One is on a water cooled loop that is shared with an AMD 5950X that is overclocked as well. The system has a full EKWB system with one D5 pump and two 360 radiators (1 PE, 1 XE). I am pretty confident the EKWB water block and backplate are installed correctly and the thermal pads used are Gelid GP Extreme. After considerable testing of different BIOS, I have settled on the KP 1000w BIOS which seems to perform the best for this card. The best I can get out of this card is a constant 2085Mhz at +120 and +1000 for VRAM for CyberPunk. Anything higher and it crashes. CyberPunk was played on all high settings (4K, Quality DLSS, Quality RT). Even after playing with V/F, undervolting and SMI, it still seems that 2085MHz is the best I can get from this card. It is an improvement over when it was just on air which I was only able to get 2010MHz constant on CyberPunk. I could not stop trying to get to at least 2115MHz constant because of many people in here stating they get a constant 2130-2145MHz, and some stating they get this on air (which is so hard for me to believe because of my two apparent duds). Is there any advice someone can provide that can help me reach that 2115MHz mark? Below is information I have gathered.

Ambient air temp: 20c
Coolant temp at idle: 29c
GPU temp at idle: 30c
Coolant temp at full load: 34c
Highest GPU temp at full load: 48c

Would adding another radiator help or will 2085MHz be the best I can do? When I put my hand over the air that is being removed from the radiators, it doesn't seem hot or warm but just a cool warm.

Any help would be greatly appreciated, and sorry for the long post.
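Splitting those temperatures into deltas shows what another radiator can and cannot fix; a rough sketch using the numbers from the post:

```python
# Breaking Style68's load temperatures into the two deltas that matter.
# Numbers are from the post above.

ambient, water_load, gpu_load = 20, 34, 48

water_to_ambient = water_load - ambient  # 14C: what radiators control
gpu_to_water = gpu_load - water_load     # 14C: block mount / TIM / die

print(water_to_ambient, gpu_to_water)
# A third radiator only shrinks the first 14C; the block-side 14C
# stays, so the core would drop a handful of degrees at best --
# unlikely to unlock much more game-stable clock on its own.
```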


----------



## Falkentyne

Style68 said:


> Hi Guys,
> I have been getting valuable information from this forum and have learned alot about overclocking because of it. After trying every recommendation on this forum I will have to assume that both my ASUS RTX 3090 Strix's are duds. One is on a water cooled loop that is shared with an AMD 5950X that is overclocked as well. The system has a full EKWB system with one D5 pump and two 360 radiators (1 PE, 1 XE). I am pretty confident the EKWB water block and backplate are installed correctly and the thermal pads used are Gelid GP Extreme. After considerable testing of different BIOS, I have settled on the KP 1000w BIOS which seems to perform the best for this card. The best I can get out of this card is a constant 2085Mhz at +120 and +1000 for VRAM for CyberPunk. Anything higher and it crashes. CyberPunk was played on all high settings (4K, Quality DLSS, Quality RT). Even after playing with V/F, undervolting and SMI, it still seems that 2085MHz is the best I can get from this card. It is an improvement over when it was just on air which I was only able to get 2010MHz constant on CyberPunk. I could not stop trying to get to at least 2115MHz constant because of many people in here stating they get a constant 2130-2145MHz, and some stating they get this on air (which is so hard for me to believe because of my two apparent duds). Is there any advice someone can provide that can help me reach that 2115MHz mark? Below is information I have gathered.
> 
> Ambient air temp: 20c
> Coolant temp at idle: 29c
> GPU temp at idle: 30c
> Coolant temp at full load: 34c
> Highest GPU temp at full load: 48c
> 
> Would adding another radiator help or will 2085MHz be the best I can do? When I put my hand over the air that is being removed from the radiators, it doesn't seem hot or warm but just a cool warm.
> 
> Any help would be greatly appreciated, and sorry for the long post.


Lower your memory offset to +500 then try.


----------



## Style68

Falkentyne said:


> Lower your memory offset to +500 then try.


Thanks. I have lowered it to +250 even +0 and still 2085MHz is all I can do.


----------



## Alex24buc

I need your help again, because something strange is happening with my Palit GamingPro OC card. I had the 390w BIOS from KFA2 for the past few months. My card at full load was reaching 73-74 degree temps. Yesterday, playing with the power limit at the maximum 111% (390w), I noticed that the temps dropped to 69-70 in the same scenarios where I used to have 73-74 degrees. The performance also dropped a little: 250 points in Port Royal, and 2-3 fps in other games and benchmarks.

I reinstalled the original BIOS from Palit (370w) and the performance recovered a little, but the temps also won't pass 69-70 no matter what I do. When the temps reach 71, the core clock seems to throttle a little until the temps go down to 69-70. I don't see the performance itself as a big problem; in the end, losing 150 points in Port Royal and getting a better temperature doesn't bother me. But I am afraid the card has a problem, because the fans now run faster to keep the card below 70 degrees, while before, on auto, the fans were more silent and didn't run at full speed at 70 degrees. I reinstalled the drivers, but same thing. Thanks for your help.


----------



## KedarWolf

Style68 said:


> Thanks. I have lowered it to +250 even +0 and still 2085MHz is all I can do.


Find a Strix OC in stock if you can, buy it, sell your old one at a profit.


----------



## gfunkernaught

Alex24buc said:


> I need your help again because something strange is happening with my palit gaming pro oc card. I had the 390w bios from KFA2 in the past few months. My card in full load was reaching 73-74 degrees temps. Yesterday when I was playing with the power limit at maximum 111% (390w) I noticed that the temps dropped to 69-70 in the same scenarios where I had 73-74 degrees. Also the performance dropped a little, 250 points in port royal and 2-3 fps in other games and benchmarks. I reinstalled the original bios from palit (370w) and the performance recovered a little but the temps also won’t pass 69-70 no matter what I do. When the temps reach 71 the core clock seems to throttle a little untill the temps go down to 69-70. I don’t see something bad with the performance because in the end loosing 150 points in port royal and getting a better temperature doesn’t bother me. But I am afraid that the card has a problem because even the fans run faster to keep the card lower than 70 degrees, while before on auto the fans were more silent and didn’t run at full speed at 70 degrees. Thanks for your help. I reinstalled the drivers but same thing.


When you flash the bios, do you run display driver uninstaller and then reinstall drivers?


----------



## Style68

KedarWolf said:


> Find a Strix OC in stock if you can, buy it, sell your old one at a profit.


LOL. Yeah I was thinking about that. The second one I purchased was for this reason and it also didn't overclock well. I may take my chances and purchase another one.


----------



## Falkentyne

Alex24buc said:


> I need your help again because something strange is happening with my palit gaming pro oc card. I had the 390w bios from KFA2 in the past few months. My card in full load was reaching 73-74 degrees temps. Yesterday when I was playing with the power limit at maximum 111% (390w) I noticed that the temps dropped to 69-70 in the same scenarios where I had 73-74 degrees. Also the performance dropped a little, 250 points in port royal and 2-3 fps in other games and benchmarks. I reinstalled the original bios from palit (370w) and the performance recovered a little but the temps also won’t pass 69-70 no matter what I do. When the temps reach 71 the core clock seems to throttle a little untill the temps go down to 69-70. I don’t see something bad with the performance because in the end loosing 150 points in port royal and getting a better temperature doesn’t bother me. But I am afraid that the card has a problem because even the fans run faster to keep the card lower than 70 degrees, while before on auto the fans were more silent and didn’t run at full speed at 70 degrees. Thanks for your help. I reinstalled the drivers but same thing.


Close all active applications, reset the Nvidia custom profiles and directx global settings in the nvidia control panel (two different options) to default, then try again.
Alternatively, you can in addition to that, manually delete the files in NV_Cache (c:\programdata\Nvidia corporation\NV_cache).
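If you want to script that cache cleanup, here's a cautious sketch. The path is the one from the post; `clear_nv_cache` is a hypothetical helper of mine and defaults to a dry run so you can inspect before deleting anything:

```python
# Cautious sketch of the NV_Cache cleanup described above.
# dry_run=True (the default) only lists what would be removed.

from pathlib import Path

def clear_nv_cache(cache_dir: str, dry_run: bool = True) -> list:
    """Return the cache files found; delete them when dry_run is False."""
    root = Path(cache_dir)
    if not root.is_dir():
        return []
    files = [p for p in root.rglob("*") if p.is_file()]
    if not dry_run:
        for p in files:
            p.unlink()
    return files

# Inspect first, then rerun with dry_run=False once you're sure:
# clear_nv_cache(r"C:\ProgramData\NVIDIA Corporation\NV_Cache")
```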


----------



## Alex24buc

Yes, I did use DDU. I am afraid it is something regarding the thermal protection of the card, I don't know. Like the card wants to protect itself.


----------



## Bobbylee

Alex24buc said:


> Yes I did use ddu. I am afraid that is something regarding with the thermal protection of the card, I don’t know. Like the card wants to protect itself.


Did you install the latest Nvidia drivers? I noticed some slight performance differences


----------



## Alex24buc

I also installed the latest drivers, 461.72


----------



## Somandarin

I regret buying my Gigabyte GeForce RTX 3090! No better than my last RTX 2080 TI


----------



## Wihglah

Somandarin said:


> I regret buying my Gigabyte GeForce RTX 3090! No better than my last RTX 2080 TI


Well, mine is significantly better than my previous 980TI


----------



## KedarWolf

Somandarin said:


> I regret buying my Gigabyte GeForce RTX 3090! No better than my last RTX 2080 TI


My 3090 will be a HUGE upgrade from my really old 1080 Ti.


----------



## jura11

Alex24buc said:


> Yes I did use ddu. I am afraid that is something regarding with the thermal protection of the card, I don’t know. Like the card wants to protect itself.


You can try replacing the TIM. Get yourself Kryonaut or ZF-EX or Noctua NT-H1 or something along those lines and you should be okay there.

Personally, I always replace the TIM on stock GPU coolers, although I haven't used an air cooled GPU for a while.

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

@Alex24buc 
You can also try IC Diamond 24k. People say it leaves scratches on the die; if you remove it properly, it won't. Use the ArctiClean kit below when removing paste and follow the directions. After wiping off the first bottle's application with a paper towel, I clean the surface of the die with 99% isopropyl, then apply the second bottle, then wipe that, then wipe again with isopropyl. The important thing is to let the citrus-based cleaner soak into all of the paste; move the card around if the solution sits in one spot. It softens the IC Diamond. If you do use ICD, don't wipe in circles, just wipe in one direction once, then use a clean paper towel for each pass. I think people end up scratching the die because they wipe in circles, basically digging the bigger diamond particles into the die, leaving scratches.



https://www.amazon.com/ArctiClean-Thermal-Compound-Remover-Purifier/dp/B001JYQ9TM


----------



## KedarWolf

gfunkernaught said:


> @Alex24buc
> You can also try IC Diamond 24k. People say it leaves scratches on the die; if you remove it properly, it won't. Use the ArctiClean kit below when removing paste and follow the directions. After wiping off the first bottle's application with a paper towel, I clean the surface of the die with 99% isopropyl, then apply the second bottle, then wipe that, then wipe again with isopropyl. The important thing is to let the citrus-based cleaner soak into all of the paste; move the card around if the solution sits in one spot. It softens the IC Diamond. If you do use ICD, don't wipe in circles, just wipe in one direction once, then use a clean paper towel for each pass. I think people end up scratching the die because they wipe in circles, basically digging the bigger diamond particles into the die, leaving scratches.
> 
> 
> 
> https://www.amazon.com/ArctiClean-Thermal-Compound-Remover-Purifier/dp/B001JYQ9TM


I swear by MasterGel Maker. Just don't get the long wide applicator, it sucks; get the tube one.

It has micro-diamond particles and won't scratch your IHS. I find it's a bit better than Kryonaut at cooling, easier to apply, and won't stain your block as I've personally seen Kryonaut do.









[COOLERMASTER] MasterGel Maker Nano Ultra-High Thermal Conductivity Compound | eBay (www.ebay.com)

12.82US $ 28% OFF | Insulation Thermal Grease 2g/4g 11 W/m·K High-performance | AliExpress (www.aliexpress.com)

Expect shipping up to a month with both though.


----------



## gfunkernaught

I too am having a new stability issue. When I used the KP 1kW bios limited to 550w, I played Cyberpunk for hours [email protected] Then, after playing around trying to get a higher OC and multiple fails, I set it back to that original stable setting. That was just an offset of +150 on the core and +600 mem. I figured OK, maybe the 1kW bios is doing something to my card. Flashed to the EVGA 520w bios, set an offset of +105 to achieve [email protected] and ran Port Royal on loop; it crashed after a couple of minutes, core temp got to 44c. Set the curve to [email protected] so the effective clock would hit 2100mhz, ran Port Royal again, crashed within minutes at 44c. I also tried resetting all defaults (still with the EVGA 520w bios) and Port Royal crashed within seconds this time.

Is this a sign that my card may be damaged in some way?


----------



## J7SC

...spent part of the day fine-tuning the 3950X but ran a few Superposition 8k with my Strix on full tilt / window open...first time I got past 8600 for 8K with a single card ...and yeah, that 2265 MHz max clock didn't last that long through that run, hehe  as usual, stock bios and stock air cooler...my personal best with Superposition 8K is 11998 w/ 2x 2080 TI on full song, so still some ways to go with the single 3090 Strix OC...

Still, this system is nicely taking shape with CPU, RAM and GPU getting 'matched', and once w-cooled, it will be even more fun


----------



## stryker7314

Got a Kill A Watt to see how many watts the entire sig rig was drawing; it's no more than 650w, and usually 550-575w under load running Cawadooty with settings maxed out at 4k, RTX on. The 3090 KPE is overclocked to 2175 and hits 480w max (usually in the 350s), and the 9900k is overclocked to 5.3 at 1.425V with 5.0 cache, drawing about 155w max (usually in the mid 120s); memory overclocked with stock timings 14/14/14/34 at 3400Mhz. Posting metrics to consider for PSU purposes; a 750w titanium is plenty for me.


----------



## Falkentyne

gfunkernaught said:


> I too am having a new stability issue. When I used the KP 1kW bios limited to 550w, I played Cyberpunk for hours [email protected] Then, after playing around trying to get a higher OC and multiple fails, I set it back to that original stable setting. That was just an offset of +150 on the core and +600 mem. I figured OK, maybe the 1kW bios is doing something to my card. Flashed to the EVGA 520w bios, set an offset of +105 to achieve [email protected] and ran Port Royal on loop; it crashed after a couple of minutes, core temp got to 44c. Set the curve to [email protected] so the effective clock would hit 2100mhz, ran Port Royal again, crashed within minutes at 44c. I also tried resetting all defaults (still with the EVGA 520w bios) and Port Royal crashed within seconds this time.
> 
> Is this a sign that my card may be damaged in some way?


Go back to the KPE 1000W bios and +150 core and +600 mem and see if that still works.
The KPE bios has higher internal MSVDD and NVVDD rail limits. Different BIOSes will have different levels of stability, as well as different performance even at the exact same clocks, because the internal power rails are set differently...


----------



## KedarWolf

Does anyone have the KPE 1000W BIOS as a .rom file?

You can just run nvflash64 with



Code:


nvflash64 --save kpe.rom

for me, please, or a download link to it.

I can only find the EVGA .exe install file for FTW3 Ultras etc.

Edit: I found it. 









EVGA RTX 3090 VBIOS | 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## gfunkernaught

Falkentyne said:


> Go back to the KPE 1000W bios and +150 core and +600 mem and see if that still works.
> The KPE bios has higher internal MSVDD and NVVDD rail limits. Different BIOSes will have different levels of stability, as well as different performance even at the exact same clocks, because the internal power rails are set differently...


If it has higher rail limits, wouldn't that be bad for my card? Sounds like it would push the board beyond spec.


----------



## gfunkernaught

KedarWolf said:


> Does anyone have the KPE 1000W BIOS as a .rom file?
> 
> You can just run nvflash64 with
> 
> 
> 
> Code:
> 
> 
> nvflash64 --save kpe.rom
> 
> for me, please, or a download link to it.
> 
> I can only find the EVGA .exe install file for FTW3 Ultras etc.


go to page 1 of this thread and scroll down


----------



## gfunkernaught

@Falkentyne
Just tried your suggestions: PL set to 52%, crashed within a minute, core temp got to 42c. I remember these same settings working in Cyberpunk for at least 2 hours. I think I broke my card. Tried +145 core, crash.


----------



## Falkentyne

gfunkernaught said:


> @Falkentyne
> Just tried your suggestions: PL set to 52%, crashed within a minute, core temp got to 42c. I remember these same settings working in Cyberpunk for at least 2 hours. I think I broke my card. Tried +145 core, crash.


Can you try Wagnard's display driver uninstaller, please?
First download it. But don't run it yet.
Then open the Nvidia control panel and click on "Both" reset preferences/setting buttons for profiles and hit apply after each one.
Then go to c:\programdata\NVIDIA Corporation and delete all the files in NV_Cache.
Then unplug the internet, boot to safe mode (hold down Shift when clicking "Restart" in the start menu and choose advanced options), run DDU in safe mode (Clean + Restart option in DDU), reboot, reinstall the drivers with the internet still unplugged, reboot a second time, then plug the internet back in.

Then try your clock settings again.
Report back.


----------



## gfunkernaught

Falkentyne said:


> Can you try Wagnard's display driver uninstaller, please?
> First download it. But don't run it yet.
> Then open the Nvidia control panel and click on "Both" reset preferences/setting buttons for profiles and hit apply after each one.
> Then go to c:\programdata\NVIDIA Corporation and delete all the files in NV_Cache.
> Then unplug the internet, boot to safe mode (hold down Shift when clicking "Restart" in the start menu and choose advanced options), run DDU in safe mode (Clean + Restart option in DDU), reboot, reinstall the drivers with the internet still unplugged, reboot a second time, then plug the internet back in.
> 
> Then try your clock settings again.
> Report back.


I did everything you said, two files in NV_CACHE were in use and I forgot to delete them in safe mode. Ran port royal with the same settings and it crashed in 45sec. Would those two files have anything to do with it?


----------



## gfunkernaught

Trying again; this time I deleted the NV_Cache files in safe mode before running DDU, and I'm also trying the newest drivers as of 2/25/21. Will report.


----------



## J7SC

...compared to 461.40, are the new 461.72 drivers impacting performance (better or worse) apart from the specific games NVidia mentioned in the release notes ?


----------



## gfunkernaught

@Falkentyne no good, crashed within a minute this time. I've seen this kind of behavior before...dying card. I think if I keep this up, it will start to artifact at the desktop. When I removed the block to repaste and remount, I noticed the mosfet pad had dried up a bit, some of it stuck to the mosfet chips.


----------



## gfunkernaught

J7SC said:


> ...compared to 461.40, are the new 461.72 drivers impacting performance (better or worse) apart from the specific games NVidia mentioned in the release notes ?


I have no clue, haven't tried any games.


----------



## J7SC

gfunkernaught said:


> @Falkentyne no good, crashed within a minute this time. I've seen this kind of behavior before...dying card. I think if I keep this up, it will start to artifact at the desktop. When I removed the block to repaste and remount, I noticed the mosfet pad had dried up a bit, some of it stuck to the mosfet chips.





gfunkernaught said:


> I have no clue, haven't tried any games.


Tx re. the driver. Also, I recall your 2080 ti adventures  ; this advice is worth as much as you paid for it, but you might want to put the stock bios back on your 3090 Trio and not stress the card too much until you can look at the waterblock mount and all thermal strips and contacts


----------



## Gebeleisis

Alex24buc said:


> Yes I did use ddu. I am afraid that is something regarding with the thermal protection of the card, I don’t know. Like the card wants to protect itself.


Repaste your card.
Buy some new thermal pads.

The paste that comes with the palit is just meh... 

BTW you will lose warranty on that card (in our sweet home Romania)


----------



## xarot

Normal temps for Strix 3090 OC? In Cyberpunk (no Vsync/Gsync enabled) the card stays at around 68-70c but the fan goes up to 86% in longer sessions and starting to get LOUD. So not much OC headroom left. Is that normal or poor contact or such?

Case has 2x140mm blowing air to the card, and 10x120mm in radiators + 1x140mm blowing air directly out of the case.


----------



## gfunkernaught

Del


----------



## Gebeleisis

xarot said:


> Normal temps for Strix 3090 OC? In Cyberpunk (no Vsync/Gsync enabled) the card stays at around 68-70c but the fan goes up to 86% in longer sessions and starting to get LOUD. So not much OC headroom left. Is that normal or poor contact or such?
> 
> Case has 2x140mm blowing air to the card, and 10x120mm in radiators + 1x140mm blowing air directly out of the case.


You are pulling 350w at the wall (if on the stock bios) and that heat needs room to dissipate.
I recommend watercooling the 3090 as it is a power-hungry card.


----------



## gfunkernaught

Del


----------



## EarlZ

Anyone here with an MSI Suprim X 3090 still running the stock heatsink: what GDDR6X (Memory Tjunction) temps are you guys getting?

I ask because an upgrade opportunity has come up for me and it would cost me $520 to upgrade from a 3080 to the MSI Suprim X 3090.


----------



## J7SC

xarot said:


> Normal temps for Strix 3090 OC? In Cyberpunk (no Vsync/Gsync enabled) the card stays at around 68-70c but the fan goes up to 86% in longer sessions and starting to get LOUD. So not much OC headroom left. Is that normal or poor contact or such?
> 
> Case has 2x140mm blowing air to the card, and 10x120mm in radiators + 1x140mm blowing air directly out of the case.


In demanding DX11 apps (such as FS2020), my Strix OC's temps stay in the 50C - 60C range even with near-constant 99% utilization. With demanding DX12 apps, it is a bit higher (+ 7 C or so), depending on Ray tracing mode, resolution etc. I do have a helper fan mounted over the backplate though, to help with VRAM temps.

Up to about 70%, the Strix OC's fans are fairly quiet and bearable; after that, they become 'quite audible'...I typically set the fans at 97% though for benching and big DX12 titles such as Cyberpunk 2077, but I am also about 7 feet away from the (mostly open) build. Managing 'case airflow' is critical for these cards, what with 500+ W observed at peak...that heat energy not only has to be taken off the GPU, but it has to go somewhere swiftly outside the system.

As mentioned before, looking forward to w-cooling the Strix...



EarlZ said:


> Anyone here with an MSI Suprim X 3090 still running the stock heatsink: what GDDR6X (Memory Tjunction) temps are you guys getting?
> 
> I ask because an upgrade opportunity has come up for me and it would cost me $520 to upgrade from a 3080 to the MSI Suprim X 3090.


...the 3090 Suprim X does have some of the better VRAM cooling for its stock backplate, i.e. with a copper heat pipe, per pic from TechPowerUp:


----------



## des2k...

J7SC said:


> In demanding DX11 apps (such as FS2020), my Strix OC's temps stay in the 50C - 60C range even with near-constant 99% utilization. With demanding DX12 apps, it is a bit higher (+ 7 C or so), depending on Ray tracing mode, resolution etc. I do have a helper fan mounted over the backplate though, to help with VRAM temps.
> 
> Up to about 70%, the Strix OC's fans are fairly quiet and bearable; after that, they become 'quite audible'...I typically set the fans at 97% though for benching and big DX12 titles such as Cyberpunk 2077, but I am also about 7 feet away from the (mostly open) build. Managing 'case airflow' is critical for these cards, what with 500+ W observed at peak...that heat energy not only has to be taken off the GPU, but it has to go somewhere swiftly outside the system.
> 
> As mentioned before, looking forward to w-cooling the Strix...
> 
> 
> 
> ...the 3090 Suprim X does have some of the better VRAM cooling for its stock backplate, i.e. with a copper heat pipe, per pic from TechPowerUp:


Heatpipes only move heat from one place to another, they don't cool.

In this pic they do absolutely nothing, because there's no mass to sink the heat into.


----------



## mirkendargen

des2k... said:


> Heatpipes only move heat from one place to another, they don't cool.
> 
> In this pic they do absolutely nothing, because there's no mass to sink the heat into.


Maybe that fat thermal pad on the opposite side is connected through a hole in the PCB and is touching the main cooler....maybe...


----------



## gfunkernaught

J7SC said:


> Tx re. the driver. Also, I recall your 2080 ti adventures  ; this advice is worth as much as you paid for it, but you might want to put the stock bios back on your 3090 Trio and not stress the card too much until you can look at the waterblock mount and all thermal strips and contacts


I remounted it and used metal washers on top of the plastic ones on the four main GPU screws, and made sure the rest of the board was straight. Also made sure the inductor thermal pads didn't overlap the mosfets. I couldn't find detailed technical info as to why a once-stable OC is no longer stable, like what happens at the low level. I'm guessing degradation of power delivery..? Never using the 1kW bios again unless I have a 1kW card. It's reckless. Hope others thinking about using those bios on their cards read this and do NOT do it. Just stick to spec and find the highest OC you can get with that.

I did drop a new cpu into the system, might add to the complexity of the situation.


----------



## Falkentyne

gfunkernaught said:


> I remounted it and used metal washers on top of the plastic ones on the four main GPU screws, and made sure the rest of the board was straight. Also made sure the inductor thermal pads didn't overlap the mosfets. I couldn't find detailed technical info as to why a once-stable OC is no longer stable, like what happens at the low level. I'm guessing degradation of power delivery..? Never using the 1kW bios again unless I have a 1kW card. It's reckless. Hope others thinking about using those bios on their cards read this and do NOT do it. Just stick to spec and find the highest OC you can get with that.
> 
> I did drop a new cpu into the system, might add to the complexity of the situation.


Try increasing your VCCSA (system agent voltage) to 1.30v then test your previous overclock please.


----------



## Thanh Nguyen

gfunkernaught said:


> I remounted it and used metal washers on top of the plastic ones on the four main GPU screws, and made sure the rest of the board was straight. Also made sure the inductor thermal pads didn't overlap the mosfets. I couldn't find detailed technical info as to why a once-stable OC is no longer stable, like what happens at the low level. I'm guessing degradation of power delivery..? Never using the 1kW bios again unless I have a 1kW card. It's reckless. Hope others thinking about using those bios on their cards read this and do NOT do it. Just stick to spec and find the highest OC you can get with that.
> 
> I did drop a new cpu into the system, might add to the complexity of the situation.


I have been using 1kw bios since the release and not a single problem.


----------



## gfunkernaught

Falkentyne said:


> Try increasing your VCCSA (system agent voltage) to 1.30v then test your previous overclock please.


You know I have a Trio right? I don't have access to Kingpin level voltage controls.


----------



## gfunkernaught

Thanh Nguyen said:


> I have been using 1kw bios since the release and not a single problem.


On which card?


----------



## Thanh Nguyen

gfunkernaught said:


> On which card?


Ftw3


----------



## yzonker

gfunkernaught said:


> I remounted it and used metal washers on top of the plastic ones on the four main GPU screws, and made sure the rest of the board was straight. Also made sure the inductor thermal pads didn't overlap the mosfets. I couldn't find detailed technical info as to why a once-stable OC is no longer stable, like what happens at the low level. I'm guessing degradation of power delivery..? Never using the 1kW bios again unless I have a 1kW card. It's reckless. Hope others thinking about using those bios on their cards read this and do NOT do it. Just stick to spec and find the highest OC you can get with that.
> 
> I did drop a new cpu into the system, might add to the complexity of the situation.


New to this board but not really new to all of this. I've been using the XOC bios for a while on my Zotac 3090. 

Anyway your trouble caught my interest for obvious reasons. Did you try going to the extreme and underclocking it or maybe limiting power to a low level just to see if it's ever stable? 

Might be worth swapping cpus back too if you can or do some stability testing on it to try to rule it out. 

I hope it's not dead. Keep in mind cards go bad even at 100% stock usage, so there's no way to know for sure the high power levels killed it. Obviously doesn't help. This is similar to putting a supercharger on a car: if you do it right it probably won't hurt the motor, but there's always increased risk.


----------



## des2k...

gfunkernaught said:


> I remounted it and used metal washers on top of the plastic ones on the four main GPU screws, and made sure the rest of the board was straight. Also made sure the inductor thermal pads didn't overlap the mosfets. I couldn't find detailed technical info as to why a once-stable OC is no longer stable, like what happens at the low level. I'm guessing degradation of power delivery..? Never using the 1kW bios again unless I have a 1kW card. It's reckless. Hope others thinking about using those bios on their cards read this and do NOT do it. Just stick to spec and find the highest OC you can get with that.
> 
> I did drop a new cpu into the system, might add to the complexity of the situation.


Has nothing to do with the xoc bios.
Most likely you didn't screw those 4 screws at max (thermal pad didn't compress or paste too thick) + heat cycles the paste spread /pushed away and thermal pads compressed. You end up with no pressure on the core part.

Happened on mine lol, lost OC + artifacts.
Re-mounted + metal washers; that way I was able to max torque.


----------



## J7SC

des2k... said:


> heatpipes only move heat from one place to another, they don't cool
> 
> in this pic, they do absolutily nothing, because there's no mass to sink the heat into


...as stated in my original post, you obviously also have to be able to evacuate the heated air from the card and case, rather than just move it around...that said, I for one think it is a good idea to 'de-focus' some of the backside VRAM heat...

I do trust TechPowerup, and with noise-normalized results, the SuprimX did very well, per below:


----------



## gfunkernaught

yzonker said:


> New to this board but not really new to all of this. I've been using the XOC bios for a while on my Zotac 3090.
> 
> Anyway your trouble caught my interest for obvious reasons. Did you try going to the extreme and underclocking it or maybe limiting power to a low level just to see if it's ever stable?
> 
> Might be worth swapping cpus back too if you can or do some stability testing on it to try to rule it out.
> 
> I hope it's not dead. Keep in mind cards go bad even at 100% stock usage, so no way to know for sure the high power levels killed it. Obviously doesn't help. This is a similar to putting a supercharger on a car. If you do it right it probably won't hurt the motor but there's always increased risk.


The 520w bios weren't stable at stock clocks either, stock for those bios.
Yes I will swap the cpus tonight and try the stock 380w bios at stock and run port royal on loop for 20min minimum. Then I will work my way up to the suprim 450w bios, evga 500w then 520w, all without setting offsets. Each bios test will get a 20min PR loop test.
I don't think the card is dead though, hoping I didn't cause physical micro-damage. It sucks because I saw real time scaling at 2100mhz and fps increased very noticeably. 10-15 fps. I like to play without DLSS so that's what kept drawing me into extreme overclocking. But all that might be over now.


----------



## gfunkernaught

des2k... said:


> Has nothing to do with the xoc bios.
> Most likely you didn't screw those 4 screws at max (thermal pad didn't compress or paste too thick) + heat cycles the paste spread /pushed away and thermal pads compressed. You end up with no pressure on the core part.
> 
> Happened on mine lol, lost OC + artifacts.
> Re-mounted + metal washers; that way I was able to max torque.


I did put a good amount of torque on the middle and all the other screws. I remember the issue with the EK 2080 Ti block where putting too much torque on the screws led to uneven contact. Draining my loop is a pain, especially from the GPU block since it's sideways. Need a good right-angle screwdriver.😂


----------



## gfunkernaught

On a brighter note:
Those m.2 heatsinks did wonders for the back of the card. Mem junction temps topped at 60c lol.


----------



## Beagle Box

gfunkernaught said:


> The 520w bios weren't stable at stock clocks either, stock for those bios.
> Yes I will swap the cpus tonight and try the stock 380w bios at stock and run port royal on loop for 20min minimum. Then I will work my way up to the suprim 450w bios, evga 500w then 520w, all without setting offsets. Each bios test will get a 20min PR loop test.
> I don't think the card is dead though, hoping I didn't cause physical micro-damage. It sucks because I saw real time scaling at 2100mhz and fps increased very noticeably. 10-15 fps. I like to play without DLSS so that's what kept drawing me into extreme overclocking. But all that might be over now.


In the end, I think you'll like the Suprim BIOS best for every day use and gaming. 
It ran very smoothly on my Strix and scored 15000+ in Port Royal, though it's only rated at 450W.


----------



## Falkentyne

gfunkernaught said:


> You know I have a Trio right? I don't have access to Kingpin level voltage controls.


VCCSA isn't a kingpin voltage. It's often tweaked in conjunction with VCCIO, most commonly for memory and high CPU cache ratio overclocking.

VCCSA is on your motherboard. It's system agent, signal between CPU, PCIE slots, etc...
People on Radeon 6800 XT/6900 XT cards in recent AMD posts had crashes and low overclocks if VCCSA was too low.









PSA: How I fixed my black screen issue and further increased stability with 6800 XT (forums.guru3d.com)





Just trying to help you.


----------



## des2k...

On Ryzen it's like that too; I had to increase the SoC voltage / drop to CL18 at 3800 mem with the RTX 3090. Didn't have this issue with my 1080 Ti.


----------



## gfunkernaught

Falkentyne said:


> VCCSA isn't a kingpin voltage. It's often tweaked in conjunction with VCCIO, most commonly for memory and high CPU cache ratio overclocking.
> 
> VCCSA is on your motherboard. It's system agent, signal between CPU, PCIE slots, etc...
> People on Radeon 6800 XT/6900 XT cards in recent AMD posts had crashes and low overclocks if VCCSA was too low.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> PSA: How I fixed my black screen issue and further increased stability with 6800 XT (forums.guru3d.com)
> 
> 
> 
> 
> 
> Just trying to help you.


Thanks man. Didn't mean to come off like that. It's been a while since I've seen those terms, so I didn't know you were referring to the mobo. I remember trying this (link attached) with my 8700k and it just clonked, so I left everything on auto with just the ratio set to 44, and I use the XMP profile for my RAM.

https://www.reddit.com/r/intel/comments/78g4qh


----------



## des2k...

gfunkernaught said:


> I did put a good amount of torque on the middle and all the other screws. I remember the issue with the EK 2080 Ti block where putting too much torque on the screws led to uneven contact. Draining my loop is a pain, especially from the GPU block since it's sideways. Need a good right-angle screwdriver.😂


without metal washers on top, this is what happens to those small ek washers after a few heat cycles, they just sink into the hole if you apply torque on them


----------



## gfunkernaught

des2k... said:


> without metal washers on top, this is what happens to those small ek washers after a few heat cycles, they just sink into the hole if you apply torque on them
> 
> View attachment 2480793


So am I supposed to use metal washers for all the screws not just the four main ones?


----------



## des2k...

gfunkernaught said:


> So am I supposed to use metal washers for all the screws not just the four main ones?


It's just the 4 for core, the holes are bigger diameter vs other holes on the board


----------



## gfunkernaught

des2k... said:


> without metal washers on top, this is what happens to those small ek washers after a few heat cycles, they just sink into the hole if you apply torque on them
> 
> View attachment 2480793





des2k... said:


> It's just the 4 for core, the holes are bigger diameter vs other holes on the board


So when I did my last remount, those plastic washers weren't warped, not even close; they were flat. I think you'd need a torque driver to set the exact amount of torque for each hole, otherwise you'll end up with an uneven mount. So I went with: drive until the screw stops, then a quarter turn more, for all the screws including the backplate.


----------



## J7SC

@gfunkernaught I hope the remount solves the problems, especially as they seem to appear after the previous remount, w/o having changed other mobo-related values...FYI, I had a couple of 980 Classifieds which I had used for sub-zero comps years ago, but then put them aside...when I got back to them years later to mount the original ICX air-coolers, I couldn't find those little spring-loaded screws NVidia/EVGA used, so I rigged s.th. up, very carefully...still, after about 20 min or so, one card would show temp deviations and started to artifact even at stock speed and voltage...I redid the mount using a different (still non-original) set of screws w/ springloading - and metal washers on top of the plastic ones - and everything was back to 'normal' (I'm posting this on the machine with those 980 Cl GPUs  ). Anyway, good luck !


----------



## gfunkernaught

@Falkentyne @J7SC 
So with the 9900K I wasn't even able to start the Bright Memory benchmark, and Port Royal would crash within a minute. Put the 8700K back in, stock, no overclocking, no XMP, and the Bright Memory benchmark started and ran for about 3 or 4 minutes. I still have the XOC bios on here with the power limit set to 50, no overclock applied. So far I've been running Port Royal on loop for about four and a half minutes; GPU temp seems to be settling at 42, clock speed is 1970, voltage 1068. I don't want to jinx it, so I'm going to let it run for another 25 minutes. The memory junction temperature is going between 56 and 58 with a peak of 60. Power usage peaked at 478 so far. I will take a look at those bus-level voltages on the motherboard, because I do want to put my CPU back to 4.8ghz, but it looks like I may have to drop some of the power-saving features, which were nice but may be causing problems with the 3090.


----------



## Bobbylee

gfunkernaught said:


> @Falkentyne @J7SC
> So with the 9900K I wasn't even able to start the Bright Memory benchmark, and Port Royal would crash within a minute. Put the 8700K back in, stock, no overclocking, no XMP, and the Bright Memory benchmark started and ran for about 3 or 4 minutes. I still have the XOC bios on here with the power limit set to 50, no overclock applied. So far I've been running Port Royal on loop for about four and a half minutes; GPU temp seems to be settling at 42, clock speed is 1970, voltage 1068. I don't want to jinx it, so I'm going to let it run for another 25 minutes. The memory junction temperature is going between 56 and 58 with a peak of 60. Power usage peaked at 478 so far. I will take a look at those bus-level voltages on the motherboard, because I do want to put my CPU back to 4.8ghz, but it looks like I may have to drop some of the power-saving features, which were nice but may be causing problems with the 3090.


So are you sure it wasn’t remounting that caused the problem? I ask because I use a 9900k with the XOC bios and wasn’t happy with temps initially, so I remounted, which improved temps by 10c.


----------



## gfunkernaught

Bobbylee said:


> So are you sure it wasn’t remounting that caused the problem? I ask because I use a 9900k with xoc bios and wasn’t happy with temps initially so remounted which improved temps 10c


Remounted the cpu or gpu? I'm pretty sure this mount is fine. Port Royal has been running for 20 minutes now; ambient temp is 20, water temp is 30, GPU temp is about 43. I think the 9900K is too much for my motherboard, even though MSI says it's compatible.


----------



## gfunkernaught

With Port Royal still running after about 25 minutes I started to up the offset of the core clock. The base was 1980mhz, so I started with +100 and in 15mhz increments went up to +130, applying each increment. Now at [email protected] Temps are the same as before. The top of the mountain for me right now is 2100mhz effective, scared to push it. I'll see how long it runs at 2090mhz, give it another 20min or so. I haven't touched the vmem offset.
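Side note for anyone following the +15MHz stepping above: NVIDIA's boost algorithm only grants requested clocks in 15 MHz bins, which is why offsets are worth stepping in 15 MHz increments (effective-clock readouts in HWiNFO can still land between bins). A quick sketch of how an offset maps to a bin; the 1980MHz base is taken from the post above, and the snapping rule is generic NVIDIA boost behavior, not anything card-specific:

```python
BIN_MHZ = 15  # NVIDIA boost moves the requested clock in 15 MHz steps

def snap_to_bin(clock_mhz):
    """Nearest clock the boost table can actually request."""
    return round(clock_mhz / BIN_MHZ) * BIN_MHZ

base = 1980
for offset in (100, 115, 130):
    target = base + offset
    print(f"+{offset} -> requested bin {snap_to_bin(target)} MHz")
```

So a +130 offset on a 1980MHz base lands on the 2115MHz bin rather than exactly 2110.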


----------



## Falkentyne

gfunkernaught said:


> Remounted the cpu or gpu? I'm pretty sure this mount is fine. Port Royal has been running for 20 minutes now ambient temp is 20 water temp is 30 GPU temp is about 43. I think the 9900K is too much for my motherboard even though MSI says it's compatible.


As I said earlier you need to divide and conquer. Always isolate one change at a time before panicking that you destroyed your video card.
So it looks like it's your 9900k that's causing the instability, then?
Is your video card stable at the original overclock with the 8700k ?


----------



## gfunkernaught

Falkentyne said:


> As I said earlier you need to divide and conquer. Always isolate one change at a time before panicking that you destroyed your video card.
> So it looks like it's your 9900k that's causing the instability, then?
> Is your video card stable at the original overclock with the 8700k ?


Amen.

Yeah, with the 8700k at stock, not the 4.8 it was at before, I am at 2090mhz right now, one bin below where I was when it failed. I still want to give it a few more minutes before I try to bump it up to 2100.

I want the cpu to run at least 4.8ghz, and will up those bus voltages you mentioned earlier. When I had the 2080 Ti and this CPU at 4.8 I let the bios do all the work and never had a problem. I guess the 3090 requires more at the PCIe level?


----------



## Beagle Box

gfunkernaught said:


> Remounted the cpu or gpu? I'm pretty sure this mount is fine. Port Royal has been running for 20 minutes now ambient temp is 20 water temp is 30 GPU temp is about 43. I think the 9900K is too much for my motherboard even though MSI says it's compatible.


It's important that you choose the best BIOS for whichever CPU you're running. I 'upgraded' the BIOS on my M5 to the most recent and had to flash back to 1.51. The newer ones crashed like crazy with settings that ran fine before. In fact, one BIOS had difficulty letting me into the CMOS setup at all. I almost couldn't flash it back. 

I've tried 4 of them and BIOS 1.51 (07/05/2018) is the only one that runs well with my i7-8086. That stinks for me because the Adaptive + Offset system in that BIOS is completely non-functional. But 1.51 has allowed me to bench @ 5.3/5.2 without hiccup. MSI is hit or miss in everything they do, I think. 

If you were trying the 1.51 bios with the 9900, your problems make perfect sense. Find and flash the correct BIOS and all should be good.


----------



## J7SC

gfunkernaught said:


> Remounted the cpu or gpu? I'm pretty sure this mount is fine. Port Royal has been running for 20 minutes now ambient temp is 20 water temp is 30 GPU temp is about 43. I think the 9900K is too much for my motherboard even though MSI says it's compatible.


...it would be nice for you if you can break the issue(s) up into two distinct sets...GPU (including mount) and CPU / mobo. If the GPU mount is fine now (and it looks like it per your most recent post), I would settle on a 'mild oc' on GPU and VRAM, lock it in and then compare the two CPUs in hard benching.

...fyi, on power saving settings for the GPU - every personal high score I've done (including Port Royal) has been with the NV power management setting on 'normal' rather than 'prefer maximum performance'...this likely has to do with heat and dropping down the NV boost temp-bins, and it may change when w-cooling this card.


----------



## gfunkernaught

Beagle Box said:


> It's important that you choose the best BIOS for whichever CPU your running. I 'upgraded' the BIOS on my M5 to the most recent and had to flash back to 1.51 The newer ones crashed like crazy with settings that ran fine before. In fact, one BIOS had difficulty letting me into the CMOS at all. Almost couldn't flash it back.
> 
> I've tried 4 of them and BIOS 1.51 (07/05/2018) is the only one that runs well with my i7-8086. That stinks for me because the Adaptive + Offset system in that BIOS is completely non-functional. But the 1.51 has allowed me to bench @ 5.3/5.2 without hiccup. MSI is hit or miss in everything they do, I think.
> 
> If you were trying the 1.51 bios with the 9900, your problems make perfect sense. Find and flash the correct BIOS and all should be good.


This is the motherboard bios I am currently on: version 7B58v1A, released 2020-06-11. This bios seems to automagically adjust all the settings for me for an oc of 4.8ghz. The VCCSA and VCCIO I will have to adjust myself, since they seem to be related to PCIe functions. Favorite bios so far. 5ghz is another story lol


----------



## gfunkernaught

J7SC said:


> ...it would be nice for you if you can break the issue(s) up into two distinct sets...GPU (including mount) and CPU / mobo. If the GPU mount is fine now (and it looks like it per your most recent post), I would settle on a 'mild oc' on GPU and VRAM, lock it in and then compare the two CPUs in hard benching.
> 
> ...fyi, on power saving settings for the GPU - every personal high score I've done (including PortRoyal) has been on the NV tab set to 'normal' rather than 'prefer maximum power'...this likely has to do with heat and dropping down the NV boost temp-bins, and it may change when w-cooling this card.


I got the 9900k just to max out my system without having to build a new one. MC had it for $300 so I figured why not? It looks like I'm settling on 2100mhz for now. I did set the offset to +145 while the gpu was already at 43c, so there is a chance after cooldown to idle, that offset may shoot it up higher. I'll keep an eye on that.
The thing with the 9900k is that during a cinebench the core temp reached 71c, and that bothers me so I want to delid it, but I've read that it isn't easy to do and way more involved than previous gens. The 8700k delid was super easy and I got excellent results. I could return the 9900k, still in the return window, but I already had a taste of those two extra cores and an extra 200mhz, so idk....


----------



## gfunkernaught

Probably should have reset the time in hwinfo with every step in offset oc, but so far it has been about 20min, maybe more, of [email protected] running PL on loop.
ambient 20c
water 30c
gpu 43c
vram 58c
Gonna try oc'ing the ram now.


----------



## Beagle Box

gfunkernaught said:


> This is the motherboard bios I am currently on:
> Version
> 7B58v1A
> Release Date
> 2020-06-11
> These bios seem to automagically adjust all the settings for me for an oc of 4.8ghz. The VCCSA and VCCIO I will have to adjust since they seem to be related to PCIe functions. Favorite bios so far. 5ghz is another story lol


Yeah. That's the latest one. It nearly locked my motherboard up completely. I couldn't run it. I'm surprised the 8700 works on it at all. Maybe my chipset drivers are old or something...


----------



## gfunkernaught

Upping the vram offset in 100mhz increments...got to +500 and noticed the gpu core temp begin to bounce up from 43c to 44c, but mostly settling at 43c. vram temp is now bouncing between 58c and 60c. Is the memory junction sensor only capable of reporting in 2c increments? I'm using an old Corsair Dominator RAM cooler that has two small fans; it is sitting on top of two m.2 heatsinks that are laying vertically on the back of the card where the vertical vram module strips are. I'm thinking about getting more of these heatsinks and just lining the back of the card over the vram and core with them.


----------



## yzonker

gfunkernaught said:


> Upping the vram offset in 100mhz increments...got to +500 and noticed the gpu core temp begin to bounce up from 43c to 44c, but mostly settling at 43c. vram temp is now bouncing between 58c and 60c. Is the memory junction sensor reader only capable of outputting 2c increments?. I'm using an old corsair dominator RAM cooler that has two small fans and it is sitting on top of two m.2 heatsinks that are laying vertically on the back of the card where the vertical vram module strips are. I'm thinking about getting more of these modules and just lining the back of the card where the vram and core is with heatsinks.


Mine never reports anything other than even numbers. Glad you may have found the problem. I've had some stability issues from swapping my 3900x to a 5800x.


----------



## gfunkernaught

yzonker said:


> Mine never reports anything other than even numbers. Glad you may have found the problem. I've had some stability issues from swapping my 3900x to a 5800x.


Don't want to jump to conclusions yet lol! But so far at 1.5hrs total and about 45min with a decent OC, temps seem to be under control and the PC is fairly quiet; playing games with my 5.1 speakers or headphones, I won't even hear it. I think I'm gonna leave the vmem offset at +500 for now until I get some more of these m.2 heatsinks. 
Want to thank yallz for talking me out of my panic.
_SMACK_ Settle down Beavis!


----------



## yzonker

gfunkernaught said:


> Don't want to jump to conclusions yet lol! But so far at 1.5hrs total and about 45min with a decent OC, temps seem to be under control, PC is is fairly quiet, playing games with my 5.1 speakers or headphones, won't even hear it. I think I'm gonna leave the vmem offset at +500 for now until I get some more of these m.2 heatsinks.
> Want to thank yallz for talking me out of my panic.
> _SMACK_ Settle down Beavis!


That's why I said "may have". Don't want to jinx you.


----------



## yzonker

On the topic of the XOC bios having various safety features disabled, what else can be done for some added protection if you want to run the bios longer term? 

The only solution I came up with was to set up Argus Monitor to shut down my machine if certain thresholds are crossed (above 60C core temp and below 1000 rpm pump speed specifically). This works provided Argus monitor is loaded and working correctly and Win10 doesn't hang up on the shutdown of course.
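For what it's worth, a similar safety net can be rigged without Argus Monitor by polling `nvidia-smi` (which ships with the driver) and forcing a shutdown past a threshold. A minimal sketch, with the 60C limit borrowed from the post above and the shutdown command assumed to be Windows-style (swap in `["shutdown", "-h", "now"]` on Linux); it shares the same caveat that it only helps while the script itself is alive and the OS cooperates:

```python
import subprocess
import time

CORE_TEMP_LIMIT_C = 60  # same threshold as the Argus Monitor rule above

def should_shutdown(core_temp_c, limit_c=CORE_TEMP_LIMIT_C):
    """Pure threshold check, kept separate so the trip logic is testable."""
    return core_temp_c >= limit_c

def read_core_temp_c():
    # nvidia-smi returns a bare integer with these format flags
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.split()[0])

def watchdog(poll_s=2):
    while True:
        if should_shutdown(read_core_temp_c()):
            # immediate forced shutdown (Windows syntax assumed here)
            subprocess.run(["shutdown", "/s", "/t", "0"])
            return
        time.sleep(poll_s)
```

A pump-RPM check could be bolted on the same way if the pump reports through a readable sensor.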


----------



## des2k...

"On the topic of the XOC bios having various safety features disabled, "
Various? More like all lol.

But hey, your GPU core is safe, it will shut off at 255c


----------



## gfunkernaught

yzonker said:


> On the topic of the XOC bios having various safety features disabled, what else can be done for some added protection if you want to run the bios longer term?
> 
> The only solution I came up with was to set up Argus Monitor to shut down my machine if certain thresholds are crossed (above 60C core temp and below 1000 rpm pump speed specifically). This works provided Argus monitor is loaded and working correctly and Win10 doesn't hang up on the shutdown of course.


If for whatever reason your oc crashes, and the video driver restarts, argus may or may not detect the gpu correctly.


----------



## jura11

@gfunkernaught 

Since the day I switched to the AMD X570 and 3900X I haven't had a single BSOD; the switch has been smooth sailing from day one. On the X570 I run a 3-GPU setup (sold both RTX 2080 Tis) plus 8 HDDs, 3 SSDs and 2 NVMe drives, with no issues at all.

On my X99 there literally wasn't a day without a BSOD; every morning I woke up and checked whether the PC was still running or stuck at the Windows 10 login. 

If I remember correctly, I have built loops with the 8086k and 9900k too. The 8086k would easily run 5.2-5.3GHz with a delid; without the delid 5.1GHz was possible, and 5.2GHz would run but temperatures would be beyond the 90s. The 9900k I used was a dud for OC hahaha, it wouldn't be stable beyond 5.1GHz at 1.4v, and literally needed about 1.45v to be stable at 5.1GHz. 

Regarding GPU OC stability, isolate the CPU and RAM first and only then do any GPU OC; if the CPU or RAM isn't stable, you will run into problems. 

Currently running the KPE XOC BIOS at 65% with 1405MHz on VRAM and +130MHz on core; anything above that and my GPU would crash in Port Royal, Control or Cyberpunk 2077. 

Hope this helps 

Thanks, Jura


----------



## yzonker

gfunkernaught said:


> If for whatever reason your oc crashes, and the video driver restarts, argus may or may not detect the gpu correctly.


That's a good point. Although that's why I was asking for alternatives. Flawed solution, but does provide somewhat of a safety net. I wouldn't be as concerned if not for the inability to buy another card.


----------



## gfunkernaught

jura11 said:


> Currently running KPE XOC BIOS at 65% with 1405MHz on VRAM and +130MHz on core, anything above that my GPU would crash in Port Royal or Control or Cyberpunk 2077


I set my cpu oc back to 4.8ghz, made sure the vccsa and io were set to 1.3v, and played Cyberpunk for 1.5hrs; the gpu core temp just started to creep up to 45c. It had been 43c the whole time, then crept to 44c and settled there, then it started to bounce up to 45c, and it crashed. The only thing I noticed that was weird, and probably not good, was that the core voltage started off at 1068mv with clocks ranging from 2077-2101mhz; then later on the core voltage jumped to 1075mv and stayed there, with the same clock range. I'm guessing that heated up the card, probably unnecessarily. That could be fixed by setting a curve. This was all with offsets of core +145, mem +500, PL set to 50%. So as long as the card stays below 44c I think it will be fine, OR I lower my overclock.


----------



## DrunknFoo

RIP 3090 FTW3 Ultra Gaming

Tragic life of my card: crippled at birth, has gone through several operations under the iron, lots of ups and downs, repaired/swapped the fuses twice, and the third time bridged them... This time around, no idea what went wrong; 3 red lights, and the bridges are intact. Spent a few hours trying to fix it, no bueno.
I think this card is in a better place now, off to recycling and gpu heaven. lol


----------



## Falkentyne

DrunknFoo said:


> RIP 3090 FTW3 Ultra Gaming
> 
> Tragic life of my card, crippled at birth, has gone through several operations under the iron, lota ups and downs, repaired / swapped the fuses twice, and the third time bridged them... This time around, no idea what went wrong, 3 red lights, the bridges are intact. Spent a few hours trying to fix, no bueno.
> I think this card is in a better place now, off to recycling and gpu heaven. lol
> 
> 
> 
> View attachment 2480824


Probably a combination of the hardware flaw that kills _stock_ FTW3's along with work that already may have harmed the integrity.
At this point I believe most FTW3 3090's are just slow ticking time bombs.  Which doesn't seem to affect the KPE since it actually has proper power regulation and digital controllers.


----------



## gfunkernaught

DrunknFoo said:


> RIP 3090 FTW3 Ultra Gaming
> 
> Tragic life of my card, crippled at birth, has gone through several operations under the iron, lota ups and downs, repaired / swapped the fuses twice, and the third time bridged them... This time around, no idea what went wrong, 3 red lights, the bridges are intact. Spent a few hours trying to fix, no bueno.
> I think this card is in a better place now, off to recycling and gpu heaven. lol
> 
> 
> 
> View attachment 2480824


My condolences. Quite the perspective this post has given me...


----------



## gfunkernaught

Update on my overclocking tests. Setting a single point in the curve to [email protected], netted [email protected] max, with minimum [email protected] Never saw 2100-2115mhz use anything less than 1068mv. The voltage also never went up either. Temp peaked at 45c which got me worried but average was 44c. Vram never exceeded 60c. Whew! Played cyberpunk for over an hour this time. I'm gonna return the 9900k since my 8700k does just fine, is delided, and I could maybe take another stab at 5ghz. Gonna use that money to get a bunch of those m.2 heatsinks and just cover the back of the card. Putting two already improved temps. Imagine the entire gpu/vram/vrm area...


----------



## DrunknFoo

Falkentyne said:


> Probably a combination of the hardware flaw that kills _stock_ FTW3's along with work that already may have harmed the integrity.
> At this point I believe most FTW3 3090's are just slow ticking time bombs. Which doesn't seem to affect the KPE since it actually has proper power regulation and digital controllers.


Ya, probably pushed it too much when I first got it... Time to somehow look for a replacement




gfunkernaught said:


> Talk about perspective!
> 
> Is that what I think it is? My eyes are tired...that's broke in half right?


Lol yup


----------



## J7SC

Well, I finally fought my way through various web sites and placed orders for two new extensive loops (one for the 3090 Strix OC, another for a new 3950X build that is now home to the 3090)...was quite an adventure overall at s.th. like four separate web sites, never mind the $s ...

I settled on the EK-Quantum Vector Strix RTX 3090 D-RGB (Nickel/ Plexi) as 1.) I actually found one but not any of the other options I had marked, and 2.) I am interested in adding the new 'active water-cooled' back plate EK is bringing out in a bit... the Strix will get its own loop with 640x64 rad space and dual D5s in series and Arctic fans in push/pull - that ought to help with 500+ Wattskis (I hope). The CPU loop is identical re. rads, pumps and fans...


----------



## gfunkernaught

J7SC said:


> Well, I finally fought my way through various web sites and placed orders for two new extensive loops (one for the 3090 Strix OC, another for a new 3950X build that is now home to the 3090)...was quite an adventure overall at s.th. like four separate web sites, never mind the $s ...
> 
> I settled on the EK-Quantum Vector Strix RTX 3090 D-RGB (Nickel/ Plexi) as 1.) I actually found one but not any of the other options I had marked, and 2.) I am interested in adding the new 'active water-cooled' back plate EK is bringing out in a bit... the Strix will get its own loop with 640x64 rad space and dual D5s in series and Arctic fans in push/pull - that ought to help with 500+ Wattskis (I hope). The CPU loop is identical re. rads, pumps and fans...


Time for a drink/smoke! 
You know I always thought EK > Plexi/Nickel, then someone here (I think) mentioned the regular non-plexi one. I was at Microcenter and they had one of those EK blocks up on their water cooling display. Tapped the black part, it was metal. Thought to myself "heat dissipation! DOH! Should've bought that one!" My card is horizontal so I don't care about how it looks.


----------



## J7SC

gfunkernaught said:


> *Time for a drink/smoke!*
> You know I always thought EK>Plexi/Nickel, then someone here (i think) mentioned the regular non plexi one. I was at microcenter and they one those ek blocks up on their water cooling display. Tapped the black part, it was metal. Thought to myself "heat dissipation! DOH! Should've bought that one!" My card is horizontal so I don't care about how it looks.


Enjoy your well-deserved break after all that excitement ! 

...my mobo is mounted horizontally, and the Strix sits vertically in its PCIe slot...the trick will be to plan the GPU loop in such a way that I can integrate the upcoming EK active water cooled back-plate if/when later...I also 'raided' Amazon for sheets and sheets of thermal pads that several folks recommended here and elsewhere, as well as Grizzly TIM, little heat-sinks, compression fittings and all that jazz that accompanies a loop even though I have a fair amount of w-cooling stuff unused here already...I usually do dual loops and I look forward to give this setup the same treatment


----------



## des2k...

It's going to be a long wait before I can buy it.... but I think this block will be awesome.
I asked about a 600w load on a reference card: 8c-10c delta.


----------



## EarlZ

J7SC said:


> ...as stated in my original post, you obviously also have to be able to evacuate the heated air from the card and case, rather than just move it around...that said, I for one think it is a good idea to 'de-focus' some of the backside VRAM heat...
> 
> I do trust TechPowerup, and with noise-normalized results, the SuprimX did very well, per below:
> 
> View attachment 2480779


Looks like it is connected to the main heatsink through a thermal pad near the MSI logo. I have a few 40x40x11mm heatsinks and some smaller ones like 14x14x6mm that I can place around the backplate, but that will ruin the aesthetic of the card, which to me is one of the selling points of this card over all other brands.


----------



## gfunkernaught

J7SC said:


> Enjoy your well-deserved break after all that excitement !
> 
> ...my mobo is mounted horizontally, and the Strix sits vertically in its PCIe slot...the trick will be to plan the GPU loop in such a way that I can integrate the upcoming EK active water cooled back-plate if/when later...I also 'raided' Amazon for sheets and sheets of thermal pads that several folks recommended here and elsewhere, as well as Grizzly TIM, little heat-sinks, compression fittings and all that jazz that accompanies a loop even though I have a fair amount of w-cooling stuff unused here already...I usually do dual loops and I look forward to give this setup the same treatment


Ha thermal pads raid! Gimme all your pads! I get notifications from ebay about pads. Still haven't heard from that dude about those sample pads. 13c delta at 500w, not too bad, dunno how well thermal pads will help. I'm gonna get those heatsinks today if I can find them.


----------



## jura11

30 minutes of Port Royal with +135MHz on core and 1405MHz on VRAM, and core and VRAM temperatures are quite okay I would say. From the wall I think I pulled something like 550-600W (855W total system wattage at the wall; right now the PC idles at 245W).


Hope this helps

Thanks,Jura
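As a sanity check on those wall numbers: subtracting the idle draw and discounting PSU losses gives roughly the GPU's share of the load. A quick sketch, where the 90% PSU efficiency is an assumption (typical for a Gold unit at this load), not a measured figure:

```python
def gpu_draw_estimate(load_wall_w, idle_wall_w, psu_efficiency=0.90):
    """Rough DC-side delta attributable to the GPU under load.

    Treats the rest of the system's draw as constant between idle and
    load, and psu_efficiency as flat across that range (an assumption).
    """
    return (load_wall_w - idle_wall_w) * psu_efficiency

# jura's figures from the post above: 855W at the wall, 245W idle
print(gpu_draw_estimate(855, 245))  # 549.0 W, inside the 550-600W estimate
```

The untested piece is how much extra the CPU pulls under a GPU bench versus idle, which is why the wall-delta method only brackets the true GPU draw.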


----------



## gfunkernaught

jura11 said:


> 30 minutes of Port Royal with +135MHz on Core and 1405MHz on VRAM and core and VRAM temperatures are quite okay I would say,from wall I think I pulled something like 550-600W(855W in total wattage of my system from wall,right now PC idles at 245W)
> 
> 
> View attachment 2480898
> 
> 
> Hope this helps
> 
> Thanks,Jura


That's insane. Lol. I can't push my vram past +500 or else the core will heat up a lot. How is your cooling set up again?


----------



## des2k...

gfunkernaught said:


> That's insane. Lol. I can't push my vram passed +500 or else the core will heat up a lot. How is your cooling set up again?


Check your mount at the edge of the die.

The memory is already rated for +700; going from +0 to +1700 only adds 16w of heat.

For me, going from +400 to +600 to +1590 while still passing the stress test was all in the mount of the block.


----------



## jura11

gfunkernaught said:


> That's insane. Lol. I can't push my vram passed +500 or else the core will heat up a lot. How is your cooling set up again?


Hi there 

Not sure, but it seems my VRAM can be pushed beyond 1250MHz; I tested 1405MHz, with which I'm getting the best results. My personal best is 15077 points in Port Royal right now.

Regarding the cooling, I'm running 4x 360mm radiators plus a MO-RA3 360mm, with fans spinning around 620-650RPM on the main radiators and 1000RPM on the MO-RA3.

Just doing testing with a 100mm copper heatsink which I slapped on the backplate, with an 80mm and a 120mm fan pointing at the backplate; not sure if that's improved overall cooling, but I dropped just 2-4°C on VRAM with that setup. 

Hope this helps 

Thanks, Jura


----------



## des2k...

jura11 said:


> Hi there
> 
> Not sure but seems my VRAM can be pushed beyond 1250MHz, tested 1405MHz with which I'm getting best results, my personal best is 15077 points in Port Royal right now
> 
> Regarding the cooling, running 4*360mm radiators plus MO-ra3 360mm with fans spinning around 620-650RPM on main radiators and 1000RPM on MO-ra3 360mm
> 
> Just doing testing with 100mm copper heatsink which I slapped on backplate with 80mm and 120mm fan which pointing to backplate, not sure if that's improved overall cooling but I dropped just 2-4°C on VRAM with such setup
> 
> Hope this helps
> 
> Thanks, Jura


What kind of pump system are you running?

I was thinking of adding an external MO-RA with a D5 Strong at 24v to my loop.


----------



## gfunkernaught

Throwing ideas out there for backplate cooling using air....I'm digging into the m.2 heatsink world...


----------



## gfunkernaught

jura11 said:


> Hi there
> 
> Not sure but seems my VRAM can be pushed beyond 1250MHz, tested 1405MHz with which I'm getting best results, my personal best is 15077 points in Port Royal right now
> 
> Regarding the cooling, running 4*360mm radiators plus MO-ra3 360mm with fans spinning around 620-650RPM on main radiators and 1000RPM on MO-ra3 360mm
> 
> Just doing testing with 100mm copper heatsink which I slapped on backplate with 80mm and 120mm fan which pointing to backplate, not sure if that's improved overall cooling but I dropped just 2-4°C on VRAM with such setup
> 
> Hope this helps
> 
> Thanks, Jura


Ah ok. Yeah if I want to get really high clocks like that, I'll have to wait until it is super cold again and do a few benches. Right now 44-45c is not good for benching. Gaming sure, which is what I'm after mostly.


----------



## jura11

des2k... said:


> what kind of pump system are you running ?
> 
> I was thinking adding external mora with a d5 strong at 24v to my loop.


Hi there 

I would say dual D5 pumps would be more than enough for such a loop, but it depends how many radiators you already have. 

A single D5 Strong will probably struggle, but you can try it. 

I'm running 4x D5 pumps in total (3 pumps at full speed and one D5 at minimum speed, around 1800RPM).

Hope this helps 

Thanks, Jura


----------



## jura11

gfunkernaught said:


> Ah ok. Yeah if I want to get really high clocks like that, I'll have to wait until it is super cold again and do a few benches. Right now 44-45c is not good for benching. Gaming sure, which is what I'm after mostly.


If I do a single bench run, temperatures won't break 33-35°C, usually sitting at 31-32°C; in gaming, temperatures are in the 35-38°C range. 

Such high clocks I'm unable to sustain in Cyberpunk 2077, Control or Battlefield V.

2130-2145MHz is my max stable in Cyberpunk 2077, Control or Battlefield V.


Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> Hi there
> 
> I would say dual D5 pumps would be more than enough for such loop but depending how many radiators you have already
> 
> Single D5 strong probably will struggle but you can try it
> 
> I'm running in total 4*D5 pumps(3 pumps are running at full speed and one D5 is running at minimum speed around 1800RPM)
> 
> Hope this helps
> 
> Thanks, Jura


...as you may recall, I'm running a total of 4x D5s and 5x 360/60 rads in my TR / 2x 2080 Ti system and am a staunch proponent of 2x D5s in series per loop. That said, I've seen several professional workstation builders on YT using _just one_ D5 for complex commercial builds, ie. TR w-cooled, 4x 3090 w-cooled and 1x MoRa360...seems to work fine for them, but I'd rather have two per loop...

...on 3090 VRAM, my Strix seems to like an actual (per GPUz) speed of 1370 MHz, but there is a bit of a range of what is fastest _and_ most efficient, depending on the bench, IMO...


----------



## jomama22

Running the same port royal loop for about 37 min or so @2190/1500 mem gets me this overall:


The GPU power figure is just from a 2.25 multiplier in HWiNFO (4mOhm shunts everywhere on the card). I've compared this to my Kill A Watt to confirm it is within some margin (less than 10w difference at any given time).
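For anyone curious where a 2.25 multiplier comes from: soldering a shunt on top of the stock one lowers the effective resistance, so the controller (which still assumes the stock value) under-reads current and power by stock/effective. A quick check, assuming the stock shunts are the 5 mOhm parts commonly found on these cards:

```python
def parallel(r1, r2):
    """Equivalent resistance of two shunts stacked in parallel."""
    return r1 * r2 / (r1 + r2)

def power_multiplier(stock_mohm, added_mohm):
    """Factor to scale reported power by after a shunt mod.

    The controller still assumes the stock shunt, so the real current
    (and hence power) is under-read by stock / effective resistance.
    """
    return stock_mohm / parallel(stock_mohm, added_mohm)

# 5 mOhm stock with 4 mOhm stacked on top:
print(power_multiplier(5.0, 4.0))  # 2.25, matching the HWiNFO correction
```

5||4 works out to 20/9 ≈ 2.22 mOhm, and 5 / (20/9) is exactly 2.25, which is why the HWiNFO-corrected figure tracks the wall meter so closely.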


----------



## J7SC

jomama22 said:


> Running the same port royal loop for about 37 min or so @2190/1500 mem gets me this overall:
> 
> View attachment 2480931
> 
> 
> the gpu power figure is just from a 2.25 multiplier in hwinfo (4mohm shunts on the card everywhere). Have compared this to my kill-a-watt to confirm it is within some margin (less than 10w difference at any given time).


...nice cooling for both GPU 'front and back' !


----------



## jomama22

J7SC said:


> ...nice cooling for both GPU 'front and back' !


Yeah, this is using a ram waterblock on the back, though it's using whatever crappy 2mm pads ek gives with their blocks. Hadn't ordered a nice set of 2mm pads when I had put it all back together a month ago and genuinely don't really care to take it back apart any time soon lol. Works well enough for what it is. This pic is from a while ago from when I first set it up.


----------



## jura11

J7SC said:


> ...as you may recall, I'm running a total of 4x D5s and 5x 360/60 rads in my TR / 2x 2080 Ti system and am a staunch proponent of 2x D5s in series per loop. That said, I've seen several professional workstation builders on YT using_ just one_ D5 for complex commercial builds, ie. TR w-cooled, 4x 3090 w-cooled and 1x MoRa360...seems to work fine for them, but I rather have two per loop...
> 
> ...on 3090 VRAM, my Strix seems to like actual (per GPUz) speed of 1370 MHz, but there is a bit of a range of what is fastest_and_most efficient, depending on the bench, IMO...


Hi there 

Your TR4 loop with dual 2080 Tis performed great, and yes, I remember your loop👍

Regarding pump and radiator setups, there are people who prefer quiet loops and don't really care about flow rate and GPU temperatures. I built a loop for a friend who is running a 3990X with 4x 2080 Tis; we used a Phanteks Enthoo Primo case with a top 480mm radiator and a bottom 280mm radiator plus a MO-RA3 420mm. With just the top and bottom radiators, an EK 3.2 PWM Elite DDC pump gave a flow rate of 210LPH; adding the MO-RA3 420mm dropped it to 140LPH, so I added an extra D5 pump, which brought the flow rate back up to 190LPH. 

In my case, with a single D5 pump my flow rate is around 75-90LPH because my Aquacomputer Kryos Next is quite a restrictive block; adding an extra pump increases the flow rate to 110LPH, and trying dual 18W DDCs with a single D5 Vario, the flow rate has been in the 130-150LPH range. 

A lot depends on the waterblocks used, as not every block is the same and restrictions can arise with different blocks, like in my case; plus adding 45° and 90° fittings will slow down the flow rate, etc.

I usually target at least 0.5-1GPM for any loop; sometimes one D5 is more than enough, but sometimes you need to add more pumps.

The tests above were done at 22-23°C ambient; at something like 15°C ambient, with a door or window open, temperatures wouldn't break 30-31°C in the few tests I've done lately. 

Regarding the VRAM, at 1250MHz or 1295MHz I couldn't break 15k in Port Royal.

Hope this helps 

Thanks, Jura
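Since the flow rates above are quoted in LPH while the target is in GPM, a quick conversion helper (1 US gallon = 3.785 L, so 1 GPM ≈ 227 LPH):

```python
LPH_PER_GPM = 227.1247  # 3.785412 L per US gallon, times 60 minutes

def lph_to_gpm(lph):
    """Convert litres per hour to US gallons per minute."""
    return lph / LPH_PER_GPM

# the flow rates quoted above, against the 0.5-1 GPM target
for lph in (75, 110, 140, 190, 210):
    print(f"{lph} LPH = {lph_to_gpm(lph):.2f} GPM")
```

By this conversion, even the 210LPH figure is about 0.92 GPM, so all the loops discussed sit inside the 0.5-1GPM target band except the single-D5 75-90LPH case.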


----------



## J7SC

jura11 said:


> Hi there
> 
> Your TR4 loop with dual 2080 Tis performed great there, and yes, I remember your loop👍
> 
> Regarding the pump and radiator setup: there are people who prefer quiet loops and don't really care about flow rate or GPU temperatures. I built a loop for a friend who is running a 3990X with 4x 2080 Ti in a Phanteks Enthoo Primo, with a top 480mm radiator and a bottom 280mm radiator plus a MO-RA3 420. With just the top and bottom radiators and an EK DDC 3.2 PWM Elite pump, the flow rate was around 210 LPH; adding the MO-RA3 420 dropped it to 140 LPH, so I added an extra D5 pump, which brought it back up to 190 LPH.
> 
> In my case, with a single D5 pump my flow rate is around 75-90 LPH because my Aquacomputer Kryos NEXT is quite a restrictive block; adding an extra pump increases the flow rate to 110 LPH. I also tried dual 18W DDCs with a single D5 Vario, and the flow rate has been in the 130-150 LPH range.
> 
> A lot depends on the waterblocks used, as not every block is the same and restriction varies from block to block, like in my case; adding 45° and 90° fittings will also slow down the flow rate, etc.
> 
> I usually target at least 0.5-1 GPM for any loop; sometimes one D5 is more than enough, but sometimes you need to add more pumps.
> 
> The tests above were done at 22-23°C ambient; at something like 15°C ambient, with a door or window open, temperatures wouldn't break 30-31°C in the few tests I've done lately.
> 
> Regarding the VRAM, at 1250 MHz or 1295 MHz I couldn't break 15k in Port Royal.
> 
> Hope this helps
> 
> Thanks, Jura


...yeah, the TR 2950X / 2x 2080 Ti CFR system is still in full service...this new 3950X/3090 Strix build is also a productivity-fun hybrid, like the other one (but: 24 GB VRAM!), just in a different part of our home offices.

...for Port Royal > 15,000, my VRAM is in the 1360-1387 MHz range, though I actually haven't tried higher VRAM for that yet as the card is still stock air-cooled...also, I wonder how w-cooling will change that (if at all), w/ a modded backplate also in the cards... @jomama22 's posted VRAM temps are a great inspiration!


----------



## itssladenlol

Just got myself a 3090 Strix and will waterblock it.
What's the best BIOS other than the 1000W one?
The 520W Kingpin BIOS, or something else?


----------



## jomama22

J7SC said:


> ...yeah, the TR 2950X / 2x 2080 Ti CFR system is still in full service...this new 3950X/3090 Strix build is also a productivity-fun-hybrid, like the other one (but: 24 GB VRAM !), just in a different part of our home-offices.
> 
> ...for PortRoyal > 15,000, my VRAM is in the 1360-1387 MHz range, though actually haven't tried higher VRAM for that yet as the card is still stock air-cooled...also, I wonder how w-cooling will change that (if any), w/ a modded backplate also in the cards... @jomama22 's posted VRAM temps are a great inspiration !


For the EK Strix backplate, I drilled and tapped 4 holes for the screws for the RAM block so I could attach it without using thermal pads/tape (also since the GPU is in a vertical position). The other mod I did was to drill a hole to run a screw through the backplate to connect to the block. The EK Strix backplate will bend when fully fastened down and cause the top half of the rear VRAM chips to make little to no contact with the pads (I also needed to use 2mm pads, not the 1.5mm stated in the EK instructions). I also used a 2mm plastic spacer on the new screw between the PCB and backplate so as not to overtighten and bend the backplate. Below, you can see where I drilled the hole and where it goes through the PCB to the block. Without this, contact was abysmal.


----------



## Falkentyne

itssladenlol said:


> Just got myself a 3090 strix and will waterblock it.
> Whats the best bios except 1000w bios?
> The 520W kingpin bios or something else?


The 1000W Kingpin BIOS will allow unlimited power and MSVDD limits (up to a point), more than all the other BIOSes, but it might score lower in benchmarks at something like 500W TDP (50%).
The Asus Strix XOC2 BIOS (XOC1 is garbage) also works, but someone at Asus forgot to change the SRC3 power limit, so 8-pin #3 can't exceed 175W and will throttle you. You might still score better than with the Kingpin 520W BIOS (I don't have either card, maybe I'm talking out of my butthole).


----------



## itssladenlol

Falkentyne said:


> 1000W Kingpin bios will allow unlimited power and MSVDD limits (up to a point), more than all the other bioses, but it might score lower benchmarks at something like 500W TDP (50%).
> Asus Strix XOC2 Bios (XOC1 is garbage) works also but someone at Asus forgot to change the SRC3 power limit, so this limits 8 pin #3 from exceeding 175W, so this will throttle you. But you might still score better than the Kingpin 520W Bios (I don't have either card, maybe I'm talking out of my butthole).


I can't find the Asus XOC BIOS?
Googled and looked on TechPowerUp.
Any link?
I know most BIOSes but never heard of that one


----------



## J7SC

jomama22 said:


> For the EK Strix backplate, I drilled and tapped 4 holes for the screws for the RAM block so I could attach it without using thermal pads/tape (also since the GPU is in a vertical position). The other mod I did was to drill a hole to run a screw through the backplate to connect to the block. The EK Strix backplate will bend when fully fastened down and cause the top half of the rear VRAM chips to make little to no contact with the pads (I also needed to use 2mm pads, not the 1.5mm stated in the EK instructions). I also used a 2mm plastic spacer on the new screw between the PCB and backplate so as not to overtighten and bend the backplate. Below, you can see where I drilled the hole and where it goes through the PCB to the block. Without this, contact was abysmal.
> 
> View attachment 2480962
> 
> View attachment 2480963
> 
> View attachment 2480965


Thanks so much, very informative, bookmarked it. I initially plan to use the OEM Strix backplate (w/ thermal-taped coolers added) until I either mod a custom one like you have, or know more about the EK w-cooled 'active' backplate they announced recently. If that takes too long, I'll just get the EK Strix custom backplate you showed.


----------



## Falkentyne

itssladenlol said:


> I cant find the Asus xoc bios?
> Googled and looked at Techpowerup.
> Any link?
> I know Most bioses but never heard of that one


Rename to *.ROM


----------



## jura11

Personal best with KPE XOC BIOS









I scored 15 077 in Port Royal (AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10) - www.3dmark.com





Hope this helps

Thanks, Jura


----------



## itssladenlol

Falkentyne said:


> Rename to *.ROM


Thanks.
What's the power limit of this BIOS, and where the hell did you find it?


----------



## jomama22

itssladenlol said:


> Thanks.
> Whats the PowerLimit of this Bios and where the Hell you found that?


Elmor posted it in his Discord. The power limit is irrelevant (it's 999W). You'll be maxed out around 575W or so, depending on the PCIe plugs' wattage.


----------



## itssladenlol

jomama22 said:


> Elmor posted it in his Discord. The power limit is irrelevant (it's 999W). You'll be maxed out around 575W or so, depending on the PCIe plugs' wattage.


And it doesn't have the 1000W memory bug?
Thermal protections active?
I mean, everything works normally like it should?
Sounds perfect.
Not benching, just gaming.


----------



## J7SC

jomama22 said:


> Elmor posted it in his Discord. The power limit is irrelevant (it's 999W). You'll be maxed out around 575W or so, depending on the PCIe plugs' wattage.


Re. your comment on PCIe plug wattage w/ Elmor's XOC Strix BIOS, below is an earlier 'stock Strix BIOS' screenie I took a week++ ago - do you see anything there that looks 'odd', i.e. in regard to power distribution and potentially using a higher-PL BIOS down the line once I'm comfy with the water setup and temps? I used to wonder about the PCIe slot power as it always stays between 46W and 48.x W max, but that seems to be a good thing. I switched PSUs from a Corsair AX1200 to an Antec HPC1300 since those runs, but the numbers look very similar.

Also, with the latest additions to GPU-Z, there are now three temps (GPU, Hotspot, VRAM)...does anyone know whether the 'Hotspot' temp is specific to either the GPU or the VRAM, or simply the highest temp anywhere on the PCB?


----------



## KedarWolf

jomama22 said:


> Elmor posted it in his Discord. The power limit is irrelevant (it's 999W). You'll be maxed out around 575W or so, depending on the PCIe plugs' wattage.


If my Unify-X motherboard has an extra PCI-e 6 pin power connector on the bottom of the motherboard designed to feed power into the video card PCI-e slot, it should help max out voltages, right?


----------



## J7SC

KedarWolf said:


> If my Unify-X motherboard has an extra PCI-e 6 pin power connector on the bottom of the motherboard designed to feed power into the video card PCI-e slot, it should help max out voltages, right?


...just my 2 cents on that as a quick 'fyi': the earlier motherboard the Strix 3090 was in also had additional connectors for PCIe power, as it could theoretically do 4x SLI...when I plugged that into the PSU, back-to-back runs with the Strix showed no difference in power delivery/consumption whatsoever.


----------



## KedarWolf

J7SC said:


> ...just my 2 cents on that as a quick 'fyi': The earlier motherboard the Strix 3090 was in initially also had additional connectors for PCIe power as it could theoretically do 4x SLI...when I plugged that into the PSU, back-to-back runs with the Strix showed no difference in power delivery/ consumption whatsoever


Thank you.


----------



## EarlZ

Sold my 3080 Master Rev2 and got the MSI Suprim X 3090. Just installed the card and ran Superposition to see that everything works. Wish they gave us more than the 450W power limit with an official BIOS.

EDIT:

Gaming BIOS, maxed power slider. This card clocks very conservatively when gaming: 1890-1920 MHz. My Gigabyte 3080 would do 1995-2040 MHz.


----------



## gfunkernaught

jura11 said:


> Personal best with KPE XOC BIOS
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 077 in Port Royal (AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10) - www.3dmark.com
> 
> 
> 
> 
> 
> Hope this helps
> 
> Thanks,Jura


How come the clock speed looks like it isn't being reported correctly in those results?


----------



## jomama22

J7SC said:


> Re. your comment on PCIe plug wattage w/ Elmor's XOC Strix BIOS, below is an earlier 'stock Strix BIOS' screenie I took a week++ ago - do you see anything there that looks 'odd', i.e. in regard to power distribution and potentially using a higher-PL BIOS down the line once I'm comfy with the water setup and temps? I used to wonder about the PCIe slot power as it always stays between 46W and 48.x W max, but that seems to be a good thing. I switched PSUs from a Corsair AX1200 to an Antec HPC1300 since those runs, but the numbers look very similar.
> 
> Also, with the latest additions to GPU-Z, there are now three temps (GPU, Hotspot, VRAM)...does anyone know whether the 'Hotspot' temp is specific to either the GPU or the VRAM, or simply the highest temp anywhere on the PCB?
> 
> View attachment 2480985


Yeah, the PCIe slot will always stay below ~65W or so; I've never seen it above that, even when shunted. Power looks fine. I think I remember you showing that to me before, and you have the V2 BIOS if I remember correctly.

The 575W limit I mentioned above was just the 50W from the PCIe slot + max draw from each plug at 175W, so it would probably pull ~600W at optimal balance with overshoot.

You only really need to be concerned if you see the power limit become the restrictor on clocks/performance in HWiNFO/GPU-Z.
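The arithmetic behind that 575W figure can be written out explicitly. A minimal sketch using this post's numbers (the 175W per-rail cap and ~50W slot draw are the thread's figures, not a published spec):

```python
# Back-of-the-envelope board power budget for a three-connector 3090.
PCIE_SLOT_W = 50        # typical observed slot draw (spec ceiling is 75 W)
PER_8PIN_CAP_W = 175    # per-rail SRC limit discussed in the thread
NUM_8PIN = 3            # the Strix 3090 has three 8-pin plugs

budget_w = PCIE_SLOT_W + NUM_8PIN * PER_8PIN_CAP_W
print(budget_w)  # 575
```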


----------



## Falkentyne

J7SC said:


> Re. your comment on PCIe plug wattage w/ Elmor's XOC Strix BIOS, below is an earlier 'stock Strix BIOS' screenie I took a week++ ago - do you see anything there that looks 'odd', i.e. in regard to power distribution and potentially using a higher-PL BIOS down the line once I'm comfy with the water setup and temps? I used to wonder about the PCIe slot power as it always stays between 46W and 48.x W max, but that seems to be a good thing. I switched PSUs from a Corsair AX1200 to an Antec HPC1300 since those runs, but the numbers look very similar.
> 
> Also, with the latest additions to GPU-Z, there are now three temps (GPU, Hotspot, VRAM)...does anyone know whether the 'Hotspot' temp is specific to either the GPU or the VRAM, or simply the highest temp anywhere on the PCB?
> 
> View attachment 2480985


The Strix XOC1 BIOS didn't increase the SRC power limits for ANY of the 8-pins (the 8-pin power limit is NOT important, it's the SRC that's important!!!).
SRC 1/2/3 at 175W limits each corresponding 8-pin to 175W, so you throttle. The power limit obviously can't get close to 1000W.

The Strix XOC2 BIOS increased the SRC1 and SRC2 limits to something like 475W (I forget), but they forgot to change SRC3, so you will STILL throttle when 8-pin #3 reaches 175W, and due to power balancing, you aren't going to be able to avoid that without shunt modding that pin.

Might as well shunt all the shunts if you're going to do that anyway...then you won't need the XOC2 BIOS, UNLESS the XOC2 BIOS has a larger MSVDD power limit (which can only be tested for, not shown in editors) without increasing MSVDD voltage.


----------



## jomama22

KedarWolf said:


> If my Unify-X motherboard has an extra PCI-e 6 pin power connector on the bottom of the motherboard designed to feed power into the video card PCI-e slot, it should help max out voltages, right?


Honestly, it won't really matter at the power we're drawing, but there's no reason not to use it if you have it. The PCIe slot itself would be the only worry once you start pulling over 100W from it. The 24-pin's 12V lines would be fine even if the single card were pulling 120W+.
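A rough check of why the slot is the worry and the 24-pin is not: current on a 12V rail is just I = P / V, and the PCIe CEM spec only allows a x16 graphics slot about 5.5 A (~66 W) on +12V, while the 24-pin carries 12V over several conductors. A minimal sketch (the 100W and 120W figures are the post's):

```python
# Current drawn at 12 V for a given power draw: I = P / V.
def amps_at_12v(power_w: float) -> float:
    return power_w / 12.0

SLOT_12V_AMP_LIMIT = 5.5  # PCIe CEM allowance for a x16 graphics slot (~66 W)

print(amps_at_12v(100))   # ~8.3 A through the slot: past the allowance
print(amps_at_12v(120))   # 10 A via the 24-pin: shared across several 12 V pins
```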


----------



## schoolofmonkey

The EVGA RTX 3090 XC3 is an underwhelming card.
It hits power limits running stock clocks.
I stuck a Hybrid kit on it; temps never go over 55°C (84°C VRAM), yet even with the power slider increased it'll power-limit and drop down to 1725 MHz, same if I do +150 on the core.
It does hit over 1900 MHz in Cyberpunk and a few other titles; 3DMark (Port Royal/Time Spy Extreme) not so much.

From what I understand it has a stock 366W limit. I did flash it with the FKA2 390W BIOS, but it didn't really make much of a difference overall.
EVGA's Jacob said it's due to the card's "hardware limitations" and doesn't recommend flashing a higher-wattage BIOS.
EVGA won't even do a 370W BIOS for it, so I don't know.

Shame really; the card stays well under 60°C with the Hybrid kit but doesn't perform any better than with the stock cooler clock-wise, maintaining the stock 1725 MHz depending on the load.


----------



## J7SC

jomama22 said:


> Yeah, pcie slot will always stay below ~65w or so. Never seen it above even when shunted. Power looks fine, I think I remember you showing that to me before and you have the V2 bios if I remember correctly.
> 
> The 575w limit I said above was just the 50w from pcie + max draw from each plug @175w, so it would probably pull 600w at optimal balance and overshoot.
> 
> You only really need to be concerned if you are seeing the power limit become the restrictor on clocks/performance in hwinfo/gpu-z.


Thanks  ...yeah, it is the V2 BIOS per our earlier posts, and with up to 500W per the above already on the stock BIOS and no outliers on the power-delivery front, I'm thinking about going up to 550-575W via the Strix XOC with the extensive custom cooling I'm building. The w-cooling parts I ordered yesterday from three suppliers are now on their way, with expected delivery March 4th-14th (...if all goes well, as they say).

Even 500W is a lot for air cooling to handle, as good as the Strix stock setup is (albeit at the price of weight and sheer size, like similar cards from other vendors). Also, I can't shunt mod this card as it is used for productivity (work) as well as play, like all the other systems here. That won't stop me from running more than a few benches (also part of play for me), but overall it has to work reliably 24/7/365, and 550-575W sounds just about right to me...


----------



## jura11

gfunkernaught said:


> How come the clock speed looks like it isn't being reported correctly in those results?


No idea; I already contacted 3DMark support, and many people are getting similar errors. I literally have no idea what is going on with this.

A few earlier 3DMark versions reported everything correctly, but the latest couple of versions don't really work for me.

Hope this helps

Thanks, Jura


----------



## bmgjet

schoolofmonkey said:


> The EVGA RTX 3090 XC3 is an underwhelming card.
> It hits power limits running stock clocks.
> I stuck a Hybrid kit on it; temps never go over 55°C (84°C VRAM), yet even with the power slider increased it'll power-limit and drop down to 1725 MHz, same if I do +150 on the core.
> It does hit over 1900 MHz in Cyberpunk and a few other titles; 3DMark (Port Royal/Time Spy Extreme) not so much.
> 
> From what I understand it has a stock 366W limit. I did flash it with the FKA2 390W BIOS, but it didn't really make much of a difference overall.
> EVGA's Jacob said it's due to the card's "hardware limitations" and doesn't recommend flashing a higher-wattage BIOS.
> EVGA won't even do a 370W BIOS for it, so I don't know.
> 
> Shame really; the card stays well under 60°C with the Hybrid kit but doesn't perform any better than with the stock cooler clock-wise, maintaining the stock 1725 MHz depending on the load.



It's a card that shouldn't have been made.
At stock, you have good overclocked 3080s beating it in benchmarks; 366W isn't enough for a 3090.

Either way, mine's been running well for a long time now at 500W using the XOC 1000W BIOS.


----------



## schoolofmonkey

bmgjet said:


> It's a card that shouldn't have been made.
> At stock, you have good overclocked 3080s beating it in benchmarks; 366W isn't enough for a 3090.
> 
> Either way, mine's been running well for a long time now at 500W using the XOC 1000W BIOS.


I'm a little ticked because of the price. There was no way I could get my hands on a 3080 and I needed a GPU; my son's laptop died and he needed a complete system for a uni course, so he got my 9900K/2080 Ti.
But yeah, you're getting high-end RTX 3080 performance for nearly twice the RRP (we won't look at the stupid eBay pricing).

I love how quiet the card is (fans never get to 50%), but the performance isn't worth the price.

I'm just a little worried about going too high on the card's wattage and having it die, with no warranty.
I've actually found the Hybrid Kit BIOS worse than the stock BIOS for clock reductions; you'd think it would be better seeing the GPU temps are fantastic.


----------



## gfunkernaught

schoolofmonkey said:


> I'm a little ticked because of the price. There was no way I could get my hands on a 3080 and I needed a GPU; my son's laptop died and he needed a complete system for a uni course, so he got my 9900K/2080 Ti.
> But yeah, you're getting high-end RTX 3080 performance for nearly twice the RRP (we won't look at the stupid eBay pricing).
> 
> I love how quiet the card is (fans never get to 50%), but the performance isn't worth the price.
> 
> I'm just a little worried about going too high on the card's wattage and having it die, with no warranty.
> I've actually found the Hybrid Kit BIOS worse than the stock BIOS for clock reductions; you'd think it would be better seeing the GPU temps are fantastic.


If you're worried about high wattage on this card, just use the stock BIOS and don't overclock it. The fact that it's quiet probably means it's running on the cooler side, which also means fewer watts. Did you buy the card new? It should have EVGA's lifetime warranty. Even still, let's say you RMA the card (hopefully not): they will send you a refurb IF they even have one, and IF that refurb is in good shape. That sucks, man.


----------



## schoolofmonkey

gfunkernaught said:


> If you're worried about high wattage on this card, just use the stock bios and don't overclock it. The fact that it is quiet, probably means it is running on the cooler side, which also means, less watts. Did you buy the card new? It should have EVGA's lifetime warranty. Even still though, lets say you RMA the card (hopefully not), they will send you a refurb IF they even have one. IF that refurb is in good shape. That sucks man.


It'd hit 80°C with the stock cooler; I stuck an AIO Hybrid Kit on it, so that's why it's cool. The card's brand new, bought from an Australian retailer.
I'm not worried about the wattage on the card, I'm sure it'll take it, but the problem is, if the card dies to the point where I can't reflash the stock BIOS, you're out of luck.
I don't see it happening; I've never had a GPU die, only DOAs, which is why I run my gear stock for the first week.

I figure EVGA doesn't really want you doing it to get better performance, so they're telling people the card's hardware won't take the extra power.


----------



## long2905

EarlZ said:


> Sold my 3080 Master Rev2 and got the MSI Suprim X 3090. Just installed the card and ran Superposition to see that everything works. Wish they gave us more than the 450W power limit with an official BIOS.
> 
> EDIT:
> 
> Gaming BIOS, maxed power slider. This card clocks very conservatively when gaming: 1890-1920 MHz. My Gigabyte 3080 would do 1995-2040 MHz.


You can't compare clocks on different cards like that; the 3080 has fewer CUDA cores and will be able to clock higher.


----------



## AndreTM

Hey guys, I recently watercooled my RTX 3090 Strix OC.
With no OC, the card boosts in the 1860-1980 MHz range, and it proved stable in all games I tested except one: The Medium (the most demanding game I've played on this card). I also passed Unigine Heaven and Furmark without issues.

I updated the BIOS with the new one provided by ASUS (V2 BIOS) but didn't notice changes; sometimes when I play The Medium my PC freezes.

Do you have any suggestions? Last night I tried playing with the power limit and raised it to 123%. Clocks were higher while gaming, and I noticed less fluctuation in the GPU clocks.
Thanks


----------



## Gebeleisis

What PSU do you have ?
What temps do you get ?


----------



## AndreTM

Gebeleisis said:


> What PSU do you have ?
> What temps do you get ?


EVGA 1200P2
Temps max around 45 °C

Thank you!


----------



## Gebeleisis

I would look into the cables that you use from PSU to GPU


----------



## AndreTM

Gebeleisis said:


> I would look into the cables that you use from PSU to GPU


Thanks for replying, Gebeleisis.
OK, but that would be odd; I made three 8-pin custom 15 AWG cables for the GPU and successfully passed intensive tests like Furmark without any issue.


----------



## EarlZ

I've seen some reviews of the MSI Suprim X with core speeds that hit 2 GHz, but how are they doing that when 450W of power usage is easily reached even at stock speeds?

Is there a BIOS I can use to raise the stock limit to a higher value? I won't be able to do hard mods due to warranty.


----------



## Rhadamanthys

A bit confused atm: is there another 1000W BIOS that takes care of the memory idle bug, or not?


----------



## truehighroller1

EarlZ said:


> I've seen some reviews of the MSI Suprim X with core speeds that hit 2 GHz, but how are they doing that when 450W of power usage is easily reached even at stock speeds?
> 
> Is there a BIOS I can use to raise the stock limit to a higher value? I won't be able to do hard mods due to warranty.



I have the Suprim X and I'm able to use the KP XOC BIOS without shunt mods; I got this last night in TS and TS Extreme.

I got myself a new 10940X CPU and am very happy with it so far.









I scored 21 670 in Time Spy (Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com













I scored 11 404 in Time Spy Extreme (Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com





I'll push it further tonight. That was 1.3 V, all cores locked in at 5 GHz.


----------



## EarlZ

truehighroller1 said:


> I have the suprim x and I'm able to use the kp xoc bios without shunt mods and got this last night on ts and ts extreme.
> 
> Got me a new 10940x cpu and am very happy with it so far.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 670 in Time Spy (Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 11 404 in Time Spy Extreme (Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> 
> 
> 
> 
> I'll push it further tonight. That was 1.3v all cores locked in at 5ghz.


Water or stock air cooling?


----------



## truehighroller1

EarlZ said:


> Water or stock air cooling?



These below were on air. The ones I showed you from last night were on a Bitspower waterblock.









I scored 10 564 in Time Spy Extreme (Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com













I scored 21 127 in Time Spy (Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com





If you create a custom fan curve, the card stays way cooler.

I've recorded spikes up to 765W using the XOC BIOS. Our card has fuses that will go poof at 800W, so set a 70-75% power limit in MSI Afterburner.
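The slider math behind that recommendation: with the 1000W XOC BIOS, Afterburner's power-limit percentage is applied to 1000W, so 70-75% keeps the sustained cap at 700-750W, under the ~800W fuse figure quoted above. A minimal sketch (the fuse number is the post's claim, not a datasheet value):

```python
# Afterburner power slider as a fraction of the BIOS power limit.
BIOS_LIMIT_W = 1000
FUSE_W = 800  # figure quoted in the post

def effective_limit_w(slider_pct: float) -> float:
    return BIOS_LIMIT_W * slider_pct / 100.0

for pct in (70, 75, 80):
    limit = effective_limit_w(pct)
    print(f"{pct}% -> {limit:.0f} W", "(under fuse)" if limit < FUSE_W else "(at/over fuse)")
```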


----------



## gfunkernaught

schoolofmonkey said:


> It'd hit 80°C with the stock cooler; I stuck an AIO Hybrid Kit on it, so that's why it's cool. The card's brand new, bought from an Australian retailer.
> I'm not worried about the wattage on the card, I'm sure it'll take it, but the problem is, if the card dies to the point where I can't reflash the stock BIOS, you're out of luck.
> I don't see it happening; I've never had a GPU die, only DOAs, which is why I run my gear stock for the first week.
> 
> I figure EVGA doesn't really want you doing it to get better performance, so they're telling people the card's hardware won't take the extra power.


Could also be EVGA not wanting to honor RMAs from people who burn their cards out with irresponsible BIOS flashing. It's not like they have stock to replace these cards, and I'm sure trying to repair/refurb a burnt card is time-consuming with little return for doing it.
My Trio with the stock cooler never exceeded 68°C, but that's because my case has fans everywhere. How is your case airflow?


----------



## jomama22

J7SC said:


> Thanks  ...yeah, it is V2 bios per our earlier posts, and with up to 500 W per above already on stock bios and no outliers on the power deliver front, I'm thinking about going up to 550W - 575W via Strix XOC with the extensive custom cooling I'm building up. W-cooling parts from three suppliers ordered yesterday are now on their way with expected delivery from March 4th - 14th (...if all goes well, as they say).
> 
> Even 500 W is a lot for air-cooling to handle, as good as the Strix stock setup is (albeit at the price of weight and sheer size, like similar cards by other vendors). Also, I can't shunt mod this card as it is used for productivity (work) as well as play like all the other systems here. That won't stop me from running more than a few benches (also part of play for me) but overall, it has to reliably work 24/7/365 and 550 W - 575 W sounds just about right to me...


Honestly, shunt modding isn't really a threat to the longevity of the card, especially the Strix, which is beefed up. It's easy enough to do even for inexperienced solderers (I guess that's a word, lol). You can always tune down the power limit in Afterburner and whatnot when you don't need the extra headroom.

If you plan on keeping the card for a long time, it's worth doing.

You can also just use the dual BIOS on the Strix to keep the stock BIOS in one position and the XOC BIOS in the other, if that's more your cup of tea.


----------



## jomama22

Rhadamanthys said:


> Bit confused atm, is there another 1000W bios that takes care of the memory idle bug or not?


It's not a bug, it's a feature of the BIOS. There's no reason to clock the memory down when running LN2; in fact, it's better off running full tilt, as most memory modules become unstable/hit cold bugs when going too low. This is why VRAM heaters exist for LN2 overclocking.

Anyway, to answer the question: no, there isn't.


----------



## gfunkernaught

"Problems do have solutions you know..." 
Got rid of that pointless MSI support bracket and used good old wood.


----------



## Alex24buc

I sold my Palit GamingPro OC and bought the GameRock OC version for 300 euros more. Do you think I made a good deal? I gained a few more fps in games with the 470W limit. Also, the old card was 5 months old and this one is new. Thanks.


----------



## itssladenlol

@Falkentyne
The XOC2 BIOS you gave me for the 3090 Strix: does it have all protections in place, and does the memory clock down at idle?
I know the EVGA 1000W BIOS has these problems, but does the Strix XOC2 BIOS have them too? (I know it has other problems, like rail #3's locked wattage.)


----------



## des2k...

itssladenlol said:


> @Falkentyne
> The XOC2 Bios you gave me for the 3090 strix, does it have all protections in place and does the memory clock down in idle?
> I know the evga 1000w bios does have this Problems, but does the strix XOC2 Bios have the same Problems? (I know it has other Problems like rail #3 locked wattage)


Mem at full speed while idle is only 30W tops; not sure why this is a big deal.


----------



## J7SC

jomama22 said:


> Honestly, shunt modding isn't really a threat to the longevity of the card, especially the strix which is beefed up. It's easy enough to do even for inexperienced solderers (I guess that is a word lol). You can always tune down the power limit using afterburner and what not when you don't need the extra headroom.
> 
> If you plan on keeping the card for a long time it's worth doing.
> 
> You can also just use the dual bios on the strix to have the stock bios in one position and then the xoc bios on the other if that's more your cup of tea.


Thanks, yeah, the dual-BIOS option (w/ one slot for the updated XOC Strix) is what I'm going to go for. FYI, re. productivity-hybrid use, this and the other cards are part of a business purchase inventory and marked in as such; that's why I put limits on myself...in the bad old days of HWBot, I hard-modded all kinds of GPUs...most even survived...for the select few that didn't, a minute of silence, please (I miss that 290X Lightning)

With 500W already on the stock Strix, I can't complain anyhow...when I (re)posted the GPU-Z tab with power consumption, it was because, in my naivete, I thought I should see a nicely balanced 75W // 150W/150W/150W max w/ overshoot...obviously, it all depends on the card's internal PCB controllers, tracing, NV Boost, etc., though for the record, both my w-cooled Aorus 2080 Tis almost always pull equal amounts per PCIe connector. With the upcoming water-cooling config, I'll flash one of the BIOSes on the Strix to Elmor's XOC Strix (perhaps updated to version 3 by then), or the KPE 520W.


----------



## TheNaitsyrk

Hi there, sorry I'm jumping in a little, but I'm curious, as I want to flash my 3090 FTW3 with the 1000W BIOS and am not sure if it's safe. I have 4x 480mm XE rads hooked up to the 3090 FTW3 Ultra, 16x 120mm 3000 RPM Noctua fans, and two 3000 RPM 140mm fans above the GPU.

Is it safe to flash the Kingpin 1000W BIOS onto the FTW3 Ultra, and if so, how is it done? Could someone explain exactly what I need and what to do, or link an EXACT guide?

I would appreciate it very much.


----------



## jura11

If you are asking whether it's safe to flash such a BIOS on the FTW3, we don't know; the FTW3 does have a higher failure rate, if you check a few forums, including the EVGA forum, regarding FTW3 failure rates.

If I were flashing it, I wouldn't risk it on yours if you want to keep the option to RMA if it fails. I use it on my RTX 3090 GamingPro, which is a 2x 8-pin GPU, and it works quite well.

Your radiator setup should cope with that quite easily; I can keep my RTX 3090, which pulls 600W, at 33-36°C with VRAM at 60°C.


Hope this helps

Thanks, Jura


----------



## Dan848

I suppose this is the "bragging rights" section?


----------



## TheNaitsyrk

jura11 said:


> If you are asking whether it's safe to flash such a BIOS on the FTW3, we don't know; the FTW3 does seem to have a higher failure rate. Check a few forums, including the EVGA forum, regarding FTW3 failure rates.
> 
> If it were me, I wouldn't risk it on yours if you want to keep the option to RMA it if it fails. I use it on my RTX 3090 GamingPro, which is a 2x 8-pin GPU, and it works quite well.
> 
> Your radiator setup should cope with that quite easily; I can keep my RTX 3090, which is pulling 600W, at 33-36°C with VRAM at 60°C.
> 
> 
> Hope this helps
> 
> Thanks, Jura



Failure rates? Based on what? What I mean is, what are the causes? I've seen power delivery mentioned, and such.

If it's true I'll just keep my 500W BIOS.

Thanks


----------



## jura11

TheNaitsyrk said:


> Failure rates? Based on what? What I mean is, what are the causes? I've seen power delivery mentioned, and such.
> 
> If it's true I'll just keep my 500W BIOS.
> 
> Thanks


I would ask @Falkentyne, he knows more about that than I do.

I think it's down to the power delivery of the FTW3 and the fuse. Have a look; it's literally up to you whether you want to flash the KPE XOC BIOS. I'm using that BIOS on my RTX 3090 GamingPro with no issues.

Hope this helps 

Thanks, Jura


----------



## schoolofmonkey

gfunkernaught said:


> Could also be evga not wanting to honor RMAs from people that burn their cards out from irresponsible bios flashing. It's not like they have stock to replace these cards. And I'm sure that trying to repair/refurb a burnt card is time consuming and there really isn't any return for doing that.
> My trio with the stock cooler never exceeded 68c but that is because my case has fans everywhere. How is your case air flow?


Who knows, but yes, it is better than the 2080 Ti, just really low on the 3090 performance ladder.
My case has plenty of airflow: O-11 Dynamic, 3 bottom intakes, 3 side-mounted pulling air through the rad, then the Hybrid's rad top-mounted, exhausting.
I've noticed a lot of lower-tier 2x 8-pin cards do have slightly underperforming coolers. I had an Inno3D previously; even with the side glass off and the aircon blowing at the machine it would still hit 84C and downclock badly. I was told that was normal for those cards by the retailer after they tested two of them.
This EVGA card never goes over 55C, and I could go lower if I really wanted by increasing the fan curve. The VRAM is always warm, but don't all 3090s suffer the same problem? With the little 92mm fan on the back of the card the VRAM never goes over 84C.
Even with the test flash of a 390W BIOS, temps didn't change; it did keep a higher base clock rate, but that only showed as a 2-4fps increase.

If I could have gotten my hands on the Kingpin I would have, but good luck with that: all the local retailers sold out in seconds, and EVGA doesn't ship to Australia.


----------



## gfunkernaught

Dan848 said:


> I suppose this is the "bragging rights" section?


----------



## Lobstar

gfunkernaught said:


> View attachment 2481092


I finally dumped the XC3 bios on my FTW3U and was able to get these results on air (with the window open).

I scored 14 497 in Port Royal
22,343 (graphics only)
I scored 21 095 in Time Spy
AMD Ryzen 9 3950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## gfunkernaught

Lobstar said:


> I finally dumped the XC3 bios on my FTW3U and was able to get these results on air (with the window open).
> 
> I scored 14 497 in Port Royal
> 22,343 (graphics only)
> I scored 21 095 in Time Spy
> AMD Ryzen 9 3950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Nice! So you put a lower power bios on your card?
Just for reference, these are my best PR results so far: on water, with the XOC BIOS on my Trio and 50% power limit, I think (it was a while ago). I should probably start putting my OC settings in the comments of each bench result.

I scored 14 950 in Port Royal
Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Lobstar

gfunkernaught said:


> Nice! So you put a lower power bios on your card?
> Just for reference, these are my best PR results so far: on water, with the XOC BIOS on my Trio and 50% power limit, I think (it was a while ago). I should probably start putting my OC settings in the comments of each bench result.
> 
> I scored 14 950 in Port Royal
> Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


I have a heavily power limited card (FTW3U but this is a common problem across many of these cards, specifically EVGA 3x 8-pin powered units) that won't go past 420ish watts on the XOC 500w bios. Once it hits like 81w on the PCIE power it stays power limited. @arestavo has also had similar results I believe. For reference, my best average clock before running that bios was 1946mhz. I didn't spend too much time tweaking it on the XC3 bios to get those results. I did, however, get some pretty pink space invaders at the desktop from time to time so I'm not running it as an every-day thing.


----------



## schoolofmonkey

Lobstar said:


> I have a heavily power limited card (FTW3U but this is a common problem across many of these cards, specifically EVGA 3x 8-pin powered units) that won't go past 420ish watts on the XOC 500w bios. Once it hits like 81w on the PCIE power it stays power limited. @arestavo has also had similar results I believe. For reference, my best average clock before running that bios was 1946mhz. I didn't spend too much time tweaking it on the XC3 bios to get those results. I did, however, get some pretty pink space invaders at the desktop from time to time so I'm not running it as an every-day thing.


I'm a little confused: what XC3 BIOS are you using? I have an RTX 3090 XC3 Ultra card and my BIOS only allows 366W of power before hitting the power limit, then dropping clocks to compensate.


----------



## Lobstar

schoolofmonkey said:


> I'm a little confused: what XC3 BIOS are you using? I have an RTX 3090 XC3 Ultra card and my BIOS only allows 366W of power before hitting the power limit, then dropping clocks to compensate.


I downloaded this exact bios. EVGA RTX 3090 VBIOS

Your card is a 2x8-pin, mine is a 3x8-pin. Somehow it fixes the power balancing on these FTW3/U models. /shrug lol EVGA done ****ed up this round.


----------



## schoolofmonkey

Lobstar said:


> I downloaded this exact bios. EVGA RTX 3090 VBIOS
> 
> Your card is a 2x8-pin, mine is a 3x8-pin. Somehow it fixes the power balancing on these FTW3/U models. /shrug lol EVGA done ****ed up this round.


Yeah, because the BIOS only allows 366W of power draw, and that's split across the PCIe slot and the 8-pin connectors.
If I put on a higher-wattage BIOS it'll draw that amount of power, it just doesn't make any real difference to performance, whereas your FTW3, I'd say, has better silicon, hence the higher default clocks.

Either way, I agree EVGA really messed up with these cards; it's like a deliberate gimping of the XC3's hardware to sell the higher-tier cards.


----------



## bmgjet

Using a 3-plug BIOS on a 2-plug card will = less power, since you don't have a 3rd plug taking in power; the BIOS just mirrors plug 1's reading for it.
So a 500W 3-plug BIOS = a 335W PL on a 2-plug card.

And it works the opposite way using a 2-plug BIOS on a 3-plug card, since the 3rd plug becomes unmetered, and all the 2-plug BIOSes have a higher slot power limit.
So the XC3 2-plug 366W BIOS = 490W.
The KFA2 390W 2-plug BIOS = 520W.
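A quick sketch of that arithmetic (the per-plug wattages below are rough back-calculations from the figures in this thread, not official EVGA/KFA2 rail budgets):

```python
# Illustrative arithmetic for mixed-BIOS power limits, per the post above.
# plug draw values are assumptions inferred from the quoted examples.

def three_plug_bios_on_two_plug_card(bios_limit_w, plug1_draw_w):
    # The missing 3rd plug's sensor mirrors plug 1, so plug 1's draw is
    # counted twice; the BIOS "sees" its limit while the card really draws less.
    return bios_limit_w - plug1_draw_w

def two_plug_bios_on_three_plug_card(bios_limit_w, extra_plug_draw_w):
    # A 2-plug BIOS has no budget for the 3rd plug, so its draw is
    # unmetered and lands on top of the BIOS limit.
    return bios_limit_w + extra_plug_draw_w

print(three_plug_bios_on_two_plug_card(500, 165))  # 335 (500W XOC on a 2-plug card)
print(two_plug_bios_on_three_plug_card(366, 124))  # 490 (XC3 BIOS on a 3-plug card)
print(two_plug_bios_on_three_plug_card(390, 130))  # 520 (KFA2 BIOS on a 3-plug card)
```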


----------



## bmgjet

schoolofmonkey said:


> Yeah, because the BIOS only allows 366W of power draw, and that's split across the PCIe slot and the 8-pin connectors.
> If I put on a higher-wattage BIOS it'll draw that amount of power, it just doesn't make any real difference to performance, whereas your FTW3, I'd say, has better silicon, hence the higher default clocks.
> 
> Either way, I agree EVGA really messed up with these cards; it's like a deliberate gimping of the XC3's hardware to sell the higher-tier cards.


The XC3 Black, XC3 Gaming, XC3 Ultra, FTW3 and FTW3 Ultra all have the same tier of silicon, and it comes down to the lottery whether you get a good or bad one. EVGA only bins for the Kingpin, and their binning is just a check that it does 2100MHz at 1.1V.
They have said this themselves on their forum and Twitter a few times.

Given the same power, the XC3 does just as well as a FTW3 card.
Here's my XC3.








I scored 15 008 in Port Royal
Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## itssladenlol

TheNaitsyrk said:


> Failure rates? Based on what? What I mean is, what are the causes? I've seen power delivery mentioned, and such.
> 
> If it's true I'll just keep my 500W BIOS.
> 
> Thanks


Trash board design.
EVGA dropped the ball this generation.
FTW3s are ticking time bombs.


----------



## Lobstar

schoolofmonkey said:


> whereas your FTW3, I'd say, has better silicon, hence the higher default clocks.


Not at all. This is with the XOC BIOS and many, many hours of tweaking to ensure I never hit more than 78W of PCIe draw in order to get an average clock above 1950MHz (by 2 whole MHz rofl).








I scored 13 986 in Port Royal
AMD Ryzen 9 3950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





While I may have better silicon, the run at 2100MHz average was literally just me messing around plugging in random numbers (no curve) and maxing the memory at +1500. The PCIe draw never exceeded 66W during that 2100MHz run.

Here are two runs from Superposition: the top with the XC3 BIOS and the bottom with the XOC BIOS.


----------



## schoolofmonkey

bmgjet said:


> Using a 3-plug BIOS on a 2-plug card will = less power, since you don't have a 3rd plug taking in power; the BIOS just mirrors plug 1's reading for it.
> So a 500W 3-plug BIOS = a 335W PL on a 2-plug card.
> 
> And it works the opposite way using a 2-plug BIOS on a 3-plug card, since the 3rd plug becomes unmetered, and all the 2-plug BIOSes have a higher slot power limit.
> So the XC3 2-plug 366W BIOS = 490W.
> The KFA2 390W 2-plug BIOS = 520W.


I got you, so it just makes one of the power connectors unmetered, allowing better power draw.
Makes sense why you used the KFA2 BIOS; it has one of the highest power limits along with being more compatible.


So really there shouldn't have been any 2x 8-pin cards; from what I've seen, the power delivery on a 2x 8-pin card looks more than ample to run 3x 8-pin.
The fact the FE can run 400W using 2x 8-pin cables into its proprietary connector shows most 2x 8-pin cards should be able to handle 400W safely (I could be wrong).


----------



## Falkentyne

schoolofmonkey said:


> I got you, so it just makes one of the power connectors unmetered, allowing better power draw.
> Makes sense why you used the KFA2 BIOS; it has one of the highest power limits along with being more compatible.
> 
> 
> So really there shouldn't have been any 2x 8-pin cards; from what I've seen, the power delivery on a 2x 8-pin card looks more than ample to run 3x 8-pin.
> The fact the FE can run 400W using 2x 8-pin cables into its proprietary connector shows most 2x 8-pin cards should be able to handle 400W safely (I could be wrong).


My FE can handle 550W. But temps get toasty.


----------



## schoolofmonkey

Falkentyne said:


> My FE can handle 550W. But temps get toasty.


That's technically through 2x8pin cables from the PSU...
Sad really.


----------



## Falkentyne

schoolofmonkey said:


> That's technically through 2x8pin cables from the PSU...
> Sad really.


12 pin 16 AWG Seasonic cable. No problem whatsoever. Very low +12v drop, connectors only get a little warm (maybe from all the heat the card is throwing out).


----------



## Spiritsly

Hi guys, I have the Zotac RTX 3090 Trinity and I'm having a weird issue: the power limit can't pass 294W while mining. I've tried flashing the Gigabyte Gaming OC BIOS and reverting to the default Zotac BIOS, but no luck. While gaming it can reach up to 360W, but while mining, as I said, it just sticks at 294W whether I'm on Hive OS or Windows. The VRAM temps are also within range at 90 degrees. It was working fine up until now, but today this appeared. What can I do to fix it? Someone in the link below has the same issue with the same card, and he solved his problem by returning it.


https://www.reddit.com/r/NiceHash/comments/k3ydc3


----------



## des2k...

"2100mhz on 1.1V. " lol... that's evga binning ? 
ouch, I guess when your board does 500w+ they can sell you the leakiest chips from Nvidia and charge a premium for it


----------



## des2k...

Spiritsly said:


> Hi guys, I have the Zotac RTX 3090 Trinity and I'm having a weird issue: the power limit can't pass 294W while mining. I've tried flashing the Gigabyte Gaming OC BIOS and reverting to the default Zotac BIOS, but no luck. While gaming it can reach up to 360W, but while mining, as I said, it just sticks at 294W whether I'm on Hive OS or Windows. The VRAM temps are also within range at 90 degrees. It was working fine up until now, but today this appeared. What can I do to fix it? Someone in the link below has the same issue with the same card, and he solved his problem by returning it.
> 
> https://www.reddit.com/r/NiceHash/comments/k3ydc3


There's nothing to fix; mining is mostly memory controller + memory, with very little core load on an RTX 3090. On my Trinity it was never at 100% TDP with the 350W or 390W BIOS when I tried.

You can disable P2 if the core/mem clocks drop during mining.
+1000 on mem will get you a very good hash rate, and you can drop board power below 294W; it won't affect the hash rate.
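For reference, a sketch of the knobs involved on the Linux / Hive OS side (the clock and power values are placeholders, and you should verify the flags against your driver's `nvidia-smi --help`; on Windows, the P2 toggle itself lives in NVIDIA Profile Inspector as "CUDA - Force P2 State"):

```python
# Hedged sketch: builds nvidia-smi invocations to pin the core clock
# (so the P2 downclock doesn't bite) and cap board power for memory temps.
# Values are placeholders, not tuned recommendations.
import shlex

def lock_clock_cmds(gpu=0, core_mhz=1350, power_w=280):
    """Return the shell commands as strings; run them with root privileges."""
    return [
        f"nvidia-smi -i {gpu} --lock-gpu-clocks={core_mhz},{core_mhz}",
        f"nvidia-smi -i {gpu} --power-limit={power_w}",
    ]

for cmd in lock_clock_cmds():
    print(shlex.split(cmd))
```

(`nvidia-smi -i 0 --reset-gpu-clocks` undoes the clock lock afterwards.)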


----------



## Spiritsly

How do I disable that on Windows and Hive OS?


----------



## EarlZ

Do these 500W XOC BIOSes still have all the thermal and other safeguards in place? I am looking at flashing my MSI Suprim X, and it would be great to have a higher power limit than the stock 450W, but I would prefer to still have a safety net.


----------



## SolarBeaver

Hey guys, so today I received my RMA'ed EKWB Strix water block, and the surface which covers the GPU core looks even uglier than before (the reason I RMA'ed it; core temps were pretty bad because of it). Is it a joke? I'm not even sure I want to assemble it. Is EKWB QC really that bad, or is it really considered normal?
Here's a photo:


----------



## Xeq54

Lol, that probably won't have a negative impact on performance; paste will fill the small grooves with little to no performance hit.

But it is funny that they actually went through the hassle of CNC machining it and nickel plating it again instead of sending out a new copper base. Though it is true that every block they have seems to sell out really quickly; maybe they just did not have a replacement.


----------



## des2k...

SolarBeaver said:


> Hey guys, so today I received my RMA'ed EKWB Strix water block, and the surface which covers the GPU core looks even uglier than before (the reason I RMA'ed it; core temps were pretty bad because of it). Is it a joke? I'm not even sure I want to assemble it. Is EKWB QC really that bad, or is it really considered normal?
> Here's a photo:
> View attachment 2481116


Hope you're not keeping that, it looks pretty bad.
What was the delta on the original block?


----------



## itssladenlol

SolarBeaver said:


> Hey guys, so today I received my RMA'ed EKWB Strix water block, and the surface which covers the GPU core looks even uglier than before (the reason I RMA'ed it; core temps were pretty bad because of it). Is it a joke? I'm not even sure I want to assemble it. Is EKWB QC really that bad, or is it really considered normal?
> Here's a photo:
> View attachment 2481116


Would never keep that garbage, ekwb is trash Tier for Premium price. 
Everyone knows that.


----------



## SolarBeaver

Xeq54 said:


> Lol, that probably wont have a negative impact on performance, paste will fill the small grooves with little to no performance hit.
> 
> But it is funny that they actually went through the hassle to CNC machine it and nickel plate it again instead of sending out a new copper base. Though it is true that every block they have seems to sell out really quickly, maybe they just did not have a replacement.


Sorry for the confusion, it seems I worded it poorly; that's actually a completely new block.
So you think it shouldn't have much negative impact on performance? The last one was actually terrible on the core (I tried to re-paste/re-assemble it multiple times to no avail, with the delta being 30C+) and I thought the uneven surface was the main culprit.
Guess I should try the new one first before going for another replacement.


----------



## SolarBeaver

des2k... said:


> Hope you're not keeping that, it looks pretty bad.
> What was the delta on the original block?


30c+


itssladenlol said:


> Would never keep that garbage, ekwb is trash Tier for Premium price.
> Everyone knows that.


Sigh, I haven't...


----------



## Esenel

itssladenlol said:


> Would never keep that garbage, ekwb is trash Tier for Premium price.
> Everyone knows that.


Nah. EK has higher quality than AquaComputer at the moment :-D

Mine also looks completely fine.

@SolarBeaver 
I doubt this will have an impact on the temps.
It just looks ugly.

See, even such a thing doesn't impact the temps:


----------



## pat182

Spiritsly said:


> Hi guys, I have the Zotac Rtx 3090 Trinity and i am having a weird issue that the power limit can not pass the 294Watt while mining. I have tried to flash the gigabyte gaming oc bios as well and revert it to default Zotac Bios but no luck. While gaming it can reach up to 360watts but while mining as i said it just stucks at 294watt wheter i am on hive os or windows. The vram temps are also within the range at 90 degrees. It was working fine up until now but today this is presented. What can i do to fix it? And also someone has the same issue and the same card in the link below and he solved his problem by returning it.
> 
> 
> __
> https://www.reddit.com/r/NiceHash/comments/k3ydc3


You are probably overheating the memory; 300W will push your memory to 110C.
Put it at 270W at 100 hash, that should keep it lower than 90C.


----------



## gfunkernaught

EarlZ said:


> Do these 500W XOC BIOSes still have all the thermal and other safeguards in place? I am looking at flashing my MSI Suprim X, and it would be great to have a higher power limit than the stock 450W, but I would prefer to still have a safety net.


Yes, they do. On my Trio, the power limit gets triggered at or around 490W with those BIOSes, same as the EVGA 520W BIOS. AFAIK those EVGA 500W BIOSes aren't "XOC". XOC BIOSes are ones like the Strix and Kingpin XOC, for eXtreme OverClocking, like you want your GPU to think for itself or something.


----------



## KedarWolf

Welp, I was going to travel to another city; I'd found a store that was accepting in-store pre-orders of the Strix OC RTX 3090. The day before I had the money in my account, their website updated: "No longer accepting in-store orders."


----------



## J7SC

KedarWolf said:


> Welp, I was going to travel to another city, found a store that was accepting pre-orders in-store of the Strix OC RTX 3090. The day before I had the money in my account, their website updates, "No longer accepting in-store orders."


...just keep on trying ...not going to repeat the whole story again, but I got my 3090 Strix OC (at low MSRP, no tariff) by accident / right-time-right-place when I went to buy some Arctic P12 fan value packs...I had shortlisted the Strix last fall as one of three models I was interested in, but never signed up anywhere for pre-order. 

...3090s trickle in in ultra-low quantities here in Canada; and once you see them online, they're usually gone within an hour. Unfortunately, the shortage seems to be getting worse not only in Canada...even bigger retailers in Europe who usually have at least a few of the entry-level models are sold out 

...check Memory Express stores if you're near one...they won't take online orders for 3090s (afaik), but you can see online when an extra one is available at that location over-and-above the pre-order fulfillment for that day at that location.

Good luck !


----------



## KedarWolf

J7SC said:


> ...just keep on trying ...not going to repeat the whole story again, but I got my 3090 Strix OC (at low MSRP, no tariff) by accident / right-time-right-place when I went to buy some Arctic P12 fan value packs...I had shortlisted the Strix last fall as one of three models I was interested in, but never signed up anywhere for pre-order.
> 
> ...3090s trickle in in ultra-low quantities here in Canada; and once you see them online, they're usually gone within an hour. Unfortunately, the shortage seems to be getting worse not only in Canada...even bigger retailers in Europe who usually have at least a few of the entry-level models are sold out
> 
> ...check Memory Express stores if you're near one...they won't take online orders for 3090s (afaik), but you can see online when an extra one is available at that location over-and-above the pre-order fulfillment for that day at that location.
> 
> Good luck !


Yeah, it was Memory Express that told me yesterday they'd do an in-store order.


----------



## mirkendargen

SolarBeaver said:


> Hey guys, so today I received my RMA'ed EKWB Strix water block, and the surface which covers the GPU core looks even uglier than before (the reason I RMA'ed it; core temps were pretty bad because of it). Is it a joke? I'm not even sure I want to assemble it. Is EKWB QC really that bad, or is it really considered normal?
> Here's a photo:
> View attachment 2481116


Not gonna defend EK, but how level the cold plate is matters infinitely more than how smooth it is for temps. Once it's mounted you can't even see this; are you putting the block on a card or in a display case?


----------



## Lobstar

mirkendargen said:


> Not gonna defend EK, but how level the cold plate is matters infinitely more than how smooth it is for temps. Once it's mounted you can't even see this, are you putting the block on a card or in a display case?


My EKWB C8H monoblock is bubbled like you wouldn't believe. You can see it with the naked eye.


----------



## yzonker

I didn't notice any issues with my Corsair block, but it was my first attempt at a GPU block, so might not know. At least they seem to have fixed the leak issue they had on the 20 series block. It was a really easy install for a noob, so good choice for first timers. Probably nobody on this board. Lol


----------



## des2k...

yzonker said:


> I didn't notice any issues with my Corsair block, but it was my first attempt at a GPU block, so might not know. At least they seem to have fixed the leak issue they had on the 20 series block. It was a really easy install for a noob, so good choice for first timers. Probably nobody on this board. Lol


In Igor's review it was like a 16C delta at 340W, which honestly looks bad. My crappy EK manages an 8C or less delta at that low a wattage lol

What delta do you have on the Corsair block?


----------



## des2k...

mirkendargen said:


> Not gonna defend EK, but how level the cold plate is matters infinitely more than how smooth it is for temps. Once it's mounted you can't even see this, are you putting the block on a card or in a display case?


Any inconsistency on the cold plate, GPU or CPU, EK will replace no questions asked.

I posted on Reddit 2 years ago about weird stains and bumps on the nickel plating of my new EK block, and got a message from EK the next day to file an RMA. Got a new cold plate shipped free, no return asked.


----------



## jomama22

des2k... said:


> Any inconsistency on the cold plate, GPU or CPU, EK will replace no questions asked.
> 
> I posted on Reddit 2 years ago about weird stains and bumps on the nickel plating of my new EK block, and got a message from EK the next day to file an RMA. Got a new cold plate shipped free, no return asked.


They definitely don't do this anymore. Everyone getting RMA work done has to send the block back now and pay for that shipping. Not sure if they have to pay for the return shipping from EK, but yeah.


----------



## Nizzen

First 3090 strix white test with stock 11700k (4600mhz all core) with 3200c14 memory 








I scored 18 851 in Time Spy
Intel Core i7-11700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## yzonker

des2k... said:


> In Igor's review it was like a 16C delta at 340W, which honestly looks bad. My crappy EK manages an 8C or less delta at that low a wattage lol
> 
> What delta do you have on the Corsair block?


13C at 390W. I was just playing RDR2 using the Aorus BIOS. And I just slapped it in there like a noob too, using the original paste it came with, so it could probably be better.


----------



## EarlZ

gfunkernaught said:


> Yes they do. On my Trio, the power limit gets triggered at or around 490w with those bios, same as the evga 520w bios. AFAIK those evga 500w bios aren't "XOC". XOC bios are those like Strix and Kingping XOC for eXtreme OverClocking, like you want your gpu to think for itself or something


Sounds good!

Is this the one you used EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA - EVGA Forums


----------



## des2k...

yzonker said:


> 13C at 390w. Was just playing RDR2 using the Aorus bios. And I just slapped it in there like a noob too using the original paste it came with so it could probably be better.


That's pretty good, better than review temps.


----------



## long2905

des2k... said:


> In Igor's review it was like a 16C delta at 340W, which honestly looks bad. My crappy EK manages an 8C or less delta at that low a wattage lol
> 
> What delta do you have on the Corsair block?


der8auer also tested one on a 3070, I believe; the delta was also high in that video as well.


----------



## gfunkernaught

EarlZ said:


> Sounds good!
> 
> Is this the one you used EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA - EVGA Forums


I downloaded it from techpowerup.com/vgabios.
The filenames are EVGA.RTX3090.24576.201020 for the 500W and EVGA.RTX3090.24576.201110 for the 520W.


----------



## gfunkernaught

So I got the heatsinks, lined them up horizontally, with two vertically where the GPU is, then put my SP120 on top, pulling air up and away from the card. I've been running Port Royal on a loop; my water<>GPU delta is still 13C. Got my VRAM offset up to +1300 from +400 and raised the PL to 510W. It helps the upper limits a bit, and the lowest clock I saw during this run was 2084MHz.


----------



## yzonker

des2k... said:


> that's pretty good, better than review temps,


Tried 340w, about 11C.


----------



## Thanh Nguyen

Nizzen said:


> First 3090 strix white test with stock 11700k (4600mhz all core) with 3200c14 memory
> 
> I scored 18 851 in Time Spy
> Intel Core i7-11700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Is it worth upgrading a 10900K to an 11900K?


----------



## Falkentyne

Should I use this on my 3090 FE?

Boron Nitride Paste 

Or am I asking for the Magic Smoke to come out, or for the GPU to get ripped off the substrate?
@bmgjet I know you can help me


----------



## bmgjet

SolarBeaver said:


> Hey guys, so today I received my RMA'ed EKWB Strix water block, and the surface which covers the GPU core looks even uglier than before (the reason I RMA'ed it; core temps were pretty bad because of it). Is it a joke? I'm not even sure I want to assemble it. Is EKWB QC really that bad, or is it really considered normal?
> Here's a photo:
> View attachment 2481116


Flatness is what matters most. Which is where EKWB are really bad this gen, since they aren't lapping blocks: straight off the CNC into the plating bath and then out to customers.
EKWB is living off its past reputation this gen. I definitely won't be getting one of their blocks again given how bad the quality of mine was: standoffs the wrong length preventing contact with the die, a massive convex spot on the die contact patch, VRAM pads on the backplate not lining up with the VRAM, and the edge of the backplate shorting out fan headers and needing to be ground down.


----------



## sultanofswing

I had better luck with Bykski than EK.


----------



## EarlZ

gfunkernaught said:


> I downloaded it from techpowerup.com/vgabios
> filenames are EVGA.RTX3090.24576.201020 for the 500w and EVGA.RTX3090.24576.201110 for the 520w.



The 520W one is from the Kingpin BIOS and still has all the protections and downclocking, correct?


----------



## J7SC

sultanofswing said:


> I had better luck with Bykski than EK.


...I like Bykski as well (and I think I might have them as OEM on some factory w-cooled cards) but a very recent post in this thread reported that there was a spot where the plating was coming off in the Bykski 3090 block. With that in mind, I ordered the EK block (supposedly arriving over the next few days) after I couldn't locate alternatives such as Aquacomputer or Phanteks. My usual 'go-to' is Watercool Heatkiller, but they aren't offering a Strix block yet. I will give the EK a very close inspection (will take a bit to get all the parts for the dual-loop build in anyway)...also, I plan to stick with the stock Strix backplate for now.

...whether it is Covid-19, trade wars / tariffs and/or mining, the PC industry in general, including peripheral producers, seems to be going through a weird time right now.


----------



## gfunkernaught

EarlZ said:


> The 520W one is from the Kingpin BIOS and still has all the protections and downclocking, correct?


Yes, it behaves like any other BIOS as far as 2D/3D clocks and thermal and power limitations go.


----------



## KedarWolf

Finally got it. 






ROG-STRIX-RTX 3090-O24G-GAMING | Graphics Cards

The ROG Strix GeForce RTX 3090 OC Edition 24GB GDDR6X unleashes maximum performance on the NVIDIA Ampere architecture, featuring Axial-tech fan design, 0dB technology, 2.9-slot design, Dual BIOS, Auto-Extreme Technology, SAP II, MaxContact Technology, and more.

rog.asus.com


----------



## EarlZ

Sweet card!


----------



## wirefox

I have a FTW3 Ultra with an EK block and backplate.

On GPU-Z I have these two readings: EVGA iCX MEM3, which is constantly 125C, along with EVGA iCX PWR5 at 125C.

I am getting 19154 in Time Spy and see about 22-23 RAM usage in MSI Afterburner when playing COD, mostly on ultra, with my G7 32" monitor. So I guess the RAM all works.

Someone suggested the RAM sensors could be off... any other thoughts?

Worried maybe I missed a thermal pad? But 125C seems absurd even at idle... anyone have thoughts on this?

Anyone have a schematic of which RAM chip = which mem temp sensor?

Any other software that might be worth testing with?

thanks wirefox


----------



## gfunkernaught

wirefox said:


> I have a FTW3 Ultra with an EK block and backplate.
> 
> On GPU-Z I have these two readings: EVGA iCX MEM3, which is constantly 125C, along with EVGA iCX PWR5 at 125C.
> 
> I am getting 19154 in Time Spy and see about 22-23 RAM usage in MSI Afterburner when playing COD, mostly on ultra, with my G7 32" monitor. So I guess the RAM all works.
> 
> Someone suggested the RAM sensors could be off... any other thoughts?
> 
> Worried maybe I missed a thermal pad? But 125C seems absurd even at idle... anyone have thoughts on this?
> 
> Anyone have a schematic of which RAM chip = which mem temp sensor?
> 
> Any other software that might be worth testing with?
> 
> View attachment 2481211
> 
> thanks wirefox


Try HWInfo, see what those sensors say.


----------



## gfunkernaught

So I ran into an issue with my OC, stability related but at lower loads/power usage. I previously mentioned that I set the curve to [email protected] so that the effective clock would be around 2100MHz, which works when the load is 460W or more (up to 500W). Playing BLOPS Cold War, the usage was around 430W and the temps were lower, causing the effective clock to go between 2115-2145MHz. The game crashed suddenly, and the GPU core temp was 38C. So that would mean the clock was too high for that voltage. It could also be the VRAM OC, although the mem junction temp was 56-58C. Other than setting different profiles for low- vs high-load games, is there anything else I could do to force a max clock at a specific voltage? I also set three points on the curve, starting on the right with the highest.
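For anyone puzzled by the behavior above: GPU Boost adds clock bins as the core gets colder, so a flat point on the curve can still land above the requested clock at low load. The sketch below is purely illustrative (the real bin logic is proprietary; the 50C reference, 10C step, and 15MHz bin size are all assumptions), but it reproduces the "2100 requested, 2115+ delivered when cool" pattern:

```python
# Illustrative model only - GPU Boost's actual algorithm is proprietary.
# Assumption: one extra 15 MHz bin is granted per 10C below a 50C reference.

def boost_clock(requested_mhz: int, temp_c: float, bin_mhz: int = 15) -> int:
    """Approximate delivered clock for a requested V/F point at a given temp."""
    bins = max(0, int((50 - temp_c) // 10))  # colder core -> more bonus bins
    return requested_mhz + bins * bin_mhz

print(boost_clock(2100, 52))  # warm under heavy load: stays at 2100
print(boost_clock(2100, 38))  # cool under light load: one extra bin -> 2115
```

That is why locking the curve at one voltage still is not a hard clock cap: the only reliable workaround people have found is per-game profiles or a plain offset.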


----------



## KedarWolf

What are safe RAM temps? With the stock Strix OC BIOS, undervolted, my RAM hits 85C in Cyberpunk, hotspot a bit more. I have Thermalright 12.8 W/mK pads, but only in 0.5, 1.0 and 1.5mm thicknesses.

Not sure what the stock pads are on the card.


----------



## Alex24buc

Anyone here with a Palit GameRock OC? How was your experience with it? I swapped my GamingPro OC for the GameRock and wanted to know if I made a good choice; the GamingPro OC was a weak card, and although I said I would never buy a Palit card again, the GameRock OC was the only one I could find, for 300 euros more. Thanks!


----------



## yzonker

gfunkernaught said:


> So I ran into an issue with my OC, stability related but at lower loads/power usage. I previosly mentioned that I set the curve to [email protected] so that the effective clock be around 2100mhz, which works when the load is 460w or more (up to 500w). Playing BLOPS Cold War, the usage was around 430w and the temps were lower, causing the effective clock to go between 2115-2145mhz. The game crashed suddenly, and the gpu core temp was 38c. So that would mean that the clock was too high for that voltage. Now it could be the vram oc, although the mem junction temp was 56-58c. Other than setting different profiles for low vs high load games, is there anything else I could do to force a max clock at specific voltage? I also did set three points on the curve, starting on the right with the highest.


Edit: I think I see now. It's moving the entire curve up from low load and lower temp. Not sure there is a way. I always set by offset and let it fall wherever for that reason.


----------



## Wihglah

I have been doing a little mining recently and wanted to keep my Memory Junction Temps as low as possible.

With just the Optimus block they were hovering around 90C, so I sat two Noctua NF-F12s on the backplate. My memory junction temps dropped significantly, to 76-78C.

Of course this solution was not even remotely viable long term (I like my rig pretty), so I ordered an MP5Works backplate cooler.










I am running the parallel version although I got the series cover just in case as well. 

The good news is my junction temps are running at 72C now during mining.

Even better, the Water/Core delta during looping Port Royal has come down from 7C to 6C @ 550W.

All paid for by my first week's mining.


----------



## gfunkernaught

yzonker said:


> Edit: I think I see now. It's moving the entire curve up from low load and lower temp. Not sure there is a way. I always set by offset and let it fall wherever for that reason.


I did set an offset initially, but with load and power limits being triggered, the clock and voltage would bounce so much that at times the load was 99%, the temp 44C, the clock would jump to 2115MHz, and the voltage bump would be delayed (stuck at 1032mV), causing a crash. Setting multiple points on the curve fixed that, but a curve has its own issues, like GPU Boost not listening to what we ask it to do! I set the curve to 2100MHz; the effective clock is 2077MHz... so I have to trick it, and then it listens too well when the load isn't heavy. It's alive, I tell ya.


----------



## J7SC

KedarWolf said:


> What are safe RAM temps, with stock Strix OC BIOS, undervolted, my RAM hits 85C in Cyberpunk, hotspot a bit more. I have Thermalright 12.8 mw/k pads, but I only have .5, 1.0 and 1.5mm.
> 
> Not sure what the stocks pads are on the card.


...85C for RAM is a bit high, but not unusual. The first thing I did with my Strix was to experiment with single and dual 120mm fans on the backplate... VRAM temps dropped by 12C+. Alignment of the fans with the VRAM on the back, and the angle of the fan(s) blowing on the backplate, also matter. When using dual 120mm side-by-side, I flipped one of the fans around to 'draw' instead of 'push'; that's the one which extended coverage towards the cutout on the back for the VRM cooler on the Strix ("below the 3x 8-pin") - you don't want to 'blow against' the relevant stock fan on the front of the Strix (the one on the right) cooling the VRM.

...not sure (yet) about the stock pad size, but apparently on the back side it is 1mm; absolutely necessary to independently confirm that, though

*EDIT: EK block arrived* for the 3090 Strix OC...still working hours here but a quick inspection suggests cold plate / GPU contact is fine...also found various old copper and brass pieces which might lend themselves for rear_VRAM cooling


----------



## gfunkernaught

@J7SC
I found these heatsinks I thought about getting. The m.2 heatsinks seem to be working, but after overclocking the vram to +1300 and a short quake 2 RTX run, vram reached 64c, core temp 44c at 500w. I might have to play with the placement of the sinks, right now they're covering most of the center-back of the card, where all the important stuff is, VRMs (both sides) vram and core.


https://www.amazon.com/dp/B07DH5T2RW/ref=cm_sw_r_cp_apa_fabc_BPQQM2XEQZ7DT0H2TR2H?_encoding=UTF8&psc=1


----------



## J7SC

gfunkernaught said:


> @J7SC
> I found these heatsinks I thought about getting. The m.2 heatsinks seem to be working, but after overclocking the vram to +1300 and a short quake 2 RTX run, vram reached 64c, core temp 44c at 500w. I might have to play with the placement of the sinks, right now they're covering most of the center-back of the card, where all the important stuff is, VRMs (both sides) vram and core.
> 
> 
> https://www.amazon.com/dp/B07DH5T2RW/ref=cm_sw_r_cp_apa_fabc_BPQQM2XEQZ7DT0H2TR2H?_encoding=UTF8&psc=1


Nice, and probably a good option. 

I have at least four 'universal GPU / Swiftech mcw82s w-cooling blocks' laying around unused, so I'm looking for a flat copper or brass base plate to mount on the stock Strix back plate ...if all else fails, I'll just wait until EK releases the actively w-cooled back plate, probably for big $$$ and 'out-of-stock, out-of-stock'


----------



## yzonker

gfunkernaught said:


> @J7SC
> I found these heatsinks I thought about getting. The m.2 heatsinks seem to be working, but after overclocking the vram to +1300 and a short quake 2 RTX run, vram reached 64c, core temp 44c at 500w. I might have to play with the placement of the sinks, right now they're covering most of the center-back of the card, where all the important stuff is, VRMs (both sides) vram and core.
> 
> 
> https://www.amazon.com/dp/B07DH5T2RW/ref=cm_sw_r_cp_apa_fabc_BPQQM2XEQZ7DT0H2TR2H?_encoding=UTF8&psc=1


I have one of those, but didn't find it performed any better than this shorter black one I already had. 









Amazon.com: Aluminum Large Heatsink 4.72'' x 2.72'' x 1.06'' inch /120 x 69 x 27 mm Heat Sink Cooling Black Oxide Radiator 22 Fin for Computer LED Power : Electronics


Buy Aluminum Large Heatsink 4.72'' x 2.72'' x 1.06'' inch /120 x 69 x 27 mm Heat Sink Cooling Black Oxide Radiator 22 Fin for Computer LED Power: Heatsinks - Amazon.com ✓ FREE DELIVERY possible on eligible purchases



www.amazon.com


----------



## gfunkernaught

@J7SC @yzonker 
Be mindful of the type of thermal pads you use if you're putting heatsinks on the back of the card with a backplate. I just tried to remove a plate that felt like it was just a little "stuck", and some of the pad ripped. Looks like it got so hot that it dried up. My Gelid Extreme pads came in; they're kind of short, but I'll make them work. Also, that company we talked about earlier in the thread shipped out those sample pads. My delta between GPU and water right now is 13C and change, so I'm a bit reluctant to do a remount and repaste lol. I'll wait a bit and see if thermal performance degrades.


----------



## J7SC

gfunkernaught said:


> @J7SC @yzonker
> Be mindful of the type of thermal pads you use if you're using heatsinks on the back of the card with a backplate. I just tried to remove a plate that felt like it was just a little "stuck", and some of the pad ripped a little. Looks like it got so hot that it dried up. My gelid extreme pads came in, they're kind of short, but I'll make them work. Also that company we talked about earlier in the thread, the shipped out those sample pads. My delta between gpu and water right now is 13c and change, so I'm a bit reluctant to do a remount and repaste lol. I'll wait a bit and see if thermal performance becomes degraded.


...
Thanks for the heads-up. I think I played it safe re. thermal pad supplies (at least I hope so...)


----------



## wirefox

Sadly both GPU-Z and HWiNFO show the MEM and PWR line items at 125C.

Anyone know which memory chips map to which sensors? There is definitely not a hot spot on the backplate that is 125C at idle, yet the temp is what it is...

Could I have borked a sensor cleaning off the gooey thermal pads?

Is this an RMA? .. I already suited it up in its armor though...


----------



## gfunkernaught

Finally got my PC outside for some benching... here's Port Royal. Kinda sad that I have to bring this thing outside to get this score when I see people here getting 15000 indoors, presumably.








I scored 15 449 in Port Royal


Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## gfunkernaught

wirefox said:


> Sadly both gpuz and hwinfo show the mem and power line items with 125C.
> 
> Anyone know what mem goes to what sensors? there is definitely not a hot spot on the backplate that is 125C at idle yet temp is what it is...
> 
> Could I have borked a sensor cleaning off the gooey thermal pads?
> 
> Is this an RMA? .. I already suited it up in its armor though...
> 
> View attachment 2481336
> 
> 
> View attachment 2481337


First of all, love the chess piece!
Second, does that temperature show up on those sensors with a different BIOS as well? Not just BIOSes from other vendors; check other versions of the BIOS for your card and see if those sensors report differently.


----------



## gfunkernaught

J7SC said:


> ...
> Thanks for the heads-up. I think I played it safe re. thermal pad supplies (at least I hope so...)
> 
> View attachment 2481322
> 
> 
> View attachment 2481323


Which pads are those blue ones on the right?


----------



## Falkentyne

gfunkernaught said:


> Which pads are those blue ones on the right?


Who's the guy you contacted for thermal pads?
I emailed them and got no reply back at all. It's been a week...

Can you give me the info again?
I think you said to actually call on the phone and ask for someone directly...

Sorry.


----------



## gfunkernaught

Falkentyne said:


> Whos the guy you contacted for thermal pads?
> I emailed them and got no replies back at all. It's been a week...
> 
> Can you give me the info again?
> I think you said to actually call on the phone and ask for someone directly...
> 
> Sorry.












Contact Us - New England Die Cutting (NEDC)


Get in touch through phone, fax, email, or form submission. You can also stop by our office at 96 Milk Street, Methuen, MA 01844.




www.nedc.com





Call them up. I emailed the sales rep just now, asking if it is okay to have people from the forum contact him directly...


----------



## J7SC

gfunkernaught said:


> Which pads are those blue ones on the right?


Thermalright Odyssey thermal 1mm - come highly recommended...

...also, another delivery for the dual-loop build:


----------



## gfunkernaught

J7SC said:


> Thermalright Odyssey thermal 1mm - come highly recommended...
> 
> ...also, another delivery for the dual-loop build:
> 
> View attachment 2481353
> 
> 
> View attachment 2481354
> View attachment 2481358


Where'd you get those thermalrights from?


----------



## inedenimadam

Has EK released the backplate MEM waterblock for reference PCBs yet? I swear it has been like 3 months since I saw them tease it. I neeeeeeed it!


----------



## J7SC

gfunkernaught said:


> Where'd you get those thermalrights from?


Amazon



inedenimadam said:


> Has EK released the backplate MEM waterblock for reference PCBs yet? I swear it has been like 3 months since I saw them tease it. I neeeeeeed it!


...you mean the actively water-cooled one? I checked their site earlier in the week and it wasn't listed for purchase yet for my model (Strix), or for any model afaik... only teaser pics... should be getting close though (I hope)


----------



## gfunkernaught

Found some more odd behavior with my card. Set my curve to [email protected], PL 50%, load puts it at 2100mhz. Played quake 2 rtx, at 100% load and 500-505w, core clock got crushed down to as low as 1965mhz with the max clock at 1980mhz. I play with dynamic res since I want the framerate at 60fps. Core temp hit 42c. Thought to myself, hey not bad right? Crash. Screen went blank, PC rebooted. This has to be power related right? Not enough? Too much?


----------



## rusky1

I've got a 3090FE and ordered an EK loop for it along with the new block (April 2021 release). Will use Kryonaut TIM for the GPU, do you guys suggest I pick up some extra thermal pads as well? If so, what are your recommendations?

I also have some conductonaut laying around that I've used for CPUs before, but not really a fan of the corrosion/staining. Anyone using that or another liquid metal?


----------



## gfunkernaught

rusky1 said:


> I've got a 3090FE and ordered an EK loop for it along with the new block (April 2021 release). Will use Kryonaut TIM for the GPU, do you guys suggest I pick up some extra thermal pads as well? If so, what are your recommendations?
> 
> I also have some conductonaut laying around that I've used for CPUs before, but not really a fan of the corrosion/staining. Anyone using that or another liquid metal?


For the pads, Gelid Extreme or Thermalright Odyssey. I believe the thickness should be 1mm, but you can experiment with 0.5mm to 1.5mm. Someone with more experience with these pads can chime in. I just got my Gelid Extreme 1mm pads but I don't feel like installing them just yet.

As for the Conductonaut, I tried it on my 3090. I'd say it took off about 2C or so. My issue with LM on the GPU is it can get really messy. I put nail polish on the resistors surrounding the GPU die. I had to remount because the delta was too high, and cleaning it up was a nightmare. Some tiny drops fell onto the PCB and stuck there; I had to hunt them down one at a time. Scary stuff. I just use IC Diamond now.


----------



## ViRuS2k

This mining stuff is highly addictive ***** made so far on my 3090 + 1080 Ti getting 158MH/s and making around £74 a week ***** this is nuts, FREE MONEY lol
Though I'm currently mining ETH; anyone have recommendations on what's better to mine? Higher rates etc., or is ETH still the best?

I really need to get my MP5Works series version installed ASAP though, as my memory temps hover around 102C ***** but the memory is still not throttling, as I'm still pulling around 120MH/s with the 3090 alone @ 298W


----------



## J7SC

rusky1 said:


> I've got a 3090FE and ordered an EK loop for it along with the new block (April 2021 release). Will use Kryonaut TIM for the GPU, do you guys suggest I pick up some extra thermal pads as well? If so, what are your recommendations?
> 
> I also have some conductonaut laying around that I've used for CPUs before, but not really a fan of the corrosion/staining. Anyone using that or another liquid metal?


....LM is a personal choice... prep re. conductive insulation and GPU mounting orientation all play a role.

...as to extra thermal pads, I would pick some up, i.e. similar to those in the spoiler on the right, from Amazon. I haven't mounted my block yet (it just arrived a few hours ago, and I'm still expecting other w-loop parts), but I have seen vids where, for example on the Strix EK, there wasn't very much extra thermal pad material included... also, with the double-sided VRAM on the 3090s, you want really good pads on the back.



Spoiler


----------



## Falkentyne

gfunkernaught said:


> Found some more odd behavior with my card. Set my curve to [email protected], PL 50%, load puts it at 2100mhz. Played quake 2 rtx, at 100% load and 500-505w, core clock got crushed down to as low as 1965mhz with the max clock at 1980mhz. I play with dynamic res since I want the framerate at 60fps. Core temp hit 42c. Thought to myself, hey not bad right? Crash. Screen went blank, PC rebooted. This has to be power related right? Not enough? Too much?


Quake 2 RTX has bugs if Vsync is disabled. It will crash the display driver _EVERY TIME_ after about 4 minutes, even at stock speeds!!! When this happens, any attempt to save, load or alt tab is a display driver crash. Not doing anything may eventually BSOD you or the game will crash and possibly BSOD after.

Either enable vsync, _OR_ play in windowed mode with vsync disabled.


----------



## gfunkernaught

Falkentyne said:


> Quake 2 RTX has bugs if Vsync is disabled. It will crash the display driver _EVERY TIME_ after about 4 minutes, even at stock speeds!!! When this happens, any attempt to save, load or alt tab is a display driver crash. Not doing anything may eventually BSOD you or the game will crash and possibly BSOD after.
> 
> Either enable vsync, _OR_ play in windowed mode with vsync disabled.


Ok. Speculating here: does the bug have something to do with the framebuffer? Or is it crashing the driver because the framerate is too high? In my case it seemed to be capped at 62fps.

Side note: just playing Battlefield 1 in 8K... whoa... so when the game runs at [email protected], the GPU is moving 63,833,702,400 bytes of data per second... that is insane. Still noticed the clocks were very low while in BF1, 1980-2010MHz.
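For scale, framebuffer output alone can be estimated with simple arithmetic. Since the exact resolution/refresh in the post got redacted, the numbers below (8K UHD, 60Hz, 8-bit RGBA) are assumptions for illustration; real GPU traffic also includes textures, geometry, and compression, so it doesn't reconstruct the quoted figure:

```python
# Rough framebuffer-bandwidth arithmetic (illustrative assumptions).
width, height = 7680, 4320   # 8K UHD, assumed
bytes_per_pixel = 4          # 8-bit RGBA, assumed
fps = 60                     # assumed refresh

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps
print(f"{bytes_per_second:,} bytes/s")  # 7,962,624,000 bytes/s for output frames alone
```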


----------



## dk10438

a couple of questions for a Gigabyte 3090 OC

a) What's the general consensus for "safe" temps when mining? I've been keeping my VRAM at 100-102 degrees or so, but I'm wondering if this is too high for continuous use, or if there's room to push it to 104 or 106 as long as it isn't thermal throttling.

b) I have some Thermal Grizzly Minus Pad 8 pads in hand but some Gelid pads coming. Wondering if the Gelid pads are significantly better and if I should wait for them to arrive or just use the Grizzlys.


----------



## Warocia

dk10438 said:


> a couple of questions for a Gigabyte 3090 OC
> 
> a) what's the general consensus for "safe" temps when mining? been keeping my VRAM at 100-102 degrees or so but wondering if this is too high for continuous use or if there's room to push it to 104 or 106 as long as it isn't thermal throttling.
> 
> b) I have some Grizzly minus 8 pads in hand but some Gelid pads coming. Wondering if the Gelid pads are significantly better and if I should wait for them to arrive or just use the Grizzly's


My memory temperatures were something like 92C. At first I could do +1300MHz; after mining 24/7 for a couple of months, I can only mine at +1050MHz without a black screen. So there is some degradation.


----------



## EarlZ

dk10438 said:


> a couple of questions for a Gigabyte 3090 OC
> 
> a) what's the general consensus for "safe" temps when mining? been keeping my VRAM at 100-102 degrees or so but wondering if this is too high for continuous use or if there's room to push it to 104 or 106 as long as it isn't thermal throttling.
> 
> b) I have some Grizzly minus 8 pads in hand but some Gelid pads coming. Wondering if the Gelid pads are significantly better and if I should wait for them to arrive or just use the Grizzly's


A lot of folks say 100C is safe due to the 110C thermal throttle point, but I'd say anything above 90C for extended use will damage/degrade the chips.


----------



## KedarWolf

Just ran the entire first chapter of Shadow of The Tomb Raider in hardware HDR with Ray Tracing on Ultra at 2130 core, +710 memory at 10461 on my ASUS Strix OC.

I'm using the V2 BIOS as I'm still on air until I get my waterblock and backplate.


----------



## gfunkernaught

@Falkentyne 
Here's the contact info:
Garret Abare from NEDC SEALING SOLUTIONS
[email protected]
978-994-3838


----------



## gfunkernaught

KedarWolf said:


> Just ran the entire first chapter of Shadow of The Tomb Raider in hardware HDR with Ray Tracing on Ultra at 2130 core, +710 memory at 10461 on my ASUS Strix OC.
> 
> I'm using the V2 BIOS as I'm still on air until I get my waterblock and backplate.


Was 2130mhz your effective clock? What voltage?


----------



## EarlZ

2130Mhz on core sounds really good on air!


----------



## KedarWolf

gfunkernaught said:


> Was 2130mhz your effective clock? What voltage?


My clocks vary from 2012 and up, usually hovering around 2050-2070; 2130 at 1.083v. I top out at 75C playing the game. I don't use a curve, just the slider to set my clocks.

I could fine-tune it better with a curve, but I'm going to wait for my waterblock etc.

I won't use the 1000w Kingpin BIOS until my waterblock and backplate arrive. With the Kingpin BIOS my RAM goes over 90c and that's not good.

When EKWB puts out an active backplate, I'll use that too.


----------



## KedarWolf

gfunkernaught said:


> Was 2130mhz your effective clock? What voltage?


Here is my GPU-Z log, SOTR maxed out, HDR on, ray tracing on Ultra.



Code:


        Date        , GPU Clock [MHz] , GPU Temperature [°C] , Hot Spot [°C] , Memory Temperature [°C] , MVDDC Power Draw [W] , PWR_SRC Power Draw [W] , PWR_SRC Voltage [V] , PCIe Slot Power [W] , PCIe Slot Voltage [V] , 8-Pin #1 Power [W] , 8-Pin #1 Voltage [V] , 8-Pin #2 Power [W] , 8-Pin #2 Voltage [V] , 8-Pin #3 Power [W] , 8-Pin #3 Voltage [V] , GPU Voltage [V] ,
2021-03-06 01:08:22 ,        2115.0   ,               53.9   ,        66.3   ,                  70.0   ,               89.3   ,                 97.3   ,              12.1   ,              23.6   ,                12.1   ,             61.4   ,               12.1   ,             40.8   ,               12.1   ,             40.8   ,               12.1   ,        1.0620   ,
2021-03-06 01:08:23 ,        2115.0   ,               53.7   ,        66.1   ,                  70.0   ,               88.7   ,                 98.0   ,              12.1   ,              23.8   ,                12.1   ,             60.9   ,               12.1   ,             41.8   ,               12.1   ,             41.1   ,               12.1   ,        1.0620   ,
2021-03-06 01:08:24 ,        2115.0   ,               53.3   ,        65.8   ,                  70.0   ,               88.1   ,                 97.5   ,              12.1   ,              23.8   ,                12.1   ,             61.0   ,               12.1   ,             41.6   ,               12.1   ,             41.0   ,               12.1   ,        1.0620   ,
2021-03-06 01:08:25 ,        2085.0   ,               61.9   ,        73.3   ,                  74.0   ,              143.7   ,                145.2   ,              12.5   ,              54.1   ,                12.1   ,            132.9   ,               12.0   ,            155.1   ,               12.1   ,            137.0   ,               12.0   ,        1.0180   ,
2021-03-06 01:08:26 ,        2070.0   ,               64.8   ,        76.5   ,                  76.0   ,              146.6   ,                140.2   ,              12.5   ,              48.8   ,                12.1   ,            133.0   ,               12.0   ,            156.4   ,               12.1   ,            139.9   ,               12.0   ,        1.0180   ,
2021-03-06 01:08:27 ,        2070.0   ,               65.4   ,        77.5   ,                  76.0   ,              146.9   ,                138.5   ,              12.5   ,              47.9   ,                12.1   ,            132.3   ,               12.1   ,            155.7   ,               12.1   ,            140.4   ,               12.0   ,        1.0180   ,
2021-03-06 01:08:28 ,        2070.0   ,               66.3   ,        78.3   ,                  78.0   ,              147.1   ,                137.0   ,              12.5   ,              46.8   ,                12.1   ,            131.8   ,               12.1   ,            155.4   ,               12.1   ,            141.3   ,               12.0   ,        1.0180   ,
2021-03-06 01:08:29 ,        2070.0   ,               66.6   ,        78.6   ,                  78.0   ,              147.1   ,                136.5   ,              12.5   ,              46.5   ,                12.1   ,            131.4   ,               12.1   ,            155.0   ,               12.1   ,            141.1   ,               12.0   ,        1.0180   ,
2021-03-06 01:08:30 ,        2070.0   ,               66.8   ,        79.2   ,                  78.0   ,              147.0   ,                138.1   ,              12.5   ,              48.0   ,                12.1   ,            131.4   ,               12.1   ,            155.0   ,               12.1   ,            140.9   ,               12.0   ,        1.0180   ,
2021-03-06 01:08:31 ,        2070.0   ,               67.4   ,        79.3   ,                  78.0   ,              151.1   ,                138.6   ,              12.5   ,              47.4   ,                12.1   ,            133.3   ,               12.0   ,            154.5   ,               12.1   ,            143.5   ,               12.0   ,        1.0310   ,
2021-03-06 01:08:32 ,        2055.0   ,               67.3   ,        79.4   ,                  80.0   ,              148.9   ,                137.6   ,              12.5   ,              47.2   ,                12.1   ,            131.5   ,               12.1   ,            151.3   ,               12.1   ,            140.9   ,               12.0   ,        1.0120   ,
2021-03-06 01:08:33 ,        2055.0   ,               67.8   ,        79.7   ,                  80.0   ,              155.1   ,                141.1   ,              12.5   ,              48.0   ,                12.1   ,            133.6   ,               12.1   ,            153.9   ,               12.1   ,            143.4   ,               12.0   ,        1.0120   ,
2021-03-06 01:08:34 ,        2055.0   ,               67.9   ,        80.5   ,                  80.0   ,              154.8   ,                139.7   ,              12.5   ,              46.7   ,                12.1   ,            133.3   ,               12.1   ,            153.5   ,               12.1   ,            143.1   ,               12.0   ,        1.0120   ,
2021-03-06 01:08:35 ,        2055.0   ,               68.2   ,        80.6   ,                  80.0   ,              153.1   ,                140.4   ,              12.5   ,              47.6   ,                12.1   ,            132.5   ,               12.1   ,            152.9   ,               12.1   ,            141.7   ,               12.0   ,        1.0120   ,
2021-03-06 01:08:36 ,        2040.0   ,               68.3   ,        81.0   ,                  82.0   ,              156.1   ,                141.0   ,              12.5   ,              47.4   ,                12.1   ,            134.5   ,               12.1   ,            155.3   ,               12.1   ,            144.2   ,               12.0   ,        0.9930   ,
2021-03-06 01:08:37 ,        2055.0   ,               68.5   ,        81.2   ,                  82.0   ,              152.2   ,                140.5   ,              12.5   ,              47.9   ,                12.1   ,            132.7   ,               12.1   ,            153.4   ,               12.1   ,            141.4   ,               12.0   ,        1.0120   ,
2021-03-06 01:08:38 ,        2055.0   ,               68.7   ,        81.8   ,                  80.0   ,              153.3   ,                142.2   ,              12.5   ,              48.9   ,                12.1   ,            133.3   ,               12.1   ,            153.9   ,               12.1   ,            141.4   ,               12.0   ,        1.0120   ,
2021-03-06 01:08:39 ,        2040.0   ,               69.1   ,        82.1   ,                  82.0   ,              157.1   ,                141.5   ,              12.5   ,              47.7   ,                12.1   ,            134.6   ,               12.1   ,            155.3   ,               12.1   ,            144.3   ,               12.0   ,        0.9930   ,
2021-03-06 01:08:40 ,        2055.0   ,               69.1   ,        82.3   ,                  82.0   ,              156.9   ,                141.2   ,              12.5   ,              47.4   ,                12.1   ,            133.6   ,               12.1   ,            154.8   ,               12.1   ,            144.0   ,               12.0   ,        1.0120   ,
2021-03-06 01:08:41 ,        2055.0   ,               69.3   ,        82.9   ,                  82.0   ,              153.2   ,                139.5   ,              12.5   ,              46.5   ,                12.1   ,            132.3   ,               12.1   ,            153.8   ,               12.1   ,            141.5   ,               12.0   ,        1.0120   ,
(HWiNFO-style sensor log excerpt trimmed: one row per second from 01:08:42 to 01:09:14 on 2021-03-06, logging core clock around 2040-2115 MHz, temperatures in roughly the 57-84 C range, paired per-rail power [W] and voltage [V] readings, and core voltage between 0.993 and 1.093 V.)
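Logs like the excerpt above are easier to compare once condensed to min/avg/max per column. A rough stdlib-only sketch; the sample data and header names below are made up for illustration (a real HWiNFO CSV carries its own header row):

```python
import csv
import io
from statistics import mean

def summarize_log(csv_text, columns):
    """Summarize selected numeric columns of a HWiNFO-style CSV log.

    csv_text: raw CSV with one header row; columns: header names to report.
    Returns {column: (min, avg, max)}; non-numeric cells are skipped.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    out = {}
    for col in columns:
        vals = []
        for r in rows:
            try:
                vals.append(float(r[col]))
            except (ValueError, KeyError, TypeError):
                continue
        if vals:
            out[col] = (min(vals), mean(vals), max(vals))
    return out

# Tiny made-up sample in the same spirit as the log above
sample = """Time,GPU Clock [MHz],GPU Temp [C],Board Power [W]
01:08:42,2040,69.5,347
01:08:43,2055,69.7,352
01:08:44,2040,70.1,349
"""
print(summarize_log(sample, ["GPU Clock [MHz]", "Board Power [W]"]))
```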


----------



## J7SC

KedarWolf said:


> My clocks vary from 2012 and up, usually hover around 2050, 2070. 2130 at 1.083v. I top out playing the game at 75c. I don't use a curve, just use the slider to set my clocks.
> 
> I could fine-tune it better with a curve but going to wait for my waterblock etc.
> 
> I won't use the 1000w Kingpin BIOS until my waterblock and backplate arrive. With the Kingpin BIOS my RAM goes over 90c and that's not good.
> 
> When EKWB puts out an active backplate, I'll use that too.


How do you like the Strix so far? I've had mine for just over a month and love it - its air cooler is actually very good, though I received my EK block today...on air, the stock fans get audible above 70%, but the system is quite far away as it powers a 55-inch IPS HDR panel...did an hour or so of FS2020 at 4K Ultra w/ HDR this evening...fans at 97%, mild OC at 2130 / VRAM 1307...can and have gone higher, to 2190, but this was buttery smooth anyhow, and it was my first time in HDR in FS2020.

Re. temps, I do have a helper fan over the back VRAM...ambient was 23 C.


----------



## KedarWolf

J7SC said:


> How do you like the Strix so far ? I've had mine for just over a month and love it - It's air-cooler is actually very good, though I received my EK block today...on air, stock fans get audible after 70%, but the system is quite far away as it powers a 55 inch IPS HDR...did an hour or so of FS2020 on 4K Ultra w/ HDR this evening...fans at 97%, mild oc at 2130 / VRAM 1307...can and have gone higher to 2190, but this was buttery smooth anyhow, and first time on HDR in FS2020.
> 
> Re. temps, I do have a helper fan over the back VRAM...ambient was 23 C.
> 
> View attachment 2481377
> 
> 
> View attachment 2481378


It was a choice between the Strix and the FTW3 Ultra, but I heard about too many issues with the Ultra.

And I love my card; I just haven't pushed the core higher until I'm under water. But it does 2131 easily on the KPE 1000W BIOS at an even 1.050v, though it downclocks to a steady 2100.


----------



## J7SC

KedarWolf said:


> it was a choice between the Strix and the FTW3 Ultra, but I heard about too many issues from the Ultra.
> 
> And I love my card, just haven't pushed the core higher until I'm under water. But it does 2131 easy on the KPE 1000W bios at 1.050v even, downclocks to a steady 2100 though.


...even after w-cooling, I probably won't use the KPE XOC 1kW...maybe the KPE 520 W, or possibly the Strix XOC if it has matured a bit more and keeps some of the safeties...as mentioned before, this is also a productivity-use setup, and with the stock BIOS I have already seen 495 W to 503 W on full OC.

At the end of the day, these cards are incredible performers, though...and the loop I'm planning should keep it nice'n cool and fairly quiet (12 Arctic fans in push/pull for the GPU-only loop...)


----------



## Alex24buc

Alex24buc said:


> Anyone here with a palit gamerock oc? How was your experience with it? I changed my gaming pro oc with the gamerock and I wanted to know if I did a good choice because the gaming pro oc version was a weak card and although I said I would not buy a palit card anymore the gamerock oc version was the only one I could find for 300 euro more. Thanks!


Nobody?🙁


----------



## des2k...

I'm going to try another block for my card; this one looks promising, a new model I think.
It already has double the fins of the EK block and sits very close to the heat source.

Going to take a while to get the block, 3 weeks+




Barrow LRC2.0 full coverage GPU Water Block for ZOTAC 3090 X GAMING Aurora BS-ZOXG3090-PA2_BARROW 水冷智造专家


----------



## Padinn

Curious to see what temperatures KPE owners have under a 480ish watt load like Time Spy. I'm seeing around 62C in X1 and 65C for the GPU die. I have a thermal probe taking the temperature of air entering the radiator; it's around 35C, so my delta is +30. It's on a push/pull setup as exhaust on my O11 XL, and this is with fans at a fairly high level.
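One way to compare numbers like these across setups is a crude loop thermal resistance: delta-T divided by heat load. Crude, because die temperature also depends on block mounting and paste, not just the loop:

```python
def thermal_resistance(component_temp_c, intake_air_c, load_w):
    """Approximate C-per-watt from intake air to the measured component temperature."""
    return (component_temp_c - intake_air_c) / load_w

# Numbers from the post above: ~65 C die, ~35 C air into the rad, ~480 W load
print(round(thermal_resistance(65, 35, 480), 3))  # ~0.062 C/W
```

A lower figure means the loop (plus block mount) is shedding heat more effectively per watt, so it travels better between rooms and seasons than a raw die temperature does.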


----------



## TheNaitsyrk

Has anyone flashed the 1000W BIOS onto an FTW3 3090 Ultra?

Any BIOS I get from EVGA, and even the stock one, doesn't go beyond 400W, which is terrible.


----------



## Wihglah

TheNaitsyrk said:


> Anyone flashed 1000W BIOS onto FTW3 3090 Ultra?
> 
> Any BIOS I get from EVGA and even stock one doesn't go beyond 400W which is terrible.


Yes. It is effective.
Monitor your PCIe power carefully. The PCIe power lane on the card has a 10 A fuse. If you do not know what this means, then don't do it.


----------



## cazpy

Wihglah said:


> Yes. Its is effective.
> Monitor your PCIE power carefully. The PCIE power lane on the card has a 20Amp fuse. If you do not know what this means, then don't do it.


The PCIe fuse is 10 A; the three 8-pins have 20 A fuses each.
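To put those fuse ratings into watts: a back-of-the-envelope figure assuming a 12 V rail (real fuses tolerate brief excursions, and the slot/connector specs sit well below the fuse ceilings):

```python
def rail_limit_watts(fuse_amps, volts=12.0):
    """Theoretical sustained power through a rail before its fuse rating is reached."""
    return fuse_amps * volts

# 10 A PCIe slot fuse -> ~120 W ceiling (the PCIe spec itself allows only 66 W
# on the slot's 12 V pins, so the fuse is generous headroom, not a target)
print(rail_limit_watts(10))  # 120.0
# 20 A per 8-pin fuse -> ~240 W ceiling (the 8-pin connector spec is 150 W)
print(rail_limit_watts(20))  # 240.0
```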


----------



## Wihglah

cazpy said:


> the pcie fuse has 10amps, the 3 8 pins have 20amps each


Sorry, you are right. Typo, fixed.


----------



## J7SC

des2k... said:


> I'm going to try another block for my card, this one looks promising, new model I think.
> Already has double the fins of the EK block and very close to the heat source.
> 
> Going to take a while to get the block, 3weeks+
> 
> 
> 
> 
> Barrow LRC2.0 full coverage GPU Water Block for ZOTAC 3090 X GAMING Aurora BS-ZOXG3090-PA2_BARROW 水冷智造专家


Barrow makes some interesting w-cooling equipment, including a highly compact CPU block w/integrated mini-res > here . I checked their Strix block, but as you say delivery times can be an issue, especially these days...

... for the Strix (and possibly other 3090s), they are also about to bring out a new version > here


----------



## gfunkernaught

Stupid question:
Overclocking the VRAM will ultimately count toward the total board power limit, right? Since OCing the VRAM to +1000, my max boost clocks are lower than they were at +400. The effect is especially noticeable when the card is pinned at or close to 500W, as in Quake II RTX, Metro Exodus at 4K with no DLSS, and Battlefield 1 at 8K.


----------



## Falkentyne

gfunkernaught said:


> Stupid question:
> Overclocking the vram will ultimately count towards the total board power limit right? Since ocing the vram to +1000, my max boost clocks are lower than they were when the vram was set to +400. This effect is especially noticeable when the card is pinned at or close to 500w, like quake 2 RTX, metro exodus at 4k no DLSS, and battlefield 1 at 8k.


You need to look at the 8 pin and PCIE Slot Power readings very carefully and see if they changed or not. TDP% isn't the only limit at work.


----------



## gfunkernaught

Falkentyne said:


> You need to look at the 8 pin and PCIE Slot Power readings very carefully and see if they changed or not. TDP% isn't the only limit at work.


Yeah, I noticed pin #1 went up a bit, about the same as pin #2 now, 174W. Pin #3 is still at about 80W. The PCIe slot is still at 68-70W. Makes sense now. I even tried upping the PL to 51% and it didn't make a difference. Not that I'm seeing any performance issues: Quake II RTX at native 4K (no scaling) runs 44-60fps, pretty damn good. BF1 at 8K dropped as low as 35fps in one scene, but mostly 45-60fps. Haven't tried Cyberpunk yet.
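The check Falkentyne suggests can be sketched as flagging any rail that sits near its cap. The readings below are the ones from this post; the cap values are made-up illustrations, since the real per-rail limits depend on the BIOS:

```python
def binding_rails(readings_w, caps_w, frac=0.95):
    """Return the rails running at or above frac of their (assumed) per-rail cap."""
    return [rail for rail, watts in readings_w.items() if watts >= frac * caps_w[rail]]

# Readings from the post; caps are hypothetical (e.g. ~175 W per 8-pin on a raised PL)
readings = {"8pin_1": 174, "8pin_2": 174, "8pin_3": 80, "slot": 69}
caps = {"8pin_1": 175, "8pin_2": 175, "8pin_3": 175, "slot": 75}
print(binding_rails(readings, caps))  # ['8pin_1', '8pin_2']
```

If two of the three 8-pins are pinned while the third has headroom, raising the global TDP% slider won't help, which matches the observation that going to PL 51% made no difference.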


----------



## des2k...

gfunkernaught said:


> Stupid question:
> Overclocking the vram will ultimately count towards the total board power limit right? Since ocing the vram to +1000, my max boost clocks are lower than they were when the vram was set to +400. This effect is especially noticeable when the card is pinned at or close to 500w, like quake 2 RTX, metro exodus at 4k no DLSS, and battlefield 1 at 8k.


Total board power is, by definition, everything on the board 😁
e.g. RGB, fans, memory, core, VRM losses

With my 2x8-pin 390W vBIOS, just adding +200 mem reduced the power available to the core.

Not really a problem with the XOC when you have 600W+ available; +1500 mem uses about 66W gaming.
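The point above reduces to simple subtraction: whatever memory and board overhead draw comes straight out of the core's slice of the power limit. A toy illustration; the 66 W memory figure is from the post, while the 40 W overhead number is a made-up placeholder:

```python
def core_budget(total_limit_w, mem_w, misc_w):
    """Power left for the GPU core once memory and board overhead are paid for."""
    return total_limit_w - mem_w - misc_w

# 390 W vBIOS, ~66 W for a big memory OC, ~40 W assumed for fans/RGB/VRM losses
print(core_budget(390, 66, 40))  # 284 W left for the core -> boost clocks drop
# The same memory OC on a 600 W+ XOC BIOS barely matters:
print(core_budget(600, 66, 40))  # 494 W
```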


----------



## Falkentyne

gfunkernaught said:


> Yeah I noticed pin#1 went up a bit, about the same as pin#2 now, 174w. Pin#3 is still at about 80w. PCIe slot is still at 68-70w. Makes sense now. I even tried upping the PL to 51% and it didn't make a difference. Not that I'm seeing any performance issues, quake 2 RTX at native 4k (no scaling) 44-60fps, pretty damn good. BF1 at 8k dropped as low as 35fps in one scene, but mostly 45-60fps. Haven't tried cyberpunk yet.


And what is the TDP Normalized % reading at?
Look at that (must use HWiNFO).


----------



## PowerK

Has anyone here gotten their hands on the Galax RTX 3090 HOF? (I failed to find anything through search.)
I've been using an MSI RTX 3090 Suprim X quite happily for a few months now. Only yesterday I spotted an RTX 3090 HOF available and bought one (impulse purchase, I admit). It'll be arriving on Tuesday. I haven't seen any impressions or real-world benchmark data for this card. Any info?


----------



## jura11

Alex24buc said:


> Nobody?🙁


Hi there

I'm using a Palit RTX 3090 GamingPro, and on that GPU I'm running the XOC BIOS with no issues. I almost got the GameRock, but what put me off is the availability of waterblocks; I know Alphacool will be making one.

If you get that card or end up buying the GameRock, I would suggest trying the EVGA BIOS, which has a similar IO layout, so you shouldn't lose any DP or HDMI ports. The cooler should be okay as well, although I would highly recommend going with water cooling to get every last percentage of performance.

Hope this helps

Thanks, Jura


----------



## jura11

PowerK said:


> Has anyone of you here got hands on Galax RTX 3090 HOF? (I failed to see through search).
> I've been using MSI RTX 3090 Suprim X quite happily for a few months now. It was only yesterday, I could spot one RTX 3090 HOF available and bought one (impulse purchase, I admit). It'll be arriving on Tuesday. I haven't seen any impressions nor real-world benchmark data for this card. Any info?


You're one lucky son, mate, props to you

Are you planning to run it under water or just on air? Wish these GPUs were available here, man 😣

Hope this helps

Thanks, Jura


----------



## gfunkernaught

Falkentyne said:


> And what is TDP Normalized % reading at?
> Look at that. (Must use hwinfo).


I'm seeing a 1% increase in TDP Normalized going from +400 to +1000 on the VRAM while running the Bright Memory bench. Not sure it's a good idea to push 600W on my card...

Update:
I upped the PL to 55% and clocks went from the mid-to-upper 1900s to 2025-2077MHz with the VRAM at +1000. I didn't see much of an increase in performance going from the mid 1900s to the low 2000s. Didn't someone here mention that performance scaling decreases at or near 600W?
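The diminishing returns here can be sanity-checked with simple arithmetic: the clock bump itself is only a few percent, and near the power wall even that rarely translates fully into frames. Using 1950 → 2050 MHz as representative stand-ins for the clocks in this post:

```python
def pct_gain(before, after):
    """Percentage increase from before to after."""
    return (after - before) / before * 100.0

# Mid-1900s to low-2000s core clock is a modest bump to begin with:
print(round(pct_gain(1950, 2050), 1))  # ~5.1% more core clock
# If fps moved only ~1%, roughly a fifth of the clock gain showed up as frames,
# which is what poor scaling near the power/voltage limit looks like.
```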


----------



## PowerK

jura11 said:


> You one lucky son mate,props to you mate
> 
> You are planning to run on under water or just on air there? Wish availability of these GPU is here man😣
> 
> Hope this helps
> 
> Thanks,Jura


Just on air.
The last custom water loop I did was TITAN Xp SLI back in 2016/2017-ish. For me, the result was not very satisfactory.


----------



## martinhal

Does anyone know what the thickness of the VRAM thermal pads are on a Palit 3090 Gaming OC ?


----------



## J7SC

PowerK said:


> Has anyone of you here got hands on Galax RTX 3090 HOF? (I failed to see through search).
> I've been using MSI RTX 3090 Suprim X quite happily for a few months now. It was only yesterday, I could spot one RTX 3090 HOF available and bought one (impulse purchase, I admit). It'll be arriving on Tuesday. I haven't seen any impressions nor real-world benchmark data for this card. Any info?


...this time (=3090) around, there seem to be a lot of different versions of the Galax HoF, per screen below - which one did you get ? In any case, hope to see some posts about it here - I always liked the Galax HoF series in the past


----------



## PowerK

J7SC said:


> ...this time (=3090) around, there seem to be a lot of different versions of the Galax HoF, per screen below - which one did you get ? In any case, hope to see some posts about it here - I always liked the Galax HoF series in the past
> 
> View attachment 2481535


I got RTX 3090 HOF Limited Edition. (1875 one)


----------



## Falkentyne

The OC Lab edition is the only one truly worth the money, but no one can buy it normally. That leaves only the Limited Edition.


----------



## Alex24buc

Thanks for the answer, Jura


----------



## J7SC

PowerK said:


> I got RTX 3090 HOF Limited Edition. (1875 one)


....'grats ! That should be a very nice card. With 2080 Ti HoF / OcL, Galax also offered factory water-blocked versions, and they might add that option for retrofit for these 3090s



Falkentyne said:


> The OC Lab edition is the only one truly worth the money. But no one can buy it normally. That leaves only the Limited Edition


...Steve / GN recently mentioned that there were five OcLs in existence (at the time of his coverage, anyway), and those five have been doing the 3DMark record battles with KPE and select Asus cards. This came up because there apparently was a (false) rumour that only one or two HoF OcL cards existed and were being passed around between their HWBot top scorers.


----------



## sultanofswing

Got 3 3090 FTW3 Ultras tonight to test out. The first 2 were duds, but this last one I'm testing is passing multiple runs of Port Royal at [email protected], scoring over 15k each time.
This will probably be the one that gets a waterblock; I'll figure out something to do with the rest.
Stopped here for tonight. This beats my original FTW3 3090, which I had to use ice water with just to break 15k. This score is with the stock AIO, no cooling mods at all.


----------



## SolarBeaver

J7SC said:


> ...even after w-cooling, I probably won't use the KPE XOC 1kw...may be the KPE 520 W, or possibly the Strix XOC if it has matured a bit more and keeps some of the safeties...as mentioned before, this is also a productivity-use setup, and with stock bios I have already seen 495 W to 503 W on full oc.
> 
> At the end of the day, these cards are incredible performers, though...and the loop I'm planning should keep it nice'n cool and fairly quiet (12 Arctic fans in push pull for the GPU-only loop...)


Couldn't find any info about the Strix XOC BIOS; could anyone give me a link, please? Is it official?


----------



## gfunkernaught

sultanofswing said:


> Got 3 3090 FTW3 Ultras tonight to test out. First 2 were dud's but this last one I am testing so far is passing multiple runs of Port Royal at [email protected] each time scoring over 15k.
> This will probably be the one that gets a waterblock and the rest I will figure out something to do with them.
> Stopped here for tonight, This beats my original FTW3 3090 that I had to use ice water with just to break 15k. This score is with the stock AIO no cooling mods at all.


Can you share the link to that result?


----------



## des2k...

sultanofswing said:


> Got 3 3090 FTW3 Ultras tonight to test out. First 2 were dud's but this last one I am testing so far is passing multiple runs of Port Royal at [email protected] each time scoring over 15k.
> This will probably be the one that gets a waterblock and the rest I will figure out something to do with them.
> Stopped here for tonight, This beats my original FTW3 3090 that I had to use ice water with just to break 15k. This score is with the stock AIO no cooling mods at all.


2145 at 1068mv is about the same as my Zotac, but under TE GT2.
The sweet spot for mine is 2115 at 1018mv, +1590 mem.


----------



## sultanofswing

gfunkernaught said:


> Can you share the link to that result?


Here you go.

I scored 15 216 in Port Royal
Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## thrgk

Does the 3090 have a TJ Max so that it would shut off the computer if it got too hot? My pump **** the bed and wasn't running for like 10-12 seconds b4 I noticed the RGB lights turn red (temp sensor lights) and I shut down immediately. Could any dmg have happened / should I warranty?


----------



## J7SC

martinhal said:


> Does anyone know what the thickness of the VRAM thermal pads are on a Palit 3090 Gaming OC ?


...I'd like to add a similar question: does anyone know the thickness of the 'stock' VRAM thermal pads (front, back) for a 3090 Strix OC? I plan to reuse the OEM backplate for now, combined with a new GPU block, and have stocked up on various thicknesses of thermal pad material - just want to make sure about the right thickness. Thanks



sultanofswing said:


> Got 3 3090 FTW3 Ultras tonight to test out. First 2 were dud's but this last one I am testing so far is passing multiple runs of Port Royal at [email protected] each time scoring over 15k.
> This will probably be the one that gets a waterblock and the rest I will figure out something to do with them.
> Stopped here for tonight, This beats my original FTW3 3090 that I had to use ice water with just to break 15k. This score is with the stock AIO no cooling mods at all.


Nice! ...also goes to show the 'variance' even within a specific model range...been there plenty of times with tri- and quad-SLI (back when GPU-Z showed ASIC quality values, four of the same GPUs ranged from 68% to 93%...bummer).

I cracked the low 15,000s in Port Royal a couple of times on the Strix / stock BIOS / air / 24 C ambient / CPU on AIO, and wonder how that will improve with the dual-loop build commencing late next week. _Each loop_ will get 640x64 mm rads w/ Arctic P12 PWM in push/pull - hope that is plenty for an RTX 3090 and an AMD 3950X? I could go up to 840x60 per loop, but only with some difficulty re. build space.
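For sizing questions like this, a common (and very rough) rule of thumb is on the order of 100 W of heat per 120 mm of radiator length at moderate fan speed and ~10 C water-to-air delta. Actual capacity varies widely with rad thickness, fin density, and fan curve, so treat the numbers below as ballpark only:

```python
def rad_capacity_w(rad_len_mm, w_per_120mm=100.0):
    """Very rough radiator heat capacity from a ~100 W per 120 mm rule of thumb."""
    return rad_len_mm / 120.0 * w_per_120mm

# A 640 mm rad per loop (before counting push/pull headroom):
print(round(rad_capacity_w(640)))  # ~533 W
```

Against a ~500 W overclocked 3090 on its own loop, that lands at roughly break-even before the push/pull headroom, which is why a dedicated GPU loop of that size should run cool and quiet.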


----------



## gfunkernaught

sultanofswing said:


> Here you go.
> 
> I scored 15 216 in Port Royal
> Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Cool thanks. What offset did you use? I want to see if my card can bench at those clocks.


----------



## des2k...

J7SC said:


> ...I like to add a similar question: Does anyone know the thickness of the 'stock' VRAM thermal pads (front, back) for a 3090 Strix OC ? I plan to reuse the OEM backplate for now combined with a new GPU block and have stocked up on various-thickness thermal pad material - just want to make sure about the right thickness. Thanks
> 
> 
> 
> Nice ! ...also goes to show you the 'variance' even within a specific model range...been there plenty of times with tri and quad-sli (back when they had ASIC quality values in GPUz, four of the same GPUs ranged from 68 % to 93 %...bummer).
> 
> I cracked the lowish 15,000 PortRoyal a couple of times on the Strix / stock bios / air / 24 C ambient/ w/ CPU on AIO and wonder how that will improve with a dual-loop build commencing late next week. _Each loop_ will get 640x64 mm rads w/ Arctic P12 pwm in push/pull - hope that is plenty for RTX 3090 and AMD 3950X? I could go up to 840x60 per loop, but only with some difficulty re. build space


A dual loop maybe makes sense for benching.

For normal usage, the CPU loop will be a complete waste; my 3900X tops out around 200W in Prime95, and on the same loop (2x360 + 2x240) it adds maybe 1-2C to the water temp.


----------



## sultanofswing

gfunkernaught said:


> Cool thanks. What offset did you use? I want to see if my card can bench at those clocks.


I was using voltage curve editor.


----------



## gfunkernaught

des2k... said:


> dual loop maybe makes sense for benching,
> 
> normal usage,
> the cpu loop will be a complete waste, my 3900x goes 200w tops in Prime95 on the same loop,2x360 2x240 adds maybe 1c,2c to the water temp


I second that. I thought about using my 240 just for the CPU, but then I'd lose some cooling for the GPU. The CPU temp usually matches the GPU temp once the water temp equalizes. If I wanted to push the CPU to 5GHz+ then I'd dedicate the 240 to the CPU.


----------



## gfunkernaught

@sultanofswing Guess I'm CPU limited here:

I scored 15 056 in Port Royal
Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## sultanofswing

gfunkernaught said:


> @sultanofswing Guess I'm cpu limited here:
> 
> I scored 15 056 in Port Royal
> Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


I assume you have the normal Nvidia Control Panel settings set?
"Adjust image settings with preview"=Performance
"Texture Filtering"=High Performance


----------



## gfunkernaught

sultanofswing said:


> I assume you have the normal Nvidia Control Panel settings set?
> "Adjust image settings with preview"=Performance
> "Texture Filtering"=High Performance


NVIDIA driver defaults are Balanced and Quality for those settings you mentioned...
I set my texture filtering to High Quality, and the only other setting I change is Prefer Maximum Performance. 3DMark didn't complain about those settings. I have been using High Quality texture filtering for years. I prefer IQ over performance, since I can always make up for the loss in fps with OCing and keep the IQ up. Cake and eat it too.


----------



## Edge0fsanity

Can anyone confirm if there is something wrong with my Strix? I just put a brand new one into my test bench and loaded the Heaven benchmark. With the power slider and fans maxed out, I'm approaching 90C on the core within minutes... Seems like that is very wrong. Or is the cooler that bad? Both of my FTW3s with the XC3 BIOS never got above 76C, and it took hours of benching to hit that.


----------



## sultanofswing

gfunkernaught said:


> nvidia driver defaults are Balanced and Quality for those settings you mentioned...
> I set my texture filtering to high quality, and the only other setting I change is Prefer Maximum performance. 3dmark didn't complain about those settings. I have been using High Quality Tex Filtering for years. I prefer IQ over performance, since I could always make up for loss in fps with OC'ing, and keep the IQ up. Cake and eat it too.


3dmark will not complain about any of those settings but when benchmarking the settings I mentioned will gain you points in the benchmark.


----------



## gfunkernaught

sultanofswing said:


> 3dmark will not complain about any of those settings but when benchmarking the settings I mentioned will gain you points in the benchmark.


Not true, 3dmark will not validate a score if some settings aren't default.


----------



## sultanofswing

gfunkernaught said:


> Not true, 3dmark will not validate a score if some settings aren't default.


3dmark will validate with the settings I am talking about. It's 100% Legit and that is how everyone runs their settings when testing 3dmark.
You asked about your CPU holding you back and I responded with the 2 well known settings that people use for 3dmark that will increase your score and not invalidate the benchmark.


----------



## Falkentyne

Edge0fsanity said:


> Can anyone confirm if there is something wrong with my Strix? Just put a brand new one into my test bench and loaded heaven benchmark. With the power slider and fans maxed out i'm approaching 90C on the core within minutes... Seems like that is very wrong. Or is the cooler that bad? Both of my FTW3s with the XC3 bios never got above 76C and it took hours of benching to hit that.


Repaste. You can also test for heatsink flatness and pressure (keep in mind the core has a convex shape) with this.



IC Contact Test and Analysis Kit – Innovation Cooling



And if you can't get thermal pad measurement results (they have been posted but the results are scattered and really hard to find; only the 3090 FE has measurements done with a caliper), you can always buy this and make your own tests.



https://www.amazon.com/gp/product/B08KPZV9KR/


----------



## J7SC

des2k... said:


> dual loop maybe makes sense for benching,
> 
> normal usage,
> the cpu loop will be a complete waste, my 3900x goes 200w tops in Prime95 on the same loop,2x360 2x240 adds maybe 1c,2c to the water temp


...tx, but have been doing dual loops for a variety of reasons on both private and commercial builds since '13. Helps also when exchanging components more frequently.


----------



## Lord of meat

gfunkernaught said:


> Found some more odd behavior with my card. Set my curve to [email protected], PL 50%, load puts it at 2100mhz. Played quake 2 rtx, at 100% load and 500-505w, core clock got crushed down to as low as 1965mhz with the max clock at 1980mhz. I play with dynamic res since I want the framerate at 60fps. Core temp hit 42c. Thought to myself, hey not bad right? Crash. Screen went blank, PC rebooted. This has to be power related right? Not enough? Too much?


Sounds like not enough power. Try just a normal OC on the core, +100 on core and +450 on memory with the power limit maxed (assuming you have the regular BIOS). See if that's stable, then find the voltage for the max clock speed. That would be a good base.
Most cards are not stable over a constant 2100 in games that have heavy load. I can get mine to do 2160 in Port Royal, but in real games it's 2050-2085.
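The dial-in approach described above (start with a modest offset, stress it, then nudge up until it fails) can be sketched as a simple search. `is_stable` is a hypothetical stand-in for a real stress pass such as looping Port Royal or a heavy game:

```python
# Binary-search sketch for finding the highest stable core offset.
# is_stable() is a hypothetical stand-in: in practice you would run a
# stress test (e.g. a looped Port Royal) at each offset and watch for
# crashes or artifacts.

def find_max_stable_offset(is_stable, lo=0, hi=300, step=15):
    """Return the highest offset (MHz) in [lo, hi] that passes is_stable,
    probing on a `step`-MHz grid the way you'd nudge a slider."""
    best = None
    while lo <= hi:
        mid = lo + ((hi - lo) // (2 * step)) * step  # midpoint, on the grid
        if is_stable(mid):
            best = mid
            lo = mid + step
        else:
            hi = mid - step
    return best

# Example with a fake card that is stable up to +135 MHz:
if __name__ == "__main__":
    print(find_max_stable_offset(lambda mhz: mhz <= 135))  # 135
```

Same idea as moving the slider by hand, just written down; a real run would need a soak test at the final offset since quick passes miss marginal instability.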


----------



## Edge0fsanity

Falkentyne said:


> Repaste. You can also test for heatsink flatness and pressure (keep in mind the core has a convex shape) with this.
> 
> 
> 
> IC Contact Test and Analysis Kit – Innovation Cooling
> 
> 
> 
> And if you can't get thermal pad measurement results (they have been posted but the results are scattered and really hard to find; only the 3090 FE has measurements done with a caliper), you can always buy this and make your own tests.
> 
> 
> 
> https://www.amazon.com/gp/product/B08KPZV9KR/


Thanks, I'm gonna grab both of those. If the heatsink isn't shaped right it wouldn't be the first time I've seen this. Frustrating as hell since all I'm doing is binning cards and flipping them. Won't even use the heatsink assuming the card is decent since it'll go under water. Going to try a simple repaste for now since it doesn't look like the strix uses thermal putty.


----------



## gfunkernaught

Lord of meat said:


> Sounds like not enought power. Try to just do a normal oc on the core +100 and +450 mem with power limit maxed *assuming you have regular bios). See if that's stable and then get the voltage for the max clock speed. That would a good base.
> Most cards are not stable at over 2100 constant in games that have heavy load. I can get mine to do 2160 in port royal but in real games its 2050-2085.


Definitely not enough power. I'm using the xoc bios 1kw, PL set to 50%. I raised it up to 55% and the voltage slider to 100, upped the core offset to +160 and the average clocks went up to mid 2000s in quake 2 rtx, same as BF1 8k. I saw no performance boost. In fact, I saw performance loss in BF1 with the latter setting. I'll just stick to 500w for gaming. Benching quick runs with higher PLs is fine.


----------



## gfunkernaught

sultanofswing said:


> 3dmark will validate with the settings I am talking about. It's 100% Legit and that is how everyone runs their settings when testing 3dmark.
> You asked about your CPU holding you back and I responded with the 2 well known settings that people use for 3dmark that will increase your score and not invalidate the benchmark.


LOD setting can invalidate a score. I remember once I had it set to clamp. High performance texture filtering just lowers the quality of the filtering, gaining fps. The emphasis on performance setting is just an overall setting. Once you set something in "Manage 3D Settings" that emphasis setting is de-selected. So it is really just one setting, Texture Filtering Quality. I ran PL with the same clocks and gained 21 points. I think PL is mostly GPU bound, but the gpu needs the data from the cpu to render it.








I scored 15 075 in Port Royal


Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## sultanofswing

gfunkernaught said:


> LOD setting can invalidate a score. I remember once I had it set to clamp. High performance texture filtering just lowers the quality of the filtering, gaining fps. The emphasis on performance setting is just an overall setting. Once you set something in "Manage 3D Settings" that emphasis setting is de-selected. So it is really just one setting, Texture Filtering Quality. I ran PL with the same clocks and gained 21 points. I think PL is mostly GPU bound, but the gpu needs the data from the cpu to render it.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 075 in Port Royal
> 
> 
> Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


I never mentioned anything about LOD settings. Only 2 settings I mentioned were Image quality and texture filtering, not sure why you are trying to throw all these other settings into the mix.


----------



## gfunkernaught

sultanofswing said:


> I never mentioned anything about LOD settings. Only 2 settings I mentioned were Image quality and texture filtering, not sure why you are trying to throw all these other settings into the mix.


I know you didn't, and I wasn't... just discussing.

The image quality slider setting becomes null when you adjust individual texture filtering settings under Manage 3D Settings. It doesn't seem to help much, for me at least, gaining only 21 points. I think your score was much higher because of the faster CPU, and possibly just a better card than mine, not texture filtering.


----------



## jura11

Finally broke the 15100-point barrier in Port Royal. Tried +150MHz on core and 1495MHz on VRAM, and the total score is 15136 points.









I scored 15 136 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10




www.3dmark.com





Finally fixed my 3DMARK too,not sure what caused these issues but now HW monitoring works hahahaha

Hope this helps

Thanks, Jura


----------



## gfunkernaught

jura11 said:


> Finally broke 15100 point barrier in Port Royal,tried +150MHz on core and 1495MHz on VRAM and total score is 15136 points
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 136 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Finally fixed my 3DMARK too,not sure what caused these issues but now HW monitoring works hahahaha
> 
> Hope this helps
> 
> ThanksJura


Nice! I need to cool my set up better if I am to get better indoor scores lol.


----------



## Beagle Box

Edge0fsanity said:


> Thanks, I'm gonna grab both of those. If the heatsink isn't shaped right it wouldn't be the first time I've seen this. Frustrating as hell since all I'm doing is binning cards and flipping them. Won't even use the heatsink assuming the card is decent since it'll go under water. Going to try a simple repaste for now since it doesn't look like the strix uses thermal putty.


While my temps weren't nearly as bad as yours, my Strix was very poorly assembled. Some pads were not even touching their targets. The cooler wasn't mounted under equal pressure side-to-side. It literally looked like someone sat on it. The pads are gooey and easy to rip, so be careful during disassembly.


----------



## KedarWolf

Beagle Box said:


> While my temps weren't nearly as bad as yours, my Strix was very poorly assembled. Some pads were not even touching their targets. The cooler wasn't mounted under equal pressure side-to-side. It literally looked like someone sat on it. The pads are gooey and easy to rip, so be careful during disassembly.


Does anyone know all the pad thicknesses on a Strix? I have Thermalright 12.8 W/mK pads and would like to repaste the core and replace the pads.


----------



## J7SC

KedarWolf said:


> Does anyone know all the pad thicknesses on a Strix? I have Thermalright 12.8 mh/k pads and would like to repaste the core and replace the pads.


....yeah, I asked that question earlier today here as well...I think the pads on the back are 1mm per YT vid, but not sure (never mind the front pads)


----------



## Falkentyne

KedarWolf said:


> Does anyone know all the pad thicknesses on a Strix? I have Thermalright 12.8 mh/k pads and would like to repaste the core and replace the pads.


Someone needs to take caliper measurements of all the pad locations for the stock pads (assuming the original heatsink will be reapplied back).
Then add them to some sort of database or forum post somewhere so that this information can be easily found.


----------



## Edge0fsanity

Beagle Box said:


> While my temps weren't nearly as bad as yours, my Strix was very poorly assembled. Some pads were not even touching their targets. The cooler wasn't mounted under equal pressure side-to-side. It literally looked like someone sat on it. The pads are gooey and easy to rip, so be careful during disassembly.


Already torn down and reassembled. Small rip in one pad over a couple of VRMs, but it should be fine. Replaced the paste with TG Kryonaut, then reassembled with even pressure all around. No difference in temps; it shoots right up to 90C within a few minutes if I max out the power sliders. At stock with fans maxed, it's pushing 80C within 10 minutes. All this on an open-air test bench with fans blowing on it. The cooler is defective and needs an RMA. Ugh.


----------



## sultanofswing

gfunkernaught said:


> Nice! I need to cool my set up better if I am to get better indoor scores lol.
> View attachment 2481672


Having that OSD running lowers your score as well.


----------



## jura11

Got another RTX 3090 GamingPro here for testing. It's still running on air while I assess whether it's worth keeping or should just be returned.

It's for a friend's loop which I'm supposed to be building for him; he ordered a Bykski waterblock which should be here by next week.

Will do more tests later today or tomorrow.

Hope this helps

Thanks, Jura


----------



## Falkentyne

Edge0fsanity said:


> already torn down and reassembled. Small rip in one pad over a couple VRMs but it should be fine. Replaced paste with TG kryonaut then reassembled with even pressure all around. No difference in temps, shoots right up to 90C within a few mins if i max out the power sliders. At stock with fans maxed its pushing 80C within 10 mins. All this in an open air test bench with fans blowing on it. The cooler is defective and needs an rma. ugh.


Did you do a pressure test first?

If you don't have contact paper (you can get free samples of "Ultra low Prescale" from sensorprod.com), you can just apply a not too small drop of Kryonaut (or toothpaste, or mayonnaise) in the middle of the core, tighten the heatsink then immediately unscrew it and see how it spreads. If it doesn't spread, you know you have a problem with core to heatsink contact.

Fujifilm contact paper (Innovation Cooling sells it under their own brand) is invaluable for testing this stuff.
If the pressure paper shows proper contact but you still get 90C, then you know the cooler itself is defective. This is extremely rare; I can count the number of people I've seen with broken heatpipe coolers on one hand.


----------



## Falkentyne

gfunkernaught said:


> Nice! I need to cool my set up better if I am to get better indoor scores lol.
> View attachment 2481672


You really should relabel those fields. No need to have "Memory Junction Temperature" taking up 25% of the width...


----------



## long2905

J7SC said:


> ...tx, but have been doing dual loops for a variety of reasons on both private and commercial builds since '13. Helps also when exchanging components more frequently.


Have you looked at quick disconnect fittings? Those will do wonders when changing out components. I will soon move from an ITX to an ATX build for better cooling capacity and will be getting 2 pairs of those fittings for the CPU and GPU.


----------



## J7SC

long2905 said:


> have you looked at quick disconnect fittings? those will do wonder with changing out components. I will soon move from ITX to ATX build for better cooling capacity and will be getting a 2 pairs of those fittings for the CPU and GPU.


...yes, I've been using quick disconnects (mostly Koolance) for years; they're great


----------



## Lord of meat

gfunkernaught said:


> Definitely not enough power. I'm using the xoc bios 1kw, PL set to 50%. I raised it up to 55% and the voltage slider to 100, upped the core offset to +160 and the average clocks went up to mid 2000s in quake 2 rtx, same as BF1 8k. I saw no performance boost. In fact, I saw performance loss in BF1 with the latter setting. I'll just stick to 500w for gaming. Benching quick runs with higher PLs is fine.


Sounds good. I don't think you would even see a difference at higher numbers, plus the power draw is not worth it in my opinion unless you care about score numbers.


----------



## gfunkernaught

sultanofswing said:


> Having that OSD running lowers your score as well.


Well I'll be damned...OSD takes up that much??? I wonder if framebuffer/viewport/vector/raster makes a difference...


----------



## sultanofswing

gfunkernaught said:


> Well I'll be damned...OSD takes up that much??? I wonder if framebuffer/viewport/vector/raster makes a difference...
> View attachment 2481687


Yes, you are polling sensors and whatnot, and it ends up hurting performance.
When running any type of benchmark you really do not want anything else running if you want the best score.
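As a rough illustration of why background polling costs score, here is a toy render loop that pays a small penalty for each sensor read. The per-poll cost and interval are made-up numbers, not measurements of any real OSD:

```python
# Toy render loop showing why sensor polling costs frames: every poll
# steals time from rendering. The poll cost and interval are synthetic
# assumptions, not measurements of any real OSD.
import time

def frames_rendered(duration_s, poll_every_n=0, poll_cost_s=0.0):
    """Spin a fake render loop for duration_s; optionally pay a
    poll_cost_s penalty every poll_every_n frames (the 'sensor read')."""
    frames = 0
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        frames += 1
        if poll_every_n and frames % poll_every_n == 0:
            time.sleep(poll_cost_s)  # stand-in for a sensor query
    return frames

if __name__ == "__main__":
    clean = frames_rendered(0.2)
    polled = frames_rendered(0.2, poll_every_n=100, poll_cost_s=0.001)
    print(clean, polled)  # polled comes out lower
```

The real-world effect is smaller than this toy makes it look, but the direction is the same: anything sampling sensors during a run shaves points.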


----------



## gfunkernaught

sultanofswing said:


> Yes, you are polling sensors and what not and it ends up hurting performance.
> When running any type of benchmark you really do not want to have anything running to get the best score.


I see. Do you remember a Win98 "shell" called GameOS? It is relevant lol.


----------



## goingnorth

Figured it out.


----------



## WillP

Thanks to @bmgjet for pointing out that my loop was running too hot a mere, oh, 2 months ago, and thanks to everyone I've picked up tips from here. My KFA2 flashed with the Kingpin 1000W XOC is still running strong, and after a repaste on my card, and fixing AlphaCool's underestimate of the thermal pad thickness needed on their product, I did some benching last night.

Pretty happy with my 12th and 3rd for my processor on the 2 tests I ran, given I'm a relative novice. I have the good fortune to live in the UK where the weather is bloody horrible at the moment, so my water temp was 6 Celsius for these runs, with a max GPU temp of 27 I think. Running +1300 on the VRAM and 212 on the clock, with a crude CPU OC to 5.2GHz. I know I could do better with some more time, knowledge and patience, but there's a limit to how long I'm prepared to have the windows open in sub-zero conditions. Anyway, cheers folks.

Edit: top one is TS Extreme, bottom is Port Royal, if it wasn't obvious...


----------



## gfunkernaught

The samples came in. Two of them are 3 W/mK and the others are 5 W/mK. Thickness ranges from 0.8-2mm. I measured the thickness, with the protective film/tape still on, using a micrometer. Not sure if it will be worth trying since the EK pads are already 6 W/mK and not the best. But hey, they're massive!

I had my contact at NEDC look at the specs for EK and other pads, and he said that if the testing method isn't listed, the advertised thermal conductivity rating is questionable.


----------



## des2k...

gfunkernaught said:


> View attachment 2481759
> The samples came in. Two of them are 3W/mk and the other are 5W/mk. Thickness ranges from .8-2mm. I measured the thickness with the protective film/tape with a micrometer. Not sure if it will be worth trying since the ek pads are already 6w/mk and not the best. But hey they're massive!
> 
> I had my contact from NEDC look at the specs for EK and other pads, and he said if the testing method isn't listed, the advertised thermal conductivity rating is questionable.


That's a lot of thermal pads for free. Too bad they are not a higher rating, but past a point they become very expensive.
Ex. some 17.8 W/mK pads, about 0.8 W/mK more than Fujipoly:


https://canada.newark.com/multicomp-pro/mp-tg-a1780-150-0-5/thermal-pad-150mm-x-150mm-x-0/dp/49AH9351
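For a feel of what those W/mK ratings actually buy, the conduction drop across a pad is ΔT = P·t/(k·A). A quick sketch with assumed, purely illustrative power and footprint values:

```python
# Back-of-envelope temperature drop across a thermal pad:
# dT = P * t / (k * A). All numbers are illustrative assumptions,
# not measurements of any card.

def pad_delta_t(power_w, thickness_mm, k_w_per_mk, area_mm2):
    """Temperature drop (C) across a pad conducting power_w watts."""
    t_m = thickness_mm / 1000.0
    a_m2 = area_mm2 / 1e6
    return power_w * t_m / (k_w_per_mk * a_m2)

# 5 W through a 1.5 mm pad over a 10x10 mm footprint:
for k in (6.0, 12.0, 17.8):
    print(f"{k:>5} W/mK -> {pad_delta_t(5, 1.5, k, 100):.1f} C")
# 6 -> 12.5 C, 12 -> 6.2 C, 17.8 -> 4.2 C: diminishing returns at high k
```

Which is why the jump from 6 to 12 W/mK is worth paying for, while the jump from 12 to 17.8 buys only a couple of degrees under these assumptions.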


----------



## gfunkernaught

des2k... said:


> that's alot of thermal pads for free  too bad they are not higher rating but after one point they become very expensive,
> ex. some 17.8w/mk pads , about .8w/mk more than fuji poly
> 
> 
> https://canada.newark.com/multicomp-pro/mp-tg-a1780-150-0-5/thermal-pad-150mm-x-150mm-x-0/dp/49AH9351


Oh yeah, and others that I've seen on Mouser. I've seen 30 W/mK go for $400 a sheet. I'm sure there is a way to calculate how many watts each component of the VRM can produce at stock, then calculate the added watts from overclocking. Have you ever heard of a MOSFET generating 13W of heat?
So the part numbers he sent me for the samples I'd be getting don't seem to match the actual item descriptions. He said he was sending this:
.080" +/- .008" GAP PAD 5000S35, GP5000S35-0.080-02-0816-NA, LIGHT GREEN, 8" X 16" SHEET
.080" +/- .008" GAP PAD 3500ULM, GP3500ULM-0.080-02-0816, GRAY, 8" X 16" SHEET
.040" +/- .004" GAD PAD 5000S35, GP5000S35-0.040-02-0816-NA, LIGHT GREEN, 8" X 16" SHEET
.040" +/- .004" GAP PAD 3500ULM, GP3500ULM-0.040-05-7.87" X 15.75", GRAY, 7.87" X 11.75"

But I have a yellow, 2 light blue, and 1 gray. So I really don't know what their thermal conductivity actually is. There is no writing on the packaging either.
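On the question above of how much heat a single power stage can dump: a crude estimate is board power times (1 - efficiency), split across the phases. The phase count and efficiency below are assumed round numbers for illustration, not specs for any particular 3090:

```python
# Crude per-phase VRM heat estimate: total board power times
# (1 - efficiency), split evenly across phases. The phase count and
# efficiency are assumed round numbers, not specs for any real card.

def watts_per_phase(board_power_w, phases, efficiency=0.90):
    """Heat dissipated in each power stage, assuming even current sharing."""
    loss_w = board_power_w * (1.0 - efficiency)
    return loss_w / phases

# A 500 W card with 18 core phases and 90%-efficient stages:
print(round(watts_per_phase(500, 18), 2))  # ~2.78 W per stage
```

By this rough math a single MOSFET dissipating 13 W would need either very few phases, badly unbalanced current sharing, or dreadful efficiency; a few watts each is the more plausible ballpark.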


----------



## Lobstar

EVGA_JacobF has responded to the 3090 FTW3U power balancing issues in this post on their forum for the XOC bios. EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA (Page 155) - EVGA Forums


----------



## sultanofswing

Lobstar said:


> EVGA_JacobF has responded to the 3090 FTW3U power balancing issues in this post on their forum for the XOC bios. EVGA GeForce RTX 3090 FTW3 XOC BIOS BETA (Page 155) - EVGA Forums


The way this is being handled sounds like they know what the hardware issue is and are offering people RMAs.
If it were a simple BIOS issue (which we know it's not), they would just post a new BIOS.


----------



## Lobstar

sultanofswing said:


> If it were a simple BIOS issue (which we know it's not) they would just post a new BIOS.


The second bullet point in the response email is what makes me think this will be something bios related. They are pre-loading the XOC straight from the RMA department.



> Hello,
> 
> This offer is ONLY for owners of the EVGA GeForce RTX 3090 FTW3 ULTRA (24G-P5-3987-KR). EVGA GeForce RTX 3090 FTW3 ULTRA owners may request a 1-time replacement with a new card that may allow you to reach a higher power usage when overclocking.* Please note the following:
> 
> • There are NO other benefits that should be expected by replacing your card.
> • The new replacement card will already have the XOC BIOS applied to the secondary BIOS position.
> • There are no expected changes or improvements to the stock running speed, boost clock, or stock overclocking on the default BIOS.
> • When using the secondary BIOS, the power usage of the card may allow you to reach a higher maximum while overclocking.* As is the nature with overclocking, EVGA makes no guarantee or specific claims that owners will see an improvement to boost clocks, overclocking, or performance compared to their currently owned product.
> • This offer is limited to Standard RMA only. Unfortunately, cross-shipping and EVGA Advanced RMA (EAR) are not available for this offer.
> • EVGA cannot offer an ETA on when the RMA will be approved. Once approved, you will receive an email when it is time to send in your card.
> 
> *As detected by popular VGA monitoring applications.
> 
> To see if you may benefit by replacing your card, please reply with the following:
> 
> • Your max wattage when using the XOC BIOS.
> • A screenshot of your max wattage while using the card under load. (EVGA Precision or GPUZ)
> • A list of games/applications tested with the card, including display resolution.
> 
> We will reply to you with further instructions. Please note there is no ETA on a reply or replacement.
> 
> Thanks
> EVGA


----------



## sultanofswing

Lobstar said:


> The second bullet point in the response email is what makes me think this will be something bios related. They are pre-loading the XOC straight from the RMA department.


Maybe a different XOC BIOS; the current XOC BIOS that is out now isn't going to "fix" anything.


----------



## Lobstar

sultanofswing said:


> Maybe a different XOC BIOS, the current XOC Bios that is out now isn't going to do anything to "fix" anything.


That's exactly my thought. It might in fact have negative effects on properly working cards, so they don't want the confusion of different BIOS versions that do the same thing floating around in the wild without an easy way for consumers to tell them apart. They probably want to avoid a spate of defective RMAs from people wrongly applying such a BIOS expecting even more performance from their correctly functioning cards. Obviously this is all speculation at this point.


----------



## gfunkernaught

Here's another website that sells thermal pads/paste, high end, probably expensive, and you have to ask them for a quote 









High Performance Thermal Management – EC360







www.extremecool360.com


----------



## J7SC

gfunkernaught said:


> Here's another website that sells thermal pads/paste, high end, probably expensive, and you have to ask them for a quote
> 
> 
> 
> 
> 
> 
> 
> 
> 
> High Performance Thermal Management – EC360
> 
> 
> 
> 
> 
> 
> 
> www.extremecool360.com


Nice info. 

_Now here's a question or two for all the folks who know about these things:_ Can you 'stack' thermal pads (say 2x 1mm to make a single 2mm), assuming one is careful about not trapping air in between? Also, how about taking the right-thickness thermal pad and adding a bit of Kryonaut or MX4/5?
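On the stacking question: pads act like thermal resistors in series, so two 1mm layers of the same material match one 2mm pad except for the extra layer-to-layer interface (the trapped-air risk mentioned above). A sketch with illustrative numbers, where the interface term is a guess:

```python
# Stacked pads behave like thermal resistors in series: R = t/(k*A) per
# layer, plus an interface resistance between layers (the trapped-air
# risk). All values are illustrative assumptions.

def pad_resistance(thickness_mm, k_w_per_mk, area_mm2):
    """Thermal resistance (K/W) of one pad layer."""
    return (thickness_mm / 1000.0) / (k_w_per_mk * area_mm2 / 1e6)

area = 100.0  # 10x10 mm contact patch, for example
single_2mm = pad_resistance(2.0, 12.0, area)
# Two 1 mm layers plus a guessed 0.05 K/W layer-to-layer interface:
stacked = 2 * pad_resistance(1.0, 12.0, area) + 0.05

print(f"single 2mm: {single_2mm:.2f} K/W, stacked 2x1mm: {stacked:.2f} K/W")
```

So stacking works, just slightly worse than a single pad of the right thickness, and only the interface term separates them; soft, conformable pads keep that term small.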


----------



## yzonker

Lobstar said:


> That's exactly my thought. It might in fact have negative effects on properly working cards so they don't want to have the confusion of having different BIOS versions that do the same thing without a way to differentiate easily by the consumer easily available in the wild. They probably want to avoid a spat of defective RMAs from people wrongly applying such a bios expecting even more performance from their correctly functioning cards if that would lead to such a thing. Obviously this is all speculation at this point.


Well they could just give out a "private" bios update, but we know how that works out...


----------



## Chamidorix

Naw, it's just a revision of the load-balancing chip firmware, which can't be updated without specialized tools. There are already newer FTW3s that don't have issues hitting 500W (and they keep the PCIe ratio lower); this is just a filter to minimize how many RMAs they have to process.


----------



## PowerK

She came in today. Looks beautiful!





























Compared to my other RTX 3090 Suprim X, HOF is slightly longer. Both seem to have the same thickness. And HOF is definitely heavier.


----------



## EarlZ

@PowerK

Lovely cards. Is the MSI also a 3090? (I can't read the sticker.) If it is, I'd like to know the GPU temps you are getting for the MSI and the HOF with all-stock profiles.


----------



## Nizzen

PowerK said:


> She came in today. Looks beautiful!
> 
> View attachment 2481833
> 
> 
> View attachment 2481834
> 
> 
> View attachment 2481835
> 
> 
> 
> Compared to my other RTX 3090 Suprim X, HOF is slightly longer. Both seem to have the same thickness. And HOF is definitely heavier.
> 
> View attachment 2481836
> 
> 
> View attachment 2481837
> 
> 
> 
> View attachment 2481838


Can you please post VRAM temperatures? Maybe a drawback if the VRAM temp is a bit high, and it's hard to cool the backplate when it isn't flat?

Like using the mp5works backplate cooler or other heatsinks.

I ordered this card too, but no ETA yet.


----------



## PowerK

EarlZ said:


> @*PowerK
> 
> lovely cards, is the MSI also a 3090? ( I cant read the sticker to know ) but if is id like to know the GPU temps you are getting for the MSI and the HOF with all stock profiles*


Yeah, they both are 3090s.
At a quick glance, it seems that HOF is running about 4-5C cooler than Suprim.


----------



## PowerK

Nizzen said:


> Can you please post vram temperature. Maybe a drawback if the vram temp is a bit high, and it's hard to cool the backplate when it isn't flat?
> 
> Like using mp5worksbackplate cooler or other heatsinks.
> 
> I ordered this card too, but no eta yet.


I never quite paid attention to VRAM temp. I had to install HWiNFO64 for this. (I primarily use AIDA64 for system monitoring).
According to HWiNFO64, the max. GPU Memory Junction Temperature recorded during Time Spy Stress Test run is 68C. (Room temp is 26C).


----------



## jura11

@PowerK 

Congratulations on the HOF, mate. That GPU looks awesome, and I'm looking forward to benchmarks and your view on it as well.

I can't deny I'm a bit jealous of that GPU, mate; I've not seen them for sale in the UK yet.

Hope this helps

Thanks, Jura


----------



## jura11

As you guys know, I got another Palit RTX 3090 GamingPro here for testing, and plans have changed a bit.

The plan was to put that Palit RTX 3090 GamingPro in my friend's loop, but my friend bought himself a Zotac RTX 3090 Trinity OC which he will more likely be keeping, so this Palit RTX 3090 GamingPro looks like it's staying in my PC hahaha. Will tear down the loop tomorrow or during the week and put that GPU under water.

Under air I would say it's very noisy above 70%; on a 50% fan profile the GPU is not loud, I would even say pretty quiet, but the massive almost 3-slot design is a no-no for me. Didn't OC much, but initial results are not bad at all.

Temperatures I have seen were 72-75°C at the highest, in rendering only; didn't test in gaming or Port Royal yet. VRAM temperatures have been 82°C max with a 1250MHz OC on the VRAM; didn't try to go higher. I will probably flash the KFA2 390W BIOS or another XOC BIOS on there, let's see; not decided if I will keep it for long or if it will go in future builds.

Hope this helps

Thanks, Jura


----------



## undecided65

Falkentyne said:


> Did you do the PCIE slot shunt? You did all six shunts, right? One shunt out of the six is hidden in the middle of nowhere. You said you desoldered the original shunts and put on 3 mOhm ones? Can you post your hwinfo screenshot showing with the "1.67x multiplier" for all the power limits added, while you are pulling max power you can before getting Perfcap: Power, for all the power rails?
> 
> For fixing your thermal issues:
> 
> Just follow my guide and repad your card. Trust me on this. This will fix _ALL_ percap Thermal flags as well as prevent "stuttering" bugs that I had on my card before I ever even opened the card up. It may even let you push the clocks higher!
> This works. Perfectly.
> 
> 1) Buy four packs of 1.5mm thickness, Gelid 12 w/mk pads, Thermalright Odyssey 12 w/mk pads or Fujipoly 11 w/mk pads. You will need them for re-padding the entire card (which you probably should do if you're repasting anyway. If you spent $1500+ on a FE, you can spend a little bit more on quality pads. I _believe_ all of these pads are from the same supplier (at least the Thermalright and Fujipoly pads are). You need 4 packs because the amount of pad you get is extremely small. The fujipoly 60mm * 50mm pad is barely larger than a saltine cracker. Don't buy the 17 w/mk pads, they literally crumble if you even open the card after a repad once--even though they offer the best thermal performance, they are not re-usable and are expensive.
> 
> 1.5mm pads work on both sides of the card and make fine contact.
> 
> 2) The heatsink side of the card is pretty self explanatory. Just use what is shown on Igor's lab page here. BTW make sure the entire left side of the strip is covered on the heatsink picture.
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA GeForce RTX 3090 Founders Edition Review: Between Value and Decadence - When Price is Not Everything | Page 2 | igor'sLAB
> 
> 
> With the GeForce RTX 3090, NVIDIA is rounding out its graphics card portfolio at the top end today, for now. Much more is not possible with the GA102-300 anyway and so one may see the current…
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> 3) The backplate side of the card is where the problems happen. Hotspots and stuttering galore as well as hot GDDR6X. To do this, look at my picture carefully.
> Pay close attention to the red outlines. This is exactly where you pad.
> 
> View attachment 2464901
> 
> 
> 
> I got this pad layout by comparing the 3090 and 3080 pages on Igor's lab, and also noticed that my 3090 didn't even have pads on one side of the "V", when I saw another user in the shunt mod thread that did have a pad on that side (it was just a square). And since the hotspots at the V should be the same on FE 3090 and 3080, I came up with that placement. Pay attention to igor's hotspot drawing for 3080 FE with Nvidia's V pads, that don't seem to be on both V sides of the 3090....
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA copies the Pad-Mod from igorsLAB for the GeForce RTX 3080 FE to smooth the hotspot for the GDDR6X | igor'sLAB
> 
> 
> It's nice to see that NVIDIA seems to be actively involved in this, or that you've reported on your reading of the catchy social media content, because the hotspot that I found in the launch article…
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> End result: No more "Thermal" flag, no more weird "Fast sync enabled" + Scanline Sync (RTSS) massive microstuttering, no more retrace lines with RTSS Scanline Sync + Vsync disabled, etc. And I can overclock the core higher too!
> 
> Remember use 1.5mm pads and choose the ones I listed. The Arctic 6 w/mk giant 145mm*145mm*1.5mm pads do work, but are not ideal for this card and I was not able to overclock as high, even though my first re-pad (where I took that picture before doing ANOTHER repad) with the arctic pads is what fixed the weird microstuttering problem!
> 
> I now have Fujipoly and Thermalright Odyssey 1.5mm pads everywhere on the backplate side for the RAM and hotspots, and the stock pads (which seem to be high quality, better than Arctic's) on the GPU-chip-side GDDR6X RAM (GPU re-pasted with Thermalright TFX). I did replace the original VRM pads with Arctic 1.5mm ones, which seems to be working fine since the original pads are still on the RAM.


Stupid question: do the LR22 chokes make contact with the back/front plate? If not, would it make sense to add something so that they do? I am trying to quiet down my 3090 FE, which is buzzing ferociously. Thx


----------



## Alex24buc

A little update. I mentioned earlier that I sold my Palit GamingPro OC and got the GameRock OC version. Now the buyer of my GamingPro OC got in touch and told me the card he bought has issues: the fans stay at 100% all the time, even at idle, and cannot be manually adjusted with a fan curve in MSI Afterburner. Fortunately the card is under warranty at a local retailer and I helped him send it in. Now I wonder if the cause could be that I previously ran the KFA2 390W BIOS and this broke the fan control. Anyway, I flashed back the original Palit BIOS, and I don't think the warranty service will know it ever had another BIOS.


----------



## des2k...

..


undecided65 said:


> Stupid question, do the LR22 chokes make contact with the back/front plate? If not, would it make sense to add something to them so that they do? I am trying to 'quiet' down my 3090 FE which is buzzing ferociously. Thx


With high power or high fps you'll get coil noise.

I have the chokes on the front covered with thermal pads (EK block); thermal pads do nothing for the noise.

People who tell you they don't get coil noise either have fan noise masking it or can't pick up high-frequency noises.


----------



## jura11

@Alex24buc I did try the KFA2 390W BIOS on a stock Palit RTX 3090 GamingPro and had no issues with the fans

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

Has anyone here experimenting with thermal pads noticed any real difference in temps going from like 6w/mk to 13w/mk pads?


----------



## Alex24buc

@jura11 Thanks for the info, I am sure it is not the bios fault but something else. I am waiting for the answer at the warranty, I hope they will fix it or replace the gpu.


----------



## Falkentyne

gfunkernaught said:


> Has anyone here experimenting with thermal pads noticed any real difference in temps going from like 6w/mk to 13w/mk pads?


Yes.
12.8 w/mk allowed +600 on the memory to work with a shunt modded card. 6 w/mk crashed in Overwatch at +600, needed +500.


----------



## gfunkernaught

Falkentyne said:


> Yes.
> 12.8 w/mk allowed +600 on the memory to work with a shunt modded card. 6 w/mk crashed in Overwatch at +600, needed +500.


Hmm..I'm getting +1000 on the 6w/mk pads 62c peak vram temp.


----------



## undecided65

des2k... said:


> ..
> 
> high power or high fps you'll get coil noise
> 
> i have chokes on the front with thermal pads (ek block); thermal pads do nothing for noise
> 
> people that tell you they don't get coil noise have fan noise or can't pick up high freq noises


Thanks, my friend has a 3090 TUF OC and his card makes NO noise at same power in my PC. So it is possible to have a noiseless 3090


----------



## Pepillo

undecided65 said:


> Thanks, my friend has a 3090 TUF OC and his card makes NO noise at same power in my PC. So it is possible to have a noiseless 3090


Of course it's possible, and you don't need to be deaf. I've had cards with and without coil whine; the difference is very clear.


----------



## Falkentyne

gfunkernaught said:


> Hmm..I'm getting +1000 on the 6w/mk pads 62c peak vram temp.


You aren't using a 3090 FE with the stock cooler.


----------



## gfunkernaught

Oh, I thought you were water-cooled. So twice the thermal conductivity gave you 100MHz on the RAM. I'm wondering if I should bother remounting and trying the Gelid pads, especially for the VRMs. 13.5C delta right now.


----------



## Falkentyne

gfunkernaught said:


> Oh I thought you were water-cooled. So twice the thermal conductivity gave you 100mhz on the ram. I'm wondering if I should bother remounting and try the gelid pads for the VRMs especially. 13.5c delta right now.


Because although my chip can do +150 stable (+165 in some games, +135 in Warzone/Modern Warfail at 1440p, +150 on Warzone/Modern Warfail at 1080p), +135 in Quack 2 RTX, with an average GPU core, I didn't win the VRAM lottery. +600 mhz is max stable. +700 is benchable sometimes, or if not, the computer just black screens and reboots (Black screen+reboot=Vram Clocks. Driver crashed and recovered=GPU clocks). So I'm a VRAM lottery loser. I guess that's better than those who can't do more than +250 though!


----------



## gfunkernaught

Could be worse indeed. Last night I played Cyberpunk for an hour or so with +150 core / +1000 VRAM with the PL set to 60%, and the core stayed around 2100-2115MHz at 1075mV. A bit on the high side, but it was stable; core temp was 44C with some 45C peaks. No power throttle. Again, not that 45C is terrible, but I'm slightly tempted to try the new pads. I'm also tempted to try those sample pads, but I'm like: if it ain't broke, don't fix it.


----------



## KedarWolf

Falkentyne said:


> Because although my chip can do +150 stable (+165 in some games, +135 in Warzone/Modern Warfail at 1440p, +150 on Warzone/Modern Warfail at 1080p), +135 in Quack 2 RTX, with an average GPU core, I didn't win the VRAM lottery. +600 mhz is max stable. +700 is benchable sometimes, or if not, the computer just black screens and reboots (Black screen+reboot=Vram Clocks. Driver crashed and recovered=GPU clocks). So I'm a VRAM lottery loser. I guess that's better than those who can't do more than +250 though!


My Strix OC isn't much better, can game at 2112 core, +736 memory, bench at 2130 core, +980 memory in Time Spy.

It might improve when I get my waterblock on it.

And if you want to sell an old GPU, do it now.

Someone is giving me $900 Canadian for my 1080 Ti with an EKWB waterblock and backplate on it, but it's a stellar card. Game 24/7 at 2113 and memory 24/7 at +980 6480MHz.

And it has the 12.8 W/mK pads on it. 

And the offer happened the next day after I posted my ad locally. 

A few months ago I sold one for $500.


----------



## jura11

I just did tests on the other Palit RTX 3090 GamingPro. VRAM is unstable above +1100MHz (black screen and reboot, hahaha); tried everything. The core seems stable at +105MHz, above that the driver crashes. Still not decided if I will keep that GPU 

@KedarWolf 

Right now prices for used GPUs are jacked up to the heavens. I sold both my RTX 2080 Tis (Asus RTX 2080 Ti Strix with an EKWB Vector RTX 2080 Ti Strix block, and Zotac RTX 2080 Ti AMP with an Aqua Computer kryographics RTX 2080 Ti block with active backplate) for £2000 in total plus postage. I would say my Strix was quite a good sample and overclocker too: in benchmarks it would do 2205-2220MHz, in games usually 2160-2175MHz, and only in RT games did I need to drop clocks to 2130-2145MHz 

My old EVGA GTX 1080 Ti would do a stable 2164MHz at 1.07v with +800MHz on the VRAM, and temperatures never broke 38°C with other GPUs in the loop; that's with the EVGA FTW3 BIOS. I sold it for £400 last year; if I had waited, I would probably have gotten £650-£700 for it 

Hope this helps 

Thanks, Jura


----------



## T.Sharp

gfunkernaught said:


> I'm also tempted to try those sample pads but I'm like if it ain't broke don't fix it


I thought the saying was "If it ain't broke, fix it till it is" 

I'm still waiting for someone to test TG-PP10 10W/mK thermal putty and copper shims on the cooler to minimize the gap. That should net the best memory-cooling potential: a 0.5mm gap filled with 10W/mK putty would have the equivalent cooling potential of a 1.0mm gap filled by a 20W/mK pad. Of course you have to add the temp drop across the copper shim and cooler/block, but using something like Kryonaut between shim and cooler should only add a gradient of 1C or less.
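The gap/conductivity tradeoff above follows from treating the filler as a plane conduction layer, whose thermal resistance per unit area is thickness divided by conductivity. A quick sketch (toy model; the 0.5mm / 10W/mK and 1.0mm / 20W/mK figures are the ones from this post):

```python
# Thermal resistance per unit area of a gap filler: R'' = t / k,
# with t = thickness in metres and k = conductivity in W/(m*K).
# Shows why a 0.5 mm gap of 10 W/mK putty matches a 1.0 mm gap
# of a 20 W/mK pad: doubling t and k together cancels out.

def gap_resistance(thickness_mm: float, k: float) -> float:
    """Return thermal resistance per unit area in (K*m^2)/W."""
    return (thickness_mm / 1000.0) / k

putty = gap_resistance(0.5, 10.0)  # 0.5 mm of 10 W/mK putty
pad = gap_resistance(1.0, 20.0)    # 1.0 mm of 20 W/mK pad
print(putty, pad)                  # both 5e-05 (K*m^2)/W
```

Doubling both thickness and conductivity leaves the resistance unchanged, which is why the two gaps come out equivalent before you even count the shim.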


----------



## yzonker

Falkentyne said:


> Because although my chip can do +150 stable (+165 in some games, +135 in Warzone/Modern Warfail at 1440p, +150 on Warzone/Modern Warfail at 1080p), +135 in Quack 2 RTX, with an average GPU core, I didn't win the VRAM lottery. +600 mhz is max stable. +700 is benchable sometimes, or if not, the computer just black screens and reboots (Black screen+reboot=Vram Clocks. Driver crashed and recovered=GPU clocks). So I'm a VRAM lottery loser. I guess that's better than those who can't do more than +250 though!


Yep, my Zotac 3090 is almost identical to that for mem. Past +600 no gain. Past +750 crash. Didn't change at all going from HSF to water either despite mem temp going down 20C or so.


----------



## ttnuagmada

Falkentyne said:


> Yes.
> 12.8 w/mk allowed +600 on the memory to work with a shunt modded card. 6 w/mk crashed in Overwatch at +600, needed +500.


How sure are you that this wasn't just a contact issue?


----------



## Falkentyne

ttnuagmada said:


> How sure are you that this wasn't just a contact issue?


Because I used 1.5mm Arctic and then I switched to Thermalright odyssey 12.8 w/mk pads and got better results? +100 mhz.

With my Odyssey pads I can run at 600W TDP at +165 +600 in Overwatch at 4k with vsync off without crashing. Core temps get up to 80C though.
+650 crashes randomly.


----------



## yzonker

gfunkernaught said:


> Could be worse indeed. Last night I played cyberpunk for an hour or so with +150 core +1000 vram with the pl set to 60%, and the core stayed around 2100-2115mhz with 1075mv, a bit on the high side but it was stable, core temp was 44c with some 45c peaks. No power throttle. Again not that 45c is terrible, but I'm slightly tempted to try the new pads. I'm also tempted to try those sample pads but I'm like if it ain't broke don't fix it


Yea a lot of the reason I dropped my card in the block with the supplied Corsair pads (other than doing a fitment check of course) was to hopefully minimize the risk since it would be difficult to replace the card right now. I'm just going to live with it until someday when the market gets better. Like next winter (hopefully).


----------



## J7SC

KedarWolf said:


> My Strix OC isn't much better, can game at 2112 core, +736 memory, bench at 2130 core, +980 memory in Time Spy.
> 
> It might improve when I get my waterblock on it.
> 
> And if you want to sell an old GPU, do it now.
> 
> Someone is giving me $900 Canadian for my 1080 Ti with an EKWB waterblock and backplate on it, but it's a stellar card. Game 24/7 at 2113 and memory 24/7 at +980 6480MHz.
> 
> And it has the 12.8 wm/k pads on it.
> 
> And the offer happened the next day after I posted my ad locally.
> 
> A few months ago I sold one for $500.


...on the Strix memory: the typical 'default' speed for it, and 3090s in general, is an actual 1219 MHz, but according to TechPowerUp the VRAM chips on the Strix (and likely other 3090s) are actually rated at 1313 MHz. I've easily made it to 1400 or so with OC on my Strix, but haven't tried any higher until the water block is installed.
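As a sanity check on those numbers: GDDR6X's effective data rate is 16x the base clock that tools report (which is how the spec table's 1219 MHz becomes 19504 MHz effective), and peak bandwidth is data rate times bus width over eight. A small sketch using the stock and rated clocks mentioned above:

```python
# GDDR6X bandwidth from the base memory clock: effective data rate is
# 16x the reported clock (PAM4 signaling), bandwidth = rate * bus / 8.
# 1219 MHz and 1313 MHz are the stock and chip-rated clocks from the post.

def gddr6x_bandwidth_gbps(mem_clock_mhz: float, bus_bits: int = 384) -> float:
    """Peak bandwidth in GB/s for a GDDR6X card."""
    data_rate_mtps = mem_clock_mhz * 16          # effective MT/s
    return data_rate_mtps * bus_bits / 8 / 1000  # GB/s

print(gddr6x_bandwidth_gbps(1219))  # ~936 GB/s (stock 3090)
print(gddr6x_bandwidth_gbps(1313))  # ~1008 GB/s (chips' rated speed)
```

At the chips' rated 1313 MHz the same formula lands on ~1008 GB/s, which happens to be the 3090 Ti's stock bandwidth.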











----------



## des2k...

J7SC said:


> ...on the Strix memory, the typical 'default' speed for it and 3090s in general is an actual 1219 MHz, but according to Techpowerup, the VRAM chips on the Strix (and likely other 3090s) are actually rated at 1313 MHz...I've made it easily to 1400 or so with oc on my Strix, but haven't tried any higher until w-block is installed.
> 
> View attachment 2481906
> 
> .


I play games at +1590 mem, about 1417MHz actual, with my Zotac / EK block.


----------



## EarlZ

PowerK said:


> Yeah, they both are 3090s.
> At a quick glance, it seems that HOF is running about 4-5C cooler than Suprim.


Sounds good. May I ask what temps/clocks you get on your MSI, and your ambient temps? I feel I'm getting really high temps on my MSI Suprim X 3090; I hit 80C.


----------



## changboy

des2k... said:


> I play games at +1590 mem about 1417mhz with my Zotac / EK block.


Is this the Trinity card? How were the memory temps without the waterblock, if you remember?


----------



## des2k...

changboy said:


> Is this the Trinity card ? How memory temp without waterblock, if you remember ?


Yes, the Trinity model. HWiNFO didn't have mem temps back when I got the card; that only came after I installed the EK block.


----------



## yzonker

changboy said:


> Is this the Trinity card ? How memory temp without waterblock, if you remember ?


Mine would hit 90+C gaming and the 110C limit within a minute of starting an Ethereum miner.


----------



## changboy

yzonker said:


> Mine would hit 90+C gaming and the 110C limit within a minute of starting an Ethereum miner.


Is this when the memory is OC'd?
What happens if you run the card mining at +1000 on the memory and don't mind the temp?
The card has warranty.


----------



## yzonker

changboy said:


> Is this when the memory is oc ?
> What happen if you run the card mining on memory +1000 and dont mind of this temp ?
> The card have warranty.


Yea that was at +500. It throttles at 110C though and slows down. And my luck would be that it would just degrade some but not enough to get it RMA'd. Much better off not breaking it in the first place IMO. Adding a heatsink to the backplate got it down to 100-105C IIRC. Running the fans fairly high helps some too, but is way too loud for me.


----------



## PowerK

EarlZ said:


> Sounds good, May I ask around what temps/clocks do you get on your MSI and your ambient temps.. I feel that I am getting really high temps on my MSI suprim x 3090 and I hit 80c


I have never seen my Suprim above 70C. (room temp 24-26C) (Good case air flow, I guess)


----------



## sultanofswing

Has anyone been able to locate a 3 slot NVLink bridge for 3090's?


----------



## changboy

sultanofswing said:


> Has anyone been able to locate a 3 slot NVLink bridge for 3090's?




https://www.reddit.com/r/nvidia/comments/kdro5w


----------



## sultanofswing

changboy said:


> https://www.reddit.com/r/nvidia/comments/kdro5w


Basically non-existent right now


----------



## J7SC

sultanofswing said:


> Has anyone been able to locate a 3 slot NVLink bridge for 3090's?


...Thanks a lot  ! After exclusively building 2x / 3x / 4x SLI GPU systems since 2012, I finally thought I had 'reformed' from SLI in 2021


----------



## sultanofswing

J7SC said:


> ...Thanks a lot  ! After exclusively building 2x / 3x / 4x SLI GPU systems since 2012, I finally thought I 'reformed' in 2021 on SLI


🤣 I mean I have 2 here so thought why not.
I can swap boards to one of my EVGA's that uses 4 slot I guess.


----------



## changboy

sultanofswing said:


> Basically non-existent right now


This tells you how to mod the bridge to get 3-slot-spacing SLI


----------



## sultanofswing

changboy said:


> This tell you how mod the bridge to get 3 space sli


it does, vaguely. I'd rather just test them on my 4 slot spaced EVGA board anyway.


----------



## EarlZ

PowerK said:


> I have never seen my Suprim above 70C. (room temp 24-26C) (Good case air flow, I guess)


I have a Corsair 5000D Airflow with an Arctic P12 and 2x P14s for intake; I can definitely feel a lot of air moving out of the case. However my ambient is between 29-32C depending on the time of day


----------



## EarlZ

Has anyone attempted to open up an MSI Suprim X 3090? I am wondering what pad thicknesses are used for the rear/front RAM chips and on the heatpipe that connects the backplate to the main heatsink, and whether there's any way to unscrew the GPU screws without damaging the anti-tamper sticker


----------



## PowerK

I happened to spot another 3090 HOF in stock today. So, I placed an order. Will be receiving it tomorrow. I'll compare the two and I guess I'll sell one of them.

I didn't have enough time yesterday with the HOF. But from early undervolting stress-test runs, the card passed the Time Spy Extreme and Port Royal stress tests clocked at 1860MHz with 781mV. For comparison, the 3090 Suprim X required 850mV for 1860MHz.
I'll have more time to play around with them tomorrow. I'll share if I see any meaningful findings during comparisons.
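For a rough sense of what that undervolt buys: to first order, CMOS dynamic power scales with frequency times voltage squared, so holding 1860MHz while dropping from 850mV to 781mV should trim dynamic power by roughly the squared voltage ratio. A back-of-envelope sketch (this ignores static leakage and boost behavior, so treat it as an estimate only):

```python
# First-order CMOS dynamic power: P ~ f * V^2. At the same clock,
# the power ratio between two voltages is just (V_new / V_old)^2.
# 781 mV and 850 mV are the two operating points from the post.

def relative_dynamic_power(v_new_mv: float, v_old_mv: float) -> float:
    """Dynamic power at the new voltage relative to the old, same clock."""
    return (v_new_mv / v_old_mv) ** 2

ratio = relative_dynamic_power(781, 850)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power")  # ~16% less
```

That works out to roughly 16% less dynamic power at the same clock, which is why a good undervolt stops the card from bouncing off its power limit.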


----------



## SirCanealot

T.Sharp said:


> I'm still waiting for someone to test TG-PP10 10W/mK thermal putty and copper shims on the cooler to minimize the gap. That should net the best mem cooling potential. A 0.5mm gap filled with 10W/mK putty, would have equivalent cooling potential of a 1.0mm gap filled by a 20W/mK pad. Of course you have to add the temp drop across the copper shim and cooler / block, but using something like Kryonaut between shim and cooler, should only have a gradient of 1c or less.


I was all set to order some PP10 and start messing around with this. But then I realised in order to get it for a reasonable price I'd have to order 2 pots of the stuff: one 50g pot is about £20, but then to qualify for free shipping I need to spend £33 (otherwise the shipping is about £15). But then I'm spending £40 on putty that I may or may not use for anything else  (and then if I get hit by UK customs, that's another £15-20 on the price). If any other UK users want to go halves with me, let me know and I'll think about it... 

I have 3mm pads on my Accelero IV backplate and I recently drilled holes in it, so I'd say I currently have reasonable mounting pressure. With the Thermal Grizzly Minus 8 pads, the memory ran at 100C with it set to +1500 and mining Ethereum. The 3mm GP Extreme pads finally turned up from AliExpress and I swapped them in yesterday; memory is now running at 96C. (Previously it was throttling at around 80% performance and 105-110C, so really not too bad.)

So I went from a 3mm 8w/mk pad to a 3mm 12w/mk pad and this resulted in a 4 degree drop... Quite interesting.

I have some copper shims now, so I might try cutting down my 3mm pads and seeing if I can get a decent mount with 1mm shim + 1mm of pad (I have some 2mm GP pads pre-ordered, but the date on QuietPC keeps getting pushed back. Not sure if I should just order a load from AliExpress again, lol). Might try this tomorrow 

Edit: KY Store Store on AliExpress has '10-day shipping', so I've ordered a sheet of 120x120 1mm and 2mm Thermalright Odyssey, so hopefully those will turn up soonish...


----------



## KedarWolf

So, I contacted ASUS tech support to get the Strix OC thermal pad thicknesses, and they emailed me back the information is confidential and they can't release it.


----------



## Falkentyne

KedarWolf said:


> So, I contacted ASUS tech support to get the Strix OC thermal pad thicknesses, and they emailed me back the information is confidential and they can't release it.


This is why you need a caliper and measure the thickness yourself and then it can be posted on a database.


----------



## pat182

Hey guys, I really need some help figuring something out:
can someone with PSU monitoring or a Kill A Watt tell me whether disabling the GPU in Device Manager shuts the card down to like 1-5W, vs. idling at 15-30W?

I have multiple GPUs in a rig and want to shut some down while mining so they don't burn power and add heat for nothing


----------



## T.Sharp

SirCanealot said:


> I was all set to order some PP10 and start messing around with this. But then I realised in order to get it for a reasonable price I'd have to order 2 pots of the stuff: one 50g pot is about £20, but then to qualify for free shipping I need to spend £33 (otherwise the shipping is about £15). But then I'm spending £40 on putty that I may or may not use for anything else  (and then if I get hit by UK customs, that's another £15-20 on the price). If any other UK users want to go halves with me, let me know and I'll think about it...


Dang mang! Yeah, that's a rip. I saw it available on some other websites, but having never even heard of them, idk if it's wise to order. Although if they carry something as obscure as TG-PP10, they are probably legit.. 🤷‍♂️
I was seriously thinking I should buy a kilo of the stuff and repackage it to sell on Amazon or eBay 😄
I hope it catches on at some point. It makes choosing the perfect thickness and compressibility of pads a thing of the past. The difference between a 10W/mK putty and a 12.8W/mK pad for memory cooling should be minimal. For mosfet or backplate cooling it's even better, since it can fill in around components and pull heat out of the PCB.


SirCanealot said:


> The 3mm GP Extreme pads finally turned up from AliExpress and I swapped them out yesterday — memory is now running at 96c. (previously it was throttling at around 80% performance and 105-110c, so really not too bad)


Yeah that's a reasonable improvement. Still right on the edge of GDDR6X spec though (95c). Have you tried using a fan blowing at the backplate? All these people sticking heatsinks all over the back, and I have to wonder if it's actually any better than just having a fan on ~1cm standoffs, blowing directly at the plate. Both methods make your system look like crap, but I'd argue that a single fan looks cleaner 😁


SirCanealot said:


> So I went from a 3mm 8w/mk pad to a 3mm 12w/mk pad and this resulted in a 4 degree drop... Quite interesting.


I feel like you could extrapolate more accurate cooling-potential data from those two points, but I'm not sure how. -1C for every +1W/mK? Idk if that follows. It's also dependent on the part you're cooling, the area of the heat source, and the heat output. So any formula you make from that would apply specifically to that card and use case.
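One way to extrapolate from those two points: if the pad dominates the stack, its temperature drop is heat flux times thickness over conductivity, so temps fall with 1/k rather than linearly in k. Fitting that to SirCanealot's two readings (same 3mm pads, 8W/mK at 100C and 12W/mK at 96C) shows the diminishing returns; this is a toy 1-D model, not a claim about the actual card:

```python
# Two data points: same pad thickness, k = 8 W/mK ran at 100C and
# k = 12 W/mK ran at 96C. If the pad dominates, its drop is
# dT = q * t / k, so the 4C difference pins down q*t and we can
# predict other conductivities. Toy 1-D conduction model only.

k1, temp1 = 8.0, 100.0   # W/mK, C
k2, temp2 = 12.0, 96.0

# Solve (temp1 - temp2) = q_t * (1/k1 - 1/k2) for q_t = flux * thickness
q_t = (temp1 - temp2) / (1.0 / k1 - 1.0 / k2)

def predicted_temp(k: float) -> float:
    """Memory temp the toy model predicts for pad conductivity k."""
    return temp1 - q_t * (1.0 / k1 - 1.0 / k)

print(round(predicted_temp(12.8), 1))  # ~95.5C: only ~0.5C below 12 W/mK
print(round(predicted_temp(24.0), 1))  # ~92C: diminishing returns
```

So by this model a 12.8W/mK pad would only buy another ~0.5C over the 12W/mK one, and "-1C per +1W/mK" does not hold outside the fitted range.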


SirCanealot said:


> have some copper shims now, so I might try cutting down my 3mm pads and seeing if I can get a decent mount with 1mm shim + 1mm of pad (I have some 2mm GP pads pre-ordered, but the date on QuietPC keeps getting pushed back. Not sure if I should just order a load from AliExpress again, lol). Might try this tomorrow


Frick yeah dude! Definitely interested to hear how it works out.


----------



## jura11

For people with Bykski waterblocks: Bykski uses 1.2mm thermal pads, I just measured them 

Hope this helps 

Thanks, Jura


----------



## changboy

I didn't know EVGA was making a 2000W PSU:

EVGA 2000W PSU (listing on www.newegg.ca)

4x RTX 3090 for molecular dynamics simulation:


----------



## jura11

I have a Super Flower 8Pack 2000W PSU in my PC and I would assume the EVGA 2000W PSU is the same unit as my Super Flower 

Hope this helps 

Thanks, Jura


----------



## jomama22

gfunkernaught said:


> Oh I thought you were water-cooled. So twice the thermal conductivity gave you 100mhz on the ram. I'm wondering if I should bother remounting and try the gelid pads for the VRMs especially. 13.5c delta right now.


Tbh, it would be a waste of money to put pads on the VRMs. Cooling them more isn't going to do anything for you at the end of the day.


----------



## inedenimadam

Well, my number finally came up on EVGA for the kingpin. Debating what to do with the zotac reference board I am running currently. Already have it blocked with LM on die. Is the kingpin a physically larger card height wise? Are there any blocks available?


----------



## jura11

@inedenimadam 

I don't think there is a waterblock for the KPE RTX 3090 yet. I know Optimus and EVGA will be releasing waterblocks for the KPE; I have no idea about others.

I run in my loop an RTX 3090 GamingPro with a Bykski waterblock alongside an air-cooled RTX 3090 GamingPro 

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

jomama22 said:


> Tbh, would be a waste of money to put pads on the vrms. Cooling them more isn't going to do anything for you at the end of the day.


You mean putting better pads or pads in general?


----------



## inedenimadam

jura11 said:


> @inedenimadam
> 
> I don't think there is waterblock for KPE RTX 3090, I know Optimus and EVGA will be releasing waterblocks for KPE, I have no idea on others
> 
> I could run on my loop RTX 3090 GamingPro with Bykski waterblock and air cooled RTX 3090 GamingPro
> 
> Hope this helps
> 
> Thanks, Jura


Yes, that does help. Thank you.

I guess I'll throw one of my old EK universal blocks on it for the time being. I'll check Optimus and EK for releases every couple days. I'm still waiting for EK to release their water cooled back plate for reference boards too.


----------



## jomama22

gfunkernaught said:


> You mean putting better pads or pads in general?


Better pads. VRMs perform just fine at even higher temps than what you are probably getting.


----------



## gfunkernaught

jomama22 said:


> Better pads. Vrms perform just fine at ever higher temps then what you are probably getting.


So better pads for the ram and back of the card would still help temps in general right? Like moving the heat away from the card.


----------



## EarlZ

My MSI Suprim X 3090's temperatures are acting funny. When I initially got the card I ran Superposition 1080p Extreme with 100% fan speed and was getting 69C max; that was in my Fractal Design Meshify C, with no direct intake fan from the case since the card wouldn't fit. Yesterday I moved it to my new case, a Corsair 5000D Airflow; still with the side panel off and 100% fan speed, I am getting 80-81C on the GPU core. That's an 11-12C difference, and ambient temps during the tests in the old and new case were within 1-2C (29-30C). It's like the contact on the GPU has suddenly become really bad!


----------



## Dreams-Visions

Hey guys - Does any GPU block brand come recommended for a 3090 Strix?

Was looking at what is immediately available around me and that is blocks from EK, Alphacool, and Bitspower. 

Kinda hard to find long-term performance reports or experiences outside of the occasional post about poor finish issues with EK (mixed in with lots of generally satisfactory reports). 

Any advice would be appreciated.


----------



## Zogge

jura11 said:


> For people with Bykski waterblocks, Bykski using 1.2mm thermal pads, I just measured them
> 
> Hope this helps
> 
> Thanks, Jura


_From Bykski: Just to make sure you got it, x replied last night that they're 2.0w/mk and 2.6g/cc. The pad material is quite squishy, but he didn't provide precise details on compression specs. _

Also, they felt thicker than 1.2mm honestly. I read somewhere that they are 1.8mm but I cannot find the source. If you measured them, I believe you of course, but the EK block came with 1.0mm and 1.5mm pads and both were thinner than the Bykski pads. Did you measure them uncompressed, or compressed somewhat?


----------



## itssladenlol

Dreams-Visions said:


> Hey guys - Does any GPU block brand come recommended for a 3090 Strix?
> 
> Was looking at what is immediately available around me and that is blocks from EK, Alphacool, and Bitspower.
> 
> Kinda hard to find long-term performance reports or experiences outside of the occasional post about poor finish issues with EK (mixed in with lots of generally satisfactory reports).
> 
> Any advice would be appreciated.


Aquacomputer kryographics next.
Have one.
Forget the EKWB; it's poorly made and increases coil whine


----------



## jura11

Zogge said:


> _From Bykski: Just to make sure you got it, x replied last night that they're 2.0w/mk and 2.6g/cc. The pad material is quite squishy, but he didn't provide precise details on compression specs. _
> 
> Also they felt thicker than 1.2mm honestly. I read somewhere that they are 1.8mm but I cannot find the source. If you measured them I believe you of course, but the EK block had 1.0mm and 1.5mm with it and both were thinner than the Bykski pads. Did you measure them uncompressed or compressed some ?


Hi there 

I measured them 1 or 2 days ago; I just received the waterblocks from Bykski on Monday and they are for sure 1.2mm. Regarding EK, I have the EKWB pads as well and agree they do seem thinner than the Bykski ones.

I measured them uncompressed, as they come.

Hope this helps 

Thanks, Jura


----------



## PowerK

Does anyone miss the good ol' days of SLI?
SLI had its quirks, but I loved it and always opted for at least 2-way SLI.
Now, even though I have two cards, SLI is no more.


----------



## jura11

Hi guys 

Reworked the loop yesterday and tested the other Palit RTX 3090 GamingPro. Overall performance is not bad, although VRAM won't go beyond +1125MHz and the core is unstable beyond +105MHz; anything above 1125MHz will black screen.

Temperatures are quite good too: the core won't break 35°C in rendering, and VRAM temperatures won't break 60°C after 6 hours of rendering.

Not sure if I will keep this Palit or if it goes to a different build 

Hope this helps 

Thanks, Jura


----------



## PowerK

Jura, IMHO, if 1125MHz is sustained clock with minimal/zero throttling, it is very good in my book.


----------



## EarlZ

Is MSI AB's temp limit not being applied on the RTX 3080 and 3090? I've had this exact issue with my Gigabyte 3080 and now with my MSI 3090; it works perfectly on my 2080 Ti though. I understand I can basically just limit the power draw to get an identical result, but I wanted to check whether this is the case for everyone else.


----------



## gfunkernaught

PowerK said:


> Does anyone miss good'ol days of SLI ?
> SLI had its quirks but I loved it and always opted for min. 2-Way SLI.
> Now, even though I have two cards, SLI is no more.
> 
> View attachment 2482079
> 
> 
> View attachment 2482080
> 
> 
> View attachment 2482081


Hell yeah I do! I started SLI with two 8800 GTXs, and I remember how Metro 2033 maxed out at 1080p would destroy that setup at very playable framerates, but the heat... the heat. The next and final time I went SLI was with two GTX 580s. That setup was my favorite ever, just because of how long it lasted and how well it scaled. I bought a 5krpm 120mm fan and mounted it directly behind the cards at the back of the case to suck out all the heat from the cards' blower-style coolers. Helped a lot. Eventually though, one card died, then the other. Next thing I know, after going 980 > 1080 Ti > 2080 Ti > now 3090, idk, the excitement isn't as strong as it was back then lol. That's the price I pay for upgrading too often.


----------



## gfunkernaught

Another question about thermal pad experience: when you guys replaced the pads on the front of the card, did you go thinner in addition to higher conductivity? I want to try either 0.5mm or these 0.8mm sample pads. I'm probably gonna use the samples first, only cuz they're massive sheets lol.


----------



## WillP

I ran two 2080 Supers in SLI for a while before they went one each to my daughters' rigs. In compatible games it worked pretty well most of the time, though micro-stutter was still an issue despite very good scaling. My benchmark scores with that set up were within a stone's throw of my current scores with a heavily overclocked 3090.


----------



## J7SC

itssladenlol said:


> Aquacomputer kryographics next.
> Have one.
> Forget the ekwb, its poorly made and increases coilwhine


Aquacomputer kryographics w/ active backplate was my first choice, but not available in the config I wanted...also wondering about Igor's lab's recent test of the Aquacomputer re. PCB tolerance requirements...

...instead, just got the EKWB block for my Strix, it seems to be quite good quality...




PowerK said:


> Does anyone miss good'ol days of SLI ?
> SLI had its quirks but I loved it and always opted for min. 2-Way SLI.
> Now, even though I have two cards, SLI is no more.


...same here - my 3090 Strix build is the first one in nine years that isn't _at least _2x SLI....Dual 2080 Ti below are still doing great in SLI-CFR in my other '4K' workhorse though...dual 3090s in SLI-CFR would be killer, but CFR was never enabled for 3090s :-(


----------



## GRABibus

jura11 said:


> Yes there, that's with KPE XOC BIOS and yes that's 1000W BIOS and yes I have set power limit to 100%
> 
> Sadly I don't have Asus ROG RTX 3090 Strix, I have Palit RTX 3090 GamingPro which is 2*8-pin GPU and this BIOS is one best for 2*8-pin GPU, KFA2 390W BIOS works as well but with that BIOS I can score 14206-14300 points
> 
> Hope this helps
> 
> Thanks, Jura


I confirm.
This KFA2 BIOS helped me unlock the power limit on my former EVGA FTW3 Ultra 3090.
I could boost above 2100MHz with low temps.

Now, I own an Asus ROG Strix.


----------



## inedenimadam

Hello again OCN. 

Long shot here: can anybody with the KPE measure the thickness of the AIO block? I am trying to put together a plan for watercooling my incoming card. I have a couple of the older VGA Supremacy universal blocks, and I am curious whether one will fit under the shroud as a drop-in replacement for the AIO. Is the pump on the block, or is it on the end of the radiator?

I know these questions will all be answered in a few days when the card arrives, but I would rather have all my parts and plans together day 1 so I can get zooming immediately. 

Hopefully optimus or EK will drop a block on us, but the more I read up on it...the less confident I am of it happening in any type of timeframe that I can live with.


----------



## Dreams-Visions

itssladenlol said:


> Aquacomputer kryographics next.
> Have one.
> Forget the ekwb, its poorly made and increases coilwhine
> View attachment 2482074


A little difficult getting that delivered quickly here in the United States, unfortunately. I may get one to replace something, but I'd like to have something now to get the build done. Any other Strix block recommendations?


----------



## Beagle Box

Dreams-Visions said:


> A little difficult getting that delivered quickly here in the United States, unfortunately. I may get one to replace something, but I'd like to have something now to get the build done. Any other Strix block recommendations?


Alphacool Eisblock Aurora GPX-N GPU Water Block & Backplate, RTX 3080/3090 ROG Strix, Plexi
Great block. Works well for me. Igor's Lab has a review on its design.


----------



## J7SC

Yeah - I finally found my digital gap caliper  then realized that I had to go get a battery for it  . 
Anyhow, I can confirm that for the Strix 3090 OC, the EK-pads for the front are 1 mm...will add measurements for the (stock) back-plate pads when I opened it up...


----------



## gfunkernaught

Would it be a bad idea to use .5mm instead of 1mm pads for the front of the card with an EK block?


----------



## J7SC

gfunkernaught said:


> Would it be a bad idea to use .5mm instead of 1mm pads for the front of the card with an EK block?


...that would make GPU to block contact iffy, imo


----------



## Dreams-Visions

Beagle Box said:


> Alphacool Eisblock Aurora GPX-N GPU Water Block & Backplate, RTX 3080/3090 ROG Strix, Plexi
> Great block. Works well for me. Igor's Lab has a review on its design.


cool thank you. I'll give that a close look.

*On another subject:* what do you all look for in a GPU to determine whether the one you have is a good candidate for waterblocking?

I ask because I'm in something of a dilemma: I have an XTrio and a Strix that both perform roughly the same on the same BIOS (500W). Both are rock solid up to +161 on the core and +1148 on the RAM on air.

At the moment, the Strix is scoring _very _slightly higher in benches (~75 points in PR) but is also about 10C warmer. I've long found the XTrio cooler superior in performance to the Strix's, so this doesn't come as a surprise. But I'm thinking that the Strix posting similar scores while dealing with higher temps might mean it would perform better under water. In my mind, it could clock higher since the expected delta between its 100% fan air-cooled temp and its water temp is greater than the XTrio's. Or should I be thinking about this differently? Does either card's behavior suggest it might scale better on water? Or is there something else I should be looking at to make the decision?
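The "bigger temp delta means more recoverable headroom" intuition can be put into rough numbers. The sketch below is purely a back-of-envelope: the ~15 MHz boost bin size is real for recent NVIDIA cards, but the "one bin per ~5C above ~50C" rule and the example temperatures are my assumptions, not a spec.

```python
# Back-of-envelope estimate of sustained-clock headroom from watercooling.
# ASSUMPTIONS (not spec): GPU Boost sheds one ~15 MHz bin roughly every
# 5 C of core temperature above ~50 C.

BIN_MHZ = 15   # assumed clock drop per thermal bin
STEP_C = 5     # assumed degrees per bin
BASE_C = 50    # assumed temp below which no thermal bins are lost

def boost_bins_lost(temp_c: float) -> int:
    """Number of boost bins assumed lost at a given core temperature."""
    return max(0, int((temp_c - BASE_C) / STEP_C))

def estimated_gain_mhz(air_temp_c: float, water_temp_c: float) -> int:
    """Estimated sustained-clock gain moving from air to water temps."""
    return (boost_bins_lost(air_temp_c) - boost_bins_lost(water_temp_c)) * BIN_MHZ

# Hypothetical example: a card at ~80 C on air vs ~70 C on air,
# both landing around ~45 C under water:
print(estimated_gain_mhz(80, 45))  # hotter card recovers more bins -> 90
print(estimated_gain_mhz(70, 45))  # cooler card recovers fewer -> 60
```

Under those assumptions the hotter-running card does recover more boost bins on water, which matches the intuition in the post, but silicon quality still decides whether the recovered clocks are actually stable.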


----------



## gfunkernaught

J7SC said:


> ...that would make GPU to block contact iffy, imo


I read through this and it helped me understand a little better.








Test of "chewing gums": 3× Arctic and Thermal Grizzly Minus Pads - HWCooling.net (www.hwcooling.net)


----------



## Beagle Box

Dreams-Visions said:


> cool thank you. I'll give that a close look.
> 
> *On another subject:* what do you all look for in a GPU to determine whether the one you have is a good candidate for waterblocking?
> 
> I ask because I'm in something of a dilemma: I have an XTrio and a Strix that both perform roughly the same when on the same bios (500W). Both are rock solid up to +161 on the core and +1148 on the RAM on air.
> 
> At the moment, the Strix is scoring _very _slightly higher in benches (like ~75 points in PR) but is also about 10C warmer. I've long found the XTrio cooler to be superior in performance to the Strix so this doesn't come at a surprise. But I'm thinking that the fact that the Strix is posting similar scores while dealing with higher temps might mean that it might perform better when under water. In my mind, I think it could clock higher since the expected delta between its 100% fan air-cooled temp and water temp is greater than it is for the XTrio. Or should I be thinking about this differently? Based on this information, does either suggest one might scale better on water? Or is there something else I should be looking at to make the decision?


I generally go by component quality and water block availability. The Strix uses superior components. So, in theory, it should handle more power better and last longer. I had a choice between the Strix OC and the MSI Suprim. I chose the Strix OC and am very happy with its performance on water. The backplate gets hot, but I easily run benchmarks with the memory speed slider maxed out. But you never really know how a GPU will perform under water until you try it. That's when ASIC quality really comes into play.


----------



## jura11

Dreams-Visions said:


> cool thank you. I'll give that a close look.
> 
> *On another subject:* what do you all look for in a GPU to determine whether the one you have is a good candidate for waterblocking?
> 
> I ask because I'm in something of a dilemma: I have an XTrio and a Strix that both perform roughly the same when on the same bios (500W). Both are rock solid up to +161 on the core and +1148 on the RAM on air.
> 
> At the moment, the Strix is scoring _very _slightly higher in benches (like ~75 points in PR) but is also about 10C warmer. I've long found the XTrio cooler to be superior in performance to the Strix so this doesn't come at a surprise. But I'm thinking that the fact that the Strix is posting similar scores while dealing with higher temps might mean that it might perform better when under water. In my mind, I think it could clock higher since the expected delta between its 100% fan air-cooled temp and water temp is greater than it is for the XTrio. Or should I be thinking about this differently? Based on this information, does either suggest one might scale better on water? Or is there something else I should be looking at to make the decision?


Other waterblocks you can add to the selection are the Bykski and Phanteks blocks; they should perform on par with EKWB or Alphacool, and many people here are using Bykski waterblocks with good results and temperatures.

Which GPU to choose and which to keep is hard to say. I would get a waterblock for each GPU, and if you are not happy, sell them with the GPU. I have done that several times and always recouped my money, and right now you will have no problem selling an RTX 3090.

Hope this helps

Thanks, Jura


----------



## Dreams-Visions

Thank you for the feedback, folks. It helps a ton. I do have a Bykski waterblock for the XTrio (I happened upon a Strix later and said "what the hell") but hadn't put my rig together as I _just_ got all the final parts today. I decided to test the Strix after not opening the box for a month (didn't expect much) but was surprised it was performing slightly better than the XTrio (previous Strix's that I had did not). 

I'll grab another Bykski or Alphacool for the Strix and go from there. As you said, none of the parts involved will be difficult to sell in the end. Cheers.


----------



## PowerK

Guys, is there any BIOS applicable to RTX 3090 HOF to uplift its 450W power limit ?
Its air cooling is quite good, I think. But I'm not having much fun in the upper voltage range due to the power limit. (Great at the lower end with undervolting so far)


----------



## J7SC

@0451 fyi


----------



## jura11

PowerK said:


> Guys, is there any BIOS applicable to RTX 3090 HOF to uplift its 450W power limit ?
> Its air cooling is quite good, I think. But I'm not having much fun in the upper voltage range due to the power limit. (Great at the lower end with undervolting so far)


I think the Galax 1000W BIOS has been posted over here, and maybe the KPE 520W BIOS would be the better choice, or you can try the KPE XOC BIOS, but with that one I recommend running at maybe 55-60% power limit on air

Hope this helps 

Thanks, Jura


----------



## geriatricpollywog

J7SC said:


> @0451 fyi
> 
> View attachment 2482199


Is this your first flight with the 3090?


----------



## J7SC

0451 said:


> Is this your first flight with the 3090?


Nope - but the last flight w/ 3090 on air instead of water (if all goes well this weekend) ...this flight was part of prepping a comp between single 3090 and 2x 2080 Ti SLI-CFR...


----------



## Style68

Hey Guys,

Could use some help deciding which RTX 3090 I should water cool. I got my hands on two Strix RTX 3090's and both do 2130MHz at the same voltage. #1 runs hotter than #2: #1 idles at 45c and #2 idles at 39c. The fans on both do 3000rpm at 100%, and at full load #1 goes up to 87c and #2 goes up to 80c. The fans on #2 sound a little louder than #1's. Which card will most likely perform better when water cooled?


----------



## MrTOOSHORT

Style68 said:


> Hey Guys,
> 
> Could use some help deciding which RTX 3090 I should water cool. I got my hands on two Strix RTX 3090's and both do 2130MHz at the same voltage. #1 runs hotter than #2: #1 idles at 45c and #2 idles at 39c. The fans on both do 3000rpm at 100%, and at full load #1 goes up to 87c and #2 goes up to 80c. The fans on #2 sound a little louder than #1's. Which card will most likely perform better when water cooled?


The one with better memory overclocks.


----------



## Style68

MrTOOSHORT said:


> The one with better memory overclocks.


Shoot I tested only +250 on memory. OK I will see how far Mem overclocks on each. Just out of curiosity what if memory overclock were the same for both?


----------



## jura11

@Style68 

Usually GPUs under water should have similar temperatures unless you have a very leaky chip. In the past, hotter chips usually OC'd much better under water, but with this generation of GPUs I honestly have no idea

I would start with +500MHz on VRAM; you should be able to hit +1200MHz quite easily, maybe +900-1100MHz, but if you only hit +250MHz then I'm not sure

Try Port Royal with the same core OC, with and without the VRAM OC, and compare scores. I would guess that with the VRAM OC you will gain more points, and more FPS in games too

If you are stable in Port Royal at an average of 2130MHz then you have a good chip; on my RTX 3090 I literally need the XOC BIOS hahaha

Hope this helps 

Thanks, Jura


----------



## Falkentyne

Style68 said:


> Hey Guys,
> 
> Could use some help deciding which RTX 3090 I should water cool. I got my hands on two Strix RTX 3090's and both do 2130MHz at the same voltage. #1 runs hotter than #2: #1 idles at 45c and #2 idles at 39c. The fans on both do 3000rpm at 100%, and at full load #1 goes up to 87c and #2 goes up to 80c. The fans on #2 sound a little louder than #1's. Which card will most likely perform better when water cooled?


You need to do a re-pad and thermal paste rework on both cards before you can draw a conclusion.
An imbalanced heatsink or a bad paste or pad job can also cause a temp difference. We've already seen some cards with mediocre factory jobs.
If you get the same delta after a full re-paste and re-pad, then we can go from there.

Usually, if both cards overclock to the exact same stability (same core offset before crash; you need to verify that), then the hotter chip will scale better if you cool it better.
Then again, just listen to Jura, he knows more than I do.


----------



## KedarWolf

Falkentyne said:


> You need to do re-pad rework and thermal paste rework on both cards before you can make a conclusion.
> Imbalanced heatsink or bad paste or pad jobs can cause temp diff also. We've already seen some cards with mediocre jobs.
> If you get the same delta after a full re-paste and repad, then we can go from there.
> 
> Usually if both cards overclock to the EXACT Same stability, +same core offset before crash (you need to verify that), then the hotter chip will scale better if you cool it better.
> Then again just listen to Jura, he knows more than I do.


Yes, re-pad, but until we find out the thicknesses of the Strix pads, be careful not to tear them when removing the heatsink etc., since you'll need to reuse them.


----------



## Style68

Thanks guys. I only want to open one up so it will definitely be a gamble for me.

So #1 can overclock RAM up to +1100 and #2 can do +900.

I think I will waterblock #1 for these 3 reasons though obviously it will still be a gamble.

1. Memory overclock was better
2. It can handle the same voltages at a higher temperature.
3. #2 is the white version.

Hopefully I made the right decision once it's watercooled.

By the way I am putting it under an Aquacomputer Kryographics NEXT GPU waterblock and backplate with an MP5Works on the backplate. I currently have an EKWB on a Strix 3090 that only does 2085MHz.


----------



## KedarWolf

Style68 said:


> Thanks guys. I only want to open one up so it will definitely be a gamble for me.
> 
> So #1 can overclock RAM up to +1100 and #2 can do +900.
> 
> I think I will waterblock #1 for these 3 reasons though obviously it will still be a gamble.
> 
> 1. Memory overclock was better
> 2. It can handle the same voltages at a higher temperature.
> 3. #2 is the white version.
> 
> Hopefully I made the right decision once it's watercooled.
> 
> By the way I am putting it under an Aquacomputer Kryographics NEXT GPU waterblock and backplate with an MP5Works on the backplate. I currently have an EKWB on a Strix 3090 that only does 2085MHz.


I find a good test while overclocking RAM is running TimeSpy and maybe Port Royal stress test. If they don't crash your driver or PC, you're good.


----------



## Style68

KedarWolf said:


> I find a good test while overclocking RAM is running TimeSpy and maybe Port Royal stress test. If they don't crash your driver or PC, you're good.


Thanks. In TS I can do +120 core and +1100 vram. In PR I can do +135 core and +1100 vram. It spikes to 2130MHz and averages out to 2085MHz. Hopefully on water it will average 2130MHz.


----------



## KedarWolf

Style68 said:


> Thanks. In TS I can do +120 core and +1100 vram. In PR I can do +135 core and +1100 vram. It spikes to 2130mhz and averages out to 2085MHz. Hopefully on water it will average 2130MHz.


Also, try running a game like Shadow of The Tomb Raider and Cyberpunk with ray tracing on for at least 20 minutes, no DLSS. I can run +980 in TimeSpy, but it'll eventually crash in Shadow Of The Tomb Raider. I can only do +738 without it crashing. Also, Battlefield 5 is a good game to test, but Shadow of The Tomb Raider is really good for testing memory overclocks.


----------



## GQNerd

Testing...

(Mad scientist laugh) 😈


----------



## ivans89

Miguelios said:


> View attachment 2482215
> Testing...
> 
> (Mad scientist laugh) 😈


Which block you use for the die?


----------



## GQNerd

ivans89 said:


> Which block you use for the die?


Just a universal CPU block from Freezemod I had laying around.. Had to mod the bracket/standoffs so wanted to test on a cheap block first.


----------



## des2k...

Style68 said:


> Thanks. In TS I can do +120 core and +1100 vram. In PR I can do +135 core and +1100 vram. It spikes to 2130mhz and averages out to 2085MHz. Hopefully on water it will average 2130MHz.


Usually on water, due to the low temps, your frequency will be pinned at 2130+ and could crash, vs air going up and back down from 2130.

On air my Zotac was going up to 2190+; on water 2190+ is insanely unstable in games.

Past 2100 effective frequency on water you're looking at single-digit fps gains.


----------



## Style68

des2k... said:


> usually on water due to low temps your freq will be pinned at 2130+ and could crash vs air going up/back down from 2130.
> 
> On air my zotac was going up to 2190+, on water 2190+ is insanely unstable in games.
> 
> Past 2100 eff freq on water you're looking at single digit fps gains.


So then it's a good chance that this card will not produce much better results than my current water cooled Strix that averages a sustained 2085MHz?


----------



## des2k...

Style68 said:


> So then it's a good chance that this card will not produce much better results than my current water cooled Strix that averages a sustained 2085MHz?


I don't think you'll see the difference in games with anything around 2100 effective frequency.
Mem OC and lowest voltage is still better for 24/7 usage vs highest frequency / high wattage.

Horizon Zero Dawn at 4K, for example, is 89fps at stock (350w) and 99fps with 2190, +1590mem (550w+). An 11% boost.

I'm surprised by a few posts here with high-end models not getting good overclocks vs MSRP-cost cards.
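To put the fps and wattage figures in the post above into perspective (taking the quoted "550w+" as roughly 550 W, a lower bound):

```python
# Perf-per-watt check on the figures in the post above: 89 fps at 350 W
# stock vs 99 fps overclocked. The OC wattage is quoted as "550w+";
# 550 W is used here as a rough lower bound.

stock_fps, stock_w = 89, 350
oc_fps, oc_w = 99, 550

fps_gain = (oc_fps - stock_fps) / stock_fps    # relative fps increase
power_gain = (oc_w - stock_w) / stock_w        # relative power increase

print(f"{fps_gain:.1%} more fps for {power_gain:.1%} more power")
print(f"fps per watt: stock {stock_fps / stock_w:.3f}, OC {oc_fps / oc_w:.3f}")
# -> 11.2% more fps for 57.1% more power
# -> fps per watt: stock 0.254, OC 0.180
```

Even before the single-digit gains past ~2100 MHz effective, the power cost here grows about five times faster than the frame rate, which supports the "mem OC plus lowest voltage for 24/7" advice.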


----------



## gfunkernaught

des2k... said:


> Horizon Zero Dawn at 4k for example is 89fps at stock (350w) and 99fps with 2190, +1590mem (550w+). 11% boost


That's quite a jump though. Maybe not worth the extra watts. That could also be 50fps vs 60fps, which is great when you're vsync'd on a 4k tv.


----------



## yzonker

gfunkernaught said:


> That's quite a jump though. Maybe not worth the extra watts. That could also be 50fps vs 60fps, which is great when you're vsync'd on a 4k tv.


Or coming up a little below the sync rate of a VR headset.


----------



## DrunknFoo

Managed to get a new card, another FTW. The PCIe fuse on this one is different from the older model.

Any idea what amp this is rated at? (I've got no clue, is it 8A? If memory serves me correctly, the previous one was 20A... err, 10A? Or am I delusional?)


----------



## des2k...

DrunknFoo said:


> Managed to get a new card, another FTW. The PCIe fuse on this one is different from the older model.
> 
> Any idea what amp this is rated at? (I've got no clue, is it 8A? If memory serves me correctly, the previous one was 20A... err, 10A? Or am I delusional?)
> View attachment 2482314


Cool fuse you got there  so what, 96W peak on PCIe = dead card?


----------



## EarlZ

Looking for more input before I ultimately change the thermal paste (and possibly damage the tamper sticker and void my warranty).

When I got my MSI Suprim X 3090, I ran 1080p Extreme Superposition and got a score of 13,159 with a GPU temperature of 35c min / 69c max, MSI AB power limit set to 107%, and 100% fan speed. This was in my old case, a Fractal Design Meshify C; I had to remove one of the intake fans to fit the GPU. The top intake fan is a stock Corsair case fan and the bottom fan is an Arctic Cooling P14. During this test ambient was around 28-29c and the side panel was removed to allow maximum airflow; case exhaust is an Arctic Cooling P12. I continued to use the card with the side panel closed for a few days with zero temperature problems in gaming. As seen in the photo, there is almost no cable management and the PCIe power cables were just sitting on top of the GPU backplate.

A few days later I moved my system to a Corsair 5000D Airflow, used 1x Arctic P12 and 2x P14 as intake fans, sealed the top exhaust completely to tunnel airflow to the back of the case, and used custom-made 3x 8-pin extension cables with combs. I reran the test with the same 107% power limit and a fixed 100% fan speed, and to my surprise I am now getting 82-83c and 185 points less, with the same 28-29c ambient and no side panel installed during the test. I've used a powerful floor fan directed at the case and it only improved the temperature by 1c, which tells me airflow is not the problem.

I got the card on March 3 and first used it on March 4; it's really strange that the temperature would have a 14c delta from initial testing. I'm looking at the possibility of a bad thermal paste application from the factory, but that should have shown up when I first ran Superposition.

I've also done a clean install of the graphics drivers via safe mode using DDU and a clean reinstall of MSI AB, and I've uninstalled Dragon Center (I didn't install it while the card was in my old case).


----------



## inedenimadam

J7SC said:


> fyi


Have you tried it in VR yet? It is pretty amazing. I have some setup tweaks to get it running smooth on a 3090 if you need!





Miguelios said:


> Testing...
> 
> (Mad scientist laugh) 😈


I have one of those cards inbound on wednesday. Would it be possible for you to take a measurement for me? I want to know how thick the AIO block is, and if there is any space between the top of the AIO block and the shroud. I have an older EK VGA supremacy universal block that should bolt right up, and I am hoping will also fit under the shroud. Trying to get my ducks in a row now for when it arrives.


----------



## jura11

des2k... said:


> usually on water due to low temps your freq will be pinned at 2130+ and could crash vs air going up/back down from 2130.
> 
> On air my zotac was going up to 2190+, on water 2190+ is insanely unstable in games.
> 
> Past 2100 eff freq on water you're looking at single digit fps gains.


You are lucky; my Palit RTX 3090 GamingPro wouldn't do 2190MHz+ stable in gaming. I can run the XOC BIOS and still won't hit such clocks, even if I open a window or door and the ambient temperature is close to freezing hahaha. 2160MHz it would maybe pass, but not 100% stable

Hope this helps

Thanks, Jura


----------



## J7SC

inedenimadam said:


> Have you tried it in VR yet? It is pretty amazing. I have some setup tweaks to get it running smooth on a 3090 if you need! (...)


...not yet - later in the year, hopefully...and I will definitely ask you for the tweaks (thanks for the offer ). I currently have FS2020 on two 4K systems (2x 2080 Ti-CFR on a 40 inch and 1x 3090/Strix on a 55 inch monitor) and I love to dive down through metro high-rise canyons during a snow storm, or land on glaciers. Big monitors help, but VR might be even better


----------



## geriatricpollywog

J7SC said:


> ...not yet - later in the year, hopefully...and I will definitely ask you for the tweaks (thanks for the offer ). I currently have FS2020 on two 4K systems (2x 2080 Ti-CFR on a 40 inch and 1x 3090/Strix on a 55 inch monitor) and I love to dive down through metro high-rise canyons during a snow storm, or land on glaciers. Big monitors help, but VR might be even better


How does the framerate compare between the 2 setups? I'll be getting an 11900k to hopefully eliminate the pesky main thread limit.


----------



## J7SC

0451 said:


> How does the framerate compare between the 2 setups? I'll be getting an 11900k to hopefully eliminate the pesky main thread limit.


3950X and 3090 definitely > 2950X and 2x 2080 TI-CFR, but 'in the same class'...working on a more comprehensive comparison for my FS2020 thread once w-cooling for the 3950X / 3090 is finished


----------



## yzonker

J7SC said:


> ...not yet - later in the year, hopefully...and I will definitely ask you for the tweaks (thanks for the offer ). I currently have FS2020 on two 4K systems (2x 2080 Ti-CFR on a 40 inch and 1x 3090/Strix on a 55 inch monitor) and I love to dive down through metro high-rise canyons during a snow storm, or land on glaciers. Big monitors help, but VR might be even better


I have both. Oculus Rift S and a 55" 4k TV that sits on my corner desk. Of course there are higher resolution HMDs, but I find the 4k monitor to have much much better image quality. OTOH though, the VR headset is a huge leap in immersion. It feels so much more like flying a plane (or whatever game you're playing) that I'm still amazed by it sometimes. Impossible to get that with a monitor.


----------



## inedenimadam

J7SC said:


> ...not yet - later in the year, hopefully...and I will definitely ask you for the tweaks (thanks for the offer ). I currently have FS2020 on two 4K systems (2x 2080 Ti-CFR on a 40 inch and 1x 3090/Strix on a 55 inch monitor) and I love to dive down through metro high-rise canyons during a snow storm, or land on glaciers. Big monitors help, but VR might be even better


It's a good time to jump into VR; there are several headsets on the market for a variety of purposes. If MSFS is your jam, I'd look into the HP Reverb G2. It has a spectacular display that will bring your 3090 to its knees. Or if you just want to dip your toe in the water, the Oculus Quest 2 is super cheap and still offers a pretty good experience in MSFS, but the panels aren't the best and it's less comfortable.

I am not a huge fan of flight sims, but playing around with it in VR is 100% worth the price of entry. 

VR has been the fun injection that was needed for me. I had mostly stopped playing games until I bought my first headset. 4 years later, I still can't get enough.


----------



## J7SC

I am probably going to go for the HP Reverb G2...The Oculus Rift line certainly has a good reputation as well, but I won't do Facebook (never have) and I understand that you need a Facebook account for the Oculus, no ?

@0451 - re. the FS2020 'MainThread' limit, you might want to loosen tRAS and especially tFAW on your system RAM timings if they're really tight...I run identical Sammy B-die in both systems with supertight timings, all memtests passing, but changing tFAW from 16 to 20 helped with the FS2020 MainThread limit


----------



## geriatricpollywog

J7SC said:


> I am probably going to go for the HP Reverb G2...The Oculus Rift line certainly has a good reputation as well, but I won't do Facebook (never have) and I understand that you need a Facebook account for the Oculus, no ?
> 
> @0451 - re. the FS2020 'MainThread' limit, you might want to loosen tRAS and especially tFAW on your system RAM timings if they're really tight...I run identical Sammy B-die in both systems with supertight timings, all memtests passing, but changing tFAW from 16 to 20 helped with the FS2020 MainThread limit


Thanks for the tip but I’m not gonna fiddle with the timings when 11900k is 2 weeks away.


----------



## sultanofswing

Well let's see how good or bad this one is.


----------



## gfunkernaught

EarlZ said:


> Looking for more input before before I ultimately change the thermal paste ( possibly damage the tamper sticker and void my warranty )


FTC says your warranty is valid even if the sticker is removed or tampered with.


----------



## sultanofswing

Testing this FTW3 Hydro Copper now. It does not have power balancing issues; it came with the 500W BIOS already loaded on the OC slot, and so far I am at 15.1k in Port Royal.
With my cooling loop I have run Port Royal back to back 6 times now and the GPU temp has not gone over 38c at 482W.

I think she's a keeper. I am still on the default 500W BIOS.


----------



## EarlZ

gfunkernaught said:


> FTC says your warranty is valid even if the sticker is removed or tampered with.


Im not located in the USA.

EDIT:

Did a 30-minute test run at 0.850v @ 1920MHz; still getting 75-76c in Cyberpunk at 100% fan speed. Sounds like a bad mount/thermal paste job from the factory?


----------



## sultanofswing

Called it here for the night. 








I scored 15,568 in Port Royal (Intel Core i9-10940X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) - www.3dmark.com


----------



## EarlZ

I've found a way to remove the tamper sticker with zero visible damage. Has anyone replaced the pads on their MSI Suprim X 3090? I don't have a caliper to measure the stock pads on the front and back side of the GDDR6X, or the pads that connect the back-side heat pipe to the main heatsink.


----------



## inedenimadam

J7SC said:


> I am probably going to go for the HP Reverb G2...The Oculus Rift line certainly has a good reputation as well, but I won't do Facebook (never have) and I understand that you need a Facebook account for the Oculus, no ?


Correct. With the Oculus Quest, you are buying the device with your data. There is no way around having facebook on the device. No jailbreaks, side doors, no custom firmware...they locked it down pretty good and they data mine you for just looking at it. Definitely go for the G2 for flight sims. 

When you do get a headset, the one setting that will make the most difference is forcing Asynchronous Space Warp (ASW) to be on 100% of the time. The 3090 (or any card) just can't handle that resolution at 90+Hz, so locking the frame rate to 45 and letting ASW create the in-between frames makes for a much smoother experience, despite a few halo artifacts around fast-moving objects (which are not really a thing in MSFS). Beyond forcing ASW, typical appropriate graphics settings are all that is really needed.

I'm stoked to get my second 3090 in next week because MSFS is one of the few titles that supports SLI.


----------



## Lobstar

J7SC said:


> I am probably going to go for the HP Reverb G2...The Oculus Rift line certainly has a good reputation as well, but I won't do Facebook (never have) and I understand that you need a Facebook account for the Oculus, no ?
> 
> @0451 - re. FS2020 'MainThread' limit, you might want to loosen tRAS and especially rFAW on system RAM timings if they're really tight...I run identical Sammy-B in both systems with supertight timings with all memtest passing, but changing tFAW from 16 to 20 helped with FS2020 MainThread limit


I have the G2. Don't buy it. Cheap plastics (the right ear came off on the second use), cheap cables (there is a new revision, but it causes lots of errors due to its design and location), and no support from HP when something goes wrong. The subreddit is filled with issues and unhappy people. I'm actually back to using my Samsung Odyssey+ since the G2 was so bad. My $100 headset is better than my $600 headset...


----------



## itssladenlol

sultanofswing said:


> Well let's see how good or bad this one is.


How did you get one? Thought the release got delayed?


----------



## J7SC

inedenimadam said:


> Correct. With the Oculus Quest, you are buying the device with your data. There is no way around having facebook on the device. No jailbreaks, side doors, no custom firmware...they locked it down pretty good and they data mine you for just looking at it. Definitely go for the G2 (...) *I'm stoked to get my second 3090 in next week because MSFS is one of the few titles that supports SLI.*


...."Whoa - hold the horses - stop the presses"....FS2020 does regular SLI (AFR) now ? I know it always 'kind of worked' with regular SLI (AFR) but in actual fact the 2nd GPU would not be loaded and just twiddle its thumbs (if it had any). SLI-CFR on the other hand works for select RTX 2K Turing, but afaik, the last CFR capable driver is from late May 2020 /early June 2020. That's well before Ampere / 3090s came out...still, if those CFR drivers work with 2x 3090s, that would be fantastic. Any more info you have on regular SLI patches for FS2020 would be welcome

FYI - Guru 3D just did a major 'revisit' on FS2020 (article is from March 5, '21) with exhaustive tests...more on that > here


----------



## gfunkernaught

EarlZ said:


> Im not located in the USA.


#murica


----------



## inedenimadam

J7SC said:


> ...."Whoa - hold the horses - stop the presses"....FS2020 does regular SLI (AFR) now ? I know it always 'kind of worked' with regular SLI (AFR) but in actual fact the 2nd GPU would not be loaded and just twiddle its thumbs (if it had any). SLI-CFR on the other hand works for select RTX 2K Turing, but afaik, the last CFR capable driver is from late May 2020 /early June 2020. That's well before Ampere / 3090s came out...still, if those CFR drivers work with 2x 3090s, that would be fantastic. Any more info you have on regular SLI patches for FS2020 would be welcome
> 
> FYI - Guru 3D just did a major 'revisit' on FS2020 (article is from March 5, '21) with exhaustive tests...more on that > here


My second card gets in middle of next week. My statement on SLI is only what I have heard. I have not tested it myself...but I fully intend to! It might take me a couple days to get around to shoving the card in. My PC is in a 4U rack that is watercooled, so I haven't figured out what I am going to do about the KPE having its own cooler. I may just run with the lid off and the 360 rad precariously perched on top of the drive cages or something. That is a problem for "next week me" to figure out...but I have some elegant ideas that may work.


----------



## J7SC

...after getting a pile of Amazon and other deliveries over the last 10 days, I finally started 'the process'...everything is flushed, all the fittings and allen head screws are tightened, and initial basic leak tests look fine - much, much more to do, though


----------



## sultanofswing

Calling it quits for a bit on testing this FTW Hydrocopper. If I could get temps below ambient, I am sure I could push a little bit more.

The 500W XOC BIOS was able to carry all the way to 15.4k before hitting the power limit at 496W.
This score was with 1kw KPE Bios with a max power draw reported by GPU-Z at 548W.
No power balancing issues and the most I ever saw on PCI-E slot draw was 81W.
I was able to max out the VRAM Slider in Afterburner with just 1 Arctic P12 pointed at the backplate with memory temps reported by GPU-Z at a max of 60c.









I scored 15 628 in Port Royal

Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## ViRuS2k

Hey guys, has there been an update on different BIOSes for the MSI Gaming X Trio 3090? I was using the Suprim X 450W BIOS, but I'm hitting the power limit very easily, and when that happens clocks drop very, very low.

Is the 1000W still the best bet, or is there a better one out there, like a 550W with all protections enabled? lol, so I don't hit this stupid power limit.


----------



## gfunkernaught

ViRuS2k said:


> Hey guys, has there been an update on different BIOSes for the MSI Gaming X Trio 3090? I was using the Suprim X 450W BIOS, but I'm hitting the power limit very easily, and when that happens clocks drop very, very low.
> 
> Is the 1000W still the best bet, or is there a better one out there, like a 550W with all protections enabled? lol, so I don't hit this stupid power limit.


I use the 1kW BIOS on my Trio and limit the power to 50%, or 500W. The clocks are more consistent than when I used the 500W BIOS, but the card runs warmer, about 2-3C, because it is actually hitting 500W, whereas with the 500W BIOS it was hitting 480W and getting limited. Don't use the 1kW BIOS unless you're water cooling or you have exceptional cooling.


----------



## Pepillo

A quick question. Has anyone put the Galax/KFA2 HOF BIOS in? I don't see it in TechPowerUp's database or recall it in the forum.

Thanks


----------



## Beagle Box

Pepillo said:


> A quick question. Has anyone put the Galax/KFA2 HOF bios in? I don't see it at techpowerup's base or remember at the forum.
> 
> Thanks


This one? 
It's unverified. 
I haven't tried it. 
Let us know how it goes.


----------



## PowerK

For anyone interested, here's the BIOS for Galax RTX 3090 HOF Limited Edition I saved using GPU-Z. It's a performance mode BIOS.
Google Drive Link


----------



## Pepillo

Beagle Box said:


> This one?
> It's unverified.
> I haven't tried it.
> Let us know how it goes.


Thanks, this one's for 2x8, I'm not interested in a 3x8.


----------



## Pepillo

PowerK said:


> For anyone interested, here's the BIOS for Galax RTX 3090 HOF Limited Edition I saved using GPU-Z. It's a performance mode BIOS.
> Google Drive Link


Thanks, just 450w this one.


----------



## yzonker

Beagle Box said:


> This one?
> It's unverified.
> I haven't tried it.
> Let us know how it goes.


Not sure if it really didn't work or they just didn't flash it correctly? 


https://www.reddit.com/r/overclocking/comments/kjvsgq


----------



## ALSTER868

Hey guys, can anyone tell me why I cannot flash any BIOS I've been able to flash before? I'm getting no display after every flash and can only revert to the original Strix BIOSes.
Tried to flash the KPE 520W and EVGA 500W BIOSes, also tried the Suprim X BIOS.










What could the issue be? Every time, the flasher warns about an ID mismatch. nvflash is 5.670.0, which I flashed with many times before this card.


----------



## des2k...

ALSTER868 said:


> Hey guys, can anyone tell why I cannot flash any bios I've been able to flash before? I'm getting no display after any flash and can revert only to original Strix bioses..
> Tried to flash KPE 520W and EVGA 500W bioses, also tried Suprim X bios
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> What can be the issue with it? Every time the flasher warns about mismatch of the ID.. nvflash is 5.670.0 which I flashed with many times before this card..


Never seen that message; I've been using the nvflash that comes with the ASUS TUF update from ASUS.





Tech4Gamers - All About Technology and Gaming News (tech4gamers.com)





I disable the driver, set protection off and force flash & reboot.
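For anyone following along, the force-flash sequence described here can be sketched with standard nvflash commands. A sketch only: `suprimx.rom` is a placeholder file name, and cross-flashing a mismatched BIOS is entirely at your own risk.

```shell
# Run from an elevated prompt after disabling the GPU driver in Device Manager.
nvflash64 --save backup.rom     # keep a copy of the current BIOS first
nvflash64 --protectoff          # disable the EEPROM write protection
nvflash64 -6 suprimx.rom        # -6 forces past the subsystem ID mismatch
shutdown /r /t 0                # reboot so the new BIOS takes effect
```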


----------



## KedarWolf

Bought these. Will arrive by Tuesday amazon.ca.

Digital Caliper, Sangabery 0-6 inches Caliper with Large LCD Screen, Auto - Off Feature

Need one because ASUS support will NOT release the sizes of the thermal pads on their ASUS ROG Strix OC RTX 3090 video card, so I can replace them with better pads.

Need to measure the stock pads.

With ambient temps very low, getting 80C+ on memory and an 85C+ hotspot just while gaming is not acceptable.


----------



## itssladenlol

Just tested a bunch of different BIOSes with my new Strix 3090 OC.
Msi suprim x
Evga xc3
Asus xoc 2

The Suprim runs best for me, and I can run 2100MHz @ 0.875V in Time Spy Extreme and Superposition 4K on air.

With the 1000W BIOS my card does 2200+MHz lol.
ATITool does 2340MHz without a crash; guess it's a keeper.

Got my aquacomputer Block here already


----------



## sultanofswing

itssladenlol said:


> Just tested Bunch of different bios with my new strix 3090 oc.
> Msi suprim x
> Evga xc3
> Asus xoc 2
> 
> The suprim runs Best for me and I can run 2100MHz @ 0,875 in Timespy extreme and superposition 4k on air.
> 
> With 1000w bios my card does 2200+MHz lol.
> Ati Tool does 2340MHz without Crash, guess its a keeper.
> 
> Got my aquacomputer Block here already


Sounds like a good card.


----------



## itssladenlol

sultanofswing said:


> Sounds like a good card.


I love it, and zero whine with that undervolt. 
My KFA2 was horrible in terms of whine and OC/UV.


----------



## ALSTER868

des2k... said:


> I disable the driver, set protection off and force flash & reboot.


So did I: disable the card in Device Manager, do the protect-off, and then flash. It works only with the Strix BIOS; others result in a black screen. Never seen that before.
Does anyone know what the issue could be, and if there is a different way I could do the flash?


----------



## J7SC

KedarWolf said:


> Bought these. Will arrive by Tuesday amazon.ca.
> 
> Digital Caliper, Sangabery 0-6 inches Caliper with Large LCD Screen, Auto - Off Feature
> 
> Need one but because ASUS support will NOT release the size on their thermal pads on their ASUS ROG Strix OC RTX 3090 video card so I can replace them with better pads.
> 
> Need to measure the stock pads.
> 
> With ambient temps very low, get 80C+ on memory and 85C+ hotspot not acceptable just while gaming.
> 
> View attachment 2482504


...check out >> this post; I'm working on the Strix OC w-cooling conversion today and tomorrow...we should compare notes on Strix pad thickness once you're done



itssladenlol said:


> Just tested Bunch of different bios with my new strix 3090 oc.
> Msi suprim x
> Evga xc3
> Asus xoc 2
> 
> The suprim runs Best for me and I can run 2100MHz @ 0,875 in Timespy extreme and superposition 4k on air
> With 1000w bios my card does 2200+MHz lol.
> Ati Tool does 2340MHz without Crash, guess its a keeper.
> 
> Got my aquacomputer Block here already


Did you get the Aquacomputer active backplate as well ? They were sold out when I checked  ...as to Superposition 8K, my Strix OC result (air-cooled, stock bios) is in the spoiler. As to the Suprim BIOS running best on your Strix OC, how is it better than the stock BIOS, if I may ask? Just clocks?



Spoiler


----------



## Falkentyne

ALSTER868 said:


> Hey guys, can anyone tell why I cannot flash any bios I've been able to flash before? I'm getting no display after any flash and can revert only to original Strix bioses..
> Tried to flash KPE 520W and EVGA 500W bioses, also tried Suprim X bios
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> What can be the issue with it? Every time the flasher warns about mismatch of the ID.. nvflash is 5.670.0 which I flashed with many times before this card..


Use this BIOS.

Remove the .txt at the end of the file name so it's .rom.


----------



## itssladenlol

J7SC said:


> ...check >> this post out & working on the Strix OC w-cooling conversion today and tomorrow...we should compare notes on Strix pad thickness once you're done
> 
> 
> 
> Did you get the Aquacomputer active backplate as well ? They were sold out when I checked  ...as to Superposition 8K, my Stric OC result (air-cooled, stock bios) is in the spoiler. By 'Suprim bios' runs best on your Strix OC, how is it better than the stock bios if I may ask ? Just clocks ?
> 
> 
> 
> Spoiler


More stable and less coil whine (I know it sounds crazy, but same curve settings).
The Aquacomputer active backplate is a joke.
Between the active and passive backplate there is a 2-degree difference on memory.
We tested this many times. 
It's not really an active backplate, just one heatpipe getting touched by water, which does absolutely nothing


----------



## Falkentyne

KedarWolf said:


> Bought these. Will arrive by Tuesday amazon.ca.
> 
> Digital Caliper, Sangabery 0-6 inches Caliper with Large LCD Screen, Auto - Off Feature
> 
> Need one but because ASUS support will NOT release the size on their thermal pads on their ASUS ROG Strix OC RTX 3090 video card so I can replace them with better pads.
> 
> Need to measure the stock pads.
> 
> With ambient temps very low, get 80C+ on memory and 85C+ hotspot not acceptable just while gaming.
> 
> View attachment 2482504


Just like your Bios mods. Sometimes a man has to step up and be the man.


----------



## ALSTER868

Falkentyne said:


> Use this BIOS.


What kind of BIOS is this? Is it safe like the KPE 520 is? Never seen it.


----------



## J7SC

itssladenlol said:


> More stable and less coilwhine (I know Sounds crazy but Same curve settings)
> The aquacomputer active backplate is a joke.
> Between the active and passive backplate is a 2 degree difference on memory.
> We tested this Many times.
> Its Not really active backplate, just one heatpipe getting touched by water which does absolutely nothing


Thanks ! Yeah, I skipped a custom back-plate until I know more about the upcoming EK w-cooled back-plate (or rig up something myself like others have shown here). As to the BIOS, I wonder if the earlier beta of the Strix XOC (by Elmor, if I'm not mistaken) will get an update soon. That coil whine differential is weird, but I've seen and heard strange things re. coil whine when switching to custom BIOSes on GPUs


----------



## Rhadamanthys

So I have the KFA 390W BIOS on my reference 3090. I've seen today in two games that both AB and HWiNFO report the PL at around 350-370W, whereas running a benchmark such as TS Extreme (test 2), the power limit is reached at about 390W, as expected.

Anyone know what's going on?


----------



## PLATOON TEKK

Been on here for a minute. This forum helped me immensely during the TitanXP shunt days, truly appreciated. Anyways, got a hold of 2x oc lab.

The gpus are pretty mad, without any hard mods, they are able to hit over 1000w (78c at 15c ambient, full fan). felt like the house was going to explode during nvlink run, had to switch straight to wall from 1500w sine-wave APC. Am on 1000w KP bios, but feel results aren't as they should at such high power usage, will figure it out. 

Haven't had time to bench properly but hopefully soon. Anyone else try any version of HOF on KP bios? Is asus elmor bios a better fit maybe?

If you google "Platoon Tekk OC Lab" you should find the video of the dew-kkontrolled setup, pretty useless until we get KP or HOF waterblokks. Lowest maintainable dew-point with all on is -10.3c, -8 liquid. 

















#SWOUP


----------



## jomama22

PLATOON TEKK said:


> Been on here for a minute. This forum helped me immensely during the TitanXP shunt days, truly appreciated. Anyways, got a hold of 2x oc lab.
> 
> The gpus are pretty mad, without any hard mods, they are able to hit over 1000w (78c at 15c ambient, full fan). felt like the house was going to explode during nvlink run, had to switch straight to wall from 1500w sine-wave APC. Am on 1000w KP bios, but feel results aren't as they should at such high power usage, will figure it out.
> 
> Haven't had time to bench properly but hopefully soon. Anyone else try any version of HOF on KP bios? Is asus elmor bios a better fit maybe?
> 
> If you google "Platoon Tekk OC Lab" you should find the video of the dew-kkontrolled setup, pretty useless until we get KP or HOF waterblokks. Lowest maintainable dew-point with all on is -10.3c, -8 liquid.
> 
> View attachment 2482542
> View attachment 2482543
> 
> 
> #SWOUP


Not surprising they are able to draw 1000W with the KP XOC BIOS; any 3x8-pin card should be able to (safely is another matter). I'm curious what you were running for it to pull 1000W, and whether you had used volt mods (i.e. an EVC), as any stress test I throw at my shunted Strix needs near 1.3V to start approaching 900W (the highest I saw was 880W with 1.32V in TSE GT2).
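For readers unfamiliar with the shunt mods mentioned here, the arithmetic behind why a parallel shunt skews power telemetry is simple Ohm's law. A sketch; the resistance value below is illustrative, not the actual shunt on any of these cards:

```python
# The power controller computes current as I = Vdrop / R_sense. Soldering an
# equal-value resistor across the stock shunt halves the effective sense
# resistance, so the same real current produces half the measured Vdrop and
# the card believes it is drawing half the power.
def parallel(r1: float, r2: float) -> float:
    """Effective resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

R_STOCK = 0.005                         # ohms, illustrative only
R_MODDED = parallel(R_STOCK, R_STOCK)   # equal-value shunt stacked on top

print(R_MODDED / R_STOCK)  # 0.5 -> telemetry reads half the true power
```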


----------



## KedarWolf

Falkentyne said:


> Just like your Bios mods. Sometimes a man has to step up and be the man.


I need to wait until I get the 2.0mm pads from Aliexpress, have lots of 0.5, 1.0 and 1.5mm but I'm really sure I'll need the 2.0mm as well.


----------



## PLATOON TEKK

jomama22 said:


> Not surprising they are able to draw 1000w with the kp xoc bios, any 3 pin should be able too (safely is another matter). I'm curious as to what you were running for it to pull 1000w and whether you had used volt mods with that (i.e. an evc) as any stress test I throw at my shunted strix needs near 1.3v to start approaching 900w (highest I saw was 880w with 1.32v in tse gt2)


Ran Port. Haven't boosted voltage at all yet, have evc with display here for strix but still in box. oddly can get there with voltage capped at 1.1 in afterburner but low clock


----------



## jomama22

PLATOON TEKK said:


> Ran Port. Haven't boosted voltage at all yet, have evc with display here for strix but still in box. oddly can get there with voltage capped at 1.1 in afterburner but low clock
> View attachment 2482550


What program are you running to get that? Just curious.


----------



## PLATOON TEKK

jomama22 said:


> What program are you running to get that? Just curious.


3dmark Port Royal and gpu-z sensor page


----------



## Nizzen

PLATOON TEKK said:


> 3dmark Port Royal and gpu-z sensor page


Do you have a "Kill A Watt" or something else that measures power draw from the wall?

I want to see draw from the wall, to see total power draw.


----------



## mirkendargen

Anyone else notice Strixes apparently went up another 10% in MSRP? $2200 now at Best Buy lol.


----------



## J7SC

mirkendargen said:


> Anyone else notice Strix's went up another 10% in MSRP apparently? $2200 now at Best Buy lol.


...yeah, I saw this at Caseking (one of Europe's bigger retailers) yesterday...prices include sales tax though...I shared a similar screenshot a month or so ago, but that '$trix trend' is picking up steam...


----------



## lolhaxz

How big a power supply do I need?
How much radiator do I need?

1100W in, 972W out (AX1200i)
~610W 3090 Load - Quake II RTX
~306W 10900K - Prime95 SmallFFT

~900W Heat load - 3x 360mm Radiators

23-24C Ambient, fans at ~1000rpm, delta ~10C
EKWB Vector Strix Block Delta: 19C (610W)

Peak temperatures are reached at about 5 minutes.










A typical gaming load:
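The figures above invite a back-of-the-envelope radiator model. A minimal sketch using only the numbers quoted in this post; the derived ~30 W/C per 360mm rad is specific to this setup and fan speed, not a general constant:

```python
# ~900 W of heat across three 360 mm radiators at a ~10 C water-over-ambient
# delta, as reported above.
HEAT_LOAD_W = 900
N_RADS = 3
WATER_DELTA_C = 10

# Effective dissipation per radiator per degree of water-over-ambient:
W_PER_RAD_PER_C = HEAT_LOAD_W / (N_RADS * WATER_DELTA_C)  # 30 W/C

def predicted_delta(load_w: float, rads: int) -> float:
    """Predict the water-over-ambient delta for a load on similar rads/fans."""
    return load_w / (rads * W_PER_RAD_PER_C)

# The ~610 W GPU-only load above on the same three rads:
print(round(predicted_delta(610, 3), 1))
```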


----------



## PLATOON TEKK

Nizzen said:


> Do you have a "kill a watt" or something else that mesure powerdraw from the wall?
> 
> I want to see powerdraw from the wall, to see total powerdraw












Here is the reading off one OC Lab card in Port Royal; it seems to be slightly lower than what GPU-Z reports. Peaked at 1465W once (got the APC warning tone).
Below is a brief video of above:

http://instagr.am/p/CMbmWfoFd7m/

#SWOUP


----------



## des2k...

lolhaxz said:


> How bigger power supply do I need?
> How much radiator do I need?
> 
> 1100W in, 972W out (AX1200i)
> ~610W 3090 Load - Quake II RTX
> ~306W 10900K - Prime95 SmallFFT
> 
> ~900W Heat load - 3x 360mm Radiators
> 
> 23-24C Ambient, fans at ~1000rpm, delta ~10C
> EKWB Vector Strix Block Delta: 19C (610W)
> 
> Peak temperatures reach at about 5 minutes.
> 
> View attachment 2482572
> 
> 
> A typical gaming load:
> View attachment 2482574


900W load, jesus, that heats up an entire floor.
Here's mine: 600-640W TSE GT2 + Prime95 on the 3900X (200W).

Water temp is 26-27C with filters installed on my O11, 3x360 + 2x240; one 360 is a Corsair XR7 and one 240 is an ocool 45mm, the rest are slim.

The EK block is garbage; it's a 22C delta at 600W+ on my side. Waiting on the new Barrow block.


----------



## jomama22

PLATOON TEKK said:


> 3dmark Port Royal and gpu-z sensor page


It's honestly really strange to see it pulling 1000w through port royal at stock voltage levels. What clocks are you hitting? Even if they are some of the leakiest chips, most chips will pull a max of 550-600w at stock voltage levels in port royal.

Also, I wouldn't expect an air cooler to be able to cope with that wattage, let alone keep it at 72C.

Definitely use a kill a watt and see if it's actually pulling that. My guess is it's not and for some reason, the kp xoc bios on that card is messing with the output power readings.

What's the wattage at idle?
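A Kill A Watt reads AC at the wall, so comparing it with software telemetry needs a PSU-efficiency correction. A rough sketch using the AX1200i figures quoted earlier in the thread (1100W in, 972W out); treating that efficiency as constant at other load points is an assumption:

```python
# One measured load point for this PSU: 1100 W from the wall, 972 W delivered.
PSU_EFFICIENCY = 972 / 1100  # ~0.884

def estimated_dc_load(wall_watts: float) -> float:
    """Estimate total DC system load from a wall (AC) reading."""
    return wall_watts * PSU_EFFICIENCY

# The 1465 W wall peak reported above would be roughly 1295 W of DC load:
print(round(estimated_dc_load(1465)))
```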


----------



## PLATOON TEKK

jomama22 said:


> It's honestly really strange to see it pulling 1000w through port royal at stock voltage levels. What clocks are you hitting? Even if they are some of the leakiest chips, most chips will pull a max of 550-600w at stock voltage levels in port royal.
> 
> Also, I wouldn't expect an air cooler to be able to cope with that wattage, let alone keep it at 72C.
> 
> Definitely use a kill a watt and see if it's actually pulling that. My guess is it's not and for some reason, the kp xoc bios on that card is messing with the output power readings.
> 
> What's the wattage at idle?


I agree 100%. I think it throws an extra 300W on each card, judging by temps and clocks. The APC reading is similarly accurate to a Kill A Watt.

Strangely, I took out the other card yesterday, reran the test, and the APC peaked around 900W; I guess it somehow draws on both cards in NVLink (hence the 1465W), although GPU-Z says otherwise.

I mentioned the lower clocks in my first post too; definitely not running 100%. Will try the Elmor BIOS today if I have time and see if the power draw is accurate.


----------



## SoldierRBT

1000W for both cards running Port Royal seems normal to me (500W for each card). 1000W on one card in PR isn't possible unless you're supplying 1.40V+ NVVDD to the card. Quake II RTX and Time Spy Extreme, which are more demanding than Port Royal, pull around 600W at stock clocks using the 1000W BIOS.


----------



## PLATOON TEKK

No doubt. Good point; that it still peaked at 1000W with a 1.1V Afterburner limit makes it extremely unlikely that it's showing accurate draw. Will run a single GPU tonight, watch the meter, and see what is actually being drawn.

I'll also try the Elmor BIOS, unless someone else has suggestions. Tried that 1000W Galax BIOS floating around, but I get the same error during flash as on my KP and Strix.

#SWOUP


----------



## gfunkernaught

des2k... said:


> ek block is garbage, it's 22c delta at 600w+ on my side. Waiting on the new barrow block.


I get 17c delta at 600w, and that is with 2x360mm and 1x240mm plus the stock ek 6w/mk pads. Wonder what the delta @600w is for other blocks...


----------



## crefjouy

Trying to find where the photos of the updated shunt design on the EVGA FTW3 Ultra RMA replacement units were posted here. It's not clear whether it has fixed the power balance issue.


----------



## Christopher2178

DrunknFoo said:


> Managed to get a new card, another ftw. Pcie fuse on this is different than the older model.
> 
> Any idea what amp this is rated at? (I got no clue, is it 8A? If memory serves me correctlt the previous one was 20A errr 10A? Or im dellusional)
> View attachment 2482314





----------



## des2k...

gfunkernaught said:


> I get 17c delta at 600w, and that is with 2x360mm and 1x240mm plus the stock ek 6w/mk pads. Wonder what the delta @600w is for other blocks...


Wonder...? It's been shown multiple times: it's about an 11C delta with the Bykski, ~8C with the Optimus block, even a 5C delta with liquid metal on the Optimus block.

17c is nothing amazing, but better than mine lol
The EK block is an old design, only 20 some microfins, small area and tons of copper between core / fins for the heat to get to.

I'm waiting for the barrow block, but just look at the design, 40+ fins and the whole plate sits very low to the core.
It's no optimus block, but should get 10c delta easily

Barrow block








EK block


----------



## sultanofswing

10c Delta is going to be a stretch at 550+w.


----------



## Wihglah

sultanofswing said:


> 10c Delta is going to be a stretch at 550+w.


Can confirm Optimus beats that.


----------



## sultanofswing

Wihglah said:


> Can confirm Optimus beats that.


I'll have to try one and see. In my current loop, with 3 D5 pumps, one 480 and three 360s, this FTW Hydrocopper at 496W and a 23C water temp gets to 39C in Port Royal.


----------



## J7SC

des2k... said:


> wonder... ? it's shown multiple times it's about 11c with bykski, ~8c with optimus block, even 5c delta with liquid metal on the optimus block
> 
> 17c is nothing amazing, but better than mine lol
> The EK block is an old design, only 20 some microfins, small area and tons of copper between core / fins for the heat to get to.
> 
> I'm waiting for the barrow block, but just look at the design, 40+ fins and the whole plate sits very low to the core.
> It's no optimus block, but should get 10c delta easily
> 
> Barrow block
> View attachment 2482661
> 
> EK block
> View attachment 2482662



The Barrow block pic looks like CGI  ...anyway, to each his own. The EK block for the Strix is a new gen, and check out the VRM cooling compared to the 'pics' you posted...


----------



## TheNaitsyrk

Gentlemen, would it be okay to run the Kingpin 1000W BIOS on my EVGA FTW3 Ultra and set power target to max 600W just for testing purposes in regular TimeSpy?

That's all I want to do; just making sure my card will survive this.

Also, can someone link me the 1000W BIOS? I know it's on techpowerup but I'd love to have someone link it to be 100% sure I'm grabbing the right one.

I'll be cooling my 3090 with 4x 480mm 60mm thick XE rads from EK with 2x D5 pumps with amazing flowrate with 3000RPM all across the rads and 2x 3000RPM 140mm Noctua fans blowing on mobo and GPU area attached to tablet holders.

I was using the XC3 BIOS on my 3090 FTW3 Ultra and it helped me get a bit better scores, but I'm trying to get an extra 300 to 500 points in regular Time Spy. Currently at 22500 (I'd be super done once I got 23000).

Btw. I know it's not safe to run it, but I just want to know if TimeSpy isn't demanding enough for 600W in my setup.

Thanks in advance.


----------



## gfunkernaught

Guess I will have to wait and see if the Trio gets better water blocks. Kinda regret impulse-buying the Trio; it just makes things harder. Like, after seeing all the data for the EK block for the Trio, who would buy a used one?


----------



## des2k...

J7SC said:


> The Barrow block pic looks like CGI  ...anyway, to each his own. The EK block for the Strix is a new gen, and check out the VRM cooling compared to the 'pics' you posted...
> 
> 
> View attachment 2482667
> 
> 
> View attachment 2482668


yeah bigger block, but same weak design. Very easy for EK to manufacture these, they don't cut too much copper and only 20 or so fins to carve out. Waterflow is on top. At 500W+ it's hard to get a good delta when that heat needs to move through chunks of solid copper.

I'll use optimus for example,
The fins area is way bigger, more fins & the cold plate sits very close. Mem & Vrm cooling are also machined low for the water to get to the heat source very fast.

That's the difference between 17c-20c delta and 8c at 600w.
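One way to make these block comparisons load-independent is to express each block as a thermal resistance (delta over power). A sketch using the deltas reported in this exchange; these are forum numbers, not lab measurements:

```python
# Thermal resistance normalizes a core-to-water delta by the power dissipated,
# so blocks tested at different wattages can be compared directly.
def thermal_resistance(delta_c: float, power_w: float) -> float:
    """Core-to-water delta per watt of heat load (C/W)."""
    return delta_c / power_w

EK      = thermal_resistance(17, 600)  # ~0.028 C/W (EK figure quoted above)
OPTIMUS = thermal_resistance(8, 600)   # ~0.013 C/W (reported Optimus figure)

# Scaled to a 400 W load, the same blocks would sit about 11.3 C and 5.3 C
# over water temperature, respectively:
print(round(400 * EK, 1), round(400 * OPTIMUS, 1))
```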


----------



## inedenimadam

I don't have any problems with the EK block myself when used with LM. I run about 10C over ambient for extended load.


----------



## des2k...

inedenimadam said:


> I don't have any problems with the EK block myself when used with LM. I run about 10C over ambient for extended load.


That's strange, because high-end paste vs LM is at most 1-2C. I can't imagine dropping 10C on temps by going LM.

Somebody here with a 10C delta said he was going to switch to normal paste to post results, but we haven't heard back from that person.


----------



## yzonker

TheNaitsyrk said:


> Gentlemen, would it be okay to run the Kingpin 1000W BIOS on my EVGA FTW3 Ultra and set power target to max 600W just for testing purposes in regular TimeSpy?
> 
> That's all I want to do, just making sure my card will live this.
> 
> Also, can someone link me the 1000W BIOS? I know it's on techpowerup but I'd love to have someone link it to be 100% sure I'm grabbing the right one.
> 
> I'll be cooling my 3090 with 4x 480mm 60mm thick XE rads from EK with 2x D5 pumps with amazing flowrate with 3000RPM all across the rads and 2x 3000RPM 140mm Noctua fans blowing on mobo and GPU area attached to tablet holders.
> 
> I was using XC3 BIOS on my 3090 FTW3 Ultra and it helped me get bit better scores but I'm trying to get extra 300 to 500 points more in regular TimeSpy. Currently at 22500 (I'd be super done once I got 23000).
> 
> Btw. I know it's not safe to run it, but I just want to know if TimeSpy isn't demanding enough for 600W in my setup.
> 
> Thanks in advance.


Here's the BIOS:

EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

I don't think you'll get to 600W. Even at 1.1V you won't get past 550W, at least I don't with my Zotac, just pinned at 1.1.


----------



## KedarWolf

yzonker said:


> Here's the bios,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> I don't think you'll get to 600w. Even at 1.1v you won't get past 550w,at least I don't with my Zotac. Just pinned at 1.1.


Is your Zotac two or three eight-pin power connectors?


----------



## satinghostrider

Anyone had any success running a better BIOS on the Aorus 3090 Xtreme Waterforce WB?








Thanks in advance! It is a 2x8-pin card, btw.

@KedarWolf Any options? Thanks!


----------



## J7SC

inedenimadam said:


> I don't have any problems with the EK block myself when used with LM. I run about 10C over ambient for extended load.


...yeah, and apart from the price / availability of the Optimus and that sort of thing. Anyways, I'm having a good time with the Strix OC /EK... it's just a little desk build for the master bedroom, not a bencher...

fans are push / pull, btw (next time I might get a MoRa 420 or two  )


----------



## yzonker

KedarWolf said:


> Is your Zotac two or three eight-pin power connectors?


2


----------



## lolhaxz

inedenimadam said:


> I don't have any problems with the EK block myself when used with LM. I run about 10C over ambient for extended load.





des2k... said:


> That's strange because high end paste vs LM is at most 1,2c. I can't imagine dropping 10c on temps by going LM.
> 
> Somebody here with 10c delta said he was going to switch to normal paste to post results but we haven't heard back from that person


Quite safe to say it's rubbish, unless you are:


Running the pump at 3500-4000rpm+
Running fans at 100%
Running the equivalent of 8+ 360mm radiators (for ~600W)
and/or a very low load, i.e. 200-300W

The claim is 10C above AMBIENT; it's simply not plausible... to start with you need the water to stay at ambient (hence the incredible amount of rad area required).

I would say LM is worth 3-4C, having just recently done it myself, including lapping the die past the point of the laser-engraved text.


----------



## sultanofswing

Liquid metal for me on a 2080ti was worth 2-3c at best.
Was not worth the hassle vs just running Kryonaut.


----------



## J7SC

sultanofswing said:


> Liquid metal for me on a 2080ti was worth 2-3c at best.
> Was not worth the hassle vs just running Kryonaut.


...I've run LM on delidded Ivy Bridge CPUs and NV GTX 700 series cards years back, and from what I recall, the gains were about 3-4C max. Not worth the mess re. insulation unless you do a lot of benching, IMO. I just put Kryonaut on my 3950X and the 3090 - first time in years that I skipped my usual go-to, MX4


----------



## gfunkernaught

I gotta try Kryonaut again. Lots of people say it's good, but I had several higher-than-ICD temp experiences with it. Also want to try LM again. 

But if the fin count of a waterblock impedes the transfer of heat at a given wattage, such as 600W and up, maybe there needs to be more pressure. Perhaps adding another pump could alleviate that?


----------



## crefjouy

DrunknFoo said:


> Managed to get a new card, another ftw. Pcie fuse on this is different than the older model.
> 
> Any idea what amp this is rated at? (I got no clue, is it 8A? If memory serves me correctlt the previous one was 20A errr 10A? Or im dellusional)
> View attachment 2482314


The first gen cards were 10A on the PCIe and 20A on the 8-pin Rails. 

How did this replacement do with power balance and power throttling in the end?


----------



## T.Sharp

lolhaxz said:


> I would say LM is worth 3-4C, having just recently doing it myself, including lapping the die past the point of the laser engraved text.





sultanofswing said:


> Liquid metal for me on a 2080ti was worth 2-3c at best.


Did you test to find your highest stable OC after applying LM? I had the same ~3c temp drop when I tried it on a 1060, but was able to overclock an extra 60-75MHz stable. 

Afaik, the regular Nvidia GPU temp sensor reading is an average of many sensors, so it's not trustworthy for judging the effectiveness of LM. And if you mouseover GPU hotspot temp in HWiNFO, it says it's unconfirmed. Idk how trustworthy that is either.
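If it helps to picture the averaging point, here is a toy sketch of why an averaged reading can understate what a repaste really did (sensor values are invented, not real telemetry):

```python
# Toy model: NVIDIA's reported "GPU temp" is roughly an average of many
# on-die sensors, while "hotspot" is the maximum. Values are made up.
readings_c = [52.0, 54.5, 51.0, 63.5, 55.0, 53.5]  # hypothetical sensor grid

gpu_temp = sum(readings_c) / len(readings_c)  # what the driver reports
hotspot = max(readings_c)                     # what HWiNFO labels "hotspot"

# A paste job that only fixes the worst spot changes the hotspot a lot
# but barely moves the average -- which is why a ~3C drop in the average
# can understate the real improvement under the die.
print(f"average {gpu_temp:.1f} C, hotspot {hotspot:.1f} C")
```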


----------



## TheNaitsyrk

J7SC said:


> ...yeah, and apart from the price / availability of the Optimus and that sort of thing. Anyways, I'm having a good time with the Strix OC /EK... it's just a little desk build for the master bedroom, not a bencher...
> 
> fans are push / pull, btw (next time I might get a MoRa 420 or two  )
> 
> View attachment 2482683


LM on 10980XE running it at 5.2Ghz (I might try 5.3Ghz since temps are in check)

Kryonaut on 3090 FTW3 Ultra and it doesn't go past 32C even in TimeSpy.

Nice!
I'm running 9x 480mm XE rads with 40x 3000 RPM Noctua fans and 4x D5 pumps. Daily use and OC. Quick connects for convenience.

All controlled by AquaComputer Octo and 4x Splitty9 Active.

I got two 140mm 3000RPM Noctua fans suspended above the bench as well.


----------



## TheNaitsyrk

yzonker said:


> Here's the bios,
> 
> EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> I don't think you'll get to 600w. Even at 1.1v you won't get past 550w, at least I don't with my Zotac. Just pinned at 1.1.


Thanks! I got the 3x 8pin GPU.

Also 500-600W is all I want then I revert to normal bios. 

Just want to know what my chip is capable of.


----------



## Edge0fsanity

Have any AIBs or nvidia made a triple slot nvlink bridge for this generation yet? I really want to add a second card for SLI since there are a few games I still play that support it.


----------



## J7SC

T.Sharp said:


> Did you test to find your highest stable OC after applying LM? I had the same ~3c temp drop when I tried it on a 1060, but was able to overclock an extra 60-75MHz stable.
> 
> Afaik, the regular Nvidia GPU temp sensor reading is an average of many sensors, so it's not trustworthy for judging the effectiveness of LM. And if you mouseover GPU hotspot temp in HWiNFO, it says it's unconfirmed. Idk how trustworthy that is either.


Normally I would say to just put physical temp probes in and/or use at least an IR gun sensor...but both HWInfo's and now GPUz's report on the 'Hotspot' temp may make sense if they have the right sensor read inputs. Nvidia's boost algorithm needs hyperfast sampling of hotspot temps to both protect the GPU from damage and as regular boost clock input (AMD is similar). The big Amperes have over 28 billion transistors though, and include a ton of sensor inputs...



TheNaitsyrk said:


> LM on 10980XE running it at 5.2Ghz (I might try 5.3Ghz since temps are in check)
> 
> Kyronaut on 3090 FTW3 Ultra and it doesn't go past 32C even in TimeSpy.
> 
> Nice!
> I'm running 9x 480mm XE rads with 3000 RPM fans 40x from Noctua with 4x D5 pumps. Daily use and OC. Quick connects for convenience.
> 
> All controlled by AquaComputer Octo and 4x Splitty9 Active.
> 
> I got two 140mm 3000RPM Noctua fans suspended above the bench as well.


Very nice setup 
...I doubt that I would get away with putting something like that in our master bedroom though 

I've done 6x rad setups w/ 6x D5s before when I was benching a lot...these days, my biggest system, in my home office, has 5x RX 360s and 4x D5s > here


----------



## rix2

Christopher2178 said:


> Sent from my iPhone using Tapatalk


What is different? The R005 fuse?


----------



## EarlZ

Has anyone had the chance to compare GELID Extreme and Thermalright Odyssey pads to see which is softer?


----------



## gfunkernaught

I want to add another D5 pump to my loop but I'm unsure of the best spot for it. I was thinking either right after my existing pump, or between my 360 and the gpu. I'm trying to create good pressure for the gpu to remove heat faster from the block. My existing pump is always at 100%. What do you guys think?


----------



## J7SC

gfunkernaught said:


> I want to add another D5 pump to my loop but I'm unsure of the best spot for it. I was thinking either right after my existing pump, or between my 360>gpu. I'm trying to create good pressure for the gpu to remove heat faster from the block. My existing pump is always at 100%. What do you guys think?


...ultimately, pressure will equalize anyway in a given loop. But I always distribute (usually 2x D5s) right before the highest restrictions, ie. GPU and CPU blocks.


----------



## mirkendargen

gfunkernaught said:


> I want to add another D5 pump to my loop but I'm unsure of the best spot for it. I was thinking either right after my existing pump, or between my 360>gpu. I'm trying to create good pressure for the gpu to remove heat faster from the block. My existing pump is always at 100%. What do you guys think?


You should always have your pumps right after a reservoir. There's really only one way of ruining a D5, and that's letting it run dry/cavitate, and you run this risk with one anywhere other than right after a reservoir.


----------



## J7SC

mirkendargen said:


> You should always have your pumps right after a reservoir. There's really only one way of ruining a D5, and that's letting it run dry/cavitate, and you run this risk with one anywhere other than right after a reservoir.


...yup, that goes w/o saying but so true. I actually use two reservoirs in a single loop w/ dual D5s...another 'trick' I started using is to pre-fill the rads before closing the loop (while being careful not to contaminate)


----------



## gfunkernaught

J7SC said:


> ...yup, that goes w/o saying but so true. I actually use two reservoirs in a single loop w/ dual D5s...another 'trick' I started using is to pre-fill the rads before closing the loop (while being careful not to contaminate)


I was planning on getting a D5 pump w/o res as a secondary pump. My primary pump is a D5/res combo. I know not to run a pump dry; that's why, once I get the 2nd pump installed, I won't power it up until the whole loop is filled. Like J7SC said, I was thinking of going right before the gpu block to increase the flow impeded by the 240 rad that comes before it.


----------



## des2k...

gfunkernaught said:


> I was planning on getting a D5 pump w/o res as a secondary pump. My primary pump is a D5 res combo. I know not to run a pump dry, that's why when I did get the 2nd pump installed, I wouldn't power it up until the whole loop is filled. Like J7SC said, I was thinking of going right before the gpu block to increase the flow impeded by the 240 rad that comes before it.


Still waiting for my 2 D5s from AliExpress, but I have my second pump in the middle of the loop: pump/res -> gpu -> rad -> cpu -> 2nd pump -> rad -> rad -> rad -> rad


----------



## gfunkernaught

des2k... said:


> Still waiting for my 2 d5 from aliexpress , but I have my second pump in the middle of the loop: pump/res -> gpu -> rad -> cpu -> 2nd pump -> rad -> rad -> rad -> rad


Hm. Maybe instead of before the gpu, should i put it right between my two 360s since the 2nd 360 is vertical?


----------



## mirkendargen

gfunkernaught said:


> Hm. Maybe instead of before the gpu, should i put it right between my two 360s since the 2nd 360 is vertical?


Elevation change causes no pressure drop in a closed loop. Rad orientation doesn't matter for flowrate/pressure.
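For the curious, the "no pressure drop from elevation" point is just the static-head terms cancelling around a closed loop; a toy sketch with invented segment heights:

```python
# In a closed loop, coolant lifted on one side comes back down on the
# other, so gravitational head cancels and the pump only fights friction.
# Segment rises (in metres) are invented; any closed loop sums to ~zero.
RHO = 1000.0  # kg/m^3, roughly water
G = 9.81      # m/s^2

segment_rises_m = [0.30, 0.10, -0.25, 0.05, -0.20]  # returns to start

net_rise = sum(segment_rises_m)   # ~0 for any closed loop
net_head_pa = RHO * G * net_rise  # static pressure the pump "sees"

print(f"net elevation change {net_rise:+.2f} m -> {net_head_pa:.1f} Pa of head")
```

Rad orientation only matters for bleeding air, not for steady-state flow.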


----------



## gfunkernaught

mirkendargen said:


> Elevation change causes no pressure drop in a closed loop. Rad orientation doesn't matter for flowrate/pressure.


Ok cool. I'm hoping putting it before the gpu will help move heat away from it quicker. Or should I put it right after the gpu? My goal is to create more pressure/flow inside the gpu block. Figured putting a pump before the gpu would do that.


----------



## J7SC

Further to my post last week on Strix pad thickness, here are some later pics...the GPU front thermal pads look to be 2mm; the rear ones seem much squishier, but all stayed on the back-plate. I think they are at least 2mm on the rear, maybe a touch thicker, but the squishy bits raised the edges a bit.

Also, I've included a YT vid from a professional builder in S.Korea as he takes the Strix OC apart and mounts an EK Vector Strix block...always nice to have a visual guide as much as this is not my first GPU block rodeo.


----------



## T.Sharp

Caliper set to inches?


----------



## J7SC

T.Sharp said:


> Caliper set to inches?


good catch...2 mm during the actual measurement, wrote it down - then a piece of factory squishy stuff got stuck on the caliper, so I turned it off and cleaned it. Btw, has anyone tried putting two pieces of higher-rated thermal pads together (i.e. 1mm and 0.5mm), being careful not to trap air? Thinking of adding material to two spots which had no imprint (mosfets?)

re-measured (in mm!)


----------



## KedarWolf

J7SC said:


> good catch...2 mm during the actual measurement, wrote it down - then a piece of factory squishy stuff got stuck on the caliper so I turned it off and cleaned it. Btw, has anyone tried to put two pieces of higher-rated thermal pads together (ie 1mm and 0.5mm), being careful not to trap air ? Thinking of adding material to two spots which had no imprint (mosfets?)
> 
> re-measured (in mm!)
> 
> View attachment 2482756


Did you measure the pads from the VRM, memory, backplate parts etc. separately? They can be different. Just because the pads like on the memory are 2.0mm doesn't mean the other pads are the same.


----------



## wirefox

EK is releasing a water-cooled backplate for Vector blocks.

Check availability - I think it only works on the reference PCB.


----------



## jura11

Just did a few tests last night with NiceHash. On the top GPU (GPU_0), VRAM temperatures while mining were around 72°C with +1495MHz on VRAM and +105MHz on the core, power limit set to 65% on the KPE XOC BIOS. The second GPU (GPU_1), with the same core OC (+105MHz) but the VRAM OC'd +1200MHz, peaked at 82°C on VRAM with the same KPE XOC BIOS and the same 65% power limit. Core temperatures on both were 34-36°C.

On the top GPU I'm using only an 80mm and a 120mm fan, and that's it. I also tried putting a copper heatsink on top, which literally didn't make as much difference as I thought it would.

On the second GPU I put little copper heatsinks on the backplate, which I'm not sure made any difference either.

Hope this helps 

Thanks, Jura


----------



## J7SC

KedarWolf said:


> Did you measure the pads from the VRM, memory, backplate parts etc. separately? They can be different. Just because the pads like on the memory are 2.0mm doesn't mean the other pads are the same.


...back-plate I already mentioned above, some of the VRM pad bits looked a bit thicker, but they had squished a bit with raised edges which made it harder to accurately measure. On the front, the power stage pads on either outside looked thinner, but also more compressed. On the EK GPU w-block, all pads for the front are the same thickness at 1 mm


----------



## satinghostrider

Guys,

Just want some quick advice. I have an Aorus 3090 Xtreme Waterforce WB, the one with the waterblock. It is a 2x8pin card with a max TDP of 370W, I think.
I can clock it +205Mhz core and pass Timespy which sees it peak at 2,175Mhz but average clock speed is 1,960Mhz. GPU Score is 21,566.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900KS Processor,Gigabyte Technology Co., Ltd. Z390 AORUS MASTER-CF (3dmark.com)

But overclocking even the slightest bit more gives me the dreaded black screen in Cold War after a while, and I need to hard reset; I pretty much lose signal to the monitor. So for now I just run it with the power and temp limits maxed out, and it runs fine so far with a max temp of 44 degrees centigrade.

My question is

1) During games, is it worthwhile to max out the power and temp limits?
2) With power and temp limits maxed, should I also max out the core voltage?

I am not sure which one runs better given mine is a 2x8pin card and it is hitting the power limit most of the time. I just want to find a way to run this card with the proper settings for games mainly.

Thanks in Advance Guys!


----------



## Falkentyne

satinghostrider said:


> Guys,
> 
> Just want some quick advise. I have an Aorus 3090 Xtreme Waterforce WB. The one with the Waterblock. It is a 2x8pin card with a max TDP of 370W I think.
> I can clock it +205Mhz core and pass Timespy which sees it peak at 2,175Mhz but average clock speed is 1,960Mhz. GPU Score is 21,566.
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-9900KS Processor,Gigabyte Technology Co., Ltd. Z390 AORUS MASTER-CF (3dmark.com)
> 
> But overclocking the slightest bit seems to give me the dreaded black screen in Coldwar after a while and I need to hard reset. Like pretty much lost signal to the monitor. So I pretty much run it now with just my power and temp limits maxed out and it runs fine so far with a max temp of 44 degrees centigrade.
> 
> My question is
> 
> 1) During games, is it worthwhile to max out the power and temp limits?
> 2) With power and temp limits maxed, should I also max out the core voltage?
> 
> I am not sure which one runs better given mine is a 2x8pin card and it is hitting the power limit most of the time. I just want to find a way to run this card with the proper settings for games mainly.
> 
> Thanks in Advance Guys!


You can't change the core voltage without Elmor's EVC2 mod tool, or the Classified software tool (only for Kingpin cards)
The "Core voltage" slider you see in MSI Afterburner does nothing except "unlock" the next voltage tier on the V/F curve, up to 1.1v on the curve, which also bumps your clock speed up by +15 mhz if you are NOT power limited. So it's not technically even an overclock, just allowing one more frequency tier on the curve to be accessed. The "slider" itself seems to control under what conditions that tier becomes active. At 100%, that next tier is always active. At something like 50%, it's only active sometimes. I'm not sure if this is related to temps or not, because it's rather confusing how the curve works.

You can't select or force 1.1v either. Locking it will simply keep the card in full 3D clocks (this has its advantages), and it will still self-adjust the curve on its own to cycle between 1.069v and 1.1v whenever it wants (if you are NOT power limited). Manually adjusting the V/F curve points to keep 1.10v always active will completely DESTROY your effective clocks and drop your performance (because MSVDD will no longer match up with NVVDD, and you have no ability to change MSVDD voltage to avoid this from happening).
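A toy model of the tier-unlock behavior described above; the curve points are invented, and only the ~1.1v cap and the +15 MHz step come from the post:

```python
# Toy V/F curve as (voltage_v, frequency_mhz) points, highest tier last.
# A real curve has many more points; these values are illustrative only.
VF_CURVE = [(1.050, 1950), (1.069, 1965), (1.081, 1980), (1.100, 1995)]

def max_tier(core_voltage_slider_pct):
    """Highest usable V/F point.

    Per the post, the Afterburner slider doesn't raise voltage; it only
    allows the final ~1.1v tier (worth about +15 MHz) to be reached.
    """
    usable = VF_CURVE if core_voltage_slider_pct > 0 else VF_CURVE[:-1]
    return usable[-1]

v0, f0 = max_tier(0)    # slider untouched: capped one tier lower
v1, f1 = max_tier(100)  # slider maxed: top tier unlocked
print(f"0%: {f0} MHz @ {v0} v, 100%: {f1} MHz @ {v1} v (+{f1 - f0} MHz)")
```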


----------



## satinghostrider

Falkentyne said:


> You can't change the core voltage without Elmor's EVC2 mod tool, or the Classified software tool (only for Kingpin cards)
> The "Core voltage" slider you see in MSI Afterburner does nothing except "unlock" the next voltage tier on the V/F curve, up to 1.1v on the curve, which also bumps your clock speed up by +15 mhz if you are NOT power limited. So it's not even a technical overclock. Just allowing one more frequency tier on the curve to be accessed. The "slider" itself seems to control under what conditions that tier becomes active. At 100%, that next tier is always active. At something like 50%, it's only active sometimes. I'm not sure if this is related to temps or not, because it's rather confusing how the curve works.
> 
> You can't select or force 1.1v either. Locking it will simply keep the card in full 3D clocks (this has its advantages) and it will still manually self-adjust the curve to cycle between 1.069v to 1.1v whenever it wants (if you are NOT power limited). Manually adjusting the V/F curve points to keep 1.10v always active will completely DESTROY your effective clocks and drop your performance (because MSVDD will no longer match up with NVVDD, and you have no ability to change MSVDD voltage to avoid this from happening).


So the best way is just to max out power and temp limits with no core voltage yes?
How would you run it in my scenario where I am power limited more than most due to 2x8pin?

TIA!


----------



## long2905

EK-Quantum Vector RE RTX 3080/3090 Active Backplate D-RGB - Plexi (www.ekwb.com): a cutting-edge addition to the EK® Quantum Line, made to complement the existing EK-Quantum Vector RE RTX 3080/3090 water blocks and actively cool the backside of a reference design NVIDIA® GeForce RTX™ 3080 and 3090 GPU.

You can now pre-order the active backplate from EK. Shipping early April.


----------



## yzonker

satinghostrider said:


> So the best way is just to max out power and temp limits with no core voltage yes?
> How would you run it in my scenario where I am power limited more than most due to 2x8pin?
> 
> TIA!


Any game that loads the GPU heavily (Cyberpunk, RDR2, etc... that I've been playing for example) will require over 500w to get to 1.1v. With only 370w, you will never get close to that voltage while the card is loaded heavily. It'll only boost there under light load. So IMO the voltage slider isn't useful at all unless you have a lot of power available.

So yes, the best you can do is to max out power and then optimize your core and memory offsets.
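As a rough sanity check on those wattages, dynamic power scales roughly as f·V²; this sketch uses an invented V/F curve and a constant tuned to the ~500 W @ 1.1 v figure cited above, so treat it as an illustration, not a model of any real card:

```python
# Very rough dynamic-power scaling, P ~ k * f * V^2, to show why a 370 W
# cap can't hold the top of the V/F curve under heavy load. The curve
# points and the constant K are invented for illustration; K is tuned so
# ~1995 MHz @ 1.10 v lands near the ~500 W figure from the thread.
K = 500.0 / (1995 * 1.10 ** 2)  # watts per (MHz * V^2)

CURVE = [(1695, 0.900), (1800, 0.950), (1875, 1.000), (1950, 1.069), (1995, 1.100)]

def power_w(freq_mhz, volts):
    return K * freq_mhz * volts ** 2

def best_point(limit_w):
    """Highest (freq, volts) point whose estimated draw fits the cap."""
    fits = [p for p in CURVE if power_w(*p) <= limit_w]
    return max(fits)

for limit in (370, 470, 550):
    f, v = best_point(limit)
    print(f"{limit} W cap -> about {f} MHz @ {v:.3f} v")
```

Under this toy scaling a 370 W cap settles well below the 1.1 v point, matching the observation that a 2x8pin card only boosts there under light load.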


----------



## yzonker

Falkentyne said:


> You can't change the core voltage without Elmor's EVC2 mod tool, or the Classified software tool (only for Kingpin cards)
> The "Core voltage" slider you see in MSI Afterburner does nothing except "unlock" the next voltage tier on the V/F curve, up to 1.1v on the curve, which also bumps your clock speed up by +15 mhz if you are NOT power limited. So it's not even a technical overclock. Just allowing one more frequency tier on the curve to be accessed. The "slider" itself seems to control under what conditions that tier becomes active. At 100%, that next tier is always active. At something like 50%, it's only active sometimes. I'm not sure if this is related to temps or not, because it's rather confusing how the curve works.
> 
> You can't select or force 1.1v either. Locking it will simply keep the card in full 3D clocks (this has its advantages) and it will still manually self-adjust the curve to cycle between 1.069v to 1.1v whenever it wants (if you are NOT power limited). Manually adjusting the V/F curve points to keep 1.10v always active will completely DESTROY your effective clocks and drop your performance (because MSVDD will no longer match up with NVVDD, and you have no ability to change MSVDD voltage to avoid this from happening).


That's interesting. Probably explains why I saw some odd performance changes when I was fiddling with the curve to try to get it to hold 1.1v in TimeSpy and PR. By default, using the XOC bios, my card wouldn't reach 1.1v, I think because the curve had too many points at the same frequency. So I bumped up a couple of them. It worked and I saw a modest bump in my TimeSpy score, but then when I made some more changes to try to just force 1.1v with basically an "undervolt" at 1.1v, performance dropped a bunch.


----------



## satinghostrider

yzonker said:


> Any game that loads the GPU heavily (Cyberpunk, RDR2, etc... that I've been playing for example) will require over 500w to get to 1.1v. With only 370w, you will never get close to that voltage while the card is loaded heavily. It'll only boost there under light load. So IMO the voltage slider isn't useful at all unless you have a lot of power available.
> 
> So yes, the best you can do is to max out power and then optimize your core and memory offsets.


Gee thanks! I've had 1 instance of crash with just maxed out power and temp limits. If sliding to max the core voltage doesn't do much, I wonder if setting to 50% would improve stability for that matter.

And also, running overclocks with the power and temp limits maxed out seems pointless given the low 370W TDP, right?


----------



## yzonker

satinghostrider said:


> Gee thanks! I've had 1 instance of crash with just maxed out power and temp limits. If sliding to max the core voltage doesn't do much, I wonder if setting to 50% would improve stability for that matter.
> 
> And also, to run overclocks with the default maxed out power and temp limit seems pointless right given the low 370W TDP yes?


Increasing the offsets (+core and +mem) will still increase performance. It's not a big gain though, and if you are having stability problems with the core or mem increased, it may not be worth doing. Maxing the power will allow the GPU to boost to somewhat higher frequencies. Once again, a small increase in performance if you're just going from 350w to 370w.


----------



## Alex24buc

Hi guys, is it worth replacing the Palit GameRock OC with an MSI Suprim X at no cost? Somebody proposed an exchange, but I don't know if the Suprim X is better, because the Palit has a higher power limit. Thanks!


----------



## J7SC

long2905 said:


> EK-Quantum Vector RE RTX 3080/3090 Active Backplate D-RGB - Plexi (www.ekwb.com): made to complement the existing EK-Quantum Vector RE RTX 3080/3090 water blocks and actively cool the backside of a reference design NVIDIA® GeForce RTX™ 3080 and 3090 GPU.
> 
> you can now pre order the active backplate from EK. shipping early April.


...looks interesting and I've been waiting for it, but unfortunately - for now at least - it only seems to be available for the reference design cards...hopefully, they'll add custom PCB models to it, like the Strix, in the near future


----------



## ALSTER868

Alex24buc said:


> Hi guys, is it worth replacing the palit gamerock oc with msi suprim x without any cost? Somebody proposed me to make an exchange but I don`t know if suprim x is better because palit has a higher power limit, Thanks!


The Suprim X is good on air cooling; it should be better than the GameRock. Are you going to use it on air or water?
And you can easily flash the KPE 520W bios onto the Suprim.


----------



## Alex24buc

I don't think I will use water cooling, just air. Thanks for your help. Also, I am getting 82 degrees VRAM temp with the Palit GameRock OC in games. Is that ok?


----------



## pat182

I'm trying to play SC2 with the iGPU on my 9900K, but the game defaults to my 3090 (which is mining) even if I set the power-saving GPU in graphics settings. Any idea how to resolve that?


----------



## KUSH43

Hello, I have a Zotac Trinity 3090 with a 350w power limit. What BIOS do I need to flash?
Is it really worth flashing a BIOS for a higher power limit just for gaming?
Thank you


----------



## jura11

Hi @KUSH43 

I would say the KFA2 390W or Gigabyte 390W BIOS would be the best BIOS for gaming. I was using the same BIOS on my Palit RTX 3090 GamingPro with no issues. If it's worth it? I would say yes.

Hope this helps

Thanks,Jura


----------



## KUSH43

jura11 said:


> Hi @KUSH43
> 
> I would say KFA2 390W or Gigabyte 390W BIOS would be best BIOS for gaming,I was using same BIOS on my Palit RTX3090 GamingPro and no issues,if its worth it,I would say yes there
> 
> Hope this helps
> 
> Thanks,Jura


Thanks for the answer. Which version of the Gigabyte BIOS, *Gaming OC* or Vision OC?
And after the flash, will I have any problems with future Nvidia driver updates?


----------



## des2k...

yzonker said:


> That's interesting. Probably explains why I saw some odd performance changes when I was fiddling with the curve to try to get it to hold 1.1v in TimeSpy and PR. By default, using the XOC bios, my card wouldn't reach 1.1v I think because the curve had too many points at the same frequency. So I bumped up a couple of them. It worked and I saw a modest bump in my TimeSpy score, but then when I made some more changes to try to just force 1.1v with basically and "undervolt" of 1.1v, performance dropped a bunch.


The curve will work best if you limit your max at 1093mv with the 100% slider for voltage.

1.1v will automatically drop to ~2100 effective frequency, and it's usually pretty hard to get the effective frequency up to 2190+ unless you have a card that can force an nvdd offset / fixed voltage.

Nvidia driver will cap at 1.1v for nvdd on regular cards.


----------



## yzonker

des2k... said:


> The curve will work best if you limit your max at 1093mv with the 100% slider for voltage.
> 
> 1.1v will automaticly drop to 2100 eff frequency and usually pretty hard to get the eff frequency up to 2190+ unless you have a card that can force nvdd offset / fixed voltage.
> 
> Nvidia driver will cap at 1.1v for nvdd on regular cards.


Ok thanks. I'll play with it a bit more. BTW, my even crappier Corsair block appears to have a 20C delta at ~500w. Was just playing RDR2 for a little while at that level. About as high as it would go in that game. Still that seems a lot better than the [email protected] Igor showed. Not sure why there is such a big difference?


----------



## jomama22

des2k... said:


> The curve will work best if you limit your max at 1093mv with the 100% slider for voltage.
> 
> 1.1v will automaticly drop to 2100 eff frequency and usually pretty hard to get the eff frequency up to 2190+ unless you have a card that can force nvdd offset / fixed voltage.
> 
> Nvidia driver will cap at 1.1v for nvdd on regular cards.


I do not see this type of extreme effective clock drop on my strix. 1.075 holds the tightest effective clock but 1.1 will only drop by about 10 off of that.


----------



## des2k...

jomama22 said:


> I do not see this type of extreme effective clock drop on my strix. 1.075 holds the tightest effective clock but 1.1 will only drop by about 10 off of that.


just my card then. It's a 2x8pin reference, so the small VRM is stressed a lot at high voltage.
My new 1200w power supply also doesn't help: very long PCIe 8pin cables, lots of voltage drop.


----------



## ALSTER868

Alex24buc said:


> Also I am getting 82 degrees vram temp with Palit gamerock oc in games, Is it ok?


For air that's quite good. I'm water cooled with radiators on the backplate and fans pointed at them, and I'm getting 60-62C in games and up to 80C in mining with memory set to +1100. For comparison.


----------



## des2k...

yzonker said:


> Ok thanks. I'll play with it a bit more. BTW, my even crappier Corsair block appears to have a 20C delta at ~500w. Was just playing RDR2 for a little while at that level. About as high as it would go in that game. Still that seems a lot better than the [email protected] Igor showed. Not sure why there is such a big difference?


well, a 20c delta at 500w is not so bad. It's not like you want to use more wattage for 24/7 gaming.
I'm trying another block (Barrow); if that doesn't work, I'll try liquid metal.

Right now, the EK starts at a 17c delta at 500w. It's at 600w that the delta climbs to 22c, which is very strange for just 100w more. Might be my water flow too. Have some D5s ordered.

Got a bunch of stuff from AliExpress, but honestly, with how things are (covid), I have very low hope of receiving anything lol
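One way to put these block numbers on equal footing is thermal resistance (delta-T per watt); a quick calculation using only the figures quoted in the thread:

```python
# Core-to-water thermal resistance, R = delta_T / power, makes block
# comparisons load-independent. The deltas and wattages below are the
# ones quoted in the thread; nothing here is measured.
def r_cw(delta_c, watts):
    return delta_c / watts  # degrees C per watt

corsair_500 = r_cw(20, 500)  # 0.0400 C/W
ek_500 = r_cw(17, 500)       # 0.0340 C/W
ek_600 = r_cw(22, 600)       # ~0.0367 C/W

# A block with constant resistance would show the same C/W at 500 W and
# 600 W; the rise from 0.034 to ~0.037 is why the extra 100 W looks odd
# (candidate culprits: flow rate, paste pump-out, hotspot spreading).
for name, r in [("corsair@500w", corsair_500), ("ek@500w", ek_500), ("ek@600w", ek_600)]:
    print(f"{name}: {r:.4f} C/W")
```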


----------



## satinghostrider

Spiriva said:


> The 500w evga bios can be downloaded here;
> 
> EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


Guys, is this evga 500w bios the latest to flash if I were to do it on the MSI 3090 Gaming X Trio? Any issues?

Thanks!


----------



## jura11

KUSH43 said:


> thank for the answer what version on gigabyte *Gaming OC* or Vision OC ?
> and after flash gona have no probleme in future nvidia driver update ?


Hi there 

This BIOS should work 

Gigabyte RTX 3090 VBIOS 

Or this one 

KFA2 RTX 3090 VBIOS 

You shouldn't have any problems with Nvidia updates. I'm using the XOC BIOS capped at 65% on my RTX 3090, and I used the KFA2 390W BIOS for a while too.

Hope this helps 

Thanks, Jura


----------



## KedarWolf

J7SC said:


> ...looks interesting and I've been waiting for it, but unfortunately - for now at least - it only seems to be available for the reference design cards...hopefully, they'll add custom PCB models to it, like the Strix, in the near future
> 
> View attachment 2482828


I contacted EKWB support a few weeks ago.

They told me other cards like the Strix and FTW3 etc. will be released sometime after the reference active backplate.

No timeline though, just sometime after. :/


----------



## sultanofswing

4000 series will be out by the time all these waterblocks these companies claim to be making are ready.
I was able to snag 2 Optimus blocks between last night and this morning.


----------



## defcoms

yzonker said:


> Ok thanks. I'll play with it a bit more. BTW, my even crappier Corsair block appears to have a 20C delta at ~500w. Was just playing RDR2 for a little while at that level. About as high as it would go in that game. Still that seems a lot better than the [email protected] Igor showed. Not sure why there is such a big difference?


On my Corsair block I have been getting around a 10-12c delta at 480w @ 100/300 on my Strix. I wonder why there is so much difference? I did replace the factory paste with Kryonaut when I installed it. Playing BF4 for about 2 hrs last night @ 3440x1440 @ 150% resolution scale, I was seeing power usage around 460-480w steady and 78w on my 5800X. Temps stayed under 54c on the GPU with a loop temp of 42c. Idle temps were 30c. This is the first loop I've installed in my rig, so I've been watching temps a lot.


----------



## KedarWolf

Alex24buc said:


> I don`t think I will use water cooling, just on air. Thanks for your help. Also I am getting 82 degrees vram temp with Palit gamerock oc in games, Is it ok?


Use HWInfo or GPU-Z to check your memory temperatures. If your core is getting to 82C, very likely your memory is going quite a bit higher.

I desync my temp and power limit in Afterburner on the stock BIOS and put the power limit at 85% to keep the total power draw under 390W and memory temps under 80c. For just gaming, it's fine doing that.

Edit: Read that wrong, thought you meant core temp. 82C should be fine, but as I said, I try to keep it under 80C while gaming.
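For reference, the slider math here assumes Afterburner's percentage scales the BIOS default power target; the 450 W default below is a hypothetical stand-in, not necessarily this card's actual target (check GPU-Z for your own BIOS):

```python
# Afterburner's power-limit slider scales the BIOS default power target.
# DEFAULT_TARGET_W is a hypothetical 450 W stand-in for illustration;
# read your own card's default target from GPU-Z before trusting numbers.
DEFAULT_TARGET_W = 450.0

def power_cap_w(slider_pct):
    return DEFAULT_TARGET_W * slider_pct / 100.0

cap = power_cap_w(85)  # the 85% setting from the post
print(f"85% slider -> {cap:.1f} W cap")  # 382.5 W, under a 390 W goal
```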


----------



## mirkendargen

sultanofswing said:


> 4000 series will be out by the time all these waterblocks these companies claim to be making are ready.
> I was able to snag 2 Optimus blocks between last night and this morning.


4000 series will be out before 3000 series are in stock for more than a minute the way things have been going, lol.


----------



## Alex24buc

KedarWolf said:


> Use HWInfo or GPU-Z to check your memory temperatures. If your core is getting to 82C, very likely your memory is going quite a bit higher.
> 
> I desync my temp and power limit in Afterburner on the stock BIOS and put the power limit at 85% to keep the total power draw under 390W and memory temps under 80c. For just gaming, it's fine doing that.


I used GPU-Z, and unfortunately the VRAM temps go as high as 85C in Metro Exodus, the only game that heats the memory that hard; in other games the temps are lower.


----------



## jomama22

des2k... said:


> just my card then, it's 2x8pin reference, so small VRM at high voltage it's stressed alot
> my new 1200w power supply also doesn't help, very long pcie 8pin cables, lots of voltage drop


On the strix at least (with no power limit being reached), the voltage controller will compensate for voltage drop from the vrms to the die at full stock (except the shunts).

An example of this:








This is from the vrm voltage readout point. Top is nvvdd, bottom is msvdd. The voltage readout reported by the controller itself is 1.073 for nvvdd (the difference between 1.1 and 1.073 is the vdroop of the llc and die, i.e. the difference between the VID and the voltage you get) and 1.1 for msvdd (there is nearly no droop whatsoever that I have ever seen for msvdd).

This is at stock voltages using AB in time spy extreme gt2, no muckery with the evc.

So the controller will ramp up voltage to meet the vid (minus the droop).

My speculation is that the difference in effective clocks we see has more to do with the way the cache/sub-timings/sub-clocks work in conjunction with a specified voltage point on the voltage/frequency curve. Whether those timings are in any way tied to the gpu clock we can set is another question.

Another interesting fact is that you can't overcome the effective clock difference all that much by merely adding more voltage. For example, if I use the evc to set 1.15v nvvdd when the ab curve is made to use 1.1v, it will increase effective clocks, but only to a point, and still lower than if I make the curve use 1.075 with the same clocks. I could use 1.075 with a lower evc voltage (say 1.12v) and still have higher effective clocks than with 1.1 ab voltage and 1.15 evc voltage. This is keeping msvdd at 1.1 for all testing.

Effective clock does in fact rise with the nvvdd voltage being applied to the die, as many with KP cards have noted. So there is definitely some algo going on with respect to effective clock and die voltage. Because of this relationship, it is also affected by load, and thus by load-induced vdroop. Running 600+w will droop the voltage more than 500w, and the effective clock will drop because of that.

Msvdd voltage has little to no effect on the effective clock so long as it's not too low (and I mean way too low, nothing that stock would ever touch). Msvdd is one of those voltages where too high or too low starts to have negative effects, but nothing notable otherwise. Msvdd is tied to nvvdd vid, except that because its droop is near non-existent, it will outpace nvvdd above 1.05 vid (ab voltage). The reason this is possible to see on the strix is because initially, before Elmore and I could get msvdd and nvvdd separated in the evc software, msvdd and nvvdd were sent the same voltage commands. So, by sending a 1.2v vid for example, msvdd would happily sit at 1.2v on die while nvvdd would be at 1.17v on die.

Another interesting thing to note is that your effective clock at specific voltage points in ab will change depending on the type of load/game/benchmark. For me, 1.075 is the sweet spot for just about all games and time spy regular/extreme. Port royal on the other hand had no issues scaling up to 1.1v in ab.

It should also be noted that it may be beneficial to use 1.1 in scenarios where you get a higher effective clock because you can obtain a higher requested clock. For example, if 1.075 lets you request 2175 with an effective of 2170, but 1.1 lets you request 2190 with an effective of 2175, then you are still pulling out ahead.
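Spelled out, the comparison comes down to which setup delivers the higher effective clock; here's a quick sketch using the hypothetical numbers from the example above (illustrative values, not measurements):

```python
# Which Afterburner voltage point "wins" is decided by effective clock,
# not requested clock. The values below are the illustrative numbers
# from the example above, not measured data.

def better_setup(a, b):
    # The higher effective clock is what actually determines performance.
    return a if a["effective_mhz"] >= b["effective_mhz"] else b

low_v  = {"ab_v": 1.075, "requested_mhz": 2175, "effective_mhz": 2170}
high_v = {"ab_v": 1.100, "requested_mhz": 2190, "effective_mhz": 2175}

# The 1.1v point has a larger requested-vs-effective gap (15 MHz vs 5 MHz),
# but its higher requested clock still nets the higher effective clock.
winner = better_setup(low_v, high_v)
print(winner["ab_v"], winner["effective_mhz"])  # → 1.1 2175
```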


----------



## Mrip541

My EVGA 3090 Ultra just wasn't performing as expected. Gameplay wasn't smooth even at high refresh rates, flashing textures in some games, temps were surprisingly high all the time. I posted about all the issues a couple places and was told more or less this was expected behavior or the problem was with something else in my system. I spent hours and hours troubleshooting.
Well yesterday I received an RMA replacement. The new card is SO MUCH better. Smooth gameplay. Lower temps. Lower fan speed. Flickering gone (except in WoW which nvidia lists as a known issue). I just wanted to mention this in case other people are battling issues and being told the problem isn't the card.


----------



## yzonker

defcoms said:


> On my corsair strix block I have been getting around 10-12c delta at 480w @ 100/300 on my strix. I wonder why there is so much difference? I did replace the factory paste with kryro when I installed it. Playing BF4 for about 2 hrs last night @ 3440x1440 @ 150% resolution scale I was seeing power usage around 460-480 steady and 78w on my 5800x. Temps stayed under 54c on gpu with a loop temp of 42c. Idle temps where 30c. This is a new loop/first loop I just installed in my rig so I been watching temps alot.


Yea, I just used the Corsair paste, so that might be some of it, but I don't think that's nearly all of it. Seems like the reported deltas are all over the place. Our core temps aren't too much different with my water temp around 35C. I've got slim 360 and 280 rads. 

Flow path is different between the 2 though,

Strix,



https://www.corsair.com/us/en/medias/sys_master/images/images/hf3/h6a/9607726497822/CX-9020013-WW/Gallery/CX-9020013-WW_01/-CX-9020013-WW-Gallery-CX-9020013-WW-01.png_1200Wx1200H



Reference,



https://www.corsair.com/us/en/medias/sys_master/images/images/h35/h08/9637676417054/CX-9020015-WW/Gallery/xg7_rgb_30_series_ref_3090_3080_01/-CX-9020015-WW-Gallery-xg7-rgb-30-series-ref-3090-3080-01.png_1200Wx1200H



It's possible they designed them with the stock power limits in mind I guess.


----------



## J7SC

KedarWolf said:


> I contacted EKWB support a few weeks ago.
> 
> They told me other cards like the Strix and FTW3 etc. will be released sometime after the reference active backplate.
> 
> No timeline though, just sometime after. :/


Thanks - yeah 'sometime after' is quite a stretchable term


----------



## KedarWolf

J7SC said:


> Thanks - yeah 'sometime after' is quite a stretchable term


For those with FTW3 cards, and soon Strix and Kingpin etc. The Optimus blocks are really well designed and the XL backplates are super thick. 

The waterblock outperforms every other block out there. It's super heavy though, so if you're not using a vertical mount, a video card support is absolutely necessary. 

The backplates as well have one thermal pad that covers the entire back PCB, not bits and pieces on the memory etc.
As a result, they cool quite well and might even be preferable to an EKWB active backplate, which for sure has patches of thermal pads covering only some of the backplate.









Optimus Waterblock


Hello there! I hope you can help me. I want buy a Optimus Foundation nickel plated, but I see than some units have issues like acrilic craking and bad nickel plating... Someone here ever had that or another issues with the blocks?




www.overclock.net


----------



## yzonker

Been trying to at least make it to 15k in PR, but I seem to be stuck. Doesn't help that the highest mem offset that will complete is about +650.









I scored 14 917 in Port Royal


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





That is running at 1.1v (I checked it by leaving AB running with the overlay for one of the runs). Looks like my requested frequency is 2190 (+195). Effective frequency stays in the 2160-2175 range for the entire run. Can't go to +210 as it doesn't complete.


----------



## KedarWolf

yzonker said:


> Been trying to at least make it to 15k in PR, but seem to be stuck. Doesn't help the highest mem offset that will complete is about +650.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 917 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> That is running at 1.1v (I checked it by leaving AB running with the overlay for one of the runs). Looks like my requested frequency is 2190 (+195). Effective frequency stays in the 2160-2175 range for the entire run. Can't go to +210 as it doesn't complete.


I can do 2130 on the core, +930 on memory on air and Port Royal finishes. But I won't see better scores until I water cool it for sure.

Please, someone in Toronto, buy my 1080 Ti, with the EKWB waterblock and backplate on it. It overclocks very well, games at 2113 core, memory +980 stable. It would be great for mining.

And no, I'm not trying to sell it here, just that peeps have been lowballing me locally and it's already the cheapest one posted here in Toronto, not even factoring in the $200 of water cooling stuff on it and the 12.8 W/mK thermal pads.

Don't PM me please to buy it though, I WON'T ship it. Just venting, I need the cash to buy a waterblock and stuff for my Strix.


----------



## jomama22

KedarWolf said:


> I can do 2130 on the core, +930 on memory on air and Port Royal finishes. But I won't see better scores until I water cool it for sure.
> 
> Please, someone in Toronto, buy my 1080 Ti, with the EKWB waterblock and backplate on it. It overclocks so very well, game at 2113 core, memory +980 stable. Would be great for mining.
> 
> And no, I'm not trying to sell it here, just peeps have been low balling me locally and it's already the cheapest one posted here in Toronto, not even factoring in the $200 of water cooling stuff on it and the 12.8 mw/k thermal pads.
> 
> Don't PM me please to buy it though, I WON'T ship it. Just venting, I need the cash to buy a waterblock and stuff for my Strix.


You're better off putting the stock cooler back on and selling it. Having a waterblock actually reduces the price normal people who don't watercool will pay, and you could probably flip the block for like $50 or something on the side.


----------



## des2k...

KedarWolf said:


> I can do 2130 on the core, +930 on memory on air and Port Royal finishes. But I won't see better scores until I water cool it for sure.
> 
> Please, someone in Toronto, buy my 1080 Ti, with the EKWB waterblock and backplate on it. It overclocks so very well, game at 2113 core, memory +980 stable. Would be great for mining.
> 
> And no, I'm not trying to sell it here, just peeps have been low balling me locally and it's already the cheapest one posted here in Toronto, not even factoring in the $200 of water cooling stuff on it and the 12.8 mw/k thermal pads.
> 
> Don't PM me please to buy it though, I WON'T ship it. Just venting, I need the cash to buy a waterblock and stuff for my Strix.


How much are 1080 Tis selling for? I see some crazy listings here, $850-$1000.
I put the stock cooler back on the FTW3, too lazy to post it for sale lol

Are they really selling for $850+?


----------



## J7SC

KedarWolf said:


> For those with FTW3 cards, and soon Strix and Kingpin etc. The Optimus blocks ae really well designed and the XL backplates are super thick.
> 
> The waterblock outperforms every other block out there. Super heavy though, so if you're not a vertical mount, a video card support absolutely necessary.
> 
> The backplates as well have one thermal pad that covers the entire back PCB, not bit and pieces on the memory etc.
> As a result, they cool quite well and might even be preferable to an EKWB active backplate that for sure has patches of thermal pads covering only some of the backplate.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Optimus Waterblock
> 
> 
> Hello there! I hope you can help me. I want buy a Optimus Foundation nickel plated, but I see than some units have issues like acrilic craking and bad nickel plating... Someone here ever had that or another issues with the blocks?
> 
> 
> 
> 
> www.overclock.net


I checked Optimus a couple of weeks back when I was making a buying decision, but no model for Strix yet, and no firm eta...and the price for the FTW3 version was more than twice that of other competing blocks, so I suspect the Optimus Strix block will be in the same pricing neighborhood. 

I have no doubt that it is top-of-the-line, but if it's not available and will cost 2x if/when it eventually does make an appearance for the Strix, by then I'll probably already be thinking about NVidia 'Hopper' or whatever the next gen will be called


----------



## KedarWolf

des2k... said:


> how much are the 1080ti selling for ? I see some crazy listings here 850$ - 1000$
> I put the stock cooler back on the FTW3, too lazy to post it for sale lol
> 
> Are they really selling for 850$+ ?


$850+ here in Canada.


----------



## KedarWolf

jomama22 said:


> You're better off putting the stock cooler back on and selling it. Having a waterblock actually reduces the price normal people who don't watercool will pay and you could probably flip the block for like $50 or somthing on the side.


I live in a city of five million peeps, it'll sell, just a matter of time.

I doubt I could find all the screws from it and am saving the thermal pads I bought for my 3090 Strix OC.


----------



## yzonker

I ended up with the Corsair block for basically the same reason I have a Zotac 3090. It's what I managed to find in stock at the time. Lol. At least sellers I checked in the US.


----------



## jura11

KedarWolf said:


> I live in a city of five million peeps, it'll sell, just a matter of time.
> 
> I doubt I could find all the screws from it and am saving the thermal pads I bought for my 3090 Strix OC.


I would get a Bykski waterblock; for the money you will hardly find a better waterblock. 

In my case I'm getting 36-38°C with the KPE XOC BIOS capped at 65-75% (the GPU pulls like 446W at 65% and like 515W at 75%), and VRAM temperatures are in the 60s in gaming 

Hope this helps 

Thanks, Jura


----------



## EarlZ

KedarWolf said:


> Use HWInfo or GPU-Z to check your memory temperatures. If your core is getting to 82C, very likely your memory is going quite a bit higher.
> 
> I desync my temp and power limit in Afterburner on the stock BIOS and put the power limit at 85% to keep the total power draw under 390W and memory temps under 80c. For just gaming, it's fine doing that.
> 
> Edit: Read that wrong, thought you meant core temp. 82C should be fine, but as I said, I try to keep it under 80C while gaming.


At 85% power, that's just like running your 3090 at 3070 speeds??


----------



## J7SC

jura11 said:


> I would get Bykski waterblock, for money you will find hardly better waterblock
> 
> In my case I'm getting 36-38°C with KPE XOC BIOS capped at 65-75%(GPU pulls like 446W with 65% and 75% pulls like 515W) and VRAM temperatures are in 60's in gaming
> 
> Hope this helps
> 
> Thanks, Jura


 
...wasn't there someone posting here with a 3090 Bykski block that had developed some plating issues a short time in? Don't get me wrong, I like Bykski and I think I've had them as OEM w/o problems for two years+; just wondering about a potential batch issue for the 3090 block


----------



## jura11

J7SC said:


> ...wasn't there someone posting here with a 3090 Bykski block that had developed some plating issues a short time in ? Don't get me wrong, I like Bykski and think I have them as OEM w/o problems for two years+, just wondering about a potential batch issue for the 3090 block


Hi there 

Yup, I've seen that. Not sure why he has that plating issue, but I'm running 2 Bykski waterblocks on my two RTX 3090 GamingPros with no issues. I checked my other block, which I've already been running for close to 3 or 4 months (not sure exactly how long), and the block is like new with no plating issues 

I would assume that the guy who has that issue probably ran only distilled water with a kill coil or something like that, because if you are using a normal coolant like Mayhems etc. you shouldn't have these issues 

I built 4 PCs with RTX 3090s and 3080s and I always use Bykski waterblocks; I checked every build recently for issues and found absolutely nothing, they're clean with no plating issues 

Hope this helps 

Thanks, Jura


----------



## mirkendargen

J7SC said:


> ...wasn't there someone posting here with a 3090 Bykski block that had developed some plating issues a short time in ? Don't get me wrong, I like Bykski and think I have them as OEM w/o problems for two years+, just wondering about a potential batch issue for the 3090 block


Yup that's me, and I'd still buy another Bykski. For the price and actual performance, it can't be beat (pending seeing what Barrow offers).




jura11 said:


> Hi there
> 
> Yup I seen that, not sure why he have that plating issue, but I'm running 2 Bykski waterblocks on my two RTX 3090 GamingPro and no issues, checked my other block which I'm already running I think close to 3 or 4 months, not sure how long running that block but block is like new and no plating issues
> 
> I would assume that guy who have that issue is probably run only distilled water with killcoil or something like that, because if you are using normal coolant like Mayhems etc you shouldn't have these issues
> 
> I built 4 PC with RTX 3090 and 3080 and I always use Bykski waterblocks and checked every build recently if there are any issues and absolutely nothing, they're clean and no plating issues
> 
> Hope this helps
> 
> Thanks, Jura


Nope, running Mayhem's pastel UV white. I would expect distilled water to be LESS likely to cause plating to come off. And there are zero issues with the plating on my Bykski TR4 block in the same loop (that's been in service 2 years longer), so I'm assuming it's a batch problem on the Strix 3090 block.


----------



## jura11

@mirkendargen 

Hard to say what caused the plating to come off. I've been using Mayhems X1 for a while, and prior to that build I was running Mayhems Pastel, and I never had issues with plating coming off, and I have run several blocks 

When my Thermalright Odyssey pads arrive I will take pictures of the blocks, but I have already checked one block and there are absolutely no issues

I would try contacting Bykski regarding that, because plating coming off after a few months is not normal at all 

Hope this helps 

Thanks, Jura


----------



## mirkendargen

jura11 said:


> @mirkendargen
> 
> Hard to say what caused plating go off, I'm using for while Mayhems X1 and prior to that build I was running or using Mayhems Pastel and never had issues with plating going off and I have run several blocks
> 
> When my Thermalright Odyssey pads arrive I will take pictures of blocks but I have already checked one block and absolutely no issues
> 
> I would try contact Bykski regarding that, because plating going off after few months is not normal at all
> 
> Hope this helps
> 
> Thanks, Jura


Yeah when I took my loop apart to swap in the 3090 I opened up the TR4 block and scrubbed it with a toothbrush, the plating on it is pristine. Not sure if I'll have much luck contacting Bykski...but who knows.


----------



## sultanofswing

I've got a 1.5-year-old Bykski nickel block on my 2080ti KINGPIN. I have only ever run straight distilled water in my loops and that block still looks as pristine as when it was new. Not a single issue with any flaking.


----------



## D0MINUS

gfunkernaught said:


> So I ran into an issue with my OC, stability related but at lower loads/power usage. I previously mentioned that I set the curve to [email protected] so that the effective clock would be around 2100mhz, which works when the load is 460w or more (up to 500w). Playing BLOPS Cold War, the usage was around 430w and the temps were lower, causing the effective clock to go between 2115-2145mhz. The game crashed suddenly, and the gpu core temp was 38c. So that would mean that the clock was too high for that voltage. Now it could be the vram oc, although the mem junction temp was 56-58c. Other than setting different profiles for low vs high load games, is there anything else I could do to force a max clock at a specific voltage? I also did set three points on the curve, starting on the right with the highest.


I've been slowly plowing through this thread and am still 2 weeks behind, so not sure if this has been answered. 
Assuming you have the gaming drivers (the studio drivers don't seem to include this), you can run this from the command line:

nvidia-smi -lgc 210,2160

that will limit the clock to 2160MHz.
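For reference, `-lgc` takes a `min,max` pair, so `210,2160` pins the GPU clock inside that range, and `nvidia-smi -rgc` removes the lock again. A minimal sketch that just assembles these commands (the helper names are my own; actually running them needs admin rights and a recent driver):

```python
# Assemble nvidia-smi clock-lock commands. The -i/-lgc/-rgc flags are
# real nvidia-smi options; the helper functions are illustrative.
import subprocess

def lock_gpu_clocks(min_mhz, max_mhz, gpu_index=0, dry_run=True):
    # -lgc <min>,<max> locks the GPU core clock into the given MHz range.
    cmd = ["nvidia-smi", "-i", str(gpu_index), "-lgc", f"{min_mhz},{max_mhz}"]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd

def reset_gpu_clocks(gpu_index=0, dry_run=True):
    # -rgc resets the GPU clocks back to default behavior.
    cmd = ["nvidia-smi", "-i", str(gpu_index), "-rgc"]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd

print(" ".join(lock_gpu_clocks(210, 2160)))  # → nvidia-smi -i 0 -lgc 210,2160
print(" ".join(reset_gpu_clocks()))          # → nvidia-smi -i 0 -rgc
```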


----------



## sippo

News in EKWB (as preorder):








EK-Quantum Vector RE RTX 3080/3090 Active Backplate D-RGB - Plexi


EK-Quantum Vector RE RTX 3080/3090 Active Backplate D-RGB - Plexi is a cutting-edge addition to the EK® Quantum Line. It is made to complement the existing EK-Quantum Vector RE RTX 3080/3090 water blocks and actively cool the backside of a reference design NVIDIA® GeForce RTX™ 3080 and 3090 GPU.




www.ekwb.com












EK-Quantum Vector XC3 RTX 3080/3090 Active Backplate D-RGB - Plexi


EK-Quantum Vector XC3 RTX 3080/3090 Active Backplate D-RGB - Plexi is a cutting-edge addition to the EK® Quantum Line. It is made to complement the existing EK-Quantum Vector XC3 RTX 3080/3090 water blocks and actively cool the backside of all EVGA® GeForce RTX™ 3080 and 3090 XC3 GPU.




www.ekwb.com





EVGA GeForce RTX 3080 XC3 + Reference


----------



## gfunkernaught

D0MINUS said:


> I've been slowly plowing through this thread and am still 2 weeks behind, so not sure if this has been answered.
> Assuming you have the gaming drivers (studio don't seem to include this), you can run this from the command line:
> 
> nvidia-smi -lgc 210,2160
> 
> that will limit the clock to 2160MHz.


Thanks, but I've since resolved the issue by using the 1kW BIOS and limiting power to 500W, with +150 core and +1200 vram. The clocks stay where I want them unless I'm power limited, in which case they drop to the mid-1900s, but performance is good in games.


----------



## inedenimadam

EK active backplates are up for preorder on EK's website. $170ish with shipping to my door.

Oops. I see it's been posted already.


----------



## yzonker

inedenimadam said:


> EK active backplates are up for preorder on EK's website. $170ish with shipping to my door.
> 
> Ops. I see it's been posted already.


Doesn't look like it's compatible with my Zotac 3090 though. Not sure why EK made a separate block for it.


----------



## defcoms

Did some PR runs last night on the stock strix with corsair block. 
14561








I scored 14 561 in Port Royal


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Does this seem low for an OC @ +165/1500 on the stock BIOS max power limit? This is as far as I could go on clock speed. Would a higher-power BIOS increase the score? My clocks seemed to be bouncing around some during the test.


----------



## mirkendargen

defcoms said:


> Did some PR runs last night on the stock strix with corsair block.
> 14561
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 561 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Does this seem low for OC @ +165/1500 stock bios max power limit. This is as far as I could go clock speed. Would a higher power bios increase the score. My clocks seemed to be bouncing around some during the test.


Looking at your clocks, more power would get you something like 1% higher (maintain 2130 the entire run), so ~14700.


----------



## yzonker

defcoms said:


> Did some PR runs last night on the stock strix with corsair block.
> 14561
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 561 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Does this seem low for OC @ +165/1500 stock bios max power limit. This is as far as I could go clock speed. Would a higher power bios increase the score. My clocks seemed to be bouncing around some during the test.


Make sure you've done the other tricks too. I think it's setting the 3DMark 3D settings for texture quality to "High Performance" and power management to "Prefer maximum performance". At least those are the 2 I have set. Run in a resolution no higher than PR (1440p). Close out everything you can including Afterburner, if you have the overlay turned on at least. And cool your water down first. Your average temp isn't much lower than mine and based on your previous comment, it should be quite a bit given the better deltaT you get. Water shouldn't heat up much during the run.


----------



## inedenimadam

yzonker said:


> Doesn't look like it's compatible with my Zotac 3090 though. Not sure why EK made a separate block for it.


I will have to double check, but the one I ordered should be for the Zotac Trinity. AFAIK it's a reference card, and works with the reference block. Which card do you have?


----------



## yzonker

inedenimadam said:


> I will have to double check, but the one I ordered should be for the Zotac Trinity. AFAIK it's a reference card, and works with the defence block. Which card do you have?


Base Zotac 3090 trinity. The Zotac has the LED plugs on the end. Other reference cards have them on the side. The trinity ek block is 9mm longer IIRC. 

I think the rest of the block manufacturers just have one block.


----------



## MrBeer

That is about right. I get 14500 at +135 and +1000 mem, and 14750 at +165 and +1500 mem on an EVGA FTW3 Ultra with a water cooler.


----------



## defcoms

yzonker said:


> Make sure you've done the other tricks too. I think it's setting the 3DMark 3D settings for texture quality to "High Performance" and power management to "Prefer maximum performance". At least those are the 2 I have set. Run in a resolution no higher than PR (1440p). Close out everything you can including Afterburner, if you have the overlay turned on at least. And cool your water down first. Your average temp isn't much lower than mine and based on your previous comment, it should be quite a bit given the better deltaT you get. Water shouldn't heat up much during the run.


Thanks, I did have Afterburner open and did notice the GeForce Experience banner when I ran it. I will give it a go tonight after work with those closed and my fans set to max. My loop temp was around 35C; normally my machine idles around 28-30C depending on whether the AC has kicked on or not. I did run a bunch of tests, so I will give it more time to cool off between each test. My case isn't the best for water cooling (Asus Helios). Both rads are set to blow into the case and I have one exhaust fan. I think my loop just heats up some from trapped hot air


----------



## yzonker

defcoms said:


> Thanks I did have afterburner open and did notice the geforce experience banner when I run it. I will give it a go tonight after work with those closed and my fans set to max. My loop temp was around 35c my normal my machine idles around 28-30c depending if the AC has kicked on or not. I did run a bunch of test I will give it more time to cool off between each test. My case isn't the best for water cooling asus helios. Both rads are set to blow into the case and i habe one exhaust fan. I think my loop just heats up some from trapped hot air
> View attachment 2482996
> View attachment 2482998


Actually the rads get the coolest air that way though. I have mine blowing out, but only because I didn't want my case temp that high since the mem runs so hot on the 3090.


----------



## KedarWolf

My case has the motherboard horizontal mount and the video card vertical mount. I have three fans blowing directly down onto the video card with really cool ambient temps and my card gets to 75C and 80C memory even with a lower power limit on the stock BIOS pulling 390W.

I'm thinking my heatsink and backplate on the Strix OC are not really mounted well.

From hearing that the pads are all 2mm, I need to buy a bunch and replace them all until I get my EKWB block and active backplate when it's released.

Oh, side note. Person messages me locally to buy my 1080 Ti. I'm all, "You know it has a waterblock and backplate on it?" He says, "Yes, I do." Think I have it sold.

Then later says, "I just fill it with water and I'm good, right?" I'm all, "Nope, you need a water cooling loop, pump, rad, hoses etc." The person never messages me again.


----------



## PowerK

Vertically mounted the other day.


----------



## yzonker

That's a lot of Noctua A12's. I just got done upgrading my F12's to A12's on my 360 rad. Well worth it. Then I replaced my Noctua A14's with Arctic P14's. Also a nice upgrade. I've knocked 4-5C off my water temp without increasing noise level much, if any. That swept style blade design seems to be far superior. Wish I had done more research before I bought the first time.


----------



## defcoms

Nice setup!

Well, I took the side panel off the case and it didn't make much of a difference. The score went up some, but that was from going to +168 GPU and a little more on the memory.








I scored 14 647 in Port Royal


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com




I don't think I can break 15k on this card with the stock bios 

Here is a picture of how my Corsair Strix block has been working on the card with Kryonaut: a 10-12C delta. Still figuring out my fan settings on this loop; I've only had it running for about a week, and I've been working a ton, so I've only been getting a few hours to play after work.


----------



## J7SC

KedarWolf said:


> (...) Oh, side note. Person messages me locally to buy my 1080 Ti. I'm all, "You know it has a waterblock and backplate on it?" He says, "Yes, I do." Think I have it sold.
> Then later says, "I just fill it with water and I'm good, right?" I'm all, "Nope, you need a water cooling loop, pump, rad, hoses etc." The person never messages me again.


...that's how I ended up with my 2nd 2080 Ti Aorus Xtr WB for SLI back in December '18...I had bought the first one (same model) a few weeks earlier but it was the only one they had...then an 'open box' appeared at the same retailer...turns out a chap bought it in the morning thinking it was the AIO model (instead of the full factory w-block) and brought it back in the afternoon complaining that 'the hoses and rads are missing', what kind of quality control _is that _!? Saved me a hundred+ dollars, coz, open box... 



PowerK said:


> Vertically mounted the other day.
> 
> View attachment 2483022


Nice ! Depending on your mobo / chipset, are you using a PCIe 3.0 or 4.0 riser, and if it is the latter, is it working ok ?


----------



## PowerK

J7SC said:


> Nice ! Depending on your mobo / chipset, are you using a PCIe 3.0 or 4.0 riser, and if it is the latter, is it working ok ?


Motherboard is Crosshair VIII Dark Hero (X570) and the riser cable is PCI-E 4.0 cable from Lian Li. Working great so far.


----------



## WilliamLeGod

PowerK said:


> Motherboard is Crosshair VIII Dark Hero (X570) and the riser cable is PCI-E 4.0 cable from Lian Li. Working great so far.


Can u link the lian li case that comes with the 4.0 riser?


----------



## yzonker

defcoms said:


> Nice setup!
> 
> Well I took the side panel off the case and it didn't make much of a difference. Score went up some but that was from going to +168 GPU and little more on the memory.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 647 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> I don't think I can break 15k on this card with the stock bios
> 
> Here is a picture of how my corsair strix block has been working on the card with kryronaut 10-12c Delta. Still figuring out my fan settings on this loop have only had it running for about a week and been working a ton so only been getting a few hrs to play after work.
> View attachment 2483024


Yea the core offset is holding you back for sure I think. The run I posted was +195. I successfully got +210 to work since then. It completed 3 times in a row (with time to cool). 18 friggin' points is all I needed...









I scored 14 982 in Port Royal


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## piperprinx

jura11 said:


> I would get Bykski waterblock, for money you will find hardly better waterblock
> 
> In my case I'm getting 36-38°C with KPE XOC BIOS capped at 65-75%(GPU pulls like 446W with 65% and 75% pulls like 515W) and VRAM temperatures are in 60's in gaming
> 
> Hope this helps
> 
> Thanks, Jura


I tried both Bykski and Alphacool with my Strix. Better temps with the Alphacool, similar price.


----------



## DankeWaffle

PSA: ASUS ROG Strix GeForce RTX 3090 OC Thermal Pad Thickness

I posted this on Reddit, and was told you guys might find it useful here too.

So I haven't been able to find any actual measurements of the thermal pads online for the Strix 3090 OC, as I guess nobody has done it yet. This could be useful for anyone trying to replace the thermal pads to reduce GDDR6X temperatures.

I didn't have calipers handy, but I used a mm ruler, a ton of eyeballing, and way too many measurements. This should be accurate, but please reference it with caution.



http://imgur.com/a/6Tq0zeq


----------



## J7SC

DankeWaffle said:


> PSA: ASUS ROG Strix GeForce RTX 3090 OC Thermal Pad Thickness
> 
> I posted this on Reddit, and was told you guys might find it useful here too.
> 
> So I haven't been able to find any actual measurements of the thermal pads online for the Strix 3090 OC, as I guess nobody has done it yet. This could be useful for anyone trying to replace the thermal pads to reduce GDDR6X temperatures.
> 
> I didn't have calipers handy, but I used a mm ruler, a ton of eyeballing, and way too many measurements. This should be accurate, but please reference it with caution.
> 
> 
> 
> http://imgur.com/a/6Tq0zeq


That's great...I used a caliper on my Strix 3090 OC and posted on it before, but had trouble with 'squishing' and raised sides for a true pre-mount measurement. This is better.

FYI, I also noticed that on the backplate on mine, there were some longer ones which looked pristine and had no imprint, as if they never made solid contact. I believe it's the 'green-framed ones / 0.5mm' in your pic above. Did you see the same on your Strix 3090 OC?


----------



## Beagle Box

J7SC said:


> That's great...I used a caliper on my Strix 3090 OC and posted on it before, but had trouble with 'squishing' and raised sides for a true pre-mount measurement. This is better.
> 
> FYI, I also noticed that on the backplate on mine, there were some longer ones which looked pristine and had no imprint, as if they never made solid contact. I believe it's the 'green-framed ones / 0.5mm' in your pic above. Did you see the same on your Strix 3090 OC ?


The ones outlined in light green were completely untouched on mine. 
Super thin and useless.


----------



## EarlZ

Finally managed to replace the stock paste on my MSI Suprim X 3090. From 85°C @ 100% fan speed under Superposition and other games, it's now 66°C @ 100% fan speed. Auto fan speed now hits 71-74°C depending on the game, and that's at auto fan speed (80-83%), whereas previously it would hit 85°C even at 100%. Very pleased with the result!


----------



## DankeWaffle

J7SC said:


> That's great...I used a caliper on my Strix 3090 OC and posted on it before, but had trouble with 'squishing' and raised sides for a true pre-mount measurement. This is better.
> 
> FYI, I also noticed that on the backplate on mine, there were some longer ones which looked pristine and had no imprint, as if they never made solid contact. I believe it's the 'green-framed ones / 0.5mm' in your pic above. Did you see the same on your Strix 3090 OC ?





Beagle Box said:


> The ones outlined in light green were completely untouched on mine.
> Super thin and useless.


Yeah, the squishing was pretty bad in some parts.. these measurements are what I believe to be the true 'unsquished' measurements lol. And I did notice that too, the green 0.5mm ones appeared to be untouched compared to the others. Looking at the PCB, there are two rows of MOSFETs on the back that line up with them, but they probably don't even touch the pads, or barely make contact.

If I had to guess, it's just an oversight on transistors of lesser importance. You could probably get away with putting a 1.0mm strip there, but I'm not sure it would even be useful.


----------



## John117-

Does it make sense to buy a 3090 for VR and gaming, in your opinion?


----------



## Lobstar

John117- said:


> Does it make sense to buy a 3090 for VR and gaming, in your opinion?


Last I knew there was a driver issue that affects all Nvidia cards' VR performance: horrible latency, stuttering, and having to turn resolution scaling down to around 50, when typically you'd turn it up. The problem for the 3000 series is that the last driver that actually worked predates support for these cards. I'm sure it will be fixed in the future, but this has been a problem since Ampere released.


----------



## yzonker

Lobstar said:


> Last I knew there was a driver issue that affects all nvidia cards in regards to VR performance. Horrible latency, stuttering, and having to turn resolution scaling down to like 50. Typically you'd turn it up. The problem in regards to 3000 series is that the last driver that actually worked was before they supported these cards. I'm sure it will be fixed in the future but this has been a problem since Ampere released.


That makes it sound a whole lot worse than it actually is. The worst I get is a stutter occasionally. I found closing all of the monitoring programs (GPUZ, etc...) helps a lot. Not perfect, but 100% usable.


----------



## Lobstar

yzonker said:


> That makes it sound a whole lot worse than it actually is. The worst I get is a stutter occasionally. I found closing all of the monitoring programs (GPUZ, etc...) helps a lot. Not perfect, but 100% usable.


Elite Dangerous used to run at a solid 90 FPS on my 1080ti and on my 3090 I get about 12 FPS through the headset at 50% res scale.


----------



## EarlZ

I've managed to measure the stock pads on my MSI Suprim X, and they use a mix of 1.5mm, 2mm, and 2.5mm for the GDDR6X modules. The front side uses 2mm, while the backplate portion uses a mix of 1.5mm on the heatpipe portion and 2.5mm on the GDDR6X that touches the backplate. GELID, however, does not sell 2.5mm pads. I was thinking of stacking 2mm and 0.5mm pads, but I am not sure how effective that would be. Are there any good brands that sell 2.5mm or 3mm soft pads with a thermal rating close to what GELID offers?


----------



## J7SC

EarlZ said:


> I've managed to measure the stock pads on my MSI Suprim X and they use a mix of 1.5mm , 2mm and 2.5mm for the GDDR6X modules. Front side uses 2mm while the back plate portion uses a mix of 1.5mm on the heatpipes portion and 2.5mm on the GDDR6X that touches the back plate. GELID however does not sell 2.5mm pads. I was thinking of stacking a 2mm and 0.5mm pads but I am not sure how effective that can be, Are there any good brands that sell 2.5mm or 3mm soft pads with a thermal rating that is close to what GELID offers?


I asked that question about stacking pads a few times, but the only general answer I can find via Google is that it is not a good idea - there will likely always be some air trapped in between, no matter how careful you are. There is also the issue of the top coating, with at least some pads meant to bond to the opposite surface. All that said, I am really not convinced I've found a definitive answer on this...


----------



## yzonker

Lobstar said:


> Elite Dangerous used to run at a solid 90 FPS on my 1080ti and on my 3090 I get about 12 FPS through the headset at 50% res scale.


There's some other problem then. I haven't played ED since I got the 3090, but other games run fine. I recently finished Half Life and the 3090 wasn't even working very hard to maintain 80 fps on my Rift S. Only issue is an occasional noticeable stutter. Maybe every minute or so. Probably does it more often but it's not always noticeable.


----------



## yzonker

John117- said:


> Does it make sense to buy a 3090 for VR and gaming, in your opinion?


To answer this more directly, it really depends on what HMD you buy. My Rift S doesn't push the 3090 very hard with its 80 Hz refresh and relatively low resolution. Some of the other HMDs with higher refresh rates and higher resolution would definitely benefit more. The trick with VR is to always be at or above the refresh rate for smooth motion, so the faster card can come in handy when fps is close to the refresh rate.


----------



## EarlZ

J7SC said:


> I asked that question about stacking pads a few times, but the only general answer I can find via Google is that it is not a good idea - there will likely always be some air trapped in between, no matter how careful you are. There is also the issue of the top coating, with at least some pads meant to bond to the opposite surface. All that said, I am really not convinced I've found a definitive answer on this...


Same answer I found online. I wish Gelid and Thermalright would start making highly compressible pads, even at a slightly lower thermal rating.


----------



## Lobstar

yzonker said:


> There's some other problem then. I haven't played ED since I got the 3090, but other games run fine. I recently finished Half Life and the 3090 wasn't even working very hard to maintain 80 fps on my Rift S. Only issue is an occasional noticeable stutter. Maybe every minute or so. Probably does it more often but it's not always noticeable.


I have similar experiences on the Index and Reverb G2. It doesn't matter if it's WMR or SteamVR native. The only thing that works is rolling back to pre-v460 drivers. Many others have the same issues and it's very well documented. /shrug. I'm glad you aren't experiencing it.


----------



## inedenimadam

John117- said:


> Does it make sense to buy a 3090 for VR and gaming, in your opinion?


It does and it doesn't. I upgraded from a 2080 to a 3090 without changing anything else. The only real advantage I have come across is that I can turn up super sampling. The 2080 was enough for full fat VR with some amount of super sampling in almost every game. The 3090 is overkill and a real pleasure because it's just crank max and play. No tuning whatsoever. 

I don't have stuttering issues anymore myself, but it was present for a while. Read more in Road to VR's coverage of the Nvidia stutter hotfix.


----------



## derfer

Does anyone know the pad thickness on the 3090 ventus? I know the back vram is 3mm, but I don't know for the front vram, the mosfets, or the inductors.


----------



## Falkentyne

J7SC said:


> I asked that question about stacking pads a few times, but the only general answer I can find via Google is that it is not a good idea - there likely will always be some air trapped in-between, no matter how careful.There also is the issue of the top coating with at least some pads meant to bond to the opposite surface. All that said, I am really not convinced that I've found some all the definitive answers on this...


Stacking pads is never the best thing to do.
However stacked pads are better than no contact at all. Your card won't overheat if you stack and squish the pads carefully and have good contact. But a non stacked pad of the same w/mk and the proper thickness will be a few C cooler.
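This trade-off can be sanity-checked with back-of-envelope conduction math, treating each pad as a plane conductor with R = t / (k·A) and ignoring the pad-to-pad interface resistance that makes stacking risky in practice. The pad sizes and memory footprint here are illustrative assumptions, not measurements:

```python
# Rough comparison of a single lower-k pad vs. two stacked higher-k pads.
# Assumes ideal contact between stacked pads; the real unknown is the
# interface resistance between them, which eats into the stacked advantage.
def pad_resistance(thickness_mm, k_w_per_mk, area_mm2):
    """Thermal resistance in K/W of one pad, R = t / (k * A)."""
    return (thickness_mm / 1000) / (k_w_per_mk * area_mm2 * 1e-6)

area = 14 * 12  # mm^2, roughly one GDDR6X package footprint (assumed)

single = pad_resistance(2.5, 6.0, area)        # one 2.5 mm, 6 W/mK pad
stacked = (pad_resistance(2.0, 12.8, area)     # 2.0 mm + 0.5 mm stack,
           + pad_resistance(0.5, 12.8, area))  # both 12.8 W/mK, ideal contact

print(round(single, 2), "K/W single vs", round(stacked, 2), "K/W stacked")
```

On paper the stacked high-k pair wins comfortably, which is why "stacked is better than no contact" holds; the few degrees lost versus a proper single pad come from that unmodeled interface.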


----------



## KedarWolf

Falkentyne said:


> Stacking pads is never the best thing to do.
> However stacked pads are better than no contact at all. Your card won't overheat if you stack and squish the pads carefully and have good contact. But a non stacked pad of the same w/mk and the proper thickness will be a few C cooler.


The only 2.5mm pads I can get are 6 W/mK at the highest. So would stacking two 12.8 W/mK pads be just as good, do you think?


----------



## Falkentyne

KedarWolf said:


> The only 2.5mm pads I can get are 6 W/mK at the highest. So would stacking two 12.8 W/mK pads be just as good, do you think?


I mean you can use this too:

TG-PP10-50 Thermal Silicone Putty, 50 gram container, from t-Global Technology (www.digikey.com)


----------



## KedarWolf

Falkentyne said:


> I mean you can use this too:
> 
> TG-PP10-50 Thermal Silicone Putty, 50 gram container, from t-Global Technology (www.digikey.com)


No, I'd rather not use putty.


----------



## EarlZ

KedarWolf said:


> The only 2.5mm pads I can get are 6 W/mK at the highest. So would stacking two 12.8 W/mK pads be just as good, do you think?


Sounds like the same thermal pads these cards come with from the factory. I think those pads break down and leave an oily residue.


----------



## KedarWolf

EarlZ said:


> Sounds like the same thermal pads that come from where these cards are made, I think those pads will break down with an oily residue.


I would wear uncoated surgical rubber gloves when applying them. I've known to do that for a long time now. Oil from your fingers is not a good thing.


----------



## EarlZ

KedarWolf said:


> I would wear uncoated surgical rubber gloves when applying them. I've known to do that a long time now. Oil from your fingers is not a good thing.



I mean the stock pads from some manufacturers will produce a lot of oily residue after some time, even if you've never touched them. You can verify this by searching YouTube for a 3080/3090 thermal pad replacement. Brands like Gigabyte/MSI are the most common with pools of oil.


----------



## Toopy

Spiriva said:


> PNY GeForce RTX 3090 24GB XLR8 Gaming EPIC-X RGB
> 
> Backside
> 
> Closeup
> 
> EK Waterblock installed.
> 
> EK Backplate installed.
> 
> *___*
> 
> The thermal pads PNY used on the 3090 were the worst I've ever seen. There was no "peeling" them off; it was more or less scraping whatever was stuck on the memory etc. away from the card. It came off really easily though.


I don't suppose you have the thermal pad size for the stock heatsink?


----------



## KedarWolf

EarlZ said:


> At 85% power, that's just like running your 3090 at 3070 speeds??


No, I just ran Port Royal: the top result is with the Power Limit maxed out on the Suprim X BIOS, and the bottom with it at 85%.

I got the exact same score.

I don't have the Nvidia settings optimised for benchmarking, just kept that at my normal gaming settings.


----------



## Thanh Nguyen

25°C water and the 1000W BIOS. I haven't seen my card die yet.

I scored 15 677 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## John117-

Thanks for the replies to my question, guys. I wanted to hear some opinions.


----------



## bmagnien

Is there any way to offset higher than +1500 on mem in Afterburner? I know this sounds absurd but I just got my 4th FTW3 ultra so have a pretty good idea of what’s good and what’s not, and think I’ve won the memory lottery. Steady improving scores all the way up to 1500 which appears to be rock solid stable, just curious what I could push it to as I’ve yet to have a memory crash or any artifacting


----------



## EarlZ

bmagnien said:


> Is there any way to offset higher than +1500 on mem in Afterburner? I know this sounds absurd but I just got my 4th FTW3 ultra so have a pretty good idea of what’s good and what’s not, and think I’ve won the memory lottery. Steady improving scores all the way up to 1500 which appears to be rock solid stable, just curious what I could push it to as I’ve yet to have a memory crash or any artifacting


May I ask what you use to test memory stability ?


----------



## bmagnien

EarlZ said:


> May I ask what you use to test memory stability ?


Unlike system memory which I run all sorts of dedicated programs to measure stability given the risks of an unstable ram overclock for data/system integrity, I test my gpu mem with a range of traditional synthetic graphic benchmarks and games, checking for score degradation, artifacting, and of course hard crashes. A much less scientific method for sure but it’s worked for me thus far


----------



## EarlZ

bmagnien said:


> Unlike system memory which I run all sorts of dedicated programs to measure stability given the risks of an unstable ram overclock for data/system integrity, I test my gpu mem with a range of traditional synthetic graphic benchmarks and games, checking for score degradation, artifacting, and of course hard crashes. A much less scientific method for sure but it’s worked for me thus far


What would be your top 3 tests to run ?


----------



## bmagnien

EarlZ said:


> What would be your top 3 tests to run ?


I run PR at a known stable core offset but nothing too aggressive so I can get some repeatable runs, and then start bumping up the memory offset from 0 in large increments until the score stops rising, then start fine tuning back down to within +/- 25 of the sweet spot. I then combine that mem offset with my known highest stable core overclock, and test if that combo is stable on something like 4K Heaven extreme which will often crash or show artifacts if not
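That coarse-then-fine search can be sketched in code. `run_benchmark` here is a hypothetical stand-in for an actual Port Royal run returning a score, so this is just the shape of the procedure, not a tool:

```python
# Sketch of a coarse-then-fine memory offset search. run_benchmark(offset)
# is assumed to return a benchmark score; all step sizes are illustrative.
def find_mem_sweet_spot(run_benchmark, coarse=250, fine=25, limit=2000):
    """Raise the memory offset in big steps until the score stops improving
    (GDDR6X error correction kicking in), then fine-tune around the peak."""
    best_offset, best_score = 0, run_benchmark(0)
    # Coarse pass: large increments until the score drops.
    offset = 0
    while offset + coarse <= limit:
        offset += coarse
        score = run_benchmark(offset)
        if score <= best_score:
            break
        best_offset, best_score = offset, score
    # Fine pass: probe one coarse step either side of the best coarse result.
    for candidate in range(max(0, best_offset - coarse),
                           min(limit, best_offset + coarse) + 1, fine):
        score = run_benchmark(candidate)
        if score > best_score:
            best_offset, best_score = candidate, score
    return best_offset, best_score
```

In practice you'd still confirm the result against a long artifact-prone run (Heaven, gaming) since a single score peak is not a stability proof.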


----------



## Nizzen

bmagnien said:


> Is there any way to offset higher than +1500 on mem in Afterburner? I know this sounds absurd but I just got my 4th FTW3 ultra so have a pretty good idea of what’s good and what’s not, and think I’ve won the memory lottery. Steady improving scores all the way up to 1500 which appears to be rock solid stable, just curious what I could push it to as I’ve yet to have a memory crash or any artifacting


Use the EVGA Precision X1 tool


----------



## bmagnien

bmagnien said:


> I run PR at a known stable core offset but nothing too aggressive so I can get some repeatable runs, and then start bumping up the memory offset from 0 in large increments until the score stops rising, then start fine tuning back down to within +/- 25 of the sweet spot. I then combine that mem offset with my known highest stable core overclock, and test if that combo is stable on something like 4K Heaven extreme which will often crash or show artifacts if not





Nizzen said:


> Use Evga P X tool


I'm running either the KP520 or the 1000W XOC BIOS; PX1 goes berserk when I try to run an FTW3 with those BIOSes. It gets stuck in a loop of attempting to update the card's firmware, at least that was the case last I checked.


----------



## des2k...

Got my new Barrow block; it looks pretty good. Happy the design matches their CGI pictures: machined very low / close to the heat source, with a big area for the fins.

The front pads look like I might be able to re-use my Odyssey ones. I have 1mm and 1.5mm.

The backplate is flat; looks like the white pads are for that, very chunky. Odyssey 2mm might work, I need to measure them.

For now, I want to test core temps. I'm really hoping for a 10°C delta.
If it's good, I will probably cool the VRM, core, and memory on the backplate with 1mm pads and copper shims.


----------



## yzonker

That's going to be a good test. Different blocks going into the same loop, and the same Zotac card I have. Looks like Barrow managed to do what Corsair didn't: the block is continuous over to the VRMs despite needing to clear the caps, etc...

I don't think I'm going to touch mine for a while though. Just don't want to risk bricking it when I can't get another one easily (and prices are crazy now).


----------



## mirkendargen

mirkendargen said:


> Yeah when I took my loop apart to swap in the 3090 I opened up the TR4 block and scrubbed it with a toothbrush, the plating on it is pristine. Not sure if I'll have much luck contacting Bykski...but who knows.


Took apart my peeling block to scrub it clean, here's how it looks. The fins are still completely clear, it's still working fine, just looks terrible. My CPU block still looks pristine.


----------



## jura11

mirkendargen said:


> Took apart my peeling block to scrub it clean, here's how it looks. The fins are still completely clear, it's still working fine, just looks terrible. My CPU block still looks pristine.
> 
> View attachment 2483424


I would consider contacting Bykski straight away; it's not normal for a waterblock to look like that after such a short period of time.

Hope this helps 

Thanks, Jura


----------



## mirkendargen

jura11 said:


> I would consider contacting Bykski straight away; it's not normal for a waterblock to look like that after such a short period of time.
> 
> Hope this helps
> 
> Thanks, Jura


Yeah, I emailed them but I'm not going to hold my breath... While finding their contact info I noticed there's now a V2 of the Strix block, although I can't tell what's different about it other than the G1/4 port area being slightly redesigned, and possibly a slightly different flow design on the left side. Worst case, I can order one of them without a backplate (and reuse my current one) with free shipping and still spend less in total than on an EK block.


----------



## jura11

mirkendargen said:


> Yeah, I emailed them but I'm not going to hold my breath... While finding their contact info I noticed there's now a V2 of the Strix block, although I can't tell what's different about it other than the G1/4 port area being slightly redesigned, and possibly a slightly different flow design on the left side. Worst case, I can order one of them without a backplate (and reuse my current one) with free shipping and still spend less in total than on an EK block.


Yes, that's true, they now have V2 versions of the blocks. I currently have V1 and V2 blocks in my loop; no difference in temperatures I would say, although I think the V2 is maybe 1-2°C better under load. At idle they're identical, and VRAM temperatures are slightly higher on the V2, but that could be down to airflow in my case. I'll have Thermalright Odyssey pads here in a couple of days; I'll tear down the loop, take pictures of the blocks, replace the pads, and see what I can achieve with them.

Did you try contacting the Bykski seller on AliExpress, if you bought there?

Bykski waterblocks are cheaper than anything available here; I bought another 3 of them for future builds.

Best of luck there.

Thanks, Jura


----------



## Lobstar

bmagnien said:


> Is there any way to offset higher than +1500 on mem in Afterburner? I know this sounds absurd but I just got my 4th FTW3 ultra so have a pretty good idea of what’s good and what’s not, and think I’ve won the memory lottery. Steady improving scores all the way up to 1500 which appears to be rock solid stable, just curious what I could push it to as I’ve yet to have a memory crash or any artifacting


You should check out the new FTW3U RMA process for cards with unbalanced power draw. All of my cards have done 1500 mem no problem.


----------



## J7SC

...finally managed to start up the now w-cooled 3090 and 3950X combo after fully bleeding the loops. Even with only half of the total 120mm fan complement running, the 3090 Strix OC EK block's performance is fantastic... with 50% of the fans running slow, GPU temps still dropped by over 45°C in TS-Ex (using Kryonaut, btw...). VRM and memory temps are significantly lower as well, even though I did not have time to install the backplate / memory fans yet.

...this 'first boot' with w-cooling got seriously delayed mid-stream...first, hyper busy at work, then I bruised a rib and possibly my liver (that hurt). Still, I decided to forge ahead with the w-cooling build while 'on the meds' for the rib injury, and promptly outdid my personal stupidity record by grabbing 5 replacement blades for a carpet knife mid-air when they went for a flight as I was trying to open their plastic box...took half an hour to clean the blood up, but now with bandages on my hands, I had to give up working on the project for a few days... anyway, all is well that ends well. Now I get to finish the project with all the visual goodies I want to add, then some bench-marking and pics for a build log.


----------



## des2k...

.


----------



## mirkendargen

jura11 said:


> Yes, that's true, they now have V2 versions of the blocks. I currently have V1 and V2 blocks in my loop; no difference in temperatures I would say, although I think the V2 is maybe 1-2°C better under load. At idle they're identical, and VRAM temperatures are slightly higher on the V2, but that could be down to airflow in my case. I'll have Thermalright Odyssey pads here in a couple of days; I'll tear down the loop, take pictures of the blocks, replace the pads, and see what I can achieve with them.
> 
> Did you tried contacting Bykski seller on Aliexpress if you bought there?
> 
> Bykski waterblocks are cheaper than anything here available, I bought another 3 of them for future builds
> 
> Best of luck there
> 
> Thanks, Jura


I guess I should have had more faith. Bykski emailed me back the same day saying they're sorry and asking for my full address so they can send me a new block. Cheapest price AND best service...?


----------



## J7SC

mirkendargen said:


> I guess I should have had more faith. Bykski emailed me back the same day saying they're sorry and asking for my full address so they can send me a new block. Cheapest price AND best service...?


...that's good to hear. Notwithstanding the troubles / time one has to go through with a custom loop to change a block, it would be FAR WORSE to get the corporate RMA runaround on top of that. I grew up in a 'metal fabrication town' in Europe and remember the 'succession baths' bits and pieces would be dunked into for plating, not to mention the electrolytic part. Not a foolproof process.

Then there is user treatment, i.e. using non-compatible metals (silver coils), aggressive cleaning solutions, or non-compatible cooling liquids... not suggesting that was the issue in your case, btw, just making a general point... I've seen all kinds of 'advice' at OCN and elsewhere re. cleaning & flushing which clearly went against the manufacturer's guidelines...


----------



## jomama22

J7SC said:


> ...that's good to hear. Notwithstanding the troubles / time spent one has to go through w/ a custom loop to change a block, it would be FAR WORSE to get the corporate RMA runaround on top of that. I grew up in a 'metal fabrication town' in Europe and remember the 'succession baths' bits and pieces would be dunked into for plating, not to mention the electrolytic part. Not a foolproof process.
> 
> Then there is the user treatment, ie. using non-compatible metals (silver coils) or use of aggressive cleaning solutions or non-compatible cooling liquids...not suggesting that was the issue in your case, btw, but making a general point...I've seen all kinds of 'advice' at OCN and elsewhere re. cleaning & flushing which clearly went against the manufacturer's guidelines...


That Bykski block just looks to have very thin nickel plating that wears away/flakes quickly under good loop flow. Everywhere water is restricted seems to have nickel missing. Higher-acidity fluid would definitely speed up this process, but I would expect to see it more in flow dead zones if that were the case. Could also just be poor adhesion of the nickel to the copper.

Corrosion inhibitor probably wouldn't have prevented this. 

Only proper way to clean parts is to do them before adding them to the loop. Doing those full loop cleans when it's already assembled is just asking for trouble in my mind.


----------



## mirkendargen

jomama22 said:


> That bykski block just looks to have very thin nickel plating that wears away/flakes quickly under good loop flow. Everywhere water is restricted seems to have nickel missing. Higher acidity fluid would definitely speed up this process but I would expect to see it more in flow dead zones if that was the case. Could also just be poor adhesion of the buckle to copper.
> 
> Corrosion inhibitor probably wouldn't have prevented this.
> 
> Only proper way to clean parts is to do them before adding them to the loop. Doing those full loop cleans when it's already assembled is just asking for trouble in my mind.


I'm gonna call it a one-off/bad batch since tons of people on this thread have the same block and no one else has this problem. I would blame something with my loop/coolant except my CPU block is totally fine.

And yeah, I always do my loop cleaning (really just vinegar in the rads and opening up blocks to scrub them with a soft toothbrush) with the parts disconnected and then flush thoroughly. I know vinegar is a no-no on nickel plating. The bulk of the plating that came off collected in a pile in a low flow area of the GPU block that I cleaned out fine, but I'm gonna flush my loop with DI water a couple times then vinegar the rads before putting the new block back in just to make sure nothing's still floating around/clogging the rads.


----------



## KedarWolf

Pro-tip for anyone mining. Most of you probably know, but some don't.

Download this:

Release v3.5.0.0 · DeadManWalkingTO/NVidiaProfileInspectorDmW (github.com)

You might have to run it as Admin.

Set the P2 state as below, or your memory speeds will be severely gimped while mining. Some benchmarks etc. also lower RAM speeds, and this fixes it.


----------



## yzonker

KedarWolf said:


> Pro-tip for anyone mining. Most of you probably know, but some don't.
> 
> Download this:
> 
> Release v3.5.0.0 · DeadManWalkingTO/NVidiaProfileInspectorDmW (github.com)
> 
> You might have to run it as Admin.
> 
> Set the P2 state as below, or your memory speeds will be severely gimped while mining. Some benchmarks etc. also lower RAM speeds, and this fixes it.


One advantage of forcing it to P0 all the time is that your mining VRAM OC will be the same as your gaming OC. Otherwise the card can crash when you stop mining, as it shifts from P2 back to P0.

Also, the XOC bios goes to something like P8, so you have to do this to mine at all with it loaded.
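If you want to confirm which P-state the card is actually sitting in while mining or benching, the driver can report it directly. This is a generic nvidia-smi query, not part of the inspector tool, and it obviously needs an NVIDIA driver installed:

```shell
# Report the current performance state and clocks; P0 = full 3D clocks,
# P2 = the reduced-memory compute state being worked around above.
QUERY="nvidia-smi --query-gpu=pstate,clocks.mem,clocks.gr --format=csv"
# Only run it if the NVIDIA tools are actually present on this machine:
if command -v nvidia-smi >/dev/null 2>&1; then
  $QUERY
fi
```

Watching that output while starting and stopping the miner makes the P2-to-P0 transition (and any OC applied in the wrong state) easy to spot.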


----------



## ArcticZero

Does anyone know for sure the thickness of pads needed for the stock PNY 3090 cooler? I repadded using 1mm Thermalright Odyssey for everything, except I based my data off EKWB's reference blocks, which obviously might have different specs. The pads seem compressed enough when I checked, and the GPU die seems to have decent contact based on the paste spread. The problem is my VRAM temps haven't improved as much as I thought they would; I only went from 110°C max to 104°C.

Thanks!


----------



## bmagnien

Lobstar said:


> You should check out the new FTW3U RMA process for cards with unbalanced power draw. All of my cards have done 1500 mem no problem.


I did - this card is from the new program. It’s a rockstar with the KP bios, best chip and mem silicon I’ve gotten over the past 4 FTW3 3090s I’ve had, and no more issues with the pcie slot drawing too much power.
My question is how to set memory higher than +1500 in afterburner? I downloaded Asus’s GPUTweak2 and was able to set the memory to the equivalent of +1750 there, but that program and its core offset/curve settings suck compared to afterburner. Any other ideas?


----------



## des2k...

bmagnien said:


> I did - this card is from the new program. It’s a rockstar with the KP bios, best chip and mem silicon I’ve gotten over the past 4 FTW3 3090s I’ve had, and no more issues with the pcie slot drawing too much power.
> My question is how to set memory higher than +1500 in afterburner? I downloaded Asus’s GPUTweak2 and was able to set the memory to the equivalent of +1750 there, but that program and its core offset/curve settings suck compared to afterburner. Any other ideas?


You can set your curve/offset with Afterburner. After that, open the EVGA tool, switch to the VF curve view, then apply your memory OC. You don't lose your Afterburner settings.


----------



## bmagnien

des2k... said:


> You can set your curve/offset with Afterburner. After that, open the EVGA tool, switch to the VF curve view, then apply your memory OC. You don't lose your Afterburner settings.


EVGA’s PX1 attempts to flash my card’s firmware with the KP520 BIOS installed; I pretty much can’t use it because it gets stuck in an infinite loop of restarts.


----------



## Falkentyne

bmagnien said:


> I did - this card is from the new program. It’s a rockstar with the KP bios, best chip and mem silicon I’ve gotten over the past 4 FTW3 3090s I’ve had, and no more issues with the pcie slot drawing too much power.
> My question is how to set memory higher than +1500 in afterburner? I downloaded Asus’s GPUTweak2 and was able to set the memory to the equivalent of +1750 there, but that program and its core offset/curve settings suck compared to afterburner. Any other ideas?


Have you tried setting the clocks manually with nvidia-smi?

nvidia-smi -ac <maxmemspeed>,<maxclockspeed> or something similar?
Maybe see if you can ask one of the LN2'ers?
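For reference, this is roughly what that looks like. The clock values below are placeholders, not known-good numbers for this card, and note that `-ac` sets application clocks, which many GeForce boards reject as unsupported; on newer drivers the `-lgc`/`-lmc` lock-clock options are the usual fallback:

```
# List the <memory clock, graphics clock> pairs the board reports as supported
nvidia-smi -q -d SUPPORTED_CLOCKS

# Set application clocks to <memory clock>,<graphics clock> in MHz (placeholder values)
nvidia-smi -ac 10501,1695

# Reset application clocks back to default
nvidia-smi -rac
```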


----------



## sultanofswing

bmagnien said:


> EVGA’s PX1 attempts to flash my card’s firmware with the kp520 bios installed, I pretty much can’t use it because it gets stuck in an infinite loop of restarts


Delete the files in the firmware folder and it will not attempt to flash the card.


----------



## bmagnien

sultanofswing said:


> Delete the files in the firmware folder and it will not attempt to flash the card.


This is good advice - I’ll give it a go - thanks! Is it programfiles/evga/firmware? Or something like that? Also, one time recently it wasn’t the firmware loop but a ‘px1 must restart your computer’ loop which was equally annoying - ever seen that?


----------



## mirkendargen

des2k... said:


> you can set your curve,offset with after burner.
> After open evga, switch to VF curve view then apply your memory oc. You don't loose afterburner settings.


Can you just edit the cfg file for your device ID in the afterburner\profiles folder and make MemClkBoost=<_something bigger than 1500000_>?
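If anyone tries that route, the idea is editing the per-card profile file directly. A sketch of what the relevant part might look like; the file name, section name, and whether Afterburner honors values above the slider cap are all assumptions here, only the `MemClkBoost` key and the kHz units come from the post above:

```
; MSI Afterburner\Profiles\<VEN_..._DEV_...>.cfg -- the name varies per card
[Profile1]
; offsets are stored in kHz, so +1750 MHz would be:
MemClkBoost=1750000
```

Back up the original file first; Afterburner may clamp or rewrite out-of-range values on load.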


----------



## Dreams-Visions

Hey guys, is there a way to control or change the color of the X Trio color bar after flashing a different bios? I'd like to not have the rainbow permanently on my card. I know I can disconnect the RGB header, but if there is a software solution, I'd greatly prefer it. I'm currently running one of EVGA's bios, but the EVGA Precision app doesn't seem to recognize that it has a light bar to tune (though it identifies it as a Kingpin). OpenRGB doesn't see it. MSI Dragon Center doesn't seem to see it.


----------



## des2k...

Well, that new Barrow block... terrible delta :-( Not good core contact.

Got my flow meter; it has a better temp sensor than the mobo G1/4 plug-type sensor.


Also my water flow is 120 L/h and climbing, since there's too much air in the loop.

Back to the EK block, but temps are maybe 2°C worse now.


----------



## sultanofswing

bmagnien said:


> This is good advice - I’ll give it a go - thanks! Is it programfiles/evga/firmware? Or something like that? Also, one time recently it wasn’t the firmware loop but a ‘px1 must restart your computer’ loop which was equally annoying - ever seen that?


Somewhere like that, I do not have PX1 installed anymore to check though.


----------



## EarlZ

bmagnien said:


> I run PR at a known stable core offset but nothing too aggressive so I can get some repeatable runs, and then start bumping up the memory offset from 0 in large increments until the score stops rising, then start fine tuning back down to within +/- 25 of the sweet spot. I then combine that mem offset with my known highest stable core overclock, and test if that combo is stable on something like 4K Heaven extreme which will often crash or show artifacts if not


Lost the silicon lottery on my GDDR6X overclock: at +800 MHz, Port Royal produces flashing spotlight-like color artifacts and crashes. Tjunction is 84°C, GPU temp 60°C.


----------



## yzonker

des2k... said:


> Well, that new Barrow block... terrible delta :-( Not good core contact.
> 
> Got my flow meter; it has a better temp sensor than the mobo G1/4 plug-type sensor.
> 
> 
> Also my water flow is 120 L/h and climbing, since there's too much air in the loop.
> 
> Back to the EK block, but temps are maybe 2°C worse now.


Wow, that sucks. Wasn't expecting that result. Not just missing the delta you wanted, but actually worse.

Here's a noob question: what determines the amount of core-to-block pressure you get? It felt like the screws bottomed out on my block, so it would depend on various dimensions of the block and card?


----------



## Zogge

If it bottoms out, it is either a perfect fit or the screws are too short. You could try plastic spacers to increase pressure, or springs on the screws.
Of course at your own risk, as you could potentially overtighten and destroy something if not careful.
Dual washers on my Bykski Strix helped my core temp delta by a few °C.


----------



## yzonker

Zogge said:


> If it bottoms out it is either perfect fit or too short screws. You could try with distances in plastic or so to increase pressure or springs on the screws.
> Of course on your own risk as you could potentially do it too hard and destroy something if not careful.
> Dual washers on my bykski strix helped my core temp delta a few degree c.


Yea, that's why I was asking. Partly wondering if there's potential for improvement, but then not wanting to crunch the PCB or something. I just snugged them down with a small jeweler-type screwdriver. Unclear what to really torque them to.


----------



## UdoG

Is it possible to flash the BIOS on the FE card? If yes, what is the best BIOS?

Thanks.


----------



## Nizzen

UdoG said:


> Is it possible to flash the BIOS on the FE card? If yes, what is the best BIOS?
> 
> Thanks.


No.

Next question


----------



## jomama22

des2k... said:


> Well, that new Barrow block... terrible delta :-( Not good core contact.
> 
> Got my flow meter; it has a better temp sensor than the mobo G1/4 plug-type sensor.
> 
> 
> Also my water flow is 120 L/h and climbing, since there's too much air in the loop.
> 
> Back to the EK block, but temps are maybe 2°C worse now.


It's very possible your die just isn't all that level. If you were to try another block (like Bykski or something) with the same results, it would leave your card as the common denominator here.


----------



## Lord of meat

Dreams-Visions said:


> Hey guys, is there a way to control or change the color of the X Trio color bar after flashing a different bios? I'd like to not have the rainbow permanently on my card. I know I can disconnect the RGB header, but if there is a software solution, I'd greatly prefer it. I'm currently running one of EVGA's bios, but the EVGA Precision app doesn't seem to recognize that it has a light bar to tune (though it identifies it as a Kingpin). OpenRGB doesn't see it. MSI Dragon Center doesn't seem to see it.


You need to flash back to the Trio BIOS, change the color or disable it, and then flash to the other BIOS. Keep in mind the RGB uses some watts, so you're losing a very small amount of power headroom.
I don't know how you didn't brick your card by using the Kingpin BIOS; as far as I know the Trio is only compatible with the 500W XOC one.
If you do brick it, feel free to contact me and I'll help ya.
Good luck.


----------



## jura11

des2k... said:


> Well, that new Barrow block... terrible delta :-( Not good core contact.
> 
> Got my flow meter; it has a better temp sensor than the mobo G1/4 plug-type sensor.
> 
> 
> Also my water flow is 120 L/h and climbing, since there's too much air in the loop.
> 
> Back to the EK block, but temps are maybe 2°C worse now.


I haven't tested a Barrow waterblock on the RTX 3xxx series yet; I only tested one on an RTX 2080 Ti, and there temperatures were pretty much 3-5°C worse than EKWB: where EKWB would hit 38-42°C, the Barrow would hit 42-45°C in the same loop.

Regarding EKWB and worse temperatures after a remount, I experienced the same with my Asus RTX 2080 Ti Strix: the first mount gave the best temperatures, and on the 2nd and 3rd remounts they were worse by about 3°C, mainly on the core.

Hope this helps 

Thanks, Jura


----------



## jura11

Lord of meat said:


> You need to flash back to the Trio BIOS, change the color or disable it, and then flash to the other BIOS. Keep in mind the RGB uses some watts, so you're losing a very small amount of power headroom.
> I don't know how you didn't brick your card by using the Kingpin BIOS; as far as I know the Trio is only compatible with the 500W XOC one.
> If you do brick it, feel free to contact me and I'll help ya.
> Good luck.


You can't brick the GPU by flashing a different BIOS on an RTX 3090, unless you try to flash a 3080 BIOS onto it.

I'm running the KPE XOC BIOS capped at 65% on both of my RTX 3090 GamingPros.

Hope this helps 

Thanks, Jura


----------



## Pepillo

Lord of meat said:


> You need to flash back to the Trio BIOS, change the color or disable it, and then flash to the other BIOS. Keep in mind the RGB uses some watts, so you're losing a very small amount of power headroom.
> I don't know how you didn't brick your card by using the Kingpin BIOS; as far as I know the Trio is only compatible with the 500W XOC one.
> If you do brick it, feel free to contact me and I'll help ya.
> Good luck.


The Trio is 100% compatible with the EVGA BIOSes, both the 500W XOC and the 520W Kingpin; I've been on the latter for months.


----------



## Falkentyne

UdoG said:


> Is it possible to flash the BIOS on the FE card? If yes, what is the best BIOS?
> 
> Thanks.





Nizzen said:


> No.
> 
> Next question


You can flash a 3090 FE only with another 3090 FE BIOS.
Someone had an NVFlash with ID checks bypassed for Ampere but refused to share it.

Another person on TechPowerUp was willing to try to make me one, but he said it's very risky (he made his own NVFlash bypass for the Asus 3080 TUF bricking issue, installing custom modules that avoided the Asus bug which caused the brick) and that it has a high chance of bricking over multiple attempts until he gets the coding right. This would be no problem if the 3090 FE had a normal SOP-8 BIOS chip: then you just back up the entire chip manually with a Skypro programmer, a 1.8V adapter and a Pomona 5250 in-line clip, and can restore it manually, at the cost of new thermal pads and thermal paste. But the 3090 FE uses a UDFN8 chip, and you can barely even find socket adapters for that if you DESOLDER it, so you can forget in-line flashing.

Without the ability to restore via a hardware programmer, or some lucky/rich sob with a UDFN8 socket adapter and VERY expensive soldering equipment: nope. Don't even try.


----------



## Nizzen

Falkentyne said:


> You can flash a 3090 FE only with another 3090 FE Bios.
> Someone had a NVflash with ID checks bypassed for Ampere but refused to share it.
> 
> Another person on Techpowerup was willing to try to make me one but he said it's very risky (he made his own NVflash bypass for the Asus 3080 TUF bricking issue, by installing custom modules which avoided the Asus bug which caused a brick) and has a high chance of bricking over multiple attempts, until he gets the coding right, and this would be no problem if the 3090 FE had a normal SOP-8 bios chip (then you just backup the entire chip manually with a Skypro programmer and a 1.8v adapter and a Pomona 5250 in-line clip, and can restore it manually, at the cost of new thermal pads and thermal paste ), but the 3090 FE chip is a UFDN8 chip, and you can barely even find socket adapters for that if you DESOLDER it, so you can forget in-line flashing.
> 
> Without ability to restore via a hardware programmer or some lucky / rich sob with a UDFN8 socket adapter and VERY expensive soldering equipment, nope. Don't even try.


This?








NVIDIA NVFlash with Certificate Checks Bypassed (v5.287) Download


This modified version of NVFlash lets you flash a modified BIOS to your NVIDIA graphics card.




www.techpowerup.com


----------



## des2k...

jomama22 said:


> It's very possible your die just isn't all that level. If you were to try another block (like bykski or somthing) with the same results, it would leave your card as the common denominator here.


The die is flat; the stock cooler & EK both compress and push paste out (the result is a very thin, semi-transparent layer).

With the Barrow block I tried a second mount: the block standoffs are a bit taller than EK's, so it takes a lot of paste and doesn't compress or push it out. 30°C delta at just 300W load :-(

Also, in places the paste doesn't even reach the block, and that's with no thermal pads.

I think I would need to grind the standoffs down, which I don't want to do right now. I could do the same on the EK for better paste compression.


----------



## Falkentyne

Nizzen said:


> This?
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA NVFlash with Certificate Checks Bypassed (v5.287) Download
> 
> 
> This modified version of NVFlash lets you flash a modified BIOS to your NVIDIA graphics card.
> 
> 
> 
> 
> www.techpowerup.com


No, that one doesn't work on Ampere.

Someone uploaded an ID bypassed version for 3090 in this thread but then they very quickly deleted their post. 

Then there's this guy who created his own bypass; I asked if he could make me one, but he warned against it since it would cause several bricks before he got it right.









TUF 3080 Official Update Tool Bricked-Deleted 1/2 BIOS


Hello, I tried to use Asus official update tool (v1) to update BIOS on RTX3080 TUF OC. Bios switch was set on perf. bios. I followed the app instructions, but finally popped window the update is failed. The card and spec was completely gone from GPU-Z. After restart, PC boot'ed with black...




www.techpowerup.com


----------



## T.Sharp

des2k... said:


> Also Very little paste doesn't reach the block, that's with no thermal pads


Well that's a bummer. So you're saying that even without pads, the cold plate doesn't have any pressure on the die? Looks like you could remove the mounting inserts from the block if you want to sand them. The fact that you have to do that to get good die contact is whack.


----------



## jomama22

des2k... said:


> The die is flat; the stock cooler & EK both compress and push paste out (the result is a very thin, semi-transparent layer).
> 
> With the Barrow block I tried a second mount: the block standoffs are a bit taller than EK's, so it takes a lot of paste and doesn't compress or push it out. 30°C delta at just 300W load :-(
> 
> Also, in places the paste doesn't even reach the block, and that's with no thermal pads.
> 
> I think I would need to grind the standoffs down, which I don't want to do right now. I could do the same on the EK for better paste compression.
> 
> View attachment 2483697
> 
> View attachment 2483698


Wow, that paste looks horrible lol. No pressure at all on the die.

I mean honestly, there is a point where daily usage temps are fine (talking about your EK experience) and lowering them a handful of degrees won't net you much anyway. Granted, this is definitely the "I don't care anymore" way of thinking, but it's realistic. As an example, with the mem junction temps I have posted in the past (60°C running the mining bench), I could absolutely go lower by not using EK pads on the backplate, but meh, I have gotten to a point where it just doesn't really matter in everyday use (games never top 50°C on the mem no matter the game/resolution).

This is how it goes for each new gen I buy. Mess with it for a month or two for benchmarks and crap, then just enjoy it.

That's just me though.


----------



## J7SC

des2k... said:


> The die is flat; the stock cooler & EK both compress and push paste out (the result is a very thin, semi-transparent layer).
> 
> With the Barrow block I tried a second mount: the block standoffs are a bit taller than EK's, so it takes a lot of paste and doesn't compress or push it out. 30°C delta at just 300W load :-(
> 
> Also, in places the paste doesn't even reach the block, and that's with no thermal pads.
> 
> I think I would need to grind the standoffs down, which I don't want to do right now. I could do the same on the EK for better paste compression.
> 
> View attachment 2483697
> 
> View attachment 2483698


That is somewhat disturbing. Before grinding down the stand-offs, could you use copper shim(s) of a given thickness? Not ideal, but not bad either given copper's thermal conductivity, at least to figure out how much material to remove from the stand-offs.


----------



## yzonker

des2k... said:


> The die is flat; the stock cooler & EK both compress and push paste out (the result is a very thin, semi-transparent layer).
> 
> With the Barrow block I tried a second mount: the block standoffs are a bit taller than EK's, so it takes a lot of paste and doesn't compress or push it out. 30°C delta at just 300W load :-(
> 
> Also, in places the paste doesn't even reach the block, and that's with no thermal pads.
> 
> I think I would need to grind the standoffs down, which I don't want to do right now. I could do the same on the EK for better paste compression.


This is what I was getting at previously. The amount of pressure you get is dictated by the combined geometry and tolerances, which potentially leads to significant variability in performance. Obviously this is an extreme example; seems like there's a manufacturing flaw in this case.


----------



## Dreams-Visions

Lord of meat said:


> You need to flash back to the Trio BIOS, change the color or disable it, and then flash to the other BIOS. Keep in mind the RGB uses some watts, so you're losing a very small amount of power headroom.
> I don't know how you didn't brick your card by using the Kingpin BIOS; as far as I know the Trio is only compatible with the 500W XOC one.
> If you do brick it, feel free to contact me and I'll help ya.
> Good luck.


understood and ty for the tips. yea, no problems here on the KP bios. But I'll switch back to the 500W just to be sure.



Pepillo said:


> The Trio is 100% compatible with EVGA bios, both the 500w XOC and the 520w Kipping, I've been with the latter for months.


oh okay nm. for a second I thought I was alone. I've been on the KP bios since it came out. zero issues. But I also don't want to cause any issues, so I'll give it some thought.


----------



## des2k...

Resizable BAR already works, no vbios update?








Image dfa676c hosted on ImgBB (ibb.co)


----------



## Lobstar

KPE 520w bios on the new revision 3090 FTW3U.


----------



## wtf_apples

Lobstar said:


> KPE 520w bios on the new revision 3090 FTW3U.


That looks promising


----------



## mirkendargen

des2k... said:


> Resizable BAR already works, no vbios update?
> 
> Image dfa676c hosted on ImgBB (ibb.co)


Must be something with that leaked engineering driver you're using. It says "no" on 461.92 even with a mobo BIOS that supports it and with it enabled in the BIOS.
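On the question of whether BAR is actually active: one way to sanity-check is the BAR1 total reported by `nvidia-smi -q -d MEMORY`. The snippet below parses a canned sample (the 32768 MiB figure is made up, not a real reading) so it runs standalone; on a live system, pipe the real command's output instead:

```shell
# Parse the BAR1 size out of `nvidia-smi -q -d MEMORY`-style output.
# A canned sample is used here so the snippet is self-contained; on a live
# system, replace the heredoc with the real command's output.
sample_output=$(cat <<'EOF'
BAR1 Memory Usage
    Total                             : 32768 MiB
    Used                              : 5 MiB
EOF
)
bar1_total=$(printf '%s\n' "$sample_output" | awk -F': *' '/Total/ {print $2; exit}')
echo "BAR1 total: $bar1_total"
# A BAR1 total close to the card's full VRAM (24 GiB on a 3090) rather than
# the legacy 256 MiB suggests Resizable BAR is actually in effect.
```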


----------



## Lobstar

wtf_apples said:


> That looks promising


I believe if I improve my thermals I'll be able to improve that quite a bit. Optimus block should be arriving before the end of the century so I'll do more testing when that arrives. I'll also follow up with board images of this new revised version.


----------



## sultanofswing

Man, this Kingpin is driving me crazy: I dial in 1.2V NVVDD and a loadline of 1 in the Classified tool, and I get 1.4V on the OLED during a Port Royal run.


----------



## jomama22

sultanofswing said:


> Man this Kingpin is driving me crazy, I dial in 1.2v nvvdd and a loadline of 1 in the classy tool and I get 1.4v on the OLED during a port royal run.


Those voltage points are measured at the VRM, so they read higher than what is actually at the die, but the Kingpin's LLC overvolts on all LLC settings, as Luumi has stated on his YouTube channel when testing the card.

I believe when he would set 1.2V with LLC off, actual die voltage was near 1.3V or so. You can check his YouTube channel to see exactly what he observed. The only real way of knowing what voltage the die is getting is by placing a probe on the back of the socket and taking a look.

For benchmarking and such, setting 1.2V should be fine. These cards need cooling like no tomorrow to really find much headroom when benching anyway. Without going sub-zero, you hit a massive wall around 1.2V or so, where even up to 1.35V die voltage doesn't see too large a gain.


----------



## Falkentyne

des2k... said:


> Resizable BAR already works, no vbios update?
> 
> Image dfa676c hosted on ImgBB (ibb.co)


Does it actually improve performance? Your vbios isn't the one that supports bar, is it?


----------



## specv89

Lobstar said:


> KPE 520w bios on the new revision 3090 FTW3U.
> View attachment 2483722


Nice! I got my card this week with the black clown lips; is this the revised version? Also, where can I find that 520W vbios? Thanks!!


----------



## KedarWolf

des2k... said:


> Resizable BAR already works, no vbios update?
> 
> Image dfa676c hosted on ImgBB (ibb.co)


What VBIOS are you using? I enabled it in BIOS with those drivers and not working with the Suprim X VBIOS.


----------



## Lobstar

specv89 said:


> Nice! I got my card this week with the black clown lips is this the revised version? Also where can I find that 520w vbios? Thanks!!


It's the black lips from the special swap program. I just pulled the bios from tpu.


----------



## specv89

Lobstar said:


> It's the black lips from the special swap program. I just pulled the bios from tpu.


Just found the vbios! Mine was not from the swap program; it was a purchase made with the notify queue system. Does the black lip version mean it is the revised one?


----------



## KedarWolf

Can you run the below and share the BIOS with us? Might need to upload it to Google Drive or a file share site.



Code:


nvflash --save vbios.rom


----------



## Lobstar

specv89 said:


> Just found the vbios! mine was not from swap program it was a purchase made with the notify que system. does the black lip version mean it is the revised one?


No, the latest revision has a different voltage regulator. If your card experiences a large disparity across the three 8-pin connectors and PCIe slot power draw above 75W on the XOC BIOS, you can contact EVGA for a one-time swap, and you should get this revision.


----------



## specv89

Lobstar said:


> It's the black lips from the special swap program. I just pulled the bios from tpu





Lobstar said:


> No, the latest revision has a different voltage regulator. If your card experiences a large disparity across the three 8-pin connectors and high pcie power draw above 75w on the xoc bios you can contact evga for a one-time swap and you should get this revision.


So I got my card this past Monday without knowing EVGA had revised this. I loaded the 500W vbios from the site off the bat, not knowing that it had already been applied. Been seeing some weirdness; I have seen it go up to 492W. I am thinking it is because I am running the old 500W vbios rather than the 500W "revised" vbios. Will check in the morning; currently swapping cases.


----------



## KedarWolf

specv89 said:


> So I got my card this past Monday without knowing EVGA had revised this. I loaded the 500W vbios from the site off the bat, not knowing that it had already been applied. Been seeing some weirdness; I have seen it go up to 492W. I am thinking it is because I am running the old 500W vbios rather than the 500W "revised" vbios. Will check in the morning; currently swapping cases.


What BIOS exactly are you getting Resizable BAR enabled with? I'm confused. :/


----------



## bmagnien

Got my revised FTW3 via their new RMA program over the weekend as well:








I scored 21 875 in Time Spy


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com












I scored 11 866 in Time Spy Extreme


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com












I scored 15 244 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





I see you also hit the mem lotto on yours with it maxing at +1500. Running with the EK waterblock. Definitely my best core and mem silicon over 4 FTW3s this gen. Maybe coincidence, who knows. I have the same issue as yours with pins 1 and 2 far outstripping pin 3, but they fixed the high PCIe slot draw and it works great with the 1000w and 520w bios, so I'm not complaining.

Here's the new 500W BIOS that shipped with the card. FYI it didn't work for me, capped at around 460W, so I immediately stripped it off, flashed the KP520 and XOC1000, and called it a day.









EVGA_500W_NewRMA.rom


Shared with Dropbox




www.dropbox.com


----------



## Falkentyne

KedarWolf said:


> What VBIOS are you using? I enabled it in BIOS with those drivers and not working with the Suprim X VBIOS.


Yeah what vbios is this? And what card?
Or are all of these eVGA cards with that brand new vbios?


----------



## KedarWolf

Falkentyne said:


> Yeah what vbios is this? And what card?
> Or are all of these eVGA cards with that brand new vbios?


I tried the brand new EVGA Black Lips bios, no go.


----------



## EarlZ

Wanted to change the thermal pads on my MSI Suprim X 3090, but the GDDR6X uses a combination of 1.5mm and 2.5mm pads, and only Gelid/Thermalright/Thermal Grizzly and Arctic thermal pads are available to me, none of which come in 2.5mm. I am looking at two options here:

A) Use 0.5mm-thicker pads, which means I would need 2mm and 3mm pads for the GDDR6X. My only concern is that with the added height, I am not sure the backplate has enough mounting pressure to squish that additional 0.5mm without affecting the 3.5mm VRM pads. The VRM area has 2-3 screws close to it, so I would assume it should not be affected by the increased height on the GDDR6X.

B) Gelid sells 1.5mm pads, so I only need to stack 2mm + 0.5mm pads for the 2.5mm spots.

What do you guys think?
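Trivial, but just to put numbers on the two options (thicknesses from the post above; this assumes stacked pads simply add their thicknesses, which real compression won't exactly match):

```python
# Quick sanity check of the two pad options above. Stock GDDR6X pad spec
# (from the post): 1.5 mm and 2.5 mm spots.

stock = {"thin spots": 1.5, "thick spots": 2.5}  # mm, stock pad thicknesses

options = {
    # Option A: round everything up to the next available thickness
    "A": {"thin spots": 2.0, "thick spots": 3.0},
    # Option B: keep 1.5 mm where it fits, stack 2.0 + 0.5 for the 2.5 mm spots
    "B": {"thin spots": 1.5, "thick spots": 2.0 + 0.5},
}

for name, pads in options.items():
    extra = {spot: round(pads[spot] - stock[spot], 2) for spot in stock}
    print(f"Option {name}: extra height over stock = {extra} (mm)")
# Option B matches the stock spec exactly; option A adds 0.5 mm everywhere,
# which is where the mounting-pressure concern comes in.
```

On those numbers, B looks like the safer bet: it reproduces the stock heights exactly instead of relying on the backplate to take up an extra 0.5 mm.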


----------



## des2k...

KedarWolf said:


> What VBIOS are you using? I enabled it in BIOS with those drivers and not working with the Suprim X VBIOS.


Not my screenshot; it's from somebody on Reddit who deleted his post. He didn't mention details.


----------



## Nizzen

__ https://twitter.com/i/web/status/1374778605125791746


*Dear customer

Thank you for the mail.

We will release a new BIOS of RTX 3080 to support Resizable Bar at the end of this month (March).
Please kindly be notified.
Thanks.

Best regards

Palit Support
Palit Microsystem Ltd.*


----------



## Falkentyne

des2k... said:


> not my screenshot, somebody on reddit that deleted his post, He didn't mention details.


Of course. Always some NDA guy with a NDA no leaky no leaky beta BIOS he got from sources....


----------



## KedarWolf

Deleted


----------



## Thanh Nguyen

Optimus block will be delivered today, but I'm out of coolant so I can't install it till tomorrow. Hopefully it's worth $400.


----------



## Alex24buc

From my understanding, in order to use Resizable BAR I will need a new BIOS for my RTX 3090 (Palit GameRock OC) and also one for my motherboard, which is the Gigabyte X299X Aorus Master, correct? And will Gigabyte also provide a new BIOS for this platform? Thanks!


----------



## KedarWolf

Thanh Nguyen said:


> Optimus block will be delivered today but Im out of coolant so cant install it till tomorrow. Hopefully its worth $400.


I subscribe to the Optimus thread and only hear really good things about the blocks and XL backplates. 

The only thing is they are super heavy, you want a GPU support bracket if your video card is not vertical mount.


----------



## Pepillo

Alex24buc said:


> For my understanding, in order to use resizable bar I will need a new bios for my rtx 3090 (palit gamerock oc) and also one for my motherboard which is x299x gigabyte aorus master, correct? And will gigabyte also provide me a new bios for this platform? Thanks!


EVGA has said its X299s would get BIOSes, just like ASRock. For Asus, there's one in the ROG forums for the Rampage; about the others I don't know anything yet. I haven't read anything from Gigabyte. I am also waiting to see if it will be possible on my Asus X299 Deluxe.


----------



## ALSTER868

Thanh Nguyen said:


> Optimus block will be delivered today but Im out of coolant so cant install it till tomorrow. Hopefully its worth $400.


Has anyone heard anything about release of Optimus block for Strix soon? Looking forward to it


----------



## bmagnien

Why are the developer 470.xx drivers so good in PR? Like 200pts better, but not stable in games: I scored 15 415 in Port Royal


----------



## Beagle Box

bmagnien said:


> Why are the developer 470.xx drivers so good in PR? Like 200pts better, but not stable in games: I scored 15 415 in Port Royal


Dunno. I only know about 470.05. There are others?


----------



## J7SC

bmagnien said:


> Why are the developer 470.xx drivers so good in PR? Like 200pts better, but not stable in games: I scored 15 415 in Port Royal





Beagle Box said:


> Dunno. I only know about 470.05. There are others?


...new drivers ? Got to try those out on the weekend  ! Also, per comment above, which games have stability issues with this driver ?


----------



## bmagnien

J7SC said:


> ...new drivers ? Got to try those out on the weekend  ! Also, per comment above, which games have stability issues with this driver ?


Yah, it was 470.05, just couldn’t remember. Had them downloaded from a week or so ago; I think these are the ones where Nvidia ‘forgot’ to enable their new mining prevention tech, so I’m not sure if they’re even still available via the Nvidia developer program or not. The games that crashed with them were Division 2 and the Outriders demo; didn’t do extensive testing, but both those games crashed like 5 times in a row, so I just reverted and it’s fine now.


----------



## KedarWolf

J7SC said:


> ...new drivers ? Got to try those out on the weekend  ! Also, per comment above, which games have stability issues with this driver ?


They are the leaked development drivers that were pulled because peeps could mine on 3060's with them.

You can still find them archived online on Reddit and stuff though.

Apparently, someone with a 3090 had Resize Bar enabled with them as well, but we don't know the VBIOS they used to get that.

We still need a VBIOS update, even tried the GALAX one from 1-19, nada.


----------



## J7SC

bmagnien said:


> Yah it was 470.05, just couldn’t remember. Had them Downloaded from a week or so ago, I think these are the ones where Nvidia ‘forgot’ to enable their new mining prevention tech, so I’m not sure if they’re even still available via the Nvidia developers program or not. The games that crashed with them were division 2 the outriders demo, didn’t do extensive testing but both those games crashed like 5 times in a row so I just reverted and it’s fine now.





KedarWolf said:


> They are the leaked development drivers that were pulled because peeps could mine on 3060's with them.
> 
> You can still find them archived online on Reddit and stuff though.
> 
> Apparently, someone with a 3090 had Resize Bar enabled with them as well, but we don't know the VBIOS they used to get that.
> 
> We still need a VBIOS update, even tried the Galaxy one from 1-19, nada.


Thank you both  I'll see if I can find them...also wondering about the VBios update for the resizable BAR


----------



## KedarWolf

J7SC said:


> Thank you both  I'll see if I can find them...also wondering about the VBios update for the resizable BAR


VBIOS updates for Resizable BAR are supposed to come at the end of March.


----------



## Nizzen

Nizzen said:


> __ https://twitter.com/i/web/status/1374778605125791746
> 
> 
> *Dear customer
> 
> Thank you for the mail.
> 
> We will release a new BIOS of RTX 3080 to support Resizable Bar at the end of this month (March).
> Please kindly be notified.
> Thanks.
> 
> Best regards
> 
> Palit Support
> Palit Microsystem Ltd.*


Yep


----------



## J7SC

KedarWolf said:


> VBIOS updates for Resizable Bar supposed to come the end of March.


...tx for that and also for the reddit / 470.05 tip


----------



## sultanofswing

Just waiting on 2 pots and nvlink bridge.


----------



## KedarWolf

sultanofswing said:


> View attachment 2483857
> View attachment 2483858
> Just waiting on 2 pots and nvlink bridge.


Please ban him for getting ALL the Kingpins!!


----------



## yzonker

When one KP isn't enough....


----------



## J7SC

@sultanofswing ...but do they play Crysis ?

sorry


----------



## gfunkernaught

Got my first game crash in a while. I was playing Titanfall 2 [email protected] with PL 50%, +150 core, +1200 vram; peak temps 45c core, 58c vram. Except this time it wasn't the usual nvlddmkm but nvwgf2umx.dll. Thought that was interesting.


----------



## Spiriva

KedarWolf said:


> drivers


470.05 - 703.21 MB file on MEGA

470.14 - 689.13 MB file on MEGA


----------



## Zogge

ASUS X299 boards with Resizable BAR support BIOSes are available now (post 47):

"Will resizable bar (smart access memory alternative) be enabled on X299", Page 5 (rog.asus.com)


----------



## Pepillo

Zogge said:


> ASUS X299 boards with Resizable BAR support BIOSes are available now (post 47):
> 
> "Will resizable bar (smart access memory alternative) be enabled on X299", Page 5 (rog.asus.com)


Bravo Asus, X299 Deluxe Bios 3403, yesterday:


----------



## Nizzen

Pepillo said:


> Bravo Asus, X299 Deluxe Bios 3403, yesterday:
> 
> View attachment 2483890


Apex x299 too


----------



## ChrysalisH

Hey, new girl to the site. Figured I'd join as I'm mainly looking at the shunt mod forums, but I have a 3950X and a 3090 FE which has already had a few mods done, thanks to a DOA fan.


----------



## xkm1948

What is the consensus on Hardware Accelerated GPU scheduling? Do you guys have it enabled or disabled?


----------



## gfunkernaught

@xkm1948 I have it enabled; I haven't found an issue that was fixed by disabling it. I believe it allows lower-level access to the GPU from the OS. Idk if this causes/helps it, but when I use ultra low latency with vsync on, it actually makes a difference in mouse input lag, a good one. Some games handle it differently though. Cyberpunk is the current king of input lag.


----------



## J7SC

gfunkernaught said:


> @xkm1948 I have it enabled; I haven't found an issue that was fixed by disabling it. I believe it allows lower-level access to the GPU from the OS. Idk if this causes/helps it, but when I use ultra low latency with vsync on, it actually makes a difference in mouse input lag, a good one. Some games handle it differently though. Cyberpunk is the current king of input lag.


...I've got to try that on my 3090 w/ Cyberpunk 2077... then again, I hadn't really noticed a lag problem before, maybe because I like to take it easy in Night City and enjoy the RTX view


----------



## gfunkernaught

J7SC said:


> ...I've got to try that on my 3090 w/ Cyberpunk 2077... then again, I hadn't really noticed a lag problem before, maybe because I like to take it easy in Night City and enjoy the RTX view


I do too. But when I get into gun fights, that's when I notice input lag. Titanfall 2, I leave vsync off and run at the game's max 144fps, and barely notice tearing on my 60hz tv. That's a twitchy game which is my favorite type of shooter. #quakeforever


----------



## des2k...

EarlZ said:


> Wanted to change the thermal pads on my MSI Suprim X 3090 however the GDDR6X uses a combination of 1.5mm and 2.5mm pads, only gelid/thermalright/thermalgrizzly and arctic thermal pads are avaialble to me, none of them produce 2.5mm pads. I am looking at two situations here
> 
> A.) Use a 0.5mm thicker pads which means I will now need 2mm and 3mm pads for the GDDR6X my only concern is that with this added height, I am not sure if the back plate has enough mounting pressure to squish that addition 0.5mm and wont affect the 3.5mm VRM pads. The VRM area has 2-3 screws close to them so I would assume they should not be affected by the increased height for the GDDR6X
> 
> B.) Gelid sells 1.5mm pads so I only need to stack a 2mm + 0.5mm pads
> 
> What do you guys think?
> 
> View attachment 2483736


The Thermalright Odyssey pads can be stacked. Once stacked, you won't be able to separate them.

I stacked 1mm pads to make 2mm and they fused. This was for a section of the EK backplate. They still got the backplate very hot vs the stock pads.
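Since nobody here sells a 2.5mm pad, the stacking options boil down to a small search over the thicknesses you can actually buy. A quick sketch of that arithmetic (pure illustration; the thickness list is just the sizes mentioned in this thread, not a product catalogue):

```python
from itertools import combinations_with_replacement

def stacks_for(target_mm, available_mm, max_layers=3):
    """All multisets of available pad thicknesses that sum to target_mm."""
    found = []
    for layers in range(1, max_layers + 1):
        for combo in combinations_with_replacement(available_mm, layers):
            if abs(sum(combo) - target_mm) < 1e-9:
                found.append(combo)
    return found

# Thicknesses mentioned in the thread (illustration only)
available = [0.5, 1.0, 1.5, 2.0]

print(stacks_for(2.5, available))  # ways to replace the stock 2.5mm GDDR6X pads
print(stacks_for(3.0, available))  # option A: going 0.5mm thicker
```

For 2.5mm this finds 0.5+2.0, 1.0+1.5, 0.5+0.5+1.5 and 0.5+1.0+1.0; fewer layers generally means fewer fused interfaces, matching the advice above.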


----------



## J7SC

gfunkernaught said:


> I do too. But when I get into gun fights, that's when I notice input lag. Titanfall 2, I leave vsync off and run at the game's max 144fps, and barely notice tearing on my 60hz tv. That's a twitchy game which is my favorite type of shooter. #quakeforever


...haven't even got around to installing Quake II RTX yet, but I had a chance today to look at some CP / 4K / RTX Psycho on the 3090 for a bit  ...later on, I literally jumped off that bridge in the center (easy to get down, not so easy to get back up...). I got that scene saved, so I can jump again, and again...


----------



## gfunkernaught

So I got my block off my 3090 and I'm about to repaste and repad. I'm wondering if I can use 0.5mm for the front of the card instead of the 1mm pads. If I do this will it cause bad contact and cause the card to bend or will it be fine? Problem is I ordered three packs of 1 mm gelids and 3 packs of 0.5 mm Odyssey pads. I got the block off now and don't want to order more pads lol. For the backplate I'm going to leave the stock ek pads, and use these 5w/mk samples I got for the back of the backplate and place those heatsinks on top.


----------



## des2k...

gfunkernaught said:


> So I got my block off my 3090 and I'm about to repaste and repad. I'm wondering if I can use 0.5mm for the front of the card instead of the 1mm pads. If I do this will it cause bad contact and cause the card to bend or will it be fine? Problem is I ordered three packs of 1 mm gelids and 3 packs of 0.5 mm Odyssey pads. I got the block off now and don't want to order more pads lol. For the backplate I'm going to leave the stock ek pads, and use these 5w/mk samples I got for the back of the backplate and place those heatsinks on top.


Just stack the Odyssey pads; 0.5mm pads won't work if it's designed for 1mm pads! 0.5mm is a lot


----------



## KedarWolf

The 470.14 drivers do really well in Port Royal.

I got 14,918 on air with my Strix OC at 2130 core at 1.1v and +904 memory.

For some reason, of all the BIOSes I've tried so far, the F5 Suprim X seems to throttle the least.

Here is a direct download link to the Nvidia driver, a .exe install file:

File on MEGA (mega.nz)


----------



## gfunkernaught

des2k... said:


> Just stack the Odyssey pads; 0.5mm pads won't work if it's designed for 1mm pads! 0.5mm is a lot


Decided to stick with the 1mm since that's what the block was designed for. Putting some Ectotherm on all the components before putting the pads on. 😁


----------



## jura11

Ditch Ectotherm for Kryonaut or NT-H1; I hate Ectotherm 

Hope this helps 

Thanks, Jura


----------



## EarlZ

des2k... said:


> The Thermalright Odyssey pads can be stacked. Once stacked, you won't be able to separate them.
> 
> I stacked 1mm pads to make 2mm and they fused. This was for a section of the EK backplate. They still got the backplate very hot vs the stock pads.


Thanks for this information. A redditor posted that he used EC360-Silver thermal pads, which have a hardness rating of Shore 20 and which he claims are very compressible, like the stock pads. I may get those instead as they are priced similarly to the Gelid/Thermalright, but if that fails then stacking pads is my only option.


----------



## jura11

EarlZ said:


> Thanks for this information. A redditor posted that he used EC360-Silver thermal pads, which have a hardness rating of Shore 20 and which he claims are very compressible, like the stock pads. I may get those instead as they are priced similarly to the Gelid/Thermalright, but if that fails then stacking pads is my only option.


Hi there 

I'm not sure stacking pads will work as intended, or whether you'd see any drop in VRAM or VRM temperatures. Thermalright or Gelid are not as expensive as Fujipoly or Alphacool pads there.

I'm waiting on Thermalright Odyssey and Gelid pads for my RTX 3090 and will see whether they make any difference in VRAM temperatures or not.

In another thread you posted pictures of the thickness of the thermal pads, and I don't know of any company making thicker pads like 3mm or more; most pads max out at 2mm or 2.5mm, but no 3mm 

Hope this helps 

Thanks, Jura


----------



## des2k...

jura11 said:


> Hi there
> 
> I'm not sure stacking pads will work as intended, or whether you'd see any drop in VRAM or VRM temperatures. Thermalright or Gelid are not as expensive as Fujipoly or Alphacool pads there.
> 
> I'm waiting on Thermalright Odyssey and Gelid pads for my RTX 3090 and will see whether they make any difference in VRAM temperatures or not.
> 
> In another thread you posted pictures of the thickness of the thermal pads, and I don't know of any company making thicker pads like 3mm or more; most pads max out at 2mm or 2.5mm, but no 3mm
> 
> Hope this helps
> 
> Thanks, Jura


Odyssey pads are 54c mem with fans on the EK backplate.
EK pads are 70c. For OC, it's +700 vs +1600 on mem.

For VRM, with Odyssey pads I'm passing GT2 with a heavy undervolt. The stock pads require super high voltage for any OC.

So premium pads are very much worth it.

For the chokes, I kept the EK pads; they remove the noise. Odyssey on the chokes is way louder.


----------



## Beagle Box

KedarWolf said:


> The 470.14 drivers do really well in Port Royal.
> 
> I got 14,918 on air with my Strix OC at 2130 core at 1.1v and +904 memory.
> 
> For some reason, of all the BIOSes I've tried so far, the F5 Suprim X seems to throttle the least.
> 
> Here is a direct download link to the Nvidia driver, a .exe install file:
> 
> File on MEGA (mega.nz)


The 470.14 driver made for a 39-point gain in my max Port Royal score earlier today. 
Too bad it's not an official driver, so that score doesn't officially count.
I, too, prefer the Suprim BIOS. If only they would make a 600W version... I'd be so happy.


----------



## jomama22

Beagle Box said:


> The 470.14 driver made for a 39-point gain in my max Port Royal score earlier today.
> Too bad it's not an official driver, so that score doesn't officially count.
> I, too, prefer the Suprim BIOS. If only they would make a 600W version... I'd be so happy.


Could always shunt....but I assume you know that already.


----------



## jura11

des2k... said:


> Odyssey pads are 54c mem with fans on the EK backplate.
> EK pads are 70c. For OC, it's +700 vs +1600 on mem.


I think the Bykski thermal pads are a bit better than EKWB's. I hate the EKWB pads because they tear so easily; they're cheap, but I hate them hahaha 

Will do tests later when my pads arrive; will test both the Gelid and Thermalright Odyssey 

I only received 1mm Thermalright Odyssey pads, I ordered the wrong size hahaha 

Hope this helps 

Thanks, Jura


----------



## mirkendargen

jura11 said:


> I think the Bykski thermal pads are a bit better than EKWB's. I hate the EKWB pads because they tear so easily; they're cheap, but I hate them hahaha
> 
> Will do tests later when my pads arrive; will test both the Gelid and Thermalright Odyssey
> 
> I only received 1mm Thermalright Odyssey pads, I ordered the wrong size hahaha
> 
> Hope this helps
> 
> Thanks, Jura


The Bykski ones that came with my block were super oily when I took my block off to clean it. The oil wasn't conductive or harming anything, but beware: there's enough of it to run down the card from the memory area.


----------



## jura11

mirkendargen said:


> The Bykski ones that came with my block were super oily when I took my block off to clean it. The oil wasn't conductive or harming anything, but beware: there's enough of it to run down the card from the memory area.


I haven't taken my Bykski waterblocks apart yet, so I can't say how they look; I'll take them apart when my Thermalright and Gelid pads are here.

Hope this helps

Thanks, Jura


----------



## Beagle Box

jomama22 said:


> Could always shunt....but I assume you know that already.


Yeah. But no need, really. My CPU/GPU combo works great as is. 
I found the 470.14 driver makes for less difference between BIOSes for some reason. 
I think it's more efficient somehow. I hope they make it, or something similar, an official driver.


----------



## des2k...

Beagle Box said:


> Yeah. But no need, really. My CPU/GPU combo works great as is.
> I found the 470.14 driver makes for less difference between BIOSes for some reason.
> I think it's more efficient somehow. I hope they make it, or something similar, an official driver.


Does it increase fps in games ?


----------



## Beagle Box

des2k... said:


> Does it increase fps in games ?


Seems to run about the same but maybe a bit cooler. Suprim BIOS + 470.14 runs very smoothly in Alien Isolation. I don't usually have an fps monitor running while playing.


----------



## gfunkernaught

So after a proper repaste with LM and a remount with new pads + Ectotherm, my delta seems to be between 10.2-10.5c. Not bad.


----------



## gfunkernaught

A quick 10 or so minute run of quake 2 rtx at 600w pushed a 12.3-12.5c delta.


----------



## des2k...

gfunkernaught said:


> A quick 10 or so minute run of quake 2 rtx at 600w pushed a 12.3-12.5c delta.


What was the delta with regular paste?
PR at 470w is a 15c delta (Zotac, EK block);
Quake II RTX at 600w+ is 20c+.

Noctua NT-H2 paste, very small dots, about 14 of them.

The block does reach the standoffs before the core, but the paste still gets compressed.

The stock cooler gets perfect 100% die paste compression because its standoffs go through the holes. With a waterblock the card rests on them, and I really don't understand why waterblocks use this garbage type of mounting.

I could try liquid metal and sand down the mounting standoffs. Not sure if it's worth the effort since I don't really game past ~490w.
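Deltas at different wattages aren't directly comparable, so one way to judge a mount is to normalize to degrees per watt, which stays roughly constant for a given block and mount. A minimal sketch of that arithmetic, using the delta/wattage pairs from this post:

```python
def c_per_watt(delta_c, power_w):
    """Core-to-water delta normalized by board power: degC per watt."""
    return delta_c / power_w

# Numbers from the post above: 15c delta at 470w (Port Royal),
# 20c delta at 600w (Quake II RTX), same card and block.
print(round(c_per_watt(15, 470), 4))  # 0.0319
print(round(c_per_watt(20, 600), 4))  # 0.0333
```

If the two figures come out close, as they do here, the mount is behaving consistently and the larger delta is just the extra wattage, not a worse contact.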


----------



## sultanofswing

The delta on my Hydro Copper seems to be about 14.5c at [email protected] during Port Royal. I ordered an Optimus block for it, so hopefully that helps out a bit.


----------



## des2k...

sultanofswing said:


> The delta on my Hydro Copper seems to be about 14.5c at [email protected] during Port Royal. I ordered an Optimus block for it, so hopefully that helps out a bit.


Well, Optimus is around a 7c delta, one of the best.
Then there are the Kingpin and Strix blocks, then reference.
I have doubts any block other than the FTW3 one will still release. They are so slow...

2190 in PR is 540w or so?


----------



## Thanh Nguyen

Bykski









Optimus


----------



## sultanofswing

des2k... said:


> Well, Optimus is around a 7c delta, one of the best.
> Then there are the Kingpin and Strix blocks, then reference.
> I have doubts any block other than the FTW3 one will still release. They are so slow...
> 
> 2190 in PR is 540w or so?


496w


----------



## bmagnien

What's the wattage cap on this Suprim BIOS you're mentioning? The only ones I see on TPU are the 450w Suprim X BIOSes. Are there any other >500w BIOSes besides the KP520 and the 1000W XOC?


----------



## KedarWolf

After much testing, including GPU Scaling, Sharpening with Ignore Film Grain, Shader Cache off and on, and the filtering options off and on, the below are the best Nvidia settings for Port Royal.

I'm using the 470.14 drivers. This is on my ASUS Strix OC under air, 2145 at 1.1v, +904 memory.

On a side note, I'm selling my 1080 Ti locally tomorrow and ordering an EKWB block and backplate for my Strix OC 3090.

Plan to upgrade to the active backplate for it when they release it.


----------



## Beagle Box

bmagnien said:


> What's the wattage cap on this Suprim BIOS you're mentioning? The only ones I see on TPU are the 450w Suprim X BIOSes. Are there any other >500w BIOSes besides the KP520 and the 1000W XOC?


Yes, there's only the one 450W Suprim BIOS. 
As far as I know, there's the hacked '1000W' ASUS BIOS, which doesn't seem to actually deliver more than ~490W, the EVGA KP520W, and the EVGA KP1000W.


----------



## gfunkernaught

des2k... said:


> What was the delta with regular paste ?
> 
> I could try liquid metal and sand off the mounting standoffs. Not sure if it's worth the effort since I don't really game past ~490w


With the IC Diamond and the stock EK pads it was 13 (and change) [email protected] and [email protected] I grew tired of and disappointed in 45c @ 500w. Before I repasted, I tried running the EVGA 520w BIOS again just to see how much performance I'd lose; I didn't see any loss, BUT the temps were the same, so I figured enough of this, lol, and tried LM again, this time doing it right. The first time I hadn't applied it to the block.

I used the Gelid Extreme pads for the VRMs and VRAM and applied Ectotherm to all of those components, like a small drop each. I have a bunch of Arctic Silver tubes but also had those Ectotherm tubes that came with previous EK products I bought, so I figured why not? I kept the stock EK pads for the backplate where it contacts the back of the card and applied Ectotherm to the VRAM there too.

For the back of the card, on top of the backplate, I cut a large piece of the 2mm 5w/mk sample I got, then placed those M.2 heatsinks horizontally in the middle of the card, covering the GPU, VRAM, and just barely reaching the VRMs. I figured they'd soak the heat towards the middle anyway. While playing Quake 2 RTX at 600w, the VRAM got up to 68c with a +1200 offset. I'm not using the SP120 on there anymore because it didn't fit right, so I went back to the Corsair RAM cooler. Maybe I could find a static-pressure 80mm fan or a thin static-pressure 120mm fan.

After all that though, a 10.5c delta is pretty damn good IMO. Now I can game without the fans spinning up too loud, with summer being upon us.


----------



## gfunkernaught

Thanh Nguyen said:


> Bykski
> View attachment 2483999
> 
> 
> Optimus
> View attachment 2484000


Dayamn dude! That optimus block is no joke. Too bad they don't make one for the Trio.


----------



## gfunkernaught

Just ordered a noctua 120x14 fan to fit between my gpu and RAM to cool the back. 14mm/.55" is the exact amount of space I have. #lovetheinternet

https://www.amazon.com/dp/B07ZP6KKKZ?psc=1&smid=A1Z5H6ZGWCMTNX&ref_=chk_typ_imgToDp


----------



## GRABibus

jura11 said:


> Ditch Ectotherm for Kryonaut or NT-H1; I hate Ectotherm
> 
> Hope this helps
> 
> Thanks, Jura


NT-H2 even better than NT-H1 !


----------



## bmagnien

KedarWolf said:


> After much testing, including GPU Scaling, Sharpening with Ignore Film Grain, Shader Cache off and on, and the filtering options off and on, the below are the best Nvidia settings for Port Royal.
> 
> I'm using the 470.14 drivers. This is on my ASUS Strix OC under air, 2145 at 1.1v, +904 memory.
> 
> On a side note, I'm selling my 1080 Ti locally tomorrow and ordering an EKWB block and backplate for my Strix OC 3090.
> 
> Plan to upgrade to the active backplate for it when they release it.


Nice!

Did you test with 'Game Mode' on/off?
Did you test with Hardware Accelerated GPU on/off?
What windows Power Mode are you using?


----------



## GRABibus

KedarWolf said:


> For some reason, of all the BIOSes I've tried so far, the F5 Suprim X seems to throttle the least.


You mean this one? MSI RTX 3090 VBIOS

Is it better for you than the Kingpin 520W?
In games, did you test the Suprim X F5?

I get good sustained frequencies in Cold War, for example, when putting a +115MHz core offset on my Strix OC with the 520W Kingpin BIOS and setting the [email protected],05V; this makes me throttle, sustained, at [email protected],05V at 21 degrees ambient.


----------



## des2k...

GRABibus said:


> NT-H2 even better than NT-H1 !


I never used NT-H1.

For my GPU,
NT-H2 seems to be about 2c cooler vs Kryonaut.
The weird part is that Kryonaut seems to have better mem OC (around +100 more) at the edge of the die (mem controller).

For my 3900X, the IHS & EK block are lapped and I'm using thick double washers to get more pressure from the EK springs; there, Kryonaut is better vs NT-H2.

Now, I keep reading that IC Diamond gets very good temperatures under low pressure, i.e. with surfaces that aren't a perfect match.


----------



## KedarWolf

bmagnien said:


> Nice!
> 
> Did you test with 'Game Mode' on/off?
> Did you test with Hardware Accelerated GPU on/off?
> What windows Power Mode are you using?


Game Mode and Hardware Accelerated GPU both on is better.

I'm using this power plan, the benchmark after I switched to it:

AMD Ryzen™ Ultimate Performance v5-Backup-2021.23.1-08.46.46.pow (drive.google.com)


----------



## KedarWolf

GRABibus said:


> You mean this one? MSI RTX 3090 VBIOS
> 
> Is it better for you than the Kingpin 520W?
> In games, did you test the Suprim X F5?
> 
> I get good sustained frequencies in Cold War, for example, when putting a +115MHz core offset on my Strix OC with the 520W Kingpin BIOS and setting the [email protected],05V; this makes me throttle, sustained, at [email protected],05V at 21 degrees ambient.


Yes, that one. But I'm running it because I'm still on air until I get the waterblock and backplate for my Strix OC.

However, in my tests it throttled less than any Kingpin BIOS I tried, with higher sustained clocks, and a few others have seen the same thing.

I tested in Shadow of The Tomb Raider and Cyberpunk as well, Suprim X seemed to be best.

However if you're on water, it remains to be seen which is best for you.


----------



## GRABibus

KedarWolf said:


> Yes, that one. But I'm running it because I'm still on air until I get the waterblock and backplate for my Strix OC.
> 
> However, in my tests, it throttled less than any Kingpin BIOS I tested, higher sustained clocks, and a few others have seen the same thing.
> 
> I tested in Shadow of The Tomb Raider and Cyberpunk as well, Suprim X seemed to be best.
> 
> However if you're on water, it remains to be seen which is best for you.


Thanks.
I am on air also and Kingpin 520W bios.
I will give a try to this F5 SuprimX.


----------



## KedarWolf

GRABibus said:


> Thanks.
> I am on air also and Kingpin 520W bios.
> I will give a try to this F5 SuprimX.


Broke 15000 for the first time. I put my fans at 100%. 

Edit: Ordering my EKWB waterblock and backplate from Performance PCs today.


----------



## GRABibus

KedarWolf said:


> Broke 15000 for the first time. I put my fans at 100%.
> 
> Edit: Ordering my EKWB waterblock and backplate from Performance PCs today.
> 
> View attachment 2484051


what are your V/F curve settings and offset values ?
Still F5 Suprim bios ?


----------



## Pepillo

Every day I'm happier with the MSI Trio with the Bykski block (520w Kingpin BIOS); I can pass benches like Port Royal at 2,190 MHz (15,166 points):

I scored 15 166 in Port Royal

Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32450 MB, 64-bit Windows 10

www.3dmark.com

And for a 24/7 gaming profile I have it rock stable at 2,100 MHz at 1.0v and 20,000 MHz VRAM:

I was very fortunate to get it on launch day for less than 1,600 euros, given the current situation of prices and availability. Waiting for the BAR update next week; let's see if it's better (my Asus X299 has had the BIOS with BAR since yesterday).


----------



## KedarWolf

GRABibus said:


> what are your V/F curve settings and offset values ?
> Still F5 Suprim bios ?


Yeah, same BIOS, no VF Curve, lowered memory to +850, core slider to +139, +904 memory was making the core unstable.


----------



## jomama22

KedarWolf said:


> Yeah, same BIOS, no VF Curve, lowered memory to +850, core slider to +139, +904 memory was making the core unstable.
> 
> View attachment 2484067
> 
> View attachment 2484068


Not just for you, but for everyone posting these:

You should really be posting what the run's average clock speed was (listed in the details section of the run). Obviously this in and of itself isn't all that great, since effective clock is really what matters, but it's much better than merely posting "+130" or whatever you set in AB. An offset number really doesn't mean much.
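If you log a run with a monitoring tool, the same average can be computed from the log rather than read off the 3DMark details page. A minimal sketch, assuming a HWiNFO-style CSV export; the column names here are hypothetical, so match them to whatever your log's header actually says:

```python
import csv
from statistics import mean

def avg_clocks(path):
    """Average requested vs effective core clock over a logged run."""
    requested, effective = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Column names are hypothetical; adjust to your log's header.
            requested.append(float(row["GPU Clock [MHz]"]))
            effective.append(float(row["GPU Effective Clock [MHz]"]))
    return mean(requested), mean(effective)

# avg_clocks("port_royal_run.csv")  # returns (avg requested MHz, avg effective MHz)
```

Trim the log to just the benchmark window first, or idle samples will drag the averages down.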


----------



## sultanofswing

This is my FTW3 Hydro Copper on the 1KW BIOS. Ambient temp at 23c with the water temp at 24c.
On the normal 500w BIOS I can get 15.2k using the voltage curve before I start to get power limited. This card does not suffer from the unequal power balancing and can pull 496w before triggering the power limit.
I run it daily on the KINGPIN 520W BIOS.

I scored 15 628 in Port Royal

Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## KedarWolf

jomama22 said:


> Not just for you, but for everyone posting these:
> 
> You should really be posting what the run's average clock speed was (listed in the details section of the run). Obviously this in and of itself isn't all that great, since effective clock is really what matters, but it's much better than merely posting "+130" or whatever you set in AB. An offset number really doesn't mean much.


Where exactly is the average clock speed? There is a graph, but it doesn't show much.


----------



## GRABibus

sultanofswing said:


> This is my FTW3 Hydro Copper on the 1KW BIOS. Ambient temp at 23c with the water temp at 24c.
> On the normal 500w BIOS I can get 15.2k using the voltage curve before I start to get power limited. This card does not suffer from the unequal power balancing and can pull 496w before triggering the power limit.
> I run it daily on the KINGPIN 520W BIOS.
> 
> I scored 15 628 in Port Royal
> 
> Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> www.3dmark.com


insane 👍


----------



## jomama22

KedarWolf said:


> Where exactly is the average clock speed? There is a graph, but it doesn't show much.


Once the run is completed, it should be under one of the drop-down tabs, labeled "details" or something to that effect.

All Port Royal results that you submit online will have it in the listing by default. You can still submit scores online even if the driver isn't approved; they just won't be added to the HOF. You can always share that link as well.


----------



## KedarWolf

jomama22 said:


> Once the run is completed, it should be under one of the drop-down tabs, labeled "details" or something to that effect.
> 
> All Port Royal results that you submit online will have it in the listing by default. You can still submit scores online even if the driver isn't approved; they just won't be added to the HOF. You can always share that link as well.


Can't wait to see what I can do once I get it water-cooled. 

I ordered the waterblock and backplate today. 

It's coming from the USA to Canada by FedEx and that's usually really quick, plus I paid a few dollars more for rush shipping. 









I scored 15 091 in Port Royal

AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com





2115 average clocks on air with the F5 Suprim X BIOS, not too shabby.


----------



## bmagnien

jomama22 said:


> Once the run is completed, it should be under one of the drop down tabs that's labeled "details" or somthing to that affair.
> 
> All port royal postings that you submit online will have the in the listing by default. You can still submit scores online even if the driver isn't approved, it just won't add them to the hof. You can always share that link as well.


The drop-downs look like 3 little dots in the top right corner of each section in the results. Your GPU clock will be the max you set it to target in Afterburner, which it typically never actually reaches during the run; your average will be lower due to temps causing bin drops, and for me the max effective clock shown in HWiNFO is typically somewhere in between.


----------



## bmagnien

Pepillo said:


> Every day I'm happier with the MSI Trio with the Bykski block (520w Kingpin BIOS); I can pass benches like Port Royal at 2,190 MHz (15,166 points):
> 
> I scored 15 166 in Port Royal
> 
> Intel Core i9-7900X Processor, NVIDIA GeForce RTX 3090 x 1, 32450 MB, 64-bit Windows 10
> 
> www.3dmark.com
> 
> And for a 24/7 gaming profile I have it rock stable at 2,100 MHz at 1.0v and 20,000 MHz VRAM:
> 
> I was very fortunate to get it on launch day for less than 1,600 euros, given the current situation of prices and availability. Waiting for the BAR update next week; let's see if it's better (my Asus X299 has had the BIOS with BAR since yesterday).


lol this afterburner skin is awesomely ridiculous. shouldn't be allowed to run it without an FE tho haha


----------



## gfunkernaught

Has anyone else here noticed no real gaming performance gain going from 2000mhz to 2100mhz effective? I just played through some Metro Exodus and saw no difference in fps between those two clocks. Temps were about the same too, PL at 50% but lower clocks and voltages.


----------



## jura11

des2k... said:


> I never used NT-H1.
> 
> For my GPU,
> NT-H2 seems to be about 2c cooler vs Kryonaut.
> The weird part is that Kryonaut seems to have better mem OC (around +100 more) at the edge of the die (mem controller).
> 
> For my 3900X, the IHS & EK block are lapped and I'm using thick double washers to get more pressure from the EK springs; there, Kryonaut is better vs NT-H2.
> 
> Now, I keep reading that IC Diamond gets very good temperatures under low pressure, i.e. with surfaces that aren't a perfect match.


Hi there 

With NT-H1 I got slightly better temperatures on older-generation GPUs like the GTX 1080 Ti or RTX 2080 Ti than with Kryonaut; on CPUs I tried Kryonaut, NT-H1 and NT-H2, and temperatures were almost the same 

On the current generation, one of my RTX 3090s is running NT-H1 and the other is running Kryonaut, and the difference between the two GPUs' temperatures in rendering is almost 0-1°C 

I haven't yet tried ZF-EX or TFX on GPUs or CPUs; will do tests later when my thermal pads are here

I'll probably never use LM on GPUs, but that's me 

Hope this helps 

Thanks, Jura


----------



## J7SC

gfunkernaught said:


> Has anyone else here noticed no real gaming performance gain going from 2000mhz to 2100mhz effective? I just played through some Metro Exodus and saw no difference in fps between those two clocks. Temps were about the same too, PL at 50% but lower clocks and voltages.


...are you playing at 4K? That's probably the only resolution where it would make a bit of a difference, given the strength of the 3090 out of the box... I play 4K exclusively on mine and just run games/sims at 2130 as the default; I only go higher for benches


----------



## gfunkernaught

J7SC said:


> ...are you playing at 4K? That's probably the only resolution where it would make a bit of a difference, given the strength of the 3090 out of the box... I play 4K exclusively on mine and just run games/sims at 2130 as the default; I only go higher for benches


Of course... maybe it was just Metro. Sucks that I can't run 2100-2115mhz stable without keeping the PL at 60%; otherwise PL throttling moves the voltage around, causing instability.


----------



## des2k...

gfunkernaught said:


> Of course... maybe it was just Metro. Sucks that I can't run 2100-2115mhz stable without keeping the PL at 60%; otherwise PL throttling moves the voltage around, causing instability.


I think the sweet spot is keeping the effective freq at 2100 around 1v with a high mem OC. After 2100, each step adds at most 1fps and needs a lot of voltage. That's on a 2x8-pin reference card; it may be different with 3x8-pin.

Last I checked, 2160 and 2145 give the same fps in Control, and dropping to 2115 I lose 1fps. I think you might get better scaling with RTX off.


----------



## J7SC

gfunkernaught said:


> Of course... maybe it was just Metro. Sucks that I can't run 2100-2115mhz stable without keeping the PL at 60%; otherwise PL throttling moves the voltage around, causing instability.


The heftiest challenges in what I play are CP2077 and FS2020... It may well be that the clocks move around a bit, but I usually check with in-game fps tools, like top-right below, across similar scenery... but I really doubt whether 2070 MHz vs 2145 MHz would make enough of a difference in daily gaming on the 3090 to worry about.


----------



## SoldierRBT

Working on breaking 15K under 400W. I may need a waterblock to get it.
3090 KPE 0.931v 2130MHz +1200 Mem 410W Max temp: 47C


















I scored 14 999 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## sultanofswing

SoldierRBT said:


> Working on breaking 15K under 400W. I may need a waterblock to get it.
> 3090 KPE 0.931v 2130MHz +1200 Mem 410W Max temp: 47C
> 
> View attachment 2484156
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 999 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Looks like a good bin. Did you do anything in the classy tool?


----------



## SoldierRBT

sultanofswing said:


> Looks like a good bin. Did you do anything in the classy tool?


Yes, used the classy tool to lower voltages as much as possible without affecting the score. Memory is undervolted to 1.325v from 1.375v stock to lower wattage. I was able to get 15K at 426W. 400W might be possible with a waterblock. Best I was able to get was 15728 with a 49C avg temp.









I scored 15 728 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## bmagnien

@KedarWolf thanks for the nvidia settings you posted earlier and that custom .pow file. Eked out a few more points coupled with a +1750 mem OC, which I was able to get from Asus GPU Tweak to go above AB's +1500 max. This is about the best I can hope for now that sub-ambient winter temps are gone in my area: I scored 15 509 in Port Royal

All in a SFF/MFF case too:

https://www.reddit.com/r/mffpc/comments/lah4tc


----------



## Falkentyne

Is this file ONLY for Galaxy cards or is it a universal firmware updater for BAR support (like the GTX displayport patch from years ago?)

The GALAX China site, 影驰科技 ("Galaxy Technology, dedicated to the production and sale of computer hardware!"), under the BIOS tab?


----------



## KedarWolf

Falkentyne said:


> Is this file ONLY for Galaxy cards or is it a universal firmware updater for BAR support (like the GTX displayport patch from years ago?)
> 
> The GALAX China site, 影驰科技 ("Galaxy Technology, dedicated to the production and sale of computer hardware!"), under the BIOS tab?


Says my card is unsupported for my Strix OC.

Someone would have to install it on a GALAX card and run the below and share it.



Code:


nvflash --save barenabled.rom


----------



## mirkendargen

KedarWolf said:


> Says my card is unsupported for my Strix OC.
> 
> Someone would have to install it on a GALAX card and run the below and share it.
> 
> 
> 
> Code:
> 
> 
> nvflash --save barenabled.rom


Tried it with the 1000W Galax XOC BIOS and it didn't work. I bet it would work with the standard HOF BIOS installed.


----------



## Hulk1988

Here is my new beauty: 3090 FE + the EKWB Quantum Vector FE. They are using a new approach to cool the backplate; you have to add thermal paste for the backplate, too. And it seems to be working very well. I cannot get higher than 60°C memory temperature.


----------



## J7SC

mirkendargen said:


> Tried it with the 1000W Galax XOC BIOS and it didn't work. I bet it would work with the standard HOF BIOS installed.


...is there any hard data yet on resizable BAR improvements in specific benchmarks, games for our 3090s, like a URL link ?


----------



## KedarWolf

J7SC said:


> ...is there any hard data yet on resizable BAR improvements in specific benchmarks, games for our 3090s, like a URL link ?











KFA2 RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com





Flashed this first, then the resizable bar BIOS .exe flashed just fine!!


----------



## Pepillo

KedarWolf said:


> KFA2 RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> Flashed this first, then the resizable bar BIOS .exe flashed just fine!!


Interesting. Any noticeable improvement?


----------



## ArcticZero

Suddenly getting code B2 on my mobo (load VGA BIOS). It crashed earlier and suddenly it's not posting. I don't exactly have a spare mobo at the moment to test. Has anyone got a solution to this?


----------



## KedarWolf

Pepillo said:


> Interesting. Any noticeable improvement?


On my tests with Ray Tracing maxed out, graphics settings maxed out and HDR on, at 3840x1080 (roughly the same pixel count as 1440p), zero improvement with resizable BAR on or off, with DLSS or without.

I tested Shadow of The Tomb Raider with DLSS off and Cyberpunk with DLSS on, nada, not at all better.

And it says in Nvidia Control Panel System Information resizable bar is enabled.

Plus with that HOF BIOS coming from the Suprim X my Port Royal has dropped from 15091 to 14575.


----------



## sultanofswing

The update everyone wants but nobody needed!


----------



## ArcticZero

ArcticZero said:


> Suddenly getting code B2 on my mobo (load VGA BIOS). It crashed earlier and suddenly it's not posting. I don't exactly have a spare mobo at the moment to test. Has anyone got a solution to this?


*UPDATE:* I put it in another slot and it works. Could having shunted the PCIe slot have killed it? I heard there would be other power limits on the card (MSVDD/NVVDD) so I assumed it would be safe enough to do. Though when this happened, all the card was doing was mining at ~320W, <70W from the slot according to HWiNFO, so I doubt it was that.

*UPDATE2:* I plugged it back on slot 1 and it works again?? I mean not complaining but I'm so confused.


----------



## gavros777

PowerK said:


> Yeah, they both are 3090s.
> At a quick glance, it seems that HOF is running about 4-5C cooler than Suprim.


Hello, is the VRM temp cooler too, since the Suprim X has a dedicated heatpipe for the VRM?
Is there any worthwhile difference between the basic HOF and the Limited besides the TGP and boost clock?


----------



## Beagle Box

KedarWolf said:


> On my tests with Ray Tracing maxed out, graphics settings maxed out and HDR on, at 3840x1080 (roughly the same pixel count as 1440p), zero improvement with resizable BAR on or off, with DLSS or without.
> 
> I tested Shadow of The Tomb Raider with DLSS off and Cyberpunk with DLSS on, nada, not at all better.
> 
> And it says in Nvidia Control Panel System Information resizable bar is enabled.
> 
> Plus with that HOF BIOS coming from the Suprim X my Port Royal has dropped from 15091 to 14575.


I read in a few places that you shouldn't expect much (if any) performance improvement if you're already running PCIe 4.0. 
Don't know if that's true. 
So folks should mention their PCIe version when discussing ReBAR performance gains/losses.


----------



## KedarWolf

Welp, I just got my Strix OC and I need to RMA it.

GPU 8-pin #3 is pulling zero load, doesn't move from 2.1W no matter what I run. I tried Port Royal, Cyberpunk with Ray Tracing maxed out, and the entire card doesn't pull more than 400W even with the 520W XOC BIOS and the Power Slider maxed out. 

Edit: Even with the 1000W BIOS only 8-pin #1 and 8-pin #2 showing power draw and not going over 380W total power draw.

I tried different cables, plugging them into different sockets on my power supply, nada.


----------



## ALSTER868

KedarWolf said:


> GPU 8-pin #3 is pulling zero load, doesn't move from 2.1W no matter what I run. I tried Port Royal, Cyberpunk with Ray Tracing maxed out, and the entire card doesn't pull more than 400W even with the 520W XOC BIOS and the Power Slider maxed out


Like this? I'm also on Strix with KPE 520 bios.
And how does it fare with stock Strix bios? Same issue?


----------



## MrTOOSHORT

KedarWolf said:


> Welp, I just got my Strix OC and I need to RMA it.
> 
> GPU 8-pin #3 is pulling zero load, doesn't move from 2.1W no matter what I run. I tried Port Royal, Cyberpunk with Ray Tracing maxed out, and the entire card doesn't pull more than 400W even with the 520W XOC BIOS and the Power Slider maxed out.
> 
> Edit: Even with the 1000W BIOS only 8-pin #1 and 8-pin #2 showing power draw and not going over 380W total power draw.
> 
> I tried different cables, plugging them into different sockets on my power supply, nada.



Sorry to hear that.

My son's Strix 3060 died during a gaming session last night. I already packed it up for RMA. The pci-E blinked a couple times, then monitors shut off. Tried it in another pc, same deal.


----------



## Beagle Box

KedarWolf said:


> Welp, I just got my Strix OC and I need to RMA it.
> 
> GPU 8-pin #3 is pulling zero load, doesn't move from 2.1W no matter what I run. I tried Port Royal, Cyberpunk with Ray Tracing maxed out, and the entire card doesn't pull more than 400W even with the 520W XOC BIOS and the Power Slider maxed out.
> 
> Edit: Even with the 1000W BIOS only 8-pin #1 and 8-pin #2 showing power draw and not going over 380W total power draw.
> 
> I tried different cables, plugging them into different sockets on my power supply, nada.


The KingPin BIOSs don't show 3rd 8-pin power draw. 
Try readings using the stock BIOS or the Suprim X - something that was coded for 3 connector GPUs.


----------



## Pepillo

GALAX and Gainward release GeForce RTX 3090/3080/3070 and 3060 Ti BIOSes with ResizableBAR support - VideoCardz.com


NVIDIA GeForce RTX 30 now (unofficially) support Resizable BAR First board partners have begun offering RTX 30 series BIOS updates with ResizableBAR support. Ever since AMD announced its Smart Access Memory technology NVIDIA graphics card owners were asking if NVIDIA is planning to support this...




videocardz.com


----------



## KedarWolf

Beagle Box said:


> The KingPin BIOSs don't show 3rd 8-pin power draw.
> Try readings using the stock BIOS or the Suprim X - something that was coded for 3 connector GPUs.


Yes, I flashed the stock BIOS, fixed it.


----------



## ALSTER868

Beagle Box said:


> The KingPin BIOSs don't show 3rd 8-pin power draw.


Yep, besides this I've got my 3rd 8-pin power connector constantly blinking red.
I'm wondering how safe it is long-term to run a different BIOS like this 520W one on the Strix? Could it somehow affect the card?


----------



## Beagle Box

ALSTER868 said:


> Yep, besides this I've got my 3rd 8-pin power connector constantly blinking red.
> I'm wondering how safe it is long-term to run a different BIOS like this 520W one on the Strix? Could it somehow affect the card?


That may be a different issue 
My Strix has cable power lights, but they only light up when the connectors aren't plugged in - hardware sensing, not software sensing, I think. 
What's your GPU?


----------



## ALSTER868

Beagle Box said:


> That may be a different issue
> My Strix has cable power lights, but they only light up when the connectors aren't plugged in - hardware sensing, not software sensing, I think.
> What's your GPU?


3090 Strix.
I thought the power connector light was blinking because it's somehow connected to the 520W BIOS, which isn't monitoring this 3rd 8-pin.


----------



## KedarWolf

Beagle Box said:


> The KingPin BIOSs don't show 3rd 8-pin power draw.
> Try readings using the stock BIOS or the Suprim X - something that was coded for 3 connector GPUs.


Stock BIOS shows power draw on the third connector but Suprim X doesn't, like the Kingpin.


----------



## Beagle Box

KedarWolf said:


> Stock BIOS shows power draw on the third connector but Suprim X doesn't, like the Kingpin.





KedarWolf said:


> Stock BIOS shows power draw on the third connector but Suprim X doesn't, like the Kingpin.


Okay. 
If it were me, I'd run Port Royal on both the stock BIOS and the KP BIOS with the same settings and compare the scores. 
It may be running fine, but the software just isn't reporting correctly using BIOS not made for it. 
That's not an RMA-worthy issue.


----------



## ALSTER868

I'm getting higher scores on the Kingpin BIOS than on the stock BIOS.
So the question remains whether it's safe to use this BIOS long-term.


----------



## Beagle Box

ALSTER868 said:


> 3090 Strix.
> I thought the power connector light was blinking because it's somehow connected to the 520W BIOS, which isn't monitoring this 3rd 8-pin.


I don't think so. 
I've been benching with the KP1K BIOS the last day or so and no blinking lights. 
Maybe shut down, swap your cables, power up and see which cable port led blinks?


----------



## KedarWolf

Beagle Box said:


> Okay.
> If it were me, I'd run Port Royal on both the stock BIOS and the KP BIOS with the same settings and compare the scores.
> It may be running fine, but the software just isn't reporting correctly using BIOS not made for it.
> That's not an RMA-worthy issue.


Yes, I'm not RMAing it. I thought I had a dead power connector but obviously don't.


----------



## KedarWolf

ALSTER868 said:


> I'm getting higher scores on kingpin bios than on stock bios.
> So the question remains whether it's safe to use this bios longterm.


I got my best scores on the Suprim X BIOS but I'm still on air until next week.


----------



## Beagle Box

KedarWolf said:


> I got my best scores on the Suprim X BIOS but I'm still on air until next week.


I like the Suprim BIOS and run it daily even though I'm water-cooled. 
KP1K for benching, though. 
The stock ASUS BIOSs are garbage.


----------



## Thanh Nguyen

Optimus block gets me 100 more points at ambient temp.








I scored 15 724 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## ALSTER868

Beagle Box said:


> Maybe shut down, swap your cables, power up and see which cable port led blinks?


No, that doesn't help. It's a known issue among Strix 3090 owners, often with Seasonic PSUs. Mine is the 1000W PX-1000 Platinum.






https://www.reddit.com/r/ASUS/comments/jidywl

Seems I have to just ignore this as my card is doing completely fine, though I'd like to be sure why this is happening to it.


----------



## Beagle Box

ALSTER868 said:


> No, that doesn't help. It's a known issue among Strix 3090 owners, often with Seasonic PSUs. Mine is the 1000W PX-1000 Platinum.
> 
> 
> 
> 
> 
> 
> https://www.reddit.com/r/ASUS/comments/jidywl
> 
> Seems I have to just ignore this as my card is doing completely fine, though I'd like to be sure why this is happening to it.


That sucks.
My PSU is an EVGA Supernova 1000 P2 Platinum. 
No blinking light issues with any BIOS. 
It's strange because the GPU is actually pulling power through that connector, so why does the PSU matter? 
Are you getting one red light or three?


----------



## ALSTER868

Beagle Box said:


> Are you getting one red light or three?


One. Some time ago even 3 of them were blinking.
Just installed GPU Tweak II and disabled power detect there, no blinking anymore. But the question remains open.


----------



## Beagle Box

ALSTER868 said:


> One. Some time ago even 3 of them were blinking.
> Just installed GPU Tweak II and disabled power detect there, no blinking anymore. But the question remains open.


Yeah. 
Are there power controls/limiters on the unmonitored connector that aren't functioning, and thus aren't preventing degradation?
No idea.


----------



## ALSTER868

Beagle Box said:


> preventing degradation?


A blinking LED means something is wrong with the power delivery on that connector. What exactly could degrade there, if anything?


----------



## Beagle Box

ALSTER868 said:


> A blinking LED means something is wrong with the power delivery on that connector. What exactly could degrade there, if anything?


I didn't mean the blinking light. 
I meant the fact that using a BIOS that doesn't recognize the connector is being powered may be a problem long-term.
It's why I only use the KP BIOS to bench.


----------



## gfunkernaught

J7SC said:


> heftiest challenges in what I play is CP2077 and FS2020...It may well be that the clocks move around a bit, but I usually check with in-game fps tools - like top-right below - across similar scenery...but I really doubt whether 2070 MHz vs 2145 Mhz would make enough of a difference in daily gaming with the 3090s to worry about it.
> 
> View attachment 2484152


Exactly. What bugs me (and it shouldn't lol) is that the temps are about the same running 2070 vs 2115 or 2130 in certain parts or areas in games. I gotta just turn that OSD off and play the game.


----------



## sultanofswing

I feel MSFS 2020 is one of the worst games to test clocks and temps, as there are just times (90% of the time) the sim doesn't even fully load up the GPU.


----------



## ALSTER868

Same freq profile on the Strix stock BIOS results in 200 points less than on the KPE 520 BIOS... too little power on the stock BIOS.


----------



## J7SC

sultanofswing said:


> I feel MSFS 2020 is one of the worst games to test clocks and temps, as there are just times (90% of the time) the sim doesn't even fully load up the GPU.


_'...90% of the time sim doesn't fully load up GPU (3090) ?_' ...that's certainly NOT my experience with two different systems at 4K Ultra / dense and a big rolling cache enabled. That said, FS2020 is 'weirder' than many other games as it loads one CPU thread particularly hard but also very much depends on network speed _and _latency to feed the GPU. I find that the ideal FS2020 4K / Ultra / dense setup is w/ cache enabled and the 'dev tool' fps and other counter either 'all green' or rapidly flipping back and forth between limited by 'Main Thread' and limited by 'GPU'. Both mean that there isn't one particular & constant bottleneck.


----------



## Beagle Box

ALSTER868 said:


> Same freq profile on the Strix stock BIOS results in 200 points less than on the KPE 520 BIOS... too little power on the stock BIOS.


How do those results compare to using Suprim X BIOS?


----------



## jomama22

Hulk1988 said:


> Here is my new beauty: 3090 FE + the EKWB Quantum Vector FE. They are using a new approach to cool the backplate; you have to add thermal paste for the backplate, too. And it seems to be working very well. I cannot get higher than 60°C memory temperature.
> 
> View attachment 2484184
> 
> 
> View attachment 2484185
> 
> 
> 
> View attachment 2484181


You were only pulling a max of 330W for whatever test that was. Best bet would be to max out the power limit and loop Time Spy Extreme graphics test 2 and see where you end up.

If you really want to stress it, download PhoenixMiner and run it from the command prompt with -bench.

Edit: I see your GPU-Z pic now. Strange there would be such a large discrepancy between HWiNFO and GPU-Z for power consumption and TDP%. But yeah, I would still run the test above and see what temps it nets you.


----------



## jomama22

ArcticZero said:


> *UPDATE:* I put it in another slot and it works. Could having shunted the PCIe slot have killed it? I heard there would be other power limits on the card (MSVDD/NVVDD) so I assumed it would be safe enough to do. Though when this happened, all the card was doing was mining at ~320W, <70W from the slot according to HWiNFO, so I doubt it was that.
> 
> *UPDATE2:* I plugged it back on slot 1 and it works again?? I mean not complaining but I'm so confused.


If you shunted the PCIe slot on the card and didn't change the multiplier in HWiNFO for the slot power, you are drawing more than 70W from it. What shunt do you have on there?
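For readers following the shunt-mod talk, here is a rough sketch of the arithmetic behind that HWiNFO multiplier. Stacking a second shunt in parallel halves the sense resistance, so the controller sees half the true current and reports roughly half the true power; the multiplier corrects the reading. The 0.5 mOhm values are the ones mentioned in the thread; the function is illustrative only, not a measurement tool.

```python
# Why the HWiNFO multiplier matters after a shunt-stack mod.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def true_power(reported_w, r_stock, r_stacked):
    """Estimate real power draw from the under-reported sensor value."""
    r_eff = parallel(r_stock, r_stacked)
    return reported_w * r_stock / r_eff

# 0.5 mOhm stacked on the stock 0.5 mOhm halves the resistance,
# so the reading needs a x2 correction:
real = true_power(70.0, 0.5e-3, 0.5e-3)  # a reported 70 W is really ~140 W
```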


----------



## ALSTER868

Beagle Box said:


> How do those results compare to using Suprim X BIOS?


idk. Haven't used Suprim X bios yet


----------



## jomama22

bmagnien said:


> @KedarWolf thanks for the nvidia settings you posted earlier and that custom .pow file. Eked out a few more points coupled with a +1750 mem OC, which I was able to get from Asus GPU Tweak to go above AB's +1500 max. This is about the best I can hope for now that sub-ambient winter temps are gone in my area: I scored 15 509 in Port Royal
> 
> All in a SFF/MFF case too:
> 
> https://www.reddit.com/r/mffpc/comments/lah4tc


All "in" a sff seems a bit of a stretch yeah? Lol


----------



## gfunkernaught

@J7SC @sultanofswing 
When testing clocks and overall BOARD OC stability and performance with Ampere, in my experience, raster doesn't show too much. Games that do both RT and raster really tell you how your card's OC is performing and whether or not your desired OC will yield good results in d2d gaming. I remember playing witcher 3 on my 2080 ti and it pinned the gpu at 99% and heated it up quite a bit, and I was able to get higher stable clocks but those same clocks would crash minutes into shadow of the tomb raider. I haven't played FS2020 but isn't it DX11? Titanfall 2 is DX11 and is not a good way to test my OC, even though the gpu usage is high and clocks are stable. I run it at 144fps which is the game's max fps and all the IQ maxed as well, except for supersampling which would run the game at or close to 8k! Control is another good test for 3090's OC.


----------



## gfunkernaught

highest indoor PR score I've gotten so far...








I scored 15 335 in Port Royal


Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## yzonker

gfunkernaught said:


> @J7SC @sultanofswing
> When testing clocks and overall BOARD OC stability and performance with Ampere, in my experience, raster doesn't show too much. Games that do both RT and raster really tell you how your card's OC is performing and whether or not your desired OC will yield good results in d2d gaming. I remember playing witcher 3 on my 2080 ti and it pinned the gpu at 99% and heated it up quite a bit, and I was able to get higher stable clocks but those same clocks would crash minutes into shadow of the tomb raider. I haven't played FS2020 but isn't it DX11? Titanfall 2 is DX11 and is not a good way to test my OC, even though the gpu usage is high and clocks are stable. I run it at 144fps which is the game's max fps and all the IQ maxed as well, except for supersampling which would run the game at or close to 8k! Control is another good test for 3090's OC.


Maybe it depends on the system and gpu, but I've found both RDR2 and FS2020 to be more sensitive to the gpu overclock than say CP2077.


----------



## J7SC

jomama22 said:


> All "in" a sff seems a bit of a stretch yeah? Lol


.. 'all in' as 'in the general vicinity' ?! I haven't bought a new 'traditional' case since 2013 or so...just TT Core and such to add up to 1800mm rad space, depending on the build. BTW, as posted before, with Covid-19, lots of places offering custom acrylic work now (think cashier acrylic barrier) at much lower prices...great to complete and protect a semi-open build



gfunkernaught said:


> @J7SC @sultanofswing
> When testing clocks and overall BOARD OC stability and performance with Ampere, in my experience, raster doesn't show too much. Games that do both RT and raster really tell you how your card's OC is performing and whether or not your desired OC will yield good results in d2d gaming. I remember playing witcher 3 on my 2080 ti and it pinned the gpu at 99% and heated it up quite a bit, and I was able to get higher stable clocks but those same clocks would crash minutes into shadow of the tomb raider. I haven't played FS2020 but isn't it DX11? Titanfall 2 is DX11 and is not a good way to test my OC, even though the gpu usage is high and clocks are stable. I run it at 144fps which is the game's max fps and all the IQ maxed as well, except for supersampling which would run the game at or close to 8k! Control is another good test for 3090's OC.


As mentioned earlier, CP2077 (RTX, DX12) is the other major torture instrument I use a lot ....but raster performance still matters, IMO, especially if you actually use it on a daily basis like I do. FYI, FS2020 at 8K 

source


----------



## ALSTER868

And my best indoors with open windows..









I scored 15 628 in Port Royal


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Just indoors with room temps









I scored 15 485 in Port Royal


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## gfunkernaught

yzonker said:


> Maybe it depends on the system and gpu, but I've found both RDR2 and FS2020 to be more sensitive to the gpu overclock than say CP2077.


Yes there is that too! So many variables...I remember running cyberpunk on my 2080 ti [email protected] fine but Control would crash.


----------



## gfunkernaught

ALSTER868 said:


> And my best indoors with open windows..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 628 in Port Royal
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Just indoors with room temps
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 485 in Port Royal
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


No vram OC? What settings did you use in the control panel? I def don't have a gold or great sample because the universe doesn't want me to have one. Hence all the work I had to put in (rads, LM, pads) just to get 15335 indoors with 21c ambient temp.


----------



## gfunkernaught

J7SC said:


> .. 'all in' as 'in the general vicinity' ?! I haven't bought a new 'traditional' case since 2013 or so...just TT Core and such to add up to 1800mm rad space, depending on the build. BTW, as posted before, with Covid-19, lots of places offering custom acrylic work now (think cashier acrylic barrier) at much lower prices...great to complete and protect a semi-open build
> 
> 
> 
> As mentioned earlier, CP2077 (RTX, DX12) is the other major torture instrument I use a lot ....but raster performance still matters, IMO, especially if you actually use it on a daily basis like I do. FYI, FS2020 at 8K
> 
> source
> 
> View attachment 2484231
> 
> View attachment 2484232


True dat. I now also use Metro Exodus as a test. 
FS2020 at 8K is like trying to fit 100 lbs into a 5 lb bag...that a 2080 Ti could actually run it past slideshow speeds is amazing.


----------



## sultanofswing

This is my GPU usage in MSFS 2020, 4k ultra settings.
Pretty poorly optimized and is why I don't bother testing overclocks with it.


----------



## ALSTER868

gfunkernaught said:


> No vram OC? What settings did you use in the control panel?


For my best score the vram OC was +1500; in the second case it was +1450.
All I change in the control panel is texture filtering to highest performance and threaded optimization. I didn't even bother to disable G-Sync.
Guess the score could have gained a hundred or even more points if I'd used subambient temps outside, but I was lazy
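As an aside on the memory OC numbers being traded here: theoretical peak bandwidth is simple arithmetic from the effective data rate and bus width (384-bit on the 3090). A small sketch; the 21 Gbps figure is an assumed overclock for illustration, not anyone's reported result.

```python
# Theoretical peak memory bandwidth from effective data rate and bus width.
# 19504 MT/s is the 3090's stock GDDR6X rate on a 384-bit bus.

def peak_bandwidth_gbs(data_rate_mtps, bus_bits):
    """Peak bandwidth in GB/s, ignoring real-world efficiency losses."""
    return data_rate_mtps * bus_bits / 8 / 1000

stock = peak_bandwidth_gbs(19504, 384)  # ~936 GB/s, matching the spec sheet
oced = peak_bandwidth_gbs(21000, 384)   # ~1008 GB/s at an assumed 21 Gbps OC
```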


----------



## ALSTER868

gfunkernaught said:


> Hence all the work I had to put in (rads, LM, pads) just to get 15335 indoors with 21c ambient temp.


I think you should be happy with this result since it's pretty decent. For higher scores you have to have lower ambient temps and one of the kingpin bioses or a shunt


----------



## kx11

https://twitter.com/i/web/status/1375038992631087107


----------



## kx11

Resizable BAR vBios update for GALAX cards



http://www.szgalaxy.com/Areas/__ZH_GB__/Download/3090_update.exe




GAINWARD



http://www.gainward.cn/Download/3090_update.exe


----------



## gfunkernaught

ALSTER868 said:


> I think you should be happy with this result since it's pretty decent. For higher scores you have to have lower ambient temps and one of the kingpin bioses or a shunt


Did you see the details of my score? I scored lower than you did with higher clocks. I am using the 1kw KP BIOS btw. I also have the tex filtering on high performance and threaded optimization. I think besides having good silicon, cpu contributes. The highest I ever got without changing control panel settings was 15444 and that was outside.


----------



## mardon

kx11 said:


> Resizable BAR vBios update for GALAX cards
> 
> 
> 
> http://www.szgalaxy.com/Areas/__ZH_GB__/Download/3090_update.exe
> 
> 
> 
> 
> GAINWARD
> 
> 
> 
> http://www.gainward.cn/Download/3090_update.exe


Do we know what wattage the bios is?


----------



## jura11

I could just barely beat 15k in Port Royal with the KPE XOC BIOS. Tried everything I could with my Palit RTX 3090 GamingPro; temperatures are great and won't break 36-38°C in the worst case, sitting at 32-33°C in the best case during multiple attempts in Port Royal, but the scores won't budge.

With a different CPU maybe I would break that wall, but for now I'm done with Port Royal, just enjoying gaming and mainly rendering, where RTX 3090s are just awesome.

If I can't get hold of a 5950X then maybe I will do a few more tests.

Hope this helps

Thanks, Jura


----------



## J7SC

sultanofswing said:


> This is my GPU usage in MSFS 2020, 4k ultra settings.
> Pretty poorly optimized and is why I don't bother testing overclocks with it.


...as mentioned, my experiences are different. Below is a quick one (stock clocks, PL)...the squiggle bit on the left is a new patch (...I get to reinstall again on the 2nd system  ). Once loaded, it is a relatively smooth line, on the right, with high usage. Important to note that this was a saved flight and pulling out of 'rolling cache' in FS2020...if your network speed and/or latency cannot keep up, your GPU will be twiddling its thumbs, waiting. On a fast un-throttled line, MS FS2020 Azure server hits just under 80 Mb/s on my setups as top speed (~ 23 latency to the Azure server in W. Washington). The only time I had really low usage w/ a flight in cache was when I had tightened tFAW too much which gave some trouble to the Main Thread processing (and thus no good flow to GPU). tFAW setting was tested extensively w/ memtest and error-free, but I find that several GPU apps like it (and a handful of other system memory settings) a bit less tight. All that said, FS2020 has so many other parameters that interact w/ each other that it can be frustrating to optimize for GPU...but once done, it is worth it  

re. resizable BAR, I'm looking forward to see what the performance improvements (if any) are in games such as CP2077 4K RTX Psycho


----------



## mardon

3090 Bar Bios doesn't work on my KFA2 reference card with Galax 390w bios.


----------



## kx11

mardon said:


> Do we know what wattage the bios is?


No idea mate.


----------



## J7SC

kx11 said:


> No idea mate.


...should be interesting to see when the XOC custom bios will be updated w/ resizable BAR


----------



## sultanofswing

J7SC said:


> ...as mentioned, my experiences are different. Below is a quick one (stock clocks, PL)...the squiggly bit on the left is a new patch (...I get to reinstall again on the 2nd system  ). Once loaded, it is a relatively smooth line, on the right, with high usage. Important to note that this was a saved flight pulling out of the 'rolling cache' in FS2020...if your network speed and/or latency cannot keep up, your GPU will be twiddling its thumbs, waiting. On a fast un-throttled line, the MS FS2020 Azure server hits just under 80 Mb/s on my setups as top speed (~23 ms latency to the Azure server in W. Washington). The only time I had really low usage w/ a flight in cache was when I had tightened tFAW too much, which gave some trouble to the Main Thread processing (and thus no good flow to the GPU). The tFAW setting was tested extensively w/ memtest and error-free, but I find that several GPU apps like it (and a handful of other system memory settings) a bit less tight. All that said, FS2020 has so many other parameters that interact w/ each other that it can be frustrating to optimize for GPU...but once done, it is worth it
> 
> re. resizable BAR, I'm looking forward to seeing what the performance improvements (if any) are in games such as CP2077 4K RTX Psycho
> 
> View attachment 2484244


Yea, not sure why mine is like that. I have my rolling cache size set to 200 GB and I have 1 Gb/s internet, so it should be no issue.
Forgot to mention my graph is while flying the FBW A320. I know the airliners can cause the CPU to become thread-limited, thus causing low GPU usage.


----------



## mardon

Got it to work. Had to flash the stock 350w bios from Galax.

Not too much of an issue as I'm shunted.


----------



## Nizzen

mardon said:


> 3090 Bar Bios doesn't work on my KFA2 reference card with Galax 390w bios.


Use this BIOS first:

KFA2 RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

Then install the resizable BAR BIOS.


----------



## ArcticZero

jomama22 said:


> If you shunted the pcie slot on the card and didn't change the multiplier in hwinfo for the slot power, you are drawing higher than 70w from it. What shunt do you have on there?


Hi. I'm familiar with how that works, at least. I stacked another 0.5 mOhm shunt on it, which should double it if there were no other internal limiters on the card to keep it safe, but I have my power limit at 48% while mining, and just a bit above that for gaming. I did apply the HWiNFO multiplier accordingly (x2), and it never goes above 68 W with my profiles. I doubt it was a PCIe power issue, honestly, as at the time all it was doing was mining with the aforementioned conservative power limit.
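For anyone double-checking the arithmetic here, a minimal sketch (the 0.5 mOhm values and the x2 multiplier are from the post above; the helper names are mine):

```python
# Why stacking an equal-value shunt calls for a 2x software multiplier:
# two resistors in parallel halve the resistance, so the controller sees
# half the voltage drop and under-reports current (and power) by half.

def parallel(r1, r2):
    """Combined resistance of two shunts stacked in parallel (ohms)."""
    return (r1 * r2) / (r1 + r2)

stock = 0.0005                      # 0.5 mOhm stock shunt
stacked = parallel(stock, 0.0005)   # another 0.5 mOhm soldered on top
correction = stock / stacked        # factor the monitoring software needs
print(correction)                   # 2.0 -> set the HWiNFO multiplier to x2
```

Same logic generalizes to unequal stacks: the correction factor is always stock resistance divided by the combined parallel resistance.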


----------



## Falkentyne

ArcticZero said:


> Hi. I'm familiar with how that works at least. I stacked another 0.5mOhm shunt on it which should double it if there were no other internal limiters on the card to keep it safe, but have my power limit at 48% while mining, and just a bit above for gaming. I did apply the HWiNFO multiplier accordingly (x2), and it never goes above 68w with my profiles. I doubt it was a PCIe power issue, honestly, as at the time all it was doing was mining with the aforementioned conservative power limit.


You need to shunt all the shunts equally on this platform.
That includes MVDDC (GDDR6X power!)

People don't understand how the power balancing works!!

And I mentioned this before, but I feel either people don't care or they ignore me just because I'm not Elmor or some BIOS modder or something.

GPU Chip Power (Not Total Board Power) is made up of FOUR RAILS summed (3 rails on 2x8 pin cards. I'm not sure where the missing value is on 3x8 pin cards like the Strix, might be another NVVDD rail I'm not paying attention to).
GPU Chip Power is linked to SRC. GPU Chip Power being shunted better causes SRC to rise, and being shunted poorly causes SRC to drop as Chip Power rises.
On 2x8 pin cards, Misc0, Misc2 and NVVDD1 Input Power add up to GPU Chip Power.
GPU Chip Power being shunted poorly causes SRC to drop and can cause strange power limit behavior especially below 100%.

MVDDC shunt affects Misc0 input power. If MVDDC is shunted poorly, this will cause Misc0 input power to rise, which will cause GPU Chip Power to rise.
On some boards, PCIE Slot Power and MVDDC are linked. Not shunting PCIE Slot Power when all the other shunts are shunted can cause MVDDC to skyrocket to over 200W, despite it being shunted, causing both rails to throttle you via TDP Normalized%.
Shunting PCIE Slot Power well causes Chip Power to drop and SRC to rise.

Shunting MVDDC better will cause Misc0 to drop (thus making GPU Chip Power drop) and SRC to rise. Shunting the 8 pins well causes SRC to drop.
SRC or MVDDC at 0.0W at heavy load means something else is not shunted well enough (possibly those very shunts).

SRC or MVDDC at >150W means another shunt is done poorly.
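A toy model of the rail bookkeeping described above may help. The rail names follow the post (HWiNFO labels); treating SRC as the remainder of total board power after chip power is my reading of "SRC drops as Chip Power rises", not documented behavior:

```python
# Toy model of the 2x8-pin rail split described above (an interpretation
# of the post, not NVIDIA documentation): chip power is the sum of three
# input rails, and SRC moves opposite to chip power within the board total.

def gpu_chip_power(misc0, misc2, nvvdd1):
    """On 2x8-pin cards these three input rails sum to GPU chip power (W)."""
    return misc0 + misc2 + nvvdd1

def src(total_board_power, misc0, misc2, nvvdd1):
    """SRC modeled as the remainder of total board power (W)."""
    return total_board_power - gpu_chip_power(misc0, misc2, nvvdd1)

# A poorly shunted MVDDC inflates Misc0, so chip power rises and SRC falls:
print(src(350, 60, 40, 180))   # 70
print(src(350, 90, 40, 180))   # 40  (Misc0 up 30 W -> SRC down 30 W)
```

The wattage figures are purely illustrative; the point is only the direction each reading moves when one rail's shunt is off.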


----------



## ArcticZero

Falkentyne said:


> You need to shunt all the shots equally on this platform.
> That includes MVDDC (GDDR6X power!)
> 
> People don't understand how the power balancing works!!
> 
> And I mentioned this before but I feel either people don't care or they ignore me just because i'm not Elmor or some bios modder or something.
> 
> GPU Chip Power (Not Total Board Power) is made up of FOUR RAILS summed.
> GPU Chip Power is linked to SRC. GPU Chip Power being shunted better causes SRC to rise, and being shunted poorly causes SRC to drop as Chip Power rises.
> On 2x8 pin cards, Misc0, Misc2 and NVVDD1 Input Power add up to GPU Chip Power.
> GPU Chip Power being shunted poorly causes SRC to drop and can cause strange power limit behavior especially below 100%.
> 
> MVDDC shunt affects Misc0 input power. If MVDDC is shunted poorly, this will cause Misc0 input power to rise, which will cause GPU Chip Power to rise.
> On some boards, PCIE Slot Power and MVDDC are linked. Not shunting PCIE Slot Power when all the other shunts are shunted can cause MVDDC to skyrocket to over 200W, despite it being shunted, causing both rails to throttle you via TDP Normalized%.
> Shunting PCIE Slot Power well causes Chip Power to drop and SRC to rise.
> 
> Shunting MVDDC better will cause Misc0 to drop (thus making GPU Chip Power drop) and SRC to rise. Shunting the 8 pins well causes SRC to drop.
> SRC or MVDDC at 0.0W at heavy load means something else is not shunted well enough (possibly those very shunts).
> 
> SRC or MVDDC at >150W means another shunt is done poorly.


I actually have been following your advice on this and it is working quite well. Power balancing is spot on, and aside from temps (which I will fix with a repad soon) and that strange crash and code b2 no-POST earlier today, it's been quite alright and very educational. So many, many thanks for the info.


----------



## bmagnien

jomama22 said:


> All "in" a sff seems a bit of a stretch yeah? Lol


touche. "in and around"


----------



## J7SC

sultanofswing said:


> Yea not sure why mine is like that. I have my rolling cache size set to 200 GB and I have 1gb/s internet so it should be no issue.
> Forgot to mention my graph is while flying the FBW A320. I know the airliners can cause the cpu to become thread limited thus causing low GPU usage.


...yeah, I have flown the 747 a few times, but it's just too sluggish a flight model compared to the smaller jets I prefer (below, near Mt. Everest)...I once got past 600 knots in a dive, but even just under 500 knots in a valley is 'not relaxing' re. trying to keep control...

...Microsoft has repeatedly stated that FS2020 will get the DX12 option ('soon'), that should be interesting...I wonder what they would / will do with RTX and DLSS in that case...FS2020 8K DLSS / 3090 ?


----------



## Benni231990

Hello!

I have a problem. I have a 3090 Gainward Phantom GS.

I downloaded the Gainward tool for Resizable BAR, but I always get an error: "unsupportet contact card deliver"?

I use the original Phantom GS BIOS.


----------



## KedarWolf

Benni231990 said:


> hello!
> 
> i have a problem i have a 3090 Gainward Phantom GS
> 
> i downloaded the gainward tool for Resizable BAR but i get allways an error : unsupportet contact card deliver?
> 
> i use the original phantom GS bios











KFA2 RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

Flashed the above GALAX BIOS first, then the resizable BAR BIOS .exe flashed just fine!!






3090_update.exe
drive.google.com


----------



## Dreams-Visions

Hey folks - I finally got my GPU in my open loop and thought I'd ask what the recommended daily driver bios is for the Strix? I've seen a bit of conversation on the last couple of pages but wasn't clear on what the conclusion/recommendation is here. Is the SuprimX bios recommended over the EVGA KPE 520W or FTW3 500W? Or something else entirely? Any concerns about running the memory with the slider maxed out daily if it's stable?

Thank you!


----------



## KedarWolf

Dreams-Visions said:


> Hey folks - I finally got my GPU in my open loop and thought I'd ask what the recommended daily driver bios is for the Strix? I've seen a bit of conversation on the last couple of pages but wasn't clear on what the conclusion/recommendation is here. Is the SuprimX bios recommended over the EVGA KPE 520W or FTW3 500W? Or something else entirely?
> 
> Thank you!


Suprim X for everyday gaming, Kingpin for benching seems to be the consensus.


----------



## gfunkernaught

KedarWolf said:


> Suprim X for everyday gaming, Kingpin for benching seems to be the consensus.


Or use the Kingpin 1kW BIOS, set up three profiles in Afterburner, and set the power mode to Prefer Maximum Performance in the NVIDIA Control Panel:
1. Idle or normal non-gaming use: power limit 30%, leave core at 0, VRAM all the way to the left (-587).
2. Everyday gaming profile: power limit 48%, core +135, VRAM +1000 (this OC may differ for you; it works for me). I've been testing Cyberpunk with a single scene to see how many fps I lose while lowering the power limit. If I don't lose performance at 45%, then I'll lower it.
3. Benching profile: power limit 60%, core +225 ([email protected] on the curve), VRAM +1400.

My card is watercooled, so just use this as a reference. Once in a while I feel like benching and don't feel like flashing back and forth.


----------



## GRABibus

KedarWolf said:


> Stock BIOS shows power draw on the third connector but Suprim X doesn't, like the Kingpin.


This is an issue with power readings on the Kingpin BIOS.
I have the same on my Strix.

If your Time Spy or PR scores are in line, then you have no issue.


----------



## KedarWolf

GRABibus said:


> this is issue With power readings with kingpin.
> I have the same on my strix.
> 
> if your timespy or PR scores are in line, then you have no issue.


Yes, I figured that out when I flashed back the stock ASUS BIOS.

On the Suprim X BIOS on air I got 15091 in Port Royal so everything is fine.


----------



## Nizzen

So no one has tested BAR with a 3090 and Battlefield V yet?

I was hoping for some BAR results over the Easter vacation


----------



## KedarWolf

Nizzen said:


> So noone tested BAR wit 3090 and Battlefield V yet?
> 
> Hoped for some BAR results on Easter vaccation


I've tested BAR on Cyberpunk and Shadow of the Tomb Raider, but with graphics settings and ray tracing maxed out, BAR doesn't help at all.


----------



## Nizzen

KedarWolf said:


> I've tested Bar on Cyberpunk and Shadow of The Tomb Raider but with graphics settings and ray tracing maxed out, BAR doesn't help at all.


Maybe try a game with BAR support? 

The list of games with support right now is limited; more games will follow:


_Assassin’s Creed Valhalla_
_Battlefield V_
_Borderlands 3_
_Forza Horizon 4_
_Gears 5_
_Metro Exodus_
_Red Dead Redemption 2_
_Watch Dogs: Legion_


----------



## ArcticZero

Well, I am getting Code B2 (load VGA BIOS) again on boot. I didn't even touch the card this time. All I did was shut down and install a new Noctua fan on the front of the case. Tried swapping PCIe slots, no dice this time. What a nightmare. 

I've tried googling but I can't find anything similar. I don't believe the GPU is dead because it's been fine so far and it worked the first time the code B2 showed up.


----------



## wesley8

Nizzen said:


> Maybe try a game with BAR support?
> 
> Games that has support now is limited:
> More games will follow
> 
> 
> _Assassin’s Creed Valhalla_
> _Battlefield V_
> _Borderlands 3_
> _Forza Horizon 4_
> _Gears 5_
> _Metro Exodus_
> _Red Dead Redemption 2_
> _Watch Dogs: Legion_











4K Highest Preset


----------



## Nizzen

wesley8 said:


> View attachment 2484284
> 
> 4K Highest Preset


Nice! Any improvement in min fps? 0.1%, 1%?

And what about 1440p or 3440x1440?


----------



## wesley8

Nizzen said:


> Nice! Any improvment in min fps? 0.1% 1%..


Not much, around 2% improvement...


----------



## HisN

Works









Same with me, nothing much changed in UHD.
You can't even tell which test is BAR-enabled, because they are so similar.


----------



## Nizzen

I think most of the gains will show when the CPU is more stressed, like at 1440p. Time will tell.


----------



## pat182

wesley8 said:


> View attachment 2484284
> 
> 4K Highest Preset


lol, I'll keep my 9900K, no need to upgrade for ReBAR


----------



## Nizzen

pat182 said:


> lol ill keep my 9900k, no need to upgrade for rebar


BAR works with the 9900K, I think. Even Z390 has BAR support now...


----------



## Falkentyne

ArcticZero said:


> Well, I am getting Code B2 (load VGA BIOS) again on boot. I didn't even touch the card this time. All I did was shut down and install a new Noctua fan on the front of the case. Tried swapping PCIe slots, no dice this time. What a nightmare.
> 
> I've tried googling but I can't find anything similar. I don't believe the GPU is dead because it's been fine so far and it worked the first time the code B2 showed up.


Clean the PCIe pins with Deoxit D5 and inspect for damage.


----------



## ArcticZero

Falkentyne said:


> Clean the PCIe pins with Deoxit D5 and inspect for damage.


It actually decided to work again. I opened the card up and inspected, found nothing wrong. I now dread the next time I need to do a cold boot in case this happens again.


----------



## Gebeleisis

Can someone recommend a multimeter that can measure small resistances like the R005? 
I have some R005s and I would like to measure them before doing anything with them.

Thank you


----------



## yzonker

HisN said:


> Works
> View attachment 2484288
> 
> 
> Same with me, nothing much change in UHD.
> You did even not have to know wich test is bar-enabled or not. Because they are so similar.
> 
> View attachment 2484286


Looks like min fps got worse? Or am I reading that wrong?


----------



## pat182

Nizzen said:


> Bar works with 9900k I think. Even z390 has bar support now...


not on the asus z390 prime, no bios for it


----------



## HisN

yzonker said:


> Looks like min fps got worse? Or am I reading that wrong?


Nope, you read it right. But maybe that's because it was the first run; I didn't let it run all the way through to build all the shaders after switching rBAR active again.


----------



## jomama22

Gebeleisis said:


> can someone tell me a multimeter that can measure small resistances like the R005 ?
> I have some R005 and i would like to measure them before doing anything with them.
> 
> thank you


You would have to either buy an ohmmeter that measures that low, or you could calculate it by measuring the voltage drop across it (you would need a variable benchtop power supply for that). The chance of the resistors being faulty or of significantly different resistances is crazy low. I wouldn't sweat it.
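The voltage-drop approach sketched out (the 2 A and 1 mV figures are illustrative, not from the thread):

```python
# Estimating an R005 (0.5 mOhm) shunt via Ohm's law, since typical
# multimeters can't resolve milliohms: force a known current through the
# part with a bench supply and read the drop on the meter's mV range.

def resistance_from_drop(v_drop_volts, current_amps):
    """R = V / I."""
    return v_drop_volts / current_amps

# Illustrative: 2.0 A forced through the shunt, meter reads 1.0 mV across it:
r = resistance_from_drop(0.001, 2.0)
print(r)   # 0.0005 -> 0.5 mOhm, matching the R005 marking
```

Keeping the test current high (a few amps) makes the millivolt reading large enough for the meter to resolve without heating the part meaningfully.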


----------



## jomama22

HisN said:


> Works
> View attachment 2484288
> 
> 
> Same with me, nothing much change in UHD.
> You did even not have to know wich test is bar-enabled or not. Because they are so similar.
> 
> View attachment 2484286
> 
> View attachment 2484287


If you have RDR2, I believe that was the largest jump of 10% improvement. Maybe try that out with the benchmark.


----------



## J7SC

jomama22 said:


> If you have RDR2, I believe that was the largest jump of 10% improvement. Maybe try that out with the benchmark.


...RDR2 also works great with Vulkan - which brings up the question whether resizable BAR will work generally w/ Vulkan and 3090 ?


----------



## KedarWolf

Nizzen said:


> Maybe try a game with BAR support?
> 
> Games that has support now is limited:
> More games will follow
> 
> 
> _Assassin’s Creed Valhalla_
> _Battlefield V_
> _Borderlands 3_
> _Forza Horizon 4_
> _Gears 5_
> _Metro Exodus_
> _Red Dead Redemption 2_
> _Watch Dogs: Legion_


Oh, I read a review it was tested with those two games so I thought it was supported.


----------



## Gebeleisis

jomama22 said:


> You would have to either buy a ohmmeter that measures that low or you could calculate it out by measuring the voltage drop across it (would need a variable bench top power supply for that). The chance of the resistors being faulty or of significantly different resistances would be crazy low. I wouldn't sweat it.


Thank you!


----------



## ArcticZero

Falkentyne said:


> Clean the PCIe pins with Deoxit D5 and inspect for damage.



Ok PC just froze again now and I'm getting B2 again on boot. It's unreliable now. Will check pins but I don't see any obvious damage.


----------



## Nizzen

So no one cares to test BAR at 1440p/3440x1440 in some of the best supported games, like Battlefield V or Valhalla?


----------



## Falkentyne

ArcticZero said:


> Ok PC just froze again now and I'm getting B2 again on boot. It's unreliable now. Will check pins but I don't see any obvious damage.


Disassemble the card.
Post high res, clean, detailed pictures of each shunt (yes I know that will take time) and of the PCB area around each shunt. You can upload them to IMGUR if you need to.
Doublecheck clearance around each shunt to backplate.
Double check clearance around each shunt to PCB.
Doublecheck for conductive balls of doom ™ sitting randomly on something that you may have missed.

Yes, I am aware of the pictures you took before, but they are simply not clear enough. There's too much glare and too much blurriness.

When you cleaned the MG842AR silver paint, what did you use exactly to clean it? What process and method did you use?



> Falkentyne. I actually did clean it up using 99% isopropyl alcohol on Q-tips,


I need FAR more detail than this. Because if you used ANY sort of alcohol on top of paint, it's going to get on the PCB and short stuff.
The last 3 shunts I cleaned paint off of, I didn't even use alcohol at all until almost all of the paint was gone. I simply chipped and chipped and chipped until the paint finally came off, and only then did I use alcohol to get off the remainder, since then I could easily keep track of it and INSTANTLY clean off any that got on the PCB, rather than getting even more alcohol paint on the PCB.


----------



## KedarWolf

Nizzen said:


> So noone care to test BAR in 1440p/3440x1440 in some of the best supported games like Battlefield V or Valhalla?


I would test in Battlefield 5 but the GALAX resizable bar BIOS performs 10% worse than the Suprim X BIOS on my Strix OC and I'm not going to gain 10% with resizable bar enabled so no point. 

When they release a Suprim X resizable bar BIOS I WILL test it then.


----------



## des2k...

KedarWolf said:


> I would test in Battlefield 5 but the GALAX resizable bar BIOS performs 10% worse than the Suprim X BIOS on my Strix OC and I'm not going to gain 10% with resizable bar enabled so no point.
> 
> When they release a Suprim X resizable bar BIOS I WILL test it then.


Imagine 2x8pin then 🙄 it's at most 390w with regular vbios. If we get xoc vbios then yes I'll flash mine.


----------



## ArcticZero

Falkentyne said:


> Disassemble the card.
> Post high res, clean, detailed pictures of each shunt (yes I know that will take time) and of the PCB area around each shunt. You can upload them to IMGUR if you need to.
> Doublecheck clearance around each shunt to backplate.
> Double check clearance around each shunt to PCB.
> Doublecheck for conductive balls of doom ™ sitting randomly on something that you may have missed.
> 
> Yes i am aware of the pictures you took before, but they are simply not clear enough. There's too much glare and too much bluriness.
> 
> When you cleaned the MG842AR silver paint, what did you use exactly to clean it? What process and method did you use?
> 
> 
> 
> I need FAR more detail than this. Because if you used ANY sort of alcohol on top of paint, it's going to get on the PCB and short stuff.
> The last 3 shunts I cleaned paint off of, I didn't even use alcohol at all until almost all of the paint was gone. I simply chipped and chipped and chipped until the paint finally came off, and only then did I use alcohol to get off the remainder, since then I could easily keep track of it and INSTANTLY clean off any that got on the PCB, rather than getting even more alcohol paint on the PCB.


Thanks again for the response! Here's a link to the album:



http://imgur.com/a/kUNEnWA


My shunts appeared to be behaving correctly. Multiplying values by 2 as per the shunt calculator produced accurate readings for the expected performance (using mining hashrate and gaming to gauge). If I can get it to post again I'll show GPUz graphs to confirm.

I followed your suggestion to check shunt clearance, and no impressions were made on the tape. I decided to apply liquid electric tape on top of all the shunts to be sure (removed it for the photos above).

I used Q-tips with alcohol to remove the paint, which in hindsight was a bad idea as I hadn't thought it through. I dabbed the Q-tips with alcohol and scrubbed the paint until it seemed to clear off the shunt and come up on the tips. Unfortunately I did not use any tape until I actually went and soldered. Though I did blow off the PCB, and have since given the shunt areas a quick alcohol wash to be sure, and let it dry thoroughly before I tried booting again (which it was doing fine until this b2 code kept showing up).

I'd appreciate tips on how to be doubly sure there aren't any leftover microscopic traces/residue of alcohol paint left on the PCB. Cheers!


----------



## Falkentyne

ArcticZero said:


> Thanks again for the response! Here's a link to the album:
> 
> 
> 
> http://imgur.com/a/kUNEnWA
> 
> 
> My shunts appeared to be behaving correctly. Multiplying values by 2 as per the shunt calculator produced accurate readings for the expected performance (using mining hashrate and gaming to gauge). If I can get it to post again I'll show GPUz graphs to confirm.
> 
> I followed your suggestion to check shunt clearance, and no impressions were made on the tape. I decided to apply liquid electric tape on top of all the shunts to be sure (removed it for the photos above).
> 
> I used Q-tips with alcohol to remove the paint, which in hindsight was a bad idea as I hadn't thought it through. i dabbed the Q-tips with alcohol and scrubbed on the paint until it seemed to clear up from the shunt and get on the tips. Unfortunately did not use any tape until I actually went and soldered. Though I did blow off the PCB and had since given the shunt areas a quick alcohol wash to be sure, and let it dry thoroughly before I tried booting again (which it was doing fine until this b2 code kept showing up).
> 
> I'd appreciate tips on how to be doubly sure there aren't any leftover microscopic traces/residue of alcohol paint left on the PCB. Cheers!


Some stuff I'm not liking already.

1) Those 1R0 chokes are crooked. Like REAL crooked. I mean look at them crooked. They're literally covering some small smd's on the board.
Is there physical contact between those 1R0 chokes and the SMD's? 

2) I see gunk or stuff around the edges of some shunts. I don't know if that's flux or something else. And I see a lot of residue of something.
Please take some alcohol and carefully wipe all of that stuff up around the shunts and then what is around those little solder balls and caps and smd's close to them.

3) if you applied any amount of alcohol to the top of the shunts when the MG 842AR paint was still well caked on, there's a 100% chance a film would get off and get on the PCB and start bridging stuff. I had that on 3090 FE around the 8 pins as originally I used alcohol to help remove the paint, even though I had Super 33+ tape all over, the alcohol paint mixture still easily got under the Super 33+ adhesive tape. Took a good 30 minutes of cleaning with a eye liner fabric tip followed by a toothpick edge with fabric wrapped around it just to get all of the stuff up. You can tell when it's there. Just take some tissue paper and put it on a paper clip or toothpick edge and apply alcohol to the tip then wipe around the shunt. If you see a silvery dark material come up when you wipe, that's MG 842AR paint residue.

It took 30 minutes just to clean the alcohol paint residue that had streaked around the 8 pins and caps. So if you didn't do that work, there's a high probability that there might be some there. If you didn't douse the entire card in 99% alcohol (this is safe to do, btw), it's there.

****

And......
In your first picture, what's up with that?
I see some orange stuff bridging two caps left of that shunt.
I also see another cap apparently suspended BETWEEN two points left of the shunt. Do you see it? I'm sure that's not supposed to be there....


----------



## ArcticZero

Falkentyne said:


> Some stuff I'm not liking already.
> 
> 1) Those 1R0 chokes are crooked. Like REAL crooked. I mean look at them crooked. They're literally covering some small smd's on the board.
> Is there physical contact between those 1R0 chokes and the SMD's?
> 
> 2) I see gunk or stuff around the edges of some shunts. I don't know if that's flux or something else. And I see a lot of residue of something.
> Please take some alcohol and carefully wipe all of that stuff up around the shunts and then what is around those little solder balls and caps and smd's close to them.
> 
> 3) if you applied any amount of alcohol to the top of the shunts when the MG 842AR paint was still well caked on, there's a 100% chance a film would get off and get on the PCB and start bridging stuff. I had that on 3090 FE around the 8 pins as originally I used alcohol to help remove the paint, even though I had Super 33+ tape all over, the alcohol paint mixture still easily got under the Super 33+ adhesive tape. Took a good 30 minutes of cleaning with a eye liner fabric tip followed by a toothpick edge with fabric wrapped around it just to get all of the stuff up. You can tell when it's there. Just take some tissue paper and put it on a paper clip or toothpick edge and apply alcohol to the tip then wipe around the shunt. If you see a silvery dark material come up when you wipe, that's MG 842AR paint residue.
> 
> It took 30 minutes just to clean the alcohol paint residue that had streaked around the 8 pins and caps. So if you didn't do that work, there's a high probability that there might be some there. If you didn't douse the entire card in 99% alcohol (this is safe to do, btw), it's there.
> 
> ****
> 
> And......
> In your first picture, what's up with that?
> I see some orange stuff bridging two caps left of that shunt.
> I also see another cap apparently suspended BETWEEN two points left of the shunt. Do you see it? I'm sure that's not supposed to be there....


There is enough clearance for them chokes, but yes they are indeed crooked out of the box. And yes. I'll definitely be giving the PCB another clean up to be sure.

More importantly WOW I didn't see that there was a loose SMD component. I don't remember knocking the card or mishandling it so that's strange. I nudged it now with my finger and it wasn't soldered or anything, so this is quite troublesome.

Any idea what this is? Looks like a tiny capacitor. It's so much smaller than the shunts, so I'm not at all confident in soldering this back. I've added photos of it and where it seems to have broken off from (bottom row center). There seems to be some solder residue on the pads, so they aren't clean. No idea how this happened, but I suspect the backplate must've hit it somehow.


----------



## Falkentyne

ArcticZero said:


> There is enough clearance for them chokes, but yes they are indeed crooked out of the box. And yes. I'll definitely be giving the PCB another clean up to be sure.
> 
> More importantly WOW I didn't see that there was a loose SMD component. I don't remember knocking the card or mishandling it so that's strange. I nudged it now with my finger and it wasn't soldered or anything, so this is quite troublesome.
> 
> Any idea what this is? Looks like a tiny capacitor. It's so much smaller than the shunts so I'm not at all confident in soldering this back. I've added photos of it and where it seems to have broken off from (bottom row center). There seems to be some solder residue on the pads so they aren't clean. No idea how this happened, but I suspect the the backplate must've hit it somehow.
> 
> View attachment 2484372
> View attachment 2484373


Well that explains the shorting problem.

You can attach it if you apply Kapton tape VERY CAREFULLY everywhere around the SMD attachment point fully, leaving only that area exposed, and then take a pinhead sharp tip and apply TWO VERY TINY DABS of MG 842AR paint on each of the solder points (assuming you have any left over) and then take tweezers and apply the SMD directly on top of the two solder points and the MG842AR dots without moving it one bit, and then let it cure. The only other way to attach that would be to do the same thing with solder paste, and then heat it with a hot air station to harden the paste (which I have no experience with).


----------



## ArcticZero

Falkentyne said:


> Well that explains the shorting problem.
> 
> You can attach it if you apply Kapton tape VERY CAREFULLY everywhere around the SMD attachment point fully, leaving only that area exposed, and then take a pinhead sharp tip and apply TWO VERY TINY DABS of MG 842AR paint on each of the solder points (assuming you have any left over) and then take tweezers and apply the SMD directly on top of the two solder points and the MG842AR dots without moving it one bit, and then let it cure. The only other way to attach that would be to do the same thing with solder paste, and then heat it with a hot air station to harden the paste (which I have no experience with).


Well, I tried with MG 842AR and am just hoping for the best while it cures. I've read that some of these ceramic capacitors are non-essential, but I have yet to POST and don't want to try until this has fully cured. No experience with hot air and solder paste either.


----------



## Falkentyne

ArcticZero said:


> Well I tried with MG 842AR and just hoping for the best while I cures. I've read that some of these ceramic capacitors are non essential, but I have yet to post and don't want to try until this has fully cured. No experience with hot air and solder paste either.
> 
> View attachment 2484374


Should take about an hour for a basic cure (ability to use the card) and 24 hours for max curing at room temp. Heating the card to 80°C (176°F) in the oven allows curing in 30 minutes.
Make very sure that that part does not somehow impact the backplate again, because even if it is attached, it's not going to be as secure as solder and could snap off easily, though it won't come off if no pressure is applied to it.

How do you think it got sheared off?
What's creepy strange is that someone with a 3080 or 3090 FE also had one of those, right next to the PCIE slot, also get ripped off because something slipped (I think a screwdriver or something).


----------



## yzonker

des2k... said:


> Imagine 2x8pin then 🙄 it's at most 390w with regular vbios. If we get xoc vbios then yes I'll flash mine.


I think it could be a while before we see that. EVGA has to release one and I'm not sure they will since it's really just for benching and then someone has to leak it.


----------



## bmagnien

Several posts back I was asking about issues I was having with the 1000W XOC BIOS, where my PSU (Corsair SF750) was hard-shutting my PC off. It would do this at even around 500 W (50%) with my 5950X set to 125 W PPT, but only in games. I could run all synthetics at upwards of 600 W. As far as other power-consuming elements, I have a DDC pump (Molex), an ITX mobo, 2x16 GB sticks of RAM, and 10x Noctua NF-A14s. Adding all that up didn't seem to surpass the 750 W max limit, and I could run the 520 W KP BIOS just fine, which didn't make much sense either. 

At the time it was explained that transient spikes might be the cause and that I should try a larger PSU to see. So I just got my hands on the Seasonic SX1000, a 1000 W SFX-L, and I'm having more or less the same issues. I can go a bit higher, like 225 W PPT on the CPU, and the 520 W KP BIOS is stable, but as soon as I switch over to the 1000W XOC, I start getting shutdowns in games. Strangely, I am able to go to like 700 W on the XOC and 225 W on the CPU in Time Spy Extreme, to the point where I'm no longer PWR throttled in Graphics Test 2. But as soon as I try gaming, where I'm only pulling about 400-500 W, I'll get a shutdown on the XOC BIOS within 10 minutes. It never shut down with the 520 W KP BIOS in the same situation.

@Falkentyne had some thoughts about it, and I figured there was something in the 1000XOC Bios that just wasn't right and was triggering the PSU somehow, but thought whatever that would be would be resolved by upping the PSU cap by 250w. But I guess not. 

Any ideas?


----------



## KedarWolf

bmagnien said:


> Several posts back I was asking about issues I was having with the 1000XOC bios, where my PSU (Corsair SF750) was hard-shutting off my PC. It would do this at even around 500w (50%) and my 5950x set to 125w PPT- but only in games. I could run all synthetics at upwards of 600w. As far as other power consuming elements, I have a DDC pump (molex), an ITX mobo, 2x16gb sticks of RAM, and 10 x Noctua NF-A14s. Adding all that up didn't seem to be surpassing the 750w max limit, and I could run the 520KP bios just fine which didn't make much sense either.
> 
> At the time it was explained that transient spikes might be the cause, and that I should try a larger PSU to see. So I just got my hands on the Seasonic SX1000, a 1000w SFX-L. I'm having more or less the same issues. I can go a bit higher, like 225W PPT on the CPU and the 520W KP Bios is stable, but as soon as I switch over to the 1000XOC, I start getting shutdowns in games. Strangely, I am able to go like 700w on the XOC and 225w on the CPU in TimeSpy extreme to the point where I'm no longer PWR throttled in Graphics Test 2. But as soon as I try gaming where I'm only pulling about 4-500w, I'll get a shutdown on XOC bios within 10 mins. Never shutdown with the 520KP bios in the same situation.
> 
> @Falkentyne had some thoughts about it, and I figured there was something in the 1000XOC Bios that just wasn't right and was triggering the PSU somehow, but thought whatever that would be would be resolved by upping the PSU cap by 250w. But I guess not.
> 
> Any ideas?


Could it be that motherboard overvolt protection is enabled in the BIOS? I know that can cause hard shutdowns.


----------



## bmagnien

KedarWolf said:


> Could it be that motherboard overvolt protection is enabled in the BIOS? I know that can cause hard shutdowns.


Not sure where it'd be in the BIOS; I thought I'd been all over it and never noticed anything like that. I read that Asus had an anti-surge protection setting in the BIOS, but I couldn't find that either. I have an Asus X570-I with the most recent BIOS. Any ideas if there's something in there I could try turning on/off?


----------



## yzonker

bmagnien said:


> Several posts back I was asking about issues I was having with the 1000XOC bios, where my PSU (Corsair SF750) was hard-shutting off my PC. It would do this at even around 500w (50%) and my 5950x set to 125w PPT- but only in games. I could run all synthetics at upwards of 600w. As far as other power consuming elements, I have a DDC pump (molex), an ITX mobo, 2x16gb sticks of RAM, and 10 x Noctua NF-A14s. Adding all that up didn't seem to be surpassing the 750w max limit, and I could run the 520KP bios just fine which didn't make much sense either.
> 
> At the time it was explained that transient spikes might be the cause, and that I should try a larger PSU to see. So I just got my hands on the Seasonic SX1000, a 1000w SFX-L. I'm having more or less the same issues. I can go a bit higher, like 225W PPT on the CPU and the 520W KP Bios is stable, but as soon as I switch over to the 1000XOC, I start getting shutdowns in games. Strangely, I am able to go like 700w on the XOC and 225w on the CPU in TimeSpy extreme to the point where I'm no longer PWR throttled in Graphics Test 2. But as soon as I try gaming where I'm only pulling about 4-500w, I'll get a shutdown on XOC bios within 10 mins. Never shutdown with the 520KP bios in the same situation.
> 
> @Falkentyne had some thoughts about it, and I figured there was something in the 1000XOC Bios that just wasn't right and was triggering the PSU somehow, but thought whatever that would be would be resolved by upping the PSU cap by 250w. But I guess not.
> 
> Any ideas?


You just described the exact path I went down except I haven't had that problem with the Seasonic Focus 1000 I replaced my Corsair RM750 with. 

But anyway, with the 750w I could run benchmarks near 500w with no issue. But the instant Cyberpunk started rendering (not in the menu), boom, power off. I've played Cyberpunk as high as 500w with the Seasonic with no issue though. 

I have a 5800x cpu BTW. I don't have it configured to draw a lot of power so that might be a difference.


----------



## KedarWolf

bmagnien said:


> Not sure where it'd be in the BIOS; I thought I'd been all over it and never noticed anything like that. I read that Asus had an anti-surge protection setting in the BIOS, but I couldn't find that either. I have an Asus X570-I with the most recent BIOS. Any ideas if there's something in there I could try turning on/off?


Maybe?

If you are aware of all the risks and have decided to *disable* the *Asus Anti-Surge Protect* function, go to BIOS → Advanced Mode → Monitor → *Anti Surge* Support, and set this parameter to Disabled. (The message appears on PCs with an *Asus* motherboard.)


----------



## bmagnien

yzonker said:


> You just described the exact path I went down except I haven't had that problem with the Seasonic Focus 1000 I replaced my Corsair RM750 with.
> 
> But anyway, with the 750w I could run benchmarks near 500w with no issue. But the instant Cyberpunk started rendering (not in the menu), boom, power off. I've played Cyberpunk as high as 500w with the Seasonic with no issue though.
> 
> I have a 5800x cpu BTW. I don't have it configured to draw a lot of power so that might be a difference.


Lol yeah dude - on the SF750, I'd click Continue Game, hear the radio announcer during the CP2077 load screen, get the prompt to click X past the load screen, and INSTANT crash every time. That was with XOC at 500w, while the KP520W would work just fine.

So are you running the XOC bios with the 1000W Seasonic? If so, at what power limit %? What's your 5800x PPT set to? I assume you've got PBO2 running to some extent right?


----------



## Falkentyne

bmagnien said:


> Not sure where it'd be in the BIOS; I thought I'd been all over it and never noticed anything like that. I read that Asus had an anti-surge protection setting in the BIOS, but I couldn't find that either. I have an Asus X570-I with the most recent BIOS. Any ideas if there's something in there I could try turning on/off?


There's no such thing as a Seasonic SX-1000. Perhaps you meant Silverstone?

And it's very possible that PSU isn't Ampere friendly either. Try a Seasonic TX-1000 or Seasonic PX-1000 (not the old Prime Ultra series but the newer OneSeasonic version).

I can't answer this question.

What I said earlier was that the Kingpin 1000W vBIOS has the NVVDD and MSVDD limits increased or removed. (These limits are what cause TDP Normalized % throttling, when TDP Normalized reaches the TDP slider % value but is LOWER than the TDP value below it.) Normalized is the highest value of ANY individual power rail, relative to that rail's limit. For example, say there are 10 different power rails, your TDP slider is set to 100%, and for simplicity each power rail has a max value of 100W, with the lowest active rail at 15W and the highest at 95W.

In this example, your TDP Normalized % will be 95%, because 95W is 95% of 100W (Normalized reports the single highest rail relative to whatever default value is "linked" to it). So if any one rail exceeds its max value, it shows up in TDP Normalized. Keep in mind this also applies if the TDP % slider can go past 100%.
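The bookkeeping described above can be sketched as follows (the rail names and the flat 100W limits are made up for illustration):

```python
# "TDP Normalized %" as described above: each rail's draw is compared
# against that rail's own limit, and Normalized reports the single
# worst offender. Rail names and limits here are illustrative only.
def tdp_normalized_pct(rails):
    """rails: dict of name -> (draw_watts, limit_watts)."""
    return max(draw / limit * 100 for draw, limit in rails.values())

rails = {
    "NVVDD":   (95, 100),  # highest rail -> this one sets Normalized
    "MSVDD":   (40, 100),
    "8-pin 3": (15, 100),  # lowest active rail
}
print(f"TDP Normalized: {tdp_normalized_pct(rails):.0f}%")  # 95%
```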

However the MSVDD / NVVDD limits are not shown anywhere so that makes stuff hard.

Have you tried the Strix XOC2 bios? Does this bios also trigger protection? You won't reach anywhere near the Kingpin 1000W limit due to a bug with "Source 3"'s power limit being default (which limits 8 pin #3 to 175W) however the Strix also has higher than default internal rail limits (which is why if you 5 mOhm stack shunt mod a Strix and run Timespy Extreme, you won't hit a TDP Normalized throttle Limit).

I've attached that file here; RENAME it by removing the .TXT at the end so it has a .ROM extension.

I did contact Asus about the SRC3 limit bug. No idea if they care or will put out another version (possibly through Elmor which is where I got this one). I told Shamino about it, but he's not with the video team.


----------



## yzonker

bmagnien said:


> Lol yeah dude - on the SF750, I'd click Continue Game, hear the radio announcer during the CP2077 load screen, get the prompt to click X past the load screen, and INSTANT crash every time. That was with XOC at 500w, while the KP520W would work just fine.
> 
> So are you running the XOC bios with the 1000W Seasonic? If so, at what power limit %? What's your 5800x PPT set to? I assume you've got PBO2 running to some extent right?


Yes I've been running the XOC bios. I have a lowly Zotac 3090 (2x8pin). Power limit is a little confusing because of that. I've run as high as 80% which is around 500w. 

My 5800x has SMT turned off and an all core OC of 4400 at 1.2v. 2 reasons. FS2020 runs better with SMT off (other games run no worse) and I'm saving heat in my loop by running the CPU at a low power level. Probably could go below 1.2 but haven't had time to test. Nothing I use the machine for runs worse.


----------



## bmagnien

@KedarWolf so I checked under Monitor in my BIOS and that option isn't there. Most of the posts I'm seeing about Asus anti-surge issues were from several years ago, so I'm wondering if they removed it.

@Falkentyne thanks again for taking the time to respond, I appreciate it. Yes, brain fart, I meant the SilverStone SX1000. I'll try that Strix XOC2 BIOS and report back. What has been the highest power draw reported by people who have tried it? Think I'll have any issues running it on a FTW3U?

@yzonker thanks bud. Glad (in a way) to hear you had the same issues on the 750; I'll keep tinkering to try to justify the price I paid for this SX1000.


----------



## bmagnien

I just tested the Strix XOC2 BIOS limited to 500W (50%)... should I be scared about these PCIe pin readings?? This is with EVGA's new power balancing through their new RMA program on the FTW3 3090s...


----------



## des2k...

yzonker said:


> Yes I've been running the XOC bios. I have a lowly Zotac 3090 (2x8pin). Power limit is a little confusing because of that. I've run as high as 80% which is around 500w.
> 
> My 5800x has SMT turned off and an all core OC of 4400 at 1.2v. 2 reasons. FS2020 runs better with SMT off (other games run no worse) and I'm saving heat in my loop by running the CPU at a low power level. Probably could go below 1.2 but haven't had time to test. Nothing I use the machine for runs worse.


I have a custom HWiNFO config done for the Zotac if you want it; I used an amp clamp on the 8-pins at 100% TDP.

It's fairly accurate for real total power, core, cache, and memory. You can monitor TDP and Normalized TDP to see how they affect real power. I only use it at 100% TDP; I just cap the max frequency.


----------



## bmagnien

I ran this for science, about as far as i have the balls to try until I figure out if these readings are accurate or not...


----------



## mirkendargen

If anyone ever has any Bykski issues, just email [email protected]


----------



## bmagnien

And now I’m super confused. Just threw the KPXOC1000 BIOS back on, ran furmark at 70% and CB23 at the same time... so 225w from CPU and 700w from GPU for 925w max load...And it ran fine. I’m wondering if the PSU shutdown might be heat related? I have a fully vented case but the top rad is intaking and exhausting straight into the top of the PSU where it’s trying to exhaust. Are PSUs that sensitive to heat?


----------



## T.Sharp

bmagnien said:


> And now I’m super confused. Just threw the KPXOC1000 BIOS back on, ran furmark at 70% and CB23 at the same time... so 225w from CPU and 700w from GPU for 925w max load...And it ran fine. I’m wondering if the PSU shutdown might be heat related? I have a fully vented case but the top rad is intaking and exhausting straight into the top of the PSU where it’s trying to exhaust. Are PSUs that sensitive to heat?


It does have over-temp protection. It's a very dense PSU by the looks of it. Even if it's 90% efficient at ~1000W, that's a lot of heat in a small area. If it's sucking in hot exhaust too, it's certainly possible that it's overheating.

It does say it's rated for up to 50c operating environment though.
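As a rough sketch of how much heat the PSU itself has to shed (the 90% efficiency figure is an assumption carried over from the post above):

```python
# At 90% efficiency, everything the PSU draws from the wall but does
# not deliver to the system becomes heat inside the PSU enclosure.
def psu_self_heat(p_out_watts, efficiency):
    p_in = p_out_watts / efficiency      # wall draw
    return p_in - p_out_watts            # dissipated in the PSU

heat = psu_self_heat(1000, 0.90)
print(f"~{heat:.0f} W dissipated inside the PSU at full load")
```

A hundred-odd watts in an SFX-L shoebox, fed with pre-warmed air, is plausibly enough to trip over-temperature protection.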


----------



## bmagnien

T.Sharp said:


> It does have over-temp protection. It's a very dense PSU by the looks of it. Even if it's 90% efficient at ~1000W, that's a lot of heat in a small area. If it's sucking in hot exhaust too, it's certainly possible that it's overheating.


It’s sucking in cold air through some pretty large gauge venting. But it’s exhausting up right into the underside of a rad that’s pushing hot air down. I’ll keep a closer eye on temps. I’m running the xoc1000 in CP77 now for an extended run. Maybe the PSU just needed to be ‘broken in’ for a week before handling this type of load? No idea...


----------



## T.Sharp

bmagnien said:


> It’s sucking in cold air through some pretty large gauge venting. But it’s exhausting up right into the underside of a rad that’s pushing hot air down. I’ll keep a closer eye on temps. I’m running the xoc1000 in CP77 now for an extended run. Maybe the PSU just needed to be ‘broken in’ for a week before handling this type of load? No idea...


Ahh, I misunderstood. It says it's rated for 50c environment (just edited my other comment), in which case it really shouldn't be overheating if it's drawing fresh air. Can you put your hand on the PSU case to see if it's cookin? What case are you using?


----------



## sultanofswing

I don't think it's a good idea to push a PSU that close to its max rating.
Remember that's a "max" rating, also known as a peak rating. I'd bet the continuous rating is around 900 or so watts.


----------



## bmagnien

T.Sharp said:


> Ahh, I misunderstood. It says it's rated for 50c environment (just edited my other comment), in which case it really shouldn't be overheating if it's drawing fresh air. Can you put your hand on the PSU case to see if it's cookin? What case are you using?


Lol that’s a complicated question. Technically a Sliger sm580 but:

https://www.reddit.com/r/mffpc/comments/lah4tc

I've replaced the side acrylics with the chunky vented side panels since those photos, and all rads are intake with passive exhaust out the sides and back. RAM is my closest way of checking internal temps; they get to 50C after several hours, and they're overclocked at 1.55v.


----------



## J7SC

yzonker said:


> You just described the exact path I went down except I haven't had that problem with the Seasonic Focus 1000 I replaced my Corsair RM750 with.
> 
> But anyway, with the 750w I could run benchmarks near 500w with no issue. But the instant Cyberpunk started rendering (not in the menu), boom, power off. I've played Cyberpunk as high as 500w with the Seasonic with no issue though.
> 
> I have a 5800x cpu BTW. I don't have it configured to draw a lot of power so that might be a difference.


I only have good 1200W and above PSUs, as all the other systems around here are SLI (2x, 3x, 4x) with 16c/32t or higher CPUs... a pleasant side effect is that my builds with a single card, even at 500W+ for a 3090, are never PSU 'spike sensitive'... FYI, Igor's Lab did a series on Ampere power spikes and PSUs not too long ago.


----------



## T.Sharp

sultanofswing said:


> I don't think it's a good idea to push a PSU that close to its max rating.
> Remember that's a "max" rating, also known as a peak rating. I'd bet the continuous rating is around 900 or so watts.


Quality PSUs are designed to be run at their max rating 24/7 and can usually handle 110% with no problem.
See the SilverStone SX1000 Platinum review introduction.



bmagnien said:


> Lol that’s a complicated question. Technically a Sliger sm580 but:
> 
> https://www.reddit.com/r/mffpc/comments/lah4tc
> 
> I've replaced the side acrylics with the chunky vented side panels since those photos, and all rads are intake with passive exhaust out the sides and back. RAM is my closest way of checking internal temps; they get to 50C after several hours, and they're overclocked at 1.55v.


Ohh dang, I saw your build before but didn't realize just how choked off the PSU exhaust is. That could definitely be a problem. I dig the build though.


----------



## bmagnien

T.Sharp said:


> Quality PSUs are designed to be run at their max rating 24/7 and can usually handle 110% with no problem.
> SilverStone SX1000 Platinum INTRODUCTION
> 
> 
> Ohh dang, I saw your build before but didn't realize just how choked off the PSU exhaust is. That could definitely be a problem. I dig the build though.


Yah. I can try fixing it just by swapping the top fans to exhaust. I’ll give it a try. Thanks for everyone’s input so far. I love this forum, I learn so much. Cheers!


----------



## J7SC

...I follow Yangcom / S. Korea on YT, usually for their 4x 3090 / MO-RA workstation builds... But here they use yet another Strix 3090 / EK block combo for a 'gaming PC' in a Corsair 1000D, plus an Asus Thor 1200W Platinum PSU. Granted, I would do the water cooling differently, but nice eye candy nonetheless...


----------



## ArcticZero

Falkentyne said:


> Should take about an hour for basic cure (ability to use the card) and 24 hours for max curing at room temp. Heating the card to 80C in the oven (not sure what fahrenheit that is) allows curing in 30 minutes.
> Make very well sure that that part does not somehow impact the backplate again because if it is attached, it's not going to be as secure as solder and could snap off easily, but won't come off if no pressure is applied to it.
> 
> How do you think it got sheared off?
> 
> What's creepy strange is that someone with a 3080 or 3090 FE also had one of those, right next to the PCIE slot, also get ripped off because something slipped (I think a screwdriver or something).


I honestly have no idea how, as I don't recall ever hitting it. But it's most likely a break due to physical contact and not something else since you can see the rough, broken solder joints left behind. Very strange.

Did the guy's card still work? I'm hearing a few of these ceramic capacitors aren't 100% essential for normal operation, and if this cap has been detached for a while, then I'd already been using the card without it while it still worked.

I let the paint bridges cure overnight and woke up to it not having adhered at all, so I most likely need to use more paint. In the middle of attempt #2 now. If I can't get this to adhere properly, I'll most likely have it sent to a repair shop for resoldering / hot air. I don't believe this should be unfixable.

Also, when doing a full alcohol bath, how long should the PCB be left submerged for to be confident there are no traces left?


----------



## Pepillo

BAR activated. With an EVGA BIOS, the update is done with Precision X1.


----------



## Falkentyne

ArcticZero said:


> I honestly have no idea how, as I don't recall ever hitting it. But it's most likely a break due to physical contact and not something else since you can see the rough, broken solder joints left behind. Very strange.
> 
> Did the guy's card still work? I'm hearing a few of these ceramic capacitors aren't 100% essential for normal operation, and if this cap has been detached for a while, then I'd already been using the card without it while it still worked.
> 
> I let the paint bridges cure overnight and woke up to it not having adhered at all, so I most likely need to use more paint. In the middle of attempt #2 now. If I can't get this to adhere properly, I'll most likely have it sent to a repair shop for resoldering / hot air. I don't believe this should be unfixable.
> 
> Also, when doing a full alcohol bath, how long should the PCB be left submerged for to be confident there are no traces left?


That I have no idea about. JayzTwoCents did an alcohol bath video for cleaning a card of residue from an LN2 run. I would guess around 5 minutes if you're moving the alcohol over the card to get it cleaned, then a swab to easily finish the job.

And even if it were sheared off and still functioning, it is STILL metal and you have a piece of loose metal touching other stuff. Which causes shorts.
You will have to hope something didn't get fried by a voltage spike.


----------



## ArcticZero

Falkentyne said:


> That I have no idea. Jayz2cents did an alcohol bath video for cleaning a card of residue from a LN2 run. I would guess like 5 minutes if you're moving the alcohol over the card to get it cleaned, then a swab to easily finish the job.
> 
> And even if it were sheared off and still functioning, it is STILL metal and you have a piece of loose metal touching other stuff. Which causes shorts.
> You will have to hope something didn't get fried by a voltage spike.


I'll definitely give the alcohol bath a shot. Unfortunately the silver paint didn't adhere even on my second attempt, and the cap is just falling off. Looks like I'm going to have it fixed at an electronics repair shop. I didn't see anything obviously fried upon a visual inspection, at the very least. I'm really hoping the primary reason for it not booting is paint traces, but I guess I'll find out after the alcohol bath.


----------



## Dreams-Visions

Just an FYI for those with an O11 Dynamic XL case and an X Trio, Strix, or card with similar dimensions: the Phanteks Glacier GPU block will allow you to fit your card horizontally in your case. It's short enough by a few mm. Fits perfectly here after making the same mistake others did with the Bykski block, which is about 5mm too wide to fit in the case with the door on.

On another subject, I'm getting roughly 2175MHz locked in games with the SuprimX bios, which was recommended here. Appreciate the tip. But I did notice that despite being locked at 2160MHz in Port Royal, my scores were significantly lower than with a non-curve overclock, despite it being a lower locked boost (around 2130). Did I do something wrong with my curve? Am I just a dummy? Should I move to a bios with a higher power limit? Any tips are greatly appreciated.


----------



## PowerK

Folks, am I correct in that for Resize BAR to work, you need:
1. Motherboard BIOS support
2. vBIOS support
3. Nvidia driver support.
?
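As a side note (my own suggestion, not from this thread): once all three pieces are in place, on Linux you can sanity-check the result by reading the GPU's BAR1 aperture from its sysfs `resource` file, since with Resizable BAR active the framebuffer BAR covers the full VRAM instead of the usual 256 MB. A minimal parser, run here against synthetic data since the exact device path varies:

```python
# Parse a PCI device's sysfs "resource" file (one "start end flags"
# line per region) and report the size of BAR1, which on NVIDIA cards
# maps the framebuffer. The real file lives at
# /sys/bus/pci/devices/<addr>/resource; the sample below is fabricated.
def bar_size_bytes(resource_text, bar_index=1):
    line = resource_text.splitlines()[bar_index]
    start, end, _flags = (int(field, 16) for field in line.split())
    return 0 if end == 0 else end - start + 1

# Fabricated example: BAR1 spanning 256 MB (i.e. no Resizable BAR).
sample = (
    "0x00000000a3000000 0x00000000a3ffffff 0x0000000000040200\n"
    "0x0000006000000000 0x000000600fffffff 0x000000000014220c\n"
)
print(bar_size_bytes(sample) // (1024 * 1024), "MB")  # 256 MB
```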


----------



## Zogge

Yes


----------



## ALSTER868

Dreams-Visions said:


> But I did notice that despite being locked at 2160MHz in Port Royal, my scores were significantly lower than with a non-curve overclock, despite it being a lower locked boost (around 2130). Did I do something wrong with my curve?


Are you monitoring your GPU effective clock? With that steep a curve you're getting a lower effective clock than you would by OCing the GPU with the slider in AB, hence the difference in scores.

E.g. with a curve like yours I get around 2050-2080 MHz effective, and if I set the slider to +170 to get the same 2175 MHz requested clock, I get 2145-2165 MHz effective, which nets me an extra 100-200 points in PR.


----------



## Pepillo

BAR activated:

Assassin's Creed Valhalla, 4K Very High preset, no BAR:

BAR enabled, 4.76% more FPS:
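For anyone sanity-checking their own before/after runs, the uplift figure is just the ratio of the two averages (the sample FPS numbers below are made up to reproduce roughly the 4.76% above):

```python
# Percent FPS uplift from a before/after pair. The inputs are
# illustrative, not the actual benchmark numbers from this post.
def fps_uplift_pct(fps_before, fps_after):
    return (fps_after / fps_before - 1) * 100

print(f"{fps_uplift_pct(63.0, 66.0):.2f}%")  # 4.76%
```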


----------



## kx11

Anybody got the ROG card's vBIOS??


----------



## yzonker

des2k... said:


> I have custom hwinfo done for the zotac if you want it, used amp clamp on the 8pins at 100% TDP,
> 
> Fairly accurate for real total power, core, cache, mem. You can monitor the TDP, Normalized TDP to see how that affects real power. I only use it at 100% TDP, I just cap the max freq.


Thanks. I have it. You sent it to me on Reddit a while back. I just don't always think to look at it or have it loaded. I tend to just use the AB overlay in game.


----------



## yzonker

bmagnien said:


> It’s sucking in cold air through some pretty large gauge venting. But it’s exhausting up right into the underside of a rad that’s pushing hot air down. I’ll keep a closer eye on temps. I’m running the xoc1000 in CP77 now for an extended run. Maybe the PSU just needed to be ‘broken in’ for a week before handling this type of load? No idea...


One thing is that I haven't run CP for a long gaming session at those higher power levels, so I don't know if my PSU would trip or not in that case. I was _assuming_ it was good since it doesn't turn off fairly quickly like the 750W did before. Also, even with the 750W, some math suggested I wasn't really pulling enough power to overload it, and my UPS indicated that as well, but the PSU thought otherwise.


----------



## ArcticZero

@Falkentyne I gave it a thorough alcohol bath for five minutes, poured a fresh batch of alcohol to rinse it down, did a full cleaning with a paint brush afterwards including all the thermal paste residue left behind and....IT WORKED. I can cold boot and everything and it still works. And it looks like you were right on the money, as I found a few silver flakes floating in the alcohol bath. Fingers crossed it lasts this time. Big thanks for the tip!

Now I'm conflicted if I should go ahead and have the broken cap resoldered and maybe risk having it messed up again, or just leave it off because things are already working fine.

Just so happy to have a working GPU again, considering the market nowadays.


----------



## Thanh Nguyen

Pepillo said:


> BAR activated:
> 
> Assassin's Creed Valhalla, 4K Very High preset, no BAR:
> 
> BAR enabled, 4.76% more FPS:


Do we need to flash this bios or just update?


----------



## kairi_zeroblade

ArcticZero said:


> @Falkentyne I gave it a thorough alcohol bath for five minutes, poured a fresh batch of alcohol to rinse it down, did a full cleaning with a paint brush afterwards including all the thermal paste residue left behind and....IT WORKED. I can cold boot and everything and it still works. And it looks like you were right on the money, as I found a few silver flakes floating in the alcohol bath. Fingers crossed it lasts this time. Big thanks for the tip!
> 
> Now I'm conflicted if I should go ahead and have the broken cap resoldered and maybe risk having it messed up again, or just leave it off because things are already working fine.
> 
> Just so happy to have a working GPU again, considering the market nowadays.


For the long term I think you should have it re-soldered. With skilled hands it shouldn't be hard to do yourself, if you still have the component.


----------



## ArcticZero

Yep I certainly still have it. And yeah I'm worried about how leaving it off will affect the lifespan of the card, so definitely having it fixed ASAP.


----------



## Pepillo

Thanh Nguyen said:


> Do we need to flash this bios or just update?


Flash update, with Precision X1 for EVGA BIOS:



https://www.evga.com/EVGA/GeneralDownloading.aspx?file=EVGA_Precision_X1_1.1.8.0.zip


----------



## nievz

Is there an EVGA 500W BIOS with Resizable BAR for the X Trio yet?


----------



## erazortt

For everyone with a reference 2x8-pin card, I just found this BAR-enabled GALAX BIOS with the 390W power limit.








GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com

I cannot test it right now because I'm currently not at home. But that'll be the first thing I do when I'm back later.


----------



## schoolofmonkey

Pepillo said:


> Flash update, with X1 Precission for EVGA Bios:
> 
> 
> 
> https://www.evga.com/EVGA/GeneralDownloading.aspx?file=EVGA_Precision_X1_1.1.8.0.zip


Thanks for that, I couldn't find the link anywhere on the EVGA forums even though they were talking about it.
It updated fine and RBAR is enabled on my RTX 3090 XC3.


----------



## ArcticZero

erazortt said:


> For everyone with a reference 2x8-pin card, I just found this BAR-enabled GALAX BIOS with the 390W power limit.
> 
> GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com
> 
> I cannot test it right now because I'm currently not at home. But that'll be the first thing I do when I'm back later.


I would try this, but unfortunately Asus hasn't released Resizable BAR updates for their Z370 lineup yet. Thanks for the link!


----------



## Pepillo

nievz said:


> Is there an EVGA 500w BIOS with resizeable bar for the X trio yet?











EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com

The Kingpin 520W, BAR enabled:

EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com


----------



## Thanh Nguyen

Any Kingpin 1000W with BAR?


----------



## Pepillo

Thanh Nguyen said:


> Any Kingpin 1000W with BAR?


You can do it in two minutes. Install the 1000W EVGA BIOS, download Precision X1 1.1.18, and patch the BIOS with BAR. Two clicks and a reboot.


----------



## erazortt

erazortt said:


> For everyone with a reference 2x8-pin card, I just found this BAR-enabled GALAX BIOS with the 390W power limit.
> 
> GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) on www.techpowerup.com
> 
> I cannot test it right now because I'm currently not at home. But that'll be the first thing I do when I'm back later.


OK, I just rushed home. I can confirm that this BIOS works on my ZOTAC card, and GPU-Z does recognize that Re-BAR is enabled. I'm going to do some perf tests.


----------



## schoolofmonkey

New driver:
Game Ready Driver WHQL 465.89




https://us.download.nvidia.com/Windows/465.89/465.89-desktop-win10-64bit-international-whql.exe




https://us.download.nvidia.com/Windows/465.89/465.89-desktop-win7-64bit-international-whql.exe



This new Game Ready Driver provides support for the launch of Outriders, which features NVIDIA DLSS technology.

Additionally, this release also provides optimal day-1 support for:
DIRT 5's new ray tracing update
The launch of Evil Genius 2: World Domination
The launch of the KINGDOM HEARTS Series on the Epic Games Store
Gaming Technology
Includes support for Resizable BAR across the GeForce RTX 30 Series of Desktop and Notebook GPUs
Includes beta support for virtualization on GeForce GPUs
Learn more in the Game Ready Driver article

New Features and Other Changes

OpenCL 3.0

Added support for OpenCL 3.0, the latest major version of OpenCL, maintaining backward compatibility with OpenCL 1.2. NVIDIA OpenCL 3.0 continues to support existing OpenCL 1.2 functionality as well as Khronos and vendor extensions that are already supported with NVIDIA OpenCL 1.2 drivers.

The following new features beyond existing NVIDIA OpenCL 1.2 features are supported by NVIDIA OpenCL 3.0
RGBA vector component naming in OpenCL C kernels
pragma_unroll hint
opencl_3d_image_writes
clCreate*WithProperties APIs which can be used as replacement for existing clCreateBuffer/Image APIs.
clSetContextDestructorCallback
clCloneKernel from OpenCL 2.1
clEnqueueSVMMigrateMem from OpenCL 2.1
Incorporates the following experimental 2.0 features:
Device side enqueue
The current implementation is limited to 64-bit platforms only.
OpenCL 2.0 allows kernels to be enqueued with global_work_size larger than the compute capability of the NVIDIA GPU. The current implementation supports only combinations of global_work_size and local_work_size that are within the compute capability of the NVIDIA GPU.
The maximum supported CUDA grid and block size of NVIDIA GPUs is available at http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities.
For a given grid dimension, the global_work_size can be determined by CUDA grid size x CUDA block size.

For executing kernels (whether from the host or the device), OpenCL 2.0 supports non-uniform ND-ranges where global_work_size does not need to be divisible by the local_work_size. This capability is not yet supported in the NVIDIA driver, and therefore not supported for device side kernel enqueues.
Shared virtual memory
The current implementation of shared virtual memory is limited to 64-bit platforms only.
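A minimal sketch (plain Python; the helper names are mine, not NVIDIA's) of the work-size rule described above: the per-dimension ceiling on global_work_size is the CUDA grid size times the CUDA block size, and because non-uniform ND-ranges are not yet supported, global_work_size must also divide evenly by local_work_size:

```python
def max_global_work_size(cuda_grid_size: int, cuda_block_size: int) -> int:
    """Per-dimension ceiling on global_work_size:
    CUDA grid size x CUDA block size (see the compute-capability tables)."""
    return cuda_grid_size * cuda_block_size

def is_valid_ndrange(global_size: int, local_size: int, limit: int) -> bool:
    """This driver requires uniform ND-ranges: global_work_size must be a
    multiple of local_work_size, and it must fit within the device limit."""
    return global_size % local_size == 0 and global_size <= limit

# Compute capability 8.6 (GA102) allows grid.x up to 2^31 - 1 blocks and
# up to 1024 threads per block, per the CUDA programming guide.
limit = max_global_work_size(2**31 - 1, 1024)
print(is_valid_ndrange(1_024_000, 256, limit))  # True  (uniform)
print(is_valid_ndrange(1_000, 256, limit))      # False (non-uniform)
```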

Application Profiles
Added the following SLI profiles.
Shenmue III (NVIDIA Turing GPUs only)
The Medium (NVIDIA Turing GPUs only)
Other Changes
Added support for extended DP-to-HDMI 2.1 PCON clock frequency range to better support 8K TVs.
Changed the G-SYNC on-screen status indicator to be less obtrusive.

Limitations in This Release

The following features are not currently supported or have limited support in this driver release:

OpenCL 3.0 Known Issues

Device-Side-Enqueue related queries may return 0 values, although the corresponding built-ins can be safely used by kernels.
This is in accordance with conformance requirements described at https://www.khronos.org/registry/Op...L_API.html#opencl-3.0-backwards-compatibility
Denorms that were flushed to zero by default when cl-fast-relaxed-math was enabled on OpenCL 1.2 drivers will not be flushed to zero by default with the OpenCL 3.0 driver.
Using HDR with Games

There may be issues resulting from the interaction between in-game HDR settings and the
Windows HDR settings. See the NVIDIA Knowledge Base Article 5072 (Considerations for Playing Games with HDR Enabled | NVIDIA) for details.

Changes and Fixed Issues in Version 465.89
The following sections list the important changes and the most common issues resolved in this version. This list is only a subset of the total number of changes made in this driver version. The NVIDIA bug number is provided for reference.

Fixed Issues in this Release
[Rainbow Six Siege][Vulkan]: Smoke appears pixelated. [3266916]
[Vulkan][X4: Foundations 4.00/X4: Cradle of Humanity] The game may crash on GeForce RTX 30 Series. [200701230]
[GeForce RTX 3090]: Blue-screen crash occurs when Samsung Odyssey G9 is paired with HDMI TV. [3240366]
[GeForce RTX 2060]: Blue-screen crash (DPC_WATCHDOG_VIOLATION) occurs when playing a game and watching a YouTube video simultaneously. [3196272]
[Sunset Overdrive]: The application may display random green corruption if Depth of Field is enabled from in-game settings. [2750770]
Realtek Displayport-to-HDMI 2.1 protocol converter clock limited to 600MHz pixel clock. [3202060]
[G-SYNC][NVIDIA Ampere/Turing GPU architecture]: GPU power consumption may increase in idle mode on systems using certain higher refresh-rate G-SYNC monitors. [200667566]
[GFE Screenshot/HDR]: Application screenshots are washed out when HDR is enabled. [3229781]
Open Issues in This Release
[World of Warcraft: Shadowlands]: Random flicker may occur in certain locations in the game. [3206341]
[Supreme Commander/Supreme Commander 2]: The games experience low FPS. [3231218/3230880]
[Batman Arkham Knight]: The game crashes when turbulence smoke is enabled. [3202250]
[Adobe Camera RAW 12.x]: RAW files may show up black in Adobe Lightroom. [3266883]
[Steam VR game]: Stuttering and lagging occur upon launching a game while any GPU hardware monitoring tool is running in the background. [3152190]
[VR]: Microsoft Flight Simulator 2020 VR may stutter if Hardware-accelerated GPU scheduling is disabled. [3246674]
[YouTube]: Video playback stutters while scrolling down the YouTube page. [3129705] Fixed by KB5000842 (March 29, 2021)
[Notebook]: Some Pascal-based notebooks with high refresh-rate displays may randomly drop to 60Hz during gameplay. [3009452]


----------



## ALSTER868

Any ReBar bios update for Strix 3090, any info on that?


----------



## jura11

Now we need EVGA KPE XOC BIOS with BAR enabled 

Hope this helps 

Thanks, Jura


----------



## wesley8

ALSTER868 said:


> Any ReBar bios update for Strix 3090, any info on that?





https://dlcdnets.asus.com/pub/ASUS/Graphic%20Card/NVIDIA/Utilities/RTX3090_V3.exe







ROG-STRIX-RTX3090-O24G-WHITE | ROG Strix | Gaming Graphics Cards｜ROG - Republic of Gamers｜ROG Global


The ROG Strix GeForce RTX™ 3090 White OC Edition 24GB GDDR6X featuring a white color scheme, Axial-tech fan design, 0dB technology, 2.9-slot Design, Dual BIOS, Auto-Extreme Technology, SAP II, MaxContact Technology, and more



rog.asus.com


----------



## jura11

Pepillo said:


> You can do it in two minutes. Install the 1,000W EVGA BIOS, download Precision X1 1.1.18, and patch the BIOS with BAR. Two clicks and a reboot.


Would you upload it to TechPowerUp or any free hosting service?

Hope this helps 

Thanks, Jura


----------



## 7empe

Hey, my mobo M12E with the v2004 BIOS (BAR-enabled) + NVIDIA 470.05 beta driver + Strix 3090 with the 520W EVGA BAR-enabled vBIOS (94.02.42.C0.0C) got a PR score of 15 531. That's around 330 points (1.5 FPS on average) higher than with the same settings but the previous EVGA 520W vBIOS (94.02.42.00.0F). If 3DMark PR supports rBAR (I don't know and couldn't find that info), then the performance uptick with BAR enabled is around 2.4%. But if 3DMark PR does not support rBAR, then the gains may come from 470.05 itself...










BTW, you can find the EVGA 520W vBIOS with BAR support here: EVGA RTX 3090 VBIOS
I used this one and it works fine for me.
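A quick sanity-check of the uplift arithmetic in the post above (Python, plugging in the post's numbers); note the exact percentage depends on which run is taken as the baseline:

```python
def pct_uplift(new_score: float, old_score: float) -> float:
    """Percentage gain of new_score over old_score."""
    return (new_score - old_score) / old_score * 100.0

new = 15_531       # PR score with the BAR-enabled vBIOS
old = new - 330    # previous 520W vBIOS run, per the post
print(f"{pct_uplift(new, old):.1f}%")  # ~2.2% against this baseline
```

That lands a little under the ~2.4% quoted in the post, which presumably used a slightly different baseline score.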


----------



## schoolofmonkey

7empe said:


> Hey, my mobo M12E with v2004 bios (BAR-enabled) + NVIDIA 470.05 beta driver + Strix 3090 with 520W EVGA BAR-enabled vBIOS (94.02.42.C0.0C) got PR score of 15 531. Score higher by around 330 points (1.5 FPS in average) than with the same settings but with previous EVGA 520W vBIOS (94.02.42.00.0F). If 3dmark PR supports rBAR (I don't know that and couldn't find that info), then performance uptick with BAR-enabled is around 2.4%. But if 3dmark PR does not support rBAR, then performance gains may come from 470.05 itself...


I saw the same in Time Spy Extreme:


----------



## bmagnien

If anyone has an EVGA card and the 1000W KP BIOS, could they download the latest version of PX1, run the firmware/BIOS flash for ReBAR, then export the BIOS from GPU-Z/NVFLASH and upload it here? Looks like the KP520 obtained that way has already been uploaded to TPU and posted earlier. Thank you!


----------



## Zogge

Resizable bar working for me on 3090 strix.
Kingpin 520w new bios
Asus encore x299 0070 bios
Nvidia 470.xx driver


----------



## dante`afk

Can anyone tell which ASUS card is under the ASUS EKWB GeForce RTX 3090? Is it a Strix OC?


----------



## Zogge

I doubt that, as it is a 2-pin card.


----------



## 7empe

dante`afk said:


> can anyone tell which asus card is under the Asus EKWB GEFORCE RTX 3090 ? is it a strix OC?


Yep.


----------



## sultanofswing

7empe said:


> Hey, my mobo M12E with v2004 bios (BAR-enabled) + NVIDIA 470.05 beta driver + Strix 3090 with 520W EVGA BAR-enabled vBIOS (94.02.42.C0.0C) got PR score of 15 531. Score higher by around 330 points (1.5 FPS in average) than with the same settings but with previous EVGA 520W vBIOS (94.02.42.00.0F). If 3dmark PR supports rBAR (I don't know that and couldn't find that info), then performance uptick with BAR-enabled is around 2.4%. But if 3dmark PR does not support rBAR, then performance gains may come from 470.05 itself...
> 
> View attachment 2484419
> 
> 
> BTW. you can find EVGA 520W vBIOS with BAR support here: EVGA RTX 3090 VBIOS
> I used this one and works fine for me.


470.05 is worth around 2-300 points more versus the non beta driver.


----------



## Beagle Box

The new NVIDIA 465.89 driver returns high Port Royal scores similar to the 470.05 and 470.14 drivers.
3DMark doesn't recognize it as official at this time.


----------



## Pepillo

https://www.reddit.com/r/nvidia/comments/mgfm1f


----------



## Falkentyne

Gained 250 points from BAR update and new game ready drivers.









I scored 14 833 in Port Royal


Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com




Not sure whether the BAR update or the drivers are responsible for that.


----------



## mattxx88

Falkentyne said:


> Gained 250 points from BAR update and new game ready drivers.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 833 in Port Royal
> 
> 
> Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> Not sure if the BAR or drivers are the result of that or not.


It's the new drivers; I got 400+ points with 470.XX.


----------



## jomama22

dante`afk said:


> can anyone tell which asus card is under the Asus EKWB GEFORCE RTX 3090 ? is it a strix OC?





7empe said:


> Yep.


No, this is completely wrong. It is a 2 pin card as shown here:


----------



## J7SC

I look forward to the BAR BIOS update and drivers, but will probably wait a bit with my 3090 to see if there are bugs and related updates coming... also, per my earlier post yesterday, does anyone know whether Vulkan will support Resizable BAR (i.e. in RDR2)?


----------



## Sheyster

Pepillo said:


> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> The Kingpin 520W BAR enabled:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Just what I was looking for! +rep

Now we just need the updated 1K XOC KPE BIOS with RBAR support for the full collection.


----------



## Sheyster

Falkentyne said:


> Gained 250 points from BAR update and new game ready drivers.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 833 in Port Royal
> 
> 
> Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> Not sure if the BAR or drivers are the result of that or not.


How is SP4KO with BAR? I heard it was about 5% higher with AMD SAM.


----------



## Benni231990

Does any XOC BIOS exist that has all protections on, no RAM bug, and maybe BAR included?


----------



## ALSTER868

Falkentyne said:


> Not sure if the BAR or drivers are the result of that or not.


It's the drivers. I'm on an Asus Z390 so ReBAR isn't enabled, but I got +2xx points in PR at the same OC settings. It was 14 9xx on previous drivers.


----------



## Benni231990

I flashed my Gainward Phantom GS to the Kingpin BIOS with BAR enabled, but GPU-Z says disabled?

What did I do wrong?









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## KedarWolf

I received my waterblock and backplate for my Strix OC today. 

I won't be home from work for six hours though.


----------



## yzonker

Can't link to a specific post, but someone said in here that updating the 1kW XOC BIOS gimps the power limit.



Resizable BAR Support VBIOS Update?


----------



## kx11

ASUS ROG O24G 



https://dlcdnets.asus.com/pub/ASUS/Graphic%20Card/NVIDIA/Utilities/RTX3090_V3.exe


----------



## 050

I have an MSI Trio X 3090 flashed to the EVGA Kingpin BIOS to allow up to a 520W power limit (430W base) on water, and this seems to work just fine in Time Spy (~21500 GPU score; it hits ~500W on the GPU at 117% in Afterburner) and games like Ark: Survival Evolved, which I used to push it towards the power limit I set (Ark hit ~480W just fine, steady around 450W playing at 4K maxed out, ~55C on the GPU).
My confusion is that when I play in VR (beatsaber) I can check hwinfo64 afterwards and it shows that the card hit ~375-380w and performance limit-power shows a max of "yes" - indicating that it throttled/hit the power limit at ~380w. Any ideas why this is happening? I do have the afterburner profile set (117% power limit) and the gpu hit a max of 47c while playing beatsaber. As far as I can tell and have been testing on timespy and other games, it shouldn't be hitting a power limit in vr... is there some sort of vr-only bios that is getting activated?

I do see up to ~400w playing halflife alyx so it doesn't seem to be directly limited at 380w but it did again say that it had power limited in hwinfo64. Odd.

Any advice?


----------



## nievz

Anyone tried the unverified EVGA 500W yet? I'm running the Kingpin BIOS on air right now on my X Trio, but I don't like the auto fan curve; it's too high.









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## sultanofswing

3090 Hydrocopper With KPE 520w BIOS, Rebar enabled with new nvidia driver.








I scored 15 372 in Port Royal


Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## PLATOON TEKK

jura11 said:


> Now we need EVGA KPE XOC BIOS with BAR enabled
> 
> Hope this helps
> 
> Thanks, Jura


On it right now. For now, Precision crashes when I try to “update vBIOS”.

will link once it flashes and I extracted


----------



## Pepillo

nievz said:


> Anyone tried the unverified EVGA 500w yet? I'm running the Kipping BIOS on air right now on my X Trio but I don't like the auto fan curve, it's too high.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


The Kingpin BIOS is for a hybrid card and its fan curve does not work well on air-cooled cards; it is fine if you have the card under a water block.


----------



## erazortt

Benni231990 said:


> i flashed my gainward phantom gs to the kingpin with BAR enabled but in gpuz says disable ?
> 
> what make i wrong?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


You also need to enable it in your motherboard's BIOS.


----------



## PLATOON TEKK

1000w rebar bios (link removed, precision flashes 450w with rebar over 1000w for now)









I used precision to update the firmware and rebooted. Will test “power limit gimping” now.

EDIT: IGNORE. For some reason precision reflashes a 450w with rebar instead of a patched 1000w. At least according to gpuz.

Edit 2: this will also explain the “power gimping” people see. I noticed because power limit was up to 105 smh

just did this twice over, 2nd time I triple checked I was on 1000w before precision flash.


----------



## itssladenlol

sultanofswing said:


> 3090 Hydrocopper With KPE 520w BIOS, Rebar enabled with new nvidia driver.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 372 in Port Royal
> 
> 
> Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Can you do me a favor and show me the wattage of pins 1, 2, 3 and the PCIe slot under load on the Hydro Copper?
Big thanks.


----------



## yzonker

PLATOON TEKK said:


> 1000w rebar bios (link removed, precision flashes 450w with rebar over 1000w for now)
> View attachment 2484467
> 
> 
> I used precision to update the firmware and rebooted. Will test “power limit gimping” now.
> 
> EDIT: IGNORE. For some reason precision reflashes a 450w with rebar instead of a patched 1000w. At least according to gpuz.
> 
> Edit 2: this will also explain the “power gimping” people see. I noticed because power limit was up to 105 smh
> 
> just did this twice over, 2nd time I triple checked I was on 1000w before precision flash.


Yeah, it probably recognizes it as one of the shipped BIOSes. Thanks for trying though.


----------



## sultanofswing

itssladenlol said:


> Can you do me a favor and Show me load Watts of Pin 1 2 3 and pcie under load on the hydro copper?
> Big thanks.


Mine does not have the issue. I'll have to get another screenshot but I can pull all of the wattage available before hitting power limit.
Here you go, still on KPE 520w BIOS.


----------



## PLATOON TEKK

yzonker said:


> Yea it probably recognizes it as one of the shipped bios. Thanks for trying though.


no problem at all. Just posted on EVGA forums for an answer. Will keep you all posted if I get 1000w rebar working


----------



## gfunkernaught

050 said:


> I have an MSI trio x 3090 flashed to the evga kingpin bios to allow up to a 520w power limit (430 base) on water and this seems to work just fine in Timespy (~21500 gpu score and it hits ~500w on the gpu at 117% in afterburner) and games like Ark survival evolved which I used to push it towards the power limit I set (ark hit ~480w just fine, steady around 450w playing at 4k maxxed out, ~55c on the gpu.)
> My confusion is that when I play in VR (beatsaber) I can check hwinfo64 afterwards and it shows that the card hit ~375-380w and performance limit-power shows a max of "yes" - indicating that it throttled/hit the power limit at ~380w. Any ideas why this is happening? I do have the afterburner profile set (117% power limit) and the gpu hit a max of 47c while playing beatsaber. As far as I can tell and have been testing on timespy and other games, it shouldn't be hitting a power limit in vr... is there some sort of vr-only bios that is getting activated?
> 
> I do see up to ~400w playing halflife alyx so it doesn't seem to be directly limited at 380w but it did again say that it had power limited in hwinfo64. Odd.
> 
> Any advice?


I don't have advice other than use the 1kw bios and limit the power to 50. My trio would get the power limit warning at 470-480w no higher and throttle on both the evga 500w and 520w bios. At least with the 1kw bios set to PL 50%, I'd see power usage actually get to 500w and sometimes 502w.


----------



## itssladenlol

sultanofswing said:


> Mine does not have the issue. I'll have to get another screenshot but I can pull all of the wattage available before hitting power limit.
> Here you go, still on KPE 520w BIOS.


Exactly what I was looking for, thanks.


----------



## 050

gfunkernaught said:


> I don't have advice other than use the 1kw bios and limit the power to 50.


I will have to test that I suppose; I am mostly confused about the difference in power limit levels in and out of VR... outside of VR it seems to be working exactly as intended, but in VR, sticking to roughly a stock 3090 trio x power limit. Odd.


----------



## ViRuS2k

OK, I have the Suprim X 450W BIOS installed on my MSI Gaming X Trio, and I tried to update it to support ReBAR, but neither Dragon Center nor MSI Live Update 6 can update this BIOS.
I'm guessing you can't update a card running a BIOS other than its official one, or you need to flash an already-patched BIOS with ReBAR enabled instead?

I can't get my Gaming X Trio with the Suprim X BIOS updated.

Does anyone have the Suprim X 450W ReBAR-enabled BIOS?


----------



## nievz

ViRuS2k said:


> Ok i have the SuprimX 450w bios installed on my MSI gaming X trio and i tried to update this bios to support rebar but dragon center nor msi like update 6 can seem to update this bios.
> i am guessing you cant update a cards bios that has a different bios on your card other than the official bios working or you need to flash a already custom bios that has rebar enabled instead for it to work ?????
> 
> cause i cant seem to get my gaming x trio with suprimX bios updated lol
> 
> does anyone have the SuprimX 450w rebar enabled bios for this bios lol


Here's the SuprimX with ReBar support:








MSI RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## Thanh Nguyen

Over 200 points with new driver and 27c water.









I scored 16 034 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Bobbylee

Is anyone with a pny xlr8 able to get it to work? For me it says: "No need to update your vga bios! (Or not support for your vga card)"


----------



## Pepillo

All of you who are having problems with the BIOS, keep in mind that if you do not have BAR enabled on the motherboard (CSM disabled and Above 4G Decoding enabled), the BIOS patching of the graphics card fails or can even give errors and black screens.


----------



## ViRuS2k

nievz said:


> Here's the SuprimX with ReBar support:
> 
> 
> 
> 
> 
> 
> 
> 
> MSI RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Thanks


----------



## Falkentyne

050 said:


> I will have to test that I suppose; I am mostly confused about the difference in power limit levels in and out of VR... outside of VR it seems to be working exactly as intended, but in VR, sticking to roughly a stock 3090 trio x power limit. Odd.


Are you referring to Output power vs input power?
This is an issue for (mostly) 2x8 pin shunted cards. You can shunt all the shunts to reduce the input power rails to below the bios default/max levels, the shunted rails don't affect output power at all. And MSVDD and NVVDD limits don't appear anywhere on HWinfo64 but they affect "TDP Normalized %", so they have a limit somewhere which the VBIOS knows about.


----------



## nievz

ViRuS2k said:


> Thanks






The link is for the Gaming Mode BIOS. Please let me know how it goes; I think I'll move to the Suprim X instead of the EVGA one. 😉


----------



## 050

Falkentyne said:


> Are you referring to Output power vs input power?
> This is an issue for (mostly) 2x8 pin shunted cards. You can shunt all the shunts to reduce the input power rails to below the bios default/max levels, the shunted rails don't affect output power at all. And MSVDD and NVVDD limits don't appear anywhere on HWinfo64 but they affect "TDP Normalized %", so they have a limit somewhere which the VBIOS knows about.


Sorry no it was in reference to this issue I am having:



050 said:


> I have an MSI trio x 3090 flashed to the evga kingpin bios to allow up to a 520w power limit (430 base) on water and this seems to work just fine in Timespy (~21500 gpu score and it hits ~500w on the gpu at 117% in afterburner) and games like Ark survival evolved which I used to push it towards the power limit I set (ark hit ~480w just fine, steady around 450w playing at 4k maxxed out, ~55c on the gpu.)
> My confusion is that when I play in VR (beatsaber) I can check hwinfo64 afterwards and it shows that the card hit ~375-380w and performance limit-power shows a max of "yes" - indicating that it throttled/hit the power limit at ~380w. Any ideas why this is happening? I do have the afterburner profile set (117% power limit) and the gpu hit a max of 47c while playing beatsaber. As far as I can tell and have been testing on timespy and other games, it shouldn't be hitting a power limit in vr... is there some sort of vr-only bios that is getting activated?
> 
> I do see up to ~400w playing halflife alyx so it doesn't seem to be directly limited at 380w but it did again say that it had power limited in hwinfo64. Odd.
> 
> Any advice?


Basically in non-vr games and time spy I see the card use up to the power limit I have set in afterburner, 117% of the 430w nominal limit for the kingpin bios ~500w, and that's when I see the power limit trip in hwinfo64 - all good. On vr titles however, I see the card tripping that power limit flag at 380-400w. It is entirely possible (even likely) that it's a sub-section of the card that is power limiting but it's unclear what section that would be.

I'll attach two pics of hwinfo showing the power limits at ~375w (beat saber vr) and ~480w (playing Ark, non-vr). Apologies for the camera pic of the screen but it was quick at the time.


Spoiler: VR Game power limit - 375w

















Spoiler: Desktop Game power limit- 480w


----------



## Bobbylee

050 said:


> Sorry no it was in reference to this issue I am having:
> 
> 
> 
> Basically in non-vr games and time spy I see the card use up to the power limit I have set in afterburner, 117% of the 430w nominal limit for the kingpin bios ~500w, and that's when I see the power limit trip in hwinfo64 - all good. On vr titles however, I see the card tripping that power limit flag at 380-400w. It is entirely possible (even likely) that it's a sub-section of the card that is power limiting but it's unclear what section that would be.
> 
> I'll attach two pics of hwinfo showing the power limits at ~375w (beat saber vr) and ~480w (playing Ark, non-vr). Apologies for the camera pic of the screen but it was quick at the time.
> 
> 
> Spoiler: VR Game power limit - 375w
> 
> 
> 
> 
> View attachment 2484481
> 
> 
> 
> 
> 
> 
> Spoiler: Desktop Game power limit- 480w
> 
> 
> 
> 
> View attachment 2484482


I'm away from my PC so I can't check for you, but have a look in NVIDIA Profile Inspector for any VR-related settings you can try changing.


----------



## Midian

Pepillo said:


> All of you who are having problems with the bios, keep in mind that if you do not have the BAR on the motherboard (CSM Disable and 4G Decode Enable) the bios patching of the graphics card fails or even gives errors and screens in black.


I updated my Asus Strix 3090 OC before the motherboard and it worked just fine; maybe I was just lucky then. :O


----------



## Beagle Box

i7-8086
Z370
Missing out on all the fun.
Feels bad, man.
😖


----------



## 050

Bobbylee said:


> have a look in Nvidia profile Inspector for any vr related settings you can try to change


Oh, will do! Good idea, thanks!


----------



## HisN

Did any of you test the internal Hitman 3 benchmark after installing rBAR and the 465.89 driver?
Hitman 3 is newly on the rBAR whitelist, and the bench with the exploding books runs like crap now. :-(
I can't remember performance this awful before the rBAR update.

So now I have to figure out: is it rBAR, is it the 465.89 driver, or is it the update from the Epic Games Store that was rolled out today? Nice :-(


----------



## SoldierRBT

Tested the 465.89 driver without rBAR. Gained 200+ points. Something I noticed is that MVDDC power draw (memory) is 10W higher than on the previous driver with the exact same voltage of 1.325V and +1200 mem. Ran it multiple times and the power draw was the same 10W higher. Maybe the new driver is running tighter memory timings, so the score increases.

Before:









After


----------



## kx11

AC Valhalla runs worse for me after REBAR


----------



## bmagnien

kx11 said:


> AC Valhalla runs worse for me after REBAR


Are you using a riser by chance? Try something simple like Heaven or Firestrike. I noticed measurable gains in those relatively low-res, high-FPS synthetics.


----------



## emil2424

After updating the BIOS on the AORUS GeForce RTX 3090 XTREME WATERFORCE 24G (GV-N3090AORUSX W-24G) I cannot reach the maximum power limit of 370W +5% = 390W.









With the default bios (94.02.26.48.DF) the OC GPU worked at 1920-2040 MHz. TGP in the range 380-390W with jumps up to 404W (Max value in GPU-Z).

After upgrading to the F2 bios (94.02.59.00.5C) TGP is 340-350W with jumps up to 370W (Max value in GPU-Z).
Clocks with the same OC profile 1790-1845 MHz.

I tested in Superposition Benchmark and The Division 2 in 3440x1440 on Ultra. 

I reinstalled the drivers in safe mode using DDU but it didn't help.


----------



## jura11

Haven't tested the BAR function on my RTX 3090 yet; hopefully the KPE XOC BIOS will be patched later on. Thanks to everyone who tried to add the ReBAR feature to the KPE XOC BIOS.

Thanks, Jura


----------



## jura11

PLATOON TEKK said:


> 1000w rebar bios (link removed, precision flashes 450w with rebar over 1000w for now)
> View attachment 2484467
> 
> 
> I used precision to update the firmware and rebooted. Will test “power limit gimping” now.
> 
> EDIT: IGNORE. For some reason precision reflashes a 450w with rebar instead of a patched 1000w. At least according to gpuz.
> 
> Edit 2: this will also explain the “power gimping” people see. I noticed because power limit was up to 105 smh
> 
> just did this twice over, 2nd time I triple checked I was on 1000w before precision flash.


Thanks man for trying, hopefully Vince or EVGA will patch that KPE XOC BIOS for us

Thanks again for trying and help 

Thanks, Jura


----------



## kx11

bmagnien said:


> Are you using a riser per chance? Try something simple like Heaven or Firestrike. I noticed measurable gains in those relatively low res, high fps synthetics


Yeah, it's all good now; I had ReShade on and forgot.


----------



## Nizzen

Error 404


----------



## yzonker

jura11 said:


> Thanks man for trying, hopefully Vince or EVGA will patch that KPE XOC BIOS for us
> 
> Thanks again for trying and help
> 
> Thanks, Jura


Multiple people asked in the EVGA forum thread but no response from Jacob. He posted a pile of other BIOSes for people. A bit of a bummer, although RDR2 is the only game on the list I own, so not much of a loss ATM.


----------



## jura11

yzonker said:


> Multiple people asked in the EVGA forum thread but no response from Jacob. He posted a pile of other bios' for people. A bit of a bummer although RDR2 is the only game on the list I own so not much of a loss ATM.


Thanks guys for asking 👍

Not sure if I would gain anything with such a BIOS, but I'd at least like to try it in some benchmarks, hahaha.

Thanks, Jura


----------



## itssladenlol

bmagnien said:


> I did - this card is from the new program. It’s a rockstar with the KP bios, best chip and mem silicon I’ve gotten over the past 4 FTW3 3090s I’ve had, and no more issues with the pcie slot drawing too much power.
> My question is how to set memory higher than +1500 in afterburner? I downloaded Asus’s GPUTweak2 and was able to set the memory to the equivalent of +1750 there, but that program and its core offset/curve settings suck compared to afterburner. Any other ideas?


When you say no more issues with the PCIe slot drawing too much power, what numbers are we talking about on the new rev? 50W? 60W? 70W?
Just got a new 3090 FTW3 Ultra yesterday and would like to know the number just to be sure.
Thanks


----------



## EarlZ

nievz said:


> Here's the SuprimX with ReBar support:
> 
> 
> 
> 
> 
> 
> 
> 
> MSI RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Shows up as an F5 BIOS; is this an official release or user-modified?


----------



## des2k...

....


----------



## SoldierRBT

des2k... said:


> at +1500mem the diff between game mem power vs mining which has bigger load on the mem controller and memmory is 30w+.
> 
> So not suprised 10w more with rbar since it's using bigger chunks to transfer data


rBAR was disabled in both tests. The difference is between the 461.92 and 465.89 drivers with the same load and voltage.


----------



## schoolofmonkey

I gained 300 points on the GPU score in Time Spy Extreme with the 462.07 HF drivers once I enabled Re-BAR, and gained an extra 100 points with 465.89, so it's not "just the drivers".
The result on the left is from right after I did the Re-BAR BIOS flash (XC3 2x8-pin); the one on the right is from before the Re-BAR flash.
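The attribution argument above can be sketched numerically (Python; the scores below are illustrative round numbers, not the actual results): holding the driver fixed isolates the ReBAR contribution, then holding ReBAR fixed isolates the driver contribution:

```python
def decompose_gain(base, with_rebar_same_driver, with_rebar_new_driver):
    """Split a total score gain into a ReBAR part (driver held fixed)
    and a driver part (ReBAR held fixed)."""
    rebar_gain = with_rebar_same_driver - base
    driver_gain = with_rebar_new_driver - with_rebar_same_driver
    return rebar_gain, driver_gain

# Illustrative: +300 GPU points from ReBAR on 462.07, +100 more on 465.89.
rebar, driver = decompose_gain(10000, 10300, 10400)
print(rebar, driver)  # 300 100
```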




















----------



## bmagnien

itssladenlol said:


> When you say no more issues with pcie Slot drawing too much Power, what numbers are we talking About on the new rev? 50w?60w?70w?
> Just got a new 3090 FTW3 ultra yesterday and would like to know the number just to be sure.
> Thanks


So the new 500 XOC that comes with the cards from the new RMA program flat out didn't work for me. I threw the KP520 on there and it worked well, drawing 520-530W total, about 70W from the slot, but oddly enough the balancing of the pins was still weird: like 175W on pins 1 and 2, 100W on pin 3, and 70W on the slot. But I'm fine with that. On the XOC 1000 BIOS I was running around 700W with maybe 85W-ish on the slot, and the same ratio between pins 1, 2, and 3.


----------



## Falkentyne

emil2424 said:


> After updating the bios on the AORUS GeForce RTX 3090 XTREME WATERFORCE 24G GPU (GV-N3090AORUSX W-24G) I cannot reach the maximum power limit 370W +5% = 390W.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> With the default bios (94.02.26.48.DF) the OC GPU worked at 1920-2040 MHz. TGP in the range 380-390W with jumps up to 404W (Max value in GPU-Z).
> 
> After upgrading to the F2 bios (94.02.59.00.5C) TGP is 340-350W with jumps up to 370W (Max value in GPU-Z).
> Clocks with the same OC profile 1790-1845 MHz.
> 
> I tested in Superposition Benchmark and The Division 2 in 3440x1440 on Ultra.
> 
> I reinstalled the drivers in safe mode using DDU but it didn't help.


Hi,
This was discussed a few weeks ago either in this thread or in the shunt mod thread, but I think it was discussed here.

Different Vbioses have different internal rail limits. I'm not sure if it was MSI or Gigabyte that was being talked about, or one of the Asus TUF Bioses, but one of them had lower "MVDDC" power limits and another later vbios for that card had lower 8 pin power limits but HIGHER MVDDC Power limits. The user used the bios with the higher MVDDC power limits, because the 8 pin power limit is ignored, as it is the "SRC" limit that controls the maximum draw from the 8 pins, and the 8 pin power limit is used for calculating max TDP values.

If you can't reach a certain power limit, it's highly likely you are hitting a MVDDC limit. Another person just said that after he patched with resizeable BAR, his MVDDC power draw went up by 10 watts. MVDDC will NOT report to TDP% because it isn't part of TDP (TDP is 8 pins + PCIE Slot Power). However it _WILL_ report to TDP Normalized %, which takes every single individual rail's input vs a maximum value, and the highest percentage rail is what gets the Normalized ratio. When TDP Normalized gets close to the TDP% percentage, you will throttle there too.

BTW, the Nvidia BAR patch doesn't upgrade your vbios; it just patches the existing vbios. For FE 3090 cards, there are people with both the September 1 bios "94.02.27.00.0A" and the September 15 bios "94.02.32.00.02" patched with resizable BAR, because a region of the bios is patched, not the entire bios. But some AIBs will also upgrade the entire BIOS completely, and if that bios has different MVDDC limits, maybe it's lower than before. Or maybe it's the same, but the higher power draw puts you over the edge.

Check TDP Normalized % in HWiNFO64 and see if it's reporting over 100%.
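Rough sketch of the math, if it helps. The rail names and wattages below are made-up illustrative numbers, not values read from any real vbios; the point is just that TDP% sums only the 8-pin and slot rails, while Normalized% takes the single worst rail, so MVDDC can push Normalized over 100% while TDP% still shows headroom:

```python
# Rough sketch of TDP% vs TDP Normalized% as described above.
# Rail names and wattages are made-up illustrative numbers, NOT values
# read from any real vbios.

def tdp_percent(rails):
    # TDP% only counts the 8-pin inputs plus PCIe slot power.
    tdp_rails = ("8pin_1", "8pin_2", "pcie_slot")
    draw = sum(rails[r]["draw"] for r in tdp_rails)
    limit = sum(rails[r]["limit"] for r in tdp_rails)
    return 100.0 * draw / limit

def tdp_normalized_percent(rails):
    # Normalized% is the single worst rail: every rail (MVDDC included)
    # is compared to its own limit, and the highest ratio wins.
    return 100.0 * max(r["draw"] / r["limit"] for r in rails.values())

rails = {
    "8pin_1":    {"draw": 140.0, "limit": 150.0},
    "8pin_2":    {"draw": 140.0, "limit": 150.0},
    "pcie_slot": {"draw":  60.0, "limit":  66.0},
    "mvddc":     {"draw":  95.0, "limit":  90.0},  # memory rail over its own limit
}

print(f"TDP:            {tdp_percent(rails):.1f}%")             # stays under 100%
print(f"TDP Normalized: {tdp_normalized_percent(rails):.1f}%")  # over 100% -> throttle
```

In that situation the card throttles on Normalized even though the TDP slider still shows headroom.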


----------



## PowerK

Just flashed one of my 3090 HOFs.
What I found interesting is that BIOS version remains the same before/after flashing for resizable bar support. Is this the case for you guys as well?

Stock:










Flashed for resizable BAR support:


----------



## emil2424

Falkentyne said:


> Hi,
> This was discussed a few weeks ago either in this thread or in the shunt mod thread, but I think it was discussed here.
> 
> Different Vbioses have different internal rail limits. I'm not sure if it was MSI or Gigabyte that was being talked about, or one of the Asus TUF Bioses, but one of them had lower "MVDDC" power limits and another later vbios for that card had lower 8 pin power limits but HIGHER MVDDC Power limits. The user used the bios with the higher MVDDC power limits, because the 8 pin power limit is ignored, as it is the "SRC" limit that controls the maximum draw from the 8 pins, and the 8 pin power limit is used for calculating max TDP values.
> 
> If you can't reach a certain power limit, it's highly likely you are hitting a MVDDC limit. Another person just said that after he patched with resizeable BAR, his MVDDC power draw went up by 10 watts. MVDDC will NOT report to TDP% because it isn't part of TDP (TDP is 8 pins + PCIE Slot Power). However it _WILL_ report to TDP Normalized %, which takes every single individual rail's input vs a maximum value, and the highest percentage rail is what gets the Normalized ratio. When TDP Normalized gets close to the TDP% percentage, you will throttle there too.
> 
> BTW the Nvidia BAR patch doesn't upgrade your vbios. it just patches the existing vbios. For FE 3090 cards, there are both people with september 1 bios " 94.02.27.00.0A " and September 15th bios 94.02.32.00.02 patched with resizeable BAR, because a region of the bios is patched, not the entire bios. But some AIB's will also upgrade the entire BIOS completely, and if that bios has different MVDDC limits, maybe it's lower than before. Or maybe its the same but the higher power draw puts you over the edge.
> 
> check TDP Normalized % in HWinfo64 and see if it's reporting over 100%.


----------



## KedarWolf

Still getting better Port Royal results with the developer 417.12 than the newest full release BIOS.

BBIAB; gonna test Metro Exodus with BAR enabled and disabled and post my results. Apparently you get huge gains in that game, but it remains to be seen with everything, including ray tracing, maxed out. :/


----------



## J7SC

Cyberpunk is supposed to pick up close to 6% w/ resizable BAR, but there's also a huge new patch coming for Cyberpunk


----------



## satinghostrider

emil2424 said:


> After updating the bios on the AORUS GeForce RTX 3090 XTREME WATERFORCE 24G GPU (GV-N3090AORUSX W-24G) I cannot reach the maximum power limit 370W +5% = 390W.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> With the default bios (94.02.26.48.DF) the OC GPU worked at 1920-2040 MHz. TGP in the range 380-390W with jumps up to 404W (Max value in GPU-Z).
> 
> After upgrading to the F2 bios (94.02.59.00.5C) TGP is 340-350W with jumps up to 370W (Max value in GPU-Z).
> Clocks with the same OC profile 1790-1845 MHz.
> 
> I tested in Superposition Benchmark and The Division 2 in 3440x1440 on Ultra.
> 
> I reinstalled the drivers in safe mode using DDU but it didn't help.


I can confirm the same problem on my 3090 Aorus Waterforce WB. I am so pissed off.
I can move the slider to 105% but it does not utilise the additional power limit.
No idea what is going on. As it is, 2x8-pin cards are so power limited, and then this stupid resizable BAR BIOS update gimped them further.


----------



## schoolofmonkey

Falkentyne said:


> BTW the Nvidia BAR patch doesn't upgrade your vbios. it just patches the existing vbios. For FE 3090 cards, there are both people with september 1 bios " 94.02.27.00.0A " and September 15th bios 94.02.32.00.02 patched with resizeable BAR, because a region of the bios is patched, not the entire bios. But some AIB's will also upgrade the entire BIOS completely, and if that bios has different MVDDC limits, maybe it's lower than before. Or maybe its the same but the higher power draw puts you over the edge.
> 
> check TDP Normalized % in HWinfo64 and see if it's reporting over 100%.


I think EVGA did a full BIOS update for the XC3 Hybrid kit. Mine's actually now using a little over the default 366W, where the previous one wouldn't go over 360W, along with being able to maintain a higher base clock, 1755MHz to 1800MHz (it actually boosted over 2000MHz at one point, never done that before). I'm guessing that's due to a better power limit.
We'd been crying out for a long time, seeing the "beta" Hybrid kit BIOS was a tad crud compared to the stock non-hybrid BIOS. Looks like they listened; wouldn't have been a hard fix either.
I didn't think I'd see a decent improvement on this 2x8-pin card, but it was a nice surprise.


----------



## satinghostrider

Falkentyne said:


> Hi,
> This was discussed a few weeks ago either in this thread or in the shunt mod thread, but I think it was discussed here.
> 
> Different Vbioses have different internal rail limits. I'm not sure if it was MSI or Gigabyte that was being talked about, or one of the Asus TUF Bioses, but one of them had lower "MVDDC" power limits and another later vbios for that card had lower 8 pin power limits but HIGHER MVDDC Power limits. The user used the bios with the higher MVDDC power limits, because the 8 pin power limit is ignored, as it is the "SRC" limit that controls the maximum draw from the 8 pins, and the 8 pin power limit is used for calculating max TDP values.
> 
> If you can't reach a certain power limit, it's highly likely you are hitting a MVDDC limit. Another person just said that after he patched with resizeable BAR, his MVDDC power draw went up by 10 watts. MVDDC will NOT report to TDP% because it isn't part of TDP (TDP is 8 pins + PCIE Slot Power). However it _WILL_ report to TDP Normalized %, which takes every single individual rail's input vs a maximum value, and the highest percentage rail is what gets the Normalized ratio. When TDP Normalized gets close to the TDP% percentage, you will throttle there too.
> 
> BTW the Nvidia BAR patch doesn't upgrade your vbios. it just patches the existing vbios. For FE 3090 cards, there are both people with september 1 bios " 94.02.27.00.0A " and September 15th bios 94.02.32.00.02 patched with resizeable BAR, because a region of the bios is patched, not the entire bios. But some AIB's will also upgrade the entire BIOS completely, and if that bios has different MVDDC limits, maybe it's lower than before. Or maybe its the same but the higher power draw puts you over the edge.
> 
> check TDP Normalized % in HWinfo64 and see if it's reporting over 100%.


Yes, though GPU-Z and Afterburner show my max power limit under 100%, normalized TDP under HWiNFO shows 105%.
I am quite lost. Is this supposed to be normal after the VBIOS resizable BAR update?
I am boosting lower, though, compared to pre-resizable-BAR, while only maxing out the power limits.


----------



## Falkentyne

satinghostrider said:


> Yes though GPU-Z and Afterburner shows my max power limit under 100%, normalized TDP under HWINFO shows 105%.
> I am quite lost is this supposed to be normal after the VBIOS resize bar update?
> I am boosting lower though compared to pre-resize bar update. Only maxing out power limits.


Yes, if the vbar update:
1) caused MVDDC to draw more power when you were already close to the MVDDC power limit
2) flashed another vbios that has a lower MVDDC limit.

I've seen the AUX rails (these seem to be Misc 0-Misc3 in the newest HWinfo, and Misc 1-4 in older ones) moved around a bunch on different Bioses, but the aux rails sometimes have "default" limits as low as 30W, and I've never seen the "default" AUX limits affect TDP Normalized%, only the Aux limits in the "Max" column. Usually the default limit is based on 100% TDP and the "max" limit is based on whatever the TDP slider is at max, but it's possible some of these rails don't cause a throttle if they exceed the max values. I already know from testing that the 8 pin rails can exceed their max values because they're limited by the SRC rails, rather than their own values...

I've even seen PCIE Slot Power hard throttle you before it reaches the max value in the bios, also... and PCIE Slot Power and MVDDC have continuity with each other...


----------



## satinghostrider

Falkentyne said:


> Yes, if the vbar update:
> 1) caused MVDDC to draw more power when you were already close to the MVDDC power limit
> 2) flashed another vbios that has a lower MVDDC limit.
> 
> I've seen the AUX rails (these seem to be Misc 0-Misc3 in the newest HWinfo, and Misc 1-4 in older ones) moved around a bunch on different Bioses, but the aux rails sometimes have "default" limits as low as 30W, and I've never seen the "default" AUX limits affect TDP Normalized%, only the Aux limits in the "Max" column. Usually the default limit is based on 100% TDP and the "max" limit is based on whatever the TDP slider is at max, but it's possible some of these rails don't cause a throttle if they exceed the max values. I already know from testing that the 8 pin rails can exceed their max values because they're limited by the SRC rails, rather than their own values...
> 
> I've even seen PCIE Slot Power hard throttle you before it reaches the max value in the bios, also...and PCIE Slot Power and MVDDC has continuity with each other...
> 
> View attachment 2484548


What I realised on the earlier BIOS, though, is that in Cold War, even just maxing power limits (no OC), I could get crashes and black screens very randomly, once in a while. Overclocking even the smallest bit made them more frequent.

I don't know the technicalities of what you mentioned, but I'm guessing I can overclock mildly and hold stable post-resizable-BAR VBIOS because MVDDC is now allowed to draw higher power?

Just so confusing...


----------



## yzonker

Also seeing about a 200 point bump in PR. Hollow victory, but at least I'm finally into the 15s.









I scored 15 165 in Port Royal


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## yzonker

BTW, I did have to decrease my previous memory OC by 50MHz (could be less) to get it to complete with the new driver.


----------



## yzonker

J7SC said:


> Cyberpunk is supposed to pick up close to 6% w/ resizable BAR, but there's also a huge new patch coming for Cyberpunk


Hang on, so it gains performance even though it's not officially supported?


----------



## KedarWolf

Falkentyne said:


> Yes, if the vbar update:
> 1) caused MVDDC to draw more power when you were already close to the MVDDC power limit
> 2) flashed another vbios that has a lower MVDDC limit.
> 
> I've seen the AUX rails (these seem to be Misc 0-Misc3 in the newest HWinfo, and Misc 1-4 in older ones) moved around a bunch on different Bioses, but the aux rails sometimes have "default" limits as low as 30W, and I've never seen the "default" AUX limits affect TDP Normalized%, only the Aux limits in the "Max" column. Usually the default limit is based on 100% TDP and the "max" limit is based on whatever the TDP slider is at max, but it's possible some of these rails don't cause a throttle if they exceed the max values. I already know from testing that the 8 pin rails can exceed their max values because they're limited by the SRC rails, rather than their own values...
> 
> I've even seen PCIE Slot Power hard throttle you before it reaches the max value in the bios, also...and PCIE Slot Power and MVDDC has continuity with each other...
> 
> View attachment 2484548


Where did you get the ABE v0.09 app?


----------



## Nizzen

yzonker said:


> Hang on, so it gains performance even though it's not officially supported?



*GeForce RTX 30 Series Resizable BAR Supported Games* (as of March 30th, 2021): Assassin's Creed Valhalla, Battlefield V, Borderlands 3, Control, Cyberpunk 2077, Death Stranding, DIRT 5, F1 2020, Forza Horizon 4, Gears 5, Godfall, Hitman 2, Hitman 3, Horizon Zero Dawn, Metro Exodus, Red Dead Redemption 2, Watch Dogs Legion


----------



## J7SC

yzonker said:


> Hang on, so it gains performance even though it's not officially supported?


----------



## gfunkernaught

050 said:


> I will have to test that I suppose; I am mostly confused about the difference in power limit levels in and out of VR... outside of VR it seems to be working exactly as intended, but in VR, sticking to roughly a stock 3090 trio x power limit. Odd.


How is your VR headset connected to your 3090?


----------



## Falkentyne

KedarWolf said:


> Where did you get the ABE v0.09 app?


It was created and posted by bmgjet, who also made the useful shunt mod calculator. I don't think he likes me anymore.


----------



## KedarWolf

Falkentyne said:


> Was created and posted by bmgjet, who also made the useful shunt mod calculator. I don't think he likes me anymore


I can only find v0.06 but it works okay.


----------



## Gbone

What BIOS for MSI Gaming X Trio is best now that we have BAR support?


----------



## KedarWolf

Gbone said:


> What BIOS for MSI Gaming X Trio is best now that we have BAR support?


Try the Suprim X BIOS. Getting the best results with it.









MSI RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## gfunkernaught

KedarWolf said:


> Try the Suprim X BIOS. Getting the best results with it.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> MSI RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Have you ever tried the kp 1kw bios (limited to PL 40%) and compared it to the suprim x bios?


----------



## KedarWolf

gfunkernaught said:


> Have you ever tried the kp 1kw bios (limited to PL 40%) and compared it to the suprim x bios?


Yes, I've tried it at 50% and 75% and the Suprim X was better.


----------



## PowerK

J7SC said:


> Cyberpunk is supposed to pick up close to 6% w/ resizable BAR, but there's also a huge new patch coming for Cyberpunk


The 'huge' patch was already released yesterday.


----------



## ALSTER868

PowerK said:


> What I found interesting is that BIOS version remains the same before/after flashing for resizable bar support. Is this the case for you guys as well?


Same thing here after flashing ASUS Strix bios, version remained the same.


----------



## mirkendargen

ALSTER868 said:


> Same thing here after flashing ASUS Strix bios, version remained the same.


What seems stranger is that, looking through TechPowerUp, it looks like Gigabyte, and only Gigabyte, has a new 94.02.59.xx.xx version, while all the others are still on 94.02.42.xx.xx.


----------



## nievz

KedarWolf said:


> Try the Suprim X BIOS. Getting the best results with it.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> MSI RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


I flashed the Suprim X BIOS to my X Trio from the EVGA 520W but the vendor remained EVGA in GPU-Z. Apart from that, I don't see any issue. Any idea why this is? The BIOS version reflects the SuprimX one so I think it took it. Also, are you able to control RGB with this BIOS?


----------



## KedarWolf

nievz said:


> I flashed the Suprim X BIOS to my X Trio from the EVGA 520W but the vendor remained EVGA in GPU-Z. Apart from that, I don't see any issue. Any idea why this is? The BIOS version reflects the SuprimX one so I think it took it. Also, are you able to control RGB with this BIOS?
> 
> View attachment 2484563


I have the Suprim BIOS flashed and it's showing in GPU-Z and it has a different device ID than yours.

Did you reboot?


----------



## J7SC

PowerK said:


> The 'huge' patch is already released yesterday.


...s.th. 'to look forward to' then; I last played CP'77 Sunday night...


----------



## KedarWolf

nievz said:


> I flashed the Suprim X BIOS to my X Trio from the EVGA 520W but the vendor remained EVGA in GPU-Z. Apart from that, I don't see any issue. Any idea why this is? The BIOS version reflects the SuprimX one so I think it took it. Also, are you able to control RGB with this BIOS?
> 
> View attachment 2484563


I have an ASUS Strix OC and Dragon Centre sees it, but doesn't change the RGB on it.


----------



## nievz

KedarWolf said:


> I have the Suprim BIOS flashed and it's showing in GPU-Z and it has a different device ID than yours.
> 
> Did you reboot?


Yes, I've rebooted twice and Dragon Center doesn't see it. Isn't the device ID unique to the physical card you use, or is it the same per model?

This is the BIOS I flashed. Is it the same one you used?








MSI RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## KedarWolf

nievz said:


> Yes I've rebooted twice and Dragon Center doesn't sees it. Isn't device ID unique to the physical card you use or is it the same per model?
> 
> This is the BIOS i flashed. Is it the same one you used?
> 
> 
> 
> 
> 
> 
> 
> 
> MSI RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Yes, it is. But I don't think your flash worked. I see MSI and a different Device ID in GPU-Z.

Did you remember to reboot?

Edit: The Device ID goes by the BIOS.


----------



## Dreams-Visions

7empe said:


> Hey, my mobo M12E with v2004 bios (BAR-enabled) + NVIDIA 470.05 beta driver + Strix 3090 with 520W EVGA BAR-enabled vBIOS (94.02.42.C0.0C) got PR score of 15 531. Score higher by around 330 points (1.5 FPS in average) than with the same settings but with previous EVGA 520W vBIOS (94.02.42.00.0F). If 3dmark PR supports rBAR (I don't know that and couldn't find that info), then performance uptick with BAR-enabled is around 2.4%. But if 3dmark PR does not support rBAR, then performance gains may come from 470.05 itself...
> 
> View attachment 2484419
> 
> 
> BTW. you can find EVGA 520W vBIOS with BAR support here: EVGA RTX 3090 VBIOS
> I used this one and works fine for me.


late but thank you for the link.

Installing now. I look forward to seeing if ASUS does anything more than a simple modification to existing drivers. Plus, I like all of my outputs to be working. lol

edit: oh, I see ASUS released a BIOS for their cards too. I'll probably flash that one later. No big rush.


----------



## PowerK

PowerK said:


> Just flashed one of 3090 HOF.
> What I found interesting is that BIOS version remains the same before/after flashing for resizable bar support. Is this the case for you guys as well?
> 
> Stock:
> View attachment 2484543
> 
> 
> 
> Flashed for resizable BAR support:
> View attachment 2484544


Nobody here checks BIOS version before/after?


----------



## scaramonga

see below


----------



## scaramonga

erazortt said:


> For everyone with a reference 2-pin card, I just found this bios of a bar-enabled GALAX with the 390W bios.
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> I cannot test it right now, because I'm currently not at home. But that'll be the first thing I'm gonna do when I'm back later.


Anyone tested on a KFA2 yet?


----------



## 7empe

jomama22 said:


> No, this is completely wrong. It is a 2 pin card as shown here:
> 
> View attachment 2484432


You're right. It's 2-pin. Not sure why they did this... I would prefer stock 3-pin Strix + ekwb block installed manually in such case...


----------



## mattskiiau

Nevermind, figured it out. 🤪


----------



## nordschleife

scaramonga said:


> Anyone tested on a KFA2 yet?


Tested on a 3090 Galax SG and I can confirm it works well, with good performance on this board (haven't tested all the DPs though, only HDMI 2.1).


----------



## scosey

Galax with BAR works with my Ventus 3090, but the default is 370W and the power slider allows 105% = 390W.

The KFA2 BIOS I've used had a lower default power target and a 110% power slider, allowing 390W max.

I played some BF V last night, and this BIOS or BAR enabled definitely results in a higher power draw, around 10-15W with the same voltage, boost and everything. I limit my card to [email protected] and it's very easy to run into the 390W power limit with resolution scaling applied, even at [email protected]

With the older KFA2 BIOS, the peak was around 375W.
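For reference, the slider math behind those numbers is just the BIOS default power target times the slider percentage. A quick sketch (the KFA2 default isn't stated in the post, so 354W there is an assumed example):

```python
# Power-slider math: the slider scales the BIOS default power target.

def max_power_target(default_tgp_w, slider_percent):
    # Returns the maximum power target in watts for a given slider setting.
    return default_tgp_w * slider_percent / 100.0

# Galax BIOS: 370W default, slider caps at 105%
print(f"{max_power_target(370, 105):.1f}W")  # 388.5W, i.e. the "390W" rounded

# KFA2 BIOS: lower default but a 110% slider still lands near 390W max
# (354W default is an assumed example, not stated in the post)
print(f"{max_power_target(354, 110):.1f}W")  # 389.4W
```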


----------



## nievz

KedarWolf said:


> Yes, it is. But I don't think your flash worked. I see MSI and a different Device ID in GPU-Z.
> 
> Did you remember to reboot?
> 
> Edit: The Device ID goes by the BIOS.
> 
> View attachment 2484568


I have reflashed the same BIOS and it looks good now. Sadly, RGB control still doesn't work. 😞


----------



## Pepillo

With EVGA, using the X1 it seems that what it does is flash a new BIOS and not a patch; look at the date and version before and after the BAR update (Kingpin 520W BIOS):


----------



## Alex24buc

I am so nervous. Palit released the firmware tool for my Palit in order to enable resizable BAR, but my motherboard, the X299X Aorus Master from Gigabyte, hasn't gotten a BIOS for it. Does anyone know if Gigabyte will release a BIOS for the X299X Aorus Master? I cannot find an answer anywhere.


----------



## RaMsiTo

Does anyone have the BIOS of an XC3 Ultra with resizable BAR? Those of us who have power problems on the FTW3 Ultra cannot update through Precision.


----------



## lolhaxz

Pepillo said:


> With EVGA using the X1 it seems that what it does is flash a new bios and not a patch, look at the date and version before and after the BAR (Kipping 520w bios):


I would think this was pretty obvious... I did have a chuckle when I read whoever wrote that earlier.

A) The ROM must be signed and valid (in multiple ways).
B) In order to patch a ROM, the key(s) would be required. (There are very few sections of the BIOS you can modify (read: nothing worthwhile), but there are some in theory!)
C) NVIDIA has made some significant "mistakes" recently, but I doubt bundling up signing code in a "patcher" that would be easy to pick apart (or letting partners do so) is going to be one of them.

If you find something that does a read/patch/write, then let us all know... I'll happily reverse engineer it to make a BIOS editor.

I'll admit I have not actually investigated... but only one of the following is possible:

A) The BAR enablement is in a part of the BIOS that "doesn't matter".
B) These executables released by third parties can magically sign BIOS sections.


----------



## emil2424

I compared the two BIOSes, but I can't draw conclusions; I have no idea about these parameters. The "Max TDP PL" entry looks strange in the new BIOS.


----------



## ttnuagmada

I upgraded my 3080 FE to ReBAR and got a pretty large jump in RDR2. Haven't flashed my 3090 Strix yet. I've fallen behind in keeping up with this thread; is the KPE 520W still the recommended vBIOS for watercooled 3-pin cards?


----------



## pat182

Stupid ASUS, updating the X299 BIOS but not Z390...
Won't buy ASUS next time, such poor support.


----------



## gfunkernaught

KedarWolf said:


> Yes, I've tried it at 50% and 75% and the Suprim X was better.


Interesting. Which board do you have? Did you notice better performance in both bench and gaming?


----------



## ttnuagmada

pat182 said:


> stupid asus updating x299 bios but not z390...
> wont buy asus next time, such a poor support



Yeah, I've been a long-time ASUS fan, but being basically the only mobo manufacturer to not fully support PCIE 4.0 for RKL on their Z490 line really kinda rubbed me the wrong way.


----------



## Nizzen

pat182 said:


> stupid asus updating x299 bios but not z390...
> wont buy asus next time, such a poor support


Yeah right.... I bet you choose Gigabutt MB nex time due to the "best" support and "best" bioses 😅


----------



## Pepillo

Asus X299 happy owner here.


----------



## pat182

Pepillo said:


> Asus X299 happy owner here.


I was on X99 with a 5930K, then switched to Z390; I should have gone X299.


----------



## J7SC

7empe said:


> You're right. It's 2-pin. *Not sure why they did this*... I would prefer stock 3-pin Strix + ekwb block installed manually in such case...


Yeah, not sure why they did this, either. For that matter, it's a similar story with Gigabyte / Aorus. The Asus EKWB 3090 model might be based on the TUF / OC, which certainly has good reviews, but it is not Strix. On the Aorus side, with the RTX 2080 Ti, the factory full-waterblock version (XTR WF WB, I have those) shared the top XTR PCB design and had the top factory clocks, but with the RTX 3090, the Aorus XTR WB also is a 2x8-pin design instead of the 3x8-pin of the air-cooled RTX 3090 XTR... the latter also has higher clocks. It would be easy to speculate that it is a nod to miners, but I don't really know.


----------



## joyzao

Does anyone have the BIOS for the RTX 3090 ASUS ROG OC? I updated via the ASUS website but the card did not actually update; the BIOS ending in A9 is still there, yet the tool now says it is updated. Could someone make the BIOS available, to enable resizable BAR?


----------



## 050

gfunkernaught said:


> How is your VR headset connected to your 3090?


It is a valve index connected via the stock DisplayPort cable.

I did test the 1kw kingpin EVGA bios to see if it exhibited the same ~380w power limit flag in hwinfo64 as the 520w kingpin bios, and it does seem to. I set the power limit at 50% and tested loading up VR and the power limit flag tripped the same as before. I suspect it must be some subsection of the board with a specific power limit that is relevant to VR more than desktop games as they still test fine and hit 500w with the 1kw kingpin bios at 50%.

Two other oddities: I tested the 1kW BIOS at 30% power limit to see if it would hit 300W mining ETH, and it seemed that the 1kW BIOS did not want to hash at all. Is this some sort of safety function built into the very-high-power BIOS?

The second oddity - after flashing back to the kingpin 520w bios, the card is working fine but gives an error when I try to "nvflash --protecton", not allowing me to set the protection back on as I have done previously.

Not really a performance issue but odd.


----------



## pat182

joyzao said:


> Does anyone have the bios of the rtx 3090 asus rog oc? Here I updated on the asus website but the card has not updated, the bios A9 end continues, and now it says it is updated but it is not, could someone make it available to bios? To enable the rezise bar


they will probably release it today or next week;






ROG-STRIX-RTX3090-O24G-GAMING | ROG Strix | Gaming Graphics Cards｜ROG - Republic of Gamers｜ROG Canada


The ROG Strix GeForce RTX 3090 OC Edition 24GB GDDR6X unleash the maximum performance on NVIDIA Ampere Architecture, by featuring Axial-tech fan design, 0dB technology, 2.9-slot Design, Dual BIOS, Auto-Extreme Technology, SAP II, MaxContact Technology, and more.



rog.asus.com


----------



## Lord of meat

nievz said:


> I flashed the Suprim X BIOS to my X Trio from the EVGA 520W but the vendor remained EVGA in GPU-Z. Apart from that, I don't see any issue. Any idea why this is? The BIOS version reflects the SuprimX one so I think it took it. Also, are you able to control RGB with this BIOS?
> 
> View attachment 2484563


I had the same issue on the same card. Flash the BIOS back to the EVGA one using nvflash and reboot, then use the update tool on the EVGA forum to update it to the XOC one again. Reboot, shut down, turn on, then flash the MSI one and reboot.
That is what I did to fix mine; don't know if it will work for you. Oh, and I'm not responsible for any ****ups, yada yada.


----------



## Lord of meat

Pepillo said:


> The Trio is 100% compatible with EVGA BIOSes, both the 500W XOC and the 520W Kingpin; I've been with the latter for months.


Can you please send me the link to the Kingpin one so I can check?
The one I tried about a month ago bricked my card, so maybe I'm just mixing something up.


----------



## yzonker

050 said:


> It is a valve index connected via the stock DisplayPort cable.
> 
> I did test the 1kw kingpin EVGA bios to see if it exhibited the same ~380w power limit flag in hwinfo64 as the 520w kingpin bios, and it does seem to. I set the power limit at 50% and tested loading up VR and the power limit flag tripped the same as before. I suspect it must be some subsection of the board with a specific power limit that is relevant to VR more than desktop games as they still test fine and hit 500w with the 1kw kingpin bios at 50%.
> 
> Two other oddities - I tested the 1kw bios at 30% power limit to see if it would hit 300w mining ETH and it seemed that the 1kw bios did not want to hash at all. Is this some sort of safety function built into the high high power bios?
> 
> The second oddity - after flashing back to the kingpin 520w bios, the card is working fine but gives an error when I try to "nvflash --protecton", not allowing me to set the protection back on as I have done previously.
> 
> Not really a performance issue but odd.


You may have already answered this, haven't kept track really well. Anyway, what voltage does the card boost to in VR with 380w? 

To mine, you have to turn off P2 state. The XOC jumps to something like P8 when you use compute by default. Also does that if you use a negative memory offset. Can't remember the name of the program I use to do that. I'm sure someone on here knows if you don't. I'm at work right now.


----------



## 050

yzonker said:


> You may have already answered this, haven't kept track really well. Anyway, what voltage does the card boost to in VR with 380w?
> 
> To mine, you have to turn off P2 state. The XOC jumps to something like P8 when you use compute by default. Also does that if you use a negative memory offset. Can't remember the name of the program I use to do that. I'm sure someone on here knows if you don't. I'm at work right now.


I appreciate the help! It's a fast-moving thread; here is a post with hwinfo64 showing ~375w having tripped the power limit flag in VR and ~480w (correctly) tripping the flag outside of VR. Edit: Sorry, to answer more directly: 1.081v in both VR and non-VR use.



Spoiler: Previous Post






050 said:


> Sorry no it was in reference to this issue I am having:
> 
> 
> 
> Basically, in non-VR games and Time Spy I see the card use up to the power limit I have set in Afterburner (117% of the 430w nominal limit for the kingpin bios, ~500w), and that's when I see the power limit trip in hwinfo64 - all good. In VR titles, however, I see the card tripping that power limit flag at 380-400w. It is entirely possible (even likely) that it's a sub-section of the card that is power limiting, but it's unclear what section that would be.
> 
> I'll attach two pics of hwinfo showing the power limits at ~375w (beat saber vr) and ~480w (playing Ark, non-vr). Apologies for the camera pic of the screen but it was quick at the time.
> 
> 
> Spoiler: VR Game power limit - 375w
> 
> 
> 
> 
> View attachment 2484481
> 
> 
> 
> 
> 
> 
> Spoiler: Desktop Game power limit- 480w
> 
> 
> 
> 
> View attachment 2484482






It is tough to tell if this is really causing me performance problems, but it is still something I'd like to figure out the cause of, even if I then have to leave it at "yeah, it's the memory power draw hitting its max" or something similar.

As for mining, that is interesting about the power states. I flashed back to the kingpin 430w/520w bios and it mines again as it did before, so that's no big issue; it must just be a difference in the way those two bioses handle the power states.


----------



## J7SC

KedarWolf said:


> I have the Suprim BIOS flashed and it's showing in GPU-Z and it has a different device ID than yours.
> 
> Did you reboot?


Re. your preference for the MSI Suprim X bios over the 3090 Strix OC stock one: does it pull the same typical peak watts, i.e. in GPU-Z? My Strix is purring like a kitten on the original bios, but wondering...


----------



## des2k...

050 said:


> It is a valve index connected via the stock DisplayPort cable.
> 
> I did test the 1kw kingpin EVGA bios to see if it exhibited the same ~380w power limit flag in hwinfo64 as the 520w kingpin bios, and it does seem to. I set the power limit at 50% and tested loading up VR and the power limit flag tripped the same as before. I suspect it must be some subsection of the board with a specific power limit that is relevant to VR more than desktop games as they still test fine and hit 500w with the 1kw kingpin bios at 50%.
> 
> Two other oddities - I tested the 1kw bios at 30% power limit to see if it would hit 300w mining ETH, and it seemed that the 1kw bios did not want to hash at all. Is this some sort of safety function built into the high-power bios?
> 
> The second oddity - after flashing back to the kingpin 520w bios, the card is working fine but gives an error when I try to "nvflash --protecton", not allowing me to set the protection back on as I have done previously.
> 
> Not really a performance issue but odd.


No safety function; that vbios only has P1 states, to help stability for LN2. Disable the P2 compute state with NV Inspector and that will fix it.


----------



## des2k...

050 said:


> I appreciate the help! It's a fast-moving thread; here is a post with hwinfo64 showing ~375w having tripped the power limit flag in VR and ~480w (correctly) tripping the flag outside of VR. Edit: Sorry, to answer more directly: 1.081v in both VR and non-VR use.
> 
> 
> It is tough to tell if this is really causing me performance problems, but it is still something I'd like to figure out the cause of, even if I then have to leave it at "yeah, it's the memory power draw hitting its max" or something similar.
> 
> As for mining, that is interesting about the power states. I flashed back to the kingpin 430w/520w bios and it mines again as it did before, so that's no big issue; it must just be a difference in the way those two bioses handle the power states.


In VR the card seems to use the encoder/decoder, according to hwinfo.

Are you streaming / encoding when playing VR?
Usually if I do that while running 3DMark I will get a board power drop. It doesn't seem to affect performance, so maybe it's a reporting issue.


----------



## jomama22

joyzao said:


> Does anyone have the bios of the RTX 3090 ASUS ROG OC? I updated from the ASUS website but the card has not updated - the bios still ends in A9, and now it says it is updated but it is not. Could someone make the bios available, to enable the resize bar?


The update from earlier in this thread is correct (the post has a V3 in the link). The bios version number is retained, but BAR has been enabled. I can confirm on my end that both GPU-Z and NVCP report BAR as enabled.


----------



## nievz

nievz said:


> I have reflashed the same BIOS and it looks good now. Sadly, RGB control still doesn't work. 😞
> 
> View attachment 2484581


Just an update: not sure what happened before, but I'm now able to control the X Trio RGB with the Suprim X BIOS in Mystic Light. There must have been some issue with MSI services not running, but I don't know for sure. So far everything is working great. I have it undervolted, so 450w is plenty of headroom for me in case there are spikes.😊


----------



## Pepillo

Lord of meat said:


> Can you please send me the link to the Kingpin one so I can check?
> The one I tried ~a month ago bricked my card, so maybe I'm just mixing something up.


VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp


----------



## yzonker

050 said:


> I appreciate the help! It's a fast-moving thread; here is a post with hwinfo64 showing ~375w having tripped the power limit flag in VR and ~480w (correctly) tripping the flag outside of VR. Edit: Sorry, to answer more directly: 1.081v in both VR and non-VR use.
> 
> 
> It is tough to tell if this is really causing me performance problems, but it is still something I'd like to figure out the cause of, even if I then have to leave it at "yeah, it's the memory power draw hitting its max" or something similar.
> 
> As for mining, that is interesting about the power states. I flashed back to the kingpin 430w/520w bios and it mines again as it did before, so that's no big issue; it must just be a difference in the way those two bioses handle the power states.


I guess I was really asking if the card is actually not boosting as high during game play in VR. I can't tell from the max in HWINFO since it will boost high in menus, etc... Or did you reset it and what you are showing is only during game play?


----------



## Sheyster

pat182 said:


> Stupid ASUS, updating the X299 bios but not Z390...
> Won't buy ASUS next time, such poor support.


Apparently Gigabyte is updating Z390 but has not updated X299. Maybe ASUS will update Z390 soon since MSI and GB have already done so.


----------



## 050

des2k... said:


> VR the card seem to use the encoder/decoder according to hwinfo.
> 
> Are you streaming / encoding when playing VR ?
> Usually if I do that when running 3dmark I will get board power drop. Doesn't seem to affect performance, maybe a reporting issue .


I loaded up VR and noted that it hits ~370w and trips the power limit flag during the launch of SteamVR, and after that I used Discord to stream to friends, so I suspect that's the encode/decode load. That is interesting though; I may have to test streaming another (desktop) game to see if it drops the power limit when streaming.


----------



## 050

yzonker said:


> I guess I was really asking if the card is actually not boosting as high during game play in VR. I can't tell from the max in HWINFO since it will boost high in menus, etc... Or did you reset it and what you are showing is only during game play?


I tested last night after tweaking my OC curve, launching VR, then resetting hwinfo while in VR - I was actually checking whether the cpu was dropping clock speed during VR when I switched Windows to a balanced power plan. I saw that during VR the card did boost up to 2085MHz at 1.075v, so that's good. I will have to test again and keep an eye on it, but I think it is generally staying fully boosted during VR loads. I'll have a friend watch my screen to see what the speeds look like during gameplay, thank you for the idea!


----------



## warrior-kid

Got a bit behind the news, folks. I've got a Gaming X Trio with an EVGA FTW3 BIOS, as follows:

Video BIOS Version: 94.02.26.48.f8

What is the best BIOS to try for re-BAR? Is it safe to just let EVGA Precision upgrade it? Or is it better to go for other newer BIOSes first, like Suprim or Kingpin?

Many thanks!!


----------



## ALSTER868

joyzao said:


> Does anyone have the bios of the RTX 3090 ASUS ROG OC? I updated from the ASUS website but the card has not updated - the bios still ends in A9


The bios version remains the same; mind you, it's not only ASUS - other vendors kept the previous bios number too. I also updated from the official website, so here's the link:





ROG Strix GeForce RTX 3090 OC Edition 24GB GDDR6X | Graphics Cards (rog.asus.com)





Check the bios build date and you'll see it's the newest.


----------



## yzonker

050 said:


> I tested last night after tweaking my OC curve and then launching VR then resetting hwinfo while in vr - I was actually checking to see if the cpu was dropping clock speed during VR when I switched windows to a balanced power plan. I saw that during VR the card did boost up to 2085MHz at 1.075v so that's good; I will have to test again and keep an eye on it but I think it is generally staying fully boosted during vr loads. I'll have a friend watch my screen to see what the speeds look like during gameplay, thank you for the idea!


But if it boosts to or near the voltage limit, then there's really no significant performance to be gained, right? It's just that the VR game you're playing isn't loading the card as heavily. Not sure why it would show power limit in that case, though.


----------



## Sheyster

joyzao said:


> Does anyone have the bios of the rtx 3090 asus rog oc? Here I updated on the asus website but the card has not updated, the bios A9 end continues, and now it says it is updated but it is not, could someone make it available to bios? To enable the rezise bar


If you want the official ASUS updater for Re-Bar support, you can get it here:






ROG-STRIX-RTX3090-O24G-WHITE | ROG Strix | Gaming Graphics Cards｜ROG - Republic of Gamers｜ROG USA (rog.asus.com)





EDIT: I noticed you want the actual BIOS, not the updater. I recommend the new EVGA 500W BIOS instead; it's much better IMHO. Here it is - rename it to .ROM and flash.
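For anyone who hasn't cross-flashed before, the usual nvflash sequence looks roughly like this. This is only a sketch: the file names are placeholders, and exact flag spellings can differ between nvflash builds, so check your version's help output first.

```shell
# Save a backup of the current vBIOS before touching anything
nvflash64 --save backup.rom

# Temporarily disable the EEPROM write protection
nvflash64 --protectoff

# Flash the downloaded BIOS (renamed to .rom); -6 overrides the
# PCI subsystem ID mismatch check when flashing another vendor's BIOS
nvflash64 -6 evga_500w.rom

# Re-enable write protection, then reboot for the new BIOS to take effect
nvflash64 --protecton
```

Keep the backup somewhere safe; it is the only easy way back to stock if the new BIOS misbehaves.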


----------



## Domstercool

Hi all,

So back when I got my Palit 3090 Gaming Pro OC I flashed it with the Gigabyte Gaming OC 3090 bios to get the maximum limit to go up to 390w. What is the recommended flash for this now that the Resize Bar BIOS is out? I can't seem to find a Gigabyte Gaming OC 3090 bios on its own to use nvflash with.

Thanks.


----------



## ArcticZero

For some reason the latest drivers (465.89) seem to have done something to power, as my core clocks have stabilized somewhat. The card sticks to a solid 1200 while mining, when it used to drop a few steps every now and then. Running the same power limit as before, GPU-Z started to report the perfcap reason shifting quickly between idle and power, when it used to be a solid green power bar. Because of this, I was able to drop my power limit by 1% and maintain the same hash rates.

I haven't tested in games yet, but it's an interesting observation.


----------



## Christopher2178

emil2424 said:


> I compared the two bioses, but I cannot draw conclusions - I have no idea about these parameters. The "Max TDP PL" entry looks strange in the new bios.
> View attachment 2484585


Does anyone know why the new Gigabyte bioses don't load correctly in ABE? Every other brand seems to work fine and still loads in ABE after the BAR updates, but not the new backed-up BAR Gigabyte bioses. Are they somehow corrupted by the way Gigabyte updates them (or the way they are then backed up), or is ABE just not reading these for some reason?

Thanks!


----------



## 050

yzonker said:


> But if it boosts to or near the voltage limit then there's really no significant performance to be gained right? It's just that the VR game you're playing isn't loading the card as heavily. Not sure why it would show power limit in that case though.


Yeah, it's perhaps just me being too nit-picky, but I am mostly trying to figure out why it's showing a power limit at ~380w and not closer to 500w, rather than chasing a big performance issue. If it _is_ power limiting as it says, that could still cause some issues, but for now it's more of an oddity than a major problem.


----------



## yzonker

050 said:


> Yeah, it's perhaps just me being too nit-picky, but I am mostly trying to figure out why it's showing a power limit at ~380w and not closer to 500w, rather than chasing a big performance issue. If it _is_ power limiting as it says, that could still cause some issues, but for now it's more of an oddity than a major problem.


Well, it won't go past the voltage limit, which you are close to. Most of the time mine will stop at 1.081 due to the VF curve having multiple points at the same frequency, so it doesn't make it to 1.093 or 1.1 without tweaking the curve.

But yea, it is odd it shows power limit. I'll try my Oculus when I get a chance and see if it behaves the same way. 

What games are you comparing (vr vs desktop)?


----------



## VinnieM

Christopher2178 said:


> Does anyone know why the new Gigabyte bioses don't load correctly in ABE? Every other brand seems to work fine and still loads in ABE after the BAR updates, but not the new backed-up BAR Gigabyte bioses. Are they somehow corrupted by the way Gigabyte updates them (or the way they are then backed up), or is ABE just not reading these for some reason?
> 
> Thanks!


I don't know, but the new Gigabyte ReBAR bioses seem very restricted in max VRAM power limit. I found the new Gigabyte Gaming OC bios on TechPowerUp and it loads correctly in ABE, but it only has a max limit of 83.5W for VRAM. The new bios for my Master seems to have the same limit, because it starts throttling at 350W...
I just flashed the GALAX ReBAR bios, which has a 100W max VRAM PL, and with that bios throttling starts at 390W, so that's a lot better.


----------



## 050

yzonker said:


> What games are you comparing (vr vs desktop)?


I tested Beat Saber and saw a power limit flag in hwinfo64 with a peak power shown of 375w; Half-Life: Alyx showed a power limit flag with a peak power closer to 400w. For non-VR I have tested Time Spy and Ark: Survival Evolved, both of which power limited showing a peak power right around 500w (117% is my power limit set in Afterburner -> 430w*1.17=503w), so that checks out. It is possible that despite flagging the power limit, VR games could pull more (up to 500w) if I found the right game/section of a game. If that's the case, I suppose there's no issue other than the odd flag before the set power limit. I will have to test other VR games and see if others draw more power.
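For reference, the slider math in that last line is just the bios's nominal limit scaled by the slider percentage; a quick sketch (the 430w nominal and 117% cap are the values from this post):

```python
def effective_limit(nominal_w: float, slider_pct: float) -> float:
    """Board power limit in watts for a given power-limit slider setting."""
    return nominal_w * slider_pct / 100.0

# Kingpin 520w bios: 430w nominal, slider maxed at 117%
print(effective_limit(430, 117))  # 503.1, matching the ~500w seen outside VR
```

The same arithmetic works the other way, e.g. a 50% slider on a 1000w XOC bios lands at 500w.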


----------



## kmellz

Anyone with a Palit 3090 GameRock OC that can't use the rebar updater? ( ::Palit Products - GeForce RTX™ 3090 GameRock OC :: )
It just says the card is not supported / doesn't need it.


----------



## yzonker

050 said:


> I tested Beat Saber and saw a power limit flag in hwinfo64 with a peak power shown of 375w; Half-Life: Alyx showed a power limit flag with a peak power closer to 400w. For non-VR I have tested Time Spy and Ark: Survival Evolved, both of which power limited showing a peak power right around 500w (117% is my power limit set in Afterburner -> 430w*1.17=503w), so that checks out. It is possible that despite flagging the power limit, VR games could pull more (up to 500w) if I found the right game/section of a game. If that's the case, I suppose there's no issue other than the odd flag before the set power limit. I will have to test other VR games and see if others draw more power.


I think that's correct. I have HL installed. I'll try it when I get a chance and report back. You might try cranking the render resolution way up to see if that results in a higher power draw.


----------



## jomama22

Testing the new driver and rebar, I was able to get that same additional score in Port Royal:

No voltage tweaking, could probably squeeze a bit more out:
15777 








I scored 15 777 in Port Royal - AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





Some voltage tweaking; I could definitely get more out of it, given time:
15985








I scored 15 985 in Port Royal - AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


----------



## J7SC

jomama22 said:


> Testing the new driver and rebar, was able to get that same additional scoring in in port royal:
> (...)
> Some voltage tweaking; I could definitely get more out of it, given time:
> 15985
> 
> I scored 15 985 in Port Royal - AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


Noice! Only 15 more points to the 'magic' 16,000 in PR. And a 39C average temp.


----------



## jomama22

J7SC said:


> Noice ! Only 15 more points to the 'magic' 16,000 in PR. 39C average temp


Yeah, when you start upping the voltage above stock it gets hot fast, lol. The other score is just at the limit AB (or whatever software you're using) will top out at. I was at around 1.27v at the die for that last score, lol.


----------



## ViRuS2k

OK guys, having a couple of issues. I wanted to try out rebar on my 3090 Trio with the Suprim X rebar bios.

I tried playing Cyberpunk 2077 (haven't played it in a while, and with the new 1.2 patch out I thought I would give it a try). When I press play on Steam, the launcher comes up like normal, then I press play on that and it closes like normal, but then the game never boots up at all; it's like the game auto-closes.

Then I tried Gears 5 (PC, Game of the Year edition), and all I get is the game booting up, detecting no compatible graphics card, and saying it's running in Microsoft renderer mode, meaning CPU software only? I have no idea how to fix this.

I tried a few other games and they work properly:
Crash Bandicoot 4, Little Nightmares 2, and Resident Evil 2 and 3 all work, so no idea why I'm getting issues with these two games.

Anyone have any suggestions? BTW, I am running two graphics cards in my system, a 3090 and a 1080 Ti, as I mine and get higher hashrates with the 1080 Ti in the system,
though I disabled that card in Device Manager before trying the games, and it's still the same problem with those two.


----------



## ttnuagmada

Flashed the rebar 520w bios and updated the drivers, but it's telling me reBAR isn't enabled? wut do?


----------



## yzonker

ttnuagmada said:


> flashed the rebar 520w bios and updated the drivers, but telling me reBAR isn't enabled? wut do?


And it's enabled in your mobo bios?


----------



## Falkentyne

ttnuagmada said:


> flashed the rebar 520w bios and updated the drivers, but telling me reBAR isn't enabled? wut do?


Rebar is a patch to an existing BIOS.
It may not work if you used NVFlash to flash someone else's BIOS dump, but I don't know for sure.
I've already seen someone say that they used the FE patcher to enable Rebar on their bios, then upgraded to a newer BIOS from techpowerup, and it was still enabled.


----------



## yzonker

Soooo, I decided to try testing reBar on my Zotac 3090. I used RDR2 since it has a built-in benchmark. My goal was to decide whether I wanted to stick with the XOC bios or go to a 390w bios with reBar support. I had to leave RDR2 in Vulkan because I just get a memory error when I try to run it in DX12. So is Vulkan supported by reBar? I ask because of the results.

I'm only giving min/avg because it seemed to be stopping at 60fps - maybe vsync was on. I didn't care about max fps, and it was below 60 for 99% of the runs. (Yea, I kinda suck at testing.)

XOC @390w

Min: 33.8 fps
Avg: 47.6

XOC @500w (yes I checked destiny's HWINFO addin ) PL 80%

Min: 34.9
Avg: 49.6

Galax 390w

Min: 35.6
Avg: 47.4!!!

So maybe an increase in min fps, but from doing 3 runs back to back with the same settings I observed that this number can change by +-1 fps, so the increase you see for reBar is almost within the range of error. But it did beat the 500w run on min fps, so probably an improvement. Probably.

Comments? Invalid because of Vulkan?
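A quick way to formalize that "within the range of error" judgment: treat the observed +-1 fps run-to-run scatter as the noise band, and only call a delta real if it exceeds it. A minimal sketch using the min/avg numbers from this post (the 1.0 fps scatter value is just the back-to-back variation reported above):

```python
def beyond_noise(baseline: float, candidate: float, scatter: float = 1.0) -> bool:
    """True if the fps delta exceeds the observed run-to-run scatter (+-1 fps here)."""
    return abs(candidate - baseline) > scatter

# Min fps: Galax/reBar 35.6 vs XOC@390w 33.8 -> outside the noise band
print(beyond_noise(33.8, 35.6))   # True
# Avg fps: Galax 47.4 vs XOC@390w 47.6 -> inside the noise band
print(beyond_noise(47.6, 47.4))   # False
```

By this crude test the min-fps gain looks real and the average-fps change does not, which matches the conclusion above.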


----------



## 050

yzonker said:


> I think that's correct. I have HL installed. I'll try it when I get a chance and report back. You might try cranking the render resolution way up to see if that results in a higher power draw.


For what it's worth, I tested more beat saber and saw a peak draw of 408w, so I suspect that the power limit flag is not actually indicative of the card capping, it just reports it early in vr for reasons unclear.


----------



## J7SC

yzonker said:


> Soooo, I decided to try testing reBar on my Zotac 3090. (...)
> 
> So maybe an increase on min fps, but I observed from running 3 runs back to back with the same settings that this number can change by +-1fps. So the increase you see for reBar is almost within the range of error. But it did beat the 500w run, so probably an improvement. Probably.
> 
> Comments? Invalid because of Vulkan?


...also still trying to figure out whether Vulkan and resizable BAR is a thing. Still on the old vbios anyhow, but did have a chance this afternoon to get 90 min with the 3090 on the patched CP2077, and a bit of night-flying in London...debating whether I should update various bios over the Easter break, or go camping instead... 🥴 



Spoiler


----------



## yzonker

050 said:


> For what it's worth, I tested more beat saber and saw a peak draw of 408w, so I suspect that the power limit flag is not actually indicative of the card capping, it just reports it early in vr for reasons unclear.


Yea you need something like MSFS 2020 with render scale cranked to see 500w.


----------



## yzonker

J7SC said:


> ...also still trying to figure out whether Vulkan and resizable BAR is a thing. Still on the old vbios anyhow, but did have a chance this afternoon to get 90 min with the 3090 on the patched CP2077, and a bit of night-flying in London...debating whether I should update various bios over the Easter break, or go camping instead... 🥴


So I rebooted and turned reBar off again (same Galax bios). Got this in 2 consecutive runs:

Min: 32.1
Avg: 47.35

Min: 33.3
Avg: 47.2 

So maybe the min increase is real.

I also just tried a static scene in CP2077 (loading a save point each time). This seemed to be a consistent 1 fps gain (51-52 vs 52-53).

So unless I can find something that makes this more appealing, back to the XOC for me.


----------



## Nizzen

yzonker said:


> Soooo, I decided to try testing reBar on my Zotac 3090. I used RDR2 since it has a built in benchmark. My goal was to decide whether I wanted to stick with the XOC bios or go to a 390w bios with reBar support. I had to leave RDR2 in Vulkan because I just get a memory error when I try to run it in DX12. So is Vulkan supported by reBar? I ask because of the results.
> 
> I'm only giving min/avg because it seemed to be stopping at 60fps, maybe vsync was on. Didn't care about max fps and it was below 60 for 99% of the runs. (yea I kinda suck at testing)
> 
> XOC @390w
> 
> Min: 33.8 fps
> Avg: 47.6
> 
> XOC @500w (yes I checked destiny's HWINFO addin ) PL 80%
> 
> Min: 34.9
> Avg: 49.6
> 
> Galax 390w
> 
> Min: 35.6
> Avg: 47.4!!!
> 
> So maybe an increase on min fps, but I observed from running 3 runs back to back with the same settings that this number can change by +-1fps. So the increase you see for reBar is almost within the range of error. But it did beat the 500w run, so probably an improvement. Probably.
> 
> Comments? Invalid because of Vulkan?


Crazy!


----------



## Sheyster

yzonker said:


> So unless I can find something that makes this more appealing, back to the XOC for me.


I'm waiting for now as well. Honestly it's the mobo BIOS that is causing me to hesitate. It's a bit of work getting everything dialed in again from scratch with a new mobo BIOS.


----------



## yzonker

Sheyster said:


> I'm waiting for now as well. Honestly it's the mobo BIOS that is causing me to hesitate. It's a bit of work getting everything dialed in again from scratch with a new mobo BIOS.


Yea, that sucked ass. I wanted the newer Gigabyte bios though, because they've been improving 5000-series CPU support. I took pictures of my settings with my phone before flashing.


----------



## yzonker

I can reconfirm that the Galax bios works fine, though. It seems to perform well for a 390w bios - as good as the others I've tried, anyway.


----------



## Falkentyne

Got 128 FPS average in the Watch Dogs: Legion benchmark at 1080p Ultra, ray tracing disabled, DLSS off, with rebar disabled (180 FPS max),
and 117 FPS average with rebar enabled (171 FPS max).

..............

With Ray Tracing on Ultra, FPS was identical for Rebar on and off (87 FPS average). I didn't pay attention to the max.


----------



## yzonker

Falkentyne said:


> Got 128 FPS average in Watch dogs:L Benchmark at 1080p Ultra, Ray Tracing disabled, DLSS Off, with rebar disabled (180 FPS max)
> 117 FPS average with Rebar enabled (171 FPS max).
> 
> ..............
> 
> With Ray Tracing on Ultra, FPS was identical for Rebar on and off (87 FPS average). I didn't pay attention to the max.


But it's supposed to be faster.









GeForce RTX 3090 Gets 3% Performance Boost From Resizable BAR - 'Free' performance increase for all, or at least all RTX 3090 owners. (www.tomshardware.com)





Based on that, RDR2 might not be a good choice though.


----------



## J7SC

yzonker said:


> But it's supposed to be faster.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GeForce RTX 3090 Gets 3% Performance Boost From Resizable BAR - 'Free' performance increase for all, or at least all RTX 3090 owners. (www.tomshardware.com)
> 
> 
> 
> 
> 
> Based on that, RDR2 might not be a good choice though.


...given the r_BAR linkage to memory, would 4K benches show the highest impact?


----------



## satinghostrider

yzonker said:


> I can reconfirm that Galax bios works fine though. Seems to perform well for a 390w bios. As good as the others I've tried anyway.


I am keen to know if it would work with Gigabyte Xtreme Waterforce 3090 WB card.
The new resize bar update crippled the 2x8-pin card even further, not allowing you to max the power limiter.
I can only hit a peak of 360W now, instead of the usual 390-400W peak I used to see.


----------



## yzonker

J7SC said:


> ...given the r_BAR linkage to memory, would 4K benches show the highest impact ?


No idea at this point. That's what I tested at because that's what I game in. 

Why does CP2077 show a 1 fps increase in a static scene (standing still)? I tested that twice with the exact same result. It seems like there should be no transfer from main memory in that case.


----------



## yzonker

satinghostrider said:


> I am keen to know if it would work with Gigabyte Xtreme Waterforce 3090 WB card.
> The new resize bar update crippled the 2x8pin card even further not allowing you to max your power limiter.
> I am only able to use peak 360W now instead of the usual 390-400W peak I used to see.


Well, flash at your own risk, but there's no reason why it shouldn't. You can flash pretty much any 3090 bios without bricking the card, at least. You'll likely get 390w with it, or with one of the other 390w bioses.

I've had all of these on my Zotac at one point,

Gigabyte Gaming OC
Gigabyte Aorus
Kingpin 520w (long explanation as to why)
Kingpin XOC
Galax 390w


----------



## satinghostrider

yzonker said:


> Well, flash at your own risk, but there's no reason why it shouldn't. You can flash pretty much any 3090 bios without bricking the card, at least. You'll likely get 390w with it, or with one of the other 390w bioses.
> 
> I've had all of these on my Zotac at one point,
> 
> Gigabyte Gaming OC
> Gigabyte Aorus
> Kingpin 520w (long explanation as to why)
> Kingpin XOC
> Galax 390w


So I am guessing that you are running the Galax 390W right now and you're happy with it.
Does this have rebar support?









GALAX RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## gfunkernaught

050 said:


> It is a valve index connected via the stock DisplayPort cable.
> 
> I did test the 1kw kingpin EVGA bios to see if it exhibited the same ~380w power limit flag in hwinfo64 as the 520w kingpin bios, and it does seem to. I set the power limit at 50% and tested loading up VR and the power limit flag tripped the same as before. I suspect it must be some subsection of the board with a specific power limit that is relevant to VR more than desktop games as they still test fine and hit 500w with the 1kw kingpin bios at 50%.
> 
> Two other oddities - I tested the 1kw bios at 30% power limit to see if it would hit 300w mining ETH, and it seemed that the 1kw bios did not want to hash at all. Is this some sort of safety function built into the high-power bios?
> 
> The second oddity - after flashing back to the kingpin 520w bios, the card is working fine but gives an error when I try to "nvflash --protecton", not allowing me to set the protection back on as I have done previously.
> 
> Not really a performance issue but odd.


That is odd. I know DisplayPort and secure boot/EFI can be finicky; handshakes don't always go properly... My TV, which is spec'd for 4:2:0 10-bit HDR, sometimes does weird handshakes with my Trio depending on the bios. Right now that HDR mode works while others don't, or I get banding/fringing/purple tint. With other bioses, HDR works properly when I select 4:4:4 8-bit. This is through HDMI. Is the Valve Index stereoscopic? Does the headset duplicate the signal from the GPU output, or is the GPU sending two video signals?


----------



## yzonker

satinghostrider said:


> So I am guessing that you are running the Galax 390W right now and you're happy with it.
> Does this have rebar support?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


No, this one that was linked earlier in the thread. Pay attention to the dates for a clue as to which ones support reBar.









GALAX RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## satinghostrider

yzonker said:


> No, this one that was linked earlier in the thread. Pay attention to the dates for a clue as to which ones support reBar.
> 
> GALAX RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Got it! Thanks for the clarification man!


----------



## J7SC

yzonker said:


> No idea at this point. That's what I tested at because that's what I game in.
> 
> *Why does CP2077 show a 1 fps increase on a static scene* (standing still)? I tested that twice with the exact same result. Seems like there should be no transfer from main memory in that case.


That's - surprising  
...anyway, did you notice a change in graphics 'quality' at the same settings after the CP2077 1.2 patch with your 3090? I've got a friend in town who has an almost identical setup to mine re. CPU, mobo and Strix (though he's currently playing on a FTW3), and he insists that things 'have gotten sharper' at 4K / DLSS / RTX Psycho. I'm on the fence on that one with my 40-inch Philips monitor...


----------



## nievz

Warzone has sharper, more vibrant and crisper colors for me. Unfortunately I'm not sure if it's the new driver or the new game update, since I enabled rebar and updated at the same time. Is it the same with you guys?

EDIT: nevermind, it was the damn MSI Dragon Center interfering with my game's colors


----------



## Antsu

New drivers are looking really nice, at least for PR  I scored 16,000 in Port Royal


----------



## Z-U

New here, first post and not as technically proficient as most of you - but I wonder if you can help?...I've been trying to get this resizable-bar thing working on my GPU and it just won't play ball.

CPU: Ryzen 9 5950x
Memory: 32GB 3600MHz CL14
GPU: Gigabyte 3090RTX Eagle 24G on the latest nVidia drivers (version 465.89)
MOBO: Gigabyte Aorus Master (rev 1.1), on Bios F33i (latest, BAR support started with F32) (Settings: XMP Profile=Enabled, PBO=Enabled, Above 4G Encoding=Enabled, Re-size BAR=Auto, CSM=Disabled)

I don't think the motherboard is the issue - nor the settings i'm displaying up there...my suspicion is that I'm not able to correctly flash my GPU...

I'm trying to re-flash with Gigabyte's 'N3090E.F2' bios update package (this is the correct bios version for this F1 card). After executing the exe, and some black screen-flickering later (which seems far too short to be doing anything at all - it's only 1 or 2 seconds at most), a message comes back saying "Bios Successfully Updated!", but in GPU-Z > Advanced > Message it says "GV-N3090EAGLE OC-24GD/*F1*/0AF0", which suggests to me it's still on the *F1 bios, not F2*. Sure enough, GPU-Z also says BAR "Disabled", despite the mobo settings for 4G Encoding, BAR resize and CSM being correct per Gigabyte's support page. I just don't think it's flashing anything. The Gigabyte package itself doesn't have an obvious .bin file; in fact, I believe the bios file is hidden in a passworded .zip too. So I can't just NVFlash a single file or anything.

How do I really tell whether the vbios has actually been updated? Should it say GV-N3090EAGLE OC-24GD/*F2*/0AF0 when it's done properly? 

Any advice would be greatly appreciated... At the moment, with those bios settings above, I just get a black screen on boot and "no signal" on the monitor.
Alternatively, is there a better way to flash the vBIOS, rather than the VBS script in Gigabyte's zip file?

Thank you to anyone who can help.


----------



## warrior-kid

Z-U said:


> New here, first post and not as technically proficient as most of you - but I wonder if you can help?...I've been trying to get this resizable-bar thing working on my GPU and it just won't play ball.
> 
> CPU: Ryzen 9 5950x
> Memory: 32Gb 3600mhz CL14
> GPU: Gigabyte 3090RTX Eagle 24G on the latest nVidia drivers (version 465.89)
> MOBO: Gigabyte Aorus Master (rev 1.1), on Bios F33i (latest, BAR support started with F32) (Settings: XMP Profile=Enabled, PBO=Enabled, Above 4G Encoding=Enabled, Re-size BAR=Auto, CSM=Disabled)
> 
> I don't think the motherboard is the issue - nor the settings i'm displaying up there...my suspicion is that I'm not able to correctly flash my GPU...
> 
> I'm trying to re-flash with Gigabytes 'N3090E.F2' bios update package (this is the correct bios version for this F1 card) - and after executing the Exe, and some black screen-flickering later (which seem far too short to be doing anything at all - it's only like 1 or 2 seconds at most) message comes back saying "Bios Successfully Updated!", but in GPU-Z>Advanced>Message it says "GV-N3090EAGLE OC-24GD/*F1*/0AF0" which suggests to me it's still on *F1 bios not F2*. Sure enough GPU-Z also says BAR "Disabled", despite settings for 4G Encoding, BAR resize and CSM in mobo being correct based on Gigabytes support page. I just don't think it's flashing anything. The Gigabyte package itself doesn't have an obvious .bin file, in facts the bios file I believe is hidden in a passworded .zip too. So I can't just NVFlash a single file or anything.
> 
> How do I really tell whether the vbios has actually been updated? Should it say GV-N3090EAGLE OC-24GD/*F2*/0AF0 when it's done properly?
> 
> Any advice would be greatly appreciated....at the moment, with those bios settings above, i just get a black screen on boot and "no signal" on the monitor.
> Alternatively, is there a better way to flash the Vbios? Rather than the VBS script in Gigabytes zip file?
> 
> Thank you to anyone who can help.


Hi, welcome, relative newbie myself. I am wondering if you did something like that "*nvflash --protectoff* " prior. I do have a completely different BIOS and manufacturer reported when going from Trio X Gaming to EVGA.


----------



## satinghostrider

Z-U said:


> New here, first post and not as technically proficient as most of you - but I wonder if you can help?...I've been trying to get this resizable-bar thing working on my GPU and it just won't play ball.
> 
> CPU: Ryzen 9 5950x
> Memory: 32Gb 3600mhz CL14
> GPU: Gigabyte 3090RTX Eagle 24G on the latest nVidia drivers (version 465.89)
> MOBO: Gigabyte Aorus Master (rev 1.1), on Bios F33i (latest, BAR support started with F32) (Settings: XMP Profile=Enabled, PBO=Enabled, Above 4G Encoding=Enabled, Re-size BAR=Auto, CSM=Disabled)
> 
> I don't think the motherboard is the issue - nor the settings i'm displaying up there...my suspicion is that I'm not able to correctly flash my GPU...
> 
> I'm trying to re-flash with Gigabytes 'N3090E.F2' bios update package (this is the correct bios version for this F1 card) - and after executing the Exe, and some black screen-flickering later (which seem far too short to be doing anything at all - it's only like 1 or 2 seconds at most) message comes back saying "Bios Successfully Updated!", but in GPU-Z>Advanced>Message it says "GV-N3090EAGLE OC-24GD/*F1*/0AF0" which suggests to me it's still on *F1 bios not F2*. Sure enough GPU-Z also says BAR "Disabled", despite settings for 4G Encoding, BAR resize and CSM in mobo being correct based on Gigabytes support page. I just don't think it's flashing anything. The Gigabyte package itself doesn't have an obvious .bin file, in facts the bios file I believe is hidden in a passworded .zip too. So I can't just NVFlash a single file or anything.
> 
> How do I really tell whether the vbios has actually been updated? Should it say GV-N3090EAGLE OC-24GD/*F2*/0AF0 when it's done properly?
> 
> Any advice would be greatly appreciated....at the moment, with those bios settings above, i just get a black screen on boot and "no signal" on the monitor.
> Alternatively, is there a better way to flash the Vbios? Rather than the VBS script in Gigabytes zip file?
> 
> Thank you to anyone who can help.


Did you run the file as an administrator? Sometimes some flash tools won't execute unless you run as an administrator.


----------



## nievz

Z-U said:


> New here, first post and not as technically proficient as most of you - but I wonder if you can help?...I've been trying to get this resizable-bar thing working on my GPU and it just won't play ball.
> 
> CPU: Ryzen 9 5950x
> Memory: 32Gb 3600mhz CL14
> GPU: Gigabyte 3090RTX Eagle 24G on the latest nVidia drivers (version 465.89)
> MOBO: Gigabyte Aorus Master (rev 1.1), on Bios F33i (latest, BAR support started with F32) (Settings: XMP Profile=Enabled, PBO=Enabled, Above 4G Encoding=Enabled, Re-size BAR=Auto, CSM=Disabled)
> 
> I don't think the motherboard is the issue - nor the settings i'm displaying up there...my suspicion is that I'm not able to correctly flash my GPU...
> 
> I'm trying to re-flash with Gigabytes 'N3090E.F2' bios update package (this is the correct bios version for this F1 card) - and after executing the Exe, and some black screen-flickering later (which seem far too short to be doing anything at all - it's only like 1 or 2 seconds at most) message comes back saying "Bios Successfully Updated!", but in GPU-Z>Advanced>Message it says "GV-N3090EAGLE OC-24GD/*F1*/0AF0" which suggests to me it's still on *F1 bios not F2*. Sure enough GPU-Z also says BAR "Disabled", despite settings for 4G Encoding, BAR resize and CSM in mobo being correct based on Gigabytes support page. I just don't think it's flashing anything. The Gigabyte package itself doesn't have an obvious .bin file, in facts the bios file I believe is hidden in a passworded .zip too. So I can't just NVFlash a single file or anything.
> 
> How do I really tell whether the vbios has actually been updated? Should it say GV-N3090EAGLE OC-24GD/*F2*/0AF0 when it's done properly?
> 
> Any advice would be greatly appreciated....at the moment, with those bios settings above, i just get a black screen on boot and "no signal" on the monitor.
> Alternatively, is there a better way to flash the Vbios? Rather than the VBS script in Gigabytes zip file?
> 
> Thank you to anyone who can help.


I had a similar issue when I flashed the SuprimX BIOS to my X Trio. Only took 1 second and it was done, very unusual. So I tried it a second time and this time it was successful but still only took 1 second to flash.

I first flashed with the EVGA Kingpin BIOS and that took a few seconds to complete, which is what is expected. But switching to the SuprimX BIOS is where I had some issues, as stated above.

I’m also on Aorus Master F33i BIOS.


----------



## Z-U

warrior-kid said:


> Hi, welcome, relative newbie myself. I am wondering if you did something like that "*nvflash --protectoff* " prior. I do have a completely different BIOS and manufacturer reported when going from Trio X Gaming to EVGA.


I'll try that first.

Update: that didn't do anything, but thanks for trying.


----------



## Z-U

satinghostrider said:


> Did you run the file as an administrator? Sometimes some flash tools won't execute unless you run as an administrator.


I am running as Admin yes


----------



## yzonker

J7SC said:


> That's - surprising
> ...anyway, did you notice a change in graphic 'quality' at the same settings after CP2077 / patch1.2 w/ your 3090 ? I've got a friend in town who has an almost identical setup to mine re. CPU, mobo and Strix (though he's currently playing on a FTW3) and he insists that things 'have gotten sharper' at 4k / DLSS / RTX Psycho. I am on the fence on that one w/ my 40 inch Philips monitor...


No but I wasn't paying much attention. I did try some other areas and saw a slightly bigger increase in fps. Maybe 2-3. I might try logging framerate over a set path to see what the difference is.


----------



## gfunkernaught

yzonker said:


> No but I wasn't paying much attention. I did try some other areas and saw a slightly bigger increase in fps. Maybe 2-3. I might try logging framerate over a set path to see what the difference is.


It's possible they tweaked DLSS. I remember Control got sharper when DLSS 2.0 was released. Maybe it's DLSS 2.001?


----------



## tankyx

So I got the new Galax bios for my PNY 3090, Resizable bar is working but the card is not drawing up to 390w  It goes up to 370w max


----------



## des2k...

tankyx said:


> So I got the new Galax bios for my PNY 3090, Resizable bar is working but the card is not drawing up to 390w  It goes up to 370w max


From experience, the 350w and 390w vbios are completely useless for 2x8pins. Any OC on mem or core already puts you over budget, and rebar seems to be more demanding on core/mem, which makes it even worse.

That would explain why so many get no performance increase or loss with rebar on.

They would need to increase total power, which I'm guessing they don't want to do !


----------



## yzonker

des2k... said:


> From experience, the 350w and 390w vbios are completely useless for 2x8pins. Any OC on mem or core already puts you over budget, and rebar seems to be more demanding on core/mem, which makes it even worse.
> 
> That would explain why so many get no performance increase or loss with rebar on.
> 
> They would need to increase total power, which I'm guessing they don't want to do !


I think Vulkan may be the problem with my RDR2 test. I'll try to get it to run in DX12. 

I'm sure I'm seeing a 2 to 3 fps increase in some areas of CP2077. (framerate in the 40-50 fps range)

So I think I may end up sticking with the 390w bios.


----------



## motivman

tankyx said:


> So I got the new Galax bios for my PNY 3090, Resizable bar is working but the card is not drawing up to 390w  It goes up to 370w max


Shunt mod it; I get 540W power draw with that bios. I have a PNY 3090, shunt modded, running the same bios, and resizable BAR is working here too.


----------



## joyzao

Sheyster said:


> If you want the official ASUS updater for Re-Bar support, you can get it here:
> 
> 
> 
> 
> 
> 
> ROG-STRIX-RTX3090-O24G-WHITE | ROG Strix Gaming Graphics Cards | ROG USA | rog.asus.com
> 
> 
> 
> 
> 
> EDIT: I noticed you want the actual BIOS not the updater. I recommend you use the new EVGA 500W BIOS instead, it's much better IMHO. Here it is, rename to .ROM and flash.


I'll try this bios, is it from the FTW3 or the Kingpin?

Will I get a better score? I currently use the stock bios from the ASUS ROG OC.


----------



## motivman

What is the general consensus on the MSI Gaming X Trio running the 1000W bios? Has anyone killed their Gaming X Trio yet due to too much power draw?


----------



## jomama22

yzonker said:


> I think Vulkan may be the problem with my RDR2 test. I'll try to get it to run in DX12.
> 
> I'm sure I'm seeing a 2 to 3 fps increase in some areas of CP2077. (framerate in the 40-50 fps range)
> 
> So I think I may end up sticking with the 390w bios.


In my RDR2 test with vulkan, I saw a 4fps min increase and a 1 fps average increase, min from 32 to 36, average from 81 to 82.


----------



## Thanh Nguyen

motivman said:


> what is the general consensus of the MSI gaming X trio running 1000W bios? Anyone killed their gaming x trio yet due to too much power draw?


I hope the 1000W bios kills my FTW3, so I leave the power limit slider at 100%, but I don't know why the card is alive and still going strong.


----------



## nievz

Thanh Nguyen said:


> I hope 1000w bios kills my ftw3 so I let the power limit slide at 100% but I dont know why the card is alive and still strong.


What OC and voltage are you running for 1000w? Do you have a video while gaming I can watch?😃


----------



## yzonker

jomama22 said:


> In my RDR2 test with vulkan, I saw a 4fps min increase and a 1 fps average increase, min from 32 to 36, average from 81 to 82.


Did you see the min during the run? I never did. 99.9% of my runs were between 40 and 60 fps. With the average not changing (or increasing slightly in your test), I don't feel like reBar is really adding any significant performance.


----------



## Sheyster

joyzao said:


> I'll try this bios, is it from ftw or kingpin?
> 
> Will I get a better score? I currently use stock bios from Asus rog oc


That's the new 500W EVGA FTW3 Ultra BIOS. I would expect you'd get a better result as long as you dial in a similar core clock to the ASUS BIOS. You have to compare them at the same boost clock speeds. This does not always correspond to the same +core value in AB or PX.


----------



## Falkentyne

des2k... said:


> From experience, the 350w and 390w vbios are completely useless for 2x8pins. Any OC on mem or core already puts you over budget, and rebar seems to be more demanding on core/mem, which makes it even worse.
> 
> That would explain why so many get no performance increase or loss with rebar on.
> 
> They would need to increase total power, which I'm guessing they don't want to do !


I get less performance with Rebar on (versus off) in Watch Dogs Legion (RT Off) on my shunt modded 3090 FE, so that isn't it. Not even close to hitting any power limit at all.


----------



## Lobstar

Z-U said:


> I'm trying to re-flash with Gigabytes 'N3090E.F2' bios update package (this is the correct bios version for this F1 card) - and after executing the Exe, and some black screen-flickering later (which seem far too short to be doing anything at all - it's only like 1 or 2 seconds at most) message comes back saying "Bios Successfully Updated!", but in GPU-Z>Advanced>Message it says "GV-N3090EAGLE OC-24GD/*F1*/0AF0" which suggests to me it's still on *F1 bios not F2*. Sure enough GPU-Z also says BAR "Disabled", despite settings for 4G Encoding, BAR resize and CSM in mobo being correct based on Gigabytes support page.


I don't see anywhere that you have done a restart. The bios version shown in GPU-Z will not update until the card is operating on the new bios, which only happens during the boot process. Your video card isn't being rebooted while flashing the bios.


----------



## jomama22

yzonker said:


> Did you see the min during the run? I never did. 99.9% of my runs were between 40 and 60 fps. With the average not changing (or increasing slightly in your test), I don't feel like reBar is really adding any significant performance.


Lol, I was actually going to say that. I never see the min or the max ever in those runs.

The minimum increase is definitely there though, consistent through my runs. The average is definitely close to just margin-of-error stuff, as run-to-run variance can be as high as 1 fps.

If it matters, I'm running at 5120x1440 so near 4k. Have the gpu running at 2175/+1375. Effective clock is 2170+-1 throughout the bench.


----------



## des2k...

Anyone tried this re-bar bios on the zotac ?









Zotac RTX 3090 VBIOS | 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory | www.techpowerup.com





I was looking at HZ 4k, and with rebar it does 98fps at just 1950 clock 385w.

For me to hit 98fps I need 2145 core and that's 450w+.


----------



## des2k...

Falkentyne said:


> I get less performance with Rebar on (versus off) in Watch Dogs Legion (RT Off) on my shunt modded 3090 FE, so that isn't it. Not even close to hitting any power limit at all.


Why do you think there's a performance loss?
Nvidia did say they are testing and only enabling it for games that show an fps gain.

What mobo / cpu do you have?


----------



## Falkentyne

des2k... said:


> Why do you think there's performance loss ?
> Nvidia did say they are testing and only enabling for games that show fps gain.
> 
> What mobo / cpu do you have ?


I have no idea why there's performance loss. If I knew I would have definitely mentioned it.
Maximus 13 Extreme and i9 11900k RKL.


----------



## gfunkernaught

motivman said:


> what is the general consensus of the MSI gaming X trio running 1000W bios? Anyone killed their gaming x trio yet due to too much power draw?


I run my Trio with the 1kW bios; I haven't tried past 600W (power limit 60%) yet. I had to go to great lengths to keep the card cool though, especially at 600W. I forget the exact offsets I used, but with my benching profile (which gets 2170MHz core and +1300 VRAM in Port Royal) I get 2115MHz in Quake 2 RTX at 600W, and the GPU gets to about 42C.
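Since posts in this thread quote slider percentages against different bios power targets, the watt figure is just a percentage of that target. A minimal Python sketch of the arithmetic (the function name is illustrative, not from any tool mentioned here):

```python
def slider_watts(bios_limit_w: float, percent: float) -> float:
    """Watts allowed for a given bios power target and power-limit slider percentage."""
    return bios_limit_w * percent / 100.0

# A 1kW bios with the slider at 60% caps the card near 600W,
# matching the "power limit 60%" figure in the post above.
print(slider_watts(1000, 60))  # → 600.0
# The 520W Kingpin bios at a 50% slider
print(slider_watts(520, 50))   # → 260.0
```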


----------



## jomama22

Falkentyne said:


> I have no idea why there's performance loss. If I knew I would have definitely mentioned it.
> Maximus 13 Extreme and i9 11900k RKL.


Could just be driver degradation as well. Also, I don't think the "RKL" is necessary after "11900k".


----------



## jura11

After reading about the performance of reBAR, I'm not sure if it's worth it to run just the 390W BIOS on my RTX 3090 GamingPro 

Haven't done any tests, and I'm running a quite old BIOS on my Asus ROG Crosshair VIII Hero X570; not sure about reworking the whole BIOS setup for that, hahaha. Currently the OC is rock stable on the 3900X and I have no issues running 64GB of G.SKILL TridentZ Neo 3600MHz at 1800 FCLK 

I'm quite happy with the KPE XOC BIOS, which I've capped at 75% for gaming and 65% for rendering; in some games 80-85% seems to be the sweet spot 

Hope this helps 

Thanks, Jura


----------



## yzonker

des2k... said:


> Anyone tried this re-bar bios on the zotac ?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Zotac RTX 3090 VBIOS | 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory | www.techpowerup.com
> 
> 
> 
> 
> 
> I was looking at HZ 4k, and with rebar it does 98fps at just 1950 clock 385w.
> 
> For me to hit 98fps I need 2145 core and that's 450w+.


Well I'll try to do a better CP test tonight, but that's where I was headed with this. 390w vs 500w was only 2 fps in RDR2. So doesn't take a lot to match that.

But no I didn't know there was a 385w Zotac bios. I might try it just to actually be running a Zotac bios again (I think the stock bios on mine lasted a week at most).


----------



## yzonker

Ok, I got more reBar results. I did a manual walkthrough of the same area in CP2077. Not perfect but I think it gives a pretty good idea of what to expect.

Bottom line though, not very impressed. May stick with the XOC bios after all.

Average fps:

reBar off: 48.16
reBar on: 48.98

1.7% increase in average fps
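For anyone repeating this comparison, the percentage above is just the ratio of the two run averages; a quick sketch of that arithmetic (names are illustrative):

```python
def pct_gain(baseline_fps: float, test_fps: float) -> float:
    """Percent change in average fps from a baseline run to a test run."""
    return (test_fps / baseline_fps - 1.0) * 100.0

# reBar off vs reBar on averages from the CP2077 walkthrough above
print(round(pct_gain(48.16, 48.98), 1))  # → 1.7
```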










I also managed to get RDR2 to run in D3D. Seems like a fail, as for me at least D3D is slower than Vulkan. Once again, I never saw the min/max fps during the run. I think they may occur during transitions from scene to scene.

2.8% increase in average fps


----------



## yzonker

Went back to the XOC bios. Here's the same CP run with an 80% PL (max power was 490w). Sorry for no labels. 

Red: 390w, reBar off
Blue: 390w, reBar on
Yellow: 490w, reBar off

490w average fps: 50.13


----------



## J7SC

yzonker said:


> Went back to the XOC bios. Here's the same CP run with a 80% PL run (max power was 490w). Sorry for no labels.
> 
> Red: 390w, reBar off
> Blue: 390w, reBar on
> Yellow: 490w, reBar off
> 
> 490w average fps: 50.13
> 
> View attachment 2484863


Thanks for doing this comparative work w/ graph - very helpful as I love CP2077 for roaming through the RTX eye candy


----------



## satinghostrider

tankyx said:


> So I got the new Galax bios for my PNY 3090, Resizable bar is working but the card is not drawing up to 390w  It goes up to 370w max


I thought it was only Gigabyte that borked the card with the new rebar bios. Even I'm only hitting max 360W now instead of 390W-400W with the earlier bios pre-resize bar. Mine is a 2x8pin Aorus Xtreme 3090 Waterforce.

What I've noticed is that my core clocks in some games have seriously taken a dive with this new update. But oddly, some games like Hitman 3, despite lower boost clocks, hit 385W at some point. I don't know, it's really confusing.

I usually just max out the power limit and let the card do everything, so the average boost clock now is much lower. Having said that, on my earlier bios with just power limits maxed out, I had random black screens in Cold War, which I don't get now. Still early to tell, but another thing is that even running the slightest OC would crash Cold War previously. Now I can run Cold War with overclocks and it doesn't crash at all. GPU-Z shows I'm MAINLY power limited now, compared to previously where I was power, vrel and vop limited. The entire behaviour of the card has changed since the new rebar bios. Someone posted this in the Gigabyte US forums and I'm not quite sure what to make of it.










I think every game behaves differently even with an overclock now; Hitman 3 runs lower boost clocks than previously, for example. I'm quite confused about how this resize bar thing affects games, boost clocks and TDP. The problem is that no one knows which games resize bar is active in. Even my Timespy GPU score took a 1000+ point nosedive!


----------



## Falkentyne

satinghostrider said:


> I thought it was only Gigabyte that borked the card with the new rebar bios. Even I'm only hitting max 360W now instead of 390W-400W with the earlier bios pre-resize bar. Mine is a 2x8pin Aorus Xtreme 3090 Waterforce.
> 
> What I've noticed that my core clocks in some games have seriously taken a dive with this new update. But oddly, some games like Hitman 3, despite lower boost clocks, hits 385W at some point. I don't know it's really confusing.
> 
> I usually just max out power limit and let the card do everything so the average boost clocks now is very much lower. Having said that, my earlier bios with just power limits maxed out, I had random black screens in Coldwar which I don't get now. Still early to tell but another thing is that even running the slightest OC would crash Coldwar previously. Now I can run Coldwar with overclocks and it doesn't crash at all.
> 
> I think every game is different even with overclock now, Hitman 3 runs lower boost clocks than previously for example. I'm quite confused on how this resize bar thing affects games, boost clocks and TDP.


Did your FPS go up in cold war from when you were crashing? Did you look at the effective clocks (hwinfo) to see what was happening?
Are you sure it's the BIOS? Maybe it's the new driver? What happens if you disable Rebar in cold war? If you don't crash anymore then it's the driver.


----------



## satinghostrider

Falkentyne said:


> Did your FPS go up in cold war from when you were crashing? Did you look at the effective clocks (hwinfo) to see what was happening?
> Are you sure it's the BIOS? Maybe it's the new driver? What happens if you disable Rebar in cold war? If you don't crash anymore then it's the driver.


FPS was all over the place with the old bios. The new bios is lower but consistent. My effective clocks were around 1935MHz in HWiNFO, with peaks to 1980MHz, with just power limits maxed out. Right now, with the new resize bar bios and Nvidia drivers, it's anywhere from 1890MHz to 1950MHz.

I tried disabling the resize bar function in the bios and it's the same behavior, so it could be the drivers as well, as I still can't get higher boost clocks and TDP is maxed at 360W instead of the previous 390W. However, while Afterburner says I'm at 99.8% TDP, HWiNFO reports 104.6% normalised TDP.


----------



## satinghostrider

Duplicate post.


----------



## jomama22

satinghostrider said:


> I thought it was only Gigabyte that borked the card with the new rebar bios. Even I'm only hitting max 360W now instead of 390W-400W with the earlier bios pre-resize bar. Mine is a 2x8pin Aorus Xtreme 3090 Waterforce.
> 
> What I've noticed that my core clocks in some games have seriously taken a dive with this new update. But oddly, some games like Hitman 3, despite lower boost clocks, hits 385W at some point. I don't know it's really confusing.
> 
> I usually just max out power limit and let the card do everything so the average boost clocks now is very much lower. Having said that, my earlier bios with just power limits maxed out, I had random black screens in Coldwar which I don't get now. Still early to tell but another thing is that even running the slightest OC would crash Coldwar previously. Now I can run Coldwar with overclocks and it doesn't crash at all. GPU-Z shows I'm MAINLY power limited now compared to previously where I was power limited, vrel and vop limited. The entire behaviour of the card has changed since the new rebar bios. Someone posted this is the Gigabyte US forums and I'm not quite sure what to make of it.
> 
> View attachment 2484875
> 
> 
> I think every game is different even with overclock now, Hitman 3 runs lower boost clocks than previously for example. I'm quite confused on how this resize bar thing affects games, boost clocks and Tdp. Problem is no one knows resize bar is active in which game and that's a problem. Even my Timespy GPU scores took a 1000+ points nosedive!


It looks like AB can't read the new bios correctly, so I wouldn't compare them using that.

You can always flash a different bios onto it if you think it's messing stuff up. 

You shouldn't be getting a Timespy decrease with the new drivers/bios. 

Can always DDU again and reinstall the drivers as well.


----------



## satinghostrider

jomama22 said:


> It looks like AB can't read the new bios correctly, so I wouldn't compare them using that.
> 
> You can always flash a different bios onto it if you think it's messing stuff up.
> 
> You shouldn't be getting a Timespy decrease with the new drivers/bios.
> 
> Can always DDU again and reinstall the drivers as well.


I would like to try another bios but I'm not sure which one would work for my 3090 Aorus Xtreme Waterforce.

I've yet to uninstall and try again, but it seems correct, as my boost clocks in games and total TDP are down from the previous bios, irrespective of resize bar, which brings a trivial improvement at best.


----------



## Falkentyne

jomama22 said:


> It looks like AB can't read the new bios correctly, so I wouldn't compare them using that.
> 
> You can always flash a different bios onto it if you think it's messing stuff up.
> 
> You shouldn't be getting a Timespy decrease with the new drivers/bios.
> 
> Can always DDU again and reinstall the drivers as well.


I get a Timespy decrease (-100 points) but a Port Royal increase (+250 points). Doesn't matter whether rebar is enabled or not (in my case the bios version didn't change). It's the new drivers.


----------



## galsdgk

motivman said:


> what is the general consensus of the MSI gaming X trio running 1000W bios? Anyone killed their gaming x trio yet due to too much power draw?


I’ve been gaming and mining on it 24/7 for months. I’ve got a front and back water block though. I think at most I’ve drawn 780W. Other cards bench slightly better but not by much.


----------



## KedarWolf

So this happened. I put the waterblock and backplate on my Strix OC, put the two fittings and hoses on it, and attached it to my loop. Ran my PC a few seconds to get coolant in the block, topped the rad off with coolant. Started my PC.

Welp, I forgot to put the two plugs into the backside of the waterblock and got coolant all over my motherboard. My MB is mounted horizontally.

I've done something like that once before though. 

I soaked up the excess water and now am letting a hair dryer blow on low speed on the motherboard overnight to evaporate the water.

Need low speed because it is 2 a.m. and I live in an apartment, no noise for my neighbours.

So, in the morning I'll find out if my PC is borked.


----------



## MrTOOSHORT

KedarWolf said:


> So this happened. I put the waterblock and backplate on my Strix OC. Put the two fittings and hoses on it, attached it to my loop. Start my PC few seconds to get coolant in the block, top the rad of with coolant. Start my PC.
> 
> Welp, I forgot to put the two plugs into the backside of the waterblock and got coolant all over my motherboard. My MB is mounted horizontally.
> 
> I've done something like that once before though.
> 
> I soaked up the excess water and now am letting a hair dryer blow on low speed on the motherboard overnight to evaporate the water.
> 
> Need low speed because it is 2 a.m. and I live in an apartment, no noise for my neighbours.
> 
> So, in the morning I'll find out if my PC is borked.



That's too bad.


----------



## Lord of meat

KedarWolf said:


> So this happened. I put the waterblock and backplate on my Strix OC. Put the two fittings and hoses on it, attached it to my loop. Start my PC few seconds to get coolant in the block, top the rad of with coolant. Start my PC.
> 
> Welp, I forgot to put the two plugs into the backside of the waterblock and got coolant all over my motherboard. My MB is mounted horizontally.
> 
> I've done something like that once before though.
> 
> I soaked up the excess water and now am letting a hair dryer blow on low speed on the motherboard overnight to evaporate the water.
> 
> Need low speed because it is 2 a.m. and I live in an apartment, no noise for my neighbours.
> 
> So, in the morning I'll find out if my PC is borked.


Get those absorbent rags at Advance Auto for next time; they soak up liquid instantly and come in a tube. Very useful for situations like that. If you want I'll send you a link when I get home. Also try to clean the board; I use CRC electronics cleaner.


----------



## KedarWolf

Lord of meat said:


> Get those absorbent rags at Advance Auto for next time; they soak up liquid instantly and come in a tube. Very useful for situations like that. If you want I'll send you a link when I get home. Also try to clean the board; I use CRC electronics cleaner.


Thanks for your concern, peeps.

Similar thing happened to me a few years ago and the hair dryer worked perfectly so I'm optimistic.


----------



## Nizzen

yzonker said:


> Went back to the XOC bios. Here's the same CP run with a 80% PL run (max power was 490w). Sorry for no labels.
> 
> Red: 390w, reBar off
> Blue: 390w, reBar on
> Yellow: 490w, reBar off
> 
> 490w average fps: 50.13
> 
> View attachment 2484863


Nice job!
Your contribution just made this forum a bit better


----------



## MrTOOSHORT

I think we all forgot a plug or fitting and water spurted out when the PC was turned on.


----------



## Gebeleisis

Mining rate update, 2x8-pin reference cards with the BAR bios:
365W Palit at 300W - 115 MH/s
390W Galax KF2 at 295W - 118 MH/s

If anyone has any extra info, please add to this. Thanks!


----------



## Bobbylee

Gebeleisis said:


> Mining rate update, 2x8-pin reference cards with the BAR bios:
> 365W Palit at 300W - 115 MH/s
> 390W Galax KF2 at 295W - 118 MH/s
> 
> If anyone has any extra info, please add to this. Thanks!



I’m using the 1kW bios to mine at 30% power limit on a 2x8 card, so it’s around 260W power draw and gets 125 MH/s. Then running this bios in games at 50% draws around 410W on my card. This bios just performs better for everything, even if it doesn’t have Resizable BAR support.
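The efficiency claim above can be sanity-checked as MH/s per watt using the figures quoted in this thread; a minimal sketch, treating the posted numbers as anecdotes rather than controlled measurements:

```python
# Mining efficiency (MH/s per watt) from figures posted in this thread.
# These are anecdotal user-reported numbers, not benchmark data.

def efficiency(mhs, watts):
    """Hashrate per watt of board power."""
    return mhs / watts

palit_stock = efficiency(115, 300)   # Palit on the BAR bios at 300W
kp_1kw_30pct = efficiency(125, 260)  # 1kW XOC bios at 30% power limit

print(round(palit_stock, 3))   # 0.383
print(round(kp_1kw_30pct, 3))  # 0.481
```

On these numbers the 1kW bios at a low power limit is roughly 25% more efficient, which matches the "far more efficient for mining" claim.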


----------



## Gebeleisis

Bobbylee said:


> I’m using the 1kW bios to mine at 30% power limit on a 2x8 card, so it’s around 260W power draw and gets 125 MH/s. Then running this bios in games at 50% draws around 410W on my card. This bios just performs better for everything, even if it doesn’t have Resizable BAR support.


Can you link your bios? Thanks


----------



## Bobbylee

Gebeleisis said:


> Can you link your bios? Thanks


Sorry, I don’t have the link to hand. It’s in this thread somewhere; if you run a search for XOC, Kingpin, KP or 1000W you should find it.


----------



## Gebeleisis

is this the old XOC or the new bios with bar enabled ?


----------



## Bobbylee

Gebeleisis said:


> is this the old XOC or the new bios with bar enabled ?


The old one; I don’t think there is a BAR-enabled 1000W bios yet. Reducing the power limit down to 390W, even with BAR enabled, gets worse performance, so I’m not in a rush to swap to a BAR-enabled one.


----------



## satinghostrider

Bobbylee said:


> Sorry, I don’t have the link to hand. It’s in this thread somewhere; if you run a search for XOC, Kingpin, KP or 1000W you should find it.


I wonder if it would work on my Aorus 3090 XTREME Waterforce. Not sure if it matters whether it's a reference PCB or not.









AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G Key Features | Graphics Card - GIGABYTE Global (www.gigabyte.com)





I'll be happy if anyone can link that bios for us 2x8pin blokes to try. Don't wanna end up flashing the wrong one. 

TIA!


----------



## Bobbylee

satinghostrider said:


> I wonder if it would work on my Aorus 3090 XTREME Waterforce. Not sure if it matters if it's reference pcb or not.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G Key Features | Graphics Card - GIGABYTE Global (www.gigabyte.com)
> 
> 
> 
> 
> 
> I'll be happy if anyone can link that bios for us 2x8pin blokes to try. Don't wanna end up flashing the wrong one.
> 
> 
> TIA!





Gebeleisis said:


> is this the old XOC or the new bios with bar enabled ?



This is the 1000W bios; if you’re on the stock cooler make sure you keep the power slider down. The reported power draw on a 2x8-pin card will be wrong. You must look at the PCIe slot and the #1 and #2 8-pin power draws, or take the total and subtract the (phantom) #3 8-pin value in HWiNFO64.

If mining and you don’t have a backplate cooler, be careful; the memory easily goes above 100C without active cooling on the backplate.









EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
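The correction Bobbylee describes is simple arithmetic over the HWiNFO64 rail sensors; a minimal sketch, where the rail names and wattages are hypothetical illustrations rather than real readings:

```python
# Correcting reported power draw on a 2x8-pin card running the 3x8-pin XOC
# bios: the bios assumes a third 8-pin rail that doesn't physically exist,
# so the reported total is inflated. Values below are made-up examples.

def actual_board_power(pcie_slot_w, pin8_1_w, pin8_2_w):
    """Sum only the rails that physically exist on a 2x8-pin card."""
    return pcie_slot_w + pin8_1_w + pin8_2_w

def corrected_from_total(reported_total_w, phantom_pin8_3_w):
    """Equivalent approach: subtract the phantom third 8-pin reading."""
    return reported_total_w - phantom_pin8_3_w

print(actual_board_power(60.0, 175.0, 175.0))  # 410.0
print(corrected_from_total(520.0, 110.0))      # 410.0
```

Either route gives the same answer; the first is the "look at the PCIe and two 8-pin draws" method, the second is the "total minus the 3x8-pin value" method from the post above.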


----------



## satinghostrider

Bobbylee said:


> This is the 1000W bios; if you’re on the stock cooler make sure you keep the power slider down. The reported power draw on a 2x8-pin card will be wrong. You must look at the PCIe slot and the #1 and #2 8-pin power draws, or take the total and subtract the (phantom) #3 8-pin value in HWiNFO64.
> 
> If mining and you don’t have a backplate cooler, be careful; the memory easily goes above 100C without active cooling on the backplate.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Thanks a bunch.
I'm on custom loop anyway.
Just to check, what do you mean by keep the slider down? If the 2x8-pin theoretical max is 300W plus 75W tops from the PCIe slot, how does this bios let you go past that? I presume the safeties are off?

P.S.: I'm not shunt modded.


----------



## Bobbylee

satinghostrider said:


> Thanks a bunch.
> I'm on custom loop anyway.
> Just to check, what do you mean by keep the slider down? If the 2x8-pin theoretical max is 300W plus 75W tops from the PCIe slot, how does this bios let you go past that? I presume the safeties are off?
> 
> P.S.: I'm not shunt modded.


The 8-pins can actually carry up to 300W safely if your cables are decent; 16 AWG cables are good, 18 AWG should be OK. I mean set the power limit in MSI Afterburner using the slider. I can run benchmarks at 100%, which draws over 500W, but I don’t think it’s a good idea to run that full time.
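As a rough sanity check on the 300W-per-connector claim, an 8-pin PCIe connector has three 12V conductors, so the per-wire current works out as below; the ampacity figures in the comments are common rules of thumb, not official connector-spec ratings:

```python
# Back-of-the-envelope check: current per conductor when pulling 300W
# through one 8-pin PCIe connector (three 12V hot wires).

def amps_per_wire(power_w, volts=12.0, hot_wires=3):
    """Current carried by each 12V conductor in the connector."""
    return power_w / volts / hot_wires

a = amps_per_wire(300)
print(round(a, 2))  # 8.33

# Common chassis-wiring rules of thumb put 16 AWG at roughly 10+ A per wire
# and 18 AWG somewhat lower, so ~8.3 A per wire is plausible on 16 AWG and
# borderline on 18 AWG - matching the post's "16 AWG good, 18 AWG should be ok".
```

Note the official connector rating is far lower (150W); running 300W per connector relies on cable and terminal quality, which is exactly why the post hedges on gauge.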


----------



## satinghostrider

Bobbylee said:


> The 8-pins can actually carry up to 300W safely if your cables are decent; 16 AWG cables are good, 18 AWG should be OK. I mean set the power limit in MSI Afterburner using the slider. I can run benchmarks at 100%, which draws over 500W, but I don’t think it’s a good idea to run that full time.


I think I'll set to 50% for gaming and take it from there. I'm water-cooled so I'll monitor my temps and power draw. Do you touch the core voltage slider on this bios?


----------



## Bobbylee

satinghostrider said:


> I think I'll set to 50% for gaming and take it from there. I'm water-cooled so I'll monitor my temps and power draw. Do you touch the core voltage slider on this bios?


Sounds good. Yeah, for gaming I have it at +100, then for mining I have it on zero with a locked voltage, with the curve at 0.737 V.


----------



## satinghostrider

Bobbylee said:


> Sounds good. Yeah, for gaming I have it at +100, then for mining I have it on zero with a locked voltage, with the curve at 0.737 V.


Thanks again! Most helpful mate. 
Did you lose any displayports or HDMI? 
I think I'll probably lose some as my layout seems to be completely different from that EVGA model.


----------



## emil2424

satinghostrider said:


> I think I'll set to 50% for gaming and take it from there. I'm water-cooled so I'll monitor my temps and power draw. Do you touch the core voltage slider on this bios?


Using 1000W bios at 50% is a bit weird. There are other bios options for KINGPIN cards:
1) 430-450 W
2) 430-480 W
3) 430-520 W
4) 1000W


----------



## satinghostrider

emil2424 said:


> Using 1000W bios at 50% is a bit weird. There are other bios options for KINGPIN cards:
> 1) 430-450 W
> 2) 430-480 W
> 3) 430-520 W
> 4) 1000W


Emil, you're running the same card as me, I believe, so you know how the new bios is. Did you flash back to the previous one?


----------



## Bobbylee

satinghostrider said:


> Thanks again! Most helpful mate.
> Did you lose any displayports or HDMI?
> I think I'll probably lose some as my layout seems to be completely different from that EVGA model.


I only use one hdmi and one dp but they work fine.



emil2424 said:


> Using 1000W bios at 50% is a bit weird. There are other bios options for KINGPIN cards:
> 1) 430-450 W
> 2) 430-480 W
> 3) 430-520 W
> 4) 1000W


It’s not that weird; the only option for a 2x8-pin card is the 1000W bios. At 50% it draws around 410W for me, vs the 390W any other bios can get. Also, mining is far more efficient on this vbios than on any other that works on a 2x8-pin card.


----------



## emil2424

satinghostrider said:


> Emil, you're running the same card as me I believe. So you know how the new bios is. You flashed back to the previous one?


I am currently using the new bios, but I'm going to go back to the old one today or tomorrow. I was thinking about flashing a bios from another card, but I'm wondering about the AIO control on my card; will there be a problem with the operation of the pump? And can you trust the GPU-Z power consumption readings, even the individual PCIe + 8-pin + 8-pin ones?


Bobbylee said:


> It’s not that weird, the only option for a 2x8 pin is the 1000w, this at 50% draws around 410w for me, vs 390w for any other bios can get. Also mining is far more efficient on this vbios than any other bios that works on a 2x8 pin card.


Are you 100% sure that 410W figure is accurate?


----------



## KedarWolf

KedarWolf said:


> So this happened. I put the waterblock and backplate on my Strix OC, put the two fittings and hoses on it, and attached it to my loop. Started my PC for a few seconds to get coolant into the block, topped the rad off with coolant, then started my PC again.
> 
> Welp, I forgot to put the two plugs into the backside of the waterblock and got coolant all over my motherboard. My MB is mounted horizontally.
> 
> I've done something like that once before though.
> 
> I soaked up the excess water and now am letting a hair dryer blow on low speed on the motherboard overnight to evaporate the water.
> 
> Need low speed because it is 2 a.m. and I live in an apartment, no noise for my neighbours.
> 
> So, in the morning I'll find out if my PC is borked.


Hey peeps, my PC is fine. The blow dryer worked.

The only issue I had was my pump started dying. I was getting 90C temps running benchmarks and it was grinding badly after a while.

I had a spare; now idle temps are 26C on one 360 rad and 50C while running Port Royal.

Pump at 60%.


----------



## Bobbylee

emil2424 said:


> I am currently using the new bios, but I'm going to go back to the old one today or tomorrow. I was thinking about flashing a bios from another card, but I'm wondering about the AIO control on my card; will there be a problem with the operation of the pump? And can you trust the GPU-Z power consumption readings, even the individual PCIe + 8-pin + 8-pin ones?
> 
> Are you 100% sure that 410W figure is accurate?


Sorry, I'm on 60%, not 50%. I’ve used an AIO on a GPU before and the pump will run at full speed, just plug it into your motherboard. The individual readings I think you can trust; I did some initial testing with a wattmeter at the wall and it all seemed to add up. But yes, 60% limit, not 50; my memory is going.


----------



## Alex24buc

It seems I won’t benefit from Resizable BAR any time soon, because Gigabyte doesn’t care about their customers on X299 platform motherboards 😞 I am very disappointed!


----------



## stryker7314

jomama22 said:


> Could just be a driver degradation as well. Also, don't the the RKL is necessary after 11900k


Gotta let everyone know about the backported dumpster-fire 🔥🔥🔥 waste of sand 🏖


----------



## gfunkernaught

Anyone get a Cyberpunk crash with no warnings or events in the event log? Was playing last night [email protected]; the GPU got to about 42c, though the room did get a little warm from the house heat. Had the PL set to 60%. My first thought was a bad OC, but that would leave a mark in the event viewer.


----------



## Nizzen

Alex24buc said:


> It seems I won’t benefit from Resizable BAR any time soon, because Gigabyte doesn’t care about their customers on X299 platform motherboards 😞 I am very disappointed!
> View attachment 2484902


People never learn... Gigabutt


----------



## Nizzen

My BAR=off vs BAR=on test in Valhalla.

CPU-bound scenario and GPU-bound scenario.

Rocket Lake 11900K
4266 dual-rank 2x16 C17


*17.02% difference when CPU-bound*






1440P Ultra (max settings) BAR=OFF vs BAR=ON

*12.2% difference in GPU-bound scenario*
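For reference, a percentage difference like the BAR on/off numbers above is just the relative gain over the BAR=off baseline; a minimal sketch with made-up fps values (not Nizzen's actual data):

```python
# Relative fps gain of BAR=on over the BAR=off baseline, expressed as a
# percentage. The fps values are illustrative, not measured data.

def pct_gain(baseline_fps, test_fps):
    return (test_fps - baseline_fps) / baseline_fps * 100.0

print(round(pct_gain(100.0, 117.02), 2))  # 17.02
print(round(pct_gain(90.0, 100.98), 1))   # 12.2
```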


----------



## jomama22

Falkentyne said:


> I get a Timespy decrease (-100 points) but a Port Royal increase (+250 points). Doesn't matter whether rebar is enabled or not (in my case the bios version didn't change). It's the new drivers.


He said he had a 1000+ point decrease. Can totally understand if it was only 100 points.


----------



## Gandyman

Hey guys, I started a new post but more people seem to read here.

I have a 30C delta on my 3090 Strix OC between GPU temp and GPU hotspot temp.

Is this normal or should I worry?



http://imgur.com/asP97aK


72c gpu temp 102c hotspot temp

Cheers


----------



## jomama22

KedarWolf said:


> Thanks for your concern, peeps.
> 
> Similar thing happened to me a few years ago and the hair dryer worked perfectly so I'm optimistic.


Yeah, if the mobo wasn't turned on it will probably be just fine. A while back, I actually had a fitting leak on my ram block while my computer was shut down. Middle of the night I hear my pc turning itself on and off. The water had gotten into the 24 pin plug and shorted the gnd and 12v pins so it was turning on and off. Water soaked all the dimm slots and covered that entire area of the board in water. 

Turned it all off, cleaned it up with 99% rubbing alcohol (had already created corrosion on some solder joints from power going through stuff while wet) and let it dry for a day doing the same thing with the hair dryer.

Next day turned it on and it was like it had never happened lmao. Used the mobo, ram and cpu for another 7 years.


----------



## mouacyk

Nizzen said:


> My BAR=off vs BAR=on test in Valhalla.
> 
> Cpubound scenario and gpubound scenario.
> 
> Rocketlake 11900k
> 4266 Dualrank 2x16 c17
> 
> 
> *17.02 % Difference when cpubound*
> 
> 
> 
> 
> 
> 
> 1440P Ultra (max settings) BAR=OFF vs BAR=ON
> 
> *12.2 % difference in gpubound scenario*


wololo...
Interesting results that are very significant. The game must be streaming a crazy amount of assets in realtime.


----------



## pantsoftime

Gandyman said:


> I have a 30c delta on my 3090 strix oc between gpu temp and gpu hotspot temp
> 72c gpu temp 102c hotspot temp


I'm not sure our situations are entirely comparable, but with a bykski waterblock on an aorus xtreme (450W) I'm seeing a peak of 15 degree delta. The strix has a beefy cooler so 30 degrees does seem a little high, but you may want to wait for a strix person to chime in.


----------



## Gandyman

pantsoftime said:


> I'm not sure our situations are entirely comparable, but with a bykski waterblock on an aorus xtreme (450W) I'm seeing a peak of 15 degree delta. The strix has a beefy cooler so 30 degrees does seem a little high, but you may want to wait for a strix person to chime in.


Cheers for the chime in. I just changed cases to an O11D with vertical mount (was in a Core P3, also vertical), and I installed MSI Afterburner and HWiNFO to test my fan setup. I noticed that my 'quiet mode' bios was no longer quiet: whereas before at 80C GPU temp the fans would go to about 65%, now at 70C the fans were pegged to a leafblower-esque 100%.

I'm wondering if it's possible that putting it in a new case has increased this 'hot spot' temp, and that's what's causing my fans to ramp up insanely all of a sudden. But strangely enough I think the directed airflow of my O11D actually outperforms the open-air, no-airflow Core P3; the GPU temps are cooler than they were. So why I get this 100% fan speed after swapping components to a new case is beyond me.


----------



## Sheyster

yzonker said:


> Went back to the XOC bios. Here's the same CP run with a 80% PL run (max power was 490w). Sorry for no labels.
> 
> Red: 390w, reBar off
> Blue: 390w, reBar on
> Yellow: 490w, reBar off
> 
> 490w average fps: 50.13
> 
> View attachment 2484863


Yep, definitely staying with the XOC BIOS since my current primary game (Warzone) isn't even supported for Re-Bar. It's pretty clear that high power limit is king for now. If we actually get an XOC BIOS with Re-Bar support then that might be worthwhile. Honestly everything I'm seeing with Re-Bar at 4K is pretty underwhelming.

Anyone done a comparison for SP4KO with and without Re-Bar? I know with AMD SAM it was about a 5% bump with Re-Bar.


----------



## yzonker

Gandyman said:


> Cheers for the chime in. I just changed cases to an O11D with vertical mount (was in a Core P3, also vertical), and I installed MSI Afterburner and HWiNFO to test my fan setup. I noticed that my 'quiet mode' bios was no longer quiet: whereas before at 80C GPU temp the fans would go to about 65%, now at 70C the fans were pegged to a leafblower-esque 100%.
> 
> I'm wondering if it's possible that putting it in a new case has increased this 'hot spot' temp, and that's what's causing my fans to ramp up insanely all of a sudden. But strangely enough I think the directed airflow of my O11D actually outperforms the open-air, no-airflow Core P3; the GPU temps are cooler than they were. So why I get this 100% fan speed after swapping components to a new case is beyond me.


Well, the cards ramp the fans at around 100C VRAM temp. I assume your VRAM temp is below 100C?


----------



## yzonker

This review matches pretty well with my RDR2 test. Pretty much no benefit. 









NVIDIA GeForce RTX 3090 with Resizable BAR is 3.17% faster on average at 4K - VideoCardz.com (videocardz.com)


----------



## Pepillo

NVIDIA PCI-Express Resizable BAR Performance Test - 22 Games, 3 Resolutions, RTX 3090, 3080, 3070, 3060 Ti (www.techpowerup.com)


----------



## jomama22

Pepillo said:


> NVIDIA PCI-Express Resizable BAR Performance Test - 22 Games, 3 Resolutions, RTX 3090, 3080, 3070, 3060 Ti (www.techpowerup.com)
> 
> 
> 
> 
> 
> View attachment 2484946


Yeah, about the only title that does anything lol. Really just a 1-3% performance boost at best atm.


----------



## Sheyster

Pepillo said:


> NVIDIA PCI-Express Resizable BAR Performance Test - 22 Games, 3 Resolutions, RTX 3090, 3080, 3070, 3060 Ti (www.techpowerup.com)
> 
> 
> 
> 
> 
> View attachment 2484946



And here is BFV 4K:











That's pretty dang bad considering it is a supported game.


----------



## yzonker

Looks like their results line up with mine as well. I was starting to think everyone needed to test for themselves, but maybe it's just resolution and video card dependent.


----------



## mouacyk

Sheyster said:


> I don't know what the real timeline is for 4090/4080Ti, but it sure sounds like it's going to be another big upgrade over the 3090. Cuda core difference alone is huge!






Huge smile at the intro


----------



## pj530i

Sheyster said:


> And here is BFV 4K:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That's pretty dang bad considering it is a supported game.


I assume they tested in single player to ensure repeatability. Performance might be different in MP. They also tested in DX11.

Very anecdotal but my FPS in BFV seem to be hitting the 144fps limit a lot more since the rebar update (3440x1440, 5950x, 3090, DX12/DXR on).


----------



## Antsu

Nizzen said:


> People never learn... Gigabutt


Yep. I learned this the hard way with my Z370 Gigabyte never being updated for 9th gen. Never again Gigabyte, no matter how good their hardware is.


----------



## J7SC

mouacyk said:


> Huge smile at the intro


...that fellow does like his painstakingly elaborate April 1 'presentations'....here's another (earlier) one which had me burst out laughing last year


----------



## martinhal

KedarWolf said:


> Hey peeps, my PC is fine. The blow dryer worked.
> 
> The only issue I had was my pump started dying, Was getting 90C temps running benchmarks and it was grinding bad after a while.
> 
> I had a spare, now idle temps are 26C on one 360 rad and 50C while running Port Royal.
> 
> Pump at 60%.


That's great. I did the same not too long ago.


----------



## KedarWolf

martinhal said:


> That's great. I did the same not too long ago.


Did you get it working and how?


----------



## des2k...

Apparently with NV Inspector you can enable ReBAR for a game that is not whitelisted by setting the flags to 1 1 0.

"Show unknown setting from NVIDIA Predefined Profiles"


----------



## KedarWolf

Deleted


----------



## KedarWolf

des2k... said:


> apparently with NV inspector you can enable rebar with 1 1 0 for flags for the game that is not whitelisted
> 
> "Show unknown setting from NVIDIA Predefined Profiles"
> 
> 
> View attachment 2484977


Yeah, you need to delete the game profile for the game you want ReBAR enabled on, then just add the game .exe to the Gears of War 5 profile.


----------



## J7SC

...further to earlier posts re. CP2077 / patch 1.2: I do think there must have been some changes in DLSS at medium-to-far distance, after just playing another round at 4K / RTX Psycho / DLSS Quality on saved scenes (fyi, not playing with the ReBAR vbios update on the Strix yet)


----------



## jura11

Received the Gelid Extreme 1.5mm pads and Thermalright Odyssey 1mm pads, and I'm now just waiting for the Thermalright Odyssey 1.5mm pads to repad my Bykski waterblocks. Can't wait to put them to the test and see if there's any difference in VRAM temperatures.

What I'm expecting is probably 2-4°C at most on VRAM.

Thanks, Jura


----------



## gfunkernaught

Has anyone had Cyberpunk crash on them with no trace or marks in the event viewer? Most of the crashes I get in games without error messages are OC stability related and the video driver crash/recovery event is logged. But last night i got a crash with no event log entries, nothing related to the video driver nor cyberpunk.


----------



## defcoms

gfunkernaught said:


> Has anyone had Cyberpunk crash on them with no trace or marks in the event viewer? Most of the crashes I get in games without error messages are OC stability related and the video driver crash/recovery event is logged. But last night i got a crash with no event log entries, nothing related to the video driver nor cyberpunk.


I had the game freeze up and it didn't give me anything in the event viewer. I was able to end the game in task manager though. I turned my OC down -10 on the core and haven't had it happen since.


----------



## gfunkernaught

defcoms said:


> I had the game freeze up and it didn't give me anything in the event viewer. I was able to end the game in task manager though. I turned my OC down -10 on the core and haven't had it happen since.


Just crashed again, no event logs. I lowered my OC and set the PL to 48%; clocks were going from 2055-2099MHz and GPU temp was 41C. Both of these crashes occurred when a cutscene was ending, similar to when Quake 2 RTX would crash when loading levels. This doesn't seem like a bad OC but something else.


----------



## nordschleife

nice spoilers!


----------



## PowerK

gfunkernaught said:


> Just crashed again, no event logs. I lowered my OC and PL to 48%, clocks were going from 2055-2099mhz, gpu temp was 41c. ... Similar to when Quake 2 RTX would crash when loading levels. This doesn't seem like a bad OC but something else.


It is a bad OC. The game shows no stability problems on our PCs.
Also, your post could be a spoiler for some people.


----------



## KedarWolf

jura11 said:


> Received the Gelid Extreme 1.5mm pads and Thermalright Odyssey 1mm pads, and I'm now just waiting for the Thermalright Odyssey 1.5mm pads to repad my Bykski waterblocks. Can't wait to put them to the test and see if there's any difference in VRAM temperatures.
> 
> What I'm expecting is probably 2-4°C at most on VRAM.
> 
> Thanks, Jura


I looked at the installation manual for my EKWB block for my Strix OC and realized I missed a thermal pad on a small component. I had been looking at the picture on my phone and it was hard to see.

I took it apart, fixed it, and repasted it with Mastermaker Gel thermal paste; now I'm getting 22C idle temps and 46C in Port Royal, this on one 360 rad with the pump at 60%.

I also have two quick disconnects on my hoses to my GPU which slows the flow some.


----------



## mirkendargen

Got my free Bykski replacement Strix block today (thanks again Bykski) and got it all mounted with 1.5mm Gelid extreme, and repurposed the super squishy pads the block came with for coil whine prevention (card is now inaudible with the side on my case, success). Temps aren't as low as they'll ultimately go yet because the paste is still setting and my loop is still bleeding bubbles, but I finally managed to break 15k PR on my ****ty Strix with the new drivers.


----------



## Zogge

jura11 said:


> Received the Gelid Extreme 1.5mm pads and Thermalright Odyssey 1mm pads, and I'm now just waiting for the Thermalright Odyssey 1.5mm pads to repad my Bykski waterblocks. Can't wait to put them to the test and see if there's any difference in VRAM temperatures.
> 
> What I'm expecting is probably 2-4°C at most on VRAM.
> 
> Thanks, Jura


Since you have measured it and all, where will you put the 1.0mm and where the 1.5mm pads? You had a Strix and a Bykski, right?

Or is it just 1.5mm on everything, like you did, mirkendargen?


----------



## mirkendargen

Zogge said:


> Since you have measured it and all, where will you put the 1.0mm and 1.5mm ? You had a strix and bykski right ?
> 
> Or is it just 1.5mm on everything like you did mirkendargen ?


The pads that came with the block were roughly the same thickness as 1.5mm Gelid Extreme but far more compressible, so 1mm Gelid Extreme might be fine on the front. On the back you could use literally anything, even 0.5mm probably; there's nothing taller than the RAM to have clearance issues.


----------



## Gebeleisis

Bobbylee said:


> This is the 1000W bios; if you’re on the stock cooler make sure you keep the power slider down. The reported power draw on a 2x8-pin card will be wrong. You must look at the PCIe slot and the #1 and #2 8-pin power draws, or take the total and subtract the (phantom) #3 8-pin value in HWiNFO64.
> 
> If mining and you don’t have a backplate cooler, be careful; the memory easily goes above 100C without active cooling on the backplate.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Thank you!

I am on water and have a custom active backplate on it.


----------



## martin28bln

Built up my 3090 FE with the EKWB waterblock last week and modified it a bit:

- small spot of thermal paste on all VRAM modules
- changed the original VRAM pads to 2x 0.5mm Gelid Ultimate
- against the manual, I only connected the backside VRAM to the backplate with pads (instead of also the other parts)

--> VRAM temp is very good now, <=70°C when mining at 122 MH/s, with water at 32-34°C in this test. Can recommend my optimization.


----------



## jaghsar

I have an MSI Suprim X that I will put under water, with an EKWB Trio waterblock on the way. Are the thermal pads that come with the EKWB decent? Does the EKWB backplate even come with thermal pads?


----------



## des2k...

jaghsar said:


> I have a MSI Suprim X that I will put under water with EKWB Trio waterblocks on the way. Are the thermal pads that come with the EKWB decent? Does the EKWB backplate even have thermal pads in the packaging?


Pads are OK, and they do include them with the backplate. I would get 12 W/mK pads for the front/back memory if you want better temps. If you plan on running 400W+ on the card, I would replace the VRM ones too.

On my Zotac with the EK block, the Odyssey pads on the memory doubled my memory OC. On the VRM it allows maybe 50-100mV less core voltage if you push past 2100 on the core.


----------



## jura11

Zogge said:


> Since you have measured it and all, where will you put the 1.0mm and 1.5mm ? You had a strix and bykski right ?
> 
> Or is it just 1.5mm on everything like you did mirkendargen ?


I will probably use 1.5mm Thermalright Odyssey or Gelid Extreme pads on the VRAM and VRM.

I bought an extra slim heatsink which I'm planning to put on the second block or backplate.

Hope this helps 

Thanks, Jura


----------



## jaghsar

des2k... said:


> Pads are OK, and they do include them with the backplate. I would get 12 W/mK pads for the front/back memory if you want better temps. If you plan on running 400W+ on the card, I would replace the VRM ones too.
> 
> On my Zotac with the EK block, the Odyssey pads on the memory doubled my memory OC. On the VRM it allows maybe 50-100mV less core voltage if you push past 2100 on the core.


Ah, thanks! Do you know of good 12 W/mK pads that will suit my purpose? The plan is to get the active EKWB backplate when they release it for 3090 Trio cards.


----------



## jura11

jaghsar said:


> Ah, thanks! Do you know of good 12 W/mK pads that will suit my purpose? The plan is to get the active EKWB backplate when they release it for 3090 Trio cards.


The best thermal pads for the money are Thermalright Odyssey or Gelid Extreme, and I would recommend getting them from AliExpress.

Hope this helps 

Thanks, Jura


----------



## schoolofmonkey

It seems the ReBAR bios update fixed the boost clocks on the 3090 XC3. My previous overclock is completely unstable in AC Valhalla; it'd randomly hang and crash to the desktop, mostly in cutscenes or indoors (error: Display driver nvlddmkm stopped responding and has successfully recovered).
I'm seeing boost clocks of 1980MHz without the previous 100MHz OC.
Before the bios update it wouldn't boost over 1755MHz, hence the overclock.

I'd been pulling my hair out for a while trying to work out what might have been causing the crashes; it was trying to boost over 2085MHz in low loads, which for a 2x8-pin 366W card isn't going to happen.


----------



## Zogge

Is there any page or should we start it, stating binning statistics of 3090 ? For yourself and everyone else to compare their specific card and understand if they are bin 0, bin 1 or bin 2. Like Silicon Lottery for CPUs for instance. What do you think ?


----------



## jomama22

Zogge said:


> Is there any page or should we start it, stating binning statistics of 3090 ? For yourself and everyone else to compare their specific card and understand if they are bin 0, bin 1 or bin 2. Like Silicon Lottery for CPUs for instance. What do you think ?


I mean, there isn't really something to tie different chips together (such as ASIC %). Each card, whether a Zotac or a KP, really just seems to be all over the place in terms of silicon.

The other issue is the power limit quandary and how it limits performance no matter what on a stock BIOS.


----------



## J7SC

jura11 said:


> Best thermal pads for money which you can get are Thermalright Odyssey or Gelid Extreme pads and I would recommend get them from Aliexpress
> 
> Hope this helps
> 
> Thanks, Jura


...I've been using the Thermalright Odyssey (1mm, 0.5mm) pads and they work great...they are also reasonably 'pliable' to adapt to component contours.



Zogge said:


> Is there any page or should we start it, stating binning statistics of 3090 ? For yourself and everyone else to compare their specific card and understand if they are bin 0, bin 1 or bin 2. Like Silicon Lottery for CPUs for instance. What do you think ?


...the only real way would be to compare unmolested stock-settings voltage curves in MSI AB on the stock BIOS on a new card, IMO. That is as close as one can get to the old ASIC%. With NVidia boost algorithms, the rest comes down to variables such as ambient temp, cooling setup, custom voltages etc., which would make direct comparisons not very useful....and as others also noted, the PL of the BIOS is also important beyond just the 'quality' of the chip and VRAM.


----------



## emil2424

Heatsinks a bit crooked but my PC is in the room behind the wall and you can't see or hear it.
In more realistic applications such as games max memory TJ is 72C.


----------



## dr/owned

3090 TUF finally got the resizable BAR update from Asus:

Looks like they didn't even bother up-revving the version number, just a different checksum.










Looks like it's working (Large Memory Range):










For the sake of completeness, you need 3 things to enable resizable BAR: a motherboard BIOS update with Above 4G Decoding enabled and Resizable BAR turned on (some vendors just expose a direct Resizable BAR setting; on Gigabyte you have to enable Above 4G Decoding to see the option), a GeForce driver update, and a video BIOS update.
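On Linux, the "Large Memory Range" result can be sanity-checked from the BAR sizes that lspci reports. A minimal sketch; the sample line, the 32G size (what a 24GB card typically advertises with ReBAR active) and the device address are assumptions, adjust for your system:

```shell
# With resizable BAR active, the GPU's BAR1 covers all of VRAM
# (e.g. size=32G on a 24GB card) instead of the legacy 256MB window.
# Given a line like the one `lspci -v` prints, extract the size field:
SAMPLE="Memory at 6000000000 (64-bit, prefetchable) [size=32G]"
BAR_SIZE=$(printf '%s\n' "$SAMPLE" | sed -n 's/.*size=\([0-9A-Za-z]*\).*/\1/p')
echo "$BAR_SIZE"   # prints 32G

# On the real card, feed lspci output through the same extraction:
#   sudo lspci -v -s <gpu-pci-address> | grep 'Memory at'
```

If you still see size=256M on the prefetchable region, one of the three steps above didn't take.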


----------



## emil2424

Where can I find the latest version of ABE?


----------



## gfunkernaught

What effective core clock/voltage (sustained) is everyone getting while playing cyberpunk maxed at 4k and quality DLSS?


----------



## J7SC

gfunkernaught said:


> What effective core clock/voltage (sustained) is everyone getting while playing cyberpunk maxed at 4k and quality DLSS?


...probably not that helpful as I actually don't have anything else like HWInfo or GPUz running these days when I play CP2077 (4K / RTX Psycho / DLSS Quality), but the profile I use is set to 2130 MHz GPU and 1313 VRAM...from what I recall though from early tests when I first got my 3090, voltage would be around 1.063v and effective clocks with a delta of 30 MHz or less.


----------



## ViRuS2k

Is anyone else getting the same issues I'm getting at the minute?
It all happened after I enabled RBAR: flashing the BIOS to RBAR, changing the motherboard's 4G decoding and such to enable ReBAR, and installing the latest drivers.

My 3090 is acting up....

Some games are working and some games flat out refuse to run, popping up a box saying: d3d12createdevicefailed
That was trying to run Resident Evil 3; then I ran Resident Evil 7 right after and it works fine.
Then I tried Gears of War 5 and it refused to run with any graphics card I have in my system; it seems to be running in software mode only.. at 1fps :/

I have no idea what happened to my system.
I have tried the following:

turning off HAGS
using Display Driver Uninstaller and installing fresh
using the Windows 10 GPU selector in graphics settings to select the dedicated GPU, but nothing works.....

What the hell is going on? Can someone help? Can it really only be me getting this issue....


----------



## EniGma1987

emil2424 said:


>


Nice. Is that just a crapton of memory heatsinks? I need to do something on mine. I put a few little memory heatsinks on the back, just one over each memory chip, and a fan kinda sorta blowing on the back, but it is still quite hot. I was thinking of getting two memory waterblocks and setting them on the back of my card. lol


----------



## emil2424

6 packs of 10 pieces + double-sided thermal conductive tape 0.5 mm from Alphacool


----------



## Falkentyne

ViRuS2k said:


> is anyone else getting the same issues im getting at the minute,
> and it all happened after i was enabling RBAR and flashing bios to RBAR and changing motherboards 4g decoding and stuff to enable rebar and installing latest drivers.
> 
> my 3090 is acting up....
> 
> some games are working and some games flat out refuse to run poping up and box saying : d3d12createdevicefailed
> and this was trying to run resident evil 3, then i run right after that resident evil 7 and it works fine,
> then i try out gears of war 5 and it refused to run with any graphics card i have in my system, seems to be running in software mode only.. at 1fps :/
> 
> i have no idea what happened to my system
> i have tried the following
> 
> turning off HAGS
> using driver uninstaller and installing fresh
> using windows 10 gpu selector in graphics settings to select dedicated GPU but nothing works.....
> 
> what the hell is going on can someone help can it be only be getting this issue....


What 3090 do you have?


----------



## gfunkernaught

J7SC said:


> ...probably not that helpful as I actually don't have anything else like HWInfo or GPUz running these days when I play CP2077 (4K / RTX Psycho / DLSS Quality), but the profile I use is set to 2130 MHz GPU and 1313 VRAM...from what I recall though from early tests when I first got my 3090, voltage would be around 1.063v and effective clocks with a delta of 30 MHz or less.


Power limit? 1063mv with 2100mhz is pretty good. I could run 2088-2100mhz effective but at 1075mv and I have to keep my PL at 60% so the voltage doesn't fluctuate. Otherwise I get crashes. I also tried +130 on the core with PL 48% and the clocks would go from like 2055-2099mhz (rare peaks), with the voltage anywhere from 1000mv to 1068mv. That setting works great. I tested a specific area while running windowed mode so that I can monitor changes in fps while switching profiles between the former settings and the latter. I saw maybe 1-2fps increase lol. Still trying to figure out if 480w capped vs letting cyberpunk use whatever it needs to maintain 2100mhz is even worth it. Probably not


----------



## ViRuS2k

Falkentyne said:


> What 3090 do you have?


Hi pal i have 2 graphics cards on my godlike x570 + 5950x Msi gaming X trio 3090 + 1080TI as i use both for mining 
i have ruled out that its cause of 2 different cards as i have disabled one in device manager but still same issue, i have MSI suprimX rbar bios installed, i think i installed Rbar correct aswell so i dont get it, HWINFO shows rebar enabled, GPUZ shows rebar enabled. 

its very strange.... i dont even know exactly when these games stopped working as i havent played them in a wile so not even sure its rbar thats the issue lol
but for one game to work and one game to not work is strange for instance

both using the same engine, resident evil 3 and 7 
7 works
3 gives 3d3 issue and does not startup.


----------



## J7SC

gfunkernaught said:


> Power limit? 1063mv with 2100mhz is pretty good. I could run 2088-2100mhz effective but at 1075mv and I have to keep my PL at 60% so the voltage doesn't fluctuate. Otherwise I get crashes. I also tried +130 on the core with PL 48% and the clocks would go from like 2055-2099mhz (rare peaks), with the voltage anywhere from 1000mv to 1068mv. That setting works great. I tested a specific area while running windowed mode so that I can monitor changes in fps while switching profiles between the former settings and the latter. I saw maybe 1-2fps increase lol. Still trying to figure out if 480w capped vs letting cyberpunk use whatever it needs to maintain 2100mhz is even worth it. Probably not


For games (DX11, DX12) I always use 115% - only go max PL and clocks for the benchies


----------



## gfunkernaught

J7SC said:


> For games (DX11, DX12) I always use 115% - only go max PL and clocks for the benchies


On which bios though? Should've asked that instead...
btw I spoke too soon about the 480w power limit, cyberpunk just crashed on me. Looks like I gotta stick with 600w so the voltage doesn't bounce around.


----------



## J7SC

gfunkernaught said:


> On which bios though? Should've asked that instead...
> btw I spoke too soon about the 480w power limit, cyberpunk just crashed on me. Looks like I gotta stick with 600w so the voltage doesn't bounce around.


...original / stock (highest GPUz 'max' I've seen w/ that on my Strix is 503 W)


----------



## Gosugod87

Anyone have any idea which thermal pad size the msi gaming x trio 3090 uses on die side and back side ? I changed the thermal pads using 1.5mm thermal right and now the card is locked under msi afterburner. Pretty sure the pads are the wrong size I used .. thanks


----------



## gfunkernaught

Gosugod87 said:


> Anyone have any idea which thermal pad size the msi gaming x trio 3090 uses on die side and back side ? I changed the thermal pads using 1.5mm thermal right and now the card is locked under msi afterburner. Pretty sure the pads are the wrong size I used .. thanks


Not sure about the stock pads, but the EK block uses 1mm pads, don't know if that matters...

What do you mean locked? The sliders are greyed out?


----------



## Gosugod87

gfunkernaught said:


> Not sure about the stock pads, but the EK block uses 1mm pads, don't know if that matters...
> 
> What do you mean locked? The sliders are greyed out?


Yeah man greyed out but windows still shows it detected and powers on, fans run and rgb turns on. I’m guessing there isn’t proper contact and the pads are wrong size ?


----------



## J7SC

Gosugod87 said:


> Yeah man greyed out but windows still shows it detected and powers on, fans run and rgb turns on. I’m guessing there isn’t proper contact and the pads are wrong size ?


...could also be too much or uneven pressure on the GPU


----------



## Gosugod87

J7SC said:


> ...could also be too much or uneven pressure on the GPU


Thanks man


----------



## gfunkernaught

Gosugod87 said:


> Yeah man greyed out but windows still shows it detected and powers on, fans run and rgb turns on. I’m guessing there isn’t proper contact and the pads are wrong size ?


greyed out sliders in AB sounds like AB isn't detecting the gpu...
Check if these settings are enabled in AB


----------



## Gosugod87

gfunkernaught said:


> greyed out sliders in AB sounds like AB isn't detecting the gpu...
> Check if these settings are enabled in AB
> View attachment 2485172


What AB man ?


----------



## Gosugod87

gfunkernaught said:


> greyed out sliders in AB sounds like AB isn't detecting the gpu...
> Check if these settings are enabled in AB
> View attachment 2485172


The 3 other cards I'm able to control; it's just this card I can't control in Afterburner since changing the thermal paste


----------



## gfunkernaught

Gosugod87 said:


> The 3 other cards are able to control , just this card I can’t since changing the thermal paste in afterburner


Can you post a screenshot of what AB looks like with the problem card?


----------



## Pepillo

ViRuS2k said:


> is anyone else getting the same issues im getting at the minute,
> and it all happened after i was enabling RBAR and flashing bios to RBAR and changing motherboards 4g decoding and stuff to enable rebar and installing latest drivers.
> 
> my 3090 is acting up....
> 
> some games are working and some games flat out refuse to run poping up and box saying : d3d12createdevicefailed
> and this was trying to run resident evil 3, then i run right after that resident evil 7 and it works fine,
> then i try out gears of war 5 and it refused to run with any graphics card i have in my system, seems to be running in software mode only.. at 1fps :/
> 
> i have no idea what happened to my system
> i have tried the following
> 
> turning off HAGS
> using driver uninstaller and installing fresh
> using windows 10 gpu selector in graphics settings to select dedicated GPU but nothing works.....
> 
> what the hell is going on can someone help can it be only be getting this issue....


Many people have the same problem with the latest drivers, especially if they use an insider version of Windows. Switching to previous drivers usually solves this.


----------



## gfunkernaught

*


----------



## schoolofmonkey

I've started to notice some different behavior in my card since the Re-Bar BIOS update.
Not only has the ability to overclock dropped from +100MHz to +20MHz, but my stock clocks have increased to where I see a steady 1980MHz when playing AC Valhalla and Hitman 3.
Plus my power slider is set to 104%, but I see 112% in Afterburner's monitoring along with 371W power usage (the XC3 has a 366W limit) and higher temps.

While I'm not complaining about a much better performing stock card, this is a card with a stock boost clock of 1725MHz; it had never seen over 1800MHz even with an overclock, and definitely not a steady clock speed.
Don't know what to make of it really.









AC Valhalla gameplay:


----------



## ViRuS2k

Pepillo said:


> Many people have the same problem with the latest drivers, especially if they use an insider version of Windows. Switching to previous drivers usually solves this.


Thank you, you were right....
This is a driver issue: 465.89 has problems detecting a GPU when games try to load; not all games, just some games.
I tried multiple times using DDU and reinstalling, same issue. 

But then I installed 470.14 and all issues went away; all games loaded with no GPU detection issues.


----------



## ArcticZero

Has anyone tried the Bykski AIO for reference 3090? Just curious if anyone has experience with it. And yes I am aware it uses v1 of their block.

Bykski AIO GPU Water Cooling Kit For NVIDIA RTX 3080 3090 AIC Reference Graphics Card VGA Radiator 5V A RGB B FRD3090H RBW|Fluid DIY Cooling| - AliExpress

I have my CPU on a really old Swiftech H-240x and don't really want to mess with it now and get a full custom loop.

Also thinking of getting it since the stock PNY cooler can't handle any higher than ~420w without core hitting the high 80's, and can't seem to keep my memory junction temps low enough even with additional sinks on the backplate, since it's the front that's having issues.


----------



## yzonker

ArcticZero said:


> Has anyone tried the Bykski AIO for reference 3090? Just curious if anyone has experience with it. And yes I am aware it uses v1 of their block.
> 
> Bykski AIO GPU Water Cooling Kit For NVIDIA RTX 3080 3090 AIC Reference Graphics Card VGA Radiator 5V A RGB B FRD3090H RBW|Fluid DIY Cooling| - AliExpress
> 
> I have my CPU on an really old Swiftech H-240x and don't really want to mess with it now and get a full custom loop.
> 
> Also thinking of getting it since the stock PNY cooler can't handle any higher than ~420w without core hitting the high 80's, and can't seem to keep my memory junction temps low enough even with additional sinks on the backplate, since it's the front that's having issues.


I haven't seen anyone with one, but a 240mm thin rad is really small for a 3090. I started with a thin 360 and ended up with a 360+280. Kinda the same as you, I had a CPU AiO that worked well enough, but ended up replacing it.

Depends on what your goals are though. If you just want to keep it fairly well below the temp limits (say 60-70C core), then it could probably do that. Also depends on your tolerance for noise level. It won't be quiet.


----------



## nievz

I'm experiencing frequent micro stutters in Warzone since the rebar update. Anyone with the same experience?


----------



## des2k...

Well, this is a nice surprise, my first 15k in PR.
15,021 in Port Royal at just 2145 core, ~2120 effective frequency.









I scored 15 021 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## gfunkernaught

del


----------



## yzonker

To comment on a previous discussion a while ago. I finally got my Oculus set up again. I too see the odd behavior with it showing power limit at levels below what I have it set at. I had the GPU set to 420w and showing PL, but it was only drawing ~370w. It does drop down off of max voltage slightly (1.07-1.08). Cranking the power limit up to 500w did get rid of the PL indicator and peg the voltage at 1.1v. This only pulled 390w though.

Still, it does not behave the same as what we see using a monitor.


----------



## meinfehr

Does anyone know if flashing different vendor BIOS is possible yet for 30 series cards? I'd love to test out the EVGA 520W bios on my ASUS 3090.


----------



## Falkentyne

meinfehr said:


> Does anyone know if flashing different vendor BIOS is possible yet for 30 series cards? I'd love to test out the EVGA 520W bios on my ASUS 3090.


That's been possible since the 2nd week of release.
The only cards that can't be vendor cross-flashed are Founders Edition cards. The FE BIOS can't be flashed onto non-FE cards, and FE cards can't take non-FE BIOSes. 

A hardware programmer would work, if the checksums are correct, but there are still other issues, and no one who has used a hardware programmer to force-flash a BIOS (or used a BIOS editor, corrected the checksums and then force-flashed a modded BIOS) has posted here. And the FE BIOS chip isn't SOP8, so a Pomona 5250 inline clip won't work on its UDFN-8 package. Buildzoid thinks he can solder wires to the FE chip.


----------



## meinfehr

I was hoping I could flash the 520W bios to my card using NVFlash but I get an ID mismatch error


----------



## Falkentyne

meinfehr said:


> I was hoping I could flash the 520W bios to my card using NVFlash but I get an ID mismatch error


use --protect off and then use -6
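For anyone following along, the full cross-flash sequence usually looks like this. A sketch only: the ROM file names are hypothetical, flashing the wrong image can brick the card, and you should always save the original BIOS first:

```shell
# Back up the current vBIOS before touching anything.
nvflash64 --save original_backup.rom

# Disable the flash write protection.
nvflash64 --protect off

# -6 overrides the PCI subsystem ID mismatch check, allowing a
# different vendor's BIOS (e.g. the EVGA 520W one) to be written.
nvflash64 -6 evga_520w.rom

# Reboot, then confirm the flashed version.
nvflash64 --version
```

If the flash goes wrong, the backup ROM plus a second GPU (or iGPU) to boot from is usually the recovery path.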


----------



## PLATOON TEKK

nievz said:


> I'm experiencing frequent micro stutters in Warzone since the rebar update. Anyone with the same experience?


Same here. Good to know I'm not alone; might throw on the old BIOS and test.


----------



## meinfehr

Falkentyne said:


> use --protect off and then use -6


thanks! I got it - is there a particular guide or post I can follow to tune it with the Classified Tool?


----------



## yzonker

meinfehr said:


> thanks! I got it - is there a particular guide or post I can follow to tune it with the Classified Tool?


Classified tool doesn't work unless you have a KP card AFAIK.


----------



## meinfehr

yzonker said:


> Classified tool doesn't work unless you have a KP card AFAIK.


It opens for me with the 520W bios, and allows me to change settings. It would be good to know if those settings actually do anything for cross-flashed cards.


----------



## meinfehr

I'm not sure if the 520W BIOS is working as it should. I have oc'd my card with Afterburner (custom curve + 800mem, PL120%) and the highest wattage I see from 3DMark and Furmark is 380W. Also my GPUz doesn't express power limit in watts, only percentage. Is there a safe test or method of 'blipping' the power usage to maximum?


----------



## PLATOON TEKK

meinfehr said:


> I'm not sure if the 520W BIOS is working as it should. I have oc'd my card with Afterburner (custom curve + 800mem, PL120%) and the highest wattage I see from 3DMark and Furmark is 380W. Also my GPUz doesn't express power limit in watts, only percentage. Is there a safe test or method of 'blipping' the power usage to maximum?


gpuz sensor page “board power draw” shows watts.

do you have a 3 pin or 2 pin asus gpu?


----------



## KedarWolf

Here are my GPU-Z and Afterburner logs with Port Royal and TimeSpy with the demos and benchmarks.

The HardwareMonitoring.txt has my effective clock speeds; both logs have the TimeSpy demo and benchmark first, the Port Royal demo and benchmark second.

This also passes both stress tests.

I'm on water now though.


----------



## meinfehr

PLATOON TEKK said:


> gpuz sensor page “board power draw” shows watts.
> 
> do you have a 3 pin or 2 pin asus gpu?


My card is a triple pin, and yes 380W max is what I saw from the GPU-Z sensor page
edit: in another test I saw it go up to 400W and it stays there consistently under a stress load


----------



## meinfehr

Here is my GPU Z sensor log and AB profile. Would love any help or education to make the most of my card (both for competitive benchmarks and gaming). One thing I notice is the gpu always hits power limit (green bars on PerfCap Reason) under load. 

I only have one theory, but AFAIK, thanks to Buildzoid and other techtubers, PCIe splitter cables are still more than capable of delivering the required power for a triple 8-pin card (I have one direct cable and one 2-way splitter)


----------



## yzonker

meinfehr said:


> Here is my GPU Z sensor log and AB profile. Would love any help or education to make the most of my card (both for competitive benchmarks and gaming). One thing I notice is the gpu always hits power limit (green bars on PerfCap Reason) under load.
> 
> I only have one theory but AFAIK thanks to buildzoid and other techtubers, pcie splitter cables are still more than capable of delivering the required power for a triple pin card (I have one direct cable and one 2-way splitter)


It's that curve you have defined. It's rolling all the way back down to around the 900mv point. Just use a core offset and run with the original curve and see what power it draws with 3DMark.


----------



## des2k...

meinfehr said:


> Here is my GPU Z sensor log and AB profile. Would love any help or education to make the most of my card (both for competitive benchmarks and gaming). One thing I notice is the gpu always hits power limit (green bars on PerfCap Reason) under load.
> 
> I only have one theory but AFAIK thanks to buildzoid and other techtubers, pcie splitter cables are still more than capable of delivering the required power for a triple pin card (I have one direct cable and one 2-way splitter)
> View attachment 2485302


You'll always hit the power limit. If you want no power limit, you'll have to flash the 1000W KingPin vBIOS and get a waterblock.

Quake RTX and 3DMark Time Spy, for example, go to 600W+.

Most 3090s will max out at 2190 core and can use up to 500W in regular games. In demanding games, the difference between 400W and 500W+ board power is like 10fps at most.


----------



## meinfehr

yzonker said:


> It's that curve you have defined. It's rolling all the way back down to around the 900mv point. Just use a core offset and run with the original curve and see what power it draws with 3DMark.


That curve is what I have tuned for the voltage application when the gpu is under load. It bounces between 900 and 1050mV depending on the intensity of the load, and for the best performance in my tests, keeping the card at or above 2GHz is ideal. Even testing with the default curve, I have not seen the board power go above 450W (when the card was on the Asus 480W BIOS). I will test more with the default curve at your recommendation, but I am not confident it will produce the results I am aiming for.


----------



## meinfehr

des2k... said:


> you'll always hit power limit. You'll have to flash the 1000w kingpin vbios if you want no power limit and get a waterblock.
> 
> quake rtx, 3dmark timespy for example goes to 600w+
> 
> Most 3090 will max out at 2190core and can use up to 500w in regular games. In demanding games, the difference between 400w and 500w+ for board power is like 10fps at most.


I'm aware that I'll always hit power limit, just not happy and a little confused that it hits a power limit when there is more power to be had lol. My concern here is less about games and more about testing and benchmarking. I test for fun, sometimes competitively where every single watt and fps matter. For that use, I'd love for it to take advantage of all 520W I am trying to give it.


----------



## nordschleife

Did Galax release a new 1000w RBar bios yet?


----------



## Zogge

Pin 3 on the Strix with the KP BIOS always shows 2W, hence you will see a max of around 360-380W. Add roughly 150W to it and you are at 520+W.

I verified on my AX1600i that all 3 pins deliver around 13A. It is visible in Corsair Link.
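As a quick sketch of that accounting (the 370W software reading and the 13A per pin are taken as assumptions from the posts above):

```shell
# The KP vBIOS reads the Strix's third 8-pin as ~2W, so software
# "board power" is missing one whole connector: ~13A at 12V.
REPORTED_W=370                 # e.g. GPU-Z board power with the misread pin
PIN_AMPS=13                    # per-pin current observed in Corsair Link
MISSING_W=$((PIN_AMPS * 12))   # one 8-pin at 12V: 156W never counted
ACTUAL_W=$((REPORTED_W + MISSING_W))
echo "reported=${REPORTED_W}W missing=${MISSING_W}W actual=${ACTUAL_W}W"
```

Which lands right in the 520+W range Zogge describes, so a 380W reading on this BIOS is not actually running in spec of the reported number.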


----------



## meinfehr

Zogge said:


> Pin 3 on strix with KP bios always shows 2W hence you will see max around 360-380W. Add 150ish W to it and you are att 520+ W.
> 
> I verified it on my ax1600i that all 3 pins deliver around 13A. It is visible in corsair link.


Great to know, thanks! The highest total board power I've seen is 400W so with that info I know I'm running in spec.


----------



## meinfehr

nordschleife said:


> Did Galax release a new 1000w RBar bios yet?


I'm not sure if it helps you but I did see one in the unofficial EVGA bios page


----------



## PowerK

nordschleife said:


> Did Galax release a new 1000w RBar bios yet?


If I'm not mistaken, Galax never released 1kW vBIOS whether RBar enabled or not.


----------



## PLATOON TEKK

PowerK said:


> If I'm not mistaken, Galax never released 1kW vBIOS whether RBar enabled or not.


Think so. Apart from that unverified one. But that didn't even flash on either of my OC Lab cards, and apparently needs a hardware flash.


----------



## emil2424

des2k... said:


> Most 3090 will max out at 2190core and can use up to 500w in regular games. .....


Maybe in 1080p xD
Most RTX 3090s in regular games such as Metro Exodus, AC Valhalla, Division 2 and Cyberpunk at 4K will be around 1920-2040 MHz after OC at 375W.
With a 1000W BIOS you can add at most 100 MHz and settle somewhere in the range of 2000-2080 MHz under realistic load.
A clock of 2190 MHz can be observed on maybe 1% of cards, and only under a light load like Heaven Benchmark (I'm not talking about extreme cooling).


----------



## PowerK

emil2424 said:


> Maybe in 1080p xD
> Most of the RTX 3090 in regular games such as Metro Exodus, AC Vallhala, Division 2, Cyberpunk in 4K will be around 1920-2040 MHz after the OC at 375W.
> With a 1000W bios you can add at most 100 MHz and balance somewhere in the range of 2000-2080 MHz in realistic load.
> The clock at 2190 mhz can be observed in 1% of cards and only with a light load like Heaven Benchmark (I'm not talking about extreme cooling).


Indeed. Since they introduced GPU clock boost in Kepler, overclocking isn't much fun at all.


----------



## schoolofmonkey

emil2424 said:


> Maybe in 1080p xD
> Most of the RTX 3090 in regular games such as Metro Exodus, AC Vallhala, Division 2, Cyberpunk in 4K will be around 1920-2040 MHz after the OC at 375W.
> With a 1000W bios you can add at most 100 MHz and balance somewhere in the range of 2000-2080 MHz in realistic load.
> The clock at 2190 mhz can be observed in 1% of cards and only with a light load like Heaven Benchmark (I'm not talking about extreme cooling).


Currently, since I upgraded the 3090 XC3's (366W) BIOS to the Re-Bar BIOS, I'm getting those 1920-2040 MHz clock speeds without an overclock; I actually can't overclock the core at all now.
I'd been running a +100MHz overclock; after the BIOS and driver update the card was unstable at the higher boost clock (2100MHz).
Guess EVGA shut us XC3 users up from whining about how the card underperforms; now they've maxed it out, with no overclocking headroom, well maybe +20MHz...


----------



## jomama22

PLATOON TEKK said:


> Think so. Apart from that unverified one. But that didn’t even flash on either of my oc lab and apparently needs hardware flash.


You should contact galax directly and inquire with your oc labs. Pretty sure they will give you a signed one you can flash to them. Wouldn't doubt that they will require you to sign away the warranty though.


----------



## des2k...

emil2424 said:


> Maybe in 1080p xD
> Most of the RTX 3090 in regular games such as Metro Exodus, AC Vallhala, Division 2, Cyberpunk in 4K will be around 1920-2040 MHz after the OC at 375W.
> With a 1000W bios you can add at most 100 MHz and balance somewhere in the range of 2000-2080 MHz in realistic load.
> The clock at 2190 mhz can be observed in 1% of cards and only with a light load like Heaven Benchmark (I'm not talking about extreme cooling).


If you're talking effective frequency, yes, the most I've seen is 2155 effective on mine for all-day gaming (Control/CP), and that's at 2190 core. 

It's really hit and miss, as depending on the paste / mount of the waterblock I do lose stability at 2190, 1093mv.

I did notice some texture flickers that high in 3DMark after looping PR for hours, so I have it pretty conservative at 2145 core, 1056mv now.


----------



## defcoms

I have been getting around 2040-2070mhz in games @ +110 boost on my strix card with stock rbar bios @ 3440x1440.

Cyberpunk Clocks seem to stay @ 2070 and around 380w as with most games I play.





Battlefield 4 @ 125% resolution scale they seem to be around 2040-2070 @ 450W with some maps it does spike to 480W.


----------



## jura11

I can do 2130-2145MHz on my Palit RTX 3090 GamingPro, stable in any game with the KPE XOC 1000W BIOS; 2190MHz I don't think I will ever see on this card.

In Watch Dogs Legion I'm stable at 2115-2130MHz with some blips to 2145MHz; usually in that game I'm hitting like 65% of the KPE XOC 1000W BIOS power limit, and temperatures won't break 36-38°C 

Hope this helps 

Thanks, Jura


----------



## J7SC

...as mentioned to @gfunkernaught, I usually just set my Strix OC to 2130 / 1313 / 115% these days, but I pushed it a bit here...the delta between nominal and effective seems to be about 18 MHz running CP2077, then FS2020. I should add that the system has big cooling (a total of 1280x60mm+ of rads w/ push-pull, 2x D5s)


----------



## Sheyster

Seems there are several reports of micro-stutter in Warzone with Re-Bar enabled (MB BIOS, vBIOS, new driver all installed). This is interesting considering Warzone is not a supported game for Re-Bar, so apparently this is a driver bug of some sort; perhaps Re-Bar is not actually off in Warzone?

Who else has experienced issues like this in Warzone recently? Just curious to know how universal this issue is.


----------



## Sheyster

defcoms said:


> Battlefield 4 @ 125% resolution scale they seem to be around 2040-2070 @ 450W with some maps it does spike to 480W.


Sure do miss BF4! I logged over 10K kills with the scout heli's. Good times!


----------



## defcoms

Sheyster said:


> Sure do miss BF4! I logged over 10K kills with the scout heli's. Good times!


I can't wait for the new one to come out. I've been playing BF4 again to hold me over. I don't remember there being so many servers with rules for allowed weapons. Seems the BF4 server admins running them have become a bunch of vaginas; every loadout I used to use gets me admin-killed in some way.


----------



## PLATOON TEKK

Sheyster said:


> Seems there are several reports of micro-stutter in Warzone with Re-Bar enabled (MB BIOS, vBIOS, new driver all installed). This is interesting considering Warzone is not a supported game for Re-Bar. So apparently this is a driver bug of some sort, perhaps Re-Bar is not actually off in Warzone?
> 
> Who else has experienced issues like this in Warzone recently? Just curious to know how universal this issue is?


Someone else mentioned it here already; I have the same issues. Seems pretty widespread. Oddly enough, I had a stutter-free CoD session last night though.


----------



## RedRumy3

On my 3090 Founders, I replaced the stock thermal pads and added more thermal pads on the back side of the card, and it dropped my memory temps by 24°C. Crazy; I was hitting around 92-96°C on memory in BFV and other games, and after the new thermal pads I'm hitting around 70-72°C. Worth swapping out the pads for sure. I used 1.5mm Thermalright thermal pads, 12.8 W/mK.


----------



## EarlZ

defcoms said:


> I have been getting around 2040-2070mhz in games @ +110 boost on my strix card with stock rbar bios @ 3440x1440.
> 
> Cyberpunk Clocks seem to stay @ 2070 and around 380w as with most games I play.
> 
> 
> 
> 
> 
> Battlefield 4 @ 125% resolution scale they seem to be around 2040-2070 @ 450W with some maps it does spike to 480W.


What GPU voltage is that to only use 380W at 2070MHz in CP2077, something like 0.875?


----------



## gfunkernaught

So for those of you using the KP 1kw BIOS, are you leaving the PL slider at 100%? When I was testing CP2077 I tried setting the core to an offset I can't remember (since I deleted that profile), then set the curve to 1093mv, and it wasn't stable at an effective [email protected] That was with the PL slider at 60%; I never saw the power limit warning, nor did I see fluctuation/throttling.


----------



## KedarWolf

Here's my best Port Royal.

I don't put much stock in people posting scores without a verification URL; they could be using the LOD bias trick, which greatly inflates the score, but the link will show they did that and that it's an invalid result.









I scored 15 260 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## bmagnien

KedarWolf said:


> Here's my best Port Royal.
> 
> I don't take too much stock for peeps posting it without a verification URL, they can be using the LOD BIAS trick which greatly increases the score but the link will show they did that and it's an invalid result.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 260 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2485429


Is this on air still or did you get your block set up?


----------



## KedarWolf

bmagnien said:


> Is this on air still or did you get your block set up?


No, got my block set up.


----------



## bmagnien

KedarWolf said:


> No, got my block set up.


Ok - not sure if you posted previously, and I know this is unsolicited advice, but I'd take a look at your mounting and general loop thermals - 52°C avg for a PR run seems quite high. I feel like you could get that much lower with the EK block (I think that's what you're using)? Was this run done with full fans/pump?


----------



## defcoms

EarlZ said:


> What GPU voltage is that to only use 380watts at 2070Mhz in CP2077, something like 0.875 ?


Not sure; it's just the power limit slider maxed out and +110 to core. I haven't gotten into messing with the voltage curve yet, so I haven't really been paying much attention to it. What wattage do you get?


----------



## KedarWolf

bmagnien said:


> Ok - not sure if you posted previously and I know this is unsolicited advice, but I’d take a look at your mounting and general loop thermals - 52c avg for a PR run seems quite high. I feel like you could get that much lower with the EK block (I think that’s what you’re using)? Was this run done will full fans/pump?


60% fans and pump, using an EKWB Phoenix 360 AIO and two quick disconnects on the hoses. Also, I only have two fans on the AIO; waiting for screws to put three more fans on it.

I need to keep the fan closest to the block off; it hits the top of the 3090 block with my case.

It's what I expected for my setup, but I may repaste it when I get my MX-5 thermal paste.

They changed the formula of the MasterGel Maker and it's super thick now and hard to apply right.

Edit: I mean, as is I get 26°C idle temps, so it seems to be about right.


----------



## gfunkernaught

So I was able to play Cyberpunk for about 30 min with +150 core, +1200 vram, and the PL set to 100% on the 1kw BIOS, and the eff clock was [email protected] I noticed the voltage would drop a bin to 1068mv and the clock would drop to 2125mhz or so. All was fine; GPU temp was 41°C avg. Then the voltage went up to 1085mv, and after a few minutes the game crashed. I'm wondering if 1085mv is a problem for my card and 1075mv is the sweet-spot voltage. I'll keep testing. Thought I'd share.


----------



## des2k...

gfunkernaught said:


> So for those of you that are using the KP 1kw bios, are you leaving the PL slider at 100%? When I was testing cp2077 I tried setting the core to an offset that I can't remember since I deleted that profile, then set the curve to 1093mv and it wasn't stable at an effective [email protected], and that was with the PL slider at 60%, never saw the power limit warning nor did I see fluctuation/throttling.


I leave mine at 100%. Outside of a few games like Quake RTX, GT2, or Path of Exile at unlimited fps, power usage is not that high. I also play with a G-Sync 66fps cap, so that reduces the power a bit.

Since you have a good water delta, you could loop GT2 and PR for at least 1-2h to see what's stable for your card. I have not crashed in heavy games with an OC that passes 4h in 3DMark.

1093mv is usually very hard on the VRM for Ampere. I would honestly not play at 1093mv unless you have an RTX 3090 with a big VRM config like a Strix, FTW3, Kingpin, etc.


----------



## Falkentyne

gfunkernaught said:


> So I was able to play cyberpunk for about 30min with +150 core +1200 vram with the PL set to 100% on the 1kw bios and the eff clock was [email protected] I noticed the voltage would go drop a bin to 1068mv and the clock would drop to 2125mhz or so. All was fine, gpu temp was 41c avg. Then the voltage went up to 1085mv and after a few minutes the game crashed. I'm wondering if 1085mv is a problem for my card and 1075mv is the sweetspot voltage. I'll keep testing. Thought I'd share.





des2k... said:


> I leave mine 100%, outside of few games like quake rtx ,gt2 or path of exile unlimited fps power usage is not that high. I also play gsync 66fps cap, so that reduces the power a bit.
> 
> Since you have a good water delta, you could loop gt2 and pr for at least 1,2h to see what's stable for your card. I have not crashed heavy games with OC passing 4h in 3dmark.
> 
> 1093mv is usually very hard on the vrm for ampere. I would honestly not play at 1093mv, unless you have a rtx3090 with a big vrm config like strix,ftw3,kingpin, etc


It doesn't work this way.
You can lock a voltage point, but it doesn't exactly lock it. It locks the tier, not the voltage, because what you see on the V/F curve isn't voltage at all. It's VID.
If you lock the 1.10v VID point on the curve, it will simply try to use the highest tier it can. If the voltage slider is at 0%, this will limit the max VID to between 1.056v-1.081v. The % on the slider where 1.1v becomes available depends on the shape of the curve itself, which is designated by system firmware. At 100%, the VID range will be 1.069v-1.10v. It will cycle through these VID points and will constantly rearrange the shape of the curve, even if you lock 1.10v in MSI Afterburner by pressing L on that point.

Locking 1.075v will not have the effect you actually expect, because 1.075v will sit on one of two possible tiers depending on the shape of the curve, but it will stop the VID from exceeding 1.075v.

Manually changing the curve so that 1.10v (or 1.075v) is at its own isolated point (so the point can't change) completely destroys your effective core clocks. This can only be avoided by increasing the MSVDD voltage manually (which you can only do with an Elmor EVC2X device or a Kingpin card).

Remember: what you see on the "voltage slider" and the V/F curve is _NOT_ voltage! It's VID.


----------



## lolhaxz

Falkentyne said:


> It doesn't work this way.
> You can lock a voltage point, but it doesn't exactly lock it. It locks the tier, not the voltage. Because what you see on the V/F curve isn't voltage at all. It's VID.
> If you lock the 1.10v VID Point on the curve, it will simply try to use the highest tier it can. If the voltage slider is at 0%, this will limit the max VID to between 1.056v-1.081v. The % on the slider where 1.1v becomes available depends on the shape of the curve itself, which is designated by system firmware. At 100%, the VID range will be 1.069v-1.10v. It will cycle through these VID points and will constantly rearrange the shape of the curve, even if you lock 1.10v in MSI Afterburner by pressing L on that point.
> 
> Locking 1.075v will not have the effect you actually expect because 1.075v will sit on two possible tiers, depending on the shape of the curve, but it will stop the VID from exceeding 1.075v.
> 
> Manually changing the curve so that 1.10v (or 1.075v) is at its own isolated point (so the point can't change) completely destroys your effective core clocks. This can only be avoided by increasing MSVDD voltage manually (which you can only do with Elmor EVC2X device or Kingpin cards).
> 
> Remember: what you see on "voltage slider" and V/F curve is _NOT_ voltage! it's VID.


Take some notes, re: curves/frequencies/behaviour in general.

What is demonstrable:

A) Consecutive points on the V/F curve that are equal will fall to the lowest point, unless power/temperature are low enough to dictate otherwise; if power/temperature are too high, it will take the smallest step back it can. If you are going to be power limited, the MORE variation in the points (i.e. at least 15MHz per step), the more choices the card has when it steps back; if it steps back to a point with 5 continuous points, it will step right back to the first (in a limited scenario).

B) The curve is dragged down 15MHz for every 5-6°C increase in temperature, or thereabouts (the algorithm is slightly more complex than this). The resulting output curve is constantly deviating slightly - some points may merge or diverge differently at different temperatures, which is particularly nasty if the steps are single units, i.e. 15MHz.

C) Modifying a curve is a suggestion; the driver/card will often modify various points on the V/F curve for a whole bunch of reasons, i.e. rules.

D) A locked *voltage point* will only deviate if a power/temperature/etc constraint is hit; otherwise it will function as expected at the locked voltage point. A locked *frequency point* behaves differently again... there are a few scenarios where constraints you don't even know about are being hit.

E) The MS clock and NV clock follow the V/F curve *independently*, following similar rules; both have a maximum voltage and power limit (as defined by the BIOS). There is still a bit of mystery around how you should interpret these clocks.

So yes, there is a whole bunch of ways you can shoot yourself in the foot by setting a custom curve... but play around and make some observations. The ultimate takeaway would be, when power limited: as much diversity in the points as possible, and make sure you at minimum set your MSVDD max voltage point to match (a step or two below) your NV point (they obviously can't be the same, otherwise refer to A).

Where you are not MSVDD power limited (very few scenarios in reality), you can effectively set the NV clock and MS clock separately... there is no need to "raise the MSVDD voltage".

Disclaimer: I talk about an "MS clock" here, but in reality it's just another clock domain being reported by the GPU that we really don't have any *solid* evidence about - what it's doing or what it is. All we know is that, to some extent, it has an impact on performance.
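Rule B above can be sketched as a toy model (pure illustration: the flat 15MHz-per-band behaviour, the 5°C band width, and the 35°C reference point are assumptions for the sake of the sketch; as noted, the real algorithm is more complex):

```python
# Toy model of rule B: the whole V/F curve gets dragged down ~15 MHz
# for every ~5°C of core temperature above some reference point.
# The 5°C band width and the 35°C reference are assumed values.
def thermal_droop(set_clock_mhz, temp_c, ref_c=35.0, band_c=5.0, step_mhz=15):
    """Estimate the temperature-compensated clock for one curve point."""
    if temp_c <= ref_c:
        return set_clock_mhz
    bands = int((temp_c - ref_c) // band_c)  # full bands above reference
    return set_clock_mhz - bands * step_mhz

print(thermal_droop(2145, 36))  # still inside the first band -> 2145
print(thermal_droop(2145, 52))  # three full bands hotter -> 2100
```

This is also why the 2145MHz bins people quote on water (36-38°C) turn into 2100MHz-class effective clocks on air.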

Here are some examples of curves; pay attention to the clocks and why:


----------



## dr/owned

Re-BAR boosted my PR score by about 600 points, never was able to break 15000 before:









I scored 15 055 in Port Royal


Intel Core i9-10850K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





I could probably squeeze out a couple hundred more points if I moved the core up to 2175 (only PR stable; it would crash in games), but I don't want to bother chasing it.

I also found that giving +0.1V to the memory really improved its overclock quite a bit. +550 was all I could get stock, and with more voltage it takes +1250 all day long. For mining even +1500 seems stable, but graphically it seems not to be.


----------



## EarlZ

defcoms said:


> Not sure it's just power limit slider maxed out and +110 to core. I haven't got into messing with voltage curve yet so havent really been paying much attention to it. What wattage do you get?


At 0.925v with 2010MHz on the core, it can easily hit 440-460W in CP2077.


----------



## defcoms

EarlZ said:


> At 0.925v with 2010Mhz on core, it can easily hit 440-460W in CP2077


I will take a look at what voltage I'm running after work today. Are you using an offset or a custom curve? Wondering why there's such a large difference. +110 is stable for every game I play using an offset; any more than that and some games will crash. I do notice most of the time my card is voltage limited.


----------



## des2k...

lolhaxz said:


> Take some notes, re: curves/frequencies/behaviour in general.
> 
> What is demonstrable:
> 
> A) Consecutive points on V/F curve that are equal will fall to the lowest point, unless power/temperature are low enough to dictate otherwise; if power/temperature are too high it will take the smallest step back it can... if you are going to be power limited, the MORE variation in the points, ie atleast 15MHz per step, the more choices the card has when it steps back, if it steps back to a point with 5 continuous points, it will step right back to the first (in a limited scenario)
> 
> B) The curve is dragged down 15MHz every 5-6C increase in temperature, or there abouts. (the algorithm is slightly more complex than this) - the resulting output curve is deviating slightly constantly... ie, some points may merge, diverge, differently at different temperatures, particularly nasty if the steps are single units, ie 15MHz.
> 
> C) Modifying a curve is a suggestion, the driver/card will often modify various points on the V/F curve for a whole bunch of reasons, ie. Rules.
> 
> D) A locked* voltage point* will only deviate if a power/temperature/etc constraint is hit, otherwise it will function as expected at the locked voltage point.. A locked *frequency point* behaves differently again... there are a few scenarios where constraints you don't even know about are being hit.
> 
> E) The MS clock and NV clock follow the V/F *independently*, following similar rules, both have a maximum voltage and power limit (as defined by the BIOS) - there is still a bit of mystery around how you should interpret these clocks.
> 
> So yes, there is a whole bunch of ways you can shoot yourself in the foot by setting a custom curve.. but play around, make some observations... ultimate take away would be, power limited: more diversity in the points as possible and make sure you at minimum set your MSVDD max voltage point to match (a step or two below) your NV point (they obviously cant be the same, otherwise refer to A)
> 
> Where you are not MSVDD power limited (very few scenarios in reality) .. You can effectively set the NV clock and MS clock separately... there is no need to "raise the MSVDD voltage"
> 
> Disclaimer: I talk about a "MS Clock" here, but in reality its just another clock domain being reported by the GPU that we really don't have any *solid *evidence about what its doing or what it is, all we know is, to some extent, it has an impact on performance.
> 
> Here are some example of curves, pay attention to clocks and why:
> 
> View attachment 2485440
> 
> 
> View attachment 2485441
> 
> 
> View attachment 2485442
> 
> 
> View attachment 2485443


Well... any given freq is valid for 3 voltage points, then it has to jump to the next bin.
So 2 V/F points work best for forcing freq/voltage. Works with the XOC vbios regardless of GPU load or temps.

It won't get you the highest effective frequency, but at least it locks you in for stress testing.

nvidia-smi limit to 2145
prefer max performance, nv panel

If you want to lock 2145 at 1056mv, the previous 3 voltage points need to be 2130.
It can be as high as 2140MHz and as low as 2115MHz, but it stays at 1056mv and the 2145 bin.

If that's too low for effective frequency, just set 2175 or 2190 on the curve, etc.
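For anyone who wants the recipe above laid out explicitly, here's a sketch. Assumptions: the 15MHz bin width and the ~6.25mV voltage step come from this thread's numbers, and I'm guessing "nvidia-smi limit to 2145" means the `-lgc` (lock GPU clocks) flag; the function just builds the command line rather than running it:

```python
# Sketch of des2k's "lock 2145 at 1056mv" curve fragment: the three
# voltage points below the target are all held one 15 MHz bin lower,
# so the target point is the only place the 2145 bin can land.
# Voltage step of 6.25 mV is an assumption based on thread numbers.
def locked_curve_fragment(target_mhz=2145, target_mv=1056.25,
                          step_mv=6.25, bin_mhz=15):
    prev_mhz = target_mhz - bin_mhz      # 2130: "any freq is valid
    frag = []                            # for 3 voltage points"
    for i in range(3, 0, -1):            # the three points below target
        frag.append((round(target_mv - i * step_mv, 2), prev_mhz))
    frag.append((target_mv, target_mhz))
    return frag

def clock_cap_cmd(max_mhz=2145):
    # "nvidia-smi limit to 2145": assumed to be the lock-gpu-clocks
    # flag, nvidia-smi -lgc <min>,<max>. Returned, not executed.
    return ["nvidia-smi", "-lgc", f"0,{max_mhz}"]

print(locked_curve_fragment())
print(" ".join(clock_cap_cmd()))
```

"Prefer maximum performance" still has to be set in the NV control panel by hand; there's no clean CLI for that part.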


----------



## EarlZ

defcoms said:


> I will take a look at what voltage I'm running after work today. Are you using offset or a custom curve. Wondering why such a large difference. +110 is stable for every game I play using offset. Any more then that and some games will crash. I do notice most of the time my card is voltage limited.


I am using a custom curve. I've decided to lower the core clock to 1950, as 2010 would always hit 460W and just power throttle.

I am not sure if I am doing the curve correctly. What I do is set my fan speed to 100% so that the GPU can cool down to 31°C; once it's there, I set a -250 offset and then raise only that one voltage point.


----------



## des2k...

EarlZ said:


> I am using a custom curve, I've decided to lower the core clock to 1950, as with 2010 always hit 460watts and just power throttle.
> 
> I am not sure if I am doing the curve correctly, what I do is I set my fan speed to 100% so that the GPU can cool down to 31c, once its there I then set a -250 offset then raise only that one voltage.
> 
> 
> View attachment 2485462


Offset + dragging the last V/F point is pretty aggressive; usually this will bounce all over the place (load and temps) and might return too high an effective freq, which will add to the instability.

You're better off doing the 2 V/F points: 3 voltage points for the same freq, 4th voltage point for the next bin.

You can repeat that for the entire curve. When you're power limited, it holds freq much better vs an offset, which will drop like crazy when power/temp limits kick in.

Example of a 4-step curve from Igor'sLab:


----------



## lolhaxz

For what you want, you should exactly follow the stock curve, as it will be the most efficient: i.e. add +80MHz (click the target voltage point and move the (main window) offset slider until it matches your desired result)... then just flatten the rest of the curve. Although that can be a laborious process in AB (a 2-second process in other tools).

If you can't apply the offset across the curve equally, you are likely to run into stability problems anyway - that's been my experience.










Doing what you are doing will generally probably be fine; it just isn't particularly efficient if the GPU ends up in those lower clocks...


----------



## martin28bln

Updated to the new vBIOS with Resizable BAR enabled. Unlike others here, I noticed no increase in performance:

3090FE

NVIDIA GeForce RTX 3090 graphics card benchmark result - AMD Ryzen 9 5950X, ASUSTeK COMPUTER INC. ROG STRIX X570-I GAMING (3dmark.com)


----------



## EarlZ

des2k... said:


> offset + drag the last VF point is pretty aggressive, usually this will bounce all over the place (load and temps) and might return too high of an effective freq which will add to the instability
> 
> you're better off doing the 2VF points, 3voltage points for the same freq, 4th voltage point for the next bin
> 
> You can repeat for the entire curve when your power limited which holds freq much better vs offset which will drop like crazy when power / temps limits kick in.
> 
> Example of 4 step from Igor'sLab
> View attachment 2485464



I actually get pretty stable clocks; it's mostly 1950 and drops to 1935 on some occasions. It was my understanding that GPU Boost will choose the next 15MHz step lower; even though the next block is 1890, my card still goes to 1935.

EDIT: I am not monitoring effective clocks. Still trying to understand the explanation by @lolhaxz; I've been reading it over and I'm not sure I totally understood it. I'm a visual learner, so it might be best for me to actually see an example curve for best efficiency based on my target of 1950MHz @ 0.925v.


----------



## GRABibus

dr/owned said:


> Re-BAR boosted my PR score by about 600 points, never was able to break 15000 before:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 055 in Port Royal
> 
> 
> Intel Core i9-10850K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> I could probably squeeze out a couple hundred points more if I moved the core up to 2175 (only PR stable, would crash in game) but don't want to bother chasing it.
> 
> I also found giving +0.1V to the memory really improved its overclock quite a bit. +550 was all I can get stock and with more voltage it takes +1250 all day long. For mining even +1500 seems stable but graphically it seems not to be.


Maybe a stupid question, but which software do you use to change memory voltage ?


----------



## EarlZ

Not sure if I really understood the whole efficiency of the curve, but should it look like this?










1980MHz starting at 0.950
1960MHz with 3 voltage points of 0.943, 0.937 and 0.931
1950MHz with 3 voltage points of 0.925, 0.918 and 0.912

Is there a trick to quickly flatten the voltage points I'm not interested in using, or do I really need to move them manually each time?

I just added GPU effective clock to my stats. How do I move its value to the spot marked X? It's easier on my eyes to look at the clock, effective clock, and GPU voltage together.
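On the flattening question: what you're after is just "clamp every point at or above the target voltage to the target frequency". A minimal sketch of that transform (the (mV, MHz) pair format is purely illustrative, not Afterburner's actual data model):

```python
# Flatten a V/F curve above a chosen voltage: every point at or above
# target_mv is clamped to target_mhz; points below are left untouched.
# Curve format here is a plain list of (millivolts, MHz) pairs.
def flatten_curve(curve, target_mv, target_mhz):
    return [(mv, target_mhz if mv >= target_mv else mhz)
            for mv, mhz in curve]

stock = [(912, 1950), (918, 1957), (925, 1965), (950, 1995), (1000, 2040)]
print(flatten_curve(stock, 925, 1950))
# points from 925 mV up all read 1950 MHz; 912 and 918 are unchanged
```

In Afterburner you still end up doing this point by point (or with a drag-select), which is the "laborious process in AB" lolhaxz mentioned.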


----------



## J7SC

For those with a 3090 Strix OC who have updated to the r_BAR stock BIOS (via RTX3090_V3.exe), how is the OC performance compared to the prior version (i.e. V2), say, with r_BAR disabled? Same clocks? Any new issues? I'll take my sweet time with the update given that the impact of r_BAR seems 'mixed' at best, and I have other projects on the go. Still, sooner or later...


----------



## Zogge

I do not notice much difference honestly. Same clocks for me as before meaning 2160mhz core when gaming.


----------



## GRABibus

No real differences (Strix with MSI Suprim X BIOS).
I played BF5 and got some stutters... as I always had in this game, so... difficult to conclude.

I didn't bench PR, for example, with BAR off vs BAR on, and I won't for now; it's useless for me.


----------



## ALSTER868

No difference for me either on Strix V3 bios with rebar..


----------



## yzonker

lolhaxz said:


> For what you want, you should exactly follow the stock curve as it will be the most efficient, ie add +80MHz (click the target voltage point and move the (main window) offset slider until it matches your desired result) ... then just flatten the rest of the curve... although that can be a laborious process in AB (2 second process in other tools)
> 
> If you can't apply the offset across the curve equally, you are likely to run into stability problems anyway - that's been my experience.
> 
> View attachment 2485467
> 
> 
> Doing what you are doing will generally probably be fine, it ... just isn't particularly efficient if the GPU ends up in those lower clocks...


Yes you can flatten the rest of the curve easily in AB. Here's my writeup from a while ago.






Rtx 3000 series undervolt discussion


Its part of the guide in the op. Once you set your curve, youll want to set a power limit. I suggest between 80 and 90 but you should try it and see where you are happy with performance and Temps. At 90 you should basically be as fast as stock while running cooler. If you want even less power...




hardforum.com


----------



## Bobbylee

GRABibus said:


> No real differences (Strix with MSI Suprim X bios).
> I played BF5 and got some stutters...as I alwaays had in this game, so...Difficult to conclude.
> 
> I didn't bench PR for exemple with BAR "off" and BAR "on" and I won't do currently, useless for me.


I was having some microstutters in Warzone, which is similarly CPU and memory bound. I sorted mine by getting some 4400CL17 memory; if your mem is slow, it might be worth OCing it or getting some faster RAM.


----------



## GRABibus

Bobbylee said:


> I was having some microstutters in warzone which is similarly cpu and mem bound. I sorted mine by getting some 4400cl17 mem, if your mem is slow it might be worth ocing that or getting some fast ram


I am on the AMD Zen 3 platform (ASUS Crosshair VIII Hero), with [email protected] CL14 (4x8GB).
CPU = 5900X.

All settings in my signature.

3733MHz is already a nice RAM OC for this platform.

I won't get more, and I don't think this is the issue.

Thank you for your reply.


----------



## jomama22

Falkentyne said:


> It doesn't work this way.
> You can lock a voltage point, but it doesn't exactly lock it. It locks the tier, not the voltage. Because what you see on the V/F curve isn't voltage at all. It's VID.
> If you lock the 1.10v VID Point on the curve, it will simply try to use the highest tier it can. If the voltage slider is at 0%, this will limit the max VID to between 1.056v-1.081v. The % on the slider where 1.1v becomes available depends on the shape of the curve itself, which is designated by system firmware. At 100%, the VID range will be 1.069v-1.10v. It will cycle through these VID points and will constantly rearrange the shape of the curve, even if you lock 1.10v in MSI Afterburner by pressing L on that point.
> 
> Locking 1.075v will not have the effect you actually expect because 1.075v will sit on two possible tiers, depending on the shape of the curve, but it will stop the VID from exceeding 1.075v.
> 
> Manually changing the curve so that 1.10v (or 1.075v) is at its own isolated point (so the point can't change) completely destroys your effective core clocks. This can only be avoided by increasing MSVDD voltage manually (which you can only do with Elmor EVC2X device or Kingpin cards).
> 
> Remember: what you see on "voltage slider" and V/F curve is _NOT_ voltage! it's VID.


MSVDD won't help your effective clocks much, if at all; it's all pure core voltage.

There is 100% an internal clock that acts independently of the core clock and pays respect to the voltage selected at the driver level.

There is a sweet-spot voltage that will keep effective clocks closer together than any other spot, but it depends on the load/game. Mine is 1.075 as well for nearly all games and TS/TSE. Port Royal is the only test I have found that will scale effective clock all the way to 1.1v.

I said this in the past, but manually setting voltage on the EVC doesn't change this. I still have to spoof the driver into running at 1.075v in AB while also increasing the voltage on the EVC to maintain the tightest effective clocks relative to requested clocks.

This is most likely the reason why people believe that just using an offset is the most beneficial: because the optimal driver-level VID is never in the 1.087-1.1 range. 1.063-1.081 will act very similarly, within 1 or 2 effective clock MHz.

Once I get my new PSU in, I will provide evidence of this sort; the old one has died (thankfully not spectacularly).


----------



## Falkentyne

jomama22 said:


> Msvdd won't help your effective clocks much if at all, it's all pure core voltage.
> 
> There is 100% an internal clock that acts independent of the core clock that pays respect to the selected voltage at the driver level.
> 
> There is a sweet spot voltage that will keep effective clocks closer together than any other spot, but it depends on the load/game. Mine is 1.075 as well for nerely all games and ts/tse. Port royal is the only test I have found that will scale effective clock all the way to 1.1v.
> 
> I said this in the past but manually setting voltage on the evc doesn't change this. I still have to spoof the driver into running at 1.075v in AB while also increasing the voltage on the evc to maintain the tightest effective clocks to requested clocks.
> 
> This is most likely the reason why people believe that just using an offset is the most beneficial, because the optimal driver level vid is never in the 1.87-1.1 range. 1.63-1.81 will act very similarly, within 1 or 2 effective clock mhz.
> 
> Once I get my new psu in, I will provide evidence of the sort, the old one has died (thankfully not spectacularly).


There's an internal power limit for the effective clock (or for one of the voltage rails).
This responds to "default" TDP (100%) and "Max TDP", but this exact rail is NOT shown in any of the BIOS editors.
It's why people with shunt-modded cards, with all of the input rails set well below the default values shown in the Ampere BIOS editor, can sit at like 70% TDP while TDP Normalized hits 105% and causes a power limit flag.

I checked this VERY carefully. I put my time in on this.
Before the power limit kicks in, effective clocks drop.
If you stare at the ground in Overwatch, with the 750W total power limit (FPS uncapped), you instantly get a power limit flag with power draw showing 520W and effective clocks dropping.
If you stare straight ahead, power goes up to around 600W before you start getting a power limit from one of the MSVDD or NVVDD rails (TDP is only at like 85%).

Another thing: at 100% TDP (which is equal to a "default" setting), the MSVDD power limit kicks in much sooner (at like 510W!), because it hits the max TDP Normalized limit (100%) sooner; it actually starts limiting power slowly at about 95% normalized.

----------



## jomama22

Falkentyne said:


> There's an internal power limit for the effective clock (or one of the voltage rails).
> This responds to "default" TDP (100%) and "Max TDP", but this exact rail is NOT shown in any of the bios editors.
> It's why people with shunt modded video cards with all of the input rails well below the default values shown in the Ampere Bios Editor, can get like 70% TDP, but TDP Normalized hits 105% and causes a power limit flag.
> 
> I checked this VERY carefully. I put my time in on this.
> before the power limit kicks in, effective clocks drop.
> If you stare at the ground, in Overwatch, with the 750W total power limit, (FPS uncapped), you instantly get a power limit flag and power draw showing at 520W and effective clocks dropping.
> If you stare normally straight, power goes up to around 600W before you start getting power limit from one of the MSVDD or NVVDD rails (TDP is only at like 85%).
> 
> Another thing, at 100% tdp, (which is equal to a "default" setting), the MSVDD power limit kicks in much sooner (at like 510W!), because it hits the max TDP Normalized limit (100%) sooner, it actually starts limiting power slowly at about 95% normalized..


My observations are purely based on never hitting a power limit. On my shunted Strix, it just doesn't happen. No matter if I set MSVDD to 1.25v and NVVDD to 1.325v, it will continually scale power in every game and benchmark.

I am aware of what you are talking about, and I understand that if you are indeed hitting a power limit somewhere, you will see those effects.

I am merely describing how the chip behaves when no power limit is hit. There are absolute sweet spots along the voltage curve for optimizing effective clock when power limits aren't reached.

When I get the new PSU in, I will show you. The easiest way is to just run TSE at any clock speed, set that clock, and test at each point between 1.05v and 1.1v. The power limit will never be reached, yet effective clocks will change.


----------



## gfunkernaught

des2k... said:


> I leave mine 100%, outside of few games like quake rtx ,gt2 or path of exile unlimited fps power usage is not that high. I also play gsync 66fps cap, so that reduces the power a bit.
> 
> Since you have a good water delta, you could loop gt2 and pr for at least 1,2h to see what's stable for your card. I have not crashed heavy games with OC passing 4h in 3dmark.
> 
> 1093mv is usually very hard on the vrm for ampere. I would honestly not play at 1093mv, unless you have a rtx3090 with a big vrm config like strix,ftw3,kingpin, etc


That's what I'm seeing here, higher voltage (or VID) than 1075mv is a problem for my card. Last night I tested again with a custom curve so now I'm getting 2100-2130mhz effective depending on the game and/or load/temp, but it doesn't exceed 1075mv. Quake 2 RTX w/o a power limit and 2130mhz constant is scary but an avg fps of 60 at native 4k is just straight up naughty.


----------



## Falkentyne

jomama22 said:


> My observations are purely based on never hitting a power limit. On my shunted Strix, it just doesn't happen. Even with MSVDD set to 1.25V and NVVDD to 1.325V, it will continually scale power in every game and benchmark.
> 
> I am aware of what you are talking about and understand that if you are indeed hitting a power limit somewhere you will see those effects.
> 
> I am merely describing how the chip behaves when no power limit is hit. There are absolute sweet spots along the voltage curve for optimizing effective clocks when power limits aren't reached.
> 
> When I get the new PSU in I will show you. The easiest way is to run TSE at any clock speed, set that clock, and test at each point between 1.05v and 1.1v. The power limit will never be reached, yet effective clocks will change.


Yes I know. But your Strix has higher internal MSVDD and NVVDD Limits, so you don't hit them when you do a shunt mod. Notice you can run Timespy Extreme and not get a green power limit throttle bar?

But on the 3090 FE, even with 3 mOhm stacked shunts (>900W power limit), you're still going to hit an internal rail limit (on MSVDD or NVVDD) at about 580W total power draw, and that limit isn't coming from any of the shunted input rails. (8 pin #1, 8 pin #2, SRC, GDDR6X, Chip Power, and PCIE Slot).

The four "AUX" rails in the editor seem to combine into partial sections of other limits (Aux1-Aux4 are Misc0-Misc3 in the current HWiNFO64, and GPU Chip Power is the precise sum of Misc0 + Misc2 + NVVDD1 input power).

Elmor mentioned these limits (MSVDD and NVVDD power limits) that are not shown in any editor. He even mentioned there is a "PLL rail limit" also.
These rails report to "TDP Normalized %". Normalized simply reports whichever individual rail is highest relative to its own 100% limit.
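In other words, the reported percentage appears to track the single most-loaded rail. A minimal sketch of that idea, with made-up rail names, draws and limits (none of these numbers are read from an actual BIOS or card):

```python
# Toy model of "TDP Normalized %": each rail's draw is compared to its own
# 100% limit, and the highest ratio is what gets reported. All values here
# are illustrative, not taken from real hardware.
rails = {
    "NVVDD":  {"draw_w": 310.0, "limit_w": 300.0},
    "MSVDD":  {"draw_w": 140.0, "limit_w": 160.0},
    "GDDR6X": {"draw_w":  70.0, "limit_w": 100.0},
}

def tdp_normalized(rails):
    """Worst-case rail utilisation as a percentage of its own limit."""
    return max(r["draw_w"] / r["limit_w"] for r in rails.values()) * 100.0

print(f"TDP Normalized: {tdp_normalized(rails):.1f}%")  # NVVDD is the binding rail here
```

This is also why total board power can sit well under 100% TDP while one internal rail already reads over 100% normalized, matching the shunt-mod behaviour described above.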


----------



## long2905

guys i finally moved away from ITX and into a 5000D case with B550 and 3900x chip. i took this opportunity and put in an additional 360 thick rad alongside my existing 280 rad and this helps with temp a lot.

however now that i can finally enable resizable bar, i cannot flash my card and all the different tools i tried from different brands reported that my card is not supported (inno3d ichill with galax vbios flashed). i tried both the tools available from inno3d and galax, neither worked. should i just look for the latest 390w galax vbios available and flash that instead and be done with it? im on b550 tomahawk and already flashed the latest beta bios available and enabled rebar within the bios settings.


----------



## GRABibus

Here is a gameplay in BFV with GPU undervolting and Resizable BAR enabled :







GPU undervolting (See curve below) :









PS : don't pay attention to my low skills


----------



## jomama22

Falkentyne said:


> Yes I know. But your Strix has higher internal MSVDD and NVVDD Limits, so you don't hit them when you do a shunt mod. Notice you can run Timespy Extreme and not get a green power limit throttle bar?
> 
> But on the 3090 FE, even with 3 mOhm stacked shunts (>900W power limit), you're still going to hit an internal rail limit (on MSVDD or NVVDD) at about 580W total power draw, and that limit isn't coming from any of the shunted input rails. (8 pin #1, 8 pin #2, SRC, GDDR6X, Chip Power, and PCIE Slot).
> 
> The four "AUX" rails in the editor seem to combine into partial sections of other limits (Aux1-Aux4 are Misc0-Misc3 in the current HWiNFO64, and GPU Chip Power is the precise sum of Misc0 + Misc2 + NVVDD1 input power).
> 
> Elmor mentioned these limits (MSVDD and NVVDD power limits) that are not shown in any editor. He even mentioned there is a "PLL rail limit" also.
> These rails report to "TDP Normalized %". Normalized simply reports whichever individual rail is highest relative to its own 100% limit.


Again, I realize this. But this needs to be observed on individual cards specifically.

Could be that just reference and FE cards behave at the same point, or it could be all except the KP, Strix and HOF. People need to test for themselves and see if they are indeed hitting that wall/power limit or not.

Just saying, making a blanket statement about how something behaves isn't entirely accurate.


----------



## PowerK

GRABibus said:


> Here is a gameplay in BFV with GPU undervolting and Resizable BAR enabled :


OMG. Talk about fisheye distortion. Even watching for a few seconds makes me sick. How can you play like that?


----------



## defcoms

GRABibus said:


> Here is a gameplay in BFV with GPU undervolting and Resizable BAR enabled :
> 
> 
> 
> 
> 
> 
> 
> GPU undervolting (See curve below) :
> View attachment 2485570
> 
> 
> PS : don't pay attention to my low skills


Nice clocks and temps look good.


----------



## GRABibus

PowerK said:


> OMG. Talk about fisheye distortion. Even watching for a few seconds makes me sick. How can you play like that?


what do you mean ?


----------



## GRABibus

defcoms said:


> Nice clocks and temps look good.


The PC side panel is open.
Otherwise my GDDR6X memory temps are in the range of 85 to 90 degrees.

I need to change thermal pads of memory to see if it improves.


----------



## jomama22

GRABibus said:


> what do you mean ?


Your FOV being at 105 creates stretching nearly everywhere but dead center. When you're not ADS it is crazy noticeable.

Fisheye is the visual effect where images are stretched from an originating center point.

Compare the visuals at like FOV 70 to your 105. Yes, it lowers your viewable area, but the stretching is much, much less. I'm actually quite surprised BFV has that much stretching, especially on a 16:9 monitor.


----------



## EniGma1987

PowerK said:


> OMG. Talk about fisheye distortion. Even watching for a few seconds makes me sick. How can you play like that?


wow ya you are right on that. That reminds me of the old half life 1 days. Looks horrible





GRABibus said:


> what do you mean ?


They call it fishbowl because it is a distortion effect very similar to looking through a fishbowl. As things get farther to the edges of the screen, the images compress more and more. As you turn, the image "de-compresses" into a normal view.
You can see when looking towards the side how everything looks flat and close up, but as you turn to focus on something that was at the end of your vision, it almost seems like that object stretches outward in a telescoping effect and is now far away when you look at it.
I also noticed it on a boat you walked by in the game. It was normal looking when it was 10' away or so, and as you start running closer it compresses itself into about half its normal size and then as you start running past the boat suddenly stretches into a huge long blur that looks like it is being sucked away into infinity.


----------



## GRABibus

EniGma1987 said:


> wow ya you are right on that. That reminds me of the old half life 1 days. Looks horrible
> 
> 
> 
> 
> They call it fishbowl because it is a very similar distortion effect as looking at a fish bowl. As things get farther to the edges of the screen, the images compress more and more. As you turn, the image "de-compresses" into a normal view.
> You can see when looking towards the side how everything looks flat and close up, but as you turn to focus on something that was at the end of your vision, it almost seems like that object stretches outward in a telescoping effect and is now far away when you look at it.


yes so horrible


----------



## GRABibus

jomama22 said:


> Your FOV being at 105 creates stretching nearly everywhere but dead center. When you're not ADS it is crazy noticeable.
> 
> Fisheye is the visual effect where images are stretched from an originating center point.
> 
> Compare the visuals at like FOV 70 to your 105. Yes, it lowers your viewable area, but the stretching is much, much less. I'm actually quite surprised BFV has that much stretching, especially on a 16:9 monitor.


ok.
I will try


----------



## Zogge

Reporting that I changed the standard pads on my Strix/Bykski to Gelid Extreme and repasted with Thermal Grizzly Kryonaut. I could feel and visually see that I got much better mounting pressure from the cooler plate toward the core, so I knew it would be better.

Results for Port Royal on 480W bios with rebar on +120/+750. (14905 score)

Water temp 27c, flow rate 280l/h, pressure before block 495.1mBar.

Before:
Core 49.8c
Mem 68.4c
Hotspot 70.1c
VRM 44c

After:
Core 40.5c
Mem 50.0c
Hotspot 55.1c
VRM 42c

That's pretty good in my view. The core delta dropped from 22c to 13c, and on the memory, going from 2 W/m·K to 12 W/m·K pads made a difference of 18c!

If anyone wonders, I put 1mm pads on the front side just like you advised earlier and 1.5mm pads on the back side. I did not use any plastic spacers this time for the core screws; I just used the spring screws as is.
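The size of that improvement is consistent with simple one-dimensional conduction. As a rough sanity check (the per-chip wattage, pad thickness and footprint below are assumed numbers; only the 2 → 12 W/m·K conductivity change comes from the post):

```python
# Rough slab-conduction estimate: dT = q * t / (k * A).
# q, t and area are assumed values for a single GDDR6X package;
# only the conductivity change (2 -> 12 W/m*K) is from the post.
def pad_delta_t(power_w, thickness_m, conductivity_w_mk, area_m2):
    """Temperature drop across a thermal pad modelled as a flat slab."""
    return power_w * thickness_m / (conductivity_w_mk * area_m2)

q = 3.0                 # W per memory chip (assumed)
t = 1.0e-3              # 1 mm pad, as used on the front side
area = 14e-3 * 12e-3    # approximate GDDR6X package footprint (assumed)

stock = pad_delta_t(q, t, 2.0, area)
gelid = pad_delta_t(q, t, 12.0, area)
print(f"stock pad: {stock:.1f} C drop, Gelid Extreme: {gelid:.1f} C drop")
```

The 6x conductivity increase cuts the pad's share of the temperature delta by the same factor, so a double-digit drop in reported memory temperature is plausible.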


----------



## des2k...

Zogge said:


> Reporting that I changed my strix/bykski standard pads to Gelid Extreme as well as repasted the thermal grizzly kryonaut. I felt and visually noticed I got much better pressure on the mounting of the cooler plate towards the core so I knew it would be better.
> 
> Results for Port Royal on 480W bios with rebar on +120/+750. (14905 score)
> 
> Water temp 27c, flow rate 280l/h, pressure before block 495.1mBar.
> 
> Before:
> Core 49.8c
> Mem 68.4c
> Hotspot 70.1c
> VRM 44c
> 
> After:
> Core 40.5c
> Mem 50.0c
> Hotspot 55.1c
> VRM 42c
> 
> That's pretty good in my view. Dropped from 22c delta to 13c delta on core and memory... from 2W/m*K to 12W/m*K really made a difference of 18c !
> 
> If anyone wonder, I put 1 mm pads on the front side just like you advised earlier and 1.5mm pads on the back side. I did not use any plastic distances this time for the core screws, I just used the spring screws as is.


280l/h damn lol

Waiting on my second top to install my 2nd d5

right now I have the xpc x4 + d5 pump, I'm not even at 120l/h, and that's with 5 rads


----------



## Zogge

Gpu loop has 3xd5 and the airplex gigant 3360x140 with 26 fans in total. (I have 2 QCs that reduce flow a bit)
280l/h pumps at 60-80% speed.

Cpu loop has 3x d5 and 3x 360x120, 1x 240x120 and 1x 1080x120 rads, 20 fans in total.
305 l/h pumps at 60-75% speed.


----------



## dotose4011

hi,

i have a little question (i will delete this post right after), can i use this vbios : EVGA RTX 3090 VBIOS on a 3090 suprim x ?

(technically yes but i want to be sure)

regards ;^)


----------



## Zogge

Yes. Mind the fans though, you might lose a port if the layout is not the same as the Kingpin, and pin 3 power might not read correctly.


----------



## dotose4011

ok, thanks for the answer, i will try carefully.


----------



## ALSTER868

Zogge said:


> That's pretty good in my view. Dropped from 22c delta to 13c delta on core and memory going from 2W/m*K to 12W/m*K really made a difference of 18c !


I would say it's a spectacular difference. Last time I changed the pads on my Bitspower block to Gelid Extreme on the VRMs and Arctic on the memory, I got an increase of 2-3c in the GPU-water delta and chip temp, but on the other hand a drop of 6c on the VRMs, and shaved some 2-3c off the memory. Probably the mounting pressure is worse this time, idk. Will see later what can be done.


----------



## GRABibus

dotose4011 said:


> hi,
> 
> i have a little question (i will delete this post right after), can i use this vbios : EVGA RTX 3090 VBIOS on a 3090 suprim x ?
> 
> (technically yes but i want to be sure)
> 
> regards ;^)


usually yes.
I did flash it on my ASUS Strix OC.

BUT, I am now running the MSI SUPRIM X F5 BIOS on my Strix OC, which gives good results (better sustained frequency in games) as long as, of course, I don't reach the power limit.

As always, if you flash, this is at your own risk


----------



## rationality

Ok, guys, I am thinking of ordering nvidia rtx 3090 from aliexpress! I know it's risky but I found a seller which has the lowest price on this product. And yes, I checked reviews and also the ratings of that seller with Alitools, so I think I won't be cheated.


----------



## Sheyster

Is the new 520W KPE BIOS still the highest power limit currently available with Re-Bar support enabled?


----------



## jura11

Zogge said:


> Reporting that I changed my strix/bykski standard pads to Gelid Extreme as well as repasted the thermal grizzly kryonaut. I felt and visually noticed I got much better pressure on the mounting of the cooler plate towards the core so I knew it would be better.
> 
> Results for Port Royal on 480W bios with rebar on +120/+750. (14905 score)
> 
> Water temp 27c, flow rate 280l/h, pressure before block 495.1mBar.
> 
> Before:
> Core 49.8c
> Mem 68.4c
> Hotspot 70.1c
> VRM 44c
> 
> After:
> Core 40.5c
> Mem 50.0c
> Hotspot 55.1c
> VRM 42c
> 
> That's pretty good in my view. Dropped from 22c delta to 13c delta on core and memory going from 2W/m*K to 12W/m*K really made a difference of 18c !
> 
> If anyone wonder, I put 1 mm pads on the front side just like you advised earlier and 1.5mm pads on the back side. I did not use any plastic distances this time for the core screws, I just used the spring screws as is.


That's what I was thinking to do too on my Bykski Waterblocks, using 1mm thermal pads on front and on back 1.5mm thermal pads

You didn't use at all plastic washers at core or anywhere on the GPU? 

Thanks, Jura


----------



## yzonker

Sheyster said:


> Is the new 520W KPE BIOS still the highest power limit currently available with Re-Bar support enabled?


Yes, no reBar KP XOC yet unfortunately.


----------



## Lobstar

yzonker said:


> Yes, no reBar KP XOC yet unfortunately.


Is the KPE XOC the 1000w?
The KPE 520w with ReBar is here: EVGA RTX 3090 VBIOS


----------



## Zogge

jura11 said:


> That's what I was thinking to do too on my Bykski Waterblocks, using 1mm thermal pads on front and on back 1.5mm thermal pads
> 
> You didn't use at all plastic washers at core or anywhere on the GPU?
> 
> Thanks, Jura


No washers at all. The reason was that with washers it works for 3 of the core screws, but the 4th does not reach and cannot be locked. Probably a manufacturing fault, so I had to go without on all 4 or the mounting would slope a bit.

Correct, none on the backplate screws either. Also, I initially only put pads on the memory and skipped the VRMs, as I couldn't figure out the size to use there, and the high-pitched noise from the caps is gone now. I did put 1.5mm on the VRMs as well now to be safe.

Below 50c on VRMs is no issue anyway in my view. Thoughts on that ?


----------



## EarlZ

I need some clarification on the "cold state" before applying any voltage curves. Is that my GPU idling with fans at zero RPM, which is 40-44c, or the cold state with 100% fan speed, which is 32-34c?


----------



## jomama22

EarlZ said:


> I need some clarification on the "cold state" before applying any voltage curves, Is that cold state for my GPU idling at zero RPM which is 40-44c or should it be the cold state with 100% fan speed which is 32-34c ?


Just whatever your normal idle temp is. 

Setting it when it's colder than it normally would be will just mean a larger drop-off as temps warm up during use.

You can always set them after the card has warmed up as well, which will reduce the drop-off even more, but you have to make sure it won't boost to something that is unstable when it cools off (during low-load cutscenes and such).
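The warm-up behaviour comes from Ampere shedding boost in roughly 15 MHz steps as the core heats up. A toy model of that idea follows; the baseline temperature, band width and step size are assumptions based on commonly observed behaviour, not a spec, and the real thresholds vary per card:

```python
# Toy model of temperature-dependent boost: the card sheds ~15 MHz per
# temperature band above its cold baseline. The baseline (30c), band
# width (5c) and step (15 MHz) are assumed illustrative values.
def boost_at_temp(cold_clock_mhz, temp_c, cold_temp_c=30, band_c=5, step_mhz=15):
    """Estimate effective boost after temperature-based bin drops."""
    bands = max(0, (temp_c - cold_temp_c) // band_c)
    return cold_clock_mhz - bands * step_mhz

# A curve dialled in at a 30c idle sits noticeably lower once the core hits 60c:
print(boost_at_temp(2130, 30))
print(boost_at_temp(2130, 60))
```

Which is why a curve set on a cold card sags once it warms up, and a curve set on a warm card can boost into instability when it cools off.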


----------



## EarlZ

jomama22 said:


> Just whatever your normal idle temp is.
> 
> Setting it when it's colder than it normally would be will just mean a larger drop-off as temps warm up during use.
> 
> You can always set them after the card has warmed up as well, which will reduce the drop-off even more, but you have to make sure it won't boost to something that is unstable when it cools off (during low-load cutscenes and such).


If by normal you mean automatic fan speed, then it idles around 40-44c due to fan stop.
If I keep the fan speed at 100%, it idles 1c above ambient.

I am not certain whether it matters if I do the settings at 44c or 31c.


----------



## jomama22

EarlZ said:


> If by normal you mean automatic fan speed, then it idles around 40-44c due to fan stop.
> If I keep the fan speed at 100%, it idles 1c above ambient.
> 
> I am not certain whether it matters if I do the settings at 44c or 31c.


I mean whatever you normally have your setup at. If you usually use it with the fans off or whatever, then do that.


----------



## J7SC

jomama22 said:


> Just whatever your normal idle temp is.
> 
> Setting it when it's colder than it normally would be will just mean a larger drop-off as temps warm up during use.
> 
> You can always set them after the card has warmed up as well, which will reduce the drop-off even more, *but you have to make sure it won't boost to something that is unstable when it cools off *(during low-load cutscenes and such).


1.) ...that was something I ran into when I added much better cooling to my Strix...I had saved max-workable profiles, but all of a sudden I started to have problems with those profiles even though temps were 30C or so lower...turns out that with the better cooling, it boosted right into the unsustainable range. A quick example below - same MSI AB profile apart from a minor difference in VRAM clock, just different temps...went from 2205 to 2235 to 2265 in GPU-Z...









2.)...still waiting for an updated 'final' mobo BIOS w/ the new AGESA and r_BAR, but a quick question about enabling 'above 4G decoding': looking around the interweb, opinions seem to be somewhat divided on whether above 4G decoding is necessary for r_BAR (even if it makes intuitive sense). What are you folks doing with that?

3.) Another glorious morning for a quick FS2020 flight before getting down to work...for fans of the reality tv series 'Highway to hell' (a misnomer btw as it is gorgeous), the 2nd pic should explain a lot...



Spoiler


----------



## jomama22

J7SC said:


> 1.) ...that was s.th. I ran into when I added much better cooling to my Strix...I had saved max-workable profiles, but all of a sudden, I started to have problems with those profiles even though temps were 30C or so lower...turns out that with the better cooling, it boosted right into the unsustainable range. A quick example below - same MSI AB profile apart from a minor difference in VRAM clock, just different temps....went from 2205 to 2235 to 2265 in GPUz...
> View attachment 2485715
> 
> 
> 2.)...still waiting for an updated 'final' mobo bios w/ new Agesa and r_BAR, but a quick question about enabling 'above 4G decoding' - looking on the interweb, opinions seems to be somewhat divided if above 4G decoding is necessary for r_BAR (even if it makes intuitive sense). What are you folks doing with that ?
> 
> 3.) Another glorious morning for a quick FS2020 flight before getting down to work...for fans of the reality tv series 'Highway to hell' (a misnomer btw as it is gorgeous), the 2nd pic should explain a lot...
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2485718
> 
> 
> View attachment 2485719


The 4G decoding and Resizable BAR settings go hand in hand with the APIs AMD and NVIDIA implement for large memory sharing between GPU and CPU. 4G decoding expands the memory-addressing block sizes and allows wider memory addresses to be used; it's the PCIe-standard side of how ReBAR works on a functional level.









What is Above 4G Decoding – your BIOS's dark horse. Should it be enabled?

In modern motherboard BIOSes, you can find the Above 4G Decoding checkbox, and now Resizable BAR has appeared next to it. Resizable BAR was already well explained by AMD – the processor can directly communicate with the card through a couple of big steps, instead of a million tiny...

www.gameunion.tv





This site provides a good explanation of what happens between 4G decoding and ReBAR. In simple terms, it allows any part of DRAM to be used to store and pass information to VRAM in one large transaction, as opposed to smaller chunks using multiple operations to send the same data.

You absolutely need 4G decoding for ReBAR. ReBAR is AMD/NVIDIA's API pipeline for taking advantage of 4G decoding; one without the other on Windows systems does not work.

On a coding level, much of this needs to be implemented during development to be taken advantage of, which is why only some games benefit. You see negative impacts in games where the engine is built around an assumed block size for data transfer and optimized for it: introducing a larger-than-expected memory block means the code has to break that block up, using more cycles and processing and, in turn, lowering performance.
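The "one large transaction vs many small ones" point can be shown with a toy calculation. The fixed 256 MB window is the real pre-ReBAR BAR size, but everything else here is deliberately simplified and no actual PCIe traffic is modelled:

```python
# Without Resizable BAR the CPU sees VRAM through a fixed 256 MB aperture,
# so a large upload becomes many window-sized copies; with ReBAR the whole
# buffer is addressable and can move in one operation. Sizes are in MB.
WINDOW_MB = 256

def copies_needed(payload_mb, rebar_enabled):
    """Number of aperture-sized copy operations a payload requires."""
    if rebar_enabled:
        return 1                        # whole VRAM addressable: one transaction
    return -(-payload_mb // WINDOW_MB)  # ceil-divide into 256 MB chunks

print(copies_needed(3000, rebar_enabled=False))  # a 3 GB upload: 12 chunked copies
print(copies_needed(3000, rebar_enabled=True))   # 1
```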


----------



## Sheyster

jomama22 said:


> You absolutely need 4g for rebar. Rebar is amd/nvidia's api communication pipeline for taking advantage of 4g decoding. One without the other on windows systems does not work.


AFAIK you can't enable Re-BAR in a Gigabyte BIOS without 4G decoding being turned on first; the Re-BAR enable option isn't even shown otherwise.


----------



## jura11

Zogge said:


> No washers at all. The reason was that with washers it works for 3 of the core screws, but the 4th does not reach and cannot be locked. Probably a manufacturing fault, so I had to go without on all 4 or the mounting would slope a bit.
> 
> Correct, none on the backplate screws either. Also, I initially only put pads on the memory and skipped the VRMs, as I couldn't figure out the size to use there, and the high-pitched noise from the caps is gone now. I did put 1.5mm on the VRMs as well now to be safe.
> 
> Below 50c on VRMs is no issue anyway in my view. Thoughts on that ?


Below 50°C on the VRMs are awesome temperatures, and from your other post your VRAM and core temperatures improved by a good margin too.

In my case I can't hear the VRM or coil whine, no issues with that.

Will redo my loop too when I have time, probably next week or maybe this weekend, and post my findings regarding VRAM and core temperatures. Sadly I can't monitor the VRM, so I can't confirm or deny its temperatures.

Regarding the plastic washers, I assume the Thermalright or Gelid pads (depending which ones you used) are less compressible than the stock Bykski ones. I bought both Thermalright and Gelid pads and will see which ones I use.

Hope this helps and thanks for letting me know regarding the plastic washers

Thanks, Jura


----------



## Zogge

Okay, let me know how it goes. Also, isn't the VRM temp visible in HWiNFO for you? Gelid pads have almost no compression, hence they seem very hard.


----------



## ArcticZero

Zogge said:


> Okay, let me know how it goes. Also, isn't the VRM temp visible in HWiNFO for you? Gelid pads have almost no compression, hence they seem very hard.


Which Gelid pads did you try? The Gelid Ultimate pads have a hardness rating of 60-70 shore, whereas the Gelid Extreme pads are well known for being quite soft and pliable at 35 shore.


----------



## Falkentyne

Just ordered this. Really curious how this is going to perform ON THE CORE ONLY, since this seems to be much improved over the original Soft PGS+ graphite pads.
I wonder if it will get heatsoaked at 500W. Going to be interesting. I guess money doesn't like me.






EYG-R0912ZRGD Panasonic Electronic Components | Fans, Thermal Management | DigiKey

EYG-R0912ZRGD – Thermal Pad Gray 120.00mm x 88.00mm Rectangular from Panasonic Electronic Components.

www.digikey.com









TechnologyGuide (forum.notebookreview.com)


----------



## ArcticZero

Falkentyne said:


> Just ordered this. Really curious how this is going to perform ON THE CORE ONLY, since this seems to be much improved over the original Soft PGS+ graphite pads.
> I wonder if it will get heatsoaked at 500W. Going to be interesting. I guess money doesn't like me.
> 
> 
> 
> 
> 
> 
> EYG-R0912ZRGD Panasonic thermal pad (www.digikey.com)
> 
> TechnologyGuide (forum.notebookreview.com)


Very interested in this and looking forward to your results. Apparently money hates me too.


----------



## des2k...

Falkentyne said:


> Just ordered this. Really curious how this is going to perform ON THE CORE ONLY, since this seems to be much improved over the original Soft PGS+ graphite pads.
> I wonder if it will get heatsoaked at 500W. Going to be interesting. I guess money doesn't like me.
> 
> 
> 
> 
> 
> 
> EYG-R0912ZRGD Panasonic thermal pad (www.digikey.com)
> 
> TechnologyGuide (forum.notebookreview.com)


I got the Thermal Grizzly Carbonaut. I want to try it at 600W on my EK block.
Usually it comes within 2-5c of the best paste, but due to being thick I'm thinking I would get better pressure on the die, since my EK block standoffs are a bit conservative for pressure.

I will try it over the weekend.


----------



## Zogge

These in 1.5mm and 1.0mm.








Gelid Solutions GP-Extreme thermal pad, 12 W/mK, grey, 80 mm x 40 mm x 1 mm


- Ultimate Heat Conductivity- Non-Electrical Conductive- Non-Corrosive, Non-Curing & Non-ToxicThe GP-Extreme is designed to provide perfect thermal contact ...




www.computersalg.se


----------



## pat182

z390 rebar support late april/may


----------



## inedenimadam

Contacted EK about the rear VRAM block that I ordered. Delivery is set for April 23rd.
YAY


----------



## jura11

Zogge said:


> Okay, let me know how it goes. Also, isn't the VRM temp visible in HWiNFO for you? Gelid pads have almost no compression, hence they seem very hard.


Sadly the VRM temp is not visible in HWiNFO for me. I will do tests with Gelid Extreme pads and Thermalright Odyssey pads and see how they perform, hopefully with no issues.

Thanks, Jura


----------



## J7SC

jura11 said:


> Sadly VRM is not visible in the HWiNFO and I will do tests with Gelid Extreme pads and Thermalright Odyssey pads too and see how they perform, hopefully no issues
> 
> Thanks, Jura


...HWInfo does show at least partial VRM temp on some cards...this is from a screenshot from the first day(s) I got my 3090


----------



## jura11

J7SC said:


> ...HWInfo does show at least partial VRM temp on some cards...this is from a screenshot from the first day(s) I got my 3090
> 
> View attachment 2485784


On some GPUs like EVGA or Asus, yes, you can see VRM temperatures, but sadly on my Palit RTX 3090 GamingPro I can't; it is not exposed in HWiNFO.

Hope this helps 

Thanks, Jura


----------



## KedarWolf

While running Port Royal with an EKWB block and backplate on my Strix OC the VRMs stay under 40C.

It's the memory that gets to 60C but that is more than acceptable. The core gets to 50C.

I only have two fans on my 360 rad for my GPU right now, waiting on screws to add three more. And my two fans and pump were at 60%. So temps aren't even ideal right now.

Only five fans because the bottom left one hits the Strix waterblock on my Thermaltake Core X9 case.

I get flak for using a Thermaltake case, but I love that the motherboard is horizontal and the video card vertical, plus it has some of the best rad and fan support I've seen in pretty much any case.

Fan Support

Front:
3 x 120mm
2 x 140mm
2 x 200mm
Top:
8 x 120mm
6 x 140mm
2 x 200mm
Rear:
2 x 120mm or 2 x 140mm
Bottom:
3 x 120mm
Left / Right Side:
4 x 120mm
3 x 140mm 

Rad Support

Front:
1 x 120mm or 1 x 240mm or 1 x 360mm
1 x 140mm or 1 x 280mm
Top:
2 x 120mm or 2 x 240mm or 2 x 360mm or 2 x 480mm
2 x 140mm or 2 x 280mm or 2 x 420mm
Rear:
1 x 120mm or or 1 x 140mm
Left / Right Side:
1 x 120mm or 1 x 240mm or 1 x 360mm or 1 x 480mm
1 x 140mm or 1 x 280mm or 1 x 420mm
Bottom:
1 x 120mm or 1 x 240mm or 1 x 360mm or 1 x 480mm
1 x 140mm or 1 x 280mm or 1 x 420mm


----------



## GRABibus

KedarWolf said:


> While running Port Royal with an EKWB block and backplate on my Strix OC the VRMs stay under 40C.
> 
> It's the memory that gets to 60C but that is more than acceptable. The core gets to 50C.
> 
> I only have two fans on my 360 rad for my GPU right now, waiting on screws to add three more. And my two fans and pump were at 60%. So temps now ideal right now.
> 
> Only five fans because the bottom left one hits the Strix waterblock on my Thermaltake Core X9 case.
> 
> I get flak for using a Thermaltake case, but I love that the motherboard is horizontal and the video card vertical, plus it has some of the best rad and fan support I've seen in pretty much any case.
> 
> Fan Support
> 
> Front:
> 3 x 120mm
> 2 x 140mm
> 2 x 200mm
> Top:
> 8 x 120mm
> 6 x 140mm
> 2 x 200mm
> Rear:
> 2 x 120mm or 2 x 140mm
> Bottom:
> 3 x 120mm
> Left / Right Side:
> 4 x 120mm
> 3 x 140mm
> 
> Rad Support
> 
> Front:
> 1 x 120mm or 1 x 240mm or 1 x 360mm
> 1 x 140mm or 1 x 280mm
> Top:
> 2 x 120mm or 2 x 240mm or 2 x 360mm or 2 x 480mm
> 2 x 140mm or 2 x 280mm or 2 x 420mm
> Rear:
> 1 x 120mm or or 1 x 140mm
> Left / Right Side:
> 1 x 120mm or 1 x 240mm or 1 x 360mm or 1 x 480mm
> 1 x 140mm or 1 x 280mm or 1 x 420mm
> Bottom:
> 1 x 120mm or 1 x 240mm or 1 x 360mm or 1 x 480mm
> 1 x 140mm or 1 x 280mm or 1 x 420mm


some pictures ? 😊


----------



## KedarWolf

GRABibus said:


> some pictures ? 😊


No, I'm too embarrassed to show pictures. I have something like 20 fans in total and my cable management style falls into the category of, "There appears to have been a struggle." 

It's set up as a test bench though with no sides or top on it.


----------



## KedarWolf

I swear by this PWM fan controller. It uses a SATA power connector, not a crappy Molex one, and has a cable to a single PWM header for fan speed control.

You can add up to eight fans, it's cheap, it's tiny and it has a sticky pad on the bottom of it for placing it in a convenient spot in your case.

Silverstone 8-Port PWM Fan Hub/Splitter for 4-Pin & 3-Pin Fans in Black SST-CPF04-USA (Newest Version)



https://www.amazon.ca/gp/product/B07N3HP8S5/ref=ppx_yo_dt_b_asin_title_o06_s00?ie=UTF8&psc=1


----------



## des2k...

KedarWolf said:


> I swear by this PWM fan controller. It uses a SATA power connector, not a crappy Molex one, and has a cable to a single PWM header for fan-speed control.
> 
> You can add up to eight fans, it's cheap, it's tiny and it has a sticky pad on the bottom of it for placing it in a convenient spot in your case.
> 
> Silverstone 8-Port PWM Fan Hub/Splitter for 4-Pin & 3-Pin Fans in Black SST-CPF04-USA (Newest Version)
> 
> 
> 
> https://www.amazon.ca/gp/product/B07N3HP8S5/ref=ppx_yo_dt_b_asin_title_o06_s00?ie=UTF8&psc=1
> 
> 
> 


I have this one, but SATA's current rating is lower than Molex's. Very cheap plastic/cables compared to the free upHere hubs you get with the upHere RGB fans.

Here are some pics of my O11.

pump 1, no room inside

image-2021-04-08-152834

who needs the top cover

image-2021-04-08-153117

can I fit more stuff here ?

image-2021-04-08-153331

and... who needs the side panel, found room for my 2nd pump

image-2021-04-08-153510


----------



## KCDC

KedarWolf said:


> While running Port Royal with an EKWB block and backplate on my Strix OC the VRMs stay under 40C.
> 
> It's the memory that gets to 60C but that is more than acceptable. The core gets to 50C.
> 
> I only have two fans on my 360 rad for my GPU right now, waiting on screws to add three more. And my two fans and pump were at 60%. So temps are not ideal right now.
> 
> Only five fans because the bottom left one hits the Strix waterblock on my Thermaltake Core X9 case.
> 
> I get flak for using a Thermaltake case, but I love that the motherboard is horizontal and the video card vertical, plus it has some of the best rad and fan support I've seen in pretty much any case.
> 
> Fan Support
> 
> Front:
> 3 x 120mm
> 2 x 140mm
> 2 x 200mm
> Top:
> 8 x 120mm
> 6 x 140mm
> 2 x 200mm
> Rear:
> 2 x 120mm or 2 x 140mm
> Bottom:
> 3 x 120mm
> Left / Right Side:
> 4 x 120mm
> 3 x 140mm
> 
> Rad Support
> 
> Front:
> 1 x 120mm or 1 x 240mm or 1 x 360mm
> 1 x 140mm or 1 x 280mm
> Top:
> 2 x 120mm or 2 x 240mm or 2 x 360mm or 2 x 480mm
> 2 x 140mm or 2 x 280mm or 2 x 420mm
> Rear:
> 1 x 120mm or 1 x 140mm
> Left / Right Side:
> 1 x 120mm or 1 x 240mm or 1 x 360mm or 1 x 480mm
> 1 x 140mm or 1 x 280mm or 1 x 420mm
> Bottom:
> 1 x 120mm or 1 x 240mm or 1 x 360mm or 1 x 480mm
> 1 x 140mm or 1 x 280mm or 1 x 420mm


When I'm out of screws to add more rad fans, I just take two from either corner of the other fans and use them the same way for the new ones. I don't know if you really need all four screws for each fan; it's been working great for me that way.


----------



## KedarWolf

KCDC said:


> When I'm out of screws to add more rad fans, I just take two from either corner of the other fans and use them the same way for the new ones. I don't know if you really need all four screws for each fan; it's been working great for me that way.


I found my bag of rad fans, added three more fans.

In Battlefield 5 with the graphics maxed out, no DLSS since I'm at 3840x1080, I'm getting 42C with very warm ambient temps.


----------



## GRABibus

Does the ASUS XOC BIOS with Resizable BAR exist?


----------



## EarlZ

What is the best curve shape/form to further optimize this? I opted for 1995 MHz, which already tops out at 450 W in Port Royal at 3440x1440.


----------



## KedarWolf

Here is Battlefield 5, 3840x1080, no DLSS, graphics settings maxed out. And my GPU-Z log and an HWInfo screenshot. The code is from the Afterburner log.

This is after running the game for 30 minutes.

The HWInfo log formatting is really messed up and pretty much useless. 












Code:


ID, Timestamp,           GPU Temp (C)        ,Core Clock (MHz)    ,Memory Clock (MHz)  ,Framerate (FPS)     ,Frametime (ms)
80, 09-04-2021 22:35:43, 43.000              ,2145.000            ,10535.978           ,100.200             ,18.490        
80, 09-04-2021 22:35:44, 42.000              ,2145.000            ,10535.978           ,101.500             ,20.774        
80, 09-04-2021 22:35:45, 43.000              ,2145.000            ,10535.978           ,98.100              ,11.418        
80, 09-04-2021 22:35:46, 43.000              ,2145.000            ,10535.978           ,90.400              ,13.103        
80, 09-04-2021 22:35:47, 44.000              ,2145.000            ,10535.978           ,87.300              ,19.447        
80, 09-04-2021 22:35:48, 45.000              ,2145.000            ,10535.978           ,91.300              ,11.893        
80, 09-04-2021 22:35:49, 42.000              ,2145.000            ,10535.978           ,96.700              ,12.453        
80, 09-04-2021 22:35:50, 43.000              ,2145.000            ,10535.978           ,109.000             ,11.482        
80, 09-04-2021 22:35:51, 43.000              ,2145.000            ,10535.978           ,91.000              ,12.558        
80, 09-04-2021 22:35:52, 44.000              ,2145.000            ,10535.978           ,98.400              ,14.410        
80, 09-04-2021 22:35:53, 44.000              ,2145.000            ,10535.978           ,98.000              ,14.124        
80, 09-04-2021 22:35:54, 44.000              ,2145.000            ,10535.978           ,99.900              ,15.429        
80, 09-04-2021 22:35:55, 43.000              ,2145.000            ,10535.978           ,98.700              ,11.481        
80, 09-04-2021 22:35:56, 43.000              ,2145.000            ,10535.978           ,96.200              ,14.700        
80, 09-04-2021 22:35:57, 44.000              ,2145.000            ,10535.978           ,104.600             ,12.665        
80, 09-04-2021 22:35:58, 44.000              ,2145.000            ,10535.978           ,105.800             ,15.831        
80, 09-04-2021 22:35:59, 43.000              ,2145.000            ,10535.978           ,104.600             ,11.678        
80, 09-04-2021 22:36:00, 43.000              ,2145.000            ,10535.978           ,95.800              ,16.395        
80, 09-04-2021 22:36:01, 44.000              ,2145.000            ,10535.978           ,98.400              ,22.674        
80, 09-04-2021 22:36:02, 43.000              ,2145.000            ,10535.978           ,98.100              ,11.733        
80, 09-04-2021 22:36:03, 43.000              ,2145.000            ,10535.978           ,94.900              ,13.372        
80, 09-04-2021 22:36:04, 43.000              ,2145.000            ,10535.978           ,94.800              ,20.207        
80, 09-04-2021 22:36:05, 42.000              ,2145.000            ,10535.978           ,97.000              ,12.179        
80, 09-04-2021 22:36:06, 43.000              ,2145.000            ,10535.978           ,93.900              ,20.548        
80, 09-04-2021 22:36:07, 43.000              ,2145.000            ,10535.978           ,101.400             ,11.180        
80, 09-04-2021 22:36:08, 44.000              ,2145.000            ,10535.978           ,102.800             ,10.808        
80, 09-04-2021 22:36:09, 44.000              ,2145.000            ,10535.978           ,99.800              ,14.475        
80, 09-04-2021 22:36:10, 44.000              ,2145.000            ,10535.978           ,99.900              ,20.922
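Since the HWInfo log is unusable, the Afterburner CSV above is easy to summarize with a short script instead. A minimal sketch; the column order (ID, timestamp, GPU temp, core clock, memory clock, framerate, frametime) is assumed from the sample above, with the last column in particular being a guess:

```python
import csv
from io import StringIO

# Two rows in the Afterburner log format shown above.
log = """\
80, 09-04-2021 22:35:43, 43.000, 2145.000, 10535.978, 100.200, 18.490
80, 09-04-2021 22:35:44, 42.000, 2145.000, 10535.978, 101.500, 20.774
"""

temps, fps = [], []
for row in csv.reader(StringIO(log)):
    fields = [f.strip() for f in row]
    temps.append(float(fields[2]))  # GPU temperature, third column
    fps.append(float(fields[5]))    # framerate, sixth column

print(f"avg temp: {sum(temps) / len(temps):.1f} C")
print(f"avg fps:  {sum(fps) / len(fps):.1f}")
```

The same pattern works on the full log file by swapping `StringIO(log)` for `open("log.csv")`.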


----------



## Azmodeus

Hi, I just purchased a Gainward 3090 Phantom and a Zotac 3090 Trinity OC for 2399 Euro each. Any recommendations on which model to keep for mostly gaming (Flight Simulator is my most used game) and mining when I am not using the PC?

Thanks!


----------



## KedarWolf

Azmodeus said:


> Hi, I just purchased a Gainward 3090 Phantom and a Zotac 3090 Trinity OC for 2399 Euro each. Any recommendations on which model to keep for mostly gaming (Flight Simulator is my most used game) and mining when I am not using the PC?
> 
> Thanks!


The Gainward has three power connectors and the Zotac two, so the Gainward could be the better card with the right BIOS flashed.


----------



## emil2424

EarlZ said:


> What is the best curve shape/form to further optimize this? I opted for 1995 MHz, which already tops out at 450 W in Port Royal at 3440x1440.


What angle do you want to optimize? Maximum performance? Silence? Power consumption?
When it comes to performance, most people shouldn't use the curve, because they don't understand 100% how it works and they do more harm than good.
Use the OC slider and, at most, slightly edit the curve if you feel you are really close to stability.
You can also use a curve to limit the maximum clock.
It is a waste of time to play with the curve unless you are trying to compete with other people in the benchmarks.


----------



## defcoms

Well, I finally broke 15k in PR. This is all I can do until the weather cools off again. One thing I noticed is that I seem to get lower scores even though my average clock is higher than some results. I see some people break 15k at 2085 or so; anyone else notice this?









I scored 15 043 in Port Royal


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Dreams-Visions

Alright, my best run (Strix OC, bar enabled)









I scored 15 029 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Nothing really tuned yet. New mobo; haven't gotten a chance to address the CPU or memory tuning.

Oddly, I did a very casual run (a bunch of apps open, just testing undervolts) and my first try ended not especially far from that run.









I scored 14 719 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





1000mV cap, 2025 target. First time on an open loop.

Do these look alright to you all? I dunno. Also, for some reason I expected a gap bigger than 300 points given the voltage limit applied. I feel like I have a lot to learn about V/F curves and offsets. I'd like to know what my card is actually capable of just to know, but I'm not sure if I'll ever be knowledgeable enough to find out.


----------



## GRABibus

Dreams-Visions said:


> Alright, my best run (Strix OC, bar enabled)
> 
> I scored 15 029 in Port Royal
> 
> Nothing really tuned yet. New mobo; haven't gotten a chance to address the CPU or memory tuning.
> 
> Oddly, I did a very casual run (a bunch of apps open, just testing undervolts) and I ended my first try with a run not especially distant from that run.
> 
> I scored 14 719 in Port Royal
> 
> 1000mV cap, 2025 target. I dunno. For some reason I expected a gap bigger than 300 points given the voltage limit applied. I feel like I have a lot to learn about V/F curves and offsets. I'd like to know what my card is actually capable of just to know, but I'm not sure if I'll ever be knowledgeable enough to find out.


Which BIOS do you use?
Are you on water?


----------



## Dreams-Visions

GRABibus said:


> Which BIOS do you use?
> Are you on water?


The first one is the updated Strix 480W bios.

The second one is the updated KP 520W bios (just thought I'd try it out). 

On water first time, yes. I'm in Florida, so my ambient is a bit higher than others. Like 24.5C or so.


----------



## defcoms

Dreams-Visions said:


> The first one is the updated Strix 480W bios.
> 
> The second one is the updated KP 520W bios (just thought I'd try it out).
> 
> On water first time, yes. I'm in Florida, so my ambient is a bit higher than others. Like 24.5C or so.


I'm in Florida as well. Can't get my loop much cooler without cranking the AC down to 21C, and a few points aren't worth the cost.


----------



## Dreams-Visions

defcoms said:


> I'm in Florida as well. Can't get my loop much cooler without cranking the AC down to 21C, and a few points aren't worth the cost.


glad it's not just me living the tropical "struggle"! lol


----------



## J7SC

Dreams-Visions said:


> glad it's not just me living the tropical "struggle"! lol


...that calls for a weather update from my neck of the woods...right now it's sunny, with snow on the local mountains, and temp is 8 C / 46.4 F ☕ ...3090 likes it !


----------



## Lord of meat

Anyone else had clock speeds drop, yet Port Royal scores go up, after the BIOS updates on the mobo and GPU for ReBAR?
I used to get 2100-2160 MHz and a score of mid-14k; now it's 15k while I run 2070-2100. I also had to lower my memory OC because it would crash. Temps are the same; nothing goes over 57C on water.
Very odd.


----------



## Falkentyne

Lord of meat said:


> Anyone else had clock speeds drop, yet Port Royal scores go up, after the BIOS updates on the mobo and GPU for ReBAR?
> I used to get 2100-2160 MHz and a score of mid-14k; now it's 15k while I run 2070-2100. I also had to lower my memory OC because it would crash. Temps are the same; nothing goes over 57C on water.
> Very odd.


What drivers?
Newest NV drivers increased PR scores by about 250 points. May be some RT optimizations.


----------



## GRABibus

Any news on the ASUS XOC BIOS or the KP XOC with ReBAR?


----------



## EarlZ

emil2424 said:


> What angle do you want to optimize? Maximum performance? Silence? Power consumption?
> When it comes to performance, most people shouldn't use the curve, because they don't understand 100% how it works and they do more harm than good.
> Use the OC slider and, at most, slightly edit the curve if you feel you are really close to stability.
> You can also use a curve to limit the maximum clock.
> It is a waste of time to play with the curve unless you are trying to compete with other people in the benchmarks.


Maximum performance with the stock BIOS, using 0.950V as a starting point. Not for benchmarks but for gaming. Though I understand this might not be the right forum for newbie advice.


----------



## sultanofswing

GRABibus said:


> Any news on the ASUS XOC BIOS or the KP XOC with ReBAR?


Vince told me it is not ready.


----------



## Lord of meat

Falkentyne said:


> What drivers?
> Newest NV drivers increased PR scores by about 250 points. May be some RT optimizations.


The ones that came out on March 30.


----------



## des2k...

sultanofswing said:


> Vince told me it is not ready.


At least we know it's coming 😁
Of course I'm on the 2x8-pin Zotac; looking forward to flashing this ReBAR vBIOS.

That part will be easy. What's not easy is my 3900X / Aorus Master; my CPU/board hate the new AGESA codes. Still on the old F11, since anything newer is unstable (audio crackling, WHEA BSODs, reboots, etc.) at stock.


----------



## Lobstar

defcoms said:


> Well, I finally broke 15k in PR. This is all I can do until the weather cools off again. One thing I noticed is that I seem to get lower scores even though my average clock is higher than some results. I see some people break 15k at 2085 or so; anyone else notice this?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 043 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


I noticed your memory clocks are stock. I'm on air with a lower average core than you, but have a higher score. My memory is set to +1500. I scored 15 118 in Port Royal.
Edit: I'm also on the KPE 520W BIOS on my revised 3090 FTW3 Ultra.


----------



## des2k...

Lobstar said:


> I noticed your memory clocks are stock. I'm on air and lower average core than you but have a higher score. My memory is set to +1500. I scored 15 118 in Port Royal
> Edit: I'm also on the KPE520w bios on my revised 3090 FTW3 Ultra.


Also got 15k+, 2145 core / +1482 mem, with my Zotac.
Probably the new driver; it's a new branch optimized for Ampere.

I think before, this OC was about 14.6k or so.


----------



## defcoms

Lobstar said:


> I noticed your memory clocks are stock. I'm on air and lower average core than you but have a higher score. My memory is set to +1500. I scored 15 118 in Port Royal
> Edit: I'm also on the KPE520w bios on my revised 3090 FTW3 Ultra.


My memory was set at +1382. Not sure what's going on with my score. I was using the KPE 520W BIOS as well on my Strix. I do notice some drops in my CPU frequency during the run. It's like my CPU starts to downclock, as if it has nothing to process for a second or two. It drops to 3700 then back up to 4950-ish.


----------



## Beagle Box

defcoms said:


> My memory was set at +1382. Not sure what's going on with my score. I was using the KPE 520W BIOS as well on my Strix. I do notice some drops in my CPU frequency during the run. It's like my CPU starts to downclock, as if it has nothing to process for a second or two. It drops to 3700 then back up to 4950-ish.


Port Royal makes AVX calls. Set your AVX offset to 0 and your CPU won't downclock.


----------



## defcoms

Beagle Box said:


> Port Royal makes AVX calls. Set your AVX offset to 0 and your CPU won't downclock.


I've got a 5800X; AMD doesn't have an AVX offset to set. I will try a manual OC and see if it makes any difference.


----------



## Lobstar

defcoms said:


> My memory was set at +1382. Not sure what's going on with my score. I was using the KPE 520W BIOS as well on my Strix. I do notice some drops in my CPU frequency during the run. It's like my CPU starts to downclock, as if it has nothing to process for a second or two. It drops to 3700 then back up to 4950-ish.


My bad, I must have looked at the number to the right of the actual because I see it now.


----------



## emil2424

EarlZ said:


> Maximum performance with the stock BIOS, using 0.950V as a starting point. Not for benchmarks but for gaming. Though I understand this might not be the right forum for newbie advice.











NVIDIA GeForce RTX 3090 Undervolting Update - This is how (a little) common sense works! | igor'sLAB


You already know the first article, but of course you can read it again here: " GeForce RTX 3080 and RTX 3090 Undervolting – When Reason and Experimentation Meet NVIDIA Ampere". Due to the demand I…




www.igorslab.de


----------



## Beagle Box

defcoms said:


> I've got a 5800X; AMD doesn't have an AVX offset to set. I will try a manual OC and see if it makes any difference.


I wonder about this. Is it possible that AMD uses a non-adjustable AVX offset? 
When I first started testing with Port Royal, the CPU (Intel) graph showed 3 distinct dips in processor frequency. 
Those dips vanished when I set AVX offset to 0.


----------



## GRABibus

Lobstar said:


> I noticed your memory clocks are stock. I'm on air and lower average core than you but have a higher score. My memory is set to +1500. I scored 15 118 in Port Royal
> Edit: I'm also on the KPE520w bios on my revised 3090 FTW3 Ultra.


I see you have a revised FTW3 Ultra.
You mean the new ones on which the unbalanced power draw across the 8-pin connectors is solved, and therefore the power limitation as well?
If so, how can we be sure to get the revised card when we purchase it?

Thank you.


----------



## yzonker

des2k... said:


> At least we know it's comming 😁
> Of course I'm on 2x8pin Zotac, looking forward to flash this rbar vbios.
> 
> That will be easy, what's not easy is my 3900x / aorus master. My cpu/board hate new agesa codes. Still on old f11, anything more it's unstable(audio crackling, whea bsod,reboots,etc) at stock.


That's odd. I have an Aorus Ultra and a 5800X. So far no issues with F33; my Cinebench score even increased slightly. Since I put F33 on, I've gone back to running PBO rather than an all-core OC.

Also, I've been running that 385W Zotac BIOS you found. Kind of interesting in that my block delta is quite a bit lower using it vs. 390W on either a Gigabyte BIOS, or setting the XOC to 390W. Yeah, it's 5W less, but my delta is 11-12C now versus the 13-14C I saw before. I've logged it and saw a 395W peak vs. a 401W peak on a 390W BIOS. I also ran Port Royal and was only 50 points lower, so I can't really explain why my card runs that much cooler. Some of the 50 points was from dropping 15 MHz off my core offset too (180 vs. 195); for some reason PR wouldn't complete with the +195 I had run on other BIOSes, so a small hit there. My normal gaming OC settings seem to be just as stable though (+120 to +150).
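For what it's worth, a back-of-the-envelope check agrees that 5-6W alone can't explain the drop. This assumes (my assumption, not a measurement) that the core-to-water delta scales roughly linearly with dissipated power for a fixed block:

```python
# Peak power and block delta reported above; the linear-scaling model
# is an illustration-only assumption.
p_390_bios, p_385_bios = 401.0, 395.0   # logged peak watts on each BIOS
delta_390 = 13.5                        # C, midpoint of the 13-14C range

# Expected delta on the 385W BIOS if power were the only difference:
expected = delta_390 * (p_385_bios / p_390_bios)
print(f"expected: {expected:.1f} C")    # ~13.3C, versus the 11-12C observed
```

So under that simple model the new BIOS should only have shaved a couple tenths of a degree, not 2C.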


----------



## Lobstar

GRABibus said:


> I see you have a revised FTW3 Ultra.
> You mean the new ones on which the unbalanced power draw across the 8-pin connectors is solved, and therefore the power limitation as well?
> If so, how can we be sure to get the revised card when we purchase it?
> 
> Thank you.


I'm not entirely sure they are in the retail channels yet, and you only seem to see negative responses on manufacturer forums. The power across 8-pins 1 and 2 is pretty spot-on balanced, but the third lags behind. The PCIe slot power doesn't go above 75W with the KPE BIOS and not above 67W on the XOC BIOS the card comes with. The only way to guarantee you get a revision is to go through the special RMA process, which you initiate by emailing [email protected].


----------



## J7SC

yzonker said:


> That's odd. I have a Aorus Ultra and a 5800x. So far no issues with F33. My Cinebench score even increased slightly. Since I put F33 on, I've gone back to running PBO rather than an all core OC.
> 
> Also, been running that 385w Zotac bios you found. Kinda interesting in that my block delta is quite a bit lower using it vs. 390w on either a Gigabyte bios, or setting the XOC to 390w. Yea it's 5w less, but my delta is 11-12C now versus the 13-14C I saw before. I've logged it and saw 395w peak vs 401w peak on a 390w bios. Also ran Port Royal and was only 50pts lower, so I can't really explain why my card runs that much cooler. Some of the 50pts was from dropping 15mhz off my core offset too (180 vs 195). For some reason PR wouldn't complete with the +195 I had run on other bios. So a small hit there. My normal gaming OC settings seem to be just as stable though (+120 to +150).


Are you running the NV tab on 'optimal' or 'prefer max performance'? When my Strix was air-cooled, I got better temps and better scores with 'optimal', presumably because it momentarily down-clocked for milliseconds, affecting temps and thus the boost algorithm. With water-cooling it is less of an issue, but unlike the previous gen (i.e. 2080 Ti), 'optimal' vs. 'prefer max performance' seems to have less impact on fps, or at least is a more pronounced trade-off re: power vs. internal GPU temps.


----------



## Dreams-Visions

J7SC said:


> ...that calls for a weather update from my neck of the woods...right now it's sunny, with snow on the local mountains, and temp is 8 C / 46.4 F ☕ ...3090 likes it !


lol must be nice!

In any event, mine is coming along in spite of the heat:









I scored 15 139 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





I'm gonna get to that CPU and memory tweaking one day. But not today.


----------



## GRABibus

With my Strix OC, on stock air cooling, I get 14850 pts at best.

This is with:
KP 520W BIOS
+130MHz on core
+1000MHz on memory (higher, it crashes)
Open side panel on the PC
All fans at 100%
21°C ambient temperature.

My GPU hits up to 79°C in this test, with an average of 65°C.

I would like to know if those temps are normal for these conditions, or if my Strix heats up more than the usual ones.

I know temperature is the key for better scores, but from what I've seen in a lot of tests and reading here, I have the feeling my Strix is overheating versus the average.


----------



## J7SC

GRABibus said:


> With my Strix OC, on stock air cooling, I get 14850 pts at best.
> 
> This is with:
> KP 520W BIOS
> +130MHz on core
> +1000MHz on memory (higher, it crashes)
> Open side panel on the PC
> All fans at 100%
> 21°C ambient temperature.
> 
> My GPU hits up to 79°C in this test, with an average of 65°C.
> 
> I would like to know if those temps are normal for these conditions, or if my Strix heats up more than the usual ones.
> 
> I know temperature is the key for better scores, but from what I've seen in a lot of tests and reading here, I have the feeling my Strix is overheating versus the average.


Temps seem a touch high, but I never ran anything but the stock BIOS... I'm currently updating my system w/ a new CPU/mobo combo, but as stated above, I got the best results on air / stock BIOS with 'optimal' rather than 'prefer max performance' in the NV tab. The PR run in the spoiler was with the 461.40 driver, and obviously no ReBAR. GPU-Z 'max temps' (pre-hotspot feature) were 68C for the GPU.



Spoiler


----------



## GRABibus

J7SC said:


> Temps seem a touch high, but I never ran anything but the stock BIOS... I'm currently updating my system w/ a new CPU/mobo combo, but as stated above, I got the best results on air / stock BIOS with 'optimal' rather than 'prefer max performance' in the NV tab. The PR run in the spoiler was with the 461.40 driver, and obviously no ReBAR. GPU-Z 'max temps' (pre-hotspot feature) were 68C for the GPU.
> 
> 
> 
> Spoiler
> 
> 
> 
> 


Thanks.

It clearly shows I have a temperature issue.


----------



## yzonker

J7SC said:


> Are you running NV tab on 'optimal' or ' 'prefer max power' ? When my Strix was air-cooled, I got better temps and better scores with 'optimal'...presumably because it momentarily down-clocked for milliseconds, affecting temps and thus boost algorithm...with w-cooling, it is less of an issue, but unlike previous gen (ie. 2080 Ti), 'optimal' vs 'prefer max power' seems to have less impact on fps or at least is a more pronounced trade-off re. power vs internal GPU temps.


All I have is "Normal" and "Prefer Maximum Performance" if you are referring to the Power Management Mode? It's set to normal.


----------



## J7SC

yzonker said:


> All I have is "Normal" and "Prefer Maximum Performance" if you are referring to the Power Management Mode? It's set to normal.


...yeah, I run various machines w/ different NVIDIA cards in my home office, and on some of them I get three options in the NV tab: 'Optimal', 'Adaptive', and 'Prefer Maximum Performance', while others only have 'Normal' and 'Prefer Maximum Performance'. In any case, 'Prefer Maximum Performance' stops my Strix from down-clocking after a reboot, and in benches while the card was air-cooled, 'Prefer Maximum Performance' was not as good as 'optimal' in fps. With water-cooling the difference is less dramatic, but since I like the card to down-clock, I avoid 'Prefer Maximum Performance'.


----------



## yzonker

J7SC said:


> ...yeah, I run various machines w/ different NVidia cards in my home-office, and with some of them, I get three options in the NV tab: 'Optimal', 'Adaptive', and 'Prefer Maximum Performance', while others only have 'Normal' and 'Prefer Maximum Performance'....in any case, 'Prefer Maximum Performance', upon reboot, stops my Strix from down-clocking,..and in benches, not using 'Prefer Maximum Performance' was not as good as 'optimal' in fps while the card was air-cooled. With w-cooling, the difference is less dramatic but since I like the card to down-clock, I avoid 'Prefer Maximum Performance'


Yes, I prefer it to idle down also, and I've never seen any significant performance difference anyway. I still don't really understand the drop in block delta though. I haven't changed any settings, just the BIOS; 5W just isn't enough to explain it.


----------



## EarlZ

emil2424 said:


> NVIDIA GeForce RTX 3090 Undervolting Update - This is how (a little) common sense works! | igor'sLAB
> 
> 
> You already know the first article, but of course you can read it again here: " GeForce RTX 3080 and RTX 3090 Undervolting – When Reason and Experimentation Meet NVIDIA Ampere". Due to the demand I…
> 
> 
> 
> 
> www.igorslab.de


Thanks. Looks like my original voltage curve was already optimal.


----------



## ViRuS2k

Guys, I think I have a problem with the cooling of my graphics card, a 3090 MSI Trio X.
In the most demanding games, temps on the card reach 70C under water, my water temp never goes above 35C, and the graphics card's outlet fitting is very, very hot when I touch it.
Is it normal for the graphics card to have a very hot outlet fitting? Also, my 5950X CPU easily gets up to 90C, but only when the graphics card is toasty.

My water loop is:

START - res - pump - graphics card - mp5works - CPU - thick 360mm - thick 360mm - then back to res.
It's in a Dynamic O11 XL case with an EK distro plate combo.

Temps on the memory are great all the time using the mp5works, but it dumps its heat right into my CPU block. While mining, the memory averages ~85C and maxes at 96C, perfectly normal for mining; I try to keep memory temps under 100C at all times. In the most demanding games, memory temps never go over 68C.

But I am beginning to think my GPU and mp5works are what's causing my high CPU temps, as there is no radiator in between that lot.

GPU - mp5 - CPU block: it's like I have a huge hotspot, lol.
Now I'm thinking about adding a radiator in between all that. Where would the best place be: after the GPU before the mp5works, or after the mp5works before the CPU block? I need to get my CPU temps down and reduce the GPU temps more.

Because there is something fishy going on, lol.


----------



## Zogge

To me it sounds like bad mounting pressure or too low flow for both the GPU and CPU. The delta on the GPU should be less than 20C with standard application, and for the CPU I would say a 20C delta as well if not overclocked.

Just for comparison, my water is around 25-29C at 280 L/h flow, with a lot of rads. GPU overclocked to 2200 MHz on the standard 480W BIOS is like a 10-12C delta T, hence 35-41C max on the GPU; at idle it's the same as water temp, hence 0 delta. My CPU is clocked to 5.1 GHz on all cores at 1.36V but never exceeds 78C. That is an 8-core X299 CPU.

The MP5 is in serial, I assume, with the serial block? If it is the parallel block with the mini inlets connected in a serial setup, then I understand the flow problem and heat. I never got over 50 L/h with three D5s with the parallel version of it in serial mode; you need the serial top for it. Just thinking out loud here.
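That core-minus-water rule of thumb can be written down directly; the ~20C threshold is the figure above, and the example numbers are the ones reported in this thread:

```python
# Core-to-water delta: the number this thread uses to judge a GPU block mount.
def block_delta(gpu_core_c: float, water_c: float) -> float:
    return gpu_core_c - water_c

def mount_looks_bad(gpu_core_c: float, water_c: float,
                    threshold_c: float = 20.0) -> bool:
    # Above ~20C delta, suspect mounting pressure or loop flow.
    return block_delta(gpu_core_c, water_c) > threshold_c

print(mount_looks_bad(70, 35))  # 70C core on 35C water: 35C delta -> True
print(mount_looks_bad(41, 29))  # 41C core on 29C water: 12C delta -> False
```

It's only a screening number, of course; it says nothing about *which* of mount pressure, flow rate, or a dead pump is at fault.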


----------



## Pepillo

ViRuS2k said:


> Guys, I think I have a problem with the cooling of my graphics card, a 3090 MSI Trio X.
> In the most demanding games, temps on the card reach 70C under water, my water temp never goes above 35C, and the graphics card's outlet fitting is very, very hot when I touch it.
> Is it normal for the graphics card to have a very hot outlet fitting? Also, my 5950X CPU easily gets up to 90C, but only when the graphics card is toasty.
> 
> My water loop is:
> 
> START - res - pump - graphics card - mp5works - CPU - thick 360mm - thick 360mm - then back to res.
> It's in a Dynamic O11 XL case with an EK distro plate combo.
> 
> Temps on the memory are great all the time using the mp5works, but it dumps its heat right into my CPU block. While mining, the memory averages ~85C and maxes at 96C, perfectly normal for mining; I try to keep memory temps under 100C at all times. In the most demanding games, memory temps never go over 68C.
> 
> But I am beginning to think my GPU and mp5works are what's causing my high CPU temps, as there is no radiator in between that lot.
> 
> GPU - mp5 - CPU block: it's like I have a huge hotspot, lol.
> Now I'm thinking about adding a radiator in between all that. Where would the best place be: after the GPU before the mp5works, or after the mp5works before the CPU block? I need to get my CPU temps down and reduce the GPU temps more.
> 
> Because there is something fishy going on, lol.


It's not normal at all. We have the same setup, an O11 XL with distro plate and two 360 radiators, and my maximum temperatures are 40°C for the water and 52-53°C for the card, with an overclocked 3090 + 7900X. I don't know what your problem is, but something's not right.


----------



## jomama22

ViRuS2k said:


> Guys, I think I have a problem with the cooling of my graphics card, a 3090 MSI Trio X.
> In the most demanding games, temps on the card reach 70C under water, my water temp never goes above 35C, and the graphics card's outlet fitting is very, very hot when I touch it.
> Is it normal for the graphics card to have a very hot outlet fitting? Also, my 5950X CPU easily gets up to 90C, but only when the graphics card is toasty.
> 
> My water loop is:
> 
> START - res - pump - graphics card - mp5works - CPU - thick 360mm - thick 360mm - then back to res.
> It's in a Dynamic O11 XL case with an EK distro plate combo.
> 
> Temps on the memory are great all the time using the mp5works, but it dumps its heat right into my CPU block. While mining, the memory averages ~85C and maxes at 96C, perfectly normal for mining; I try to keep memory temps under 100C at all times. In the most demanding games, memory temps never go over 68C.
> 
> But I am beginning to think my GPU and mp5works are what's causing my high CPU temps, as there is no radiator in between that lot.
> 
> GPU - mp5 - CPU block: it's like I have a huge hotspot, lol.
> Now I'm thinking about adding a radiator in between all that. Where would the best place be: after the GPU before the mp5works, or after the mp5works before the CPU block? I need to get my CPU temps down and reduce the GPU temps more.
> 
> Because there is something fishy going on, lol.


You shouldn't be getting that hot at all. Sounds like an extremely poor GPU mount.

A 5950X shouldn't reach 90°C in gaming unless you have some obscene voltage and clocks set; even then, no game I know of stresses the CPU that much.

Are you sure you actually have flow through the loop? If the output fitting is extremely hot and the input is only slightly warm or cool after some period of time (say 10 min), that indicates really poor flow through the loop. Temps in the loop should equalize rather quickly.

Do you have a flow meter at all? Is the pump set to full speed? If the GPU mount is good, that's what my guess would be. Is the pump dead? Lol


----------



## Lobstar

ViRuS2k said:


> <Not a single mention of fans>


Hey, what are your fans doing? The O11 XL kinda sucks for airflow. Have you tried just cranking all of your fans to 100%?


ViRuS2k said:


> but i am begining to think that my gpu and mp5 works are whats causing my high cpu temps as there is no radiator inbetween that lot


Loop order doesn't matter.

Edit: I just looked at that backplate thing... please tell me you don't have the parallel version set up in serial. If so, we've found your restriction. For those who, like me, had no clue what this thing is:


----------



## venturi

BIOS naming and numbering scheme, for my 3090 FEs.

What is the latest BIOS in the scheme? The naming convention doesn't help me, and all the BIOS dates say Sep 1, 2020.

94.02.27.00.0a (the one the cards came with)

94.02.32.00.02 (all my cards are now on this one)

94.02.4b.00.0b ?

I'm trying to figure out where the 4b BIOS falls in the order.

Thanks in advance




Current build:
2x 3090 RTX Founders Edition & SLI / NvLink bridge bios 94.02.4b.00.0b
2x 8280L (56/112 cores, Asus c621 Sage Dual socket motherboard) bios 9904 Resizable Bar Enabled
1.5 TB ram. 2933 DDR4 ECC LRDIMMs 1600W digital power supply
4x Raid Samsung 860 pro (4TB each, 16TB total) data and backups, 1x Samsung 860 pro (4TB) data
1x Sabrent 8TB nvme for Apps and Games, 1x Sabrent 4TB nvme (OS)
Asus PA32UCX-P monitor bios 105, SFF case, MS Data Center 2020


----------



## yzonker

Lobstar said:


> Hey, what are your fans doing? O11XL kinda sucks for air flow. Have you tried just cranking all of your fans to 100%
> 
> Loop order doesn't matter.
> 
> Edit: I just looked at that backplate thing ... please tell me you don't have the parallel version setup in serial. If so, we've found your restriction. For those who like me had no clue what this thing is:


Before I got to this post I was already pretty certain it's a flow problem. The outlet should never feel hot with 35°C water; just barely warm to the touch, at most.


----------



## PLATOON TEKK

The new Afterburner beta finally adds memory overclocking beyond +1500 on 30-series cards.

4.6.4 Beta 2 (Guru3d)

Seeing as I get the best results from AB, this could be good.


----------



## Falkentyne

venturi said:


> BIOS naming and numbering scheme.
> for my 3090's FE.
> 
> 94.02.27.00.0a (the one the cards came with)
> 
> 94.02.32.00.02
> 
> 94.02.4b.00.0b ?
> 
> I'm trying to figure out where in the order is the 4b bios
> <snip>


Where did you get this vbios?
94.02.4b.00.0b ?

Isn't on the TechPowerUp database. Any chance you can create a BIOS dump with the _latest_ version of GPU-Z and either upload it to TechPowerUp or post it here? (To post it here as an attachment, you need to RENAME it to TXT, something like "GA102.ROM.TXT".)

4B is a larger number than "32", as these fields are in hexadecimal format, so it's newer.

_Edit:_ Nvm, I found it under unverified uploads. Can't open the file in any ABE version, however.
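For anyone wanting to sort these version strings themselves, here is a small Python sketch. Each dotted field is treated as hexadecimal, which is why "4b" (75 decimal) outranks "32" (50 decimal):

```python
# Order NVIDIA vBIOS version strings by parsing each dotted field as hex.
def vbios_key(version: str):
    """Turn '94.02.4b.00.0b' into a tuple of ints for comparison."""
    return tuple(int(part, 16) for part in version.split("."))

versions = ["94.02.27.00.0a", "94.02.32.00.02", "94.02.4b.00.0b"]
latest = max(versions, key=vbios_key)
print(latest)  # 94.02.4b.00.0b -> the 4b BIOS is the newest of the three
```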








TechPowerUp VGA BIOS database: www.techpowerup.com


----------



## Lobstar

Falkentyne said:


> Where did you get this vbios?
> 94.02.4b.00.0b ?


I'm not the original commenter but here: NVIDIA RTX 3090 VBIOS


----------



## ViRuS2k

Thanks, guys, for your help.
Yeah, it could be a flow issue, because I have yet to add a new pump to this distro plate. The old one was acting up, then came good again; it's possible it's still broken. Even though it reports 3000 RPM, it might not actually be running at that.

I plan on draining the system, adding a 420mm Monsta rad to the top of my O11 Dynamic, and adding a new pump.
I wish there were a way to add a different pump to this distro plate, as I have a 2x serial D5 pump but no way to plumb it in.
I wish there were a way to buy a distro plate top with G1/4 in and out that could replace the **** pump that's already there, lol. But this new pump has better head pressure, so it should be better... and the MP5WORKS is the serial version, not the parallel one; mine has the G1/4 fittings...

Here is a current image of my setup.
Ignore the 1080 Ti; it's there for mining purposes, lol.










The rads are 2x CoolStream XL 360 radiators, the very thick ones.
Though I think I'm going to plumb in another 420mm Monsta radiator, or a 920mm Nova radiator, which is 2x 360mm in one big radiator.

Something is very wrong with my system to be getting such awful temps, though. The graphics card shows a 29°C water temp currently, with mining enabled as well, and the leftmost fitting on the graphics card is very warm to the touch while the other one is completely cold.


----------



## mirkendargen

ViRuS2k said:


> <snip>
> though something is very wrong with my system to be getting such awfull temps, graphics card shows 29c watertemp currently and with mining enabled aswell that fitting on the graphcis card left most one is very warm to touch but the other one is completely cold


A hot fitting and a cold fitting is clear evidence of a severe flow problem; all fittings should feel about the same temperature. You might need more pump, or you might just need to prime/bleed your loop better.
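The physics behind that advice can be sanity-checked with the basic coolant-heating relation ΔT = P / (ṁ·c_p). The wattage and flow numbers below are illustrative, not measurements from this build:

```python
# Rough sanity check: water temperature rise across a block at a given flow.
# delta_T = P / (m_dot * c_p); assumes all block heat goes into the coolant.
C_P_WATER = 4186.0  # J/(kg*K), specific heat of water

def delta_t_celsius(power_w: float, flow_l_per_min: float) -> float:
    m_dot = flow_l_per_min / 60.0  # kg/s, since 1 L of water is ~1 kg
    return power_w / (m_dot * C_P_WATER)

# A 350 W GPU block at a healthy 1 L/min warms the water only ~5 C,
# so inlet and outlet fittings should feel nearly the same.
print(round(delta_t_celsius(350, 1.0), 1))   # ~5.0
# At a badly restricted 0.1 L/min the delta balloons to ~50 C,
# which is how you end up with one hot fitting and one cold one.
print(round(delta_t_celsius(350, 0.1), 1))   # ~50.2
```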


----------



## jomama22

ViRuS2k said:


> <snip>
> i wish there was a way i could add a different pump to this distro plate as i have 2x serial D5 pump but no way to plumb it in


You can always add pumps wherever makes sense within the loop. They are just recommended right after the res because that makes it easier to keep them from running dry during a fill.

If you put them somewhere else in the loop, just leave them off while filling (while the distro plate pump stays on), and once the loop is well filled, turn them on and top up the res/distro plate.


----------



## jura11

Hi @ViRuS2k 

These temperatures are way too high for a GPU under water; a single D5 shouldn't struggle with such a loop. Is it a genuine D5 (Lowara), or a Bykski or Barrow copy?

I recently built a loop in a Lian Li O11 Dynamic XL, and I hated that case. I used just a 360mm, 30mm-thick radiator on top and a 240mm, 30mm-thick radiator on the bottom, and temperatures with the XOC 1000W BIOS never broke 42-45°C with fans running at 1200 RPM. That's with +105MHz on the core and +1200MHz on the VRAM.

VRAM temperatures in that case were in the 80s; sometimes I saw them in the low 90s.

I don't think adding an extra radiator will help. I would disassemble the loop, try running without the MP5WORKS backplate, and compare temperatures; that should be easier than checking everything.

Or you can try running the pump at maximum speed and see whether temperatures change.

In my loop with two RTX 3090 GamingPros, both running the XOC 1000W BIOS, temperatures won't break 36-38°C in rendering. VRAM temperatures won't break 60°C on the top card, and on the bottom one they hit 72-75°C max. In a one-hour mining test, the top card's VRAM sits at 68-72°C and the bottom card's at 82-84°C max; core temperatures are 31°C on the top card and 28°C on the bottom. For VRAM OC, I'm running +1250MHz on the top card and +875MHz on the bottom one.

Hope this helps 

Thanks, Jura


----------



## J7SC

ViRuS2k said:


> <snip>
> i wish there was a way i could add a different pump to this distro plate as i have 2x serial D5 pump but no way to plumb it in


...Lots of good advice here already, but I'd like to add something: I almost always build loops with multiple D5s physically distributed along the loop. The key is not to let them run dry, as others have observed. Also, if you suspect a blockage in an existing rad/block arrangement, or want to add new rads and other components, always flush them very thoroughly first; I have never had a new rad from any manufacturer that didn't have some crud residue in it from manufacturing.

...With complex loops, I started to pre-fill big rads and all but one connecting tube before making the final connections to the res/pump and closing the loop. This helps a lot with bleeding a complex loop with multiple blocks, rads and pumps. When I am finally ready to start it up for leak-testing and bleeding air bubbles, I connect just one pump (the one right below the reservoir) to an external 12V source (an old mini mobo with its own PSU). There will likely be a lot of gurgling and air moving through, but after a few minutes I connect the remaining D5(s). This makes filling and air-bleeding a big loop with multiple components much easier, IMO.


----------



## mirkendargen

ViRuS2k said:


> guys i think i have a problem with the cooling of my graphics card. 3090 msi trio x.
> <snip>


Also, do you have any quick-disconnect fittings? Mine will sometimes not open completely even when fully screwed back together after a disconnect, and I need to partially loosen and then retighten them to get full flow.


----------



## venturi

Lobstar said:


> I'm not the original commenter but here: NVIDIA RTX 3090 VBIOS


Thank you. That last BIOS, for me, came with a new warranty-replacement card I received just two days ago.
As there are multiple GPUs involved, I like them all to have the same BIOS.

Incidentally, the warranty-replacement card came with Resizable BAR enabled.


----------



## KingKnick

Hi, any news on a 1000W vBIOS with Resizable BAR? :/


----------



## des2k...

J7SC said:


> ...lots of good advice here already, but I like to add s.th. I almost always build loops with multiple D5s 'physically distributed' along the loop. Key is not to have them run dry, as others observed already. Now, if you suspect blockage in an existing rad / block arrangement or want to add new rads and other components, always flush them very thoroughly first - I never had a new rad by any manufacturer which didn't have some crud residue in it from manufacturing.
> 
> ...with complex loops, I started to 'pre-fill' big rads and all but one connecting tube before making the final connections to the res / pump and closing the loop. This helps a lot with bleeding a complex loop with multiple blocks, rads and pumps. When I am finally ready to start it up for leak-testing and bleeding the air bubbles, I just connect one pump (the one situated right below the reservoir) to an external 12V source (an old mini mobo with its own PSU). There likely will be a lot gurgling and air moving through, but after a few minutes, I connect the remaining D5(s). This makes filling and air-bleeding a big loop with multiple components much easier IMO.


I only started using a D5 recently, and this pump at 12V is seriously overhyped. The impeller is good, because they are individually balanced with weights, but the head pressure is complete garbage.

I replaced my cheap $20 China pump, rated for 500 L/h with 5m of head, with a 1300 L/h, 3m-head D5, and on my restrictive loop it made very little difference to flow. I had to add my cheap pump back in to increase flow and bleed out the air.

Maybe the EK at 24V (1600 L/h, 5m head) will make a difference, but I'm still waiting for my second top and the 24V step-up board.

I still think a bigger external (marine) pump, $200 on AliExpress, would have been better value; they are rated at 6000 L/h and run on 120V (wall power), versus buying 4+ D5s, tops and fittings to fix a restrictive / big-rad setup.

I have x4 xpc, GPU and CPU blocks, two Freezemod 360s, a 45mm ocool 240, a Corsair XR5 240 and a Corsair XR7 360. Flow is 0.8 L/min; with the D5 added, 1.6 L/min; adding my cheap China pump, I'm at 2.2 L/min.

I feel like I need to go back to my parallel runs to increase flow. The only component that is easy on flow is the 45mm ocool rad; everything else kills the flow, especially the XR7 360.
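That 0.8 → 1.6 → 2.2 L/min progression matches the textbook picture: pumps in series add head at a given flow, and the loop settles where the combined pump head equals the loop's pressure drop, so a second pump helps but never doubles flow. A toy model with made-up curve numbers (not real D5 or loop data):

```python
# Toy model of why a second pump in series helps a restrictive loop:
# pumps in series add head at a given flow; the loop settles where
# combined head equals the loop's (roughly quadratic) pressure drop.
# The curve shapes and constants are illustrative, not measured.

def pump_head(q, max_head_m, max_flow_lpm):
    """Linear approximation of a pump curve: head falls to 0 at max flow."""
    return max(0.0, max_head_m * (1 - q / max_flow_lpm))

def loop_drop(q, k=0.5):
    """Quadratic loop resistance: pressure drop ~ k * q^2 (metres of head)."""
    return k * q * q

def operating_point(pumps, step=0.001):
    """Step flow up until combined pump head no longer exceeds loop drop."""
    q = 0.0
    while sum(pump_head(q, h, f) for h, f in pumps) > loop_drop(q):
        q += step
    return q

d5 = (3.0, 21.7)  # ~3 m max head, ~1300 L/h (= 21.7 L/min) max flow
single = operating_point([d5])
series = operating_point([d5, d5])
print(round(single, 2), round(series, 2))  # series > single, but not doubled
```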


----------



## Shawnb99

des2k... said:


> <snip>
> I still think a bigger external pump(marine)200$ on aliexpress would of been a better value, they are rated at 6000l/h and run on 120v(wall) vs buying 4+ d5, tops,fitings to fix a restricve / big rad setups.


I'd buy an MCP35X2 or two before I'd ever buy any pump off AliExpress. Pumps, like the PSU, are something you should never cheap out on.
If head pressure is an issue, get a DDC pump, or run a pair in series.
If you can afford a 3090, you can afford a decent pump.


----------



## erazortt

KingKnick said:


> Hi, any news between a 1000w vbios with resize bar :/??


I think you could flash the old BIOS without ReBAR and then use the newest Precision X1 from EVGA to enable ReBAR on it.


----------



## Pepillo

erazortt said:


> I think you could flash the old BIOS without reBAR and then use the newest precision x from evga to enable reBAR on it.


That's what I did with the 520W Kingpin BIOS and it went well, but I understand it doesn't work with the 1000W one; it gives you a 450W one instead.


----------



## J7SC

des2k... said:


> I only started using d5 recently and this pump at 12v is seriously over hyped. The impelar is good because they are individually balanced with weights but head pressure is complete garbage.
> <snip>


...Well, I've had most of my D5s (around C$80 at the time) for 8+ years, and just added a couple of 'partial D5s' (no top, i.e. for certain res and distro setups). According to not only my own observations but der8auer's as well, two D5s in a loop are a good option for a variety of technical reasons. Anyway, buying a quality D5 means getting a steady performer for many years to come. If I really wanted to go nuts, I'd buy an electric automotive water pump along with some appropriately industrial-sized rads, but not from AliExpress...


----------



## long2905

Guys, it looks like Bykski has also released a full-cover block, per the screenshot below. Anyone plan to get one?


----------



## jura11

long2905 said:


> guys it looks like byski also released a full cover block as per the screenshot below. anyone plan to get one?
> View attachment 2486419


I'll probably be getting one, or rather two; I want to try it. But first I want to test Thermalright and Gelid Extreme pads on my RTX 3090 GamingPros and see if temperatures improve.

Hope this helps 

Thanks, Jura


----------



## Lobstar

I've actually wanted to install a 110/120v water pump with my MORA3 for a while. What are some examples that support 100% running time and 100% duty cycle? Thanks in advance


----------



## bmagnien

The AB beta with the extended memory OC (now goes to +2000) has helped me a bit. I was hard-capped at the previous +1500 limit and can now gain steady improvements up to +1800 before hitting artifacting/score degradation. Here's a completed bench with no memory dips (avg mem clock = mem clock): I scored 15,415 in Port Royal.


----------



## J7SC

Lobstar said:


> I've actually wanted to install a 110/120v water pump with my MORA3 for a while. What are some examples that support 100% running time and 100% duty cycle? Thanks in advance


In case you were referring to my earlier post, I was thinking about automotive pumps (12V/24V/48V), but for 110/120V, check out quality aquarium pumps for 100% duty cycle.


----------



## mirkendargen

J7SC said:


> In case you were referring to my earlier post, I was thinking about automotive pumps (12v/24v/48v) but in terms of 110/120v, check out quality aquarium pumps for 100% duty cycle


Iwaki pumps are ****ing beasts, but pricey and a bit noisy. If you're doing a remote rad setup (rads in the basement like me or similar) where noise doesn't matter, they're a great choice.


----------



## Lobstar

Well, I just got a Little Giant pump from Amazon and ordered the appropriate fittings from the McMaster-Carr catalog to adapt it to G1/4.


----------



## Nico67

yzonker said:


> Yes you can flatten the rest of the curve easily in AB. Here's my writeup from a while ago:
> Rtx 3000 series undervolt discussion (hardforum.com)


That's how I have been doing my curve also, but I also use Ctrl+L to lock the voltage point so it doesn't go up more than a bin or two from the fixed frequency. The only trouble I find is that you set the curve at a given temperature, but once the card rises to gaming temps the curve shifts and is no longer flat. And if you set it flat in the gaming temp range, it's not flat at idle, which can cause issues as well.
I wish they would use one curve and just drop bins for either power (as they do) or temperature.
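The "flatten the curve" undervolt being discussed amounts to a simple data transformation: pick a voltage point, keep its frequency, and cap every higher-voltage point to that frequency. A sketch with invented (mV, MHz) pairs, not real Ampere curve points:

```python
# Illustrative sketch of flattening an Afterburner-style V/F curve:
# every point at or above the chosen voltage is capped to that point's
# frequency, so the card never requests more voltage for higher clocks.
curve = [(800, 1700), (850, 1800), (900, 1900), (950, 1980), (1000, 2040)]

def flatten(curve, target_mv):
    """Cap all frequencies to the frequency at target_mv."""
    target_mhz = dict(curve)[target_mv]
    return [(mv, min(mhz, target_mhz)) for mv, mhz in curve]

flat = flatten(curve, 900)
print(flat)  # every point at/above 900 mV now runs 1900 MHz
```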


----------



## GRABibus

PLATOON TEKK said:


> New Afterburner beta finally adds memory overclocking over 1,500 on 30 series cards
> 
> 4.6.4 beta 2 (Guru3d)
> 
> Seeing as I get best results from AB, could be good.


Thanks!
It will be useless for me, as +1000MHz is the max I can put on my Strix.
Anything beyond that means artifacts or a crash.

Clearly I have bad memory chips on this Strix.


----------



## Beagle Box

GRABibus said:


> thanks !
> Will be useless for as +1000MHz is the max I can put on my Strix.
> Everything beyond => artefacts or crash.
> 
> clearly bad memory chips I have on this strix.


Bad memory chips or poor cooling? Stock cooler or waterblock? The stock cooler on my Strix OC was installed very poorly.
Water-cooled, it runs very well.
But the backside memory still gets very hot when overclocked.


----------



## J7SC

Beagle Box said:


> Bad memory chips or poor cooling? Stock cooler or waterblock? The stock cooler on my Strix OC was installed very poorly.
> Water-cooled, it runs very well.
> But the backside memory still gets very hot when overclocked.


...Yeah, temps, especially on the backside memory, play a huge role in OC headroom. The GDDR6X chips are rated at 1313 MHz on the Strix (and other cards, I reckon), but most vendors seem to underclock them for the default setting, likely due to heat.

...What backplate are you running with your water-cooled Strix?
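For reference, the arithmetic tying together the spec table at the top of the thread: GDDR6X moves 16 bits per memory clock, so the 3090's 1219 MHz gives 19504 MT/s and, over a 384-bit bus, the quoted 936 GB/s. The offset-to-data-rate mapping in the last line is the commonly cited Afterburner convention, not an NVIDIA-documented figure:

```python
# RTX 3090 stock memory bandwidth from the spec-table numbers.
BUS_BITS = 384  # memory bus width in bits

def bandwidth_gbs(data_rate_mtps: float) -> float:
    """Bandwidth in GB/s for a given data rate in MT/s."""
    return data_rate_mtps * BUS_BITS / 8 / 1000

stock = bandwidth_gbs(1219 * 16)   # 1219 MHz x16 -> 19504 MT/s
print(round(stock))                # 936 GB/s, matching the spec table
# A +1000 Afterburner offset is commonly taken as ~+2000 MT/s effective:
print(round(bandwidth_gbs(19504 + 2000)))  # ~1032 GB/s
```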


----------



## GRABibus

I have the stock cooler.

I get max 14,600 pts in PR under these conditions:

Stock BIOS with BAR
+160MHz on core
+1000MHz on memory
Voltage slider at 100% in MSI AB
Power slider at 123% (maximum)
Fans at 100%
22°C ambient
Open PC side panel
A fan at 2000 RPM on the backplate (pushing)
A fan at 100% in front of the right side of the card (pushing)

Max power draw in GPU-Z: 480W
Max temp in GPU-Z: 70°C
Average temp in 3DMark: 59°C
Max frequency in 3DMark: 2100MHz
Average frequency in GPU-Z: 2025MHz

What do you think?
Bad temps or a poorly boosting GPU?
Both?
Normal temps and a normal score?

The best score I could get is 14,814 pts with the 520W KP BIOS (same configuration as above), but 81°C max on the GPU in GPU-Z.

Is it common to reach 15,000 pts on air?


----------



## Beagle Box

J7SC said:


> ...yeah, temps, especially on the back side memory, play a huge role re. oc headroom. GDDR6X chips are rated at 1313 MHz on the Strix (and other cards, I reckon), but most vendors seem to underclock it for the 'default' setting, likely due to heat.
> 
> ...what backplate are you running with your w-cooled Strix ?


The stock Alphacool Eisblock GPX backplate. It's thick, but not particularly good at cooling.
I've attached SSD heatsinks to it above the RAM chips.
It's good for ~ +1620 OC.


----------



## Beagle Box

GRABibus said:


> <snip>
> is it common to reach 15000pts on air ?


I couldn't reach 15,000 in Port Royal with the stock cooler and stock Strix BIOS; I think I scored around 14,650 or so.
I didn't flash any new BIOSes until I installed the water block. The WB got me to 15,000 on the crappy stock ASUS BIOS.
The MSI Suprim BIOS bumped up the scores.
The EVGA KP520 bumped them up more.
My top PR scores are with the EVGA KP1K BIOS.

I have an old chip on an outdated motherboard, so no ReBAR for me.


----------



## GRABibus

Beagle Box said:


> I couldn't reach 15000 in Port Royal with the stock cooler and stock Strix BIOS. I think I scored around 14650 or so.
> I didn't flash any new BIOSs until I installed the water block. The WB got me to 15000 on the crappy stock ASUS BIOS.
> The MSI Suprim BIOS bumped up the scores.
> The EVGA KP520 bumped them up more.
> My top PR scores are with the EVGA KP1K BIOS.
> 
> I have an old chip on an outdated motherboard, so no re-BAR for me.


Yes, water is the key, especially for chips that run hot on air but still clock reasonably well. Those gain more from water than chips that already run cool on air.


----------



## Spiriva

New drivers today: 466.11

GeForce 466.11 WHQL driver download: www.guru3d.com


----------



## J7SC

Beagle Box said:


> I couldn't reach 15000 in Port Royal with the stock cooler and stock Strix BIOS. I think I scored around 14650 or so.
> <snip>





GRABibus said:


> yes, water is the key, especially on chips which run hot on air and clock not so bad. In this case water gives the best benefits than chips running cool on air.


My Strix did make it (just...) past 15,000 in PR on the stock air cooler and stock BIOS. Worth noting, though, that while 3DMark PR tends to be the great equalizer re: CPUs, it is still sensitive to things like system RAM settings and perhaps also AVX...


----------



## Beagle Box

J7SC said:


> My Strix did make it (just...) past 15,000 in PR on stock air cooler and stock Bios...worth noting though that while 3DM PR tends to be the great equalizer re. CPUs, it is still sensitive to things like system RAM settings and perhaps also AVX...


I went back and looked. Seems I was able to score 14753 on the stock air cooler with the stock ASUS BIOS.
CPU power still matters. No way my 6 core 8086 is competing with Intel's new silicon. Running my CPU @ 5.3GHz versus 5.0 is good for ~180 pts.
AVX offset definitely makes a difference.
GPU memory overclock has made a difference for me.
System RAM probably makes a difference. Mine's running CL16 @4000.
The 466.11 Driver offers no improvement over 465.89 in PR.


----------



## J7SC

Beagle Box said:


> I went back and looked. Seems I was able to score to 14753 on stock air cooler with the stock ASUS BIOS.
> CPU power still matters. No way my 6 core 8086 is competing with Intel's new silicon. Running my CPU @ 5.3GHz versus 5.0 is good for ~ 180 pts.
> AVX offset definitely makes a difference.
> GPU memory overclock has made a difference for me.
> System RAM probably makes a difference. Mine's running CL16 @4000.
> The 466.11 Driver offers no improvement over 465.89 in PR.


Two quick observations on that:

1.) My stock Strix 3090 OC scored slightly higher in PR on an older Intel HEDT quad-channel (Intel 5960X) than on an AMD 3950X with identical GPU oc and ambient; now finishing up a 5950X/ Dark hero update, but Strix is w-cooled anyway, so no direct comparison.

2.) On my quad channel / HEDT AMD TR 2950X with 2x w-cooled 2080 Tis, I picked up around 300 points in PR just by switching from NUMA to UMA memory mode...got me into the top 15 Port Royal HoF overall...


----------



## GRABibus

J7SC said:


> Two quick observations on that:
> 
> 1.) My stock Strix 3090 OC scored slightly higher in PR on an older Intel HEDT quad-channel (Intel 5960X) than on an AMD 3950X with identical GPU oc and ambient; now finishing up a 5950X/ Dark hero update, but Strix is w-cooled anyway, so no direct comparison.
> 
> 2.) On my quad channel / HEDT AMD TR 2950X with 2x w-cooled 2080 Tis, I picked up around 300 points in PR just by switching from NUMA to UMA memory mode...got me into the top 15 Port Royal HoF overall...


I have a 5900X which boosts well during PR (beyond 5GHz sometimes).
I have 3733MHz 14-15-15-28-1T.

I don't see what I can improve, except going to water, which I won't do.

You won the silicon lottery with your Strix, I think.


----------



## Beagle Box

J7SC said:


> Two quick observations on that:
> 
> 1.) My stock Strix 3090 OC scored slightly higher in PR on an older Intel HEDT quad-channel (Intel 5960X) than on an AMD 3950X with identical GPU oc and ambient; now finishing up a 5950X/ Dark hero update, but Strix is w-cooled anyway, so no direct comparison.
> 
> 2.) On my quad channel / HEDT AMD TR 2950X with 2x w-cooled 2080 Tis, I picked up around 300 points in PR just by switching from NUMA to UMA memory mode...got me into the top 15 Port Royal HoF overall...


I find myself wondering how much I'd gain from 2 more CPU cores and/or more/faster RAM, but I'm well past the point of diminishing returns and nobody is giving away i9-9900k chips anywhere. 
I'm not really looking to invest more $ into a dead socket on a Z370 MB.

As far as GPU memory overclock goes: the difference in PR scores between +0 and +1625 memory OC is 400 pts.


----------



## J7SC

GRABibus said:


> I have 5900X which boosts well during PR (Beyond 5GHz sometimes).
> I have 3733MHz 14-15-15-28-1T.
> 
> I don't see what I can improve, except going to water, which I won't do
> 
> You won the silicon lottery I think with your strix.


...thanks. I know it's a good sample of the Strix...but no matter what kind of sample one ended up with, with modern GPUs (and CPUs) having complex boost algorithms, it always pays to work on the parameters you can impact - cooling, cooling, cooling being the obvious and very rewarding one. Custom BIOSes and/or hard-modding (i.e. shunts) are also options for some folks, though I stay away from those given the dual work/play roles my system has. Cooling, on the other hand, is a no-brainer...especially as the Strix (and other models) can pull around 500W, with much of it focused on an 8nm 'heat pocket'. Then there are the rear VRAM chips on the 3090 that don't enjoy the same cooling the front ones have.

Did I mention cooling ?


Spoiler


----------



## Beagle Box

GRABibus said:


> I have 5900X which boosts well during PR (Beyond 5GHz sometimes).
> I have 3733MHz 14-15-15-28-1T.
> 
> I don't see what I can improve, except going to water, which I won't do
> 
> You won the silicon lottery I think with your strix.


What sort of curves are you using to bench? 
Are you adjusting Voltage? I've found that even my highest runs don't require more than +55 on Voltage. Try bumping it just a little, if any. How much additional Voltage depends on what your controller likes. More is not always better.


----------



## GRABibus

Beagle Box said:


> What sort of curves are you using to bench?
> Are you adjusting Voltage? I've found that even my highest runs don't require more than +55 on Voltage. Try bumping it just a little, if any. How much additional Voltage depends on what your controller likes. More is not always better.


I currently do:

100% on voltage
123% on power
+160MHz on core
+1000MHz on memory
Fans at 100%.

What kind of V/F curve tweaking is advised?


----------



## J7SC

Question on Strix 3090 OC bios update...my mobo bios is all-up-to-date for resizable_BAR as is the latest driver, so I will finally get to updating the Strix Bios...downloaded the RTX3080_v3.exe file (which I presume will have the .rom etc inside for the regular NVFlash steps at the OP / P1 here). 

Have any of you Strix owners who used this had any issues (i.e. black screen), even with CSM disabled in the BIOS? I read a few comments over at ROG where folks had issues. Also, of the two onboard BIOSes (Performance and Quiet), I plan to just update the Performance one - or do we have to do both? Thanks in advance.


----------



## Beagle Box

GRABibus said:


> I currently do:
> 
> 100% on voltage
> 123% on power
> +160MHz on core
> +1000MHz on memory
> Fans at 100%.
> 
> What kind of V/F curve tweaking is advised?


There are many ways to get a high, stable OC. You can temp limit, voltage limit and total power limit with any given core clock curve.
So you can sometimes get the same score using a +0 Voltage slider with a +195 core clock curve as you would with a +100 Voltage slider and a +160 core clock curve.
I would try all my overclocks with the +0 Voltage slider first and slowly increase Voltage. More Voltage means more heat, and I use it only as a last resort.
My highest Port Royal score uses a Voltage % of +51 and a Power Limit of 55. My second highest has Voltage @ +80%. My highest Firestrike score has a Core Voltage of +0%.
And these are all "Legendary" 3DMark scores.

I know at least once in Superposition I've used a huge Core Clock setting of +240 and then lowered the Power Limit slider to stop MHz overshoot.
Of my highest scores, only one has a custom curve. I think it's Port Royal where I used a standard curve and then just flattened everything past 1.081V.
Experiment!
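The "raise the curve, then flatten it past a voltage point" trick can be sketched roughly in code. This is a toy model, not an Afterburner API; the curve points below are made-up example values, not values read from any real card.

```python
# Sketch of the "offset then flatten" V/F curve trick.
# Voltages in volts, frequencies in MHz; the points are illustrative only.

def flatten_curve(curve, offset_mhz, cap_voltage):
    """Apply a core-clock offset, then hold frequency flat past cap_voltage."""
    shifted = {v: f + offset_mhz for v, f in curve.items()}
    # Highest frequency reached at or below the voltage cap.
    cap_freq = max(f for v, f in shifted.items() if v <= cap_voltage)
    # Points above the cap are clamped to that frequency.
    return {v: (f if v <= cap_voltage else cap_freq) for v, f in shifted.items()}

stock = {0.900: 1830, 0.950: 1890, 1.000: 1950, 1.050: 2010, 1.081: 2055, 1.100: 2085}
tuned = flatten_curve(stock, offset_mhz=195, cap_voltage=1.081)
print(tuned[1.081], tuned[1.100])  # 2250 2250 - flat past 1.081 V
```

With the points past 1.081V flattened, the boost algorithm has no reason to request the higher-voltage states, which is the "more Voltage brings nothing but heat" point above.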


----------



## GRABibus

Beagle Box said:


> There are many ways to get a high, stable OC. You can Temp limit, Voltage limit and Total Power limit with any given Core Clock Curve.
> So you can sometimes get the same score by using +0 Voltage Slider with +195 Core Clock Curve as you would a +100 Voltage Slider with +160 Core Clock Curve.
> I would try all my over clocks with +0 Voltage slider and slowly increase Voltage. More Voltage means more heat and I use it only as a last resort.
> My highest Port Royal score uses a Voltage % of +51 and Power Limit of 55. My second highest has Voltage @ +80%. My highest Firestrike score has a Core Voltage of +0%
> And these are all "Legendary" 3dMark Scores.
> 
> I know at least once in Superposition I've used a huge Core Clock setting of +240 and then lowered the Power Limit slider to stop MHz overshoot.
> Of my highest scores, only one has a custom curve. I think it's Port Royal where I used a standard curve and then just flattened everything past 1.081V.
> Experiment!


55% power limit??

I did a test with 50% voltage, 100% power and the curve fixed at [email protected],950Mhz.

I lost 300 points versus the settings in my former post.
My max voltage was 0.950V, as expected.

BUT, I had the same GPU temperature (max and average) as with 123% power, 100% voltage and +160MHz on core.

How is it possible? 😂

PS: by decreasing power, my score drops automatically.
Voltage seems to have no influence on my score.


----------



## Beagle Box

GRABibus said:


> 55% limit power ??
> 
> I did a test with 50% voltage, 100% power and curve fixed at [email protected],950Mhz.
> 
> I lost 300points versus the settings In my former post.
> my max voltage was 0,950V as a expected.
> 
> ‘BUT, I had the same temperature (max and average) on GPU than with 123% power, voltage 100% and +160MHz on core, with a
> 
> How Is it possible ? 😂
> 
> PS : by decreasing power, my score drop down automatically.
> Voltage seems to have no influence on my score
> 
> I


I'm using the 1000W KingPin BIOS, so 55 = 550W power. My point was that higher power does not equate to higher scores. Also true is that a cool 2205MHz will score higher than a hot 2220MHz. Try 123% power and +50% Voltage or even +0% Voltage. Usually, more Voltage brings nothing but heat.

EDIT:
If you're able to run 123% Power, you must be using the original ASUS BIOS. It's garbage. Even the less powerful Suprim X BIOS will run cooler and score higher in PR.
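Since the power slider is a percentage of whatever base limit the flashed BIOS carries, the same slider number means very different watts on different BIOSes. A quick conversion sketch - the base wattages are the approximate figures quoted in this thread, not official specs:

```python
# Convert a power-slider percentage to watts for a given BIOS.
# Base limits are the approximate figures quoted in this thread.
BIOS_BASE_W = {
    "ASUS Strix OC": 390,   # slider maxes at 123% (~480 W)
    "MSI Suprim X": 450,
    "EVGA KP520": 520,
    "EVGA KP1K": 1000,
}

def slider_to_watts(bios: str, percent: float) -> float:
    return BIOS_BASE_W[bios] * percent / 100.0

print(slider_to_watts("EVGA KP1K", 55))              # 550.0 -> the "55 = 550W" above
print(round(slider_to_watts("ASUS Strix OC", 123)))  # ~480
```

So a "55" on the KP1K BIOS is already more power than the Strix BIOS can deliver at its 123% maximum.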


----------



## GRABibus

Beagle Box said:


> I'm using the 1000W KingPin BIOS, so 55 = 550W power. My point was that higher power does not equate to higher scores. Also true is that a cool 2205MHz will score higher than a hot 2220MHz. Try 123% power and +50% Voltage or even +0% Voltage. Usually, more Voltage brings nothing but heat.
> 
> EDIT:
> If you're able to run 123% Power, you must be using the original ASUS BIOS. It's garbage. Even the less powerful Suprim X BIOS will run cooler and score higher in PR.


Thanks for your advice.

Yes, I use the ASUS stock BIOS.
I just made a run with 0% voltage => same scores and same temps as with 100% voltage.

I have already tried the KP 520W BIOS, Suprim X BIOS and ASUS XOC 1000W BIOS. The best result I had was 14814 pts with the KP 520W. I don't remember the core offset; the power slider and voltage were at max, +1000MHz on memory and fans at 100%, with an open PC and additional fans on the card (which I have currently).

I will give the KP 520W another try when I have time and play with voltage and power.


----------



## Beagle Box

GRABibus said:


> Thank for your advices.
> 
> Yes, I use ASUS stock Bios.
> I just made a run with 0% for voltage => Same scores and same temps as with 100% voltage.
> 
> I have already tried KP 520W Bios, Suprim X Bios, ASUS XOC 1000W Bios. The best results I had was 14814pts with KP 520W. Don't remember Core offset, power slider and voltage were at max, +1000MHz on memory and fans at 100%, open PC and additionnal fans on the card (Which I have currently).
> 
> I will give a try again when I have time with KP 520W and play with voltage and power.


For me, the MSI Suprim BIOS scored +100 points over the ASUS BIOS.
The KP520 scored +200 over the ASUS BIOS.

When Voltage doesn't matter, you need to concentrate on your curve and temp and maybe even power. Sometimes setting the Core Clock Curve higher and then flattening it out helps. 
Good luck.


----------



## GRABibus

Beagle Box said:


> For me, the MSI Suprim BIOS scored +100 points over the ASUS BIOS.
> The KP520 scored +200 over the ASUS BIOS.
> 
> When Voltage doesn't matter, you need to concentrate on your curve and temp and maybe even power. Sometimes setting the Core Clock Curve higher and then flattening it out helps.
> Good luck.


OK, thanks.
I am currently testing this in Cold War:

110% power instead of 123%.
I have a steady 2100MHz at 1.081V.
GPU at 58 degrees and memory at 72 degrees, which is 4 degrees less than when I was at 123%.
Power draw went from 380-400W to 370-390W.

I am pretty sure I didn't lose fps. To be checked.


----------



## Beagle Box

GRABibus said:


> ok thanks.
> I am currently experiencing this in Cold War.
> 
> 110% power instead of 123%.
> I have a steady 2100MHz at 1,081V.
> GPU at 58degrees and memory at 72 degrees, this means 4 degrees less than when I was at 123%.
> Power draw went from 380W-400W to 370W-390W.
> 
> I am pretty sure I didn’t loose fps. To be checked


I wouldn't be surprised if you lost no fps. 
When not benching, I run the MSI Suprim X BIOS for gaming and every day use. I just flip my BIOS switch and reboot into it. I suggest you try whatever settings work best using the ASUS BIOS with the Suprim X BIOS (taking the 450W Suprim limit vs the ASUS 480 limit into account and adjusting the power limit accordingly). You may experience better playability, lower heat and overall superior performance despite the lower power max.


----------



## cx-ray

J7SC said:


> Have any of you Strix owners who used this any issues (ie. blackscreen), even with CSM disabled in Bios ? Read a few comments over at ROG where folks had issues. Also, of the two onboard bios (Performance and Quiet), I plan to just update the Performance one, or do we have to do both ? Thanks in advance


No black screens here. My STRIX card also continues to work on a mobo that doesn't support ReBAR (used it on X299 + ReBAR and Z270 no ReBAR, CSM disabled, UEFI Windows 10 install, don't know what happens with CSM enabled). I had to manually extract the update however and use NVFlash. For whatever reason the ASUS 3090 VBIOS update util never worked on my card.


----------



## J7SC

cx-ray said:


> No black screens here. My STRIX card also continues to work on a mobo that doesn't support ReBAR (used it on X299 + ReBAR and Z270 no ReBAR, CSM disabled, UEFI Windows 10 install, don't know what happens with CSM enabled). I had to manually extract the update however and use NVFlash. For whatever reason the ASUS 3090 VBIOS update util never worked on my card.


Thanks ...I've flashed a lot of NVIDIA cards before, but with the most recent mobos/BIOSes and GPUs there are a lot more parameters/settings which may have an impact. Then there were several Strix owners over at ROG's forum who had big problems like black screens (most likely re. CSM). I plan to extract the .rom from the .exe file and do the regular NVFlash routine.


----------



## GRABibus

Hello guys,
i read this EVGA thread :






Okay lets hear it Highest Port Royal 3090 Kingpin Scores! (Page 9) - EVGA Forums (forums.evga.com)





They all refer to the Classified tool.

Do you know it? (I assume yes.)

Would it be worth testing for a score increase?


----------



## jomama22

GRABibus said:


> Hello guys,
> i read this EVGA thread :
> 
> 
> 
> 
> 
> 
> Okay lets hear it Highest Port Royal 3090 Kingpin Scores! (Page 9) - EVGA Forums (forums.evga.com)
> 
> 
> 
> 
> 
> They all refer to Classified tool.
> 
> do you know it ? (I assume yes).
> 
> should be interested to test for scores increase ?


It doesn't work with any cards except a KP. If you want voltage control you will have to use an EVC2SX connected either to the built-in I2C headers or directly to the microcontrollers themselves.


----------



## Beagle Box

GRABibus said:


> Hello guys,
> i read this EVGA thread :
> 
> 
> 
> 
> 
> 
> Okay lets hear it Highest Port Royal 3090 Kingpin Scores! (Page 9) - EVGA Forums (forums.evga.com)
> 
> 
> 
> 
> 
> They all refer to Classified tool.
> 
> do you know it ? (I assume yes).
> 
> should be interested to test for scores increase ?


Yeah, that tool only works on the King Pin.
And from the scores shown, even with the tool, I'm thinking King Pin GPUs aren't anything special.


----------



## GRABibus

Beagle Box said:


> Yeah, that tool only works on the King Pin.
> And from the scores shown, even with the tool, I'm thinking King Pin GPUs aren't anything special.


Yes - a big part of it is that EVGA FTW3 and Kingpin cards are power limited (unbalanced power draw across the 3x 8-pin connectors).


----------



## Lobstar

GRABibus said:


> yes, big part of EVGA FTW3 and Kingpin are power limited (unbalanced power on the 3x8 pin connectors)


There is a swap program for FTW3U cards with the issue. Mine came back great.


----------



## Beagle Box

Lobstar said:


> There is a swap program for FTW3U cards with the issue. Mine came back great.


How does the 'after' performance compare to the 'before' performance?


----------



## Lobstar

Beagle Box said:


> How does the 'after' performance compare to the 'before' performance?


I couldn't hold 1950MHz on my unbalanced card without hitting the power limit. I now hold 2100MHz no problem. I haven't been testing too much since my water block is on its way; I'll spend the time tweaking it under water instead. I've been running the KPE 520W BIOS.


----------



## joyzao

Hi Guys

Could you help? I bought the PNY RTX 3090 (2x 8-pin). Which BIOS could I use on it - is it possible to use a 3x 8-pin BIOS, or do I have to stay with a 2x 8-pin BIOS? What would be the best option, the Galax BIOS? Can I flash it without any problems? Thanks


----------



## nordschleife

joyzao said:


> Hi Guys
> 
> Could they help? I bought the rtx 3090 pny (2 x8pin) which bios could use in it, is it possible to use the 3x8-pin? Or do I have to stay in the 2-pin bios? what would be the best option, the galax bios? Can I do it without any problems? Thanks


All good? Man, the KFA2/Galax 390W BIOS with ReBAR was the best one I've tested so far.

I scored 14 530 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x1, 65536 MB, 64-bit Windows 10) - www.3dmark.com

Before that I had a good Gigabyte 390W one too, but it disabled a DisplayPort for me.

Now, on the PNY I don't know how the ports are laid out, but people generally test the 2x 8-pin BIOSes a lot and I've never seen anyone have problems beyond losing a port due to different configurations.

I haven't tested any 3x 8-pin BIOS. I THINK some people use them, but I don't know the schematics and didn't want to risk it.

@jura11 tested some 2x 8-pin BIOSes, he probably has some good advice too.

Cheers

(DCO/Badday here)


----------



## des2k...

...


Beagle Box said:


> Yeah, that tool only works on the King Pin.
> And from the scores shown, even with the tool, I'm thinking King Pin GPUs aren't anything special.


What scores are you looking at? Pretty sure most top scores on the board are with the Kingpin.

You need that tool for LN2 - you're not reaching 2.5GHz+ on the core without it 😁


----------



## Beagle Box

des2k... said:


> ...
> 
> what score are you looking at ? Pretty sure most top scores on the board is with the kingpin
> 
> you need that tool for LN2, you're not reaching 2.5ghz+ on the core without it 😁


I was referring to the non-LN2 scores shown on the website cited in that post.
I'd wager most high non-LN2 scores are using the Kingpin BIOS whether the actual GPU is a Kingpin or not.
If you use an EVGA BIOS, your results page shows EVGA as the Vendor. 
Most of my runs on the 3DMark board show 'EVGA' as the Vendor.
Runs using the MSI Suprim X BIOS show MSI as the Vendor.
In the 'Description' box, if my score is Top 20 for my CPU, I note that it's an ASUS Strix OC GPU using the EVGA KP1K BIOS.


----------



## KedarWolf

Peeps think I'm crazy using a 1600W Corsair AX1600i power supply, but I just punched my PC specs in the OuterVision PSU calculator and it came up with 1574W total usage from my PC.

Power Supply Calculator - PSU Calculator | OuterVision


----------



## ViRuS2k

KedarWolf said:


> Peeps think I'm crazy using a 1600W Corsair AX1600i power supply, but I just punched my PC specs in the OuterVision PSU calculator and it came up with 1574W total usage from my PC.
> 
> Power Supply Calculator - PSU Calculator | OuterVision


Not stupid at all - I'm running a Super Flower 2000W PSU.


----------



## Lobstar

KedarWolf said:


> Peeps think I'm crazy using a 1600W Corsair AX1600i power supply, but I just punched my PC specs in the OuterVision PSU calculator and it came up with 1574W total usage from my PC.
> 
> Power Supply Calculator - PSU Calculator | OuterVision


It says I should have a 1674w psu ... It's really wrong lol.


----------



## ViRuS2k

Lobstar said:


> It says I should have a 1674w psu ... It's really wrong lol.


I think it's wrong also; I just ran the calculations and it's recommending a 3000+W PSU for my system *****


----------



## Beagle Box

ViRuS2k said:


> I think its wrong also, i just ran the calculations and its recommending a 3000+w psu for my system ***


I don't know. My estimate is 945W. I have a 1000W EVGA P2.


----------



## stryker7314

I got a Kill A Watt to see how many watts the entire sig rig was drawing, and it's no more than 650W, usually 550-575W under load...

Why guess when you can get a $30 piece of gear and actually know? Most PCs draw significantly less than these calculators estimate.


----------



## Lobstar

My UPS tells me. My rig hits like 800-900W at max, depending on what I'm doing.


----------



## jura11

joyzao said:


> Hi Guys
> 
> Could they help? I bought the rtx 3090 pny (2 x8pin) which bios could use in it, is it possible to use the 3x8-pin? Or do I have to stay in the 2-pin bios? what would be the best option, the galax bios? Can I do it without any problems? Thanks


Hi there,

I wouldn't use a 3x 8-pin BIOS on a 2x 8-pin GPU; it will lower scores, and in theory you will hit just 350-365W, so there's no point using such a BIOS on your GPU.

The best BIOS for a 2x 8-pin RTX 3090, I would say, is the Gigabyte 390W BIOS, but just a bit of caution: you will lose one DP port. If you are okay with that, then I would use it. The other BIOS which I have used and tested is the KFA2 390W BIOS; someone already posted a version here with the ReBAR patch applied.

If you have good cooling or you are running under water, then I would go with the KPE XOC 1000W BIOS. I'm using this BIOS on my RTX 3090 GamingPros with no issues; for gaming, a 75% power limit is more than enough and temperatures are awesome. There are only a few games which will use more than 75% on the KPE XOC BIOS.

In benchmarks I usually see 85% max - which at such a power limit will pull 590W?

Yes, the Galax and KFA2 390W BIOSes are safe and I run that BIOS without any issues.

Hope this helps.

Thanks, Jura


----------



## jura11

KedarWolf said:


> Peeps think I'm crazy using a 1600W Corsair AX1600i power supply, but I just punched my PC specs in the OuterVision PSU calculator and it came up with 1574W total usage from my PC.
> 
> Power Supply Calculator - PSU Calculator | OuterVision


I don't think that PSU calculator is trustworthy. Have a look: my PC has around 10 HDDs (mixed 4TB, 3TB, 8TB and 6TB), 2x 2TB SSDs and 2x 1TB SSDs, two RTX 3090 GamingPros with the KPE XOC BIOS capped at 65% for rendering and 75% for gaming, 38 fans in total, an Aquaero 6 XT, etc. At idle the PC draws 270-370W from the wall, and at full rendering load I have seen 1200-1300W max.

I would rather get a Kill A Watt wall meter; it's a great thing and you will know for sure how much your PC pulls under load or at idle.

Hope this helps.

Thanks, Jura
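The point about calculators overshooting can be illustrated with a toy model: online calculators tend to sum worst-case nameplate figures and add a safety margin, while real concurrent draw is lower because not every component peaks at once. All numbers below are illustrative, not measurements:

```python
# Toy model: why PSU calculators recommend more than a wall meter shows.
# Nameplate wattages and utilization factors are illustrative only.
NAMEPLATE_W = {"CPU": 250, "GPU": 450, "drives_and_fans": 120, "board_ram_misc": 100}
TYPICAL_UTILIZATION = {"CPU": 0.5, "GPU": 0.9, "drives_and_fans": 0.4, "board_ram_misc": 0.6}

def calculator_style_estimate(parts, headroom=1.3):
    # Worst case: everything at 100% simultaneously, plus a safety margin.
    return sum(parts.values()) * headroom

def typical_gaming_draw(parts, utilization):
    # More realistic: each component at its typical gaming utilization.
    return sum(w * utilization[k] for k, w in parts.items())

print(calculator_style_estimate(NAMEPLATE_W))                 # ~1196 W "recommended"
print(typical_gaming_draw(NAMEPLATE_W, TYPICAL_UTILIZATION))  # ~638 W actually drawn
```

A wall meter like the Kill A Watt measures the second number directly, which is why it usually reads far below what the calculators recommend.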


----------



## J7SC

Lobstar said:


> It says I should have a 1674w psu ... It's really wrong lol.





ViRuS2k said:


> I think its wrong also, i just ran the calculations and its recommending a 3000+w psu for my system ***


...you guys are in luck 


Spoiler



hehe


----------



## Lobstar

J7SC said:


> ...you guys are in luck
> 
> 
> Spoiler
> 
> 
> 
> hehe
> View attachment 2486740


Twice the fires in half the time!!!


----------



## J7SC

Lobstar said:


> Twice the fires in half the time!!!


...yeah, but you got a REALLY BIG water-pump to put those fires out 'in a flash'


----------



## Lobstar

J7SC said:


> ...yeah, but you got a REALLY BIG water-pump to put those fires out 'in a flash'


Famous last words: I wonder if you can water cool a power supply lol.

In unrelated news, Black Ops Cold War is the first game I've seen use more than 20GB of VRAM. +80/+1448 is totally stable on the KPE 520W BIOS running on my FTW3U on air. +100/+1548 wasn't, but that might change with the card under water, maybe?


----------



## J7SC

Lobstar said:


> *Famous last words: I wonder if you can water cool a power supply lol.*
> 
> In unrelated news, Black Ops Cold War is the first game I've seen use more than 20GB of VRAM. +80/+1448 is totally stable on the KPE520w bios running on my FTW3U on air. +100/+1548 wasn't but that might change putting the card under water maybe?
> (...)


----------



## Zogge

KedarWolf said:


> Peeps think I'm crazy using a 1600W Corsair AX1600i power supply, but I just punched my PC specs in the OuterVision PSU calculator and it came up with 1574W total usage from my PC.
> 
> Power Supply Calculator - PSU Calculator | OuterVision


You are as crazy as me if so


----------



## WillP

Lobstar said:


> In unrelated news, Black Ops Cold War is the first game I've seen use more than 20GB of VRAM. +80/+1448 is totally stable on the KPE520w bios running on my FTW3U on air.
> View attachment 2486753


My 3090 was using over 16GB on Battlefront 2 the other day. +150/+1000 on XOC at 85% power limit under water.


----------



## Benni231990

ASUS has released the Z370/Z390 ReBAR beta BIOS.

I updated and it works.


----------



## emil2424

WillP said:


> My 3090 was using over 16GB on Battlefront 2 the other day. +150/+1000 on XOC at 85% power limit under water.





Lobstar said:


> In unrelated news, Black Ops Cold War is the first game I've seen use more than 20GB of VRAM. +80/+1448 is totally stable on the KPE520w bios running on my FTW3U on air. +100/+1548 wasn't but that might change putting the card under water maybe?


For the thousandth time ... this is not memory usage but allocation. Some games reserve a large part of the available memory in advance. If you had a GPU with 64 GB of vRAM you probably would have seen 50 GB + "usage". Monitoring programs do not differentiate between actual use and allocation.


----------



## Rhadamanthys

joyzao said:


> Hi Guys
> 
> Could they help? I bought the rtx 3090 pny (2 x8pin) which bios could use in it, is it possible to use the 3x8-pin? Or do I have to stay in the 2-pin bios? what would be the best option, the galax bios? Can I do it without any problems? Thanks


I have the same card as you. I used it with the KFA 390W BIOS first, then switched to the Gigabyte 390W. No issues with either. Be aware there have been reports that when flashing a ReBAR 390W BIOS, the max power limit will be lower than 390W, as opposed to a BIOS without ReBAR that offers the full 390W. That's why I haven't bothered upgrading to ReBAR, also given that it seems to be more of a side-grade anyway.


----------



## Lobstar

emil2424 said:


> For the thousandth time ... this is not memory usage but allocation. Some games reserve a large part of the available memory in advance. If you had a GPU with 64 GB of vRAM you probably would have seen 50 GB + "usage". Monitoring programs do not differentiate between actual use and allocation.


Ok nerd, chill out. No one attacked you lol


----------



## KedarWolf

jura11 said:


> I don't think that PSU calculator is trustworthy, have look my PC have like 10 HDD's(mixed 4TB,3TB, 8TB and 6TB) 2*2TB SSD and 2*1TB SSD, two RTX 3090 GamingPro's with KPE XOC BIOS capped at 65% for rendering and 75% for gaming, 38 fans in total, Aquaero 6XT etc and at idle PC drawing from wall 270-370W and at load in rendering I have seen 1200-1300W as max
> 
> I would rather get Killawat wall meter, its great thing and you will know for sure how much your PC pulls under load or idle
> 
> Hope this helps
> 
> Thanks, Jura


Would the Corsair Link software be good enough to see how much power I'm using in total? I can get a used Kill-A-Watt meter cheap here in Toronto but I don't want to travel halfway across the city to pick it up.


----------



## WillP

emil2424 said:


> For the thousandth time ... this is not memory usage but allocation. Some games reserve a large part of the available memory in advance. If you had a GPU with 64 GB of vRAM you probably would have seen 50 GB + "usage". Monitoring programs do not differentiate between actual use and allocation.


Maybe you'd be kind enough to explain, probably again for the thousandth time, with your 16 posts, why "allocation" varies so widely between applications, and within the same application at different times? Yes, "using" was probably the wrong word to use, and yes I did know it was allocation, as I expect Lobstar did.


----------



## jomama22

WillP said:


> Maybe you'd be kind enough to explain, probably again for the thousandth time, with your 16 posts, why "allocation" varies so widely between applications, and within the same application at different times? Yes, "using" was probably the wrong word to use, and yes I did know it was allocation, as I expect Lobstar did.


Should read this post and then try it for yourself:








MSI Afterburner can now display per process VRAM! (since 4.6.3 Beta 4 Build 15910; www.resetera.com)





The long and short of it is that allocated VRAM is the sum of every running process's requested VRAM buffers, and each process (very likely) isn't actually using its entire buffer.

As of the build above, AB/RivaTuner lets you view not only total VRAM allocation but also a specific process's VRAM usage.

To be clear, the per-process VRAM usage still isn't 100% accurate, as actual VRAM usage is difficult to ascertain from what each process reports, but it is pretty close to what we want in terms of usage.
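The allocation-vs-usage distinction can be modeled in a few lines. This is a purely hypothetical sketch (not how any overlay actually works internally) to show why a reading of 20GB does not mean 20GB of data is actually being used:

```python
# Minimal model of VRAM allocation vs actual usage: a process reserves a
# large buffer up front (what most overlays report) but only touches part.
class VramBuffer:
    def __init__(self, allocated_mb: int):
        self.allocated_mb = allocated_mb  # reported as "usage" by overlays
        self.touched_mb = 0               # closer to the real working set

    def touch(self, mb: int) -> None:
        # Memory actually written can never exceed the allocation.
        self.touched_mb = min(self.allocated_mb, self.touched_mb + mb)

game = VramBuffer(allocated_mb=20_000)  # overlay says "20 GB used!"
game.touch(9_500)                       # data the game really touches
print(game.allocated_mb, game.touched_mb)  # 20000 9500
```

The gap between the two numbers is exactly the "allocation vs usage" confusion discussed above.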


----------



## Lobstar

jomama22 said:


> Should read this post and then try it for yourself:
> 
> 
> 
> 
> 
> 
> 
> 
> MSI Afterburner can now display per process VRAM! (www.resetera.com)
> 
> 
> 
> 
> 
> The long and short of it is that allocated vram is the sum of every running process that is requesting a vram buffer that (very likely) isn't actually using that entire buffer.
> 
> Above, AB/rivatuner now allows you to not only view total vram allocation, but also a specific process's vram usage.
> 
> To be clear, the vram usage/process above still isn't 100% accurate as actual vram usage is difficult to ascertain from what is reported by each process, but it is pretty close to what we want interms of usage.











Shows more this time. I also tried it with bots, which had a lot less 'allocated', so it's probably related to player skins and such. Either way, it's not available for other applications (which aren't even running). Seems like semantics. /shrug


----------



## J7SC

jomama22 said:


> Should read this post and then try it for yourself:
> 
> MSI Afterburner can now display per process VRAM!
> Nov 17th 2020: Per Process VRAM Monitoring is now supported internally, and works in all games. Install MSI Afterburner 4.6.3 Beta 4 Build 15910 (MSIAfterburnerSetup463Beta4Build15910.rar) over here. Enter the MSI Afterburner settings/properties menu, click the monitoring tab (should be...
> www.resetera.com
> 
> The long and short of it is that allocated VRAM is the sum of every running process requesting a VRAM buffer that (very likely) isn't actually using that entire buffer.
> 
> Above, AB/RivaTuner now allows you to view not only total VRAM allocation, but also a specific process's VRAM usage.
> 
> To be clear, the per-process VRAM usage above still isn't 100% accurate, as actual VRAM usage is difficult to ascertain from what is reported by each process, but it is pretty close to what we want in terms of usage.


...to some extent, the confusion between 'actual usage' and 'allocated' VRAM can also result from the terminology used by monitoring software such as MSI AB (older version / Fall 2020). Take the pic below, which I have posted before; it's for FS2020 at 4K / Ultra / Dense Objects on a system with dual 2080 Ti in SLI-CFR, and shows one flight session over both easy and very complex scenery. While it says 'GPU memory usage', I think it probably is allocation. That said, the real kicker in that graph is the increase in allocation / usage as complex scenery bits are displayed, in particular if your VRAM is limited to 10 GB.

..perhaps it is best to think of allocated V-memory as the likely, forecast budget for a building project, and actual usage as the final, audited post-project cost..


----------



## SoldierRBT

Apparently launching soon..


----------



## dr/owned

Driveby post but after learning that HWInfo reports memory temperatures in a recent-ish update, here's my shunted 3090 TUF with an active watercooled backplate (Bykski) while mining:










Memory is overclocked to +1250 with +0.1V (1.5V) on the memory. So probably about as "worst case" as you can get for temperatures, since mining thrashes the VRAM.


----------



## jomama22

Lobstar said:


> View attachment 2486812
> 
> Shows more this time. I also tried it with bots, which had a lot less 'allocated', so it's probably related to player skins and such. Either way, it's not available for other applications (which aren't even running). Seems like semantics. /shrug


Not really semantics. Your example was directly from the game, which I would imagine is going to be quite accurate.

What I posted above was about being able to measure any game's VRAM usage in an accurate way, is all.


----------



## ArcticZero

Hit OCP for the first time with my shunted 3090. Card was pulling ~500w from my Seasonic Prime Gold 1000w, playing FFXV, which seems to be one of the only games that really pushes the power limit. After over an hour of playing, the PC rebooted on its own, so it's either get a new PSU or settle for a lower power limit. 😅

EDIT: Actually I'm not even sure it's an OCP event, since it simply soft rebooted.

EDIT2: I'm actually seeing lots of reports of overly sensitive OCP on older Seasonic PSUs, which mine falls into. Kind of a shame; didn't expect I'd have to get a new PSU already.


----------



## pat182

Benni231990 said:


> ASUS has released the z370/390 rBar beta bios
> 
> I updated and it works


cant find the option in bios after update

nvm got it


----------



## bl4ckdot

Got my HOF, so far so good : NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10900K Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XIII HERO (3dmark.com)


----------



## GRABibus

bl4ckdot said:


> View attachment 2486849
> 
> 
> Got my HOF, so far so good : NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10900K Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XIII HERO (3dmark.com)


That rocks 😉


----------



## jomama22

ArcticZero said:


> Hit OCP for the first time with my shunted 3090. Card was pulling ~500w from my Seasonic Prime Gold 1000w, playing FFXV, which seems to be one of the only games that really pushes the power limit. After over an hour of playing, the PC rebooted on its own, so it's either get a new PSU or settle for a lower power limit. 😅
> 
> EDIT: Actually I'm not even sure it's an OCP event, since it simply soft rebooted.
> 
> EDIT2: I'm actually seeing lots of reports of overly sensitive OCP on older Seasonic PSUs, which mine falls into. Kind of a shame; didn't expect I'd have to get a new PSU already.


That wouldn't cause OCP. I would bet more on a bad OC than anything else.


----------



## KCDC

KedarWolf said:


> Would the Corsair Link software be good enough to see how much power I'm using in total? I can get a used Kill-A-Watt meter cheap here in Toronto but I don't want to travel halfway across the city to pick it up.


It's accurate when there's some sort of load on the system, but at idle it shows 40-50 watts, while my Kill-A-Watt, as well as my UPS, show accurate numbers closer to 300.

You're better off with a Kill-A-Watt or a UPS with a wattage readout, for consistency.


----------



## WillP

jomama22 said:


> Should read this post and then try it for yourself:
> 
> MSI Afterburner can now display per process VRAM!
> Nov 17th 2020: Per Process VRAM Monitoring is now supported internally, and works in all games. Install MSI Afterburner 4.6.3 Beta 4 Build 15910 (MSIAfterburnerSetup463Beta4Build15910.rar) over here. Enter the MSI Afterburner settings/properties menu, click the monitoring tab (should be...
> www.resetera.com
> 
> The long and short of it is that allocated VRAM is the sum of every running process requesting a VRAM buffer that (very likely) isn't actually using that entire buffer.
> 
> Above, AB/RivaTuner now allows you to view not only total VRAM allocation, but also a specific process's VRAM usage.
> 
> To be clear, the per-process VRAM usage above still isn't 100% accurate, as actual VRAM usage is difficult to ascertain from what is reported by each process, but it is pretty close to what we want in terms of usage.


Thanks. 
I was a little offended by the tone of the guy who'd criticised the two of us, so my post was a little sarcastic. 
The point I was making is that the allocation of VRAM is clearly app- and setting-dependent, and so whether the "allocation" is all "used" is a bit arbitrary. The value reported by MSI AB does reflect, indirectly and maybe proportionally, the "use" of VRAM. I find the rate at which VRAM use seems to be going up interesting. While this may be entirely predictable, only 6 months ago I was getting by with an 8GB GPU for gaming, with close to that being allocated regularly, and now, pushing settings on my games with a more capable card, the allocation is over double that.


----------



## J7SC

...finally got around to updating the vBIOS for Resizable BAR; it was all done in 30 sec or so, no drama. Now I can focus on finishing the build, as I just upgraded to a Dark Hero / 5950X and am only waiting on an Amazon delivery tomorrow for some heatsinks for the EK backplate (might upgrade to the active EK backplate once it's available for the Strix, not sure yet).

...on the mobo BIOS, in the 'Advanced / PCIe' subsection below the 4G decode and Resizable BAR options, I left SR-IOV disabled as I'm not running a virtual machine / hypervisor etc. It would be interesting, though, to find out if Resizable BAR works under a hypervisor; that would make good sense in that resource-sharing scenario.


----------



## yzonker

ArcticZero said:


> Hit OCP for the first time with my shunted 3090. Card was pulling ~500w from my Seasonic Prime Gold 1000w, playing FFXV, which seems to be one of the only games that really pushes the power limit. After over an hour of playing, the PC rebooted on its own, so it's either get a new PSU or settle for a lower power limit. 😅
> 
> EDIT: Actually I'm not even sure it's an OCP event, since it simply soft rebooted.
> 
> EDIT2: I'm actually seeing lots of reports of overly sensitive OCP on older Seasonic PSUs, which mine falls into. Kind of a shame; didn't expect I'd have to get a new PSU already.


Possibly, although when I hit OCP on my previous Corsair 750w PSU, it just shut the machine off. Power was still being supplied to the mobo afterwards (LED was on), just like I had shut it down from Windows.


----------



## J7SC

...fyi, Cyberpunk2077 4K RTX Ultra wasn't really a trouble spot before, but it seems even more fluid w/ resizable_BAR


----------



## ArcticZero

jomama22 said:


> That wouldn't cause ocp. I would be more on a bad oc than anything else.


Hmm, why wouldn't it? FFXV always tends to max out my power limits more than any game I currently play/test, and it does so constantly. Running the game with 10% less on the power slider (around 450w) works perfectly, indefinitely. Perhaps it could be the slight undervolt, though historically all I've experienced with a bad GPU OC is driver crashes.

There was no such thing here; it simply rebooted. I read about similar cases on Reddit concerning older Seasonic PSUs and Ampere, due to Ampere being prone to sudden power draw spikes, but yes, from what I'd understood about OCP, you would have to manually switch the PSU back on, and it wouldn't just be a simple reboot.


----------



## jomama22

ArcticZero said:


> Hmm, why wouldn't it? FFXV always tends to max out my power limits more than any game I currently play/test, and it does so constantly. Running the game with 10% less on the power slider (around 450w) works perfectly, indefinitely. Perhaps it could be the slight undervolt, though historically all I've experienced with a bad GPU OC is driver crashes.
> 
> There was no such thing here; it simply rebooted. I read about similar cases on Reddit concerning older Seasonic PSUs and Ampere, due to Ampere being prone to sudden power draw spikes, but yes, from what I'd understood about OCP, you would have to manually switch the PSU back on, and it wouldn't just be a simple reboot.


A VRAM overclock will reboot the system if it becomes unstable. Also, allowing more watts into the GPU will heat it up more, making overclocks that are stable at whatever temp 450w gets you potentially unstable at whatever 500w+ gets you.

Memory junction temperature rises pretty linearly with GPU core temps, as that heat gets dumped into the PCB.

I would just try turning the VRAM down slightly and see what that gets you. It's possible the PSU has issues with transient response, but I don't think that would be the cause.

If you have a Kill-A-Watt to see what your PC is pulling in total, you can get a better idea of what may be happening. If it's only at 800w (which is what I would guess it's near, with the GPU @ 550w or so), I doubt OCP would be tripping.
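The back-of-the-envelope above can be written out. All the numbers below are assumed for illustration, and note that OCP acts on the PSU's DC side while a Kill-A-Watt reads the wall side:

```python
# Rough headroom check with illustrative numbers, not measurements.
gpu_w = 550         # shunted 3090 under load (assumed)
rest_w = 250        # CPU, board, drives, fans (assumed)
efficiency = 0.90   # roughly Gold-rated at this load (assumed)

dc_load_w = gpu_w + rest_w          # what the PSU actually delivers
wall_w = dc_load_w / efficiency     # what a Kill-A-Watt would read

print(f"DC load: {dc_load_w} W, at the wall: ~{wall_w:.0f} W")
# → DC load: 800 W, at the wall: ~889 W
```

800 W of DC load on a 1000 W unit leaves ~200 W of steady-state headroom, so only brief transient spikes would get near the trip point.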


----------



## ArcticZero

jomama22 said:


> A VRAM overclock will reboot the system if it becomes unstable. Also, allowing more watts into the GPU will heat it up more, making overclocks that are stable at whatever temp 450w gets you potentially unstable at whatever 500w+ gets you.
> 
> Memory junction temperature rises pretty linearly with GPU core temps, as that heat gets dumped into the PCB.
> 
> I would just try turning the VRAM down slightly and see what that gets you. It's possible the PSU has issues with transient response, but I don't think that would be the cause.
> 
> If you have a Kill-A-Watt to see what your PC is pulling in total, you can get a better idea of what may be happening. If it's only at 800w (which is what I would guess it's near, with the GPU @ 550w or so), I doubt OCP would be tripping.


I don't OC my VRAM when gaming, so I doubt that was it. I do run it pretty stable at +1250 when mining, though. My memory junction temps while gaming never go above 85c, so that's fine. I've only generally tried a Kill-A-Watt while benching, which saw it draw no higher than 850w on a non-undervolted profile.

From what I've read, it's not necessarily that it reaches the actual 1000w limit, but something about over-sensitivity of OCP due to quick spikes that occur occasionally. I recall @Falkentyne had discussed this issue on some of the threads I saw on Reddit as well, regarding older Seasonic PSUs.

But yes, I do hope that's the case and it isn't OCP or anything. I'll try again running bone stock without an undervolt at similar power limits to see if it happens again. My watercooling budget this billing period means I don't have enough for a newer PSU, which would take precedence if needed.


----------



## sultanofswing

Some of the older Seasonic units were known to have issues with OCP even on the 2080ti.


----------



## EarlZ

sultanofswing said:


> Some of the older Seasonic units were known to have issues with OCP even on the 2080ti.


This is one of the improvements they made with the Seasonic Prime GX/PX/TX series.


----------



## CZonin

Hey everyone. Looking to finally try flashing a different bios on my TUF (non-OC) 3090. Any suggestions? I don't want something with too high of a power limit so I was thinking about trying the Gigabyte Gaming OC.

Also, with the release of new rebar bios for all cards, I'm assuming all the older files available on places like Techpowerup are no longer useful since they are pre-rebar support?


----------



## yzonker

CZonin said:


> Hey everyone. Looking to finally try flashing a different bios on my TUF (non-OC) 3090. Any suggestions? I don't want something with too high of a power limit so I was thinking about trying the Gigabyte Gaming OC.
> 
> Also, with the release of new rebar bios for all cards, I'm assuming all the older files available on places like Techpowerup are no longer useful since they are pre-rebar support?


These are the only 2 I've tried. The Zotac is 385w though.

GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

Zotac RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

Both work on my Zotac 3090 and enable reBAR.

I suspect there are also one or more Gigabyte BIOSes in the unverified list that would work, but I haven't tried any of them.


----------



## GRABibus

CZonin said:


> Hey everyone. Looking to finally try flashing a different bios on my TUF (non-OC) 3090. Any suggestions? I don't want something with too high of a power limit so I was thinking about trying the Gigabyte Gaming OC.
> 
> Also, with the release of new rebar bios for all cards, I'm assuming all the older files available on places like Techpowerup are no longer useful since they are pre-rebar support?


I tried this one on my ASUS Strix: works well, but on the Strix...

Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com

At your own risk...


----------



## yzonker

So I apparently have a very small leak in my loop somewhere. The only evidence I have of it though is a faint smell of the EK blue coolant (sweet smell like car coolant). I haven't even been able to spot any residue from the coolant. It does seem to be localized to the end of my 360 rad with the fittings. One of the lines coming off seems to have the same odor, but no sign of residue. I have a white tissue on both lines right now in hopes of seeing some blue residue, but so far nothing.

So the only options I can think of are:

1) Replace the 360 rad and fittings hoping that's it (really not 100% sure).
2) Buy the EK leak tester (or similar if there are others?) and see if it will show up using that.
3) Punt and just fill it with water with an additive. This is kinda tempting but would prefer to find the leak obviously.

Water level in my reservoir hasn't changed in almost 2 months. _Maybe_ it's dropped a tiny bit, but it's so little that could also be just a little air from a rad or something. (or the fact that I didn't mark it and it hasn't changed at all, just eyeballing it from the filler screw)

Suggestions?


----------



## CZonin

yzonker said:


> These are the only 2 I've tried. The Zotac is 385w though.
> 
> GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
> www.techpowerup.com
> 
> Zotac RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
> www.techpowerup.com
> 
> Both work on my Zotac 3090 and enable reBAR.
> 
> I suspect there are also one or more Gigabyte BIOSes in the unverified list that would work, but I haven't tried any of them.





GRABibus said:


> I tried this one on my ASUS Strix: works well, but on the Strix...
> 
> Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
> www.techpowerup.com
> 
> At your own risk...


Got it, ty! I'll probably wait for someone to confirm something working with the TUF.

Quick question: for cards with a dual BIOS switch, is it as simple as switching to the BIOS I want to overwrite and then following the guide here?


----------



## inedenimadam

SoldierRBT said:


> Apparently launching soon..
> 
> View attachment 2486816


Thank God! It is about time. I was really starting to wonder.


----------



## GRABibus

Best PR score until now: 14869

I scored 14 869 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

BIOS MSI SUPRIM X with BAR, on my STRIX
22°C ambient temperature
Voltage slider at "0%"
Power slider at 107%
+175MHz on core
+1050MHz on memory
Open PC case
All fans at 100% (one fan pushing air onto the backplate, one fan pulling at the rear of the card, at the 8-pin connectors side)

What is very interesting to mention is that this run was made with my 5900X overclocked to 4.7GHz all cores (static OC with 1.4 Vcore).

When I use my 24/7 PBO/CO overclock (see my sig), I lose 150pts!!

I read on some forums about people who lost points by switching from Intel to Ryzen:

5950x slower In Port royal Benchmark then 9900k
I have a problem with my new processor. I had a 9900k before and had 14,700 points with the 3090 FE. With the new Ryzen 5950x I only get 14200 points. I use c15 4000mhz ram and 2000mhz inf. with good timings. My CPU Score is 16900 Points in CPU Score in Timespy. That's fine. But unfortunately I...
www.overclock.net

5950x with 3090kpe lost score over 10900k 3090kpe in port royal help - EVGA Forums
Has anyone found a fix for the 3090 not scoring as well on ryzen 5000? I just received a 5950x and msi meg ace x570 and it scores 600pts lower. Only thing i can figure is my oc on my 10900k is helping it out. 5200 core 4800 cache. The 5950x is nice just useless if i cant use it for benchmarking...
forums.evga.com

I am pretty sure that with an Intel CPU, and memory I could overclock higher (beyond +1050MHz it crashes in benchmarks, and beyond +1000MHz it artifacts in games), I would break 15000pts with no problem...


----------



## jomama22

GRABibus said:


> Best PR score until now : 14869
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 869 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2486900
> 
> 
> Bios MSI SUPRIM X with BAR on my STRIX
> 22°C ambient temperature.
> Slider Voltage at "0%"
> Slider power at 107%
> +175MHz on core
> +1050MHz on memory
> Fans at 100%
> Open PC case
> All fans at 100% (One fan pushing air on the backplate, on fan pulling at the rear of the card (At 8xpin connectors side).
> View attachment 2486902
> 
> 
> What is very interesting to mention, is that this run has been made with my 5900X overclock at 4.7GHz all cores (Static OC with 1.4Vcore).
> 
> When I use my 24/7 PBO/CO overclock (See in my sig), I loose 150pts !!
> 
> I read on some forum people who lost point by switching from Intel to Ryzen's :
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 5950x slower In Port royal Benchmark then 9900k
> 
> 
> I have a problem with my new processor. I had a 9900k before and had 14,700 points with the 3090 FE. With the new Ryzen 5950x I only get 14200 points. I use c15 4000mhz ram and 2000mhz inf. with good timings. My CPU Score is 16900 Points in CPU Score in Timespy.that's fine. But unfortunately I...
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 5950x with 3090kpe lost score over 10900k 3090kpe in port royal help - EVGA Forums
> 
> 
> Has anyone found a fix for the 3090 not scoring as well on ryzen 5000? I just received a 5950x and msi meg ace x570 and it scores 600pts lower. Only thing i can figure is my oc on my 10900k is helping it out. 5200 core 4800 cache. The 5950x is nice just useless if i cant use it for benchmarking...
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> I am pretty sure that with an Intel CPU and a memory that I could overclock higher (Beyond 1050MHz it crashes in benchmarks and beyond +1000MHz it artefacts in games), I would break 15000pts with no problem...


You wouldn't be able to OC higher on Intel. The score difference comes from memory latency. PBO is naturally a bit slower latency-wise than an all-core OC (run AIDA and see for yourself, probably around 0.7ns or so). Also, dual-CCD Ryzens (5900X/5950X) have a latency penalty over single-CCD Ryzens.

Go ahead and disable SMT and one of your CCDs and then bench again, and you'll see an uplift of probably around 150-200 points.

Also, as a general rule, you will usually obtain a "higher" GPU OC on weaker CPUs, as you aren't straining the GPU as much. Granted, this is quite small, but as an extreme example, when benching my 3090 on an Intel 3960X (Sandy Bridge from 2012) I was able to get an average GPU core of around 2100 on stock air cooling / stock BIOS on a 3090 FE. Once I put it into the 5950X system, the average core dropped to around 2010 and was nowhere near stable at the curve I originally set.

Also, you can take a look at my score on the PR leaderboard (somewhere in the 40s) and compare it to the Intels around me clock for clock, and see it's basically identical scaling.
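The latency point can be illustrated with a crude pointer chase (nothing like AIDA's resolution, and interpreter overhead dominates in Python; it only shows the shape of the measurement, where each hop is a dependent, unpredictable load):

```python
import random
import time

def ns_per_hop(n=1_000_000):
    """Walk a random permutation cycle of n nodes and return ns per hop.
    Each load depends on the previous one and lands at a random address,
    which is what defeats the prefetcher and exposes access latency."""
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    # Chain the shuffled nodes into a single cycle: order[k] -> order[k+1].
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    i, t0 = 0, time.perf_counter()
    for _ in range(n):
        i = nxt[i]          # dependent chain: can't be reordered or batched
    return (time.perf_counter() - t0) / n * 1e9

print(f"{ns_per_hop():.1f} ns per hop (Python overhead included)")
```

Dedicated tools do the same thing in tight native code over arrays sized to miss each cache level; the ~0.7 ns PBO-vs-static delta mentioned above is the kind of difference that shows up there.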


----------



## SirCanealot

Hey guys, 

Quick question about the 3090 FE before I start to panic.

I have two 3090 FEs right now, and I was swapping out the thermal pads on the 2nd one like I had done for the 1st. Unfortunately, while taking it apart I pulled the RGB connector off the board (for the life of me, it just would not come out; even now it's off the board, I cannot get the cable out of the connector!). I didn't think anything of it at the time, since I was going to leave it unconnected anyway, but now that I've put it all back together the card is not detected at all. The fans turn on, though...

Any ideas at all? Does damaging the RGB connector stop the card from booting in any way?


----------



## jomama22

SirCanealot said:


> Hey guys,
> 
> Quick question about the 3090 FE before I start to panic
> 
> I have 2 3090 FEs right now, and I was fully swapping out the thermal pads on the 2nd one like I had done for the 1st one. Unfortunately while taking it apart, I pulled the RGB connector off of the board (for the life of me, it just would not come out. Even now it's off the board, I cannot get the cable out of the connector!). I didn't think any of this at the time since I was going to leave it unconnected anyway, but now I've put it all back together the card is not detected at all. Fans turn on though...
> 
> Any ideas at all? Does damaging the RGB connector stop the card from booting in any way?


I would hazard a guess you damaged something else in the area, or ripped traces connected to the connector. Without pictures it's hard to know.

Simply having the RGB connector disconnected wouldn't cause it to fail detection (see having a waterblock on it as an example). So I would say something else is damaged.


----------



## GRABibus

jomama22 said:


> You wouldn't be able to OC higher on Intel. The score difference comes from memory latency. PBO is naturally a bit slower latency-wise than an all-core OC (run AIDA and see for yourself, probably around 0.7ns or so). Also, dual-CCD Ryzens (5900X/5950X) have a latency penalty over single-CCD Ryzens.
> 
> Go ahead and disable SMT and one of your CCDs and then bench again, and you'll see an uplift of probably around 150-200 points.
> 
> Also, as a general rule, you will usually obtain a "higher" GPU OC on weaker CPUs, as you aren't straining the GPU as much. Granted, this is quite small, but as an extreme example, when benching my 3090 on an Intel 3960X (Sandy Bridge from 2012) I was able to get an average GPU core of around 2100 on stock air cooling / stock BIOS on a 3090 FE. Once I put it into the 5950X system, the average core dropped to around 2010 and was nowhere near stable at the curve I originally set.
> 
> Also, you can take a look at my score on the PR leaderboard (somewhere in the 40s) and compare it to the Intels around me clock for clock, and see it's basically identical scaling.


I didn't win so much by disabling SMT and 1 CCD versus a static OC all cores at 4.75GHz with SMT and both CCDs enabled (30 points only):

I scored 14 896 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

BIOS MSI Suprim X BAR F5
BAR disabled
+175MHz on core
+1025MHz on memory
PBO/CO -30 all cores and -0.05V Vcore offset
SMT disabled
1 CCD disabled
Nvidia drivers at default
G-Sync disabled
22°C ambient

I will do a run tonight with the window open in my room (fresh nights currently). I should at least break 14900pts...

Pretty convinced that with a 10900K I would gain at least 150pts more. Not to mention memory: if I could raise it to +1300MHz, for example, then 15k wouldn't be so far.


----------



## jomama22

GRABibus said:


> I didn't win so much by disabling SMT and 1 CCD versus a static OC all cores at 4.75GHz with SMT and both CCDs enabled (30 points only):
> 
> I scored 14 896 in Port Royal
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> View attachment 2486914
> 
> BIOS MSI Suprim X BAR F5
> BAR disabled
> +175MHz on core
> +1025MHz on memory
> PBO/CO -30 all cores and -0.05V Vcore offset
> SMT disabled
> 1 CCD disabled
> Nvidia drivers at default
> G-Sync disabled
> 22°C ambient
> 
> I will do a run tonight with the window open in my room (fresh nights currently). I should at least break 14900pts...
> 
> Pretty convinced that with a 10900K I would gain at least 150pts more. Not to mention memory: if I could raise it to +1300MHz, for example, then 15k wouldn't be so far.


I mean running 1 CCD and SMT off with an all-core OC, not PBO.

And the memory I am referring to is RAM, not VRAM.

Again, take a look at my score. It is absolutely possible to match Intel there with like-for-like GPU clocks. People at 5.5GHz with 4600+ RAM are basically identical to me (those using the same driver as me).


----------



## Nizzen

jomama22 said:


> You wouldn't be able to OC higher on Intel. The score difference comes from memory latency. PBO is naturally a bit slower latency-wise than an all-core OC (run AIDA and see for yourself, probably around 0.7ns or so). Also, dual-CCD Ryzens (5900X/5950X) have a latency penalty over single-CCD Ryzens.
> 
> Go ahead and disable SMT and one of your CCDs and then bench again, and you'll see an uplift of probably around 150-200 points.
> 
> Also, as a general rule, you will usually obtain a "higher" GPU OC on weaker CPUs, as you aren't straining the GPU as much. Granted, this is quite small, but as an extreme example, when benching my 3090 on an Intel 3960X (Sandy Bridge from 2012) I was able to get an average GPU core of around 2100 on stock air cooling / stock BIOS on a 3090 FE. Once I put it into the 5950X system, the average core dropped to around 2010 and was nowhere near stable at the curve I originally set.
> 
> Also, you can take a look at my score on the PR leaderboard (somewhere in the 40s) and compare it to the Intels around me clock for clock, and see it's basically identical scaling.


Did you ever benchmark your 3090 with the Elmor mod on the 5950X vs a 10900K with an Elmor-modded 3090?

I think you compared results from a non-Elmor 3090 + 10900K vs your 5950X + 3090 with Elmor?


----------



## SirCanealot

jomama22 said:


> I would hazard a guess you damaged somthing else in the area or ripped traces connected to the connector. Without pictures it's hard to know.
> 
> Simply having the rgb connector disconnected wouldn't cause it the fail detection (see having a waterblock on it as an example). So I would say something else it damaged.


Thanks for the reply! Now that I've taken a closer look, it seems like there are some little scuffs in the area where I was trying to get the connector out >_<
I was trying to slide a pair of electrical tweezers under the connector like Gamers Nexus did in their teardown, and then the whole thing just came off...

Do you think these could be causing the issue?

I've uploaded some pictures here: 
Edit: If you want me to try and take a better picture, let me know. These look more in-focus on the phone, heh...



http://imgur.com/a/uzI1C6C


If pictures of the front of the board would be useful, please let me know. Thanks!


----------



## GRABibus

jomama22 said:


> I mean running 1ccd and smt off with all core, not pbo.


I did also what you say, no difference


----------



## Falkentyne

SirCanealot said:


> Thanks for the reply! Now I've taken a closer look, seems like there are some lil scuffs in the area where I was trying to get the connector out >_<
> I was trying to slide a pair of electrical tweezers under the connector like Gamers Nexus did in their teardown, then the whole thing just came off...
> 
> Do you think these could be causing the issue?
> 
> I've uploaded some pictures here:
> Edit: If you want me to try and take a better picture, let me know. These look more in-focus on the phone, heh...
> 
> 
> 
> http://imgur.com/a/uzI1C6C
> 
> 
> If pictures of the front of the board would be useful, please let me know. Thanks!


The reason it came off is that you didn't have the latch in the unlock position.
The exact unlock positions are shown at about 5:30-5:34.
Once it's unlocked, tweezers will get the cable out of the socket.
I can already see in your imgur picture that the latch isn't even fully in the unlock position.


----------



## SirCanealot

Falkentyne said:


> The reason it came off is that you didn't have the latch in the unlock position.
> The exact unlock positions are shown at about 5:30-5:34.
> Once it's unlocked, tweezers will get the cable out of the socket.
> I can already see in your imgur picture that the latch isn't even fully in the unlock position.


I did have it unlocked when it was attached to the board. Just wouldn't come out


----------



## jomama22

SirCanealot said:


> Thanks for the reply! Now I've taken a closer look, seems like there are some lil scuffs in the area where I was trying to get the connector out >_<
> I was trying to slide a pair of electrical tweezers under the connector like Gamers Nexus did in their teardown, then the whole thing just came off...
> 
> Do you think these could be causing the issue?
> 
> I've uploaded some pictures here:
> Edit: If you want me to try and take a better picture, let me know. These look more in-focus on the phone, heh...
> 
> 
> 
> http://imgur.com/a/uzI1C6C
> 
> 
> If pictures of the front of the board would be useful, please let me know. Thanks!


Looks like you have a damaged trace down the center there. Not sure what effect that may have, but it's possible you have a short around there now.

Looking at the copper pads, you have exposed copper between the two middle pads. Whether that is stock or not I don't know, but I wouldn't think so. It's possible those two pads are now shorted together.

Also, the damaged trace looks to be cut in that pic as well.


----------



## jomama22

Nizzen said:


> Did you ever benchmark your 3090 with Elmor mod 5950x VS 10900k and 3090 Elmor?
> 
> 
> I think you compared results with 3090 non Elmor + 10900k VS your 5950x + 3090 with Elmor ?


Just going by average clocks is really all I can do. There's no way to know who is using a KP with the Classified tool, or who else is using an EVC. So without that info it's difficult to draw any sort of conclusion one way or the other.









I scored 15 777 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

This is my score without using the EVC, which still seems to fall in line as mentioned above.

Granted, both of these runs were just at normal ambient temps from when the last BIOS had come out, so not really ideal, but no big deal.
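Since raw scores mix CPU and GPU effects, one rough way to compare runs across CPUs is points per MHz of average GPU clock. A minimal sketch of that idea; the run entries below are purely illustrative numbers, not anyone's actual results:

```python
# Normalize Port Royal scores by average GPU core clock so runs on
# different CPUs can be compared on GPU efficiency alone.
# The numbers below are made up for illustration.
runs = [
    {"cpu": "5950X", "score": 15777, "avg_clock_mhz": 2130},
    {"cpu": "10900K", "score": 15800, "avg_clock_mhz": 2160},
]

for r in runs:
    r["pts_per_mhz"] = round(r["score"] / r["avg_clock_mhz"], 2)
    print(r["cpu"], r["pts_per_mhz"])  # 5950X 7.41, 10900K 7.31
```

On these invented numbers the 5950X run is slightly more efficient per clock despite the lower score, which is the kind of comparison average-clock reasoning allows.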


----------



## CZonin

CZonin said:


> Got it, ty! I'll probably wait for someone to confirm something working with the TUF.
> 
> Quick question: For cards with a dual bios switch, is it as simple as just switching to the bios I want to override and then following the guide here?


Ended up flashing this on my TUF (non-OC) and it worked perfectly. I was able to get +115 core, +600 mem, and 105% power target, which gained me 700 points in Port Royal and landed at 14007. I also deshrouded about a week ago and replaced the stock fans with three NF-A9s, so with the new settings and a fairly silent fan curve it's not going over 62C.

Really happy with the results, and it's not too crazy so I can probably leave it as my daily settings.


----------



## jomama22

GRABibus said:


> I did also what you say, no difference


Strange. May just come down to memory OC then, not really sure. I had seen a pretty clear difference between the two in my benches when maxing out.

Curious whether you see the difference in AIDA64 as well. As an example, running PBO with 3800CL14 nets me 53.7ns, an all-core OC @ 4800 is 52.8ns, and single CCD with all-core @ 4800 nets me 51.5.


----------



## GRABibus

jomama22 said:


> Strange. May just come down to memory oc then, not really sure. I had seen a pretty clear difference between the two when doing my bench's when maxing out.
> 
> Curious if you see the difference in aida or not as well. As an example, running pbo with 3800cl14 nets me 53.7ns, all core oc @ 4800 is 52.8ns, single ccd with all core @ 4800 nets me 51.5.


Disabling SMT and running 1 CCD comes to 8 cores for you and 6 for me.

Could that be an explanation?

My RAM is at 3733MHz 14-15-15-28-1T.

Maybe my GPU is just not that good a booster.


----------



## jomama22

GRABibus said:


> Disabling SMT and 1 CCD comes to 8 cores for you and 6 for me.
> 
> Coudl it be an explanantion ?.
> 
> My RAM is at 3733MHz 14-15-15-28-1T.
> 
> Maybe my GPU si not a so good booster


Possible. During the PR runs I had actually set it to use only 4 cores, with SMT off and 1 CCD (in the BIOS, that is). You could try that and see what happens.


----------



## Nizzen

jomama22 said:


> Just going by average clocks is really all I can do. No way to know who is using a kp using the classified tool or not or anyone else using an evc. So without that info it's difficult to make any sort of conclusion on way or another.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 777 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> This is my score without using the evc, which still seems to fall inline as mentioned above.
> 
> Granted, both of these runs are just with normal ambiant temp when the last bios had come out, so not really ideal but no big deal.


Thanks for the answer.

Ambient temps and an average temperature of 33 °C? 🙃


----------



## GRABibus

jomama22 said:


> Possible. During the PR runs I had actually set it to use only 4 cores with smt off and 1 ccd(in bios that is). Could try that and see what happens.


How do you disable cores?


----------



## changboy

Where can I buy a 3-slot NVLink SLI bridge for these (FTW3) RTX 3090s?


----------



## J7SC

changboy said:


> Where i can buy 3 slot nvlink sli bridge for those(FTW3) rtx-3090 ?


...well, check out this time-stamped bit at GN (and let it run a bit...)


----------



## GRABibus

jomama22 said:


> Possible. During the PR runs I had actually set it to use only 4 cores with smt off and 1 ccd(in bios that is). Could try that and see what happens.











I scored 14 904 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com
Bios MSI Suprim X BAR F5
BAR enabled
+175MHz on core
+1050MHz on memory
All-core OC 4.9GHz
SMT disabled
1 CCD
Core count 4
Nvidia drivers at defaults
Gsync enabled
22°C ambient


----------



## jomama22

GRABibus said:


> I found unde
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 904 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2486932
> 
> 
> Bios MSI Suprim X BAR F5
> BAR enabled
> +175MHz sur core
> +1050MHz sur mémoire
> All core Oc 4.9Ghz
> SMT disabled
> 1 CCD
> Core count 4
> Drivers Nvidia default
> Gsync enabled
> 22°C ambiante


Not sure then. Works consistently for me. Maybe mine is just magical lol. 

Why is gsync enabled though?


----------



## GRABibus

jomama22 said:


> Not sure then. Works consistently for me. Maybe mine is just magical lol.
> 
> Why is gsync enabled though?


Look at my average temp: 67°C.

That is the key...


----------



## Falkentyne

SirCanealot said:


> I did have it unlocked when it was attached to the board. Just wouldn't come out


It's not unlocked.
I can see it VERY clearly in your imgur picture.
It's not slid all the way out. The metal is very clearly blocking a tiny section of the plastic plug.
The Igor's Lab video shows the lock vs. unlock position. Look at the four solder points below the tab in Igor's video when it's unlocked. It has to be at exactly that position.
The first time you unlatch the metal slider, it takes a little bit of force to get it to that point. The people who ripped off the entire connector either pulled it past that point, or didn't pull it far enough and then ripped out the connector trying to get the plug out. I've seen this multiple times.


----------



## yzonker

GRABibus said:


> I found unde
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 904 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2486932
> 
> 
> Bios MSI Suprim X BAR F5
> BAR enabled
> +175MHz sur core
> +1050MHz sur mémoire
> All core Oc 4.9Ghz
> SMT disabled
> 1 CCD
> Core count 4
> Drivers Nvidia default
> Gsync enabled
> 22°C ambiante


Will it complete at +180 core? Keep in mind the VF curve only moves in 15 MHz increments. +175 falls between +165 and +180, which are the only two valid values, so it might actually be running at +165.
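The snapping behavior described above can be sketched in a couple of lines. `effective_offset` is a hypothetical helper, not a real driver API, and it assumes the driver floors a requested offset to the valid 15 MHz step below it:

```python
def effective_offset(requested_mhz: int, step: int = 15) -> int:
    """Floor a requested core-clock offset to the nearest valid
    15 MHz step on the VF curve (assumed flooring behavior)."""
    return (requested_mhz // step) * step

print(effective_offset(175))  # → 165: a +175 request actually runs at +165
print(effective_offset(180))  # → 180: already a valid step
```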


----------



## GRABibus

yzonker said:


> Will it complete at +180 core? Keep in mind the VF curve only moves in 15 mhz increments. +175 falls in between +165 and +180 which are the only 2 valid values. It might be running +165.


Yes, I know, but at +180MHz it crashes.
Often at +175MHz too...


----------



## bmagnien

I picked up a wattmeter off Amazon that stores max watts, as I wanted a better idea of full-system draw before going back to the 1000W BIOS. I set up a custom registry key for HWiNFO that calculates combined CPU+GPU power at any given moment. I then ran Furmark on the 520W KP BIOS plus CB23 multi at the same time, through CB completion. The GPU runs around 520-ish watts and the CPU hits around 240. I then took the max reading from the wattmeter and subtracted the combined HWiNFO reading, to get an idea of all the 'other extraneous' system power draws (mobo, RAM, DDC pump, fans, etc.). With no RGB enabled, but fans and pump at max, my 'extraneous' power draw was about 205W. I was super surprised it was this high, but I'm glad I finally got an accurate reading and can justify my purchase of a 1000W SFX-L PSU.

It's also nice to have that reading to add as a raw number onto my HWiNFO registry variable, to get a good idea of actual real-time total system power draw while using the machine.
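The accounting above boils down to one subtraction. A sketch, with the wall-meter maximum back-calculated from the figures quoted (the 965 W value is my inference from the post's numbers, not a stated reading):

```python
# Max component readings captured via software sensors during the
# combined Furmark + Cinebench load (approximate figures from the post).
gpu_w = 520       # Furmark on the 520 W KP BIOS
cpu_w = 240       # Cinebench R23 multi

# Max whole-system draw stored by the wall wattmeter (inferred value).
wall_max_w = 965

# Everything the software sensors don't see: mobo, RAM, pump, fans, etc.
extraneous_w = wall_max_w - (gpu_w + cpu_w)
print(extraneous_w)  # → 205
```

Note this lumps PSU conversion losses into the "extraneous" figure too, since the wall meter measures AC draw while the sensors report DC component power.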


----------



## GRABibus

Come on... still 50 pts to go:









I scored 14 950 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com
Bios EVGA KP 520W with BAR
BAR enabled
+160MHz on core
+1050MHz on memory
Voltage slider "0%"
Power slider "121%"
All-core OC 4.9GHz
SMT disabled
1 CCD
Core count 4
Drivers Nvidia 466.11 at defaults
Gsync disabled
22°C ambient
All fans at 100%
Open PC case


----------



## Falkentyne

GRABibus said:


> Yes I know, but at 180MHz, it crashes.
> Also often at 175MHz....


Then use +150 MHz.


----------



## GRABibus

Falkentyne said:


> Then use +150 mhz.


My score is lower with +150MHz.


----------



## KedarWolf

GRABibus said:


> Com'on.....Still 50pts to win :
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 950 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2486940
> 
> 
> Bios EVGA KP 520W with BAR
> BAR enabled
> +160MHz sur core
> +1050MHz sur mémoire
> Voltage slider "0%"
> Power slider "121%"
> All core OC 4.9Ghz
> SMT disabled
> 1 CCD
> Core count 4
> Drivers Nvidia 466.11 default
> Gsync disabled
> 22°C ambient
> All fans at 100%
> Open PC case


Try the F5 Suprim X BIOS. I do better on it than with the 520W KP BIOS.









MSI RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## GRABibus

I tested the following BIOSes: ASUS stock, ASUS XOC 1000W, MSI Suprim X F5, and KP 520W. The KP 520W BIOS is definitely the one to go with for me.

Here is my best run after hours of testing:









I scored 14 986 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com
Bios EVGA KP 520W with BAR
BAR enabled
+160MHz on core
+1075MHz on memory
Voltage slider "0%"
Power slider "121%"
All-core OC 4.9GHz
SMT disabled
1 CCD disabled
Core count 4
Drivers Nvidia 466.11, textures set to max performance, threaded optimization on, low latency enabled
Gsync disabled
21°C ambient
RAM @ 3733MHz 14-15-15-28-1T

Clearly, PR was not made for Zen 3, or Zen 3 was not made for PR, whichever way you want it.
Zen 3 processors require a lot of BIOS tweaking for PR.

Some people have reported (see the links I posted a few posts above) losing several hundred points going from a 9900K to a 5950X, for example.

Even Jomama had to tweak his BIOS.

=> SMT disabled, 1 CCD disabled and core count 4 with an all-core overclock (only 4 cores, in fact) is what you should try in PR with a Zen 3 processor.

I have also set my RAM OC to 3800MHz 16-16-16-28-1T with all the necessary voltages increased in the BIOS, instead of my 24/7-stable 3733MHz 14-15-15-28-1T.

I currently can't improve my cooling, and sometimes the same settings as above crash in PR, so I am on a razor's edge of stability on each run.

Still didn't break 15k, by the way...

If you have any advice for tweaking the drivers better, thank you in advance!

P.S.: I will have the opportunity to get another Strix next week and will test it to see if it performs better, mostly on VRAM (on my current one, I can do a maximum of +1000MHz, otherwise it crashes or shows artifacts).


----------



## des2k...

Not sure the PR score is that dependent on the CPU. Here's my score, nothing crazy, and this is with the 3900X, which sucks at boost frequency, plus slow 3800CL18 memory.

2145 core, +1484 mem I think








I scored 15 021 in Port Royal
AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Thanh Nguyen

Watercooling + 1000W BIOS = the best. My card has no problems at all.


----------



## GRABibus

des2k... said:


> Not sure PR score is that dependent on CPU. Here's my score, nothing crazy, this is with the 3900x that sucks at boost frequency and slow memory 3800cl18.
> 
> 2145core +1484mem I think
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 021 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


I assume it depends on each configuration.
But it seems that with Zen 3, it is dependent.
I gained 300 pts by setting only 1 CCD, disabling SMT and disabling 2 cores (advice from Jomama).

On your run, which is on water I assume, you have a perfectly stable 2145MHz core frequency and a VRAM frequency of 1405MHz.

I have only 1350MHz on VRAM, my average frequency is 2060MHz and I am on air, and we only differ by 35 points.
I am pretty sure you would get at least 200 pts more on a 10900K, for example.

But all rigs are different, and it is not a general rule I think.


----------



## GRABibus

des2k... said:


> Not sure PR score is that dependent on CPU. Here's my score, nothing crazy, this is with the 3900x that sucks at boost frequency and slow memory 3800cl18.
> 
> 2145core +1484mem I think
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 021 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com











5950x slower In Port royal Benchmark then 9900k

I have a problem with my new processor. I had a 9900k before and had 14,700 points with the 3090 FE. With the new Ryzen 5950x I only get 14200 points. I use c15 4000mhz ram and 2000mhz inf. with good timings. My CPU Score is 16900 Points in CPU Score in Timespy.that's fine. But unfortunately I...

www.overclock.net

5950x with 3090kpe lost score over 10900k 3090kpe in port royal help - EVGA Forums

Has anyone found a fix for the 3090 not scoring as well on ryzen 5000? I just received a 5950x and msi meg ace x570 and it scores 600pts lower. Only thing i can figure is my oc on my 10900k is helping it out. 5200 core 4800 cache. The 5950x is nice just useless if i cant use it for benchmarking...

forums.evga.com


----------



## des2k...

GRABibus said:


> I assume it depends on each configuration.
> But, it seems that with Ryzen 3, it is dependent.
> I won 300pts by setting only 1 CCD, disability SMT and disabling 2 cores (advises from Jomama).
> 
> On your run, which is on water I assume, you have a perfect stable 2145MHz core frequency and a VRAM fréquency of 1405MHz.
> 
> I have only 1350MHz on VRAM, my average frequency is 2060MHz and I am on air and we only differ by 35 points.
> I am pretty sure you would get at least 200pts more on a 10900k for example.
> 
> But all rigs are different and it is not a general rule I think.


I'm surprised you need to disable CCDs and SMT. The latest Nvidia driver and the default Windows power plan seem to be working very well for Ryzen.
I only get high boost on CCD1; CCD2 is idle unless it's a really CPU-heavy game. I think Threaded Optimization helps a lot.


----------



## jomama22

GRABibus said:


> 5950x slower In Port royal Benchmark then 9900k
> 
> 
> I have a problem with my new processor. I had a 9900k before and had 14,700 points with the 3090 FE. With the new Ryzen 5950x I only get 14200 points. I use c15 4000mhz ram and 2000mhz inf. with good timings. My CPU Score is 16900 Points in CPU Score in Timespy.that's fine. But unfortunately I...
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 5950x with 3090kpe lost score over 10900k 3090kpe in port royal help - EVGA Forums
> 
> 
> Has anyone found a fix for the 3090 not scoring as well on ryzen 5000? I just received a 5950x and msi meg ace x570 and it scores 600pts lower. Only thing i can figure is my oc on my 10900k is helping it out. 5200 core 4800 cache. The 5950x is nice just useless if i cant use it for benchmarking...
> 
> 
> 
> forums.evga.com


Just takes a well tuned system as you said.

I can't be the only one who somehow can make zen 3 perform well/on par with a 10900k in PR lol.

The same can be said for time spy. Ignoring overall/cpu score and looking only at graphics score, my lesser clocked 3090 beats and/or matches 10900k's clocked to the gills:








3DMark.com search
www.3dmark.com




(I'm 37th in graphics score there, and that's without doing any tweaking in NVCP)

Just needs to be set up properly and it's good to go. Port royal just seems to be the real killer for most people but at the end of the day, you don't play PR so who cares lol.


----------



## J7SC

...I used to run Port Royal a lot with my 2x 2080 Tis (insert in 3DM pic) but haven't lately, even though I like the visuals of that bench. With the stock 3090 on air, I managed to squeak past 15K right after I got it, but then decided to upgrade mobo and CPU twice (the first upgrade is off to a f/t life as a workstation). 

Right now, trying to mod the (in)famous EK backplate for some extra VRAM cooling, at least until a custom active w-cooled option becomes available


----------



## KedarWolf

My best result by far with the 1000W XOC BIOS power limit at 60%. My previous best was 15260 with the Suprim X BIOS.

I'm really surprised it actually completed as my ambient temps right now are rather high.


















I scored 15 358 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## KedarWolf

KedarWolf said:


> My best result by far with the 1000W XOC BIOS power limit at 60%. My previous best was 15260 with the Suprim X BIOS.
> 
> I'm really surprised it actually completed as my ambient temps right now are rather high.
> 
> View attachment 2486968
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 358 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2486967


Just switched the Nvidia settings from Normal to Performance. The rest are optimised for benching as well; I've shared them before.









I scored 15 454 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## changboy

There you go :








I scored 15 458 in Port Royal
Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## KedarWolf

changboy said:


> There you go :
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 458 in Port Royal
> 
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


My best result: 15492.









I scored 15 492 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Nizzen

jomama22 said:


> Just takes a well tuned system as you said.
> 
> I can't be the only one who somehow can make zen 3 perform well/on par with a 10900k in PR lol.
> 
> The same can be said for time spy. Ignoring overall/cpu score and looking only at graphics score, my lesser clocked 3090 beats and/or matches 10900k's clocked to the gills:
> 
> 
> 
> 
> 
> 
> 
> 
> 3DMark.com search
> 
> 
> 3DMark.com search
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> (I'm 37th in graphics score there, and that's forgetting to do any tweaking in nvcp)
> 
> Just needs to be set up properly and it's good to go. Port royal just seems to be the real killer for most people but at the end of the day, you don't play PR so who cares lol.


You are the only one with the Elmor tool and a 5950X 

Even Kingpin said he is getting better "efficiency" per clock by tuning the "hidden" voltage that can be adjusted on Kingpin cards with the Elmor tool.

I don't have a Kingpin 3090 or the Elmor tool, so I can't test it myself.


----------



## Beagle Box

jomama22 said:


> Not sure then. Works consistently for me. Maybe mine is just magical lol.
> 
> Why is gsync enabled though?


Looking at your scores, I'd say your high scores are largely due to having a solid GPU. 
You can max out your Memory OC and average >2205 across a Port Royal run.
That's better than mine at average room temps.
That's a very solid GPU. You should try the newest Afterburner and see how far you can OC the memory on that card.


----------



## jomama22

Beagle Box said:


> Looking at your scores, I'd say your high scores are largely due to having a solid GPU.
> You can max out your Memory OC and average >2205 across a Port Royal run.
> That's better than mine at average room temps.
> That's a very solid GPU. You should try the newest Afterburner and see how far you can OC the memory on that card.


Well yes, but that's not the point. It's a comparison of GPU clocks/score across different CPUs, is all. So the actual end score doesn't really matter.



Nizzen said:


> You are the only one with Elmor and 5950x
> 
> Even Kingpin said he is getting better "effency" per clock buy tuning the "hidden" voltage that can be tuned with kingpincards and Elmor tool.
> 
> I haven't kingpin 3090 or Elmor tool, so I can't test myself


Even without that, it still leads to the same conclusion.

I scored 22 672 in Time Spy

Not sure if this was my highest without using the EVC (I know it's without the EVC by the date, but whether it's my highest, I dunno). You can find this result through the search function on 3dmark (just sort by gfx score and then adjust the top graph to show this area of results). Comparing avg clock and gfx score to the 10900Ks around it, it's the same story.

And this whole magical voltage is just MSVDD, and to be frank, it doesn't really do much for you at ambient. I don't think I really got any gains or stability from upping it beyond 1.1v, which is what you get at stock using the AB V/F curve at 1.1v.

This is why I posted both the PR and now the TS result without using the EVC, so that it's taken out of the equation.


----------



## GRABibus

So, should I switch to a 10900K now to get normal PR results?  

Honestly, Futuremark should have separated the graphics score and CPU score in PR, as in all the other benchmarks.


----------



## SirCanealot

Falkentyne said:


> It's not unlocked.
> I can see it VERY clearly in your imgur picture.


_bangs head on wall_
Thanks for the clarification! At least I understand what I did wrong now (gotta laugh or I'll cry!)
That does actually ring a bell from the first 3090 I took apart — guess I was overconfident that it was definitely unlocked and should come out 



jomama22 said:


> Looks like you have a damaged trace down the center there. Not sure what type of affect that may have but it's possible you have a short around there now.


Thanks so much for pointing this out! I wouldn't have thought to check this otherwise. 

Luckily there's a pretty reliable repair guy in central London that I can hopefully get to in the next day or two. I think there are decent odds he can do something with it _fingers crossed_

The most annoying thing with this blasted connector is that I thought I was being very careful with it... but I guess you can never be too careful with electronics...


----------



## sultanofswing

I have a 7800X system here that I tested on, plus my daily 10940X rig; both systems score exactly the same with the same 3090 and overclocks.

I also have two Kingpin 3090s here where Port Royal responds very well to adding MSVDD, and I also have an FTW3 Hydro Copper that has no MSVDD control but runs circles around both my Kingpins.
This is my Hydro Copper before the Nvidia driver update that boosted PR scores by a couple hundred points.








I scored 15 628 in Port Royal
Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## jomama22

GRABibus said:


> So, should I change to 10900K now to get normal PR results ?
> 
> Honestly, Futuremark should have separated Gx score and CPU score on PR, as all other benchmarks.


Meh, wouldn't really matter. 3dmark needed a bench that just focuses as much as possible on gpu. It's never going to be 100% ideal. You're better off just looking at game benchmarks to have any real conclusion about total system performance anyway. If you really want to minimize/eliminate cpu as much as possible, really should just make the bench run at 4k.

Some games like latency, some games like ipc, some like some mixture of the two.

It's such a moot point at the end of the day between a 10900k and 5900x/5950x. It really buys you nothing perceivable one way or the other, so long as you have the systems set up correctly.

And let's be real, if you're using a 3090, you're most likely playing at 2560x1440 or greater. At 4K it's literally irrelevant, and the only time it matters is when you are just starved for cores.


sultanofswing said:


> I have a 7800x system here that I tested on and I have my daily 10940x rig, Both systems score exactly the same with the same 3090 and overclocks.
> 
> I also have 2 Kingpin 3090's here that port royal responds very well to adding msvdd and I also have a FTW Hydrocopper that has no msvdd control but runs circles around both my Kingpins.
> This is my Hydrocopper before the Nvidia driver update that boosted PR score by a couple of hundred points.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 628 in Port Royal
> 
> 
> Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Yeah, msvdd seems to be temperamental between chips as to whether adding more really does much of anything or works well. It's very sweetspot-ish, much like dram tweaking. Mine just doesn't care lol.


----------



## Beagle Box

sultanofswing said:


> I have a 7800x system here that I tested on and I have my daily 10940x rig, Both systems score exactly the same with the same 3090 and overclocks.
> 
> I also have 2 Kingpin 3090's here that port royal responds very well to adding msvdd and I also have a FTW Hydrocopper that has no msvdd control but runs circles around both my Kingpins.
> This is my Hydrocopper before the Nvidia driver update that boosted PR score by a couple of hundred points.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 628 in Port Royal
> 
> 
> Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Not surprising. Seems in PR, all things being equal, it's mostly the quality of the GPU's processor that determines score. 
An exceptional GPU is going to perform exceptionally in any decent PC.


----------



## GRABibus

Finally, 15k broken...Strix on air









I scored 15 061 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com














Bios EVGA KP 520W with BAR
BAR enabled
+160MHz on core
+1075MHz on memory
Voltage slider "0%"
Power slider "121%"
*=> All-core OC 4.9GHz
=> SMT disabled
=> 1 CCD disabled
=> Core count 4*
Drivers Nvidia 466.11, textures set to max performance, threaded optimization on, low latency enabled, no anisotropic filtering, no anti-aliasing
Gsync disabled
19°C ambient
RAM @ 3733MHz 14-15-15-28-1T

The same run now under the same conditions as above, but with my 24/7 PBO/CO OC (see signature) instead of the bolded CPU tweaking above that got me 15061:








I scored 14 868 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Roughly 200pts difference.....


----------



## Beagle Box

GRABibus said:


> Finally, 15k broken...
> .......



Congratulations.
You've earned it.


----------



## GRABibus

jomama22 said:


> Meh, wouldn't really matter. 3dmark needed a bench that just focuses as much as possible on gpu. It's never going to be 100% ideal. You're better off just looking at game benchmarks to have any real conclusion about total system performance anyway. If you really want to minimize/eliminate cpu as much as possible, really should just make the bench run at 4k.
> 
> Some games like latency, some games like ipc, some like some mixture of the two.
> 
> It's such a moot point at the end of the day between a 10900k and 5900x/5950x. It really buys you nothing perceivable one way or the other, so long as you have the systems set up correctly.
> 
> And let's be real, if you're using a 3090, you're most likely playing at 2560x1440 or greater. At 4k, it's literally irrelevant and the only time it matters is when you are just staved for cores.
> 
> Yeah, msvdd seems to be temperamental between chips as to whether adding more really does much of anything or works well. It's very sweetspot-ish, much like dram tweaking. Mine just doesn't care lol.


I was talking about benches.
If I had to dedicate a rig only to benching now, it would be Intel currently, due to this possible mismatch between Ryzen and PR.


----------



## Lobstar

GRABibus said:


> I was talking about benches.
> If I would have to dedicate a rig only for bench now, it would be with Intel currently due to this posssible mismatch with Ryzen and PR.


What's wrong with Ryzen and PR? This is my 3950X/3090. I scored 15 118 in Port Royal


----------



## GRABibus

Lobstar said:


> Whats wrong with Ryzen and PR? This is my 3950x/3090. I scored 15 118 in Port Royal


Read the posts above.

Of course, this is certainly not a general rule, but it seems that some people underperform in PR with Zen 3.









5950x slower In Port royal Benchmark then 9900k

I have a problem with my new processor. I had a 9900k before and had 14,700 points with the 3090 FE. With the new Ryzen 5950x I only get 14200 points. I use c15 4000mhz ram and 2000mhz inf. with good timings. My CPU Score is 16900 Points in CPU Score in Timespy.that's fine. But unfortunately I...

www.overclock.net

5950x with 3090kpe lost score over 10900k 3090kpe in port royal help (Page 2) - EVGA Forums

Has anyone found a fix for the 3090 not scoring as well on ryzen 5000? I just received a 5950x and msi meg ace x570 and it scores 600pts lower. Only thing i can figure is my oc on my 10900k is helping it out. 5200 core 4800 cache. The 5950x is nice just useless if i cant use it for benchmarking...

forums.evga.com

Here is a nice comparison (extracted from the EVGA link just above):

Result
www.3dmark.com





I am in this case.

See my scores in my last post.

PBO/CO overclock (which is in my signature) => 14868 pts

Tweaking the CPU based on Jomama's advice (he has the same phenomenon):
=> 15061 pts

The CPU tweaking details that improved my score by roughly 150 on average:
All-core static OC 4.9GHz
SMT disabled
One CCD disabled
Core count of only 4

Maybe just try it from your side to see whether it improves your score or not... but maybe this happens only with Zen 3.

We differ by only 55 points, and you are on water with a much higher average boost clock and lower temps than me (my Strix is on air).


----------



## des2k...

Lobstar said:


> Whats wrong with Ryzen and PR? This is my 3950x/3090. I scored 15 118 in Port Royal


Probably AMD, which sucks at software. I've owned 4 Ryzen CPUs and 4 mobos, and performance is all over the place. Also, the difference in CPU quality is huge on AMD's side. You're more likely to get a garbage CPU or be stuck with a BIOS that is full of weird bugs.

It's a great platform for wasting time 🤣


----------



## des2k...

GRABibus said:


> Finally, 15k broken...Strix on air
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 061 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2487030
> 
> 
> BIOS: EVGA KP 520W with rBAR
> rBAR enabled
> +160 MHz on core
> +1075 MHz on memory
> Voltage slider "0%"
> Power slider "121%"
> *=> All-core OC 4.9 GHz
> => SMT disabled
> => 1 CCD disabled
> => Core count 4*
> Drivers: NVIDIA 466.11, textures max performance - multi-threaded optimization - low latency enabled - no anisotropy - no anti-aliasing
> G-Sync disabled
> 19 °C ambient
> [email protected] 14-15-15-28-1T
> 
> The same run now, under the same conditions as above, but with my 24/7 PBO/CO OC (see signature) instead of the CPU tweaking above, with which I reach 15,061
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 868 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Roughly 200pts difference.....


Yes, congrats on being part of the 15k PR club.

...and games perform the same as they would with a 14k PR score 🤣

The next club to join, if you're on water, is the 7 °C water-delta club. I'm not there yet, but I can already tell I dream of gaming at 32 °C GPU temps vs 42 °C.
*that was sarcastic 🙄


----------



## jomama22

GRABibus said:


> I was talking about benches.
> If I would have to dedicate a rig only for bench now, it would be with Intel currently due to this posssible mismatch with Ryzen and PR.


I mean... it would only be good at PR, I guess? It gets smashed in any other 3DMark bench just by virtue of the large core deficit. And if I can match or beat Intel in PR with the same or lower average clocks, no problem, then I fail to see your point, I suppose?

The fact I can have my cake and eat it too is quite nice.

That kid in the evga forums was running 3200mhz ram which...I mean...what? Lol.


----------



## GRABibus

jomama22 said:


> it would only be good at PR I guess?


Probably.


----------



## yzonker

sultanofswing said:


> I have a 7800x system here that I tested on and I have my daily 10940x rig, Both systems score exactly the same with the same 3090 and overclocks.
> 
> I also have 2 Kingpin 3090's here that port royal responds very well to adding msvdd and I also have a FTW Hydrocopper that has no msvdd control but runs circles around both my Kingpins.
> This is my Hydrocopper before the Nvidia driver update that boosted PR score by a couple of hundred points.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 628 in Port Royal
> 
> 
> Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Are you running the FTW3 in the original HC block? If so, what block delta do you get and how are vram temps?

I'm on the notify list for a FTW3 HC. Sadly I got in 34 seconds after it opened and didn't make the first wave. They only made it to 10:00:16. Lol


----------



## changboy

yzonker said:


> Are you running the FTW3 in the original HC block? If so, what block delta do you get and how are vram temps?
> 
> I'm on the notify list for a FTW3 HC. Sadly I got in 34 seconds after it opened and didn't make the first wave. They only made it to 10:00:16. Lol


Me, I got in at 10h32 lol


----------



## sultanofswing

yzonker said:


> Are you running the FTW3 in the original HC block? If so, what block delta do you get and how are vram temps?
> 
> I'm on the notify list for a FTW3 HC. Sadly I got in 34 seconds after it opened and didn't make the first wave. They only made it to 10:00:16. Lol


Yes, it's the original Hydrocopper. The delta seems to be about 10-12 °C when looping Port Royal; the water temp will stabilize at 28-29 °C and the GPU will sit around 38-40 °C.
During gaming I have seen the delta go as high as 15-16 °C; the GPU will get up to about 44 °C and the water temp will reach 29-30 °C depending on the game. Metro Exodus pushes the card way harder if I run 4K RTX with no DLSS, but if I turn DLSS on the temps drop back down into the 38-39 °C range.

I have an Optimus block on the way, and with the radiator surface area I have, I feel I should get a few °C better delta.
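The "delta" being compared here is simply GPU core temperature minus loop water temperature at steady state; a trivial helper makes the bookkeeping explicit (the `block_delta` function name is made up, and the numbers are just the figures quoted above):

```python
# Water-block "delta" as used above: GPU core temp minus loop water temp.
# A bigger delta under the same load points to worse block contact or flow.

def block_delta(gpu_c: float, water_c: float) -> float:
    """Temperature delta across the block, in °C."""
    return gpu_c - water_c

# Port Royal loop: GPU ~38-40 °C with water at ~28-29 °C
print(block_delta(39.0, 28.5))  # 10.5, in the 10-12 °C range quoted
```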


----------



## yzonker

sultanofswing said:


> Yes it is the original Hydrocopper. It seems to me the delta is about 10-12c when looping port royal, Water temp will stabilize at 28-29c and the gpu will sit around 38-40c.
> During gaming I have seen the delta go as high as 15-16c, gpu will get up to about 44c and the water temp will get to 29-30c depending on which game. Metro Exodus pushes the card way harder if I run 4k RTX and no DLSS but if I turn DLSS on the temps will drop back down into the 38-39c range.
> 
> I have an Optimus block on the way and with my Radiator surface area that I have I feel I should get a few C better delta.


Thanks. Who knows, 3 months from now when 18 seconds have passed in the EVGA queue, I might have one. LOL


----------



## Sheyster

yzonker said:


> Thanks. Who knows, 3 months from now when 18 seconds have passed in the EVGA queue, I might have one. LOL


LOL, are you implying that the EVGA queue is near a black hole, and time slows down there?


----------



## WillP

Sheyster said:


> LOL, are you implying that the EVGA queue is near a black hole, and time slows down there?


A singularity caused by the extreme mass of virtual bot customers that occupy no volume in space but swallow any GPU product. Yes, this is clearly what's going on.


----------



## J7SC

^^ ^^...oh boy, as long as there's no quantum entanglement, like two buyers simultaneously getting the same card


----------



## inedenimadam

I need:
my 3090 reference H2O backplate from EK to come in.
ANYBODY to release a block for the KPE.

Why is the industry so slow this release cycle?


----------



## bmagnien

inedenimadam said:


> why is the industry so slow this release cycle?


so there was this bat in Wuhan...


----------



## yzonker

inedenimadam said:


> I need:
> my 3090 reference h20 backplate from EK to come in.
> ANYBODY to release a block for the KPE.
> 
> why is the industry so slow this release cycle?


EVGA KPE HC blocks are really close, I think. I assume you are monitoring the EVGA forum and Jacob's Twitter feed. You need to jump on it. Don't be like me and end up at 10:00:34!


----------



## yzonker

Crap, they're trying to hold out on us.






Okay lets hear it Highest Port Royal 3090 Kingpin Scores! (Page 22) - EVGA Forums


I thought it would be fun and helpful to hear everyones Port Royal Kingpin scores and how they got there, spare no details! This is your time to brag and help those of us trying to get in the 15’s and beyond. Either post pictures or let us know what settings you used. Example: My highest run I...



forums.evga.com





Edit: that link doesn't seem to jump to the correct post. Anyway, maybe some of you already know, but the reBAR XOC is out; they're trying to keep a lid on it this time. Can we start a GoFundMe to bribe someone who has it?


----------



## J7SC

yzonker said:


> Crap, trying to hold out on us.
> 
> 
> 
> 
> 
> 
> Okay lets hear it Highest Port Royal 3090 Kingpin Scores! (Page 22) - EVGA Forums
> 
> 
> I thought it would be fun and helpful to hear everyones Port Royal Kingpin scores and how they got there, spare no details! This is your time to brag and help those of us trying to get in the 15’s and beyond. Either post pictures or let us know what settings you used. Example: My highest run I...
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> Edit: that link doesn't seem to jump to the correct post. Anyway, maybe some already know but the reBar XOC is out, but they are trying to keep the lid on it this time. Can we start a go fund me to bribe someone with it?


...guess the regular KPE w/ 520W and r_BAR is out then as well... just upgraded my Strix OC to the Asus v3 BIOS, but I might give the KPE 520 a try.

Quick Qs: 1.) My Strix typically draws 55 W or so on PCI-slot power, and a friend of mine who also has a Strix is at around 30 W to 35 W - in both cases the overall board draw is between 480 W and 500 W... low PCI-slot power draw is a good thing, I take it?

2.) Tuning a new mobo/CPU combo for the Strix, looking at the latest HWiNFO beta and trying to get a feel for 'normalized GPU power'... it wasn't my best run / VRAM not yet maxed as I have to reinstall fans, but what reading / range are you folks getting for normalized GPU power relative to % of TDP?


----------



## des2k...

yzonker said:


> Crap, trying to hold out on us.
> 
> 
> 
> 
> 
> 
> Okay lets hear it Highest Port Royal 3090 Kingpin Scores! (Page 22) - EVGA Forums
> 
> 
> I thought it would be fun and helpful to hear everyones Port Royal Kingpin scores and how they got there, spare no details! This is your time to brag and help those of us trying to get in the 15’s and beyond. Either post pictures or let us know what settings you used. Example: My highest run I...
> 
> 
> 
> forums.evga.com
> 
> 
> 
> 
> 
> Edit: that link doesn't seem to jump to the correct post. Anyway, maybe some already know but the reBar XOC is out, but they are trying to keep the lid on it this time. Can we start a go fund me to bribe someone with it?


It's weird: if there's no NDA signed and the warranty is already voided by using the 1000W vBIOS, why does it matter if it's leaked again?


----------



## sultanofswing

J7SC said:


> ...guess the regular KPE w/ 520W and r_BAR is out then as well...just upgraded my Strix OC to Asus v3, but I might give the KPE 520 a try.
> 
> Quicks Qs -1.) My Strix typically draws 55W or so on PCI slot power, and a friend of mine who also has a Strix is at around 30W to 35W - in both cases overall board draw is between 480W and 500W...low PCI slot power draw is a good thing, I take it ?
> 
> 2.) tuning a new mobo/CPU combo for the Strix and looking at the latest HWInfo beta and trying to get a feel for 'normalized GPU power'...wasn't my best run / VRAM not yet maxed as I have to reinstall fans, but what reading / range are you folks getting for normalized GPU power in relation to % of TDP ?
> 
> View attachment 2487221


The KPE 520W reBAR BIOS has been out for a while.
Had to sneak in a Superposition run since you showed yours. I probably haven't run this benchmark in over a year.


----------



## J7SC

sultanofswing said:


> KPE 520W Rebar has been out for a while.
> Had to sneak in a Superpostion run since you showed yours. Have not ran this benchmark in over a year probably.



... now you made me dig out this 8k, on air


----------



## yzonker

des2k... said:


> it's weird because if there's no nda signed and warranty already voided for using the 1000w vbios, why does it matter if it's leaked again ?


Maybe they can't tell if you used it if you flash back to another bios? 

Has anyone tried this Galax? It looks newer than the Galax 1000W I've seen before, but maybe not new enough to support reBAR. Someone on the EVGA forum reported using a 1000W Galax but didn't say which one.









GALAX RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## yzonker

And as far as BIOS flashing in general, I'm sure none of the manufacturers want people to be able to do it, since it messes up their tiered approach to cards - i.e., charging extra for an "OC" card that is really only differentiated by a BIOS with a higher power limit, like 350W vs 390W. EVGA can just control the 1000W BIOS more easily.


----------



## GRABibus

J7SC said:


> ... now you made me dig out this 8k, on air
> 
> View attachment 2487245


Definitely a *Platinum sample you have...

What were your settings ? Ambient temp ?
Is it with your rig with all the fans ?

2265MHz on air.....


----------



## des2k...

yzonker said:


> Maybe they can't tell if you used it if you flash back to another bios?
> 
> Has anyone tried this Galax? Looks newer than the Galax 1000w I've seen before, but maybe not new enough to support reBar. Someone on the EVGA forum reported using a 1000w Galax, but didn't say which one.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Well, somebody here did say that Vince told him the rBAR vBIOS is not ready, so maybe that guy is using the Galax one.

If it's the Kingpin one, it's not on his web server; the newest file 🙄 is still the old 1000W vBIOS.


----------



## yzonker

des2k... said:


> well somebody here did say that Vince told him the vbios rbar is not ready so maybe that guy is using galax one.
> 
> If it's the kingpin one it's not on his web server, newest files 🙄 is still the old 1000w vbios


Yeah, I don't think there is a Galax 1kW with reBAR. Obviously the KP XOC is available with reBAR; EVGA is just trying hard to keep people from sharing it. The more I think about it, the more I'm convinced it's the desire to keep their tiered sales intact. They don't want people with non-KP 3-connector cards (FTW3) being able to match the KP.


----------



## Thanh Nguyen

J7SC said:


> ... now you made me dig out this 8k, on air
> 
> View attachment 2487245


GPU min temp is 7 °C. Is your ambient at 5 °C? I guess it hit 2265 MHz for a few seconds.


----------



## sultanofswing

J7SC said:


> ... now you made me dig out this 8k, on air
> 
> View attachment 2487245


----------



## GRABibus

Thanh Nguyen said:


> Gpu min temp is 7c. Is your ambient at 5c? I guess it hit 2265mhz for a few seconds.


At 7 degrees, those amazing scores would then be more logical.


----------



## J7SC

Thanh Nguyen said:


> Gpu min temp is 7c. Is your ambient at 5c? I guess it hit 2265mhz for a few seconds.


...the 7 °C temp is nonsense / impossible... the system was on air, and per the original post a month or so ago, 'window open' meant an ambient temp of 17 °C give or take. Unigine isn't always accurate on system sensors. Back then, I also mentioned that the '2265' didn't last very long..



sultanofswing said:


>


...very nice! I'd better get a move on with finishing this new mobo/CPU combo... I do know that, now that it is on water, it can do a 2205 GPU clock = effective clock in SuperPos, though with the run below, the PL was just a touch below max due to a new backplate setup.

...which reminds me: typically I run the system on PBO auto and the rest at defaults, as it's also used for productivity. The vBIOS is stock (the V2, now V3). However, the Strix has 2 BIOS slots ('quiet' and 'performance') - I should be OK flashing the KPE r_BAR 520W onto one (i.e. quiet) but not the other, I take it.


----------



## GRABibus

J7SC said:


> ...7 C temp is nonsense / impossible...system was an air, and per original post a month ago or so, 'window open' meant ambient temp of 17 C give or take. Unigine isn't always accurate on system sensors. Back then, I also mentioned that the '2265' didn't last very long..
> 
> 
> 
> ...very nice ! I better get a move on with finishing this new mobo/CPU combo...I do know that now that it is on water, it can do 2205 GPU clock = effective clock in SuperPos, though with the run below, PL was just a touch below max due to a new backplate setup.
> 
> ...which reminds me: Typically, I run the system on pbo auto and the rest default as it is also used for productivity. vBios is stock (the V2, now V3). However, the Strix has 2 bios settings ('quiet' and 'performance') - I should be ok flashing the KPE r_BAR 520 W onto one (ie. quiet) but not the other, I take it.
> 
> View attachment 2487263


Even at 17 degrees, that's insane.

What were your curve and core offset?

If I set +160 MHz on core, I boost to 2160 MHz at best, even at 19 degrees.


----------



## J7SC

GRABibus said:


> Even at 17degrees, that’s insane.
> 
> What was your curve and core offset ?
> 
> If I set +160MHz on core, I boost best case at 2160MHz, even at 19 degrees.


...don't use curve / offset, not yet anyway; just sliders in MSI AB (without touching 'voltage'). With Superposition, I run +204 on the core with regular ambient around 23 °C. VRAM is currently below max due to the new/old backplate and thermal pads settling. I remounted the stock Strix one (the front is still water-cooled).


----------



## ALSTER868

J7SC said:


> don't use curve / offset, not yet anyways, just sliders in MSI AB (without touching 'voltage')


Best scores are achieved by using an offset (via the sliders). Voltage is +100.


----------



## J7SC

Very nice! What voltage do you end up running after +100 on the slider? ...also, what card / BIOS?


----------



## ALSTER868

It's the Strix on the KPE 520 BIOS, on water of course. Voltages jump from 1.05 to 1.1.
Offset is +175 if I remember right. In games I use a custom curve.


----------



## J7SC

ALSTER868 said:


> It's Strix on KPE 520 bios . Voltages are jumping from 1.05 to 1.1


...uh, nice - I'm looking forward to trying the KPE 520 on my Strix. Per the Q above, nothing wrong with putting the KPE 520 on BIOS 2 (Strix quiet) and keeping the stock Strix V3 BIOS on slot 1?!


----------



## ALSTER868

J7SC said:


> Per above Q, nothing wrong with putting KPE 520 on bios 2 (Strix Quiet) and keep stock Strix V3 bios on stock 1 ?!


That's exactly what I am doing: Strix V3 BIOS on slot 1 and KPE 520 on quiet.


----------



## J7SC

ALSTER868 said:


> That's exactly what I am doing.. Strix V3 bios on stock 1 and KPE 520 on quiet.


----------



## sultanofswing

My scores come from using a curve set to [email protected]

Sent from my OnePlus 7 Pro using Tapatalk


----------



## ALSTER868

sultanofswing said:


> My scores are by using a curve set to [email protected]


Can I have a peek at your curve? Great chip btw


----------



## jomama22

sultanofswing said:


> My scores are by using a curve set to [email protected]
> 
> Sent from my OnePlus 7 Pro using Tapatalk


Show a benchmark of it using higher volts/clocks? Or does the chip just not scale?


----------



## PLATOON TEKK

yzonker said:


> Maybe they can't tell if you used it if you flash back to another bios?
> 
> Has anyone tried this Galax? Looks newer than the Galax 1000w I've seen before, but maybe not new enough to support reBar. Someone on the EVGA forum reported using a 1000w Galax, but didn't say which one.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Will throw it on one of my Strix now and let you know


----------



## sultanofswing

ALSTER868 said:


> Can I have a peek at your curve? Great chip btw


I can post it tonight after I get home from work.


jomama22 said:


> Show a benchmark of it using higher volts/clocks? Or does the chip just not scale?


More voltage does not improve the score, so the chip is a good one. When I get home I'll post the last run from last night, where I just put in an offset of +260; that was 2235 MHz at 1.081 V, I think, which gave me an 8900-something score.
The fact that it can pass any benchmark I've thrown at it at 1.037 V and 2220 MHz shows it's a really good bin. I'm not using chilled water or anything, with room ambient between 23-24 °C.


----------



## J7SC

sultanofswing said:


> I can post it tonight after I get home from work.
> 
> More voltage does not improve the score so the chip is a good one. When I get home I'll post the last one I ran last night where I just put in an offset of 260 and that was 2235mhz with 1.081mv I think which gave me a 8900 something score.
> The fact it can pass any benchmark I've thrown at it at 1037mv and 2220 shows it's a really good bin. I'm not using chilled water or anything with room ambient between 23-24c


...it does make sense not to max voltage if there's no score improvement from it past a certain point, at least with a vBIOS that has a PL limit. There's also the extra heat to consider (temperature being one of the NVIDIA boost algorithm's parameters).


----------



## jomama22

sultanofswing said:


> More voltage does not improve the score so the chip is a good one. When I get home I'll post the last one I ran last night where I just put in an offset of 260 and that was 2235mhz with 1.081mv I think which gave me a 8900 something score.
> The fact it can pass any benchmark I've thrown at it at 1037mv and 2220 shows it's a really good bin. I'm not using chilled water or anything with room ambient between 23-24c





J7SC said:


> ...it does make sense not to max voltage if there's no score improvement results from it after a certain point, at least with a vBios that has a PL limit. There's also extra heat to consider (temp being one of the NVidia boost algorithm parameters).


I realize the point of using lower volts. I never said anything about the bin or the quality of the chip; I merely asked whether the chip scales beyond that point. That's it.

That bench really isn't going to show your low voltage doing its work anyway, as you are going to be hitting a power limit in Superposition 8K.

As an example, here's my card running 1.037 V:









It's just the power limit you are hitting in this bench. Nothing else.
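One way to see why the power limit dominates here: to first order, dynamic power scales as f·V², so a bump from 1.037 V to 1.1 V costs far more power than the matching clock bump. This sketch applies that textbook scaling (the `power_ratio` helper is hypothetical, and real boards deviate because of leakage, memory power, and VRM losses):

```python
# First-order dynamic-power model, P ~ f * V^2, to illustrate why small
# voltage bumps matter under a fixed power limit. Rough sketch only;
# it ignores static/leakage power and non-core consumers.

def power_ratio(f1_mhz, v1, f2_mhz, v2):
    """Predicted dynamic power of point (f2, v2) relative to (f1, v1)."""
    return (f2_mhz / f1_mhz) * (v2 / v1) ** 2

# e.g. 2220 MHz @ 1.037 V vs 2235 MHz @ 1.100 V (values from the thread)
r = power_ratio(2220, 1.037, 2235, 1.100)
print(f"predicted ~{(r - 1) * 100:.0f}% more power")  # roughly +13%
```

Observed deltas are usually smaller than this first-order estimate, since a large share of board power doesn't scale with core voltage.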


----------



## sultanofswing

jomama22 said:


> I realize the point of using lower volts. I never said anything about the bin or quality of chip, just merely asked if the chip scaled or not beyond that point. That's it.
> 
> That bench really isn't going to show you're low voltage doing its work anyway as you are going to be hitting a power limit with superposition 8k.
> 
> as an example, here's my card running 1.037v:
> View attachment 2487287
> 
> 
> Its just the power limit you are hitting in this bench. Nothing else.


I've run the test on both the 520W BIOS and the 1kW BIOS, and the scores are the same, power limit or not.
At the end of the day, yes, that is as far as I can go, because more voltage adds more heat, and once I start crossing over 42 °C it gets unstable. So it probably can scale with more voltage, but temps would need to be lower.


----------



## jomama22

sultanofswing said:


> I've ran the test on both the 520w bios and the 1kw bios and scores are the same power limit or not.
> At the end of the day yes that is as far as I can go because more voltage adds more heat and once I start crossing over 42c it will get unstable. So it probably can scale with more voltage but temps would need to be lower.


It would be interesting to see what your clocks actually are during the bench. Your 8800 run was at 530 W+, so it's definitely smacking against the power limiter in places. I wouldn't expect a 3% score difference between 2205 and 2235 starting clocks (that's only about a 1.4% clock difference). So I would imagine it's getting downclocked hard during the bench, either because of temp or power. The power limiter will happily move you to the next-lowest voltage point rather than just downclocking, since downclocking alone isn't going to save you all that much power. With how you have your curve set up, it could really just drop you hard.

Temperature downclocking, on the other hand, will usually keep you at your voltage point and just drop those dumb 15 MHz steps every few degrees.


Maybe your memory OC isn't really stable at such high clocks and is eating at your performance as well.

Also, I thought I had a pic but don't; maybe I'll do another run. But the difference between 1.037 V and 1.1 V was a measly 20 W in Superposition (574 vs 594). Score was like 9170 or something.
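The temperature stepping described above can be sketched as a simple down-binning model: the boost algorithm sheds about 15 MHz per temperature bin crossed while holding the voltage point. The bin temperatures below are invented for illustration; real cards and vBIOSes place them differently:

```python
# Illustrative model of GPU Boost temperature down-binning: clock drops
# in ~15 MHz steps as core temperature crosses successive thresholds,
# while the voltage point stays the same. Bin temps here are made up.

STEP_MHZ = 15
TEMP_BINS_C = [35, 42, 50, 58]  # hypothetical temps where a step is shed

def boosted_clock(base_boost_mhz: int, temp_c: float) -> int:
    """Clock after temperature-based down-binning (power limit ignored)."""
    steps = sum(1 for t in TEMP_BINS_C if temp_c >= t)
    return base_boost_mhz - steps * STEP_MHZ

print(boosted_clock(2235, 30))  # below all bins: full 2235 MHz
print(boosted_clock(2235, 45))  # two bins crossed: 2205 MHz
```

This is why water cooling that holds the core in the 30s keeps the top bins that an air-cooled card in the 50s silently loses.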


----------



## sultanofswing

jomama22 said:


> It would be interesting to see what your clocks actually are during the bench. Your 8800 run was at 530 W+, so it's definitely smacking against the power limiter in places. I wouldn't expect a 3% score difference between 2205 and 2235 starting clocks (that's only about a 1.4% clock difference). So I would imagine it's getting downclocked hard during the bench, either because of temp or power. The power limiter will happily move you to the next-lowest voltage point rather than just downclocking, since downclocking alone isn't going to save you all that much power. With how you have your curve set up, it could really just drop you hard.
> 
> Temperature downclocking, on the other hand, will usually keep you at your voltage point and just drop those dumb 15 MHz steps every few degrees.
> 
> 
> Maybe your memory OC isn't really stable at such high clocks and is eating at your performance as well.
> 
> Also, I thought I had a pic but don't; maybe I'll do another run. But the difference between 1.037 V and 1.1 V was a measly 20 W in Superposition (574 vs 594). Score was like 9170 or something.


I'll post it up tonight. 1.037 V at 2220 with no power limit.
I understand how the clocks drop based on temp, etc. Those two runs last night were also with my CPU 400 MHz below my normal bench config, since I'm on my daily overclock right now, so I don't know how Superposition scales with that; I don't normally run it. I also did not have the usual NVIDIA tweaks in place.


----------



## J7SC

...FYI, there was a 3DMark 'update' between last night and earlier today, and so far it seems to cost a few points in PR, TSX, etc. - even with sequential runs to replenish the cache (though maybe I should just wipe that...). In addition, there are multiple stories about the latest (April) Win 10 '''update''' hurting fps in some games and otherwise causing headaches.

*..reminisce / rant time:* I remember the good ol' days when a Windows update was optional and would politely ask whether to go ahead or not, and a vBIOS could be edited in Notepad (i.e. the GTX 600 series)... now, Skynet everywhere

...other than that, I'm enjoying the new X570 Dark Hero / 5950X / Strix 3090 OC water-cooled setup tremendously... all (mostly) stock, able to bench one moment and run Handbrake / Blender / Adobe et al. the next


----------



## yzonker

J7SC said:


> ...fyi there was a 3DM 'update' between last night and earlier today and so far, it seems to cost a few points in PR, TSX etc- even with sequential runs to replenish cache (though may be I should just wipe that...). In addition, there are multiple stories about the latest (April) Win 10 '''update''' hurting some fps, and otherwise causing headaches.
> 
> *..reminisce / rant time:* I remember the good ol' days when a windows update was optional and would politely ask you whether to go ahead or not, and vBios could be edited in Notepad (ie. GTX 600 series)...now, Skynet everywhere
> 
> ...other than that, I'm enjoying the new X570 Dark Hero / 5950X / Strix 3090 OC w-cooled setup tremendously...all on (mostly) stock that can bench one moment, and do Handbrake / Blender / Adobe et al the next


I still do. Haven't updated in a while.










StopUpdates10 3.7.2022.712 - Take your Control over Windows 11/10/8/7 updates!





greatis.com


----------



## GRABibus

J7SC said:


> ...fyi there was a 3DM 'update' between last night and earlier today and so far, it seems to cost a few points in PR, TSX etc- even with sequential runs to replenish cache (though may be I should just wipe that...). In addition, there are multiple stories about the latest (April) Win 10 '''update''' hurting some fps, and otherwise causing headaches.
> 
> *..reminisce / rant time:* I remember the good ol' days when a windows update was optional and would politely ask you whether to go ahead or not, and vBios could be edited in Notepad (ie. GTX 600 series)...now, Skynet everywhere
> 
> ...other than that, I'm enjoying the new X570 Dark Hero / 5950X / Strix 3090 OC w-cooled setup tremendously...all on (mostly) stock that can bench one moment, and do Handbrake / Blender / Adobe et al the next


Did you lose points in PR? How many?


----------



## J7SC

GRABibus said:


> Did you loose points at PR ? How many ?


...still in setup and tuning mode for the new build, on stock everything, but it appears to be around 150-175 pts or so in PR and TSX Graphics with identical settings and temps...


----------



## GRABibus

J7SC said:


> ...still in set up and tuning mode for new build on stock everything, but it appears to be around 150 - 175 pts or so in PR, and TSX Graphics with identical settings and temps...


Maybe you're facing the small Zen 3 penalty in PR versus Intel, and it's not due to the update.









[Official] NVIDIA RTX 3090 Owner's Club


I have a 7800x system here that I tested on and I have my daily 10940x rig, Both systems score exactly the same with the same 3090 and overclocks. I also have 2 Kingpin 3090's here that port royal responds very well to adding msvdd and I also have a FTW Hydrocopper that has no msvdd control...




www.overclock.net










5950x with 3090kpe lost score over 10900k 3090kpe in port royal help (Page 2) - EVGA Forums


Has anyone found a fix for the 3090 not scoring as well on ryzen 5000? I just received a 5950x and msi meg ace x570 and it scores 600pts lower. Only thing i can figure is my oc on my 10900k is helping it out. 5200 core 4800 cache. The 5950x is nice just useless if i cant use it for benchmarking...



forums.evga.com





Could you try the tips jomama gave me, to see if they help?

Disable SMT
Disable 1 CCD
Set core count = 4
Set a static OC as high as possible (for example, I run 4.9 GHz on the four cores)

This gains me 150 more points in PR versus my 24/7 PBO/CO overclock.

By the way, enjoy your new rig !


----------



## des2k...

...


J7SC said:


> ...fyi there was a 3DM 'update' between last night and earlier today and so far, it seems to cost a few points in PR, TSX etc- even with sequential runs to replenish cache (though may be I should just wipe that...). In addition, there are multiple stories about the latest (April) Win 10 '''update''' hurting some fps, and otherwise causing headaches.
> 
> *..reminisce / rant time:* I remember the good ol' days when a windows update was optional and would politely ask you whether to go ahead or not, and vBios could be edited in Notepad (ie. GTX 600 series)...now, Skynet everywhere
> 
> ...other than that, I'm enjoying the new X570 Dark Hero / 5950X / Strix 3090 OC w-cooled setup tremendously...all on (mostly) stock that can bench one moment, and do Handbrake / Blender / Adobe et al the next


yeah, my 15.04k PR is now 14.94k lol, same settings

Played some Destiny 2 Guardian Games event; it felt choppy in animations and when doing a really fast 360 with the camera. But I had the NVIDIA performance overlay up and saw no dips in fps. Almost feels like vsync/gsync randomly doesn't work (frame pacing?).

Went and removed the April update; that didn't fix it. So it might be Destiny. I'm going to remove another update that installed alongside the April one to see.


----------



## J7SC

des2k... said:


> ...
> yeah, my 15.04 pr is now 14.94 lol same settings
> 
> played some destiny2 guardian games event, It felt choppy for animations and when doing 360 really fast with the camera. But I had the nvidia performance overlay and no dips in fps. Almost feels like vsync/gsync randomly doesn't work(frame pacing?)
> 
> Went and removed the april update, that didn't fix. So it might be Destiny. I'm going to remove another update that installed with the April one to see.


...where's my Win XP 64 Pro slim ? >> not here


----------



## Thanh Nguyen

Quick run at 27c water.


----------



## inedenimadam

yzonker said:


> EVGA KPE HC blocks are really close I think. I assume you are monitoring the EVGA forum and Jacob's twitter feed. Need to jump on it. Don't be like me and end up at 10:00:34!


I have not logged into twitter for a year+, but I did today, and even put it on my phone because of your suggestion. I really want to get rid of the AIO.


J7SC said:


> ...still in set up and tuning mode for new build on stock everything, but it appears to be around 150 - 175 pts or so in PR, and TSX Graphics with identical settings and temps...


I have the update paused. Wonder how long before it forces itself on me.


----------



## KedarWolf

des2k... said:


> ...
> yeah, my 15.04 pr is now 14.94 lol same settings
> 
> played some destiny2 guardian games event, It felt choppy for animations and when doing 360 really fast with the camera. But I had the nvidia performance overlay and no dips in fps. Almost feels like vsync/gsync randomly doesn't work(frame pacing?)
> 
> Went and removed the april update, that didn't fix. So it might be Destiny. I'm going to remove another update that installed with the April one to see.


Check to see in Settings/System/Graphics settings if Hardware Accelerated GPU scheduling got turned off.

I finally broke 15500.

I scored 15 527 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## J7SC

inedenimadam said:


> (...)
> 
> I have the update paused. Wonder how long before it forces itself on me.


----------



## EarlZ

-delete-


----------



## sultanofswing

jomama22 said:


> It would be interesting to see what you clocks actually are during the bench. You're 8800 run was at 530w+, so definitely smacking against the power limiter in places. I wouldn't expect a 3% score difference between 2205 and 2235 starting clocks (is only 1.6% or somthing). So I would imagine it's getting downclocked hard during the bench either because of temp or power. Power will happily move you to the next lowest voltage point as opposed to just downclocking as downclocking isn't going to really save you all that much power. With how you have your curve set up, it could really just drop you hard.
> 
> Temp downclocking on the other hand will keep you at your voltage point usually and just drop those dumb 15mhz steps every whatever temp.
> 
> 
> Maybe your memory OC isn't really stable at such high clocks and is eating at your performance as well.
> 
> Also, I though I had a pic but don't, maybe I'll do another run, but the difference between 1.037v and 1.1v was a measly 20w in superposition (574 vs 594). Score was like 9170 or somthing.


So here is 2220MHz@1.037v. Obviously temp causes the clock to drop to 2205 throughout the run, but it's not power limited, as I flashed the 1kW BIOS just to remove that variable.
My ambient tonight is showing 24.8°C on my Aquaero or I would add more voltage, but as you can see it will hold 2205 throughout the whole run at 1037mv.
CPU is still at only 4.6GHz right now too; not sure if going back up to my overclock profile with 2 cores at 50 and the other 12 cores at 49 would help.


----------



## ALSTER868

sultanofswing said:


> So here is 2220MHz@1.037v


Would you post your curve? Thx


----------



## sultanofswing

ALSTER868 said:


> Would you post your curve? Thx


This is how I run it for benchmarking. There are some "tricks" to get it to work properly, at least on my end.
If you set the curve up like I have it and can keep temps below 45°C, what will happen is that the first time you run a benchmark, the GPU will drop 2 full bins down to 2190.
So what I do is set up the curve, launch a benchmark, let it run for around 3 seconds, and exit out of it.
When you go back to the curve, if you click on the 1037mv/2220 dot, move it up once, and hit apply, you will see the whole curve shift down 1 full bin; then raise it back up to 2220 at the 1037mv point and raise the 3 points before 1037 back up to 2205.
Doing that is how I keep the card from dropping more than 1 bin. It's a weird quirk of Afterburner that I have always had, and I assume other users have it as well. Like I said, if you just set up the curve and go with it, it will drop more than 1 bin based on temp.
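The "bins" being discussed are GPU Boost's 15 MHz clock steps. A toy model of the temperature downclocking, purely for illustration (the assumption of roughly one bin dropped per ~5°C above a 45°C baseline is hypothetical; the real boost algorithm is undocumented):

```python
def effective_clock(set_clock_mhz, temp_c, baseline_c=45, step_mhz=15, deg_per_step=5):
    """Toy model: drop one 15 MHz bin per `deg_per_step` degrees above baseline.

    The 15 MHz step size matches what GPU Boost actually uses; the
    baseline and degrees-per-step values here are invented for the sketch.
    """
    if temp_c <= baseline_c:
        return set_clock_mhz
    # ceil-divide the excess temperature into whole bins dropped
    bins_dropped = (temp_c - baseline_c + deg_per_step - 1) // deg_per_step
    return set_clock_mhz - bins_dropped * step_mhz

print(effective_clock(2220, 40))  # 2220 (cool: holds the set clock)
print(effective_clock(2220, 50))  # 2205 (one bin down)
print(effective_clock(2220, 56))  # 2175 (three bins down)
```

This is only a way to reason about why a curve set at 2220 ends up running 2205 or 2190 once the card warms up; the actual thresholds vary by card and BIOS.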


----------



## Fusion145

Has anyone here successfully flashed a Gigabyte Gaming OC 3090 BIOS (with reBAR) to an MSI Ventus 3090, and can confirm that reBAR works and the power limit can be changed to 390W?
Gigabyte RTX 3090 VBIOS


----------



## CZonin

Fusion145 said:


> Has anyone here successfully flashed a Gigabyte Gaming OC 3090 BIOS (with reBar) to an MSI Ventus 3090? And can confirm that reBar works and the powerlimit can be changed to 390W?
> Gigabyte RTX 3090 VBIOS


Not sure if this helps, but I just flashed that on my 3090 TUF a couple days ago. reBAR works and the power limit is now 390W.


----------



## GRABibus

KedarWolf said:


> Check to see in Settings/System/Graphics settings if Hardware Accelerated GPU scheduling got turned off.
>
> I finally broke 15500.
>
> I scored 15 527 in Port Royal
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


You should be 99 in HOF


----------



## Lobstar

Well, I got the block on my 3090 FTW3U but can't get a stable OC any higher than I had on air. Clocks are hella stable though and peg right where I set them. Despite that, scores are the same or lower in every bench. At 25°C water the card never exceeds 35°C, and I haven't seen memory over 48°C in gaming/benching. I'm also not power limited, just hitting weird artifacting at high mem clocks and crashes with high core clocks. Guess I just lost the lottery.


----------



## jomama22

sultanofswing said:


> This is how I run it for benchmarking. There are some "tricks" to get it to properly work at least on my end.
> If you set the curve up like I have it and can keep temps below 45c what will happen is the first time you run a benchmark the GPU will drop 2 full bins down to 2190.
> So what I do is setup the curve, launch a benchmark and let it run for around 3 seconds and exit out of it.
> When you go back to the curve if you click on the 1037mv/2220 dot and move it up one time and hit apply you will see the whole curve shift down 1 full bin, then raise it back up at the 1037mv point back to 2220 and then the 3 before 1037 back up to 2205.
> Doing that is how I keep the card from dropping more than 1 bin, it's a weird quirk of afterburner that I have always had and I assume other users should have it as well. Like I said if you setup the curve and just go with it then it will drop more than 1 bin based on temp.


Yeah, setting AB after the card is a bit warmer helps keep the clocks higher. The only quirk to be aware of is that if the game or whatever you're running hits a really non-intensive part and the temps drop substantially, it will put you into a higher clock bin than what you set, i.e. if you set 2020 as your max clock, it will jump up to 2035 (and usually the rest of the curve with it). It will go back down once it heats up again, but if you aren't stable at that higher bin, it can cause you to crash.


----------



## Spiriva

Fusion145 said:


> Has anyone here successfully flashed a Gigabyte Gaming OC 3090 BIOS (with reBar) to an MSI Ventus 3090? And can confirm that reBar works and the powerlimit can be changed to 390W?
> Gigabyte RTX 3090 VBIOS











GALAX RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com

This BIOS is for 2x8-pin cards, and it has Nvidia reBAR and a 390W power limit.


----------



## Fusion145

Spiriva said:


> GALAX RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com
>
> This bios is for 2x8pin cards, and got Nvidia BAR, and 390W.


Thanks for the tip. 
I just flashed this BIOS on my MSI Ventus OC 3090, and it works completely fine (390W+reBar).


----------



## Rhadamanthys

Fusion145 said:


> Thanks for the tip.
> I just flashed this BIOS on my MSI Ventus OC 3090, and it works completely fine (390W+reBar).


Did you check if you're actually hitting the 390W? There have been reports of reduced max power draw with 2x8 pin cards on a ReBAR bios.


----------



## Spiriva

Rhadamanthys said:


> Did you check if you're actually hitting the 390W? There have been reports of reduced max power draw with 2x8 pin cards on a ReBAR bios.


The Galax BIOS doesn't have that problem; I get ~394W max from it. I used to run the XOC 1000W on my 2x8-pin and it worked very well too, but I wanted to try BAR.
I've read that Gigabyte 2x8-pin BIOSes have problems reaching 390W.

https://www.reddit.com/r/gigabyte/comments/migq08


----------



## des2k...

390w... well that's no xoc, heavy rtx will drop freq like crazy on 2pin cards + no heavy mem oc

somebody needs to leak that rebar xoc😁 already...


----------



## WillP

des2k... said:


> 390w... well that's no xoc, heavy rtx will drop freq like crazy on 2pin cards + no heavy mem oc
> 
> somebody needs to leak that rebar xoc😁 already...


Yeah, ASUS has just released, finally, their REBAR for Z390 bios. Come on one of you Kingpin owners. 🙂


----------



## yzonker

Spiriva said:


> The Galax bios dont have that problem. I get ~394W as max from it. I use to have the XOC 1000W on my 2x8pin it was working very well to but i wanted to try BAR.
> Ive read that Gigabyte 2x8pin bioses have problem to reach 390W.
> 
> 
> 
> https://www.reddit.com/r/gigabyte/comments/migq08


Yea I saw a peak of 401w with that bios. Haven't tried a Gigabyte bios yet. Tempted to try the aorus bios. It worked really well before.


----------



## yzonker

des2k... said:


> 390w... well that's no xoc, heavy rtx will drop freq like crazy on 2pin cards + no heavy mem oc
> 
> somebody needs to leak that rebar xoc😁 already...


Yea I want to get back to the XOC. I've been playing RDR2 with everything maxed; on 385W it downclocks a lot. ReBAR does almost nothing in that game too.


----------



## J7SC

...so is the 'regular' KPE 520W w/ r_BAR publicly available, just not/not yet the 1k W XOC r_BAR ?


----------



## yzonker

J7SC said:


> ...so is the 'regular' KPE 520W w/ r_BAR publicly available, just not/not yet the 1k W XOC r_BAR ?


Yes, that's correct. The 520W BIOS is on TechPowerUp, I think, as well as posted somewhere on the EVGA forums. The 1kW exists, but AFAIK only in the hands of KP owners. Vince is telling them not to share it.


----------



## yzonker

Best chance of us getting the new 1kw bios is if they offer the KP HC and I could manage to win the F5 contest that I failed on the FTW3 HC.


----------



## J7SC

yzonker said:


> Yes, that's correct. The 520w bios is on Techpowerup I think as well as posted somewhere on evga forums. 1kw exists, but AFAIK only in the hands of KP owners. Vince is telling them not to share it.


Tx  Does the KPE r-BAR 520W include extra PL (i.e. 120% or whatever), or is that on top? After w-cooling, Kryonaut and Thermalright pads, I'm consistent w/ decent GPU clocks on the Strix in Superposition et al on the stock BIOS (still slowly upping the VRAM after the new thermal pads), but I figure 530W to 550W would be ideal for some select benches w/o having safeties disabled, as I also use this system at times for productivity.


----------



## yzonker

J7SC said:


> Tx  Dos the KPE r-BAR 520W include extra PL (ie.120% or whatever), or is that on top ? After w-cooling, Kryonaut and Thermalright pads, I'm consistent w/ decent GPU clocks on the Strix in Superposition et al on stock bios (still slowly upping the VRAM after new thermal pads), but figure 530W to 550W would be ideal for some select benches w/o having safeties disabled as I also use this system at times for productivity.
> 
> View attachment 2487463


No, it's 430W base with 520W max using the power slider. 120 or 121%, whatever that works out to be.


----------



## J7SC

yzonker said:


> No its 430w base with 520w max using the power slider. 120 or 121%. Whatever that works out to be.


Thanks  As posted yesterday, I plan to keep the Strix V3 r_BAR on one vBios and try out the KPE 520 W r_BAR on the other vBios setting.

BTW, a bit of weirdness: I had saved earlier MSI AB profiles from when the Strix was still air-cooled, complete with fans at 97%...I noticed that on water, said profile still showed 97% for fans (though no rpm). Still, the overall power usage was down by about 15W - 20W compared to setting fans to '0'. I know that doesn't make much sense, but I tried it twice...

The other thing with saved profiles and going from air to water is what others have also reported: all of a sudden, in lighter scenes in for example 3DM TSX, it will boost up to way crazy numbers (due to the improved temps / boost algorithm) and sometimes hang as a result.


----------



## jomama22

J7SC said:


> Tx  Dos the KPE r-BAR 520W include extra PL (ie.120% or whatever), or is that on top ? After w-cooling, Kryonaut and Thermalright pads, I'm consistent w/ decent GPU clocks on the Strix in Superposition et al on stock bios (still slowly upping the VRAM after new thermal pads), but figure 530W to 550W would be ideal for some select benches w/o having safeties disabled as I also use this system at times for productivity.
> 
> View attachment 2487463


The Superposition bench isn't going to give you realistic usage clocks, just an FYI. Time Spy Extreme will be closer to reality, though your actual stable clocks will depend on the game.

TSE is also a good way to test different memory clocks. Just set a low enough core clock so it's consistent run to run (honestly, just leave the card at stock with max power limit) and step through the memory clocks. You can probably start at something like +1200 and work up from there in steps of 25 or so. Go through the whole range up to around +1800, or until it crashes. Once you find the highest score out of the tests, re-run it a few times to confirm it's actually stable (not just not crashing, but that the score doesn't fluctuate much), then give it a go in games. If it crashes in games, back off 50MHz or so.

Also, a memory crash will usually full-on reboot the system, while a core crash will just kick you out of the bench.
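The sweep above can be sketched as a simple search loop. This is an illustration only: `fake_results` is invented data standing in for real Time Spy Extreme graphics scores, since the actual tuning is done by hand in Afterburner.

```python
def best_memory_offset(scores_by_offset):
    """Pick the highest-scoring memory offset, mimicking the sweep:
    start at +1200 MHz, step by +25, stop at the first crash (None)."""
    best_offset, best_score = None, float("-inf")
    for offset in range(1200, 1825, 25):
        score = scores_by_offset.get(offset)
        if score is None:           # crash (or untested): stop sweeping upward
            break
        if score > best_score:      # higher clocks can score WORSE once GDDR6X
            best_offset, best_score = offset, score  # error correction kicks in
    return best_offset, best_score

# Invented example: score peaks at +1500, then error correction eats performance
fake_results = {o: 9000 + o - (0 if o <= 1500 else 3 * (o - 1500))
                for o in range(1200, 1725, 25)}
fake_results[1725] = None  # "crashed" at +1725
print(best_memory_offset(fake_results))  # (1500, 10500)
```

The key point the loop encodes is the one from the post: you pick the offset with the best *score*, not the highest offset that merely survives, and then re-run to confirm the score is repeatable.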


----------



## Beagle Box

For owners of the MSI Gaming M5 Z370 Motherboard and an i9-9900k, you're in luck.
There's a Resizeable BAR BIOS for you.
MSI Support Site: BIOS 7B58v1B2(Beta version)


----------



## CZonin

So I set a random goal for myself to get 14k in PR and have that as my daily settings. I'm also trying to be somewhat conservative in my OC as I don't want to kill this card in the ~2 years that I'll likely have it. My setup is a 10900k @ 5.0, 3090 TUF, and 3600 CL16. I deshrouded the card and added 3 Noctua NF-A9. With a pretty silent fan curve the temp never goes over 62C even with the settings below.

At stock my card was hitting around 13300 in PR. I flashed the 390W Gigabyte Gaming OC bios and added 95 to core, 700 to mem, and 105 power limit. With that I get ~13960 consistently. Adding more to the memory doesn't give any additional score. I've found that CP2077 is a good stress test for core and will crash in it after all my other test games are stable. Going over 95 on the core causes crashes.

What should be my next steps? Should I add a little voltage to see if it helps getting over 95 stable on the core, or do I try another bios that has a slightly higher power limit? Or do I just quit while I'm ahead at this point and not cause the card to degrade any more than my settings already will.


----------



## inedenimadam

Luumi just popped 19,309 on a KPE. He must have the 1kw w/rebar.


----------



## J7SC

jomama22 said:


> The superposition bench isn't going to give you realistic usage clocks, just a fyi. Timespy extreme will be closer to reality. Depends on the game what your actual stable clocks will be though.
> 
> Tse is also a good way to to test different different memory clocks. Just set a low enough clock so it will be consistent run to run (honestly, just leave the card at stock and max power limit) and go through the memory clocks. You can probably just start at something like +1200 and work up from there in steps of 25 or so. Just go through the whole range up to like 1800, or until it crashes. Once you find the highest score out if the tests, re run it a few times to confirm it's actually stable (not just not crashing, but that the score doesn't fluctuate largely) and then give it a go in games. If it crashes in games, back off like 50mhz or so.
> 
> Also, a memory crash will usually just full on reboot the system, core will just kick you out of the bench.


...yeah, I use Superposition not so much as a yardstick for clocks but because of the 8K setting I can use to give a larger amount of VRAM a workout (btw, I did a few years at HWBot, including sub-zero back in the day). I just made a custom solution for the backplate which required custom mounting to the PCB, that's why I am stressing and stepping up the VRAM slowly; so far, I'm happy with the results re. temp improvements. In my experience, new thermal pads take a few heat cycles to settle, depending on the materials. 

...I do have a nice database now of TimeSpy Extreme, Port Royal and others for air cooling, and am now building one up for water. Also, initially my Strix was mounted on an X99 Rampage XTR 5960X HEDT/quad-RAM system, then moved to a new Asus Crosshair VIII Hero / 3950X...no sooner had I started to put that together than I had a great but unexpected offer for a new 5950X / Dark Hero combo - I love that combo (and have other uses for the 3950X, as I am in the computer-related field). Bottom line, though, is that combined with work getting really busy and constantly changing hardware configs on this build over the last 2+ months, I am only now getting around to finishing the build (it is not in a normal computer case) and setting it up for hybrid work-play uses.


----------



## bmgjet

CZonin said:


> So I set a random goal for myself to get 14k in PR and have that as my daily settings. I'm also trying to be somewhat conservative in my OC as I don't want to kill this card in the ~2 years that I'll likely have it. My setup is a 10900k @ 5.0, 3090 TUF, and 3600 CL16. I deshrouded the card and added 3 Noctua NF-A9. With a pretty silent fan curve the temp never goes over 62C even with the settings below.
> 
> At stock my card was hitting around 13300 in PR. I flashed the 390W Gigabyte Gaming OC bios and added 95 to core, 700 to mem, and 105 power limit. With that I get ~13960 consistently. Adding more to the memory doesn't give any additional score. I've found that CP2077 is a good stress test for core and will crash in it after all my other test games are stable. Going over 95 on the core causes crashes.
> 
> What should be my next steps? Should I add a little voltage to see if it helps getting over 95 stable on the core, or do I try another bios that has a slightly higher power limit? Or do I just quit while I'm ahead at this point and not cause the card to degrade any more than my settings already will.



PR is basically a power-limit benchmark: the higher your power limit, the higher your score will be.
More voltage will give you a lower score, since you'll be hitting the power limit harder.
The best thing you can do is reduce power draw: turn off any RGB on the card, power the fans externally, and get the card as cool as you can, since a cooler card uses less power at the same clocks.


----------



## J7SC

inedenimadam said:


> Luumi just popped 19,309 on a KPE. He must have the 1kw w/rebar.


...I just saw that battle for Port Royal (below) between Luumi / KPE and OGS, who is running a Galax HOF. Time for some popcorn....  I wonder if Galax has an r_BAR XOC yet. 

I like watching Luumi's vids, low key but very informative. I think his KPE broke something (a cap?) recently and he had it fixed by KingPin's team, as far as I recall. Speaking of crazy clock stuff, DerBauer hit 3225 MHz on a 6900 XT under LN2...


----------



## sultanofswing

Well, I can promise you guys the 1kW BAR BIOS is not what is causing the score increase; the newer drivers are.


----------



## J7SC

sultanofswing said:


> Well I can promise you guys the 1kw BAR BIOS is not what is causing the score increase, The newer Drivers are.


...just loaded the previous 'new' drivers, oh well. 

BTW, neither Luumi nor OGS were running 466.11 yet - probably a battle of the thumb drives...


----------



## sultanofswing

J7SC said:


> ...just loaded the previous 'new' drivers, oh well.
> 
> BTW, neither Luumi nor OGS were running 466.11 yet - probably a battle of the thumb drives...


The driver before 466.11 also gave the big boost in Port Royal, so it's not confined to 466.11.
I am actually getting a slightly worse score in Superposition with the 1kW REBAR BIOS.


----------



## J7SC

sultanofswing said:


> The driver before 466.11 also gave the big boost in port royal so it's not confined to 466.11
> I am actually getting a little worse score in Superposition with the 1kw REBAR BIOS.


...the biggest improvement re. r_BAR on the stock Asus Strix v3 BIOS I have seen is with Cyberpunk 2077, specifically around frame times at 4K RTX Ultra / Psycho. FS2020 supposedly gets slightly worse results with r_BAR, but I'm capped at 60 fps anyway and haven't noticed either an improvement or a degradation. Per the above post, just getting down to some 'final config' benchmarking ('famous last words' re. final config  ...)


----------



## jomama22

inedenimadam said:


> Luumi just popped 19,309 on a KPE. He must have the 1kw w/rebar.


Rebar isn't active with PR (at default) and gives you less performance when it is (set manually). The driver is what gives the performance boost.


----------



## CZonin

bmgjet said:


> PR is basically a power limit benchmark. Higher your power limit the higher your score will be.
> More voltage will give you a lower score since youll be hitting the power limit harder.
> Best thing you can do is reduce the power draw. Turn any RGB on the card off. Power the fan externally and get the card as cool as you can since cooler card will use less power for the same clocks.


Ya that makes sense. With the deshroud I've already disconnected the RGB and have the fans going through a hub. It seems like my options are to either go for a more aggressive fan curve to keep temps down, or flash a different bios with a higher power limit.

What's the max power limit I should go for with a card that has 2x8 pin?


----------



## des2k...

CZonin said:


> Ya that makes sense. With the deshroud I've already disconnected the RGB and have the fans going through a hub. It seems like my options are to either go for a more aggressive fan curve to keep temps down, or flash a different bios with a higher power limit.
> 
> What's the max power limit I should go for with a card that has 2x8 pin?


Depends on the cooling and what freq you want to hold for your OC.

The official vBIOS max would be 390W for 2x8-pin.

The XOC 1000W vBIOS would be 600W+ max; it holds 2100+ under any load.


----------



## acrvr

Anyone mining ETH on a 3090 Kingpin want to share their numbers? I can get 126MH/s, but power usage is very high at 311W. A friend with a 3090 TUF was able to get 126MH/s at 260W. 

Thanks


----------



## bl4ckdot

Pushing the HOF on air a bit more : NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10900K Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XIII HERO (3dmark.com) 
I really need a waterblock 😅
Getting my LN2 pot next week


----------



## GRABibus

bl4ckdot said:


> Pushing the HOF on air a bit more : NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10900K Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XIII HERO (3dmark.com)
> I really need a waterblock 😅
> Getting my LN2 pot next week


Omg...sure it’s a fake 😂😂


----------



## J7SC

When I re-pasted & re-padded my 3090, I noticed just how 'raised' the GPU die lettering was, far more pronounced than on any other GPU I have ever seen. I had read elsewhere that it is an issue with this Ampere gen, but still - I couldn't believe it. For warranty and other purposes, I don't think I want to lap that, even though I am sure it would make a difference (I have seen DerBauer lap a KPE 3090). I have lapped (and for that matter delidded) several CPUs, but lapping a GPU mounted on the PCB is not easy to begin with. Have any of you folks attempted that with your 3090?


----------



## bmagnien

sultanofswing said:


> I am actually getting a little worse score in Superposition with the 1kw REBAR BIOS.


Which BIOS?


----------



## sultanofswing

bmagnien said:


> Which BIOS?


KP 1KW Rebar.


----------



## Bobbylee

sultanofswing said:


> KP 1KW Rebar.


Didn’t think this was out, got a link?


----------



## bmagnien

sultanofswing said:


> KP 1KW Rebar.


Maybe stupid question but that’s being kept private for the time being? or do you have the ability to share?


----------



## sultanofswing

Bobbylee said:


> Didn’t think this was out, got a link?


There is no link right now.


----------



## Lobstar

sultanofswing said:


> There is no link right now.


Translated: I'm choosing not to make one


----------



## jomama22

J7SC said:


> When I re-pasted & re-padded my 3090, I noticed just how 'raised' the GPU die lettering was, far more pronounced than on any other GPU I have ever seen. I had read elsewhere that it is an issue with this Ampere gen, but still - couldn't believe it. For warranty and other purposes, I don't think I want to lap that even though I am sure it would make a difference (I have seen DerBauer lap a KPE 3090). I have lapped (and for that mater delidded) several CPUs, but lapping a GPU mounted on the PCB is not easy to begin with. Has any of you folks attempted that with your 3090 ?


You would most likely need to sand the standoffs for the full-cover block as well to retain pressure on the die. You'd also need to make sure the thermal pads can squish enough under the shortened height. Granted, it should only be a few fractions of a mm, so the pads would probably be fine, but the GPU mounting pressure is what I would be most concerned about.

It works on LN2 pots because they are designed to only touch the die itself, so pressure won't be an issue.

Also just keep in mind how much PCB bending you're OK with. Shouldn't be an issue though.

It most likely wouldn't gain you all that much in temperature, either. The sanding for LN2 is done more so that the paste doesn't crack when the chip is heated and cooled down. Wouldn't be worth it to me, tbh.


----------



## KingKnick

Lobstar said:


> Translated: I'm choosing not to make one


Nice guy


----------



## Bobbylee

sultanofswing said:


> There is no link right now.


Can you upload it to TechPowerUp please?


----------



## J7SC

jomama22 said:


> You would most likely need to sand the standoffs for the full cover block as well to retain pressure on the die. Would also need to make sure the thermal pads can squish enough under the shortened hight. Granted it should only be a few fractions of a mm so the pads would probably be fine, but the gpu pressure is what I would be most concerned about.
> 
> It works on ln2 pots as they are designed to only touch the die itself, so pressure won't be an issue.
> 
> Also just have to keep in mind how much pcb bending you're ok with. Shouldn't be an issue though.
> 
> It most likely wouldn't gain you all that much in temperature though. The sanding for ln2 is done moreso so that the paste doesn't crack when the chip is heated and and cooled down. Wouldn't be worth it to me tbh.


...you are right about the cooler stand-offs also playing a role, and in any case, I just wanted to know if anyone had tried it w/ a w-block, as it is not as easy as lapping a CPU...FYI, here is DerBauer doing it, but ultimately for a friend's KPE 3090 for LN2 (subtitles in English if you watch it on YT)


----------



## 86Jarrod

J7SC said:


> When I re-pasted & re-padded my 3090, I noticed just how 'raised' the GPU die lettering was, far more pronounced than on any other GPU I have ever seen. I had read elsewhere that it is an issue with this Ampere gen, but still - couldn't believe it. For warranty and other purposes, I don't think I want to lap that even though I am sure it would make a difference (I have seen DerBauer lap a KPE 3090). I have lapped (and for that mater delidded) several CPUs, but lapping a GPU mounted on the PCB is not easy to begin with. Has any of you folks attempted that with your 3090 ?


I lapped my Strix but can't really tell a difference on water. Maybe a degree better water-to-die delta and a degree better hotspot delta. I just taped sandpaper to my old Samsung Note 4 lol.


----------



## KingKnick

Bobbylee said:


> Can you upload to it to tech power up please?


No he cant  because he is a cooool guy


----------



## des2k...

KingKnick said:


> No he cant  because he is a cooool guy


Well, a few have this vBIOS already; I would say start a VPN and upload it to TechPowerUp, no way they can trace you


----------



## jomama22

des2k... said:


> well a few have this vbios already, I would say start a VPN and upload to techpower, no way they can trace you


Depends. If they were forced to give a serial number as part of the deal, it could just be baked into the BIOS and then extracted by EVGA. The NDA may have terms that they will remove you from Elite status or void the product warranty, or something to that effect.

Would they actually do that though? Meh lol


----------



## J7SC

jomama22 said:


> Depends, if they were forced to give a sn as part of the deal, it could just be baked into the bios and the extrapolated out by evga. NDA may have terms that they will remove you from elite status and remove product warranty or somthing to that end.
> 
> Would they actually do that though? Meh lol


...with tight supply and also mining, warranties on some select (for now, mining) GPUs are tightening, per below. Not directly the same, and to each his own, but if they can find out that you loaded a 1kW XOC BIOS with safeties removed, it could get complicated re. warranty. Whether that's the case now, I do not know...but some RMA things might get tougher down the line.


----------



## jomama22

J7SC said:


> ...with tight supply and also mining, warranties on some select (for now mining) GPUs are tightening, per below. Not directly the same, and to each his own, but if they can find out that you loaded a 1kw XOC bios with safeties removed, it could get complicated re. warranty. Whether that's the case now, I do not know...but some RMA things might get tougher down the line.


Thankfully we live somewhere with decent consumer warranty laws lmao. 

But tbf, I can kinda see why an AIB would want to see what kind of BIOS you decided to run. Not sure many powertrain warranties would be valid after hooking two 18psi turbos up to the intake and then cracking the engine block lol.

I soldered shunts on my card, so I have no horse in this race anyway. 

I think you are going to see tighter warranties on mining cards because of the expected uptime and the environment they are usually in. I imagine a GPU sold for gaming or work purposes has some estimated use time and stress level baked in; mining far outstrips it, so it's to be expected.


----------



## bl4ckdot

Slightly better: NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-10900K Processor,ASUSTeK COMPUTER INC. ROG MAXIMUS XIII HERO (3dmark.com) 
Probably won't try again until I get better cooling for it


----------



## TonyBrownTown

HELP!

I have an Inno 3D x4 iChill that is not posting. I bought it off craigslist from someone who bought it from Alibaba LOL. (I was desperate …) It ran great for a month but would hard restart my machine sometimes.

The card turns on and all 4 fans spin, if I plug the 3090 into a MB with an onboard VGA it does not show in device manager (even hidden devices). I believe it froze during a bios/firmware update. There is no scorching on the card, every capacitor looks good.

This thread here has been helpful - [SOLVED] - Bricked 3090 looking for advice

Essentially I am trying to use a CH341A programmer to manually flash a new BIOS onto it. However the BIOS chip on here is ISSI IS25WP080










I found a data sheet that seems to elude to a 8Mbit/1Mbyte interface with a 1.8 voltage. I got the 1.8v adapter and I was able to pull of the ROM and save it (with CH341A programmer software).

ISSI Datasheet - https://www.issi.com/WW/pdf/25WP016_080_040_020.pdf

I then found a BIOS online here - Inno3D RTX 3090 VBIOS

Opened it in the CH341A programmer software and wrote it (after performing a chip erase). There is no ISSI entry in the manufacturer list, so I am manually entering 8Mbit/1MByte. It seems to connect to the chip fine (the light turns on?).

The concerning thing is that when I write and then verify, the contents do not match ...

The card still does not post. I also tried a Gigabyte 3090 BIOS, to no avail.

I have been looking at UEFI Tool to try and inspect these ROM files but I am pretty lost.
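Since the write-then-verify mismatch is the key symptom, one way to narrow it down is to read the chip back after writing and diff it against the image you wrote. A minimal sketch, assuming nothing about the programmer software (the file names are placeholders, not from my setup):

```python
# Sketch: after writing, read the chip back and diff it against the image
# you wrote. File names in the usage comment are placeholders.
def diff_dumps(written: bytes, readback: bytes, limit: int = 10):
    """Return up to `limit` (offset, wrote, got) tuples where the dumps differ."""
    mismatches = []
    for off, (w, r) in enumerate(zip(written, readback)):
        if w != r:
            mismatches.append((off, w, r))
            if len(mismatches) >= limit:
                break
    return mismatches

# Usage (placeholder file names):
#   wrote = open("Inno3D_RTX3090.rom", "rb").read()
#   got = open("readback.rom", "rb").read()
#   for off, w, r in diff_dumps(wrote, got):
#       print(f"0x{off:06X}: wrote 0x{w:02X}, read 0x{r:02X}")
```

Roughly speaking, bytes stuck at 0xFF suggest the erase took but the write didn't (often clip contact or supply voltage), while bytes stuck at their old values point at write protection.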

Hope this is the correct place, any help or ideas appreciated!

I managed to get a 3070, but the 3090 was spoiling me! 165 Hz 2560x1440 target on a Threadripper 2950X.

Thanks!


----------



## yzonker

jomama22 said:


> Depends, if they were forced to give a SN as part of the deal, it could just be baked into the BIOS and then extrapolated out by EVGA. The NDA may have terms that they will remove you from Elite status and void the product warranty, or something to that end.
> 
> Would they actually do that though? Meh lol


Yeah, I thought of this after I commented about sharing it if I got a KP. It's quite possible they're watermarking them; that's what I would do. You'd need two copies from different people to compare. Only way to know, I think.
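Not claiming this is what EVGA actually does, but the comparison itself is simple: if two owners dumped the same BIOS version, a per-card watermark would show up as small contiguous regions where the dumps differ while everything else matches. A sketch of that grouping:

```python
# Sketch: find candidate watermark fields by grouping the byte offsets
# where two dumps of the same BIOS version differ into contiguous spans.
def changed_regions(a: bytes, b: bytes):
    """Group offsets where two dumps differ into (start, end) spans, end exclusive."""
    regions, start = [], None
    for off in range(min(len(a), len(b))):
        if a[off] != b[off]:
            if start is None:
                start = off      # open a new differing region
        elif start is not None:
            regions.append((start, off))
            start = None
    if start is not None:        # region runs to the end of the dump
        regions.append((start, min(len(a), len(b))))
    return regions
```

A handful of short regions would be consistent with a serial or UID field; differences scattered everywhere would just mean the two files aren't the same BIOS build.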


----------



## jomama22

TonyBrownTown said:


> HELP!
> 
> I have an Inno 3D x4 iChill that is not posting. [snip]


If there is another chip ID that is close to it, you can try writing to it that way and see what happens. You should probably also check the PCB for shorts and such anyway, just to rule out a hardware fault.


----------



## PLATOON TEKK

It's been a hectic two days, but I just had a brief chance to try out the 1000W Galax BIOS on one of my Strix cards.

i) It flashes fine (unlike the other 1000W Galax BIOS)
ii) It is indeed a 1000W BIOS
iii) On the two PCs I tested, NO ReBAR
iv) Max reported wattage I hit during the FF bench was 445W
Edit:
v) No thermal limits

I barely had time to test it, but it seems legit, for what it's worth.


----------



## yzonker

PLATOON TEKK said:


> Been hectic two days but just had brief chance to try out the 1000w Galax on one of my Strix.
> 
> i) It flashes fine (unlike other 1000w Galax)
> ii) it is indeed a 1000w bios
> iii) on the two PCs I tested, NO rebar
> iv) max reported watts i hit during FF bench was 445 watts.
> 
> I barely had time to test it, but seems legit for what’s it’s worth.


Is it the newer one I posted the other day, from around March 17th? Safeties removed like the KP XOC? TechPowerUp didn't show any thermal limits, so I'm guessing yes.


----------



## PLATOON TEKK

yzonker said:


> Is it the newer one I posted the other day from around March 17th? Safeties removed like the KP XOC? Techpowerup didn't show any thermal limits so I'm guessing yes.


That's the one. Saw someone on the EVGA forums mention it and got curious.

Correct, it does not mention thermal limits here.


----------



## WillP

So those of us not in the Kingpin club who want an unlocked power limit AND ReBAR could potentially be waiting on a Galax BIOS as an alternative? Otherwise I might have to get my soldering kit and shunts out soon; I like no power limit...


----------



## Falkentyne

TonyBrownTown said:


> HELP!
> 
> Essentially I am trying to use a CH341A programmer to manually flash a new BIOS onto it. However the BIOS chip on here is ISSI IS25WP080 [snip]


The SkyPRO natively supports this chip (you still need a 1.8V adapter).
They call it ISSI IS25WP080D (go to the pulldown menu at the top right and choose English instead of Chinese).

It's shown under the support list.




SkyPRO (USB Programmer) - CORIGHT
(SkyPRO is CORIGHT's high-speed USB programmer. It supports 24/25/93-series EEPROMs, SPI flash, DataFlash, and AVR microcontrollers; it's small and portable, with a built-in resettable fuse, a buzzer, and a USB 2.0 interface up to 12 Mbps.)
www.coright.com

I've told many people they should get SkyPROs, but it seems most just ignore me and go get the cheapest programmer they can find (often with awful software).
I purchased mine off Amazon for my laptop MXM 1070 about 3 years ago, but it seems to be out of stock now.
They're all over AliExpress, but that's slow-boat shipping. Maybe there's an eBay link with a USA seller.









Amazon.com : WINGONEER Skypro high-Speed USB SPI Programmer Better Than EZP2010 EZP2013 Supports 24Cxx 25Cxx 93Cxx 25Lxx EEPROM 25 Flash bios WIN7 WIN8 Vista : Electronics
www.amazon.com


----------



## yzonker

PLATOON TEKK said:


> Thats the one. Saw someone on EVGA mention it and got curious.
> 
> Correct, does not mention thermal limits here.


Ok, thanks. Yea I saw the same post I think which is what prompted me to go look and post it the other day. Figured it wasn't reBar though given the upload date. Needs to be about 3-30 or later.


----------



## TonyBrownTown

Falkentyne said:


> The Skypro natively supports this chip (still need a 1.8v adapter).
> They call it: ISSI IS25WP080D [snip]



Ordered one, will report back. Thanks!


----------



## Falkentyne

TonyBrownTown said:


> Ordered one, will report back thanks!


Where did you order one from? I couldn't find any links except AliExpress.
My SkyPRO 1 still works, but I wonder if a SkyPRO 3 would be a useful (if expensive) toy to have someday.


----------



## TonyBrownTown

Falkentyne said:


> Where did you order one from? I couldn't find any links except aliexpress.
> My 1 still works but I wonder if a skypro 3 would be a useful expensive toy to have someday or not.


Amazon randomly, said 1 left. $30, here in two days hopefully!


----------



## Falkentyne

TonyBrownTown said:


> Amazon randomly, said 1 left. $30, here in two days hopefully!


Remember you need a decent clip for it, and some male-to-female cables. Don't use cheap Chinese clips. (Edit: just saw your picture above. Yep, that's a cheap Chinese clip and a pin adapter. Many of those clips make it impossible to get a proper read on the chip without errors. My RT809F's default clip worked fine on my GTX 1070, but another Chinese clip I bought was atrocious. The Pomona 5250 worked the best. Please use what I linked below.)

This is what I have; no regrets. I highly suggest buying the Pomona clip. The wires make the clip a no-solder, no-fuss connection to the 1.8V adapter (and 3.3V to motherboard BIOS chips, if not using the 1.8V adapter).









Pomona Electronics 5250 8-Pin Gold Plated SOIC Clip Test Clip with 0.1" Lead Spacing: Amazon.com: Industrial & Scientific
https://www.amazon.com/gp/product/B01EV70C78/


(And of course the 1.8V adapter, which you already have.)

(Too bad the 3090 FE has a UDFN-8 chip rather than an SOP-8 chip, so in-line flashing won't work, and I don't have the soldering skills to solder wires to something that tiny.)


----------



## KingKnick

I don't understand why nobody reads this BIOS out and uploads it with GPU-Z!? How can you be so unfair? A forum is there to support each other! If someone sends me the BIOS by email, I'll upload it via TechPowerUp!


----------



## jomama22

KingKnick said:


> I don't understand why nobody reads and uploads this bios with GPUZ !? How can you be so unfair? A forum is there to support each other! If someone sends me the bios by email, I upload it via techpowerup!


Possibly because of what was mentioned above. It's not unfair; no one has to upload it if they don't want to. Just shunt your card if you want the extra headroom and ReBAR so badly.


----------



## bmgjet

KingKnick said:


> I don't understand why nobody reads and uploads this bios with GPUZ !? How can you be so unfair? A forum is there to support each other! If someone sends me the bios by email, I upload it via techpowerup!


Because they don't want the UID of their card exposed like the last guy to leak it, and then have all the legal trouble.


----------



## WillP

KingKnick said:


> I don't understand why nobody reads and uploads this bios with GPUZ !? How can you be so unfair? A forum is there to support each other! If someone sends me the bios by email, I upload it via techpowerup!


Yeah, I wouldn't blame individuals for not wanting to risk legal action or being blacklisted for future releases. I'd blame Nvidia and the AIBs, if anyone, for locking down their product line-ups so artificially. That said, there has to be some blame also attributed to the end users given the number of posts here discussing bypassing warranty restrictions...


----------



## WillP

bmgjet said:


> Because they dont want there UID of there card exposed like the last guy to leak it. Then have all the leagal trouble.


Do you have any more details on this "legal trouble"? I remember someone ages back in this thread claiming to have been the one to leak it and regretting it, but it seemed their regrets were more about people bricking their cards with it. I don't remember legal issues being mentioned. I'm just curious.


----------



## sultanofswing

If I get permission to share it I will. Until then, just use the non-ReBAR version, as the ReBAR version doesn't improve benchmark scores. After all, that is what this BIOS is for, not gaming.


----------



## GRABibus

sultanofswing said:


> the rebar version doesn't improve benchmark scores.


That's true.

And I think it also increases temperature.
I mainly play two games at the moment (BFV and Cold War).
I had had ReBAR enabled (even though Cold War doesn't support it) since it launched.

I just disabled it yesterday: my GPU temps went down by roughly 3°C.

PS: I am on the MSI Suprim X F5 BIOS.


----------



## yzonker

sultanofswing said:


> If I get permission to share it I will, until then just use the non rebar version as the rebar version doesn't improve benchmark scores. After all that is what the bios is for, not for gaming.


I don't expect you or anyone else to share it of course.

Although us 2x8-pin guys need it to get above 390W.


----------



## inedenimadam

never even got to see it in stock...was there any stock? Are they releasing new items on notify only?! I want it three weeks ago.


----------



## rusky1

At the tail-end of my 5950X + 3090FE loop build. This is what the new EK 3090FE limited edition (black) block looks like in person:


----------



## yzonker

inedenimadam said:


> View attachment 2487722
> never even got to see it in stock...was there any stock? Are they releasing new items on notify only?! I want it three weeks ago.


You had to play the F5 war with the auto-notify that opened at 10:00 PT.

I tried for the FTW3 HC and got in at 10:00:34, which wasn't in the first drop. It only got to 10:00:16.


----------



## KingKnick

sultanofswing said:


> If I get permission to share it I will, until then just use the non rebar version as the rebar version doesn't improve benchmark scores. After all that is what the bios is for, not for gaming.


I'm sorry, but what kind of scaredy-cat are you? Are you serious? You could upload it anonymously! How can you be so selfish? I know exactly what kind of guy you are xD


----------



## sultanofswing

KingKnick said:


> I'm sorry, but what kind of scaredy are you? are you serious? you could upload it anonymously! how can you be so selfish? i know exactly what kind of guy you are xD


What kind of guy am I? The kind of guy that owns 2 Kingpin cards, is friends with Vince, and was told not to share it. So I will abide by his wish, as he is a friend I look up to and get a lot of advice from.
Complaining about not being able to get it seems kind of childish to me, honestly. If I didn't have a Kingpin card and wanted the BIOS, I would do what everyone else does and just wait.


----------



## inedenimadam

yzonker said:


> You had to play the F5 war with the auto notify that opened at 10:00 PT.
> 
> I tried for the FTW3 HC and ended up at 10:00 :34 and wasn't in the first drop. It got to 10:00:16.


I was there. It never went green! I had to try on mobile. I'm at work and far from a desk. I guess I'll be checking my email 8 times a day until....

I would guess I hit the auto notify 20 seconds after it became available.


----------



## jomama22

inedenimadam said:


> I was there. It never went green! I had to try on mobile. I'm at work and far from a desk. I guess I'll be checking my email 8 times a day until....
> 
> I would guess I hit the auto notify 20 seconds after it became available.


It's $300 at that, for a block that doesn't even look to be very good. I wouldn't sweat it; I'd rather put the $300 towards an Optimus block whenever that decides to go on preorder.


KingKnick said:


> I'm sorry, but what kind of scaredy are you? are you serious? you could upload it anonymously! how can you be so selfish? i know exactly what kind of guy you are xD


Why don't you shunt your card? What kind of scaredy-cat are you? Why don't you buy a KP yourself?


----------



## TheFinnishOne

rusky1 said:


> At the tail-end of my 5950X + 3090FE loop build. This is what the new EK 3090FE limited edition (black) block looks like in person:
> 
> View attachment 2487723
> 
> 
> View attachment 2487726


Definitely a more unique-looking block than most out there. How are temperatures with that one?


----------



## J7SC

...so far, the only real benefit of r_BAR I have personally seen is in CP2077 frametimes, and the benefit is minor. Then again, I expect more r_BAR-enabled games and apps to appear, as this is fairly new. I did, however, notice a bit of strange behavior when I turned r_BAR off...

...working on the latest config update (5950X, Dark Hero, w-cooled Strix OC)... @T.Sharp - 'Cherenkov blue' ?!?


----------



## des2k...

bmgjet said:


> Because they dont want there UID of there card exposed like the last guy to leak it. Then have all the leagal trouble.


Wasn't the last leak on Vince's public website? There was no UID.

Question for you: if they dump the XOC ReBAR BIOS, and somebody with a non-Kingpin card flashes it and then dumps it again, the UID would be changed, no?


----------



## yzonker

PLATOON TEKK said:


> Been hectic two days but just had brief chance to try out the 1000w Galax on one of my Strix.
> 
> i) It flashes fine (unlike other 1000w Galax)
> ii) it is indeed a 1000w bios
> iii) on the two PCs I tested, NO rebar
> iv) max reported watts i hit during FF bench was 445 watts.
> Edit:
> v) no thermal limits
> 
> I barely had time to test it, but seems legit for what’s it’s worth.


Interesting. That Galax BIOS power-limits to just over 300W on 2x8-pin.


----------



## des2k...

yzonker said:


> Interesting. That Galax bios power limits to just over 300w on 2x8pin.


Probably a VBIOS like the ASUS XOC, where they just punch in a 1000W TDP, call it a day, and ignore all the other default limits 🙄


----------



## des2k...

I wonder if this ReBAR garbage will be useful when RTX IO comes out. All GPU assets are supposed to move to VRAM without system memory or the CPU being involved.

So with PCIe 4.0, I can see the default 256MB transfer size being too small.

I also wonder if RTX IO will require another VBIOS update, just enough to scare people away from sharing the ReBAR XOC 🤣


----------



## mirkendargen

inedenimadam said:


> View attachment 2487722
> never even got to see it in stock...was there any stock? Are they releasing new items on notify only?! I want it three weeks ago.


I have so many thoughts...

$300 is an utterly ridiculous price for that block.

A block instantly going out of stock? It's not that hard to make... just allow preorders a month ago and make however many get ordered.

If so many people want it, why the hell is EVGA defaulting to an AIO on Kingpins? I thought that was stupid from the beginning and a block should have been the default; this just reinforces that opinion. Just sell it with a block by default, then sell an optional AIO kit that's some tubes, a rad/pump combo, and some coolant. Then everyone is happy and gets better performance.


----------



## Beagle Box

mirkendargen said:


> I have so many thoughts...
> 
> $300 is a utterly ridiculous price for that block.
> 
> A block instantly going out of stock? It's not that hard to make...just allow preorders a month ago and go make however many.
> 
> If so many people want it, why the hell is eVGA defaulting to an AIO on Kingpins? I thought that was stupid in the beginning and a block should have been the default, this is just reinforcing that opinion. Just sell it with a block by default, then sell an optional AIO kit that's some tubes, a rad/pump combo, and come coolant. Then everyone is happy and gets better performance.


I think they make more money doing it AIO first, then water block.


----------



## rusky1

TheFinnishOne said:


> Definetly a more unique looking block than most out there, how are temperatures with that one?


I haven't gamed with this setup yet, but I did mine overnight with it. GPU core: 37C, GPU memory: 78C, CPU core: 45C.

CPU is stock + PBO; GPU is -500 core, +1100 memory, 85% power, giving about 120 MH/s.

The pump has been running at 50%, and the radiator fans on low/medium.


----------



## Falkentyne

des2k... said:


> prob a vbios like the asus xoc where they just punch in tdp 1000w and call it a day and ignore all other default limits🙄


And I'm the only person who ever seemed to notice this.


----------



## yzonker

Newest drivers/3dmark seems to perform well. Set my highest scores just now and my room is kinda warm.









I scored 18 223 in Time Spy
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

I scored 15 262 in Port Royal
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Nothing record setting of course.


----------



## tefla

Hey all! Just wanted to share some results from a 3090 FE shunt mod (5 mOhm stacked on all 6 original shunts)

*Pre-shunt-mod (114% Power)*

*14,545 PR*: I scored 14 545 in Port Royal
*21,467 TS*: I scored 20 498 in Time Spy

*Post-Shunt-mod + repad + repaste (114% Power)*

*15,008 PR*: I scored 15 008 in Port Royal
*22,054 TS*: I scored 21 025 in Time Spy

This is with the stock cooler. Definitely seeing the need to put this thing under water to really see what it can do. Even with an open case, cool ambient (~11°C), high fans, etc., I'm hitting ~68°C at the end of a PR run. Hopefully I can eke out a bit more to break into the top 10 of the 3090 + 5900X leaderboard; better thermals plus more finely tuned OC settings (including single-CCD and an aggressive memory OC) should do it.

The TS score improvement is low; I think I need further runs here. This was done in the same session as PR as a one-off, so I'm not entirely sure why I'm "only" getting around 22k. Maybe thermals, since it's a longer test?

I also think I rushed the reassembly. I was getting a hot-spot temp of 92-97c on those aforementioned runs. I could reduce this delta with a more careful repaste and remount, right?

Overall though, the shunt mod wasn't too bad difficulty-wise. Not gonna lie, it was incredibly nerve-wracking (my first time soldering was the same week I did the mod). But as long as you take your time, do your research, and come prepared with the right tools (definitely recommend practicing the stacking on an old gpu) it should go pretty smoothly. Big thanks to @Falkentyne for all the help. This dude basically wrote a gd instruction manual for me to follow and I'm very thankful for the contributors that make this forum the incredible resource it is.
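For anyone reading along, the power-reporting side of a stacked shunt mod is just parallel-resistor math: the controller senses the voltage drop across the shunt assuming the stock resistance, so once you halve the resistance, software under-reports power by the same factor. A back-of-the-envelope sketch, assuming 5 mOhm stock shunts for illustration (check your own card's values):

```python
# Back-of-the-envelope math for a stacked shunt mod. The 5 mOhm stock
# value below is an assumption for illustration, not a verified spec.
def parallel_mohm(r1: float, r2: float) -> float:
    """Effective resistance of two stacked (parallel) shunts, in milliohms."""
    return (r1 * r2) / (r1 + r2)

def actual_power_w(reported_w: float, r_stock: float, r_stacked: float) -> float:
    """The controller reads the shunt voltage drop assuming the stock
    resistance, so true power = reported * (R_stock / R_effective)."""
    return reported_w * r_stock / parallel_mohm(r_stock, r_stacked)

# 5 mOhm stacked on an (assumed) 5 mOhm stock shunt -> 2.5 mOhm effective,
# so a reported 350 W would really be about 700 W drawn.
```

That's why a shunted card can still show "114% power" in software while pulling far more at the connectors; budget your PSU and cooling around the true figure, not the reported one.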


----------



## J7SC

Beagle Box said:


> I think they make more money doing it AIO first, then water block.


...for 500 W plus minus, many of the air-cooled and AIO 3090 cards are doing quite well (then again > at the cost of sheer weight and size). But after all is said and done, 500 W heat energy focused on smallish 'hotspots' sounds like firm w-block territory to me, at least if you're sensitive to fan noise


----------



## Falkentyne

tefla said:


> Hey all! Just wanted to share some results from a 3090 FE shunt mod (5 mOhm stacked on all 6 original shunts) [snip]


TS scores decreased drastically on the last two Nvidia drivers, while PR scores increased by the same amount. Your scores look pretty typical of the newest driver shenanigans.

Which hotspot, core or memory junction? There are two of them.
A 68C core with a 97C core hotspot is atrocious and means part of the core has no pressure or paste on it. A 97C memory junction means you may have repadded poorly. What pads did you use, at what thickness, and what thickness on which side of the card (or which section of that side)?

I'm starting to develop a theory that Gelid Extreme (NOT Ultimate) 1.5mm pads may be better on the core side (GDDR6X and VRMs) than Thermalright Odyssey pads, not because of the VRAM (the W/mK is slightly lower, after all) but because they are less hard and more compressible, which would increase core heatsink contact pressure and improve thermal paste longevity. I just put Gelid 1.5mm pads on my card yesterday and will see how they work with TFX. I've had to repaste several times due to core hotspot and average core temps slowly increasing, even with TFX, and I suspect the hardness of the Thermalright pads may be reducing contact pressure.


----------



## yzonker

Kinda odd, though. Comparing 390W vs the KP XOC at the voltage limit (PL at 100%) in RDR2, it only gains about 2 fps (like 47 to 49), yet the clock speed I see on the AB overlay increases by almost 200 MHz. Doesn't seem to scale very well.

Why doesn't it scale better? Not enough memory speed? My card is gimped and only goes to +600 or so. Or is it just an inherent limitation of the 3090 or RDR2?


----------



## tefla

Falkentyne said:


> TS scores decreased drastically on the last two Nvidia drivers, while PR scores increased by the same amount. Your scores look pretty typical of the newest driver shenanigans.
> 
> Which hotspot? Core or Memory junction? There are two of them.
> 68C core and 97C Core hotspot is atrocious and means part of the core has no pressure or paste on it. 97C Memory junction means you may have repadded poorly. What pads did you use and what thickness, and what thickness on what side of the card, (or what section of that side of the card)?


Core. Atrocious indeed haha, I plan on fixing that at the first possible opportunity. Like I said, I was pretty exhausted after doing the shunt mod and must have rushed, I’ve never done such a poor job. 😅

Mem junction temps are great, top out in the 80s while mining. 1.5mm thermalright all around.

Your thoughts on the core side pads makes sense to me and I might be an additional guinea pig and try that. I’m certainly a little confused on what a large delta I have. Even in other times I’ve rushed I’ve never had such a large delta. I used the x with dots in between for paste on the core, tightened mount progressively switching sides, it _shouldn’t _be this bad.


----------



## Falkentyne

yzonker said:


> Kinda odd though. Comparing 390w vs KP XOC at the voltage limit (PL at 100%) in RDR2. It only gains about 2 fps (like 47 to 49), yet the clock speed I see on the AB overlay increases by almost 200 mhz. Doesn't seem to scale very well.
> 
> Why doesn't it scale better? Not enough mem speed? My card is gimped and only goes to +600 or so. Or just inherent limitations in the 3090 or RDR2?


Did the effective clocks also increase by 200 MHz?
If requested core clocks increased by 200 MHz but effective clocks only went up by 60 MHz, well, there you go. Also, in some cases (requested clocks move in 15 MHz steps), +90 MHz may give you 2 FPS, 5 FPS, or 0 FPS. It all depends on what you're running.
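To put rough numbers on that: in a GPU-bound scene, fps scales at best linearly with the effective core clock, so the expected gain from a clock bump is easy to estimate. A naive sketch (the 1900 MHz baseline is an assumed figure for illustration, not a number from the posts above):

```python
# Naive GPU-bound estimate: fps scales roughly linearly with *effective*
# core clock. The 1900 MHz baseline is an assumption, not measured data.
def expected_fps(base_fps: float, base_clk_mhz: float, new_clk_mhz: float) -> float:
    """Scale a baseline fps by the ratio of effective core clocks."""
    return base_fps * (new_clk_mhz / base_clk_mhz)

# If requested clocks rose ~200 MHz but effective clocks only rose ~60 MHz:
#   expected_fps(47, 1900, 1960)  -> ~48.5 fps (about the 2 fps gain seen)
#   expected_fps(47, 1900, 2100)  -> ~52 fps if the full 200 MHz had stuck
```

So a ~2 fps gain is exactly what you'd expect if only a fraction of the requested clock bump shows up in effective clocks; anything less than linear beyond that points at a memory or CPU bottleneck.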


----------



## Falkentyne

tefla said:


> Core. Atrocious indeed haha, I plan on fixing that at the first possible opportunity. Like I said, I was pretty exhausted after doing the shunt mod and must have rushed, I’ve never done such a poor job. 😅
> 
> Mem junction temps are great, top out in the 80s while mining. 1.5mm thermalright all around.


Best paste application method is X with dots in quadrants. 



http://imgur.com/61sbQk3


Anyway,

I repadded the core side with Gelid 12 W/mK, Shore 35, 1.5mm pads yesterday, so I haven't had a lot of time to judge durability (slow boat, 120mm x 120mm from AliExpress). A quick 5-minute Heaven run at 114% TDP (Power: 535W) gave a core-to-hotspot delta of 10.4C (the lowest I've ever gotten), 70.9C to 81.3C. I have no idea what the precise Shore rating of the Odyssey pads is; the package says 30-55 (Type OO), but I have absolutely no idea why there's a range, because there's a pretty big difference between 30 and 55. Gelid Ultimate are 60-70 (why is there a RANGE?), and Gelid Extreme are 35. Gelid Ultimate 15 W/mK are probably too thick for the core side (the core-to-heatsink tolerances are too tight: 1.5mm means contact but poor, 1.0mm means VRAM contact is compromised, 2mm means zero contact on the core), while Gelid Extreme 12 W/mK at 2.0mm has been tested by a few others (3090 FE) as allowing core contact (though I'm 100% sure it will be poor contact, still much better than Thermalright 2mm or Gelid Ultimate 2mm, both of which a few users tested and saw no core contact at all!). The Gelid Extreme pads definitely feel softer than the Thermalright pads; it's very obvious when you're handling them.

The shunt mod is annoying. Not only do you have to apply 3M Kapton tape perfectly around the PCIe and Power Plane SRC shunts (GPU Chip Power has a bit more room), but then you have to get that damn solder bridge to stick on the depressed lower edges by heating the edge enough for the solder to actually flow and stay put rather than re-attach itself to the iron (no matter how well you applied flux). And then, when you apply the new shunt on top, you have to hope your bridge was properly done so the solder doesn't detach itself and migrate along the flux to the TOP of the new shunt, leaving no connection under it... um... yeah...

I'll find out in a few days if the Gelid 1.5mm Extreme pads helped TFX longevity or not.

I think, for the Thermalright 1.5mm pads, you actually need to manually compress them about 0.1mm (0.2mm max) BEFORE applying. This is judging from comparing the imprint of the stock pads vs the Thermalright pads: the stock pads create a deeper imprint (stock is also 1.5mm for the VRAM and 1.8-2.0mm for the VRMs; the VRM pads leave a much deeper imprint, so swapping to 1.5mm works for those). You can see it clearly here.



http://imgur.com/fCx1J8C


Neither of these is my picture, but this second one might show it more clearly.



http://imgur.com/LsJL2Z5


Here are the Thermalright core side pads I got out of the trash to take a picture of, so I'm sorry if this disturbs anyone.
You can see there is contact, but hardly any impression mark. It's very faint. You can compare it to the Nvidia "mesh fabric putty".



http://imgur.com/1YfXsHk


The VRM strip has even less than this. It shows clear contact but no compression at all.

You can probably compress the Thermalright pads evenly by first cutting them to shape and checking that they fit. Then, instead of applying them, put them on a very flat surface with clean plastic wrap (stretched out) under them and a flat rectangle of glass on top, and press them down all at once (not TOO hard). Check the compression, trim them again to fit, then apply. This is just a guess.

So, just by that logic, the Gelid 1.5mm Extreme pads should be the better choice.


----------



## des2k...

"Best paste application method is X with dots in quadrants. "

The line or big-dots methods don't work very well on a big GPU die. You end up with high spots of paste that require a lot of pressure to spread, and I usually end up wasting expensive paste with those methods.

What works for my 3090 is mini dots all over the die; it never fails to give an even paste spread and it's very easy on the screws / mount pressure. This might be down to the stupid EK block, which reaches the standoffs very easily. I might sand it down 0.2mm this weekend.


----------



## yzonker

Falkentyne said:


> Did the effective clocks also increase by 200 mhz?
> If core requested clocks increased by 200 mhz but effective clocks only went up by 60 mhz, well there you go. Also in some cases, (in 15 mhz steps on requested clocks), +90 mhz may only give you 2 FPS or may give you 5 FPS or 0 FPS. All depends on what you're running.


Looks like 150-175 MHz in HWInfo. Ranging from the high 1800s to low 1900s at 390w, and 2070-2085 with the power maxed out (using a +120 core offset).

Still, going from 47 to 49 fps is only about half the percentage gain.

It holds true for PR too, if the 3DMark average clock is an indication:

Max power, voltage limit:

15262 score
70.66 FPS 
2,166 MHz average (according to 3DMark)

14437 score
66.84 FPS
1,976 MHz average (according to 3DMark)

FPS %Change: 5.7%
Clock %Change: 9.6%
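
The percentages above can be double-checked in a couple of lines (using only the numbers posted here, nothing assumed):

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100.0

fps_gain = pct_change(66.84, 70.66)   # ~5.7 %
clk_gain = pct_change(1976, 2166)     # ~9.6 %
scaling = fps_gain / clk_gain         # ~0.6: FPS scales at ~60% of the clock bump
```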


----------



## Falkentyne

des2k... said:


> "Best paste application method is X with dots in quadrants. "
> 
> Line or big dots method don't work very well for big GPU die. You end up with high spots of paste and require a lot of pressure to spread. I usually end up wasting expensive paste with this method.
> 
> What works for my 3090, is the mini dots all over the die, never fails for even paste spread & very easy on the screws / mount pressure. This might be the stupid EK block which reaches the stand offs very easy. I might sand it down .2mm this weekend.
> 
> View attachment 2487791


Yes I'm aware of that method. I posted it before.



http://imgur.com/VWLqHCL


But there's nothing wrong with a diagonal X with dots either. I get the same results.
When you say "line", you're probably referring to the single straight line method. Even the long diagonal X has been posted as a method by Thermal Grizzly and others. I just like to put dots in the middle of the quadrant X to ensure full coverage.


----------



## mirkendargen

Falkentyne said:


> Yes I'm aware of that method. I posted it before.
> 
> 
> 
> http://imgur.com/VWLqHCL
> 
> 
> But there's nothing wrong with a diagonal X with dots either. I get the same results.
> When you say "line", you're probably referring to the single straight line method. Even the long diagonal X has been posted as a method by Thermal Grizzly and others. I Just like to put dots in the middle of the quadrant X to ensure full coverage.


Another trick I use that helps with blocks (especially with pastes that are thicker at room temp, like Kryonaut) is to connect some tubing to the block and, after mounting it, run some really hot water through it to heat it up, then do another once-over on the screws to tighten them. Helps the paste flow nicely.


----------



## Beagle Box

Re: Spreading TIM.
Since I first used the Tuniq TIM Spreader card, I've never used anything else for non-LM application.
If you have steady hands, nothing works better.


----------



## inedenimadam

mirkendargen said:


> I have so many thoughts...
> 
> $300 is a utterly ridiculous price for that block.
> 
> A block instantly going out of stock? It's not that hard to make...just allow preorders a month ago and go make however many.
> 
> If so many people want it, why the hell is eVGA defaulting to an AIO on Kingpins? I thought that was stupid in the beginning and a block should have been the default, this is just reinforcing that opinion. Just sell it with a block by default, then sell an optional AIO kit that's some tubes, a rad/pump combo, and come coolant. Then everyone is happy and gets better performance.



Agree wholeheartedly that the block is way overpriced. However, I'm sure the Optimus block will be too, and it may also be made of unobtainium. If Optimus somehow beats EVGA to market with a block, I would absolutely be on board for it. I think it's kind of irritating that they do the AIO as well. I honestly didn't have room for the radiator in my setup, and it's all types of ghetto currently... which is why I'm anxious enough to pay the silly price and was sitting on the refresh at 10:00 AM PT.

I am just surprised that with such a niche item at such a stupid price that there was not enough stock to last 30 seconds. 

I really want to get this card blocked and everything put back together and tidy. I tried an older EK VGA Supremacy, but the hole spacing on the 3000 series is different from previous generations, so a whole new bracket is required.


----------



## J7SC

Beagle Box said:


> Re: Spreading TIM.
> Once I used the Tuniq TIM Spreader card, I've never used anything else for non-LM application.
> If you have steady hands, nothing works better.


...didn't think I would see a 'best TIM spreading method' discussion in this thread 
...but after having done a few Threadrippers etc. as well, I stick with the method I use for all CPUs and GPUs no matter their size: use the little spatulas that come with Kryonaut, Gelid Extreme etc. and cover the whole die... for good measure, I add some TIM on the other mating surface. If done right, most of it will squeeze out anyway - the TIM is primarily there to fill the tiny cavities and imperfections that even apparently smooth metal surfaces have.

...as posted recently, I was also quite shocked when I saw the extra-thick, raised letters on my 3090 die - I could easily (but gently) hook into them with my fingernail and/or the spatula... never seen anything like it on any other GPU or CPU.


----------



## bmgjet

Always spread it on a naked die. Then you know you won't have a spot that's been missed and becomes your hot spot.
I guess my spreading method with a plastic bag over my finger can't be too bad, since my hotspot temp is only 8C more than die temp.


----------



## des2k...

bmgjet said:


> Always spread it on a naked die. Then you know you wont have a spot thats been missed and becomes your hot spot.
> I guess my spreading method with a plastic bag over my finger cant be too bad since my hotspot temps only 8C more then die temp.


I think my hotspot reading is just the mem temp on my Zotac. I could be mining (bench) with board power at 280w and core temp at 26c, and that hotspot will be into 60c+, exactly the same as the mem (back of the card). Makes no sense having a 60c hotspot on the die with an average 26c core temp.


----------



## jomama22

des2k... said:


> I think my hotspot reading is just mem temp on my zotac. I could be mining (bench) with board power at 280w and core temps at 26c and that hotspot will be into 60c+ exactly the same as the mem (back of the card)  Makes no sense having 60c hotspot on the die with average 26c for core temp.
> 
> 
> View attachment 2487811


Nope, that is definitely your hotspot. Probably some part of the memory controllers on the edge of the chip that's getting cooked and has bad contact/pressure.

The core isn't doing much of anything, so the idle core area of the chip will be cool, hence the large disparity between average and hotspot.


----------



## KedarWolf

jomama22 said:


> Nope, that is definitely your hotspot. Probably some part of the memory controllers on the edge of the chip that's getting cooked and has bad contact/pressure.


My hotspot by the end of a PR run is cooler than my memory temps by a few C. Is that normal?

I might switch my Thermalright pads for Gelid eventually.


----------



## mirkendargen

KedarWolf said:


> My hotspot by the end of a PR run is cooler than my memory temps by a few C. Is that normal?
> 
> I might switch my Thermalright pads for Gelid eventually.


Hotspot has nothing to do with memory, to my understanding. There are multiple temp sensors in the GPU die; the hot spot reading is whichever one reads the highest, while the normal GPU temp is an average. The memory temp reading is whichever GDDR6X die reads the highest.
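
That distinction can be sketched with hypothetical sensor values (the counts and temps below are made up for illustration; the actual sensor layout isn't published):

```python
# Hypothetical per-sensor readings, in C
gpu_die_sensors = [68.5, 71.2, 79.8, 70.1, 69.4]   # regions across the GPU die
gddr6x_sensors  = [74.0, 82.0, 78.5]               # per-module VRAM junctions

gpu_temp     = sum(gpu_die_sensors) / len(gpu_die_sensors)  # reported "GPU": the average
hotspot_temp = max(gpu_die_sensors)                         # reported "Hot Spot": the max
memory_temp  = max(gddr6x_sensors)                          # reported "Memory Junction": the max
```

So a hotspot below the memory junction temp just means the hottest die region happens to be cooler than the hottest GDDR6X module; the two readings come from different sensor sets.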


----------



## jomama22

KedarWolf said:


> My hotspot by the end of a PR run is cooler than my memory temps by a few C. Is that normal?
> 
> I might switch my Thermalright pads for Gelid eventually.


The two aren't really related other than being on the same pcb. It's entirely possible to have instances like that.


----------



## mattxx88

des2k... said:


> I think my hotspot reading is just mem temp on my zotac. I could be mining (bench) with board power at 280w and core temps at 26c and that hotspot will be into 60c+ exactly the same as the mem (back of the card)  Makes no sense having 60c hotspot on the die with average 26c for core temp.
> 
> 
> View attachment 2487811


How can you get only 60° while mining?
My VRAM hits 86 at +1000 and 90/92 at +1300.


----------



## nievz

This started after they dropped the 1984 update. How's your FPS on your 3090? I'm seeing at least a 40fps drop in overall framerate.

During dropoff on old WZ I used to get 200+, now 160+.

In Gulag I used to get 250, now 160.


----------



## J7SC

...solved a little mystery re. a 3090 GPU temp rise... with a waterblock, my Strix was typically in the low-to-mid 40s in full-tilt benchmarking (23 C ambient). Then it rose inexplicably by 5 C... turns out that when I changed mobos / CPU recently, I forgot to update the fan profiles in bios; all 24 fans were at min rpm, since it's a big loop and the water never hit the bios fan profile's default presets... oops


----------



## jura11

@J7SC 

I would get an Aquacomputer Quadro or Octo as a fan controller; with one of those you can easily control all your fans and run them based on water or ambient temperature.


Hope this helps

Thanks, Jura


----------



## Beagle Box

I have an OCTO. I really like it. Most of the radiator fans run curves based on water temp, but 3 fans are still SATA-powered and PWM-controlled off CPU temp from the MB.


----------



## TheFinnishOne

Finally managed to break the 15k barrier. Untouched Strix OC with the V3 bios and the 466.11 driver.
Now I can happily start to look into repasting, repadding and waterblocking.









I scored 15 076 in Port Royal
Intel Core i5-9600K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## ViRuS2k

Question for the peeps in the know:
can you look at this image and tell me exactly what those 3 bright red LEDs are for? Sometimes they're lit and sometimes they're not... is this a motherboard issue?

Cheers. I'm stumped; I looked around the MSI forums and can't find anything on them lol


----------



## des2k...

Took my 3090 EK block apart; I want to sand the standoffs.

Of course I needed to take the terminal off (hangs low)... surprise: the smell, and tons of machine oil ***

Took the POM cover off; the o-ring and block are full of machining oil and dust🤮

The QA sticker means nothing to EK lol

*** is this? It's not even nickel plated, and it has burn heat marks









Image IMG-20210424-155613 hosted in ImgBB (ibb.co)


----------



## Lobstar

I cut a piece off the firm plastic covering from my Dark Hero and used that to spread the Kryonaut Extreme as thin as possible across the entire die. I then put a little on my finger covered with a plastic baggy, firmly pressed it into the cold plate, and wiped off the excess. My GPU only runs 8°C warmer than my water with an Optimus block.


----------



## itssladenlol

des2k... said:


> took my 3090 EK block apart, want to sand the stand off
> 
> of course need to take the terminal off(hangs low) suprise smell and tons my machine oil ***
> 
> take the POM cover off, the o-ring and block are full of machining oil and dust🤮
> 
> QA sticker means nothing to EK lol
> 
> *** is this ? It's not even nickel plated and has burn heat marks
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Image IMG-20210424-155613 hosted in ImgBB (ibb.co)


Normal EK quality; they've been selling garbage with a premium price tag for years 😂


----------



## GRABibus

Hello,

can you please tell me the best thermal pads to use to replace the stock ones on a STRIX?
Also all the necessary thicknesses?

Thank you !


----------



## jomama22

itssladenlol said:


> Normal EK quality, selling garbage with Premium pricetag since years 😂


That is nickel that has worn away. The dust you can remove is particles of that nickel plating. You also have oxidation going on.

I'd be curious to know what else is in your loop and what acidity your loop is at. You don't have flaking, you have chemical erosion going on there.

Here's an example of an ek 290x block that had 7.5 years of use (mine):










Just some plasticizer build-up and some minor corrosion. Obviously EK could have changed their process, but this was bought at the height of when everyone was giving EK **** for their nickel plating.

You probably want to contact them and see what they can do. It's also possible the rads had a crap ton of flux still in them and it just spread all throughout the loop.

Question, how did you clean out your rads before putting this in?


----------



## des2k...

jomama22 said:


> That is nickel that has worn away. The dust you can remove are particles of that Nicole plating. You also have oxidation going on.
> 
> I'd be curious to know what else is in your loop and what acidity your loop is at. You don't have flaking, you have chemical erosion going on there.
> 
> Here's an example of an ek 290x block that had 7.5 years of use (mine):
> View attachment 2487939
> 
> 
> 
> Just some plasticizer build up and some minor corrosion. Obviously EK could have changed their process but this was bought at the height of when everyone was give ek **** for their nickle plating.
> 
> Probably want to contact them and see what they can do. Only. Possible the rads had a crap ton of flux still in them as well and just spread all throughout the loop.


Well, my 1080 Ti EK block was in the same loop for 3 years and still looks new.

It's possible it was like this when I got it, or the nickel plating was not done properly.

I can check the pH, I have strips.

It's not yellow and not full green; about 6.5-7 pH.








Image IMG-20210424-194918 hosted in ImgBB (ibb.co)


----------



## Falkentyne

des2k... said:


> took my 3090 EK block apart, want to sand the stand off
> 
> of course need to take the terminal off(hangs low) suprise smell and tons my machine oil ***
> 
> take the POM cover off, the o-ring and block are full of machining oil and dust🤮
> 
> QA sticker means nothing to EK lol
> 
> *** is this ? It's not even nickel plated and has burn heat marks
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Image IMG-20210424-155613 hosted in ImgBB (ibb.co)


Remember what I told you guys?
Friends don't let friends buy EK.


----------



## J7SC

...EK is not my first choice (attn.: Watercool / Heatkiller: where's the 3090 Strix block ? ) but sometimes there's little choice re. what's available for a given custom PCB. I have defended EK in the past, hoping that its Nickel plating issues were behind it - but recently, I had a brand new backplate flake off big chunks of Nickel plating; no cleaners or chemicals (as I said, brand new...). Another issue I have run into more than once with EK are some Allen head screws not tightened on a brand new copper block (even had a small leak, but caught it during leak testing). EK likes to slap on 'Made in Europe' but in my mind, that is trying to coattail on the quality of other brands from W.Europe...

... apart from Watercool (and Aquacomputer), I also like Phanteks - even have Bykski OEM blocks which have been perfect for 2.5 years now, though others have posted about some Nickel flaking issues with their Bykski 3090 block.

...the point made above by @jomama22 re. chemicals is worth underlining, though. It never ceases to amaze me what folks pour into their loops for 'cleaning', never mind pastel liquids etc.


----------



## jomama22

des2k... said:


> well my 1080ti ek block was in the same loop for 3years, still looks new.
> 
> Possible it was like this when I got it or nickel plating was not done properly.
> 
> I can check the PH, I have strips.
> 
> It's not yellow and not full green, about 6.5ph -7ph
> 
> 
> 
> 
> 
> 
> 
> 
> Image IMG-20210424-194918 hosted in ImgBB (ibb.co)


Yeah, my only thought would have been if there was residual cleaning stuff in there.

I would just contact them. Don't let them make you pay for shipping in either direction lol. I would probably demand they just ship you a new block, with you returning yours after.

Still very, very odd. Once the Optimus Strix block comes out (and if I ever get one delivered), I'll have to take apart my EK and see how it looks. Same deal as you, I just have the acetal/POM.


----------



## jomama22

ViRuS2k said:


> question for the peeps in the know how,
> can you look at this image and tell me exactly what those 3 bright red leds are for ???? sometimes there lit and sometimes there not.. is this a motherboard issue ?
> 
> cheers
> im stumped and i looked around the msi forums cant find anything on them lol


Take a look in your mobo manual. Whether it's OK or not, no one can really know unless we know which LEDs are lit and what mobo it is.


----------



## ViRuS2k

jomama22 said:


> Take a look in you mobo manual. Whether it's ok or not, no one can really know unless we know what less are lit and what mobo it is.


Thanks pal, I figured it out. They're the EZ Debug LEDs for what's plugged into the fan headers lol:
red for PWM and white for DC. Now I'm trying to figure out how to turn those LEDs off lol


----------



## des2k...

Well, after cleaning the block and sanding the standoffs, it's a 10-11c delta for CP2077 at 2145 core / +0 mem, ~470w or so. A nice improvement, but I may be limited by the 1mm pads because the standoffs are shorter now.

I need to switch them to 0.5mm, because they looked crushed and were breaking apart when I took the block off.

Of course water temps go up the more you play, but I was happy to see 35c on the GPU 😁 for 5 mins or so, vs 40c before.


----------



## mirkendargen

J7SC said:


> ...EK is not my first choice (attn.: Watercool / Heatkiller: where's the 3090 Strix block ? ) but sometimes there's little choice re. what's available for a given custom PCB. I have defended EK in the past, hoping that its Nickel plating issues were behind it - but recently, I had a brand new backplate flake off big chunks of Nickel plating; no cleaners or chemicals (as I said, brand new...). Another issue I have run into more than once with EK are some Allen head screws not tightened on a brand new copper block (even had a small leak, but caught it during leak testing). EK likes to slap on 'Made in Europe' but in my mind, that is trying to coattail on the quality of other brands from W.Europe...
> 
> ... apart from Watercool (and Aquacomputer), I also like Phanteks - even have Bykski OEM blocks which have been perfect for 2.5 years now, though others have posted about some Nickel flaking issues with their Bykski 3090 block.
> 
> ...the point made above by @jomama22 re. chemicals is worth underling though. It never ceases to amaze me what folks pour into their loop for 'cleaning', never mind pastel liquids etc.


Just chiming in to say I'm the only Bykski plating complainer here, so it doesn't seem widespread, and they sent me a new block DHL Express with no questions asked (except where to send it) when I emailed them a picture.


----------



## J7SC

mirkendargen said:


> Just chiming in to say I'm the only Bykski plating complainer here so it doesn't seem wide spread, and they sent me a new block DHL Express with no questions asked (except for where to send it) when I emailed them a picture.


...fair enough, though the EK RMA also went fine with just pics. I'm not trying to diss Bykski; I have some of their blocks (trouble-free). I'll add that my Strix EK block also looks pristine (the plexi cover allows a good view). Still, there are other brands I've used for years without ever an issue - if they had had a Strix block available, I would have preferred those.

...'elsewhere', I'm trying out the KPE 520W r_BAR bios for the first time...mobo / CPU at stock / default, and just starting to dial the GPU in via sliders only. GPU max indicated voltage at 1.068V, ambient at 23 C...still some headroom there I think. NV power tab on 'normal'. The only thing that bothers me is that I can't get an accurate power reading via the usual software.


----------



## Biscottoman

Guys, I just completed my 3090 custom loop, but I've got a very big air bubble stuck in my MP5Works (on a vertically mounted GPU). I've already tried tilting the case in every direction, but it seems like nothing changed at all. Any solution to this?


----------



## sultanofswing

Biscottoman said:


> Guys i just completed my 3090 custom loop but i got a very big air bubble stacked in my mp5works (on a vertical mounted GPU) I've already tried to tilt the case in any direction but seems like nothing changed at all. Any solution to this?
> View attachment 2488023
> View attachment 2488024


Run the pump at max speed if you have not already, if that does not get it out it could be an indication of poor flow rate.


----------



## Biscottoman

sultanofswing said:


> Run the pump at max speed if you have not already, if that does not get it out it could be an indication of poor flow rate.


I've been running the pump (EKWB Kinetic D5) with no PWM signal for some hours, and by tilting the case I've let all the other bubbles escape, but the MP5Works is very restrictive, so that air bubble doesn't seem able to escape.

The pump is for the GPU only, so it should be more than enough.


----------



## des2k...

Biscottoman said:


> Guys i just completed my 3090 custom loop but i got a very big air bubble stacked in my mp5works (on a vertical mounted GPU) I've already tried to tilt the case in any direction but seems like nothing changed at all. Any solution to this?
> View attachment 2488023
> View attachment 2488024


Not a water cooling expert, but:

water will take the path of least resistance,
so in this case, 90%+ of the flow will go into the GPU block.

That MP5 with those tiny tubes is worthless unless you go serial flow, but then even a D5 at full speed will have low flow🙄


----------



## J7SC

Biscottoman said:


> Guys i just completed my 3090 custom loop but i got a very big air bubble stacked in my mp5works (on a vertical mounted GPU) I've already tried to tilt the case in any direction but seems like nothing changed at all. Any solution to this?
> View attachment 2488023
> View attachment 2488024


...a bit hard to tell 'which way is up' in the pics, but the hose / tube connections should be on top to get the air bubble out, as air is lighter than water. It looks like the VRAM block doesn't have rigid tubing - can you 'unhook' it and rotate it? Sure, you might have to redo the paste/pads, but the air bubble will only come out if it has at least a level exit (better still, a top exit). Then run your pump full tilt - getting all the bubbles out (including smaller ones, or this one when it gets out into the general loop) might take a few days anyhow.

BTW, from what I can see, very nice build !


----------



## Biscottoman

des2k... said:


> not a water cooling expert but
> 
> water will take the path of least resistance,
> so in this case, 90%+ of the flow will go into the gpu block.
> 
> that mp5 with those tiny tubes is worthless unless you go serial flow but then even a d5 full speed will have low flow🙄


It's designed to be installed in parallel, since its aim is to keep the main flow going to the "front" of the waterblock, which is the main part that needs to be cooled. The side effect is that it's very difficult to remove an air bubble from it.


----------



## jomama22

Biscottoman said:


> Guys i just completed my 3090 custom loop but i got a very big air bubble stacked in my mp5works (on a vertical mounted GPU) I've already tried to tilt the case in any direction but seems like nothing changed at all. Any solution to this?
> View attachment 2488023
> View attachment 2488024


You're tilting it with the pump running, yeah?

You could detach it from the backplate with the pump off and then sit it (and the tubes) lower than the GPU block inlet. Make sure the MP5Works block inlets are facing up, and make sure the res top/plug is off so pressure doesn't block the movement of the air too much. That should help move the air at least towards the GPU block.

While doing this, you can turn the pump on and off to mix the air bubbles around. Keep it in that orientation while bleeding out the system, and once it's bled out, put it back on.

You can also detach it from the backplate, run the pump, and just shake the MP5 to help get the air mixed in with the water.

Worst case, you can drain enough to disconnect it, and fill the MP5 separately before attaching it.



It really just seems like you don't have enough flow, though. A bubble that large is just accumulating all the air bubbles that are going into it. It doesn't even look like it has flow from one side to the other.


----------



## Nizzen

Biscottoman said:


> Guys i just completed my 3090 custom loop but i got a very big air bubble stacked in my mp5works (on a vertical mounted GPU) I've already tried to tilt the case in any direction but seems like nothing changed at all. Any solution to this?
> View attachment 2488023
> View attachment 2488024


Here is my MP5Works (serial) on a 3090 Strix.
I can't see if I have bubbles inside, but the performance is great.
Water temp is 30-31c. 68c memory max when looping Port Royal, 64-66c when playing Battlefield V for 30min+. Memory is set to +1000.

GPU is ~45-46c


----------



## KedarWolf

Nizzen said:


> Here is my MP5 works (serial) on 3090 strix.
> I can't see if I have bubbles inside, but the performance is great
> Watertemp is 30-31. 68c memory max when looping Port Royal. 64-66c when playing Battlefield V 30min+. Memory is set to +1000.
> 
> Gpu is ~45-46c
> View attachment 2488036


Are you sure your MP5 is cooling right?

On Battlefield V 30 minutes in (3840x1080, 144Hz, HDR on, ray tracing and graphics maxed out), my memory tops out at 62C with my EKWB block and backplate on my Strix OC.


----------



## Nizzen

KedarWolf said:


> Are you sure your MP5 is cooling right?
> 
> On Battlefield 5 30 minutes in, 3840x1080 144HZ HDR on, ray tracing and graphics maxed out my memory tops out at 62C on my EKWB block and backplate on my Strix OC.


It depends on many factors: thermal pads under the backplate, airflow around the backplate, case, power limit, ambient temperature, water temperature, water flow etc.

There is no doubt it's working. 10+c cooler with slow fans for me.

Let me see your setup.


----------



## CZonin

Meant to follow up on an earlier comment but didn't get the chance. What's the max power limit I should target on a 2x8-pin card? I know 390w is the max you can get from the factory, but is that where I should leave it for safe daily operation? I have the cooling and PSU to handle a bit more than 390w, but I'm not sure if I can go higher without risking damage.


----------



## inedenimadam

CZonin said:


> Meant to follow up on an earlier comment but didn't get the chance. What's the max power limit I should target on a 2x8 pin card? I know 390 is the max that you can get from the factory, but is that where I should leave it for safe daily operation? I have the cooling and PSU to handle a bit more than 390 but not sure if I should without risking damage.


I know that with my Zotac Trinity, I've run into a hard voltage cap. Even with the 1000W bios, the V cap keeps the card well below the danger zone, topping out around 500W. The Trinity is far from a top-end design, but it responds well to the increased power limit. I don't fear its demise at 1.1V.

What model are you running? Does it have a voltage cap?


----------



## KedarWolf

Nizzen said:


> It depends on many factors. Thermalpads under the backplate, airflow around the backplate, case, powerlimit, ambient temperature, water temperature, waterflow etc....
> 
> There is no doubt it's working. 10+c cooler with slow fans for me.
> 
> Let me see your setup


I have an EKWB block with Thermalright 12.8 W/mK pads on the block and an EKWB backplate, plus an EKWB Phoenix 360 AIO with two quick disconnects on the hoses between the block and the backplate. I keep the pump at 100% and the fans at 60%.

I DO have an open bench with something like 16 fans in a Thermalright Core X9 case, and five fans on the 360 radiator of the AIO. So that likely makes a huge difference.

Edit: I use the 1000W XOC BIOS with the power limit at 50%.


----------



## CZonin

inedenimadam said:


> I know that with my zotac Trinity, I have run into a hard voltage cap. Even with the 1000W bios, the V cap keeps the card well below the danger zone, topping out around 500W. The trinity is far from a top end design, but responds well to the increased power limit. I don't fear it's demise at 1.1V.
> 
> What model are you running? Does it have a voltage cap?


Ah okay, got it. I have a TUF (non-OC). Not sure if it has a voltage cap.

Do you know offhand which bios would be the next step up in power limit from 390w? Right now I'm using the Gigabyte Gaming OC bios.

Edit: I'm thinking the Strix might be a good one to try?


----------



## J7SC

CZonin said:


> Ah okay got it. I have a TUF (non-OC). Not sure if it has a voltage cap.
> 
> Do you know off hand what bios would be the next step up in power limit from 390w? Right now I'm using the Gigabyte Gaming OC bios.
> 
> Edit: I'm thinking the strix might be a good one to try?


...the Strix bios is set up for 3x PCIe 8-pins though; it would work on the 2x 8-pin TUF, but with 1/3 or so less power than on the Strix.


----------



## yzonker

CZonin said:


> Ah okay got it. I have a TUF (non-OC). Not sure if it has a voltage cap.
> 
> Do you know off hand what bios would be the next step up in power limit from 390w? Right now I'm using the Gigabyte Gaming OC bios.
> 
> Edit: I'm thinking the strix might be a good one to try?


The only bios that can give you more than 390w on 2x8pin is the KP XOC. All of the boards other than some made specifically for extreme overclocking (KP for one) are limited to 1.1v. 

And yes, with most games and benches you can't get too far beyond 500w. Quake 2 RTX can hit 600w. RDR2 or CP2077 (with DLSS off) can get into the 500-550w range running at 4k and max settings. 

I've done it regularly with my Zotac 3090 also without blowing it up. Just need decent water cooling. Make sure the 8 pin cables and connectors stay cool as well. I have one of those triple fan slot coolers below my block to move air through that entire area.


----------



## CZonin

J7SC said:


> ...Strix bios is set up for 3x PCIe 8 pins though; would work on the 2x 8 pin TUF, but with 1/3rd or so less power than in the Strix


Right, so should I just be sticking with bios that are for cards with 2x8 pin? In that case I guess my max power limit would be 390w since there aren't any other 2x8 pin with a higher limit than that.

Sorry for all the questions, been overclocking for years but never tried flashing different bios on a card until recently.


----------



## inedenimadam

CZonin said:


> Right, so should I just be sticking with bios that are for cards with 2x8 pin? In that case I guess my max power limit would be 390w since there aren't any other 2x8 pin with a higher limit than that.
> 
> Sorry for all the questions, been overclocking for years but never tried flashing different bios on a card until recently.


Depends on what your intentions are with the card, and what you are comfortable with. Like what has already been stated, there isn't anything higher than 390 for 2*8 pin cards, and the 520W bios actually ends up at 342W because you are missing the 3rd power connector. So...it's 390W or 1000W. 
It's funny because I am actually more comfortable throwing the 1000W bios on my Trinity with a waterblock and locked voltage than I am with my Kingpin. The Kingpin can absolutely go hard with the voltage and power draw, and until I have a full cover block instead of this stupid AIO, I'm leaving the 1000W bios to the LN2 bois and the yolo crowd.


----------



## CZonin

inedenimadam said:


> Depends on what your intentions are with the card, and what you are comfortable with. Like what has already been stated, there isn't anything higher than 390 for 2*8 pin cards, and the 520W bios actually ends up at 342W because you are missing the 3rd power connector. So...it's 390W or 1000W.
> It's funny because I am actually more comfortable throwing the 1000W bios on my Trinity with a waterblock and locked voltage than I am with my Kingpin. The Kingpin can absolutely go hard with the voltage and power draw, and until I have a full cover block instead of this stupid AIO, I'm leaving the 1000W bios to the LN2 bois and the yolo crowd.


That makes sense, ty for clarifying! I was just trying to get a little more performance out of the card, but I'm just going to stay at 390w. I have some more headroom with my PSU and cooling but definitely not enough to go for 1000w.


----------



## yzonker

inedenimadam said:


> Depends on what your intentions are with the card, and what you are comfortable with. Like what has already been stated, there isn't anything higher than 390 for 2*8 pin cards, and the 520W bios actually ends up at 342W because you are missing the 3rd power connector. So...it's 390W or 1000W.
> It's funny because I am actually more comfortable throwing the 1000W bios on my Trinity with a waterblock and locked voltage than I am with my Kingpin. The Kingpin can absolutely go hard with the voltage and power draw, and until I have a full cover block instead of this stupid AIO, I'm leaving the 1000W bios to the LN2 bois and the yolo crowd.


And the XOC has the safeties disabled, so no thermal protection. Forgot to mention that before.


----------



## J7SC

CZonin said:


> Right, so should I just be sticking with bios that are for cards with 2x8 pin? In that case I guess my max power limit would be 390w since there aren't any other 2x8 pin with a higher limit than that.
> 
> Sorry for all the questions, been overclocking for years but never tried flashing different bios on a card until recently.


...generally better to get a bios made for the same basic PCIe pin count. Then again, if you get a 1000W bios such as the KPE XOC @yzonker and others mentioned, or the Galax HoF XOC (all 3x 8-pin), you still have way more than you could use even via 2x 8-pin. The problem is that those XOC bios have many of the safeties removed, as they are meant for sub-zero (like LN2). 

As you may already be aware, cross-flashing another vendor's bios can disable some IO ports, depending on what you flash. There's also the shunt mod method; others in this thread can expand on that...but suffice it to say that you move further away from warranty et al with that approach.

...getting as much cooling as possible will help (not sure what your cooling setup is) given the way Nvidia boost works; a colder card is a bit like getting a higher-limit bios.


----------



## yzonker

CZonin said:


> That makes sense, ty for clarifying! I was just trying to get a little more performance out of the card, but I'm just going to stay at 390w. I have some more headroom with my PSU and cooling but definitely not enough to go for 1000w.


And the real gaming performance increase is small, at least for my card. I ran the RDR2 benchmark at 390w and 500w. Went from 47 to 49 fps average. Less than 5%.


----------



## inedenimadam

yzonker said:


> And the real gaming performance increase is small, at least for my card. I ran the RDR2 benchmark at 390w and 500w. Went from 47 to 49 fps average. Less than 5%.


This was pretty much my experience as well. Measurable, but maybe not perceptible gains. Overclocking on air/water isn't as exciting as it used to be. Chip manufacturers have really started pushing chips closer to full potential out of the box thanks to boost algorithms.


----------



## CZonin

J7SC said:


> ...generally better to get a bios made for the same basic PCIe pin count. Then again, if you get a 1000W bios such as the KPE XOC @yzonker and others mentioned, or the Galax HoF XOC (all 3x 8-pin), you still have way more than you could use even via 2x 8-pin. The problem is that those XOC bios have many of the safeties removed, as they are meant for sub-zero (like LN2).
> 
> As you may already be aware, cross-flashing other vendor's bios can disable some IO ports, depending what you flash. There's also the shunt mod method; others in this thread can expand on that...but suffice it to say that you move further away from warranty et al with that approach.
> 
> ...getting as much cooling as possible will help (not sure what your cooling setup is) given the way NVidia boost works - a colder card is a bit like getting a higher-limit bios.


Thanks for all the info! Ya, I've been sticking with bios that have the same IO so everything's still working. I'm definitely trying to stay within warranty, so shunt mod is out.

As for my cooling setup, I deshrouded and added 3x NF-A9s. It's allowed me to keep the card cooler than stock with a more silent curve. Between that and the 390w bios, I've gotten a decent uplift already, but was just seeing if there's anything else I could do while staying within safe limits for daily use.


----------



## Biscottoman

jomama22 said:


> You're tilting it with the pump running yeah?
> 
> You could detach it from the backplate with the pump off and then sit it lower (and the tubes) than the gpu block inlet. Make sure the mpworks block inlets are facing up. Make sure the res top/plug is off so pressure doesn't block the movement of the air too much. Should help move the air at least towards the gpu block.
> 
> While doing this, you can turn the pump on and off to mix the air bubbles around. Keep it in that orientation while bleeding out the system, and once it's bled out, put it back on.
> 
> You can also detach it from the backplate, run the pump, and just shake the mp5 to help get the air mixed in with the water.
> 
> Worst case, you can drain enough so you can disconnect it and fill the mp5 separately before attaching it.
> 
> 
> 
> Really just seems like you don't have enough flow though. That large of a bubble is just accumulating all the air bubbles that are going into it. Doesn't even look like it has flow from one side to the other.


Thanks mate, I fixed the issue by reinstalling the MP5Works with the inlet ports on the top (before, they were on the bottom). Now I can finally see water flowing, and it seems like every air bubble is gone


----------



## WillP

des2k... said:


> that mp5 with those tiny tubes is worthless unless you go serial flow but then even a d5 full speed will have low flow🙄


There is significant flow through the MP5 in parallel, and it cools the back-plate significantly. When filling the loop the MP5 fills quickly while the part it is in parallel with also fills fine. I'm using a Corsair Hydro X XD5 pump/res combo with pump profile tied to water temp in the default setting most of the time. I'm sure the MP5 is not as effective as rigging a RAM block or something similar to it, but it definitely makes a difference. I also have heat sinks thermal taped to the areas not covered by the MP5.


----------



## WillP

CZonin said:


> That makes sense, ty for clarifying! I was just trying to get a little more performance out of the card, but I'm just going to stay at 390w. I have some more headroom with my PSU and cooling but definitely not enough to go for 1000w.


That's the thing, you won't get anywhere near 1000w with the XOC bios on a 2x 8-pin card; max is approx 695w at 100% power limit, and it can easily be lowered with Afterburner or similar. As previously mentioned, it'll never draw that much in any games anyway.
There are clear risks associated with flashing it, but I don't think your PSU should be one of them if you go slow and monitor the usage.
That said, whilst benching last night, my watt-meter at the plug reported a max power draw of 860w in Timespy, running the XOC bios at 95% and a 9900k at 5.2ghz.


----------



## des2k...

KedarWolf said:


> Are you sure your MP5 is cooling right?
> 
> On Battlefield 5 30 minutes in, 3840x1080 144HZ HDR on, ray tracing and graphics maxed out my memory tops out at 62C on my EKWB block and backplate on my Strix OC.


I have Thermalright pads + 4 heatsinks with thermal tape (pads and paste don't work well there, lack of pressure), but yeah, I get 54c for gaming.

I would say the MP5 will work better with 0.2mm thermal tape, assuming you have 13 W/mK pads under the backplate


----------



## des2k...

WillP said:


> That's the thing, you won't get anywhere near 1000w with the XOC bios on a 2 8pin card, max is approx 695w at 100% power limit, and it can easily be lowered with Afterburner or similar. As previously mentioned it'll never draw that much in any games anyway.
> There are clear risks associated with flashing it, but your PSU I don't think should be one of them if you go slow and monitor the usage.
> That said, whilst benching last night, my watt-meter at the plug reported max power draw of 860w on Timespy running the XOC bios at 95% and a 9900k at 5.2ghz.


2x8pins will hit normalized TDP due to the XOC vbios maxing out at 100w PCIe.

So at 100w PCIe, the max on reference 2x8pins is about 280w per 8pin (amp clamp). About 660w absolute max.

The rest of that 860w will be PSU power loss + the Intel CPU, and other stuff on the mobo.

It does use 660w in some games, Quake RTX or Path of Exile; I'm sure there are other games.


----------



## Sean McCargar

Hey everybody,

I have a GIGABYTE Waterforce RTX 3090 Extreme as well. I wanted to report all my findings in regards to VBIOS flashing and the ReBAR update.

So my card is a 390w card originally.
Flashing to the F2 VBIOS for ReBAR support from Gigabyte breaks the power draw, and it's stuck at 350w. Essentially utter garbage.

Honestly it sucks so much that the ReBAR VBIOS is borked, but I wanted to let all the other Gigabyte Waterforce owners know that I opened a ticket with them for improper power draw on the 3090 VBIOS they released. They said they will work with me on this since nobody had actually reported the issue to their support. I am waiting on a reply.


----------



## KingKnick

Sean McCargar said:


> Hey everybody,
> 
> I have a GIGABYTE Waterforce RTX 3090 Extreme as well. I wanted to report all my findings in regards to VBIOS flashing and the ReBAR update.
> 
> So my card is a 390w card originally.
> Flashing to the F2 vbios for rebar support from gigabyte breaks the power draw and is stuck at 350w. Essentially utter garbage.
> Flashing the Galax 3090 2x8pin vbios does work and draws 390w. The only problem here is the video ports are screwed up: HDMI isn't transferring audio, only 1 DP port works, and 1 HDMI works without audio.
> 
> Flashing the Kingpin 3090 1000w vbios does unlock the power draw, but I need to state these things because I don't think anyone should be flashing this vbios, honestly. Better to hardware mod.
> So the issue is one of the 2 8pins will draw over 300W. The other 8 pin will only draw 190w. The pcie will draw up to 75w.
> The issue here is the massive problem of the power draw not being equal across the 2x8pins.
> So make sure you are checking with HWINFO or GPUZ to check the actual power draw off your 2x8pins. If they are not equal, and offset like mine, do not use this VBIOS, as you could melt your one 8pin.
> I saw it draw up to 310w. Technically this is still under the limit (330w), but even the 18 gauge wiring on my 1200w technically cannot support this. I recommend buying 16 gauge PCIe cables for a power supply that can actually output it. The 1200w AXi Corsair does support 16 gauge PCIe cables.
> The issue here is that the vbios is detecting the second 8pin as the Kingpin's 3rd, I think.
> From my understanding the Kingpin is limited on the 3rd pin's power draw just because of how the vbios power draw is written. So the first and second pin will max out first, then the 3rd, or something like that.
> 
> Honestly it sucks so much that the ReBAR vbios is borked, but I wanted to let all the other Gigabyte Waterforce owners know that I opened a ticket with them for improper power draw on the 3090 vbios they released. They said they will work with me on this since nobody had actually reported the issue to their support. I am waiting on a reply.
> 
> Has anyone actually hardware flashed that 350w-1000w bios?? Not the Kingpin one lol, the one made for 2x8pins.


It is not possible to flash that bios... you get an error in nvflash....


----------



## Sean McCargar

KingKnick said:


> Ist is not possible to flash that Bios... you becomes an error in nvflash....


I mean hardware flash with a programmer not nvflash.


----------



## Falkentyne

Sean McCargar said:


> Hey everybody,
> 
> I have a GIGABYTE waterforce rtx 3090 extreme as well. I wanted to report all my finding in regards to vbios flashing rebar update.
> 
> So my card is a 390w card origionally.
> Flashing to the F2 vbios for rebar support from gigabyte breaks the power draw and is stuck at 350w. Essentially utter garbage.
> 
> Honestly sucks so much that the rebar vbios is borked but I wanted to let all the other gigabyte waterforce owners I opened a ticket with them for improper power draw on the 3090 vbios they have released. They said they will work with me on this since nobody actually reported the issue to there support. I am waiting on a reply.


This is completely wrong. The power draw has been tested with current clamps already. The power draw readings become bugged when you flash the 1000W vbios on many non-Kingpin cards. 8-pins 1 and 2 are actually pretty balanced. You are NOT drawing 300W through a single 8-pin unless you actually set the TDP to 100%, and even if you do that, you aren't going to have the VID or clocks high enough to exceed 750W under the worst circumstances on air or water.


----------



## WillP

des2k... said:


> 2x8 pins will hit normalized tdp due to xoc vbios maxing out at 100w pcie.
> 
> So at 100w pcie, the max on reference 2x8pins is about 280w per 8pin(amp clamp) About 660w absolute max.
> 
> The rest of that 860w will be psu power loss + intel cpu, other stuff on mobo.
> 
> It does use 660w in games, Quake rtx or Path of exile, i'm sure there's other games.


Fair enough, that makes a lot of sense, and yes, I'm aware that the CPU and board (and RGB/fans) are also pulling power, hence mentioning the CPU and overclock on it.
Haven't played either of those games, max I'm seeing from trying to calculate from the incorrectly reported power usage seems to be up to about 600w in the games I've played, mostly Watchdogs Legion, CP2077, AC Valhalla and Metro Exodus recently.
I've clearly been doing my maths wrong on the correction from 1000w on the 2*8pin card, would you explain more? I thought it was 1000w-75w for the PCIE, then 2/3 of the 925w, making approx 620w, and then adding the 75w PCIE back on, getting 695w.


----------



## yzonker

One data point I'll add to this discussion. Admittedly I have a lower-end PSU (Seasonic Focus 1000w), but the other night I hit OCP while playing RDR2 with the XOC set to 100% on my Zotac 3090 2x8pin. I never saw anything much higher than 500w on the AB overlay, but it must have spiked or something. Maybe it exceeded a limit on an 8-pin?


----------



## J7SC

yzonker said:


> One data point I'll add to this discussion. Admittedly I have a lower-end PSU (Seasonic Focus 1000w), but the other night I hit OCP while playing RDR2 with the XOC set to 100% on my Zotac 3090 2x8pin. I never saw anything much higher than 500w on the AB overlay *but it must have spiked or something*. Maybe it exceeded a limit on an 8-pin?


...when Ampere / 3080 first came out, several sites talked about much higher 'spikes'; below is a screenshot from Igor's Lab on the topic...he also has a few choice words (in German) re. assumptions on PCIe 75W. Which reminds me, @jomama22, both a friend's Strix OC and mine have really low PCIe slot power (30W-57W or so) even at full benching, no matter what bios is used. A bit weird, wondering why, though I certainly would not want anything higher than 60W. Nothing wrong with the oc and/or overall performance of either card, though.

...anyhow, I'm running one of my trusty Antec HPC 1300W Platinum PSUs with the 5950X / Strix OC 520W, and lots of headroom left, zero OCPs.


----------



## dr/owned

yzonker said:


> One data point I'll add to this discussion. Admittedly I have a lower-end PSU (Seasonic Focus 1000w), but the other night I hit OCP while playing RDR2 with the XOC set to 100% on my Zotac 3090 2x8pin. I never saw anything much higher than 500w on the AB overlay, but it must have spiked or something. Maybe it exceeded a limit on an 8-pin?


Just guessing, but years ago I had an XFX PSU manufactured by Seasonic, their specs were BS, and each 8pin connector was limited to 30A. I think Corsair sets their -i series at 30 or 40A unless you turn the limits off in CorsairLink/iCue.


----------



## yzonker

dr/owned said:


> Just guessing but years ago I had an XFX psu manufactured by Seasonic and their specs were BS and each 8pin connector was limited to 30A. I think Corsair sets their -i series at 30 or 40A unless you turn the limits off in CorsairLink/iCue.


30A should be enough though. 30x12=360w
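That rail math can be sanity-checked with a tiny sketch. The 30A/40A OCP values are just the example per-rail limits mentioned above for multi-rail PSUs, not anyone's measured spec:

```python
# Rough check of a 12V rail's OCP headroom against a single 8-pin draw.
# OCP amp limits here are the example values from this discussion.
RAIL_VOLTAGE = 12.0

def rail_headroom_watts(ocp_amps: float, pin_draw_watts: float) -> float:
    """Margin between a rail's OCP trip point and a given 8-pin draw."""
    return ocp_amps * RAIL_VOLTAGE - pin_draw_watts

# 30 A rail vs. the ~300 W worst-case single 8-pin draw discussed earlier
print(rail_headroom_watts(30, 300))  # 60.0 -> enough on paper, but not much margin
```

A transient spike on top of that 300w could still be what trips OCP, which fits the RDR2 observation above.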


----------



## yzonker

J7SC said:


> ...when Ampere / 3080 first came out, several sites talked about much higher 'spikes'; below is a screenshot from Igor's Lab on the topic...he also has a few choice words (in German) re. assumptions on PCIe 75W. Which reminds me, @jomama22, both a friend's Strix OC and mine have really low PCIe slot power (30W-57W or so) even at full benching, no matter what bios is used. A bit weird, wondering why, though I certainly would not want anything higher than 60W. Nothing wrong with the oc and/or overall performance of either card, though.
> 
> ...anyhow, I'm running one of my trusty Antec HPC 1300W Platinum PSUs with the 5950X / Strix OC 520W, and lots of headroom left, zero OCPs.
> 
> View attachment 2488153


Something like this? I don't see a screenshot in your post. 



https://i.redd.it/n20stmrh5jn51.png


----------



## J7SC

yzonker said:


> Something like this? I don't see a screenshot in your post.
> 
> 
> 
> https://i.redd.it/n20stmrh5jn51.png


...yeah, and tx, I added (a different) screenshot a few moments ago in my earlier post.


----------



## KedarWolf

dr/owned said:


> Just guessing but years ago I had an XFX psu manufactured by Seasonic and their specs were BS and each 8pin connector was limited to 30A. I think Corsair sets their -i series at 30 or 40A unless you turn the limits off in CorsairLink/iCue.


Oh, then I could turn off multi-rail on my AX1600i?


----------



## des2k...

WillP said:


> Fair enough, that makes a lot of sense, and yes, I'm aware that the CPU and board (and RGB/fans) are also pulling power, hence mentioning the CPU and overclock on it.
> Haven't played either of those games, max I'm seeing from trying to calculate from the incorrectly reported power usage seems to be up to about 600w in the games I've played, mostly Watchdogs Legion, CP2077, AC Valhalla and Metro Exodus recently.
> I've clearly been doing my maths wrong on the correction from 1000w on the 2*8pin card, would you explain more? I thought it was 1000w-75w for the PCIE, then 2/3 of the 925w, making approx 620w, and then adding the 75w PCIE back on, getting 695w.


I use Afterburner monitoring (x * 0.66). Comes very close to my own formula that I did with the amp clamp.
The 1000w XOC maxes are 100w PCIe and 300w for each of pin1, pin2, pin3.

2x8pin: 1000w - 300w (pin3) = 700w. 
But pin1, pin2, pin3 are balanced with the PCIe power draw, so 1/3 of that balanced power draw on the PCIe will be lost from pin1 and pin2. The PCIe still climbs to 100w if you push the card.

700w - 33w = ~666w max
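For anyone wanting to plug in their own numbers, that estimate can be written out as a short sketch. The 100w PCIe and 300w-per-pin ceilings are the figures quoted in this thread for the XOC vbios, so treat the result as a rough approximation rather than a spec:

```python
# Sketch of the ~666w estimate for the KPE XOC 1000w BIOS on a 2x8pin card.
# Assumed per-source ceilings (from this thread): 100w PCIe slot, 300w per 8pin.
XOC_TOTAL = 1000.0   # w, BIOS limit at 100% power slider
PIN_MAX = 300.0      # w, ceiling per 8pin input in the XOC BIOS
PCIE_MAX = 100.0     # w, PCIe slot ceiling in the XOC BIOS

missing_pin3 = PIN_MAX           # a 2x8pin card simply has no third input
balanced_loss = PCIE_MAX / 3     # share of slot power charged to the fake pin3

max_real_draw = XOC_TOTAL - missing_pin3 - balanced_loss
print(round(max_real_draw))                  # 667, the "about 666w" figure
print(round(max_real_draw / XOC_TOTAL, 2))   # 0.67, close to the x * 0.66 rule of thumb
```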


----------



## jura11

I've been running the KPE XOC 1000W BIOS since day one, when it was released. I use my PC for gaming and rendering, and I've tested mining as well, both uncapped and capped to 75-85%, with no issues. In Port Royal the highest power draw I have seen at 85% is around 590-600W. 

Both of my RTX 3090 GamingPros are running the KPE XOC 1000W BIOS with no issues; power draw is high for sure

Temperatures are good too: I haven't seen higher than 40°C on the GPUs in warmer weather, and in normal weather temperatures won't break 36-38°C 

As I said previously, if you are scared then don't use the KPE XOC 1000W BIOS. For me it's one of the best BIOSes for 2*8-pin GPUs. I won't do a shunt mod on my RTX 3090s; sadly my hands shake way too much for that hahaha 

Hope this helps 

Thanks, Jura


----------



## jura11

des2k... said:


> I use afterburner monitoring (x*66). Comes very close to my own formula that I did with the amp clamp.
> 1000w xoc max are 100w pcie 300w for each pin1,2,3.
> 
> 2x8pin 1000w - 300w(pin3), 700w.
> But pin1,2,3 are balanced with PCIE power draw so 1/3 of that balanced power draw on the PCIE will be lost.
> 700w -33w,= 666w max
> 
> View attachment 2488155


Somewhere here in this thread is buried correct formula for KPE XOC 1000W BIOS 

I think it's 0.696, not 0.66, but maybe I'm wrong on this

Hope this helps 

Thanks, Jura


----------



## Falkentyne

yzonker said:


> One data point I'll add to this discussion. Admittedly I have a lower-end PSU (Seasonic Focus 1000w), but the other night I hit OCP while playing RDR2 with the XOC set to 100% on my Zotac 3090 2x8pin. I never saw anything much higher than 500w on the AB overlay, but it must have spiked or something. Maybe it exceeded a limit on an 8-pin?


The older Focus series, before the OneSeasonic rework, have the old SS-1050XP OCP circuit, which will tend to trip on high-current video cards like the Vega 64 or 3090s, even if you're far below the max ceiling of the PSU. If the box says "Focus Plus Platinum" or "Focus Gold", it's the old version.

The Focus GX and Focus PX at 750W+ won't do this as they have the new OCP circuit.


----------



## yzonker

Falkentyne said:


> The older Focus series, before the OneSeasonic rework, have the old SS-1050XP OCP circuit, which will tend to trip on high-current video cards like the Vega 64 or 3090s, even if you're far below the max ceiling of the PSU. If the box says "Focus Plus Platinum" or "Focus Gold", it's the old version.
> 
> The Focus GX and Focus PX at 750W+ won't do this as they have the new OCP circuit.


Looks like you nailed it.



https://www.amazon.com/dp/B078T1XSJ1/ref=cm_sw_r_cp_apa_glt_fabc_82Q17M2F5ZFP30NXKHGS?_encoding=UTF8&psc=1



Like a lot of things right now, it's what I could find in stock. Works fine, as I don't intend to run quite that high for gaming anyway. I was just testing to see what performance gain I could get and whether my loop could handle that level for longer periods of time.

Partly also because I'm thinking about buying the FTW3 HC 3090 if 18 seconds ever pass in the queue. I would likely run it at higher power levels than my 2x8pin. Still, I'd probably just run the 500w or 520w bios on it. When I cap the XOC on my Zotac to that level, I've never hit OCP.


----------



## des2k...

jura11 said:


> Somewhere here in this thread is buried correct formula for KPE XOC 1000W BIOS
> 
> I think is 0.696 not 0.66,but maybe I'm wrong on this there
> 
> Hope this helps
> 
> Thanks, Jura


0.696 doesn't match my amp clamp. You can see in HWiNFO that the XOC normalized TDP will cut the power before TDP hits 100%, due to the 100w PCIe limit.

If we're talking a 2x8pin reference design from Nvidia, all power sources are balanced. So it's impossible to hit 696w, because the fake pin3 that reports 300w (example max) makes the PCIe slot increase its (real) power draw, which in turn caps pin1 and pin2 early, never reaching 300w.


----------



## yzonker

jura11 said:


> Somewhere here in this thread is buried correct formula for KPE XOC 1000W BIOS
> 
> I think is 0.696 not 0.66,but maybe I'm wrong on this there
> 
> Hope this helps
> 
> Thanks, Jura


That seems too high. I went to some effort to match boost frequencies when running Port Royal between a 390w bios and the XOC. That ended up with the XOC at 62%. That works out to only be 0.63. Obviously a somewhat flawed method but it should be close unless one bios boosts higher than another with the same power. 

@des2k... That actually matches your HWINFO addin pretty well too from what I've seen.
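That matching method reduces to a quick back-of-the-envelope calculation. The 62% slider setting and 390w reference BIOS are the values from this post, so the resulting factor is an empirical estimate, not a vendor figure:

```python
# Back out the reporting correction factor by finding the XOC power-limit
# slider setting that matches a known 390w BIOS, as described above.
XOC_NOMINAL = 1000.0       # w, XOC BIOS limit at 100%
slider_pct = 62            # XOC setting that matched the 390w BIOS boost clocks
matched_real_watts = 390.0 # true draw of the reference BIOS

reported = XOC_NOMINAL * slider_pct / 100.0   # 620w as reported by the BIOS
factor = matched_real_watts / reported        # real draw per reported watt
print(round(factor, 2))  # 0.63, a bit below the 0.66 rule of thumb
```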


----------



## WillP

des2k... said:


> I use afterburner monitoring (x*66). Comes very close to my own formula that I did with the amp clamp.
> 1000w xoc max are 100w pcie 300w for each pin1,2,3.
> 
> 2x8pin 1000w - 300w(pin3), 700w.
> But pin1,2,3 are balanced with PCIE power draw so 1/3 of that balanced power draw on the PCIE will be lost from pin1 and pin2. PCIE still climbs to 100w if you push the card.
> 
> 700w -33w,= 666w max
> 
> View attachment 2488155


Brilliant, thanks so much, makes the maths a lot easier than my method, as well as being more accurate.


----------



## jura11

des2k... said:


> 0.696 doesn't match my amp clamp. You can see in hwinfo that the xoc tdp normalized will cut the power before tdp hits 100% due to PCIE 100w limit.
> 
> If we're talking 2x8pin reference from Nvidia, all power sources are balanced. So it's impossible to hit 696w because the fake pin3 that reports 300w(example max) will make pcie increase power draw(real) which in turn will cap pin1 and pin2 early never reaching 300w.


Good to know, I always used the 0.696 formula for calculating my power draw 

Is there a way to get HWiNFO to report the power draw correctly too? 

Hope this helps 

Thanks, Jura


----------



## jura11

yzonker said:


> That seems too high. I went to some effort to match boost frequencies when running Port Royal between a 390w bios and the XOC. That ended up with the XOC at 62%. That works out to only be 0.63. Obviously a somewhat flawed method but it should be close unless one bios boosts higher than another with the same power.
> 
> @des2k... That actually matches your HWINFO addin pretty well too from what I've seen.


As I said, somewhere in this thread the correct formula for the KPE XOC 1000W BIOS and 2*8-pin GPUs is buried 

I used that method on my RTX 3090s. I'm using a Kill A Watt wall meter to measure the power draw of my whole PC, and this can be a bit deceiving because I'm running 3x D5 pumps, 10-12 HDDs, plus 38 fans, etc 

Hope this helps 

Thanks, Jura


----------



## Martin778

Just curious...has the 3090 FTW3 power issue ever been fixed? I lost more than an hour digging through their forum thread on this case but haven't seen anyone mentioning a fix.


----------



## bmagnien

Martin778 said:


> Just curious...has the 3090 FTW3 power issue ever been fixed? I lost more than an hour digging through their forum thread on this case but haven't seen anyone mentioning a fix.


Yes. They opened up a specific RMA program to address it. Cards from that program come back hitting the PL on the 520w KP or original 500w XOC no problem. The 3x8pin ratios are a little odd, like 1:1:0.7, but slot power is a non-issue now, and the total PL, which was the main issue, has been solved


----------



## Martin778

Hmm, might consider getting one then.
The last few posts stated they got their special RMA cards back and they were as bad as the old ones


----------



## bmagnien

Martin778 said:


> Hmm, might consider getting one then.
> Few last posts stated they got their special RMA cards back that were as bad as the old one


The “new 500w xoc” they put on the board and ship back to you didn’t work for me. Maybe that’s what those folks are using. The 520w kp works best for me with new card and the PL is 100% leagues better than what it was previously without a shunt. Also, anecdotally and perhaps coincidentally, this card was the best core and mem silicon out of 4 previous 3090 ftw3s


----------



## Martin778

Is the fan control and reBAR working on the 520W KPE BIOS?


----------



## yzonker

Falkentyne said:


> The older Focus series, before the OneSeasonic rework, have the old SS-1050XP OCP circuit, which will tend to trip on high-current video cards like the Vega 64 or 3090s, even if you're far below the max ceiling of the PSU. If the box says "Focus Plus Platinum" or "Focus Gold", it's the old version.
> 
> The Focus GX and Focus PX at 750W+ won't do this as they have the new OCP circuit.


Nope, take that back. Focus GX-1000 is what's on my actual box.


----------



## bmagnien

Martin778 said:


> Is the fan control and reBAR working on the 520W KPE BIOS?


the 520w kp bios w/ rbar was posted earlier in this thread. can't speak to fan control as i'm on water and don't use px1


----------



## inedenimadam

Martin778 said:


> Is the fan control and reBAR working on the 520W KPE BIOS?


I am on the 520W ReBAR KPE bios on my KPE; I have fan control and rebar.


----------



## bmagnien

inedenimadam said:


> I am on the 520WREBAR KPE bios on my KPE, I have fan control and rebar.


i assume they were asking about fan control working on a ftw3 with kp bios


----------



## inedenimadam

bmagnien said:


> i assume they were asking about fan control working on a ftw3 with kp bios


Oh. My bad!


----------



## Lobstar

I have multiple monitors. How do I get my FTW3U to not idle at 115w?


----------



## cletus-cassidy

tefla said:


> Hey all! Just wanted to share some results from a 3090 FE shunt mod (5 mOhm stacked on all 6 original shunts)
> 
> *Pre-shunt-mod (114% Power)*
> 
> *14,545 PR*: I scored 14 545 in Port Royal
> *21,467 TS*: I scored 20 498 in Time Spy
> 
> *Post-Shunt-mod + repad + repaste (114% Power)*
> 
> *15,008 PR*: I scored 15 008 in Port Royal
> *22,054 TS*: I scored 21 025 in Time Spy
> 
> This is with the stock cooler. Definitely seeing the need to put this thing underwater to really see what it can do. Even with an open case, cool ambient (~11c), high fans, etc., I'm hitting ~68c at the end of a PR run. Hopefully I can eke out a bit more to break into the top 10 leaderboard for the 3090 + 5900x; better thermals + more finely tuned oc settings, incl. the single CCD and aggressive mem oc, should do it.
> 
> The TS score improvement is low; I think I need further runs here. This was done in the same session as PR as a one-off, so I'm not entirely sure why I'm "only" getting around 22k. Maybe thermals, since it's a longer test?
> 
> I also think I rushed the reassembly. I was getting a hot-spot temp of 92-97c on those aforementioned runs. I could reduce this delta with a more careful repaste and remount, right?
> 
> Overall though, the shunt mod wasn't too bad difficulty-wise. Not gonna lie, it was incredibly nerve-wracking (my first time soldering was the same week I did the mod). But as long as you take your time, do your research, and come prepared with the right tools (definitely recommend practicing the stacking on an old gpu) it should go pretty smoothly. Big thanks to @Falkentyne for all the help. This dude basically wrote a gd instruction manual for me to follow and I'm very thankful for the contributors that make this forum the incredible resource it is.


I did a shunt mod around the same time as you for an Alienware 3090 reference card. I am running it under water (EK block and active backplate). Not a good core clocker but good memory OC. Pulling good power from the mod and scoring 15K+ in PR but seems to be significantly underperforming in Time Spy and Fire Strike. Curious if anyone can help me identify why it’s underperforming?

Port Royal: I scored 15 167 in Port Royal
Time Spy Extreme: I scored 10 707 in Time Spy Extreme
Fire Strike Extreme: I scored 18 450 in Fire Strike Extreme
Time Spy: I scored 19 654 in Time Spy
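The arithmetic behind these stacked-shunt mods is worth sketching for anyone weighing a similar mod. Stacking a second shunt on top of the original puts the two in parallel, lowering the effective resistance, so the controller (which still assumes the original value) under-reads power by the same ratio. A minimal sketch, assuming the original shunts are 5 mΩ (the commonly reported value for the FE; verify on your own board):

```python
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two shunts stacked in parallel, in milliohms."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

# Assumption: original shunts are 5 mOhm, with a 5 mOhm shunt stacked on each.
r_orig = 5.0
r_eff = parallel(r_orig, 5.0)        # 2.5 mOhm effective

# The controller still assumes r_orig, so it under-reads power:
scale = r_eff / r_orig               # 0.5 -> the card can draw ~2x its limit
reported_at_350w_draw = 350 * scale  # a true 350 W draw reads as ~175 W
print(r_eff, scale, reported_at_350w_draw)
```

This is why a 114% slider on a shunted card can correspond to far more real power at the wall than the software reports.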


----------



## jomama22

cletus-cassidy said:


> I did a shunt mod around the same time as you for an Alienware 3090 reference card. I am running it under water (EK block and active backplate). Not a good core clocker but good memory OC. Pulling good power from the mod and scoring 15K+ in PR but seems to be significantly underperforming in Time Spy and Fire Strike. Curious if anyone can help me identify why it’s underperforming?
> 
> Port Royal: I scored 15 167 in Port Royal
> Time Spy Extreme: I scored 10 707 in Time Spy Extreme
> Fire Strike Extreme: I scored 18 450 in Fire Strike Extreme
> Time Spy: I scored 19 654 in Time Spy


I've had this issue pop up in the past. Uninstall Afterburner or whatever OC tool you're using, then reboot. Reinstall the drivers over the current ones (do the clean install), reboot. Then go to safe mode and DDU the drivers out. Then reinstall the drivers again.

Should fix it up. Not sure what causes it, but I've had it happen a few times.

People say to disable Intel SpeedStep, but if it's something you use, I would just wait until after trying the above.


----------



## Bobbylee

jura11 said:


> As I said somewhere here in this thread is buried that formula for KPE XOC 1000W BIOS and 2*8-pin GPUs
> 
> I used that method on my RTX 3090s. I'm using a Kill A Watt wall meter to measure the power draw of my whole PC, and this can be a bit deceiving because I'm running 3x D5 pumps and 10-12 HDDs plus 38 fans, etc.
> 
> Hope this helps
> 
> Thanks, Jura


38 fans?! Please share a pic lol


----------



## ArcticZero

Falkentyne said:


> The older Focus series before the OneSeasonic rework have the old SS-1050XP OCP circuit, which will tend to trip on high-current video cards like Vega 64 or 3090s, regardless of whether you're far below the max ceiling of the PSU. If the box says "Focus Plus Platinum" or "Focus Gold", it's the old version.
> 
> The Focus GX and Focus PX at 750W+ won't do this as they have the new OCP circuit.


My dilemma: a Seasonic Prime Gold 1000W, and absolutely no GX/PX/TX units above 750W available anywhere. I've had to keep it reined in at 420-450W or I hit OCP after extended gaming sessions. Happened once in FFXV. Crazy.


----------



## EarlZ

ArcticZero said:


> My dilemma: a Seasonic Prime Gold 1000W, and absolutely no GX/PX/TX units above 750W available anywhere. I've had to keep it reined in at 420-450W or I hit OCP after extended gaming sessions. Happened once in FFXV. Crazy.


You can get the GX/PX/TX from PChub

On a different topic: it's now about a month since I replaced the thermal paste on my 3090, and Kryonaut's durability is starting to show. I am now getting 71°C (formerly 67°C) in Superposition Extreme. That's a 4°C delta in just 30 days.

Room temp is the same at 29-30°C.


----------



## jura11

Bobbylee said:


> 38 fans?! Please share a pic lol


It's a CaseLabs M8 with pedestal and a MO-RA3 360mm.

Here is an older picture of my CaseLabs M8 with pedestal; at that time I ran a GTX 1080 Ti with 2x GTX 1080s on X99 with a 5960X.










Now I have changed most things in that loop: the motherboard is an Asus ROG Crosshair VIII Hero X570, the CPU block is an Aquacomputer Kryos Next, I'm running just two GPUs (two RTX 3090 GamingPros), the PSU is a Super Flower 8Pack 2000W, and the tubing has changed from Mayhems UV White to EK ZMT all around.

Hope this helps 

Thanks, Jura


----------



## des2k...

EarlZ said:


> You can get the GX/PX/TX from PChub
> 
> On a different topic: it's now about a month since I replaced the thermal paste on my 3090, and Kryonaut's durability is starting to show. I am now getting 71°C (formerly 67°C) in Superposition Extreme. That's a 4°C delta in just 30 days.
> 
> Room temp is the same at 29-30°C.


Kryonaut is probably one of the most stable / durable pastes. Had it on my 3900X for 3 years with no degradation.

It is sensitive to application; you need the right amount, spread with the applicator, because the die is very big on the 3090.

For stress testing at 600W+ on my 3090, it's the only paste that holds. Noctua, EK paste, and MX-4 always come out with higher temps after a few weeks and seem to run out (very liquid, pumps out) after a few 600W runs.


----------



## tefla

cletus-cassidy said:


> I did a shunt mod around the same time as you for an Alienware 3090 reference card. I am running it under water (EK block and active backplate). Not a good core clocker but good memory OC. Pulling good power from the mod and scoring 15K+ in PR but seems to be significantly underperforming in Time Spy and Fire Strike. Curious if anyone can help me identify why it’s underperforming?
> 
> Port Royal: I scored 15 167 in Port Royal
> Time Spy Extreme: I scored 10 707 in Time Spy Extreme
> Fire Strike Extreme: I scored 18 450 in Fire Strike Extreme
> Time Spy: I scored 19 654 in Time Spy


I'd give what @jomama22 suggested a try and see if that changes anything. I haven't had time to do additional tests since that post, but I'll give an update if I figure anything out before you.


----------



## cletus-cassidy

jomama22 said:


> I've had this issue pop up in the past. Uninstall Afterburner or whatever OC tool you're using, then reboot. Reinstall the drivers over the current ones (do the clean install), reboot. Then go to safe mode and DDU the drivers out. Then reinstall the drivers again.
> 
> Should fix it up. Not sure what causes it, but I've had it happen a few times.
> 
> People say to disable Intel SpeedStep, but if it's something you use, I would just wait until after trying the above.


Thank you - Will work on this tonight and report back.


----------



## itssladenlol

sultanofswing said:


> I have a 7800x system here that I tested on and I have my daily 10940x rig, Both systems score exactly the same with the same 3090 and overclocks.
> 
> I also have 2 Kingpin 3090's here that port royal responds very well to adding msvdd and I also have a FTW Hydrocopper that has no msvdd control but runs circles around both my Kingpins.
> This is my Hydrocopper before the Nvidia driver update that boosted PR score by a couple of hundred points.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 628 in Port Royal (Intel Core i9-10940X X-series Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
> 
> www.3dmark.com


What are the memory temps on the Hydrocopper? 😅


----------



## RaMsiTo

Received my 3090 FTW3 Ultra today from the RMA special program for the power balancing problems.

Here's a Port Royal run with the Kingpin BIOS on air. What do you think?











----------



## EarlZ

des2k... said:


> Kryonaut is probably one of the most stable / durable pastes. Had it on my 3900X for 3 years with no degradation.
> 
> It is sensitive to application; you need the right amount, spread with the applicator, because the die is very big on the 3090.
> 
> For stress testing at 600W+ on my 3090, it's the only paste that holds. Noctua, EK paste, and MX-4 always come out with higher temps after a few weeks and seem to run out (very liquid, pumps out) after a few 600W runs.


So what you are saying is that my application was bad, even though performance was good for the first 30 days?


----------



## Falkentyne

EarlZ said:


> So what you are saying is that my application was bad, even though performance was good for the first 30 days?


This is sort of a loaded answer he gave.
Kryonaut on a CPU is almost always going to be stable (unless it's a laptop CPU, then it's utter garbage) due to the high mounting pressure and balanced mount. I never had a problem with Kryonaut on any of my desktop CPUs (laptop BGA, on the other hand... yikes), and Kryonaut had no issues on my Radeon 290X either. But that's a pretty flat die.

On Ampere it's a lot less desirable because of how convex the core is (why do you think people are sanding their dies?). Kryonaut is just not viscous enough to be good long-term on a non-perfect surface. TFX would probably be better (Gel Maker Nano also seems to be good), but make sure you have good mounting pressure on the core first. Using thermal pads that are too hard, even if they are the correct thickness, can compromise core heatsink pressure and can even cause TFX to degrade (the paste itself doesn't degrade; the heat compression and expansion from a loose fit will cause hot spots slowly).


----------



## Lobstar

I'm at about a week with kryonaut extreme on my 3090 with optimus block and I haven't seen more than 38C with 27-30C water with over 500w loads. You're saying I'll have to re-apply it after a month?


----------



## GRABibus

The best BIOS I have tested so far for gaming:









EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)

www.techpowerup.com





Tested in Cold War:
=> I got my highest sustained and stable GPU frequencies
=> I got my lowest GPU temps for the same frequencies and voltages

Better than the Suprim X BIOS in Cold War for me.

For benching, forget it (tested in PR).


----------



## EarlZ

Falkentyne said:


> This is sort of a loaded answer he gave.
> Kryonaut on a CPU is almost always going to be stable (unless it's a laptop CPU, then it's utter garbage) due to the high mounting pressure and balanced mount. I never had a problem with Kryonaut on any of my desktop CPUs (laptop BGA, on the other hand... yikes), and Kryonaut had no issues on my Radeon 290X either. But that's a pretty flat die.
> 
> On Ampere it's a lot less desirable because of how convex the core is (why do you think people are sanding their dies?). Kryonaut is just not viscous enough to be good long-term on a non-perfect surface. TFX would probably be better (Gel Maker Nano also seems to be good), but make sure you have good mounting pressure on the core first. Using thermal pads that are too hard, even if they are the correct thickness, can compromise core heatsink pressure and can even cause TFX to degrade (the paste itself doesn't degrade; the heat compression and expansion from a loose fit will cause hot spots slowly).


I plan to change it to TFX, but not anytime soon unless I start to see 83°C while gaming.


----------



## Falkentyne

EarlZ said:


> I plan to change it to TFX, but not anytime soon unless I start to see 83°C while gaming.


Right, you replaced the thermal pads right? (GPU Core side)
With TM Odyssey or Gelids (Extreme or Ultimates) or something else?


----------



## Falkentyne

Lobstar said:


> I'm at about a week with kryonaut extreme on my 3090 with optimus block and I haven't seen more than 38C with 27-30C water with over 500w loads. You're saying I'll have to re-apply it after a month?


If your mounting pressure is solid, you shouldn't need to reapply it at all.
If the mounting pressure is weak, you'll see temp degradation, probably preceded by core to core hotspot delta increasing.


----------



## des2k...

Falkentyne said:


> If your mounting pressure is solid, you shouldn't need to reapply it at all.
> If the mounting pressure is weak, you'll see temp degradation, probably preceded by core to core hotspot delta increasing.


What's a good core-to-hotspot delta?


I'm still working at getting that perfect mount with the EK block.

Sanding down the standoffs did help a bit. For example, Quake RTX / Time Spy at 600W is now a 17.5°C delta vs 22°C+ before.

Regular games at 500W seem to be around an 11-13°C delta. That should usually be good enough, since I don't really go past 500W. The GPU-to-hotspot delta didn't improve as much, maybe 2°C better.


----------



## J7SC

Falkentyne said:


> If your mounting pressure is solid, you shouldn't need to reapply it at all.
> If the mounting pressure is weak, you'll see temp degradation, probably preceded by core to core hotspot delta increasing.


...on cooling, I found that the Thermalright pads are most excellent; they dropped my VRAM temp by 8-10°C. For the die, I am using Kryonaut (well spread on the die) and so far temps seem to be stable, but it has only been three weeks since the latest re-assembly, so time will tell. My prior experience with Kryonaut, MX4, and Gelid Extreme suggests that they tend to be stable temp-wise if the application and mounting pressure were OK in the first place. With 3090s, though, I am extra paranoid, since it's 28 billion transistors / 8nm / ±500W on a relatively small area. I'll probably end up redoing the waterblock mount... while temps are great and deltas stable, the GPU die has heavily raised lettering, and the mating area of the GPU block also wasn't smooth... one of these days, I'll 3D print my own w-block and lap the GPU.

*...I do have a weird issue though*: My Strix (dual BIOS, P-mode and Q-mode) had factory BIOS V2 versions on both. I then updated to the Strix 'V3 / resizable BAR' BIOS via Asus' download site recently. I was going to do the usual 'nvflash --protectoff / flash / --protecton' procedure, but the Asus .exe file did it all by itself. It worked fine. Then just a few days ago, I switched from the P-bios / Asus V3 rBAR to the Q-bios / Asus V2, cold-rebooted, and yes, it was the original. This time, I loaded the KPE 520W rBAR on 'Q'. That worked (and works) fine, too. GPU-Z will show 1920 MHz as the base boost, and EVGA as the vendor. However, when I reboot (cold or otherwise) after switching back to the P-bios (which should be the Asus V3 rBAR), I get a hybrid... GPU-Z will say EVGA (and cannot accurately display power consumption, just like on the Q-side 520W KPE BIOS), but the base boost is the correct-for-Strix 1860 MHz.

...I haven't tried to reflash the Asus V3 (using nvflash), and I suspect that the Asus V3 BIOS update '.exe' didn't do the --protecton step. Still, I have never run into this, even with dual-PEX, quad-SLI systems and cards that had 3 BIOSes (3x XOC) each during my HWBot days... has anyone ever heard of such a thing? Again, performance, temps et al. are great (below, all on the latest driver), not least as I have not yet finished the rest of the cooling system of the build and won't run max for now (and no XOC 1kW). But I am scratching my head re. this hybrid BIOS thing... anyone have any ideas?
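For reference, the manual nvflash procedure being discussed looks roughly like the following. This is a hedged outline from memory, not vendor documentation; the ROM filenames are hypothetical, and you should always dump the current BIOS before writing anything:

```shell
# Select the adapter if more than one GPU is present (index 0 here),
# and back up the currently active BIOS first.
nvflash64 --index=0 --save strix_backup.rom

# Disable EEPROM write protection, flash, then re-enable protection.
nvflash64 --index=0 --protectoff
nvflash64 --index=0 -6 kpe_520w_rbar.rom
nvflash64 --index=0 --protecton
```

The `-6` switch tells nvflash to proceed despite a PCI subsystem ID mismatch, which is what a cross-vendor flash (e.g. KPE BIOS on a Strix) trips over. Note this is exactly the `--protecton` step J7SC suspects the Asus updater skipped.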


----------



## EarlZ

Falkentyne said:


> Right, you replaced the thermal pads right? (GPU Core side)
> With TM Odyssey or Gelids (Extreme or Ultimates) or something else?


I didn't change the pads yet, only replaced the paste, as I was getting 85°C with 100% fan speed at 0.85V / 1850MHz. It was running at 69°C max for less than a week, then it tanked, which prompted me to replace the paste only.

Over a few minutes of a 450W load, the temp stayed at 85°C while the GPU core dropped to 1700MHz, which is really bad.

After the paste was replaced, I was getting 67°C max at 100% fan speed. Now, after 30 days, it's at 71°C, still at max fan speed and the same room temps.


----------



## des2k...

Is an 11-12°C GPU-to-hotspot delta considered good? Is it worth putting effort into a better mount / better temps?


----------



## Falkentyne

des2k... said:


> Is an 11-12°C GPU-to-hotspot delta considered good? Is it worth putting effort into a better mount / better temps?
> View attachment 2488331


11-12C Delta is average.


----------



## Lobstar

Ever since I put the waterblock on my card, I'm seeing horizontal flashes at certain overclocks (it doesn't happen at normal clocks or with light OCing). The card also isn't hitting power limits or anything when it happens. At like +105/+1248 it shows up in benchmarks but is sporadic; if I go up to +135/+1448 it's more frequent. The benchmark doesn't necessarily crash... it's just epilepsy-inducing flashing until it's over. It happens on all screens, though not in the same place. I'm just curious what that symptom is linked to... heat, power, bad thermal pad contact? This is my first watercooled video card, so I'm a bit blind.


----------



## cletus-cassidy

jomama22 said:


> I've had this issue pop up in the past. Uninstall Afterburner or whatever OC tool you're using, then reboot. Reinstall the drivers over the current ones (do the clean install), reboot. Then go to safe mode and DDU the drivers out. Then reinstall the drivers again.
> 
> Should fix it up. Not sure what causes it, but I've had it happen a few times.
> 
> People say to disable Intel SpeedStep, but if it's something you use, I would just wait until after trying the above.


Tried your recommendation. Also reinstalled Windows. Getting the same range of scores, which seem low for the specs. Here is the latest Time Spy run. Given I'm at ~2100 clock speed and not power limited due to the shunt mod, I'm thinking my score should be higher. Including some additional screen shot info below. Curious if anyone has any thoughts or can help me diagnose?

Time Spy: I scored 19 614 in Time Spy


----------



## Lobstar

cletus-cassidy said:


> Tried your recommendation. Also reinstalled Windows. Getting the same range of scores, which seem low for the specs. Here is the latest Time Spy run. Given I'm at ~2100 clock speed and not power limited due to the shunt mod, I'm thinking my score should be higher. Including some additional screen shot info below. Curious if anyone has any thoughts or can help me diagnose?
> 
> Time Spy: I scored 19 614 in Time Spy


Definitely seems weird. Here is my run with just slightly higher average clocks than yours: I scored 21 448 in Time Spy.


----------



## gfunkernaught

Falkentyne said:


> 11-12C Delta is average.


My peak core to hotspot delta is 9c at 500w


----------



## sultanofswing

cletus-cassidy said:


> Tried your recommendation. Also reinstalled Windows. Getting the same range of scores, which seem low for the specs. Here is the latest Time Spy run. Given I'm at ~2100 clock speed and not power limited due to the shunt mod, I'm thinking my score should be higher. Including some additional screen shot info below. Curious if anyone has any thoughts or can help me diagnose?
> 
> Time Spy: I scored 19 614 in Time Spy


Are you running a monitor that is greater than 1440p resolution?
I have had more issues with Time Spy than any other benchmark. I had one issue where it was scoring right at 1k points low with no hardware changes other than switching to a 4K display.
So for giggles I set the display to 1440p and my scores jumped back up to normal; but some days I could run it with the display at 4K and get normal scores, and other days it would score low.
Also, for Time Spy, make sure the power plan in Windows is set to maximum performance.
Regular Time Spy has been bugged for me for well over a year and never consistent, so I stopped running it. Time Spy Extreme works normally though.
This is running the 3DMark standalone, also.


----------



## fransarj

J7SC said:


> ...on cooling, I found that the Thermalright pads are most excellent; they dropped my VRAM temp by 8C-10C. For the die, I am using Kryonaut (well spread on the die) and so far, temps seem to be stable, but it only has been three weeks since the latest re-assembly, so time will tell. My prior experience with Kryonaut, MX4 and Gelid Ex suggest as well that they tend to be stable temp wise if the application and mounting / pressure was ok in the first place. With 3090s though, I am extra paranoid since it has 28 billion transistors / 8nm / +- 500 W on a relatively small area. I probably end up redoing the waterblock mount...while temps are great and deltas stable, the GPU die has heavily raised lettering, while the mating GPU block at that area also wasn't smooth...one of these days, I'll 3D print my own w-block, and lap the GPU .
> 
> *...I do have a weird issue though*: My Strix (dual bios, P-mode and Q-mode) had factory bios V2 versions on both. I then updated to Strix 'V3 /resizable BAR' via Asus' download site recently. I was going to do the usual 'nvflash --protectoff / flash / --protection procedure, but the Asus .exe file did it all by itself. It worked fine. Then just a few days ago, I switched from the P-bios / Asus V3 r_BAR to the Q-bios / Asus V2, cold-rebooted and yes, it was the original. This time, I loaded the KPE 520 W r_BAR on 'Q'. That worked (and works) fine, too. GPUz will show 1920MHz as the base boost, and EVGA as the vendor. However, when I reboot (cold or otherwise) after switching back to P-bios (which should be the Asus V3 r-BAR), I get a hybrid...GPUz will say EVGA (and cannot accurately display power consumption, just like on Q-520W KPE bios), but the base boost is the correct-for-Strix 1860 MHz
> 
> ...I haven't tried to reflash the Asus V3 (using nvflash) and I suspect that the Asus V3 bios update '.exe' didn't do the --protection step. Still, I have never run into this, even with dual PEX, quad-SLI systems and cards that had 3 bios (3x XOC) each during my HWBot days...has anyone ever heard of such a thing ? Again, performance, temps et al are great (below, all on the latest driver), not least as I have not yet finished the rest of the cooling system of the build and won't run max for now (and no XOC 1kw). But I am scratching my head re. this hybrid bios thing...anyone have any ideas ?
> 
> 
> View attachment 2488326




Can I see your VF curve? Good scores there. I only have 14.7 on Port Royal and 19.8 on Time Spy.
MSI Gaming X 3090 (Kingpin 520W BIOS). Upgraded the memory pads to Gelid Extreme + repasted with Kryonaut.
Does the 3900X affect the scores that much as well?


----------



## Beagle Box

J7SC said:


> ...on cooling, I found that the Thermalright pads are most excellent; they dropped my VRAM temp by 8C-10C. For the die, I am using Kryonaut (well spread on the die) and so far, temps seem to be stable, but it only has been three weeks since the latest re-assembly, so time will tell. My prior experience with Kryonaut, MX4 and Gelid Ex suggest as well that they tend to be stable temp wise if the application and mounting / pressure was ok in the first place. With 3090s though, I am extra paranoid since it has 28 billion transistors / 8nm / +- 500 W on a relatively small area. I probably end up redoing the waterblock mount...while temps are great and deltas stable, the GPU die has heavily raised lettering, while the mating GPU block at that area also wasn't smooth...one of these days, I'll 3D print my own w-block, and lap the GPU .
> 
> *...I do have a weird issue though*: My Strix (dual bios, P-mode and Q-mode) had factory bios V2 versions on both. I then updated to Strix 'V3 /resizable BAR' via Asus' download site recently. I was going to do the usual 'nvflash --protectoff / flash / --protection procedure, but the Asus .exe file did it all by itself. It worked fine. Then just a few days ago, I switched from the P-bios / Asus V3 r_BAR to the Q-bios / Asus V2, cold-rebooted and yes, it was the original. This time, I loaded the KPE 520 W r_BAR on 'Q'. That worked (and works) fine, too. GPUz will show 1920MHz as the base boost, and EVGA as the vendor. However, when I reboot (cold or otherwise) after switching back to P-bios (which should be the Asus V3 r-BAR), I get a hybrid...GPUz will say EVGA (and cannot accurately display power consumption, just like on Q-520W KPE bios), but the base boost is the correct-for-Strix 1860 MHz
> 
> ...I haven't tried to reflash the Asus V3 (using nvflash) and I suspect that the Asus V3 bios update '.exe' didn't do the --protection step. Still, I have never run into this, even with dual PEX, quad-SLI systems and cards that had 3 bios (3x XOC) each during my HWBot days...has anyone ever heard of such a thing ? Again, performance, temps et al are great (below, all on the latest driver), not least as I have not yet finished the rest of the cooling system of the build and won't run max for now (and no XOC 1kw). But I am scratching my head re. this hybrid bios thing...anyone have any ideas ?
> 
> ...


I've also had strange issues with Strix OC BIOS changes. 
What I've learned: 
The PC must be up and running when you slide the BIOS switch from one BIOS to the other. Then reboot.
The ASUS update tool can't be trusted to fully update the BIOS correctly unless your original BIOSes are installed.
To get the BIOSes back to default, I had to manually flash the two original BIOSes separately.
I could then run the ASUS BIOS change tool to get it 'right'.

I'm no longer using ASUS BIOSes at all because they suck, anyway.

I haven't tried the rebar updates because Intel doesn't support it on my CPU.


----------



## PLATOON TEKK

Bitspower's active (full blown) backplate for the Strix is available for pre-order on their site. If I'm not mistaken, it's the price of a normal block.

Wonder how well this will work.

Strix Backplate Pre-order (Bitspower Shop)


----------



## TheFinnishOne

PLATOON TEKK said:


> Bitspower active (full blown) backplate for strix is available for pre-order off their site. If I’m not mistaken, is price of a normal block.
> 
> wonder how well this will work
> 
> Strix Backplate Pre-order (Bitspower Shop)
> 
> View attachment 2488377


I will more than likely get that; it gives a cleaner look than the MP5 Works setup, for what that's worth.
300€ for the whole block and backplate setup is expensive, but not ridiculously so.


----------



## PLATOON TEKK

TheFinnishOne said:


> I will more than likely get that; it gives a cleaner look than the MP5 Works setup, for what that's worth.
> 300€ for the whole block and backplate setup is expensive, but not ridiculously so.


Same; beats the watercooled RAM cooler I have on the backplate at the moment. Wonder how the coil whine will be on this.

Also, yzonker, I think you are right about that 1000W Galax BIOS. Just a half-assed XOC.


----------



## EarlZ

Official measurements of the 3090 Suprim X pads, directly from MSI.

Credit to BibochOfficial on Reddit for sharing this:

3090 SUPRIM X Official Thermal Pads Measurements : nvidia (reddit.com)


----------



## J7SC

fransarj said:


> Can i see your VF curve? Good scores there. i only have 14.7 on Port and 19.8 on Timespy.
> MSI Gaming X 3090 (Kingpin 520W bios). Upgraded memory pads to Gelid Xtreme + Repasted to Kryonaut.
> Does the 3900X affect the scores that much as well?
> 
> View attachment 2488364
> 
> View attachment 2488363


...I don't use VF curves at all, just sliders (the build isn't even finished yet, as I keep changing mobo and cooling components). Here's the accompanying GPU-Z for the Port Royal run. As to 3900X vs 5950X, I'm not sure that makes much of a difference in PR (it will a bit in TS/TSX); RAM bandwidth and latency are also important though (I run 4x8 GB Samsung B-die at IF 1900 / DDR4-3800, 16-15-14-14). Also keep in mind that this system already has 1280x64 of rad space; not the 'craziest' setup here, but still good for cooling a 500W+ 3090.













Beagle Box said:


> I've also had strange issues with Strix OC BIOS changes.
> What I've learned:
> The PC must be up and running when you slide the BIOS switch from one BIOS to the other. Then reboot.
> The ASUS update tool can't be trusted to fully update the BIOS correctly unless your original BIOSs are installed.
> To get the BIOSs back to default, I had to manually flash the two original BIOSs separately.
> I could then run the ASUS BIOS change tool to get it 'right'.
> 
> I'm no longer using ASUS BIOSs at all because they suck, anyway.
> 
> I haven't tried the rebar updates because Intel doesn't support it on my CPU.


Thanks  ...I'll try the 'switch BIOS while running, then reboot' steps. But yeah, the KPE 520 rBAR BIOS works very well, and its indicated max PL is only 30W or so higher than the stock Strix V2 one... I'll probably just keep it on the KPE 520 rBAR and dial back PL and MHz a bit for FS2020, CP2077, etc. (never mind work stuff).


----------



## des2k...

gfunkernaught said:


> My peak core to hotspot delta is 9c at 500w


It changes? It's mostly the same delta for me, idle or load.

I may take more off the standoffs and replace with 0.5mm pads. But I noticed some caps (the rectangular flat ones near the memory) might hit the block; I will have to check clearance when I do the 0.5mm pads.

Got Iceberg pads 🙄 never heard of them, but they were $6 cheaper than Thermalright: 13 W/mK vs the 12.8 W/mK of the Thermalright.

I'm also due for a core lap; it's showing pretty ugly with more pressure, and I already checked that the cold plate is flat. The core lettering is already marking my cold plate.










----------



## J7SC

...speaking of raised lettering...


----------



## jomama22

TheFinnishOne said:


> I will more than likely get that; it gives a cleaner look than the MP5 Works setup, for what that's worth.
> 300€ for the whole block and backplate setup is expensive, but not ridiculously so.


I really fail to see how it cools the VRMs without a massive thermal pad.

Also, their blocks this gen perform really poorly compared to everyone else's. Take a look at their block's fins over the die and you'll see why.


----------



## jomama22

gfunkernaught said:


> My peak core to hotspot delta is 9c at 500w


Which is weird, considering you posted yours here and it's 10.5, and nearly 11 in the current reading:



gfunkernaught said:


> So after a proper repaste with LM and remount with new pads + ectotherm, my delta seems to be between 10.2-10.5. Not bad.
> View attachment 2483979


----------



## jura11

In my case, the top RTX 3090 GamingPro, which is running the KPE XOC 1000W BIOS, has a GPU core to hot-spot delta of around 11-12°C, no matter the load or how much the GPU is pulling. The bottom RTX 3090 GamingPro, running the same KPE XOC 1000W BIOS, has a larger delta of 14-15°C, but I think that's down to hotter VRAM temperatures, which are around 16-18°C higher than on the top one. 

Will probably do a repad this weekend because of that, hahaha. 

Hope this helps 

Thanks, Jura


----------



## des2k...

J7SC said:


> ...speaking of raised lettering...


That's some nasty laser marking; at high wattage you could be burning your chip 😬


----------



## TonyBrownTown

TonyBrownTown said:


> Amazon randomly, said 1 left. $30, here in two days hopefully!


OK OMG










Found software for it here

Shanghai Kerui Electronic Technology (coright.com)

Got the thing to connect and my cheap clip seems to hold OK -










I tried to write a few BIOS ROMs onto it last night and neither posted.

Inno 3D one - Inno3D RTX 3090 VBIOS

I also tried some GIGABYTE one, forget where I got it.

If I scroll down there is stuff?










When I go to open a BIOS, it asks me this:











I think maybe some settings are wrong? 

Trying to interpret this chip here - https://www.issi.com/WW/pdf/25WP016_080_040_020.pdf

IS25WP080

Any help much appreciated!
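One common gotcha with these programmers, worth checking against the settings question above: per its datasheet, the IS25WP080 is an 8 Mbit (1 MiB) part, and programmer software usually expects the image to match the chip capacity, padding any remainder with 0xFF (the erased state of NOR flash). A small sketch of that padding step (the filenames are hypothetical, and this is not the programmer's own tool; always read and save the chip's existing contents before writing):

```python
CHIP_SIZE = 0x100000  # IS25WP080: 8 Mbit = 1 MiB


def pad_rom(data: bytes, size: int = CHIP_SIZE) -> bytes:
    """Pad a vBIOS image with 0xFF up to the flash chip's capacity."""
    if len(data) > size:
        raise ValueError(f"image ({len(data)} B) is larger than the chip ({size} B)")
    return data + b"\xff" * (size - len(data))


# Hypothetical usage:
# with open("inno3d_3090.rom", "rb") as f:
#     padded = pad_rom(f.read())
# with open("inno3d_3090_padded.bin", "wb") as f:
#     f.write(padded)
```

If the programmer software's prompt when opening a file is about filling the unused remainder of the buffer, choosing 0xFF fill is the conventional answer for SPI NOR flash.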


----------



## jomama22

TonyBrownTown said:


> OK OMG
> 
> View attachment 2488420
> 
> 
> Found software for it here
> 
> 上海科睿电子科技 (coright.com)
> 
> Got the thing to connect and my cheap clip seems to hold OK -
> 
> View attachment 2488421
> 
> 
> I tried to print over a few BIOS roms last night and neither posted
> 
> Inno 3D one - Inno3D RTX 3090 VBIOS
> 
> I also tried some GIGBYTE one, forget where I got it.
> 
> If I scroll down there is stuff?
> 
> View attachment 2488422
> 
> 
> When I go to open a BIOS it asks me this?
> 
> View attachment 2488423
> 
> 
> 
> I think maybe some settings are wrong?
> 
> Trying to interpret this chip here - https://www.issi.com/WW/pdf/25WP016_080_040_020.pdf
> 
> IS25WP080
> 
> Any help much appreciated!







Take a look here. He also has a recent video of a different programmer as well with slightly different software.


----------



## inedenimadam

The EK H2O backplate is pretty legit. I may catch flak, but mining ETH at +1200 mem, the memory junction is 64C, down from 98C at +850. I'll take it. No noticeable difference in core temp though...maybe 1-2C? The core on my Trinity is already using LM and has a pretty low delta for mining, so I didn't take running averages for the core. Getting 125 MH/s.

Now I just need one of these block makers to figure out how to take my money for a KPE block and I will be a happy man.


----------



## CZonin

Anyone else notice a recent drop in PR scores? I haven't changed anything but I'm getting around -200 from where I was 5 days ago.

Here's the 2 runs: Result
Looks like my clock frequency dropped from 2070 to 1935.

Meanwhile I just did a Time Spy Extreme run and core frequency is at 2070: I scored 9 769 in Time Spy Extreme

I've done a few other PR runs since and each time it reports core frequency at 1935.

Edit: After looking through a few other PR runs from 5 days ago I had multiple that were at 1935 but were still ~200 points higher.

Edit 2: Found out my power plan changed to balanced instead of high performance. Changing it back fixed my scores.


----------



## cletus-cassidy

Lobstar said:


> Definitely seems weird. Here is my run just a little higher average clocks than yours. I scored 21 448 in Time Spy


Any thoughts on what could be capping me? Can't imagine it's the clock speed difference.


sultanofswing said:


> Are you running a monitor that is greater than 1440p resolution?
> I have had more issues with Timespy than any other benchmark, Had one issue where it was scoring right at 1k points low with no hardware changes other than I switched to a 4k display.
> So for giggles I decided to set the display to 1440p and my scores jumped back up to normal but some days I could run it with my display at 4k and get normal scores and other days it would score low.
> Also for timespy make sure the power plan in Windows is set to maximum performance.
> Regular timespy for me has been bugged for well over a year and never consistent so I stopped running it. Timespy Extreme works normally though.
> This is running 3DMark Standalone also.


Yes, I'm on a 4K LG CX48. Will try setting to 1440p and see if that makes a difference. Time Spy Extreme also had a relatively low score, however.


----------



## sultanofswing

cletus-cassidy said:


> Any thoughts on what could be capping me? Can't imagine it's the clock speed differ
> 
> 
> Yes, I'm on a 4K LG CX48. Will try setting to 1440p and see if that makes a difference. Time Spy Extreme also had a relatively low score, however.


Same display I have.

Sent from my OnePlus 7 Pro using Tapatalk


----------



## des2k...

The new Metro RTX-only edition looks very demanding.
I wonder if we're going to get another Quake RTX level of power usage on XOC!


----------



## Bobbylee

J7SC said:


> ...I don't use VF curves at all, just sliders (build isn't even finished yet as I keep on changing mobo and cooling components). Here's the accompanying GPUz for the Port Royal run. As to 3900X vs 5950X, I'm not sure that makes much of a difference in PR (will a bit in TS, TSX); RAM bandwidth and latency is also important though (I run 4x8 GB Samsung-B at IF1900/DDR4 3800; 16-15-14-14). Also keep in mind that this system has already 1280x64 rad space - not the 'craziest' setup here, but still good for cooling a 500W+ 3090.
> 
> View attachment 2488407
> 
> 
> 
> 
> 
> Thanks  ...I'll try the 'switch bios while running, then reboot' steps. But yeah, the KPE 520 r_BAR bios works very well, and it is only an indicated 30 W or so higher in max PL than the Strix V2 stock one...I probably just keep it on the KPE 520 r_BAR, just dial back PL and MHz a bit for FS2020, CP2077 etc (never mind work stuff)



Your GPU is running at PCIe 1.1, not 3.0 or 4.0, by the way. Remount the GPU (unplug and re-plug it in), try DDU, and if that doesn't work, reflash your vBIOS. You should get some better scores


----------



## Martin778

Can someone confirm this by the way? I could've sworn I've seen this happen with other Nvidia cards that were in idle, showing stuff like PCIE 1.1 in GPU-Z and they'd jump back to 3.0/4.0 in 3D load.


----------



## GRABibus

J7SC said:


> ...I don't use VF curves at all, just sliders (build isn't even finished yet as I keep on changing mobo and cooling components). Here's the accompanying GPUz for the Port Royal run. As to 3900X vs 5950X, I'm not sure that makes much of a difference in PR (will a bit in TS, TSX); RAM bandwidth and latency is also important though (I run 4x8 GB Samsung-B at IF1900/DDR4 3800; 16-15-14-14). Also keep in mind that this system has already 1280x64 rad space - not the 'craziest' setup here, but still good for cooling a 500W+ 3090.
> 
> View attachment 2488407
> 
> 
> 
> 
> 
> Thanks  ...I'll try the 'switch bios while running, then reboot' steps. But yeah, the KPE 520 r_BAR bios works very well, and it is only an indicated 30 W or so higher in max PL than the Strix V2 stock one...I probably just keep it on the KPE 520 r_BAR, just dial back PL and MHz a bit for FS2020, CP2077 etc (never mind work stuff)


did you try to tweak your 5950X and see if score increases ?

SMT disabled
1 CCD disabled
Core count = 4
All core OC (in fact 4 cores then ) highest as possible.


----------



## des2k...

Bobbylee said:


> Your gpu is running in pcie 1.1 not 3.0 or 4.0 by the way. Remount the gpu (unplug and replugin), try a ddu, if that doesn’t work reflash your vbios. Should get some better scores


all GPUs idle in the lowest PCIe power state; if you want to pin it at PCIe 4.0 x16 all the time you need to disable PCIe Link State Power Management in the Windows power plan


----------



## jomama22

des2k... said:


> all GPUs idle in lowest pcie power state, if you want to pin it pcie 4.0 x16 all the time you need to disable PCIE power management in the windows power plan


Doesn't seem to work as far as I can tell. Have tried it recently with no success. Not that it matters either way.

If you can get it to lock at 4.0 lmk.


----------



## J7SC

Bobbylee said:


> Your gpu is running in pcie 1.1 not 3.0 or 4.0 by the way. Remount the gpu (unplug and replugin), try a ddu, if that doesn’t work reflash your vbios. Should get some better scores


...ahem, not really - just running 'normal' instead of 'prefer maximum power' in NV Tab, also see next entry below



des2k... said:


> all GPUs idle in lowest pcie power state, if you want to pin it pcie 4.0 x16 all the time you need to disable PCIE power management in the windows power plan


----------



## D0MINUS

Haven't seen it mentioned here, so this may be news to some: there's a new Bykski waterblock with an active backplate.
Here's the Strix version


----------



## J7SC

D0MINUS said:


> Haven't seen it mentioned here, so this may be news to some: there's a new Bykski waterblock with an active backplate.
> Here's the Strix version


Nice - I wonder if the price is for both parts combined; if so, it's a decent deal



GRABibus said:


> did you try to tweak your 5950X and see if score increases ?
> 
> SMT disabled
> 1 CCD disabled
> Core count = 4
> All core OC (in fact 4 cores then ) highest as possible.


tx..and no, I haven't tweaked the 5950X yet though the Asus Dark Hero does a decent job out of the box. For now, I just want to finish the cosmetics of this build before I disable CCDs / limit cores for serious benching...right now, it's in productivity mode, need all the cores and threads I can get for that .


----------



## gfunkernaught

des2k... said:


> It changes ? It's mostly the same delta for me, idle or load.


I only looked at max temp, just looked at it now and it is still 9c.


----------



## gfunkernaught

jomama22 said:


> Which is weird considering you posted yours here and it's 10.5 and nearly 11 in the current reading:


It actually is weird. I never really paid attention to hot spot. Tonight, after playing Cyberpunk and right now running Bright Memory, the peak delta is 9.7c, current is 10c.


----------



## Zogge

Bobbylee said:


> 38 fans?! Please share a pic lol


Only 38 ? Around 50 for my system.


----------



## fransarj

ArcticZero said:


> My dilemma. Seasonic Prime Gold 1000w, and absolutely no GX/PX/TX units above 750w available anywhere. I've had to keep it reined in at 420-450w or I hit OCP after extended gaming sessions. Happened once on FFXV. Crazy.


I got my FSP G Power Pro 1000W Gold from PCHub. Keeps my 500W BIOS happy, around 7.2k. The cables are crap though; get extensions for better aesthetics. I also checked: this PSU is Tier 1 in the LTT PSU tier list.


----------



## Lord of meat

Hey you guys🤪
I got my GPU to go 2085-2115 while gaming, core temp not going over 53c on water (EKWB and a 240mm rad) after 30 min of Port Royal; in games it stays under 48.

The memory temp is 60-63c and the junction is 68-70c.
I am using the KingPin 520W BIOS on an MSI Trio.

The problem: I can't overclock the memory past +500; it causes the drivers to crash in games. The core crashes in games and Port Royal when it's over 2145.

I am planning on doing a remount and swapping the pads provided by EK (3.8 W/mK) for ones from Thermalright (12.8 W/mK) next week.
Would that help or am I wasting my time?

Thanks for your time.


----------



## jomama22

gfunkernaught said:


> Actually is weird. I never really paid attention to hot spot. Tonight after playing cyberpunk and right now running bright memory, the peak delta is 9.7c, current is 10c.
> View attachment 2488483


Right, so it fluctuates. 

The only reason I said something is because you added a post that brought 0 value to the conversation other than trying to make yourself look good, I guess? 

It fluctuates. So yeah, still not 9*. You also don't need to edit out what you had originally said in your comment either.


----------



## Bobbylee

des2k... said:


> all GPUs idle in lowest pcie power state, if you want to pin it pcie 4.0 x16 all the time you need to disable PCIE power management in the windows power plan


I see, I wasn't aware of this. I run the ultimate performance power plan, so mine has never said PCIe 1.1. I saw some vids about the 3090 underperforming because it was running at 1.1 during games though; that's why I made the recommendation


----------



## Bobbylee

J7SC said:


> ...ahem, not really - just running 'normal' instead of 'prefer maximum power' in NV Tab, also see next entry below







I see, wasn’t aware of this. Mines always on 3.0. Not sure why your scores aren’t higher then, thought that was the problem


----------



## GRABibus

Lord of meat said:


> Hey you guys🤪
> I got my gpu to go 2085-2115 while gaming, core temp not going over 53c on water (ekwb and 240mm rad) after 30 min of port royal, in games it stays under 48.
> 
> The memory temp is 60-63c and junction is 68-70c.
> I am using the kingpin bios 520w on msi trio.
> 
> The problem: i cant overclock the memory past +500 it causes drivers to crash in games. the core crashes on games and port when its over 2145.
> 
> I am planning on doing a remount and swapping the pads provided by ek 3.8w/mk with ones from thermalight 12.8w/mk next week.
> Would that help or am I wasting my time?
> 
> Thanks for your time.


It can help if your memory silicon has margin to overclock more once temps are reduced,

but it might not help if your silicon's overclock potential is already at the limit.

I think about the same for my Strix on air: the highest offset I can set before a crash is +1000MHz, with temps that can go into the 80's in BFV, for example.

I am thinking about changing pads (I have Gelid Extreme 12.8 W/mK, in thicknesses from 0.5mm to 2mm)


----------



## Lord of meat

GRABibus said:


> it can help if your silicon memory has margin to overclock more due to high temps...
> 
> but it could not help if your silicon overclock potential is already at the limit.
> 
> I think about the same for for my strix on air : the highest offset I can set before crash is +1000MHz with temps which can go in 80’s in BFV for example.
> 
> I think about changing pads (I have gellid extreme 12,8mw/k thickness from 0.5mm to 2mm)


I will report back once I swap it. I have a feeling it won't change the oc, but if temp drop ill be happy anyway.


----------



## inedenimadam

Lord of meat said:


> Hey you guys🤪
> 
> The problem: i cant overclock the memory past +500 it causes drivers to crash in games. the core crashes on games and port when its over 2145.
> 
> I am planning on doing a remount and swapping the pads provided by ek 3.8w/mk with ones from thermalight 12.8w/mk


Not to knock your methodology, but have you tried only clocking the memory? +500 does seem low. Are you running into the TDP wall? Do you notice performance regression before crashing? Like are you scoring lower at +450 than you did at +400? 

Thermal pad replacement probably isn't necessary, and those memory temps are fine. I don't think it's a bad idea to replace, but far from needed.


----------



## Lord of meat

inedenimadam said:


> Not to knock your methodology, but have you tried only clocking the memory? +500 does seem low. Are you running into the TDP wall? Do you notice performance regression before crashing? Like are you scoring lower at +450 than you did at +400?
> 
> Thermal pad replacement probably isn't necessary, and those memory temps are fine. I don't think it's a bad idea to replace, but far from needed.


If I don't overclock the core I can run the memory to +1050. It's when I combine the two that I get issues. As for power, I give it 520w and max voltage; the PSU is a brand new 1000w Antec Titanium and the CPU is a Ryzen 3950X.
Should have plenty of watts.

Port Royal scores go up, but the overclock will fail in stress tests and games if the memory is over +500, unless I run the core lower.


----------



## GRABibus

inedenimadam said:


> +500 does seem low.


It is linked to silicon potential (the lottery).

+1000MHz stable, for example, is not so common.


----------



## inedenimadam

Lord of meat said:


> If I don't overclock the core I can run the memory to +1050. It's when I combine the two I get issues. As far as power I give it 520w and max voltage, psu is brand new 1000w antec titanium cpu is ryzen 3950.
> Should have plenty watts.
> 
> Port royal scores go up but the overclock will fail in stress test and games if the memory is over +500 unless i run core lower.



It sounds to me like you are running into power limit, and when the core starts dropping bins the new clock/voltage isn't stable at the same offset. You can try to pinpoint which voltage/clock bin you are having issues with, and reduce the clock for that one or few points on your clock/volt curve. Or you could just overclock the mem first and reduce the core until it stops crashing.


GRABibus said:


> it is linked to silicon potential (lottery).
> 
> +1000Mhz stable for example is not so common.


Indeed, but 500 is low, and GDDR6X has an error correction layer...meaning he would lose performance well before crashing. I can crank the memory on both of my 3090s to 1500 and still make it through Port Royal, but I start losing performance around 1200 on my Trinity, and my KingPin starts to run out of steam around 900.
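A hedged sketch of how you could automate finding that drop-off point; the benchmark hook and every number below are made up, the point being that the score, not stability, is the signal to watch:

```python
# Hypothetical sweep to locate the GDDR6X offset where error correction
# starts costing performance: the score peaks, then declines well before
# any crash. toy_benchmark stands in for a real scripted run (e.g. a
# Port Royal pass); its numbers are invented for illustration.
def best_mem_offset(run_benchmark, offsets):
    """Return (best_offset, scores); a decline past the peak suggests
    ECC retries eating bandwidth rather than outright instability."""
    scores = {off: run_benchmark(off) for off in offsets}
    return max(scores, key=scores.get), scores

def toy_benchmark(offset):
    # Score rises with clock, then (made-up) ECC retries cost 3 points
    # per MHz past a fictional threshold of +1200.
    return 14000 + offset - max(0, offset - 1200) * 3

best, scores = best_mem_offset(toy_benchmark, range(0, 1600, 100))
```

With the toy model, the sweep picks +1200 even though +1500 still "runs fine", which is exactly the regression-before-crash behaviour described above.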


----------



## Beagle Box

Lord of meat said:


> If I don't overclock the core I can run the memory to +1050. It's when I combine the two I get issues. As far as power I give it 520w and max voltage, psu is brand new 1000w antec titanium cpu is ryzen 3950.
> Should have plenty watts.
> 
> Port royal scores go up but the overclock will fail in stress test and games if the memory is over +500 unless i run core lower.


I think that's common. My Strix OC will memory OC well, but I have to run lower core clocks. Occurs for both games and benching. 
I wonder what's limiting the combination of curve OC + memory OC? Heat? Some sort of internal power limit?
My card isn't shunted. Maybe shunting alters these limits? 
Dunno...


----------



## GRABibus

Beagle Box said:


> I think that's common. My Strix OC will memory OC well, but I have to run lower clocks. Occurs for both games and benching.
> I wonder what's limiting the combination of (PL curve OC + Memory OC) Heat? Some sort of internal Power Limit?
> My card isn't shunted. Maybe shunting alters these limits?
> Dunno...


I will check if by reducing core freq my memory OC will increase....
Not sure of that...

currently, my max memory OC is +1000MHz, combined with my max core frequency.

What I see also is that the junction memory temp is equal to the hotspot GPU temp... Maybe it is random....


----------



## J7SC

I think that as long as you have a more or less stock PL (= 'overall power budget') limit, there will always be a potential trade-off between sustained GPU MHz and sustained VRAM MHz - and that's before other factors (i.e. temp) that play a role with NV Boost.

...an unscientific observation re. VRAM on the Asus Strix V2/3 and KPE 520W: I think the latter allows for one or two higher clock bins, but the GDDR6X error correction drop-off is steeper...in a way, that's a good thing, as it makes it easier to lock in the most efficient top VRAM clock.

...finally, a comparison between PCIe 3.0 and 4.0 on raw performance via the 3DM PCIe test...when I got my Strix back in February, it ran in an Intel X99 HEDT system (PCIe 3.0, per the second entry below). On air and w/o r_BAR, it still managed a PR of 15038 on PCIe 3.0...I don't think a single 3090 can easily saturate PCIe 3.0 x16 (though it probably gets closer than any other mainstream card). Anyway, running PCIe 4.0 is nice, but in apps, it won't 'automatically double your scores'


----------



## gfunkernaught

jomama22 said:


> Rights so it fluctuates.
> 
> Only reason I said somthing is because you added a post that brought 0 value to the conversation other then trying to make yourself look good I guess?
> 
> It fluctuates. So yeah, still not 9*. You also don't need to edit out what you had originally said in your comment either.


_Cough_ tough crowd.


----------



## J7SC

gfunkernaught said:


> _Cough_ tough crowd.


...don't sweat it ...500 W +- 3090s do all the sweating needed already


----------



## Beagle Box

GRABibus said:


> I will check if by reducing core freq my memory OC will increase....
> Not sure of that...
> 
> currently, my max memory OC is +1000MHz, combined with my max core frequency.
> 
> What I see also is that the junction memory temp is equal to the hotspot GPU temp... Maybe it is random....


Sure it fluctuates. Running Superposition 8K Optimized with max memory OC (+1620) and a moderate core clock OC (+195) gives me an average hotspot-to-junction difference of 13.5*, but the difference between the maximum values is 23*.

*Edit: HWiNFO blew up*

Rerunning a different scenario in Port Royal (+1300, +195) results in an average hotspot-to-junction difference of 10.4*. The difference between the maximum values is 18.2*.
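For what it's worth, the averaged per-sample delta and the gap between the two maxima genuinely are different quantities, since the two peaks need not happen at the same moment. A toy illustration with made-up sample values:

```python
# Average of per-sample deltas vs the gap between the two peak values:
# these measure different things, because the peaks need not coincide
# in time. Toy temperature samples (hypothetical, in degrees C).
core    = [45, 50, 52, 48]
hotspot = [55, 62, 61, 60]

avg_delta = sum(h - c for c, h in zip(core, hotspot)) / len(core)
peak_gap  = max(hotspot) - max(core)  # compares peaks, not paired samples
```

Here the average delta works out higher than the peak-to-peak gap, so neither number is "wrong"; they just answer different questions about the same log.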


----------



## GRABibus

Beagle Box said:


> Running Superposition 8kO with max memory OC (+1620) with a moderate OC Core Clock OC (+195)


Ahaha 😊

+195MHz moderate overclock ??

I can't go beyond +160MHz in any bench, and I was considering that a good OC..

Now, with your comment, I think I have a trash Strix? 😂

I am on air...


----------



## Beagle Box

GRABibus said:


> Ahaha 😊
> 
> +195MHz moderate overclock ??
> 
> I can’t go beyond 160MHz in any bench and was considering it is a good OC..
> 
> now, with your comment , I think I have a trash Strix ? 😂
> 
> I am on air...


Okay, the Core Clock OC may be a bit better than 'moderate'. I can't run max Core OC + max memory OC with the MSI Suprim BIOS when ambient temp is this warm. 
My Strix OC GPU is probably above average, but it's water-cooled. 

My stock Strix cooler was poorly installed. It was lopsided. Looked like someone sat on it. 
I couldn't break 15000 in PR on air with the stock ASUS BIOS and old drivers.

I think your GPU would be much better than average if water-cooled.


----------



## gfunkernaught

Lord of meat said:


> Hey you guys🤪
> I got my gpu to go 2085-2115 while gaming, core temp not going over 53c on water (ekwb and 240mm rad) after 30 min of port royal, in games it stays under 48.
> 
> The memory temp is 60-63c and junction is 68-70c.
> I am using the kingpin bios 520w on msi trio.
> 
> The problem: i cant overclock the memory past +500 it causes drivers to crash in games. the core crashes on games and port when its over 2145.
> 
> I am planning on doing a remount and swapping the pads provided by ek 3.8w/mk with ones from thermalight 12.8w/mk next week.
> Would that help or am I wasting my time?
> 
> Thanks for your time.


I have a Trio and yes, those pads will help a lot. I also used liquid metal instead of paste on the core, and some Ectotherm paste on the VRM and VRAM chips under the pads. I put a bunch of M.2 heatsinks on the back of the card, with a large piece of 5 W/mK pad, pulling heat away from the card. My mem temp maxes out at 62c with +1000 on the mem offset, and only when the card is being pushed, like with Quake 2 RTX. I also have the EK block, but with two 360 rads and a 240 rad. My core temp at 500w is around 40c.

So yeah, definitely do the better pads and, if you're up to it, liquid metal. Any way you can move heat away from the card as efficiently as possible will help. There have been several mentions of the EK block losing efficiency past 500w, which is true.


----------



## J7SC

Beagle Box said:


> Sure it fluctuates. Running Superposition 8kO with max memory OC (+1620) with a moderate OC Core Clock OC (+195) will give me an average hotspot-junction difference of 13.5*, but the difference between maximum values is 23*.
> 
> *Edit: HW INFO blew up*
> 
> Rerunning a different scenario in Port Royal (+1300, +195) results in an average hotspot-junction difference of 10.4*. The difference between maximum values is 18.2*.


HW INFO blew up ??


----------



## GRABibus

Beagle Box said:


> Okay, the Core Clock OC may be a bit better than 'moderate'. I can't run max Core OC + max memory OC with the MSI Suprim BIOS when ambient temp is this warm.
> My Strix OC GPU is probably above average, but it's water-cooled.
> 
> My stock Strix cooler was poorly installed. It was lopsided. Looked like someone sat on it.
> I couldn't break 15000 in PR on air with the stock ASUS BIOS and old drivers.
> 
> I think your GPU would be much better than average if water-cooled.





Beagle Box said:


> Okay, the Core Clock OC may be a bit better than 'moderate'. I can't run max Core OC + max memory OC with the MSI Suprim BIOS when ambient temp is this warm.
> My Strix OC GPU is probably above average, but it's water-cooled.
> 
> My stock Strix cooler was poorly installed. It was lopsided. Looked like someone sat on it.
> I couldn't break 15000 in PR on air with the stock ASUS BIOS and old drivers.
> 
> I think your GPU would be much better than average if water-cooled.


To break 15k in PR, forget it on the Asus BIOS on my Strix.
Also, with the penalty Zen 3 induces in PR, it's not simple...

To break 15k for me:
All fans at 100%
2 fans pulling air from the backplate
Open PC
KP 520W BIOS
21 degrees max ambient
+175 on core (if this doesn't crash)
+1075 on memory.
Complete CPU tweaking in BIOS:
1 CCD disabled
Core count = 4
SMT disabled
All-core (4 cores) OC at 4.9GHz

What a lot of effort 😊


----------



## Beagle Box

J7SC said:


> HW INFO blew up ??


Yeah. 
It just stopped reporting temps for some reason on the previous PR run. 
It's v7.03-4450, which I believe is a beta. 
Another curious anomaly is that it's reporting my max PCIe speed as 8.0 GT/s no matter what I'm running. 
Of course, the last version did the same. That doesn't seem right to me...

I'm running the first MB BIOS that would support this CPU and have 2 NVMe drives. That shouldn't be limiting my lanes, though, but who knows?

Wait! Is 8.0 GT/s the same as x16 3.0? 
I'm so confused...


----------



## pat182

Who wants my PG27UQ? I think I'm gonna get the new PG32UQX HDR1400, it's looking really nice


----------



## GRABibus

pat182 said:


> who wants my pg27uq ? i think im gonna get the new pg32uqx hdr1400, looking rlly nice


off topic ?


----------



## Beagle Box

GRABibus said:


> To break 15k on PR, forget it also on Asus bios on my strix.
> Also, due the penalty induced with Zen 3 on PR, not simple...
> 
> to break 15k for me :
> All fans at 100%
> 2 fans pulling air from the backplate
> Open PC
> Bios KP 520W
> 21 degrees max ambient
> +175 on core (if this doesn’t crash)
> + 1075 on memory.
> Complete CPU tweaking in bios :
> 1 CCD disabled
> 4 core count
> SMTdisabled
> All core (4 cores) OC 4,9GHz
> 
> What a lot of efforts 😊


Yeah. I watched your struggles. You deserve a medal for determination. 
I am curious why you seem so adamantly opposed to water-cooling?


----------



## pat182

GRABibus said:


> off topic ?


Well, it's a great monitor for my 3090 to drive. 
Can't wait to try the Exodus PC Edition next week


----------



## GRABibus

Beagle Box said:


> Yeah. I watched your struggles. You deserve a medal for determination.
> I am curious why you seem adamantly opposed to water-cooling. ????


😊 thanks.

I am not skilled in WC custom loops.
I would need to delegate this to a specialist like you all 😊.


----------



## gfunkernaught

pat182 said:


> who wants my pg27uq ? i think im gonna get the new pg32uqx hdr1400, looking rlly nice


I think you're supposed to post that in the classified forum. I tried that once when I asked if anyone wanted my 2080 ti block and got slapped. 😅


----------



## Beagle Box

GRABibus said:


> 😊 thanks.
> 
> I am not skilled in WC custom loop.
> I would need to dedicate this to a specialist as you all 😊.


I think you should consider water-cooling only the GPU. 
Not difficult and great for gaming. My GPU performance was greatly improved.


----------



## GRABibus

Beagle Box said:


> I think you should consider water-cooling only the GPU.
> Not difficult and great for gaming. My GPU performance was greatly improved.


I have to learn about this, even for only the GPU.

What is the best guide for this?


----------



## J7SC

Beagle Box said:


> Yeah.
> Just stopped reporting temps for some reason on the previous PR run.
> It's v7.03-4450 which I believe is a beta.
> Another curious anomaly is that it's reporting my max PCIe speed as 8.0 GT/s no matter what I'm running.
> Of course, the last version did the same. That doesn't seem right to me...
> 
> I'm running the first MB BIOS that would support this CPU and have 2 NVMe drives. Shouldn't be limiting my lanes, though, but who knows?
> 
> *Wait! is 8.0 GT/s the same as x16 3.0?*
> I'm so confused...


Yeah, PCIe x16 3.0 is an indicated 8.0 GT/s, while PCIe x16 4.0 is an indicated 16.0 GT/s; all theoretical per-lane transfer rates, of course
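The back-of-envelope conversion from GT/s to usable link bandwidth, assuming the 128b/130b line encoding that Gen 3 and Gen 4 both use:

```python
# Rough theoretical PCIe link bandwidth: per-lane transfer rate (GT/s,
# ~1 Gbit/s per lane) times lane count, scaled by the 128b/130b line
# encoding of Gen 3/4, converted from gigabits to gigabytes per second.
def pcie_bandwidth_gb_s(gt_per_s, lanes=16, encoding=128 / 130):
    return gt_per_s * lanes * encoding / 8

gen3_x16 = pcie_bandwidth_gb_s(8.0)   # x16 3.0, ~15.75 GB/s
gen4_x16 = pcie_bandwidth_gb_s(16.0)  # x16 4.0, ~31.51 GB/s
```

So 8.0 GT/s with x16 lanes is indeed the Gen 3 link, and doubling the transfer rate to 16.0 GT/s doubles the theoretical bandwidth.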

@GRABibus - it is understandable to be cautious about w-cooling, depending also on the warranty conditions in your jurisdiction...then again, Asus offers what looks like a TUF w/ EK block from the factory. As long as you're careful and follow the instructions, it's not a big deal. The last time I had a new card that wasn't getting w-cooling was probably 2013 or so...

With the NV Boost algorithm being so sensitive to temps, it does make sense to consider w-cooling. That said, when my Strix was air-cooled, the air cooler did a really good job (subject to fan 'sound') for a 500W card. Yet, like others also observed, the factory assembly left some things to be desired, i.e. contact impressions on certain thermal pads, alignment etc. In any case, there must be some specialty shops near you which can do at least the GPU cooler assembly, thermal pads and thermal grease for you...building the rest of the loop yourself is actually kind of fun, as long as you follow basic principles (res above pump, thorough cleaning of components, leak testing and so forth). I think your Strix shows some great potential judging by your earlier bench posts, but temp throttling is an issue.


----------



## Shawnb99

GRABibus said:


> 😊 thanks.
> 
> I am not skilled in WC custom loop.
> I would need to dedicate this to a specialist as you all 😊.


It's amazingly easy to do. Leaks should be extremely rare, and it's even rarer for one to damage parts




GRABibus said:


> I have to learn on this, even for only the GPU.
> 
> what is the best guide for this ?


Just ask away. Create a thread in the WC section stating your specs and what you want from the loop, and we'll be glad to help


----------



## Sean McCargar

Falkentyne said:


> This is completely wrong. The power draw has been tested with current clamps already. The power draw readings become bugged out when you flash the 1000W vbios on many non Kingpin cards. The 8 pin 1 and 2 are actually pretty balanced. You are NOT drawing 300W through a single 8 pin unless you actually set the TDP to 100% and even if you do that, you aren't going to have the VID or clocks high enough to exceed 750W under worst circumstances on air or water.


I did the calculations and that is not the case. Power draw equaled 190w plus 309w plus 75w. I verified this using the software for my AX1200i; it shows the total power draw. I wasn't eating up 300 plus 300 plus 75. If you have the test results from the current clamp, that would be more believable lol.


----------



## CH33TAH

I recently built an RTX 3090 system with a Ryzen 5600X and have been running a bunch of benchmarks. I have been especially interested in 8K performance in both recent and older titles, so I decided to benchmark a bunch of games and put my numbers out there..... Check it out  

8K Gaming With RTX 3090 - ExtremeBench


----------



## Beagle Box

GRABibus said:


> I have to learn on this, even for only the GPU.
> 
> what is the best guide for this ?


Not certain there's an actual _guide_, per se. 
I'm sure there's lots of info on this site scattered everywhere. 
I found it best to start very simply with a CPU AIO years ago and moved on from there. I no longer recommend AIOs. 

There's certain hardware that's needed and other parts that are nice-to-have. 
Unfortunately, there's a lot of heated discussion about which parts fall into each category.


----------



## gfunkernaught

CH33TAH said:


> I recently built a RTX 3090 system with a Ryzen 5600X. I have been running a bunch of benchmarks. I have been especially interested in 8K performance in both recent and older titles. So I decided to benchmark a bunch of games and put my number out there..... Check it out
> 
> 8K Gaming With RTX 3090 - ExtremeBench


Very cool. I've been playing bf1 and gta5 in 8k, both using the in game scaler at 200%. I never actually averaged out the fps but from what I remember, I'd say avg 45fps without turning anything down.


----------



## Falkentyne

Sean McCargar said:


> I did the calculations and that is not the case. Power draw equaled 190w plus 309 plus 75. I verified this using the software for my 1200axi. It shows the total power draw. I wasnt eating up 300 plus 300 plus 75. If you have the test results from the current clamp that would be more believable lol.


These were posted many, many pages up in this thread by people who were using the 1000W Kingpin bios and were scared they were pulling 350W through a single 8-pin with a 600W total power limit. They checked with a current clamp and it was nowhere near that.

I don't have the links; you'll need to do your own searching. It was posted many pages up in a debate about that, and they found out they were drawing balanced power (like 220W + 215W + something) rather than 350W + 150W from the two 8-pins (by checking with the current clamp).
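The balanced-draw point is easy to sanity-check with I = P / V on the 12 V rail. A minimal sketch, using the nominal PCIe ratings (8-pin 150 W, slot 75 W) and the rough per-connector numbers quoted above as example inputs:

```python
# Per-connector current on the 12 V rail: I = P / V.
# Nominal PCIe ratings: 8-pin connector 150 W, motherboard slot 75 W.

def amps(watts, volts=12.0):
    return watts / volts

# Roughly balanced draw as reported up-thread (~220 W + 215 W + slot):
for label, w in [("8-pin #1", 220), ("8-pin #2", 215), ("slot", 75)]:
    print(f"{label}: {w} W -> {amps(w):.1f} A")

# A single 8-pin carrying 350 W would mean ~29 A through its 12 V pins,
# which is why the current-clamp readings were the deciding evidence.
print(f"350 W on one 8-pin -> {amps(350):.1f} A")
```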


----------



## KedarWolf

GRABibus said:


> I have to learn on this, even for only the GPU.
> 
> what is the best guide for this ?


I bought an EKWB Phoenix 360 AIO that comes with two quick disconnects on the hoses, attached two matching quick disconnects to the hoses on my GPU and Strix block, problem solved!!

Nearly as good performance as a custom loop with little hassle; well, except the QDCs restrict flow a bit, but it's great for throwing in a new motherboard or swapping out the CPU etc.

I think you can only get Phoenix 240s on their website now, but you may be able to find a 280 or 360 AIO elsewhere.

Edit: Or you can buy something like the below, remove the CPU block and add a QDC to it for the GPU block. You need to add QDCs to the hoses to the GPU block as well.









Alphacool Eisbaer 360 CPU - Black


With the “Eisbaer”, Alphacool is fundamentally revolutionizing the AIO cooler market. Where traditional AIO CPU-coolers are disposable products which are neither upgradeable nor refillable, the Alphacool “Eisbaer” is modularly built and...




www.aquatuning.us





Or just use your own block with these and use the existing hoses and QDCs.









All-in-One GPU






www.aquatuning.us


----------



## WillP

KedarWolf said:


> I bought an EKWB Phoenix 360 AIO that comes with two quick disconnects on the hoses, attached two matching quick disconnects on the hoses to my GPU and Strix block, problem solved!!
> 
> Nearly as good performance as a custom loop with little hassle, well, except the QDCs restricts flow a bit, but great for throwing in a new motherboard or swapping out the CPU etc.
> 
> I think you can only get Phoenix 240s on their website now, but may be able to find a 280 or 360 AIO elsewhere.
> 
> Edit: Or you can buy something like the below, remove the CPU black and add a QDC to it for the GPU block. You need to add QDCs to the hoses to the GPU block as well.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Alphacool Eisbaer 360 CPU - Black
> 
> 
> With the “Eisbaer”, Alphacool is fundamentally revolutionizing the AIO cooler market. Where traditional AIO CPU-coolers are disposable products which are neither upgradeable nor refillable, the Alphacool “Eisbaer” is modularly built and...
> 
> 
> 
> 
> www.aquatuning.us
> 
> 
> 
> 
> 
> Or just use your own block with these and use the existing hoses and QDCs.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> All-in-One GPU
> 
> 
> All-in-One GPU
> 
> 
> 
> 
> www.aquatuning.us


That seems very good value considering the rad and cold plate are copper. Good for someone who wants to spread the cost of a loop and do things in stages. I'm not sure how easy filling the loop is at a later stage; is the small reservoir accessible and easy to top up?


----------



## WillP

deleted, ignore


----------



## KedarWolf

WillP said:


> That seems very good value considering the rad and cold plate are copper. Good for someone who wants to spread the cost of a loop and do things in stages. I'm not sure how easy filling the loop is at a latter stage, is the small reservoir accessible and easy to top up?


You top it up through the reservoir; it has round plug screws you take off for that. And you can top it off whether you top-mount or front-mount. If you front-mount, like any AIO it's better to have the hoses at the bottom.

Edit: Hard to find any of the EKWB ones for sale now. I did a quick Google search.


----------



## PLATOON TEKK

Got the EVC2SX working on a Strix without the need to solder (makes RMA a lot easier).










A command allows me to control the MSVDD separately from the core voltage. I highly recommend it to anyone with a Strix (or any card with I2C); it's also very affordable.

It's in stock and available from ElmorLabs


----------



## KedarWolf

WillP said:


> That seems very good value considering the rad and cold plate are copper. Good for someone who wants to spread the cost of a loop and do things in stages. I'm not sure how easy filling the loop is at a latter stage, is the small reservoir accessible and easy to top up?





https://www.invadeit.co.th/product/water-cooling/ekwb/ek-mlc-phoenix-280-radiator-core-module-cpu-module-intel-am4-p047194/


----------



## Lobstar

PLATOON TEKK said:


> Got the EVC2SX working on a Strix without the need to solder (makes RMA a lot easier).
> 
> A command allows me to control the msvdd separate from the core voltage. I highly recommend it to anyone with a Strix (or card with I2C), is also very affordable.
> 
> Is in stock and available from ElmorLabs


Where can I find a list of compatible cards? I have a revised EVGA 3090 FTW3U and would love to try this.


----------



## PLATOON TEKK

Lobstar said:


> Where can I find a list of compatible cards? I have a revised EVGA 3090 FTW3U and would love to try this.


Do you have clear PCB pics? It will be 3 “holes” that are labeled pcon1/pcon2. Or any ports that allow “i2c” communication.

edit: this is where I connected mine;


----------



## Lobstar

PLATOON TEKK said:


> Do you have clear PCB pics? It will be 3 “holes” that are labeled pcon1/pcon2. Or any ports that allow “i2c” communication.


I'm not sure if I have a compatible digital voltage controller. The only other post on the forum regarding an NCP81610 has no response.


----------



## gfunkernaught

Has anyone noticed any change in performance in Cyberpunk with the latest 1.22 hotfix patch? I see a slight but noticeable avg fps drop; I had to change DLSS to Balanced to get smooth fps, whereas before I was getting smooth fps on Quality. I also noticed hard fps drops in some areas, not the kind of drops you'd see in that central park area, but, for example, in a small room with some reflective surfaces: if I was say 10ft away from a reflective surface facing me, my fps was 45; a few steps toward it, fps drops to 35; another few steps, and fps goes back up to 45. The drops felt similar to double-buffered vsync. This anomaly occurs with both Balanced and Quality DLSS.

I'm downloading the 466.27 driver now to see if the problem persists.


----------



## yzonker

It's supposed to perform better, although maybe that was the console version? If I can find my save I'll run the same test from when I was testing ReBAR.


----------



## gfunkernaught

Update on the cyberpunk 1.22 performance issue
Installed the latest drivers; the problem persists. I tried to narrow down what was going on in the scene that would cause a 10fps drop, and it looks like transparencies and reflective surfaces are bugged in this patch, something to do with DLSS and transparent textures. You know those "glimmers" caused by DLSS? Try looking at them through glass and see if your fps drops by 10 or so, then rises back.

I was curious whether it was a buffer issue, since I use ultra-low latency and enable vsync through the NVIDIA Control Panel profile instead of in-game. So I turned those off and ran CP2077 without vsync at all: same thing. If you know that sudden stutter caused by double-buffered vsync when the fps falls below the refresh rate and gets briefly cut in half, that's what I'm seeing, except the fps is dropping to the mid-30s from 45-50, so it feels like that stutter.


----------



## sultanofswing

Lobstar said:


> I'm not sure if I have a compatible digital voltage controller. The only other post on the forum regarding an NCP81610 has no response.


I asked Elmor about that controller and he says it's a weird one; he's looked at it, and it's I2C digital-capable but has no voltage control.


----------



## J7SC

...now you folks made me look for the 3 holes in my Strix, per TechPowerUp's Strix 3090 analysis. Then I glanced over at my 1080p retro gamer and its EVBot - sigh...

FYI, those Classies have a Strix XOC bios on one of the bios switches; things come full circle I guess


----------



## sultanofswing

J7SC said:


> ...now you folks made me look for 3-holes in my Strix, per TechpowerUp Strix 3090 analysis. Then I glanced over at my 1080p retro gamer and its EVbot - sigh...fyi, those Classies have a Strix XOC bios on one of the bios, btw; things come full circle I guess
> 
> View attachment 2488569


I need that evbot!


Sent from my OnePlus 7 Pro using Tapatalk


----------



## KedarWolf

gfunkernaught said:


> Has anyone noticed any change in performance in cyberpunk with the latest 1.22 hotfix patch? I see slightly less but noticeable avg fps drop, I had to change dlss to balanced to get smoother fps whereas before I was getting smooth fps using quality dlss. Also noticed in some areas there were these hard fps drops, not like the kind of drops you'd see in that central park area, but in a small room for example that has some reflective surfaces, if I was say 10ft away from a reflective surface that was facing me, my fps was 45fps, then I take a few steps towards it, fps drops to 35, then another few steps towards, fps goes back up to 45. The drops felt similar to double buffered vsync. This anomaly occurs with both balanced and quality dlss.
> 
> I'm downloading the 466.27 driver now to see if the problem persists.


I tested it with Riva Tuner. 3840x1080 ray tracing on Ultra, graphics maxed out, getting a solid 71FPS, DLSS on Quality. Doesn't seem to be dropping, but I use this too with SMT patch on for my 5950x. 









GitHub - yamashi/CyberEngineTweaks: Cyberpunk 2077 tweaks, hacks and scripting framework






github.com


----------



## J7SC

For those of you who have the Phanteks block for the Strix, how do you like it, including on temps, and build-quality ?


----------



## gfunkernaught

KedarWolf said:


> I tested it with Riva Tuner. 3840x1080 ray tracing on Ultra, graphics maxed out, getting a solid 71FPS, DLSS on Quality. Doesn't seem to be dropping, but I use this too with SMT patch on for my 5950x.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GitHub - yamashi/CyberEngineTweaks: Cyberpunk 2077 tweaks, hacks and scripting framework
> 
> 
> Cyberpunk 2077 tweaks, hacks and scripting framework - GitHub - yamashi/CyberEngineTweaks: Cyberpunk 2077 tweaks, hacks and scripting framework
> 
> 
> 
> 
> github.com


I have RT on psycho, and my framerate is mostly consistent, except for the areas I mentioned. I forget where the one good spot to test this was exactly, but it is a ripperdoc with a robot thing behind glass, in the upper-class area of the map. That's where I noticed the weird drops.


----------



## Dreams-Visions

J7SC said:


> For those of you who have the Phanteks block for the Strix, how do you like it, including on temps, and build-quality ?


I like mine quite a bit, yes. No complaints about temps; build quality seems excellent. Good presentation and polish on this product. It's also narrower than most other blocks out there, allowing the Strix to fit horizontally in the O11D XL.


----------



## Hg201

ASUS just released BIOS V4 for the RTX 3090 TUF Gaming OC.









TUF-RTX3090-O24G-GAMING｜Graphics Cards｜ASUS USA


TUF Gaming graphics cards add hefty 3D horsepower to the TUF Gaming ecosystem, with features like Auto-Extreme manufacturing, steel backplates, high-tech fans, and IP5X certifications. And it’s all backed by a rigorous battery of validation tests to ensure compatibility with the latest TUF...




www.asus.com




What is the difference from V3?


----------



## Nizzen

Finally 😁
For the first time, I don't want to watercool a GPU 😜


----------



## des2k...

wow, very nice card! you guys like expensive models 🙄

here with a zotac 3090, but... after bleeding out most of the air (I say most because 5 rads in all different orientations is hell) and some heat cycles, I dropped 2c on the delta.

It's a 15.5c-16.5c delta at 600w+ Quake RTX on the EK block. Sanding off the standoffs + cleaning the block was worth it 😁

The block was a 24c+ delta in Quake RTX when I first installed it.

The strange thing is that power usage dropped by about 20w in games/bench. I guess lower temps, less current required.


----------



## PLATOON TEKK

Nizzen said:


> Finally 😁
> For the first time, I don't want to watercool a GPU 😜
> 
> View attachment 2488629
> 
> View attachment 2488630


Congrats! It's a dope card. If it's anything like my two OC Labs you will def want it on water soon lol. I have a shot of it with a thermal imaging camera in a video on my channel; it gets pretty hot.

Apparently Bykski is working on one. Be sure to press the “performance” button by the DP ports.

Also, did it ship with the 450/500 bios for the p/s switches? Use the Galax tool for the ReBAR versions.

Enjoy it brother 🤘🏻


----------



## J7SC

Dreams-Visions said:


> I like mine quite a bit, yes. No complaints about temps, build quality seems excellent. Good presentation and polish on this product. It's also more narrow than most other products out there, allowing the Strix to fix horizontally in the O11dXL.


...just ordered one


----------



## PLATOON TEKK

J7SC said:


> For those of you who have the Phanteks block for the Strix, how do you like it, including on temps, and build-quality ?


Performs as well as my Bykski blocks do. Quality is excellent, and in terms of visuals it's by far the most appealing block out there, in my opinion.

DO NOT get the backplate, at least in my opinion. It was really light, felt cheap, and was overall junk.


----------



## J7SC

PLATOON TEKK said:


> Performs as good as my Bykski blocks do. Quality is excellent and in terms of visuals is by far the most appealing block out in my opinion.
> 
> DO NOT get the backplate, at least in my opinion. It was really light, felt cheap and overall junk.
> 
> View attachment 2488655
> 
> View attachment 2488656


Thanks! I already have the EK Strix block and backplate, but several problems with both; not insurmountable, but still... We might get another Strix 3k (3070, 80) for the office 'when available', so the EK block will go there (once parts are back from RMA).


----------



## PLATOON TEKK

J7SC said:


> Thanks ! I already have the EK Strix block and backplate, but several problems with both, not insurmountable, but still ...we might get another Strix 3k (3070, 80) for the office 'when available', so the EK block will go there (once parts are back from RMA).
> 
> .


Boom. That should work. I have an extra Bykski backplate on there and it works fine (coil whine is also not noticeably louder, thankfully).


----------



## kx11

Nizzen said:


> Finally 😁
> For the first time, I don't want to watercool a GPU 😜
> 
> View attachment 2488629
> 
> View attachment 2488630


Hold on to that warranty, and never use their HOF Ai


----------



## GRABibus

I got mine also today !



















PR at stock :










PR with following settings :
Stock Bios
Open PC
All fans at 100%
+175MHz on core
+1500MHz on memory
1 CCD disabled for 5900X
SMT disabled for 5900X
core count = 4 for 5900X
OC 4 [email protected] for 5900X
Drivers NVIDIA default settings











PR with following settings :
Open PC
20°C ambient
All fans at 100%
Bios EVGA XOC 1000W
+190MHz on core
+1500MHz on memory
Drivers NVIDIA tweaked to High performance (except Power, which is on normal)
G-sync disabled
Slider Voltage at 100%
Slider PL at 100%











Temps are incredibly low on this card, even with the KP XOC 1000W bios.
I have also monitored the memory junction temperature, and I think there is no issue running it on this card 24/7.
I didn't reach 60°C on GPU, 70°C on memory, or 55°C on VRM during Cold War sessions.

In addition, I can bench +1700MHz on memory without any issue !!!


*I also tried the KP 520W bios and, surprisingly, it gave me bad results (the worst !!) as I couldn't reach the full 520W. I was stuck at 445W maximum in PR; power limit problem ?

If some of you HOF owners could try the KP 520W bios and give feedback ?*

This card is only interesting with at least an unlimited-power bios, on air.

I'm thinking of going to water now.


----------



## PLATOON TEKK

GRABibus said:


> I got mine also today !
> 
> View attachment 2488666
> View attachment 2488667
> 
> 
> 
> 
> PR at stock :
> View attachment 2488668
> 
> 
> 
> PR with following settings :
> Stock Bios
> Open PC
> All fans at 100%
> +175MHz on core
> +1500MHz on memory
> 1 CCD disabled for 5900X
> SMT disabled for 5900X
> core count = 4 for 5900X
> OC 4 [email protected] for 5900X
> Drivers NVIDIA default settings
> 
> View attachment 2488670
> 
> 
> 
> PR with following settings :
> Open PC
> 20°C ambiante
> All fans at 100%
> Bios EVGA XOC 1000W
> +190MHz sur Core
> +1500MHz on memeory
> Drivers NVIDIA tweaked on High performances (Except Power which is on normal)
> G-sync disabled
> Slider Voltage at 100%
> Slider PL at 100%
> 
> View attachment 2488671
> 
> 
> 
> Temps are incredibly low on this card, even with with KP XOC 1000W Bios.
> I have monitored also and junction memory and I think there is no issue to run it on this card 24/7.
> I didn't reach 60°C on GPU, 70°C on memory and 55°C on VRM during Cold war sessions.
> 
> In, addition, I can bench +1700MHz on memory without any issue !!!
> 
> 
> *I tried also KP 520W bios and surpisingly, it gaves me bad results (The worst !!) as I couldn't reach the 520W power. I was stuck at 445W maximum in PR, power limit problem ?
> 
> If some of you HOF owners could try the KP 520W bios and give feedback ?*
> 
> This card is only interesting with at least a unlimited power bios on air.
> 
> I think to go to water now.


Are your power readings accurate with the KP 1000W? Mine were reporting near 1000W (which was clearly false).


----------



## GRABibus

PLATOON TEKK said:


> Is your power reading accurate with the KP 1000w? Mine were reporting near 1000w (which was clearly false).


In PR, on my highest score in the post, GPU-Z reports 547W max


----------



## PLATOON TEKK

GRABibus said:


> In PR on my highest score in the post, GPuz reports 547W max


Thanks for fast response. Interesting, seems pretty accurate.


----------



## GRABibus

Yes apparently


----------



## J7SC

GRABibus said:


> In PR on my highest score in the post, GPuz reports 547W max


Isn't there a 1000W Galax HOF XOC bios? (Though without serious cooling, I would not want to try that.)


----------



## GRABibus

J7SC said:


> Isn't there a 1000 W Galax Hof XOC bios (though w/o serious cooling, I would not want to try that)


Yes I have it.

I will try it. 

But currently KP 1000W is ok.


----------



## J7SC

GRABibus said:


> Yes I have it.
> 
> I will try it.
> 
> But currently KP 1000W is ok.


...have you considered moving ?


----------



## GRABibus

J7SC said:


> ...have you considered moving ?


😂

Temps are cool on this card as I mentioned, even with 1000W bios.
I will cap it at 60%


----------



## Beagle Box

GRABibus said:


> I got mine also today !
> ...
> 
> In, addition, I can bench +1700MHz on memory without any issue !!!
> 
> ...


That +1700 memory is really impressive. I wonder if they binned the memory chips.


----------



## GRABibus

Beagle Box said:


> That +1700 memory is really impressive. I wonder if they binned the memory chips.


Very interesting video :


----------



## J7SC

GRABibus said:


> Very interesting video :


Yeah, I watched it twice already - but that PCB is soooo beautiful to look at (never mind the barrage of 'output filtering')


----------



## bl4ckdot

HOF is truly a work of art


----------



## des2k...

Beagle Box said:


> That +1700 memory is really impressive. I wonder if they binned the memory chips.


The card is expensive; I'm sure they get good memory chips. +1700 on the air cooler is impressive if it doesn't decrease the score.
Mine goes that high too, but I lose score after +1530.


----------



## Beagle Box

des2k... said:


> Card is expensive, I'm sure they get good memory chips. +1700 on the air cooler is impressive if it doesn't decrease the score.
> Mine goes high like that, but I loose score after +1530.


Mine tops out @ +1630, but benchmark scores do increase up to that point. Above that, I get block artifacting.


----------



## J7SC

okee-dokee - updating the Strix w-cooling sometime later next week and wondering what paste to use (for pads, I've already got an arsenal of Thermalright Odyssey 0.5mm, 1mm). For thermal paste, I have Noctua NT-H2, Gelid GC Extreme, TG Kryonaut and MX-4 available... I generally like the Kryonaut, but it's a bit too liquid IMO for a GPU / block that is standing (mobo is flat)... I'm leaning towards the Gelid, which is a bit thicker. What do you folks think ?


----------



## GRABibus

How do you treat the GPU hot-spot temp ?

I have 15 degrees between GPU temp and hot spot, going from 76 to 83 degrees on the hot spot in BFV with the KP XOC 1000W bios at 2560x1440 with fans at 100%


----------



## gfunkernaught

Here's a video of the fps drops I was talking about, in case anyone is interested. I recorded it with AB, so there is visual stuttering from the capture, but the fps counter is key. Watch how the fps goes from the 50s to the 30s, then back to the 50s, without much change in the scene. This wasn't happening before the 1.22 hotfix patch.
SPOILER ALERT


----------



## jomama22

Beagle Box said:


> That +1700 memory is really impressive. I wonder if they binned the memory chips.


You may want to test that you are actually getting gains from that clock speed. My Strix can do 1800 but doesn't see any scaling beyond around 1575 or so. There are also lots of memory holes up that high where you will lose performance.
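In practice that means sweeping offsets and keeping the one that actually scores best, not the highest one that's stable. A rough sketch of that idea; `run_benchmark` stands in for a real benchmark pass (e.g. a Port Royal run), and the fake curve, peak location, and numbers here are made up for illustration:

```python
# Sweep memory offsets and keep the one with the best benchmark score.
# run_benchmark is a hypothetical hook for a real run; fake_benchmark
# imitates GDDR6X behavior where error correction eats gains past a point.

def best_offset(offsets, run_benchmark):
    scores = {off: run_benchmark(off) for off in offsets}
    return max(scores, key=scores.get), scores

def fake_benchmark(offset):
    # Illustrative curve: scales linearly, then falls off past +1575.
    return 14000 + offset * 2 - max(0, offset - 1575) * 6

best, scores = best_offset(range(0, 1801, 75), fake_benchmark)
print(best)  # 1575 wins even though +1800 is "stable"
```

The same loop works with any offset step; finer steps also help spot the memory holes mentioned above, where a specific range scores worse than its neighbors.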


----------



## ViRuS2k

gfunkernaught said:


> Here's a video of the fps drops I was talking about in case anyone is interested. I recorded the video with AB so there is visual stuttering from the capture, but the fps counter is key. Watch how the fps goes from 50s to 30s then back to 50s without much change in scene. This wasn't happening before the 1.22 hotfix patch.
> SPOILER ALERT


What are you cooling your Trio X 3090 with?
What block have you got, and what cooling setup? And if you don't mind me asking, what thermal pad sizes did you use? A diagram of the pad sizes would be sweet, as I have slight board bending with my Bykski block. Your thermals on the card are impressive. Currently I use 2x super-thick CoolStream XL radiators + one monster 480mm thick radiator and get GPU max temps of 57c. I think I'm losing a lot of compression from board bending due to using uneven-sized thermal pads with this Bykski block.


----------



## Falkentyne

gfunkernaught said:


> Here's a video of the fps drops I was talking about in case anyone is interested. I recorded the video with AB so there is visual stuttering from the capture, but the fps counter is key. Watch how the fps goes from 50s to 30s then back to 50s without much change in scene. This wasn't happening before the 1.22 hotfix patch.
> SPOILER ALERT


The last time I saw something like this was in Fortnite. It had something to do with the shader cache. Eventually, after waiting a while and a few starts and exits, it stopped happening.
I think the same thing happens in Warzone (Modern Warfare) after a new driver, which I saw myself on 466.27. It started shader installation and the FPS dropped noticeably, but the installation stayed at 0% and didn't complete until I actually entered a game mode directly (e.g. Multiplayer, Co-Op, etc.); only then did that particular shader compile and complete fully.
Once I entered all three modes (Multiplayer, Co-op, Campaign), everything finished fully and FPS was solid.

This was the only time I ever saw something like this happen. On the previous driver, I simply left it on the main menu and it updated all the shader packs automatically.

In Cold War, I didn't have this issue. I entered the main menu and it updated multiplayer and all the other shaders automatically and correctly.


----------



## Beagle Box

jomama22 said:


> May want to test that you are actually getting gains from that clock speed. My strix can do 1800 but doesn't see any scaling beyond around 1575 or so. There are also lots of memory holes up that high where you will lose performance.


Gaining from memory speed, at least on my Strix, seems to be related to maximum attainable clocks. If my card can do 2220MHz core in a benchmark, I'm forced to settle for ~ +1500 on memory, but if I'm stuck at 2190MHz core, then I can usually show score gains up to +1620 - +1630. It's a trade-off that I can't explain, but it's there. 
Of course, others may have different results. 
Mine is water-cooled, but is no golden chipped super card. 
No shunts. 
No external controller. 
I'd be interested if anyone else has found a similar core speed / memory speed trade-off. 

Gaming, I usually run about +600 - +700 memory OC with a mild core overclock. Seems smoother than no memory OC and higher core. Just my sense of it. 
Again, others' experiences may differ.


----------



## gfunkernaught

ViRuS2k said:


> What you cooling your trio x 3090 with ?
> what block you got and what cooling setup ? and if you dont mind me asking what thermal pad sizes did you use a diagram would be sweet on the thermal pad sizes as i have slight board bending with my biski block your thermals though are impressive on the card currently i use 2x superthick coolstream xl radiators + one monster 480mm thick radiator
> and get gpu max temps of 57c i think im loosing a lot of compression due to board bending due to using uneven sized thermal pads. with this bisky block.


I have the EK block and used 1mm pads on the front with EK Ectotherm paste on the RAM and VRM, and liquid metal on the core. On the back I used 1.5mm pads, again with thermal paste between the pads and components. The temps you see in the video are low because the game had only been running for a few minutes, but I get decent temps.


----------



## J7SC

Beagle Box said:


> Gaining from memory speed, at least on my Strix, seems to be related to maximum attainable clocks. If my card can do 2220MHz core in a benchmark, I'm forced to settle for ~ +1500 on memory, but if I'm stuck at 2190MHz core, then I can usually show score gains up to +1620 - +1630. It's a trade-off that I can't explain, but it's there.
> Of course, others may have different results.
> Mine is water-cooled, but is no golden chipped super card.
> No shunts.
> No external controller.
> I'd be interested if anyone else has found a similar core speed / memory speed trade-off.
> 
> Gaming, I usually run about +600 - +700 memory OC with a mild core overclock. Seems smoother than no memory OC and higher core. Just my sense of it.
> Again, others' experiences may differ.


...it would make sense that as long as you don't have a 'near-unlimited' PL from a XOC bios, there is a trade-off. But I simply haven't run enough consistent benchmarks to chime in with certainty...on a few occasions, I have seen Port Royal peak up to 2205 MHz, but only when VRAM was below 1360 MHz in GPUz even though it can go higher efficiently in other apps.

It may also depend on which 'region / sets of cores' of the GPU die gets the workout - in Superposition 8K, I have seen (and posted) the GPU very briefly spiking to 2265 MHz (w/o crashing) and regularly at 2235 MHz, AND with VRAM closer to 1400 MHz / GPUz...but that is an older DX11 title w/o ray tracing cores kicking in etc.

Depending on your bios and also other factors such as temps, there is an overall power budget that gets divided up between GPU, VRAM, and possibly fans and rgb, depending if those have their own little cordoned-off PL corner. On my 2080 Tis, turning RGB off actually increased scores (measurably and repeatedly), though 3090s may be different. 

I do agree re. better gaming performance with a mild OC on GPU and VRAM, at least in FS2020 and CP2077. Also, I usually leave the voltage slider alone, but I noticed recently that with a mild +10 to +20 on the KPE 520W bios on the Strix, it seems smoother in Port Royal - but who knows, maybe it's psychosomatic


----------



## gfunkernaught

Falkentyne said:


> The last time I saw something like this was in Fortnite. Had something to do with the Shader Cache.


Hmm, I'll try disabling Shader Cache in the control panel and see if that helps. Thanks!


----------



## Beagle Box

gfunkernaught said:


> I have the ek block and used 1mm on the front with ektotherm paste on the ram and vrm, liquid metal on the core. The back I used 1.5mm. those temps you seen the video are low because the game has only been running for a few minutes. But I get decent temps.


Did using paste + pads help on the RAM? My pads are solidly applied, but when I last inspected them, it seemed there were air pockets on a few.


----------



## gfunkernaught

Beagle Box said:


> Did using paste + pads help on the RAM? My pads are solidly applied, but when I last inspected them, it seemed there were air pockets on a few.


Seems so. Before, with just pads, IIRC the RAM would top out at 64c. I have since added M.2 heatsinks and a 120mm SP fan pulling air up and away from the backplate; now the RAM sits at 58c. The highest I have seen it go was 62c with paste+pads during Quake 2 RTX at 680w+ and a heavy OC. That was scary, but the fps minimum was 52, and once you see that you can't unsee it. I have to settle for 500w and dynamic res, which is fine. Idk, I just feel bad using over 500w to play a game.


----------



## gfunkernaught

@Falkentyne 
Disabling shader cache did nothing. I tried setting the in-game fps limit to 60fps, nothing. It's so weird because it behaves exactly like double-buffered vsync but the fps counter shows that isn't the case.


----------



## yzonker

gfunkernaught said:


> @Falkentyne
> Disabling shader cache did nothing. I tried setting the in-game fps limit to 60fps, nothing. It's so weird because it behaves exactly like double-buffered vsync but the fps counter shows that isn't the case.


Where is that ripperdoc? I don't think I've been there.


----------



## Falkentyne

gfunkernaught said:


> @Falkentyne
> Disabling shader cache did nothing. I tried setting the in-game fps limit to 60fps, nothing. It's so weird because it behaves exactly like double-buffered vsync but the fps counter shows that isn't the case.


I didn't just disable the shader cache and then test a game.

I either let it sit there idle for about 15 minutes and then restarted the game (Fortnite), OR I disabled the shader cache, REBOOTED, manually deleted the files in C:\ProgramData\NVIDIA Corporation\NV_Cache, rebooted, then enabled the shader cache again and let it rebuild.
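If anyone wants to script the deletion step, here's a minimal sketch. It just empties a folder's contents; the NV_Cache path is the one discussed above, and as described you'd want the driver idle (or do it around a reboot) before pointing it there:

```python
# Empty the NVIDIA shader cache folder so the driver rebuilds it.
# Typical location: C:\ProgramData\NVIDIA Corporation\NV_Cache
import os
import shutil

def clear_cache(cache_dir):
    """Delete everything inside cache_dir, leaving the folder itself.

    Returns the number of top-level entries removed (0 if the folder
    doesn't exist)."""
    if not os.path.isdir(cache_dir):
        return 0
    removed = 0
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isdir(path):
            shutil.rmtree(path)   # subfolder and all of its contents
        else:
            os.remove(path)       # single cache file
        removed += 1
    return removed

# clear_cache(r"C:\ProgramData\NVIDIA Corporation\NV_Cache")
```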


----------



## yzonker

yzonker said:


> Where is that ripperdoc? I don't think I've been there.


I found that location. It didn't drop below 40 fps for me. 4K, settings maxed, psycho, DLSS Quality.


----------



## Beagle Box

gfunkernaught said:


> Seems so. Before with just pads iirc the ram would top out at 64c. I have since added m.2 heatsinks and a 120mm sp fan pulling air up and away from the backplate. Now the ram sits at 58c. The highest I have seen it go was 62c with paste+pad during quake 2 rtx and +680w and a heavy oc. That was scary but the fps minimum was 52fps so once you see that you can't unsee it. I have to settle for 500w and dynamic res which is fine. Idk I just feel bad using over 500w to play a game.


Did it affect max OC ability of the RAM or just temps? 
I wonder if RAM OC is less power draw intensive than core clock OC.


----------



## gfunkernaught

delete


----------



## gfunkernaught

Beagle Box said:


> Did it affect max OC ability of the RAM or just temps?
> I wonder if RAM OC is less power draw intensive than core clock OC.


I think so. I can run +1300 on vram. To be honest I didn't pay attention between the stock cooler and the EK block, so I couldn't say. I think I was only doing +600 on the vram, but I was afraid to push past that before I changed out the pads because the temps were high.

As for the power draw, that I don't know because I didn't study/research it, but others on here have. I know different bios have varying power limits for several parts of the PCB and corresponding circuits.


----------



## Beagle Box

J7SC said:


> ...it would make sense that as long as you don't have a 'near-unlimited' PL from a XOC bios, there is a trade-off. But I simply haven't run enough consistent benchmarks to chime in with certainty...on a few occasions, I have seen Port Royal peak up to 2205 MHz, but only when VRAM was below 1360 MHz in GPUz even though it can go higher efficiently in other apps.
> 
> It may also depend on which 'region / sets of cores' of the GPU die gets the workout - in Superposition 8K, I have seen (and posted) the GPU very briefly spiking to 2265 MHz (w/o crashing) and regularly at 2235 MHz, AND with VRAM closer to 1400 MHz / GPUz...but that is an older DX11 title w/o ray tracing cores kicking in etc.
> 
> Depending on your bios and also other factors such as temps, there is an overall power budget that gets divided up between GPU, VRAM, and possibly fans and rgb, depending if those have their own little cordoned-off PL corner. On my 2080 Tis, turning RGB off actually increased scores (measurably and repeatedly), though 3090s may be different.
> 
> I do agree re. better gaming performance with a mild oc on GPU and VRAM, at least in FS2020 and CP2077. Also, I usually leave the voltage slider alone but noticed recently that a mild +10 to +20 with the KPE 520W bios on the Strix, it seems smoother in Port Royal - but who knows, may be it is psychosomatic


My peak core speed is also short blips to 2205 in PR. The odd thing is the KP1k 'unlimited' BIOS only delivers ~80 pts higher score than the MSI Suprim X BIOS, though the GPU is able to run VRAM +10 pts higher with the KP1k BIOS.

I'm able to run Superposition 8K Optimized with VRAM @ +1610, but my core speed won't get over 2220MHz. With VRAM @ +0, it'll start @ 2235 and later drop to 2220 when things heat up. I should try it with fans on full sometime.

I've found games run best with my card with Voltage set between +20 and +50. Benchmarks can be best with +0V all the way up to +80. 
Memory @ +400 - +600
Core Clock +0 - +120.

I posit such settings will be different from card to card, game to game. 

Temperature is always the difference between an Excellent benchmark and a Legendary one.


----------



## gfunkernaught

Falkentyne said:


> I didn't disable the shader cache and then test a game.
> 
> I either let it sit there idle for like 15 minutes and then restarted the game (Fortnite), OR I disabled the shader cache, REBOOTED, manually deleted the files in C:\ProgramData\NVIDIA Corporation\NV_Cache, rebooted, then enabled the shader cache again and let it rebuild.


With the shader cache option still disabled for the Cyberpunk profile in the control panel, the issue was there, but like you said it went away after a few minutes of running around. I didn't clear the cache or reboot or anything. Strange.


----------



## des2k...

The EK backplate is not super ugly anymore  
Not sure if it was here or reddit, but somebody mentioned this blue heatsink from Amazon.


----------



## Biscottoman

Hi guys, I just completed my build and it seems like the shunt mod on my 3090 Strix didn't work; the 480W bios is power limiting my card at 2130-2145MHz and there's no way to get over it. Would you suggest switching the bios to the EVGA 520W, or even trying the 1000W? I would like to get at least 2200-2235MHz if the core is capable of it.


----------



## gfunkernaught

des2k... said:


> The EK backplate is not super ugly anymore
> Not sure if it was here or reddit, but somebody mentioned this blue heatsink from Amazon.
> View attachment 2488768


Whoa is that a one piece heatsink?


----------



## des2k...

gfunkernaught said:


> Whoa is that a one piece heatsink?


yep, listed on amazon

Aluminum Heat Sink 150 x 74 x10 mm/5.9 x 2.91 x 0.39 inch Blue Heatsinks Module Cooler Fin Heat Board Cooling for Amplifier Transistor Semiconductor Devices


----------



## Falkentyne

des2k... said:


> yep, listed on amazon
> 
> Aluminum Heat Sink 150 x 74 x10 mm/5.9 x 2.91 x 0.39 inch Blue Heatsinks Module Cooler Fin Heat Board Cooling for Amplifier Transistor Semiconductor Devices


This or another?

Amazon.com: Awxlumv Aluminum Large Heatsink 5.9 x 3.6 x 0.59 Inch /150 x 93 x 15 mm Black Heat Sinks Fins for Cooler PCB Board LED Motherboard (2 Pcs) : Electronics


----------



## des2k...

Falkentyne said:


> This or another?
> 
> Amazon.com: Awxlumv Aluminum Large Heatsink 5.9 x 3.6 x 0.59 Inch /150 x 93 x 15 mm Black Heat Sinks Fins for Cooler PCB Board LED Motherboard (2 Pcs) : Electronics


Amazon.com: Aluminum Heat Sink 150 x 74 x10 mm/5.9 x 2.91 x 0.39 inch Blue Heatsinks Module Cooler Fin Heat Board Cooling for Amplifier Transistor Semiconductor Devices : Electronics


----------



## Martin778

Got my hands on an FTW3 from the later batches, with black accents. Man, this card has a somewhat borked BIOS it seems. They come preloaded with a 500W BIOS in the OC bios switch position, yet it caps out at about 460-470W.
After flashing the KP XOC 520W BIOS it can pull 510W no problem; the only issue with the KP BIOS is that the fan curve is quite aggressive. I did a quick run and got almost 14,600 in Port Royal with +80/+800, but it's borderline for air cooling. Highest PCI-E power draw was 68.6W, by the way.


----------



## Midian

Asus RTX3090_V4 bios released; Asus doesn't list any changes though: ROG Strix GeForce RTX 3090 OC Edition 24GB GDDR6X | Graphics Cards

Edit: interestingly, it can't be installed if you already have V3 installed.


----------



## sippo

Hey,
Do you have any opinion about: Bykski N-AS3090STRIX-TC-V2 GPU Water Block With Waterway Backplane For ASUS RTX3080 3090 STRIX Gaming?


----------



## KedarWolf

sippo said:


> Hey,
> Do you have any opinion about: Bykski N-AS3090STRIX-TC-V2 GPU Water Block With Waterway Backplane For ASUS RTX3080 3090 STRIX Gaming?
> 
> 
> View attachment 2488825


I looked at the installation diagrams. There are a lot of spots where this block doesn't use thermal pads that the EK block does. I think I'll wait for the EK active backplate myself. Even without it, my memory tops out at 65c in Port Royal.


----------



## GRABibus

Did some PR tonight with my HOF on stock air cooler

Here are my best current scores:

*1/ With stock Bios (450W, version 94.02.42.80.C0):*
Voltage slider to 100
Power slider to 107%
Fans at 100%
+205MHz on core
+1500MHz on memory
18°C ambient (open room window)
NV drivers on performance except power
G-Sync disabled
BAR disabled
Tweaked CPU (5900X):
=> 1 CCD disabled
=> SMT disabled
=> Core count = 4
=> All-core OC (4 cores) = 4.95GHz









I scored 14 973 in Port Royal: AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
I didn't break 15k with the stock Bios...
With an Intel CPU, those 15k would be broken easily.

Also, I don't understand why I never reach the Bios power limit, which is 450W. I always hit 440W-445W maximum.
Could someone explain this to me?


*2/ With Kingpin 1000W OC Bios (version 94.02.38.00.2F):*
Voltage slider to 100
Power slider to 100
+190MHz on core
+1500MHz on memory
17°C ambient (open room window)
NV drivers on performance except power
G-Sync disabled
BAR disabled
Tweaked CPU (5900X):
=> 1 CCD disabled
=> SMT disabled
=> Core count = 4
=> All-core OC (4 cores) = 4.95GHz









I scored 15 313 in Port Royal: AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)















I like this score on air


----------



## GRABibus

Martin778 said:


> Got my hands on an FTW3 from the later batches, with black accents. Man, this card has a somewhat borked BIOS it seems. They come preloaded with a 500W BIOS in the OC bios switch position, yet it caps out at about 460-470W.
> After flashing the KP XOC 520W BIOS it can pull 510W no problem; the only issue with the KP BIOS is that the fan curve is quite aggressive. I did a quick run and got almost 14,600 in Port Royal with +80/+800, but it's borderline for air cooling. Highest PCI-E power draw was 68.6W, by the way.


Pff, this is still high PCIe power draw....
I don’t understand how EVGA could break the power balance like this...


----------



## des2k...

GRABibus said:


> Also, I don't understand why I never reach the Bios power limit, which is 450W. I always hit 440W-445W maximum.
> Could someone explain this to me?


That's just Nvidia; they usually keep continuous power draw 5-10W below the vbios limit. I think it's to account for load spikes, so the card doesn't overshoot. A few vbios I tried behaved this way. I searched for this but forgot to bookmark it; Nvidia said vbios max power is not equal to pinned continuous power draw.

For example, most 390W vbios for 2x8pin will throttle around 385W.


----------



## J7SC

des2k... said:


> That's just Nvidia; they usually keep continuous power draw 5-10W below the vbios limit. I think it's to account for load spikes, so the card doesn't overshoot. A few vbios I tried behaved this way. I searched for this but forgot to bookmark it; Nvidia said vbios max power is not equal to pinned continuous power draw.
> 
> For example, most 390W vbios for 2x8pin will throttle around 385W.


...makes sense, though my Strix on stock (v2) bios regularly went past the advertised max of 480W.


----------



## des2k...

.


----------



## jlodvo

anyone with a AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G 2x8pin, what bios did you guys flash on it? is it ok to flash it with the 3x8pin non water block AORUS GeForce RTX™ 3090 XTREME 24G bios?


----------



## Nizzen

J7SC said:


> ...makes sense, though my Strix on stock (v2) bios regularly went past the advertised max of 480W.
> 
> View attachment 2488832


And gpu-z is 100% accurate?


----------



## Lord of meat

des2k... said:


> The EK backplate is not super ugly anymore
> Not sure if it was here or reddit, but somebody mentioned this blue heatsink from Amazon.
> View attachment 2488768


That looks most excellent, nice job!


----------



## satinghostrider

jlodvo said:


> anyone with a AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G 2x8pin, what bios did you guys flash on it? is it ok to flash it with the 3x8pin non water block AORUS GeForce RTX™ 3090 XTREME 24G bios?


I'm also keen to know. Gigabyte screwed up the resizable BAR bios updates and forced the card to a max cap of 360W. That's a good 30W difference from the advertised 360W/390W ratings.


----------



## TheNaitsyrk

Quick question to all.

Anyone using 1000W bios on their 3090?

I flashed it but power usage according to AB and GPU-Z is 450W in Heaven 4 1440p.

I have Suprim X. Could someone confirm they get more watts as said by GPU-Z in Heaven 4 1440p?


----------



## EarlZ

How effective is MSI Kombustor for stability testing, or are PR and Time Spy test 2 still king?


----------



## Nizzen

TheNaitsyrk said:


> Quick question to all.
> 
> Anyone using 1000W bios on their 3090?
> 
> I flashed it but power usage according to AB and GPU-Z is 450W in Heaven 4 1440p.
> 
> I have Suprim X. Could someone confirm they get more watts as said by GPU-Z in Heaven 4 1440p?


I was getting 550W max on the 3090 Suprim X on air with the 1000W bios, according to GPU-Z, in Port Royal.

I have been using the 1000W bios on one of my 3090 Strix OC cards with a waterblock since that bios first came out. It's still working great. *Total* system draw from the wall is about 850W in games like Battlefield V and Cyberpunk. This is with an AMD 5900X CPU, 8 fans and one pump in the mix, and 2x M.2 SSDs.


----------



## TheNaitsyrk

Nizzen said:


> I was getting 550w max on the 3090 supreme x on air, with 1000w bios. According to Gpu-z. Port royal.
> 
> I have been using 1000w bios on one of my 3090 strix oc with waterblock since the 1000w bios first came. Still it's working great. *Total* system draw from the wall is about 850w in games like Battlefield V and Cyberpunk. This is with Amd 5900x cpu. 8 fans and one pump in the mix. 2x m.2 ssd's.


My one just stays at 440-450W with this BIOS. The KP520 BIOS is the same.

It only goes up in watts if I run Superposition 4K Optimized, and then it peaks at 490W with the KP520.


----------



## ALSTER868

KedarWolf said:


> There are a lot of spots where this block doesn't use thermal pads that the EK block does


I wonder how the card would fare without cooling on the inductors... is that OK for the card? Are there any Strix owners with the Bykski waterblock here?


----------



## TheNaitsyrk

Okay reporting back.

I installed the 1000W BIOS again, and power usage goes up very slightly (like 5W extra) with every +10MHz on the core clock.
Managed to get a sustained 2190MHz, which wasn't possible before. This is very strange. I had a small notch in my GPU-Z graph that said (Thermal), but GPU temps are within reason at 40C.

Update: Ran TimeSpy with the power slider at 100% on the 1000W BIOS and saw power usage topping 560W.


----------



## Biscottoman

Nizzen said:


> I was getting 550w max on the 3090 supreme x on air, with 1000w bios. According to Gpu-z. Port royal.
> 
> I have been using 1000w bios on one of my 3090 strix oc with waterblock since the 1000w bios first came. Still it's working great. *Total* system draw from the wall is about 850w in games like Battlefield V and Cyberpunk. This is with Amd 5900x cpu. 8 fans and one pump in the mix. 2x m.2 ssd's.


Is there any 1000w BIOS with resizable bar ON or are you running the normal kingpin one?


----------



## Nizzen

Biscottoman said:


> Is there any 1000w BIOS with resizable bar ON or are you running the normal kingpin one?


Normal K|np|n 
There is a K|np|n 1000W with BAR out in the wild. The big boys want it for themselves for a while... That is how it works 

Most of them love to show off that they have it, but don't love to share it...


----------



## TheNaitsyrk

Does this look normal in TimeSpy on 1000W BIOS?


----------



## Biscottoman

Nizzen said:


> Normal K|np|n
> There is K|np|n 1000w with bar into the wild. The big boys want it for them self for a while... That is how it works
> 
> Most of them loves to show off that they have it, but don't love to share it...


Ok thanks, I will probably go for it on my new watercooled 3090 Strix, since my shunt mod doesn't seem to work for me.


----------



## yzonker

jlodvo said:


> anyone with a AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G 2x8pin, what bios did you guys flash on it? is it ok to flash it with the 3x8pin non water block AORUS GeForce RTX™ 3090 XTREME 24G bios?


You can probably flash it, but a 2x8pin card with a 3x8pin bios will be power limited, since the bios will still look for the 3rd 8-pin. You'll get something like 2/3rds of the rated power limit. The only 3x8pin bios that lets you get up to or past 390W is the KP XOC.


----------



## Nizzen

TheNaitsyrk said:


> Does this look normal in TimeSpy on 1000W BIOS?


Very normal


----------



## TheNaitsyrk

Nizzen said:


> Very normal


Do you have about the same results in GPU-Z?

I just realised that my previous GPU (3090 FTW3) performed almost identically to this one (maybe a tad worse), but its PCI-E slot draw was out of spec (93W); otherwise it was almost identical.

Now I know how people manage slightly higher scores than me in TimeSpy: they hook their loop up to tap water and run the coldest water possible to keep their components as cold as possible for that extra 200-300 points.

Right, I thought it was a problem of not pulling enough power too. But then I looked at the 8-pin ratings: 150W per connector, so each connector slightly exceeds 150W, and the PCI-E slot provides 55W, so it all adds up.

I could potentially switch the PSU to single rail mode and it will start pulling more? But otherwise I'm happy; I just wasn't sure how it works. I guess if you LN2 it, it will perform better because it will have temp headroom = more watts as well.
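The connector arithmetic above can be sketched quickly. This is only a back-of-envelope check using the figures from the post: the 55W slot draw is what was observed on this card, while the PCIe spec itself allows up to 75W from the slot.

```python
# Back-of-envelope board power budget: n 8-pin connectors plus the slot.
# WATTS_PER_8PIN is the nominal connector rating; PCIE_SLOT_WATTS is the
# observed slot draw from the post (the PCIe spec allows up to 75W).
WATTS_PER_8PIN = 150
PCIE_SLOT_WATTS = 55

def nominal_budget(n_connectors: int) -> int:
    """Nominal board power in watts for a card with n 8-pin connectors."""
    return n_connectors * WATTS_PER_8PIN + PCIE_SLOT_WATTS

print(nominal_budget(2), nominal_budget(3))  # 355 505
```

So a 2x8pin card sustaining ~450W is already running each 8-pin well past its nominal 150W rating, which is why "the card asks for what it needs" rather than being forced higher.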


----------



## GRABibus

des2k... said:


> That's just Nvidia, they usually keep continuous power draw at -5w to -10w of the vbios limit. I think it's due to load spikes and not to overshoot. A few vbios I tried had this issue; I searched this but forgot to bookmark it, Nvidia said vbios max power is not equal to pinned continuous power draw.
> 
> Like most 390w vbios for 2x8pin will throttle around 385w.


With the Suprim X 450W Bios on my HOF, I hit those 450W with no problem in PR.
But with the stock HOF 450W Bios, I don't.
So it is more linked to the bios than an Nvidia margin, I think...

And blackdot (a French member here and HOF owner) showed me a screenshot where he hits 450W with no problem on the stock HOF Bios, which I can't...

Strange...


----------



## Nizzen

TheNaitsyrk said:


> Do you have about the same results in GPU-Z?
> 
> I just realised that my previous GPU (3090 FTW3) performed almost identically to this one (maybe a tad worse), but its PCI-E slot draw was out of spec (93W); otherwise it was almost identical.
> 
> Now I know how people manage slightly higher scores than me in TimeSpy: they hook their loop up to tap water and run the coldest water possible to keep their components as cold as possible for that extra 200-300 points.
> 
> Right, I thought it was a problem of not pulling enough power too. But then I looked at the 8-pin ratings: 150W per connector, so each connector slightly exceeds 150W, and the PCI-E slot provides 55W, so it all adds up.
> 
> I could potentially switch the PSU to single rail mode and it will start pulling more? But otherwise I'm happy; I just wasn't sure how it works. I guess if you LN2 it, it will perform better because it will have temp headroom = more watts as well.


You can't force the GPU to draw more "current". The card asks for the "current" it needs. Cold is the key 


The 3090 Suprim X I tested was a poor bin. 2050MHz was the max stable, even with 15c cold air 😜


----------



## TheNaitsyrk

Nizzen said:


> You can't force the gpu to draw more "current". The card asking for the "current" it need. Cold is the key
> 
> 
> The 3090 Suprim X I tested was a poor bin. 2050mhz was the max stable, even with 15c cold air 😜


My one tops out at 2205MHz.

Just need to install a fresh Windows 10 with everything removed via PowerShell and then I'm ready to rock.

I might make a dedicated NVMe drive just for benching.

Time to hook up my AC unit and blow it into this bad boi.

Thanks for the help. I know my GPU functions properly now.

I also need to wait and get that Resizable BAR 1000W BIOS so perf is upped a bit. Is it actually available?


----------



## GRABibus

Nizzen said:


> Normal K|np|n
> There is K|np|n 1000w with bar into the wild. The big boys want it for them self for a while... That is how it works
> 
> Most of them loves to show off that they have it, but don't love to share it...


That is completely stupid behaviour, in my opinion.

I don't see why this kind of Bios shouldn't be shared.


----------



## GRABibus

J7SC said:


> ...makes sense, though my Strix on stock (v2) bios regularly went past the advertised max of 480W.
> 
> View attachment 2488832


I think your strix is not a strix in fact...


----------



## Nizzen

My 3090 strix oc white VS KFA2 (Galax) HOF:

I love white gpu's


----------



## GRABibus

Nizzen said:


> My 3090 strix oc white VS KFA2 (Galax) HOF:
> 
> I love white gpu's


The HOF seems easy to disassemble?


----------



## des2k...

GRABibus said:


> with the Suprim X Bios 450W on my HOF, I hit those 450W with no problem in PR.
> But with stock Bios HOF 450W, I don’t.
> So, it is more linked to the bios than Nvidia margin I think....
> 
> and blackdot (French member here HOF owner) showed me a screenshot on which he hits with no problem 450W with the HOF stock Bios, which I can’t .....
> 
> strange...


For example, with a 450W bios limit, monitoring tools might report power higher than 450W for a fraction of a second and record that value, while the hardware itself or the Nvidia driver still clips that power usage.

You can monitor power directly with the Nvidia tool; if you're seeing continuous values reported at 450W, then you know the continuous power usage. 460W values lasting 1-2 seconds won't be enough for the board to take advantage of.

nvidia-smi --query-gpu=power.draw --format=csv --loop-ms=150
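To get an average and peak instead of eyeballing the loop output, you can capture the command's output to a file and post-process it. A small sketch, assuming the CSV shape `nvidia-smi --query-gpu=power.draw --format=csv` prints (a `power.draw [W]` header followed by rows like `447.23 W`; the sample log below is made up):

```python
# Summarize power samples from captured nvidia-smi CSV output.
# Parses saved text rather than calling nvidia-smi, so it also works offline.

def summarize_power(csv_text: str) -> tuple[float, float]:
    """Return (average, peak) watts from nvidia-smi --format=csv output."""
    samples = []
    for line in csv_text.splitlines():
        line = line.strip()
        if line.endswith(" W"):            # data rows end in " W"; header doesn't
            samples.append(float(line[:-2]))
    return sum(samples) / len(samples), max(samples)

# Hypothetical captured samples:
log = "power.draw [W]\n447.23 W\n451.80 W\n439.97 W\n"
avg, peak = summarize_power(log)
print(round(avg, 1), peak)  # 446.3 451.8
```

Capture a run with something like `nvidia-smi --query-gpu=power.draw --format=csv --loop-ms=150 > power.log` while the benchmark holds the card at the limit, then feed the file to the function.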


----------



## Biscottoman

I just tried flashing the Kingpin 1000W bios on my Strix 3090, but the card behaved really strangely. I wasn't able to mine anymore, since it would drop to the lowest frequencies while mining, and when I tried to run Time Spy / Port Royal it just instantly crashed while not even hitting 400W.


----------



## Martin778

My best score on air: 14.848


----------



## des2k...

Biscottoman said:


> I just tried flashing the kingpin 1000w bios on my strix 3090, but the card had a really strange behaviour. Wasn't able to mine anymore since It would lowest frequencies while mining and while i tried to run time Spy/Port royal it just instantly ceashed while not hitting even 400w


You can still mine: in Nvidia Inspector, set the P2 compute state to disabled.


----------



## Nizzen

PLATOON TEKK said:


> Congrats! Is a dope card. If it’s anything like my two OC Labs you will def want it on water soon lol. Have a shot of it with thermal imaging camera in video on my channel, gets pretty hot.
> 
> Apparently bykski is working on one. Be sure to press the “performance” button by the DP ports.
> 
> Also did it ship with the 450/500 bios for the p/s switches? Use the Galax tool for the re-bar versions.
> 
> Enjoy it brother 🤘🏻


Is it possible to change memory voltage in the software? It's greyed out, so is there an unlock here? Or maybe it's just for the OC Lab edition? Where is the bios switch? I can't see it on the top. Maybe I have to take the card out to see it underneath?

390/450W P/S on the normal HOF. Can you post the 500W bios?


----------



## GRABibus

Nizzen said:


> Is it possible to change memory voltage in the software? It's "grayed out", so is it a unlock here? Or maybe just for OC lab edition? Where is the biosswitch?  I can't see it on the top. Maybe I have to take it out too see it under?


The bios switch is at the rear of the card, beside the RGB connector (on the 3-pin connector side).


----------



## jlodvo

Follow-up question: is there a way to bake the settings you made in MSI Afterburner into the stock bios, so there's no need to run MSI Afterburner? I remember back in the Jurassic Park days there was an Nvidia bios editor. Is there anything like it now, where you can save the curve into the GPU bios?


----------



## gfunkernaught

des2k... said:


> yep, listed on amazon
> 
> Aluminum Heat Sink 150 x 74 x10 mm/5.9 x 2.91 x 0.39 inch Blue Heatsinks Module Cooler Fin Heat Board Cooling for Amplifier Transistor Semiconductor Devices


That's awesome. How are your temps with that on there?


----------



## gfunkernaught

I've narrowed down which part of the map in Cyberpunk gives me those weird frame rate drops, and it seems to be only that upper-class area where all the rich people and corpos work/live. Everywhere else is fine; it's just when I enter that area, especially at night, that there are lag mines everywhere. Lighting? Other parts of the map have a lot of lights and reflections. Maybe the shaders for this area are unoptimized?


----------



## jomama22

TheNaitsyrk said:


> Okay reporting back.
> 
> I installed 1000W BIOS again and power usage goes very slightly up (like 5W extra) with every 10+ MHz to core clock.
> Managed to get sustained 2190Mhz which wasn't possible before. This is very strange. I had a small notch in my GPU-Z that said (Thermal) but GPU temps are within reason at 40C.
> 
> Update: Ran TimeSpy with power slider to 100% on 1000W BIOS and saw power usage topping 560W.


Don't worry about the temp spike thing, just a bug.


----------



## jomama22

TheNaitsyrk said:


> Do you have about same results in GPU-Z.
> 
> I just realised that my previous GPU (3090 FTW3) was performing identically to this (maybe tad worse) but PCI-E slot was out of spec (93W) otherwise almost identical.
> 
> Now I know how people manage slightly higher scores than me in TimeSpy. They hook up their look to tap water and run coldest water possible to run their components coldest possible to get that extra 200-300 points in TimeSpy.
> 
> Right, I thought it's a problem not pulling enough power too. But then I looked at 8pin ratings. 150W per connector so each connector slightly exceeds the 150W and PCI-E slot provides 55W so it all adds up.
> 
> I could potentially switch the PSU to single rail mode and it will start pulling more? But otherwise I'm happy, it's just that I wasn't sure how it works. I guess if you LN2 it, it will perform better because it will have temp headroom = more watts as well.


I don't think many people are hooking their loops to tap water, tbh, lol. If they have lower temps that are still above freezing, it's either a chiller or someone took it outside / opened a window during the winter.


----------



## yzonker

des2k... said:


> For example, with a 450W bios limit, monitoring tools might report power higher than 450W for a fraction of a second and record that value, while the hardware itself or the Nvidia driver still clips that power usage.
> 
> You can monitor power directly with the Nvidia tool; if you're seeing continuous values reported at 450W, then you know the continuous power usage. 460W values lasting 1-2 seconds won't be enough for the board to take advantage of.
> 
> nvidia-smi --query-gpu=power.draw --format=csv --loop-ms=150


Maybe just look at the average while holding the card at the PL?


----------



## jura11

KedarWolf said:


> I looked at the installation diagrams. There are a lot of spots where this block doesn't use thermal pads that the EK block does. I think I'll wait for the EK active backplate myself. Even without it, my memory tops out at 65c in Port Royal.


I have Bykski waterblocks on my RTX 3090 GamingPro cards and no issues; temperatures are great, and VRAM temperatures won't break 60°C in gaming or rendering.

The installation manual is quite easy to understand. As for using pads on parts where there is no point, I'm not sure it would make any difference.

I did test an EKWB waterblock on an RTX 3090 a few days ago; a friend bought one, and on his RTX 3090 temperatures were always in the 40s on core and the 60s on VRAM with the stock 390W BIOS, and with the XOC 1000W BIOS they would easily break 50°C on core and 70°C on VRAM. I don't have those problems on my RTX 3090 GamingPro. The EKWB waterblock seems to have poor contact with the core and VRAM from what I can see, or it's an issue with that block, but for now I'm happy with the Bykski waterblocks.

Hope this helps.

Thanks, Jura


----------



## des2k...

gfunkernaught said:


> That's awesome. How are your temps with that on there?


no fans on it, looks like this for cp2077 2145core +0mem









with +1484mem


----------



## gfunkernaught

des2k... said:


> no fans on it, looks like this for cp2077 2145core +0mem
> 
> View attachment 2488899
> 
> with +1484mem
> View attachment 2488900


Nice! The highest I've pushed the mem was +1400 and it crashed while running PR, and that was outside in the winter. You think if you put a fan on there you could push the mem offset even more?


----------



## des2k...

gfunkernaught said:


> Nice! The highest I've pushed the mem was +1400 and it crashed while running PR, and that was outside in the winter. You think if you put a fan on there you could push the mem offset even more?


I have my system tilted, so I can't use the fans; had to flush my loop. Left cp2077 running.

+1484 in cp2077 just crashed at 64c;
now at +1469, cp2077 no crash.

On new pads (they are in bad shape now, too many remounts) I did play/finish the game at +1500 mem.

I always run with fans on the backplate; for sure +1484 didn't crash before at 54c.

For max: at +1530 and +1560 the PR score drops.








I scored 15 182 in Port Royal: AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)

Out of the box 🤣:

I scored 12 925 in Port Royal: AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





Looped +1530 for 45mins I think. Didn't really do the full stress test.

I have my copper mem plates, 0.5mm and 1mm, gelid 15wmk 0.5mm didn't get here yet.

I do plan to switch to 0.5mm pads on the back.


----------



## J7SC

Martin778 said:


> My best score on air: 14.848
> View attachment 2488869


Nice, especially for air cooling. I also recently loaded the KPE 520W rBAR bios for the first time (on water, though) and I think it's a keeper.



jomama22 said:


> Don't worry about the temp spike thing, just a bug.


...good to know... which specific temps are known to do the spike thing?


----------



## bmagnien

Anyone tried the RE: Village demo (free on Steam for the next week)? I just ran the 'village' playthrough. I had to switch over to the 1000W bios from the KP520 because I was hitting the PL and couldn't boost over 2020MHz. On the 1000W bios with the PL uncapped, the game was drawing a steady 605W+ at 2145MHz, +1650 mem, and still only getting between 30-40 fps at 4K with every setting maxed out. Curious to see what other people have found. This game looks super heavy; I'll have to try lowering some settings to see what's dragging the frame rate down so much.


----------



## bmagnien

Actually, I'm an idiot. When I was maxing out every setting I set Image Quality to 2, up from 1, and didn't take the time to read the subtext that Image Quality = Resolution Scaling. So I was running it at 8K, lol. Switched that setting back to 1 and I'm now hitting 115 fps, power around 550W, and a 2145MHz clock. Much better...


----------



## gfunkernaught

des2k... said:


> I have my system tilted, so I can't use the fans. Had to flush my loop. Left cp2077 running
> 
> +1484 cp2077 just crashed at 64c
> now at +1469 cp2077 no crash
> 
> on new pads(they are in bad shape now, too many remounts) I did play/finish the game at +1500mem
> 
> I always run with fans on the backplate, for sure +1484 didn't crash before at 54c.
> 
> For max, +1530, +1560 PR score drops.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 182 in Port Royal: AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> Out of the box 🤣:
> 
> I scored 12 925 in Port Royal: AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> Looped +1530 for 45mins I think. Didn't really do the full stress test.
> 
> I have my copper mem plates, 0.5mm and 1mm, gelid 15wmk 0.5mm didn't get here yet.
> 
> I do plan to switch to 0.5mm pads on the back.


that's nuts. copper plates huh? so it goes like vram>pad>copper plate?


----------



## des2k...

gfunkernaught said:


> that's nuts. copper plates huh? so it goes like vram>pad>copper plate?


yep, then use thermal paste on top of the copper shim for the ek backplate.
Shouldn't be too difficult; they match the size of the memory chips and cut the thermal pad thickness in half (more efficient at moving heat out). The backplate is already marked where it touches the memory.
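A quick back-of-the-envelope sketch of why the shim trick works, for anyone curious: thermal resistance of a layer scales with thickness over conductivity, so swapping half of a 1.0mm pad for copper cuts the stack's resistance roughly in half. Numbers below are my own illustrative assumptions (15 W/mK pad like the Gelid ones mentioned, ~400 W/mK for copper), not anyone's measurements:

```python
# Series thermal resistance per unit area: R = thickness / conductivity.
# Compares a 1.0 mm pad alone vs a 0.5 mm pad + 0.5 mm copper shim.

K_PAD = 15.0      # W/(m*K), assumed high-end thermal pad
K_COPPER = 400.0  # W/(m*K), assumed copper shim

def layer_resistance(thickness_mm: float, k: float) -> float:
    """Thermal resistance of one layer per unit area, in K*m^2/W."""
    return (thickness_mm / 1000.0) / k

pad_only = layer_resistance(1.0, K_PAD)
pad_plus_shim = layer_resistance(0.5, K_PAD) + layer_resistance(0.5, K_COPPER)

print(f"1.0 mm pad:          {pad_only:.2e} K*m^2/W")
print(f"0.5 mm pad + copper: {pad_plus_shim:.2e} K*m^2/W")
```

With these assumed conductivities the pad+shim stack comes out at roughly half the resistance of the pad alone, which lines up with "more efficient at moving heat out".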


----------



## gfunkernaught

des2k... said:


> yep, then use thermal paste on top of the copper shim for the ek backplate.
> Shouldn't be too difficult, they match the size of the memory & reduce half the thermal pad thickness (more efficient at moving heat out). The backplate is already marked where it touches the memory.
> View attachment 2488969
> 
> 
> View attachment 2488963


So only for the back? I did vmem>paste>pad for both sides. You had to get the right size plates to match the required spacing? Interesting. I prob won't do the front because I'm really happy with it and don't feel like redoing the LM. But the back I could do. I'm planning on installing a 2nd pump and redoing some fittings and joints at some point. Maybe then. Could you link to those shims?


----------



## TheNaitsyrk

Nizzen said:


> You can't force the gpu to draw more "current". The card asking for the "current" it need. Cold is the key
> 
> 
> The 3090 Suprim X I tested was a poor bin. 2050mhz was the max stable, even with 15c cold air 😜





jomama22 said:


> I don't think many people are hooking their loops to tap water tbh lol. If they have lower temps that are still above freezing, it's either a chiller or someone took it outside/opened a window during the winter.


They did. They posted it in their sig as well.


----------



## gfunkernaught

Another example of how sensitive PR is to memory clocks and how the core and mem clocks affect each other:
+220 core, +1100 mem

I scored 15 527 in Port Royal | Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)

+205 core, +1500 mem

I scored 15 563 in Port Royal | Intel Core i7-8700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)

Had all fans at 100% for this run. Core temp avg'd 41c, mem temp was 52c. The voltage slider was at 100% to get that 1093mv for the core.

@des2k... BTW how did you get above 1500 on the mem offset? nvsmi?


----------



## des2k...

gfunkernaught said:


> So only for the back? I did vmem>paste>pad for both sides. You had to get the right size plates to match the required spacing? Interesting. I prob won't do the front because I am really happy with and don't feel like redoing the LM. But the back I could do. I am planning on at some point installing a 2nd pump and redoing some fittings and joints. Maybe then. Could you link to those shims?


sure

10pcs 0.3mm/0.5mm/0.8mm/1.0mm Copper Sheet Plate Strip Shim Thermal Pad Heatsink Sheet for GPU CPU VGA Chip RAM Cooling (www.aliexpress.com)


----------



## RaMsiTo

FTW3 Ultra, HOF 1000W BIOS, air cooling

Result not found (www.3dmark.com)


----------



## Beagle Box

TheNaitsyrk said:


> They did. They posted in Sig as well.


Whhhhaaaaaaaaa? Who did that?


----------



## des2k...

gfunkernaught said:


> @des2k... BTW how did you get above 1500 on the mem offset? nvsmi?


The new Afterburner lets you use over +1500 now. I don't have it updated; I load EVGA PX1, apply the mem offset on the VF curve window, then close it.


----------



## jomama22

TheNaitsyrk said:


> They did. They posted in Sig as well.


Well, you refer to "people". How many are there?


----------



## J7SC

GRABibus said:


> I think your strix is not a strix in fact...


...you mean they slipped me a Matrix instead ? I think I want my money back...err, wait strike that


----------



## Jpmboy

jomama22 said:


> Well you refer to "people". How many is it?


Using "house" water is actually common in large water-cooled enterprise installations (it actually saves $$$ vs air-chilling the whole shebang). Most still do not direct-connect to city water; they use liquid-liquid heat exchangers to dump the heat and circulate a non-aqueous coolant in the racks. Koolance makes heat exchangers for us "enthusiasts". If you're gonna do it at home using your faucet/spigot, you need a pressure-reduction adapter. No waterblock is built to handle the pressure most tap water can push. Nothing like turning your $4000 rig into a lawn sprinkler!


----------



## jomama22

Jpmboy said:


> Using "house" water is actually common in large water-cooled enterprise installations (it actually saves $$$ vs air-chilling the whole shebang). Most still do not direct-connect to city water; they use liquid-liquid heat exchangers to dump the heat and circulate a non-aqueous coolant in the racks. Koolance makes heat exchangers for us "enthusiasts". If you're gonna do it at home using your faucet/spigot, you need a pressure-reduction adapter. No waterblock is built to handle the pressure most tap water can push. Nothing like turning your $4000 rig into a lawn sprinkler!


Lol I understand the concept of industrial cooling. Was merely responding to a specific claim that "people" above him in the PR hof are using tap water for cooling and I am asking how many.


----------



## jomama22

.


----------



## jlodvo

des2k... said:


> no fans on it, looks like this for cp2077 2145core +0mem
> 
> View attachment 2488899
> 
> with +1484mem
> View attachment 2488900


can you show your curve setting? wanna see where you guys push your curves thanks


----------



## TheNaitsyrk

Beagle Box said:


> Whhhhaaaaaaaaa? Who did that?


Linus did it too. FYI.


----------



## Beagle Box

TheNaitsyrk said:


> Linus did it too. FYI.


Yeah. Okay. But was he serious or just doing an experiment for his many fans? 
No offense, but I'm just not buying that it would take such drastic measures to outscore you in Time Spy. 
There's always a faster card, a more efficient motherboard, a higher grade CPU, a more experienced overclocker, etc... out there. 

BTW, Is there a reason you haven't 'hidden' some of your lesser scores? 
You've got like 20 in the Top 100 list for your CPU/GPU combo. 
Leaving just your fastest couple would prove how awesomely fast your PC is in Time Spy. 
Why not give others some space to post theirs as well?
Serious question. Why do some people have so many listed?


----------



## jomama22

Beagle Box said:


> Yeah. Okay. But was he serious or just doing an experiment for his many fans?
> No offense, but I'm just not buying that it would take such drastic measures to outscore you in Time Spy.
> There's always a faster card, a more efficient motherboard, a higher grade CPU, a more experienced overclocker, etc... out there.
> 
> BTW, Is there a reason you haven't 'hidden' some of your lesser scores?
> You're got like 20 in the Top 100 list for your CPU/GPU combo.
> Leaving just your fastest couple would prove how awesomely fast your PC is in Time Spy.
> Why not give others some space to post theirs as well?
> Serious question. Why do some people have so many listed?


I genuinely had no idea you could hide a run lmao. Probably have like 10 runs all smooshed together there.


----------



## TheNaitsyrk

Beagle Box said:


> Yeah. Okay. But was he serious or just doing an experiment for his many fans?
> No offense, but I'm just not buying that it would take such drastic measures to outscore you in Time Spy.
> There's always a faster card, a more efficient motherboard, a higher grade CPU, a more experienced overclocker, etc... out there.
> 
> BTW, Is there a reason you haven't 'hidden' some of your lesser scores?
> You're got like 20 in the Top 100 list for your CPU/GPU combo.
> Leaving just your fastest couple would prove how awesomely fast your PC is in Time Spy.
> Why not give others some space to post theirs as well?
> Serious question. Why do some people have so many listed?


They're not outscoring me personally. They're doing overclocking as a hobby. It's not drastic. It's simple to hook your PC up to a tap.

What do you mean "lesser" scores?

Scores will be close to each other anyway, because there is always variation. One run you get 23000, then another run you get 23700 GPU score in Time Spy.


----------



## Jpmboy

jomama22 said:


> Lol I understand the concept of industrial cooling. Was merely responding to a specific claim that "people" above him in the PR hof are using tap water for cooling and I am asking how many.


Yeah, you know the game. Most tap water is in the 10C range - helpful, but not really enough to keep the card below the first T-clock bin drop under benching loads.
My Koolance EXC-800 chiller does 1C, and the various aquarium chillers I have get down to about 10C but can't sustain that under load. Lol, all these "benchers" chasing HOF: just loop in a 360 rad and sink it in a bucket of ice water. Easy.

ITDiva used to use *these *in her (amazing) builds.

Guys - if any of the HOF entries you are jumping thru hoops to beat are posted on HWBOT, the entry will be labeled as "Ambient", "Chilled" or "LN2/Phase".


----------



## jomama22

Jpmboy said:


> Yeah, you know the game. Most tap water is in the 10C range - helpful, not really enough to keep the card below the first T-clock bin drop under benching loads.
> My Koolance EXC-800 chiller does 1C and the various aquarium chillers I have get down to about 10C but can't sustain that under load. Lol, all these "benchers" chasing HOF: Just loop in a 360 rad and sink it in a bucket of ice water. Easy.
> 
> ITDiva use to us *these *in her (amazing) builds.
> 
> Guys - if any of the HOF entries you are jumping thru hoops to beat are posted on HWBOT, the entry will be labeled as "Ambient", "Chilled" or "LN2/Phase".


Right, exactly. Why anyone would use tap water is genuinely beyond me. There are better ways to achieve the same or better results (without shoving who-knows-how-hard water through your loop). Hence why I said most of the people in the 0-20c range are either using a chiller or just taking the pc outside during winter (like I did).

And yeah, I know about heat exchangers. I had considered using them for wintertime loops but really don't feel like dealing with the condensation that inevitably occurs on the pc side when you lower temps beyond the indoor dew point. If I want to bench when it's cold, I just take the whole rig outside; easy enough, and I don't have to deal with condensation.
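For anyone wanting to know where that condensation threshold actually sits, the indoor dew point can be estimated from room temp and humidity with the Magnus approximation. A rough sketch (the function name and the example numbers are mine, just to illustrate the point):

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in Celsius via the Magnus formula."""
    a, b = 17.62, 243.12  # common Magnus parameterisation constants
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# e.g. a 25c room at 50% RH: chilling the loop below roughly 13.9c
# risks condensation on blocks and fittings
print(round(dew_point_c(25.0, 50.0), 1))
```

So in a typical room, a chiller set anywhere near 0-10c is comfortably below the dew point, which is exactly the condensation problem described above.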


----------



## jomama22

TheNaitsyrk said:


> They're not outscoring me personally. They're doing overclocking as hobby. It's not drastic. It's simple to hook up your PC to a tap.
> 
> What do you mean "lesser" scores?
> 
> Scores will be close to each other anyway, because there always is variation. Once you get 23000 then another run you get 23700 in GPU in TimeSpy.


Dude, again, how many "people" are using tap water? Please give us a count. "People" indicates more than one lol.

And you shouldn't be getting such a variation in Time Spy runs tbh.

He's just pointing out that people, even on ambient air, have higher scores than you. Just a fact of the matter is all.

Highest TS graphics score I could find of yours was 23400. Here is mine at ambient @ 23746:

I scored 22 563 in Time Spy | AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)

I'm sure there are others above me at ambient as well.


----------



## Beagle Box

TheNaitsyrk said:


> They're not outscoring me personally. They're doing overclocking as hobby. It's not drastic. It's simple to hook up your PC to a tap.
> 
> What do you mean "lesser" scores?
> 
> Scores will be close to each other anyway, because there always is variation. Once you get 23000 then another run you get 23700 in GPU in TimeSpy.


_I don't believe it takes running tap water through a loop to reach higher graphic scores than you._ Is that better? 
Lesser scores = lower scores.

My question is:
If you run a benchmark 100 times, why show all 100 scores on the list?
Why not just show the top 2 or 3 and hide or delete the* lower *scores from the list?

@jomama22 Just gave his answer. He didn't realize there was a _Hide_ function.
Some of the Top 100 score lists are populated by the same 10 guys showing 5-15 scores. 

Just wonder why no one thinks to hide older scores.

Nothing really wrong with it, unless number 101 is a member here and I might be prompted to inquire as to how they got such a high score with a cheap motherboard and a Zotac GPU or something.

Not a big deal. Pondered this question yesterday while mulching the flower beds. Just curious.


----------



## J7SC

Jpmboy said:


> Yeah, you know the game. Most tap water is in the 10C range - helpful, not really enough to keep the card below the first T-clock bin drop under benching loads.
> My Koolance EXC-800 chiller does 1C and the various aquarium chillers I have get down to about 10C but can't sustain that under load. Lol, all these "benchers" chasing HOF: Just loop in a 360 rad and sink it in a bucket of ice water. Easy.
> 
> ITDiva use to us *these *in her (amazing) builds.
> 
> Guys - if any of the HOF entries you are jumping thru hoops to beat are posted on HWBOT, the entry will be labeled as "Ambient", "Chilled" or "LN2/Phase".


...thanks for the Koolance heat exchanger link - I might just get one of those (3/8 fitting on those would need an adapter for my builds though). I used to mess around with HWBot / sub zero, and even on ambient managed to get into the top 20 at HoF for the odd bench (ie PR) for a while. These days, I prefer ambient w/ 'big water' to be as close to 'stock / productivity' usage of the system as is possible. Smudging Vaseline or liquid electrical tape all over a new mobo or gpu to insulate for sub-zero was never that much fun...


----------



## des2k...

jlodvo said:


> can you show your curve setting? wanna see where you guys push your curves thanks


I have mine at 2205 @ 1063mv and limit max boost with nvidia-smi at 2145.

I also played a lot at 2175 core 1093mv before, but now +1500 mem is enough to regain those fps at a lower core freq.

Usually I try to limit power at/below 500w, where the ek block delta is a good 11c-13c, and because the Zotac core vrm works really hard past 500w.
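For anyone who hasn't used the nvidia-smi boost cap trick: it's the driver's lock-gpu-clocks feature (`-lgc min,max`, reset with `-rgc`, needs admin/root and a reasonably recent driver). A small sketch, with the commands built as argv lists so you can see exactly what gets run; the 2145 value is just the example from the post above:

```python
import subprocess

def lock_clocks_args(min_mhz: int, max_mhz: int) -> list:
    """argv for nvidia-smi's lock-gpu-clocks: caps boost at max_mhz."""
    return ["nvidia-smi", "-lgc", f"{min_mhz},{max_mhz}"]

def reset_clocks_args() -> list:
    """argv to restore default clock behaviour."""
    return ["nvidia-smi", "-rgc"]

def run(argv: list) -> None:
    # Requires elevated privileges and an NVIDIA driver that supports -lgc
    subprocess.run(argv, check=True)

# Show the command without executing it (no GPU needed to inspect it)
print(" ".join(lock_clocks_args(0, 2145)))
```

To actually apply it you'd call `run(lock_clocks_args(0, 2145))`, or just type the printed command into an elevated prompt.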


----------



## jomama22

Beagle Box said:


> _I don't believe it takes running tap water through a loop to reach higher graphic scores than you._ Is that better?
> Lesser scores = lower scores.
> 
> My question is:
> If you run a benchmark 100 times, why show all 100 scores on the list?
> Why not just show the top 2 or 3 and hide or delete the* lower *scores from the list?
> 
> @jomama22 Just gave his answer. He didn't realize there was a _Hide_ function.
> Some of the Top 100 scores list are populated by the same 10 guys showing 5-15 scores.
> 
> Just wonder why no one thinks to hide older scores.
> 
> Nothing really wrong with it, unless number 101 is a member here and I might be prompted to inquire as to how they got such a high score with a cheap motherboard and a Zotac GPU or something.
> 
> Not a big deal. Pondered this question yesterday while mulching the flower beds. Just curious.


Worst case, you can just use the slider at the top of the results page.

I genuinely just upload them so I have some sort of repository to go back to.


----------



## Bobbylee

Beagle Box said:


> Yeah. Okay. But was he serious or just doing an experiment for his many fans?
> No offense, but I'm just not buying that it would take such drastic measures to outscore you in Time Spy.
> There's always a faster card, a more efficient motherboard, a higher grade CPU, a more experienced overclocker, etc... out there.
> 
> BTW, Is there a reason you haven't 'hidden' some of your lesser scores?
> You're got like 20 in the Top 100 list for your CPU/GPU combo.
> Leaving just your fastest couple would prove how awesomely fast your PC is in Time Spy.
> Why not give others some space to post theirs as well?
> Serious question. Why do some people have so many listed?


View the list in leaderboard mode


----------



## Beagle Box

jomama22 said:


> Worst case, you can just use the slider at the top of the results page.
> 
> I genuinely just add them to online so I have some sort of repository to go back on.


But you already have a repository if you're logged in under *My Results* broken out by benchmark, even when they're _Hidden_ from the official Top 100 list. 

Not a big deal. Really just pointing out that there is a _Hide Result_ function and it allows more people to show in the Top 100 lists.


----------



## Bobbylee

Beagle Box said:


> But you already have a repository if you're logged in under *My Results* broken out by benchmark, even when they're _Hidden_ from the official Top 100 list.
> 
> Not a big deal. Really just pointing out that there is a _Hide Result_ function and it allows more people to show in the Top 100 lists.


Yes, I'm aware. It's tedious to hide them all individually, unless you can select multiple to hide; if so, I'm unaware of it.


----------



## Jpmboy

J7SC said:


> ...thanks for the Koolance heat exchanger link - I might just get one of those (3/8 fitting on those would need an adapter for my builds though). I used to mess around with HWBot / sub zero, and even on ambient managed to get into the top 20 at HoF for the odd bench (ie PR) for a while. These days, I prefer ambient w/ 'big water' to be as close to 'stock / productivity' usage of the system as is possible. *Smudging Vaseline or liquid electrical tape all over a new mobo or gpu to insulate for sub-zero was never that much fun...*


even then, ya had to put the board in a hot box to dry it! I've set benching aside for a while, but you know "just when you think you are out... they pull you back in". Aka, Goodfellas style!


----------



## J7SC

Beagle Box said:


> But you already have a repository if you're logged in under *My Results* broken out by benchmark, even when they're _Hidden_ from the official Top 100 list.
> 
> Not a big deal. Really just pointing out that there is a _Hide Result_ function and it allows more people to show in the Top 100 lists.


...perhaps this is more a question for 3DM staff itself for their setup, i.e. only have the top 3 runs by a user for the same hw combo and benchmark enter the public db. You can't really expect a user to constantly edit their own repository... they are simply doing what the 3DM setup allows for. The user's account itself could keep the rest available in private mode.

...I tend to run most benchies with the network cable disconnected (had a few promising-looking runs freeze when a Win 10 update kicked in after the update delay period had expired...). I then take screenshots which also include settings software such as GPUz and MSI AB... the auto-save function saves them anyway for later official uploads if I decide to do so. The problem is only the labeling of the auto-saves, especially if you did a whole bunch of runs of several different benchmarks in a short time. The auto-saved label tag should have more useful info, such as PR15525 or so, in the file name.



Jpmboy said:


> even then, ya had to put the board in a hot box to dry it!. I've set benching aside for awhile, but you know *"just when you think you are out... they pull you back in".* Aka, Goodfella's style!


...know _exactly_ what you mean  , especially if you end up with a decent hardware sample after a new purchase


----------



## Biscottoman

jomama22 said:


> Dude, again, how many "people" are using tap water. Please give us a count. People indicate more than 1 lol.
> 
> And you shouldn't be getting such a variation in timespy runs tbh.
> 
> He's just pointing out that people, even on ambiant air, have higher scores than you. Just a fact of the matter is all.
> 
> Highest TS graphics score I could fine of yours was 23400. Here is mine at ambiant @ 23746:
> 
> I scored 22 563 in Time Spy | AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> I'm sure there are others above me at ambiant as well.


What BIOS are you running on your card?


----------



## TheNaitsyrk

jomama22 said:


> Dude, again, how many "people" are using tap water. Please give us a count. People indicate more than 1 lol.
> 
> And you shouldn't be getting such a variation in timespy runs tbh.
> 
> He's just pointing out that people, even on ambiant air, have higher scores than you. Just a fact of the matter is all.
> 
> Highest TS graphics score I could fine of yours was 23400. Here is mine at ambiant @ 23746:
> 
> I scored 22 563 in Time Spy | AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> I'm sure there are others above me at ambiant as well.


I haven't tested this new GPU yet. My previous GIMPED 3090 FTW3 did 23400. I hope this one is better.

My scores are always ambient. I've never hooked it up to chillers and whatnot.

Not sure why I'm being attacked here when someone literally said in their benchmark run that they hooked it up to tap water. I ONLY MENTIONED it and everyone is making a big deal out of it.

And why did the "hiding lesser scores" talk come up? I don't understand.


----------



## TheNaitsyrk

jomama22 said:


> Dude, again, how many "people" are using tap water. Please give us a count. People indicate more than 1 lol.
> 
> And you shouldn't be getting such a variation in timespy runs tbh.
> 
> He's just pointing out that people, even on ambiant air, have higher scores than you. Just a fact of the matter is all.
> 
> Highest TS graphics score I could fine of yours was 23400. Here is mine at ambiant @ 23746:
> 
> I scored 22 563 in Time Spy | AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> I'm sure there are others above me at ambiant as well.


Well done. Mine should be around there as well. It clocks 2205MHz too.


----------



## gfunkernaught

des2k... said:


> new afterburner let's you use over +1500 now. Don't have it updated, I load evga px1 on the vf curve window and apply the mem offset then close it.


New as in beta or final?

And thanks for that link btw.


----------



## gfunkernaught

I'm going to hire Sub-Zero for my overclocking endeavors.🤣

Ok Bihan, just hold the radiators.


----------



## TheNaitsyrk

jomama22 said:


> Dude, again, how many "people" are using tap water. Please give us a count. People indicate more than 1 lol.
> 
> And you shouldn't be getting such a variation in timespy runs tbh.
> 
> He's just pointing out that people, even on ambiant air, have higher scores than you. Just a fact of the matter is all.
> 
> Highest TS graphics score I could fine of yours was 23400. Here is mine at ambiant @ 23746:
> 
> I scored 22 563 in Time Spy | AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> I'm sure there are others above me at ambiant as well.


By the way, you have Res. bar enabled?


----------



## autoshot

Hi!
I'd like to give my Zotac 3090 Trinity a little more breathing room and read that you can flash the Gigabyte 3090 Gaming OC BIOS onto the Zotac in order to have 390W instead of 350W max. power consumption. The problem now is: which of the three available BIOS versions should I choose, F3, F12 or F61?








Cheers,
Daniel


----------



## yzonker

autoshot said:


> Hi!
> I'd like to give my Zotac 3090 Trinity a little more breathing room and read that you can flash the Gigabyte 3090 Gaming OC BIOS onto the Zotac in order to have 390W instead of 350W max. power consumption. The problem now is: which of the three available BIOS versions should I choose, F3, F12 or F61?
> View attachment 2489077
> 
> Cheers,
> Daniel


I'm running this Galax right now.

GALAX RTX 3090 VBIOS | 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

Gigabyte guys have reported the newer resize bar Gigabyte bios gimps the power limit. Haven't tested that myself.


----------



## Beagle Box

J7SC said:


> ...perhaps this is more a question for 3DM staff itself for their setup, ie. only have the top 3 runs by a user for the same hw combo and benchmark enter the public db. You can't really expect a user to constantly edit their own repository...they are simply doing what the 3DM setup allows for. The user's account itself could keep the rest available in private mode.
> 
> ...I tend to run most benchies with the network cable disconnected (had a few what appeared to be promising runs freeze, with a Win 10 update kicking in after update delay period had expired...). I then take screenshots which also includes settings software such as GPUz, MSI AB...the auto-save function with a bench saves them anyways for later official uploads if I decide to do so. The problem is only the labeling of the auto-saves, especially if you did a whole bunch of runs in a short time of several different benchmarks. The auto-saved label tag should have more useful info, such as PR15525 or so, in the file name
> 
> 
> 
> ...


It's just something that occurred to me while looking at systems comparable to mine. The list is littered with same people. 
I get what you're saying. I run mine online so score verifications are instantaneous and the _Hide Result_ button is right there on the online results page.

I like the top 3 idea. Top 3 per machine/user would be nice. It's pretty much what I do, anyway.
1 good score = Happenstance
2 top scores = Coincidence
3 top scores = Playa!

There's an autohide function, but its an upgrade feature.


----------



## des2k...

autoshot said:


> Hi!
> I'd like to give my Zotac 3090 Trinity a little more breathing room and read that you can flash the Gigabyte 3090 Gaming OC BIOS onto the Zotac in order to have 390W instead of 350W max. power consumption. The problem now is: which of the three available BIOS versions should I choose, F3, F12 or F61?
> View attachment 2489077
> 
> Cheers,
> Daniel







just grab the Zotac OC rbar vbios from TechPowerUp a few pages back, I posted the link

you get 390w without losing display ports

Gigabyte vbios are super buggy; an early one on my Zotac had an incorrect boost table and micro stutters in high-fps games
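For reference, the cross-flash itself is usually done with TechPowerUp's NVFlash tool, and the sequence is the same whichever vbios you pick. A sketch of the usual steps, with the commands laid out as data so they can be read before anything runs; the .rom file names are placeholders, and cross-flashing always carries some risk of bricking the card, so keep the backup:

```python
import subprocess

def crossflash_commands(new_rom: str, backup: str = "backup.rom") -> list:
    """The typical NVFlash cross-flash sequence, as argv lists."""
    return [
        ["nvflash64", "--save", backup],  # back up the current VBIOS first
        ["nvflash64", "--protectoff"],    # disable flash write protection
        ["nvflash64", "-6", new_rom],     # -6 overrides the subsystem ID
                                          # mismatch check a cross-vendor
                                          # flash will trigger
    ]

def run_all(commands: list) -> None:
    # Run from an elevated prompt with the ROM in the working directory
    for argv in commands:
        subprocess.run(argv, check=True)

# Inspect the planned commands without executing them
for argv in crossflash_commands("zotac_oc_rbar.rom"):
    print(" ".join(argv))
```

Calling `run_all(...)` (or typing the printed commands by hand) does the actual flash; reboot afterwards and verify the power limit in GPU-Z or similar.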


----------



## yzonker

des2k... said:


> just grab the zotac oc rbar vbios from techpower up a few pages back, I posted the link
> 
> you get 390w without loosing display ports
> 
> gigabyte vbios are super buggy, early one on my zotac had inccorect boost table, micro stutters with high fps games


Yea, it works well too. I just tested them both yesterday (Galax and Zotac). The Galax showed an average power level of 385w while the Zotac showed 377w, as you would expect with the Zotac being ~5w lower on the PL. The Galax was about 60 pts higher in PR. Haven't tested all the ports with it though.


----------



## PLATOON TEKK

On the topic: I've been working on a 6,000w (20,473 btu) chiller setup that should be able to hit -20c (thanks to 3x 900s and 1x 4,200w chiller).

The coolant can hit -30c. I have two naval-grade desiccant dehumidifiers with 2 ACs controlling temp and humidity in the control room. The pc itself will be in a custom air-tight enclosure (currently being manufactured for me) that will rely on heat-exchanging units (this will ensure no air exchange or additional moisture). This is only possible because the chillers will be pulling the majority of the heat (all components including the psu are watercooled).

Will keep everyone updated once I finalize the Platoon Tekk set and then loop. Regular updates will be here.

If all works out, in theory, can be a “daily” sub-zero setup.


----------



## yzonker

Here's the Zotac.

Zotac RTX 3090 VBIOS | 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

Can't go wrong with either.


----------



## KedarWolf

yzonker said:


> Yea it works well too. I just tested them both yesterday (Galax and Zotac). Galax showed an average power level of 385w while the Zotac 377w as you would expect with the Zotac being 5w less on the PL. Was about 60 pts more in PR (Galax). Haven't tested all ports with it though.


If you use a two power connector VBIOS on a three power connector card, will it pull 1/3 more power because you have 3 power connectors?


----------



## geriatricpollywog

Nizzen said:


> Normal K|np|n
> There is K|np|n 1000w with bar into the wild. The big boys want it for them self for a while... That is how it works
> 
> Most of them loves to show off that they have it, but don't love to share it...


Vince sent me the link to the original bios when I emailed him. Maybe someone with a Kingpin just needs to ask nicely for the rebar bios.


----------



## KedarWolf

0451 said:


> Vince sent me the link to the original bios when I emailed him. Maybe someone with a Kingpin just needs to ask nicely for the rebar bios.


What, and risk the wrath of the Kingpin gods by sharing it with us? 'gasp'


----------



## KedarWolf

On a side note: I find that in Cyberpunk, which really stresses your GPU overclock and will show you if you're unstable at all, the Suprim X BIOS will hold a higher memory overclock at the same core overclock than the 1000W Kingpin BIOS without the game crashing. Like I mean +880 memory vs +785 memory on the Kingpin.


----------



## yzonker

KedarWolf said:


> If you use a two power connector VBIOS on a three power connector card, will it pull 1/3 more power because you have 3 power connectors?


I don't have a 3x8pin to try but I've read that works. I wonder if that would provide a way to get above 520w and reBar support.


----------



## Biscottoman

0451 said:


> Vince sent me the link to the original bios when I emailed him. Maybe someone with a Kingpin just needs to ask nicely for the rebar bios.


could you share it with us?


----------



## autoshot

yzonker said:


> Gigabyte guys have reported the newer resize bar Gigabyte bios gimps the power limit. Haven't tested that myself.


Ok that would suck. Is the Galax-BIOS also rBAR-ready? And is there any advantage of choosing this BIOS over the Zotac-BIOS @des2k... posted?



des2k... said:


> just grab the zotac oc rbar vbios from techpower up a few pages back, I posted the link. you get 390w without loosing display ports


Thanks, that's even better of course  You don't happen to know which page that was? I went back to 735 but didn't see anything until then :/



yzonker said:


> Yea it works well too. I just tested them both yesterday (Galax and Zotac). Galax showed an average power level of 385w while the Zotac 377w as you would expect with the Zotac being 5w less on the PL.


Why would the Zotac consume only 377W when it should max out at 390?


----------



## yzonker

The Zotac is 385w PL, not 390w. Close enough that it makes pretty much no difference in performance. Upside is you get to run an actual Zotac bios. It seemed to run cooler too (more than the 5 or so watts of difference would explain).

Yes both support reBar.


----------



## gfunkernaught

Has anyone here upgraded their motherboard and processor solely for the purpose of using reBAR? I'm curious to hear from people about the improvements going from something like an 8700K to one of the bigger AMD chips with lots of cores.


----------



## Beagle Box

gfunkernaught said:


> Has anyone here upgraded their motherboard and processor solely for the purpose of using rbar? I'm curious to hear from people here about the improvements going from like an 8700K to one of the bigger AMD chips that have like a bunch of cores.


No. 
But like you, I have the MSI Z370 Gaming M5, a MB that has a ReBAR BIOS available. I've thought of finding an i9-9900K just to see how much performance I'd gain over my i7-8086.
Might be enough to hold off on further upgrades until DDR5 hits the market.


----------



## bmgjet

It'd be a massive waste of money just for rBAR. The gain is next to nothing in most games.


----------



## ViRuS2k

Yeah, I found out doing my own research that reBAR brings more problems than it solves _(stuttering, performance drops in some games, instability in some games)_ and various other issues, and the gain is 1-2% at most.
For now I won't be using it and will wait until it matures and more games use it... it's nothing but a wolf in sheep's clothing.


----------



## ViRuS2k

I modded a 480mm monster Alphacool radiator on top of my O11 Dynamic XL case.
I have so many water hoses hahahah. I need to find an MSI 3090 waterblock that has its own actively watercooled backplate so I can get rid of the two hoses going to this MP5Works unit I'm using.

Temps are great now though: I get 53°C max load pulling 690W from my 3090, and can sustain clock speeds of 2150 with 10000 memory in games.


----------



## gfunkernaught

Beagle Box said:


> No.
> But like you, I have the MSI Z370 Gaming M5 - a MB that's got a ReBar BIOS available. I thought of finding a i9-9900k just to see how much performance I gain over my i7-8086.
> Might be enough to hold off on further upgrades until DDR5 hits the market.


Wait what? I thought rebar was only for newer platforms...


----------



## Beagle Box

gfunkernaught said:


> Wait what? I thought rebar was only for newer platforms...


MSI Z370 Gaming M5 BIOS [ReBAR beta] page


The question is: Is the upgrade to i9-9900k worth it?


----------



## Pepillo

ViRuS2k said:


> i need to find a 3090 msi waterblock that has its own active watercooling back plate


Bykski N-MS3090TRIO-TC GPU Water Block With Waterway Backplane For MSI RTX 3080 3090 GAMING X TRIO / SUPERIM at formulamod sale


----------



## ViRuS2k

Pepillo said:


> Bykski N-MS3090TRIO-TC GPU Water Block With Waterway Backplane For MSI RTX 3080 3090 GAMING X TRIO / SUPERIM at formulamod sale


Thanks pal, I've seen that, but the problem is I can't find any UK website that stocks it :/ lol


----------



## gfunkernaught

Beagle Box said:


> MSI Z370 Gaming M5 BIOS [ReBAR beta] page
> 
> 
> The question is: Is the upgrade to i9-9900k worth it?


That motherboard isn't well equipped to handle that CPU; I've tried. I don't think the power phases are enough, even though it will POST and boot.


----------



## Pepillo

ViRuS2k said:


> Thanks pal i seen that but the problem is i cant find any UK website with it on :/ lol


You'll have to buy it from China via AliExpress, which is where most of us get them. Five or six weeks shipping, but good prices. My last blocks came from there for the same reason as yours: I couldn't find them anywhere in Europe.


----------



## gfunkernaught

I know this is just one post, but apparently reBAR is supported on Z370 AND an 8700K. I'll try this out later.


----------



## Beagle Box

gfunkernaught said:


> I know this is just one post but, apparently rebar is supported on z370 AND an 8700k. I'll try this out later.
> ...


Well, that kinda stinks. I run in UEFI+Legacy mode. I'll have to repartition my SSD just to see if reBAR works with the MSI Z370 MB + i7-8086.


----------



## KedarWolf

Beagle Box said:


> Well, that kinda stinks. I run in UEFI+Legacy mode. I'll have to repartition my SSD just to see if the rebar works with the MSI Z370 MB + i7-8086.


There are ways to change a legacy drive to a UEFI drive. Google is your friend.


----------



## gfunkernaught

KedarWolf said:


> There are ways to change a legacy drive to a UEFI drive. Google is your friend.


Yes, convert the drive from MBR to GPT. Be careful with that though, as the process slightly resizes the partitions and can prevent Windows from booting properly.
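For what it's worth, Windows ships a built-in tool for exactly this. A minimal sketch, shown as a dry run that only prints the commands (mbr2gpt's /convert is destructive, and disk 0 is an assumption; on a real system run these from an elevated prompt or WinRE, and only after /validate passes):

```shell
# Dry run: echo the Windows mbr2gpt commands instead of executing them.
# On a real system, drop the 'echo' and run from an elevated prompt.
echo mbr2gpt /validate /disk:0 /allowFullOS
echo mbr2gpt /convert /disk:0 /allowFullOS
```

Afterwards you still have to switch the motherboard from CSM/Legacy to pure UEFI boot, which is the part that trips people up.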


----------



## Beagle Box

KedarWolf said:


> There are ways to change a legacy drive to a UEFI drive. Google is your friend.


Unfortunately, Google has proven to be not so much my friend, one who takes no responsibility whatsoever when it feeds me BS. _Especially computer-related BS._ I'll need to record all my OC MB BIOS settings and update the BIOS to the ReBAR beta version. Not sure this will be worth the effort. I think I'll wait and watch as *@gfunkernaught *experiments with it. 

Or I may just experiment with an unused 970 Pro SSD I've got just sitting here collecting dust. Sounds like a hassle, though, for apparently little gain at this time.


----------



## ViRuS2k

Pepillo said:


> You'll have to buy it in China, Alliexpress, which is where we usually do it all of us. Five or six weeks, but good prices, my last blocks have come from there for the same thing as your appointment, not being able to find them all over Europe.


Thanks, OK, I guess I'll order from there. BTW, you wouldn't happen to know if there's a Bykski waterblock and water backplate that also has that water-temp module?
I have a previous Bykski water block that came with the OLED water-temp readout on it, and it looks really nice; you can just about make it out in the image I posted above.
Does this all-in-one waterblock + water backplate come with that module also??? Would be awesome if it did lol


----------



## yzonker

I took pics of all my bios screens before I updated. I did some tests with RDR2 and CP2077 a while ago and showed a minimal improvement. 

Pretty much nothing in RDR2 with Vulkan:

[Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)

Small increase in CP2077 and RDR2 (in DX12, but slower overall than Vulkan):

[Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)


----------



## PLATOON TEKK

Nizzen said:


> Is it possible to change memory voltage in the software? It's "grayed out", so is it a unlock here? Or maybe just for OC lab edition? Where is the biosswitch?  I can't see it on the top. Maybe I have to take it out too see it under?
> 
> 390/450w P/S on the normal HOF. Can you post the 500w bios?


What software are you using? To my knowledge, none of it is exclusive to the OC Lab edition. 

I'm down to send the 500W. I'll be back home with the cards in a few days and can send a link.


----------



## nyk20z3

Friend of mine just picked up a 3090 from MC in Queens NY, if y’all really want one just keep camping or dropping in at random. Seems to be the most efficient way to pick up any GPU right now.


----------



## Lord of meat

Update to anyone swapping thermal pads.
I was having an issue with the memory overclock on my 3090 MSI Trio with EK block. I have now swapped the 3.8 W/mK pads from EK for 12.8 W/mK pads from Thermalright.
The hotspot now doesn't go over 65°C (it used to go up to 72°C). I used thicker pads on the backplate, all 2mm; on the block itself I used the same thickness EK specifies.

I cleaned the GPU with CRC electronics cleaner and alcohol on some spots. At first I did not get a POST, but I redid my work and waited 20 minutes for it to dry. I assume there was a short somewhere from the chemicals, or I mounted it like a monkey.

Using the EVGA XOC 500W BIOS:
+140 core (~2085-2115 MHz), will crash if it's over 2130.
+850 mem (10600 MHz), lowers core clocks if higher but no longer crashes.

So far it works; will update if it fails.
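A side note on reading those memory numbers: for GDDR6X, Afterburner's clock readout is half the effective data rate, and the OC offset adds to the readout directly (stock on a 3090 is ~9752 MHz, so +850 lands at the ~10600 quoted above). A rough sketch of that arithmetic, with the conventions inferred from this thread rather than any official spec:

```python
def gddr6x_stats(readout_mhz: float, offset_mhz: float, bus_bits: int = 384):
    """Back-of-envelope GDDR6X numbers from an Afterburner-style readout.

    Assumes the readout is half the effective data rate and that the OC
    offset adds to the readout directly. Returns (effective data rate in
    MT/s, theoretical bandwidth in GB/s, decimal units).
    """
    effective_mtps = (readout_mhz + offset_mhz) * 2
    bandwidth_gbps = effective_mtps * bus_bits / 8 / 1000
    return effective_mtps, bandwidth_gbps

print(gddr6x_stats(9752, 0))    # stock 3090: 19504 MT/s, ~936 GB/s
print(gddr6x_stats(9752, 850))  # +850 offset: 21204 MT/s, ~1018 GB/s
```

The stock numbers line up with the spec table at the top of the thread (19504 MHz effective, 936 GB/s on a 384-bit bus).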


----------



## ViRuS2k

That brings me to memory OC too: what are you guys using on your cards for memory overclocks?
My 3090 Gaming X Trio seems like a monster.
It does +1350 under mining and +1050 under gaming, though that's as high as I went. I think it will go higher, but I'm just curious if this is safe lol, as I know it will go higher under gaming.


----------



## ViRuS2k

Lord of meat said:


> Update to anyone swapping thermal pads.
> I was having an issue with the memory overclock on my 3090 msi trio with ek block, i have now swapped the 3.8wmk pads from ek with 12.8 from thermalight.
> the hotspot now doesn't go over 65c (used to go up to 72c). i used thicker pads on the backplate, all 2mm. the block itself i used the same fatness as ek asks.
> 
> i cleaned the gpu with crc electronics cleaner and alcohol on some spots. at first i did not get a post but i redid my work and waited 20 min for it to dry. i assume there was a short somewhere from the chemicals or i mounted it like a monkey.
> 
> Using EVGA XOC 500w bios
> +140 Core (`2085-2115), will crash if its over 2130.
> +850 Mem (10600), lowers core clocks if higher but no longer crashes.
> 
> so far it works, will update if it fails.


i think there is something wrong with your block or pads, if thats under gaming only then i would be quite worried. my hotspot under 2000 memory speeds in afterburner when mining wich is way worse on memory temps than gaming is HOTSPOT : 53c and GPU 39c and memory 78c and thats under mining, my temps are way way lower when gaming for instance hotspot never goes over 42c under gaming. with a junction temp of 64c and this is with the card overclocked to +1000 on memory when gaming.

EDIT: ok im a tool lol  i have active MP5works cooling my cards memory hence why the temps are much lower  thats what happens when your up all night with and read to fast


----------



## Lord of meat

ViRuS2k said:


> i think there is something wrong with your block or pads, if thats under gaming only then i would be quite worried. my hotspot under 2000 memory speeds in afterburner when mining wich is way worse on memory temps than gaming is HOTSPOT : 53c and GPU 39c and memory 78c and thats under mining, my temps are way way lower when gaming for instance hotspot never goes over 42c under gaming. with a junction temp of 64c and this is with the card overclocked to +1000 on memory when gaming.
> 
> EDIT: ok im a tool lol  i have active MP5works cooling my cards memory hence why the temps are much lower  thats what happens when your up all night with and read to fast







Here are the temps; I think I'm good so far.
In a Port Royal loop with 500W I get 62°C hotspot and 47°C core, and I only have a 240 rad.
Still happy.


----------



## gfunkernaught

@Lord of meat
I also have a Trio and EK block. I replaced the front pads with 1mm 13 W/mK and on the back I used 1.5mm 12.7 W/mK pads. For every pad, though, there is thermal paste between the pad and the component. I used liquid metal for the core. On top of the backplate I put 7x M.2 heatsinks sitting on a large 2mm 5 W/mK pad, and then a 120mm SP fan on top of the heatsinks, blowing upwards. I don't mine, but at 500W I average 40°C core, 54°C mem, and 50-ish °C on the hotspot. I run +135 core (2085-2100MHz effective) and +1200 VRAM offsets. I too get crashes if I try to run 2130MHz effective regardless of voltage, but only in Cyberpunk; I can play Quake II RTX at that clock speed just fine. I think your temps can improve though.
I jump whenever I see people on here with the same card and block.😁

...and I just saw that you only have a 240 rad...doh!


----------



## Lord of meat

gfunkernaught said:


> @Lord of meat
> I also have a Trio and ek block. Replaced the front pads with 1mm 13w/mk and the back I used 1.5mm 12.7w/mk pads. For every pad though there is thermal paste between the pad and component. I used liquid metal for the core. On top of the back plate I put 7xm.2 heatsinks that sit on a large 2mm 5w/mk pad, and then a 120mm sp fan on top of the heatsinks, blowing upwards. I don't mine but at 500w I avg 40c core, 54c mem, 50ish Celsius on the hotspot. I run +135 core (2085-2100mhz effective) +1200 vram offsets. I too get crashes if I try to run 2130mhz effective regardless of voltage, but crashes in cyberpunk only. I could play quake 2 RTX at that clock speed just fine. I think your temps can improve though.
> I jump whenever I see people on here with the same card and block.😁
> 
> ...and I just saw that you only have a 240 rad...doh!


It's all good man, once I save up for a CPU block I'll get two 360s 😝


----------



## ViRuS2k

Lord of meat said:


> here are the temps, i think im good so far.
> on port loop with 500w i get 62 hotspot and 47 core, i also only have a 240 rad
> still happy.
> 
> View attachment 2489252


Indeed, good enough for a 240 rad.
But going by your screenshot you're pulling a max of 352W. You might need to run something more demanding to pull the full 500W from that 500W BIOS and then see what temps you get, or up your resolution. For pulling power you can't beat Resident Evil 2 or 3, which I use all the time to test max clocks at max power: set the in-game resolution scaler to 2.0 and it will draw all the power a card has lol.
Still very good for a 240 rad.


----------



## gfunkernaught

Lord of meat said:


> Its all good man, once I save for a block for the cpu ill get 2 360 😝


Get 3 👍


----------



## erazortt

gfunkernaught said:


> @Lord of meat
> ...and I just saw that you only have a 240 rad...doh!


You have only a single 240 rad? But your temps are at 47°C while looping Port Royal? How the heck is that even possible? How fast are your fans running?


----------



## gfunkernaught

.


----------



## des2k...

erazortt said:


> You have only a single 240 rad? But your Temps are at 47C while looping port royal..? How the heck is that even possible? How fast are your fans running?


Probably not using 500W, or his block delta is really good, lol, like 7°C.

I have 5 rads and my water temp is 28°C+ when dumping 500W+ into the loop.

A few YouTubers have already tested a stock 3090 at 350W+ on 240 and 360 rads: horrible temps, 55-65°C.


----------



## Lord of meat

erazortt said:


> You have only a single 240 rad? But your Temps are at 47C while looping port royal..? How the heck is that even possible? How fast are your fans running?


Fans are at 100% and I use air conditioning.
The top is a CPU AIO that shouldn't even fit in this case. One fatty is exhausting through a rad, two 120s are pulling through the front rad, one at the bottom is intake, and the rear is intake. I tried different arrangements and fans.
With no AC I think it was doing 53-ish last night, but I try not to run without it; it scares me.

Edit: any good 360 rad under 100 USD?


----------



## des2k...

Lord of meat said:


> Fans are at 100 and I use air conditioning.
> The top is an aio for cpu that should not fit in this case. one fatty is pushing out rad, 2 120 are pulling out the front rad, one at bottom intake and the rear is intake. I tried different ways and fans.
> with no ac i think last night it was doing 53ish but I try not to run without it, it scares me.
> 
> Edit: any good under 100 usd 360 rad?
> View attachment 2489298


I like my XR7 360, very good at removing heat, but it takes a lot of space and needs a D5 for good flow.



https://www.amazon.com/Corsair-Hydro-360mm-Cooling-Radiator/dp/B07W94H4G6/ref=mp_s_a_1_1?dchild=1&keywords=xr7+360&qid=1620230360&sr=8-1


----------



## gfunkernaught

^
I can also vouch for xr7's. I have a 360x55 and a 240x55.


----------



## Biscottoman

Guys, does any of you have the 520W BIOS with Resizable BAR on, or the 1000W one (also with BAR on)? It would be really appreciated.


----------



## Pepillo

Biscottoman said:


> Guys does anyone of you has the 520w bios with resizable bar on or the 1000w one (always with bar on). Would be really appreciated


520W BAR:

EVGA RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## GRABibus

ViRuS2k said:


> that brings me also to memory oc, what are you guys using on your cards for memory overclocks ?
> my 3090 seems like a monster gaming x trio 3090
> does +1350 under mining and does +1050 under gaming though thats as high as i went, i think it will go higher but just curious if this is safe lol as i know it will go higher under gaming


On my STRIX I use +1000MHz; everything beyond that is artifacts or a crash.
I temporarily had the opportunity to test a GALAX HOF and see whether I'd buy one for good.
I could bench at +1700MHz on it.
But in games, everything beyond +1400MHz went into erratic little artifacts (BFV, Overwatch, Cold War).

Past a certain point, performance starts to decrease as you increase the memory frequency.
You have to test the performance benefit as you increase the memory frequency and find the value at which performance starts to fall below what you get at lower frequencies (through Port Royal, for example).
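That sweep-and-compare procedure is simple enough to express as a script: benchmark at each offset, then keep the offset with the best score rather than the highest stable one. A minimal sketch (the numbers below are made-up illustrations, not measurements):

```python
def best_memory_offset(results: dict) -> int:
    """Given {memory offset in MHz: benchmark score}, return the offset with
    the highest score. Past a certain point GDDR6X stops crashing and just
    gets slower, so the best offset is not the highest stable one."""
    return max(results, key=results.get)

# Hypothetical Port Royal sweep on one card (illustrative numbers only):
sweep = {0: 14900, 500: 15050, 1000: 15150, 1300: 15100, 1500: 14980}
print(best_memory_offset(sweep))  # -> 1000
```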


----------



## GRABibus

Biscottoman said:


> Guys does anyone of you has the 520w bios with resizable bar on or the 1000w one (always with bar on). Would be really appreciated


EVGA 520W with ReBAR is on TechPowerUp in the unverified section:

EVGA RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

EDIT: ah, sorry, Pepillo was faster than me 

Guys,
I really advise you to test this one for gaming:

EVGA RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

With it, I have my highest stable in-game frequencies and lower temps.
The frequency is really stable, no throttling, and the temperature stabilizes very quickly.

In Cold War, I have 3 degrees less on the core than with the MSI Suprim X BIOS and a 30MHz higher stable core frequency.


----------



## GRABibus

PLATOON TEKK said:


> What software are you using? To my knowledge, none of it is exclusive to the OC Lab.
> 
> Am down to send 500w. will be back home with the cards in a few days and can send link.


He uses this one:

HOF AI Software (RTX), Galaxy Microsystems Ltd. (www.galax.com)

I used it too on the HOF I had last week, and it was greyed out there as well.


----------



## gfunkernaught

Are there no official BAR-enabled BIOSes available from EVGA?


----------



## yzonker

gfunkernaught said:


> Are there not official bar enabled bios available from evga?


They posted links to the normal KP BIOSes on the EVGA forum, in the sticky thread somewhere.


----------



## yzonker

This what you want? (Jacob's post, if the permalink doesn't work.)

Enable Resizable Bar on EVGA GeForce RTX 30 Series (Page 5) - EVGA Forums (forums.evga.com)


----------



## gfunkernaught

Is there a special process to flash a BAR-enabled BIOS over a regular BIOS? Do I have to flash a regular 520W BIOS first and then the BAR-enabled version? It's like a partial firmware flash, right?


----------



## J7SC

...does anyone have a download link to the latest A.B.E. BIOS tool? My earlier version got overwritten by a new Win 10 install... Thanks


----------



## gfunkernaught

I used X1 to update my 1kW BIOS to the reBAR-enabled version and, like others have reported, it seems to have flashed a 450W BIOS instead. It probably detects an XC Ultra or something. Not going to test this out, as the lower clocks and the reBAR benefits would probably cancel each other out.


----------



## gfunkernaught

GRABibus said:


> EVGA 520W with Rebar is on Techpower up at unverified section :
> 
> EVGA RTX 3090 VBIOS (www.techpowerup.com)
> 
> EDIT : ah Sorry Pepillo was faster than me
> 
> Guys,
> I really advised you to test this one for gaming :
> 
> EVGA RTX 3090 VBIOS (www.techpowerup.com)
> 
> => With, I have my highest stable frequencies in game and lower temps.
> The frequency is really stable, no throttle and temperature stabilizes very quickly.
> 
> In cold war, I have 3 degrees less on Core than with MSI SuprimX Bios and 30MHz higher stable core frequency.


Did you try the 520W BAR BIOS and compare it to the 500W?


----------



## des2k...

J7SC said:


> ...does anyone have a download link to the latest A.B.E. bios tool ? My earlier version got overwritten by a new Win 10... Thanks


That would be a user on here; he's the dev for that app. DM him.


----------



## gfunkernaught

@GRABibus
Nevermind, I did it. The 520W BIOS scores higher even at the same average clock speed of 2090MHz. I did not notice any difference in clock stability or temps, as the 500W and 520W BIOSes both seem to be limited to around 480W on my Trio. At least with the 1kW BIOS at 50% I can actually get 500W. 
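The slider math being relied on here is just the vBIOS power limit times the slider percentage; a trivial sketch:

```python
def target_power_w(bios_limit_w: float, slider_pct: float) -> float:
    """Board power target implied by a vBIOS power limit and the
    Afterburner / Precision X1 power-limit slider percentage."""
    return bios_limit_w * slider_pct / 100.0

print(target_power_w(1000, 50))  # 1 kW XOC BIOS at 50% -> 500.0 W
print(target_power_w(520, 100))  # 520 W BIOS maxed out -> 520.0 W
```

Whether the card actually reaches that target is another matter; as noted above, some Trios wall out around 480W regardless.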

See this comparison below. Left is 500w and right is 520w.








Result (www.3dmark.com)


----------



## J7SC

des2k... said:


> that would be user here, he's the dev for that app, DM him


Thanks, I found it via Google > OCN


----------



## KedarWolf

J7SC said:


> ...does anyone have a download link to the latest A.B.E. bios tool ? My earlier version got overwritten by a new Win 10... Thanks











File on MEGA (mega.nz)


----------



## J7SC

KedarWolf said:


> File on MEGA
> 
> 
> 
> 
> 
> 
> 
> mega.nz


Tx  see above


----------



## Lobstar

I'd appreciate some advice. I'm able to hold very high stable clocks, but the benchmark scores don't seem to follow. Any tips?

This run was 15308 in Port Royal.


----------



## UdoG

Deleted.


----------



## GRABibus

gfunkernaught said:


> @GRABibus
> Nevermind I did it. The 520w bios scores higher even at the same average clock speed of 2090mhz. I did not notice any difference clock stability or temp, as the 500w and 520w bios always seem to be limited to around 480w on my Trio. At least with the 1kw bios at 50% I can actually get 500w.
> 
> See this comparison below. Left is 500w and right is 520w.
> 
> 
> 
> 
> 
> 
> 
> 
> Result
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com


I was mainly talking about gaming, not PR.
Yes, of course, I tested the 520W BIOS.
It gives me a much better score in PR (the best BIOS for my Strix in PR), but in gaming, more heat and therefore lower frequencies than the 500W BIOS.

I am on air; maybe it works better under water, and it depends on the sample…


----------



## Pepillo

gfunkernaught said:


> @GRABibus
> Nevermind I did it. The 520w bios scores higher even at the same average clock speed of 2090mhz. I did not notice any difference clock stability or temp, as the 500w and 520w bios always seem to be limited to around 480w on my Trio. At least with the 1kw bios at 50% I can actually get 500w.
> 
> See this comparison below. Left is 500w and right is 520w.
> 
> Result (www.3dmark.com)


I have a 3090 MSI Trio just like you, with a block and the 520W BIOS. I used to have that wall at 480W too, but it's been gone for about two months. Even undervolted at 1.0V I pass 510W in any demanding game. The bad news is that I don't know what fixed it: drivers, some Windows update, the BIOS with BAR, tweaking the curve, or a little of everything. Also, a good friend of mine with the same Trio 3090 and settings had that wall too, and he no longer has it.

My PR in case you want to compare:









I scored 15 314 in Port Royal (Intel Core i9-7900X, NVIDIA GeForce RTX 3090 x 1, 32450 MB, 64-bit Windows 10) (www.3dmark.com)


----------



## TheNaitsyrk

GRABibus said:


> EVGA 520W with Rebar is on Techpower up at unverified section :
> 
> EVGA RTX 3090 VBIOS (www.techpowerup.com)
> 
> EDIT : ah Sorry Pepillo was faster than me
> 
> Guys,
> I really advised you to test this one for gaming :
> 
> EVGA RTX 3090 VBIOS (www.techpowerup.com)
> 
> => With, I have my highest stable frequencies in game and lower temps.
> The frequency is really stable, no throttle and temperature stabilizes very quickly.
> 
> In cold war, I have 3 degrees less on Core than with MSI SuprimX Bios and 30MHz higher stable core frequency.


Will the 520W BIOS work with the Suprim X (the reBAR-enabled one)?

If so, I'll do some testing.


----------



## Pepillo

TheNaitsyrk said:


> Will the 520W BIOS work with Suprim X (rebar enabled one?)
> 
> If so I'll do some testing.


It works, it works, but on air you'll have problems with the fans: they won't reach the speed they should. That's why this BIOS is best only under liquid; use the 500W BIOS on air.


----------



## Pepillo

Lobstar said:


> I'd appreciate some advice. I'm able to hold stable clocks very high however the benchmark scores don't seem to follow. Any tips?
> 
> This run was 15308 in port royal.
> View attachment 2489360


Others with Ryzen have reported under-performance issues in Port Royal. They said to turn off cores or other things, I don't know the details, but there seems to be something to it. Your score for that rig, those clocks, and those temperatures is very low.


----------



## GRABibus

Pepillo said:


> Others with Ryzen have reported under-performance issues at Port Royal. They said turn off cores or other things, I don't know very well, but there seems to be something. Your score for such rig, clocks, and temperatures is very low.


We already discussed this subject some days ago in the thread.
Jomama gave me this tip:

Disable 1 CCD in BIOS
Disable SMT in BIOS
Set core count = 4 in BIOS
Set an all-core OC (so with 4 cores) at the highest stable frequency (for me that's 4.95GHz)

You should see a score increase of roughly 100 pts.

It seems that Ryzen has a penalty in PR versus Intel.

But honestly, this 15300 score seems in line…






5950x with 3090kpe lost score over 10900k 3090kpe in port royal help (Page 2) - EVGA Forums (forums.evga.com)


----------



## TheNaitsyrk

Pepillo said:


> It works, it works, but if it's by air you'll have problems with the fans and you don't reach the speed it should be. That's why this bios is best only with liquid and use the 500w bios on air.



I've got 4320mm worth of very thick rads cooling things, with 4 pumps. I'm currently on the 1000W BIOS, but that doesn't have ReBAR, so I'll flash the 520W with ReBAR and see. I also need to upgrade the motherboard BIOS on my X299 Dark to the newest version to support it all.


----------



## Beagle Box

gfunkernaught said:


> @GRABibus
> Nevermind I did it. The 520w bios scores higher even at the same average clock speed of 2090mhz. I did not notice any difference clock stability or temp, as the 500w and 520w bios always seem to be limited to around 480w on my Trio. At least with the 1kw bios at 50% I can actually get 500w.
> 
> See this comparison below. Left is 500w and right is 520w.
> 
> Result (www.3dmark.com)


So reBAR is working on the Z370 / 8700K combo?
What process did you follow to upgrade to the EVGA KP520 w/ReBAR BIOS? 
My PC is balking, giving me a processor error that results in CSM being switched back on.


----------



## Beagle Box

delete
dup


----------



## yzonker

GRABibus said:


> we discussed already this subject some days ago on the thread.
> Jomama gave me this tip :
> 
> disable 1 CCD in bios
> Disable SMT in bios
> Set up core « count = 4 » in bios
> Set up a all core OC (so with 4 cores) at the highest stable frequency (from my side it is 4,95GHz)
> 
> you should see a score increase of 100pts roughly.
> 
> It seems that Ryzen has a penalty in PR versus Intel.
> 
> but honestly, this 15300 score seems in line…
> 
> 
> 
> 
> 
> 
> 5950x with 3090kpe lost score over 10900k 3090kpe in port royal help (Page 2) - EVGA Forums (forums.evga.com)


He's a little low I think. This is what I scored with lower core and mem clocks. I did have smt disabled with a modest 4600mhz all core on my 5800x.









I scored 15 262 in Port Royal (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) (www.3dmark.com)





Slightly lower but not enough to explain the gpu clock differences.


----------



## gfunkernaught

GRABibus said:


> I was mainly talking about gaming, not PR.
> Yes, of course, I tested 520W bios.
> Gives me much better score on PR (the best bios for my strix for PR), but in gaming, more heat and then lower frequencies than the 500W bios.
> 
> I am on air, and maybe it works better on strix and depends on the sample…


I noticed slightly better performance in Cyberpunk, but it crashed way sooner than on the 1kW BIOS, since the voltage isn't staying where I want it to due to PL bounce. I was only pushing a +120 core offset, yielding about 2099MHz average. I was able to sustain that with the 1kW BIOS for hours and hours with the PL at 50%.


----------



## gfunkernaught

Beagle Box said:


> So rebar is working on the Z370 / 8700k combo?
> What process did you follow to upgrade to the EVGA KP520 w/Rebar BIOS?
> My PC is balking - giving me a processor error that results in CSM being switched back on.


Yes it is. At first I used X1 to update KP 1kW > FTW3, because X1 thought that was my card. From there I flashed to the unofficial 520W from a couple of pages back on here, using nvflash. Yep, reBAR shows enabled in NVCPL. I'll take screenshots when I get home from work.
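For anyone following along, the usual cross-flash sequence looks roughly like this. Shown as a dry run that only prints the commands; the flag names are the commonly circulated nvflash ones and the file names are placeholders, so check them against your nvflash build's help output before doing anything real:

```shell
# Dry run: echo the typical nvflash cross-flash steps instead of running them.
# Drop the 'echo' prefixes on a real system, at your own risk.
echo nvflash --save backup.rom   # back up the current vBIOS first
echo nvflash --protectoff        # disable the EEPROM write protection
echo nvflash -6 new_bios.rom     # flash, overriding the PCI subsystem ID mismatch
```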


----------



## Beagle Box

gfunkernaught said:


> Yes it is. At first I used X1 to update kp1kw>ftw3 because x1 thought that was my card. From there I flashed to the 520w unofficial from a couple pages back on here, using nvflash. Yeap rebar enabled yes in nvcpl. I'll take screenshots when I get home from work.


Very Cool. 😎
X1 thought your MSI was an EVGA? Because the old EVGA BIOS was already installed? Interesting. 
I prefer the MSI Suprim X BIOS for daily use. I wonder if updating the MSI BIOS for REBAR would act similarly?
Thanks.


----------



## gfunkernaught

Beagle Box said:


> Very Cool. 😎
> X1 thought your MSI was an EVGA? Because the old EVGA BIOS was already installed? Interesting.
> I prefer the MSI Suprim X BIOS for daily use. I wonder if updating the MSI BIOS for REBAR would act similarly?
> Thanks.


I skipped MSI rebar because I didn't want to install/use MSI Live Update or their Dragon Center crapware. I've tried the Suprim non-rebar for daily use and, again, crashes because of voltage drops. Hopefully the KP 1kW BIOS will get an update for rebar, then I'll use that.

Btw what cpu errors were you getting?


----------



## Beagle Box

gfunkernaught said:


> I skipped MSI rebar because I didn't want to install/use MSI Live update or their DragonCenter software crapware. I've tried the suprim non-rbar for daily use and again, crashes because of voltage drops. Hopefully the kp1kw bios will get an update for rebar then I'll use those.
> 
> Btw what cpu errors were you getting?


Strange your card doesn't like the Suprim X BIOS. My ASUS absolutely loves it. Both gaming and benchmark performance with that BIOS approach obscene levels.

I was getting quick, DOS-looking (640x480) black-screen errors. 
They flashed up very quickly while Windows was loading, basically stating my hardware didn't support certain functions and that my system was being set to CSM mode. 
Sure enough, on next boot, I checked the motherboard BIOS settings and the Windows WHQL setting was set to CSM.

Weird.

I just did as you suggested.
I flashed the old KP520 BIOS to my card and used EVGA's Precision X1 to update my BIOS to EVGA's rebar version.
Went off without a hitch.
Looks like everything is working now.
I wonder if I can now save this BIOS to file and just flash it as is later, or if I'll need to repeat the whole process...?
.........
The hardest part of the whole affair was swapping my SSD from MBR to GPT. 
In the end, I found the recovery partition was dorked. It was enabled and looked fine, but had to be disabled and re-enabled to finish the process.
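For reference, Windows 10 (1703+) ships a built-in tool for exactly this MBR-to-GPT swap, which avoids third-party partition utilities. A minimal sketch, assuming the system disk is disk 0 and has room for the EFI partition:

```shell
# Validate first; nothing is written in this step.
mbr2gpt /validate /disk:0 /allowFullOS
# Convert in place (data is preserved, but back up anyway).
mbr2gpt /convert /disk:0 /allowFullOS
# Afterwards, switch the motherboard firmware from CSM/Legacy to UEFI boot,
# or Windows will no longer start.
```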

So everything rebar is working now. And at least the PR numbers look good. 

Thanks for the help!


----------



## gfunkernaught

Beagle Box said:


> Strange your card doesn't like the Suprim X BIOS. My ASUS absolutely loves it. Both gaming and benchmark performance with that BIOS approach obscene levels.
> 
> I was getting a quick black screen DOS (640x480) - looking errors.
> They flashed up very quickly while Windows was loading, basically stating my hardware didn't support certain functions and that my system was being set to CSM mode.
> Sure enough, on next boot, I checked the MB BIOS setting and Windows WHQL settings were set to CSM.
> 
> Weird.
> 
> I just did as you suggested.
> I flashed the old KP520 BIOS to my card and used EVGA's Precision X1 to update my BIOS to EVGA's rebar version.
> Went off without a hitch.
> Looks like everything is working now.
> I wonder if I can now save this BIOS to file and just flash it as is later, or if I'll need to repeat the whole process...?
> .........
> The hardest part of the whole affair was swapping my SSD from MBR to GPT.
> In the end, I found the recovery partition was dorked. Was enabled and looked fine, but had to be disabled and re-enabled to finish the process.
> 
> So everything rebar is working now. And at least the PR numbers look good.
> 
> Thanks for the help!


Yeah, so far of all the BIOSes I've tried, the 1kW is the most favorable. I could bench 2190 MHz, but for gaming I'm limited to 2100 MHz, or 2115 MHz if the card stays cool enough; just those two bins. I saw no real gains gaming above 500W, with the exception of Quake II RTX, which I tried playing with the PL at 100% and 2130 MHz, no dynamic res, native 4K. Wow... 55 fps avg, insane, almost 700W! Wish I hadn't seen that, lol. But yeah, the 1kW BIOS at 500W with a 2099 MHz average clock and +1300 on the VRAM is excellent. Glad you got through the MBR>GPT transition. I do a lot of that at my job, converting MBR images to GPT using Partition Magic 11, and since there is a change in partition size or spacing, it can cause issues with Windows if not done right. Rock 'n' roll!
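As a rough sanity check on what a memory offset like +1300 buys: GDDR6X bandwidth is just data rate times bus width. A minimal sketch, assuming the Afterburner offset adds directly to the reported ~9751 MHz memory clock (which the card doubles for the effective data rate); exact offset behavior varies by tool, so treat the second number as illustrative:

```python
def bandwidth_gb_s(data_rate_mt_s: float, bus_bits: int = 384) -> float:
    """Peak memory bandwidth in GB/s from data rate (MT/s) and bus width (bits)."""
    return data_rate_mt_s * 1e6 * bus_bits / 8 / 1e9

# Stock RTX 3090: ~19,500 MT/s on a 384-bit bus
print(bandwidth_gb_s(19_500))            # 936.0 GB/s, matching the spec sheet

# Hypothetical +1300 offset: (9751 + 1300) * 2 = 22,102 MT/s
print(round(bandwidth_gb_s(22_102), 1))  # ~1060.9 GB/s
```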


----------



## Beagle Box

gfunkernaught said:


> Yeah so far of all the bios I've tried, 1kw is the most favorable. I could bench 2190mhz but for gaming I'm limited to 2100mhz, 2115mhz if the card stays cool enough, but those two bins. I saw no real gains gaming above 500w, with the exception of quake 2 RTX, which I tried playing with the PL at 100% and 2130mhz, and no dynamic res native 4k, wow...55fps avg, insane, almost 700w! Wish I didn't see that lol. But yeah, 1kw bios at 500w with avg 2099mhz avg clock and +1300 on the vram, is excellent. Glad you go through the MBR>GPT transition. I do a lot of that at my job converting MBR images to GPT using partition magic 11, and since there is a change in partition size or spacing it can cause issues with windows if not done right. Rock-'n'-roll!


I have found the KP1k to definitely be best for benching if the air is cool enough - maxed @ 61-64 power. But for gaming, it's less great. Probably depends on the game. I'm also on ultra-wide, so not running full 4K graphics, though everything is usually set to Ultra.

I used trial versions of Minitool Shadow Maker and Partition Wizard to experiment on a SSD until I got it right. Didn't want to reinstall Windows because I've stripped mine of unwanted BS. 

For some reason, the MSI Suprim X BIOS scores above its power draw. I have seen that my 3rd power pin is unresponsive and pegged @ ~3 watts. 
I wonder if it's lying to me and I'm getting full draw on it, bypassing the BIOS's stated 450W limit? 
Can think of no other reason my PR scores are only slightly lower with the MSI than with the KP1k while gaming is better.


----------



## J7SC

GRABibus said:


> we discussed already this subject some days ago on the thread.
> Jomama gave me this tip :
> 
> disable 1 CCD in bios
> Disable SMT in bios
> Set up core « count = 4 » in bios
> Set up a all core OC (so with 4 cores) at the highest stable frequency (from my side it is 4,95GHz)
> (...)





yzonker said:


> He's a little low I think. This is what I scored with lower core and mem clocks. I did have smt disabled with a modest 4600mhz all core on my 5800x.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 262 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Slightly lower but not enough to explain the gpu clock differences.


I first ran the Strix (air-cooled, V2 stock BIOS) on an OC'ed Intel X99 and it did score marginally higher (15032) than in the subsequent Ryzen setup with the same cooling and vBIOS. With water-cooling on the GPU and the 520W rebar BIOS, I got to 15285 in PR with the 5950X all-core 16c/32t so far, but still without the finalized water loop, so there's a bit more room there. IMO, the Ryzen 'penalty' is not noticeable in real life, and is more than made up for in other benches and apps. Also worth noting that fast, tight system memory helps with Port Royal, and with the Ryzen 5000 series, disabling one CCD definitely has a positive impact on latency.



Spoiler


----------



## gfunkernaught

Beagle Box said:


> I have found the KP1k to definitely be best for benching if the air is cool enough - maxed @ 61-64 power. But gaming, it's less great. Probably depends on the game. I'm also ulra-wide, so not running @ full 4k graphics, though all is usually set to Ultra.
> 
> I used trial versions of Minitool Shadow Maker and Partition Wizard to experiment on a SSD until I got it right. Didn't want to reinstall Windows because I've stripped mine of unwanted BS.
> 
> For some reason, the MSI Suprim X BIOS scores above its power draw. I have seen my 3rd power Pin is unresponsive and pegged @ ~ 3 Watts.
> I wonder if its lying to me and I'm getting full draw on it, by-passing the BIOS's stated 450W limit?
> Can think of no other reason my PR scores are only slightly lower with the MSI than with the KP1k and gaming is better.


My guess is the Suprim BIOS runs cooler than the 1kW at the same power usage, based on my experience. You could clamp the PCIe cables and find out for sure.
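If anyone does go the clamp-meter route, totaling the card's draw is simple arithmetic: sum the clamped current on each 12V cable, multiply by 12, and add whatever the slot contributes. A minimal sketch; the 60W slot figure and the per-cable currents below are assumed example readings, not measurements from this card:

```python
def board_power_w(cable_currents_a, slot_w=60.0, rail_v=12.0):
    """Total board power: 12V cable draw (clamp readings) plus PCIe slot draw."""
    return rail_v * sum(cable_currents_a) + slot_w

# Hypothetical readings from three 8-pin cables under load:
print(board_power_w([12.5, 12.0, 11.5]))  # 492.0 W
```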


----------



## J7SC

Beagle Box said:


> I have found the KP1k to definitely be best for benching if the air is cool enough - maxed @ 61-64 power. But gaming, it's less great. Probably depends on the game. I'm also ulra-wide, so not running @ full 4k graphics, though all is usually set to Ultra.
> 
> I used trial versions of Minitool Shadow Maker and Partition Wizard to experiment on a SSD until I got it right. Didn't want to reinstall Windows because I've stripped mine of unwanted BS.
> 
> For some reason, the MSI Suprim X BIOS scores above its power draw. I have seen my 3rd power Pin is unresponsive and pegged @ ~ 3 Watts.
> I wonder if its lying to me and I'm getting full draw on it, by-passing the BIOS's stated 450W limit?
> Can think of no other reason my PR scores are only slightly lower with the MSI than with the KP1k and gaming is better.





gfunkernaught said:


> My guess is the suprim bios run cooler than the 1kw at the same power usage, based on my experience. You could clamp the PCIe cables and find out for sure.


Have a look at the pic below... I was looking for ABE the other day because I wanted to do some 'snooping' on the MSI Suprim X BIOS. Whether Asus, KPE 520 or KPE XOC 1000, the value in the blue oval, labeled 'Unknown PL', is always the same... the outlier is the MSI Suprim X, which runs a higher value. I have never run the MSI Suprim X, but I wonder whether 'Unknown PL' has anything to do with your observations (never mind what it actually stands for...)


----------



## Lobstar

J7SC said:


> I first ran the Strix (air cooled, V2 stock bios) on an oc'ed Intel X99 and it did score marginally higher (15032) than in the subsequent Ryzen setup with the same cooling and v_bios. With w-cooling the GPU and 520 W r_bar bios, I got to 15285 in PR with 5950X all-core 16c/32t so far, but still w/o finalized w-loop, so a bit more room there. IMO, the Ryzen 'penalty' is not noticeable in real life, and more than made up for in other benches and apps. Also worth noting that fast, tight system memory helps with Port Royal, and with the Ryzen 5K series, disabling one CCD will definitely impact positively on latency.


Disabling CCDs and SMT reduced my score below 15k. Running locked mem/fclk @ 1900 with 16-16-16-32-48 timings. I can get slightly higher scores with lower clocks on the GPU, but it seems to be hit and miss. The same settings produce scores +/-100 points apart with 30 minutes between tests. I scored 15 398 in Port Royal.


----------



## Beagle Box

J7SC said:


> ...have a look at the pic below...I was looking for ABE the other day coz I wanted to do some 'snooping' re. the MSI SuprimX bios. Whether Asus, KPE 520 or KPE XOC 1000, the value with the blue oval, labeled 'Unknown PL' is always the same...the outlier is the MSI SuprimX, which runs a higher value...I have never run the MSI SuprimX, but I wonder whether 'Unknown PL' has anything to do with your observations (never mind what it actually stands for...)
> 
> 
> Spoiler: BIOS tables
> 
> 
> 
> 
> View attachment 2489385


That's very interesting. Memory power? Memory Controller Power?
Whatever it is, it works like magic on my GPU. 
I want to try the rebar version, but don't want to use MSI's software to install it. Their utilities are crazy invasive to Windows - even worse than ASUS. 

I'd be interested in your thoughts on its performance, both in games and benching if ever you run it.


----------



## gfunkernaught

J7SC said:


> ...have a look at the pic below...I was looking for ABE the other day coz I wanted to do some 'snooping' re. the MSI SuprimX bios. Whether Asus, KPE 520 or KPE XOC 1000, the value with the blue oval, labeled 'Unknown PL' is always the same...the outlier is the MSI SuprimX, which runs a higher value...I have never run the MSI SuprimX, but I wonder whether 'Unknown PL' has anything to do with your observations (never mind what it actually stands for...)
> 
> View attachment 2489385


That is strange indeed. So something on these boards needs to be limited to 3.5W or so. It must be important for it to be consistent like that; perhaps core-related, since that value is consistent across all the boards.


----------



## GRABibus

Lobstar said:


> Disabling CCDs and SMT reduced my score below 15k.


You disabled one CCD, of course?
Did you reduce the core count to 4?
Strange that disabling CCDs and SMT decreases your score...

Ryzen doesn't like PR too much...


----------



## jomama22

Lobstar said:


> Disabling CCDs and SMT reduced my score below 15k. Running locked mem/fclk @ 1900 16/16/16/32/48. I can get slightly higher with lower clocks on the gpu however it seems to be hit and miss. The same settings result in scores +/-100 points apart from each other with 30 mins between tests. I scored 15 398 in Port Royal


You're running pbo, that's most likely why you aren't seeing much change.

Here's an example of mine with lower average clocks, matches Intel results:









I scored 15 777 in Port Royal

AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com







GRABibus said:


> You disabled one CCD of course ?
> Did you reduce Core Count to 4 ?
> Strange that this disabling CCDs and SMTdecreases your score...
> 
> Ryzen don't like too much PR...


Gotta tune it right. Same as anything else.


----------



## GRABibus

gfunkernaught said:


> I noticed slightly better performance in cyberpunk but it crashed way sooner than the 1kw bios, since the voltage isn't staying where I want it to stay, due to PL bounce. I was only pushing a +120 core offset yielding about 2099mhz avg. I was able to sustain that with the 1kw bios for hours and hours with the pl at 50%.


I've only played Cold War, Overwatch and BFV with the EVGA RTX 3090 vBIOS.

Just played 45 minutes of Cold War today at 21°C ambient temperature:
+175 MHz on core and +1000 MHz on memory.
=> Core frequency stable at 2130 MHz during the 45 minutes
=> Voltage between 1.068V and 1.1V, mostly at 1.075V
=> No crash at all
=> Max core temp = 61°C and max memory temp = 68°C (I am on air, repasted with Noctua NT-H2 and changed the memory thermal pads to Thermalright Odyssey 3 days ago).

Yesterday I did the same with the MSI Suprim X BIOS and the core frequency stabilized at 2100 MHz, with a 64°C stabilized core temp, so 3°C more than with the EVGA 500W BIOS.

Of course this probably depends on games and GPUs, but currently this is definitely the best BIOS I've tested in the 3 games I play (Cold War, BFV, Overwatch):
=> Lower temps, higher frequencies with no crashes, lower voltages, versus the KP 520W and MSI Suprim X. I don't mention the ASUS BIOSes, of course, which are garbage...


----------



## Beagle Box

gfunkernaught said:


> My guess is the suprim bios run cooler than the 1kw at the same power usage, based on my experience. You could clamp the PCIe cables and find out for sure.


It probably does run cooler, but it would have to be _much_ cooler to keep up similar clocks with so much less power available to it. And others would probably be seeing similar results.


----------



## Lobstar

GRABibus said:


> You disabled one CCD of course ?
> Did you reduce Core Count to 4 ?
> Strange that this disabling CCDs and SMTdecreases your score...
> 
> Ryzen don't like too much PR...


I tried alternating the CCD. I kicked down to four cores with CPPC on.



jomama22 said:


> You're running pbo, that's most likely why you aren't seeing much change.


I'm running 4.75ghz 'all-core' with the 4 cores CPPC on.


----------



## jomama22

Lobstar said:


> I tried alternating the CCD. I kicked down to four cores with CPPC on.
> 
> 
> I'm running 4.75ghz 'all-core' with the 4 cores CPPC on.
> View attachment 2489392


The reason I say that is because your 3DMark post above shows 5050 MHz for the CPU, but I'm guessing that was a different run. CPPC shouldn't matter. Maybe disabling DF C-states will help. Not really sure.


----------



## gfunkernaught

GRABibus said:


> I only played Cold war, overwatch and BFV with EVGA RTX 3090 VBIOS
> 
> Just played 45c minutes Cold war today at 21°c ambient temperature :
> +175MHz on Core and +1000Mhz on memory.
> => Core frequency stable at 2130MHz during 45 minutes
> => Voltage between 1.068V and 1.1V, mostly at 1.075V
> => No crash at all
> => Max Core temp=61°C and max memory temp = 68°C (I am on air and repasted with Noctua NT-H2 and change the memory thermal pads with Odyssey from thermalright 3 days ago).
> 
> Yesterday I did the same with MSI SuprimX Bios and Core frequency stabilized at 2100MHz, with 64°C Core stabilized temp, so 3°C more than with EVGA 500W Bios.
> 
> Of course this probably depends on games and GPU's, but , currently, this is definitely the best Bios I tested in the 3 games I play currently (coldf War, BFV, Overwatch)
> => Lower temps, higher frequencies with no crash, lower volatges, Versus KP 520W, MSI SuprimX. I don't mention of course ASUS Bioses which are garbage...


Those games you mentioned don't push a 3090 very hard, except maybe BFV, but I've never played it so I don't know. I could play Cold War at 2130 MHz and 35°C for hours because the power usage is low.


----------



## Lobstar

jomama22 said:


> Reason I say that is because your 3dmark post from above shows 5050 mhz for the cpu but I'm guessing that was a different run. CPPC shouldn't matter. Maybe disabling df states will help. Not really sure.


I have a dark hero. 5050-5125 happens during loading and gathering settings. All-core cutover is 45 amps.


----------



## GRABibus

gfunkernaught said:


> Those games you mentioned don't push a 3090 very hard, except for maybe bfv but I've never played it so don't know. I could play cold war at 2130mhz and 35c for hours because the power usage is low.


We all also adapt the BIOS to the games we play.
BFV pushes the GPU really hard, and as I am on air, if a BIOS helps it boost a little more and run 3 degrees cooler, I'll take it 😊

In BFV, my GPU temp is 10 to 20 degrees higher than in Cold War.
Concerning Overwatch, you should test it…

It pulls more than 500W from the GPU at 2560x1440 by setting the « fps slider » to > 300 fps in the video options.


----------



## jomama22

Lobstar said:


> I have a dark hero. 5050-5125 happens during loading and gathering settings. All-core cutover is 45 amps.


Yeah, don't use the Dynamic OC Switcher when benching. Just use a fixed all-core. I can't imagine PR uses anywhere near 45A on the CPU (it's only got 2 threads of work).


----------



## gfunkernaught

GRABibus said:


> we all adapt also bios to the games we play.
> BFV pushes really hard the GPU, and as I am on air, if a Bios helps to boost a little more and have 3 degrees less, I take it 😊
> 
> in BFV, My GPU temp is 10 to 20 degrees higher than in Cold War.
> Concerning Overwatch, you should test it…
> 
> It pulls more than 500W from the GPU in 2560x1440 by Setting the « fps slider » at > 300fps in video game options


Right, I made that statement assuming vsync enabled, since I always use vsync. I play Titanfall 2 with vsync disabled so it runs at 144 fps max, and yeah, it will use up to 480W.


----------



## GRABibus

gfunkernaught said:


> Right I made statement assuming vsync enabled, since I always use vsync. I play Titanfall 2 vsync disabled so it runs at 144fps max and yeah it will use up to 480w.


I disable Vsync.
But, yes, Vsync enabled reduces power


----------



## Lobstar

jomama22 said:


> Yeah, don't use the dos when benching. Just use a fixed all-core. I can't imagine PR uses anywhere near 45A on the cpu (it's only got 2 threads of work).


It does. Dunno what to tell ya. I tried it again with it disabled and no PBO, and no difference. /shrug


----------



## mardon

Wow, the new Metro is a power virus!! 450W @ 0.90V, 1900 MHz!! Until I get my new PSU I can't go much further. Seems like Cyberpunk: letting it draw as much as it wants doesn't gain many FPS.


----------



## Apecos

MSI 3090 Gaming X Trio 24G, rebar enabled via the new BIOS update through MSI Live Update. I noticed that this BIOS is not listed on the MSI 3090 support page.


----------



## des2k...

The Resident Evil Village demo from Steam with RTX GI looks awesome: 4K max, FidelityFX CAS off + 1.3x render scale.

Great if you want to test your cooling, lol.
~560W in the indoor castle 🙄

This game seriously needs DLSS to cut down on power. AMD FidelityFX CAS is garbage here since the render resolution is kept pretty high.


----------



## gfunkernaught

GRABibus said:


> I disable Vsync.
> But, yes, Vsync enabled reduces power


I guess what I was trying to say was that a game like Overwatch or Titanfall, while using 500W, still doesn't load the 3090's core the way Control, Cyberpunk, Metro, etc. do.


----------



## Falkentyne

GRABibus said:


> we all adapt also bios to the games we play.
> BFV pushes really hard the GPU, and as I am on air, if a Bios helps to boost a little more and have 3 degrees less, I take it 😊
> 
> in BFV, My GPU temp is 10 to 20 degrees higher than in Cold War.
> Concerning Overwatch, you should test it…
> 
> It pulls more than 500W from the GPU in 2560x1440 by Setting the « fps slider » at > 300fps in video game options


Overwatch can go over 600W at 4k.


----------



## GRABibus

Falkentyne said:


> Overwatch can go over 600W at 4k.


I am not surprised.
I have seen 540W at 2560x1440


----------



## GRABibus

gfunkernaught said:


> I guess what I was trying to say was that a game like overwatch or titanfall, while using 500w, still doesn't use all of the 3090's core or more of it like Control, Cyberpunk, Metro, etc do.


If they put Ray Tracing on overwatch 2 and if the graphic engine is similar to Overwatch 1....


----------



## yzonker

Well, totally struck out on getting in on the KP HC. Gone in an instant, just like on Amazon, etc... Miles down the queue on the KP Hybrid too. Probably months out, if ever. 

Have to stick with Zotac unless EVGA actually makes more FTW3 HCs.


----------



## VinnieM

Just replaced my Samsung 950 PRO with a 980 PRO, and for some reason my RTX 3090's PCIe link speed dropped from x16 to x8.
I'm using an X570 AORUS PRO motherboard with a 3800X, and as far as I know the CPU should have enough lanes to support both a PCIe 4.0 GPU at x16 and a PCIe 4.0 M.2 SSD at x4, am I right?
I've tried cleaning the PCIe connector of the graphics card and reseated the card several times, but it's still stuck at x8. I've run the AC Valhalla benchmark and it dropped a couple of frames, so it's not entirely without impact.
Anyone have any thoughts on this?


----------



## des2k...

VinnieM said:


> Just replaced my Samsung 950 PRO with a 980 PRO and for some reason my RTX 3090 PCIE link speed dropped from 16x to 8x.
> I'm using a x570 AORUS PRO motherboard with a 3800x and as far as I believe the CPU should have enough lanes to support both a PCI 4.0 GPU at 16x and a PCI 4.0 M.2 SSD at 4x, am I right?
> I've tried to clean the PCIe connector of the graphics card and reseated the card several times, but it's still stuck at 8x. I've ran the AC Valhalla benchmark and it dropped a couple of frames so it's not entirely impactless.
> Anyone have any thoughts on this?


That's weird, because the lane count doesn't change between PCIe 3 and PCIe 4; it sounds like a defective CPU, because those 24 PCIe 4.0 lanes are on the CPU for Zen 2.

You could reset the BIOS, or run the NVMe in the chipset M.2 slot; it will still be x4 PCIe 4.0, the same speed as the CPU-attached slot.

PCIe 4.0 NVMe drives don't really use 100% of the link speed; even with SATA RAID or other cards going through the chipset, speed should be good.
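For reference, the usual Zen 2 lane budget backs up the point above. A toy tally; the allocation shown is the standard Matisse layout, worth double-checking against the specific board manual:

```python
# Standard AMD Zen 2 (Matisse) CPU PCIe 4.0 lane allocation on X570 boards
zen2_cpu_lanes = {
    "x16_gpu_slot": 16,   # usually bifurcatable to x8/x8
    "cpu_m2_slot": 4,     # the NVMe slot wired directly to the CPU
    "chipset_link": 4,    # CPU <-> X570 downlink
}
total = sum(zen2_cpu_lanes.values())
print(total)  # 24, so an x16 GPU plus an x4 CPU NVMe should coexist fine
```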


----------



## des2k...

Has anyone run Marbles RTX yet on their 3090?


----------



## satinghostrider

Just curious, what BIOS are you guys running on your 3090 Gaming X Trio with rebar? I've tried the Suprim X and it was alright. Anything better while keeping all the ports working and stable for predominantly gaming? 

Is it normal for the 3rd pin connector to draw much less than the 1st and 2nd? I am guessing my PCIe slot power draw is within limits?

P.S - I'm under water.


----------



## Lobstar

VinnieM said:


> Just replaced my Samsung 950 PRO with a 980 PRO and for some reason my RTX 3090 PCIE link speed dropped from 16x to 8x.
> ...
> Anyone have any thoughts on this?


Hey, I had a similar thing happen and was trying all sorts of ****. What worked was simply re-seating my 3090 and ensuring I had it sitting straight up with the thumb screw properly aligning the card to the back plate thing.


----------



## Bal3Wolf

Eventually I might be able to say I have a 3090: I got in on the 3090 Kingpin Hydro Copper queue. Now comes the hurry-up-and-wait to see how long it takes before I can actually buy one, lol.


----------



## Manya3084

OK, I've finally cracked into the HOF for Port Royal.

It took "Jimmy" rigging a custom solution to the EK TUF 3090 backplate, and +0.06V (Elmor EVC2)... oh, and a water chiller... 


















I scored 15 669 in Port Royal

AMD Ryzen Threadripper 3960X, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10

www.3dmark.com


----------



## VinnieM

des2k... said:


> that's weird, because pcie3 vs pcie4 lanes don't change, sounds like a defective cpu because those 24pcie4 lanes are on the cpu for zen2
> 
> you could reset the bios or run the nvme on the chipset nvme, it will still be x4 pcie4 same speed vs the cpu nvme
> 
> pice4 nvme don't really use 100% of the speed, even with sata raid or other cards going on the chipset, speed should be good


Thanks. I moved the NVMe SSD to the chipset slot and the graphics card was running at x16 again, but then another issue occurred. The system would take a minute to POST, and during this time the GPU LED on the motherboard was lit. Once the UEFI showed, it was terribly slow, even a lot slower than running with CSM disabled (a known problem with Gigabyte BIOSes). I could boot into Windows, but GPU performance was lackluster and there were nvlddmkm event messages.
I've reseated the card a couple of times and it's now back to running at x8, but performance is at least good... Maybe I have to properly clean the PCIe connector again.



Lobstar said:


> Hey, I had a similar thing happen and was trying all sorts of ****. What worked was simply re-seating my 3090 and ensuring I had it sitting straight up with the thumb screw properly aligning the card to the back plate thing.


Yes, I've tried several "alignments"; when it's straight up it runs at x16, but then it takes ages to POST and there are other issues. If I let it sag a bit it runs at x8, but then performance is at least good enough. I'll try cleaning the connector and slot again, thanks.


----------



## Beagle Box

*MSI Suprim X BIOS with REBAR support.*

By now, everyone knows how much I like the way the Suprim X BIOS runs on my Strix OC 3090. 
Just putting a note here to let it be known that this 'unverified' BIOS found on this page of TechPowerUp is the rebar version of my favorite '.F5' BIOS. 
It's a direct-flash version, so there's no need to have the old F5 in place.
There is also no need to install the invasive MSI junk software to install it.

Just flashed it and it works.


----------



## J7SC

Beagle Box said:


> *MSI Suprim X BIOS with REBAR support.*
> 
> By now, everyone knows how much I like the way the Suprim X BIOS runs on my Strix OC 3090.
> Just putting a note here to let it be known that this 'unverified' BIOS found on this page of TECHpowerUP is the rebar version of my favorite '.F5' BIOS.
> It's a direct flash version, so there's no need to have the old F5 in place.
> There is also no need to install the invasive MSI junk software to install it.
> 
> Just flashed it in and it works.
> 
> View attachment 2489591


Nice - so just use nvflash and replace whatever BIOS is currently on chip 1 or 2, i.e. the Asus V3?


----------



## satinghostrider

satinghostrider said:


> Just curious, what bios are your guys running on your 3090 Gaming X Trio with rebar? I've tried the Suprim X and it was alright. Anything better while keeping all the ports working and stable for gaming predominantly?
> 
> Is it normal for 3rd pin connector to be much less than the 1st and 2nd? I am guessing my PCIE slot power draw is within limits?
> 
> P.S - I'm under water.
> 
> View attachment 2489541
> View attachment 2489542


Does my power draw look normal on this 3090 Suprim X F5 rebar BIOS? Took this during 8K Optimized Superposition.


----------



## Biscottoman

How can you guys reach 2220 MHz on a 450W BIOS while my Strix averages 2115/2130 MHz (max 2175) on its 480W BIOS, averaging 34/35°C (waterblock + active backplate from MP5WORKS)? Is there anything wrong with my card?


----------



## Beagle Box

J7SC said:


> nice - so just use nvflash and replace whatever bios is currently on chip 1 or 2, ie. the Asus V3 ?


That's what I did. 
My second BIOS slot has the KP1k BIOS [non-rebar] because it's the best for benching.
I'm all smiles now.


----------



## satinghostrider

Biscottoman said:


> How can you guys reach 2220 MHz on a 450w BIOS while my strix average 2115/2130mhz (max 2175) on his 480w averaging 34/35c (waterblock+active backplate mp5works). Is there anything wrong in my card?


Silicon lottery plays a big part, and I think many who bought after November or December last year got not-so-great bins from most of the Asus Strix cards. You should also try the Suprim X BIOS; many have reported better performance and cooler temps running it.


----------



## des2k...

Biscottoman said:


> How can you guys reach 2220 MHz on a 450w BIOS while my strix average 2115/2130mhz (max 2175) on his 480w averaging 34/35c (waterblock+active backplate mp5works). Is there anything wrong in my card?


Your card is fine; 99% top out at 2175 core for 24/7 gaming. 

In fact, some premium cards don't even get to 2175 outside of benchmark runs with low ambient.

For games, if you get +1500 mem, you can drop the core to 2145 and have almost the same fps as some crazy cards at 2175+ on the core.


----------



## Biscottoman

satinghostrider said:


> Silicon Lottery plays a big part and I think many after November or December last year realised not so great bins from most of the Asus Strix cards. You should also try the Suprim X bios many have reported better performance and cooler temps running it.


I would surely try the 520W rebar BIOS to see if I can squeeze out more, because it seems like I'm totally power-limited by my current BIOS.


----------



## Beagle Box

satinghostrider said:


> Does my powerdraw look normal on this 3090 Suprim X F5 Re-Bar Bios? Took this during 8k Optimized Superposition.


Very interesting...

Yours:
[attachment]

Mine:
[attachment]

There may be a clue here as to why my Strix runs so well on this BIOS...


----------



## jomama22

Biscottoman said:


> How can you guys reach 2220 MHz on a 450w BIOS while my strix average 2115/2130mhz (max 2175) on his 480w averaging 34/35c (waterblock+active backplate mp5works). Is there anything wrong in my card?


During the same superposition test? That bench just allows unrealistic clocks.

Ignore whatever the BIOS TDP is; it really doesn't mean much in the long run. You are usually going to hit a different power limit before then anyway.


----------



## Nizzen

Biscottoman said:


> How can you guys reach 2220 MHz on a 450w BIOS while my strix average 2115/2130mhz (max 2175) on his 480w averaging 34/35c (waterblock+active backplate mp5works). Is there anything wrong in my card?


Maybe they are running very cold water and/or a shunt-modded card.


----------



## Beagle Box

Biscottoman said:


> How can you guys reach 2220 MHz on a 450w BIOS while my strix average 2115/2130mhz (max 2175) on his 480w averaging 34/35c (waterblock+active backplate mp5works). Is there anything wrong in my card?


Depends on the benchmark, BIOS, driver, CPU, MB settings, etc... 

If you have a Strix and you're running the ASUS driver, your card is running as designed. My Strix, for reasons yet to be determined, runs much better on almost any other BIOS than an ASUS BIOS. 

My current working theory is that the 3rd power pin misreports its value, which allows the GPU to pull more than the BIOS's stated power max. I think I'm probably pulling ~520W benching using the MSI Suprim X BIOS - which is supposedly limited to 450W.

Not certain, though.  I'm open to other ideas.


----------



## Biscottoman

jomama22 said:


> During the same superposition test? That bench just allows unrealistic clocks.
> 
> Ignore whatever the BIOS TDP is; it really doesn't mean much in the long run. You're usually going to hit a different power limit before then anyway.


I haven't tested the card much since the build was completed a few weeks ago; at the moment I'm working on RAM + CPU tweaking (5950X platform). I've just tried a few runs of Port Royal, scoring 14976 (CPU not overclocked) at +155 core and +1300 mem, but GPU-Z was showing a hard 470-480W cap limiting the core to an average of 2115/2120 MHz. I'll definitely try the 520W BIOS once I finish the CPU overclock, but I thought 480W would have let me run higher clocks at 35/36°C.


----------



## Biscottoman

Beagle Box said:


> Depends on the benchmark, BIOS, driver, CPU, MB settings, etc...
> 
> If you have a Strix and you're running the ASUS driver, your card is running as designed. My Strix, for reasons yet to be determined, runs much better on almost any other BIOS than an ASUS BIOS.
> 
> My current working theory is that the 3rd power pin misreports its value, which allows the GPU to pull more than the BIOS's stated power max. I think I'm probably pulling ~520W benching using the MSI Suprim X BIOS - which is supposedly limited to 450W.
> 
> Not certain, though.  I'm open to other ideas.


In your opinion, would the 450W Suprim X BIOS be a better option than the EVGA 520W?


----------



## GRABibus

Biscottoman said:


> I haven't tested the card much since the build was completed a few weeks ago; at the moment I'm working on RAM + CPU tweaking (5950X platform). I've just tried a few runs of Port Royal, scoring 14976 (CPU not overclocked) at +155 core and +1300 mem, but GPU-Z was showing a hard 470-480W cap limiting the core to an average of 2115/2120 MHz. I'll definitely try the 520W BIOS once I finish the CPU overclock, but I thought 480W would have let me run higher clocks at 35/36°C.


Be aware also that with a 5950x, you also get a penalty in the PR score versus Intel.
You will have to tweak the CPU in the Bios to improve your score.


----------



## Beagle Box

Biscottoman said:


> In your opinion, would the 450W Suprim X BIOS be a better option than the EVGA 520W?


On your card? Who knows? On my GPU, the Suprim X BIOS and the Suprim X w/rebar BIOS score higher and play games more smoothly than the EVGA KP520 and EVGA KP520 w/rebar. 
What I would suggest is to try different BIOSs until you find a couple _that work well with your GPU on your system_. 
There is no panacea - no magic bullet that will work well for everyone.


----------



## GRABibus

Nizzen said:


> Maybe they are running very cold water or/and shuntmodded card


What about your Galax HOF ? 😊


----------



## Biscottoman

GRABibus said:


> Be aware also that with a 5950x, you also get a penalty in the PR score versus Intel.
> You will have to tweak the CPU in the Bios to improve your score.


Yes, I know that too. As a matter of fact my current score isn't bad; I just would have expected higher clocks.


Beagle Box said:


> On your card? Who knows? On my GPU, the Suprim X BIOS and the Suprim X w/rebar BIOS score higher and play games more smoothly than the EVGA KP520 and EVGA KP520 w/rebar.
> What I would suggest is to try different BIOSs until you find a couple _that work well with your GPU on your system_.
> There is no panacea - no magic bullet that will work well for everyone.


Ok, good to know. I'll definitely try both of them.


----------



## Nizzen

GRABibus said:


> What about your Galax HOF ? 😊


Bone stock with stock cooling, for now.


----------



## yzonker

felix121 said:


> Dang this thing wont stop going up.....some1 throw me some advice I go buy buy 4 more graphics cards tom?


It spikes occasionally. It'll settle back down. Although profits have been good lately.


----------



## Panchovix

Not a 3090 owner here (3080), but mining payouts today will be really high - gas fees are pretty high and ETH is at a $3800+ price.

Expect some juicy gains today


----------



## yzonker

felix121 said:


> It spikes that much?....Mine has been holding above 40 most of the day now...


This is longer than normal. Go to history and stats to see the graph.


----------



## jomama22

felix121 said:


> Im actually averaging above 20 bucks per day on this card without overclocking.....planning to buy more gpus...good idea?


I mean, depending on what you pay for the card (let's say $3000 if you're going the StockX or eBay route), at $20/day it would still take you 150 days to break even, not including electricity costs.

But that's just me, you do you lol. If it were my $15000, I'd be putting that in the stock market.
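For what it's worth, the break-even math above is simple to sketch. The 350W draw and $0.12/kWh electricity rate below are illustrative assumptions, not figures from the thread:

```python
def breakeven_days(card_cost, daily_revenue, power_w=350, kwh_price=0.12):
    """Days to recoup a card's cost from net mining profit."""
    daily_kwh = power_w * 24 / 1000              # energy used per day
    net = daily_revenue - daily_kwh * kwh_price  # revenue minus electricity
    return card_cost / net

# $3000 card earning ~$20/day gross, 350W at $0.12/kWh
print(round(breakeven_days(3000, 20)))  # 158 days (150 if electricity were free)
```

Electricity shaves surprisingly little off at these revenue levels, but the whole estimate collapses if the payout rate drops.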


----------



## Biscottoman

felix121 said:


> Dont have to do that...I just look at the payout every 4 hours...Im averaging 4 dollars per 4 hours....but now with this half day spike its more like 7 dollars per 4 hours off of one friggin card....that is not even overclocked....


What mining pool?


----------



## yzonker

I've been mining with my 4 gaming cards since January (3090, 2080S, 2060, 1080ti). That's a combined hashrate of well over 200 mh/s and I'd say I've averaged about $20/day. Higher sometimes, other times lower.


----------



## KedarWolf

I want to upload the V4 Asus Strix OC BIOS to TechPowerUp. I extracted the BIOS update .exe but there are so many BIOS versions, I can't figure out which one is the latest for the Strix OC.

Here is the V4 download, I tried opening the Testini.txt file but can't make any sense of it. 



https://dlcdnets.asus.com/pub/ASUS/Graphic%20Card/NVIDIA/Utilities/RTX3090_V4.exe


----------



## Panchovix

felix121 said:


> So how come I am making 20 off of one card.....what is your payout right now as in today if its ok to ask....i dont have even have it on the high setting but I just put it to high lets see what happens will post the new dollar value per day soon


In my case I use Binance Pool. Using the last 24 hours' average price, with a 3080 + a 3060 Ti (160 MH/s) I'm doing 0.0064 ETH per day, or $24.5.

A 3090 (assuming 120 MH/s), scaling proportionally, should do about $18.3.

Now, using the actual ETH price instead of the average, I'm supposed to do $30-35 per 160 MH/s, so a 3090 at 120 MH/s should make $21-26 by itself.
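The estimate above is just linear scaling by hashrate - a quick sketch using the figures from the post (payouts and prices fluctuate, so treat it as a rough rule of three):

```python
def scale_payout(known_mhs, known_usd_per_day, target_mhs):
    """Estimate a card's daily payout by scaling a known rig's payout."""
    # Ethash payout is roughly proportional to hashrate
    return known_usd_per_day * target_mhs / known_mhs

# 160 MH/s rig earning $24.5/day -> estimate for a 120 MH/s 3090
print(round(scale_payout(160, 24.5, 120), 1))  # ~18.4
```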


----------



## KedarWolf

KedarWolf said:


> I want to upload the V4 Asus Strix OC BIOS to TechPowerUp. I extracted the BIOS update .exe but there are so many BIOS versions, I can't figure out which one is the latest for the Strix OC.
> 
> Here is the V4 download, I tried opening the Testini.txt file but can't make any sense of it.
> 
> 
> 
> https://dlcdnets.asus.com/pub/ASUS/Graphic%20Card/NVIDIA/Utilities/RTX3090_V4.exe


Never mind, I figured out I can upload it via GPU-Z, but it says it's already there. I don't think that's correct though; the V4 was just released and I think it's being matched against the uploaded V3. :/


----------



## J7SC

...just picked up a TT Core P8 (the price was too good to pass up, but that thing is heavy...40 lbs by itself). It comes with a PCIe 3.0 riser cable...anyone here know the performance penalty - if any - for PCIe 3.0 x16 vs 4.0 x16? I know there are some good PCIe 4.0 riser cables out there, but I haven't got one yet


----------



## J7SC

felix121 said:


> Dang how much did you pay for that.....


...you mean the TT Core P8 ? ~ US$ 200


----------



## GRABibus

KedarWolf said:


> Never mind, I figured out I can upload it via GPU-Z, but it says it's already there. I don't think that's correct though; the V4 was just released and I think it's being matched against the uploaded V3. :/


And what are the changes between V3 and V4 ?


----------



## ViRuS2k

Yeah it's been awesome today hahaha. My payout rate - 3090 + 1080 Ti - 163 MH/s:

| Period | ETH     | BTC      | GBP     |
|--------|---------|----------|---------|
| Minute | 0.00000 | 0.000000 | 0.010   |
| Hour   | 0.00022 | 0.000014 | 0.633   |
| Day    | 0.00551 | 0.000359 | 15.206  |
| Week   | 0.03861 | 0.002516 | 106.445 |
| Month  | 0.16547 | 0.010784 | 456.193 |

Prices in GBP.

£15 a day, not too shabby.
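As a sanity check, the GBP column in payout tables like this is just the ETH column times the prevailing ETH price. A quick sketch using the posted Day and Week rows (prices fluctuate, so the implied price is approximate):

```python
# Implied ETH price (GBP) from a payout table row: gbp / eth
rows = {"Day": (0.00551, 15.206), "Week": (0.03861, 106.445)}
for period, (eth, gbp) in rows.items():
    print(period, round(gbp / eth))  # both rows imply roughly £2760 per ETH
```

If the rows imply wildly different prices, the table was sampled while the rate was moving.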


----------



## GQNerd

3090 @ 124mh/s = $22-24 daily avg. on Ethermine.. today it’s up around $30!!

EDIT: Turned on my Razer w/2080s - getting another 38mh/s from that


----------



## yzonker

felix121 said:


> Thanks for that ......thats the confirmation I wanted...so its not a fluke.....anyway what is your hashrate on that 3060ti without the overclock im being offered one for 900 bucks...


But it fluctuates a lot. Looking at what you are earning at any point in time doesn't tell you much. Certainly isn't a good way to decide to buy very expensive cards for dedicated mining.


----------



## krizby

KedarWolf said:


> Never mind, I figured out I can upload it via GPU-Z, but it says it's already there. I don't think that's correct though; the V4 was just released and I think it's being matched against the uploaded V3. :/


There are 3 vBios entries for Asus Strix 3090 OC which are 94.02.42.00.AS03-AS26-AS27, new version is 94.02.42.00.AS46 (Re-Bar vBIOS)


----------



## gfunkernaught

des2k... said:


> your card is fine, 99% top out at 2175core for 24/7 gaming
> 
> In fact some premium cards don't even get to 2175 outside of a benchmarking run with low ambient
> 
> For games, if you get +1500mem, you can drop the core to 2145 and have almost the same fps as some crazy cards at 2175+ on the core


My card must be in that 1%, because it can barely do 2100 MHz for 24/7 gaming with all games stable.


----------



## ViRuS2k

| Period | ETH     | BTC      | GBP     |
|--------|---------|----------|---------|
| Minute | 0.00000 | 0.000000 | 0.012   |
| Hour   | 0.00027 | 0.000018 | 0.760   |
| Day    | 0.00649 | 0.000434 | 18.244  |
| Week   | 0.04549 | 0.003038 | 127.712 |
| Month  | 0.19496 | 0.013021 | 547.339 |

Pretty good for me at the minute.
Had my 0.1% payout yesterday at 11am: £396 in just under 3 weeks. Not too shabby, and I've already made nearly £20 more since then.

My mining method: when ETH is up, mine; when it's average, play games lol.
I also had to bake my 1080 Ti in the oven to fix it - it had intermittent power issues, but it's fixed now and has worked without issue for 2 months. It's been mining that whole time, works great, and has good memory clocks for mining (800+).


----------



## Lobstar

Lower clocks, higher scores. I scored 15 595 in Port Royal


----------



## duppex

Considering picking up an RTX 3090, new but from a reseller. Only going to game at 4K with modded graphics.

My concern is long-term VRAM damage due to the reported high temperatures.

Can anyone who has had an RTX 3090 since launch advise me on the current state of their GPU after intensive mining or gaming over the last 7 to 8 months?

Cheers


----------



## inedenimadam

Lobstar said:


> Lower clocks, higher scores. I scored 15 595 in Port Royal


Nice score!
I got my HC block in from EVGA for the KP. I hope to get past the 15,300 I have been stuck at with the AIO.



duppex said:


> Considering picking up an RTX 3090, new but from a reseller. Only going to game at 4K with modded graphics.
> 
> My concern is long-term VRAM damage due to the reported high temperatures.
> 
> Can anyone who has had an RTX 3090 since launch advise me on the current state of their GPU after intensive mining or gaming over the last 7 to 8 months?
> 
> Cheers


I have a Zotac Trinity that I have mined on since launch (a couple weeks after). VRAM was 95°C+ for months with no noticeable degradation. I do, however, have both front and back waterblocks installed now, so the VRAM isn't getting hammered quite as hard. Yes, I believe that prolonged exposure to 100°C will eventually be the death of the 3090s that are mining...however, I also believe they will outlive their useful life as high-end gaming cards well before they kick the bucket.


----------



## duppex

inedenimadam said:


> Nice score!
> I got my HC block in from EVGA for the KP. I hope to get past the 15,300 I have been stuck at with the AIO.
> 
> 
> I have a Zotac Trinity that I have mined on since launch (a couple weeks after). VRAM was 95°C+ for months with no noticeable degradation. I do, however, have both front and back waterblocks installed now, so the VRAM isn't getting hammered quite as hard. Yes, I believe that prolonged exposure to 100°C will eventually be the death of the 3090s that are mining...however, I also believe they will outlive their useful life as high-end gaming cards well before they kick the bucket.


Thanks for the useful insight. You're right: if the GPU works as intended for at least 2-3 years, that should be enough, as I'll most likely be looking at the next thing by then.


----------



## Ironcobra

After waiting 3 months for my Aquacomputer 3090 Strix block and active backplate to finish off the parts for my first watercooled build, the active backplate arrived missing the heatpipe in the packaging.


----------



## yzonker

gfunkernaught said:


> My card must be in that 1%, because it can barely do 2100 MHz for 24/7 gaming with all games stable.


My Zotac is the same. Seems to hit a wall to some extent around 2070-2100. My temps aren't nearly as good as some of you guys but I kinda doubt dropping 5-10C would make it stable in the 2145-2205 range.


----------



## J7SC

inedenimadam said:


> (...)
> I have a Zotac Trinity that I have mined on since launch (a couple weeks after). VRAM was 95°C+ for months with no noticeable degradation. I do, however, have both front and back waterblocks installed now, so the VRAM isn't getting hammered quite as hard. Yes, I believe that prolonged exposure to 100°C will eventually be the death of the 3090s that are mining...however, I also believe they will outlive their useful life as high-end gaming cards well before they kick the bucket.


...probably true, though I'd rather not find out with the current supply issues. Also, the hotspot delta to GPU (and VRAM) temp plays a big role with 8nm / 28 billion transistors... 

...


----------



## yzonker

J7SC said:


> ...probably true, though I rather not find out w/ the current supply issues. Also, the hotspot delta to GPU (and VRAM) temp plays a big role with 8nm / 28 billion transistors...
> 
> ...


Understood. That's the main reason I've never pulled my card back out of the block to try to improve temps. I've been mining with it for 4 months now with no apparent degradation either, although I've always kept mem temp below 90°C, and it's been around 80°C since I put it in a block in late January.


----------



## yzonker

felix121 said:


> Lol....hmmm u sure u know much about gpus....vram always gets hot....on almost anything you run on it.......and yes it can touch 100 easy ......if they thought that temp would destroy your gpu then it would throttle at 80c or 70c....unless your a computer hardware engineer dont make assumptions...fyi games and rendering are alot more damaging to a gpu then a constant sustainable temperature...the flux in temps from cold to very hot has a more damaging affect then a constant warm temp.


I'm pretty sure the 110C limit is a safety to keep the card from killing itself quickly, not to stop long term degradation. It's a safety, not a design parameter.


----------



## ArcticZero

No need to make multiple posts in a row, particularly inflammatory ones.

In any case on topic, I know there's a general dislike for EK items for the most part, but I do have an EKWB RE block + active backplate coming. First custom loop. Looking forward to dropping my memory junction temps more. Currently at 94c max while mining (35c+ ambients)


----------



## J7SC

ArcticZero said:


> No need to make multiple posts in a row, particularly inflammatory ones.
> 
> In any case on topic, I know there's a general dislike for EK items for the most part, but I do have an EKWB RE block + active backplate coming. First custom loop. Looking forward to dropping my memory junction temps more. Currently at 94c max while mining (35c+ ambients)


...well put. Also, extra heat is known to accelerate degradation, and folks like DerBauer and Buildzoid have some pretty good, if complex, exposés on that. As for mining, to each his own, but manufacturers (among them MSI and Gigabyte) are starting to cut warranties for some GPUs popular with miners. The fear is that this will spill over into general (i.e. gamer, workstation) GPU warranties, and also raise prices for the next round of GPUs, as warranty costs have already increased significantly.


----------



## ArcticZero

Wow. Well I did get my card for gaming, and mine on the side while doing productivity.

I doubt that warranty policy will become the norm for general purpose GPUs. They'll probably just try to better standardize and reinforce their hashrate limiters on the BIOS level, and require you to run a minimum driver revision to use the GPUs.


----------



## J7SC

ArcticZero said:


> Wow. Well I did get my card for gaming, and mine on the side while doing productivity.
> 
> I doubt that warranty policy will become the norm for general purpose GPUs. They'll probably just try to better standardize and reinforce their hashrate limiters on the BIOS level, and require you to run a minimum driver revision to use the GPUs.


...it's a 'hot' topic, if you'll pardon the pun...according to GN, some manufacturers even looked at adding a chip to the GPU that reports usage and maybe other data like temps...for now, though, that seems to have been taken off the table...for now...


----------



## yzonker

ArcticZero said:


> No need to make multiple posts in a row, particularly inflammatory ones.
> 
> In any case on topic, I know there's a general dislike for EK items for the most part, but I do have an EKWB RE block + active backplate coming. First custom loop. Looking forward to dropping my memory junction temps more. Currently at 94c max while mining (35c+ ambients)


Do report back with your results. I keep being tempted to buy it (assuming I can get one). The only thing that stopped me was what I mentioned above (risking damage to the card).


----------



## ArcticZero

yzonker said:


> Do report back with your results. I keep being tempted to buy it (assuming I can get one). The only thing that stopped me was what I mentioned above (risking damage to the card).


Will do. And well, I have done nearly everything under the sun to my GPU already (shunt modded, replaced thermal pads/paste 2349524 times, deshrouded). Drew the line at copper shims since I'd already had prior issues with "unwanted" metal bits on my PCB on this card a few weeks back. It is still alive after all of that, so I am happy and fortunate.

Water cooling is the final frontier for me.


----------



## gfunkernaught

yzonker said:


> My Zotac is the same. Seems to hit a wall to some extent around 2070-2100. My temps aren't nearly as good as some of you guys but I kinda doubt dropping 5-10C would make it stable in the 2145-2205 range.


Like my 2080 Ti, going higher than 2100 MHz would require staying below 38°C at least, maybe even 35°C as the upper limit.


----------



## jura11

I know this is the RTX 3090 thread, but these mining recommendations and experiences should go in a separate thread where you can exchange your views on them.

These MH/s figures - please spare me and the others that rubbish.

Whether mining degrades components, nobody knows. It would be easy to compare: mine for a year, then try the same OC you previously ran in games or benchmarks - a before and after.

As for gaming or rendering degrading GPUs faster, I just don't think so. I use my PC for rendering and previously ran a 3-GPU setup used only for rendering and some gaming. Temperatures in rendering are under 36°C on both GPUs; VRAM won't break the 60s on the top card and sits in the low-to-mid 70s on the bottom one. In gaming, temperatures (VRAM included) are exactly the same as in rendering. In mining, I'd assume VRAM temperatures would be a good 10-15°C higher with my current setup.

Hope this helps 

Thanks, Jura


----------



## motivman

I have been gone from this thread for a while now... anyone care to let me know if there is a rebar 1000W bios yet?


----------



## Beagle Box

motivman said:


> I have been gone from this thread for a while now... anyone care to let me know if there is a rebar 1000W bios yet?


It exists. You can't have it unless you have a KingPin GPU.


----------



## Salvi

-


----------



## Salvi

How do I get the ReBAR 1000W BIOS?
I have an EVGA RTX 3090 Kingpin - how do I get that BIOS?


----------



## Beagle Box

Salvi said:


> How do I get the ReBAR 1000W BIOS?
> I have an EVGA RTX 3090 Kingpin - how do I get that BIOS?


Maybe PM @sultanofswing and ask how he got his. 
He's running it.
Don't get your hopes up, though. Apparently, not everyone is eligible.


----------



## Thanh Nguyen

Does anyone know why my card shows a GPU benchmark error when I run NiceHash? Is it because of the 1000W BIOS?


----------



## Biscottoman

Guys, I've just flashed the 520W ReBAR BIOS on my RTX 3090 Strix (which was capped at 2115-2130 MHz on its stock BIOS). On the second BIOS switch I've improved my PR score by about 100 points - I scored 15 108 in Port Royal, averaging 2145-2160 MHz. Is it normal that MSI Afterburner shows a capped power limit of 350W during the run? Am I still power limited?


----------



## yzonker

Thanh Nguyen said:


> Does anyone know why my card shows a GPU benchmark error when I run NiceHash? Is it because of the 1000W BIOS?


If you haven't already, you need to turn off P2 state so it runs in P0 while mining. XOC jumps to something like P8 IIRC.


----------



## yzonker

Beagle Box said:


> Maybe PM @sultanofswing and ask how he got his.
> He's running it.
> Don't get your hopes up, though. Apparently, not everyone is eligible.


Yeah, they contact Vince at EVGA. Not sure who he is, but that's the contact.


----------



## Thanh Nguyen

yzonker said:


> If you haven't already, you need to turn off P2 state so it runs in P0 while mining. XOC jumps to something like P8 IIRC.


How to do it ?


----------



## J7SC

Biscottoman said:


> Guys, I've just flashed the 520W ReBAR BIOS on my RTX 3090 Strix (which was capped at 2115-2130 MHz on its stock BIOS). On the second BIOS switch I've improved my PR score by about 100 points - I scored 15 108 in Port Royal, averaging 2145-2160 MHz. Is it normal that MSI Afterburner shows a capped power limit of 350W during the run? Am I still power limited?


...depends on your actual card...on my Strix, the KPE 520W r_BAR bios messes up the power reading in GPUz and HWInfo as it cannot correctly show the 3rd PCIe 8 pin


----------



## kebong

Right, so I flashed the 520W rebar bios on my Suprim X, but I think I'm still getting capped by a 480W-ish wall. Every time I get to 490W something, it starts throttling. Any ideas?


----------



## Biscottoman

J7SC said:


> ...depends on your actual card...on my Strix, the KPE 520W r_BAR bios messes up the power reading in GPUz and HWInfo as it cannot correctly show the 3rd PCIe 8 pin


Yes, it's the same on my card - it shows no wattage from the third pin.


----------



## des2k...

Thanh Nguyen said:


> How to do it ?


nvinspector utility


----------



## sultanofswing

yzonker said:


> Yea they contact Vince at EVGA. Not sure who he is but that's the contact.


Vince is KINGPIN.


----------



## gfunkernaught

kebong said:


> Right, so I flashed the 520W rebar bios on my Suprim X, but I think I'm still getting capped by a 480W-ish wall. Every time I get to 490W something, it starts throttling. Any ideas?


The EVGA BIOSes, except for the 1kW one, cap my Trio at 480W as well. Not sure why.


----------



## kebong

gfunkernaught said:


> The EVGA BIOSes, except for the 1kW one, cap my Trio at 480W as well. Not sure why.


Ugh, I guess I'll try that then. The one in the OP, right? Or is there a newer/better version?


----------



## jura11

Thanh Nguyen said:


> How to do it ?


As above @des2k... said you need to use NvInspector









NVIDIA Inspector (1.9.8.7 Beta) Download - www.techpowerup.com
NVIDIA Inspector is a utility to read all GPU-relevant data from the NVIDIA driver. It also supports overclocking and changing driver settings.





Hope this helps 

Thanks, Jura


----------



## kebong

gfunkernaught said:


> The EVGA BIOSes, except for the 1kW one, cap my Trio at 480W as well. Not sure why.


this worked btw, thanks! but I can't seem to breach 15k on Port Royal... stuck at 14,844. maybe this is silicon lottery limiting my card already *🤷*


----------



## gfunkernaught

kebong said:


> this worked btw, thanks! but I can't seem to breach 15k on Port Royal... stuck at 14,844. maybe this is silicon lottery limiting my card already *🤷*


Set the following for the 3DMark profile in the NV control panel:
Texture filtering to High Performance
Power management mode to Prefer Maximum Performance
What offsets are you using with the 1kW BIOS?


----------



## Beagle Box

kebong said:


> this worked btw, thanks! but I can't seem to breach 15k on Port Royal... stuck at 14,844. maybe this is silicon lottery limiting my card already *🤷*


If just beating 15k is the goal:
Run only 3 or 4 cores
Turn hyperthreading off
Overclock the CPU to 5.0GHz
Run GPU memory over +500
Close any non-needed services and apps

Scoring over 15000 with a daily-driver setup takes actual system optimization and either a good GPU or water cooling.


----------



## Biscottoman

It seems like even using the 520W BIOS on my 3090 Strix I keep getting stuck at low clocks - I scored 11 609 in Time Spy Extreme. I gained a few points in both Time Spy and Port Royal, but it's really strange that at these temps I'm averaging 2075 MHz in Time Spy, stuck at low voltages. Is that caused by a bad bin, or is there another possible reason/fix for this?


----------



## Beagle Box

Biscottoman said:


> It seems like even using the 520W BIOS on my 3090 Strix I keep getting stuck at low clocks - I scored 11 609 in Time Spy Extreme. I gained a few points in both Time Spy and Port Royal, but it's really strange that at these temps I'm averaging 2075 MHz in Time Spy, stuck at low voltages. Is that caused by a bad bin, or is there another possible reason/fix for this?


You're much closer to having a Top 100 GPU score than you are a Top 100 CPU score. Increasing your CPU score will also raise your GPU score in TSE. So if you want higher TSE scores, work on getting a higher CPU score. 
What's your Port Royal score?
Is your GPU air cooled?


----------



## Biscottoman

Beagle Box said:


> You're much closer to having a Top 100 GPU score than you are a Top 100 CPU score. Increasing your CPU score will also raise your GPU score in TSE. So if you want higher TSE scores, work on getting a higher CPU score.
> What's your Port Royal score?
> Is your GPU air cooled?


Current PR score: I scored 15 121 in Port Royal. My GPU is watercooled, with an mp5works active backplate, and it seems it can hold +1500 on mem (at least in benchmarks). My 5950X is already quite tweaked with the PBO2 curve optimizer; I don't think it could handle a better curve (I don't know if the Dark Hero's dynamic OC would be a better overclocking choice). I still have to try the MSI Suprim X 450W ReBAR BIOS you recommended.


----------



## Beagle Box

Biscottoman said:


> Current PR score: I scored 15 121 in Port Royal. My GPU is watercooled, with an mp5works active backplate, and it seems it can hold +1500 on mem (at least in benchmarks). My 5950X is already quite tweaked with the PBO2 curve optimizer; I don't think it could handle a better curve (I don't know if the Dark Hero's dynamic OC would be a better overclocking choice). I still have to try the MSI Suprim X 450W ReBAR BIOS you recommended.


Your GPU can already reach 15k in PR. 
With my PC set up for gaming, I score less than 100 points higher than you in PR. 
My graphics score in TSE at my usual game settings is actually lower than yours. 
If you can game at those settings, you're in *great* shape. 
Achieving ultra high scores in benchmarks is a completely different animal. 
It's about system optimization for each benchmark. 
If I can score higher benchmark scores with my PC than you can with yours, it means absolutely nothing in the real world.


----------



## Biscottoman

Beagle Box said:


> Your GPU can already reach 15k in PR.
> With my PC set up for gaming, I score less than 100 points higher than you in PR.
> My graphics score in TSE at my usual game settings is actually lower than yours.
> If you can game at those settings, you're in *great* shape.
> Achieving ultra high scores in benchmarks is a completely different animal.
> It's about system optimization for each benchmark.
> If I can score higher benchmark scores with my PC than you can with yours, it means absolutely nothing in the real world.


Yes, I know that chasing high benchmark scores is just a mental fix for us hardware enthusiasts and has very little utility in daily use. I was just worried whether the card was delivering the performance it should with that overclock setup. I'll verify that the card is game-stable at these clocks and keep them for my daily gaming use. Thanks for the advice and the help.


----------



## GQNerd

Antsy waiting for my KP HC block to arrive so I ran some benches to get a baseline... in the process, finally cracked 16k! 

PR - 16075

Can't wait to compare/try to smash that score with the block


----------



## Beagle Box

Miguelios said:


> Antsy waiting for my KP HC block to arrive so I ran some benches to get a baseline... in the process, finally cracked 16k!
> 
> PR - 16075
> 
> Can't wait to compare/try to smash that score with the block


Nice run.
What model 3090 is that?
Never mind. I see it's a KP.


----------



## Thanh Nguyen

Miguelios said:


> Antsy waiting for my KP HC block to arrive so I ran some benches to get a baseline... in the process, finally cracked 16k!
> 
> PR - 16075
> 
> Can't wait to compare/try to smash that score with the block


Just dip that rad into a bucket of ice and you'll get better temps than a waterblock.


----------



## PLATOON TEKK

Just saw this while looking for an update on my active backplate for the Strix. I'm currently using a RAM cooler on the backplate and avoided the mp5 over flow restrictions. Might have to add it to my order. Should work on anything, really.

Cop here








Edit: confirmed block for HOF!! From their support.


----------



## Nizzen

PLATOON TEKK said:


> View attachment 2489982
> 
> 
> Just saw this looking for an update on my active backplate for STRIX. Am currently using ram cooler for backplate and avoided mp5 for flow restrictions. Might have to add to order. Should work on anything really.
> 
> Cop here
> 
> View attachment 2489984


Very good flow in mp5works serial


----------



## PLATOON TEKK

Nizzen said:


> Very good flow in mp5works serial


That’s badass. Good to see it doesn’t necessarily restrict the flow. Will let you know how these Bitspowers do when they arrive.


----------



## arvinz

PLATOON TEKK said:


> That’s badass. Good to see it doesn’t necessarily restrict the flow. Will let you know how these Bitspowers do when they arrive.


Bitspower VRAM waterblock looks great! I'm assuming it only works with their backplate, as it gets screwed in, right? I'm one of many Strix owners waiting for the Optimus block, but I don't think this would work with the Optimus backplate. I did order the mp5works serial and was going to run that with the Optimus waterblock...hmmm...


----------



## GRABibus

Salvi said:


> How do I use the rebar 1000W bios?
> I have an Evga RTX 3090 Kingpin, how do I get that bios?


If you can't get the BIOS with rebar, resell your Kingpin.
It would be unacceptable not to get that BIOS…


----------



## Lobstar

I know intel folks get better scores in these benches but I finally started getting decent results.
Port Royal: I scored 15 595 in Port Royal
Time Spy (Gfx: 23191): I scored 21 861 in Time Spy


----------



## GRABibus

J7SC said:


> ...depends on your actual card...on my Strix, the KPE 520W r_BAR bios messes up the power reading in GPUz and HWInfo as it cannot correctly show the 3rd PCIe 8 pin


It occurs on all Strix cards with the 520W KP BIOS.


----------



## Biscottoman

Lobstar said:


> I know intel folks get better scores in these benches but I finally started getting decent results.
> Port Royal: I scored 15 595 in Port Royal
> Time Spy (Gfx: 23191): I scored 21 861 in Time Spy


What overclock are you running on your 5950x?


----------



## Lobstar

Biscottoman said:


> What overclock are you running on your 5950x?


4.7 all-core with FMAX PBO enabled. Cutover at 50A.
3090:


----------



## Beagle Box

Lobstar said:


> I know intel folks get better scores in these benches but I finally started getting decent results.
> Port Royal: I scored 15 595 in Port Royal
> Time Spy (Gfx: 23191): I scored 21 861 in Time Spy


Very nice.
What changes have you made to increase your score?


----------



## Lobstar

Beagle Box said:


> Very nice.
> What changes have you made to increase your score?


Lowering GPU core clocks. Dropped from 2250 to 2190.


----------



## Biscottoman

Lobstar said:


> 4.7 all-core with FMAX PBO enabled. Cutover at 50A.
> 3090:
> View attachment 2490020


Thanks. In your opinion, is it worth running Dynamic OC Switcher over the PBO2 curve optimizer? What VID are you running to get both CCDs at 4.7?


----------



## Lobstar

Biscottoman said:


> Thanks. In your opinion, is it worth running Dynamic OC Switcher over the PBO2 curve optimizer? What VID are you running to get both CCDs at 4.7?


I haven't spent much time with curve optimizer so I can't speak to that. That said, with a 1.30 VID, PBO and FMAX enabled, and a moderate cutover, it's taken next to zero effort, and I haven't had to hunt down stability issues across that many cores. If I were seriously concerned with keeping my proc as cool as possible for as long as possible I might try the curve optimizer, but I have enough cooling headroom that I honestly can't be bothered.


----------



## J7SC

...I also prefer to run with dynamic OC and FMax for benching. Since cooling hasn't been an issue, I've had all-core at 4750 on my 5950X, with a few more steps to go within reasonable v-core and temps next time around. Depending on the bench / game / app, a 3090 does very well in this scenario with dynamic OC and FMax.


----------



## Lobstar

J7SC said:


> ...I also prefer to run with dynamic OC and FMax for benching. Since cooling hasn't been an issue, I've had all-core at 4750 on my 5950X, with a few more steps to go within reasonable v-core and temps next time around. Depending on the bench / game / app, a 3090 does very well in this scenario with dynamic OC and FMax.


I can't quite get 4750. I'm at that point that more voltage puts the proc above 80C. I'm sure if I spent the time with core optimizer I could get a little more out of it but I just can't be bothered lol.


----------



## J7SC

Lobstar said:


> I can't quite get 4750. I'm at that point that more voltage puts the proc above 80C. I'm sure if I spent the time with core optimizer I could get a little more out of it but I just can't be bothered lol.


...yeah, I just want to 'find' the max all-core at below 1.35v and below 80 C; wouldn't run that 'daily'. 4650 is a nice daily for productivity as well, such as compiling and rendering. Not sure how much the 3090 would pick up in 3DM TimeSpy/Extreme when going beyond 4750 all-core on the proc, but I'll find out when this build actually gets finished...


----------



## jomama22

Lobstar said:


> I haven't spent much time with curve optimizer so I can't speak to that. That said, with a 1.30 VID, PBO and FMAX enabled, and a moderate cutover, it's taken next to zero effort, and I haven't had to hunt down stability issues across that many cores. If I were seriously concerned with keeping my proc as cool as possible for as long as possible I might try the curve optimizer, but I have enough cooling headroom that I honestly can't be bothered.


Just FYI, CO isn't there to make your CPU run cooler; PBO will still use the temperature headroom available. It undervolts to increase the available power budget and thermal headroom.

You are leaving performance on the table by not using it.
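Back-of-the-envelope sketch of the mechanism (toy model only; the constant is made up and the real boost algorithm is far messier): at a fixed package power budget, dynamic power scales roughly with V²·f, so shaving voltage frees up budget for clocks.

```python
# Toy model: at a fixed package power budget, dynamic power ~ k * V^2 * f,
# so a CO-style undervolt lets the boost algorithm sustain a higher clock.
# The constant k is invented; this is illustration, not a simulator.
def sustained_clock_mhz(budget_w, volts, k=1.73e-8):
    # Solve budget = k * V^2 * f (f in Hz) for f, return MHz
    return budget_w / (k * volts**2) / 1e6

stock = sustained_clock_mhz(142, 1.35)        # stock-ish voltage
undervolted = sustained_clock_mhz(142, 1.30)  # modest undervolt
print(round(stock), round(undervolted))       # undervolted clock comes out higher
```

Same power in, more clock out; that's the whole trick, and it's why PBO still eats all the thermal headroom either way.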


----------



## GRABibus

There has been a little update of 3DMark.

I lost 500pts in PR....

I tested TimeSpy and my scores are in line.

Nobody else ?


----------



## Lobstar

jomama22 said:


> Just fyi, CO isn't so that your cpu runs cooler, pbo will still use the temperature headroom available...it undervolts to increase available power budget and temperature headroom.
> 
> You are leaving performance on the table by not using it.


Like what? Less than 1%, for how many horrible hours of running some random core-testing software and randomly throwing numbers at 16 freaking cores? Undervolting seems WAY overrated for the level of instability it brings to a system.


----------



## jomama22

Lobstar said:


> Like what? less than 1% for how many horrible hours of running some random core testing software and randomly throwing numbers at 16 freaking cores? Undervolting seems WAY overrated for the level of instability it brings to a system.


Lmao, look, you didn't even know what CO did in the first place, so I'm not going to indulge you. Just telling you what's available if you want more performance.


----------



## Lobstar

jomama22 said:


> Lmao, look, you didn't even know what CO did in the first place, so I'm not going to indulge you. Just telling you what's available if you want more performance.


Yeah, I've been following that thread on this board. I know what it does. It's just more snake oil, like that conman Yuri and CTR. Same BS that recommends I run my CPU under 4.5GHz because of power savings or some such lunacy. Why should I run my chip at under 1.2v with some mystical profile switching when AMD already figured it out with PBO? I'm certainly not smarter than an AMD engineer in regards to how these chips work. Enjoy your stretched clocks.


----------



## jomama22

Lobstar said:


> Yeah, I've been following that thread on this board. I know what it does. Its just more snake oil like that conman Yuri and CTR. Same BS that recommends that I run my CPU at under 4.5ghz because power savings or some such lunacy. Why should I run my chip at under 1.2v and some mystical profile switching when AMD already figured it out with PBO? I'm certainly not smarter than an AMD engineer in regards to how these chips work. Enjoy your stretched clocks.
> View attachment 2490072


You refused to learn how to just manually set CTR with whatever clocks and volts you want; that's no one's fault but your own. You are hilariously stubborn.

I mean, I know I get better scores than you in any bench you want to throw down, so I'm really not worried about my "clock stretching" lol.


----------



## Lobstar

jomama22 said:


> You refused to learn how to just manually set CTR with whatever clocks and volts you want; that's no one's fault but your own. You are hilariously stubborn.
> 
> I mean, I know I get better scores than you in any bench you want to throw down, so I'm really not worried about my "clock stretching" lol.


I spent several weeks in the CTR discord help channel with Anne trying to figure out why we could not get a profile working better than static overclocks. Version after version failing to provide any benefit and random profile settings being applied constantly only to result in less performance than with just a static overclock. Like I said, I'm sure you can pull 1% or less in cinebench but at what time investment? How many nights have you run your system with clock cycler or whatnot waiting for a result? I got **** to do man lol. Also, what a child. "Throw down" like you're whipping your dick out on the table. OK, whatever man. My CB23 score is 30789. I spent 20 minutes in the bios to get that after I installed the proc.


----------



## J7SC

...'clock stretching' is old hat...Salvador Dali painted it in 1931...


----------



## jomama22

Lobstar said:


> I spent several weeks in the CTR discord help channel with Anne trying to figure out why we could not get a profile working better than static overclocks. Version after version failing to provide any benefit and random profile settings being applied constantly only to result in less performance than with just a static overclock. Like I said, I'm sure you can pull 1% or less in cinebench but at what time investment? How many nights have you run your system with clock cycler or whatnot waiting for a result? I got **** to do man lol. Also, what a child. "Throw down" like you're whipping your dick out on the table. OK, whatever man. My CB23 score is 30789. I spent 20 minutes in the bios to get that after I installed the proc.


I told you exactly how to use CTR manually after you complained about it the first time. It's just as easy as setting up Dynamic OC Switcher in the BIOS (I have that feature too).

I'm confident I'd beat you by more than 1% across the entire spectrum, but that's not even my point.

You're knocking it without even trying it, which is just annoying, you know?


----------



## Biscottoman

jomama22 said:


> I told you exactly how to use ctr manually after you complained about them the first time. It's just as easy to do as setting up dynamic oc switcher in the bios ( I have that feature too).
> 
> I'm confident I'd beat you by more than 1% across the entire spectrum, but that's not even my point.
> 
> You're knocking it without even trying it, which is just annoying, you know?


How would you recommend setting CO with dynamic OC? I've set the curve optimizer limit and, without using dynamic OC, I've found a big negative performance gap in both single and multi thread.


----------



## GQNerd

OK boys and girls, simmer down.. Pretty sure there’s a thread for AMD somewhere on this site.. take the argument over there


----------



## gfunkernaught

J7SC said:


> ...'clock stretching' is an old hat...Salvador Dali painted this in 1931...


Your post did not go unnoticed and is appreciated!


----------



## yzonker

Nerd fight! 

Anyway, I've got some odd problem with RDR2. It's the only game I've found that I can't run at higher boost levels. And to the extreme: I literally can't run past 1950MHz or so without it crashing. More interesting is that it will consistently crash in the same place. If I limit to 1950 or less, it never crashes. I even tried running no offsets at all, just enough power to get it to max voltage, which is right around 2000MHz. Still crashed in the exact same spot.

Can't get a crash in anything else at such a low boost clock. CP2077, FS2020, Quake 2 RTX is what I've played recently. 

Seems to have always been a problem with RDR2 I think as I even had to dial back my core offset for it when running a 390w bios.

I've read reports of other people with this sort of problem, but doesn't seem to be all cards/machines.

Anything to try? Seems like it's still an instability, but doesn't make sense that it's the only game I've found like this and even crashes on the base VF curve.


----------



## gfunkernaught

So far so good, hoping the universe doesn't jinx me. Been using the unofficial 520w rebar bios on my trio with +75 core +1200 mem and they've been great for gaming. Cyberpunk is where I see the biggest improvement, especially when using Quality DLSS. I'm capped at around 480w though, but I'm fine with that for now. How is rebar affecting everyone else's OC yields here? In gaming not benches.


----------



## gfunkernaught

yzonker said:


> Nerd fight!
> 
> Anyway, I've got some odd problem with RDR2. It's the only game I've found that I can't run at higher boost levels. And to the extreme. I literally can't run past 1950mhz or so without it crashing. And more interesting is that it will consistently crash in the same place. If I limit to 1950 or less, then never crashes. I even tried running no offsets at all, just enough power to get it to max voltage which is right around 2000mhz. Still crashed in the exact same spot.
> 
> Can't get a crash in anything else at such a low boost clock. CP2077, FS2020, Quake 2 RTX is what I've played recently.
> 
> Seems to have always been a problem with RDR2 I think as I even had to dial back my core offset for it when running a 390w bios.
> 
> I've read reports of other people with this sort of problem, but doesn't seem to be all cards/machines.
> 
> Anything to try? Seems like it's still an instability, but doesn't make sense that it's the only game I've found like this and even crashes on the base VF curve.


I don't have RDR2 but I can say that when I ran [email protected] with the power unlimited (1000w bios) quake 2 rtx never crashed but cyberpunk did. Check to see if your voltage is jumping around. Does RDR2 crash in both vulkan and DX12?
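If you want to eyeball that, log your sensors to CSV (GPU-Z and HWiNFO can both do that) and scan the voltage column. Rough sketch; the column header here is just a guess, rename it to match whatever your log actually uses:

```python
import csv, io

def voltage_jitter(csv_text, column="GPU Voltage [V]"):
    """Return (min, max) of the logged voltage column from a sensor CSV dump."""
    rows = csv.DictReader(io.StringIO(csv_text))
    volts = [float(r[column]) for r in rows if r.get(column)]
    return min(volts), max(volts)

# Example log snippet (made-up values) - a wide min/max spread under a
# steady load means the voltage is bouncing around.
log = "GPU Voltage [V]\n1.056\n1.081\n1.043\n"
print(voltage_jitter(log))
```

If min and max are far apart while the game is at a steady load, the curve isn't holding.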


----------



## Falkentyne

GRABibus said:


> There has been a little update of 3DMark.
> 
> I lost 500pts in PR....
> 
> I tested TimeSpy and my scores are in line.
> 
> Nobody else ?


Yeah I lost 500 points in Port Royal also.
I noticed the FPS during the benchmark is significantly lower (instead of the very start of the bench not going below 61 FPS, now it goes down to 57 FPS, etc).


----------



## yzonker

gfunkernaught said:


> I don't have RDR2 but I can say that when I ran [email protected] with the power unlimited (1000w bios) quake 2 rtx never crashed but cyberpunk did. Check to see if your voltage is jumping around. Does RDR2 crash in both vulkan and DX12?


I've tried both letting it bounce and flattening the curve and holding a voltage. Crashes either way. I was thinking I need to test DX12. Only been using Vulkan because the built in benchmark showed it to perform better. I'll try it though to rule that out.

And max overclocks vary from game to game, but I was a little baffled when it crashed with no offset, just high enough power limit to get it in the 1950 to 2000 range.


----------



## Beagle Box

GRABibus said:


> There has been a little update of 3DMark.
> 
> I lost 500pts in PR....
> 
> I tested TimeSpy and my scores are in line.
> 
> Nobody else ?


You are not alone. 

And something is up with my GPU now. Could be coincidence, but what a coincidence! I ran a decent bench run with my usual game (low OC) settings. Tried running again a few minutes later. Got the 3DMark update screen. Ran the bench. Scored crazy low - 14 something. Tried running again, but got a blue screen and a loud electronic hum. Now I'm getting rectangular artifacts when I try any overclock. This was with the latest driver and MSI Suprim rebar BIOS. 

Reflashed BIOS.
Reinstalled latest driver.
Tried other BIOSs.
No dice.

Won't really OC now. Still runs games. Just doesn't like OC. Can barely beat 15000 in PR now... I think it's some sort of a memory thing. 

What's it cost to repair one of these Strix OC GPUs, anyway?


----------



## J7SC

yzonker said:


> Nerd fight!
> 
> Anyway, I've got some odd problem with RDR2. It's the only game I've found that I can't run at higher boost levels. And to the extreme. I literally can't run past 1950mhz or so without it crashing. And more interesting is that it will consistently crash in the same place. If I limit to 1950 or less, then never crashes. I even tried running no offsets at all, just enough power to get it to max voltage which is right around 2000mhz. Still crashed in the exact same spot.
> 
> Can't get a crash in anything else at such a low boost clock. CP2077, FS2020, Quake 2 RTX is what I've played recently.
> 
> Seems to have always been a problem with RDR2 I think as I even had to dial back my core offset for it when running a 390w bios.
> 
> I've read reports of other people with this sort of problem, but doesn't seem to be all cards/machines.
> 
> *Anything to try?* Seems like it's still an instability, but doesn't make sense that it's the only game I've found like this and even crashes on the base VF curve.


I don't run RDR2 personally (yet), but I've read several times that it is much better on vulkan. Have you tried it on vulkan ?


----------



## yzonker

J7SC said:


> I don't run RDR2 personally (yet), but I've read several times that it is much better on vulkan. Have you tried it on vulkan ?


That's all I've been running. Like I said above, I may try DX12 just to see if the problem persists.


----------



## yzonker

Beagle Box said:


> You are not alone.
> 
> And something is up with my GPU now. Could be coincidence, but what a coincidence! I ran a decent bench run with my usual game (low OC) settings. Tried running again a few minutes later. Got the 3DMark update screen. Ran the bench. Scored crazy low - 14 something. Tried running again, but got a blue screen and a loud electronic hum. Now I'm getting rectangular artifacts when I try any overclock. This was with the latest driver and MSI Suprim rebar BIOS.
> 
> Reflashed BIOS.
> Reinstalled latest driver.
> Tried other BIOSs.
> No dice.
> 
> Won't really OC now. Still runs games. Just doesn't like OC. Can barely beat 15000 in PR now... I think it's some sort of a memory thing.
> 
> What's it cost to repair one of these Strix OC GPUs, anyway?


Isn't it usually vram that causes artifacts? What were you trying to oc? Core /mem/both?


----------



## J7SC

Beagle Box said:


> You are not alone.
> 
> And something is up with my GPU now. Could be coincidence, but what a coincidence! I ran a decent bench run with my usual game (low OC) settings. Tried running again a few minutes later. Got the 3DMark update screen. Ran the bench. Scored crazy low - 14 something. Tried running again, but got a blue screen and a loud electronic hum. Now I'm getting rectangular artifacts when I try any overclock. This was with the latest driver and MSI Suprim rebar BIOS.
> 
> Reflashed BIOS.
> Reinstalled latest driver.
> Tried other BIOSs.
> No dice.
> 
> Won't really OC now. Still runs games. Just doesn't like OC. Can barely beat 15000 in PR now... I think it's some sort of a memory thing.
> 
> What's it cost to repair one of these Strix OC GPUs, anyway?


...weird, especially that 'loud electronic hum'. To be absolutely safe for a potential RMA, you might want to first revert both BIOS positions back to the stock Asus Strix BIOS, though hopefully it won't come to that. In any case, that 'hum' concerns me the most - could a VRAM controller somehow have gotten 'affected' (though it's hard to believe that from just a software update)? 

...you say it runs games fine w/o OC - no artifacts? That's actually a good sign. Maybe try the old ATI tool to slowly bring up VRAM speed and see where it starts reporting artifacts. I would probably also take the cooler off and check for anything unusual on the PCB around the VRAM controllers, such as discoloration. 

...keep us posted, and good luck getting it sorted.


----------



## Beagle Box

J7SC said:


> ...weird, especially that 'loud electronic hum'. To be absolutely safe for a potential RMA, you might want to first revert back to the Asus Strix bios for both settings, though hopefully, it won't come to that. In any case, that 'hum' concerns me the most - could it be that a VRAM controller somehow got 'affected' (though hard to believe that w/ just a software update).
> 
> ...you say it runs games fine w/o oc - no artifacts ? That's actually a good sign. May be try the old ATI tool to slowly bring up VRAM speed and see where it starts to report artifacts. I would probably also take the cooler off to see about anything unusual on the PCB around VRAM controllers, such as discoloration.
> 
> ...keep us posted, and good luck getting it sorted.


Unfortunately, I just looked more closely and _I do have artifacts_ at stock memory speeds. But they're very hard to see and only on one screen in one game - _for now_.
They're random small vertical rectangles, just a lightening of the background, that slowly fade in and out. 

I've overclocked this GPU, so it's on me. Not looking to RMA, per se, but repair if possible.

Not too happy right now, for sure.
This was an awesome card.


@yzonker -

Just running my typical game OC. +50 power / +150 Curve / +500 memory
That's a super mild OC for this GPU.


----------



## gfunkernaught

Beagle Box said:


> Unfortunately, I just looked more closely and _I do have artifacts_ at stock memory speeds. But they're very hard to see and only on one screen in one game - _for now_.
> They're random small vertical rectangles, just a lightening of the background, that slowly fade in and out.
> 
> I've overclocked this GPU, so it's on me. Not looking to RMA, per se, but repair if possible.
> 
> Not too happy right now, for sure.
> This was an awesome card.


Happened to me too when I was flashing various BIOSes and trying different OCs. One time I was on a non-1kW BIOS (forget which one), flashed to 1kW, rebooted, set an offset too high, hard crash, hard reboot, then purple artifacts in 3D mode and crashing even at stock clocks and a lower power limit. What I had to do was run DDU, reset the secure boot keys, reinstall the drivers, and reboot; then it was fine.


----------



## Falkentyne

Beagle Box said:


> Unfortunately, I just looked more closely and _I do have artifacts_ at stock memory speeds. But they're very hard to see and only on one screen in one game - _for now_.
> They're random small vertical rectangles, just a lightening of the background, that slowly fade in and out.
> 
> I've overclocked this GPU, so it's on me. Not looking to RMA, per se, but repair if possible.
> 
> Not too happy right now, for sure.
> This was an awesome card.
> 
> 
> @yzonker -
> 
> Just running my typical game OC. +50 power / +150 Curve / +500 memory
> That's a super mild OC for this GPU.


Did this happen RIGHT after the 3dmark update??

Because if it did: when I got the same ultra-low score (-500 points), I immediately ran the Heaven benchmark and my FPS was like 15 below normal right at the start (where I normally start off at 195-205 FPS, it didn't exceed 185!!!)

I had to reboot the computer to get the FPS back to normal.
Put on my tinfoil hat here but I swear that 3dmark update did something bad. Like maybe it installed a miner or something....

I did notice something. I saw my GPU usage at like 25% for no reason for a while after seeing that low score, and the only thing running was a Fortnite download update (and I know it wasn't the Epic client causing it!).


----------



## Beagle Box

gfunkernaught said:


> Happened to me too when I was flashing various bios and trying different OCs. One time I was on a non-1kw bios (forget which one), flashed to 1kw, reboot, set an offset too high, hard crash, hard reboot, purple artifacts in 3D mode and crashing even at stock clocks and lower power limit. What I had to do is run DDU, reset the secure boot keys, reinstall drivers, reboot, then it was fine.


How dare you give me hope!
The BIOS I was using is only 450W and all settings were well beneath the GPU's capabilities.
I have already reflashed the BIOSs and uninstalled the drivers with REVO twice. 
I have no idea what a secure boot key is.

@Falkentyne :
Yes. Immediately. 3DMark updated as I was launching Port Royal. I had just run a 15,386 or something and wanted to see if it was a fluke at the settings I was using. 
I got the update and it ran like crap. I rebooted and tried again and got a BSOD and a loud buzz. The next run was also way low. Like 14650 or something.


----------



## cletus-cassidy

Falkentyne said:


> Yeah I lost 500 points in Port Royal also.
> I noticed the FPS during the benchmark is significantly lower (instead of the very start of the bench not going below 61 FPS, now it goes down to 57 FPS, etc).


Same symptoms here. Thought I was going crazy as I made some minor system changes late last night and had my scores change in two hours.


----------



## Falkentyne

cletus-cassidy said:


> Same symptoms here. Thought I was going crazy as I made some minor system changes late last night and had my scores change in two hours.


Except right after I ran Port Royal and got the massively lower score, it somehow made the Heaven benchmark give lower FPS too! I had to reboot to get my FPS back (in Heaven)....


----------



## J7SC

Falkentyne said:


> Except right after I ran Port Royal and got the massively lower score, this also somehow made heaven benchmark give lower FPS also! I had to reboot to get my FPS back (in Heaven)....


...may also be related to 3DM SystemInfo rather than the benchmark itself... SystemInfo stays running even after you close 3DM, until reboot. I've had issues with SystemInfo before.


----------



## Beagle Box

This was my run just before the update: Port Royal - 15344
Now, I can't break 14600 with those GPU OC settings, even with faster system RAM settings.


----------



## cletus-cassidy

Falkentyne said:


> Except right after I ran Port Royal and got the massively lower score, this also somehow made heaven benchmark give lower FPS also! I had to reboot to get my FPS back (in Heaven)....


Same exact behavior including that Heaven FPS drop but only after PR. Clears up after rebooting for me as well. My Time Spy scores were significantly lower than last night as well, not just PR.


----------



## Falkentyne

cletus-cassidy said:


> Same exact behavior including that Heaven FPS drop but only after PR. Clears up after rebooting for me as well. My Time Spy scores were significantly lower than last night as well, not just PR.


So you think that's what caused your artifacting, @Beagle Box ?
Did you try powering off hard and then rebooting?
DDU + driver reinstall?


----------



## GRABibus

I hope they will solve this « -500pts in PR » issue…

even after reboot, I still have -500pts in PR


----------



## Beagle Box

Falkentyne said:


> So you think that's what caused your artifacting, @Beagle Box ?
> Did you try powering off hard and then rebooting?
> DDU + driver reinstall?


Tried everything. BIOSs, drivers, Afterburner re-install....

My theory is that some part of the 3DMark software is bugged, or purposely 'updated' in some way that puts more stress on the 3090's memory system.
Before the recent update, I would see benchmark score improvements up to +1630 on memory. Above that, scores would drop and the card seemed to struggle. I think that 'struggling' was evidence of memory error-correction kicking in.
I'm now seeing those symptoms with any memory OC, so I think the only reason the card still plays games is the 3090's memory error-correction system.

Personally, I would stay away from memory OC of any kind when running anything 3DMark until I knew more. 

In my case, I think the software load has caused a hardware problem. The BSOD and loud buzzing, to my mind, is when that damage occurred. If I had been running a serious memory OC at the time, my card would probably have blown completely.

Unfortunately, I have too many other things to deal with right now that are taking up my time and money to work this out.
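To illustrate what I mean by 'struggling' (purely a toy model, every number here is invented): error-detect-and-retry trades errors for retries, so effective bandwidth peaks and then sags as the memory clock climbs past what's stable, instead of the card visibly artifacting.

```python
# Toy model of GDDR6X error-detect-and-retry (EDR): past the stable ceiling,
# retries eat effective bandwidth, so scores sag rather than the card
# artifacting. The ceiling and penalty are invented for illustration only.
def effective_bandwidth(clock_mhz, ceiling=1400, retry_penalty=0.004):
    errors = max(0, clock_mhz - ceiling)              # error rate grows past the ceiling
    return clock_mhz * max(0.0, 1 - retry_penalty * errors)

for mhz in (1219, 1319, 1419, 1519):
    print(mhz, round(effective_bandwidth(mhz)))       # rises, then rolls over
```

That roll-over is exactly the "scores drop above +1630" behavior: the card keeps running, it just quietly gets slower.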


----------



## GRABibus

Weird.....


----------



## jomama22

Beagle Box said:


> Tried everything. BIOSs, drivers, Afterburner re-install....
> 
> My theory is that some part of the 3dMark software is bugged or purposely 'updated' in some way that puts more stress on the 3090's memory system.
> Before the recent update, I would see benchmark score improvements up to +1630 on memory. Above that, scores would drop and the card seemed to struggle. I think that 'struggling' was evidence of memory error-correction kicking in.
> I'm now seeing those symptoms with any memory OC, so I think the only reason the card still plays games is due to the 3090's memory correction system.
> 
> Personally, I would stay away from memory OC of any kind when running anything 3DMark until I knew more.
> 
> In my case, I think the software load has caused a hardware problem. The BSD and loud buzzing, to my mind, is when that damage occurred. If I had been running a serious memory OC, at the time, my card would probably have blown completely.
> 
> Unfortunately, I have too many other things to deal with right now that are taking up my time and money to work this out.


If you have a spare drive you can throw a clean windows install on I would test as well. 

If you throw a large CPU load on, does anything weird happen? Rather, does your overclock still hold? Wondering if maybe the PSU has started to go. 

I would be very surprised if a piece of software could damage the card in that way. I think it's more likely that a crash caused corruption somewhere along the graphics pipeline and messed something up.

Also possible that it was merely coincidental with 3dmark and a component on the graphics card was already beginning to be degraded.

Would definitely try a clean windows install first though.


----------



## yzonker

jomama22 said:


> If you have a spare drive you can throw a clean windows install on I would test as well.
> 
> If you throw a large CPU load on, does anything weird happen? Rather, does your overclock still hold? Wondering if maybe the PSU has started to go.
> 
> I would be very surprised if a piece of software could damage the card in that way. I do think it's more likely that a crash may have caused corruption somewhere along the graphics pipeline and have messed something up in that respect.
> 
> Also possible that it was merely coincidental with 3dmark and a component on the graphics card was already beginning to be degraded.
> 
> Would definitely try a clean windows install first though.


Yeah, I was thinking the same thing. Can 3DMark push the mem harder than an ETH miner? Probably not.


----------



## Beagle Box

jomama22 said:


> If you have a spare drive you can throw a clean windows install on I would test as well.
> 
> If you throw a large CPU load on, does anything weird happen? Rather, does your overclock still hold? Wondering if maybe the PSU has started to go.
> 
> I would be very surprised if a piece of software could damage the card in that way. I do think it's more likely that a crash may have caused corruption somewhere along the graphics pipeline and have messed something up in that respect.
> 
> Also possible that it was merely coincidental with 3dmark and a component on the graphics card was already beginning to be degraded.
> 
> Would definitely try a clean windows install first though.


Already swapped the system drive SSD with a mirrored copy made when converting over to GPT for rebar - identical 500GB Samsung EVO 970 SSDs.
PSU is fine.
CPU benchmarks @ 5.3GHz on all cores are unaffected.

Perhaps a GPU component was already degrading, but I stopped hard benching a while back without issue. 
I was just tweaking my video gaming OC settings when all this occurred and those setting are very mild OC.

This is what the artifacts look like during gaming. There's usually one or two such rectangles at a time. They appear and disappear randomly like raindrops on the screen, lasting maybe 2 seconds at a time. I only notice them on static screens like this.


----------



## Beagle Box

felix121 said:


> How old is your GPU


Less than 6 months. 
Didn't flash any other BIOS to it until it was water-cooled. 
Had the MSI Suprim X Rebar and the KP520 Rebar BIOS on it when it blew. 
Was running the MSI BIOS at the time because that's what I game with.


----------



## Bobbylee

yzonker said:


> Nerd fight!
> 
> Anyway, I've got some odd problem with RDR2. It's the only game I've found that I can't run at higher boost levels. And to the extreme. I literally can't run past 1950mhz or so without it crashing. And more interesting is that it will consistently crash in the same place. If I limit to 1950 or less, then never crashes. I even tried running no offsets at all, just enough power to get it to max voltage which is right around 2000mhz. Still crashed in the exact same spot.
> 
> Can't get a crash in anything else at such a low boost clock. CP2077, FS2020, Quake 2 RTX is what I've played recently.
> 
> Seems to have always been a problem with RDR2 I think as I even had to dial back my core offset for it when running a 390w bios.
> 
> I've read reports of other people with this sort of problem, but doesn't seem to be all cards/machines.
> 
> Anything to try? Seems like it's still an instability, but doesn't make sense that it's the only game I've found like this and even crashes on the base VF curve.


How many frames do you expect to gain? Is it significant? If not don’t stress. If you’re bothered, stick the 1kw bios on your card


----------



## des2k...

Beagle Box said:


> Tried everything. BIOSs, drivers, Afterburner re-install....
> 
> My theory is that some part of the 3dMark software is bugged or purposely 'updated' in some way that puts more stress on the 3090's memory system.
> Before the recent update, I would see benchmark score improvements up to +1630 on memory. Above that, scores would drop and the card seemed to struggle. I think that 'struggling' was evidence of memory error-correction kicking in.
> I'm now seeing those symptoms with any memory OC, so I think the only reason the card still plays games is due to the 3090's memory correction system.
> 
> Personally, I would stay away from memory OC of any kind running anything 3Dark until I knew more.
> 
> In my case, I think the software load has caused a hardware problem. The BSD and loud buzzing, to my mind, is when that damage occurred. If I had been running a serious memory OC, at the time, my card would probably have blown completely.
> 
> Unfortunately, I have too many other things to deal with right now that are taking up my time and money to work this out.


I doubt 3DMark kills your memory/core unless the mount was wrong.

Here are things that cause artifacts on my 3090 (since I've remounted a lot):

Not enough pressure on the edge of the die (this can happen on thermal cycling after weeks of usage)

Too much pressure on the backplate at the core area

GPU not being fully inserted into the PCIe slot

Too much pressure on the die from the front block

Playing games with a heavy OC on core/mem for a long time that doesn't pass 3DMark PR and TE loops for 2h+

Not re-testing the OC with 3DMark after driver/vBIOS changes; the new drivers and rBAR are more demanding on memory

Bad pads on memory; usually this only shows under heavy path tracing or a mining benchmark with a mem OC


----------



## gfunkernaught

@Beagle Box 
Check the Secure Boot keys in your motherboard BIOS. Secure Boot is also tied to the graphics adapter along with EFI; they all have to "agree" with each other in the boot chain. If your motherboard has onboard video, plug your monitor into the onboard video, then try to reflash the stock BIOS specific to your card. Once flashed, shut down, plug your monitor into the 3090, then reinstall the drivers. I'm telling you, I thought my card was toast due to artifacts, then I started from scratch trying to bring the entire card back to normal and it worked.


----------



## J7SC

Beagle Box said:


> Less than 6 months.
> Didn't flash any other BIOS to it until it was water-cooled.
> Had the MSI Suprim X Rebar and the KP520 Rebar BIOS on it when it blew.
> Was running the MSI BIOS at the time because that's what I game with.


As mentioned before, I would check the mount / pads of the waterblock as well. I know it is a pain (have done it more than once). While the block is off, also check PCB around VRAM chips and VRAM-related VRM components. This also relates back to the 'hum' you mentioned before. Perhaps memory-intensive parts of 3DM created enough heat/stress on an area that had become marginal after heat-cycling.

Also, are you using Thermalright pads ? I love those and use them, but there apparently are some fake Thermalrights items being sold on Amazon.


----------



## Beagle Box

@felix121 
Definitely not a second hand GPU. Got it from Microcenter on a drop day. It was in the retail box, inside the crate shipment box.

@des2k... 
Card is fully seated and has been for a while. 
All overclocks were fully tested on the 5 BIOSs I have used over the last few months. The Suprim rebar and KP520 rebar BIOSs most recently.
I was thinking maybe some other system instability was triggered while I was running the 3DMark benchmarks, but this system is and has been remarkably solid. 
I don't run any extreme values in day to day operation. 
System RAM has mild OC.
CPU has been running flawlessly 50/50 for 2 years.

I have considered reseating the WB and backplate. I don't like the Alphacool backplate at all; it doesn't tighten evenly due to its design. But I'm not getting high memory temps.
GPU temps are cool, as are CPU temps.

@gfunkernaught 
That's pretty much how I flash the GPU BIOS. Haven't tried reflashing original BIOSs into the card. May try later this week.


----------



## HeadlessKnight

Would you guys try to RMA your EVGA graphics card simply because it has the PCI-E +12V power at 75-80W issue? If that is even an issue at all? There is a very long thread on the EVGA forums discussing this problem; do you think it is blown out of proportion?
This has been bothering me for a while now, but I am not sure if I should RMA it because of this.
Also, my card cannot hold clocks well at all. Even with the KPE and XC3 BIOS it still can't maintain even 2010 MHz, and in demanding games it can go as low as 1935 MHz even with a maxed-out power limit.
I've never seen the card's total board power exceed 475W, except for maybe fractions of a second when it will peak at 503W or so. It loves to stay at 445-450W on average even with the 500 and 520W BIOSes.


----------



## GRABibus

HeadlessKnight said:


> Would you guys try to RMA your EVGA graphics card simply because it has the PCI-E +12 power at 80-75W issue? If that is even an issue itself at all? There is a very long thread in EVGA forums discussing this problem, do you think it is blown out of proportion?
> This has been bothering me for a while now, but I am not sure If I should RMA it because of this.
> Also my card cannot hold clocks well at all. Even with KPE and XC3 BIOS it still can't maintain even 2010 MHz, and in demanding game can go as low as 1935 MHz even with maxed out power limit.
> Never seen the card total board exeeding 475W, expect for maybe fractions of a second it will peak at 503W or so. It loves to stay at 445-450W on average even with the 500 and 520W BIOSes.


I sold mine because of that.
Due to this issue, the card can never reach its full power limit!!
So, yes, I would RMA it.
Some here did the RMA and may give feedback on improvements.


----------



## jomama22

HeadlessKnight said:


> Would you guys try to RMA your EVGA graphics card simply because it has the PCI-E +12 power at 80-75W issue? If that is even an issue itself at all? There is a very long thread in EVGA forums discussing this problem, do you think it is blown out of proportion?
> This has been bothering me for a while now, but I am not sure If I should RMA it because of this.
> Also my card cannot hold clocks well at all. Even with KPE and XC3 BIOS it still can't maintain even 2010 MHz, and in demanding game can go as low as 1935 MHz even with maxed out power limit.
> Never seen the card total board exeeding 475W, expect for maybe fractions of a second it will peak at 503W or so. It loves to stay at 445-450W on average even with the 500 and 520W BIOSes.


Pretty sure they have an exchange program for it no?


----------



## GRABibus

jomama22 said:


> Pretty sure they have an exchange program for it no?


yes


----------



## GRABibus

Did any of you try liquid metal to cool the GPU on the stock cooler of the Strix OC?

I can't find anywhere whether the heatsink's cold plate is copper or aluminium...


----------



## PLATOON TEKK

Got a block for my KP and was able to land a KP HC.









All chillers arrived. Now waiting for the enclosure to go sub-zero. Wonder how much difference stock thermal paste will make vs Thermal Grizzly Extreme.


----------



## GRABibus

PLATOON TEKK said:


> Got a block for my KP and was able to land a KP HC.
> View attachment 2490244
> 
> 
> all chillers arrived. Now waiting for enclosure to go sub zero. Wonder how much difference stock thermal paste will do vs grizzly extreme.


be careful….Corona is on the way to infect your rig 😊


----------



## Beagle Box

GRABibus said:


> did some of you try Liquid Metal to cool the GPU on the stock cooler of the Strix OC ?
> 
> I can't find anywhere if the dissipator is copper or aluminium...


The stock Strix cooler's GPU mating surface is nickel-plated copper.


----------



## PLATOON TEKK

GRABibus said:


> be careful….Corona is on the way to infect your rig 😊


haha you are right. I feel it creeping up. I’ll have a beer for everyone in this thread! Hats off to you all, good to have you 🤘🏻


----------



## GRABibus

PLATOON TEKK said:


> haha you are right. I feel it creeping up. I’ll have a beer for everyone in this thread! Hats off to you all, good to have you 🤘🏻


🎸🎸


----------



## GRABibus

Beagle Box said:


> Stock Strix cooler GPU mating surface is nickel-plated copper.


thanks.
So liquid metal usage should be ok ?


----------



## Beagle Box

GRABibus said:


> thanks.
> So liquid metal usage should be ok ?


Oh yeah, should be fine. You'll need to seal around the processor with nail polish and perhaps make a square tape dam on the cooler itself. It will be fine for a horizontal mount.


----------



## des2k...

PLATOON TEKK said:


> Got a block for my KP and was able to land a KP HC.
> View attachment 2490244
> 
> 
> all chillers arrived. Now waiting for enclosure to go sub zero. Wonder how much difference stock thermal paste will do vs grizzly extreme.


nice 😁

I placed an order for another EK D5 and a MO-RA3 420 plus fans. EK had the 140mm Vardar for $7, so I got a few of the 400-1600rpm version.

Could be a while; it was out of stock so I ordered direct, but I keep hearing it's weeks before it arrives.


----------



## Rainstar

My first 3090 is an Asus TUF; I was just able to score a 3090 FE as a second.

The Asus TUF is waterblocked with an EK Vector TUF.

For the FE, I think I'll go with the Bykski N-RTX3090FE-TC-V2, which has an active WC backplate.

Can't really judge just yet, but would I have any NVLink compatibility issues?


----------



## yzonker

Follow-up on my RDR2 crashing. DX12 is exactly the same: crashes in the same place. The only thing that works is limiting the core clock. I tried a mild [email protected] No crash.

Seems like it's pure clock speed, though. I can run a lower core offset, but if I let it go closer to 2000MHz with more power, it crashes every time.

So I just saved the 1950 undervolt in AB and will use that for now.

It did give the GFX state error that gets lots of hits when searching, so it's a somewhat common problem. Most appear to have just reduced clocks to fix it. Kinda bizarre.


----------



## GRABibus

I posted the following on the 3DMark forum on Steam:

GRABibus
There is an issue with this new update for Port Royal.
Scores drop by 500 points!
See confirmations here as well:
https://www.overclock.net/threads/official-nvidia-rtx-3090-owners-club.1753930/page-755



*Here is the answer :*
UL_Jarnis [developer]
We are investigating these reports on Port Royal and can confirm that we've reproduced the effect mentioned on 30-series cards (but not on others) and while it is small, it appears to be real. We do not yet know what causes this and there should have been no other changes to the workload except the addition of the loop counter to the HUD when running stress test.


----------



## Beagle Box

GRABibus said:


> I have posted in Steam at 3DMark forum the following :
> 
> GRABibus
> There is an issue with this new update for Port Royal.
> Scores drop down by 500points !
> See also here confirmations :
> https://www.overclock.net/threads/official-nvidia-rtx-3090-owners-club.1753930/page-755
> 
> 
> 
> *Here is the answer :*
> UL_Jarnis [développer.]
> We are investigating these reports on Port Royal and can confirm that we've reproduced the effect mentioned on 30-series cards (but not on others) and while it is small, it appears to be real. We do not yet know what causes this and there should have been no other changes to the workload except the addition of the loop counter to the HUD when running stress test.


"while it is small"
Uh huh. A score drop from 15344 to 14624 at the same settings is not _small_ to _me_. 
Why only 30 series cards? 
Maybe they should check to see which code monkeys at 3DMark are running AMD cards on their personal gaming PCs? 

I hope they find the root cause. My GPU has suffered damage. 

They'll probably deny finding anything, but refresh the code and claim all is well...
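For scale, that drop works out like this (plain arithmetic on the two scores quoted above, nothing more):

```python
# Port Royal scores at identical settings, before and after the 3DMark update.
before, after = 15344, 14624
drop = before - after              # points lost
pct = 100 * drop / before          # as a share of the old score
print(f"-{drop} points, i.e. {pct:.1f}% of the previous score")
```

Call it roughly a 4.7% regression at the same settings.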


----------



## des2k...

Beagle Box said:


> "while it is small"
> Uh huh. A score drop from 15344 to 14624 at the same settings is not _small_ to _me_.
> Why only 30 series cards?
> Maybe they should check to see which code monkeys at 3DMark are running AMD cards on their personal gaming PCs?
> 
> I hope they find the root cause. My GPU has suffered damage.
> 
> They'll probably deny finding anything, but refresh the code and claim all is well...


It's over 700 points lower on my 3090 now!


----------



## yzonker

I'm just not going to run it until I hear this is resolved. No point anyway.


----------



## J7SC

GRABibus said:


> I have posted in Steam at 3DMark forum the following :
> 
> GRABibus
> There is an issue with this new update for Port Royal.
> Scores drop down by 500points !
> See also here confirmations :
> https://www.overclock.net/threads/official-nvidia-rtx-3090-owners-club.1753930/page-755
> 
> 
> 
> *Here is the answer :*
> UL_Jarnis [développer.]
> We are investigating these reports on Port Royal and can confirm that we've reproduced the effect mentioned on 30-series cards (but not on others) and while it is small, it appears to be real. We do not yet know what causes this and there should have been no other changes to the workload except the addition of the loop counter to the HUD when running stress test.


Good move letting them know...of course they are not going to publicly say that the update is a total gong show on RTX 30, but at least they are working on a fix.

I have not installed the latest 3DM 'update', and won't until it is resolved. However, for those who have, it would be interesting to see if the PR score issue persists if you go to 'options' and turn off SystemInfo _and_ hardware monitoring.


----------



## Beagle Box

J7SC said:


> Good move letting them know...of course they are not going to publicly say that the update is a total gong show on RTX30, but at least they are working on a fix.
> 
> I have not installed the latest 3DM 'update', and won't until it is resolved. However, for those who have, it would be interesting to see if the PR score issue persists if you go to 'options' and turn off SystemInfo _and_ hardware monitoring


Scores still low with those turned off, at least with my f'd up GPU.


----------



## cletus-cassidy

Beagle Box said:


> Scores still low with those turned off, at least with my f'd up GPU.


Same (except for the effing GPU part). I see lower scores in Time Spy / Extreme as well, but I need to run TS without first having run PR to see if that makes a difference.


----------



## J7SC

Beagle Box said:


> Scores still low with those turned off, at least with my f'd up GPU.


Thanks for the 'test'.
...Also, I meant to ask earlier: have your VRAM temps changed since that 'PR+hum' event? I realize that you're running lower VRAM speeds now, but still, you might have some recollection / old screenies of stock-speed VRAM temps.


----------



## GRABibus

J7SC said:


> Good move letting them know...of course they are not going to publicly say that the update is a total gong show on RTX30, but at least they are working on a fix.
> 
> I have not installed the latest 3DM 'update', and won't until it is resolved. However, for those who have, it would be interesting to see if the PR score issue persists if you go to 'options' and turn off SystemInfo _and_ hardware monitoring


Still low score...


----------



## Beagle Box

J7SC said:


> Thanks for the 'test'
> ...also, meant to ask earlier, but have your VRAM temps changed after that 'PR+hum' event. I realize that you're running lower VRAM speeds now, but still, you might have some recollection / old screenies of stock-speed VRAM temps


No temp anomalies. I just can't drive the VRAM anymore - or, more accurately, driving the VRAM faster doesn't do anything.

I think I took some sort of Voltage surge. My GPU has always been a low but efficient clocker; adding Voltage never helped performance. That's the sign of a well-insulated chip, and when you over-Volt this sort of chip, you can damage it. After that, performance is degraded and then usually continues to deteriorate slowly over time. People with experience overclocking CPUs know the drill: all it takes is one spike.
I was running +50 on Voltage at the time, but maybe the Voltage spiked when it BSODed on me...?


----------



## Falkentyne

Beagle Box said:


> No temp anomalies. I just can't drive the VRAM anymore - or more accurately- driving the VRAM faster doesn't do anything.
> 
> I think I took some sort of Voltage surge. My GPU has always been a low, but efficient clocker. Adding Voltage never helped performance. That's the sign of a well-insulated chip. When you over-Volt this sort of chip, you can damage it. After that, performance is degraded and then usually continues to deteriorate slowly over time. People with experience overclocking CPUs know the drill. All it takes is one spike.
> I was running +50 on Voltage at the time, but maybe the Voltage spiked when it BSDed on me...?


If this isn't a Kingpin card or you don't have the Elmor EVC2X device, you didn't increase the voltage...


----------



## J7SC

Beagle Box said:


> No temp anomalies. I just can't drive the VRAM anymore - or more accurately- driving the VRAM faster doesn't do anything.
> 
> I think I took some sort of Voltage surge. My GPU has always been a low, but efficient clocker. Adding Voltage never helped performance. That's the sign of a well-insulated chip. When you over-Volt this sort of chip, you can damage it. After that, performance is degraded and then usually continues to deteriorate slowly over time. People with experience overclocking CPUs know the drill. All it takes is one spike.
> I was running +50 on Voltage at the time, but maybe the Voltage spiked when it BSDed on me...?


I share the same philosophy (and the same Strix...)...typically, I only use MSI AB sliders and don't touch the voltage at all... for my setup, it tends to not do very much anyway. 

On a different note, I'm also wondering about the r_BAR bit in your situation, since r_BAR affects both VRAM and system RAM ('wondering about' doesn't mean that's the issue; it comes down to eliminating one variable after another to find out what is wrong). A related question would be whether the r_BAR implementation of the MSI SuprimX BIOS differs significantly from the stock Asus Strix V3/4.


----------



## Lobstar

HeadlessKnight said:


> Would you guys try to RMA your EVGA graphics card simply because it has the PCI-E +12 power at 80-75W issue? If that is even an issue itself at all? There is a very long thread in EVGA forums discussing this problem, do you think it is blown out of proportion?
> This has been bothering me for a while now, but I am not sure If I should RMA it because of this.
> Also my card cannot hold clocks well at all. Even with KPE and XC3 BIOS it still can't maintain even 2010 MHz, and in demanding game can go as low as 1935 MHz even with maxed out power limit.
> Never seen the card total board exeeding 475W, expect for maybe fractions of a second it will peak at 503W or so. It loves to stay at 445-450W on average even with the 500 and 520W BIOSes.


I did the special RMA program. My old card never exceeded 440W unless I very carefully dialed in an underclock; it regularly hit 83W on the PCIe draw and occasionally hit 90W. My new card takes basically whatever I can throw at it. You can't really just choose to do the program, though, as they will ask for data before accepting the card for the swap. I'm glad I took the swap as offered. I'm now watercooled and my card can hold 2250MHz through benchmarks. I scored 15,308 in Port Royal (this isn't my highest score, but the one where I've had the highest clocks).


----------



## GRABibus

Lobstar said:


> I did the special RMA program. My old card never exceeded 440w unless I very carefully dialed in an underclock. It regularly hit 83w on the PCIe draw and occasionally hit 90w. My new card takes basically whatever I can throw at it. You can't really choose to do the program though as they will ask for data before accepting the card for the swap. I'm glad I took the swap as offered. I'm now watercooled and my card can hold 2250mhz through benchmarks. I scored 15 308 in Port Royal (This isn't my highest score but the one where I've had the highest clocks)


What would be nice to know is: if we purchase a new FTW3 Ultra, do we get the revised one?....

I sent a mail to [email protected]


----------



## Beagle Box

Falkentyne said:


> If this isn't a Kingpin card or you don't have the Elmor EVC2X device, you didn't increase the voltage...


Well, I mean just the AB slider. My old GTX 1080 ran great with that slider maxed, and the higher the Core Clock slider, the faster it would go.
This 3090 GPU likes no more than +50 on the Voltage slider and runs best when curves created with the Core Clock slider are flattened at 1.087V and above.




J7SC said:


> I share the same philosophy (and the same Strix...)...typically, I only use MSI AB sliders and don't touch the voltage at all... for my setup, it tends to not do very much anyway.
> 
> On a different note, I'm also wondering about the r_BAR bit in your situation, since r_BAR affects both VRAM and system RAM ('wondering about' doesn't mean that's the issue, but it comes down to eliminating one var after the other to find out what is wrong). A related question would be if the r_BAR implementation of the MSI SuprimX bios differs significantly from the stock Asus Strix V3/4


The Voltage slider does do something. I've seen Core Curves that wouldn't work @ +100 work just fine @ +80 or +50 and be faster than +0. I guess I use it more as a limiter with troublesome curves.
YMMV.

I've run lots of BIOSs now, old and rebar versions. Scores are low in all 3DMark tests, and now Superposition scores suck too (compared to my own score history).

Last night, I removed all NVIDIA drivers and pulled my GPU from the PC. I re-flashed the original Asus BIOSs to the Performance and Quiet BIOS positions, completely re-installed the GPU and the latest NVIDIA driver, and reinstalled Afterburner. Then I updated both BIOSs to the Asus rebar versions.

Result?
GPU runs like crap.


----------



## J7SC

Beagle Box said:


> Well, I mean just the AB slider. My old GTX 1080 ran great with that slider maxed. And the higher the Core Clock slider, the faster it would go.
> This 3090 GPU likes no more than +50 on Voltage slider and runs best when curves created Core Clock slider setting are flattened at 1.087V and above.
> 
> 
> 
> 
> The Voltage slider does do something. I've seen Core Curves that wouldn't work @ +100 work just fine @ +80 or +50 and be faster than +0. I guess I use it more as a limiter with troublesome curves.
> YMMV.
> 
> I've run lots of BIOSs now. Old and rebar versions. Scores are all low in all 3DMark tests and now Superposition scores suck (compared to my own score history).
> 
> Last night, I removed all NVidia drivers and pulled my GPU from my PC. I re-flashed the original ASUS BIOSs to the Power and Quiet BIOS positions. I completely re-installed the GPU and latest NVidia driver. Reinstalled Afterburner. Then I updated both BIOSs to the ASUS rebar versions.
> 
> Result?
> GPU runs like crap.


...have you taken the block and back-plate off yet for physical inspection ?
Also: each time you use a different BIOS and boot up, Windows (as well as MSI AB) will 'see and treat it' like a new GPU with separate entries, i.e. in the fabled Windows registry. I wonder if things got garbled up.


----------



## gfunkernaught

Too bad I didn't run a PR test with the non-rebar 520W BIOS before flashing to the rebar version; can't compare the 1kW to the 520W, unfortunately. Any news about a 1kW rebar BIOS?


----------



## Beagle Box

J7SC said:


> ...have you taken the block and back-plate off yet for physical inspection ?
> Also: each time you use a different bios and boot up, Windows (as well as MSI AB) will 'see and treat it' like a new GPU with separate entries, ie in the fabled windows registry. I wonder if things got garbled up


I did remove the GPU from Windows using device manager, but I suppose that doesn't mean there aren't more device profiles left in the registry. I'll have to check that out. 

I haven't pulled the backplate, yet. That's a hassle I'm putting off.

The AB profiles did get confused, but only the EVGA KP520 and KP1k BIOS profiles. The buttons showed +60 on the Power Limit slider when running the KP520 instead of 120, or whatever the KP520 BIOS power limit is. The curves and Voltages I run are all pretty similar; they're just power-limited on lower-powered BIOSs.


----------



## Falkentyne

Beagle Box said:


> Well, I mean just the AB slider. My old GTX 1080 ran great with that slider maxed. And the higher the Core Clock slider, the faster it would go.
> This 3090 GPU likes no more than +50 on Voltage slider and runs best when curves created Core Clock slider setting are flattened at 1.087V and above.
> 
> 
> 
> 
> The Voltage slider does do something. I've seen Core Curves that wouldn't work @ +100 work just fine @ +80 or +50 and be faster than +0. I guess I use it more as a limiter with troublesome curves.
> YMMV.
> 
> I've run lots of BIOSs now. Old and rebar versions. Scores are all low in all 3DMark tests and now Superposition scores suck (compared to my own score history).
> 
> Last night, I removed all NVidia drivers and pulled my GPU from my PC. I re-flashed the original ASUS BIOSs to the Power and Quiet BIOS positions. I completely re-installed the GPU and latest NVidia driver. Reinstalled Afterburner. Then I updated both BIOSs to the ASUS rebar versions.
> 
> Result?
> GPU runs like crap.


The Afterburner slider does not increase the voltage (directly).
It increases it on AMD cards and on OLD Nvidia cards.

I'm not sure if it increased it on Pascal, but on Ampere and Turing it only allows a higher point on the existing V/F curve (in the case of Ampere, it allows one higher voltage tier at +15 MHz on the core clocks), so if the previous top tier was 1.056v-1.075v at 2100 MHz, this would allow 1.069v-1.10v @ 2115 MHz. This is actually VID rather than voltage, as there are also NVVDD and MSVDD voltages (these can be set, and read back separately, in the Kingpin Classified tool on their cards, or with Elmor's tool on cards where the voltage controller is not "read-only"). There's other stuff going on behind the scenes too, including load-line calibration.
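A rough sketch of that tier behaviour (the VID/clock numbers below are made up for illustration; this is not Afterburner's actual internal model):

```python
# Model the V/F curve as discrete (VID, core clock) tiers. The "voltage"
# slider does not raise the VID of any existing tier; it just exposes one
# extra, otherwise-locked tier at the top, worth +15 MHz.
VF_CURVE = [          # illustrative values only
    (1.031, 2070),
    (1.043, 2085),
    (1.056, 2100),    # normal top tier with the slider at 0
    (1.081, 2115),    # hidden tier, exposed when the slider is maxed
]

def max_boost(curve, voltage_slider_pct):
    """Return the highest reachable (VID, MHz) tier for a slider setting."""
    usable = curve[:-1] if voltage_slider_pct == 0 else curve
    return usable[-1]

print(max_boost(VF_CURVE, 0))    # (1.056, 2100)
print(max_boost(VF_CURVE, 100))  # (1.081, 2115)
```

The point being: the slider moves the ceiling on the stock curve by one step rather than adding voltage to any existing point.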


----------



## Lobstar

GRABibus said:


> What would be nice is to know if we purchase a new Ultra FTW3, do we get the swap one ?....
> 
> I sent a mail to [email protected]


From what I've seen in the thread, the revisions are making their way into retail channels, but as of even a few weeks ago many people were still getting the OG cards with the crappy voltage controller.


----------



## KedarWolf

Falkentyne said:


> The Afterburner slider does not increase the voltage (directly).
> It increases it on AMD cards and on OLD Nvidia cards.
> 
> I'm not sure if it increased it on Pascal, but on Ampere and Turing, it only allows a higher point on the existing V/F curve (in the case of Ampere, it allows one higher voltage tier of +15 mhz on the core clocks), so if the previous tier was 1.056v-1.075v at 2100 mhz, this would allow 1.069v-1.10v @ 2115 mhz. This is actually VID, rather than voltage, as there is NVVDD and MSVDD voltage also (these can be set (and read back separately!) in the Kingpin classified tool on their cards, or with Elmor's tool, on cards where the voltage controller is not "Read Only"). There's other stuff going on behind the scenes also including Loadline Calibration and other things.


Where do we get this 'Elmor's Tool' and is the Strix OC voltage locked?


----------



## Beagle Box

Falkentyne said:


> The Afterburner slider does not increase the voltage (directly).
> It increases it on AMD cards and on OLD Nvidia cards.
> 
> I'm not sure if it increased it on Pascal, but on Ampere and Turing, it only allows a higher point on the existing V/F curve (in the case of Ampere, it allows one higher voltage tier of +15 mhz on the core clocks), so if the previous tier was 1.056v-1.075v at 2100 mhz, this would allow 1.069v-1.10v @ 2115 mhz. This is actually VID, rather than voltage, as there is NVVDD and MSVDD voltage also (these can be set (and read back separately!) in the Kingpin classified tool on their cards, or with Elmor's tool, on cards where the voltage controller is not "Read Only"). There's other stuff going on behind the scenes also including Loadline Calibration and other things.


Yeah. That's what I use it for - to limit the GPU from trying to request more Voltage along the core clock curve. 
When benching, it can mean the difference between a stellar score and crashing. 
My current GPU is prone to jumping from a smooth 2205 to an insta-crash at 2235. Adjusting the Voltage slider, along with limiting power and flattening the core curve above 1.087V, has been good for mitigating that inexplicable tendency for me.

Haven't tried any external controller. Not sure how much more performance I could squeeze out of this card; I'm thinking 16000 in PR would still be out of reach on my little 8086 and Z370 MB.


----------



## Falkentyne

KedarWolf said:


> Where do we get this 'Elmor's Tool' and is the Strix OC voltage locked?


You buy it from Elmor labs.


----------



## GRABibus

Beagle Box said:


> my little 8086 and z370 MB.


Still better maybe than my 5900x on PR 😂


----------



## Beagle Box

GRABibus said:


> Still better maybe than my 5900x on PR 😂


Well, yeah. Of course. 😉


----------



## GQNerd

For anyone whose GPU is still working but getting low scores: you can uninstall 3DMark and download an older version from here: Futuremark 3DMark for Windows | TechPowerUp

At least until they fix it.


----------



## West.

Any HOF owners here? I have a 3090 HOF coming very soon. How's the memory temperature under gaming / mining? Is it necessary to change thermal pads on this card if I want to keep memory temp below 90s? Thanks!


----------



## GRABibus

West. said:


> Any HOF owners here? I have a 3090 HOF coming very soon. How's the memory temperature under gaming / mining? Is it necessary to change thermal pads on this card if I want to keep memory temp below 90s? Thanks!


Tested one 10 days ago.

Could bench the memory at +1700MHz, still below 80°C (Port Royal and Black Ops Cold war, 22°C ambient).
To be tested in more demanding apps and games.

PS: Nizzen and bl4ckdot are HOF owners here.
PPS: as soon as I can find one available on the market, I will probably buy one!


----------



## jomama22

KedarWolf said:


> Where do we get this 'Elmor's Tool' and is the Strix OC voltage locked?


The EVC communicates directly with the voltage controllers, so it allows you to set whatever you want. That's what the I2C solder points on the PCB are for.


----------



## inedenimadam

GRABibus said:


> lost 500pts in PR....





Falkentyne said:


> Yeah I lost 500 points in Port Royal also.





Beagle Box said:


> You are not alone.





cletus-cassidy said:


> Same symptoms here.


Me too. Almost exactly 500 points. I can usually break 15000 with just the sliders, but I'm stuck in the 14500s. Haven't even bothered with the classy tool because I am so far off that there is no point. 

Goalpost just got further away.


----------



## J7SC

inedenimadam said:


> Me too. Almost exactly 500 points. I can usually break 15000 with just the sliders, but I'm stuck in the 14500s. Haven't even bothered with the classy tool because I am so far off that there is no point.
> 
> *Goalpost just got further away. *


I ain't updating 3DM, coz


----------



## TonyBrownTown

My saga continues. I have an Inno3D 3090 iChill X4 that is not POSTing, so I connected a Skypro (thanks for the help) to the ISSI chip on it. I can erase, read, and write; however, the first sector on the chip gets lost.

Does this mean I need to solder on a new BIOS chip? Any workaround?

I tried to offset by one and no POST, LOL

You can see here when I open it has NVGI ->

When I write it changes to a '.'

Confirmed that is the only change to the write -



Any ideas?


----------



## Vayne4800

Hello everyone. I have been searching for information regarding the error when trying to use the nvflash protecton function. I flashed my Trinity non-OC with the OC vBIOS (with rebar) and all went well except this last step. Should I be concerned? Also, junction temps went 2°C higher (was 104°C in RE8 and now 106°C, with fan speed going from 75% max to 85% max). I'd appreciate thoughts on this.

Also lost 500 points in PR but gained as much in FS 3Dmark. Issues with 3Dmark software?


----------



## GRABibus

Good BIOS to test!

KFA2 RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## Beagle Box

GRABibus said:


> Good BIOS to test!
> 
> KFA2 RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com


This BIOS is rebar enabled.


----------



## VinnieM

TonyBrownTown said:


> My saga continues, I have an Inno3D 3090 iChill X4 that is not posting so I connected a Skypro (thanks for the help) to the ISSI chip on it and I can erase, read and write however the first sector on the chip gets lost.
> 
> Does this mean I need to solder on a new bios chip? Any workaround?
> 
> I tried to offset by one and no post LOL
> 
> You can see here when I open it has NVGI - >
> 
> 
> View attachment 2490406
> 
> 
> When I write it changes to a '.'
> 
> View attachment 2490407
> 
> 
> Confirmed that is the only change to the write -
> 
> 
> 
> Any ideas?


So it's just the first byte that's not written (zeroed out)? What happens if you disable the "SECTOR" option? Or, if you leave it enabled, set the sector range from 0 to 255.
Does the "Edit" button work, by the way? Maybe you can manually correct the first byte.


----------



## SShadowS

Has anyone found a BIOS, for a reference design board, with the VRAM temp limit raised above the standard 110°C?
My board has a defective sensor; it reads the temps as fluctuating between 108-112°C, depending on what it landed on at reboot.
So sometimes it is stuck thermal throttling, and I either have to reboot or reset the driver state with NVFLASH.


----------



## Beagle Box

GRABibus said:


> Good BIOS to test!
> 
> KFA2 RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com


Best Port Royal score: 14163  
Worst BIOS I've tried on my Strix OC so far. 
Absolute 

Flashed it twice because 3DMark recognized it as a generic NVIDIA BIOS: GRAPHICS CARD IS NOT RECOGNIZED
YMMV


----------



## GRABibus

Beagle Box said:


> Best Port Royal score: 14163
> Worst BIOS I've tried on my Strix OC so far.
> Absolute
> 
> Flashed it in twice because 3DMArk recognized it as a generic NVidia BIOS: GRAPHICS CARD IS NOT RECOGNIZED
> YMMV


Yes, it is marked as NVIDIA.
But with it, during gaming, you can monitor VRM temps, and you don't lose one 8-pin connector reading for the power, which is not the case with the MSI Suprim X.

My performance and frequencies in Cold War are exactly the same as with the Suprim X.


----------



## Beagle Box

GRABibus said:


> yes, it is marked as nvidia.
> But with it, during gaming, you can monitor VRM temps and you don’t loose one 8 connector reading for the power, which is not the case with MSI suprim X.
> 
> my perfs and frequencies are exactly the same in Cold War for me than Suprim X.


Okay. 
I'll give gaming a try this weekend. 
Seems to be 3 types of BIOSs: 1. Benching, 2. Gaming, 3. Just Suck (Asus)


----------



## J7SC

Beagle Box said:


> Best Port Royal score: 14163
> Worst BIOS I've tried on my Strix OC so far.
> Absolute
> 
> Flashed it in twice because 3DMArk recognized it as a generic NVidia BIOS: GRAPHICS CARD IS NOT RECOGNIZED
> YMMV





GRABibus said:


> yes, it is marked as nvidia.
> But with it, during gaming, you can monitor VRM temps and you don’t loose one 8 connector reading for the power, which is not the case with MSI suprim X.
> 
> my perfs and frequencies are exactly the same in Cold War for me than Suprim X.


...speaking of flashing the Asus Strix OC, per earlier comments about BIOS 1 (Asus V3) and the subsequent flash to BIOS 2 (KPE520 r_BAR) 'mixing' info in GPUz when cold-booting: do you still have that issue? Asus V3 was flashed from that silly Asus stock .exe file, and I don't think it had re-enabled -protecton


----------



## Beagle Box

J7SC said:


> ...speaking of flashing Asus Strix OC, per earlier comments about Bios 1 (Asus V3) and subsequent flash to Bios 2 (KPE520 r_BAR) 'mixing' info in GPUz when cold-booting, do you still have that issue ? Asus V3 was flashed from that silly Asus stock .exe file, and I don't think it had re-enabled -protecton


I think Windows isn't bothering to read in the whole BIOS. If you boot enough times, the data will be correct. For a while, I thought the BIOS selection switch wasn't working on my Strix. Turns out, the switch position must be changed while the card is powered. Lots of little quirks with these GPUs re BIOSs.

I think the failure to re-enable protection is a random occurrence. 
I just flashed the HOF BIOS twice. It failed to re-enable both times.


----------



## Twintale

Hello everyone. Does anyone know the thermal pad measurements for the Palit GameRock RTX 3090? (Just swapped to 1.5 mm on the back side, but temps are worse now, so it's 1 or 0.5 mm; the default pads are smashed, so they look like 0.5 mm but could be 1...)


----------



## yzonker

SShadowS said:


> Has anyone found a BIOS, for a reference design board, with the VRAM temp limit raised above the standard 110°C?
> My board has a defective sensor; it reads the temps as fluctuating between 108-112°C, depending on what it landed on at reboot.
> So sometimes it is stuck thermal throttling, and I either have to reboot or reset the driver state with NVFLASH.


It reads 110C+ even when the card is at idle?

The only BIOS I'm aware of that would not throttle is the KP XOC, but that's not a good idea if you can't even get good temp readings (because it doesn't throttle at all).

Seems like you need to RMA the card.


----------



## Falkentyne

Beagle Box said:


> Best Port Royal score: 14163
> Worst BIOS I've tried on my Strix OC so far.
> Absolute
> 
> Flashed it in twice because 3DMArk recognized it as a generic NVidia BIOS: GRAPHICS CARD IS NOT RECOGNIZED
> YMMV





Vayne4800 said:


> Hello everyone. I have been searching for information regarding the Error when trying to use nvflash protecton function. I flashed my Trinity None OC with the OC vbios (with rebar) and all went well except this last step. Should I be concerned? Also, Junction Temps went 2c higher (was 104c in RE8 and now 106C, with fan speed going from 75% max to 85% max). Appreciate thoughts on this.
> 
> Also lost 500 points in PR but gained as much in FS 3Dmark. Issues with 3Dmark software?


As explained multiple times, it's a bug in the latest version.
However, none of this crap explains why this ALSO affects OTHER software. Running the Heaven benchmark after running Port Royal gives substantially lower FPS even in that, and the only way to get your FPS back is to reboot the computer.
And I don't think disabling 3DMark SysInfo from running avoids this problem either (since it did NOT stop the -500 points in PR), and I'm not going to be the one to check. When you run PR, just look at the very beginning; you can immediately see the FPS is lower than it should be.


----------



## GRABibus

Falkentyne said:


> As explained multiple times, it's a bug in the latest version.
> However none of this crap explains why this ALSO affects OTHER software too. Running Heaven benchmark after running Port Royal gives substantially lower FPS even in that. And the only way to get your FPS back is to reboot the computer.
> And I don't think disabling 3dmark sysinfo from running avoids this problem either (since it did NOT stop the -500 points in PR), and I'm not going to be the one to check this. Because when you run PR, just look at the very beginning, you can immediately see the FPS is lower than it should be.


I also posted on the 3DMark forum, and they are working on a fix.


----------



## Beagle Box

Falkentyne said:


> As explained multiple times, it's a bug in the latest version.
> However none of this crap explains why this ALSO affects OTHER software too. Running Heaven benchmark after running Port Royal gives substantially lower FPS even in that. And the only way to get your FPS back is to reboot the computer.
> And I don't think disabling 3dmark sysinfo from running avoids this problem either (since it did NOT stop the -500 points in PR), and I'm not going to be the one to check this. Because when you run PR, just look at the very beginning, you can immediately see the FPS is lower than it should be.


It's occurred to me that perhaps the high scores we were posting before were the anomalies - that they were artificially high due to some sort of strange interactions among processes, BIOSs, drivers, ... and what we're scoring now is closer to the truth. 
Just a thought... 

Regardless, I do think there's some process or data hanging up in memory that is affecting performance even after exiting the 3DMark app.


----------



## Falkentyne

Beagle Box said:


> It's occurred to me that perhaps the high scores we were posting before were the anomalies - that they were artificially high due to some sort of strange interactions among processes, BIOSs, drivers, ... and what we're scoring now is closer to the truth.
> Just a thought...
> 
> Regardless, I do think there's some process or data hanging up in memory that is affecting performance even after exiting the 3DMark app.


Doubtful. Because I saw those "high" scores ever since the October 9th release.
And FPS = score. Why would the FPS 'drop'? FPS is FPS. If it drops, something is wrong. FPS = score.

The only "jump" in PR I saw was on the "rebar enabled" drivers. That gave about another +150 points. But those cost about 100 points in Timespy.

But now we are -500 points on PR ++PLUS++ a decrease in the Heaven benchmark after running PR????? That's.....a serious problem.


----------



## KedarWolf

On the Elmor's Labs EVC2SX do I need to solder the probes to the contacts on my Strix OC PCB or is there a better way to attach them without soldering?


----------



## Beagle Box

Falkentyne said:


> Doubtful. Because I saw those "high" scores ever since October 9th release.
> And FPS=scores. Why would the FPS 'drop'? FPS is FPS. If it drops, something is wrong. FPS=scores.
> 
> The only "Jump" in PR I saw was on the "Rebar enabled" drivers. That gave about another +150 points. But those cost about 100 points in Timespy.
> 
> But now we are -500 mhz on PR ** ++PLUS++ a decrease in Heaven benchmark after running PR????? That's.....a serious problem.


Then our experiences differ. I experienced a large jump in scores when the 466.11 NVidia beta driver came out which was unchanged when the official 466.27 was released. I also saw healthy gains when Afterburner allowed memory overclocking above +1500. 

Now, neither the memory overclock nor the driver makes much difference in my scores. They're higher, but only marginally.

I have scores over 15800 in Port Royal using the MSI 450W BIOS running on a 6-core CPU. I'm at least willing to question whether something strange was happening there.


----------



## Falkentyne

Beagle Box said:


> Then our experiences differ. I experienced a large jump in scores when the 466.11 NVidia beta driver came out which was unchanged when the official 466.27 was released. I also saw healthy gains when Afterburner allowed memory overclocking above +1500.
> 
> Now, neither the memory overclock nor the driver makes much difference in my scores. They're higher, but only marginally.
> 
> I have scores over 15800 in Port Royal using the MSI 450W BIOS on running 6 core CPU. I'm at least willing to question if something strange was happening there.


I think you misunderstood me here.
When I say "high" scores, I mean "normal" scores. 
I saw the same "higher" scores on 466.11, which was the re-bar driver. +150 points or so.
Then a drastic massive loss on the last 3DMark update, which also affected Heaven after you run Port Royal.
You implied that the "high" scores are a fluke, which makes no sense, because scores are based on FPS. Higher FPS = good. There's no such thing as "too much FPS" unless something is actually not being rendered properly, and I've seen no proof of that.


----------



## Vayne4800

Vayne4800 said:


> Hello everyone. I have been searching for information regarding the Error when trying to use nvflash protecton function. I flashed my Trinity None OC with the OC vbios (with rebar) and all went well except this last step. Should I be concerned? Also, Junction Temps went 2c higher (was 104c in RE8 and now 106C, with fan speed going from 75% max to 85% max). Appreciate thoughts on this.
> 
> Also lost 500 points in PR but gained as much in FS 3Dmark. Issues with 3Dmark software?


Sorry for the repost, but I'm still looking for any feedback regarding the error caused by the failed EEPROM protecton command. I'd also appreciate thoughts on the results of the flash.


----------



## Beagle Box

Falkentyne said:


> I think you misunderstood me here.
> When I say "high" scores I mean "Normal": scores.
> I saw the same 'higher" scores on 466.11 which was the re-bar driver. +150 points or so.
> Then a drastic massive loss on the last 3dmark update which also affected Heaven after you run Port Royal.
> You implied that the "high" scores are a fluke, which makes no sense. Because scores are based on FPS. Higher FPS=good. There's no such thing as "too much FPS" unless something is actually not being rendered properly. And I've seen no proof of this.


I'm definitely seeing lower fps on runs (-5fps on average), but with the same core speeds (2190-2205MHz). 
So is my GPU less efficient now or has the fps counter changed? 
If the fps counter has changed was it right before and wrong now or vice versa?

Just trying to keep an open mind.


----------



## Falkentyne

Beagle Box said:


> I'm definitely seeing lower fps on runs (-5fps on average), but with the same core speeds (2190-2205MHz).
> So is my GPU less efficient now or has the fps counter changed?
> If the fps counter has changed was it right before and wrong now or vice versa?
> 
> Just trying to keep an open mind.


This is hard to explain.
Back when I first got my 3090, after doing a shunt mod so I was no longer power limited, I often saw a variance of *300 points* between runs. Note that if I did one run then immediately did another, I would get close to the same score. But if my scores started low, the only way to fix them was to repeatedly close 3dmark and EXIT steam, and then reload it and run it again (and again) until the scores were normal. (this affects both Port Royal and Timespy).

There was discussion about this on evga forums also.
This strange issue seemed to only affect 3dmark.

No one determined why this was happening. Some people thought it might be the shader cache. One person said his scores got back to normal after he reinstalled windows clean. Another person said his scores got back to normal when he ditched the Steam version and went standalone. I can't give you the answer you want. All I can tell you is what I've read.

Now, this latest version is just plain BAD. No other time has 3dmark not only given a CONSISTENT -500 points, but also destroyed my Heaven FPS by 15% (e.g. 200 FPS to 185 FPS) until I restart the computer. And this has already been verified by others so it's "not just me."


----------



## Quantum Reality

Beagle Box said:


> that they were artificially high due to some sort of strange interactions among processes, BIOSs, drivers, ... and what we're scoring now is closer to the truth.


This reminds me of how enabling PhysX used to skew 3DMark Vantage results.


----------



## Beagle Box

Falkentyne said:


> This is hard to explain.
> Back when I first got my 3090, after doing a shunt mod so I was no longer power limited, I often saw a variance of *300 points* between runs. Note that if I did one run then immediately did another, I would get close to the same score. But if my scores started low, the only way to fix them was to repeatedly close 3dmark and EXIT steam, and then reload it and run it again (and again) until the scores were normal. (this affects both Port Royal and Timespy).
> 
> There was discussion about this on evga forums also.
> This strange issue seemed to only affect 3dmark.
> 
> No one determined why this was happening. Some people thought it might be the shader cache. One person said his scores got back to normal after he reinstalled windows clean. Another person said his scores got back to normal when he ditched the Steam version and went standalone. I can't give you the answer you want. All I can tell you is what I've read.
> 
> Now, this latest version is just plain BAD. No other time has 3dmark not only given a CONSISTENT -500 points, but also destroyed my Heaven FPS by 15% (e.g. 200 FPS to 185 FPS) until I restart the computer. And this has already been verified by others so it's "not just me."


I get it. My scores are down 1000 in PR at the same settings. 

Unfortunately, there's a lot going on when these benchmarks run.
Steam runs multiple processes.
3DMark runs multiple processes.
Windows runs multiple processes and services.
There's different BIOSs - rebar and non-rebar.
There's different drivers.
There's Afterburner and other apps
There's card mods
There's unscheduled midnight Windows updates

Who knows how all this is interacting for better or for worse?

I'm trying to remain optimistic, but won't be surprised when 3DMark reports what they believe to be the issue - and they're wrong. 
I found it less than encouraging when their first response was to say that they had only updated the reporting screen so scores shouldn't have been affected.


----------



## KedarWolf

Quantum Reality said:


> This reminds me of how enabling PhysX used to skew 3DMark Vantage results.


One of my Nvidia options tricks when benchmarking is to manually force PhysX to the CPU.


----------



## Falkentyne

Beagle Box said:


> I get it. My scores are down 1000 in PR at the same settings.
> 
> Unfortunately, there' a lot going on when these benchmarks run.
> Steam runs multiple processes.
> 3DMark runs multiple processes.
> Windows runs multiple processes and services.
> There's different BIOSs - rebar and non-rebar.
> There's different drivers.
> There's Afterburner and other apps
> There's card mods
> There's unscheduled midnight Windows updates
> 
> Who knows how all this is interacting for better or for worse?
> 
> I'm trying to remain optimistic, but won't be surprised when 3DMark reports what they believe to be the issue - and they're wrong.
> I found it less than encouraging when their first response was to say that they had only updated the reporting screen so scores shouldn't have been affected.


Which is why you don't trust anything UL says.
I can instantly know what my score will be just by looking at the first 10 seconds of the entire benchmark.
Score is related to FPS. If your FPS is lower than normal, your score will be lower than normal.
It did get a bit weird with Rocket Lake and with the 466.11 drivers, but one other 100% way (which takes more patience) to determine what your score will be is to look at the "pre-landing" scene of the fighter ship, and then the landing scene where it's panning along its backside. If you memorize those two groups of FPS ranges, you will be able to accurately predict your final score within 50 points.

That "new" update not only DESTROYED the FPS throughout the entire benchmark (averaging a loss of 10% FPS globally), it also cost the Heaven benchmark 15% FPS (as I mentioned before). And I am not running 3DMark again even to determine if it's (the Heaven problem ONLY) because of SysInfo. Not touching 3DMark until someone else verifies that the problem was fixed.
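
Since a 3DMark-style graphics score scales roughly linearly with average FPS, the size of a points regression can be eyeballed from the FPS deficit. A toy sketch, where the multiplier K is a made-up placeholder and NOT UL's real Port Royal constant:

```python
# Toy model: score is proportional to average FPS.
K = 2600  # assumed placeholder multiplier, for illustration only

def score_from_fps(avg_fps: float, k: float = K) -> int:
    return round(avg_fps * k)

# A 10% FPS regression produces a proportional points regression:
baseline = score_from_fps(65.0)
regressed = score_from_fps(65.0 * 0.9)
print(baseline, regressed, baseline - regressed)
```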


----------



## GRABibus

Falkentyne said:


> Which is why you don't trust anything UL says.
> I can instantly know what my score will be just by looking at the first 10 seconds of the entire benchmark.
> Score is related to FPS. If your FPS is lower than normal, your score will be lower than normal.
> It did get a bit weird with Rocket Lake and with the 466.11 driverse, but one other 100% way (which takes more patience) to determine what your score will be, is to look at the "pre-landing" scene of the fighter ship, and then the landing scene where it's panning along its backside. If you memorize those two groups of FPS ranges, you will be able to 100% accurately predict your final score, within 50 points.
> 
> That "new" update not only DESTROYED the FPS throughout the entire benchmark (averaged a loss of 10% FPS globally), it also cost heaven benchmark 15% FPS (as I mentioned before). And I am not running 3Dmark again to even determine if it's (the Heaven problem ONLY) because of Sysinfo . Not touching 3dmark until someone else verifies that the problem was fixed.


Yes, when I saw the first 10 seconds in PR with FPS below 60, I knew there was an issue and that the score would be badly impacted.


----------



## GRABibus

KedarWolf said:


> One of my Nvidia options tricks when benchmarking is manually force PhysX to CPU.


does it really influence PR score ?


----------



## Beagle Box

Falkentyne said:


> Which is why you don't trust anything UL says.
> I can instantly know what my score will be just by looking at the first 10 seconds of the entire benchmark.
> Score is related to FPS. If your FPS is lower than normal, your score will be lower than normal.
> It did get a bit weird with Rocket Lake and with the 466.11 driverse, but one other 100% way (which takes more patience) to determine what your score will be, is to look at the "pre-landing" scene of the fighter ship, and then the landing scene where it's panning along its backside. If you memorize those two groups of FPS ranges, you will be able to 100% accurately predict your final score, within 50 points.


LOL
Yeah. I do that, but in more general terms.
If I don't see 65fps in the first scene as the center of the camera passes the frozen waiter, I know it's a bad run. 
I look for a minimum of 111-112fps in the overhead pre-landing shot. Max of 117-118fps or so there.
Also, a max fps below 94fps at the very beginning of the landing scene is bad.
Or not reaching 70fps in the last 5 seconds of the benchmark. 

Right now, I'm seeing 61 fps where I should be seeing 65 and 65 where I should be seeing 70. 
Something ain't right.


----------



## Biscottoman

Guys, is it possible that mining has affected my memory overclock after a few days? I was able to run my 3090 at +1300 while mining without crashing, then I bumped to +1350 and even +1400, which I ran for something like 20 hours without a crash (and I was able to benchmark Superposition 8K and TS Extreme even at +1450). Memory has always stayed under 80°C, averaging 76°C, because I have MP5Works + Gelid 15W pads. Yesterday I crashed while mining at +1400 two times, so today I went back to +1350, and after 4-5 hours it crashed again; then I went back even to +1300, which I'm 100% sure was rock solid, and again after 4-5 hours it even made my PC reboot. Did I just degrade my memory without high temps, and after only 5-6 days of mining?


----------



## J7SC

...I still think it might have s.th. to do with their monitoring (> reporting) changes... a bigger data vacuum, perhaps? Who knows. But even when killing SystemInfo via Ctrl-Alt-Del, it always pops back in during the run... even when it and monitoring are also turned off in 'options'.

...what is really disturbing, though, is what @Falkentyne mentioned about issues continuing with subsequent Heaven runs until reboot... and @Beagle Box's VRAM issues even after reboot, which is even worse...


----------



## Beagle Box

Biscottoman said:


> Guys, is it possible that mining has affected my memory overclock after a few days? I was able to run my 3090 at +1300 while mining without crashing, then I bumped to +1350 and even +1400, which I ran for something like 20 hours without a crash (and I was able to benchmark Superposition 8K and TS Extreme even at +1450). Memory has always stayed under 80°C, averaging 76°C, because I have MP5Works + Gelid 15W pads. Yesterday I crashed while mining at +1400 two times, so today I went back to +1350, and after 4-5 hours it crashed again; then I went back even to +1300, which I'm 100% sure was rock solid, and again after 4-5 hours it even made my PC reboot. Did I just degrade my memory without high temps, and after only 5-6 days of mining?





J7SC said:


> ...I still think it might have s.th. to do with their monitoring (> reporting) changes...a bigger-data vacuum perhaps ? Who knows. But even when wiping Systeminfo out via ctrl-alt-dlt, it always pops back in during the run....even when it and monitoring is also turned off in 'options'.
> 
> ...what is disturbing a lot though is what @Falkentyne mentioned about issues continuing w/ subsequent Heaven runs, until reboot... and @Beagle Box 's VRAM issues even after reboot, even worse...


I'm currently thinking high memory OCs aren't good for the memory, even if it's kept cool. 
Don't know if it's the memory chips themselves, the power stages or the controllers, but I do think something in the memory system may be degrading on my GPU, though I don't mine. I seem to recall someone of note (Igor? Elmor?) mentioning driving the memory hard might be a bad thing...


----------



## Biscottoman

Beagle Box said:


> I'm currently thinking high memory OCs aren't good for the memory, even if it's kept cool.
> Don't know if it's the memory chips themselves, the power stages or the controllers, but I do think something in the memory system may be degrading on my GPU, though I don't mine. I seem to recall someone of note (Igor? Elmor?) mentioning driving the memory hard might be a bad thing...


I'm going to DDU right now and swap from the 457.30 driver to 466.11, as you also stated that it should perform better on an RTX 3090 with rebar enabled. I just hope my RAM didn't degrade so fast, 'cause I've only had this card for two weeks.


----------



## Beagle Box

Biscottoman said:


> I'm going to ddu right now and swapping from 457.30 driver to 466.11 as you also stated that they should be also better performing for rtx 3090 with rebar enable, I just hope my ram didn't degradated so fast 'cause I have this card since two weeks


466.11 is an older driver. Use the latest official driver - 466.27.


----------



## SShadowS

yzonker said:


> It reads 110C+ even when the card is at idle?
> 
> Only bios I'm aware of that would not throttle is the KP XOC, but that is not a good idea if you can't even get good temp readings. (because it doesn't throttle at all).
> 
> Seems like you need to RMA the card.


Yes, I know I need to RMA it. But I don't want to do it now; it will take forever to get a replacement. So I'll wait until later.


----------



## J7SC

According to TechPowerUp's review of the 3090 Strix OC, the GDDR6X is actually rated at 1313 MHz (21 Gbps effective), yet Asus clocks it at 1219 MHz as 'default', as so many other vendors also do. This is a bit unusual; perhaps the vendors know that the heat double-sided GDDR6X produces is not s.th. they want to warrant over the longer term. For normal gaming and apps, I clock my water-cooled GPU at 2130 MHz and VRAM at 1313 MHz; both nowhere near the observed and sustainable max, but still overwhelmingly fast, much as I like to push the envelope a bit. 

There's also the question of loading a 'foreign' BIOS from a card that might have a different VRAM phase rating and design... not that I know for sure it could lead to quicker degradation, and I also don't think it underlies the latest 3DM 'update issues', but it's s.th. to keep in mind for the longer term.
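
For reference, GDDR6X's effective per-pin rate is 16x the base memory clock, which is where the 1219 MHz → 19.5 Gbps and 1313 MHz → 21 Gbps figures come from. A quick sanity check against the 3090's 384-bit bus:

```python
# GDDR6X moves 16 bits per pin per base-clock cycle (PAM4 signaling),
# so effective rate = base memory clock x 16.
def effective_gbps(mem_clock_mhz: float) -> float:
    return mem_clock_mhz * 16 / 1000  # Gbps per pin

def bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int = 384) -> float:
    return effective_gbps(mem_clock_mhz) * bus_width_bits / 8  # GB/s

print(effective_gbps(1219))        # ~19.5 Gbps: the 3090's stock effective rate
print(round(bandwidth_gbs(1219)))  # ~936 GB/s, matching the spec sheet
print(effective_gbps(1313))        # ~21.0 Gbps: the chip rating cited above
```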


----------



## des2k...

Beagle Box said:


> I'm currently thinking high memory OCs aren't good for the memory, even if it's kept cool.
> Don't know if it's the memory chips themselves, the power stages or the controllers, but I do think something in the memory system may be degrading on my GPU, though I don't mine. I seem to recall someone of note (Igor? Elmor?) mentioning driving the memory hard might be a bad thing...


The VRM for the memory is overbuilt; it runs at ~30% capacity at stock.

It's 4 stages at 50A each, so you're looking at 270W at 1.35V on the mem.

At +1500 mem you're not going over 110W or so of memory power; no way the VRM would degrade.

That heat is also spread over multiple chips.

I haven't experienced degradation with OC, but my mem on the back is mostly 54°C in games
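
The arithmetic above checks out; here is a quick sketch (stage count, per-stage current, and rail voltage are the poster's figures, not measured values, and the ~80W stock load is an assumption inferred from the "~30% at stock" remark):

```python
# Back-of-envelope check of the memory VRM headroom figures above.
def vrm_capacity_w(stages: int = 4, amps_per_stage: float = 50,
                   volts: float = 1.35) -> float:
    return stages * amps_per_stage * volts

cap = vrm_capacity_w()      # 4 x 50 A x 1.35 V = 270 W theoretical capacity
print(cap)
print(round(110 / cap, 2))  # ~0.41 load fraction at a heavy memory OC
print(round(80 / cap, 2))   # ~0.30: an assumed stock load consistent with "~30%"
```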


----------



## VinnieM

Remember the PCIe problems I had with my 3090? It would not run at PCIe 4.0 16x anymore and even Gen3 at 16x was not reliable.
Well, I've just replaced my 3800x with a 5900x and guess what? All problems are gone! It posts fast at Gen4 and performance is good.
Now I just need to tune (undervolt) the 5900x because it's dumping extra heat in the case in comparison with the 3800x (70-110W vs 40-60W while gaming).


----------



## Beagle Box

des2k... said:


> The VRM for the memory is overbuilt; it runs at ~30% capacity at stock.
> 
> It's 4 stages at 50A each, so you're looking at 270W at 1.35V on the mem.
> 
> At +1500 mem you're not going over 110W or so of memory power; no way the VRM would degrade.
> 
> That heat is also spread over multiple chips.
> 
> I haven't experienced degradation with OC, but my mem on the back is mostly 54°C in games


I doubt the problems I'm having are due to heat. More likely voltage or power delivery. 
I'm definitely having issues, and I'm thinking they're memory related, despite all OCing having been done under water with really good temps. 
Maybe I've popped a cap or something and it's unbalanced.
Dunno.
I haven't yet removed the backplate to check.


----------



## jomama22

J7SC said:


> ...I still think it might have s.th. to do with their monitoring (> reporting) changes...a bigger-data vacuum perhaps ? Who knows. But even when wiping Systeminfo out via ctrl-alt-dlt, it always pops back in during the run....even when it and monitoring is also turned off in 'options'.
> 
> ...what is disturbing a lot though is what @Falkentyne mentioned about issues continuing w/ subsequent Heaven runs, until reboot... and @Beagle Box 's VRAM issues even after reboot, even worse...


The SysInfo is there to watch for any changes made by the user during the runs, to prevent fake scoring.


----------



## Falkentyne

Obviously the thing to do is:

1) Reboot the computer clean, set your OC, run Heaven, and record your average FPS at the beginning, along the road (first scene only).
2) Close Heaven and load 3DMark.
3) Run Port Royal, then end the benchmark at the beginning.
4) Close 3DMark and run Heaven.

Is your FPS lower?
If yes:
5) Force "End task" on SysInfo (you may also need to manually terminate the Futuremark SystemInfo service via Run --> services.msc).
6) Run Heaven.

Is your FPS back to normal?

If your FPS is _STILL_ lower, uninstall 3DMark and don't ever touch that piece of crap program. 

(I'm not going to be the person to test this, sorry)


----------



## J7SC

jomama22 said:


> The SysInfo is there to watch for any changes made by the user during the runs, to prevent fake scoring.


I know, but what I'm saying is 

a.) When you close 3DM completely after use, Systeminfo stays active per task manager, until next reboot
b.) When turning off Systeminfo via options in 3DM, it comes on anyway (!), and stays on
c.) When turning off Systeminfo and HW monitoring, you cannot enter scores at 3DM DB anyway, at least not for comp purposes
d.) I hate snooping software that 'calls home' (like Windows, Google, RGB, my dog spot...)


----------



## Biscottoman

Beagle Box said:


> 466.11 is an older driver. Use the latest official driver - 466.27.


Seems like cleaning and updating the drivers fixed it; tomorrow I will test it more to be sure. But something tells me one of these memory crashes caused driver corruption and instability. It's not the first time this has happened to me with an NVIDIA card.


----------



## yzonker

Well look on the bright side, if they don't change PR back to the way it was, won't need to waste any time running it until the next gen of cards come out. LOL


----------



## Beagle Box

yzonker said:


> Well look on the bright side, if they don't change PR back to the way it was, won't need to waste any time running it until the next gen of cards come out. LOL


On the upside, it's once again a challenge to break 15000.


----------



## J7SC

yzonker said:


> Well look on the bright side, if they don't change PR back to the way it was, won't need to waste any time running it until the next gen of cards come out. LOL


haha, good thing I take screenshots of my 3DM results, else...


----------



## Vayne4800

Still having a hard time getting an answer to this: for those who are flashing their Ampere vbios, did you have any problems enabling EEPROM protection?


----------



## GRABibus

Beagle Box said:


> On the upside, it's once again a challenge to break 15000.


It was already a challenge 😂


----------



## Alex24buc

Could anybody help me understand why my Palit GameRock OC has suddenly lost 300 points in Port Royal? This score is at default settings, without OC and with the power limit increased to the maximum of 470W. It is not degradation of the card, because it is new (one month old) and works normally; I haven't done any OC on it. I saw that the power usage is 470W, so that isn't the culprit here. I also disabled ReBAR and I get the same score in Port Royal. What do you think could be the cause?


----------



## Beagle Box

Alex24buc said:


> Could anybody help me understand why my Palit GameRock OC has suddenly lost 300 points in Port Royal? This score is at default settings, without OC and with the power limit increased to the maximum of 470W. It is not degradation of the card, because it is new (one month old) and works normally; I haven't done any OC on it. I saw that the power usage is 470W, so that isn't the culprit here. I also disabled ReBAR and I get the same score in Port Royal. What do you think could be the cause?


You should take a few moments to read the last 7 or so pages of this thread.


----------



## GRABibus

Alex24buc said:


> Could anybody help me understand why my Palit GameRock OC has suddenly lost 300 points in Port Royal? This score is at default settings, without OC and with the power limit increased to the maximum of 470W. It is not degradation of the card, because it is new (one month old) and works normally; I haven't done any OC on it. I saw that the power usage is 470W, so that isn't the culprit here. I also disabled ReBAR and I get the same score in Port Royal. What do you think could be the cause?


You should be happy, most people have lost 500 points 😊


----------



## Biscottoman

We should just roll our 3DMark back to the older version that was linked a couple of pages ago and then wait for the fix update.


----------



## Beagle Box

Biscottoman said:


> We should just roll our 3DMark back to the older version that was linked a couple of pages ago and then wait for the fix update.


I've just been working on tweaks to reach 15100 on this version. It's taken a bit of creativity just to beat 15000, I must admit. But a challenge is a challenge.


----------



## J7SC

Beagle Box said:


> I've just been working on tweaks to reach 15100 on this version. It's taken a bit of creativity just to beat 15000, I must admit. But a challenge is a challenge.


..how about your earlier-described VRAM artifacts at OC and the electronic hum ?


----------



## Beagle Box

J7SC said:


> ..how about your earlier-described VRAM artifacts at OC and the electronic hum ?


The hum was a momentary thing - maybe 4 seconds before I could hit the PSU off switch. The whole system locked up with a BSOD. 
The artifacts are only there while playing games and only on certain screens.
I've decided to continue using and testing the GPU, if at lower clocks and going easy on the memory OC.
I'm ordering new thermal pads today and will deal with what I find when I take things apart to install them. 

Makes reaching 15100 virtually impossible, but I'm stuck at my desk at the moment, anyway, so I'm messing with it.

EDIT: I have noticed that when benches crash, if the screen freezes, it's sometimes in 8-bit color tones. So that's new...


----------



## J7SC

Beagle Box said:


> The hum was a momentary thing - maybe 4 seconds before I could hit the PSU off switch. The whole system locked up with a BSOD.
> The artifacts are only there while playing games and only on certain screens.
> I've decided to continue using and testing the GPU, if at lower clocks and going easy on the memory OC.
> I'm ordering new thermal pads today and will deal with what I find when I take things apart to install them.
> 
> Makes reaching 15100 virtually impossible, but I'm stuck at my desk at the moment, anyway, so I'm messing with it.
> 
> *EDIT: I have noticed that when benches crash, if the screen freezes, it's sometimes in 8-bit color tones. So that's new...*


I would be nervous and only do 2D until I'd taken it apart, checked the PCB, etc., and installed new pads, as there's a 'new' issue on top of the previous one, so some sort of progression... but that's just me.


----------



## Beagle Box

J7SC said:


> I would be nervous and only do 2D until I'd taken it apart, checked the PCB, etc., and installed new pads, as there's a 'new' issue on top of the previous one, so some sort of progression... but that's just me.


Yeah. I've been going back and forth on it. 
My experience with motherboards, NICs, and GPUs has been that the card will either stay as it is or get worse. 
If I can't use this GPU to game, I have no reason to have it. My MB has an IGP. 
So I'm going to see if it's going to get worse and just deal with it. 
If it gets worse, I'll send it to my EE brother for a diagnosis.


----------



## Vayne4800

Now I am wondering if anyone can even see my posts 🤨 or is the question stupid/answered/has no answer?


----------



## Beagle Box

Vayne4800 said:


> Now I am wondering if anyone can even see my posts 🤨 or is the question stupid/answered/has no answer?


Perhaps no one here has an answer for you. 
What exactly are you expecting us to do?
This is not the company Help Desk.


----------



## Vayne4800

Beagle Box said:


> Perhaps no one here has an answer for you.
> What exactly are you expecting us to do?
> This is not the company Help Desk.


I am expecting someone to respond. The thing is, I can't find anything on this issue with a clear answer, despite it being stickied as a step on the first page of this thread.


----------



## Beagle Box

Vayne4800 said:


> I am expecting someone to respond. The thing is, I can't find anything on this issue with a clear answer, despite it being stickied as a step on the first page of this thread.


I re-read your original post. I'm not sure what your issue is.
The BIOS seemed to flash correctly.
Usually you don't need to set protection back on manually, if that's what you tried to do.
Oftentimes you'll know protection is still off because GPU-Z will display your memory amount as 0 or unknown.
Reflashing will usually fix this error.

What do you think your problem is?

Or did I read the wrong post? lol?


----------



## Lobstar

Vayne4800 said:


> Now I am wondering if anyone can even see my posts 🤨 or is the question stupid/answered/has no answer?


You have a problem that a very, very limited number of people will ever even try to resolve themselves. The people with the skills and knowledge you seek might not even be reading this thread. To 99.99% of us here, you have a dead card that we would recycle. What exactly are you expecting? Here's some advice: bring it to a professional to figure out.


----------



## Falkentyne

Vayne4800 said:


> I am expecting someone to respond. The thing is, can't find anything on this issue with a clear answer. Despite it being stickied as a step on the first page of this thread.


If no one responded, it means no one knows. If someone knew, they would have answered. Simple as that. Please don't act elitist and entitled.

Also you posted your first question at night when more than half the main server population is asleep (not everyone lives in your country), and expect an answer?
Do you realize you posted at 12 AM PACIFIC TIME midnight?


----------



## GQNerd

Vayne4800 said:


> Now I am wondering if anyone can even see my posts 🤨 or is the question stupid/answered/has no answer?


I assume it’s because we didn’t quite understand your question... Are you trying to enable EEPROM protection after flashing?

I’ve only used the protection off command before flashing a rom, never afterwards.

If you’re trying to run protection off, and it’s failing, you probably have the wrong/old version of NVflash.


----------



## GQNerd

Beagle Box said:


> I re-read your original post. I'm not sure what your issue is.
> The BIOS seemed to flash correctly.
> Usually you don't need to set protection back on manually, if that's what you tried to do.
> Oftentimes you'll know protection is still off because GPU-Z will display your memory amount as 0 or unknown.
> Reflashing will usually fix this error.
> 
> What do you think your problem is?
> 
> Or did I read the wrong post? lol?


You weren’t the only one, lol


----------



## Vayne4800

Beagle Box said:


> I re-read your original post. I'm not sure what your issue is.
> The BIOS seemed to flash correctly.
> Usually you don't need to set protection back on manually, if that's what you tried to do.
> Oftentimes you'll know protection is still off because GPU-Z will display your memory amount as 0 or unknown.
> Reflashing will usually fix this error.
> 
> What do you think your problem is?
> 
> Or did I read the wrong post? lol?


Yep, I tried to set the protection back on manually, as per the stickied post on flashing steps. I did check GPU-Z and it does display the correct memory amount. I guess that means all is well and I don't need to set protection back on.










You read my post right and it seems all is well. Thanks for the information and GPU-Z check approach.



Lobstar said:


> You have a problem that a very, very limited number of people will ever even try to resolve themselves. The people with the skills and knowledge you seek might not even be reading this thread. To 99.99% of us here, you have a dead card that we would recycle. What exactly are you expecting? Here's some advice: bring it to a professional to figure out.


I believe I stated that the card flashed without issue. It is working as expected. I was just concerned about the EEPROM protection step, but it seems that is an outdated or unnecessary step?



Falkentyne said:


> If no one responded, it means no one knows. If someone knew, they would have answered. Simple as that. Please don't act elitist and entitled.
> 
> Also you posted your first question at night when more than half the main server population is asleep (not everyone lives in your country), and expect an answer?
> Do you realize you posted at 12 AM PACIFIC TIME midnight?


Not trying to be elitist or entitled. I have been posting on these forums, rarely, but I generally get a response. Anything else was not intentional, behaviour-wise; apologies if it seemed as such. Timezone-wise, well, I am aware of the difference and know that answers, if any, can take time to come.


----------



## gfunkernaught

Vayne4800 said:


> Hello everyone. I have been searching for information regarding the error I get when trying to use the nvflash protect-on function. I flashed my Trinity non-OC with the OC vbios (with ReBAR) and all went well except this last step. Should I be concerned? Also, junction temps went 2C higher (it was 104C in RE8 and is now 106C, with fan speed going from 75% max to 85% max). I'd appreciate thoughts on this.
> 
> Also lost 500 points in PR but gained as much in FS 3DMark. Issues with the 3DMark software?


It would help to see what exactly nvflash spat out during the process. You used a vbios specific to your card, right?

Beagle beat me to it lol nvm


----------



## Vayne4800

Miguelios said:


> I assume it’s because we didn’t quite understand your question... Are you trying to enable EEPROM protection after flashing?
> 
> I’ve only used the protection off command before flashing a rom, never afterwards.
> 
> If you’re trying to run protection off, and it’s failing, you probably have the wrong/old version of NVflash.


Apologies for not being clear. I am indeed trying to enable EEPROM protection after flashing.
I suspected that most people don't try to activate it; I searched quite a lot for any similar issues and it seems that's the case.
Running protection off works fine.

Thanks for the reply


----------



## Vayne4800

gfunkernaught said:


> It would help to see what exactly nvflash spit out during the process. You used vbios specific to your card right?
> 
> Beagle beat me to it lol nvm












I am using the latest Trinity OC vbios on a non-OC Trinity 3090. It worked as expected and allowed me to increase the power limit to 110%.


----------



## des2k...

you can't set protection on with nvflash

usually you set protection off with nvflash, then flash the vbios. The flashing process or the reboot sets the protection back on because if you try to flash again on the next reboot you'll get a warning that you need to set protection off.

Setting protection on manually with nvflash has never worked for my 3090.

You'll know if protection is off on the vbios after flashing because you'll get no display, card doesn't initialize.
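The flashing sequence des2k describes can be sketched as a tiny command builder. The `--protectoff` and `-6` flags are the ones used throughout this thread; the executable name is an assumption (it varies by nvflash build), and this sketch only prints the commands instead of executing them:

```python
# Sketch of the typical nvflash sequence discussed above: disable EEPROM
# write protection, then flash with -6 to override the subsystem-ID check.
# The executable name "nvflash64.exe" is illustrative, not prescriptive.

def flash_sequence(rom_path, nvflash="nvflash64.exe"):
    """Return the nvflash invocations, in order, for a typical reflash."""
    return [
        [nvflash, "--protectoff"],   # disable EEPROM write protection
        [nvflash, "-6", rom_path],   # flash, acknowledging the ID-mismatch prompt
    ]

# Dry run: print the commands rather than running them.
for cmd in flash_sequence("bios.rom"):
    print(" ".join(cmd))
```

Per the posts above, there is no reliable "protect on" step afterwards on these cards; protection appears to re-arm on its own by the next reboot.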


----------



## Beagle Box

des2k... said:


> you can't set protection on with nvflash
> 
> usually you set protection off with nvflash, then flash the vbios. The flashing process or the reboot sets the protection back on because if you try to flash again on the next reboot you'll get a warning that you need to set protection off.
> 
> Setting protection on manually with nvflash has never worked for my 3090.
> 
> You'll know if protection is off on the vbios after flashing because you'll get no display, card doesn't initialize.


That's interesting, because I had a different experience with the HOF BIOS. 
I flashed it and it hung on the last step, giving me a quick error message and a black screen.
I had to reboot.
On rebooting, GPU-Z said I had an NVIDIA BIOS and my memory was unknown. 
I ran Port Royal, which ran, but gave me an unknown BIOS error, which is correct because this BIOS is new.
I felt uneasy because of the GPU memory issues I've been having, so I tried to reflash the HOF BIOS.
I was able to flash it _without running [nvflash --protectoff]_ first!

It flashed correctly this time.

I reflashed my MSI BIOS and all went well first try.

I then reflashed the HOF BIOS and _again_ it blew up at the end and GPU-Z gave me the unknown memory amount.
I reflashed it again without --protectoff and it took, showing the correct amount of memory.

So there's some sort of weirdness with my card and the HOF BIOS.

I've flashed the MSI BIOS back in and am going to leave it alone for a while.

I guess I'm lucky it booted unprotected and maybe with an only partially updated BIOS. 😅


----------



## GRABibus

Beagle Box said:


> That's interesting, because I had a different experience with the HOF BIOS.
> I flashed it and it hung on the last step, giving me a quick error message and a black screen.
> I had to reboot.
> On rebooting, GPU-Z said I had an NVIDIA BIOS and my memory was unknown.
> I ran Port Royal, which ran, but gave me an unknown BIOS error, which is correct because this BIOS is new.
> I felt uneasy because of the GPU memory issues I've been having, so I tried to reflash the HOF BIOS.
> I was able to flash it _without running [nvflash --protectoff]_ first!
> 
> It flashed correctly this time.
> 
> I reflashed my MSI BIOS and all went well first try.
> 
> I then reflashed the HOF BIOS and _again_ it blew up at the end and GPU-Z gave me the unknown memory amount.
> I reflashed it again without --protectoff and it took, showing the correct amount of memory.
> 
> So there's some sort of weirdness with my card and the HOF BIOS.
> 
> I've flashed the MSI BIOS back in and am going to leave it alone for a while.
> 
> I guess I'm lucky it booted unprotected and maybe with an only partially updated BIOS. 😅


You definitely have a memory issue…

Concerning protectoff: on my Strix, on the former EVGA FTW3 Ultra I had, and on the HOF I tested 2 weeks ago, I never had to « protect off » before the first flash.


----------



## Beagle Box

GRABibus said:


> You definitely have a memory issue…
> 
> Concerning protectoff: on my Strix, on the former EVGA FTW3 Ultra I had, and on the HOF I tested 2 weeks ago, I never had to « protect off » before the first flash.


My strix has always refused to flash when I forget --protectoff except when using the HOF BIOS or the ASUS flash tool. 
But there's something going on with my strix, so mine is not the usual case.


----------



## GRABibus

Beagle Box said:


> My strix has always refused to flash when I forget --protectoff except when using the HOF BIOS or the ASUS flash tool.
> But there's something going on with my strix, so mine is not the usual case.


What is stranger is that I never had to use « protectoff » on any 3090 I've had 😂


----------



## Beagle Box

GRABibus said:


> What is stranger is that I never had to use « protectoff » on any 3090 I've had 😂


Maybe you have a different NVFlash version that has the command embedded.


----------



## gfunkernaught

@GRABibus 
Same here. I always use nvflash.exe -6 bios.rom and have never had a problem. I believe the exe I'm using is patched for 3090s.


----------



## Vayne4800

Interesting comments on the matter. I did read somewhere that some vendors do not enable EEPROM protection for their cards, hence nvflash -6 works immediately. Then again, as Beagle Box mentioned, it might be embedded in a later version of nvflash. Though it seems, per the feedback here, that no one has managed to get protect-on working on their 3090 so far, and it might not even be a necessary step anyway.


----------



## TonyBrownTown

VinnieM said:


> So it's just the first byte that's not written (zeroed out)? What happens if you disable the "SECTOR" option? Or if you leave it enabled, change sector from and to in 0 and 255.
> Does the "Edit" button work by the way? Maybe you can manually correct the first byte.


Trying to change the sector to 0, it auto-changes to 1. 

The Edit function only edits the buffer; when you write, the first sector is still lost.

I think I am going to try and order a new BIOS chip and solder it on. :/

Thanks for the ideas though!


----------



## geriatricpollywog

Biscottoman said:


> Guys, is it possible that mining has affected my memory overclock after a few days? I was able to run my 3090 at +1300 while mining without crashing, then I bumped it to +1350 and even +1400, which I ran for something like 20 hours without a crash (and I was able to benchmark Superposition 8K and TS Extreme even at +1450). Memory has always stayed under 80C, averaging 76, because I have MP5Works + Gelid 15W pads. Yesterday I crashed twice while mining at +1400, so today I went back to +1350 and after 4-5 hours it crashed again; then I went back even to +1300, which I'm 100% sure was rock solid, and again after 4-5 hours it even made my PC reboot. Did I just degrade my memory without high temps and after only 5-6 days of mining?


I saw this with my 2080 Ti and 3090. On both cards, on day 1 I could set memory to +1500; then it “degraded” to +1300 after a week, with no further degradation. However, the average memory clock speed reported by 3DMark never actually changed. I think this might not be degradation but rather the memory calibrating itself or something.


----------



## GRABibus

After a repaste with Conductonaut on my Strix with the stock air cooler, changing the memory pads to Thermalright Odyssey, and adding a 140mm fan pulling air off the backplate, I am happy with my temps!

22°C ambient, open PC, BIOS: EVGA RTX 3090 FTW3 ULTRA 500W, version 94.02.42.80.27





Ps : no comment on my FOV please, I like playing with "Fisheye" !!


----------



## gfunkernaught

GRABibus said:


> After a repaste with Conductonaut on my Strix with the stock air cooler, changing the memory pads to Thermalright Odyssey, and adding a 140mm fan pulling air off the backplate, I am happy with my temps!
> 
> 22°C ambient, open PC, BIOS: EVGA RTX 3090 FTW3 ULTRA 500W, version 94.02.42.80.27
> 
> 
> 
> 
> 
> Ps : no comment on my FOV please, I like to play with "Fisheye" !!


Amazing, duder! I can only smile in envy, since my Trio barely does [email protected], under water, LM, the whole 9.


----------



## yzonker

In doing more testing with RDR2, what I've found is that it is not stable using the KP XOC BIOS, even at settings that should be equivalent to (and even more conservative than) what I've run and found to be stable using the Galax 390W BIOS. Although I didn't try going even further down, my conclusion is that RDR2 isn't completely stable at all on my machine using the KP XOC. 

Anyone have any thoughts on that? Something to try? I haven't tried flashing the KP XOC and then wiping the drivers and RDR2 settings, though. Might try that next time I put the XOC on. Doesn't really matter much, but I'm kinda curious as to why that would happen. It seems to be the only game I've tested that has this problem.


----------



## Sheyster

GRABibus said:


> Ps : no comment on my FOV please, I like to play with "Fisheye" !!


Ahhh "Devastation" .. Great CQ map! I have not played BFV in quite some time but it's still installed. I may have to jump in and play a bit! 

Regarding your FOV, how high is it set to? Just curious, not criticizing!


----------



## GRABibus

Sheyster said:


> Regarding your FOV, how high is it set to? Just curious, not criticizing!


No problem with criticism, as long as it's constructive. 

My FOV is at maximum!

I tested a lower FOV, but the soldier runs slowly and of course I can't see to the sides.
I've been used to playing at maximum FOV for years, in all my games, even back when I played matches in COD1 and COD2 on an Italian team 15 years ago.


----------



## devilhead

KedarWolf said:


> On the Elmor's Labs EVC2SX do I need to solder the probes to the contacts on my Strix OC PCB or is there a better way to attach them without soldering?


I got my EVC2SX too and I'm not willing to void the warranty. Is it enough to just put slightly thicker wires through the holes and tape them on the other side? Or is soldering necessary?


----------



## Falkentyne

KedarWolf said:


> On the Elmor's Labs EVC2SX do I need to solder the probes to the contacts on my Strix OC PCB or is there a better way to attach them without soldering?


Are there "holes" in the PCB?
If there are, it may be possible to hook up male jumper cables to those holes, but that would require heatsink clearance for attachment. All depends on where the holes are. Might require also removing the backplate as well in that case. I was under the impression that the EVC2SX already came with the required solderless jumper cables though?

Something similar to this?








Amazon.com: ELEGOO 120pcs Multicolored Dupont Wire 40pin Male to Female, 40pin Male to Male, 40pin Female to Female Breadboard Jumper Wires Ribbon Cables Kit Compatible with Arduino Projects : Industrial & Scientific




----------



## motivman

Is shunt modding still worth it? I finally got hold of a good sample strix, but not sure if I should shunt or just run xoc bios. I do not like the fact that power reporting on 3rd pin is borked on the strix with evga bios.


----------



## Beagle Box

motivman said:


> Is shunt modding still worth it? I finally got hold of a good sample strix, but not sure if I should shunt or just run xoc bios. I do not like the fact that power reporting on 3rd pin is borked on the strix with evga bios.


What do you want to do with it? Mine runs great / scores great as-is. 
I've shunted cards before, but not this one; I see no need. 
There are other BIOSes that game better than the KP BIOSes. 
I suggest you test the card in your system first, doing whatever you're going to use it for.


----------



## Falkentyne

motivman said:


> Is shunt modding still worth it? I finally got hold of a good sample strix, but not sure if I should shunt or just run xoc bios. I do not like the fact that power reporting on 3rd pin is borked on the strix with evga bios.


Just run the bios I posted multiple times first and see if you like the results. Keep in mind 8 pin #3 won't be able to exceed 175W with this bios due to a bug with the SRC3 power limit not being changed from default (when 8 pin #3 reaches 175W you will get a TDP throttle via TDP Normalized %). I do not know if there is a newer one since I don't even have a strix. Rename the file to .ROM or .BIN (remove the .txt at the end).

Shunt modding is always worth it (or just use the higher tdp bioses) not just for the overall 10% FPS boost (going from 350W to 550W), but also for the more consistent frametimes (clocks not jumping around as much). This was seen as early as Pascal with TDP mods.
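For anyone weighing the mod itself: shunt modding works by soldering a resistor in parallel with the stock current-sense shunt, so the card sees a smaller voltage drop and under-reports its own power draw, which raises the real limit. A minimal sketch of the math (the resistor values are illustrative, not taken from any specific 3090 PCB; only the 350W stock TDP comes from the spec table at the top of the thread):

```python
# Shunt mod math sketch: a resistor in parallel with the stock current-sense
# shunt lowers the effective resistance, so the card reads a smaller voltage
# drop and under-reports power. Values are illustrative, not board-specific.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock_shunt = 0.005          # ohms (5 mOhm), illustrative
added_shunt = 0.005          # ohms, stacked on top of the stock shunt
effective = parallel(stock_shunt, added_shunt)   # 2.5 mOhm

under_read = stock_shunt / effective             # factor the card under-reads by
reported_limit_w = 350                           # stock 3090 TDP from the spec table
real_limit_w = reported_limit_w * under_read     # actual draw at "350 W" reported

print(f"Effective shunt: {effective * 1000:.2f} mOhm")
print(f"Real power at a reported {reported_limit_w} W: {real_limit_w:.0f} W")
```

Stacking an equal-value resistor halves the effective shunt, so the card reads half the true current and the real power limit doubles; milder ratios are common when people only want the sort of modest headroom discussed above.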


----------



## motivman

Falkentyne said:


> Just run the bios I posted multiple times first and see if you like the results. Keep in mind 8 pin #3 won't be able to exceed 175W with this bios due to a bug with the SRC3 power limit not being changed from default (when 8 pin #3 reaches 175W you will get a TDP throttle via TDP Normalized %). I do not know if there is a newer one since I don't even have a strix. Rename the file to .ROM or .BIN (remove the .txt at the end).
> 
> Shunt modding is always worth it (or just use the higher tdp bioses) not just for the overall 10% FPS boost (going from 350W to 550W), but also for the more consistent frametimes (clocks not jumping around as much). This was seen as early as Pascal with TDP mods.


I have been gone for months, so what is this? A custom bios for the strix? What is the power limit? This is exciting!!!!!


----------



## GRABibus

motivman said:


> I have been gone for months, so what is this? A custom bios for the strix? What is the power limit? This is exciting!!!!!


1000W
If you are on air, avoid it !


----------



## Falkentyne

GRABibus said:


> 1000W
> If you are on air, avoid it !


It won't reach 1000W. As I said in my last post, 8 pin #3 is limited to 175W, which will limit the entire card. You'll be lucky to exceed 550W.


----------



## motivman

Falkentyne said:


> It won't reach 1000W. As I said in my last post, 8 pin #3 is limited to 175W, which will limit the entire card. You'll be lucky to exceed 550W.


so it’s essentially a 550w bios for the strix. Does it outperform the 1000w bios? Third 8 pin power reporting works just fine on this?


----------



## Falkentyne

motivman said:


> so it’s essentially a 550w bios for the strix. Does it outperform the 1000w bios? Third 8 pin power reporting works just fine on this?


Test it for yourself. I don't have a Strix.


----------



## GQNerd

Think I've finally maxed out my KP with H2O... at least until I do the ice bucket challenge or go LN2.

3dmark.com - PR - 16,312

Swapped from the AIO to a Hydro Copper block, used liquid metal and 12.8 W/mK pads. It's on its own loop with a 360mm x 60mm rad in push + pull. Classified tool used, and on the OLD version of 3DMark, before anyone asks.


----------



## J7SC

Miguelios said:


> Think I've finally maxed out my KP with H2O... at least until I do the ice bucket challenge or go LN2.
> 
> 3dmark.com - PR - 16,312
> 
> Swapped from the AIO to a Hydro Copper block, used liquid metal and 12.8 W/mK pads. It's on its own loop with a 360mm x 60mm rad in push + pull. Classified tool used, and on the OLD version of 3DMark, before anyone asks.


Nice  What kind of back-plate cooling setup do you run ?


----------



## GQNerd

J7SC said:


> Nice  What kind of back-plate cooling setup do you run ?


Swapped the back pads for the same 12.8 W/mK used on the front, added a 2mm 6 W/mK thermal pad on top of the backplate, and a big aluminum heatsink with a fan on top.


----------



## J7SC

Miguelios said:


> Swapped the back pads for the same 12.8 W/mK used on the front, added a 2mm 6 W/mK thermal pad on top of the backplate, and a big aluminum heatsink with a fan on top.


Thanks  ...I'm looking at further back-plate mods on my w-cooled Strix over the next week or two.


----------



## gfunkernaught

yzonker said:


> In doing more testing with RDR2, what I've found is that it is not stable using the KP XOC BIOS, even at settings that should be equivalent to (and even more conservative than) what I've run and found to be stable using the Galax 390W BIOS. Although I didn't try going even further down, my conclusion is that RDR2 isn't completely stable at all on my machine using the KP XOC.
> 
> Anyone have any thoughts on that? Something to try? I haven't tried flashing the KP XOC and then wiping the drivers and RDR2 settings, though. Might try that next time I put the XOC on. Doesn't really matter much, but I'm kinda curious as to why that would happen. It seems to be the only game I've tested that has this problem.


Are you referring to the 520w xoc or 1kw?


----------



## yzonker

gfunkernaught said:


> Are you referring to the 520w xoc or 1kw?


The 1kW, running on my Zotac 3090. 

I did try a clean driver install and blowing away the RDR2 settings directory last night, but it still crashed at a fairly mild +120 core / +500 mem at about 440W max (70% PL). 

On a 390W BIOS, even with +180 core, it is rock solid. Core clocks are a little lower in that case, but not by much, thanks to the higher offset. 

Kinda weird.


----------



## Sweetzen

Hi guys, sorry for my English, but I will try to explain my issue...
I have an RTX 3090 FTW3 ULTRA (94.02.42.80.27) and my friend has a 3090 STRIX OC (stock BIOS), and we both have a Ryzen 5900X.
I get better results in benchmarks like Time Spy (CPU and GPU), but in games he gets more FPS than me, like 20-25 more in Cold War (ultra, 1440p).
I don't know what the issue is. Should I flash my BIOS with a better one?

Thanks for your help guys.


----------



## Lobstar

Sweetzen said:


> Hy guys, sorry for my English but i will try to explain my issue...
> I have a RTX 3090 FTW3 ULTA (94.02.42.80.27) and my friend a 3090 STRIX OC ( stock bios) and whe have both a ryzen 5900x
> i have better results on bechmark like time spy (cpu and gpu) but in game he has more FPS than me... like 20-25 in Cold war (ultra 1440p)
> I don't know what is the issue, do i have to flash my bios with my better one ?
> 
> Thanks for your help guys.


The Strix is a much better card; I say that as an owner of the Ultra. Check your power draw to ensure you don't have one of the faulty-designed cards. If you do, get the special RMA done. If you don't, well, understand that EVGA cards suck this generation.


----------



## Sweetzen

Lobstar said:


> The strix is a much better card. I say that as an owner of the ultra. Check your power draw to ensure you don't have the faulty designed card. If you do get the special RMA done. If you don't, well, understand EVGA cards suck this generation.


Oh okay! I have to check the power draw.

So I did a stress test in Time Spy Extreme and that's my result; I don't go over 475W on Board Power Draw. Is it a faulty-designed card to you?

Thanks a lot for your help.


----------



## KedarWolf

Futuremark 3DMark for Windows (v2.18.7185) Download

A new version; Port Royal is supposed to be fixed, but I haven't tested it yet.

A new version of *Futuremark 3DMark for Windows* is available in the TechPowerUp Downloads section.

The new version is: *v2.18.7185*

New in this version:

*Fixed*

Fixed a minor issue introduced in 3DMark v2.18.7184 that had a small effect on Port Royal performance on NVIDIA GeForce RTX 30-series cards.




----------



## des2k...

yzonker said:


> 1kw running on my Zotac 3090.
> 
> I did try a clean driver install and blowing away the RDR2 settings directory last night but it still crashed at a fairly mild +120 core +500 mem at about 440w max. (70% PL)
> 
> On a 390w bios even with +180 core, it is rock solid. Core clocks are a little lower in that case, but not much due to the higher offset.
> 
> Kinda weird.


Nothing weird about it. The KINGPIN BIOS might have a more aggressive boost table, plus the KINGPIN cards are very aggressive on real vcore, and Zotac is not, plus all the big caps for vcore (those don't like frequent frequency up/down on the core, or a high effective frequency).

Adding an offset to that might be enough to crash.

I know I can't get any big offset to work; that's why I use VF points, which allow a better / lower effective frequency on the Zotac.

Big offsets work with other BIOSes like the 370W, 390W, etc.
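For anyone following along, the offset-vs-VF-points distinction des2k... is drawing can be sketched with a toy model. Everything below is invented for illustration (the curve values and function names are not from any real tool); real curves come from the BIOS and are edited per point in MSI Afterburner:

```python
# Toy model of two OC strategies on a voltage/frequency (VF) curve.
# STOCK_CURVE is a made-up (millivolts, MHz) table, not real BIOS data.

STOCK_CURVE = [(800, 1695), (850, 1800), (900, 1905), (950, 1995), (1000, 2055)]

def apply_offset(curve, offset_mhz):
    """Global offset: every point shifts up, so the card still walks the
    whole curve and the effective clock bounces as the voltage bounces."""
    return [(mv, mhz + offset_mhz) for mv, mhz in curve]

def lock_vf_point(curve, lock_mv, target_mhz):
    """VF-point edit: raise one voltage point to the target and flatten
    everything above it, so the card holds one clock at one voltage."""
    return [(mv, target_mhz if mv >= lock_mv else mhz) for mv, mhz in curve]

offset_curve = apply_offset(STOCK_CURVE, 120)        # whole curve +120 MHz
locked_curve = lock_vf_point(STOCK_CURVE, 900, 1950) # flat at 1950 from 900 mV up
```

The flattened curve is why the VF-points approach gives a steadier effective frequency: above the lock point there is nothing higher for the boost algorithm to bounce to.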


----------



## Beagle Box

KedarWolf said:


> Futuremark 3DMark for Windows (v2.18.7185) Download
> 
> A new version, Port Royal supposed to be fixed but I haven't tested it yet.
> 
> A new version of *Futuremark 3DMark for Windows* is available in the TechPowerUp Downloads section.
> 
> The new version is: *v2.18.7185*
> 
> New in this version:
> 
> *Fixed*
> 
> Fixed a minor issue introduced in 3DMark v2.18.7184 that had a small effect on Port Royal performance on NVIDIA GeForce RTX 30-series cards.


Love that. This "minor issue" with a "small effect" dragged my scores down by over 500 points.
These are with the MSI Suprim X non-ReBAR BIOS, yesterday and today.

My Port Royal scores before and after today's 'fix':


----------



## yzonker

des2k... said:


> nothing wierd about it, kingpin bios might have more aggressive boost table + the kingpin card are very aggressive on real vcore, zotac is not + all big caps for vcore(those don't like frequent freq up/down on the core or high eff freq)
> 
> adding offset to that might be enough to crash,
> 
> I know I can't get any big offset to work , that's why I use VF points which allows a better / lower eff frequency on the Zotac
> 
> big offsets work with other bios like 370,390w,etc


I have tried limiting the VF curve so it stays locked at a specific voltage point. It still crashed. I had applied an offset first (no more than +120).

Maybe I don't quite follow, though. You are applying an offset to the VF points, correct?

Otherwise you'd be on the stock curve at probably no more than 2000MHz or so.

Like I've mentioned before, this seems to be unique to RDR2. I've tested CP2077, FS 2020, Doom, and Quake 2 RTX and did not see this problem. Varying levels of stability, but nowhere near as limited as RDR2 seems to be. I'm not sure it's ever 100% stable with any reasonable settings on the KP XOC.


----------



## Lobstar

Sweetzen said:


> So i did a stresse test on Time spy extreme and that's my result, i don't go over 475w on Board Power Draw, is it a faulty designed card for you ?
> 
> Thanks a lot for your help.


Your board looks pretty balanced in that screenshot. Without more info I'd say it's probably fine.

If you spend some time playing with overclocking, and a different BIOS depending on your cooling situation, you might be able to catch up to your friend.


----------



## Sweetzen

Lobstar said:


> Your board looks pretty balanced in that screen shot. Without more info I'd say it's probably fine.
> 
> If you spend some time playing with overclocking and a different bios depending on your cooling situation you might be able to catch up to your friend.


Nice, so I don't have to RMA? I have a 2014 serial number. Do you have a good BIOS to propose? I have the 500W one from EVGA, but I don't know if I can flash another one, like the Strix or Suprim X?


----------



## jura11

Just got my 5950X; let's see how it performs, can't wait to test it.

Regarding the VF points: with my RTX 2080 Ti Strix I used VF points all the time, because only with a VF-points curve could I run 2205MHz or 2220MHz.

With the RTX 3090 I haven't tried the VF curve at all, but I will do tests later this week, probably when I rebuild my loop.

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

yzonker said:


> 1kw running on my Zotac 3090.
> 
> I did try a clean driver install and blowing away the RDR2 settings directory last night but it still crashed at a fairly mild +120 core +500 mem at about 440w max. (70% PL)
> 
> On a 390w bios even with +180 core, it is rock solid. Core clocks are a little lower in that case, but not much due to the higher offset.
> 
> Kinda weird.


Same mem offset on both BIOSes? Keep in mind that the 1KW BIOS will have higher power limits across the board, not just VID, vs. a regular BIOS. It could be something your PCB doesn't like while on the 1KW. You said the crash happens at a specific spot in RDR2, right?


----------



## GRABibus

New 3DMark update :
My PR score with my Strix at stock settings, fans on auto, 22°C ambient, drivers on auto, ReBAR enabled, and with the EVGA RTX 3090 FTW3 ULTRA 500W BIOS, version 94.02.42.80.27:

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

550 points more than last week.


----------



## Alex24buc

The new update from 3DMark fixed my Port Royal score. I get 14,013 with no OC and ReBAR enabled on my Palit GameRock OC, which is a good score.


----------



## yzonker

gfunkernaught said:


> Same mem offset on both bios? Keep in mind that the 1kw bios will have higher power limits across the board, not just VID, vs a regular bios. Could be something your pcb doesn't like while on 1kw. You said the crash happens at a specific spot on rdr2 right?


I've tried no offset on the mem too. But yeah, I found a main-story mission that crashed in approximately the same place. It crashes in other places too, though.


----------



## des2k...

New 3DMark update: about 100 points less vs. before, but the room is very hot (27°C), so maybe that affects the score.

I scored 15,166 in Port Royal (AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10): www.3dmark.com


----------



## Lobstar

I'm also down about 100pts from before the borked update.


----------



## J7SC

jura11 said:


> Just got my 5950X and let's see how it performs, can't wait to test it
> 
> Regarding the VF points, with RTX 2080Ti Strix I used to VF points all the time because only with VF points curve I could run 2205MHz or 2220MHz
> 
> With RTX 3090 I didn't tried VF curve at all but will do tests later this week probably when I do rebuild of my loop
> 
> Hope this helps
> 
> Thanks, Jura


Congrats! ...I love the 5950X for both work and play applications. I still like Intel as well, but this is the third AMD 16-core I've got, and it joins a 2950X and a 3950X.

The 5950X also plays very nicely with the 3090, ReBAR, etc.


----------



## jura11

J7SC said:


> Congrats ! ...I love the 5950X for both work and play applications. I still like Intel as well, but this is the third AMD 16 core I got and it joins a 2950X and 3950X.
> 
> 5950X also plays very nice with 3090, r_BAR etc.


From initial testing, the 5950X is running really hot, hahaha; first time I have seen 82°C under heavy load in Cinebench. Using PBO with Curve Optimizer for now.

I'll probably redo the OC settings, try a manual OC, and see where I land.

My 3900X will go into my brother's PC, or I will put it into one of the builds I'm planning.

Hope this helps 

Thanks, Jura


----------



## yzonker

My settings were slightly different, but it looks like it scores really close to what it did before on this 390W BIOS I've got flashed right now.

Result: www.3dmark.com


----------



## felix121

Put one of these bad boys on top of my Aurora 10 to help exhaust the hot air... got a nice 6°C drop... problem is, will it affect the default exhaust fan?


----------



## Bobbylee

felix121 said:


> Put one of this bad boys on the top of my aurora 10 to help exhaust the hot air...got a nice drop of 6c...problem is will it affect the default fan for the exhaust?


So long as whatever fan is inside is going in the same direction, it should be good.


----------



## felix121

Bobbylee said:


> So long as what ever fan inside is going in the same direction it should be good


Thanks for the reply, yeah... I think there are no issues yet... set it on low... amazed a $15 USB fan can drop the temp by that much.


----------



## yzonker

This is moderately interesting, to me anyway. The changes in the results for the Corsair blocks really appear to show the balance between pressure on the core and pressure on the thermal pads. It really looks like Corsair reduced the memory thermal-pad pressure to improve core temps. Sadly they didn't give water temps, so there's no way to compare block deltas. I would have liked to see that, to see where my block falls.

EKWB Active Backplate and Corsair XG7 Retest - KitGuru: "Today we look back on a review from last month where we tested three GPU water blocks for RTX 30-ser..." (www.kitguru.net)





And yes, annoyingly they tested the active back plate with a friggin' 3080. ***?


----------



## des2k...

yzonker said:


> This is moderately interesting, to me anyway. The changes in results of the Corsair blocks appears to really show the balance between pressure on the the core and pressure on the thermal pads. Really appears Corsair reduced the memory thermal pad pressure to improve core temps. Sadly they didn't give water temps, so no way to compare block deltas. I would have liked to have seen that to see where my block falls.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EKWB Active Backplate and Corsair XG7 Retest - KitGuru
> 
> 
> Today we look back on a review from last month where we tested three GPU water blocks for RTX 30-ser
> 
> 
> 
> 
> www.kitguru.net
> 
> 
> 
> 
> 
> And yes, annoyingly they tested the active back plate with a friggin' 3080. ***?


What block / delta do you have again?

Without water temps / GPU delta / wattage, this review is complete trash!


----------



## yzonker

des2k... said:


> what block / delta do you have again ?
> 
> without water temps / gpu delta,wattage, this review is complete trash!


We covered that before. About 13°C at 390W.


----------



## des2k...

yzonker said:


> We covered that before. About 13C @390w.


Hehe, bad memory. But yeah, that's good vs. some reviews.
There's always the "sanding standoffs" route; it works very well for core pressure, but I'm not sure it's worth much.

I moved from a high delta to a good delta at up to 500W; other than running cool on the GPU, I didn't gain anything in terms of OC / stability lol


----------



## yzonker

des2k... said:


> hehe bad memory  but yeah that's good vs some reviews
> There's always "sanding standoffs" route, works very well for core pressure but not sure if that's worth much.
> 
> I moved from high delta to good delta up to 500w, other then running cool on the GPU, I didn't gain anything in terms of OC / stability lol


That's what I thought was interesting. It probably explains why Igor's test showed the Corsair block so much worse as well.

No, I'm leaving it alone for now. Not worth the effort. Besides, I'll have to take my loop apart when I get a 3090 Ti anyway. LOL


----------



## GRABibus

yzonker said:


> when I get a 3090 TI


----------



## yzonker

GRABibus said:


>


The question is whether that's real or just somebody at Zotac being dumb. I'm not sure.


----------



## GRABibus

yzonker said:


> The question is whether that's real or just somebody at Zotac being dumb. I'm not sure.


A 3080 Ti, you mean?


----------



## yzonker

GRABibus said:


> a 3080Ti you meant ?


No, this fumble by Zotac:

Zotac Thinks That Nvidia Will Make a GeForce RTX 3090 Ti: "Insider info or wild guess?" (www.tomshardware.com)


----------



## Toopy

Can anyone help me out here as to why this Zotac 3090 AMP Holo is power-limiting at 280W?

To me it's probably because 8-pin #3 is at 150W, yet there is plenty of headroom available on #1, #2 and the PCIe bus.
The maximum in the BIOS is 407W.


----------



## Falkentyne

Toopy said:


> Can anyone help me out here as to why this zotac 3090 amp holo is power limiting at 280w?
> 
> To me it's probably because 8pin #3 is at 150W, yet there is plenty of headroom available on 1,2 and the pci-e bus.
> Maximum in the bios is 407w


RMA that card. It's defective.
The fact that the MVDDC (VRAM) power draw is 3.1W and the PCIe slot power is 14.7W should give that away. There's something messed up with the power balancing on that card. Your GPU Chip Power is also reading sky high.

Are you mining? Because if you are, the MVDDC should be pulling at least 100W, and GPU Chip Power should be reporting half of what it's showing. So something is wrong with the power draw: it's pulling too much from 8-pin #3 and not enough from slot power or 8-pin #2. RMA it instantly.
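The imbalance being described can be sanity-checked mechanically. A rough sketch, assuming the usual spec limits of 150W per 8-pin connector and ~75W from the PCIe slot; the threshold fractions and most of the rail readings below are made up for illustration, with only the 150W and 14.7W figures taken from the discussion above:

```python
# Flags rails pegged at their limit while another rail sits nearly idle,
# which is the signature of broken power balancing described above.
# Assumed limits: 150W per 8-pin (spec), ~75W for the PCIe slot.

RAIL_LIMITS = {"8pin_1": 150.0, "8pin_2": 150.0, "8pin_3": 150.0, "slot": 75.0}

def flag_imbalanced_rails(draws_w, pegged_frac=0.95, idle_frac=0.25):
    """Return rails at >=95% of their limit while some rail is under 25%."""
    frac = {rail: draws_w[rail] / RAIL_LIMITS[rail] for rail in draws_w}
    if min(frac.values()) >= idle_frac:
        return []  # load is spread reasonably; nothing to flag
    return sorted(rail for rail, f in frac.items() if f >= pegged_frac)

# Hypothetical readings in the spirit of the screenshot being discussed
# (only the 150W on 8-pin #3 and the 14.7W slot figure were reported):
readings = {"8pin_1": 60.0, "8pin_2": 55.0, "8pin_3": 150.0, "slot": 14.7}
```

With these numbers, 8-pin #3 gets flagged; a healthy card under load would show all four fractions sitting in roughly the same band.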


----------



## Toopy

Yes, mining.

I figured out it's stuck in P2 mode when mining.
Running 3DMark allows it to hit its power limit; now I'm trying to figure out how to get it to P0 while mining.
The mem OC is dragging the core down.

It's a new PCB from Zotac, apparently, so I wonder if that's the reason for the power-reading discrepancies; they have this "power boost" chip or something.

I have already opened it and re-padded it, and found a horrible thermal design. Some of the memory on the back has no padding whatsoever.
Even worse, there is plastic/epoxy where the pads would go, for stupid lighting. *** were Zotac thinking?


----------



## Lobstar

Toopy said:


> I figured out Its stuck in P2 mode when mining,
> Running 3d mark allows it to hit its p/l, now trying to figure out how to get it to P0 while mining.


There are scripts out there that will do it for you. Or you can make one yourself. P0 power state - Crypto Mining Blog
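An alternative to the P-state tweaks from that blog post is pinning clocks with nvidia-smi, which supports locking GPU clocks on recent drivers. A minimal sketch that only builds the commands, so it can be shown without a GPU; the clock and index values are examples, and applying them needs admin rights:

```python
# Builds nvidia-smi commands for pinning clocks/power instead of relying on
# whichever P-state the driver picks. Construction only; pass the lists to
# subprocess.run() to actually apply them.
# Assumes a driver recent enough to support --lock-gpu-clocks.

def lock_gpu_clocks_cmd(gpu_index, min_mhz, max_mhz):
    """nvidia-smi -i <idx> --lock-gpu-clocks <min>,<max>"""
    return ["nvidia-smi", "-i", str(gpu_index),
            "--lock-gpu-clocks", f"{min_mhz},{max_mhz}"]

def power_limit_cmd(gpu_index, watts):
    """nvidia-smi -i <idx> --power-limit <watts>"""
    return ["nvidia-smi", "-i", str(gpu_index), "--power-limit", str(watts)]

cmd = lock_gpu_clocks_cmd(0, 1395, 1695)  # example: 3090 base-to-boost range
```

This doesn't literally force P0 the way Profile Inspector does, but locking the core clock range sidesteps the P2 downclock on drivers that honor it (memory clocks have a separate `--lock-memory-clocks` switch on newer drivers).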


----------



## Toopy

Lobstar said:


> There are scripts out there that will do it for you. Or you can make one yourself. P0 power state - Crypto Mining Blog


This was the first thing I tried, and this is what I get:

Thanks though.


----------



## yzonker

I use this version of Nvidia Profile Inspector to force P0. Not sure what site I downloaded it from, though.

Download Nvidia Profile Inspector 3.5.0.0 Latest Version (nvidiaprofileinspector.com)


----------



## felix121

yzonker said:


> I use this version of Nvidia Profile Inspector to force P0. Not sure what site I downloaded it from though.
> 
> Download Nvidia Profile Inspector 3.5.0.0 Latest Version (nvidiaprofileinspector.com)


Lol... can't remember the site... must have been a long night.


----------



## gfunkernaught

After the custom EVGA ReBAR 520W BIOS flash, I've been getting random CTDs while playing Titanfall 2. Nothing in the event logs. I even backed the offset down a bin. Just now I got this error that I've never seen before. I wonder if it has something to do with ReBAR, or if it's just another case of degrading OC stability:

+75 core, +1200 mem. Most likely an unstable OC. It's the power limit bouncing the VID around and causing crashes. _ugh_ The weird thing is that before, when I had the 1KW BIOS at 50%, this didn't happen.


----------



## Toopy

I think Zotac have implemented an ETH limiter in the AMP Holo 3090. Still trying to find an answer, but with the same mining program and a different algo I get full power.
Anything ETH-based and it's power-limited to 280W, which crashes the core speed and limits hashrate.


----------



## J7SC

Toopy said:


> I think that zotac have implemented a eth limiter in the AMP Holo 3090, Still trying to find an answer, but same mining program different algo and full power
> Anything eth based and its power limited to 280w, which crashes the core speed and limits hashrate


When did you get your Zotac 3090? I read a few weeks ago, when the 3080 Ti and other upcoming cards were officially announced with hash limiters baked in and a new code stamped on the GPU die, that they would implement 'running changes' on 3080s and 3090s as well 'over the next weeks and months'... the retail box should have a three-letter 'marking' to that effect, though (never mind the GPU die).


----------



## Toopy

J7SC said:


> When did you get your ZoTac 3090 ? I read a few weeks ago when 3080 Ti and other upcoming cards were officially announced w/ hash limiters baked in and a new code stamped on the GPU die that they would implement 'running changes' on 3080s and 3090s as well 'over the next weeks and months'...the retail box should have a three-letter 'marking' to that effect though (never mind the GPU die)


Ordered last week (it only came into stock then), arrived this week.
I have forced P0 with Profile Inspector; no change. Kicking myself for not taking a pic of (or noting) the GPU on the core side of the card; I only checked to see if it had GA102-200 crossed out.


----------



## Xeq54

Toopy said:


> I think that zotac have implemented a eth limiter in the AMP Holo 3090, Still trying to find an answer, but same mining program different algo and full power
> Anything eth based and its power limited to 280w, which crashes the core speed and limits hashrate


I think there is something wrong with the card; no way should MVDDC read 3.1 watts, not even at idle. This indicates something is wrong with the power-sensing circuit on the mem rail, and the BIOS triggers the power limit because it expects a much higher reading. How much MVDDC draw do you see when gaming? Are you also limited when gaming? Did you do any shunt modding?

P2 is actually good for mining, as you can have a slightly unstable OC on the RAM and still mine without the system freezing. With P2 enabled I am at 125-126 MH/s; when I disable it, I have to run the RAM slower, at around 120 MH/s, otherwise the system will hang at random.

I am mining under water at 21.8 Gbps memory speed, and MVDDC draw shows 145W. Board draw is around 295W. It stays around 90W when playing Warzone.
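As a rough sanity check on that memory speed: GDDR6X bandwidth is just the per-pin data rate times the bus width, and the 3090's bus is 384-bit per the spec sheet. A back-of-envelope sketch (the function and variable names are mine, not from any tool):

```python
# Back-of-envelope GDDR6X bandwidth: per-pin data rate (Gbps) times bus
# width (bits), divided by 8 bits per byte. The 3090 has a 384-bit bus.

def bandwidth_gbs(data_rate_gbps, bus_width_bits=384):
    return data_rate_gbps * bus_width_bits / 8

stock = bandwidth_gbs(19.5)  # stock 19.5 Gbps -> 936 GB/s, matching the spec
oc = bandwidth_gbs(21.8)     # 21.8 Gbps mining clock -> ~1046 GB/s
```

That ~12% extra bandwidth is why the memory OC moves the ETH hashrate so much more than the core clock does.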


----------



## Toopy

Xeq54 said:


> I think there is something wrong with the card, no way should MVDDC read 3.1 watts, not even in Idle. This indicates something is wrong with the power sensing circuit on the mem rail and the bios triggers the power limit as it expects a much higher reading. How much MVDDC draw do you see when gaming ? Are you also limited when gaming ? Did you do any shunt modding ?
> 
> P2 is actually good for mining, as you can have a slightly unstable OC on ram and still mine without the system freezing. With P2 enabled I am at 125-126MH/s, when I disable it, I have to run the ram slower with around 120MH/s otherwise the system will hang at random.
> 
> I am mining under water at 21.8 gbps memory speed and MVDDC draw shows 145watts. Board draw is around 295W. It stays around 90watts when playing warzone.


I know the readings being off isn't great, but I don't think anything is wrong with the card. I found someone else on another forum who bought the same card; his reads identically and has the same hashrate issue.
The card works fine doing everything else: gaming (where it boosts to 407W) and mining anything but ETH and ETH-based coins.

No shunt modding; I only changed/installed the mem pads.
I put some over the memory string that had none from stock; getting heat into the epoxy is better than nothing. The mem temps aren't great when mining, but they're better than they were stock and aren't throttling (96-98°C solid).

The card has some "power boost" chip, as Zotac call it, so the power circuitry has been modified from stock, which is probably why GPU-Z isn't reading it correctly.


----------



## yzonker

Toopy said:


> I know that not reading correctly isn't great but I don't think anything is wrong with the card. I found someone else on another forum that bought the same card his reads identically and has the same hashrate issue.
> The card works fine doing everything else but mining eth, when gaming it boosts to 407w, mining anything but eth and eth based coins.
> 
> No shunt modding, only changed/installed the mem pads.
> I put some over the memory string that had none from stock, getting heat into the epoxy is better than nothing. The mem temps aren't great when mining but are better than they were stock and aren't throttling (96-8 solid)
> 
> The card has some power boost "chip" as zotac call it so the power circuitry has been modified from stock which is probably why gpu-z isn't reading it correctly


What hashrate do you see? 280W is enough for 100+ MH/s.


----------



## felix121

Toopy said:


> I know that not reading correctly isn't great but I don't think anything is wrong with the card. I found someone else on another forum that bought the same card his reads identically and has the same hashrate issue.
> The card works fine doing everything else but mining eth, when gaming it boosts to 407w, mining anything but eth and eth based coins.
> 
> No shunt modding, only changed/installed the mem pads.
> I put some over the memory string that had none from stock, getting heat into the epoxy is better than nothing. The mem temps aren't great when mining but are better than they were stock and aren't throttling (96-8 solid)
> 
> The card has some power boost "chip" as zotac call it so the power circuitry has been modified from stock which is probably why gpu-z isn't reading it correctly


Why would you want it to pull more power for hashing... if you're getting a decent hashrate at low wattage, that's good...


----------



## Toopy

yzonker said:


> What hashrate do you see? 280w is enough for 100+ mh/s.





felix121 said:


> why would you want it to pull more power for hashing....if ur getting a decent hashrate with low wattage thats good ...


It's pulling 106 MH/s.
The 280W limit is hindering the core speed; the other 3090s and 3080s I have perform best at ~320-330W with 1200 on the core, then whatever the memory can go to without thermal throttling.

This card is different: you start mining, it goes to ~330W with the core running at 1200MHz, then after a second the core drops to ~1000MHz and the power drops to 280W.
If you increase the mem OC, the core drops lower, further reducing hashrate, as the mem takes power from the core.

Mining anything else works fine, as do gaming, benching, etc.


----------



## yzonker

Toopy said:


> It's pulling 106Mh,
> The 280w limit is hindering the core speed, the other 3090's and 80's I have perform best @ ~320-30w with1200 on the core, then whatever the memory can go to without thermal throttling.
> 
> This card is different, you start mining it goes to ~330w with the core running at 1200Mhz then after a second it drops to ~1000 and the power drops to 280w.
> If you increase the mem o/c the core drops lower, further reducing hashrate, as the mem takes power off the core.
> 
> Mining anything else works fine as does gaming, benching etc


You only need 270-280W to get close to 120 MH/s. Notice I have the VF curve locked at the lowest voltage point, and a negative core offset to minimize core power usage. Just slowly increase the core offset until the hashrate stops increasing. I lost the VRAM lottery, so going above +900 on the mem doesn't help my hashrate.
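That "raise the offset until hashrate stops increasing" procedure is just a plateau search. A toy sketch of the stopping rule; the hashrate function here is fake and made up for illustration, where on real hardware you'd read MH/s from the miner between steps:

```python
# Toy version of "increase the core offset until hashrate stops increasing".
# measure() stands in for polling MH/s from the miner; the fake curve below
# plateaus, purely to demonstrate the stopping rule.

def tune_offset(measure, start=0, step=15, min_gain=0.1, max_offset=300):
    best_offset, best_rate = start, measure(start)
    for offset in range(start + step, max_offset + step, step):
        rate = measure(offset)
        if rate - best_rate < min_gain:  # plateau reached: stop stepping
            break
        best_offset, best_rate = offset, rate
    return best_offset, best_rate

fake_hashrate = lambda off: min(100 + off * 0.1, 112.0)  # plateaus at 112 MH/s
```

A step of 15 matches NVIDIA's clock-bin granularity, so anything finer gets rounded away by the boost algorithm anyway.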


----------



## Toopy

yzonker said:


> You only need 270-280w to get close to 120 mh/s. Notice I have the VF curve locked at the lowest voltage point and I have a negative core offset to minimize core power usage. Just slowly increase the core offset until hashrate doesn't increase. I lost the vram lottery, so it doesn't help hashrate above +900 for me.


The card clocks really well in benches and gaming. It boosts and locks solid at 2025MHz core with +85 core and +1200 mem, and hits the 407W BIOS limit. Still testing.
It just won't go over 280W when mining ETH.


----------



## yzonker

Toopy said:


> The card clocks really well in benches and gaming. It boosts and locks solid at 2025 core with a +85core & +1200 mem clock and hits the 407w bios limit. Still testing.
> It just won't go over 280w when mining eth.


All I'm saying is that's enough power to get close to max hashrate.

I don't know why it doesn't go higher, although since the rest of the readings are incorrect, possibly the total power reading is wrong too. You're obviously pulling more than single-digit watts on the mem, or it wouldn't be hashing 100+, and the mem temp wouldn't be 90+°C.


----------



## dual109

Hi,
I'm running an MSI 3090 Trio X with the Strix OC BIOS (450W, 94.02.26.C0.13), and I regularly see the card pulling upwards of 475W. However, my clocks and voltage still bounce all over the place under heavy load; I rarely see my voltage go over 1.000V, and clocks drop by up to 100MHz under load. Is there a custom BIOS out there that negates this behaviour and stabilizes my clocks?

The card rarely hits 65°C under heavy load, and I'm using a V/F curve in MSI Afterburner.

I flashed to the Strix BIOS from the default (390W) to reduce this, but it seems like this card is also choking on the Strix BIOS, as it appears to be drawing upwards of 450W.

Thanks


----------



## J7SC

Toopy said:


> Ordered last week (only came into stock then) arrived this week.
> I have forced P0 with profile inspector no change. Kicking my self for not taking a pic (or noting the gpu, I only checked to see if it had GA102-200 crossed out) the core side of the card.


 ...here is in part what I was referring to earlier


----------



## yzonker

J7SC said:


> ...here is in part what I was referring to earlier


Yeah, this isn't a hashrate limiter he's seeing; it's some quirk with the readings being incorrect. He couldn't get to 100+ MH/s on an LHR card.


----------



## J7SC

yzonker said:


> Yea this isn't a hashrate limiter he's seeing. It's some quirk with the readings being incorrect. Couldn't get to 100+ mh/s on a lhs card.


...I agree, and it isn't even clear whether this will affect 3090s as well (though I read elsewhere that it eventually will). Still, the potential is there for some accidental 'mix and match' at the vendor level... stranger things have happened.


----------



## gfunkernaught

dual109 said:


> hi,
> I'm running an MSI 3090 Trio X with the Strix OC bios (450w) (94.02.26.C0.13) and I regularly see the card pulling upwards of 475w however I still find that my clocks and voltage are still bouncing all over the place under heavy load and rarely see my voltage hit over 1.000v and clocks drop up to 100Mhz under . Is there a custom bios out there that negates this behaviour and stabilizes my clocks?
> 
> Card rarely hits 65c under heavy load and I'm using a V/F curve in MSI afterburner.
> 
> I flashed to the strix bios from the default (390w) to reduce this but seems like this card is also choking on the Strix bios as it appears to be drawing upward of 450w.
> 
> Thanks


I have the same card. I found the only way to avoid clock/VID bounce when overclocking is to use the 1000W BIOS, set the power limit to 50-60%, and keep the core offset in check, because you can still end up bouncing, just nowhere near as much as on the stock BIOS. The Trio is very finicky.

If you use the 1KW BIOS, make sure you water-cool the card, use better pads, and have plenty of rads. It's crucial to keep the temperature of the whole board and all of its components low when using the XOC 1KW BIOS; you can damage the card if you don't. Low as in below 45°C core temp, and stay under 60°C if you're doing an extreme OC on the VRAM.

Otherwise, just use the EVGA 520W BIOS and do a mild offset overclock.


----------



## yzonker

gfunkernaught said:


> After the custom evga rebar 520w bios flash, I've been getting random CTD's while playing titanfall 2. Nothing in the event logs. I even backed the offset down a bin. Just now I got this error that I've never seen before. Wonder if it has something to do with rebar, or just another case of degrading OC stability:
> 
> +75 core +1200 mem. Most likely unstable OC. Its the power limit bouncing the VID around and causing crashes. _ugh_ Weird thing is that before when I had the 1kw bios at 50%, this didn't happen.


Do you think your card has actually degraded or is this just one bios making the card a little more stable than another?


----------



## gfunkernaught

yzonker said:


> Do you think your card has actually degraded or is this just one bios making the card a little more stable than another?


You know, that's a tough one. For Cyberpunk, the 1KW BIOS was less stable than the 520W ReBAR one, but the reverse was true for Titanfall 2 and other games. Very strange. I'm leaning towards clock/VID bounce, but I've never seen that specific message before. I flashed back to the 1KW last night and played Titanfall 2 with no issues.


----------



## gfunkernaught

Dup post


----------



## felix121

yzonker said:


> Do you think your card has actually degraded or is this just one bios making the card a little more stable than another?


BIOS, driver... a lot of factors affect the card. Actually, most parts of your PC have built-in safety mechanisms that keep them from damaging themselves, so I suggest you stop stressing about something that doesn't exist. Anyway, don't you have a warranty on it?


----------



## gfunkernaught

felix121 said:


> BIOS, driver... a lot of factors affect the card. Actually, most parts of your PC have built-in safety mechanisms that keep them from damaging themselves, so I suggest you stop stressing about something that doesn't exist. Anyway, don't you have a warranty on it?


Degradation is the easiest and quickest answer, but it's also not impossible. For my card, every crash I've seen so far from an unstable OC was VID bounce. I've noticed the core clock and VID don't always change together; they're not synced. Like if the clock drops from 2100 to 2085, the VID doesn't drop until the next bin drop; then the core goes back up while the VID stays at the lower bin, and then I get a crash. I think using a multi-point V/F curve would solve that; however, there's the issue of having to set the curve 30MHz higher than the desired clock, which causes problems as well. If I want 2100, I set it to 2130, and then sometimes it will actually reach 2130 and crash because the VID wasn't set high enough. So I just use the offset slider and a high power limit to avoid those issues.
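The bin/VID mismatch described above can be sketched in a few lines. The 15 MHz step matches Ampere's clock bins, but the V/F table values below are invented purely for illustration, not read from any real card:

```python
# Illustrative sketch of the clock/VID "bin" mismatch described above.
# The 15 MHz bin size matches Ampere's stepping; the VID values are made up.

BIN = 15  # MHz per clock bin

# Hypothetical V/F points: bin-aligned clock (MHz) -> VID (mV)
vf = {2070: 950, 2085: 968, 2100: 987, 2115: 1000, 2130: 1018}

def snap(clock_mhz):
    """Snap a requested clock down to its bin boundary."""
    return (clock_mhz // BIN) * BIN

def vid_for(clock_mhz):
    """Look up the voltage the curve assigns to a clock's bin."""
    return vf[snap(clock_mhz)]

# The failure mode: the core bounces back up a bin while the VID is still
# sitting at the lower bin's voltage -> undervolted core -> crash.
core_clock = 2100
stale_vid  = vid_for(core_clock - BIN)   # VID lagging at the 2085 bin
needed_vid = vid_for(core_clock)
print(f"core at {core_clock} MHz with {stale_vid} mV, needs {needed_vid} mV")
```

If the lagging VID is ever lower than what the current bin needs, you get exactly the undervolt-crash pattern described above.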


----------



## ViRuS2k

Definitely something wrong with your Trio X 3090. Using the same card, mine can draw as high as 780W under demanding games, and could probably go higher.
Currently I have it set to a 300W max power limit when mining at 1500MHz core / 1350MHz memory (21704 effective), getting between 120-126 MH/s. Memory can go as high as +1450; at +1500 I get correction errors in the hashing program, nanominer.
I can play games at +1600MHz memory without an issue; I think I have golden-sample memory, or I lucked out with the memory on my card.
Everything is under water though, with the memory reaching 78°C max under mining and 50s to low 60s memory temps when gaming...


----------



## felix121

gfunkernaught said:


> Degradation is the easiest and quickest answer, but it's also not impossible. For my card, every crash I've seen so far from an unstable OC was VID bounce. I've noticed the core clock and VID don't always change together; they're not synced. Like if the clock drops from 2100 to 2085, the VID doesn't drop until the next bin drop; then the core goes back up while the VID stays at the lower bin, and then I get a crash. I think using a multi-point V/F curve would solve that; however, there's the issue of having to set the curve 30MHz higher than the desired clock, which causes problems as well. If I want 2100, I set it to 2130, and then sometimes it will actually reach 2130 and crash because the VID wasn't set high enough. So I just use the offset slider and a high power limit to avoid those issues.


Most crashes are caused by people trying to push their card too hard, as in manually overclocking it. You have a GPU that isn't capable of running certain games at certain settings or frame rates, but people insist on pushing the limit, and when it crashes they blame it on degradation. It doesn't work that way. If you're not overclocking or manually messing around with settings, and suddenly your card is running lower FPS in certain games at the same settings, then maybe, yeah...


----------



## felix121

ViRuS2k said:


> Definitely something wrong with your Trio X 3090. Using the same card, mine can draw as high as 780W under demanding games, and could probably go higher.
> Currently I have it set to a 300W max power limit when mining at 1500MHz core / 1350MHz memory (21704 effective), getting between 120-126 MH/s. Memory can go as high as +1450; at +1500 I get correction errors in the hashing program, nanominer.
> I can play games at +1600MHz memory without an issue; I think I have golden-sample memory, or I lucked out with the memory on my card.
> Everything is under water though, with the memory reaching 78°C max under mining and 50s to low 60s memory temps when gaming...


You can't compare mining to gaming; the two are totally different beasts.


----------



## dk10438

Toopy said:


> It's pulling 106 MH/s.
> The 280W limit is hindering the core speed; the other 3090s and 3080s I have perform best at ~320-330W with +1200 on the core, then whatever the memory can do without thermal throttling.
> 
> This card is different: you start mining, it goes to ~330W with the core running at 1200MHz, then after a second the core drops to ~1000 and the power drops to 280W.
> If you increase the mem OC the core drops lower, further reducing hashrate, as the mem takes power from the core.
> 
> Mining anything else works fine, as does gaming, benching etc.


The power limit seems kinda high IMO.
My 3090 is at -250 core, +1300 memory, 299W and 122 MH/s.
Try limiting the core and see what happens...


----------



## felix121

dk10438 said:


> The power limit seems kinda high IMO.
> My 3090 is at -250 core, +1300 memory, 299W and 122 MH/s.
> Try limiting the core and see what happens...


That's not bad, you're under 300W... why push it for more?


----------



## dk10438

felix121 said:


> That's not bad, you're under 300W... why push it for more?


I don't quite understand...
Are you asking why I don't cut the power to 280 or 290W to improve efficiency, or why I don't increase to 300+W to get a greater hashrate?


----------



## felix121

dk10438 said:


> I don't quite understand...
> Are you asking why I don't cut the power to 280 or 290W to improve efficiency, or why I don't increase to 300+W to get a greater hashrate?


Seems like it's running efficiently between 280-290W... or maybe I misunderstood, sorry...


----------



## gfunkernaught

ViRuS2k said:


> Definitely something wrong with your Trio X 3090. Using the same card, mine can draw as high as 780W under demanding games, and could probably go higher.
> Currently I have it set to a 300W max power limit when mining at 1500MHz core / 1350MHz memory (21704 effective), getting between 120-126 MH/s. Memory can go as high as +1450; at +1500 I get correction errors in the hashing program, nanominer.
> I can play games at +1600MHz memory without an issue; I think I have golden-sample memory, or I lucked out with the memory on my card.
> Everything is under water though, with the memory reaching 78°C max under mining and 50s to low 60s memory temps when gaming...


780w on a Trio just sounds reckless...


----------



## dk10438

dk10438 said:


> I don't quite understand...
> Are you asking why I don't cut the power to 280 or 290W to improve efficiency, or why I don't increase to 300+W to get a greater hashrate?


Yeah, if I cut the power to 290W the hashrate goes down. I can increase the power to 310W and +1500 memory, but it maxes out at 124-125 MH/s, so it didn't seem worth it. 122 MH/s seems pretty reasonable...
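The trade-off above is easy to sanity-check as hashrate per watt; the numbers below are just the ones quoted in these posts, nothing more:

```python
# Hashrate-per-watt for the two settings discussed above (numbers from the posts).
configs = {
    "299 W, +1300 mem": (122, 299),   # (MH/s, watts)
    "310 W, +1500 mem": (125, 310),
}
for name, (mhs, watts) in configs.items():
    print(f"{name}: {mhs / watts:.4f} MH/s per watt")
```

The lower-power setting comes out slightly more efficient per watt, which backs up the "not worth it" call.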


----------



## des2k...

dk10438 said:


> Yeah, if I cut the power to 290W the hashrate goes down. I can increase the power to 310W and +1500 memory, but it maxes out at 124-125 MH/s, so it didn't seem worth it. 122 MH/s seems pretty reasonable...


You don't need more than the default 850mV / 1700MHz on the core. At that low frequency not much power is needed, because mining uses just the memory controller + memory.

It's 260W on my card; with +1500 mem it's always 128 MH/s.

That's about 100W for memory and 160W for core/cache.


----------



## ViRuS2k

gfunkernaught said:


> 780w on a Trio just sounds reckless...


I didn't say I run it at that; I just know it's capable of that and more. I was replying to the guy with the same card that seems to be power limited for some reason.


----------



## gfunkernaught

ViRuS2k said:


> I didn't say I run it at that; I just know it's capable of that and more. I was replying to the guy with the same card that seems to be power limited for some reason.


Is it though? Isn't the Trio a mid-range 3090? Capable for how long before something pops? The highest I've taken my Trio was 680W with Quake II RTX, and that just didn't feel right lol. Now if it were a Strix, Kingpin, or HOF level card, then yeah.


----------



## yzonker

gfunkernaught said:


> Is it though? Isn't the Trio a mid-range 3090? Capable for how long before something pops? The highest I've taken my Trio was 680W with Quake II RTX, and that just didn't feel right lol. Now if it were a Strix, Kingpin, or HOF level card, then yeah.


It's only 14 phases, I think, if this is the right card?









MSI GeForce RTX 3090 Gaming X Trio Review (www.techpowerup.com): "The MSI GeForce RTX 3090 Gaming X is the quietest RTX 3090 we've tested today. Acoustics are pretty impressive actually, considering the performance and heat output. MSI has also overclocked their card and bumped its power limit, which yields a very decent performance improvement."





I remember this reddit thread about it too.


https://www.reddit.com/r/overclocking/comments/j6dlpi


----------



## Toopy

dk10438 said:


> The power limit seems kinda high IMO.
> My 3090 is at -250 core, +1300 memory, 299W and 122 MH/s.
> Try limiting the core and see what happens...


The core is already dropping to ~1000 when the screen is off. If I try to ramp the mem up it drops further; if I reduce the core it reduces as well. Both drop hashrate.
All the other GA102s I have act differently. Where the blue ink is, is where I held the cursor during the snip.

This card limits power to 280W within seconds when mining DaggerHashimoto; everything else works fine.


----------



## motivman

Dual shunt-modded 3090 Strix... getting great hashrates, but the memory junction temps suck. Both cards are on Phanteks waterblocks... any ideas on how I can reduce my memory junction temps? I already replaced the backplate thermal pads with Thermalright 12.8 W/mK, but it didn't help much with the memory junction temps... SMH.


----------



## ViRuS2k

I don't see any issue with my Trio X. It's under a full waterblock with mem blocks, so if the card wants to draw that power and has good temps, I don't see the problem...
It's heat that kills ****.


----------



## yzonker

Toopy said:


> The core is already dropping to ~1000 when the screen is off. If I try to ramp the mem up it drops further; if I reduce the core it reduces as well. Both drop hashrate.
> All the other GA102s I have act differently. Where the blue ink is, is where I held the cursor during the snip.
> 
> This card limits power to 280W within seconds when mining DaggerHashimoto; everything else works fine.
> 
> View attachment 2511758


I wonder if that's actually a hardware problem or just the bios. I'd be tempted to flash one of the KP bios to it.


----------



## ViRuS2k

This is what a Gaming X Trio should look like. I also have the Suprim X BIOS flashed, the ReBAR version.


----------



## ViRuS2k

motivman said:


> Dual shunt-modded 3090 Strix... getting great hashrates, but the memory junction temps suck. Both cards are on Phanteks waterblocks... any ideas on how I can reduce my memory junction temps? I already replaced the backplate thermal pads with Thermalright 12.8 W/mK, but it didn't help much with the memory junction temps... SMH.
> 
> View attachment 2511759


You need an active backplate; the MP5 works, or some janky flat heatsink with a fan on top...


----------



## Falkentyne

motivman said:


> Dual shunt-modded 3090 Strix... getting great hashrates, but the memory junction temps suck. Both cards are on Phanteks waterblocks... any ideas on how I can reduce my memory junction temps? I already replaced the backplate thermal pads with Thermalright 12.8 W/mK, but it didn't help much with the memory junction temps... SMH.
> 
> View attachment 2511759


What are you using to cool the backplate?
Because if you're using nothing, those are exactly the temps you should get.
Even if you put a rectangular low-profile heatsink on the backplate with 3M thermal tape or thermal pads, the backplate will _still_ reach close to the same temp (a few C lower depending on what you put on it); it will just take _longer_ to get there. That's because eventually the heatsink itself becomes heat soaked, after the backplate is already heat soaked. It's like trying to run a high-TDP CPU with a massive heatsink but no fan: the temps will skyrocket, but the larger the heatsink, the longer it takes to reach the saturation point (until you have a heatsink massive enough to dissipate the heat passively without becoming soaked). That's where fans come in.

Just putting a fan on helps. Putting on a heatsink + thermal pads/tape and then a fan helps the most.

You need to actively cool it to avoid that, by cooling the heatsink you put on the backplate.
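The heat-soak argument above can be sketched with a one-node lumped thermal model. Every number here (heat input, capacities, conductances, ambient) is an assumed illustration, not a measurement of any card:

```python
# Minimal lumped-capacitance sketch of the "heat soak" argument above.
# All numbers are illustrative assumptions, not measurements of any real card.

def soak(heat_w, capacity_j_per_k, conduct_w_per_k, minutes=60, dt=1.0, t_amb=25.0):
    """Euler-integrate dT/dt = (Q - h*(T - T_amb)) / C; return the final temp."""
    t = t_amb
    for _ in range(int(minutes * 60 / dt)):
        t += (heat_w - conduct_w_per_k * (t - t_amb)) / capacity_j_per_k * dt
    return t

# Assume 30 W soaking into the backplate region for an hour:
bare     = soak(30, capacity_j_per_k=400,  conduct_w_per_k=0.5)   # flat plate, still air
big_sink = soak(30, capacity_j_per_k=2000, conduct_w_per_k=0.6)   # heatsink, still air
with_fan = soak(30, capacity_j_per_k=2000, conduct_w_per_k=3.0)   # heatsink + fan

print(round(bare), round(big_sink), round(with_fan))
```

The passive heatsink only buys time (bigger capacity, similar steady state of T_amb + Q/h); the fan is what actually lowers where the temperature settles.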


----------



## WillP

motivman said:


> Dual shunt-modded 3090 Strix... getting great hashrates, but the memory junction temps suck. Both cards are on Phanteks waterblocks... any ideas on how I can reduce my memory junction temps? I already replaced the backplate thermal pads with Thermalright 12.8 W/mK, but it didn't help much with the memory junction temps... SMH.
> 
> View attachment 2511759


The MP5 backplate cooler keeps my VRAM at 80-84°C when mining at 123 MH/s for prolonged periods. It seems the proper active backplates work even better.


----------



## yzonker

A chance to show off my ghetto HSF again. Good for 80-82C while mining, although only 115 MH/s.


----------



## dk10438

An old AMD cooler on top of the backplate with aluminum heatsinks and a fan.
82 degrees at 122 MH/s.


----------



## WillP

Interesting, it seems just about any backplate cooling solution knocks 20-25C off a flat backplate. Kinda begs the question of how Nvidia and the AIBs completely missed the issue?


----------



## yzonker

dk10438 said:


> An old AMD cooler on top of the backplate with aluminum heatsinks and a fan.
> 82 degrees at 122 MH/s.
> View attachment 2511775


I probably would have done that but all I have is the giant 3900x HSF. (yes I considered it)


----------



## yzonker

WillP said:


> Interesting, it seems just about any backplate cooling solution knocks 20-25C off a flat backplate. Kinda begs the question of how Nvidia and the AIBs completely missed the issue?


Well the waterblock dropped mine from thermal throttling (110C+) to 90-92C. The HSF I show above got me to 80-82C.


----------



## Falkentyne

WillP said:


> Interesting, it seems just about any backplate cooling solution knocks 20-25C off a flat backplate. Kinda begs the question of how Nvidia and the AIBs completely missed the issue?


They didn't miss the issue.
Tell me directly: in what universe have you EVER seen a video card with a backplate heatsink that was not a custom loop build? Such a thing has never existed before.

For one thing, a backplate heatsink causes instant problems with certain motherboards and case layouts.
It also makes the x1 PCIe slot completely unusable.

Second, how many video cards have had VRAM on the back side of the card? Almost all cards in the past had VRAM on the GPU core side only, because no card came with more than 11 GB of VRAM. So besides Titans and professional cards, which could have a lot more VRAM, there was no issue anything like this for normal customers or gamers.

The only consumer cards with VRAM on the back side of the PCB were the 3090s. So that was something new.

Boards aren't designed to accommodate backplate heatsinks, so that's not a standard configuration you design for. The most you could do is put on a very thick metal backplate with extremely good thermal pads, but you still run into the basic problem that a backplate is not a heatsink, no matter what card it is. A flat metal backplate is going to get heat saturated. Even if you put a good low-profile heatsink on it, it will still get heat saturated; it will just take longer to do so (which is why heatsinks have to be actively cooled). So then you think all 3090s should have actual fan-cooled heatsinks on their backplates, and then you have a compatibility nightmare on your hands.


----------



## Toopy

This is what I did to control the temps on my PNY Gaming; it works well with my case airflow and fits under my Noctua NH-D14.









Can mine away at +1700 no problem


----------



## J7SC

...some Pro cards like the Quadro RTX 8000 have double-sided VRAM (_twice_ that of a 3090), but they also run at lower power overall, and not GDDR6_X_, which is known to run hotter... as posted before, cards like the Strix (and I assume other 3090s) have VRAM that is actually rated at 1313MHz but default-set to 1219 by the vendors in the vBIOS. I presume the heat without extra cooling on the back of the PCB has something to do with that...

Adding a nice 120mm fan really helps... I've seen a 10-12°C improvement... currently looking at a space-saving heatsink mod, but not certain yet whether it will work (water cooling might be another option for the back).


----------



## OrionBG

Hey guys, regarding the more "normal" 3090 models (by normal I mean not water-cooled or KingPin-like models), which have given you the best overclocking results?
I'll most certainly put the card under water. (Oh, and I'm talking strictly about gaming! No mining stuff.)


----------



## dk10438

des2k... said:


> You don't need more than the default 850mV / 1700MHz on the core. At that low frequency not much power is needed, because mining uses just the memory controller + memory.
> 
> It's 260W on my card; with +1500 mem it's always 128 MH/s.
> 
> That's about 100W for memory and 160W for core/cache.


With the -250 core offset I never see voltages over 750mV or the core over 1300MHz anyway; lower power just results in a lower hashrate for me...


----------



## jlodvo

I have the AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G card. It's power limited right now to 350 or 360 watts. I wanted to try flashing a different BIOS, but I don't know which one to use; can anyone suggest one?


----------



## Spiriva

jlodvo said:


> I have the AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G card. It's power limited right now to 350 or 360 watts. I wanted to try flashing a different BIOS, but I don't know which one to use; can anyone suggest one?











Gigabyte RTX 3090 VBIOS (www.techpowerup.com): 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory





This one works well; it maxes out at ~393W (the highest I've seen using it). It also has ReBAR.


----------



## WillP

Falkentyne said:


> They didn't miss the issue.
> Tell me directly--in what universe have you EVER seen a video card with a backplate heatsink that was not a custom loop build? Such a thing has never existed before.
> 
> For one thing a backplate heatsink causes instant problems with certain motherboards and case layouts.
> It also makes the 1x PCIE slot completely unusable.
> 
> Second, how many video cards have had VRAM on the back side of the card? Almost all cards in the past have had VRAM on the GPU core side only, because no card came with more than 11 GB of VRAM. So besides Titans and professional cards which could have alot more VRAM, there was no issue anything similar to this for normal customers or gamers.
> 
> The only consumer card with VRAM on the back side of the PCB were the 3090 cards. So that was something new.
> 
> Boards aren't designed to accommodate backplate heatsinks, so that's not a standard configuration you design for. So the most you could do is put on some very thick metal backplate with extremely good thermal pads, but you still run into the basic problem that a backplate is not a heatsink. And it doesn't matter what card it is. A flat metal backplate is going to get heat saturated. Even if you put a good low profile heatsink on it, it will still get heat saturated. It will just take longer to do so (which is why heatsinks have to be actively cooled). So then you think that all 3090's should have actual fan cooled heatsinks on the backplates and then you have a compatibility nightmare on your hands.


I suppose I was wondering why they didn't go for a backplate designed with increased surface area; simply putting grooves into it looks like it would have helped, and decent case airflow might have been enough to cool it.
If the problem is purely caused by putting VRAM on the back of the card, and the compatibility issue can't be solved any other way, well, maybe putting the VRAM on the back wasn't the best idea?
My KFA2 card actually did come with an additional clip-on fan for the backplate, though to help the flow-through part of the cooler rather than the VRAM itself. It was stupid, and did cause interference issues with my DDR4, but it wouldn't have been beyond them to design something lower profile if that was the only way.
And lastly, case/mobo compatibility certainly doesn't seem like a huge issue for them; the boards have been getting steadily bigger anyway, causing compatibility issues elsewhere.
Hey, I don't care, the problem is more or less solved for me, but it is a problem for some users in some use cases, and it seems strange to me that Nvidia didn't see the issue coming.


----------



## WillP

Actually, I guess the FE cards do have a grooved backplate. Does it help at all FE owners?


----------



## yzonker

WillP said:


> Actually, I guess the FE cards do have a grooved backplate. Does it help at all FE owners?


They have some of the highest vram temps from what I've seen. That won't help much.

Although how many cards have true vram temp issues while gaming? And not just mining? I suspect gaming was the primary usage considered when they designed the cards.

Edit: and to add to that, how many have vram temp issues while gaming and running at stock clocks and power levels?


----------



## WillP

yzonker said:


> They have some of the highest vram temps from what I've seen. That won't help much.
> 
> Although how many cards have true vram temp issues while gaming? And not just mining? I suspect gaming was the primary usage considered when they designed the cards.
> 
> Edit: and to add to that, how many have vram temp issues while gaming and running at stock clocks and power levels?


Hmm, I remember the controversy about what the "Titan class" GPU without professional features was actually for (8K gaming, wasn't it?), but yes, VRAM does run cooler in gaming. @Toopy does seem to have come up with a passive combination that works. Does anyone actually use the 24GB for a professional application (not mining...) and suffer issues as a result?


----------



## yzonker

Toopy said:


> This is what I did to control the temps on my PNY Gaming, works well with my case airflow and fits under my noctua NHD14
> View attachment 2511782
> 
> 
> Can mine away at +1700 no problem


At what mem temp though?


----------



## Toopy

yzonker said:


> At what mem temp though?


94 @ +1700; I posted the results here:








3080/3090 Junction temp cooling thread (www.overclock.net)





The pads have been replaced with TGPP10. I believe this reading is from the front-side mem though, as ramping the fans to 100% lowers it to 91. Wish there was a program to read the temp from each chip.


----------



## motivman

What would you guys say is the maximum safe junction temp for 24/7 use? I would hate to degrade my 3090s. I replaced the thermal pads both front and back and got both cards down to 90C. Waiting on backplate heatsinks to come in, to see if I can reduce temps a further 5-10C.









Awxlumv Aluminum Large Heatsink 150 x 93 x 15 mm, 2 Pcs (www.amazon.com)


----------



## jura11

In gaming or rendering, VRAM temperatures are in the 60s on the top card and mid 70s on the bottom one, with a 1495MHz memory OC on top and 1250MHz on the bottom. In quick 2-hour mining tests with the same VRAM OC, temperatures won't break the 70s on the top card and the 80s on the bottom. That's with stock Bykski thermal pads; with Gelid thermal pads I suspect temperatures will drop by maybe 5-8°C, less or more, I have no idea what to expect. Core temperatures won't break 36-38°C in gaming or rendering, and in mining they're 30°C on the top card and 32°C on the bottom.

On both GPUs I'm running the same BIOS (KPE XOC 1000W BIOS) with a 45% power limit and -200MHz on the core.

Hope this helps 

Thanks, Jura


----------



## yzonker

motivman said:


> What would you guys say is the maximum safe junction temp for 24/7 use? I would hate to degrade my 3090s. I replaced the thermal pads both front and back and got both cards down to 90C. Waiting on backplate heatsinks to come in, to see if I can reduce temps a further 5-10C.
> 
> Awxlumv Aluminum Large Heatsink 150 x 93 x 15 mm, 2 Pcs (www.amazon.com)
> 
> View attachment 2511834


Nobody really knows. Lower is better.


----------



## Hulk1988

Hello guys,

I have my Kingpin 3090 Copper Edition now. It's running really well, stable at 2175/2190MHz core with +1650 memory, without any voltage adjustments, on the normal LN2 BIOS with 520W.

I've seen this video now and wonder how I can get the 3090 Kingpin Classified tool and the special BIOS. I found some Classified BIOSes through Google but always get this:

View attachment 2511878

Do I need the special BIOS to activate the RTX Classified Controller?

Thank you!


----------



## PLATOON TEKK

Hulk1988 said:


> Hello guys,
> 
> I have my Kingpin 3090 Copper Edition now. It's running really well, stable at 2175/2190MHz core with +1650 memory, without any voltage adjustments, on the normal LN2 BIOS with 520W.
> 
> I've seen this video now and wonder how I can get the 3090 Kingpin Classified tool and the special BIOS. I found some Classified BIOSes through Google but always get this:
> 
> View attachment 2511878
> 
> Do I need the special BIOS to activate the RTX Classified Controller?
> 
> Thank you!


The BIOS is separate from the Classified software. Your issue is most likely that you're using the old Classified tool; search the EVGA forums for the newer version and it should run. In theory, as long as you have a KP and are running a KP BIOS, the tool should work.


----------



## inedenimadam

Man, the VRAM on my KPE is garbage! +850 is all I get. RIP


----------



## Beagle Box

inedenimadam said:


> Man, the VRAM on my KPE is garbage! +850 is all I get. RIP


That's not great.
Maybe check the pads and reinstall the cooler?
Has anyone written up a safe way to improve memory clocks?


----------



## Toopy

WillP said:


> Hmm, I remember the controversy about what the "Titan class" GPU without professional features was actually for, but yes (8k gaming wasn't it?), VRAM does run cooler in gaming. @Toopy does seem to have come up with a passive combination that works. Does anyone actually use the 24gb for a professional application (not mining...) and suffer issues as a result?


A lot of the MSI variants have a flat heatpipe on the rear VRAM to distribute the heat evenly across the backplate. I think this needs to be taken a step further, with the heatpipe fed back to a fin section in the flow-through area at the front of the card; that would allow the heat to actually be removed from the memory.
GDDR6X has caught a lot of AIB partners, and Nvidia themselves, off guard. Sure, the problem mostly arises from the constant memory usage during mining, but the cooling should be adequate to use the memory constantly regardless.


----------



## inedenimadam

Beagle Box said:


> That's not great.
> Maybe check the pads and reinstall the cooler?
> Has anyone written up a safe way to improve memory clocks?
> .


I installed the Hydro Copper block and replaced the rear thermal pads; no noticeable improvement for gaming/benching. My Zotac Trinity has a better core and better VRAM, but much worse power delivery and locked voltage. Because of the KPE's better power delivery and virtually limitless power limit, it still performs better overall. I got the card for MSRP, so it's kind of hard to complain. Maybe I'll try shoving a universal block on the backside or something to cool the mem down.

Also: has anybody else noticed that the Classified tool profile.ini reads [RTX3080TiKPE] 👀


----------



## gfunkernaught

del


----------



## WillP

Toopy said:


> A lot of the MSI variants have a flat heatpipe on the rear vram to distribute the heat evenly across the backplate, I think that this needs to be taken a step further and the heatpipe fed back to a fin section in the flow through area at the front of the card. This would allow the heat to be removed from the memory.
> GDDR6X has caught a lot of AIB partners and nvidia themselves off guard, sure mostly the problem arises from the constant memory usage during mining, but the cooling should be adequate to utilize the memory constantly regardless.


When I first got the card and realised how hot the backplate was getting, I was terrified I had somehow broken something or it was defective. Your solution sounds sensible, especially given the sudden surge in flow-through designs, and I agree totally that the card should be "fit for purpose", whatever purpose the customer has for it. Whatever "gamers" may think, there is no reason someone shouldn't buy a card to mine with; not that I did. I know there have been some warranty exclusions suggested for cards used for mining; it would be interesting to see if they are legally enforceable, though my warranty sailed away some time ago anyway...


----------



## jura11

Guys, if you are overclocking, of course temperatures will rise on the VRAM or core; that's the downside of OC.

If I don't OC my GPUs they run cool as cucumbers hahaha; they won't break 33°C on the core and the VRAM stays under the 60s.

The same applies to CPUs and many other things; nobody will ever warrant that your GPU will be great under OC, will OC beyond a certain figure, or will stay at certain temperatures.

They will get away with it because every PC or case is different etc.

That's why I always run GPUs under water.

Hope this helps 

Thanks, Jura


----------



## KedarWolf

jura11 said:


> Guys, if you are overclocking, of course temperatures will rise on the VRAM or core; that's the downside of OC.
> 
> If I don't OC my GPUs they run cool as cucumbers hahaha; they won't break 33°C on the core and the VRAM stays under the 60s.
> 
> The same applies to CPUs and many other things; nobody will ever warrant that your GPU will be great under OC, will OC beyond a certain figure, or will stay at certain temperatures.
> 
> They will get away with it because every PC or case is different etc.
> 
> That's why I always run GPUs under water.
> 
> Hope this helps
> 
> Thanks, Jura


If I have my TV on, run my A/C and play a game on my PC, I trip a breaker.

The fix is to undervolt my 3090, then everything runs just fine. Unless I'm benchmarking, really don't need super high clocks. 🐺


----------



## jura11

KedarWolf said:


> If I have my TV on, run my A/C and play a game on my PC, I blow a power breaker.
> 
> The fix is to undervolt my 3090, then everything runs just fine. Unless I'm benchmarking, really don't need super high clocks. 🐺


Not sure what breakers you're using in the USA or Canada.

For day-to-day running I'm using the XOC 1000W BIOS capped at 65% on both RTX 3090 GamingPros, and for benchmarks I usually run them at 100%. As for the CPU, I'm currently running a 5950X with PBO2 and Curve Optimizer etc.

I never have issues here with tripping the breaker or overloading the circuit.

Hope this helps 

Thanks, Jura


----------



## Beagle Box

KedarWolf said:


> If I have my TV on, run my A/C and play a game on my PC, I blow a power breaker.
> 
> The fix is to undervolt my 3090, then everything runs just fine. Unless I'm benchmarking, really don't need super high clocks. 🐺


The fix is to not plug everything into the same circuit. Dining room circuits are best. 20 Amps, no GFCI and nothing else running on it.


----------



## KedarWolf

Beagle Box said:


> The fix is to not plug everything into the same circuit. Dining room circuits are best. 20 Amps, no GFCI and nothing else running on it.


I'm in a small apartment and even the plugs across the room the A/C is in and the plug on the right side of the room are on the same circuit. I have no other plugs I can use at all.


----------



## KedarWolf

KedarWolf said:


> I'm in a small apartment and even the plugs across the room the A/C is in and the plug on the right side of the room are on the same circuit. I have no other plugs I can use at all.


According to Corsair Link software my max wattage is about 400W for my entire PC gaming with my 3090 undervolted and underclocked. The circuit can handle that. The A/C is a 1350 BTU portable, a 40 inch LCD TV and Fibre box not included in that 400W. Modem, home phone and stuff not included too.


----------



## Beagle Box

KedarWolf said:


> I'm in a small apartment and even the plugs across the room the A/C is in and the plug on the right side of the room are on the same circuit. I have no other plugs I can use at all.


If you have a kitchen counter top, those are 20 amp circuits, so they should be separate from the rest of the apartment. By code, anyway.


----------



## KedarWolf

Beagle Box said:


> If you have a kitchen counter top, those are 20 amp circuits, so they should be separate from the rest of the apartment. By code, anyway.


Yeah, but don't want to have an extension cord running from my A/C to my kitchen, to be honest. I just tested Cyberpunk, undervolted, graphics maxed out, DLSS on quality, 3840x1080, 80 FPS, so I'm good.


----------



## jomama22

KedarWolf said:


> Yeah, but don't want to have an extension cord running from my A/C to my kitchen, to be honest. I just tested Cyberpunk, undervolted, graphics maxed out, DLSS on quality, 3840x1080, 80 FPS, so I'm good.


I'm guessing the breaker is a 15A? Are you sure that AC unit is only 1350 BTUs? I can't find any that go that low. If that breaker has tripped more than a handful of times, it is probably shot by now and will keep tripping at lower and lower amperage; it likely needs to be replaced at this point.

If you were going to run an extension cord for anything, I would do it for the AC unit. The peak draw will always be at startup and can be quite massive.


----------



## J7SC

...if you live in a condo / apartment with in-suite laundry, the dryer usually has its own breaker with 220-240V and an additional 110V outlet in N.America...I've used those a lot


----------



## PLATOON TEKK

Agree 100%


----------



## KedarWolf

jomama22 said:


> I'm guessing the breaker is a 15A? Are you sure that ac unit is only 1350 BTUs? I can't find any that go that low. If that breaker has tripped more than a hand full of times, at this point it is probably shot and will keep tripping at lower and lower amperage, probably needs to be replaced at this point.
> 
> If you were going to run an extension cord for anything I would do I for the ac unit. The initial peak draw will always be at startup and can be quite massive.


I meant 13500 BTUs.


----------



## GRABibus

Beagle Box said:


> That's not great.
> Maybe check the pads and reinstall the cooler?
> Has anyone written up a safe way to improve memory clocks?
> .


I replaced the memory pads with Thermalright Odyssey on my Strix with the stock air cooler, and it reduced the junction temps by 5 to 10 degrees.

Before replacing the pads, +1000MHz was the max on memory before artefacts or crashes.
Now, with the new pads, the maximum allowable is the same => +1000MHz.

If the bottleneck is the silicon lottery, reducing temps, even by 10 degrees, can be useless, except for memory lifetime of course.


----------



## jomama22

KedarWolf said:


> I meant 13500 BTUs.


I figured as much lol. Those things will pull 12 amps on their own. You really gotta hook that up to a different circuit. 15A breakers will usually handle 17-18 amps before tripping, which is why it hasn't happened too much, but it will definitely start happening more.
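For anyone curious, the numbers above work out roughly like this. A back-of-the-envelope sketch in Python; the 120 V mains, the EER of ~10 for the AC, and the 60 W TV figure are assumptions, not measured values:

```python
# Back-of-the-envelope circuit-load check. Assumed values: 120 V mains,
# EER ~10 for the portable AC, ~60 W for a 40" LCD TV. The 400 W PC
# figure is the Corsair Link reading mentioned above.
def amps(watts, volts=120):
    return watts / volts

ac_watts = 13500 / 10          # 13500 BTU/h at an assumed EER of 10 -> 1350 W
pc_watts = 400                 # whole PC, undervolted 3090
tv_watts = 60                  # assumption

total_amps = amps(ac_watts + pc_watts + tv_watts)
print(f"~{total_amps:.1f} A on a 15 A breaker")  # ~15.1 A: right at the limit
```

Right at (or just over) the nominal rating, which a breaker only tolerates briefly; that fits the intermittent trips described above.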


----------



## Beagle Box

GRABibus said:


> I have replaced memory pads with thermalright Odyssey on my strix with stock air Cooler and it reduced the junction temps by 5 degrees to 10 degrees.
> 
> Before replacing the pads, +1000MHz was the max on memory before artefacts or crash.
> Now…..With new pads, same maximum allowable => +1000MHz.
> 
> If bottleneck is silicon lottery, reducing temps , even by 10 degrees , can be useless, except for memory lifetime of course.


Same here.
Lower temps have given me no real increase in GPU memory speeds.
I wonder if the guys with the KP or Elmor tools have been able to raise memory OC speeds significantly.


----------



## ttnuagmada

Woohoo! EKWB active backplate for the Strix is on pre-order for anyone that didn't know


----------



## Zogge

I can't really tell from the pic, but do you need the Strix EK front block as well, or can it be used standalone with a Bykski front, for instance?


----------



## ViRuS2k

Wish I could get hold of an active backplate + cooler for the Trio X 3090.

Is it just me, or does the Trio X line-up of 3080s and 3090s not get quite the same love from GPU waterblock manufacturers? It's starting to annoy me, lol.
I might sell my 3090 Trio X even though it's an awesome clocking card and the memory clocks like a dream. I want a better waterblock with an active backplate so I don't have to use this MP5 Works memory cooler; I think it looks too bulky, and I could do without a few more hose lines in my case.


----------



## PLATOON TEKK

Zogge said:


> I do not really see on the pic but do you need a strix ek front as well or can it be used standalone with a byski front for instance ?


I'm not 100% sure, but the article mentions a new terminal. This implies you'd need the front plate. You could consider the new Bitspower generic GPU VRAM cooler or the MP5, which work for any GPU, though.


----------



## Zogge

I have an unused EK block, and a Bykski front + backplate with MP5 Works mounted currently.
Hence I am wondering if I should go back to EK and add the active backplate, skipping the Bykski/MP5, or if I should just keep things as they are (Bykski/MP5, that is).


----------



## ttnuagmada

Zogge said:


> I do not really see on the pic but do you need a strix ek front as well or can it be used standalone with a byski front for instance ?


Almost positive you would need the vector block to go with it. The terminal that comes with the backplate replaces the block terminal.


----------



## PLATOON TEKK

Zogge said:


> I have an ek block not used and a bykski front +back mounted with mp5 works on currently.
> Hence I am wondering if I should go back to ek and add the active back plate and skip byski/mp5 or if I should just skip it and keep it as is...bykski/mp5 that is.


Got you. Yeah I’d def just stick to mp5. Doubt the difference will be that big. Also, imo EKWB have been amongst the worst in terms of performance and quality recently.


----------



## jura11

Zogge said:


> I do not really see on the pic but do you need a strix ek front as well or can it be used standalone with a byski front for instance ?


Hi there 

You will definitely need to get the EK Vector for your RTX 3090, because EKWB uses a terminal connection to join the backplate and waterblock.

If you are happy with the current performance of your block, then it makes no sense to me to switch to a different waterblock.

Hope this helps 

Thanks, Jura


----------



## KedarWolf

ROG Strix RTX 3080/3090 Get Active Backplates by EK - ekwb.com

EK, the leading computer cooling solutions provider, is introducing another addition to the EK-Quantum Vector series of active backplates. EK-Quantum Vector Strix RTX 3080/3090 active backplate is made to complement the existing EK-Quantum Vector Strix RTX 3080/3090 water blocks and actively...

www.ekwb.com


----------



## cletus-cassidy

PLATOON TEKK said:


> Got you. Yeah I’d def just stick to mp5. Doubt the difference will be that big. Also, imo EKWB have been amongst the worst in terms of performance and quality recently.


I believe Bykski has an active backplate for the Strix now too.


----------



## PLATOON TEKK

cletus-cassidy said:


> I believe Bykski has an active backplate for the Strix now too.


thanks for heads up. Had no idea, have Bykski on 2x Strix. Will def get that. Appreciated.


----------



## cletus-cassidy

PLATOON TEKK said:


> thanks for heads up. Had no idea, have Bykski on 2x Strix. Will def get that. Appreciated.


NP - here is the link I learned about it from:

https://www.reddit.com/r/watercooling/comments/mugcl3

Apparently it's really important to get v2, which I believe is now live.


----------



## gfunkernaught

ViRuS2k said:


> wish i could get a hold of a active back plate + cooler for the Trio X 3090
> 
> is it me or does the Trio X line up of 3080`s and 3090`s not get quite the same love from from GPU water block manufacturers. ???? like its starting to annoy me lol
> might sell my 3090 trio X even though its a awesome clocking card and the memory clocks like a dream  i want a better waterblock  one with active back plate so i dont have to use this MP5 works memory cooler as i think it looks to bulky and i could do without a few more hose lines in my case


The Trio's a mid-range card, and probably not as popular as the FTW or Strix. Then you have the HOF/Kingpin cards that are in their own league. I was debating Bykski or EK and went with EK. It's great until you hit 500W; beyond that the delta just keeps growing. Last time I checked, at 600W my Trio + EK block delta was around 14C.


----------



## dr/owned

Just popping in to ask a question: I'm going to attempt to get hold of an MSRP 3090 Strix as an upgrade to my TUF (and shuffle the TUF into my guest gaming rig). Does there exist a 1000W BIOS with Resizable BAR yet?

(Upgrading because I wanna see what a 3-pin 3090 can do... 700W is cool on my TUF but I wanna hit 1000W. Plus Bykski now makes an active backplate for the Strix, so that'll be fun to try out.)


----------



## gfunkernaught

@dr/owned I was just gonna ask the same thing. I went back to the 1kw bios from the 520w rebar bios, since the rebar bios gave me issues with some games. I got spoiled with rebar and cyberpunk though. The 1kw bios just doesn't perform as well as the rebar bios in cyberpunk, even at higher clocks. The 520w rebar was extra sensitive to clock/vid fluctuations in games like titanfall 2 and cold war. Maybe I'll give them another go and just accept lower OC.


----------



## ArcticZero

Heads up, the new Argus Monitor update now supports hotspot and memory junction temps.



> Support for extra GPU temperatures (Hot Spot Temperature and Memory Junction Temperature) on selected Nvidia / AMD GPUs.


I for one have been waiting for this for quite a while now. Very handy if you want to adjust fan speeds based on memory temps.
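The kind of memory-temp-driven fan control this enables can be sketched as a simple interpolation table. The breakpoints below are illustrative, not Argus Monitor defaults; GDDR6X is commonly reported to throttle around a ~110 C junction temperature:

```python
# Minimal sketch of a fan curve keyed off memory junction temperature,
# the sort of mapping a tool like Argus Monitor lets you define.
# Breakpoints are illustrative only.
CURVE = [(60, 30), (80, 50), (94, 80), (104, 100)]  # (temp C, fan %)

def fan_percent(temp_c):
    """Linearly interpolate between breakpoints, clamping at both ends."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # at or beyond the last breakpoint: full speed
```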


----------



## motivman

dr/owned said:


> Just popping in to ask a question: I'm going to attempt to get ahold of an MSRP 3090 Strix as an upgrade to my TUF (and shuffle the TUF into my guest gaming rig). Does there exist a 1000W BIOS with resizable BAR yet?
> 
> (Upgrading because I wanna see what a 3 pin 3090 can do...700W is cool on my TUF but I wanna hit 1000W  Plus Bykski now makes a active backplate for the Strix so that'll be fun to try out)


What you should do:

1. Use the ASUS tool to update your BIOS to the Resizable BAR BIOS.
2. Shunt mod with 5 mOhm resistors.
3. Enjoy unlimited power on the Strix.

And yeah, you will NOT hit 1000W... not happening. The max will be around 750W, even in Time Spy Extreme...
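For anyone wondering why step 2 works: the board's controller measures current as a voltage drop across tiny shunt resistors, so soldering an equal-value resistor on top of each puts the two in parallel and halves the drop the controller sees. A quick sketch; the ~5 mOhm stock value matches the resistors mentioned above, but treat the exact figures as illustrative:

```python
# Sketch of the shunt-mod arithmetic. The card's controller infers current
# from the voltage drop across ~5 mOhm shunt resistors; soldering an equal
# resistor on top of each puts the two in parallel and halves the drop,
# so the controller under-reads power by 2x. The stock value is an
# assumption; measure your own board.
def parallel(r1, r2):
    return (r1 * r2) / (r1 + r2)

stock, added = 0.005, 0.005            # ohms
effective = parallel(stock, added)     # 0.0025 ohm
scale = stock / effective              # factor by which power is under-read
print(round(scale, 3))                 # ~2.0: a 500 W cap now allows ~1000 W actual
```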


----------



## yzonker

gfunkernaught said:


> @dr/owned I was just gonna ask the same thing. I went back to the 1kw bios from the 520w rebar bios, since the rebar bios gave me issues with some games. I got spoiled with rebar and cyberpunk though. The 1kw bios just doesn't perform as well as the rebar bios in cyberpunk, even at higher clocks. The 520w rebar was extra sensitive to clock/vid fluctuations in games like titanfall 2 and cold war. Maybe I'll give them another go and just accept lower OC.


Like I showed back here:

[Official] NVIDIA RTX 3090 Owner's Club

what is the general consensus of the MSI gaming X trio running 1000W bios? Anyone killed their gaming x trio yet due to too much power draw?

www.overclock.net

You have to add quite a bit of power and clock to exceed a lower power bios with reBar.


----------



## gfunkernaught

yzonker said:


> Like I showed back here:
> 
> [Official] NVIDIA RTX 3090 Owner's Club
> 
> what is the general consensus of the MSI gaming X trio running 1000W bios? Anyone killed their gaming x trio yet due to too much power draw?
> 
> www.overclock.net
> 
> You have to add quite a bit of power and clock to exceed a lower power bios with reBar.


That was back when I wasn't aware I could use rebar, so I wasn't around the forums. I was using +520W and higher clocks and was getting less performance than 480W rebar in Cyberpunk. You at least got higher fps at 490W than with rebar.


----------



## yzonker

gfunkernaught said:


> that was back when I wasn't aware I could use rebar so I wasn't around the forums. I was using +520w and higher clocks and was getting less performance than 480w rebar in cyberpunk. You at least at 490w got higher fps than rebar.


Yea about 1 fps average in that test. Not really worth it. My room even starts getting hot if I play at that power level for an hour or more.

I need to test again with the newer version of CP and drivers. It's possible the gap has narrowed even more.

Given that, I've mostly just been running the Galax 390w bios. 

Never have gotten RDR2 to not crash at power levels much above 390w with the KP XOC bios either. Might be heat related in that case. Usually it will run for 30 minutes or so before crashing. I'm considering adding a large external rad.

On that topic, do you or anyone else know how well a single D5 (EK) pump will work with 2 thin rads (360 EK and 280 Corsair) and a larger external (like a 480 mm rad that's 60mm thick)?

I don't want to lose so much flow that it hurts my block delta.


----------



## ViRuS2k

yzonker said:


> Yea about 1 fps average in that test. Not really worth it. My room even starts getting hot if I play at that power level for an hour or more.
> 
> Need to test again with the newer version of CP and drivers. Possible the gap has narrowed even more.
> 
> Given that, I've mostly just been running the Galax 390w bios.
> 
> Never have gotten RDR2 to not crash at power levels much above 390w with the KP XOC bios either. Might be heat related in that case. Usually it will run for 30 minutes or so before crashing. I'm considering adding a large external rad.
> 
> On that topic, do you or anyone else know how well a single D5 (EK) pump will work with 2 thin rads (360 EK and 280 Corsair) and a larger external (like a 480 mm rad that's 60mm thick)?
> 
> I don't want to lose so much flow that it hurts my block delta.


I already added an external 420mm radiator (a thick one, in push/pull) on top of my O11 Dynamic XL case, and it has improved temps for me...

I even added a mounting system behind the front of the external radiator so that it does not come loose or fall off.
But I really need a better GPU waterblock for my 3090 Trio X with a smaller footprint; the MP5 Works adds too many hoses to the watercooling loop, and it's starting to look messy.


----------



## gfunkernaught

yzonker said:


> Yea about 1 fps average in that test. Not really worth it. My room even starts getting hot if I play at that power level for an hour or more.
> 
> Need to test again with the newer version of CP and drivers. Possible the gap has narrowed even more.
> 
> Given that, I've mostly just been running the Galax 390w bios.
> 
> Never have gotten RDR2 to not crash at power levels much above 390w with the KP XOC bios either. Might be heat related in that case. Usually it will run for 30 minutes or so before crashing. I'm considering adding a large external rad.
> 
> On that topic, do you or anyone else know how well a single D5 (EK) pump will work with 2 thin rads (360 EK and 280 Corsair) and a larger external (like a 480 mm rad that's 60mm thick)?
> 
> I don't want to lose so much flow that it hurts my block delta.


It's almost like the benefits from rebar cancel out the gains from an extreme overclock, which is great. I kind of saw that coming from the reviews I read about rebar. I'm gonna do some more testing with the 520W rebar and find out what is making it crash in Titanfall and Cold War, other than VID bounce. I noticed with these BIOSes on my card, the difference between requested and effective clock is about 35MHz. When I play Cold War with DLSS, power usage is low and my clock goes up to 2130MHz with a +75 core offset. In Cyberpunk, the PL is pinned at 480W and clocks average 2055MHz. So maybe, while power usage was briefly low, the card tried to jump the clock before raising the VID as well, causing a crash.

The D5 pump is excellent. It pushes through two 360s and a 240.


----------



## ViRuS2k

Guys, what's the highest-wattage rebar BIOS? Is it the 520W? Has anyone used that BIOS on their 3090 Trio X? Any issues? Is it better than the SuprimX 450W rebar BIOS I'm currently using?


----------



## gfunkernaught

ViRuS2k said:


> Guys whats the highest rebar bios ? is it the 520w has anyone used this bios on there 3090 trio x ??? any issues ... is it better than the current bios im using SuprimX 450w rbar.


I use the 520w rebar on my trio and the power limit seems like it is 480-490w. This is the case for the non rebar evga bios as well, with the exception of KP 1kw bios. Only issues I have are with Titanfall 2 and Cold War so far. Seems like unstable overclock. Cyberpunk runs fine. This is all with +75 core offset and +1200 vram.


----------



## J7SC

gfunkernaught said:


> Almost like the benefits from rebar cancel out the gains from an extreme overclock, which is great. I kind of saw that coming from the reviews I saw about rebar. I'm gonna do some more testing with the 520w rebar and find out what is making it crash with Titanfall and cold war, other than vid bounce. I noticed with th see bios on my card, the difference between requested and effective clock is about 35mhz. When I play cold war with DLSS, power usage is low, and my clock goes up to 2130mhz with a +75 core offset. Cyberpunk, PL is pinned at 480w, clocks avg 2055mhz. So maybe while for a brief moment the power usage is low I tried to jump the clock before raising the vid as well, causing a crash.
> 
> *The D5 pump is excellent. It pushes through two 360s and a 240.*


...yeah, already using 2xD5s for 1280x64 rads, but working on an update build w/ dual mobos; so this arrived yesterday ...


----------



## gfunkernaught

J7SC said:


> ...yeah, already using 2xD5s for 1280x64 rads, but working on an update build w/ dual mobos; so this arrived yesterday ...
> 
> View attachment 2512145


Exsqueeze me? That exists?! Would that increase flow/pressure inside the gpu block the same way putting the 2nd pump before the gpu block would?


----------



## ViRuS2k

J7SC said:


> ...yeah, already using 2xD5s for 1280x64 rads, but working on an update build w/ dual mobos; so this arrived yesterday ...
> 
> View attachment 2512145


God darn, you bought that? I have that very pump in a box upstairs that I haven't used or needed at present. Curious how much you paid for that serial pump.
Bet ya I could have sold you it cheaper.


----------



## J7SC

gfunkernaught said:


> Exsqueeze me? That exists?! Would that increase flow/pressure inside the gpu block the same way putting the 2nd pump before the gpu block would?


I always use two D5s for fail-over (though they rarely if ever fail), but using two D5s also helps with much better _flow/pressure consistency_. der8auer, among others, did a review on that.

...apart from EK, Aqua Computer has a similar dual-D5 unit... but I got the EK dual from the US; shipping from Aqua Computer to Canada was just too high, apart from actual inventory...



ViRuS2k said:


> god darn, you bought that  i have that very pump in a box up stairs that i havent used or need at present, curious how much you pay for that serial pump
> bet ya i could have sold you it cheaper


...shipping from UK to W.Canada...ouch, also: warranty. In any case, I now have more than ten D5s (across various systems) and love them all


----------



## ViRuS2k

gfunkernaught said:


> Exsqueeze me? That exists?! Would that increase flow/pressure inside the gpu block the same way putting the 2nd pump before the gpu block would?


Yes, this serial dual pump from EK will deliver higher head pressure and also faster flow rates.
It's why I got it, but I never got around to using it, as I use an O11 Dynamic XL case with a distro plate that has an integrated pump (though I changed that pump to a higher model).


----------



## ViRuS2k

gfunkernaught said:


> I use the 520w rebar on my trio and the power limit seems like it is 480-490w. This is the case for the non rebar evga bios as well, with the exception of KP 1kw bios. Only issues I have are with Titanfall 2 and Cold War so far. Seems like unstable overclock. Cyberpunk runs fine. This is all with +75 core offset and +1200 vram.


Might give that BIOS a try; got a link to it? Currently I max out at 450W with the SuprimX rebar 450W BIOS, and it actually does max out at its rated 450W on the card, sometimes 453W lol.


----------



## GRABibus

Guys, test this one, the best for gaming for me:

EVGA RTX 3090 VBIOS

24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory

www.techpowerup.com


----------



## gfunkernaught

@J7SC and @ViRuS2k 
I understand that two pumps will increase flow and consistency, that totally makes sense. Does order/placement of the pump(s) matter? I'm looking to increase flow into the gpu block and also out because the water has to flow out of the gpu block, then up to the top rad.


----------



## PLATOON TEKK

gfunkernaught said:


> @J7SC and @ViRuS2k
> I understand that two pumps will increase flow and consistency, that totally makes sense. Does order/placement of the pump(s) matter? I'm looking to increase flow into the gpu block and also out because the water has to flow out of the gpu block, then up to the top rad.


I support pumps! Have 10x PMP-500s in my loop now. The large number in my case is mostly due to tube length, lol.

edit: from my experience, flow rate can make an absolute difference. I've come to learn that each loop requires a somewhat different flow rate to hit around "max" performance. Obviously there is a threshold, but saying a flow rate of "x" is enough for any loop is, in my experience, way too general.


----------



## gfunkernaught

Not sure what happened here, but the 520W rebar BIOS is behaving better now. All I changed was this setting (see attachment) from "third party" in Afterburner, and I changed the voltage % to 50. Now the core offset of +120 actually gives me 2115/2100 at 1070mV, which is what I wanted. I know heavier games will push that down, so I will see what happens. But it's nice to see the req/eff clocks only 1 bin apart as opposed to 2 bins.


----------



## yzonker

gfunkernaught said:


> Not sure what happened here, but the 520w rebar bios are behaving better now. All I changed was this
> View attachment 2512199
> from "third party" in afterburner and changed the voltage % to 50. Now the core offset of +120 actually gives me 2115/2100 at 1070mv which is what I wanted. I know that heavier games will push that down so I will see what happens. But its nice to see the req/eff clocks only 1 bin apart as opposed to 2 bins.


Interesting. I googled that setting. "Third party" supposedly looks at a predefined table for non-reference design cards, where "reference design" _should_ be for cards like my Zotac (when it's not running an EVGA bios at least). Mine was set to "MSI Standard" which is wrong too as it's supposed to be for MSI cards, but worked to unlock voltage to the full 1.1v though. Don't know why that worked for you, but don't touch it! lol


----------



## gfunkernaught

yzonker said:


> Interesting. I googled that setting. "Third party" supposedly looks at a predefined table for non-reference design cards, where "reference design" _should_ be for cards like my Zotac (when it's not running an EVGA bios at least). Mine was set to "MSI Standard" which is wrong too as it's supposed to be for MSI cards, but worked to unlock voltage to the full 1.1v though. Don't know why that worked for you, but don't touch it! lol


Yea, so far so good! Knocking on wood. I totally forgot how useful that setting can be. I just finished playing Cyberpunk without crashing for an hour. The average clock was around 2055MHz and temps averaged 41C. Rebar makes such a difference; I can use quality DLSS with the performance of balanced DLSS now. I played Cold War before without an issue, no VID bounce. The VID bounce has been greatly reduced and stabilized since changing that voltage control setting.


----------



## des2k...

gfunkernaught said:


> @J7SC and @ViRuS2k
> I understand that two pumps will increase flow and consistency, that totally makes sense. Does order/placement of the pump(s) matter? I'm looking to increase flow into the gpu block and also out because the water has to flow out of the gpu block, then up to the top rad.


For two pumps, the best flow will be with 1 d5 at the start of the loop and 1 d5 at the end of the loop.

Middle d5 or dual d5 will prob be very close, but prob a bit more noise.

I know a middle install d5 (clone) is complete garbage on my side, just non-stop cavitation at 100% power.

If you want flow/pressure get strong D5 24v + high flow top.

I'm back to 2.1 L/min after installing my MoRa3 420, running 2 EK D5s. I have one high-flow top stuck in shipping and a regular top that I just ordered for my D5 clone, but I'm not hoping for much improvement; the MoRa 420 is simply too much area.

Took like 4 reservoirs' worth just to fill the MoRa, and my res is 270mm🤣


----------



## gfunkernaught

des2k... said:


> For two pumps, the best flow will be with 1 d5 at the start of the loop and 1 d5 at the end of the loop.
> 
> Middle d5 or dual d5 will prob be very close, but prob a bit more noise.
> 
> I know a middle install d5 (clone) is complete garbage on my side, just non-stop cavitation at 100% power.
> 
> If you want flow/pressure get strong D5 24v + high flow top.


A 24v would require some type of 12v to 24v power adapter right? What does high flow top mean? Google wasn't giving much info on that.


----------



## des2k...

gfunkernaught said:


> A 24v would require some type of 12v to 24v power adapter right? What does high flow top mean? Google wasn't giving much info on that.


12V-to-24V adapters are fairly cheap, about $5-10 on AliExpress; you just need a spare Molex and a screwdriver.
Mine is pretty cheap, no display; it needs a multimeter to measure the voltage when you use the dial.

Low-flow tops usually have a smaller, flat chamber. They are not machined close to the o-ring; the center intake is also not machined lower, and there is no spiral machining for the output.

I say usually because some still have decent flow with this design.








Monsoon MMRS Stand-Alone D5 Pump Top - Page 2 of 5 - ExtremeRigs.net

Review of Monsoon's MMRS SAP D5 pump top, performance and installation compared to EK, Bitspower Laing

www.xtremerigs.net





Monsoon MMRS Stand-Alone D5 Pump Top. Getting this one tomorrow from PPCS for my D5 clone.

Probably hard to see, but the Freezemod design is more for high flow. I'm running 2 of these with the EK D5s at 100% speed: low noise, very good flow.


----------



## jura11

gfunkernaught said:


> @J7SC and @ViRuS2k
> I understand that two pumps will increase flow and consistency, that totally makes sense. Does order/placement of the pump(s) matter? I'm looking to increase flow into the gpu block and also out because the water has to flow out of the gpu block, then up to the top rad.


Hi there 

Usually dual pumps will increase the flow rate. I ran 2x 18W DDC pumps with a D5 Vario for a while, and my flow rate with a CPU block, 3x GPUs, 4x 360mm radiators, plus a MO-RA3 360 was in the 135-155 LPH range.

Pump order shouldn't matter much; the pump just needs to be fed directly from the reservoir.

Right now I'm running dual D5 pumps in an EK dual-D5 top with a reservoir, and the flow rate is 223-230 LPH with 2x RTX 3090 GamingPros, a CPU block, 4x 360mm radiators, plus the MO-RA3 360.

For D5 pumps I recommend the Lowara pumps, which are exactly the same as the D5s used by EK etc.; on AliExpress you can find them quite a bit cheaper.

Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> Hi there
> 
> Usually dual pumps will increase flow rate, I have run for while 2*DDC 18W pumps with D5 Vario and my flow rate with CPU block and 3*GPUs setup and 4*360mm radiators plus MO-ra3 360mm been in 135-155LPH
> 
> Order pumps shouldn't matter, just pump must be before the reservoir
> 
> Right now I'm running dual D5 pumps in EK pump top with D5 with reservoir and flow rate is at 223-230LPH with 2*RTX 3090 GamingPro's and CPU block and 4*360mm radiators plus MO-ra3 360mm
> 
> For D5 pumps I recommend get Lowara pumps which are exactly the same as D5 used by EK etc, on Aliexpress yiu can find them quite a bit cheaper
> 
> Hope this helps
> 
> Thanks, Jura


...I've been running serial D5s since early '13, for both productivity and HWBot builds (i.e. quad SLI + dual CPU). Back then, 'brilliant ideas' such as 5+ serial D5s for multi-mobo setups seemed to be the right thing, but no, not really. The sweet spot seems to be 2x D5s per loop, within reason re. the number of CPU and GPU blocks.

...D5s have a larger internal diameter than, for example, DDCs... while on paper that means an advantage in pressure and flow, it also makes them more sensitive to air bubbles and the resulting flow drop-off, and thus heat spikes. Dual D5s in series all but eliminate that specific, if rare, susceptibility, apart from also providing fail-over insurance.

...with fast-flowing D5s, the exact positioning of each D5 matters for no more than 1-2 C according to the tests I have seen. Still, I like to place each D5 after a reservoir and before a CPU or GPU block in a multi-block system...
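The diminishing returns described above fall out of basic pump and loop curves: pumps in series add head at a given flow, while the loop's pressure drop rises roughly with flow squared, so doubling the pumps never doubles the flow. A toy operating-point solver; all curve numbers are made up for illustration, not real D5 specs:

```python
# Toy operating-point solver showing why a second pump helps but doesn't
# double flow: pumps in series add head at a given flow, while the loop's
# pressure drop grows roughly with flow squared. All curve numbers are
# made up for illustration; they are not measured D5 specs.
def pump_head(q, max_head=3.9, max_flow=15.0):
    """Head (m) one pump delivers at flow q (L/min), falling off quadratically."""
    return max_head * (1 - (q / max_flow) ** 2)

def loop_drop(q, k=0.06):
    """Pressure drop (m of head) across a fairly restrictive loop."""
    return k * q * q

def operating_flow(n_pumps, step=0.001):
    """March up the flow axis until loop resistance meets combined pump head."""
    q = 0.0
    while n_pumps * pump_head(q) > loop_drop(q):
        q += step
    return q

print(round(operating_flow(1), 1), round(operating_flow(2), 1))  # roughly 7.1 and 9.1
```

With these made-up curves, doubling the pumps buys about 28% more flow, not 2x, which fits the "two D5s per loop is the sweet spot" observation.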


----------



## Toopy

gfunkernaught said:


> Almost like the benefits from rebar cancel out the gains from an extreme overclock, which is great. I kind of saw that coming from the reviews I saw about rebar. I'm gonna do some more testing with the 520w rebar and find out what is making it crash with Titanfall and cold war, other than vid bounce. I noticed with th see bios on my card, the difference between requested and effective clock is about 35mhz. When I play cold war with DLSS, power usage is low, and my clock goes up to 2130mhz with a +75 core offset. Cyberpunk, PL is pinned at 480w, clocks avg 2055mhz. So maybe while for a brief moment the power usage is low I tried to jump the clock before raising the vid as well, causing a crash.
> 
> The D5 pump is excellent. It pushes through two 360s and a 240.


Both my PNY and Zotac AMP Holo do this; it can be a pain as the GPU boosts itself to a crash. My Zotac was boosting to 2160 MHz and crashing with only a +100 MHz core setting, though upping the core voltage helped keep it stable.
Looking at doing the same thing I did to my PNY ([Official] NVIDIA RTX 3090 Owner's Club) to my Zotac.
I think I will use smaller heatsink blocks, as the backplates aren't perfectly flat. You can see here where some of the heatsinks have better contact than others.


----------



## newls1

Hello all, just a simple question. Looks like this upcoming week I should have my hands on an eVGA FTW3 Ultra RTX 3090, and I was wondering if there are tips/tricks/BIOSes I need to apply to this thing before it gets waterblocked? Been out of the NVIDIA game way too long and pretty stoked about getting my hands on a 2nd-gen RTX card. Is there maybe an updated BIOS for this card that fixes any bugs and will allow me to enable Resizable BAR? Any info would be greatly appreciated, thank you.


----------



## Bobbylee

jura11 said:


> Hi there
> 
> Usually dual pumps will increase flow rate, I have run for while 2*DDC 18W pumps with D5 Vario and my flow rate with CPU block and 3*GPUs setup and 4*360mm radiators plus MO-ra3 360mm been in 135-155LPH
> 
> Order pumps shouldn't matter, just pump must be before the reservoir
> 
> Right now I'm running dual D5 pumps in EK pump top with D5 with reservoir and flow rate is at 223-230LPH with 2*RTX 3090 GamingPro's and CPU block and 4*360mm radiators plus MO-ra3 360mm
> 
> For D5 pumps I recommend get Lowara pumps which are exactly the same as D5 used by EK etc, on Aliexpress yiu can find them quite a bit cheaper
> 
> Hope this helps
> 
> Thanks, Jura


Hi Jura, off the back of what you’ve said here. Do you think this would be a good buy for a second pump in loop? https://m.aliexpress.com/item/33046...1prqbel1D3KVjSZFyq6zuFpXaU.jpg_640x640Q90.jpg


----------



## des2k...

Bobbylee said:


> Hi Jura, off the back of what you’ve said here. Do you think this would be a good buy for a second pump in loop? https://m.aliexpress.com/item/33046...1prqbel1D3KVjSZFyq6zuFpXaU.jpg_640x640Q90.jpg


Aliexpress is good if you go with a D5 clone. Maybe a $40 saving, but quality won't be awesome.

Better option would be this:





Amazon.com (www.amazon.com)

+ good top, 3-in / 1-out and res support

ASHATA Computers D5 Pump Cover, Computer Water Cooling Pump Cover Set with Support (www.amazon.com)


----------



## Bobbylee

des2k... said:


> Aliexpress is good if you go with D5 clone. Maybe 40$ saving but quality won't be awesome.
> 
> Better option would be this:
> 
> 
> 
> 
> 
> Amazon.com (www.amazon.com)
> 
> + good top, 3-in / 1-out and res support
> 
> ASHATA Computers D5 Pump Cover, Computer Water Cooling Pump Cover Set with Support (www.amazon.com)


EK isn't the OEM, are they? Also, this is a second pump, so I don't need a res for this one. Thanks for the links, I'll look into them.


----------



## jura11

Bobbylee said:


> Hi Jura, off the back of what you’ve said here. Do you think this would be a good buy for a second pump in loop? https://m.aliexpress.com/item/33046...1prqbel1D3KVjSZFyq6zuFpXaU.jpg_640x640Q90.jpg


Hi there 

I would get this one Lowara D5 Vario pump

£39.59 (8% off) | Bykski LOWARA D5 Pump, Maximum Flow 1100 L/h, Output Head 3.8 m, Hungary D5 pump, B-PMD5 (a.aliexpress.com)

For pump tops, Bykski's own ones are okay, although I would say they can be a bit restrictive and will not flow as well as EKWB, Aquacomputer or Monsoon ones (that one costs £29.99).

Hope this helps 

Thanks, Jura


----------



## newls1

Looking for y'all's opinion on which FCWB (that will also cool the rear memory) for my eVGA FTW3 Ultra GPU? Looking for it to be in stock so I can get this project going next week, as I'm just super happy about finally being able to land a 3090 GPU! Are there any blocks out there that will cool the rear mem? I keep reading the memory seems to run HOT, so I very much want to choose a very good block setup for this card; money is not an object. I see Optimus Cooling offers a block, but it looks like it takes up to 6 weeks, and I have no patience to wait that long! Please throw any ideas at me if you have any, THANK YOU!


----------



## gfunkernaught

Toopy said:


> Both my PNY and Zotac AMP holo do this, it can be a pain as the gpu boosts itself to a crash. My zotac was boosting to 2160 and crashing with only a +100Mhz core setting, upping the core voltage helped keep it stable though.
> Looking at doing the same thing I did to my PNY ([Official] NVIDIA RTX 3090 Owner's Club) to my zotac.
> I think I will use smaller size heatsink blocks, as the backplates aren't perfectly flat. You can see here where some of the heatsinks have better contact than the others.
> View attachment 2512314


What are the dimensions of those heatsinks?


----------



## KedarWolf

EKWB has an active backplate for the FTW3 as well as the Strix. My apologies if it's already been posted.





EK-Quantum Vector FTW3 RTX 3080/3090 Active Backplates (www.ekwb.com)


----------



## newls1

That's a pre-order with expected release in July...


----------



## jura11

newls1 said:


> Looking for y'alls opinion on which FCWB (that will also cool rear memory) For my eVGA FTW3 Ultra GPU? looking for it to be "In stock" so I can get this project going next week as im just super happy about finally being able to land a 3090 GPU! Any blocks out ther ethat will cool the rear mem as I keep reading the memory seems to run HOT so im very carefully wanting to choose a very good block setup for this card, money not an object. I see optimus cooling offers a block, but looks like it takes up to 6 weeks, i have no patients to wait that long! Please throw any idea at me if you have any, THANK YOU!


Optimus would be my first choice; although right now they don't do an active backplate, their backplate and waterblock are among the best.

On my loop I'm running Bykski waterblocks, and VRAM temperatures won't break 60 °C on the top card for prolonged gaming with the XOC 1000W BIOS capped at 75%. I would love to test how an Optimus waterblock performs on my loop, but for now I'm quite happy with the Bykski waterblocks on both my RTX 3090 GamingPros.

Hope this helps 

Thanks, Jura


----------



## newls1

What if I got a FCWB from somewhere and added this MP5 Waterworks backplate cooler I keep hearing about? Would that be better than just a backplate? And which MP5 do I want? I see parallel and serial options.


----------



## KedarWolf

newls1 said:


> thats a pre order with expected release in july...


Yes, I know, same as the Strix.


----------



## Bobbylee

newls1 said:


> what if I got a FCWB from somewhere and added this MP5 waterworks backplate cooler I keep hearing about, would that be better then just a "backplate"? And with MP5 do I want, i see parallel and serial options


Go for the series one; parallel has a worse flow rate. Bykski do full-cover blocks that are meant to be very good, especially on the rear mem. I mounted an EK RAM block to mine. Mining temps at +1500 on the mem: 72 °C.


----------



## newls1

Bobbylee said:


> Go for the series one, parallel has worse flow rate. Byski do full cover blocks meant to be very good especially on the rear mem. I mounted a ek ram block to mine. Mining temps at +1500 on the mem 72c


Bykski doesn't have one for the FTW3 series yet though.....


----------



## newls1

Guys, I hate to make a million posts in here, but I'm trying to catch up on the 3090 side of things as I'm coming from a 6900 XT... So I'm just trying to get all my ducks in a row and get everything I need ordered, so when my video card gets here I'll have everything ready to go to get this GPU in my loop and start gaming again... Here's what I have:

I have an eVGA FTW3 Ultra coming, and I'm looking to see what BIOS is recommended to flash to this card to enable Resizable BAR support and a 500+ watt allowance.
I guess for now I'm just gonna do a FCWB for the card and a serial MP5 for the rear backplate... will the MP5 still work okay even though my GPU sits in my case vertically?? Anything else I need to know?! Thank you for your input.


----------



## Toopy

gfunkernaught said:


> What are the dimensions of those heatsinks?


These were 40x40x11 mm, there are 10 all up








1.99 US$ | We Do Heatsink 40x40x11mm Black Anodized Aluminum Heat Sink Radiator with Thermal Pad (www.aliexpress.com)




I have also ordered some 8x22x10 ones for the strip down the edges; the backplate edge on the side away from the PCIe socket gets especially hot, as shown by the thermal imaging.

There are some 20x20x10 mm blocks that I think I will go for with the Zotac; this will enable me to go around the disco lights as well. Maybe I'll even look at 10x10 ones (just a lot of them).
I have a 15 mm gap between the rear of the board and my D14 CPU cooler.


----------



## gfunkernaught

Spoke too soon... Cold War just crashed on me, while I was in 1st place during Gun Game, of course...
Peak power usage was 392 W, so I no longer think it's VID bounce that caused the crash, as the voltage stays at 1075 mV as long as the power limit isn't reached.


----------



## newls1

Would this be a good FCWB for my FTW3? I'll use the MP5 on the rear backplate.... Alphacool Eisblock Aurora Acryl GPX-N RTX 3080/3090 FTW3 with Backplate


----------



## gfunkernaught

newls1 said:


> Would this be a good FCWB for my FTW3 and ill use the MP5 on the rear backplate.... Alphacool Eisblock Aurora Acryl GPX-N RTX 3080/3090 FTW3 with Backplate


I've read good things about those blocks for the 20 series. Not sure about 30s.


----------



## yzonker

gfunkernaught said:


> I've read good things about those blocks for the 20 series. Not sure about 30s.


Different card but 30 series.









Alphacool Aurora Plexi GPX-N RTX 3090/3080 GPU Water Block Review - How to turn 340 watts on a GeForce RTX 3080 into a frosty zone (www.igorslab.de)


----------



## KedarWolf

I'm redoing my EKWB Strix waterblock with 15 W/mK Gelid GP-Ultimate thermal pads, and when I can get my Strix active backplate, I'll have enough pads to do it as well.

They are really cheap in two packs, 90x50mm at QuietPC and the shipping is cheap as well.

Check them out, I think they'll be softer and compress better than the Thermalright pads I'm using right now. The Thermalright pads are super stiff and barely compress.

But if you wait until I get them in a week or two I'll let you know. They ship from the UK and the shipping for three two packs was under $10 CAD to Canada.

Edit: Looking at both the Strix waterblock and active backplate installation instructions, I realized it would be a good idea to have extra 1.0mm pads. 

When I ordered them, they automatically combined the extra pads with the existing order I'd made an hour earlier, and it only cost me an extra 70 cents for shipping.

That's pretty good service, no need for another $10 on shipping with a separate order.


----------



## newls1

I can't win for losing! Finally scored a 3090 FTW3 Ultra, and come to find out they are failing/being RMA'd in huge numbers for power-balancing issues..... Who here has one of these that was bought BEFORE MARCH (so you are 100% on Rev 0.1 of the PCB) without an issue?


----------



## GRABibus

newls1 said:


> I cant win to loose! Finally scored a 3090 FTW3 Ultra and come to find out there are hugely failing/RMAing them for power balancing issues..... Who here has one of these that was bought BEFORE MARCH (so you are 100% on Rev 0.1 of the PCB) without an issue?


I bought one last December which was completely failing with unbalanced power (75 W on the PCIe slot!!).
I sold it.


----------



## gfunkernaught

GRABibus said:


> I bought one in last December which was completed failing with unbalanced power (75W on PCIe slot !! ).
> I sold it.


75W PCIe slot power isn't normal?


----------



## GRABibus

gfunkernaught said:


> 75W PCIe slot power isn't normal?


Not when power is unbalanced.
On my strix I get 50W max


----------



## gfunkernaught

GRABibus said:


> Not when power is unbalanced.
> On my strix I get 50W max


I have always been getting a max of 75w pcie on my card with pretty much any bios


----------



## GRABibus

gfunkernaught said:


> I have always been getting a max of 75w pcie on my card with pretty much any bios


Is your power unbalanced on the 3x8 pin connectors ?

I mean one 8 pin slot with a ridiculous low power.


----------



## gfunkernaught

So has anyone been having issues with the EVGA 520W rebar BIOS? I'm getting frustrated because I keep having to lower my core and VRAM offsets to no avail. Cold War keeps crashing, this time at 2099 MHz and 1087 mV, which seems like plenty of voltage for that clock. Could someone link either the Suprim or Trio rebar BIOS? I'm done trying to OC with non-1kW BIOSes. At this point I'll just use the stock BIOS with rebar and not OC until the 1kW rebar BIOS becomes available. This is driving me nuts. Today the VID that GPU Boost chooses is 1087 mV, tomorrow it will be 1068 mV, the day after that it will choose 1075 mV, without me touching the offset.


----------



## gfunkernaught

GRABibus said:


> Is your power unbalanced on the 3x8 pin connectors ?
> 
> I mean one 8 pin slot with a ridiculous low power.


No the first two are around the same, so when I'm at say 480w total, they're around 160w or so, then the 3rd is like 90w.

Correction: Screenshot with more accuracy
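For anyone wanting to cross-check the OSD numbers, here's a small logging sketch using nvidia-smi (assuming it's on your PATH). Note that nvidia-smi only reports total board draw; the per-8-pin rail numbers still need HWiNFO or GPU-Z:

```python
import subprocess
import time

def parse_power_w(line: str) -> float:
    """Parse one line of power.draw CSV output, e.g. '392.45' -> 392.45."""
    return float(line.strip().split()[0])

def read_board_power() -> float:
    """Total board power draw in watts, as reported by the driver."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_power_w(out)

if __name__ == "__main__":
    for _ in range(10):  # log once a second for ~10 s
        print(f"{read_board_power():.1f} W")
        time.sleep(1)
```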


----------



## GRABibus

gfunkernaught said:


> No the first two are around the same, so when I'm at say 480w total, they're around 160w or so, then the 3rd is like 90w.
> 
> Correction: Screenshot with more accuracy
> View attachment 2512497





gfunkernaught said:


> No the first two are around the same, so when I'm at say 480w total, they're around 160w or so, then the 3rd is like 90w.


Which card do you have?
Is the screenshot with your stock BIOS or a custom BIOS like the 520 KP (which creates a mess for power readings)?


----------



## gfunkernaught

GRABibus said:


> Which card do you have?
> Is the screenshot with your stock BIOS or a custom BIOS like the 520 KP (which creates a mess for power readings)?


Trio with the evga 520w rbar bios.


----------



## GRABibus

gfunkernaught said:


> Trio with the evga 520w rbar bios.


Then you must not trust your power readings in the OSD.


----------



## gfunkernaught

GRABibus said:


> Then you must not trust your power readings in the OSD.


How far off are the OSD power readings of the evga bios?


----------



## geriatricpollywog

Is Port Royal still scoring 500 pts lower with the new version? I was out of the country for a month and just fired up my main rig and got a good Port Royal score with daily settings. No performance loss on my end.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


----------



## inedenimadam

0451 said:


> Is Port Royal still scoring 500 pts lower with the new version? I was out of the country for a month and just fired up my main rig and got a good Port Royal score with daily settings. No performance loss on my end.


I think it may have been nvidia drivers, but I can't say for sure. I lost 500ish points, but now I am back in the 15200 range, which is where my KPE likes to be with 1.15V


----------



## geriatricpollywog

inedenimadam said:


> I think it may have been nvidia drivers, but I can't say for sure. I lost 500ish points, but now I am back in the 15200 range, which is where my KPE likes to be with 1.15V


Yep, I just updated my NVIDIA drivers and the score dropped over 400 points. I'll be rolling back to 27.21.14.6589 

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


----------



## degenn

inedenimadam said:


> I think it may have been nvidia drivers, but I can't say for sure. I lost 500ish points, but now I am back in the 15200 range, which is where my KPE likes to be with 1.15V





0451 said:


> Yep, I just updated my NVIDIA drivers and the score dropped over 400 points. I'll be rolling back to 27.21.14.6589
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


There was actually a bad update that went out for Port Royal in early May that caused a bunch of people to lose points; I myself lost 400-700 pts. 3DMark acknowledged that there was an issue for some people, and it was promptly corrected with another update, all within a couple of weeks. The new build not only brought back scores for those who lost points but seems to have provided around a 100-200 pt increase across the board for all users.

Now I haven't updated my drivers during this whole process (was and still am on 466.27) so I can't comment about *0451*'s findings regarding latest drivers.


----------



## GRABibus

gfunkernaught said:


> How far off are the OSD power readings of the evga bios?


I don’t know….


----------



## GRABibus

degenn said:


> There was actually a bad update that went out for Port Royal in early May that caused a bunch of people to lose points, I myself lost 400-700pts. 3DMark acknowledge that there was an issue for some people and it was promptly corrected with another update, all within a couple weeks. The new build not only brought back scores for those who lost points but seems to have provided around a 100pt-200pt increase universally across the board for all users.
> 
> Now I haven't updated my drivers during this whole process (was and still am on 466.27) so I can't comment about *0451*'s findings regarding latest drivers.


The « 500 points » loss was due to a bad update.
This has been corrected with a new update.

This is not related to NVIDIA drivers.


----------



## newls1

Can someone with first hand knowledge please tell me what is now the Preferred bios to flash to a FTW3 Ultra 3090 to have 500+Watts and Re-Bar support?! I cant believe how many options there are and so many people with all sorts of different issues..... What do you recommend?


----------



## GRABibus

newls1 said:


> Can someone with first hand knowledge please tell me what is now the Preferred bios to flash to a FTW3 Ultra 3090 to have 500+Watts and Re-Bar support?! I cant believe how many options there are and so many people with all sorts of different issues..... What do you recommend?


If you have a power limit issue (unbalanced power on the 8-pin connectors), meaning you are limited to 1995 MHz and 0.98 V in benchmarks like TS or PR, this KFA2 one will fix it:









KFA2 RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





It is a 3x8-pin one, 390W.
No rebar.

Otherwise you can use the ones mentioned in this thread:









EVGA 3090 FTW POWER LIMIT BYPASS (www.overclock.net)


----------



## newls1

I'm still mega confused here... why are there so many BIOSes... so in order for me to get a 500W+ PL and rebar support, which one do I need?


----------



## GRABibus

Try this one, I use it for my strix :









EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
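Once flashed, one way to sanity-check that Resizable BAR actually took is the BAR1 size nvidia-smi reports: with rebar active, BAR1 spans the card's whole 24 GB of VRAM; without it you only get 256 MiB. A minimal sketch, assuming nvidia-smi is on PATH and its usual `-q -d MEMORY` output layout:

```python
import re
import subprocess

def bar1_total_mib(report: str) -> int:
    """Extract the BAR1 'Total' size in MiB from `nvidia-smi -q -d MEMORY`."""
    m = re.search(r"BAR1 Memory Usage\s*\n\s*Total\s*:\s*(\d+)\s*MiB", report)
    if not m:
        raise ValueError("BAR1 section not found")
    return int(m.group(1))

def rebar_active(report: str, vram_mib: int = 24576) -> bool:
    # Rebar on: BAR1 ~ full VRAM (24576 MiB on a 3090). Off: 256 MiB.
    return bar1_total_mib(report) >= vram_mib // 2

if __name__ == "__main__":
    out = subprocess.run(["nvidia-smi", "-q", "-d", "MEMORY"],
                         capture_output=True, text=True, check=True).stdout
    print("Resizable BAR:", "enabled" if rebar_active(out) else "disabled")
```

(Rebar also needs to be enabled in the motherboard BIOS and supported by the driver, so check those first if BAR1 stays at 256 MiB after flashing.)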


----------



## geriatricpollywog

degenn said:


> There was actually a bad update that went out for Port Royal in early May that caused a bunch of people to lose points, I myself lost 400-700pts. 3DMark acknowledge that there was an issue for some people and it was promptly corrected with another update, all within a couple weeks. The new build not only brought back scores for those who lost points but seems to have provided around a 100pt-200pt increase universally across the board for all users.
> 
> Now I haven't updated my drivers during this whole process (was and still am on 466.27) so I can't comment about *0451*'s findings regarding latest drivers.





GRABibus said:


> The « 500points » loss was due a bad update.
> This has been corrected with a new update.
> 
> This is not related to nvidia drivers.


I ran PR again after restarting my computer and got a 15328, so there are no issues with the current revision or drivers.

Edit: Whoops, I left my BIOS switch in the 500W position. After switching to the 520W position, I got a nice score bump.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)


----------



## Sheyster

GRABibus said:


> Try this one, I use it for my strix :
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


How does this one compare to the 520W KP BIOS with RBAR support in games? I know the old KP 520W BIOS won't run the middle fan at 100% on the Strix (max is 66%), which is an issue for air-cooled folks.


----------



## ScottRoberts91

Can the Gaming X Trio take 520 watts for daily usage?

I've been running 500 watts for a few months now on air, in effectively a wind tunnel. Doesn't break 67 °C.

Hitting the power limit in Days Gone at 500 W.


----------



## GRABibus

Sheyster said:


> How does this one compare to the 520W KP BIOS with RBAR support in games? I know the old KP 520W BIOS won't run the middle fan at 100% on the Strix (max is 66%), which is an issue for air-cooled folks.


You are fully right, and as I am an air-cooled folk, this one makes my card run cooler in games than the KP 520W.


----------



## Pepillo

ScottRoberts91 said:


> Can the gaming X trio take 520watts for daily usage.
> 
> I've been running 500watts for a few months now on air in effectly a wind tunnel. Doesn't break 67c.
> 
> Hitting power limit in days gone at 500w


Eight happy months with the 3090 Gaming X Trio and the 520w bios since it came out, currently with BAR and no problem. Water-cooled with a Bykski block.


----------



## gfunkernaught

Pepillo said:


> Eight happy months with the 3090 Gaming X Trio and the 520w bios since it came out, currently with BAR and no problem. Water-cooled with a Bykski block.


What offsets are you using? Do you play Cold War? I have a Trio with the 520w rbar bios and Cold War does not tolerate any overclock, while Cyberpunk is perfectly fine with a decent +105 core offset.


----------



## Pepillo

gfunkernaught said:


> What offsets are you using? Do you play Cold War? I have a Trio with the 520w rbar bios and Cold War does not tolerate any overclock, while Cyberpunk is perfectly fine with a decent +105 core offset.


For daily, a moderate +60 on the core and +250 on the memory. And since I have had it for those eight months, that's many hours and many games (also Cold War); it is 150% stable. I can bench 2190 MHz / 21,500. For me the most demanding is the new Metro. Anyway, you already know that no two units are the same: the silicon lottery.


----------



## gfunkernaught

Pepillo said:


> For daily, a moderate of +60 core and +250 memories. And since I have had it for those eight months, it is many hours and many games (also Cold War), it is 150% stable. I can bench 2.190 - 21.500. For me the most demanding is the new Metro. Anyway, you already know that no two units are the same, the silicon lottery


Of course every GPU is different. My issue is with the BIOS. With the 1kW BIOS I was able to run 2100-2115 daily with any game. I was getting stutters with the latest drivers in Cyberpunk, which the rebar BIOS eliminated, but it introduced OC stability issues with other games. Just now, I updated the drivers to 466.47 and ran Cold War. It crashed at the lobby screen where the character is walking, with a +90 core offset. This never happened with the 1kW BIOS. I guess I will have to use a stock non-OC'ed profile for games that have problems.


----------



## KedarWolf

Sheyster said:


> How does this one compare to the 520W KP BIOS with RBAR support in games? I know the old KP 520W BIOS won't run the middle fan at 100% on the Strix (max is 66%), which is an issue for air-cooled folks.


This BIOS scores in Port Royal quite a bit higher than the Suprim BIOS I was using.


----------



## ScottRoberts91

gfunkernaught said:


> Of course every gpu is different. My issue is with bios. With the 1kw bios I was able to run 2100-2115 daily with any game. I was getting stutters with the latest drivers in cyberpunk which the rebar bios have eliminated, but introduced OC stability issues with other games. Just now, I updated the drivers to 466.47 and ran Cold War. It crashed at the lobby screen where the character is walking, with a +90 core offset. This never happened with the 1kw bios. I guess I will have to use a stock non OC'ed profile for games that have problems.


There are issues with the latest drivers. I keep getting driver and system crashes with error code 117; I've reverted back to the March drivers in the hope this fixes it.

Happens while mining and gaming, seemingly at random.

If it's not a driver, it's either a new PSU or back to my stock BIOS and an RMA.


----------



## Pepillo

gfunkernaught said:


> Of course every gpu is different. My issue is with bios. With the 1kw bios I was able to run 2100-2115 daily with any game. I was getting stutters with the latest drivers in cyberpunk which the rebar bios have eliminated, but introduced OC stability issues with other games. Just now, I updated the drivers to 466.47 and ran Cold War. It crashed at the lobby screen where the character is walking, with a +90 core offset. This never happened with the 1kw bios. I guess I will have to use a stock non OC'ed profile for games that have problems.


Sorry, I can't help you; I don't have those problems. As I said, 150% stable for many months regardless of the drivers used. I haven't tried the 1k BIOS. I just tried the 480W Asus and the 500W EVGA for a few days before deciding on the 520W Kingpin, and since then my only change has been the BAR update. Good luck with your problem.


----------



## Toopy

So, an update on the AMP! Core Holo problems I am having ([Official] NVIDIA RTX 3090 Owner's Club): it seems it's a BIOS bug limiting the PL when mining with this card, and I am not alone. Zotac have also released an updated version of the card with the backplate/cooling issues (6 dies without thermal pads, and plastic/cavities above them) amended.
Not a very happy customer at present.

https://www.reddit.com/r/ZOTAC/comments/nlz9j6


----------



## yzonker

Toopy said:


> So, an update on the AMP! core holo problems I am having, seems its a bios bug limiting the PL when mining with this card and I am not alone. Zotac have also released a updated version of the card with the backplate/cooling issues amended.
> Not a very happy customer at present.
> 
> 
> __
> https://www.reddit.com/r/ZOTAC/comments/nlz9j6


If it's a bios bug and not hardware, try a different bios. Just put the base 430/450 watt KP bios on it since I think it's still air cooled? It's 3x8 pin right?


----------



## Toopy

yzonker said:


> If it's a bios bug and not hardware, try a different bios. Just put the base 430/450 watt KP bios on it since I think it's still air cooled? It's 3x8 pin right?


Yeah it's 3x 8pin, it's a custom pcb though, does this matter?


----------



## yzonker

Toopy said:


> Yeah it's 3x 8pin, it's a custom pcb though, does this matter?


Nope, not as far as I know. Given that even mixing up 2x8-pin and 3x8-pin can work, it's pretty likely it will.


----------



## gfunkernaught

Pepillo said:


> Sorry, I can't help you, I don't have those problems, as I said, 150% stable for many months regardless of the drivers used. I haven't tried the 1k bios, I just tried the 480w Asus and the 500w EVGA for a few days before deciding on the 520w Kipping, and since then my only change has been the BAR update. Good luck with your problem.


Ha, thanks. I think the issue extends far beyond just luck alone. I've read that other people are having issues with rebar BIOSes and their overclocks too.


----------



## Toopy

yzonker said:


> Nope. Not as far as I know. Given that even mixing up 2x8pin and 3x8pin can work, pretty likely it will.


Thanks, this worked.
First custom pcb card I have owned

Flashed this one








EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## yzonker

Toopy said:


> Thanks, this worked.
> First custom pcb card I have owned
> 
> Flashed this one
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Cool. This should be the rebar enabled version although I haven't flashed it. 









EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## Toopy

yzonker said:


> Cool. This should be the rebar enabled version although I haven't flashed it.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Thanks. I'm still running my old OC'd 3770K system, so no rebar for me; I don't feel the need to upgrade at present when I can run every game released at max settings in 1440p ultrawide.

With the BIOS I flashed I'm still limited to 410 W through gaming/benching, though; I think it's because 8-pin 1 and 3 are maxing out at 150 W yet 2 is only hovering around 60 W.
Is this the balancing issue discussed a few posts back?


----------



## yzonker

Toopy said:


> Thanks, I'm still running my old O/C'd 3770k system so no rebar for me, I don't feel the need at present when I can run every game released at present at max settings in 1440p ultra wide.
> 
> With the bios I flashed I'm still limited at 410w though gaming/benching, I think it's because 8pin 1 and 3 are maxing out at 150w yet 2 is only hovering around 60w.
> Is this the balancing issue discussed a few posts back?


Was it doing that with the original BIOS? I remember it did for mining, but I thought it was balanced when gaming?

If it wasn't, it might be some oddity with the KP BIOS on that specific Zotac card. There are a lot of other choices if you want to experiment. In some cases you'll lose some ports though, depending on the original card's port layout.

If it did have the same problem originally, then yeah, probably some kind of balancing problem.


----------



## ArcticZero

So I was getting random black screens again while putting any kind of load on my card. My profiles never exceed 490w for gaming and 320w for mining. Was going to do a full teardown but noticed it felt weird pulling out my power cables.










Just completely melted. 😱 Been using these cables for years now with no issues. Forgot what gauge these are but yeah, connecting them directly fixed everything (so far). I don't suspect anything else was damaged. But so far no more driver errors on event log, no more black screens.


----------



## yzonker

ArcticZero said:


> So I was getting random black screens again while putting any kind of load on my card. My profiles never exceed 490w for gaming and 320w for mining. Was going to do a full teardown but noticed it felt weird pulling out my power cables.
> 
> View attachment 2512635
> 
> 
> Just completely melted. 😱 Been using these cables for years now with no issues. Forgot what gauge these are but yeah, connecting them directly fixed everything (so far). I don't suspect anything else was damaged. But so far no more driver errors on event log, no more black screens.


I think typically contact degrades on connectors over time causing an increase in resistance and heat. 2x8pin card? PNY in your sig but I've forgotten what you have.


----------



## ArcticZero

yzonker said:


> I think typically contact degrades on connectors over time causing an increase in resistance and heat. 2x8pin card? PNY in your sig but I've forgotten what you have.


Yep it's a 2x8-pin PNY card, and despite it being shunt modded, I barely game lately and it was a mostly mining workload which is capped at 320w. Rails seemed balanced, and was using separate cables of course.

Oh well, it's alright. At this point after everything I've put my card through, the issue being as simple to fix as a cable is a huge relief either way. These were old extensions anyway and I should be able to do better research when I purchase a new set.


----------



## yzonker

ArcticZero said:


> Yep it's a 2x8-pin PNY card, and despite it being shunt modded, I barely game lately and it was a mostly mining workload which is capped at 320w. Rails seemed balanced, and was using separate cables of course.
> 
> Oh well, it's alright. At this point after everything I've put my card through, the issue being as simple to fix as a cable is a huge relief either way. These were old extensions anyway and I should be able to do better research when I purchase a new set.


I ditched my pretty extensions when I started running the KP XOC. Last thing I want is more plugs and longer wires.


----------



## SoldierRBT

I installed the KPE block. Changed the thermal paste to KPx on the core and Thermalright 12.8 W/mK 2mm thermal pads on the memory chips (first time watercooling a GPU). Temps are okay, maybe I was expecting too much. Core delta temp in gaming is 14C at 470-480W, which isn't that amazing for a $300 block.

Ran the same UV test. There's a 5C reduction in core and 10C in memory hotspot.

3090 KPE Hybrid 0.930v 2130MHz +1200 Mem Core: 45C Mem: 62C









3090 KPE HC 0.930v 2130MHz +1250 Mem Core: 40C Mem: 51C


----------



## holyshade

Heyya, I'm looking for a specific BIOS I've not seen anywhere: the official MSI Gaming X Trio with ReBAR support. It's on their BIOS update tool officially, but it just won't run for me without crashing. If anyone has already got this working and can dump their BIOS, that would be absolutely amazing. I don't want to void my warranty if I decide to go mining at some point, is all.


----------



## degenn

SoldierRBT said:


> I installed the KPE block. Changed thermal paste to KPx on the core and thermalright 12.8 w/mk 2mm thermal pads on the memory chips (First time watercooling GPU). Temps are okay, maybe I was expecting too much. Core delta temp in gaming is 14C at 470-480W which isn't that amazing for a $300 block.
> 
> Ran same UV test. There's a 5C reduction in core and 10C in memory hotspot
> 
> 3090 KPE Hybrid 0.930v 2130MHz +1200 Mem Core: 45C Mem: 62C
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2512667
> 
> 
> 
> 3090 KPE HC 0.930v 2130MHz +1250 Mem Core: 40C Mem: 51C
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2512668


Which vbios were you using for these runs?


----------



## degenn

holyshade said:


> Heyya, I'm looking for a specific BIOS I've not seen anywhere: the official MSI Gaming X Trio with ReBAR support. It's on their BIOS update tool officially, but it just won't run for me without crashing. If anyone has already got this working and can dump their BIOS, that would be absolutely amazing. I don't want to void my warranty if I decide to go mining at some point, is all.


All I could find:

MSI RTX 3090 VBIOS Gaming X Trio OC Bios ReBAR enabled
MSI RTX 3090 VBIOS Suprim X OC Bios ReBAR enabled


----------



## holyshade

degenn said:


> All I could find:
> 
> MSI RTX 3090 VBIOS Gaming X Trio OC Bios ReBAR enabled
> MSI RTX 3090 VBIOS Suprim X OC Bios ReBAR enabled



:O

Wow! Thanks, this is the exact version officially on the tool that I couldn't flash. Thank you so much!


----------



## SoldierRBT

degenn said:


> Which vbios were you using for these runs?


520W KPE bios


----------



## KedarWolf

ArcticZero said:


> Yep it's a 2x8-pin PNY card, and despite it being shunt modded, I barely game lately and it was a mostly mining workload which is capped at 320w. Rails seemed balanced, and was using separate cables of course.
> 
> Oh well, it's alright. At this point after everything I've put my card through, the issue being as simple to fix as a cable is a huge relief either way. These were old extensions anyway and I should be able to do better research when I purchase a new set.


CableMod Pro cable sets are really good cables. I've had zero issues with them, come with cable combs preinstalled on the cables, and I'm pretty sure you can buy individual cables in custom lengths now. 






Pro Cable Kits – CableMod







cablemod.com










Configurator – CableMod Global Store







store.cablemod.com


----------



## KedarWolf

SoldierRBT said:


> 520W KPE bios


A few days ago on this VBIOS I got 15356 in Port Royal, over 200 points higher than my best on the Suprim X BIOS.









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com





It might even be better than the KPE 520 but I haven't tested that theory.


----------



## SoldierRBT

KedarWolf said:


> A few days ago on this VBIOS I got 15356 in Port Royal, over 200 points higher than my best on the Suprim X BIOS.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> It might even be better than the KPE 520 but I haven't tested that theory.


My best run with the 520W was 15682 with the Hybrid 360mm AIO. 








I scored 15 682 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





With the HC Block I got 15639 today








I scored 15 639 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## degenn

SoldierRBT said:


> 520W KPE bios


In that case it's interesting that GPU-Z is reporting ~96% TDP @ ~415 watts Board Power Draw. Is it the ReBAR-enabled 520W KPE bios or the older one?

~96% TDP on the 520W bios should equate to ~500 watts Board Power Draw.



holyshade said:


> :O
> 
> Wow! thanks this is the exact version officially on the tool that I couldn't flash thank you so much!


No problem!


----------



## degenn

SoldierRBT said:


> My best run with the 520W was 15682 with the Hybrid 360mm AIO.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 682 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> With the HC Block I got 15639 today
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 639 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Did you happen to note Board Power Draw in those runs? Maybe something wonky going on with GPU-Z reporting because with those scores, you are clearly not power limited below 520W like I alluded to in my last post.


----------



## J7SC

In GPUz and HWInfo etc, does anyone know whether 'Hotspot' refers to VRAM, or to GPU - or is that still unknown (or varies by card vendor) ?


----------



## SoldierRBT

@degenn 

Yes, the power draw is low because I locked the voltage to 0.930v, which is the lowest it can do for a 2100MHz avg run in PR. The GPU-Z power draw percentage is also correct, since the max allowed is 120% = 520W (see the MSI Afterburner slider), so 96% = 416W.

I'm using the 520W bios I got with my card in December; rBAR isn't enabled. On the 15.6k run I locked voltage to 1.025v / 2205MHz. Power draw at that voltage is 518-523W. If I go higher it hits the power limit, which lowers the score.
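A quick way to sanity-check the percentage math above (assuming, as described, that GPU-Z's percentage is relative to the bios default power limit and that the slider max of 120% maps to the full 520W):

```python
# GPU-Z / Afterburner power-limit arithmetic from the post.
# Assumption: 120% on the slider corresponds to the bios maximum of 520 W,
# so the 100% ("default") limit is 520 / 1.2.
slider_max_pct = 120
slider_max_w = 520

default_limit_w = slider_max_w * 100 / slider_max_pct   # ~433 W default limit
board_power_w = default_limit_w * 96 / 100              # the 96% TDP reading

print(round(default_limit_w), round(board_power_w))  # 433 416
```

So a ~416W board power at "96% TDP" is consistent with a 520W bios whose percentage scale tops out at 120%, rather than evidence of a 500W draw.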


----------



## GRABibus

J7SC said:


> In GPUz and HWInfo etc, does anyone know whether 'Hotspot' refers to VRAM, or to GPU - or is that still unknown (or varies by card vendor) ?


It is written « GPU hotspot », no ?


----------



## Toopy

yzonker said:


> I think typically contact degrades on connectors over time causing an increase in resistance and heat. 2x8pin card? PNY in your sig but I've forgotten what you have.





ArcticZero said:


> Yep it's a 2x8-pin PNY card, and despite it being shunt modded, I barely game lately and it was a mostly mining workload which is capped at 320w. Rails seemed balanced, and was using separate cables of course.
> 
> Oh well, it's alright. At this point after everything I've put my card through, the issue being as simple to fix as a cable is a huge relief either way. These were old extensions anyway and I should be able to do better research when I purchase a new set.


Believe it or not, those Molex PCIe 6- and 8-pin connectors are only rated for 30 mating cycles. Their designed life cycle definitely seems a little short of reality.



https://www.molex.com/pdm_docs/ts/TS-5556-002-001.pdf


----------



## Toopy

yzonker said:


> Was it doing that with the original bios? I remember it did for mining but thought it was balanced when gaming?
> 
> If it wasn't, then it might be some oddity with the KP bios on that specific Zotac card. There are a lot of other choices if you want to experiment. In some cases you'll lose some ports though, depending on the original card's port layout.
> 
> If it did have the same problem originally then yea probably some kind of balancing problem.


Yep, it was balanced when gaming, I'm sure it was; wish I'd posted a GPU-Z screen here. I might try some other BIOSes and see, now that I know others work.


----------



## gfunkernaught

SoldierRBT said:


> @degenn
> 
> Yes the power draw is low because I locked the voltage to 0.930v which is the lowest It can do for 2100MHz avg run in PR. The GPU-Z power draw percentage is also correct since the max allow is 120% - 520W (see MSI afterburner slider). 96% = 416W
> 
> I’m using the 520W bios I got with my card in December. rBAR isn’t enable. On the 15.6k run I locked voltage to 1.025v 2205MHz. Power draw at that voltage is 518-523W. If I go higher it would hit power limit which lower the score.


When you say 1.025v 2205mhz, thats the point in the v/f curve you set right? What was your effective clock?


----------



## J7SC

GRABibus said:


> It is written « GPU hotspot », no ?


...nope - kind of inconclusive which it is referring to (GPU or VRAM ?)


----------



## SoldierRBT

gfunkernaught said:


> When you say 1.025v 2205mhz, thats the point in the v/f curve you set right? What was your effective clock?


Yes, the 1.025v voltage point +195, which is 2205MHz. I use the Classified tool to increase effective clocks to match MSI clocks. In this case, 2205MHz peak, 2160MHz avg effective clock in PR. I'm not sure why my avg clock is that low considering the avg temp was 44C.


----------



## gfunkernaught

SoldierRBT said:


> Yes, 1.025v voltage point +195 which is 2205MHz. I use classified tool to increase effective clocks to match MSI clocks. In this case, 2160MHz avg effective clock in PR


That is a kingpin you have right?


----------



## GRABibus

J7SC said:


> ...nope - kind of inconclusive which it is referring to (GPU or VRAM ?)
> 
> View attachment 2512716


In HWINfo should be written « GPU hotspot ».


----------



## SoldierRBT

Correct KPE HC


----------



## gfunkernaught

Just wanted to point out that the PCIe 8-pin power balancing isn't off on my Trio while using the EVGA bios. Here is a screenshot of the Trio ReBAR bios running on my Trio. The 3rd 8-pin has always read lower on my card, and PCIe slot +12V always caps at 75W, which is a hardware limitation. So I'm pretty sure the power readings reported by the EVGA bios on my card are accurate. I haven't clamped anything so I'm not certain, but... pretty sure.


----------



## newls1

What is about an average OC for a FTW3 Ultra with the 500W ReBAR bios? Is +125 core and +1000 mem set in MSI AB a decent starting point?


----------



## motivman

Anyone have an opinion on this? I went to switch power supplies on my Strix, and noticed that on one of my cards, one of the red LEDs that lights up when you disconnect the PCIe cable does not light up at all. I tried removing and reinserting the GPU, but still no red LED above that PCIe connector. The card works and boots up fine, and power draw looks normal in GPU-Z, but it looks like the red LED is dead??? Anyone ever experienced this with a Strix?


----------



## jura11

J7SC said:


> ...nope - kind of inconclusive which it is referring to (GPU or VRAM ?)
> 
> View attachment 2512716


As per Techpowerup 

As its name suggests, the hotspot is the hottest spot on the GPU, measured from a network of thermal sensors across the GPU die, unlike the conventional "GPU Temperature" sensor, which reads off a single physical location of the GPU die. AMD refers to this static sensor as "Edge temperature." In some cases, the reported temperature of this sensor could differ from the hotspot by as much as 20°C, which underscores the importance of hotspot. The sensor with the highest temperature measurement becomes the hotspot.
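The definition quoted above can be illustrated in a few lines; the sensor readings here are invented for the example:

```python
# Hotspot = hottest reading across the die's thermal sensor network,
# while the conventional "GPU Temperature" comes from one fixed location.
# All temperature values below are made up for illustration.
sensor_grid_c = [61.0, 63.5, 58.9, 72.4, 66.1, 70.8]
edge_c = sensor_grid_c[0]          # single fixed-location ("edge") sensor

hotspot_c = max(sensor_grid_c)     # the hottest sensor becomes the hotspot
delta_c = hotspot_c - edge_c       # TPU notes this can reach ~20 C on real cards

print(hotspot_c, round(delta_c, 1))  # 72.4 11.4
```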









NVIDIA GPUs Have Hotspot Temperature Sensors Like AMD


NVIDIA GeForce GPUs feature hotspot temperature measurement akin to AMD Radeon ones, according to an investigative report by Igor's Lab. A beta version of HWInfo already supports hotspot measurement. As its name suggests, the hotspot is the hottest spot on the GPU, measured from a network of...




www.techpowerup.com





Hope this helps 

Thanks, Jura


----------



## J7SC

GRABibus said:


> In HWINfo should be written « GPU hotspot ».





jura11 said:


> As per Techpowerup
> 
> As its name suggests, the hotspot is the hottest spot on the GPU, measured from a network of thermal sensors across the GPU die, unlike conventional "GPU Temperature" sensors, which reads off a single physical location of the GPU die. AMD refers to this static sensor as "Edge temperature." In some cases, the reported temperature of this sensor could differ from the hotspot by as much as 20°C, which underscores the importance of hotspot. The sensor with the highest temperature measurement becomes the hotspot.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA GPUs Have Hotspot Temperature Sensors Like AMD
> 
> 
> NVIDIA GeForce GPUs feature hotspot temperature measurement akin to AMD Radeon ones, according to an investigative report by Igor's Lab. A beta version of HWInfo already supports hotspot measurement. As its name suggests, the hotspot is the hottest spot on the GPU, measured from a network of...
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> Hope this helps
> 
> Thanks, Jura


...tx, but yeah, in HWInfo everything is prefixed with 'GPU'; *just trying to figure out where that sensor is exactly*...


----------



## jura11

@J7SC

I have no idea where that GPU hot-spot temperature sensor is; it's not disclosed by Nvidia. On AMD GPUs the static sensor is the edge temperature.

You should see 10-13°C difference or rather delta between the GPU core and GPU hot-spot temperature on good coolers 

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

Does anyone know if there is an indicator that a game is using ReBAR? Or is it "enabled" all the time? I'm thinking of something like the SLI indicator. I'm just curious to see if games that don't support or use ReBAR behave differently than those that do.


----------



## Falkentyne

jura11 said:


> @J7SC
> 
> I have no idea where that GPU hot-spot temperature sensor it is, its not disclosed by Nvidia, on AMD GPU it edge temperature
> 
> You should see 10-13°C difference or rather delta between the GPU core and GPU hot-spot temperature on good coolers
> 
> Hope this helps
> 
> Thanks, Jura


The hotspot delta has an absolute minimum cap that it won't go under (besides a margin of error of about 0.1C).
On the 3090 FE, that's 10C. This may be a BIOS-set register or limitation. It can go way above 10C but never below 10C (9.9C).
Thermspy 3.3.0 shows a hotspot delta of 8C, fixed, regardless of what HWiNFO or GPU-Z reads out. That might be the absolute minimum delta allowed by firmware, settable by the vbios, but it's definitely not the active delta.
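If that floor is real, reported values would behave like this clamp (a sketch of the hypothesis, with the 10C floor taken from the 3090 FE observation above; actual firmware behavior is not documented):

```python
# Sketch: firmware enforces a minimum hotspot-minus-core delta, so the
# reported hotspot never drops below core + MIN_DELTA_C even when the
# true measured delta is smaller. MIN_DELTA_C = 10 per the 3090 FE post.
MIN_DELTA_C = 10.0  # assumed firmware floor

def reported_hotspot(core_c: float, true_hotspot_c: float) -> float:
    """Return the hotspot value a monitoring tool would display."""
    return max(true_hotspot_c, core_c + MIN_DELTA_C)

print(reported_hotspot(40.0, 46.0))  # 50.0 -- clamped up to core + 10
print(reported_hotspot(40.0, 58.0))  # 58.0 -- above the floor, unchanged
```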


----------



## J7SC

Falkentyne said:


> The hotspot delta has an absolute minimum cap that it won't go under (besides margin of error about 0.1C).
> On 3090 FE, that's 10C. This may be a BIOS set register or limitation. It can go way above 10C but never below 10C (9.9C).
> Thermspy 3.3.0 shows a hotspot delta of 8C, fixed, regardless of what HWinfo or GPU-Z reads out. Might be the absolute minimum delta that is allowed by firmware that can be set by the vbios but it's definitely not the active delta.


...interesting! I've been tracking Hotspot on both my 3090 and 2080 Tis... it definitely seems to have a strong input to the boost table.

On another note, I seem to recall that you mentioned thermal putty before (as opposed to thermal paste or pads). What's the best pliable, long-lasting thermal putty out there with a decent W/mK (say, above 8)? Brand name? Links? Thanks in advance.


----------



## Toopy

J7SC said:


> ...interesting ! I've been tracking Hotspot on both my 3090 and 2080 Tis...it definitely seems to have strong input to the boost table
> 
> On another note, I seem to recall that you mentioned thermal putty before (as opposed to thermal paste or pads). What's the best pliable, long-lasting thermal putty out there with a decent W/mK (say above 8) ? Brand name ? Links ? Thanks in advance


Adding my experience:
I've only used 1 brand, TG-PP-10 from T-Global: TG-PP-10 Silicone Thermal Putty - T-Global Technology,
I ordered it from Digi-Key: TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey

50g will do one 3090, front and back, with a little spare; this obviously depends on the thickness of the stock pads.

It works really well and is completely reusable.
I used it initially on my 3090, then went to shims and it came off easily; I reused some between the shims and the HSF (1.5mm shims in a 2mm gap, so 0.5mm of putty).
I find the best thing about it is that it's soft, so you can use it to fill gaps etc., but the pressure that the HS mount exerts allows it to flow and fill the voids without preventing correct seating.


----------



## J7SC

Toopy said:


> Adding my experience
> I've only used 1 brand, TGPP10 from Tglobal TG-PP-10 Silicone Thermal Putty - T-Global Technology
> It works really well, is completely reusable.
> I used it initially on my 3090, then went to shims and it came off easily; I reused some between the shims and the HSF (1.5mm shims in a 2mm gap, so 0.5mm of putty).
> I find the best thing about it is that it's soft, so you can use it to fill gaps etc., but the pressure that the HS mount exerts allows it to flow and fill the voids without preventing correct seating.


 Thanks !


----------



## newls1

newls1 said:


> what is about average OC for a FTW3 Ultra with 500w re bar bios... Is +125core and +1000 mem set in MSI AB a decent starting point?


anyone have any feedback about this?


----------



## Robin Jacques

Hello guys 

I recently got my 3090 FE, and the EKWB Quantum Vector that goes with it

I followed EK's assembly and thermal pad placement instructions, but it looks like the backplate doesn't fit really well (see pictures hereafter): it is also slightly bent in some places.

I don't know if other users on this topic have this GPU/WB combo, but if so, I'd love to get their feedback on the assembly and also to know if they have the same end results 

Thanks beforehand !


----------



## des2k...

Robin Jacques said:


> Hello guys
> 
> I recently got my 3090 FE, and the EKWB Quantum Vector that goes with it
> 
> I followed EK assembly's and thermal pads placing instructions, but It looks like the backplate doesn't fit really well ( see pictures hereafter ) : it is also slightly bent in some places
> 
> I don't know if other users on this topic have this GPU/WB combo, but if so, I'd love to get their feedback on the assembly and also to know if they have the same end results
> 
> Thanks beforehand !
> 
> 
> View attachment 2512796
> View attachment 2512794
> View attachment 2512795


looks fine to me, for good mem temps the EK backplate will have a slight bend at the front / side for pad pressure

in my experience, trying to get the backplate perfectly flat will give you bad memory temps under extreme load

my block is the reference not the FE


----------



## newls1

Just pre-ordered the EK Vector Active Backplate and FCWB for my FTW3 Ultra 3090 directly from EK, so maybe in 67 weeks I might see it, but CANT WAIT!! W00T.. can't wait to get this card on water.


----------



## PLATOON TEKK

Edit: ignore


----------



## gfunkernaught

I'm running into an issue now where Cyberpunk crashes within a minute of being in-game with load on the gpu. I flashed the Trio rebar bios, DDU'ed the drivers, cleared nv_cache, disabled MSI AB and no overclocks at all. Disabled GOG overlay, even excluded the cyberpunk folder from Security Center. Same result, driver keeps crashing. I even rolled back to 466.27, same result. I haven't tried flashing back to the bios that came with the card yet. Any suggestions?


----------



## KedarWolf

gfunkernaught said:


> I'm running into an issue now where Cyberpunk crashes within a minute of being in-game with load on the gpu. I flashed the Trio rebar bios, DDU'ed the drivers, cleared nv_cache, disabled MSI AB and no overclocks at all. Disabled GOG overlay, even excluded the cyberpunk folder from Security Center. Same result, driver keeps crashing. I even rolled back to 466.27, same result. I haven't tried flashing back to the bios that came with the card yet. Any suggestions?


I'll test Cyberpunk when I get home and see if it's a game issue, though I'm on the 500W XOC ReBAR BIOS.


----------



## gfunkernaught

KedarWolf said:


> I'll test Cyberpunk when I get home, see if it's a game issue, but I'm on the 500W XOC rebar BIOS though.


I tried those as well as the 520W rebar bios, same thing. That 520w bios also gave me trouble with cold war and titanfall 2. I just flashed back to my stock rom that I saved when I got the card, so rebar disabled, cyberpunk crashed within a few minutes. Again, no overclocking at all, no core no mem, left power limit sliders at default. Even prevented AB from starting up. The only thing I enabled recently was core isolation under Settings>Device Security, but why would that cause the video driver to crash? I just found out that there is a Beta version 2 motherboard bios for my board. I guess I could try that. But everything was working fine when I first started using rebar.

EDIT: I am already on the latest/only version of my mobo's bios that supports rebar.


----------



## yzonker

gfunkernaught said:


> I tried those as well as the 520W rebar bios, same thing. That 520w bios also gave me trouble with cold war and titanfall 2. I just flashed back to my stock rom that I saved when I got the card, so rebar disabled, cyberpunk crashed within a few minutes. Again, no overclocking at all, no core no mem, left power limit sliders at default. Even prevented AB from starting up. The only thing I enabled recently was core isolation under Settings>Device Security, but why would that cause the video driver to crash? I just found out that there is a Beta version 2 motherboard bios for my board. I guess I could try that. But everything was working fine when I first started using rebar.
> 
> EDIT: I am already on the latest/only version of my mobo's bios that supports rebar.


Interesting. I just went through this with CP yesterday. I had been running it on the KP XOC and it seemed fine trying out my new external rad. (which was a huge improvement) 

Then I decided to flash back to the Galax 390w bios to compare framerates since it has reBar enabled. Took quite a bit longer than what you're seeing, but it would crash after 10 to 20 minutes even with conservative settings. Seemed like it might be stable at bone stock 350w with no offsets though. But it took so long to crash sometimes I can't be sure. 

In my case though, running DDU in safe mode may have fixed it. Out of desperation I also relaxed my main mem timings some and changed to a mild all-core o/c on my 5800X that has always worked (although I hadn't touched any bios settings in weeks).

Anyway, I've put in 4 or 5 hours now, including just putting V on top of a car and letting the game run for an hour a couple of times. No crashes so far, even with my normal o/c in Afterburner. Still haven't changed back my bios settings though.


----------



## Lord of meat

newls1 said:


> anyone have any feedback about this?


Throw it away!
Just kidding... run the Port Royal stress test and see if it can finish the 20 loops. The higher the memory clock, the lower the core clock will be; it depends on the balance.
Sometimes when the memory error-corrects, it will lower performance.


----------



## gfunkernaught

yzonker said:


> Interesting. I just went through this with CP yesterday. I had been running it on the KP XOC and it seemed fine trying out my new external rad. (which was a huge improvement)
> 
> Then I decided to flash back to the Galax 390w bios to compare framerates since it has reBar enabled. Took quite a bit longer than what you're seeing, but it would crash after 10 to 20 minutes even with conservative settings. Seemed like it might be stable at bone stock 350w with no offsets though. But it took so long to crash sometimes I can't be sure.
> 
> In my case though running DDU in safe mode may have fixed it. I did relax my main mem timings some and changed to a mild all core o/c on my 5800x that has always worked just out of desperation (although I hadn't touched any bios settings in weeks).
> 
> Anyway, I've put in 4 or 5 hours now, including just putting V on top of a car and letting the game run for an hour a couple of times. No crashes so far, even with my normal o/c in Afterburner. Still haven't changed back my bios settings though.


So non-rbar bios are stable for you? I rolled back my mobos bios to non rbar and the stock trio bios with no offsets. Played cyberpunk for hours no crash. Have to test the other games to be sure.


----------



## yzonker

gfunkernaught said:


> So non-rbar bios are stable for you? I rolled back my mobos bios to non rbar and the stock trio bios with no offsets. Played cyberpunk for hours no crash. Have to test the other games to be sure.


With reBar is what I'm running now and seems stable.


----------



## J7SC

...haven't had any problems either with Asus Strix r_BAR or KPE 520 r_BAR in games such as CP2077...mind you, I don't crank GPU or VRAM up to benching-settings when I play but just use a mild oc...


----------



## ViRuS2k

Man, when is someone going to leak a 1000W bios with ReBAR enabled? Surely someone has figured out by now what the differences are between otherwise-identical BIOSes and compared them in hex to see what code has been added to enable ReBAR :/ :/

We need a 1000W ReBAR bios lol


----------



## gfunkernaught

J7SC said:


> ...haven't had any problems either with Asus Strix r_BAR or KPE 520 r_BAR in games such as CP2077...mind you, I don't crank GPU or VRAM up to benching-settings when I play but just use a mild oc...


So both the Trio rBar and KP 520w rBar bios were crashing with no offset at all. I turned things down one at a time. Started with the core, then mem, then eventually the power limit slider. Eventually landed on straight bios defaults with AB not running at all. Still crashed. I believe it might have something to do with my motherboard's compatibility with rBar. MSI listed the firmware as being beta. I gotta do more testing tonight.


----------



## ViRuS2k

gfunkernaught said:


> So both the Trio rBar and KP 520w rBar bios were crashing with no offset at all. I turned things down one at a time. Started with the core, then mem, then eventually the power limit slider. Eventually landed on straight bios defaults with AB not running at all. Still crashed. I believe it might have something to do with my motherboard's compatibility with rBar. MSI listed the firmware as being beta. I gotta do more testing tonight.


Might wanna try the Trio X 3090 BIOSes that work for me. I tried the KP 520W ReBAR bios and it was pretty unstable, crashing at around 1980MHz, but memory was fine no matter the bios; my card can game easily at +1300-1400.

The most stable BIOSes I have found for OCing my Trio X so far are the 450W Suprim X ReBAR bios and the EVGA XOC 500W ReBAR bios, followed lastly by the XOC 1000W bios.
I'm going to test the HOF OC LAB 500W bios tonight or tomorrow and compare, though I'm unsure if it's ReBAR enabled.
Believe it or not, the bios I like the most out of all of them so far is the Suprim X 450W bios with its 1860MHz boost clock, followed by the 500W ReBAR EVGA bios, as it has a good starting boost clock of 1920MHz and is good for people who want the fastest out-of-the-box card without messing with overclocking tools.
A good test for overclocking these 3090s is to find games with rendering sliders, like Resident Evil 7, Village, or Days Gone, and max them up to 170-200%; that shows how high a clock your card holds under extreme load and temps. It doesn't matter how fast your card runs under light load or low temps; if you really max your card out you will see it lowering clocks, unless of course you're on the 1000W bios or shunt modded.

Also, gfunkernaught, if your card is crashing at defaults I would look into the PSU as the number one suspect, or an unstable CPU OC, unstable memory OC, your ring clock, or memory timings.


----------



## KedarWolf

Can someone link the 520W XOC BIOS or is the Kipping one the one to use? I found that one.


----------



## Bal3Wolf

Just got my EVGA 3090 Kingpin Hydro Copper ordered, will be here Friday, along with an MP5WORKS active backplate and a new 27GN950-B 4K monitor. Just dropped 4500 bucks this week.


----------



## gfunkernaught

ViRuS2k said:


> Might wanna try the Trio X 3090 BIOSes that work for me. I tried the KP 520W ReBAR bios and it was pretty unstable, crashing at around 1980MHz, but memory was fine no matter the bios; my card can game easily at +1300-1400.
>
> The most stable BIOSes I have found for OCing my Trio X so far are the 450W Suprim X ReBAR bios and the EVGA XOC 500W ReBAR bios, followed lastly by the XOC 1000W bios.
> I'm going to test the HOF OC LAB 500W bios tonight or tomorrow and compare, though I'm unsure if it's ReBAR enabled.
> Believe it or not, the bios I like the most out of all of them so far is the Suprim X 450W bios with its 1860MHz boost clock, followed by the 500W ReBAR EVGA bios, as it has a good starting boost clock of 1920MHz and is good for people who want the fastest out-of-the-box card without messing with overclocking tools.
> A good test for overclocking these 3090s is to find games with rendering sliders, like Resident Evil 7, Village, or Days Gone, and max them up to 170-200%; that shows how high a clock your card holds under extreme load and temps. It doesn't matter how fast your card runs under light load or low temps; if you really max your card out you will see it lowering clocks, unless of course you're on the 1000W bios or shunt modded.
>
> Also, gfunkernaught, if your card is crashing at defaults I would look into the PSU as the number one suspect, or an unstable CPU OC, unstable memory OC, your ring clock, or memory timings.


Never had an issue before using rBAR, other than the obvious too-high OC. But crashing on the stock BIOS without an OC? I even tried the non-rBAR Trio BIOS while still having rBAR enabled in the motherboard BIOS, and that wasn't stable. So I'm back on non-rBAR all the way. I'll check out the Suprim BIOS again. And yes, the 1kW BIOSes are my favorite for my card.


----------



## KedarWolf

I enabled rebar on non-rebar apps/games in the global driver profile with Nvidia Inspector and got my best ever Port Royal with the 520W Kingpin Kipping BIOS.










Here is how to enable reBAR on non-whitelisted apps/games (I made my own thread for more visibility on www.overclockers.co.uk):

1. Download Nvidia Inspector, ideally at least version 2.3.
2. In Nvidia Inspector, look for the icon second from the right that kind of looks like a hammer; if you hover over it, it says "Show unknown setting from Nvidia Predefined Profiles"...



I scored 15 595 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) on www.3dmark.com


----------



## KedarWolf

Broke 15600 by quite a bit. 









I scored 15 678 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) on www.3dmark.com


----------



## J7SC

KedarWolf said:


> Can someone link the 520W XOC BIOS or is the Kipping one the one to use? I found that one.


The 'Kipping' one on TPU is the one I use.


----------



## Gypsycurse

Hi all, 

Forgive me if someone has asked this already.

I recently stripped down my 3090 FE and reapplied the thermal pads and paste; now I am looking to overclock it beyond the 114% power limit. 
What is the best BIOS to use, assuming I continue to use the Nvidia cooler?

Thanks

Gypsy


----------



## yzonker

Gypsycurse said:


> Hi all,
> 
> Forgive me if someone has asked this already.
> 
> I recently stripped down my 3090 FE and reapplied the thermal pads and paste; now I am looking to overclock it beyond the 114% power limit.
> What is the best BIOS to use, assuming I continue to use the Nvidia cooler?
> 
> Thanks
> 
> Gypsy


To my knowledge there isn't a better BIOS, due to the unique 12-pin connector on the card. From what I've seen, the only option for more power on those is a shunt mod.


----------



## Falkentyne

Gypsycurse said:


> Hi all,
> 
> Forgive me if someone has asked this already.
> 
> I recently stripped down my 3090 FE and reapplied the thermal pads and paste; now I am looking to overclock it beyond the 114% power limit.
> What is the best BIOS to use, assuming I continue to use the Nvidia cooler?
> 
> Thanks
> 
> Gypsy


The "best" BIOS to use would be the 1000W Kingpin BIOS, where 100% is 667W, but the problem is that NVFlash won't flash any non-FE BIOS onto an FE card. Someone somewhere has a patched NVFlash without this restriction but won't release it, because if you brick your BIOS and can't reflash it with another card or the iGPU (if NVFlash fails to flash), you're completely screwed. No one has the courage to test this, for very obvious reasons.

The issue is that
1) The FE vbios chip is a nonstandard UDFN8 chip rather than the normal SOIC8 chip used on other boards. So if you bricked your BIOS, a hardware programmer won't be able to flash it because there is no "in-line" clip (like the famous Pomona 5250 clip) that would attach to the legs to force flash it with another system (or a battery powered offline programmer), along with the REQUIRED 1.8v adapter.

2) Desoldering the FE chip is next to impossible without special equipment, and I haven't been able to find a UDFN-8 socket adapter (For HW programmers) anywhere for sale, at least not anywhere reputable for less than $100.

So if you want 520W-650W (depending on what you're running), you need to shunt mod. 

Best way (but very hard): Desolder the 5 mOhm shunts (very difficult without pre-heating the board and a lot of prior experience) and replace them with 3 mOhm shunts. This gives no clearance issues with the backplate whatsoever.

Good way: Stack 5 mOhm shunts on top of the existing shunts: flux, build a solder bridge on the edges of the original shunt, flux again, then apply the new shunt on top and melt the solder so the shunt sits fully flat and evenly soldered. This method is difficult here because these shunts have 'depressed' edges, which marks them as 2W shunts. On shunts with completely flush edges (1W shunts, which some AIBs use, though some use 2W as well; flush with the middle black housing = 1W, depressed metal edges = 2W) it's extremely easy.

The drawback of the "good way" is backplate clearance over the stacked shunt, so you need to make sure the shunt is completely flat, with absolutely NO solder left on top of the stacked shunt whatsoever, and use 2mm thermal pads to give the backplate extra clearance (also stick a piece of 3M Kapton tape on the backplate behind the stacked shunts).
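For a rough sense of what these mods do to the power limit: the card computes board power from the voltage drop across its shunts, so lowering the shunt resistance makes it under-read by the ratio of new to old resistance. A minimal sketch of that arithmetic (the 390W reported limit is just an illustrative number, not taken from this post):

```python
def effective_power(reported_limit_w, r_original_mohm, r_new_mohm):
    """Actual board power when the controller reads reported_limit_w.

    A smaller shunt makes the card under-read power by the factor
    r_new / r_original, so the real draw scales up by the inverse.
    """
    return reported_limit_w * r_original_mohm / r_new_mohm

# Replacing the 5 mOhm shunts with 3 mOhm ones:
print(effective_power(390, 5, 3))                  # 650.0
# Stacking 5 mOhm on top of 5 mOhm (parallel = 2.5 mOhm):
print(effective_power(390, 5, (5 * 5) / (5 + 5)))  # 780.0
```

This is only the electrical scaling; the actual power delivery and cooling still have to survive the extra wattage.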


----------



## yzonker

Well I had the right answer for the wrong reason. Lol


----------



## Gypsycurse

Well that wasn't the answer I wanted, but it was comprehensive and detailed, so I guess barring any software changes, I'll live with 114%  

thanks for all the information


----------



## gfunkernaught

Some of you may remember the issues I had when I installed another 16GB of RAM to my system, well turns out, a seemingly faulty DIMM was the root of all the issues I was having with reBAR. I uninstalled the newer pair first, then tested without rebar, stock everything. Played without a single crash or hitch. Swapped the old DIMMs with the new and ran memtest, passed. Just played cyberpunk, rebar, for 2-3hrs, no crash. Probably jinxing myself but I am fairly certain this was a memory issue. 

I noticed one of the old DIMMs had some scratches and two of the pins were black. The dirt did come off, but there was visible damage on a few pins; I don't even know how it worked at all. I checked the slots and didn't see any dirt or bent pins. It could also be that my motherboard just doesn't like four DIMMs. I may look for 2x16GB at some point. But yeah, hopefully this is it and I can move forward with reBAR.


----------



## J7SC

gfunkernaught said:


> Some of you may remember the issues I had when I installed another 16GB of RAM to my system, well turns out, a seemingly faulty DIMM was the root of all the issues I was having with reBAR. I uninstalled the newer pair first, then tested without rebar, stock everything. Played without a single crash or hitch. Swapped the old DIMMs with the new and ran memtest, passed. Just played cyberpunk, rebar, for 2-3hrs, no crash. Probably jinxing myself but I am fairly certain this was a memory issue.
> 
> I noticed one of the old DIMMs had some scratches and two of the pins were black. The dirt did come off but there was visible damage on a few pins, I don't even know how it worked at all. I check the slots didn't see any dirty or bent pins. It could also be that my motherboard just doesn't like four DIMMs. I may look for 2x16GB at some point. But yeah, hopefully this is it and I can move forward with reBar.


...with reBAR extending the access between system RAM and VRAM, that does make some sense. Weird, though, re the black pins you described... heat damage, perhaps combined with a bit of dirt that got in there?

By coincidence, I picked up a new set of 32GB (4x8 Samsung-B) CL15 DDR4 4000 last week for the system w/ the 3090 in it...works like a charm.


----------



## yzonker

gfunkernaught said:


> Some of you may remember the issues I had when I installed another 16GB of RAM to my system, well turns out, a seemingly faulty DIMM was the root of all the issues I was having with reBAR. I uninstalled the newer pair first, then tested without rebar, stock everything. Played without a single crash or hitch. Swapped the old DIMMs with the new and ran memtest, passed. Just played cyberpunk, rebar, for 2-3hrs, no crash. Probably jinxing myself but I am fairly certain this was a memory issue.
> 
> I noticed one of the old DIMMs had some scratches and two of the pins were black. The dirt did come off but there was visible damage on a few pins, I don't even know how it worked at all. I check the slots didn't see any dirty or bent pins. It could also be that my motherboard just doesn't like four DIMMs. I may look for 2x16GB at some point. But yeah, hopefully this is it and I can move forward with reBar.


I wonder why the problem would show up now, though? Had you changed anything else on your system? Edit: somehow missed that you said the pins had problems. 

I put my BIOS settings back to what I had before CP started crashing. Still seems stable. It must have just been running DDU that fixed mine. I assume this is due to something getting set wrong or corrupted when flashing BIOSes back and forth.

Everyone: what do you do when you flash a different BIOS? Just reboot and not reinstall/DDU drivers as long as Win10 detects the card again? Or do you DDU, or at least run the clean install from the Nvidia installer?


----------



## gfunkernaught

@yzonker @J7SC
What I still haven't confirmed is whether the DIMM is bad OR the motherboard doesn't like four DIMMs. I'm leaning towards the latter. The dirt came off after a few minutes of nail scratching. I then saw some pits or scratches on the copper contacts; I don't think calcium can cut copper 😅. I'll take a picture of it later so you can see. Or maybe the ICs have to be exactly the same in order to use four DIMMs? One set is Samsung, the other is Hynix. Who knows?


----------



## ViRuS2k

yzonker said:


> I wonder why the problem would show up now though? Had you changed anything else on your system? Edit, somehow missed you said the pins had problems.
> 
> I put my bios settings back to what I had before CP started crashing. Still seems stable. Must have just been running DDU that fixed mine. I assume this is due to something getting set wrong or corrupted when flashing bios' back and forth.
> 
> Everyone: What do you guys do when you flash a different bios. Just reboot and not re-install/DDU drivers as long as Win10 detects the card again? Or do you DDU or at least run the clean install from the Nvidia installer?


Just do what I do: flash the card in Windows using NVFlash.
Don't forget to disable the card in Device Manager first, then flash, then re-enable, then reboot. Windows 10 will redetect the card and install drivers automatically; just make sure the correct driver was reinstalled. If not, go to Device Manager, do a driver update from there, select the correct driver, install it, and finally reboot.


----------



## yzonker

ViRuS2k said:


> Just do what I do: flash the card in Windows using NVFlash.
> Don't forget to disable the card in Device Manager first, then flash, then re-enable, then reboot. Windows 10 will redetect the card and install drivers automatically; just make sure the correct driver was reinstalled. If not, go to Device Manager, do a driver update from there, select the correct driver, install it, and finally reboot.


Yea that's the exact process I always use.


----------



## yzonker

gfunkernaught said:


> @yzonker @J7SC
> What I still haven't confirmed is if the DIMM is bad OR the motherboard doesn't like four DIMMs. I'm leaning towards the latter. The dirt came off after a few minutes of nail scratching. I then saw some pits or scratches on the copper contacts. I don't think calcium can cut copper 😅. I'll take a picture of later so you can see. Or maybe the ICs have to be exactly the same in order to use four DIMMs? One set is Samsung the other is Hynix. Who knows?


Old days we used a pencil eraser to clean contacts. Of course you need to own a pencil with an eraser. I'm not sure I do anymore. LOL

I had a machine years ago that occasionally would not post. Just removing the RAM and installing it again would fix it. Never could see anything wrong with the contacts and it never crashed, etc... once it posted.


----------



## gfunkernaught

yzonker said:


> Old days we used a pencil eraser to clean contacts. Of course you need to own a pencil with an eraser. I'm not sure I do anymore. LOL
> 
> I had a machine years ago that occasionally would not post. Just removing the RAM and installing it again would fix it. Never could see anything wrong with the contacts and it never crashed, etc... once it posted.


That is what I do for systems that don't POST: musical DIMMs. But when I start seeing random errors that point to memory, moving the DIMMs around won't fix that. I'm looking at my mobo's RAM compatibility list now; might do some shopping. Thinking of saying f'it and maxing out to 64GB. I do some workstation-related tasks on my PC as well as gaming. I trust Corsair, but I'm also not looking to overclock the RAM past what XMP would do. What brands would you recommend?


----------



## Bal3Wolf

So my 3090 Kingpin HC comes in a few hours. A few questions for you guys: do I need to do anything for reBAR? And any tweaks for benchmarking?


----------



## jura11

Personally I would go with G.SKILL RAM. @gfunkernaught, are you on AMD or Intel? 

I'm running 64GB of G.SKILL Trident Z Neo 3600MHz at 1800MHz FCLK with no issues.

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

jura11 said:


> Personally I would go with G.SKILL RAM @gfunkernaught are you on AMD or Intel?
> 
> Running G.SKILL TridentZ Neo 3600MHz at 1800MHz FCLK and 64GB and no issues
> 
> Hope this helps
> 
> Thanks, Jura


I have Intel. How does G.Skill compare to Corsair in terms of timings and longevity?

Also, should I stick with two DIMMs or go for four?


----------



## J7SC

gfunkernaught said:


> I have Intel. How does gskill compare to Corsair in terms of timings and longevity?


Apart from one older Corsair Dominator kit, I have been running G.Skill exclusively on both Intel desktop and HEDT, and AMD AM4 and TR, without compatibility issues since '12 (DDR3, DDR4). G.Skill DDR4 GTZR works great for both CPU brands.


----------



## yzonker

Yea I've been running this kit:

G.Skill Trident Z NEO Series 16GB (2 x 8GB) 288-Pin DDR4-3600 CL16-19-19-39 1.35V Dual Channel, Model F4-3600C16D-16GTZNC (www.amazon.com)

Edit: that's my old pair; I upgraded to the same thing in a 32GB kit (www.newegg.com).


----------



## gfunkernaught

yzonker said:


> Yea I've been running this kit:
> 
> G.Skill Trident Z NEO Series 16GB (2 x 8GB) 288-Pin DDR4-3600 CL16-19-19-39 1.35V Dual Channel, Model F4-3600C16D-16GTZNC (www.amazon.com)
> 
> Edit: that's my old pair; I upgraded to the same thing in a 32GB kit (www.newegg.com).


Can the LEDs be controlled?


----------



## jura11

In terms of timings and longevity, I think G.SKILL are my favourite RAM. I don't have the best experience with Corsair; I tried Corsair Dominator RAM on my X570 and 3900X and hated it, since I couldn't run it at 3000MHz at all. Compared to that, my Trident Z Neo has been so much easier to run at its XMP profile.

Hope this helps 

Thanks, Jura


----------



## yzonker

gfunkernaught said:


> Can the LEDs be controlled?


Yes. I'm still using the same software I used to control my Asus Strix 2080S; looks like it's called Armoury Crate. There may be something else as well, but I've still got it installed, so I just stuck with it. I don't know much about all the RGB stuff, as I either turn it off (or dim it way down) or don't hook it up. My machine is in a light-controlled home theater room.


----------



## gfunkernaught

jura11 said:


> In therms of timings and longevity I think G.SKILL are my favourite RAM, with Corsair I don't have best experience and I tried too on my X570 and 3900X Corsair Dominator RAM and I hate them, I couldn't run them at 3000mhz at all and if I compare them to my TridentZ Neo they're have been so much more easier to run them at XMP profile
> 
> Hope this helps
> 
> Thanks, Jura


I never had a problem with Corsair until now. I'm still not certain whether the DIMMs are bad OR my motherboard doesn't like four DIMMs. MSI says that when using four DIMMs I have to match the memory chips, which in my case don't match. Guess I'm gonna have to splurge on a 64GB kit lol. It never ends.


----------



## mirkendargen

gfunkernaught said:


> I never had a problem with Corsair until now. I'm still not certain that the DIMMs are bad OR my motherboard doesn't like four DIMMs. MSI says that when using four DIMMs I have to match the chipsets, which in my case they aren't. Guess I'm gonna have to splurge on a 64GB kit lol. It never ends.


Sometimes timings need to be loosened slightly when going from one to two DIMMs per channel.


----------



## J7SC

mirkendargen said:


> Sometimes timings need to be loosened slightly when going from one to two DIMMs per channel.


^^ This. 

One can also look at the SPD info and set all DIMMs to the slowest / loosest timings manually, then slowly tighten with lots of intermittent testing.


----------



## gfunkernaught

J7SC said:


> ^^ this.
> 
> One can also look at SPD info and set all DIMMs to the slowest / loosest timings manually, than slowly tighten w/ lots of intermittent testing


XMP set the timings the same on all four. Each kit is a matched 2x16GB kit, same model #, different chips. I want to try loosening the timings and lowering the clocks tonight with all four and see what happens.


----------



## sultanofswing

Finally got my Optimus block in and installed on my FTW3 Hydro Copper. Seeing a water-to-GPU delta of right at 8C; I assume that is pretty normal.
The Hydro Copper block had a delta of 14-15C, so a nice improvement.


----------



## Bal3Wolf

I got my 3090 Kingpin Hydro Copper installed today and am loving it. The damn thing runs so cold it makes my 1080 Hydro Copper look like a joke for how hot it runs.


----------



## SoldierRBT

Bal3Wolf said:


> I got my 3090 Kingpin Hydro Copper installed today and am loving it. The damn thing runs so cold it makes my 1080 Hydro Copper look like a joke for how hot it runs.


What’s the delta temp in yours? I get 14C delta at 460-480W


----------



## Bal3Wolf

Room's about 70F; the card's maxing out at 40C, and only 36C mining. Still playing with overclocking, trying out some benchmarks now to feel out how it will clock.









I scored 20 748 in Time Spy (AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) on www.3dmark.com


----------



## Dreams-Visions

any recommendations for a waterblock for the X Trio that is reasonably available in the US? From what I'm seeing, the easiest stuff to get is from EK, Alphacool, and Bitspower. I tried a Bykski block but it was too wide for my case.


----------



## Bal3Wolf

Dreams-Visions said:


> any recommendations for a waterblock for the X Trio that is reasonably available in the US? From what I'm seeing, the easiest stuff to get is from EK, Alphacool, and Bitspower. I tried a Bykski block but it was too wide for my case.


EK, if it were me; and if I recall, they even make an active backplate that will work with the X Trio.


----------



## Dreams-Visions

Bal3Wolf said:


> EK, if it were me; and if I recall, they even make an active backplate that will work with the X Trio.


roger that. ty!


----------



## J7SC

Dreams-Visions said:


> roger that. ty!


FYI, I had several problems with my EK block and backplate and switched to Phanteks


----------



## ahnafakeef

Hello everyone. I’m considering an EVGA 3090 FTW3 Ultra as my next upgrade. Is there any particular issue that I should be concerned about? And assuming parity in price, is there any reason why I might want to consider a 3080Ti FTW3 instead? 

Also, what HDMI 2.1 cable is the best for enabling 120Hz output at 4:4:4 colour? 

Thank you!


----------



## mardon

Is anyone else finding the rBAR BIOSes run hotter?

I was running the 385w Zotac Bios with my shunted Reference 3090. I've just swapped over to the 350w KFA2 one to see if it runs any cooler. I'm manually limiting my power draw to 450w as I see diminishing gains after this other than in benchmarks.


----------



## des2k...

sultanofswing said:


> Finally got my Optimus block in and installed on my FTW3 Hydro copper. Seeing a water to gpu delta of right at 8c, assume that is pretty normal.
> The hydro copper block had a delta of between 14-15c so a nice improvement.


8c delta is very good, what load/wattage ?

I still want to aim for a better delta on my EK block; I know I have to give the cold plate a quick lap. I did take .5mm off the standoff, and I get about 10-12C delta up to 500W.


----------



## Dreams-Visions

J7SC said:


> FYI, I had several problems with my EK block and backplate and switched to Phanteks


unfortunately they do not have a block for the X Trio. But given the number of people using the EK block, it must be generally okay. If not, I'll send it back.


----------



## gfunkernaught

Bal3Wolf said:


> EK, if it were me; and if I recall, they even make an active backplate that will work with the X Trio.


Is the active backplate universal to all EK blocks?


----------



## Bal3Wolf

gfunkernaught said:


> The active backplate is universal to all ek blocks?


No, EK's only goes on certain cards. Here's where they show all of them; they're all 130+.


https://www.ekwb.com/shop/water-blocks/vga-blocks/fc-backplates-for-nvidia-geforce/geforce-rtx-30x0-series?dir=desc&limit=36&order=price


----------



## n00bftw

I currently have an i9 7900x and a 1080TI FE, both are overclocked, CPU is at 4.6Ghz.

 

All of this is running in a single custom waterloop with x2 Alphacool NexXxoS ST30 Full Copper 420mm radiators (Push configuration) and a D5 Pump. The case is a Dark Base Pro 900, one of the radiators is intake (front), and one is exhaust (top), only other fan in the system is an exhaust fan (140mm) out the back of the case.

Current System Temps while gaming:

Water Temp: 40c

Water Pump Speed: 2.9

Radiator Fans RPM: 1000/1100 RPM

CPU Temp: 62-72c

GPU Temp: 46c

Question, if I replace the 1080TI, which has a TDP of 250 stock (FYI my GPU is overclocked so will be higher), with the 3080 (TDP 320) or the 3090 (TDP 350), and don’t overclock the new GPU, (1.) will my radiators be able to handle the extra heat?

(2.) Will I have to increase my fan speed? By how much…

(3.) Will the water loop run hotter, how much hotter? Will it be too hot for the soft tubing I use?

(4.) Will the additional heat of the 3080/3090 heat up my CPU more than it is now?


Thanks in advance.


----------



## jura11

@n00bftw

I think your current loop configuration should be able to handle that extra TDP, although I wouldn't directly compare a GTX 1080 Ti to an RTX 3090 or 3080: they run a bit hotter than previous generations, and the VRAM runs genuinely hot. But if you run the new card at stock, your loop should easily handle the extra TDP, even if you raise the power limits.

What fans are you running? You will probably end up running them at higher speeds, but how much I can only guess; maybe an extra 200RPM, or maybe you will get away with the current speed.

I would go with EPDM or EK ZMT tubing, which are zero-maintenance and rated up to 110°C.

I would guess you will need to run the pump and fans a bit faster than now, and personally I would modify the front and top of the BeQuiet case.

Hope this helps 

Thanks, Jura


----------



## n00bftw

jura11 said:


> @n00bftw
> 
> I think your current loop configuration should be able to handle that extra TDP, although I wouldn't directly compare a GTX 1080 Ti to an RTX 3090 or 3080: they run a bit hotter than previous generations, and the VRAM runs genuinely hot. But if you run the new card at stock, your loop should easily handle the extra TDP, even if you raise the power limits.
> 
> What fans are you running? You will probably end up running them at higher speeds, but how much I can only guess; maybe an extra 200RPM, or maybe you will get away with the current speed.
> 
> I would go with EPDM or EK ZMT tubing, which are zero-maintenance and rated up to 110°C.
> 
> I would guess you will need to run the pump and fans a bit faster than now, and personally I would modify the front and top of the BeQuiet case.
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the reply. I have already modified the top and front of the Be Quiet case for maximum airflow; what modification of the front and top do you refer to exactly? I have the Be Quiet Silent Wings 3 140mm PWM high-speed fans (max 1500RPM), and Mayhems Ultra-Clear watercooling tubing (3/8 - 5/8, 16/10mm), which I think has a max operating temp of 70C, but I'm not 100% sure on that :/


----------



## jura11

n00bftw said:


> Thanks for the reply, I have already modified the top and front of the Be Quiet case for maximum airflow, what top modification of the front do you refer to exactly? I have the Be Quiet SILENT WINGS 3 140mm PWM high-speed (Max 1500RPM). I have Mayhems ULTRA-CLEAR WATERCOOLING TUBING (3/8 - 5/8) 16/10MM, which i think has a max operating temp of 70c, but I'm not 100% sure on that :/


BeQuiet Silent Wings 3 are great fans and should perform quite nicely on radiators.

Regarding the tubing, I would get EK ZMT or EPDM tubing, or if you can get it, Tygon A-60-G, which is similar to EK ZMT.

And regarding the modified front panel and top, something like this (see attached picture). A few companies offer that mod; I think I was quoted something like £50-£75 for it.

Hope this helps 

Thanks, Jura


----------



## Lord of meat

Dreams-Visions said:


> any recommendations for a waterblock for the X Trio that is reasonably available in the US? From what I'm seeing, the easiest stuff to get is from EK, Alphacool, and Bitspower. I tried a Bykski block but it was too wide for my case.


If you're getting the EK block, I would recommend getting better thermal pads. Made a big difference for me.


----------



## KedarWolf

Lord of meat said:


> If you're getting the EK block, I would recommend getting better thermal pads. Made a big difference for me.


I went with these. 

They are cheap, and I've heard they are softer than Thermalright and Fujipoly pads and compress well. Better W/mK than the Thermalright pads. 

And delivery to Canada was cheap at about $10. 

Gelid GP Ultimate 2pc 90x50 Thermal Pads (quietpc.ca): "Building on the success of the GP Extreme thermal pads, Gelid have released their 'Ultimate' version. It differs by being slightly larger, while offering better heat conductivity."

You can get the 1mm pads as well, just in single packs, not two-packs.


----------



## Clovis559

Does anyone have a link to Classified 3.1.7.0 that will work with Kingpin HC?


----------



## KedarWolf

Clovis559 said:


> Does anyone have a link to Classified 3.1.7.0 that will work with Kingpin HC?








3090 BIOS page on kingpinoc.com


----------



## yzonker

Well, this is a little embarrassing. Apparently running DDU fixed my crashing problem in RDR2 with the KP XOC BIOS too. Previously I had just run the clean driver install from the Nvidia installer. 

me = noob 

Doh...


----------



## J7SC

yzonker said:


> Well this is a little embarrassing. Apparently running DDU fixed my crashing problem in RDR2 using the KP XOC bios too. Previously I had just run the clean driver install with the Nvidia installer.
> 
> me = noob
> 
> Doh...


...don't blame yourself too much. Windows 10's control freakery could also have a hand in it; I noticed that when I switched between vBIOSes, and even after installing a new RAM kit with a different ID and timings, it took Win 10 multiple cold boots to admit to the new realities...

...in other news, my 3090 Strix got a new neighbor (per the dual-mobo setup in a big case): a 3x8-pin 6900 XT moved in next door. I hope they keep the red-green arguments under control.


----------



## PLATOON TEKK

Boom. Got the setup running. Added another 900 and finally added the big-boy chiller.
Ran Port Royal at 15C (the plan is -15C once the enclosure arrives).

15th in PR.
Will take my time to tweak properly and use the Classified tool etc. when I get the chance.

Trying to push the coolant as far as possible.


----------



## des2k...

n00bftw said:


> I currently have an i9 7900x and a 1080TI FE, both are overclocked, CPU is at 4.6Ghz.
> 
> 
> 
> All of this is running in a single custom waterloop with x2 Alphacool NexXxoS ST30 Full Copper 420mm radiators (Push configuration) and a D5 Pump. The case is a Dark Base Pro 900, one of the radiators is intake (front), and one is exhaust (top), only other fan in the system is an exhaust fan (140mm) out the back of the case.
> 
> Current System Temps while gaming:
> 
> Water Temp: 40c
> 
> Water Pump Speed: 2.9
> 
> Radiator Fans RPM: 1000/1100 RPM
> 
> CPU Temp: 62-72c
> 
> GPU Temp: 46c
> 
> Question, if I replace the 1080TI, which has a TDP of 250 stock (FYI my GPU is overclocked so will be higher), with the 3080 (TDP 320) or the 3090 (TDP 350), and don’t overclock the new GPU, (1.) will my radiators be able to handle the extra heat?
> 
> (2.) Will I have to increase my fan speed? By how much…
> 
> (3.) Will the water loop run hotter, how much hotter? Will it be too hot for the soft tubing I use?
> 
> (4.) Will the additional heat of the 3080/3090 heat up my CPU more than it is now?
> 
> 
> Thanks in advance.


Going from 250W to 350W on the GPU could at worst add maybe 2C to the water temp. For the CPU, no idea; mine is a 7nm Ryzen, which has a huge delta already. I never saw temps move going from a 1080 Ti to a 3090, or from adding more rads to the loop.
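To put a rough number on that, you can treat the loop as shedding a fixed number of watts per degree of water-over-ambient. A minimal sketch under that assumption (the 40 W/C conductance is a made-up illustration, not measured from either system in this thread):

```python
def water_temp_rise(extra_watts, loop_w_per_c):
    """Extra steady-state water temperature from an added heat load.

    loop_w_per_c is how many watts the radiators shed per degree of
    water-over-ambient; it depends on radiator area, fans, and airflow.
    """
    return extra_watts / loop_w_per_c

# Hypothetical loop shedding ~40 W per degree over ambient:
# swapping a 250 W GPU for a 350 W one adds about 2.5 C of water temp.
print(water_temp_rise(100, 40))  # 2.5
```

The same relation also says faster fans (a higher conductance) shrink the rise, which matches the earlier advice about bumping fan speeds a couple hundred RPM.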


----------



## mouacyk

nice disguise for bitmain asic miners


----------



## SoldierRBT

3090 KPE HC, 1.025V locked, 2220MHz, +1350 mem. Max temp: 47C. 520W BIOS.

Ambient temperature 23-24C; nothing crazy, just pump/fans at max speed. 









I scored 15 752 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) on www.3dmark.com


----------



## Thanh Nguyen

Anyone have an extra 3090 that they'd let go for what they paid? Prefer FTW3.


----------



## Bal3Wolf

SoldierRBT said:


> 3090 KPE HC, 1.025V locked, 2220MHz, +1350 mem. Max temp: 47C. 520W BIOS.
> 
> Ambient temperature 23-24C; nothing crazy, just pump/fans at max speed.
> 
> I scored 15 752 in Port Royal (Intel Core i9-10900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) on www.3dmark.com
> 
> View attachment 2513148


Nice. I wonder if mine will do that; I've hit 2190 a few times benchmarking.


----------



## Falkentyne

Anyone try this patched NVflash to see if it can flash FE boards with non FE Bioses?

nvflash.exe

Apparently it has the board ID mismatch check disabled.
I would NOT suggest trying it unless you have a spare board.


----------



## Nizzen

Thanh Nguyen said:


> Anyone has the extra 3090 that want to let it go at what you paid? Prefer ftw3.


LOL


----------



## Toopy

gfunkernaught said:


> @yzonker @J7SC
> What I still haven't confirmed is if the DIMM is bad OR the motherboard doesn't like four DIMMs. I'm leaning towards the latter. The dirt came off after a few minutes of nail scratching. I then saw some pits or scratches on the copper contacts. I don't think calcium can cut copper 😅. I'll take a picture of later so you can see. Or maybe the ICs have to be exactly the same in order to use four DIMMs? One set is Samsung the other is Hynix. Who knows?


I know it doesn't matter but your nails are made of *keratin.*

Make sure the RAM is identical per channel: A1/B1 Samsung, A2/B2 Hynix. Running 4 RAM modules is harder on the IMC in the processor, so as mentioned, try loosening the timings, especially the 1T/2T command rate.


----------



## sultanofswing

des2k... said:


> 8c delta is very good, what load/wattage ?
> 
> I still want to aim for better delta on my ek block; I know I have to quick lap the cold plate. I did take .5mm off the standoff, about 10c-12c up to 500w.


Doing some more testing. Port Royal at 550w on KPE BIOS.
2235mhz max GPU temp- 37.3c
Max Water temp- 27.0
So at 550w the delta is 10.3c
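Since deltas measured at different power draws aren't directly comparable, it helps to normalize to a thermal resistance in C/W. A small sketch using the run above plus des2k's quoted ~10-12C at up to 500W (the 11C midpoint is my assumption):

```python
# Normalize core-to-water deltas to C per watt so blocks tested at
# different power levels can be compared on equal footing.

def c_per_w(delta_c, watts):
    """Effective block thermal resistance: temperature delta / power."""
    return delta_c / watts

optimus = c_per_w(37.3 - 27.0, 550)  # this run: 10.3C delta at 550W
ek_est = c_per_w(11.0, 500)          # assumed midpoint of "10c-12c up to 500w"

print(f"Optimus: {optimus * 1000:.1f} mC/W")  # Optimus: 18.7 mC/W
print(f"EK est.: {ek_est * 1000:.1f} mC/W")   # EK est.: 22.0 mC/W
```

The lower the C/W, the better the block; by this measure the 550W run above is meaningfully ahead even though the raw deltas look close.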


----------



## SoldierRBT

Does anyone know if a ReBAR BIOS improves PR scores? I have been using the 520W one I got with my card back in December.


----------



## sultanofswing

SoldierRBT said:


> Does anyone know if ReBAR bios improve PR score? I have been using the 520W I got with my card back in December.


I have not tested in quite a while, but my last tests showed ReBAR did not improve PR scores.
The last time I tested besides today was about 2 months ago, though.


----------



## Falkentyne

Absolutely amazing. Not a single person cared about the NVflash I posted last page. This forum is truly dead.


----------



## jura11

Falkentyne said:


> Absolutely amazing. Not a single person cared about the NVflash I posted last page. This forum is truly dead.


Sadly I don't have a Founders Edition, and if I did I would have already tried flashing an AIB BIOS on it hahaha

Thanks, Jura


----------



## Nizzen

Falkentyne said:


> Absolutely amazing. Not a single person cared about the NVflash I posted last page. This forum is truly dead.


Maybe no one cares about the FE? Here in Norway there was a "5 minute" window to buy the 3090 FE at nvidia.no. After that, they just stopped selling it.

That's why I have 2x 3090 Strix and one 3090 HOF; there were no 3090 FEs to buy. I wanted to buy from a Norwegian shop because of the epic 5-year return-if-broken policy. Best in the world.


----------



## Christopher2178

Falkentyne said:


> Absolutely amazing. Not a single person cared about the NVflash I posted last page. This forum is truly dead.


Been waiting for this.. I want to try flashing a 400W FE ReBAR BIOS onto my poorly power-balanced FTW3.. any thoughts?
I have dual BIOS so it shouldn't be a problem even if it fails. I currently run the Galax 385W or Gigabyte 390W BIOS, as these 2-pin BIOSes give me the best performance on my power-broken launch FTW3.
I can only pull about 425W on the KPE 520W 3-pin BIOS, and scores are quite a bit lower than on a 2-pin BIOS. No idea what wattage I'm actually pulling on a 2-pin BIOS, as reporting is broken when using one, but it must be closer to 500W give or take. It's on water and works great for now, gaming at 2190-2130MHz locked steady, so I'm not taking the time to send it in for the replacement program until it dies. I know I wouldn't gain much from those few extra watts of the FE 400W BIOS, but I've been wanting to try it for fun.. we'll see.. Thanks!


Sent from my iPhone using Tapatalk


----------



## Toopy

jura11 said:


> Sadly I don't have Founders Edition and if I have Founders Edition I would already tried flash AIB BIOS on that FE hahaha
> 
> Thanks, Jura


I agree, the majority of people don't have an FE, so they have no need/experience/knowledge to post.


----------



## GRABibus

Falkentyne said:


> Absolutely amazing. Not a single person cared about the NVflash I posted last page. This forum is truly dead.


If I had a FE, I would have tried for sure 😊


----------



## Falkentyne

Christopher2178 said:


> Been waiting for this.. I want to try and flash a 400w FE rebar bios onto my poorly power balanced ftw3.. any thoughts?
> I have dual bios so shouldn’t be a prob even if fails. I currently run the Galax 385w or gigabyte 390w as these 2 pin bios give me the best performance on my power broken launch ftw3.
> I can only pull about 425w on the kpe 520w 3 pin bios and scores are quite a bit lower than on a 2 pin bios. No idea what wattage I am actually pulling on 2 pin bios as reporting is broken when using 2 pin bios but it most be closer to 500w give or take and It’s on water and works great for now gaming at 2190-2130mhz steady locked so I’m not taking time to send it in for replacement program till it dies. I know I wouldn’t gain much from those few extra watts of FE 400w bios but been wanting to try for fun.. we’ll see.. Thanks!
> 
> 
> Sent from my iPhone using Tapatalk


You have dual BIOS (AFAIK) so no harm in trying.
One person trying to crossflash a laptop card (mobile 3080) for a higher power limit had the flash "fail" despite it working for another user (the card showed 0 GB in GPU-Z), and easily recovered (and I'd be a lot more scared of flashing a laptop card than a desktop).









Bricked 3080: nvflash "Main firmware range does not match a protectable range" (www.techpowerup.com) — a Zephyrus G15 3080 flashed with a Strix 115W vBIOS; it worked for others, bricked this one, and was recovered by reflashing the backup.






https://www.reddit.com/r/GamingLaptops/comments/np5vrp/_/h05i740

As to who patched NVflash? No idea.


----------



## yzonker

Falkentyne said:


> You have dual bios (AFAIK) so no harm in trying.
> One person on a laptop card trying to crossflash (mobile 3080) for a higher power limit had the flash "fail", despite working for another user (said card was 0 GB in GPU-Z) and easily recovered (and i'd be alot more scared about flashing a laptop card than a desktop).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Bricked 3080: nvflash "Main firmware range does not match a protectable range" (www.techpowerup.com) — a Zephyrus G15 3080 flashed with a Strix 115W vBIOS; it worked for others, bricked this one, and was recovered by reflashing the backup.
> 
> https://www.reddit.com/r/GamingLaptops/comments/np5vrp/_/h05i740
> 
> As to who patched NVflash? No idea.


How is that one different from the NVFlash we normally use to crossflash between brands? I thought we were already overriding the board ID mismatch?


----------



## Falkentyne

yzonker said:


> How is that one different than the NVFlash we normally use to cross flash between brands? I thought we were already overriding the board id mismatch?


I don't know. But it has some IDs bypassed completely (the -6 parameter is not required at all). I don't know if it allows an FE BIOS on a non-FE card though. That might be a different protection. Check the links I posted.

Well, something was patched compared to the TPU version.

06/05/2021 06:41 PM <DIR> .
06/05/2021 06:41 PM <DIR> ..
10/16/2020 04:26 AM 10,616,296 nvflash.exe
06/04/2021 10:14 PM 10,616,296 nvflashm.exe
06/05/2021 06:38 PM 3,988,257 nvflash_5.670.0.zip
3 File(s) 25,220,849 bytes
2 Dir(s) 238,746,120,192 bytes free

C:\Utilities\Nvidia\Untested>fc /b nvflash.exe nvflashm.exe
Comparing files nvflash.exe and NVFLASHM.EXE
0001881B: 85 84
00018B43: 85 84

C:\Utilities\Nvidia\Untested>
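The `fc /b` comparison above is Windows-only; the same byte-level diff can be reproduced portably with a few lines of Python. The byte strings here are dummy stand-ins for the real binaries, reproducing only the two-byte 0x85→0x84 patch shown:

```python
# Minimal equivalent of Windows' `fc /b`: report every offset where two
# byte sequences differ, printed as "OFFSET: old new" in hex.

def byte_diff(a: bytes, b: bytes):
    """Yield (offset, byte_a, byte_b) for each differing position."""
    for off, (x, y) in enumerate(zip(a, b)):
        if x != y:
            yield off, x, y

# Dummy stand-ins for nvflash.exe / nvflashm.exe with a two-byte patch:
orig = bytes([0x85, 0x00, 0x85])
mod = bytes([0x84, 0x00, 0x84])
for off, x, y in byte_diff(orig, mod):
    print(f"{off:08X}: {x:02X} {y:02X}")
```

For real files you would read both with `open(path, "rb").read()` and pass the contents in; like `fc /b` on equal-size files, this walks the bytes in lockstep.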


----------



## yzonker

Falkentyne said:


> I don't know. But it has some ID's bypassed completely (the -6 parameter is not required at all). I do not know if it allows a FE Bios on a on FE card though. That might be a different protection. Check the links I posted.


I did look at the links and saw it was simplified in regards to flashing, but I just didn't see what functionality it gained.

I've wondered if a really skilled programmer/hacker could pretty much reverse engineer NVFlash to open everything up. The holy grail is obviously flashing an edited BIOS. Maybe the encryption is built into the hardware or something; otherwise I think someone would have done it by now.


----------



## Falkentyne

yzonker said:


> I did look at the links and saw it was simplified in regards to flashing, but just didn't see what functionality it gained.
> 
> I have wondered if a really skilled programmer/hacker could pretty much reverse engineer NVFlash to open everything up. The holy grail is obviously flashing an edited bios. Maybe the encryption is built in to the hardware or something. Otherwise I think someone would have done it.


NVflash can never flash an edited BIOS because the HULK certification is embedded both in the vBIOS AND in the exe. The most you can do with an edited BIOS is throw a 1.8V adapter and an SPI/hardware programmer at it and hope the card boots, as long as the checksum is correct (the checksum must always be corrected after anything is edited).

The file I linked last page should allow crossflashing, but it's again unknown whether this extends to FE cards or to the FE BIOS (on non-FE cards). The reason people want to crossflash is to put the 400W FE BIOS on a 3x8-pin card for a higher power limit, as the third 8-pin's power draw (handled via the controller chip on the card) would not be reported to the BIOS. (That is what people were doing with the 2x8-pin XC3 BIOS on the 3x8-pin FTW3 cards.)
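On the "checksum must be corrected" point: legacy PCI expansion ROM images use a simple rule where all bytes must sum to 0 mod 256, with one byte adjusted to compensate for any edit. Modern signed vBIOSes layer cryptographic checks on top of this, so treat the following only as a sketch of the principle, not of the actual Ampere vBIOS format:

```python
# After editing a legacy option-ROM-style image, recompute the correction
# byte so that the sum of all bytes wraps to zero mod 256.

def fix_checksum(image: bytearray, csum_offset: int) -> bytearray:
    """Adjust the byte at csum_offset so sum(image) % 256 == 0."""
    image[csum_offset] = 0                      # zero out the old correction
    total = sum(image) % 256
    image[csum_offset] = (256 - total) % 256    # compensate for everything else
    return image

rom = bytearray([0x55, 0xAA, 0x10, 0x00])       # toy image; last byte = checksum
fix_checksum(rom, len(rom) - 1)
print(f"checksum byte: {rom[-1]:02X}, sum mod 256: {sum(rom) % 256}")
```

This is exactly the fix-up a hardware-programmer workflow has to apply after any hex edit; without it, even an unsigned legacy image fails the ROM sanity check at boot.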


----------



## Christopher2178

Falkentyne said:


> I don't know. But it has some ID's bypassed completely (the -6 parameter is not required at all). I do not know if it allows a FE Bios on a on FE card though. That might be a different protection. Check the links I posted.
> 
> Well something was patched compared to the TPU version.
> 
> 06/05/2021 06:41 PM .
> 06/05/2021 06:41 PM ..
> 10/16/2020 04:26 AM 10,616,296 nvflash.exe
> 06/04/2021 10:14 PM 10,616,296 nvflashm.exe
> 06/05/2021 06:38 PM 3,988,257 nvflash_5.670.0.zip
> 3 File(s) 25,220,849 bytes
> 2 Dir(s) 238,746,120,192 bytes free
> 
> C:\Utilities\Nvidia\Untested>fc /b nvflash.exe nvflashm.exe
> Comparing files nvflash.exe and NVFLASHM.EXE
> 0001881B: 85 84
> 00018B43: 85 84
> 
> C:\Utilities\Nvidia\Untested>


No luck for me flashing the FE ReBAR BIOS to an FTW3. I get:
Error: PCI subsystem mismatch
And no flash..

Edit: Sorry, I was using -6 and getting the error..
Tried without it and it flashed, but no boot for me..
BIOS error 97 on my Asus board ("Load VGA Bios")

Edit 2: Booted the backup on the 2nd BIOS switch and reflashed the first, so all is good here.. Thankful for dual BIOS as usual lol


----------



## Falkentyne

Christopher2178 said:


> No luck for me flashing FE rebar bios to an FTW3 I get a
> Error: PCI subsystem mismatch
> And no flash..
> 
> 
> Sent from my iPhone using Tapatalk


Looks like the FE check is completely different from the normal PCI ID check.
Going to post in that sub and see if he knows anything about it.
Might even PM him.


----------



## jura11

Christopher2178 said:


> No luck for me flashing FE rebar bios to an FTW3 I get a
> Error: PCI subsystem mismatch
> And no flash..
> 
> Edit: Sorry I was using -6 and getting the error..
> Tried without and flashed but no boot for me..
> bios error 97 on Asus board “Load VGA Bios”
> 
> Edit 2: booted backup on 2nd bios switch and reflashed first so all is good here.. Thankful for dual Bios as usual lol


Thanks for testing it there and for the confirmation, mate

Thanks, Jura


----------



## Falkentyne

Christopher2178 said:


> No luck for me flashing FE rebar bios to an FTW3 I get a
> Error: PCI subsystem mismatch
> And no flash..
> 
> Edit: Sorry I was using -6 and getting the error..
> Tried without and flashed but no boot for me..
> bios error 97 on Asus board “Load VGA Bios”
> 
> Edit 2: booted backup on 2nd bios switch and reflashed first so all is good here.. Thankful for dual Bios as usual lol


Thanks for testing that for us!
Glad the reflash worked (I was pretty sure dual BIOS was safe and a reflash would work, since it worked on the laptop 3080).


----------



## J7SC

Falkentyne said:


> NVflash can never flash an edited BIOS because the HULK certification is both embedded on the vbios AND in the exe as well. The most you can do with an edited BIOS is throw a 1.8v adapter and SPI/hardware programmer at it and hope the card boots, as long as the checksum is correct (the checksum must always be corrected after anything is edited).
> 
> The file I linked last page should allow crossflashing, but its again unknown whether this crossflashing extends to the FE cards or FE Bios (on non FE cards).
> The reason people would want to crossflash is flash the 400W FE Bios on a 3x8 pin card for a higher power limit, as the third 8 pin's power draw (done via the controller chip on the card) would not be reported to the BIOS. (That is what people were doing with the 2x8 pin XC3 Bios on the 3x8 pin FTW3 cards).


...every time I read something like this, I can't help but reminisce about the 'good ol' days', i.e. the NV GTX 600 series where you could edit everything in the BIOS in Notepad (!), or a later editor for the Radeon 79xx series where you could bump voltages to 1.4V, never mind PL - those were the days...

...of course today's GPUs are far more powerful and sophisticated (never mind the far greater $$$ lost by bricking such cards), so maybe a few extra hurdles aren't such a bad thing. Also, I just picked up a 3x8-pin dual-BIOS 6900 XT for a productivity build, and BIOS mods / 3rd-party BIOS options seem sparser for Big Navi than for my 3090...


----------



## ESRCJ

sultanofswing said:


> Doing some more testing. Port Royal at 550w on KPE BIOS.
> 2235mhz max GPU temp- 37.3c
> Max Water temp- 27.0
> So at 550w the delta is 10.3c


How on earth are you pulling off a 10.3C delta with a Hydro Copper block? Are you using liquid metal? My 3090 Kingpin Hydro Copper delta is around 16C or so at around 500W with KPX thermal paste.


----------



## Falkentyne

ESRCJ said:


> How on earth are you pulling off a 10.3C delta with a Hydro Copper block? Are you using liquid metal? My 3090 Kingpin Hydro Copper delta is around 16C or so at around 500W with KPX thermal paste.


My 3090 FE delta is 10C-11C with the stock heatsink after a repad and repaste.


----------



## zkareemz

sultanofswing said:


> Doing some more testing. Port Royal at 550w on KPE BIOS.
> 2235mhz max GPU temp- 37.3c
> Max Water temp- 27.0
> So at 550w the delta is 10.3c


Where can I download this BIOS?


----------



## des2k...

ESRCJ said:


> How on earth are you pulling off a 10.3C delta with a Hydro Copper block? Are you using liquid metal? My 3090 Kingpin Hydro Copper delta is around 16C or so at around 500W with KPX thermal paste.


It's probably just luck & pads; it depends a lot on how flat the surfaces are and on soft pads for the mem (they need to compress easily). Those can lose you a lot of mounting pressure since they are close to the core.

My EK is a 10-13C delta at 500W and my block is pretty uneven. Started sanding/lapping it, already dropped 3C, still not perfect.

I ran out of Odyssey thermal pads and used Gelid, and these are horrible: 0.5mm gave me a 3C drop on the delta but the mem reached 80C. Switched to 1mm Gelid, lost my core delta (20C+ lol) and memory is back at 54C.

I still need to finish lapping the core, and I know 1mm Odyssey easily compresses to 0.6mm, so that should fix it😁

Lapping a GPU block (the core part) is hell when you have a high spot: no leverage and no room to move the flat/sandpaper around.

My center high spot is nasty; I can easily rock my razor blade left/right when it's resting on the GPU block🙄

I may end up trashing this block with the amount of copper I need to remove.

With your EVGA block you can do better; my crap EK gets a 16C delta at 600W+ with regular paste.


----------



## sultanofswing

ESRCJ said:


> How on earth are you pulling off a 10.3C delta with a Hydro Copper block? Are you using liquid metal? My 3090 Kingpin Hydro Copper delta is around 16C or so at around 500W with KPX thermal paste.


You didn't see my previous post, I removed the hydrocopper block and installed an Optimus.


----------



## Falkentyne

des2k... said:


> it's prob just luck & pads, depends alot on how flat surfaces are and soft pads for mem(they need to compress easily). These can loose you alot on mounting pressure since they are close to the core.
> 
> My ek is 10-13c delta at 500w and my block is pretty uneven. Started sanding/lapping it, already dropped 3c, still not perfect.
> 
> I ran out thermal oddessey pads, used gelid and these are horrible, 0.5mm gave me 3c drop on delta but mem reached 80c. Switched to 1mm gelid, lost my core delta(20c+ lol) and memory is back at 54c.
> 
> I still need to finish lapping the core and I know odessey 1mm easily compress to .6mm so that should fix it😁
> 
> Lapping gpu block(core part) is hell when you have a high spot, no leverrage and no room to move the other flat/sand paper around.
> 
> My center high spot is nasty, I can easy rock left/right my razor blade when resting on the gpu block🙄
> 
> I may end up trashing this block with the amount of copper I need to remove.
> 
> For your evga bock you can do better, my crap EK get 16c delta at 600w+ with regular paste.


Which Gelids?
Gelid Extreme vs Gelid Ultimate are massively different in softness.
I use Gelid Extreme 1.5mm on my 3090 FE on the GPU side, and my hotspot delta has held at 10C-11C after 24 days.
The pads I use on the backplate side don't affect the delta.

If you are getting bad deltas, it's the thickness or lack of softness of the pads.

If you're having issues with mounting pressure, use these to test:









Innovation Cooling IC Contact Test and Analysis Kit For CPU GPU Heatsink (www.ebay.com)


----------



## J7SC

des2k... said:


> it's prob just luck & pads, depends alot on how flat surfaces are and soft pads for mem(they need to compress easily). These can loose you alot on mounting pressure since they are close to the core.
> 
> My ek is 10-13c delta at 500w and my block is pretty uneven. Started sanding/lapping it, already dropped 3c, still not perfect.
> 
> I ran out thermal oddessey pads, used gelid and these are horrible, 0.5mm gave me 3c drop on delta but mem reached 80c. Switched to 1mm gelid, lost my core delta(20c+ lol) and memory is back at 54c.
> 
> I still need to finish lapping the core and I know odessey 1mm easily compress to .6mm so that should fix it😁
> 
> Lapping gpu block(core part) is hell when you have a high spot, no leverrage and no room to move the other flat/sand paper around.
> 
> My center high spot is nasty, I can easy rock left/right my razor blade when resting on the gpu block🙄
> 
> I may end up trashing this block with the amount of copper I need to remove.
> 
> For your evga bock you can do better, my crap EK get 16c delta at 600w+ with regular paste.


...I can identify re. the EK 3090 block (Strix OC in my case). I never even used the EK backplate as the nickel started to peel off before installation (below), and the machining on the front of the block is horrible (also below)... that round machining-tip 'imprint' sits right above the GPU die and its raised writing is so uneven that Kryonaut (relatively liquid) started to get pushed out... overall temps are still fine/stable, but the hotspot delta started to increase. I did get an RMA for the (rather obvious) backplate nickel issue, but I also know that the stock Strix backplate performs better re. VRAM temps anyway. I haven't yet tried to RMA the GPU/front block - I just bought a Phanteks Glacier for the Strix (not mounted yet) and some 'thicker' Gelid Xtreme TIM. The Phanteks machining looks way better...

...I may RMA the EK GPU block as well, or just have it machined down to serve as a w-cooler for the back, subject to mounting questions. I'd like to add that I have several other EK GPU blocks from years ago with a perfect finish, but also 'just' copper... what I see here is more than just a nickel plating issue, it is questionable machining before plating...


----------



## des2k...

Falkentyne said:


> Which Gelids?
> Gelid extremes vs Gelid Ultimates are massively different in softness.
> I use Gelid extreme 1.5mm on my 3090 FE on the GPU side and my hotspot delta has held at 10C-11C after 24 days.
> The pads I use on the backplate side don't affect the delta.
> 
> If you are getting bad deltas its the thickness or lack of softness of the pads.
> 
> If you're having issues with mounting pressure, use these to test.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Innovation Cooling IC Contact Test and Analysis Kit For CPU GPU Heatsink (www.ebay.com)


Oh ok, that's what I figured.

I might be able to save the 1mm Gelid; the mem pads were overhanging and hit surrounding components when they got compressed. Going to trim them a bit. Some parts of the mem pads end up on top of the mem caps and the die's metal guard thing.

The other problem comes from EK machining the core section convex while my 3090 die is flat. The standoffs are shortened to the max; I can't go lower because of the mem caps (the rectangular black ones).

In the future I'm going to stay away from EK & Barrow, both junk. Feels like everything was rushed this generation with no quality control.


----------



## Falkentyne

des2k... said:


> oh ok, that's what I figured
> 
> I might be able to save the 1mm gelid, the mem pads where overhanging, they hit components around when they got compressed.Going to trim them a bit. Some part of the mem pads end up on top of the mem caps and the die metal guard thing.
> 
> 
> The other problem comes from EK machining the core convex and my 3090 die is flat. The standoffs are shorten at max, can't go lower than the mem caps(rentangle black ones).
> 
> In the future I'm going to stat away from EK & Borrow, both junk. Feels like this generation everything was rushed & no quality comtrol.


Water blocks and proper memory/VRM thermal pads are always a challenge. And I can bet you $100 in legally backed gold that none of the block OEMs even look at minimum core-to-hotspot temp deltas either.


----------



## PLATOON TEKK

J7SC said:


> ...I can identify re. EK 3090 block (Strix OC in my case).  I never even used the EK backplate as the Nickel started to peel off before installation (below), and the machining on the front of the block is horrible (also below)...that machining-tip round 'imprint' sits right above the GPU die and its raised writing is so uneven that Kryonaut (relatively liquid) started to get pushed out...overall temps are still fine/stable, but Hotspot delta started to increase. I did get an RMA for the (rather obvious) backplate Nickel issue but also know that the stock Strix backplate performs better re. VRAM temps anyway. I haven't yet tried to RMA the GPU/ front block - I just bought a Phanteks Glacier for the Strix (not mounted yet) and some 'thicker' Gelid Xtreme TIM. The Phanteks machining looks way better...
> 
> ...I may RMA the EK GPU/ block as well, or just have it machined down to serve as a w-cooler for the back, subject to mounting questions.I like to add that I have several other EK GPU blocks from years ago, perfect finish, but also 'just' copper...but what I see here is more than just an issue re. Nickel plating, it is questionable machining before plating...
> 
> View attachment 2513342
> 
> 
> View attachment 2513343


I can't say it enough: EK is garbage now. A few years ago literally every single part of my loop was EK. It's wild how much they've failed me: even finding micro-leaks on their AIOs when taking builds apart, and rotating fittings leaking from the center. I'll take literally anything over EK at this point.

The last few gens their blocks have been near the bottom in terms of performance too.


----------



## ESRCJ

sultanofswing said:


> You didn't see my previous post, I removed the hydrocopper block and installed an Optimus.


Ah ok, that explains it. Nobody on the EVGA forums seems to be pulling off that kind of delta with their Hydro Coppers, and I've repasted mine several times. Liquid metal dropped my 2080 Ti's delta by 2C, so I can't imagine the gain being much greater with a 3090. What kind of delta were you getting with your Hydro Copper block? In the past I've seen a lot of people trash EVGA's water blocks, and I was very unimpressed with the delta when I installed mine. To be blunt, I figured the block was just garbage.



des2k... said:


> it's prob just luck & pads, depends alot on how flat surfaces are and soft pads for mem(they need to compress easily). These can loose you alot on mounting pressure since they are close to the core.
> 
> My ek is 10-13c delta at 500w and my block is pretty uneven. Started sanding/lapping it, already dropped 3c, still not perfect.
> 
> I ran out thermal oddessey pads, used gelid and these are horrible, 0.5mm gave me 3c drop on delta but mem reached 80c. Switched to 1mm gelid, lost my core delta(20c+ lol) and memory is back at 54c.
> 
> I still need to finish lapping the core and I know odessey 1mm easily compress to .6mm so that should fix it😁
> 
> Lapping gpu block(core part) is hell when you have a high spot, no leverrage and no room to move the other flat/sand paper around.
> 
> My center high spot is nasty, I can easy rock left/right my razor blade when resting on the gpu block🙄
> 
> I may end up trashing this block with the amount of copper I need to remove.
> 
> For your evga bock you can do better, my crap EK get 16c delta at 600w+ with regular paste.


People on the EVGA forum have used thinner thermal pads than the ones included with the Hydro Copper block, and their deltas aren't much better than mine. I basically gave up on this block and have been waiting for Optimus to release their Kingpin block, which probably won't happen until the end of the year at this rate. My temps are pathetic with this block compared to what I'm used to seeing with previous cards. Granted, the thermal density of this GPU is much higher than what we're used to. Still, seeing users with a 10C delta on a different block gives me hope... and it also makes me wonder how EVGA could miss the mark by that much with this block.


----------



## mirkendargen

ESRCJ said:


> How on earth are you pulling off a 10.3C delta with a Hydro Copper block? Are you using liquid metal? My 3090 Kingpin Hydro Copper delta is around 16C or so at around 500W with KPX thermal paste.


It can depend where people's sensor is in their loop too. For example my coolant is 2C cooler right before my first block than right after my last block. Sensors can also vary in accuracy.


----------



## sultanofswing

ESRCJ said:


> Ah ok that explains it. Nobody on the EVGA forums seem to be pulling off that kind of delta with their Hydro Coppers and I've repasted mine several times. Liquid metal dropped my 2080 Ti's delta by 2C, so I can't imagine it being much greater with a 3090. What kind of delta were you getting with your Hydro Copper block? In the past, I've seen a lot of people trash EVGA's water blocks and I was very unimpressed with the delta when I installed mine. To be blunt, I figured the block was just garbage.
> 
> 
> 
> People on the EVGA forum have used thinner thermal pads compared to the pads included with the Hydro Copper block and their deltas aren't much better than mine. I basically gave up on this block and I've been waiting for Optimus to release their Kingpin block, which probably won't release until the end of the year at this rate. My temps are so pathetic with this block compared to what I'm used to seeing with previous cards. Granted, the thermal density of this GPU is much higher than what we're used to seeing. Although seeing users with a 10C delta with a different block gives me hope... and it also makes me wonder how EVGA could miss the mark by that much with this block.


As low as 12C and as high as 16C on the old Hydro Copper.


----------



## ESRCJ

sultanofswing said:


> as low as 12c and as high as 16c on the old Hydro copper.


Alright, so just about in line with what I'm seeing (16C when the card is drawing 500W+). Next time I might just buy an FTW3, since a lot of you guys have much better silicon than my lousy Kingpin and there are a lot more waterblock options lol.


----------



## autoshot

des2k... said:


> Anyone tried this re-bar bios on the zotac ?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Zotac RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Not sure whether by "tried" you mean seeing if it doesn't brick the regular 3090 Trinity or whether it overclocks better, but I can confirm that this BIOS works on the regular Zotac 3090 Trinity. Overclocking capability did not change compared to the Gigabyte BIOS I used before, though (+165 MHz curve offset is still the limit for my chip).
What I did notice: while the Gigabyte BIOS supports the firmware control mode offered by MSI Afterburner, the Zotac BIOS doesn't.


----------



## sultanofswing

ESRCJ said:


> Alright so just about in-line with what I'm seeing (16C when the card is drawing 500W+). Next time I might just buy a FTW3 since a lot of you guys have much better silicon than my lousy Kingpin and you have a lot more waterblock options lol.


Yeah, I have 2 Kingpins and neither of them comes close to my Hydro Copper.


----------



## ESRCJ

sultanofswing said:


> Yea, I have 2 Kingpins and neither one of them come close to my Hydro Copper.


It would have been nice if they had actually binned them. Mine can't even do 2205MHz in most games at 4K without increasing NVVDD in Classified well above 1.10V.


----------



## sultanofswing

ESRCJ said:


> It would have been nice if they actually binned them. Mine can't even do 2205MHz without increasing NVVDD in Classified.


Pretty much the same with both of mine. They seem to only do around 2160-2175MHz at 1093mV if I'm lucky.


----------



## SoldierRBT

ESRCJ said:


> It would have been nice if they actually binned them. Mine can't even do 2205MHz without increasing NVVDD in Classified well-above 1.10V in most games at 4K.


The Classified tool's voltages only increase internal clocks; they don't affect requested clocks. NVVDD is rated up to 1.20V (at least on FE models), so you're safe pushing it higher than 1.10V.


----------



## jura11

My best Port Royal result with a 5950X and a Palit RTX 3090 GamingPro running the XOC 1000W BIOS capped at 75%:









I scored 15 160 in Port Royal — AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 65536 MB, 64-bit Windows 10 (www.3dmark.com)





Hope this helps

Thanks, Jura


----------



## Falkentyne

SoldierRBT said:


> Classified tool voltages only increase internal clocks, they don't affect requested clocks. NVVDD is rated up to 1.20v (at least on FE models) so you're safe pushing it higher than 1.10v.


You can't adjust NVVDD on FE boards, right? Or did Elmor find a way around the read-only issue on the EVC2SX with FE boards?


----------



## kot0005

So is the 3080 Ti really faster, or are all these reviewers on crack because the 3090 is hitting its power limit due to more VRAM?


----------



## Thanh Nguyen

kot0005 said:


> So is the 3080 Ti really faster, or are all these reviewers on crack because the 3090 is hitting its power limit due to more VRAM?


NVIDIA's guidance is to smoke or drink something before doing a review; that's why you see the 3080 Ti faster than the 3090.


----------



## yzonker

I looked at a bunch of the reviews. In most cases where the 3080 Ti edged out the 3090, they were comparing cards like the FTW3/Strix/SupremeX/etc. to their original 3090 FE review result. I did see a few instances where the 3080 Ti FE was shown faster than the 3090 FE. There might be something to a difference in power usage due to 12 vs 24GB of VRAM. They probably also didn't re-run the 3090 with the newer drivers.


----------



## ahnafakeef

Any opinions or advice on ROG Strix OC vs FTW3 Ultra?

Which one is better and why?
Can I flash a new BIOS on either to get the same performance, especially regarding the 480W vs 500W power limit?


----------



## KedarWolf

ahnafakeef said:


> Any opinions or advice on ROG Strix OC vs FTW3 Ultra?
> 
> Which one is better and why?
> Can I flash a new BIOS on either to get the same performance, especially regarding the 480W vs 500W power limit?


I really love my Strix OC. I've benched Port Royal at 2220 which is decent, and benched memory with that overclock at +1132, which isn't the best, but okay.


----------



## yzonker

Maybe now that EVGA has revised the FTW3 it's equivalent. It certainly wasn't on initial release. I don't have either card, but it's pretty obvious from reading various forums.

Does the Strix have fuses that can blow like the FTW3? Thinking about if you really wanted to push the card with the KP XOC. 

OTOH, hard to beat EVGA's customer service.


----------



## Falkentyne

yzonker said:


> Maybe now that EVGA has revised the FTW3 it's equivalent. Certainly wasn't on initial release. I don't have either card but pretty obvious from reading various forums.
> 
> Does the Strix have fuses that can blow like the FTW3? Thinking about if you really wanted to push the card with the KP XOC.
> 
> OTOH, hard to beat EVGA's customer service.


Strix (and TUF) have no fuses.


----------



## gfunkernaught

Toopy said:


> I know it doesn't matter but your nails are made of *keratin.*
> 
> Make sure the ram is identical per channel. A1 B1 samsung A2 B2 Hynix, running 4 ram modules will be harder on the IMC in the processor so as mentioned try loosening the timings especially 1T/2T command rates


Keratin right that's it lol. 

So using XMP for the four DIMMs won't be good enough? I have always used XMP for two DIMMs and never had a problem. I was told by Corsair that even if the model# matches, you shouldn't use unmatched kits. This motherboard has been proven to be on the weak side. I tried a 9700k on it and it didn't like it.


----------



## Bal3Wolf

Spending more time playing with my Kingpin: is it normal to not break 500W very often using the LN2 BIOS with its 520W limit? I usually top out in the 490s.








I scored 15 309 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com












I scored 10 796 in Time Spy Extreme


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## ahnafakeef

KedarWolf said:


> I really love my Strix OC. I've benched Port Royal at 2220 which is decent, and benched memory with that overclock at +1132, which isn't the best, but okay.


But that’s on watercooling, right?

Is it standard for Strix OCs to break the 2000 mark on the default air cooling?

and what’s the situation on custom BIOSes this time around? Is there a BIOS that can increase power limit beyond the 480w limit? And possibly disable GPU Boost for a stable 2000+ core clock with a custom fan curve via Afterburner?


----------



## yzonker

Bal3Wolf said:


> Spending more time playing with my Kingpin: is it normal to not break 500W very often using the LN2 BIOS with its 520W limit? I usually top out in the 490s.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 309 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 796 in Time Spy Extreme
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 


I've seen other comments like that on the EVGA forum I think. I don't have a KP so no first hand experience. 

What is it limiting by during the run? Power/voltage/mix of the two?


----------



## SoldierRBT

The 520W BIOS on KPE cards is actually limited to 500W. You can test this with RTX Quake 2: using the dip switches on the back of the card will let it pull 520-524W in the same test. The 1K BIOS doesn't have this problem; it'll pull 600W+ at stock settings (no dip switches).


----------



## J7SC

ahnafakeef said:


> But that’s on watercooling, right?
> 
> Is it standard for Strix OCs to break the 2000 mark on the default air cooling?
> 
> and what’s the situation on custom BIOSes this time around? Is there a BIOS that can increase power limit beyond the 480w limit? And possibly disable GPU Boost for a stable 2000+ core clock with a custom fan curve via Afterburner?


...don't know if it is 'standard' for Strix OC, but mine typically hits 2190-2205 ('max', before heat-up) in various benches on air, and would peak above 500W per GPU-Z 'max' readings. Its air cooler is actually very capable, but no matter what, with that kind of heat energy to be dissipated, water cooling makes a lot of sense.


----------



## KedarWolf

ahnafakeef said:


> But that’s on watercooling, right?
> 
> Is it standard for Strix OCs to break the 2000 mark on the default air cooling?
> 
> and what’s the situation on custom BIOSes this time around? Is there a BIOS that can increase power limit beyond the 480w limit? And possibly disable GPU Boost for a stable 2000+ core clock with a custom fan curve via Afterburner?


Yes, it's under an EKWB block and backplate with Thermalright pads.

The best Port Royal score I got was 15689 which is decent. 









I scored 15 689 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





But this was with the 520W XOC BIOS and 2190 core speed. The 1000W XOC BIOS will do the 2220 but scores less; I enabled ReBar in the global Nvidia profile with Nvidia Inspector, and the 520W ReBar VBIOS scored better.


----------



## Panchovix

KedarWolf said:


> Yes, it's under an EKWB block and backplate with Thermalright pads.
> 
> The best Port Royal score I got was 15689 which is decent.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 689 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> But this was with the 520W XOC BIOS and 2190 core speed. The 1000W XOC BIOS will do the 2220 but scores less as I enabled Rebar in the global Nvidia profile with Nvidia Inspector and the 520W Rebar VBIOS scored better.


Humble 3080 user here, but I knew you could force ReBar in some 3D apps (I forgot how though lol); I didn't know it improved the score/performance in 3DMark. That's pretty interesting lol, just wanted to say thanks for the info.

Time to search how to do it again.


----------



## Bal3Wolf

KedarWolf said:


> Yes, it's under an EKWB block and backplate with Thermalright pads.
> 
> The best Port Royal score I got was 15689 which is decent.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 689 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> But this was with the 520W XOC BIOS and 2190 core speed. The 1000W XOC BIOS will do the 2220 but scores less as I enabled Rebar in the global Nvidia profile with Nvidia Inspector and the 520W Rebar VBIOS scored better.


I've been thinking of getting a 5900X to upgrade from my 3900X, but it seems like I'm right on or beating most of the 5900Xs with my setup.








I scored 15 345 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## ahnafakeef

J7SC said:


> ...don't know if it is 'standard' for Strix OC, but mine typically hits 2190-2205 ('max', before heat-up) in various benches on air, and would peak above 500W per GPU-Z 'max' readings. Its air cooler is actually very capable, but no matter what, with that kind of heat energy to be dissipated, water cooling makes a lot of sense.


Thank you for your response
A few questions if you don’t mind.
1. is that with power and voltage limits maxed out? 
2. is the voltage used and heat generated within safe limits for hours of continuous gaming? 
3. what temps are you getting, and do you get thermal throttling at all?
4. Is GPU boost disabled, or can it be disabled by flashing a different bios? 
5. is the 2200MHz core game stable?



KedarWolf said:


> Yes, it's under an EKWB block and backplate with Thermalright pads.
> 
> The best Port Royal score I got was 15689 which is decent.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 689 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> But this was with the 520W XOC BIOS and 2190 core speed. The 1000W XOC BIOS will do the 2220 but scores less as I enabled Rebar in the global Nvidia profile with Nvidia Inspector and the 520W Rebar VBIOS scored better.


Thank you for your response. 

I saw the EKWB version but couldn’t go for it since I don’t have a custom water loop. Also saw the Gigabyte Waterforce Extreme, but again, did not want to mess with water.

What kind of core clocks can you stabilise in games with the stock Strix OC cooler?

And am I correct in assuming that the white version of the Strix OC is otherwise identical to the standard black version of the Strix OC minus the color?
*____*
Is there a guide on here that can teach me about the basics of voltage and heat limitations with 3090s, and custom BIOSes and safety limits on air with 3090s? Been out of the loop for a while, and would love to read up on custom BIOSes for the Strix OC to get the maximum possible overclock on air. Thank you.


----------



## KedarWolf

Panchovix said:


> Humble 3080 user here, but, I knew you can force ReBar in some 3D apps (I forgot how though lol) but I didn't know it improved the score/performance on 3DMark. Well that's pretty interesting lol, just wanted to say thanks for the info.
> 
> Time to search how to do it again.


Do it in the Global Nvidia profile.










Here is how to enable REBAR on non whitelisted apps/games


Made own thread for more visibility. 1 - Download Nvidia Inspector. Probably at least version 2.3 ideal. 2 - In Nvidia Inspector, look for the icon that kind of looks like a hammer 2nd from the right, if you hover over it, it says "Show unknown setting from Nvidia Predefined Profiles"...




www.overclockers.co.uk


----------



## KedarWolf

ahnafakeef said:


> Thank you for your response
> A few questions if you don’t mind.
> 1. is that with power and voltage limits maxed out?
> 2. is the voltage used and heat generated within safe limits for hours of continuous gaming?
> 3. what temps are you getting, and do you get thermal throttling at all?
> 4. Is GPU boost disabled, or can it be disabled by flashing a different bios?
> 5. is the 2200MHz core game stable?
> 
> 
> 
> Thank you for your response.
> 
> I saw the EKWB version but couldn’t go for it since I don’t have a custom water loop. Also saw the Gigabyte Waterforce Extreme, but again, did not want to mess with water.
> 
> What kind of core clocks can you stabilise in games with the stock Strix OC cooler?
> 
> And am I correct in assuming that the white version of the Strix OC is otherwise identical to the standard black version of the Strix OC minus the color?
> *____*
> Is there a guide on here that can teach me about the basics of voltage and heat limitations with 3090s, and custom BIOSes and safety limits on air with 3090s? Been out of the loop for a while, and would love to read up on custom BIOSes for the Strix OC to get the maximum possible overclock on air. Thank you.


No; 2220 on the 1000W BIOS was only borderline stable for Port Royal benchmarking. On the stock cooler, I'd try the EVGA 500W BIOS; I get really great results with that, almost on par with the 520W BIOS. If you search my username and sort by date, I linked it a few pages back.

For gaming, I run 2145 +742 memory. I could do that on stock cooling as well.

Yes, the white Strix OC is the same card.

Edit: I had the voltage and power limits maxed out on the 520W BIOS, and 60% power limit on the 1000W BIOS, though most peeps do 50% on it.

If your core and memory are under 80C while gaming, you should be good. I use Afterburner to set my speeds, voltage limit and power limit; they max out at whatever they're set to, regardless of Boost.

Second Edit: When not benchmarking, I undervolt my card to 1902 core and +542 memory at .875v, because my 1350 BTU A/C, TV, and all my peripherals (modem etc.) have to be on the same electrical circuit in my apartment, and I trip the breaker while gaming. I always have sports on my TV in the background, and in summer the A/C is on when I'm home. I won't run an extension cord, say from the A/C to the kitchen, to get on a different circuit, and there are no other options.


I still average 92 FPS in the SOTTR benchmark and 72 FPS in Cyberpunk, both with graphics maxed out (Cyberpunk on Quality DLSS, no DLSS in SOTTR) at 3840x1080, even with the undervolt.
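Back-of-the-envelope math on why an undervolt like that keeps a breaker happy: dynamic power scales roughly with frequency times voltage squared. A quick Python sketch, using the clocks and 0.875v from the post above; the 1.05v "maxed slider" benching voltage is my assumption for illustration, not a figure anyone quoted:

```python
# Rough estimate only: dynamic GPU power scales roughly with f * V^2.
# The 0.875v/1902MHz undervolt is from the post; 1.05v at 2145MHz for
# the benching config is an assumed figure for a maxed voltage slider.

def relative_power(f_mhz: float, v: float, f0_mhz: float, v0: float) -> float:
    """Power at (f_mhz, v) as a fraction of power at (f0_mhz, v0)."""
    return (f_mhz / f0_mhz) * (v / v0) ** 2

frac = relative_power(1902, 0.875, 2145, 1.05)
print(f"undervolted config draws roughly {frac:.0%} of the benching config")
```

So the gaming undervolt lands somewhere around 60% of the benching config's core power, which is plenty to stay under a shared breaker's budget while still holding 90+ FPS.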


----------



## SoldierRBT

20 points difference with the 520W KPE ReBar BIOS.
3090 KPE HC 1.025v 2220MHz +1350 Mem Max temp: 45C 521W








I scored 15 771 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Falkentyne

SoldierRBT said:


> 20 points difference with the 520W KPE ReBar BIOS.
> 3090 KPE HC 1.025v 2220MHz +1350 Mem Max temp: 45C 521W
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 771 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 


Did you see my last message? or am I blacklisted?


----------



## J7SC

ahnafakeef said:


> Thank you for your response
> A few questions if you don’t mind.
> 1. is that with power and voltage limits maxed out?
> 2. is the voltage used and heat generated within safe limits for hours of continuous gaming?
> 3. what temps are you getting, and do you get thermal throttling at all?
> 4. Is GPU boost disabled, or can it be disabled by flashing a different bios?
> 5. is the 2200MHz core game stable?
> (...)


...re. 1.) ...'yes'

...to 2.) ...IMO yes, as long as you stay on the stock BIOS; the drivers will also reduce power automatically when heat goes up. While I can start up at 2205 (and actually far beyond), it won't stay there after hours of gaming, more like 2130-2160+-

...to 3.) ...if you can stand fans at full blast, GPU temp will generally hover around 68C. Also: the backplate of a 3090 Strix OC is actually fairly efficient, IMO, but adding a 120mm fan (1200 rpm) to blow on the rear of the GPU + VRAM really does help; I dropped VRAM junction temps by over 7C with that. Thermal throttling goes in increments, and starts somewhere in the 30C range (if not before) via 'NVBoost 3.0'; typically you lose 15 MHz increments as temps creep up and NVBoost spoils your party. As to voltage, I don't touch the voltage slider, and most of the time it maxes out at 1.06x V, rarely 1.07x

...to 4.) GPU boost is not disabled, plus the card has a full w-block - and the KPE 520 W r_BAR vbios now. That said, when brand new (early February), it could easily break 15K in Port Royal on air, and that was before r_BAR and other PR enhancements and updated drivers.

...to 5.) I would say 'no', simply because NVBoost kicks in quickly as temps rise. I posted something like 2265 MHz as the max in GPU-Z when starting a bench or game, but it very quickly drops; how quickly depends on your cooling solution. While I can go higher, I typically set the Strix OC at 2130 MHz and 115% PL for long and challenging gaming at 4K Ultra, such as CP2077 or Microsoft FS2020.
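To put rough numbers on that binning behavior, here's a toy model in Python. The 15 MHz bin size matches what's described above; the 35C start point and 5C-per-bin step are guesses for illustration, since Nvidia's actual boost tables aren't public:

```python
# Toy model of GPU Boost temperature binning: past a start temperature,
# the card sheds one 15 MHz bin for every few degrees of warming.
# start_c=35 and step_c=5 are assumed values, purely for illustration.

def boosted_clock(cold_clock_mhz: int, temp_c: float,
                  start_c: float = 35.0, step_c: float = 5.0,
                  bin_mhz: int = 15) -> int:
    """Clock after temperature binning, in MHz."""
    if temp_c <= start_c:
        return cold_clock_mhz
    bins_lost = int((temp_c - start_c) // step_c) + 1
    return cold_clock_mhz - bins_lost * bin_mhz

# A card that boosts to 2205 MHz cold settles lower as the loop warms:
for t in (30, 45, 60):
    print(f"{t}C -> {boosted_clock(2205, t)} MHz")
```

The takeaway is the same as the post: the 'max' clock you see at the start of a bench isn't what you hold after hours of gaming, and how far it slides depends entirely on how flat your cooling keeps the temperature.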


----------



## SoldierRBT

Falkentyne said:


> Did you see my last message? or am I blacklisted?


Yes, I saw your last message. As far as I know, NVVDD can't be adjusted on the FE models, or at least no one has been able to bypass the issue. What I was trying to say is that core voltage in MSI Afterburner isn't the same as NVVDD, and it's safe to use over 1.10v NVVDD. Not blacklisted, your messages are always welcome.


----------



## GQNerd

SoldierRBT said:


> 20 points difference with the 520W KPE ReBar BIOS.
> 3090 KPE HC 1.025v 2220MHz +1350 Mem Max temp: 45C 521W
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 771 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 


2175MHz avg @ 1.025v and 43C max, not bad at all. Did you touch the Classified tool for that (if so, not truly at 1.025v)?
Or just stock 520W and let her rip?


----------



## SoldierRBT

Miguelios said:


> 2175Mhz avg @ 1.025 and 43c max, not bad at all.. Did you touch Classified tool for that (if so, not truly at 1.025v)?
> Or just stock 520w and let her rip?


The Classified tool was used to match internal clocks with requested clocks; using more voltage than it needs just creates unnecessary heat, which lowers avg clocks = lower score. NVVDD/MSVDD voltages only affect internal clocks. Requested clocks at each voltage point are determined by silicon quality and temperature.


----------



## GQNerd

SoldierRBT said:


> The Classified tool was used to match internal clocks with requested clocks; using more voltage than it needs just creates unnecessary heat, which lowers avg clocks = lower score. NVVDD/MSVDD voltages only affect internal clocks. Requested clocks at each voltage point are determined by silicon quality and temperature.


I'm aware, but for example: if you solely set 1.025v in AB, aside from requested clocks not matching effective, you would drop a few bins when it warms up, and your total board power would be less than 520W+. You're still pumping more voltage into the core than 1.025v. I asked because I would be AMAZED if someone could lock in 2205+ at 1.025v


----------



## SoldierRBT

Miguelios said:


> I'm aware, but for example.. If you solely set to 1.025v in AB, aside from requested clocks not matching effective, you would drop a few bins when it warms up, and your total board power would be less than 520w+.. You're still pumping more voltage to the core than 1.025v. I asked because I would be AMAZED if someone can lock in 2205+ at 1.025v


Yes, the core drops a few bins in the test due to the increase in temperature. Power draw actually increases as temperature rises, since the chip becomes leaky. Also, power draw depends on the application/load; PR isn't as intense as RTX Quake 2 or Time Spy, which would drop voltage to 0.993v-1.00v while still pulling 520W+ (this can be solved with a higher-TDP BIOS).


----------



## jlodvo

I've read somewhere that you should only flash a 2x 8-pin BIOS to a 2x 8-pin card. Is this true?
I have an AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G, and the new BIOS with ReBAR screwed the power limit down to only 350W, so FPS dropped a little. I'm confused about which BIOS I should try to flash. It's the waterblock edition of the 3090 from Gigabyte and I have a good water loop on it (max GPU temp 42C, max mem temp 66C), so I wanted to flash a better BIOS with a higher power limit and clocks. Can anyone suggest a good one to try, or does anyone have the same card and has tried a different BIOS?


----------



## inedenimadam

jlodvo said:


> I've read somewhere that you should only flash a 2x 8-pin BIOS to a 2x 8-pin card. Is this true?
> I have an AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G, and the new BIOS with ReBAR screwed the power limit down to only 350W, so FPS dropped a little. I'm confused about which BIOS I should try to flash. It's the waterblock edition of the 3090 from Gigabyte and I have a good water loop on it (max GPU temp 42C, max mem temp 66C), so I wanted to flash a better BIOS with a higher power limit and clocks. Can anyone suggest a good one to try, or does anyone have the same card and has tried a different BIOS?


I flashed my 2x8 Trinity with the 1000W KPE bios. It works just fine other than the power readings being off. The card is completely incapable of reaching 1000W under any circumstances due to the locked voltage, so I am actually less afraid of it on the Trinity than I am my Kingpin.


----------



## PLATOON TEKK

7th PR








It’s wild how much of this is temperature. Pretty much same run as 15th only at 11C instead of 15C. Wild.

edit: 30252 points. max 579.8w. 2250 max. 2175 avg. Didn’t use classy tool yet so 1.1v.


----------



## Falkentyne

SoldierRBT said:


> Yes, I saw your last message. As far as I know, NVVDD can't be adjusted on the FE models, or at least no one has been able to bypass the issue. What I was trying to say is that core voltage in MSI Afterburner isn't the same as NVVDD, and it's safe to use over 1.10v NVVDD. Not blacklisted, your messages are always welcome.


Thank you.
I know the MP2888B controller is "read only" with the Elmor EVC2SX device, and I don't think he found a way past it; the few people trying it ran into an Nvidia brick wall.


----------



## yzonker

jlodvo said:


> I've read somewhere that you should only flash a 2x 8-pin BIOS to a 2x 8-pin card. Is this true?
> I have an AORUS GeForce RTX™ 3090 XTREME WATERFORCE WB 24G, and the new BIOS with ReBAR screwed the power limit down to only 350W, so FPS dropped a little. I'm confused about which BIOS I should try to flash. It's the waterblock edition of the 3090 from Gigabyte and I have a good water loop on it (max GPU temp 42C, max mem temp 66C), so I wanted to flash a better BIOS with a higher power limit and clocks. Can anyone suggest a good one to try, or does anyone have the same card and has tried a different BIOS?


You can flash pretty much any of them. The issue is that, other than the 1000W KP XOC BIOS, all the rest will result in a lower true power limit than one of the 390W 2x8-pin BIOSes. This is due to the 3x8-pin BIOS duplicating the first 8-pin rail as the 3rd, so it thinks you're pulling more power than you are. The 1KW BIOS just works because the limit is so high.
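To illustrate the double-counting, here's a quick sketch of how the firmware's view diverges from real draw. All the wattages below are made-up illustrative numbers, not measurements from any specific card or BIOS:

```python
# Sketch of the power-reading quirk described above: a 3x8-pin BIOS on
# a 2x8-pin card counts the first 8-pin rail twice (as the "missing"
# third rail), so the limiter trips before real draw reaches the limit.
# All wattages here are hypothetical, for illustration only.

def reported_power(slot_w: float, pin8_rails_w: list[float],
                   duplicate_first_rail: bool) -> float:
    """Board power as the firmware sums it."""
    total = slot_w + sum(pin8_rails_w)
    if duplicate_first_rail and pin8_rails_w:
        total += pin8_rails_w[0]  # first rail double-counted
    return total

real = reported_power(66, [150, 150], duplicate_first_rail=False)  # true draw
seen = reported_power(66, [150, 150], duplicate_first_rail=True)   # firmware's view
print(real, seen)
```

With numbers like these, the firmware "sees" 150W more than the card is actually pulling, so it throttles that much early; a limit as high as the 1KW BIOS's simply never gets reached, which is why that one behaves normally.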


----------



## Thanh Nguyen

Anyone know the reason my card is slower in the OpenCL test in RealBench? I just ran RealBench and saw it. Blender on CUDA or OptiX is also extremely slow.


----------



## J7SC

...quick question for those of you who have the *Phanteks* Glacier Strix 3090 water-block. I'm about to mount mine (that EK is outta here) in the near future...how do you like the stock pads that came with the Phanteks block ? I have extra Thermalright 12.8 W/mk in 0.5mm, 1.0mm (and 2.0mm) thickness laying around and am wondering whether I should substitute those in.

...also, that Phanteks block is the heaviest single-GPU block I have ever handled, thanks to that chunky copper cold plate


----------



## PLATOON TEKK

Have the Phanteks block on one of the strix on evc2x. I actually found the pads and temps to be pretty good, however, i haven’t had a chance to get too deep with pads yet.

IMO the backplate is absolute lightweight junk. Maybe use a Bykski backplate (what I did); the Bykski plate is heavy and flat, for an efficient mount on a water-cooled GPU's VRAM.


----------



## J7SC

PLATOON TEKK said:


> Have the Phanteks block on one of the strix on evc2x. I actually found the pads and temps to be pretty good, however, i haven’t had a chance to get too deep with pads yet.
> 
> IMO the backplate is absolute lightweight junk. Maybe use a Bykski backplate (what I did); the Bykski plate is heavy and flat, for an efficient mount on a water-cooled GPU's VRAM.
> 


Thanks  ...I didn't get the backplate. While I have a new EK backplate via RMA, I plan to actually keep the (modded w/ some M.2 coolers) stock Strix backplate.

...and yeah, got a Bykski waterblock that arrived today for a 6900 XT; came with a backplate that seems fairly sturdy


----------



## PLATOON TEKK

J7SC said:


> Thanks  ...I didn't get the backplate. While I have a new EK backplate via RMA, I plan to actually keep the (modded w/ some M.2 coolers) stock Strix backplate .


No worries at all man. Found the Strix stock plate to have a nice amount of pads and performance so you should be good. Especially with those m.2s!

Only thing you’ll notice is the extended plate, but I just pretend it’s some radical design. Enjoy it, card’s a beast 🤘🏻


----------



## GQNerd

KP HC with Kryonaut.. think I can get it lower with LM, and can always add a chiller. 
Temps are high because the card is actually inside the case with the panels on for once..

1000w bios, locked at 2205 @ 1.1v (1.168 NVVDD), only went down 1 bin during the run. 
+1600 mem (stock voltage, think 1.36)

Look at the board power, SHEEEESH


----------



## SirCanealot

Hey everyone, quick question 

Going from a discussion I had on Reddit randomly, someone mentioned (link below) using 1.2mm copper shims on the front of the card with thermal paste on both sides (IE, no thermal pad).

https://www.reddit.com/r/gpumining/comments/mrhmqs/_/gyqoop5

Has anyone else tried this?

I have 0.5mm copper shims and have tried using them with 1mm thermal pads, but given the relatively recent knowledge that we need to compress the thermal pads and end up with around 1.2-1.3mm worth of thermal material, would 1.2mm copper shims actually be a better idea? Otherwise I might have to try some of the squishy Gelid pads

Thanks!


----------



## mckajvah

Is there an RTX 3090 undervolt table somewhere? Just want to know what I can realistically achieve.


----------



## KedarWolf

mckajvah said:


> Is there an RTX 3090 undervolt table somewhere? Just want to know what I can realistically achieve.


I game at .875v +542 memory, 1902 core with no issues.

Edit: I've run Cyberpunk, SOTTR, and Diablo 3 with those settings just fine.


----------



## mckajvah

KedarWolf said:


> I game at .875v +542 memory, 1902 core with no issues.


ok. But there is no spreadsheet where a lot of people have put in values for what they could achieve?


----------



## pat182

mckajvah said:


> ok. But there is no spreadsheet where a lot of people have put in values for what they could achieve?


1.006v @ 2055MHz will give you 2055MHz even at 500W draw (if you keep it around 60C)


----------



## yzonker

mckajvah said:


> ok. But there is no spreadsheet where a lot of people have put in values for what they could achieve?


Not that I've seen. Just have to experiment a bit. Generally I've been able to run a +180 or more below 900mv. That number starts to fall off some on my card at higher voltages. Others have had better success. Really depends on your sample and how low your temps are.


----------



## Benni231990

hi

I have a question: I want to replace the thermal pads under the backplate of my Gainward 3090 Phantom GS (96°C in gaming). What pads must I use, 0.5mm or 1mm?

I found this: EC360® Gold 14.5W/mK (50 x 50 x 0.5 mm). Is it enough for all the RAM chips under the backplate?

I will also add an aluminium heatsink (150x100x27mm) on the backplate.

And what thermal pad should I use between the heatsink and the backplate, 0.5mm or 1mm?


----------



## Falkentyne

Benni231990 said:


> hi
> 
> I have a question: I want to replace the thermal pads under the backplate of my Gainward 3090 Phantom GS (96°C in gaming). What pads must I use, 0.5mm or 1mm?
> 
> I found this: EC360® Gold 14.5W/mK (50 x 50 x 0.5 mm). Is it enough for all the RAM chips under the backplate?
> 
> I will also add an aluminium heatsink (150x100x27mm) on the backplate.
> 
> And what thermal pad should I use between the heatsink and the backplate, 0.5mm or 1mm?


Use Gelid Extreme pads. EC360 are highway robbery prices.
No I don't know what thickness.


----------



## SoldierRBT

Has anyone tested Gelid Solutions GP-Ultimate 15W/mK vs Thermalright 12.8W/mK? Is there any difference?


----------



## Bal3Wolf

Question for you guys who are using a 5900X with a 3090: how much of a perf boost do you think there is over a [email protected]? I'm thinking of picking up a 5900X, but I'm not sure if it's enough of a boost to be worth it, given some of the scores I get with my 3900X right now.









I scored 15 643 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com












I scored 10 796 in Time Spy Extreme


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## KedarWolf

SoldierRBT said:


> Has anyone tested Gelid Solutions GP-Ultimate 15W/mK vs Thermalright 12.8W/mK? Is there any difference?


I have Ultimate on the way but am returning them for Extreme.

Someone told me the Ultimates are hard and flaky, and the Extremes are soft and compressible, which is preferable for a GPU thermal pad.


----------



## des2k...

Fixed my mem temps; since I switched from 1mm to .5mm pads, some were not making contact.

Quick lap on the block's cold plate, still not perfect, plus I removed some height from the standoffs. These seem to be brass; with 220 grit and the weight of the block, it only takes about 5-6 left-right movements to remove .1mm.

I don't have my temp/flow sensor installed, but usually my water temp is about 1C-2C higher than the thermal-tape sensor on the XR7 rad. So I'm guessing about a 7C-8C delta at 420W-440W.

Used tons of thermal paste to close the gap; some memory ICs were not touching the pads while others got compressed.

Didn't re-apply core paste lol.

Need to find that thermal putty that was posted here a while ago.


----------



## FedericoUY

yzonker said:


> Maybe now that EVGA has revised the FTW3 it's equivalent. Certainly wasn't on initial release. I don't have either card but pretty obvious from reading various forums.
> 
> Does the Strix have fuses that can blow like the FTW3? Thinking about if you really wanted to push the card with the KP XOC.
> 
> OTOH, hard to beat EVGA's customer service.


How do I find out if I have the revised version of the FTW3 Ultra?


----------



## Falkentyne

SoldierRBT said:


> Has anyone tested Gelid Solutions GP-Ultimate 15W/mK vs Thermalright 12.8W/mK? Is there any difference?


On what card?

On my 3090 FE, Gelid Extreme pads on the backplate (2mm, 12 W/mK) oddly enough gave the lowest VRAM temps! Gelid Ultimate 2mm pads were 2C hotter, and Thermalright Odyssey 2mm pads were 2-4C hotter than that! Unfortunately the Gelid Extreme pads on the backplate start turning into sticky putty (it does not affect performance) due to the massive heat load, which doesn't happen on the front side of the card. No idea why the Extremes (12 W/mK) are better than the Ultimates (15 W/mK) on the backplate, but I assume it has to do with how they conform to shape more easily, while the harder pads don't contact perfectly (and cause slight warping).

On the front (core) side, neither Gelid Ultimate 1.5mm nor Thermalright Odyssey 1.5mm is great, due to the hardness of the pads affecting core-to-heatsink contact pressure (1mm is too thin), so Gelid Extremes are the best choice there. (You can actually get perfect temps by applying the Extremes, screwing down the X bracket and backplate screws, then immediately unscrewing everything, removing the PCB, applying thermal paste and screwing it down again; an annoying OCD protip.)


----------



## des2k...

Falkentyne said:


> On what card?
> 
> On my 3090 FE, Gelid Extreme pads (on the backplate, 2mm, 12 w/mk), oddly enough gave the lowest VRAM Temps! Gelid Ultimate 2mm pads were 2C hotter, and Thermalright Odyssey 2mm pads were 2-4C hotter than that! Unfortunately the Gelid Extreme pads on the backplate start turning into sticky putty (does not affect performance) due to the massive heat load, which doesn't happen on the front side of the card. No idea why the Extremes (12 w/k) are better than the Ultimate (15 w/mk) on the backplate but I assume it has to do with how they conform to shape easier while the harder pads don't contact perfectly (and cause slight warping).
> 
> On the front (core) side, Neither Gelid Ultimate 1.5mm nor Thermalright Odyssey 1.5mm are great due to the hardness of the pads affecting core to heatsink PSI pressure contact integrity (1mm is too thin), so Gelid Extremes for that are the best choice. (You can actually get perfect temps by applying the Extremes, screwing down the X bracket and backplate screws, then immediately unscrew it, remove the PCB, apply thermal paste and then screw it down again, an annoying OCD Protip).


Odyssey pads are OK; you can actually stretch them before applying. I stretched them from 1mm to .8mm, and then they compressed to .6mm easily.

I have a few spares now, but still left the iceberg pads + noctua paste. These are .5mm on the front.

The backplate still has the Odyssey 1mm pads and they are a mess (about 20+ remounts), but temps are good, so I don't want to lose the 54c mem temps with new pads.


----------



## mckajvah

FedericoUY said:


> How do i find out if I have a ftw3 ultra revised version?


On the revised version it's marked "rev 1.0" just beside the PCI-E slot. Old one says "rev 0.1".


----------



## jlodvo

Finally took the plunge and tried the 1000w bios on my 3090wb. One thing: when I run `nvflash --protecton` it fails. Is that important, or can I just leave it like that?


----------



## yzonker

jlodvo said:


> finally took the plunge and tried the 1000w bios on my 3090wb, one thing is when i do the nvflash ---protecton it failed , is it important or i can leave it just that?


It reverts back to protected automatically from what I remember from the last time this came up a few pages ago.


----------



## Toopy

des2k... said:


> Fixed my mem temps, since I switched from 1mm to .5mm some pads were not making contact.
> 
> Quick lap on the block core, still not perfect + removed some height from the standoffs. These seem to be brass, with 220grit + weight of the block, only takes about 5,6 left right movements to remove .1mm.
> 
> I don't have my temp/flow installed but usually my water temp is about 1c-2c higher than sensor thermal tape on the xr7 rad. So guessing about 7c-8c delta at 420w-440w.
> 
> Used tons of thermal paste to close the gap, some memory ICs were not touching the pads while others got compressed.
> 
> Didn't re-apply core paste lol.
> 
> Need to find that thermal putty that was posted here a while ago.
> 
> 
> View attachment 2513648



Just grab some from digikey





TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey


Order today, ships today. TG-PP10-50 – Thermal Silicone Putty 50 gram Container from t-Global Technology. Pricing and Availability on millions of electronic components from Digi-Key Electronics.




www.digikey.com


----------



## felix121

....


----------



## FedericoUY

mckajvah said:


> On the revised version it's marked "rev 1.0" just beside the PCI-E slot. Old one says "rev 0.1".


Luckily I got the new one then:









So this new revision card is on par with the Strix, right?
A couple of questions:
-Are you finding this chip (if it's a good one) capable of 24/7 at 2000 / 0.950v or less?
-I've read that RAM tjunction temps (mostly the chips on the back side) can get dangerously high in some cases. Which temp would you consider the limit? I touch the backplate and it gets really hot...

Thank you.


----------



## des2k...

FedericoUY said:


> Luckly I got the new one then:
> 
> View attachment 2513671
> 
> 
> So this new revision card is on par to the strix right?
> A couple of questions:
> -Are you finding this chip (if good one) capable to 24/7 at 2000 / 0.950v or less?
> -As I've read that ram tjunction temps (mostly upside chips) in some cases gets dangerously high, which temp would you consider the limit? I touch the backplate and gets really hot...
> 
> Thank you.


My Zotac does about 2010MHz at 900mV in TE GT2 / PR loops. I never tried pushing for 1-2 bins more or less voltage in games.
I may not have the best OC card at high core voltage, but it holds 2115 easily at 1018mV and +1500 mem with the XOC vbios under any load.


----------



## FedericoUY

des2k... said:


> My Zotac does about 2010 900mv in TE GT2 / PR loops. I never tried pushing for 1,2bins more or less voltage in games.
> I may not have the best OC card at high core voltage, but It holds 2115 easily at 1018mv and +1500mem with xoc vbios on any load.


Nice, 0.9v is great. I did not have time to test under 0.95v, but I definitely will. +1500 on the mems leads to what final speed?


----------



## J7SC

Falkentyne said:


> On what card?
> 
> On my 3090 FE, Gelid Extreme pads (on the backplate, 2mm, 12 w/mk), oddly enough gave the lowest VRAM Temps! Gelid Ultimate 2mm pads were 2C hotter, and Thermalright Odyssey 2mm pads were 2-4C hotter than that! Unfortunately the Gelid Extreme pads on the backplate start turning into sticky putty (does not affect performance) due to the massive heat load, which doesn't happen on the front side of the card. No idea why the Extremes (12 w/k) are better than the Ultimate (15 w/mk) on the backplate but I assume it has to do with how they conform to shape easier while the harder pads don't contact perfectly (and cause slight warping).
> 
> On the front (core) side, Neither Gelid Ultimate 1.5mm nor Thermalright Odyssey 1.5mm are great due to the hardness of the pads affecting core to heatsink PSI pressure contact integrity (1mm is too thin), so Gelid Extremes for that are the best choice. (You can actually get perfect temps by applying the Extremes, screwing down the X bracket and backplate screws, then immediately unscrew it, remove the PCB, apply thermal paste and then screw it down again, an annoying OCD Protip).


This is good info! I'm getting closer to mounting both the Phanteks block on the 3090 Strix OC and the Bykski block on the 6900 XT (both custom PCB, 3x8-pin) as part of my complete work+play home office overhaul.

For the 3090 / Phanteks, I'll probably stick with the supplied grey pads, which look like they have some squishy adaptability (in spite of having plenty of Thermalright Odyssey pads in 0.5, 1.0 and 2.0 mm sizes available).

Interestingly, the Bykski block is built such that it uses only paste on the front side (GPU die, VRAM). For that, I plan to use either Gelid GC-Extreme or Noctua NT-H2 paste, as those are a bit 'thicker' and less runny than the Kryonaut I usually use, not least because both GPUs are mounted vertically.


----------



## newls1

I keep seeing people replacing the thermal pads that come with FCWB's so it leads me to this question: I have the EK Vector FCWB and active backplate coming for my 3090 FTW3 Ultra... should I stick with what EK ships with or replace with something else? If something else.... what, and what thickness?


----------



## KedarWolf

newls1 said:


> I keep seeing people replacing the thermal pads that come with FCWB's so it leads me to this question: I have the EK Vector FCWB and active backplate coming for my 3090 FTW3 Ultra... should I stick with what EK ships with or replace with something else? If something else.... what, and what thickness?


The go-to pads these days are Gelid Extreme. They are soft and compressible, not hard like Thermalright and Fujipoly, and people are getting the best results with them.

Edit: The same thickness as in the installation guide for the block and active backplate.


----------



## GRABibus

mckajvah said:


> ok. But there is no spreadsheet where a lot of people have put in values for what they could achieve?


[email protected],975V on V/F curve.
+750MHz on memory.
Strix on air, repasted with Conductonaut and thermal pads changed to Thermalright Odyssey.
at 22degrees ambient, it stabilizes at [email protected],975V


----------



## degenn

Good for #2 HoF: 1x 3090/5900x.

Not bad. Makes me wish I built an 11900k rig instead.


----------



## des2k...

Just ran Quake RTX at 600w on my EK block mod; I have to say I wasn't expecting a ~12c delta at 600w with Kryonaut. But there you go: with a decent mount, no need for liquid metal.

Kryonaut is so expensive, and I've wasted a few small tubes on this block by now.


----------



## Falkentyne

des2k... said:


> Just ran Quake RTX at 600w on my EK block mod; I have to say I wasn't expecting ~12c delta at 600w with kryonaut. But there you go, with decent mount no need for liquid metal.
> 
> Kryonaut is so expensive, I must of wasted a good 4 tubes by now on this block re-mounts.
> 
> View attachment 2513746


Use this instead.
Equal to or better than Kryonaut (Kryonaut still beats it on CPUs, however).
Remember to spread it fully in a thick layer.

This stuff is basically TFX in an easier-to-spread viscosity, so 90% of it doesn't stick to the spatula anymore.



https://www.amazon.com/dp/B079DQS77Q



This stuff is 100% equal to TFX on my 9900k in temps.
1C better than Kryonaut Extreme on my GTX 1070 MXM card.


----------



## PLATOON TEKK

Optimus Strix block available now!








Optimus PC Store
They mention KP is next.


----------



## Bal3Wolf

degenn said:


> Good for #2 HoF: 1x 3090/5900x.
> 
> Not bad. Makes me wish I built an 11900k rig instead.



Nice one. I saw you saying you had the 2nd-fastest 5900x and decided to look; I have the fastest 3900x with a 3090.








I scored 15 643 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## des2k...

PLATOON TEKK said:


> Optimus Strix block available now!
> View attachment 2513751
> 
> Optimus PC Store
> They mention KP is next.


Wonder if they are still doing the ref design, although I think the Zotac is a few mm longer vs ref.


A 3-month-old email:

Glad ya like our products! 8c-10c delta is definitely normal. We are planning on doing the ref design, though it'll be after Strix and Kingpin. We don't have an ETA, it'll be hopefully in the next few months. 

Cheers,

*JOSH RAY*
Optimus Advanced Water Cooling
optimuspc.com


----------



## jlodvo

Questions, guys:

I have a Gigabyte GV-N3090AORUSX WB-24GD (2x8-pin) with its current rebar BIOS: 350w power limit / 1785 boost clock.

I flashed it with a similar BIOS from the Gigabyte GV-N3090AORUS X-24GD (3x8-pin, the newer Xtreme): the latest rebar BIOS with a 420w power limit / 1860 boost clock.

But in games I get lower fps with the 3x8-pin Xtreme BIOS. I can see it has higher board power draw (it's going to 400 watts instead of the 320w-340w with the original BIOS), and the original BIOS also boosts higher. Is it because the Xtreme BIOS was made for a 3x8-pin card, and that's why it has lower performance?
In Warzone at 4k max settings with nvidia filters, all the same settings, dlss off, I'm down around 15-20fps.

So should I try to find a similar BIOS built for a 2x8-pin gpu?


----------



## KedarWolf

jlodvo said:


> questions guys
> 
> i have a gigabyte GV-N3090AORUSX WB-24GD 2x8pin with its current bios with rebar and 350w power limit / boost clock 1785
> 
> i flash it with a similar bios from gigabyte GV-N3090AORUS X-24GD 3x8pin newer extreme with latest bios with rebar with 420w power limit / boost clock 1860
> 
> but in games i have lower fps with the 3x8pin xtreme bios , i can see it has higher power board draw its going to 400watts instead of the original one for my gpu that is around 320w to 340w
> the original bios boost is higher also, is it because the bios for the extreme was made for a 3x8pin? thats why it has lower performance?
> in warzone at 4k max settings with nvidia filters all the same settings dlss off i have lower fps like around 15-20fps
> 
> so should i try to find a similar bios build for a 2x8pin gpu?


Only the 1000W BIOS will increase voltage and performance on a two-8-pin card. Any other three-8-pin BIOS will lower performance.


----------



## KedarWolf

Did we ever get the rebar-enabled 1000w bios?


----------



## Falkentyne

jlodvo said:


> questions guys
> 
> i have a gigabyte GV-N3090AORUSX WB-24GD 2x8pin with its current bios with rebar and 350w power limit / boost clock 1785
> 
> i flash it with a similar bios from gigabyte GV-N3090AORUS X-24GD 3x8pin newer extreme with latest bios with rebar with 420w power limit / boost clock 1860
> 
> but in games i have lower fps with the 3x8pin xtreme bios , i can see it has higher power board draw its going to 400watts instead of the original one for my gpu that is around 320w to 340w
> the original bios boost is higher also, is it because the bios for the extreme was made for a 3x8pin? thats why it has lower performance?
> in warzone at 4k max settings with nvidia filters all the same settings dlss off i have lower fps like around 15-20fps
> 
> so should i try to find a similar bios build for a 2x8pin gpu?


That power draw is fake.
_Some_ 3x8-pin cards can use _some_ 2x8-pin BIOSes and gain a higher power limit. The board's firmware still draws from all three 8-pins (via the controller chip), but the third 8-pin is never reported: the BIOS only sees the first two 8-pins and derives the power limit from those, while the third 8-pin draws more power than what is reported. So on these boards the card reports LESS power than it's actually using, which helps you.

On a 2x8-pin card running a 3x8-pin BIOS (if the card even boots), the opposite happens. The firmware is only designed to draw from two 8-pins, but the BIOS expects three, so it duplicates the 8-pin #1 reading for the missing third rail. That makes the BIOS report MORE power than the board is actually using, because the "fake" 8-pin #3 reading is included in total board power. The card therefore hits the power limit SOONER, at a very low true power draw, which hurts you.
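To make the accounting concrete, here's a rough Python sketch of the duplicated-rail bookkeeping described above (all wattages are invented for illustration; real telemetry also meters the PCIe slot and minor rails):

```python
# Sketch of the power-reporting mismatch described above.
# All numbers are made up for illustration; real boards also
# meter the PCIe slot and minor rails.

def reported_board_power(rail_draws_w, bios_expected_rails):
    """Reported power when the BIOS expects more 8-pin rails than
    the board has: missing rails mirror rail #1's reading."""
    readings = list(rail_draws_w)
    while len(readings) < bios_expected_rails:
        readings.append(rail_draws_w[0])  # duplicated rail #1 reading
    return sum(readings)

# A 2x8-pin card actually pulling 150W per connector:
actual_rails = [150.0, 150.0]

# Matching 2x8-pin BIOS: reported equals actual connector draw.
print(reported_board_power(actual_rails, bios_expected_rails=2))  # 300.0

# 3x8-pin BIOS on the same card: rail #1 is counted twice, so 450W
# is reported while only 300W really flows, tripping the power
# limit roughly a third early.
print(reported_board_power(actual_rails, bios_expected_rails=3))  # 450.0
```

The 3x8-pin card flashed with a 2x8-pin BIOS is just the mirror image: the third connector draws real power that never enters the sum.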


----------



## J7SC

Toopy said:


> Just grab some from digikey
> 
> 
> 
> 
> 
> TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey
> 
> 
> Order today, ships today. TG-PP10-50 – Thermal Silicone Putty 50 gram Container from t-Global Technology. Pricing and Availability on millions of electronic components from Digi-Key Electronics.
> 
> 
> 
> 
> www.digikey.com


...got some TG-PP10-50 thermal putty coming


----------



## KedarWolf

This is PR stress test and TS test 2 stable.


----------



## VickyBeaver

KedarWolf said:


> Did we ever et the rebar enabled 1000w bios?


That's what I would like to know too. I wouldn't mind rebar being enabled, but given the performance gains on my 2x8-pin (a decent stable OC of 2165-2140) compared to rebar on only a 390w bios, the 1000w is an easy choice.


----------



## yzonker

Falkentyne said:


> That power draw is fake.
> _Some_ 3x8 pin cards can use _some_ 2x8 pin Bioses and gain a higher power limit, because the board's firmware will draw from the 3x8 pins (via the controller chip), but the third 8 pin won't be reported to the firmware because the bios will only see the first two 8 pins, and it will get the power limit from there (with the third 8 pin drawing more power than what is reported to BIOS). So on these boards, the board will report LESS power draw than what it's actually using (you get helped from this).
> 
> On a 2x8 pin card, if you use a 3x8 pin Bios, if the card boots, the firmware, since it's only designed to draw from 2 8 pins, but the BIOS is expecting 3x8 pins, it will duplicate the third 8 pin from the 8 pin #1 reading, causing the bios to report MORE power than what the board is actually using which means it will hit the power limit SOONER (and the reported "TDP" power draw of the board will be above what the board is actually pulling, because the bios will include the "fake" 8 pin #3 reading in Total Board power draw, which hurts you, because it hits a power limit at a very low "true" power draw.


So only some 3x8-pin cards work with a 2x8-pin BIOS? I've only seen it done on a 3090 FTW3, but thought it might work on others. I just picked up a 3080 Ti FTW3, and Jacob stated they are not planning on releasing any kind of XOC BIOS for it, so I thought this trick might work on it. Maybe I'll just have to try it and see. Has it been tried on other cards and failed? Bricked? Or just no help on the PL?


----------



## des2k...

The universal ref block is still coming from Optimus Cooling; they already have a test block. Got a reply on reddit.

I mentioned my 12c delta at 600w on the Trinity and he said it's about an 8c delta out of the box on their test unit 😁


----------



## FedericoUY

Does the OC switch on the FTW3 Ultra switch to a higher-wattage BIOS on this card?


----------



## GRABibus

FedericoUY said:


> Does the OC switch in the ftw3 ultra, switches to a more powerful wattage bios on this card?


No, it changes fan profile.


----------



## KedarWolf

Bought these to go with my EKWB Strix OC 3090 block.

EKWB Strix Active Backplate (preorder).










Gelid Extreme 2.0mm/1.5mm/1.0mm x3 each.










To top off my 360 rad for my GPU.


----------



## Biscottoman

Guys, I would like to ask if the temps on my 3090 Strix are fine (520W EVGA BIOS). I'm maxing at 56° while gaming at full load, and the water reaches 41°, which seems pretty high to me. The card is cooled by the Phanteks block + MP5WORKS active backplate; the radiator is a HardwareLabs GTX560 + EKWB Kinetic D5 pump (GPU only). With this setup I would expect much lower temps, but I have to say room temps are really high these days, something like 30-31°, so maybe that's the issue. Thanks in advance for the help.


----------



## des2k...

Does the game actually use 520w?
If yes, that makes it about a 15c delta.

A good mount on the block could be closer to an 8-10c delta at 500w. Another rad will probably drop temps an additional 5c.

So you could be 10c lower, but is it worth the remounts + extra rad?
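For comparing mounts at different wattages, it helps to turn the delta into a core-to-water thermal resistance (delta divided by board power). A quick sketch with the numbers from this exchange; the "good mount" figures are the hypothetical 8c-at-500w case:

```python
# Compare block mounts via thermal resistance (delta / power).
# The "good mount" numbers are hypothetical, per the post above.

def block_resistance_c_per_w(gpu_core_c, water_c, power_w):
    """Core-to-water thermal resistance in C/W."""
    return (gpu_core_c - water_c) / power_w

current = block_resistance_c_per_w(56.0, 41.0, 520.0)     # 15c delta at 520w
good_mount = block_resistance_c_per_w(49.0, 41.0, 500.0)  # 8c delta at 500w

print(round(current, 4))     # 0.0288 C/W
print(round(good_mount, 4))  # 0.016 C/W, roughly 45% lower resistance
```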


----------



## des2k...

KedarWolf said:


> Bought these to go with my EKWB Strix OC 3090 block.
> 
> EKWB Strix Active Backplate (preorder).
> 
> View attachment 2513855
> 
> 
> Gelid Extreme 2.0mm/1.5mm/1.0mm x3 each.
> 
> View attachment 2513857
> 
> 
> To top off my 360 rad for my GPU.
> 
> View attachment 2513858


They have the zotac active plate in stock, almost bought it the other day.

My mem temps at the back are good if we're talking normal games: 54c. It's 64c+ for extreme loads (3dmark, Quake RTX) or mining, which I don't do.


----------



## jura11

@Biscottoman 

What fans are you running, and how fast are they running? And what case do you have? 

Did you try just running the fans at max speed with the pump at max speed, to see if that makes any difference? 

Your ambient temperature plays a big role in the higher temperatures too; I would guess with lower ambient temps your overall temps would be lower as well.

Hope this helps 

Thanks, Jura


----------



## Falkentyne

des2k... said:


> Does the game actually use 520w ?
> If yes that makes it about 15c delta. I guess it's good but not amazing.
> 
> A good mount on block could be closer to 10c-8c delta at 500w. Another rad will prob drop temps an additional 5c.
> 
> So you could be 10c lower temps, but is it worth the remounts + extra rad ?


There's an absolute minimum delta which can't be crossed, regardless of what the true 'hotspot' actually is relative to the average core temp. When you reach that point, the hotspot at idle will drop in lockstep with the average temp, 0.1C at a time, all the way down to 0C, without ever going below that delta. This seems to be vbios-related.


----------



## Bal3Wolf

Delta is air-to-water temp, correct? Been a while. My room temp is 22c and my water is 29c, mining on my 3090 and 50% of my 3900x. Just last night I gave my rads a real good cleaning and dropped 12c in water temp. With 4 bad discs in my back it's a chore to move this 100-pound beast, so I'd been putting it off lol.


----------



## Falkentyne

Bal3Wolf said:


> delta is air to water temp correct been awhile my room temp is 22c my water is 29c mining on my 3090 and 50% of my 3900x, just last night i gave my rads a real good cleaning and droped 12c in water temp with 4 bad discs in my bad its a choir to move this 100 pound beast so been putting it off lol.


I thought he was talking about the core-to-hotspot delta, not water temps. I don't use water cooling. Sorry.


----------



## des2k...

Oh, it was about his water-to-GPU delta. I guess I should have quoted his message lol:

"Guys i would like to ask if my actual temps on my 3090 strix are fine (520W evga BIOS). I'm maxing at 56° while gaming at max load and water reaches 41° which seems pretty high to me. "

For hotspot, I did see/read here that something like 10c is the minimum delta set by the vbios, so it might not actually represent the real temps on the die. I haven't seen anything lower than 13c.

This is how it looks for my last mount, which got me an 11-12c delta at 600w and a 13c hotspot delta. I guess there's room for improvement, but I don't really want to chase 1-3c by taking this thing apart again :-(

Another lap on the GPU core, removing .05mm from the standoffs, and a quick 2000-grit pass on the die to remove the markings would be the next steps. But I would say it's not worth much for temps if the Optimus block gets an 8c delta out of the box.


----------



## EarlZ

mckajvah said:


> ok. But there is no spreadsheet where a lot of people have put in values for what they could achieve?



I haven't noticed any, probably due to how random it can be.

On my GPU I can do:
0.850v at 1860MHz 
0.950v at 1950MHz
1.00v at 2040MHz

I think these are below-average results, and I cannot tell any difference in frame rate/smoothness from 1860MHz to 2040MHz, maybe just 2-3fps more.
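As a rough sanity check on the 2-3fps observation: even if a game scaled perfectly with core clock (they rarely do), going from 1860MHz to 2040MHz caps the gain at under 10%. A quick sketch, with a made-up 60fps baseline:

```python
# Best-case fps gain from a core clock bump, assuming a purely
# core-bound game that scales linearly with clock.
base_clock_mhz = 1860
oc_clock_mhz = 2040
base_fps = 60.0  # hypothetical baseline frame rate

scaling = oc_clock_mhz / base_clock_mhz  # ~1.097, i.e. at most ~9.7% more fps
print(round(base_fps * scaling - base_fps, 1))  # 5.8 fps best case
# Memory, CPU and power limits usually cut this further, which is
# consistent with seeing only 2-3fps in practice.
```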


----------



## des2k...

EarlZ said:


> I havent noticed any, probably due to how random it can be.
> 
> On my GPU I can do:
> 0.850v at 1860Mhz
> 0.950 at 1950Mhz
> 1.00v at 2040Mhz
> 
> I think these are below average results and I cannot tell any difference on the frame rates/smoothness from 1860Mhz to 2040Mhz, maybe just 2-3fps more.


your undervolting numbers look the same


----------



## Biscottoman

jura11 said:


> @Biscottoman
> 
> What fans are you running and how fast they are running? And what case do you have?
> 
> Did you tried just running fans at max speed with pump at max speed if this will make any difference?
> 
> Yours ambient temperatures too plays bigger role in higher temperatures, I would guess with lower ambient temperatures yours overall temperatures would be lower too
> 
> Hope this helps
> 
> Thanks, Jura


The case is a Tower 900; the fans are Noctua industrial 3000rpm, running at something like 1800rpm (not pushing more than this because they are very loud at 100%). And yes, probably the main issue is high room temps: the outside temperature is reaching 34-35 these days here in Italy where I live.


----------



## SoldierRBT

3090 KPE HC 1.018v 2175MHz +1300 Mem Max temp: 48C 520W BIOS








I scored 22 091 in Time Spy


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Alex24buc

Anybody with the Palit GameRock OC know why the Palit product page has a new resizable BAR BIOS version from 31.05.2021? If I installed the rebar BIOS version from their page in April, should I try to install this new version? Thanks!!


----------



## Benni231990

I have a Phantom GS and I downloaded the tool, but it said no update needed.

And do we now have a 1000 watt bios with rebar?? (But please no EVGA, that bios doesn't work great on my Gainward.)


----------



## Alex24buc

*@ Benni231990*

I tried the new tool but it tells me too that there is no update needed. Strange; I wonder what the difference is between this new tool from 31.05 and the one from April.


----------



## waltdanger

Just a note about the Gelid Ultimate pads: they seem to work fine for me. I got the 1.5mm ones for both front and back. People say they are too hard/flaky, but the ones I got, while somewhat hard, were not at all flaky and are still compressible enough. I think people generally use way too much padding (at least judging from the photos and video tutorials I've seen), which makes it harder for the pads to compress. You just need enough for the tops of the casings, even a little less, since when a pad compresses it will cover more of them.
I have a 3090 FE, everything air cooled. Changing JUST the backplate pads did nothing for temps, and might have even made them go up 2C. But once I did the front too, I dropped over 20C.









The core temp went up about 8C after the front repad, but my theory is that since the front vram heat is now actually being transferred to the big heatsink, the additional heat in the heatsink causes the core to run hotter too. I do not think it is a contact issue because of how rapidly the heat dissipates when the load is removed, and while gaming the core is in the low 70s and the hotspot low 80s (mildly overclocked, set to a 400W PL), which I feel is not a danger, unlike the 100+ vram temps I had before.


----------



## des2k...

waltdanger said:


> Just a note about the Gelid Ultimate pads, they seem to work fine for me. I got the 1.5mm ones for both front and back. People say they are too hard/flaky but the ones I got while somewhat hard, were not at all flaky, and are still compressible enough. I think people generally use way too much padding (at least from photos and video tutorials I've seen) which will make it harder for it to compress. You just need enough for the tops of the casings, even a little less since when it compresses it will cover more of it.
> I have 3090 FE, everything air cooled. Changing JUST the backplate pads did nothing to temps, might have even made them go up 2C. But once I did front too, I dropped over 20C.
> 
> View attachment 2513963
> 
> 
> The core temp went up after the front repad about 8C - but my theory is that since the front vram heat is actually being transferred to the big heatsink, the additional heat in the heatsink causes core to run higher too. I do not think it is a contact issue because of how rapidly the heat dissipates when the load is removed, and gaming the core is in the low 70s and hotspot temp low 80s (when overclocked mildly and set to 400W PL) which I feel is not a danger - unlike the 100+ vram temps I had before.


Faster heat transfer from the VRM and MEM side with premium pads will not add 8c to core temps. If that were the case, nobody would re-pad their cooler with premium pads.

You probably lost pressure on the core from the mem pads, or your paste application is not like before.

Mem on the front is ~45w of heat.

For the VRM, the mem & uncore stages contribute mostly nothing because they run at 40% capacity,

and the core VRM will also run at about 40% capacity at 400w board power.


----------



## Falkentyne

waltdanger said:


> Just a note about the Gelid Ultimate pads, they seem to work fine for me. I got the 1.5mm ones for both front and back. People say they are too hard/flaky but the ones I got while somewhat hard, were not at all flaky, and are still compressible enough. I think people generally use way too much padding (at least from photos and video tutorials I've seen) which will make it harder for it to compress. You just need enough for the tops of the casings, even a little less since when it compresses it will cover more of it.
> I have 3090 FE, everything air cooled. Changing JUST the backplate pads did nothing to temps, might have even made them go up 2C. But once I did front too, I dropped over 20C.
> 
> View attachment 2513963
> 
> 
> The core temp went up after the front repad about 8C - but my theory is that since the front vram heat is actually being transferred to the big heatsink, the additional heat in the heatsink causes core to run higher too. I do not think it is a contact issue because of how rapidly the heat dissipates when the load is removed, and gaming the core is in the low 70s and hotspot temp low 80s (when overclocked mildly and set to 400W PL) which I feel is not a danger - unlike the 100+ vram temps I had before.


Is that a 3090 FE or some other AIB?

Remember to test your temps with a manual fan speed (100%), not auto fans!
The auto fan speed ramps not just on core temp but also on VRAM temp.
So if you reduced your VRAM temps, your fans won't spin up as much, and your core will run hotter simply because the cooler VRAM isn't ramping the fans.
So in order to determine whether you really gained or lost temps, you must use a manual fan speed!

I use 100% since I can't even hear the card over my 3x Noctua 3000 RPM 120mm and 3x Noctua 3000 RPM 140mm industrials.
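To put the confound another way: auto fan duty roughly follows whichever sensor demands more airflow, so cooling the VRAM silently removes airflow from the core. A toy model (the real fan curve is firmware-defined; these curves are invented):

```python
# Toy model of why auto-fan testing confounds repad results.
# The real curves are firmware-defined; these are invented.

def fan_duty(core_c, vram_c):
    # Assume duty follows whichever sensor demands more airflow,
    # clamped between 30% and 100%.
    duty_core = min(100, max(30, (core_c - 40) * 2))
    duty_vram = min(100, max(30, (vram_c - 60) * 2))
    return max(duty_core, duty_vram)

# Before repad: hot VRAM forces a high duty, which helps the core too.
print(fan_duty(core_c=70, vram_c=100))  # 80, driven by the VRAM
# After repad: cool VRAM, duty falls, so the core reads hotter even
# though core cooling itself did not change.
print(fan_duty(core_c=70, vram_c=80))   # 60, driven by the core
```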



des2k... said:


> Having heat transfer faster with premium pads from the VRM and MEM side will not add 8c to core temps. If that was the case nobody would re-pad their cooler with premium pads.
> 
> You prob lost pressure on the core from the mem pads or your paste application is not like before.
> 
> Mem front is like ~45w heat
> 
> for VRM, the mem & uncore heat from the stages are mostly nothing because they run 40% capacity
> 
> the core VRM will also run 40% capacity at 400w board power


Not necessarily; I explained that above. He's using auto fan speed, which is a problem when both the VRAM and GPU temps control how fast the fan ramps up. If the VRAM is cooler, the fan won't go as fast, so the GPU will run hotter. He needs to have used 100% fan speed both before and after his re-pad and repaste to determine the actual improvement; auto fans are not the way to go about this. So you could very well be right (in which case, if he used Gelid Extreme 1.5mm pads on the core side rather than Gelid Ultimates, his temps would be lower), but it could also be the auto fans. The fan factor needs to be eliminated, because "auto test + auto test" is not a valid test when multiple ICs on the card can spin up the fan.

I've already determined that the fans will spin up to 57% auto speed at 73C on the core. (This is when VRAM is <80C).

Why am I on point so much today? Because I lost over 40 one-minute lightning blitz chess games in a row. Yeah.


----------



## waltdanger

It's an FE, and yeah, I should have tested a bit more scientifically. To make things worse, my case fans changed too: I would run them all at 100% while mining, and after the repad I lowered them all to about 70% so they are much quieter. 
Either way I'm happy with the results. For the thermal paste I used Gelid GC Extreme. I'm just going to leave the card alone now unless I ever see the core hit 80 or have money to blow on a custom loop. I would consider using LM on the core, but that requires a perfect mount (with untested firm pads) and I don't want to risk a $1500 card. 
My goal was to get this card to a state where it's still alive 5 years from now; before, I felt it wouldn't even make it past 3.


----------



## SoldierRBT

Is it possible to set a minimum voltage in nvidia-smi? I'm currently using nvidia-smi.exe -lgc 210,2100 with +150 in MSI Afterburner, which drops voltage to 1.006v. The next step, +165, isn't stable because it drops voltage to 0.987v. How can I get 1.000v as the minimum?


----------



## Nizzen

SoldierRBT said:


> Is it possible to set minimum voltage in nvidia-smi? I'm currently using nvidia-smi.exe -lgc 210,2100 +150 in MSI Afterburner which drop voltage to 1.006v. Next step +165 isn't stable because it drops voltage to 0.987v. How I can get it to 1.000v as minimum?


What gpu is it?
Strix?


----------



## SoldierRBT

@Nizzen 

3090 KPE HC


----------



## wirx

What is the best BIOS for the MSI 3090 Gaming X Trio? I'm using the 500W EVGA one at the moment, but want to use Resizable BAR. Is the 450W MSI Suprim X the only option, or are there some 500-520W EVGA ones with Resizable BAR that work on the MSI?


----------



## Nizzen

wirx said:


> What is the best BIOS for the MSI 3090 Gaming X Trio? I'm using the 500W EVGA one at the moment, but want to use Resizable BAR. Is the 450W MSI Suprim X the only option, or are there some 500-520W EVGA ones with Resizable BAR that work on the MSI?


Try the EVGA 520W ReBar one:

EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) | www.techpowerup.com

----------



## wirx

Thanks! Do all the fans work correctly with this one?


----------



## Nizzen

wirx said:


> Thanks! Do all the fans work correctly with this one?


I use this on a Strix with water cooling, so I don't know.
GPU-Z power usage shows only 2x 8-pin, but it's drawing the right amount of power anyway, going by a Kill A Watt meter.

With air cooling, 420-450W is enough if you don't want a jet plane in your case 😆


----------



## wirx

I have the original fans only; as far as I know, the Kingpin BIOS won't run them correctly on MSI cards.


----------



## Nizzen

wirx said:


> I have the original fans only; as far as I know, the Kingpin BIOS won't run them correctly on MSI cards.


Try and see what Port Royal score you'll get


----------



## Biscottoman

Guys, I don't know why, but I suddenly lost some performance on my 3090 Strix in almost every benchmark, so I tried switching between many drivers (reinstalling with DDU many times) but nothing changed at all.
The best result I've got right now on PR is this one: I scored 14 936 in Port Royal.
Before, I was easily able to reach 200+ points more at lower frequencies: I scored 15 164 in Port Royal.

This also seems to happen in Time Spy Extreme, where I've lost something like 300-400 points on GPU score, and in Superposition 8K Optimized, where I lost 100 points.

Do you know what my issue could be?


----------



## Falkentyne

Biscottoman said:


> Guys, I don't know why, but I suddenly lost some performance on my 3090 Strix in almost every benchmark, so I tried switching between many drivers (reinstalling with DDU many times) but nothing changed at all.
> The best result I've got right now on PR is this one: I scored 14 936 in Port Royal.
> Before, I was easily able to reach 200+ points more at lower frequencies: I scored 15 164 in Port Royal.
> 
> This also seems to happen in Time Spy Extreme, where I've lost something like 300-400 points on GPU score, and in Superposition 8K Optimized, where I lost 100 points.
> 
> Do you know what my issue could be?


I've seen that happen.
Try

1) Completely uninstall MSI Afterburner, then reboot the computer.
2) Reinstall the Nvidia drivers you already have, with the "Perform a clean installation" option. Reboot.
3) Uninstall the Nvidia drivers from Control Panel / Programs and Features. Reboot.
4) Download Display Driver Uninstaller (DDU), extract it to a folder, and reboot to safe mode (hold down Shift while clicking Restart, then go to advanced options).
5) Run DDU in safe mode, clean and restart.
6) Install the current Nvidia drivers. Reboot.
7) Install MSI Afterburner.

Does that help?


----------



## Biscottoman

Falkentyne said:


> I've seen that happen.
> Try
> 
> 1) Completely uninstall MSI Afterburner, then reboot the computer.
> 2) Reinstall the Nvidia drivers you already have, with the "Perform a clean installation" option. Reboot.
> 3) Uninstall the Nvidia drivers from Control Panel / Programs and Features. Reboot.
> 4) Download Display Driver Uninstaller (DDU), extract it to a folder, and reboot to safe mode (hold down Shift while clicking Restart, then go to advanced options).
> 5) Run DDU in safe mode, clean and restart.
> 6) Install the current Nvidia drivers. Reboot.
> 7) Install MSI Afterburner.
> 
> Does that help?


OK, I'm surely going to try this right now. Just a question: at the moment I'm running 457.30, since many people told me these are the most stable and best-performing drivers for Nvidia cards. Would it be better to go with the latest released drivers, or should I keep these?


----------



## EarlZ

des2k... said:


> your undervolting numbers look the same



Is that a bad thing ?


----------



## des2k...

EarlZ said:


> Is that a bad thing ?


Was just pointing out what the average OC + undervolt looks like.

It all depends on how lucky you are with the silicon, plus how cool you can keep your card.

On my Zotac I can do a lot better than this video, but again, it's not worth much to me since I push it past that on water cooling.


----------



## Thanh Nguyen

I don't see CUDA Force P2 State in Nvidia Inspector. What is going on, guys? My card downclocks to 420 MHz under compute workloads.


----------



## des2k...

Thanh Nguyen said:


> I don't see CUDA Force P2 State in Nvidia Inspector. What is going on, guys? My card downclocks to 420 MHz under compute workloads.


You need to download the correct, up-to-date nvInspector.

Why is it 420 MHz? XOC vBIOS?


----------



## Thanh Nguyen

des2k... said:


> You need to download the correct, up-to-date nvInspector.
> 
> Why is it 420 MHz? XOC vBIOS?


Got it working now. I was using the old nvInspector.


----------



## GRABibus

wirx said:


> What is the best BIOS for the MSI 3090 Gaming X Trio? I'm using the 500W EVGA one at the moment, but want to use Resizable BAR. Is the 450W MSI Suprim X the only option, or are there some 500-520W EVGA ones with Resizable BAR that work on the MSI?


You can try the XOC 500W EVGA with ReBar:

EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) | www.techpowerup.com


----------



## yzonker

Where do you guys get updated copies of ABE? I'd like to try looking at some 3080 Ti BIOSes, but the copy someone linked a while back in this thread doesn't work with them.


----------



## FedericoUY

GRABibus said:


> You can try the XOC 500W EVGA with ReBar:
> 
> EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) | www.techpowerup.com


Is anyone using this bios? Is this bios from EVGA?


----------



## GRABibus

FedericoUY said:


> Is anyone using this bios? Is this bios from EVGA?


It is an EVGA BIOS.
I use it.
Best BIOS I've tested for my Strix.


----------



## GRABibus

SoldierRBT said:


> Is it possible to set a minimum voltage in nvidia-smi? I'm currently using nvidia-smi.exe -lgc 210,2100 plus a +150 offset in MSI Afterburner, which drops voltage to 1.006v. The next step, +165, isn't stable because it drops voltage to 0.987v. How can I get it to hold 1.000v as a minimum?


Does it drop to 0.987V because of the power limit?
Did you try to undervolt?


----------



## FedericoUY

GRABibus said:


> It is an EVGA BIOS.
> I use it.
> Best BIOS I've tested for my Strix.


What I meant was: did it come from EVGA themselves, or is it a third-party BIOS?


----------



## SoldierRBT

GRABibus said:


> Does it drop to 0.987V because of the power limit?
> Did you try to undervolt?


At +150 it stays at 1.006v, and +165 should be 0.993v-1.000v. It crashes at +165 because it drops voltage to 0.987v while still holding 2100MHz. The nvidia-smi method is very good because it keeps internal clocks very close to the requested clocks. Undervolting with the v/f curve drops internal clocks and perf is worse.


----------



## Shawnb99

What an utter joke it is for every card drop. Hour before the drop and the EVGA site is so bogged down you can't even log in. Just an ****show all around for everything now


----------



## Lord of meat

FedericoUY said:


> What I meant was: did it come from EVGA themselves, or is it a third-party BIOS?


You can flash it with that; then, if you're worried, go on the EVGA forum, get the one from there, and flash it with their tool.


----------



## Lord of meat

Anyone know how to remove the EKWB RGB bracket from the Vector Trio? The one on the side.


----------



## yzonker

Shawnb99 said:


> What an utter joke it is for every card drop. Hour before the drop and the EVGA site is so bogged down you can't even log in. Just an ****show all around for everything now


Yea I played that game for the 3080ti. Finally won the F5 battle for a 2nd time (counting my Zotac 3090 from last November). I spent about 90 minutes of retrying to get it. Really wanted the 3090 FTW3 HC, but they've never dropped a single card again since the queue reopened. Still 18 sec away... 

Good enough though. I really just wanted a card for a second machine and a backup in case I blow up my Zotac 3090. Lol


----------



## J7SC

yzonker said:


> Yea I played that game for the 3080ti. Finally won the F5 battle for a 2nd time (counting my Zotac 3090 from last November). I spent about 90 minutes of retrying to get it. Really wanted the 3090 FTW3 HC, but they've never dropped a single card again since the queue reopened. Still 18 sec away...
> 
> Good enough though. I really just wanted a card for a second machine and a backup in case I blow up my Zotac 3090. Lol


...I also needed a 2nd card for another application, but I 'crossed the street' and got a 3x8-pin 6900 XT...found both at the same place (4 months apart) at or below MSRP...Both the 3090 Strix and the 6900 XT will be neighbors in a dual-mobo, single-case build if all goes well - hopefully they won't start a brawl in the middle of the night


----------



## GRABibus

J7SC said:


> ...I also needed a 2nd card for another application, but I 'crossed the street' and got a 3x8-pin 6900 XT...found both at the same place (4 months apart) at or below MSRP...Both the 3090 Strix and the 6900 XT will be neighbors in a dual-mobo, single-case build if all goes well - hopefully they won't start a brawl in the middle of the night







Where do you live?
In a GPU stockroom? 😂


----------



## J7SC

GRABibus said:


> Where do you live?
> In a GPU stockroom? 😂


...in a big city by the Pacific...just figured out the days/time when the local outlet of a large national retailer gets deliveries


----------



## Thanh Nguyen

Used Strix for $2,500 or sealed Kingpin HC for $2,900? Is it worth the extra $400?


----------



## EarlZ

des2k... said:


> Was just pointing out what the average OC + undervolt looks like.
> 
> It all depends on how lucky you are with the silicon, plus how cool you can keep your card.
> 
> On my Zotac I can do a lot better than this video, but again, it's not worth much to me since I push it past that on water cooling.


Okay, I kinda feel this result is below average TBH, but it's all about the silicon lottery. I'd guess that even 1850 vs 2100 on the core would result in less than 5fps of difference.


----------



## gfunkernaught

Thanh Nguyen said:


> Used Strix for $2,500 or sealed Kingpin HC for $2,900? Is it worth the extra $400?


sealed kingpin i think


----------



## PLATOON TEKK

1st global on Port Royal for 11900K and 3090

30338

good look on all the good advice in this thread, on the real.


----------



## Nizzen

PLATOON TEKK said:


> 1st global on Port Royal for 11900K and 3090
> 
> 30338
> 
> good look on all the good advice in this thread, on the real.


Nice 
Did you peak at 1600W from the wall?


----------



## Arizor

Hey folks,

I've got my ROG Strix 3090 in an EK water block and have flashed the BIOS, trying a few different EVGA ones, such as the one quoted for KINGPIN.

However, none of the BIOSes seem to respect the power limit; the wattage sits around *330(!), instead of 480,* and looking in GPU-Z it seems my 8-Pin Power #3 is never used.

Any ideas? Running on the Strix BIOS it hits the 480W fine.

edit: Pic of GPU-Z whilst running Unigine benchmark.





GRABibus said:


> You can try the XOC 500W EVGA with ReBar :
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


----------



## GRABibus

Arizor said:


> Hey folks,
> 
> I've got my ROG Strix 3090 in an EK water block and have flashed the BIOS, trying a few different EVGA ones, such as the one quoted for KINGPIN.
> 
> However none of the BIOSs seem to pay attention to the power limit, the wattage sits around *330(!), instead of 480,* and looking in GPU-Z it seems my 8-Pin Power #3 is never used.
> 
> Any ideas? Running on the Strix BIOS it hits the 480W fine.
> 
> edit: Pic of GPU-Z whilst running Unigine benchmark.


Power readings are messed up with EVGA BIOSes on the Strix (and others).
Don't rely on the values you currently read in software such as GPU-Z.


----------



## PLATOON TEKK

Nizzen said:


> Nice
> Did you peak at 1600W from the wall?


Thanks brother. Surprisingly no beep from the UPS and that’s capped at 1500w, psu at 1600w. Was too stoned to remember to save gpuz page. but hopefully I push that score a bit more and will screenshot next time.


----------



## Arizor

GRABibus said:


> Power readings are messed up with EVGA BIOSes on the Strix (and others).
> Don't rely on the values you currently read in software such as GPU-Z.


thanks for the response mate. So how do we track/know our power with these bios flashes? Or do we just ignore?


----------



## GRABibus

Arizor said:


> thanks for the response mate. So how do we track/know our power with these bios flashes? Or do we just ignore?


Powermeter and voltmeter I assume....


----------



## Arizor

Ah I see… don’t have a voltmeter/power meter to hand, will have to grab one.

Can anyone with a ROG Strix post their favoured BIOS paired with Afterburner settings? New to this and would really appreciate a solid starting point


----------



## slayer6288

Nizzen said:


> I use this on a Strix with water cooling, so I don't know.
> GPU-Z power usage shows only 2x 8-pin, but it's drawing the right amount of power anyway, going by a Kill A Watt meter.
> 
> With air cooling, 420-450W is enough if you don't want a jet plane in your case 😆


Since it shows only 2x 8-pin, does this work on the 3090 FE?


----------



## GRABibus

Arizor said:


> Ah I see… don’t have a voltmeter/power meter to hand, will have to grab one.
> 
> Can anyone with a ROG Strix post their favoured BIOS paired with Afterburner settings? New to this and would really appreciate a solid starting point


Strix on air, with the stock cooler repasted with Conductonaut and the memory chips repadded with Thermalright ODYSSEY.

BIOS:
EVGA RTX 3090 FTW3 ULTRA XOC 500W, version 94.02.42.80.27


MSI AB stable gaming settings:


----------



## Arizor

Thanks @GRABibus !


----------



## des2k...

EarlZ said:


> Okay, I kinda feel this result is below average TBH, but it's all about the silicon lottery. I'd guess that even 1850 vs 2100 on the core would result in less than 5fps of difference.


I didn't test 1850 vs 2100 core, but I did test around 2100 core. Most games come out like this:

every +60MHz on the core is about a 3fps gain

every +500 mem is 1fps more

The lower the core OC, the less memory bandwidth is required, so you can't always combine them.

2115 +1000mem vs 2130 +1500mem is a 1fps difference.

After that, 2175 +1500mem shows a 0-1fps gain.

Beyond that, well, you won't be pushing it further anyway: power limits and silicon quality.
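The rule of thumb above can be wrapped in a throwaway helper to sanity-check expectations. The coefficients are des2k's estimates, not benchmark data, and it deliberately ignores the flattening he notes past ~2175:

```shell
# Rough fps estimate from the rule of thumb above:
# ~3 fps per +60 MHz core, ~1 fps per +500 memory offset.
estimated_fps_gain() {  # usage: estimated_fps_gain CORE_MHZ_DELTA MEM_OFFSET_DELTA
  echo $(( $1 * 3 / 60 + $2 / 500 ))
}

estimated_fps_gain 240 0     # 1860 -> 2100 MHz core, same mem: ~12 fps
estimated_fps_gain 60 500    # one step of each: ~4 fps
```

That 240 MHz example is the same 1860-to-2100 jump discussed later in the thread; whether a given game actually scales that linearly is exactly the point under debate.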


----------



## Sheyster

Thanh Nguyen said:


> Used Strix for $2,500 or sealed Kingpin HC for $2,900? Is it worth the extra $400?


Around here any Strix 3090 (new or used) is $3K+, new often $3400. If you have a loop definitely roll with the KP HC.


----------



## des2k...

Sheyster said:


> Around here any Strix 3090 (new or used) is $3K+, new often $3400. If you have a loop definitely roll with the KP HC.


Damn, you have to be crazy to pay $3,400 for a 3090; my $1,500 Zotac 3090 is a bargain compared to today's prices lol


----------



## Nizzen

des2k... said:


> Damn, you have to be crazy to pay $3,400 for a 3090; my $1,500 Zotac 3090 is a bargain compared to today's prices lol


Sold two 2-year-old 2080 Tis and got a 3090 Strix OC White for "free", plus money for lots of beer and whisky 🤙


----------



## GRABibus

I got my used Strix for 2300$ on EBay.
When I see current prices, I am happy.


----------



## J7SC

...I kept my 2x Aorus 2080 Tis (used in a work machine), and added the 3090 Strix OC (equivalent US$ 1800 range before tax) and the Gigabyte 6900 XT (equivalent US$1600 range before tax). Those were for additional and overdue productivity+entertainment machine upgrades via 'opportunistic purchasing' when they became briefly available, several months apart...fyi, no tariffs here


----------



## PLATOON TEKK

Anyone had a chance to bench on Win11? Wondering how performance will compare. I'm running it on a side PC at the moment, and am tempted.

I remember them mentioning the OS would be able to address hardware more efficiently (similar to ReBar).


----------



## Bal3Wolf

des2k... said:


> Damn, you have to be crazy to pay $3,400 for a 3090; my $1,500 Zotac 3090 is a bargain compared to today's prices lol


Haha, yeah, I was happy to pay $2,450 shipped (MSRP) for my Kingpin WC. With current prices on eBay, I wasn't gonna pay a scalper no matter what.


----------



## EarlZ

des2k... said:


> every +60mhz on the core is about 3fps gain


That's about 12fps gained from 1860 to 2100 core, which is a huge number, if only my games scaled that well. On my end, 1860 to 2040MHz is less than 5fps in most of my games.


----------



## Bal3Wolf

I use a curve on my 3090 Kingpin; it stays around 2140-2160 gaming. Benchmarking, it hovers around 2170-2210 when I use my dip switches.


----------



## pat182

There are now people copying the dlss.dll from R6S, putting it into older DLSS games, and seeing image improvements; y'all should give it a try. I'm gonna test Cyberpunk to see if the image and performance improve a little.


----------



## KedarWolf

pat182 said:


> There are now people copying the dlss.dll from R6S, putting it into older DLSS games, and seeing image improvements; y'all should give it a try. I'm gonna test Cyberpunk to see if the image and performance improve a little.


Rainbow Six Siege uses DLSS 2.2; the DLL can be used in older games to reduce artifacts. Download found here.
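For anyone trying this, the swap itself is just a file copy of the DLSS runtime DLL (`nvngx_dlss.dll`) in the game's folder. A sketch with made-up paths; dummy temp files stand in for the real DLLs here so the steps are concrete, and the original always gets backed up first:

```shell
# Stand-ins for the old game's folder and R6S's folder (paths are illustrative).
game_dir=$(mktemp -d)
siege_dir=$(mktemp -d)
echo "dlss 2.1" > "$game_dir/nvngx_dlss.dll"     # the game's shipped DLL
echo "dlss 2.2" > "$siege_dir/nvngx_dlss.dll"    # the newer DLL from Siege

# 1) Back up the shipped DLL, 2) copy Siege's newer one over it.
cp "$game_dir/nvngx_dlss.dll" "$game_dir/nvngx_dlss.dll.bak"
cp "$siege_dir/nvngx_dlss.dll" "$game_dir/nvngx_dlss.dll"
```

Restoring the `.bak` copy undoes the swap if the game misbehaves or an update rejects the mismatched DLL.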


----------



## des2k...

pat182 said:


> There are now people copying the dlss.dll from R6S, putting it into older DLSS games, and seeing image improvements; y'all should give it a try. I'm gonna test Cyberpunk to see if the image and performance improve a little.


Tried CP2077; can't really pick out any differences, quality or FPS. That's at 4K Psycho, Performance DLSS.
Watched a YouTube comparison of CP2077 with the new and old DLSS; also can't pick out any differences in the image.


----------



## PLATOON TEKK

Bal3Wolf said:


> I use a curve on my 3090 Kingpin; it stays around 2140-2160 gaming. Benchmarking, it hovers around 2170-2210 when I use my dip switches.


Do you use the dip switches in combo with the classy tool? Unless I'm mistaken (which I might be), I was under the impression that the dip switches are just a "physical" voltage adjustment.

Or do the dip switches allow more voltage?


----------



## Bal3Wolf

PLATOON TEKK said:


> Do you use the dip switches in combo with the classy tool? Unless I'm mistaken (which I might be), I was under the impression that the dip switches are just a "physical" voltage adjustment.
> 
> Or do the dip switches allow more voltage?


I haven't found a copy of the classy tool yet that works with the 3090 Kingpin HC; I think I have to flash the 1kW hybrid BIOS to use it, and I haven't done that yet. I was under the impression that with the dips you can increase voltage by 100mV and turn on a load-line setting; not a lot of info out there though.


----------



## Arizor

What's a safe voltage for hours of gaming, in your experience, folks?

I find CP2077 is the crucible for any overclock; it will crash within seconds or minutes if I push beyond a 2010MHz overclock on my Strix. I can go +1200 memory and lock the voltage at 1.081 at 2010MHz and it'll run all day long, but I'm not sure how healthy that is for the card (under water, temp hits a max of 52C). What do you folks use in terms of voltage?


----------



## Bal3Wolf

I have a curve using 1.075V at 2140-2160 that will hold all day gaming, with +1500 on memory.


----------



## Arizor

Bal3Wolf said:


> I have a curve using 1.075V at 2140-2160 that will hold all day gaming, with +1500 on memory.


😲that’s awesome, what card (your bio says a 1080 but I’m assuming it’s not that)?


----------



## Bal3Wolf

Yeah, I need to update it. I've got an EVGA 3090 Kingpin Hydro Copper paired with a 3900X, but I have a 5950X coming in soon, plus 32 gigs of Crucial DDR4 3600MHz memory clocked at 3800 with tuned timings.


----------



## Arizor

Nice!


----------



## Bal3Wolf

Updated my sig, and here are some pics I took recently. She's not perfect but runs great; the case has had some wear over the years. One pic looks like there's dust on the top, but there isn't; I had just cleaned it all up and it took a weird pic for some reason, and I only noticed once I'd moved the PC, lol, so I never retook it.


----------



## mirkendargen

Arizor said:


> What’s a safe voltage for hours of gaming, in your experience folks?
> 
> I find CP2077 is the crucible for any overclock, it will crash in a few seconds or minutes if I push beyond a 2010mhz overclock on my Strix; I can go +1200 memory and lock the voltage at 1.081 at 2010mhz and it’ll run all day long, but not sure how healthy that is for the card (under water, temp hits max of 52C). What do you folk use in terms of voltage?


If the card will let you run it without an external voltage controller, it's completely safe.


----------



## Arizor

Thanks @mirkendargen .

holy crap @Bal3Wolf do I count 3 rads?! Nice.


----------



## Bal3Wolf

Arizor said:


> Thanks @mirkendargen .
> 
> holy crap @Bal3Wolf do I count 3 rads?! Nice.


Two right now; I had 3 in the past. Right now it's a 420 in front and a 360 in the bottom, with the top exhausting heat via 3 EK Vardar fans.


----------



## mirkendargen

Bal3Wolf said:


> Updated my sig, and here are some pics I took recently. She's not perfect but runs great; the case has had some wear over the years. One pic looks like there's dust on the top, but there isn't; I had just cleaned it all up and it took a weird pic for some reason, and I only noticed once I'd moved the PC, lol, so I never retook it.
> 
> View attachment 2514381
> 
> 
> View attachment 2514382
> 
> 
> View attachment 2514383


I have this exact case too, and used to run the same rad setup before I switched to giant external rads in the basement. The metal is pretty flimsy but the size and configurability is amazing.


----------



## des2k...

Arizor said:


> 😲that’s awesome, what card (your bio says a 1080 but I’m assuming it’s not that)?


My Zotac 3090 on the XOC vBIOS also holds 2160 at 1075mV with +1500 mem in games: Control, CP2077, etc.

But 2130 at 1040mV +1500mem is only 1fps lower,
and 2115 at 1018mV +1000mem is 2fps lower in those 2 games.

Usually those high voltages come from when I was looping TE GT2 for 2-4h, which ends up stable in games.
I'm curious whether games can actually go lower on the core voltage; they're not 600W loads like TE.

And... I got bored testing CP2077. Room is getting hot, AC is off lol. 29C water, 40C core, up to 500W,
+1518mem:
2160 at 1043mV is a hard crash at 5 mins
2160 at 1050mV holds; stopped after 40 mins


----------



## Bal3Wolf

mirkendargen said:


> I have this exact case too, and used to run the same rad setup before I switched to giant external rads in the basement. The metal is pretty flimsy but the size and configurability is amazing.


Yeah, for the $120 shipped I paid (picked it up at my local Walmart), it's been a great case. I got mine in Dec 2016, so she'll be 6 years old soon. It was one of the few cases out back then that was even remotely water-cooling friendly without needing to do a lot of modding yourself.


----------



## Arizor

That's incredible. The Kingpin BIOS (then updated to the ReBar BIOS via Precision X1) worked well on my Strix, but since I cannot reliably monitor power on it (the 3rd pin shows nothing in any monitoring software) I got worried running it, so I flashed the FTW instead.



des2k... said:


> My zotac 3090 xoc vbios also holds 2160 1075mv +1500mem on games, control , cp2077,etc.
> 
> But 2130 1040mv +1500mem is only 1fps lower
> and 2115 1018mv +1000mem is 2fps lower on those 2games
> 
> Usually those high voltages are when I was looping TE GT2 2,4h which ends up stable in games.
> I'm curious if games actually can go lower on the core voltage, they are not 600w like TE.
> 
> And... I got bored testing CP2077, room is getting hot, AC is off lol, 29c water 40c core, up to 500w
> +1518mem
> 2160 1043mv is a hard crash at 5mins
> 2160 1050mv holds, stopped after 40mins


----------



## Bal3Wolf

Yeah, I see some weird issues with my curve; it will drop into the 1900s. Seems like I hit my power limit at times, or I'm not doing my curve correctly (pretty new to that). Testing less voltage now, 1043mV.


----------



## PLATOON TEKK

Bal3Wolf said:


> I haven't found a copy of the classy tool yet that works with the 3090 Kingpin HC; I think I have to flash the 1kW hybrid BIOS to use it, and I haven't done that yet. I was under the impression that with the dips you can increase voltage by 100mV and turn on a load-line setting; not a lot of info out there though.


You should have a working tool. I'll send you a copy of what I'm using when I get home; found it on the EVGA forums. In my case, the tool works with the 520W BIOS too; don't know how effective it is though.
Yeah, it's oddly hard to find info on the switches.

Also, tested Win11 and dropped about 400 points, so I'm assuming it's far from optimized at the moment.


----------



## newls1

Well, I gambled my luck today... I cancelled my order with EKWB for my FTW3 full-cover water block with active backplate, as it was at least another 2-3 weeks out, and placed my order with Bykski, as theirs was released today for this GPU. I've never owned a Bykski product, so I'm hoping their quality is at least on par with EKWB's, and I'm hoping I don't have to play any games with thermal pad thickness; hopefully what it ships with will provide optimal results. I'd ask if anyone has first-hand experience with this setup yet, but it was only just released today for the FTW3 cards. Should I order 1-1.5mm pads just in case? And what is your opinion of Bykski products?
Here is what I ordered: Bykski Full Coverage GPU Water Block w/ Integrated Active Backplate for EVGA RTX 3090 FTW3 Ultra Gaming (N-EV3090FTW3-TC)


----------



## Bal3Wolf

PLATOON TEKK said:


> You should have a working tool. I'll send you a copy of what I'm using when I get home; found it on the EVGA forums. In my case, the tool works with the 520W BIOS too; don't know how effective it is though.
> Yeah, it's oddly hard to find info on the switches.
> 
> Also, tested Win11 and dropped about 400 points, so I'm assuming it's far from optimized at the moment.


I managed to get a copy from a member of the EVGA forums; haven't played with it yet.


----------



## EarlZ

KedarWolf said:


> Rainbow Six Siege uses DLSS 2.2; the DLL can be used in older games to reduce artifacts. Download found here.


Does this work with DLSS 1.0 games?

EDIT: NVM, limited to games that have 2.0.


----------



## des2k...

newls1 said:


> Well, I gambled my luck today... I cancelled my order with EKWB for my FTW3 full-cover water block with active backplate, as it was at least another 2-3 weeks out, and placed my order with Bykski, as theirs was released today for this GPU. I've never owned a Bykski product, so I'm hoping their quality is at least on par with EKWB's, and I'm hoping I don't have to play any games with thermal pad thickness; hopefully what it ships with will provide optimal results. I'd ask if anyone has first-hand experience with this setup yet, but it was only just released today for the FTW3 cards. Should I order 1-1.5mm pads just in case? And what is your opinion of Bykski products?
> Here is what I ordered: Bykski Full Coverage GPU Water Block w/ Integrated Active Backplate for EVGA RTX 3090 FTW3 Ultra Gaming (N-EV3090FTW3-TC)


There are a few here getting great deltas with Bykski, so it should work fine for you. Probably better quality than EK.

My 1080 Ti FTW3 EK block was fine, versus the rushed jobs they're doing on these 3090 blocks.


----------



## newls1

des2k... said:


> There are a few here getting great deltas with Bykski, so it should work fine for you. Probably better quality than EK.
> 
> My 1080 Ti FTW3 EK block was fine, versus the rushed jobs they're doing on these 3090 blocks.


Man, I hope so... I'm just worried I'll be playing the thermal pad guessing game. Looking at their instructions online, they aren't even applying thermal pads to the rear core area to cool the caps there, and I would very much like to apply a fat square pad there but don't know what thickness to use... Just want to set this up the correct way the first time...


----------



## PLATOON TEKK

I did what I aimed to do, still at 10C instead of -15C.

Fastest non-LN2 Port Royal.

Edit: link is the new score; was surprisingly able to squeeze another 150 points in.



Bal3Wolf said:


> I managed to get a copy from a member of the EVGA forums; haven't played with it yet.


Boom. Let me know if you still need it or it doesn’t work for some reason. 🤘🏻


----------



## J7SC

newls1 said:


> Man, I hope so... I'm just worried I'll be playing the thermal pad guessing game. Looking at their instructions online, they aren't even applying thermal pads to the rear core area to cool the caps there, and I would very much like to apply a fat square pad there but don't know what thickness to use... Just want to set this up the correct way the first time...


...I'm finally about to take the EK block off my 3090 Strix and mount a new Phanteks one...not to mention mount a Bykski block on my 6900 XT...never mind various custom or stock backplates...following the advice of others as well as my own experience, I loaded up on:

a.) various thickness SKUs of Thermalright 12.8 W/mK thermal pads
b.) some jars of 10.0 W/mK thermal putty recommended by another user in this thread. This is primarily for VRAM or backplates etc. where I'm not sure about the exact pad thickness (if any), as putty 'adapts' to the available space
c.) various thermal greases (Kryonaut, Gelid GC-Extreme, Noctua etc...)



Spoiler


----------



## ahnafakeef

Hello everyone.

Is the card still power limited at 480W? Is it safe to use the EVGA Kingpin BIOS with a ROG Strix cooler to get a higher power limit?

If yes, what is the maximum voltage I should add to the Core Voltage slider on Afterburner?

Basically looking to get the highest stable overclock I can get on the stock cooler of the ROG Strix version. Would the Kingpin BIOS be a good starting point, or should I try a different BIOS?

Thank you.

EDIT:

Okay so I flashed the BIOS but now the max resolution being shown is 1920x1080 and the display connector is being shown as DVI-PC when it should be HDMI.

How do I fix this? Thank you.


----------



## GQNerd

ahnafakeef said:


> Hello everyone.
> 
> Is the card still power limited at 480W? Is it safe to use the EVGA Kingpin BIOS with a ROG Strix cooler to get a higher power limit?
> 
> If yes, what is the maximum voltage I should add to the Core Voltage slider on Afterburner?
> 
> Basically looking to get the highest stable overclock I can get on the stock cooler of the ROG Strix version. Would the Kingpin BIOS be a good starting point, or should I try a different BIOS?
> 
> Thank you.
> 
> EDIT:
> 
> Okay so I flashed the BIOS but now the max resolution being shown is 1920x1080 and the display connector is being shown as DVI-PC when it should be HDMI.
> 
> How do I fix this? Thank you.


Kingpin only has 1 HDMI port vs 2x HDMI ports on the Strix, so try the other HDMI port to see if that resolves your resolution issue.

As for running the KP bios on the stock cooler, I personally wouldn't, as you'd struggle to control the extra heat being generated. Also, assuming you're talking about the 1000w KP bios, your vram would always be running at max clocks, generating further heat even when not under load... But you do you; just hope you can tolerate those Strix fans at 100% 24/7


----------



## ahnafakeef

Miguelios said:


> Kingpin only has 1 HDMI port vs 2x HDMI ports on the Strix, so try the other HDMI port to see if that resolves your resolution issue.
> 
> As for running the KP bios on the stock cooler, I personally wouldn’t as you’d struggle to control the extra heat being generated.. Also, assuming you’re talking about the 1000w KP bios, your vram would always be running at max clocks, generating further heat even when not under load...But do you, just hope you can tolerate those Strix fans at 100% 24/7


I don’t necessarily want to use the Kingpin bios. It’s the only one I found on TPU’s website that has a higher power limit than 480w. 

I dislike heat or noise in my system as much as the next person. So please suggest a different, more suitable bios for my card that can give me a higher power limit. Because I feel that I’m still not utilising the thermal and voltage limits that my card can withstand. 

Is it at all possible for someone to simply edit the base Strix bios with a higher power limit? I remember it being possible in the days of the original Titan. Not sure if that practice is still in vogue. But that could really help as a starting point. 

Thank you.


----------



## Arizor

EVGA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

This is a good one for Strix @ahnafakeef


----------



## ahnafakeef

Arizor said:


> EVGA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> This is a good one for Strix @ahnafakeef


Thank you. 

But that is still only 500w, if I’m not wrong. I’m already pulling 480w with the stock bios. Is there anything that has a higher power limit in order to ensure that I’m definitely not hitting a wall in terms of power limit?


----------



## GQNerd

ahnafakeef said:


> Thank you.
> 
> But that is still only 500w, if I’m not wrong. I’m already pulling 480w with the stock bios. Is there anything that has a higher power limit in order to ensure that I’m definitely not hitting a wall in terms of power limit?


There is a 520w KP bios out there. That’s about as high as you’ll find, aside from the 1000w. If you end up putting a water block on your card, I’d use the 1000w. I was using it as a daily when I had my Strix


----------



## ahnafakeef

Miguelios said:


> There is a 520w KP bios out there. That’s about as high as you’ll find, aside from the 1000w. If you end up putting a water block on your card, I’d use the 1000w. I was using it as a daily when I had my Strix


Am I correct in assuming that if 480w is limiting, then 520w will be almost just as limiting and won’t let me max out the voltage sliders (so that I can attain a higher core clock)?

Is it even possible to keep the card within its thermal limits with the voltage slider maxed out, assuming there is a way to overcome the power limit issue? 

After having tried 123% power limit and +150 on the core with no voltage increase, I feel 2100 on the core clock is a fairly realistic expectation. What kind of clocks were you hitting on your Strix with the stock cooler?

Thank you.
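For reference, the math behind those numbers is just percentage scaling of the board's default power target. A quick sketch (the 390 W Strix default TGP used below is an assumption inferred from the 480 W ≈ 123 % figures quoted in this thread, not a spec I've verified):

```python
def slider_to_watts(default_tgp_w: float, slider_pct: float) -> float:
    """Afterburner's power-limit slider scales the card's default power target."""
    return default_tgp_w * slider_pct / 100.0

# 123% of an assumed 390 W default lands right at the ~480 W cap discussed above
print(slider_to_watts(390, 123))  # → 479.7
```

So a 520w bios only buys about 8% more headroom over 480w, which is why it feels "almost just as limiting".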


----------



## gfunkernaught

checking in, still no 1kw rebar bios yet right?


----------



## Arizor

The voltage will max out on my Strix at 1.1v regardless of the FTW or Kingpin BIOS. Using the Kingpin I can hit 2100 and +1200 mem all day for benchmarks (under water), but my litmus test for game stability is CP2077, and there my max stable is the Kingpin BIOS @ 2040, +1200 mem.



ahnafakeef said:


> Am I correct in assuming that if 480w is limiting, then 520w will be almost just as limiting and won’t let me max out the voltage sliders (so that I can attain a higher core clock)?
> 
> Is it even possible to keep the card within its thermal limits with the voltage slider maxed out, assuming there is a way to overcome the power limit issue?
> 
> After having tried 123% power limit and +150 on the core with no voltage increase, I feel 2100 on the core clock is a fairly realistic expectation. What kind of clocks were you hitting on your Strix with the stock cooler?
> 
> Thank you.


----------



## des2k...

gfunkernaught said:


> checking in, still no 1kw rebar bios yet right?


People here had that vbios for a while but they are not sharing it by choice or maybe they had to sign NDAs.


----------



## Arizor

I used the 1kw KP bios available and then was able to update it via EVGA X1 to rebar. Though maybe I’m missing something?


----------



## ViRuS2k

Arizor said:


> I used the 1kw KP bios available and then was able to update it via EVGA X1 to rebar. Though maybe I’m missing something?


Rip it out with GPU-Z and post it for people to use.
But I think, from past reports, that although you believe it's the 1kw bios that was updated to rebar, you'll find your 1kw bios was actually replaced with a 520w rebar bios instead..


----------



## rhyno

yup


----------



## Arizor

Ah fair enough, bugger.


----------



## KedarWolf

Deleted.


----------



## ALSTER868

Hey guys, which of these BIOSes here is the 520w one with ReBar?









TechPowerUp VGA BIOS Collection (www.techpowerup.com) — extensive repository of graphics card BIOS image files, with submissions categorized by GPU vendor, type, and board partner variant.


----------



## yzonker

reBar enabled,









EVGA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## CallsignVega

Ugg, I had the 1,000 watt EVGA Kingpin BIOS on my MSI Suprim X 3090 and wanted to try Rebar. Precision X proceeded to load some garbage 450w BIOS in its place. Is the above 520w BIOS the highest wattage Rebar BIOS out there?


----------



## yzonker

Yup unless you can con a KP owner out of the new one or buy a KP.


----------



## CallsignVega

Also my max fan speed went from 3K RPM to 2K RPM with the above 520w Kingpin BIOS.


----------



## kx11

Just ordered Optimus GPU block for my Strix, do u guys think 240mm rad is enough ?! it will cool the backplate too











Absolute GPU Block - ASUS Strix 3080/Ti/3090 (optimuspc.com): "The Absolute block is our all-out performance design, created to achieve maximum cooling on all areas of the NVIDIA RTX 3080, 3080 Ti and 3090 Strix GPUs from ASUS."


----------



## GRABibus

CallsignVega said:


> Also my max fan speed went from 3K RPM to 2K RPM with the above 520w Kingpin BIOS.


use this one :








EVGA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





fans are at 3k rpm and less heat than with 520W KP bios.

Definitely the best BIOS I tested for my Strix.


----------



## J7SC

kx11 said:


> Just ordered Optimus GPU block for my Strix, do u guys think 240mm rad is enough ?! it will cool the backplate too
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Absolute GPU Block - ASUS Strix 3080/Ti/3090 (optimuspc.com): "The Absolute block is our all-out performance design, created to achieve maximum cooling on all areas of the NVIDIA RTX 3080, 3080 Ti and 3090 Strix GPUs from ASUS."


...IMO no, 240 mm is marginal at best for around 500W GPU (+ CPU?), especially if you game for longer periods - it will likely heat-soak fairly quickly


----------



## Nizzen

CallsignVega said:


> Also my max fan speed went from 3K RPM to 2K RPM with the above 520w Kingpin BIOS.


Is aircooling legal on the 3090? 😜


----------



## kx11

J7SC said:


> ...IMO no, 240 mm is marginal at best for around 500W GPU (+ CPU?), especially if you game for longer periods - it will likely heat-soak fairly quickly


No i already have a 360mm hooked to the cpu, i hope i got enough room for that 360mm rad since my tank is front mounted (bitspower for O11DXL)


----------



## jura11

kx11 said:


> Just ordered Optimus GPU block for my Strix, do u guys think 240mm rad is enough ?! it will cool the backplate too
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Absolute GPU Block - ASUS Strix 3080/Ti/3090 (optimuspc.com): "The Absolute block is our all-out performance design, created to achieve maximum cooling on all areas of the NVIDIA RTX 3080, 3080 Ti and 3090 Strix GPUs from ASUS."


I would say a 360mm radiator is the lowest I would consider for cooling an RTX 3090 and CPU. I have built a loop with a single 360mm radiator cooling an RTX 3090 and CPU, and I wouldn't do that again

Hope this helps 

Thanks, Jura
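The advice above tracks the usual community rule of thumb of very roughly 100 W of heat per 120 mm radiator section at quiet fan speeds (that figure is a loose convention, not a spec; thick rads and fast fans handle considerably more per section). A quick sketch of the sizing math:

```python
import math

def min_rad_mm(total_heat_w: float, watts_per_120mm: float = 100.0) -> int:
    """Smallest radiator length, in whole 120 mm sections, for a heat load."""
    sections = math.ceil(total_heat_w / watts_per_120mm)
    return sections * 120

# ~500 W GPU plus ~150 W CPU wants roughly 840 mm of radiator at quiet speeds,
# which is why a lone 240 or 360 heat-soaks under sustained gaming load
print(min_rad_mm(650))  # → 840
```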


----------



## J7SC

jura11 said:


> I would say 360mm radiator is lowest what I would consider for cooling RTX 3090 and CPU as well, I have built loop with single 360mm radiator and RTX 3090 and CPU but I wouldn't do that again
> 
> Hope this helps
> 
> Thanks, Jura


..yeah, back in the good ol' days, a 360 rad was 'ok' for an oc'ed 4c/8t CPU and a 250 W GPU, but these days, not so much. I'm now using triple 480x64 rads to cool a 5950X and 3090 Strix OC (520W vBios)...that said, I use TT Core P5 and P8 cases which allow for much more flexibility re. mounting of multiple big rads etc.


----------



## CallsignVega

Nizzen said:


> Is aircooling legal on the 3090? 😜


Yup. 3090's are almost maxed out on air anyway, water gives like 1-2% better performance. Not even worth it anymore, unless you want less sound. Although I have my 3090 in a sound controlled compartment, so the noise isn't an issue.

I come from an era when ambient water would give 5-10%+ boost in CPU/GPU performance. Those days are long gone.


----------



## kx11

jura11 said:


> I would say 360mm radiator is lowest what I would consider for cooling RTX 3090 and CPU as well, I have built loop with single 360mm radiator and RTX 3090 and CPU but I wouldn't do that again
> 
> Hope this helps
> 
> Thanks, Jura


with the backplate cooling included i think i need another 360mm rad, that thing hits 90c without a block if OC, undervolting helps keeping the temps down but it's noisy


----------



## jura11

J7SC said:


> ..yeah, back in the good ol' days, a 360 rad was 'ok' for an oc'ed 4c/8t CPU and a 250 W GPU, but these days, not so much. I'm now using triple 480x64 rads to cool a 5950X and 3090 Strix OC (520W vBios)...that said, I use TT Core P5 and P8 cases which allow for much more flexibility re. mounting of multiple big rads etc.


I recently built a loop with a Palit RTX 3090 GamingPro and a 10900k using a single 360mm radiator, and temperatures were neither good nor bad: GPU temperatures would be in the mid 50's to low 60's with fairly aggressive fan speeds, and I didn't OC the CPU too much because its temperatures would be in the early 90's to low 100's 🙄

There is no such thing as overkill with radiators. Personally I'm running 4*360mm radiators plus a MO-RA3 360mm, and planning to get a 2nd MO-RA3 360mm or 420mm, but I'm cooling 2 RTX 3090 GamingPro's, both running the Kingpin XOC 1000W BIOS, plus a 5950X as well

Hope this helps 

Thanks, Jura


----------



## jura11

kx11 said:


> with the backplate cooling included i think i need another 360mm rad, that thing hits 90c without a block if OC, undervolting helps keeping the temps down but it's noisy


Optimus waterblocks are some of the best waterblocks on the market, and if this one is like their FTW3 waterblock then you shouldn't have problems cooling your Strix with a fairly good OC on core or VRAM

Assuming you have the Lian Li O11 Dynamic XL? That case does look nice, but its cooling potential, or rather airflow, is among the worst; if you haven't removed the filters, remove them, along with all unused PCI-E slot covers

If not, I would consider external radiators such as the MO-RA3 360mm or 420mm

Hope this helps 

Thanks, Jura


----------



## Arizor

kx11 said:


> Just ordered Optimus GPU block for my Strix, do u guys think 240mm rad is enough ?! it will cool the backplate too
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Absolute GPU Block - ASUS Strix 3080/Ti/3090 (optimuspc.com): "The Absolute block is our all-out performance design, created to achieve maximum cooling on all areas of the NVIDIA RTX 3080, 3080 Ti and 3090 Strix GPUs from ASUS."


I'm using one EK 240mm to cool my Strix GPU (using this kit: EK Classic P240 D-RGB Liquid Cooling Kit Black Nickel [3831109834527] | PC Case Gear Australia), and with push/pull (combining the included fans with Noctua fans) my GPU tops out at 53C OC'd, so you should be fine.


----------



## J7SC

jura11 said:


> I recently built loop with Palit RTX 3090 GamingPro and 10900k where I used single 360mm radiator and temperatures hasn't been bad or good, GPU temperatures would be in mid 50's to low 60's with fairly aggressive fan speeds and CPU temperatures, I didn't OC too much because temperatures would be in early 90's to low 100's 🙄
> 
> There is no such overkill with radiators, personally I'm running 4*360mm radiators plus MO-ra3 360mm and planning get 2nd MO-ra3 360mm or 420mm but I'm cooling 2 RTX 3090 GamingPro's and both are running Kingpin XOC 1000W BIOS and 5950X as well
> 
> Never had a chance to built loop in Thermaltake cases and personally I don't or won't touch Thermaltake stuff, that's my personal preference in that
> 
> Hope this helps
> 
> Thanks, Jura


I agree, one can never have enough cooling - especially as many of the latest gen GPUs and CPUs all use boost algorithms with temp as one of the major inputs...while things may not throttle until a higher temp, they certainly boost at a lower one. 

As to Thermaltake, I know it has its detractors, but the things I do have from them all have been very good and long-lasting. Their Core P series cases (I have a P5 and P8) are fairly unique and allow me to do what I want to do with custom mods on them, including two or even three mobos on one of them (the third on the back for Linux).

One of these days, I'll put together a system with dual MoRa 420s w/ 200 mm fans...


----------



## yzonker

jura11 said:


> I recently built loop with Palit RTX 3090 GamingPro and 10900k where I used single 360mm radiator and temperatures hasn't been bad or good, GPU temperatures would be in mid 50's to low 60's with fairly aggressive fan speeds and CPU temperatures, I didn't OC too much because temperatures would be in early 90's to low 100's 🙄
> 
> There is no such overkill with radiators, personally I'm running 4*360mm radiators plus MO-ra3 360mm and planning get 2nd MO-ra3 360mm or 420mm but I'm cooling 2 RTX 3090 GamingPro's and both are running Kingpin XOC 1000W BIOS and 5950X as well
> 
> Never had a chance to built loop in Thermaltake cases and personally I don't or won't touch Thermaltake stuff, that's my personal preference in that
> 
> Hope this helps
> 
> Thanks, Jura


Yup, I started with a single 360 with Noctua F12's. At full speed (1500 rpm), gpu core temp would hit about 55C at just 390w (only gpu in the loop). Ended up adding a 280mm and then finally a 480mm 60mm thick external. Finally happy with it. 5-6C air to water delta running 500w on the gpu. CPU is in the loop now of course too.


----------



## geriatricpollywog

Thanh Nguyen said:


> Used Strix $2500 or sealed kingpin hc $2900? Worth $400?


There are currently no 3rd party waterblocks for the Kingpin. All the EVGA Hydrocopper blocks have homes.


----------



## gfunkernaught

delete


----------



## mardon

ahnafakeef said:


> I don’t necessarily want to use the Kingpin bios. It’s the only one I found on TPU’s website that has a higher power limit than 480w.
> 
> I dislike heat or noise in my system as much as the next person. So please suggest a different, more suitable bios for my card that can give me a higher power limit. Because I feel that I’m still not utilising the thermal and voltage limits that my card can withstand.
> 
> Is it at all possible for someone to simply edit the base Strix bios with a higher power limit? I remember it being possible in the days of the original Titan. Not sure if that practice is still in vogue. But that could really help as a starting point.
> 
> Thank you.


This probably won't go down well on here, as overclocking is what we do, but seriously: outside of benchmarks, anything over 450w brings diminishing gains. Boot up Cyberpunk, Metro or Days Gone and keep an eye on your stock FPS, then apply your current max OC. You'll likely see a massive increase in power consumption and little to no FPS difference.
I'm shunt modded and have 2 profiles, a max undervolted overclock @ 1.05v and an undervolt at 0.950v, and there's very, very little between the two in real-world performance.
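One way to see those diminishing returns for yourself is to log power draw and clocks while a game runs. A minimal sketch that shells out to nvidia-smi and parses its CSV output (the query fields are standard nvidia-smi options; actually sampling obviously requires an NVIDIA GPU and driver):

```python
import subprocess

def parse_sample(csv_line: str) -> tuple[float, int]:
    """Parse one 'power.draw, clocks.sm' CSV line, e.g. '452.31 W, 2040 MHz'."""
    power_s, clock_s = csv_line.split(",")
    return float(power_s.split()[0]), int(clock_s.split()[0])

def sample_gpu() -> tuple[float, int]:
    """Take one power/clock sample from the first GPU via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw,clocks.sm",
         "--format=csv,noheader"],
        text=True,
    )
    return parse_sample(out.strip())

# e.g. log a sample per second at stock, then again at your max OC,
# and compare watts against the in-game FPS counter:
# watts, mhz = sample_gpu()
```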


----------



## ahnafakeef

mardon said:


> This probably won't go down well on here as overclocking is what we do. Seriously other than benchmarks anything over 450w brings diminishing gains. Boot up Cyberpunk, Metro or Days gone and keep an eye on your stock FPS, then apply your current max OC. You'll likely see a massive increase in power consumption and little to no FPS difference.
> I'm shunt modded and have 2 profiles Max Undervolted Overclock @ 1.05v and then an undervolt of 0.950 there's very very little between the two in real world performance.


My card actually might be a bad overclocker. On the stock Strix BIOS, the card is unstable in AC Valhalla with PL at 123 and core clock at +98 in Afterburner with everything else untouched. Ran through Superposition just fine with +150 core, but that was just one run.

Not sure if I'm doing anything wrong and I'm currently playing on stock settings. Please advise as to how I should proceed to attain the highest core clock my card can run. Thank you.


----------



## bigjdubb

I have a chance to get an Alienware R11 with a 10900kf with 64gb of ram and a 3090 for $2500. This 3090 is not an FE or custom model, it appears to be some sort of OEM model that is fairly small in size. See : Aurora R11









It's only 2 fans and 10.5" long so it will absolutely be limited on air. Has anyone come across one of these? I'm wondering if a waterblock will fit it.

I will part out the rest of it, no desire to have a 10900k or 64 gigs of ram, but I only want the 3090 if I can put a block on it. I figure this is the best place to find someone that may have had their hands on one.


----------



## J7SC

bigjdubb said:


> I have a chance to get an Alienware R11 with a 10900kf with 64gb of ram and a 3090 for $2500. This 3090 is not an FE or custom model, it appears to be some sort of OEM model that is fairly small in size. See : Aurora R11
> View attachment 2514705
> 
> 
> It's only 2 fans and 10.5" long so it will absolutely be limited on air. Has anyone come across one of these? I'm wondering if a waterblock will fit it.
> 
> I will part out the rest of it, no desire to have a 10900k or 64 gigs of ram, but I only want the 3090 if I can put a block on it. I figure this is the best place to find someone that may have had their hands on one.


Not sure how helpful this is as I don't know for sure, especially w/o seeing the naked PCB, but when Ampere came out, there was:

a.) the FE model 
b.) the 'reference' design (different from the FE version), and 
c.) custom PCB

It is likely that this card is ' b.) ', especially if it has 2x 8 pin and I do believe that there are water blocks for the reference design (I ran across when searching for custom PCB water blocks; Bykski ?).


----------



## bigjdubb

I did find "reference pcb" blocks, and there are many models that fall into that category. I need to find some better images of the one in the alienware, here is what I have















I want to take that 3090 out and stick a 2070 I have in it and sell that machine. I thought about sticking my 2080ti in it but I don't think it will fit.


----------



## des2k...

bigjdubb said:


> I have a chance to get an Alienware R11 with a 10900kf with 64gb of ram and a 3090 for $2500. This 3090 is not an FE or custom model, it appears to be some sort of OEM model that is fairly small in size. See : Aurora R11
> View attachment 2514705
> 
> 
> It's only 2 fans and 10.5" long so it will absolutely be limited on air. Has anyone come across one of these? I'm wondering if a waterblock will fit it.
> 
> I will part out the rest of it, no desire to have a 10900k or 64 gigs of ram, but I only want the 3090 if I can put a block on it. I figure this is the best place to find someone that may have had their hands on one.


It's the typical 2x8pin reference, so you should have a few compatible blocks to choose from. The new EK block (wood version lol) is a new design compatible with multiple 2x8pin reference cards.


----------



## bigjdubb

I just found the Dell 3090 on the Corsair waterblock compatibility checker, it says it is reference. I think I'm going to go for it, hopefully I can get a bit of money for the rest of the machine.


----------



## des2k...

bigjdubb said:


> I just found the Dell 3090 on the Corsair waterblock compatibility checker, it says it is reference. I think I'm going to go for it, hopefully I can get a bit of money for the rest of the machine.


Forgot where I got this file, but here's the pcb


----------



## lmfodor

Hi, I just bought a new Strix 3090 White edition. I'm coming off a TUF 3080 and came here to learn how to improve my performance. All the mods I saw in this thread are amazing. I did not know that there was a 1000W BIOS, nor that the EVGA BIOS works well on the Strix. Well, I'm not that advanced; I'm actually a newbie, and I have two quick questions that should be easy for you to answer.

I currently have a Seasonic PRIME TX-850 Titanium power supply. So even though the power efficiency is good and I was able to run tests with the GPU voltage unlocked, I'm not sure if my PSU will hold up my entire system well, considering I'm using the latest ASUS V4 BIOS, which maxes out at 481W. I managed to get a score of 20660 in Time Spy with my 5900x, 10671 in Time Spy Extreme and 14405 in Port Royal, at +135 Core and +1300 Mem.

But I noticed that my memory temperatures are limiting me, although the GPU reaches ~75c, which is the limit, without even using 480W; at 380W without Afterburner I am at 65c. I see that most of you went to a waterblock, and I wonder if that is the way to go for this GPU, because honestly I have no experience with custom loops; I only use an EK 360 AIO for my CPU, so my ideal world would be an AIO for this card, which I don't think exists. I mean, I watched a lot of videos on how to get started with water cooling... but I don't see myself handling and bending tubes and checking for leaks; I prefer something plug and play like an AIO.

Also, I've never opened a GPU. I'm not worried about losing the warranty, but I'd actually rather change the thermal pads than go to water cooling (for now). I also see some people putting a fan on top of the back plate. Can I change just the memory thermal pads by removing the back plate, or do I have to disassemble the entire card? I bought several packs of Thermalright 12.8 W/mK thermal pads, 85x45x1mm and x0.5mm, but I forgot to buy 2mm, and I also have a lot of thermal paste to try (the Extreme, Hydronaut, MX-5, Thermalright TFX...), so I could apply any of those.

So the first question is: how could I improve my cooling while staying on air? Is there a block with some kind of AIO, like the Kingpin's, that comes with a built-in radiator and pump?

The second is easier: can anyone share any undervolt curves for gaming to try? I would like to have a profile for gaming and another for benches. As you can see, I don't have much space in my Phanteks P500A.










Thanks!


----------



## KedarWolf

lmfodor said:


> Hi, I just bought a new Strix 3090 White edition. Coming off a TUF 3080 and coming here to learn how to improve my performance. All the mods I saw in this thread are amazing. I did not know that there was a 1000W BIOS, nor that the EVGA BIOS works well on the Strix. Well, I'm not that advanced, I'm actually a newbie and I have two quick questions that are easy for you to answer.
> 
> I currently have a Seasonic PRIME TX-850 Titanium power supply. So even though the power efficiency is good and I was able to run tests by unlocking the GPU voltage, I'm not sure if my PSU will hold up to my entire system well, considering I'm using the latest ASUS V4 BIOS which is max 481W of power . I managed to get a score of 20660 in Time Spy with my 5900x, 10671 in TimeSpy Extreme and 14405 in Port Royale, +135 Core and +1300 Mem. But I noticed that my memory temperatures are limiting me, although the temperature of the GPU reaches ~ 75c which is the limit, without using 480W, being at 380W w/o AB I am at 65c. I see that most of you went to a waterblock, and I wonder if the way to go for this GPU, because honestly I have no experience with CLC, I only use an EK360 AIO for my CPU, so my ideal world would be an AIO for this card, which I don't think exists. I mean, I watched a lot of videos on how to get started with water cooling ... but I don't see myself handling and bending tubes, checking for leaks, I prefer something plug and play like an AIO. Also, I've never opened a GPU, I'm not worried about losing the warranty, but I'd actually rather change the thermal pads than go to water cooling (for now). I also see some people putting a fan on top of the back plate. Can I change just the memory thermal pads by removing the back plate or do I just have to disassemble the entire card? I bought several packs of Thermalright 12.8 W / mK, 85x45x1mm and x0.5 thermal pads, but I forget to buy 2mm and I also have a lot of thermal paste to try, the Extreme hydronaut, MX-5, Thermaltight TFX .. So I could apply any of those..
> 
> So the first question is, how could I improve my cooling while in the air? Is there a block with some kind of AIO like the Kingpin that comes with a built in radiator and pump?
> 
> The second is easier, can anyone share any undervoltage curves for gaiming to try? I would like to have a profile for gaming and another for benchs. As you can see, I don't have much space with my Panthek P500A.
> 
> View attachment 2514783
> 
> 
> Thanks!


I bought an EKWB Phoenix 360 rad and pump unit which had quick disconnects on it, added two short hoses and two quick disconnects to my EKWB block, attached it to the Phoenix, topped it off with EKWB premixed coolant; problem solved.

Those AIOs are really hard to find now, though; I think I recently only saw the 240 still available, which would underperform on a 3090.

Edit: Maybe two 240s in series, but having all those quick disconnects would really slow the coolant flow. :/


----------



## Arizor

Welcome @lmfodor , to answer your questions:

1. Yes there are AIOs both from EVGA (Kingpin and their 'hybrid' editions, you can even buy a separate 'hybrid' AIO kit for your EVGA 3080/90) and Gigabyte (Aorus Xtreme Waterforce).

2. Undervolt for gaming on my Strix - it can hold 1950mhz and +1500mhz mem on a 1.0V curve. You could probably push the voltage lower but I want stability playing demanding games like CP2077 (though when I play I'm under water so I just push it up to 2040mhz and the voltage bounces between 1.075 - 1.1, temps stay around 49C).

Undervolt for mining - 0.925V, +1200MEM, clock at around 1400mhz. Get 121 hashrate, VRAM stays in the 80s. Again this is on water though (yet just a passive backplate from EK so not too dissimilar to air).


----------



## GQNerd

Repurposed GA102-250

My Kingpin 3080ti, lmaooo

All joking aside, so far there’s no difference in Perf compared to my OG KP w/GA102-300

In PR I was already able to hit 15,919 without really knowing what voltages the card likes with the Classified tool.

Core set to 2235, avg. 2205
+1500 Mem (can't hit +1700 like my OG, or maybe it needs more voltage)

Will see what it can do this weekend

ps. HydroCopper water block on the card


----------



## lmfodor

KedarWolf said:


> I bought a EKWB Phoenix 360 rad and pump unit which had quick disconnects on it and added two short hoses and two quick disconnects to my EKWB block, attached it to my Phoenix, topped it off with EKWB premixed coolant, problem solved.
> 
> Those AIOs are really hard to find now though, I think I recently only saw the 240 still available, which would underperform on a 3090.
> 
> Edit: Maybe two 240s in series, but having all those quick disconnects would really slow the coolant flow. :/


Hi @KedarWolf! I'm still using your memory timings; they're the best for my dual-rank Trident Z! That sounds easy.. I guess I should watch some installation guide video. If it comes with flexible hoses instead of tubes, and with some premixed coolant, it would be easy; my only concern would be leak testing. Maybe it's easier than I thought.

Would you send me a list of all the parts you bought?

I'm traveling to the US in a month, so I'm putting together my list of components to buy. I think you were right to go to the water block. These cards have plenty of headroom to grow. I don't expect to hold the world record, but I do expect to play games with a good stable OC for better 4K performance. Only that. I know there is no rational use of a 3090 just for games, and since all that VRAM heats the whole card up when overclocking, water cooling seems the only way to go! I hope to have this card for a long time, so I think it is the way to go. In fact, I suppose I should also buy a new power supply, right? For now, at 480W, the 850W seems fine, but I don't know if it will support EVGA's 500/520W BIOS. What do you think? Thanks


Sent from my iPhone using Tapatalk Pro


----------



## lmfodor

Arizor said:


> Welcome @lmfodor , to answer your questions:
> 
> 1. Yes there are AIOs both from EVGA (Kingpin and their 'hybrid' editions, you can even buy a separate 'hybrid' AIO kit for your EVGA 3080/90) and Gigabyte (Aorus Xtreme Waterforce).
> 
> 2. Undervolt for gaming on my Strix - it can hold 1950mhz and +1500mhz mem on a 1.0V curve. You could probably push the voltage lower but I want stability playing demanding games like CP2077 (though when I play I'm under water so I just push it up to 2040mhz and the voltage bounces between 1.075 - 1.1, temps stay around 49C).
> 
> Undervolt for mining - 0.925V, +1200MEM, clock at around 1400mhz. Get 121 hashrate, VRAM stays in the 80s. Again this is on water though (yet just a passive backplate from EK so not too dissimilar to air).


Hi @Arizor, thanks for your welcome!

Regarding the AIO, as I have a Strix it seems there's no 360mm version, so maybe I should go for the full water block, hoses and premixed fluid. I'll look on YouTube for some tutorials.

I'm not mining, so I will try to find a stable curve for gaming. At which frequency do you start at 1V? Do you have a screenshot of your curve to share? I finished Cyberpunk in February.. Now I'm just starting Metro Exodus Enhanced Edition, so it would be good to test with, right? Thanks!


Sent from my iPhone using Tapatalk Pro


----------



## Arizor

Hey @lmfodor ,

On my Macbook at the moment so can't share the curve, but I basically do a classic quick undervolt in Afterburner:

1. Load up Afterburner and hit CTRL+F. This will bring up the voltage curve.
2. On your Afterburner interface, bring the GPU clock DOWN to -300ish, hit apply.
3. Now grab the dot sitting at 1V, and drag it up to around 1935, hit apply. Set your mem to +1500, set power and core voltage to max (unlock voltage in afterburner settings, select "reference design" so it unlocks properly).
4. This should now run, give it a test, adjust as needed based on crashes/if you think you can go higher.

edit: Update, stable in CP2077 @ 0.975V, 1935MHz; if you want to go lower, maybe start there and see if you hit any stability issues.
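If it helps to see what those four steps actually do to the V/F curve, here's a rough Python sketch. The stock curve values below are invented for illustration; only the -300 offset, the 1.0V point and the 1935MHz target come from the steps above:

```python
# Toy model of the Afterburner curve edit described above.
# curve maps voltage (V) -> stock boost clock (MHz); values are made up.

def undervolt_curve(curve, offset_mhz, target_v, target_mhz):
    """Apply a global negative offset, then flatten the curve at the target."""
    edited = {}
    for v, clk in sorted(curve.items()):
        if v < target_v:
            # step 2: the global -300 offset pulls the low-voltage points down
            edited[v] = clk + offset_mhz
        else:
            # step 3: the 1.0V point is dragged up to the target clock, and
            # every point above it is flattened so the card never requests
            # more voltage than the target
            edited[v] = target_mhz
    return edited

stock = {0.85: 1800, 0.90: 1860, 0.95: 1905, 1.00: 1950, 1.05: 1995, 1.10: 2040}
edited = undervolt_curve(stock, offset_mhz=-300, target_v=1.00, target_mhz=1935)
```

The result is the classic flattened curve: everything below 1.0V sits well under the target, and everything at or above 1.0V runs the target clock.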



lmfodor said:


> Hi Arizor, thanks for the welcome!
> 
> Regarding the AIO: since I have a Strix, it seems there's no 360mm version, so maybe I should go with a full water block, tubing and premixed coolant. I'll look on YouTube for some tutorials.
> 
> I'm not mining, so I'll try to find a stable curve for gaming. At what frequency do you start at 1V? Do you have a screenshot of your curve to share? I finished Cyberpunk in February. Now I'm just starting Metro Exodus Enhanced Edition, so it would be good for testing, right? Thanks!
> 
> 
> Sent from my iPhone using Tapatalk Pro


----------



## J7SC

Miguelios said:


> View attachment 2514793
> 
> Repurposed GA102-250
> 
> My Kingpin 3080ti, lmaooo
> 
> All joking aside, so far there’s no difference in Perf compared to my OG KP w/GA102-300
> 
> In PR I was already able to hit 15,919 without really knowing what voltages the card likes with the Classified tool.
> 
> Core set to 2235, avg. 2205
> +1500 Mem (can't hit +1700 like my OG, or maybe it needs more voltage)
> 
> Will see what it can do this weekend
> 
> ps. HydroCopper water block on the card


Per below, this seems to have happened before, though your bin is anything but low hehe... 
I just wish they would stop mucking up the die surface with raised lettering of any kind


----------



## gfunkernaught

Is there a noticeable difference in cooling capacity between two 420mm radiators of different thickness, 33mm vs 40mm? I was looking at the XR5 420, and there was an Alphacool one, I think, on Amazon that was 40mm thick.


----------



## lmfodor

Arizor said:


> Hey @lmfodor ,
> 
> On my Macbook at the moment so can't share the curve, but I basically do a classic quick undervolt in Afterburner:
> 
> 1. Load up Afterburner and hit CTRL+F. This will bring up the voltage curve.
> 2. On your Afterburner interface, bring the GPU clock DOWN to -300ish, hit apply.
> 3. Now grab the dot sitting at 1V, and drag it up to around 1935, hit apply. Set your mem to +1500, set power and core voltage to max (unlock voltage in afterburner settings, select "reference design" so it unlocks properly).
> 4. This should now run, give it a test, adjust as needed based on crashes/if you think you can go higher.
> 
> edit: Update, stable in CP2077 @ .975mv, 1935mhz, if you want to go lower maybe start there and see if you hit any stability issues.


Hi Arizor, thank you very much. It was a good start; now I should test stability in games. For the moment I'm leaving Unigine Heaven running, though I know it's old. I don't know if there are other good stress tests for stability, maybe 3DMark? I don't think FurMark would be needed for this. I also dropped 2°C on GPU temp and GPU memory junction (73°C and 80°C respectively). I think that's fine, right? 









Thanks Again


----------



## Arizor

Yep @lmfodor, that all looks present and correct for a gaming-stable OC.


----------



## J7SC

gfunkernaught said:


> Is there a noticeable difference in cooling capacity between two 420mm radiators of different thickness, 33mm vs 40mm? I was looking at the XR5 420, and there was an Alphacool one, I think, on Amazon that was 40mm thick.


...the thicker, the better!...below, a regular aluminum AIO rad @ 360 / 27 mm on the left vs a bronze & copper rad @ 480 / 64 mm on the right; 64 mm allows for a triple-row configuration. Of course, other considerations play a role...apart from mounting space, there is fin density, fans, case airflow, other loop peripherals, etc...
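As a crude sanity check on the 33mm vs 40mm question above, here's the core-volume math in Python. I'm assuming both rads share the same 420 x 140 mm footprint (an assumption, not from the listings), and volume is only a rough proxy anyway; fin density, row count and fan static pressure matter at least as much:

```python
# Relative core volume of two 420mm rads of different thickness.
# Footprint (420 x 140 mm) is assumed; only the thicknesses come from the question.

def core_volume_cm3(length_mm, width_mm, thickness_mm):
    return length_mm * width_mm * thickness_mm / 1000.0

thin = core_volume_cm3(420, 140, 33)
thick = core_volume_cm3(420, 140, 40)
ratio = thick / thin  # about 1.21, i.e. roughly 21% more core volume
```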


----------



## Lord of meat

Just did a repaste to test TF8 paste.
Temps are:
GPU core 47C max
Hotspot 58C max
Memory 64C max
GPU OC at 2115, memory +750
Game stable so far. Port Royal can do +1000.
Also, if anyone wants to remove the RGB on the EKWB block, there are standoffs that can be removed to take out the white plastic housing.
Finally got this hot **** to run like I want. Going to use TF8 instead of MX4 from now on.
MSI Trio 3090 with the 500W rebar EVGA BIOS.


----------



## KingKnick

Nobody here has the balls to upload the 1000W rbar BIOS? That's embarrassing, man…


----------



## Shawnb99

J7SC said:


> the thicker, the better


The HWL GTS routinely beats thicker radiators. So no, thicker is not better; that's another WC myth that needs to die.

Thicker CAN be better, but that generally means going push/pull and running faster fans. Those so-called "other considerations" play a very large role.


----------



## Shawnb99

KingKnick said:


> Nobody here has the balls to upload the 1000W rbar BIOS? That's embarrassing, man…


Not after this comment. You can go buy a KPE yourself with that attitude


----------



## KingKnick

Shawnb99 said:


> Not after this comment. You can go buy a KPE yourself with that attitude


Do you know how many times I have already uploaded a BIOS???


----------



## Shawnb99

KingKnick said:


> Do you know how many times I have already uploaded a BIOS???


Do you know how much I care? I'm sure you can guess.

I don't give a flying **** if you've uploaded a million BIOSes; it still doesn't excuse your rude-ass comment.


----------



## gfunkernaught

J7SC said:


> ...the thicker, the better !...below, a regular aluminum AIO rad @ 360 / 27 mm on the left vs a bronze & copper rad @ 480 / 64 mm on the right > 64 mm allows for triple row configuration. Of course, other considerations play a role...apart from mounting space, there is fin density, fans, case airflow and other loop peripherals etc ...
> 
> View attachment 2514815


I figured as much. I don't mind doing push/pull. I'm really interested in that 1000D...in love actually 😂.

The front takes two 480mm rads and the top takes a 420mm. From what I saw, there is plenty of room for a thick rad up top and in front, though maybe not 60mm haha, because push/pull might get really close to the motherboard. I like that it has filters on the front and bottom too. I'm trying to avoid an open case because of the dog and other factors. I can get one for $500, and they seem to be hard to find at that price. I don't remember seeing a motherboard tray, though. But yeah...in love...wanna jump...even if I don't build a new PC now...want it lol
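Quick clearance math for the rad + fan stacks being discussed, assuming standard 25mm-thick fans (the actual 1000D clearances would still need measuring):

```python
# Total thickness of a radiator stack with push and/or pull fans.
# 25 mm is the standard 120/140 mm fan thickness; rad thicknesses from the thread.

FAN_MM = 25

def stack_mm(rad_mm, push=True, pull=False):
    return rad_mm + FAN_MM * (int(push) + int(pull))

push_only_60 = stack_mm(60)                        # 60mm rad, push only
push_pull_60 = stack_mm(60, push=True, pull=True)  # 60mm rad, push/pull
push_pull_33 = stack_mm(33, push=True, pull=True)  # 33mm rad, push/pull
```

So a 60mm rad in push/pull eats 110mm before you even count fittings, which is why it can crowd the motherboard.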


----------



## J7SC

Shawnb99 said:


> The HWL GTS routinely beats thicker radiators. So no, thicker is not better; that's another WC myth that needs to die.
> 
> Thicker CAN be better, but that generally means going push/pull and running faster fans. Those so-called "other considerations" play a very large role.


Well, you edited out the rest of my post on that... Be that as it may, I also have some HWL rads that have performed nicely for years, though the bigger and thicker rads with the right kind of push/pull perform best of all, in my builds at least.


----------



## des2k...

Shawnb99 said:


> Not after this comment. You can go buy a KPE yourself with that attitude


I'm still checking this thread daily in hopes of that magical rbar vbios, since I have a 2x8-pin card.

But chances are low :-( A shame, since this OC community is small and should help out where possible.


----------



## Shawnb99

des2k... said:


> I'm still checking this thread daily in hopes of that magical rbar vbios, since I have a 2x8-pin card.
> 
> But chances are low :-( A shame, since this OC community is small and should help out where possible.


How does one get that bios if they have the KPE? If uploading it doesn't get me in trouble and someone can tell me how to get it I might be willing to help out.


----------



## des2k...

Shawnb99 said:


> How does one get that bios if they have the KPE? If uploading it doesn't get me in trouble and someone can tell me how to get it I might be willing to help out.


You'll have to contact Kingpin Vince by email (a few people here have his contact).

I think he will ask for your card's serial # and should be able to send you that XOC rbar vbios.


----------



## Shawnb99

des2k... said:


> You'll have to contact Kingpin Vince by email (a few people here have his contact).
> 
> I think he will ask for your card's serial # and should be able to send you that XOC rbar vbios.


Sweet, thanks. So, a call out to anyone who has his contact: please send me a PM.


----------



## tbrown7552

I will say, I know quite a few people with Kingpin cards who have reached out to Vince for the rebar vbios and are getting no response.


----------



## Nizzen

Want "unlimited" power with a rebar bios? Shunt mod that GPU like many of us 
Complaining about the top dogs not sharing their XOC BIOSes isn't helping anyone. They want to hold on to their top spots in 3DMark, and that's OK. Do something about the "problem" and fix it yourself 

GG 

How do you "earn" a special BIOS? You need to kill at least 5x 3090 cards live on stream


----------



## PLATOON TEKK

New 471.11 drivers.

They mainly fix sharpening (useful to turn up in order to lower the resolution scale for extra frames) and VR stutter, and add WDDM 3.0. These also supposedly work much better on Win11, but I'm not about to reformat and re-test Win11 lol.

"DPC latency is higher when color mode is set to 8-bit color compared to 10-bit color. [3316424] -> To workaround issue, disable MPO using the registry key found in the FAQ below: After updating to NVIDIA Game Ready Driver 461.09 or newer, some desktop apps may flicker or stutter when resizing the window on some PC configurations | NVIDIA"

Found this under known issues. I wonder if this affects scores at all. How does DPC latency interact with benchmarks in general? For example, does setting latency to "ultra" actually benefit scores?

I've uploaded the stripped version of the driver below for those who prefer it (no telemetry; only includes the core driver, install core and PhysX).

471.11 Barebone Driver (using NVSlimmer .10)  (Dropbox)


----------



## des2k...

PLATOON TEKK said:


> New 471.11 drivers.
> 
> They mainly fix sharpening (useful to turn up in order to lower the resolution scale for extra frames) and VR stutter, and add WDDM 3.0. These also supposedly work much better on Win11, but I'm not about to reformat and re-test Win11 lol.
> 
> "DPC latency is higher when color mode is set to 8-bit color compared to 10-bit color. [3316424] -> To workaround issue, disable MPO using the registry key found in the FAQ below: After updating to NVIDIA Game Ready Driver 461.09 or newer, some desktop apps may flicker or stutter when resizing the window on some PC configurations | NVIDIA"
> 
> Found this under known issues. I wonder if this affects scores at all. How does DPC latency interact with benchmarks in general? For example, does setting latency to "ultra" actually benefit scores?
> 
> I've uploaded the stripped version of the driver below for those who prefer it (no telemetry; only includes the core driver, install core and PhysX).
> 
> 471.11 Barebone Driver (using NVSlimmer .10)  (Dropbox)


I doubt DPC latency affects benchmark scores that much. Mine is tweaked; it's in microseconds here for GPU resolves, and my 3900X still scores lower in Port Royal vs Intel or Zen 3.

This is while playing a YouTube video, but it's the same for games
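A quick back-of-envelope supports this: even a pretty bad DPC spike is a small slice of a frame at benchmark frame rates. The numbers below are illustrative, not measured:

```python
# Fraction of one rendered frame that a single DPC of a given length eats.

def dpc_share_of_frame(dpc_us, fps):
    frame_us = 1_000_000 / fps  # frame time in microseconds
    return dpc_us / frame_us

bad = dpc_share_of_frame(500, 120)   # 500 us spike at 120 fps -> 6% of a frame
good = dpc_share_of_frame(50, 120)   # 50 us at 120 fps -> 0.6% of a frame
```

And DPC spikes don't fire every frame, so the average cost is far lower still; it shows up as stutter rather than a lower score.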


----------



## PLATOON TEKK

des2k... said:


> I doubt DPC latency affects benchmark scores that much. Mine is tweaked; it's in microseconds here for GPU resolves, and my 3900X still scores lower in Port Royal vs Intel or Zen 3.
> 
> This is while playing a YouTube video, but it's the same for games
> View attachment 2514877


Holy piss! That's the lowest latency I've ever seen, and I thought mine was low. How did you manage that, a stripped Windows install?

The reason I ask about latency is that I heard frames actually drop slightly when using "ultra" to achieve the low latency. But I think you're right, overall system latency might not play a big role in scores.


----------



## Nizzen

des2k... said:


> I doubt DPC latency would affect benchmark scores that much. Mine is tweaked, it's in micro seconds here for GPU resolves and my 3900x scores lower on PortRoyal vs Intel or Zen3.
> 
> Playing youtube video, but it's the same for games
> View attachment 2514877


I want to learn from you! Crazy low latency 🤩


----------



## des2k...

PLATOON TEKK said:


> Holy piss! That's the lowest latency I've ever seen, and I thought mine was low. How did you manage that, a stripped Windows install?
> 
> The reason I ask about latency is that I heard frames actually drop slightly when using "ultra" to achieve the low latency. But I think you're right, overall system latency might not play a big role in scores.


Just normal Windows. DPC went up with this new driver even with MPO off in the registry, so the quality of the driver matters a lot.

Logging is disabled, and the startup traces are deleted on every reboot. I disable things I don't need in Device Manager, and I check the hidden drivers on shutdown (looks like the HWiNFO driver re-enabled itself again). I run MSI-X on all IRQs + priority, like in my screenshot.
3900X, 1900 IF, and my memory is not the best.
I run a batch file on startup to stop services and drop the priority of some stuff that runs in the background (not sure if it makes a huge difference):

net stop "CDPUserSvc_%cut2%"
net stop "DevicesFlowUserSvc_%cut2%"
net stop "PimIndexMaintenanceSvc_%cut2%"
net stop "WpnService"
net stop "SysMain"
net stop "lmhosts"
sc stop EvtEng
net stop "iphlpsvc"
net stop "BDESVC"
net stop "DiagTrack"
net stop "lfsvc"
net stop "LanmanServer"
net stop "WerSvc"
net stop "DusmSvc"
net stop "XboxNetApiSvc"
net stop "GamingServicesNet"
net stop "stisvc"
net stop "wlidsvc"
net stop "PcaSvc"

REM Background Apps [ALWAYS RUNNING]
wmic process where name="agent_ovpnconnect_1594367036109.exe" CALL setpriority "below normal"
wmic process where name="Everything.exe" CALL setpriority "below normal"

wmic process where name="ibtsiva.exe" CALL setpriority "below normal"
wmic process where name="SearchApp.exe" CALL setpriority "below normal"
wmic process where name="SearchIndexer.exe" CALL setpriority "below normal"
wmic process where name="spoolsv.exe" CALL setpriority "below normal"
wmic process where name="fontdrvhost.exe" CALL setpriority "below normal"
wmic process where name="UserOOBEBroker.exe" CALL setpriority "below normal"
wmic process where name="GamingServices.exe" CALL setpriority "below normal"

taskkill /IM "XboxPcApp.exe" /F
taskkill /IM "GameBar.exe" /F
taskkill /IM "GameBarFTServer.exe" /F


----------



## yzonker

Nizzen said:


> Want "unlimited" power with a rebar bios? Shunt mod that GPU like many of us
> Complaining about the top dogs not sharing their XOC BIOSes isn't helping anyone. They want to hold on to their top spots in 3DMark, and that's OK. Do something about the "problem" and fix it yourself
> 
> GG
> 
> How do you "earn" a special BIOS? You need to kill at least 5x 3090 cards live on stream


Yeah, now that I have a spare card, I've been toying with the idea of shunt modding, although reBAR is worth almost nothing, so not much gain for quite a bit of work and some risk.


----------



## des2k...

yzonker said:


> Yeah, now that I have a spare card, I've been toying with the idea of shunt modding, although reBAR is worth almost nothing, so not much gain for quite a bit of work and some risk.


I wouldn't even bother with a shunt mod just for rbar. 
It would be nice on 2x8-pin cards if we do get that XOC rbar BIOS.

We didn't hear much from NVIDIA about whitelisting more games, so chances are the devs don't really care, or they can't extract extra performance from this gen.

By the time rbar makes a difference, everybody will be on next-gen cards lol


----------



## mirkendargen

Is it really that hard to get DPC latency that low? I get in the ballpark of that doing nothing fancy at all beyond forcing all my devices to use MSI instead of line-based IRQs and disabling devices I don't use in the BIOS. And I have a lot of weird **** like a Mellanox NIC, an iSCSI drive, AMD NVMe RAID, and PrimoCache running that I'd expect, if anything, to add weird DPC latency lol.


----------



## yzonker

des2k... said:


> I wouldn't even bother with a shunt mod just for rbar.
> It would be nice on 2x8-pin cards if we do get that XOC rbar BIOS.
> 
> We didn't hear much from NVIDIA about whitelisting more games, so chances are the devs don't really care, or they can't extract extra performance from this gen.
> 
> By the time rbar makes a difference, everybody will be on next-gen cards lol


Well, the other advantage is running a BIOS with safeties in place.


----------



## gfunkernaught

@J7SC 
I got the case, but I can't find the 8x120mm rack for the top; it comes with a 3x140mm rack. I'm sure 2x480mm will cool better than 1x420mm, but I'm trying to figure out by how much, and it will be louder for sure, 8 vs 3 fans.


----------



## PLATOON TEKK

des2k... said:


> Just normal Windows. DPC went up with this new driver even with MPO off in the registry, so the quality of the driver matters a lot.
> 
> Logging is disabled, and the startup traces are deleted on every reboot. I disable things I don't need in Device Manager, and I check the hidden drivers on shutdown (looks like the HWiNFO driver re-enabled itself again). I run MSI-X on all IRQs + priority, like in my screenshot.
> 3900X, 1900 IF, and my memory is not the best.
> I run a batch file on startup to stop services and drop the priority of some stuff that runs in the background (not sure if it makes a huge difference):
> 
> net stop "CDPUserSvc_%cut2%"
> net stop "DevicesFlowUserSvc_%cut2%"
> net stop "PimIndexMaintenanceSvc_%cut2%"
> net stop "WpnService"
> net stop "SysMain"
> net stop "lmhosts"
> sc stop EvtEng
> net stop "iphlpsvc"
> net stop "BDESVC"
> net stop "DiagTrack"
> net stop "lfsvc"
> net stop "LanmanServer"
> net stop "WerSvc"
> net stop "DusmSvc"
> net stop "XboxNetApiSvc"
> net stop "GamingServicesNet"
> net stop "stisvc"
> net stop "wlidsvc"
> net stop "PcaSvc"
> 
> REM Background Apps [ALWAYS RUNNING]
> wmic process where name="agent_ovpnconnect_1594367036109.exe" CALL setpriority "below normal"
> wmic process where name="Everything.exe" CALL setpriority "below normal"
> 
> wmic process where name="ibtsiva.exe" CALL setpriority "below normal"
> wmic process where name="SearchApp.exe" CALL setpriority "below normal"
> wmic process where name="SearchIndexer.exe" CALL setpriority "below normal"
> wmic process where name="spoolsv.exe" CALL setpriority "below normal"
> wmic process where name="fontdrvhost.exe" CALL setpriority "below normal"
> wmic process where name="UserOOBEBroker.exe" CALL setpriority "below normal"
> wmic process where name="GamingServices.exe" CALL setpriority "below normal"
> 
> taskkill /IM "XboxPcApp.exe" /F
> taskkill /IM "GameBar.exe" /F
> taskkill /IM "GameBarFTServer.exe" /F
> 
> View attachment 2514884
> 
> View attachment 2514885
> 
> View attachment 2514886
> 
> View attachment 2514887
> 
> View attachment 2514888


I truly appreciate you taking the time like this and providing such a detailed and thorough answer. I have noted all of your steps and will be messing around with them myself. Thanks again man, definitely good advice.



















Tested power (for those who had asked) on one of my top runs; it seems to peak at 1380W during PR. This is for the entire PC, including the 11900K.

edit: posted temps on initial boot too. Those shoot up pretty quickly 😓


----------



## ttnuagmada

EK active backplate for my 3090 strix has shipped!


----------



## KedarWolf

ttnuagmada said:


> EK active backplate for my 3090 has shipped!


Mine hasn't shipped yet but I ordered it 6/11/2021 so I'm a bit late in the queue.


----------



## ttnuagmada

KedarWolf said:


> Mine hasn't shipped yet but I ordered it 6/11/2021 so I'm a bit late in the queue.


I ordered mine on May 24th. It was the first or second day it could be ordered, I'm pretty sure.


----------



## GRABibus

Guys,
does this card (Gundam) offer higher performance or binned chips versus the Strix?
It is 450 euros more in France than the usual Strix, so I wonder...









Asus GeForce RTX 3090 ROG STRIX O24G GUNDAM - Carte graphique - Top Achat
www.topachat.com


----------



## KedarWolf

GRABibus said:


> Guys,
> does this card (Gundam) offer higher performance or binned chips versus the Strix?
> It is 450 euros more in France than the usual Strix, so I wonder...
> 
> Asus GeForce RTX 3090 ROG STRIX O24G GUNDAM - Carte graphique - Top Achat
> www.topachat.com


I'm pretty sure you're just paying more for the Gundam heatsink, and it's a reference Strix PCB.


----------



## Nizzen

GRABibus said:


> Guys,
> does this card (Gundam) offer higher performance or binned chips versus the Strix?
> It is 450 euros more in France than the usual Strix, so I wonder...
> 
> Asus GeForce RTX 3090 ROG STRIX O24G GUNDAM - Carte graphique - Top Achat
> www.topachat.com


Not binned.


----------



## EarlZ

Has anyone had a Video Scheduler Internal Error BSOD with the 471.11 drivers? I just got one today after updating my drivers yesterday.


----------



## kx11

GRABibus said:


> Guys,
> does this card (Gundam) offer higher performance or binned chips versus the Strix?
> It is 450 euros more in France than the usual Strix, so I wonder...
> 
> Asus GeForce RTX 3090 ROG STRIX O24G GUNDAM - Carte graphique - Top Achat
> www.topachat.com


the 450-euro premium goes to the GUNDAM estate


----------



## kx11

Prices of GPUs are going down because China is blocking mining activities 


Graphics Card | May Pricing | Current Pricing | Reduction
MSI GeForce RTX 3080 Ti Gaming X Trio | $2,273 | $2,165 | 5%
MSI GeForce RTX 3070 Gaming X Trio | $1,624 | $1,237 | 24%
Asus TUF Gaming GeForce RTX 3060 OC Edition | $1,531 | $835 | 45%
Colorful iGame GeForce RTX 3060 Ultra OC | $1,121 | $711 | 33%
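Out of curiosity, the reductions work out as (May - current) / May; a quick check in Python shows the first three rows match, though the last row comes to about 37% rather than the listed 33%:

```python
# Recompute the percentage reductions from the May and current prices above.

prices = {
    "MSI GeForce RTX 3080 Ti Gaming X Trio": (2273, 2165),
    "MSI GeForce RTX 3070 Gaming X Trio": (1624, 1237),
    "Asus TUF Gaming GeForce RTX 3060 OC Edition": (1531, 835),
    "Colorful iGame GeForce RTX 3060 Ultra OC": (1121, 711),
}

reductions = {card: round((may - cur) / may * 100)
              for card, (may, cur) in prices.items()}
```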


----------



## Zaudi

Question for the 3090 FE users.
I got mine this week.
No coil whine, and temps are OK.

But the BIOS version makes me unhappy.

I saved it, but didn't check the vBIOS number.
rBAR was already activated.

But I still used the NVIDIA BIOS flash tool and let it check my version.





NVIDIA Resizable BAR Firmware Update Tool | NVIDIA







nvidia.custhelp.com




It showed me a new one, and I pressed 'Y' to install it.

Now my score in every GPU benchmark has dropped ~200 points.

BIOS now: 

VBIOS Version: 94.02.4B.00.0B

TechPowerUp says:
*Warning:* You are viewing an unverified BIOS file.
This upload has not been verified by us in any way (like we do for the entries listed under the 'AMD', 'ATI' and 'NVIDIA' sections).
Please exercise caution when flashing it to your graphics card, and always have a backup. 

Can I flash the "old" version back with NVFlash?
I heard FE cards can't be flashed.


----------



## Falkentyne

Zaudi said:


> Question for the 3090 FE users.
> I got mine this week.
> No coil whine, and temps are OK.
> 
> But the BIOS version makes me unhappy.
> 
> I saved it, but didn't check the vBIOS number.
> rBAR was already activated.
> 
> But I still used the NVIDIA BIOS flash tool and let it check my version.
> 
> 
> 
> 
> 
> NVIDIA Resizable BAR Firmware Update Tool | NVIDIA
> 
> 
> 
> 
> 
> 
> 
> nvidia.custhelp.com
> 
> 
> 
> 
> It showed me a new one, and I pressed 'Y' to install it.
> 
> Now my score in every GPU benchmark has dropped ~200 points.
> 
> BIOS now:
> 
> VBIOS Version: 94.02.4B.00.0B
> 
> TechPowerUp says:
> *Warning:* You are viewing an unverified BIOS file.
> This upload has not been verified by us in any way (like we do for the entries listed under the 'AMD', 'ATI' and 'NVIDIA' sections).
> Please exercise caution when flashing it to your graphics card, and always have a backup.
> 
> Can I flash the "old" version back with NVFlash?
> I heard FE cards can't be flashed.


You can flash other 3090FE Bioses, yes.

Try 94.02.32.00.02 and check your scores.


----------



## Zaudi

Falkentyne said:


> You can flash other 3090FE Bioses, yes.


Good to know! Thanks 



Falkentyne said:


> Try 94.02.32.00.02 and check your scores.


Interesting, TechPowerUp shows:

VBIOS Version | BIOS Build date
94.02.4B.00.0B | 2021-03-05
94.02.32.00.02 | 2021-03-09
94.02.27.00.0A | 2021-03-18

Does anyone know which one performs best?
And why were there three releases in 14 days?


----------



## des2k...

PLATOON TEKK said:


> I truly appreciate you taking your time like this and providing such a detailed and thorough answer. I have noted all of your steps and will be messing around with them myself. Thanks again man, definitely good advice.
> View attachment 2514957
> View attachment 2514971
> View attachment 2514973
> 
> Tested power (for those who had asked) on one of my top runs, seems to peak at 1380w during PR. This is for the entire pc including 11900K.
> 
> edit: posted temps on initial boot too. Those shoot up pretty quickly 😓


I almost forgot: I run HPET on in the BIOS and off in Windows.

Old builds of Windows that aren't running the 10MHz synthetic timer are best for low latency, especially with 1T 3800+ memory.

You can also disable power management on the timer and force the resolution to be fixed:
bcdedit /set disabledynamictick yes


----------



## gfunkernaught

Can someone recommend a 480mm radiator for me? I'm getting four and will use noctua NF-F12s with them. Was thinking XR7 but I'll take some recommendations.


----------



## MrTOOSHORT

gfunkernaught said:


> Can someone recommend a 480mm radiator for me? I'm getting four and will use noctua NF-F12s with them. Was thinking XR7 but I'll take some recommendations.


Check out this thread, very good info on what rad to try and get:

Need some quick help, Whats currently the top performing PC rad? | Overclock.net


----------



## J7SC

gfunkernaught said:


> Can someone recommend a 480mm radiator for me? I'm getting four and will use noctua NF-F12s with them. Was thinking XR7 but I'll take some recommendations.


How much space have you got available? I have three TT Pacific CL 480 x 64 mm (copper and bronze, 4 in/outlets plus a drain port) which work great. Now, before the 'I hate TT' chorus sets in: not everything TT produces is to my liking, and I tried one of those rads out first to check quality and performance before adding two more.

My 'regular' go-to rads are the XSPC RX 360, and they do make an RX 480 which could also be good


----------



## gfunkernaught

J7SC said:


> How much space have you got available ? I have three TT Pacific CL 480x64 mm (copper and bronze, 4 in/outlets plus drain port) which work great. Now, before the 'I hate TT chorus' sets in, not everything TT produces is to my liking, and I tried one of those rads out first to see about quality and performance before adding two more.
> 
> My 'regular' go-to rads are the XSPC RX 360, and they do make a RX 480 which could also be good


They're going into a Corsair 1000D. It will fit 4x480, but thickness is the question; I haven't taken it out of the box yet, gotta measure all the goods. Who hates TT?? I have used their products with mixed results, no reason to hate them lol. I'm actually going to get two of their 1-to-10 fan hubs. Those rads you mentioned, will the NF-F12s push through them properly? I know some rads are made for higher or lower static pressure or flow, just dunno which ones.


----------



## GRABibus

Here is why I like the EVGA RTX 3090 FTW3 ULTRA XOC 500W BIOS, version 94.02.42.80.27, on a Strix with the stock air cooler.






[email protected]@1.1V => Temp below 60°C.
[email protected] => temp below 70°C
Ambient 24°C-25°C
Stock air cooler, repasted with Conductonaut.
Memory chips repadded with Thermalright Odyssey thermal pads.

I am really happy with those temps and frequencies.

Here is the V/F curve :


----------



## J7SC

gfunkernaught said:


> They're going into a Corsair 1000D. It will fit 4x480, but thickness is the question; I haven't taken it out of the box yet, gotta measure all the goods. Who hates TT?? I have used their products with mixed results, no reason to hate them lol. I'm actually going to get two of their 1-to-10 fan hubs. Those rads you mentioned, will the NF-F12s push through them properly? I know some rads are made for higher or lower static pressure or flow, just dunno which ones.


...I use four of TT's SATA-powered 1-to-10 fan hubs w/o any issues (two since late '18). As to high-static-pressure fans: very advisable / recommended for those big rads!

The TT CL 480 x 64 mm has a triple-row design and 14 FPI...I use Arctic P12 PST PWM in push/pull on those, replacing my trusty (but 'not quiet') GentleTyphoon 3000 RPM 'push' fans.
The XSPC 480 x 56 mm has a double-row design and 13 FPI.

I am not sure about the NF-F12s being able to get it done; it would certainly require push/pull. Also, I usually use electrical tape to seal the fan-rad 'gap', if any...


----------



## gfunkernaught

J7SC said:


> ...I use four of TT's SATA-powered 1-to-10 fan hubs w/o any issues (two since late '18). As to high-static-pressure fans: very advisable / recommended for those big rads!
> 
> The TT CL 480 x 64 mm has a triple-row design and 14 FPI...I use Arctic P12 PST PWM in push/pull on those, replacing my trusty (but 'not quiet') GentleTyphoon 3000 RPM 'push' fans.
> The XSPC 480 x 56 mm has a double-row design and 13 FPI.
> 
> I am not sure about the NF-F12s being able to get it done; it would certainly require push/pull. Also, I usually use electrical tape to seal the fan-rad 'gap', if any...


The old electrical tape sealant trick! So using the fins-per-inch figure, I can match a particular radiator to a fan's flow and static-pressure characteristics. I wouldn't mind doing push/pull; I was actually planning on it. I'm choosing Noctua just for how freaking silent they are! I have 3 NF-P12s and can't hear them at 100%, 1300rpm.


----------



## yzonker

gfunkernaught said:


> Can someone recommend a 480mm radiator for me? I'm getting four and will use noctua NF-F12s with them. Was thinking XR7 but I'll take some recommendations.


I have this one as an external; it was just in stock on Amazon, so I didn't do a lot of research. Massive improvement when added to my existing 360 and 280, though. Took a lot of cleaning.









Amazon.com: Thermaltake Pacific DIY Liquid Cooling System CL480 64mm Thick Copper Radiator CL-W192-CU00BL-A : Electronics
www.amazon.com





But don't get F12s. I've had those and Arctic P12s; the P12s are quieter and move as much air. I have P12s on the 480 (had the F12s on the 360 previously; it has A12s now). The Noctua A12s are the best of the three:

A12
P12
F12

In that order.


----------



## J7SC

yzonker said:


> I have this one as an external; it was just in stock on Amazon, so I didn't do a lot of research. Massive improvement when added to my existing 360 and 280, though. Took a lot of cleaning.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Amazon.com: Thermaltake Pacific DIY Liquid Cooling System CL480 64mm Thick Copper Radiator CL-W192-CU00BL-A (www.amazon.com)
> 
> 
> 
> 
> 
> But don't get F12s. I've had those and Arctic P12s. P12s are quieter and move as much air. I have P12s on the 480 (had the F12s on the 360 previously, have A12s on it now). Noctua A12s are the best of the 3.
> 
> A12
> P12
> F12
> 
> In that order.


Thanks for making me feel good ...I paid CAN$163 (+ taxes) for each of the three...mind you, they usually sell out quickly

@gfunkernaught ...to give you an idea about 'thickness' ...left is Corsair H150i Pro 360 rad



Spoiler


----------



## Wihglah

GRABibus said:


> Here is why I like the Bios EVGA RTX 3090 FTW3 ULTRA XOC 500W version 94.02.42.80.27 on Strix with stock air cooler.
> 
> 
> 
> 
> 
> 
> [email protected]@1.1V => Temp below 60°C.
> [email protected] => temp below 70°C
> Ambient 24°C-25°C
> Stock air cooler, repasted with conductonaut.
> Memory chips repadded with Thermalright ODYSSEY thermal pads.
> 
> I am really happy with those temps and frequencies.
> 
> Here is the V/F curve :
> 
> View attachment 2515151


Your frame rate is pretty low for 1440p. Need to work on that CPU overclock.


----------



## gfunkernaught

yzonker said:


> I have this one as an external, but it was just in stock on Amazon. Didn't do a lot of research. Massive improvement when added to my existing 360 and 280 though. Took a lot of cleaning.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Amazon.com: Thermaltake Pacific DIY Liquid Cooling System CL480 64mm Thick Copper Radiator CL-W192-CU00BL-A (www.amazon.com)
> 
> 
> 
> 
> 
> But don't get F12s. I've had those and Arctic P12s. P12s are quieter and move as much air. I have P12s on the 480 (had the F12s on the 360 previously, have A12s on it now). Noctua A12s are the best of the 3.
> 
> A12
> P12
> F12
> 
> In that order.


Which A12s? Tradition, redux, or chrome.max?


----------



## GRABibus

Wihglah said:


> Your frame rate is pretty low for 1440p. Need to work on that CPU overclock.


You can add 5% to 10% fps because GeForce Experience takes some resources.
My PBO overclock on the 5900X gives completely in-line scores in CB R20 and the Time Spy CPU test.

*Ray tracing is on ultra* (DLSS on quality).
All other settings are on ultra.

If I switch from my PBO OC to a per-CCX OC (4.65GHz/4.55GHz), I get more fps.
This game benefits from an all-core OC versus PBO.


----------



## gfunkernaught

J7SC said:


> Thanks for making me feel good  ...I paid CAN $ 163 (+ taxes) per, for each of the three...mind you, they usually sell out quickly
> 
> @gfunkernaught ...to give you an idea about 'thickness' ...left is Corsair H150i Pro 360 rad
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2515154


Yes I saw that. But I have to figure out how thick I can go considering I will end up going push/pull. The 1000D is big but the corners are tight for rads.


----------



## J7SC

gfunkernaught said:


> Yes I saw that. But I have to figure out how thick I can go considering that will end up going push/pull. The 1000D is big but the corners are tight for rads.


...1000D is great (in more ways than just LxWxH), but one day you will likely just go external MoRa(s)


----------



## KedarWolf

ttnuagmada said:


> EK active backplate for my 3090 strix has shipped!


I got an email today saying mine should ship July 7th.


----------



## Wihglah

GRABibus said:


> You can add 5% to 10% fps because GeForce Experience takes some resources.
> My PBO overclock on the 5900X gives completely in-line scores in CB R20 and the Time Spy CPU test.
> 
> *Ray tracing is on ultra* (DLSS on quality).
> All other settings are on ultra.
> 
> If I switch from my PBO OC to a per-CCX OC (4.65GHz/4.55GHz), I get more fps.
> This game benefits from an all-core OC versus PBO.


I just tried my rig on 1440p and was getting 150-175fps on that map.
Rig in sig


----------



## GRABibus

Wihglah said:


> I just tried my rig on 1440p and was getting 150-175fps on that map.
> Rig in sig


yes this is what I have (even more) without GeForce experience and all core OC (not PBO).


----------



## yzonker

J7SC said:


> Thanks for making me feel good  ...I paid CAN $ 163 (+ taxes) per, for each of the three...mind you, they usually sell out quickly


I paid $168 when it was in stock by Amazon. I do think it's a nice rad though. I haven't bought very many, but have no complaints other than working quite a while to get it clean. I have 4 Arctic P12's on it. Haven't bothered doing push/pull, as my water temp is so much better now that I decided I didn't care. I'd probably only gain 1C at best, other than being able to run lower fan speeds at the same delta I have now.


----------



## yzonker

gfunkernaught said:


> Which A12s? Tradition, redux, or chrome.max?


This one but they are stupid expensive to buy a bunch of them. I still say the Arctic P12 is what you want. Much cheaper and very good. 









Amazon.com: Noctua NF-A12x25 PWM, Premium Quiet Fan, 4-Pin (120mm, Brown) (www.amazon.com)


----------



## J7SC

yzonker said:


> I paid $168 when it was in stock by Amazon. I do think it's a nice rad though. I haven't bought very many, but have no complaints other than working quite a while to get it clean. I have 4 Arctic P12's on it. Haven't bothered doing push/pull, as my water temp is so much better now that I decided I didn't care. I'd probably only gain 1C at best, other than being able to run lower fan speeds at the same delta I have now.


Canadian 'rubles', to use Linus' expression > 

...yeah, I clean every new rad before use (and re-clean used ones) for every new build / application...worth it though, IMO. Every single new rad I bought over the last decade, from the TTs to XSPC to HWL, had some manufacturing 'crud' in it when new...


----------



## gfunkernaught

J7SC said:


> ...1000D is great (in more ways than just LxWxH), but one day you will likely just go external MoRa(s)


The reason I opted for the 1000D instead of just getting MoRas for my current setup is that with a MoRa, if I want to move the case, I have to move the rad with it, one at a time but still attached. If I had a permanent desk and the PC wasn't going anywhere, then yeah, MoRa. This thing isn't going anywhere. I'm sure there are ways to build something that could hold and house both the case and the MoRas, but eh. My 760T is getting jealous right now.


----------



## gfunkernaught

yzonker said:


> This one but they are stupid expensive to buy a bunch of them. I still say the Arctic P12 is what you want. Much cheaper and very good.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Amazon.com: Noctua NF-A12x25 PWM, Premium Quiet Fan, 4-Pin (120mm, Brown) (www.amazon.com)


This Arctic P12?








Amazon.com: ARCTIC P12 PWM PST CO - 120 mm Case Fan, PWM Sharing Technology (PST), Pressure-optimised, Dual Ball Bearing for Continuous Operation, Computer, 200-1800 RPM - Black (www.amazon.com)


----------



## J7SC

gfunkernaught said:


> This Arctic P12?
> 
> 
> 
> 
> 
> 
> 
> 
> Amazon.com: ARCTIC P12 PWM PST CO - 120 mm Case Fan, PWM Sharing Technology (PST), Pressure-optimised, Dual Ball Bearing for Continuous Operation, Computer, 200-1800 RPM - Black (www.amazon.com)


...even better value in the 5-pack (I paid about US$34 per 5-pack recently)


----------



## des2k...

Those Arctic P12s: if you have a bunch of them close together, they will resonate / hum. They seem to interfere with each other's airflow. I have 10 of them and had to separate them into 3 groups at different RPMs to get them to stop. No other fans do that.


----------



## gfunkernaught

Quick comparison between the NF-P12 $13.90 and Arctic P12 $11.02


----------



## Bobbylee

des2k... said:


> Those Arctic P12s: if you have a bunch of them close together, they will resonate / hum. They seem to interfere with each other's airflow. I have 10 of them and had to separate them into 3 groups at different RPMs to get them to stop. No other fans do that.


I’ve heard this a few times. I’ve got 12 of them and haven’t noticed it, but I don’t run them at very high rpm; no need, they perform very well.


----------



## gfunkernaught

des2k... said:


> Those Arctic P12s: if you have a bunch of them close together, they will resonate / hum. They seem to interfere with each other's airflow. I have 10 of them and had to separate them into 3 groups at different RPMs to get them to stop. No other fans do that.


That is troublesome. I am very sensitive to resonance. The three 1300rpm NF-P12's I have don't do that. It's bizarre how quiet these things are, even at 1300rpm.


----------



## J7SC

Bobbylee said:


> I’ve heard this a few times. I’ve got 12 of them and haven’t noticed it, but I don’t run them at very high rpm; no need, they perform very well.





gfunkernaught said:


> That is troublesome. I am very sensitive to resonance. The three 1300rpm NF-P12's I have don't do that. It's bizarre how quiet these things are, even at 1300rpm.


...I got over 40 of them - no 'hum' issue, either...



Spoiler


----------



## gfunkernaught

Gonna need another pump, as I've been reading that a single D5 might not be enough for 4x480. This should do it, right? I already have the EK pump/res D5 combo.








Amazon.com: EKWB EK-Quantum Inertia D5 PWM Pump, Digital RGB, Plexi (www.amazon.com)


----------



## yzonker

des2k... said:


> Those Arctic P12s: if you have a bunch of them close together, they will resonate / hum. They seem to interfere with each other's airflow. I have 10 of them and had to separate them into 3 groups at different RPMs to get them to stop. No other fans do that.


Maybe that's a problem with a lot of fans and depends on your case, etc.? I ditched the Noctua F12s and F14s because of the exact same problem. For me, the Noctua A12 and Arctic P12 were much better. The 4 on my 480mm rad don't resonate either.


----------



## yzonker

gfunkernaught said:


> That is troublesome. I am very sensitive to resonance. The three 1300rpm NF-P12's I have don't do that. It's bizarre how quiet these things are, even at 1300rpm.


After the last bunch of comments, maybe you should stick with what is working and I'll shut up. Lol

And just to clarify, this is the fan I had that resonated above 1000 rpm or so. Not quite the same one.







----------



## des2k...

Any of you tried Lego Builder's Journey on the 3090? It uses all of the MS DXR features with the new NVIDIA driver. At 4K max with quality DLSS it doesn't reach 60fps, with some dips into the 40s.

For OC, it doesn't scale at all. Playing with a 2160MHz core it was continuously 500W+.


----------



## gfunkernaught

yzonker said:


> After the last bunch of comments, maybe you should stick with what is working and I'll shut up. Lol
> 
> And just to clarify, this is the fan I had that resonated above 1000 rpm or so. Not quite the same one.


That resonance thing depends on, like they said, how close they are to each other, and also how and to what they're mounted. You good, homie! 

I'm leaning towards the 1700rpm NF-P12, but since the 1300rpm is so damn quiet, in a push/pull config the slightly lower airflow/static pressure could be made up versus the 1700rpm, but at double the noise? Idk, at 19dB I really can't hear them; 25dB might be more audible. I have them mounted on my ext rad, pulling. At 100% I have to get close to hear them, and even then all I hear is air moving, not being cut or chopped or warped. No tone. Idk if the same will be the case for the 1700rpm variant.
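For what it's worth, doubling the fan count doesn't double the perceived noise: identical incoherent sources add roughly +3dB per doubling. A quick sketch of the math (fan dB(A) ratings are nominal, so treat the result as a ballpark):

```python
import math

def combined_spl(levels_db):
    """Combine incoherent noise sources (e.g. several fans) into one level."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# Two 19 dB(A) fans together are ~22 dB(A), not 38:
print(round(combined_spl([19, 19]), 1))  # 22.0
# Eight of them in push/pull on two rads: ~28 dB(A)
print(round(combined_spl([19] * 8), 1))  # 28.0
```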


----------



## gfunkernaught

des2k... said:


> Any of you tried Lego Builder's Journey on the 3090? It uses all of the MS DXR features with the new NVIDIA driver. At 4K max with quality DLSS it doesn't reach 60fps, with some dips into the 40s.
> 
> For OC, it doesn't scale at all. Playing with a 2160MHz core it was continuously 500W+.


I haven't tried that, but have you or anyone else tried the Boundary Benchmark? I noticed about 10fps avg difference between the three DLSS modes, 47-57-67 for performance-balanced-quality respectively. The lowest I saw was 30fps on Quality DLSS, and with the 520W rbar bios.


----------



## des2k...

gfunkernaught said:


> I haven't tried that, but have you or anyone else tried the Boundary Benchmark? I noticed about 10fps avg difference between the three DLSS modes, 47-57-67 for performance-balanced-quality respectively. The lowest I saw was 30fps on Quality DLSS, and with the 520W rbar bios.


Sounds about right from when I ran that benchmark, but I don't have my fps data saved.
What res are you running it at?


----------



## gfunkernaught

des2k... said:


> Sounds about right when I ran that benchmark but I don't have my fps data saved.
> What res are you running it at ?


3840x2160. I saved the results, but they came out as frametimes, not fps. The numbers don't look like fps.
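If the log really is frametimes in milliseconds, converting to fps is just 1000 divided by each value; the run average should be total frames over total time rather than a mean of per-frame fps. A small sketch:

```python
def frametimes_to_fps(frametimes_ms):
    """Convert per-frame times in milliseconds to instantaneous fps."""
    return [1000.0 / ft for ft in frametimes_ms]

def average_fps(frametimes_ms):
    """Average fps over a run = total frames / total time elapsed."""
    return 1000.0 * len(frametimes_ms) / sum(frametimes_ms)

samples = [16.7, 20.0, 25.0]  # ms per frame
print([round(f, 1) for f in frametimes_to_fps(samples)])  # [59.9, 50.0, 40.0]
print(round(average_fps(samples), 1))  # 48.6
```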


----------



## Arizor

On the topic of resolution and FPS, has anyone noticed/can explain how and why the GPU clock goes lower the higher the resolution?

For example, CP2077: running performance DLSS, everything else pretty much maxed out, I hold a stable clock of approx 2010-2025MHz with +1500 mem.

In native 4K, my GPU downclocks around -100MHz. In DLSS Balanced, it's about -15MHz (i.e. one step); in DLSS Quality, around -30MHz (i.e. 2 steps).

My flaky hypothesis is that lower frame rates push the clock less.

ALSO, since it's Friday here in Oz, time for a rather dumb question I'm hoping someone can enlighten me on. In all your GPU-Z pics, there's a clear showing of the power in Watts. Yet my GPU-Z only ever shows percentage. How do I get it to show wattage? See attached.


----------



## gfunkernaught

Arizor said:


> On the topic of resolution and FPS, has anyone noticed/can explain how and why the GPU clock goes lower the higher the resolution?
> 
> For example, CP2077: running performance DLSS, everything else pretty much maxed out, I hold a stable clock of approx 2010-2025mhz with +1500mem.
> 
> In Native 4K, my GPU downclocks around -100mhz. In DLSS Balanced, it's about -15mhz (i.e. one step), in DLSS Quality, around 30mhz (i.e. 2 steps).
> 
> My flaky hypothesis is that lower frame rates push the clock less.
> 
> ALSO, since it's Friday here in Oz, time for a rather dumb question I'm hoping someone can enlighten me on. In all your GPU-Z pics, there's a clear showing of the power in Watts. Yet my GPU-Z only ever shows percentage. How do I get it to show wattage? See attached.


All cards have power limits. The harder you push the card, the more power it will use, and the more likely it is to slam into the limit, forcing the clocks down.

In GPU-Z, check the Sensors tab and scroll to the bottom. You will see watts.
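And if a tool only reports a TDP percentage, converting it to watts is just the percentage times the board power limit (350W for a reference 3090; higher on flashed BIOSes). Trivially:

```python
def tdp_percent_to_watts(percent, power_limit_w):
    """Convert a reported TDP percentage into board power in watts."""
    return percent / 100.0 * power_limit_w

# Reference RTX 3090 power limit is 350 W:
print(tdp_percent_to_watts(100.0, 350.0))           # 350.0
print(round(tdp_percent_to_watts(90.0, 350.0), 1))  # 315.0
```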


----------



## des2k...

gfunkernaught said:


> 3840x2160. I saved the results but they came out as frametimes not fps. The numbers don't look like fps.


There you go, 2115core +1018mem


----------



## yzonker

Anybody else see this slow performance loss in PR with the last few releases of drivers?









Result (www.3dmark.com)





I reverted back and got my old score. That's the first run in the compare.

And then even found a little more,









I scored 15 347 in Port Royal (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) (www.3dmark.com)


----------



## Shawnb99

gfunkernaught said:


> Can someone recommend a 480mm radiator for me? I'm getting four and will use noctua NF-F12s with them. Was thinking XR7 but I'll take some recommendations.


HWL GTX if you can find them


----------



## GRABibus

gfunkernaught said:


> I haven't tried that, but have you or anyone else tried the Boundary Benchmark? I noticed about 10fps avg difference between the three DLSS modes, 47-57-67 for performance-balanced-quality respectively. The lowest I saw was 30fps on Quality DLSS, and with the 520W rbar bios.


you mean 67-57-47 for performance-balanced-quality ?


----------



## gfunkernaught

GRABibus said:


> you mean 67-57-47 for performance-balanced-quality ?


Woops! Yes.


----------



## gfunkernaught

Shawnb99 said:


> HWL GTX if you can find them


I see what you mean. So far, from this quick list, the TT Pacific would be my first choice, but the 480GTX seems even better. That XSPC TX480 is only 20mm but has high density. How would that perform compared to the 480GTX?


----------



## Shawnb99

gfunkernaught said:


> I see what you mean. So far from this quick list the TT Pacific would be my first choice, but the 480GTX seems even better. That XSPC TX480 is only 20mm but has high density. How would that perform compared to the 480GTX.
> View attachment 2515220


With radiators it's all down to how fast you want to run your fans. If you're under 1200rpm your choices change compared to if you're over 1800rpm. Thickness generally means they can soak up more heat, but doesn't always mean they perform best under ideal conditions; thicker also tends to work better with push/pull. The XSPC TX480 is about midrange in performance, but the difference between best and worst is 5 degrees at most.

Check out Thermalbench for comparisons.









HardwareLabs Black Ice Nemesis 480GTX (thermalbench.com)


----------



## J7SC

gfunkernaught said:


> I see what you mean. So far from this quick list the TT Pacific would be my first choice, but the 480GTX seems even better. That XSPC TX480 is only 20mm but has high density. How would that perform compared to the 480GTX.
> View attachment 2515220


Generally speaking, the higher the FPI, the more static pressure is needed via the fans...but that is not the only consideration - thickness also matters; the TT CL 480 is a triple-row design, compared to the XSPC RX 480 and Hardwarelabs 480 GTX which are dual-row. So you really need push/pull for the TT, or loud fans (my GentleTyphoon 3k rpm can do it in push mode).

Unfortunately, most 'radiator comparisons' are kind of dated and either use older rad versions or omit new ones altogether, and only a few bother with 480mm and up. That said, if you can find decent prices for any of the three - TT, HWL or XSPC you are probably well-served re. quality - I run all three brands. Just pair them with appropriate fans that won't drive you crazy over the long haul. FYI, we have an unprecedented heatwave going on w/ records breaking / broken, and I have three machines / workstations running for work (all are AMD 16c and have high-PL GPU combos). I am glad that the cooling capacities via the big rads have 'excess capacity' to spare right about now...


----------



## gfunkernaught

J7SC said:


> Generally speaking, the higher the FPI, the more static pressure is needed via the fans...but that is not the only consideration - thickness also matters; the TT CL 480 is a triple-row design, compared to the XSPC RX 480 and Hardwarelabs 480 GTX which are dual-row. So you really need push/pull for the TT, or loud fans (my GentleTyphoon 3k rpm can do it in push mode).
> 
> Unfortunately, most 'radiator comparisons' are kind of dated and either use older rad versions or omit new ones altogether, and only a few bother with 480mm and up. That said, if you can find decent prices for any of the three - TT, HWL or XSPC - you are probably well-served re. quality - I run all three brands. Just pair them with appropriate fans that won't drive you crazy over the long haul. FYI, we have an unprecedented heatwave going on w/ records breaking / broken, and I have three machines / workstations running for work (all are AMD 16c and have high-PL GPU combos). I am glad that the cooling capacities via the big rads have 'excess capacity' to spare right about now...


Would the 1700rpm NF-P12 in a push/pull config perform well with high-density/thick rads? Could I go lower rpm since I'd be doing push/pull? Noise is a concern; I don't want them running too loud. Though most of the time, if the fans are ramping up, it means I'm playing a game with speakers or headphones on.


----------



## J7SC

gfunkernaught said:


> Would the 1700rpm NF-P12 in a push/pull config perform well with high-density/thick rads? Could I go lower rpm since I'd be doing push/pull? Noise is a concern; I don't want them running too loud. Though most of the time, if the fans are ramping up, it means I'm playing a game with speakers or headphones on.


...can't say for certain...while I have NF-P12s and like them, I use the Arctic P12 PST in push-pull on the big 480s. HOWEVER, given the fan blade count and design of the NF-P12s and their rpm range, I think they would work fine in push-pull on the big rads...maybe someone else here has them in push-pull on a thick rad w/ fairly high FPI?


----------



## inedenimadam

J7SC said:


> ...can't say for certain...while I have NF-P12 and like them, I use the Arctic P12 pst in push-pull on the big 480s.


 +stonks
I love the P12s. Running them in push/pull on a MORA-3. The price difference versus fancy fans with the same performance for this setup would have been a mortgage payment.


----------



## gfunkernaught

inedenimadam said:


> +stonks
> I love the P-12s. Running them in push/pull on a MORA-3. The price difference in fancy fans with the same performance for this set up would have been a mortgage payment.


Which P12s are you referring to? The price difference between the Noctua P12 and the Arctic P12 is $3 🤔


----------



## inedenimadam

gfunkernaught said:


> Which P12s are you referring to? The price difference between the noctua p12 and the Arctic p12 is $3 🤔


Arctic, at 5 for $30. Buy them by the box. The NF-P12s are also amazing fans at a good value, but cost double+ what I pay for the Arctics. Both are cheaper than the SL120-type fans that are so popular right now.


----------



## Arizor

What benchmarks do you folks use to test the efficacy of a GPU memory overclock for game purposes?

For example, in Port Royal memory seems to have a much higher ceiling for how effective a memory overclock is (+1500 seems to be the sweet spot), whilst game-based benchmarks (e.g. Bright Memory) show much smaller benefits (+750ish here).

So what do you folks use to gauge a good overclock?


----------



## J7SC

gfunkernaught said:


> Which P12s are you referring to? The price difference between the noctua p12 and the Arctic p12 is $3 🤔





inedenimadam said:


> Arctic at 5/$30. Buy them by the box. The NF-P12 are also amazing fans at a good value, but are double+ what I pay for the Arctic. Both are cheaper than the SL120s type fans that are so popular right now.


...apart from the price, the other advantage of buying the 5x value-pack P12s is that those have both male and female PWM connectors for daisy-chaining, unlike the P12s in the single box


----------



## ttnuagmada

Push/pull SW3's on GTX rads have worked extremely well for me.


----------



## jlodvo

Is it possible to run a 3090 below a 40°C max temp with water cooling and an ambient temp of 22°C? If yes, how many rads would it take to tame it?


----------



## ttnuagmada

jlodvo said:


> Is it possible to run a 3090 below a 40°C max temp with water cooling and an ambient temp of 22°C? If yes, how many rads would it take to tame it?


Optimus block and a 5C Delta T should get you there I think.
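To spell out the arithmetic: core temperature in a custom loop is roughly ambient, plus the liquid-over-ambient delta (what the radiators control), plus the core-over-liquid delta (what the block controls). The block delta below is an illustrative assumption, not a measurement:

```python
def core_temp_estimate(ambient_c, liquid_delta_c, block_delta_c):
    """Stack the thermal deltas in a custom loop to estimate core temp (°C)."""
    return ambient_c + liquid_delta_c + block_delta_c

# 22°C ambient, 5°C water-over-ambient, ~10°C core-over-water (good block):
print(core_temp_estimate(22, 5, 10))  # 37 -> under the 40°C target
# A weaker setup (8°C water delta, 14°C block delta) misses it:
print(core_temp_estimate(22, 8, 14))  # 44
```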


----------



## J7SC

jlodvo said:


> Is it possible to run a 3090 below a 40°C max temp with water cooling and an ambient temp of 22°C? If yes, how many rads would it take to tame it?


...depends on your loop setup (rad number, size, fans). One of my systems is a dual-loop setup (one loop for TR w/ Heatkiller IV Pro block; separate loop for 2x 2080 Ti w/ Aorus OEM blocks) and it consistently manages to keep the combined peak 760W GPUs in the mid-to-high 30s with 21 C ambient...GPU loop is 2x D5 pumps and 3x XSPC RX360s


----------



## Nizzen

ttnuagmada said:


> Optimus block and a 5C Delta T should get you there I think.


Running 350w bios helps too lol


----------



## ahnafakeef

GRABibus said:


> Here is why I like the Bios EVGA RTX 3090 FTW3 ULTRA XOC 500W version 94.02.42.80.27 on Strix with stock air cooler.
> 
> 
> 
> 
> 
> 
> [email protected]@1.1V => Temp below 60°C.
> [email protected] => temp below 70°C
> Ambient 24°C-25°C
> Stock air cooler, repasted with conductonaut.
> Memory chips repadded with Thermalright ODYSSEY thermal pads.
> 
> I am really happy with those temps and frequencies.
> 
> Here is the V/F curve :
> 
> View attachment 2515151


1. Any reason why I cannot replicate these results with the stock Strix BIOS?
2. What is that curve on the right? What does it do, and if it helps with a higher stable overclock, how do I set it up?

Also, I'm crashing in Valhalla with the following settings.









Boosted core clock stays at 2055 in Valhalla according to Afterburner. Can I get an even higher stable core clock with the stock BIOS? Or is flashing a different BIOS the only way forward?

Thank you.


----------



## Arizor

@ahnafakeef 

To speak specifically to the stock ASUS BIOS (assuming you've updated to "V4" with reBAR enabled):

1. Firstly you should probably allow your ASUS to access higher voltage by increasing "core voltage", but here's how to do it properly (see attachment with red arrow, make sure you select the right card design 'reference'). Then restart Afterburner and up core voltage to max.

2. @GRABibus ' graph is what we call a custom curve, he's done it to effectively "lock" his GPU voltage/clock at what he wants. You can access this via CTRL+F in Afterburner.


----------



## ahnafakeef

@Arizor Thank you for your response.

I have just updated the BIOS to v4. What exactly is reBAR, and what difference does it make in terms of overclocking?

As for core voltage, should I simply set it to +100 and leave it at that? Will the system automatically stay within the threshold of what is considered a safe core voltage? And should I have Force Constant Voltage turned on?

Also, if I set the voltage to +100 and the power limit to +123 (which are the max limits for each, as far as I can see), will GPU Boost automatically boost the core as high as possible without me having to do anything else? And should I leave the core clock and memory clock as-is in Afterburner? Or is that where the curve comes in?

I've only ever increased the core clock with the slider in AB. If the curve is a better way to go about it, please let me know how to set a custom curve to get a better overclock.

Also, if there is anything else I need to do in order to better configure Afterburner or Nvidia Control Panel to help with the overclock, do let me know.

Thank you.


----------



## lmfodor

Hi, I would like to ask your guidance on how to put together a list of parts to mod my new Strix White: a good waterblock, an active backplate, and fundamentally which flexible tubing, compression fittings, pump and reservoir I should buy.

I've never built a closed loop, just installed an AIO for the CPU. Perhaps what limited me the most was the issue of the tubes: cutting them, bending them with heat, or polishing them as I've seen done, and then how to secure them well.

I have been watching several videos, including independent ones on how to disassemble the Strix and fit a waterblock, as well as one from Linus (with several errors in the installation) and another from der8auer, where he tested the Corsair Hydro X XG7. That seems a good waterblock, at least an easy one, because it comes with all the thermal pads pre-installed; it's just very expensive. Later I saw videos of another one, the Bykski, which seems new. I use the card to play games, so I want to cool the memory and the GPU so that they perform better, simply that. Maybe just changing the thermal pads would do it, but I want to give water cooling a go.

Can someone help me put together a list of everything I should buy? Fundamentally the tubes and sizes, so that everything fits well. It would be my first major mod.

Corsair seems very expensive but easy (I guess). EK has the brand. When I look at the flexible tubes, I realize my only concern would be how to seal them and connect them well to the pump and radiator. I also see premixed liquid, so everything seems easier than I thought.

I know you here are pros! The world record and other amazing things. I'm a noob, so I really appreciate your help putting together a bill of materials so I can make the list to buy.

Thanks! Martin


Sent from my iPhone using Tapatalk Pro


----------



## lmfodor

Arizor said:


> @ahnafakeef
> 
> To speak specifically to the stock ASUS BIOS (assuming you've updated to "V4" with reBAR enabled):
> 
> 1. Firstly you should probably allow your ASUS to access higher voltage by increasing "core voltage", but here's how to do it properly (see attachment with red arrow, make sure you select the right card design 'reference'). Then restart Afterburner and up core voltage to max.
> 
> 2. @GRABibus ' graph is what we call a custom curve, he's done it to effectively "lock" his GPU voltage/clock at what he wants. You can access this via CTRL+F in Afterburner.
> 
> View attachment 2515323


Hi Arizor, I have a question about your suggested settings with Afterburner. Why would we need to set "force constant voltage"? I always left that option unchecked. Currently I'm running an undervolt profile. Should I set that setting?

Thanks!


Sent from my iPhone using Tapatalk Pro


----------



## jura11

@lmfodor 

What case do you have right now? Would you consider running an external radiator like the MO-RA3 360mm or 420mm?

If your case can house at least 2x 360mm radiators then you should be okay; for radiators check HWLabs, XSPC or Alphacool, depending on your case and your budget.

For the waterblock, get a Bykski waterblock with an active backplate. It will cost less than a waterblock from EK, Heatkiller, Optimus or Corsair etc., and Bykski waterblocks perform very well; I'm running two of them on both of my RTX 3090 GamingPros.

For the pump and reservoir, a D5 pump and reservoir from Bykski or Barrow will do the job. For tubing get EK ZMT or EPDM tubing or Tygon A-60-G, and for fittings I would recommend Bykski or Barrow 16/10mm compression fittings.

For the CPU waterblock, depending on the CPU, Bykski or Barrow make good waterblocks as well and they perform well too.

Hope this helps 

Thanks, Jura


----------



## Arizor

@ahnafakeef @lmfodor 

Don't worry about Force Constant Voltage; I was just testing that. It has nothing to do with my recommendation, so leave it unticked.

reBAR is Resizable BAR, which basically lets the CPU access the GPU's memory in larger chunks. Don't worry about it; it has very little effect at the moment in any game, but in the future some games could get a 2-10% boost (AC Valhalla already does a bit).

With Voltage and Power Limit set to max, your GPU boost will go higher, but not as high as your individual GPU _might_ with a custom curve.

As @GRABibus recommends, I've found the EVGA FTW BIOS to be the best for the Rog Strix, for game FPS.

At the moment I'm playing CP2077, which is _very_ sensitive to GPU core overclocks; whilst I can overclock my VRAM to +1500, my GPU can basically hit 2010 without crashing. Any higher and it's crash city anywhere from 10 mins to 1 hour.

But yes, a custom curve is always best, all other things being equal. Your memory clock cannot be set within the curve; you can only do that on the main Afterburner interface. As with GPU cores, everyone's memory is a bit of a lottery, but mine can do +1500 without issue, so try +1000 and test frame rates in benchmarks, going higher until you see the score dropping.
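That "raise the VRAM offset until the score drops" search can be sketched in a few lines. Everything below (the offsets, the scores, the function name) is invented for illustration, since the real loop is you re-running a benchmark by hand; the stop signal is a falling score because GDDR6X error correction eats performance before outright instability appears:

```python
# Hypothetical sketch of the "raise VRAM offset until the score drops" search.
# GDDR6X error correction makes scores fall before crashes appear, so a
# dropping benchmark score is the stop signal rather than a crash.

def best_memory_offset(results):
    """Given benchmark scores keyed by VRAM offset (MHz), return the
    last offset before the score starts dropping."""
    offsets = sorted(results)
    best = offsets[0]
    for off in offsets[1:]:
        if results[off] < results[best]:
            break  # error correction is eating the gains; stop here
        best = off
    return best

# Invented example scores from successive benchmark runs:
scores = {1000: 14200, 1200: 14350, 1400: 14410, 1600: 14100}
print(best_memory_offset(scores))  # the last offset before scores fell
```

Nothing here touches the card; it only mirrors the search procedure as described.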




ahnafakeef said:


> @Arizor Thank you for your response.
> 
> I have just updated the BIOS to v4. What exactly is rebar, and what difference does it make in terms of overclocking?
> 
> As for core voltage, should I simply set it to +100 and leave it at that? Will the system automatically stay within the threshold of what is considered a safe core voltage? And should I have Force Constant Voltage turned on?
> 
> Also, If I set the voltage to +100 and power limit to +123 (which are the max limits for each as I can see), will GPU Boost automatically boost the core as high as possible without me having to do anything else? And should I leave core clock and memory clock as is in Afterburner? Or is that where the curve comes in?
> 
> I've only ever increased the core clock with the slider on AB. If the curve is a better way to go about it, please let me know how to set a custom curve to get a better overclock.
> 
> Also, if there is anything else I need to do in order to better configure Afterburner or Nvidia Control Panel to help with the overclock, do let me know.
> 
> Thank you.


----------



## Falkentyne

ahnafakeef said:


> @Arizor Thank you for your response.
> 
> I have just updated the BIOS to v4. What exactly is rebar, and what difference does it make in terms of overclocking?
> 
> As for core voltage, should I simply set it to +100 and leave it at that? Will the system automatically stay within the threshold of what is considered a safe core voltage? And should I have Force Constant Voltage turned on?
> 
> Also, If I set the voltage to +100 and power limit to +123 (which are the max limits for each as I can see), will GPU Boost automatically boost the core as high as possible without me having to do anything else? And should I leave core clock and memory clock as is in Afterburner? Or is that where the curve comes in?
> 
> I've only ever increased the core clock with the slider on AB. If the curve is a better way to go about it, please let me know how to set a custom curve to get a better overclock.
> 
> Also, if there is anything else I need to do in order to better configure Afterburner or Nvidia Control Panel to help with the overclock, do let me know.
> 
> Thank you.


 The voltage slider doesn't increase the voltage. It only allows another V/F point on the curve to be accessed, which will increase your core clock by another 15 MHz. If you open the curve editor, you will see that the highest frequency your card reaches corresponds to the 1.056v-1.075v voltage points. Setting the slider to 100 allows the 1.081v-1.10v point to be used if the card is not power limited, and also increases the core by +15 as it follows the shape of the graph.
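As a toy illustration of that point-selection behaviour (the curve values below are made up; every card's V/F table differs):

```python
# Toy model of GPU Boost picking a point on the V/F curve, as described above.
# Curve values are invented for illustration; real curves vary per card.

VF_CURVE = [  # (VID in volts, core clock in MHz), ascending
    (1.031, 1965),
    (1.056, 1980),
    (1.075, 1995),
    (1.081, 2002),
    (1.100, 2010),
]

def boost_point(voltage_slider_pct, power_limited=False):
    """Pick the V/F point GPU Boost would settle on.

    At 0% the 1.081-1.10 V points are locked out; at 100% they open up,
    which is where the extra ~+15 MHz comes from. A power-limited card
    falls down the curve instead of using the top point.
    """
    cap = 1.075 if voltage_slider_pct == 0 else 1.100
    allowed = [p for p in VF_CURVE if p[0] <= cap]
    return allowed[0] if power_limited else allowed[-1]

print(boost_point(0))    # highest point with the slider at 0%
print(boost_point(100))  # one tier higher with the slider at 100%
```

Nothing here talks to the driver; it only mirrors the unlock-one-more-point behaviour described above.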


----------



## ahnafakeef

@Arizor and @Falkentyne 

Thank you for your responses.

So in essence, I'm not fully utilising the maximum voltage available if I'm not using a custom curve?

How do I set a custom curve? And do I leave core clock untouched at +0 on the main interface?

As for V/F points on the curve, I can see the max voltage is 1250mV. And it didn't change even when I increased core voltage to +100. Should my targeted core clock correspond to 1250mV, or something less? Can the card even hit 1250mV from Afterburner, and should I take precautions so that it doesn't in order to prevent damage to the card?

Last but not least - I've been increasing core clock from the main interface. Max stable I reached was 2055 with +50 on the core and +123 power limit. However, now even with voltage increased to +100, it crashes at the same point it used to before. I'm assuming this is because the voltage slider isn't actually increasing the voltage? If not, how do I implement a higher voltage and attain a higher core clock?

Thank you.


----------



## lmfodor

jura11 said:


> @lmfodor
> 
> What case do you have right now? Would you consider running an external radiator like the MO-RA3 360mm or 420mm?
> 
> If your case can house at least 2x 360mm radiators then you should be okay. For radiators, check HWLabs, XSPC or Alphacool, depending on your case and your budget.
> 
> For waterblocks, get a Bykski waterblock with an active backplate; it will cost less than a waterblock from EK, Heatkiller, Optimus or Corsair etc, and Bykski blocks perform very well. I'm running two of them on my RTX 3090 GamingPros.
> 
> For pump and reservoir, a D5 pump and reservoir from Bykski or Barrow will do the job. For tubing, get EK ZMT or EPDM tubing or Tygon A-60-G, and for fittings I would recommend Bykski or Barrow 16/10mm compression fittings.
> 
> For the CPU waterblock, depending on the CPU, Bykski or Barrow make good blocks as well, and they perform well too.
> 
> Hope this helps
> 
> Thanks, Jura


Hi Jura! I have a Phanteks P500A, and I think it has the space. The specs say it supports 420mm at the front, and I'm actually using a 360 on top, so I think it would be fine. I've seen many configurations with a "block" of a lot of fans. If it helps, I have a little balcony and could maybe put an external radiator outside if it really improves performance. 

Let me find all the parts you suggested on Amazon and share the list here to check if anything is missing. I live outside the US and will be traveling there in August for a month, so I want to buy all or most of the parts in the US. The tubing, for me, is the key part. I don't want to bend hard tubing because I don't have the tools and I'm not comfortable doing that. The most critical part for me will be disassembling the Strix and seating it properly on the waterblock with the right thickness of thermal pads. The truth is, I really want to lower the memory temps. I'd love a good closed loop that lets me run more power and a stable max frequency without worrying about the memory junction. I will never mine. 
Also, I really like the flexible tubing I've seen der8auer or Linus use, like an AIO. I don't care much about aesthetics, more about functionality. If I can use this new setup for the processor too, that would be great. Right now I'm running an EK 360 AIO and it works well; it's not a CLC, but it does the job.

Let me put all the parts here to double-check whether anything is missing and whether I'll have problems with the fittings or tubing, to avoid leaks. 

Thanks a lot for your help! 


Sent from my iPhone using Tapatalk Pro


----------



## Falkentyne

ahnafakeef said:


> @Arizor and @Falkentyne
> 
> Thank you for your responses.
> 
> So in essence, I'm not fully utilising the maximum voltage available if I'm not using a custom curve?
> 
> How do I set a custom curve? And do I leave core clock untouched at +0 on the main interface?
> 
> As for V/F points on the curve, I can see the max voltage is 1250mV. And it didn't change even when I increased core voltage to +100. Should my targeted core clock correspond to 1250mV, or something less? Can the card even hit 1250mV from Afterburner, and should I take precautions so that it doesn't in order to prevent damage to the card?
> 
> Last but not least - I've been increasing core clock from the main interface. Max stable I reached was 2055 with +50 on the core and +123 power limit. However, now even with voltage increased to +100, it crashes at the same point it used to before. I'm assuming this is because the voltage slider isn't actually increasing the voltage? If not, how do I implement a higher voltage and attain a higher core clock?
> 
> Thank you.


Using a custom curve can decrease your FPS if you don't do it right.
There are two clocks. Requested clocks and effective clocks. Effective clocks are what controls your actual performance (these are also known as PLL clocks). If you mess with the curve in the wrong way at the upper voltage points, your effective clocks will drop way below your requested clocks, because the NVVDD and VID voltages will differ too much from each other.

There are also two voltages. There's VID and actual NVVDD (not just uncore or SRAM volts). Or maybe MSVDD.
VID is like VID on Intel CPUs: it's the requested voltage. NVVDD (and/or MSVDD?) are internal voltages. You have no access to changing NVVDD (or is it MSVDD?) without external hardware soldered to the board (or a Galax HOF (OC Lab?) or Kingpin card). If NVVDD and VID differ too much from each other, effective clocks will drop down to whatever the controller thinks is the proper value.

You'll notice if you look at the curve when running a game and you are _NOT_ power limited, the curve will keep adjusting itself and the voltage (VID) point keeps cycling between a range of values on that "frequency tier", e.g. 1.056v-1.069v, or 1.069v-1.10v if 100% voltage slider is set (+15 mhz core clock).

Note that I'm not even sure anymore if MSVDD and SRAM voltage are the same thing or not.
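Since effective clocks are what actually set performance, one practical check is to log requested vs effective clocks (HWiNFO exposes both) and flag samples where they diverge. A minimal sketch, with the 30 MHz tolerance chosen arbitrarily and the sample values invented:

```python
def effective_clock_healthy(requested_mhz, effective_mhz, tolerance_mhz=30):
    """Return True if effective clocks track requested clocks.

    A persistent large gap is the failure mode described above: the
    curve was pushed too far, VID and the internal voltage disagree,
    and the card silently runs slower than the clock it reports.
    """
    return (requested_mhz - effective_mhz) <= tolerance_mhz

# Invented (requested, effective) samples as you might log them from HWiNFO:
samples = [(2010, 2004), (2010, 1995), (2010, 1860)]
flags = [effective_clock_healthy(r, e) for r, e in samples]
print(flags)  # the 150 MHz gap in the last sample gets flagged
```

This is only a log-checking aid; it can't tell you *why* the clocks sagged, just that they did.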


----------



## ahnafakeef

Falkentyne said:


> Using a custom curve can decrease your FPS if you don't do it right.
> There are two clocks. Requested clocks and effective clocks. Effective clocks are what controls your actual performance (these are also known as PLL clocks). If you mess with the curve in the wrong way at the upper voltage points, your effective clocks will drop way below your requested clocks, because the NVVDD and VID voltages will differ too much from each other.
> 
> There are also two voltages. There's VID and actual NVVDD (not just uncore or sram volts). Or maybe MSVDD.
> VID is like VID on intel CPU's. It's requested voltage. NVVDD (and /or MSVDD?) are internal voltages. You have no access to changing NVVDD (or is it MSVDD?) without external hardware soldered to the board (or a Galax HOF (OCLab?) or Kingpin card). If NVVDD and VID differ too much from each other, effective clocks will drop down to the one with what the controller thinks is the proper value.
> 
> You'll notice if you look at the curve when running a game and you are _NOT_ power limited, the curve will keep adjusting itself and the voltage (VID) point keeps cycling between a range of values on that "frequency tier", e.g. 1.056v-1.069v, or 1.069v-1.10v if 100% voltage slider is set (+15 mhz core clock).
> 
> Note that I'm not even sure anymore if MSVDD and SRAM voltage are the same thing or not.


I've never delved this deep into voltage control before. Among the ones you've mentioned, which are the ones I can actually control? And how do I configure them appropriately to benefit from it?

Also, I gave the custom curve a shot. Please check if I'm doing this right.









The monitoring figures on the top left are after a Valhalla benchmark run. Notice how the core clock abruptly drops at some points. Why does this happen, and is it possible to attain a constant core clock without these dips dropping FPS at random points while playing?

Also, how do I find out if the card is actually hitting the highest safe voltage it can hit under stress? Because I have a feeling I'm not hitting the 1.1v mark yet with my current settings.

Thank you.


----------



## Falkentyne

ahnafakeef said:


> I've never delved this deep into voltage control before. Among the ones you've mentioned, which are the ones I can actually control? And how do I configure them appropriately to benefit from it?
> 
> Also, I gave the custom curve a shot. Please check if I'm doing this right.
> View attachment 2515380
> 
> 
> The monitoring figures on the top left are after a Valhalla benchmark run. Notice how the core clock abruptly drops at some points. Why does this happen, and is it possible to attain a constant core clock without these dips dropping FPS at random points while playing?
> 
> Also, how do I find out if the card is actually hitting the highest safe voltage it can hit under stress? Because I have a feeling I'm not hitting the 1.1v mark yet with my current settings.
> 
> Thank you.


I can't answer this question.
The voltage points and the frequency at a certain "tier" depend purely on the voltage points of the previous tier. If you move them too far up, VID is out of balance from what the card expects, and the effective clocks drop drastically. You can pause Heaven or Superposition (but it has to be something NOT at the power limit, otherwise you will throttle to a much lower v/f point) and then check what happens when you screw with the curve.

You have _NO_ ability to control voltage directly without custom hardware. VID is not voltage. Controlling voltage directly requires programming the VRM controller. And that requires special firmware to do that, just like with a CPU motherboard, e.g. Intel. Controlling Vcore directly bypasses "VID" and sends a voltage directly to the VRM, as opposed to changing VID or the V/F points (e.g. Intel 10th/11th gen, or many laptops)

Set the voltage to +100% and then "Lock" the 1.10v on the stock curve manually with 'L' on the V/F curve on MSI Afterburner. Make sure you are NOT power limited. Then run a test. You will notice that even though you have 1.1v locked, the "VID" will change from 1.10v to 1.069v then cycle back, sometimes if there is a hitch or interruption, suddenly going to an intermediate point. That's why.


----------



## ahnafakeef

Falkentyne said:


> I can't answer this question.
> The voltage points and the frequency at a certain "tier" depend purely on the voltage points of the previous tier. If you move them too far up, VID is out of balance from what the card expects, and the effective clocks drop drastically. You can pause Heaven or Superposition (but it has to be something NOT at the power limit, otherwise you will throttle to a much lower v/f point) and then check what happens when you screw with the curve.
> 
> You have _NO_ ability to control voltage directly without custom hardware. VID is not voltage. Controlling voltage directly requires programming the VRM controller. And that requires special firmware to do that, just like with a CPU motherboard, e.g. Intel. Controlling Vcore directly bypasses "VID" and sends a voltage directly to the VRM, as opposed to changing VID or the V/F points (e.g. Intel 10th/11th gen, or many laptops)
> 
> Set the voltage to +100% and then "Lock" the 1.10v on the stock curve manually with 'L' on the V/F curve on MSI Afterburner. Make sure you are NOT power limited. Then run a test. You will notice that even though you have 1.1v locked, the "VID" will change from 1.10v to 1.069v then cycle back, sometimes if there is a hitch or interruption, suddenly going to an intermediate point. That's why.


So the best I can do with the curve is to "ask" for a certain core clock for different voltage levels, but the voltage itself will still be determined by the card/system itself?

If yes, am I correct in assuming that my "asking" core clock for each point on the curve needs to be stable at the corresponding voltage level? How do I find out what frequency is stable at each voltage point?

And if the VID/voltage fluctuates randomly, how do I get the card to stay constantly at a certain core clock/frequency under stress?

Can the card go above 1.1v? If not, should all the points beyond 1.1v be at the same frequency level? Or does it not matter since the card won't even go above 1.1v?

This is what my current curve looks like. Please advise if I need to make any changes or if I can optimise it even further. I'm targeting to stabilise 2175 at 1.1v like @GRABibus.









Thank you.


----------



## Arizor

@ahnafakeef the Strix, as far as I can push it, seems to be locked to a maximum of 1.1V when the voltage is fully unlocked, regardless of BIOS. I imagine only a hardware mod will allow more.

You can only know stability through hours of testing.

You get the frequency/voltage to stay at a certain point in two ways: firstly, low temps; secondly, a well-plotted curve.

edit: also bear in mind max frequency changes based on program demands. For example, 4K will run at a lower frequency than 1440p; using raytracing will force a lower frequency than pure rasterisation, and so on.


----------



## GRABibus

ahnafakeef said:


> So the best I can do with the curve is to "ask" for a certain core clock for different voltage levels, but the voltage itself will still be determined by the card/system itself?
> 
> If yes, am I correct in assuming that my "asking" core clock for each point on the curve needs to be stable at the corresponding voltage level? How do I find out what frequency is stable at each voltage point?
> 
> And if the VID/voltage fluctuates randomly, how do I get the card to stay constantly at a certain core clock/frequency under stress?
> 
> Can the card go above 1.1v? If not, should all the points beyond 1.1v be at the same frequency level? Or does it not matter since the card won't even go above 1.1v?
> 
> This is what my current curve looks like. Please advise if I need to make any changes or if I can optimise it even further. I'm targeting to stabilise 2175 at 1.1v like @GRABibus.
> View attachment 2515459
> 
> 
> Thank you.


If you set 2175MHz@1.1V, depending on your cooling, you will have some throttling which will make you stabilize below 2175MHz.
I am on the stock air cooler. I repasted with Conductonaut liquid metal, and I need to set 2175MHz@1.1V to stabilize at 2160MHz in games, at 24-25°C ambient.

Please note also that I have an offset of +145MHz on the core, and then I set the point at 1.1V.

Of course, stability in games at these settings will depend on silicon quality.


----------



## sepia21

Hi, first of all, I'm sorry if this is a repeated question. I searched the discussion, but after 7 pages of results I couldn't find anything. I have the Zotac 3090 Trinity, which is water-cooled. I flashed the XOC BIOS so I have a higher power limit now that I have more temperature headroom to play with. But after flashing the BIOS, my memory frequency was stuck at max and didn't behave properly. I searched and found that I have to download NVIDIA Inspector DwM 3.5.0.0 and turn off "CUDA - Force P2 State". But even after doing this, and even reinstalling the driver and restarting the PC multiple times, nothing changed and the frequency stayed maxed out. I don't know what I'm doing wrong or where the problem is. I would appreciate it if someone could help me with this. 
By the way, I'm using the XOC BIOS that is posted on the first page of this discussion.


----------



## jura11

@sepia21 

Yup, it's normal for an XOC BIOS to run the VRAM at max speed. Create an MSI Afterburner profile with the VRAM set to -502MHz and you should see the VRAM downclock at idle; when you want to play a game, render or mine, switch back to your favourite profile 

Hope this helps 

Thanks, Jura


----------



## sepia21

jura11 said:


> @sepia21
> 
> Yup, it's normal for an XOC BIOS to run the VRAM at max speed. Create an MSI Afterburner profile with the VRAM set to -502MHz and you should see the VRAM downclock at idle; when you want to play a game, render or mine, switch back to your favourite profile
> 
> Hope this helps
> 
> Thanks, Jura


Ah, I didn't know that. Thank you so much. I think I will revert back to official Zotac bios.


----------



## jura11

sepia21 said:


> Ah, I didn't know that. Thank you so much. I think I will revert back to official Zotac bios.


Most of the XOC BIOSes from KPE or Asus run the VRAM at max speed. I have run an XOC BIOS on an RTX 2080 Ti Strix as well, and there too the VRAM ran at full speed. 

I'm running the XOC 1000W BIOS capped at 75% on my Palit RTX 3090 GamingPro since the day it was released, with no issues to date. 

Hope this helps 

Thanks, Jura


----------



## newls1

Quick question if I may, please... Just installed my Bykski full-cover waterblock with active backplate on my RTX 3090 EVGA FTW3 Ultra, and was wondering if the circled temp readings in HWiNFO64 are actually reading somewhat accurate memory temps, because if so, this is pretty incredible and I only had to play the thermal pad game once. Whatcha think?










This is after a gaming session of FC5.... temps seem pretty damn good!


----------



## Nizzen

newls1 said:


> quick question if i may please.... Just installed my Bykski FCWB with active backplate on my RTX 3090 eVGA FTW3 Ultra, and was wondering if the circled temp readings in Hwinfo64 are actually reading somewhat accurate mem temp readings, cause if so, this is pretty incredible and only had to play the thermal pad game 1 time. Whatcha think?
> 
> View attachment 2515499
> 
> 
> *This is after a gaming session of FC5.... temps seem pretty damn good!
> 
> View attachment 2515500
> *


Nice cold water too


----------



## newls1

Nizzen said:


> Nice cold water too


just room temp. this is all ambient cooling.


----------



## PLATOON TEKK

Just a heads up to those debating Galax. The two OC labs perform like ass compared to the KPs, also didn’t ship with XOC bios. To add insult to injury one of the screens already broke itself 🤦‍♂️. Am definitely sticking to EVGA next round.

Edit: when I say xoc bios, I don’t mean the two limited 1000w galax bios that are out.


----------



## J7SC

PLATOON TEKK said:


> Just a heads up to those debating Galax. The two OC labs perform like ass compared to the KPs, also didn’t ship with XOC bios. To add insult to injury one of the screens already broke itself 🤦‍♂️. Am definitely sticking to EVGA next round.
> 
> 
> View attachment 2515515


Well, that kind of sucks, but I was wondering about Galax's 'market segmentation' with various sub-models by geography this time around, per the attached... I didn't think the OC Labs would have performance deficiencies, though


----------



## PLATOON TEKK

J7SC said:


> Well, that kind of sucks, but I was wondering about Galax' 'market segmentation' w/ various sub-models by geography this time around, per attached...didn't think the OC Labs would have performance deficiencies, though
> 
> View attachment 2515516


I was genuinely surprised myself. HOF 2080 Ti vs KP was all HOF in terms of results for me. These can't even hold above 2050MHz in most scenarios. Pretty shocking. Definitely never again.

Either I got a dud (twice), or the magic is in the actual XOC BIOS that's impossible to get unless you're OGs, apparently.


----------



## J7SC

PLATOON TEKK said:


> Was genuinely surprised myself. Hof 2080ti vs KP was all HOF in terms of results for me. These can’t even hold above 2050mhz in most scenarios. Pretty shocking.Definitely never again.
> 
> either I got a dud (twice) or the magic is in the bios that’s impossible to get


...small comfort perhaps, but I still think Galax HoF OCL has the nicest PCB !


----------



## gfunkernaught

jura11 said:


> Most of the XOC BIOSes from KPE or Asus run the VRAM at max speed. I have run an XOC BIOS on an RTX 2080 Ti Strix as well, and there too the VRAM ran at full speed
> 
> I'm running the XOC 1000W BIOS capped at 75% on my Palit RTX 3090 GamingPro since the day it was released, with no issues to date
> 
> Hope this helps
> 
> Thanks, Jura


I guess you opted out of using reBAR? After I move my system into my new case I want to try the 1kW BIOS again. The only real issue I had with that BIOS was stuttering in CP2077. Once I moved to the reBAR 520W BIOS, the stuttering was gone. 

Caveats... everywhere!


----------



## PLATOON TEKK

J7SC said:


> ...small comfort perhaps, but I still think Galax HoF OCL has the nicest PCB !


Haha thanks. Agreed! That white pcb never gets old.


----------



## degenn

PLATOON TEKK said:


> Just a heads up to those debating Galax. The two OC labs perform like ass compared to the KPs, also didn’t ship with XOC bios. To add insult to injury one of the screens already broke itself 🤦‍♂️. Am definitely sticking to EVGA next round.
> 
> 
> View attachment 2515515


I use the GALAX bios on my Kingpin


----------



## jura11

gfunkernaught said:


> I guess you opted out of using rBar? After I move my system into my new case I want to try the 1kw bios again. The only real issue I had with that bios was stuttering in cp2077. Once I moved to the rbar 520w bios, stuttering gone.
> 
> Caveats...everywhere!


I think I tested the reBAR BIOS for about 2 days, but I'm not sure on that at all. In most cases I was power limited, so I went back to the XOC 1000W BIOS, which works for me. Sadly there is no XOC 1000W BIOS with ReBAR, which is fine with me; I just hope someone will leak it to the public later on, though I understand the people who don't want to. 

You will see whether the XOC 1000W BIOS is worth running for you.

Hope this helps 

Thanks, Jura


----------



## Arizor

Just repeating this advice for anyone working in Blender or similar programs with a flashed BIOS (I'm using the EVGA FTW on my Strix): if you don't underclock your VRAM and core a bit in Afterburner, you'll get computer resets quite regularly. Blender in particular hates overclocks, even factory ones, and will power-spike reset your PC if you don't underclock first.



jura11 said:


> @sepia21
> 
> Yup, it's normal for an XOC BIOS to run the VRAM at max speed. Create an MSI Afterburner profile with the VRAM set to -502MHz and you should see the VRAM downclock at idle; when you want to play a game, render or mine, switch back to your favourite profile
> 
> Hope this helps
> 
> Thanks, Jura


----------



## PLATOON TEKK

degenn said:


> I use the GALAX bios on my Kingpin


Haha, whatever works for you. If it's the 1000W "XOC" that's out, unfortunately it has some hidden limits in place and no reBAR. I even have voltage adjustment, but the cards just don't fare a tenth as well as the KPs do (even before water, at similar temps).


----------



## Zaudi

Question...
If I compare my 3090 FE BIOS to another 3090 FE BIOS (after the reBAR update),
NVFlash shows me this:









If someone else does the same, his NVFlash shows this:










After he flashes my BIOS on his card, it looks like this:









Any idea why, and what do my mismatches mean?
If I put his "PASS" BIOS on my card, everything still looks like the first picture on my GPU/system....

What is wrong with my system/GPU?
I already clean-installed 2 different NVIDIA drivers with DDU.
And I tested every vBIOS I could find.

My card works great; I can run 2000/2115 MHz without crashing.
But I don't understand the mismatches.


----------



## Falkentyne

Zaudi said:


> Question...
> If I compaire my 3090 FE Bios to another 3090 FE BIOS (after reBar Update)
> NVFlash showes me that:
> View attachment 2515551
> 
> 
> If someone other does the Same, his NVFlash shows that:
> View attachment 2515553
> 
> 
> 
> After he does my Bios on his Card, it looks like that:
> View attachment 2515552
> 
> 
> Any idea why? and what does my mismatches mean?
> If I but his BIOS on my Card everything is like the first picture....
> 
> What is wrong with my system/GPU?
> I already installed 2 different NVIDIA drivers with DDU new and clean.
> And I tested every vBios I could find.
> 
> My Card works great 2000/2115 MHz it does! But I don't understand that mismatches.


Simple answer.
Stop worrying about it.
You aren't a programmer. Hardly anyone here is, and those who are aren't talking. Your card works; enjoy the fact that you have a card.


----------



## geriatricpollywog

PLATOON TEKK said:


> Just a heads up to those debating Galax. The two OC labs perform like ass compared to the KPs, also didn’t ship with XOC bios. To add insult to injury one of the screens already broke itself 🤦‍♂️. Am definitely sticking to EVGA next round.
> 
> Edit: when I say xoc bios, I don’t mean the two limited 1000w galax bios that are out.
> 
> View attachment 2515515


The Galax HOF is made in China, not Taiwan like the KP, so there is an upper limit to how good the quality control can be.


----------



## West.

Have been running a normal HOF with the stock BIOS (420W?) for a while.
Curious if there's been any release of the OC Lab BIOS with a higher power limit (500W?)?
Thanks!


----------



## sepia21

jura11 said:


> I think I tested the reBAR BIOS for about 2 days, but I'm not sure on that at all. In most cases I was power limited, so I went back to the XOC 1000W BIOS, which works for me. Sadly there is no XOC 1000W BIOS with ReBAR, which is fine with me; I just hope someone will leak it to the public later on, though I understand the people who don't want to.
> 
> You will see whether the XOC 1000W BIOS is worth running for you.
> 
> Hope this helps
> 
> Thanks, Jura


I changed my mind again and decided to keep the 1000W BIOS. May I ask what your OC settings are, since you are also running a dual 8-pin card? Currently I'm running 1995MHz at 950mV, but I think there is still room for improvement. My max temp is 57°C with this setup.


----------



## yzonker

sepia21 said:


> I changed my mind again and decided to keep the 1000W bios. May I ask what is your OC setting since you are also running a dual 8 pin card? Currently, I'm running 1995Mhz at 950mv but I think there is still room for improvement. My max temp is 57 with this setup


 I have a Zotac 3090 running the KP XOC BIOS. Lately I've been running either a core offset of +150 at 1.000V or +135 at 1.068V. The first lands in the mid-2000MHz range, the second around 2100MHz. These are slightly conservative, but if I go much higher I'll eventually get a crash. Core temp is 50°C or less with ambient around 23°C. This is with the PL set at 85%, which limits to around 520-530W. 

You need very good cooling to run at those levels, though. Not only GPU cooling; you need plenty of airflow in your case to keep the 2x 8-pins cool. Cables can get pretty warm when you're pulling 200W+ on each. 

@des2k... who posts here sometimes is probably another 15°C cooler. But that's with quite a bit of effort to optimize everything (GPU block, lots of rad area, etc...).


----------



## jura11

@sepia21 

The KP XOC 1000W BIOS is okay; just as @yzonker said above, pay attention to case airflow and check the 8-pin connectors.

Regarding what OC settings I'm using, I would start with an offset of +105MHz if it's stable in RT games like Control, Cyberpunk 2077 etc. 

Personally I'm running +105MHz in Cyberpunk 2077 or Control; in some games I can push +135MHz, but that's the max my GPU can do. In most games it easily holds 2115-2130MHz, and in benchmarks like Port Royal it holds 2145-2160MHz; the maximum I have seen is 2175MHz. 

Temperatures won't break 36-38°C on both GPUs in rendering, and gaming temperatures are usually similar. Only in warmer weather, when ambient is beyond 25-27°C, do I see up to 42°C with fans spinning around 700-800RPM; the maximum I have ever seen on my GPUs is 42°C. 

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

jura11 said:


> I think I tested the reBAR BIOS for about 2 days, but I'm not sure on that at all. In most cases I was power limited, so I went back to the XOC 1000W BIOS, which works for me. Sadly there is no XOC 1000W BIOS with ReBAR, which is fine with me; I just hope someone will leak it to the public later on, though I understand the people who don't want to.
> 
> You will see whether the XOC 1000W BIOS is worth running for you.
> 
> Hope this helps
> Thanks, Jura


I do miss using the xoc 1kw bios, especially playing Quake 2 RTX or Control, both at native 4k, oc'ed to like 2145mhz and using insane amounts of power, but getting better framerates than on a power-limited bios, especially the minimum fps. But only Cyberpunk stutters with the non-rbar bios.


----------



## yzonker

@sepia21

The other thing to keep in mind is the KP XOC bios has safeties disabled. I have Argus Monitor set up to shut down my machine when temps exceed limits I've defined. HWINFO can also do this. Probably biggest risk is pump failure.
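For anyone without Argus Monitor or HWINFO, that kind of safety net can be sketched in a few lines of Python polling `nvidia-smi` (a real CLI query mode; the 65°C limit and 2 s poll interval below are made-up values, and the actual shutdown command is left commented out so you can dry-run it first):

```python
import subprocess
import time

def read_gpu_temps():
    """Core temps (deg C) for all GPUs, via nvidia-smi's CSV query mode."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True)
    return [int(line) for line in out.splitlines() if line.strip()]

def over_limit(temps, limit_c):
    """True if any GPU exceeds the user-defined limit."""
    return any(t > limit_c for t in temps)

def watchdog(limit_c=65, poll_s=2):
    """Poll until a temp trips the limit, then (would) force a shutdown."""
    while not over_limit(read_gpu_temps(), limit_c):
        time.sleep(poll_s)
    # subprocess.run(["shutdown", "/s", "/t", "0"])  # Windows; uncomment to arm
    print("GPU temp limit exceeded!")
```

The point is the same as with Argus/HWINFO: once the XOC bios's own safeties are gone, something external has to be watching and ready to pull the plug.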


----------



## J7SC

yzonker said:


> @sepia21
> 
> The other thing to keep in mind is the KP XOC bios has safeties disabled. I have Argus Monitor set up to shut down my machine when temps exceed limits I've defined. HWINFO can also do this. *Probably biggest risk is pump failure*.


...I usually use two D5 in series for that (=fail-over) reason...two D5 also have other advantages, as posted before re. DerBauer testing


----------



## PLATOON TEKK

Just received the ElmorLabs 3 slot nvlink. After having success with their evc2xs I figured I’d give it a go. Also found this:

Power limit modifier (ElmorLabs store)

Seeing as a lot of people on here shunt mod, this might be of use. Will test the nvlinks and post results here if anything unexpected. 
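Since shunt mods came up: the trick works because the card infers power from the voltage drop across a known shunt resistance, so stacking a second shunt on top lowers the effective resistance and makes the card under-report its draw. A rough sketch of the arithmetic (the 5 mOhm value is illustrative, not a measurement from any particular board):

```python
def parallel(r1_ohm, r2_ohm):
    """Effective resistance of two resistors in parallel."""
    return (r1_ohm * r2_ohm) / (r1_ohm + r2_ohm)

def reported_power(actual_watts, r_stock_ohm, r_effective_ohm):
    """Power the card thinks it draws: the sensed drop scales with resistance."""
    return actual_watts * (r_effective_ohm / r_stock_ohm)

# Stacking an identical shunt on a hypothetical 5 mOhm stock shunt halves it,
# so the card reads only half of the real draw:
r_eff = parallel(0.005, 0.005)              # 0.0025 ohm
print(reported_power(500.0, 0.005, r_eff))  # 250.0
```

Which is also why a shunt-modded card blows straight past its "450W" reading: the limit is enforced against the under-reported number.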












West. said:


> Have been running a normal HOF with stock bios (420w?) for a while.
> Curious if there's any release of the oc lab bios with higher power limit (500w?) ?
> Thanks!


I posted it here a few weeks ago but link might be expired. Will re-upload and link here when I get the chance.


----------



## J7SC

PLATOON TEKK said:


> Just received the ElmorLabs 3 slot nvlink. After having success with their evc2xs I figured I’d give it a go. Also found this:
> 
> Power limit modifier (ElmorLabs store)
> 
> Seeing as a lot of people on here shunt mod, this might be of use. Will test the nvlinks and post results here if anything unexpected.
> 
> View attachment 2515629
> 
> 
> 
> 
> I posted it here a few weeks ago but link might be expired. Will re-upload and link here when I get the chance.


...that's some nice kit to play with !


----------



## lmfodor

Hi @Jura and all here!

I made up a bill of materials to buy the entire water cooling system. I saw good results with the new Bykski block that has an active backplate. What do you think about it? I know it's new and it's not so expensive:
Bykski GPU Copper RBW LED Water Cooling Block for Asus ROG Strix RTX 3090 RTX3080 Gaming (5V LED GPU Block with Water Cooling Copper Backplate)
Amazon.com: Bykski GPU Copper RBW LED Water Cooling Block for Asus ROG Strix RTX 3090 RTX3080 Gaming (5V LED GPU Block with Water Cooling Copper Backplate): Computers & Accessories
2) The pump/reservoir. I'm between two: the Corsair Hydro X (330 ml, 154 USD) vs the EKWB EK-Quantum Kinetic TBE 200 D5 PWM Pump-Reservoir Combo, D-RGB, Plexi
https://www.amazon.com/dp/B08HSQV5...abc_PH41YEXVXK61YG1FH7PX?_encoding=UTF8&psc=1 or
Amazon.com: EKWB EK-Quantum Kinetic TBE 200 D5 PWM Pump-Reservoir Combo, D-RGB, Plexi: Computers & Accessories
3) Then I found tubing that seems to be flexible, from Corsair: Corsair CX-9059001-WW Hydro X Series, Xt Softline, 10/13mm (3/8I... I also found the EKWB EK-DuraClear Soft Tubing, 12/16mm (7/16" ID, 5/8" OD), 3 Me... (the sizes are not the same)
This EK: Amazon.com: EKWB EK-DuraClear Soft Tubing, 12/16mm (7/16" ID, 5/8" OD), 3 Meter, Clear: Computers & Accessories
Or this: https://www.amazon.com/dp/B07W95LJ...abc_AX8AT15HWY5WQ18NFRYX?_encoding=UTF8&psc=1

4) And I don't know if these EKWB EK-Quantum Torque STC-12/16 Compression Fittings for Soft Tubing, 12/16mm (7/16" ID, 5/8" OD), Black Nickel, 4-Pack would be fine for the flex tubes.

5) Then, should I buy some ball valves like the EKWB EK-AF G1/4" 10mm Ball Valve, Nickel?

6) And what about angled fittings? EKWB EK-AF Classic G1/4" 90 Degree Angled Fitting, Black Nickel,...

7) I also have to buy a premix liquid. I found this: EKWB EK-CryoFuel Solid Premix PC Coolant, 1000mL, Cloud White, but I don't know if I could add some color for an effect. Could I?

8) Do I need a pressure tester or leak tester?

The radiator is easy. My main issue is the size of the tubes, so that everything connects fine between the water block, the pump and the radiator. I don't know what size of tubing or compression fittings to use to avoid leaks.

I really appreciate your support and guidance, since it's my first attempt at watercooling and I really want to push the Strix a little more for better performance in games, and to lower the memory junction temperature, which reaches 84 degrees maximum and I don't like that.

Thanks a lot!!


Sent from my iPhone using Tapatalk Pro


----------



## jura11

Hi @lmfodor 

Yes, this waterblock is good. I'm using Bykski waterblocks on my RTX 3090s with no issues, temperatures are okay, and for the money you'll hardly find a better waterblock, although prices on Amazon are quite expensive compared with AliExpress.

Regarding the pump and reservoir, either would do as both use D5 pumps. Personally I've used Bykski or Barrow pumps with reservoirs on a few builds with no issues to date.

If you are sticking with EK fittings, get their EK ZMT tubing, which will last you a lifetime, or get a good EPDM tubing like Tygon A-60-G, which is similar to EK ZMT. It's not transparent, but it will last longer than any soft tubing; size of tubing 16/10mm.

For fittings I prefer Bykski or Barrow fittings in 16/10mm. I wouldn't go with 13/10mm because it will kink.

Yes, I would recommend a ball valve; you need it for draining. I'm using a Barrow QDC fitting on my loop for draining.

For angled fittings, it depends on how you route everything; it's hard to say now how many you will need.

For the CPU waterblock you need 2 fittings, for the GPU waterblock 2 fittings, for each radiator 2 fittings, and the same applies to the reservoir etc.

The size of tubing or compression fittings doesn't matter as such; what matters is that you buy compression fittings for soft tubing, and that the sizes match: if you buy 16/10mm compression fittings then buy 16/10mm tubing, not 13/10mm etc.

You shouldn't see a leak if you do everything correctly; just take your time and everything will be okay there.

Hope this helps 

Thanks, Jura


----------



## degenn

PLATOON TEKK said:


> Haha whatever works for you. If it’s the 1000w “xoc” that’s out, unfortunately, it has some hidden limits in place and no rebar. I even have voltage adjustment, but the cards just don’t fare 1/10 as well as the KPs do (even before water at similar temps).


Yeah, my KP is an odd one... 

It's the one on TPU, I tried it just messing around and got better results than I was getting with EVGA 1000w. Strangely the Classified tool was working with the Galax bios which came as a surprise, thought it only works on KPE, maybe I misunderstand how it works. 

What hidden limits? As for re-bar even the KP doesn't have that on the 1000w bios so didn't mind that, unless there is and I don't know about it?


----------



## sultanofswing

degenn said:


> Yeah, my KP is an odd one...
> 
> It's the one on TPU, I tried it just messing around and got better results than I was getting with EVGA 1000w. Strangely the Classified tool was working with the Galax bios which came as a surprise, thought it only works on KPE, maybe I misunderstand how it works.
> 
> What hidden limits? As for re-bar even the KP doesn't have that on the 1000w bios so didn't mind that, unless there is and I don't know about it?


Yes there is 1kw REBAR BIOS for KPE.


----------



## wtf_apples

Arizor said:


> @ahnafakeef
> 
> To speak specifically to the stock ASUS BIOS (assuming you've updated to "V4" with reBAR enabled):
> 
> 1. Firstly you should probably allow your ASUS to access higher voltage by increasing "core voltage", but here's how to do it properly (see attachment with red arrow, make sure you select the right card design 'reference'). Then restart Afterburner and up core voltage to max.
> 
> 2. @GRABibus ' graph is what we call a custom curve, he's done it to effectively "lock" his GPU voltage/clock at what he wants. You can access this via CTRL+F in Afterburner.
> 
> View attachment 2515323


What setting would you recommend for evga cards? Thanks


----------



## Arizor

Which EVGA card @wtf_apples ? Also by "settings", do you mean settings for the custom curve in Afterburner, or the more general settings (power limit, memory clock etc.)?



wtf_apples said:


> What setting would you recommend for evga cards? Thanks


----------



## wtf_apples

Arizor said:


> Which EVGA card @wtf_apples ? Also by "settings", do you mean settings for the custom curve in Afterburner, or the more general settings (power limit, memory clock etc.)?


Ya sorry I should have used more words. I was referring to the picture with the red arrow.
For a 3090 ftw3 and kingpin, does selecting the board type (reference, standard msi, third party) make that much of a difference? Ive always used third party.
Thanks!


----------



## Arizor

Ah I see, no worries.

For FTW3 I believe "reference design" unlocks voltage in Afterburner, not sure about Kingpin, though try reference design and see if you can access higher voltages. It's quick trial and error, there's no harm in selecting the "wrong" one.


----------



## degenn

sultanofswing said:


> Yes there is 1kw REBAR BIOS for KPE.


I am assuming it's not been "leaked" yet?


----------



## Alex24buc

Guys I need your advice again. I have the 3090 palit gamerock oc (3x8 pin - 470w). Is it worth it to sell my card and get the 3080ti Asus strix oc (450w) at the same price? Thanks for your help!


----------



## West.

PLATOON TEKK said:


> I posted it here a few weeks ago but link might be expired. Will re-upload and link here when I get the chance.


Have been searching for the oc lab 500w bios for a while but no luck :/
Thanks in advance


----------



## yzonker

Alex24buc said:


> Guys I need your advice again. I have the 3090 palit gamerock oc (3x8 pin - 470w). Is it worth it to sell my card and get the 3080ti Asus strix oc (450w) at the same price? Thanks for your help!


Doesn't seem like there's any upside to that at all.


----------



## Wihglah

Alex24buc said:


> Guys I need your advice again. I have the 3090 palit gamerock oc (3x8 pin - 470w). Is it worth it to sell my card and get the 3080ti Asus strix oc (450w) at the same price? Thanks for your help!


No.


----------



## Alex24buc

I will keep my palit gamerock oc then. Thanks!


----------



## WillP

yzonker said:


> @sepia21
> 
> The other thing to keep in mind is the KP XOC bios has safeties disabled. I have Argus Monitor set up to shut down my machine when temps exceed limits I've defined. HWINFO can also do this. Probably biggest risk is pump failure.


I've done the same with RealTemp. As you say there is always the nagging thought of pump failure.


----------



## cennis

Where can I find the reBAR 520 KPE bios?

Is it this one? EVGA RTX 3090 VBIOS 
Date looks old.


----------



## Dante444

Hi, I currently have my Zotac Trinity with the November BIOS from Gigabyte, without ReBAR. What would be the best BIOS with ReBAR for the 3090 Trinity right now?

Thanks and best regards


----------



## degenn

cennis said:


> Where can I find the reBAR 520 KPE bios?
> 
> Is it this one? EVGA RTX 3090 VBIOS
> Date looks old.


Here


----------



## cennis

dr/owned said:


> Driveby post but after learning that HWInfo reports memory temperatures in a recent-ish update, here's my shunted 3090 TUF with an active watercooled backplate (Bykski) while mining:
> 
> View attachment 2486828
> 
> 
> Memory is overclocked to +1250 and +0.1V (1.5V) on the memory. So probably about as "worst case" as you can get for temperatures since mining thrashes the vRAM.


How do you change the voltage on the vRAM? I only see core voltage in MSI AB.


----------



## West.

Hello everyone. I have a normal HOF and have been trying to get power draw to reach 500w, but no luck so far. I tried the KPE 520w rebar bios; power draw was limited to below 440w during a PR run and it broke my fan control. Then I flashed the FTW3 500w rebar bios; fan control was normal but the same thing happened, max 440w power draw. On the 500w bios, 8-Pin #2 only draws around 103w while #1 and #3 draw 147w. Then I reverted back to the HOF stock 450w bios and 8-Pin #2 draws 150w no problem, but now I'm bios power limited (440w draw again). I guess EVGA bios just doesn't work on HOF cards :/

Any suggestion on what bios (except the 1kw bios, since the card is air cooled) I should try to get close to a "real" 500w power draw? Thank you.
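For context on why those numbers cap out where they do: total board power is roughly the sum of the 8-pin rails plus whatever comes through the PCIe slot (up to 75 W by spec; the 66 W default below is an assumption, not a measurement). A throwaway sanity check:

```python
def board_power(pin_watts, slot_watts=66):
    """Rough total board draw: 8-pin rail readings plus PCIe slot contribution."""
    return sum(pin_watts) + slot_watts

# With 8-pin #2 stuck around 103 W, one weak rail drags the whole total down:
print(board_power([147, 103, 147]))  # 463
print(board_power([150, 150, 150]))  # 516
```

So a BIOS that balances per-connector limits badly for a given board can leave 50W+ on the table even when the headline power limit says 500w.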


----------



## yzonker

Dante444 said:


> Hi, I currently have my Zotac Trinity with the BIOS from Gigabtyte without REBAR the November one. What would be the best BIOS with REBAR for the 3090 Trinity right now?
> 
> Thanks and best regards


I found this one worked well:

GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


----------



## Dante444

yzonker said:


> I found this one worked well:
> 
> GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


Thanks for your answer. Do you think it is better than the Gigabyte Gaming OC?

A few hours ago I installed this:

Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com

So far without failures.

Thx


----------



## yzonker

Dante444 said:


> Thanks for your answer, do you think it is better than Gigabyte Gaming Oc?
> 
> A few hours ago I installed this:
> 
> Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> So far without failures.
> 
> Thx


Only that the Galax may have better port compatibility. Depends on whether you need them all working.


----------



## OrionBG

Hey guys,
How good/bad/better/worse is the 3090 FE compared to AIB custom cards like the EVGA FTW3, ASUS Strix, MSI SupremeX?
Does the 12pin power connector hinder overclocking compared to cards with 3x8pin power?
I'll be water cooling the card 100% and trying to OC the hell out of it  The OC will probably be mostly for scores and stuff... The main use will be as a GPU for Gaming at 4K (with hopefully some futureproofing)...


----------



## Bobbylee

0451 said:


> The Galax HOF is made in China, not Taiwan like the KP, so there is an upper limit to how good the quality control can be.


Just going to point out that Apple, whose products have a reputation for quality, gets their stuff manufactured in China. China does not always equal lesser quality.


----------



## geriatricpollywog

Bobbylee said:


> Just going to point out, Apple who have great quality products get their stuff manufactured in China. China does not always equal lesser quality.


Exceptions are not the rule, but they have a well-deserved reputation for garbage.


----------



## sepia21

jura11 said:


> @sepia21
> 
> The KP XOC 1000W BIOS is okay, just as @yzonker said above; pay attention to case airflow and check the 8-pin connectors.
> 
> Regarding what OC settings I'm using, I would start with an offset of +105MHz and see if it's stable in RT games like Control, Cyberpunk 2077 etc.
> 
> Personally I'm running +105MHz in Cyberpunk 2077 or Control; in some games I can push +135MHz but that's the max my GPU can do. In most games it easily holds 2115-2130MHz, and in benchmarks like Port Royal it holds 2145-2160MHz; the maximum I have seen is 2175MHz.
> 
> Temperatures won't break 36-38°C on both GPUs in rendering, and gaming temperatures are usually similar. Only in warmer weather, when ambient is beyond 25-27°C, do I see up to 42°C with fans spinning around 700-800RPM; the maximum I've ever seen on my GPUs is 42°C.
> 
> Hope this helps
> 
> Thanks, Jura


How do you keep your temps so low? I'm using the Alphacool EISWOLF AIO cooler which has a 360 rad and temp goes as high as 55°C-57°C. This is during heavy tasks such as benchmarks (like the Port Royal or others) but during gaming, it's around 46°C. Still far warmer than yours. This is considering the fact that I'm not running the clocks you mentioned. currently, mine is set to 1995MHz. Also, fans are spinning around 1200-1300RPM (So not so quiet as you can imagine)


----------



## yzonker

sepia21 said:


> How do you keep your temps so low? I'm using the Alphacool EISWOLF AIO cooler which has a 360 rad and temp goes as high as 55°C-57°C. This is during heavy tasks such as benchmarks (like the Port Royal or others) but during gaming, it's around 46°C. Still far warmer than yours. This is considering the fact that I'm not running the clocks you mentioned. currently, mine is set to 1995MHz. Also, fans are spinning around 1200-1300RPM (So not so quiet as you can imagine)


A lot of posters here have a massive amount of rad area and have spent time optimizing their block mounts to minimize temps. The 480/360/280 rads I have are still a lot less than some.


----------



## PLATOON TEKK

West. said:


> Have been searching the oc lab 500w bios for a while but no luck :/
> Thanks in advance


Here it is:
500w Galax OC LAB bios [use at own risk] (easyupload.io upload)

This is the rom extracted off the card itself.
Performs garbage for me but maybe you have better luck.

edit: that was fast, file.io deleted file. re-uploaded to easyupload.io


----------



## des2k...

Some universal blocks for the back are showing up on AliExpress at $57 and $71.

This one is flat with no standoffs, so you could thermal-tape it to the backplate.

Another one


----------



## jura11

sepia21 said:


> How do you keep your temps so low? I'm using the Alphacool EISWOLF AIO cooler which has a 360 rad and temp goes as high as 55°C-57°C. This is during heavy tasks such as benchmarks (like the Port Royal or others) but during gaming, it's around 46°C. Still far warmer than yours. This is considering the fact that I'm not running the clocks you mentioned. currently, mine is set to 1995MHz. Also, fans are spinning around 1200-1300RPM (So not so quiet as you can imagine)


Hi there 

I'm running 4x 360mm radiators (top 2x HWLabs SR-2 360mm, with 2x XSPC RX360 V3 in the pedestal) plus a MO-RA3 360mm.

I assume your ambient temperatures are higher than over here, and that can play a big role, plus the airflow of the case. I have a Caselabs M8 with pedestal; that case has awesome airflow and is built for water cooling.


Hope this helps 

Thanks, Jura


----------



## sultanofswing

sepia21 said:


> How do you keep your temps so low? I'm using the Alphacool EISWOLF AIO cooler which has a 360 rad and temp goes as high as 55°C-57°C. This is during heavy tasks such as benchmarks (like the Port Royal or others) but during gaming, it's around 46°C. Still far warmer than yours. This is considering the fact that I'm not running the clocks you mentioned. currently, mine is set to 1995MHz. Also, fans are spinning around 1200-1300RPM (So not so quiet as you can imagine)


Radiator space is KING.


----------



## Arizor

Does anyone get an "Event 41" (viewable in Windows Event Viewer), i.e. an instant reboot in a demanding task with a bad overclock? Happens if I run Blender with an overclock (Blender is _very_ sensitive to OC), but I also can get it with the new version of Doom Eternal (patched now with RTX and DLSS).

Wondering if I need to inspect my cables, perhaps one is a bit loose, or perhaps it is just as simple as an unstable OC on the GPU.


----------



## sepia21

jura11 said:


> Hi there
> 
> I'm running 4x 360mm radiators (top 2x HWLabs SR-2 360mm, with 2x XSPC RX360 V3 in the pedestal) plus a MO-RA3 360mm.
> 
> I assume your ambient temperatures are higher than over here, and that can play a big role, plus the airflow of the case. I have a Caselabs M8 with pedestal; that case has awesome airflow and is built for water cooling.
> 
> 
> Hope this helps
> 
> Thanks, Jura


Unfortunately, I can't fit 4 rads in my case. I'm limited to 420mm in front and 360mm on top (I have the Phanteks P600s). I don't know how much adding a 420mm in front will help, but if I do that I have to add the CPU to the water loop as well, so there will be another heat source.


----------



## cssorkinman

I'll take delivery of an MSI 3090 Suprim X today, the product of entering the Newegg Shuffle nearly every day since its inception. Any thoughts on how it compares to other models? Thinking that I'll do some limited mining with it, at least enough to cover the markup over the release price.


----------



## Zaudi

cssorkinman said:


> I'll take delivery of an MSI 3090 Suprim X today, the product of entering the Newegg Shuffle nearly every day since its inception. Any thoughts on how it compares to other models? Thinking that I'll do some limited mining with it, at least enough to cover the markup over the release price.


My ranking:
1. ASUS ROG Strix OC
2. MSI Suprim X
3. FE
4. MSI Gaming
5. ASUS TUF OC
6. EVGA Ultra

But Asus and EVGA have some quality issues.

So in my opinion, I would just buy MSI or the cheaper FE (if you can get one).
The MSI Gaming is quieter and a little bit cooler than the Suprim,
but of course with a little bit less performance.

So if you put a *waterblock* on it: *MSI Suprim X* <- best choice
If you keep the original *air* cooler: *MSI Gaming X Trio*

just how I think about it.

(And I am not an MSI fan )

I am an XFX fanboy. But only for red Navi cards


----------



## yzonker

sepia21 said:


> Unfortunately, I cant fit 4 rads in my case. I'm limited to 420mm in front and 360mm on top (I have Phanteks P600s). I don't know how much it will help to add a 420mm in front but if I do that I have to add the CPU to the water loop as well. So there will be another heat source.


You'll still drop several degrees with the CPU in the loop and a 420 added, although only if you make them both either intake or exhaust. One intake and the other exhaust tends to just feed hot air to the exhaust rad.


----------



## sepia21

yzonker said:


> You'll still drop several degrees with the CPU in the loop and a 420 added, although only if you make them both either intake or exhaust. One intake and the other exhaust tends to just feed hot air to the exhaust rad.


Good point, thank you so much. I will search for some rads and also a CPU, pump combo block.


----------



## Bobbylee

sepia21 said:


> How do you keep your temps so low? I'm using the Alphacool EISWOLF AIO cooler which has a 360 rad and temp goes as high as 55°C-57°C. This is during heavy tasks such as benchmarks (like the Port Royal or others) but during gaming, it's around 46°C. Still far warmer than yours. This is considering the fact that I'm not running the clocks you mentioned. currently, mine is set to 1995MHz. Also, fans are spinning around 1200-1300RPM (So not so quiet as you can imagine)


When I'm using the 1kw vbios at 75% PL on a 2x8 card, after a long gaming sesh with 25-27c ambient my temps hover around 45-46c with a 360+280 rad, using an EK waterblock and an EK RAM cooler as a backplate cooler. Hope that helps identify if your temps are good or bad.


----------



## mirkendargen

Arizor said:


> Does anyone get an "Event 41" (viewable in Windows Event Viewer), i.e. an instant reboot in a demanding task with a bad overclock? Happens if I run Blender with an overclock (Blender is _very_ sensitive to OC), but I also can get it with the new version of Doom Eternal (patched now with RTX and DLSS).
> 
> Wondering if I need to inspect my cables, perhaps one is a bit loose, or perhaps it is just as simple as an unstable OC on the GPU.


I haven't checked the event log, but I get instant reboots if my VRAM overclock is unstable, I think that's the Ampere norm.


----------



## yzonker

I never get a reboot from either core or mem OC being too high. Either CTD or just frozen (manual reset).


----------



## Arizor

mirkendargen said:


> I haven't checked the event log, but I get instant reboots if my VRAM overclock is unstable, I think that's the Ampere norm.


Yeah that’s interesting, mem does that for me too whilst gpu core only ever CTDs.


----------



## Vld

This is what an "active backplate" should look like 😄

EKWB TEC cooler on an EKWB 3090 Strix!

GPU temps down by 6-8 C, mem temps down by 8-10 C, running Intel's software Cryo mode.


----------



## J7SC

Vld said:


> This is what an "active backplate" should look like 😄
> 
> EKWB TEC cooler on an EKWB 3090 Strix!
> 
> GPU temps down by 6-8 C, mem temps down by 8-10 C, running Intel's software Cryo mode.
> 
> View attachment 2516033
> View attachment 2516034
> View attachment 2516035
> View attachment 2516036


Nice - I'm also contemplating a custom solution that 'involves drills' (though not cryo) on the custom backplate for my Strix - any clearance issues re. bolt heads on the inside of the backplate ?


----------



## Vld

J7SC said:


> Nice - I'm also contemplating a custom solution that 'involves drills' (though not cryo) on the custom backplate for my Strix - any clearance issues re. bolt heads on the inside of the backplate ?


I used 3mm SS bolts with flat heads, drilled the holes with a 3mm drill, then used a 6mm drill to recess the heads so they don't stick out. The cooler sits directly above the GPU.


----------



## J7SC

Vld said:


> I used 3mm SS bolts with flat heads, drilled the holes with a 3mm drill, then used a 6mm drill to recess the heads so they don't stick out. The cooler sits directly above the GPU.


Thanks


----------



## yzonker

Yea I'd do a countersink head if the backplate is thick enough.


----------



## SoldierRBT

3090 KPE HC 15K PR at just 376W

I scored 15 019 in Port Royal - Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


----------



## sultanofswing

SoldierRBT said:


> 3090 KPE HC 15K PR at just 376W
> 
> I scored 15 019 in Port Royal - Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> View attachment 2516045


Great score for the given clockspeed; seems to be a good card.


----------



## jura11

mirkendargen said:


> I haven't checked the event log, but I get instant reboots if my VRAM overclock is unstable, I think that's the Ampere norm.


Unstable VRAM OC and instant reboots, this I can confirm; I've seen that in Port Royal benchmarks when I'm pushing like 1535MHz.

An unstable core usually results in CTD or TDR in gaming; in rendering, CUDA errors in the best case or a hard freeze (I have quite a high TDR set). Most Unreal games are good for core stability testing; for VRAM I usually test in rendering.

Hope this helps 

Thanks, Jura


----------



## Arizor

jura11 said:


> Unstable VRAM OC and instant reboots, this I can confirm, seen that in Port Royal benchmarks when I'm pushing like 1535MHz
> 
> Unstable core usually results in CTD or TDR in gaming, in rendering CUDA errors in best case or hard freeze(I have quite high TDR set), most of Unreal games are good for core stability testing, for VRAM usually I test it in rendering
> 
> Hope this helps
> 
> Thanks, Jura


Yep, was running +1500 mem in games; it could run for hours, but could cause a reboot in Doom Eternal. Dropped to +1000 and everything's running fine. Though of course that OC itch makes me wonder what is the _absolute highest_ I can push and remain stable 😄


----------



## West.

PLATOON TEKK said:


> Here it is:
> 500w Galax OC LAB bios [use at own risk] (easyupload.io upload)
> 
> This is the rom extracted off the card itself.
> Performs garbage for me but maybe you have better luck.
> 
> edit: that was fast, file.io deleted file. re-uploaded to easyupload.io


Thanks for the upload 
Just did a quick port royal run with +145 core (2100mhz avg) and +1000 mem, 15055 score. Not bad.


----------



## Wihglah

So, what is considered a decent 24/7 overclock for a 3090?


----------



## GRABibus

Wihglah said:


> So, what is considered a decent 24/7 overclock for a 3090?


a stable one


----------



## Arizor

GRABibus said:


> a stable one


yeeep.

My standard for games is 2040 gpu, +1000 mem.

edit: for context I’m playing CP2077 and Doom eternal with RTX.


----------



## Wihglah

GRABibus said:


> a stable one


Just trying to decide if mine is any good.


----------



## mattxx88

Wihglah said:


> So, what is considered a decent 24/7 overclock for a 3090?


For my part, a fixed 2000mhz without drops is fine (more a psychological than a performance factor 😂)


----------



## Nizzen

SoldierRBT said:


> 3090 KPE HC 15K PR at just 376W
> 
> I scored 15 019 in Port Royal - Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> View attachment 2516045


Did you measure total draw from the wall?


----------



## SoldierRBT

Nizzen said:


> Did you mesure total draw from the wall?


I have an Asus Thor. Power draw was around 450-500W.


----------



## Nizzen

SoldierRBT said:


> I have an Asus Thor. Power draw was around 450-500W.


I have a Thor 1200w too on one of my PCs. Nice to see total power draw directly on the PSU. On the other PC, I have a Kill A Watt on the PSU cable.
450-500w is nice and low in Port Royal! Must be a good GPU bin for low voltage  Low temperature on the GPU helps too


----------



## SoldierRBT

Nizzen said:


> I have Thor 1200w too on one of my pc's. Nice too see total powerdraw directly on the psu. On the other pc, I have "kill a watt" on the psu cable.
> 450-500w is nice and low in port Royal! Must be a good gpu bin for low voltage  Low temperature on the gpu helps too


Thanks. It's the same KPE I got in December. I wanted to try the lowest wattage needed to get 15K. I locked 0.906v and added +255. Memory only +1100 because I lowered the voltage to 1.30v; it doesn't require much bandwidth at these clocks.

1.025v 2220MHz (2175MHz avg) 520W 15771 PR is around 700W from the wall.


----------



## cennis

SoldierRBT said:


> Thanks. It's the same KPE I got in December. Wanted to try the lowest wattage needed to get 15K. I locked 0.906v and added +255. Memory only +1100 because I lowered the voltage to 1.30v. It doesn't require much bandwidth at this clocks.
> 
> 1.025v 2220MHz (2175MHz avg) 520W 15771 PR is around 700W from the wall.



That's an awesome card. Just wondering how you tweaked the memory voltage? I have a Strix so I don't believe it will work with the Classy tool even if I flashed the 520 KPE bios.


----------



## SoldierRBT

cennis said:


> Thats an awesome card, just wondering how you tweaked voltage for memory? I have a Strix so I don't believe it will work with the classy tool even if I flashed the 520 KPE bios.


Thx. I used the Classy tool for that. Going from stock voltage (1.368v) to 1.30v is around 15W less MVDDC and lower memory temps. Unfortunately I can't go lower than that; the card would display a low memory voltage warning.

I believe you'd need an EVC2SX for voltage control on the Strix


----------



## heatdotnet

First post here. I've read hundreds of pages of this thread trying to answer my question, but I don't think I'll be satisfied until I actually ask. I "settled" for a 3090 TUF because it's tough to be choosy in this market. I'm about to order a bunch of parts for a water loop and need to decide if I'm going to stick with this card or not. I could hold off and try to snag an EVGA or Asus 3x8 card in a few weeks at my local Microcenter.

Are the 2x8 pin power connectors on this card going to be a limitation in gaming? I'm not worried about benchmarking, so if I have a good water loop, a good PSU, and 18 or 16 AWG pcie cables, will the 2x8 power delivery be enough to let the core + vram do its thing without power throttling? I assume the use of the KP BIOS at lower-than-100 power target.


----------



## Bal3Wolf

I wonder if I could get some help. I just got my 5950X last week and since then I can't match the scores I got on my 3900X; I haven't been able to figure out why, even though it's getting great Cinebench scores. I recently installed another rad in my loop (1x Hardware Labs 360 GTS, 1x Hardware Labs 360 GTX, and an EK 420 CE), so now the card mostly stays under 40C benchmarking and 44C gaming. I've noticed it's stable over 2200MHz, which it never was before, but I still can't seem to get proper benchmark scores.








I scored 14 752 in Port Royal
(AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
www.3dmark.com

best score i got with a 3900x 








I scored 15 786 in Port Royal
(AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
www.3dmark.com


----------



## GQNerd

Bal3Wolf said:


> Wonder if i could get some help i just got my 5950x last week and sence then i cant match scores i got on my 3900x havet been able to figure out why even tho its getting great cinebench scores. I just recently installed another rad in my loop 1 hardware labs 360gts 1x hardware labs 360 gtx and a ek 420 ce now my card for most part stays under 40c benchmarking and 44c gaming iv noticed its stable over 2200mhz never was before but still cant seem to get correct benchmark scores.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 752 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> best score i got with a 3900x
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 786 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Interesting...

- Memory latency? I realize they're at the same frequency, but check the latency numbers using AIDA64

- Is SAM/BAR enabled while using the 5950X?

- Try SMT off?

Only things off the top of my head


----------



## Bal3Wolf

Same exact memory and settings. Latency actually seemed better with the 5950X to be honest, like 59ns where the 3900X was usually around 62ns. Tried with SMT off and on, with PBO and manual clocks, no WHEA errors either. Tried underclocking the memory too, and no luck.


----------



## jura11

@Bal3Wolf 

I can say from my own experience that I'm scoring higher with the 5950X with PBO and curve optimizer than with my old 3900X, which ran at 4.35GHz with static voltage (4.4GHz for benchmarks only). I tried PBO with the 3900X as well, but it never worked for me.

I haven't yet tried pushing my 5950X to the limit, like a static OC with static voltage or OCing the RAM a bit more; I'm quite happy with how it works for now.

I would ask @GRABibus and @J7SC; both of them have more experience with the 5950X than I have.

Assuming you are using the same Windows, same driver and same 3DMark suite?

In Port Royal with the 5950X, the GPU is running higher speeds (or a higher average speed) and the VRAM is running slightly faster, so why is it scoring way less than with the 3900X?

Hope this helps 

Thanks, Jura


----------



## ahnafakeef

mattxx88 said:


> by my side a fixed 2000mhz without drops it's fine (more psychological than performance factor 😂)


What’s the performance gain from a 100-200MHz overclock on a 3090, when going from 2000 to 2100/2200?

Tried messing around with the voltage/frequency curve and still haven’t been able to stabilise anything beyond ~2010 in Valhalla. Wondering if 2100 is worth the trouble. 

Also, does memory overclock play a big role in performance boost with a 3090?


----------



## J7SC

Bal3Wolf said:


> same exact memory and settings latency seemed better with the 5950x to be honest like 59ms where 3900x usualy was around 62ms, tried with smt off and on with pbo and manual clocks no wea errors either tried underclocking memory also and uck.


If you have the cooling for the 5950X, try running 'all-core' with GPU settings (Mhz, PL) fixed and compare to your other runs. I found that it helps with many GPU scores, and also shows up via faster L3 cache speeds.

FYI, my mobo is the Asus X570 Dark, and my daily all-core is 4.65GHz (left below), 4.8GHz (right below) for benching all-core (again, good cooling, vCore at 1.35v). 4.65GHz all-core should suffice for your PR and TS/E tests. I'm not running any curves, SMT is enabled (16c/32t), and the chiplets are synchronized. The Asus X570 Dark has DynamicOC enabled.


----------



## Bal3Wolf

OK, did a 4650 all-core overclock; good R23 score, but I still get crap benchmark scores, lol. Makes no sense. Barely any downclocking of the GPU on this last test, but somehow it's lower, lol.









I scored 14 093 in Port Royal
(AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
www.3dmark.com


----------



## J7SC

Bal3Wolf said:


> ok did a 4650 overclock good r23 score but i still get crap benchmark scores lol makes no sence barely any downclocking of gpu on this last test but some how its lower lol.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 093 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2516194


...yeah, weird indeed. I don't want to send you on a wild goose chase, and I use a different system RAM config than you, but tRFC at 570 looks awfully high (as does tRC). As to Asus BIOS settings, try PBO (all default!) enabled along with whatever extra boost function your board has (OC3?), but not much else. I noticed that while your CPU package power isn't low, it still has some headroom. But look at tRFC first; sometimes BIOS settings won't adapt right even if you have specified the right values.


----------



## Bal3Wolf

tRC has been a pain for me to get very tight with this RAM. I turned off power-down and gear-down modes; my 3900X wouldn't POST if I did that, but this 5950X did, lol, though scores seem crappy. Had to bump vcore up to 1.244-1.275 to even get AIDA to finish the cache/mem benchmark, lol.


----------



## PLATOON TEKK

West. said:


> Thanks for the upload
> Just did a quick port royal run with +145 core (2100mhz avg) and +1000 mem, 15055 score. Not bad.


Boom. Happy it helped. You might be better off with the 520w rebar KP bios though, would be interesting to see.


----------



## wtf_apples

Started messing with the Classified tool but having a hard time getting a higher core average with my KP HC.
I'm using the stock LN2 520w BIOS and it's maxing out at 540w board draw. I scored 15 122 in Port Royal.
Hitting the power limit is causing the failed runs in Port Royal, I'm guessing? I can't seem to get more than this right now.
Also looking for some tips and things to watch out for when running the 1kW BIOS.
I'm late getting started here, but glad it's finally happening.
Thanks!


----------



## J7SC

wtf_apples said:


> Started messing with the classified tool but having a hard time getting a higher core average with my kphc.
> Im using the stock ln2 520w bios and its maxing out at 540w board draw. I scored 15 122 in Port Royal
> Hitting the power limit is causing the failed runs in port royal im guessing? I cant seem to get more than this right now.
> Also looking for some tips and things to watch out for when running the 1kw bios.
> Im late to get started here but glad its finally happening.
> Thanks!


...'grats on the KP_HC ! I wouldn't call 15122 bad by any stretch of the imagination, but there should be a bit more in it...I've hit around 15,300+ with the KPE 520W r_BAR on my Strix w/o maxing everything. Have you tried some earlier drivers, ie. 466.xx ?


----------



## GRABibus

jura11 said:


> @Bal3Wolf
> 
> I can say from my own experience with 5950X I'm scoring higher with 5950X with PBO and curve optimizer than with my old 3900X which has run 4.35GHz with static voltage or 4.4GHz for benchmarks only, I did tried as well PBO with 3900X but this never worked for me
> 
> I didn't tried yet to push my 5950X to limit like static OC with static voltage or OC bit more RAM etc, I'm quite happy how it works for now
> 
> I would ask @GRABibus and @J7SC both of them have better experience with 5950X than I have
> 
> Assuming you are using same Windows, same driver and same 3DMARK suite?
> 
> In Port Royal with 5950X GPU is running higher speeds or have higher average speed plus VRAM is running slightly faster by why is scoring way less than with 3900X
> 
> Hope this helps
> 
> Thanks, Jura


As we already discussed some weeks ago, Zen3 gets some penalties in Port Royal.

To recover better scores with my 5900X, I set it like this:

SMT off
Core count = 4
1 CCD disabled
Static OC as high as I can (4.95GHz in my case)

This helps increase the score by 100 to 150 points


----------



## Wihglah

ahnafakeef said:


> What’s the performance gain from a 100-200MHz overclock on a 3090, when going from 2000 to 2100/2200?
> 
> Tried messing around with the voltage/frequency curve and still haven’t been able to stabilise anything beyond ~2010 in Valhalla. Wondering if 2100 is worth the trouble.
> 
> Also, does memory overclock play a big role in performance boost with a 3090?



Depends on the game. Anywhere from inconsequential to linear.


----------



## des2k...

Wihglah said:


> Depends on the game. Anywhere from inconsequential to linear.


Seems about right; there's not much gain in games.

+60MHz on core is about 1fps
+500 on mem is another fps

At some point, you'll also hit the limit of the silicon or the game engine and it won't scale at all.

Certain games, like Quake RTX, will scale better and use an insane amount of power. Others, like LEGO Builder's Journey, will not scale past 2115 core.

A good balance is undervolt + mem OC. 2115 at 1v with +1000 mem will be about 1-2fps less vs a high-power OC like 2190+ at 1.1v +1500 mem
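The rough numbers above can be folded into a toy estimator. This is a sketch of des2k...'s rule of thumb only (~1fps per +60MHz core, ~1fps per +500 mem); the ratios are one user's observations, not general constants, and real scaling flattens once the engine or silicon limit is hit.

```python
# Rule-of-thumb FPS estimator based on the rough scaling quoted above:
# ~1 fps per +60 MHz core offset, ~1 fps per +500 memory offset.
# These ratios are forum observations, not measured constants.
def estimated_fps_gain(core_offset_mhz, mem_offset_mhz,
                       fps_per_60_core=1.0, fps_per_500_mem=1.0):
    return (core_offset_mhz / 60.0) * fps_per_60_core + \
           (mem_offset_mhz / 500.0) * fps_per_500_mem

print(estimated_fps_gain(120, 1000))  # ~4.0 fps
```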


----------



## des2k...

Bal3Wolf said:


> Trc been a pain for me to get very tight with this ram i turned off power down and gear down modes my 3900x wouldnt post if i did that this 5950x did lol but scores seem crappy. Had to bump vcore up to 1.244-1.275 to even get aida to finish the cachemem benchmark lol.
> 
> View attachment 2516206
> 
> 
> View attachment 2516207


I have 3800 CL18 2T with 600+ tRFC on my 3900X, lol. It scores OK, 15300-15400 in PR.

Being 1440p, I doubt the CPU/mem OC affects it much. Disabling SMT or cores also makes no sense on Zen2/Zen3; on my Zen2 I don't gain anything doing that in benchmarks or games.

Others with a 5900X don't have to tweak much, so I'm guessing something is off with the BIOS / power plan in Windows. Going by Zen history, it seems the quality of silicon is worse than Intel 14nm, because there's more chance of getting a poor sample.


----------



## Bal3Wolf

des2k... said:


> I have 3800cl18 2T with 600+trfc on my 3900x lol
> Scores ok 15300-15400 for PR.
> 
> Being 1440p, I doubt cpu,mem oc affects much.
> Disabling smt, cores also makes no sense on zen2,zen3. On my zen2 I don't gain anything doing those on benchmarks or games.
> 
> Others with 5900x don't have to tweak much, so I'm guessing something is off with bios / power plan in windows. Going by zen history, it seems the quality of silicon is worst vs intel 14nm because there's more chances of getting a poor sample.


Yeah, I've tried everything under the sun, and it's not just Port Royal; they all have one common factor: the CPU score is usually good but the GPU score is always down. I'm starting to wonder if it's just my 3090 Kingpin acting weird.


----------



## J7SC

Bal3Wolf said:


> Yea for me iv tried everything under the sun and its not just port royal all of them one common factor cpu score is usualy good but gpu score is always down im starting to wonder if its just my 3090 kingpin acting wierd.


...I'm still wondering if it has something to do with the RAM / cache subsystems - the 5950X and 3900X behave differently due to different cache designs etc...

...in any event, for different work and play functions, I run a TR 2950X w/ 2x 2080 Ti, an AM4 3950X w/ 6900XT (formerly w/ the 3090 Strix) and an AM4 5950X w/ the 3090 Strix ...the 2950X has some unique bios settings that change the RAM addressing dramatically (different than those found in the other CPUs). When switching from one mode to the other on the 2950X with everything else the same (including RAM speed and timings, CPU OC, GPU settings), Port Royal's score would change by well over 300 points. Also, the 3950X and its super-tight 4x8 GB RAM that passed all stress tests could react badly in Port Royal (and FS2020) to the wrong tFAW, tRC etc. 

...I'm also not entirely sure how r_BAR is affected by all this, but it gives the CPU access to the GPU's full VRAM frame buffer; it might be worth toggling r_BAR in your tests, and perhaps even using a pre-r_BAR NVIDIA driver for testing as well.


----------



## newls1

So playing Metro Exodus Enhanced Edition proved my 2160MHz core speed unstable; however, in FC5 it's stable all day @ 2160. Obviously Metro is way more demanding with ray tracing and DLSS, etc. I noticed in Metro the Vgpu never goes above 1.083v. Is there a way to get the full 1.1v I thought the EVGA XOC 500w rebar BIOS is able to achieve? Anything I can do to get the full 1.1v to hopefully stabilize 2160MHz in Metro Exodus? My MSI Afterburner settings are:

+150Mhz core
+1200 Mem
+119 power (most it will go)
and max voltage slider

Appreciate any assistance here!


----------



## yzonker

newls1 said:


> so playing Metro Exodus enhanced edition proved my 2160MHz core speed unstable, however in FC5 its stable all day @ 2160.. obviously Metro is way more demanding with ray tracing and DLSS, etc... I noticed in metro the Vgpu never goes above 1.083v.. Is there a way to get the full 1.1v I thought the evga xoc 500w rebar bios is able to achieve? anything I can do to get the full 1.1v to hopefulyl stablize 2160mhz in metro exodus? My MSI afterburner settings are
> 
> +150Mhz core
> +1200 Mem
> +119 power (most it will go)
> and max voltage slider
> 
> Appreciate any assistance here!


How much power is it pulling at 1083mv? I'd be surprised if 500w is enough to maintain that voltage. If you do have enough power, then it's probably not reaching full voltage because of the shape of the VF curve; you might need to tweak the last 2 or 3 points.

For me running 4k max settings and DLSS off, that game pulls 500w at 1000mv. Must take 600w to hold 1093mv.
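That 600w guess can be sanity-checked with the usual dynamic-power approximation (P roughly proportional to V² times frequency). A sketch, taking the observed 500w at 1000mv above as the baseline; this ignores static/leakage power and is an estimate, not a measurement:

```python
# Back-of-envelope dynamic-power scaling: P2 ~ P1 * (V2/V1)^2 * (f2/f1).
# Baseline here is the observed 500 W at 1.000 V from the post above.
def scaled_power(p1_w, v1, v2, f1_mhz=1.0, f2_mhz=1.0):
    return p1_w * (v2 / v1) ** 2 * (f2_mhz / f1_mhz)

# Holding 1.093 V at similar clocks:
print(round(scaled_power(500, 1.000, 1.093)))  # ~597 W, consistent with "must take 600w"
```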


----------



## newls1

yzonker said:


> How much power is it pulling at 1083mv? I'd be surprised if 500w is enough to maintain that voltage. If you have enough power, then it's probably not going the full voltage because of the shape of the VF curve. Might need to tweak the last 2 or 3 points.
> 
> For me running 4k max settings and DLSS off, that game pulls 500w at 1000mv. Must take 600w to hold 1093mv.


I don't know how to even start an OC using the curve... guess that's what I'm asking.


----------



## Arizor

Ctrl+F brings up the curve; pick a node tied to a voltage, and Ctrl+Up Arrow moves it in +10 intervals. Apply the curve with the “tick” in the AB main interface.

Then it’s trial and error for hours, enjoy!


----------



## yzonker

newls1 said:


> i dont know how to even start an OC using the curve.. guess thats what im asking.


I'm referring to the flat spot above 1081mv. It won't boost beyond the first point with 3 at the same frequency, so it gets stuck at 1081mv. Like @Arizor just said, Ctrl+F brings up the curve in AB. Then select points and move them up/down. Remember it only works in 15MHz steps, so move the points by that much. The entire curve shifts up/down with the core offset, so set that first, then tweak points. You may want to start with +135 so that you can move the 1093 point up to +150 (or 1100, but performance is generally better at 1093).
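Since the curve only accepts 15MHz steps, it helps to snap target clocks to valid bins before planning offsets. A minimal helper (the 15MHz bin size is from the post above):

```python
# Afterburner VF points land on 15 MHz bins; snap a requested clock to
# the nearest valid bin before working out per-point offsets.
def snap_to_bin(clock_mhz, bin_mhz=15):
    return bin_mhz * round(clock_mhz / bin_mhz)

print(snap_to_bin(2158))  # 2160
print(snap_to_bin(2100))  # 2100
```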


----------



## wtf_apples

J7SC said:


> ...'grats on the KP_HC ! I wouldn't call 15122 bad by any stretch of the imagination, but there should be a bit more in it...I've hit around 15,300+ with the KPE 520W r_BAR on my Strix w/o maxing everything. Have you tried some earlier drivers, ie. 466.xx ?


Thank you! Yeah, it took me a while to get to that score. Always striving for more. Nice score with the Strix.
I'll adjust a few things and try an older driver like you suggested.

Today I've been trying to find a good source for replacement thermal pads and the thermal putty for the chokes.
Not an easy task, ha


----------



## yzonker

Hey, I know this is the 3090 thread, but could one of you guys with a lot of experience looking at these boards answer my question,









[Official] NVIDIA RTX 3080 Ti Owner's Club
"Damn that really blows, I would have gladly paid 50$ more for ftw3 if given the opportunity. ;D Is that a 2x 8 pin card? If you end up testing the Zotac Bios, let me know how it went. And yes it's a 2x8pin card"
www.overclock.net





I took a pic of my board when I swapped on the hybrid cooler, but don't see the PCIE fuse in it either.


----------



## J7SC

wtf_apples said:


> Thank you! Ya it took me awhile to get to that score. Always striving for more. Nice score with the strix.
> Ill adjust a few things and try an older driver like you suggested.
> 
> *Today ive been trying to find a good source for replacement thermal pads and the thermal puddy* for the chokes.
> Not an easy task ha


...there's always Amazon.ca or DigiKey... Amazon.ca should have restocked by now re. all those Thermalright 12.8 W/mK thermal pads I bought (0.5mm, 1mm, 1.5mm, 2mm). If you're careful about not trapping air bubbles, you can even stack those Thermalright pads to get the desired thickness.

As to thermal putty (as opposed to pads and grease), I recently got two jars of the DigiKey putty listed below (10.0 W/mK), after another user here recommended it. I really do like thermal putty for its flexibility, after years of chasing the right pad sizes for a given GPU (+ tolerances). The supplier below delivered to me in W. Canada swiftly and w/o any drama...


----------



## newls1

yzonker said:


> I'm referring to the flat spot above 1081mv. It won't boost beyond the first point with 3 at the same frequency. So it gets stuck at 1081mv. Like @Arizor just said, Ctrl+F to bring up the curve in AB. Then select points and move them up down. Remember it only works on 15 mhz steps, so move the points by that much. The entire curve shifts up/down with the core offset. So set that first, then tweak points. You may want to start with +135 so that you can move the 1093 point up to +150. (or 1100, but performance is generally better at 1093).
> 
> View attachment 2516309


i will attempt to mess with this in a few hours. thank you so much for your reply, much appreciated.


----------



## Wihglah

J7SC said:


> ...there's always Amazon.ca or DigiKey...Amazon.ca should have restocked by now re. all those Thermalright 12.8 W/mk thermal pads I bought (0.5mm, 1mm, 1.5mm, 2mm) . If you're careful re. not trapping air bubbles, you can even stack those Thernalright pads to get the desired thickness.
> 
> As to thermal putty (as opposed to pads and grease), I recently got two jars of the listed DigiKey putty (10.0 W/mk) below, after another user here recommended it. I really do like thermal putty for its flexibility, after years of chasing the right size for a given GPU (+ tolerances). The supplier below delivered to me in W.Canada swiftly and w/o any drama...
> 
> View attachment 2516323


TG-PP-10 is very effective.


----------



## newls1

yzonker said:


> I'm referring to the flat spot above 1081mv. It won't boost beyond the first point with 3 at the same frequency. So it gets stuck at 1081mv. Like @Arizor just said, Ctrl+F to bring up the curve in AB. Then select points and move them up down. Remember it only works on 15 mhz steps, so move the points by that much. The entire curve shifts up/down with the core offset. So set that first, then tweak points. You may want to start with +135 so that you can move the 1093 point up to +150. (or 1100, but performance is generally better at 1093).
> 
> View attachment 2516309


OK, I did this and now I'm at 1.093v @ 2160MHz, but how can I make it idle at low vcore and clocks again at the desktop? She's full bore 24/7 now??


----------



## yzonker

newls1 said:


> ok, i did this and now im at 1.093v @ 2160MHz but how can i make it idle at low vcore and speed again at desktop setting? Shes full bore 24/7 now??


Shouldn't be unless you selected the 1093 point and hit "L" to lock it? Otherwise the core should idle down, but the mem won't (XOC does this for real LN2 OCing). You can set the mem offset to the lowest negative value to idle the mem too.


----------



## newls1

Right, I just learned NOT to do the Ctrl+L option, and now core and mem are downclocking. Am I doing this wrong? Because my mem is actually downclocking like normal...


----------



## yzonker

newls1 said:


> Right, I just learned to NOT do the CTRL L option, and now core and mem are downclocking. Am I doing this wrong cause my mem is actually downclocking like normal....


Huh, I don't know. Oh wait, you're not running the KP XOC. My bad. You're fine if I have it right now.


----------



## newls1

yzonker said:


> Huh, I don't know. Oh wait, you're not running the KP XOC. My bad. You're fine if I have it right now.


I think I'm screwing this up; let me upload a pic of this curve I did... I'm getting confused! It seems to go right to 2145, then drop to 2115, then back to 2145. Can't keep 2160, and 3DMark pretty much stays at 1995 no matter what. I'm getting pretty confused here! You've been a HUGE HELP so far and I'm very grateful for that! Any pointers on how my curve should look for trying to keep 2160MHz and 1.093 or 1.1v?? Thanks!


----------



## newls1

The speeds are fluctuating all over the place in Metro! I'm at 0.950v @ like 1980MHz one second, then 2130 @ 1.087, then 2160 @ 1.093 depending on the scene... How can I just set it to 2160 @ 1.093 for the whole game but NOT do the Ctrl+L option? I'm sure this is because I set a frame limiter of 142fps to keep my 144Hz monitor in check, but I love the smoothness of this...


----------



## yzonker

newls1 said:


> I think im screwing this up, let me upload a pic of this curve i did.. im getting confused! It seems to go right to 2145 then drop to 2115 then back to 2145.. Cant keep 2160 and 3dmark pretty much stays at 1995 no matter what.. Im getting pretty confused here! youve been a HUGE HELP so far and im very grateful for that! Any pointer on how my curve should look like for trying to keep 2160MHz and 1.093 or 1.1v?? Thanks!


My guess is you don't have enough power to hold 1093mv like I mentioned originally. Run 3DMark with GPUZ open on the sensors tab and see what it says for "PerfCap Reason". Also look at what your max board power shows.

Also, the curve will shift up/down mostly depending on the temp of the card. There's no way to completely control that. Best method is to set up your curve and save it while the card is cool (below 30C if possible).


----------



## newls1

yzonker said:


> My guess is you don't have enough power to hold 1093mv like I mentioned originally. Run 3DMark with GPUZ open on the sensors tab and see what it says for "PerfCap Reason". Also look at what your max board power shows.
> 
> Also, the curve will shift up/down mostly depending on the temp of the card. There's no way to completely control that. Best method is to set up your curve and save it while the card is cool (below 30C if possible).


The card is on a full-cover waterblock with an active backplate, on its own loop with a 420mm and a 240mm rad and serial D5 pumps... temps are not an issue here! Let me do exactly what you just suggested and I'll report back. THANKS!


----------



## newls1

Yes, the performance cap says "PWR". Here is a screenshot of GPU-Z with some parameters set to "MAX" so you can see temps and wattage under load. What can I do here?


----------



## Wihglah

newls1 said:


> the speeds are fluctuating all over the palce in metro! Im at .950mv @ like 1980mhz 1 sec, then 2130 @ 1.087, then 2160 @ 1.093 depending on scene... How can I just set it to 2160 @ 1.093 for the whole game but NOT do the ctrl L option? Im sure this is becuase i set a frame limiter of 142fps to keep my 144hz monitor in check, but i love the smoothness of this.....


Even if you set a curve, it will still downclock based on temps. There are temp limits in the 30s C.


----------



## newls1

i understand, guess there is no way to adjust those settings...


----------



## des2k...

newls1 said:


> i understand, guess there is no way to adjust those settings...


If you're not power limited or temp limited, you can use 2 VF points & nvidia-smi, and set the NV panel to max performance:

C:\Windows\System32\nvidia-smi.exe -lgc 210,2175

For any given frequency, only 3 voltage points are valid; after that, the card needs to jump to the next frequency bin.

This holds 2175 with a water block for any load (XOC vbios).

Point 1, apply first.

Point 2, apply second; it needs to be 4 voltage points before the max frequency, so it will be 1 bin less on the frequency.
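For reference, a minimal sketch of the clock-lock half of this workflow. The path and the 210,2175 limits are from the post above; `-rgc` is the standard nvidia-smi option for undoing the lock. Run from an elevated prompt.

```shell
# Lock the GPU clock range: idle floor 210 MHz, boost ceiling 2175 MHz.
"C:\Windows\System32\nvidia-smi.exe" -lgc 210,2175

# ...bench or game...

# When finished, return clock management to the driver.
"C:\Windows\System32\nvidia-smi.exe" -rgc
```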


----------



## newls1

that just confused me even more......


----------



## cennis

How much does the CPU affect Port Royal with a 3090?

For example, if someone knows the relative difference between any of these CPUs, it would be helpful.
4770k
3300x
5600x
5800x
5900x


----------



## yzonker

newls1 said:


> Yes, performance cap says "PWR" here is a screen shot of gpuz with some parameters set to "MAX" so you can see temps and wattage load. What can I do here?
> View attachment 2516354


A 1kW BIOS like the KP XOC is the only way to get enough power to hold the voltage you want.









EVGA RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## yzonker

cennis said:


> How much does CPU affect port Royale with a 3090?
> 
> For example, if someone knows the relative difference between any of these CPUs it will be helpful.
> 4770k
> 3300x
> 5600x
> 5800x
> 5900x


I don't think anybody really knows. Too many variables. Even two 5900x systems for example may perform differently depending on the rest of the system, Windows settings/install, etc...


----------



## OrionBG

Guys, I can get a 3090 FE for about 150 euro less than an MSI 3090 Suprim X. Is the FE as good an overclocker (on water), or should I go for the Suprim X?


----------



## newls1

yzonker said:


> A 1kw bios like the KP XOC is the only way to get a enough power to hold the voltage you want.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Is there a 1kW BIOS that supports rebar? And what program do I use to flash the card? I added the BIOS I'm currently using because EVGA made it simple: all I had to do was click a .exe file and it flashed itself...

**ALSO... is a 2145/2160MHz OC about on par for these cards on water with this 500w BIOS?


----------



## ahnafakeef

Wihglah said:


> Depends on the game. Anywhere from inconsequential to linear.





des2k... said:


> seems about right, there's not much gains in games
> 
> +60mhz on core is about 1fps
> +500 on mem is another fps
> 
> At one point, you'll also hit the limit of the silicon or game engine limit and won't scale at all.
> 
> Certain games will scale better like quake rtx and will use an insane amount of power. Other games like Lego builder will not scale past 2115core.
> 
> A good balance is undervolt + mem oc. 2115 1v with +1000mem will be about 1,2fps less vs high power oc like 2190+ 1.1v +1500mem


I think my overclocks crashing is explained by the 480w power limit, as a recent post mentioned that 1.1v requires around 600w.

What is the highest power limit I can get by flashing a different BIOS? And what is the highest power limit I can get, by whatever method necessary except hardware level mods?

And what BIOS is everyone using to get the 600w or above power limits?


----------



## Nizzen

ahnafakeef said:


> I think my overclocks crashing is explained by the 480w power limit, as mentioned in another recent post that 1.1v requires around 600w.
> 
> What is the highest power limit I can get by flashing a different BIOS? And what is the highest power limit I can get, by whatever method necessary except hardware level mods?
> 
> And what BIOS is everyone using to get the 600w or above power limits?


You need a COLD GPU to make it draw past 600w. Even with a 1000w BIOS, most cards don't ask for more than about 550w.

If your GPU is under 40C, you often don't need anything more than the EVGA 550w rebar BIOS.

For benchmarking, you can use a 1000w BIOS, but there's no rebar 1000w for us yet. Only a few "pro overclockers" have the 1000w rebar BIOS, and they don't want to share it...


----------



## newls1

newls1 said:


> is there a 1kw bios that supports rebar? and what program do i use to flash the card with? I added the bios im currently using cause evga made it simple as all i had to do was click a .exe file and it flashed itself......





Nizzen said:


> You need COLD cpu to make the gpu draw past 600w. Even with 1000w bios, most card don't ask for more power than about 550w
> 
> If your gpu is under 40c, you often don't need anything else than Evga 550w rebar bios
> 
> For benchmarking, you can use 1000w bios, but now rebar 1000w for us yet. Only some few "pro overclockers" has the 1000w rebar bios, and they don't want to share it...


Where is this 550w rebar BIOS at? I'm currently on the EVGA XOC 500 rebar BIOS... think that will help me sustain a 2145 core clock?


----------



## newls1

newls1 said:


> is there a 1kw bios that supports rebar? and what program do i use to flash the card with? I added the bios im currently using cause evga made it simple as all i had to do was click a .exe file and it flashed itself......
> 
> **ALSO... is a 2145/2160MHz OC about on par for these cards on water and this 500w bios?


After 30 min of playing Metro, she finally crashed @ 2145 1.093v... DAMN IT. It's not temps, as the core was 36C and mem was 45C (active backplate), so I can only assume it's hitting the power limit, and/or the core just can't go that high. What is a normal / average core speed for the 3090?


----------



## Nizzen

newls1 said:


> where is this 550w rebar bios at? Im currently on the evga xoc 500 rebar bios... think that will help me sustain a 2145 core clock?


Sorry, I meant the 520w bar BIOS









EVGA RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## des2k...

newls1 said:


> after 30min of playing metro, she finally crashed @ 2145 1.093v.. DAMN IT. Its not temps as core was 36c and mem was 45c (active backplate) so i can only assume obviously hitting power limit, and or core just cant go that high. What is a normal / average core speed for the 3090?


hum... I guess I could try that game for 45mins or so


----------



## Thanh Nguyen

newls1 said:


> after 30min of playing metro, she finally crashed @ 2145 1.093v.. DAMN IT. Its not temps as core was 36c and mem was 45c (active backplate) so i can only assume obviously hitting power limit, and or core just cant go that high. What is a normal / average core speed for the 3090?


That Metro is very intensive. It can destroy so many so-called "stable" systems. I usually run the built-in benchmark for 30 loops to make sure it's really stable.


----------



## Nizzen

Thanh Nguyen said:


> That metro is very intensive. It can destroy so many so called “ stable” system. I usually run the built in benchmark 30 loops to make sure its really stable.


I play Battlefield V multiplayer for stability testing. It pushes memory, CPU and GPU hard. Fun to play too for a few hours. I love Battlefield.


----------



## des2k...

newls1 said:


> after 30min of playing metro, she finally crashed @ 2145 1.093v.. DAMN IT. Its not temps as core was 36c and mem was 45c (active backplate) so i can only assume obviously hitting power limit, and or core just cant go that high. What is a normal / average core speed for the 3090?


So this game's RTX load is pretty heavy. Usually I can play Quake RTX or Lego Builders RTX at 2115 core, 1006 mV or so; this game requires 1018-1025 mV to be stable at 2115 core.

Going by those higher voltages, they are very close to my 3DMark TE GT2 (loops) voltages.

2145 is probably asking too much; if it's not stable at 1065 mV I wouldn't even bother trying a higher voltage. Also, 2145 core spikes to 600 W+ of power.


----------



## Falkentyne

des2k... said:


> If you're not power limited or temp limited you can use 2 VF points & nvidia-smi , set NV panel to max performance
> 
> C:\Windows\System32\nvidia-smi.exe -lgc 210,2175
> 
> For any given freq, only 3 voltage points are valid, then the card will need to jump to the next freq bin.
> 
> This holds, 2175 with a water block for any load (xoc vbios)
> 
> Point 1, apply first
> View attachment 2516358
> 
> 
> Point 2, apply second, needs to be 4 voltages points before max freq; this will be 1bin less on the FREQ
> View attachment 2516359





newls1 said:


> that just confused me even more......


This makes no sense.
confused the hell out of me also.


----------



## des2k...

Falkentyne said:


> This makes no sense.
> confused the hell out of me also.


Here's a basic explanation of the Ampere VF table.

*Any given frequency requested by the GPU is only valid for 3 consecutive voltage points; after those 3 consecutive voltages it needs to jump to the next frequency & voltage point.* This happens on the 4th voltage point.

Regardless of how you want to apply them in Afterburner, you need to start with this rule.
So, to lock 2115 at 1018 mV regardless of temp/load/power:

1000 mV is the 2100 bin (becomes a +150 core offset from the default VF table)
1006 mV is the 2100 bin (+0 offset from the last point)
1012 mV is the 2100 bin again (+0 offset from the last point)
1018 mV will be the 2115 bin*

*Of course you can't use the 2115 bin directly, because the offset needs to be higher (this prevents going up/down bins with load/temp changes).

So this last point will be:
1018 mV @ 2160 bin, and limit max boost with this command: nvidia-smi.exe -lgc 210,2115
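The bin arithmetic above can be sketched in a few lines of Python. This is a toy model of the rule as described, not NVIDIA's actual VF table; the 6.25 mV voltage step and 15 MHz bin size are the commonly observed Ampere granularity, and the base pair (1000 mV / 2100 MHz) is taken from the example above:

```python
# Toy model of the Ampere V/F bin rule described above:
# a frequency bin holds for 3 consecutive voltage points,
# then the 4th point jumps one bin (+15 MHz) up.

VOLT_STEP_MV = 6.25   # observed voltage granularity on Ampere
BIN_STEP_MHZ = 15     # observed frequency bin size

def freq_bin(voltage_mv, base_voltage_mv=1000.0, base_freq_mhz=2100):
    """Frequency bin for a voltage point, assuming the base pair is bin-aligned."""
    steps = round((voltage_mv - base_voltage_mv) / VOLT_STEP_MV)
    return base_freq_mhz + (steps // 3) * BIN_STEP_MHZ

for v in (1000.0, 1006.25, 1012.5, 1018.75):
    print(f"{v:.2f} mV -> {freq_bin(v)} MHz")
```

Running it reproduces the table above: the first three voltage points stay in the 2100 bin, and the fourth lands in 2115.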


----------



## Ironcobra

Man, what a nightmare Aqua Computer has been to deal with. I got a full watercooling system from them for my 5800X and 3090 Strix:
cuplex kryos NEXT VARIO pvd/nickel
3090 kryographics NEXT with active backplate
flow 2 sensor
ultitube D5 200 pro
octo fan controller
tygon a60g tubing
*Bykski* fittings
2x 360 HWL GTX rads
arctic p12 fans
mayhems distilled liquid

First issue: they shipped me an active backplate with nothing in the box but the backplate. After a week of trying to contact them, they finally said they would ship me the rest of the parts. I received the missing parts after a month, only to find out they forgot to ship the connection point for the heatpipe that goes from the backplate to the water block. Another two weeks later I received my connection block, drained the loop and fit all the pieces together, and of course the new connection point was leaking where the heatpipe goes into it, running all the way down my vertically mounted Strix and causing a green screen and shutdown. After draining back down and drying everything off thoroughly, I attached the original connection point that came with the block (the one not meant for the active backplate), fired it back up, and thankfully the 3090 still works.

I might be an outlier, but I will never buy anything from Aqua Computer again; their quality control seems really bad. It's too bad, as even without the heatpipe I'm seeing great temps: 64°C mem junction mining at -110 core and +1500 mem, VRAM at 45°C and core at 28°C, ambient 23°C. Haven't had a chance to game yet but will report back.


----------



## newls1

Metro Exodus Enhanced is really showing me that my GPU OC wasn't even near stable... So far 2130 @ 1.1 V isn't stable, but what does this perf cap reason mean?

*EDIT* OK, 5 loop runs @ +110 core and +1100 mem @ 1.081 V (not touching the curve) proved stable with the Metro Exodus Enhanced benchmark. It stabilized at a flat 2100. Is this about average for a FTW3 Ultra card?


----------



## des2k...

newls1 said:


> Metro Exodus enhanced is really showing me that my GPU OC wasnt even near stable.... So far 2130 @ 1.1v isnt stable, but what does this perf cap reason mean?
> 
> *EDIT* Ok, 5 loop runs @ +110core and 1100mem @ 1.081v (not touching curve) to OC proved stable with Metro Exodus enhanced benchmark. It stablized out @ 2100 flat. Is this about average for a FTW3 Ultra card?
> View attachment 2516412


If you can reach 2100 in Metro that's decent, but 1081 mV seems high to me.

You should test your fps with the GeForce Experience performance overlay. For example, try 2030 or 2045 on the core; it's probably 1-2 fps less and a lot less voltage / power usage 😁

Maybe it's just me, but playing games at 500 W+, even in winter, is annoying due to heat.
Even on my FTW3 1080 Ti with the Asus XOC BIOS, playing The Division at 400 W+ was annoying, lol.


----------



## newls1

des2k... said:


> If you can reach 2100 on metro that's decent but 1081mv seems high to me.
> 
> You should test your fps with geforce experience performance overlay. For example try 2030,2045 on the core it's prob 1,2fps less and alot less voltage / power usage 😁


can you recommend a higher wattage bios for this card but also retain rebar?


----------



## Bal3Wolf

I have given up. My 3090 is like "nope, not gonna give you good scores with the 5950X" no matter the clocks; I even did a 4.8/4.5 GHz CCX overclock and it didn't matter. It's not just Port Royal either; my scores just jump all over.


----------



## J7SC

Bal3Wolf said:


> i have gave up my 3090 is like nope not gonna give you good scores with the 5950x no matter clocks i even did a 4.8/4.5ghz ccx overclock and didnt matter not just port royal either my scores just jump all over.


...still hope you get to the bottom of it. I'm using a 5950X w/ the water-cooled Strix (KPE 520W reBAR) and it is consistent re. decent scores. You might want to run a series of benches w/ HWiNFO and other sensor software (3DM included) recording everything... maybe the culprit can be tracked that way.


----------



## Bal3Wolf

Yeah, I have HWiNFO on at all times. I do notice I hit power limits at random sometimes when I'm in the 440 W range on the 520 W BIOS.


----------



## newls1

Someone mentioned an EVGA 550 W BIOS a few pages ago... Is there a reBAR 550 W BIOS available for my FTW3 Ultra 3090? I would like a higher power limit BIOS than the XOC 500 reBAR BIOS I have now... Anyone know of one?


----------



## Nizzen

newls1 said:


> someone mentioned a few pages ago about an evga 550w bios... IS there a re-bar 550watt bios available for my ftw3 ultra 3090? I would like to have a higher powerlimit bios then the xoc 500 re-bar bios i have now... Anyone know of one?


It's 520 W reBAR; it's posted a few posts ago.
#16117


----------



## SoldierRBT

Bal3Wolf said:


> yea i have hwinfo on all times i do notice i hit power limits at random sometimes when im in 440watt range on 520w bios.


I'd recommend uninstalling the NVIDIA driver with DDU and installing the newest driver. You can try enabling both NVVDD/MSVDD dip switches on the back of the card and adding a normal OC offset in Afterburner. You should be able to score 15.4K+ with this.

If you have access to the Classy tool, select a voltage point in Afterburner that doesn't hit the PL (520 W); that could be around 1.018-1.037 V depending on how leaky your chip is. Open the Classy tool and try 1.13-1.15 V NVVDD / 1.075-1.10 V MSVDD / 1.40 V memory with your highest stable clock.


----------



## Wihglah

newls1 said:


> someone mentioned a few pages ago about an evga 550w bios... IS there a re-bar 550watt bios available for my ftw3 ultra 3090? I would like to have a higher powerlimit bios then the xoc 500 re-bar bios i have now... Anyone know of one?


The only one that works properly is the Kingpin 1000W BIOS. Which at the moment does not have ReBar.


----------



## KedarWolf

Bal3Wolf said:


> i have gave up my 3090 is like nope not gonna give you good scores with the 5950x no matter clocks i even did a 4.8/4.5ghz ccx overclock and didnt matter not just port royal either my scores just jump all over.


I scored 15 695 in Port Royal.

5950X with Curve Optimizer, system memory at 3800 CL14, and the memory on the Strix OC not overclocked really high like a Kingpin can get.

The core overclocks decently though.

*PORT ROYAL

VALID RESULT*
*SCORE 15695 with NVIDIA GeForce RTX 3090 (1x) and AMD Ryzen 9 5950X
Graphics Score 15 695*

Edit: Oh wait, I think I might have set a static CPU overclock of 4.65/4.60 GHz CCX, not Curve Optimizer.

This one was with Curve Optimizer though.









I scored 15 686 in Port Royal — AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Arizor

KedarWolf said:


> I scored 15 695 in Port Royal
> 
> 5950x and Curve Optimizer with memory on the system at 3800 CL14 and the memory on the Strix OC not overclocked really high like a Kingpin can get.
> 
> Core overclocks decent though.
> 
> *PORT ROYAL
> 
> VALID RESULT*
> *SCORE 15695 with NVIDIA GeForce RTX 3090(1x) and AMD Ryzen 9 5950X
> Graphics Score15 695*


Holy moly you won the silicon lottery with that one, my Strix has a heart attack if it goes above 2070.


----------



## Arizor

Man, I'm done with other BIOSes. Anything but the Strix BIOS will at some point restart my computer (event 41 in Event Viewer, a "Kernel-Power" error). It could be 2 hours, it could be days, but it always happens at some point. Not worth it for the extra 2 FPS in Cyberpunk at the end of the day 😂


----------



## Bal3Wolf

KedarWolf said:


> I scored 15 695 in Port Royal
> 
> 5950x and Curve Optimizer with memory on the system at 3800 CL14 and the memory on the Strix OC not overclocked really high like a Kingpin can get.
> 
> Core overclocks decent though.
> 
> *PORT ROYAL
> 
> VALID RESULT*
> *SCORE 15695 with NVIDIA GeForce RTX 3090(1x) and AMD Ryzen 9 5950X
> Graphics Score15 695*
> 
> Edit: Oh wait, I might have set a static CPU overclock of 4.65/4.60GHz CCX on the CPU I think, not Curve Optimizer.
> 
> This was with Curve Optimizer though.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 686 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Funny part: with my 3900X I scored 15,786, but the best I've been able to get with my 5950X is 15,621, even with a fresh install of Windows. One thing I might have found (needs more testing) is that the Steam version of 3DMark seems more consistent than the version you get from the website.








Result — www.3dmark.com


----------



## newls1

Nizzen said:


> It's 520w rebar  it's posted a few posts ago.
> #16117


Damn it, don't know how I missed that, thank you. Comparing this 520 W reBAR BIOS to the 500 W XOC stock EVGA BIOS, do you think it might be worth trying? Anyone have experience with this 520 W reBAR BIOS and can add their 2 cents? Much appreciated. The BIOS in question is this one ---> EVGA RTX 3090 VBIOS


----------



## Arizor

Can any of the many experts here offer a summary/lesson on the relationship between voltage and power draw?

I'm trying undervolting and finding that at lower voltages my card can actually run high frequencies more stably than with a big voltage (e.g. 1.081-1.1 V), drawing more of the power limit percentage but fewer watts from the wall (as I'd expect from less voltage).

Edited for clarity.
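As a rough rule of thumb, dynamic power in CMOS scales as P ≈ C·f·V², so holding the clock fixed and dropping voltage cuts power roughly quadratically. A minimal sketch (the constant is arbitrary; only ratios matter, and the voltages are the example figures from this thread):

```python
# First-order CMOS dynamic power model: P ~ C * f * V^2.
# The constant c is arbitrary; only the ratio between runs matters.

def rel_power(freq_mhz, volts, c=1.0):
    """Relative dynamic power for a clock/voltage pair."""
    return c * freq_mhz * volts ** 2

p_high = rel_power(2040, 1.10)   # 2040 MHz at 1.10 V
p_low  = rel_power(2040, 1.05)   # same clock at 1.05 V

saving = 1 - p_low / p_high
print(f"undervolt saves ~{saving * 100:.1f}% dynamic power")  # ~8.9%
```

This is why the same 2040 MHz at 1.05 V trips the power limit less often than at 1.1 V, even though the frequency (and so the fps) is unchanged; real cards add static leakage on top, so the measured saving differs somewhat.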


----------



## Bal3Wolf

Well, you're likely hitting power limits of some type, and then the card downclocks; whereas at lower voltages, if it's stable at that clock, it can stay there without downclocking.


----------



## Arizor

Bal3Wolf said:


> well likely hitting the power limits of some type then card down clocks where lower voltages if its stable at the clock can stay at it without downclocking.


Yeah, it's interesting behaviour - I'd crash if set to, for example, 1.1 V @ 2040, but 1.05 V @ 2040 is solid.


----------



## Nizzen

newls1 said:


> Damn it, dont know how i missed that, thank you. Comparing this 520w rebar bios to the 500w xoc stock evga bios, you think it might be worth it to try it? Anyone have any experience with this 520w rebar bios, and can add your 2cents in? Much appreciated. Bios in question is this one ---> EVGA RTX 3090 VBIOS


I'm using it for one of my 3090 strix.
Always try for yourself. Only way to know if it's good for YOU


----------



## GRABibus

newls1 said:


> Damn it, dont know how i missed that, thank you. Comparing this 520w rebar bios to the 500w xoc stock evga bios, you think it might be worth it to try it? Anyone have any experience with this 520w rebar bios, and can add your 2cents in? Much appreciated. Bios in question is this one ---> EVGA RTX 3090 VBIOS


Try, experiment, this is the only way to learn and to find your best bios 😊


----------



## newls1

GRABibus said:


> Try, experiment, this is the only way to learn and to find your best bios 😊


Will do! Last question, I SWEAR!!!! Can someone point me to a tutorial or video on how to flash this BIOS, PLEASE!? I flashed the XOC 500 W because EVGA made it easy: just click a .exe file and it was done! Any info on this would be amazing and appreciated!


----------



## Nizzen

newls1 said:


> will do! Last question I SWEAR!!!! Can someone point me to a tutorial or video on how to flash to this bios PLEASE!? I flashed to the xoc 500w cause evga made it easy by just clicking a .exe file and it was done! so any info on this would be amazing and appreciated!


You Joined Jun 3, 2007..... We all know you can do it 

Download newest nvflash
Put the 520w rebar bios in the nvflash folder
Open cmd with Admin
"cd C:\nvflash" or whatever place you have it...
*nvflash --protectoff
nvflash -6 biosname.rom*
Press Y
Press Y
Done
-----

Love from Norway <3
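Nizzen's steps, collected into a copy-paste sketch. The ROM filename here is hypothetical (use whatever the downloaded 520 W reBAR file is actually called), the commands are echoed rather than executed since a bad flash can brick a card, and nvflash must be run from an elevated prompt:

```shell
# Sketch of the flashing sequence above (commands printed, not run).
# Assumes nvflash and the 520W reBAR ROM sit in C:\nvflash;
# "520w_rebar.rom" is a placeholder filename.

flash_steps() {
    echo 'cd C:\nvflash'
    echo 'nvflash --protectoff'
    echo 'nvflash -6 520w_rebar.rom'   # answer Y to both prompts
}

flash_steps
```

On a dual-BIOS card like the FTW3, flash with the switch set to the slot you intend to overwrite, so the other slot stays as a known-good fallback.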


----------



## J7SC

Bal3Wolf said:


> funny part with my 3900x i score 15,786 but best i been able to get with my 5950x 15,621 even had a fresh install of windows one thing i might of found need more testing is steam ver of 3dmark seems more consisit then the ver you get from the website.
> 
> 
> 
> 
> 
> 
> 
> 
> Result
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com


Well, now I'm confused re. 'the problem' - what problem ? ! Not only are BOTH excellent results, but PortRoyal is less sensitive to CPU compared to other 3D benches...what are your TS, TSE comps like ?

Also, per your 3DM PR comp link, you are running two different graphics drivers, slightly higher average GPU clocks, and slightly higher VRAM, and a 1 C difference in average temps.


----------



## newls1

Nizzen said:


> You Joined Jun 3, 2007..... We all know you can do it
> 
> Download newest nvflash
> Put the 520w rebar bios in the nvflash folder
> Open cmd with Admin
> "cd C:\nvflash" or whatever place you have it...
> *nvflash --protectoff
> nvflash -6 biosname.rom*
> Press Y
> Press Y
> Done
> -----
> 
> Love from Norway <3


I've always been too worried about bricking a card! But I have a BIOS switch on this card, so I feel much better: I can't really brick it, I can just flip the switch and repair a bad flash if needed. Thanks a million, sir, HUGE HELP AND GREATLY APPRECIATED!!!


----------



## yzonker

J7SC said:


> Well, now I'm confused re. 'the problem' - what problem ? ! Not only are BOTH excellent results, but PortRoyal is less sensitive to CPU compared to other 3D benches...what are your TS, TSE comps like ?
> 
> Also, per your 3DM PR comp link, you are running two different graphics drivers, slightly higher average GPU clocks, and slightly higher VRAM, and a 1 C difference in average temps.


For me, PR score has been slowly dropping with each newer driver version pretty much since the reBar driver that increased everyone's PR scores.


----------



## Bal3Wolf

J7SC said:


> Well, now I'm confused re. 'the problem' - what problem ? ! Not only are BOTH excellent results, but PortRoyal is less sensitive to CPU compared to other 3D benches...what are your TS, TSE comps like ?
> 
> Also, per your 3DM PR comp link, you are running two different graphics drivers, slightly higher average GPU clocks, and slightly higher VRAM, and a 1 C difference in average temps.


That just happened to be my best result on the 5950X. I have tried the same drivers, older and newer ones, and slower/faster core/memory; I've even had it running the benchmark under 37°C, lol. I did find something odd: I'm on my stock BIOS switch right now and my scores seem more stable, around 14,680-14,750, so now I'm going to try my OC switch and see what I get. It might be that my LN2 BIOS is causing issues; the standard and OC switches seem to give stable runs where I can get close to the same scores on repeat, and they are higher than what I've been getting on the LN2 switch.

I just pulled this score on the OC BIOS and was able to repeat it, so something is up with my LN2 BIOS. I'm also seeing that if I use a curve overclock, my benchmark scores drop.








Port Royal results (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 — www.3dmark.com):
I scored 15 416 in Port Royal
I scored 15 418 in Port Royal
I scored 15 500 in Port Royal
I scored 15 580 in Port Royal
I scored 15 638 in Port Royal
I scored 15 735 in Port Royal
I scored 15 808 in Port Royal

So I found out my card can do super high memory clocks, up to +1800 MHz. I can get 15,500 on my game clocks now; very happy.








I scored 15 505 in Port Royal — AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## GRABibus

I am currently testing the HOF LAB OC BIOS:
HOFOCLAB500.rom.txt

Does someone have the reBAR support version of this BIOS?


----------



## GRABibus

OK, I could update this BIOS with reBAR:

Resizable BAR BIOS update — Galaxy Microsystems Ltd.
www.galax.com


----------



## J7SC

GRABibus said:


> I am currently testing the HOF LAB OC Bios :
> HOFOCLAB500.rom.txt
> 
> Does someone has the Rebar support version of this Bios ?


...attachment file name issue :-( take out ' .rom'


----------



## GRABibus

J7SC said:


> ...attachment file name issue :-( take out ' .rom'


is it ok ?

HOFOCLAB500.txt


----------



## gfunkernaught

Has anyone ever seen a case where you start seeing artifacts at the desktop, for example while minimizing or expanding a window (anything that uses some 2D acceleration), then a Video Scheduler BSOD, and reinstalling the drivers was the fix? It happened to me; I hope it doesn't mean the card is dying. Reinstalling the drivers did fix the issue, but I'm a little concerned.


----------



## GRABibus

gfunkernaught said:


> Has anyone ever seen a case where you start seeing some artifacts at the desktop while for example minimizing a window or expanding a window anything that uses some 2d acceleration, Then the video scheduler BSOD, and reinstalling the drivers was the fix. Happened to me I hope that doesn't mean the card is dying. Reinstalling the driver's did fix the issue but I'm a little concerned.


No, never happened to me


----------



## D-EJ915

newls1 said:


> always been to worried about bricking a card! but i have a bios switch on this card, so i feel so much better cause i cant brick it but rather just flip a switch and repair bad flash if needed.. thanks million sir, HUGE HELP AND GREATLY APPRECIATED!!!


If your board has display output, you can use your integrated GPU as well, in case you're scared of the screen going blank and the driver resetting.


----------



## GRABibus

For those who are still on air like me, ASUS Strix OC:

Testing the EVGA 500 W XOC BIOS with reBAR in BFV multiplayer:

+130 MHz on core, then V/F tweaking [email protected]
+1000 MHz on memory
Stock air cooler repasted with Conductonaut
Memory chips re-padded with THERMALRIGHT Odyssey
24°C ambient


----------



## Arizor

That's awesome @GRABibus , what resolution do you run at? There's no way my Strix can hold that high a clock at 4K (in fact, there's no way my Strix can hold that clock ever, but definitely not above 1440p!).


----------



## SoldierRBT

des2k... said:


> Here's a basic explanation of Ampere VF table.
> 
> *Any given frequency requested by the GPU is only valid for 3 consecutive voltages, after these 3 consecutive voltages it needs to jump the frequency & voltage point.* This will happen on the 4th voltage point.
> 
> Regardless how you want to apply them in Afterburner, you need to start with this rule.
> So to lock 2115 at 1018mv regardless of temp/load/power:
> 
> 1000mv is 2100 bin (becomes +150 core offset from the default VF table)
> 1006mv is 2100 bin (+0 offset from last point)
> 1012mv is 2100 bin again (+0 offset from last point)
> 1018mv will be 2115 bin*
> 
> *Of course you can't use 2115 bin because the offset needs to be higher(prevents going up/down bins with load,temp changes)
> 
> So this last point will be:
> 1018mv @ 2160 bin & limit max boost with this command nvidia-smi.exe -lgc 210,2115
> 
> 
> 
> 
> View attachment 2516391
> 
> 
> View attachment 2516389


I tried this method and it's better to stick with nvidia-smi + normal OC offset for high internal clocks.

2100MHz at 1000mv with this method internal clocks are around 2050-2055MHz while normal OC offset +150 with nvidia-smi.exe -lgc 210,2100 internal clocks are around 2080-2085MHz. Another difference is that normal OC offset would stabilize at 1006mv after temperature increases.


----------



## des2k...

SoldierRBT said:


> I tried this method and it's better to stick with nvidia-smi + normal OC offset for high internal clocks.
> 
> 2100MHz at 1000mv with this method internal clocks are around 2050-2055MHz while normal OC offset +150 with nvidia-smi.exe -lgc 210,2100 internal clocks are around 2080-2085MHz. Another difference is that normal OC offset would stabilize at 1006mv after temperature increases.


Not on my side. 2115 at 1 V is a 2100 effective frequency. It may drop to 2090 at the lowest under heavy RTX load, but not 2055 MHz.

Now, if I run 2115 at 1018 mV, it's 2100 effective.

If I go higher, like 2175, it will drop to a 2155 effective frequency. The offset is better for effective frequency there, but 20 MHz less won't affect fps at all.


----------



## KedarWolf

SoldierRBT said:


> I tried this method and it's better to stick with nvidia-smi + normal OC offset for high internal clocks.
> 
> 2100MHz at 1000mv with this method internal clocks are around 2050-2055MHz while normal OC offset +150 with nvidia-smi.exe -lgc 210,2100 internal clocks are around 2080-2085MHz. Another difference is that normal OC offset would stabilize at 1006mv after temperature increases.


How would I set say, 2190 at 1.1v for benchmarking? And after I do, can I still set lower offset points in Afterburner like i normally do or no need to?


----------



## KedarWolf

In other news, I've had no working PC for a month now. My motherboard is in for repair/replacement. After a dozen phone calls, including to the repair centre here in Toronto where the motherboard was sent, MSI FINALLY admitted my replacement is stuck in customs with no ETA.

Good news is, I found an MSI B550 Tomahawk really cheap locally from a guy with all five-star reviews, and it's considered an S-tier board.

I'll have a working PC tomorrow evening.


----------



## GRABibus

Arizor said:


> That's awesome @GRABibus , what resolution do you run at? There's no way my Strix can hold that high a clock at 4K (in fact, there's no way my Strix can hold that clock ever, but definitely not above 1440p!).


it is 2560x1440


----------



## Arizor

GRABibus said:


> it is 2560x1440


That’s great. Meanwhile I’m happy to run CP2077 at 4K @ 2025mhz and +1000 mem 😂


----------



## KedarWolf

KedarWolf said:


> How would I set say, 2190 at 1.1v for benchmarking? And after I do, can I still set lower offset points in Afterburner like i normally do or no need to?


Never mind, this explained it:

[Official] NVIDIA RTX 3090 Owner's Club — www.overclock.net
("Yes, performance cap says "PWR"; here is a screenshot of GPU-Z with some parameters set to "MAX" so you can see temps and wattage load. What can I do here?")


----------



## KedarWolf

Arizor said:


> Just repeating this advice for anyone developing in Blender or similar programs with a flashed BIOS (I'm using the EVGA FTW on my Strix). If you don't underclock your VRAM and Core a bit in Afterburner, you'll get computer resets quite regularly (Blender in particular hates overclocks, even factory ones, and will power spike reset your PC if you don't underclock first).


On my Strix OC I ran the Blender benchmark at 2190 core, +1112 memory or something like that and had six or seven #1 results on the leaderboards for a 3090. 

Now that I've mentioned that, all the Kingpin peeps will go and smash my #1's.


----------



## sultanofswing

You need to keep points within 15 MHz of each other in the curve to avoid a requested vs effective clock discrepancy at or near your desired clock/voltage point.
Meaning, if you set [email protected], then you need to set the points below that to [email protected], and so on and so forth. You will notice that 2 to 3 points will be tied together, so if you move one, the others will follow suit.
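That "keep points within 15 MHz" rule can be expressed as a small helper. A sketch only: 15 MHz is the commonly observed clock bin size on these cards, and the snapping functions are mine, not anything Afterburner exposes:

```python
# Snap requested clocks to the ~15 MHz frequency bins so adjacent
# curve points land in the same (or a neighbouring) bin.

BIN_MHZ = 15

def snap_to_bin(freq_mhz):
    """Round a requested clock down to the start of its 15 MHz bin."""
    return (freq_mhz // BIN_MHZ) * BIN_MHZ

def same_bin(a_mhz, b_mhz):
    """True if two curve points sit in the same frequency bin."""
    return snap_to_bin(a_mhz) == snap_to_bin(b_mhz)

print(snap_to_bin(2113))          # -> 2100
print(same_bin(2100, 2113))       # -> True (within one bin)
print(same_bin(2100, 2115))       # -> False (next bin up)
```

In other words, a requested 2113 MHz still resolves to the 2100 bin; only once you cross to 2115 does the card actually move up a bin, which is why points set less than a bin apart behave as one.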


----------



## Arizor

KedarWolf said:


> On my Strix OC I ran the Blender benchmark at 2190 core, +1112 memory or something like that and had six or seven #1 results on the leaderboards for a 3090.
> 
> Now that I've mentioned that, all the Kingpin peeps will go and smash my #1's.


I didn't even know there WAS a Blender benchmark! I'm talking specifically about working in Blender for hours at a time; I do some work in game dev and if overclocked using Blender it will eventually crash.


----------



## Arizor

sultanofswing said:


> You need to keep points within 15mhz of each other in the curve to avoid requested vs effective clock discrepency at or near your desired clock/voltage point.
> Meaning if you set [email protected] then you need to set the points below that to [email protected] and so on and so forth. You will notice that 2 to 3 points will be tied together so to move 1 the others will follow suit.


Yep, good tip. Does anyone else sometimes have the issue in Afterburner where you try to move a node or nodes up, and when you hit 'confirm/tick' the nodes simply pop back to where they were, i.e. refusing where you moved them? This happens sometimes when I'm trying to move the lower frequencies/voltages after first setting my highest; it's pretty irritating.


----------



## KedarWolf

Arizor said:


> I didn't even know there WAS a Blender benchmark! I'm talking specifically about working in Blender for hours at a time; I do some work in game dev and if overclocked using Blender it will eventually crash.











Blender - Open Data — a platform to collect, display and query the results of hardware and software performance tests, provided by the public.
opendata.blender.org


----------



## newls1

newls1 said:


> always been to worried about bricking a card! but i have a bios switch on this card, so i feel so much better cause i cant brick it but rather just flip a switch and repair bad flash if needed.. thanks million sir, HUGE HELP AND GREATLY APPRECIATED!!!


Just an update... I flashed this 520 W reBAR BIOS and was able to stabilize the Metro Enhanced benchmark @ 2130/2145 MHz @ 1.093-1.087 V now, so that's up from 2105 MHz. I'm happy! Thank you to everyone who helped me and taught me how to flash it. Much appreciated!


----------



## gfunkernaught

newls1 said:


> Just an update.... I flashed to this 520w re-bar bios and was able to stabilize Metro Enhanced benchmark @ 2130/2145MHz @ 1.093-1.087v now. so thats up from 2105Mhz. Im happy! Thank you to everyone who helped me and taught me how to flash it! Much appreciated


What settings did you run the benchmark with? Also, were you power limited at all?


----------



## newls1

gfunkernaught said:


> What settings did you run the benchmark with? Also, were you power limited at all?


2560x1080 (my monitor's res) @ 144 Hz, FPS capped to 142 via NVCP, DLSS 2.2.6.0 patch w/ DLSS set to "Performance", all High settings, VRS @ 2x, looped 3 times. If I missed anything else, please let me know. As far as being power limited, I'm not sure; I didn't have GPU-Z open while running the benchmark. I can say this though... using my prior XOC 500 W BIOS, during the benchmark I saw 330-ish watts max for "GPU board power" in RivaTuner statistics with Afterburner running. Now on this BIOS I saw it go to 340-350-ish watts, so if I'm still power limited, IDK, but if I am, I'm "less" power limited than I was... if that makes sense?


----------



## GRABibus

Tested the "HOF LAB OC 500W" BIOS in Cold War:

2560x1440 144 Hz
All settings Ultra
Ray tracing Ultra
DLSS Performance

Strix on air with the stock cooler (repasted with Conductonaut, memory chips re-padded with THERMALRIGHT Odyssey).

=> 2205 MHz sustained at 1.1 V (23°C-24°C ambient, open PC)

Curve and MSI AB settings:

+130 MHz on core, then [email protected]
+1000 MHz on memory

Compared with the EVGA 500W XOC BIOS:
=> roughly 2°C more than with the EVGA 500W XOC BIOS (2600 rpm fans for the HOF LAB OC BIOS versus 3000 rpm fans for the EVGA 500W XOC BIOS)
=> 30 MHz to 45 MHz more than with the EVGA 500W XOC BIOS!!

I will keep testing it, but it seems very good.


----------



## gfunkernaught

newls1 said:


> 2560x1080 (my monitors res) @ 144hz, FPS capped to 142 via NVCP, DLSS 2.2.6.0 patch w/ DLSS set to "performance", All High settings and VRS @ 2x, Looped 3 times. If I missed saying anything else, please let me know. As far as power limited, im not sure, i didnt have GPU-Z open while running benchmark. I can say this though... Using my prior XOC 500w Bios, during benchmark I saw 330-ish watts max for "Gpu board power" showing from Rivatuner statistics with afterburner running. Now on this bios, i saw it go to 340-350ish watts, so if im still power limited, IDK, but if i am, im "LESS" power limited then i was.... if that makes sense?


Makes perfect sense, that's why your clocks were so high during the run, you weren't near the limit. Good stuff!


----------



## KedarWolf

GRABibus said:


> Tested the "'HOF LAB OC 500W" Bios in Cold War :
> 
> 
> 
> 
> 
> 
> 2560x1440 144Hz
> All settings ultra
> Ray tracing Ultra
> DLSS performances
> 
> Strix on air with stock cooler (Repasted with Conductonaut and Memory chips repaded with THERMLARIGHT Odyssey).
> 
> => 2205MHz sustained at 1.1V (23°C-24°C ambient, open PC)
> 
> Curve and MSI AB settings :
> 
> +130MHz on Core then [email protected]
> +1000MHz on Memory
> View attachment 2516602
> 
> 
> 
> Compared with EVGA 500W XOC Bios :
> => 2°C more roughly than with EVGA 500W XOC Bios (2600rpm fans for HOF LAB OC Bios versus 3000rpm fans for EVGA 500W XOC Bios)
> => 30MHz to 45MHz more than with EVGA 500W XOC Bios !!
> 
> I will keep on testing it, but seems very good.


Can you see if those clocks are stable in Metro Exodus Enhanced with ray tracing on Ultra, and in Battlefield 5, if you have them?

Those two games will find unstable overclocks quickly, but turn G-Sync/FreeSync off, or you're not fully testing stability.


----------



## GRABibus

KedarWolf said:


> Can you see if those clocks are stable with Metro Exodus Enhanced with ray tracing on Ultra and Battlefield 5 if you have them?
> 
> Those two games will find unstable overclocks quick, but turn G-Sync/FreeSync off though or you're not fully testing stability.




I could play BFV multiplayer for 30 minutes with no crash (usually a "Crash to Desktop" when I crash in this game).

2560x1440 144Hz
All settings ultra
RT ultra
No DLSS (Not available at 2560x1440)
I always enable Gsync.

I need to test over more days and weeks, but it's quite promising!

EDIT: GPU frequency was stable at [email protected], 24°C ambient.
By the way, this was not the heaviest map in the game.


----------



## sultanofswing

Arizor said:


> Yep good tip. Does anyone else sometimes have the issue with Afterburner where you try and move a node/nodes up, and when you hit 'confirm/tick' the node/s simply pop back to where they were, i.e. refusing where you moved them? This happens sometimes when I'm trying to move the lower frequencies/voltages after first setting my highest, it's pretty irritating.


Yea that happens all the time.


----------



## wtf_apples

J7SC said:


> ...there's always Amazon.ca or DigiKey... Amazon.ca should have restocked by now re. all those Thermalright 12.8 W/mK thermal pads I bought (0.5mm, 1mm, 1.5mm, 2mm). If you're careful re. not trapping air bubbles, you can even stack those Thermalright pads to get the desired thickness.
> 
> As to thermal putty (as opposed to pads and grease), I recently got two jars of the listed DigiKey putty (10.0 W/mK) below, after another user here recommended it. I really do like thermal putty for its flexibility, after years of chasing the right pad size for a given GPU (+ tolerances). The supplier below delivered to me in W. Canada swiftly and w/o any drama...


Awesome, I've seen this mentioned before actually. I'm also in W. Canada, so I'll definitely order from the same place.
I'd rather use this stuff, since buying pads in multiple thicknesses and W/mK ratings gets expensive and wasteful.
Thanks!


----------



## Nizzen

Anyone tested the Galax 1000W BIOS for the 3080 Ti?








GALAX RTX 3080 Ti VBIOS — 12 GB GDDR6X, 1365 MHz GPU, 1188 MHz Memory (www.techpowerup.com)





Yes I know it's the wrong card for the topic


----------



## GRABibus

Nizzen said:


> Anyone tested the Galax 1000W BIOS for the 3080 Ti?
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3080 Ti VBIOS — 12 GB GDDR6X, 1365 MHz GPU, 1188 MHz Memory (www.techpowerup.com)
> 
> 
> 
> 
> 
> Yes I know it's the wrong card for the topic


Do you still have your 3090 HOF ?
We didn't see any overclocks and benchmarks....


----------



## Nizzen

GRABibus said:


> Do you still have your 3090 HOF ?
> We didn't see any overclocks and benchmarks....


Still have it, but my daughter is using it in a gaming PC. It's summertime now, so not much benchmarking.


----------



## KedarWolf

wtf_apples said:


> Awesome, Ive seen this mentioned before actually. Im also w.canada so ill def order form this same place.
> Id rather use this stuff since having multiple thickness w.mk pads gets expensive and wasteful.
> Thanks!


Try the GELID Extreme pads; they are soft and compressible, not stiff and hard like the Thermalright pads, and about the same W/mK.

You'll get better contact on the memory, VRMs, and even the core as a result.

Not the GELID Ultimate though; they are stiff and hard like the Thermalright.

You can get the GELID pads cheap from the GELID store, and even though they come from China, they never took long at all, with no taxes or duty.


----------



## KedarWolf

GRABibus said:


> Tested the "'HOF LAB OC 500W" Bios in Cold War :
> 
> 
> 
> 
> 
> 
> 2560x1440 144Hz
> All settings ultra
> Ray tracing Ultra
> DLSS performances
> 
> Strix on air with stock cooler (Repasted with Conductonaut and Memory chips repaded with THERMLARIGHT Odyssey).
> 
> => 2205MHz sustained at 1.1V (23°C-24°C ambient, open PC)
> 
> Curve and MSI AB settings :
> 
> +130MHz on Core then [email protected]
> +1000MHz on Memory
> View attachment 2516602
> 
> 
> 
> Compared with EVGA 500W XOC Bios :
> => 2°C more roughly than with EVGA 500W XOC Bios (2600rpm fans for HOF LAB OC Bios versus 3000rpm fans for EVGA 500W XOC Bios)
> => 30MHz to 45MHz more than with EVGA 500W XOC Bios !!
> 
> I will keep on testing it, but seems very good.


Do you have Metro Exodus Enhanced? If you do, go to the executable folder, open the benchmark, max out ray tracing, put settings on Ultra, Hairworks and PhysX on, highest-performance DLSS, and see if it'll run 10 loops; it's a really good stability test.


----------



## GRABibus

KedarWolf said:


> Do you have Metro Exodus Enhanced? If you do, go to the executable folder, open the benchmark, put Ray Tracing maxed out, settings on Ultra, Hairworks and PhysX on, highest performance DLSS, and see if it'll run 10 loops, really good stability test.


No, I don't have it.


----------



## sultanofswing

If you are testing Metro Exodus you should be testing without DLSS.


----------



## Arizor

If you want the ultimate crash test just max out CP2077 and stand in the Afterlife bar, will crash within 10 mins max if unstable (normally less).

Edit: also for those who might have the same issue, though I thought it was a VRAM overclock resetting my computer, turns out it was an undervolt applied to my new 5900X via PBO.


----------



## tps3443

Hey guys!! I’m joining the club!!! After a long road with several different 2080Ti’s over the years, a friend sold me his 3090 Kingpin at MSRP. So, my 3090KP will be here in a few days.


(I am also getting the 3090 Kingpin Hydro copper water block from him as well)

Anyways, I’m excited!!!


----------



## ManniX-ITA

GRABibus said:


> no I don’t have


It's on offer on Steam for the next 11 hours; I'd buy it.









Save 75% on Metro Exodus on Steam — "Flee the shattered ruins of the Moscow Metro and embark on an epic, continent-spanning journey across the post-apocalyptic Russian wilderness. Explore vast, non-linear levels, lose yourself in an immersive, sandbox survival experience, and follow a thrilling story-line that spans an entire year..." (store.steampowered.com)







sultanofswing said:


> If you are testing Metro Exodus you should be testing without DLSS.


I was testing both with and without at the beginning but I'm only testing with DLSS Ultra Performance now.
With DLSS the memory and Tensor cores are more stressed and the average clock is 70-90 MHz higher.
It works better for stability testing; many times a run without DLSS was stable while the same run with DLSS wasn't.


----------



## GRABibus

Thanks for the information.
I just bought the game.


----------



## InvictuSZN

KedarWolf said:


> Try the GELID Extreme pads, they are soft and compressible, not stiff and hard like the Thermalright pads. About the same wm/k.
> 
> You'll get better contact on the memory, VRMs, and even the core as a result.
> 
> Not the GELID Ultimate though, they are stiff and hard like the Thermalright.
> 
> You can get the GELID pads cheap from the GELID store and even though they came from China, they never took long at all, with no taxes or duty.


I ordered the GELID Ultimate yesterday. I bought the different thicknesses that are mentioned in the manuals for the EKWB Strix block as well as the active backplate for the Strix from EK
(1mm for all the pads needed on the front, and 1.5 and 2mm for the back). Should the hardness be a concern to me if I have the correct sizes as per EK? I can still cancel the order and get the Extremes instead.


----------



## GAN77

InvictuSZN said:


> I ordered the GELID Ultimate yesterday, I bought the different thicknesses that are mentioned in the manuals for the EKWB Strix Block as well as the as the active backplate for the Strix from EK.
> (1mm for all the pads needed on the front and 1.5 and 2mm for the back) Should the hardness be a concern to me if i have the correct sizes as per EK? I can still cancel the order n get the extremes instead.


Ultimate is not solid.


----------



## InvictuSZN

GAN77 said:


> Ultimate is not solid.
> View attachment 2516841
> View attachment 2516842


Yeh that looks pretty good. Gonna keep the order for the Ultimates then.

Thank you!


----------



## KedarWolf

InvictuSZN said:


> I ordered the GELID Ultimate yesterday, I bought the different thicknesses that are mentioned in the manuals for the EKWB Strix Block as well as the as the active backplate for the Strix from EK.
> (1mm for all the pads needed on the front and 1.5 and 2mm for the back) Should the hardness be a concern to me if i have the correct sizes as per EK? I can still cancel the order n get the extremes instead.


Yeah, the Extreme are highly recommended, even over the Ultimate. If you contact them right away, you may be able to cancel your order.


----------



## KedarWolf

GAN77 said:


> Ultimate is not solid.
> View attachment 2516841
> View attachment 2516842


Someone that bought the Ultimate told me they are hard and flaky, not soft and very compressible like the Extreme.






3080 FE Full thermal pad & paste replacement (results + photos) - r/nvidia (libredd.it)





Edit - quote:

"I tried with the ultimate not realising they are twice as dense as the extreme pads they dont make good enough contact with the GPU die, you need the slightly more squishy pads.

Did you use the softer Gelid Extreme 12W/mk pads, or the harder Gelid Ultimate 15W/mk pads on the core side?
Some people say the Ultimates are too hard, but surely they can be squished down?

u/natsak491 May 14 '21
I think it was only the 12w/mk ones, they were pretty squishy. I can mine at 98/mhs at 50% fan speed and vram doesn't go above 88c. Core sits at 56c.
Temps while gaming are so much lower too.

u/scatyman 29d ago
No, get the extremes, the ultimates stop the GPU core from making as good as a contact as it should."


----------



## tps3443

GAN77 said:


> Ultimate is not solid.
> View attachment 2516841
> View attachment 2516842


I think some people received bad batches of GELID Ultimate. I've had these previously too, and they were "hard as work boot rubber" lol. Anyway, I had to make sure the thickness was exactly right, because they were so hard that the cooler and components just won't squeeze the pads down, so you can end up with gaps between the die and cooler. (No real good way to see this)

Unless,

you use this ultra-thin pressure film before waterblock install. You want to make sure you're getting perfect contact if you're installing new thermal pads.

This stuff works amazingly well! It is super thin; you can just lay it on the die, place all of your thermal pads, reassemble the GPU with block or cooler, then disassemble the GPU and confirm where the paper has been pressed and how even your contact is.


----------



## InvictuSZN

tps3443 said:


> I think some people received some bad batches of gelid ultimate. I too have had these previously, and they were “Hard as work boot rubber“ lol anyways, I had to make sure the thickness was just perfectly right!!
> 
> Also, make sure you use this ultra thin pressure film before waterblock install. You wanna make sure your getting that perfect contact, if your installing new thermal pads.
> 
> Thia stuff works amazing! It is super thin, you can just lay it in tempe die, place all of your thermal pads, reassemble the gpu with block or cooler. Then disassemble the gpu and confirm where the paper has been pressed and how evenly too.


I think as long as the thermal pad size is correct it doesn't really matter. I bought thermal pads according to the sizes shown in the manuals for the EK Strix GPU block and active backplate. If those aren't right I will report back, I guess. The active backplate is still on preorder though; apparently I should be getting it August 2nd.


----------



## tps3443

InvictuSZN said:


> I think as long as the thermal pad size is correct it doesnt really matter. I bought thermal pads according to the sizes shown in the manuals for the EK Strix GPU block and active backplate. If those arent right I will report back I guess. The active backplate is still on preorder tho, apparently I should be getting it 2nd August.


Absolutely, I have used them even when they were hard as a rock lol. But you just want to make sure they are all the right thickness, and if you don't know, confirm proper GPU die contact.


----------



## KedarWolf

InvictuSZN said:


> I think as long as the thermal pad size is correct it doesnt really matter. I bought thermal pads according to the sizes shown in the manuals for the EK Strix GPU block and active backplate. If those arent right I will report back I guess. The active backplate is still on preorder tho, apparently I should be getting it 2nd August.


I had an email saying my Strix active backplate was supposed to ship on the 7th; nada. I filed a ticket with EKWB and am waiting for a response, but yeah, I'm likely Aug. 2nd now as well.


----------



## gfunkernaught

Looking to start some speculation about the Kingpin 1 kW ReBAR BIOS. I don't know how long after the Kingpin Hybrid 3090 was released the 1 kW BIOS became available to the public, but it wasn't long, right? I wonder why this time, with ReBAR, it isn't being shared. Is there something significant about it? What is so special about the 1 kW XOC ReBAR BIOS that no one is sharing it, when the non-ReBAR BIOS got out? Thoughts?


----------



## yzonker

gfunkernaught said:


> Looking to start a small speculative convo about the kinpin 1kw rebar bios. I dont know how long after the kingpin hybrid 3090 was released that the 1kw bios were available to the public, but it wasn't long right? I wonder why this time with rBar they're not being shared. Is there something significant? What is so special about the 1kw xoc rbar bios that no one is sharing it yet the non-rbar bios became available. Thoughts?


EVGA is actively telling KP owners not to share it. There's been speculation that the copy they give people is tied to them, but I'm not sure that's known for certain.


----------



## SoldierRBT

3090 KPE HC 520W BIOS - 16007 PR


















I scored 16 007 in Port Royal — Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





There's still room for improvement; internal clocks could also improve a little bit more. +1350 used to pass, now only +1300. High ambient temps aren't helping.


----------



## sultanofswing

yzonker said:


> EVGA is actively telling KP owners to not share it. Been speculation the copy they give people is tied to them but I'm not sure that's known definitely though.


There is nothing special about it.


----------



## yzonker

sultanofswing said:


> There is nothing special about it.


So you know for a fact that every copy they hand out is identical?


----------



## sultanofswing

yzonker said:


> So you know for a fact that every copy they hand out is identical?


I'm talking from a performance standpoint. Not sure if each copy is different and tied to anything.


----------



## Toopy

Is it required to alter the V/F curve to use more than ~400W?
I flashed my Zotac AMP Holo Gaming STE edition (****house thermal engineering) with the Kingpin 520W BIOS.

It opens up the power slider to 120% and GPU-Z shows the correct limits, but I can't get it to draw more than 410W, which is the original max the Zotac BIOS had.
Thermals are good etc... maybe it's just GPU-Z not reading correctly.


----------



## ManniX-ITA

Toopy said:


> Is it required to alter the VF curve to use more power than ~400w?


It should be a 2x8P so it's unlikely you can draw more than that.
You need a 3x8P card to draw more.


----------



## InvictuSZN

KedarWolf said:


> I had an email my Strix Active backplate was supposed to ship on the 7th, nada. I did a ticket to EKWB, waiting for a response, but yeah, I'm likely Aug. 2nd now as well.


I just received an email from EK. They provided me with a tracking ID for DHL... but the DHL page tells me that EK only created a shipping label and didn't actually hand anything to DHL yet... so essentially I have no idea now if I'm getting it way earlier than I was originally supposed to, or if they just created the shipping label 3 weeks early for whatever reason lmao. It's kinda annoying because I haven't actually ordered all the parts I need for the loop yet :/


----------



## Toopy

ManniX-ITA said:


> It should be a 2x8P so it's unlikely you can draw more than that.
> You need a 3x8P card to draw more.


It's a 3x 8-pin.


----------



## ManniX-ITA

Toopy said:


> It's a 3x 8pin,


Right, I was looking at the 3080 instead.
Try another BIOS then.

I've tested the EVGA 520W BIOS on the ASUS Strix OC; max 375W power draw (almost the same scores, just a tad lower...) and one DP port disabled.
The third 8-pin is not used at all.
Now I'm testing the HOF 500W; power draw went up to 500W, no ReBAR (not sure, I think it's not supported), and I still have to check if the disabled DP port came back to life.
It's a tiny bit better than the original Strix OC, but by a very small margin; slightly higher temperatures, but it does support zero fan RPM.
Metro Exodus EE is more consistent in fps with slightly higher min and average.
Max framerate is limited by the cooling, which is still stock, but on the first run it's a couple of frames above the original.


----------



## ManniX-ITA

GRABibus said:


> I will keep on testing it, but seems very good.


May I ask which one did you install?

I had one shared earlier here and now tested this one from:









KFA2 RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





Neither of them seems to support ReBAR.


----------



## Toopy

ManniX-ITA said:


> Right, I saw the 3080 instead.
> Try another BIOS then.
> 
> I've tested the EVGA 520W BIOS on the ASUS Strix OC; max 375W power draw (almost same scores, just a bit tad lower...) and one DP port disabled.
> The third 3pin is not used at all.
> Now I'm testing the HOF 500W; power draw went up to 500W, no ReBar (not sure, I think it's not supported), still have to check if the disabled DP port came back to life.
> It's a tiny bit better than the original Strix OC but by a very little margin, slightly higher temperatures but it does support zero fan rpm.
> Metro Exodus EE is more consistent in in fps with a little higher min and average.
> Max framerate is limited by the cooling which is still stock but on the first run is a couple of frames above the original.


Ok, seems more experimenting is in order then.
I'm still running my overclocked 3770K system, so ReBAR isn't an issue.
I just don't feel the need to drop ~$1000 AUD when I can get scores like this








I scored 14 100 in Port Royal — Intel Core i7-3770K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10 (www.3dmark.com)




and run all the AAA titles at max in ultrawide with silky frame rates.

I'll give the HOF a shot tomorrow.


----------



## GRABibus

ManniX-ITA said:


> May I ask which one did you install?
> 
> I had one shared earlier here and now tested this one from:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> KFA2 RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> 
> 
> 
> 
> Both of them doesn't seem to support ReBar.


Yes, this is the BIOS I use.
You can update the BIOS to the ReBAR version via this link:






Resizable BAR BIOS update — Galaxy Microsystems Ltd. (www.galax.com)


----------



## ManniX-ITA

GRABibus said:


> yes this is the bios I use.
> You can update the bios with rebar with this link :


Thanks, this is the ReBar version the updater installed:









GALAX RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





Guess it's better to flash this one directly instead of doing it twice.

Are you still testing this one? More findings?


----------



## GRABibus

ManniX-ITA said:


> Thanks, this is the ReBar version the updater installed:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> Guess it's better to flash this one directly instead of doing it twice.
> 
> Are you still testing this one? More findings?



It is not stated in the BIOS description that it includes ReBAR:









GALAX RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)




GALAX, 3090 HOF, NVIDIA GeForce RTX 3090, 500W OC Lab (1395 / 1219)


----------



## ManniX-ITA

Toopy said:


> I just don't feel the need to drop ~$1000Aud when I can get scores like this


Well, I kept my lovely i7-4770K as long as I could.
And it was still great.
But I have to say that when I moved to the 3800X, both gaming and everything else was definitely an order of magnitude better.
Now with the 5950X gaming is even better, but by a small margin.
For everything else that's a heavy load, it is of course a monster.
If your main usage is gaming, you can get a massive bump with a 5800X or an 11700K.

I don't see a big difference in performance with the HOF BIOS, but the 460W/500W target can indeed be seen in the power draw.
I still have to boot into the benching install to compare, but I see it keeps an almost steady 485-495W in Time Spy Extreme.
The original Strix OC was more around 465-475W with brief peaks to 485W.


----------



## OrionBG

Hey guys, soo... I did a thing yesterday and ordered myself an RTX 3090 FE...
Almost at MSRP too. I'm a happy camper... Hopefully it will be here next week.
I've ordered one of those Bykski 3090 FE blocks with an active backplate, so the temps should be under control.
I have one question though: does anybody know where in the EU they sell 12-pin sleeved GPU power cables compatible with the Corsair AX1600i? I'm not giving CableMod 40 euros + 20 shipping for a single cable, and they also have a 70-euro minimum order...


----------



## ManniX-ITA

OrionBG said:


> I have one question though, does anybody knows where in the EU they sell 12 pin sleeved GPU power cables compatible with the Corsair AX1600i?


Those ones?






Corsair Premium Sleeved PCIe Dual Cable, two-pack (Gen 4), black — 2x 6+2-pin PCIe dual cables, black paracord textile sleeving, 650 mm each, 4x 8-slot cable combs included (www.caseking.de)


----------



## OrionBG

ManniX-ITA said:


> Those ones?
> 
> 
> 
> 
> 
> 
> Corsair Premium Sleeved PCIe Dual Cable, two-pack (Gen 4), black — 2x 6+2-pin PCIe dual cables, black paracord textile sleeving, 650 mm each (www.caseking.de)


Thanks, but those are normal 8-pin GPU power cables; I have a lot of those. I need a 12-pin one for the 3090 FE...


----------



## ManniX-ITA

OrionBG said:


> Thanks but those are normal 8 pin GPU power cables. I have a lot of those. I need a 12 pin one for the 3090 FE...


Sorry, I ended up there searching for the SKU and didn't notice it was a different one.
Seems to be close to impossible to find...
But isn't it available to order directly from Corsair?
Looks like it's possible from Germany at least; it costs 19.90€ + shipping though.


----------



## gfunkernaught

sultanofswing said:


> I am talking about as far as a performance standpoint. Not sure if each copy is different and tied to anything.


So if there is nothing special about these BIOSes, why is EVGA extra diligent this time? The non-ReBAR 1 kW XOC BIOS got out pretty quick, right? I think EVGA was actively trying to keep that under wraps, but it got out. What changed with the ReBAR BIOS that no one is sharing it? It's not like the hardware changed so that owners can't rip the firmware. That's the part I'm talking about. Seems odd.


----------



## Shawnb99

yzonker said:


> EVGA is actively telling KP owners to not share it. Been speculation the copy they give people is tied to them but I'm not sure that's known definitely though.


How do we get this bios? Do we have to contact EVGA?


----------



## ManniX-ITA

gfunkernaught said:


> It's not like the hardware changed so owners can't rip the firmware. That's the part I'm talking about. Seems odd.


Well, couldn't it be that they started using firmware files tied to a license?

nvflash seems to support something like this, sadly...



Code:


  Create unlock license request file.
  > nvflash --licreq=LicenseRequest.bin USER_FW_MOD

  Install unlock license objects.
  > nvflash --wrhlk=License.hulk

  Flash tweaked firmware.
  > nvflash --license=License.hulk vbios.rom


----------



## OrionBG

ManniX-ITA said:


> Sorry, I ended up there searching for the SKU and didn't noticed it was a different one
> Seems to be close to impossible to find...
> But isn't available to order directly from Corsair?
> Looks like it's possible from Germany at least, costs 19.90€ + shipping though.


Yep, those are quite hard to find, especially in 16 AWG. Most are 18 AWG, which they don't recommend for the 3090 specifically.
I'll look on AliExpress for something good, but the delivery time and the new EU rules make it quite expensive to get stuff from there too...
The Corsair one is not sleeved and will not fit well in my system, but it's a solution I might end up using if I can't get anything else.


----------



## GAN77

OrionBG said:


> I have one question though, does anybody knows where in the EU they sell 12 pin sleeved GPU power cables compatible with the Corsair AX1600i?


Temporary solution. Use the supplied adapter.


----------



## OrionBG

GAN77 said:


> Temporary solution. Use the supplied adapter.


For sure, that's what I'll be using initially, at least until I have the waterblock and figure out the placement of the screen I got for hardware monitoring.
The interesting thing is that all the CableMod kits they sell are supposedly 18 AWG, and I see a lot of mentions that 16 AWG cables are recommended for the 3090...


----------



## yzonker

gfunkernaught said:


> So if there is nothing special about these bios, why is evga extra diligent this time? The non rbar 1kw xoc bios got out pretty quick right? I think evga was actively trying to keep that under wraps but it got out. What changed with the rbar bios that no one is sharing them? It's not like the hardware changed so owners can't rip the firmware. That's the part I'm talking about. Seems odd.


Probably a combination of EVGA wanting to keep that BIOS exclusive to the KP to help separate it from other cards, and also possibly RMA issues with people hurting their card with the 1 kW BIOS but flashing it back to stock (assuming it isn't dead).

They use power limits to define their product tiers. That BIOS leaking really changes it. It even gives us poor 2x8-pin guys a decent PL.


----------



## tbrown7552

EVGA isn't doing anything; they don't have anything to do with the 1 kW BIOS. You have to contact Vince directly, and even then he's ghosting people. I know several people with KP cards that have reached out for the ReBAR 1 kW BIOS and gotten no response, me included.


----------



## yzonker

tbrown7552 said:


> EVGA isnt doing anything. They dont have anything to do with the 1kw bios. You have to contact Vince directly and even then hes ghosting people. I know several people with KP cards that have reached out for the rebar 1kw bios and no response with me included.


Well, EVGA or NVIDIA is supporting it by signing the BIOS, or by giving Vince the ability to sign it. Otherwise you couldn't flash it.


----------



## KedarWolf

My queue spot for a 3090 Kingpin has come up, but I'm at work all day and can't sell my Strix OC 3090 within eight hours.

If anyone wants it, I'll give them access to my EVGA Elite account and they can change the address to theirs. I'd NEVER scam peeps, so if anyone wants it, they have maybe 7 hours.

I don't have that kind of disposable income.


----------



## gfunkernaught

So the 1 kW ReBAR BIOS has an additional set of code to "prevent" it from being used on non-KP boards. We all know that can be hacked. Maybe Vince had to sign some kind of NDA that says something like "if these BIOSes end up on the internet, it's on you".


----------



## zkareemz

GRABibus said:


> Tested the "'HOF LAB OC 500W" Bios in Cold War :
> 
> 
> 
> 
> 
> 
> 2560x1440 144Hz
> All settings ultra
> Ray tracing Ultra
> DLSS performances
> 
> Strix on air with stock cooler (Repasted with Conductonaut and Memory chips repaded with THERMLARIGHT Odyssey).
> 
> => 2205MHz sustained at 1.1V (23°C-24°C ambient, open PC)
> 
> Curve and MSI AB settings :
> 
> +130MHz on Core then [email protected]
> +1000MHz on Memory
> View attachment 2516602
> 
> 
> 
> Compared with EVGA 500W XOC Bios :
> => 2°C more roughly than with EVGA 500W XOC Bios (2600rpm fans for HOF LAB OC Bios versus 3000rpm fans for EVGA 500W XOC Bios)
> => 30MHz to 45MHz more than with EVGA 500W XOC Bios !!
> 
> I will keep on testing it, but seems very good.


Where can I download the BIOS?


----------



## GRABibus

zkareemz said:


> where to download the bios?


The file needs to be renamed with a .rom extension.
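For anyone unsure what that means in practice, here's a minimal sketch. The filename below is a placeholder for whatever the forum attachment is actually called, and the nvflash steps are shown commented out because they write to the card:

```shell
# Placeholder name standing in for the downloaded BIOS attachment:
f="HOF_OC_LAB_500W"
touch "$f"        # stand-in for the actual downloaded file

# nvflash expects a .rom extension, so rename the download first:
mv "$f" "$f.rom"
ls "$f.rom"

# Then flash the renamed file (do NOT run without saving your current VBIOS):
#   nvflash --save backup.rom
#   nvflash "$f.rom"
```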


----------



## zkareemz

deleted


----------



## des2k...

...


gfunkernaught said:


> So the 1kw rbar bios have an additional set of code to "prevent" it from being used on non-kp boards. We all know that can be hacked. Maybe Vince had to sign some kind of nda that says something like "if these bios end up on the internet it's on you".


No such thing; they're all generic VBIOSes from NVIDIA. The OEM just puts in their brand name / power limits and re-signs it.

No OEM will take the risk of bricking 3090s by adding more to the size and locking it!


----------



## Falkentyne

des2k... said:


> ...
> 
> no such thing, all generic vbios from Nvidia, oem just put's their brand name / power limits and re-sign it
> 
> no oem will take the risk of bricking 3090 by adding more to the size and locking it !


This is absolutely not true at all.
There are multiple sections in a VBIOS. There's the InfoROM, HULK, and several other regions related to security and 'profiles'.
Here is an nvflash 5.670.0 build with ID checks disabled. Try flashing a 3090 FE with another VBIOS (or vice versa) and see what happens to your card (you'd better hope you can flash it back with another card or iGPU).

Rename it to .exe by removing the .txt.

----------



## KedarWolf

InvictuSZN said:


> I just received an email from EK. They provided me with a tracking ID for DHL... But the DHL page tells me that EK only created a shipping lable but didnt actually hand anything to DHL yet... so essentially I have no idea now if im getting it way earlier than I was originally supposed to or they just created the shipping lable 3 weeks early for whatever reason lmao. Its kinda annoying cuz I didnt actually order all the parts I need for the loop yet :/


I got an email today too; my Strix OC active backplate just shipped, and tracking shows it in Slovakia.

Edit: Actually, it's in Italy now.


----------



## tps3443

A friend is shipping my 3090 Kingpin Monday. I will email Vince directly for the 1 kW BIOS. From what I understand, he requires the S/N and your name and address for warranty purposes. He will only give the BIOS out to registered and verified 3090 KP owners.

They're trying to keep the card a little bit exclusive, from what I understand. I mean, you can essentially get most of the performance on other 2x8-pin or 3x8-pin 3090s (or better, even, I suppose). A lot of novice or inexperienced overclockers have purchased the 3090 KP, and their numbers aren't gonna look that fantastic next to an experienced overclocker with an FTW3 on custom water cooling. And when their numbers aren't crazy high, other overclockers with lower-tier 3090s brag about their higher scores lol. When in reality the 3090 KP doesn't give you magical numbers with magical powers: you've gotta put the time in on tweaking that thing to the maximum (and it requires the KP Hydro Copper block too). And this goes for just about any video card.

I have owned several 2080 Tis and I could get some really good performance out of them. But I could never match the enthusiast overclockers who owned Galax cards or KP 2080 Tis, just due to slightly better fine tuning. I needed an actual HOF or KP to get that last little bit of performance; you eventually hit a wall on the reference-board models a lot of the time. (It is what it is.) Galax had some awesome XOC software that was pretty exclusive, and the KP cards at least didn't require you to break out a soldering iron lol..


----------



## KedarWolf

tps3443 said:


> A friend is shipping my 3090 Kingpin Monday. I will email Vince for the 1KW bios directly. From what I understand, he requires the S/N and your name and address information for warranty purposes. He will only give the bios out to registered and verified 3090 KP owners.
> 
> I'm not really sure why some guys are so upset over this. They are trying to make the card just a little exclusive. I mean, you can essentially get most of the performance on other 3090's (or better, I suppose); a lot of novice overclockers have purchased the 3090 KP. And when their numbers are not crazy high, other overclockers with lower tier 3090's brag about their higher score. (When in reality) the 3090 KP doesn't give you magical numbers. You've gotta put the time in on tweaking that thing. (And it requires the KP Hydro Copper block too)
> 
> I have owned several 2080 Ti's and I could get some insane performance out of them. But, I could never match the Galax cards or the KP 2080 Ti's just due to slightly better fine tuning. (I just needed an actual HOF or KP to get that last little bit of performance. (It is what it is)


If your friend is the @0451 from Post Your Last Purchase, there was a problem with the order and it was cancelled, as it used their info and they had already bought one.

However, if it IS you that wants it, you have two hours to contact me and you can order it directly yourself through my EVGA Elite account if you never ordered one before.


----------



## Nizzen

KedarWolf said:


> I got an email today too, my Strix OC active backplate just got shipped today, tracking shows it in Slovakia.
> 
> Edit: Actually, it's in Italy now.


Nice !

Too bad the active backplate came to market that late. I've got the Strix waterblock from almost the first batch, and have the mp5works serial backplate cooler on it. Works pretty well: 65°C VRAM temp in games, and that's without the best thermal pads (using Thermal Grizzly Minus Pad 8).


----------



## tps3443

KedarWolf said:


> If your friend is the @0451 on Post Your Last Purchase, there was a problem with the order and it was cancelled as they use their info and had already bought one.
> 
> However, if it IS you that wants it, you have two hours to contact me and you can order it directly yourself through my EVGA Elite account if you never ordered one before.


Oh, you may have me mixed up with someone else.


----------



## gfunkernaught

@KedarWolf That is super tempting, but I already spent $2k on my Trio _sob_. I'd then have to get a block, backplate, do the whole routine. Sounds like fun but there are better things to spend that kind of money on...like a telescope or good mount ...Or I could just save it and get the most out of my Trio.


----------



## Toopy

ManniX-ITA said:


> Well I kept my lovely i4770k till I could.
> And it was still great.
> But I have to say that when I moved to the 3800x both gaming and everything else was definitely an order of magnitude better
> Now with the 5950x gaming is even better but by a small margin.
> For the everything else which is heavy load it's of course a monster.
> If your main usage is gaming you can get a massive bump up with a 5800x or a 11700k.
> 
> I don't see a big difference in performances with the HOF BIOS but indeed the the target 460w/500w can be seen on power draw.
> Have still to boot on the Benching install to compare but I see it keeps an almost steady 485W-495W on TimeSpy Extreme.
> The Strix OC original was more around 465W-475W with brief peaks to 485W.


Despite flashing both the 450W and the 520W KP BIOSes, I have not seen the power draw exceed ~415W peak.
Lamenting buying this Zotac POS; it's the first custom-PCB card I have ever bought and could well be the last.

The work required to fix the severely flawed thermal engineering, then rectify the crippled bios Zotac installed to cover up the issue, and now the seemingly hardware-locked power limit ceiling (haven't tested the HOF yet) have all left a bad taste. At the very least I will never buy a Zotac card again.
Yes, it games well. Now, anyway.

On a side note, has anyone ever seen the memory boosting higher than the set frequency on any 3090? I set mem clocks to +1000 but during gaming/benching it runs at 1356MHz, which is higher. Actually, I think my PNY did this as well.
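Readings like this are usually clock-domain confusion between tools rather than the card boosting past the offset. A rough converter, with the relationships stated as assumptions: the real memory clock is 1219 MHz stock on a 3090 (19504 MT/s effective, i.e. 16x), and Afterburner displays and offsets in an 8x domain (9752 MHz stock), so a +1000 offset is only +125 MHz on the real clock.

```python
# Rough converter between the clock domains different tools report for GDDR6X.
# Assumptions (not from the thread): HWiNFO/GPU-Z show the real memory clock
# (1219 MHz stock per the 3090 spec), the effective data rate is 16x that
# (19504 MT/s), and Afterburner works in an 8x domain (9752 MHz stock).
STOCK_MCLK = 1219  # MHz, real memory clock at stock

def from_afterburner_offset(offset_mhz):
    ab_display = STOCK_MCLK * 8 + offset_mhz  # what Afterburner shows
    real_mclk = ab_display / 8                # what HWiNFO/GPU-Z show
    effective = real_mclk * 16                # MT/s on the bus
    return ab_display, real_mclk, effective

disp, real, eff = from_afterburner_offset(1000)
print(f"+1000 -> Afterburner {disp} MHz, real {real:.0f} MHz, {eff:.0f} MT/s")
```

+1000 works out to a real clock in the mid-1300s, which is the range being reported here; small differences from the exact number come down to tool rounding and the card's actual stock strap.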


----------



## yzonker

Toopy said:


> Despite flashing both the 450 & the 520w KP bios’s I have not seen the power draw exceed ~415w peak.
> Lamenting buying this zotac pos, first custom pcb card I have ever bought and could possibly be the last.
> 
> The work required to fix the severely flawed thermal engineering, then rectify the crippled bios zotac installed to cover up the issue, now the seemingly hardware locked power limit ceiling.
> (Haven’t tested the HOC yet)
> All have left a bad taste, at the very least I will never buy a zotac card again.
> Yes it games well, now anyway.
> 
> On a side note has anyone ever seen the memory boosting higher than the set frequency on any 3090? I set mem clocks to +1000 but during gaming/bench it runs at 1356Mhz which is higher. Actually I think my PNY did this as well.


What power draw do you see on the 8 pins and slot? Any of them maxed out?


----------



## Toopy

ManniX-ITA said:


> Well I kept my lovely i4770k till I could.
> And it was still great.
> But I have to say that when I moved to the 3800x both gaming and everything else was definitely an order of magnitude better
> Now with the 5950x gaming is even better but by a small margin.
> For the everything else which is heavy load it's of course a monster.
> If your main usage is gaming you can get a massive bump up with a 5800x or a 11700k.
> 
> I don't see a big difference in performances with the HOF BIOS but indeed the the target 460w/500w can be seen on power draw.
> Have still to boot on the Benching install to compare but I see it keeps an almost steady 485W-495W on TimeSpy Extreme.
> The Strix OC original was more around 465W-475W with brief peaks to 485W.


So, an update on this.
Flashed the HOF 500W bios; now seeing peaks of 470W and no P/L throttling. Core clock holds steady, locked at 2070 (so far; still testing).
Forget the bit about the memory, I had my clock calculation wrong. For some reason I had it in my head that 1337MHz was +1200, but it's only +1000.



yzonker said:


> What power draw do you see on the 8 pins and slot? Any of them maxed out?


I couldn't tell with the KP bios; 8-pin 1 & 3 read identically.
Now with the HOF bios they all read correctly.


----------



## ManniX-ITA

Toopy said:


> Lamenting buying this zotac pos, first custom pcb card I have ever bought and could possibly be the last.


Really, you can't judge AIB cards by Zotac.
It's a brand that lives in its own league of bad quality 

Happy anyway that the HOF bios fixed it.
Still testing it here; in benchmarks it's worse than the original Strix OC.


----------



## Toopy

ManniX-ITA said:


> Really, you can't judge AIB cards from Zotac.
> It's a brand that lives in its own league of bad quality
> 
> Happy anyway that the HOF version fixed it.
> Still testing it; in benchmarks is worse than the original Strix OC.


Zotac should be shot.
Not sure if you have seen my previous posts about this card and its unpadded memory.

[Official] NVIDIA RTX 3090 Owner's Club: "The question is whether that's real or just somebody at Zotac being dumb. I'm not sure. a 3080Ti you meant ?" (www.overclock.net)

[Official] NVIDIA RTX 3090 Owner's Club: "im still mega confused here... why are there so many bioses... so in order for me to get 500w+ PL, and rebar support i need which one?" (www.overclock.net)

----------



## ManniX-ITA

Toopy said:


> Zotac, should be shot.
> Not sure if you have seen my previous posts about this card and it's unpadded memory.


Oh my, I didn't.. 🤦‍♂️ 
They are villains.


----------



## dual109

Anyone tried the HOF LAB OC 500W bios on an MSI RTX 3090 Trio X?

I've tried the Strix 480W and the EVGA 500W, and my clocks are still bouncing all over even with a V/F curve set.

Yes, I'm on air; the card sits at around 63 degrees during intense gaming.

Thanks


----------



## Nizzen

dual109 said:


> Anyone try the HOF LAB OC 500W bios on a MSI RTX3090 Trio X ???
> 
> I've tried the Strix 480w and the EVGA 500w and my clocks are still bouncing all over even with a V/F curve set.
> 
> Yes I'm on air card sits at around 63 degrees during intense gaming.
> 
> Thanks


Have you set max power limit and +100 voltage in Afterburner?


----------



## Toopy

dual109 said:


> Anyone try the HOF LAB OC 500W bios on a MSI RTX3090 Trio X ???
> 
> I've tried the Strix 480w and the EVGA 500w and my clocks are still bouncing all over even with a V/F curve set.
> 
> Yes I'm on air card sits at around 63 degrees during intense gaming.
> 
> Thanks


Are the mem temps OK? The core will bounce if the mem hits its limits.


----------



## yzonker

Toopy said:


> So, update on this.
> Flashed the HOF 500w bios, now seeing peaks of 470W and no P/L throttling. Core clock holds steadily locked at 2070 (so far still testing)
> Forget the bit about the memory, I had my clock calculation wrong. For some reason I had in my head 1337Mhz was +1200, but its only +1000
> 
> 
> 
> I couldn't tell with the KP bios, 8pin1 & 3 read identically.
> Now with the HOF bios they all read correctly


Obviously the Galax bios is a better choice if the power readings are working better, but you don't really know how much power it pulled on the KP bios if the 1/3 8-pins were duplicated (and potentially reading low). The total power you see is based on those numbers whether they are correct or not. It probably maxed out on #2 (which may or may not have been accurate either). A good example of this is my testing of the Galax XOC on my 3080 Ti:

[Official] NVIDIA RTX 3080 Ti Owner's Club: "So I used a clamp meter to measure the amps at each 8pin connector on my FTW3 running at 450w (stock bios). Pretty interesting. Apparently the 2nd 8pin runs even hotter than we thought. See my table below. The meter nails connectors 1 and 3 (and I checked it on another vid card as well), so..." (www.overclock.net)
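The reason per-connector readings matter is simple arithmetic on connector current. A quick illustrative estimate, with assumed numbers (12 V rail, a nominal 66 W slot share, three 8-pins, even split); a real card like the FTW3 in the clamp-meter post splits unevenly, which is exactly why one connector can run hot:

```python
# Back-of-envelope per-8-pin load estimate. The 66 W slot share and the even
# split are assumptions for illustration; actual boards split load unevenly
# across the connectors, as the clamp-meter measurements above showed.
PCIE_SPEC_8PIN_W = 150   # nominal spec rating per 8-pin connector
SLOT_W = 66              # assumed 12V budget drawn through the slot

def per_connector(total_w, n_8pin=3, slot_w=SLOT_W):
    w_each = (total_w - slot_w) / n_8pin   # ideal even split across 8-pins
    amps_each = w_each / 12.0              # current at a nominal 12 V
    return w_each, amps_each

w, a = per_connector(450)
print(f"{w:.0f} W / {a:.1f} A per 8-pin if split evenly")
```

Even the ideal split at 450 W sits below the 150 W spec per connector, so any connector reading well above that share is a sign the sensing, or the load balancing, is off.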


----------



## J7SC

yzonker said:


> Obviously the Galax bios is a better choice if the power readings are working better, but you don't really know how much power it pulled on the KP bios if the 1/3 8pins were duplicated (and potentially reading low). The total power you see will be based on those #'s whether they are correct or not. It probably maxed out on #2 (which may or may have not been accurate either). A good example of this is my testing of the Galax XOC on my 3080ti.
> 
> [Official] NVIDIA RTX 3080 Ti Owner's Club: "So I used a clamp meter to measure the amps at each 8pin connector on my FTW3 running at 450w (stock bios)..." (www.overclock.net)


...that lack of accurate 8-pin power readouts is one thing I don't like about the KPE 520 r_BAR bios on my Strix. Also, it does seem to pump more power in during 'regular' desktop activities (i.e. not hi-po benching), judging by the temp differences versus the Strix bios.


----------



## yzonker

J7SC said:


> ...that lack of accurate 8pin power readouts is one thing which I don't like about the KPE 520 r_BAR bios on my Strix. Also, it does seem to pump more power in during 'regular' desktop activities (ie not hi-po benching) judging by the temp differences to the Strix bios


I assume that is down to the difference in the controllers used (NCP81610/uP9511R vs MP2888A)? In trying many different 3080 Ti BIOSes, I noticed most do not read accurately on my FTW3, except others that use the same controller, like Palit.


----------



## tps3443

Toopy said:


> Despite flashing both the 450 & the 520w KP bios’s I have not seen the power draw exceed ~415w peak.
> Lamenting buying this zotac pos, first custom pcb card I have ever bought and could possibly be the last.
> 
> The work required to fix the severely flawed thermal engineering, then rectify the crippled bios zotac installed to cover up the issue, now the seemingly hardware locked power limit ceiling.
> (Haven’t tested the HOC yet)
> All have left a bad taste, at the very least I will never buy a zotac card again.
> Yes it games well, now anyway.
> 
> On a side note has anyone ever seen the memory boosting higher than the set frequency on any 3090? I set mem clocks to +1000 but during gaming/bench it runs at 1356Mhz which is higher. Actually I think my PNY did this as well.


Why not just solder new resistors stacked on top of all the shunts? And from what I understand, you've got to do the PCIe shunt too on RTX 3000. This is what I did with all my 2080 Tis: I could pull up to 500-530 watts in Time Spy Extreme graphics, and in games like Metro. My Time Spy Extreme graphics score was literally nipping at the heels of a stock 3080 Ti FE, so this method certainly works well. You need water cooling to benefit, though, so the clocks can sustain in games and put that power down essentially forever while playing, without ever downclocking.

These are the sacrifices we must make when trying to squeeze performance from reference boards or low-power cards.

Break out the soldering iron, get a waterblock, get that thing running cool (around 40C or less, preferably), and lock in 2.1GHz or better on the core with the memory OC as high as possible in games. Plenty fast right there.
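The arithmetic behind shunt stacking is worth spelling out. A minimal sketch, with the 5 mOhm shunt value as an assumption for illustration (check your own PCB): the controller infers current from the voltage drop across a known shunt, so soldering an equal resistor on top halves the effective resistance and halves what the card *thinks* it is drawing.

```python
# Back-of-envelope for why stacking a resistor on a shunt raises the power
# limit. 5 mOhm is an assumed/illustrative shunt value, not a measured one.
def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def real_power(reported_w, r_stock, r_stacked):
    """Actual draw when the controller still assumes the stock shunt value."""
    # Sensed voltage drop scales with the effective (parallel) resistance,
    # so reported power is low by the ratio parallel/stock.
    return reported_w * r_stock / parallel(r_stock, r_stacked)

print(parallel(0.005, 0.005))          # effective shunt, ~0.0025 ohm
print(real_power(350, 0.005, 0.005))   # a reported 350 W is really ~700 W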


----------



## joshpdemesa

Falkentyne said:


> Just run the bios I posted multiple times first and see if you like the results. Keep in mind 8 pin #3 won't be able to exceed 175W with this bios due to a bug with the SRC3 power limit not being changed from default (when 8 pin #3 reaches 175W you will get a TDP throttle via TDP Normalized %). I do not know if there is a newer one since I don't even have a strix. Rename the file to .ROM or .BIN (remove the .txt at the end).
> 
> Shunt modding is always worth it (or just use the higher tdp bioses) not just for the overall 10% FPS boost (going from 350W to 550W), but also for the more consistent frametimes (clocks not jumping around as much). This was seen as early as Pascal with TDP mods.


I can confirm the bios works. Surprisingly, I'm getting around 520W board power draw with my Strix card while running Unigine Superposition. However, I'm still getting the PWR PerfCap while under load, and Resizable BAR is not present, which is understandable. PCIE 1 and 3 are only reaching the low 150s (W). I guess this is working as expected? 

BTW, when running FurMark, board power draw reached 581.3W and PCIE 2 reached 226.6W


----------



## J7SC

joshpdemesa said:


> I can confirm the bios works. Suprisingly, I'm getting around 520W board power draw with my Strix card while running Unigine Superposition. However, I'm still getting the PWR PerfCap while under load and Resizable Bar is not present which is understandable. PCIE 1 and 3 are only reaching low 150W. I guess this is working as expected?
> 
> BTW, When running furmark Board power draw reached 581.3W and PCIE 2 reached 226.6W
> View attachment 2517096


...sorry, not sure I recall which bios you used for this on your Strix. Galax HOF?


----------



## joshpdemesa

J7SC said:


> ...sorry, not sure I recall what Bios you used for this on your Strix - Galax Hof ?


I used the BIOS Falkentyne attached here. You can check my reply to his comment.


----------



## Bal3Wolf

So I learned something today: if I force ReBAR on with Profile Inspector, my CPU score drops 3000 with the 5950X, but it works fine with the 3900X.


----------



## des2k...

Bal3Wolf said:


> So i learned something today if i force rebar on with profile inspector with my 5950x cpu score drops 3000 but it works fine with the 3900x.


Well... don't expect the 3DMark CPU score to relate to anything.

For example, their new CPU benchmark is complete garbage: the 16-thread test scores 5000 points with SMT on and 8000 points with SMT off on my 3900X 🙄

The workload is supposed to replicate game load. There's no way the CPU is 50% faster in games with SMT off 😂


----------



## J7SC

joshpdemesa said:


> I used the BIOS Falkentyne attached here. you can check my reply to his comment.


Oh, that one, thanks! I already downloaded it a few months back; haven't tried it yet on my Strix though, just the 520W KPE r_BAR.


----------



## newls1

joshpdemesa said:


> I used the BIOS Falkentyne attached here. you can check my reply to his comment.


What bios did he link here? I'm curious and would love to try it; he's a smart dude...


----------



## KedarWolf

I'm selling my ASUS Strix OC RTX 3090 locally with the EKWB water block and active backplate because I have the opportunity to get an EVGA Kingpin RTX 3090 at retail.

Please don't PM me to buy it though; I'm selling my Strix for much more than retail, and only locally.


----------



## Nizzen

KedarWolf said:


> I'm selling my ASUS Strix OC RTX 3090 locally with the EKWB water block and active backplate because I have the opportunity to get an EVGA Kingpin RTX 3090 at retail.
> 
> Please don't PM me to buy it though, I'm selling my Strix much more than retail and only locally.


Then why posting this here? 🤔


----------



## KedarWolf

Nizzen said:


> Then why posting this here? 🤔


Maybe because it's news about my 3090 situation? This is the 3090 thread, right?


----------



## Nizzen

KedarWolf said:


> Maybe because it's news about my 3090 situation? This is the 3090 thread, right?


I pray that you are getting a good sample, and post some nice scores here 😘


----------



## KedarWolf

Pro-tip: if you want to increase your Time Spy and Port Royal scores on AMD, turn CPU SMT off in the BIOS. You'll get better scores (even just the graphics score in Time Spy), and CPU scores will be quite a bit better.

Intel, hyper-threading off, I dunno.


----------



## KedarWolf

Deleted


----------



## KedarWolf

des2k... said:


> If you're not power limited or temp limited you can use 2 VF points & nvidia-smi , set NV panel to max performance
> 
> C:\Windows\System32\nvidia-smi.exe -lgc 210,2175
> 
> For any given freq, only 3 voltage points are valid, then the card will need to jump to the next freq bin.
> 
> This holds, 2175 with a water block for any load (xoc vbios)
> 
> Point 1, apply first
> View attachment 2516358
> 
> 
> Point 2, apply second, needs to be 4 voltages points before max freq; this will be 1bin less on the FREQ
> View attachment 2516359


How do I remove the set clock limits and restore the default settings? The option you showed keeps my clocks at around 1920 when I run Port Royal.


----------



## des2k...

KedarWolf said:


> How do I remove the set clocks limit and put them back as default settings. The option you show when I run Port Royal keeps my clocks like at 1920.


Default is nvidia-smi.exe -rgc
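The pair of commands being discussed, `-lgc` to lock the GPU clock range and `-rgc` to reset it, can be wrapped in a tiny helper. This sketch only *builds* the command lines (actually running them needs admin rights and a real driver); the flags are real documented nvidia-smi options, but the helper itself is just for illustration.

```python
# Minimal helper around the two nvidia-smi calls discussed above:
#   -lgc <min>,<max>  locks the GPU clock to a range
#   -rgc              resets the lock back to default behaviour
# The sketch only constructs the argv lists; run them via subprocess if wanted.
def lock_clocks(min_mhz, max_mhz, exe="nvidia-smi"):
    return [exe, "-lgc", f"{min_mhz},{max_mhz}"]

def reset_clocks(exe="nvidia-smi"):
    return [exe, "-rgc"]

print(" ".join(lock_clocks(210, 2175)))  # the lock suggested earlier in-thread
print(" ".join(reset_clocks()))          # undo it when done benching
```

So for the question above: run the `-rgc` form and the 1920 MHz ceiling from the earlier lock goes away.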


----------



## GRABibus

Nizzen said:


> Then why posting this here? 🤔





KedarWolf said:


> Pro-tip, if you want to increase your Time Spy and Port Royal scores on AMD, turn CPU SMT off in BIOS, you'll get better scores, even just the graphics score on TimeSpy, and CPU scores will be quite a bit better.
> 
> Intel, hyper-threading off, I dunno.


For Zen3, the tip is :
SMT off
Core count=4
1 CCD disabled
Static OC


----------



## GRABibus

Nizzen said:


> Then why posting this here? 🤔


😂😂


----------



## KedarWolf

GRABibus said:


> For Zen3, the tip is :
> SMT off
> Core count=4
> 1 CCD disabled
> Static OC


How do you set Core Count in the BIOS?

Never mind, figured it out.


----------



## GRABibus

KedarWolf said:


> How do you set Core Count in the BIOS?
> 
> Never mind, figured it out.


I am not in front of my PC, but it's in the Advanced options.


----------



## InvictuSZN

KedarWolf said:


> I got an email today too, my Strix OC active backplate just got shipped today, tracking shows it in Slovakia.
> 
> Edit: Actually, it's in Italy now.


Hey there, didn't check this thread for a few days; my backplate is coming Friday. My MO-RA and all the other stuff I need for the loop is also arriving Friday and Saturday. Can't wait to put this bad boi on water haha


----------



## KedarWolf

InvictuSZN said:


> Hey there, didnt check this thread for a few days, my backplate is coming Friday. My MO-RA and all the other stuff I need for the loop is also arriving friday and saturday. Cant wait to put this bad boi on water haha


My Strix OC 3090 EKWB active backplate arrived today but I'm selling my Strix because I can get an EVGA Kingpin RTX 3090 at retail. 

I'm letting the person that buys the Strix install the backplate.


----------



## tps3443

KedarWolf said:


> My Strix OC 3090 EKWB active backplate arrived today but I'm selling my Strix because I can get an EVGA Kingpin RTX 3090 at retail.
> 
> I'm letting the person that buys the Strix install the backplate.


Was this the same in-stock alert that you were trying to pass on to other people? lol

That's awesome. Mine will be here tomorrow (Wednesday)! It's going on the KP Hydro Copper block almost immediately too (just a few days on the AIO). I'm hoping to get some good numbers with it. 

Also, why not just sell your Strix for only what you have in it? You said you found a KP for MSRP, so why scalp someone on the Strix? I don't get it.

To each his own, of course; I'm just curious is all.


----------



## KedarWolf

tps3443 said:


> So you decided to finally go for the 3090KP then?
> 
> That’s awesome. Mine will be here tomorrow! (Wednesday) Its going on the KP hydro copper block almost immediately after too. I’m hoping to get some good numbers with it.
> 
> Also, why not just sell your Strix to someone for what you have in it only? I mean, you said that you found a KP for MSRP, so why scalp someone on the strix? You found a KP for MSRP so why bother?
> 
> 
> I mean I let my 2080Ti’s go for $800-$815 bucks each (Once I bought the 3090KP). Supporting scalping just seems wrong I suppose.
> 
> So expressing an opinion is all.


I'm just selling it for enough to get the Kingpin and a waterblock and backplate for it. And my card has a water block AND a water-cooled active backplate, so I'm not scalping at all.


----------



## KedarWolf

Deleted, wrong thread.


----------



## tps3443

KedarWolf said:


> I'm just selling it for enough to get the Kingpin and a waterblock and backplate for it. And my card has a water block AND a water-cooled active backplate, so I'm not scalping at all.


Ok, well that’s cool. Strix 3090 is super popular, so they go quick too. I’d say just put the air cooler back on it for quicker sale.


----------



## tps3443

Gonna post an unboxing and initial impressions once my card arrives. I'm gonna be running this 3090 KP for a good lonngggg while. Hopefully two years from now, easy.


----------



## KedarWolf

tps3443 said:


> Ok, well that’s cool. Strix 3090 is super popular, so they go quick too. I’d say just put the air cooler back on it for quicker sale.


I stripped the heads of a few screws on the stock cooler, so can't properly put it back on. 

Plus I live in a city of five million people and priced it decently low, it'll sell with the water stuff on it.


----------



## wtf_apples

J7SC said:


> As to thermal putty (as opposed to pads and grease), I recently got two jars of the listed DigiKey putty (10.0 W/mk) below, after another user here recommended it. I really do like thermal putty for its flexibility, after years of chasing the right size for a given GPU (+ tolerances). The supplier below delivered to me in W.Canada swiftly and w/o any drama...


Been really busy, but I've finally ordered some TG-PP10-30-ND.
Can I use KPx on the die and TG-PP10-30-ND on the rest of the components?
Thanks


----------



## Falkentyne

wtf_apples said:


> Been really busy but ive finally ordered some TG-PP10-30-ND‎.
> Can I use kpx on the die and TG-PP10-30-ND‎ on the rest of the components?
> Thanks


The 50g one is cheaper. Don't ask me why and don't tell Digikey. 

TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey: "Order today, ships today. TG-PP10-50 – Thermal Silicone Putty 50 gram Container from t-Global Technology. Pricing and Availability on millions of electronic components from Digi-Key Electronics." (www.digikey.com)


----------



## joshpdemesa

wtf_apples said:


> Been really busy but ive finally ordered some TG-PP10-30-ND‎.
> Can I use kpx on the die and TG-PP10-30-ND‎ on the rest of the components?
> Thanks


I did this on my Strix 3090 with Bykski active backplate. Kingpin KPx on the die then 10W/mK putty on everything else. Memory junction max temp is 60°C at 26°C ambient.


----------



## mattxx88

joshpdemesa said:


> I did this on my Strix 3090 with Bykski active backplate. Kingpin KPx on the die then 10W/mK putty on everything else. Memory junction max temp is 60°C at 26°C ambient.


is that paste easy to remove/clean?


----------



## OrionBG

Falkentyne said:


> The 50g one is cheaper. Don't ask me why and don't tell Digikey.
> 
> TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey: "Order today, ships today. TG-PP10-50 – Thermal Silicone Putty 50 gram Container from t-Global Technology." (www.digikey.com)


Let's say I don't have access to those specific pastes... Will something like Thermal Grizzly Hydronaut work for things other than the GPU chip itself?
Its thermal conductivity is 11.5W/mK, but I don't know how thick a layer it can bridge...
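The thickness question is the right one: conductive thermal resistance is R = t / (k·A), so the bond-line thickness matters as much as the W/mK number. A rough illustration with assumed numbers (a 14x12 mm memory package, a 1.5 mm pad at 8 W/mK versus a 0.3 mm putty fill at 10 W/mK; none of these are measurements from the thread):

```python
# Conductive thermal resistance R = t / (k * A) for a VRAM interface layer.
# All numbers are illustrative assumptions: package footprint 14x12 mm,
# 1.5 mm pad at 8 W/mK vs a 0.3 mm putty fill at 10 W/mK.
def r_thermal(thickness_m, k_w_per_mk, area_m2):
    return thickness_m / (k_w_per_mk * area_m2)

area = 0.014 * 0.012                     # m^2, one memory package
r_pad = r_thermal(1.5e-3, 8.0, area)     # K/W through the thick pad
r_putty = r_thermal(0.3e-3, 10.0, area)  # K/W through the thin putty fill
print(f"pad: {r_pad:.2f} K/W, putty: {r_putty:.2f} K/W")
```

The thin layer wins by several times despite the similar W/mK, which is also why a runny paste like Hydronaut only helps where the gap is genuinely small; it cannot bridge a pad-sized gap at all.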


----------



## joshpdemesa

mattxx88 said:


> is that paste easy to remove/clean?


Kingpin KPx paste? Yes, it's as easy as any other thermal paste. For the putty, I just used a prying/scraping tool to remove the chunks (you can also wipe them off), then alcohol and a soft-bristle toothbrush to remove the excess. Removing a thermal pad is much easier, of course, but the temperature difference makes it worth the hassle: from 100°C+ memory temp to 60°C is a huge drop IMO.


----------



## mattxx88

joshpdemesa said:


> Kingpin KPx paste? Yes it's as easy as any other thermal paste. For the putty, I just used a prying/scraping tool to remove the chunks (you can also wipe them off). Then alcohol and a soft bristle toothbrush to remove the excess. removing a thermal pad is much easier of course. But the temperature difference makes it worth the hassle. From 100°C+ memory temp to 60°C is a huge drop IMO.


I meant the putty, yes. Thanks for sharing. 
I'll give it a try, 'cause my 3090 FE doesn't like pads that much.


----------



## J7SC

Falkentyne said:


> The 50g one is cheaper. Don't ask me why and don't tell Digikey.
> 
> TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey: "Order today, ships today. TG-PP10-50 – Thermal Silicone Putty 50 gram Container from t-Global Technology." (www.digikey.com)


...maybe they operate on a 'volume discount'? Anyway, I got some TG-PP10 50g from them in late May and really like the stuff for all my GPUs and their VRAM needs, especially as tolerances from GPU card manufacturers and w-block manufacturers can and do vary.

...as for thermal paste on the 3090 die, I would recommend a 'thicker' paste if you mount your GPU vertically. I used TG Kryonaut on my Strix before and it worked fine for a while, but I noticed it had started to run out a bit (though in terms of initial cooling performance, Kryonaut is great). It doesn't help that Ampere dies tend to be 'not entirely flat', never mind the raised writing on the die. I ended up switching the die paste to the thicker Gelid GC-Extreme, and I use the TG-PP10 on the VRAM etc.


----------



## tps3443

wtf_apples said:


> Been really busy but ive finally ordered some TG-PP10-30-ND‎.
> Can I use kpx on the die and TG-PP10-30-ND‎ on the rest of the components?
> Thanks


What is this used for? The thermal specs of that stuff are really amazing! Would it be worth ordering it and just coating all the heat-producing components on the card with it? Backplate, etc.


----------



## tps3443

joshpdemesa said:


> Kingpin KPx paste? Yes it's as easy as any other thermal paste. For the putty, I just used a prying/scraping tool to remove the chunks (you can also wipe them off). Then alcohol and a soft bristle toothbrush to remove the excess. removing a thermal pad is much easier of course. But the temperature difference makes it worth the hassle. From 100°C+ memory temp to 60°C is a huge drop IMO.



This sounds pretty good. I have done similar things before, only using K5 Pro on everything lol, with a quality thermal paste like TFX or KPx for the die.

So you think the performance difference would make it worth removing all of the thermal pads on my Kingpin and using this 10W/mK putty instead? From what I have read, the KP uses Fujipoly pads on the Hydro Copper block. Not sure what's under the backplate, but they look high quality. 

I'm after the lowest possible temps, of course, and doing all the research I can, so when I go to install this Kingpin Hydro Copper block I can have it set up the best possible way, or close to it, on the first go. 

My 3090 KP will be here tomorrow, and the waterblock a little after that.

I'm really not sure what route I should take.


----------



## des2k...

"I can have it setup the best possible way or close to it, on the first go. "
oh yes, that was my thinking with my block.... I lost track of how many remounts and wasted paste/pads lol


----------



## Lord of meat

Anyone tried this on an MSI Trio?

GALAX RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## Nizzen

Lord of meat said:


> any tried this on msi trio?
> 
> GALAX RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Why ask? It takes seconds to flash. It's actually faster to try it yourself than to wait for an answer LOL

I tried that bios on my 3090 HOF, and it works just fine


----------



## Lord of meat

Nizzen said:


> Why ask? It takes seconds to flash. It's actual faster to try for yourself, than wait for an answer LOL
> 
> I tried that bios on my 3090 HOF, and it works just fine


My only concern is bricking the card.
Edit: I have an XOC 520W flashed on it currently.


----------



## GRABibus

Lord of meat said:


> any tried this on msi trio?
> 
> GALAX RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Also great on my Strix, at least for gaming.
Didn't try any benchmarks with it.


----------



## Lord of meat

GRABibus said:


> Also great on my Strix, at least for gaming.
> Didn't try any benchmarks with it.


If you dont mind telling, what is your clock speed and temp with this one?
Thanks.


----------



## Nizzen

Lord of meat said:


> My only concern is that i brick the card.
> Edit: i have a xoc 520w flashed on it currently.


"Impossible" to brick the card with nvflash...

I've flashed maybe 100+ times on my 30-series cards with all kinds of BIOSes, official and unofficial.


----------



## Lord of meat

Nizzen said:


> "Impossible" du brick the card with nvflash...
> 
> I flashed maybe 100+ times on my 30 series cards with all kinds of bioses. Official and not official bioses


I managed to brick it once and had my 980 Ti that I used to fix it. But now I don't have that 980 Ti.


----------



## GRABibus

Lord of meat said:


> If you dont mind telling, what is your clock speed and temp with this one?
> Thanks.


Depends on the game.

Cold War at 24°C ambient:
2160MHz on Core @ 1.1V
10752MHz on memory
[email protected]°C max
[email protected]°C max

Here is my curve:

(curve screenshot, attachment 2517311)
For more demanding games such as BFV, I undervolt my GPU


----------



## Lord of meat

GRABibus said:


> Depends on the game.
> 
> Cold War at 24°c ambient :
> 2160MHz on Core
> 10752MHz on memory
> [email protected]°C max
> [email protected]°C max
> 
> Here is my curve :
> 
> View attachment 2517311


Sweet! I'll try to flash it on my Trio and see what happens, fingers crossed.


----------



## GRABibus

Lord of meat said:


> Sweet! ill try to flash it on my trio and see what happens, fingers crossed.


Note that I still have the stock air cooler, repasted with Conductonaut.
I also repadded the memory chips with Thermalright Odyssey pads.


----------



## tps3443

Nizzen said:


> "Impossible" du brick the card with nvflash...
> 
> I flashed maybe 100+ times on my 30 series cards with all kinds of bioses. Official and not official bioses


Yeah, I did the same with my 2080 Ti(s), always flashing the BIOS 3-4 times a day. That's why I went 3090 KP this go around lol. I am gonna request that XOC 1KW KP BIOS right away. And void that warranty as soon as possible!!! lol. JK. Probably just roll with the standard BIOS.

You can still brick a card, though. But you can blind flash them back fairly easily with no monitor signal. I have recovered several cards, and never lost one for good. (Been flashing GPUs since 2004.)

I have seen cards flash to any one of 3 BIOSes and work just fine on all of them, but also brick and lose monitor signal when flashing between those same 3 BIOSes in a bad or different order (not sure why). This happened with all of my A-bin 2080 Tis. I think
it was flashing from the KP XOC 520W BIOS to the Galax XOC 2KW BIOS, or the other way around.


----------



## Lord of meat

GRABibus said:


> Note that I have still the stock air cooler, repasted with conductonaut.
> I also repaded memory chips with THERMALRIGHT Odyssey


Just flashed it. Got monitor signal 😀.
I've got an EK cooler on, with TF8 paste and Odyssey pads, so I'll start testing it and report back.


----------



## yzonker

tps3443 said:


> Yeah I did the same with my 2080Ti(s) always flashing bios 3-4 times a day. That’s why I went 3090KP this go around lol. I am gonna request for that XOC 1KW KP bios right away. And void that warranty as soon as possible!!! lol. JK. Probably just role with the standard bios.
> 
> You can still brick a card though. But you can blind flash them back fairly easily with no monitor or monitor signal. I have recovered several cards, and never lost one for good. (Been flashing GPU’s since 2004)
> 
> I have seen cards flash to any 1 of 3 bios and work just fine on any of those 3 bios. But, I have seen cards brick and lose monitor signal, when flashing to one of those same 3 bios in a bad or different order.(Not sure why) this happen with all of my A bin 2080Ti’s. I think
> it was flashing from the KP XOC 520watt bios to the Galax XOC 2KW bios or the other way around.


There was a guy on the EVGA forum who claimed they let him RMA anyway.


----------



## yzonker

des2k... said:


> "I can have it setup the best possible way or close to it, on the first go. "
> oh yes, that was my thinking with my block.... I lost track of how many remounts and wasted paste/pads lol


It's a friggin addiction. Now I have a 2nd D5 on the way for delivery tomorrow. Lol


----------



## tps3443

des2k... said:


> "I can have it setup the best possible way or close to it, on the first go. "
> oh yes, that was my thinking with my block.... I lost track of how many remounts and wasted paste/pads lol


Been there, done that lol. But the average 2080 Ti owner wasn't as well informed as the average 3090 owner, so you can learn a lot just by reading these forums about 3090s. Especially from the enthusiast or extreme RTX 3090 Kingpin owners, the guys who are testing multiple re-mounts on the block and backplate. They post their results and temp reductions too (you can see which thermal pad thickness and type did what, and where).

Go look at 2080 Ti owners in late 2018 or early 2019. They would post screenshots saying their 80°C air-cooled 2080 Ti FE was a silicon lottery winner lol.


----------



## tps3443

yzonker said:


> It's a friggin addiction. Now I have a 2nd D5 on the way for delivery tomorrow. Lol


I'll tell you what's weird about my D5. I have an EKWB Quantum Inertia D5 pump. It has a 12V lead and a PWM/sensor wire. I have it plugged straight into a 12V source on my PSU, and I had the PWM set to 100% in the BIOS (3,579 RPM average). Well, my X299 Dark motherboard's 100% wasn't actually the full 100% that the pump was capable of.

I accidentally unplugged the PWM signal, and my pump started zooming!!! It seemed like it went from 60% to 90%, as I was seeing a vortex in my reservoir and additional tiny hidden air bubbles getting pulled from other components.

So I tried another PWM header and now 100% is 4,310 RPM full speed. Temps went down a little too! I had (3) 360mm rads, an Optimus Sig V2 CPU block, and an EKWB 2080 Ti block at the time of discovering this, so more RPM was nice.

Just a heads up to anyone running their D5 at 100%. I was running mine at a false 100% of 3,500 RPM for a year straight, thinking I needed another D5 pump.

Anyways, 4,300 RPM is SO much better. Although, as the addiction does impact everyone... I want another D5 pump too... lol. Just because, of course!! Every degree counts!!
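For anyone curious, here's a rough sketch of what a capped PWM signal costs you. It assumes a simple linear duty-to-RPM response; the 800 RPM floor is a guess, and only the 4,310 RPM full-speed figure comes from my numbers above:

```python
# Why a capped PWM command reads as a "false 100%".
# Assumes a linear duty-to-RPM response between a guessed 800 RPM
# floor and the 4,310 RPM true full speed; real D5 curves are not
# perfectly linear.

def d5_rpm(duty_pct, rpm_min=800, rpm_max=4310):
    """Estimate pump RPM for a given PWM duty cycle (0-100%)."""
    duty = max(0.0, min(100.0, duty_pct)) / 100.0
    return rpm_min + duty * (rpm_max - rpm_min)

# A header that tops out around 79% duty while reporting "100%"
# delivers roughly the ~3,500 RPM I was seeing, not true full speed:
print(f"capped: {d5_rpm(79):.0f} RPM, true full: {d5_rpm(100):.0f} RPM")
```

So a board whose "100%" is really ~79% duty quietly leaves several hundred RPM on the table.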


----------



## gfunkernaught

Is there an rBAR BIOS other than the 520W and 500W EVGA ones? I'm curious to see if the power readings change and if they affect when the limit is hit.


----------



## yzonker

tps3443 said:


> I tell you what’s weird about my D5. So I have a EKWB Quantum inertia D5 pump. It has 12V and PWM sensor wire. So I have it plugged straight in to a 12V source on my PSU, and I have the PWM set to 100% in the bios (3,579 RPM average). Well, my X299 Dark motherboard’s 100%, wasn’t actually the full 100% that the pump was capable of.
> 
> I accidentally unplugged the PWM signal, and my pump started zooming!!! Like it seemed like it went from 60% to 90%. As I was seeing a vortex in my reservoir, and additional tiny hidden air bubbles getting pulled from other components.
> 
> So I tried another PWM and now this 100% is 4,310RPM full speed. Temps went down a little too! I have (3) 360mm rads, Optimus Sig v2 cpu block, and a ekwb 2080Ti block at the time of discovering this. So, more rpm was nice.
> 
> Just a heads up to anyone running their D5 at 100%. I was running mine at a (false 100% of 3,500 RPM) for a year straight, thinking I need another D5 pump.
> 
> Anyways, 4,300RPM is so MUCH better. Although, as the addiction does impact everyone... I want another D5 pump too... lol. Just because of course!! Every degree counts!!


Full speed on my EK D5 is about 4600-4700.


----------



## satinghostrider

Personally I find the Suprim X BIOS for the 3090 Gaming X Trio quite good. Those mentioning XOC or other BIOSes, do you even get a proper readout of the power draw from each PCIe rail? I recall there are some values you can't monitor with those 520W BIOSes. But then again, I've only tried the 450W Suprim X BIOS. I'm also interested to know who's got better results and stability with a higher-limit BIOS for the 3090 Gaming X Trio.


----------



## mirkendargen

tps3443 said:


> I tell you what’s weird about my D5. So I have a EKWB Quantum inertia D5 pump. It has 12V and PWM sensor wire. So I have it plugged straight in to a 12V source on my PSU, and I have the PWM set to 100% in the bios (3,579 RPM average). Well, my X299 Dark motherboard’s 100%, wasn’t actually the full 100% that the pump was capable of.
> 
> I accidentally unplugged the PWM signal, and my pump started zooming!!! Like it seemed like it went from 60% to 90%. As I was seeing a vortex in my reservoir, and additional tiny hidden air bubbles getting pulled from other components.
> 
> So I tried another PWM and now this 100% is 4,310RPM full speed. Temps went down a little too! I have (3) 360mm rads, Optimus Sig v2 cpu block, and a ekwb 2080Ti block at the time of discovering this. So, more rpm was nice.
> 
> Just a heads up to anyone running their D5 at 100%. I was running mine at a (false 100% of 3,500 RPM) for a year straight, thinking I need another D5 pump.
> 
> Anyways, 4,300RPM is so MUCH better. Although, as the addiction does impact everyone... I want another D5 pump too... lol. Just because of course!! Every degree counts!!


I've heard this is a known issue with PWM D5s (the issue may actually be that a lot of mobo makers don't follow the PWM spec), and it's better to just get the ones with the dial to set and forget. I don't remember if it was every PWM D5 or just some.


----------



## sultanofswing

yzonker said:


> Full speed on my EK D5 is about 4600-4700.


All my EVGA X299 Boards do the same with a D5 pump.


----------



## J7SC

I have some newer D5s (XSPC) which have both the Molex and the PWM connector, as well as the red adjustment screw on the back. I never connected the PWM and just run them like all my other D5s (mostly Swiftech): set the adjustment on the back and forget about it... some of the older Swiftech D5s have run continuously like that as far back as 2012...


----------



## Lord of meat

GRABibus said:


> Depends on the game.
> 
> Cold War at 24°c ambient :
> 2160MHz on Core @ 1.1V
> 10752MHz on memory
> [email protected]°C max
> [email protected]°C max
> 
> Here is my curve :
> 
> View attachment 2517311
> 
> 
> For more demanding games as BFV, I undervolt my GPU


Are you drawing 500W? I can't seem to get it to draw more than 478W.


----------



## joshpdemesa

tps3443 said:


> This sounds pretty good, and I have done similar things before only using K5 Pro on every thing lol, and a quality thermal paste like TFX or Kpx for the die.
> 
> So you think the performance difference would be worth it to, remove all of the thermal pads on my Kingpin and use this 10k/w thermal spread filler stuff? From what I have read, the KP uses fujipoly pads on the hydro copper block. Not sure about what’s under the back plate. But they look high quality.
> 
> I’m after lowest possible temps of course. Doing all the research I can , so when I go to install this Kingpin Hydro copper block I can have it setup the best possible way or close to it, on the first go.
> 
> My 3090kp will be here tomorrow, and waterblock a little after that.
> 
> Im really not sure what route I should take.


That I'm not sure about, since I didn't test the putty against thermal pads. I'm afraid you're gonna have to test this yourself, or you can watch the video from Frame Chasers if you really want to mount your block in one go. Great card btw.


----------



## Dmitry Vinogradov

jura11 said:


> Hi there
> 
> Here is the BIOS which I'm using on my Palit RTX 3090 GamingPro
> 
>
> KFA2 RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com
>
> For flash procedure please use this guide
>
> How To Flash A Different BIOS On Your 1080 Ti.
> Important note: After flashing your BIOS or reinstalling your video card driver unzip the file below and right click it, Edit it using NotePad, change the '-pl 3**' to your max TDP on your card, Save As the file, choose 'All Files', not '.txt', save the powerlimit.bat file, than right click it...
> www.overclock.net
>
> Hope this helps
> 
> Thanks, Jura


Dear Jura,

Could you please tell me if you currently have Resizable BAR enabled with your 390W KFA2 BIOS on the Palit reference-design card?

Thanks,

Best regards, 
Dmitry.


----------



## GRABibus

Lord of meat said:


> are you drawing 500w? i cant seem to get it to draw more than 478w.


Sorry, I've currently gone back to the EVGA RTX 3090 FTW3 ULTRA XOC 500W BIOS, version 94.02.42.80.27.


----------



## mattxx88

joshpdemesa said:


> Kingpin KPx paste? Yes it's as easy as any other thermal paste. For the putty, I just used a prying/scraping tool to remove the chunks (you can also wipe them off). Then alcohol and a soft bristle toothbrush to remove the excess. removing a thermal pad is much easier of course. But the temperature difference makes it worth the hassle. From 100°C+ memory temp to 60°C is a huge drop IMO.


Let me ask another question please: is this putty also somewhat adhesive?
I was wondering about using it to stick a heatsink on the backside of my card.


----------



## Wihglah

mattxx88 said:


> let me another question please, this putty is also somewhat adesive?
> i was wondering to use it to stick an heatsink on the backside of my card


It's tacky, but not adhesive.


----------



## gfunkernaught

satinghostrider said:


> Personally I find the Suprim X Bios for the 3090 Gaming X Trio quite good. Those mentioning XOC or other bioses, do you even get the proper readout of the voltage draw from each PCIE pin? Cause I recalled some of these values you can't monitor with those 520W bioses. But then again, I've only tried the 450W Suprim X Bios. I'm also interested to know whose got better results and stability with a higher limit bios for the 3090 Gaming X Trio.


I haven't tried the Suprim X rBAR BIOS on my Trio, but I have tried the non-rBAR one and wasn't pleased. So far, excluding the XOC 1KW, the 520W rBAR BIOS performs the best on my Trio. The power limit gets smacked a lot in heavier games such as Cyberpunk, lowering the clocks down to an average of about 2055MHz. But performance is great.


----------



## SoldierRBT

Memory on the 1000W BIOS idles at stock speed. Anyone know how to disable that? I've tried turning off CUDA - Force P2 state in NVIDIA Inspector, but it doesn't work. -502 on the memory is an option, but it's kinda annoying to select profiles when I start a game.


----------



## ManniX-ITA

SoldierRBT said:


> Memory on the 1000W BIOS idles at stock speed. Anyone knows how to disable that? I've tried turning off CUDA - Force P2 state in NVidia inspector but doesn't work. -502 in the memory is an option but kinda annoying to select profiles when I start a game.


I've never seen a solution, it's just like that AFAIK.
But in Afterburner you can define a 2D and a 3D profile in the settings.
Not the same, but better than nothing.


----------



## Lord of meat

gfunkernaught said:


> I haven't tried the suprim x rbar bios on my trio, but have tried the non rbar and wasn't pleased. So far, in terms of non xoc 1kw, the 520w rbar bios on my trio perform the best. Power limit gets smacked alot in heavier games such as cyberpunk, lowering the clocks down to an average of about 2055mhz. But perform is great.


What is your memory OC? It seems that the core speed drops when I set memory higher than +750, and it crashes if memory is +1000 and the core hits 2100.


----------



## gfunkernaught

SoldierRBT said:


> Memory on the 1000W BIOS idles at stock speed. Anyone knows how to disable that? I've tried turning off CUDA - Force P2 state in NVidia inspector but doesn't work. -502 in the memory is an option but kinda annoying to select profiles when I start a game.


Set the mem offset to -550 or something (I forget the exact number), but all the way to the left, and save a profile. Use keyboard shortcuts to load that profile when idle or browsing. Also save a profile for your OC and use the keyboard shortcuts to load either one.


----------



## tps3443

RTX 3090 KINGPIN out for delivery!!!


----------



## des2k...

gfunkernaught said:


> Set the mem offset to -550 or something I forget the exact number, but all the way to the left and save a profile. Use keyboard shortcuts to load that profile when idle or browsing. Also save a profile for your OC. Use the keyboard shortcuts to load either profile.


Not sure why anyone would do this; full-speed memory with no load = not much heat.
Also, NVIDIA cards are terrible at system latency when the memory drops to low frequency with a high-res / high-refresh monitor.

You're better off leaving it at full speed and ignoring the ~5W of power saving you would get.


----------



## gfunkernaught

des2k... said:


> not sure why anyone would do this, full speed memory with no load = not much heat
> also NVIDIA cards are terrible at system latency when memory goes low freq with high res / high refresh monitor
> 
> you're better off leaving it full speed and ignore the 5w of power saving you would get


Why would there be 2d speeds or low power clock profiles if it didn't really matter?🤔


----------



## gfunkernaught

Lord of meat said:


> What is your memory oc? It seems that the core speed drops when i set memory higher than 750 and crash if memory is +1000 and core hits 2100.


I set my mem to +1000 and core to +60. Core can hit as high as 2115mhz and the lowest I've seen it drop was 1980mhz. I tried +1200 yesterday and cold war started generating texture errors, as soon as I dropped to +1000 the glitches disappeared.


----------



## Lord of meat

gfunkernaught said:


> I set my mem to +1000 and core to +60. Core can hit as high as 2115mhz and the lowest I've seen it drop was 1980mhz. I tried +1200 yesterday and cold war started generating texture errors, as soon as I dropped to +1000 the glitches disappeared.


Looks like it depends on the game, but it's strange.
In Port Royal and Cyberpunk on that BIOS, +60 gets me 2100-2070 and the memory can do +850.
In War Thunder and Red Dead I get 2100-2045, but memory needs to be +750.
My GPU temps float at 46-48°C on core and 60-63°C on mem.
Might be drivers, or just bad/cheap memory chips on my card.
If you've got any advice or something I can test, I would appreciate it.


----------



## jura11

Dmitry Vinogradov said:


> Dear Jura,
> 
> Could you please tell me if right now you have resizable bar enabled for your 390w KFA2 bios on PALIT Reference design card?
> 
> Thanks,
> 
> Best regards,
> Dmitry.



Hi Dmitry

I'm now using the XOC 1000W BIOS on my Palit RTX 3090 GamingPro, which I wouldn't run on the stock cooler at all; for that I would at least recommend an AIO or a water cooling loop.

You can try this BIOS, which is the latest one uploaded to TechPowerUp:

KFA2 RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com

GALAX RTX 3090 VBIOS

If you already have their BIOS flashed you can use their ReBAR tool:

Resizable bar BIOS update
Galaxy Microsystems Ltd.
www.kfa2.com

Hope this helps

Thanks, Jura


----------



## gfunkernaught

Lord of meat said:


> Looks like it depends on the game, but its strange.
> In port royal and cyberpunk on that bios +60 gets me to 2100-2070 and the memory can do 850.
> In war thunder and red dead i get 2100-2045 but memory needs to be 750.
> My gpu temp float 46-48c on core and 60-63c on mem.
> Might be drivers or just bad/cheap memory chips on my card.
> If you got any advice or something i can test j would appeiciate it.


Could be that your temps are too high. My core temp is around 40°C and VRAM stays at 58°C while playing those heavier types of games.

How are you cooling the GPU? If you're water cooled, what is your loop like?


----------



## Lord of meat

gfunkernaught said:


> Could be that your temps are too high. My core temp is around 40c and vram stays at 58c while playing those heavier types of games.
> 
> How are you cooling the GPU? If your water cooled what is your loop like?


Those temps are usually in Port Royal when I hammer on it. In regular use it's 40-43°C.
EKWB block with a D5 and a 360 rad.


----------



## gfunkernaught

Lord of meat said:


> Those temp are usually in port royal when i hammer on it. In regular use its 40-43.
> Ekwb with 360 rad.


Port Royal will use more than 500W, and 40°C is what I get during Port Royal.

"My gpu temp float 46-48c on core and 60-63c on mem." Was that during PR or regular use? Cyberpunk can push the card harder than PR.


----------



## Lord of meat

gfunkernaught said:


> Port Royal will use more than 500w, and 40c is what I get during Port Royal.
> 
> "My gpu temp float 46-48c on core and 60-63c on mem." That during PR or regular use? Cyberpunk can push the card harder than PR.


I can check again when I get home; from what I recall Cyberpunk didn't push that hard, but I will double check.
Thanks!


----------



## Dmitry Vinogradov

jura11 said:


> I'm using on my Palit RTX 3090 GamingPro now XOC 1000W BIOS which I wouldn't run on stock cooler at all, for that I would ad least recommend AIO or water cooling loop


Dear Jura,

Now I am very interested in such a BIOS.

Could you please answer a few of my questions:
Was it worth it for a reference 2x8-pin card?
What is your 3090's power draw on this BIOS?
Is the full name of the card that came stock with the 1000W BIOS the 3090 KINGPIN Hydro Copper XOC?

Sorry if these questions were already covered in this thread.

I've got the following setup in terms of cooling capability (5x140mm of rad surface):

(setup screenshot attached)

Thanks,
Dmitry.


----------



## gfunkernaught

Lord of meat said:


> I can check again when i get home, from what i recall cyberpunk didnt push that hard but i will double check.
> Thanks!


Right, I forget that not everyone uses the same settings and monitor.

I recently started playing Cyberpunk at 4096x2160 and the game seems to keep everything scaled properly. Usually that res will overscan horizontally unless the app or game supports the 256:135 aspect ratio. I figured, DLSS renders at a percentage of the current res, right? Why not give it more pixels?
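Back-of-envelope on what DLSS is actually rendering internally at that res. The scale factors below are the commonly cited per-mode approximations, not anything confirmed in this thread:

```python
# What DLSS renders internally at a given output resolution.
# Scale factors are the commonly cited per-mode approximations
# (assumed here, not confirmed in-game).

DLSS_SCALE = {"quality": 2 / 3, "balanced": 0.58, "performance": 0.5}

def render_res(out_w, out_h, mode):
    """Internal render resolution before the DLSS upscale."""
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

# 4096x2160 gives DLSS a wider internal render than standard 4K:
print(render_res(4096, 2160, "quality"))
print(render_res(3840, 2160, "quality"))
```

So at the same quality mode, the 4096-wide output does hand DLSS a few hundred thousand extra input pixels to work from.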


----------



## tps3443

Yay!!!! 

About to run it in real good!!


----------



## jura11

Dmitry Vinogradov said:


> Dear Jura,
> 
> Now i am very interested in such bios
> 
> Could you please answer a few of my questions:
> Was it worth for reference 2*8pin card ?
> What is your 3090 power draw on this bios?
> Full name of card that came stock with 1000w bios is 3090 KINGPIN Hydro Copper XOC?
> 
> Sory if these questions were already covered in this thread.
> 
> I've got following setup in terms of cooling capabilities (5*140mm rad surface):
> View attachment 2517427
> 
> 
> Thanks,
> Dmirty.



Hi Dmitry

I assume you have a reference-PCB GPU and not a Founders Edition GPU? If you have a Founders Edition GPU you can't flash any of the above-mentioned BIOSes on it. Don't flash a 3x8-pin GPU BIOS on a 2x8-pin GPU, you will have lower performance than with the stock BIOS; the only 3x8-pin BIOSes which work on a 2x8-pin GPU are the XOC 1000W one and the Strix one, though I'm not sure if the latter is better or worse, I haven't tested that BIOS yet.

Is it worth it on a 2x8-pin GPU? In my case I would say yes: I'm not hitting the power limit in any games, which is nice, and temperatures on my loop are the same as with the 390W BIOS, but only because I'm running 4x360mm radiators plus a MO-RA3 360. The max power draw I have ever seen is 600-620W.

Like you, I'm running Bykski waterblocks on both of my RTX 3090 GamingPros with no temperature issues, and I'm running the XOC 1000W BIOS on both GPUs. For normal gaming I recommend you set the power limit to 75-85% max; for rendering or mining you need to enable the Force P2 CUDA state. The VRAM won't downclock at idle on this BIOS, so you will need to create a profile in MSI AB with -502MHz on the VRAM.

Link to the BIOS is here:

EVGA RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com

Other than that, be sure you have good airflow in the case and that you are using quality cables, and check your power cables from time to time, because you will be pulling well above 500W.

Hope this helps

Thanks, Jura


----------



## Dmitry Vinogradov

jura11 said:


> Hi Dmitry
> 
> I assume you have reference PCB GPU and not Founders Edition GPU? If you have Founders Edition GPU you can't flash any above mentioned BIOS on your GPU, don't flash 3*8-pin GPU BIOS on 2*8-pin GPU, you will have lower performance than with stock BIOS, only 3*8-pin GPU BIOS which works on 2*8-pin GPU is XOC 1000W BIOS and Strix one but I'm not sure if its better or worse, didn't tested yet that BIOS
> 
> If its worth it on 2*8-pin GPU, I would say in my case is worth it, I'm not hitting power limit in any games which is nice, temperatures are same on my loop like with 390W BIOS just because I'm running 4*360mm radiators plus MO-ra3 360mm and power draw max what I have ever seen is in 600-620W as max
> 
> Running as you are running Bykski waterblocks on both of my RTX 3090 GamingPro's and no issues with temperatures and I'm running on both GPUs XOC 1000W BIOS, for normal gaming I recommend you set power limit to 75-85% as max, for rendering or mining you need to enable Force P2 CUDA state because GPU will stuck at idle speeds, VRAM won't downclock during the idle, you will need to create profile in MSI AB with - 502MHz for VRAM
> 
> Link on BIOS is here
>
> EVGA RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com
>
> Other thah that, be sure you have good airflow in case and you are using quality cables and check sometimes power cables because you will be pulling well above 500W
> 
> Hope this helps
> 
> Thanks, Jura


Dear Jura,

Many thanks for such a detailed review and the links to the BIOSes, I will have to think that through.

I am running a 2x8-pin reference board, pulled from an HP Omen.
Maybe I will stick with 390W as the more conservative solution; I'm a little bit afraid of such power draw (600-620W peak) through 2x8-pin on the 1000W BIOS.

Thanks,
Dmitry.


----------



## satinghostrider

gfunkernaught said:


> I haven't tried the suprim x rbar bios on my trio, but have tried the non rbar and wasn't pleased. So far, in terms of non xoc 1kw, the 520w rbar bios on my trio perform the best. Power limit gets smacked alot in heavier games such as cyberpunk, lowering the clocks down to an average of about 2055mhz. But perform is great.


What temps are you getting on the 520W rebar bios if you don't mind sharing mate?


----------



## gfunkernaught

satinghostrider said:


> What temps are you getting on the 520W rebar bios if you don't mind sharing mate?


Don't mind at all. The temp delta between the water and the GPU is about 10°C at 500W, and it increases as power usage goes up, probably because the water block can't keep up with the increased heat load. I think it was on this forum that I read about the number of fins: more fins = more heat absorption. Makes sense. Around this time of year (summer), even with the A/C keeping the room at about 22°C, the GPU core will hit 40°C while gaming at 500W and the VRAM gets to 58°C. The 500W and 520W BIOSes, rBAR and non-rBAR, have always been limited to really 480-497W on my Trio; something to do with the way power is read, it just doesn't work as it would on an actual EVGA card. The 1KW BIOS, however, when I set the power limit to 50%, triggers the power limit at around 500-510W.
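That 10°C at 500W can be treated as a rough thermal resistance to predict the delta at other power levels. A sketch, assuming the delta scales linearly with power (only approximately true for a real block):

```python
# Treat the observed water-to-GPU delta (about 10 C at 500 W) as a
# thermal resistance and extrapolate. Assumes the delta scales
# linearly with power, which is only approximately true.

R_BLOCK = 10.0 / 500.0  # degC per watt, from the observation above

def gpu_temp(water_temp_c, power_w, r=R_BLOCK):
    """Estimate core temp from water temp and board power."""
    return water_temp_c + power_w * r

print(gpu_temp(30.0, 500))  # water at 30 C -> core around 40 C
print(gpu_temp(30.0, 620))  # delta grows past 12 C at XOC-level draw
```

It also shows why the delta creeping up at higher power isn't alarming by itself; the block's resistance just multiplies the extra watts.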


----------



## satinghostrider

gfunkernaught said:


> Don't mind at all. So the temp delta between the water and gpu is about 10c at 500w and increases as power usage goes up. Probably because the water block can't keep up with the increased heat load. I think it was on this forum that I read about the number of fins, more fins=more heat absorption. Makes sense. So typically around this time of year (summer) air is thicker so even with the A/C keeping the room temp at about 22c, the gpu core will hit 40c while gaming at 500w, vram gets to 58c. The 500w and 520w bios, rbar and non-rbar, have always been limited to really 480-497w on my Trio. Something to do with the way power is read. Just doesn't work as it should or would on an actual evga card. The 1kw bios, however, when I set the power limit to 50%, the power limit would get triggered at around 500-510w.


Thanks for the reply. I'm getting around 46 degrees load and around 60 degrees hotspot on my Trio X with the Suprim X rBAR BIOS. VRAM sits consistently at 60+ degrees. The weather in Singapore is hot as hell now, 37 degrees ambient, and those temps are with the aircon set at 18 degrees Celsius. I peak around 464W on my card, so I'm just seeing if I can push a little more on my setup with a higher-TDP BIOS.

I might try out the 1KW BIOS, but I wanna redo my pasting with better paste and more compressible pads first.


----------



## des2k...

Dmitry Vinogradov said:


> Dear Jura,
> 
> Many thanks for such detailed review and links to bioses, I will have to think that through.
> 
> I am running 2*8pin reference board, pulled from HP Omen
> Maybe i will stick with 390w as a more conservative solution, a little bit afraid of such power draw from 2*8pin (peak 600-620w) on 1000w bios.
> 
> Thanks,
> Dmitry.


You can run 600W on 2x8-pin cards.
They're all built to NVIDIA specs; at 600W the core VRM will be working near its max rating, about 40A per 50A stage.

You need a good setup:
A PSU that uses 16AWG cables
A high-end mobo, since you'll pull about 100W through the PCIe slot
At 600W of heat, your waterblock needs a near-perfect mount

I've been running my Zotac at 600W a few times. Nothing happens other than the room getting insanely hot 🙄 and GPU temps reaching 43°C in summertime.
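Quick sanity math on those numbers. This assumes roughly 100W through the PCIe slot and an even split across the two 8-pin connectors; the 150W comparison figure is the PCI-SIG rating for a single 8-pin:

```python
# Per-connector load at a given board power. Assumes about 100 W
# comes through the PCIe slot and the remainder splits evenly
# across the 8-pin connectors.

def per_8pin_load(total_w, slot_w=100, n_connectors=2):
    """Watts each 8-pin connector carries."""
    return (total_w - slot_w) / n_connectors

# At 600 W that's 250 W per 8-pin vs the 150 W spec rating; quality
# 16 AWG cabling is what makes that overshoot survivable.
print(f"{per_8pin_load(600):.0f} W per 8-pin")
```

Which is exactly why the cable quality and the mobo's slot delivery matter as much as the BIOS limit itself.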


----------



## gfunkernaught

satinghostrider said:


> Thanks for the reply. I'm getting around 46 degrees load and around 60 degrees hotspot on my Trio X with the Suprim X BIOS RB. VRAM sits consistently at 60+ degrees. Weather in Singapore is hot as hell now as 37 degrees ambient. And those temps are with Aircon set at 18 degrees celcius. I peak around 464W on my card so I'm just seeing if I can push a little more on my setup with a higher TDP bios.
> 
> I might try out the 1Kw BIOS but I wanna redo my pasting with better paste and more compressible pads.


Please be careful with the 1kw bios. What is your cooling setup btw?


----------



## J7SC

..OK, taking a look at my (growing) 3090 vBIOS folder: while I do have the 500W Galax 3090 BIOS, I don't think it is rBAR-enabled. I also checked the TechPowerUp DB, but so far no luck. Does anyone here have the Galax HOF 500W rBAR BIOS? If so, could you be so kind as to save it via GPU-Z, rename the .rom to .txt, and upload it here? Thanks


----------



## satinghostrider

gfunkernaught said:


> Please be careful with the 1kw bios. What is your cooling setup btw?


I'm using an EK Vector block for my 3090, plus Thermalright Odyssey pads all around and Thermal Grizzly paste. Cooled by 2x 360mm radiators with a D5 pump and 6x GentleTyphoon fans on the radiators.


----------



## ManniX-ITA

J7SC said:


> ..ok, taking a look at my (growing) 3090 vBios folder and while I do have the 500 W 3090 Galax, I don't think it is r_BAR enabled...I also checked the TechPowerUp DB, but so far no luck...does anyone here have the Galax HoF 500W r_BAR, and perhaps be so kind and save it via GPUz, rename .rom to .txt and upload it here ? Thanks


Not in the description but here's the reBar version:

[Official] NVIDIA RTX 3090 Owner's Club
3090 KPE HC 520W BIOS - 16007 PR https://www.3dmark.com/pr/1120041 There's still room for improvement, internal clocks could also improve a little bit more. +1350 used to pass now only +1300. High ambient temps isn't helping.
www.overclock.net


----------



## J7SC

ManniX-ITA said:


> Thanks, this is the ReBar version the updater installed:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> Guess it's better to flash this one directly instead of doing it twice.
> 
> Are you still testing this one? More findings?


 Thanks. Apart from the (native) Strix OC bios, I loaded the KPE 520W r_BAR in bios position 2, but I want to check out the Galax HoF as well.


----------



## Mystic33

I share my 3090 KFA2 AIO mod with only the 390W vBIOS, Superposition benchmark


I scored 19 480 in Time Spy


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## GRABibus

J7SC said:


> ..ok, taking a look at my (growing) 3090 vBios folder and while I do have the 500 W 3090 Galax, I don't think it is r_BAR enabled...I also checked the TechPowerUp DB, but so far no luck...does anyone here have the Galax HoF 500W r_BAR, and perhaps be so kind and save it via GPUz, rename .rom to .txt and upload it here ? Thanks


use this tool to update the bios with rebar :






Resizable bar BIOS update


Galaxy Microsystems Ltd.




www.galax.com


----------



## GRABibus

tps3443 said:


> Yay!!!!
> 
> About to run it in real good!!


Where did you get it and at which price ?


----------



## OrionBG

Hey guys, I'm trying to get the NVIDIA ResizableBAR Firmware Updater but the link on the page gives me "Access Denied" error every time...
the page is: NVIDIA Resizable BAR Firmware Update Tool | NVIDIA and the link in question is near the bottom of the page just before "PARTNER FIRMWARE UPDATE TOOLS"
Can someone please check if it works for them and post the tool here?

UPDATE: Nevermind, it works now.


----------



## Lord of meat

gfunkernaught said:


> Right I forget that not everyone uses the same settings and monitor.
> 
> I recently started playing cyberpunk at 4096x2160 and the game seems to keep everything scaled properly. Usually that res will overscan horizontally unless the app or game supports that 256:135 aspect ratio. I figured, DLSS uses a current res percentage right? Why not give it more pixels?


You were right, I increased the res to 4K; the clock drops to 2055 (did a quick OC) and temp goes to ~47 without the AC on. 
Memory works at +1000 in Port Royal.
I swapped some fans for EVGA CLC ones I had laying around and a Corsair SP; got better airflow.
Going to do more runs and see if I can push it and if it's stable.


----------



## gfunkernaught

ManniX-ITA said:


> Not in the description but here's the reBar version:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club
> 
> 
> 3090 KPE HC 520W BIOS - 16007 PR https://www.3dmark.com/pr/1120041 There's still room for improvement, internal clocks could also improve a little bit more. +1350 used to pass now only +1300. High ambient temps isn't helping.
> 
> 
> 
> 
> www.overclock.net


This is definitely the rbar version?


----------



## Nizzen

gfunkernaught said:


> This is definitely the rbar version?


Try it, and you will find the answer 

Maybe you need a YouTube video to test it for you? 😂


----------



## ManniX-ITA

gfunkernaught said:


> This is definitely the rbar version?


Yes, I compared the hash with the version installed by the Rebar upgrader.


----------



## gfunkernaught

Nizzen said:


> Try it, and you will find the answer
> 
> Maybe you need a youtubevideo that are testing it for you? 😂


Actually found the answer above. 😏


----------



## gfunkernaught

ManniX-ITA said:


> Yes, I compared the hash with the version installed by the Rebar upgrader.


Cool thanks for confirming.


----------



## KedarWolf

Mystic33 said:


> I share my 3090 KFA2 AIO mod with only the 390W vBIOS, Superposition benchmark
> 
> 
> I scored 19 480 in Time Spy
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


You only have thermal pads on the memory and nothing on the VRMs etc.? That can't be good.


----------



## tps3443

Hey guys, I'm finally benching my RTX 3090 Kingpin. What are you guys getting on memory overclocking for game-stable frequencies? And benching frequencies? What is considered really good, and what is bad? 

I am so familiar with 2080 Ti's, I just don't even know where to start with something new to tinker with lol.

I just applied +1,000MHz. I will try that out.


----------



## GRABibus

tps3443 said:


> Hey guys im finally benching my RTX3090 Kingpin, what are you guys getting on memory overclocking game stable frequencies? And benching frequencies? What is considered really good? and what is bad?
> 
> I am so familiar with 2080Ti’s, I just don’t even know where to start with something new to tinker with lol.
> 
> I just applied +1,000Mhz. I will try that out.


I assume that the KP has binned chips, so you could even try +1500MHz as a start.


----------



## Arizor

GRABibus said:


> I assume that the KP has binned chips, so you could even try +1500MHz as a start.


Yep for benching on my Strix +1500 is fine, so I imagine it's solid for KP.

For gaming, +1000 is the most stable for me, again I imagine the KP can go higher.


----------



## tps3443

I am doing a little tuning with the factory pre-installed BIOS on my new 3090 Kingpin. I just tried +1200 memory, and it seemed perfectly fine. I will continue going as high as I can on the memory.

Card will be much better once I get the Kingpin Hydro Copper full cover waterblock installed, and I may as well throw on the Kingpin XOC 1000 watt bios that I get from Vince.

It should be a beast, once I get everything dialed in.

This was run on my direct-die [email protected] with the 4000MHz CL15 XMP profile. EVGA X299 Dark motherboard w/ XOC BIOS. HT is enabled.


----------



## GQNerd

Since Kedar did it, I’m gonna jump in here 

Sold one of my KP's and have a used Hydro Copper block available, if anyone is interested. If not, it's going on eBay in the next few days.

PS. I'm in NorCal if any locals are interested.


----------



## tps3443

Miguelios said:


> Since Kedar did it, I’m gonna jump in here
> 
> Sold one of my KP ‘s and have a used Hydro copper block available if anyone is interested? If not it’s going on eBay in the next few days.
> 
> ps. I’m in NorCal if there’s any locals interested


Already have one but thanks!


----------



## SoldierRBT

tps3443 said:


> Hey guys im finally benching my RTX3090 Kingpin, what are you guys getting on memory overclocking game stable frequencies? And benching frequencies? What is considered really good? and what is bad?
> 
> I am so familiar with 2080Ti’s, I just don’t even know where to start with something new to tinker with lol.
> 
> I just applied +1,000Mhz. I will try that out.


I have one KPE HC that does +1600 benching but only +1000 for gaming. If I start gaming at +1100 or more it would eventually show artifacts. I have another KPE Hybrid with HC installed that only does +1350 for benching and can do +1250 in games no issues. This card has never shown artifacts on me. The moment it detects memory instability it restarts. I'm not sure if it has some kind of firmware to prevent artifacts. This card is from 2nd batch December 2020 and the HydroCopper it's from June 2021.


----------



## KedarWolf

Miguelios said:


> Since Kedar did it, I’m gonna jump in here
> 
> Sold one of my KP ‘s and have a used Hydro copper block available if anyone is interested? If not it’s going on eBay in the next few days.
> 
> ps. I’m in NorCal if there’s any locals interested


You're not supposed to advertise selling stuff in the forum I think, only in the buy and sell section.

I only let someone jump my Kingpin queue because I couldn't do it, never sold it here, and when I said I was selling my Strix, I also said only locally, not to make offers, I wasn't selling it to someone on overclock.net.


----------



## GQNerd

KedarWolf said:


> You're not supposed to advertise selling stuff in the forum I think, only in the buy and sell section.
> 
> I only let someone jump my Kingpin queue because I couldn't do it, never sold it here, and when I said I was selling my Strix, I also said only locally, not to make offers, I wasn't selling it to someone on overclock.net.


no doubt.. what u did was ‘totally’ different than me. 😂🤣

But I'd rather notify my fellow OC'ers of its availability first.. then go to eBay/FB.

Good luck with the KP, better pray it’s better than the Strix you’re selling!

-After going thru 3 KP’s, nothing is guaranteed this Gen.

*Mods feel free to remove my previous post if necessary* No worries


----------



## Arizor

The rules are what they are, but I personally appreciate the opportunity to purchase rare high-end components from fellow OCers, if the mods can regard these things as the exception to the rule. I'd love to pick up a Kingpin (though as @Miguelios says, it might not even be better than my trusty Strix...!).


----------



## ManniX-ITA

Since Kedar did it, I'll jump in too.
I'm selling my nefarious soul for tons of gold and a kingdom  

Kidding, I'm going to keep it forever and ever 

Any suggestion for shunt modding the Strix OC?

Thinking about stacking another 5 mOhm on top; any reason to go lower or higher?

Going to buy from Digi-Key a selection of 3/5/7 when I'm back.

Found these Stackpole parts, since the Panasonic ones are not available:






CSNL2512FT5L00 Stackpole Electronics Inc | Resistors | DigiKey


Ordered today, shipped today. CSNL2512FT5L00 – 5 mOhm ±1% 2W chip resistor, 2512 (6432 metric), current sense, moisture resistant, metal element, from Stackpole Electronics Inc. Pricing and availability for millions of electronic components from Digi-Key Electronics.




www.digikey.de





Highlights: moisture-proof and only 0.55mm height.
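For anyone weighing the resistor values, the arithmetic behind shunt stacking is simple to sketch. A minimal Python sketch, assuming 5 mOhm stock shunts (the actual value varies by card, so treat the numbers as illustrative):

```python
# Stacking R_stack in parallel with the original sense resistor R_orig
# lowers the effective resistance, so the card under-reads current by
# that ratio and the real power limit rises by the same factor.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def real_power_limit(reported_limit_w: float, r_orig: float, r_stack: float) -> float:
    """Real power draw when the card believes it is at reported_limit_w."""
    scale = r_orig / parallel(r_orig, r_stack)  # under-read factor
    return reported_limit_w * scale

# Stacking 5 mOhm on an (assumed) 5 mOhm stock shunt halves the sensed
# resistance, so a 480 W reported limit behaves like ~960 W real draw.
print(parallel(0.005, 0.005))               # ~0.0025 ohm
print(real_power_limit(480, 0.005, 0.005))  # ~960 W
```

Going lower than 5 mOhm on the stack raises the scale factor further; 3 mOhm on 5 mOhm gives a scale of about 2.67x, which is why a 3/5/7 selection covers most targets.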


----------



## tps3443

Miguelios said:


> no doubt.. what u did was ‘totally’ different than me. 😂🤣
> 
> But I’d rather notify my fellow OC’ers of it’s availability first.. then go to Ebay/FB
> 
> Good luck with the KP, better pray it’s better than the Strix you’re selling!
> 
> -After going thru 3 KP’s, nothing is guaranteed this Gen.
> 
> *Mods feel free to remove my previous post if necessary* No worries


Yeah, he totally did the exact same thing lol. He said he needed to get rid of his 3090 Strix ASAP, and that people should PM him. 

(I really don't care though lol.) It had nothing to do with me.

We should all be happy we own 3090's at all lol, no matter the AIB. It could be so much worse.


----------



## KedarWolf

tps3443 said:


> Yeah he totally did the exact same thing lol.
> 
> I bought my 3090 Kingpin from a fellow member on NBR. Anyways, why have you been going through 3090 KP’s?


No, I never did the same thing. I never sold anything at all. I couldn't buy the Kingpin from EVGA, so I let someone use my Elite login to buy it at retail.

That is NOT the same as advertising and selling something in this thread.


----------



## tps3443

SoldierRBT said:


> I have one KPE HC that does +1600 benching but only +1000 for gaming. If I start gaming at +1100 or more it would eventually show artifacts. I have another KPE Hybrid with HC installed that only does +1350 for benching and can do +1250 in games no issues. This card has never shown artifacts on me. The moment it detects memory instability it restarts. I'm not sure if it has some kind of firmware to prevent artifacts. This card is from 2nd batch December 2020 and the HydroCopper it's from June 2021.


How do I find out the batch on mine?


----------



## GQNerd

tps3443 said:


> Yeah he totally did the exact same thing lol.
> 
> I bought my 3090 Kingpin from a fellow member on NBR. Anyways, why have you been going through 3090 KP’s?


First one from Dec was for me.. 2nd one was for a client's build, 3rd was for binning purposes, to make sure I had the best chip possible.

Ended up keeping the original one, and the 3rd one had the GA102-250 chip I posted a few weeks back.. Was a good performer but ran a bit hotter.

Dating back to Ampere's release, I've also had two 3090 FE's and two Strixes. So that's where I drew my conclusions:

- Silicon lottery is REAL this year
- FE is the most efficient
- Strix with shunt mod + 1000W XOC performs within 2-3 bins of the KP's on the same BIOS ([email protected] compared to [email protected])
- Most manufacturers aren't binning as thoroughly as in prior years due to scarcity, hence EVGA slapping a repurposed 3080 Ti chip on their flagship model


----------



## Bal3Wolf

SoldierRBT said:


> I have one KPE HC that does +1600 benching but only +1000 for gaming. If I start gaming at +1100 or more it would eventually show artifacts. I have another KPE Hybrid with HC installed that only does +1350 for benching and can do +1250 in games no issues. This card has never shown artifacts on me. The moment it detects memory instability it restarts. I'm not sure if it has some kind of firmware to prevent artifacts. This card is from 2nd batch December 2020 and the HydroCopper it's from June 2021.



My KPE HC does +1800 benching and +1600 gaming. I do have the MP5WORKS backplate cooler on mine, though. I got my KPE HC early June 2021 also.


----------



## tps3443

KedarWolf said:


> No, I never did the same thing. I never sold anything at all. I couldn't buy the Kingpin from EVGA, so I let someone use my Elite login to buy it at retail.
> 
> That is NOT the same as advertising and selling something in this thread.


Im just messing with you man. I really don’t mind at all what you post around here. I know what your intentions were. No big deal.

So you can never buy another 3090KP right? Since you gave away your chance at one?


----------



## tps3443

Bal3Wolf said:


> My KPE HC does +1800 benching and +1600 gaming. I do have the MP5WORKS backplate cooler on mine, though. I got my KPE HC early June 2021 also.



That's awesome! I'm so temp limited right now. This 3090 Kingpin I have is trying to run 2,175MHz at 50-54C in games on the LN2 BIOS lol, so these are definitely binned chips. No way a run-of-the-mill GA102 would do that. I've seen a lot of 2080 Ti's on water cooling, and most would downclock from 2,100MHz to 2,085MHz immediately after hitting 48-49C. Well, I have seen some better 2080 Ti's manage 2130-2145 even at 48-49C. So seeing this Kingpin run like this is pretty wild!

The AIO can't handle the default installed LN2 BIOS on it at all; it is still very temperature limited. I am gonna go for like 2.1GHz undervolted for lower temps and quieter fans for now, until I have the full block on it. I am hoping for sub-42C to sub-40C after I get the KP Hydro Copper by mid next week.


----------



## Bal3Wolf

tps3443 said:


> That’s awesome! I’m so temp limited right now. This 3090 Kingpin I have is trying to run 2,175Mhz at 50-54C in games on the LN2 bios lol so these are definitely binned chips. No way a run of the mill GA102 would do that. I’ve seen a lot of 2080Ti’s on watercooling and most would downclock from 2,100Mhz to 2,085Mhz immediately after hitting 48-49C. Well I have seen some better 2080Ti’s manage that 2130-2145 even at 48-49C. So seeing this Kingpin run like this is pretty wild!
> 
> The AIO can’t handle the default installed LN2 bios on it at all. It is still very temperature limited. I am gonna go for like 2.1Ghz undervolted for lower temps and quieter fans for now, until I have the full block on it. I am hoping for
> SUB 42C- SUB40C after I get the KP Hydrocopper by mid next week.


Yeah, I got a good memory-binned card, not so much core: around 2140-2170 gaming on the LN2 BIOS. Benching, I can sometimes hold 2200+; with a curve I can bench up to 2250. My card is 37-39C benching, around 40-44C gaming.


----------



## OrionBG

Hey guys, I finally got my shiny new 3090FE today and I've been tinkering with it.
I have a strange issue... I have the latest BIOS for my Zenith II Extreme Alpha (with Resizable BAR support), it's enabled in the BIOS, I've updated the 3090 FE firmware to enable the feature, and I have the latest GPU driver / Win10. Still, the NVIDIA control panel reports it as OFF.
Ideas?
The previous GPU inside the system, the RX6900XT had it working...

Nevermind... again...  I figured it out.
Toggling the Resizable Bar option Off and On again in the BIOS fixed it.


----------



## SoldierRBT

Bal3Wolf said:


> My KPE HC does +1800 benching and +1600 gaming. I do have the MP5WORKS backplate cooler on mine, though. I got my KPE HC early June 2021 also.


Did you see any improvement with the mp5works backplate? Do you see artifacts in yours when benching? The KPE that does +1600 would show artifacts when it gets warm but can complete PR runs. The other one if I do +1400 then click apply it would restart even on desktop without any load and it doesn’t show artifacts even when it gets warm.

Also it seems performance/clocks vary between cards. 2180-2190 avg + 1350 memory is enough to break 16K in PR, but the other one with similar clocks and +1600 memory can't get close to that (both running the 520W BIOS).


----------



## yzonker

Got my 2nd D5 pump installed. It did improve my block delta slightly: about 1C at 390W, closer to 2C at 500W or more. That's 100% pump before and after. The gain might be larger if you were always limiting to, say, 50% pump. The redundancy is nice too, and was really my primary motivator, since I daily drive the KP XOC quite a bit.

Not exactly 3090 related other than that's the card I have (Zotac 3090 trinity) , but it's been talked about before here.


----------



## des2k...

SoldierRBT said:


> Did you see any improvement with the mp5works backplate? Do you see artifacts in yours when benching? The KPE that does +1600 would show artifacts when it gets warm but can complete PR runs. The other one if I do +1400 then click apply it would restart even on desktop without any load and it doesn’t show artifacts even when it gets warm.
> 
> Also it seems performance/clocks vary between cards. 2180-2190 avg + 1350 memory is enough to break 16K in PR, but the other one with similar clocks and +1600 memory can't get close to that (both running the 520W BIOS).


I can push a bit more on my 3090 both core and mem.
Still not anywhere close to 16k lol









I scored 15 359 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com













I scored 15 365 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





The effective clock for those curious, PR was lower about 2173 eff


----------



## yzonker

des2k... said:


> I can push a bit more on my 3090 both core and mem.
> Still not anywhere close to 16k lol
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 359 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


So what is the difference? The rest of the system or some other underlying difference between our Zotac cards and the KP? I'm stuck in basically the same place. I guess no surprise since we have the same card both on AMD platforms.


----------



## newls1

SoldierRBT said:


> Did you see any improvement with the mp5works backplate? Do you see artifacts in yours when benching? The KPE that does +1600 would show artifacts when it gets warm but can complete PR runs. The other one if I do +1400 then click apply it would restart even on desktop without any load and it doesn’t show artifacts even when it gets warm.
> 
> Also seems performance/clocks varies between cards. 2180-2190 avg +1350 memory is enough to brake 16K in PR but the other one with similar clocks and +1600 memory can’t get close to that (both running 520W Bios)


If I'm understanding your reply: when you are at +1600 mem speeds and your score is lower, that's because the ECC is kicking in, and you must lower clock speeds to stop the mem from "correcting" itself.


----------



## SoldierRBT

newls1 said:


> If I'm understanding your reply: when you are at +1600 mem speeds and your score is lower, that's because the ECC is kicking in, and you must lower clock speeds to stop the mem from "correcting" itself.


I'm not sure if it's just me, or it could just be a theory, but none of the KPEs I've tested have ECC enabled. Could be they remove that for LN2 runs so you always get the highest score possible; performance increases until black screen.

I've tested a lot of 3080s and they do have ECC, which lowers the score a lot. If you go above the stable memory OC, performance drops.


----------



## newls1

That could be, that's above my know-how....


----------



## GQNerd

For those trying to crack 16k:

First, you need to be either shunt modded, flashed a higher limit BIOS or both depending on your model card..

High clocks help for sure, but maintaining a constant frequency is key.. Here's a 16k score with my target being [email protected]; when it downclocked, it only went down 1 bin, or 15MHz.









I scored 16 017 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Also, there are diminishing returns past 2235-2265 clocks.. Even with controlling the temps (low ambient with AC going in the same room), and setting 2310 as my target, I only bumped the score by ~300pts..









I scored 16 312 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Ps.
_HydroCopper Block +1600 or +1700 mem, no active backplate.. just thermal pads and an aluminum block on top._
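The "bin" arithmetic in the post above is easy to sanity-check. A small sketch, assuming the ~15 MHz step size mentioned (Ampere adjusts the core clock in discrete steps, so the interesting number is how many bins separate two clocks):

```python
# Ampere core clocks move in discrete ~15 MHz steps, called "bins".
BIN_MHZ = 15

def bins_between(clock_a_mhz: int, clock_b_mhz: int, bin_mhz: int = BIN_MHZ) -> int:
    """Number of whole clock bins separating two core clocks."""
    return abs(clock_a_mhz - clock_b_mhz) // bin_mhz

# Dropping from 2250 MHz to 2235 MHz is a single bin...
print(bins_between(2250, 2235))  # 1
# ...while 2265 MHz down to 2220 MHz is three bins.
print(bins_between(2265, 2220))  # 3
```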


----------



## KedarWolf

tps3443 said:


> Im just messing with you man. I really don’t mind at all what you post around here. I know what your intentions were. No big deal.
> 
> So you can never buy another 3090KP right? Since you gave away your chance at one?


Someone offered to let me jump their queue to buy one at retail, but it might not go through. It's the same person that used my queue, returning the favour, but they used their Elite address as the shipping address on my queue, so the order might be immediately cancelled.


----------



## des2k...

Started playing some new Metro; it doesn't like undervolting at all. At just 2115 (2080-2090 effective), it seems I need 1050mV.
66fps cap, 4K.


----------



## gfunkernaught

The GALAX 500W rBAR BIOS on my Trio only shows the first two 8-pin power readings, the power limit is around 450W, and the req vs eff clock difference is about 70-80MHz.

14702 in PR with +60 core / +1000 mem, blah.


----------



## tps3443

KedarWolf said:


> Someone offered to let me jump their queue to buy one at retail, but it might not go through, is the same person that used my queue and are returning the offer, but they used their Elite address as the shipping address from my queue, so the order might be immediately cancelled.


Well hopefully you’ll get one, best GPU I have owned. My KP Hydro copper block will be here in a few days hopefully.


----------



## jura11

Miguelios said:


> For those trying to crack 16k:
> 
> First, you need to be either shunt modded, flashed a higher limit BIOS or both depending on your model card..
> 
> High clocks help for sure, but maintaining a constant frequency is Key.. Here’s a 16k score with my target being [email protected], and when it downclocked, it only went down 1 bin or 15mhz.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 017 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Also, there are diminishing returns past 2235-2265 clocks.. Even with controlling the temps (low ambient with AC going in the same room), and setting 2310 as my target, I only bumped the score by ~300pts..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 312 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Ps.
> _HydroCopper Block +1600 or +1700 mem, no active backplate.. just thermal pads and an aluminum block on top._



You are a lucky sod hahaha, and congratulations on your GPUs. I can't break 16k on my Palit RTX 3090 GamingPro; literally, I would need to run an LN2 pot to get clocks like you are getting. 

The lowest temperatures I could get my GPUs to run at have been in the 20s under load, and that's with freezing temperatures outside last winter. Now I'm lucky to get below 40°C in such warm weather hahaha. 

Hope this helps 

Thanks, Jura


----------



## GQNerd

jura11 said:


> You are lucky sod hahaha and congratulations to your GPUs, I can't break 16k on my Palit RTX 3090 GamingPro, literally I would need to run LN2 pot get such clocks like you are getting
> 
> My lowest temperatures which I could get my GPUs to run has been in 20's on load that's with freezing temperatures outside last winter, now I'm lucky to hit below 40°C in such warm weather hahaha
> 
> Hope this helps
> 
> Thanks, Jura


Silicon lottery, plus OC skills 

As for the Palit,

Can’t speak on the component quality, but if you shunt mod and flash 1000w bios, your temperatures should be decent enough to get close to 15.8-16K.

These cards need more power and controlled temps (duh, I know) to raise the clocks.. My 16k runs were reaching 650-690w from the GPU

I’ve had great runs where the temperature reached up to 67c. (When the KP had the AIO on it)

Try those suggestions or wait for winter ❄ Lol


----------



## J7SC

Miguelios said:


> Silicon lottery, plus OC skills
> 
> As for the Palit,
> 
> Can’t speak on the component quality, but if you shunt mod and flash 1000w bios, your temperatures should be decent enough to get close to 15.8-16K.
> 
> I’ve had great runs where the temperature reached up to 67c. (When the KP had the AIO on it)
> 
> Or wait for winter ❄ Lol


...coincidentally, I was just watching an Igor's Lab vid in which he suggested that Palit is the parent company of Galax (and its EU KFA2 version).


----------



## SoldierRBT

I got this score a few days ago on the 520W BIOS 23c ambient temp. I may be able to push it higher when I find the 1K rebar BIOS. Another thing with PR is that sometimes it would start the run with 2-3 bins lower. I have to cancel the run and try again until I get a stable frequency to pass. 16022 I think was with 2189MHz avg.


----------



## tps3443

SoldierRBT said:


> I got this score a few days ago on the 520W BIOS 23c ambient temp. I may be able to push it higher when I find the 1K rebar BIOS. Another thing with PR is that sometimes it would start the run with 2-3 bins lower. I have to cancel the run and try again until I get a stable frequency to pass. 16022 I think was with 2189MHz avg.
> 
> View attachment 2517701


Do you have the hybrid cooler still on there? Or you have the Kingpin Hydro Copper on?

I have a little ways to go then. Really need my KP block to get here lol. I'm hoping to manage like sub-42C or sub-40C GPU temps.


----------



## tps3443

My setup is terrible looking right now. lol. 

I have some fresh black EKWB tubing on the way, and I'm gonna clean this up drastically with the Kingpin waterblock.

Because my current situation is just hideous lol.


----------



## Arizor

Toying around with getting my effective clock speed as close as possible to afterburner’s reported clock speed; for example if I just “drag up” a node and set it to 2070, the effective clock is actually 1830. Here’s a quick video on the gap and why it happens with certain manual overclocks:


----------



## SoldierRBT

tps3443 said:


> Do you have the hybrid cooler still on there? Or you have the Kingpin Hydro Copper on?
> 
> I have a little ways to go then. Really need my kp block to get here lol. I’m hoping to manage like SUB 42C or SUB 40C GPU temps.


I installed the Hydro Copper block. The block is okay, not that great, but it's the only one available at the moment for the KP cards. The block has a 14C delta. I can get it to stay around 45-46C when pulling 500W+, but it requires a strong water cooling setup.
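For anyone translating block deltas into expected core temps, the arithmetic is just addition: expected GPU temp is roughly coolant temp plus the block's delta at that load. A trivial sketch (the 31C water temperature is an assumed figure for illustration):

```python
# A block "delta" is GPU core temp minus coolant temp at a given load,
# so expected core temp = water temp + block delta.
def expected_gpu_temp(water_c: float, block_delta_c: float) -> float:
    """Estimate core temp from coolant temp and block delta."""
    return water_c + block_delta_c

# With (assumed) 31C loop water and the 14C delta quoted above,
# a ~45C core temp at 500W+ is exactly what you'd expect.
print(expected_gpu_temp(31.0, 14.0))  # 45.0
```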


----------



## ManniX-ITA

Arizor said:


> Toying around with getting my effective clock speed as close as possible to afterburner’s reported clock speed; for example if I just “drag up” a node and set it to 2070, the effective clock is actually 1830


You can avoid that by pulling a strap instead of a single point (Shift+LeftMouseClick).
Not as good as a custom curve with 3 consecutive bins and forcing clock with nvidia-smi but much quicker.


----------



## lmfodor

ManniX-ITA said:


> You can avoid that by pulling a strap instead of a single point (Shift+LeftMouseClick).
> Not as good as a custom curve with 3 consecutive bins and forcing clock with nvidia-smi but much quicker.


Hi Mannix, it's been so long; good to see you here. As I got tired of chasing flat 14 and avoiding the WHEAs, a month ago I added a white Strix 3090 to my 3080. Next step is the EK active backplate, and then I'll see how far I can go.. are you going to shunt mod the Strix? Is it worth it?


Sent from my iPhone using Tapatalk Pro


----------



## ManniX-ITA

lmfodor said:


> Hi Mannix, so long, good to see you here. As I got tired of trying the 14 flat and avoiding the WHEAs, a month ago I added a white strix 3090 to my 3080. Next step is the EK active backplate and then I’ll see how far I can go .. are you going to do a shunt mod the strix? is it worth it?


Hey 
I can't do flat 14 @3800 either, with the super expensive 4000C14 kit... have to see with water cooling.
May be a thermal issue, and with the 3090 the memory gets really hot of course.

I have high hopes for the shunt mod, but I'll tell you once it's done 
From what I gathered, it's just a bit better with the original cooler, repasted and with better pads.
It should really shine with water cooling.
But there you also need a serious setup, cause it's going to drop 600-700W on it.
I'm going for the EK block with the active backplate as well.


----------



## devilhead

Testing the 3090 Strix active backplate + 12.8 W/mK pads in the worst-case scenario --- mining with +2000 on memory 









It's way better than my Frankenstein backplate with stock EK pads, where temps were around 80C 


----------



## Arizor

ManniX-ITA said:


> You can avoid that by pulling a strap instead of a single point (Shift+LeftMouseClick).
> Not as good as a custom curve with 3 consecutive bins and forcing clock with nvidia-smi but much quicker.


Yep, for sure. Finding 1.06-1.07 seems the sweet spot for good gaming overclocks, using 3 nodes.


----------



## GRABibus

Arizor said:


> Toying around with getting my effective clock speed as close as possible to afterburner’s reported clock speed; for example if I just “drag up” a node and set it to 2070, the effective clock is actually 1830. Here’s a quick video on the gap and why it happens with certain manual overclocks:


Nice to remind the basics 👍


----------



## GRABibus

gfunkernaught said:


> The GALAX 500w rbar bios on my Trio only show the first two 8-pin power readings, and the power limit is around 450w, and the req vs eff clock is about 70-80mhz difference.
> 
> 14702 in PR with +60 core +1000 mem, blah


This is why I came back to the EVGA XOC 500W with rBAR.
I get 200 points more with it than with the Galax 500W, with the same overclocking settings.


----------



## des2k...

SoldierRBT said:


> I'm not sure if it's me or could just be a theory but none of the KPE I've tested have ECC enable. Could be they remove that for LN2 runs so you always get the highest score possible. Performance increases until black screen.
> 
> I've tested a lot of 3080s and they do have ECC which lower score a lot. If you go above the stable memory OC, performance drops.


There's no ECC on gaming cards; you can verify this with the nvidia-smi tool.

It will show ECC enabled for the workstation / data center models.
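If you'd rather check this from a script than eyeball the nvidia-smi output, a small sketch (on GeForce boards this query typically comes back as [N/A]/unsupported, while workstation / data center parts report Enabled or Disabled):

```python
import shutil
import subprocess

def ecc_mode() -> str:
    """Return the current ECC mode reported by nvidia-smi, or a fallback string."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found"
    r = subprocess.run(
        ["nvidia-smi", "--query-gpu=ecc.mode.current", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return r.stdout.strip() or r.stderr.strip() or "no output"

print(ecc_mode())
```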


----------



## Wihglah

des2k... said:


> there's no ECC on gaming cards, you can verify this with the nvidia-smi tool
> 
> it will show ECC ON for the workstations / data center models


3090 VRAM is ECC


----------



## ManniX-ITA

Wihglah said:


> 3090 VRAM is ECC


If it's ECC, it can't be enabled:



Code:


ECC features not supported for GPU 00000000:2B:00.0.
Treating as warning and moving on.
All done.


----------



## des2k...

"3090 VRAM is ECC "

*source ?*
ECC is not enabled in the vbios or drivers for gaming RTX cards, and I'm not sure why so many mention ECC without proof.

There could be memory-controller retry when there's signal loss or degradation, or memory frequency throttling / wait cycles, but the memory ICs / controller are not wired for extra ECC bits!


----------



## gfunkernaught

GRABibus said:


> this is why I came back to EVGA XOC 500W with bar.
> I get 200 points more with it than with galax 500W, with same overclocking settings.


I went back to the 520W since it performs better than the 500W.


----------



## tps3443

SoldierRBT said:


> I installed the hydro copper block. The block is okay not that great but it’s the only one available at the moment for the KP cards. The block has a 14C delta. I can get it to stay around 45-46C when pulling 500W+ but it requires a strong water cooling setup.



Dang, I was hoping to hear better than that.

EVGA said the KP Hydro Copper had a >/=10C delta over water temp.

My water loop works pretty well; I only have three 360s, but my fans are incredible: 2.75 mm H2O static pressure.


----------



## tps3443

Hey everyone I have a situation with the Rebar bios on my 3090KP.

So, in order to enable rebar I have to switch over to the X299 Dark V1.28 bios (which sucks). That one carries the newer microcode: it slows my CPU down by 5-6%, idles 8-10C hotter, and loads up over 12C hotter. I always run an XOC 1.03 or XOC 1.07 bios daily on my X299 Dark for my direct-die 7980XE.

7980XE / 5GHz all-core with more voltage: runs 5C cooler under load on the XOC 1.03/1.07 bios

vs.

7980XE / 4.9GHz all-core with less voltage on the V1.28 rebar-enabled EVGA bios: 5C hotter under load, worse performance

Is rebar even worth it? lol.


----------



## GRABibus

tps3443 said:


> Hey everyone I have a situation with the Rebar bios on my 3090KP.
> 
> So, in order to enable rebar I have to switch over to the X299 Dark V1.28 bios (Which sucks) That is the newer microcode, it slows my CPU down by 5-6% and it also idles 8C-10C hotter, and loads up over 12C hotter. I always run a XOC 1.03 or XOC 1.07 bios daily on my X299 Dark for my Direct die 7980XE.
> 
> 7980XE/ 5Ghz all core with more voltage runs (5C cooler) under load on XOC 1.03/1.07 bios
> 
> VS.
> 
> 7980XE/4.9Ghz all core less voltage on V1.28 Rebar supported evga bios. (5C hotter load) worse performance
> 
> Is rebar even worth it? lol.


wait for a better bios


----------



## des2k...

SoldierRBT said:


> I installed the hydro copper block. The block is okay not that great but it’s the only one available at the moment for the KP cards. The block has a 14C delta. I can get it to stay around 45-46C when pulling 500W+ but it requires a strong water cooling setup.


It wouldn't be the fins, because it has tons more vs. the EK block.

If you look at the Optimus block that gets an 8C delta at 500W+, their block has a flat plate and no standoffs, so it gets perfect die coverage / pressure.


----------



## SoldierRBT

des2k... said:


> "3090 VRAM is ECC "
> 
> *source ?*
> ECC is not enabled in the vbios or drivers for gaming RTX cards and I'm not sure why so many mention ECC without proof.
> 
> there could be memory controler retry when there's signal loss or degradation or memory frequency throttle,wait cycles but the memory ICs / controller are not wired for extra ECC bits !


I mean you can call it whatever you want. The result is the same. Performance loss when pushing memory too high. I've tested a few cards and KP cards don't have it.


----------



## des2k...

SoldierRBT said:


> I mean you can call it whatever you want. The result is the same. Performance loss when pushing memory too high. I've tested a few cards and KP cards don't have it.


You know, the KP might have higher-binned memory or higher memory voltage, so you don't see the performance loss early at a high OC. But it shouldn't be put under the ECC label.


----------



## tps3443

I have not even pushed my 3090 Kingpin's memory to its limits yet; +1300 was absolutely fine and continued to result in a higher PR score. It seems to be good memory. After I get done with work at home, I'm gonna push it to the max and see where it stops. I will go for +1500 on the next PR run.


----------



## InvictuSZN

Deleted


----------



## J7SC

...still on the 466.11 driver w/ my Strix. How's 477.11 ?


----------



## yzonker

des2k... said:


> You know KP might have higher bin memory or higher voltage for the memory so you don't see the perfomrance loss early at high OC. But shouldn't be put under ECC label


I'm not exactly disagreeing with you as I don't really know what they have, but I've definitely seen my 3080ti fall off in performance without crashing when increasing mem OC. Why does that happen if not ECC?


----------



## Luck100

yzonker said:


> I'm not exactly disagreeing with you as I don't really know what they have, but I've definitely seen my 3080ti fall off in performance without crashing when increasing mem OC. Why does that happen if not ECC?


Memory OC is using more power, leaving less for core boost.


----------



## ManniX-ITA

Luck100 said:


> Memory OC is using more power, leaving less for core boost.


Also higher temperature which will throttle the speed.



J7SC said:


> ...still on the 466.11 driver w/ my Strix. How's 477.11 ?


Slower in benchmarks.


----------



## GRABibus

J7SC said:


> ...still on the 466.11 driver w/ my Strix. How's 477.11 ?


No issues so far


----------



## macangel

Sorry if this is covered somewhere; there are over 800 pages to go through.
These new vBIOSes, do they come pre-patched for Resizable BAR?


----------



## pantsoftime

yzonker said:


> I'm not exactly disagreeing with you as I don't really know what they have, but I've definitely seen my 3080ti fall off in performance without crashing when increasing mem OC. Why does that happen if not ECC?


GDDR6X uses "Error Detection and Replay" which allows the memory controller to detect a transmission error and retry the transaction. The purpose of this is primarily in correcting signal integrity issues that can result from running at such high clocks (especially in overclocking). This is a great feature but is also why performance suffers when you overclock too high.

This differs a bit from ECC in that ECC adds additional information storage and will pick up on errors that occur in the memory array. Sometimes memory bits flip due to things like "single event upsets". ECC can correct for this as well as for some types of transmission errors. EDR is only functional during the process of memory reading / writing rather than also during storage.
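A toy model (illustrative numbers, not measurements) of why replay makes performance peak and then fall: raw bandwidth rises with the memory clock, but past the stable point the retry rate climbs steeply, and each replayed burst displaces useful data.

```python
def effective_bandwidth(raw_gb_s: float, retry_rate: float) -> float:
    """Each failed burst is detected and replayed, so only (1 - p) of
    transfers carry new data; useful bandwidth scales accordingly."""
    assert 0.0 <= retry_rate < 1.0
    return raw_gb_s * (1.0 - retry_rate)

# Hypothetical clock/retry-rate pairs: the error rate climbs steeply
# once the overclock passes the signal-integrity limit.
for gbps, p in [(19.5, 0.00), (21.0, 0.01), (21.5, 0.08), (22.0, 0.25)]:
    raw = 936.0 * gbps / 19.5  # scale the 3090's stock 936 GB/s figure
    print(f"{gbps:>4} Gbps/pin -> {effective_bandwidth(raw, p):6.0f} GB/s effective")
```

This is why the score keeps climbing up to a point and then quietly drops without a crash.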


----------



## tps3443

I tried the Resizable BAR bios on my 3090 Kingpin yesterday, mainly due to the hype surrounding this whole Resizable BAR thing, and I wanted to post this and say: make sure you test your system's performance before and after (if you are required to perform a bios update on your motherboard too).

I switched over to one of the three preinstalled bioses on my X299 Dark motherboard. I selected bios 1.28 and enabled the proper options to turn on Resizable BAR. Then I updated my 3090 KP to the Resizable BAR bios as well through PX1. I don't really have any games that support this, but I am not 100% sure either.

The problem with Resizable BAR is that you're required to use a newer bios on the X299 chipset. Well, in my case, bios 1.28 on my X299 Dark just sucks terribly. It updates my CPU microcode to a newer version that idles hotter and runs slower due to the Intel security vulnerability mitigations.

So I have always run an older XOC 1.05 bios on my X299 Dark and direct-die 5GHz 7980XE, which predates the vulnerability patches.

It makes no sense how a bios that's newer by years can make my CPU run hotter and slower, and put out multithreaded performance scores that are just drastically worse.

So, Resizable BAR just isn't for me! (Or not on this platform.)

I am essentially giving up 250-300MHz worth of CPU performance to the newer bios microcode that is required to enable Resizable BAR on the 3090 Kingpin, just to potentially see better performance in games that support Resizable BAR. It's the stupidest thing ever lol.

My 7980XE at 4,700MHz on the older XOC bios can slam my CPU at 5,000MHz on the newer bios. What sense does that even make lol?

EVGA needs to step up the bios game on the X299 Dark and release a new bios. I don't care how old this platform is. It puts out the same IPC per clock as a stock 10900K (as we all know, Intel hasn't done much since 2017), and on top of that, my system memory latency at sub-48ns is still better than even Ryzen's X570 platform.

If you guys are enabling Resizable BAR, check your before and after numbers for all components, especially on the X299 platform.


----------



## gfunkernaught

tps3443 said:


> I tried the resizeable bar bios on my 3090 Kingpin yesterday. Mainly due to the hype surrounding this whole resizeable bar thing, and I wanted to post this and say, make sure you test your systems performance before and after, (if you are required to perform a bios update on your mainboard too)
> 
> I switched over to (1of3) preinstalled bios on my X299 Dark motherboard. I selected bios 1.28V and enabled the proper options in the bios to enable this resizeable bar. Then I updated my 3090KP to the resizeable bar bios as well through PX1. I don’t really have any games that support this, but I am not 100% sure either.
> 
> The problem with resizeable bar is that, your required to use a newer bios on X299 chipset. Well, in my case the bios 1.28v on my x299 dark just sucks terribly. It updates my cpu microcode to a newer version that idles hotter and runs slower due to the intel security vulnerabilities.
> 
> So I have always ran an older XOC 1.05 bios for my X299 dark and direct die 5Ghz 7980XE, which is prior to the vulnerabilities patch.
> 
> It makes no sense how a drastically newer bios by years can make my cpu run hotter, and slower, and put out multithreaded performance scores that are just drastically worse..
> 
> So, reaizeable bar just isn’t for me! (Or not on this platform)
> 
> I am essentially giving up 250-300Mhz worth of cpu performance from the newer bios microcode that is required to enable reaizeable bar on the 3090Kingpin (Just to potentially see better performance in games that support resizeable bar) it’s the stupidest thing ever lol.
> 
> My 7980XE performance at 4,700Mhz on the older XOC bios can slam my cpu at 5,000Mhz on the newer bios. What sense does that even make lol?
> 
> Evga needs to step up the bios game on the X299 Dark, and release a new bios. I don’t care how old this platform is. I spit out the same IPC per clock as a stock 10900K (As we all know intel hasn’t done much since 2017) and then on top of that, my system memory latency at sub 48ns is still better than even ryzens X570 platform.
> 
> If you guys are enabling to resizeable bar. Check your before and after numbers for all components. Especially on x299 platform.


I would have stayed with the XOC 1KW bios if Cyberpunk wasn't stuttering on it. Once I went to the 520W rBAR, the stuttering was gone, but clocks were lower.


----------



## jura11

Miguelios said:


> Silicon lottery, plus OC skills
> 
> As for the Palit,
> 
> Can’t speak on the component quality, but if you shunt mod and flash 1000w bios, your temperatures should be decent enough to get close to 15.8-16K.
> 
> These cards need more power and controlled temps (duh, I know) to raise the clocks.. My 16k runs were reaching 650-690w from the GPU
> 
> I’ve had great runs where the temperature reached up to 67c. (When the KP had the AIO on it)
> 
> Try those suggestions or wait for winter ❄ Lol



Hi there,

Yup, silicon lottery hahaha, and I lost on both GPUs, which I expected hahaha. As for my OC skills, I would say they are not bad, although I can't measure myself against the best here.

15.2k or 15.3k, I think, is what I get with the XOC 1000W BIOS on my RTX 3090, but that's it, and temperatures are now out of hand. We are currently baking in the 30s without air conditioning, so right now I'm not able to control temperatures.

No shunt mod for me, as I'm chicken about that; I won't be doing that to my GPUs for the time being.

I don't have a Strix or Kingpin to compare against, which I would love to.

I use my PC for rendering and some gaming, but mostly the GPUs run the XOC 1000W BIOS capped at 75% for rendering, with +105MHz on core and +1295MHz on VRAM on the top card, and the bottom card at 1100MHz.

Thanks for offering to sell me your Strix last time, I really appreciate that.

Hope this helps.

Thanks, Jura


----------



## tps3443

gfunkernaught said:


> I would have stayed with the xoc 1kw bios if cyberpunk wasn't stuttering on it. Once I went to the 520w rbar, stuttering gone, but lower clocks.


The stuttering is most likely just your card bouncing off the temp and voltage limits; that causes massive latency spikes.


----------



## gfunkernaught

tps3443 said:


> The stuttering is more than likely your card just bouncing off the temp and voltage limits. It causes massive latency spikes


On the 1KW bios? Most likely not, because when I use that bios, the clock/voltage stay put when I have the PL set to 60% and above.


----------



## Nizzen

tps3443 said:


> I tried the resizeable bar bios on my 3090 Kingpin yesterday. Mainly due to the hype surrounding this whole resizeable bar thing, and I wanted to post this and say, make sure you test your systems performance before and after, (if you are required to perform a bios update on your mainboard too)
> 
> I switched over to (1of3) preinstalled bios on my X299 Dark motherboard. I selected bios 1.28V and enabled the proper options in the bios to enable this resizeable bar. Then I updated my 3090KP to the resizeable bar bios as well through PX1. I don’t really have any games that support this, but I am not 100% sure either.
> 
> The problem with resizeable bar is that, your required to use a newer bios on X299 chipset. Well, in my case the bios 1.28v on my x299 dark just sucks terribly. It updates my cpu microcode to a newer version that idles hotter and runs slower due to the intel security vulnerabilities.
> 
> So I have always ran an older XOC 1.05 bios for my X299 dark and direct die 5Ghz 7980XE, which is prior to the vulnerabilities patch.
> 
> It makes no sense how a drastically newer bios by years can make my cpu run hotter, and slower, and put out multithreaded performance scores that are just drastically worse..
> 
> So, reaizeable bar just isn’t for me! (Or not on this platform)
> 
> I am essentially giving up 250-300Mhz worth of cpu performance from the newer bios microcode that is required to enable reaizeable bar on the 3090Kingpin (Just to potentially see better performance in games that support resizeable bar) it’s the stupidest thing ever lol.
> 
> My 7980XE performance at 4,700Mhz on the older XOC bios can slam my cpu at 5,000Mhz on the newer bios. What sense does that even make lol?
> 
> Evga needs to step up the bios game on the X299 Dark, and release a new bios. I don’t care how old this platform is. I spit out the same IPC per clock as a stock 10900K (As we all know intel hasn’t done much since 2017) and then on top of that, my system memory latency at sub 48ns is still better than even ryzens X570 platform.
> 
> If you guys are enabling to resizeable bar. Check your before and after numbers for all components. Especially on x299 platform.


Using Resizable BAR on my Asus X299 Apex now.
3DMark is working pretty well with this soon-to-be 4-year-old CPU.
3090 HOF on air.

PS: the HOF 1000W bios is without rebar; these runs use the rebar bios.










I scored 15 499 in Port Royal (Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11) via www.3dmark.com

I scored 21 517 in Time Spy (Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11) via www.3dmark.com





Any games I should test with 500/520w rebar bios?


----------



## GAN77

Nizzen said:


> 3090 hof on air.


Which version do you have, premium or regular?


----------



## tps3443

Nizzen said:


> Using res bar on my Asus x299 Apex now.
> 3dmark working pretty good with this soon 4 year old cpu
> 3090 hof on air.
> 
> Ps: hof 1000w bios without rebar, but the rebar bios.
> 
> I scored 15 499 in Port Royal (Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11) via www.3dmark.com
> 
> I scored 21 517 in Time Spy (Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11) via www.3dmark.com
> 
> Any games I should test with 500/520w rebar bios?




I didn't say the bios didn't work in my X299 Dark; I just said it's not worth it. The numbers you posted above are not where the problem is. It's CPU prowess that goes down: Blender times, render times, R15/R20/R23 performance in multithreaded loads.

My system is absolutely dialed in. I just have a problem with that newer microcode bogging down the CPU.

You can try R15/R20/R23 with the newer bios vs. the older bios. Maybe that'll explain things.


----------



## J7SC

A quick confirmation re. 'delays' after switching vBios via the little toggle switch (i.e. on my Strix OC): it seems to take two or three reboots (even after an initial 'cold' reboot) for various monitoring software and Win 10 to correctly recognize and identify 'the other' vBios. I'm running the stock Strix OC v3 r_BAR on one vBios and the KPE 520 r_BAR on the other.

Also, I am using a slightly modified stock backplate on my Strix (modded for PCIe cable clearance with the water block on the front of the GPU). Apart from that, I only re-pasted the rear VRAM, and my VRAM temps are around 63C at 23C ambient. While I have all the items for a water-cooled custom backplate setup, I'm wondering if the temp gains are worth the extra plumbing, as my VRAM temps seem reasonable?

FYI, my last Time Spy score on the stock Strix vBios is from back in late April; I haven't even updated TS with the KPE 520 yet, as I've been working on the 5950X and its new RAM kit. Best TS CPU score is also below; afair it's either 4750 or 4800 all-core.


----------



## Arizor

@J7SC those ambient temps are fine, what's it like under your average gaming load? For example my VRAM under gaming load maxes out around 74ish C, which is absolutely fine and well within parameters, using the EKWB passive backplate, so I have no urge to move to the Active. If yours are similar I wouldn't worry about it unless you're really chasing those ultra _ultra_ 3D mark scores.


----------



## Nizzen

GAN77 said:


> Which version do you have, premium or regular?


Regular.
The only one available.
I wanted the LCD, but it was impossible to buy.


----------



## J7SC

Arizor said:


> @J7SC those ambient temps are fine, what's it like under your average gaming load? For example my VRAM under gaming load maxes out around 74ish C, which is absolutely fine and well within parameters, using the EKWB passive backplate, so I have no urge to move to the Active. If yours are similar I wouldn't worry about it unless you're really chasing those ultra _ultra_ 3D mark scores.


...it's a hybrid gaming - productivity system rather than an ultra-cubed 3DM benchmark chaser...Still, Blender & Co can heat things up, never mind CP2077 and FS2020 (at least if you play it in a long session). VRAM stays in the mid-60 C with those, what with a water-cooled GPU front and a TT Core P8 'max airflow' case. That case also has another system in it with a 6900 XT (that one for daily work / productivity plus a bit of gaming), and that card is incidentally nibbling on the 3090 Strix OC GPU results in TimeSpy...But DLSS and Ray Tracing titles are a different matter...


----------



## Arizor

J7SC said:


> ...it's a hybrid gaming - productivity system rather than an ultra-cubed 3DM benchmark chaser...Still, Blender & Co can heat things up, never mind CP2077 and FS2020 (at least if you play it in a long session). VRAM stays in the mid-60 C with those, what with a water-cooled GPU front and a TT Core P8 'max airflow' case. That case also has another system in it with a 6900 XT (that one for daily work / productivity plus a bit of gaming), and that card is incidentally nibbling on the 3090 Strix OC GPU results in TimeSpy...But DLSS and Ray Tracing titles are a different matter...


Yep I'm in the same boat, using mine for gaming + game dev, I think those temps are absolutely fine, I wouldn't bother changing a thing in your position (and yep, CP2077 is an absolute bonecrusher for OC stability...!).


----------



## yzonker

J7SC said:


> A quick confirmation re. 'delays' after switching over vBios via the little toggle switch (ie. on my Strix OC) - it seem to take two or three reboots (even after initial 'cold' reboot) for various monitoring software and Win 10 to correctly recognize and identify 'the other' vBios. I'm running the stock Strix OC v3 r_BAR on one vBios and the KPE 520 r_BAR on the other.
> 
> Also, I am using a slightly modified stock backplate on my Strix (mod re. PCIe cable clearance w/ w-block for the front of the GPU). Apart from that, I only re-pasted the rear VRAM and my VRAM temps are around 63 C at 23 C ambient...While I have all the items for a w-cooled custom back-plate setup, I'm wondering if the temp gains are worth the extra plumbing as my VRAM temps seem reasonable ?
> 
> FYI, my last Timespy score on stock Strix vBios from back in late April...haven't even updated TS w/ KPE 520 yet as I've been working on the 5950X and its new RAM kit. Best TS CPU score also below, afair it's either 4750 or 4800 all-c


I didn't think the bios switch even worked on my 3080 Ti FTW3. I would shut the machine down, flip the switch, power up: no change to the bios. Then I discovered that if I flip the switch with the machine just running in Windows and do a warm boot, it switches correctly every time. It may not be the same for your Strix, but it was contrary to what I had read about switching.


----------



## tps3443

Kingpin Hydro Copper tracking shows 07/22! I'm so ready to get this 3090 KP on a custom loop.


----------



## KedarWolf

tps3443 said:


> Kingpin Hydro Copper tracking shows 07/22! I‘m so ready to get this 3090kp on custom loop.


Maybe Optimus will release their Kingpin block, which comes standard with a water-cooled active backplate, by then? I'd hold out for it even though it isn't out yet. They make the best GPU blocks you can buy!!









Optimus Waterblock (thread on www.overclock.net): "Anyone see RTX A6000 reviews, apparently gddr6x is reason why ampere is an power hog. Yes it was reported early on that the 24GB of vram in a 24x 1GB config on the 3090 uses upwards of 90W at stock clocks. That’s why there’s speculation that we may yet see a 3090 Super or Ti with 2.4% more..."







https://twitter.com/optimus_wc?lang=en


----------



## tps3443

Nizzen said:


> Regular
> Only one avaiable.
> I wanted the lcd, but impossible to buy


Do we have the same system memory?

I run the GSkill Royal-Z 4000CL15 1.5v XMP 4x8GB kit.

The default XMP profile is 4000Mhz 15-16-16-36 @1.5V (Excellent bdie quad channel kit)

I saw your timings and thought how familiar they looked lol, because I run nearly the same timings. The only difference is I run them at 4,000MHz 15-16-16-32-290-1T with tFAW at 23. They really rip too! HCI Memtest stable.

My 7980XE's IMC seems to clap out and start dropping channels around 4,160-4,180MHz, so I'm certainly good with my timings and current frequency.

My uncore voltage is like 1,450 on auto. (That's about +535 or so.)


----------



## ericorg87

I've seen a guy who put thermal putty + a Thermalright thermal pad on his 3090 Strix and managed to stay in the low 60s while mining with a single 280mm radiator.
That is insane.
I didn't know about this thermal putty + thermal pad combo people have been using. I just blocked my Zotac 3090 with Bykski's active backplate and had an amazing drop to 80-84C memory junction with 40C water temp. I managed to overclock past +1500MHz on the memory, which was previously impossible. I was wondering if it would be worth opening my block up and adding putty when summer hits here in Brazil.


----------



## KedarWolf

tps3443 said:


> Do we have the same system memory?
> 
> I run the GSkill Royal-Z 4000CL15 1.5v XMP 4x8GB kit.
> 
> The default XMP profile is 4000Mhz 15-16-16-36 @1.5V (Excellent bdie quad channel kit)
> 
> I saw your timings and thought how familiar it looked lol. Because, I run nearly the same timings lol. Only difference is, I run them at 4,000Mhz 15-16-16-32-290-1T Tfaw at 23. They really rip too! HCI memtest stable.
> 
> My 7980XE’s IMC seems to clap out, and start dropping channels around 4,160-4,180Mhz. So i’m certainly good with my timings and current frequency.
> 
> My uncore voltage is like 1,450 by auto. (That’s about +535 or so)


I always make a full drive backup with Macrium Reflect Free (full-featured, no need to pay) after I get my OS set up and tweaked the way I like, with all the drivers installed and the standard programs and games I use.

It takes a few minutes to back it up to my secondary M.2, but I only have Diablo 3 installed for now.

You install Reflect, make a boot USB under Other Tasks (the USB needs to be formatted MBR, but it boots from UEFI too), boot from the USB, and image your drive to a secondary drive, SSD or M.2.

You restore it from the boot USB too.

Then, when testing memory overclocks, benching, all that, if I get freezes or BSODs it takes me a few minutes to restore my Windows, including the boot partitions. No more reinstalling Windows at all.

You can even uninstall the program with Revo Uninstaller (you do use Revo, right? I swear by it!) and just keep the boot USB. 🐺


----------



## KedarWolf

ericorg87 said:


> I've seen a guy that put thermal+putty + Thermalright Thermal Pad on his 3090 Strix and managed to stay in low 60s while mining with a single 280mm radiator.
> That is insane.
> I didn't know about this thermal putty + thermalpads combo people have been using. I just blocked my Zotac 3090 with Bykski's active backplate and had amazing drops to 80~84c memory junction with 40c water temp. I managed to overclock past 1500mhz in the memory which was previously impossible. I was wondering if it will be worthy opening my block up and adding Putty when the summer hits here in Brasil.


That reminds me, I need to install my EKWB active backplate before I go to bed, then leak test it outside my PC overnight.

I've had it a few weeks, but it's a bit complicated and my motivation to get it done is not super high right now.


----------



## geriatricpollywog

SoldierRBT said:


> I installed the hydro copper block. The block is okay not that great but it’s the only one available at the moment for the KP cards. The block has a 14C delta. I can get it to stay around 45-46C when pulling 500W+ but it requires a strong water cooling setup.


When you installed the Hydrocopper block, did you use the pre-installed thermal pads and paste?


----------



## mirkendargen

ericorg87 said:


> I've seen a guy that put thermal+putty + Thermalright Thermal Pad on his 3090 Strix and managed to stay in low 60s while mining with a single 280mm radiator.
> That is insane.
> I didn't know about this thermal putty + thermalpads combo people have been using. I just blocked my Zotac 3090 with Bykski's active backplate and had amazing drops to 80~84c memory junction with 40c water temp. I managed to overclock past 1500mhz in the memory which was previously impossible. I was wondering if it will be worthy opening my block up and adding Putty when the summer hits here in Brasil.


Seems totally plausible to me. I have a Strix with a Bykski block (non active backplate) with Thermalright Odyssey pads and a 4 DIMM RAM block attached to the backplate with Arctic Silver thermal epoxy.

With 27C coolant my VRAM is at 68C mining.


----------



## SoldierRBT

0451 said:


> When you installed the Hydrocopper block, did you use the pre-installed thermal pads and paste?


I used KPx and Thermalright 2mm 12.8 W/mK pads.


----------



## J7SC

...I'm just going to leave that here (A100 = 3090's big brother, 500W x8, giggles madly)


----------



## tps3443

KedarWolf said:


> Maybe Optimus will release their Kingpin block that comes standard with a water-cooled active backplate by then? I'd hold out for it even if it isn't out. They make the best GPU blocks you can buy!!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Optimus Waterblock (thread on www.overclock.net): "Anyone see RTX A6000 reviews, apparently gddr6x is reason why ampere is an power hog..."
> 
> https://twitter.com/optimus_wc?lang=en


Optimus watercooling is awesome. But the 22nd is in 3 days, when it'll be delivered lol.

I am 100% happy with getting an EVGA Kingpin Hydro Copper waterblock at MSRP. I already purchased it, and it'll be delivered on Wednesday.

I'm gonna use pressure-sensitive micro film to ensure perfect contact everywhere thermal paste or pads are located. I'm going to use LM or KPx, not sure yet. Every 0.9C or 1.2C or 2.9C less counts, LOL; temps really matter here.

With a proper mount and contact, I think the block can potentially do very well. Plus, it's practically here, just a few more days... I'm excited about it, and mostly ready to get this big KP 360mm AIO off of this Kingpin and back in the KP box. The big AIO rad is just hanging out of my case with no door; I can't fit a 4th 360 rad inside my case, unfortunately.


----------



## J7SC

tps3443 said:


> Optimus watercooling is awesome. But the 22nd is in (3 days when it’ll be delivered lol)
> 
> I am 100% happy with getting a EVGA Kingpin Hydro Copper waterblock at MSRP. I already purchased it, and it’ll be delivered on Wednesday.
> 
> I’m gonna use pressure sensitive micro film to ensure perfect contact everywhere thermal paste or pads is located. I’m going to use LM or KPx not sure yet. Every 0.9C or 1.2C or 2.9C less LOL temps really count here. I want
> 
> With proper mount and contact I think the block can potentially do very well. Plus, it’s practically here. Just a few more days... Im excited about it, and mostly ready to get this big KP 360mm AIO off of this Kingpin and back in the KP box. This big AIO rad is just hanging out of my case with no door. I can’t fit a 4th 360 rad inside my case unfortunately.


...I would also first see how the EVGA Kingpin Hydro Copper waterblock is going to perform before dismissing it. 

Re. double-sided VRAM and backplate temps, I just got the items below (el cheapo @ Amazon)... several other folks here also use those big heat spreaders (do check for clearance re. RAM sticks, though). I haven't mounted them yet on either the 3090 or BigNavi, but Thermalright puts out oversized / extra-wide 12.8 W/mK sheets which seem to work with them... at worst, it will be a $30 bad idea; at best, it will improve on my already decent VRAM temps... and as mentioned before, I do have some extra 'universal GPU blocks' I could mod for active backplate cooling if need be...


----------



## tps3443

J7SC said:


> ...I would also first see how the EVGA Kingpin Hydro Copper waterblock is going to perform before dismissing it.
> 
> Re. double-sided VRAM and back-plate temps, I just got the items below (el cheapo @ Amazon)...several other folks here also use those big heat spreaders (do check for clearance though re. RAM sticks). I haven't mounted them yet on either the 3090 or BigNavi, but Thermalright puts out oversized / extra-wide 12.8W/mk sheets which seem to work with it....at worst, it will be a $30 bad idea, at best, it will improve on my already decent VRAM temps...and as mentioned before, I do have some extra 'universal GPU blocks' I could mod for active back-plate cooling if need be...
> 
> View attachment 2518033
> 
> 
> View attachment 2518034


Oh, if Optimus drops a super good waterblock, then yeah for sure. But at this point, even at the MSRP pricing I paid, I have $2,499 into a video card and waterblock with taxes lol. I’m gonna force myself to be happy with it lol.

3090KP $2,090
KP Hydro Copper block $300
Then sales tax, shipping.

I have never stepped into such an extreme realm of PC parts. And what’s funny is that I feel like I got a great deal, and would easily do it all over again!

OMG, the PC community is in trouble as of the year 2021. I’m an average guy, not rich by any means. Can you imagine if the past PC community from like 2016-2017 could talk to the PC community from now?

LOL.


----------



## J7SC

tps3443 said:


> (....)
> ...PC community is in trouble as of the year 2021. I’m an average guy, not rich by any means. *Can you imagine if the past PC community from like 2016-2017 could talk to the PC community from now?*
> 
> LOL.


...or the PC community from 2025-2026 talk to the enthusiasts from 2021  

...while the same measurable performance re. a given benchmark will be cheaper 'per fps', the top end stuff operates on a ratchet-$ effect, IMO...NVidia and AMD now know that they can sell $2,400 GPUs EzPz


----------



## Arizor

Yep, it's always been the case that any company's "halo" offerings provide vastly diminishing returns. I think this is more true in 2021 than at any previous point in modern GPU history, but it's always been there, even back in the Voodoo days.


----------



## Nizzen

tps3443 said:


> Do we have the same system memory?
> 
> I run the G.Skill Royal-Z 4000CL15 1.5V XMP 4x8GB kit.
> 
> The default XMP profile is 4000MHz 15-16-16-36 @ 1.5V (excellent b-die quad channel kit)
> 
> I saw your timings and thought how familiar they looked lol. Because I run nearly the same timings lol. The only difference is, I run them at 4,000MHz 15-16-16-32-290-1T, tFAW at 23. They really rip too! HCI Memtest stable.
> 
> My 7980XE’s IMC seems to clap out and start dropping channels around 4,160-4,180MHz. So I’m certainly good with my timings and current frequency.
> 
> My uncore voltage is like 1.450V by auto. (That’s about +535 or so)


I'm using some old 3600c15 trident-z


----------



## tps3443

Nizzen said:


> I'm using some old 3600c15 trident-z


Worth a shot lol. Well at least that’s a good 3600CL15 kit you’ve got.


----------



## tps3443

Can anyone confirm what thermal pads I should order for the Kingpin Hydro copper waterblock? 

I would like to put the best possible thermal pads for the waterblock and backplate on the very first go, and have everything here on Thursday for the install. 

What pads should I order? I found this picture, apparently from EVGA, but it is only confusing me further lol. 

KPx? Or Kryonaut Extreme? 

Thanks for the help guys!


----------



## des2k...

tps3443 said:


> Can anyone confirm what thermal pads I should order for the Kingpin Hydro copper waterblock?
> 
> I would like to put the best possible thermal pads for the waterblock and backplate on the very first go, and have everything here on Thursday for the install.
> 
> What pads should I order? I found this picture, apparently from EVGA, but it is only confusing me further lol.
> 
> Kpx? Or Kryonaut extreme?
> 
> Thanks for the help guys!


I wouldn't use any good paste or expensive pads on your first mounts.

Unless it's the Optimus block (no standoffs), you'll be struggling to get good pressure on the core.

Most other blocks are not flat and standoffs are complete garbage.

Maybe you'll have better luck, but my EK block had to be lapped and its standoffs ground down for a good delta.


----------



## Thanh Nguyen

Where can I buy a 3-slot NVLink bridge, guys?


----------



## GAN77

Thanh Nguyen said:


> Where can I buy a 3-slot NVLink bridge, guys?





https://www.elmorlabs.com/product/nvb-3s-gpu-interconnect-bridge/


----------



## Thanh Nguyen

Anyone here try the Amazon New World game? Heard it would brick 3090s.


----------



## SoldierRBT

Thanh Nguyen said:


> Anyone here try the Amazon New World game? Heard it would brick 3090s.


I've been playing it on my 3080 FTW3 and the menu seems to draw 420-450W, but in game it's less than 300W. Also, core clocks in the menu vary a lot: it goes from 2160MHz to 2100MHz, then back to 2160MHz, then drops to 2070MHz. At 450W the PCIe slot is only 45-47W. 

My guess is that the cards that are dying are the 3090 FTW3s with unbalanced power draw from the PCIe slot.


----------



## ttnuagmada

So is the consensus that this "3090s are dying" click-bait is just early FTW3s that hadn't already fried?


----------



## Lobstar

The thread on the EVGA forums had a response saying they are investigating. Full thread: https://forums.evga.com/EVGA-3090-o...t-now-as-it-might-kill-the-card-m3432165.aspx


----------



## GRABibus

Lobstar said:


> The thread on the EVGA forums had a response saying they are investigating. Full thread: https://forums.evga.com/EVGA-3090-o...t-now-as-it-might-kill-the-card-m3432165.aspx


Deleted


----------



## Lobstar

Oopsie poopsie.


----------



## GRABibus

Lobstar said:


> Huh?
> View attachment 2518257


i deleted my comment as it was not accepted by moderator 😊


----------



## Sheyster

Lobstar said:


> The thread on the EVGA forums had a response saying they are investigating. Full thread: https://forums.evga.com/EVGA-3090-o...t-now-as-it-might-kill-the-card-m3432165.aspx


Ugh... Glad I passed on the 3090 FTW when my queue number came up, given the various issues that card has experienced. Apparently this is happening with at least one of the Gigabyte 3090s as well, not sure which one though.


----------



## Thanh Nguyen

Is a Seasonic 1300W enough for 2x 3090 running Port Royal, guys?


----------



## J7SC

Thanh Nguyen said:


> Is a Seasonic 1300W enough for 2x 3090 running Port Royal, guys?


...if the two 3090s run a 500W+ BIOS plus an OC'ed CPU and peripherals, it might get a bit marginal, but why not try a few runs


----------



## Nizzen

Thanh Nguyen said:


> Is a Seasonic 1300W enough for 2x 3090 running Port Royal, guys?


I'm running 2x 1300W with 2x 3090, but I'm using an overclocked 7980XE 

550W*2 if using the 1000W BIOS, + CPU + fans + pumps + MB etc... Port Royal isn't using the CPU much, so it's OK with 1300W. I think the PSU can draw 1400W without tripping...

Used a 7980XE + 2x 2080Ti with the 380W BIOS for 2 years without any problems.
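The back-of-envelope budget above can be written out explicitly. A minimal sketch, assuming illustrative per-component figures (the ~550W per GPU is from the post; the CPU and accessory numbers are placeholders, not measurements):

```python
# Rough PSU headroom check for 2x 3090 on XOC BIOSes running Port Royal.
# All figures are assumptions for illustration, not measured values.
loads_w = {
    "gpu_1": 550,            # 3090 on a 1000W XOC BIOS, heavy bench load
    "gpu_2": 550,
    "cpu": 150,              # Port Royal barely touches the CPU
    "fans_pumps_board": 80,  # fans + pumps + motherboard, rough guess
}
psu_rating_w = 1300

total_w = sum(loads_w.values())
verdict = "over" if total_w > psu_rating_w else "within"
# Quality PSUs often ride out brief excursions past their label rating,
# but sustained draw above it risks OCP shutdowns.
print(f"~{total_w} W total vs {psu_rating_w} W PSU ({verdict} rating)")
```

With these placeholder numbers the total lands just past the label rating, which matches the "can draw 1400W without tripping" experience above.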


----------



## tps3443

des2k... said:


> I wouldn't use any good paste or expensive pads on your first mounts.
> 
> Unless it's the Optimus block (no standoffs), you'll be struggling to get good pressure on the core.
> 
> Most other blocks are not flat and standoffs are complete garbage.
> 
> Maybe you'll have better luck, but my EK block had to be lapped and its standoffs ground down for a good delta.


I have micro-thin pressure film to confirm good contact and pressure being applied to the components and the die of the 3090KP. The film is designed to turn red within the normal mounting pressure of computer heatsinks and waterblocks. And you can find out exactly where the pressure is high or low, and exactly how much pressure is being applied to that area.

I can get the mount right without running the card. I’m just a little confused on thermal pad thickness, and what I should actually order.

Some say 1-2mm. And evga says 2-3mm.

Then others specify something entirely different. I’m lost lol. And I keep getting different answers.

I would rather not order 0.5-1-2-3-4-5mm thick thermal pads all at the same time due to waste and cost. 

I was hoping to narrow this down to


----------



## lmfodor

Hey, I just saw the news about the New World game that seems to break a lot of EVGA FTW3 cards. Has anyone tried it on Strix or Kingpin?


Sent from my iPhone using Tapatalk Pro


----------



## Panchovix

Based on this post


https://www.reddit.com/r/nvidia/comments/oollg4

It seems it mostly affects EVGA FTW3 3090 models, though some Gigabyte models seem to be affected too.

And, by exactly this comment


https://www.reddit.com/r/nvidia/comments/oollg4/_/h5zkrkl

"edit: Some users in nwg sub reporting ASUS 3080 water loop, MSI 3080 Suprim and EVGA 3080 FTW3 having similar issues... Maybe more likely to happen to triple 8 pin cards?"


----------



## tps3443

Nizzen said:


> I'm running 2x 1300W with 2x 3090, but I'm using an overclocked 7980XE
> 
> 550W*2 if using the 1000W BIOS, + CPU + fans + pumps + MB etc... Port Royal isn't using the CPU much, so it's OK with 1300W. I think the PSU can draw 1400W without tripping...
> 
> Used a 7980XE + 2x 2080Ti with the 380W BIOS for 2 years without any problems.


People in the 2080Ti owners club would always tell me I’m crazy, that my 1200 watt PSU was plenty and my (2) cards used 400-600 watts maximum combined lol!!.. And that my 5GHz 7980XE and (2) power-modded 2080Tis were easily within the limits of my single 1200 watt Seasonic Platinum lol. No one seemed to believe me that I was running out of power..

My 2080Tis were soldered and flashed, so they could pull around 520+ watts each. Don’t even get me started on the CPU lol. You already know the 7980XE isn’t the most efficient in the world lol. Though it is a monster for being almost 4 years old haha.

I had to run my (2) 2080Tis at stock clocks in RDR2, and my 7980XE downclocked, to stay under power shutdowns.


----------



## tps3443

lmfodor said:


> Hey, I just saw the news about the New World game that seems to break a lot of EVGA FTW3 cards. Has anyone tried it on Strix or Kingpin?
> 
> 
> Sent from my iPhone using Tapatalk Pro


I will try it out. I saw that too. I have a Kingpin 3090, and I will gladly be the test subject. I get off working from home around 8PM and will DL that game! lol


----------



## des2k...

tps3443 said:


> I have micro-thin pressure film to confirm good contact and pressure being applied to the components and the die of the 3090KP. The film is designed to turn red within the normal mounting pressure of computer heatsinks and waterblocks. And you can find out exactly where the pressure is high or low, and exactly how much pressure is being applied to that area.
> 
> I can get the mount right without running the card. I’m just a little confused on thermal pad thickness, and what I should actually order.
> 
> Some say 1-2mm. And evga says 2-3mm.
> 
> Then others specify something entirely different. I’m lost lol. And I keep getting different answers.
> 
> I would rather not order 0.5-1-2-3-4-5mm thick thermal pads all at the same time due to waste and cost.
> 
> I was hoping to narrow this down to


Prob a good idea to get a digital caliper and measure your PCB components / block. 
I don't have a KP or Hydro Copper, but from this picture it's flat for the core / mem.
For the memory, I'll be surprised if 0.5mm or 1mm pads will reach those components.


----------



## des2k...

Panchovix said:


> Based on this post
> 
> 
> https://www.reddit.com/r/nvidia/comments/oollg4
> 
> It seems it mostly affects EVGA FTW3 3090 models, though some Gigabyte models seem to be affected too.
> 
> And, by exactly this comment
> 
> 
> https://www.reddit.com/r/nvidia/comments/oollg4/_/h5zkrkl
> 
> "edit: Some users in nwg sub reporting ASUS 3080 water loop, MSI 3080 Suprim and EVGA 3080 FTW3 having similar issues... Maybe more likely to happen to triple 8 pin cards?"


Didn't even know this game existed. I signed up for beta, not sure if it's open.

It's strange how a simple game could brick cards. This guy was told to use a 'stable' vbios


----------



## GRABibus

lmfodor said:


> Hey, I just saw the news about the New World game that seems to break a lot of EVGA FTW3 cards. Has anyone tried it on Strix or Kingpin?
> 
> 
> Sent from my iPhone using Tapatalk Pro


Try it and let us know 😂


----------



## Falkentyne

des2k... said:


> Didn't even know this game existed. I signed up for beta, not sure if it's open.
> 
> It's strange how a simple game could brick cards. This guy was told to use a 'stable' vbios
> 
> View attachment 2518285


If you preorder the game you get beta access.


----------



## lmfodor

GRABibus said:


> Try it and let us know


I have a virgin strix! Waiting for the active backplate!!! I don’t want to try it on air 


Sent from my iPhone using Tapatalk Pro


----------



## J7SC

...fyi re. 'that Amazon' game / beta...as always, take it with a pinch of salt at this early stage of the problem


----------



## Lobstar

Sheyster said:


> Ugh... Glad I passed on the 3090 FTW when my queue number came up, with the various issues that card has experienced. Apparently this is happening with at least one of the Gigabyte 3090's as well, not sure which one though.


I wanted the Strix, but this came up in the EVGA queue first and Asus' website is a dumpster fire. I'm happy with it now that I have my Optimus block, but still upset I had to spend over $200 in shipping fees for RMAs ultimately resulting from an insufficient voltage controller.


----------



## mirkendargen

Wasn't there some other game with unlimited framerates in the menus that was popping EVGA cards a while back? Was it Overwatch?


----------



## Lobstar

mirkendargen said:


> Wasn't there some other game with unlimited framerates in the menus that was popping EVGA cards awhile back? Was it Overwatch?


GTAV and Halo were both doing it consistently.


----------



## Falkentyne

Anyone with a shunt-modded (non-EVGA) card or the Kingpin card (with MSVDD jumpers enabled), or Galax / Kingpin XOC BIOSes, or a Strix, who has beta access (or a preorder, which locks you into the beta) for that New World game: want to see if your card can handle unlocked FPS and shunt/BIOS mods? If Quake 2 RTX and Timespy Extreme don't kill the card...


----------



## tps3443

My 3090 KP seems to have some great memory on it! I managed to squeeze a +17% memory bandwidth increase on top of the already fast 936.2 GB/s stock bandwidth. 

Still tweaking at it. Hydro Copper block will be delivered today.
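As a sanity check on that figure: GDDR6X bandwidth is just the effective per-pin data rate times the bus width. A quick sketch, assuming the stock ~19.5 Gbps rate and 384-bit bus from the specs at the top of the thread:

```python
# Memory bandwidth = effective data rate (Gbps per pin) * bus width (bits) / 8
stock_rate_gbps = 19.5   # 3090 stock GDDR6X effective rate (~19.5 Gbps)
bus_width_bits = 384

stock_bw = stock_rate_gbps * bus_width_bits / 8  # GB/s at stock
oc_bw = stock_bw * 1.17                          # the +17% overclock above
print(f"stock ~{stock_bw:.1f} GB/s, +17% ~{oc_bw:.0f} GB/s")
```

That works out to roughly 1,095 GB/s, which lines up with the figure reported a few posts later.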


----------



## V I P E R

tps3443 said:


> My 3090 KP seems to have some great memory on it! I managed to squeeze a +17% memory bandwidth increase on top of the already fast 936.2GB bandwidth.
> 
> Still tweaking at it. Hydro Copper block will be delivered today.


My FTW3 makes 1070.6 GB/s.


----------



## Nizzen

lmfodor said:


> I have a virgin strix! Waiting for the active backplate!!! I don’t want to try it on air
> 
> 
> Sent from my iPhone using Tapatalk Pro


The Asus Strix 3090 can't die from this


----------



## chispy

It seems the flaws of some RTX 3090 designs are showing up now with this game :/ . Mostly EVGA RTX 3090s are the ones dying left and right.









Design-related EVGA problem instead of NVIDIA issue? EVGA GeForce RTX 3090 vs. Amazon’s New World and first insights | Exclusive | igor'sLAB






www.igorslab.de


----------



## skandertje

So, I have been down the rabbit hole looking for a solution to fit a 3090 FE in my case. I have two PCIe slots, but the card is 3-slot, namely a 3090 Founders Edition (so that means a non-reference PCB). I kind of want to mod my 3090 FE so that it will fit. I am at the point of taking an iron saw and making it fit, but I prefer to keep the original parts intact. So what *I need is a two-slot I/O plate that would fit the 3090 FE*. I have searched the Chinese sellers and they only sell reference editions (very poor product descriptions); Nvidia won't give/sell me one. The water block sellers don't have one that could be screwed into the PCB, and I have not found anyone who would sell me their old 3-slot FE I/O plate. Can anyone help me find a solution to this crazy problem? Why did Nvidia make the 3090 3-slot? Makes no sense.


----------



## mirkendargen

The FE water blocks come with 2 slot brackets, reach out to the companies selling them to see if you can buy just a bracket.



skandertje said:


> So, I have been down the rabbit hole looking for a solution to fit a 3090 FE in my case. I have two PCIe slots, but the card is 3-slot, namely a 3090 Founders Edition (so that means a non-reference PCB). I kind of want to mod my 3090 FE so that it will fit. I am at the point of taking an iron saw and making it fit, but I prefer to keep the original parts intact. So what *I need is a two-slot I/O plate that would fit the 3090 FE*. I have searched the Chinese sellers and they only sell reference editions (very poor product descriptions); Nvidia won't give/sell me one. The water block sellers don't have one that could be screwed into the PCB, and I have not found anyone who would sell me their old 3-slot FE I/O plate. Can anyone help me find a solution to this crazy problem? Why did Nvidia make the 3090 3-slot? Makes no sense.
> 
> View attachment 2518335


----------



## Lobstar

skandertje said:


> I have two PCI-e slots, but the card is 3 slot


You do realize that no matter how many 'slots' a card takes up, it still just uses one of the connectors on the motherboard, right? Are you saying you only have two expansion slots on the back of your case?


----------



## Falkentyne

chispy said:


> It seems the flaws of some RTX 3090 designs are showing up now with this game :/ . Mostly EVGA RTX 3090s are the ones dying left and right.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Design-related EVGA problem instead of NVIDIA issue? EVGA GeForce RTX 3090 vs. Amazon’s New World and first insights | Exclusive | igor'sLAB
> 
> 
> 
> 
> 
> 
> www.igorslab.de


I don't get it.
Igor is saying that the fan controller IC is blowing up?
But buildzoid is saying that fuses are blowing up because MSVDD power spikes are triggering OCP, or blowing fuses somehow?


----------



## GAN77

Falkentyne said:


> Igor is saying that the fan controller IC is blowing up?
> But buildzoid is saying that fuses are blowing up because MSVDD power spikes are triggering OCP, or blowing fuses somehow?


Igor usually talks a lot and is not to the point. He once talked about capacitors and was wrong.
Buildzoid makes a lot more sense.


----------



## Lobstar

GAN77 said:


> Igor usually talks a lot and is not to the point. He once talked about capacitors and was wrong.
> Buildzoid makes a lot more sense.


Yet neither will acknowledge that EVGA had to release a redesigned board due to power issues and these boards are obviously dying of power issues.


----------



## tps3443

V I P E R said:


> My FTW3 makes 1070.6 GB/s.
> View attachment 2518322


Hey, that’s awesome. I haven’t adjusted any voltages on my memory yet. But so far, I am at 1,095 GB/s bandwidth, +17% on top of the default 936.2. Not a huge difference either way, though. My memory is pretty toasty on the Kingpin Hybrid cooler; the default Hybrid cooler doesn’t cool the memory well. It’s essentially passively cooled unless the fan on the card is turned up.

My Kingpin Hydro Copper just arrived today though, and I am hoping I can get much more out of it with the additional cooling and by attempting to add voltage. 

Still haven’t hit a wall on this memory yet, though. So there’s no telling how high it can go.


----------



## tps3443

It’s time! I am going to run the 3DMark stress test exactly how the card is first, then I am going to run it again after installing the Hydro Copper.

I have also been running the memory at +1700, which is totally fine. Maybe additional voltage and better cooling will get me even further!


----------



## des2k...

Falkentyne said:


> I don't get it.
> Igor is saying that the fan controller IC is blowing up?
> But buildzoid is saying that fuses are blowing up because MSVDD power spikes are triggering OCP, or blowing fuses somehow?


might be two different issues


----------



## tps3443

des2k... said:


> might be two different issues, or could be related
> 
> the good question would be: why did EVGA decide to use ICX sensors and not Nvidia sensors, and why the fuses?
> 
> The chip itself drives the voltage controllers, which in turn drive the VRM stages, and that works fine on the reference 2x8-pin / FE board, no fuses needed. The boost/load is controlled by Nvidia drivers also


People can at least rest assured if their GPU is an EVGA, because they’ll certainly replace it under the 2nd-hand warranty, or a new warranty.


----------



## J7SC

...this reminds me a bit of the 2080 Ti 'test escapes' where a bunch of them failed early on...after many on YT falsely blamed Micron etc (which was nonsensical as the regular 2080 had the same VRAM type / memory controller), it turned out that it was indeed NVidia's chip production line which had the aforementioned test escapes - meaning chips that failed QC and should never have been passed on to vendors in the first place.

...this current Amazon World Game issue might clearly have different causes, but when I looked for a MSRP - 3090 in late January, I shied away from the EVGA FTW3 even when I found one and went for the Strix instead as there were other power-related issues in the news re. the EVGA FTW3. Worth noting that the Kingpin has a different PCB and power delivery and I would take that one any day (or night...).

..finally, there are just badly-written software apps out there which can push your hardware over the edge. Theoretically, fail-safes should kick in, especially with a card with fuses, and maybe that is exactly what happened. Still, this is another one of those annoying 'blame the other vendor' situations for the consumer.


----------



## Falkentyne

J7SC said:


> ...this reminds me a bit of the 2080 Ti 'test escapes' where a bunch of them failed early on...after many on YT falsely blamed Micron etc (which was nonsensical as the regular 2080 had the same VRAM type / memory controller), it turned out that it was indeed NVidia's chip production line which had the aforementioned test escapes - meaning chips that failed QC and should never have been passed on to vendors in the first place.
> 
> ...this current Amazon World Game issue might clearly have different causes, but when I looked for a MSRP - 3090 in late January, I shied away from the EVGA FTW3 even when I found one and went for the Strix instead as there were other power-related issues in the news re. the EVGA FTW3. Worth noting that the Kingpin has a different PCB and power delivery and I would take that one any day (or night...).
> 
> ..finally, there just is badly-written software apps out there which can push your hardware over the edge. Theoretically, fail-safes should kick in, especially with a card with fuses and may be that is exactly what happened. Still, this is another one of those annoying 'blame the other vendor' situations for the consumer.


It was bad or defective solder. More precisely, the VRAM chip next to the PCIE slot that was losing its connection (which is why when the card cooled down, e.g. to be shipped out, it suddenly worked again).
That same chip is the hottest VRAM chip on Ampere cards.


----------



## J7SC

Falkentyne said:


> It was bad or defective solder. More precisely, the VRAM chip next to the PCIE slot that was losing its connection (which is why when the card cooled down, e.g. to be shipped out, it suddenly worked again).
> That same chip is the hottest VRAM chip on Ampere cards.


...no problem accepting that, though 'bad solder' is still a QC issue at the end of the day. However, I'm steering more towards the 'expectations' of a consumer acting on imperfect information, and this latest Amazon World game beta issue doesn't help


----------



## tps3443

Evga Hydro Copper Kit unboxing


----------



## GAN77

tps3443 said:


> Evga Hydro Copper Kit unboxing


I got dizzy from the constant rotation of the block in your hands)


----------



## Nizzen

No active backplate for the Hydro Copper? 🤔


----------



## D-EJ915

Turns out my Aorus Xtreme card had a really bad soldering job on one of the VRM capacitors and it broke off lol. I soldered it back on but was having issues running benches; turns out it was my bench system with the new benchmark that was unstable, and my crappy soldering was fine lol. I really hate when stuff is badly done from the factory.


----------



## tps3443

My Kingpin 3090 has some writing and dots on the die. 

What could 199K mean?


----------



## newls1

Guess that pretty much confirms they use hand-binned dies


----------



## GAN77

tps3443 said:


> What could 199K mean?


Possibly 1995, the chip frequency when it was tested at the factory.


----------



## Arizor

GAN77 said:


> Possibly 1995 chip frequency when tested at factory.


Yep to me that looks like 1995, which would make sense as a chip frequency.


----------



## WHeelbeer

Did anyone try to flash the Gigabyte 3090 Turbo 24G vBIOS with the Gigabyte 3090 Gaming OC one, to get a higher board power limit? Is it too risky?
Both cards have almost the same PCB and the same pin connectors.
Here are the Gaming OC specs: Target 370.0 W, Limit 390.0 W; and the Turbo 24G specs: Target 350.0 W, Limit 350.0 W.
I have a custom watercooling loop, so my Turbo would handle the heat very well.
Best regards


----------



## yzonker

Should be low risk.


----------



## WHeelbeer

yzonker said:


> Should be low risk.


Thank you. The RTX 3090 Vision OC should be even better, because it's also a 1-BIOS card; the Gaming OC has 2.


----------



## SoldierRBT

tps3443 said:


> My Kingpin 3090 has some writing and dots on the die.
> 
> What could 199K mean?


Interesting. My KP has the same marks: 2 dots on the top left corner, "1" on the top right corner and "1995" on the lower right corner. 

Could you test your chip with ATI Tool? Make sure to max out the voltage slider and get idle temps of 22-23C before opening the v/f curve. 1.10v should be 2055-2070MHz at stock. Then add 270-285 to that voltage point and click run. It should hold 2325-2340MHz at 30-31C.
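The target clocks in that test are just the stock 1.10 V bin plus the curve offset; a trivial sketch of the arithmetic (the bin and offset values are from the post above):

```python
# Stock boost bins at the 1.10 V point on a good KP chip, per the post above,
# plus the curve offset applied to that voltage point.
stock_bins_mhz = (2055, 2070)
offset_mhz = 270  # low end of the suggested 270-285 offset

targets = [clk + offset_mhz for clk in stock_bins_mhz]
print(targets)  # -> [2325, 2340]
```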


----------



## Arizor

SoldierRBT said:


> Interesting. My KP has same marks. 2 dots on the top left corner. "1" on the top right corner and "1995" on lower right corner.
> 
> Could you test your chip with ATI Tool? Make sure to max out the voltage slider and idle temps 22-23c before opening the v/f curve. 1.10v should be 2055-2070MHz at stock. Then add 270-285 to that voltage point and click run. It should hold 2325-2340MHz at 30-31C.
> View attachment 2518439


Yeah I reckon this confirms they’re noting factory tested overclocks.

Out of interest, what’s your “effective clock” with that curve, using HWiNFO? I’ve found that with a curve like that, Afterburner misrepresents the real (aka effective) clock speed I’m achieving by as much as 100-150MHz.


----------



## SoldierRBT

Arizor said:


> Yeah I reckon this confirms they’re noting factory tested overclocks.
> 
> Out of interest what’s your “effective clock” with that curve, using HWInfo? I’ve found setting a curve like that Afterburner misrepresents the real (aka effective) clock speed I’m achieving by as much as 100-150mhz.


For v/f OC like the one in the photo for benching, you'd need to add NVVDD/MSVDD voltages to increase internal clocks. It's important to test requested clocks because they're the ones capable of crashing. Internal clocks only add performance. For daily use, normal OC offset + dip switches seems to be the best option.


----------



## tps3443

SoldierRBT said:


> Interesting. My KP has same marks. 2 dots on the top left corner. "1" on the top right corner and "1995" on lower right corner.
> 
> Could you test your chip with ATI Tool? Make sure to max out the voltage slider and idle temps 22-23c before opening the v/f curve. 1.10v should be 2055-2070MHz at stock. Then add 270-285 to that voltage point and click run. It should hold 2325-2340MHz at 30-31C.
> View attachment 2518439


I will try it later today if I get some more free time.

1995MHz does make sense. I know that if I run the Port Royal stress test, my frequency starts out at 1,995MHz; after a few minutes the card hits 2,010MHz, then it goes to 2,025MHz and locks in at 2,025MHz.


No overclocking, or additional voltages.


But yes I will try that. Let me get my temps in check. I just got it on my loop.


----------



## GRABibus

Lobstar said:


> Yet neither will acknowledge that EVGA had to release a redesigned board due to power issues and these boards are obviously dying of power issues.





tps3443 said:


> My Kingpin 3090 has some writing and dots on the die.
> 
> What could 199K mean?


The PR score


----------



## tps3443

GRABibus said:


> The PR score


Coming soon. Unfortunately I am working right now, staring at a 3090KP HC while logged in to a super slow remote system for the next 6-7 hours. So no Port Royal yet.


----------



## tps3443

Just wanted to say that my memory temps are ludicrous! The hottest I saw was about 55C on the back side. And my backplate is cooking like never before! It hits right at about the same temp as the memory does. Too hot to touch, that’s for sure. I imagine with a fan blowing on it, I may be able to get my back GDDR6X modules into the very low 50’s or under 50. Now I see why people install heatsinks and fans on these backplates. It looked kinda silly at first seeing pictures in the forums of people doing that, but on my Hybrid the backplate just didn’t get nearly this hot.

I am only using the pre-included thermal pads from the KP Hydro Copper kit. They work amazingly. I cleaned the crap out of the mem modules with alcohol and laid all the pads down just so.

My memory was hitting 75C on the back before the Hydro Copper block. So going down to 55C from that is huge!


----------



## des2k...

tps3443 said:


> Just wanted to say that my memory temps are ludicrous! The hottest I saw was about 55C on the back side. And my backplate is cooking like never before! It hits right at about the same temp as the memory does. Too hot to touch, that’s for sure. I imagine with a fan blowing on it, I may be able to get my back GDDR6X modules into the very low 50’s or under 50. Now I see why people install heatsinks and fans on these backplates. It looked kinda silly at first seeing pictures in the forums of people doing that, but on my Hybrid the backplate just didn’t get nearly this hot.
> 
> I am only using the pre-included thermal pads from the KP Hydro Copper kit. They work amazingly. I cleaned the crap out of the mem modules with alcohol and laid all the pads down just so.
> 
> My memory was hitting 75C on the back before the Hydro Copper block. So going down to 55C from that is huge!


Test a mining benchmark or Quake RTX; it's usually 10C more vs regular usage if the contact is good.

I'm at 64C for the back mem, with a heatsink + fan on my EK backplate.


----------



## tps3443

des2k... said:


> Test a mining benchmark or Quake RTX; it's usually 10C more vs regular usage if the contact is good.
> 
> I'm at 64C for the back mem, with a heatsink + fan on my EK backplate.


I had about 30 minutes of time to spare before work today, so I ran the Port Royal stress test. That’s when I saw the back modules were at 55C. Never tried Quake RTX before.


I logged to a GPU-Z file before and after going from the Hybrid to the Hydro Copper. I haven't looked thoroughly at the results yet, but I am interested for sure. The memory before-and-after temps were the most incredible of anything.
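For anyone comparing before/after logs the same way: GPU-Z writes its sensor log as CSV, so a few lines of script can pull the peak and average temps out of each file. A minimal sketch; the column name "Memory Temperature [°C]" and the inline sample rows are assumptions, so copy the exact header from your own log:

```python
import csv
import io
from statistics import mean

# Inline stand-ins for two GPU-Z sensor logs; the column header below is
# an assumption -- paste the exact one from your own CSV.
COL = "Memory Temperature [°C]"
before_log = f"{COL}\n74.0\n75.0\n73.5\n"
after_log = f"{COL}\n54.0\n55.0\n53.0\n"

def peak_and_avg(text, col=COL):
    """Parse a GPU-Z-style CSV and return (max, mean) of one sensor column."""
    vals = [float(row[col]) for row in csv.DictReader(io.StringIO(text))]
    return max(vals), mean(vals)

b_peak, b_avg = peak_and_avg(before_log)
a_peak, a_avg = peak_and_avg(after_log)
print(f"peak {b_peak:.1f}C -> {a_peak:.1f}C, avg {b_avg:.1f}C -> {a_avg:.1f}C")
```

To use it on real logs, replace the inline strings with `open("GPU-Z Sensor Log.txt").read()` (or whatever your log is named).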


----------



## 1devomer

Falkentyne said:


> It was bad or defective solder. More precisely, the VRAM chip next to the PCIE slot that was losing its connection (which is why when the card cooled down, e.g. to be shipped out, it suddenly worked again).
> That same chip is the hottest VRAM chip on Ampere cards.





J7SC said:


> ...no problem accepting that, though 'bad solder' is still a QC issue at the end of the day. However, I'm steering more towards the 'expectations' of a consumer acting on imperfect information, and this latest Amazon World game beta issue doesn't help


Well, I never came back to check what the issue really was, thank you for pointing it out.

At the time, Charlie from hardforum was following the matter, but he didn't disclose what the issue was; he later joined Intel, then left.
We were arguing, at the time, about whether it was the memory controller or the memory chips themselves.
It seems I was wrong; I bet on the GPU memory controller.

Though it does not surprise me if this issue arose from bad QA/QC on the soldering.
Nvidia's soldering job is pretty awful, leaving empty exposed copper pads and cheaping out on solder quantity and quality, which is not great on a $2k card. 🙆‍♀️
_(purely and simply, from an engineering and manufacturing point of view)_


----------



## Falkentyne

1devomer said:


> Well, i never came back to check what was really the issue, thank you for pointing out.
> 
> At the time, Charlie from hardforum was following the matter, but he didn't disclose what was the issue, he later on joined Intel, then left.
> And we were arguing, at the time, about either the memory controller or the memory chips themselves.
> I was wrong it seems, i bet on the gpu memory controller.
> 
> Tho, it does not surprise me, if this issue arisen from bad QA/QC on soldering.
> Nvidia soldering job is pretty awful, leaving empty exposed copper pads, to cheap on solder quantity and quality, which is not great on a 2k$ card. 🙆‍♀️
> _(purely and simply, from an engineering and manufacturing point of view)_


It was a German solderer on Elmor's Discord who found out. It happened on his 2080 card; he ran a (Linux or some inline I2C) test which showed memory errors at a certain location, then he checked the soldering and saw the problem on one VRAM chip. I think his name is oldirdey. He has a 2080 Ti shunt-mod soldering video on YouTube.


----------



## pantsoftime

Just FYI, Gigabyte posted a new F3 BIOS for the Aorus Xtreme this week at AORUS GeForce RTX™ 3090 XTREME 24G Gallery | Graphics Card - GIGABYTE Global 
I've been running it for a few hours without any issues. I noticed a VR game I ran earlier was a bit smoother, but that could have been due to a couple of reasons.


----------



## jincuteguy

Anyone tried the new Amazon MMO New World with your 3090 FE card? I'm scared to play it and wonder if they have fixed the issue yet.


----------



## macangel

So, I was finally able to sell one of my kidneys and pick up a 3090 today. Initially I ran over to pick up the MSI RTX 3090 Suprim X, but I noticed they had a Gigabyte Xtreme there as well.

I have a rather rare/unique gaming setup: I run three 55" Samsung RU8000 4K TVs in surround mode. These are the TVs from 2019 that came with FreeSync before Samsung ditched it for the last two generations, but I had never been able to test it, and I couldn't find anyone that had tested the new 30 series (or even proper testing with the 20 series) cards and the Samsung RU8000 with VRR. Before this, I was running a pair of GTX 1080 Tis in SLI.

Anyway, the Gigabyte is the only one that has some cards with 3 HDMI ports, so as much as I hate Gigabyte (good products; horrible, HORRIBLE customer support/service), I decided to try it anyway in hopes that the HDMI would work (I bought three Club 3D DP1.4-to-HDMI 2.0b adapters from Amazon a few days ago, but they don't support VRR).

Got it home, hooked it up, tinkered around with it, and crashed my computer repeatedly trying to use CRU (Custom Resolution Utility) to get rid of that stupid extra 4K resolution that TVs support but none display, before saying F*** it and just working around it. What I did notice was that when I turn on Game Mode and FreeSync Ultimate on the TVs, Nvidia Control Panel does recognize them as G-Sync compatible displays, including when I have them in surround mode.

So, gonna get around to trying to O/C it later on. But I have been confused about the custom ROMs on the first page. On the list, under Power Limit, mine says 420/450. What does that mean? It's a 420W card, but with the ROM it goes to 450W?
Also, I asked before but didn't get an answer: do the new ROMs have Resizable BAR support? Or is that separate, something I have to flash?


----------



## J7SC

....fyi, Igor's Lab has an interesting comparison of thermal pads with different W/mK ratings on GDDR6X ...

source


----------



## tps3443

macangel said:


> So, I was finally able to sell one of my kidneys and pick up a 3090 today. Initially I ran over to pick up the MSI RTX 3090 Suprim X, but I noticed they had a Gigabyte Xtreme there as well. I have a rather rare/unique gaming setup. I run three 55" Samsung RU8000 4K TVs in surround mode. These are the TVs from 2019 that came with Freesync before Samsung ditched it for the last two generations, but I had never been able to test it, and I couldn't find anyone that had tested the new 30 Series (or even proper testing with the 20 series) cards and the Samsung RU8000 with VRR. Before this, I was running a pair of GTX1080ti's in SLI. Anyway, the Gigabyte is the only one that has some cards that have 3 HDMI ports, so as much as I hate Gigabyte (good products, horrible, HORRIBLE customer support/service), I decided to try it anyway in hopes that the HDMI would work (I bought three 3D Club DP1.4 to HDMI2.0b from Amazon a few days ago, but it doesn't support VRR). Got it home, hooked it up, tinkered around with it, crashed my computer repeatedly trying to use CRU (Custom Resolution Utility) to get rid of that stupid extra 4K resolution that TVs support, but none display, before saying F*** it and just work around it. What I did notice was that when I turn on Game Mode and FreeSync Ultimate on the TVs, NVidia Control Panel does recognize them as G-Sync compatible displays, including when I have them in surround mode.
> So, gonna get around to trying to O/C it later on. But I have been confused about the custom ROMs on the first page. On the list, under Power Limit, for mine it says 420/450. What does that mean? It's a 420 card, but with the ROM it goes to 450?
> Also, I asked before, but didn't get an answer. Do the new ROMs have BAR support? Or is that separate that I have to flash?



The default power limit is 420 watts, and with overclocking programs like PX1 or MSI Afterburner you can increase that power limit to 450 watts.

So 420 watts would be 100% (default). Most overclocking utilities have a little power slider that lets you increase the power limit, so 450 watts would be around 107%-108%.
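The slider percentage is just the target wattage divided by the card's default limit. A quick sanity check (the 420W default is this particular card's; substitute your own):

```python
def power_slider_percent(target_w, default_w=420):
    """Power-limit slider setting, where 100% == the card's default limit."""
    return 100 * target_w / default_w

# A 420W-default card raised to its 450W cap:
print(f"{power_slider_percent(450):.1f}%")  # -> 107.1%
```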


----------



## macangel

KedarWolf said:


> If anyone wants 15% off GELID pads (I'd go for the Extreme, they are really soft and compress much better than the Ultimate and Thermalright pads), use this code in the official Gelid store.
> 
> FRIEND-8JQVPBX
> 
> GELID Store | Powered by GELID Solutions: best-in-class products for gamers and tech enthusiasts. (gelidstore.com)


Awesome. I was checking these out on Amazon and Newegg earlier today. Do they make a big difference?


----------



## macangel

tps3443 said:


> The default power limit is 420 watts. And with overclocking programs like PX1, or MSI Afterburner you can increase that power limit to 450 watts.
> 
> So 420 watts would be (100% or default). And you have a little power slider in most overclocking utilities that allows you to increase the power limit. So 450 watts would be around probably 107%-108%.


Ah, gotcha. Awesome, thanks for the info.
Do people get much of a boost going to the 1000W BIOS? Over 800 pages to go through, lol.


----------



## Lobstar

macangel said:


> Do people get much of a boost going to the 1000?


I don't have any experience with the 1kW BIOS options, but I am running the KPE 520W ReBAR BIOS with a lot of success.

Port Royal: I scored 15 595 (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) (www.3dmark.com)

Time Spy: 23 191 graphics score, 21 861 overall (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) (www.3dmark.com)

BIOS: EVGA RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## tps3443

Lobstar said:


> I don't have any experience with the 1kW BIOS options, but I am running the KPE 520W ReBAR BIOS with a lot of success.
> 
> Port Royal: I scored 15 595 (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) (www.3dmark.com)
> 
> Time Spy: 23 191 graphics score, 21 861 overall (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) (www.3dmark.com)
> 
> BIOS: EVGA RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Yeah, I run the same BIOS on my KP. I just moved over to the normal BIOS for gaming though. My 3090 KP is running pretty cool on the Hydro Copper with the normal BIOS. (No need to OC such a card for gaming, lol.) 2025MHz is plenty.

Very fast! I love it!


----------



## jura11

J7SC said:


> ....fyi, Igor's lab has an interesting comparison of thermal pads with different W/mK ratings on GDDR6X ...
> 
> source


Would love to see a comparison between the stock Alphacool pads supplied on their blocks and Bykski or even EKWB.

Also, he is only comparing Alphacool pads; how hard would it be for him to compare against Thermalright, Gelid, or Fujipoly pads?

Alphacool pads are just overpriced, and I won't touch Alphacool pads, fittings, etc. The one thing of theirs that is okay is the radiators.

Hope this helps 

Thanks, Jura


----------



## des2k...

macangel said:


> ah, gotcha. Awesome, thanks for the info.
> Do people get much of a boost going to the 1000? over 800 pages to go through, lol.


The 1000W BIOS works fine on my Zotac 2x8-pin.
No ReBAR, but it lets me hold a high OC on core and mem in all games for the extra 3-5 fps, if you need that.


----------



## tps3443

J7SC said:


> ....fyi, Igor's lab has an interesting comparison of thermal pads with different W/mK ratings on GDDR6X ...
> 
> source


I didn't see this before. Thank you for sharing!

I almost wish I would have waited before installing my block. I was spending way too long researching thermal pads, thickness, etc. I literally was just like, F it, I'm putting it on anyway with what's included.


----------



## tps3443

des2k... said:


> 1000w works fine for my zotac 2x8pin
> no rbar, allows to hold high OC on core and mem on all games for the extra 3-5fps if you need that


Well, it isn't always 3-5 fps (but I get what you're saying). Not all "stock" 3090s are made equal.

For example, my last bone-stock 2080 Ti Founders Edition on stock air cooling, burning through a heavy load in Port Royal, managed around 36 fps. After watercooling, flashing the BIOS, soldering on the 8 ohm resistors, and overclocking the memory and core as much as possible, that 36 fps average goes to a 48 fps average. (That's huge!) Or instead of 60 fps, it's now 80 fps. Plenty of games show even more than that; I've seen as high as 38% in in-game benchmarks.

I needed the extra 35% performance in games. It became a standard for me, lol.

My main point is that while any 3090 is fast, no matter how bad or slow it may be, there are plenty of 3090s out there that can cough up a whole lot more than 3-5 fps.


----------



## OrionBG

KedarWolf said:


> If anyone wants 15% off GELID pads (I'd go for the Extreme, they are really soft and compress much better than the Ultimate and Thermalright pads), use this code in the official Gelid store.
> 
> FRIEND-8JQVPBX
> 
> GELID Store | Powered by GELID Solutions: best-in-class products for gamers and tech enthusiasts. (gelidstore.com)


Any idea what thickness I'll need for 3090FE with a Bykski block and active backplate?


----------



## Trevbev

@macangel I used this to get rid of resolutions I don’t want:Steam Community :: Guide :: Fix black bars in cutscenes and aspect ratio issues with 4K/UHD TVs on PC (GTA V, Deus Ex...)


----------



## macangel

Trevbev said:


> @macangel I used this to get rid of resolutions I don’t want:Steam Community :: Guide :: Fix black bars in cutscenes and aspect ratio issues with 4K/UHD TVs on PC (GTA V, Deus Ex...)


EDIT: I just scrolled further through the Steam post and it referred to CRU, which is what I mentioned I was using before. That's what's crashing my computer when I try using it on the 3090. More info about that in the rest of this...

That's not the issue I have. It's that Windows, both in Display Settings and Nvidia Control Panel, shows 3840x2160 as native under resolutions, but above that, 4096x2160. Even though it shows that 3840x2160 is native, it keeps switching to the other one, especially any time I use Nvidia Surround mode with the three displays. It's been a known problem for years, and Custom Resolution Utility has been around for a long time. It's actually a really fantastic utility that lets you access and tweak every property that has to do with the display. In the past, it was most popularly known for overclocking monitors and laptop displays, but it covers everything, including audio formats for ARC or eARC, VRR, etc. I've been using it for years with my 1080 Tis to get rid of that extra resolution.

Not really sure why I was having problems with it last night. I tried it like 5 times and it just froze the computer. I've been having a really, REALLY rough week with a lot of personal problems, so I definitely didn't have the patience to keep troubleshooting it, or even get into trying to overclock and/or flash the card last night.

I was able to get Middle-earth: Shadow of War to run at 11,520x2160 with the HD res pack and everything turned up except blur and depth of field, and it maintained between the 48-60 fps that the TVs should be covering with G-Sync compatible. So I played that and relaxed before crashing. Definitely happy with the card. That game was using over 10GB of VRAM, but it's also a few years old. I know other newer games will use even more VRAM at that resolution, so that's why I went for the 3090 instead of the 3080 Ti.


----------



## Trevbev

macangel said:


> EDIT: I just scrolled more through the Steam post and it referred to CRU, which is what I mentioned I was using before. That's what's crashing my computer when I try using it on the 3090. More info about that in the rest of this...
> 
> that's not the issue I have. It's that Windows, both in Display Settings and NVidia Control Panel, under resolutions, it shows 3840x2160 native, but above that, 4096x2160. Even though it shows that 3840x2160 is native, it keeps switching on the other one, especially any time I use NVidia Surround mode with the three displays. It's been a known problem for years, and Custom Resolution Utility has been around for a long time. It's actually a really fantastic utility that lets you access and tweak every property that has to do with the display. In the past, it's most popularly known for overclocking monitors and laptop displays. But it covers everything, including audio formats for ARC or eARC, VRR, etc. I've been using it for years with my 1080tis to get rid of that extra resolution. Not really sure why I was having problems with it last night. I tried it like 5 times and it just froze the computer. Been having a really, REALLY rough week with a lot of personal problems that I definitely didn't have the patience to keep trying to trouble shoot it, or even get into trying to overclock and/or flash the card last night. I was able to get Middle Earth: Shadows of War to run at 11,520x2160 with the HD res pack and everything turned up except blur and depth of field, and maintained between the 48-60fps that the TVs should be covering with G-Sync compatible. So I played that and relaxed before crashing. Definitely happy with the card. That game was using over 10GB of VRAM, but it's also a few years old. I know other newer games will use even more VRAM at that resolution, so that's why I went for the 3090 instead of the 3080ti.


Yeah, sorry, I didn't read your post properly.


----------



## J7SC

jura11 said:


> Would love to see a comparison between the stock Alphacool pads supplied on their blocks and Bykski or even EKWB
> 
> Also, he is only comparing Alphacool pads; how hard would it be for him to compare against Thermalright, Gelid, or Fujipoly pads?
> 
> Alphacool pads are just overpriced, and I won't touch Alphacool pads, fittings, etc. The one thing of theirs that is okay is the radiators.
> 
> Hope this helps
> 
> Thanks, Jura


Well ...the source article I linked above talked about that.


----------



## des2k...

tps3443 said:


> Well, it isn’t always 3-5 fps. (But I get what your saying though) Although, not all “Stock” 3090’s are made equal.
> 
> For example my last bone stock 2080Ti Founders Edition on stock air cooling burning through a heavy load in port royal manages around 36 fps. Well after watercooling, flashing the bios, and soldering on the 8 ohm resistors and overclocking the memory and core as much as possible that 36 fps average goes to 48 fps average. (That’s huge) Or instead of 60 fps it’s now 80 fps. Plenty of game show even more than that. I’ve seen as high as 38% with in game benchmarks.
> 
> I needed the extra 35% performance in games. It became a standard for me lol.
> 
> My main point is that, while any 3090 is fast no matter how bad or slow it may be. There are plenty of 3090’s out there that can cough up a whole lot more than 3-5fps.


Sure, you can get more than a 5 fps boost; Quake RTX would be the game that shows 10+ fps gains.
The problem is that it takes 700W+ to get there.

If you take the new Lego builders or Marble RTX demo, those don't scale past 2115 core or a mem OC: 0 fps gains.

You also have the new Metro, where 2115 core requires a lot of core voltage, so you won't scale because you'll hit 1.1V early.

I tested a few games, and between 2115 core +1000 mem vs. 2190 +1500 mem it's at most 3 fps more. If I push further to 2220 core +1560 mem, maybe I see +0.5 fps 😂

My general observation is 1 fps for every +66MHz core and another 1 fps for every +500 mem. 🙄

You also have to consider the weird power behaviour we get on the RTX 3090. I have a 66 fps lock and a voltage limit of 1V, and game menus can spike power to 500W when the game itself doesn't even use 300W. So that's something to consider if you want to keep the card at a high OC for a long time.
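That rule of thumb is easy to turn into a rough calculator. This is only a sketch of the observation above, not a general law; real scaling varies per game and clock range:

```python
def estimated_fps_gain(core_offset_mhz, mem_offset_mhz):
    """Rough rule of thumb: ~1 fps per +66 MHz core, ~1 fps per +500 mem offset."""
    return core_offset_mhz / 66 + mem_offset_mhz / 500

# The 2115/+1000 vs 2190/+1500 comparison above is a +75 core / +500 mem delta:
print(round(estimated_fps_gain(75, 500), 1))  # -> 2.1
```

Which lines up with the "at most 3 fps" observed between those two profiles.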


----------



## jura11

OrionBG said:


> Any idea what thickness I'll need for 3090FE with a Bykski block and active backplate?


Hi there,

I would suggest getting 1.5mm thermal pads from Gelid or Thermalright.

Bykski uses 1.2mm thermal pads on their blocks as far as I know, and I measured them as well.

Hope this helps 

Thanks, Jura


----------



## OrionBG

jura11 said:


> Hi there
> 
> I would suggest get 1.5mm thermal pads from Gelid or Thermalright
> 
> Bykski using 1.2mm thermal pads on their blocks what I know and I measured them as well
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the info!

Has anyone tried those EC360® Silver 12W/mK thermal pads?


----------



## Jordel

Do we have any kind of consensus on how to identify "bad" 3090 FTW3 Ultras, and what's causing it?

I've seen some reports that the pull on the PCIe slot might be an indicator, as it shouldn't exceed the spec (75W). My card pulls a max of 80.2 W from the slot during Time Spy Extreme.

This is my second FTW3. First one died all of a sudden, and was no longer detected. This was at the beginning of the year.


----------



## xrb936

Just got one 3090 Strix White OC Edition. Any difference between it and the regular Strix OC?


----------



## des2k...

Jordel said:


> Do we have any kind of consensus on how to identify "bad" 3090 FTW3 Ultras, and what's causing it?
> 
> I've seen some reports that the pull on the PCIe slot might be an indicator, as it shouldn't exceed the spec (75W). My card pulls a max of 80.2 W from the slot during Time Spy Extreme.
> 
> This is my second FTW3. First one died all of a sudden, and was no longer detected. This was at the beginning of the year.


I've seen posts where the new FTW3 had 45W of PCIe slot power and was able to use 500W+ of board power.

Not sure if even those new FTW3 boards are also being replaced due to new issues with weird games frying them (that Amazon game).


----------



## Falkentyne

Jordel said:


> Do we have any kind of consensus on how to identify "bad" 3090 FTW3 Ultras, and what's causing it?
> 
> I've seen some reports that the pull on the PCIe slot might be an indicator, as it shouldn't exceed the spec (75W). My card pulls a max of 80.2 W from the slot during Time Spy Extreme.
> 
> This is my second FTW3. First one died all of a sudden, and was no longer detected. This was at the beginning of the year.


All bad 3090 FTW3's have a rev 0.1 stamped next to the PCIe slot.
Or rather, all _potentially_ bad ones.
All of the good ones are rev 1.0.
The good ones have a digital VRM controller; the bad ones have an analog controller.

This apparently applies to 3080's as well.


----------



## sepia21

Hi, has anyone flashed this BIOS (Zotac RTX 3090 VBIOS) on a Zotac Trinity?
That card is 3x 8-pin and my Trinity is 2x 8-pin. I wanna know if flashing this BIOS will increase the power limit. I tried the XOC BIOS, but it had some problems with one of the programs I use (3ds Max), so I flashed back the original Zotac Trinity BIOS.


----------



## des2k...

Falkentyne said:


> All bad 3090 FTW3's have a ver 0.1 stamped next to the PCIE slot.
> Or rather all _potentially_ bad ones.
> All of the good ones are rev 1.0.
> The good ones have a digital VRM controller. The bad one have an analog controller.
> 
> This also apparently applies to 3080's as well.


There are a lot of reference 3090s (not EVGA) with analog controllers.
I'm assuming this is not a sign that all analog controller designs are bad?


----------



## Falkentyne

des2k... said:


> There's alot of ref 3090 (not evga) with analogue controllers.
> I'm assuming this is not a sign that all analogue controller designs are bad ?


I'm talking about eVGA 3080 and 3090. Not other AIB's.


----------



## des2k...

Falkentyne said:


> I'm talking about eVGA 3080 and 3090. Not other AIB's.


I know, it was about EVGA. I was curious about other brands using analog controllers.


----------



## yzonker

sepia21 said:


> Hi, has any one flashed this bios (Zotac RTX 3090 VBIOS) on Zotac trinity?
> This card is 3 pin and my trinity is two 8 pins. I wanna know if flashing this bios will increase the power limit? I tried the XOC bios but it had some problems with one of the software I use (3D max) so I flashed the original zotac trinity bios.


Nope, it won't give you a higher PL. Probably lower. The only 2x8-pin upgrades are the 390W Gigabyte/Galax/etc., or the KP XOC 1kW.


----------



## tps3443

Hey everyone, I ordered some Fujipoly 1.5mm and 0.5mm 17 W/mK thermal pads.

Is it OK to stack these pads to make 2mm thermal pads?

I am gonna swap the thermal pads on my Kingpin Hydro Copper.

I'm after the best pads possible, so just curious to hear input on this. Thank you everyone!


----------



## jura11

tps3443 said:


> Hey everyone I ordered some Fujipoly 1.5MM and, 0.5MM 17kwm thermal pads.
> 
> Is it ok to stack these pads to make 2MM thermal pads?
> 
> I am gonna swap the thermal pads on my Kingpin Hydro Copper.
> 
> I’m after the best pads possible. So just curious to hear inputs on this. Thank you everyone!


Not sure if I would stack thermal pads. I would probably use copper shims with thermal putty, which guys over here have posted several times.

I never tried stacking thermal pads, so I can't comment on whether stacking really works, or whether or not it hampers heat transfer.

Hope this helps 

Thanks, Jura


----------



## EB_Z590

Folks,

I'm in the middle of a 3090 hybrid build.

All the commotion lately about New World and 3090s exploding got me thinking about the necessity for clean, uninterrupted power in an appliance with this type of horsepower.

I am currently using the PCIe cables from my EVGA SuperNOVA 1000W Gold PSU, and those are plugged into (LINKUP brand) sleeved PCIe extensions, which are plugged into the card.
I have heard that this might not always be a recommended approach, and that for a card like this I should only use a single cable.

Does anyone have any opinions or knowledge in regards to this?


----------



## Nizzen

xrb936 said:


> Just got one 3090 Strix White OC Edition. Any difference between it and the regular Strix OC?


They're the same; I have both.


----------



## des2k...

EB_Z590 said:


> Folks,
> 
> I'm in the middle of a 3090 hybrid build.
> 
> All the commotion lately about New World, and 3090 exploding got me thinking about the necessity for clean, uninterrupted power in an appliance with this type of horsepower.
> 
> I am currently using the pci-e cables from my evga supernova 1000watt gold psu, and those are plugged into (Linkup Brand) sleeved pci-e extensions which are plugged into the card.
> I have heard that this might not always be a recommended approach, and for a card like this, I should only use a single cable.
> 
> Does anyone have any opinions or knowledge in regards to this?


The longer the cable, the more drop you'll get on the 12V line.

You'll have to measure your 12V lines with the extension vs. without it (HWiNFO) and see the difference under load.

But getting 12V vs. 11.8V makes no difference; the ATX spec allows +/- tolerance on voltages.

My new 1200W PSU has stupidly long cables (also no end caps); I've seen as low as 11.7V 🙄 under a high power load on an 8-pin (240W).
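The drop is just Ohm's law over the cable run. A back-of-the-envelope sketch; the wire gauge, length, and hot-wire count here are assumptions, so plug in your own cable's numbers:

```python
def cable_drop_v(load_w, supply_v=12.0, wires=3, ohm_per_m=0.021, length_m=0.6):
    """IR drop on one 8-pin run: current splits across the parallel 12V wires,
    and the ground return roughly doubles the effective path length."""
    current = load_w / supply_v               # total amps on the connector
    r_one_way = ohm_per_m * length_m / wires  # parallel conductors share the load
    return current * r_one_way * 2            # out and back

# 240W on one 8-pin through 0.6m of 18AWG (~0.021 ohm/m), 3 hot wires:
print(f"{cable_drop_v(240):.2f} V drop")  # -> 0.17 V drop
```

Roughly the 12V-to-11.8V difference mentioned above, before connector contact resistance, which an extension adds more of.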


----------



## des2k...

tps3443 said:


> Hey everyone I ordered some Fujipoly 1.5MM and, 0.5MM 17kwm thermal pads.
> 
> Is it ok to stack these pads to make 2MM thermal pads?
> 
> I am gonna swap the thermal pads on my Kingpin Hydro Copper.
> 
> I’m after the best pads possible. So just curious to hear inputs on this. Thank you everyone!


Fuji are very expensive. But we're talking about 2mm of material; your thermal resistance will be very high.

The 1mm and 1.5mm ones that are rated at max transfer get that rating because the test compressed them by 50%. That max rating applies at around 0.5mm-0.7mm for pads.

You can stack them if they are soft/putty-like and you have no other choice.

You said your mem is already at 55C; why would you need Fuji?
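The thickness point can be put in numbers with the plain conduction formula R = t / (k x A). A sketch only; the ~14x12 mm module footprint is an assumption, and it ignores interface resistance, which stacking two pads makes worse:

```python
def pad_resistance(thickness_mm, k_w_per_mk, area_mm2):
    """Conduction resistance of a thermal pad in degC per watt: R = t / (k * A)."""
    return (thickness_mm * 1e-3) / (k_w_per_mk * area_mm2 * 1e-6)

area = 14 * 12  # rough GDDR6X module footprint in mm^2 (assumed)
single = pad_resistance(1.0, 17, area)   # one 1.0mm 17 W/mK pad
stacked = pad_resistance(2.0, 17, area)  # two pads stacked to 2.0mm
print(round(single, 2), round(stacked, 2))
```

Doubling the thickness doubles the conduction resistance even before the extra pad-to-pad interface, which is why thinner pads (or a shim plus putty) run cooler.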


----------



## yzonker

des2k... said:


> the longer the cable the more drop you'll get on the 12v line
> 
> you'll have to measure your 12v lines with ext vs no ext(hwinfo) and see the difference under load
> 
> but getting 12v vs 11.8v makes no difference, atx specs allows +- for voltages
> 
> My new 1200w psu has stupid long cables(also no end caps) i've seen as low as 11.7v🙄 for high power load on 8pin(240w)


Yeah, I think as long as the plugs are good it's a non-issue for 3x8-pin. Maybe if you combined a 1kW BIOS with a poorly load-balanced card it would be an issue.


----------



## OrionBG

Guys,
Any thoughts on the Thermal Grizzly Minus 8 thermal pads?
Are they a good choice?


----------



## tps3443

des2k... said:


> fuji are very expensive. But, we're talking about 2mm of material, your thermal resistance will be very high.
> 
> The ones that are 1mm and 1.5mm and are rated at max transfer is because their test compressed them at 50%. That max rating is around 0.5mm-0.7mm for pads.
> 
> You can stack them if they are soft / putty like if you have no other choice.
> 
> You said your mem is already 55c, why would you need fuji ?


My memory temps are fine, not really worried about that. I might try to re-mount my KP Hydro Copper block, just to confirm I'm achieving my lowest possible deltas of course, lol, now that I know what's what on it.

I'm using default settings on the ReBAR KP OC BIOS, with the exception of the memory dip switches enabled for more voltage (forgot to turn those back off; I was testing +1800 or so earlier). So the FBVDD for the memory is still getting like 1.45V+ worth of juice.

Anyways,

at stock clocks on the OC KP BIOS:

My GPU temps hit 46C
My GPU die temps hit 42.5C

My ambient temps are about 23.5C-24C (74F) inside my home at the moment.

I am thinking, if I do swap the thermal paste, possibly for KPx, and re-mount the KP water block, I am going to go for the best possible thermal pads while I'm at it.

This is only me being particular over stuff.

But maybe this block is just as good as it gets, unfortunately? I've read this, but I have also come across a few people who tinkered with it and achieved some really, really good results.

Very, very happy with the memory temps. Just considering doing everything possible to achieve the lowest GPU temps if I do remove and remount the thing.


----------



## Nizzen

OrionBG said:


> Guys,
> Any thoughts on the Thermal Grizzly Minus 8 thermal pads?
> Are they a good choice?


I have them on an EK waterblock on a 3090. Works great. There are better, but it looks like they're good enough. VRAM is 65-70C.


----------



## the bag

Hello,

I'm running the Gigabyte Gaming OC vBIOS with Resizable BAR on my Asus TUF OC 3090, with nice improvements: the power target of 390W allows much higher and more stable clocks. Effective clocks are around 1950MHz in HWiNFO, everything on air.
With the stock TUF BIOS I didn't see 1850MHz in benchmarks. I only lost 200rpm of fan speed with the Gigabyte vBIOS.
Has anyone flashed and tried another vBIOS on their TUF 3090 OC?

Greetings the bag


----------



## Nizzen

the bag said:


> Hello,
> 
> i'm running the Gigabyte Gaming Oc vBios with resizable bar at my Asus Tuf Oc 3090 with nice improvments the powertarget of 390 allows much higher and stable clock's. Effective clock's arround 1950Mhz in HWinfo everything on Air.
> With the stock bios of the tuf i didnt see 1850Mhz in benchmarks. I only lost 200rpm fan speed with Gigabyte vBios.
> Anyone flashed and tried another vBios at his Tuf 3090 Oc?
> 
> Greetings the bag


A few BIOSes people like:
MSI Suprim 450W BIOS with ReBAR
EVGA 500W and 520W with ReBAR
Classic EVGA 1000W BIOS
Galax 500W BIOS with ReBAR
Strix 480W ReBAR

This list is nice to test, whether you want to see the potential of the chip or want the best performance per watt.

Good luck with a "few" hours of BIOS testing fun


----------



## satinghostrider

Has anyone tried the Asus 480W bios on the 3090 Gaming X Trio? Worth flashing from the 3090 Suprim X Bios? TIA!


----------



## the bag

Hello,

Thanks for the list. I'll give it a try after testing out the Gigabyte vBIOS this week. I'm wondering a little bit about the power target: the Heaven benchmark, GPU-Z, and HWiNFO all show a power target of 390 watts, but in Cyberpunk 2077 I'm running into the power limit at 370 watts. Anyone have an idea?

Greetings


----------



## Arizor

@the bag are you using raytracing and DLSS in CP2077? That will change the power profile compared to those others.


----------



## the bag

@Arizor yes, ray tracing and DLSS are on. I'm actually playing around with the settings in CP2077. Thanks for the advice.


----------



## yzonker

Is this the fuse that blew? If I'm looking in the right place.









MMO New World (Beta) is killing GeForce RTX 3090 (updated: EVGA reports 30 cards toasted)


The closed beta for "New World," an Amazon Game Studios MMO in development, is showing a rather weird side effect: NVIDIA GeForce RTX 3090 graphics cards seem to have issues with the...




www.guru3d.com


----------



## Jordel

des2k... said:


> I've seen post where the new FTW3 had 45w pcie power and able to use 500w+ for board power.
> 
> Not sure if even those new FTW3 boards are also replaced due to new issues with weird games frying them ( that Amazon game).





Falkentyne said:


> All bad 3090 FTW3's have a ver 0.1 stamped next to the PCIE slot.
> Or rather all _potentially_ bad ones.
> All of the good ones are rev 1.0.
> The good ones have a digital VRM controller. The bad one have an analog controller.
> 
> This also apparently applies to 3080's as well.


Yeah, it seems you can contact EVGA if you have one of the undesirable boards (they limit overclocking quite a lot, too) and get it replaced with a newer revision.

It seems my replacement is one of the bad ones. It happily eats away at the PCIe slot power (80+ watts) and won't go above 440W, despite having the 500W BIOS and the PL pushed to the max.

My reason for not taking it out and checking is that it's part of a tight loop, so I'd have to take the loop apart.

The only annoying thing is that it's completely covered by an EK waterblock + backplate, and remounting the original cooler is going to suck hard.


----------



## jura11

OrionBG said:


> Guys,
> Any thoughts on the Thermal Grizzly Minus 8 thermal pads?
> Are they a good choice?


Stock Bykski thermal pads are okay, I would say. I'm running them on my two RTX 3090 GamingPros and VRAM temperatures won't break 60°C on the top card, while the bottom one won't break the 70s. Both GPUs are running the XOC 1000W BIOS with +105MHz on core, and +1295MHz on VRAM on the top card, +1100MHz on the bottom one.

If you want good thermal pads, then go for Thermalright Odyssey or Gelid pads: they will cost you less than the Thermal Grizzly Minus 8 pads, and the Thermalright Odyssey or Gelid Extreme pads will outperform the TG Minus 8 pads for sure.

A friend is running a Bykski waterblock with an active backplate, and his VRAM temperatures won't break 50-54°C, with core temperatures at 38-40°C.

Hope this helps 

Thanks, Jura


----------



## OrionBG

jura11 said:


> Stock Bykski thermal pads are okay I would say, running them on my two RTX 3090 GamingPro's and VRAM temperatures won't break 60°C on top one and bottom one won't break 70's, both GPUs are running XOC 1000W BIOS with +105MHz on core and +1295MHz on VRAM on top and bottom one +1100MHz
> 
> If you want good thermal pads then Thermalright Odyssey or Gelid pads which will cost you less than Thermal Grizzly Minus 8 pads and Thermalright Odyssey or Gelid Extreme pads will outperform TG Minus 8 pads for sure
> 
> Friend running Bykski waterblock with active backplate and his temperatures won't break on VRAM 50-54°C with core temperatures in 38-40°C
> 
> Hope this helps
> 
> Thanks, Jura


Thanks a lot for the info.
I was looking at the Thermal Grizzly pads as they are available locally in my country. Amazon's prices for shipping things to Bulgaria have gone up quite a lot lately; they want 19 euros to deliver something that probably weighs 20 grams (OK, 100g with packaging). I guess I'll stay with the Bykski-supplied ones.
I will report some temps when I mount it... hopefully next week.


----------



## jura11

OrionBG said:


> Thanks a lot for the info.
> I was looking at the Thermal Grizzly as they are available locally in my country. The prices of Amazon for shipping things to Bulgaria have gone up quite a lot lately, and they want 19 euros to deliver something that weighs 20 grams probably (OK 100gr with packaging  ) I guess I'll stay with the Bykski supplied ones.
> I Will report some temps when I mount it... Hopefully next week.


Hi there 

Personally I would buy the Thermalright or Gelid thermal pads from AliExpress; you should receive them in 10-14 days at most. I bought Thermalright and Gelid Extreme pads and they arrived in the UK in 10 days.

Yup, right now shipping prices have gone beyond what we paid before the pandemic.

The Bykski-supplied pads are okay, and I still think they are better than the EKWB thermal pads, which I have used too; I compared them on my friend's RTX 3090 and temperatures went up by 6°C on the VRAM alone with the EKWB pads.

Hope this helps 

Thanks, Jura


----------



## WHeelbeer

yzonker said:


> Should be low risk.


Indeed it was low risk, everything works fine. However, even with the Gaming / Vision OC BIOS, my card is stuck at 350W (360W maximum), even though the Gaming / Vision OC vBIOSes are rated at 370W/390W.
I tried to overclock with Afterburner but nothing changes: still 350W, 360W at most.
And the GPU stays at 52°C no matter what.

Any idea?


----------



## OrionBG

jura11 said:


> Hi there
> 
> Personally I would buy the Thermalright or Gelid thermal pads from Aliexpress, you should receive them in 10-14 days as max, I bought Thermalright and Gelid Extreme pads and they arrived to UK in 10 days
> 
> Yup right now prices for shipping went beyond what we paid before all that pandemic
> 
> Bykski supplied pads are okay and I still think they are better than EKWB thermal pads which I have used too, I compared them on my friend RTX 3090 and temperatures went up with EKWB thermal pads by 6°C on VRAM only
> 
> Hope this helps
> 
> Thanks, Jura


Everything ordered through AliExpress now (since the 1st of July 2021) has to go through customs, and I need to pay VAT on it. Also, ever since COVID, the deliveries have been very slow...
The customs declaration alone that I need to prepare here costs about 10 euros. Then you add the price of the item, VAT and delivery, and in the end it gets more expensive than Amazon... Also, you never know whether what they send is the real thing or a fake...


----------



## the bag

Hey,

Does anyone here own a TUF OC and has replaced the stock thermal pads while staying on air cooling? In some videos it looks like Asus didn't use particularly good stuff. Any experience? I'm running the Gigabyte Gaming OC BIOS at 1950MHz with 80% fans and an ambient around 25°C; the card stays below the 70s, around 66-69°C, but the memory junction hits the low 90s (92°C).
So I'm wondering whether replacing the stock thermal pads would improve the temperatures on the TUF OC.
Also, regarding the thermal paste Asus used: some people with a vertical mount mentioned the paste had run like a river after a few days. They only realised it after the card started throttling because the VRAM hit 110°C.

Greetings the bag


----------



## macangel

So, I got around to flashing my Gigabyte Xtreme RTX 3090 with the EVGA 1000W ROM. Got a black screen for a long time that scared the crap out of me. I stepped away from the computer and when I came back it had restarted with the new ROM showing up in GPU-Z. However, it now says Resizable BAR is Disabled, before it was enabled. Is there a 1000W ROM that has BAR enabled?


----------



## yzonker

macangel said:


> So, I got around to flashing my Gigabyte Xtreme RTX 3090 with the EVGA 1000W ROM. Got a black screen for a long time that scared the crap out of me. I stepped away from the computer and when I came back it had restarted with the new ROM showing up in GPU-Z. However, it now says Resizable BAR is Disabled, before it was enabled. Is there a 1000W ROM that has BAR enabled?


Nope, not for us non-KP owners anyway unless someone leaks it eventually.


----------



## macangel

yzonker said:


> Nope, not for us non-KP owners anyway unless someone leaks it eventually.


Yea, I just found out something else. Apparently the ROMs also control the number and use of the HDMI ports. Only Gigabyte has 3 HDMI ports, which I'm using. With the EVGA ROM, only one port would give me 4K; the others were stuck at 1080p, and GPU-Z showed the card as 3x DP and 1x HDMI. CRU was able to read all the info from the TVs, but the ROM wouldn't do anything with it. So I flashed back to the Gigabyte BIOS, but sadly I hadn't backed up my own ROM, I had just downloaded the one from TechPowerUp, and that one doesn't seem to support BAR either.


----------



## yzonker

macangel said:


> yea, just found out something else. Apparently the ROMs also control the number and use of the HDMI ports. Only Gigabyte has 3 HDMI ports, which I'm using. With the EVGA, only one would give me 4k, the others were stuck at 1080p. GPU-Z only showed as 3 DP and 1 HDMI. CRU was able to get all the info from the TVs, but the ROM wouldn't do anything with them. So I flashed back to the Gigabyte, but sadly, I didn't back up my ROM, I just had downloaded the one from Tech Powerup, and that one doesn't seem to support BAR either.


Try looking in the unverified category on TPU if you haven't.


----------



## tps3443

OrionBG said:


> Thanks a lot for the info.
> I was looking at the Thermal Grizzly as they are available locally in my country. The prices of Amazon for shipping things to Bulgaria have gone up quite a lot lately, and they want 19 euros to deliver something that weighs 20 grams probably (OK 100gr with packaging  ) I guess I'll stay with the Bykski supplied ones.
> I Will report some temps when I mount it... Hopefully next week.


The Minus 8 pads are great, although I would just go for some 12-13 W/mK pads; the Minus 8 pads will get you about 85% of the way there. The Minus 8 pads are fairly soft, and affordable too.


----------



## EB_Z590

des2k... said:


> the longer the cable the more drop you'll get on the 12v line
> 
> you'll have to measure your 12v lines with ext vs no ext(hwinfo) and see the difference under load
> 
> but getting 12v vs 11.8v makes no difference, atx specs allows +- for voltages
> 
> My new 1200w psu has stupid long cables(also no end caps) i've seen as low as 11.7v🙄 for high power load on 8pin(240w)


Thanks for the reply.
I ran Heaven for 20 minutes and there were no significant swings, +/- 0.3V.
When idle, they hardly change.


----------



## des2k...

Who wants to try to get the XOC ReBAR BIOS from Vince?
I see some new people got it on the EVGA forums. I have no idea about warranty / NDA.

I don't have a Kingpin, but I'm running the 1000W non-ReBAR BIOS and want to try it if possible.

Send the serial # of your Kingpin GPU to [email protected]


----------



## yzonker

Desperation is taking hold.


----------



## tps3443

des2k... said:


> Who wants to try to get the xoc rbar from vince ?
> I see some new people got it on evga forums. I have no idea about warranty / nda.
> 
> Don't have kingpin but running the 1000w non rbar, and want to try it if possible.
> 
> send serial # of Kingpin gpu. [email protected]


I'm thinking about it lol. I am gonna trust the card a little more first; I trusted my 2080 Ti lol. Not quite there yet with my 3090 KP.


----------



## tps3443

Alright guys, I'm gonna just order some Fujipoly Extreme 1.5mm pads. They're very dense pads, and I feel like my GPU deltas are suffering because the stock 2.25mm pads are too thick. That's why my memory temps are super good but my GPU temps are not where they need to be.

These thick 2.00-2.25mm pads on the Kingpin Hydro Copper block need to be thinner to give better GPU contact, and lower GPU temps.

0.5mm is really a minimal difference, so I'm thinking a firm 1.5mm Fujipoly Extreme with a harder squish is gonna offer better performance.

The worst that can happen is higher memory temps lol..

Any thoughts?


----------



## satinghostrider

tps3443 said:


> Alright guys, so I’m gonna just order some Fujipoly Extreme 1.5MM pads. They’re very dense pads, and I feel like my GPU deltas are getting hurt due to the stock 2.25mm pads being too thick. And this is why my memory temps are super good. but my GPU temps are not where they need to be.
> 
> So these thick 2.00MM-2.25MM pads on the Kingpin Hydro Copper block need to be thinner to offer a better GPU contact. And lower GPU temps.
> 
> 0.5MM is really a minimal difference. So I’m thinking a firm 1.5MM Fujipoly extreme with a hard squish compound is gonna offer better performance.
> 
> The worst that can happen is higher memory temps lol..
> 
> Any thoughts?


Look at the Shore hardness rating of these pads.

I was also using Thermalright Odyssey pads on my 3090 Gaming X Trio, and my hotspot deltas crept up from 15 degrees to almost 20 degrees in a matter of 2 months. GPU temps were 52 degrees peak, and memory temps with the EK backplate (not active) were 60 degrees. The Thermalright Odyssey pads are not very compressible; they are rather rigid.

I just redid my card with Gelid Extreme pads, which are far more compressible than my previous Thermalright Odyssey pads.
But I had to use Thermalright Odyssey pads on the back of the card, as the Gelid pads are known to melt there, making them a pain to remove. Apparently it does not hurt performance though.

I also ran out of paste for my card because I messed up the Thermalright TFX paste (horrible to spread) and only had EK Ectotherm paste, so unfortunately I had to use that.

My new temps were rather surprising given a very average paste like the EK TIM. I am sitting at 47 degrees max now, with my hotspot deltas only at 13 degrees. Memory temps are 58 degrees.

My guess is that the more compressible Gelid pads allow for better GPU die to block contact, which improved the overall temps of the card, even though memory temps were only 2 degrees cooler. The GPU was 5 degrees cooler and hotspots dropped 6 degrees on average.

Hope this helps you.


----------



## long2905

devilhead said:


> testing 3090 strix active backplate + 12.8w/mk pads at the worst case scenario --- mining +2000 on memory
> it's way better than mine frankenstein backplate with stock EK pads  temps was around 80C


Can you share your ambient temp? Is that +2000 in Afterburner, or something else that would double the actual number?


jura11 said:


> Personally I would buy the Thermalright or Gelid thermal pads from Aliexpress, you should receive them in 10-14 days as max, I bought Thermalright and Gelid Extreme pads and they arrived to UK in 10 days
> 
> Yup right now prices for shipping went beyond what we paid before all that pandemic
> 
> Bykski supplied pads are okay and I still think they are better than EKWB thermal pads which I have used too, I compared them on my friend RTX 3090 and temperatures went up with EKWB thermal pads by 6°C on VRAM only


I'm also using the Bykski active backplate with the included thermal pads. While taking it out for maintenance and experimenting with adding paste as an extra layer, I saw the pads are compressed quite a bit even though they're only 1.2mm thick, as you said. Do you think I can put in the Gelid Extreme pad I bought as a backup before putting all this back in, even though the Gelid pad is only 1mm thick?


----------



## tbrown7552

yzonker said:


> Is this the fuse that blew? If I'm looking in the right place.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> MMO New World (Beta) is killing GeForce RTX 3090 (updated: EVGA reports 30 cards toasted)
> 
> 
> The closed beta for the "New World," an Amazon Game Studio MMO in development is showing a rather weird sideeffect. the NVIDIA GeForce RTX 3090 graphics cards seems to have issues with the...
> 
> 
> 
> 
> www.guru3d.com
> 
> 
> 
> 
> 
> View attachment 2518595
> 
> 
> View attachment 2518596


Yes, the shunt you circled in image 2 is what blew in picture one. However, that image is from a card a friend of mine had, which he sold on eBay as a shunt-modded, waterblocked card. The buyer didn't listen to the seller and put a stock cooler on it, it blew a hole in the card, and then the buyer played the defective-card game and the deal went sour. This happened a month or more ago. For some reason the buyer is now taking these pics and trying to get e-famous or something, saying it happened from New World when it didn't. The Guru3D article is textbook fake news.
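Since shunt mods keep coming up: the reason they work (and the reason a shunt can end up blown) is that the card infers current from the millivolt drop across a shunt resistor. A toy calculation with made-up resistor values (not the card's real shunts) shows how soldering a parallel shunt halves the reported power:

```python
# Illustrative shunt-mod arithmetic; resistor values here are assumptions,
# not the 3090's actual shunt specs.
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two resistors in parallel (milliohms)."""
    return r1_mohm * r2_mohm / (r1_mohm + r2_mohm)

R_STOCK = 5.0                       # assumed stock shunt, milliohms
R_MOD = parallel(R_STOCK, 5.0)      # identical shunt stacked on top -> 2.5

# The controller reads V = I * R; with half the resistance, the same
# current produces half the voltage drop, so reported power halves:
actual_w = 500.0
reported_w = actual_w * R_MOD / R_STOCK
print(reported_w)   # 250.0 -- the power limiter thinks it's at half load
```

That mismatch is also why a shunt-modded card on an undersized cooler can cook itself: the limiter never sees the real draw.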


----------



## GRABibus

des2k... said:


> Who wants to try to get the xoc rbar from vince ?
> I see some new people got it on evga forums. I have no idea about warranty / nda.
> 
> Don't have kingpin but running the 1000w non rbar, and want to try it if possible.
> 
> send serial # of Kingpin gpu. [email protected]


I want it for my strix


----------



## GRABibus

deleted


----------



## Nizzen

GRABibus said:


> I want it for my strix





des2k... said:


> Who wants to try to get the xoc rbar from vince ?
> I see some new people got it on evga forums. I have no idea about warranty / nda.
> 
> Don't have kingpin but running the 1000w non rbar, and want to try it if possible.
> 
> send serial # of Kingpin gpu. [email protected]


I have this 1000W ReBAR BIOS on my 3090 Strix. Works great in games.
Is this the most expensive BIOS in history?


----------



## des2k...

Nizzen said:


> I have this 1000w rebar bios on my 3090 strix  Works great in games
> Is this the most expensive bios in the history?


Can you share it on the thread, please?

If you don't want it to be visible, maybe PM me. There are, what, 3-4 people here interested.


----------



## des2k...

tps3443 said:


> Alright guys, so I’m gonna just order some Fujipoly Extreme 1.5MM pads. They’re very dense pads, and I feel like my GPU deltas are getting hurt due to the stock 2.25mm pads being too thick. And this is why my memory temps are super good. but my GPU temps are not where they need to be.
> 
> So these thick 2.00MM-2.25MM pads on the Kingpin Hydro Copper block need to be thinner to offer a better GPU contact. And lower GPU temps.
> 
> 0.5MM is really a minimal difference. So I’m thinking a firm 1.5MM Fujipoly extreme with a hard squish compound is gonna offer better performance.
> 
> The worst that can happen is higher memory temps lol..
> 
> Any thoughts?


There's a guy on the EVGA forums with a 14°C delta at 700W on your block.
What water temp do you have? You might already be at the lowest delta possible without using liquid metal.

I'm around a 13-14°C delta on my EK block at 600W (not that I have many games using that).


----------



## GRABibus

Nizzen said:


> I have this 1000w rebar bios on my 3090 strix  Works great in games
> Is this the most expensive bios in the history?


----------



## GRABibus

Nizzen said:


> I have this 1000w rebar bios on my 3090 strix  Works great in games
> Is this the most expensive bios in the history?


Yeah, could you PM it to me please ? Norvegia is the greatest country ever


----------



## Nizzen

GRABibus said:


> Yeah, could you PM it to me please ? Norvegia is the greatest country ever


----------



## yzonker

Uh oh. Pic is gone. KP police must have got him. Lol


----------



## Sheyster

des2k... said:


> If you don't want it to be visible, maybe PM me. There's like what 3,4 people here interested.


I'm sure there are many many more than 3-4 people interested here. Two KP 3090 owners should get together and compare MD5 hashes to see if there is any truth to this digital fingerprinting rumor.


----------



## GRABibus

There's no point in saying you have the BIOS and showing some pics if you don't want to share it or PM it.


----------



## GRABibus

Deleted


----------



## tps3443

des2k... said:


> There's a guy on evga forums with 14c delta at 700w for your block.
> What water temp do you have ? You might already be at lowest delta possible without using LM.
> 
> I'm around 13,14c delta on my EK at 600w(not that I have many games using that)


My delta is about 23°C on the GPU temp.

My water temp is 29.5°C to 30.5°C at the very hottest, right before it enters the 3090 KP block.

I ran it last night with a sustained 460-500 watt heat load for 20 minutes. My 3090 KP hit 54°C max (not good, I've got to get this resolved; the memory temps are good though).

So I know the KP Hydro Copper has 2.25mm thermal pads on the front side of the card for the memory, and 2.75mm thermal pads on the backside of the card. I feel as if the thicker pads may be on the wrong side of the card, which would increase the gap at the die and cause issues. The GPU is too warm.


----------



## Thanh Nguyen

Is it worth to replace the paste on kingpin hc to liquid metal ?


----------



## J7SC

yzonker said:


> Uh oh. Pic is gone. KP police must have got him. Lol


...must sound like heresy to benchers, but I'm perfectly happy with the relatively small bump I got for my Strix OC with the KP 520W ReBAR BIOS, even for the occasional benchies. While I do have a massive cooling system on the card, the 1000W XOC BIOS is just 'out there', and you never know about the 'next Amazon New World' surprise; a bit of safety headroom on the TDC is a good thing.


----------



## yzonker

tps3443 said:


> My delta is about 23C on the GPU temp.
> 
> My water temp is 29.5C to 30.5C at the very hottest water temp. Right before it enters the 3090KP block.
> 
> I ran it last night with a sustained 460-500 watts of heat load for 20 minutes. My 3090KP hit 54C max. (Not good) (Ive gotta get this resolved) (The memory temps are good though)
> 
> So I know the KP Hydro copper has 2.25MM thermal pads on the front side of the card for memory, and 2.75MM thermal pads on the backside of the card. I feel as if the thicker pads may be on the wrong side of the card. This is increasing a gap between my die, and causing issues. GPU is too warm.


I saw that post about the 14C@700w. The poster didn't qualify at all what was done to achieve that (that I saw, anyway). It seems pretty optimistic without going to a lot of effort; I've seen results from others on the EVGA forum that were not nearly that good, more like 14C at 500w.


----------



## tps3443

yzonker said:


> I saw that post about the 14C@700w. The poster didn't qualify that at all in regards to what was done to achieve that (that I saw anyway). Seems pretty optimistic without going to a lot of effort. I've seen results from others on the EVGA forum that were not nearly that good. More like 14C at 500w.


Yeah, I dunno about that either. I've done a lot of research on the KP HC block; it seems like 13-14C above water temp is considered really good for one of these. Getting below that is going to take some serious work.
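For anyone skimming, the "delta" being compared back and forth here is just core temperature minus loop water temperature under load. A trivial helper, using the numbers quoted in this exchange (not my own measurements):

```python
def gpu_water_delta(gpu_temp_c: float, water_temp_c: float) -> float:
    """Core-to-coolant delta: GPU core temp minus water temp at the block inlet."""
    return gpu_temp_c - water_temp_c

# tps3443's reading above: 54C core with ~30.5C water entering the block
print(gpu_water_delta(54.0, 30.5))   # 23.5 -- well above the pack

# the 13-14C range considered really good for the KP Hydro Copper
print(gpu_water_delta(44.0, 30.5))   # 13.5
```

Comparing deltas instead of raw core temps is what makes results from loops with different radiator capacity and ambient temps comparable at all.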


----------



## Wihglah

yzonker said:


> I saw that post about the 14C@700w. The poster didn't qualify that at all in regards to what was done to achieve that (that I saw anyway). Seems pretty optimistic without going to a lot of effort. I've seen results from others on the EVGA forum that were not nearly that good.  More like 14C at 500w.


My FTW3 is 7C at 500W. The HC block is rubbish. I haven't seen anyone getting good results.


----------



## tps3443

Wihglah said:


> My FTW3 is 7C at 500W. The HC block is rubbish. I haven't seen anyone getting good results.


You are right for the most part, lol. Although these blocks look very good and OEM, the temps shouldn't be this bad, so I started digging into mine to find out what's up with it..

And I think I have discovered the cause of the high GPU temps / high delta over water temp, and it is possibly an assembly error on EVGA's behalf.

The Kingpin Hydro Copper block uses two different thermal pad thicknesses on the GDDR6X memory:

2.25mm for the front GDDR6X modules.
2.75mm for the back GDDR6X modules.

Well, the front GDDR6X modules' thermal pads come pre-applied to the KP HC block right out of the box, and I am almost 99% certain that these are the 2.75mm thermal pads. This creates a +0.5mm gap between the die and the block; the 2.75mm pads are supposed to be on the back of the card, and the thinner 2.25s need to be on the front. (I don't have calipers to measure anything, but I have both sides in front of me now, and the front thermal pads are thicker!)

And this would be very wrong. So if everyone is installing their KP HC block with the preinstalled thermal pads, then everyone is gonna have some high GPU deltas.

Which would explain why some people have really good temps: maybe they discovered this issue too, or their block was properly set up.

These pre-included pads are very nice, and they look to be great quality pads. I am going to swap the positions, with the thinner pads around the GPU die and the fatter pads on the back (per EVGA's own specs).
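The arithmetic behind the swap is worth writing down explicitly (the pad thicknesses are the ones stated in this post, not figures verified against EVGA documentation):

```python
# Pad thicknesses as stated above (mm, uncompressed) -- assumed values.
FRONT_SPEC_MM = 2.25   # pads meant for the front (die-side) memory modules
BACK_SPEC_MM = 2.75    # pads meant for the back of the PCB

# If the back-side pads were pre-applied to the front of the block, the
# cold plate sits roughly this much further from the die before the pads
# compress -- a gap only the thermal paste can fill:
extra_gap_mm = BACK_SPEC_MM - FRONT_SPEC_MM
print(extra_gap_mm)   # 0.5
```

Half a millimetre of paste between die and cold plate would comfortably explain a ~10°C worse delta than a properly seated block.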


----------



## KedarWolf

tps3443 said:


> My delta is about 23C on the GPU temp.
> 
> My water temp is 29.5C to 30.5C at the very hottest water temp. Right before it enters the 3090KP block.
> 
> I ran it last night with a sustained 460-500 watts of heat load for 20 minutes. My 3090KP hit 54C max. (Not good) (Ive gotta get this resolved) (The memory temps are good though)
> 
> So I know the KP Hydro copper has 2.25MM thermal pads on the front side of the card for memory, and 2.75MM thermal pads on the backside of the card. I feel as if the thicker pads may be on the wrong side of the card. This is increasing a gap between my die, and causing issues. GPU is too warm.


Get ready to order an Optimus block; it comes standard with a water-cooled active backplate.

They should be released soon, but they'll be quite expensive.

The best block you can buy for a KP, though.


----------



## tps3443

KedarWolf said:


> Get ready to order an Optimus block that comes standard with a water-cooled active backplate.
> 
> They should be released soon but after quite expensive.
> 
> The best block you can buy for a KP though.


Honestly, I'm not worried about the Optimus block. I'm pretty satisfied with the Kingpin Hydro Copper: I love the way it looks, and my other temperatures are incredible. I'm over 1,100 GB/s of memory bandwidth with some super good temps. As for the GPU, well, right now I have no proper contact between GPU and block at all; I have confirmed there is a gap between them of just under 0.5mm. It seems that EVGA is installing the wrong thermal pads on these, and only the thermal paste is filling this large gap.

I'm curious as to why no one else has discovered this problem yet...

Anyways, these are cool waterblocks, and they look amazing. Hopefully this fixes my deltas some. I would be happy with sub-44-46°C at 500 watts.


----------



## Zorel

Can anyone share the kp 1000w rebar bios with me?
I would like to try it on strix. 
Thanks!


----------



## KedarWolf

Can someone PM me the Kingpin 1000W ReBAR BIOS? I won't publicly share it and won't mention that someone did.


----------



## KedarWolf

Has anyone tested two-power-connector BIOSes on three-connector cards? I think they pull more power, because the power limits are set for two power connectors but our cards have three.

What is the highest-wattage 2x8-pin BIOS?


----------



## yzonker

KedarWolf said:


> Has anyone tested two power connector BIOS's on three connector cards? I think they pull more power because the power limits are set for two power connectors but our cards have three.
> 
> What is the highest wattage 2-pin BIOS?


It works on some cards. Some of the 3090 FTW3 owners with gimped cards were doing it. I tried it on my 3080 Ti FTW3, though, and it maxed out on 8-pin #2 before reaching a higher PL.

There are Galax and Gigabyte 390W 2x8-pin BIOSes on TPU.
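For context on why the connector count matters: the PCIe spec budgets 75W from the slot and 150W per 8-pin connector, so the in-spec board power a BIOS assumes scales with how many connectors it expects. A back-of-envelope sketch (spec figures, not any vendor's actual per-rail limits):

```python
PCIE_SLOT_W = 75     # PCIe CEM spec limit for slot power
EIGHT_PIN_W = 150    # spec limit per 8-pin PEG connector

def spec_board_budget(n_eight_pins: int) -> int:
    """Theoretical in-spec board power for a card with n 8-pin connectors."""
    return PCIE_SLOT_W + n_eight_pins * EIGHT_PIN_W

print(spec_board_budget(2))   # 375 -- what a 2x8-pin BIOS is balanced around
print(spec_board_budget(3))   # 525 -- what a 3x8-pin card can draw in spec
```

A 2x8-pin BIOS on a 3x8-pin card redistributes its per-rail limits across fewer expected inputs, which is why one rail (like yzonker's 8-pin #2) can hit its individual cap before the total PL does.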


----------



## tps3443

Zorel said:


> Can anyone share the kp 1000w rebar bios with me?
> I would like to try it on strix.
> Thanks!


That’s not allowed per Vince.


----------



## GRABibus

KedarWolf said:


> Has anyone tested two power connector BIOS's on three connector cards? I think they pull more power because the power limits are set for two power connectors but our cards have three.
> 
> What is the highest wattage 2-pin BIOS?


When I had my 3090 FTW3, which was completely power limited due to high PCIE slot power draw, I flashed it with the following KFA2 BIOS:

It unlocked my power limit problems, probably because the PL was lower.








KFA2 RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## yzonker

Although obviously there's a performance penalty with the 3080 Ti, at least Galax was nice enough to release a 1kW ReBAR BIOS for it.

Have there been any updates to either the Galax or the Asus XOC BIOS for the 3090 with ReBAR? I haven't seen them on TPU.


----------



## SoldierRBT

Has anyone experienced a lower score in Time Spy with a ReBAR BIOS? I get around 200-250 lower GPU score with ReBAR. Port Royal seems to get a nice boost though.


----------



## GRABibus

SoldierRBT said:


> Has anyone experienced lower score in TimeSpy with ReBar BIOS? I get around 200-250 lower GPU score with ReBAR.


Yes, I have a lower score with ReBAR in TS.


----------



## GRABibus

yzonker said:


> Although obviously there's a performance penalty with the 3080TI, at least Galax was nice enough to release a 1kw rebar bios.
> 
> Has there been no updates to either the Galax or Asus XOC bios for the 3090 with reBar? Haven't seen them on TPU.


I didn’t see any 1000W XOC bios with Rebar on TPU.


----------



## Nizzen

GRABibus said:


> I didn’t see any 1000W XOC bios with Rebar on TPU.











GALAX RTX 3080 Ti VBIOS


12 GB GDDR6X, 1365 MHz GPU, 1188 MHz Memory




www.techpowerup.com




Use this, then update it with the software from galax.com... or I actually think it's native






Resizable bar BIOS update


Galaxy Microsystems Ltd.




www.galax.com


----------



## GRABibus

Do you know where


Nizzen said:


> GALAX RTX 3080 Ti VBIOS
> 
> 
> 12 GB GDDR6X, 1365 MHz GPU, 1188 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> Use this, then update it with the software on "gallax.com".... Or I actual think it's native
> 
> 
> 
> 
> 
> 
> Resizable bar BIOS update
> 
> 
> Galaxy Microsystems Ltd.
> 
> 
> 
> 
> www.galax.com


I am not sure this works.

I tried this tool with GALAX 1kW XOC Bios for 3090 and it didn't work.


----------



## Nizzen

GRABibus said:


> Do you know where
> 
> 
> I am not sure this works.
> 
> I tried this tool with GALAX 1kW XOC Bios for 3090 and it didn't work.


ok


----------



## yzonker

Maybe I don't understand, but as I said, that 1kW 3080 Ti BIOS has ReBAR. I have it flashed right now.


----------



## gamerMwM

So I recently got a 3090 FE from Best Buy, and I already had an MSI 3080 Ti Gaming X Trio (flashed to the Suprim 440-watt BIOS). I'm going to keep one of them. Here are some observations after OC'ing both (all of this on air in an open case):

The MSI 3080 Ti can pass Port Royal at up to +165 core / +800 memory. Stable in games up to +90 core / +150 memory. Manual frequency curve adjustment got it to an average core clock of 2020MHz.
The 3090 FE can pass Port Royal at up to +180 core / +1200 memory. Stable in games at +165 core / +1000 to +1200 memory. Working on the frequency curve currently; average core clocks are around 1950MHz.

After adjustments, both are scoring over 14000 in Port Royal, at about 65 to 66 fps in that benchmark.

I have reached higher scores in Port Royal (a few hundred higher) with the MSI 3080 Ti. Its memory can't be OC'd nearly as high, but its average clocks are higher. I tried to get the average core clock up higher on the 3090 FE, but it doesn't want to get above 2000MHz and stay there.

Looking back at past benchmarks of the MSI 3080 Ti when I had it on the stock 380-watt BIOS, the average core clocks were also around 1950, which is close to the FE, which limits at 400 watts.


----------



## des2k...

Is NVIDIA NVFlash 5.692.0 (March 18th, 2021) the latest?

Probably a stupid question: does it support ReBAR ROMs?


----------



## KingKnick

tps3443 said:


> That’s not allowed per Vince.


Mimimimimimi


----------



## Nizzen

des2k... said:


> is NVIDIA NVFlash 5.692.0 the latest ?
> March 18th, 2021
> 
> Prob a stupid question, it supports rebar roms ?


Even older versions can flash ReBAR ROMs.


----------



## GRABibus

yzonker said:


> Maybe I don't understand, but as I said that 1kw 3080ti bios has reBar. Have it flashed right now.


I was speaking about 3090 1000W bioses 😊


----------



## des2k...

*rebar 1000w & classified tool*
I can't verify this right now, need to update my x570 from F11 to F34 and I know new agesa are very weird for stability for my 3900x. Want to test system stability before I flash this one.

if you guys are setup for rebar already and have dual bios, maybe confirm it's working for me ?


----------



## V I P E R

des2k... said:


> *rebar 1000w & classified tool*
> I can't verify this right now, need to update my x570 from F11 to F34 and I know new agesa are very weird for stability for my 3900x. Want to test system stability before I flash this one.
> 
> if you guys are setup for rebar already and have dual bios, maybe confirm it's working for me ?


Will this BIOS work for FTW3?


----------



## KingKnick

des2k... said:


> *rebar 1000w & classified tool*
> I can't verify this right now, need to update my x570 from F11 to F34 and I know new agesa are very weird for stability for my 3900x. Want to test system stability before I flash this one.
> 
> if you guys are setup for rebar already and have dual bios, maybe confirm it's working for me ?


Works !!! Nice Bro !!! THX !!! 
Finally someone who has the balls to post it !!!


----------



## des2k...

KingKnick said:


> View attachment 2518843
> 
> 
> 
> Works !!! Nice Bro !!! THX !!!
> Finally someone who has the balls to post it !!!


very nice, will start my mobo bios update soon so I can flash it
what nvflash did you use ? NVFlash 5.692.0 from techpowerup ?


----------



## yzonker

des2k... said:


> *rebar 1000w & classified tool*
> I can't verify this right now, need to update my x570 from F11 to F34 and I know new agesa are very weird for stability for my 3900x. Want to test system stability before I flash this one.
> 
> if you guys are setup for rebar already and have dual bios, maybe confirm it's working for me ?


Holy ****! Nice job! Thanks.

I have a Gigabyte X570 Aorus Ultra in my 3090 machine running F34 and a 5800X. Seems to work well. Annoying to flash and lose all settings though. 

My 3080 Ti machine I just upgraded has an Asus X570 TUF Pro with my left-over 3900X. Working well on the latest BIOS as well with reBar.


----------



## Falkentyne

des2k... said:


> *rebar 1000w & classified tool*
> I can't verify this right now, need to update my x570 from F11 to F34 and I know new agesa are very weird for stability for my 3900x. Want to test system stability before I flash this one.
> 
> if you guys are setup for rebar already and have dual bios, maybe confirm it's working for me ?


Anyone man enough to try flashing this on a FE?


----------



## KingKnick

des2k... said:


> very nice, will start my mobo bios update soon so I can flash it
> what nvflash did you use ? NVFlash 5.692.0 from techpowerup ?


Yes, just the newest nvflash from TPU


----------



## KingKnick

Falkentyne said:


> Anyone man enough to try flashing this on a FE?


Try it  You have an onboard GPU for a back flash ;)


----------



## KingKnick

So i have it uploaded to TPU

VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp 

Have fun Guys...


----------



## GRABibus

Vince is going to be happy 😆


----------



## KingKnick

GRABibus said:


> Vince is going to be happy 😆


Hahahhahahah xD


----------



## GRABibus

Falkentyne said:


> Anyone man enough to try flashing this on a FE?


I tried a PR with it on my strix, which is on air.

won’t do it anymore 😂.
=> More than 100 degrees on GPU and average GPU temp at 81 degrees (27 degrees ambient).

I will try to get a 3090 Kingpin card.

I will stick with KP 520W currently on my strix


----------



## GRABibus

V I P E R said:


> Will this BIOS work for FTW3?


if you are on water, you can try….


----------



## Falkentyne

KingKnick said:


> Try it  You habe an onboard gpu for back flash ;9


Nope. Not me. Not without the ability to hardware flash the card. I've already seen what happens when a TUF 3080 gets bricked by Asus' own official flasher: NVFlash starts whining about a "corrupt PCI header" when trying to restore the original BIOS and refuses to flash, and then you need a specially modded NVFlash, which another user made specifically for that guy's card, to fix it (check TPU's thread from a few months ago). I have a Skypro and a 1.8V adapter, but no inline clip exists for a UDFN8 chip. And I'm no stranger to force flashing (I modded my mobile GTX 1070 with a TDP BIOS editor).


----------



## tps3443

Sweet! I can test out the 1KW RBar without voiding my warranty now.


----------



## tps3443

Falkentyne said:


> Shots fired.
> 
> Still waiting for someone richer than me to test this on a 3090 Founder's Edition. No I'm not going to be the man this time.


Of all people, I know you're extreme, man! You can do it!

We all ran the Galax 2KW on 2080Ti’s, and the KP XOC 520 watt on them too.

I ran the Galax 2KW on my 2080TI daily for a long time. Probably close to a year. Then I desoldered, and sold the card on the stock air cooler.

New owner could not be happier!


----------



## yzonker

Falkentyne said:


> Shots fired.
> 
> Still waiting for someone richer than me to test this on a 3090 Founder's Edition. No I'm not going to be the man this time.



Has anyone had success getting any non FE bios to work on a FE card? I was under the impression that they don't work.


----------



## KedarWolf

macangel said:


> awesome. I was checking these out on Amazon and Newegg earlier today. They make a big difference?


The Extreme is the way to go. They are really soft and compressible, and are much better than the stock pads that come with water blocks etc.

They're cheap from the GELID store too.

The Ultimate are hard, and so are the Thermalright and Fuji ones, so you'll get better compression with the Extreme and better core contact and such.

And yes, they really do help. 

Edit: They ship them from China but they are quick, maybe 8 days or so. And I paid no duty or taxes on them which was nice from the GELID store.


----------



## GRABibus

Some posts have been deleted here


----------



## Falkentyne

yzonker said:


> Has anyone had success getting any non FE bios to work on a FE card? I was under the impression that they don't work.


This modded NVFlash (based on the Dec 2020 version) bypasses the ID check, since the -6 parameter no longer works for that. But the one person who tried the FE BIOS on his Strix had a black screen (VGA error code: Load VGA BIOS) and had to reflash by switching to the backup BIOS, then flipping the switch back before running NVFlash. (Rename it from .txt to .exe.)
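For anyone following along at home, the basic flow with TechPowerUp's NVFlash looks roughly like this (the exe name varies by build, and "new_bios.rom" is just a placeholder for whatever ROM you grabbed; always save a backup first):

```
:: save a backup of the current vBIOS before touching anything
nvflash64 --save backup.rom
:: disable the EEPROM write protection
nvflash64 --protectoff
:: flash the new ROM; -6 overrides the PCI subsystem ID mismatch prompt
nvflash64 -6 new_bios.rom
```

A cross-vendor flash will still warn you; that prompt is exactly what -6 pushes past, which is also exactly why it's risky.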


----------



## tps3443

GRABibus said:


> Some posts have been deleted here


You aren’t kidding. We’ve got some gaps.


----------



## KedarWolf

Damn, I was on my way to work and never downloaded the 1000W rebar BIOS.

Can someone PM it to me, please?


----------



## Nizzen

des2k... said:


> *rebar 1000w & classified tool*
> I can't verify this right now, need to update my x570 from F11 to F34 and I know new agesa are very weird for stability for my 3900x. Want to test system stability before I flash this one.
> 
> if you guys are setup for rebar already and have dual bios, maybe confirm it's working for me ?


XD


----------



## GRABibus

KedarWolf said:


> Damn, I was on my way to work and never downloaded the 1000W rebar BIOS.
> 
> Can someone PM it to me, please?











[Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)
Has anyone tested two power connector BIOS's on three connector cards? I think they pull more power because the power limits are set for two power connectors but our cards have three. What is the highest wattage 2-pin BIOS? When I had my 3090 FTW3 which was completely Power limit due to high...

EVGA RTX 3090 VBIOS (www.techpowerup.com)
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory


----------



## InvictuSZN

des2k... said:


> *rebar 1000w & classified tool*
> I can't verify this right now, need to update my x570 from F11 to F34 and I know new agesa are very weird for stability for my 3900x. Want to test system stability before I flash this one.
> 
> if you guys are setup for rebar already and have dual bios, maybe confirm it's working for me ?


Sorry if this is a stupid question but what does the classified tool do? And does it even work on non KP cards like the Strix?


----------



## amstech

Damn look at the prices of these GPU's! Holy #$%^.
You can buy a used Jetski, or a RTX 3090.


----------



## Falkentyne

InvictuSZN said:


> Sorry if this is a stupid question but what does the classified tool do? And does it even work on non KP cards like the Strix?


No it will not work on any non kingpin card. It won't even load.


----------



## GRABibus

Falkentyne said:


> No it will not work on any non kingpin card. It won't even load.


I can load it on my strix with KP 520W bios,
But of course, I assume this doesn’t work.


----------



## InvictuSZN

Falkentyne said:


> No it will not work on any non kingpin card. It won't even load.


Yeah I see, tried starting it but it doesn't do anything. 3090 Strix 1000W XOC (no Rebar) bios

Edit: Wait never mind, it booted lmao, but I doubt it will do anything


----------



## GRABibus

InvictuSZN said:


> Yeh i see, tried starting it but it doesnt do anything. 3090 Strix 1000W XOC (no Rebar) bios
> 
> Edit: Wait never mind it booted lmao, but doubt it will do anything


it takes minutes to load….


----------



## InvictuSZN

GRABibus said:


> it takes minutes to load….


yup same, 2 mins or so.... I really wonder if it will do anything but I don't wanna brick my card lol


----------



## KedarWolf

GRABibus said:


> [Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)
> Has anyone tested two power connector BIOS's on three connector cards? I think they pull more power because the power limits are set for two power connectors but our cards have three. What is the highest wattage 2-pin BIOS? When I had my 3090 FTW3 which was completely Power limit due to high...
> 
> EVGA RTX 3090 VBIOS (www.techpowerup.com)
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory


TY, I downloaded it on my phone in case it gets taken down from the database. Tried emailing it to myself, even changing the .rom to .txt, but Gmail knows better than me and immediately deletes the email.


----------



## KedarWolf

GRABibus said:


> I can load it on my strix with KP 520W bios,
> But of course, I assume this doesn’t work.


Did you check in Afterburner the power limits are removed, and rebar in Nvidia System Information is enabled?


----------



## GRABibus

KedarWolf said:


> Did you check in Afterburner the power limits are removed, and rebar in Nvidia System Information is enabled?


I was talking about classified tool


----------



## KedarWolf

GRABibus said:


> I was talking about classified tool


I'm pretty sure I read the settings are locked on Strix cards, and you can only change them with Elmor's hardware thingy. EVC2 or something like that.


----------



## tps3443

InvictuSZN said:


> Sorry if this is a stupid question but what does the classified tool do? And does it even work on non KP cards like the Strix?


It allows you to manually increase the GPU voltage and GDDR6X voltage. You can also disable the OCP protections, and calibrate the load line for all of these voltages too.

It is a very fun tool. And it seems it would let you just cook a 3090 KP into flames if you're not careful.


----------



## newls1

KedarWolf said:


> TY, I downloaded it on my phone in case it gets taken down from the database. Tried emailing it to myself, even changing the .rom to .txt Gmail knows better than me and immediately deletes the email.


finally a 1000w rebar evga bios... have you installed it yet?


----------



## GRABibus

newls1 said:


> finally a 1000w rebar evga bios... have you installed it yet?


Forget it if you are not in a custom loop or on a kingpin with its AIO.


----------



## tps3443

I’m running the 1000 watt KP Rbar bios on my 3090 Kingpin Hydro Copper.

I like it!! It works great.

I will more than likely set a limit of like 600-650 watts with it.

This bios is pulling right at 500 watts max without any voltages increased at all yet.

It is hot as hell in my home. My A/C sucks, and my card is hitting 50C. And it is (27.6C/81.6F) ambient temperature right now.

Anyways, my home AC is terrible, which is the main reason why I watercool all of my components in the first place.


----------



## Falkentyne

GRABibus said:


> Forget it if you are not in a custom loop or on a kingpin with its AIO.


You can set it to 50% power limit and then not throttle in stuff like Timespy Extreme or Superposition 4k anymore. 500-550W is enough for air with good aftermarket thermal paste job.


----------



## GRABibus

Falkentyne said:


> You can set it to 50% power limit and then not throttle in stuff like Timespy Extreme or Superposition 4k anymore. 500-550W is enough for air with good aftermarket thermal paste job.


I did.
Temps in PR are still near 100 degrees.


----------



## Falkentyne

GRABibus said:


> I did.
> Temps in PR are still near 100degrees.


Shouldn't get that hot. What thermal paste are you using?


----------



## GRABibus

Falkentyne said:


> Shouldn't get that hot. What thermal paste are you using?


I have repasted my strix on air with Conductonaut and temps are very good, except with this bios.
With 520W KP bios, average GPU temps in PR is in the 60’s.

something else is happening with this bios


----------



## GRABibus

Falkentyne said:


> Shouldn't get that hot. What thermal paste are you using?


try the bios


----------



## yzonker

GRABibus said:


> I have repasted my strix on air with Conductonaut and temps are very good, except with this bios.
> With 520W KP bios, average GPU temps in PR is in the 60’s.
> 
> something else is happening with this bios


It seems like it's either nearly pulling the full 1000W, or there's some kind of error in reading the temp.


----------



## GRABibus

yzonker said:


> It's either nearly pulling the full 1000w or some kind of error in reading the temp seems like.


Yes probably.


----------



## J7SC

GRABibus said:


> Yes probably.


...this may sound silly, but reboot a few times (referring to earlier posts where I would get 'hybrid' readouts in MSI AB, GPUz etc after enabling a new vbios - corrected itself after several re/boots)


----------



## GQNerd

Welcome to the club, now let's see those scores/clocks/temps with the 1k bar bios!!


----------



## Falkentyne

GRABibus said:


> try the bios


I have a FE. No one has gotten a non-FE card working with a FE BIOS without a hard brick, no one is brave enough to use the patched NVFlash I posted two pages ago to flash their FE card, and I'm not going to risk my card.
I have a Skypro programmer, but no adapter or inline clip exists for "UDFN8" chips. Your chip is SOIC8.


----------



## GRABibus

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Look at my crazy temps with PL capped at 60%.
I rebooted 3 times after BIOS installation.


----------



## J7SC

...ouch ...I really would stop running that XOC bios on air, for safety reasons.


----------



## tps3443

The 1KW RBar seems to run around the same temps as my normal 520W LN2 RBar bios does.


----------



## GRABibus

J7SC said:


> ...ouch ...I really would stop running that XOC bios on air, for safety reasons.


definitely


----------



## Sheyster

GRABibus said:


> I have repasted my strix on air with Conductonaut and temps are very good, except with this bios.
> With 520W KP bios, average GPU temps in PR is in the 60’s.
> 
> something else is happening with this bios


Check the fan speeds in GPU-Z, maybe the fan curves don't match the original 1000W KP BIOS?


----------



## GRABibus

Sheyster said:


> Check the fan speeds in GPU-Z, maybe the fan curves don't match the original 1000W KP BIOS?


set to 100%, 3000rpm


----------



## Sheyster

GRABibus said:


> set to 100%, 3000rpm


Well, that clearly was not the issue, unless the middle fan was not spinning at all!


----------



## GRABibus

I tested in Heaven benchmarks and temps remain low...

I will test Timespy now


----------



## Sheyster

GRABibus said:


> I tested in Heaven benchmarks and temps remain low...
> 
> I will test Timespy now


I will eventually test it too, but not today. The only game I have installed that might benefit from re-bar is BFV, and I hardly ever play it anymore.


----------



## J7SC

Sheyster said:


> I will eventually test it too, but not today. The only game I have installed that might benefit from re-bar is BFV, and I hardly ever play it anymore.


...soon...:


----------



## GRABibus

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Exactly same temps as in PR...

*CONCLUSION : DON'T USE THIS BIOS ON AIR !!!!*


----------



## des2k...

GRABibus said:


> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Exactly same temps as in PR...
> 
> *CONCLUSION : DON'T USE THIS BIOS ON AIR !!!!*


Didn't want to mention it, but it's very aggressive for power ramp up and holding it.

My afterburner power chart is pinned with very few dips in PR,Control,cp2077,etc. You won't see temp difference with a good waterblock.

Just PR 2115 pinned to 500w so I laughed a bit at the post on top with 600w cap


----------



## Sheyster

J7SC said:


> ...soon...:
> 
> View attachment 2518952


Indeed, I actually re-installed BF4 a few months ago and played it for a few weeks. I needed to get some time in the Little Bird helo, love that thing!


----------



## tps3443

GRABibus said:


> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Exactly same temps as in PR...
> 
> *CONCLUSION : DON'T USE THIS BIOS ON AIR !!!!*


THAT IS INSANE!!!! Lol.

Ok, I’m about to go benching, just got off work. I’ll post some scores soon.


----------



## GRABibus

des2k... said:


> Didn't want to mention it, but it's very aggressive for power ramp up and holding it.
> 
> My afterburner power chart is pinned with very few dips in PR,Control,cp2077,etc. You won't see temp difference with a good waterblock.
> 
> Just PR 2115 pinned to 500w so I laughed a bit at the post on top with 600w cap


happy to make you laugh 😂


----------



## newls1

GRABibus said:


> Forget it if you are not in a custom loop or on a kingpin with its AIO.


very much on a custom GPU only loop....


----------



## tps3443

delete


----------



## J7SC

Sheyster said:


> Indeed, I actually re-installed BF4 a few months ago and played it for a few weeks. I needed to get some time in the Little Bird helo, love that thing!


...I saw that trailer with the 'robo dog' and the F35 stunt - the visuals should be something else, especially with a decent GPU...toying with the idea of getting a LG CX 48 OLED - for work


----------



## des2k...

tps3443 said:


> delete


I'm here lol
Got the bios late yesterday, not from this forum.

Friend of a friend lol....


----------



## tps3443

des2k... said:


> Didn't want to mention it, but it's very aggressive for power ramp up and holding it.
> 
> My afterburner power chart is pinned with very few dips in PR,Control,cp2077,etc. You won't see temp difference with a good waterblock.
> 
> Just PR 2115 pinned to 500w so I laughed a bit at the post on top with 600w cap



Oh ok, I was like WTH lol.

Anyways yeah, I tested PR too earlier (I only had a few minutes to flash and run once on break). And yeah, 500 watts easily, no additional voltages or anything.

I'm about to push it now though.


----------



## GRABibus

des2k... said:


> I'm here lol
> Got the bios late yesturday, not from this forum.
> 
> Friend of a friend lol....


Who is my friend also 😂


----------



## newls1

will using this 1000w new bios possibly blow the fuses on the FTW3 card? I have the load power balance issue and just dont want to blow fuses and have to send card for repair.... anyone have a clue about this?


----------



## tps3443

Since I’m running port royal.


newls1 said:


> will using this 1000w new bios possibly blow the fuses on the FTW3 card? I have the load power balance issue and just dont want to blow fuses and have to send card for repair.... anyone have a clue about this?


Set a power limit. If you're on air cooling it is not recommended though.

We just saw @GRABibus running 96C in benchmarks lol. I dunno why. That's toasty, and he's on a Strix.

You don't want that happening on a card that could potentially fail.


----------



## des2k...

newls1 said:


> will using this 1000w new bios possibly blow the fuses on the FTW3 card? I have the load power balance issue and just dont want to blow fuses and have to send card for repair.... anyone have a clue about this?


The fuses are 20A on that? If you go close to 20A x 12V = 240W on one of those 8-pins, yes.
This BIOS allows 100W over PCIe and up to 300W per 8-pin (LN2 OC, I guess)
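To put those per-rail numbers in one place (the 20A fuse rating and the per-connector BIOS limits are just the figures quoted in this thread, not verified specs for any card), here's a quick sanity-check calculation:

```python
# Rough per-rail power budget for a 3x 8-pin 3090 on the 1 kW BIOS.
# The fuse rating and per-rail caps are the ones quoted in the thread;
# treat them as illustrative, not as measured values for a specific card.

RAIL_VOLTAGE = 12.0      # PCIe power rails are 12 V
FUSE_CURRENT = 20.0      # amps, the claimed per-8-pin fuse rating

def rail_watts(amps: float, volts: float = RAIL_VOLTAGE) -> float:
    """Power carried by one rail at a given current (P = I * V)."""
    return amps * volts

fuse_limit = rail_watts(FUSE_CURRENT)   # 240 W before the fuse is at its rating
per_8pin_cap = 300.0                    # what this BIOS reportedly allows per 8-pin
pcie_slot_cap = 100.0                   # plus the slot allowance

total_cap = 3 * per_8pin_cap + pcie_slot_cap

print(f"fuse limit per 8-pin: {fuse_limit:.0f} W")    # 240 W
print(f"BIOS total cap:       {total_cap:.0f} W")     # 1000 W
print(f"8-pin cap exceeds fuse rating: {per_8pin_cap > fuse_limit}")
```

So the BIOS will happily let each 8-pin carry more than the quoted fuse rating, which is the risk being discussed here.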


----------



## Sheyster

newls1 said:


> will using this 1000w new bios possibly blow the fuses on the FTW3 card? I have the load power balance issue and just dont want to blow fuses and have to send card for repair.... anyone have a clue about this?


Just stick with the 520W BIOS, it's perfectly fine for gaming. I wouldn't risk the 1000W BIOS with that card.


----------



## J7SC

tps3443 said:


> (....)
> We just saw @GRABibus running 96C in benchmarks lol. I dunno why. Thats toasty, and he’s on a Strix .
> (...)


...I was more '''impressed''' with the 114 C Hotspot w/ Strix on air


----------



## GRABibus

J7SC said:


> ...I was more '''impressed''' with the 114 C Hotspot w/ Strix on air


I usually have between 12 degrees and 20 degrees between GPU temp and hotspot temp.

It depends on the game and power draw. The higher the power, the higher the difference between the two.

It is the same whatever paste I use.


----------



## GRABibus

Sheyster said:


> Just stick with the 520W BIOS, it's perfectly fine for gaming. I wouldn't risk the 1000W BIOS with that card.


and the 520W bios is very good for bench also


----------



## tps3443

Power usage really isn't all that bad..

I am running it through now, just finding my best score. This is the first time really pushing my 3090KP.

I hit 15,300 or so on the first run with an overclock applied. Memory at +1700, GPU at 2,160 (internal clock is 2,145). 










I scored 15 302 in Port Royal (Intel Core i9-7980XE, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10): www.3dmark.com





I can do better of course. I’m pushing now. This is so fun.


----------



## Sheyster

J7SC said:


> ...I saw that trailer with the 'robo dog' and the F35 stunt - the visuals should be s.th. else, especially with a decent GPU...toying with the idea to get a LG CX 48 Oled - for work


Just roll with a C1 now. They already have the firmware update available now for Dolby Vision 4K/120Hz. The CX is supposed to get it but who knows when. Also make sure you get a good HDMI 2.1 cable. I recommend Zeskit Maya 8K HDMI 2.1 certified cable from Amazon.


----------



## newls1

des2k... said:


> fuses are 20a on that ? if you go close to 20a x12v 240w on one of those 8pins yes
> this bios allows 100w pcie & up to 300w per 8pin (LN OC I guess)


The PCIe fuse is 10A, nearly 99% positive on that... Guess I shouldn't risk this then?! Card is fcwb'd (full-cover waterblock) with an actively cooled backplate.


----------



## tps3443

Here is a GPU-Z run with a 15,311 PR score.

My ambient temps are pretty hot. Enough to sweat sitting still lol.

So my temps reflect this. Anyways, max power draw is 500.2 watts, cruising through at 2,160MHz during the whole test.

I am running a 3090 Kingpin hydro copper.

I am running (+200/+1700). These settings are very stable though. Haven’t messed with classified tool yet, but that’s coming in a few minutes here. I can get this score a little better I think.


----------



## GRABibus

tps3443 said:


> Here is a GPU-Z run with a 15,311 PR score.
> 
> My ambient temps are pretty hot. Enough to sweat sitting still lol.
> 
> So my temps reflect this. Anyways, max power draw is 500.2 watt cruising through at 2,160Mhz during the whole test.
> 
> I am running a 3090 Kingpin hydro copper.
> 
> I am running (+200/+1700). These settings are very stable though. Haven’t messed with classified tool yet, but that’s coming in a few minutes here. I can get this score a little better I think.


I am surprised about 500W max power draw.

When I tested a Galax HOF some weeks ago with Galax 1000W XOC Bios, I could draw 550W.


----------



## J7SC

Sheyster said:


> Just roll with a C1 now. They already have the firmware update available now for Dolby Vision 4K/120Hz. The CX is supposed to get it but who knows when. Also make sure you get a good HDMI 2.1 cable. I recommend Zeskit Maya 8K HDMI 2.1 certified cable from Amazon.


Thanks  - I've been looking at both (after all, they're very similar) but need to check IO for RTX2k...btw, oddly enough, the C1 is priced about C$100 less in Canada than the CX 48...



GRABibus said:


> I usually have between 12degrees and 20 degrees between GPU temp and hotspot temp.
> it depends on games and power draw. The higher the power, the higher the difference the difference between both.
> it is the same whatever my paste.


...when I saw your 114 C Hotspot, I was reminded of some results over at the 6900XT thread (I have one of those in my 'daily'). There is a tool called MPT and it lets you mod the power envelope of your existing bios by modding the Windows registry (no bios flashing - nice !).

As you bump the power limit by more than 20%, the Hotspot temp starts taking off much more so in percentage terms than the average GPU temp...it could be that the latest crop of GPUs with 7nm or 8nm are just that much more difficult to control re. high intensity heat spots. Also, it pays to check the GPU die re. flatness and even the raised lettering...with huge PL, that can also lead to emphasized and localized heat issues...


----------



## Arizor

Great to see things like this coming out, makes watercooling much more accessible to the average builder - Alphacool Eiswolf 2 AIO - 360mm RTX 3090/3080 with Backplate (Reference)


----------



## J7SC

Arizor said:


> Great to see things like this coming out, makes watercooling much more accessible to the average builder - Alphacool Eiswolf 2 AIO - 360mm RTX 3090/3080 with Backplate (Reference)


That is interesting...and it looks like it has that QD in case you want to customize down the line. I wonder which custom PCBs they might add this for (if any), in addition to the reference model


----------



## Arizor

@J7SC yeah the ease of expansion is a really nice touch.


----------



## PLATOON TEKK

Finally got blocks for the OC Labs cards. Got the Bykski with active plate, the cards are huge with both blocks installed.



Will test temps etc and keep y’all posted. Hopefully chilled water will help the mediocre performance.


----------



## yzonker

des2k... said:


> Didn't want to mention it, but it's very aggressive for power ramp up and holding it.
> 
> My afterburner power chart is pinned with very few dips in PR,Control,cp2077,etc. You won't see temp difference with a good waterblock.
> 
> Just PR 2115 pinned to 500w so I laughed a bit at the post on top with 600w cap


Same for me with so-so Corsair block. Played CP2077 for quite a while. Temps appeared pretty much identical. Works great. Seems just as stable as the old one. Thanks again!


----------



## tps3443

PLATOON TEKK said:


> Finally got blocks for the OC Labs cards. Got the Bykski with active plate, the cards are huge with both blocks installed.
> 
> View attachment 2518967
> View attachment 2518968
> View attachment 2518970
> 
> 
> Will test temps etc and keep y’all posted. Hopefully chilled water will help the mediocre performance.


That's a beautiful block either way. And that's how I look at my KP HC waterblock. Damn thing was $350 with taxes and shipping lol. Once upon a time, money like that would buy you the latest and greatest mid-range GPU.

The Kingpin Hydro Copper isn't the best performer either. Although, I've got some new pads, TIM, etc. en route to hopefully help it out some more with the high deltas. My high ambient temps certainly don't help. 

I'm thinking I need to invest in chilled water myself too. You've gotta pay to play I suppose.


----------



## tps3443

So I'm still running 2,145MHz. And I have managed to squeeze out just under 15,500 in PR so far.

This Kingpin memory is insane though! +1,800 like a beast!! It might just do 2K on the memory. I'm still pushing here.


----------



## des2k...

tps3443 said:


> So I’m still running 2,145Mhz. And I have
> managed to squeeze SUB 15,500 in PR so far.
> 
> This Kingpin memory is insane though! +1,800 like a beast!! It might just do 2K on the memory. I’m still pushing here.


very nice, here's a very conservative run with my peasant Zotac 3090 
2115core +1018mem

Good run I think, hit 111fps at the ship entry.








I scored 15 172 in Port Royal (AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10): www.3dmark.com


----------



## tps3443

des2k... said:


> very nice, here's a very conservative run with my peasant Zotac 3090
> 2115core +1018mem
> 
> Good run I think, hit 111fps at the ship entry.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 172 in Port Royal (AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10): www.3dmark.com


Absolutely awesome! And that's no peasant card either. My 3090KP was $2,089 bucks. I have seen plenty of people pay wayyy more for that Zotac or something similar. Anyways, I'm tinkering with internal voltage, which helps boost that internal clock to match the external clock.

I'm at 15,485 so far.

Highest power consumption I have seen is maybe 560 watts. And I've got every dip switch on the back of my 3090KP enabled.

Trying to find full potential before I resort to the classified tool, which will help me apply additional voltage for going past 2,145 sustained.

So far I'm +1,850 on the memory though, and it's still scaling. I am really shocked here. Looking at the top 1-10 Port Royal scores, my memory seems to OC really really well.

Anyways, I've got some work to do. I dunno if 16K is gonna happen. I'm trying. I will probably have to resort to LM on the GPU die to really help the KP HC block out some.

My 7980XE will boil water temps, so I have it dialed to 4.5GHz at much lower voltages. This helps the 3090KP run drastically cooler.









I scored 15 485 in Port Royal (Intel Core i9-7980XE, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10): www.3dmark.com


----------



## KedarWolf

I scored 15 769 in Port Royal

My best ever, enabling rebar globally with Nvidia Inspector and using the 1000W rebar BIOS.


----------



## tps3443

KedarWolf said:


> I scored 15 769 in Port Royal
> 
> My best ever enabling rebar globally with Nvidia Inspector and using 1000W rebar BIOS.


I have rebar disabled on mine. Is that even worth anything? Or beneficial?


----------



## jura11

long2905 said:


> can you share your ambient temp? is that +2000 in afterburner or something else that would double the actual number?
> 
> I'm also using the bykski active backplate with the included thermal pads. while tooking it out for maintenance and experimenting putting paste as an extra layer in, i see the pads are compressed quite a bit even though its only 1.2mm thick as you said. do you think i can put in the gelid extreme pad i bought as a backup before putting in all this even though the gelid pad is only 1mm thick?


Hi there

I personally would go with 1.5mm Gelid or Thermalright pads. I will be doing maintenance on my loop, will be switching my Aquacomputer KRYOS NEXT CPU waterblock for a Bykski CPU-RYZEN-X-MC, and will be repadding my GPUs with Thermalright and Gelid pads; I have both in 1.5mm and will use ZF-EX thermal paste, or Kingpin thermal paste if I can find it in the UK.

I can't comment on whether a 1mm thermal pad will work there. I would guess it will, but then you won't need to use the plastic washers, because of the thermal pad thickness.

Hope this helps

Thanks, Jura


----------



## Arizor

Reporting on my Strix using the 1000w rebar BIOS, all temps are the same as using other BIOSes, hit 14900 on PR, so not bad, seems to hold high clocks for longer.

Power doesn't seem to go above 450 watts (going by my kill-a-watt meter at the wall); perhaps some hardware limitation on these late Strix cards not allowing higher.

edit: That's with PL set to 100%, doesn't matter basically if I set it anywhere above 45%.


----------



## KedarWolf

tps3443 said:


> I have rebar disabled on mine. Is that even worth anything? Or beneficial?


Google how to enable rebar globally with Nvidia Inspector, it's easy, but do it in the Global Driver Profile.
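Not from KedarWolf's post, but one quick way to sanity-check that ReBAR actually took effect afterwards (assuming a reasonably recent NVIDIA driver): with Resizable BAR active, a 3090's BAR1 aperture covers the whole 24GB frame buffer (~32768 MiB) instead of the usual 256 MiB.

```shell
# Query the BAR1 aperture; ~32768 MiB total on a 3090 suggests ReBAR is on,
# 256 MiB suggests it is off.
nvidia-smi -q -d MEMORY | grep -A 3 "BAR1"
```

GPU-Z's "Resizable BAR" field reports the same thing graphically.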


----------



## tps3443

KedarWolf said:


> Google how to enable rebar globally with Nvidia Inspector, it's easy, but do it in the Global Driver Profile.


I had it enabled before on some other bios for my 3090KP, and I can just flip over to bios #3 on my X299 Dark and it'll turn back on. However, I have to use the crap 1.28 bios for my X299 Dark, which adds rebar support. The 1.28 bios sucks because it's post-Meltdown-mitigation, which adds 5C to my 7980XE package temps and eats up about 8% of multithreaded performance.

I lose 300 MHz in CPU performance and get hotter CPU temps, all from going from the non-rebar mobo bios to the rebar mobo bios, lol.


It isn't worth it, to say the least. The latest rebar bioses ruin the 7980XE. They take that beastly 5GHz CPU that outpaces most Ryzen 9 5950Xs down to tame, mediocre 4.6GHz performance, lol.

This is all due to updated microcode for my CPU.


----------



## Falkentyne

Arizor said:


> Reporting on my Strix using the 1000w rebar BIOS, all temps are the same as using other BIOSes, hit 14900 on PR, so not bad, seems to hold high clocks for longer.
> 
> Power doesn't seem to go above 450 watts (going by my kill-a-watt meter at the wall); perhaps some hardware limitation on these late Strix cards not allowing higher.
> 
> edit: That's with PL set to 100%, doesn't matter basically if I set it anywhere above 45%.


Run Timespy Extreme. You'll draw more.

Run superposition 4k custom with extreme shaders. You'll draw even more
Run Quake 2 RTX. You'll go above 600W.


----------



## tps3443

So 15,506 is where I’m at so far in PR. Re-bar is disabled, when the plane comes in to land I hit 117fps. 

I‘m still messing with it.


----------



## Nizzen

tps3443 said:


> Power usage really isn’t all that bad..
> 
> I am running it through now just finding my best score. This is the first time really pushing my 3090KP.
> 
> I hit 15,300 or so on first run with an overclock applied. Memory at +1700/ GPU at 2,160 internal clock is 2145.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 302 in Port Royal
> Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> 
> 
> 
> 
> I can do better of course. I’m pushing now. This is so fun.


I got 15499 on air with stock cooler on x299. 3090 HOF 








I scored 15 499 in Port Royal
Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com


----------



## ttnuagmada

Finally got around to installing the EK active backplate for my Strix. Took the opportunity to replace all of the thermal pads with Thermalright 12.8 W/mK ones. Here are the before and after screenshots of GPU-Z after 35 loops of TSE GT2: +100 core, +500 mem and max power limit. I will say that it looked like one of the memory rows wasn't making good contact on the backside of the card prior to installing the active backplate, so at least some of the improvement is probably due to that. Either way, I'm pretty pleased with the results.

Before:









After:


----------



## Falkentyne

Arizor said:


> Reporting on my Strix using the 1000w rebar BIOS, all temps are the same as using other BIOSes, hit 14900 on PR, so not bad, seems to hold high clocks for longer.
> 
> Power doesn't seem to go above 450 watts (going by my kill-a-watt meter at the wall); perhaps some hardware limitation on these late Strix cards not allowing higher.
> 
> edit: That's with PL set to 100%, doesn't matter basically if I set it anywhere above 45%.


Elmor just said that he checked with a hex editor and SRC3's power limit is still incorrectly set to 150W, which is the same problem that the Strix XOC2 vbios had. SRC3 links to 8 pin #3 (SRC1=8 pin 1, SRC2=8 pin 2, etc) so if 8 pin #3 exceeds 150W, that triggers a TDP "Normalized" power limit unless the TDP slider can FAR exceed the 100% position. But this depends on what the "maximum" value for SRC3 is set to. If it's set to 500W or something, then ignore everything I said.

I asked him if he can check the "maximum" value.
_Edit_ he said maximum value for SRC3 is 175W.

Yikes....


----------



## GRABibus

Arizor said:


> Reporting on my Strix using the 1000w rebar BIOS, all temps are the same as using other BIOSes, hit 14900 on PR, so not bad, seems to hold high clocks for longer.
> 
> Power doesn't seem to go above 450 watts (going by my kill-a-watt meter at the wall); perhaps some hardware limitation on these late Strix cards not allowing higher.
> 
> edit: That's with PL set to 100%, doesn't matter basically if I set it anywhere above 45%.


You are on air cooler ?


----------



## Arizor

@GRABibus nope under water, but very modest, using EKWB 240mm kit (separate AIO for CPU). Temp maxes around 54C, delta of 10C, so nothing compared to some folk here.


----------



## GRABibus

Arizor said:


> @GRABibus nope under water, but very modest, using EKWB 240mm kit (separate AIO for CPU). Temp maxes around 54C, delta of 10C, so nothing compared to some folk here.


OK.

Because from my side, temps on air cooler with 1000W ReBar bios are crazy :








[Official] NVIDIA RTX 3090 Owner's Club
"Yes probably. ...this may sound silly, but reboot a few times (referring to earlier posts where I would get 'hybrid' readouts in MSI AB, GPU-Z etc after enabling a new vbios - corrected itself after several re/boots)"
www.overclock.net





With GALAX 1000W Bios or ASUS 1000W Bios, I don't get those crazy temps.


----------



## Arizor

GRABibus said:


> OK.
> 
> Because from my side, temps on air cooler with 1000W ReBar bios are crazy :
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club
> "Yes probably. ...this may sound silly, but reboot a few times (referring to earlier posts where I would get 'hybrid' readouts in MSI AB, GPU-Z etc after enabling a new vbios - corrected itself after several re/boots)"
> www.overclock.net
> 
> 
> 
> 
> 
> With GALAX 1000W Bios or ASUS 1000W Bios, I don't get those crazy temps.


Yeah I saw your posts, pretty wild, but not happening on mine.

ASUS has a 1000W bios?!


----------



## GRABibus

Arizor said:


> Yeah I saw your posts, pretty wild, but not happening on mine.
> 
> ASUS has a 1000W bios?!


Yes, this enclosed one for example, but with No rebar.


----------



## Arizor

@GRABibus wow I had no idea, thanks, ASUS don’t seem to have it on their site.


----------



## GRABibus

Arizor said:


> Reporting on my Strix using the 1000w rebar BIOS, all temps are the same as using other BIOSes, hit 14900 on PR, so not bad, seems to hold high clocks for longer.
> 
> Power doesn't seem to go above 450 watts (going by my kill-a-watt meter at the wall); perhaps some hardware limitation on these late Strix cards not allowing higher.
> 
> edit: That's with PL set to 100%, doesn't matter basically if I set it anywhere above 45%.


What I don't understand is that I also get a 14900+ score, with my crazy temps, on air!!

So I really wonder whether those temps are real or not...

It would be great if some folks here could report temps in PR or Time Spy with a Strix 3090 *on air* with the KP 1000W rebar bios.


----------



## Nizzen

GRABibus said:


> What I don't understand is that I get 14900+ score also, with my crazy temps, on air !!
> 
> So, I wonder really if those temps are real or not....
> 
> it would be great if some folks here could report temps in PR or TimeSpy with a strix 3090 *on air* with the KP 1000W Rebar bios.


This bios doesn't get hotter than any other 1000W bios I tried. I'm on water on the Strix card, but I guess I would see if there was way higher power draw on this one. There isn't. At least on my Strix.


----------



## GRABibus

Nizzen said:


> This bios doesn't get hotter than any other 1000W bios I tried. I'm on water on the Strix card, but I guess I would see if there was way higher power draw on this one. There isn't. At least on my Strix.


So I don't understand what's happening….


----------



## Nizzen

GRABibus said:


> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Look at my crazy temps with PL capped at 60%.
> I rebooted 3 times after Bios installation


3DMark doesn't report GPU temp?
Give me the link to the score


----------



## GRABibus

With the 1000W ASUS bios or the 1000W GALAX bios, my temps remain normal.

Only with this KP 1000W bios do I see these high temps, and only in PR and Time Spy.
In games they are normal, as with the 520W KP bios.


----------



## GRABibus

Nizzen said:


> 3dmark doesn't repport gpu temp?
> Give me link to the score


you have the link in your post.
76 degrees average.


----------



## J7SC

Arizor said:


> Reporting on my Strix using the 1000w rebar BIOS, all temps are the same as using other BIOSes, hit 14900 on PR, so not bad, seems to hold high clocks for longer.
> 
> Power doesn't seem to go above 450 watts (going by my kill-a-watt meter at the wall); perhaps some hardware limitation on these late Strix cards not allowing higher.
> 
> edit: That's with PL set to 100%, doesn't matter basically if I set it anywhere above 45%.


...what were your power results with the KPE 520 reBAR? I presume you had that before the 1kW one


----------



## GRABibus

I did a test with the GALAX 1000W non-rebar bios:

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Much better score than with the KP 1000W rebar bios, but still crazy temps.

Look at my power draw: 680W!!!!

*QUESTION : HOW IS IT POSSIBLE TO GET MORE THAN 15000pts WITH SUCH HIGH TEMPS ??*


----------



## yzonker

Just to verify, I checked my block delta this morning with the KP XOC reBar bios. 17C at 500w. My block isn't great, but same as I saw with the previous KP XOC. Although this was running CP2077, not 3dmark.

I did run 3dmark also. 2C higher but I didn't cool my loop down either like I probably did before.









Result
www.3dmark.com


----------



## yzonker

GRABibus said:


> I did a test with GALAX 1000W non rebar bios :
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Much better score than with the KP 1000W rebar bios, but still crazy temps.
> 
> Look at my power draw : 680 W !!!!
> 
> *QUESTION : HOW IS IT POSSIBLE TO GET MORE THAN 15000pts WITH SUCH HIGH TEMPS ??*


Well, it won't thermal throttle, since limits are turned off on those bioses; it just drops the core clock some due to the high temp. So it's not impossible at least.


----------



## J7SC

GRABibus said:


> I did a test with GALAX 1000W non rebar bios :
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Much better score than with the KP 1000W rebar bios, but still crazy temps.
> 
> Look at my power draw : 680 W !!!!
> 
> *QUESTION : HOW IS IT POSSIBLE TO GET MORE THAN 15000pts WITH SUCH HIGH TEMPS ??*


...thanks for the info - though I hardly dare watch a Strix on air w/ 680W spikes... I think that might be violating the Geneva Convention!

...re. your question, I don't think you'll get to PR 16,000 w/o doing some serious cooling upgrades (never mind the safety aspect). Looking at your temps above, NV Boost will have both feet stomping on the brakes (edit: thermal throttling may still be partially hard-baked in even w/ XOC). The card itself and 'that' bios are probably capable of it, but not on air... w-cooling + chiller would probably work. BTW, the web is full of vids of old air conditioners getting 'modded' as a chiller (even OCN has some). TiN (formerly of EVGA/KP) also did one...


----------



## Nizzen

GRABibus said:


> I did a test with GALAX 1000W non rebar bios :
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Much better score than with the KP 1000W rebar bios, but still crazy temps.
> 
> Look at my power draw : 680 W !!!!
> 
> *QUESTION : HOW IS IT POSSIBLE TO GET MORE THAN 15000pts WITH SUCH HIGH TEMPS ??*


Have you checked total power draw from the wall too, to see if the 680W is real?

I have ~720-750W total draw from the wall in Port Royal with the 1000W rebar bios and a 5950X with PBO and "no" power limit for the CPU. This is with one pump, 8 fans and the MB, 2x DIMMs etc...

Maybe it's different batches of the PCB? The one I use now is from the second batch that came to Norway. So pretty early.
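As a rough cross-check of those wall numbers (my arithmetic, not Nizzen's; the PSU efficiency and the ~150W rest-of-system figure are assumptions for illustration):

```python
# Estimate DC watts left for the GPU from an AC wall reading. All figures
# are illustrative assumptions, not measurements from this thread.
def gpu_draw_estimate(wall_w, rest_of_system_w=150, psu_efficiency=0.92):
    """AC wall draw -> approximate DC watts available to the GPU."""
    return wall_w * psu_efficiency - rest_of_system_w

# ~720-750 W at the wall with a PBO 5950X, pump, fans, board:
low = gpu_draw_estimate(720)    # ~512 W
high = gpu_draw_estimate(750)   # ~540 W
```

That lands well short of a 680W GPU-side reading, which is why checking at the wall is a reasonable sanity test.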


----------



## GRABibus

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

Here with PL capped at 55%.

Temps are much more acceptable!!!
When capping the PL with the KP 1000W rebar bios, this doesn't work.

So, the best case for me would be to get the GALAX 1000W bios with rebar and cap the PL to 55%.


----------



## GRABibus

Nizzen said:


> Have you checked total power draw from the wall too, to see if the 680W is real?
> 
> I have ~720-750w total draw from the wall in port royal with 1000w rebar and 5950x pbo and "no" powerlimit for the cpu. This is with one pump, 8 fans and MB, 2x dimm etc...
> 
> Maybe it's different batches of the pcb? The one I use now is from second batch that came to Norway. So pretty early.


I think it is real, as when capping at 55%, I get 562W (see my last post)
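For what it's worth, that 562W figure is close to what a linear slider would predict (my assumption, not something the bios documents: the PL percentage scales the bios's base power target):

```python
# Hypothetical linear power-limit scaling, assumed for illustration only.
def capped_target_w(base_target_w, slider_pct):
    """Power budget in watts if the PL slider scales the base target linearly."""
    return base_target_w * slider_pct / 100

capped_target_w(1000, 55)  # 550.0 W, near the 562 W reported
capped_target_w(1000, 45)  # 450.0 W
```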


----------



## J7SC

GRABibus said:


> I think it is real, as when capping at 55%, I get 562W (see my last post)


...with all the troubles you go through (incl. HSp 114, and 600W+ heat energy), is there any specific reason you won't put a water-block on the Strix ?


----------



## GRABibus

I am lazy

Hsp 114 ? What is it ?


----------



## newls1

anyone try this bios on the FTW3 card yet?


----------



## des2k...

FYI
There are zero protections active on the XOC bios from Kingpin. It runs at max clocks and pushes max power from the VRM for your requested overclock, regardless of temps.

You either have to back down the clocks or use a 50% or lower power target.


----------



## GRABibus

newls1 said:


> anyone try this bios on the FTW3 card yet?


You should run your own experiment, as I am currently doing 😊


----------



## Sheyster

GRABibus said:


> I did a test with GALAX 1000W non rebar bios :
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)
> 
> Much better score than with the KP 1000W rebar bios, but still crazy temps.
> 
> Look at my power draw : 680 W !!!!
> 
> *QUESTION : HOW IS IT POSSIBLE TO GET MORE THAN 15000pts WITH SUCH HIGH TEMPS ??*


Your previous post was tempting me to test out the new BIOS today. Now that you've tested the old 1K BIOS I can tell you that for me, temps are fine with the old BIOS. I game at up to 2100 MHz, and it drops a few bins as it heats up. I only use +500 for memory when gaming. PL is set to 60% and it never hits it in the games I play.


----------



## Sheyster

des2k... said:


> FYI
> There are zero protections active on the XOC bios from Kingpin. It runs at max clocks and pushes max power from the VRM for your requested overclock, regardless of temps.
> 
> You either have to back down the clocks or use a 50% or lower power target.
> 
> 
> View attachment 2519036


Yep, this is not an idiot-proof BIOS. Use at your own risk!


----------



## kryptonfly

Hi, sorry if it's already been written here, but I don't know how to search... Anyway, I have a Gigabyte Turbo that I flashed with the Gaming OC bios many months ago, but I would like to reach 400W+ because I'm power limited. I tried the Aorus Xtreme bios, but all the probes go crazy, with a third "virtual" 8-pin 12V. Is it possible to flash a 3x8-pin bios? If yes, which bios for the Gigabyte 3090 PCB?
(It's WC, temp is OK)


----------



## Sheyster

GRABibus said:


> I am lazy


LOL, at least you're honest!


----------



## J7SC

des2k... said:


> FYI
> There are zero protections active on the XOC bios from Kingpin. It runs at max clocks and pushes max power from the VRM for your requested overclock, regardless of temps.
> 
> You either have to back down the clocks or use a 50% or lower power target.
> 
> 
> View attachment 2519036


...I wasn't thinking about vbios limits (i.e. the 255C above) but read early on that some temp limit is hard-baked into every Ampere... then again, those limits must be pretty high, according to @GRABibus' sheets...


----------



## GRABibus

Sheyster said:


> Your previous post was tempting me to test out the new BIOS today. Now that you've tested the old 1K BIOS I can tell you that for me, temps are fine with the old BIOS. I game at up to 2100 MHz, and it drops a few bins as it heats up. I only use +500 for memory when gaming. PL is set to 60% and it never hits it in the games I play.


What do you mean by old bios?

The GALAX 1000W is perfect, capped at 50-55%.
At least capping it works very well, which is not the case with the KP 1000W rebar bios.

Pity that a GALAX 1000W rebar doesn't exist…


----------



## GRABibus

des2k... said:


> FYI
> There are zero protections active on the XOC bios from Kingpin. It runs at max clocks and pushes max power from the VRM for your requested overclock, regardless of temps.
> 
> You either have to back down the clocks or use a 50% or lower power target.
> 
> 
> View attachment 2519036


Capping the PL with the KP 1000W rebar bios doesn't work for me.

With Galax 1000W non rebar bios, it works great.


----------



## GAN77

des2k... said:


> You either have to back down the clocks or use 50% or less power target.


How to display this data?


----------



## Falkentyne

Nizzen said:


> Have you checked total power draw from the wall too, to see if the 680W is real?
> 
> I have ~720-750w total draw from the wall in port royal with 1000w rebar and 5950x pbo and "no" powerlimit for the cpu. This is with one pump, 8 fans and MB, 2x dimm etc...
> 
> Maybe it's different batches of the pcb? The one I use now is from second batch that came to Norway. So pretty early.


Everyone seems to have completely ignored, and is still ignoring, what I said.
The Kingpin 1000W rebar vbios has the SRC3 power limits unchanged. They are still 150W default, 175W maximum.
This limits 8 pin #3's maximum power draw before a PWR flag is set.
The Strix XOC2 vbios I posted months ago has the same problem.
The original KP 1000W vbios has proper limits (SRC1/2/3 are all like 400W or something).
I'm guessing the Galax (non rebar?) one that was posted recently works as well (don't know).
I can't test these bioses.


----------



## GRABibus

I need the GALAX 1000W bios with ReBAR.

No news on that?


----------



## newls1

How is this PR score using the 520W XOC bios? I scored 14 934 in Port Royal


----------



## GRABibus

kryptonfly said:


> Hi, sorry if it's already written here but I don't know how to search... Anyway, I have a Gigabyte Turbo that I flashed with Gaming OC bios for many months now but I would like to reach +400W because I'm power limited. I tried Aorus Xtreme bios but all probes are crazy with a third "virtual" 8-pin 12v. Is it possible to flash with a 3x8-pin bios ? If yes, what bios for Gigabyte 3090 PCB ?
> (it's WC, temp is OK)


Salut,
Your card is a 2x8-pin, so you want to flash a 2x8-pin bios?

The best one I tested was this one :









Gigabyte RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com
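For anyone following along, flashing a TechPowerUp vBIOS usually looks roughly like this (a hedged sketch, not instructions from this thread; the filenames are placeholders, and cross-flashing is entirely at your own risk):

```shell
# Typical nvflash session (run elevated; filenames are placeholders).
# Always back up the card's original vBIOS first so you can recover.
nvflash --save original_backup.rom
nvflash --protectoff
nvflash -6 Gigabyte.RTX3090.rom   # -6 overrides the PCI subsystem ID mismatch prompt
```

If anything misbehaves afterwards, the saved backup can be flashed back the same way.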


----------



## yzonker

Falkentyne said:


> Everyone seems to have completely ignored and are still ignoring what I said.
> The Kingpin 1000W rebar vbios has SRC3 power limits unchanged. They are still 150W default, 175W maximum.
> This limits 8 pin #3's maximum power draw before a PWR flag is set.
> The Strix XOC2 vbios I posted months ago has the same problem.
> The original KP 1000W vbios has proper limits. (SRC1/2/3 are all like 400W or something)
> I'm guessing the Galax (non rebar?) one that was posted recently works as well (don't know).
> I can't test these bioses.


I read your post but was confused by it. I'm running it on my Zotac 3090 Trinity, a 2x8-pin card. The 3rd connector reading is a duplicate of the 1st, IIRC. It's 200W+ and hit 550W total power last night, just like the old KP XOC.


----------



## GRABibus

newls1 said:


> how is this PR Score using 520w XOC Bios? I scored 14 934 in Port Royal


Good, you should easily break 15k.
Your average GPU temp is 37°C, so you are on water?


----------



## tps3443

Nizzen said:


> I got 15499 on air with stock cooler on x299. 3090 HOF
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 499 in Port Royal
> Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com


Oh, I've already got you, lol.

In my 15,506 run, I had re-bar disabled, I had my 7980XE at stock frequencies for lower water temps, and I didn't have a functioning home air conditioning system. Crazy seeing your air-cooled card running cooler than mine, lol.

My ambients were 27.2C so pretty rough.

I have gotten those down to 21.6C ambients, Re-Bar enabled, and 7980XE back to 5Ghz.

I will be posting my follow up runs with (Home AC) in about 8 hours lol.


----------



## GRABibus

Arizor said:


> Great to see things like this coming out, makes watercooling much more accessible to the average builder - Alphacool Eiswolf 2 AIO - 360mm RTX 3090/3080 with Backplate (Reference)


I can't find anywhere to buy it in Europe from a reseller.

Only at Aquatuning directly:








Alphacool Eiswolf 2 AIO - 360mm RTX 3080/3090 Ventus with Backplate
The Alphacool Eiswolf 2 is the first full cover GPU AIO waterblock from Alphacool. It is based on the Alphacool GPX Eisblock Aurora GPX water block, a pump unit and a 360mm NexXxoS ST30 full copper radiator.
www.aquatuning.fr


----------



## newls1

GRABibus said:


> Good, you should easily break 15k
> Your average GPU temp is 37°C => So you are on water ?


Yes sir, I've said I was on water a few times already... my loop is huge for just this GPU: a 60mm 420 rad in push/pull and a 240 in push, with a D5 and a 250ml res. Lots of water and lots of heat dissipation potential! I would love to try this 1000W bios, but I'm deathly scared of popping the fuses on this FTW3.


----------



## Falkentyne

yzonker said:


> I read your post but was confused by it. I'm running it on my Zotac 3090 trinity 2x8pin card. The 3rd connector is a duplicate of the 1st IIRC. It's 200w+ and hit 550w total power last night just like the old KP XOC.


Your card doesn't use an 8 pin #3 connector. So you're going to get power limited anyway like before, and the wrong SRC3 limits won't affect you the same way, as your hardware chip only has SRC1 and SRC2, with SRC3 glitched as SRC1.

The issue is that anyone with a 3x8-pin card is going to hit a power limit as soon as 8 pin #3 hits 175W.
Scroll up and see that one guy's post where he saw no increase in power draw past 450W with the KP rebar vbios on his Strix (I think, might be wrong about what card he used).
That's because he's being limited by SRC3.

----------



## GRABibus

Falkentyne said:


> Your card doesn't use an 8 pin #3 connector. So you're going to get power limited anyway like before and the wrong SRC3 limits won't affect you the same way as your hardware chip only has SRC1 and 2 with SRC3 glitched as SRC1.
> .
> The issue is that anyone with a 3x8 pin card is going to hit a power limit as soon as 8 pin #3 hits 175W.
> Scroll up and see that one guy's post where he saw no increase in power draw past 450W with the KP Rebar vbios on his Strix (I think, might be wrong what card he used).
> That's because he's being limited by SRC3.


with GALAX 1000W non rebar bios, you don't have this issue.


----------



## des2k...

...


----------



## newls1

Falkentyne said:


> Your card doesn't use an 8 pin #3 connector. So you're going to get power limited anyway like before and the wrong SRC3 limits won't affect you the same way as your hardware chip only has SRC1 and 2 with SRC3 glitched as SRC1.
> .
> The issue is that anyone with a 3x8 pin card is going to hit a power limit as soon as 8 pin #3 hits 175W.
> Scroll up and see that one guy's post where he saw no increase in power draw past 450W with the KP Rebar vbios on his Strix (I think, might be wrong what card he used).
> That's because he's being limited by SRC3.


so are you saying this 1000w rebar bios is bunked?


----------



## Nizzen

tps3443 said:


> Oh Ive already gotcha Lol.
> 
> In my 15,506 run, I had re-bar disabled, I had my 7980XE at stock frequencies for lower water temps, and I didn’t have a functioning home air conditioning system. Seeing as your air cooled card running cooler than mine Lol.
> 
> My ambients were 27.2C so pretty rough.
> 
> I have gotten those down to 21.6C ambients, Re-Bar enabled, and 7980XE back to 5Ghz.
> 
> I will be posting my follow up runs with (Home AC) in about 8 hours lol.


You'd better beat my friend, because he's using a watercooled (~17-20C water) MSI 3090 Suprim, and a cheap MSI motherboard








I scored 16 061 in Port Royal
Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com





You have at least +500 points to gain just from controlling all the voltages on the card


----------



## Sheyster

GRABibus said:


> what do you mean by old bios ?


I meant the old KP 1000W BIOS without re-bar.


----------



## tps3443

GRABibus said:


> I think it is real, as when capping at 55%, I get 562W (see my last post)


That's wild, 680 watts, lol. After my highest run last night, my KP never exceeded 560 watts, with no power limits set using the KP 1kW bios.

And I'm dumping voltages everywhere (fingers crossed the whole time that I wouldn't pop something with the Classified tool).


----------



## kryptonfly

GRABibus said:


> Salut,
> Your card is a 2x8-pin, so you want to flash a 2x8-pin bios?
> 
> The best one I tested was this one :
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Gigabyte RTX 3090 VBIOS
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com


Salut 
Thanks, but I already flashed with the 390W Gaming OC bios (the best for the Turbo) a few months ago, and it's working fine. I just want at least ~420-450W. Do you have the GALAX 1000W non-rebar for 2x8-pin (Gigabyte Turbo PCB)? It's under water, 50°C @ 390W.


----------



## tps3443

Nizzen said:


> You better beat my friend, because he using a watercooled (~17-20c water) Msi 3090 Suprim, and a cheap msi motherboard
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 061 in Port Royal
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com
> 
> 
> 
> 
> 
> You have at least +500 points to gain just from controlling all the voltages on the card


Temperature is king, for sure. I'm still learning the Classified tool; I've never used it before. This tool is certainly how any noob could insta-kill a 3090 KP.


Anyways, when are you getting a block?


----------



## Nizzen

Falkentyne said:


> Your card doesn't use an 8 pin #3 connector. So you're going to get power limited anyway like before and the wrong SRC3 limits won't affect you the same way as your hardware chip only has SRC1 and 2 with SRC3 glitched as SRC1.
> .
> The issue is that anyone with a 3x8 pin card is going to hit a power limit as soon as 8 pin #3 hits 175W.
> Scroll up and see that one guy's post where he saw no increase in power draw past 450W with the KP Rebar vbios on his Strix (I think, might be wrong what card he used).
> That's because he's being limited by SRC3.


Like this:
Total PC power draw from the wall is ~720-750W in PR.


----------



## yzonker

Falkentyne said:


> Your card doesn't use an 8 pin #3 connector. So you're going to get power limited anyway like before and the wrong SRC3 limits won't affect you the same way as your hardware chip only has SRC1 and 2 with SRC3 glitched as SRC1.
> .
> The issue is that anyone with a 3x8 pin card is going to hit a power limit as soon as 8 pin #3 hits 175W.
> Scroll up and see that one guy's post where he saw no increase in power draw past 450W with the KP Rebar vbios on his Strix (I think, might be wrong what card he used).
> That's because he's being limited by SRC3.


Well, I mentioned it because the Asus XOC did appear to limit on the 3rd 8-pin, which has a 150W limit. Maybe we're talking about two different limits; I'm not in front of my computer to check everything.


----------



## Nizzen

tps3443 said:


> Temperature is king for sure. I’m still learning classified tool. Never used it before. This tool is certainly how any noob could just insta-kill a 3090 KP.
> 
> 
> Anyways, when you getting a block?


I think I will order a Bykski block after summer. 3-week vacation in a few days.


----------



## Falkentyne

newls1 said:


> so are you saying this 1000w rebar bios is bunked?





yzonker said:


> Well I mentioned it because the Asus XOC did appear to limit on the 3rd 8pin because it has a 150w limit on the 3rd 8pin. Maybe we're talking about 2 different limits. I'm not in front of my computer to look at everything.


*elmor — Today at 1:38 AM*
Well I checked with a hex editor and the SRC3 power limit seems to still be 150W



_[_1:38 AM_]_
So I don't think that part is changed unfortunately

*Falkentyne — Today at 1:40 AM*
that limits 8 pin #3 to 150W doesn't it?



_[_1:41 AM_]_
then if 8 pin #3 exceeds 150W it triggers a TDP Normalized power limit for bypassing the SRC3? Is that why that guy on overclock.net couldn't pass 500W?



_[_1:45 AM_]_
that would cause power throttling unless the TDP slider can FAR exceed 100% (which would govern how far 8 pin #3 can exceed the SRC3 power limit). @elmor can you check the "maximum" value for SRC3? there's a default and a maximum. if the maximum is set to like 400-500W, then the default isn't important now.










*elmor — Today at 1:48 AM*
Same 175W










*Falkentyne — Today at 1:49 AM*
Yikes



_[_1:55 AM_]_
Can you fix it and sign the bios somehow?



_[_1:55 AM_]_
or is the almighty Elmor still enslaved by the Green Goblin ?








*elmor — Today at 2:56 AM*
We're checking with the VGA team again if they can change this value


----------



## Nizzen

Galax 1000w no rebar on strix









Kingpin 1000w Rebar on strix










Total powerdraw looks the same from the wall....


----------



## GAN77

Nizzen said:


> Galax 1000w no rebar on strix


Is this BIOS version?


----------



## Falkentyne

GAN77 said:


> Is this BIOS version?


That is the vbios, but NO version of ABE can read the power values correctly, due to them being "offset" from previous bios versions. That vbios came out after ABE was abandoned or made super private.
Trying to read the KP rebar bios crashes the application.


----------



## GRABibus

Nizzen said:


> Galax 1000w no rebar on strix
> View attachment 2519067
> 
> 
> Kingpin 1000w Rebar on strix
> View attachment 2519068
> 
> 
> 
> Total powerdraw looks the same from the wall....


Sorry, but from what do you conclude this?


----------



## GRABibus

kryptonfly said:


> Salut
> Thanks but I already flashed with the 390W Gaming OC bios (best for Turbo) few months ago, working fine. I just want at least ~420-450W, do you have GALAX 1000W non rebar for 2x8-pin (Gigabyte Turbo PCB) ? It's under water, 50°C @390W


No.
You can try the GALAX 1000W non-rebar and cap the PL at 45%... At your own risk, as you only have two 8-pin connectors!


----------



## GRABibus

tps3443 said:


> That’s wild 680 watts lol, after my highest run last night my KP never exceeded 560 watts. No power limits set using the KP 1K watt bios.
> 
> And I’m dumping voltages everywhere. (fingers crossed the whole time I wouldn’t pop something with classified tool)


I was with GALAX 1000W non Rebar bios.
I think power readings are completely different between the GALAX 1000W and KP 1000W Rebar.


----------



## Falkentyne

Nizzen said:


> Galax 1000w no rebar on strix
> View attachment 2519067
> 
> 
> Kingpin 1000w Rebar on strix
> View attachment 2519068
> 
> 
> 
> Total powerdraw looks the same from the wall....


Your 8 pin #3 is glitched and not reporting a value, which is a _GOOD_ thing. It is actually drawing about 160W (a few months ago a user put a current clamp on an 8 pin #3 that was reporting only 2W and saw it drawing proper power), but it isn't reporting that to the controller chip. This is a good thing because you won't run into the 175W power limit.

For people (3x8 pin boards) that DO get a proper, non-glitched/non-duplicated value for 8 pin #3, they WILL get throttled when 8 pin #3 reaches 175W (Normalized TDP% limit), unless their power limit slider goes far past 125% -- e.g. a 200% power limit TDP slider would allow 150W * 2 from 8 pin #3 (300W).
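The slider arithmetic above can be sketched in a few lines (the 150W SRC3 default, the 175W maximum elmor found, and the proportional slider scaling are all taken from this discussion, not from any official NVIDIA documentation):

```python
# Illustrative sketch of the SRC3 throttle condition described above.
# Both the 150W default rail limit and the "slider scales the cap
# proportionally" behaviour are assumptions drawn from this thread.

def src3_effective_cap(base_limit_w: float, tdp_slider_pct: float) -> float:
    """Per-rail cap after scaling by the power-limit slider."""
    return base_limit_w * (tdp_slider_pct / 100.0)

def throttles(pin3_draw_w: float, tdp_slider_pct: float,
              base_limit_w: float = 150.0) -> bool:
    """True if the 8-pin #3 draw would trip the normalized TDP limit."""
    return pin3_draw_w > src3_effective_cap(base_limit_w, tdp_slider_pct)

print(throttles(160, 100))  # True: past the 150W default cap
print(throttles(290, 200))  # False: a 200% slider doubles the cap to 300W
```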


----------



## J7SC

Nizzen said:


> Galax 1000w no rebar on strix
> View attachment 2519067
> 
> 
> Kingpin 1000w Rebar on strix
> View attachment 2519068
> 
> 
> 
> Total powerdraw looks the same from the wall....


The difference in PCIe slot power looks interesting between the Galax and KP. On my Strix, PCIe slot power is usually the one thing that stays fairly constant w/ different vBios, including KP 520 r_BAR, Strix V3 & others


----------



## Falkentyne

J7SC said:


> The difference in PCIe slot power looks interesting between the Galax and KP. On my Strix, PCIe slot power is usually the one thing that stays fairly constant w/ different vBios, including KP 520 r_BAR, Strix V3 & others


Um...
He had it set to idle (or whatever current reading was) in one of them and 'peak (max)' in the other one. They can't be compared.


----------



## Nizzen

Falkentyne said:


> Um...
> He had it set to idle (or whatever current reading was) in one of them and 'peak (max)' in the other one. They can't be compared.


I didn't set anything.
I ran PR with gpu-z open with Kp rebar bios. Took screenshot.
Then I flashed to galax 1000w no rebar, ran PR with gpu-z open, then took a screenshot.

On both pictures, the MAX value is shown for the 8-pins, PCIe slot power, and board power draw.

Maybe I misunderstand you?


----------



## J7SC

Falkentyne said:


> Um...
> He had it set to idle (or whatever current reading was) in one of them and 'peak (max)' in the other one. They can't be compared.


...nope:












Nizzen said:


> I didn't set anything.
> I ran PR with gpu-z open with Kp rebar bios. Took screenshot.
> Then I flashed to galax 1000w no rebar,ran PR with gpu-z open, then took a screenshot.
> 
> On both pictures, the MAX value is shown 8 pin, pci-e slot power and board power draw.
> 
> Maybe I misunderstand you?


----------



## J7SC

@Nizzen ...wasn't there a r_BAR exe tool linked to Galax' site to update a regular vbios to r_BAR, or was that just for 3080 / Ti and/or non-XOC ?


----------



## Nizzen

J7SC said:


> @Nizzen ...wasn't there a r_BAR exe tool linked to Galax' site to update a regular vbios to r_BAR, or was that just for 3080 / Ti and/or non-XOC ?








Resizable bar BIOS update (Galaxy Microsystems Ltd.)
www.galax.com
PS: The tool worked with my 3090 HOF card with the stock bios


----------



## GRABibus

Nizzen said:


> Resizable bar BIOS update
> 
> 
> Galaxy Microsystems Ltd.
> 
> 
> 
> 
> www.galax.com
> 
> 
> 
> 
> View attachment 2519084
> 
> 
> PS: The tool worked with my 3090 HOF card with the stock bios


and unfortunately it doesn’t work with the 1000W bios


----------



## tps3443

Nizzen said:


> Resizable bar BIOS update
> 
> 
> Galaxy Microsystems Ltd.
> 
> 
> 
> 
> www.galax.com
> 
> 
> 
> 
> View attachment 2519084
> 
> 
> PS: The tool worked with my 3090 HOF card with the stock bios


This tool works on the KP cards too. But I’ve seen it fail on a Gigabyte 3080Ti? So I dunno.


----------



## J7SC

Nizzen said:


> Resizable bar BIOS update
> 
> 
> Galaxy Microsystems Ltd.
> 
> 
> 
> 
> www.galax.com
> 
> 
> 
> 
> View attachment 2519084
> 
> 
> PS: The tool worked with my 3090 HOF card with the stock bios


Tx, but


----------



## Lobstar

Falkentyne said:


> For people (3x8 pin boards) that DO get a proper non glitched/non duplicated value for 8 pin #3, they WILL get throttled when 8 pin #3 reaches 175W (Normalized TDP% limit), unless their Power limit slider goes far past 125%--e.g. 200% power limit TDP slider would allow 150W * 2 from the 8 pin #3 (300W).


Joke's on you. I have an EVGA 3090 FTW3U, so my 3rd 8-pin never goes over 100W. :3head:


----------



## PLATOON TEKK

tps3443 said:


> That’s a beautiful block either way. And that’s how I look at my KP HC waterblock. Damn thing was $350 with taxes and shipping lol. Once upon a time, money like that would buy you a latest and greatest mid range GPU.
> 
> The Kingpin hydro copper isn’t the best performer either. Although, Ive got some new pads, tim, etc in route to hopefully help it out some more with the high deltas. My high ambient temps certainly don’t help.
> 
> Im thinking I need to invest in to chilled water my self too. You’ve gotta pay to play I suppose.


I definitely feel you on that. These blocks go for $299 on Bykski US, but I was able to source them much cheaper (eBay). Still madness, considering years ago $300 for a block would have been April Fools' level.

I'm all for chillers; check aquarium stores for much cheaper deals (if you can find a 1/4 adapter), but you'll need a pump and to start considering dew points. Also make sure you have enough wattage.

I used the Bykski-supplied pads and Thermal Grizzly. Ran Quake II RTX to test temps real quick: 2055MHz / 22°C ambient / 42°C full load / 54°C hot spot / 52°C memory.

With the RGB on, the backplate nearly looks like a second GPU, ha. I definitely still prefer Optimus for both looks and temps; I doubt they will be making HOF blocks though. Instant cop for my KPs hopefully.


----------



## Falkentyne

Nizzen said:


> I didn't set anything.
> I ran PR with gpu-z open with Kp rebar bios. Took screenshot.
> Then I flashed to galax 1000w no rebar,ran PR with gpu-z open, then took a screenshot.
> 
> On both pictures, the MAX value is shown 8 pin, pci-e slot power and board power draw.
> 
> Maybe I misunderstand you?


I confused PWR_SRC with PCIE Slot Power.
Apparently I've lost so many 1 minute chess games today that I can't read anymore.
I apologize.


----------



## tps3443

Benchmarking in Port Royal is fun. But holy crap, guys!!! I downloaded Cyberpunk 2077 last night after a long break from it, and just played it during my lunch break at home. (First time on my 3090 KP.)

So far I have the 1000 watt KP bios installed on my 3090 KP HC, running 2144MHz (internal clock) and GDDR6X at 23,000MHz, with FBVVD voltage at 1.490-1.5V, MSVDD at 1.110V, and NVVDD at 1.250V within Classified Tool.

This card destroys this game.

I play at 2560x1440 and I am probably getting an easy 50% performance gain over my previous heavily overclocked 2080 Ti.

RTX 3000 really shines in heavy ray tracing titles.

What a shame I can't wipe my memory of CP77 and start over, only this time with a 3090.


----------



## Falkentyne




----------



## West.

PLATOON TEKK said:


> I definitely feel you on that. These blocks go for $299 on Bykski US, but was able to source for much cheaper (eBay). Still madness considering years ago $300 for a block would have been April’s fools level.
> 
> I’m all for chillers, check aquarium stores for much cheaper deals (if you find 1/4 adapter), but you’ll need a pump and start consider dew points. Also make sure you have enough wattage.
> 
> I used the Bykski supplied pads and thermal grizzly. Ran quake 2 rtx to test temps real quick: 2055mhz/ 22c ambient/ 42c full load / 54c hot spot / 52 memory.
> 
> With the RGB on, the backplate nearly looks like a second gpu ha. I definitely still prefer Optimus for both looks and temps, doubt they will be making HOF blocks tho. Instant cop for my KPs hopefully.
> 
> View attachment 2519082
> 
> View attachment 2519083


Damn, $300 is crazy. This exact block usually sells for RMB 950 (~USD 150) on Taobao in China, and I have seen it go as low as RMB 750 (~USD 120) during big sale events.


----------



## PLATOON TEKK

West. said:


> Damn, $300 is crazy. This exact block usually sells for RMB 950 (~USD 150) on Taobao in China, and I have seen it go as low as RMB 750 (~USD 120) during big sale events.


Was surprised myself; I saw it as low as $145 USD but missed out, and ended up paying $175 for each.

It's up on Bykski USA right now for $299.

Definitely not worth that.


----------



## KedarWolf

PLATOON TEKK said:


> Was surprised myself, saw it as low as $145 USD but missed out, ended up paying $175 for each.
> 
> Bykski USA
> 
> Is up right now for $299.
> 
> Definitely not worth that.


Those blocks are usually cheap on AliExpress.


----------



## newls1

Falkentyne said:


> *elmor — Today at 1:38 AM*
> Well I checked with a hex editor and the SRC3 power limit seems to still be 150W
> 
> 
> 
> _[_1:38 AM_]_
> So I don't think that part is changed unfortunately
> 
> *Falkentyne — Today at 1:40 AM*
> that limits 8 pin #3 to 150W doesn't it?
> 
> 
> 
> _[_1:41 AM_]_
> then if 8 pin #3 exceeds 150W it triggers a TDP Normalized power limit for bypassing the SRC3? Is that why that guy on overclock.net couldn't pass 500W?
> 
> 
> 
> _[_1:45 AM_]_
> that would cause power throttling unless the TDP slider can FAR exceed 100% (which governs how far 8 pin #3 can exceed the SRC3 power limit). @elmor can you check the "maximum" value for SRC3? there's a default and a maximum. if the maximum is set to like 400-500W, then the default isn't important now.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *elmor — Today at 1:48 AM*
> Same 175W
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Falkentyne — Today at 1:49 AM*
> Yikes
> 
> 
> 
> _[_1:55 AM_]_
> Can you fix it and sign the bios somehow?
> 
> 
> 
> _[_1:55 AM_]_
> or is the almighty Elmor still enslaved by the Green Goblin ?
> 
> 
> 
> 
> 
> 
> 
> 
> *elmor — Today at 2:56 AM*
> We're checking with the VGA team again if they can change this value


I'm nowhere near as smart as you and others here, but I'm taking this to mean I still won't exceed ~500 watts, because 8-pin #3 is still very limited? Thanks again for your valuable input, as always


----------



## yzonker

Falkentyne said:


> Your card doesn't use an 8 pin #3 connector. So you're going to get power limited anyway like before and the wrong SRC3 limits won't affect you the same way as your hardware chip only has SRC1 and 2 with SRC3 glitched as SRC1.
> .
> The issue is that anyone with a 3x8 pin card is going to hit a power limit as soon as 8 pin #3 hits 175W.
> Scroll up and see that one guy's post where he saw no increase in power draw past 450W with the KP Rebar vbios on his Strix (I think, might be wrong what card he used).
> That's because he's being limited by SRC3.


Yeah, I guess I don't fully understand how the power limiting works. I thought I had seen another bios get stuck on that 3rd 8-pin despite it not really existing. I assumed the bios doesn't know that and limits whenever it sees any of the 8-pins hit the defined limit. Possibly it was really limiting on something else and it just appeared that way. I thought it was the last Asus XOC that has the limit stuck at 150W, but I may have misremembered. I've flashed a lot of BIOSes on my 3090 and 3080 Ti over the last few months. LOL


----------



## des2k...

newls1 said:


> I'm nowhere near as smart as you and others here, but I'm taking this to mean I still won't exceed ~500 watts, because 8-pin #3 is still very limited? Thanks again for your valuable input, as always


And... that's probably the reason why this bios was kept super quiet and out of the public eye:

a Kingpin 1000W rebar vbios that works on 2x8-pin and Strix cards...

A stupid mistake; they knew the 1000W XOC worked, so why didn't they use the same values???


----------



## Falkentyne

des2k... said:


> and... that's prob the reason why this bios was kept super quiet out of public
> 
> a Kingpin 1000w rebar vbios that works on 2x8pin and strix....


When someone gets it working on an FE, I'll eat a frog and record myself doing it too.
It can already be "flashed" with the 5.670 with ID checks disabled, but no one is brave enough to even try it, since no adapter socket or Pomona clip exists for UDFN8 (if someone finds one, tell me).
The only flash so far was a 3090 FE bios on a Strix, and it caused a "Load VGA Bios" POST code error.
All other cards seem to be working fine (5.670 doesn't support the 3070 Ti or 3080 Ti)
(rename from .txt to .rar or remove the .txt extension)


----------



## yzonker

des2k... said:


> and... that's prob the reason why this bios was kept super quiet out of public
> 
> a Kingpin 1000w rebar vbios that works on 2x8pin and strix....
> 
> a stupid mistake, they knew 1000w xoc worked why didn't they use the same values ???


I figured it was just wanting to keep the KP card a little more special. It's their halo product. I'd be willing to bet there are at least some KP owners that get irked that people are running the same 1kw bios with at least similar performance (obviously the KP has some tuning advantages).


----------



## yzonker

Huh, maybe I'm just catching on (or not). How the heck does the KP XOC bios (rebar) work on the KP card if it's limited to 150w on the 3rd 8pin? We know it works there.


----------



## J7SC

yzonker said:


> I figured it was just wanting to keep the KP card a little more special. It's their halo product. I'd be willing to bet there are at least some KP owners that get irked that people are running the same 1kw bios with at least similar performance (obviously the KP has some tuning advantages).


That's a good point, though I approached it from the other end... picked up a Strix at a good price, knowing full well that a good KP bios would come along  Also, folks like Vince with their HWBot background would know it would eventually leak... anyhow, back in early February when I got the Strix, it was already getting past 15k in PR on air and the stock bios. KP 520 r_BAR plus other driver updates are just icing on the cake. For my daily use and gaming, I actually run the stock Asus r_BAR.

When I finish my w-cooling updates currently underway, I might give the KP 1000 r_BAR a shot just for some select benching...though I still hope Strix owners can also get a look at the Galax HoF 1000 r_BAR as that does seem to show correct power consumption detail unlike the KP 1000 on the Strix.


----------



## yzonker

Strix would actually be my first choice, but they've been pretty much unobtanium for me. My previous gen card was a Strix 2080S. Still like that card. It's got a home in one of my HTPC's. Maybe I can get back to one with the 40 series next year, although I'm not really confident in that.


----------



## tps3443

yzonker said:


> I figured it was just wanting to keep the KP card a little more special. It's their halo product. I'd be willing to bet there are at least some KP owners that get irked that people are running the same 1kw bios with at least similar performance (obviously the KP has some tuning advantages).


*Binned die
*Binned memory (no handwriting on the memory, but I assume it's binned due to the amazing OC)
*Classified Tool
*Match internal clock with MSVDD voltages

Factory water cooled for $2,090

Aside from all of this, I think the most impressive thing is probably being able to run such ridiculous overclocks on the memory and GPU, actually stable in games.

If it crashes, you actually have something you can do about it... besides just turning down an OC or turning up a fan lol

I'm at a pretty amazing +19% memory bandwidth boost in games.

But honestly... you need good watercooling and a chiller for full potential with one, and LN2 for best results. But to each his own, of course.
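For what it's worth, a bandwidth boost of that order checks out from the raw numbers (a quick sketch; the 23,000 MT/s effective rate comes from tps3443's earlier post, 19,500 MT/s is the card's stock rate, and the 384-bit bus is from the 3090 spec sheet):

```python
# Peak GDDR6X bandwidth on the 3090's 384-bit bus: bytes per transfer * rate.
BUS_BITS = 384

def bandwidth_gb_s(effective_mt_s: float, bus_bits: int = BUS_BITS) -> float:
    """Peak bandwidth in GB/s for a given effective data rate (MT/s)."""
    return (bus_bits / 8) * effective_mt_s / 1000.0

stock = bandwidth_gb_s(19500)  # 936.0 GB/s, matching the rated spec
oc = bandwidth_gb_s(23000)     # 1104.0 GB/s at the overclock above
print(f"+{oc / stock - 1:.1%}")  # roughly +18%, in line with the quoted figure
```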


----------



## Falkentyne

tps3443 said:


> *Binned die
> *Binned memory (No handwriting on the memory
> but I assume it’s binned due to amazing OC)
> *Classified tool
> *Match internal clock with MSVDD voltages
> 
> Factory water cooled for $2,090
> 
> Aside from all of this, I think the most impressive thing is probably being able to run such ridiculous overclocks on the memory and GPU actually stable in games.
> 
> If it crashes, you actually have something to
> do about it... Besides just turning down an OC, or turning up a fan lol
> 
> 
> Im at a pretty amazing +19% memory bandwidth boost In games.
> 
> But honestly... You need good watercooling and chiller for full potential with one. And LN2 for best results. But to each his own Of course.


----------



## wtf_apples

Signature GPU Block - Kingpin 3090 - Optimus Water Cooling (optimuspc.com) 

Glad I got a hydro copper block.


----------



## Trevbev

That’s a seriously expensive block!
Do you think the weight would be a concern with the copper back plate?


----------



## kryptonfly

GRABibus said:


> No.
> You can try the GALAX 1000W non-rebar and cap the PL at 45%... At your own risk, as you only have two 8-pin connectors!


So I tried the EVGA 1000W bios but it's worse on the Gigabyte Turbo / Gaming OC PCB. All the probes are crazy: GPU temp at 40°C instead of ~48°C, VRAM showing 100% load and ~100W at idle. It reached ~560W at PL 50% under the FFXIV Endwalker bench and my PC shut down many times (power supply TX-750W); I think it's due to fast "peaks" from 350 to 500W within a few seconds. I even saw 680W at PL 55!! But the worst part: perfs are bad, ~1800MHz at 800mV and 450W in this bench = perf cap reason "power limit". I tried +180 on the core and tweaking the curve, but I can't achieve the perf I had before. I reflashed with the Gaming OC bios and I'll stick with 390W... I'll soon improve temps with 3x360mm more. Maybe a shunt mod would help, but I prefer to get temps as low as I can first.


----------



## ENTERPRISE

Hey All, 

I was reading some of the posts here. I have an EVGA 3090 FTW3 Ultra (now under WC), so I was considering what would be the best BIOS to go with that has rBAR enabled. I have been reading the comments on the KP 1000W BIOS, but it seems it is not all it's cracked up to be. Happy to hear some suggestions.


----------



## newls1

ENTERPRISE said:


> Hey All,
> 
> I was reading some of the posts here, I have an EVGA 3090 FTW3 Ultra (Now under WC). So I was considering what would be the best BIOS to go with that has RBAR enabled. I have been reading the comments on the KP1000Watt BIOS but seems it is not all what it is cracked up to be. Happy to hear some suggestions.


I've asked this very same question here a few times, and as usual Falkentyne answers with very detailed replies, but I'm not smart enough to understand them! I'm guessing we should stay clear of this bios because our FTW3 cards have fuses and they are only 10A... more than likely they will pop.....


----------



## Thanh Nguyen

wtf_apples said:


> Signature GPU Block - Kingpin 3090 - Optimus Water Cooling (optimuspc.com)
> 
> Glad I got a hydro copper block.


Just ordered. Now it's a waiting game to see the performance.


----------



## yzonker

ENTERPRISE said:


> Hey All,
> 
> I was reading some of the posts here, I have an EVGA 3090 FTW3 Ultra (Now under WC). So I was considering what would be the best BIOS to go with that has RBAR enabled. I have been reading the comments on the KP1000Watt BIOS but seems it is not all what it is cracked up to be. Happy to hear some suggestions.


The KP XOC bios would probably work well on your card, but it does have some inherent risk: no thermal limits and, in the case of the FTW3, the potential to blow a fuse depending on how well your card is load balanced.

The highest PL among the safe BIOSes is the KP LN2 bios with its 520W limit.


----------



## yzonker

newls1 said:


> I've asked this very same question here a few times and as usual "falkentyne" answers with very detailed replies, but im not smart enough to understand them! Im guessing we should stay clear of this bios cause our FTW3 cards have fuses and they are only 10A... More then likely will pop.....


20A on the 8-pins (240W). 10A on the PCIe slot for the original 3090 FTW3, but I think someone showed the new revision is 8A. My 3080 Ti FTW3 has an 8A PCIe slot fuse.
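Those amp ratings convert to wattage via the nominal 12V rail (a back-of-the-envelope sketch; real fuses tolerate brief overcurrent, so treat these as rough ceilings, not exact trip points):

```python
# Rough fuse power ceilings, assuming an ideal 12V rail.
RAIL_VOLTAGE = 12.0

def fuse_watts(amp_rating: float, volts: float = RAIL_VOLTAGE) -> float:
    """Nominal power a fuse of the given rating can pass: P = I * V."""
    return amp_rating * volts

print(fuse_watts(20))  # 240.0 W per 8-pin (original FTW3)
print(fuse_watts(10))  # 120.0 W PCIe slot (original 3090 FTW3)
print(fuse_watts(8))   # 96.0 W PCIe slot (newer revision / 3080 Ti FTW3)
```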


----------



## GRABibus

kryptonfly said:


> So I tried EVGA 1000W bios but it's worse on Gigabyte Turbo, GamingOC PCB. All probes are crazy, gpu temp at 40°C instead of ~48°C, vram at 100% load ~100W in idle and it reached ~560W PL 50% under FFXIV Endwalker bench and my pc shutted down many times = power supply TX-750W, I think it's due to fast "peaks" from 350 to 500W in few seconds, I even saw 680W PL 55 !! But the worst : perfs are bad, ~1800mhz 800mv 450W in this bench = perf cap reason "power limit", I tried +180 on core or tweak the curve but I can't achieve perf like before. I reflashed with GamingOC bios, I stick with 390W... I'll soon improve temp with 3x360mm more. Maybe shunt mod would help but I prefer to tweak temp as low as I can first.


Salut,

yes, you are fully power limited

Did you try the GALAX 1000W one and cap at 40% ? Still at your own risk....

In my case, this works much better than the KP 1000W with capped PL, which has no effect.


----------



## Falkentyne

kryptonfly said:


> So I tried EVGA 1000W bios but it's worse on Gigabyte Turbo, GamingOC PCB. All probes are crazy, gpu temp at 40°C instead of ~48°C, vram at 100% load ~100W in idle and it reached ~560W PL 50% under FFXIV Endwalker bench and my pc shutted down many times = power supply TX-750W, I think it's due to fast "peaks" from 350 to 500W in few seconds, I even saw 680W PL 55 !! But the worst : perfs are bad, ~1800mhz 800mv 450W in this bench = perf cap reason "power limit", I tried +180 on core or tweak the curve but I can't achieve perf like before. I reflashed with GamingOC bios, I stick with 390W... I'll soon improve temp with 3x360mm more. Maybe shunt mod would help but I prefer to tweak temp as low as I can first.


The SRC3 power limit wasn't changed; it still limits 8 pin #3 to 150W, and when it reaches that you throttle via TDP Normalized %.
Elmor looked at it.
If your board is bugged and only reports 2W on 8 pin #3, it won't throttle you.


----------



## Nizzen

Strix xoc rebar bios in testing


----------



## ENTERPRISE

newls1 said:


> I've asked this very same question here a few times and as usual "falkentyne" answers with very detailed replies, but im not smart enough to understand them! Im guessing we should stay clear of this bios cause our FTW3 cards have fuses and they are only 10A... More then likely will pop.....





yzonker said:


> The KP XOC bios would probably work well on your card, but it does have some inherent risk with no thermal limits and in the case of the FTW3, the potential to blow a fuse depending on how well your card is load balanced.
> 
> The highest PL of safe bios' is the KP LN2 bios with 520w limit.


Thanks guys. Guess it would be better if I kept to the KP LN2 BIOS then, for the sake of safety. Any idea if it is rBAR enabled?

I assume it is this one ? EVGA RTX 3090 VBIOS


----------



## yzonker

ENTERPRISE said:


> Thanks guys. Guess it would be better if I kept to the KP LN2 BIOS then for the sake of safety. Any idea if it is RBAR enabled ?
> 
> I assume it is this one ? EVGA RTX 3090 VBIOS


This one. It generally needs to have a date on or after 3/29 or so to have reBAR.









EVGA RTX 3090 VBIOS
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## J7SC

ENTERPRISE said:


> Hey All,
> 
> I was reading some of the posts here, I have an EVGA 3090 FTW3 Ultra (Now under WC). So I was considering what would be the best BIOS to go with that has RBAR enabled. I have been reading the comments on the KP1000Watt BIOS but seems it is not all what it is cracked up to be. Happy to hear some suggestions.


There are a couple of options (subject to the exact 3x8-pin EVGA FTW3 model / revision you have)... I run a w-cooled Strix OC, and the KPE 520 r_BAR is the standard 'upgrade' for many folks. Others like the SuprimX or the non-XOC Galax HoF.



Nizzen said:


> Strix xoc rebar bios in testing


...any download link, pls ? My vBios folder still has lots of room


----------



## KedarWolf

Trevbev said:


> That’s a seriously expensive block!
> Do you think the weight would be a concern with the copper back plate?


With all the Optimus blocks, if your card is mounted horizontally, you're going to need a GPU support bracket.

And I already knew it was going to be around $600, but it comes with a water-cooled active backplate that cools the entire back of the card, not just the area around the memory modules like other active backplates do.

The GPU blocks Optimus puts out are the best in the business.


----------



## GRABibus

Nizzen said:


> Strix xoc rebar bios in testing


So, good results ?


----------



## tps3443

Optimus kingpin 3090... DANG!!! $598?????









Signature GPU Block - Kingpin 3090
The Optimus Signature Kingpin block is here! Our first ever Signature GPU block is designed to take advantage of the EVGA Kingpin's extreme performance. We've put everything into this block, and then some. And you asked for the ultimate block with an active backplate to match, so we're launching...
optimuspc.com


----------



## KedarWolf

Falkentyne said:


> The SRC3 power limit wasn't changed limiting 8 pin #3 to 150W, and when it reaches that you throttle via TDP Normalized %.
> Elmor looked at it.
> If your board is bugged and only draws 2W on 8 pin #3 it won't throttle you.


If your PC is shutting down on the 1000W BIOS, your power supply isn't good enough for the power draw.

But I have a 3 power connector card.

First is the 520W rebar, second is the 1000W rebar. This is with rebar enabled globally with Nvidia Inspector.









I scored 15 695 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

I scored 15 769 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





*Edit: I think I quoted the wrong post, was meant for the person saying their PC was shutting down.*


----------



## KedarWolf

tps3443 said:


> Optimus kingpin 3090... DANG!!! $598?????
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Signature GPU Block - Kingpin 3090
> 
> 
> The Optimus Signature Kingpin block is here! Our first ever Signature GPU block is designed to take advantage of the EVGA Kingpin's extreme performance. We've put everything into this block, and then some. And you asked for the ultimate block with an active backplate to match, so we're launching...
> 
> 
> 
> 
> optimuspc.com


You can order the Kingpin blocks RIGHT NOW!

But I'm pretty sure it's a preorder.


----------



## GRABibus

KedarWolf said:


> If your PC is shutting down on the 1000W BIOS, your power supply isn't good enough for the power draw.
> 
> But I have a 3 power connector card.
> 
> First is the 520W rebar, second is the 1000W rebar. This with rebar enabled globally with Nvidia Inspector.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 695 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 769 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> *Edit: I think I quoted the wrong post, was meant for the person saying their PC was shutting down.*


You have 2 different memory OCs on these runs…?


----------



## kryptonfly

GRABibus said:


> Salut,
> 
> yes, you are fully power limited
> 
> Did you try the GALAX 1000W one and cap at 40% ? Still at your own risk....
> 
> In my case, this works much better than the KP 1000W with capped PL, which has no effect.


Salut  yes, I tried it first, but it's even worse: SRC was fluctuating from 11.5V to ~15V! And all the probes are still incorrect; the third "virtual" 8-pin reported a different power than the other 2... I tried Aorus Xtreme, Galax, EVGA, still the same: low & bad freq above 450W. Seems the Gigabyte 2x8-pin PCB really doesn't like 3x8-pin bioses 


Falkentyne said:


> The SRC3 power limit wasn't changed limiting 8 pin #3 to 150W, and when it reaches that you throttle via TDP Normalized %.
> Elmor looked at it.
> If your board is bugged and only draws 2W on 8 pin #3 it won't throttle you.


 With the EVGA 1000W and others (except the Galax), the #3 8-pin reading was a copy of #1. Perf is worse than on the normal 390W bios, even though "Board Power Draw" shows ~500W and "Chip Power Draw" ~250W with a 3x8-pin bios. For now I've flashed the Waterforce WB bios because its boost clock is 1785MHz instead of 1755MHz, but it doesn't change anything! It would be nice to have 420-450W for 2x8-pin without a shunt mod.


----------



## Lord of meat

KedarWolf said:


> If your PC is shutting down on the 1000W BIOS, your power supply isn't good enough for the power draw.
> 
> But I have a 3 power connector card.
> 
> First is the 520W rebar, second is the 1000W rebar. This with rebar enabled globally with Nvidia Inspector.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 695 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 769 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> *Edit: I think I quoted the wrong post, was meant for the person saying their PC was shutting down.*


Would you say the 1000W was worth it? All I'm seeing is a 10MHz increase.


----------



## KedarWolf

GRABibus said:


> You have 2 different memory OC on these runs…?


Yes, that was the best run I had with the 520W; I had runs with a higher memory OC but got worse scores.

The 1000W was fine at that memory OC, though.

And I edited the post; I had the scores reversed, in the wrong order.


----------



## KedarWolf

Lord of meat said:


> would you say the 1000w was worth it? all im seeing is 10mhz increase.


My score was a decent jump, but for just gaming, no, not worth it.


----------



## kryptonfly

KedarWolf said:


> If your PC is shutting down on the 1000W BIOS, your power supply isn't good enough for the power draw.
> 
> But I have a 3 power connector card.
> 
> First is the 520W rebar, second is the 1000W rebar. This with rebar enabled globally with Nvidia Inspector.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 695 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 769 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> *Edit: I think I quoted the wrong post, was meant for the person saying their PC was shutting down.*


I think I'm the one you're speaking to. I have a TX-750W, and it happened with PL +45% and a high core frequency of +180 MHz. It even reached 680W for a few seconds, but I think that reading is false (no shutdown and no beep from my inverter). That's why I think I don't need more than 450W.


----------



## Lord of meat

KedarWolf said:


> My score was a decent jump, but just gaming, no, not worth it.


Thanks!
I was going to test it, but if it's not much for gaming then I'm good. Just going to add more radiators.


----------



## Nizzen

GRABibus said:


> So, good results ?


Meeeh on PR.
Max 520W power draw. Perfect for gaming.


----------



## kryptonfly

@KedarWolf: is it possible to force ReBAR with Nvidia Inspector?

Btw, does anyone have a Gigabyte 2x8-pin? If so, which 3x8-pin BIOS is compatible with it?


----------



## wtf_apples

wtf_apples said:


> Signature GPU Block - Kingpin 3090 - Optimus Water Cooling (optimuspc.com)
> 
> Glad I got a hydro copper block.


Did anyone click the link and buy one? It's so dreamy.
I wish the Hydro Copper had an active backplate like this.


----------



## tps3443

KedarWolf said:


> Yes, that was the best run I had with the 520W, I had runs with higher memory OC but got worse scores.
> 
> That 1000W was fine though at that memory OC though.
> 
> And I edited the post, I had the scores reversed, in the wrong order.


Yeah, the memory needs more voltage if it starts scoring low. I'm dumping 1.50V+ FBVDD for +1900 on the memory.

The standard BIOS switch on my Kingpin only manages +1500 stable, due to a memory voltage of only 1.3675V.

1.5V seems OK. I've been playing Cyberpunk like this with a +1900 memory OC, about a 20% bandwidth boost.
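For anyone wondering where that "20% bandwidth boost" figure comes from, here's a quick sanity check. It assumes the +1900 offset counts double toward the effective GDDR6X data rate (which is what makes the 20% figure add up); the stock numbers are the reference 3090 spec, not measurements:

```python
# Back-of-envelope bandwidth math for a +1900 memory offset on a 3090.
# Assumption: an Afterburner-style +1900 offset adds 2x1900 MT/s to the
# effective GDDR6X data rate; stock figures are from the reference spec.

BUS_WIDTH_BITS = 384      # 3090 memory bus
STOCK_RATE_MTPS = 19504   # stock effective data rate, MT/s

def bandwidth_gbps(rate_mtps, bus_bits=BUS_WIDTH_BITS):
    """Peak bandwidth in GB/s: transfers per second times bytes per transfer."""
    return rate_mtps * (bus_bits / 8) / 1000

stock = bandwidth_gbps(STOCK_RATE_MTPS)
oced = bandwidth_gbps(STOCK_RATE_MTPS + 2 * 1900)

print(f"stock: {stock:.0f} GB/s")                         # 936 GB/s
print(f"+1900: {oced:.0f} GB/s, {oced / stock - 1:.1%}")  # ~1119 GB/s, ~19.5%
```

936 GB/s matches the spec sheet up top, and the overclocked figure lands on the ~20% tps3443 quotes, which is why the doubled-offset assumption looks right.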


----------



## Nizzen

wtf_apples said:


> Did anyone click the link and buy one  Its so dreamy.
> I wish the hydro copper got an active backplate like this.


I wish the OC gain was like 780ti classified with evbot....


----------



## tps3443

wtf_apples said:


> Did anyone click the link and buy one  Its so dreamy.
> I wish the hydro copper got an active backplate like this.


Unfortunately no... I'm so jealous though. I'm selling some crap to buy one though, lol.

I want sub-38-40C GPU temps at 500 watts. That's what a 3090 KP needs... They're binned, and 50C+ on my KP HC isn't gonna cut it.

These things get HOTTTTTTTTTT.

We need active backplate cooling. The backplate hits 52C at 550+ watts.


----------



## GRABibus

Looking for a 3090 Kingpin brand new.
I am in France.
Please, PM me if there is an opportunity.
Thanks !


----------



## tps3443

Lord of meat said:


> Thanks!
> I was going to test it, but if for gaming its not much than im good. just going to add more radiators.


The KP 520W BIOS is excellent, and you can break 16K PR with it if you're at sub-38C temps.


----------



## Nizzen

tps3443 said:


> Unfortunately no.... I’m so jealous though. I’m selling some crap to buy one Though lol
> 
> I want SUB 38C-40C GPU temps at 500 watts. This is what a 3090KP needs... They’re binned and 50C + on my KP HC isn’t gonna cut it.
> 
> 
> These things get HOTTTTTTTTTT
> 
> We need active backplate cooling. The backplate hits 52C at 550+ watts


Waterchiller -> set watertemp = 20c


----------



## J7SC

Nizzen said:


> I wish the OC gain was like 780ti classified with evbot....


----------



## tps3443

Nizzen said:


> Waterchiller -> set watertemp = 20c


Yep yep. I have a friend who has the KP HC block, and even with his chiller it'll probably only do what an Optimus can do without one, lol.


----------



## Nizzen

J7SC said:


> View attachment 2519218


This was the picture right before they went under water.

Fun times.
Edit: I found the old 780 Classy, LOL, not the 780 Ti. Had both in SLI.

Here's the 780 Ti Classy in SLI.
Had to dig hard.


----------



## KedarWolf

kryptonfly said:


> @KedarWolf : is it possible to force rebar with nVidia Inspector ?
> 
> Btw, does anyone have a Gigabyte 2x8 pin ? If so, what 3x8 pin bios is compatible with it ?


Yes, it's really easy; Google it. Forcing ReBAR, I mean.

Doing it in the global Nvidia profile for ALL apps and games is best, I think.


----------



## J7SC

Nizzen said:


> This was the picture right before they went under water
> 
> Fun times
> Edit: I found the old 780 classy LOL , not the 780ti  Had both in sli.
> View attachment 2519219


...ran 4x 780 Ti Cls + EVBot under water back then - learned _A LOT_ about PSUs and multi-PSU setups


----------



## des2k...

KedarWolf said:


> Yes, it's really easy. Google it. Forcing rebar I mean.
> 
> Just do it in the Global Nvidia profile for ALL apps and games is best I think.


Why global? I use per-app.


----------



## GRABibus


des2k... said:


> why global ? I use per app,
> 
> View attachment 2519222


I wanted to do it for Cold War, but the necessary flags to change don't appear in the 'Unknown' list...


----------



## tps3443

Anyone remember running the GeForce 6800 Ultra? That was when BFGPU first started for me, lol. YEAH, AGP!!

I remember guys holding on to AGP for dear life, as it was essentially on life support.

That was when I got into PC gaming and overclocking.


----------



## Nizzen

J7SC said:


> ...ran 4x 780 Ti Cls + EVBot under water back then - learned _A LOT_ about PSUs and multi-PSU setups


2x 780 Ti was 1700W from the wall.

Used 2x 1200W PSUs.


----------



## GRABibus

des2k... said:


> why global ? I use per app,
> 
> View attachment 2519222


In the 3DMark PR DLSS profile I only have 2 lines under "Unknown". Any idea why?


----------



## des2k...

GRABibus said:


> in 3DMark PR DLSS I only have 2 lines under "Unknown" : Any idea why ?
> View attachment 2519226


Toolbar: one of those icons lists all the unknown flags.


----------



## GRABibus

des2k... said:


> toolbar, one of those icons is for listing all unknown flags


Ok thanks !


----------



## J7SC

Nizzen said:


> 2x 780ti was 1700w from the wall
> 
> Used 2x1200w


Nice ! 

I still have most of my PSUs from my HWBot days in use today - happens when you buy quality (haven't bought a new PSU for private use since 2015  ). PSU prices these days (shakes head)

...3x Antec 1300 HPC platinum currently run 3090 Strix, 6900 XT, and 2x 2080 Ti systems. BeQuiet 1200 DarkPro waiting for redeployment next week. 2x Corsair AX1200 taking a rest (one of those is starting to have slight issues though)....Lepa 1200 W (2012) only ran once...coz, scary ripples. 3x Corsair TX850s from back then still are rock solid in smaller work / backup systems today.


----------



## GRABibus

des2k... said:


> why global ? I use per app,
> 
> View attachment 2519222


Works great in Cold War.

5% to 10% more fps.


----------



## Sheyster

ENTERPRISE said:


> Thanks guys. Guess it would be better if I kept to the KP LN2 BIOS then for the sake of safety. Any idea if it is RBAR enabled ?


Honestly ENTERPRISE, given some of the "history" with the 3090 FTW3 cards/issues, I would personally just stay with the official FTW3 XOC 500W BIOS. You will notice virtually no difference in gaming. I believe @GRABibus has tested them both extensively and can attest to this. I have tested them both as well on my Strix.

Link:

EVGA RTX 3090 VBIOS


----------



## GRABibus

Sheyster said:


> Honestly ENTERPRISE, given some of the "history" with the 3090 FTW3 cards/issues, I would personally just stay with the official FTW3 XOC 500W BIOS. You will notice virtually no difference in gaming. I believe @GRABibus has tested them both extensively and can attest to this. I have tested them both as well on my Strix.
> 
> Link:
> 
> EVGA RTX 3090 VBIOS


Agreed !

For bench, KP 520W remains better


----------



## gfunkernaught

KedarWolf said:


> I scored 15 769 in Port Royal
> 
> My best ever enabling rebar globally with Nvidia Inspector and using 1000W rebar BIOS.


Which version of Nvidia Inspector? Is their official site nvidiainspector.com? I have seen many versions but didn't know which to use.


----------



## gfunkernaught

delete


----------



## GRABibus

gfunkernaught said:


> Which version of Nvidia Inspector? Is their official site nvidiainspector.com? I have seen many versions but didnt know which to use.


2.3.0.13


----------



## tps3443

The 1000W Kingpin ReBAR BIOS isn't going to be that beneficial on a standard 3090. You can already manage great scores with the 520W KP BIOS (WITH GOOD TEMPS OF COURSE).

Now, the Classified Tool that works with the 3090 KP to control voltages greatly increases power consumption. Idle power consumption on my 3090 Kingpin is about 99 watts, and if I apply my very mild voltage increments through Classified Tool, that 99 watts becomes 155 watts, on the same 1000W KP ReBAR BIOS. Load power consumption reflects this as well, so the 520W KP BIOS runs out fast while using Classified Tool. This is where the 1KW KP BIOS is gonna help out.

Hopefully this makes sense, guys.

If you have good cooling, the 520W KP BIOS is excellent for any RTX 3090!

Low temps plus Classified Tool is where the 1KW KP BIOS really pays off.


----------



## GRABibus

My best score with my strix on air :

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)

BIOS: KP 520W with ReBAR
NV drivers set to high performance
ReBAR enabled in the "3DMark DLSS" profile in NVIDIA Inspector.
Ambient: 27°C!

Would like to do it at 20°C....

Edit: ReBAR increases temperature, but it also seems that the VRAM is more stable.
I could bench here with +1150MHz several times without any artifacts or crashes, which was not possible without ReBAR enabled (max stable was +1075MHz in benches).
I will see if this is confirmed in games.


----------



## gfunkernaught

hehe...
I scored 15 960 in Port Royal

Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10


----------



## tps3443

gfunkernaught said:


> hehe...
> 
> I scored 15 960 in Port Royal
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10


You've got a great card there. I wish I could hit 41C temps, lol.

Ughhh.


----------



## des2k...

The room was warm and the water got to 28C; that's pretty much the max on my 2x8-pin card.
I scored 15 723 in Port Royal

AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10


----------



## gfunkernaught

tps3443 said:


> You’ve got a great card there. I wish I could hit 41C lol temps.
> 
> 
> ughhhh.


Did you get a full-cover block yet? I skimmed through the last 10 pages or so, and I ask because I wasn't sure if you did. Fear not, you will get to 41C and below. I wished for those temps too when I first got my 3090 under water, and now I've got my 1000D waiting to be built. I'm hoping to stay under 40C, but that probably won't happen past 500W.

Also, at those same clocks, TS Extreme fails almost immediately. Not sure why. I backed the core offset down to +190 and it crashed during GT2. Time Spy vanilla passed though, so I think it's a power issue. I noticed PCIe #2 reached 227W, #1 hit 197W, and #3 hit 143W. The PCIe slot hit 92W. I'm gonna give them a break for now; don't want to melt anything!


----------



## gfunkernaught

What is the trick to getting 3 points working on the V/F curve? Here's what I know:
set the highest point first, then set the two lower bins as desired, then hit apply? Or hit apply on each? Should I have a load on the core before doing this? A cold core?


----------



## tps3443

gfunkernaught said:


> Did you get a full cover block yet? I skimmed through the last like 10 pages and I ask because I wasn't sure if you did. Fear not, you will get to 41c and below. I've wished for those temps too when I first got my 3090 under water, and now I got my 1000D waiting to be built, I'm hoping to stay under 40c but that probably won't happen passed 500w.
> 
> Also those same clocks, TSExtreme fails almost immediately. Not sure why. I backed the core offset down to +190 and it crashed during GT2. TimeSpy vanilla passed though, so it is a power issue, I think. I noticed pcie#2 reached 227w. #1 hit 197w, #3 hit 143w. The PCIe slot hit 92w. I'm gonna give them a break for now, dont want to melt anything!


Yes I did. I'm running the Kingpin Hydro Copper. Not that impressed though...

I may just grab the new Kingpin Optimus block that launched today. It is $598 with optional active backplate water cooling (very expensive), but I know it'd be worth it.

The memory on these cards gets toasty with all this power and voltage, etc. We really need active backplate cooling.

(GPU) runs 54C at an average of 500-530 watts.

(GPU die) runs 43C at 500-530 watts.

My ambient temp in the room is 24.5-25C.


----------



## gfunkernaught

tps3443 said:


> Yes I did. I‘m running the Kingpin Hydrocopper. Not that impressed though..
> 
> I may just grab the the new KingPin Optimus block that launched today. It is $598 with optional active back plate water cooling. (very expensive) but I know it’d be worth it.
> 
> The memory on these cards get toasty with all this power and voltage etc. We really need active backplate cooling.
> 
> (GPU) runs 54C at an average of 500-530 watts.
> 
> (GPU DIE) runs 43C at 500-530 watts.
> 
> My ambient temp in the room is 24.5C-25C


She is expensive for sure. I just read their price breakdown. Have you tried remount/repaste and all of the info here on that block? Can be a huge help too.


----------



## gfunkernaught

Has anyone tried this yet?

Bykski Full Coverage GPU Water Block w/ Integrated Active Backplate for MSI RTX 3090 GAMING X TRIO (N-MS3090TRIO-TC)


This full coverage block directly cools the video cards core (GPU), graphics memory (RAM), and voltage regulator modules (VRM) on both sides of GPUs. By directly cooling these components you can achieve great temperatures along with the ability to reach higher overclocks or extend the overall...
www.bykski.us


----------



## gfunkernaught

Boundary benchmark results. Core offset +195, then [email protected] on the curve. +1400 on the VRAM. Forgot to record power usage and average temps, so this is an incomplete report. D minus.


----------



## Shawnb99

wtf_apples said:


> Did anyone click the link and buy one  Its so dreamy.
> I wish the hydro copper got an active backplate like this.


I grabbed one the first few minutes the page went live. Sitting on a Hydrocopper as well that I've never installed


----------



## tps3443

Shawnb99 said:


> I grabbed one the first few minutes the page went live. Sitting on a Hydrocopper as well that I've never installed


Which one did you get? Please share photos whenever it arrives. That'd be greatly appreciated. Thank you!


----------



## tps3443

Nizzen said:


> I got 15499 on air with stock cooler on x299. 3090 HOF
> 
> I scored 15 499 in Port Royal
> 
> Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11


^This guy's 3090 HOF air cooler works better than my full block, lol.

I have my CPU at 4.5GHz with ReBAR disabled.

Anyways, I removed the EVGA PX1 software.

First attempt on MSI AB (right at home again).

EVGA should be ashamed of themselves though. This waterblock isn't working as it should, lol.

The silicon is great, and this card has potential, especially seeing a 2175MHz internal clock sustained in PR while averaging 55C and peaking at just under 58C. I just need some better temps. I will mess with it this weekend, probably. Hopefully I can reduce them a little.

I need a 2235MHz internal clock sustained to break 16K in Port Royal. I'm thinking 45C would put me right at 2235MHz. This card holds frequencies very well.

I scored 15 539 in Port Royal

Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10


----------



## gfunkernaught

tps3443 said:


> ^This guys 3090 HOF air cooler works better than my full block lol.
> 
> I have my CPU at 4.5Ghz W/
> Re-bar disabled.
> 
> Anyways, I removed Evga PX1 software.
> 
> First attempt on MSI AB (Right at home again)
> 
> Evga should be ashamed of themselves though. This waterblock isn’t working as it should be Lol.
> 
> The silicon is great, and this card has potential. Especially seeing 2175Mhz internal clock sustained in PR while averaging 55C and peaking at just under 58C. I just need some better temps. I will mess with it this weekend probably. Hopefully I can reduce them a little.
> 
> I need 2,235Mhz internal clock sustained to break 16K PortRoyal. I’m thinking 45C would put me right at 2235Mhz. This card holds frequencies very well.
> 
> 
> I scored 15 539 in Port Royal
> 
> Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10


2235MHz at what core voltage?


----------



## gfunkernaught

Just found out I actually paid MSRP for my Trio. I thought I was getting jacked by tariffs, scarcity, and profit margins. Checked MSI's website; the price is exactly what I paid for it at Micro Center.


----------



## des2k...

Cool Canadian mornings... good for PR runs. It's only a 2185 effective clock, but still 35C at 570W+.
I scored 15 777 in Port Royal

AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10


----------



## heatdotnet

gfunkernaught said:


> Just found out I actually paid $msrp for my Trio. I thought I was getting jacked by tarrifs, scarcity, and profit margins. Checked MSIs website, price is exactly what I paid for it at microcenter.


Lol, if you bought it in the US you paid tariffs. Tariffs and MSI's margin are baked into the price you see on their website and Micro Center's product listing.


----------



## kryptonfly

Here's what I got with the famous KP 520W BIOS:

At idle the probes are crazy; 8-pin #3 is hidden from the settings but read the same as #1. All 3x8-pin BIOSes are incompatible with the Gigabyte 2x8-pin PCB. I would like to shunt-mod the resistors to go from 390W to 430-450W max. I think it's the four resistors near the 8-pin connectors? What about the PCIe shunt, shall we mod that too? What value do I have to stack to get ~450W? I will strengthen my water cooling in August with 3 more 360mm rads; it will be fine.
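Not advice on which pads to lift, but the target value is just Ohm's law: the card senses current as the voltage drop across each shunt, so stacking a resistor in parallel makes it under-read by the ratio of the resistances. A sketch of the arithmetic, assuming the common 5 mOhm stock shunts (verify on your own PCB) and that every sensed rail, PCIe included, gets the same ratio:

```python
# Shunt-mod arithmetic: stacking r_add in parallel with the stock shunt
# lowers the sensed resistance, so the controller under-reads power and the
# real ceiling rises by r_shunt / r_eff. The 5 mOhm stock value is an
# assumption; measure your own board before soldering anything.

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def stack_resistor(r_shunt, p_limit, p_target):
    """Resistor to stack on r_shunt so an enforced p_limit corresponds
    to a real draw of roughly p_target."""
    r_eff = r_shunt * p_limit / p_target   # sensed resistance we need
    return 1 / (1 / r_eff - 1 / r_shunt)

R_SHUNT = 0.005                            # assumed 5 mOhm stock shunt
r_add = stack_resistor(R_SHUNT, 390, 450)
print(f"stack ~{r_add * 1000:.1f} mOhm on each shunt")   # ~32.5 mOhm

# Cross-check: with that stack, a sensed 390 W is really ~450 W.
real_draw = 390 * R_SHUNT / parallel(R_SHUNT, r_add)
```

A ~33 mOhm stack over a 5 mOhm shunt is a small correction; pushing the target much past 450W scales the sensing error (and the risk) by the same ratio.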


----------



## tps3443

gfunkernaught said:


> 2235mhz what core voltage?


I'm not running 2235; that's what I would need to break 16K in PR.

I am running 2175MHz. My card is at 55C+, so I think that's about 1.068-1.075V.


----------



## gfunkernaught

heatdotnet said:


> Lol if you bought it in the US you paid tariffs. Tariffs and MSIs greed are baked in to the price you see on their website and Microcenters product listing.


Well, their factory burned down and their CEO fell out of a window. These are dire times, lol.


----------



## jura11

tps3443 said:


> The 1000 watt Kingpin Rebar bios isn’t going to be that beneficial on a standard 3090. You can manage great scores with the 520 watt KP bios already. (WITH GOOD TEMPS OF COURSE)
> 
> Now, classified tool that works with 3090KP to control voltages greatly increases power consumption. Idle power consumption on my 3090 Kingpin uses about 99 watts, and if I apply my very mild voltage increments through classified tool, this 99 watts now becomes 155 watts. Same 1000 Watt KP Rebar bios. The load power consumption reflects this as well, so that 520 Watt KP bios runs out fast while using classified tool. This is where the 1KW KP bios is gonna help out.
> 
> Hopefully this makes sense guys.
> 
> If you have good cooling, then the 520 watt KP bios is excellent for any RTX3090!
> 
> Low temps, with classified tool greatly benefit the 1KW KP bios.


The KP 520W is maybe good for a 3x8-pin GPU, but it's definitely not good or advisable to use on a 2x8-pin GPU.

The best BIOS for a 2x8-pin GPU is the XOC 1000W BIOS, which you can always cap at 65-75% and enjoy not being limited by the power limit.

A shunt mod is more advisable for a 2x8-pin GPU, but that's not for everyone, or for me.

With the XOC 1000W BIOS my temperatures won't break 42°C in hotter ambient conditions; at normal ambient temperatures they max out at 36-38°C, and VRAM temperatures are in the 60s.

Hope this helps 

Thanks, Jura
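The percentage cap jura11 mentions is simple proportional math: the slider becomes a fraction of whatever limit the flashed BIOS carries. A trivial sketch (1000W is the XOC BIOS limit; the target wattages are just examples):

```python
# With an XOC 1000 W BIOS flashed, the power-limit slider is a percentage
# of 1000 W, so a 2x8-pin card can pick a sane ceiling instead of running
# uncapped.

BIOS_LIMIT_W = 1000

def slider_percent(target_w, bios_limit_w=BIOS_LIMIT_W):
    """Slider setting that caps the board at roughly target_w."""
    return 100 * target_w / bios_limit_w

for target in (450, 520, 650, 750):
    print(f"{target} W -> set the slider to {slider_percent(target):.0f}%")
```

So jura11's 65-75% cap corresponds to a 650-750W ceiling, and a 2x8-pin card aiming for ~450W would sit at 45%.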


----------



## GRABibus

NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com) 

My target is to break 15250 on air....


----------



## jura11

gfunkernaught said:


> Has anyone tried this yet?
> 
> 
> 
> 
> 
> 
> 
> 
> Bykski Full Coverage GPU Water Block w/ Integrated Active Backplate for MSI RTX 3090 GAMING X TRIO (N-MS3090TRIO-TC)
> 
> 
> This full coverage block directly cools the video cards core (GPU), graphics memory (RAM), and voltage regulator modules (VRM) on both sides of GPUs. By directly cooling these components you can achieve great temperatures along with the ability to reach higher overclocks or extend the overall...
> 
> 
> 
> 
> www.bykski.us


I only installed that block on reference PCB RTX 3090 and with XOC 1000W BIOS temperatures won't break 42°C and VRAM temperatures won't break 56°C with hefty OC on VRAM(+1515MHz) and core with +165MHz 

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

jura11 said:


> I only installed that block on reference PCB RTX 3090 and with XOC 1000W BIOS temperatures won't break 42°C and VRAM temperatures won't break 56°C with hefty OC on VRAM(+1515MHz) and core with +165MHz
> 
> Hope this helps
> 
> Thanks, Jura


42c at how many watts? Also how many rads? How many pushups??? How much juice did your PC drink???


----------



## J7SC

GRABibus said:


> (...)
> *My target is to break 15250 on air.*...


...just don't break the Strix on air first! 1000W ReBAR BIOS, or 520 ReBAR?

PS: Nice score.


----------



## GRABibus

J7SC said:


> ...just don;'t break the Strix on air first ! 1000 W r_BAR bios, or 520 r_BAR ?
> 
> PS. Nice score


520W KP BIOS, ReBAR enabled for the "3DMark Port Royal DLSS" profile in Nvidia Inspector.
25 degrees ambient.


----------



## jura11

gfunkernaught said:


> 42c at how many watts? Also how many rads? How many pushups??? How much juice did your PC drink???


It's a friend's build that I put together for him: 2x 360mm radiators plus a MO-RA3 360 with P12 PWM fans. For the CPU he is using a 5950X with a Bykski waterblock as well, and temperatures are great on his loop. I built it in a Lian Li O11 Dynamic XL, and I wouldn't use that case again. His loop pulls around 800-900W from the wall; the GPU usually pulls 500-600W max in gaming, as he runs it uncapped. I will be redoing his loop and adding an extra 360mm radiator, which, in theory, together with swapping the case for something else, should bring temperatures down.

I'm running the same waterblock on my RTX 3090 GamingPros, just without the active backplate, because my VRAM temperatures are okay. Both RTX 3090s run the XOC 1000W BIOS capped at 75-85%, and temperatures won't break 36-38°C in normal gaming at normal ambient temperatures around 21-23°C. VRAM temperatures are okay too: in the 60s on top, and the bottom usually sits in the low to mid 70s.

Hope this helps 

Thanks, Jura


----------



## yzonker

jura11 said:


> Its friend build, I built for him, 2*360mm radiators plus MO-ra3 360mm with P12 PWM fans and for CPU he is using 5950X with Bykski waterblock as well and temperatures are great on his loop, I built him loop in Lian Li O11 Dynamic XL and I wouldn't use that case again, his loop pulls something around 800-900W from wall, GPU pulls usually in gaming 500-600W as max, friend he is running uncapped his GPU, I will be redoing his loop and will be adding to his loop extra 360mm radiator which in theory brings temperatures with swapping case for something else
> 
> Running same waterblock on my RTX 3090 GamingPro's just without active backplate because my VRAM temperatures are okay and both RTX 3090 are running XOC 1000W BIOS capped at 75-85% and temperatures won't break 36-38 in normal gaming in normal weather around 21-23°C ambient temperature and VRAM temperatures are okay too in 60's on top and bottom usually sits in the low to mid 70's
> 
> Hope this helps
> 
> Thanks, Jura


This one?

Bykski Full Coverage GPU Water Block w/ Integrated Active Backplate for AIC Reference RTX 3090 (N-RTX3090H-TC-V2)


Directly cools the video cards core, graphics memory (RAM), and voltage regulator modules (VRM) on both sides of GPUs achieving great temperatures along with the ability to reach higher overclocks or extend the overall lifespan of your card by running at below factory temps. Compatibility AIC...

www.bykski.us

It says Zotac Trinity OC, so I assume it would work on my base Zotac? Thought I'd ask while we're on the topic. Bykski has a lot of blocks listed, and I don't see a compatibility checker.


----------



## tps3443

What a beast, guys. I love this GPU. Doing yet another flush on my coolant loop, and I'm gonna poke around and hopefully get these GPU temps down a little more.


----------



## geriatricpollywog

tps3443 said:


> What a beast guys. I love this GPU. Doing yet another flush on my coolant loop. And I’m gonna poke around and hopefully get these GPU temps downs a little more.


How much did your clock speeds improve after installing the HC block?


----------



## tps3443

0451 said:


> How much did your clock speeds improve after installing the HC block?


No increase at all. However, it was absolutely worth it regardless. My memory temps are incredible, and with both FBVDD dip switches on, +1800 on the memory in (anything) is literally easy peasy. That's about 1.5V on the memory instead of the standard 1.43V that a Kingpin 3090 gets, or the 1.375V that a standard 3090 gets.

The silicon is pretty good on mine, so 2175MHz can be sustained even up to 560+ watts and 55C.

The block doesn't do a ton for the GPU, but it will easily reduce memory and VRM temps by about 35-40%.

Also, I am actually removing the block to confirm a proper mount right now. So, who knows... maybe I can improve upon the 50-52C (500-watt average).

Not sure if this makes a difference, but it says my GPU die is 43-45C and the GPU is 50-52C.

This is in Cyberpunk 2077.


----------



## des2k...

tps3443 said:


> No increase at all. However, it was absolutely worth it regardless. My memory temps are incredible. And I have both FBVDD dip switches on and +1800 on the memory in (Anything) is literally easy peasy. That’s about 1.5V on the memory instead of the standard 1.43v that a kingpin 3090 gets. Or 1.375V that a standard 3090 will get.
> 
> 
> The silicon is pretty good on mine so 2175Mhz can be sustained even up to 560+ watts and 55C.
> 
> The block doesn’t do a ton for the GPU, But it will reduce memory and VRM temps about 35-40% easily.
> 
> Also, I am actually removing the block to confirm proper mount right now. So, who knows... Maybe I can improve upon the 50-52 (500 watt average)
> 
> Not sure if this makes a difference but it says my GPU die is 43-45C and the GPU is 50-52C
> 
> This in Cyberpunk 2077.


Have you ever run HWiNFO for your temps?
GPU temperature and GPU hotspot are both reported, and they're very accurate.

Not sure what "GPU die" and "GPU temp" are. Is that a Kingpin thing? On the LCD screen?


----------



## tps3443

des2k... said:


> have you ever run hwinfo for your temps?
> gpu temperature and gpu hotspot is reported and it's very accurate.
> 
> Not sure what GPU die and GPU temp is. Is that a kingpin thing ? on the LCD screen ?


Luumi goes by the GPU die temp, which would be lower, although everything else uses the GPU temp, which is hotter. I dunno. I have some KPx paste I'm gonna put on here this time; hopefully I can improve these temps.


----------



## gfunkernaught

jura11 said:


> Its friend build, I built for him, 2*360mm radiators plus MO-ra3 360mm with P12 PWM fans and for CPU he is using 5950X with Bykski waterblock as well and temperatures are great on his loop, I built him loop in Lian Li O11 Dynamic XL and I wouldn't use that case again, his loop pulls something around 800-900W from wall, GPU pulls usually in gaming 500-600W as max, friend he is running uncapped his GPU, I will be redoing his loop and will be adding to his loop extra 360mm radiator which in theory brings temperatures with swapping case for something else
> 
> Running same waterblock on my RTX 3090 GamingPro's just without active backplate because my VRAM temperatures are okay and both RTX 3090 are running XOC 1000W BIOS capped at 75-85% and temperatures won't break 36-38 in normal gaming in normal weather around 21-23°C ambient temperature and VRAM temperatures are okay too in 60's on top and bottom usually sits in the low to mid 70's
> 
> Hope this helps
> 
> Thanks, Jura


Guess no one got the reference _sigh_ lol
Anyways, that is a hell of a cooling system. I will be setting up my 1000D soon. Two 60x480mm and two 64x360mm with NF-P12 redux's at 1800RPM. I believe this amount of cooling space will do wonders for my max temp as long as I get the air pressure and flow correct.


----------



## des2k...


tps3443 said:


> Luumi goes by GPU die temp which would be lower.. Although anything else uses GPU temp which is hotter. I dunno. I have some KPx paste I’m gonna put on here this time. Hopefully, I can improve these temps.


I would use HWiNFO; it reads Nvidia's internal sensors.

GPU temp is the average of all the die sensors. GPU hotspot is the highest temp reported on the die.

The hotspot-to-GPU-temp delta starts at about 10C.
The closer you are to that, the better the mount.
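That rule of thumb is easy to script against logged readings. The thresholds below are rough community numbers (the ~10C floor from the post, plus a "remount" cutoff I picked for illustration), not anything from NVIDIA:

```python
# Mount-quality check from HWiNFO's "GPU Temperature" (die average) and
# "GPU Hot Spot" (hottest die sensor). The ~10C floor is the community
# rule of thumb; the 20C remount cutoff is an illustrative choice.

def mount_check(gpu_temp_c, hotspot_c):
    delta = hotspot_c - gpu_temp_c
    if delta <= 12:
        return f"delta {delta:.0f}C: good mount, near the ~10C floor"
    if delta <= 20:
        return f"delta {delta:.0f}C: okay, a repaste might shave a little"
    return f"delta {delta:.0f}C: poor contact, remount the block"

print(mount_check(43, 54))   # delta 11C -> good mount
print(mount_check(45, 68))   # delta 23C -> poor contact
```

Log both sensors during a sustained load like Port Royal, not at idle, since a bad mount mostly shows up under full power draw.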


----------



## tps3443

Guys, which direction is the inlet, based on the picture? The manual lists it the other way. I know it may not matter much... but anything helps!

The manual says the left port is the inlet and the right port is the outlet.

But the waterblock has arrows indicating the right port is the inlet and the left port is the outlet.

I'm confused, EVGA...


----------



## macangel

KedarWolf said:


> The Extreme are the way to go. They are really soft and compressible and are much better than the stock pads that come with water blocks etc.
> 
> They're cheap from the GELID store too.
> 
> The Ultimate are hard, and so are the Thermalright and Fuji, so you'll get better compression with the Extreme and better core contact and stuff.
> 
> And yes, they really do help.
> 
> Edit: They ship them from China but they are quick, maybe 8 days or so. And I paid no duty or taxes on them which was nice from the GELID store.


Awesome, though it's probably going to have to be put on hold for a little while. We are moving on September 1st. A LOT of packing, and my fiance is still working full time, so I'm doing most of it. Luckily she works from home, so nothing in the way of commuting. I haven't even had time to O/C the Core i9-11900K, let alone the new GPU. Spent very little time gaming. I can't stop Middle-earth: Shadow of War from restarting my computer (no error log, but WHEA shows up in the event viewer), so I just said F*** it and loaded up Horizon Zero Dawn. It looks so amazing running at 11,520x2160 on three 55" 4K TVs. Sadly I haven't been able to figure out how to get HDR to work with Nvidia Surround, but again, I haven't really had time to tinker with anything. Afterburner said it was using almost 17GB of VRAM, which made me really happy that I stuck with going after a 3090 instead of a 3080 Ti.


----------



## jura11

Probably will be getting this one, but let's see. It's a 1/10 HP chiller, and the price is awesome already:

£124.72 (12% off) | Barrow Compressor Cooler for Water Cooling System Radiator for Lower The Chassis Temperature CPU and GPU Cooler, YSZY-01 (AliExpress)

Hope this helps 

Thanks, Jura


----------



## jura11

yzonker said:


> This one?
> 
> Bykski Full Coverage GPU Water Block w/ Integrated Active Backplate for AIC Reference RTX 3090 (N-RTX3090H-TC-V2)
> www.bykski.us
> 
> It says Zotac Trinity OC, so I assume it would work on my base Zotac? Thought I'd ask while we're on the topic. Bykski has a lot of blocks listed and I don't see a compatibility checker.


I bought two Bykski waterblocks for the Zotac RTX 3090 Trinity OC because a friend has two of those cards. I think the Zotac is different from reference-PCB GPUs (Palit RTX 3090 GamingPro, PNY, etc.), but maybe I'm wrong on this.

I have the Zotac and Palit waterblocks at home for comparison, and there are a few differences between the two blocks.

I would ask a Bykski dealer first.

Hope this helps 

Thanks, Jura


----------



## jura11

gfunkernaught said:


> Guess no one got the reference _sigh_ lol
> Anyways, that is a hell of a cooling system. I will be setting up my 1000D soon. Two 60x480mm and two 64x360mm with NF-P12 redux's at 1800RPM. I believe this amount of cooling space will do wonders for my max temp as long as I get the air pressure and flow correct.


My friend and I run the same GPU, the Palit RTX 3090 GamingPro, and they're based on the reference PCB. Sadly, at the time I wanted to get the Asus RTX 3090 Strix OC it was out of stock everywhere.

My loop has 4x 360mm radiators plus a MO-RA3 360, and I'll probably add an extra MO-RA3 360 to the loop, or add the Barrow YSZY-01 chiller.

The 1000D seems like a good case for water cooling, and you can fit lots of radiators in it, which is a big plus.

I think so too, that amount of radiator area should do wonders for water temperatures and overall temperatures.

Hope this helps 

Thanks, Jura


----------



## jura11

tps3443 said:


> Guys, what direction is the inlet based on the picture? The manual lists it the other way. I know it may not matter much... but anything helps!
> 
> The manual says the left port is inlet and right port is outlet.
> 
> But the waterblock has arrows indicating the right port is inlet and the left port is outlet.
> 
> 
> I'm confused, EVGA....


From the pictures it seems like left is inlet and right is outlet, like on most waterblocks; Bykski, EK and most others have the same layout.

I would try left as inlet and right as outlet. You can always swap and see if there is a change in temperatures; usually you'll only see a 1-2°C difference from swapping inlet and outlet.

Hope this helps 

Thanks, Jura


----------



## tps3443

jura11 said:


> From the pictures it seems like left is inlet and right is outlet, like on most waterblocks; Bykski, EK and most others have the same layout.
> 
> I would try left as inlet and right as outlet. You can always swap and see if there is a change in temperatures; usually you'll only see a 1-2°C difference from swapping inlet and outlet.
> 
> Hope this helps
> 
> Thanks, Jura


This is how I have always run it, left in, right out. The arrows on the block made me think that was wrong, though.

I re-mounted the block and the idle temps have shown a huge improvement. The die certainly had a gap based on the paste print I saw; only a tiny dot in the center of the die was touching. These idle temps are just insane! Unfortunately, my D5 seems to be barely flowing water, which I hadn't noticed before. It says it's spinning at the full 4200-4300RPM, but the return line into the reservoir looks very weak (it trickles down the side of the reservoir); usually there is a heavy stream and a lot of action inside the reservoir, lol. I just haven't been seeing that since. Maybe I ran it dry too much during the priming process.

I have not tested load temps yet; I'm working from home, so I can't test load temps until I get off work. (Logged in to a remote system)


----------



## Lobstar

The Game Pass version of The Ascent had me pulling 570W in DX11 mode with every setting maxed at 3440x1440 on my FTW3 Ultra. The game is beautiful, btw.


----------



## tps3443

Lobstar said:


> The Game Pass version of The Ascent had me pulling 570W in DX11 mode with every setting maxed at 3440x1440 on my FTW3 Ultra. The game is beautiful, btw.


I was just reading about this game; they were saying the console version has no ray tracing because it doesn't have DLSS.

I thought this Xbox Series X thing was supposed to be a 2080 Ti equivalent or something. Sure doesn't look that way.


----------



## Lobstar

tps3443 said:


> I was just reading about this game; they were saying the console version has no ray tracing because it doesn't have DLSS.
> 
> I thought this Xbox Series X thing was supposed to be a 2080 Ti equivalent or something. Sure doesn't look that way.


I just figured out the Game Pass version does have ray tracing, so I turned it on and am testing it now. When I tried to enable it before, the DX12 setting would always revert to DX11 even after restarting the game. This time, after selecting DX12 I hit escape and it asked me to restart the game. Now the ray tracing options are there. The game definitely power limits my card on the 520W KPE bios.

This was at +105/1448 but it clocked down one bin to +90. I might have to load up the party bios again to see what it can do with ray tracing on.


----------



## tps3443

Lobstar said:


> I just figured out the gamepass version does have ray tracing so I turned it on and am testing it now. When I tried to enable it before the DX12 setting would always just revert to DX11 even after restarting the game. This time after setting it I hit escape after selecting DX12 and it wanted me to restart the game. Now the raytracing options are there. The game definitely power limits my card on the 520w KPE bios.
> 
> This was at +105/1448 but it clocked down one bin to +90. I might have to load up the party bios again to see what it can do with ray tracing on.


Wow, I’d love to try this game. Is it any good? What is it like? I may give it a whirl on my 3090.


----------



## Lobstar

tps3443 said:


> Wow, I’d love to try this game. Is it any good? What is it like? I may give it a whirl on my 3090.


Cyberpunk twin-stick shooter with loot and leveling like Diablo.

The Ascent
www.metacritic.com

Edit: It's on Game Pass for $10/mo


----------



## tps3443

Lobstar said:


> Cyberpunk twin-stick shooter with loot and leveling like Diablo.
> 
> The Ascent
> www.metacritic.com
> 
> Edit: It's on Game Pass for $10/mo


Yeah, I have Game Pass. My Windows 10 Enterprise LTSC install doesn't let me use Game Pass though.


----------



## Nizzen

tps3443 said:


> Yeah, I have Game Pass. My Windows 10 Enterprise LTSC install doesn't let me use Game Pass though.


"gamepass" 😂 
*The Ascent-CODEX*


----------



## yzonker

jura11 said:


> Me and my friend running same GPUs Palit RTX 3090 GamingPro and they're based on reference PCB, sadly that time when I wanted to get Asus RTX 3090 Strix OC that GPU has been out of stock everywhere
> 
> My loop have 4*360mm radiators plus MO-ra3 360mm and probably will add extra MO-ra3 360mm to loop or get Barrow chiller YSZY-01 to loop
> 
> 1000D seems like good case for water cooling plus there you can put lots of radiators which is big plus
> 
> I think so too such amount of radiators should do wonders with water temperatures and overall temperatures
> 
> Hope this helps
> 
> Thanks, Jura


OK, thanks. FYI, the Zotac has the RGB headers at the end of the card, making the PCB longer. EK has both reference and Zotac blocks, but Corsair, for example, only has one; it must just be a bit longer so both fit.


----------



## tps3443

Nizzen said:


> "gamepass" 😂
> *The Ascent-CODEX*


LOL.

Hey! We should support the PC game developers, lol. But yeah, pretty sure I started downloading Codex games and stuff when I was 14-15 years old, lol. I'm 31 now and obviously I don't do such things anymore. I want the game development industry to thrive, so I pay to play.


----------



## Nizzen

tps3443 said:


> LOL.
> 
> Hey! We should support the PC game developers, lol. But yeah, pretty sure I started downloading Codex games and stuff when I was 14-15 years old, lol. I'm 31 now and obviously I don't do such things anymore. I want the game development industry to thrive, so I pay to play.


I only play Quake and Battlefield games. Nothing else 
I'm buying other games just for benchmarking LOL


----------



## yzonker

This new KP XOC seems to have really good stability for me. Pushed offsets higher than ever (and enabled reBar).









I scored 15 510 in Port Royal
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## Falkentyne

Lobstar said:


> I just figured out the gamepass version does have ray tracing so I turned it on and am testing it now. When I tried to enable it before the DX12 setting would always just revert to DX11 even after restarting the game. This time after setting it I hit escape after selecting DX12 and it wanted me to restart the game. Now the raytracing options are there. The game definitely power limits my card on the 520w KPE bios.
> 
> This was at +105/1448 but it clocked down one bin to +90. I might have to load up the party bios again to see what it can do with ray tracing on.


I tried it on my shunted 3090 FE. I didn't get power limited at 1080p, and I didn't see a render scale option for running at a higher resolution. DX11 and DX12 seemed the same at the very start of the game.
78.3% TDP, 89% TDP Normalized. But it got hot.


----------



## Arizor

If you're on the Game Pass version it renders DX12 regardless of your selection, ray tracing doesn't work, and DLSS is excluded. The devs are working on a patch.


----------



## GRABibus

yzonker said:


> This new KP XOC seems to have really good stability for me. Pushed offsets higher than ever (and enabled reBar).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 510 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Yes, even on air it is a fantastic BIOS...









I scored 15 262 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





But not often, and only when the room temperature doesn't exceed 15°C.

For gaming, it is currently the best BIOS for me => higher stable boost with lower temps.

Temps are under control (depending on the power drawn by the game).
In case of high power draw, I undervolt the GPU (Overwatch and BFV, for example).

So, after weeks of BIOS tests, here is my top 3 for a Strix on air:

For gaming:
KP 1000W Rebar => undervolt necessary for games that pull high power
EVGA 500W Rebar
KP 520W Rebar

For benching:
KP 1000W Rebar => only with a very low room temp, and not often => high risk
KP 520W Rebar


----------



## yzonker

I haven't spent much time playing with the new KP 1kw for gaming. That's next.

Anyway, old drivers FTW again though:

Result
www.3dmark.com

(15559 vs 15510)


----------



## tps3443

yzonker said:


> This new KP XOC seems to have really good stability for me. Pushed offsets higher than ever (and enabled reBar).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 510 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Hello, what 3090 are you running? And what cooling?


----------



## tps3443

yzonker said:


> I haven't spent much time playing with the new KP 1kw for gaming. That's next.
> 
> Anyway, old drivers FTW again though:
> 
> Result
> www.3dmark.com
> 
> (15559 vs 15510)


Yeah, I was gonna say, try disabling ReBAR too. I have it disabled and can manage around 15,550 at 2,175MHz.

I'm seeing worse performance with ReBAR on.

Anyway, +118.5 points for every +15MHz seems to be the standard in Port Royal with a 3090.

You should be around 15,625 with 2,190. Have you tried reducing the memory OC a tiny bit?
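For reference, that points-per-clock rule is just linear arithmetic, so it's easy to sanity-check scores. A quick sketch (the 118.5-points-per-+15MHz figure is this thread's rule of thumb, not an official constant, and the function name and example clocks are just illustrative):

```python
def estimate_pr_score(base_score, base_clock_mhz, target_clock_mhz,
                      pts_per_15mhz=118.5):
    """Linear Port Royal estimate: ~118.5 points per +15 MHz core clock."""
    bins = (target_clock_mhz - base_clock_mhz) / 15.0
    return base_score + bins * pts_per_15mhz

# e.g. what a +30 MHz core bump should be worth on its own (two "bins"):
gain = estimate_pr_score(0, 2160, 2190)  # 237.0 points
```

It's only a rough guide; memory clocks, drivers, and CPU all move the score too, as the posts above show.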


----------



## PLATOON TEKK

Got the two Optimus blocks for the Strixes today. You can definitely tell the quality difference between these and last week's OC Lab Bykski I posted. I'm surprised at how thin the pads are. Will post temps here when I get a chance to mount and test.

Am mad hype for the KP blocks to arrive now.


----------



## Arizor

That looks premium @PLATOON TEKK, eager to see what results you get vs other blocks. Also, am I right reading your sig that you have multiples of many 3090s? Do you need them for work, some kind of machine learning?


----------



## PLATOON TEKK

Arizor said:


> That looks premium @PLATOON TEKK, eager to see what results you get vs other blocks. Also, am I right reading your sig that you have multiples of many 3090s? Do you need them for work, some kind of machine learning?


No doubt, I'll keep you posted on the delta. The backplate has a full pad too; should be interesting.

Correct, I run the “TEKK” division of the Paper Platoon, a media production company and record label. We use the PCs to render, record, game and program. Am blessed that it gives me a good chance to mess around with some filthy hardware.

Paper Platoon - Spark Master Tape
TEKK 2 TEKK 1

Here is a link to our primary artist and “Tekk” videos.


----------



## tps3443

PLATOON TEKK said:


> Got the two Optimus blocks for the Strix’s today. You can def tell quality difference between that and last weeks OC Lab Bykski I posted. Am surprised at how thin the pads are. Will post temps here when I get a chance to mount and test.
> 
> Am mad hype for the KP blocks to arrive now.


I would love to know what temps are with a 500 watt load or so with ambient water cooling. Thanks in advance.

I am deciding whether I'm gonna throw in the towel on my 3090 Kingpin Hydro Copper block or not... This waterblock has a delta of almost 30C right now! (Not kidding) lol.

I remounted it last night and the results are not good. The idle temps are fantastic.

My load water temp is 27.9C inside the reservoir, and my ambient temp is 24.3C. This Kingpin HC will peak at 59C under heavy load through Port Royal.

All of the other components run cool as anything, though; 30s and 40s for the temps. Just my GPU won't stay cool..
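For anyone comparing block results across posts: the number that matters is the core-to-water delta, and dividing it by board power gives a crude °C-per-watt figure you can compare between mounts. A back-of-envelope sketch using the temps above plus the ~459W peak draw reported later (the function names are mine, and treating all board power as flowing through the core plate is a simplification):

```python
def block_delta(gpu_core_c, water_c):
    """Core-to-water delta, the usual waterblock figure of merit."""
    return gpu_core_c - water_c

def degrees_per_watt(delta_c, load_w):
    """Crude thermal resistance estimate in degrees C per watt of load."""
    return delta_c / load_w

# Numbers from the post: 59C core, 27.9C water, ~459W peak draw
delta = block_delta(59.0, 27.9)     # ~31.1C core-to-water
r = degrees_per_watt(delta, 459.0)  # ~0.068 C/W
```

For comparison, the ~14C-at-600W EK result mentioned elsewhere in the thread works out to roughly 0.023 C/W, about three times better, which is why a ~30C delta points at a mounting or contact problem rather than radiator capacity.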


----------



## gfunkernaught

tps3443 said:


> I would love to know what temps are with a 500 watt load or so with ambient water cooling. Thanks in advance.
> 
> I am deciding if I’m gonna throw in the towel on my 3090 Kingpin Hydro copper block or not .. This waterblock has a delta of almost 30C right now! (Not kidding) lol.
> 
> I remounted it last night, and the results are not good. The idle temps are fantastic.
> 
> My load water temp is 27.9C inside the reservoir. And my ambient temp is 24.3C. This Kingpin HC will peak at 59C under heavy load through port royal.
> 
> All of the other components run cool as anything though. 30’s and 40’s for the temps. Just my GPU won’t stay cool..


What paste are you using on the core? Also, did you put paste and/or a copper shim between the ICs and the cooler? It is a stock cooler after all, so I wouldn't expect miracles. Heck, I expected miracles from my EK block and quickly learned that it needed work to get down to a [email protected] delta. And that is decent, not amazing; as soon as I go past 510W the delta grows pretty quickly. Last time I checked, my delta at 600W was 14C. But yeah, definitely check your mount, and if you still can't get good results, then it might be time for a custom block purchase.


----------



## PLATOON TEKK

tps3443 said:


> I would love to know what temps are with a 500 watt load or so with ambient water cooling. Thanks in advance.
> 
> I am deciding if I’m gonna throw in the towel on my 3090 Kingpin Hydro copper block or not .. This waterblock has a delta of almost 30C right now! (Not kidding) lol.
> 
> I remounted it last night, and the results are not good. The idle temps are fantastic.
> 
> My load water temp is 27.9C inside the reservoir. And my ambient temp is 24.3C. This Kingpin HC will peak at 59C under heavy load through port royal at a max power draw of only (459 watts)
> 
> All of the other components run cool as anything though. 30’s and 40’s for the temps. Just my GPU won’t stay cool..
> 
> 
> So I’ve gotta grab something else.


Hmm, sorry to hear you're having issues. I've had pretty good luck with my HCs, but I'm running chillers and 13 pumps, so the delta varies slightly.

My bad if this has been asked, but what paste are you using? Did you tighten the screws around the core nice and solid?

A high delta for me (at least on CPUs; I've been pretty OK with GPUs) came from a few things in the past: slightly uneven screws, too little thermal paste, wrong inlet and outlet, slow flow speed, too much voltage (obviously not the case here). Do any of these apply?

edit: haha damn, gfunkernaught beat me to the paste question

edit 2: the Optimus pads are mad thin, I'm assuming 0.8mm; had no clue.


----------



## des2k...

*"I would love to know what temps are with a 500 watt load or so with ambient water cooling. Thanks in advance."*
I think 500W is too easy for the Optimus block; if I have to guess, probably a 5C delta.

*"...This waterblock has a delta of almost 30C right now!"*
You are not making contact on the die, or there's no mounting pressure. I had that 30C delta with the Barrow block for my Zotac; it was hitting the coils before the die, but the paste spread still showed on the die.

For my EK block, I had to drop the standoffs and lap the block.

Since your pads are already 2mm on the front, it would be easy to just take 1mm off the standoffs. They are usually brass; it only takes 8 passes or so on 200 grit with just the weight of the block. Then you would install 1mm pads, and the block reaches the core earlier for good pressure.


----------



## tps3443

gfunkernaught said:


> What paste are you using on the core? Also, did you put paste and/or copper shim between the IC's and the cooler? It is a stock cooler after all so I wouldn't expect miracles. Heck I expected miracles from my EK block and quickly learned that it needed work to get down to [email protected] delta. And that is decent not amazing, as soon as I go passed 510w the delta grows pretty quick. Last time I checked my delta at 600w was 14c. But yeah definitely check your mount and if you still can't get good results, then it might be time for a custom block purchase.


That 29C delta is at an average 400-430 watt load with a max of only 459 watts. My idle temps are incredible, which didn't make sense. I've been messing with it for hours now, and I just discovered a clog in my loop... So fortunately it's not the block mount. (Maybe a little, the first time.)

I mean, look at my idle temps, lol. However, I discovered a very, very hot outlet tube coming off my CPU Sig V2 block. And water is literally dripping, or sideways pissing, back into the reservoir. If I pump the lines and let go, I can alter the flow back to the res; the flow literally changes direction on me, like I am moving a blockage around inside the cold plate fins or something. I tore my loop apart before putting the 3090 KP in, all new tubing etc. I worked something loose, and now I'm clogged, I guess.




----------



## geriatricpollywog

tps3443 said:


> I would love to know what temps are with a 500 watt load or so with ambient water cooling. Thanks in advance.
> 
> I am deciding if I’m gonna throw in the towel on my 3090 Kingpin Hydro copper block or not .. This waterblock has a delta of almost 30C right now! (Not kidding) lol.
> 
> I remounted it last night, and the results are not good. The idle temps are fantastic.
> 
> My load water temp is 27.9C inside the reservoir. And my ambient temp is 24.3C. This Kingpin HC will peak at 59C under heavy load through port royal.
> 
> All of the other components run cool as anything though. 30’s and 40’s for the temps. Just my GPU won’t stay cool..


Something is off. My Kingpin tops out at 53C under heavy load and I haven’t installed my HC block yet. That’s my AIO load temp at 80% rad fans and 50% VRM fan speed.


----------



## yzonker

tps3443 said:


> Hello, what 3090 are you running? And what cooling?


Same as @des2k..., a Zotac 3090 Trinity, just with less cooling, lol. Custom loop with the Corsair block and 3 rads (360, 280, 480).



tps3443 said:


> Yeah, I was gonna say, try disabling ReBAR too. I have it disabled and can manage around 15,550 at 2,175MHz.
> 
> I'm seeing worse performance with ReBAR on.
> 
> Anyway, +118.5 points for every +15MHz seems to be the standard in Port Royal with a 3090.
> 
> You should be around 15,625 with 2,190. Have you tried reducing the memory OC a tiny bit?


That was with reBar enabled. I saw an increase of 100-150 pts when I enabled it. Here's a run with it off.









I scored 15 323 in Port Royal
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





I spent quite a while last night testing mem speeds. For some reason I've gained a tiny bit. Still not good, but +700 gave the highest score. I tested 650/700/750; in the past, 600-650 gave the highest scores. All that has changed is the new KP bios, and I added a 2nd D5, which improved my block delta a small amount (~2C).


----------



## Darcris

Good evening,

I've been browsing this thread for a while along with my friend, and we've been trying our best to tweak our 3090s.

For some slight background information - I've built my first custom water loop this year which includes the following:

i9 10900k @ 5.2Ghz 1.4v LLC6
32GB DDR4 G.Skill FlareX (2x8GB) B-die 1.4v @3200MHz XMP
Zotac Trinity 3090 Non OC version

O11D XL with custom water loop
3090 Trinity EK Waterblock and Active backplate with Conductonaut LM
EK Velocity CPU Block with Kryonaut TP
2x 30mm Corsair XR5 360 Rads
1x 54mm Corsair XR7 360 Rad
10x Noctua NF-A12x25 PWM fans @ 100%
All Rads are intake 1x exhaust at rear of case

I have flashed the vBios with the Kingpin 3090 Re-Bar 1000W vBios

This is the sort of score I managed with my card and setup, on a fresh install of windows and an ambient room temperature of 20C.

MSI Afterburner settings:

I scored 15 815 in Port Royal

I am not entirely sure if this is good or bad but I thought I'd post it here for the sake of it.


Any thoughts would be neat


----------



## yzonker

Darcris said:


> Good evening,
> 
> I've been browsing this thread for a while along with my friend and we've been trying our best to tweak our 3090's
> 
> For some slight background information - I've built my first custom water loop this year which includes the following:
> 
> i9 10900k @ 5.2Ghz 1.4v LLC6
> 32GB DDR4 G.Skill FlareX (2x8GB) B-die 1.4v @3200MHz XMP
> Zotac Trinity 3090 Non OC version
> 
> O11D XL with custom water loop
> 3090 Trinity EK Waterblock and Active backplate with Conductonaut LM
> EK Velocity CPU Block with Kryonaut TP
> 2x 30mm Corsair XR5 360 Rads
> 1x 54mm Corsair XR7 360 Rad
> 10x Noctua NF-A12x25 PWM fans @ 100%
> All Rads are intake 1x exhaust at rear of case
> 
> I have flashed the vBios with the Kingpin 3090 Re-Bar 1000W vBios
> 
> This is the sort of score I managed with my card and setup, on a fresh install of windows and an ambient room temperature of 20C.
> 
> MSI Afterburner settings
> 
> 
> I scored 15 815 in Port Royal
> 
> I am not entirely sure if this is good or bad but I thought I'd post it here for the sake of it.
> 
> 
> Any thoughts would be neat


Well, that's the highest 2x8-pin score I've seen, so it isn't terrible.


----------



## J7SC

FYI for those who play Microsoft Flight Simulator 2020: a big (40+ GB) patch is out that improves performance significantly, plus new planes are included (e.g. a Cessna float plane) and other visual improvements. The 3090 never had performance problems with this sim and its gorgeous graphics, but now I can fool around with 8K native before other adjustments.


----------



## des2k...

yzonker said:


> Well that's the highest 2x8pin score I've seen so it isn't terrible.


The Zotac does cap around a 2185-2190 effective clock. Low temps do help, and so does low-latency memory (Intel or Zen 3).

I got close to 15.8k with my 3900X. I did find some BIOS settings on F34 for the X570 Master for more consistent memory and L3 latency:

GMI encryption - off, for infinity fabric efficiency; memory dropped to 67.3ns from 67.6ns on repeated AIDA tests.

Preferred IO - A (bus IO is 10 for my RTX 3090) gives a drop in DPC latency under graphics load.









I scored 15 794 in Port Royal
AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## ViRuS2k

I've been trying out this KP 1000W ReBAR BIOS on my MSI Trio X 3090.
It's pretty good; so far I've managed a 2145MHz effective GPU clock at a 615W power draw, lol, at 65C. Yeah, I know the temps suck, but clocks won't drop because this BIOS isn't temp limited.
I can't go much over 2145, though, or I get instant freezing at 1100mV. I don't think there is any way for me to improve this, and I can't be bothered draining the loop and adding better thermal pads or whatnot, hahaha, but 2145 is still pretty respectable.

The memory on my card is no problem at all, as I have mp3works on it; 75C, never higher, at +1200 (21,902 effective) in games. I can go higher on the clocks, but what's the point? Do you actually gain anything going past 21,902? Lol, that's almost 22Gbps effective memory speed.
Can some of you do me a favor and post your 24/7 gaming clocks and voltages, so I can get an idea of what's good for running the card 24/7 across all types of games? I don't have the time to test loads of different games.


----------



## GRABibus

ViRuS2k said:


> I've been trying out this KP 1000W ReBAR BIOS on my MSI Trio X 3090.
> It's pretty good; so far I've managed a 2145MHz effective GPU clock at a 615W power draw, lol, at 65C. Yeah, I know the temps suck, but clocks won't drop because this BIOS isn't temp limited.
> I can't go much over 2145, though, or I get instant freezing at 1100mV. I don't think there is any way for me to improve this, and I can't be bothered draining the loop and adding better thermal pads or whatnot, hahaha, but 2145 is still pretty respectable.
> 
> The memory on my card is no problem at all, as I have mp3works on it; 75C, never higher, at +1200 (21,902 effective) in games. I can go higher on the clocks, but what's the point? Do you actually gain anything going past 21,902? Lol, that's almost 22Gbps effective memory speed.
> Can some of you do me a favor and post your 24/7 gaming clocks and voltages, so I can get an idea of what's good for running the card 24/7 across all types of games? I don't have the time to test loads of different games.



What are your temps in PR?
With my Strix, I hit 74 degrees average and 95 max temp on the GPU.

I scored 15 270 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Your 65 degrees seems far below my temps...
Are you on air?


----------



## tps3443

0451 said:


> Something is off. My Kingpin tops out at 53C under heavy load and I haven’t installed my HC block yet. That’s my AIO load temp at 80% rad fans and 50% VRM fan speed.


Yeah, my GPU temp and hot spot ran cooler with the Hybrid AIO. You'll drop memory temps and VRM temps with the HC, though.

I have (3) 360s (2.75mm H2O static pressure fans) and a D5 at full tilt.

Open case.

I'm gonna have to tear the whole thing down again to figure out exactly where I'm clogged.


----------



## geriatricpollywog

tps3443 said:


> Yeah, my GPU temp and hot spot ran cooler with the Hybrid AIO. You'll drop memory temps and VRM temps with the HC, though.
> 
> I have (3) 360s (2.75mm H2O static pressure fans) and a D5 at full tilt.
> 
> Open case.
> 
> I'm gonna have to tear the whole thing down again to figure out exactly where I'm clogged.


When I had a clog, it was plasticizer in the CPU block fins.


----------



## ViRuS2k

GRABibus said:


> What are your temps in PR?
> With my Strix, I hit 74 degrees average and 95 max temp on the GPU.
> 
> I scored 15 270 in Port Royal
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> Your 65 degrees seems far below my temps...
> Are you on air?


I'm not on air, pal. Like I said, temps suck at 600+W board power; at the normal power range, temps are around 46C.
They jump drastically when going extreme on board power.


----------



## GRABibus

ViRuS2k said:


> I'm not on air, pal. Like I said, temps suck at 600+W board power; at the normal power range, temps are around 46C.
> They jump drastically when going extreme on board power.


Ok.
Just to know, what's your water cooling setup for your GPU?


----------



## ViRuS2k

GRABibus said:


> Ok.
> Just to know, what's your water cooling setup for your GPU?


Dynamic XL + 2x thick 360mm radiators + a monster 420mm radiator on top; 18 fans.
Bottom rad push/pull, top rad push, upper rad push/pull.

Water temps never break 32C even under load, so I know it's the block that needs readjusting, lol, but I can't be arsed with all the draining and changing stuff, hahaha.
I'm not a synthetic-benchmark-crazy person; I just look for the fastest, best possible stable overclock for games only, in a 24/7 setting.


----------



## jura11

@ViRuS2k 

I'm using the XOC 1000W BIOS on my GPUs. I use them for rendering every day, my PC runs 24/7, and I also game on it. I'm currently running the XOC 1000W BIOS capped at 75% for gaming, with +135MHz on the core and +1295MHz on the VRAM. My VRAM sits in the 60s during a whole day of rendering or gaming, and core temperatures fluctuate between 38-42°C depending on the ambient. During the heatwaves we have been experiencing here in the UK, core temperatures have been 42°C max, with VRAM temperatures in the mid 60s.

Hope this helps 

Thanks, Jura
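Capping a BIOS by percentage like that is just a proportion, so it's easy to work out what the card is actually allowed to pull. A trivial sketch (the function name is mine):

```python
def power_limit_watts(bios_max_w, slider_pct):
    """Board power actually allowed at a given power-limit slider setting."""
    return bios_max_w * slider_pct / 100.0

power_limit_watts(1000, 75)  # 750.0 W allowed on the XOC 1000W BIOS at 75%
```

That's why a 1000W BIOS capped at 75% still behaves more like a high-end stock BIOS for gaming while keeping the headroom for benching.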


----------



## GRABibus

ViRuS2k said:


> I've been trying out this KP 1000W ReBAR BIOS on my MSI Trio X 3090.
> It's pretty good; so far I've managed a 2145MHz effective GPU clock at a 615W power draw, lol, at 65C. Yeah, I know the temps suck, but clocks won't drop because this BIOS isn't temp limited.
> I can't go much over 2145, though, or I get instant freezing at 1100mV. I don't think there is any way for me to improve this, and I can't be bothered draining the loop and adding better thermal pads or whatnot, hahaha, but 2145 is still pretty respectable.
> 
> The memory on my card is no problem at all, as I have mp3works on it; 75C, never higher, at +1200 (21,902 effective) in games. I can go higher on the clocks, but what's the point? Do you actually gain anything going past 21,902? Lol, that's almost 22Gbps effective memory speed.
> Can some of you do me a favor and post your 24/7 gaming clocks and voltages, so I can get an idea of what's good for running the card 24/7 across all types of games? I don't have the time to test loads of different games.


I've been using the KP 1000W rebar for gaming for 2 days now on my Strix on air.

I have tried it in Cold War at 2560x1440, ultra settings, ray tracing ultra and DLSS on quality.

+160MHz on core, with a 2190MHz point at 1.1V.
+1000MHz on memory.

The game stabilized at 2160MHz at 1.1V, with 26°C ambient and 60°C on the core.
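As an aside on what these memory offsets actually mean: Afterburner-style GDDR6X offsets apply to the double data rate, so the effective-rate and bandwidth arithmetic can be sketched like this (a rough illustration against the 3090's spec-sheet numbers, not a measurement tool):

```python
# Rough arithmetic for GDDR6X effective data rate and bandwidth on a 3090.
# Values are illustrative; offsets follow the usual Afterburner convention.

BUS_WIDTH_BITS = 384          # RTX 3090 memory bus width
BASE_RATE_MTPS = 19504        # stock effective rate (1219 MHz base clock)

def effective_rate(offset_mhz: int) -> int:
    """Afterburner memory offsets apply to the double data rate,
    so +1000 MHz adds 2 x 1000 = 2000 MT/s to the effective rate."""
    return BASE_RATE_MTPS + 2 * offset_mhz

def bandwidth_gbps(rate_mtps: int) -> float:
    """Peak bandwidth in GB/s: data rate x bus width in bytes."""
    return rate_mtps * (BUS_WIDTH_BITS / 8) / 1000

print(effective_rate(1000))          # 21504 MT/s with a +1000 offset
print(round(bandwidth_gbps(19504)))  # 936 GB/s at stock, matching the spec table
```

This also matches the "+1200 → 21902 effective" numbers quoted earlier in the thread (19504 + 2x1200 = 21904, within rounding).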


----------



## tps3443

0451 said:


> When I had a clog, it was plasticizer in the CPU block fins.


Yeah, I am almost certain the clog came from flushing my whole loop and washing out the radiators with water. It worked something loose.

I just discovered the clog was in my front EK 360mm radiator (the last rad before the reservoir). I hooked up two tubes and fittings to it, and not even blown air could pass through it (nothing was coming out lol), so I kept at it for a while and a huge release finally came out lol.


^ That didn't sound right... but you get the idea! Lol

Anyways! I am getting it all back together, and hopefully will get these temps down!


----------



## yzonker

des2k... said:


> The Zotac does cap around 2185-2190eff clock. Low temps do help & low latency mem(Intel or Zen3)
> 
> Got close to 15.8k with my 3900x. I did find some bios settings F34 x570 Master for more consistent mem and L3 latency.
> 
> gmi encryption -off for the infinity fabric efficiency, dropped to 67.3ns from 67.6ns on repeated Aida tests for mem
> 
> and preferred IO -A(bus io is 10 for my rtx3090) gives a drop to DPC latency under graphic load
>
> I scored 15 794 in Port Royal
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Yea, I realized he just nudged you out by a tiny bit. And that's exactly what I saw with my card too. If you look at the plot from 3DMark, the core frequency starts at 2025 for a few seconds and then drops to 2190 for the rest of the run. Increasing the offset at 1093mV had no effect.

Now that I have the 3080 Ti as a spare card, I may get serious about at least remounting the Corsair block, or maybe buying a different block. It's unclear which block is really the best choice; the Barrow you bought was crap, so that leaves Alphacool, Bykski, and EK. Any thoughts?


----------



## Nizzen

I think @devilhead has one of the highest PR scores without LN2! He does chilled water with a non-modded Strix card.
Wild!
My good friend from Norway 








I scored 16 401 in Port Royal
Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## jura11

I just tested the XOC ReBAR 1000W BIOS and my best score is 15601 points with +150MHz on core and +1515MHz on VRAM. VRAM temperatures didn't break 60°C and core temperatures maxed at 42°C over roughly 20 runs. I haven't played with it much, but it seems much better than the normal XOC 1000W BIOS.

Sadly my GPU won't do more than 2175MHz effective, and VRAM stops at +1515MHz; above that, scores drop by 100-200 points per run, and anything above +150MHz on core crashes the driver.

I think there are an extra 200 points in CPU and RAM optimisation, but that's it.









I scored 15 601 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 65536 MB, 64-bit Windows 10
www.3dmark.com





Hope this helps 

Thanks, Jura


----------



## jura11

Nizzen said:


> I think @devilhead has one of the highest PR scores without LN2! He does chilled water with non modded strix card
> Wild!
> My good friend from Norway
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 401 in Port Royal
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Did you see the Barrow chiller? Thinking of getting one hahaha, and maybe with luck I will break 16k hahaha 

Thanks again mate for uploading the Holy Grail of all XOC BIOSes hahaha, the XOC 1000W BIOS with ReBAR 

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

@tps3443 I see you got the clog cleared. I'm sure that could affect temps, but as for the idle temps being awesome: these cards are extremely efficient at idle. My Trio on the stock air cooler idled at 30C; with the EK block it idles at 25C. I think it was @des2k... that mentioned this (correct me if I'm wrong), but when your idle temp is good and your load temp isn't, it's a mounting/contact issue. Another way to verify this is to check other temps. 

For example: before I remounted, I had high load temps but great idle temps. I would put, say, a 500W load on the GPU; the core would be at 44C and sometimes peak at 46C. I touched the block and it wasn't warm. The air coming out of the rads wasn't warm. That to me meant the heat wasn't being transferred properly.


----------



## SoldierRBT

@tps3443 

Did you use the same thermal pads when you remounted the block? I did it once and saw 4c core temps higher. I had to put new thermal pads and core went down to 14C delta again.


----------



## gfunkernaught

I saw some mentions of the 1kW rebar BIOS not being temp limited, but I'm still seeing the standard GPU Boost temperature throttle: when the core hits around 40-42C, the core drops one bin. Guess that function is hard-wired into the card.
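For illustration, the bin-drop behavior described above can be modeled as a toy function. The ~15 MHz bin size and the threshold temperatures here are assumptions for the sketch (the real tables live in the vBIOS), not NVIDIA's actual values:

```python
# Toy model of GPU Boost temperature binning: the core sheds one ~15 MHz
# "bin" each time it crosses a temperature threshold. Bin size and
# thresholds below are illustrative guesses, not vBIOS values.

BIN_MHZ = 15
THRESHOLDS_C = [35, 42, 52, 58, 63]   # assumed throttle points

def boosted_clock(base_boost_mhz: int, core_temp_c: float) -> int:
    """Clock after temperature binning: one bin dropped per threshold crossed."""
    bins_dropped = sum(1 for t in THRESHOLDS_C if core_temp_c >= t)
    return base_boost_mhz - bins_dropped * BIN_MHZ

print(boosted_clock(2160, 30))   # below every threshold: full 2160 MHz
print(boosted_clock(2160, 41))   # one threshold crossed: 2145 MHz
```

Which lines up with the 2145-2160 MHz swings people report in the low-40s C range.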


----------



## jura11

gfunkernaught said:


> I saw some mentions of the 1kw rbar bios not being temp limited. I'm still seeing the standard gpu boost temp throttle when the core hits like 40-42c then the core drops one bin. Guess that function is hard wired into the card.


Yup, it will always be like that; that's NVIDIA GPU Boost and that's how it's meant to work.

Hope this helps 

Thanks, Jura


----------



## jura11

gfunkernaught said:


> @tps3443 I see you got the clog cleared. I'm sure that could affect temps but as for the idle temps being awesome, these cards are extremely efficient when it comes to idle temps. My Trio on the stock air cooler idle'd at 30c, with the ek block it idles at 25c. I think it was @des2k... that mentioned this (correct me if I'm wrong) but when your idle temp is good but load isn't, then its a mounting/contact issue. Another way to verify this is to check other temps.
> 
> For example: I before I remounted I had high load temps but great idle temps. I would put say a 500w load on the gpu, the core would be at 44c and sometimes peak at 46c. I touched the block and it wasn't warm. The air coming out of the rads wasn't warm. That to me meant that the heat wasn't being transferred properly.


Usually when idle temperatures are great, or close to water temperature, the mount is good; when temperatures shoot up quickly under load, you have a bad mount. If you can monitor VRM temperature under load then you are golden, and you can already monitor VRAM temperatures, which under water with good airflow shouldn't be as high as on air.

If there is a blockage or flow issue you should see it under load; that's why I recommend getting an Aquacomputer High Flow USB, or any of their flow meters, and doing tests.

Usually I test a mount with a cheap TIM like Arctic MX-4 and check the TIM imprint on the waterblock; it's worth the extra work just to be on the safe side.

But the EVGA HydroCopper has always been a poorly performing block. I have tested them on the GTX 1080 Ti and RTX 2080 Ti and they always ran hotter than any other block; on the RTX 2080 Ti, the HydroCopper was outperformed by a cheap Bykski waterblock by 6-8°C.

I wouldn't touch the EVGA HydroCopper.

Hope this helps 

Thanks, Jura
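jura11's rule of thumb (small core-over-water delta at idle, modest delta under load points to a good mount) can be written down as a quick check. The cutoff values here are illustrative assumptions, not lab numbers:

```python
# Quick mount sanity check from core and water temperatures, following the
# rule of thumb above: a good mount shows a small idle delta AND a modest
# load delta over water temperature. Cutoffs are assumptions, not lab data.

def mount_looks_ok(idle_core_c: float, load_core_c: float, water_c: float,
                   max_idle_delta: float = 5.0,
                   max_load_delta: float = 20.0) -> bool:
    idle_delta = idle_core_c - water_c
    load_delta = load_core_c - water_c
    return idle_delta <= max_idle_delta and load_delta <= max_load_delta

print(mount_looks_ok(33, 45, 30))   # True: 3C idle / 15C load delta
print(mount_looks_ok(33, 58, 30))   # False: 28C load delta suggests a bad mount
```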


----------



## satinghostrider

ViRuS2k said:


> been trying out this bios, KP 1000W rebar, on my msi trio x 3090
> its pretty good, managed to get 2145mhz effective gpu clock so far @615w power draw lol @65c. yeah i know temps suck, but clocks wont drop since this bios isnt temp limited
> but i have to say i cant go much over 2145 or i get instant freezing at 1100mv. dont think there is any way for me to improve this, and i cant be bothered draining and adding better thermal pads or whatnot hahaha, but 2145 is still pretty respectable
>
> memory on my card is no problem at all as i have mp3works on it, never goes higher than 75c @+1200 (21902 effective) in games. can also go higher on clocks, but whats the point??? do you actually gain anything going past 21902? lol, almost 22Gbps memory speed.
> can some people do me a favor: can you guys post your 24/7 gaming clocks and voltages so i can get an idea of whats good for running the card 24/7 across all types of games, as i dont have the time to test loads of different games


Nice to see a fellow 3090 Gaming X Trio owner here.
I'm also interested in trying the 1000W rebar BIOS, mainly for gaming performance and stability.

With the Suprim X BIOS, maxed-out power and temp limits, core +105MHz and mem +500MHz, these are what I'm getting. Gaming sees a steady 2115-2130MHz in Cold War, for example, which I usually play. If I run Superposition 8K I will hit 455W, and temps are around the same based on 3 back-to-back loops.

These are my stats after 3 hours of back-to-back gaming. No active backplate, just an EK waterblock with Gelid pads on the front and Thermalright pads on the rear.










With the 1000W rebar, what was your power limit set to? Did you have any overclock on this BIOS for your results? Are all your DP and HDMI ports working?

Thanks man...


----------



## yzonker

Not trying to start 3080ti trash talk or anything, but I'm kinda irked my 3080ti FTW3 Hybrid scores this close to my 3090,









Result not found
www.3dmark.com


----------



## tps3443

SoldierRBT said:


> @tps3443
> 
> Did you use the same thermal pads when you remounted the block? I did it once and saw 4c core temps higher. I had to put new thermal pads and core went down to 14C delta again.


I did re-use the same thermal pads. I squished them down some and cleaned them with 99.9% alcohol, wiping them down with a clear plastic bag. 

I actually improved the temps with the re-mount.


Anyways, I fixed it! I had a serious clog in my custom loop. One of my 3 radiators was almost literally airtight.

Temps are 45-47C now in Cyberpunk 2077 at 450+ watts (while my 7980XE is at 5GHz), using the 1KW rebar BIOS on my 3090 KP HC.

Previously, my GPU was hitting 55C in Cyberpunk at only 370-420 watts on the OC BIOS, with my CPU de-tuned to 4.5GHz for cooler water temps.

I can clearly see my coolant flowing in a thick stream back into the reservoir now. Previously it was dripping, or barely a stream at all.

Anyways! I am good to go now!!! The water temp difference of my 7980XE at 4.5GHz vs 5GHz is massive, so these results are insane. 

I had no idea I had a clog at all.


----------



## des2k...

...


----------



## gfunkernaught

Was testing out the 1kW rebar BIOS again in gaming: Titanfall 2 at 144fps, +150 core, +1200 vram, PL set to 70%, and the GPU core temp hit 46C! That's a bit too warm for me, especially considering ambient is 25C. The core frequency was 2145-2160MHz. I know 46C isn't bad, but the rest of the board is probably much warmer, especially the back. I backed my settings down to +105 core and PL to 52%, looped the Bright Memory benchmark, and the temp settled at 44C. Unless you have excellent cooling, no, exceptional cooling, gaming on the 1kW BIOS uncapped in the summer is probably not a good idea for the health of the card. Unless someone can shed some light and prove me wrong. I'm all ears.
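Worth spelling out for anyone flashing these: the power slider is a percentage of the flashed BIOS's power target, so the same percentage means very different watts on a 1000W XOC BIOS than on a stock one. A trivial sketch of the math behind caps like 70% and 52%:

```python
# Power-limit slider math on a flashed BIOS: the slider is a percentage of
# the BIOS power target, so PL 70% on a 1000W XOC BIOS is a far higher cap
# than PL 70% on a ~450W stock BIOS. Illustrative arithmetic only.

def power_cap_w(bios_target_w: float, slider_pct: float) -> float:
    """Watt cap implied by a power-limit slider percentage."""
    return bios_target_w * slider_pct / 100

print(power_cap_w(1000, 70))   # 700.0 W cap at PL 70%
print(power_cap_w(1000, 52))   # 520.0 W cap at PL 52%
```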


----------



## tps3443

yzonker said:


> Not trying to start 3080ti trash talk or anything, but I'm kinda irked my 3080ti FTW3 Hybrid scores this close to my 3090,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result not found
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com


Seriously! I think I get about the same score at 2,160MHz internal clock (exactly). That's weird!!


----------



## tps3443

gfunkernaught said:


> Was testing out the 1kW rebar BIOS again in gaming: Titanfall 2 at 144fps, +150 core, +1200 vram, PL set to 70%, and the GPU core temp hit 46C! That's a bit too warm for me, especially considering ambient is 25C. The core frequency was 2145-2160MHz. I know 46C isn't bad, but the rest of the board is probably much warmer, especially the back. I backed my settings down to +105 core and PL to 52%, looped the Bright Memory benchmark, and the temp settled at 44C. Unless you have excellent cooling, no, exceptional cooling, gaming on the 1kW BIOS uncapped in the summer is probably not a good idea for the health of the card. Unless someone can shed some light and prove me wrong. I'm all ears.


I plan on running this bios 24/7. Now that all of my temps are good anyways!


----------



## gfunkernaught

tps3443 said:


> I plan on running this bios 24/7. Now that all of my temps are good anyways!


You think that 46c is okay for this card running such high power usage?


----------



## Arizor

tps3443 said:


> Seriously! I think I get about the same score at 2,160Mh internal clock (Exactly) that’s weird!!


Is it weird? The 3080Ti is pretty much a 3090 with half the VRAM, and if you factor in chip lottery, I can imagine many getting within 1-2% of 3090s on benchmarks that don't need that extra memory. For content creators like me the 3090 is much, _much_ better simply for the extra memory; if I were just gaming etc. I'd go for the Ti all day.


----------



## J7SC

Arizor said:


> Is it weird? The 3080Ti is pretty much a 3090 with half the VRAM, and if you factor in chip lottery, I can imagine many getting within 1-2% of 3090s on benchmarks that don't need that extra memory. For content creators like me the 3090 is much, _much_ better simply for the extra memory; if I were just gaming etc. I'd go for the Ti all day.


With the previous gens, I always went for the 'Ti' custom PCB models in SLI, never the Titan class - but since I also use this for productivity and all monitors here are now 4K min, the 3090 makes sense. I added a 6900XT for a secondary system a few months ago as I do not want to get a _new_ GPU with less than 16 GB of VRAM in this day and age...a 3080 Ti with 16(+) GB would have made my choice(s) much harder. All that said, a well-cooled 3-pin 500W+ 3090 is _very nice _


----------



## tps3443

gfunkernaught said:


> You think that 46c is okay for this card running such high power usage?


I think it will be just fine. All of the components on my 3090 run really cool. The VRM’s, the GDDR6X, everything. And now that my D5 is actually pumping correctly, my GPU/GPU die temps are even lower.

The highest power consumption I have ever seen was about 570 watts. These 3090’s are meant for high power. 

Now, I know some people are pushing 600-700+ watts on air-cooled 3090's. That's probably not the most realistic scenario, maybe not for daily usage.

If your temps are reasonable, then yeah it should be fine.


----------



## Arizor

And here I am running my Strix on a single 240mm radiator maxxing at 54C and thinking it's great after a year on air 

46C is nothing for these cards, in terms of temps. Of course, the wattage/voltage running through them is another issue entirely; I personally wouldn't go beyond 520 watts or 1.1V for daily use. Indeed, for purposes outside of benching I'm inclined to just run the Strix 480W BIOS with voltage maxed around 1.075.


----------



## yzonker

Arizor said:


> Is it weird? The 3080Ti is pretty much a 3090 with half the VRAM, and if you factor in chip lottery, I can imagine many getting within 1-2% of 3090s on benchmarks that don't need that extra memory. For content creators like me the 3090 is much, _much_ better simply for the extra memory; if I were just gaming etc. I'd go for the Ti all day.


Yea my mem OC on my 3090 sucks which is a lot of the problem. I'm planning to do a remount at some point to get my mem temps down which may help.


----------



## Arizor

yzonker said:


> Yea my mem OC on my 3090 sucks which is a lot of the problem. I'm planning to do a remount at some point to get my mem temps down which may help.


Yeah, I lucked out on my Strix's mem; it can go up to +1500 without issue. Conversely, I lost the GPU lottery terribly; if I go anywhere north of 2GHz I'm likely to crash playing games for more than 30 mins.

That said, for gaming, I've found diminishing returns kick in rather swiftly once you go beyond 2-ish GHz on the GPU and +1000 mem.


----------



## tps3443

This thing is a monster in cyberpunk. I imagine power usage is pretty bad at the wall lol. My 7980XE does incredible in this title though. 

Full OC enabled.

1KW bios

23Ghz GDDR6X


----------



## Arizor

@tps3443 niceeee. You can also drag and drop the latest DLSS version file into the root directory and it works perfectly, DLSS 2.2 really improves 'performance' mode at 4k, looks great and I was getting great frame rates, really only clubs like Afterlife that drop you into the 60s (who cares though because that raytracing is gorgeous).


----------



## tps3443

Arizor said:


> @tps3443 niceeee. You can also drag and drop the latest DLSS version file into the root directory and it works perfectly, DLSS 2.2 really improves 'performance' mode at 4k, looks great and I was getting great frame rates, really only clubs like Afterlife that drop you into the 60s (who cares though because that raytracing is gorgeous).


That’s awesome.

I run it at 2560x1440P W/

Psycho ray tracing
+
Quality DLSS


----------



## Arizor

Yeah quality DLSS at 1440p is a must for CP. If you can grab a 4K monitor in the near future this is certainly one of those games that really looks stunning on it. I'm using the gigabyte FV43U (43 inch, 144hz, HDR1000) and it's jaw dropping.


----------



## tps3443

Right now I'm still testing stability in Cyberpunk 2077. I usually push the highest possible stable overclock on a GPU and run it like that daily (I mean, we usually only keep these cards for a year, maybe 2, lol). May as well use every drop.

I want to get a 4K monitor. Now that I have enough GPU power for it.


----------



## gfunkernaught

tps3443 said:


> I think it will be just fine. All of the components on my 3090 run really cool. The VRM’s, the GDDR6X, everything. And now that my D5 is actually pumping correctly, my GPU/GPU die temps are even lower.
> 
> The highest power consumption I have ever seen was about 570 watts. These 3090’s are meant for high power.
> 
> Now, I know some people pushing 600-700+ watts on a air cooled 3090’s. That’s probably not the most realistic scenario. Maybe not for daily usage.
> 
> If your temps are reasonable, then yeah it should be fine.


These 3090s meaning the higher-end models, I assume? The Trio/Suprim are mid-range; Strix/FTW/HOF/KP are the high-end cards. I have no way to monitor VRM temp unless there's a sensor in HWiNFO that I'm unaware of.
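On the monitoring point: VRM temperature isn't exposed on these boards, but the sensors that are (core temp, power draw, graphics clock) can be polled without HWiNFO via nvidia-smi's CSV query mode. A minimal sketch; the query fields are real nvidia-smi fields, while the parsing helper names are just illustrative:

```python
# Sketch of polling the GPU sensors that ARE exposed without HWiNFO, via
# nvidia-smi's CSV output. VRM temperature is not among the available
# fields on these cards; this covers core temp, power draw, and clock.
import subprocess

QUERY = "temperature.gpu,power.draw,clocks.gr"
CMD = ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"]

def parse_sample(line: str) -> dict:
    """Parse one CSV line, e.g. '46, 512.30, 2145', into named readings."""
    temp, power, clock = (field.strip() for field in line.split(","))
    return {"temp_c": int(temp), "power_w": float(power), "clock_mhz": int(clock)}

def poll_once() -> dict:
    """Run nvidia-smi once and parse the first GPU's readings."""
    out = subprocess.run(CMD, capture_output=True, text=True, check=True)
    return parse_sample(out.stdout.splitlines()[0])

print(parse_sample("46, 512.30, 2145"))  # {'temp_c': 46, 'power_w': 512.3, 'clock_mhz': 2145}
```

Looping `poll_once()` with a timestamp gives a crude temp/power log for long gaming sessions.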


----------



## tps3443

Arizor said:


> Is it weird? The 3080Ti is pretty much a 3090 with half the VRAM, and if you factor in chip lottery, I can imagine many getting within 1-2% of 3090s on benchmarks that don't need that extra memory. For content creators like me the 3090 is much, _much_ better simply for the extra memory; if I were just gaming etc. I'd go for the Ti all day.



Almost every single reviewer shot down the 3080 Ti as a ripoff, and I never understood why. I would have taken a soldering iron to one immediately out of the box for maximum power.

But instead, everyone said they were a horrible value, which is not true at all.

The 3080 Ti certainly makes sense with a good BIOS on it, but for the average gamer it doesn't work that way; 'Nvidia power politics' straight gimped the 3080 Ti pretty badly. Box stock in Time Spy Extreme, my watercooled 2080 Ti FE with Samsung GDDR6 was literally nipping at the heels of a stock 3080 Ti FE, roughly 8,190 points vs about 8,377.

So imagine a lot of 3080 Ti's thrown into a hot case with poor airflow and a copy of Windows 10 that has seen better days lol.

Anyways, more power helps quite a bit on the 3080 Ti. I was going to buy a cheap AIB model for like $1,700, until I was offered a 3090 Kingpin at its $2,089 MSRP hehe. The 3080 Ti didn't look like a good deal anymore, so I quickly turned it down.


I would be curious to see some other benchmark! I'd like to see how it compares to my card using some in-game benches. How about the RDR2 in-game benchmark?


----------



## Falkentyne

tps3443 said:


> Almost every single reviewer has shot down the 3080Ti as a ripoff. And I never understood why. I would have taken a soldering iron to one immediately out of the box for maximum power.
> 
> But instead, every one said they were a horrible value. which is not true at all.
> 
> But yeah the 3080Ti certainly makes sense with a good bios on it. But for the average gamer it doesn’t work this way. ‘Nvidia power politics’ straight gimped the 3080Ti pretty bad. ’Box Stock Performance’ In timespy extreme, my watercooled 2080Ti FE W/ Samsung GDDR6 was literally nipping at the heels of a stock 3080Ti FE. roughly (8,190 points VS like 8,377 points)
> 
> So imagine a lot of 3080Ti’s get thrown in a hot case, with poor air flow, and a copy of Windows 10 that has seen better days lol.
> 
> Anyways, power helps quite a bit for the 3080Ti. I was going to buy a cheap AIB model for like $1,700 dollars. Until I was offered a 3090 Kingpin at MSRP of $2089 dollars hehe. The 3080Ti didn’t look like a good deal anymore so I quickly turned it down.
> 
> 
> I would be curious to see some other benchmark! I’d like to see how it compares to my card using some in game benches. How about the RDR2 in game bench feature?


I think you need to put things in perspective.
You're clearly either rich or VERY well secured in income.
You buy multiple high-end video cards and then a $500 water block and parts to go with it. This isn't the average customer.
If you're rich, it doesn't matter where a card sits on the price scale; as long as it isn't past a certain point, you just buy it.
The reviewers aren't catering to the rich. They're catering to people on a budget, for whom $1,000-$1,300 is a LOT of money, and they can't assume those people will have another $600 to throw a custom loop on the hardware.
I'm fully aware there are a ton of people out there with lots of money to spend, but there are also a ton of people who need to watch their budget. A card with a $1,200 MSRP is a really bad value when you can pay just $300 more and get double the VRAM and even better cooling (AGAIN, I know full well you have a custom loop; you aren't the "average gamer" target audience). It was supposed to be $999 MSRP (Founder's).


----------



## Arizor

Yeah when it comes to GPUs, as @Falkentyne says we're definitely the 1%ers. Most folks' jaws drop when they learn how much my PC costs, and that's probably not much compared to some here!


----------



## tps3443

gfunkernaught said:


> These 3090s meaning the higher end models I assume you mean. The Trio/Suprim are mid-range. Strix/FTW/HOF/KP are the high end cards. I have no way to monitor VRM temp unless it is a sensor that appears in HWINFO that I am unaware of.


I would point some fans at the back of it; it should be fine. Maybe add some thermal pads, or coat the whole back PCB in K5 Pro (cheap, and works well!). I owned one of my best 2080 Ti's for over a year, most of that time on the Galax 2kW XOC BIOS. For the last few months I ran the KFA2 380W BIOS with soldered resistors (still crazy high available power, up to 532W). I de-soldered the resistors and re-installed the factory FE cooler (I left the KFA2 380 BIOS on for the new owner, who was a teenager). That card was a beast! Super strong, super reliable, can't kill it lol. I beat Cyberpunk, RDR2, Metro, and Death Stranding on it. That thing lived its life at 450-500+ watts, especially in Metro lol. The new owner loves it (a member bought it off these forums lol). Reference 2080 Ti's were safe for up to 600W+ sustained, per Buildzoid's videos I think?

With how well last gen handled high watts, and the large power jump we saw with the Ampere generation, I imagine you'd be just fine.


Or maybe just run the Kingpin 520W LN2 BIOS.


----------



## gfunkernaught

tps3443 said:


> I would point some fans at the back of it; it should be fine. Maybe add some thermal pads, or coat the whole back PCB in K5 Pro (cheap, and works well!). I owned one of my best 2080 Ti's for over a year, most of that time on the Galax 2kW XOC BIOS. For the last few months I ran the KFA2 380W BIOS with soldered resistors (still crazy high available power, up to 532W). I de-soldered the resistors and re-installed the factory FE cooler (I left the KFA2 380 BIOS on for the new owner, who was a teenager). That card was a beast! Super strong, super reliable, can't kill it lol. I beat Cyberpunk, RDR2, Metro, and Death Stranding on it. That thing lived its life at 450-500+ watts, especially in Metro lol. The new owner loves it (a member bought it off these forums lol). Reference 2080 Ti's were safe for up to 600W+ sustained, per Buildzoid's videos I think?
> 
> With how well last gen handled high watts, and the large power jump we saw with the Ampere generation, I imagine you'd be just fine.
> 
> 
> Or maybe just run the Kingpin 520W LN2 BIOS.


I've been running the 520W BIOS and it's fine, but you know, I want more lol. I like to bench more than just 3DMark, and I also really like testing and benching at 8K. I just get worried about the VRM temps sometimes. My vram temp never goes above 65c; last time I checked, when I was playing Quake 2 RTX on the 1kW BIOS, the core was running 2145MHz with the vram offset at +1200, 680W at times, and the core temp reached 47C. That is when I was like "ok maybe I should stop". If I do get worried about the back of the card then I will have to find an active cooling solution. TBH the thought has crossed my mind to make my own: a couple of CPU water blocks placed at key points along the back of the card, on top of the backplate, connected in series. Probably will need a second pump.


----------



## tps3443

gfunkernaught said:


> I've been running the 520w bios and they're fine but you know, I want more lol. I like to bench more than just 3dmark too. I also really like testing and benching 8k. I just get worried about the VRM temps sometimes. My vram temp never goes above 65c last time I checked which was when I was playing quake 2 rtx on the 1kw bios and the core was running 2145mhz and the vram offset was +1200. 680w at times and the core temp reached 47c. That is when I was like "ok maybe I should stop". If I do get worried about the back of the card then I will have to find an active cooling solution. TBH the thought crossed my head to make my own cooling solution. I thought about using a couple of cpu water blocks and placing them at key points along the back of the card, on top of the backplate. I could connect them in series. Probably will need a second pump.



I found a thread where some guy literally just stuck a bunch of those little mini heatsinks onto his 3090 backplate, then put two little slim 60mm fans on the back of it. He dropped like 20C on the rear components of the card, and he didn't even put new pads under the backplate. No telling how many hours and days he invested in messing with it; he tested a lot of different combinations of pads, fans, and heatsinks for the backplate alone.

So I would certainly go for something like that. The results will surprise you.


----------



## tps3443

Falkentyne said:


> I think you need to put things in perspective.
> You're clearly either rich or VERY well secured in income.
> You buy multiple high end video cards and then a $500 water block and parts to go on it. This isn't the average customer.
> If you're rich, it doesn't matter what a card costs on the "scale". As long as it isn't past a certain point, you just buy it
> The reviewers aren't catering to the rich. They're catering to people on a budget. Where $1,000-$1300 is a LOT of money and they can't assume that those people are going to have $600 to throw a custom loop on the hardware.
> I'm fully aware there's a ton of people out there with lots of money to spend, but there's also a ton of people who need to watch their budget. A card that is $1200 MSRP is a really bad value when you can pay just $300 more and get double the VRAM and even better cooling (AGAIN, I know full well you have a custom loop. You aren't the target audience of the "average" gamer profile). . It was supposed to be $999 MSRP (Founder's).


I'm certainly not rich or wealthy by any means. I'm just a normal person with normal things, the same type of person who thinks spending $1,200 on a GPU is expensive lol. It's just that the PC hardware of 2020-2021 has really brainwashed me, I guess.

I was ready for every online and local launch event for the RTX 3000 GPUs. Unfortunately, I was always left stuck with my old GPU. I could never get one, so I read those reviewers talking crap about the 3080 Ti being a bad value at $1,200, when I would've loved to pay that myself and try it out myself! Please?? Anyone??!! Lol.

And like I said before, I'm not rich. I have a family, I'm in college, and I work from home making a very regular wage. Managing and saving money is key for me.

You'd be surprised what the mindset of the average person is nowadays, because I feel we have all been influenced about what is acceptable to pay for something.


----------



## Nizzen

Arizor said:


> Yeah when it comes to GPUs, as @Falkentyne says we're definitely the 1%ers. Most folks' jaws drop when they learn how much my PC costs, and that's probably not much compared to some here!


Most folks' jaws drop when they learn how much a new car costs.... Let alone tuning and styling it 
Pc is cheap 🤩


----------



## PLATOON TEKK

Ran a quick temp test on the Optimus Strix block. Ambient is 26C, max is 32C (FurMark for 8 mins [most extreme] and Time Spy x1) at 2040MHz. Using the KP rebar BIOS, so I can't see a lot of the extra temps.

Single chiller loop. This obliterates my other blocks. I'll be adding one of my EVC2SX units to the second Strix to add voltage and see how it deals with that.


----------



## satinghostrider

Nizzen said:


> Most folks' jaws drop when they learn how much a new car costs.... Let alone tuning and styling it
> Pc is cheap 🤩


I think most would faint knowing how much cars cost in Scandinavian countries, or in a country like Singapore, where there's something like 300% tax on cars lol! Ask me how I know.


----------



## Arizor

Looks awesome @PLATOON TEKK


----------



## PLATOON TEKK

Arizor said:


> Looks awesome @PLATOON TEKK


Thanks so much brother 🙏🏼. That's actually a Lian Li desk case with the GPU on top of it lol. It got so damn annoying moving the monitor and glass to change the GPU. I HIGHLY advise against them unless you have a panel you can flip open or some ****.


----------



## Arizor

PLATOON TEKK said:


> Thanks so much brother 🙏🏼. That’s actually a Lian-li desk case with the gpu on top of it lol. Got so damn annoying moving the monitor and glass to change gpu. I HIGHLY advise against them unless you have a panel you can flip open or some ****.


I hear you. I got the Cooler Master Cosmos C700P as my case before I got into watercooling; it looks good with the curved glass, but you can barely fit 2 rads in this thing with a full-sized motherboard. Useless.


----------



## PLATOON TEKK

Arizor said:


> I hear you, I got the Coolermaster C700P cosmos as my case before I got into watercooling; looks good with the curved glass but you can barely fit 2 rads into this thing with a fullsized motherboard, useless.


That’s a beautiful case just looked it up. I fully agree, at the moment my two favorite 3090 setups at the studio are this:









I could def do without the RGB, but given the number of blocks and GPUs we change, this is heaven in terms of ease of use. The KP setup is even easier, since I can change mobos without removing the GPUs.








These setups also work really well with the external chiller tubing. The KP setup is dismantled at the moment; waiting on the Dark Z590, then will get back on the 7th!! Lol


----------



## Arizor

Haha that’s wild @PLATOON TEKK , especially that first one, love it.


----------



## Arizor

Man stock here in Australia for the high end cards is ridiculously scant, especially the kingpin.

If anyone wants to trade their Kingpin I’m happy to swap my waterblocked Strix plus cash, just thought I’d put it out there.


----------



## elbramso

First post on this forum^^
does anyone know why my PR score is lower compared to the score of another user (from this forum):








Result







www.3dmark.com




It seems my clock and mems were higher, but I still fall behind by 200 pts.


----------



## gfunkernaught

tps3443 said:


> I found a thread of some guy who literally just stuck on a bunch of those little mini heatsinks to his 3090 backplate, and then put (2) little 60MM slim fans on the back of his backplate. He dropped like 20C in temps on the rear components of the card. He didn’t even put new pads under the backplate. No telling how many hours and days he invested in to messing with it. He has tested a lot of different methods of pads, and fans, and heatsinks etc for the just back plate alone.
> 
> So I would certainly go for something like that. The results will surprise you.


I already have a bunch of M.2 heatsinks on the back covering the VRAM, core, and all the VRMs, sitting on top of a 2mm pad, with a 120mm fan on top of the sinks. It helps that area, but the rest of the back of the card gets toasty. Those M.2 heatsinks were a waste of money, I think, because those long heatsinks on Amazon are way cheaper and cover more space. I'm hesitant to keep testing the 1kW BIOS right now with my current setup. Have to build the 1000D and see how that goes.


----------



## gfunkernaught

elbramso said:


> First post on this forum^^
> does anyone know why my PR score is lower compared to the score of another user (from this forum):
> 
> 
> 
> 
> 
> 
> 
> 
> Result
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> It seems my clock and mems where higher but still I fall behind by 200pts


Did you check the 3DMark profile settings in the Nvidia control panel? Texture filtering should be set to High Performance. Also make sure no monitoring software is running while benching. You can monitor while tweaking your OC, but not during the actual run.


----------



## ViRuS2k

satinghostrider said:


> Nice to see a fellow 3090 Gaming X Trio here.
> I'm also interested to try the 1000W bios rebar. I'm mainly interested in gaming performance and stability so I am more interested in that.
> 
> With the Suprim X BIOS, maxed out power and temp limit, Core +105MHz Mem +500MHz, these are what I'm getting. Gaming sees a steady 2115-2130MHz in Cold War, for example, which I usually play. If I run Superposition 8K, I will hit 455W and temps are also around the same based on 3 back-to-back loops.
> 
> These are my stats after 3 hours of back to back gaming. No active backplate just an EK waterblock with Gelid pads on front and Thermalright pads on the rear.
> 
> View attachment 2519600
> 
> 
> With the 1000W rebar, what was your power limit set to? Did you have any overclock on this bios based on your results? Is all your DP and HDMI ports working?
> 
> Thanks man...


Yeah pal, all HDMI ports working. The good news as well is that we are not limited on our card's PCIe slot power rail, as its reading is glitched; to get anywhere near the limit you would have to be running something like 750W+ board power lol.
I run the card at a safe 70-80% power limit with the KP 1000W rebar BIOS, giving me around 600-650W board power. To saturate that, though, you would need to be running at 2150+ MHz on the GPU core.

I found a middle ground though: 75% power limit with a 2130-2150 MHz GPU core clock and +1200 MHz memory clock. The reason why is I'm a gamer, and I would rather get the best performance in my games than run synthetic benchmarks. The good thing about this BIOS is that there is effectively no power limit, and if you have good temps you won't drop bins at those clocks either.
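The percentage-slider math above is just a fraction of the BIOS's maximum board power. A minimal sketch of that arithmetic, using the 1000W figure from this BIOS as the only input (actual draw will sit below the cap and depends on load, clocks, and voltage):

```python
# Rough board-power ceiling for a given power-limit percentage.
# Example numbers match the KP 1000W rebar BIOS discussed above;
# this is illustrative arithmetic, not a reading from the card.

def board_power_w(bios_limit_w: float, power_limit_pct: float) -> float:
    """Maximum board power the slider allows at a given PL percentage."""
    return bios_limit_w * power_limit_pct / 100.0

if __name__ == "__main__":
    bios_limit = 1000.0  # KP 1000W rebar BIOS
    for pl in (70, 75, 80):
        print(f"PL {pl}% -> up to {board_power_w(bios_limit, pl):.0f} W board power")
```

So a 70-80% slider setting caps the card at 700-800W, comfortably above the 600-650W this card actually pulls at those clocks.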


----------



## GRABibus

gfunkernaught said:


> Was testing out the 1kw rbar bios again in gaming, Titanfall 2 at 144fps, +150 core +1200 vram, PL set to 70%, gpu core temp hit 46c! That's a bit too warm for me, especially considering ambient temp is 25c. The core frequency was 2145-2160mhz. I know 46c isn't bad, but the rest of the board is probably much warmer, especially the back. I backed my settings down to +105 core and PL to 52%, tested out bright memory bench on loop, and the temp settled at 44c. Unless you have excellent cooling, no, exceptional cooling, gaming on the 1kw bios uncapped in the summer is probably not a good idea for the health of the card. Unless someone can shed some light and prove me wrong. I'm all ears.


I currently game with it (strix on air) PL capped at 56%.


Cold War => 2560x1440 144Hz, 60degrees max on core at 26 ambient, [email protected],1V
Overwatch => 2560x1440 144Hz, 65degrees max on core at 26 ambient, undervolt [email protected],950V
BFV, same as Overwatch.

VRM are 60 degrees max and memories at 74 degrees max.

I don't see any dangerous data here, and what is funny is that I get the same temps with PL at 100%.
I am not sure capping the PL works with this BIOS.


----------



## J7SC

GRABibus said:


> I currently game with it (strix on air) PL capped at 56%.
> 
> 
> Cold War => 2560x1440 144Hz, 60degrees max on core at 26 ambient, [email protected],1V
> Overwatch => 2560x1440 144Hz, 65degrees max on core at 26 ambient, undervolt [email protected],950V
> BFV, same for overwatch.
> 
> VRM are 60 degrees max and memories at 74 degrees max.
> 
> *I don’t see any dangerous data here and, what is funny, is that I have same temps with PL at 100%.
> I am not sure capping PL works with this bios.*


Checking / logging hotspot temps regularly is one measure I use when running a 500W+ BIOS on my water-cooled Strix...that in fact might have saved the card earlier (vertically mounted), as the hotspot started to 'run away' compared to the overall GPU and even VRAM temps...a problem with the EKWB block combined with the relatively thin Kryonaut was causing the trouble.

Another thing to keep in mind is momentary 'spikes', which can seriously exceed regular PL max settings; I prefer to keep a bit of safety headroom in my settings. Somewhat related, there's the Amazon New World debacle with some 3090s. FYI, FS2020 had a huge 40 GB patch a couple of days ago which changed a lot under the hood, and on one of my setups running it, fps in the menu started to hit around 700 at times. I capped max fps via Nvidia Inspector after that.
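The "watch for a runaway hotspot" habit described above is easy to automate once you have a temperature log. A minimal sketch, assuming you already export paired (core, hotspot) samples from whatever tool exposes them (HWiNFO or GPU-Z CSV logs, for example); the 25°C delta threshold is an arbitrary example, not an NVIDIA spec:

```python
# Flag log samples where the hotspot-to-core delta "runs away".
# Input: (core_temp_c, hotspot_temp_c) pairs from a monitoring log.
# The threshold is an illustrative assumption, not a vendor limit.

def runaway_samples(samples, max_delta_c=25.0):
    """Return indices of samples whose hotspot delta exceeds max_delta_c."""
    return [i for i, (core, hot) in enumerate(samples)
            if hot - core > max_delta_c]

if __name__ == "__main__":
    log = [(45, 58), (46, 60), (47, 78), (48, 80)]  # made-up readings
    bad = runaway_samples(log)
    print(f"{len(bad)} suspicious samples at indices {bad}")
```

A steadily growing delta like the one this would flag is the pump-out / bad-mount signature described above, and worth a repaste before it cooks the die.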


----------



## gfunkernaught

GRABibus said:


> I currently game with it (strix on air) PL capped at 56%.
> 
> 
> Cold War => 2560x1440 144Hz, 60degrees max on core at 26 ambient, [email protected],1V
> Overwatch => 2560x1440 144Hz, 65degrees max on core at 26 ambient, undervolt [email protected],950V
> BFV, same for overwatch.
> 
> VRM are 60 degrees max and memories at 74 degrees max.
> 
> I don’t see any dangerous data here and, what is funny, is that I have same temps with PL at 100%.
> I am not sure capping PL works with this bios.


How do you monitor vrm temperature?


----------



## ArcticZero

Finally got around to putting my PNY 3090 under water (EKWB Quantum Vector RE + active backplate, Gelid GC Extreme pads). Temps shown are while mining, but gaming never went higher than 55C (25C ambient) at 500W power (shunt modded). I'm not sure if the hotspot delta could be better, but I'm not really in the mood to reseat the block for what are already amazing temps for me. Used to max out at 80C core, 98C memory, high-90s hotspot.

Also absolutely no clearance issues with stacked shunts on either the front or rear block.
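For context on the "stacked shunt" mod mentioned above: soldering a second shunt resistor on top of the original puts the two in parallel, so the controller senses a smaller voltage drop and under-reports current and power. A minimal sketch of that math; the 5 mΩ stock and stacked values are common example figures, not measurements from this card:

```python
# Parallel-shunt math behind a "stacked shunt" power mod. The card senses
# current via the voltage drop across a shunt; stacking a second resistor
# in parallel lowers the effective resistance, so reported power is scaled
# down by R_parallel / R_stock. Example resistances only.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two resistors in parallel."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def true_power(reported_w: float, r_stock: float, r_stacked: float) -> float:
    """Actual power when the controller still assumes the stock shunt."""
    return reported_w * r_stock / parallel(r_stock, r_stacked)

if __name__ == "__main__":
    # Stacking 5 mOhm on 5 mOhm halves the sensed drop, so a reported
    # 250 W corresponds to roughly 500 W of real draw.
    print(true_power(250.0, 5.0, 5.0))
```

This is why a shunt-modded card can sit at "500W" while the driver still believes it is within its stock limit, and why cooling headroom matters so much with these mods.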


----------



## GRABibus

gfunkernaught said:


> How do you monitor vrm temperature?


With HWiNFO.


----------



## yzonker

J7SC said:


> Checking / logging hotspot temps regularly is one measure I use when running 500+ bios on my w-cooled Strix...that in fact might have saved the card earlier (vertically mounted) as hotspot started to 'run away' compared to the overall GPU and even VRAM temps...a problem with the EKWB block combined with the relatively thin Kryonaut and was causing the trouble.
> 
> Another thing to keep in mind are momentary 'spikes' which can seriously get past regular PL max settings - I prefer to keep a bit of safety headroom in my settings. Somewhat related, there's the Amazon New World debacle with some 3090s. FYI, FS2020 had a huge 40 GB patch a couple of days ago which changed a lot under the hood, and on one of my setups running that, fps in the menu started to hit around 700  at times. I capped max fps via NVidia Inspector after that.


Did you keep using Kryonaut or switch to something else? Doesn't seem to be a clear consensus on what is the best paste for a 30 series gpu.


----------



## gfunkernaught

GRABibus said:


> with HWINfo


What's the name of the sensor?


----------



## GRABibus

gfunkernaught said:


> What's the name of the sensor?


I am not at home.
Maybe someone can answer.


----------



## Lobstar

gfunkernaught said:


> What's the name of the sensor?


Here is my EVGA card and its reporting in HWiNFO.


----------



## Falkentyne

Lobstar said:


> Here is my EVGA card and it's reporting in HWinfo.
> View attachment 2519691


We're not talking about EVGA cards though.
Only EVGA and Kingpin cards have that sensor (other cards may report those temps to the firmware, but it isn't exposed via I2C, NVAPI, or SMBus).


----------



## GAN77

Falkentyne said:


> We're not talking about eVGA cards though.
> Only evga and kingpin cards have that sensor (even if the other cards can report their temps to the firmware but isn't exposed via i2c, nvapi nor smbus).


HWiNFO reads the VRM temperature on Asus cards.


----------



## gfunkernaught

GRABibus said:


> with HWINfo





Lobstar said:


> Here is my EVGA card and it's reporting in HWinfo.
> View attachment 2519691


Did you change the labels? Also, I'm not sure all 3090s have VRM temp sensors.


----------



## J7SC

yzonker said:


> Did you keep using Kryonaut or switch to something else? Doesn't seem to be a clear consensus on what is the best paste for a 30 series gpu.


...I originally remounted and repasted with Kryonaut (no luck after a couple of weeks). I then switched to Gelid GC Extreme, which is 'thicker / stickier', while mounting the new Strix GPU block (Phanteks)....

EKWB's brilliant machining had left a cutting-head imprint on that block with a small ridge right above the GPU die, which would make it easier for the thinner Kryonaut to eventually 'run / pump out', especially in a vertical mount


----------



## Lobstar

yzonker said:


> Did you keep using Kryonaut or switch to something else? Doesn't seem to be a clear consensus on what is the best paste for a 30 series gpu.


I'm using Kryonaut Extreme on my 3090 FTW3U / Optimus WB. GPU is at 34℃, water is 27℃, ambient air is 22.0℃, running a Heaven loop at 2160MHz core / 2800MHz memory @ 450W. I used the old-school credit card paste application method: apply TIM as thin as possible on both mating surfaces, wipe it away, then reapply the paste with a credit card as thin as you can while keeping full coverage. I also measured my fasteners to ensure they were all the same length and sanded them to ensure even pressure on the cold plate. The worst I've seen is a full 10℃ between the GPU and water.


----------



## PLATOON TEKK

J7SC said:


> ...I originally remounted and repasted with Kryonaut (no luck after a couple of weeks). I then switched to Gelid GC Extreme which is 'thicker / stickier' while mounting the new Strix GPU block (Phanteks)....
> 
> EKWB's brilliant machining had left a cutting head imprint on that block with a small ridge right above the GPU die  , which would make it easier for the thinner Kryonaut to eventually 'run / pump out', especially in a vertical mount


Damn, sorry to hear. Wild how far EK have fallen. How are temps on your Phanteks block? I actually had a leak from the plate itself after adding a pump. Pressure was still low, so I was surprised; luckily I didn't lose the card.


----------



## Nizzen

Lobstar said:


> I'm using Kryonaut Exteme on my 3090FTW3U/Optimus WB. GPU is at 34℃, Water is 27℃, ambient air is 22.0℃. Running heaven loop at 2160mhz Core / 2800mhz Memory @ 450w. I used the old school credit card paste application method with applying TIM as thin as possible on both mating surfaces. Then wipe it away and reapply the paste with a credit card as thin as you can with keeping full coverage. I also measured my fasteners to ensure they were all the same length and sanded them to ensure even pressure of the cold plate. The worst I've seen is a full 10℃ between the GPU and water.


Nice 

What is your Port Royal score with "no powerlimit" ? 

Optimus blocks are wild! 🤩


----------



## J7SC

PLATOON TEKK said:


> Damn, sorry to hear. Wild how far EK have fallen. How are temps on your Phanteks block? I actually had a leak from the plate itself after adding a pump. Pressure was still low, so I was surprised; luckily I didn't lose the card.


...temps are very good so far. As to the leak you described, was that on the Phanteks ? Exact location of the leak (I should check !), and do you know the cause (ie. screws not tight enough, squished O-ring) ?


----------



## PLATOON TEKK

J7SC said:


> ...temps are very good so far. As to the leak you described, was that on the Phanteks ? Exact location of the leak (I should check !), and do you know the cause (ie. screws not tight enough, squished O-ring) ?


Correct. I only purchased one of the Phanteks; temps were pretty good and it looked good too. Since the leak happened with added pressure, I'm assuming the plate got lifted apart (loose screws). I had something similar happen to me with a **** distro plate.

The leak came from the corner of the block itself. So maybe just swipe your hands along the gpu edges to make sure you have no wetness!









I was worried because the power delivery was literally dripping while on 🤦‍♂️. Saw it through the glass of the table.


----------



## Lobstar

Nizzen said:


> What is your Port Royal score with "no powerlimit" ?


This is my best run on the KPE520w bios.








I scored 15 595 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## J7SC

PLATOON TEKK said:


> Correct. I only purchased one of the Phanteks; temps were pretty good and it looked good too. Since the leak happened with added pressure, I'm assuming the plate got lifted apart (loose screws). I had something similar happen to me with a **** distro plate.
> 
> The leak came from the corner of the block itself. So maybe just swipe your hands along the gpu edges to make sure you have no wetness!
> 
> View attachment 2519702
> 
> I was worried because the power delivery was literally dripping while on 🤦‍♂️. Saw it through the glass of the table.


Thanks much! I checked, and so far, so dry, but I will keep an eye on it. That said, every new CPU and GPU block gets the screw-tightening check: years ago, I had brand-new EKWB GPU blocks mounted on two GPUs, one of which started to leak during leak testing at the top part where the acetal with the G1/4 threads sits...on both those blocks, a total of 4 out of 6 screws were not tight...

Per the spoiler, some of the 3090 EKWB issues. First, 3 of 5 RGB lights stopped working after about 10 days or so...not the end of the world, but if it has RGB and one paid for it, it shouldn't go bust like that. Next, the 'nickel' backplate started shedding BIG CHUNKS of nickel coating before I had even mounted it...the included thermal pads were enough to make that happen. Finally, some pics of the 'machine head' imprint (see orange circle)


Spoiler


----------



## Arizor

Dammit lads, you're not doing my wallet any favours. After @PLATOON TEKK 's great photos and the worries around EKWB, plus my own concern that I'm not getting the best performance from my EKWB block, I've gone and ordered the Optimus block for my Strix, plus a new D5 pump to replace my "beginners" pump from the EKWB starter kit. This is a slippery slope


----------



## elbramso

gfunkernaught said:


> Did you check the 3dmark profile settings in the Nvidia control panel? Set the texture filtering setting, should be set to high performance. Also make sure all monitoring software is not running while benching. You can monitor while tweaking your OC, but not during the actual run.


I checked the Nvidia control panel and closed the monitoring software, but the score didn't change.
My guess is that my riser cable affects the score more than I thought; I'm using a 60cm riser cable.


----------



## OrionBG

Hey guys,
BIG Problem...
I destroyed the HDMI port on my 3090FE... don't ask...
Can anybody provide me with a part number for that port on the FE, so I can get one and bring it to a repair shop to be replaced?










Thanks!


----------



## gfunkernaught

OrionBG said:


> Hey guys,
> BIG Problem...
> I destroyed the HDMI port on my 3090FE... don't ask...
> Can anybody provide me with a part number for that port on the FE, so I can get one and bring it to a repair shop to be replaced?
> 
> View attachment 2519742
> 
> 
> Thanks!


Check NorthridgeFix's website; they have HDMI ports.


----------



## gfunkernaught

elbramso said:


> I checked Nvidia control panel and closed monitoring software but the score didn't change.
> My guess is that my riser cable affects the score more than I thought - I'm using a 60cm riser cable


How hard would it be to connect the GPU directly to the PCIe slot? Then you could eliminate the riser cable as a factor.


----------



## Bal3Wolf

Managed to get my best PR and Time Spy scores with the new 1kW rebar BIOS.









I scored 15 891 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com













I scored 21 287 in Time Spy


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Nizzen

Bal3Wolf said:


> managed to get my best pr and timespy with the new 1kw rebar bios.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 891 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 287 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com






CPU score is low. Run with SMT off and PBO; it should do 18500+ with tweaked 3800 C14 RAM.

Love from Norway ♡


----------



## Bal3Wolf

I've always had a low score on this 5950X for some reason, and my RAM can't do C14.


----------



## OrionBG

gfunkernaught said:


> Check Northridge fix's website they have HDMI ports.


Thanks, but it appears they only have ports for game consoles, which are a little different from the ones on the GPU.
Anyway, I was able to find one locally. The stupid thing is less than a dollar...
I'll probably get it tomorrow and bring the card to a local repair shop for the transplant...
I feel so stupid because of the whole fiasco... The system was ready to boot... I filled the loops and everything, and the only thing left was to connect the cables...
Because of space restrictions, I had to put the connector in at an angle, and the result...you saw it...


----------



## J7SC

Arizor said:


> Dammit lads, you're not doing my wallet any favours. After @PLATOON TEKK 's great photos and the worries around EKWB, plus my own concern I'm not getting the best performance from my EKWB block, I've went and ordered the Optimus block for my Strix, plus a new D5 pump to replace my "beginners" pump from the EKWB starter kit. *This is a slippery slope *


...yes, slippery slope indeed - I started sliding down that slope in late 2012; brace yourself (and your wallet) for what is to come.


----------



## GRABibus

gfunkernaught said:


> Did you change the labels? Also, I'm not sure that all 3090s have vrm temp sensors.


No label change.


gfunkernaught said:


> How do you monitor vrm temperature?


Here it is :










Strix with KP 1000W Rebar Bios.


----------



## gfunkernaught

GRABibus said:


> no label ch
> 
> 
> Here it is :
> 
> View attachment 2519769
> 
> 
> Strix with KP 1000W Rebar Bios.


Yeah, my Trio doesn't have VRM sensors. The only sensors are GPU, VRAM, and hotspot.


----------



## gfunkernaught

J7SC said:


> ...yes, slippery slope indeed - I started sliding down that slope in late 2012; brace yourself (and your wallet) for what is to come.


For me it's like Olympic ice skating. My 1000D build costs at least $1200 for the case + cooling alone.


----------



## Arizor

J7SC said:


> ...yes, slippery slope indeed - I started sliding down that slope in late 2012; brace yourself (and your wallet) for what is to come.





gfunkernaught said:


> For me it's like Olympic ice skating. My 1000D build costs at least $1200, case + cooling alone.


Yep, just ordered a new radiator and 6 Noctua fans to cover it. Sweating the details of whether it'll even fit properly in the case, but bugger it 😂


----------



## J7SC

Arizor said:


> yep just ordered a new radiator and 6 noctua fans to cover it, sweating the details of it’ll even fit properly in the case but bugger it 😂


...and - you're off


----------



## Arizor

😂


----------



## Lobstar

Arizor said:


> yep just ordered a new radiator and 6 noctua fans to cover it, sweating the details of it’ll even fit properly in the case but bugger it 😂


Don't ever go and add up your purchases afterward, either. I was just curious: counting only the case, fans, radiators, and pumps, I'm at $2119.49. The actual PC parts, not counting the stupid graphics card, are only $2050. At least I'll be able to use most of the cooling stuff and the case in the next build.


----------



## Bal3Wolf

Lol, I don't dare add up my watercooling spend over the last 15 years. I just know I ended up with 2 fully cooled computers, CPU and GPU, each with multiple rads lol. What do you do when you have an extra rad? Buy parts to watercool another PC.


----------



## Arizor

Haha I feel you both, I did just do some quick arithmetic in my head and I don't like the numbers...!


----------



## GRABibus

It's been a long time since I ran Time Spy with my Strix on air.....









I scored 21 117 in Time Spy


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Bios KP 1000W XOC ReBar
25°c ambient

Graphics score => 22501


----------



## GRABibus

Bal3Wolf said:


> Iv always had a low score on this 5950x for some reason and my ram can't do c14.


Not sure it is C14 latency related.

This is my run with 5900X and 3800MHz CL16 @ 25°C (KP XOC 1000W rebar bios) :









I scored 21 117 in Time Spy


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





If I run this test at 20°C (did it last winter with the ASUS XOC 1000W BIOS), my CPU score is around 16100 pts, same as your 5950X.

CPU tweaking will be more beneficial than reducing latency.


----------



## Bal3Wolf

So it looks like my CPU score is pretty normal.


----------



## Arizor

So how do you folks go about draining a loop without a drain valve (I've got one on the way along with an EKWB D5 pump for next time!), specifically getting _all that goddamn water out of the GPU block_?

I need to drain my system once the new pump and radiator arrive, and I hate that moment when I unscrew the tubing from the GPU and water leaks out; no matter how little it always freaks me out.


----------



## J7SC

Arizor said:


> So how do you folks go about draining a loop without a drainage valve (I've got one on the way along with an EKWB D5 pump for next time!), specifically getting _all that goddamn water out of the GPU block_.
> 
> I need to drain my system once the new pump and radiator arrive, and I hate that moment when I unscrew the tubing from the GPU and water leaks out; no matter how little it always freaks me out.


Are you using hard-tubing ? If not, just separate the GPU+block from the (powered down and unplugged) mobo and rest on a towel. Also make sure to unplug all video IO and power cables and put them out of the way...


----------



## Arizor

J7SC said:


> Are you using hard-tubing ? If not, just separate the GPU+block from the (powered down and unplugged) mobo and rest on a towel. Also make sure to unplug all video IO and power cables and put them out of the way...


Thanks @J7SC , good tip, soft tubing so can do that quite easily, I'll just do an upside-down over the sink then rest on a towel.


----------



## GRABibus

Bal3Wolf said:


> so looks like my cpu score is pretty normal.


Let's say you could get more than my *5900x* with your *5950X*


----------



## Shawnb99

Lobstar said:


> Don't ever go and add up your purchases afterward either. I was just curious. Only counting case, fans, radiators, and pumps I'm at $2119.49. The actual PC parts not counting the stupid graphics card are only $2050. At least I'll be able to use most of the cooling stuff and case in the next build.


I'm afraid to go back and add up all that I've spent. Counting just radiators, pumps, and some AC gear, I'm already over $2500.


----------



## Arizor

Holy moley. How many PCs have you got running? Mining operation?


----------



## Shawnb99

Arizor said:


> View attachment 2519826
> 
> 
> Holy moley. How many PCs have you got running? Mining operation?


Lol, just the one. I use it mostly for gaming. Fit as much as I could in my case and went all push/pull. I keep the fans at around 600rpm or so and have a 0.4 delta normally. I need to remount my CPU block and such, so I never went all out yet; will do that once I get my Optimus block.


----------



## Bal3Wolf

GRABibus said:


> Let's say you could get more than my *5900x* with your *5950X*


The CPU performs fine in everything but Time Spy, and I've yet to really figure out why; I have tried tons of things. It gets 31,500 in CB23 and runs any CPU benchmark perfectly.


----------



## J7SC

...I'm in the software-related field, and all my systems are a business expense for 'productivity investments' (that's my excuse, and I'm sticking to it  ).

@Bal3Wolf 5950X can be in the high 16k low 17k range for CPU in TimeSpy...could your score issue be related to C and P states ?


----------



## KedarWolf

Shawnb99 said:


> Lol, just the one. I use it mostly for gaming. Fit as much as I could in my case and went all push/pull. I keep the fans at around 600rpm or so and have a 0.4 delta normally. I need to remount my CPU block and such, so I never went all out yet; will do that once I get my Optimus block.


What case do you use?


----------



## Arizor

Shawnb99 said:


> Lol, just the one. I use it mostly for gaming. Fit as much as I could in my case and went all push/pull. I keep the fans at around 600rpm or so and have a 0.4 delta normally. I need to remount my CPU block and such, so I never went all out yet; will do that once I get my Optimus block.


Amazing. We must see pics if you have any.



J7SC said:


> ...I'm in the software-related field, and all my systems are a business expense for 'productivity investments' (that's my excuse, and I'm sticking to it  ).


Haha yep me too. I make and teach games so every time the partner and kid receive a delivery I'm just like "errrr... for my work  I assure you".


----------



## Shawnb99

KedarWolf said:


> What case do you use?


Caselabs TH10 with a pedestal


----------



## Arizor

Just looked that case up. Jesus. Christ.


----------



## J7SC

Arizor said:


> (...)
> Haha yep me too. I make and teach games so every time the partner and kid receive a delivery I'm just like "errrr... for my work  I assure you".


...as long as I don't try to write off a case of fine wine with my computer purchases, the accountants leave me alone 😎 

...as to fans, I had some from work in my home office, but ended up replacing them all recently; the old ones were 5k rpm Sunon server fans... _'can you hear me now?'_ ...they made my GentleTyphoon 3Ks sound whisper-quiet.


----------



## Bal3Wolf

J7SC said:


> ...I'm in the software-related field, and all my systems are a business expense for 'productivity investments' (that's my excuse, and I'm sticking to it  ).
> 
> @Bal3Wolf 5950X can be in the high 16k low 17k range for CPU in TimeSpy...could your score issue be related to C and P states ?
> 
> View attachment 2519828


Tried with C-states off and my CPU score went lower lol. P-states wouldn't affect me if I'm doing a CCD overclock, right?


----------



## J7SC

Bal3Wolf said:


> tried with c states off my cpu went lower lol p-states wouldnt affect me if im doing a ccd overclock right ?


...believe it or not, I've never bothered with CCD, just 'old school all-core' mixed in with the dynamic OC on the Asus Crosshair Dark.

I would save your current profile in the BIOS and just try all-core for Time Spy (even on a non-Dark mobo); it positively affects not only the CPU score but the GPU score as well, at least on my setup. Depending on your cooling setup, you should be able to hit 4650 - 4800 all-core.


Spoiler


----------



## gfunkernaught

Arizor said:


> Thanks @J7SC , good tip, soft tubing so can do that quite easily, I'll just do an upside-down over the sink then rest on a towel.


You could also use a bicycle tire pump if you have one. Since I only have one drain point, not all the water comes out, so I hand-pump the rest.


----------



## gfunkernaught

Here is my Time Spy run. I could probably do better, though not near 21k; I'd need a better CPU for that.








I scored 20 006 in Time Spy


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## Arizor

gfunkernaught said:


> you could also use a bicycle tire pump if you have one. Since I only have one drain point, not all the water comes out so I hand pump the rest.


Good idea, I've got a pump, and also an ESD blower for dusting, though I thought that might be a bit too powerful...!


----------



## Bal3Wolf

J7SC said:


> ...believe it or not, I've never bothered with CCD, just 'old school all-core' mixed in with the dynamic oc on the Asus Crosshair Dark.
> 
> I would save your current profile in bios and just try all-core for TimeSpy (even on a non-dark mobo) - it does positively affect not only the CPU score but the GPU score as well, at least on my setup. Depending on your cooling setup, you should be able to hit 4650 - 4800 all-c
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2519835


Worth a try. I can do 4.75GHz all-core no problem; I usually run 4.775/4.7. My CCD2 is a little weak; CCD1 can do 4.8GHz, but then CCD2 needs a ton of vcore to run that. I have tried 4.5GHz all-core in the past, plus CCD and PBO overclocks, and my CPU score is always down for some odd reason. I even tested 4000 memory and 2000 uncore. So far my best results come from my 4.775/4.7GHz CCD overclock. It's like this CPU hates 3DMark lol.

4.75GHz all-core: 15 499








Result not found







www.3dmark.com





SMT off + PBO: 16 083








Result not found







www.3dmark.com


----------



## dangerSK

Arizor said:


> Thanks @J7SC , good tip, soft tubing so can do that quite easily, I'll just do an upside-down over the sink then rest on a towel.


draining from rads worked for me the best since they were further away from components


----------



## J7SC

Bal3Wolf said:


> Worth a try. I can do 4.75GHz all-core no problem; I usually run 4.775/4.7. My CCD2 is a little weak: CCD1 can do 4.8GHz, but then CCD2 needs a ton of vcore to run that. I have tried all-core 4.5 in the past; with CCD and PBO overclocks my CPU score is always down for some odd reason, even after testing 4000 memory and 2000 uncore. So far my best results come from my 4.775/4.7GHz CCD overclock. It's like this CPU hates 3DMark lol.
> 
> 4.75Ghz all core 15 499
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> smt off pbo 16 083
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com


...weird, my setup seems to prefer all-c. Mind you, that is with the Asus CH8 Dark set to 4750, with crossover on dynamic OC at something like 55. AFAIR, the 17006 CPU score in TS was either 4750 all-c or 4800 all-c (SMT on). I use this system for work as well, so SMT stays on; my daily is 4650 all-c plus dynamic OC at 45.

I should add that I run a different RAM config than you: 4 sticks.


----------



## Bal3Wolf

J7SC said:


> ...weird, my setup seems to prefer all-c - mind you that is with the Asus CH8 Dark set to 4750 w/ crossover on dynamic oc at s.th like 55. AFAIR, the 17006 CPU score in TS was either 4750 all-c or 4800 all-c (SMT on). I use this system for work as well so SMT stays on, with my daily at 4650 all-c plus dynamic oc at 45.
> 
> I should add that I run a different RAM config than you > 4 sticks


Close to my best run, but I found a fix: using Process Lasso and disabling SMT got me my best CPU score ever, 17 239. I wasn't pushing my benchmark clocks on the GPU, just my standard game clocks. What makes no sense, though, is that I tried the same thing via BIOS for PBO and all-core/CCD and it didn't help; somehow it does with Process Lasso.








I scored 21 113 in Time Spy


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## J7SC

Bal3Wolf said:


> Close to my best run, but I found a fix: using Process Lasso and disabling SMT got me my best CPU score ever, 17 239. I wasn't pushing my benchmark clocks on the GPU, just my standard game clocks. What makes no sense, though, is that I tried the same thing via BIOS for PBO and all-core/CCD and it didn't help; somehow it does with Process Lasso.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 113 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


I've used Process Lasso on another machine for FS2020, but not benchies (as mentioned, this is also a work machine, so I won't push too hard in benchies). What does the fact that Process Lasso helps your TS CPU score tell us, though? Some other processes stealing cycles?


----------



## Bal3Wolf

J7SC said:


> I've used process lasso on another machine for FS2020 but not benchies (as mentioned also a work machine so I won't push too hard in benchies). The fact that process lasso helps with your TS CPU says what though - some other processes stealing cycles ?


No, I don't think so, because setting it to High priority alone didn't fix it; I had to turn off SMT using Process Lasso, and then it seems to work right, for some odd reason. When I was googling, I came across others with my issue, and one thing that kept coming up is that Time Spy can only use 10 cores max. So before I gave up for the night I decided to try Process Lasso, and was surprised it worked.

best score ever so far.








I scored 21 578 in Time Spy


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com
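For anyone who wants to script the Process Lasso trick described above, here is a minimal sketch of the idea: restrict one process to a single logical CPU per physical core, leaving the rest of the system untouched (which is exactly how it differs from a BIOS SMT toggle). Process Lasso itself is Windows software; this uses the Linux stdlib call for illustration (on Windows, `psutil.Process.cpu_affinity` does the same job). It assumes the common enumeration where even-numbered logical CPUs map to one thread per physical core; verify that on your own machine.

```python
import os

# Emulate a per-process "SMT off": pin the given process to even-numbered
# logical CPUs only. Assumption: even logical CPUs are one SMT thread per
# physical core (usual on Linux and Windows, but worth checking).
def pin_to_physical_cores(pid: int) -> set[int]:
    all_cpus = sorted(os.sched_getaffinity(pid))
    # Fall back to the first allowed CPU if, unusually, none are even-numbered
    physical = {cpu for cpu in all_cpus if cpu % 2 == 0} or {all_cpus[0]}
    os.sched_setaffinity(pid, physical)  # Linux; psutil covers Windows
    return os.sched_getaffinity(pid)
```

You would point this at the 3DMark process (PID 0 means the calling process) before starting the run. Everything else on the system keeps its full affinity, which may be why it behaves differently from flipping SMT in the BIOS.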


----------



## J7SC

Bal3Wolf said:


> No, I don't think so, because setting it to High priority alone didn't fix it; I had to turn off SMT using Process Lasso, and then it seems to work right, for some odd reason. When I was googling, I came across others with my issue, and one thing that kept coming up is that Time Spy can only use 10 cores max. So before I gave up for the night I decided to try Process Lasso, and was surprised it worked.


...but turning off SMT in the BIOS (instead of, or in addition to, Process Lasso doing it) does not yield the same results? Odd indeed. Sorry if I sound a bit slow/confused, getting tired.


----------



## Bal3Wolf

J7SC said:


> ...but turning off SMT in bios (instead of or in addition to process lasso doing it) does not yield the same results ? Odd indeed. Sorry if I sound a bit slow / confused, getting tired


Yeah, I know, it doesn't make any sense to me either lol. Very weird bug I have with this CPU lol.


----------



## Nizzen

I scored 21 882 in Time Spy


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com




OK CPU score.

PS: CPU PBO, on 30°C water with tweaked memory.


----------



## dante`afk

Does anyone have comparisons of a front-cover GPU block vs. full cover (active backplate), with all relevant temps?


----------



## Lobstar

dante`afk said:


> does anyone have comparisons of a front cover gpu block vs full cover (active backplate) and all relevant temps?


I think LTT did a video.


----------



## GRABibus

Bal3Wolf said:


> Tried with C-states off, my CPU score went lower lol. P-states wouldn't affect me if I'm doing a CCD overclock, right?


What are your CPU OC settings ?


----------



## Nizzen

dante`afk said:


> does anyone have comparisons of a front cover gpu block vs full cover (active backplate) and all relevant temps?


I don't have a good comparison, but I can say it's worth it. The "cheap" Bykski does a fantastic job, and so does EK. With 30°C water I got 25-30°C colder VRAM and maybe 5-10°C colder core.


----------



## radrok

I agree. I got a Bykski FTW3 3090 waterblock + active backplate and have never seen over 60°C on VRAM or 50°C on core (500W BIOS).

Getting 15k+ in Port Royal


----------



## Bal3Wolf

GRABibus said:


> What are your CPU OC settings ?


My daily OC is a CCD overclock of 4.775/4.7. I need a manual OC because I have 80-90% loads quite often. I don't really have any issues, games still run well, and I have my fix for Time Spy now.


----------



## GRABibus

Bal3Wolf said:


> My daily oc is a ccd overclock of 4.775/4.7 i need a manual oc cause i have 80-90% loads quite often i dont really have any issues games still run good and i have my fix for timespy now.


Ok.
Surely, with PBO and CO tweaking, you would get a higher score than with a static OC.

but now you have a fix 😊


----------



## Bal3Wolf

GRABibus said:


> Ok.
> Surely, with a PBO and CO tweaking, you would get a higher score than static OC.
> 
> but now you have a fix 😊


Maybe, but I need an all-core/CCD overclock for other software. Losing a little single-thread doesn't bother me, and now I know what to do if CPU scores are lacking.


----------



## Lobstar

GRABibus said:


> Ok.
> Surely, with a PBO and CO tweaking, you would get a higher score than static OC.
> 
> but now you have a fix 😊


I haven't had that experience with my 5950x.


----------



## GRABibus

Lobstar said:


> I haven't had that experience with my 5950x.


If you have time, try your best static OC and your best PBO/CO OC.
In Time Spy, PBO/CO should give you the best results.


----------



## GRABibus

Did any of you try Rebar in Time Spy?

Definitely a no-go for me.

With Rebar versus no Rebar:
=> +200 graphics score (I score 22695 pts on air!)
=> -2000 pts on CPU score!!! (13700 instead of 15700)

Result: the overall score with Rebar on is lower than with it off.


----------



## Bal3Wolf

GRABibus said:


> Did any of you try Rebar in Time Spy?
> 
> Definitely a no go for me.
> 
> +200 graphics score (I score 22695pts on air !)
> - 2000pts on CPU Score !!!
> 
> Result, global score is lower than with Rebar "off".


Same here, my Time Spy CPU score tanks with Rebar. You can use it with Port Royal though.


----------



## GRABibus

Bal3Wolf said:


> Same timespy cpu score tanks with rebar you can use it with port royal tho.


Do you have the same Time Spy CPU score with and without Rebar with a static OC?


----------



## gfunkernaught

@J7SC what are the dimensions of the screws that you used to mount your fans to your CL480?


----------



## KedarWolf

GRABibus said:


> If you have time, try your best static OC and your best PBO/CO OC.
> In Time Spy, PBO/CO should give you best results


For some reason, a static overclock scores quite a bit better in CB20 and CB23, as well as TimeSpy CPU score.

And my Curve is at Scaler 6, 150 boost, -11 -17 two top cores, -30 the rest of the cores.

Still, a static overclock of 4.7GHz first CCD, 4.65GHz second CCD on my 5950x scores around 13300 in CB20, I get maybe 11750 with my Curve.


----------



## Bal3Wolf

GRABibus said:


> you have same timespy cpu score with and without rebar with static OC ?


Yeah I do; those are older runs, but you can see the CPU scores. Another person had the same issue. One thing we did notice is that we both had Asus boards; a guy with an MSI board didn't have the same issue.

rebar off 16 148 








I scored 21 255 in Time Spy


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





rebar on 13 955 








I scored 20 734 in Time Spy


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## GRABibus

Bal3Wolf said:


> yea i do older ones but you can see the cpu scores another person had same issue one thing we did notice we both had asus boards a guy with a msi didnt have the same issue.
> 
> rebar off 16 148
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 255 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> rebar on 13 955
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 734 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com




And from my side :

*Rebar on with static OC @ 4.7GHz :*








I scored 20 702 in Time Spy


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





*Rebar off with static OC @ 4.7GHz :*
I scored 21 230 in Time Spy

=> My best run ever with my Strix on air


With Rebar 
GPU score is high => 22733 !
CPU score is low => 13744 !

Without Rebar :
GPU score is good => 22549 !
CPU score is good => 15949 !
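The asymmetry above falls out of how the overall score is computed: Time Spy's overall result is a weighted harmonic mean of the graphics and CPU sub-scores (weights of 0.85 and 0.15, per UL's 3DMark technical guide), and a harmonic mean punishes the big CPU drop far more than the small graphics gain helps. A quick sanity check against the numbers above:

```python
# Time Spy overall score: weighted harmonic mean of the two sub-scores
# (weights per UL's 3DMark technical guide).
def timespy_overall(gpu_score: float, cpu_score: float) -> float:
    w_gpu, w_cpu = 0.85, 0.15
    return (w_gpu + w_cpu) / (w_gpu / gpu_score + w_cpu / cpu_score)

print(round(timespy_overall(22733, 13744)))  # Rebar on  -> 20702
print(round(timespy_overall(22549, 15949)))  # Rebar off -> 21231
```

The first result matches the reported 20 702 exactly, and the second lands within a point of the reported 21 230 (3DMark presumably rounds slightly differently).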


----------



## GRABibus

Bal3Wolf said:


> yea i do older ones but you can see the cpu scores another person had same issue one thing we did notice we both had asus boards a guy with a msi didnt have the same issue.
> 
> rebar off 16 148
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 255 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> rebar on 13 955
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 734 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


What's your BIOS? I assume it is not the KP XOC 1000W.
I have a better graphics score than you, with a 74°C average on air.


----------



## Nizzen

KedarWolf said:


> For some reason, a static overclock scores quite a bit better in CB20 and CB23, as well as TimeSpy CPU score.
> 
> And my Curve is at Scaler 6, 150 boost, -11 -17 two top cores, -30 the rest of the cores.
> 
> Still, a static overclock of 4.7GHz first CCD, 4.65GHz second CCD on my 5950x scores around 13300 in CB20, I get maybe 11750 with my Curve.


What is your TS CPU score with HT off?


----------



## J7SC

gfunkernaught said:


> @J7SC what are the dimensions of the screws that you used to mount your fans to your CL480?


...for the regular-thickness 120mm fans, the upper ones in the pic below...the lower ones also work with a washer...

...pro tip: use a beer coaster to slide on top of the rad core and below where the fan screws mount, take the first fan and mount it not on either side but 'one over' so that you can see the screw going through the fan housing into the rad shell (not the rad core  )...if the beer coaster shows a screw imprint, too bad. If it doesn't, use beer coaster for a nice cold one !


----------



## yzonker

J7SC said:


> ...for the regular-thickness 120mm fans, the upper ones in the pic below...the lower ones also work with a washer...
> 
> ...pro tip: use a beer coaster to slide on top of the rad core and below where the fan screws mount, take the first fan and mount it not on either side but 'one over' so that you can see the screw going through the fan housing into the rad shell (not the rad core  )...if the beer coaster shows a screw imprint, too bad. If it doesn't, use beer coaster for a nice cold one !
> 
> View attachment 2519934


I used these for my CL480 (along with some others I already had):









Amazon.com: Replacement EximoSX Radiator Screw Pack - Quad Radiator : Automotive





www.amazon.com





Arctic P12 fans.


----------



## yzonker

Bal3Wolf said:


> yea i do older ones but you can see the cpu scores another person had same issue one thing we did notice we both had asus boards a guy with a msi didnt have the same issue.
> 
> rebar off 16 148
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 255 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> rebar on 13 955
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 734 in Time Spy
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


If you are talking [H] Forum, that was my 3080ti. I need to try it on my 3090 with a Gigabyte X570 Ultra. Interestingly, TSE does not have this problem. Only TS.


----------



## Bal3Wolf

yzonker said:


> If you are talking [H] Forum, that was my 3080ti. I need to try it on my 3090 with a Gigabyte X570 Ultra. Interestingly, TSE does not have this problem. Only TS.


I should test that now with Process Lasso disabling SMT, and see if it drops the score again later.


----------



## PLATOON TEKK

J7SC said:


> Thanks much ! I checked, and so far, so dry  but will keep an eye on it. That said, every new CPU and every new GPU block gets the screw-tightening check as years ago, I had brand new EKWB GPU blocks mounted on two GPUs, one of which started to leak at the top part where the Acetal with the G1/4 threads sit during leak testing...on both those blocks, a total of 4 out of 6 screws were not tight...
> 
> Per spoiler, some of the 3090 EKWB issues. First, 3 of 5 RGB lights stopped working after about 10 days or so...not the end of the world, but then if it has RGB and one paid for it, it shouldn't go bust like that. Next, the 'Nickel' back plate starting shedding BIG CHUNKS of Nickel coating before I had even mounted it...the included thermal pads were enough to make that happen. Finally, some pics of the 'machine head' imprint (see orange circle)
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2519705
> 
> 
> View attachment 2519706
> 
> 
> View attachment 2519707


Wow, that’s absolute madness. I knew EKWB dropped the ball, but it seems they outright popped and ate it. That’s terrible; avoid at all costs at this point, I guess. Thanks for sharing pics.

Regarding tightening the screws, I think you are right; it should just be habit no matter where the block is from.




Arizor said:


> Dammit lads, you're not doing my wallet any favours. After @PLATOON TEKK 's great photos and the worries around EKWB, plus my own concern I'm not getting the best performance from my EKWB block, I've went and ordered the Optimus block for my Strix, plus a new D5 pump to replace my "beginners" pump from the EKWB starter kit. This is a slippery slope


Haha, welcome to the world of why our wives hate us lol. I fully support said purchases! The added pump is always good and will help temps and keep the liquid above 2 flow.

Honestly, this has been my first Optimus GPU block purchase (I'd only tried their CPU blocks) and it feels like the hype is real. It’s definitely worth the money in my opinion. The ONLY thing you’re missing now is an EVC2SX and the little LCD display (makes things easier). You can install it during the GPU block install. You also don’t have to solder; I chose not to. That way you can add voltage to that **** and really spin the block!


----------



## Lobstar

KedarWolf said:


> For some reason, a static overclock scores quite a bit better in CB20 and CB23, as well as TimeSpy CPU score.
> 
> And my Curve is at Scaler 6, 150 boost, -11 -17 two top cores, -30 the rest of the cores.
> 
> Still, a static overclock of 4.7GHz first CCD, 4.65GHz second CCD on my 5950x scores around 13300 in CB20, I get maybe 11750 with my Curve.


Damn dude, did you steal my results lol. Any chance you have 4 sticks of single-rank, like 4x8GB? I do, and I'm pretty sure this is why none of the usual tricks work on my system.


----------



## KedarWolf

Lobstar said:


> Damn dude, did you steal my results lol. Any chance you have 4 sticks of single rank like 4x8gb? I do and I'm pretty sure this is why none of the usual tricks work on my system.


I run 2x16GB at CL14 timings.


----------



## Arizor

PLATOON TEKK said:


> Wow, that’s absolute madness. I knew ekwb dropped the ball, but it seems they outright popped and ate it. That’s terrible, avoid at all costs at this point I guess, thanks for sharing pics.
> 
> In regards to tightening the screws, I think you are right, should just be habit no matter where the block is from.
> 
> 
> 
> 
> haha welcome to the world of why our wives hate us lol. I fully support said purchases! The added pump is always good and will help temps and keep the liquid above 2 flow.
> 
> Honestly, this has been my first Optimus GPU block purchase (only tried cpus) and it feels the hype is real. It’s definitely worth the money in my opinion. The ONLY thing you’re missing now is a:
> 
> EVC2SX and the little lcd display (makes things easier). You can install it during gpu block install. You also don’t have to solder, i chose not to. That way you can add voltage to that **** and really spin the block!


Haha at this point we're all just enabling one another, cheers!😄


----------



## gfunkernaught

@J7SC
Those screws seem like the standard length and diameter, which I have some extras of. I was more worried about the thread pitch, which I'm assuming is also some kind of standard.

@yzonker
Cool thanks!


----------



## tps3443



Shawnb99 said:


> Caselabs TH10 with a pedestal


Are these cases still available? I am sick of standard cases nowadays. All that’s available is the usual 2-3 radiator support options. I’ve even retrofitted my current case to hold three 360s. I can’t really find anything larger that I like. I might even have to go with a test bench or something.


----------



## tps3443

I flashed the 3090 Kingpin Hydro Copper BIOS onto my 3090 Kingpin Hybrid. Now the RGB control works properly, and no more auto fan curves kicking in all the time. And of course, my card thinks it actually is a KP Hydro Copper now! lol.

After I flashed the BIOS, PX1 then updated some sort of MCU firmware on my card? That’s what it said, anyway. It’s a legit Hydro Copper KP now.

Not gonna lie, the bone-stock settings on this card are actually excellent for gaming. No increased voltages, no increased power limits; voltage is stock at +0.

Cyberpunk 2077 boosts right to 2,025MHz on default settings, which is really sufficient for gaming.


Also, I did some digging on the Kingpin Hydro Copper waterblock. Apparently the KP HC block has 5 W/mK thermal pads pre-applied from the factory (front and back). These thermal pads have a Shore hardness of 50.

So not only do they have fairly low thermal transfer, they are also very, very stiff, similar to Fujipoly XR Extreme pads in stiffness (and Fujipoly offers a maximum of 1.5mm because of this). These KP HC pads are also 2.25mm thick. Last time I looked at my 3090 KP sideways, I could see the memory thermal pads sticking out past the GPU. Considering how firm they are, I don’t think that would offer proper contact to the die under a heavy load.

While I would love to get the Optimus block for $598, this card is really a beast right where it sits. With all that said, I’m gonna give it another chance with a full thermal pad replacement, using softer, more heat-conductive pads.

Just figured I’d share this with anyone who may be running one of these blocks.


----------



## geriatricpollywog

tps3443 said:


> I flashed the 3090 Kingpin Hydro Copper bios on to my 3090 Kingpin Hybrid. Now the RGB control works properly, and no more auto fan curves going all the time anymore. And of course, my card thinks it actually is a KP Hydrocopper now! lol.
> 
> After I flashed the bios on, PX1 then updated some sort of MCU firmware on my card? That’s what it said anyways. Anyways, it’s a legit Hydro copper KP now. So when I go to request the 1KW RBar bios from Vince I’m going to specify for the Hydro copper version. PX1 had all sorts of glitches when it thought my card was a hybrid.
> 
> Not gonna lie, the bone stock settings on this card are actually excellent for gaming.. No increased voltages, no increased power limits. Voltages the stock too at +0, and power limits are stock.
> 
> Cyberpunk 2077 boosting right to 2,040Mhz on default settings which is really sufficient.
> 
> 
> Also, I did some digging on the Kingpin Hydro-Copper waterblock. And apparently the KP HC block has 5 W/mK thermal pads pre-applied from the factory for the (front and back). These thermal pads have a shore scale hardness of 50.
> 
> ^ So not only do they have a fairly low
> thermal transfer, but they are also very very stiff. Similar to Fujipoly XR extreme thermal pad stiffness. (And Fujipoly offers a maximum of 1.5MM because of this) These KP HC thermal pads are also 2.25MM thick. Last time I looked at my 3090KP sideways, I could see the memory thermal pads sticking past GPU. Considering how firm they’re, I don’t think that would offer proper contact to the die under a heavy load.
> 
> While I would love to get the Optimus block for $598. This card is really a beast right how it sits, and with all that being said, I’m gonna give it another chance with a full thermal pad replacement with some softer more heat conductive pads.
> 
> Just figured I’d share this with anyone who may be running one of these blocks.
> 
> 
> 
> 
> 
> post image


This is actually very helpful since I am installing my Hydrocopper block this weekend! I am not interested in changing the VRM pads, but I am considering upgrading the memory pads.

Can you explain what this means, and do you have pictures: "Last time I looked at my 3090KP sideways, I could see the memory thermal pads sticking past GPU."

Are the 2.25mm pads too thick or too thin? Is your PCB flexing?


----------



## Shawnb99

tps3443 said:


> Are these cases still available? I am sick of a standard case nowadays. All that’s available is the usually 2 to 3 radiator support options. I’ve even retro fitted my current case to hold (3) 360’s. I can’t really find anything larger that I like. Might even have to go like test bench or something.


Sadly the company went out of business years ago. You can still find them on eBay and such but expect to pay through the ass for one.

I currently have a pair of 560s in the pedestal, plus three 480s and a pair of 360s in the main case.


----------



## tps3443

0451 said:


> This is actually very helpful since I am installing my Hydrocopper block this weekend! I am not interested in changing the VRM pads, but I am considering upgrading the memory pads.
> 
> Can you explain what this means, and do you have pictures: "Last time I looked at my 3090KP sideways, I could see the memory thermal pads sticking past GPU."
> 
> Are the 2.25mm pads too thick or too thin? Is your PCB flexing?


No board flexing at all, and I was looking for it constantly too. The block literally fits excellently! The actual contact area for the GPU die and memory modules is flat on the waterblock, so it doesn’t press harder in one area than another. It’s just leaving a 0.5-1.0mm gap between the die and the block. This gap is filled with thermal paste, but the end result is higher GPU temps.

I have found a few people who have managed sub-40°C load temps on this HC waterblock with the standard pads and undervolting at 2,055MHz. I’ve also seen someone manage 43°C at 500+ watts. And I saw another guy who did a full back-PCB thermal pad: he used a cheap, super-soft 1.5mm blue 6-8 W/mK pad to cover the entire back of the card’s board, then remounted the backplate. He swapped the front pads for higher-quality 1mm pads, and he swears up and down he is at 42°C at sub-650 watts of power. However, the few people who have managed to mount and tune this HC block properly don’t include half the information you need when they share their amazingly fantastic results on forums lol. So you’re really guessing as to how they did it.

From what I understand, the Optimus block uses the same thermal pad thickness for the back of the card as the Hydro Copper block does, but the Optimus uses 0.5mm Fujipoly for the front? That’s a pretty extreme difference, going from 2.25mm to 0.5mm. When I was inspecting my card sideways, holding some 0.5mm copper shims for reference, it looked like 0.5-1.0mm would be perfect to give the GPU excellent contact to the block with firm pads like the Fujipoly XRs.

I just think the pre-installed thermal pads are too firm. That firmness, combined with their thickness, isn’t going to give the squish we need for good die-to-block contact when you tighten in a cross pattern. The memory temps, though, are ridiculously good lol.
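A back-of-the-envelope way to see how thick, stiff pads can hold a block off the die. Only the 2.25 mm pad thickness comes from the post above; the compression fraction and the die-vs-memory height are made-up illustrative values, not measurements:

```python
# Illustrative check: do stiff memory pads bottom out before the block
# reaches the die? All numbers except the 2.25 mm pad thickness are
# assumptions for the sake of the example.
def die_gap_mm(pad_thickness: float, max_compression: float,
               die_above_mem: float) -> float:
    """Gap left between block and die once the memory pads stop compressing.

    pad_thickness:   uncompressed memory-pad thickness (mm)
    max_compression: fraction a stiff pad will compress before bottoming out
    die_above_mem:   how much higher the die surface sits than the memory (mm)
    """
    compressed = pad_thickness * (1.0 - max_compression)
    return max(0.0, compressed - die_above_mem)

# Stiff 2.25 mm pads that only squish ~25%, die assumed ~1.2 mm proud of
# the memory packages:
print(round(die_gap_mm(2.25, 0.25, 1.2), 2))  # -> 0.49 (mm)
```

With those assumed numbers the pads bottom out before the block reaches the die, leaving a paste-filled gap roughly in the 0.5-1.0mm range described above; softer or thinner pads shrink the gap to zero.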

My memory hotspot is 58°C after playing Cyberpunk 2077 for an hour. Ambient temp is 25.6°C right now.

One last thing: the installation manual specifies that you should use the left port as the inlet and the right port as the outlet, although the block actually has arrows on it that directly contradict this. The arrows on the block’s acrylic indicate the right port is the inlet and the left port is the outlet. Not that it matters much, but I’m gonna test it out and reverse the flow. The channels are shaped differently, so who really knows?




These photos demonstrate what I mean by the flow direction confusion.

Once I get a little more time, I’m going to go through this card and block really thoroughly again and re-mount everything just one more time. Hopefully I can get it perfect.


----------



## geriatricpollywog

tps3443 said:


> No board flexing at all, and I was looking for it constantly too. The block literally fits excellent! The actual contact area for the GPU die and memory modules is flat on the waterblock, so it doesn’t press harder in one area over the other. It’s just leaving a 0.5-1.0MM gap between the die and the block. This gap is filled with thermal paste. But the end result is higher GPU temps. I have found a few people who have managed sub 38-39C load on this block with the standard pads with undervolting at 2,055Mhz. I’ve also seen someone manage 43C at 500+ watts. Another guy did a full back PCB thermal pad, he literally used a cheap and super soft 1.5MM blue 6w/k thermal pad, he then covered the entire back of the cards board and remounted the back plate. He swapped the front pads with 1.0MM and he swears up and down he is at 42C with sub 650watts of power. However, almost everyone who has tried to mount and tune this HC block properly doesn’t include half the information you need lol.
> 
> From what I understand the Optimus block uses the same thermal pad thickness for the back of the card as the Hydro copper block does, but the Optimus is using a 0.5MM fujipoly for the front? That’s a pretty extreme difference going from 2.25mm to 0.5mm And when I was inspecting my card sideways holding some 0.5mm copper shims for reference, It looks like 0.5-1.0MM would be perfect. (Firm pads like Fujipoly XR)
> 
> I just think that the pre-installed thermal pads are too firm. And this firmness combined with their thickness isn’t going to offer that squish we need for a good die to block contact when you go for tightening in cross pattern. Now the memory temps are ridiculously good lol.
> 
> My Memory hotspot is 58C while playing cyberpunk 2077 for an hour. Ambient temp is 25.6C right now.
> 
> One last thing. The installation manual specifies that you should you the left port for an inlet and right port for outlet. Although, the block actually has arrows on it that directly contradict this. The arrows right on the blocks acrylic indicate the right port is the inlet, and left port is outlet.. Not that it matters much. But I’m gonna test it out and reverse the flow. The channels are shaped differently, so who really knows?
> 
> 
> 
> 
> These photos demonstrate what I mean by flow direction confusion.
> 
> Once I get a little bit more time, I’m going to go through this card really well, and re-mount the block just one more time.


Thanks for the informative reply. ANY gap between the block and the die is a big NOPE for me. I may test-fit using the provided pads and, if there’s a gap, try pads of varying thickness until it’s perfect. Are the memory pads causing the gap, the VRM pads, or both? The HC block comes with extra pads in addition to the pre-installed ones; do you know what those are for?

As for the flow direction, I’ll be making sure the water flows into the top of the fins, instructions be damned.


----------



## yzonker

gfunkernaught said:


> @J7SC
> Those screws seem like the standard length and diameter, which I have some extras of. I was more worried about the thread pitch, which I'm assuming is also of some kind of standard.
> 
> @yzonker
> Cool thanks!


Yeah, they're M3. It is annoying that so many different sizes are used by different manufacturers. I have 3 rads and I think they are all different: the CL480 is M3, the EK is 6-32, and IIRC the Corsair is M4.


----------



## tps3443



0451 said:


> Thanks for the informative reply. ANY gap between the block and the die is a big NOPE for me. I may test fit using the provided pads, and if there’s a gap, try pads of varying thickness until it’s perfect. Are the memory pads causing the gap, the VRM pads, or both? The HC block comes with extra pads in addition to the pre-installed ones. Do you know what these are for?
> 
> As for the flow direction, I’ll be making sure the water flows into the top of the fins, instructions be damned.


The included thermal pads are for the back of the card, under the back plate. They are the exact same size and style as the front thermal pads. When you remove your backplate, you’ll more than likely ruin your pads on there. So they include a fresh set in the box (These seem to contact well) 

Also, removing the back plate is the most difficult task ever. You seriously feel like your gonna damage your GPU lol. It’s just not natural. I have never experienced anything like this in all of the dozens and dozens of GPU’s I have owned over the years. I was about ready to give up, because It felt as if I was going to pull my memory modules off with the backplate. After all of the screws are removed, it takes about the same amount of force as ripping the sole of a shoe off lol. I recommend a hair dryer 100%! I didn’t have one.

All of the other components are good on the front and back of the card. It’s just that the memory modules on the front and die contact to the block you need to check carefully during assembly. The mosfets press in to their pads very very easily. And the VRM’s use the included inductor paste which is super gritty feeling, but spreads easily and works amazing. I would draw a lower case t on each VRM for best contact instead of just a straight line like in the manual.

Also, for the mosfets the preinstalled thermal pads are great, although I would remove both of the used long skinny mosfet thermal pad strips from your Hybrid setup and install them behind the mosfets on the opposite/back side of the card's PCB. This drastically helped lower the power temp readings on my PX1 sensors. Then the mosfets will have thermal pads on both sides of the board. And it doesn't cause any problems with the backplate either. The mosfet thermal pad is identical on the Hybrid and Hydro Copper.


My card's temps are not bad; I'm just after better GPU temps. I can maintain 42-44C on the GPU with an undervolt using PX1 at 2,055MHz on the LN2 520W BIOS. The BIOS does supply more FBVDD memory voltage and additional MSVDD voltage too.

Now if I just go for a full-on 2160MHz sustained in Cyberpunk 2077, with my memory at 23GHz and full voltages, then my GPU will hit 53C-55C with over 500 watts of sustained power consumption. This isn't all that terrible though. I only have (1) 360x20MM and (2) 360x30MM radiators and a single D5, with 24.5-25C ambients. After the block re-mount I got better memory temps and better power delivery temps for sure though. I think GPU temps seem to be about the same. And when I first removed the block during the remount, I saw a tiny ballpoint-pen-sized thermal paste bald spot in the center of the GPU die, where the block was contacting the GPU itself. So it was barely touching it. And I think the 2nd mount is only slightly better.

I will probably try some soft 1.5MM thick pads which will more than likely be perfect.


----------



## OrionBG

Nizzen said:


> I don't have a good compare, but I can say it's worth it. "Cheap" Bykski does a fantastic job, so does EK. With 30c water I got 25-30c colder vram and maybe 5-10 colder core.





radrok said:


> I agree, got a Bykski FTW3 3090 wb + active backplate never seen over 60c on VRAM and 50c on core (500W BIOS)
> 
> Getting 15k+ in Port Royal


Guys, I see you use the Bykski block/active backplate. Did you use the original thermal pads that came with them, or did you replace them with others?
My 3090 FE, unfortunately, had an accident and now needs an HDMI port replacement, but the repair shop will also shunt mod it for me, so I'll see for myself how the Bykski blocks handle ~700W. 2x 360mm EK PE rads + 1x 480mm Alphacool 60mm-thick rad, all in push/pull, should keep the card cool enough.


----------



## GRABibus

gfunkernaught said:


> Was testing out the 1kw rbar bios again in gaming, Titanfall 2 144fps, +150 core +1200 vram, PL set to 70%, gpu core temp hit 46c! That's a bit too warm for me, especially considering ambient temp is 25c. The core frequency was 2145-2160mhz. I know 46c isn't bad, but the rest of the board is probably much warmer, especially the back. I backed my settings down to +105 core and PL to 52%, tested out the Bright Memory bench on loop, and the temp settled at 44c. Unless you have excellent cooling, no, exceptional cooling, gaming on the 1kw bios uncapped in the summer is probably not a good idea for the health of the card. Unless someone can shed some light and prove me wrong. I'm all ears.



Some gameplay clips with my Strix on air and the KP 1000W ReBar BIOS, on which you can see my monitoring.
25°C ambient

*Cold War *:
2560x1440, 144Hz, Ultra settings, Ray Tracing Ultra, DLSS Quality
ReBar enabled

OC Settings :

*BFV : *
2560x1440, 144Hz, Ultra settings, Ray Tracing Ultra, No DLSS
ReBar enabled

OC Settings : Undervolting


----------



## des2k...

tps3443 said:


> I flashed the 3090 Kingpin Hydro Copper bios on to my 3090 Kingpin Hybrid. Now the RGB control works properly, and no more auto fan curves going all the time anymore. And of course, my card thinks it actually is a KP Hydrocopper now! lol.
> 
> After I flashed the bios on, PX1 then updated some sort of MCU firmware on my card? That’s what it said anyways. Anyways, it’s a legit Hydro copper KP now.
> 
> Not gonna lie, the bone stock settings on this card are actually excellent for gaming. No increased voltages, no increased power limits. Voltages are stock at +0, and power limits are stock.
> 
> Cyberpunk 2077 boosting right to 2,025Mhz on default settings which is really sufficient for gaming.
> 
> 
> Also, I did some digging on the Kingpin Hydro-Copper waterblock. And apparently the KP HC block has 5 W/mK thermal pads pre-applied from the factory for the front and back. These thermal pads have a Shore hardness of 50.
> 
> ^ So not only do they have fairly low thermal transfer, but they are also very, very stiff. Similar to Fujipoly XR Extreme thermal pad stiffness. (And Fujipoly offers a maximum of 1.5MM because of this.) These KP HC thermal pads are also 2.25MM thick. Last time I looked at my 3090 KP sideways, I could see the memory thermal pads sticking up past the GPU. Considering how firm they are, I don't think that would offer proper contact to the die under a heavy load.
> 
> While I would love to get the Optimus block for $598, this card is really a beast right as it sits. With all that being said, I'm gonna give it another chance with a full thermal pad replacement with some softer, more heat-conductive pads.
> 
> Just figured I’d share this with anyone who may be running one of these blocks.


That's weird that the vbios controls the RGB on some cards.

My Zotac RGB still works and can be controlled with non Zotac vbios. Same with the fans before the waterblock.


----------



## J7SC

yzonker said:


> Yea they're M3. It is annoying that so many different sizes are used by different manufacturers. I have 3 rads and I think they are all dffferent. CL480 is M3, EK is 6x32, and IIRC the Corsair is M4.


...with close to 100 120mm fans here - some of them for / from workstations and servers, so not all in one system - I've been chasing all manner of fan screws. Beyond thread pitch, the height of the fans also plays a role re. screws (at least 3 different heights for 120mm diameter fans in use). 'Typically' though, 120mm fans have a height of 25mm. While I have enough fan screws laying around, I also restocked not only from Amazon but also Home Depot...just take a fan screw you know is the right kind to a hardware store and 'match'. Quick fyi: the CL480 comes with appropriate screws that are easy to miss in the cardboard box, depending on which end you open first.

@tps3443 - if you can't find a good used CaseLabs, you might want to check out the TT Core P3, Core P5 or Core P8 series, depending on mobo size, cooling peripherals etc (there are also other versions). The TT Core P series can be configured as a 'test bench', as a 'wall mount' or a more or less normal standing 'case'.

I personally use a TT Core P5 and a Core P8. The former carries an E-ATX mobo, dual w-cooled GPUs and 5x XSPC 360/60 rads along with 4x D5s, after some mods with a Dremel and a drill (see 'Orca' below). The TT Core P8 is even bigger and is an ongoing build that has three mobos (two on the front and one on the back). I find the TT Core series to be the most versatile 'out of the box' and also very easy to mod for custom fitting. Plus absolutely no complaints about the quality.


----------



## yzonker

My CL480 came with 16 screws, but that's obviously only half that is needed for push/pull.


----------



## J7SC

yzonker said:


> My CL480 came with 16 screws, but that's obviously only half that is needed for push/pull.


...unless you only use 2 per fan (diagonally arranged) and some electrical tape around the rad / fan perimeters for 'air flow efficiency'...still, 4 screws per fan is obviously better than 2 per fan, though in my experience, 2x will work w/ above addition.


----------



## gfunkernaught

I measured the distance between the fins and frame of the cl480, it's 10mm, so using 30mm screws through a 25mm fan should be fine. I'm assuming the cl360 is the same. I found a pack of 60 silver colored screws on Amazon for $8. I need 68 screws, so I think I should have some extra laying around. Yes I've done the two per fan diagonal, but securing the fan with four is better I think.
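For anyone double-checking their own shopping list, the screw arithmetic above can be sketched out. The 10mm fin-to-frame clearance and 25mm fan height are the measurements quoted in this post; your rad may differ:

```python
# Rough fan-screw math for a push/pull radiator build.
# Measurements are the ones quoted above (CL480: 10mm frame-to-fin
# clearance, standard 25mm-thick 120mm fans); verify on your own rad.

def screws_needed(fans: int, screws_per_fan: int = 4) -> int:
    """Total screws for mounting every fan by all four corners."""
    return fans * screws_per_fan

def screw_fits(screw_len_mm: float, fan_height_mm: float, clearance_mm: float) -> bool:
    """True if the screw tip stops short of the radiator fins."""
    return screw_len_mm <= fan_height_mm + clearance_mm

# Example: one 480mm rad (4 fans) + one 360mm rad (3 fans),
# push/pull doubling the fan count.
fans = (4 + 3) * 2
print(screws_needed(fans))     # 56
print(screw_fits(30, 25, 10))  # True: 30mm screw through a 25mm fan is safe
```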


----------



## J7SC

...HomeDepot to the rescue ...


----------



## yzonker

I had 2 per fan but my OCD didn't allow me to leave it that way.


----------



## newls1

OrionBG said:


> Guys, I see you use the Bykski block/active backplate. Did you use the original thermal pads that came with them, or did you replace them with others?
> My 3090 FE, unfortunately, had an accident and now needs an HDMI port replacement, but the repair shop will also shunt mod it for me, so I'll see for myself how the Bykski blocks handle ~700W. 2x 360mm EK PE rads + 1x 480mm Alphacool 60mm-thick rad, all in push/pull, should keep the card cool enough.


I used what came with the block, temps are fine


----------



## gfunkernaught

J7SC said:


> ...HomeDepot to the rescue ...
> 
> View attachment 2520022


Home Depot's pricing per screw didn't seem right to me. It was one of the first places I looked, but Amazon beat their prices.


----------



## J7SC

gfunkernaught said:


> Home depot's pricing per screw didn't seem right to me. It was one of my first places to look but Amazon beat their prices.


...I found the right ones at Home Depot in those plastic bulk packages with a decent price. Then again, I also got two packs of these from Amazon.ca, coz you can never have enough fan screws in reserve


----------



## PLATOON TEKK

tps3443 said:


> I flashed the 3090 Kingpin Hydro Copper bios on to my 3090 Kingpin Hybrid. Now the RGB control works properly, and no more auto fan curves going all the time anymore. And of course, my card thinks it actually is a KP Hydrocopper now! lol.
> 
> After I flashed the bios on, PX1 then updated some sort of MCU firmware on my card? That’s what it said anyways. Anyways, it’s a legit Hydro copper KP now.
> 
> Not gonna lie, the bone stock settings on this card are actually excellent for gaming. No increased voltages, no increased power limits. Voltages are stock at +0, and power limits are stock.
> 
> Cyberpunk 2077 boosting right to 2,025Mhz on default settings which is really sufficient for gaming.
> 
> 
> Also, I did some digging on the Kingpin Hydro-Copper waterblock. And apparently the KP HC block has 5 W/mK thermal pads pre-applied from the factory for the front and back. These thermal pads have a Shore hardness of 50.
> 
> ^ So not only do they have fairly low thermal transfer, but they are also very, very stiff. Similar to Fujipoly XR Extreme thermal pad stiffness. (And Fujipoly offers a maximum of 1.5MM because of this.) These KP HC thermal pads are also 2.25MM thick. Last time I looked at my 3090 KP sideways, I could see the memory thermal pads sticking up past the GPU. Considering how firm they are, I don't think that would offer proper contact to the die under a heavy load.
> 
> While I would love to get the Optimus block for $598, this card is really a beast right as it sits. With all that being said, I'm gonna give it another chance with a full thermal pad replacement with some softer, more heat-conductive pads.
> 
> Just figured I’d share this with anyone who may be running one of these blocks.


Dope, much appreciated. Let us know your temps on the re-paste with the different pads, would be curious to see. I also had my doubts when using the pre-supplied pads but figured they know best, I guess not lol.

edit: when it comes to paste, I’d rather over-paste slightly than under.


----------



## Mr.Vegas

Guys, help.
What's the recommended BIOS with Resizable BAR for crossflashing a 3090 MSI Ventus OC?
Right now, after I updated my Ventus OC to a Resizable BAR BIOS, I can't push the power limit past 100%; on the OG BIOS it was 105% at least [or 103%].
Also, how much is a realistic VRAM overclock in MSI Afterburner? How far should I push it? Right now I'm at +100MHz.
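For context on what a given Afterburner offset means on a 3090: the spec sheet lists the GDDR6X at 1219 MHz / 19504 MHz effective. A rough conversion sketch follows; the assumption that Afterburner applies its offset to the 9752 MHz double-data-rate clock is mine, so verify against what GPU-Z reports on your card:

```python
# Convert an MSI Afterburner memory offset to an approximate effective
# data rate for a 3090 (GDDR6X, 1219 MHz base, 19504 MT/s effective).
# Assumption (mine, not vendor-documented): AB's offset is applied to
# the 9752 MHz DDR clock, so the effective rate moves by 2x the offset.

BASE_DDR_MHZ = 9752  # 1219 MHz memory clock x 8; effective rate is 2x this

def effective_rate_mts(offset_mhz: int) -> int:
    """Effective transfer rate in MT/s for a given AB offset."""
    return (BASE_DDR_MHZ + offset_mhz) * 2

print(effective_rate_mts(0))     # 19504 (stock, i.e. 19.5 Gbps)
print(effective_rate_mts(100))   # 19704
print(effective_rate_mts(1000))  # 21504 (~21.5 Gbps)
```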


----------



## OrionBG

newls1 said:


> I used what came with the block, temps are fine


Did you put pads on any specific places that were not mentioned in the manual?


----------



## geriatricpollywog

tps3443 said:


> The included thermal pads are for the back of the card, under the back plate. They are the exact same size and style as the front thermal pads. When you remove your backplate, you’ll more than likely ruin your pads on there. So they include a fresh set in the box (These seem to contact well)


Is it possible to install the block without removing the backplate?



tps3443 said:


> Also, removing the back plate is the most difficult task ever. You seriously feel like your gonna damage your GPU lol. It’s just not natural. I have never experienced anything like this in all of the dozens and dozens of GPU’s I have owned over the years. I was about ready to give up, because It felt as if I was going to pull my memory modules off with the backplate. After all of the screws are removed, it takes about the same amount of force as ripping the sole of a shoe off lol. I recommend a hair dryer 100%! I didn’t have one.


Have you tried installing the leaf spring on the back of a Fury/Vega? It feels like I am crushing the die in one corner.



tps3443 said:


> All of the other components are good on the front and back of the card. It’s just that the memory modules on the front and die contact to the block you need to check carefully during assembly. The mosfets press in to their pads very very easily. *And the VRM’s use the included inductor paste which is super gritty feeling*, but spreads easily and works amazing. I would draw a lower case t on each VRM for best contact instead of just a straight line like in the manual.


So that's what the included paste is for.



tps3443 said:


> Also, for the mosfets the preinstalled thermal pads are great, although I would remove both of the used long skinny mosfet thermal pad strip from your hybrid setup and install them behind the mosfets on the opposite/back of the cards PCB. This drastically helped lower the power temp readings on my PX1 sensors. Then the mosfets will have thermal pads on both sides of the board. And it doesn’t cause any problems with the backplate either. The mosfets thermal pad is identical on the hybrid or hydrocopper.


I will do this. Thanks.



tps3443 said:


> My cards temps are not bad, I’m just after better GPU temps. I can maintain 42–44C on the GPU with an undervolt using PX1 at 2,055Mhz using the LN2 520 watt bios. The bios does supply more FBVDD memory voltage and additional msvdd voltages too.
> 
> Now if I just go for full on 2160Mhz sustained in cyberpunk 2077 with my memory at 23Ghz and full voltages then my GPU will hit 53C-55C with over 500 watts of sustained power consumption. This isn’t all that terrible though. I only have (1) 360x20MM and (2) 360x30MM radiators and a single D5 with 24.5-25C ambients. After the block re-mount I got better memory temps and better power delivery temps for sure though. I think GPU temps seem to be about the same. And when I first removed the block during the remount, I saw a tiny ballpoint pen sized thermal paste bald spot in the center of the GPU die, where the block was contacting the GPU It’s self. So it was barely touching it. And I think the 2nd mount is only slightly better.


With the AIO, I am seeing 52-53C under max load in Cyberpunk at 480-520W, so I expect better with the block.



tps3443 said:


> I will probably try some soft 1.5MM thick pads which will more than likely be perfect.


I will mount the block with no pads or paste and measure the memory chip clearance with feeler gauges. But I trust 1.5mm is correct based on your experience.


----------



## ttnuagmada

I'm pretty far behind on this thread, can someone confirm whether or not the KPE 520w bios is still the preferred one to use on a water cooled strix?


----------



## gfunkernaught

ttnuagmada said:


> I'm pretty far behind on this thread, can someone confirm whether or not the KPE 520w bios is still the preferred one to use on a water cooled strix?


While that BIOS is excellent, the 1kW rBAR BIOSes are also good; they just require excellent cooling.


----------



## ttnuagmada

gfunkernaught said:


> While that bios is excellent, the 1kw rbar bios are also good, just require excellent cooling


None of those downclock the ram at idle though right?


----------



## gfunkernaught

ttnuagmada said:


> None of those downclock the ram at idle though right?


The 520W does; the 1kW does not. However, you can use profiles in AB to switch between an idle preset, where you set the mem offset to -500 or all the way to the left, and another profile where you set your OC. Use keyboard shortcuts to make it easier. Check that in AB.


----------



## VinnieM

gfunkernaught said:


> The 520w does, the 1kw does not however you can use profiles in AB to switch between an idle preset where you set the mem offset to -500 or all the way to the left, and another profile to set your oc. Use keyboard shortcuts to make it easier. Check that in AB.


An underclock of 251MHz or more is required to lower the memory speeds. It seems however that the Nvidia driver isn't very happy with this anymore, because I'm getting random driver resets when the memory is pinned to 400MHz. It happens even while doing trivial things on the desktop, like opening Chrome or other simple tasks.
First the screen freezes and after a few seconds the driver restarts. There are 2 entries in the event viewer after this, first a message from nvlddmkm:

\Device\000000a1
0000(0000) 00000000 00000000

And after that the usual 'driver stopped responding and has successfully recovered' message. This didn't happen half a year ago with the older non-Rebar BIOS, but I've determined it happens with that one now as well. So it's probably a driver issue. Any thoughts on how to fix this? Probably only Nvidia knows...

Anyway, I've tested this new 1KW Rebar BIOS on my air cooled card and while it works, the power usage and temperatures are sky high obviously 
When I limit the card to about 525W it's still power limited in Port Royal and I'm only getting about ~2000MHz GPU clocks and a 14.4K score. Not too bad I guess, but the scary part is the temp spike at the start of the benchmark. GPU temp rises from 50C to 80C in just a second, followed by 100% fan speed, and then the temp drops to about 65C. I think I'll wait for winter to use this BIOS again
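Since posts in this thread talk in both watts and power-limit percent, a tiny conversion helper may be handy. The 1000 W figure below is just the nominal cap of the XOC BIOS being discussed; substitute your BIOS's actual 100% target:

```python
# Convert between a target board power and the power-limit slider
# percentage, given the BIOS's 100% power target. E.g. ~525W on a
# nominal 1kW XOC BIOS is a ~52.5% slider setting, matching the
# values quoted in this thread.

def pl_percent(target_watts: float, bios_max_watts: float) -> float:
    """Power-limit slider value (%) needed for a wattage target."""
    return round(target_watts / bios_max_watts * 100, 1)

def watts_at(pl: float, bios_max_watts: float) -> float:
    """Approximate board power cap at a given slider percentage."""
    return bios_max_watts * pl / 100

print(pl_percent(525, 1000))  # 52.5
print(watts_at(52, 1000))     # 520.0
```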


----------



## GRABibus

VinnieM said:


> An underclock of 251MHz or more is required to lower the memory speeds. It seems however that the Nvidia driver isn't very happy with this anymore, because I'm getting random driver resets when the memory is pinned to 400MHz. It happens even while doing trivial things on the desktop, like opening Chrome or other simple tasks.
> First the screen freezes and after a few seconds the driver restarts. There are 2 entries in the event viewer after this, first a message from nvlddmkm:
> 
> \Device\000000a1
> 0000(0000) 00000000 00000000
> 
> And after that the usual 'driver stopped responding and has successfully recovered' message. This didn't happen half a year ago with the older non-Rebar BIOS, but I've determined it happens with that one now as well. So it's probably a driver issue. Any thoughts on how to fix this? Probably only Nvidia knows...
> 
> Anyway, I've tested this new 1KW Rebar BIOS on my air cooled card and while it works, the power usage and temperatures are sky high obviously
> When I limit the card to about 525W it's still power limited in Port Royal and I'm only getting about ~2000MHz GPU clocks and a 14.4K score. Not too bad I guess, but the scary part is the temp spike at the start of the benchmark. GPU temp rises from 50C to 80C in just a second, followed by 100% fan speed and then the temp drops to about 65C. I think I'll wait for winter again to use this BIOS again


What's your card?
Did you monitor GPU max temp and GPU hotspot max temp with GPU-Z in Port Royal?

With my Strix on air, I have max GPU temp at 95 degrees and max hotspot temp at 110 degrees!! (25 ambient)

Whatever value I cap the PL at (100%, 70%, 50%, etc…), I get the same temps.


----------



## gfunkernaught

VinnieM said:


> An underclock of 251MHz or more is required to lower the memory speeds. It seems however that the Nvidia driver isn't very happy with this anymore, because I'm getting random driver resets when the memory is pinned to 400MHz. It happens even while doing trivial things on the desktop, like opening Chrome or other simple tasks.
> First the screen freezes and after a few seconds the driver restarts. There are 2 entries in the event viewer after this, first a message from nvlddmkm:
> 
> \Device\000000a1
> 0000(0000) 00000000 00000000
> 
> And after that the usual 'driver stopped responding and has successfully recovered' message. This didn't happen half a year ago with the older non-Rebar BIOS, but I've determined it happens with that one now as well. So it's probably a driver issue. Any thoughts on how to fix this? Probably only Nvidia knows...
> 
> Anyway, I've tested this new 1KW Rebar BIOS on my air cooled card and while it works, the power usage and temperatures are sky high obviously
> When I limit the card to about 525W it's still power limited in Port Royal and I'm only getting about ~2000MHz GPU clocks and a 14.4K score. Not too bad I guess, but the scary part is the temp spike at the start of the benchmark. GPU temp rises from 50C to 80C in just a second, followed by 100% fan speed and then the temp drops to about 65C. I think I'll wait for winter again to use this BIOS again


Yes that's why I highly recommend EXCELLENT cooling if you want to play with 1kw bios, unless you have money and access to plenty of 3090s and you don't care about creating unnecessary waste 

As far as your driver crashes, sounds like you should run DDU and clear the shader cache in c:\program data\nvidia\ (if it exists). I had similar issues when I would just "upgrade" drivers. I don't have any issues like you described when I force the mem clock down to 400mhz. I also set the PL to 10% along with the negative mem offset.


----------



## GRABibus

gfunkernaught said:


> Yes that's why I highly recommend EXCELLENT cooling if you want to play with 1kw bios, unless you have money and access to plenty of 3090s and you don't care about creating unnecessary waste


During gaming, I have no dangerous monitored temps with the KP 1000W ReBar, and they are exactly the same as with the EVGA 500W BIOS or the 520W KP BIOS (see my last post with gameplays of BFV and Cold War).

Those crazy temps I get are only in PR (second half of the test) and Time Spy graphics test 2.


----------



## J7SC

GRABibus said:


> During gaming, I have no dangerous monitored temps with KP 1000W ReBar and They are exactly the same as with EVGA 500W bios or 520W KP bios (see my last post with game plays BFV and Cold War)
> 
> those crazy temps I have are only in PR (second half of test) and Time Spy graphics test 2.


...no offense, and you know what you are doing, but every time I hear of an air-cooled card on XOC 1kw vbios, I think of...


----------



## GRABibus

J7SC said:


> ...no offense, and you know what you are doing, but every time I hear of an air-cooled card on XOC 1kw vbios, I think of...


😊😊

Did you check my temps during gaming?
What exactly is the danger if all monitored temps are under control and exactly the same as with the 500W BIOSes?

Am I missing something?

[Official] NVIDIA RTX 3090 Owner's Club - www.overclock.net


----------



## gfunkernaught

or this...


----------



## gfunkernaught

GRABibus said:


> 😊😊
> 
> Did you check my temps during gaming?
> What exactly is the danger if all monitored temps are under control and exactly the same as with the 500W BIOSes?
> 
> Am I missing something?
> 
> [Official] NVIDIA RTX 3090 Owner's Club - www.overclock.net


When you run the 1kW BIOS at 50%, yeah, but the 1kW BIOS has higher or no limits in other areas of the board. I see higher temps at 52% on the 1kW BIOS vs the 520W BIOS. Not by much, but measurable. That just bothers me, so that's why I keep saying excellent cooling is highly recommended to use the 1kW BIOS daily, because anything can happen.


----------



## Arizor

Is anyone else playing The Ascent? Pretty good game, but my 3090 runs at a really low GPU clock (like 1600-1750) even though I've got my curve set, and it runs around 2k in other games. Strange.


----------



## yzonker




----------



## Nizzen

gfunkernaught said:


> Yes that's why I highly recommend EXCELLENT cooling if you want to play with 1kw bios, unless you have money and access to plenty of 3090s and you don't care about creating unnecessary waste
> 
> As far as your driver crashes, sounds like you should run DDU and clear the shader cache in c:\program data\nvidia\ (if it exists). I had similar issues when I would just "upgrade" drivers. I don't have any issues like you described when I force the mem clock down to 400mhz. I also set the PL to 10% along with the negative mem offset.


On one of my 3090 Strix cards with watercooling I've used 1000W from day one. It ain't dead yet 
The card doesn't draw more than 1.1v without elmor, so no worry. Some think the card is magically drawing 1000W with that BIOS, but instead it's drawing 550-600W...


----------



## yzonker

Just finished remounting my 3090 in the Corsair block. Gelid Extreme pads and Kryonaut Extreme paste. 

Mem temps are down 10-15C. 10C right now but I only have the heatsink sitting on the backplate (was using thermal paste before). Block delta improved by a small amount, 11C at 390w, down from 13C. I put the Galax 390w bios on. Didn't want to boot up with a fresh mount on the KP XOC. We'll see what it does at higher power levels. 

Nothing record setting in this crowd, but going in the right direction at least.
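One way to compare mounts tested at different power levels is to normalize the block delta by board power. A quick sketch using the numbers above (13C before and 11C after the remount, both at 390W):

```python
# Normalize GPU-to-coolant deltas by board power so remounts can be
# compared across power levels. Numbers are the ones quoted above
# (13C before and 11C after the remount, at 390W).

def c_per_watt(delta_c: float, power_w: float) -> float:
    """Effective thermal resistance of the mount, degrees C per watt."""
    return delta_c / power_w

before = c_per_watt(13, 390)
after = c_per_watt(11, 390)
print(f"{before:.4f} C/W -> {after:.4f} C/W")
# ~0.0333 C/W before, ~0.0282 C/W after: roughly a 15% better mount

# Predicted block delta at a 520W load with the new mount:
print(round(after * 520, 1))  # 14.7
```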


----------



## Lobstar

Arizor said:


> Is anyone else playing The Ascent? Pretty good game, but my 3090 runs at a really low GPU clock (like 1600-1750) even though I've got my curve set, and it runs in other games, around 2k. Strange.


I'm on the GamePass version with DX12, everything on except motion blur. 137fps hanging out there.

Edit: 115fps here:

The 'curve' is just 2205 @ 1.081v locked and a few of the preceding nodes lowered to the next bin.

Water temp is 27C.


----------



## Arizor

Lobstar said:


> I'm on the GamePass version with DX12, everything on except motion blur. 137fps hanging out there.
> 
> View attachment 2520196
> 
> View attachment 2520197
> 
> View attachment 2520198


Ray tracing isn't enabled in the Game Pass version; you can see that nothing changes by enabling/disabling it.


----------



## Lobstar

Arizor said:


> Raytracing isn’t enabled in the game pass version, you can see by enabling/disabling it nothing changes.


I didn't say anything about that. I was only stating my version and my settings for comparison with what you might have set up. I included my OC and system data so you could compare for troubleshooting.

Edit: It was patched in.

The Ascent's latest patch adds ray tracing to Game Pass version, squashes co-op bugs
The Game Pass version might finally be up to snuff, but no word on DLSS yet.
www.pcgamer.com


----------



## Arizor

Lobstar said:


> I didn't say anything about that. I was only stating my version and my settings for comparison with what you might have setup  I included my OC and system data so you could compare for troubleshooting.
> 
> Edit: It was patched in.
> 
> The Ascent's latest patch adds ray tracing to Game Pass version, squashes co-op bugs
> The Game Pass version might finally be up to snuff, but no word on DLSS yet.
> www.pcgamer.com


Gotcha, just thought I'd clarify. Yeah, I get similar results with ray tracing off.

Also, it hasn't been patched into the Game Pass version (as of yet); the latest download available via the Windows Store/Game Pass did nothing for the ray tracing. I've got the Steam version and the Game Pass one, and the differences (both in visual quality and frame rate) are very noticeable.


----------



## Falkentyne

Arizor said:


> Is anyone else playing The Ascent? Pretty good game, but my 3090 runs at a really low GPU clock (like 1600-1750) even though I've got my curve set, and it runs in other games, around 2k. Strange.


Check your TDP Normalized % in HWinfo64 (GPU-Z does not show this).

Most likely either internal rail limits related to MSVDD and NVVDD voltage (although this is probably not likely at just 350-450W), or higher load on those internal rails themselves, causing greater power throttling (similar to if you ran Furmark just nowhere near as extreme).

Enable "Ultra" Global Illumination + shadows in Path of Exile and you get a 100 mhz drop in clocks at a 400W power limit (Did not test 350W).
Set it to high and the clocks are 100 mhz higher at the same power limit. Just tested this a few hours ago actually.


----------



## Arizor

Falkentyne said:


> Check your TDP Normalized % in HWinfo64 (GPU-Z does not show this).
> 
> Most likely either internal rail limits related to MSVDD and NVVDD voltage (although this is probably not likely at just 350-450W), or higher load on those internal rails themselves, causing greater power throttling (similar to if you ran Furmark just nowhere near as extreme).
> 
> Enable "Ultra" Global Illumination + shadows in Path of Exile and you get a 100 mhz drop in clocks at a 400W power limit (Did not test 350W).
> Set it to high and the clocks are 100 mhz higher at the same power limit. Just tested this a few hours ago actually.


It's a weird one. Nothing strange shown in HWiNFO, but I needed to restart the game and set a different (higher) custom curve, and now it runs as it should. Game needs a few more patches, methinks.


----------



## Nizzen

KedarWolf said:


> If you use my link, you get 15% off in the GELID store.
> 
> http://gelidstore.refr.cc/jamesj
> 
> Get your Extreme pads!!
> 
> Apparently, I get a gift but I have no use for it, to be honest, not why I'm sharing here.


I never tested Gelid pads. What is preferred to use on GPU memory: Extreme or Ultimate?


----------



## VinnieM

GRABibus said:


> What's your card?
> Did you monitor GPU max temp and GPU hotspot max temp with GPU-Z in Port Royal?
> 
> With my Strix on air, I have max GPU temp at 95 degrees and max hotspot temp at 110 degrees!! (25 ambient)
> 
> Whatever value I cap the PL at (100%, 70%, 50%, etc…), I get the same temps.


I have the Aorus Master (rev. 1.0), so it's only a 2x 8-pin card and these 1kW BIOSes are the only way to get more than 390W of power (apart from shunt modding).
I've run PR again and max hotspot temp was 83.7C, GPU 71C and memory 90C (with +1000 offset, 22 ambient).
I did not see the weird spike to 80C again this time, which was kinda strange. Maybe I'll try again with a higher power limit; this was at 525W.
Do you think up to 600W is safe enough if temperatures stay around 90C or lower?



gfunkernaught said:


> Yes that's why I highly recommend EXCELLENT cooling if you want to play with 1kw bios, unless you have money and access to plenty of 3090s and you don't care about creating unnecessary waste
> 
> As far as your driver crashes, sounds like you should run DDU and clear the shader cache in c:\program data\nvidia\ (if it exists). I had similar issues when I would just "upgrade" drivers. I don't have any issues like you described when I force the mem clock down to 400mhz. I also set the PL to 10% along with the negative mem offset.


Thanks, I've performed a clean driver install and removed some NVIDIA folders in the user folder and have not had a driver crash again!


----------



## Nizzen

VinnieM said:


> I have the Aorus Master (rev. 1.0), so it's only a 2x 8 pin card and these 1kw BIOS'es are the only way to get more than 390W power (apart from shunt modding).
> I've ran PR again and max hotspot temp was 83.7C, GPU 71C and memory 90C (with +1000 offset, 22 ambient).
> I did not see the weird spike of 80C again this time, which was kinda strange. Maybe I'll try again with higher power limit, this was at 525W.
> Do you think up to 600W is safe enough if temperatures stay around 90C or lower?
> 
> 
> Thanks, I've performed a clean driver install and removed some NVIDIA folders in the user folder and have not had a driver crash again!


Most likely you can't make the card draw more than 550W on air anyway. Temperature is stopping you, and the speed bin will drop by more than the extra power gains you. A cold card beats a higher power limit.


----------



## Rena Ryugu

Does anybody have 1000W XOC BIOS with Resizable Bar support? If you do, would you mind sharing it with us?


----------



## GRABibus

Rena Ryugu said:


> Does anybody have 1000W XOC BIOS with Resizable Bar support? If you do, would you mind sharing it with us?











EVGA RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


----------



## KedarWolf

Nizzen said:


> I never tested Gelid pads. What is prefered to use on gpu memory; extreme or ultimate?


The Extreme pads are soft and compressible; the Ultimate pads are hard and flaky.

So the Extreme are preferred.

And I saw a review of the new Alphacool Soft pads: the 11 W/mK Soft pads and the 14 W/mK Soft pads performed exactly the same.
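A rough conduction estimate (my own illustration, not from that review; the 1 mm thickness, 10 mm square chip, and ~5 W per chip are assumed numbers) shows why two pads that far apart in rated conductivity can test identically. For a pad of thickness t and conductivity k over area A, the conduction resistance is:

```latex
% Conduction resistance of a thermal pad (illustrative numbers assumed)
R = \frac{t}{kA},\qquad
R_{11} = \frac{10^{-3}\,\mathrm{m}}{11\,\tfrac{\mathrm{W}}{\mathrm{mK}} \cdot 10^{-4}\,\mathrm{m}^2} \approx 0.91\ \mathrm{K/W},\qquad
R_{14} = \frac{10^{-3}\,\mathrm{m}}{14\,\tfrac{\mathrm{W}}{\mathrm{mK}} \cdot 10^{-4}\,\mathrm{m}^2} \approx 0.71\ \mathrm{K/W}
```

At roughly 5 W per memory chip, the ~0.2 K/W difference is about 1C, which is easily lost in contact resistance and measurement noise; that would be consistent with the two ratings benchmarking the same.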


----------



## long2905

I scored 15 184 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





Finally broke 15k with the new ReBAR BIOS, and you guys are already at 16k lol. My core temp is quite high, due in part to the summer heat, but I still need to take the card apart soon.

Can anyone using a 3900X share optimized settings for daily use and Port Royal runs? Hopefully I can squeeze a bit more out of this card.


----------



## GRABibus

long2905 said:


> I scored 15 184 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> finally broke 15k with the new rebar bios and you guys are at 16k lol. My core temp is quite high though due in part to the summer heat but i still need to take the card apart soon.
> 
> anyone using 3900x can share your optimized settings for daily use and port royal run? Hopefully i can squeeze a bit more of out this card.


What's your card and cooling?


----------



## des2k...

long2905 said:


> I scored 15 184 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> finally broke 15k with the new rebar bios and you guys are at 16k lol. My core temp is quite high though due in part to the summer heat but i still need to take the card apart soon.
> 
> anyone using 3900x can share your optimized settings for daily use and port royal run? Hopefully i can squeeze a bit more of out this card.


I think the 3900X is fairly limited in terms of tweaks, especially with a BIOS that supports ReBAR.
It's mostly PBO and a memory OC for a small boost.

If you have trouble reaching high boost clocks, you can drop the frequency on cores that aren't doing much.

The commands below drop idle to about 900 MHz; a value of 46 would be around 1700 MHz at idle.
I've been running PBO and -50 mV; it runs very cool with my old EK waterblock.

powercfg -setacvalueindex scheme_current sub_processor THROTTLING 1
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 26
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN1 26
powercfg -setactive scheme_current

The new BIOS is very aggressive with current for me. It doesn't go much past 4.55 GHz in games, and Cinebench R20 is limited to about 7300 points at 145 W lol.

Prime95/FAH is mostly 4.0-4.1 GHz and doesn't even get past 75C, due to AMD's current limits.

Other small tweaks:
idle control: low
power loading: off
both CPPC options: on
C1 OS declaration: leave on Auto (off), because it adds latency to frequency ramp-up
VRM/LLC: all Auto
scalar x10 vs. x1 only adds maybe 25 MHz
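As a hedged sketch of the same idle-clock tweak (Windows-only batch, run from an elevated prompt; `THROTTLING` and `PROCTHROTTLEMIN` are standard powercfg setting aliases, and 26 is the percentage value discussed above), you can query the setting before and after to confirm it took:

```shell
:: Windows batch sketch; percentage values are a fraction of the CPU base
:: clock, so ~26% of a 3.8 GHz base lands near 1.0 GHz at idle.

:: Show the current "minimum processor state" before changing it.
powercfg /query scheme_current sub_processor PROCTHROTTLEMIN

:: Allow throttle states, then set the AC minimum processor state to 26%.
powercfg /setacvalueindex scheme_current sub_processor THROTTLING 1
powercfg /setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 26

:: Re-apply the active scheme so the new values take effect immediately.
powercfg /setactive scheme_current

:: Verify: the AC setting index should now read 26 (0x1a).
powercfg /query scheme_current sub_processor PROCTHROTTLEMIN
```

Setting the DC value as well (`-setdcvalueindex`) only matters on laptops; on a desktop the AC value is the one in effect.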


----------



## J7SC

On thermal pads: Igor's Lab did a comparison of different W/mK ratings just over two weeks ago (which I posted before)...could be useful
source

I also got some thermal putty (10 W/mK), and so far it seems to perform the best on VRAM and VRMs, perhaps because it conforms so nicely without high mounting pressure


----------



## des2k...

J7SC said:


> On thermal pads, Igor's lab did a comparison of different W/mk ratings just over two weeks ago (which I posted before)...could be useful
> source
> 
> I also got some thermal putty (10 W/mk) and so far, it seems to perform the best on VRAM, VRM, perhaps because it conforms so nicely w/o high mounting pressure


Thick paste will also work; drop a bunch of it on the block before the thermal pads.

My front pads are 0.5mm, so the Noctua paste worked very well. The thermal putty wouldn't work for me: the gap is less than 0.1mm at full pressure on the front memory.

Going to putty would also mean dropping the premium pads (15 W/mK) plus Noctua paste for thermal putty at just 10 W/mK.


----------



## Nizzen

J7SC said:


> On thermal pads, Igor's lab did a comparison of different W/mk ratings just over two weeks ago (which I posted before)...could be useful
> source
> 
> I also got some thermal putty (10 W/mk) and so far, it seems to perform the best on VRAM, VRM, perhaps because it conforms so nicely w/o high mounting pressure


Here it looks like Gelid Extreme sucks?

Datasheet vs. Reality: Gelid GP-Extreme and EC360 Silver Review - Heat Conduction Pads with alleged 12 W/(m*K) | Page 4 | igor'sLAB (www.igorslab.de)





@KedarWolf


----------



## GAN77

Nizzen said:


> Here it looks like Gelid extreme sux?
> 
> 
> 
> 
> 
> 
> 
> 
> Datasheet vs. Reality: Gelid GP-Extreme and EC360 Silver Review - Heat Conduction Pads with alleged 12 W/(m*K) | Page 4 | igor'sLAB
> 
> 
> Disclaimer: The following article is machine translated from the original German, and has not been edited or checked for errors. Thank you for understanding!
> 
> 
> 
> 
> www.igorslab.de


It was a fake. Gelid is never black.
Igor is constantly chasing sensational findings and is often mistaken.


----------



## Nizzen

GAN77 said:


> It was a fake. Gelid is never black.
> Igor is constantly looking for sensations and is often mistaken)


Has anyone else tested Gelid Extreme?


----------



## GAN77

Nizzen said:


> Anyone else tested gelid extreme?


There is a convincing video.


----------



## tps3443

0451 said:


> Is it possible to install the block without removing the backplate?
> 
> 
> Have you tried installing the leaf spring on the back of a Fury/Vega? It feels like I am crushing the die in one corner.
> 
> 
> 
> 
> So that's what the included paste is for.
> 
> 
> I will do this. Thanks.
> 
> 
> With the AIO, I am seeing 52-53 under max load in Cyberpunk at 480-520w, so I expect better with the block.
> 
> 
> I will mount the block with no pads or paste and measure the memory chip clearance with feeler gauges. But I trust 1.5mm is correct based on your experience.


Hey, sorry about the late reply. You must remove the backplate; there are additional screws (5?) underneath it. Just heat the backplate up, move slowly around the PCB with a plastic card, and pry gently around the edges. You won't damage anything. I've removed mine twice so far with no issues at all.


The 3090 FE uses 1.5mm thermal pads with a flat contact plate over the GPU and the memory modules around the die. So yeah, the 2.25mm thick pads with 50 Shore hardness on a 3090 Kingpin seem a little extreme to me.

The paste in the tube is for "inductor use only," so it goes on the VRMs/chokes (the gray squares). It is not really thermal paste, but a thermal gap filler that won't dry out on you.


How's everything going so far?


PS: The OEM RTX 3090 Kingpin Hydro Copper thermal pads are ZIITEK 13 W/mK, 2.25mm. These pads are actually extremely effective for the back of the board, and the front too. They're just too thick, and a gap between the die and waterblock is present, based on my own examination during the remount process.


----------



## J7SC

GAN77 said:


> It was a fake. Gelid is never black.
> Igor is constantly looking for sensations and is often mistaken)


...what? Igor only tested the different W/mK Alphacool pads in this particular test (apples to apples)...I can't say anything about the subsequent test and his comments there


----------



## GAN77

J7SC said:


> ...what ?











Datasheet vs. Reality: Gelid GP-Extreme and EC360 Silver Review - Heat Conduction Pads with alleged 12 W/(m*K) | Page 4 | igor'sLAB (www.igorslab.de)


----------



## Falkentyne

GAN77 said:


> It was a fake. Gelid is never black.
> Igor is constantly looking for sensations and is often mistaken)


This is very interesting.
I have these pads. They are "GP02-C" from China, 120mm x 120mm, while the GP-01 are 80mm x 40mm in retail packaging for overseas.

I emailed Gelid Solutions asking whether these pads are fake, and they told me the GP02 120mm x 120mm SKU is only sold in China.
So I still don't know whether they are fake. The sample I have is a darker pigment than the 80mm x 40mm packs from Gelid's store on Amazon, but not as dark as the one in Igor's picture.
The Shore rating is the same, and I can't detect any difference in performance. I'm pretty sure the ones I have are proper, but I'll ask Gelid again about the pigment difference.

I received a sample of EC360 Silver (they sent me one to test shipping speed), so I was able to take a picture.

Left: Gelid Extreme 1.5mm GP01-C
Middle: what's left of the Gelid Extreme 1.5mm GP02-C (was 120mm x 120mm, from AliExpress)
Right: EC360 Silver

The leftmost pad is the brightest, but the middle pad (in comparison to the EC360) is nowhere near as dark as the pad in Igor's article (next to his EC360 Silver).


----------



## GAN77

Gelid GP-ULTIMATE.
Ordered at the Brand Store gelidstore.com


----------



## J7SC

GAN77 said:


> Datasheet vs. Reality: Gelid GP-Extreme and EC360 Silver Review - Heat Conduction Pads with alleged 12 W/(m*K) | Page 4 | igor'sLAB
> 
> 
> Disclaimer: The following article is machine translated from the original German, and has not been edited or checked for errors. Thank you for understanding!
> 
> 
> 
> 
> www.igorslab.de


...different article

@PLATOON TEKK ...further to our recent posts on this, I am getting ready to mount the 3090 Strix OC (5950X) and 6900 XT (3950X) systems into one TT Core P8. Both systems had already been running on their own for months, but I took the opportunity to pull the Phanteks 3090 Strix block and the Bykski 6900 XT block now and mount them into a high-pressure leak-test loop (dual D5s at 100%) per below...not pretty, but it should sniff out any weaknesses

...so far so good, but I'll let it run overnight at full tilt. As mentioned earlier, almost every GPU block I've owned has had some screws 'not tight', with EK being the worst offender. I noticed this years ago on a quad-GPU system (after a leak on one of the blocks) and have since checked every screw of every new block. A few minutes of extra work, but more peace of mind


----------



## tps3443

@GRABibus

I watched your gameplay videos, very nice!

@J7SC is right: take caution with that BIOS. You can degrade a GPU and lower its default boost bin by 15-30 MHz fairly easily.

I've been using the Kingpin 450W BIOS, running bone stock for gaming: 2,025-2,040 MHz boost with no overclocking. Benching is another thing. These cards haul ass in games either way, though.


----------



## PLATOON TEKK

J7SC said:


> ...different article
> 
> @PLATOON TEKK ...further to our recent posts on this, I am getting ready to mount the 3090 Strix OC (5950X) and 6900XT (3950X) systems into one TT Core P8. Both systems had already been running by themselves for months, but I took the opportunity to take the Phanteks 3090 Strix block and the Bykski 6900 XT block off now and mounted them into a leak-test loop w/ high pressure (dual D5s at 100%) per below...not pretty, but should sniff out any weaknesses
> 
> ...so far so good, but will let it run overnight at full tilt. As mentioned earlier, almost every GPU block I have had some of its screws 'not tight', with EK being the worst offender. I noticed this years ago on a quad-GPU system (after a leak on one of the blocks) and since then check every screw of every new block. A few minutes of extra work, but more peace of mind
> 
> View attachment 2520280


Boom. That's a nice little test setup. Thanks for the update, and let me know if your Phanteks block shows any leaks.

I fear you are absolutely right: I just checked the HOF Bykski block I posted here, and some of the active-backplate screws were pretty loose. I know not to overtighten them, but still, it's pretty surprising how loose they were out of the box.

Checked the Optimus, but those all seem tight. Going to walk around the studio with an Allen key now and check the rest, ha. I guess it's DTA (Don't Trust Anything) when it comes to these components.


----------



## tps3443

J7SC said:


> ...different article
> 
> @PLATOON TEKK ...further to our recent posts on this, I am getting ready to mount the 3090 Strix OC (5950X) and 6900XT (3950X) systems into one TT Core P8. Both systems had already been running by themselves for months, but I took the opportunity to take the Phanteks 3090 Strix block and the Bykski 6900 XT block off now and mounted them into a leak-test loop w/ high pressure (dual D5s at 100%) per below...not pretty, but should sniff out any weaknesses
> 
> ...so far so good, but will let it run overnight at full tilt. As mentioned earlier, almost every GPU block I have had some of its screws 'not tight', with EK being the worst offender. I noticed this years ago on a quad-GPU system (after a leak on one of the blocks) and since then check every screw of every new block. A few minutes of extra work, but more peace of mind
> 
> View attachment 2520280


I want that dual D5 pump! I love the way those look, and I think it's about time for two D5s, personally. If I lower the RPM of my single D5, temps go up immediately: not by much, 1-2C at idle and maybe 2-3C under load, but any drop in RPM raises temps a little. That makes me think I am slightly pump-limited. The 360mm XSPC 20.5mm slim rad is really restrictive, and so is my Optimus Sig V2.

So with three 360s, a GPU block, a CPU block, and a couple of 90-degree fittings, I think I need two D5s minimum for best performance.

Before adding the ultra-thin XSPC radiator, using only two EK 360 rads, I tested my D5 between 70-100% and it performed exactly the same.

Now it's like I need 100% pumping all the time.


----------



## Nizzen

So maybe Igor got some fake Gelid pads from China?
"Strange" that Igor didn't reach out to Gelid for an answer about the bad performance. Guess Igor doesn't care?


----------



## Falkentyne

Nizzen said:


> So maybe Igor got some fake Gelid pads from china?
> "Strange" Igor didn't reach out to Gelid to get an answer about the bad performance. Guess Igor doesn't care?


I don't know if they are fake.
Take a look at his pictures.
In the first picture of the Gelid, in the package, the color looks correct.
In the next picture, they look much darker.
It looks like a completely different pad, and I already know that Gelid Extremes take a good imprint of the chips.

These are my Gelid Extreme 1.5mm pads from a previous repaste (the AliExpress GP02-C, 120mm x 120mm).
The top ones are partially melted Gelid Extreme 2.0mm pads (because of the super-hot backplate), the 90mm x 50mm Amazon Gelid Store ones.

The 1.5mm look absolutely nothing like Igor's colors, but the ones in his package shot look correct in color? What gives?

(mine)


http://imgur.com/7cvMw6b

 (on card, this was a few months ago).










Um, I see absolutely no problems with the imprints of the chips on the pads. Do you? Looks nice and clear to me....











(The middle pad is the leftover of the 120mm x 120mm GP02-C Gelid pad, the same pad as in the first picture above.) Its color is not far from the 80mm x 40mm Gelid Extreme USA pad (the left one). The rightmost pad is the EC360 Silver 1.5mm pad that EC360 sent me to test shipping speed.

But Igor's....
(Igor):









Datasheet vs. Reality: Gelid GP-Extreme and EC360 Silver Review - Heat Conduction Pads with alleged 12 W/(m*K) | igor'sLAB (www.igorslab.de)





https://www.igorslab.de/wp-content/uploads/2021/07/Pads-Front.jpg (color looks proper, no problem right).

https://www.igorslab.de/wp-content/uploads/2021/07/Pads-Before.jpg ???? (how the hell did the color change?)

https://www.igorslab.de/wp-content/uploads/2021/07/Pads-After.jpg ???? (color??)


----------



## Nizzen

Falkentyne said:


> I don't know if they are fake.
> Take a look at his pictures.
> In the first picture of the gelid, in the package, the color looks correct.
> In the next picture, they look much darker.
> Looks like a completely different pad. and I already know that Gelid Extremes allow good imprint of the chips.
> 
> These are my Gelid extreme 1.5mm pads from a previous repaste (the Aliexpress GP-02-C, 120mm * 120mm)
> The top are partially melted Gelid Extreme 2.0mm pads (Because of the super hot backplate), the 90mm * 50mm Amazon Gelid Store ones.
> 
> The 1.5mm look absolutely nothing like Igor's colors, but the ones on his package look correct in color? What gives?
> 
> (mine)
> 
> 
> http://imgur.com/7cvMw6b
> 
> (on card, this was a few months ago).
> 
> View attachment 2520333
> 
> 
> Um I see absolutely no problems with imprints of the chips on the pads. Do you? Looks nice and clear to me....
> 
> 
> View attachment 2520332
> 
> 
> (The middle pad is the leftover of the 120mm * 120mm GP02-C Gelid pad. Same pad as in the first picture above). The color is not far from the 80mm * 40mm Gelid Extreme USA pad.
> 
> But Igor's....
> (Igor):
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Datasheet vs. Reality: Gelid GP-Extreme and EC360 Silver Review - Heat Conduction Pads with alleged 12 W/(m*K) | igor'sLAB
> 
> 
> Disclaimer: The following article is machine translated from the original German, and has not been edited or checked for errors. Thank you for understanding!
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> https://www.igorslab.de/wp-content/uploads/2021/07/Pads-Front.jpg (color looks proper, no problem right).
> 
> https://www.igorslab.de/wp-content/uploads/2021/07/Pads-Before.jpg ???? (how the hell did the color change?)
> 
> https://www.igorslab.de/wp-content/uploads/2021/07/Pads-After.jpg ???? (color??)


Yes, *** is that LOL

Those can't be the real Gelid pads. He must have been drunk and swapped in some random fake 1 W/mK pads 😅


----------



## satinghostrider

Here are my Gelids I've done for reference.


----------



## jura11

Falkentyne said:


> I don't know if they are fake.
> Take a look at his pictures.
> In the first picture of the gelid, in the package, the color looks correct.
> In the next picture, they look much darker.
> Looks like a completely different pad. and I already know that Gelid Extremes allow good imprint of the chips.
> 
> These are my Gelid extreme 1.5mm pads from a previous repaste (the Aliexpress GP-02-C, 120mm * 120mm)
> The top are partially melted Gelid Extreme 2.0mm pads (Because of the super hot backplate), the 90mm * 50mm Amazon Gelid Store ones.
> 
> The 1.5mm look absolutely nothing like Igor's colors, but the ones on his package look correct in color? What gives?
> 
> (mine)
> 
> 
> http://imgur.com/7cvMw6b
> 
> (on card, this was a few months ago).
> 
> View attachment 2520333
> 
> 
> Um I see absolutely no problems with imprints of the chips on the pads. Do you? Looks nice and clear to me....
> 
> 
> View attachment 2520332
> 
> 
> (The middle pad is the leftover of the 120mm * 120mm GP02-C Gelid pad. Same pad as in the first picture above). The color is not far from the 80mm * 40mm Gelid Extreme USA pad (which is the left one). The right most pad is the EC360 Silver 1.5mm pad that EC360 sent me to test shipping speed.
> 
> But Igor's....
> (Igor):
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Datasheet vs. Reality: Gelid GP-Extreme and EC360 Silver Review - Heat Conduction Pads with alleged 12 W/(m*K) | igor'sLAB
> 
> 
> Disclaimer: The following article is machine translated from the original German, and has not been edited or checked for errors. Thank you for understanding!
> 
> 
> 
> 
> www.igorslab.de
> 
> 
> 
> 
> 
> https://www.igorslab.de/wp-content/uploads/2021/07/Pads-Front.jpg (color looks proper, no problem right).
> 
> https://www.igorslab.de/wp-content/uploads/2021/07/Pads-Before.jpg ???? (how the hell did the color change?)
> 
> https://www.igorslab.de/wp-content/uploads/2021/07/Pads-After.jpg ???? (color??)


The color of those Gelid pads looks more like the Bykski-supplied pads I have used, or no-name pads from AliExpress.

It's strange that he got such high temperatures with Gelid pads. I have tried other pads on the same waterblock in the past, and recently I replaced the pads on an EK Vector waterblock with the AliExpress Gelid pads; VRAM temperatures dropped from the early 90s to the 60s after repadding the whole block.

The color change could possibly be down to temperature.

I can take pictures of my Bykski and Thermalright thermal pads.

Hope this helps.

Thanks, Jura


----------



## Falkentyne

satinghostrider said:


> Here are my Gelids I've done for reference.
> 
> View attachment 2520340
> 
> 
> View attachment 2520341
> 
> 
> View attachment 2520343


What shop did you order your pads from? Can you link it for me, please?


----------



## jura11

Falkentyne said:


> What shop did you order your pads from? Can you link it for me, please?


Here is the shop I used:

GELID TP-GP02 120x120mm, 0.5mm/1.0mm/1.5mm graphics processor thermal pad, ￡11.58 (6% off) / 15.51 US$ (a.aliexpress.com)

Hope this helps.

Thanks, Jura


----------



## Falkentyne

jura11 said:


> Here is shop which I have used
> 
> Please help me choose!
> ￡11.58 6%OFF | GELID TP-GP02 120x120x0.5MM 1.0MM 1.5MM graphics processor cooling radiator Conductive silicone pad Thermal Pad high quality
> 
> 
> 
> 
> 
> 
> 
> 
> 15.51US $ |Gelid Tp-gp02 120x120x0.5mm 1.0mm 1.5mm Graphics Processor Cooling Radiator Conductive Silicone Pad Thermal Pad High Quality - Pc Components Cooling & Tools - AliExpress
> 
> 
> Smarter Shopping, Better Living! Aliexpress.com
> 
> 
> 
> 
> a.aliexpress.com
> 
> 
> 
> 
> 
> 
> Hope this helps
> 
> Thanks, Jura


I am trying to find the shop that has the transparent "Gelid" cover film (with "Gelid" written all over it repeatedly), rather than the clear plastic or blue plastic one.


----------



## satinghostrider

Falkentyne said:


> What shop did you order your pads from? Can you link it for me, please?


@Falkentyne 
GELID Heat Dissipation Thermal Pad 120x120mm, 0.5mm/1.0mm/1.5mm, 12 W/mK (a.aliexpress.com)


----------



## jura11

Falkentyne said:


> I am trying to find the shop that has the transparent "gelid" cover writing on the RAM (with gelid written all over repeatedly), rather than the clear plastic or blue plastic one.


I received Gelid thermal pads where "Gelid" is written repeatedly all over the transparent cover.

I can take pictures of these pads. I think the ones with blue plastic covers are Thermalright.

Hope this helps.

Thanks, Jura


----------



## J7SC

FYI, Igor has in fact expanded on the potential for fake 'inputs' used by reputable companies, per below.

Also, I've got various thermal pads from Amazon and elsewhere, ranging from 0.5mm to 2mm...I'm convinced that, for example, some of my Thermalright Odyssey packets might be fake, not least because the packaging has a slightly different colour compared to all other 'identical' earlier purchases, and the pads themselves feel different...only longer-term testing will tell. One of my old 780 Ti Classified 'barbecues' with the XOC BIOS and EVBot will be a good way to put some heat into them and see if the pads break down after repeated cycles.


----------



## gfunkernaught

tps3443 said:


> I want that dual D5 pump! I love the way those look. I think it’s about time for two D5’s personally. If I lower the RPM of my single D5 the temps go up immediately. Not by much 1-2C idle, and maybe 2-3C on load. But if I drop any amount of RPM off of my D5 the temps do go up a little. So this makes me think, I am slightly pump limited. The 360MM XSPC 20.5MM slim rad is really restrictive, so is my Optimus Sig V2.
> 
> So with (3) 360’s, GPU block, and cpu block, and a couple 90 degree fittings. I think I need 2 D5’s minimum for best performance.
> 
> Before adding the Ultra thin XSPC radiator. Using only (2) ek 360 rads, I tested my D5 between 70-100% and it performed the exact same way.
> 
> Now it’s like, I need 100% pumping all the time.


Speaking of dual D5s, would a single D5 be enough for two 360x64s and two 480x60s, plus a GPU and CPU block?


----------



## J7SC

tps3443 said:


> I want that dual D5 pump! I love the way those look. I think it’s about time for two D5’s personally. (...)





gfunkernaught said:


> Speaking of dual D5's, would a single D5 be enough for two 360x64s and two 480x60s? Plus a gpu and cpu block.


...the dual D5 setup I have has been great so far; it's fun watching the flow ('vortex') at full tilt. However, the ultimate D5 setup IMO is this one from Aquacomputer


----------



## gfunkernaught

J7SC said:


> ...the dual D5 I have have been great so far - it's fun watching the flow ('vortex') at full tilt. However, the ultimate D5 setup IMO is this from Aquacomputer
> 
> View attachment 2520378


...so is that a yes or a no? 😂 I've read that a single would be fine, but since you have experience with the CL480s, that's why I asked. Did you go dual for the hell of it, or because you actually needed to?


----------



## jura11

gfunkernaught said:


> Speaking of dual D5's, would a single D5 be enough for two 360x64s and two 480x60s? Plus a gpu and cpu block.


I would say yes, you will be okay with a single D5, but flow rate will suffer; with such a loop I would expect a flow rate around 75 L/h, maybe close to 100 L/h at most.

Hope this helps 

Thanks, Jura


----------



## J7SC

gfunkernaught said:


> ...so is that a yes or a no?😂 I've read that a single would be fine but since you have experience with the CL480s why I asked. Did you go dual for the hell of it or because you actually needed to?


I'm probably the wrong guy to ask ...a single D5 might work fine, but I ALWAYS use dual D5s per loop for a variety of reasons anyway, including fail-over, as most of my machines see work and play use or later on become light servers etc. I've tried a single D5 on one CL480 and 2x GPU blocks...it works, but CL480s are triple-row rads; with additional rads plus a GPU and CPU block, I would go dual, never mind that I'm actually going from dual CL480s to triple CL480s.

Apart from that, since a D5 has a larger diameter than, for example, a DDC, it is a bit more susceptible to hotspots via air pockets (per der8auer's D5 vs. DDC comparison tests). Dual D5s in series, on the other hand, do not have that problem and deliver the best possible flow/pressure combo, with the added fail-over insurance.
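As a rule-of-thumb sketch (idealized pump curves, my own illustration, not a measured D5 spec): two identical pumps in series add their pressure heads at a given flow, while two in parallel add their flows at a given head:

```latex
% Series: heads add at the same flow rate Q
\Delta P_{\mathrm{series}}(Q) = \Delta P_1(Q) + \Delta P_2(Q) = 2\,\Delta P(Q)
% Parallel: flows add at the same head \Delta P
Q_{\mathrm{parallel}}(\Delta P) = Q_1(\Delta P) + Q_2(\Delta P) = 2\,Q(\Delta P)
```

In a restrictive loop (steep system curve), the series arrangement shifts the operating point toward higher flow, which is why series D5s tend to help in loops with several rads and blocks.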


----------



## tps3443

gfunkernaught said:


> ...so is that a yes or a no?😂 I've read that a single would be fine but since you have experience with the CL480s why I asked. Did you go dual for the hell of it or because you actually needed to?


I would run dual D5s! A single D5 will work, but it won't vortex at all in my system, and your flow rate will be an acceptable minimum and that's about it. I have a fairly restrictive loop, mostly from my CPU block.

I have an EKWB D5 at full speed, 4,150 RPM, and I get a very relaxed, drinking-straw-sized flow back into my reservoir. The water is pretty calm.

If I reduce RPM at all, my temps go up a little. So this tells me I need more flow.

I have some extra cash to burn. I'm tempted to buy a full front/rear set of Fujipoly pads for my KP Hydro Copper block, or maybe another radiator, or another D5. And in reality I need all three of those lol.

If you're after the lowest possible temps, then a dual D5 setup is the way to go. If you just want reliably low temps with watercooling, then a single D5 will work.


----------



## SirCanealot

I should hopefully have some Gelid pads coming this week from AliExpress. I'll post some pictures when I get them... :S


----------



## J7SC

PLATOON TEKK said:


> Boom. That’s a nice little test setup, thanks for the update and let me know if *your Phanteks block shows any leaks*.(...)


...leak tests complete: no leaks whatsoever. I did decide to take off the two 'cosmetic cover plates' and found 9 more screws underneath; one was very loose, right at the VRM area. Anyway, everything checked out OK after well over 24 hours and a couple of start-stop cycles










...overall, the EK had the most loose screws, and the Bykski the fewest. Another thing about the EK, apart from the nickel plating peeling off the backplate, over half of the RGB LEDs failing shortly after install, and of course the machining imprint and associated ridge affecting hotspot temps, is how thin the nickel plating over the copper on the GPU block is...check out the pic below, taken with the camera flash


----------



## gfunkernaught

@J7SC 
I was thinking the same thing as far as better flow rate goes. Fail-over is another thing I should be considering.

That pump from Aquacomputer looks nice; what is the model, so I can look it up? Also, is a reservoir necessary with a pump like that? After taking the HDD cage out of my 1000D I now have more room. So far I have three rads and three 10-port fan hubs; I'm thinking about mounting the hubs to the rads instead of running extension cables to the back of the case, so I'll probably need a fourth hub. The last rad is coming tomorrow, and I have all the fans. Screws are on the way, all 120 of them lol; I need 68, so I bought two packs of 60 for $14. Not bad. So a dual pump is next, and maybe a res.


----------



## J7SC

gfunkernaught said:


> @J7SC
> I was thinking the same thing as far as better flow rate. Fail over is another thing I should be considering.
> 
> That pump from aquacomputer looks nice. What is the model so I can look it up? Also, is a reservoir necessary with a pump like that? After taking the hdd cage out of my 1000D I now have more room. So far I have three rads, and three 10-port fan hubs, should probably get another since I'm thinking about mounting the hubs to the rads instead of running extension cables to the back of the case so I'll probably need another 10-port hub, the last rad is coming tomorrow. I have all the fans. Screws on the way, all 120 of them lol. I need 68 so I bought two packs of 60 for $14. Not bad. So a dual pump is next and maybe a res.


Aquacomputer sells the dual D5 housing separately > here ...you would need to purchase the two D5 Nexts in addition, though afaik regular Laing D5-style pumps should fit...you might even email or call Aquacomputer to find out, and perhaps also ask re. a package deal for 2 pumps & housing (they used to advertise that whole unit as one SKU, but not anymore).

Also, I would not run a loop w/o a reservoir, not over the longer run anyways. If space is an issue, I've used several of the > Swiftech mini reservoirs before...they work great


----------



## gfunkernaught

J7SC said:


> Aquacomputer sells the dual D5 housing separately > here ...you would need to purchase the two D5 Nexts in addition, though afaik regular Laing D5-style pumps should fit...you might even email or call Aquacomputer to find out, and perhaps also ask re. a package deal for 2 pumps & housing (they used to advertise that whole unit as one SKU, but not anymore).
> 
> Also, I would not run a loop w/o a reservoir, not over the longer run anyways. If space is an issue, I've used several of the > Swiftech mini reservoirs before...they work great


I see. Cool. Since I've only ever owned two pumps, a really old Thermaltake pump/res combo (that blue rectangular box with the pump at the bottom) and the EK D5 pump/res combo I'm using now, I have no clue what to get for my next pump. Is that Aquacomputer pump top of the line as far as pump quality goes? All that RGB and OLED display stuff is luxury, I can tell. Do I still need a reservoir at this point?


----------



## 8472

Hello all, 

Two questions:

1. I have an MSI Suprim X 3090 and an EKWB block. Is the top recommendation to use 1mm Gelid Extreme thermal pads for both front and back? Or would it be better to use different sizes for different parts? Or maybe a different brand as well.

2. For the thermal paste, I am planning on using KPX, am I right in assuming that there will only be a marginal difference, if any, between KPX and the other non liquid metal alternatives such as Kryonaut, NT-H2, etc.? 

Thanks!


----------



## Arizor

8472 said:


> Hello all,
> 
> Two questions:
> 
> 1. I have an MSI Suprim X 3090 and an EKWB block. Is the top recommendation to use 1mm Gelid Extreme thermal pads for both front and back? Or would it be better to use different sizes for different parts? Or maybe a different brand as well.
> 
> 2. For the thermal paste, I am planning on using KPX, am I right in assuming that there will only be a marginal difference, if any, between KPX and the other non liquid metal alternatives such as Kryonaut, NT-H2, etc.?
> 
> Thanks!


Hey mate,

1. I can only speak to the Strix EKWB, but for that I needed 1mm, 1.5mm and 2mm pads.

2. Yes, afaik, KPX is certainly top tier and you won't see much difference unless you go LM.


----------



## gfunkernaught

Arizor said:


> Hey mate,
> 
> 1. I can only speak to the Strix EKWB, but for that I needed 1mm, 1.5mm and 2mm pads.
> 
> 2. Yes, afaik, KPX is certainly top tier and you won't see much difference unless you go LM.


I've never tried KPX but I have tried Kryonaut and IC Diamond on my 3090, and the IC Diamond performed better than the Kryonaut. I have since moved to liquid metal and haven't looked back. With great cooling comes great responsimability. The first time I had to reapply LM I carelessly removed the block and it "popped" off and I found some LM beads scattered on the PCB. I got them all though. Just something to consider should you choose to cross over to the dark side.


----------



## yzonker

Managed to get past the 2190 MHz average clock I seemed to be stuck at before (yeah, only 15 MHz higher). Still, a slightly higher score thanks to that and a little better mem OC. The re-mount seems to have added at least a small bit.

I scored 15 633 in Port Royal
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

That's actually over 100 pts, as my previous best was 15559, but that was on the 6111 driver, which scored about 50 pts higher than 7111. Didn't feel like doing driver swapping for that again. Maybe 15680 at best if I switched drivers.











----------



## Arizor

gfunkernaught said:


> I've never tried KPX but I have tried Kryonaut and IC Diamond on my 3090, and the IC Diamond performed better than the Kryonaut. I have since moved to liquid metal and haven't looked back. With great cooling comes great responsimability. The first time I had to reapply LM I carelessly removed the block and it *"popped" off and I found some LM beads scattered on the PCB*. I got them all though. Just something to consider should you choose to cross over to the dark side.


God that terrifies me, think I'll stick with KP for my Optimus getting delivered. One day though, one day...


----------



## gfunkernaught

Arizor said:


> God that terrifies me, think I'll stick with KP for my Optimus getting delivered. One day though, one day...


Proper sentiment. You should have heard the sound I made when I saw the beads. There were only a few of them and luckily I saw them all and they didn't smear.


----------



## Arizor

gfunkernaught said:


> Proper sentiment. You should have heard the sound I made when I saw the beads. There were only a few of them and luckily I saw them all and they didn't smear.


😄 I imagine it as the howl of a coyote when the trap snaps around its ankle.


----------



## gfunkernaught

Arizor said:


> 😄 I imagine it as the howl of a coyote when the trap snaps around its ankle.


More like raking a chalkboard.


----------



## gfunkernaught

I just thought of something, I already have the EK D5 pump/res combo. Isn't the pump detachable from the res? I could just get another pump. Is it better to have the two pumps attached to each other or should I place the 2nd pump elsewhere in the loop?


----------



## J7SC

gfunkernaught said:


> I just thought of something, I already have the EK D5 pump/res combo. Isn't the pump detachable from the res? I could just get another pump. Is it better to have the two pumps attached to each other or should I place the 2nd pump elsewhere in the loop?


...won't make much difference to have two separate D5s in series on their own rather than a common housing...the latter just makes it a lot easier re. fittings, tubing & routing etc.
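For intuition on why series D5s mainly help a restrictive loop rather than doubling flow: pumps in series add head (pressure) at a given flow, so the operating point just moves further up the loop's resistance curve. A toy model of that intersection; all curve constants here are invented for illustration, not measured D5 data:

```python
# Toy model: the loop runs where the pump curve meets the system curve.
# All numbers are illustrative placeholders, not real D5 measurements.

def pump_head(flow_lph: float, n_pumps: int = 1) -> float:
    """Head in meters; a single D5 sketched as a linear falloff from
    ~3.9 m max head to zero head at ~1500 L/h free flow."""
    single = max(0.0, 3.9 * (1 - flow_lph / 1500.0))
    return single * n_pumps  # series pumps add head at a given flow

def system_head(flow_lph: float, k: float = 2e-6) -> float:
    """Loop restriction: head loss grows roughly with the square of flow."""
    return k * flow_lph ** 2

def operating_flow(n_pumps: int) -> float:
    """Scan upward for the flow where pump head no longer exceeds loop loss."""
    flow = 0.0
    while pump_head(flow, n_pumps) > system_head(flow):
        flow += 1.0
    return flow

print(operating_flow(1), operating_flow(2))  # second value is higher, not double
```

The takeaway matches the posts above: the second pump buys a meaningful but well-under-2x flow increase, plus redundancy if one pump dies.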


----------



## tps3443

gfunkernaught said:


> Proper sentiment. You should have heard the sound I made when I saw the beads. There were only a few of them and luckily I saw them all and they didn't smear.


I deal with this all the time. I’ll find tiny LM drops all over my floor lol. I recommend washing your hands frequently while applying LM, because it can be on your hands and it’ll transfer and smudge another surface.

My 7980XE is direct die, and I re-do the application about every 2 months for optimal performance. If LM does land on something sensitive and you can reach it, just blow the LM beads right off, rather than smudging something by wiping it away.


LM is sensitive, I would recommend anyone inspect all surfaces and working areas after and while using it with a bright light. I splash it on my motherboard and other components quite frequently. It is always removed!

No LM on the 3090KP die yet. Not quite there mentally yet lol. I’m not worried about applying it. Just not sure if I want to even do it..


----------



## long2905

GRABibus said:


> what’s your card and cooling ?


Mine is an Inno3D iChill X4 with a Bykski active backplate combo block, installed on a combination of rads. The temperature is quite high, I've noticed, as the delta is up to 24C, but I'm not sure if it's due to the hot weather or flow restriction in my loop. I only have 1 D5 installed, though, on a thick 360, an ultra-slim 360, a slim 280, and a slim 120 rad. It's all over the place as I try to fit them into a 5000D case.


des2k... said:


> I think the 3900x is fairly limited in terms of tweaks, especially with bios that support rebar.
> It's mostly PBO and MEM OC to get a small boost.
> 
> If you have trouble getting high boost you can drop freq on cores that are not doing much.
> 
> This drops to 900mhz for idle, 46 value would be around 1700mhz for idle.
> Been running with PBO and -50mv , runs very cool with my old EK waterblock.
> 
> Powercfg -setacvalueindex scheme_current sub_processor THROTTLING On
> Powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 26
> Powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN1 26
> Powercfg -setactive scheme_current
> 
> New bios for me is very aggressive with current. Doesn't go much past 4.55 in games, R20 is limited to about 7300 145w lol.
> 
> Prime95/FAH is mostly 4ghz-4.1ghz, doesn't even reach past 75c, due to amd current limits.
> 
> Other small tweaks:
> idle control, low
> power loading,off
> both cppc options, on
> c1 os declaration should be left auto(off) because it adds latency for frequency ramp up
> vrm,llc all auto
> scallar x10 vs x1 only adds maybe 25mhz to the freq


Can you go into detail on how to apply those configs you shared? I just set the core multiplier to 40 with 1.1 vcore to keep things cool, even though I have 3600C14 B-die RAM. The core frequency stays locked at 4 GHz and I can't get it to clock down at idle. What's your 3900X temp under load, also?
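For anyone else wanting to try the quoted powercfg tweaks: they go in an elevated Command Prompt. A sketch, assuming the setting aliases from Microsoft's powercfg reference; note that `-setacvalueindex` takes a numeric value, so the "On" in the quote above would need to be `1`:

```shell
:: Allow Throttle States (the quoted "THROTTLING On"; value is 1/0, not On/Off)
powercfg -setacvalueindex scheme_current sub_processor THROTTLING 1
:: Minimum processor state as a percentage (26 gives ~900 MHz idle per the quote)
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 26
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN1 26
:: Re-apply the current scheme so the new values take effect
powercfg -setactive scheme_current
:: Verify what was actually written
powercfg -query scheme_current sub_processor PROCTHROTTLEMIN
```

This is a hedged sketch of applying the quoted config, not a tuning recommendation; check the query output against what you intended before trusting it.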


----------



## Arizor

I'm thinking about selling my Strix OC to grab a Kingpin (listed here on Amazon, trustworthy? Amazon.com: EVGA RTX 3090 Kingpin: Computers & Accessories ). How easy do you folk think it is to sell a Strix with an EKWB waterblock on it? Not sure I can be bothered to put the air cooler back on...!


----------



## Falkentyne

Arizor said:


> I'm thinking about selling my Strix OC to grab a Kingpin (listed here on Amazon, trustworthy? Amazon.com: EVGA RTX 3090 Kingpin: Computers & Accessories ). How easy do you folk think it is to sell a Strix with an EKWB waterblock on it? Not sure I can be bothered to put the air cooler back on...!


That's a scalper. Don't even touch it.
Only buy "shipped and sold by Amazon.com"


----------



## Arizor

Falkentyne said:


> That's a scalper. Don't even touch it.
> Only buy "shipped and sold by Amazon.com"


Thanks mate, I'll stick with the Strix OC and put the Optimus block on it. Just a bit frustrated I can get +1500mem on this but my GPU core tops out, stability-wise for gaming at least, around 2010 effective clock.


----------



## J7SC

Arizor said:


> I'm thinking about selling my Strix OC to grab a Kingpin (listed here on Amazon, trustworthy? Amazon.com: EVGA RTX 3090 Kingpin: Computers & Accessories ). How easy do you folk think it is to sell a Strix with an EKWB waterblock on it? Not sure I can be bothered to put the air cooler back on...!


...remember that bobsled run ? Your next stop:


----------



## Arizor

🤣


----------



## gfunkernaught




----------



## 8472

Arizor said:


> Hey mate,
> 
> 1. I can only speak to the Strix EKWB, but for that I needed 1mm, 1.5mm and 2mm pads.
> 
> 2. Yes, afaik, KPX is certainly top tier and you won't see much difference unless you go LM.


Thanks! 

Which parts of the card did you use each size for?


----------



## Arizor

8472 said:


> Thanks!
> 
> Which parts of the card did you use each size for?


It's hard for me to recall precisely. I've looked at the latest instructions PDF on EKWB's site ( https://www.ekwb.com/shop/EK-IM/EK-IM-3831109832455.pdf ) and they've changed the guidance; apparently everything is 1mm now? I am absolutely _certain_ my instructions showed different sizes for at least the VRM and memory.


----------



## Arizor

Ah here it is - https://www.ekwb.com/shop/EK-IM/EK-IM-3831109832486.pdf , this of course assumes you're using the backplate too.


----------



## 8472

Arizor said:


> It's hard for me to recall precisely, I've looked at the latest instructions PDF on EKWB's site ( https://www.ekwb.com/shop/EK-IM/EK-IM-3831109832455.pdf ) and they've changed the guidance, and apparently everything is 1mm now? I am absolutely _certain_ my instructions showed different sizes for at least the VRM and memory.





Arizor said:


> Ah here it is - https://www.ekwb.com/shop/EK-IM/EK-IM-3831109832486.pdf , this of course assumes you're using the backplate too.


Yes, I'll be using the backplate as well. 

Thanks for your help! 

I'll post my results once all the parts come in.


----------



## ttnuagmada

Arizor said:


> Thanks mate, I'll stick with the Strix OC and put the Optimus block on it. Just a bit frustrated I can get +1500mem on this but my GPU core tops out, stability-wise for gaming at least, around 2010 effective clock.


That's pretty rough. Have you tried reseating?


----------



## macangel

I wanted to get around to ordering some Gelid thermal pads for my Aorus RTX 3090, but I'm having a hard time finding exactly what sizes I need to buy. Not just thicknesses, but also the other dimensions (length and width), so I know how many packages of what to buy. I'm assuming not all RTX 3090s are the same?

A little added note: a while back I bought some generic aluminum heatsinks from Amazon, along with a small roll of double-sided thermal tape. I covered the whole back of the heatsinks, put them on the backplate of my RTX 3090, and sat a 140mm case fan on top. I did this with my GTX 1080 Tis as well, though I had to put the case fan on the side to suck air out from between the cards, as well as placing one on top.

Look pretty? No. But I'm not overly concerned about how my computer case looks, I'm more interested in performance. My computer sits behind a 55" 4K TV, out of view. Since I'm only running a single RTX 3090, I put both heatsinks on the back of it and a single case fan on top.

I haven't done any proper performance testing, as I'm in the middle of getting ready to move, but I know it made a big difference with my 1080 Tis. Hearing about how the 3090's memory heats up like crazy, I know it's doing something, especially with the heatsink getting pretty hot.


----------



## ttnuagmada

Arizor said:


> Ah here it is - https://www.ekwb.com/shop/EK-IM/EK-IM-3831109832486.pdf , this of course assumes you're using the backplate too.


I got one of the first EK blocks, and it was definitely 1mm all the way around on the strix. The backplate was what needed different sizes.


----------



## ttnuagmada

8472 said:


> Yes, I'll be using the backplate as well.
> 
> Thanks for your help!
> 
> I'll post my results once all the parts come in.


Having just switched from the regular backplate to the active backplate, I noticed that my regular backplate didn't make good contact with the 3 module row. Keep a look out for that. Memory temps were maxing at 80C at +500 mem in a timespy GT2 loop. Installing the active backplate, using the Thermalright 12.8w/mk pads and making sure I had good contact got me down to 50C.


----------



## Arizor

ttnuagmada said:


> That's pretty rough. Have you tried reseating?


Yeah, on air I had it directly in the PCIe slot (X570) and it was hitting 1995; now it's vertically mounted (using a gen4 riser cable) and 2010 effective seems to be the spot for stability. In benchmarks I can do around 2085, but not for gaming.

Hopefully my incoming 360 rad and Optimus waterblock help out, with my EKWB block and 240 rad I'm hitting 54C after hours of gaming, hopefully with the aforementioned I get to around 45 max.


----------



## yzonker

Arizor said:


> Yeah, on Air I had it directly in the PCI slot (X570) and it was hitting 1995, now it's vertically mounted (using gen4 riser cable) and 2010 effective seems to be the spot for stability; in benchmarks I can do around 2085 but not for gaming
> 
> Hopefully my incoming 360 rad and Optimus waterblock help out, with my EKWB block and 240 rad I'm hitting 54C after hours of gaming, hopefully with the aforementioned I get to around 45 max.


How far apart are the core and hotspot temps? Seems like something isn't right. Might run DDU if you haven't; I had some weird stability issues that DDU somehow fixed.


----------



## Arizor

yzonker said:


> How far apart are the core and hotspot temps? Seems like something isn't right. Might run DDU if you haven't; I had some weird stability issues that DDU somehow fixed.


On HWInfo I would say there's regularly a 13-14C gap between core and hotspot temps. What's a good number?

edit: To clarify, the hotspot is 13-14C hotter.


----------



## ttnuagmada

Arizor said:


> On HWInfo I would say there's regularly a 13-14C gap between core and hotspot temps. What's a good number?
> 
> edit: To clarify, the hotspot is 13-14C hotter.


mine is about 10C with an EK block.


----------



## Arizor

ttnuagmada said:


> mine is about 10C with an EK block.


Interesting. Hopefully when I get the optimus block I can improve mine.


----------



## 8472

ttnuagmada said:


> Having just switched from the regular backplate to the active backplate, I noticed that my regular backplate didn't make good contact with the 3 module row. Keep a look out for that. Memory temps were maxing at 80C at +500 mem in a timespy GT2 loop. Installing the active backplate, using the Thermalright 12.8w/mk pads and making sure I had good contact got me down to 50C.


Did you stay with the same thickness for the pads? 

I wish EK would release an active backplate for the MSI cards.


----------



## ttnuagmada

8472 said:


> Did you stay with the same thickness for the pads?
> 
> I wish EK would release an active backplate for the MSI cards.


I stayed 100% with EK's recommended sizes.


----------



## satinghostrider

ttnuagmada said:


> I stayed 100% with EK's recommended sizes.


EK's recommended sizes are fine, but watch the Shore hardness of the pads you're replacing them with. A harder pad like the Thermalright Odyssey will let your hotspot delta deteriorate over time due to the pads' poor compressibility. I know because my hotspot delta climbed from 13 to 19 degrees in 2 months. I've swapped my pads to Gelid Extremes and I've been sitting at a consistent 11.5-degree delta max for the last 3 weeks. I'm monitoring it after hours of play almost every day and I hope it stays this way.


----------



## satinghostrider

8472 said:


> Did you stay with the same thickness for the pads?
> 
> I wish EK would release an active backplate for the MSI cards.


If you're gaming, I feel the active backplate isn't worth it unless you're mining or doing something that stresses the memory. I'm only hitting 54 degrees max on my 3090 with the EK backplate with Thermalright pads on the back and Gelid pads on the front.

I'm on the 3090 Gaming X Trio with 450W SuprimX bios flash.


----------



## ttnuagmada

satinghostrider said:


> EK recommended sizes are fine but observe the shore scale of the pads you're replacing with. A harder pad like Thermalright Odyssey's would deteriorate your hotspots over time due to the compressibility of those pads. I know because my hotspots climbed to 19 degrees delta from 13 degrees in 2 months. I've swapped my pads to Gelid Extreme's and I'm sitting consistent at 11.5 degrees delta max for the last 3 weeks. I'm monitoring it after hours of play almost everyday and I hope it stays this way.


I'll keep an eye on it


----------



## 8472

ttnuagmada said:


> I stayed 100% with EK's recommended sizes.


Got it. Thanks! 



satinghostrider said:


> If you're gaming, I feel the active backplate isn't worth it unless you're mining or doing something that stresses the memory. I'm only hitting 54 degrees max on my 3090 with the EK backplate with Thermalright pads on the back and Gelid pads on the front.
> 
> I'm on the 3090 Gaming X Trio with 450W SuprimX bios flash.


That sounds good to me. On air I'm hitting 95C with the VRAM while gaming.


----------



## pcnesgaaa

Hi, I'm trying to pin down the exact spacing of the 3090 screws, meaning the 4 main screws right next to the GPU chip. As I've noticed they're all identical no matter the brand, does anyone know the distance between them and the hole size? Thanks!


----------



## gfunkernaught

Cooling the back of the card will lower front temps. If I turn off the fan that cools the back of the card, not only do my VRAM temps rise but my core temp goes up as well. You have to remove heat from the card regardless of what type of load is being put on it. Gaming, mining, rendering, doesn't matter.


----------



## jura11

long2905 said:


> Mine is an Inno3D iChill X4 with a Bykski active backplate combo block, installed on a combination of rads. The temperature is quite high, I've noticed, as the delta is up to 24C, but I'm not sure if it's due to the hot weather or flow restriction in my loop. I only have 1 D5 installed, though, on a thick 360, an ultra-slim 360, a slim 280, and a slim 120 rad. It's all over the place as I try to fit them into a 5000D case.
> 
> Can you go into detail on how to apply those configs you shared? I just set the core multiplier to 40 with 1.1 vcore to keep things cool, even though I have 3600C14 B-die RAM. The core frequency stays locked at 4 GHz and I can't get it to clock down at idle. What's your 3900X temp under load, also?


Hi there 

What's your ambient temperature? What BIOS are you using? 

With that much radiator space I would expect temperatures in the mid-40s. If you can add an extra D5 pump to the loop, do it. What radiators are you using, what fans, and at what speeds are those fans running?

I've done a few loops with RTX 3090s and Bykski waterblocks, and I could usually keep the delta to 12-14°C max under load, with the cards idling close to water temperature; core temperatures have usually been 38-42°C in the best cases and mid-40s at worst.

VRAM temperatures have usually been in the mid-60s to 70s; I haven't seen them in the 80s or 90s in rendering, gaming, or mining.

Hope this helps 

Thanks, Jura


----------



## yzonker

gfunkernaught said:


> Cooling the back of the card will lower front temps. If I turn off the fan that cools the back of the card, not only do my VRAM temps rise but my core temp goes up as well. You have to remove heat from the card regardless of what type of load is being put on it. Gaming, mining, rendering, doesn't matter.


Yep, my block delta is slightly better with a HSF on the backplate. Not a lot, maybe 2C, but a nice bonus in addition to lower VRAM temps.


----------



## J7SC

gfunkernaught said:


> Cooling the back of the card will lower front temps. If I turn off the fan that cools the back of the card, not only do my VRAM temps rise but my core temp goes up as well. You have to remove heat from the card regardless of what type of load is being put on it. Gaming, mining, rendering, doesn't matter.





yzonker said:


> Yep, my block delta is slightly better with a HSF on the backplate. Not a lot, maybe 2C, but a nice bonus in addition to lower VRAM temps.


I observed the same, and it does make sense that cooling an additional component that generates heat will have a positive net effect on overall temps.

After leak testing, I'm now debating what back-plate to use (before extra back-plate cooling mods). So far, the stock back plate of the Strix OC is actually performing the best


----------



## gfunkernaught

J7SC said:


> I observed the same, and it does make sense that cooling an additional component that generates heat will have a positive net effect on overall temps.
> 
> After leak testing, I'm now debating what back-plate to use (before extra back-plate cooling mods). So far, the stock back plate of the Strix OC is actually performing the best


An exotic, out-there idea I've been chewing on: use a CPU block (or multiple) on the back of the card, mounted to the backplate. Has that been tried yet? 🤔


----------



## heatdotnet

On the topic of block mounting - how tightly torqued down should an EK Vector front block be on a 3090? Specifically for the 4 screws around the core, are you supposed to crank them down until you run out of threads?


----------



## tps3443

satinghostrider said:


> Here are my Gelids I've done for reference.
> 
> View attachment 2520340
> 
> 
> View attachment 2520341
> 
> 
> View attachment 2520343



Wow. Is that just a 1mm thick pad? I'm tempted to test something like this on my Kingpin. If I'm gonna waste $120+ on a bunch of squishy pads, I'm gonna do it right! Lol

42c at 460 watts is good to me.


----------



## J7SC

gfunkernaught said:


> An exotic, out-there idea I've been chewing on: use a CPU block (or multiple) on the back of the card, mounted to the backplate. Has that been tried yet? 🤔


I've toyed with the idea before to mount multiple universal blocks (ie Swiftech below) on the back of my 3090 Strix...have a pile of those universal ones (flat copper mating surface), and used them on both CPU and GPU for HWBot way back. But routing multiple tubing, weight and exact methods of affixing it put that plan on the back-burner


----------



## yzonker

J7SC said:


> I've toyed with the idea before to mount multiple universal blocks (ie Swiftech below) on the back of my 3090 Strix...have a pile of those universal ones (flat copper mating surface), and used them on both CPU and GPU for HWBot way back. But routing multiple tubing, weight and exact methods of affixing it put that plan on the back-burner
> 
> View attachment 2520479


I've definitely seen it done. It was either this forum or Reddit. If the backplate is thick enough you can countersink screws and have studs sticking up for mounting. I guess if you're careful the head could protrude on the inside but I prefer to avoid that myself.


----------



## ttnuagmada

I will also chime in about my active backplate lowering my core delta by 2-3C


----------



## yzonker

Yea you can even see it in the TPU reviews of the EK blocks (with/without active backplate). About 1.5C in that case.









Watercool Heatkiller V RTX 3080/3090 + eBC Backplate Review
Watercool introduces its brand-new GPU block series with the Heatkiller V featuring aesthetic updates, including a bolder design language and optional ARGB lighting, as well as a new cooling engine altogether. We examine the RTX 3080 version and take a look at both its function and form.
www.techpowerup.com


----------



## des2k...

J7SC said:


> I've toyed with the idea before to mount multiple universal blocks (ie Swiftech below) on the back of my 3090 Strix...have a pile of those universal ones (flat copper mating surface), and used them on both CPU and GPU for HWBot way back. But routing multiple tubing, weight and exact methods of affixing it put that plan on the back-burner
> 
> View attachment 2520479


But why...? Just get the Bykski universal block; you can mount it direct or to the backplate, it's the cheapest solution, and it should perform well.


----------



## J7SC

des2k... said:


> But why...? Just get the Bykski universal block; you can mount it direct or to the backplate, it's the cheapest solution, and it should perform well.
> 
> 
> View attachment 2520493


...not a matter of 'cheap' or 'price' - just have more than 8 of the universal coolers laying around anyway...it would just be a nice challenge


----------



## J7SC

@tps3443 @gfunkernaught ...who needs a flow meter ?  

as discussed...dual D5 / 100% speed test loop via XSPC reservoir > Revo Dual D5 > Phanteks 3090 Strix GPU block > Bykski 6900XT GPU block > TT CL 480/64 triple-row rad > back to reservoir.

My first YT vids, sorry about the quality...pumps are silent, CPU fan in the background



Spoiler


----------



## Arizor

Well, that was a useful upgrade from a 240mm rad to a 360 (push/pull, 6 Noctua fans): temps down from 54C to 45C, and even the VRAM benefits from the cooler core/PCB.

Hopefully once I receive the optimus block I can get this temp down further.


----------



## gfunkernaught

J7SC said:


> @tps3443 @gfunkernaught ...who needs a flow meter ?
> 
> as discussed...dual D5 / 100% speed test loop via XSPC reservoir > Revo Dual D5 > Phanteks 3090 Strix GPU block > Bykski 6900XT GPU block > TT CL 480/64 triple-row rad > back to reservoir.
> 
> My first YT vids, sorry about the quality...pumps are silent, CPU fan in the background
> 
> 
> 
> Spoiler


Holy crap, that is a lot of flow! And that is with dual D5s attached to each other, right? I got started on my 1000D build tonight and couldn't see an optimal placement for a second pump the way I wanted it, which would be at the other end of the loop, just to relieve some resistance, you know? That Aquacomputer dual pump looks nice but pricey. Take a look at the case so far. I'm already digging the black/grey look. All that's left is either an additional pump or a new dual D5. Still not sure if a res is necessary, but I think there's room for one, probably a flat one.


----------



## tps3443

gfunkernaught said:


> Holy crap that is a lot of flow! And that is with dual d5s attached to each other right? I got started on my 1000D build tonight and couldn't see an optimal placement for a second pump how I wanted it, which would be at the other end of the loop, just to release some resistance you know? That aquacomputer dual pump looks nice but pricey. Take a look at the case so far. I'm already digging the black/grey look. All thats left is either an additional pump or a new dual d5. Still not sure if a res is necessary but I think there's room for one, probably a flat one.
> View attachment 2520520


I have really thin radiators, but I've thought about getting some larger ones. How are the Thermaltake radiators?

I run (2) 28MM thick 360’s, and (1) 20MM thick 360.

Anyone happen to know what the single best performing radiator is? With high pressure, and fairly high rpm 1600-2100 fans in pull?

I’m doing a little shopping for new rads.


----------



## Shawnb99

tps3443 said:


> I have really thin radiators. But I’ve thought about getting some larger ones. how are the thermaltake radiators?
> 
> I run (2) 28MM thick 360’s, and (1) 20MM thick 360.
> 
> Anyone happen to know what the single best performing radiator is? With high pressure, and fairly high rpm 1600-2100 fans in pull?
> 
> I’m doing a little shopping for new rads.


Thicker isn't always better when it comes to radiators; thick rads usually only pull ahead in push/pull. The HWL GTS can beat many thicker ones depending on fan speed. If you're serious about those high fan speeds, though, look at the GTX and GTR: up to about 1500 RPM the GTS holds its own, above that the GTX wins, and the GTR takes the crown.


----------



## gfunkernaught

tps3443 said:


> I have really thin radiators. But I’ve thought about getting some larger ones. how are the thermaltake radiators?
> 
> I run (2) 28MM thick 360’s, and (1) 20MM thick 360.
> 
> Anyone happen to know what the single best performing radiator is? With high pressure, and fairly high rpm 1600-2100 fans in pull?
> 
> I’m doing a little shopping for new rads.


I'm assuming the thicker rads are better since there is more surface. As you can see I'm doing a push/pull to make sure I can get the air through the thickness. I'm gonna find out how good it all performs once I get this thing up and running.


----------



## Shawnb99

gfunkernaught said:


> I'm assuming the thicker rads are better since there is more surface. As you can see I'm doing a push/pull to make sure I can get the air through the thickness. I'm gonna find out how good it all performs once I get this thing up and running.


It's really down to fan speeds. Up to 1200 RPM the GTS is almost neck and neck with the GTX; past that is where thicker starts to help.


----------



## J7SC

gfunkernaught said:


> I'm assuming the thicker rads are better since there is more surface. As you can see I'm doing a push/pull to make sure I can get the air through the thickness. I'm gonna find out how good it all performs once I get this thing up and running.


...as long as you have the right fans / speed matched to the fpi, thicker rads are better for some obvious reasons: Much more cooling liquid volume...


----------



## ttnuagmada

tps3443 said:


> I have really thin radiators. But I’ve thought about getting some larger ones. how are the thermaltake radiators?
> 
> I run (2) 28MM thick 360’s, and (1) 20MM thick 360.
> 
> Anyone happen to know what the single best performing radiator is? With high pressure, and fairly high rpm 1600-2100 fans in pull?
> 
> I’m doing a little shopping for new rads.


If you go thick, go HWL GTX and go push/pull fan config. Just be aware that they're not very flexible in terms of inlet/outlet and fan direction if you want them to perform optimally. The HWL SR2 is pretty close in performance but does not have the same type of restrictions.

The EK XE is also pretty decent, but the build quality is nowhere near as good as the HWL rads.

It's a few years old at this point, but the xtremerigs rad roundup includes quite a few rads that are still sold today. I would bet a good chunk of us used it as a reference.

These numbers are in watts of heat at a 10C delta T


Radiator Review Round Up 2016 - Page 5 of 10 - ExtremeRigs.net (www.xtremerigs.net)

360 Radiator Review - hardware labs, koolance, ek, xspc, aquacomputer, watercool, water cooling, black ice, liquid cooling, phobya, alphacool, coolgate
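One handy way to use those roundup numbers: radiator dissipation is roughly proportional to the water-to-air delta (a rule-of-thumb assumption on my part, not something stated in the roundup), so you can rescale a 10C rating to your own target delta. A quick sketch, with made-up example wattages:

```python
def rescale_rad_watts(watts_at_10c: float, delta_t: float) -> float:
    """Rough linear rescale of a radiator's rated dissipation.

    Radiator heat transfer is approximately proportional to the
    water-to-air temperature delta, so a rad that moves 480 W at a
    10 C delta moves roughly half that at a 5 C delta.
    """
    return watts_at_10c * (delta_t / 10.0)

# e.g. a rad rated 480 W @ 10 C delta T, run at a quieter 5 C delta:
print(rescale_rad_watts(480, 5))  # -> 240.0
```

It's only a first-order estimate (fan speed and airflow change the picture), but it's good enough for sizing how much rad area a target delta needs.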


----------



## Lobstar

Pssh. But do you guys have dual DUAL D5 pumps???????????


----------



## Shawnb99

Lobstar said:


> Pssh. But do you guys have dual DUAL D5 pumps???????????



I have 6 D5 Nexts in my build


----------



## Lobstar

Shawnb99 said:


> I have 6 D5 next's in my build


----------



## J7SC

Lobstar said:


> Pssh. *But do you guys have dual DUAL D5 pumps*???????????


...why yes, doesn't everybody? 

 ...just a small selection - the 6x D5 build didn't fit into the pic


----------



## Zogge

6 D5s here too. 4x840x140 + 5x360x120 rads. Noctua ippc 3000 rpm fans push config.

I freak out if delta T ambient to water goes over 2C!!


----------



## J7SC

Zogge said:


> 6 D5s here too. 4x840x140 + 5x360x120 rads. Noctua ippc 3000 rpm fans push config.
> 
> I freak out if delta T ambient to water go over 2C !!


...yes, let cooler heads prevail 
...hot is meant for...


----------



## tps3443

HOLY CRAP!!!! These few posts above make my water setup feel so inadequate. I have three little skimpy 360's and a single D5 pumpin' to death.

And the hottest components too.

Direct die 7980XE, and a 3090 Kingpin Hydro Copper.

It's time to step it up. I'm tired of these high 48-52C GPU temps. My radiators are just totally inadequate.

Should I just scrap my current (3) 360’s? And go for something new entirely? Or should I add a large external radiator to my current (3) 360’s?


----------



## gfunkernaught

I think I'm gonna settle for that dual D5 EK pump setup they've got. The Aquacomputer pump seems like it's the same spec-wise, except it has that OLED display and costs more.


----------



## GRABibus

tps3443 said:


> HOLY CRAP!!!! This above few post makes my water setup feel so inadequate. I have three little skimpy 360’s and a single d5 pumpin to death.
> 
> And, the hottest components too.
> 
> Direct die 7980XE, and a 3090 Kingpin hydro copper.
> 
> Its time to step it up. I’m tired of these high 48-52C gpu temps. My radiators are just totally inadequate
> 
> Should I just scrap my current (3) 360’s? And go for something new entirely? Or should I add a large external radiator to my current (3) 360’s?


I propose you send your Kingpin to me 😊


----------



## macangel

Sorry for the repost. I asked before, but pretty sure it's been buried by now.
Looking to replace the thermal pads on my Aorus RTX 3090 Xtreme. Can anyone tell me if all third-party cards are roughly the same for the thickness of the thermal pads? I'm trying to figure out how many of what to order. Looking at Gelid GP-Extreme.


----------



## J7SC

macangel said:


> Sorry for the repost. I asked before, but pretty sure it's been buried by now.
> Looking to replace the thermal pads on my Aorus RTX 3090 Xtreme. Can anyone tell me if all third party cards roughly the same for the thickness of the thermal pads? I'm trying to figure out how many of what to order. Looking at Gelid GP-Extreme


AFAIK, no, not all custom PCB cards use the same / standardized thermal pad thickness


----------



## Arizor

Yep, as @J7SC says, different designs across the brands, requiring different sizes.


----------



## yzonker

macangel said:


> Sorry for the repost. I asked before, but pretty sure it's been buried by now.
> Looking to replace the thermal pads on my Aorus RTX 3090 Xtreme. Can anyone tell me if all third party cards roughly the same for the thickness of the thermal pads? I'm trying to figure out how many of what to order. Looking at Gelid GP-Extreme


Sounds like some trial and error, but this might give you a starting point. 


https://www.reddit.com/r/gigabyte/comments/l2sznz

The dimensions of the VRAM pads will probably be more or less the same for most 3090s, since the chips are arranged the same from what I've seen. Then it's just having enough for the VRMs, which can vary some.


----------



## yzonker

@macangel, might try over here too. Some familiar user names popping up.


3090 Owner's thread (forums.guru3d.com)

Has anyone here tried replacing the thermal pads on their Gigabyte Aorus 3090 Master? If so, how much of a difference did it make to your VRAM temps?...


----------



## tps3443

@J7SC and anyone else‘s opinion too.

Hey, do you think an Alphacool XT45 1080mm would be a worthwhile investment? I don't want to spend too much, as I need to buy another D5 pump and 9-18 decent fans too.

Anyways, these aren't as good as the super nice German-quality round-tube Mora3s, but this is also only $115 USD, so I think it would be interesting to test out.


----------



## tps3443

macangel said:


> Sorry for the repost. I asked before, but pretty sure it's been buried by now.
> Looking to replace the thermal pads on my Aorus RTX 3090 Xtreme. Can anyone tell me if all third party cards roughly the same for the thickness of the thermal pads? I'm trying to figure out how many of what to order. Looking at Gelid GP-Extreme


If the card uses a frame on the PCB like the Strix cards, it looks like those use some 0.5mm-1.0mm thermal pads. If you are unsure, then just purchase 1mm and 2mm pads in a sub 13-14 W/mK rating so the pads won't be too firm. Grab some softer pads so they'll press down.

The Gelid Extreme would be just fine. Maybe not the Ultimates though, they're pretty firm.

Also, don't get too caught up in the thermal transfer ability and specs on these thermal pads. From what I have learned, "quality pad + proper contact" is key, and even just basic lower-end 6 W/mK Fujipoly thermal pads in the proper thicknesses would offer extremely good cooling for a video card with hot memory. It would be good enough to just use those and call it a day, due to how easy they are to work with.

The stock pads on your Aorus Xtreme are probably thicker than they need to be. 0.5mm and 1.5mm Gelid Extreme should work for the entire card, front and rear.

Also, get some pressure-sensitive microfilm and Plastigauge to check the clearances on the GPU first-hand if you're going for extra-firm thermal pads and a perfect fit. The manufacturers probably don't know themselves. Your card's memory is most likely hot under load, so that tells me they didn't install the proper spec thermal pads from the beginning, just like several other 3090 AIB manufacturers.


----------



## J7SC

tps3443 said:


> @J7SC and anyone else‘s opinion too.
> 
> Hey, you think a Alphacool XT45 1080MM would be a worthwhile investment? I dont want to spend too much as I need to buy another d5 pump, and 9-18 decent fans too.
> 
> Anyways, these aren’t as good as the super nice German quality round tube Mora3’s, but this is also only $115 USD. So I think it would be interesting to test out.


I don't know this one, though it's reminiscent of the Watercool MoRas. The dimensions, including thickness, and the copper core of this Alphacool look decent, as does the option to use 140mm or 200mm fans...and the price is right! FPI = 12, re. your fan choices.


----------



## Arizor

@tps3443 looks good, mounting the bigger fans should hopefully lessen the noise too.

Speaking of which, what's everyone's noise tolerance in terms of fan speed? I've got 6 x 120 Noctuas in push/pull on my 360x40mm rad, going great. I use FanControl to cap them at 50%, which means my temps max out at 47C doing anything really demanding, which is fine for me (if I run the fans at 100% I get the GPU to 38C, but the noise is too much for my tastes).


----------



## Lobstar

Arizor said:


> @tps3443 looks good, mounting the bigger fans should hopefully lessen the noise too.
> 
> Speaking of which, what's everyone's noise tolerance in terms of fan speed? I've got the 6 x 120 Noctuas in a push/pull on my 360x40mm rad, going great, I use FanControl to max them out at 50%, which means my temps max out doing anything really demanding at 47C, which is fine for me (if I 100% the fans I get the GPU to 38C, but the noise is too much for my tastes).


I keep 12x 140mm Noctua Industrial 3krpms at 30%, no curve. I have 4x 200mm Noctua 800rpm fans at 100%. I have 2x Monsta 480s and a MORA3 420 Pro. I also have various case fans on auto for various machines in the office. I feel like I have an above average tolerance for fan noise but I also have a portable AC in the room to help keep everything at a reasonable temp. When that is on it's loud AF in here.


----------



## tps3443

Arizor said:


> @tps3443 looks good, mounting the bigger fans should hopefully lessen the noise too.
> 
> Speaking of which, what's everyone's noise tolerance in terms of fan speed? I've got the 6 x 120 Noctuas in a push/pull on my 360x40mm rad, going great, I use FanControl to max them out at 50%, which means my temps max out doing anything really demanding at 47C, which is fine for me (if I 100% the fans I get the GPU to 38C, but the noise is too much for my tastes).


Noise doesn't really bother me. My house is louder than my PC anyway, so I can certainly deal with noise.

I use (13) Arctic P120 BioniX fans (non-RGB models). They are rated at 2.75mm H2O static pressure and work really, really well at low RPM; however, I keep them at 2,050RPM+. I like low noise levels when I test things out on my system, but I kinda just resort back to maximum performance mode lol. I like lower temps better.

Maybe this 45x1080mm radiator will offer enough capability to allow me to run my system quieter.


----------



## Shawnb99

Arizor said:


> @tps3443 looks good, mounting the bigger fans should hopefully lessen the noise too.
> 
> Speaking of which, what's everyone's noise tolerance in terms of fan speed? I've got the 6 x 120 Noctuas in a push/pull on my 360x40mm rad, going great, I use FanControl to max them out at 50%, which means my temps max out doing anything really demanding at 47C, which is fine for me (if I 100% the fans I get the GPU to 38C, but the noise is too much for my tastes).


My fans generally don't go above 600RPM; if they need to go higher, my entire case has been soundproofed. Does a decent job.


----------



## Lobstar

BTW, anyone played with this FF14 benchmark? It seems to be like Time Spy, where it's a complete system test and result. I found it in the 3080 Ti thread.


----------



## Zogge

In my view, to answer the question on tiny vs solid water loop setups:

First find out the delta T for GPU core to water. I think if you are less than 15C there on full load, the application of pads and block is OK. If Optimus, I would say 10C.

Then look at water-to-ambient delta T. If over 5C, then I think you benefit from more rads, higher flow etc. If already at 5C and under, then the steps will give less and less, and at 1-2C you start to face diminishing returns.

Then finally work on ambient temp. Hence pray for colder weather, move north, install aircon, open windows, push hot air out from the computer room etc.
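That triage boils down to two deltas; here's the same logic as a quick sketch (the thresholds are from the post above, the function name and wording are mine):

```python
def loop_triage(gpu_core: float, water: float, ambient: float,
                optimus_block: bool = False) -> list[str]:
    """Apply the rule-of-thumb delta-T thresholds from the post above."""
    advice = []
    core_to_water = gpu_core - water
    water_to_ambient = water - ambient

    # Core-to-water: block mount / pads are fine under ~15 C (10 C for an Optimus block)
    limit = 10.0 if optimus_block else 15.0
    if core_to_water > limit:
        advice.append("remount block / redo pads")

    # Water-to-ambient: over 5 C means more rad area or flow still pays off
    if water_to_ambient > 5.0:
        advice.append("add radiator area or flow")
    elif water_to_ambient <= 2.0:
        advice.append("diminishing returns; work on ambient instead")

    return advice

# e.g. 48 C core / 36 C water / 28 C ambient: block is fine, loop could use more rad
print(loop_triage(gpu_core=48.0, water=36.0, ambient=28.0))
# -> ['add radiator area or flow']
```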


----------



## tps3443

Hey guys, Optimus sells the 3mm Fujipoly full-coverage back-PCB GPU thermal pad for $35 bucks. That is an insane value! I just bought one; I'm gonna modify it to fit my Kingpin 3090. It is shaped for the FTW3 model.

They also sell the front thermal pad kit for only $10 dollars! This is like a 100x76mm Fujipoly sheet for $10 bucks lol. They limit one per customer, but it's enough to cover all front components and then some, apparently.

The rear pad is the size of the whole PCB!

So $48 bucks for front and rear Fujipoly pads lol. I would've spent like $130+ bucks on Amazon for 1/3 this amount.

I bought the front kit and the rear kit.


----------



## tps3443

@Arizor they sell a rear full-cover Fujipoly pad for the Strix too.

Cant wait to test it out!!


----------



## Arizor

@tps3443 that’s awesome, how did you order? Can’t find the pads on their main site.


----------



## ttnuagmada

Lobstar said:


> I keep 12x 140mm Noctua Industrial 3krpms at 30%, no curve. I have 4x 200mm Noctua 800rpm fans at 100%. I have 2x Monsta 480s and a MORA3 420 Pro. I also have various case fans on auto for various machines in the office. I feel like I have an above average tolerance for fan noise but I also have a portable AC in the room to help keep everything at a reasonable temp. When that is on it's loud AF in here.


Man, what's the point of that much rad space if you can't set it all up so quiet that you can't even tell your computer is turned on?


----------



## RosaPanteren

gfunkernaught said:


> an exotic and out there idea I've been chewing on:. Use a CPU block (or multiple) on the back of the card, mounted to the backplate. Has that been tried yet? 🤔


I'm cooling the backplate with an old Threadripper block that was lapped down for max m2m surface.

Tested out small CPU blocks along with RAM coolers, but the Threadripper block works best by far.

I glued the block to the backplate with Arctic Silver adhesive.

Some initial test stats with the Alphacool GameRock block and the Frankenstein backplate:

Playing BFV @1440p 250FPS for an hour and 20 min had a max GPU temp of 33c and average temp of 28c. Water temp was 22-26c.

So that gives a delta of 6-9c under gaming load.

LM used between block and die, and I also glued (Arctic Silver thermal glue) a Threadripper block to the backplate.

Memory junction temp for this gaming session was a high of 48c with an avg. of 44c. The Threadripper block is 2nd in the same loop after the die. Loop consists of 2xD5 with a Mora360, gpu->backplate.

Bios 520w with rebar



http://imgur.com/MUuYHqK


I scored 15 417 in Port Royal (www.3dmark.com)

AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
http://imgur.com/d51Dqfy


----------



## macangel

yzonker said:


> Sounds like some trial and error, but this might give you a starting point.
> 
> 
> __
> https://www.reddit.com/r/gigabyte/comments/l2sznz
> 
> The dimensions of the VRAM pads will be more or less the same for most 3090s probably since the chips are arranged the same from what I've seen. Then it's just having enough for the VRMs which can vary some.


Yea, I found that thread on Reddit too, and if you scroll down there's a picture where the guy shows the thicknesses of the pads for the front plate cooler.

I just wasn't sure how that translates into how many of each pack (like the VRAM: I'm assuming it's 80mm long, or is it more? It looks like all one piece, and the packs I find online are 80mm x 40mm. But also, how wide?). And then also trying to figure out the back plate too.

A while back I bought some double-sided thermal tape and a couple of aluminum heatsinks (140x140mm I think) from Amazon. I put them on the back of my 1080 Ti cards with a 140mm case fan on top. That made a big difference. I did the same with my 3090, putting both heatsinks on the back and the case fan on top.

I haven't had a chance to do any real overclocking and testing on my Core i9 11900K, my RAM, or my 3090 yet. I have to move at the end of the month and am still searching for a place and packing, and I'm old, lol. When I get my downtime, I'm just gaming. Through a little bit of tweaking I got the VRAM to +1000 in Afterburner, no artifacts in FurMark, but the infamous NVLDDMKM error when playing Horizon Zero Dawn (something to note: my gaming PC is hooked up to three 55" 4K TVs, so 11,520x2160). My temps were only capping at 92C for the VRAM and about 67C for the core, so I think it was doing pretty well.


----------



## des2k...

tps3443 said:


> Hey guys Optimus sales the 3mm Fujipoly full coverage back PCB GPU thermal pad for $35 bucks. That is in insane value! I just bought one, I’m gonna modify it to fit my kingpin 3090. It is shaped for the FTW3 Model.
> 
> They also sale the front thermal pad kit for only $10 dollars! This is like a 100x76mm fujipoly sheet for $10 bucks lol. They limit one per customer. But it’s enough to cover all front components and some apparently.
> 
> The rear pad is the size of the whole PCB!
> 
> So $48 bucks for front and rear Fujipoly pads Lol. I would’ve spent like $130+ bucks on Amazon for 1/3 this amount.
> 
> I bought front kit, and rear kit.


Those pads are for existing Optimus blocks, I think the FTW3, and they're only 0.5mm on the front. Last I checked they are only 6 W/mK, which is why the price is good. Pad performance won't be as good as some premium 15 W/mK pads.


----------



## yzonker

macangel said:


> yea, I found that thread on Reddit, too, and if you scroll down there's a picture that the guy shows the thicknesses of the pads for the front plate.
> front plate cooler pads
> I just wasn't sure how that translates into how many of each pack (like the VRAM, I'm assuming it's 80mm long? or is it more? It looks like all one piece, and the packs I find online are 80mm x 40mm. But also, how wide?). And then also trying to figure out the back plate, too).
> A while back I bought some double sided thermal tape, and a couple of aluminum heatsinks (140x140mm I think) from Amazon. I put them on the back of my 1080ti cards, and a 140mm case fan on top. That made a big difference. I did the same with my 3090, putting both heatsinks on the back of the 3090 and the case fan on top. I haven't had a chance to do any real overclocking and testing, on my Core i9 11900k, my RAM, or my 3090 yet. I have to move at the end of the month and still searching for a place and packing, and I'm old, lol. When I get my downtime, I'm just gaming. Through a little bit of tweaking I got the VRAM to +1000 in Afterburner, no artifacts in FurMark, but the infamous NVLDDMKM error when playing Horizon Zero Dawn (something to note, my gaming PC is hooked up to three 55" 4K TVs, so 11,560x2160). My temps were only capping at 92*C for the VRAM, and about 67*C for the core, so I think it was doing pretty good.


These are the VRAM pad dimensions Corsair gave me. I think it's the same for all cards.

2x Thermal pad 52x15
2x Thermal pad 5x5
1x Thermal pad 38x15
1x Thermal pad 15x15

I needed 2 80x40 pads to do the VRAM. I ended up not having enough for the 15x15 on each side. 

The VRM pads can definitely be longer than 80mm. I'm not sure what people do. For one of mine I just put 2 pieces end to end and tried to be sure there was no gap.
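For anyone planning purchases from a cut list like that, a rough sanity check is total pad area vs sheet area, plus confirming each piece fits on a sheet in some orientation. A little sketch (the helper is mine; sizes in mm from the list above; note this is only an area lower bound, and real cutting waste is exactly why two 80x40 sheets ended up being needed in practice):

```python
# Rough fit check for cutting a VRAM pad list out of 80x40 mm sheets.
# This only checks total area and per-piece fit; it does not do real
# 2D cutting layout, so treat the result as a lower bound.

SHEET = (80, 40)
# (width, height, count) per the list above
PIECES = [(52, 15, 2), (5, 5, 2), (38, 15, 1), (15, 15, 1)]

def sheets_needed(pieces, sheet):
    sheet_area = sheet[0] * sheet[1]
    total = 0
    for w, h, n in pieces:
        # each piece must fit on a sheet in some orientation
        assert (w <= sheet[0] and h <= sheet[1]) or (h <= sheet[0] and w <= sheet[1])
        total += w * h * n
    return -(-total // sheet_area)  # ceiling division

print(sheets_needed(PIECES, SHEET))  # -> 1 (area-wise; buy a spare for waste)
```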


----------



## des2k...

RosaPanteren said:


> I’m cooling the backplate with an old Threadripper block that was lapped down for max m2m surface.
> 
> Tested out small cpu blocks along with ram coolers, but threadripper block works best by far.
> 
> I glued the block with arctic silver adhesiv to the backplate.
> 
> Some initial test stats with Alphacool Gamerock block and the Frankenstein backplate:
> 
> Playing BFV @1440p 250FPS for an hour and 20 min had a max GPU temp of 33c and average temp of 28c. Water temp was 22-26c.
> 
> So that gives a delta of 6-9c under gaming load......
> 
> LM used between block and die, and I also glued(artic silver thermal glue) a threadripper block to the backplate.
> 
> So memory junction temp for this gaming session was a high of 48c with an avg. of 44c. Threadripper block is 2nd in same loop after die. Loop conists of 2xD5 with a Mora360, gpu->backplate.
> 
> Bios 520w with rebar
> 
> 
> 
> http://imgur.com/MUuYHqK
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 417 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> http://imgur.com/d51Dqfy


What are the memory temps with Quake RTX after 5 mins or so?


----------



## J7SC

RosaPanteren said:


> I’m cooling the backplate with an old Threadripper block that was lapped down for max m2m surface.
> 
> Tested out small cpu blocks along with ram coolers, but threadripper block works best by far.
> 
> I glued the block with arctic silver adhesiv to the backplate.
> 
> Some initial test stats with Alphacool Gamerock block and the Frankenstein backplate:
> 
> Playing BFV @1440p 250FPS for an hour and 20 min had a max GPU temp of 33c and average temp of 28c. Water temp was 22-26c.
> 
> So that gives a delta of 6-9c under gaming load......
> 
> LM used between block and die, and I also glued(artic silver thermal glue) a threadripper block to the backplate.
> 
> So memory junction temp for this gaming session was a high of 48c with an avg. of 44c. Threadripper block is 2nd in same loop after die. Loop conists of 2xD5 with a Mora360, gpu->backplate.
> 
> Bios 520w with rebar
> 
> 
> 
> http://imgur.com/MUuYHqK
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 417 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> http://imgur.com/d51Dqfy


Nice! I actually have that same Heatkiller TR block - but it's in use ...anyhow, I ordered 2x of this as I have custom cooling designs for the backplates of both the 3090 and 6900...


----------



## RosaPanteren

I'm on vacation now but could test in about a week's time.


----------



## tps3443

des2k... said:


> those pads are for existing optimus blocks, I think ftw3, and those are only 0.5mm front and last I checked they are only 6wmk so why the price is good. Pad performance won't be as good as some premium 15wmk.



I am trying to lower the overall GPU temps as much as possible, and after a lot of research, and seeing what other 3090 KP HC owners have done, it turns out that achieving a 10C delta is very possible with this block, although the entire rear PCB must be covered with a thermal pad. It doesn't have to be a super-high-quality pad either. I am taking the entire card down again and installing the brand new thermal pad kit EVGA sent me. The OEM pads are actually Ziitek 13 W/mK pads, which would explain why my memory hot spot has never exceeded 60-62C.

I'm going to install the Fujipoly 0.5mm thermal pads only on the VRMs, which currently use a thermal gap filler provided in a tube from EVGA. My VRM temps are also pretty good, but I would rather use a 0.5mm Fujipoly pad here instead of the included inductor paste filler, which is only 1.8 W/mK conductivity.

The Fujipoly pads are certainly gonna work well to help remove all of that heat on the rear PCB portion.

I also ordered an Alphacool 1080mm x 45mm-thick radiator to help lower overall water temps.


----------



## des2k...

tps3443 said:


> I am trying to lower the overall GPU temps as much as possible. And after a lot of research, and seeing what other 3090KP HC owners have done. It turns out that achieving a 10C delta is very possible with this block. Although, the entire rear PCB must be covered with a thermal pad. It doesn’t have to be a super high quality pad either. I am taking the entire card down again, and installing the brand new thermal pad kit evga sent me. The OEM pads are actually Ziitek 13W/mk pads. Which would explain why my memory hot spot has never exceeded 60-62C.
> 
> I going to only install the Fujipoly 0.5mm thermal pads on the vrm’s only which currently use a thermal gap filler provided in a tube from evga. My vrm temps are also pretty good. But I would rather use a 0.5mm fujipoly pad here. Instead of the included inductor paste filler which is only 1.8K/w conductivity.
> 
> 
> The Fujipoly pads are certainly gonna work well to help remove all of that heat on the rear PCB portion
> 
> I also ordered a Alphacool 1080mm*45MM thick radiator to help lower overall water temps.


Yeah, a 10c delta is very nice to get  you'll be part of the club soon enough lol

It's not 37c after hours of playing, but yeah, here's mine in Quake RTX at 600w or so.

Not insanely hard to do on the EK block with regular paste, but I know users here with LM that can run this type of high load for long periods, never going over 39c.


----------



## tps3443

des2k... said:


> yeah 10c delta is very nice to get  you'll be part of the club soon enough lol
> 
> It's not 37c after hours of playing, but yeah, here's mine Quake RTX 600w or so
> 
> Not insanely hard on the EK block with regular paste, but I know users here with LM that can run this type of high load for long periods, never going over 39c.
> 


Yeah, that's really amazing.

It's mostly my current water loop that is inadequate. And I'm just going all out on this 3090 KP to help get there, with maximum thermal pad coverage. I don't wanna have to go back in again, thinking I could have done some things better with the block and its pads.

I'm gonna mount the big 1080mm radiator on my side panel.


----------



## PLATOON TEKK

What y’all know about a 14 x PMP500 (16 l/pm) loop though 😅. 4 pumps in the chillers and an additional 10 pumps in the loop (had to add 3 more from pic). Ek D5s are 23 watts , these are 32w each (448w).


----------



## J7SC

PLATOON TEKK said:


> What y’all know about a 14 x PMP500 (16 l/pm) loop though 😅. 4 pumps in the chillers and an additional 10 pumps in the loop (had to add 3 more from pic).


moar cold = good cold 
(especially as we're heading for 32 C the next few days  )


----------



## PLATOON TEKK

J7SC said:


> moar cold = good cold
> (especially as we're heading for 32 C the next few days  )


Haha agree. I hate humidity like I hate my ex.








Live by the coast, so I'm running two "navy grade" desiccant dehumidifiers; they are battling an entire ocean and losing ⚰


----------



## J7SC

...3rd heat dome on the way...supposed to get 'better after that', but whatever is happening seems outside the training and inference of their big data models 

...fortunately, electricity is cheap here (+ 96% from hydro) and we have a nice AC which will be doing some OT again


----------



## des2k...

PLATOON TEKK said:


> What y’all know about a 14 x PMP500 (16 l/pm) loop though 😅. 4 pumps in the chillers and an additional 10 pumps in the loop (had to add 3 more from pic). Ek D5s are 23 watts , these are 32 each.


I have 2 D5 EK Varios with Freezemod tops; I put the amp clamp on the 12v line.

I have exactly 23w on one and only 18w on the other at max.

Not a big difference (5w), but I'm puzzled since they push the same l/m.

On the 18w one, the impeller is perfectly balanced, no weights added. Can't believe that would require 5w less.
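For anyone wanting to repeat that measurement, pump power from an amp clamp on the 12v line is just P = V x I (assuming a DC clamp reading on the single 12v conductor, and ignoring rail sag). A tiny sketch with example currents that would match the readings above:

```python
def pump_watts(volts: float, amps: float) -> float:
    """DC electrical power: P = V * I."""
    return volts * amps

# ~1.92 A on a 12 V rail is about the 23 W reading;
# ~1.5 A would correspond to the 18 W pump.
print(round(pump_watts(12.0, 1.92), 1))  # -> 23.0
print(pump_watts(12.0, 1.5))             # -> 18.0
```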


----------



## PLATOON TEKK

des2k... said:


> I have 2 D5 ek vario with freezemod tops, I put the amp clamp on the 12v line.
> 
> I have exactly 23w on one and the other only 18w at max.
> 
> Not a big difference(5W) but I'm puzzled since they push the same l/m.
> 
> The 18w, the impeller is pefectly balanced, no weights added. Can't believe that would require 5w less.


Hmm, that is pretty trippy. What coolant are you using? Could you maybe have build-up in the 23w pump? You could also try turning the pump speeds all the way down and back up a few times (carefully). That has resolved an odd low-power/lpm state one of my pumps was in.

When planning with Koolance, they recommended running all pumps at the same wattage and flow rate; apparently this prevents wear. However, I've had mad different speeds, wattages and head pressures in one loop just fine before, for years.



J7SC said:


> ...3rd heat dome on the way...supposed to get 'better after that', but whatever is happening seems outside the training and inference of their big data models
> 
> ...fortunately, electricity is cheap here (+ 96% from hydro) and we have a nice AC which will be doing some OT again


I feel you on that. The bill here is madness. There are 9 chillers (ranging from small- to medium-fridge cost) in operation in total, along with tons of ACs and dehumidifiers, ouch.


----------



## tps3443

Hey guys, I removed my bottom 20mm-thick 360mm radiator, and my 3090 Kingpin temps are actually better, along with lower CPU temps too. I mounted my reservoir directly off a radiator so I can have my reservoir return line submerged. (Unfortunately I couldn't mount it the normal way.) So I just came up with this temporary solution along the way.

Does my reservoir look stupid? Because it certainly works great! My 3090 Kingpin Hydro Copper runs a lot cooler, at least 3-4C cooler on the GPU, just with the massive increase in water flow! My pump is PUMPIN good!

Now I am thinking the XSPC 20.5mm slim 360 radiator is really, really restrictive on water flow and was eating up all of the pressure in my water loop. It also was blowing out air that was 5-6C warmer than my other radiators, and this hot air was blasting my 3090 Kingpin's water block.

So the big 1080mm rad is on the way. I had a theory, and I'm certainly glad I was right: my water pressure is tremendously better right now! When I fired up my EK D5 to prime the system, it took no time at all, besides having to fill up the reservoir 15-20 times or so due to its miniature size. No need to shake the system around to get the trapped air out.

Anyways, my water flow is huge, and there are (3) intake fans blowing directly on the 3090 instead of hot radiator air now.

I can't wait for the thick boy 1080mm rad to show up!


----------



## Shawnb99

PLATOON TEKK said:


> What y’all know about a 14 x PMP500 (16 l/pm) loop though 😅. 4 pumps in the chillers and an additional 10 pumps in the loop (had to add 3 more from pic). Ek D5s are 23 watts , these are 32w each (448w).



And I thought I was mental with 6 pumps. What in god's name are you cooling with all that? I'm humbled by that.... like wow


----------



## Outcasst

Got my active backplate from EK installed yesterday. I previously had the passive backplate from EK. VRAM temps whilst mining ETH have gone down from 84c to 64c. Doesn't seem to have changed the core temps.


----------



## PLATOON TEKK

Shawnb99 said:


> And I thought I was mental with 6 pumps. What in gods name are you cooling with all that? I'm humbled by that.... like wow


Much appreciated man. What's wild is that every single pump is running at full 10/10. Those chillers have the ability to hit -20C, coolant -30C, hence the insulation. I've just been waiting on more humidity-control equipment to arrive. The aim is to be the coldest "on water" machine on earth. Did some initial tests a few weeks back and had the fastest PR on water globally, and that was only set to 2C. But the amount of power this setup consumes is madness. It also creates an incredible amount of heat. The chiller on the bottom is actually more powerful than all 4 units up top (but required a dedicated HC fuse).

The build attached was a quick test and has been disassembled. Upon my return from traveling I will hopefully spend some more time on it.

edit: 65 fans sounds pretty mad too lol!



Spoiler: KP Build


----------



## des2k...

Outcasst said:


> Got my active backplate from EK installed yesterday. I previously had the passive backplate from EK. VRAM temps whilst mining ETH have gone down from 84c to 64c. Doesn't seem to have changed the core temps.
> 


Yeah, the active plate only drops temps for regular games, not so much for Quake RTX or mining. That's what I got from various posts on Reddit.

A large heatsink with thermal tape + an 80mm fan will also get you 64°C in Quake RTX/mining.

I always found people posting about core temp drops with the active plate weird. Technically, with a good mount on the front block, you won't get much heat on the back.

With the core already around 35-40°C at 500W+, I just don't see how a waterblock on the back would get you 1-5°C cooler.


----------



## steadly2004

Subbed…. Got a card waiting on me at the post office as I wasn’t able to be home for the signature delivery.

cheapest I could get was a HP omen pulled 3090. Already have some pads on the way for memory and a heatsink for the backplate.

Thinking of flattening/grinding (basically lapping) the backplate for the heatsink. It has ridges and maybe a small plastic coating that interferes with heat transfer? I’ll probably try the heatsink directly and pad swap and see the temps before hacking at it. 

Not sure how I'll do it, but currently in my head I was thinking of using my knife-sharpening stones, as I have a few different grits and can polish it after making it flat. I'd probably just spray-paint the bare metal after applying the heatsink, but I don't know what's best for heat transfer.

do you guys think thermal tape is a decent idea for the backplate heatsink? I got some of that coming too but no experience using it.


----------



## Nizzen

des2k... said:


> Yeah active plate only drops temps for regular games not so much for quake rtx or mining. That's what I got from various post on reddit.
> 
> A large heatsink with thermal tape +80mm fan will also get you 64c in quake rtx/mining.
> 
> I always found people posting about core temp drop with active plate to be weird. Technically a good mount on the front block, you won't get much heat on the back.
> 
> Anything around 35c- 40c on the core at 500w+ I just don't see how a waterblock on the back core would get you 1c-5c cooler


On one of my 3090 Strix cards with an EK block and passive backplate, I'm using the MP5Works serial backplate water cooler. I got lower VRAM temps and lower core temps. Go figure...
Sorry, but it is what it is 
Maybe 3-5°C lower core temp, just slapping the cooler on and plugging it into the loop.


----------



## Jpmboy

PLATOON TEKK said:


> Much appreciated man. What’s wild is that every single pump is running at full 10/10. Those chillers have the ability to hit -20C, coolant -30C, hence the insulation. I’ve just been waiting on more humidity control equipment to arrive. The aim is to be the coldest “on water” machine on earth. Did some initial tests a few weeks back and was fastest PR on water global and only set to 2c. But the amount of power this setup consumes is madness. It also creates in an incredible amount of heat. The chiller on the bottom is actually more powerful than all 4 units up top (but required dedicated HC fuse).
> 
> The build attached was a quick test and has been disassembled. Upon my return from traveling I will hopefully spend some more time on it.
> 
> edit: 65 fans sounds pretty mad too lol!


I have those Koolance chillers also... just too loud by themselves for 24/7 use. But they do work well!


----------



## ttnuagmada

Outcasst said:


> Got my active backplate from EK installed yesterday. I previously had the passive backplate from EK. VRAM temps whilst mining ETH have gone down from 84c to 64c. Doesn't seem to have changed the core temps.
> 



Did you stick with the stock thermal pads?

Also, have you tried a stress test like a looped 3Dmark or anything to see what kind of vram temps you'd get? I got the active backplate too, but I haven't done any mining on mine so I don't know how our temps compare. My VRAM in a looped 3dmark GT2 test topped out at about a 15C Delta T or around there (this was with +500 on the mem). I also switched to the Thermalright 12.8 w/mk pads all the way around.


----------



## satinghostrider

ttnuagmada said:


> Did you stick with the stock thermal pads?
> 
> Also, have you tried a stress test like a looped 3Dmark or anything to see what kind of vram temps you'd get? I got the active backplate too, but I haven't done any mining on mine so I don't know how our temps compare. My VRAM in a looped 3dmark GT2 test topped out at about a 15C Delta T or around there (this was with +500 on the mem). I also switched to the Thermalright 12.8 w/mk pads all the way around.


You should have used Gelid Extremes instead. The Thermalright doesn't give you optimal contact with the die. I went through that and my deltas increased over time. I've been consistently at 11-12.5 degrees for the last month or so.


----------



## KedarWolf

Does anyone that has bought from the official GELID store have a GELID referral code?

It won't let me use my own so I can't save 15%.


----------



## ttnuagmada

satinghostrider said:


> You should have used Gelid Extreme's instead. The Thermalright does not give you optimal contact with the die. I went through that and my deltas increased over time. I'm consistent now 11-12.5 degrees for the last month or so.


I probably also should have bought a million bitcoins back in 2010.


----------



## GAN77

KedarWolf said:


> It won't let me use my own so I can't save 15%.


Try gelidstore.refr.cc

FRIEND-HJRDP72


----------



## Falkentyne

KedarWolf said:


> GP-EXTREME THERMAL PAD
> 
> http://gelidstore.refr.cc/jamesj 15% off official GELID store.
> 
> And here's why to get Extreme, not Ultimate or Thermalright pads. They are soft and compressible, not hard and flaky like the Ultimate and Thermalright pads.
> 
>
> [Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)
>
> 
> "You should have used Gelid Extreme's instead. The Thermalright does not give you optimal contact with the die. I went through that and my deltas increased over time. I'm consistent now 11-12.5 degrees for the last month or so."
> 
> 
> 


I think I deserve a little credit for wasting too much money to be one of the first to find this out (though it was actually a 3080 FE user on Reddit who first determined it).


----------



## ttnuagmada

So I keep seeing people saying that the Thermalright pads are hard and flaky, but that was absolutely _not_ the case with the ones I bought. They are extremely mushy. Are we certain some of you just didn't get a bad batch or something?


----------



## yzonker

ttnuagmada said:


> So I keep seeing people saying that the termalright pads are hard and flaky, but that was absolutely _not _the case with the ones i bought. They are extremely mushy. Are we certain some of you just didn't get a bad batch or something?


Well, it says 30-55 Shore vs 35 for Gelid. Maybe some are actually 55? Otherwise, dunno. There definitely seem to be mixed results for thermal pads (except possibly for Gelid).


----------



## J7SC

ttnuagmada said:


> So I keep seeing people saying that the termalright pads are hard and flaky, but that was absolutely _not _the case with the ones i bought. They are extremely mushy. Are we certain some of you just didn't get a bad batch or something?


All of the Thermalright pads I have installed are also not hard and flaky but quite soft and pliable; performance has been great so far (~ 5 months). I do however have one unopened packet (from Amazon) that seems to have a slightly different package colouring which might indicate it is a fake - I'll check the contents out over the next couple of weeks.


----------



## Falkentyne

yzonker said:


> Well it says 30-55 shore vs 35 for Gelid. Maybe some are actually 55? Otherwise dunno. There definitely seems to be mixed results for thermal pads (except for Gelid possibly).


This most likely depends on the thickness.
I'm guessing 35 Shore is for 0.5mm and 55 Shore is for 1.5mm-2.0mm.

It's similar with Arctic pads:
Arctic 0.5mm and 1.0mm pads are Shore 25.
1.5mm pads are Shore _50_.

I have the packaging right in front of me...


----------



## satinghostrider

The Thermalrights just seem denser and not so squishy. It isn't that they're flaky or hard. You'll understand what I mean when you remove the block from a card that has had Gelid pads versus Thermalright pads. On the Gelid pads, you can actually see the imprint of the memory/VRM lettering, meaning the Gelid pads actually wrap somewhat 'around' these crucial components. I think that is why even the die gets better contact: the block is able to fully seat properly and 'sink' as the pads allow more squish. The Thermalrights feel like they have an absolute stop point when mounting.


----------



## tps3443

des2k... said:


> Yeah active plate only drops temps for regular games not so much for quake rtx or mining. That's what I got from various post on reddit.
> 
> A large heatsink with thermal tape +80mm fan will also get you 64c in quake rtx/mining.
> 
> I always found people posting about core temp drop with active plate to be weird. Technically a good mount on the front block, you won't get much heat on the back.
> 
> Anything around 35c- 40c on the core at 500w+ I just don't see how a waterblock on the back core would get you 1c-5c cooler


Very true. But if you have poor die-to-block contact like me, cooling the back of the PCB down certainly helps.

I still have not fixed my poor deltas yet. Still waiting on a few more thermal pads and cooling parts to come in before I take the KP HC apart again.

I did receive my Fujipoly 3090 3mm rear pad though.

This thing is a beast! And at $38 it's not bad at all for such a large mid-level Fujipoly pad.

I am wondering if I should install just this pad onto my bare Kingpin PCB by itself, or use my new OEM EVGA 3090 KP thermal pads on the crucial components and use the large Fujipoly pad as a filler for all of the empty/bald spots on the back of my PCB.

What do you think would work better: just this whole pad on the back, or a combination of this pad and 13 W/mK pads?


----------



## des2k...

tps3443 said:


> Very true. But, if you have poor die to block contact like me, cooling the back PCB down certainly helps.
> 
> I still have not fixed my poor deltas yet. Still waiting on a few more thermal pads, and cooling parts to come in before I take the KP HC apart again.
> 
> I did receive my Fujipoly 3090 3mm rear pad though.
> 
> This thing is beast! And for $38 dollars it’s not bad at all for such a large mid level Fujipoly pad.
> 
> I am wondering if I should install just this pad on to my bare kingpin PCB by its self, or use my new OEM evga 3090KP thermal pads on crucial components, and use this large Fujipoly pad for a filler, to fill in all of the empty/bald spots on the back if my PCB.
> 
> 
> What do you think would work better? Run just this whole pad on the back, or use a combination of this pad, and 13w/k pads?


Prob others can answer since I don't have this board/backplate. For my EK backplate, it was hard to find the correct thermal pad thickness outside the hotspot zones.

When I tried putting extra pads in the past, I lost most of the pressure on the backplate for the important areas.

The Optimus block doesn't have standoffs, so the backplate is used for support; that's why they use that big thermal pad.

For example, there's no heat to extract outside these zones at the back (VRM controllers, back of the VRM, back of the core & mem).










If it's between Fujipoly 6 W/mK vs 13 W/mK, I would use the 13 W/mK unless the Fujipoly has a better fit.
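For what it's worth, the W/mK trade-off being discussed can be sanity-checked with the one-dimensional conduction formula ΔT = q·t / (k·A). Everything numeric below (pad footprint, thickness, watts leaking through the backplate) is a made-up placeholder for illustration, not a measurement from any card:

```python
# Hypothetical sketch of steady-state conduction through a thermal pad:
# dT = q * t / (k * A). All figures below are illustrative assumptions.

def pad_temp_drop(power_w: float, k_w_per_mk: float,
                  area_m2: float, thickness_m: float) -> float:
    """Temperature drop (in K) across a pad conducting `power_w` watts."""
    return power_w * thickness_m / (k_w_per_mk * area_m2)

AREA = 0.060 * 0.060   # assumed 60 x 60 mm pad footprint, in m^2
THICK = 0.003          # 3 mm pad, in m
HEAT = 20.0            # assumed watts leaking through the backplate

drop_6 = pad_temp_drop(HEAT, 6.0, AREA, THICK)    # mid-grade pad
drop_13 = pad_temp_drop(HEAT, 13.0, AREA, THICK)  # 13 W/mK pad

# The drop scales as 1/k, so the 13 W/mK pad cuts it by 6/13 --
# but only if contact (fit) is equally good, which is the real caveat.
print(f"6 W/mK: {drop_6:.2f} K, 13 W/mK: {drop_13:.2f} K")
```

Which is why "unless the other pad has a better fit" matters: a pad that doesn't fully contact the component has a much larger effective resistance than the bulk W/mK number suggests.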


----------



## Lobstar

Diablo II Resurrected is the first game I've seen consistently pull over 500w for the entire game session.


----------



## des2k...

Lobstar said:


> Diablo II Resurrected is the first game I've seen consistently pull over 500w for the entire game session.


500w for that... why? I would feel guilty wasting 500w playing Diablo 2

no fps cap ?


----------



## Lobstar

des2k... said:


> 500w for that... why? I would feel guilty wasting 500w playing Diablo 2
> 
> no fps cap ?


That was less than 60 fps with 2x resolution scaling.


----------



## kot0005

Is this any good ? NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII FORMULA (3dmark.com) 
3090 Strix White OC, stock 4802 BIOS with max PL and +120 MHz core / +200 MHz VRAM
PBO2, -10 on the 5900X


----------



## Nizzen

kot0005 said:


> Is this any good ? NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII FORMULA (3dmark.com)
> 3090 strix white OC, stock 4802 bios with max pL and +120Mhz core +200mhz vram
> pbo 2, -10 on 5900x


Run Time Spy or Port Royal. I haven't tried the ancient Fire Strike in a long time.

It's easier to compare with the newer benches. Fire Strike is way too CPU-bound and frequency-bound to compare GPUs.


----------



## Arizor

As @Nizzen says, run Time Spy and Port Royal to really get an idea of how your GPU stacks up.

Your memory overclock is very low compared to your core OC, any reason?

Speaking of which, what are folks' thoughts on overclocking GPU core vs mem? I find that I get quite a lot more frames overclocking mem vs GPU.

For example, running at 2055 MHz with +500 mem vs 1995 MHz with +1000 mem, I get about 5% more frames on the latter. Has anyone done some rigorous comparisons?


----------






## J7SC

Arizor said:


> As @Nizzen says, Timespy and Port Royal to really get an idea of how your GPU stacks up.
> 
> Your memory overclock is very low compared to your core OC, any reason?
> 
> Speaking of which, what are folks thoughts on over clocking gpu core vs mem? I find that I get quite a lot more frames over clocking mem vs gpu.
> 
> For example, running at 2055mhz with +500 mem vs 1995mhz +1000mem, I get about 5% more frames on the latter. Has anyone done some rigorous comparisons?


...worth adding that the actual rated speed of the Micron GDDR6X is 1313 MHz to begin with (at least on my Strix OC, per the TPU excerpt below - probably other cards with that VRAM as well), though the default is set to 1219 MHz, perhaps for heat / power-consumption reasons?
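As a quick sanity check on those two clocks: the spec-sheet pair 1219 MHz / 19504 MT/s implies a 16x effective multiplier for this GDDR6X. A minimal sketch, assuming that same 16x ratio also applies to the 1313 MHz rated figure:

```python
# GDDR6X base clock vs effective per-pin data rate on the 3090.
# The 16x multiplier is taken from the spec-sheet pair 1219 MHz -> 19504 MT/s.

TRANSFERS_PER_CLOCK = 16

def effective_gbps(base_mhz: float) -> float:
    """Effective per-pin data rate in Gbps for a given base clock in MHz."""
    return base_mhz * TRANSFERS_PER_CLOCK / 1000

default_rate = effective_gbps(1219)  # NVIDIA's shipping default
rated_rate = effective_gbps(1313)    # Micron module rating per the TPU excerpt

headroom = rated_rate / default_rate - 1
print(f"default {default_rate:.1f} Gbps, rated {rated_rate:.1f} Gbps, "
      f"about {headroom:.1%} factory headroom")
```

Under that assumption the 1313 MHz rating works out to ~21 Gbps per pin, i.e. roughly 7.7% of factory headroom over the shipping 19.5 Gbps.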


----------



## tps3443

Arizor said:


> As @Nizzen says, Timespy and Port Royal to really get an idea of how your GPU stacks up.
> 
> Your memory overclock is very low compared to your core OC, any reason?
> 
> Speaking of which, what are folks thoughts on over clocking gpu core vs mem? I find that I get quite a lot more frames over clocking mem vs gpu.
> 
> For example, running at 2055mhz with +500 mem vs 1995mhz +1000mem, I get about 5% more frames on the latter. Has anyone done some rigorous comparisons?


Yes, memory OC is huge. I can apply it and see the jump in Cyberpunk 2077 frame rate right away.

I actually run my 3090 Kingpin on the stock default boost for daily gaming (which is plenty at 2,025-2,055 MHz), but I leave the memory at a +1,850 MHz OC 24/7.

I have the memory voltage increased to 1.50V. I can get a little more out of it, but I'm being conservative with a 100% stable +1,850 MHz.

I will say that power consumption and heat go up drastically with memory overclocking. It'll take my memory hot spot from about 56°C to 68°C - additional voltage, and much more bandwidth.

This is over a 20% jump in memory bandwidth on top of the stock 936 GB/s.

So I say yeah, test the memory OC as high as it can go. It shows great improvement in the games I play.


----------



## tps3443

J7SC said:


> ...worth adding that the actual rated speed of the Micron GDDR6X is 1313 MHz to begin with (at least on my Strix OC, per TPU excerpt below - probably other cards with that VRAM as well) though 'default' is set to 1219 MHz, perhaps because of heat / power consumption reasons ?
> 


This is exactly how the RTX 2080 Super was too. It had Samsung modules rated at 16 Gbps, although they were de-tuned to 15.5 Gbps.

And the very rare, unicorn-like MSI 2080 Ti Gaming Z actually had the newer 16 Gbps modules installed (the only 2080 Ti with these modules); all other 2080 Tis had standard 14 Gbps modules.

But I will say this: the Micron GDDR6X is great! Micron memory was generally terrible on last-generation 2080 Tis, so it's nice to see such an improvement in their memory quality. I had such a bias built up against Micron just from the 2080 Tis, lol. I had (2) models with Samsung where +1,050 MHz was the minimum, while the other (4) 2080 Tis I owned had Micron and could only manage +600 to +700, and it was generally disappointing to own one with Micron memory. So when I knew my 3090 Kingpin would have Micron memory, I didn't really have high hopes.

This is not the case at all with 3090s though. Maybe Micron saves the better memory for the RTX 3090s? But it's certainly really good this generation compared to last!


----------



## J7SC

tps3443 said:


> This is exactly how the RTX2080 Super was too. They had Samsung 16GBPS rated modules. Although, they were de-tuned to 15.5GBPS.
> 
> And the very rare and unicorn like MSI 2080Ti Gaming-Z, actually had the newer 16GBPS Modules installed. (The only 2080Ti with these modules) all other 2080Ti’s had standard 14GBPS modules.
> 
> But I will say this, the Micron GDDR6X is great! All of the Micron memory was generally terrible on last generation 2080Ti’s. It’s nice to see such an improvement with their quality in memory. I had such a biased built up against Micron just from the 2080Ti’s lol. I had (2) models with Samsung and +1,050Mhz was minimum. While the other (4) 2080Ti’s I have owned had Micron and +600 +700 was the minimum. And it was generally disappointing to own one with micron memory..So when I knew my 3090 Kingpin would have Micron memory, I didn’t really have high hopes.
> 
> 
> This is not the case at all with 3090’s though. Maybe Micron saves the better memory for the RTX3090’s? But it’s certainly really good this generation compared to last!


...that 24GB GDDR6X on my Strix is going to get an enhanced workout...just picked up ('heaved up' is a better expression, given the size/weight) an LG C1 48..._"we're going to need a bigger boat desk"_


----------



## tps3443

J7SC said:


> ...that 24GB GDDR6X on my Strix is going to get an enhanced workout...just picked up ('heaved up' is a better expression, given size / weight) a LG C1 48..._"we're going to need a bigger boat desk" _


That's a nice display! That would probably be the only thing that would make me go to 4K PC gaming, and that one thing is "OLED!!"

The dark colors are so beautiful.

I wish they had a 32-43" version available! Do they??


----------



## J7SC

tps3443 said:


> Thats a nice display! That would probably be the only thing that would make me go to 4K, “OLED!!”
> 
> The dark colors are so beautiful.
> 
> 
> I wish they had a 32-43” available! Do they??


...may be in a future release, but for now, 48 inch is 'minimum' and 83 inch is 'maximum'


----------



## tps3443

J7SC said:


> ...may be in a future release, but for now, 48 inch is 'minimum' and 83 inch is 'maximum'


Im looking forward to OLED monitors.


Dell actually made an UltraSharp 32" OLED 4K display, but it seems they discontinued it very quickly after release. Only a few people in the world seem to own one, and they sell for $3,000-$5,000 on eBay when they do pop up from time to time.

I do wish I had one though.


----------



## Arizor

@tps3443 LG has a new 32inch OLED out, only $4k, might as well buy 2...!


----------



## ttnuagmada

J7SC said:


> ...that 24GB GDDR6X on my Strix is going to get an enhanced workout...just picked up ('heaved up' is a better expression, given size / weight) a LG C1 48..._"we're going to need a bigger boat desk" _


I'm very close to talking myself into getting one. I have a 77in one in my living room with a 3080/5800x rig. ****s all over my CHG70.


----------



## Arizor

I use an Aorus 43inch as my daily monitor (not OLED, but HDR1000 so very pretty for games) and you quickly adapt to the bigger screen size, it's great for my daily game dev stuff. 48" would be a bit much for my desk, but ymmv and as long as you can sit about 3-4ft away I think you'll get used to it quickly.


----------



## mirkendargen

I've been using a 48" LG CX since they released over a year ago. 0 regrets, greatest gaming display ever made. I upgraded my 10 year old 65" Panasonic plasma in the living room to a 77" CX too about 4 months ago.


----------



## J7SC

ttnuagmada said:


> I'm very close to talking myself into getting one. I have a 77in one in my living room with a 3080/5800x rig. ****s all over my CHG70.





mirkendargen said:


> I've been using a 48" LG CX since they released over a year ago. 0 regrets, greatest gaming display ever made. I upgraded my 10 year old 65" Panasonic plasma in the living room to a 77" CX too about 4 months ago.


...just started to play around with it and haven't dived deep into the menus (pretty much all on default for now) - but I'm speechless at 4K HDR 120; even the sound is 'quite good'. This C1 will have three machines hooked in for work as well as entertainment, and will sit next to a 2015 Philips 40-inch 4K VA which has its own strengths, particularly text. My new home office is finally taking shape after some delays: 4K minimum all around, no HDDs anymore, etc.

EDIT: ...and then there is this  > Secret LG C1 "Hack" to Unlock G1 OLED Evo


----------



## steadly2004

Anybody know if an HP branded 3090 will accept other bios flashes?

I just moved my rig into a new case, a P200A. Moved from a 2080 Super to a 3090, and from a smaller Noctua (9 cm?) cooler to an AIO for the 3800X.

I replaced the pads with gelid pads. And added a heatsink to the rear with a fan.


----------



## steadly2004

Delete, double post


----------



## Arizor

That's a tricky one @steadly2004 - the board looks similar to the FE, but I'm not familiar with whether HP has its own factory or simply rebrands another AIB (like a supermarket getting another brand to make its in-store laundry detergent).

For the AIO, you want to flip the radiator to make sure the pump (located on the CPU block) is below the radiator inlet; otherwise you'll get all sorts of issues with pressure after a while (see the video here).


----------



## Falkentyne

steadly2004 said:


> Anybody know if an HP branded 3090 will accept other bios flashes?
> 
> I just moved my rig into a new case, a p200a . Moved from a 2080 super to 3090. Also from a smaller noctua (9 cm?) to an AIO for the 3800x.
> 
> I replaced the pads with gelid pads. And added a heatsink to the rear with a fan.
> 


Why are there two missing phases on that 3090 ?


----------



## des2k...

steadly2004 said:


> Anybody know if an HP branded 3090 will accept other bios flashes?
> 
> I just moved my rig into a new case, a p200a . Moved from a 2080 super to 3090. Also from a smaller noctua (9 cm?) to an AIO for the 3800x.
> 
> I replaced the pads with gelid pads. And added a heatsink to the rear with a fan.
> 


Should flash fine; it looks like a 2x8-pin reference board to me.

It's either 390 W max or the 1000 W XOC (no protection).


----------



## des2k...

Falkentyne said:


> Why are there two missing phases on that 3090 ?


That's the default 2x8-pin reference design from Nvidia.

It's 4 phases for MEM.
It's 14 phases for core: 9 phases vcore1, 5 phases vcore2 (uncore: mem controller, cache, etc.).

Technically you're missing 50 A more on core (maybe useful on LN2, but nobody runs 2x8-pin on LN2 for 700 W+) and 50 A more on uncore (this will never be needed, maybe under LN2).

Even assuming the VRM controllers wouldn't need replacing, populating those phases would make it a non-reference PCB, and Nvidia would need to approve it.
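To put rough numbers on the "missing 50 A" point: deliverable VRM power is just phases x per-stage current x rail voltage. The 50 A per stage comes from the post; the 1.0 V core voltage is an assumed round number for illustration only:

```python
# Back-of-envelope VRM capacity for the 2x8-pin reference layout described above.
AMPS_PER_STAGE = 50.0  # per-phase power stage rating (figure from the post)
VCORE = 1.0            # assumed core voltage under load (round number)

def phase_capacity_w(phases: int, amps: float = AMPS_PER_STAGE,
                     volts: float = VCORE) -> float:
    """Combined deliverable power for a bank of identical phases."""
    return phases * amps * volts

core_w = phase_capacity_w(9)         # vcore1: 9 populated phases -> 450 W
uncore_w = phase_capacity_w(5)       # vcore2 (uncore): 5 phases -> 250 W
per_missing_w = phase_capacity_w(1)  # each unpopulated phase forgoes ~50 W

print(core_w, uncore_w, per_missing_w)
```

Roughly 450 W of vcore capacity on a board capped around 390 W is why the two empty pads only start to matter at LN2-class power draw.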


----------



## ttnuagmada

J7SC said:


> ...just started to play around with it and haven't dived deep into the menus (pretty much all on default for now) - but I'm speechless at 4K HDR 120, even the sound is 'quite good' ...this C1 will have three machines hooked in for work as well as entertainment and sit next to a 2015 Philips 40 inch 4K VA which has its own strengths, particular text...my new home-office is finally taking shape after some delays..4K min all around, no HDDs anymore etc
> 
> EDIT: ...and then there is this  > Secret LG C1 "Hack" to Unlock G1 OLED Evo


yeah man they're legit. Trying to play darker games on an LCD is torture after getting a C1.

If you get the service remote for the TV, you can get into the service menu and disable some of the ABL on it. I recommend checking out AVSforum for details on the C1. If you happen to have the right panel type and the firmware is old enough, you can still enable G1 mode on them (gotta have the service remote though).


----------



## mirkendargen

J7SC said:


> ...just started to play around with it and haven't dived deep into the menus (pretty much all on default for now) - but I'm speechless at 4K HDR 120, even the sound is 'quite good' ...this C1 will have three machines hooked in for work as well as entertainment and sit next to a 2015 Philips 40 inch 4K VA which has its own strengths, particular text...my new home-office is finally taking shape after some delays..4K min all around, no HDDs anymore etc
> 
> EDIT: ...and then there is this  > Secret LG C1 "Hack" to Unlock G1 OLED Evo


Doom Eternal will blow your HDR mind.


----------



## des2k...

mirkendargen said:


> Doom Eternal will blow your HDR mind.


If you like having your eyeballs blasted with bright light, maybe. I find 350 nits+ perfectly fine; even then, I have to step back from my 27" 4K monitor in most games where fast bright patterns are combined with darker scenes.

HDR is prob not for me


----------



## J7SC

des2k... said:


> If you like having your eyeballs blasted with bright light. I find 350nits+ perfectly fine, even then I have to step back from my 27" 4k monitor in most games where there's fast bright patterns combined with darker scenes.
> 
> HDR is prob not for me


...I'm having a grand old time having my 'eyeballs blasted with bright light'  ...below is full-on HDR on the C1, though screenshots of HDR content don't really look as good as the original...the whole thing feels a lot more '3D / spatial'.

Anyway, one can always turn HDR off!


----------



## steadly2004

des2k... said:


> should flash fine, looks 2x8pin reference to me
> 
> It's either 390w max or 1000w xoc(no protection)


Which would you recommend for a safe flash? Definitely don't need the XOC. The cooling isn't up for that at all. Might be ok for 390w. 


des2k... said:


> that's the default 2x8pin reference design from Nvidia,
> 
> It's 4 phases for MEM.
> It's 14 phases for core: 9 phases vcore1, 5 phases vcore2 (uncore: mem controller,cache,etc)
> 
> Technically you're missing 50a more on core(maybe on LN2 but nobody runs 2x8pin on LN2 for 700w+)
> and 50a more on uncore (this will never be needed, maybe under LN2)
> 
> assuming the vrm controllers don't need replacing, then it's no longer a reference and Nvidia would need to approve the PCB
> 
> 


Thanks for the explanation!



Arizor said:


> That's a tricky one @steadly2004 , the board looks similar to the FE, but I'm not familiar with HP as having their own factory or whether they simply rebrand another AIB (like a supermarket getting another brand to make their in-store laundry detergent?).
> 
> For the AIO you want to flip the radiator to make sure the pump (located on the CPU block) is below the radiator inlet, otherwise you'll get all sorts of issues with pressure after a while (see here
> 
> 
> 
> ).


I actually have seen this video and understand fluid levels. If you watch starting at 19:45, you'll see it addresses my current setup. It's mainly about keeping the pump below the water level, and in my setup the water level is above the pump. You're more likely to have headroom with a given amount of air in this orientation than if it were reversed. But ultimately the pump needs to be below the water line.


----------



## gfunkernaught

des2k... said:


> Prob others can answer since I don't have this board / backplate. For my EK backplate, it was hard to find the correct thermal pad thickness outside the hotspot zones.
> 
> When I tired putting extra pads in the past, I lost most pressure on the backplate for the important areas.
> 
> The Optimus block doesn't have standoffs, so the backplate is used for support, that's why they use that big thermal pad.
> 
> For example, there's no heat to extract outside these zones at the back(vrm controllers, back of the vrm, back of the core & mem)
> 
> 
> 
> If it's between Fuji 6wmk+ vs 13wmk, I would use 13wmk unless those Fuji have better fit.


So those heat zones you highlighted are what make the backplate so hot?


----------



## Sheyster

J7SC said:


> ...that 24GB GDDR6X on my Strix is going to get an enhanced workout...just picked up ('heaved up' is a better expression, given size / weight) a LG C1 48..._"we're going to need a bigger boat desk" _


Welcome to the LG OLED club!

This thread over on AVS Forum is a must-read for CX/C1 owners:









2020 LG CX–GX dedicated GAMING thread, consoles and PC (www.avsforum.com)





A great settings link from Reddit is in the first post.


----------



## Sheyster

Arizor said:


> @tps3443 LG has a new 32inch OLED out, only $4k, might as well buy 2...!


That one is not a gaming display, it's a pro display for content creators. The CX/C1 are 4K@120Hz with fairly low input lag in gaming mode.


----------



## Arizor

Sheyster said:


> That one is not a gaming display, it's a pro display for content creators. The CX/C1 are 4K@120Hz with fairly low input lag in gaming mode.


Was just offering a bit of sarcasm


----------



## yzonker

steadly2004 said:


> Which would you recommend for a safe flash? Definitely don't need the XOC. The cooling isn't up for that at all. Might be ok for 390w.
> 
> Thanks for the explanation!
> 
> 
> I actually have seen this video and understand fluid levels. If you watch starting at 19:45 you'll see that it addresses my current setup. Its mainly needing the pump lower than the water level, which is above the pump. You're more likely to have more headroom with a given amount of air in this setup than if it were reversed. But ultimately the pump needs to be below the water line.


This is the one I've used on my Zotac 3090 Trinity.









GALAX RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





And yes, your board looks nearly identical to a reference PCB like my Zotac as @des2k... stated.



https://www.techpowerup.com/review/zotac-geforce-rtx-3090-trinity/images/front.jpg


----------



## RosaPanteren

Any experience or knowledge base input will be highly appreciated!

The Alphacool waterblock for the Palit Gamerock and Gainward Phantom series have cutouts in the coldplate above some of the LR22 inductors as shown in the pictures.



















When I saw the illustration picture back in March I thought these cutouts were only there to show the thermal pad placement.

But when I got the block they were physically present, and they bothered me a bit when installing the block.

Some of the LR22 inductors clearly don't have any kind of active or passive cooling, as there is minimal airflow above these components.

The GameRock community has asked AC to confirm that this can't harm the components, but no answer has been given so far (about 2 weeks).

From a bit of searching I found that these kinds of inductors are susceptible to cracking. I have no idea whether heat was the cause, but it's the cause I would imagine to be most likely.

Any input on whether this seems like a possibly hazardous design flaw, or would the components in question be fine with no cooling?

Also, I could not find heat-tolerance specifications in the datasheets for the LR22; does anyone know what temps would be acceptable?

And if anyone with a KP could please share vrm temps under say 500w load it would be super nice. I’ll attach some temp sensors and test a bit when I get home.

Any input on this will be highly appreciated!


----------



## Lownage

kot0005 said:


> Is this any good ? NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII FORMULA (3dmark.com)
> 3090 strix white OC, stock 4802 bios with max pL and +120Mhz core +200mhz vram
> pbo 2, -10 on 5900x


I get a higher gpu score with lower core clock, but higher vram:









I scored 24 173 in Fire Strike Extreme — Intel Core i9-11900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)


----------



## des2k...

RosaPanteren said:


> Any experience or knowledge base input will be highly appreciated!
> 
> The Alphacool waterblock for the Palit Gamerock and Gainward Phantom series have cutouts in the coldplate above some of the LR22 inductors as shown in the pictures.
> 
> View attachment 2521200
> 
> 
> View attachment 2521201
> 
> 
> When I saw the illustration picture back in March I thought these cutouts were only there to show the thermal pad placement.
> 
> But when I got the block they were physically present, and they bothered me a bit when installing the block.
> 
> Some of the LR22 inductors clearly don't have any kind of active or passive cooling, as there is minimal airflow above these components.
> 
> The GameRock community has asked AC to confirm that this can't harm the components, but no answer has been given so far (about 2 weeks).
> 
> From a bit of searching I found that these kinds of inductors are susceptible to cracking. I have no idea whether heat was the cause, but it's the cause I would imagine to be most likely.
> 
> Any input on whether this seems like a possibly hazardous design flaw, or would the components in question be fine with no cooling?
> 
> Also, I could not find heat-tolerance specifications in the datasheets for the LR22; does anyone know what temps would be acceptable?
> 
> And if anyone with a KP could please share vrm temps under say 500w load it would be super nice. I’ll attach some temp sensors and test a bit when I get home.
> 
> Any input on this will be highly appreciated!


If your block supports it, then yes, cool the caps/inductors if you're planning on running 500W+ 24/7.
My understanding is that they do reach 90C+ with no active cooling on them. They are rated for high temps, but heat cycles can crack them. Your caps will also degrade faster the hotter they run.

I usually stay away from Alphacool blocks; they use cheap thin copper and can't clear those areas, so you're left with ugly plexi, o-rings and cutouts everywhere.

I thermal padded my caps and inductors with my EK block; I didn't want to take any chances with the XOC vbios.

Barrow is no better: they also used a thin copper plate for their 2x8-pin reference block and didn't fully clear the caps/inductors, so the block doesn't sit on the core and memory properly. Waste of money lol


----------



## Mat_UK

satinghostrider said:


> If you're gaming, I feel the active backplate isn't worth it unless you're mining or doing something that stresses the memory. I'm only hitting 54 degrees max on my 3090 with the EK backplate with Thermalright pads on the back and Gelid pads on the front.
> 
> I'm on the 3090 Gaming X Trio with 450W SuprimX bios flash.


Dude can you link me that bios you’re using? Want to try that one on my 3090 x trio too. Thanks!


----------



## RosaPanteren

des2k... said:


> If your block supports it, then yes, cool the caps/inductors if you're planning on running 500W+ 24/7.
> My understanding is that they do reach 90C+ with no active cooling on them. They are rated for high temps, but heat cycles can crack them. Your caps will also degrade faster the hotter they run.
> 
> I usually stay away from Alphacool blocks; they use cheap thin copper and can't clear those areas, so you're left with ugly plexi, o-rings and cutouts everywhere.
> 
> I thermal padded my caps and inductors with my EK block; I didn't want to take any chances with the XOC vbios.
> 
> Barrow is no better: they also used a thin copper plate for their 2x8-pin reference block and didn't fully clear the caps/inductors, so the block doesn't sit on the core and memory properly. Waste of money lol
> 
> View attachment 2521217


It’s the only block for this custom pcb so it was either this or stock air....

The block works well for the core and front mem ic’s, as far as I can tell, but the cutouts with lack of cooling for some of the LR22 caps bother me.

I run the 520W BIOS, but it's mostly a gaming rig, so it runs for 6-8 hour stretches at most at full tilt.

Anyone blown similar caps due to insufficient cooling?

And yeah the block looks far from pretty, but that I couldn’t care less about.


----------



## jura11

RosaPanteren said:


> It’s the only block for this custom pcb so it was either this or stock air....
> 
> The block works well for the core and front mem ic’s, as far as I can tell, but the cutouts with lack of cooling for some of the LR22 caps bother me.
> 
> I run the 520W BIOS, but it's mostly a gaming rig, so it runs for 6-8 hour stretches at most at full tilt.
> 
> Anyone blown similar caps due to insufficient cooling?
> 
> And yeah the block looks far from pretty, but that I couldn’t care less about.


Another option is Bykski, which I'm running on my 2 RTX 3090 GamingPros with no issues. They are now also selling a version with an active backplate, with which you should see nice VRAM temperatures, and it will cost less than the Alphacool waterblock.



https://www.bykski.com/page1000028?_l=en&product_id=5255



Hope this helps 

Thanks, Jura


----------



## satinghostrider

Mat_UK said:


> Dude can you link me that bios you’re using? Want to try that one on my 3090 x trio too. Thanks!


Here you go mate.









MSI RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





Everything works perfectly well on the 3090 Gaming X Trio flashed with that Suprim X Bios.


----------



## RosaPanteren

jura11 said:


> Another option is Bykski, which I'm running on my 2 RTX 3090 GamingPros with no issues. They are now also selling a version with an active backplate, with which you should see nice VRAM temperatures, and it will cost less than the Alphacool waterblock.
> 
> 
> 
> https://www.bykski.com/page1000028?_l=en&product_id=5255
> 
> 
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the tip, but the GamingPro is a reference PCB and the GameRock is custom, so it will not fit unfortunately, unless Bykski has made one for the custom PCB.


----------



## des2k...

RosaPanteren said:


> Thanks for the tip, but the GamingPro is a reference PCB and the GameRock is custom, so it will not fit unfortunately, unless Bykski has made one for the custom PCB.


I wouldn't worry about it if there are no alternative blocks. The heat will eventually sink into the PCB and make its way to the front block or backplate, just at a slower rate than direct cooling.


----------



## geriatricpollywog

Somebody needs a bath!


----------



## sultanofswing

0451 said:


> Somebody needs a bath!
> 
> View attachment 2521272


Haha, I washed one of my KPE's under the sink and scrubbed it with Dawn dish soap.
My roommate about died.


----------



## Falkentyne

0451 said:


> Somebody needs a bath!
> 
> View attachment 2521272





sultanofswing said:


> Haha, I washed one of my KPE's under the sink and scrubbed it with Dawn dish soap.
> My roommate about died.


Use 99% isopropyl alcohol you jabronis...


----------



## sultanofswing

Falkentyne said:


> Use 99% isopropyl alcohol you jabronis...


I used 99% alcohol after washing it to clear away any water.
Card never skipped a beat.


----------



## geriatricpollywog

I don’t have 99% IPA so my card is now in the oven at 170 degrees.


----------



## Arizor

Fearless, thy name is @0451


----------



## gfunkernaught

0451 said:


> I don’t have 99% IPA so my card is now in the oven at 170 degrees.


I baked my 8800gtx when it became unresponsive back in the day. Supposedly it fixed the solder joints. And it did!


----------



## geriatricpollywog

gfunkernaught said:


> I baked my 8800gtx when it became unresponsive back in the day. Supposedly it fixed the solder joints. And it did!


170 degrees for 2 hours is just to dry the card.

I tried baking my dead Fury X and it broke the card more. Yours is the first success story I’ve heard.


----------



## gfunkernaught

Has anyone tried battlefield 4 at 8k yet? I was getting low fps, in the 30s mostly. It's odd because battlefield 1 at 8k did much better, minimum fps was like 35, but mostly in the 40s. Is it because it's DX11? Also wonder if it is because I have rebar enabled globally.


----------



## geriatricpollywog

gfunkernaught said:


> Has anyone tried battlefield 4 at 8k yet? I was getting low fps, in the 30s mostly. It's odd because battlefield 1 at 8k did much better, minimum fps was like 35, but mostly in the 40s. Is it because it's DX11? Also wonder if it is because I have rebar enabled globally.


Bit of an early adopter there. What kind of TV do you have?


----------



## J7SC

gfunkernaught said:


> Has anyone tried battlefield 4 at 8k yet? I was getting low fps, in the 30s mostly. It's odd because battlefield 1 at 8k did much better, minimum fps was like 35, but mostly in the 40s. Is it because it's DX11? Also wonder if it is because I have rebar enabled globally.


What, no 16K ? Laggard...


----------



## gfunkernaught

0451 said:


> Bit of an early adopter there. What kind of TV do you have?


It's a Sony 4K HDR TV. I set the res to 2160 and the res scale to 200%, anywhere I can, if the game supports it. Doom Eternal does not want to run in 8K (DSR) at all, just a black screen with the menu music playing in the background. GTA 5 is another game I play in 8K using the in-game res scale, which btw looks amazing, and the fps is usually 60 with some dips into the 50s.
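For what it's worth, the resolution-scale arithmetic works out like this (a quick sketch, assuming the slider scales each axis, as the Frostbite games do):

```python
# Effective internal render resolution from output resolution and scale %.
# Assumes the scale applies per axis, so 200% means 4x the pixel count.
def render_resolution(width: int, height: int, scale_percent: int) -> tuple:
    factor = scale_percent / 100
    return int(width * factor), int(height * factor)

# 4K output at 200% scale renders internally at 8K.
print(render_resolution(3840, 2160, 200))  # (7680, 4320)
```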


----------



## gfunkernaught

J7SC said:


> What, no 16K ? Laggard...
> 
> View attachment 2521290


Ahem








Forget 8K, are you ready for 32K? (www.redsharknews.com)


----------



## geriatricpollywog

@tps3443 I mounted the Hydrocopper block using the pre-installed memory pads and a very thin layer of thermal paste on the die. The purpose was to check whether the 2.5mm pads are too thick and 1.5mm pads are needed as you suggested. I am pleased to report that the die made good contact with the stock pads, so I kept them in place. I ran Port Royal and ripped a 15,555 using the 520W BIOS with a maximum temperature of 40.0C on a single 280mm radiator shared with the CPU. Typically the core reaches 52C. I could definitely benefit from using the 1000W BIOS with the block.


----------



## elbramso

gfunkernaught said:


> How hard would It be to connect the GPU directly to the PCIe slot? Maybe then you can eliminate the riser cable as a factor.


Sorry for being super late on this^^ Currently it is impossible to use the PCIe slot directly.


----------



## cennis

0451 said:


> @tps3443 I mounted the Hydrocopper block using the pre-installed memory pads and a very thin layer of thermal paste on the die. The purpose was to check if the 2.5mm pads are too thick and if 1.5mm pads are needed as you suggested. I am pleased to report that the die made good contact with the stock pads, so I kept them in place. I ran Port Royal and ripped a 15,555 using the 520w bios with a maximum temperature of 40.0C with a single 280MM radiator shared with the CPU. Typically the core reaches 52C. I could definitely benefit from using the 1000w bios with the block.
> 
> View attachment 2521314
> 
> 
> View attachment 2521318


What are your ambient and coolant temps? That sounds like an amazing block and mount.


----------



## jura11

RosaPanteren said:


> Thanks for the tip, but the GamingPro is a reference PCB and the GameRock is custom, so it will not fit unfortunately, unless Bykski has made one for the custom PCB.


Hi there 

Bykski are now making a waterblock for the GameRock; I posted the link above previously.

£78.62 (15% off) — Bykski GPU Block For Palit RTX3090 GameRock OC Full Cover With Backplate, N-PT3090GR-X (a.aliexpress.com)







https://www.bykski.com/page1000028?_l=en&product_id=5255



Hope this helps 

Thanks, Jura


----------



## RosaPanteren

jura11 said:


> Hi there
> 
> Bykski are now making a waterblock for the GameRock; I posted the link above previously.
> 
> £78.62 (15% off) — Bykski GPU Block For Palit RTX3090 GameRock OC Full Cover With Backplate, N-PT3090GR-X (a.aliexpress.com)
> 
> 
> 
> 
> 
> 
> 
> https://www.bykski.com/page1000028?_l=en&product_id=5255
> 
> 
> 
> Hope this helps
> 
> Thanks, Jura


Sorry I missed that link and was not aware they had a suitable block!

Thx a bunch!


----------



## elbramso

cennis said:


> What are your ambient and coolant temps? That sounds like an amazing block and mount.


The value would be misleading if he's referring to the GPU die temp as shown on the OLED display.


----------



## jura11

RosaPanteren said:


> Sorry I missed that link and was not aware they had a suitable block!
> 
> Thx a bunch!


Try to get the one with the active backplate. Without the active backplate the VRAM temperatures are not bad; in my case they're in the 60s with a +1295MHz OC on the VRAM, and with +1515MHz on the VRAM the temperatures are pretty much the same.

Hope this helps 

Thanks, Jura


----------



## RosaPanteren

jura11 said:


> Try to get the one with the active backplate. Without the active backplate the VRAM temperatures are not bad; in my case they're in the 60s with a +1295MHz OC on the VRAM, and with +1515MHz on the VRAM the temperatures are pretty much the same.
> 
> Hope this helps
> 
> Thanks, Jura


I couldn’t find a deal including the active backplate, but I will search some more in the coming days before ordering.

It will be interesting to compare this block/active backplate with the Alphacool block and the home-made Threadripper backplate I got today.

Btw, Alphacool answered that the LR22 coils do not need cooling, and from some pics of the stock air cooler there was no physical contact with these components. However, there would be quite some airflow over these components on the stock air cooler, so I'm still a bit worried about the AC block not cooling these caps.


----------



## geriatricpollywog

cennis said:


> What are your ambient and coolant temps? That sounds like an amazing block and mount.


Ambient temp is 69F and I don’t have a water temperature sensor; I’ll check the temperature later tonight with a Fluke thermocouple. My computer is laid out on the breakfast table, outside the case. But yeah, a 12C improvement in core temperature, and memory is about 30C colder.


----------



## geriatricpollywog

@tps3443 How do I load the Hydrocopper firmware so that my card thinks it was born a Hydrocopper?


----------



## des2k...

0451 said:


> Ambient temp is 69F and I don’t have a water temperature sensor. I’ll check the temperature later tonight with a fluke thermocouple. My computer is laid out on the breakfast table, outside the case. But yeah, 12C improvement in core temperature and memory is about 30C colder.


for your 40c temp on the oled, what does hwinfo say for core and hotspot temps ?

for a single rad, 520w that's impressive if you got 40c in hwinfo, because at that output + cpu you're looking at 35c+ water temps

if so you're at 5c delta, kinda hard to believe🙄
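The back-of-envelope check above can be sketched like this; the radiator coefficient is an assumed round number (watts shed per °C of water-to-ambient delta for a thick 280mm rad with strong fans), not a measured figure:

```python
# Steady-state water temperature estimate: the water settles where the
# radiator sheds exactly the heat being dumped into the loop.
def water_temp_c(ambient_c: float, heat_load_w: float, rad_w_per_c: float) -> float:
    return ambient_c + heat_load_w / rad_w_per_c

# ~620 W of GPU + CPU into a single 280 mm rad at an assumed 45 W/°C
# puts the water in the mid-30s, hence the skepticism about a 5C delta.
print(round(water_temp_c(22.0, 620.0, 45.0), 1))  # 35.8
```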


----------



## tps3443

0451 said:


> @tps3443 I mounted the Hydrocopper block using the pre-installed memory pads and a very thin layer of thermal paste on the die. The purpose was to check if the 2.5mm pads are too thick and if 1.5mm pads are needed as you suggested. I am pleased to report that the die made good contact with the stock pads, so I kept them in place. I ran Port Royal and ripped a 15,555 using the 520w bios with a maximum temperature of 40.0C with a single 280MM radiator shared with the CPU. Typically the core reaches 52C. I could definitely benefit from using the 1000w bios with the block.
> 
> View attachment 2521314
> 
> 
> View attachment 2521318



This is the latest Classified Tool download link. This version of the Classified Tool adds support for the newer device ID 3999 = 3090 Kingpin Hydro Copper.

And here are the standard, LN2, and OC version KP HC BIOSes. All of these BIOSes are for device ID 3999, which is the Kingpin HC, one BIOS per switch.

They may not all be ReBAR enabled, but once you install the BIOS, PX1 will update your firmware to a Hydro Copper Kingpin 3090, and it'll then install the latest ReBAR version of the BIOS for whichever switch (1/2/3) you're running on. You have to install each BIOS manually on each switch and let PX1 update on each switch.

I would also DDU and do a fresh driver install after you're finished.


(Classified tool latest version)
Private file
———————————————————-

(Kingpin Hydrocopper standard bios 430-450 watt) 105%

EVGA RTX 3090 VBIOS

———————————————————-



(Kingpin Hydro Copper LN2 480-520 watt bios)
111%


EVGA RTX 3090 VBIOS

———————————————————



(Kingpin 3090 hydro Copper OC 450-480 watt bios) 121%


EVGA RTX 3090 VBIOS
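For anyone following along, the per-switch flow described above looks roughly like this from an elevated prompt (a sketch only; the ROM filenames are placeholders, and the actual BIOS files are the links above):

```shell
# Back up the VBIOS currently on this switch position before anything else
nvflash64 --save backup_switch1.rom

# Disable the EEPROM write protection, then flash the KP HC ROM;
# -6 overrides the board/subsystem ID mismatch check
nvflash64 --protectoff
nvflash64 -6 kingpin_hc_normal.rom

# Reboot, let PX1 update the firmware and apply the ReBAR BIOS,
# then move the BIOS switch to the next position and repeat.
```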


----------



## geriatricpollywog

des2k... said:


> for your 40c temp on the oled, what does hwinfo say for core and hotspot temps ?
> 
> for a single rad, 520w that's impressive if you got 40c in hwinfo, because at that output + cpu you're looking at 35c+ water temps
> 
> if so you're at 5c delta, kinda hard to believe🙄


I did not say I’m at 5C delta water to core temp, but I’ll measure that when I get home later. It may be a single 280 rad, but it’s also the highest performing radiator available, the HWLabs GTR. It’s hooked up to Corsair ML140Pro fans in push pull at 2k RPM and my whole rig is deconstructed on the kitchen table with no case to hold the heat inside.

What’s most important is the mount, and what can I say, I’m good at mounting things. I feel for anyone who had bad results with the Hydrocopper and then needlessly spent $600 on one of those Cadillac blocks with the active backplate.



tps3443 said:


> This is the latest Classified Tool download link. This version of the Classified Tool adds support for the newer device ID 3999 = 3090 Kingpin Hydro Copper.
> 
> And here are the standard, LN2, and OC version KP HC BIOSes. All of these BIOSes are for device ID 3999, which is the Kingpin HC, one BIOS per switch.
> 
> They may not all be ReBAR enabled. But once you install the BIOS, PX1 will update your firmware to a Hydro Copper Kingpin, and it'll install the latest ReBAR version for each switch (1/2/3) you run. You have to install each BIOS manually, and do the internal firmware update through EVGA PX1.
> 
> I would also DDU and do a fresh driver install after you're finished.
> 
> 
> (Classified tool latest version)
> Private file
> ———————————————————-
> 
> (Kingpin Hydrocopper standard bios 430-450 watt) 105%
> 
> EVGA RTX 3090 VBIOS
> 
> ———————————————————-
> 
> 
> 
> (Kingpin Hydro Copper LN2 480-520 watt bios)
> 111%
> 
> 
> EVGA RTX 3090 VBIOS
> 
> ———————————————————
> 
> 
> 
> (Kingpin 3090 hydro Copper OC 450-480 watt bios) 121%
> 
> 
> EVGA RTX 3090 VBIOS


Thank you so much! I’ll load the bios later.


----------



## jura11

RosaPanteren said:


> I couldn’t find a deal including the active backplate, but I will search some more in the coming days before ordering.
> 
> It will be interesting to compare this block/active backplate with the Alphacool block and the home-made Threadripper backplate I got today.
> 
> Btw, Alphacool answered that the LR22 coils do not need cooling, and from some pics of the stock air cooler there was no physical contact with these components. However, there would be quite some airflow over these components on the stock air cooler, so I'm still a bit worried about the AC block not cooling these caps.



Hi there 

Here is the link on official Bykski store where you can get Bykski waterblock with active backplate for GameRock OC

£124.17 (25% off) — Bykski GPU Active Backplate Block For Palit RTX 3090 GameRock OC / Maxsun RTX 3090 TURBO JET (a.aliexpress.com)





I will be getting active backplates for both my RTX 3090 GamingPros as well and will compare them, although I don't need them as my VRAM temperatures are already great.

On the GameRock OC, can you monitor VRM temperatures, or is this not exposed in HWiNFO or any other monitoring software?

On your current Alphacool waterblock, what temperatures are you seeing under load?

On my RTX 3090 GamingPro with the XOC 1000W BIOS, Bykski waterblock and their passive backplate I'm seeing 36-38°C in Cyberpunk 2077, Control, etc.

Hope this helps 

Thanks, Jura


----------



## tps3443

0451 said:


> I did not say I’m at 5C delta water to core temp, but I’ll measure that when I get home later. It may be a single 280 rad, but it’s also the highest performing radiator available, the HWLabs GTR. It’s hooked up to Corsair ML140Pro fans in push pull at 2k RPM and my whole rig is deconstructed on the kitchen table with no case to hold the heat inside.
> 
> What’s most important is the mount, and what can I say, I’m good at mounting things. I feel for anyone who had bad results with the Hydrocopper and then needlessly spent $600 on one of those Cadillac blocks with the active backplate.
> 
> 
> Thank you so much! I’ll load the bios later.


It's possible I just have a bad block too. I hope that's not the case at all, but 80%+ of owners have high temps with a Kingpin Hydro Copper (or 80%+ of the ones I've seen have high temps). Then again, EVGA also apparently had QC issues with these KP HC 3090 blocks in the beginning, and that caused the delay. But I didn't know nearly as much then as I do now.

I am tempted to just order one of these $600 Cadillacs though, haha! The frustration saved as I near my (3rd) attempt at re-mounting is worth every penny to me in time saved alone. If someone said they could mount my KP HC block 100% right, right now, and net me a 10-14C delta over water, I'd be damn tempted to pay them $300 for mounting it lol. But a 25C delta is common, and a really good re-mount nets like a 17-18C delta. I am well aware of some people achieving very, very good deltas, though.

Honestly, my direct-die 7980XE is very tedious when it comes to mounting. You either have it right or you don't have it at all, and it's very difficult to have it right (very difficult). People have literally ruined processors by lapping the silicon when they couldn't obtain proper temps with direct die lol.

Well, my core-to-core deviation is like 0-3C under 900 watts of load. It's like god's hand guided the block mount on this thing's die. I just cannot make it happen on my 3090 KP HC.

If I knew then what I know now on my last block mount, I would probably have a really good water delta temp.

I'm gonna try for this 3rd remount today or tomorrow. I hate being an enthusiast; this block keeps me up at night, and spending the $600 on an Optimus KP block does too.


----------



## J7SC

tps3443 said:


> It's possible I just have a bad block too. I hope that's not the case at all, but 80%+ of owners have high temps with a Kingpin Hydro Copper (or 80%+ of the ones I've seen have high temps). Then again, EVGA also apparently had QC issues with these KP HC 3090 blocks in the beginning, and that caused the delay. But I didn't know nearly as much then as I do now.
> 
> I am tempted to just order one of these $600 Cadillacs though, haha! The frustration saved as I near my (3rd) attempt at re-mounting is worth every penny to me in time saved alone. If someone said they could mount my KP HC block 100% right, right now, and net me a 10-14C delta over water, I'd be damn tempted to pay them $300 for mounting it lol. But a 25C delta is common, and a really good re-mount nets like a 17-18C delta. I am well aware of some people achieving very, very good deltas, though.
> 
> Honestly, my direct-die 7980XE is very tedious when it comes to mounting. You either have it right or you don't have it at all, and it's very difficult to have it right (very difficult). People have literally ruined processors by lapping the silicon when they couldn't obtain proper temps with direct die lol.
> 
> Well, my core-to-core deviation is like 0-3C under 900 watts of load. It's like god's hand guided the block mount on this thing's die. I just cannot make it happen on my 3090 KP HC.
> 
> If I knew then what I know now on my last block mount, I would probably have a really good water delta temp.
> 
> I'm gonna try for this 3rd remount today or tomorrow. I hate being an enthusiast; this block keeps me up at night, and spending the $600 on an Optimus KP block does too.


...you seem the adventurous type re. PC mods and oc'ing, so why not just submerge in....


----------



## Arizor

tps3443 said:


> I hate being an enthusiast; this block keeps me up at night, and spending the $600 on an Optimus KP block does too.


A pain we are all very familiar with no doubt...! Group hug people.


----------



## geriatricpollywog

tps3443 said:


> It's possible I just have a bad block too. I hope that's not the case at all, but 80%+ of owners have high temps with a Kingpin Hydro Copper (or 80%+ of the ones I've seen have high temps). Then again, EVGA also apparently had QC issues with these KP HC 3090 blocks in the beginning, and that caused the delay. But I didn't know nearly as much then as I do now.
> 
> I am tempted to just order one of these $600 Cadillacs though, haha! The frustration saved as I near my (3rd) attempt at re-mounting is worth every penny to me in time saved alone. If someone said they could mount my KP HC block 100% right, right now, and net me a 10-14C delta over water, I'd be damn tempted to pay them $300 for mounting it lol. But a 25C delta is common, and a really good re-mount nets like a 17-18C delta. I am well aware of some people achieving very, very good deltas, though.
> 
> Honestly, my direct-die 7980XE is very tedious when it comes to mounting. You either have it right or you don't have it at all, and it's very difficult to have it right (very difficult). People have literally ruined processors by lapping the silicon when they couldn't obtain proper temps with direct die lol.
> 
> Well, my core-to-core deviation is like 0-3C under 900 watts of load. It's like god's hand guided the block mount on this thing's die. I just cannot make it happen on my 3090 KP HC.
> 
> If I knew then what I know now on my last block mount, I would probably have a really good water delta temp.
> 
> I'm gonna try for this 3rd remount today or tomorrow. I hate being an enthusiast; this block keeps me up at night, and spending the $600 on an Optimus KP block does too.


I just got the Hydrocopper firmware working. Thanks!

I tested the Delta T and it was nowhere near 5C, but it was still pretty good based on what you are saying is normal.

I used a Fluke 51 II thermocouple, which is far more accurate than any of those so-called temperature sensors on Performance PCs. I placed the thermocouple directly into the tubing inside the reservoir, downstream of the radiator. Although this is the coldest part of the loop, it is the only place I can measure; the loop is short enough and the flow rate high enough that the entire loop should be roughly the same temperature.











With an average power consumption of 427 watts in Cyberpunk, the delta T of average GPU temperature to water temperature was 13.6C. Hotspot delta T was 24.9C. As you can see, the OLED temperature reads a little low compared to HWInfo64.











With an average power consumption of 492 watts in a Port Royal loop, the delta T of average GPU temperature to water temperature was 17.1C. Hotspot delta T was 28C.
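Since both runs reference the same water reading, the difference between the hotspot and core deltas gives the core-to-hotspot spread; a trivial sketch using the numbers above:

```python
# Core-to-hotspot spread implied by each run's water deltas.
def hotspot_spread(core_delta_c: float, hotspot_delta_c: float) -> float:
    return round(hotspot_delta_c - core_delta_c, 1)

print(hotspot_spread(13.6, 24.9))  # Cyberpunk run: 11.3
print(hotspot_spread(17.1, 28.0))  # Port Royal run: 10.9
```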


----------



## gfunkernaught

Summer time is the best time to tune your cooling setup. If you can get your temps as low as you can in the summer, you know you'll have excellent temps in the winter.


----------



## Shawnb99

tps3443 said:


> It's possible I just have a bad block too. I hope that's not the case at all, but 80%+ of owners have high temps with a Kingpin Hydro Copper (or 80%+ of the ones I've seen have high temps). Then again, EVGA also apparently had QC issues with these KP HC 3090 blocks in the beginning, and that caused the delay. But I didn't know nearly as much then as I do now.
> 
> I am tempted to just order one of these $600 Cadillacs though, haha! The frustration saved as I near my (3rd) attempt at re-mounting is worth every penny to me in time saved alone. If someone said they could mount my KP HC block 100% right, right now, and net me a 10-14C delta over water, I'd be damn tempted to pay them $300 for mounting it lol. But a 25C delta is common, and a really good re-mount nets like a 17-18C delta. I am well aware of some people achieving very, very good deltas, though.
> 
> Honestly, my direct-die 7980XE is very tedious when it comes to mounting. You either have it right or you don't have it at all, and it's very difficult to have it right (very difficult). People have literally ruined processors by lapping the silicon when they couldn't obtain proper temps with direct die lol.
> 
> Well, my core-to-core deviation is like 0-3C under 900 watts of load. It's like god's hand guided the block mount on this thing's die. I just cannot make it happen on my 3090 KP HC.
> 
> If I knew then what I know now on my last block mount, I would probably have a really good water delta temp.
> 
> I'm gonna try for this 3rd remount today or tomorrow. I hate being an enthusiast; this block keeps me up at night, and spending the $600 on an Optimus KP block does too.


I’m notoriously bad at mounting, so I didn’t even try to install my Hydrocopper block, especially after I saw the bad temps people were getting.
My solution is to send my card to Optimus and have them install it for me; less chance of me screwing it up.


----------



## GRABibus

gfunkernaught said:


> Summer time is the best time to tune your cooling setup. If you can get your temps as low as you can in the summer, you know you'll have excellent temps in the winter.


And instability in winter, as you will boost too high 😊


----------



## tps3443

0451 said:


> I just got the Hydrocopper firmware working. Thanks!
> 
> I tested the Delta T and it was nowhere near 5C, but it was still pretty good based on what you are saying is normal.
> 
> I used a Fluke 51 II thermocouple, which is far more accurate than any of those so-called temperature sensors from Performance-PCs. I placed the thermocouple directly into the tubing inside the reservoir, downstream of the radiator. Although this is the coldest part of the loop, it is the only place I can measure. The loop is short enough and the flow rate is high enough that the entire loop should be the same temperature.
> 
> View attachment 2521408
> 
> 
> 
> With an average power consumption of 427 watts in Cyberpunk, the delta T of average GPU temperature to water temperature was 13.6C. Hotspot delta T was 24.9C. As you can see, the OLED temperature reads a little low compared to HWInfo64.
> View attachment 2521409
> 
> 
> 
> 
> With an average power consumption of 492 watts in a Port Royal loop, the delta T of average GPU temperature to water temperature was 17.1C. Hotspot delta T was 28C.
> 
> View attachment 2521410


I found out what the arrows on the waterblock are for; I forgot to mention it before. The arrows are not an indicator of water flow direction. They're actually there to guide water along the easiest path, to prevent dead spots or hot spots inside the block. The flow of water hits the point of the arrows, which splits it and guides it across the entire surface. So the correct routing is actually left port in, right port out.


Also, on my first mount my GPU hotspot temp was just like yours. On the 2nd mount, my GPU hotspot never goes more than 11C over GPU temp.

The OLED on the Kingpin has two GPU readouts: GPU die temp, and then GPU temp. Pretty sure GPU die temp is the internal temperature of the actual die itself, and GPU2 is the backside of the core under the backplate. So your internal reading of 43C for GPU die temp is right. Go into EVGA Precision X1 and add the GPU temp to the OLED readout.

The Kingpin block works well enough, especially considering you're only using a single 280mm radiator.


----------



## gfunkernaught

GRABibus said:


> and instability in winter as you will boost too high 😊


Actually, I find the most stability in the winter. Temps will drop you one bin at most if your cooling is proper; I'm referring to the post-40C mark that drops one bin. I learned this lesson working a whole winter to find a stable OC, only to find it was not stable in the summer. I keep forgetting that not everyone here is water cooled.


----------



## geriatricpollywog

tps3443 said:


> I found out what the arrows on the waterblock are for; I forgot to mention it before. The arrows are not an indicator of water flow direction. They're actually there to guide water along the easiest path, to prevent dead spots or hot spots inside the block. The flow of water hits the point of the arrows, which splits it and guides it across the entire surface. So the correct routing is actually left port in, right port out.
> 
> 
> Also, on my first mount my GPU hotspot temp was just like yours. On the 2nd mount, my GPU hotspot never goes more than 11C over GPU temp.
> 
> The OLED on the Kingpin has two GPU readouts: GPU die temp, and then GPU temp. Pretty sure GPU die temp is the internal temperature of the actual die itself, and GPU2 is the backside of the core under the backplate. So your internal reading of 43C for GPU die temp is right. Go into EVGA Precision X1 and add the GPU temp to the OLED readout.
> 
> The Kingpin block works well enough, especially considering you're only using a single 280mm radiator.


I did not know that about the arrows. Later, I will test again after switching the ports.

As you can see, my hotspot is 11C higher than GPU temperature. Is this what I want, or should I remount?

The radiator doesn’t matter for the purpose of my bench testing, since we are testing the water temp to GPU delta, not water temp to air delta.
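For anyone who wants to reproduce the arithmetic behind these delta figures, here is a minimal sketch of how they fall out of averaged sensor logs. The sample readings below are made up for illustration, not anyone's actual log:

```python
# Sketch of the delta-T math: average each sensor over the run, then subtract.
# Sample readings are hypothetical (one per second), not a real HWiNFO log.
gpu_temp = [47.2, 47.8, 48.1, 47.9, 48.3]  # GPU average temp, C
hotspot  = [58.9, 59.4, 59.8, 59.1, 60.2]  # GPU hotspot temp, C
water    = [34.1, 34.2, 34.3, 34.3, 34.4]  # probe in the reservoir, C

def avg(xs):
    return sum(xs) / len(xs)

gpu_delta = avg(gpu_temp) - avg(water)      # GPU-to-water delta T
hotspot_delta = avg(hotspot) - avg(water)   # hotspot-to-water delta T
print(round(gpu_delta, 1), round(hotspot_delta, 1))
```

Averaging both series over the same window before subtracting is what keeps a short spike from skewing the reported delta.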


----------



## gfunkernaught

0451 said:


> I did not know that about the arrows. Later, I will test again after switching the ports.
> 
> As you can see, my hotspot is 11C higher than GPU temperature. Is this what I want, or should I remount?
> 
> The radiator doesn’t matter for the purpose of my bench testing, since we are testing the water temp to GPU delta, not water temp to air delta.


The GPU hotspot delta can vary in that range so I think that's normal.


----------



## yzonker

gfunkernaught said:


> The GPU hotspot delta can vary in that range so I think that's normal.


Yea I think I've seen the comment that 10C is very good. After remounting, mine is 11C, sometimes 12C depending on what I'm running. Seems to vary a little, I assume depending on what parts of the chip are being used the most heavily.


----------



## gfunkernaught

yzonker said:


> Yea I think I've seen the comment that 10C is very good. After remounting, mine is 11C, sometimes 12C depending on what I'm running. Seems to vary a little, I assume depending on what parts of the chip are being used the most heavily.


Yeah and regarding parts of the chip being used, I've seen an oc be stable running quake 2 RTX and unstable running cyberpunk. Weird.


----------



## tps3443

0451 said:


> I did not know that about the arrows. Later, I will test again after switching the ports.
> 
> As you can see, my hotspot is 11C higher than GPU temperature. Is this what I want, or should I remount?
> 
> The radiator doesn’t matter for the purpose of my bench testing, since we are testing the water temp to GPU delta, not water temp to air delta.


I think you're good to go on the hotspot delta. It took a lot of digging to find out about the arrows and what they were designed for. I was thinking someone was just drunk when they drew the design, the manual, etc., haha. "OOPS"

Turns out it's actually fluid-dynamics engineering that was incorporated into the block's design.

EVGA put a lot of time into it, so these blocks can offer really good results. The stock thermal pads are even 13 W/mK pads, and their OEM is Ziitec; they are Ziitec's best pad available.


EVGA claims these blocks are capable of a 10-14C delta over water temp. I'm guessing this is on the OC BIOS, maybe.

Either way it's a cool block, and it does look nice. Memory temps are awesome on mine, and my hotspot is usually 10-12C higher than the GPU temp.

EVGA mailed my thermal pads yesterday, and I purchased them on the 8th, so apparently they were waiting on them too. But I have a tracking number for a complete thermal pad kit from EVGA for Friday! So I'm gonna re-mount, and I will have everything at my disposal for it: a full-coverage rear backplate Fujipoly thermal pad, 0.5mm Fujipoly thermal pads, the EVGA Kingpin HC thermal pad kit, and 3M micro pressure-sensitive film.

I'm after a 14-16C delta minimum with 450-500 watts, and I hope I can obtain that. If so I will keep the waterblock 100%. If not, I will buy that big heavy boat of a Cadillac, lol.


The bad temps are 100% from the stock thermal pads, though.

Everyone says don't buy the Gelid Ultimates, they're too hard and don't squish down; they all say the Gelid Extremes are softer.

Well, the OEM EVGA Kingpin Hydrocopper pads are just like Gelid Ultimate pads. Hell, they may be the same thing made by the same company, lol.


----------



## geriatricpollywog

tps3443 said:


> I think you're good to go on the hotspot delta. It took a lot of digging to find out about the arrows and what they were designed for. I was thinking someone was just drunk when they drew the design, the manual, etc., haha. "OOPS"
> 
> Turns out it's actually fluid-dynamics engineering that was incorporated into the block's design.
> 
> EVGA put a lot of time into it, so these blocks can offer really good results. The stock thermal pads are even 13 W/mK pads, and their OEM is Ziitec; they are Ziitec's best pad available.
> 
> 
> EVGA claims these blocks are capable of a 10-14C delta over water temp. I'm guessing this is on the OC BIOS, maybe.
> 
> Either way it's a cool block, and it does look nice. Memory temps are awesome on mine, and my hotspot is usually 10-12C higher than the GPU temp.
> 
> EVGA mailed my thermal pads yesterday, and I purchased them on the 8th, so apparently they were waiting on them too. But I have a tracking number for a complete thermal pad kit from EVGA for Friday! So I'm gonna re-mount, and I will have everything at my disposal for it: a full-coverage rear backplate Fujipoly thermal pad, 0.5mm Fujipoly thermal pads, the EVGA Kingpin HC thermal pad kit, and 3M micro pressure-sensitive film.
> 
> I'm after a 14-16C delta minimum with 450-500 watts, and I hope I can obtain that. If so I will keep the waterblock 100%. If not, I will buy that big heavy boat of a Cadillac, lol.
> 
> 
> The bad temps are 100% from the stock thermal pads, though.
> 
> Everyone says don't buy the Gelid Ultimates, they're too hard and don't squish down; they all say the Gelid Extremes are softer.
> 
> Well, the OEM EVGA Kingpin Hydrocopper pads are just like Gelid Ultimate pads. Hell, they may be the same thing made by the same company, lol.


With all your research, you probably know more than anyone about how capable the HC block is. Did you use springs on the 4 die screws? I did not. I bet if you retrofitted an AMD leaf spring on the back, delta T would be even better.


----------



## des2k...

tps3443 said:


> I think you're good to go on the hotspot delta. It took a lot of digging to find out about the arrows and what they were designed for. I was thinking someone was just drunk when they drew the design, the manual, etc., haha. "OOPS"
> 
> Turns out it's actually fluid-dynamics engineering that was incorporated into the block's design.
> 
> EVGA put a lot of time into it, so these blocks can offer really good results. The stock thermal pads are even 13 W/mK pads, and their OEM is Ziitec; they are Ziitec's best pad available.
> 
> 
> EVGA claims these blocks are capable of a 10-14C delta over water temp. I'm guessing this is on the OC BIOS, maybe.
> 
> Either way it's a cool block, and it does look nice. Memory temps are awesome on mine, and my hotspot is usually 10-12C higher than the GPU temp.
> 
> EVGA mailed my thermal pads yesterday, and I purchased them on the 8th, so apparently they were waiting on them too. But I have a tracking number for a complete thermal pad kit from EVGA for Friday! So I'm gonna re-mount, and I will have everything at my disposal for it: a full-coverage rear backplate Fujipoly thermal pad, 0.5mm Fujipoly thermal pads, the EVGA Kingpin HC thermal pad kit, and 3M micro pressure-sensitive film.
> 
> I'm after a 14-16C delta minimum with 450-500 watts, and I hope I can obtain that. If so I will keep the waterblock 100%. If not, I will buy that big heavy boat of a Cadillac, lol.
> 
> 
> The bad temps are 100% from the stock thermal pads, though.
> 
> Everyone says don't buy the Gelid Ultimates, they're too hard and don't squish down; they all say the Gelid Extremes are softer.
> 
> Well, the OEM EVGA Kingpin Hydrocopper pads are just like Gelid Ultimate pads. Hell, they may be the same thing made by the same company, lol.


If you want to see the quality of the block and mount, you need to test at 600W; it's a Kingpin, after all 😂

Optimus tests all their blocks with XOC 600W+.

I would not aim that low (that high a delta). 450W of direct cooling on a huge die like the 3090 is nothing. Any generic block with a bad mount and crap paste will get you a 16C delta.

It's supposed to be around a 7C delta at such low wattage. 🙄

Side note: my Zotac Gaming OC didn't change going from a 24C delta to a 10C delta. It still hits the same limit on frequency despite being 14C cooler.


----------



## geriatricpollywog

des2k... said:


> If you want to see the quality of the block and mount, you need to test at 600W; it's a Kingpin, after all 😂
> 
> Optimus tests all their blocks with XOC 600W+.
> 
> I would not aim that low. 450W of direct cooling on a huge die like the 3090 is nothing. Any generic block with a bad mount and crap paste will get you a 16C delta.
> 
> It's supposed to be around a 7C delta at such low wattage. 🙄


I provide actual test results. You provide emojis. Cough up real data.


----------



## yzonker

0451 said:


> I provide actual test results. You provide emojis. Cough up real data.


Well, I'm getting a 15-16C delta at 500W with my Corsair block after a remount. Nothing special other than Gelid Extreme pads and Kryonaut Extreme paste. I think that would be the low bar @des2k... is referring to. 😂


----------



## des2k...

0451 said:


> I provide actual test results. You provide emojis. Cough up real data.


ok... don't really have a premium block

idle

a quick cp2077

Quake RTX from last week or so


----------



## geriatricpollywog

des2k... said:


> ok... don't really have a premium block
> 
> idle
> View attachment 2521489
> 
> 
> a quick cp2077
> View attachment 2521490
> 
> 
> Quake RTX from last week or so
> View attachment 2521491


What is EC temp? Is that your water temp? Are you showing the delta T of GPU temp under load versus water temp?


----------



## des2k...

0451 said:


> What is EC temp, is that your water temp? Are you showing Delta T of GPU temp under load versus water temp?


yes that's the water temp


----------



## geriatricpollywog

des2k... said:


> yes that's the water temp


I showed a 5 minute run in Port Royal at 490w with minimum, maximum, and average temperatures for GPU temperature and hotspot displayed in HWInfo64. I also described my test setup, with pictures, including the location of the water temperature probe.


----------



## des2k...

0451 said:


> I showed a 5 minute run in Port Royal at 490w with minimum, maximum, and average temperatures for GPU temperature and hotspot displayed in HWInfo64. I also described my test setup, with pictures, including the location of the water temperature probe.


not sure what the purpose of that would be; I'm showing you a better delta can be achieved, and my block mount is not that great.

it's around a 5C delta at 350W
7-10C up to 500W
13C or so at 600W+

running it longer and probing water everywhere won't change the delta; the GPU temps will climb a bit, ~2-4C (winter / summer), between 1 min and hours of gaming.

and a 13C delta at 600W is nothing special; it easily goes to 8-10C on other blocks
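One way to compare delta figures quoted at different wattages is to divide delta by power, which gives an effective die-to-water thermal resistance in C/W. A back-of-the-envelope sketch using the approximate figures above:

```python
# Effective die-to-water thermal resistance from the quoted operating points:
# delta (C) divided by power (W). The input figures are the approximate
# numbers from the post above, not measurements of any specific block.
points = {350: 5.0, 500: 8.5, 600: 13.0}   # watts -> water-to-GPU delta, C

resistance = {w: round(d / w, 4) for w, d in points.items()}
print(resistance)   # C/W at each operating point
```

A constant C/W value would mean the delta scales linearly with power, so a rising value like this suggests the block or mount copes progressively worse as the heat flux climbs.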


----------



## geriatricpollywog

des2k... said:


> not sure what the purpose of that would be; I'm showing you a better delta can be achieved, and my block mount is not that great.
> 
> it's around a 5C delta at 350W
> 7-10C up to 500W
> 13C or so at 600W+
> 
> running it longer and probing water everywhere won't change the delta; the GPU temps will climb a bit, ~2-4C (winter / summer), between 1 min and 8 hours of gaming.
> 
> and a 13C delta at 600W is nothing special; it easily goes to 8-10C on other blocks


I don’t doubt it’s possible to achieve better than a 14C delta with more mounting pressure and active cooling on the MLCCs behind the die, but as I suspected, you are unable to demonstrate this. If you are going to make a claim about delta T, at least demonstrate a testing methodology as robust as mine.


----------



## des2k...

0451 said:


> I don’t doubt it’s possible to achieve better than a 14C delta with more mounting pressure and active cooling on the MLCCs behind the die, but as I suspected, you are unable to demonstrate this. If you are going to make a claim about Delta T, at least demonstrate as robust testing methodology as I have.


yep... I certainly didn't test / post a lot of info like you did.

But... you should start Quake RTX at 2115 core on the XOC 1000W BIOS for like 6 sec and show us your GPU temp, lol.

I can guarantee you won't get 37C temps (HWiNFO / EVGA X1) unless you have a good delta.

There are screenshots of people running 600W+ for long periods with GPU temp never going over 39C a few pages back; not sure I want to waste 40+ mins trying to find them again.

Mine is not like that in summer; in winter, yes, I'll be around 39C +1-2C. This is just testing; I don't game at 600W.

And that's mostly why I finished working on my block re-mount, since it will mostly show at 600W+ if I drop the delta more.


----------



## satinghostrider

0451 said:


> I don’t doubt it’s possible to achieve better than a 14C delta with more mounting pressure and active cooling on the MLCCs behind the die, but as I suspected, you are unable to demonstrate this. If you are going to make a claim about Delta T, at least demonstrate as robust testing methodology as I have.


It is possible, but I find it hard to get less than 10 degrees on these cards unless your ambient temps are very low to start with. I'm in tropical weather, and the best my card can manage is a ~13-degree average delta at 460W.

I think as a general rule of thumb a ~15-degree delta is considered good, and anything above a 20-degree delta points to a bad mount. Everything depends on your ambient temps and power draw. A ~10-degree delta is very good, but probably means an Optimus block or a setup with an active backplate.

These are the temps I could manage on my 3090 at 460W peak draw. No active backplate, just gaming/benchmark runs. 2x EK PE rads with 6x ADATA XPG fans.
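That rule of thumb is easy to write down as a tiny helper. The thresholds below are the thread's ballpark figures for the hotspot-to-core delta, not anything official:

```python
# Rule-of-thumb mount check from the hotspot-to-core delta (C).
# Thresholds are the ballpark figures discussed above, not a spec.
def rate_mount(hotspot_delta_c):
    if hotspot_delta_c <= 10:
        return "very good"      # Optimus block / active backplate territory
    if hotspot_delta_c <= 15:
        return "good"
    if hotspot_delta_c <= 20:
        return "acceptable"
    return "bad mount"          # above ~20C, consider re-mounting

print(rate_mount(13))           # the ~13C average reported above
```

Remember the caveats above: ambient temperature and power draw shift where a given card lands, so treat the bands as a starting point rather than a verdict.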


----------



## gfunkernaught

des2k... said:


> yep... I certainly didn't test / post a lot of info like you did.
> 
> But... you should start Quake RTX at 2115 core on the XOC 1000W BIOS for like 6 sec and show us your GPU temp, lol.
> 
> I can guarantee you won't get 37C temps (HWiNFO / EVGA X1) unless you have a good delta.
> 
> There are screenshots of people running 600W+ for long periods with GPU temp never going over 39C a few pages back; not sure I want to waste 40+ mins trying to find them again.
> 
> Mine is not like that in summer; in winter, yes, I'll be around 39C +1-2C. This is just testing; I don't game at 600W.
> 
> And that's mostly why I finished working on my block re-mount, since it will mostly show at 600W+ if I drop the delta more.


I think making a statement like "you can get better delta temps with a better mount" doesn't require robust methodology and proof. It's common sense on this thread, and people like me pick it up eventually and crush that delta temp. CRUSH IT!


----------



## Falkentyne

satinghostrider said:


> It is possible, but I find it hard to get less than 10 degrees on these cards unless your ambient temps are very low to start with. I'm in tropical weather, and the best my card can manage is a ~13-degree average delta at 460W.
> 
> I think as a general rule of thumb a ~15-degree delta is considered good, and anything above a 20-degree delta points to a bad mount. Everything depends on your ambient temps and power draw. A ~10-degree delta is very good, but probably means an Optimus block or a setup with an active backplate.
> 
> These are the temps I could manage on my 3090 at 460W peak draw. No active backplate, just gaming/benchmark runs. 2x EK PE rads with 6x ADATA XPG fans.
> 
> View attachment 2521502


Are you guys talking about GPU core temp to water temp, or GPU core to GPU hotspot temp? Because from the dizzying amount of H2O posts lately, it sounds like people are talking about BOTH. I keep seeing water temps in half the posts, and then people talking about "deltas" without specifying which delta they mean!

This thread has turned into nothing but water cooling. Almost like this needs a new section just devoted to watercooling...


----------



## satinghostrider

Falkentyne said:


> Are you guys talking about GPU core temp to water temp, or GPU core to GPU hotspot temp? Because from the dizzying amount of H2O posts lately, it sounds like people are talking about BOTH. I keep seeing water temps in half the posts, and then people talking about "deltas" without specifying which delta they mean!
> 
> This thread has turned into nothing but water cooling. Almost like this needs a new section just devoted to watercooling...


I'm referring expressly to GPU core to hotspot temps only. I thought that was the context everyone was using much earlier in this thread when talking about hotspots.


----------



## des2k...

Falkentyne said:


> Are you guys talking about GPU core temp to water temp, or GPU core to GPU hotspot temp? Because from the dizzying amount of H2O posts lately, it sounds like people are talking about BOTH. I keep seeing water temps in half the posts, and then people talking about "deltas" without specifying which delta they mean!
> 
> This thread has turned into nothing but water cooling. Almost like this needs a new section just devoted to watercooling...


I was only talking about the water temp to GPU temp delta. I didn't mention GPU hotspot temps.

I wouldn't even talk about GPU hotspots; it's very model-specific, and it looks like the vBIOS also plays a role in how it's reported / modified.

On hotspot delta (GPU temp to GPU hotspot), 0451 has a better delta than me. I know how to fix mine (need to finish lapping the block and do a quick pass on the die); I just don't care about the last 1-3C of better temps.

He has an 11C delta, I think, GPU temp to GPU hotspot, but also a way higher water to GPU temp delta.

On my side I have a 13C hotspot delta, but just a ~8C water to GPU temp delta at like 450W. So it's a lower hotspot temp in the end, if that makes any sense.


----------



## Arizor

What's more interesting to me is what the different deltas indicate. For example, does the delta between water temp and GPU have a set of different causes than the delta between hotspot and GPU? Or do they overlap too much to say?


----------



## des2k...

Arizor said:


> What's more interesting to me is what the different deltas indicate. For example, does the delta between water temp and GPU have a set of different causes than the delta between hotspot and GPU? Or do they overlap too much to say?


All I know is that GPU temp is the average of all sensors on the die and works the same for all cards.

Hotspot temp could be anything: either an actual hot spot on the die, or the vBIOS just taking GPU temp and adding 10C with no actual higher temp on the die.
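The "average of all sensors" part is easy to picture; whether hotspot is a true per-sensor maximum or just a vBIOS offset is the open question. A sketch of the distinction, with made-up per-sensor die readings:

```python
# GPU temp as the average of all die sensors, vs. hotspot as the hottest one.
# Whether real hotspot reporting works this way is the open question above;
# the per-sensor readings here are entirely hypothetical.
sensors = [41.0, 43.5, 42.2, 47.8, 44.1]   # C, made-up die sensor readings

gpu_temp = sum(sensors) / len(sensors)     # what "GPU temp" reports
hotspot = max(sensors)                     # a max-style hotspot reading
print(round(gpu_temp, 1), hotspot)
```

On the offset theory, hotspot would instead track `gpu_temp + 10` regardless of how uneven the individual sensors are, which is why the two explanations are hard to tell apart from the reported numbers alone.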


----------



## Falkentyne

des2k... said:


> I was only talking about the water temp to GPU temp delta. I didn't mention GPU hotspot temps.
> 
> I wouldn't even talk about GPU hotspots; it's very model-specific, and it looks like the vBIOS also plays a role in how it's reported / modified.
> 
> On hotspot delta (GPU temp to GPU hotspot), 0451 has a better delta than me. I know how to fix mine (need to finish lapping the block and do a quick pass on the die); I just don't care about the last 1-3C of better temps.
> 
> He has an 11C delta, I think, GPU temp to GPU hotspot, but also a way higher water to GPU temp delta.
> 
> On my side I have a 13C hotspot delta, but just a ~8C water to GPU temp delta at like 450W. So it's a lower hotspot temp in the end, if that makes any sense.





Arizor said:


> What's more interesting to me is what the different deltas indicate. For example, does the delta between water temp and GPU have a set of different causes than the delta between hotspot and GPU? Or do they overlap too much to say?



See what I mean now?
Everyone's talking about something different, since this thread has devolved into water cooling almost exclusively.


----------



## gfunkernaught

We are drowning in water-cooled 3090 deltas


----------



## Arizor

Switch up the convo @Falkentyne , happy to see discussion on anything else 3090 OC.


----------



## Nizzen

D E L T A Force 😆


----------



## geriatricpollywog

It's summertime in the northern hemisphere. People want to talk about cooling.

I'm just glad people finally shut up about their power limits after I leaked the 1000w bios 😆


----------



## geriatricpollywog

des2k... said:


> yep... I certainly didn't test / post a lot of info like you did.
> 
> But... you should start Quake RTX at 2115 core on the XOC 1000W BIOS for like 6 sec and show us your GPU temp, lol.
> 
> I can guarantee you won't get 37C temps (HWiNFO / EVGA X1) unless you have a good delta.
> 
> There are screenshots of people running 600W+ for long periods with GPU temp never going over 39C a few pages back; not sure I want to waste 40+ mins trying to find them again.
> 
> Mine is not like that in summer; in winter, yes, I'll be around 39C +1-2C. This is just testing; I don't game at 600W.
> 
> And that's mostly why I finished working on my block re-mount, since it will mostly show at 600W+ if I drop the delta more.


It says you are pulling 459w.

Anyway, I don't use the 1000w bios nor do I think 6 seconds is long enough to generate meaningful data. I also couldn't get Quake II RTX to run in windowed mode. I can alt/tab out, but the GPU temp drops considerably and that would be cheating.

However, since we are making arbitrary tests, here is 10 minutes of Furmark at 520w at 38C.


----------



## kairi_zeroblade

Delta saga still continues..


----------



## superchicco

long2905 said:


> iChill X4 user here. I'm currently using the XOC 1000W vBIOS and set an undervolt of [email protected]
> The stock air cooler is very hot and the paste job is bad, very bad. After repasting with Thermalright TFX, things improved considerably, but still not as great as other top-tier cards. The max PR score I can get out of the card on air with the XOC BIOS is 14300.
> 
> The strange boost behavior is the same on every card. To get consistent boost and performance, you have to tweak the voltage curve. A good air cooler or waterblock will help immensely.
> 
> If you put the card under water, most of the issues go away. Get the Alphacool block if you can, as that is what they used for the integrated-waterblock Frostbite version.
> 
> I plan to finish gathering numbers and making a user review of the card on YouTube, and then sell it for a Strix or a Gaming X Trio if I can't find a Strix at a reasonable price.
> 
> Let me know if you have any more questions.


Hello, I own an iChill X4 RTX 3090 and the GPU core and VRAM run very hot compared to a friend's 3090 ROG we tested in the same PC. I would like to know what solution could fix this problem: changing the thermal pads, or something else? Can you please help me, since I see you own the same GPU? I can't find much on YouTube!


----------



## gfunkernaught

So the Chuck Norris vibes were accurate.


----------



## RosaPanteren

Anyone have an idea as to why I don't see a higher board power draw on the KP 520w ReBar (ver 94.02.42.C0.0C)? I got a 3x8 pin card

Bugged readings or am I hitting some kind of cap on something? 

Gpu-Z states PerfCap Pwr, but I see others report readings close to 520w cap, my readings are 20-40w away...

For the Furmark stress test, GPU-Z reports a max of 501W and HWiNFO reports 480W on the same run.

PortRoyal doesn't seem to draw more than 490W either

des2k... said:


> What's the memory temps with Quake RTX after 5mins or so ?


56C after 12 min or so, but the power reading states only 470W... Tried both full screen and windowed mode, but that didn't make a difference.

Unfortunately I saved the screen grab as a .jpg, so the pic is blurry.

jura11 said:


> On GameRock OC can you monitor VRM temperatures or this is not exposed in HWiNFO or any other monitoring software?


Unfortunately not...


----------



## Nizzen

RosaPanteren said:


> Anyone have an idea as to why I don't see a higher board power draw on the KP 520w ReBar (ver 94.02.42.C0.0C)? I got a 3x8 pin card
> 
> Bugged readings or am I hitting some kind of cap on something?
> 
> Gpu-Z states PerfCap Pwr, but I see others report readings close to 520w cap, my readings are 20-40w away...
> 
> For Furmark stresstest Gpu-Z reports a max of 501w and HwInfo 480w on the same run.
> 
> View attachment 2521556
> 
> 
> 
> PortRoyal doesn't seem to draw more than 490W either
> 
> View attachment 2521557
> 
> 
> 
> 
> 
> 
> 56c after 12 min or so, but the pwr reading states only 470w....... Tried both full screen and window mode, but that didn't make a difference.
> 
> Unfortunately I saved the screen grab in .jpg so the pic is blurry.
> 
> View attachment 2521558
> 
> 
> 
> 
> Unfortunately not......


Don't worry about what the reading says; care about the result.


----------



## gfunkernaught

Nizzen said:


> Don't bother what the reading says. Care about the result


Yeah but the reading will help when troubleshooting...for example if you want to rule out your PSU, etc.


----------



## yzonker

RosaPanteren said:


> Anyone have an idea as to why I don't see a higher board power draw on the KP 520w ReBar (ver 94.02.42.C0.0C)? I got a 3x8 pin card
> 
> Bugged readings or am I hitting some kind of cap on something?
> 
> Gpu-Z states PerfCap Pwr, but I see others report readings close to 520w cap, my readings are 20-40w away...
> 
> For Furmark stresstest Gpu-Z reports a max of 501w and HwInfo 480w on the same run.
> 
> View attachment 2521556
> 
> 
> 
> PortRoyal doesn't seem to draw more than 490W either
> 
> View attachment 2521557
> 
> 
> 
> 
> 
> 
> 56c after 12 min or so, but the pwr reading states only 470w....... Tried both full screen and window mode, but that didn't make a difference.
> 
> Unfortunately I saved the screen grab in .jpg so the pic is blurry.
> 
> View attachment 2521558
> 
> 
> 
> 
> Unfortunately not......


Maybe hitting the PCIE slot limit. I'm not in front of my machine to check, but I think it's 82w.


----------



## Falkentyne

RosaPanteren said:


> Anyone have an idea as to why I don't see a higher board power draw on the KP 520w ReBar (ver 94.02.42.C0.0C)? I got a 3x8 pin card
> 
> Bugged readings or am I hitting some kind of cap on something?
> 
> Gpu-Z states PerfCap Pwr, but I see others report readings close to 520w cap, my readings are 20-40w away...
> 
> For Furmark stresstest Gpu-Z reports a max of 501w and HwInfo 480w on the same run.
> 
> View attachment 2521556
> 
> 
> 
> PortRoyal doesn't seem to draw more than 490W either
> 
> View attachment 2521557
> 
> 
> 
> 
> 
> 
> 56c after 12 min or so, but the pwr reading states only 470w....... Tried both full screen and window mode, but that didn't make a difference.
> 
> Unfortunately I saved the screen grab in .jpg so the pic is blurry.
> 
> View attachment 2521558
> 
> 
> 
> 
> Unfortunately not......


You're either hitting the PCIE Slot Limit or the MVDDC power limit. I don't know what the limits are on the 520W Kingpin bios, but on the original Palit Bios, it's 121W for MVDDC and between 75W-83W for PCIE Slot. 143W MVDDC is a bit high for a port royal run.


----------



## GAN77

Falkentyne said:


> I don't know what the limits are on the 520W Kingpin bios,


Here is a screenshot of the Kingpin BIOS. Where in it is MVDDC?


----------



## Nizzen

RosaPanteren said:


> Anyone have an idea as to why I don't see a higher board power draw on the KP 520w ReBar (ver 94.02.42.C0.0C)? I got a 3x8 pin card
> 
> Bugged readings or am I hitting some kind of cap on something?
> 
> Gpu-Z states PerfCap Pwr, but I see others report readings close to 520w cap, my readings are 20-40w away...
> 
> For Furmark stresstest Gpu-Z reports a max of 501w and HwInfo 480w on the same run.
> 
> View attachment 2521556
> 
> 
> 
> PortRoyal doesn't seem to draw more than 490W either
> 
> View attachment 2521557
> 
> 
> 
> 
> 
> 
> 56c after 12 min or so, but the pwr reading states only 470w....... Tried both full screen and window mode, but that didn't make a difference.
> 
> Unfortunately I saved the screen grab in .jpg so the pic is blurry.
> 
> View attachment 2521558
> 
> 
> 
> 
> 
> Unfortunately not......


What GPU brand are you using? I can't see what it is.
Have you tried the High Performance power plan in Windows?
NVIDIA Control Panel: Power management mode -> "Prefer maximum performance"?


Try the Asus "XOC" 520W rebar BIOS:

Asus xoc 520w rebar

PS: This BIOS reports 1000W, but it draws 520W max on my Strix 3090

Just in case:



Code:


:: Run from an administrator CMD prompt.
:: Note: -pl takes the limit in watts as a plain number; "520w" is rejected.
cd "C:\Program Files\NVIDIA Corporation\NVSMI"
nvidia-smi.exe -pl 520

restart


----------



## RosaPanteren

yzonker said:


> Maybe hitting the PCIE slot limit. I'm not in front of my machine to check, but I think it's 82w.


The PCIe slot is hitting 80W+ and the 2nd and 3rd power cables are only hitting 125-139W, so could it be bad power load balancing?

The 3rd power cable is a 6+2 cable and 1 & 2 are 8-pins; could this matter?

The PSU is an RM1000x and I might be pushing 85%+ usage at full load, but I haven't measured this exactly.


----------



## RosaPanteren

Nizzen said:


> What gpu brand are you using? Can't see what it is?
> Tried high performance powerplan in windows?
> nvidia control panel : Power management mode -> "Prefer maximum performance " ?
> 
> 
> Try Asus "xoc" 520w rebar bios
> Asus xoc 520w rebar
> 
> PS: This bios says 1000w, but it's drawing 520w max on my Strix 3090


Palit Gamerock non OC version

Both win power plan and nvidia control are set to high performance.

Thx I’ll test the Asus bios👍


----------



## J7SC

Nizzen said:


> What gpu brand are you using? Can't see what it is?
> Tried high performance powerplan in windows?
> nvidia control panel : Power management mode -> "Prefer maximum performance " ?
> 
> 
> Try Asus "xoc" 520w rebar bios
> Asus xoc 520w rebar
> 
> PS: This bios says 1000w, but it's drawing 520w max on my Strix 3090
> 
> Just in case:
> 
> 
> 
> Code:
> 
> 
> C:\Program Files\NVIDIA Corporation\NVSMI
> CMD (administrator mode)
> nvidia-smi.exe -pl 520
> 
> restart


...that Asus XOC 520W rebar link is dead (says file removed) - do you have another link?


----------



## Falkentyne

GAN77 said:


> Here is a screenshot of the Kingpin BIOS. Where in it MVDDC ?
> 
> View attachment 2521580


VRAM = MVDDC (the memory rail power, more or less).


----------



## Nizzen

J7SC said:


> ....that Asus XOC 520W rebar link is dead  ( says file removed) - do you have another link ?


updated


----------



## J7SC

Nizzen said:


> What gpu brand are you using? Can't see what it is?
> Tried high performance powerplan in windows?
> nvidia control panel : Power management mode -> "Prefer maximum performance " ?
> 
> 
> Try Asus "xoc" 520w rebar bios
> 
> Asus xoc 520w rebar
> 
> PS: This bios says 1000w, but it's drawing 520w max on my Strix 3090
> 
> Just in case:
> 
> 
> 
> Code:
> 
> 
> C:\Program Files\NVIDIA Corporation\NVSMI
> CMD (administrator mode)
> nvidia-smi.exe -pl 520
> 
> restart


Thanks !


----------



## Arizor

Nizzen said:


> updated


Attachment not available again, dammit  . Why don't ASUS just release this on their site?


----------



## Nizzen

Arizor said:


> Attachment not available again, dammit  . Why don't ASUS just release this on their site?


Both links working here... Strange

Did you check the attachment?


----------



## des2k...

0451 said:


> It says you are pulling 459w.
> 
> Anyway, I don't use the 1000w bios nor do I think 6 seconds is long enough to generate meaningful data. I also couldn't get Quake II RTX to run in windowed mode. I can alt/tab out, but the GPU temp drops considerably and that would be cheating.
> 
> However, since we are making arbitrary tests, here is 10 minutes of Furmark at 520w at 38C.
> 
> View attachment 2521523


Quake RTX is not 460W at 2115 core. I included a 5-min HWiNFO run; the pin 1 & pin 2 values are verified with a current clamp.

No doubt the temps will climb after hours of playing, but you get the general idea of block performance from this screenshot.


----------



## Nizzen

des2k... said:


> mmm
> 
> 
> Quake RTX is not 460w at 2115 core, Included hwinfo 5mins run, pin1 & pin2 values are verified with current clamp
> 
> View attachment 2521606


What is the total power draw from the wall? Checked with a "Kill A Watt" or something?


----------



## RosaPanteren

Nizzen said:


> What is the total powerdraw from the wall? Checked with a "kill a watt" or something


I'll do that.

Btw, what is it with this local "Mjød" you're serving 

GPU-Z tells me MVDDC is 605W and HWiNFO shows FBVDD (ASUS's name for MVDDC, I believe) at approximately the same.

I set the PL to 520W via nvidia-smi, but this MVDDC reading makes me a bit more than concerned.

The 3rd 8-pin shows a 150W draw at idle.

Buggy readings, or should I be concerned? If the memory actually drew 600W, I guess it would have been fried by now.


----------



## Arizor

Nizzen said:


> Both links working here... Strange
> 
> Did you check the attachment?


Got it cheers bud!


----------



## des2k...

Nizzen said:


> What is the total powerdraw from the wall? Checked with a "kill a watt" or something


No need for a Kill A Watt; my amp clamp can read that. I'll find a split wire extension, but I'm not expecting lower wattage. It's going to show very high power at the wall: CPU, 29 fans, 4 pumps @ 15W+ and a bunch of USB SSDs/NVMe 🙄


----------



## tps3443

0451 said:


> It says you are pulling 459w.
> 
> Anyway, I don't use the 1000w bios nor do I think 6 seconds is long enough to generate meaningful data. I also couldn't get Quake II RTX to run in windowed mode. I can alt/tab out, but the GPU temp drops considerably and that would be cheating.
> 
> However, since we are making arbitrary tests, here is 10 minutes of Furmark at 520w at 38C.
> 
> View attachment 2521523


I'm impressed!!! Your temps are destroying mine lol.

That's great!! If your water temp is 30-31C, you're right at that 10C delta over water temp. I'm shocked!!!! That's amazingly good for this KP block. EVGA claimed a 10C delta; well, there it is at 520 watts.


This is what I was referring to the entire time, if anyone else was wondering: (load GPU temp) minus (load water temp).
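The block delta being discussed is just that subtraction; a trivial sketch, where the sample temperatures are illustrative ballpark figures from this exchange rather than anyone's exact logs:

```python
def block_delta(gpu_temp_c: float, water_temp_c: float) -> float:
    """Water-block delta-T: load GPU core temp minus loop water temp."""
    return gpu_temp_c - water_temp_c

# Illustrative numbers: ~38C core at ~520W with ~30.5C loop water
print(block_delta(38.0, 30.5))  # 7.5
```

Comparing that result against EVGA's claimed 10C delta for the Kingpin block is the whole test: anything at or under 10C at ~520W means the block is performing as advertised.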


----------



## geriatricpollywog

I’m


tps3443 said:


> Im impressed!!! Your temps are destroying mine lol.
> 
> Thats great!! If your water temp is 30-31C I mean, your right at that 10C delta over Water temp. I’m shocked!!!! That’s amazingly good for this KP block. Evga claimed 10C delta. well, there it is at 520 watts.
> 
> 
> Which I was referring to this the entire time, If anyone else was wondering (Load GPU temp) [minus or subtract] (load water temp)


Actually I found my fancy pants thermocouple is reading 3C below my AC thermostat and my household thermometer. How embarrassing… maybe I need to get it calibrated or get something from Performance PCs. My actual delta may be 3C better than I thought.

Edit: I also switched the inlet and outlet and I added another pump and another 2 radiators to the setup.


----------



## mastah

Somehow I can't flash the strix XOC bios



Code:


nvflash64 -6 STRIX3090_XOC_rebar.rom




Code:


NVIDIA Firmware Update Utility (Version 5.660.0)
Copyright (C) 1993-2020, NVIDIA Corporation. All rights reserved.


Checking for matches between display adapter(s) and image(s)...

Adapter: NVIDIA GeForce RTX 3090 (10DE,2204,1043,87AF) H:--:NRM  S:00,B:54,D:00,F:00


EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page

Current      - Version:94.02.42.00.A9 ID:10DE:2204:1043:87AF
               GPU Board (Normal Board)
Replace with - Version:94.02.59.00.D7 ID:10DE:2204:1043:87AF
               GPU Board (Normal Board)

Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page




BIOS Cert 3.0 Verification Error, Update aborted.


Nothing changed!



ERROR: Invalid firmware image detected.


----------



## mastah

Got it working; nvflash64 was too old. Works with nvflash 5.715!
Nizzen, where does this BIOS come from?!


----------



## Arizor

The ASUS XOC BIOS works well for games and benchmarks for me, but it's absolutely killed my mining hashrate (from 125 MH/s to... 2...).


----------



## des2k...

tps3443 said:


> Im impressed!!! Your temps are destroying mine lol.
> 
> Thats great!! If your water temp is 30-31C I mean, your right at that 10C delta over Water temp. I’m shocked!!!! That’s amazingly good for this KP block. Evga claimed 10C delta. well, there it is at 520 watts.
> 
> 
> Which I was referring to this the entire time, If anyone else was wondering (Load GPU temp) [minus or subtract] (load water temp)


yeah looks good, but we don't know the water temp / ambient

I just ran ~540W Furmark; I'm around a 10C delta, 28C water, 38C GPU. This was just 5 mins.


----------



## mirkendargen

Arizor said:


> The ASUS XOC BIOS works well for games and benchmarks for me, but it's absolutely killed my mining hashrate (from 125mh/s to... 2...).


Get NvidiaProfileInspector, look in the "Common" section, and set "CUDA - Force P2 State" to off.


----------



## Arizor

Cheers @mirkendargen !


----------



## yzonker

des2k... said:


> no need for kill a watt, my amp clamp can read that ; I'll find a split wire extension but not expecting lower wattage, it's going to show very high power at the wall: cpu, 29 fans, 4 pumps @ 15w+ and a bunch of usb ssd,nvme 🙄


No need for that anyway. I measured my Zotac 3090, also running the KP XOC BIOS, with an amp clamp and got approximately the same result: true power draw is approximately 0.66 times the power indicated in monitoring software. It varies a little depending on what is running, but not enough to matter for this block delta discussion.
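That clamp-derived correction can be written as a one-line scaling; a hedged sketch, treating the 0.66 ratio above as a constant (it is one user's measurement on the KP XOC BIOS and will differ per card, BIOS, and workload):

```python
# Clamp-measured ratio of true draw to software-indicated draw on the
# KP XOC bios, per the post above. Workload-dependent approximation.
CLAMP_FACTOR = 0.66

def true_power_w(indicated_w: float) -> float:
    """Estimate actual board draw from the software-reported figure."""
    return indicated_w * CLAMP_FACTOR

print(round(true_power_w(520)))  # 343
```

In other words, an "indicated" 520W on this BIOS corresponds to roughly 340W at the clamp, which is why wall measurements look so much lower than the monitoring software suggests.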


----------



## tps3443

0451 said:


> I’m
> 
> Actually I found my fancy pants thermocouple is reading 3C below my AC thermostat and my household thermometer. How embarrassing… maybe I need to get it calibrated or get something from Performance PCs. My actual delta may be 3C better than I thought.
> 
> Edit: I also switched the inlet and outlet and I added another pump and another 2 radiators to the setup.


I'm really shocked. Yeah, I'm looking forward to re-doing my Kingpin GPU block.

To help with GPU temps, and mostly my water temps, I have sold my 7980XE direct die, and I actually just bought a super binned 10900K, direct die, an EVGA Z490 Dark mobo, and an Optimus Sig V2 nickel.

My 7980XE would idle doing nothing on the desktop at 250-300 watts. And that's with my GPU in optimal power mode too. (Just 1 D5)

R23 would go north of 950 watts on the CPU at the wall.

Since I really didn't need an 18/36 beast at 5GHz, I figured a (5.2GHz daily 10900K)/Z490 Dark would make more sense. That 18/36 7980XE would chew up a 32/64 Threadripper 2990WX in R23 multithreaded lol. But dayumm, it would use a ton of power while doing it lol 

Here’s a pic of the new setup, should be here Friday!


----------



## tps3443

des2k... said:


> yeah looks good, but we don't know the water temp / ambient
> 
> I just run ~540w furemark, I'm around 10c delta, 28c water 38c gpu, this was just 5mins


Hey, yeah, I'm not doubting how good and how cool your GPU runs. That's not a problem at all.

I am doubting mine, and the Kingpin 3090 block in general lol. My temps really blow, plain and simple...

So I'm just excited to see (actual) good temps on the 3090 Kingpin HC block from another user who has one. Everyone is reporting poor temps, even people with good cooling setups.


Looking forward to fixing mine up.

And I was so tempted to just buy that Optimus Kingpin GPU block. It looks sick, but man, it's so expensive. I can kinda see a glimmer of light with this Kingpin block. So we shall see.


----------



## geriatricpollywog

tps3443 said:


> Im really shocked. Yeah I’m looking forward to re-doing my Kingpin GPU block.
> 
> To help aid with GPU temps, and mostly my water temps. I have sold my 7980XE Direct die, and I actually just bought a super binned 10900K, direct die, and a Evga Z490 Dark mobo, and Optimus Sig V2 nickel.
> 
> My 7980XE would idle doing nothing on the desktop at 250-300 watts. And that’s with my GPU in optimal power mode too. (Just 1 D5)
> 
> R23 would go north of 950 watts on the CPU at the wall.
> 
> So Since I really didn’t need a 18/36 beast at 5Ghz. I figured a (5.2Ghz daily 10900K)/Z490 Dark would make more sense. That 18/36 7980XE would chew up a 32/64 threadripper 2990WX in R23 multithreaded lol. But dayumm, it would use a ton of power while doing it lol
> 
> Here’s a pic of the new setup, should be here Friday!


Did you buy Than Nguyen's SP114 CPU?


----------



## tps3443

0451 said:


> Did you buy Than Nguyen's SP114 CPU?


No I bought all of this stuff from Mr.Fox, from NBRForums

From what I’ve heard, mr fox was binning 10900K’s for someone at HidEvolution. And HIDEvolution sales custom high end DTR laptops with Prema bios on them, with desktop socketed CPU’s.

So, it’s certainly a good one.

I just wanted something with less power consumption. If I was actually using 36 threads that’d be different.


----------



## tps3443

@des2k...

I didn't realize how good these HWLabs GTX/GTR radiators really were.

I've been doing some research, and just a single 280mm HWLabs GTX/GTR in push/pull @ 1,850RPM can slightly outperform my (2) EKWB SE 360's in push only at 1,850RPM, with my 7980XE thrown in the loop and inside of a case too. This would explain why @0451 had such good temps when he had all his stuff on the table, running reasonably cool temps with just that little 280mm.

No exaggeration here, I'm shocked how good these rads really are. This single 280 GTX/GTR seems to scale extremely well with fans all the way up to 3,000 RPM. And apparently a HWLabs 280 GTX/GTR is just barely equivalent to the HWLabs 360 GTX/GTR model; Xtremerigs explained this phenomenon really well. The HWLabs 280 GTX is a beast lol.

I'm amazed by this. I didn't know they were so good. This is all making me rethink what crap radiators I have been running lol. I'm about ready to toss them in a bin.


----------



## geriatricpollywog

tps3443 said:


> @des2k...
> 
> I didn’t realize how good these HWlabs GTX/GTR radiators really were.
> 
> I’ve been doing some research, and just a single 280MM HWLabs GTX/GTR in push/pull @1,850RPM, can slightly out perform my (2) EKWB SE 360’s in push only at 1,850RPM. So, with my 7980XE thrown in the loop, and inside of a case too. This would explain why @0451 had such good temps, when he had all his stuff on the table running reasonably cool temps with just that little 280mm.
> 
> Like no exaggeration here, I’m shocked how
> good these rads really are. This single 280GTX/GTR seem to scale extremely well with fans all the way up to 3,000 RPM. And apparently just a HWLabs 280GTX/GTR is just barely equivalent to the HWLabs 360GTX/GTR Model. (xtremerigs) explained this phenomenon really well. The HWLabs 280GTX is a beast lol.
> 
> Im amazed by this. I didn’t know they were so good. This is all making me rethink what the hell crap radiators I have been running Lol. Im
> about ready to toss them in a bin,


The Delta T result I posted 2 days ago was as described, with just a 280MM Black Ice Nemesis GTR. Bear in mind, this radiator is outperformed by the GTX under 1200RPM. It is best used when you have a mix of radiators and need 1 that performs well at high RPM.

The Furmark result was with the setup below: 280 HW Labs GTR, 280 EK CE, 360 EK SE, 2 D5 pumps


----------



## long2905

superchicco said:


> Hello, I own an iChill X4 RTX 3090. The GPU core and VRAM are very hot compared to my friend's 3090 ROG; we tested on the same PC. I would like to know what solution I have to fix this problem: change the thermal pads, or something else? Can you please help me, as I see you own the same GPU? On YouTube I can't find much!!


Yeah, wow, you quoted my post from a while ago. Well, to set expectations: the stock cooler on the X4 is very bad; comparing it to the Strix is ridiculous, to be honest. You have 2 options:

1. Go full custom loop watercooling and get a Bykski active backplate combo, which I'm using right now. This helps tremendously, but it can be costly.
2. Get new thermal paste (Thermalright TF8) and thermal pads (Gelid Extreme) and you should see better temps as well. It won't be as good as water cooling, but it will be an improvement over the stock job. Do set a custom fan curve if you can, but be aware that the stock fans are extremely loud.


----------



## devilhead

Power hungry strix  was around 1.23v


----------



## West.

I've been reading comments on this thread for a while and I've learnt a lot from all you guys' settings and experience. Thanks for all the sharing.

This is my first PR attempt on a stock 3090 HOF (non OC Lab) with the KP 1KW rebar BIOS. 26-27C ambient, 100% all fans with the side panel off.
I scored 15,229 in Port Royal.

Fan control is broken so I have to use the 100% fan button to crank those fans up.

What PR scores are other air-cooled cards getting? Is 15.2k considered decent on air?

Originally, my plan was to leave everything stock and get the max out of it. But recently I found out that there's oil around the fan blades, GPU shroud and side panel... (Really? On a $2500 card?)

I'm planning to replace those oily pads. Currently considering Gelid Ultimate for both front and back, but I've seen some good results with Odyssey on the back. Which option is better?

Has anyone swapped pads on HOF yet? What thickness should I get?

Any recommendations are welcome. Thanks!


----------



## GRABibus

mastah said:


> Somehow I can't flash the strix XOC bios
> 
> 
> 
> Code:
> 
> 
> nvflash64 -6 STRIX3090_XOC_rebar.rom
> 
> 
> 
> 
> Code:
> 
> 
> NVIDIA Firmware Update Utility (Version 5.660.0)
> Copyright (C) 1993-2020, NVIDIA Corporation. All rights reserved.
> 
> 
> Checking for matches between display adapter(s) and image(s)...
> 
> Adapter: NVIDIA GeForce RTX 3090 (10DE,2204,1043,87AF) H:--:NRM  S:00,B:54,D:00,F:00
> 
> 
> EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page
> 
> Current      - Version:94.02.42.00.A9 ID:10DE:2204:1043:87AF
> GPU Board (Normal Board)
> Replace with - Version:94.02.59.00.D7 ID:10DE:2204:1043:87AF
> GPU Board (Normal Board)
> 
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page
> 
> 
> 
> 
> BIOS Cert 3.0 Verification Error, Update aborted.
> 
> 
> Nothing changed!
> 
> 
> 
> ERROR: Invalid firmware image detected.


Which card are you flashing this BIOS on?

I tested 2 Strix OC BIOSes and they suck.

Max power draw was 520W with poor gaming performance.

The BIOS in my signature (EVGA 500W XOC with rebar) is much better in games and benchmarks.


----------



## GRABibus

RosaPanteren said:


> I'll do
> 
> Btw what is it with this local "Mjød" your serving
> 
> Gpu-Z tells me MVDDC is 605w and HwInfo shows FBVDD(Asus for MVDDC I believe) at approx the same.
> 
> I set the PL to 520w by nvidia-smi, but this MVDDC makes me a bit more than concerned.
> 
> 3rd 8 pin power shows 150w draw on idle.
> 
> Buggy readings or should I be concerned? If the mem drew 600w I guess they would have been fried by now
> 
> 
> View attachment 2521613


The NORWEGIAN cooperation 😊


----------



## GRABibus

West. said:


> I've been reading comments on this threat for a while and I learnt a lot from all you guys' settings and experience. Thanks for all the sharing.
> 
> This is my first PR attempt on stock 3090 HOF (non oc lab) with KP1KW rebar bios. 26-27c ambient, 100% all fans with side panel off.
> I scored 15 229 in Port Royal
> 
> Fan control is broken so I have to use the 100% fan button to crank those fans up.
> 
> What PR score does other air cooled cards getting? Is 15.2k consider decent on air?
> 
> View attachment 2521680
> 
> 
> Originally, my plan was to leave everything stock and get the max out of it. But recently I found out that there's oil around fan blades, gpu shroud and side panel... (Really? On a $2500 card?)
> 
> I'm planning to replace those oily pads. Currently considering gelid ultimate for both front and back. But I've seen some good results with odyssey on the back. Which option is better
> 
> Has anyone swapped pads on HOF yet? What thickness should I get?
> 
> Any recommendations are welcome. Thanks!


This score is completely in line.
Did you enable ReBar in the « 3DMark Port Royal DLSS » profile with NVIDIA Profile Inspector?

Concerning the Thermalright Odyssey pads: I mounted them on the memory chips of my Strix on air and my memory temps decreased by 10 degrees. Very happy with this.

But people here prefer Gelid, which are softer.


----------



## Falkentyne

GRABibus said:


> on which card do you flash this bios ?
> 
> I tested 2 strix oc bios and they suck.
> 
> max Power draw was 520W with poor gaming performances.
> 
> the bios in my signature (EVGA 500W XOC with rebar) is much better in games and benchmarks.


The Strix 1kW OC BIOSes, including the rebar ones, are broken.
The SRC3 power limit is capped at 150W (default) / 175W (max, with the Afterburner power limit slider >100%), which causes a "TDP Normalized" power limit throttle when 8-pin #3 reaches 175W.
Elmor discovered this.


----------



## D-EJ915

tps3443 said:


> Im really shocked. Yeah I’m looking forward to re-doing my Kingpin GPU block.
> 
> To help aid with GPU temps, and mostly my water temps. I have sold my 7980XE Direct die, and I actually just bought a super binned 10900K, direct die, and a Evga Z490 Dark mobo, and Optimus Sig V2 nickel.
> 
> My 7980XE would idle doing nothing on the desktop at 250-300 watts. And that’s with my GPU in optimal power mode too. (Just 1 D5)
> 
> R23 would go north of 950 watts on the CPU at the wall.
> 
> So Since I really didn’t need a 18/36 beast at 5Ghz. I figured a (5.2Ghz daily 10900K)/Z490 Dark would make more sense. That 18/36 7980XE would chew up a 32/64 threadripper 2990WX in R23 multithreaded lol. But dayumm, it would use a ton of power while doing it lol
> 
> Here’s a pic of the new setup, should be here Friday!


I didn't test my Optimus on my Dark board, but on my Apex without a backplate I got instability and couldn't hit my normal OC. If you have issues, try mounting the kit with a backplate.


----------



## steadly2004

yzonker said:


> This is the one I've used on my Zotac 3090 Trinity.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> And yes, your board looks nearly identical to a reference PCB like my Zotac as @des2k... stated.
> 
> 
> 
> https://www.techpowerup.com/review/zotac-geforce-rtx-3090-trinity/images/front.jpg


Thanks for the recommendation. I flashed this BIOS and it's working.
I'm actually getting slightly better efficiency when mining, and higher 3DMark scores.

The only game I tried was Cyberpunk 2077, and it's crashing every 2-10 min. However, I also flashed a beta BIOS to my mobo (attic x570i gaming) trying to get Curve Optimizer, but it didn't even show up in the BIOS settings, so I don't know if the instability is from the BIOS on the mobo or on the card. I loaded BIOS defaults and XMP only, without tuning RAM timings. It passes 3DMark without any problems. I haven't tried any other games. I have so many variables I'm not sure where to start. I am on Win 11, which itself might be a problem. I'll figure it out, but most likely it's the beta mobo BIOS.


----------



## Arizor

CP2077 is the absolute gauntlet for card stability (as long as you're using RTX and DLSS as we are); it gives the card such an intense workout.

I actually find the mobo Curve Optimizer settings can crash a lot. When I had my Curve Optimizer set to negative 30 (i.e. an undervolt) I was getting resets (Event 41 in Windows Event Viewer) every few days; same if I tinkered with the PBO MHz limit. A lot of those settings are basically just waiting to crash your comp, and honestly, if you're playing at 1440p or above on a 3090, tinkering with your CPU is unlikely to provide big gains. Try setting it all to default/auto and see if you still get crashes.


----------



## mirkendargen

Arizor said:


> CP2077 is the absolute gauntlet for card stability (as long as you're using RTX and DLSS as we are), it gives the card such an intense workout.
> 
> I actually find the mobo curve optimiser settings can crash a lot. When I had my curve optimiser set to negative 30 (i.e. an undervolt) I was getting resets (event 41 in windows event viewer) every few days; same if I tinkered with PBO mhz limit. Lots of those settings basically are just hankering to crash your comp, and honestly if you're playing 1440p or above on a 3090, tinkering with your CPU is unlikely to provide big gains. Try setting it all to default/auto and see if you get crashes still.


The Metro Exodus benchmark seems to be a way tougher test than Cyberpunk. It's my go-to for finding actual stability.


----------



## tps3443

0451 said:


> I’m
> 
> Edit: I also switched the inlet and outlet and I added another pump and another 2 radiators to the setup.


I might order (3) HWLabs 280mm GTR's; I may just run (2) of them on my 3090 Kingpin and (1) for my CPU. I have a big 1080mm external-sized Alphacool rad coming, but it's shipping from Germany, which I didn't even know at purchase. No updates at all on it.



D-EJ915 said:


> I didn't test my optimus on my dark board but on my apex without a backplate I got instability and couldn't hit my normal oc. If you have issue try mounting kit with a backplate.



Fortunately, I don't have to mount it lol. The CPU is already in the board, the direct die frame is installed, and the Optimus Signature V2 nickel block is already mounted.

The previous owner is pretty good at these things, so I trust the mount is good.


----------



## Arizor

Yeah, Metro is a real workout too, but I find it can run a lot longer before crashing; if I have an unstable clock, CP2077 crashes within a minute.


----------



## geriatricpollywog

tps3443 said:


> I might order (3) HWLabs 280MM GTR’s, I may just run (2) of them on my 3090 Kingpin. And (1) for my cpu. I have a big 1080MM external sized Alphacool RAD coming, but it’s shipping from Germany. Which I didn’t even know upon purchase. No updates at all on it.
> 
> 
> 
> 
> Fortunately, I don’t have to mount it lol. CPU is already in the board, direct die frame installed, and the Optimus Signature V2 nickel block already mounted.
> 
> The owner is pretty good at these things. So I trust him mount is good.


Before you buy the GTR, read the Thermalbench review. If I recall correctly, it's the worst radiator under 1k RPM fan speed and trades blows with the GTX at 1k-1400 RPM. Only at the highest fan speeds does it outperform everything else. That kind of defeats one of the benefits of watercooling: quiet operation.


----------



## tps3443

0451 said:


> Before you buy the GTR, read the Thermalbench review. If I recall correctly, it’s the worst radiator at under 1k RPM fan speed and trades blows with the GTX at 1k-1400rpm. Only at the highest fan speeds does it outperform everything else. This kind of defeats one of the benefits of watercooling: quiet operation.


Yeah, I have read all about them. I have finally learned all the differences between the HWLabs models. The GTX models work nicely at low RPM. I usually run my fans at 1,800-2,000RPM anyway, so that's why I was considering the GTR. But I do run a silent setup every now and then when I'm bored; I've just never had proper radiators to sustain it and be satisfied with the day-to-day performance.

There seem to be inventory issues with the GTR anyway, leaving only the 280mm model in stock.

But I would certainly take a GTX model too, for that low-RPM performance.

I have all of my thermal pads now too, so I'm gonna order some really cool stuff over the next few hours.

Probably a Primochill SX test bench, and (3) 480's lol.

A guy on eBay has a lot of 480 GTX's for around $100 each. But they're white! Hmmm. Not a huge fan of white, but I really don't care that much either.


----------



## geriatricpollywog

tps3443 said:


> Yeah I have read all about them. I have finally learned all the differences between the HWLabs models. The GTX models work nice with low RPM. I usually run my fans at 1,800-2,000RPM anyways. So that’s why I was considering the GTR. But I do run a silent setup every now and then when I’m bored. I’ve just never had proper radiators to sustain it and be satisfied with the performance from day to day.
> 
> There seems to be inventory issues with the GTR anyways, leaving only the 280MM model in-stock.
> 
> But I would certainly take a GTX model too, for that low rpm performance.
> 
> I have all of my thermal pads now too. So I’m gonna order some really cool stuff over the next few hours.
> 
> Probably primochill SX test bench, and (3) 480’s lol.
> 
> Guy on ebay has a lot of 480 GTX’s for around $100 each. But they’re white! Hmmm. Not a huge fan of white, but I really don’t care that much either.


Keep us posted on the “Post your latest purchase” thread 😉


----------



## West.

GRABibus said:


> this score is completely in line.
> Did you enable rebar in « 3DMark Port Royal DLSS » profile with nvidia profile inspector ?
> 
> concerning Thermalright odyssey, I mounted them on my memory chips on my strix on air and my memory temps decreased by 10 degrees.
> Very happy with this.
> 
> but people here prefer gelid, which are softer.


Nope, no DLSS. 

The stock pads on the HOF card were actually decent in terms of performance; I haven't seen anything above 80C (daily) or 96C (mining). Sadly, they get oily after some use. Gonna get some Gelid pads and a digital caliper (maybe?) since I can't find any info on the HOF pads. I guess the HOF is too rare.


----------



## GRABibus

West. said:


> Nope, no dlss.
> 
> Stock pads on hof card were actually decent in terms of performance. Have not seen anything above 80c(daily) or 96c(mining). Sadly, it gets oily after some use. Gonna get some gelid pads and a digital caliper (maybe?) since I can‘t find any info on hof pads. I guess hof is too rare


I was asking if you have enabled ReBar for your Port Royal test.
If not, this should increase your score by roughly 200 pts.

How to proceed here:

*Here's How You Can Enable Resizable BAR Support in Any Game via NVIDIA Inspector* (wccftech.com): "While only sixteen games have been officially whitelisted for Resizable BAR support by NVIDIA, there's a procedure to manually enable any."

The profile for Port Royal is « 3DMark Port Royal DLSS ».


----------



## elbramso

GRABibus said:


> I was asking if you have enabled ReBar for your test Port Royal.
> If not, this should increase your score by roughly 200pts.
> 
> How to proceed here:
> 
> *Here's How You Can Enable Resizable BAR Support in Any Game via NVIDIA Inspector* (wccftech.com)
> 
> The profile for Port Royal is « 3DMark Port Royal DLSS ».


Regardless of the CPU? I'm just asking because I'm on an older gen (8[email protected]) and I'm wondering if my card would benefit a lot from ReBar.
My best PR result was: PR score


----------



## GRABibus

elbramso said:


> Regardles of the CPU? I'm just asking because I'm on an older gen ([email protected]) and I'm wondering if my card would benefit a lot from ReBar.
> My best PR result was: PR score


Nice.
And the score without ReBar?
That's the only way to answer your question 😊


----------



## elbramso

GRABibus said:


> nice.
> And score without rebar ?
> this is the only way to answer to your question 😊


This is even with an old driver, because newer drivers crush my score by ~200-300 points.
I didn't use Profile Inspector to enable ReBar, but it's enabled in the mainboard and card BIOS.


----------



## GRABibus

elbramso said:


> This is even with an old driver because newer drivers crush my score by ~200-300 points.
> I didn't use the profile inspector to enable ReBar, but it's enabled in the mainboard and card BIOS.


Yes, but not enabled in Inspector means not enabled for the considered game or application.
It is not enough to enable it in the motherboard BIOS.


----------



## elbramso

GRABibus said:


> Yes, but not enabled in Inspector means not enabled for the considered game or profile.
> It is not enough to enable it in the motherboard BIOS.


So there is a good chance that I can put this card back in the HOF


----------



## SirCanealot

Oh yeah, I must share my recent success with Gelid Extreme pads 
(I was going to try the thermal putty, but of course it's out of stock when I actually go to order it!)

I used 1.5mm pads on the VRM, etc, and 0.5mm copper shims+1mm Gelid pads. (I had 15x15mm copper shims lying around. I ended up spending ages filing them down, especially for the stupid lil chips near the PCI slot >_<)

After putting the card back together my core temps weren't great (60c on a 300w mining load), but my memory temps were much better at around 82c at +1600mhz.

I slapped a large aluminum heatsink onto the backplate that I got from Aliexpress (I used thermal paste, then held it down with some duct tape and used a cable tie around the card to ensure it stays on tight), and this dropped my memory temperatures to around 76c  
I also packed some Gelid pads on the back of the core and the heatsink dropped my core temperature to around 56-57 when under the same mining load. 

My hotspot temps aren't great though. I'm using Kryonaut Extreme, and we'll see if my core temps get worse over the next couple of months. 

Might need to get my hand on my KPX and redo the card again sometime... 

Thanks for the discussion and advice on this thread the last few weeks!


----------



## Falkentyne

SirCanealot said:


> Oh yeah, I must share my recent success with Gelid Extreme pads
> (I was going to try the thermal putty, but of course it's out of stock when I actually go to order it!)
> 
> I used 1.5mm pads on the VRM, etc, and 0.5mm copper shims+1mm Gelid pads. (I had 15x15mm copper shims lying around. I ended up spending ages filing them down, especially for the stupid lil chips near the PCI slot >_<)
> 
> After putting the card back together my core temps weren't great (60c on a 300w mining load), but my memory temps were much better at around 82c at +1600mhz.
> 
> I slapped a large aluminum heatsink onto the backplate that I got from Aliexpress (I used thermal paste, then held it down with some duct tape and used a cable tie around the card to ensure it stays on tight), and this dropped my memory temperatures to around 76c
> I also packed some Gelid pads on the back of the core and the heatsink dropped my core temperature to around 56-57 when under the same mining load.
> 
> My hotspot temps aren't great though. I'm using Kryonaut Extreme, and we'll see if my core temps get worse over the next couple of months.
> 
> Might need to get my hand on my KPX and redo the card again sometime...
> 
> Thanks for the discussion and advice on this thread the last few weeks!


What's the core hotspot temp compared to the core temp?


----------



## KedarWolf

I bought these GELID Extreme 120x120mm 1mm pads from AliExpress; they look A LOT darker than normal GELID pads, and I wonder if they are cheap knockoffs. 

I'm going to contact GELID support to check with them. They look even darker IRL.


----------



## West.

GRABibus said:


> I was asking if you have enabled ReBar for your test Port Royal.
> If not, this should increase your score by roughly 200pts.
> 
> How to proceed here :
> 
> 
> 
> 
> 
> 
> 
> 
> Here's How You Can Enable Resizable BAR Support in Any Game via NVIDIA Inspector
> 
> 
> While only sixteen games have been officially whitelisted for Resizable BAR support by NVIDIA, there's a procedure to manually enable any.
> 
> 
> 
> 
> wccftech.com
> 
> 
> 
> 
> 
> the profile for Port Royal is « 3DMark Port Royal DLSS ».


Oh yes, thanks for the reminder. I didn't have NVIDIA Inspector installed, but I have ReBar enabled in the BIOS. Will try the Inspector method when I have the chance. Currently, ambient temp is over 30c… Not good for an air-cooled card…


----------



## Falkentyne

KedarWolf said:


> I bought these GELID Extreme 120x120mm 1mm pads from Aliexpress, they look A LOT darker than normal GELID pads, I wonder if they are cheap knockoffs.
> 
> I'm going to contact GELID support, check with them. They look even darker IRL.
> 
> View attachment 2521806
> 
> 
> View attachment 2521807


Can you please take a picture with the backing partially removed and outside of the ziploc bag, please? Also watch the focus of the camera.
That picture looks awfully blurry.


----------



## West.

Followed @GRABibus suggestions, enabled ReBar for PR and did some quick runs. The score goes from 15.2k to 15.4k despite running 1 bin lower than the previous run.

I scored 15 404 in Port Royal

With lower ambient temp and more refined OC settings I'm probably able to reach PR 15.5k on a bone-stock card.
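As a quick sanity check on the size of that gain, a minimal sketch (the 15.2k baseline is the approximate figure quoted above, so treat the result as rough):

```python
# Rough percentage gain from enabling ReBar, using the scores above.
# The 15200 baseline is approximate ("15.2k" in the post).
def pct_gain(before, after):
    return (after - before) / before * 100

gain = pct_gain(15200, 15404)
print(f"{gain:.2f}%")  # about 1.34%, in line with the ~200pt estimate
```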


----------



## domdtxdissar

I watercooled my graphics card yesterday, so bear with me and my stupid questions.. 

Why can't I make my MSI 3090 Suprim X pull more than ~520w? Even with the EVGA 1000w bios?
I thought it was normal for a 3090 to pull upwards of ~700 watts when watercooled with no power restrictions (?)

Did I forget some settings, or is this normal?


Link to the port royal run can be found here: I scored 15 613 in Port Royal


----------



## yzonker

domdtxdissar said:


> I watercooled my graphic cards yesterday so bear with me and my stupid questions..
> 
> Why cant i make my msi 3090 supreme x pull more then ~520w ? Even with the EVGA 1000w bios ?
> I thought it was normal for a 3090 to pull upwards ~700 watts when watercooled and no power restrictions (?)
> 
> Did i forget some settings or is this normal ?
> View attachment 2521869
> 
> Link to the port royal run can be found here: I scored 15 613 in Port Royal


What is HWINFO showing, voltage or power limit? I don't think it's in your screenshot.


----------



## domdtxdissar

yzonker said:


> What is HWINFO showing, voltage or power limit? I don't think it's in your screenshot.


Is this not all the data needed ?


_edit_
Also have this screenshot from an Asus bios I tested:


----------



## kx11

After 2 months of waiting the Optimus WB finally shipped for my Strix, hopefully it's worth the wait


----------



## Falkentyne

domdtxdissar said:


> Is this not all the data needed ?
> View attachment 2521873
> View attachment 2521874
> 
> 
> _edit_
> Also have this screenshot from a asus bios i tested:
> View attachment 2521875


You aren't showing what's important.
Please show TDP Normalized % and TDP% in hwinfo64 instead of hiding it. Also show ALL of the other power wattage rails instead of hiding them. Expand them. There are a lot of them.


----------



## KedarWolf

Falkentyne said:


> Can you please take a picture with the backing partially removed and outside of the ziploc bag, please? Also watch the focus of the camera.
> That picture looks awfully blurry.


----------



## KedarWolf

KedarWolf said:


> View attachment 2521887


These are from the official GELID store.


Side by side.


----------



## Falkentyne

KedarWolf said:


> These are from the official GELID store.
> 
> View attachment 2521888
> 
> 
> Side by side.
> 
> View attachment 2521889


That color is normal. The 120mm * 120mm pad is a China-only SKU (Gelid explained that in an email), so it's very likely the pigment is different (I don't know why; maybe to stop someone from reselling them as 80mm * 40mm chopped up?). Mine look identical to that.
The EC360 Silver pads look even slightly darker.


----------



## satinghostrider

KedarWolf said:


> These are from the official GELID store.
> 
> View attachment 2521888
> 
> 
> Side by side.
> 
> View attachment 2521889


Yup, looks about right, and it's the same one I'm using. Gelid doesn't offer the 120x120mm Extreme pads outside China; apparently it's a China-only SKU. I know because I'm a distributor for them locally, and they only offer 80x40mm value packs and 120x20mm SKUs for Extremes. The 120x120mm Extreme is China-market only; officially outside China you can only get 120x120mm equivalents in Ultimates.


----------



## ossimc

On AliExpress there are Gelid Extreme pads 80mmx40mm for about 12 (€ in my case), which seems like reasonable pricing (although not cheaper than in my local area).

But there are also some selling these for 2,50€ (also 80mmx40mm). I must assume these are fake, right? Anyone here took the chance and put the cheap ones on their 3090? ^^


----------



## domdtxdissar

Falkentyne said:


> You aren't showing what's important.
> Please show TDP Normalized % and TDP% in hwinfo64 instead of hiding it. Also show ALL of the other power wattage rails instead of hiding them. Expand them. There are a lot of them.


This should be every sensor shown, correct ?


Can I make this card suck down even more voltage/wattage so I can clock the GPU core higher? Why does the card max out around 500w draw when I'm running with a 1000w bios?

If I can get this sorted out, I plan on doing some real cold-air benching tonight.. 

@Nizzen help a brother out 

_edit_
Could it be down to me needing DDU and a fresh driver install after the vbios flash?


----------



## yzonker

domdtxdissar said:


> This should be every sensor shown, correct ?
> View attachment 2521910
> 
> Can i make this card suck down even more voltage/watt so i can clock gpu core higher ? Why does the card max draw around 500w when i'm running with a 1000w bios ?
> 
> If i can get this sorted out, i plan on doing some real coldair benching tonight..
> 
> @Nizzen help a brother out


Looks like it's sitting at the 1.1v limit through the run with no power limits hit. So that's as far as it'll go in regards to power usage.


----------



## GRABibus

domdtxdissar said:


> This should be every sensor shown, correct ?
> View attachment 2521910
> 
> Can i make this card suck down even more voltage/watt so i can clock gpu core higher ? Why does the card max draw around 500w when i'm running with a 1000w bios ?
> 
> If i can get this sorted out, i plan on doing some real coldair benching tonight..
> 
> @Nizzen help a brother out
> 
> _edit_
> Could it be down to me needing a DDC and fresh driver install after vbios flash ?


As said by @yzonker, according to your GPU-Z shot, the card pulls all it can.
Probably a limitation of the Suprim X….

Not all cards pull the same power in these benches.

If you try the Galax 1000W, do you get the same max power?


----------



## domdtxdissar

GRABibus said:


> as said by @yzonker, according to your GPUz shot, the card pulls all it can.
> Probably a limitation of SuprimX….
> 
> All card don’t pull the same power in these benches.
> 
> if you try Galax 1000W, you get same max power ?


Can you link me the bios? Does it have ReBar enabled?

Maybe it's like *yzonker* said on the last page; my card is using all the power it can..
When I'm limited to 1100mv on the GPU core and don't hit any temp/power limiters -> the only way to make the card use more power is to clock it higher?



yzonker said:


> Looks like it's sitting at the 1.1v limit through the run with no power limits hit. So that's as far as it'll go in regards to power usage.


----------



## yzonker

domdtxdissar said:


> Can you link me the bios ? Does it have rebar enable ?
> 
> Maybe it's like *yzonker* said on the last page; my card is using all the power it can..
> When I'm limited to 1100mv on the GPU core and don't hit any temp/power limiters -> the only way to make the card use more power is to clock it higher?


Yes, that's correct. Doesn't look like you've done any VF curve tweaking. I would try increasing the 1093mv point and locking to that point. It may pull slightly less power, but I've gotten slightly better scores at 1093 vs 1100. IIRC, effective frequency sometimes will drop at 1100 vs 1093, or at least be no higher, and you're increasing temps at 1100.

Otherwise force reBar on if you haven't and various other tweaks.
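For anyone unfamiliar with what "locking to a point" means, here's a minimal sketch of an Afterburner-style V/F curve lock; the voltage/clock pairs below are invented purely for illustration, not real 3090 values:

```python
# Toy model of an Afterburner-style voltage/frequency curve.
# Locking to a point raises that point's clock and flattens the
# curve above it, so the card never requests a higher voltage.

def lock_curve(curve, lock_mv, offset_mhz):
    """curve: dict {millivolts: MHz}. Returns a new locked curve."""
    locked = {}
    target = curve[lock_mv] + offset_mhz
    for mv in sorted(curve):
        if mv < lock_mv:
            locked[mv] = curve[mv]   # below the lock point: unchanged
        else:
            locked[mv] = target      # at/above the lock point: clamped flat
    return locked

# Hypothetical stock curve (values invented for the example)
stock = {1000: 1950, 1050: 2025, 1093: 2085, 1100: 2100}
locked = lock_curve(stock, lock_mv=1093, offset_mhz=45)
print(locked)  # 1093mv and 1100mv both sit at 2130 MHz
```

Because the curve is flat above the lock point, the boost algorithm has no reason to request more than the locked voltage, which is why temps (and sometimes power) drop slightly.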


----------



## GRABibus

domdtxdissar said:


> Can you link me the bios ? Does it have rebar enable ?
> 
> Maybe its like *yzonker* said on last page, my card is using all the power it can..
> When i'm limited to 1100mv on gpu core, and don't hit any temp/power limiters -> the only way to make the card use more power is to clock it higher ?



GALAX RTX 3090 VBIOS (www.techpowerup.com)
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory

No ReBar, but at least you can see whether you pull higher power or not.


----------



## SirCanealot

Falkentyne said:


> What's the core hotspot temp compared to the core temp?


Right now I'm mining with a core temp of 57c and a hotspot temp of 72c. So not the worst, but always want better, right?  

When gaming under a 400w load it seems a little closer. With the core at 71c, the hotspot is around 81c.

As I said, might try KPX one day...


----------



## GRABibus

With my strix on air, I have 10 degrees delta between GPU temp and hotspot at idle.

in games, the delta raises to 15 degrees average.
I could see some peaks at 20 degrees, in BFV for example.

i had this with stock paste.
I repasted with kryonaut => same delta’s.
I repasted With Conductonaut => same !


----------



## Falkentyne

domdtxdissar said:


> This should be every sensor shown, correct ?
> View attachment 2521910
> 
> Can i make this card suck down even more voltage/watt so i can clock gpu core higher ? Why does the card max draw around 500w when i'm running with a 1000w bios ?
> 
> If i can get this sorted out, i plan on doing some real coldair benching tonight..
> 
> @Nizzen help a brother out
> 
> _edit_
> Could it be down to me needing a DDC and fresh driver install after vbios flash ?


Yeah, your card is pulling the max it can that is allowed at the max VID of 1.10v.
In order to make it use more in most benches, you need to increase the MSVDD and NVVDD voltages with the Classified tool (which only works on Kingpins), the onboard dip switches (again, Kingpin only), or Elmor's EVC2X hardware tool. However, you may be able to reach 550-650W in Timespy Extreme, Superposition 4K "Custom" + Extreme shaders, Quake 2 RTX, and Path of Exile + Global Illumination + Shadows=Ultra.


----------



## b0urne

Just stumbled upon this thread - not sure if I should just ask you guys here, but... I've got a 3090 Trinity and would like to flash some other BIOS. Are there some requirements (the number of 8-pin connectors must be the same?) or can I just get the Kingpin's BIOS and shove it deep inside the Zotac's... ?
If not - is the go-to BIOS the one mentioned in this reddit thread?
Someone flashed a 3090 bios already? : ZOTAC (reddit.com) (basically VGA Bios Collection: GALAX RTX 3090 24 GB | TechPowerUp )


----------



## yzonker

b0urne said:


> Just stumbled upon this thread - not sure if I should just ask you here guys, but... I've got 3090 Trinity and would like to flash some other BIOS. Are there some requirements (amount of 8pin connectors must be the same?) or can I just get the Kingpin's BIOS and shove it deep inside Zotac's... ?
> If not - is the go-to BIOS the one mentioned in this reddit thread?
> (1) Someone flashed a 3090 bios already? : ZOTAC (reddit.com) (basically VGA Bios Collection: GALAX RTX 3090 24 GB | TechPowerUp )


Yes, although if you want reBar, then this one for 390w:

GALAX RTX 3090 VBIOS (www.techpowerup.com)
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory

or this one for KP XOC:

EVGA RTX 3090 VBIOS (www.techpowerup.com)
24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory

You need good cooling for the KP XOC (not air cooled).


----------



## b0urne

yzonker said:


> Yes, although if you want reBar, then this one for 390w:
> 
> GALAX RTX 3090 VBIOS (www.techpowerup.com)
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> or this one for KP XOC:
> 
> EVGA RTX 3090 VBIOS (www.techpowerup.com)
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> You need good cooling for the KP XOC (not air cooled).


Yeah, I'll do the first one. Thank you. Going to watercool my whole build (CPU, front & back of the card) in a month or two, so the second one will come in handy later.

Sheesh. Thank you once again @yzonker for a great response.


----------



## yzonker

SirCanealot said:


> Right now I'm mining with a core temp of 57c and a hotspot temp of 72c. So not the worst, but always want better, right?
> 
> When gaming under a 400w load it seems a little closer. With the core at 71c, the hotspot is around 81c.
> 
> As I said, might try KPX one day...


I see a higher delta when mining vs gaming also. Probably typical.


----------



## Lobstar

domdtxdissar said:


> Why cant i make my msi 3090 supreme x pull more then ~520w ? Even with the EVGA 1000w bios ?
> 
> View attachment 2521869


Your screenshot literally shows you pulling max 530w.


----------



## Falkentyne

yzonker said:


> I see a higher delta when mining vs gaming also. Probably typical.


This is normal. Mining uses different parts of the core than gaming and it appears VRAM temps affect the hotspot in some imbalanced way (could also be caused by a higher load on MSVDD rails compared to NVVDD rails--or Vice Versa, caused by mining).


----------



## geriatricpollywog

I finally enabled ReBar and my score went up 300 points on the stock 520w bios. Since we're all cheating in Port Royal now, are there any other ways I can inflate my score? How come nobody else on the HOF has a Rocket Lake? Should I try my 10700k?

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)


----------



## Nizzen

domdtxdissar said:


> This should be every sensor shown, correct ?
> View attachment 2521910
> 
> Can i make this card suck down even more voltage/watt so i can clock gpu core higher ? Why does the card max draw around 500w when i'm running with a 1000w bios ?
> 
> If i can get this sorted out, i plan on doing some real coldair benching tonight..
> 
> @Nizzen help a brother out
> 
> _edit_
> Could it be down to me needing a DDC and fresh driver install after vbios flash ?


Try colder card. Then it will draw even more.


----------



## yzonker

Metro Exodus EE is another good way to torture your card with high power draw. I was just playing it and some areas were hitting 550w at just 1000mv. 4k and DLSS off. I'm limited to 60hz though, so some areas don't even fully load the card.


----------



## des2k...

geriatricpollywog said:


> I finally enabled ReBar and my score went up 300 points on the stock 520w bios. Since we're all cheating in Port Royal now, are there any other ways I can inflate my score? How come nobody else on the HOF has a Rocket Lake? Should I try my 10700k?
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)


And... ReBar on is not cheating lol.
Usually I run texture quality at high performance, and have the NV sharpness filter on already.

You can boost your score by running the day-1 driver, very aggressive core frequency, no ReBar.

I've seen people start/quit Superposition before 3DMark for an extra score boost & maybe single-display only works too for a score boost.


----------



## des2k...

yzonker said:


> Metro Exodus EE is another good way to torture your card with high power draw. I was just playing it and some areas were hitting 550w at just 1000mv. 4k and DLSS off. I'm limited to 60hz though, so some areas don't even fully load the card.


lol metro rtx for stressing the gpu

It's like saying Prime95 is a good CPU stress test and we should limit the OC for other apps. But games don't run that low-frequency/high-power load 🤣

I can run 2115 at 1006 or 1012mv in all games. Metro needs 1050mv.

The boost will cap at 1.1v, so tweaking my OC based on Metro is a no-no; that's 80mhz less on the core 🙄 for 99.9% of my games.


----------



## yzonker

des2k... said:


> lol metro rtx for stressing the gpu
> 
> it's like saying Prime95 is a good cpu stress and we should limit the OC for other apps. But games don't run that low frequency/high power load 🤣
> 
> I can run 2115 1006 or 1012mv on all games. Metro needs 1050mv.
> 
> The boost will cap at 1.1v, so tweaking my oc based on Metro is a no no, that's 80mhz less on the core🙄 for 99.9% of my games.


Yea, same. I had a curve set up that would work in everything else, but it crashed within 1 min when I tried it in ME. The good news is that the curve now seems to be stable in ME after I re-mounted the card. Could be other games will go higher, but I haven't tried. My block delta didn't improve all that much, but some, and the delta to the hotspot temp came down a couple of C also. And of course VRAM is running 12C or so lower. VRM probably is too, since I did Gelids everywhere. Might be the sum of the whole.


----------



## geriatricpollywog

des2k... said:


> and... rebar on, is not cheating lol
> usually I run texture quality high performance, have NV sharpness filter on already
> 
> You can boost your score by running day1 driver, very aggressive core frequency, no rebar.
> 
> I've seen people start /quit superposition before 3dmark for extra score boost & maybe single display only works too for score boost.


Biso Biso is running the latest driver so I’ll stick with it. I haven’t tried high performance on textures or sharpness filter yet.


----------



## des2k...

yzonker said:


> Yea, same. I had a curve set up that would work in everything else, but crashed within 1 min when I tried it in ME. The good news is that curve now seems to be stable in ME after I re-mounted the card. Could be other games will go higher but I haven't tried. My block delta didn't improve all that much, but some and delta to the hotspot temp came down a couple of C also. And of course VRAM is running 12C or so lower. VRM probably is too since I did Gelid's everywhere. Might be the sum of the whole.


Optimus is really close to a reference block release, and it's an 8c delta at 600w out of the box.

So that might be the easiest solution for a crazy delta vs re-mounts or modifications. Especially if you game at 450w, that will be a 5c delta 😁


----------



## yzonker

Nizzen said:


> Try colder card. Then it will draw even more.


Is that true? I've always thought it was the opposite? Higher temp, higher resistance or something like that. I even did this test with my 3080ti hybrid one time to demonstrate it. Cooled the card down and let Kombustor run until it stabilized. It starts at about 430w and stops when it gets to the 450w PL. Obviously this happens in part due to the card not reaching TDP in many instances by hitting some other limit.


Same test with my 3090 with the voltage locked at 950mv, minimal fans to let the water heat up.
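The effect measured above (power creeping up at a locked voltage as the loop warms) is consistent with temperature-dependent leakage. Here's a toy model sketch; the coefficients are invented purely for illustration, not measured 3090 values:

```python
# Toy model: at a fixed voltage and clock, dynamic power is roughly
# constant, but leakage power grows roughly exponentially with die
# temperature, so total draw creeps up as the card heats.
# All coefficients below are made-up illustration values.
import math

def total_power_w(temp_c, dynamic_w=400.0, leak_ref_w=30.0,
                  ref_temp_c=40.0, k_per_c=0.02):
    """Dynamic power plus temperature-dependent leakage."""
    leakage = leak_ref_w * math.exp(k_per_c * (temp_c - ref_temp_c))
    return dynamic_w + leakage

cold = total_power_w(40)   # right after a cold start
hot = total_power_w(70)    # after the loop water heats up
print(round(cold), round(hot))  # draw rises with temperature alone
```

So both observations can be true at once: a hotter card leaks more at the same clocks, while a colder card boosts higher and can therefore draw more overall.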


----------



## RosaPanteren

Okay, so I tried to flash the KP XOC 1k ReBar and the screen went black as soon as I hit enter on the command.

So I thought I'd give it some time, but nothing happened.

Rebooted the computer and the screen was still black.....

Did a clear CMOS and then got the screen back.......?

Booted to Win now and it seems the 1k bios is flashed!?!?

Is this normal procedure with this bios, or am I in the twilight zone?

Event viewer states 10110 DriverFrameworks-UserMode as an error from the time of flashing


----------



## yzonker

RosaPanteren said:


> Okey so I tried to flash the KP xoc 1k rebar and screen went black as soon as I had hit enter on the command.
> 
> So I thought I’ll give it some time but nothing happened.
> 
> Rebooted the computer and screen is still black.....
> 
> Did a clear cmos and then got screen back.......?
> 
> Boot to win now and it seems 1k bios is flashed!?!?
> 
> Is this normal procedure with this bios or am I in twilight zone?
> 
> Event viewer states 10110 DriverFrameworks-UserMode as error from the time of flashing


Did you disable the card in device manager before flashing? I think NVFlash will do that anyway but I always do it manually just in case. 

What card do you have?
Which version of NVFlash did you use?


----------



## RosaPanteren

yzonker said:


> Did you disable the card in device manager before flashing? I think NVFlash will do that anyway but I always do it manually just in case.
> 
> What card do you have?
> Which version of NVFlash did you use?


No, I didn't disable the card, as I run an AMD CPU with no iGPU.

Latest version of NVFlash, 5.715.

I've got a Palit GameRock, non-OC version.

Running Furmark makes the card draw 787.9w according to GPU-Z and HWiNFO......... I kinda quit that fast 

I thought setting "nvidia-smi.exe -pl 520" would stick, but I guess it didn't after I did a reboot to set up RAM and the PBO curve.
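For what it's worth, the `-pl` setting is volatile: `nvidia-smi` applies it to the running driver rather than writing it to the vBIOS, so it resets on reboot unless something reapplies it. A minimal sketch of a reapply helper (the 520 W value is from the post above; actually running `apply_power_limit` assumes an NVIDIA driver is installed and requires admin rights):

```python
# Reapply a volatile nvidia-smi power limit at startup.
# The limit lives in the running driver only, so a reboot discards it.
import subprocess

def build_power_limit_cmd(watts, exe="nvidia-smi"):
    """Construct the command; kept separate so it can be inspected."""
    return [exe, "-pl", str(watts)]

def apply_power_limit(watts):
    # Schedule this at logon (e.g. Windows Task Scheduler, run as
    # administrator) so the limit survives reboots.
    subprocess.run(build_power_limit_cmd(watts), check=True)

print(build_power_limit_cmd(520))  # ['nvidia-smi', '-pl', '520']
```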


----------



## Arizor

Always disable the card before flashing, @RosaPanteren; you'll still get an image via the card, it will just be unaccelerated.


----------



## yzonker

RosaPanteren said:


> No I didn't disable the card as I run an AMD cpu with no igpu
> 
> Latest version of Nvflash 5.715
> 
> I got a Palit Gamerock, non OC version
> 
> Running Furmark makes the card draw 787.9w according to Gpu-Z and HwInfo......... I kinda quite that fast
> 
> I thought setting "nvidia-smi.exe -pl 520" would stick but I guess it didn't after I did a reboot to setup ram and pbo curve
> 
> View attachment 2521986


Looks like a legit 700w+ as best I can tell. Be careful with that. Lol

I haven't used the command line for setting card params. AB settings don't stick though unless you have it reapply them on startup.


----------



## RosaPanteren

Arizor said:


> Always disable card before flashing @RosaPanteren , you'll still get an image via the card it will just be unaccelerated.


Hmmm, I tried that before and the screen went black, and I had to do a clear CMOS to recover. Guess I missed something then.

Ok, thx for the advice


----------



## yzonker

RosaPanteren said:


> Hmmm, I tried that before and screen went black and I had todo a clear cmos to recover. Guess I missed something then.
> 
> Ok thx for the advise


You disabled the card in Win10 and had to clear CMOS to get it to post? Maybe I'm missing something?


----------



## Arizor

yzonker said:


> You disabled the card in Win10 and had to clear CMOS to get it to post? Maybe I'm missing something?


Yeah to be clear @RosaPanteren we just mean disable the GPU in Device Manager within Windows 10. It will go black, then come back with an unaccelerated image on desktop (i.e. bad colours, lower refresh rate maybe).


----------



## RosaPanteren

yzonker said:


> You disabled the card in Win10 and had to clear CMOS to get it to post? Maybe I'm missing something?


I disabled the display adapter and the screen went black; tried to reboot a couple of times but it still wouldn't show any picture. Searched a bit and found an article that said a clear CMOS might help. Didn't seem logical, but it got the screen picture back


Arizor said:


> Yeah to be clear @RosaPanteren we just mean disable the GPU in Device Manager within Windows 10. It will go black, then come back with an unaccelerated image on desktop (i.e. bad colours, lower refresh rate maybe).

That's what I did, but the screen went black....

Now I've been fiddling with RAM OC, PBO curve etc.

So it might have been something else causing it, and it being resolved by the clear CMOS, idk


----------



## yzonker

des2k... said:


> Optimus is really close to reference block release, and it's a 8c delta at 600w out of the box.
> 
> So that might be the easiest solution for crazy delta vs re-mounts or modifications. Especially if you game at 450w, that will be 5c delta 😁


I must have had something that wasn't being cooled well before. Corner of the core, VRM, etc... I just ran CP at 2100 requested (2085 effective) at 1000mv for about 30 min without a crash. I doubt it would have made it 5 min before. May still not be completely stable at that offset, but a lot better.


----------



## des2k...

yzonker said:


> I must have had something that wasn't being cooled well before. Corner of core, vrm, etc... I just ran CP at 2100 requested (2085 effective) at 1000mv for about 30 min without a crash. I doubt it would have made it 5min before. May still not be completely stable at that offset but a lot better.


Very nice! Pretty much like mine now. If you hit 2100 at 1v in CP2077, you have a good mount on the core and VRM. The VRMs are important because they work hard on the Zotac; they need those premium pads to get a good undervolt OC.

The 3090 die on the Zotac is low leakage, since it's a 350w stock card and needs to boost early on the curve due to a lack of power. Very similar to the 3090 die binning on the FE.

They undervolt/OC great but will fall behind at high voltage/wattage vs 3x8pin boards.


----------



## WillP

I've been away from my 3090 for a bit and lost track of this thread as a result. I know it's a controversial topic, but is there a 1000w XOC bios with ReBar available without having to ask favours of Vince/Kingpin? I've skimmed through and seen a few people saying they're using it, but can't see a link. Thanks in advance.


----------



## yzonker

WillP said:


> I've been away from my 3090 for a bit and lost track of this thread as a result. I know it's a controversial topic, but is there a 1000w XOC bios with ReBar available without having to ask favours of Vince/Kingpin? I've skimmed through and seen a few people saying they're using it, but can't see a link. Thanks in advance.


I just posted it a page or 2 ago. It's on TPU.


----------



## WillP

yzonker said:


> I just posted it a page or 2 ago. It's on TPU.


Thanks, must have missed it. 

Edit: didn't miss it, just misunderstood the description and thought it was the non-ReBar one. Thanks again.


----------



## des2k...

WillP said:


> I've been away from my 3090 for a bit and lost track of this thread as a result. I know it's a controversial topic, but is there a 1000w XOC bios with REBAR available without having to ask favours of Vince/Kingpin? I've skimmed been through and seen a few people saying they're using it, but can't see a link. Thanks in advance.


controversial for all the wrong reasons, it's on Page 832


----------



## geriatricpollywog

I've tested the 1000w bios every month or two since Vince sent it to me, and each time I had better performance with the stock 520w bios. It makes sense, since I'm not sub-zero.


----------



## gfunkernaught

Found something odd happening while I was playing GTA 5 in 8k using the 1kw rebar bios. I decided to keep a core offset of +105 and mem +1200. So the core was running 2077-2085mhz, core temp was 44c, and PL slider at 100%. I wanted to see how much power the game would use like that. I the most I saw was 580w, but it seemed to average around 520-530w. I was hearing some random short clicks/pops in the game sound. Like a glitchy sound. Then all of a sudden, my PC powers off completely, then powers back on. This happened twice so far in the past four hours. I do have the A/C running. I believe it and the PC are on different circuits. I could see card instability would cause a crash or lockup. But this sounds more like a power issue. Now I can't tell if it is the fact that there is extra draw all over the house from portable A/Cs (no central), if the PSU is the issue. It is a Corsair HX1200. If the GPU is using 600w for example, CPU about 90-100w, then all the fans, the pump. Now I'm closer to 700w, over the 50% mark for the best efficiency of this PSU. This has that single/multi switch for splitting/combining amps on the 12v rail which I have it set to single.

What does this sound like to you? PSU, power source, or both?

UPDATE: I set the PL to 52% which is my norm, and my PC shutdown again. If the PC and A/C were on the same circuit, the breaker would trip.
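As a back-of-the-envelope check on whether I'm anywhere near the PSU's limit, here's the math I'm doing in my head. The per-component draws are my rough estimates from monitoring software, not wall measurements:

```python
# Rough sanity check of total system draw vs. the HX1200's 1200w rating.
# All component draws below are estimates, not measured at the wall.
draws_w = {
    "gpu_peak": 600,        # worst spike seen on the 1kw bios
    "cpu": 100,             # top of the 90-100w range
    "fans_pump_misc": 50,   # guess for fans, pump, drives, board
}

total = sum(draws_w.values())
load_fraction = total / 1200  # HX1200 continuous rating

print(f"{total}w total, {load_fraction:.1%} of rating")
# prints: 750w total, 62.5% of rating
```

So even with a pessimistic GPU spike I'm only around 60% load on paper, which is why I suspect transient spikes rather than sustained draw.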


----------



## domdtxdissar

domdtxdissar said:


> I watercooled my graphic card yesterday so bear with me and my stupid questions..
> 
> Why cant i make my msi 3090 supreme x pull more then ~520w ? Even with the EVGA 1000w bios ?
> I thought it was normal for a 3090 to pull upwards ~700 watts when watercooled and no power restrictions (?)
> 
> Did i forget some settings or is this normal ?
> View attachment 2521869
> 
> Link to the port royal run can be found here: I scored 15 613 in Port Royal


So i did a full benchrun last night with my new gpu waterblock, to compare to my old 450w max air cooled runs:


Spoiler: Stock cooling and stock 450w bios



*Port Royal = 14789* *@ *I scored 14 789 in Port Royal

Graphics Score = 14789
*TIME SPY =21861 *@ I scored 21 861 in Time Spy

Graphics Score = 22575
CPU Score = 18540
*TIME SPY EXTREME = 11557* @ I scored 11 557 in Time Spy Extreme

Graphics Score = 11577
CPU Score = 11450
*FIRE STRIKE = 41342* @ I scored 41 342 in Fire Strike

Graphics Score = 45888
Physics Score = 44869
Combined Score = 22218
*FIRE STRIKE extreme = 25679 @* I scored 25 679 in Fire Strike Extreme

Graphics Score = 26227
Physics Score = 44859
Combined Score = 14282
*FIRE STRIKE ULTRA = 14329* @ I scored 14 329 in Fire Strike Ultra

Graphics Score = 13996
Physics Score = 44727
Combined Score = 7785
*WILD LIFE = 116589* @ I scored 116 589 in Wild Life



It looks as if the MSI Suprim X 3090 has too few / too weak VRMs for it to make any sense to use the 1000w XOC bios instead of, let's say, the EVGA 520w for daily usage.. In most of the benches the card only used around 475w to 550w.. Only in Time Spy did I see a momentary peak of 616w.

These are my 3dmark results:

*Port Royal = 16 050* *@ *I scored 16 050 in Port Royal

Graphics Score = 16050
*TIME SPY = 22 666 *@ I scored 22 666 in Time Spy

Graphics Score = 23 449
CPU Score = 19 062
*TIME SPY EXTREME = 12 180* @ I scored 12 180 in Time Spy Extreme

Graphics Score = 12 258
CPU Score = 11 760
*FIRE STRIKE = 43 513* @ I scored 43 513 in Fire Strike

Graphics Score = 49 613
Physics Score = 44 674
Combined Score = 22 188
*FIRE STRIKE extreme = 27 138 @* I scored 27 138 in Fire Strike Extreme

Graphics Score = 27 940
Physics Score = 44 267
Combined Score = 15 113
*FIRE STRIKE ULTRA = 15 072 * @ I scored 15 072 in Fire Strike Ultra

Graphics Score = 14 796
Physics Score = 44 514
Combined Score = 8 140
Did also run some other benches 

Unigine Heaven @ 1080p 









Unigine Heaven @ 1440p 









Superposition 1080p extreme 









And some Final Fantasy runs to end it 
Endwalker @ 1440p Maximum









Final Fantasy XV benchmark @ 1920p standard 









Now I have a baseline that can be used to compare against both Alder Lake and Zen 3 3D V-Cache when they are launched.. 
(think I will go for Zen V-Cache since I already have the platform)


----------



## GAN77

domdtxdissar said:


> So i did a full benchrun last night with my new gpu waterblock


Great results.
Which waterblock did you choose?


----------



## domdtxdissar

GAN77 said:


> Great results.
> Which waterblock did you choose?


Not much to choose between for the msi suprim x 3090, so i ended up with a EK-Quantum Vector Trio RTX 3080/3090.








Some screens here from the install with LM
(did 5 layers with clear nail polish)


----------



## Nizzen

domdtxdissar said:


> So i did a full benchrun last night with my new gpu waterblock, to compare to my old 450w max air cooled runs: ...


Think you have to wait a bit for HT=on and HT=off runs of all these tests with the new CPUs. Looks like you need to buy both Alder Lake and 3D cache to compare the right way  You need to compare at the same cold temperature too 
A 12c GPU is not something Tech Jesus runs benchmarks with


----------



## GRABibus

domdtxdissar said:


> So i did a full benchrun last night with my new gpu waterblock, to compare to my old 450w max air cooled runs: ...


How do you get such low temps (39 degrees average on Port Royal and 37 degrees in Time Spy) with the stock air cooler and stock bios?
This seems like a « water » cooling temp…


----------



## domdtxdissar

GRABibus said:


> how do you get such low temps (39 degrees average temperature on Port Royal and 37 degrees in Time Spy) with stock air cooler and stock bios ?
> This seems a « water » cooling temp…


Do note that 3DMark reports average temp.. I'm getting close to 50 degrees max temp by the end of the benchmark

This is my old 15k port royal record with air cooling:


----------



## GRABibus

domdtxdissar said:


> Do note that 3dmark reports average temp.. getting close to 50 degrees maxtemp by the end of the benchmark
> 
> This is my old 15k port royal record with air cooling:
> View attachment 2522031
> View attachment 2522032


Yes, I know it is average, and even with this, less than 40 degrees average on air is crazy. PC in a NORWEGIAN fridge? 😊


----------



## gfunkernaught

domdtxdissar said:


> Not much to choose between for the msi suprim x 3090, so i ended up with a EK-Quantum Vector Trio RTX 3080/3090.
> View attachment 2522027
> 
> Some screens here from the install with LM
> (did 5 layers with clear nail polish)


Did you put paste under those pads?


----------



## domdtxdissar

gfunkernaught said:


> Did you put paste under those pads?


Nope, just tried to clean the oil from original pads before i put on the new ones from the GPU block.


----------



## gfunkernaught

domdtxdissar said:


> Nope, just tried to clean the oil from original pads before i put on the new ones from the GPU block.


You should put thermal paste under the pads. It will help a lot. EK even recommends that in the installation manual of the Vector block. So it has to be vrm/ram>paste>pad>block.


----------



## GAN77

gfunkernaught said:


> Ek even recommends that in the installation manual of the vector block. So it has to be vrm/ram>paste>pad>block.


I have not met such recommendations. Can you show where ek writes about this?


----------



## gfunkernaught

GAN77 said:


> I have not met such recommendations. Can you show where ek writes about this?


Well, looks like I may have misspoken a bit. For some reason I thought I saw the recommendation from EK, but I guess not. Although I have read on here and other forums that thermal paste between the IC and thermal pads helps with heat transfer. It certainly helped with my setup. I also have the Trio and Vector block.


----------



## J7SC

gfunkernaught said:


> Well looks like I may have misspoke a bit. For some reason I thought I saw the recommendation from EK but I guess not. Although I have read on here and other forums that thermal paste between IC and thermal pads help with heat transfer. It certainly helped with my setup. I also have the Trio and Vector block.


...some EK mounting sheets actually do recommend adding thermal paste onto the thermal pads...I can probably link that before Christmas...


----------



## tps3443

Swapped platforms. I moved over to a binned 10900KF direct die, a Z490 Dark Kingpin edition, and an Optimus Sig V2 Nickel block.

Anyways, during the new setup I disassembled my D5 pump, and to my surprise it was absolute chaos inside.. I don’t even know how it was functioning like this. (And this was the cause of my issue with high 3090 Kingpin Hydro Copper GPU temps.)


My pump turns so fast right now it’s ridiculous.

At full speed, my D5 tries to blow water out of the fill port of the reservoir lol. And the return line goes in the bottom of the reservoir (This is how my pump was when I bought it brand new 1yr ago)


Anyways, my load GPU temp per GPU-Z was 45.3C (GPU max) and the GPU die hits 41.3C. The difference is massive!! I just set a fan pointed directly at my backplate behind the GPU and overclocked the core/memory, and now the GPU max is 43.1C in GPU-Z. Water temp is 30.1C inside the reservoir. (My ambient room temp is a little warm at 25C.) The difference is quite impressive. I'm not doing any fancy testing, just running a game at 100% GPU usage and looking at GPU-Z. What's strange is that my GPU hot spot delta has increased, although my GPU temp over water has decreased substantially. GPU temp is +13C over water temp. 

I was hitting about 52C as a minimum in any game before, sometimes even higher, and I assume this was from the pump's poor performance; my flow rate was all over the place. I had days where 54-57C GPU temps were common.

I do have (2) Alphacool Nexxxos 1080x45mm radiators on the way! Alphacool has also released an enclosure and feet for these models, just earlier this month, so accessory availability is still limited in the United States.

If I had had a flow meter, I wouldn’t have been chasing my tail for weeks.


Also, I am still moving forward with the re-mount too, especially now!!! I see the light at the end of a tunnel with the 3090KP HC waterblock!!

One of my Alphacool 1080’s will be here Monday or by Tuesday! So, I will post back with some new results after the Re-Mount and Alphacool 1080x45mm radiator install.


----------



## GAN77

gfunkernaught said:


> It certainly helped with my setup.


How many degrees did you gain? What thermal pads did you use?


----------



## gfunkernaught

GAN77 said:


> How many degrees have you won? What thermal pads did you use?


I bought Gelid and Thermalright Odyssey but can't remember which I used for the front of the board. Since I cannot measure VRM temperature, I can only speak to overall board temp. My water>gpu delta was about 15c at 500w previously, with the EK stock pads and IC Diamond on the gpu core. Now, with the better pads+paste and liquid metal on the gpu core, that delta has dropped to 10c.


----------



## geriatricpollywog

tps3443 said:


> swapped platforms. I moved over to a binned 10900KF direct die, and Z490 Dark kingpin edition, and Optimus Sig V2 Nickel block. ...


Great results! The HC block is Optimus good.


----------



## gfunkernaught

I've seen talk on here about using a clamp meter to measure power draw from the wall. I just bought a Klein Tools CL120, plugged the included splitter into my power strip, then plugged my PC into the splitter. I clamped the right side of the splitter and ran the Bright Memory benchmark on loop with my normal OC settings at PL 52%. I set the dial to 2/20A and it was overloaded (OL), so I switched to the 200/400A range and got a reading of about 55, with the range set to auto. I did the math, 120x55=6600W!!?! Can't be. Just moving the decimal in my head gives 660w, which makes more sense. I know this isn't a Klein Tools thread, but I was hoping someone could shed some light since I've seen people on this thread say they use one.
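For the sake of the math: watts are just volts times amps, and a 120v circuit on a 15A breaker tops out at 1800w, so a true 55A reading is impossible. A quick sketch (the 120v line voltage is an assumption; whether the display was actually 5.5A is the open question):

```python
# Watts from a clamp-meter current reading: P = V * I.
# A reading of 55A on 120v would be 6600w, far beyond what a single PC
# on a 15A/1800w household circuit can draw, so the display was almost
# certainly showing 5.5A.
VOLTS = 120.0

def watts(amps: float) -> float:
    """Apparent power draw for a given clamp reading."""
    return VOLTS * amps

print(watts(55.0))  # 6600.0 -> impossible for this PC
print(watts(5.5))   # 660.0  -> plausible for a ~600w GPU load plus the rest
```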


----------



## tps3443

@0451


0451 said:


> Great results! The HC block is Optimus good.


Yes, it's actually good! I just put a fan on the back of my 3090kp, and I’m gaming at 42C-43C gpu temps, and the GPU die temp is 40C.

My ambient temps are 25C, and water is 29-30C.

I haven’t looked at power consumption yet. But this is just incredible. I’m using just (2) thin EKWB SE classic 360’s. (Probably the worst performing radiators available lol)

I have also reduced system power consumption going from a 7980XE to a 10900K. But I had a pump/flow issue all along.

I’m finally satisfied with this block. So, hooray! It’s only gonna get better moving forward.


This is just while gaming on the LN2 520 KP HC bios, running a very minimal OC of +100/+1500 and no undervolting at all. It said 1.062 or so on the OLED. This game has been running for over 2 hours now.

I’m gonna do some more in-depth testing, but I’m about to re-mount it anyways, now that I have all of my pads/paste/etc.

But what's in the photos below was never possible before!! My current water loop is not that amazing, and my current mount is barely any better. This game would always give me a 52C+ GPU temp, with the case fully open and no panels, no overclocking at all, and less voltage.


----------



## gfunkernaught

Not sure if anyone saw my earlier post about my PC shutting down, but I'm wondering if the 1kw rbar bios has even less power limits than the non-rbar version. I've been using it without an issue until I started playing GTA 5 in 8k. I can't access nvidia's website to download the newest drivers because "I dont have permission", its a thing that happens. So is my PSU shutting off the system to protect it from the 3090's transient spikes? Or is it something else? No circuit breakers are being tripped either.


----------



## Falkentyne

gfunkernaught said:


> Not sure if anyone saw my earlier post about my PC shutting down, but I'm wondering if the 1kw rbar bios has even less power limits than the non-rbar version. I've been using it without an issue until I started playing GTA 5 in 8k. I can't access nvidia's website to download the newest drivers because "I dont have permission", its a thing that happens. So is my PSU shutting off the system to protect it from the 3090's transient spikes? Or is it something else? No circuit breakers are being tripped either.


The 1kw (EVGA) bios has higher limits for either the internal MSVDD and NVVDD rails or PLL rails, which is why TDP Normalized=TDP%, even when you're pulling 600W, you aren't anywhere close to 100% TDP Normalized (which means all of the internal rails are maxed out). Guessing that the NVVDD and MSVDD rails are set at something like 500W or something?


----------



## Arizor

gfunkernaught said:


> Not sure if anyone saw my earlier post about my PC shutting down, but I'm wondering if the 1kw rbar bios has even less power limits than the non-rbar version. ...


What are you seeing in Windows' Event Viewer (assuming you're on Win10)? Is it an event 41?


----------



## yzonker

gfunkernaught said:


> Not sure if anyone saw my earlier post about my PC shutting down, but I'm wondering if the 1kw rbar bios has even less power limits than the non-rbar version. ...


There's definitely been some talk of it being even more aggressive with power. I guess you could flash the old one and see if the problem goes away. 

If your machine just clicks off like you shut it down or held the power button, then it's definitely OCP on the PSU. Been there done that. 

If you haven't, try limiting the VF curve to 1063mv or less. That Endwalker benchmark will trigger OCP on my psu (it's just 1000w) if I don't do that. Reducing the power limit wasn't as effective. I had it all the way down to 75% (2x8pin card) and it still shut off. But dropping to 1063mv worked. (might have been 1050mv). Must just be some crazy spikes.

I've run TS, TSE, PR all at 1093mv with the PL at 100% with no issue, so I don't think my PSU is too bad.


----------



## gfunkernaught

Arizor said:


> What are you seeing in Windows' Event Viewer (assuming you're on Win10)? Is it an event 41?


Nothing in event viewer.


----------



## gfunkernaught

yzonker said:


> There's definitely been some talk of it being even more aggressive with power. I guess you could flash the old one and see if the problem goes away.
> 
> If your machine just clicks off like you shut it down or held the power button, then it's definitely OCP on the PSU. Been there done that.
> 
> If you haven't, try limiting the VF curve to 1063mv or less. That Endwalker benchmark will trigger OCP on my psu (it's just 1000w) if I don't do that. Reducing the power limit wasn't as effective. I had it all the way down to 75% (2x8pin card) and it still shut off. But dropping to 1063mv worked. (might have been 1050mv). Must just be some crazy spikes.
> 
> I've run TS, TSE, PR all at 1093mv with the PL at 100% with no issue, so I don't think my PSU is too bad.


My PC clicks off as if I have either shut it down or it lost power. But then it powers itself back on which leads me to believe the motherboard got triggered somehow. Because the AC power loss recovery setting in my bios was set to power off. So if the power supply shuts itself off, can it turn the motherboard back on?


----------



## Arizor

gfunkernaught said:


> Nothing in event viewer.


Wow, that's wild; even with an immediate shutdown folks usually get at least something in the 'critical' category. If it's restarting itself I wouldn't think it's power (because, afaik, a PSU failure would just power off, not also restart).

When I had those sorts of issues, it was due to unstable BIOS settings for my CPU (Ryzen 5900).

edit: For a lot of others, it's unstable RAM settings in BIOS.


----------



## gfunkernaught

Arizor said:


> Wow that's wild, even with an immediate shutdown folk usually get at least something in the 'critical' category. If it's restarting itself I wouldn't think it's power (because, afaik, a PSU failure would just power off, not also restart).
> 
> When I had those sorts of issues, it was due to unstable BIOS settings for my CPU (Ryzen 5900).
> 
> edit: For a lot of others, it's unstable RAM settings in BIOS.


I didn't see anything other than "system shutdown unexpectedly", which is recorded after the system starts back up; nothing prior to the shutdown. I just ran Quake 2 RTX with an undervolt as @yzonker suggested. Before I was running [email protected], now [email protected] (both effective freq), with the PL at 60% just to stress the system. No shutdowns. I did see the PCIe slot power go as high as 90w! That is insane. 8-pin#2 got to 220w, 8-pin#1 was around 180w or so, and 8-pin#3 was around 100w. I wish the 3rd 8-pin would take more of that load so the PCIe slot would remain at the rated 75w max.
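To put those rail readings against the nominal limits (75w for the slot and 150w per 8-pin connector per the PCIe spec; the per-rail numbers are the GPU-Z peaks from above, not sustained averages):

```python
# Observed per-rail peaks (GPU-Z) vs. nominal PCIe spec limits.
observed_w = {"slot": 90, "8pin_1": 180, "8pin_2": 220, "8pin_3": 100}
limit_w    = {"slot": 75, "8pin_1": 150, "8pin_2": 150, "8pin_3": 150}

for rail, draw in observed_w.items():
    over = draw - limit_w[rail]
    status = f"OVER by {over}w" if over > 0 else "within spec"
    print(f"{rail}: {draw}w ({status})")

total = sum(observed_w.values())
print(f"total: {total}w")  # 590w across the four inputs
```

Three of the four inputs are past their nominal rating at peak, which is normal for XOC bioses but explains why the slot rail looks scary.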


----------



## Arizor

Yeah that would concern me a bit! Is there any surefire way you were triggering it? Just intense GPU use or other scenarios?


----------



## gfunkernaught

Arizor said:


> Yeah that would concern me a bit! Is there any surefire way you were triggering it? Just intense GPU use or other scenarios?


Other than hwinfo and rtss no. The CPU usage was 15%. I managed to get through a decent amount of time with the undervolt, but the GPU core temp kept rising despite all my fans at 100%. This time I got a driver crash which is better than a power supply shutting down my computer. So it looks like @yzonker was right about undervolting. I'm gonna test now with the same curve and power limited to 52%.


----------



## gfunkernaught

So after undervolting, setting the power limit to 52%, AND disabling frame scaling (so now native 4k, around 300-400w usage, core temp around 42c), the PC shut down again.


----------



## yzonker

gfunkernaught said:


> So after undervolting, and settings the power limit to 52%, AND disabling frame scaling, so now native 4k, around 300-400w usage, core temp around 42c, PC shutdown again.


So much for that theory. So to clarify, does this only happen with the KP XOC reBar bios?


----------



## gfunkernaught

yzonker said:


> So much for that theory. So to clarify, does this only happen with the KP XOC reBar bios?


So far yes. I'm testing the 520W rbar bios right now.


----------



## Arizor

Yeah was gonna say 520W seems to be my sweet spot for stability and good clocks.


----------



## gfunkernaught

Ok, definitely not the bios. I just ran an offset of +120 core / +1200 mem on the 520w bios, heard a small "pop" glitch sound coming from the game, and a few seconds later the PC shut down. This is weird. I found an old post on Reddit where someone had the same exact problem, but with older hardware. I messaged the OP to ask if a solution was found.


----------



## J7SC

Arizor said:


> Yeah was gonna say 520W seems to be my sweet spot for stability and good clocks.


...still have to try the 1kw Galax HoF on my w-cooled Strix, but so far, 520 KPE r_bar is the _'sweet spot_' also


----------



## gfunkernaught

J7SC said:


> ...still have to try the 1kw Galax HoF on my w-cooled Strix, but so far, 520 KPE r_bar is the _'sweet spot_' also


It truly is the "sweet spot" bios. The unfortunate thing is that the 1kw kp rbar bios ran GTA 5 in 8k _very_ nicely!!! Going back to the 520w feels sluggish by comparison. At least I ruled out the bios as the culprit; now it has to be the PSU, the motherboard, or both, and I'm leaning towards PSU. I already emailed Corsair about it. I've been reading a lot about how 3090s can shut down even high-quality PSUs with those transient spikes. Now I'm thinking, should I get an AX1600i? Will that be enough for this beast? Or will nvidia do something about the spikes?? I built this 1000D specifically to use the 1kw bios and crank stuff up, but if I can't find a PSU to handle all this raw power then I guess it will have to wait.


----------



## J7SC

gfunkernaught said:


> It truely is the "sweet spot" bios. The unfortunate thing is that the 1kw kp rbar bios ran GTA 5 in 8k _very_ nicely!!! Going back to the 520w is just sluggish compared to the 1kw bios. At least I ruled out the bios being the culprit. Now it has to be PSU, motherboard, or both. I'm leaning towards PSU. I already emailed corsair about it. I've been reading a lot about how 3090s can shut down high power/quality PSU's with those transient spikes. Now I'm thinking should I get an AX1600i? Will that be enough for this beast? Or will nvidia do something about the spikes?? I was building this 1000D specifically to use the 1kw bios and crank stuff up but if I can't find a PSU to handle all this raw power then I guess it will have to wait.


...I'm still using my Antec 1300 W Platinum HPC PSUs (3x) from back in the HWBot days way back...after quad SLI and hungry HEDT Intel, the current AMD 5950X and single 3090 Strix is like old-age pasture for them 

..fyi, you might want to do another DDU...every time a different bios is loaded in Win 10, it creates another registry entry (so dual bios is two cards; single card with five different flashed bios is five card entries)...sometimes, Win 10 and MSI AB etc get confused about those


----------



## gfunkernaught

J7SC said:


> ...I'm still using my Antec 1300 W Platinum HPC PSUs (3x) from back in the HWBot days way back...after quad SLI and hungry HEDT Intel, the current AMD 5950X and single 3090 Strix is like old-age pasture for them
> 
> ..fyi, you might want to do another DDU...every time a different bios is loaded in Win 10, it creates another registry entry (so dual bios is two cards; single card with five different flashed bios is five card entries)...sometimes, Win 10 and MSI AB etc get confused about those


That I know about. I can tell that's not the case for me because when I flashed back to the 520w, MSI Afterburner still thought it was the previous card, since my profiles were still there. Afterburner will detect a new card and wipe the profiles from the interface (the files still exist, just not loaded) because the hardware ID is different.
I undervolted it again on the 520w rbar and am seeing how that goes.


----------



## gfunkernaught

No dice. Powered off again. I'm 80% sure now it's the PSU. What's odd though is that Quake 2 RTX doesn't shut down my PC the way GTA 5 does.


----------



## Sheyster

gfunkernaught said:


> I can't access nvidia's website to download the newest drivers because "I dont have permission", its a thing that happens.


Try bookmarking and using this link to avoid that: Advanced Driver Search official NVIDIA drivers (www.nvidia.com)


----------



## tps3443

gfunkernaught said:


> Not sure if anyone saw my earlier post about my PC shutting down, but I'm wondering if the 1kw rbar bios has even less power limits than the non-rbar version. I've been using it without an issue until I started playing GTA 5 in 8k. I can't access nvidia's website to download the newest drivers because "I dont have permission", its a thing that happens. So is my PSU shutting off the system to protect it from the 3090's transient spikes? Or is it something else? No circuit breakers are being tripped either.


So when this happened to me, it was while using the XOC 2KW 2080Ti bios, and my PCIe slot power in GPU-Z was reporting really high readings, like always 90+ watts, sometimes just over 100 watts. After plugging in the supplemental PCIe 6-pin power cable on my motherboard, I no longer got power shutdowns.


----------



## gfunkernaught

tps3443 said:


> So when this happen to me, it was while using the XOC 2KW 2080Ti bios. And my PCI-e slot power in GPU-Z was reporting really high readings. Like always 90+ watts, sometimes just over 100+watts. After plugging in the supplemental PCIe 6 pin power cable on my motherboard, I no longer received power shut downs.


Thanks for the reply, but subsequent posts show that it wasn't that. I even tried the 520W BIOS, and the PCIe slot power never exceeded 65W. I don't know what to do at this point other than play GTA 5 at 4K, yuck.


----------



## tps3443

gfunkernaught said:


> Thanks for the reply, but subsequent posts show that it wasn't that. I even tried the 520W BIOS, and the PCIe slot power never exceeded 65W. I don't know what to do at this point other than play GTA 5 at 4K, yuck.


Is there possibly something wrong with the card? More than once I have read posts from users claiming to have RMA'ed their 3090 due to load-balancing issues or something like that.


Also, confirm the rest of the system is stable. CPUs will cause a power shutdown under a heavy load after a while if they're not getting enough voltage.


The game Rust would crash on me after about 2-3 hours of non-stop playing (just power off and restart); it was CPU instability. But it can be many things.


----------



## Arizor

Yep, as @tps3443 says: though it can appear to be a GPU or PSU fault, oftentimes it's an unlikely suspect such as the RAM or CPU. Is your CPU/RAM OC'd? Have you tried resetting to default and testing?


----------



## gfunkernaught

tps3443 said:


> Is there possibly something wrong with the card? More than once I have read posts from users claiming to have RMA'ed their 3090 due to load-balancing issues or something like that.
> 
> 
> Also, confirm the rest of the system is stable. CPUs will cause a power shutdown under a heavy load after a while if they're not getting enough voltage.
> 
> 
> The game Rust would crash on me after about 2-3 hours of non-stop playing (just power off and restart); it was CPU instability. But it can be many things.


I did consider that something might be wrong with the card, but if Quake 2 RTX can't make my system shut down, then it has to be something else. I reinstalled the drivers with the latest version and didn't enable rBAR globally. Again, this shutting-down thing has never happened to this system, ever. This is very new.

I've played GTA 5 at 8K before water cooling the card, but that was before BIOS flashing and before rBAR, many moons ago when I first got the card, and I also just realized that I was using a different PSU back then, the RM750. I read that a "hacker" found a bug in GTA 5's loading process in some DLL file and Rockstar patched it, so GTA 5 has changed since then as well.

It's a weird engine. I could be driving around the city or an urban area at 55-60 fps in 8K. I drive up to where the mountains start, still 60 fps, drive a few feet more, and then the framerate drops to the 40s in an instant and remains there (and a bit lower) until I leave the mountains. I know the foliage is doing that. But I could be walking on a street leading to the mountains, see all the foliage, and it looks like everything is loaded, but then boom, the fps tanks.


----------



## Arizor

It's a really tricky one, @gfunkernaught; it's almost the equivalent of doing an elimination diet to figure out what's giving you a headache...! I'd try resetting everything to default, testing for stability, then OCing one thing at a time until it crashes.

Where does everyone recommend buying PSU cables from, by the way? I'm in Australia and predictably it's impossible to find anywhere local (read: the entire Oceania region) with sleeved cables for my EVGA G5.


----------



## Falkentyne

gfunkernaught said:


> I did consider that something might be wrong with the card, but if Quake 2 RTX can't make my system shut down, then it has to be something else. I reinstalled the drivers with the latest version and didn't enable rBAR globally. Again, this shutting-down thing has never happened to this system, ever. This is very new. I've played GTA 5 at 8K before water cooling the card, but that was before BIOS flashing and before rBAR, many moons ago when I first got the card, and I also just realized that I was using a different PSU back then, the RM750. I read that a "hacker" found a bug in GTA 5's loading process in some DLL file and Rockstar patched it, so GTA 5 has changed since then as well. It's a weird engine. I could be driving around the city or an urban area at 55-60 fps in 8K. I drive up to where the mountains start, still 60 fps, drive a few feet more, and then the framerate drops to the 40s in an instant and remains there (and a bit lower) until I leave the mountains. I know the foliage is doing that. But I could be walking on a street leading to the mountains, see all the foliage, and it looks like everything is loaded, but then boom, the fps tanks.


Quake 2 RTX is purely path traced and is a very old engine, so it puts a very low load on the MSVDD and NVVDD rails. On a shunt-modded 3090 FE, when it reaches 620W, TDP% is about 85% and TDP Normalized is about 90%, which indicates that the highest-reported rails (hwinfo64 can't show them; you need Elmor's device) are not close to 100% of their base values. It just draws a ton of board power from the path tracing.

Something like Metro Exodus / EE in the main menu at 620W power draw (note that 100% TDP is 700W; the slider is at 114%, which does nothing for TDP with the shunt mod but DOES extend the hidden NVVDD and MSVDD limits!) reports 108.1% on TDP Normalized and only 84% on TDP. And since the sub-rails in hwinfo64 are not close to 100% of their base values, it's heavy load/power draw on internal MSVDD and NVVDD doing it. It's dropping the effective clocks and sometimes throttling down one voltage tier due to MSVDD being exceeded. So you can't just look at board power and assume your PSU is fine in one game but shuts down in another. Of course, I can't test for long since I'm air cooled and the GPU gets to 80C after a while...


----------



## tps3443

So, even though I have incredible GPU temps right now: as I already knew, the GPU die-to-block contact isn't perfect. It's only making contact over an area about the size of a big pea in the center of the silicon.

I just removed my KP Hydro block, going to use all new pads, and Kryonaut Extreme.

My current temps before the re-mount are 43C max during mid-day heat, when it's 76-77F in my home (hot inside), or 41.3C max later in the evening, when my home reaches a more normal, cooler 73F inside.

So maybe I can get even better than that.

My die contact isn't perfect; it's touching maybe 1/4 of the die. This could be due to the waterblock having a domed base? I'm not sure yet.

Here’s a picture; I just removed the block. My temps are fantastic either way! I had a water pump/flow issue earlier that I thought was just poor performance on the Kingpin Hydro Copper's part. But that is not the case at all!


----------



## J7SC

tps3443 said:


> So, even though I have incredible GPU temps right now: as I already knew, the GPU die-to-block contact isn't perfect. It's only making contact over an area about the size of a big pea in the center of the silicon.
> 
> I just removed my KP Hydro block, going to use all new pads, and Kryonaut Extreme.
> 
> My current temps before re-mount are 43C max when mid-day heat 76F-77F in my home (Hot inside). Or later in the evening 41.3C max my home reaches normal temps cooler temps of 73F inside.
> 
> So maybe I can get even better than that.
> 
> My die contact isn’t perfect. It’s touching maybe 1/4 of the die. Now, this could be due to the waterblock having a dome shape base? I’m not sure yet.
> 
> Here’s a picture. I just removed the block. My temps are fantastic either way! I had a water pump/flow issue prior that I thought was just poor performance on the Kingpin Hydro Copper behalf. But, that is not the case at all!


...probably shouldn't remind you that Ampere dies in particular tend to be not perfectly flat (even before the sometimes-raised lettering) and that folks like Luumi and der8auer CAREFULLY lap their 3090 KPE.

...my preferred method was/is a fully-spread layer of thicker thermal paste

Caveat emptor...


----------



## geriatricpollywog

tps3443 said:


> So, even though I have incredible GPU temps right now: as I already knew, the GPU die-to-block contact isn't perfect. It's only making contact over an area about the size of a big pea in the center of the silicon.
> 
> I just removed my KP Hydro block, going to use all new pads, and Kryonaut Extreme.
> 
> My current temps before re-mount are 43C max when mid-day heat 76F-77F in my home (Hot inside). Or later in the evening 41.3C max my home reaches normal temps cooler temps of 73F inside.
> 
> So maybe I can get even better than that.
> 
> My die contact isn’t perfect. It’s touching maybe 1/4 of the die. Now, this could be due to the waterblock having a dome shape base? I’m not sure yet.
> 
> Here’s a picture. I just removed the block. My temps are fantastic either way! I had a water pump/flow issue prior that I thought was just poor performance on the Kingpin Hydro Copper behalf. But, that is not the case at all!


I wouldn't worry about it. We have the same delta and mine seems to be making full contact. Side note: does anybody know what the writing on the lower right corner of the GPU means? Mine said 1995 before I cleaned the PCB. Could this be a binning parameter?


Original factory mount:










Hydrocopper mount:


----------



## tps3443

0451 said:


> I wouldn't worry about it. We have the same delta and mine seems to be making full contact. Side note: does anybody know what the writing on the lower right corner of the GPU means? Mine said 1995 before I cleaned the PCB. Could this be a binning parameter?
> 
> 
> Original factory mount:
> View attachment 2522109
> 
> 
> 
> Hydrocopper mount:
> View attachment 2522110


The 1995 written on Kingpin dies has something to do with the binning process. I'm guessing that 1995 MHz under certain voltage, load, and heat conditions is a pass to become a Kingpin die.


----------



## elbramso

tps3443 said:


> So, even though I have incredible GPU temps right now: as I already knew, the GPU die-to-block contact isn't perfect. It's only making contact over an area about the size of a big pea in the center of the silicon.
> 
> I just removed my KP Hydro block, going to use all new pads, and Kryonaut Extreme.
> 
> My current temps before re-mount are 43C max when mid-day heat 76F-77F in my home (Hot inside). Or later in the evening 41.3C max my home reaches normal temps cooler temps of 73F inside.
> 
> So maybe I can get even better than that.
> 
> My die contact isn’t perfect. It’s touching maybe 1/4 of the die. Now, this could be due to the waterblock having a dome shape base? I’m not sure yet.
> 
> Here’s a picture. I just removed the block. My temps are fantastic either way! I had a water pump/flow issue prior that I thought was just poor performance on the Kingpin Hydro Copper behalf. But, that is not the case at all!


So it is flow rate after all? I have the same issue with a poorly performing Hydrocopper block. I thought it was the block or a really bad mount, but now I think it might be flow rate as well...
I have a single pump but 2x 560mm GTR + 1x 480mm HWLabs GTS rads.
I'm using an Ultitube pump with a software flow meter. Flow at 100% pump speed is ~180 l/h, or at least that's what the software tells me.
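For anyone comparing numbers: watercooling guides usually quote flow in US GPM rather than l/h, so a reading like ~180 l/h is easier to judge after converting. A quick, purely illustrative Python sketch of the conversion (the function name is mine, not from any monitoring tool):

```python
# Convert pump flow from liters per hour to US gallons per minute.
LITERS_PER_US_GALLON = 3.785411784

def lph_to_gpm(liters_per_hour: float) -> float:
    """l/h -> US GPM: divide by liters per gallon, then by 60 minutes."""
    return liters_per_hour / LITERS_PER_US_GALLON / 60.0

print(round(lph_to_gpm(180.0), 2))  # ~0.79 GPM
```

~0.79 GPM is on the low side of the roughly 1 GPM that loop guides commonly target, which would fit the flow-rate suspicion.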


----------



## gfunkernaught

Arizor said:


> It's a really tricky one @gfunkernaught , it's almost the equivalent of doing an elimination diet to figure out what's giving you a headache...! I'd try resetting everything to default, testing for stability, then OCing one thing at a time until it crashes.
> 
> Where does everyone recommend buying PSU cables from, by the way? I'm in Australia and predictably it's impossible to find anywhere local (read: the entire Oceania region) with sleeved cables for my EVGA G5.


So I just read that the HX1200's OTP threshold is 50C. My PSU is mounted fan-down and there is a vent for the PSU at the bottom of the case. I want to try mounting it fan-up to see if it will run cooler. I can feel hot air being blown out the back of the PSU, so I assume the fan is working.


Falkentyne said:


> Quake 2 RTX is pure path traced, and is a very old engine, so it puts very low load on the MSVDD and NVVDD rails. On a shunt modded 3090 FE, when it reaches 620W, TDP% is about 85% and TDP Normalized is about 90%, which indicates that the highest reported rails (hwinfo64 can't show them, you need Elmor's device), are not close to 100% of their base values. It just draws a ton of board power from the path tracing.
> 
> Something like Metro Exodus / EE, in the main menu, at 620W power draw (note that 100% TDP is 700W, slider is at 114%, which does nothing for TDP with the shunt mod but DOES extend the hidden NVVDD and MSVDD limits!), reports 108.1% on TDP Normalized% and only 84% on TDP. And since the sub rails in hwinfo64 are not close to 100% of their base values, it's heavy load/power draw on internal MSVDD and NVVDD doing it. And it's dropping the effective clocks and sometimes throttling down one voltage tier due to MSVDD being exceeded. So you can't just look at board power to assume your PSU is fine in X but shuts down in Y. Of course I can't test that long as I'm air cooled and the GPU gets to 80C after awhile...


Well, I'm not assuming anything, just speculating; those are my first two tests since the shutdown. NVVDD is core voltage and MSVDD is VRAM core, correct? Quake 2 RTX makes my VRAM the hottest of any game I play. I also just remembered something else that was different about my system back when GTA 5 didn't shut down my PC: I had an 8700K, not a 9900K. I wonder if there is a spike in the VRMs somewhere triggering a shutdown. My PSU isn't advanced enough to automatically send the power-on signal back to the motherboard. So I'm now leaning toward the motherboard a bit more than the PSU.


----------



## Falkentyne

gfunkernaught said:


> So I just read that the HX1200's OTP threshold is 50C. My PSU is mounted fan-down and there is a vent for the PSU at the bottom of the case. I want to try mounting it fan-up to see if it will run cooler. I can feel hot air being blown out the back of the PSU, so I assume the fan is working.
> 
> Well, I'm not assuming anything, just speculating; those are my first two tests since the shutdown. NVVDD is core voltage and MSVDD is VRAM core, correct? Quake 2 RTX makes my VRAM the hottest of any game I play. I also just remembered something else that was different about my system back when GTA 5 didn't shut down my PC: I had an 8700K, not a 9900K. I wonder if there is a spike in the VRMs somewhere triggering a shutdown. My PSU isn't advanced enough to automatically send the power-on signal back to the motherboard. So I'm now leaning toward the motherboard a bit more than the PSU.


VRAM is memory voltage (1.35 V).
MSVDD is uncore (SRAM).
NVVDD is the main chip.
The voltages for GPU core and SRAM shown in hwinfo64 are requested voltages (just like the clock speed is the requested clock on the V/F curve), not actual voltages. Think of VID on Intel chips.
Also, current draw on the NVVDD and MSVDD outputs isn't shown either (Elmor's tool can read it), although it can be estimated by looking at one of the NVVDD output powers (forgot which; one of them, when summed with the second, adds up to the third, which I think is chip + memory power combined) and the SRAM output voltages, then doing the math (watts = V x A).
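The watts = V x A arithmetic above is simple enough to sketch. A minimal, purely illustrative Python example; the function name and the readings are mine, not actual hwinfo64 sensor labels:

```python
# Estimate rail current from a reported output power and a reported
# (requested) rail voltage, using I = P / V.
def estimate_rail_current(power_w: float, voltage_v: float) -> float:
    if voltage_v <= 0:
        raise ValueError("voltage must be positive")
    return power_w / voltage_v

# Hypothetical readings: 300 W on an NVVDD output at a requested 0.950 V.
print(round(estimate_rail_current(300.0, 0.950), 1))  # ~315.8 A
```

Keep the caveat above in mind: the voltages shown are requested, not measured, so this is only a ballpark.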


----------



## tps3443

elbramso said:


> So it is flow rate after all? I have the same issue with a poorly performing Hydrocopper block. I thought it was the block or a really bad mount, but now I think it might be flow rate as well...
> I have a single pump but 2x 560mm GTR + 1x 480mm HWLabs GTS rads.
> I'm using an Ultitube pump with a software flow meter. Flow at 100% pump speed is ~180 l/h, or at least that's what the software tells me.


Flow rate fixed my high temps. I disassembled my D5/top and cleaned it up really well, then put everything back together with a slightly snug D5 pump ring. Now my flow rate is through the roof! I can't remove my fill cap with the system powered on or it'll blow water out onto my PSU.

I would check whether the fittings on the card are warm after a very long gaming session or load. That was what I noticed when my card was running in the 50s in games: my fittings were really, really warm, especially the outlet fitting.

After my flow rate increased, my temps dropped like crazy. I also re-mounted my block last night and installed a full-coverage Fujipoly thermal pad on the rear PCB. The highest temp I have seen is 40C on GPU temp, and the GPU die runs 38C.

No fans on the card or anything, and GPU2 hits 41C (back of die).

The GPU starts out at 33/34C and stays there for about 2 minutes, then it slowly goes up and bounces between 39/40C.


I was ready to give this block away. I was so upset. Now I'm just so impressed!!! I have finally achieved super amazing GPU temperatures.


----------



## tps3443

0451 said:


> I wouldn't worry about it. We have the same delta and mine seems to be making full contact. Side note, does anybody know what the writing means on the lower right corner of the GPU? Mine said 1995 before I cleaned the PCB. Could this be a binning parameter?
> 
> 
> Original factory mount:
> View attachment 2522109
> 
> 
> 
> Hydrocopper mount:
> View attachment 2522110


So I re-mounted the block last night, and there was a really good improvement!!

First of all, I cut out the Fujipoly 3mm-thick rear thermal pad. This full-PCB pad was made for an Optimus FTW3 3090 rear backplate, so I had to cut off the excess in certain areas.

I used Kryonaut Extreme. I spread it really well and put a small dot in the center. I put all the new OEM EVGA 2.25mm KP pads on the memory too.

My GPU2 temp for the back of the card is 41C with no fan at all; the large thermal pad provided a 6C reduction in GPU2 temps. Memory #3 reads 33C in game; that's all I could see. I ran the game for about 30 minutes to an hour, and the hottest GPU temp was 40C, with the GPU die at 38C.

This block works great!!! I'm glad I re-mounted. It's good, I'm not touching it anymore, and I'm done.

I literally laid my bare 3090 KP PCB on that giant Fujipoly thermal pad and pressed down to get a good print of it. Then I just cut around any overhang. It probably added a pound to the weight of my 3090 KP, lol.

I could certainly tell the mount was good based on my ambient temps being the same as yesterday, and how long my card held that 33/34C.


Also, I wanted to add: these EVGA OEM KP pads were perfect!! EVGA mailed me this set as a replacement. My original KP HC pads that came on the block were very thick. Anyway, those are gone and my temps are solid now. I'm not buying the Optimus KP block for $622 with tax.

Now my system is all out on a table with two of the worst-performing 360s you can buy. So I hope I can keep those 33/34/35C temps when I get my 1080mm Alphacool(s) installed.


----------



## gfunkernaught

Falkentyne said:


> VRAM is memory voltage (1.35v)
> MSVDD is uncore (sram)
> NVVDD is main chip
> The voltages for gpu core and sram shown in hwinfo are requested voltages (just like the clock speed is requested clock on V/F curve), not actual voltages. Think of VID on Intel chips.
> also amps draw on NVVDD and MSVDD output isn't shown either (Elmor's tool can read it) although it can be estimated by looking at one of the NVVDD output powers (forgot which, one of the ones when summed with the second, adds up to the third, which I think is chip+memory power combined), and Sram output voltages, then doing math (watts=V*A)


So I don't think my motherboard has OCP on the PCIe slots, and if the OCP on my PSU were tripped by the GPU, the system would stay off. My suspicion is the mobo's VRM thermal protection getting tripped. Even though there's a fan on the VRMs, there is probably a threshold somewhere in the BIOS. I'll try everything stock and see what happens.


----------



## tps3443

One of my BFRs just showed up!!


Alphacool 350x350x45


----------



## dante`afk

anyone here around with the bykski full cover active block for the 3090 FE?

I've spent 3 hours now trying to get this working; however, every time I use the Bykski block my computer won't turn on. LEDs, motherboard, fans, pumps, etc. turn on for a split second and then stop, and nothing else happens. If I put my other block from Bitspower back on, it works fine. There seems to be contact somewhere that shouldn't happen, which probably tells the system to stop powering on.

Any ideas?


----------



## Falkentyne

dante`afk said:


> anyone here around with the bykski full cover active block for the 3090 FE?
> 
> I've spent 3 hours now trying to get this working; however, every time I use the Bykski block my computer won't turn on. LEDs, motherboard, fans, pumps, etc. turn on for a split second and then stop, and nothing else happens. If I put my other block from Bitspower back on, it works fine. There seems to be contact somewhere that shouldn't happen, which probably tells the system to stop powering on.
> 
> Any ideas?


Active backplate is the same on both blocks or are you changing them?


----------



## tps3443

Whoa!!! These temps are so nice!!


Big rad not even installed yet, but this large back thermal pad and repaste have helped a lot!!! I'm using Kryonaut Extreme 14.2 W/mK thermal paste with brand-new EVGA Kingpin 13 W/mK OEM pads, and on the back of the card I have a full-PCB-coverage Fujipoly 3mm thermal pad.

Playing Rust, and I'm loving these temps!! This game used to run 48-52C minimum when I had a water flow issue.

I can probably break 16K PR now, lol. Gonna hook up that big-boy radiator tonight! So far so good, though! Water pressure is important for these blocks. I'm also using only the backside ports for inlet/outlet, if that makes a difference.


PS: My GPU temps run like 36-38C. Memory temps are hovering in the 35-37C range.

I'm just loving this. I was so upset over spending like $350 with tax and fees on this Kingpin 3090 block, only to deal with poor performance, apparently along with the vast majority of other owners. Very satisfied now, though. EVGA did not cut corners on this thing. Sure, they could've offered us an active backplate, but with memory temps in the 30s while gaming, why even bother?


PS: This is NOT a max overclock. Voltages are all stock. Just +165 and roll. I'm still tinkering.


----------



## elbramso

tps3443 said:


> Whoa!!! These temps are so nice!!
> 
> 
> Big rad not even installed yet. But this large back thermal pad, and repaste has helped a lot!!! Using Kryonaut Extreme 14.2W/mk thermal paste, with brand new Evga Kingpin 13W/mk OEM pads, and on the back of the card I have a (Full PCB coverage Fujipoly 3MM thermal pad)
> 
> Playing Rust, and I'm loving these temps!! This game used to run 48-52C minimum when I had a water flow issue.
> 
> I can probably break 16K PR now lol. Gonna hook up that big boy radiator tonight! So far so good though! Water pressure is important for these blocks. I’m also using only the backside ports, for inlet/outlet too if that makes a difference.
> 
> 
> PS my GPU temps run like 36C-38C. Memory temps are hovering between 35/36/37’s range.
> 
> I'm just loving this. I was so upset over spending like $350 with tax and fees on this Kingpin 3090 block, only to deal with poor performance, apparently along with the vast majority of other owners. Very satisfied now, though. EVGA did not cut corners on this thing. Sure, they could've offered us an active backplate, but with memory temps in the 30s while gaming, why even bother?
> 
> 
> PS this is NOT a max overclock. Voltages are all stock. Just +165 and roll. I’m still tinkering.


Happy to see someone who actually managed to get this block to work.
Now that I've read this, I must redo my mount as well. I used 2mm Thermalright pads on the front but they don't seem to work well; I can't make good die contact. I guess the OEM pads are softer, and that's why your result is better. I'm using Kryonaut as well, but not the Extreme variant; maybe I'll give it a try.

Could you do some testing with the card pulling 500W constantly and show your water and GPU temps?

You're giving me hope that my block can work as well.


----------



## gamerMwM

So what do I need to know before running the 1000W KP rBAR BIOS? Should the power limit be set at around 50-52%?

What about memory OC? I thought I remembered someone here saying you have to undervolt when not running benchmarks. Is that still true? Also, how much are you adding to the memory when OC'ing?

I've got a Strix OC 3090 in a full-coverage Bykski block with Gelid Extreme pads. Temps are nice all around. I'm currently running the KP 520W BIOS; it's not reading the 3rd 8-pin correctly and the power draw numbers are wrong, peaking at "380 watts," so I can't see exactly where I'm at, but it's OC'ing well and I've broken 15400 in Port Royal for the first time.

Trying to find a daily-driver OC on this 520W BIOS is a little tricky, as I've had games crash with as little as +120 on the core and even just +500 on memory.

It passes benchmarks fine, though, with an even higher OC (highest so far was around +165 core and +1200 memory on this BIOS). My Asus daily-driver OC seemed more stable at around +150 core / +1000 memory when gaming on the stock 480W BIOS.

Not sure how much voltage figures in now, as I usually only gave it +25 on the voltage on the stock BIOS. I've been experimenting with more voltage on the 520W BIOS, with mixed results.

Here's my best score so far with my new water loop and block:









www.3dmark.com


----------



## gfunkernaught

gamerMwM said:


> So what do I need to know before running the 1000 watt KP Rebar Bios? Power Limit should be set at around 50-52%?
> 
> What about memory OC? I thought I remembered someone here saying you have to undervolt it when not running benchmarks. Is that still true....also how much are you putting on the memory when OC'ing?
> 
> I've got a Strix OC 3090 in a full coverage Bykski block with Gelid Extreme Pads. Temps are nice all around. Currently running the KP 520 watt Bios and it's not reading the 3rd 8 pin correctly and power draw numbers are wrong peaking at "380 watts" so I can't see exactly where I'm at but it's OC'ing well and I've broken 15400 on Port Royal for the first time.
> 
> Trying to find a daily driver OC on this 520 watt Bios is a little tricky as I've had games crash with as little as 120 on the core and even just 500 on memory.
> 
> Passes benchmarks fine though with an even higher OC (highest so far was around 165 core and 1200 memory on this Bios). Seemed like my Asus daily driver OC was more stable at around +150 clock / +1000 memory when gaming on the stock 480 watt Bios.
> 
> Not sure how much Voltage figures in now as I usually only gave it +25 on the voltage on stock Bios. I've been experimenting with using more voltage on the 520 watt Bios with mixed results.
> 
> Here's my best score so far with my new water loop and block:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com


Which 3090 do you have? What are your load temps like? The 1kW BIOS is for advanced users, and you should be very mindful of every setting you change. The 1kW BIOS with the PL at 52% is not the same as using a 520W BIOS; there are other limits and protections that a 520W BIOS has that the 1kW does not.

As for the daily-driver OC on the 520W BIOS, I do +65 core and +1200 mem. Anything higher than that gets throttled in heavy games that slam the power limit.
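To make the slider arithmetic explicit: a power-limit percentage only means something relative to the BIOS's 100% TDP. A minimal Python sketch (the 1000W base is the "1kW" BIOS figure discussed here; the function name is mine):

```python
# Target board power implied by a power-limit slider setting:
# target = (BIOS 100% TDP) * slider% / 100.
def target_power_w(bios_tdp_w: float, slider_pct: float) -> float:
    return bios_tdp_w * slider_pct / 100.0

print(target_power_w(1000.0, 52.0))   # 520.0 (1kW BIOS at 52%)
print(target_power_w(520.0, 100.0))   # 520.0 (a 520W BIOS at 100%)
```

The nominal caps match, but as noted above, matching board power doesn't restore the per-rail limits and protections the 520W BIOS carries.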


----------



## gfunkernaught

I swapped the HX1200 for my old RM750: no crash, no power-off, over an hour of GTA 5 at 8K on the 520W BIOS with a v/f curve of [email protected] Go figure. It seemed like the PSU at first, but the fact that the PC would power back on made me think it was something else.


----------



## tps3443

elbramso said:


> Happy to see someone who actually managed to get this block to work.
> Now that I've read this, I must redo my mount as well. I used 2mm Thermalright pads on the front but they don't seem to work well; I can't make good die contact. I guess the OEM pads are softer, and that's why your result is better. I'm using Kryonaut as well, but not the Extreme variant; maybe I'll give it a try.
> 
> Could you do some testing with the card pulling 500W constantly and show your water and GPU temps?
> 
> You're giving me hope that my block can work as well.


Hey, it was actually @0451 who demonstrated really, really good temps first and proved that this KP 3090 Hydrocopper is actually a great waterblock. And he did all that on just a 280mm HWLabs GTR.

I felt the same way as you! I actually remember talking to someone on the EVGA forums, and I was gonna go respond back to that guy and say, "Hey, the block isn't crap!! Don't throw it away," lol. But then I realized you're actually him.

The OEM pads are actually pretty firm at 50 on the Shore scale, and the OEM KP pads are really, really good quality too!

I think EVGA had installed the wrong size pads on my KP HC block when I first received it. Ask them for another set; it's only $15.00 shipped for one complete side.

But yes, the block is great, and I can actually push this thing now. I'm installing my big 1080x45mm rad right now!!

I was gaming at 2,190-2,205 MHz earlier with no tinkering or voltage tuning at all, on just (2) thin 360mm rads in pull only, at SUB-38C.

The rear full-coverage pad has helped too. GPU2 temps were always super hot, burn-your-skin hot. Now it hits maybe 37-38C max on the back side of the GPU die.


----------



## Arizor

Damn you all, I've now ordered a HWLabs 360 GTR to replace my quite-new EK PE 360, to take full advantage of my Noctua push/pull. It's on the way from Germany right now; no local suppliers in Australia (for so much stuff, the PC scene here needs work!).


----------



## tps3443

gfunkernaught said:


> I swapped the HX1200 with my old RM750, no crash, no power off, over an hour of GTA 5 at 8k on the 520w bios with a v/f curve of [email protected] Go figure. It seemed like PSU at first but the fact that the PC would power back on made me think it was something else.


That’s awesome! Glad it was something else and not the 3090.


----------



## tps3443

Arizor said:


> Damn you all, I’ve now ordered a HWLabs 360 GTR to replace my quite new EK pe 360 to take full advantage of my noctua push/pull. On the way from Germany right now, no local suppliers in Australia (for so much, PC scene here needs work!).


I imagine the more extreme PC parts will become more available to everyone. The PC industry has really blown up. I remember thinking, years back when smartphones started becoming popular, that the PC industry was gonna slowly die off... consoles and mobile computing would take over, lol.

Well, it's just crazy how things turn out: you walk into your local Walmart in 2021 and they have two aisles dedicated to PC gaming alone. I am in a really rural area in North Carolina, and I rely solely on online ordering for all of my components.


I can see it now: Walmart buying out some of these large PC parts companies and selling the parts right in the store.


Oh, and those GTR rads are just ridiculously good. I was gonna go crazy and run like three 480s, but I was too cheap and just went big external. The easy way out, I suppose.


----------



## Shawnb99

tps3443 said:


> Oh, and those GTR rads are just ridiculously good. I was gonna go crazy and run like three 480s, but I was too cheap and just went big external. The easy way out, I suppose.


Just a quick reminder: GTRs shine at 2K RPM and up. If you're running your fans lower, then the GTX is the better choice.


----------



## tps3443

Shawnb99 said:


> Just a quick reminder: GTRs shine at 2K RPM and up. If you're running your fans lower, then the GTX is the better choice.


Yeah, I wanted them, but I have decided to move away from having a case. I mess with my PC all the time.

So I'm installing this right now. They also launched the enclosure and feet for it earlier this month, so once they're in stock I'm gonna grab some.

I have one of my 1080x45mm rads so far; one more is still on the way.


----------



## Arizor

Shawnb99 said:


> Just a quick reminder: GTRs shine at 2k RPM and up. If you're running your fans lower, then the GTX is the better choice.


Yep plan to go push/pull with my 6 x Noctua 12Fs at around 1800rpm.


----------



## gamerMwM

gfunkernaught said:


> Which 3090 do you have? What are your load temps like? The 1kw bios is for advanced users and you should really be very mindful of every setting you change. 1kw bios PL at 52% is not the same as using a 520w bios. There are other limits and protections that a 520w bios has that the 1kw does not.
> 
> As for the daily driver OC on the 520w bios, I do +65 core and +1200 mem. Anything higher than that will be throttled when running heavy games that slam the power limit.


Temps while gaming with my full coverage Bykski block are as follows:

Ambient: 21.6 C
Gpu: 44
Hot Spot: 56
Memory: 44
VRM: 38

I game with the glass side panels off of my case. CPU is on a 240 AIO by itself. GPU is in a loop with a Corsair XD5 Pump/Res, a 280mm & a 240mm Corsair XR5 Rad. I have Noctua fans on the radiators.
When I put the block and active backplate on, I used Gelid Extreme pads on the front and back. I used a lot of Arctic MX-5 on the Core, and a decent amount also on the VRAM on both sides and some in the cracks between VRAM chips. Pretty happy with temps. If I end up trying out the 1000 watt bios, it will be only for a few benchmarks and then back to the Asus stock bios or the KP 520 for daily use.


----------



## J7SC

tps3443 said:


> Yeah, I wanted them. *But I have decided to move away from having a case. *I mess with my PC all the time.
> 
> So I’m installing this right now. They also launched the enclosure, and feet for it earlier this month. So once they’re In stock I’m gonna grab some.
> 
> I have one of my 1080x45mm so far, one more still on the way.


...I know the feeling  First, I moved away from standard 'cases' I had frustrated myself with for years and started using a TT Core P5 (w/ 5x 360/60 rads and 4x D5s onboard).

Then I added a TT Core P8 with three mobos and no rads onboard...instead picked up a little table w/ wheels (!) at Walmart and made it my mobile cooling table...below is an earlier version; when done, a 3rd 480/64 rad and also 3x 360/60 rads will be added on top of the mobile cooling setup (where the test-bench is in the pic), handling the w-cooling for a 5950X/3090 Strix and a 3950X/6900XT and all connected via Koolance QDs to the Core P8. All fans are push/pull.

...the cooling table will be right by a window but somewhat hidden from view, while the rest of the setup - freed from rads etc - has enough room for 40 and 48 inch monitor work+play stations.


----------



## kryptonfly

I finally shunt modded my Gigabyte 3090 Turbo 2x8 pin with 15mΩ from 390W to ~520W ratio 1.33 but starting around 450W it's PL. The gpu achieves ~2145 mhz at 975mV in Port Royal but there's power limit at this setting :

I scored 15 150 in Port Royal

Does anybody know of software which can report a corrected power reading with an offset? For example: I see 300W now in Afterburner but it's really 400W; I would like to see 400W directly in Afterburner. Thanks


----------



## tps3443

kryptonfly said:


> I finally shunt modded my Gigabyte 3090 Turbo 2x8 pin with 15mΩ from 390W to ~520W ratio 1.33 but starting around 450W it's PL. The gpu achieves ~2145 mhz at 975mV in Port Royal but there's power limit at this setting :
> 
> I scored 15 150 in Port Royal
> 
> Does anybody know of software which can report a corrected power reading with an offset? For example: I see 300W now in Afterburner but it's really 400W; I would like to see 400W directly in Afterburner. Thanks



No way to see this with soldered shunts; maybe an external power meter to read power from the 12V lines?

The 3090 seems to need about 560W maximum for pretty much its full potential. But you're doing good either way.

You can’t flash the bios on that model?


----------



## tps3443

This is my 15,701 Port Royal score on the 520 watt bios. My card is now power limited, and I can actually use the 1KW XOC bios. I should break 16K tonight after work.

[email protected] Stock
3090 Kingpin HC
520w LN2 KP HC bios
Single Alphacool 1080x45mm radiator 
Single D5 ekwb pump


----------



## Falkentyne

kryptonfly said:


> I finally shunt modded my Gigabyte 3090 Turbo 2x8 pin with 15mΩ from 390W to ~520W ratio 1.33 but starting around 450W it's PL. The gpu achieves ~2145 mhz at 975mV in Port Royal but there's power limit at this setting :
> 
> I scored 15 150 in Port Royal
> 
> Does anybody know of software which can report a corrected power reading with an offset? For example: I see 300W now in Afterburner but it's really 400W; I would like to see 400W directly in Afterburner. Thanks


Just apply a multiplier to the shunt values in MSI Afterburner; it's in the monitoring options.
But it makes more sense to do it in HWiNFO64 directly and use RivaTuner RTSS as a HWiNFO plugin (in the HWiNFO options). You can set all the input rails with a multiplier. Do NOT change the output rails: the output rails are read directly by the VRM and are reasonably accurate (they do not respond to shunt mods). You can even get the MSVDD power from this (MSVDD power = Total Board Power − NVVDD Output Power (sum) in HWiNFO64).
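For anyone wondering where the x1.33 multiplier comes from, it falls straight out of the shunt math. A minimal sketch (the 5mΩ stock shunt value is an assumption, the commonly cited figure for these boards, and `msvdd_power` just restates the formula above with hypothetical readings):

```python
def parallel(r1, r2):
    """Equivalent resistance (mOhm) of two shunt resistors in parallel."""
    return r1 * r2 / (r1 + r2)

STOCK = 5.0  # mOhm -- assumed stock shunt value, not stated in the thread

# Piggybacking a resistor on the stock shunt lowers the effective value,
# so the card under-reads power by STOCK / effective.
for added in (15.0, 10.0):
    eff = parallel(STOCK, added)
    mult = STOCK / eff  # the multiplier to set on the input rails in HWiNFO64
    print(f"+{added:g} mOhm -> effective {eff:.2f} mOhm, multiplier x{mult:.2f}")
    # +15 mOhm -> x1.33, +10 mOhm -> x1.50

def msvdd_power(total_board_power, nvvdd_output_sum):
    """MSVDD power per the formula above (hypothetical readings, in watts)."""
    return total_board_power - nvvdd_output_sum
```

So a 15mΩ piggyback on a 5mΩ stock shunt gives the 1.33 ratio kryptonfly measured, and a 10mΩ one would need x1.50 instead.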


----------



## KedarWolf

Falkentyne said:


> That color is normal. The 120mm * 120mm pad is a China only SKU (Gelid explained that in email) so it's very likely the pigment is different (I don't know why, maybe to stop someone from reselling them as 80mm * 40mm chopped up?). Mine look identical to that.
> The EC360 Silver pads look even slightly darker.


From Gelid support.

Hi,

I do not suggest you buy the older darker thermal pads from China, as they have been replaced due to some quality issues.

Please try our new ultimate pads instead.


GELID TEAM


----------



## elbramso

tps3443 said:


> Hey, it was actually @0451 who demonstrated really really good temps first, and proved that this KP 3090 Hydrocopper is actually a great waterblock. And he did all that on just a 280MH HWLabs GTR.
> 
> I felt the same way as you! And I actually remember talking to someone on the evga forums, and I was gonna go respond back to that guy. And say, (Hey the block isn’t crap!!/((Dont throw it away)) lol. But then I realized your actually him.
> 
> The OEM pads are actually pretty firm at 50 shore scale. The OEM KP pads are also really really good quality too!
> 
> I think evga had Installed the wrong size pads on KP HC block when I first received it. Ask them for another set, it’s only $15.00 shipped for one complete side.
> 
> But yes, the block is great. And I can actually push this thing now. Im installing my Big 1080x45mm rad right now!!
> 
> I was gaming at 2,190-2,205Mhz earlier with no tinkering or voltage tuning at all, on just (2) thin 360mm rads in pull only at SUB 38C.
> 
> The rear full coverage pad has also helped too. GPU2 temps were always super hot! Burn your skin hot. It hits maybe 37-38 max on the back side of the GPU die.


Yes lol, we have already been in touch on the evga forum^^

Unfortunately, EVGA doesn't ship replacement pads to Germany, and the RMA centre in Germany doesn't ship them either (because they have none).
I've ordered new thermal pads now: EC360 Silver 2mm, which seem really soft. Hopefully that will finally help.


----------



## Arizor

Could a bad mount cause low OC potential? For example, in any benchmark, if I try going over 2085 I crash to desktop, regardless of temps (this is a Strix OC model).

Hoping when this optimus block arrives I get a good mount.


----------



## West.

I have some Gelid pads arriving tomorrow for my 3090 HOF. What is the best/recommended thermal paste for it? I have a few grams of Kryonaut, but I've seen someone say Kryonaut dries out pretty quick and performance drops off significantly after it dries. Is that true?


----------



## gfunkernaught

gamerMwM said:


> Temps while gaming with my full coverage Bykski block are as follows:
> 
> Ambient: 21.6 C
> Gpu: 44
> Hot Spot: 56
> Memory: 44
> VRM: 38
> 
> I game with the glass side panels off of my case. CPU is on a 240 AIO by itself. GPU is in a loop with a Corsair XD5 Pump/Res, a 280mm & a 240mm Corsair XR5 Rad. I have Noctua fans on the radiators.
> When I put the block and active backplate on, I used Gelid Extreme pads on the front and back. I used a lot of Arctic MX-5 on the Core, and a decent amount also on the VRAM on both sides and some in the cracks between VRAM chips. Pretty happy with temps. If I end up trying out the 1000 watt bios, it will be only for a few benchmarks and then back to the Asus stock bios or the KP 520 for daily use.


If you're strictly benching the 1kw bios then yeah. But still, 1kw bios will turn everything up in terms of temperature and power. Just keep that in mind when you test it out.


----------



## tps3443

elbramso said:


> Yes lol, we have already been in touch on the evga forum^^
> 
> Unfortunately, evga doesn't ship replacement pads to Germany and rma centre in Germany doesn't ship them either (because they have none)
> I've ordered new thermal pads now. EC360 silver 2mm which seem really soft. Hopefully that will finally help.


Stock pads look just like Gelid Extreme pads. Same hardness, same color, and same thermal conductivity.

You'll get it there though.


----------



## tps3443

West. said:


> I have some Gelid pads arriving tomorrow for my 3090 HOF. What is the best/recommended thermal paste for it? I have a few grams of Kryonaut, but I've seen someone say Kryonaut dries out pretty quick and performance drops off significantly after it dries. Is that true?


I'm using that Kryonaut Extreme stuff, the pink kind. Seems to work really well.

I would say Kingpin paste, but I've received a couple of bad tubes.


----------



## tps3443

Arizor said:


> Could a bad mount cause low OC potential? For example, in any benchmark, if I try going over 2085 I crash to desktop, regardless of temps (this is a Strix OC model).
> 
> Hoping when this optimus block arrives I get a good mount.


Water temperature and water flow will cause those issues. I assume your water flow is fine?


----------



## gfunkernaught

tps3443 said:


> Water temperature and water flow will cause those issues. I assume your water flow is fine?


He did mention regardless of temperature.

@Arizor 
What is the voltage you see when running at 2085mhz?


----------



## des2k...

About flow: I have 4 pumps here. If I run a single pump, flow is only 0.8 l/min, and there's zero difference in GPU delta on my side with the EK block.

Still a low delta, 8-10°C up to 500W 🙄


----------



## GRABibus

Hi,
did any of you measure the real power draw of the MSI Suprim X BAR bios? (The one in my sig.)

I ask you that because with my Strix on air I produced this score and these temps:

I scored 15 151 in Port Royal

AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com
Bios MSI Suprim X BAR
Drivers 471.68 on high perf
ReBar enabled in profile "3DMArk Port Royal DLSS" in NVIDIA Inspector.
+160MHz on core
+1075MHz on memory
100% voltage slider
107% PL slider
25°C room temperature

Tweaking 5900X :
1 CCD
4.95GHz static OC
SMT disabled
Core count = 4


I am surprised by this nice score (despite the benefits of the new drivers and ReBar), but I am mainly surprised by my temps: according to my tests with other bioses such as the EVGA 500W BAR or the EVGA KP 520W BAR, they clearly show that the power draw of the MSI bios is much higher than 450W...

As the power reading is messed up in GPU-Z when flashing those bioses on my Strix on air, I'm asking if you have an idea...

Thank you.


----------



## geriatricpollywog

tps3443 said:


> Hey, it was actually @0451 who demonstrated really really good temps first, and proved that this KP 3090 Hydrocopper is actually a great waterblock. And he did all that on just a 280MH HWLabs GTR.
> 
> I felt the same way as you! And I actually remember talking to someone on the evga forums, and I was gonna go respond back to that guy. And say, (Hey the block isn’t crap!!/((Dont throw it away)) lol. But then I realized your actually him.
> 
> The OEM pads are actually pretty firm at 50 shore scale. The OEM KP pads are also really really good quality too!
> 
> I think evga had Installed the wrong size pads on KP HC block when I first received it. Ask them for another set, it’s only $15.00 shipped for one complete side.
> 
> But yes, the block is great. And I can actually push this thing now. Im installing my Big 1080x45mm rad right now!!
> 
> I was gaming at 2,190-2,205Mhz earlier with no tinkering or voltage tuning at all, on just (2) thin 360mm rads in pull only at SUB 38C.
> 
> The rear full coverage pad has also helped too. GPU2 temps were always super hot! Burn your skin hot. It hits maybe 37-38 max on the back side of the GPU die.


Your results are truly impressive! I was happy with my mount, but now you are making me re-think it. If I were to remount, what exact pads and sizes should I use?



des2k... said:


> about flow, I have 4 pumps here, if I run single pump flow is only 0.8l/m , there's 0 difference on gpu delta on my side with the EK block
> 
> still low delta, 8c-10c up to 500w🙄


You show an impressive delta from water to core temp, but isn’t your core just at 2115? There might be some room for improvement for your card as well.


----------



## Arizor

gfunkernaught said:


> He did mention regardless of temperature.
> 
> @Arizor
> What is the voltage you see when running at 2085mhz?


It’s around 1.093/1.1 I believe, basically the default curve of the EVGA 500W FTW rebar bios upped by 120-130 or so.


----------



## tps3443

des2k... said:


> about flow, I have 4 pumps here, if I run single pump flow is only 0.8l/m , there's 0 difference on gpu delta on my side with the EK block
> 
> still low delta, 8c-10c up to 500w🙄


My flow was bad. The water going back to my reservoir was a sideways trickle. It was so low that it actually ran down the side of my reservoir. I couldn't even remove my impeller, that's how bad and dirty this pump was. It looked like the pump was dipped in liquid Jolly Ranchers and then left out to dry. I had to chisel away at the buildup. This was all caused by prior pastel coolant usage.

The flow now is crazy good though. All of my outlet fittings aren’t hot anymore.

Gonna run the 1KW XOC bios tonight and hopefully get that 16K PR score.


----------



## des2k...

0451 said:


> Your results are truly impressive! I was happy with my mount, but now you are making me re-think it. If I were to remount, what exact pads and sizes should I use?
> 
> 
> You show an impressive delta from water to core temp, but isn’t your core just at 2115? There might be some room for improvement for your card as well.


It might be possible, I just haven't had time to work more on the block. It won't be anything insane, maybe 1-3c delta improvement.

So far 1.1v 2180-2190eff freq in PortRoyal, temps are still around 40c. It won't be 10c delta because that voltage will push 560w.


----------



## dante`afk

Falkentyne said:


> Active backplate is the same on both blocks or are you changing them


Seems to have been one of the IO bracket screws; tightening it too hard bent the PCB.




gamerMwM said:


> Temps while gaming with my full coverage Bykski block are as follows:
> 
> Ambient: 21.6 C
> Gpu: 44
> Hot Spot: 56
> Memory: 44
> VRM: 38


can you do 30 minutes of bright benchmark on quality RT settings?


---------------------------


Switched to an active BP as well; didn't improve things too much, the biggest bonus was aesthetics.

30 min bright benchmark on quality RT settings

bitspower block + passive plate + 120mm fan (I played games several hours before running the bench, thus the water temps)










bykski block w/ active BP. (lots of air bubbles in the loop, the pumps are loud AF )


----------



## gfunkernaught

Arizor said:


> It’s around 1.093/1.1 I believe, basically the default curve of the EVGA 500W FTW rebar bios upped by 120-130 or so.


Assuming the power limit slider is all the way to the right. Does the clock bounce around 2085mhz along with the voltage?


----------



## Arizor

gfunkernaught said:


> Assuming the power limit slider is all the way to the right. Does the clock bounce around 2085mhz along with the voltage?


Yep PL set to max. Clock seems to stay at 2085 in PR at 1.087, slight bounce down to 2070 depending upon the moment.

edit: But then, for demonstration, if I try TimeSpy with exact same settings, crash within 1 minute.


----------



## gfunkernaught

Arizor said:


> Yep PL set to max. Clock seems to stay at 2085 in PR at 1.087, slight bounce down to 2070 depending upon the moment.
> 
> edit: But then, for demonstration, if I try TimeSpy with exact same settings, crash within 1 minute.


That was happening to me too while using the 1kw rbar bios. I always run Time Spy extreme which runs in 4k, and PR is 1440p. Could explain the crash with a heavier load. Does your voltage bounce too?


----------



## Arizor

gfunkernaught said:


> That was happening to me too while using the 1kw rbar bios. I always run Time Spy extreme which runs in 4k, and PR is 1440p. Could explain the crash with a heavier load. Does your voltage bounce too?


With a heavier load, such as running CP2077 or Metro Exodus maxxed out 4K/DLSS/RT etc., it crashes very, very quickly at that clock. If I lower the clocks (CP2077 is pretty stable at 2040 for example), it keeps a quite consistent voltage and clock.


----------



## elbramso

Tested whether flow has an impact on my bad HydroCopper delta. The answer is no, it doesn't matter much.
Did a 25-minute run of the Heaven benchmark with a slight overclock, at 45% and 90% pump speed.

Did them back-to-back, so my ambient temps were higher on the 90% run. Fans on my 560mm rads were at a fixed 35% speed.


----------



## West.

Follow-up on my HOF pad swap: I eyeballed the stock pad thickness and matched Gelid to the corresponding areas. Now I can't get a good mount on the die; it thermal throttles at a 300W load. Do any HOF owners have successful pad swap results? I think I need some help…


----------



## yzonker

West. said:


> Follow-up on my HOF pad swap: I eyeballed the stock pad thickness and matched Gelid to the corresponding areas. Now I can't get a good mount on the die; it thermal throttles at a 300W load. Do any HOF owners have successful pad swap results? I think I need some help…
> View attachment 2522347


I found this article that says 0.5mm mem pads. Does the heatsink have the raised areas for the mem like the article says? I'm assuming you're repadding the stock cooler. If not then disregard.









Hot Tech Info - Where Tech Comes Into Play
hottechinfo.com


----------



## West.

yzonker said:


> I found this article that says 0.5mm mem pads. Does the heatsink have the raised areas for the mem like the article says? I'm assuming you're repadding the stock cooler. If not then disregard.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Hot Tech Info - Where Tech Comes Into Play
> hottechinfo.com


Oh f, I don't have 0.5mm pads. Yes, I'm repadding the stock cooler. I'll order ASAP and post results. Those Ultimates are hard; I can't compress them at all. Meanwhile, I have an M11 Apex mobo (no iGPU) and no spare GPU…

Anyway, thanks for the help,


----------



## yzonker

West. said:


> Oh f, I don't have 0.5mm pads. Yes, I'm repadding the stock cooler. I'll order ASAP and post results. Those Ultimates are hard; I can't compress them at all. Meanwhile, I have an M11 Apex mobo (no iGPU) and no spare GPU…
> 
> Anyway, thanks for the help,


Get the Extremes instead. Very soft.


----------



## gfunkernaught

Arizor said:


> With a heavier load, such as running CP2077 or Metro Exodus maxxed out 4K/DLSS/RT etc., it crashes very, very quickly at that clock. If I lower the clocks (CP2077 is pretty stable at 2040 for example), it keeps a quite consistent voltage and clock.


I keep seeing on here that Metro Exodus with DLSS is a heavy load but when I play it the gpu usage rarely goes over 85%, even at 4k and rt ultra. If I disable dlss then yeah heavy load for sure.

But idk man. Is your PSU delivering? I just found out my beefy HX1200 can't handle GTA 5 but my RM750 does it fine. I don't want to say "bad silicon" just yet.


----------



## yzonker

gfunkernaught said:


> I keep seeing on here that Metro Exodus with DLSS is a heavy load but when I play it the gpu usage rarely goes over 85%, even at 4k and rt ultra. If I disable dlss then yeah heavy load for sure.
> 
> But idk man. Is your PSU delivering? I just found out my beefy HX1200 can't handle GTA 5 but my RM750 does it fine. I don't want to say "bad silicon" just yet.


When I've mentioned ME EE, I was referring to DLSS off. As soon as you turn it on the card is really rendering in a lower resolution which greatly decreases the load on the gpu.


----------



## Arizor

gfunkernaught said:


> I keep seeing on here that Metro Exodus with DLSS is a heavy load but when I play it the gpu usage rarely goes over 85%, even at 4k and rt ultra. If I disable dlss then yeah heavy load for sure.
> 
> But idk man. Is your PSU delivering? I just found out my beefy HX1200 can't handle GTA 5 but my RM750 does it fine. I don't want to say "bad silicon" just yet.


I know mate, tricky! PSU seems to be delivering, at least I find no correlation between watts (as measured at the wall) and the crashing.

Let’s hope when I mount this on the Optimus I get an improvement. But prepared to just accept I lost the bin lottery too.


----------



## gfunkernaught

Arizor said:


> I know mate, tricky! PSU seems to be delivering, at least I find no correlation between watts (as measured at the wall) and the crashing.
> 
> Let’s hope when I mount this on the Optimus I get an improvement. But prepared to just accept I lost the bin lottery too.


I didn't either. I used a clamp, about 660w total system consumption at the wall while playing GTA 5 @8k.
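Worth noting when comparing clamp readings against PSU ratings: wall watts include the PSU's conversion loss, so the DC load the PSU actually delivers is lower than the wall figure. A rough sketch (the 90% efficiency is an assumed ballpark for a Gold-class unit at this load, not a measured number):

```python
def dc_load_from_wall(wall_watts, efficiency=0.90):
    """Estimate the DC power the PSU actually delivers from wall draw."""
    return wall_watts * efficiency

# ~660W measured at the wall with the clamp
print(f"~{dc_load_from_wall(660):.0f} W delivered to the components")
```

So ~660W at the wall works out to roughly 590W of actual DC load, which is one reason steady wall draw alone may not explain a PSU shutdown; transient spikes don't show up on a slow clamp reading.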


----------



## kryptonfly

tps3443 said:


> Now way to see this with soldered shunts. Just an external power meter to read power from the 12v lines?
> 
> The 3090 seems to need about 560 maximum for pretty much full potential. But your doing good either way.
> 
> You can’t flash the bios on that model?


Nope, I tried all the bioses in this thread but nothing works; the probes were all crazy, even the temps. I shunt modded with 15mΩ and it works well. I will try 10mΩ; waiting on some resistors from AliExpress.


Falkentyne said:


> Just apply a multiplier to the shunt values in MSI Afterburner. It's in the options for the monitoring.
> But it makes more sense to do it in HWinfo64 directly and use Rivatuner RTSS as a HWinfo plugin (In the hwinfo options). You can set all the input rails with a multiplier. Do NOT change the output rails. The output rails are values read directly by the VRM and are reasonably accurate (they do not respond to shunt mods). You can even get the MSVDD power from this (MSVDD power = Total Board Power - GPU Rail Power (gpu rail power=NVVDD Output Power (sum) in hwinfo64)


Perfect, I did it in HWiNFO64 with a x1.33 ratio and put it into Afterburner after searching a little. Works great, exactly what I was looking for:

 



GRABibus said:


> Hi,
> did some of you measure the real power draw by MSI Suprim X BAR Bios ? (The one in my sig).
> 
> I ask you that because with my strix on air I produced such score and temps :
> 
> I scored 15 151 in Port Royal
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> www.3dmark.com
> 
> 
> Bios MSI Suprim X BAR
> Drivers 471.68 on high perf
> ReBar enabled in profile "3DMArk Port Royal DLSS" in NVIDIA Inspector.
> +160MHz on core
> +1075MHz on memory
> 100% voltage slider
> 107% PL slider
> 25°C room temperature
> 
> Tweaking 5900X :
> 1 CCD
> 4.95GHz static OC
> SMT disabled
> Core count = 4
> 
> 
> I am surprised about this nice score (Despite benefits of new drivers and ReBar), but I am mainly surpised by my temps which according to my tests with other Bios as EVGA 500W BAR or EVGA KP 520W BAR, it shows that clearly, power draw by MSI bios is much higher than 450W....
> 
> As power reading is messed in GPU-Z when flashing those bioses on my strix on air, I asked you if you have an idea ?...
> 
> Thank you.


Salut 
We have almost the same score :

I scored 15 150 in Port Royal

Curve: 2145 MHz at 975mV, but power limited
+1128 MHz on memory
105% PL slider; from 390W (Gigabyte Gaming OC bios) to ~520W via shunt mod, but only theoretical because around 450W it hits the power limit. I soldered 3x R005 with wires on each of the 6 shunts because I only have R005s; maybe it's even more than 15mΩ. I ordered true R010 and R015 to make it cleaner next time.
Drivers 471.68 on normal
No ReBar ("normal" bios), and no support on X99 (if someone knows a hack, maybe?)
CPU stock at 3.5 GHz and RAM stock at 2666CL14, really loose. I don't see any improvement with a 4.6 GHz OC.

I don't know how you perform so well with such high temps ! Your PL should be really high to mitigate throttling, I can't imagine once watercooled !

Superposition at 4.6 ghz, it helps for fps min I think :



2145 MHz at 918mV, power limited beyond that. I will put 10mΩ on next time. Waiting on Gelid Extreme pads because the VRAM is at 54°C idle to 74°C at full load. I tried an active backplate but it hits the RAM slots.
And a weird thing: I now have +15°C on the hot spot (12°C before), while the GPU is cooler:



True Power = power x 1.33
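On the "maybe it's even more than 15mΩ" concern: three R005s in series are 15mΩ nominal, but the connecting wires add series resistance, which raises the piggyback value and shrinks the real multiplier below the nominal 1.33. A quick sketch (the 5mΩ stock shunt and the 2mΩ of wire are assumptions for illustration, not measured values):

```python
def parallel(r1, r2):
    """Equivalent resistance (mOhm) of two resistances in parallel."""
    return r1 * r2 / (r1 + r2)

STOCK = 5.0          # mOhm -- assumed stock shunt value
piggyback = 3 * 5.0  # three R005 resistors in series = 15 mOhm nominal
wire = 2.0           # mOhm of wire/solder resistance, purely illustrative

# Extra series resistance raises the piggyback value, which raises the
# effective shunt and shrinks the true-power multiplier below 1.33.
for extra in (0.0, wire):
    eff = parallel(STOCK, piggyback + extra)
    mult = STOCK / eff
    print(f"piggyback {piggyback + extra:g} mOhm -> true power = reported x {mult:.2f}")
```

So if the wires push the piggyback toward 17mΩ, the real correction is closer to x1.29 than x1.33, which would also mean the effective power-limit lift is a bit smaller than the nominal numbers suggest.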


----------



## GRABibus

kryptonfly said:


> Nope, I tried all bios on this thread but nothing works, probes were all crazy even temps. I shunt modded with 15mΩ and it works well, I will try 10mΩ, waiting from aliexpress some resistances.
> Perfect  I did with HWinfo64 with x1.33 ratio and put it in Afterburner by searching a little, works great, exactly what I looked for :
> 
> 
> 
> Salut
> We have almost the same score :
> 
> I scored 15 150 in Port Royal
> 
> Curved : 2145 mhz at 975mV but PL
> +1128 mhz on memory
> 105% PL slider from 390W (bios Gigabyte Gaming OC) to ~520W shunt mod but only "theorical" because around 450W it's PL, I soldered 3x R005 with wires on each 6 shunts because I only have R005, maybe it's even more than 15mΩ. I ordered true R010 and R015 to make it clear the next time.
> Drivers 471.68 on normal
> No Rebar, bios "normal" and no support on X99 (if someone knows a hack maybe ?)
> CPU stock 3.5 ghz and ram stock 2666CL14 really loosed. I don't see any improvement with OC 4.6 ghz.
> 
> I don't know how you perform so well with such high temps ! Your PL should be really high to mitigate throttling, I can't imagine once watercooled !
> 
> Superposition at 4.6 ghz, it helps for fps min I think :
> 
> 
> 
> 2145 mhz at 918mV, PL beyond. I will put 10mΩ the next time. Waiting Gelid extreme pads because vram is at 54°C idle to 74°C full load. I tried an active backplate but it hits ram slots
> And weird thing, I have +15°C on hot spot (12°C before), and gpu is cooler :
> 
> 
> 
> True Power = power x 1.33


Salut !!

With ReBar forced for Port Royal via NVIDIA Inspector, it's a gain of 200 points.
I also suspect you may be bottlenecked by your CPU in this test... just a supposition...

My temps are high because I am on air (despite liquid metal and a full memory-chip repad with Thermalright Odyssey), and because the Suprim X bios pulls more than 500W, I am sure, despite its PL being declared as 450W.
I have roughly the same temps with the 450W Suprim X bios as with the KP 520W one...


----------



## yzonker

My bank account is hoping this is a false rumor. 









NVIDIA GeForce RTX 3090 SUPER rumored to feature 10752 CUDA cores, 400W+ of power - VideoCardz.com
videocardz.com


----------



## Arizor

Doesn’t make much sense as a product, but then again NVIDIA have made some horrid decisions in the past year.


----------



## yzonker

Arizor said:


> Doesn’t make much sense as a product, but then again NVIDIA have made some horrid decisions in the past year.


Yes Nvidia isn't interested in making money.


----------



## Arizor

yzonker said:


> Yes Nvidia isn't interested in making money.


Yeah I should say "horrid decisions for the consumer, but great decisions for a corporation intent on sucking every last dollar out of the market regardless of the product's value proposition".


----------



## GRABibus

Are there some Cold War players here ?

Since yesterday, my GPU usage dropped down to 65%-90%.
This is with Ray Tracing on ultra and DLSS on quality and rebar enabled in Bios (Not forced in game).

Until yesterday, I had from 85% to 95% usage.

I tried a lot of things:
Change GPU Bios
Reinstall former drivers with DDU (I am currently on 471.68)
Disabling ReBar
Etc....

No way.

If you play this game in multiplayer in 2560x1440, and if you enable Ray tracing at "ultra" and DLSS on "quality" and ReBAr enabled in bios, what do you get as GPU usage ?
All other in game Graphics settings at max (Except Memory usage at 80%).

Thank you in advance.

EDIT: it seems that disabling ReBar in the BIOS improves GPU usage a little bit... but not by much 
And you ? Thanks !


----------



## J7SC

yzonker said:


> My bank account is hoping this is a false rumor.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA GeForce RTX 3090 SUPER rumored to feature 10752 CUDA cores, 400W+ of power - VideoCardz.com
> videocardz.com


...Going back to the top end of the RTX 2000 series: while there was a 2080 Super, the much-talked-about 2080 Ti Super never actually materialized, so it's hard to say what will happen with the 3090 Super. It probably depends on what the new Intel Arc and perhaps AMD can bring... from a technical and board-partner perspective, it's easy enough for NVIDIA to release a Super version if the market competition warrants it.


----------



## gfunkernaught

GRABibus said:


> Are there some Cold War players here ?
> 
> Since yesterday, my GPU usage dropped down to 65%-90%.
> This is with Ray Tracing on ultra and DLSS on quality and rebar enabled in Bios (Not forced in game).
> 
> Until yesterday, I had from 85% to 95% usage.
> 
> I tried a lot of things:
> Change GPU Bios
> Reinstall former drivers with DDU (I am currently on 471.68)
> Disabling ReBar
> Etc....
> 
> No way.
> 
> If you play this game in multiplayer in 2560x1440, and if you enable Ray tracing at "ultra" and DLSS on "quality" and ReBAr enabled in bios, what do you get as GPU usage ?
> All other in game Graphics settings at max (Except Memory usage at 80%).
> 
> Thank you in advance.
> 
> EDIT : it seems that disabling ReBAr in Bios improves GPU usage a little bit....but not so much
> And you ? Thanks !


1440p with or without DLSS will not stress a 3090 much. Something changed somewhere...


----------



## gfunkernaught

J7SC said:


> ...going back to the top end of the RTX2K...while there was a 2080 Super, the much-talked about 2080 Ti Super never actually materialized, so hard to say what will happen with the 3090 Super - it probably depends on what the new Intel Arc and perhaps AMD can bring... from a technical and board partner perspective, easy enough for NVidia to release a Super version if the market competition etc warrant it.


Did they use up whatever Ampere chips they had left on the 3080 Tis?


----------



## GRABibus

gfunkernaught said:


> 1440p with or without DLSS will not stress a 3090 much. Something changed somewhere...


a mystery...


----------



## mirkendargen

J7SC said:


> ...going back to the top end of the RTX2K...while there was a 2080 Super, the much-talked about 2080 Ti Super never actually materialized, so hard to say what will happen with the 3090 Super - it probably depends on what the new Intel Arc and perhaps AMD can bring... from a technical and board partner perspective, easy enough for NVidia to release a Super version if the market competition etc warrant it.


I don't think there's any chance of competition in the lifetime of Ampere, but I could see them releasing it as an Ampere Titan just so they can charge $4k for it because they know enough people will pay it if it's the only top tier GPU they can find in stock.


----------



## J7SC

gfunkernaught said:


> Did they use up whatever ampere chips they had left for the 3080 ti's?


...all comes down to the yields of the wafers they get and how many 'fully-enabled' dies they can squeeze out. This 3090 Super leak, IMO, is more of a typical media play to counter a competitor's air time (Intel Arc, for example). 

Another question would be whether a 3090 Super would be 'LHR'... But a 3090 Super could be brought to market fairly quickly 'if needed', given existing PCB designs.


----------



## yzonker

There was some talk when the 3090 ti was rumored that there wasn't enough room left on the chip for very many more cores even if yields were good enough, but I don't know enough about them to know whether that holds water or not.


----------



## shadow85

I have an EVGA 3090 FTW3 ULTRA running at factory speeds, which hover around 1900-2030 MHz when gaming, with temps around 65-74°C. If I upgrade the cooling on it with a hybrid or a water block, and OC it a bit more, can I expect to gain real-world fps increases at 4K in AAA titles like Cyberpunk 2077 and Horizon Zero Dawn?

Or would it be a waste of money?


----------



## Falkentyne

shadow85 said:


> I have a EVGA 3090 FTW3 ULTRA running factory speeds which hover around 1900-2030 MHz when gaming, and temps around 65-74°C. If I upgrade the cooling on it with a hybrid or a water block, and OC it a bit more, can I expect to gain real world fps increases at 4K in AAA titles like Cyberpunk 2077 and Horizon Zero Dawn?
> 
> Or would it be a waste of money?


You can look at this and decide if the increase in FPS is worth it.

Shunt modded 3090 FE:

3090 FE, Metro: Exodus Enhanced Edition

1080p, Ultra settings

400W TDP: 400W used
157 FPS
1845 mhz requested core clock (+150/+600)
1833 mhz effective clock


Unlimited TDP (5 mOhm shunt mod stacked: 700W=100%, technically 800W would be 114%)

170 FPS
2085 mhz requested clock, 2074 mhz effective clock (56C), 594W used

168 FPS, 2060 mhz effective clock (62C), 600W used
169 FPS, 65C, 2070 requested clock, 2054 effective clock
168 FPS, 70C, 2055 requested, 2036 effective, 607W used (89% TDP out of 114% max), throttling 1 tier due to internal MSVDD power limits exceeded

TDP reached 89% but hit a throttle point (a 1-tier throttle: requested GPU voltage drops to 1.056v at 100% slider in MSI AB) because Normalized TDP hit 108%, caused by the MSVDD or NVVDD power limits being exceeded at 1.10v internal MSVDD voltage. Normalized TDP maxes out at 114% before heavy throttle; it responds to the TDP slider only at the 100% (default) and 114% (maximum) positions, with values below 100% being ignored as a limit. Preventing this would require raising MSVDD, which is impossible on a FE because the controller is read-only (Elmor has not found a way around this). While the exact internal power values are unknown, it seems to be hitting 220W on MSVDD before throttling.

So, 13 FPS increase for 200W higher power draw at 1080p. No idea what 1440p or 4k would be. Percentage wise, looks like 7-8% increase.
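Those results work out to roughly the following (a quick throwaway sketch using only the FPS and wattage figures quoted above):

```python
# Perf-per-watt comparison for the 3090 FE Metro Exodus runs above.
stock = {"fps": 157, "watts": 400}    # 400W TDP run
modded = {"fps": 170, "watts": 594}   # unlimited-TDP (shunt mod) run

fps_gain = (modded["fps"] - stock["fps"]) / stock["fps"] * 100
power_gain = (modded["watts"] - stock["watts"]) / stock["watts"] * 100

print(f"FPS gain:   {fps_gain:.1f}%")                          # 8.3%
print(f"Power gain: {power_gain:.1f}%")                        # 48.5%
print(f"Perf/W stock:  {stock['fps'] / stock['watts']:.3f}")   # ~0.393 FPS/W
print(f"Perf/W modded: {modded['fps'] / modded['watts']:.3f}") # ~0.286 FPS/W
```

So the mod trades about a quarter of the card's efficiency for that last 8%.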


----------



## KedarWolf

Other than the crappy Asus BIOSes, what BIOS can I use on my Strix OC that'll show me the third 8-pin and the full power draw?

The EVGA BIOSes don't, and I think the MSI ones don't either.


----------



## shadow85

Falkentyne said:


> You can look at this and decide if the increase in FPS is worth it.
> 
> Shunt modded 3090 FE:
> 
> 3090 FE, Metro: Exodus Enhanced Edition
> 
> 1080p, Ultra settings
> 
> 400W TDP: 400W used
> 157 FPS
> 1845 mhz requested core clock (+150/+600)
> 1833 mhz effective clock
> 
> 
> Unlimited TDP (5 mOhm shunt mod stacked: 700W=100%, technically 800W would be 114%)
> 
> 170 FPS
> 2085 mhz requested clock, 2074 mhz effective clock (56C), 594W used
> 
> 168 FPS, 2060 mhz effective clock (62C), 600W used
> 169 FPS, 65C, 2070 requested clock, 2054 requested clock
> 168 FPS, 70C, 2055 requested, 2036 effective, 607W used (89% TDP out of 114% max), throttling 1 tier due to internal MSVDD power limits exceeded
> 
> TDP reached 89% but hit a throttle point (1 tier throttle, requested GPU voltage drops to 1.056v at 100% slider in MSI AB) due to 108% Normalized TDP, caused by MSVDD or NVVDD power limits being exceeded for 1.10v internal MSVDD voltage, which maxes out at 114% (it responds to the TDP slider at the 100% (default) and 114% (maximum) positions with <100% being ignored as a limit) before heavy throttle (would need to increase MSVDD to prevent this, which is impossible on a FE due to the controller being read only, Elmor has not found a way around this). While the exact internal power values are unknown, it seems to be hitting 220W on MSVDD before throttling.
> 
> So, 13 FPS increase for 200W higher power draw at 1080p. No idea what 1440p or 4k would be. Percentage wise, looks like 7-8% increase.


Lol, I actually don't have much of an idea of what any of that means.

Kind of a noob to GPU overclocking.


----------



## KedarWolf

Falkentyne said:


> You can look at this and decide if the increase in FPS is worth it.
> 
> Shunt modded 3090 FE:
> 
> 3090 FE, Metro: Exodus Enhanced Edition
> 
> 1080p, Ultra settings
> 
> 400W TDP: 400W used
> 157 FPS
> 1845 mhz requested core clock (+150/+600)
> 1833 mhz effective clock
> 
> 
> Unlimited TDP (5 mOhm shunt mod stacked: 700W=100%, technically 800W would be 114%)
> 
> 170 FPS
> 2085 mhz requested clock, 2074 mhz effective clock (56C), 594W used
> 
> 168 FPS, 2060 mhz effective clock (62C), 600W used
> 169 FPS, 65C, 2070 requested clock, 2054 requested clock
> 168 FPS, 70C, 2055 requested, 2036 effective, 607W used (89% TDP out of 114% max), throttling 1 tier due to internal MSVDD power limits exceeded
> 
> TDP reached 89% but hit a throttle point (1 tier throttle, requested GPU voltage drops to 1.056v at 100% slider in MSI AB) due to 108% Normalized TDP, caused by MSVDD or NVVDD power limits being exceeded for 1.10v internal MSVDD voltage, which maxes out at 114% (it responds to the TDP slider at the 100% (default) and 114% (maximum) positions with <100% being ignored as a limit) before heavy throttle (would need to increase MSVDD to prevent this, which is impossible on a FE due to the controller being read only, Elmor has not found a way around this). While the exact internal power values are unknown, it seems to be hitting 220W on MSVDD before throttling.
> 
> So, 13 FPS increase for 200W higher power draw at 1080p. No idea what 1440p or 4k would be. Percentage wise, looks like 7-8% increase.


Did you factor in your shunt mod?

In Metro Exodus benchmark on my Strix OC I pull 400W on Ultra with DLSS on Quality, 460W with DLSS off.

Oh, 3840x1080.

This with my curve at max .962v 1950 core clock, +558 memory.


----------



## Falkentyne

KedarWolf said:


> Did you factor in your shunt mod?
> 
> In Metro Exodus benchmark on my Strix OC I pull 400W on Ultra with DLSS on Quality, 460W with DLSS off.
> 
> Oh, 3840x1080.
> 
> This with my curve at max .962v 1950 core clock, +558 memory.


Yes I did. 400W is at 58% power limit. 57% would be "perfect" 400W (easy to calculate if you assume 50% is 350W) if the mod gave an exact 50% reduction in reported power, but my soldering on 8-pin #1 wasn't perfect (at 400W TDP it reports about 6W higher than 8-pin #2, and at 530W about 9-10W higher; raw values, not multiplied by the 1.98x I used for all the shunts), so I throw in an extra 1% on the power limit slider to get 400W.
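For anyone following the arithmetic, here is a small sketch of how the stacked shunt rescales the slider. The 1.98x actual-to-reported ratio is the figure measured above; stacking a 5 mOhm shunt on the stock 5 mOhm shunt would ideally give exactly 2.0.

```python
# Slider % needed so the card's *reported* power corresponds to a desired
# *actual* board power after a stacked-shunt mod on a 3090 FE.
STOCK_TDP = 350.0   # FE default power limit in watts (100% slider)
SCALE = 1.98        # actual/reported power ratio measured for this card

def slider_for_target(target_watts, scale=SCALE):
    reported = target_watts / scale     # what the card thinks it is drawing
    return reported / STOCK_TDP * 100   # as a % of the stock 350W limit

print(f"{slider_for_target(400):.1f}%")  # 57.7% -> rounded up to 58% above
print(f"{slider_for_target(700):.1f}%")  # ~101%; exactly 100% with scale=2.0
```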


----------



## gfunkernaught

I like the "400W+" bit from the rumor. OK, so a few more cores and a higher power limit, and it comes with a compressor and a can of refrigerant. Plus I think they did away with the "Titan" name because the 3090 now sits on the line between gaming and workstation. Before, the x80 Ti was the gaming king and the Titan was for rendering and some gaming. So anything above a 3090, IMO, would be a Quadro, then those uber Tesla datacenter chips or whatever they're called. T1000s? 😂


----------



## J7SC

gfunkernaught said:


> I like the "+400W+" bit from the rumor. Ok so a few more cores and higher power limit and it comes with a compressor and a can of refrigerant. Plus I think that they did away with "Titan" to keep their "Titan" cards on the line between gaming and workstation. It seemed like before the x80 Ti was the king and the Titan was for rendering and some gaming. So anything above a 3090 IMO would be Quadro then those Uber tesla DC chips or whatever they're called. T1000s? 😂


...try eight of these NVIDIA A100s w/ up to 500W each


----------



## KedarWolf

Falkentyne said:


> Yes I did. 400W is at 58% power limit. Of course 57% would be "perfect" 400W (easy to calculate if you assume 50% is 350W), if you do direct 50% increase (reduction in reported power) but my soldering on 8 pin #1 wasn't perfect (at 400W TDP it reports about 6W higher than 8 pin #2, and at 530W, about 9-10W higher (raw values, not multiplied by 1.98*, which I used for all the shunts) than 8 pin #2) so I throw in an extra 1% on the power limit slider to get 400%.


What BIOS are you using? Can you link it?


----------



## GRABibus

KedarWolf said:


> Other than the crappy Asus BIOS's, what BIOS can I use on my Strix OC that'll show me the third 8-pin and the full power draw?
> 
> EVGA BIOS's and I think MSI don't.


This one:

KFA2 RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

=> OK for gaming
=> Crappy for bench

ReBAR tool here:

Resizable BAR BIOS update - Galaxy Microsystems Ltd. (www.galax.com)


----------



## Nizzen

KedarWolf said:


> Other than the crappy Asus BIOS's, what BIOS can I use on my Strix OC that'll show me the third 8-pin and the full power draw?
> 
> EVGA BIOS's and I think MSI don't.


The Galax no-ReBAR BIOS.


----------



## kx11

Finally got Optimus GPU block for STRIX

a mini review with benchmarks

It took me more than 2 months of waiting, but it did arrive after all. They claim there's a manual in the box, but there wasn't in mine, so I used the EKWB tutorial. The backplate is straightforward: just screw it on, since it comes with the thermal pads pre-installed.


now benchmarks

using MSI AB and forcing constant voltage
Core +120
MEM +1200
power 123%
Voltage slider +19

TSE: I scored 10 393 in Time Spy Extreme - AMD Ryzen 9 3900XT, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





FSU: I scored 14 075 in Fire Strike Ultra - AMD Ryzen 9 3900XT, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





HWiNFO: I used an Aquacomputer controller to handle the 6 fans at a fairly high RPM (judging by the noise, I'm not sure of the exact figure, but I think it's around 2100 RPM).


Impressive if you consider that my GPU used to hit 88°C with the same OC values on air.


----------



## Falkentyne

KedarWolf said:


> What BIOS are you using? Can you link it?


Huh??
I'm using this:

NVIDIA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)




You did read my post completely, didn't you....?


----------



## GRABibus

kx11 said:


> Finally got Optimus GPU block for STRIX
> 
> a mini review with benchmarks
> 
> it took me more than 2 months waiting for it but it did arrive after all, they claim there's a manual inbox but there isn't in mine so i used the EKWB tutorial, the bakcplate is straight forward, just screw it on the BP since it's got the thermal pads pre-installed
> 
> 
> now benchmarks
> 
> using MSI AB and forcing constant voltage
> Core +120
> MEM +1200
> power 123%
> Voltage slider +19
> 
> TSE: I scored 10 393 in Time Spy Extreme - AMD Ryzen 9 3900XT, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> FSU: I scored 14 075 in Fire Strike Ultra - AMD Ryzen 9 3900XT, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> HWinfo: View attachment 2522418
> 
> 
> 
> used Aquacomputer controller to handle the 6 fans with a kinda high rpm (judging by the noise it makes not sure about the rpm but i think it's around 2100rpm)
> 
> 
> impressive if you consider that my gpu used to hit 88c with the same OC values on air


Also impressive because you're using the crappiest BIOS, the stock Asus Strix BIOS 😊


----------



## GTANY

Falkentyne said:


> You can look at this and decide if the increase in FPS is worth it.
> 
> Shunt modded 3090 FE:
> 
> 3090 FE, Metro: Exodus Enhanced Edition
> 
> 1080p, Ultra settings
> 
> 400W TDP: 400W used
> 157 FPS
> 1845 mhz requested core clock (+150/+600)
> 1833 mhz effective clock
> 
> 
> Unlimited TDP (5 mOhm shunt mod stacked: 700W=100%, technically 800W would be 114%)
> 
> 170 FPS
> 2085 mhz requested clock, 2074 mhz effective clock (56C), 594W used
> 
> 168 FPS, 2060 mhz effective clock (62C), 600W used
> 169 FPS, 65C, 2070 requested clock, 2054 requested clock
> 168 FPS, 70C, 2055 requested, 2036 effective, 607W used (89% TDP out of 114% max), throttling 1 tier due to internal MSVDD power limits exceeded
> 
> TDP reached 89% but hit a throttle point (1 tier throttle, requested GPU voltage drops to 1.056v at 100% slider in MSI AB) due to 108% Normalized TDP, caused by MSVDD or NVVDD power limits being exceeded for 1.10v internal MSVDD voltage, which maxes out at 114% (it responds to the TDP slider at the 100% (default) and 114% (maximum) positions with <100% being ignored as a limit) before heavy throttle (would need to increase MSVDD to prevent this, which is impossible on a FE due to the controller being read only, Elmor has not found a way around this). While the exact internal power values are unknown, it seems to be hitting 220W on MSVDD before throttling.
> 
> So, 13 FPS increase for 200W higher power draw at 1080p. No idea what 1440p or 4k would be. Percentage wise, looks like 7-8% increase.


Thank you for your feedback. In addition to the Kingpin and the Strix, which are not MSVDD and NVVDD power limited, are there other 3090 models like that? For example, would flashing a card with three power plugs to the 500W EVGA BIOS avoid these MSVDD and NVVDD power limits?


----------



## adversary

I may soon try to get a 3090 and watercool it immediately with better thermal pads.

For watercooling I always use EKWB products, as it's easiest for me to get them delivered, but I prefer using better thermal pads.

I know waterblocks are not the same across AIB models; some have more expensive waterblocks on offer, some don't. Thermal pad thickness may also vary between AIB models when using an EKWB waterblock, I guess.

I'm not asking everyone for exact temperature numbers after watercooling the card, but I would like to know roughly: do the more expensive waterblocks, for example, perform better?

Most importantly, I would like to prepare thermal pads before I get the card (no problem if I buy some that don't get used; they can always go on another card later). So, can you share which thicknesses are most commonly used with EKWB waterblocks on a 3090 (it won't all be the same, of course)? 1mm, 1.5mm, 2mm, 3mm?

Please share some of your experience; it would be very useful.

For thermal pads I have used Gelid Ultimate 15 W/mK and Thermalright Odyssey 12.8 W/mK so far. The Odyssey pads, I noticed, are pretty "hard" (but cool very, very well).

Thanks.


----------



## kx11

GRABibus said:


> Also impressive because you use the most crappy bios, Asus strix stock bios 😊


I don't wanna touch the BIOS; who's gonna replace my broken 3090 if something happens? I need it for 4K gaming + silence.


----------



## KedarWolf

Falkentyne said:


> Huh??
> I'm using this:
> 
> NVIDIA RTX 3090 VBIOS - 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> You did read my post completely, didn't you....?


I just reread your post. Nowhere does it say which BIOS you are using. :/


----------



## KedarWolf

If I don't max out the power limit, my Strix OC will underclock the core; the lower I set the power limit, the more it underclocks.

Edit: I mean on normal BIOSes, not the XOC 1000W etc.

I'm not shunt modded, though.


----------



## Falkentyne

KedarWolf said:


> I just reread your post. nowhere does it say which BIOS you are using. :/


I said I'm using a Founder's Edition, so it doesn't matter which Bios I'm using, because FE's can only take their own Bios anyway with the stock NVflashes.

(I posted a patched NVflash with ID checks bypassed (5.670.0) for ampere but not a single person is brave enough to try flashing their FE cards directly with an AIB Bios with it. One person flashed his Strix with a FE Bios and got a motherboard post code "Load VGA Bios" error and had to reflash using the bios switch then switching it back in windows and flashed his original back on. But no one tried to see what would happen if you flashed a 1kw / Galax / eVGA / Strix bios on a FE).



KedarWolf said:


> If I don't max out the power limit levels, my Strix OC will underclock the core, more the lower I put the power limit.
> 
> Edit: I mean on normal BIOS's not the XOC 1000w etc.
> 
> But I'm not shunt modded though.


You're talking about two different things here. Lowering the power limit of course will lower the clocks because it tries to stay within the power limit.
I'm talking about lowering the power limit on a shunt modded card when you are already WAY above the original power limit with the mod. Here it's completely different because at stock MSVDD/NVVDD voltages (Classified tool or Elmor EVC2X allows you to raise these voltages which also raises their power limits...this may be some sort of limit related to amps draw with respect to their internal voltages), there seems to be a "default" limit and a "maximum" limit.

What I meant is, for the MSVDD and NVVDD limits, setting the TDP slider BELOW 100% won't affect them. They seem to respond only between their default and maximum values (whatever those are in reality). I actually just verified this by setting TDP to 100% and then to 85%: there was absolutely no difference in performance or throttling point at 85%. It still throttled. (Clarification: I'm talking about the Metro Exodus main menu here. Everything from 85% to 100% throttled at about 540W, with requested VID (GPU voltage) dropping to about 1.00v, and absolutely no difference in behavior between 85% and 100%.) So the TDP slider below 100% only affects "main" TDP itself, which is already way above its corrected values due to the mod, so TDP % isn't causing the throttling; Normalized is. Main TDP was 77.5% and TDP Normalized was 97.4%. So it was throttling from Normalized TDP (the MSVDD / NVVDD rails) relative to their 100% baseline values, despite the TDP slider being set far lower. That means these limits don't go below their 100% values, but they CAN go higher if the TDP slider is set past 100%.

At 114% on the slider however, it reached 600W before it got to 107% on TDP Normalized and then throttled "1 bin" to avoid reaching 114%.

The only way to increase these limits without a TDP slider going past 100% is to actually increase MSVDD and/or NVVDD voltage, or use a bios that has these limits ignored completely (Kingpin 1kw XOC, and I can't use that on a FE and i'm not brave enough to use the patched NVflash to even attempt trying).

Does this make any sense at all?

Yes I know, it doesn't help that HWiNFO64 doesn't show these under the "Input" rails; only Elmor's tool or Kingpin cards seem to be able to report them. This is unproven and comes from just one Chinese user on Elmor's Discord, but it seems these values can be found on the output rails.

The "default" limit seems to kick in around 213W (95% TDP Normalized out of 100%) and the maximum limit seems to kick in at 239W (107.1% TDP Normalized out of 114%). Note it usually tries to avoid reaching the "limit", since it throttles itself to avoid it.

From someone on Elmor's Discord, this seems to be MSVDD power triggering this. He said that MSVDD power = Total Board Power - GPU Rail Power. GPU Rail Power in Hwinfo64 is GPU Core NVVDD Output Power (sum). So that can be calculated. He also has amps monitoring on his Elmor EVC2X tool so he can get the amps draw on the MSVDD rail.

I'm going to guess that NVVDD also has its own power limit (363W Output Power (sum?) at 114% TDP (maximum), and 320W Output Power (sum) at 100% TDP (default)?).

I don't know if that means it's pulling (max) 217A on MSVDD and 330A on NVVDD or not...
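Based on the relation quoted above (MSVDD power = Total Board Power minus GPU Core NVVDD Output Power), here is a rough sketch using the figures from this post, treating the ~600W board power and ~363W NVVDD output as representative samples rather than exact limits:

```python
# Estimate MSVDD rail power from values HWiNFO64 does expose:
#   MSVDD power ~ Total Board Power - GPU Core NVVDD Output Power (sum)
def msvdd_power(total_board_w, nvvdd_output_w):
    return total_board_w - nvvdd_output_w

est = msvdd_power(600, 363)  # ~600W board, ~363W NVVDD at the 114% ceiling
print(f"Estimated MSVDD power: {est} W")  # 237 W, near the ~239W ceiling
```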


----------



## yzonker

KedarWolf said:


> I just reread your post. nowhere does it say which BIOS you are using. :/


I think the bit you are missing is he said he has a FE card which cannot be flashed to any other bios. Only 30 series card that is stuck on the stock bios to my knowledge.


----------



## KedarWolf

yzonker said:


> I think the bit you are missing is he said he has a FE card which cannot be flashed to any other bios. Only 30 series card that is stuck on the stock bios to my knowledge.


Yes, I missed that totally.


----------



## gfunkernaught

So of those of you using the 1kw bios (any brand, rbar or not), what power supplies are you using? Corsair is honoring the warranty for my HX1200 but I am wondering if there are better options out there to handle an unlimited 3090. I'm also looking into power conditioners (instead of a UPS).


----------



## Falkentyne

gfunkernaught said:


> So of those of you using the 1kw bios (any brand, rbar or not), what power supplies are you using? Corsair is honoring the warranty for my HX1200 but I am wondering if there are better options out there to handle an unlimited 3090. I'm also looking into power conditioners (instead of a UPS).


If you're not using LN2 and not on HEDT or Threadripper, a Seasonic TX-1000 or PX-1000 will work fine. Their PX-1300 and GX-1300 PSUs haven't been announced yet.
You're not going to be pulling 1kW on a normal CPU with any type of air or water cooling on the GPU anyway.

If you really need more than 1100W from the wall and you're using a 400W+ CPU, I guess the Corsair AX1600i.


----------



## mirkendargen

Falkentyne said:


> If you're not using LN2 and not on HEDT or Threadripper, OneSeasonic Seasonic TX-1000 or PX-1000 will work fine. Their PX-1300 and GX-1300 PSU's haven't been announced yet.
> You're not going to be pulling 1kw on normal CPU's and any type of air or water cooling on the GPU anyway.
> 
> If you really need more than 1100W from the wall, and you're using a 400W+ CPU, I guess Corsair AX1600i


If you do have a Threadripper and overclock it, beware though: I managed to pull 600W from just my Threadripper one time, lol. Unzipping files while playing a game put me at 1kW from the wall (and forced me to upgrade my UPS...), so I dialed the CPU down a bit after that, because I only have an (extremely good) 1kW PSU, heh.


----------



## yzonker

Falkentyne said:


> If you're not using LN2 and not on HEDT or Threadripper, OneSeasonic Seasonic TX-1000 or PX-1000 will work fine. Their PX-1300 and GX-1300 PSU's haven't been announced yet.
> You're not going to be pulling 1kw on normal CPU's and any type of air or water cooling on the GPU anyway.
> 
> If you really need more than 1100W from the wall, and you're using a 400W+ CPU, I guess Corsair AX1600i


I have one of the newer Seasonic models (GX-1000) that supposedly have the OCP problem fixed, but that Endwalker benchmark hits OCP on that PSU if I go above 1063mv or so. Even with the PL on the KP XOC set fairly low (80%, 2x8pin).

Edit: 5800x cpu


----------



## Falkentyne

yzonker said:


> I have one of the newer Seasonic models (GX-1000) that supposedly have the OCP problem fixed, but that Endwalker benchmark hits OCP on that PSU if I go above 1063mv or so. Even with the PL on the KP XOC set fairly low (80%, 2x8pin).
> 
> Edit: 5800x cpu


Are you talking about the 520W bios or the 1KW Bios?

Check total power from the wall.
If it's less than 900W, RMA the PSU immediately.

There was a user on HardForum (Formerly HardOCP) who had his Seasonic TX-1000 (the titanium version) trip on a 3090. He RMA'd it to Seasonic and got a replacement TX-1000 which has been flawless. No problems. The original PSU was just defective.

I have a Seasonic PX-1000 and I have no problem running the Endwalker benchmark. Card reached up to 1.1v VID and topped out at about 580W (shunt mod) and didn't trip anything. I ran it on max settings at 1080p.


----------



## des2k...

adversary said:


> I may soon try to get 3090, and watercool it immidiately with better thermal pads
> 
> for watercooling I'm always using EKWB products, and easiest for me to get it delivered from them
> but I prefer using better thermal pads
> 
> I know waterblocks are not same for AIB models, some have more expensive waterblocks also in offer, some don't. Also thermal pads thickness may vary between AIB models when using EKWB waterblock, I guess.
> 
> I'm not asking for everyone temperature exact numbers after watercooling card, but I would like to roughly know, does for example more expensive waterblocks performed better?
> 
> and most important for me, as I would like to prepare thermal pads before I get card (no problem if I buy some which are not going to be used, it can always be used later on some other card).
> so, can you share with me which thickness (not all is going to be same of course) is most commonly used when using EKWB waterblocks on 3090. 1mm, 1.5mm, 2mm, 3mm ??
> 
> please share some of your expirience with me and it will be very useful
> 
> for thermal pads I used so far Gelid Ultimate 15 W/mk and Thermalright Odyssey 12.8 W/mk. for Odyssey, I noticed they are pretty "hard" (but cool very very good)
> 
> tnx


I have the EK Vector Trinity for my Zotac. Out of the box was pretty bad for temps, after some mods it's ok for temps.


----------



## yzonker

Falkentyne said:


> Are you talking about the 520W bios or the 1KW Bios?
> 
> Check total power from the wall.
> If it's less than 900W, RMA the PSU immediately.
> 
> There was a user on HardForum (Formerly HardOCP) who had his Seasonic TX-1000 (the titanium version) trip on a 3090. He RMA'd it to Seasonic and got a replacement TX-1000 which has been flawless. No problems. The original PSU was just defective.
> 
> I have a Seasonic PX-1000 and I have no problem running the Endwalker benchmark. Card reached up to 1.1v VID and topped out at about 580W (shunt mod) and didn't trip anything. I ran it on max settings at 1080p.


Yes, the 1kW reBAR BIOS. I saw that post. But I wasn't sure how to explain to Seasonic support that I'm running a 350W Zotac 3090 Trinity at 500W+. Lol

Oddly I can even run TSE at 1093mv with no issue. Endwalker must cause power spikes or something that triggers it that nothing else seems to do.


----------



## Falkentyne

yzonker said:


> Yes the 1kw reBar bios. I saw that post. But I wasn't sure how to explain to Season support that I'm running a 350w Zotac 3090 Trinity at 500w+. Lol
> 
> Oddly I can even run TSE at 1093mv with no issue. Endwalker must cause power spikes or something that triggers it that nothing else seems to do.


I find that bizarre because I don't even get high TDP Normalized in Endwalker. Maybe you're running it at super high resolutions, but i'm at 1080p. I get higher abuse of MSVDD in Metro Exodus or Path of Exile GI Shadows / Illumination=Ultra and still no trip. But it hits the rail limits and throttles, but not even close in Endwalker. 

Can you try the Kingpin 520W bios (you have dual bios so that's easy to do) and check endwalker there? Do you get a power limit throttle in the KP 520W Bios?

You would need Elmor's EVC2X tool to determine if you're getting an amps spike on the MSVDD and NVVDD rails. But while this may be a difficult decision, I would look into RMA'ing that PSU or swapping it for a TX-1000 or PX-1000.

But I do find that odd because TimeSpy extreme hammers MSVDD and NVVDD. So does Superposition 4k "Custom extreme shaders". Does your PSU trip when you run Superposition 4k with custom extreme shaders? I've seen TDP Normalized report up to 120% due to MSVDD and NVVDD blasting past their internal power limits on non XOC Bioses in that test (tested on shunt modded Founder's Editions with TDP slider at 114%).


----------



## yzonker

Falkentyne said:


> I find that bizarre because I don't even get high TDP Normalized in Endwalker. Maybe you're running it at super high resolutions, but i'm at 1080p. I get higher abuse of MSVDD in Metro Exodus or Path of Exile GI Shadows / Illumination=Ultra and still no trip. But it hits the rail limits and throttles, but not even close in Endwalker.
> 
> Can you try the Kingpin 520W bios (you have dual bios so that's easy to do) and check endwalker there? Do you get a power limit throttle in the KP 520W Bios?
> 
> You would need Elmor's EVC2X tool to determine if you're getting an amps spike on the MSVDD and NVVDD rails. But while this may be a difficult decision, I would look into RMA'ing that PSU or swapping it for a TX-1000 or PX-1000.
> 
> But I do find that odd because TimeSpy extreme hammers MSVDD and NVVDD. So does Superposition 4k "Custom extreme shaders". Does your PSU trip when you run Superposition 4k with custom extreme shaders? I've seen TDP Normalized report up to 120% due to MSVDD and NVVDD blasting past their internal power limits on non XOC Bioses in that test (tested on shunt modded Founder's Editions with TDP slider at 114%).


You're mixing me up with someone else. I have a base Zotac 3090 Trinity, same as @des2k... So it's obviously either the stock 390W BIOS or the KP 1kW one.

I was running 1440p on Endwalker with max settings just like @Lobstar posted a while back. Just under 28k with the gpu somewhat limited. 

I've been considering just buying another PSU but haven't been able to decide which one. I was actually hoping you or someone else might comment on it. I like the GX-1000 due to its compact size. I have a relatively small case (Corsair 450d). 

I'll try Superposition 4K configured as you suggest. I've run it in 4k, but probably whatever the default is otherwise. 

To be honest, if everything else works fine, I'm not sure I'm very motivated to do anything. Maybe buy a spare and then try to RMA if anything. 

Good info anyway as always. Appreciate the feedback.


----------



## Falkentyne

yzonker said:


> You're mixing me up with someone else. I have a base Zotac 3090 trinity, same as @des2k... It's obviously 390w or KP 1kw.
> 
> I was running 1440p on Endwalker with max settings just like @Lobstar posted a while back. Just under 28k with the gpu somewhat limited.
> 
> I've been considering just buying another PSU but haven't been able to decide which one. I was actually hoping you or someone else might comment on it. I like the GX-1000 due to its compact size. I have a relatively small case (Corsair 450d).
> 
> I'll try Superposition 4K configured as you suggest. I've run it in 4k, but probably whatever the default is otherwise.
> 
> To be honest, if everything else works fine, I'm not sure I'm very motivated to do anything. Maybe buy a spare and then try to RMA if anything.
> 
> Good info anyway as always. Appreciate the feedback.


No, I wasn't mixing you up with someone else. I can't use custom BIOSes, and I forgot that you can't use a non-1kW 3x8-pin BIOS on a 2x8-pin card without getting a really low power limit. Sorry, I guess I "assumed" you had a 3x8-pin card so any BIOS would work. I'm busy studying chess nonstop, so I'm not exactly paying full attention to people's hardware right now.


----------



## yzonker

Falkentyne said:


> No, I wasn't mixing you up with someone else. I can't use custom BIOSes, and I forgot that you can't use a non-1kW 3x8-pin BIOS on a 2x8-pin card without getting a really low power limit. Sorry, I guess I "assumed" you had a 3x8-pin card so any BIOS would work. I'm busy studying chess nonstop, so I'm not exactly paying full attention to people's hardware right now.


No worries. As I said, thanks for the info.


----------



## yzonker

@Falkentyne, didn't push the card 100%, but more than enough for Endwalker to hit OCP. Max power was about 570w.


----------



## gfunkernaught

@yzonker @Falkentyne 
I measured the power draw from the wall during GTA 5 8K runs: 660 W, and that's with the 9900K OC'd to 5 GHz all cores and the 1 kW BIOS power limit set to 65%. That was on my HX1200, which kept shutting off the PC. So the next question would be: are the Seasonic 1 kW PSUs better equipped for spikes than the HX1200? Or is the HX1200 fine and mine just went bad?
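As a rough sanity check on that wall reading, here's the AC-to-DC conversion (a sketch only; the 0.92 efficiency figure is an assumed ballpark for a Platinum-class unit like the HX1200 at roughly half load, not a measurement of this exact unit):

```python
# Rough conversion of AC draw at the wall to DC load on the PSU.
# The 0.92 efficiency is an assumed ballpark (80 PLUS Platinum class
# at roughly half load), not a measured value for this exact unit.
def dc_load_watts(wall_watts: float, efficiency: float = 0.92) -> float:
    """Approximate DC-side load given a wall-power reading."""
    return wall_watts * efficiency

# 660 W at the wall is only ~607 W of sustained DC load, far below the
# HX1200's 1200 W rating, which is why transient spikes rather than
# average draw are the usual suspect when OCP trips.
print(round(dc_load_watts(660), 1))  # -> 607.2
```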


----------



## Falkentyne

gfunkernaught said:


> @yzonker @Falkentyne
> I measured the power draw from the wall during GTA 5 8K runs: 660 W, and that's with the 9900K OC'd to 5 GHz all cores and the 1 kW BIOS power limit set to 65%. That was on my HX1200, which kept shutting off the PC. So the next question would be: are the Seasonic 1 kW PSUs better equipped for spikes than the HX1200? Or is the HX1200 fine and mine just went bad?


I am not a PSU guy. I can't answer these questions. If the PSU's OCP is overly sensitive (like most seasonic models before mid 2019, and OEMs that used Seasonic housings), or just defective, it will trip. I know otherwise nothing except "oh this brand is good and this brand is bad.". I know a few of the OneSeasonic (TX, PX, etc) series had defective OCP circuits in them and when they were RMA'd for a replacement, they worked. Sorry but this stuff exceeds my knowledge.


----------



## West.

Following up again on the 3090 HOF repad: I've replaced the front VRAM pads with 0.5mm pads and the back with 1mm. Now core temps are okay, though not great. However, VRAM temp skyrocketed to 105C :/ I'm getting a bit frustrated…


----------



## domdtxdissar

Running an MSI Suprim X 3090 @ unlimited BIOS --> around 500 W normal usage (highest I've ever seen with this card is ~640 W in Fire Strike Extreme)
Using an EVGA SuperNOVA 1000 G3 1000 W PSU together with a pretty pushed 5950X

Don't think I've had any problems yet caused by the PSU.. (even though it's from 2017)

My current high score for Endwalker 1440p maximum:


----------



## yzonker

West. said:


> Following up again on the 3090 HOF repad: I've replaced the front VRAM pads with 0.5mm pads and the back with 1mm. Now core temps are okay, though not great. However, VRAM temp skyrocketed to 105C :/ I'm getting a bit frustrated…


First step is to pull it back apart and try to determine which mem chips are not making good contact. Might be able to tell by the impressions on the pads (or lack of). Someone else in this thread was using some kind of impression film to check contact IIRC. That might be another option.

Might post pics after you pull it apart with the pads still in place.


----------



## kryptonfly

GRABibus said:


> Hi !!
> 
> With ReBar forced for Port Royal with NVIDIA Inspector, it's a gain of 200 points.
> I'm also pretty sure you may be bottlenecked by your CPU in this test... Just a supposition...
> 
> My temps are high because I'm on air (despite liquid metal and a full memory-chip repad with Thermalright Odyssey), and because the Suprim X BIOS pulls more than 500 W, I'm sure, despite its PL being declared as 450 W.
> I have roughly the same temps with the 450 W Suprim X BIOS as with the KP 520 W one....


Hi 🖖

I don't think I'm CPU-limited in this particular test. On the other hand, I don't have ReBar or Gen 4.0, drivers are stock, G-Sync is enabled, and my shunt mod targets around 520 W, though in reality I don't reach that; I perform almost as well as you thanks to low temps, even with a lower PL than yours. You mean with ReBar I should do 15350 in Port Royal at 2130 MHz locked? In some DX11 benches I am CPU-limited, like Endwalker:
1440p max : 24802 (no shunt)

But... I just noticed, 2 days ago, that my PL is back down to around 410 W with the shunt mod (wires and 3x R005 each time, 15 mΩ), showing 308 W in GPU-Z. I think there's something wrong with "Input PP Source" x1.33, so 113.5 W without the shunt. Is that total PCIe slot power? Is it safe to leave things like this while waiting a few weeks for some 10 mΩ shunts to arrive? Waiting on some Extreme pads too... The shunts still work, but badly, less effectively, with the PL kicking in around 410 W at 100% slider. Honestly it's almost a miracle they work at all, because I didn't scrape anything and the solder was "repellent". I know for sure there's one that worries me, though not the PCIe slot one; Power SRC maybe? I'm seriously thinking about MG 842AR paint + liquid electrical tape, but it's not cheap for what I need  Thanks !


----------



## Falkentyne

kryptonfly said:


> Hi 🖖
> 
> I don't think I'm CPU-limited in this particular test. On the other hand, I don't have ReBar or Gen 4.0, drivers are stock, G-Sync is enabled, and my shunt mod targets around 520 W, though in reality I don't reach that; I perform almost as well as you thanks to low temps, even with a lower PL than yours. You mean with ReBar I should do 15350 in Port Royal at 2130 MHz locked? In some DX11 benches I am CPU-limited, like Endwalker:
> 1440p max : 24802 (no shunt)
> 
> But... I just noticed, 2 days ago, that my PL is back down to around 410 W with the shunt mod (wires and 3x R005 each time, 15 mΩ), showing 308 W in GPU-Z. I think there's something wrong with "Input PP Source" x1.33, so 113.5 W without the shunt. Is that total PCIe slot power? Is it safe to leave things like this while waiting a few weeks for some 10 mΩ shunts to arrive? Waiting on some Extreme pads too... The shunts still work, but badly, less effectively, with the PL kicking in around 410 W at 100% slider. Honestly it's almost a miracle they work at all, because I didn't scrape anything and the solder was "repellent". I know for sure there's one that worries me, though not the PCIe slot one; Power SRC maybe? I'm seriously thinking about MG 842AR paint + liquid electrical tape, but it's not cheap for what I need  Thanks !


Something's wrong with the shunts you did.
You said you stacked multiple shunts on top of each other???
You should only stack ONE shunt on top of the original, not 2 and not 3....
And you also need to use Rosin flux every step of the way, before applying solder, and then after (on the sides), before using the iron to melt solder to create a bond.

Your Misc0 input power should be no more than 2W away from Misc2 input power. 
You having it 29W difference means there is a problem with the GPU Chip Power shunt. This shunt, called GPU Core NVVDD Input Power(sum) at the top, below Total Board Power, is a "sum" of Misc0, Misc2 and NVVDD1 input power(sum). 
The 8 pins are also a little too far apart. Not sure about the others but you need to do the mod properly this time and remember to use flux before applying solder (on the original shunt) and after applying solder (before putting the new shunt on top). Remember to clean the flux off the shunts and PCB after you're done. And it helps to use Polyimide tape on the PCB to help protect it against accidental solder drops getting on the board (aka Kapton tape).


----------



## kryptonfly

Falkentyne said:


> Something's wrong with the shunts you did.
> You said you stacked multiple shunts on top of each other???
> You should only stack ONE shunt on top of the original, not 2 and not 3....
> And you also need to use Rosin flux every step of the way, before applying solder, and then after (on the sides), before using the iron to melt solder to create a bond.
> 
> Your Misc0 input power should be no more than 2W away from Misc2 input power.
> You having it 29W difference means there is a problem with the GPU Chip Power shunt. This shunt, called GPU Core NVVDD Input Power(sum) at the top, below Total Board Power, is a "sum" of Misc0, Misc2 and NVVDD1 input power(sum).
> The 8 pins are also a little too far apart. Not sure about the others but you need to do the mod properly this time and remember to use flux before applying solder (on the original shunt) and after applying solder (before putting the new shunt on top). Remember to clean the flux off the shunts and PCB after you're done. And it helps to use Polyimide tape on the PCB to help protect it against accidental solder drops getting on the board (aka Kapton tape).


I added 3x R005 "in series" to make a "R015", because I don't have an R015, and I attached them with fan wires. I think it ends up at more than 15 mΩ. Yep, I don't have the right gear to do it properly; I've ordered real R010s and R015s in any case, and I'll redo it with MG 842AR paint like you did. Indeed my solder joints are really bad; there's a "coating" that prevents good contact and repels solder, and I didn't scrape anything off... I have to wait maybe 2 weeks for the R010s to arrive. For now I won't push the card much, or I'll remove the shunts in the meantime. I'm also waiting on Gelid pads because VRAM temps are too high (bad pressure).

Anyway, thank you for your support, I will make it properly ASAP !


----------



## Falkentyne

kryptonfly said:


> I added 3x R005 "in series" to make a "R015", because I don't have an R015, and I attached them with fan wires. I think it ends up at more than 15 mΩ. Yep, I don't have the right gear to do it properly; I've ordered real R010s and R015s in any case, and I'll redo it with MG 842AR paint like you did. Indeed my solder joints are really bad; there's a "coating" that prevents good contact and repels solder, and I didn't scrape anything off... I have to wait maybe 2 weeks for the R010s to arrive. For now I won't push the card much, or I'll remove the shunts in the meantime. I'm also waiting on Gelid pads because VRAM temps are too high (bad pressure).
> 
> Anyway, thank you for your support, I will make it properly ASAP !


Don't use MG842AR. It degrades and it's inconsistent. Solder properly.
Use flux every step.
Watch this video.
Remove the old shunts, leaving the original 5 mohm on.
Then solder 10 mohm on top of 5 mohm properly, fluxing, building a solder bridge, then fluxing AGAIN then melting it with the new shunt on top of the bridge.
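For anyone following along: stacking a shunt on top of the original puts the two resistors in parallel, which is what makes the card under-report power. A quick sketch of the math (ideal resistors assumed; solder-joint resistance is ignored):

```python
# Effective resistance of a shunt stacked on the original (they end up
# in parallel), and the fraction of true power the BIOS then reports.
# Ideal-resistor sketch; solder-joint resistance is ignored.
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def reported_fraction(original_mohm: float, stacked_mohm: float) -> float:
    """Fraction of actual power the card reads after the mod."""
    return parallel(original_mohm, stacked_mohm) / original_mohm

# 10 mOhm stacked on the stock 5 mOhm gives ~3.33 mOhm effective, so
# the card reads ~2/3 of the real power: a 390 W limit then actually
# allows roughly 585 W.
frac = reported_fraction(5.0, 10.0)
print(round(frac, 3), round(390 / frac))  # -> 0.667 585
```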


----------



## cennis

yzonker said:


> Yes, the 1kw reBar bios. I saw that post. But I wasn't sure how to explain to Seasonic support that I'm running a 350w Zotac 3090 Trinity at 500w+. Lol
> 
> Oddly I can even run TSE at 1093mv with no issue. Endwalker must cause power spikes or something that triggers it that nothing else seems to do.


Which BIOS has 1 kW and ReBar? Can you link it? I thought that was only for KPE owners.


----------



## GRABibus

cennis said:


> Which BIOS has 1 kW and ReBar? Can you link it? I thought that was only for KPE owners.











EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory)
www.techpowerup.com


----------



## gfunkernaught

Well...ok...dammit lol.


----------



## GRABibus

gfunkernaught said:


> View attachment 2522598
> 
> Well...ok...dammit lol.


better wait for RTX 4…. 😊


----------



## Falkentyne

gfunkernaught said:


> View attachment 2522598
> 
> Well...ok...dammit lol.


I see a whole 5 FPS boost, assuming both the 3090 and the 3090 Super are running at the exact same TDP and clocks (meaning the original 3090 is slightly overclocked to match, though the memory would require a +750 offset, which some cards can't do reliably).


----------



## GAN77

gfunkernaught said:


> Well...ok...dammit lol.


Micron promised dual density memory chips. Perhaps we will see memory on one side of the board.


----------



## des2k...

Falkentyne said:


> I see a whole 5 FPS boost, assuming both the 3090 and the 3090 Super are running at the exact same TDP and clocks (meaning the original 3090 is slightly overclocked to match, though the memory would require a +750 offset, which some cards can't do reliably).


OC scaling is horrible on these RTX 30 cards at 500 W+.
It's already 50 W more TDP, plus more CUDA & mem OC.

but why would we need another 1.2% better card lol
I guess it's for 3DMark scores


----------



## yzonker

gfunkernaught said:


> View attachment 2522598
> 
> Well...ok...dammit lol.


My bank account sighed again.


----------



## yzonker

des2k... said:


> OC scaling is horrible on these RTX 30 cards at 500 W+.
> It's already 50 W more TDP, plus more CUDA & mem OC. At this point, if you're not water cooling, there's probably 0 FPS difference on air coolers.


Yea if the rumors are true about 40 series, not sure how air coolers are going to be viable. My 3080ti at 450w with the air cooler hit 80C at 100% fan. 77C with the case cover off.


----------



## des2k...

yzonker said:


> Yea if the rumors are true about 40 series, not sure how air coolers are going to be viable. My 3080ti at 450w with the air cooler hit 80C at 100% fan. 77C with the case cover off.


hehe, I would imagine a few here will get a 3090 Super, then a 40 series 😁

Kept my 1080 Ti for 3 years; I think the 3090 will hold up just fine for me.


----------



## J7SC

yzonker said:


> My bank account sighed again.


FYI, the VRAM speed "gain" of the proposed 3090 Super (if that comes to pass) is really just the Micron GDDR6X running at its actual rated speed rather than being down-clocked... maybe via better rear thermal pads / a back-plate with heat pipes, etc.


----------



## steadly2004

I've been pulling my hair out. I couldn't get AC Valhalla to run; it would just crash before loading. So frustrating. I had just added a PCIe Gen 4 riser to get my video card vertical and better airflow for my P200A, so I figured maybe that was the problem. Dropped it to Gen 3, no help. Then I figured maybe it was my RAM at 3600 CL14 for the OC. Reset to stock, no dice. I was playing with Ryzen Clock Tuner, so I turned that off, no help. I reinstalled Windows 11 and it worked..... once. Then upon applying my memory OC it didn't, and going back to optimized defaults didn't help this time..... reinstalled Windows 11 again and it still didn't work. I was also trying to get Resizable BAR turned on. Turned it off, no go. No combination of settings helped. 

Reinstalled Win10 and it works! Then updated drivers, still works. Added back in the memory OC, still works. Enabled Resizable BAR, still works! It was Windows 11 all along.


----------



## Alex24buc

A question for Palit GameRock OC users: please tell me the version number of your Palit ReBar BIOS. I have some glitches after updating to the ReBar BIOS from the Palit website, and I saw that they recently removed it from their website. I want to know if there is another version of the ReBar BIOS from Palit. Thanks!


----------



## mirkendargen

steadly2004 said:


> I've been pulling my hair out. I couldn't get AC Valhalla to run; it would just crash before loading. So frustrating. I had just added a PCIe Gen 4 riser to get my video card vertical and better airflow for my P200A, so I figured maybe that was the problem. Dropped it to Gen 3, no help. Then I figured maybe it was my RAM at 3600 CL14 for the OC. Reset to stock, no dice. I was playing with Ryzen Clock Tuner, so I turned that off, no help. I reinstalled Windows 11 and it worked..... once. Then upon applying my memory OC it didn't, and going back to optimized defaults didn't help this time..... reinstalled Windows 11 again and it still didn't work. I was also trying to get Resizable BAR turned on. Turned it off, no go. No combination of settings helped.
> 
> Reinstalled Win10 and it works! Then updated drivers, still works. Added back in the memory OC, still works. Enabled Resizable BAR, still works! It was Windows 11 all along.


Do you notice it start to stutter after playing ~30min and devolve into a slideshow after ~60min, especially if doing raids? I've tried every supposed fix for that crap and nothing's working for me...


----------



## steadly2004

mirkendargen said:


> Do you notice it start to stutter after playing ~30min and devolve into a slideshow after ~60min, especially if doing raids? I've tried every supposed fix for that crap and nothing's working for me...


I actually haven't played after getting it working. Just testing rebar on/off, OC mem and drivers, all using the in game benchmark. But I'll definitely update here once I get a chance to play. Have to cook dinner and spend some time with the wife at the moment.

_edit_ Just played the first 1hr10min and no slow down. Only got to do 1 raid. But no issues there. I will probably rarely spend much more than 1-1.5hr at a time just due to my darn schedule between work and other stuff.


----------



## dante`afk

switched the stock pads on the byksky fullcover block to gelid extremes.

22c vrm drop, 1c drop on gpu, hotspot about the same


----------



## Falkentyne

dante`afk said:


> switched the stock pads on the byksky fullcover block to gelid extremes.
> 
> 22c vrm drop, 1c drop on gpu, hotspot about the same
> 
> View attachment 2522621


I think you mean VRAM temp. VRM temp can't be monitored. It just registers with the hotspot (the hottest point on either the core or any VRM sensor).


----------



## Nizzen

gfunkernaught said:


> @yzonker @Falkentyne
> I measured the power draw from the wall during GTA 5 8k runs, 660w, that is with the 9900k OC'ed to 5ghz all cores, and the 1kw bios power limit set to 65%. That was on my HX1200 which kept shutting off the PC. So next question would be: Are the seasonic 1kw psu's better equipped for spikes than the hx1200? Or is the HX1200 fine and mine just went bad?


Is the hx1200 old?


----------



## J7SC

Falkentyne said:


> I think you mean VRAM temp. *VRM temp can't be monitored.* It just registers with the hotspot (the hottest point on either the core or any VRM sensor).


...depends on the card, my Strix shows GPU VRM temps in HWInfo (an old air-cooled result below)


----------



## Lobstar

People were comparing die contact a few pages back. Here is what I found under my Optimus Absolute. The paste is Kryonaut Extreme. I get excellent thermals even though it looks like there is not much direct contact.


----------



## Falkentyne

Lobstar said:


> People were comparing die contact a few pages back. Here is what I found under my Optimus Absolute. The paste is Kryonaut Extreme. I get excellent thermals even though it looks like there is not much direct contact.
> View attachment 2522628


Yes, that's perfect contact. Very well played by Optimus (amazing what they can do when they actually RELEASE something). That's how paste is supposed to look: almost totally squeezed out, with a VERY THIN film left on the GPU that doesn't look like large waves in the ocean. You just have the slightest hint of waves, with mostly an actual film left over most of the die. On a scale of 1 to 10 for perfect paste application, I would rate this a 10. I would rate it an 11 if the very top middle edge were a bit more perfect.


----------



## gfunkernaught

Nizzen said:


> Is the hx1200 old?


I bought it 2/21 and it was mfg'd 12/20.


----------



## Nizzen

gfunkernaught said:


> I bought it 2/21 and it was mfg'd 12/20.


Then it's new


----------



## des2k...

Lobstar said:


> People were comparing die contact a few pages back. Here is what I found under my Optimus Absolute. The paste is Kryonaut Extreme. I get excellent thermals even though it looks like there is not much direct contact.
> View attachment 2522628


very nice, their plate is flat and the PCB doesn't rest on standoffs, so no surprise here

my EK is very close; I need to drop .1mm on the standoffs and lap the block some more, and it will get there🙄

what's your water / core delta ?


----------



## Lobstar

des2k... said:


> what's your water / core delta ?


7-10°C depending on the workload.


----------



## yzonker

Yea, that has to be about the best paste spread I've ever seen on a 30 series. I didn't even know it could be that good. Lol. Looks like a CPU (a good CPU mount). My 3090 didn't look like that when I remounted it in my Corsair block. I'm sure it won't next time either.


----------



## GRABibus

J7SC said:


> ...depends on the card, my Strix shows GPU VRM temps in HWInfo (an old air-cooled result below)
> 
> View attachment 2522625


Yes, on my Strix too
Except when flashed with the MSI Suprim X BIOS...


----------



## J7SC

GRABibus said:


> Yes, on my Strix too
> Except when flashed with the MSI Suprim X BIOS...


...speaking of Strix BIOSes, I really do wish Asus would make available an updated (and fully functional) XOC BIOS for the Strix (call it "Matrix"?), complete with r_BAR... that way, we could get accurate power read-outs as well.


----------



## chispy

Guys, I would like to know if 1.5mm Gelid Extreme thermal pads would squish enough on my EK 3090 Asus TUF block, which uses 1.0 pads everywhere. I'm doing maintenance on my loop and taking the block apart to clean it, and I would like to replace the shiiitty 1.0 thermal pads from EK, plus I will try liquid metal on the die. Any advice or guidance on which good thermal pads to use as replacements (stock EK ones are 1.0)? Thanks in advance.


----------



## J7SC

chispy said:


> Guys, I would like to know if 1.5mm Gelid Extreme thermal pads would squish enough on my EK 3090 Asus TUF block, which uses 1.0 pads everywhere. I'm doing maintenance on my loop and taking the block apart to clean it, and I would like to replace the shiiitty 1.0 thermal pads from EK, plus I will try liquid metal on the die. Any advice or guidance on which good thermal pads to use as replacements (stock EK ones are 1.0)? Thanks in advance.


...fyi, if you can't get the right thermal pads, there's also the thermal putty option (for anything but the die) which conforms to whatever space is available. After some recommendations in this thread, I picked up the 10 W/mk putty below and really like it.


----------



## GRABibus

J7SC said:


> ...speaking of Strix BIOSes, I really do wish Asus would make available an updated (and fully functional) XOC BIOS for the Strix (call it "Matrix"?), complete with r_BAR... that way, we could get accurate power read-outs as well.


----------



## Lobstar

Hmm. The thermal pads on my LR22 inductors are not making contact with the Optimus block on my Rev1.0 FTW3U. Should I be concerned about this?


----------



## Falkentyne

chispy said:


> Guys, I would like to know if 1.5mm Gelid Extreme thermal pads would squish enough on my EK 3090 Asus TUF block, which uses 1.0 pads everywhere. I'm doing maintenance on my loop and taking the block apart to clean it, and I would like to replace the shiiitty 1.0 thermal pads from EK, plus I will try liquid metal on the die. Any advice or guidance on which good thermal pads to use as replacements (stock EK ones are 1.0)? Thanks in advance.


With Gelid Extremes, you "should" be able to go "up to" 0.5mm thicker than the original stock pads safely. Plus you can always manually compress them a tiny bit, like 0.2mm (best to use a digital micrometer to check, though). Gelid Ultimates are much harder, and I would not recommend Ultimates on the GPU core side if you have little tolerance leeway across the GPU core -> VRM -> VRAM -> inductors/chokes height differences. Ultimates on the backplate are fine.


----------



## Falkentyne

J7SC said:


> ...fyi, if you can't get the right thermal pads, there's also the thermal putty option (for anything but the die) which conforms to whatever space is available. After some recommendations in this thread, I picked up the 10 W/mk putty below and really like it.
> 
> View attachment 2522670


Which has been out of stock for the last two months, as far as I know, and I've been checking every day....


----------



## gfunkernaught

Nizzen said:


> Then it's new


Indeed. RMA is processing. Hopefully what they send back to me will be solid and be able to handle an unlimited 3090 with those spikes.


----------



## GRABibus

gfunkernaught said:


> Indeed. RMA is processing. Hopefully what they send back to me will be solid and be able to handle an unlimited 3090 with those spikes.


You worry me, I have a HX 1200


----------



## kryptonfly

Falkentyne said:


> Don't use MG842AR. It degrades and it's inconsistent. Solder properly.
> Use flux every step.
> Watch this video.
> Remove the old shunts, leaving the original 5 mohm on.
> Then solder 10 mohm on top of 5 mohm properly, fluxing, building a solder bridge, then fluxing AGAIN then melting it with the new shunt on top of the bridge.


I completely removed the old shunts and cleaned the originals, "as new". I pulled a max load of 390 W; here's the behavior at full stock + PL 105.4% (390 W):

GPU-Z is absolutely normal. Since I've had this Gigabyte, pin #1 has always drawn more than pin #2, so we can see:
MVDDC = FBVDD Input Power
PWR_SRC = Input PP Source
Misc0 & Misc2 have a 7 W difference, not just 2 W.
Misc0 + Misc2 + NVVDD1 = 188 W, not 178 W like the GPU core reading. 10 W more.

I don't know if the same scheme applies here compared to other GPUs. Maybe you understand what the other power readings point at? Do you think I should mix 10 & 15 mΩ to balance pins #1 & #2? 10 mΩ on pin #1 and 15 mΩ on pin #2, for example? Or let the card manage the unbalanced power as it was designed to? Anyway, I don't have them yet...


----------



## Falkentyne

kryptonfly said:


> I completely removed the old shunts and cleaned the originals, "as new". I pulled a max load of 390 W; here's the behavior at full stock + PL 105.4% (390 W):
> 
> GPU-Z is absolutely normal. Since I've had this Gigabyte, pin #1 has always drawn more than pin #2, so we can see:
> MVDDC = FBVDD Input Power
> PWR_SRC = Input PP Source
> Misc0 & Misc2 have a 7 W difference, not just 2 W.
> Misc0 + Misc2 + NVVDD1 = 188 W, not 178 W like the GPU core reading. 10 W more.
> 
> I don't know if the same scheme applies here compared to other GPUs. Maybe you understand what the other power readings point at? Do you think I should mix 10 & 15 mΩ to balance pins #1 & #2? 10 mΩ on pin #1 and 15 mΩ on pin #2, for example? Or let the card manage the unbalanced power as it was designed to? Anyway, I don't have them yet...


First of all, the power readings won't be perfect on all cards. 7 W is standard variance for Misc0 and Misc2. GPU Core NVVDD Input Power (sum) (GPU Chip Power) is Misc0 + Misc2 + NVVDD1 input (sum), so expect that. And 8-pin #1 and #2 aren't always going to be perfectly balanced either. Don't focus on it. But a >20 W difference at load is usually a problem.

Second, please use the same shunt values on all resistors. Not doing this can throw power balancing off, as the card expects different values than what it's getting. Don't expect perfection here.
Just solder 10 mOhm shunts onto all 6 shunt resistors (7 shunts for 3x8-pin cards) and be done with it.
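The rail relationships above can be sanity-checked numerically. A sketch (the tolerances and the example wattages are illustrative rules of thumb drawn from this discussion, not NVIDIA specs):

```python
# Sanity-check helper for the rail relationships described above:
# GPU Chip Power should roughly equal Misc0 + Misc2 + NVVDD1 (sum),
# and the two Misc rails should track each other. The 15 W / 20 W
# tolerances are rules of thumb from this thread, not NVIDIA specs.
def check_rails(misc0: float, misc2: float, nvvdd1: float,
                chip_power: float, sum_tol: float = 15.0,
                misc_tol: float = 20.0) -> list:
    problems = []
    if abs((misc0 + misc2 + nvvdd1) - chip_power) > sum_tol:
        problems.append("chip-power sum mismatch: check the NVVDD shunt")
    if abs(misc0 - misc2) > misc_tol:
        problems.append("Misc0/Misc2 imbalance: check the Misc shunts")
    return problems

# Illustrative numbers shaped like the stock readings discussed above
# (188 W summed vs 178 W chip power, 7 W between the Misc rails):
# both checks pass, i.e. within normal variance.
print(check_rails(misc0=60.0, misc2=53.0, nvvdd1=75.0, chip_power=178.0))  # -> []
```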


----------



## chispy

Falkentyne said:


> With Gelid Extremes, you "should" be able to go "up to" 0.5mm thicker than the original stock pads safely. Plus you can always manually compress them a tiny bit, like 0.2mm (best to use a digital micrometer to check, though). Gelid Ultimates are much harder, and I would not recommend Ultimates on the GPU core side if you have little tolerance leeway across the GPU core -> VRM -> VRAM -> inductors/chokes height differences. Ultimates on the backplate are fine.



Thank you my friend for your feedback, truly appreciate it  . I will go ahead and order the Gelid Extreme 1.5mm pads to replace the generic EK 1.0mm stock ones that came with the block. Summer has brought very high temperatures over here, bro, and I need to cool and tame this beast on H2O as best as I can.


----------



## gfunkernaught

GRABibus said:


> You worry me, I have a HX 1200


It was all good until I tried GTA 5. I wasn't even getting signs of a bad OC, just power off, power on. Based on what I've read, it sounds like OCP. OTP is triggered when the PSU reaches 50C, which I never actually measured, but the air coming from the PSU was warm to say the least. My understanding of thermal protection is that the unit cannot be turned back on until it cools to a safe operating temperature, whereas over-current protection trips on, for example, a spike in current, and the PSU can return to operation immediately once the current is back under the OCP threshold. I'm no expert, so I'm going on just what I've read and understand about it.
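That OCP-vs-OTP distinction can be sketched as a toy model (behavioral sketch only; the thresholds and recovery margin are illustrative, and real PSU protection circuits are analog and vendor-specific):

```python
# Toy model of the protection behavior described above: OTP latches the
# supply off until it cools well below the limit, while OCP allows a
# restart as soon as current is back under the threshold. Thresholds
# are illustrative; real PSU protection is analog and vendor-specific.
class PsuProtection:
    def __init__(self, ocp_limit_a: float = 100.0, otp_limit_c: float = 50.0):
        self.ocp_limit_a = ocp_limit_a
        self.otp_limit_c = otp_limit_c
        self.tripped_thermal = False

    def update(self, current_a: float, temp_c: float) -> bool:
        """Return True if the PSU is allowed to run this instant."""
        if temp_c >= self.otp_limit_c:
            self.tripped_thermal = True          # OTP latches
        if self.tripped_thermal:
            if temp_c < self.otp_limit_c - 10:   # must cool well below limit
                self.tripped_thermal = False
            else:
                return False
        return current_a < self.ocp_limit_a      # OCP: instantaneous only

psu = PsuProtection()
print(psu.update(120, 40))  # current spike -> False (off)
print(psu.update(90, 40))   # spike gone -> True (back on immediately)
print(psu.update(90, 55))   # overheated -> False, and latched
print(psu.update(90, 45))   # cooler, but not enough -> still False
```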


----------



## J7SC

GRABibus said:


> You worry me, I have a HX 1200


...get a Leadex 2000 W (France is at 230 V) like the triple 3090 Strix build below (only w/o the EKWB blocks)


----------



## yzonker

chispy said:


> Thank you my friend for your feedback, truly appreciate it  . I will go ahead and order the Gelid Extreme 1.5mm pads to replace the generic EK 1.0mm stock ones that came with the block. Summer has brought very high temperatures over here, bro, and I need to cool and tame this beast on H2O as best as I can.


Why are you buying thicker pads though? I must be missing something.


----------



## steadly2004

yzonker said:


> Why are you buying thicker pads though? I must be missing something.


I agree. I would only go up if they weren't being compressed. Otherwise, the thinner the better.
Too thick and you can add distance between the heatsink and GPU, causing higher temps on the main chip. Are the 1mm pads not available? 

I actually went down to 1mm on my backplate to minimize the heat transfer material. I probably should have checked that nothing touched and grounded out, but I didn't think of it until later. Also, the parts on my backplate that don't touch are coated in plastic.


----------



## des2k...

Lobstar said:


> Hmm. The thermal pads on my LR22 inductors are not making contact with the Optimus block on my Rev1.0 FTW3U. Should I be concerned about this?


if you're worried, get that thermal putty

most will say it's not important to cool these, but the main function of these copper coils is stable voltage step-down for vcore, vmem, and uncore from the 12 V source

for example, the VRM controller drives the VRM stages. A VRM stage will send a 12 V pulse to the LR22 inductor hundreds of thousands of times a second (~500 kHz switching), and the output will be 1 V for vcore

would that be affected by more heat? would the VRM work harder if the copper coil is 80c vs 50c? that I have no idea🙄 maybe 5A more per stage if the copper gets hot

I would say at 1 V vcore or lower and 350 W, you can get away with not cooling them.

24/7 at 1.1 V and 600 W, yes, you need to cool them
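The step-down described above is a buck converter, and its ideal relationships are easy to sketch (the DCR and per-phase current figures below are illustrative assumptions for an LR22-class inductor, not measured values):

```python
# Ideal buck-converter relationships behind the description above.
# duty_cycle: fraction of each switching period the 12 V rail is on.
# copper_loss_w: I^2*R heating in one inductor; copper resistance
# rises ~0.393% per degree C. DCR and current values are assumptions.
def duty_cycle(v_out: float, v_in: float = 12.0) -> float:
    return v_out / v_in

def copper_loss_w(current_a: float, dcr_mohm: float, temp_c: float,
                  ref_c: float = 25.0) -> float:
    r_ohm = dcr_mohm / 1000.0 * (1 + 0.00393 * (temp_c - ref_c))
    return current_a ** 2 * r_ohm

# 1.0 V core from 12 V -> ~8.3% duty cycle. At an assumed 40 A per
# phase and 0.22 mOhm DCR, one coil dissipates ~0.39 W at 50C and
# ~0.43 W at 80C: hotter copper wastes a little more power, though the
# effect is small next to the MOSFET switching losses.
print(round(duty_cycle(1.0), 3))  # -> 0.083
```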


----------



## Falkentyne

Where? That putty is out of stock everywhere.


----------



## des2k...

Falkentyne said:


> Where? That putty is out of stock everywhere.


didn't know it was out of stock; Noctua NT-H2 paste worked for me for closing the gap on .5mm pads for the front memory


----------



## J7SC

des2k... said:


> didn't know it was out of stock; Noctua NT-H2 paste worked for me for closing the gap on .5mm pads for the front memory


...how could you know unless it's on your daily-check menu. Besides, certain 3090s are also out of stock, doesn't mean folks won't recommend them  

...I got two smaller containers of the thermal putty left, hopefully enough for two GPU blocks. One of those is a Bykski block for a 6900XT where they recommend thermal paste for VRAM instead of pads, so a putty savings right there. When it is back in stock, I'll definitely get more putty though


----------



## steadly2004

Falkentyne said:


> Where? That putty is out of stock everywhere.


DELETED content to prevent confusion and to prevent someone from using it. Thanks Falkentyne!

You can try and order from Fujipoly themselves. I don't know which product is the proper one. They also let you order a free sample.





Fujipoly - Thermal Interface Materials - Gap Filler Pads (Putty)
www.fujipoly.com


----------



## Falkentyne

steadly2004 said:


> Not sure if it's safe for computers. But it says its non-conductive. Just searched and found:
> 
> 
> 
> 
> 
> 
> 
> 
> Slice Engineering Boron Nitride Paste
> 
> 
> Thermal paste for improving conduction Acts as a release agent Unlike anti-seize compounds, Boron Nitride is not electrically conductive, which eliminates the risk of shorting out your heater cartridge or thermistor Use the included applicator to coat the cartridge heater hole in your 3d printer...
> 
> 
> 
> 
> www.3dprinternational.com
> 
> 
> 
> 
> 
> Trying to find an example of it actually being used, I can't. I did find a site saying "not for use on computer products", so.... probably not safe. Please disregard. Unless they're saying don't use it as TIM.


HOLY (CENSORED), DON'T EVEN THINK ABOUT THAT, EVER!!!!
I BOUGHT THAT STUFF AND ALMOST DESTROYED MY R9 290X.
The card overheated within seconds (first it took 30 seconds to reach 90C, then it took 5 seconds...).

There was a burn mark on the chip afterwards and the 'paste' had hardened like some shale-like material. Took a while to scrape it off. Card still worked fine.


----------



## Falkentyne

J7SC said:


> ...how could you know unless it's on your daily-check menu. Besides, certain 3090s are also out of stock, doesn't mean folks won't recommend them
> 
> ...I got two smaller containers of the thermal putty left, hopefully enough for two GPU blocks. One of those is a Bykski block for a 6900XT where they recommend thermal paste for VRAM instead of pads, so a putty savings right there. When it is back in stock, I'll definitely get more putty though


Because I do check daily. Been trying to buy it for ages.
And they usually say when more is coming in "expected or /on order."


----------



## steadly2004

Falkentyne said:


> Because I do check daily. Been trying to buy it for ages.
> And they usually say when more is coming in "expected or /on order."


I think I found some in stock and "probably" the correct product. I just have never used or purchased thermal putty before.





TG-NSP80-4OZ t-Global Technology | Fans, Thermal Management | DigiKey
Order today, ships today. TG-NSP80-4OZ – Thermal Non-Silicone Putty 4 oz Jar from t-Global Technology. Pricing and Availability on millions of electronic components from Digi-Key Electronics.
www.digikey.com





It's just over $80 for 30cc; I previously looked at the 5 W/mK one and it was only $40.
Here is the manufacturer's product page.








TG-NSP80 Non-Silicone Thermal Putty - T-Global Technology
TG-NSP80 Non-Silicone Thermal Putty is a one-part, fully cured non-silicone gap filler which has been designed to reduce the time to market.
www.tglobaltechnology.com








----------



## tps3443

gfunkernaught said:


> View attachment 2522598
> 
> Well...ok...dammit lol.


That’s a hard pass for me. I’ve already got 1,100GB/s memory bandwidth. And plenty of power. Why are they even releasing something like this? Lol.

There would be no noticeable difference at all lol. 1%? 2%?

Maybe Nvidia is just bored?


----------



## tps3443

steadly2004 said:


> I think I found some in stock and "probably" the correct product. I just have never used or purchased thermal putty before.
>
> TG-NSP80-4OZ t-Global Technology | Fans, Thermal Management | DigiKey
> www.digikey.com
>
> It's just over $80 for 30cc; I previously looked at the 5 W/mK one and it was only $40.
> Here is the manufacturer's product page.
>
> TG-NSP80 Non-Silicone Thermal Putty - T-Global Technology
> www.tglobaltechnology.com


Maybe a couple of members would be willing to go in together on some. I'm not really after buying $162 worth (4 ounces) of really good thermal putty. However, I wouldn't mind paying half or going 4 ways with some other members: 4 people would get 1 ounce each for about $40 each, which seems more reasonable, and they'd just pay their shipping too. One person could collect the pot plus the small shipping fee, then we order it and split it up. Sounds silly, but $162 for thermal filler is expensive lol, especially when most of it would be wasted away in a massive tub.


----------



## tps3443

Lobstar said:


> Hmm. The thermal pads on my LR22 inductors are not making contact with the Optimus block on my Rev1.0 FTW3U. Should I be concerned about this?


I am thinking the same thing; I may grab some TG10 stuff for the VRMs. The stuff that Evga includes with the KP HC block is only 1.6 W/mK thermal conductivity, so it may not even show an improvement at all. But with what I have done to my waterblock, everything counts at this point I guess.


----------



## gfunkernaught

tps3443 said:


> That’s a hard pass for me. I’ve already got 1,100GB/s memory bandwidth. And plenty of power. Why are they even releasing something like this? Lol.
> 
> There would be no noticeable difference at all lol. 1%? 2%?
> 
> Maybe Nvidia is just bored?


There will be a noticeable increase in profits because people WILL feverishly buy them. Just like people feverishly buy gas when they _hear_ about a storm coming.


----------



## ManniX-ITA

Falkentyne said:


> Just solder on 10 mOhm shunts


Why 10 mOhm and not 5?


----------



## kryptonfly

Falkentyne said:


> First of all, the power readings wont be perfect on all cards. 7W is standard variance for misc 0 and 2. GPU Core NVVDD Input Power sum (GPU Chip Power) is Misc0+2+nvvdd1 input (sum). So expect that. And 8 pin #1 and #2 aren't always going to be perfect either. Don't focus on it. But >20W difference at load is usually a problem.
> 
> Second, please use the same shunts values on all resistors. Not doing this can throw power balancing off as the card expects different values than what it's getting. Don't expect perfection here.
> Just solder on 10 mOhm shunts on all 6 shunt resistors (7 shunts for 3x8 pin cards) and be done with it.


Thanks for all your explanation, you relieved me . You're dead right, I planned to put 10mΩ shunts in and keep it like that. Excuse me if I'm picky, I'm starting to figure out all the tricks here 

Next trouble is my vram temp with the Bykski waterblock. I disassembled it a dozen times, changed pads from Bykski to Thermalright Odyssey but they were too hard and thick (my bad), I even bought an active backplate but it's exactly the same. Waiting on some Gelid Extreme to see if it's better... vram 54°C to 74°C and gpu 39°C at 390W (delta 15°C).


----------



## SoldierRBT

15.5K under 430W
3090 KPE 0.931v 2130MHz +1200 Mem








I scored 15 500 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## kryptonfly

ManniX-ITA said:


> Why 10 mOhm and not 5?


My choice. I tried 15mΩ; it worked well but I still hit the power limit (bad solders btw). 5mΩ is x2 PL and I don't want to risk anything, so 10mΩ is the sweet spot for me. My 1400VA inverter did beep at the end of Timespy test 2 with the 15mΩ shunts, so I think with 10mΩ it would be worse... I just hope my power supply will follow the load (Prime TX-750W).
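For anyone following along, the arithmetic behind those multipliers is just parallel resistance. A small sketch, assuming 5mΩ stock shunts (which is what makes 5mΩ-on-5mΩ come out to "x2 PL" — verify the value on your own card) and the new shunt stacked on top of, i.e. in parallel with, the stock one:

```python
def parallel(r1, r2):
    """Two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def power_limit_multiplier(stock_mohm, added_mohm):
    """Stacking a shunt on the stock one lowers the sensed resistance,
    so the card under-reads current by stock/parallel and the effective
    power limit scales up by that same factor."""
    return stock_mohm / parallel(stock_mohm, added_mohm)

mult_15 = power_limit_multiplier(5, 15)  # ≈1.33x (the first attempt above)
mult_10 = power_limit_multiplier(5, 10)  # 1.5x (the "sweet spot" here)
mult_5  = power_limit_multiplier(5, 5)   # 2.0x ("5mΩ is x2 PL")
```

So with a 520W bios, 10mΩ stacked shunts land around an effective ~780W ceiling, which is why a modest PSU starts to matter.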


----------



## Falkentyne

kryptonfly said:


> My choice. I tried 15mΩ worked well but still PL (bad solders btw), 5mΩ is x2 PL and I don't want to risk anything, 10mΩ is the sweet spot for me. My inverter 1400VA did beep in Timespy test 2 at the end with 15mΩ shunts, so I think with 10mΩ it would be worse... I just hope my power supply will follow the load (Prime TX-750W).


A TX-750 is low for a 3090 even at stock. I would get a PX-1000 or TX-1000; 750W is Nvidia's minimum requirement for 3090s. No idea about the inverter or what that even is.


----------



## yzonker

Falkentyne said:


> TX-750 is low for 3090 even stock. I would get PX-1000 or TX-1000. 750W is Nvidia's lowest requirement for 3090s. No idea about the inverter or what that even is.


UPS, I think. My 3080 Ti running the Galax 1kW bios has caused my 1350VA to turn off (OCP). The RM850x has always held on though (haven't tried Endwalker though, lol).


----------



## gfunkernaught

Falkentyne said:


> TX-750 is low for 3090 even stock. I would get PX-1000 or TX-1000. 750W is Nvidia's lowest requirement for 3090s. No idea about the inverter or what that even is.


I can say with confidence that my RM750 handles my OC'd 3090 w/520w bios and 9900k (stock) quite well. Of course I won't stay on this PSU as it is temporary until my RMA arrives. Plus, I generally try to follow a 50% load rule of thumb when it comes to sensitive and delicate stuff like power and in other hobbies like astrophotography (50% of mount's max payload capacity). The HX1200 was at a slightly higher load than 50% so the efficiency wasn't peak but better than maxing out the RM750.


----------



## tps3443

So I figured I’d share some temps. This was on my 3090 Kingpin Hydro Copper.


----------



## gfunkernaught

tps3443 said:


> So I figured I’d share some temps. This was on my 3090 Kingpin Hydro Copper.


Do you remember what the ambient temp was during that test?


----------



## tps3443

More temps


gfunkernaught said:


> Do you remember what the ambient temp was during that test?


23C-24C in my house. However, my work area is usually on the higher end of that.


----------



## elbramso

tps3443 said:


> More temps
> 
> 
> 23C-24C in my house. However, my work area is usually on the higher end of that.


me again^^

I already replied to your post on the "optimus thread" and was a bit rude (sorry for that).

I'm trying to understand your provided numbers.
Ambient was 24c and GPU max was 460w (while gaming?) / average load was 270w.
You don't have a water temp sensor, right?

Let's say your water is 4c over ambient (I saw you have a Mo-Ra). So, your water is ~28c.
Your GPU shows 42.5c, so the *delta is 14.5c at 470w.* That's pretty average for a waterblock and pretty bad for a waterblock of this price^^
Still, you get a ~2-3c better delta than I do^^

Of course your water temp may be higher, which would mean your block is performing better in this example...
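The same estimate as a tiny sketch, for anyone wanting to compare blocks apples-to-apples. The 4°C water-over-ambient figure is the same guess made in the post, not a measurement:

```python
def block_delta(gpu_temp_c, ambient_c, water_over_ambient_c=4.0):
    """Core-to-water delta, estimating water temp from ambient
    when there is no water temperature sensor in the loop."""
    water_c = ambient_c + water_over_ambient_c
    return gpu_temp_c - water_c

delta = block_delta(42.5, 24)   # 14.5°C at ~470 W
theta = delta / 470             # ~0.031 °C/W core-to-water
```

Dividing delta by wattage gives a rough °C/W figure, which is a fairer way to compare mounts measured at different power draws.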


----------



## des2k...

elbramso said:


> me again^^
> 
> already replied to your post on the "optimus thread" and was a bit rude (sorry for that).
> 
> I'm trying to understand your provided numbers.
> Ambient was 24c and GPU max was 460w (while gaming?) / average load was 270w.
> You don't have a water temp sensor, right?
> 
> Let's say your water is 4c over ambient (I saw you have a Mo-Ra). So, your water is ~ 28c.
> GPU my provided shows 42.5c, so the* delta is 14.5c at 470w.* That's pretty average for a waterblock and pretty bad for a waterblock of this price^^
> Still you receive ~2-3c better delta then I do^^
> 
> ofc your watertemp may be higher which would mean your block is performing better in this example...


Some get lucky with the gpu mount & their gpu die.

First, my gpu die is not 100% flat (this I didn't want to lap), and my EK block was also horrible. Ended up doing a quick lap, using double washers (plastic + metal) on the gpu core, and taking a good 1mm+ off the standoffs. This resulted in using .5mm pads vs 1mm pads (stock for EK blocks). With that close tolerance I had to use some paste on the mem pads to fix 80c mem temps.

The backplate also got premium pads + large heatsinks to keep mem temps good.

Got to ~7.5c delta at 450w, up to 13.5c at 600w. In the process I wasted a lot of premium pads & paste, and I ended up re-mounting so many times. Not something I want to repeat


----------



## elbramso

des2k... said:


> Some get lucky with gpu mount & their gpu die.
> 
> First my gpu die is not 100% flat(this I didn't want to lap), my EK block was also horrible. Ended up doing a quick lap, using double washers(plastic + metal) on the gpu core, taking a good 1mm+ off the standoffs. This resulted in using .5mm pads vs 1mm pads (stock for EK blocks). That close tolerance had to use some paste for the mem pads to fix 80c mem temps.
> 
> The backplate also got premium pads + large heatsinks to keep mem temps good.
> 
> Got to ~7.5c delta 450w, up to 13.5c at 600w. In the process I wasted alot of premium pads & paste. I ended up re-mounting so many times. Not something I want to repeat


By now I'm pretty sure either my block or my die isn't flat (or even both).
I re-mounted the block yesterday and went from Thermalright Extreme 2mm to Gelid Extreme 2mm pads. The Gelid pads are softer and my result was better by 2c @ 520w. Still, I'm sitting at a delta of 19c @ 520w... super frustrating...
What is the easiest way to check how curved my die is?


----------



## jura11

In my case the water temperature is usually close to the GPUs' idle temperatures; if the GPUs idle at, for example, 23°C, then my water temperature is usually around that figure. There's a small deviation, like 0.5°C, but it's usually pretty close.

Regarding the delta, everybody compares something different hahaha. My delta at ~500W is usually around 11-12°C, with VRAM temperatures in the 58-60°C range and a delta between the GPU core and GPU hot-spot of 8-10°C 

Hope this helps 

Thanks, Jura


----------



## jura11

I wish Bykski made waterblocks for the Kingpin; maybe they will, as they have made them in the past. On my RTX 3090 GamingPros their blocks work okay 

Hope this helps 

Thanks, Jura


----------



## des2k...

elbramso said:


> By now I'm pretty sure either my block or my DIE isn't flat (or even both).
> I re-mounted the block yesterday and went from Thermalright Extrem 2mm to Gelid Extreme 2mm. The Gelid Pads are softer and my result was better by 2c @ 520w. Still I'm sitting at a delta of 19c @ 520w... super frustrating...
> What is the easiest way to check how curved my DIE is?


Use this type of razor blade. You can use it on both the block & die: if either isn't 100% flat, you'll see small gaps of light.


----------



## gfunkernaught

tps3443 said:


> More temps
> 
> 
> 23C-24C in my house. However, my work area is usually on the higher end of that.


Your gpu temp is about the same as mine at that power range and ambient air temp you mentioned.


----------



## Lobstar

elbramso said:


> By now I'm pretty sure either my block or my DIE isn't flat (or even both).


What is your mounting pressure like? A guy I know had issues with his temps, and all his mounting hardware was only finger tight because he was afraid of hurting the pcb. Not saying you have this issue personally, but it's always worth mentioning. In my example the guy got 12°C back.


----------



## elbramso

Lobstar said:


> What is your mounting pressure like? A guy I know had issue with his temps and all the mounting hardware was like finger tight because he was afraid of hurting the pcb. Not saying you have this issue personally but it's always worth mentioning. In my example the guy got 12°C back.


Good point but I doubt this is the case here. 
My screws are tightened so much that I asked evga for an extra pair of the 4 middle screws^^


----------



## kryptonfly

Falkentyne said:


> TX-750 is low for 3090 even stock. I would get PX-1000 or TX-1000. 750W is Nvidia's lowest requirement for 3090s. No idea about the inverter or what that even is.





yzonker said:


> UPS I think. My 3080ti running the Galax 1kw bios has caused my 1350va to turn off (OCP). RM850x has always held on though. (haven't tried Endwalker though. Lol)


Yep  Back-UPS - onduleur régulation automatique de tension - 1400VA - prises FR - APC France

For now, my TX-750 holds on with the 15mΩ shunts (~520W peak), but in Timespy test 2 my UPS (inverter) did beep, which means there was 700W+ intake from the wall (1400VA = 700W); monitoring via USB I can see the load. The TX-750 can handle ~900W output according to reviews, and at ~94% efficiency that would be ~960W from the wall = my UPS will give up first!

But something odd: I previously tried the Galax and EVGA 1000W bioses, and all the probes went crazy, even temps, power-limited to death at 1800mhz and 800mv. As soon as I started Endwalker, my PSU turned off, no beep from the UPS but a message from my PSU in the bios when my pc turned back on. I repeated it 5 times and it did the same, so I reflashed the bios back. No trouble at all with the 15mΩ shunts; I think maybe the 1000W bios suddenly pulled too much from the PCIe slot or the 8-pins, beyond spec... I'm just wondering if they will last with 10mΩ shunts 

1 year ago I had a Cooler Master V1200 but got OCP in Cyberpunk in March, so I applied for an RMA, and in the meantime I ordered a TX-750 because they didn't have a 1200W one... and I just received the new PSU a few days ago! It's a brand new Cooler Master MWE 1250W Gold V2. Honestly I prefer to keep the TX with its higher efficiency: I can pull less from the wall for the same supply, no beep (~650W from the wall), whereas with the V1200 it was 700W+ from the wall (same load). If it holds on = gorgeous; if not, I will use the 1250W and my UPS would be obsolete  but I didn't know there was OCP in a UPS.
It's rated "Input/Output 220-240V~, 6.1A", so at least 1342W max from the wall; I don't think my UPS would turn off... will check this soon with 10mΩ shunts, time will tell.
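The wall-draw arithmetic above, as a sketch. The ~94% efficiency and the VA-to-watt factor of 0.5 for this Back-UPS line are the figures claimed in the post, not datasheet values:

```python
def wall_draw_w(dc_load_w, efficiency=0.94):
    """AC power pulled from the wall for a given DC load."""
    return dc_load_w / efficiency

def ups_watt_rating(va, power_factor=0.5):
    """Many consumer UPSes are watt-rated at roughly half their VA figure."""
    return va * power_factor

limit = ups_watt_rating(1400)  # 700 W for this Back-UPS 1400VA
peak_520 = wall_draw_w(520)    # ~553 W: inside the UPS rating, no beep
peak_900 = wall_draw_w(900)    # ~957 W: the UPS gives up before the PSU does
```

Which is exactly the conclusion in the post: at the PSU's ~900W ceiling, the 700W-rated UPS is the weak link, not the TX-750.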


----------



## KedarWolf

20% off Official GELID store until Sept. 5th with the code BACKTOSCHOOL

Get your Extreme pads while you can.


----------



## des2k...

kryptonfly said:


> Yep  Back-UPS - onduleur régulation automatique de tension - 1400VA - prises FR - APC France
> 
> For now, my TX-750 holds on when I shunted with 15mΩ (~520W peak) but in Timespy test 2 my UPS (inverter) did beep, that means there was +700W intake from the wall (1400 VA = 700W), monitoring via usb I can see the load. TX-750 can handle ~900W supply according to reviews and with ~94% of efficiency, it should be ~960W from the wall = my UPS will give up first !
> But something odd : I tried previously Galax and EVGA 1000W bios, all probes were crazy even temps, PL to death 1800mhz at 800mv and as soon as I started Endwalker, my PSU turned off, no beep from UPS but a message from my PSU in bios when my pc turned on. I repeated 5 times and it did same so I reflashed the bios as before. No trouble at all with 15mΩ shunts, I think maybe the 1000W bios ate too much suddenly from PCIe or 8 pin beyond spec... I'm just wondering if they will last with 10mΩ shunts
> 
> 1 year ago, I had a Cooler Master V1200 but got OCP in cyberpunk on march so I applied for a RMA and in the meantime I ordered a TX-750 because they didn't have one (1200W)... and I just received the new PSU few days ago ! It's a Cooler Master MWE 1250W Gold V2 brand new. Honestly I prefer to keep the TX with higher efficiency, I can pump less from the wall for more supply, no beep (~650W from the wall) but with the V1200 it was +700W from the wall (same load). If it holds on = gorgeous, if not I will use the 1250W and my UPS would be obsolete  but I didn't know there was OCP in UPS.
> There is noted "Input/Output 220-240V~, 6.1A" so at least 1342W max from the wall, I don't think my UPS would turn off... will check this soon with 10mΩ shunts, time will tell.


did you say 10mΩ on pcie slot shunt 🙄 with 1000w vbios ?
that's 200w over 5 tiny gold traces at the pcb 🔥

that's also 16a on the 2 12v lines on your 24pin😂


----------



## kryptonfly

des2k... said:


> did you say 10mΩ on pcie slot shunt 🙄 with 1000w vbios ?
> that's 200w over 5 tiny gold traces at the pcb 🔥
> 
> that's also 16a on the 2 12v lines on your 24pin😂


No, the 1000W bios was at stock, no shunts; I don't have the 10mΩ yet. And it's impossible to pull that much & burn my gpu with my setup: the TX seems really OCP-sensitive (manufactured in March 2020 though), and the UPS inverter would beep in that case.


----------



## yzonker

kryptonfly said:


> No, 1000W bios at stock OR shunt. I don't have 10mΩ yet. And impossible to pull so much & burn my gpu with my setup, TX seems really OCP sensitive (manufactured on march 2020 though) and UPS inverter would beep in that case.


You did have the same result as me with Endwalker. Really bizarre. I had wondered if it was the combination of that benchmark and the KP XOC bios. Nothing else I've run will cause my PSU to hit OCP.


----------



## tps3443

UPDATE: (MY GPU IS BOOSTING WRONG IN THIS VIDEO) Evga PX1 was applying a profile from the standard KP 1KW XOC bios (lower offsets), so the card was only running 1995MHz or so. This is wrong. I deleted the profile and reinstalled Evga PX1, which fixed the boosting, so the default boost is actually much higher. I am gonna re-do this video.

My 3090 Kingpin Hydro Copper temps, all stock on the 520 watt LN2 bios (Port Royal stress test).

Also, I'm only running a pull configuration on my fans now. I had it set up push/pull today, but I had a rattling or buzzing sound going on with a few fans.


----------



## SoldierRBT

15.8K under 480W
3090 KPE 1.00v 2145MHz +1300 Mem 








I scored 15 804 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## ManniX-ITA

kryptonfly said:


> but I didn't know there was OCP in UPS.


I have a house in the countryside in Italy and lived there for a little while years ago.
Terrible mains power with frequent blackouts.
I had a couple of the same, older-model APC 1400VA units as well, and they suck for that kind of wattage.
I found a guy on eBay refurbishing APC UPSs at a very good price with extended batteries (3 years lifetime instead of the 12-18 months of the APC originals) at a fraction of the MSRP.

Bought a couple of Smart-UPS 2200VA units:





SUA2200I - APC Smart-UPS 2200VA USB & Serial 230V | Schneider Electric Global
www.se.com





Similar to this one:




APC Smart-UPS, Line Interactive, 2200VA, Tower, 230V, 8 IEC C13 und 2 IEC C19-Stecker, SmartConnect Port+SmartSlot, AVR, LCD - SMT2200IC | APC Deutschland
www.apc.com





This is the real deal.
It was the only one that could support my SLI setup at the time, and another one is doing great for the whole Home Theater setup.

These models are line-interactive, meaning that when there's a power fluctuation or loss the battery will engage.
When it beeps, it means the load is too high and it would not be able to switch to battery if needed.

What matters is not only the power circuit itself but the number of batteries (I think there are 2 inside).
As you can see this thing is huge, double the size; with the extra batteries it can support 1.98kW at 2200VA.
Of course with a 2kW load the battery runtime is just a few minutes  

Look for this model on eBay refurbished by someone good and it will last you forever; this is the ultimate one!


----------



## owikh84

After a few months of waiting and delays I finally managed to complete my new dual loop with some new hardware. So I'm here to share some thermal results of my watercooled Strix 3090 with active backplate...

EK Vector Strix + ABP:



Spoiler

Spoiler
Thermal results after running Heaven Extreme for 30 minutes, stock clocks:


My hardware setup:
CPU: 10900K SP94 @ 5.1GHz
M/B: ASUS ROG Maximus XIII Extreme BIOS 1007
RAM: 4x8GB G.Skill TridentZ RGB DDR4-4266C19 @ 4133 CL17 1.45v
SSD: WD SN850 1TB + SN750 2TB
HDD: Seagate Barracuda 4TB
GPU: ASUS ROG Strix 3090 vBIOS V4 Driver 471.68 Rebar ON
PSU: Corsair AX1500i + CableMod ModFlex Carbon/Black full cable set
Case: Lian Li V3000
Others: Lancool Vertical GPU Mount Bracket (modded) + ADT PCIe4 15cm Riser Cable

Custom loop setup:
CPU: EK-Velocity Nickel+Plexi D-RGB + TG Kryonaut
GPU: EK-Vector Strix + Active Backplate Nickel+Plexi D-RGB + TG Kryonaut paste + TR Extreme Odyssey pads
Rad: 2x EK PE 480mm
Pump: 2x Laing D5 Vario
Distro plate: WV Mech D5 Dual Loop
Fans: 12x EK Vardar-S 120mm D-RGB
Fittings: Bitspower+Barrow+Bykski
Tubing: Barrow PETG 14mm

P.S.: Will take better pics with ARGB of course when I'm free :hyper:


----------



## KedarWolf

owikh84 said:


> After few months of waiting and delays I finally managed to complete my new dual-loop with some new hardware. So, I'm here to share some thermal results of my watercooled Strix 3090 with active backplate...


My setup will be similar once I install my active backplate, except I'm swapping out all my pads from TR Odyssey for GELID Extreme.

Oh, my block and backplate are black though.

Just waiting on the pads to get here.


----------



## owikh84

KedarWolf said:


> My setup will be similar once I install my active backplate, except I'm swapping out all my pads from TR Odyssey for GELID Extreme.
> 
> Oh, my block and backplate are black though.
> 
> Just waiting on the pads to get here.


Nice bro, can't wait to see your results. I also have the GELID Extreme and Ultimate coming, just in case the Odyssey doesn't perform well but so far I'm happy with the thermal improvement it delivered.


----------



## Lobstar

owikh84 said:


> After few months of waiting and delays I finally managed to complete my new dual-loop with some new hardware. So, I'm here to share some thermal results of my watercooled Strix 3090 with active backplate...
>
> Thermal results after running Heaven Extreme for 30 minutes, stock clocks:


Huh, I would have expected better than a 15C delta with an active backplate. Also, the hotspot is higher than I've seen before at 13C over the GPU. Am I missing something? No disrespect meant, I'm just not sure how hard you're pushing your card.


----------



## Lobstar

ManniX-ITA said:


> Had a couple of the same, older model, APC 1400VA as well and they suck for that kind of wattage.


I have the 1300VA 2019 model. The UPS will scream at you and beep, but only because the internal inverter is only rated for around 720W. Even if it screams, the load isn't on the backup power system, so it doesn't actually shut off or anything since it's not using the inverter. I've had mine reporting as high as 1600W in HWiNFO with no issues thus far. But yeah, just don't expect to do that on battery.


----------



## elbramso

Lobstar said:


> What is your mounting pressure like? A guy I know had issue with his temps and all the mounting hardware was like finger tight because he was afraid of hurting the pcb. Not saying you have this issue personally but it's always worth mentioning. In my example the guy got 12°C back.


Now that I think about it - will overtightening the screws lead to bad thermal results? I still don't know why I can't get this block to work 

Maybe someone could have a look at my mounting (it looks different now). As you can see the pressure is uneven, the single mem chip has almost no contact, and the corners of the die don't seem to have good contact either. Maybe I should use some extra washers?


----------



## owikh84

Lobstar said:


> Huh, I would have expected a better than 15C delta with an active backplate. *Also the hotspot is higher than I've seen before at 13C over the GPU. *Am I missing something? No disrespect meant, my not sure how hard you're pushing your card.


I'm also not sure how good my thermal results are. Maybe it's a different load and way of testing.


----------



## gamerMwM

I scored 15 544 in Port Royal
AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Pretty happy with this Strix OC 3090. This baby ran HOT on air. I've read that cards that run hotter perform better when watercooled. I had a different 3090 before and it ran so much cooler that at first I wondered if I had gotten a bad Strix card, but I didn't see anything wrong when I pulled it apart.

I couldn't wait to put it on water, so I built a custom loop and put the card in a Bykski full-coverage block with active backplate, where it stays nice and cool (used Gelid Extreme pads all around). Ambient temp for this run (my best) was 21C on the 1kW KP bios. This was the #5 score with a 5900X (single GPU). Ended with +205 core / +1390 memory OC; got crashes going above that. About to go back to the 520 watt KP bios for my daily-driver OC.


----------



## gfunkernaught

All this UPS talk - are those of you using UPSes for your 3090 builds doing actual work that requires a UPS? Have you looked into power conditioners? The good ones also protect against brownouts.


----------



## tps3443

Lobstar said:


> Huh, I would have expected a better than 15C delta with an active backplate. Also the hotspot is higher than I've seen before at 13C over the GPU. Am I missing something? No disrespect meant, my not sure how hard you're pushing your card.


I thought the same. And I have no active rear cooling at all.


----------



## tps3443

elbramso said:


> Now that I think about it - will overtightening the screws lead to bad thermal results? I still don’t know why I can't get this block to work
> 
> Maybe someone could have a look at my mounting (looks different now). As you can see the pressure is uneven, the single mem has almost no contact and the corners of the DIE don't seem to have good contact either. Maybe I should use some extra washer?


You don’t need washers. It looks like your thermal pads need to be replaced; one side is totally squished out and the other is not. Also, your VRMs are not cooled evenly all the way across. I can see an area where the block hasn’t even flattened the gray inductor paste on your VRMs.

Your die contact will be fine, aside from the fact that your block pressed down a slightly crooked pattern.

The key to this waterblock’s heart is going to be cooling down the back side (it sounds unbelievable, but it’s very much true). Order some 3mm thick thermal pads and fill the entire backplate, like you’re laying dough into a baking pan lol. Cut around the foam sticky pad, cut around the dip-switch hole, and cut around the air-venting holes too. (It looks ugly when you see thermal pad poking through the vent holes.)


I knocked off 10-12C on my GPU temps easily, and probably 13.5C or more on the GPU die temp, by doing this re-mount and the full-cover rear pad. (I also had a slow-turning, sludged-up D5 from a prior loop that ran an old cheap pastel coolant, and half of my high-temp problem was poor water flow, so getting that D5 working right certainly added to this.)

Honestly though, I was previously hitting like 54-59C GPU temp at around 2,130-2,175MHz on the 520W LN2 bios, so my temps have really gone wayyyy down.


First things first: clean that KP waterblock really well with alcohol. You can re-use the inductor paste; I certainly did. Scrape the non-dried paste up, pull the plunger out, and pack it back into the back of the plastic syringe. Clean the whole Kingpin PCB up really, really well with 99% alcohol (check everything carefully). Then spread out a generous, medium-ish amount of really good quality thermal paste and lay brand new 2mm (12-15 W/m·K) thermal pads directly on the PCB and its memory modules. You can give them each a very light press just to make sure they are seated and flat.

Then flip the PCB over, plug in your LED cable, and lay it down on the block, making sure it lines up perfectly. There is a small flared piece on one of the four GPU mounting threads on the block; this flared piece sits roughly flush in the PCB holes (I think there is another one somewhere too). Make sure it’s lined up and flush. Press down on the back of the die firmly once you know it’s right, then press around it, and in every area around the die where you have pads or inductor paste, while still keeping pressure on the back of the die. Then tighten your GPU screws down just barely while keeping pressure on the back of the PCB over the die.

I would pick up the block and PCB and give it a look over by eye; make sure it looks even. Install all of the screws working from the center die outward to the edges of the PCB. Pick up the card and look at it again. Lay it flat and give the memory modules a good press down from the back of the PCB, making sure the pads are getting pressed down evenly. Check your screws again and make sure it looks good. Also, loosen the screws again if you need to check that you’re tightening them in the correct order. You will get good contact. (I wouldn’t remove the PCB; you shouldn’t have to.) Just take your time, and don’t over-think it.


PS: You can use a + symbol of paste on the VRMs; that’s what I did, and it works better than a single line. Use a good amount. I tested a 0.5mm Fujipoly thermal pad here, laid the block down, and it didn’t touch the pad or leave a single mark on it. This stuff is soft and it’s a thermal gap filler with very low conductivity, only like 1.8 or 2.2 W/m·K, but it’s good enough. However, I do wonder what would happen with some of that really good thermal gap filler that is like 8-10 W/m·K, or even just K5 Pro, which is over 5 W/m·K alone.

Covering the rear PCB with an entire thermal pad is the most important thing of all. I got this information from someone else on another forum: someone shared his technique, and people responded to him with really, really good comments. So I tried it myself, and yeah, it really does work!! You’ll have similar performance to some cards with active rear cooling.

The funny thing about the full-cover rear pad is that they were both using those cheap 5 W/m·K soft pads and just covering the entire card’s backside lol. Soft, thick, and easy to press down. I went with Optimus pads, so it’s probably similar or maybe even slightly more effective. None of those users shared real numbers, just magical fluffy words of excitement, like I did originally.


I really struggled with this block in the beginning. But just know this card’s PCB is really, really conductive; it fills with heat everywhere. If you’re re-doing the mount, order some Minus Pad 8 or something similar; those will be just fine for the back of the PCB. You don’t want something too firm. And make sure you gently use a toothbrush with alcohol on the back of the die to clean the ceramics back there; they get packed with pieces of thermal pad. I’d also recommend scraping the old thermal material out from between the VRMs, MOSFETs, and memory modules. If you’re gonna do it right, you may as well do every single thing you can to make it as perfect as possible and go the extra mile. (Then you have nothing to question later, or after it’s up and running, asking yourself: could I have achieved even lower numbers than this? Lol)
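For a rough sense of why both pad thickness and conductivity matter, the per-area thermal resistance of a gap pad is approximately thickness divided by conductivity. A minimal sketch; the thickness and W/m·K figures below are illustrative assumptions pulled from the numbers discussed above, not measurements:

```python
# Rough per-area thermal resistance of a gap pad: R = t / k.
# Lower is better. Values are illustrative, not measured data.

def pad_resistance_mm2k_per_w(thickness_mm: float, conductivity_w_mk: float) -> float:
    """Thermal resistance per unit area in mm^2*K/W for a pad of
    thickness_mm (mm) and conductivity_w_mk (W/m·K)."""
    return (thickness_mm / 1000.0) / conductivity_w_mk * 1_000_000

pads = {
    "0.5mm soft gap filler (~2 W/m.K)": (0.5, 2.0),
    "2.0mm backplate pad   (~6 W/m.K)": (2.0, 6.0),
    "1.0mm high-end pad   (~15 W/m.K)": (1.0, 15.0),
}

for name, (t, k) in pads.items():
    print(f"{name}: {pad_resistance_mm2k_per_w(t, k):.0f} mm^2*K/W")
```

This is why a thin, high-conductivity pad can beat a thicker, softer one even though the softer pad mounts more easily.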


----------



## yzonker

tps3443 said:


> I thought the same. And I have no active rear cooling at all.


Seems typical to me for an EK block and first mount attempt.


----------



## ManniX-ITA

gfunkernaught said:


> All this UPS talk: are those of you using UPSes for your 3090 builds doing actual work that requires a UPS? Have you looked into power conditioners? The good ones also protect against brownouts.


Power conditioners are more focused on noise filtering: capacitors instead of batteries.
Basic ones barely filter at all, mostly protecting against brownouts and spikes.
For good noise filtering you need to spend a lot of money.
But we are talking about brownout protection for microseconds, and only if the incoming voltage stays above a certain value.
Otherwise the power is cut. Capacitors can store a very limited amount of energy.
It's more for Hi-Fi, where these kinds of issues can kill the appliances and/or degrade the sound output.

It's the same kind of filtering and protection you get from a high-quality PSU, which you probably already have with a 3090.
Just another layer on top doesn't really bring that much.

Whether it's work or play, a few blackouts can corrupt your disks and mess up your system.
That's why it's better to have a UPS.
I got my UPS when they cut the power 3 times in a row and messed up my Windows install, around the first week I was there.
In the rural areas they don't give a frack; when they have to do some work they just switch it on and off like it's their closet's light switch.


----------



## J7SC

owikh84 said:


> I'm also not sure how well are my thermal results. Maybe different load and way of testing.


I have a 3090 Strix that until recently had the same EK block, but I switched to another make because of several manufacturing issues on that block (posted here earlier). Anyway, one thing I did notice in your pics is that you have the GPU-side outlet connected as the inlet (the inlet should be on the left as viewed from the front). This will make some difference on GPU and hotspot temps.


----------



## gfunkernaught

ManniX-ITA said:


> Power conditioners are more focused on noise filtering.
> Capacitors instead of batteries.
> Basic ones they don't filter almost at all, mostly protecting against brownouts and spikes.
> For good noise filtering you need to spend a lot of money.
> But we are talking about a brownout protection for microseconds and only if the incoming voltage stays above a certain value.
> Otherwise the power is cut. Capacitors can store a very limited amount of energy.
> It's more for Hi-Fi where these kind of issues can kill the appliances and/or degrade the sound output.
> 
> It's the same kind of filtering and protection you get from a high quality PSU which you probably have already with a 3090.
> Just another layer on top doesn't really bring that much.
> 
> Whatever is work or play, a few blackouts can corrupt your disks and mess your system.
> That's why it's better to have a UPS.
> I got my UPS when they cut the power 3 times in a row and messed up my Windows install, around the first week I was there.
> In the rural areas they don't give a frack and when they have to do some work they just switch it on and off like it's their closet's light switch.


Mid-to-top-range PSUs like the Corsair HX and AX series don't provide power conditioning; they have surge suppression and overload protections. But providing a clean and steady 120V is what a power conditioner does, and some higher-end UPS units. I'm not saying UPSes aren't necessary by any means. I used one for a long time, then I got a 3090 which kept overloading it. So I asked myself: do I need uninterrupted power for gaming? No. "Disk" corruption isn't really a thing for SSDs as much as it was for HDDs. For example, a Raspberry Pi: you can kill power to that thing over and over, and if files get corrupted the system will repair itself. With HDDs, on the other hand, you can end up with half a file written to a sector, and that could cause your OS to not boot. Even still, at my job there are plenty of PCs with HDDs that aren't on UPSes. Once in a while we see corrupt file systems; an offline chkdsk fixes that.
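On the overload point: UPSes are rated in VA, and the usable wattage is the VA rating times the unit's power factor, so a transient-heavy 3090 rig needs real headroom. A back-of-the-envelope sketch; the 600 W load, 0.9 power factor, and 1.3x headroom multiplier are illustrative assumptions, not measurements:

```python
# Ballpark UPS sizing for a high-draw 3090 rig.
# A transient-heavy Ampere card can spike well past its average draw,
# so the headroom multiplier matters as much as the nominal load.

def min_ups_va(load_watts: float, ups_power_factor: float = 0.9,
               headroom: float = 1.3) -> float:
    """Minimum UPS apparent-power rating (VA) for a given real load (W).

    ups_power_factor: the UPS's rated W/VA ratio (commonly 0.6-1.0).
    headroom: multiplier for transient spikes and battery aging.
    """
    return load_watts * headroom / ups_power_factor

# Example: ~600 W system (power-limited 3090 + CPU + the rest)
print(f"{min_ups_va(600):.0f} VA")  # ~867 VA -> shop for a 1000-1500 VA unit
```

A unit sized to the average draw alone is exactly the kind that trips its overload protection when the card spikes.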


----------



## ManniX-ITA

gfunkernaught said:


> Mid to top range psus like the corair hx and ax series don't provide power conditioning. They have surge suppression and over protections.


They provide basic noise filtering, which is exactly what a cheap "power conditioner" does.
The good ones, which rebuild the sine wave, are very, very expensive.
I'm just saying it doesn't make sense to spend that money on a PC.
That kind of noise filtering makes sense only for very expensive Hi-Fi gear, in my opinion.

You can get the same "class" of filtering from a UPS too, if it's a true Online type, not like the APC models above, which are Line-Interactive.
On a true Online model the AC input is never directly connected to the AC output; it always runs through the battery.
But those models are also very expensive today.
They weren't many years ago, till APC figured out they could switch the "cheap" Online models to Line-Interactive and keep the same price...



gfunkernaught said:


> No. "Disk" corruption isn't really a thing as much as it was for HDDs.


It's not as bad, but it's the same.
Today Windows, if you keep the disk cache as write-through, does a good job of flushing data as quickly as possible.
Nothing changed except that.
SSDs are even worse than HDDs, because they do a lot of background work at idle.
It's just a matter of luck and what you are running at the time.
The Logitech software often gets corrupted configuration files after a BSOD/power outage.
Bad coding leads to more corruption, but there's no magic here.
An unclean shutdown is always very dangerous.



gfunkernaught said:


> You can kill power to that thing over and over and if files get corrupted the system will repair itself.


That's a myth. As above, it's more a software optimization.
Unless you use a specific filesystem, it's normally ext4.
That's more robust than NTFS, but it gets corrupted as well without a clean shutdown.
Depends on what was running at the time.


----------



## yzonker

J7SC said:


> I have a 3090 Strix that until recently had the same EK block, but I switched to another make because of several manufacturing issues on that block (posted here earlier). Anyway, one thing I did notice in your pics is that you have the GPU-side outlet connected as the inlet (the inlet should be on the left as viewed from the front). This will make some difference on GPU and hotspot temps.


Seems like I remember the active backplate changing this, though? From looking at the manual, I think it's correct.



https://www.ekwb.com/shop/EK-IM/EK-IM-3831109836538.pdf


----------



## ManniX-ITA

yzonker said:


> Seems like I remember the active backplate changes this though? From looking at the manual I think it's correct.


I'm planning to use the Vector with the ABP too. And now I'm very confused 

On the Vector manual I see this:










And on the ABP manual this:










Not sure what it means by shifted 180 degrees...

My assumption was that with the ABP you have to use either one or the other pair of ports,
without any specific inlet/outlet direction, just as without the ABP.
But I assumed it's best to have the inlet on the front-plate port so the flow goes over the GPU chip first...


----------



## des2k...

ManniX-ITA said:


> I'm planning too to use the Vector with the ABP. And now I'm very confused
> 
> On the Vector manual I see this:
> 
> View attachment 2523084
> 
> 
> And on the ABP manual this:
> 
> View attachment 2523085
> 
> 
> Not sure what it means shifted for 180 degrees...
> 
> My assumption was that with ABP you have to use either one or the other pair of ports.
> Without any specific inlet/outlet directions as without the ABP.
> But I assumed it's best to have the inlet on the port on the front plate for the flow to go over the GPU chip first...


180 degrees, as in: remove the screws on the terminal, rotate it 180😁 and install it again


----------



## KedarWolf

Deleted


----------



## KedarWolf

owikh84 said:


> Nice bro, can't wait to see your results. I also have the GELID Extreme and Ultimate coming, just in case the Odyssey doesn't perform well but so far I'm happy with the thermal improvement it delivered.


After you installed the active backplate, did you do leak testing?

I feel like if things are not perfect, you're risking the chance of having a leak.

And my motherboard is horizontal, so leaks would drip directly on it.


----------



## Lobstar

elbramso said:


> Now that I think about it - will overtightening the screws lead to bad thermal results? I still don’t know why I can't get this block to work


It shouldn't? The only situation I can think of where over-tightening would be detrimental is if some of the fasteners were of unequal length. Maybe get a cheap digital micrometer, some sandpaper, and an inch-pound torque driver to confirm. At least on the Optimus block there are only three sizes of fasteners used to attach the block to the card, so it's easy to check. If other manufacturers use many different sizes (my EKWB monoblock for instance ... ), this would be impossible without repeated remounting and testing.


----------



## ManniX-ITA

des2k... said:


> 180 degree, as in remove the screws on the terminal, rotate 180😁 install it again


Yes, I just can't figure out what the resulting position would be 
I thought it had only one fixed position.
Where would the tubes' inlet/outlet end up?


----------



## gfunkernaught

ManniX-ITA said:


> They provide basic noise filtering which is exactly what a cheap "power conditioner" does.
> The good ones, which does rebuild the sinu wave, are very very expensive.
> Just saying doesn't make sense to spend that money for a PC.
> That kind of noise filtering makes sense only for very expensive Hi-Fi parts, in my opinion.
> 
> You can get the same "class" of filtering also from an UPS if it's a true Online type, not like the APC models above which are Interactive.
> On a true Online model the AC is never directly connected to the AC output but it's always on battery.
> But those models are also very expensive today.
> They weren't many years ago till APC found out they could sell switch the "cheap" Online models to Online Interactive and keep the same price...
> 
> 
> 
> It's not as bad but it's the same.
> Today Windows, if you keep disk cache as write through, makes a good job in flushing data as quickly as possible.
> Nothing changed except that.
> SSDs are even worse than HDD cause they do a lot of background work in idle.
> It's just a matter of luck and what you are running at the time.
> The Logitech software often gets corrupted configuration files with a BSOD/power outage.
> Bad coding leads to more corruption but there's no magic here.
> Unclean shutdown is always very dangerous.
> 
> 
> 
> That's a myth. As above it's more a software optimization.
> But unless you use a specific filesystem, normally it's ext4.
> It's more robust than NTFS but it gets corrupted as well without a clean shutdown.
> Depends on what was running at the time.


Like I said, it is repairable without requiring a reinstallation of your OS; that's the point I was trying to make. 
As far as spending money to protect my expensive PC: very worth it. I find power conditioners more affordable than a UPS that can handle a higher-power-limited 3090 along with all my other components. It's not just noise that's the problem, which yes, in audio can be an issue; it's making sure your PSU gets a constant, smooth 120V, as long as there is power of course. I'm not saying it's perfectly fine to power off a device while its storage is being written to, regardless of media type. But I've always found SSDs to be more _tolerant_ of power outages than HDDs in terms of filesystem corruption.

I've had a Raspberry Pi with a loose power cable go out on me so many times, and the filesystem is intact. The real risky part is when the system is being written to.


----------



## ManniX-ITA

gfunkernaught said:


> But I always found SSD to be more _tolerant_ to power outage than a HDD in terms of filesystem corruption.


I didn't notice such a big difference.
I think almost 90% of the filesystem corruption I got in the last 5-10 years was due to the SATA cables.
The big difference for me was the M.2 connector 



gfunkernaught said:


> I've had a raspberry pi with a loose power cable go out on me so many times and the filesystem is intact. The real risky part is when the system is being written to.


Depends what you run. Specific distributions, like the ones for emulators or Kodi, often optimize the mount options to enhance reliability at the expense of performance.
A more "complex" way is to make the system partition read-only; that's the best way to avoid boot issues.



Lobstar said:


> It shouldn't? The only situation I can think of that would cause over tightening to be detrimental would be if some of the fasteners were of unequal length.


The EK manual has a very big warning about it.
It's very dangerous to tighten too much.
Every material expands or shrinks differently with temperature.
Copper is a very soft material, and too much pressure can make the contact worse when it gets hot.
A harder material like aluminum can break a die when hot if the pressure is too high.

CPU mounts are designed to apply a fixed pressure however tight you make the screws.
But GPU blocks aren't; there are only the springs on the screws, so you need to get it right.


----------



## J7SC

yzonker said:


> Seems like I remember the active backplate changes this though? From looking at the manual I think it's correct.
> 
> 
> 
> https://www.ekwb.com/shop/EK-IM/EK-IM-3831109836538.pdf


...yeah, correct per the manual, but I still think it is not as efficient (nor what is usually shown without an active backplate)... you want the coolest water to hit the channel to the jet plate over the die first. That's also how the Bykski Strix active-backplate setup shows it.

On the other hand, when in doubt, follow the manual; with solid enough cooling capacity and high enough flow speed, it probably only makes a few degrees of difference.


----------



## Lobstar

ManniX-ITA said:


> The EK manual has a very big warning about it.
> It's very dangerous to tighten too much.


Optimus advised it's basically impossible and their design makes it worry-free. But then again, my Optimus block is nearly 8 pounds in total, so maybe there is a material difference. Gaskets and o-rings make thermal expansion a non-issue for water retention. The other components, like the PCB, will be expanding and contracting just as much, so there is little worry there. The biggest worry for me is fastener quality, as it's easy to twist the head off an M-series screw if it's poor quality; then you have to tap/drill it out. I torqued everything to 1.4Nm and gave everything an extra half turn.


















Optimus Waterblock


Hello there! I hope you can help me. I want to buy an Optimus Foundation nickel plated, but I see that some units have issues like acrylic cracking and bad nickel plating... Has anyone here ever had that or other issues with these blocks?




www.overclock.net


----------



## ManniX-ITA

Lobstar said:


> I torqued everything to 1.4Nm/cm2 and gave everything an extra half turn.


I was looking for some info on how much torque to apply, but I didn't find any.
Is 1.4Nm a value recommended by some manufacturer, or just what you think is reasonable from your experience?


----------



## des2k...

ManniX-ITA said:


> I was looking for some info on how much pressure to apply but I didn't find any.
> Is 1.4Nm a value recommended by some manufacturer or what you think it's reasonable in your experience?


The PCB rests on standoffs, and the 3090 die is big too. On my EK block, for example, I have metal washers on top of plastic ones, and my 4 core screws are torqued to the max (big screwdriver, put-your-arm-weight-into-them type of torque😂).

If I try to take them off now and don't press / put my arm weight on them, they will get stripped lol.

My PCB is straight, so I don't worry about it.


----------



## Falkentyne

Lobstar said:


> Optimus advised its basically impossible and their design makes it worry free. But then again my optimus block is nearly 8 pounds in total so maybe there is a material difference. Gaskets and o-rings make thermal expansion a non-issue for water retention. The other components like the PCB will be expanding and contracting just as much so there is little worry there. The biggest worry for me is quality of fasteners as it's easy to twist the head off an M-series screw if it's poor quality. Then you have to tap/drill it out. I torqued everything to 1.4Nm/cm2 and gave everything an extra half turn.
> 
> View attachment 2523099
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Optimus Waterblock
> 
> 
> Hello there! I hope you can help me. I want buy a Optimus Foundation nickel plated, but I see than some units have issues like acrilic craking and bad nickel plating... Someone here ever had that or another issues with the blocks?
> 
> 
> 
> 
> www.overclock.net


Wow, nice item. Might get one in the future.
Question though: does this accept 1/8" bits like the ones in the iFixit kit? It seems to accept only 1/4" bits, but the bits used for Torx T5/T6 and mini Phillips heads are 1/8".


----------



## Lobstar

ManniX-ITA said:


> I was looking for some info on how much pressure to apply but I didn't find any.
> Is 1.4Nm a value recommended by some manufacturer or what you think it's reasonable in your experience?


No, I used the info a bit lower down that the Threadripper mounting pressure is 1.5Nm and just decided to back off a tiny bit.
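Since the torque figures in this thread are variously given in Nm while some drivers are marked in inch-pounds, the conversion is just a constant factor (1 N·m ≈ 8.8507 in·lb). A quick sketch using the values mentioned above:

```python
# Convert N·m torque-driver settings to inch-pounds.
IN_LB_PER_NM = 8.8507  # 1 N·m ≈ 8.8507 in·lb

def nm_to_inlb(nm: float) -> float:
    return nm * IN_LB_PER_NM

for nm in (1.4, 1.5):  # block torque and Threadripper spec from the thread
    print(f"{nm} Nm ≈ {nm_to_inlb(nm):.1f} in-lb")
# 1.4 Nm ≈ 12.4 in-lb, 1.5 Nm ≈ 13.3 in-lb
```

Handy if your torque driver only has an in-lb scale.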


Falkentyne said:


> Wow nice item. Might get one for the future.
> Question though. Does this accept 1/8th bits like the ones in the Ifixit kit? This seems to accept only 1/4" bits, but bits used for like Torx T5/T6 and mini philips heads are 1/8th"


I have a Tekton driver set that has a 1/4 to 1/8 adapter. Edit: maybe it didn't come from there ... hmmm.






Amazon.com: TEKTON Everybit Ratchet Screwdriver and Bit Set (135-Piece) | 2841 : Tools & Home Improvement


Amazon.com: TEKTON Everybit Ratchet Screwdriver and Bit Set (135-Piece) | 2841 : Tools & Home Improvement



www.amazon.com





Here is the link to the torque driver. I had a cheaper one but had tool envy.


Amazon.com


----------



## Falkentyne

Lobstar said:


> No, I used the info a bit lower down that the Threadripper mounting pressure is 1.5Nm and just decided to back off a tiny bit.
> 
> I have a Tekton driver set that has a 1/4 to 1/8 adapter. Edit: maybe it didn't come from there ... hmmm.
> 
> 
> 
> 
> 
> 
> Amazon.com: TEKTON Everybit Ratchet Screwdriver and Bit Set (135-Piece) | 2841 : Tools & Home Improvement
> 
> 
> Amazon.com: TEKTON Everybit Ratchet Screwdriver and Bit Set (135-Piece) | 2841 : Tools & Home Improvement
> 
> 
> 
> www.amazon.com
> 
> 
> 
> 
> 
> Here is the link to the torque driver. I had a cheaper one but had tool envy.
> 
> 
> Amazon.com


Let me know if you find a 1/4" to 1/8" adapter that would work with that torque driver. I don't want to spend $150 on something I can't use.

The iFixit bits that go in the "small" iFixit driver (it comes with two) are tiny. The larger iFixit driver uses standard bits for the larger sizes.


----------



## Lobstar

Falkentyne said:


> Let me know if you find a 1/4 to 1/8th adapter that would work with that torque driver.


OK, went back and looked. The Tekton kit has 1/4" bits down to T5. That's what I used, but I just wanted to verify for you.


----------



## yzonker

des2k... said:


> the pcb rest on standoffs, the 3090 die is big too, on my EK block for example I have metal washers on top of plastic ones, my 4 core screws are torqued at max (big screw driver, put your arm weight into them type of torque😂)
> 
> If try to take them off now and I don't press / put my arm weight on them, they will get stripped lol
> 
> My pcb is straight, so I don't worry about it.


I guess I need a better picture. I don't understand how putting an extra washer on the screws does anything if the card is already just sitting on the standoffs?

My Corsair block only has screws that go through the backplate too. None sit against the PCB itself.


----------



## tps3443

yzonker said:


> I guess I need a better picture. I don't understand how putting an extra washer on the screws does anything if the card is already just sitting on the standoffs?
> 
> My Corsair block only has screws that go through the backplate too. None sit against the PCB itself.



Because the screws bottom out in the standoffs, which prevents torquing them down. The washer reduces the thread depth that goes into the standoff, and in turn lets you really torque it down. If the threads bottom out in the standoffs, you'll hit a wall before you can actually clamp the PCB to the block with the screw.


----------



## GRABibus

Testing a Suprim X currently.


KP 1000W Rebar bios, 26°c ambient, stock air cooler :









I scored 15 331 in Port Royal


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}




www.3dmark.com















566W max power draw

The Suprim X stock cooler (with fans at 100%) is just amazing


----------



## Falkentyne

yzonker said:


> I guess I need a better picture. I don't understand how putting an extra washer on the screws does anything if the card is already just sitting on the standoffs?
> 
> My Corsair block only has screws that go through the backplate too. None sit against the PCB itself.


An extra washer can potentially DECREASE mounting pressure, not increase it.
You have to be careful with how the leaf spring is actually designed here!
I tested small rubber washers on my 3090 FE using Fujifilm Prescale Ultra Low pressure paper, and they reduced the pressure. I'm assuming this is because the contact pressure comes from the inner ring of the leaf spring, NOT from the screw-hole mounts; not being able to screw the screws fully down to the hard stop prevents pressure on the inner ring, since that is determined by how far the mounting grips travel down to the PCB, even though the downward pressure on the PCB is increased.

I'm not sure why washer mods increased mounting pressure on AMD cards (Vega 64, RX 5700); maybe the design of the leaf spring is different, and there are springs on the screws and the balancing is different?


----------



## Lobstar

Falkentyne said:


> I'm not sure why washer mods increased mounting pressure on AMD cards (Vega 64, RX 5700)


Depends on where they put the washer. I'm not familiar with those cards personally, but if you're putting a washer between the crown of the fastener and the surface, that could increase pressure if there isn't sufficient depth to the hole drilled in the receiving side. The opposite is obviously true if you put a washer between the things being held together.


----------



## owikh84

KedarWolf said:


> After you installed the active backplate, did you do leak testing?
> 
> I feel like if things are not perfect, you're risking the chance of having a leak.
> 
> And my motherboard is horizontal, so leaks would drip directly on it.


Yes, air leak testing was done before filling up the loop.


----------



## tps3443

Falkentyne said:


> A extra washer can potentially DECREASE mounting pressure, not increase it.
> You have to be careful with how the leaf spring is actually designed here!
> I tested small rubber washers on my 3090 FE, using Fujifilm contact Ultra Low Prescale pressure paper and it reduced the pressure. I'm assuming this is because the contact pressure is from the inner ring of the leaf spring, and NOT from the screwhole mounts, so not being able to fully screw down the screws to the complete stop would prevent pressure on the inner ring, since that would be determined by distance the mounting grips would go down to the PCB, even though the downwards pressure on the PCB is increased.
> 
> I'm not sure why washer mods increased mounting pressure on AMD cards (Vega 64, RX 5700), maybe the design of the leaf spring is different and there are springs on the screws and the balancing is different?


I only use an additional layer of those black stick-on thin plastic circle PCB protectors; they are a 3M stick-on PCB sticker. Although my card doesn’t use the leaf-spring mounting.

My card didn’t really need more mounting pressure. But you can feel those screws bottom out in the threads, so a slightly shorter screw is helpful.


----------



## elbramso

tps3443 said:


> I only use an additional layer of those black stick on thin plastic circle PCB protectors. They are a 3M stick on PCB sticker. Although my card doesn’t use the spring leaf mounting.
> 
> My card didn’t really need more mounting pressure. But you can feel those threads bottom out in the threads. So a slightly shorter screw is helpful.


So you used the "extra" washer as described, because I didn't  I doubt this will change much, but I'll give it a try


----------



## des2k...

elbramso said:


> So you used the "extra" washer like described because I didn't  I doubt this would change much but I'll give it a try


I looked at the picture and you have horrible die coverage / lack of pressure on the core. If all else fails, because that waterblock design is flat, you can easily drop the standoffs (sand them down) by 0.5mm and use pads that are 0.5mm thinner.
Then repeat with another 0.5mm if you still get poor die coverage / no pressure.
This assumes the block area around the die is flat; you might need to lap that.

On the positive side, you can just get the Optimus block; out of the box it will have an amazing delta, no need to waste time/pads.

I modded my EK block, but that one was cheap and replacements were easy to buy.


----------



## West.

After a few days of frustration, I finally found a solution for my 3090 HOF thermal pads / mounting issue. This card is extremely sensitive to VRAM pad thickness. I acquired some factory thermal pads to compare with the aftermarket ones and found that both are 1mm, but the stock ones are much softer. Due to the design of the HOF heatsink, the VRAM pads are thinner (1mm) than on other brands' cards (2mm). I tried Gelid Ultimate and Odyssey on the front; both result in a bad core mount no matter how I compress them.

If any HOF owners wanna change pads, I suggest you use Laird HD90000, since it is extremely soft and compressible: 1mm for VRAM, 1.5mm for power stages and the rest. I tried to compress the Ultimate and Odyssey pads with a ruler and some force; not working at all. Did a quick VRAM mining test with a room temp of 27C: core 1100, mem +1400, PL 300W results in at most 84C. I kept all the leftover pads and put them between the backplate heatsink and the backplate. After installing the backplate heatsink, temps drop a further 4-8C depending on room temp.


----------



## Falkentyne

West. said:


> After a few days of frustration, I finally found a solution for my 3090 HOF thermal pad / mounting issue. This card is extremely sensitive to vram pad thickness. I acquired some factory thermal pads to compare with the aftermarket ones and found out both are 1mm, but the stock ones are much softer. Due to the design of the HOF heatsink, the vram pads are thinner (1mm) than on other brands' cards (2mm). I tried Gelid Ultimates and Odyssey on the front. Both result in a bad core mount no matter how I compress them.
> 
> If any HOF owners wanna change pads, I suggest you use Laird HD90000 since they are extremely soft and compressible: 1mm for vram, 1.5mm for power stages and others. I tried to compress Ultimates and Odyssey with a ruler and some force, not working at all. Did a quick vram mining test at a room temp of 27°C; core 1100, mem +1400, PL 300W results in at most 84°C. I kept all the wasted pads and put them between the backplate heatsink and the backplate. After installing the backplate heatsink, temps drop a further 4-8°C depending on room temp.


You're supposed to use Gelid Extreme 1mm on the core side, not Gelid Ultimate.


----------



## West.

Falkentyne said:


> You're supposed to use Gelid Extreme 1mm on the core side, not Gelid Ultimate.


Yeah, that’s the mistake I made. Local shops don't have Extreme, so I went for Ultimates. It didn't work out and I paid the price. Luckily I recycled some pads for my backplate heatsink.


----------



## kryptonfly

yzonker said:


> You did have the same result as me with Endwalker. Kinda bizarre. I had wondered if it was the combination of that benchmark and the KP XOC bios. Really bizarre. Nothing else I've run will cause my PSU to hit OCP.


The PSU is the cause; you can take a look at the official bench thread => Final Fantasy XIV: Endwalker GPU benchmark/Post Your Scores
As for me, it triggered OCP only with the 1000W vbios (no shunt) at PL +50% in this bench on my gpu. No vbios at all is compatible with my Gigabyte 2x8-pin, except from Gigabyte ofc. Probes were completely crazy *AT IDLE *(hidden #3 8-pin):



Makes sense OCP occurs with kinda weird numbers! At least OCP works well, no doubt. The 15mΩ shunt after that worked perfectly, but there was a bad solder joint so I removed everything 3 days later. I will see if it holds in this bench with 10mΩ. Very high framerates are really power hungry! Such as Firestrike, Superposition, Heaven, 3DMark 11, A Plague Tale, Outriders and so on... We can drastically lower the GPU vCore, mainly in DX11 loads; it's a different curve than in DX12/RTX games, but Endwalker is the only one I'm aware of that triggers PSU OCP.
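For anyone following the shunt talk, the effect of stacking a resistor on the stock shunt is just parallel-resistance arithmetic. A quick sketch, assuming the stock current-sense shunts are 5 mΩ (typical on reference 3090 boards, but verify on your own PCB before soldering anything):

```python
# Parallel-shunt arithmetic behind the mod discussed above.
# ASSUMPTION: stock current-sense shunts are 5 mOhm (typical for
# reference 3090 boards) -- verify on your own card first.

def effective_shunt(stock_mohm: float, added_mohm: float) -> float:
    """Resistance of an added shunt stacked in parallel with the stock one."""
    return (stock_mohm * added_mohm) / (stock_mohm + added_mohm)

def power_multiplier(stock_mohm: float, added_mohm: float) -> float:
    """The card under-reads current by this factor, so the effective
    power limit rises by the same factor."""
    return stock_mohm / effective_shunt(stock_mohm, added_mohm)

for added in (15.0, 10.0):
    print(f"{added:.0f} mOhm over 5 mOhm -> x{power_multiplier(5.0, added):.2f} power limit")
```

Under that assumption, a 15 mΩ stack lifts the effective limit by about a third and a 10 mΩ stack by half, which is why the 10 mΩ attempt is the harder test for the PSU.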

Btw, how CPU limited am I, according to you? This bench is really cpu (IPC) & memory (latency) dependent. I ran "1080p maximum", definitely a cpu benchmark at these settings for an RTX 3090:

*1080p maximum* :
 

Bios Gigabyte Gaming OC 390W
No ReBAR (incompatible)
Driver : max perf
Low latency ultra
G-Sync disabled
Minimum Frame Rate = 80 fps 

The first 3 scenes are cpu bound and the last 2 are gpu bound; I'm a little power limited at the end. The green graph is GPU FPS and the "blue" one is CPU FPS (not present here, noticeable on some systems, see thread above). I don't know if I should sell my current rig for a 10900KF/10850K or wait for Raptor Lake, but it will lose value if I wait too long


ManniX-ITA said:


> I have a house in the countryside in Italy and lived there for a little while years ago.
> Terrible power supply with frequent blackouts.
> Had a couple of the same, older model, APC 1400VA as well and they suck for that kind of wattage.
> Found out there was a guy on eBay refurbishing APC UPSs at a very good price with extended batteries (3 years lifetime instead of 12-18 months APC original) at a fraction of the MSRP.
> 
> Bought a couple of Smart 2200VA:
> 
> 
> 
> 
> 
> SUA2200I - APC Smart-UPS 2200VA USB & Serial 230V | Schneider Electric Global
> 
> 
> Schneider Electric Global. SUA2200I - APC Smart-UPS 2200VA USB & Serial 230V.
> 
> 
> 
> 
> www.se.com
> 
> 
> 
> 
> 
> Similar to this one:
> 
> 
> 
> 
> __
> 
> 
> 
> 
> 
> APC Smart-UPS, Line Interactive, 2200VA, Tower, 230V, 8 IEC C13 und 2 IEC C19-Stecker, SmartConnect Port+SmartSlot, AVR, LCD - SMT2200IC | APC Deutschland
> 
> 
> SMT2200IC - APC Smart-UPS, Line Interactive, 2200VA, Tower, 230V, 8 IEC C13 und 2 IEC C19-Stecker, SmartConnect Port+SmartSlot, AVR, LCD | APC Deutschland
> 
> 
> 
> 
> www.apc.com
> 
> 
> 
> 
> 
> This is the real deal.
> Only one that could support my SLI setup at the time.
> And another doing great for the whole Home Theater setup.
> 
> These models are Online Interactive; meaning when there's a power fluctuation or loss the battery will engage.
> When it beeps means the load is too high and in case of engagement it will not switch to battery if needed.
> 
> What matters is not only the power circuit itself but the number of batteries (I think there are 2 inside).
> As you can see this thing is huge, double the size; this way with more batteries it can support 1.98kW with 2200VA.
> Of course with 2kW load the battery runtime is just a few minutes
> 
> Look for this model on eBay refurbished by someone good and it will last you forever, this is the ultimate one!


Too big for me lol! Mine is also "line interactive"; I calibrated it according to my usage before, but I didn't know I would need more. I think my PSU can handle the 10mΩ shunt load but the UPS will beep for sure. Waiting on some 10mΩ shunts to check.


gfunkernaught said:


> All this UPS talk: are those of you using UPSs for your 3090 builds doing actual work that requires a UPS? Have you looked into power conditioners? The good ones also protect against brownouts.


Precisely, I get a few brownouts here; I'm at the end of the line, the very last. The lights fluctuate and sometimes the UPS switches to batteries, a must-have for me.


West. said:


> Yeah, that’s the mistake I made. Local shops don't have Extreme, so I went for Ultimates. It didn't work out and I paid the price. Luckily I recycled some pads for my backplate heatsink.


You mean between the backplate and heatsink, not between the pcb and backplate? Me too, I ordered Thermalright Extreme Odyssey 120*120*2mm, but it was just a waste of time & money, too hard & thick for the Bykski. I ordered Gelid Extreme 120*120*1mm and 80*40*1.5mm; I've read somewhere Bykski spacing is 1.2mm. I'm thinking of putting some really thin 0.5mm copper plates + 1mm pads on the GPU side to reach around 1.2mm in case 1mm is too thin (unequal pressure, bad gpu contact). Amazon.com: Antrader 15 x 15 x 0.5mm Laptop GPU CPU Heatsink Thermal Copper Pad Shims Pack of 30 : Electronics


----------



## ENTERPRISE

Quick one. 

I am running the KP 520Watt BIOS. Have any of you tried increasing the core voltage at all ? If so is there a max safe value ?


----------



## KedarWolf

kryptonfly said:


> The PSU is the cause; you can take a look at the official bench thread => Final Fantasy XIV: Endwalker GPU benchmark/Post Your Scores
> As for me, it triggered OCP only with the 1000W vbios (no shunt) at PL +50% in this bench on my gpu. No vbios at all is compatible with my Gigabyte 2x8-pin, except from Gigabyte ofc. Probes were completely crazy *AT IDLE *(hidden #3 8-pin):
> 
> 
> 
> Makes sense OCP occurs with kinda weird numbers! At least OCP works well, no doubt. The 15mΩ shunt after that worked perfectly, but there was a bad solder joint so I removed everything 3 days later. I will see if it holds in this bench with 10mΩ. Very high framerates are really power hungry! Such as Firestrike, Superposition, Heaven, 3DMark 11, A Plague Tale, Outriders and so on... We can drastically lower the GPU vCore, mainly in DX11 loads; it's a different curve than in DX12/RTX games, but Endwalker is the only one I'm aware of that triggers PSU OCP.
> 
> Btw, how CPU limited am I, according to you? This bench is really cpu (IPC) & memory (latency) dependent. I ran "1080p maximum", definitely a cpu benchmark at these settings for an RTX 3090:
> 
> *1080p maximum* :
> 
> 
> Bios Gigabyte Gaming OC 390W
> No ReBAR (incompatible)
> Driver : max perf
> Low latency ultra
> G-Sync disabled
> Minimum Frame Rate = 80 fps
> 
> The first 3 scenes are cpu bound and the last 2 are gpu bound; I'm a little power limited at the end. The green graph is GPU FPS and the "blue" one is CPU FPS (not present here, noticeable on some systems, see thread above). I don't know if I should sell my current rig for a 10900KF/10850K or wait for Raptor Lake, but it will lose value if I wait too long
> Too big for me lol! Mine is also "line interactive"; I calibrated it according to my usage before, but I didn't know I would need more. I think my PSU can handle the 10mΩ shunt load but the UPS will beep for sure. Waiting on some 10mΩ shunts to check.
> Precisely, I get a few brownouts here; I'm at the end of the line, the very last. The lights fluctuate and sometimes the UPS switches to batteries, a must-have for me.
> You mean between the backplate and heatsink, not between the pcb and backplate? Me too, I ordered Thermalright Extreme Odyssey 120*120*2mm, but it was just a waste of time & money, too hard & thick for the Bykski. I ordered Gelid Extreme 120*120*1mm and 80*40*1.5mm; I've read somewhere Bykski spacing is 1.2mm. I'm thinking of putting some really thin 0.5mm copper plates + 1mm pads on the GPU side to reach around 1.2mm in case 1mm is too thin (unequal pressure, bad gpu contact). Amazon.com: Antrader 15 x 15 x 0.5mm Laptop GPU CPU Heatsink Thermal Copper Pad Shims Pack of 30 : Electronics


If you ordered GELID Extreme 120x120mm from AliExpress or another cheap third-party site, watch out: I bought some and they looked very different from the ones from the official GELID store.

When I contacted GELID support, they said those are older pads with quality problems and not to use them.

Use the code BACKTOSCHOOL for 20% off GELID pads from the official GELID store until Sept. 5th. 

They email me codes; that one will work for anyone.


----------



## Falkentyne

West. said:


> Yeah, that’s the mistake I made. Local shops don't have Extreme, so I went for Ultimates. It didn't work out and I paid the price. Luckily I recycled some pads for my backplate heatsink.


Oh those Tflex HD90000 are Shore OO 32. The Gelid Extremes are Shore OO 35. That's why they work so well, then. Sorry you couldn't find any.


----------



## tps3443

Metro Exodus peak power with some pretty stupid overclocks, and dip switches enabled.

GPU was 51C -52C and GPU die temp was 45C. The card was totally quiet though.

I was just playing around, and this isn’t something that I have ever done before.


----------



## KedarWolf

ENTERPRISE said:


> Quick one.
> 
> I am running the KP 520Watt BIOS. Have any of you tried increasing the core voltage at all ? If so is there a max safe value ?


You can max it out at 1.1v safely, but if you're on air your card will run hotter, so check your temps in GPU-Z.

1.1v is as high as our cards go.

Edit: Unless you have a Kingpin, or a card you can use the Elmor hardware tool on; then you can change some things.


----------



## ManniX-ITA

tps3443 said:


> Metro Exodus peak power with some pretty stupid overclocks, and dip switches enabled.


Did this insane power consumption bring a tangible uplift in framerate?


----------



## elbramso

tps3443 said:


> Metro Exodus peak power with some pretty stupid overclocks, and dip switches enabled.
> 
> GPU was 51C -52C and GPU die temp was 45C. The card was totally quiet though.
> 
> I was just playing around, and this isn’t something that I have ever done before.


Wow! How much voltage is needed for this amount of power?


----------



## des2k...

tps3443 said:


> Metro Exodus peak power with some pretty stupid overclocks, and dip switches enabled.
> 
> GPU was 51C -52C and GPU die temp was 45C. The card was totally quiet though.
> 
> I was just playing around, and this isn’t something that I have ever done before.


800W+, nice 
do a graph of fps vs power usage please

If I go by Quake RTX, 600W vs 750W has terrible fps scaling, so it's not worth pushing past 500W in most games
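des2k's point about scaling can be put into a number with a one-line calculation. A quick sketch; the fps and wattage figures are placeholders for illustration, not measurements from this thread:

```python
# Quick helper to put a number on fps-vs-power scaling.
# NOTE: the fps/wattage figures below are PLACEHOLDERS, not
# measurements posted in this thread.

def scaling_efficiency(fps_lo: float, watts_lo: float,
                       fps_hi: float, watts_hi: float) -> float:
    """Fraction of the extra power that became extra fps (1.0 = perfect scaling)."""
    fps_gain = fps_hi / fps_lo - 1.0
    power_gain = watts_hi / watts_lo - 1.0
    return fps_gain / power_gain

# e.g. +25% power (600 W -> 750 W) for only +5% fps:
print(round(scaling_efficiency(100.0, 600.0, 105.0, 750.0), 3))  # 0.2
```

Anything well below 1.0 means you are past the knee of the curve, which matches the "not worth pushing past 500W" experience.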


----------



## geriatricpollywog

ENTERPRISE said:


> Quick one.
> 
> I am running the KP 520Watt BIOS. Have any of you tried increasing the core voltage at all ? If so is there a max safe value ?


According to Vince (Kingpin) during the GN livestream video, 1.2v on ambient cooling.


----------



## tps3443

elbramso said:


> Wow! How much voltage is needed for this amount of power?


I have sent around 1.375V NVVDD to the card. This voltage really helps boost the internal clock speed. However, this was with load line calibration enabled, and the idle voltage is different than the load voltage; it would just bounce up to that and come back to slightly lower levels.

Try using the dip switches. You can flip them while the system is on, and while the card is rendering 3D under load. (It won’t hurt it, I tested them numerous times.) It’s an immediate difference, like a light switch.


Enabling all switches won’t hurt the card.

NVVDD helps boost the core voltage; it will drastically increase the internal core clock to match the external clock (up to an extent). Set a stable OC, force boost, check HWiNFO and make sure the internal clock matches. If not, flip a switch for NVVDD 1; if it’s still not good, try switch 2 as well.

If a clock speed is unstable, increase the MSVDD. It will also sometimes increase the clock speed ceiling. (For example, if you apply +150 and it says 2,205MHz, higher MSVDD will make it jump to 2,220.) It also increases stability. This one really hurts power consumption.

FBVDD is all memory voltage. It’ll allow stable frequencies beyond that 1650-1700 range (in games).

I can literally see artifacts, flip a switch and bye bye artifacts.

Then you have the load line calibration switches. They simply apply heavier effects to all of the voltages.


I watched a video from Luumi, and he said increasing the MSVDD voltage increases the internal clock. (This is not true.)


I have pushed very high voltages with no issues or degradation. You should confirm up to what extent it is helping your card, or if it’s just a waste.

The best dip switches are probably the memory switches (FBVDD); leave them on all the time for maximum memory OC potential and stability.


To answer the voltage question, I would say 1.250-1.275V is the limit for ambient water GPU voltage, or NVVDD. The MSVDD should not be set higher than 1.125-1.150V (default is like 1.050V). However, if you have +100 set in PX1 or MSI AB, your load volts can be screwed up; it’ll add that to your dip switch settings too.


----------



## tps3443

des2k... said:


> 800W+, nice
> do a graph of fps vs power usage please
> 
> If I go by Quake RTX, 600W vs 750W has terrible fps scaling, so it's not worth pushing past 500W in most games


Yeah, that was like a 3-4 minute run. But I felt like something was gonna blow up.


----------



## Falkentyne

tps3443 said:


> Metro Exodus peak power with some pretty stupid overclocks, and dip switches enabled.
> 
> GPU was 51C -52C and GPU die temp was 45C. The card was totally quiet though.
> 
> I was just playing around, and this isn’t something that I have ever done before.


And I was crying when I saw 620W on my air cooled card (not from the wall) in Metro Exodus EE and ran into MSVDD power limits (or did you say it's NVVDD's own power limit that's power throttling the card??) at 80C....
We oughta meet sometime and hang out for a beer or something (even though I don't drink).
And Quake RTX doesn't hit NVVDD limits but gets too hot.

Meanwhile on an air cooled FE card with only Total Board Power multiplied by 1.98x in HWinfo (all input power rails should be multiplied for accurate values):









so I'm throttling despite only 87% TDP reached out of 114%, due to a very high TDP Normalized %.
Is it NVVDD being limited internally (1.10v? 1.120v?) that's causing this? This is an FE card so no adjustments are allowed; Elmor's EVC2 is read only.
MSI Afterburner is already at +100 (this does nothing but increase the core clock by +15 and allow the 1.087v, 1.194v and 1.10v VIDs to be used, in the tier "1.069v-1.10v" rather than 1.056v-1.081v without the +15 MHz).


----------



## SoldierRBT

tps3443 said:


> I have sent around 1.375V NVVDD to the card. This voltage really helps boost the internal clock speed. However, this was with load line calibration enabled, and the idle voltage is different than the load voltage; it would just bounce up to that and come back to slightly lower levels.
> 
> Try using the dip switches. You can flip them while the system is on, and while the card is rendering 3D under load. (It won’t hurt it, I tested them numerous times.) It’s an immediate difference, like a light switch.
> 
> 
> Enabling all switches won’t hurt the card.
> 
> NVVDD helps boost the core voltage; it will drastically increase the internal core clock to match the external clock (up to an extent). Set a stable OC, force boost, check HWiNFO and make sure the internal clock matches. If not, flip a switch for NVVDD 1; if it’s still not good, try switch 2 as well.
> 
> If a clock speed is unstable, increase the MSVDD. It will also sometimes increase the clock speed ceiling. (For example, if you apply +150 and it says 2,205MHz, higher MSVDD will make it jump to 2,220.) It also increases stability. This one really hurts power consumption.
> 
> FBVDD is all memory voltage. It’ll allow stable frequencies beyond that 1650-1700 range (in games).
> 
> I can literally see artifacts, flip a switch and bye bye artifacts.
> 
> Then you have the load line calibration switches. They simply apply heavier effects to all of the voltages.
> 
> 
> I watched a video from Luumi, and he said increasing the MSVDD voltage increases the internal clock. (This is not true.)
> 
> 
> I have pushed very high voltages with no issues or degradation. You should confirm up to what extent it is helping your card, or if it’s just a waste.
> 
> The best dip switches are probably the memory switches (FBVDD); leave them on all the time for maximum memory OC potential and stability.
> 
> 
> To answer the voltage question, I would say 1.250-1.275V is the limit for ambient water GPU voltage, or NVVDD. The MSVDD should not be set higher than 1.125-1.150V (default is like 1.050V). However, if you have +100 set in PX1 or MSI AB, your load volts can be screwed up; it’ll add that to your dip switch settings too.


There's a point where adding more NVVDD doesn't increase performance, just heat. 1.375v NVVDD is a lot, probably for chiller runs. MSVDD is important; you may have internal clocks matching requested clocks, but without the correct MSVDD voltage the score won't be good. I've been working on getting 16K PR under 500W, but it's kinda difficult since my ambient temps are higher than before. 1.125v NVVDD LLC6 is what I'm using for this experiment. Still tweaking MSVDD. If I can get 2175MHz avg + the right MSVDD or LLC I can get it under 500W.








I scored 15 927 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## tps3443

SoldierRBT said:


> There's a point where adding more NVVDD doesn't increase performance, just heat. 1.375v NVVDD is a lot, probably for chiller runs. MSVDD is important; you may have internal clocks matching requested clocks, but without the correct MSVDD voltage the score won't be good. I've been working on getting 16K PR under 500W, but it's kinda difficult since my ambient temps are higher than before. 1.125v NVVDD LLC6 is what I'm using for this experiment. Still tweaking MSVDD. If I can get 2175MHz avg + the right MSVDD or LLC I can get it under 500W.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 927 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2523242


1.375 is high lol. That was when I first got the card, and I started playing with voltages etc. I didn’t really know what was what.


----------



## gfunkernaught

des2k... said:


> 800W+, nice
> do a graph of fps vs power usage please
> 
> If I go by Quake RTX, 600W vs 750W has terrible fps scaling, so it's not worth pushing past 500W in most games


I noticed an increase in performance running GTA 5 at 8K with the PL set to 65%, Battlefield 1 at 8K as well. The highest power draw I saw according to the RTSS GPU power indicator was like 580-600W. But yeah, totally not worth it for daily use. I kinda feel bad gaming past 500W, but sometimes it's fun.


----------



## ENTERPRISE

KedarWolf said:


> You can max it out at 1.1v safely, but if you're on air, your card will run hotter. So check your temps in GPU-Z.
> 
> 1.1v is as high as our cards go.
> 
> Edit: Unless you have Kingpin or a card you can use the Elmor hardware tool on, then you can change some things.





0451 said:


> According to Vince (Kingpin) during the GN livestream video, 1.2v on ambient cooling.


Thanks guys. I will look at the GPU core voltage then as well. In Afterburner I wish you could just set a limit of 1.1v, as opposed to how much you wish to add with the slider. I want to make sure I'm not adding too much.

I should have mentioned I have the EVGA FTW3 Ultra Gaming. It is under WC.

I was initially interested in the KP 1000W BIOS but kinda feared blowing up my GPU lol. I assume though, in reality, so long as you have a sensible PL you're ok?

Also, does the KP 1000W support ReBAR?

Thanks!


----------



## geriatricpollywog

ENTERPRISE said:


> Thanks guys. I will look at the GPU core voltage then as well. In Afterburner I wish you could set just a limit of 1.1v as opposed to how much you wish to add on using the slider. Want to make sure im not adding too much.
> 
> I should have mentioned I have the EVGA FTW3 Ultra Gaming. It is under WC.
> 
> I was initially interested in the KP1000W BIOS but kind of feared blowing up my GPU lol. I assume though in reality, so long as you have a sensible PL that your ok ?
> 
> Also does the KP1000W support ReBar ?
> 
> Thanks!


You can max out the slider without worrying about the card. The only way to set the voltage manually is with the Classified software tool that works only with the Kingpin, or with an Elmor labs tool soldered to the card as Kedar mentioned.

I have an actual Kingpin card and the 1000w bios does not give me any extra performance over the stock 520w EVGA bios, which you can flash to your FTW. Your mileage may vary.


----------



## Sheyster

ENTERPRISE said:


> Also does the KP1000W support ReBar ?


Here is the new one which does support re-bar:









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## elbramso

SoldierRBT said:


> There's a point where adding more NVVDD doesn't increase performance, just heat. 1.375v NVVDD is a lot, probably for chiller runs. MSVDD is important; you may have internal clocks matching requested clocks, but without the correct MSVDD voltage the score won't be good. I've been working on getting 16K PR under 500W, but it's kinda difficult since my ambient temps are higher than before. 1.125v NVVDD LLC6 is what I'm using for this experiment. Still tweaking MSVDD. If I can get 2175MHz avg + the right MSVDD or LLC I can get it under 500W.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 927 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2523242



I had my best PR score with 1.20625v NVVDD (LL1), but that is still ~600W.
Let's make a 520W challenge ^^ My best score at 520W was 15736 so far


----------



## ENTERPRISE

0451 said:


> You can max out the slider without worrying about the card. The only way to set the voltage manually is with the Classified software tool that works only with the Kingpin, or with an Elmor labs tool soldered to the card as Kedar mentioned.
> 
> I have an actual Kingpin card and the 1000w bios does not give me any extra performance over the stock 520w EVGA bios, which you can flash to your FTW. Your mileage may vary.


Oh right. That is interesting to hear. Sometimes you assume more power = better performance, but I should know better as that is not always the case.

In reality then, are there no other advantages to the KP 1000W, other than the zero temp limits?



Sheyster said:


> Here is the new one which does support re-bar:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com


Nice thanks for that !


----------



## gfunkernaught

ENTERPRISE said:


> Oh right. That is interesting to hear. Sometimes you assume more power = better performance, but I should know better as that is not always the case.
> 
> In reality then, are there no other advantages to the KP 1000W, other than the zero temp limits?
> 
> 
> 
> Nice thanks for that !


The higher power limit and the ability to control the power limit are other advantages over the 520W bios, for example. The temp limit is removed; however, thermal throttling is still present no matter what, e.g. when the core hits 40C the core clock will drop one bin.


----------



## West.

kryptonfly said:


> You mean between backplate and heatsink, not between pcb and backplate ? Me too, I ordered Thermalright Extreme Odyssey 120*120*2mm but just a waste of time & money, too hard & thick for Bykski. I ordered Gelid Extreme 120*120*1mm and 80*40*1.5mm, I've read somewhere Bykski spacing is 1.2mm. I'm thinking to put some really thin copper plates 0.5mm + 1mm pads on GPU side to reach around 1.2mm if somehow 1mm would be too thin, pressure is unequal, bad gpu contact. Amazon.com: Antrader 15 x 15 x 0.5mm Laptop GPU CPU Heatsink Thermal Copper Pad Shims Pack of 30 : Electronics


Yes, between the heatsink & backplate. I tried both Ultimates & Odyssey. I would not recommend anyone use them on the front of the pcb as they are not compressible; the slightest thickness problem will lead to a bad mount. I wasted quite a few pads and was only able to recycle a few of them for my backplate heatsink.

For 1.2mm I would recommend you use soft pads like Gelid Extreme or HD90000. From my experience, HD90000 is soft and can be reshaped like clay, so I don't have to worry too much about thickness.


----------



## West.

Falkentyne said:


> Oh those Tflex HD90000 are Shore OO 32. The Gelid Extremes are Shore OO 35. That's why they work so well, then. Sorry you couldn't find any.


I would say HD90000 is the safest bet for pads. It is very soft and can be easily compressed or reshaped. It might not be the best performer, but it's not far off, and it gives room for thickness error.


----------



## domdtxdissar

elbramso said:


> I had my best PR score with 1.20625v NVVDD (LL1), but that is still ~600W.
> Let's make a 520W challenge ^^ My best score at 520W was 15736 so far


16k @ around ~520w (peak 535w for a split second)









Peak 560w in regular timespy








Peak 616w in timespy extreme


----------



## GRABibus

Still testing my Suprim X on air :

KP 1000W BAR Bios
27°c ambient









I scored 15 439 in Port Royal


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com















*=> Stock air cooler
=> 571W max power draw*


----------



## GRABibus

West. said:


> Yes, between the heatsink & backplate. I tried both Ultimates & Odyssey. I would not recommend anyone use them on the front of the pcb as they are not compressible; the slightest thickness problem will lead to a bad mount. I wasted quite a few pads and was only able to recycle a few of them for my backplate heatsink.
> 
> For 1.2mm I would recommend you use soft pads like Gelid Extreme or HD90000. From my experience, HD90000 is soft and can be reshaped like clay, so I don't have to worry too much about thickness.
> 
> View attachment 2523362


Where did you get the heatsink ?
is it efficient ?


----------



## des2k...

GRABibus said:


> Where did you get the heatsink ?
> is it efficient ?


I have this one (blue) from Amazon; ~67C with a +1500 mem OC and an 80mm fan while mining.









IMG-20210501-133155


Image IMG-20210501-133155 hosted in ImgBB




ibb.co


----------



## GRABibus

des2k... said:


> I have this one (blue) from Amazon; ~67C with a +1500 mem OC and an 80mm fan while mining.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> IMG-20210501-133155
> 
> 
> Image IMG-20210501-133155 hosted in ImgBB
> 
> 
> 
> 
> ibb.co


Thanks.
What is the brand and part number please ?


----------



## West.

GRABibus said:


> Where did you get the heatsink ?
> is it efficient ?


I bought these directly from Taobao. They are dirt cheap; I paid ~USD 7.43 (converted) for 2 of them including shipping. 150*90*15mm each, and 2 cover the whole backplate. The pads I used are expensive.

Speaking of efficiency: room temp 25C (AC on), ~80C with +1525 mem while mining. Without the heatsink it would be +6-8C. One heatsink stuck right on top of the vram area is enough; the second one doesn't help much, at most 2C. I use 2 only because I bought the heatsinks cheap and had some leftover and "wasted" pads from my pad swap project.


----------



## J7SC

GRABibus said:


> Thanks.
> What is the brand and part number please ?


...picked up two sets of these from Amazon.ca earlier in the year...part numbers won't match at Amazon.fr, but try the first brand name as a search term


----------



## kryptonfly

KedarWolf said:


> If you ordered GELID Extreme 120x120mm from Aliexpress or another cheap third party site, I bought some and they looked very different from the ones from the official GELID store.
> 
> When I contacted GELID support, they said they are older pads that have quality problems and not to use them.
> 
> Use the code BACKTOSCHOOL for 20% off GELID pads from the official GELID store until Sept. 5th.
> 
> They email me codes, that one will work for anyone.


Thanks for the voucher  but I had already ordered from AliExpress a few days ago and just received the pads yesterday:



Are they the "true" quality, conforming to the ones from the official Gelid store? Right color?


West. said:


> Yes, between the heatsink & backplate. I tried both Ultimates & Odyssey. I would not recommend anyone use them on the front of the pcb as they are not compressible; the slightest thickness problem will lead to a bad mount. I wasted quite a few pads and was only able to recycle a few of them for my backplate heatsink.
> 
> For 1.2mm I would recommend you use soft pads like Gelid Extreme or HD90000. From my experience, HD90000 is soft and can be reshaped like clay, so I don't have to worry too much about thickness.
> 
> View attachment 2523362


I should have bought Gelid Extreme first, but Thermalright was a bit cheaper; result: I wasted ~25€ (120*120*2mm)... This is Gelid Extreme right here, 120*120*1mm and 80*40*1.5mm. I will use 1mm on the vram and 1.5mm on the vrm, and if 1mm somehow isn't enough I'll use thin 0.5mm copper plates. I will first check for good GPU contact without pads (just thermal grease) and then adjust pad heights. For now I just have GPU contact of about 1cm at the center with the stock Bykski pads. No comment on the Thermalright pads, almost no contact, +60°C in a few seconds!
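The gap arithmetic behind the pad + copper shim idea is easy to sanity-check. A quick sketch, assuming the ~1.2mm Bykski VRAM spacing mentioned in the thread (hearsay; measure your own block first):

```python
# Back-of-envelope for a pad + rigid copper shim stack.
# ASSUMPTION: the Bykski VRAM gap is ~1.2 mm, as read elsewhere in
# the thread -- measure your own block before committing to pads.

GAP_MM = 1.2

def compression_pct(pad_mm: float, shim_mm: float = 0.0) -> float:
    """How much the pad itself must compress to close the gap.
    Negative means the stack is too thin to make contact at all."""
    squish = pad_mm + shim_mm - GAP_MM
    return 100.0 * squish / pad_mm

print(compression_pct(1.0))       # 1 mm pad alone: negative, no contact
print(compression_pct(1.0, 0.5))  # 1 mm pad + 0.5 mm copper: ~30% squish
```

A soft pad like Gelid Extreme or HD90000 can take that kind of squish; the very firm pads discussed above cannot, which is why they push the block off the die instead.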


----------



## GRABibus

J7SC said:


> ...picked up two sets of these from Amazon.ca earlier in the year...part numbers won't match at Amazon.fr, but try the first brand name as a search term
> 
> View attachment 2523434



Thanks.

Ordered 2 pces today amazon.com

With which thermal adhesive do you fix it ?


----------



## elbramso

Ok, I tried to get a better mount on the waterblock one more (and last) time.

Unfortunately, the result always ends up being the same:
As you can see, I'm not able to get good gpu contact at the die corners. The pads I used are super soft (Gelid Extreme 2mm), so they don't have a negative impact on the die contact.
I really took my time aligning the screw holes to almost 100% before I laid the gpu on the block. I placed all the screws and then started to tighten the 4 middle screws cross-wise (like I always do); at the end I pressed down on the pcb with 2 fingers to tighten the screws even more. I could hear the thermal paste + pads making contact.

The paste pattern on the gpu always looks like the one shown in the picture  The die seems flat though...


----------



## Benni231990

Does the new ReBAR XOC 1000 W BIOS reduce the memory clock speed at idle, or not?


----------



## kryptonfly

elbramso said:


> Ok, I tried to get a better mount of the waterblock one more (and last) time.
> 
> Unfortunately, the result always ends up the same:
> As you can see, I'm not able to get good GPU contact at the die corners. The pads I used are super soft (Gelid Extreme 2mm), so they don't have a negative impact on the die contact.
> I really took my time to align the screw holes almost 100% before I laid the GPU on the block. I placed all the screws and then started to tighten the 4 middle screws in an X pattern (like I always do); at the end I pressed the PCB down with 2 fingers to tighten the screws even more. I could hear the thermal paste and pads making contact.
> 
> The paste pattern on the GPU always looks like the one shown in the picture. The die seems flat though...


If I could get the same GPU contact, I'd be happy! I just have around 1cm at the center, but I will swap the stock pads for Gelid Extreme really soon; waiting on flux & 10mΩ shunts...
You can check without pads to see whether the GPU contact stays the same. That's what I will do first when I change pads.


----------



## ManniX-ITA

elbramso said:


> The paste pattern on the gpu will always look like shown on the picture  The die seems flat though...


Aren't 3 & 4 supposed to be same as 1 & 2?


----------



## GAN77

ManniX-ITA said:


> Aren't 3 & 4 supposed to be same as 1 & 2?


Threaded legs 1 and 2 look like rails to facilitate mounting.
If the distance between them is not accurate, there may be excessive stress when tightening the screws.


----------



## J7SC

GRABibus said:


> Thanks.
> 
> Ordered 2 pieces today from amazon.com.
> 
> Which thermal adhesive do you use to fix them?


...this thermal adhesive at strategic locations (+ Thermalright Extreme Odyssey pads elsewhere) :


----------



## ManniX-ITA

GAN77 said:


> Threaded leg 1 and 2 look like rails to facilitate mounting.
> If the distance between them is not accurate, there may be excessive stress when tightening the screws.


I don't know the KP block, but usually they are all the same.
Just from the picture, it doesn't look like they have the same height. But it's just a picture...
It depends on what's on the other side. At first glance it doesn't feel right.


----------



## ENTERPRISE

gfunkernaught said:


> The higher power limit and the ability to control the power limit are other advantages over the 520W BIOS, for example. The temp limit is removed; however, thermal throttling is still present no matter what: when the core hits 40°C, the core clock will drop one bin.


Thanks for letting me know. So to confirm: with the 1000 W BIOS, to set the desired maximum wattage I simply adjust the power limit slider in Afterburner? For example, if I set it to 60%, that would equal 600 W total?


----------



## gfunkernaught

ENTERPRISE said:


> Thanks for letting me know. So to confirm: with the 1000 W BIOS, to set the desired maximum wattage I simply adjust the power limit slider in Afterburner? For example, if I set it to 60%, that would equal 600 W total?


You are correct. Just keep in mind that the 1 kW BIOS has other limits removed as well that can't be controlled unless you have an actual KP 3090. So, for example, setting 52% on the 1 kW BIOS is not exactly the same as using a 520 W BIOS. Make sure your cooling is excellent.
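The slider-to-wattage math above can be sketched as follows. This is a minimal illustration assuming the slider simply scales the flashed BIOS's maximum board power; the function name is mine, and as noted above, real cards also enforce per-rail limits the slider does not touch.

```python
# Hypothetical sketch: Afterburner's power slider as a fraction of the
# flashed BIOS's maximum board power. Real cards also enforce per-rail
# limits, so this is only the headline number.
def power_cap_watts(bios_max_w: float, slider_percent: float) -> float:
    """Effective total-board power target in watts."""
    return bios_max_w * slider_percent / 100.0

print(power_cap_watts(1000, 60))  # 600.0 -> 60% of a 1000 W BIOS
print(power_cap_watts(1000, 52))  # 520.0 -> still not identical to a 520 W BIOS
```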


----------



## J7SC

@GRABibus ...if you ordered the 'taller' black heat-sinks I showed as one of the two sets, make sure to measure the clearance(s) with your mobo 'on the bottom', so the card fits with the cooler on the back while still covering the bottom VRAM chip on the back... it works with the Strix, but there's not much room for error.


----------



## domdtxdissar

GRABibus said:


> Thanks.
> 
> Ordered 2 pces today amazon.com
> 
> With which thermal adhesive do you fix it ?


So I also ended up ordering 2x "Aluminium Heat Sink Radiator Cooling Fin Silver for CPU LED Power", 150x69x36mm.

Instead of thermal adhesive glue I will use thermal double-sided tape: 2x "EC360® TAPE Wärmeleitfolie/kleber 100x100mm Doppelseitig 3W/mK WLPad Adhesive".


----------



## ENTERPRISE

gfunkernaught said:


> You are correct. Just keep in mind that 1kw bios has other limits removed as well that can't be controlled unless you have an actual KP 3090. So for example setting 52% on the 1kw bios is not exactly the same as using a 520w bios. So make sure your cooling is excellent.


Thanks! My FTW3 ULTRA Gaming is under custom WC so should be ok.


----------



## Falkentyne

domdtxdissar said:


> So I also ended up ordering 2x "Aluminium Heat Sink Radiator Cooling Fin Silver for CPU LED Power", 150x69x36mm.
> 
> Instead of thermal adhesive glue I will use thermal double-sided tape: 2x "EC360® TAPE Wärmeleitfolie/kleber 100x100mm Doppelseitig 3W/mK WLPad Adhesive".


You might do better daisy-chaining zip ties together and using 0.5mm Gelid Ultimate thermal pads instead. That would do a lot more than 3 W/mK thermal tape here.



Amazon.com: Hmrope 100pcs Cable Zip Ties Heavy Duty 12 Inch, Premium Plastic Wire Ties with 50 Pounds Tensile Strength, Self-Locking Black Nylon Zip Ties for Indoor and Outdoor : Electronics (www.amazon.com)

It might look really ugly, but if you can wrap them around the card, you can get full use of the Gelid Ultimate pads. Not sure about the temp rating though.
Maybe try these if you think a hot backplate would cause problems.



Amazon.com


----------



## tps3443

kryptonfly said:


> If I could get the same GPU contact, I'd be happy! I just have around 1cm at the center, but I will swap the stock pads for Gelid Extreme really soon; waiting on flux & 10mΩ shunts...
> You can check without pads to see whether the GPU contact stays the same. That's what I will do first when I change pads.


The stock 3090 KP thermal pads are similar to or even a little better than Gelid Extreme.

They are 13 W/mK, 2.25mm, pre-included on the EVGA KP Hydro Copper block.

@elbramso couldn't get easy replacements from EVGA, so I think that's why he just ordered Gelid Extremes.


----------



## J7SC

domdtxdissar said:


> So i did also end up ordering 2x Aluminium Heat Sink Radiator Cooling Fin Silver for CPU LED Power 150x69x36mm
> 
> Instead of thermal adhesive glue i will use thermal double-sided tape instead.. 2x EC360® TAPE Wärmeleitfolie/kleber 100x100mm Doppelseitig 3W/mK WLPad Adhesive


...it could still be an improvement, but a height of 69mm won't fully cover the VRAM, per below.


----------



## yzonker

Falkentyne said:


> You might do better daisy-chaining zip ties together and using 0.5mm Gelid Ultimate thermal pads instead. That would do a lot more than 3 W/mK thermal tape here.
> 
> Amazon.com: Hmrope 100pcs Cable Zip Ties Heavy Duty 12 Inch, Premium Plastic Wire Ties with 50 Pounds Tensile Strength, Self-Locking Black Nylon Zip Ties for Indoor and Outdoor : Electronics (www.amazon.com)
> 
> It might look really ugly, but if you can wrap them around the card, you can get full use of the Gelid Ultimate pads. Not sure about the temp rating though.
> Maybe try these if you think a hot backplate would cause problems.
> 
> Amazon.com


Thermal paste works well if you don't mind using zip ties. I've been doing that for quite a while now. MX-4 comes in a big tube and is cheap. Messy, of course.


----------



## J7SC

yzonker said:


> Thermal paste works well if you don't mind using zip ties. I've been doing that quite a while now. MX-4 comes in a big tube and is cheap. Messy of course.


...good tip! I was down to my last tube of MX-4 (my standard go-to), and those big MX-4 tubes were my favourite, but I can't get them locally anymore. I picked up some chunky 20g tubes of MX-5 instead after seeing some comparisons to MX-4, though I haven't used any yet... I won't need it for the 3090 back-plate heat-sink since I used a combo of thermal adhesive and Thermalright pads, but I might try it out on the BigNavi back-plate heat-sink... below is from an upcoming build log with all the 'usual suspects'...


----------



## GRABibus

Suprim X on air, still testing.
Summary of best scores so far.

*1/ TimeSpy :*
27°C
KP 1000W ReBar Bios









I scored 21 327 in Time Spy (AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10) on www.3dmark.com













*=> 597W max power draw*


*2/ TimeSpy Extreme:*
27°C
KP 1000W ReBar Bios









I scored 11 160 in Time Spy Extreme (AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10) on www.3dmark.com













*=> 656W max power draw*


*3/ Port Royal:*
27°C
KP 1000W ReBar Bios









I scored 15 439 in Port Royal (AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10) on www.3dmark.com













*=> 571W max power draw*


----------



## Arizor

Damn you all, thanks to this thread I ordered, received and installed a HWLabs GTR360 to replace my pretty much new EK PE 360.

That said, it did drop temps by about 5C on average at lower RPM, so yeah, the HW hype is real.


----------



## vasyltheonly

Hey all! I was able to trade my 3080 Ti for a 3090 FE. I'm wondering if anyone has adapted their 2x 8-pin to a single 12-pin cable. I have the CableMod PRO ModMesh series cables, which are apparently 18 gauge. I was thinking of getting a 12-pin and just converting my 8-pins to the 12. My only concern is whether the 12-pin adapter will be able to house those or not. Since it's all ground and 12 V, I feel pretty comfortable attempting this.

I was also thinking of getting the Alphacool Eisblock and found a review by Igor's Lab, but no comparison to any other brands. Does anyone have experience with this block compared to others such as Bitspower, Bykski, or EKWB? Thanks!
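For a rough sanity check of the gauge concern, here is a back-of-envelope per-pin current estimate. It assumes 6 of the 12 pins carry 12 V (the rest are grounds) and ignores transients and connector derating; the numbers are illustrative, not a safety rating.

```python
# Rough per-pin current for a 12-pin connector feeding a card at 12 V,
# assuming 6 power pins share the load evenly (illustrative only).
def amps_per_pin(watts: float, volts: float = 12.0, power_pins: int = 6) -> float:
    return watts / volts / power_pins

print(round(amps_per_pin(350), 1))  # 4.9 -> stock 350 W TDP
print(round(amps_per_pin(600), 1))  # 8.3 -> with a raised power limit
```

At stock TDP this sits well under the ~9 A terminals mentioned in the reply below, but a raised power limit eats most of that margin, which is why 16 AWG is usually preferred over 18 AWG here.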


----------



## Falkentyne

vasyltheonly said:


> Hey all! I was able to trade my 3080 Ti for a 3090 FE. I'm wondering if anyone has adapted their 2x 8-pin to a single 12-pin cable. I have the CableMod PRO ModMesh series cables, which are apparently 18 gauge. I was thinking of getting a 12-pin and just converting my 8-pins to the 12. My only concern is whether the 12-pin adapter will be able to house those or not. Since it's all ground and 12 V, I feel pretty comfortable attempting this.
> 
> I was also thinking of getting the Alphacool Eisblock and found a review by Igor's Lab, but no comparison to any other brands. Does anyone have experience with this block compared to others such as Bitspower, Bykski, or EKWB? Thanks!


What PSU?
Seasonic OEM housings are compatible with the 16 AWG Seasonic 12-pin Micro-Fit 3.0 cable. Several OEMs use this platform (Corsair, be quiet!, and Phanteks have been known to), and they may even have their own version of that cable (for far too much money!).



https://www.btosinte.com/Seasonic-Two-8pin-to-12pin-9A-Terminals-Micro-Fit-30-16AWG-VGA-SS-MF12P-9A-VGA.htm



For non-Seasonic-branded PSUs, contact the OEM and see what they have, or make one yourself (you can get the connector from Digi-Key or Mouser:

https://www.mouser.com/ProductDetail/Molex/43025-1208?qs=rg3U%2F5nEhHdiW0ROllnE5g%3D%3D

https://www.reddit.com/r/nvidia/comments/j3feyu ).


----------



## J7SC

Arizor said:


> Damn you all, thanks to this thread I ordered, received and installed a HWLabs GTR360 to replace my pretty much new EK PE 360.
> 
> That said, it did drop temps by about 5C on average at lower RPM, so yeah, the HW hype is real.


...and now, time to _really upgrade_ with two of these:


----------



## kryptonfly

tps3443 said:


> The stock 3090 KP thermal pads are similar to or even a little better than Gelid Extreme.
> 
> They are 13 W/mK, 2.25mm, pre-included on the EVGA KP Hydro Copper block.
> 
> 
> @elbramso couldn't get easy replacements from EVGA, so I think that's why he just ordered Gelid Extremes.


I meant compared to the Bykski pads; if the stock pads are as good as Gelid Extreme, sure, he can keep them.


GRABibus said:


> Suprim X on air, still testing.
> Summary of best scores so far.
> 
> *1/ TimeSpy:* 27°C, KP 1000W ReBar BIOS
> I scored 21 327 in Time Spy (www.3dmark.com)
> View attachment 2523613
> *=> 597W max power draw*
> 
> *2/ TimeSpy Extreme:* 27°C, KP 1000W ReBar BIOS
> I scored 11 160 in Time Spy Extreme (www.3dmark.com)
> View attachment 2523616
> *=> 656W max power draw*
> 
> *3/ Port Royal:* 27°C, KP 1000W ReBar BIOS
> I scored 15 439 in Port Royal (www.3dmark.com)
> View attachment 2523617
> *=> 571W max power draw*


You did 22647 in the Time Spy GPU test at 597W; I did 22537 @ 39°C with 15mΩ shunts, "in theory" ~520W max, but I had a bad solder joint, so maybe less than 500W. I'm not sure +77W "in theory" is worth just +0.5% perf. You will benefit much more from watercooling!

You did 11824 in the Time Spy Extreme GPU test at 656W; I did 11003 @ 42°C without shunts, so 390W at peak (my run: I scored 9 636 in Time Spy Extreme). I'm not sure +266W (+68%) is worth just +7.5% perf.
To my mind there's too much Joule heating here.
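The comparison above boils down to percentage score gain versus percentage power gain; a quick sketch with the figures quoted in this exchange (all approximate, taken at face value from the posts):

```python
# Perf-per-watt comparison using the (approximate) numbers from the posts above.
def pct_gain(new: float, old: float) -> float:
    """Percentage increase of new over old."""
    return (new / old - 1.0) * 100.0

# Time Spy GPU score: 22647 @ ~597 W vs 22537 @ ~520 W
print(round(pct_gain(22647, 22537), 2))  # 0.49 -> ~0.5% more score
print(round(pct_gain(597, 520), 1))      # 14.8 -> ~15% more power

# Time Spy Extreme GPU score: 11824 @ ~656 W vs 11003 @ ~390 W
print(round(pct_gain(11824, 11003), 1))  # 7.5
print(round(pct_gain(656, 390), 1))      # 68.2
```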


Arizor said:


> Damn you all, thanks to this thread I ordered, received and installed a HWLabs GTR360 to replace my pretty much new EK PE 360.
> 
> That said, it did drop temps by about 5C on average at lower RPM, so yeah, the HW hype is real.


You mean from this review? => Bykski 30 mm RC Series Radiator Review
I'm not aware of the HW hype, but I have 5x Bykski CR-RD360RC-TN-V2 and, according to this review, it performs better than a GTR 360 @ 600rpm and the same as a GTS 360 @ 1200rpm. However, from 1500rpm up, the GTR 360 becomes better. I'm just pointing to this review, not here to blame anything; if you did improve your temps compared to before, glad to hear that.


----------



## des2k...

J7SC said:


> ...and now, time to _really upgrade_ with two of these:


Very nice rad; the build quality is amazing on this thing. Mine is white. I went with cheap $5 140mm Vardar fans. I wanted RGB 140mm but those are not cheap. Also got a dust filter from PPCs.


----------



## Arizor

Haha @J7SC don't tempt me! I've got an Optimus block ordered (how long do these dudes take? it's been a while since I ordered!), hoping this guarantees no movement above the high 30s even with punishing activity.



kryptonfly said:


> You mean from this review? => Bykski 30 mm RC Series Radiator Review
> I'm not aware of the HW hype, but I have 5x Bykski CR-RD360RC-TN-V2 and, according to this review, it performs better than a GTR 360 @ 600rpm and the same as a GTS 360 @ 1200rpm. However, from 1500rpm up, the GTR 360 becomes better. I'm just pointing to this review, not here to blame anything; if you did improve your temps compared to before, glad to hear that.


Yeah, that's the one. The GTR model is designed specifically for high-rpm fans, so it comes into its own at 1800+ rpm, or, as in my case, in push/pull with each fan at 1800.


----------



## Zogge

I combined my CPU and GPU loops again and my ambient-to-water delta never goes over 1°C now, which is good news. Flow rate took a hit though, going from 230 l/h to 150 l/h (3x 360x140 + 2x 280x120 rads); it's a long loop.

Max heat load is GPU 520W + CPU 225W + CPU VRMs ??W.

However, my Bykski block on the 3090 is starting to drop in efficiency. It was a 12°C water-to-core delta 6 months ago, now 16°C.
Memory maxes at 48°C with +1000, which is okay/good I guess, and the same as 6 months ago. (MP5Works serial on the back plate)
Strange about the core though; is the Thermal Grizzly Extreme (non-conductive) degrading that fast?
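For context on the flow-rate hit, the steady-state coolant temperature rise around the loop can be estimated from heat load and flow. A back-of-envelope sketch, assuming plain water (c_p ≈ 4186 J/(kg·K), ~1 kg per litre); the 745 W figure just sums the GPU and CPU numbers above and ignores the unknown VRM share:

```python
# Coolant temperature rise across a loop: deltaT = P / (mass_flow * c_p).
# Assumes plain water; all figures approximate.
def coolant_delta_t(heat_w: float, flow_l_per_h: float) -> float:
    mass_flow_kg_s = flow_l_per_h / 3600.0  # ~1 kg per litre of water
    return heat_w / (mass_flow_kg_s * 4186.0)

heat = 520 + 225  # GPU + CPU watts, ignoring the unknown VRM share
print(round(coolant_delta_t(heat, 150), 1))  # 4.3 -> degrees C at 150 l/h
print(round(coolant_delta_t(heat, 230), 1))  # 2.8 -> degrees C at 230 l/h
```

Either way the water itself only varies by a few degrees around the loop, which is why a flow drop from 230 to 150 l/h usually costs far less than a degraded block mount.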


----------



## satinghostrider

Zogge said:


> I combined my CPU and GPU loops again and my ambient-to-water delta never goes over 1°C now, which is good news. Flow rate took a hit though, going from 230 l/h to 150 l/h (3x 360x140 + 2x 280x120 rads); it's a long loop.
> 
> Max heat load is GPU 520W + CPU 225W + CPU VRMs ??W.
> 
> However, my Bykski block on the 3090 is starting to drop in efficiency. It was a 12°C water-to-core delta 6 months ago, now 16°C.
> Memory maxes at 48°C with +1000, which is okay/good I guess, and the same as 6 months ago. (MP5Works serial on the back plate)
> Strange about the core though; is the Thermal Grizzly Extreme (non-conductive) degrading that fast?


It's been mentioned time and again that you need to use thicker pastes like Thermalright TFX, SYY-157, etc. to prevent your temps from creeping up over time. Many have experienced this, including myself.


----------



## J7SC

...I posted this question earlier at 'general GPU' > here ...anyone here know the answer ?



satinghostrider said:


> It's been mentioned time and again that you need to use those thicker pastes like Thermalright TFX, SYY-157, etc to prevent your temps from creeping up over time. Many have experienced this including myself.


...same here; I switched from TG Kryonaut to Gelid GC-Extreme for now after observing creeping delta degradation.


----------



## Lobstar

satinghostrider said:


> It's been mentioned time and again that you need to use those thicker pastes like Thermalright TFX, SYY-157, etc to prevent your temps from creeping up over time. Many have experienced this including myself.


I've had a great experience with Kryonaut Extreme; however, it's an absolute pain in the ass to get good coverage with. I have to use clear plastic and apply it to both surfaces.


----------



## Zogge

J7SC said:


> ...I posted this question earlier at 'general GPU' > here ...anyone here know the answer ?


Answered it just now.


----------



## Falkentyne

J7SC said:


> ...I posted this question earlier at 'general GPU' > here ...anyone here know the answer ?
> 
> 
> 
> ...same here, I switched from TG Kryonaut to GC Gelid Extreme for now after observing creeping delta degradation.


You need to be careful about what is causing the delta degradation as well.

If the delta creeps higher BUT the core temp remains constant, it's probably NOT the thermal paste but bad contact pressure on the VRMs, causing higher heat and the VRM thermal pads to start deforming. If only a very small top section of the pad is touching the VRM (rather than the entire pad fully molding around it and diffusing the heat over a large area), that section sees very high heat and develops a heat-indentation warp, which can cause this.

I actually managed to get a picture of this in action after 2 months of use, after seeing some strange behavior from the hotspot (even though it never went above a 12.5°C delta), depending on the power draw AND the type of load on the video card.

Decent contact (left VRM)











Weak contact (Right VRM).



If the delta creeps higher AND the core average temp creeps higher as well, or if the delta remains the same but the core temps slowly go up over time despite the same ambient and equalized water temp, then it's most likely the thermal paste (either paste separation/degradation or low core contact pressure).

The GPU hotspot covers BOTH the GPU itself (some individual sensors) and scattered VRM sensors. Unfortunately there is an absolute minimum delta that will register, regardless of what the "real" hotspot temp is; so if the minimum delta allowed by the firmware (VBIOS) is 10°C and the core temp rises, the reported delta just gets pushed up with it, and vice versa.
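The floor behavior described above can be sketched like this; the 10°C value is an assumed illustrative firmware minimum, not a documented constant:

```python
# Sketch of the reported-hotspot floor: if the VBIOS enforces a minimum
# core-to-hotspot delta, the reported hotspot can never sit closer to the
# core than that floor, so a rising core drags the reading up with it.
MIN_DELTA_C = 10.0  # hypothetical firmware floor, for illustration only

def reported_hotspot(core_c: float, real_hotspot_c: float) -> float:
    return max(real_hotspot_c, core_c + MIN_DELTA_C)

print(reported_hotspot(40.0, 45.0))  # 50.0 -> the floor dominates
print(reported_hotspot(40.0, 55.0))  # 55.0 -> the real sensor dominates
```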









Obviously, VRAM and Core contact was immaculate.

Problem fixed by switching from 1.5mm Gelid Extreme pads to 2mm on the VRMs only (including the little isolated VRM chip), keeping 1.5mm on the VRAM.


----------



## Zogge

Falkentyne said:


> You need to be careful about what is causing the delta degradation as well.
> 
> If the delta creeps higher BUT the core temp remains constant, it's probably NOT the thermal paste but bad contact pressure on the VRMs, causing higher heat and the VRM thermal pads to start deforming. If only a very small top section of the pad is touching the VRM (rather than the entire pad fully molding around it and diffusing the heat over a large area), that section sees very high heat and develops a heat-indentation warp, which can cause this.
> 
> If the delta creeps higher AND the core average temp creeps higher as well, or if the delta remains the same but the core temps slowly go up over time despite the same ambient and equalized water temp, then it's most likely the thermal paste (either paste separation/degradation or low core contact pressure).
> 
> The GPU hotspot covers BOTH the GPU itself (some individual sensors) and scattered VRM sensors. Unfortunately there is an absolute minimum delta that will register, regardless of what the "real" hotspot temp is; so if the minimum delta allowed by the firmware (VBIOS) is 10°C and the core temp rises, the reported delta just gets pushed up with it, and vice versa.


I understand, but I don't really push high wattage through it; I even run it with the 480W Asus BIOS 24/7. The VRMs rarely if ever go above 40-42°C, so that heat should be negligible. I will make some more detailed notes and continue to monitor it over time before I crack it open and change things again. I won't buy an Optimus block for this version, but maybe for the next one.


----------



## J7SC

...I posted this earlier at 'general GPU' > here ...anyone know the answer ?


Falkentyne said:


> You need to be careful about what is causing the delta degradation as well.
> 
> If the delta creeps higher BUT the core temp remains constant, it's probably NOT the thermal paste but bad contact pressure on the VRMs, causing higher heat and the VRM thermal pads to start deforming. If only a very small top section of the pad is touching the VRM (rather than the entire pad fully molding around it and diffusing the heat over a large area), that section sees very high heat and develops a heat-indentation warp, which can cause this.
> 
> I actually managed to get a picture of this in action after 2 months of use, after seeing some strange behavior from the hotspot (even though it never went above a 12.5°C delta), depending on the power draw AND the type of load on the video card.
> 
> Decent contact (left VRM)
> 
> View attachment 2523756
> 
> 
> 
> Weak contact (Right VRM).
> 
> 
> 
> If the delta creeps higher AND the core average temp creeps higher as well, or if the delta remains the same but the core temps slowly go up over time despite the same ambient and equalized water temp, then it's most likely the thermal paste (either paste separation/degradation or low core contact pressure).
> 
> The GPU hotspot covers BOTH the GPU itself (some individual sensors) and scattered VRM sensors. Unfortunately there is an absolute minimum delta that will register, regardless of what the "real" hotspot temp is; so if the minimum delta allowed by the firmware (VBIOS) is 10°C and the core temp rises, the reported delta just gets pushed up with it, and vice versa.
> View attachment 2523757
> 
> 
> Obviously, VRAM and core contact was immaculate.
> 
> Problem fixed by switching from 1.5mm Gelid Extreme pads to 2mm on the VRMs only (including the little isolated VRM chip), keeping 1.5mm on the VRAM.


...on the Strix, I can monitor the VRM temps, and they haven't changed... pads were not the problem in my situation, though they certainly can be on other setups.

The culprit was the EK Vector block. As posted before, I had multiple issues with the EK block and back-plate (like big chunks of nickel plating coming off the back-plate). On the creeping degradation: there's a ridge left by the cutting-head imprint which shouldn't be there, never mind nickel-plating over it. The ridge sits right on top of the die, per the orange circle in the pic below... Using thicker paste (Gelid GC-Extreme) helped a lot after a remount but still did not yield the deltas the rest of the cooling system is capable of... I then switched to the Phanteks Glacier Strix block a while back (again with GC-Extreme), and that seems to have fixed it for good.


----------



## J7SC

Zogge said:


> Answered it just now.


----------



## Nizzen

Zogge said:


> I combined my CPU and GPU loops again and my ambient-to-water delta never goes over 1°C now, which is good news. Flow rate took a hit though, going from 230 l/h to 150 l/h (3x 360x140 + 2x 280x120 rads); it's a long loop.
> 
> Max heat load is GPU 520W + CPU 225W + CPU VRMs ??W.
> 
> However, my Bykski block on the 3090 is starting to drop in efficiency. It was a 12°C water-to-core delta 6 months ago, now 16°C.
> Memory maxes at 48°C with +1000, which is okay/good I guess, and the same as 6 months ago. (MP5Works serial on the back plate)
> Strange about the core though; is the Thermal Grizzly Extreme (non-conductive) degrading that fast?


Have you checked the fins inside the GPU block?


----------



## gfunkernaught

J7SC said:


> ...and now, time to _really upgrade_ with two of these:


@Arizor ORRRRR you could go this way  
Yes, there are four rads in there. I'm waiting on a few more parts before I can finally get this thing running.


----------



## Arizor

Good lord @gfunkernaught !


----------



## des2k...

Zogge said:


> I combined my CPU and GPU loops again and my ambient-to-water delta never goes over 1°C now, which is good news. Flow rate took a hit though, going from 230 l/h to 150 l/h (3x 360x140 + 2x 280x120 rads); it's a long loop.
> 
> Max heat load is GPU 520W + CPU 225W + CPU VRMs ??W.
> 
> However, my Bykski block on the 3090 is starting to drop in efficiency. It was a 12°C water-to-core delta 6 months ago, now 16°C.
> Memory maxes at 48°C with +1000, which is okay/good I guess, and the same as 6 months ago. (MP5Works serial on the back plate)
> Strange about the core though; is the Thermal Grizzly Extreme (non-conductive) degrading that fast?


The block delta shouldn't change at all, let alone by 4°C; I've never seen this on my EK block (I started at a 24°C delta with a bad contact/mount and got to a 10°C delta with a good contact/mount), but I've never had the delta change on the same install/mount.

You might have bad paste, your block got clogged up, or your block screws have become very loose?


----------



## gfunkernaught

Arizor said:


> Good lord @gfunkernaught !


This is what happens when you follow the 1kW BIOS down the rabbit hole... otherwise I would have just stuck with what I had. This forum also gave me a nice push down the hole. Grateful I am. 👍


----------



## dagan

About a week ago I picked up an EVGA 3090 FTW3 Ultra; ugh, mouthful.

I've been using a Seasonic PRIME Ultra Titanium 850 with it, and previously (1080 Ti, 3070 Ti) I never had a power-cycle issue, but I do now with the 3090.

Does anyone have experience with Seasonic warranty? I've read here and there that earlier Seasonic PSU models had aggressive OCP, which triggers the power cycling with the 3090.

If so, I'm wondering whether their newer units fixed the problem and whether they offer advance RMA.


----------



## jura11

Zogge said:


> I combined my CPU and GPU loops again and my ambient-to-water delta never goes over 1°C now, which is good news. Flow rate took a hit though, going from 230 l/h to 150 l/h (3x 360x140 + 2x 280x120 rads); it's a long loop.
> 
> Max heat load is GPU 520W + CPU 225W + CPU VRMs ??W.
> 
> However, my Bykski block on the 3090 is starting to drop in efficiency. It was a 12°C water-to-core delta 6 months ago, now 16°C.
> Memory maxes at 48°C with +1000, which is okay/good I guess, and the same as 6 months ago. (MP5Works serial on the back plate)
> Strange about the core though; is the Thermal Grizzly Extreme (non-conductive) degrading that fast?


I thought my loop was overkill, but yours is on another level hahaha. Yes, of course your flow rate will drop with that much radiator space, plus the MP5Works plate, which I would guess restricts flow too.

A 1°C delta is awesome; on my loop with both GPUs rendering I usually see a 2-3°C delta. Same in normal gaming, like Skyrim SE with lots of mods and an ENB, where in some places my GPU quite easily pulls 500-520W and my temperatures won't break 36-38°C, and that's with the XOC 1000W ReBAR BIOS; VRAM temperatures won't break 60°C for most gaming or rendering.

Regarding the GPU delta, I have observed this on my top RTX 3090 GamingPro, where temperatures are starting to creep up: previously I could hold 38°C at the same ambient, now I'm seeing 39-40°C; VRAM temperatures are exactly the same.

Like you, I used Kryonaut on one of the GPUs; on the other I used ZF-EX, and its temperatures are exactly the same as they were a few months back. I'm guessing the Kryonaut is drying out, because just last week I changed my CPU block from an Aquacomputer Kryos NEXT to a Bykski CPU waterblock and my temperatures dropped by 10°C; if you had seen it, the Kryonaut was literally just dry. On the CPU I have now used ZF-EX, and I will probably redo my loop in a couple of weeks, use ZF-EX on the GPUs, and finally repad the GPUs.

My Kryonaut on the CPU was, I think, 6 months old too.

Hope this helps.

Thanks, Jura


----------



## Falkentyne

jura11 said:


> I thought my loop was overkill, but yours is on another level hahaha...
> 
> Like you, I used Kryonaut on one of the GPUs; on the other I used ZF-EX, and its temperatures are exactly the same as they were a few months back. I'm guessing the Kryonaut is drying out, because just last week I changed my CPU block from an Aquacomputer Kryos NEXT to a Bykski CPU waterblock and my temperatures dropped by 10°C; if you had seen it, the Kryonaut was literally just dry...
> 
> My Kryonaut on the CPU was, I think, 6 months old too.
> 
> Thanks, Jura


This is original Kryonaut?
Do you have the same problem with Kryonaut Extreme that you did with Kryonaut?


----------



## J7SC

gfunkernaught said:


> @Arizor ORRRRR you could go this way
> yes, there are four rads in there. I'm waiting on a few more parts before I can finally get this thing running.
> View attachment 2523768


...there's still hope for @Arizor ...long weekend here, and I've been 'downsizing' a TR / 2x 2080 Ti machine I built 2.5 years ago from 4x D5s and 5x 360/60 rads to 2x D5s and 3x 360/60.

Of course, now my 3090 system has five rads.


----------



## yzonker

Had a little D5 pump saga going here,









EK D5 Pump Vibration (www.overclock.net): "My original D5 pump I bought in January seems to have developed more vibration. Definitely more than the other D5 I have in my loop. I assume this is the bearing going bad? Haven't run it dry. It's had plenty of air go through it from refilling the loop a few times. Just curious what..."





Short version:

Bearing appears to have worn out prematurely. Never run dry. ~7 months 24/7, at 50-75% most of the time.
Replaced the pump. The new one has a SATA plug.
The pump would not run at full speed on SATA.
Using a SATA-to-Molex adapter fixed it.
Any explanation as to why SATA did not work? Must be the cable? Just realized I didn't try another one.

I'd prefer to just use the adapter and stay with Molex, unless there is some risk in using the adapter?
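For reference, the connector rating itself probably isn't the bottleneck. A rough sanity check, using assumed nominal figures (the SATA power spec's 1.5 A per pin across three 12 V pins, and a D5 rated around 23 W at full speed; check your pump's label, these are illustrative numbers):

```python
# Back-of-envelope check: can a SATA power connector feed a D5 pump?
# Figures are assumptions for illustration, not measurements:
#   - SATA power spec: 1.5 A per pin, 3 pins on the 12 V rail -> 4.5 A
#   - typical D5 rating: ~23 W at full speed

SATA_12V_MAX_A = 1.5 * 3        # 4.5 A connector rating on the 12 V rail
D5_POWER_W = 23.0               # assumed D5 draw at full speed

d5_current = D5_POWER_W / 12.0  # ~1.9 A
headroom = SATA_12V_MAX_A - d5_current

print(f"D5 draw: {d5_current:.2f} A, SATA 12 V headroom: {headroom:.2f} A")
```

Since the rating has plenty of headroom on paper, a pump that only reaches full speed on Molex more likely points to voltage droop over the thin SATA leads, a marginal crimp, or a bad cable than to the connector spec itself.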


----------



## ManniX-ITA

Falkentyne said:


> This is original Kryonaut?
> Do you have the same problem with Kryonaut Extreme that you did with Kryonaut?


Same here with Kryonaut on CPU.
A pile of dry mud... about 6 months old.
I bought the 9ml Extreme can ripoff but still didn't use it.
Really hope it's going to be better considering the price.


----------



## Zogge

Forgot to say I have 6x D5s spread out in the loop at around 80% speed.


----------



## yzonker

ManniX-ITA said:


> Same here with Kryonaut on CPU.
> A pile of dry mud... about 6 months old.
> I bought the 9ml Extreme can ripoff but still didn't use it.
> Really hope it's going to be better considering the price.


I've got Extreme cooking in my 5800X and 3090 right now. Only been running about a month though. So far no degradation.


----------



## J7SC

yzonker said:


> Had a little D5 pump saga going here,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK D5 Pump Vibration
> 
> 
> My original D5 pump I bought in January seems to have developed more vibration. Definitely more than the other D5 I have in my loop. I assume this is the bearing going bad? Haven't run it dry. It's had plenty of air go through it from refilling the loop a few times. Just curious what...
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> Short version.:
> 
> Bearing appears to have worn out prematurely. Never run dry. ~7 months 24/7 at 50-75% most of the time.
> Replaced pump. This one has a SATA plug.
> Pump would not run at full speed on SATA
> Using an adapter to molex fixed it.
> Any explanation as to why SATA did not work? Must be the cable? Just realized I didn't try another one.
> 
> Prefer to just use the adapter and stay with molex unless there is some risk using the adapter?


...too bad about the EK D5 - I actually have an EK Revo XTop dual-D5 setup I got about three months ago, and so far, so good - but obviously that's not a long enough time to speculate on reliability. For now, they're perfectly quiet though, even at full speed.

For those as well as 2x XSPC D5 (both models w/ Molex and pwm) I only use Molex, *some with a Sata-to-Molex plug*. The remaining six D5s are all Swiftech Molex (w/ adjustment screw on the back, like the two XSPC D5s) and those have been running since 2013 or so without complaint (other than one I killed by accident when running it dry thinking its Molex was for a Gentle Typhoon fan set  ...)


----------



## yzonker

J7SC said:


> ...too bad about the EK D5 - I actually have an EK Revo XTop dual-D5 setup I got about three months ago, and so far, so good - but obviously that's not a long enough time to speculate on reliability. For now, they're perfectly quiet though, even at full speed.
> 
> For those as well as 2x XSPC D5 (both models w/ Molex and pwm) I only use Molex, *some with a Sata-to-Molex plug*. The remaining six D5s are all Swiftech Molex (w/ adjustment screw on the back, like the two XSPC D5s) and those have been running since 2013 or so without complaint (other than one I killed by accident when running it dry thinking its Molex was for a Gentle Typhoon fan set  ...)


Maybe just bad luck or possibly some debris got in there from a rad that I didn't manage to get 100% clean. Although I haven't seen any signs of that. Blocks are clear and nothing in the res.


----------



## ManniX-ITA

yzonker said:


> I've got Extreme cooking in my 5800X and 3090 right now. Only been running about a month though. So far no degradation.


I've been using Kryonaut for quite a long time.
The degradation usually starts after 6 months; the 1-3°C edge over other thermal pastes gets lost.
But it usually doesn't get worse than that.
The problem is that the moment you open the cooler up, you have to re-paste.
Which is not really good for a paste that claims it doesn't dry out.
Got a tube of Thermalright TFX for the GPU die.


----------



## J7SC

yzonker said:


> Maybe just bad luck or possibly some debris got in there from a rad that I didn't manage to get 100% clean. Although I haven't seen any signs of that. Blocks are clear and nothing in the res.


...you never know what really caused it, other than running it dry accidentally. Still, I clean my rads very, very thoroughly - the two 360/60s I referenced above have been flushed a gazillion times, now waiting for the vinegar solution to do its thing for 24 hours.

The related loop - though not the rads - did have some bigger black 'grinds' that showed up during flushing and had collected at one end of a Koolance QD (good catch!)... this came from an older-style tube set which I knew does that over time... the PrimoChill Advanced LRT tubing, on the other hand, is still 'perfect' after 2.5 years, as was the rest of the fluid.


----------



## Zogge

I have two filters in my loop to collect residue etc.


----------



## jura11

Falkentyne said:


> This is original Kryonaut?
> Do you have the same problem with Kryonaut Extreme that you did with Kryonaut?


Hi there 

Yes, that's original Kryonaut, which I bought from Overclockers.co.uk 6 months ago. I was surprised, because usually Kryonaut is quite thick and this one wasn't.

I haven't tried Kryonaut Extreme or the Kingpin thermal paste yet. The Kingpin paste is not available in the UK, and I'm not sure Kryonaut Extreme is worth it when ZF-EX and Thermalright are the pastes I currently use.

I've only used ZF-EX on a friend's loop, and after 5 months his temperatures are still great.

Hope this helps 

Thanks, Jura


----------



## jura11

Zogge said:


> Forgot to say I have 6x D5s spread out in the loop at around 80% speed.


Nice! I'm running 4 D5 pumps; 3 of them run at the highest speed and the 4th D5 runs at the lowest possible speed. Flow rate is now 246 LPH; previously, with the Aquacomputer Kryos NEXT, it was 210-220 LPH.

As I said in my previous reply, your loop is just overkill. How big is it?

Hope this helps 

Thanks, Jura


----------



## tps3443

Do you guys think an MP5Works would help temps even further? Or should I just not bother with it?

These are the max temps my 3090 Kingpin reaches after looping a Port Royal stress test 20 times. Ambient temp is 24°C, water temp is 27.7°C.

MEM 1=35C Max
MEM 2=40.3C Max
MEM 3=37.5C Max
GPU#2 Back of Die/backplate temp 41.7C Max
GPU Die temp 38.3C Max
GPU temp 41.3C Max 

^ This is with the card stock, on the 520W KP BIOS, with a single 120mm fan blowing on the back of the card and a full-coverage Optimus Fujipoly thermal pad under the backplate.

I ordered the Optimus Kingpin 3090 waterblock last week. But now I am kinda questioning that decision

Even with my memory at +1,800 and full memory voltage increased to 1.49-1.51, the MEM1/MEM2/MEM3, have never exceeded 36.9-41.3C under a Port Royal stress test loop. (Reported by HWinfo)

I am thinking, If I have managed these temps, what is the Optimus capable of lol. (probably something ridiculous) 


I could cancel the Optimus KP block, and just grab a MP5 Works. Any thoughts on this?

I am worried I won't see this Optimus KP block for 6 months or more. That's what everyone keeps saying as they brag about their early queue numbers.
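When weighing a block swap, the useful numbers are each sensor's delta over water rather than the absolute temps, since water temperature is set by the rads and fans, not the block. A small sketch using the figures posted above:

```python
# Deltas over water temperature for the temps posted above.
# A new block can only shrink the sensor-to-water deltas; the
# water-to-ambient gap is fixed by radiator capacity and fan speed.
water_c = 27.7    # reported loop water temp
ambient_c = 24.0  # reported ambient

temps_c = {       # max temps from the Port Royal loop (as posted)
    "MEM1": 35.0,
    "MEM2": 40.3,
    "MEM3": 37.5,
    "GPU die": 38.3,
    "GPU": 41.3,
    "Backplate": 41.7,
}

for name, t in temps_c.items():
    print(f"{name}: {t - water_c:+.1f} °C over water")

print(f"Water over ambient: {water_c - ambient_c:+.1f} °C")
```

With the hottest memory sensor only ~12-13°C over water, a better block has a fairly small window left to improve into.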


----------



## jura11

yzonker said:


> Had a little D5 pump saga going here,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EK D5 Pump Vibration
> 
> 
> My original D5 pump I bought in January seems to have developed more vibration. Definitely more than the other D5 I have in my loop. I assume this is the bearing going bad? Haven't run it dry. It's had plenty of air go through it from refilling the loop a few times. Just curious what...
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> Short version.:
> 
> Bearing appears to have worn out prematurely. Never run dry. ~7 months 24/7 at 50-75% most of the time.
> Replaced pump. This one has a SATA plug.
> Pump would not run at full speed on SATA
> Using an adapter to molex fixed it.
> Any explanation as to why SATA did not work? Must be the cable? Just realized I didn't try another one.
> 
> Prefer to just use the adapter and stay with molex unless there is some risk using the adapter?


Hi there 

How many other devices are you running? By that I mean, how many HDDs or SSDs, and what is powered through Molex or through SATA?

I have run into such an issue only once, when I ran the Aquaero etc. on a single Molex; literally, under rendering, my Super Flower 8Pack 2000W PSU turned off all 10 HDDs in the worst case, or turned off the Aquaero 😞

Hope this helps 

Thanks, Jura


----------



## newls1

Someone please help me understand this. Why does my GPU voltage stay @ .987v (something close to that) for all 3DMark bench tests?? This is pissing me off severely. GPU is an FTW3 Ultra 3090 with the 520W EVGA ReBAR BIOS, and it does it on the stock BIOS as well. This obviously hugely affects my boost speeds, which NEVER go past 2040MHz... Things I've tried are:
1) BIOS flashing, didn't fix this issue
2) under the 3DMark profile in the NVCP, I set the GPU to "maximum performance", same issue.... also preset other settings in the profile too, no change
3) Card is FCWB'd so temps don't exceed 36c under the most demanding part of the bench test, so it's NOT temp related.....

What the hell can I do here?? Any input is appreciated


----------



## Arizor

@newls1 just to get the obvious out of the way, what's your afterburner set to? And in its Settings, do you have voltage control unlocked and set to Third Party?


----------



## newls1

Arizor said:


> @newls1 just to get the obvious out of the way, what's your afterburner set to? And in its Settings, do you have voltage control unlocked and set to Third Party?


oh, sorry, forgot to mention that. +105 core (boosts to 2175/2160 in all games), +1100 mem. The voltage slider (if it actually does anything) is maxed out. Don't think I know what you mean by "third party"... In all my games, the GPU goes to 2175MHz with temps under 32c @ 1.100v (somewhere around there), then settles down to 2160 @ 1.087v


----------



## Arizor

Hey @newls1 , this is what I mean. 

Regardless, if it's boosting normally in games, it seems like a 3DMark issue rather than a BIOS or AB issue. I've not encountered it before myself; I'd contact the makers of 3DMark about it.


----------



## yzonker

jura11 said:


> Hi there
> 
> How many other devices are you running? By that I mean, how many HDDs or SSDs, and what is powered through Molex or through SATA?
> 
> I have run into such an issue only once, when I ran the Aquaero etc. on a single Molex; literally, under rendering, my Super Flower 8Pack 2000W PSU turned off all 10 HDDs in the worst case, or turned off the Aquaero 😞
> 
> Hope this helps
> 
> Thanks, Jura


Just the pumps. This machine has a single NVMe drive. Nothing else. Even a single D5 on the sata cable would only run at 3800-3900 rpm but runs full speed (4700 or so) on the molex cable.


----------



## newls1

Arizor said:


> Hey @newls1 , this is what I mean.
> 
> Regardless, if it's boosting normal in games, seems like it's a 3DMark issue, rather than a BIOS or AB issue. Not encountered it before myself; I'd contact the makers of 3DMark about it.
> 
> View attachment 2523875


what does putting that setting on "third party" do? I have that on its default setting.... Curious now?!


----------



## Arizor

@newls1 it just ensures the voltage is correctly unlocked in Afterburner for the right type of card (i.e. it's not just placebo). But again, since yours is working fine in other software, seems like a 3DMark issue rather than a BIOS/card issue.


----------



## Falkentyne

newls1 said:


> what does putting that setting on "third party" do? I have that on its default setting.... Curious now?!


You can't control GPU voltage in MSI Afterburner anyway. Forget about that option. You can only control the VID along the V/F curve, nothing more, and the voltage slider in AB only unlocks the 1.087v-1.10v VID tier, which is +15 more MHz of GPU speed as well. That is not the same as GPU voltage. And even adjusting the V/F curve without adjusting NVVDD voltage internally will cause a drop in effective clocks (either MSVDD or NVVDD will drop by at least 30mv just by touching the curve!), so you just don't want to mess with it.

The internal voltages are VDDC and MSVDDC and you can only control those with external hardware tools, unless you have an actual Kingpin card with Classified tool / Dip switches.


----------



## Falkentyne

newls1 said:


> Someone please help me understand this. Why does my voltage for GPU stay @ .987v (something close to that) for all 3dmark bench tests?? This is pissing me off severely. GPU is FTW3 Ultra 3090 with 520w evga ReBar BIOS, and does it on stock bios as well. This obviously hugely affects my boost speeds which NEVER go past 2040Mhz... Things ive tried are:
> 1) bios flashing, didnt fix this issue
> 2) under 3mark profile in the NVCP, I set gpu to "maximum performance" same issue.... also preset other settings in profile too, no change
> 3) Card is FCWB'd so temps dont exceed 36c under the most demanding part of bench test, so its NOT temp related.....
> 
> What the hell can I do here?? Any input is appreciated


Can you please check your HWINFO64 and tell me what TDP% and TDP Normalized % both say?
That looks like you are running into stock NVVDD power limits for the 1.081v-1.10v VID cap around 520W or so. Hard to say without seeing your TDP Normalized%.
Increasing TDP Slider past 100% (if possible) also slightly boosts how much higher you can go with the stock NVVDD and MSVDD voltage, but a TDP slider set below 100% will not go below the stock values for MSVDD/NVVDD power limits.

We already know the Kingpin 1kw Bios completely removes these limits.
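For anyone wanting to sanity-check the slider math, the first-order relationship is just the BIOS power limit times the slider percentage. This is a simplified sketch of that relationship only; as noted above, real cards split the budget across rails, which is why TDP Normalized % can cap you before total board power does:

```python
# Simplified TDP slider math: effective board power limit in watts.
# Real Ampere cards also enforce per-rail (NVVDD/MSVDD) limits that
# this single number does not capture.
def power_limit_w(bios_limit_w: float, slider_pct: float) -> float:
    """Effective total board power limit for a given slider setting."""
    return bios_limit_w * slider_pct / 100.0

print(power_limit_w(520, 100))  # -> 520.0 (stock slider on a 520 W BIOS)
print(power_limit_w(520, 110))  # -> 572.0 (110% slider, if the BIOS allows it)
```

So a card pinned at ~0.99 V while TDP% reads near 100 is behaving exactly as the limit predicts; more slider headroom or a higher-limit BIOS is the only way past it.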


----------



## des2k...

Falkentyne said:


> You can't control GPU Voltage in MSI Afterburner away. Forget about that option. You can only control the VID along the V/F Curve-nothing more, and that voltage slider in AB will only unlock the 1.087v-1.10v VID tier, which is +15 more mhz of GPU speed as well. That is not the same as GPU voltage. And even adjusting the V/F curve without adjusting NVVDD voltage internally will cause a drop in effective clocks (either MSVDD or NVVDD will drop by at least 30mv just by touching the curve!), so you just don't want to mess with it.
> 
> The internal voltages are VDDC and MSVDDC and you can only control those with external hardware tools, unless you have an actual Kingpin card with Classified tool / Dip switches.


*edit
You make it sound like every 3090 outside the Kingpin is broken!

Ref 2x8Pin Zotac works fine on my side. 

vcore is always 15mv to 25mv higher than what's requested by Afterburner with offset or VF curve, max 1.1v


----------



## Falkentyne

des2k... said:


> You make it sounds like every 3090 outside Kingpin are broken!
> 
> Ref 2x8Pin Zotac works fine on my side. Up to 1.1v set, vcore goes up to 1.125v.
> 
> vcore is always 15mv to 25mv higher than what's requested by Afterburner with offset or VF curve.
> 
> Of course Kingpin goes over this 1.125mv vcore(vbios limit) but that's also really outside Nvidia vbios specs.
> 
> 15Mhz ,30Mhz loss on the eff freq; you only see that after 2100+core.


Can you show me a screenshot of this 1.125v? How are you monitoring this?

If it's the EVC2X Elmor tool, then EVERY card can go up to 1.125v. This is internal voltage.
I'm referring to VID on my posts (e.g. HWinfo64, MSI Afterburner, GPU-Z), not internal voltage. Because normal users can't monitor internal voltage.

I haven't seen a single card that would allow a VID higher than 1.10v. Thank you.


----------



## des2k...

Falkentyne said:


> Can you show me a screenshot of this 1.125v? How are you monitoring this?
> 
> If it's the EVC2X Elmor tool, then EVERY card can go up to 1.125v. This is internal voltage.
> I'm referring to VID on my posts (e.g. HWinfo64, MSI Afterburner, GPU-Z), not internal voltage. Because normal users can't monitor internal voltage.
> 
> I haven't seen a single card that would allow a VID higher than 1.10v. Thank you.


sure, NVVDD output rail (vcore) always runs at a positive offset

I will also add that the NVIDIA boost / eff freq needs 4 consecutive V/F points on the vbios to work best. After 1.1v or 2100 (whatever the max is) you can't have those 4 points. The Kingpin dip switches / Classified tool set a single V/F point outside the vbios, since it's an LN2 card😁


----------



## Falkentyne

des2k... said:


> sure, NVVDD output rails(vcore), always runs in positive offset
> 
> 
> View attachment 2523947


I see a 1.012v there not 1.112v....
That's 88mv below "1.10v".


----------



## des2k...

Falkentyne said:


> I see a 1.012v there not 1.112v....
> That's 88mv below "1.10v".


The requested voltage (red) vs real voltage (green) for vcore
*edit
lol, I forgot to check, but yeah, NVVDD stops at 1.1v. No positive offset


----------



## Apecos

dagan said:


> About a week ago I picked up an EVGA 3090 FTW Ultra, ugh, mouthful
> 
> I've been using a Seasonic 850 PRIME Ultra Titanium with it; previously (1080 Ti, 3070 Ti) I never had a power-cycle issue, but I do now with the 3090.
> 
> Does anyone have experience with Seasonic warranty? I've read here and there that earlier models of Seasonic PSUs had aggressive OCP, which triggers the power cycling with the 3090.
> 
> If so, I'm wondering if their newer units fixed the problem, and do they offer advanced RMA?


Hi dagan, check this out: Roman uses the same power supply. After loading the game, the OCP was triggered. I think this particular model of Seasonic has some issues, because he is running a 3080 Ti on the bench.






22:55 min


----------



## TheGlow

Hey boys, I'm in the club. Camped at a Best Buy and got the last ticket, which was for a 3090. So now I'm learning how to mine to offset the cost.
I was getting mem temps of 105, so I did the pad replacement and only got down to 100. If I open my case side panel, I get maybe 2 more °C off. If I aim a fan below the 3090, it can dip to 92 depending on ambient temps. My room is always 8-10°F hotter than the thermostat; if I turn off all my PC stuff, it takes about 2 hours to cool down to match the rest of the house, so I think it's just a bad design, the house holds the heat longer, etc.

Anyways, what else can I do? I have no idea where to start to overclock and get better scores in 3DMark, or to just optimize for games as well.


----------



## yzonker

If you search this thread you'll find multiple posts about Seasonic OCP issues.









[Official] NVIDIA RTX 3090 Owner's Club


Does anyone know exactly what & where the HWInfo 'Memory Junction Temperature' is measured ? Is it an average ? Does it include front & back VRAM chips ? ...on my Strix 3090, that temp is typically in the mid-70s, even when the CPU is shown at mid-60s. Tx. :)




www.overclock.net


----------



## ENTERPRISE

Hey guys, 

Is there a preferred app for monitoring total power draw from the GPU? I have just flashed the KP 1000W BIOS and wanted to check my settings vs the power draw at load. I have HWMonitor, which I guess would do the job?


----------



## Falkentyne

ENTERPRISE said:


> Hey guys,
> 
> Is there a preferred app for monitoring total power drawer from the GPU ? I have just flashed the KP1000Watt BIOS and wanted to check out my settings VS the power draw at load. I have HWMonitor which I guess would do the job ?


HWinfo64 with Rivatuner Statistics Server (RTSS) OSD Plugin. Or rainmeter skins with HWinfo.
HWmonitor is hot garbage.
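If you'd rather log power draw from a script than watch an OSD, the NVIDIA driver exposes the same board-power reading through `nvidia-smi`. A minimal sketch (the parse step is split out so it can be exercised on canned output; the live query assumes an NVIDIA driver with `nvidia-smi` on the PATH):

```python
import subprocess

def parse_power_draw(csv_text: str) -> list[float]:
    """Parse the output of:
       nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits
    One line per GPU, watts as a bare number. Returns watts per GPU."""
    return [float(line) for line in csv_text.strip().splitlines() if line.strip()]

def read_power_draw() -> list[float]:
    """Query the driver directly (requires an NVIDIA driver install)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_power_draw(out)

# Example against canned output (what a ~520 W load might report):
print(parse_power_draw("519.87\n"))  # -> [519.87]
```

Run `read_power_draw()` in a loop with a timestamp and you get a power log you can line up against your benchmark runs; note this is the driver's board-power estimate, the same figure HWiNFO reports as GPU Power.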


----------



## Zogge

jura11 said:


> Nice there, I'm running 4 D5 pumps, 3 of them are running at highest speed and 4th D5 is running just at lowest possible speed and flow rate is now at 246LPH,previously with Aquacomputer Kryos NEXT flow rate been in 210-220LPH
> 
> As I said in previous reply, your loop is just overkill, how big is your loop?
> 
> Hope this helps
> 
> Thanks, Jura


Now where do I start...

1x Aquacomputer Airplex Gigant 3360x140
1x Aquacomputer Aqualis 450 with level meter
6x D5 Next vision pumps spread out
a lot of 10/13 soft tubing
a lot of compression fittings
3x Aquacomputer Octo
1x Aquacomputer 5 LT
1x Aquacomputer 6 PRO
1x Aquacomputer filter
2x Aquacomputer Highflow Next Vision
3x Alphacool Industry 360 Nexxos 30mm
2x Alphacool Industry 240 X-Flow 45mm
2x Alphacool Nexxos 360 UT 60mm
2x Alphacool Nexxos 120 30mm
1x Aquacomputer Highflow
2x Aquacomputer MPS Delta 1000 pressure sensors
1x Asus Crosshair Formula VRM liquid cooling block
1x Bykski 3090 Strix block front
1x MP5 works serial block back (G1/4")
1x Tech-N CPU block (5950X)
50+ Assorted fans 120-140mm

About 5 liters of Aquacomputer DP Ultra clear.

All in 2x Thermaltake W200 Core + 1xThermaltake Core P200 (stacked in column)

I estimate the loop to be something like 7-8m tubing and all in serial.

Cost - WAY too much. Probably north of 7000 USD all in all. (no hardware, just the above+chassis)

But in my defense, I plan to use this for a long time.
I will connect my 10980XE server to this loop (mounted in the rear of one of the W200 chassis) as well, if I can just figure out how to turn the loop on when either computer is switched on, and off when both computers are switched off. Any suggestions here?
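On the "run the loop when either PC is on" question: electrically it's just an OR of the two power states, e.g. a small relay board or microcontroller watching each PSU's PS_ON/12 V sense line and switching the pump PSU. The pin wiring and relay side are hardware-specific and not shown; this sketch is only the decision logic such a controller would poll:

```python
# Pump-controller decision logic: run the loop whenever at least one
# monitored PC's power sense line is live. Sense wiring and the relay
# driver are hardware-specific and omitted here.

def loop_should_run(pc_senses: list[bool]) -> bool:
    """True if any monitored PC is powered on."""
    return any(pc_senses)

# Truth table for two PCs:
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", loop_should_run([a, b]))
```

Dual-PSU adapter boards that gang PS_ON signals achieve the same OR purely in hardware, which avoids any firmware at all; the code above is just the behaviour you'd want either way.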


----------



## esolma

Hello everybody !

I have a question: does anyone know which software I can use to mod my RTX 3090 FE firmware?
Before, I had a Titan X and used MaxwellTweaker for it, but for my 3090 I don't know...

I know that I can just flash the Kingpin BIOS, but this is not for GPU performance. It's for modifying fan control. My GPU is at 75° and the fans turn at only 1600 RPM.... And I don't want to use software for that, like MSI Afterburner.

Thank you


----------



## J7SC

Zogge said:


> Now where do I start...
> (...)


^^...love it & very clean by the looks of it !

I'm in the middle of re-configuring my home office's work and play machines... not counting dev servers, 5 mobos over 2 TT Core P5/8 cases and most of it is w-cooled, so I can identify a bit 

But when it's all done, there will only be 2 'stations' each with 2 monitors...a big space saving over what I started out with. One key step was to separate the cooling (rads, pumps) via Koolance QDs and a mobile cooling table. Took a fair amount of Primochill Advanced LRT, but has other big advantages.


----------



## Zogge

J7SC said:


> ^^...love it & very clean by the looks of it !
> 
> I'm in the middle of re-configuring my home office's work and play machines... not counting dev servers, 5 mobos over 2 TT Core P5/8 cases and most of it is w-cooled, so I can identify a bit
> 
> But when it's all done, there will only be 2 'stations' each with 2 monitors...a big space saving over what I started out with. One key step was to separate the cooling (rads, pumps) via Koolance QDs and a mobile cooling table. Took a fair amount of Primochill Advanced LRT, but has other big advantages.


Thanks! Yes, quite clean I would say. Of course it ends up being quite a lot of fan connectors, power connectors, tubes and all.
As you could see, it is the lower chassis that houses the computers right now. The top part only has fans and empty space.

Again, inside the chassis it is not a world-class clean look, but it is still okay I guess. I tried to route things as well as possible, and it can always be improved. The good thing is that I have room for 2 more systems if I'd like - just hook them up to the loop, or use the space for whatever.

If there is one thing I hate, it is cramped small chassis with millimetres of allowance, where you have to bend tubes and cables until they break to make them fit.


----------



## ENTERPRISE

Falkentyne said:


> HWinfo64 with Rivatuner Statistics Server (RTSS) OSD Plugin. Or rainmeter skins with HWinfo.
> HWmonitor is hot garbage.


HWiNFO... Thank you, that was the one I could not remember.


----------



## jura11

Zogge said:


> Now where do I start...
> 
> 1x Aquacomputer Airplex Gigant 3360x140
> 1x Aquacomputer Aqualis 450 with level meter
> 6x D5 Next vision pumps spread out
> a lot of 10/13 soft tubing
> a lot of compression fittings
> 3x Aquacomputer Octo
> 1x Aquacomputer 5 LT
> 1x Aquacomputer 6 PRO
> 1x Aquacomputer filter
> 2x Aquacomputer Highflow Next Vision
> 3x Alphacool Industry 360 Nexxos 30mm
> 2x Alphacool Industry 240 X-Flow 45mm
> 2x Alphacool Nexxos 360 UT 60mm
> 2x Alphacool Nexxos 120 30mm
> 1x Aquacomputer Highflow
> 2x Aquacomputer MPS Delta 1000 pressure sensors
> 1x Asus Crosshair Formula VRM liquid cooling block
> 1x Bykski 3090 Strix block front
> 1x MP5 works serial block back (G1/4")
> 1x Tech-N CPU block (5950X)
> 50+ Assorted fans 120-140mm
> 
> About 5 liters of Aquacomputer DP Ultra clear.
> 
> All in 2x Thermaltake W200 Core + 1xThermaltake Core P200 (stacked in column)
> 
> I estimate the loop to be something like 7-8m tubing and all in serial.
> 
> Cost - WAY too much. Probably north of 7000 USD all in all. (no hardware, just the above+chassis)
> 
> But in my defense, I plan to use this for a long time.
> I will connect my 10980XE server to this loop (mounted in the rear of one of the W200 chassis) as well, IF I just figure out how to turn on the loop if either of computers are switched on and switched off if both computers are switched off. Any suggestions here ?
> 
> View attachment 2524056


Awesome loop and a great looking thing. Although I'm not a big fan of TT, that case has always been my favourite from them.

I planned to get it as well, but after what they did to Caselabs I bought an original Caselabs M8 with pedestal.

In that case I'm running 2x HWLabs SR-2 360mm on top, with 2x Bykski 360mm 60mm radiators in the bottom pedestal, plus a MO-RA3 360mm - 39 fans in total, I think. With that combination I can cool both RTX 3090 GamingPros without a single issue and keep them under 40°C in 24/7 rendering, with a water delta-T of 3-4°C. Both of my RTX 3090s run the XOC 1000W BIOS capped at 75-80% for rendering, with a 105MHz OC on the core and 1295MHz on the VRAM. The loop also has a 5950X fitted with a Bykski waterblock, which to my surprise is better than the Aquacomputer Kryos NEXT. Previously I ran 3-GPU and 4-GPU setups as well, like 3x GTX 1080s, or 2x GTX 1080s with a GTX 1080 Ti, or 3x RTX 2080 Tis with a GTX 1080 Ti, etc. That loop went through many iterations.

The plan was to get an SMA-8, but sadly Caselabs went bankrupt, and getting a Caselabs SMA-8 for a good price is now almost impossible.

But I must say, awesome setup mate and a great selection of parts.

Hope this helps, and best of luck mate

Thanks, Jura


----------



## jura11

esolma said:


> Hello everybody !
> 
> I have a question : anyone know which soft I can use to mod my RTX 3090 FE firmware ?
> Before I had a Titan X and i used MaxwellTweaker for it but for my 3090 I don't know...
> 
> I know that I can just use kingpin bios and flash but this is not for GPU performance. It's for modifiying fan control. My GPU is at 75° and fan turn only at 1600RPM.... And I didn't want to use software for that like MSI Afterburner.
> 
> Thank you


Hi there 

Sadly, there is no tool that will allow you to tweak a stock BIOS. The last one was released for Maxwell GPUs; since then, no other tool has allowed it.

If you are looking for a higher power limit, then the only option for your GPU is a shunt mod. If you are looking for better temperatures, then water cooling will always be the best option, although another possibility is an aftermarket GPU cooler from Raijintek. I'm not sure how those fare with Ampere's power draw, though, as they really shine on GPUs drawing something like 300W at most.

Don't be scared of using Afterburner; it's a great app and works well.

Hope this helps 

Thanks, Jura


----------



## Falkentyne

esolma said:


> Hello everybody !
> 
> I have a question : anyone know which soft I can use to mod my RTX 3090 FE firmware ?
> Before I had a Titan X and i used MaxwellTweaker for it but for my 3090 I don't know...
> 
> I know that I can just use kingpin bios and flash but this is not for GPU performance. It's for modifiying fan control. My GPU is at 75° and fan turn only at 1600RPM.... And I didn't want to use software for that like MSI Afterburner.
> 
> Thank you


Who told you you can use the Kingpin bios with a FE? You can't flash it. NVflash will not bypass the ID check.
No fan curve editor exists, period.

There _IS_ a nvflash with PCI ID check bypassed by force, for Ampere to "allow" crossflashing bioses (which people already do with the regular NVflash with -6) but this was only tested on laptop cards, as someone (I am not sure, the thread was on techpowerup and on reddit, where someone wanted to flash a laptop 3070 95W Bios to 115W or 150W or something from a different OEM that had a higher power limit) managed to patch two bytes in NVflash to allow crossflashing of laptop ampere 95W bioses with 115W or 150W bioses, (without using the -6 parameter, which no longer works). While it will flash desktop cards with the proper vbios for your card still, ONLY ONE person tried using this, to flash a 3090 FE vbios on a ROG Strix card. The flash completed but the card refused to load the vbios on boot (Motherboard gave a "Load VGA Bios" Post code error) and he had to switch to the backup bios, boot, flip the switch back in windows and reflash his original bios.

I posted this NVflash before (and here again--rename the attachment from txt to exe), but you have a 99% chance of a complete brick if you try to flash a kingpin 1kw or 520w bios on your card. I DO NOT suggest even trying this unless you have some magical way to force flash your card with a hardware tool that can read a tiny UDFN-8 bios chip, or you have a way to blind flash with a second card or iGPU (no one has been brave enough to even try, for good reason).

If you are one of the 0.1% of people with hardware capable of recovering from bios flashes without an easy to program SOIC8 bios chip (Pomona 5250 clip, etc), go ahead and risk your card (please don't be stupid).

(Let me make this clear again I'm talking about this modified (2 bytes) NVflash which will "allow" NVflash 5.670.0 to bypass the ID protection when trying to flash a 3090 FE Bios on another card or another card's BIOS on a 3090 FE. While it will flash, you have a 99% chance of your 3090 FE not booting and needing to be blind flashed by booting with iGPU or a different card in primary X16 slot,or reflashed with a special tool that can flash inline UDFN8 bios chips, or desoldered (I can't even find a proper socket adapter for a desoldered chip, in stock for this chip format). People with dual bios switches can recover easily. (AIB cards use SOIC8 bios chips which can be force flashed with a Pomona 5250 clip, Male to female adafruit or any standard jumper cables, 1.8v adapter, and hardware flasher (e.g. Skypro etc)

(Original source: nvflash.exe )


----------



## kryptonfly

Falkentyne said:


> Who told you you can use the Kingpin bios with a FE? You can't flash it. NVflash will not bypass the ID check.
> No fan curve editor exists, period.
> 
> There _IS_ a nvflash with PCI ID check bypassed by force, for Ampere to "allow" crossflashing bioses (which people already do with the regular NVflash with -6) but this was only tested on laptop cards, as someone (I am not sure, the thread was on techpowerup and on reddit, where someone wanted to flash a laptop 3070 95W Bios to 115W or 150W or something from a different OEM that had a higher power limit) managed to patch two bytes in NVflash to allow crossflashing of laptop ampere 95W bioses with 115W or 150W bioses, (without using the -6 parameter, which no longer works). While it will flash desktop cards with the proper vbios for your card still, *ONLY ONE person tried using this, to flash a 3090 FE vbios on a ROG Strix card.* The flash completed but the card refused to load the vbios on boot (Motherboard gave a "Load VGA Bios" Post code error) and he had to switch to the backup bios, boot, flip the switch back in windows and reflash his original bios.
> 
> I posted this NVflash before (and here again--rename the attachment from txt to exe), but you have a 99% chance of a complete brick if you try to flash a kingpin 1kw or 520w bios on your card. I DO NOT suggest even trying this unless you have some magical way to force flash your card with a hardware tool that can read a tiny UDFN-8 bios chip, or you have a way to blind flash with a second card or iGPU (no one has been brave enough to even try, for good reason).
> 
> If you are one of the 0.1% of people with hardware capable of recovering from bios flashes without an easy to program SOIC8 bios chip (Pomona 5250 clip, etc), go ahead and risk your card (please don't be stupid).
> 
> (Original source: nvflash.exe )


From my tests: I tried a dozen vBIOSes on my Gigabyte Turbo (Asus, MSI, EVGA, Galax, Palit, the 1000W one...). Only two refused to flash: the modified Galax 1000W vBIOS (not the one posted here but another one on TechPowerUp, where the same version also exists with a "normal" power limit; I don't remember the exact version number) and the 3090 FE vBIOS. NVflash didn't want to flash those two ("not in correct format", something like that). ALL the others flashed fine, but not a single one actually worked: temps and all the sensor readings were crazy, and one showed +200 W power draw at idle! I posted a screenshot a few days ago.

That's all I can say from my experience. I can't speak for an FE card, but I could not flash the FE vBIOS onto my Gigabyte Turbo (NVflash refused), though with no brick or anything. I'm not 100% sure, but I think I tried both NVflash builds (normal and hacked); I don't remember well after so many tries and tests... (I don't know if it matters, but I'm in CSM compatibility mode, for MBR and legacy stuff.) I could be wrong somewhere; I gave up flashing vBIOSes for my shunt mod. Anyway, glad to share here 

@esolma : my advice would be to watercool your FE, just for the silence and temps, and then, if you want, try the shunt mod.


----------



## V I P E R

des2k... said:


> The requested voltage(red) vs real voltage(green) for vcore
> *edit
> lol, I forgot to check but yeah NVVDD stops at 1.1v. no positive offset
> 
> 
> 
> View attachment 2523948
> 
> View attachment 2523950


What is this software ampere tools and where can I download it?


----------



## ManniX-ITA

I would order it RIGHT NOW


----------



## esolma

Hello everybody,

Thank you very much for your answers. I did not realize that it was not possible to flash a 3090 FE... and I didn't want to try. I already changed the thermal pads and lost my warranty; I didn't want to risk breaking it.
I will use Afterburner as you suggested.

Thank you very much


----------



## PLATOON TEKK

ManniX-ITA said:


> I would order it RIGHT NOW
> 
> View attachment 2524152


Your capitalization of RIGHT NOW has me intrigued, ha. My bad, I missed this earlier; what's so good about this paste?


----------



## ManniX-ITA

PLATOON TEKK said:


> Your capitalization on right now has me intrigued ha. My bad I missed this, what’s so good about this paste?


It was out of stock for almost 2 months 

It's a thermal *putty*, denser than a paste (Shore OO 50).
Very good price for 50 g, and a nice 10 W/mK thermal conductivity.
It's a good idea to use some between the VRAM ICs, caps, etc. to improve thermals; it fills the gaps the thermal pads leave.
Plus I can imagine some other creative uses, of course; it's like a liquid thermal pad. Or a solid thermal paste.


----------



## Wihglah

PLATOON TEKK said:


> Your capitalization on right now has me intrigued ha. My bad I missed this, what’s so good about this paste?


It's very effective when used in the right way.


----------



## J7SC

ManniX-ITA said:


> It was out of stock for almost 2 months
> 
> It's a thermal *putty*, more dense than a paste (Shore OO 50).
> Very good price for 50g and a nice 10 W/mK thermal conductivity.
> Good idea to use some between VRAM ICs, Caps, etc to improve thermals, will fill the gaps with the thermal pad.
> Plus I can imagine some other creative ideas of course, it's like a liquid thermal pad. Or a solid thermal paste.


Thanks for the heads-up... I had two jars left, but I'm in the middle of a multi-system build that will use much of it up, so time to restock 

PS - The price went up just a tad (~$2 per jar) since my last order in June


----------



## PLATOON TEKK

ManniX-ITA said:


> It was out of stock for almost 2 months
> 
> It's a thermal *putty*, more dense than a paste (Shore OO 50).
> Very good price for 50g and a nice 10 W/mK thermal conductivity.
> Good idea to use some between VRAM ICs, Caps, etc to improve thermals, will fill the gaps with the thermal pad.
> Plus I can imagine some other creative ideas of course, it's like a liquid thermal pad. Or a solid thermal paste.


Boom! That’s good to know. Still working on my sub zero build so this should help. I ordered a grip. Thanks for the heads up. ✊




Falkentyne said:


> Who told you you can use the Kingpin bios with a FE? You can't flash it. NVflash will not bypass the ID check.
> No fan curve editor exists, period.
> 
> There _IS_ a nvflash with PCI ID check bypassed by force, for Ampere to "allow" crossflashing bioses (which people already do with the regular NVflash with -6) but this was only tested on laptop cards, as someone (I am not sure, the thread was on techpowerup and on reddit, where someone wanted to flash a laptop 3070 95W Bios to 115W or 150W or something from a different OEM that had a higher power limit) managed to patch two bytes in NVflash to allow crossflashing of laptop ampere 95W bioses with 115W or 150W bioses, (without using the -6 parameter, which no longer works). While it will flash desktop cards with the proper vbios for your card still, ONLY ONE person tried using this, to flash a 3090 FE vbios on a ROG Strix card. The flash completed but the card refused to load the vbios on boot (Motherboard gave a "Load VGA Bios" Post code error) and he had to switch to the backup bios, boot, flip the switch back in windows and reflash his original bios.
> 
> I posted this NVflash before (and here again--rename the attachment from txt to exe), but you have a 99% chance of a complete brick if you try to flash a kingpin 1kw or 520w bios on your card. I DO NOT suggest even trying this unless you have some magical way to force flash your card with a hardware tool that can read a tiny UDFN-8 bios chip, or you have a way to blind flash with a second card or iGPU (no one has been brave enough to even try, for good reason).
> 
> If you are one of the 0.1% of people with hardware capable of recovering from bios flashes without an easy to program SOIC8 bios chip (Pomona 5250 clip, etc), go ahead and risk your card (please don't be stupid).
> 
> (Let me make this clear again: I'm talking about the modified (2-byte patched) NVflash, which will "allow" NVflash 5.670.0 to bypass the ID protection when flashing a 3090 FE BIOS onto another card, or another card's BIOS onto a 3090 FE. While it will flash, you have a 99% chance of your 3090 FE not booting and needing to be blind-flashed (by booting from the iGPU or from a different card in the primary x16 slot), reflashed with a special tool that can program inline UDFN8 BIOS chips, or desoldered (I can't even find a proper socket adapter for a desoldered chip, in stock, for this package). People with dual-BIOS switches can recover easily. AIB cards use SOIC8 BIOS chips, which can be force-flashed with a Pomona 5250 clip, male-to-female Adafruit or any standard jumper cables, a 1.8 V adapter, and a hardware flasher (e.g. SkyPRO, etc.).)
> 
> (Original source: nvflash.exe )


Thanks for sharing this man, always appreciated.


----------



## mattxx88

ManniX-ITA said:


> I would order it RIGHT NOW
> 
> View attachment 2524152


thank you, I have been waiting for this for months


----------



## Carillo

Hello. Has anyone here experienced a problem with underperformance? I get 13,800 points in Port Royal overclocked, on the same driver and the same Windows where I got over 16,000 points with another 3090. Both cards also used the Kingpin BIOS. I have tried DDU and another Windows install, but same result: only 12,800 points at stock, with the Kingpin BIOS and a chiller. The card is a Gigabyte 3090 Turbo with a water block (naturally). Links to the scores:









I scored 13 824 in Port Royal
Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com

I scored 12 936 in Port Royal
Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## kryptonfly

Carillo said:


> Hello. Has anyone here experienced a problem with underperformance? 13800 points in Port Royal overclocked with the same driver and the same windows as I got over 16,000 points with another 3090 .. Both cards also used Kingpin bios. I have tried DDU and another Windows, but same result .. only 12800 points stock, with kingpin bios and chiller. The card is a Gigabyte 3090 Turbo with water block (Naturally) Link to score:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 13 824 in Port Royal
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com
> 
> I scored 12 936 in Port Royal
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Hi bro 🤟 I also have a Gigabyte 3090 Turbo, but no other vBIOS works for me; I don't know how you got a Kingpin vBIOS to work! How? I tried many vBIOSes and only the Gigabyte ones work; currently I have a Gaming OC vBIOS. With the others, all the sensor readings went completely crazy and I hit heavy power limits.

What are your water block and pads?

EDIT: Try another vBIOS, e.g. the Gigabyte Gaming OC; with that 390W vBIOS I get ~14,300 in Port Royal, for reference.
I scored 14 349 in Port Royal
Reinstall 3DMark, maybe some of its files are corrupted.
Has nothing changed since last time? New hardware? Check the PCIe 3.0 link.


----------



## Carillo

kryptonfly said:


> Hi bro 🤟 I also have a Gigabyte 3090 Turbo but no vbios works, I don't know how you succeeded to make a Kingpin vbios works ! How ? I tried many vbios and only those from Gigabyte work, currently I have a Gaming OC vbios. I experienced all probes completely crazy and heavy PL with others vbios.
> 
> What is your waterblock and pads ?
> 
> EDIT : Try with another vbios = Gigabyte Gaming OC, with this 390W vbios I do ~14300 in Port Royal, for reference.
> I scored 14 349 in Port Royal
> Reinstall 3dmark, maybe there're some files corrupted
> Nothing changed since the last time ? New hardware ? Check PCIe 3.0 link.


Thank you for the answer. I first tried the Gigabyte Waterforce BIOS, but got heavy power-limit throttling! No, nothing has changed in terms of hardware, and I have tried reinstalling 3DMark. I will try more BIOSes in a couple of hours and report back. As for the Kingpin BIOS, I flashed it the normal way, no problems. One small, strange detail I notice: in the part of Port Royal where the FPS normally hits 110, it never goes over 95 FPS... I have also tried undervolting the card all the way down to 800 mV, but with the same bad result.


----------



## kryptonfly

Carillo said:


> Thank you for the answer. I first tried the Girgabyte waterforce bios, but got heavy PL limitations! No, nothing has changed in terms of hardware, and I have tried to reinstall 3dMark. I will try more bios in a couple of hours, and report back. When it comes to the KingPin bios, I flashed it in the normal way, no challenges. A small strange detail I notice is that in the part of PT where the FPS normally hits 110, it never goes over 95 FPS ... I have also tried to undervolt the card all the way down to 800mV .. But the same bad result


Hmm, indeed there's something wrong... Did you check the frequency, temps and power limit in real time with Afterburner? It seems like a memory/bandwidth/high-framerate problem. Do you see the same in games, Time Spy, Fire Strike and Superposition?

What water block and pads do you have, please? (Just out of curiosity.) I have a Bykski, but its pads are too thick and bad; I'll fit Gelid Extreme ones in a few days.


----------



## yzonker

Carillo said:


> Thank you for the answer. I first tried the Girgabyte waterforce bios, but got heavy PL limitations! No, nothing has changed in terms of hardware, and I have tried to reinstall 3dMark. I will try more bios in a couple of hours, and report back. When it comes to the KingPin bios, I flashed it in the normal way, no challenges. A small strange detail I notice is that in the part of PT where the FPS normally hits 110, it never goes over 95 FPS ... I have also tried to undervolt the card all the way down to 800mV .. But the same bad result


Post a pic of HWINFO while something is running with all of the GPU power/limits/etc... expanded.


----------



## Carillo

It's a Barrow water block, with Barrow pads. It seems like my card is only running at PCIe x4 3.0? I just updated the motherboard BIOS and flashed the Gigabyte Gaming OC BIOS. Also tried a different slot...


----------



## kryptonfly

Carillo said:


> It's a Barrow waterblock, with barrow pads. Seems like my card is only running pcie 4x 3.0 ? just updated motherboard bios and flashed in the gigabyte gaming OC bios. Also tried a different slot...


Thanks. Yep, with such a narrow PCIe link you will have bad performance: your GPU will clock high without hitting the power limit, but it will be limited by bandwidth. Maybe a wrong BIOS setting, or too many PCIe lanes in use elsewhere. You can check with HWiNFO64: during 3D loads, "PCIe Link Speed" should read 8.0 GT/s per lane at x16 for gen 3.

Would you like to share your exact Kingpin vBIOS?
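To put numbers on those link speeds, the theoretical one-way PCIe bandwidth follows from the per-lane transfer rate and the line-code efficiency. A minimal sketch using the standard PCIe spec figures (nothing here is card-specific):

```python
# Theoretical one-way PCIe bandwidth, to sanity-check what the
# 3DMark PCI Express feature test reports for a given link.

GEN = {  # generation: (GT/s per lane, line-code efficiency)
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
}

def bandwidth_gbs(gen: int, lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s: GT/s x efficiency x lanes / 8 bits."""
    gts, eff = GEN[gen]
    return gts * eff * lanes / 8

print(f"x16 gen 3: {bandwidth_gbs(3, 16):.2f} GB/s")  # 15.75 GB/s theoretical
print(f" x4 gen 3: {bandwidth_gbs(3, 4):.2f} GB/s")   # 3.94 GB/s
```

The ~12-13 GB/s that 3DMark shows at x16 gen 3 is normal once protocol overhead is included, while a link stuck at x4 gen 3 tops out near 4 GB/s, which lines up with the underperformance described above.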


----------



## Carillo

yzonker said:


> Post a pic of HWINFO while something is running with all of the GPU power/limits/etc... expanded.


----------



## Carillo

kryptonfly said:


> Thanks. Yep, with such a low PCIe link you will have bad performance, your GPU will clock high without PL but it will be limited by bandwidth. Maybe a wrong setting or too much PCIe lines in use. You can check with HWinfo64 and see during 3D loads "PCIe Link Speed" at max 8.0 GT/s for PCIe x16 gen 3.
> 
> Would you like to share your exact Kingpin vbios ?


Nah, I tried jamming in a GTX 670 I had lying around, and it ran x16 3.0 right away. Maybe the card is toast?

EDIT: Kingpin bios : EVGA RTX 3090 VBIOS


----------



## J7SC

Carillo said:


> Nah, tried jamming in a GTX 670 i had lying around, and it did 16x3.0 right away. Maby the card is toast ?
> 
> EDIT: Kingpin bios : EVGA RTX 3090 VBIOS


...this happened to 3 out of 8 earlier-gen EVGA cards I had for various machines here... If you can RMA, that would be the way to go. If not, you can try to reflow the solder with the oven method, etc. (after carefully reading up on it).


----------



## Carillo

J7SC said:


> ...this happened to (3 out of 8) EVGA earlier-gen cards I had for various machines here...if you can RMA, that would be the way to go. If not, you can try to 're-flow' solder etc by the oven method (after carefully reading up on it).


Thanks for that! I dug out an old HP server with a 1650 v2 and jammed the card in there. Suddenly it's running x8, with better performance... But still.


----------



## Carillo

J7SC said:


> ...this happened to (3 out of 8) EVGA earlier-gen cards I had for various machines here...if you can RMA, that would be the way to go. If not, you can try to 're-flow' solder etc by the oven method (after carefully reading up on it).


You have a link for that method ? Thanks


----------



## J7SC

Carillo said:


> Thanks for that! i digged out an old HP server with 1650v2 , and jammed the card in there. Suddenly running 8X, with better performance.... But still.





Carillo said:


> You have a link for that method ? Thanks


Dropping to x4 or x8 can happen on different mobos, i.e. those with PEX chips for the PCIe lanes. As for reflow methods, there are several, including the heat-gun approach. I had partial success with reflow on some of the aforementioned cards, but in addition to the Linus video below, you should still search Google, YouTube, etc. Two things to keep in mind:

1.) Examine your card for plastic/nylon parts (i.e. the vBIOS switch), as you do not want to melt those, and you have to watch temps

2.) If you use your regular oven, make sure to thoroughly clean it afterwards, as toxic fumes from this method can stick to the sides


----------



## kryptonfly

@J7SC : Why should he use the oven method if his temps were incredibly low (27°C GPU, 60°C VRAM)? Nothing changed in the hardware since last time, and no overheating was possible. Why would the solder joints be broken?


----------



## Nizzen

Carillo said:


> Hello. Has anyone here experienced a problem with underperformance? 13800 points in Port Royal overclocked with the same driver and the same windows as I got over 16,000 points with another 3090 .. Both cards also used Kingpin bios. I have tried DDU and another Windows, but same result .. only 12800 points stock, with kingpin bios and chiller. The card is a Gigabyte 3090 Turbo with water block (Naturally) Link to score:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 13 824 in Port Royal
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com
> 
> I scored 12 936 in Port Royal
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


I had the same problem. I disconnected the GPU from the computer, reinstalled it in the PCIe slot, and flashed the stock BIOS. Tested, all OK. Then flashed the 1000W BIOS; performance OK.
Very strange, but it worked, LOL.


----------



## J7SC

kryptonfly said:


> @J7SC : Why should he use oven method if his temps were incredible low (27°C gpu, 60°C vram) ? No hardware change since the last time and no possible overheat. Why solders should be broken ?


Indeed, I would only use the reflow method as a last resort and try a remount first... Sometimes uneven pressure on the PCB (i.e. from the water block mount) can cause the described PCIe lane issue; at other times it's weak factory soldering, i.e.:

source


----------



## Carillo

J7SC said:


> Indeed, I would only use re. reflow method as last resort and try a remount first...sometimes, uneven pressure on PCB (ie. waterblock mount) can cause the described PCIe lane issue, at other times, it is weak factory soldering, ie. :
> 
> source
> View attachment 2524276


Thanks! 8x is better than 4X, LOL


----------



## Lobstar

If that was captured while something was running, your overclock hasn't applied: your clocks are under 1700 MHz, the default speed for that BIOS.


----------



## kryptonfly

Lobstar said:


> If something is running your overclock hasn't applied. your clocks are under 1700, the default speed for that bios.


He's well OC'ed; clocks ~2190 MHz
Carillo's Port Royal


----------



## kryptonfly

@Carillo : try flashing the Gigabyte Gaming OC vBIOS if you haven't already, and test with that. I got ~14,300 under water cooling; you should do the same or even better. If not, RMA your card.
I scored 14 349 in Port Royal


----------



## ENTERPRISE

Hey guys, 

So I am playing around with the KP 1000W BIOS. As far as increasing the core voltage goes, it was said that 1.1 V is the highest/safest for a non-KPE card. So, with respect to setting a max of 1.1 V in Afterburner, what figure would I put in the Core Voltage field? As that field looks like it is designed to add additional voltage on top, one would assume you do not put in 1100 mV?


----------



## Falkentyne

ENTERPRISE said:


> Hey guys,
> 
> So I am playing around with the KP1000Watt BIOS. So far as increasing the core voltage, it was said that 1.1 is the highest/safest for a non KPe card. So with respects to setting a max of 1.1v in afterburner what figure would I put in the Core Voltage value. As that field looks like it is designed to add additional voltage on top, one would assume you do not put it 1100Mv ?


That isn't voltage; it's VID. VID is safe to use and has nothing to do with the actual voltage. Setting the MSI Afterburner slider to 100% doesn't force the voltage to 1.10 V. It allows the VID (NVVDD requested voltage) range to change (via some strange NDA formulas based on the boost table and temps) from 1.044-1.081 V (not sure if the bottom limit is 1.044 or 1.056 V) to 1.069-1.10 V, with a corresponding overclock of +15 MHz, as you are using a higher tier on the V/F curve (which you can view in MSI Afterburner with Ctrl+F). I believe the same thing happens for the SRAM VID (MSVDD VID). Then the card constantly readjusts its V/F curve and seems to cycle through the individual V/F points between 1.069-1.10 V whenever it wants. I've also noticed that the SRAM VID often isn't the same as the NVVDD VID: I've seen NVVDD VID at 1.10 V and MSVDD VID at 1.069 V at the same time. I've also seen "effective" GPU clocks (provided you are not at an internal power limit related to MSVDD or NVVDD itself) be anywhere from 26 MHz to 41 MHz below the requested clocks, and I think this discrepancy may have to do with how much the requested NVVDD differs from the internal NVVDD (which, again, you can't see in Afterburner). You can see both of these VIDs in HWiNFO64 (MSI Afterburner will only show the NVVDD VID). You can't see the internal NVVDD or MSVDD in any program outside of the Kingpin cards' OSD or Elmor's tools.

It appears that any sort of power throttling throttles the NVVDD VID (and thus the requested core clocks) first, while the MSVDD VID remains normal. MSVDD (SRAM) is strange; someone with a Kingpin needs to explain its relationship with NVVDD. E.g. on a normal 400W-TDP card, the NVVDD VID can throttle down to 0.925 V while the SRAM voltage is still happily at 1.081 V.

The % slider in MSI Afterburner seems to be some sort of threshold where this "last" voltage tier gets unlocked (1.069-1.10 V). This seems to have something to do with load and temps; I have absolutely no idea how it works. I've seen the next tier get unlocked with the slider at 34% in some cases, while at other times it required 80%. I don't think anyone on these forums can explain this. Maybe @SoldierRBT , who has an actual KPE card, can tell you.

The actual internal NVVDD default is 1.15 V, and you have no access to this voltage without a Kingpin card (DIP switches or the Classified tool) or Elmor's EVC2X tool soldered to the card.


----------



## geriatricpollywog

Initial impressions of Brünhilde are favorable. GPU temp is very nice.


----------



## Lobstar

kryptonfly said:


> He's well OC'ed, clocks ~2190 mhz
> Carillo's Port Royal


Huh, I had quoted his HWiNFO screenshot showing a max 2010 MHz boost.


----------



## kryptonfly

Carillo said:


>





Lobstar said:


> Huh, I had quoted his HWinfo screenshot showing max 2010mhz boost.


My apologies, we didn't see it. However, I don't understand what's wrong here: I can see a Gigabyte 3090 TURBO with the Kingpin vBIOS at stock, BUT the PCIe Link Speed is clearly x16 gen 3! His card works great if I go by that screenshot; there's nothing wrong.

You can check with GPU-Z or CPU-Z, or ultimately with the 3DMark PCI Express feature test: expect ~12-13 GB/s at PCIe x16 gen 3.0.


----------



## J7SC

...Not sure how helpful this is, but here is a comparison of the same RTX 3090 in the 3DMark PCI Express test, 4.0 vs 3.0 (on different CPU/mobo combos). In gaming and such, the differences will be much smaller, at least until you play at 4K... In Port Royal, which runs at 1440p, scores were almost equal at the same GPU settings for x16 3.0 vs x16 4.0.

IMO, anything below x16 3.0 becomes an issue with 3090s though


----------



## Lobstar

Ah, I had a similar issue. The slot was in Gen 4 x16 mode, but the card was askew in the slot and only ran at x8. I reseated it and everything was good.


kryptonfly said:


> My apologies but we didn't see it. However I don't understand what's wrong here : I can see a Gigabyte 3090 TURBO with Kingpin vbios at stock BUT PCIe Link Speed is clearly at x16 gen 3 ! His card works great if I rely on the screen, there's nothing wrong.
> 
> You can check with GPU-Z or CPU-Z or ultimately with 3dmark pci express feature test ~12-13 Gb/s with PCIe x16 gen 3.0


----------



## Carillo

kryptonfly said:


> My apologies but we didn't see it. However I don't understand what's wrong here : I can see a Gigabyte 3090 TURBO with Kingpin vbios at stock BUT PCIe Link Speed is clearly at x16 gen 3 ! His card works great if I rely on the screen, there's nothing wrong.
> 
> You can check with GPU-Z or CPU-Z or ultimately with 3dmark pci express feature test ~12-13 Gb/s with PCIe x16 gen 3.0


Like I showed earlier, the card is running at x4 gen 3... but in an old motherboard it runs at x8 with much better performance. RMA next :/


----------



## kryptonfly

Carillo said:


> Like i showed earlier, the card is running in 4x GEN 3.. but using a old motherboard it runs in 8x with much better performance. RMA next :/


How do you know it's x4 gen 3.0? I didn't see anything showing that. Do you have a screenshot? HWiNFO64 looks normal, I have the same. Test with the 3DMark PCI Express feature test; you should get ~12-13 GB/s.


----------



## Carillo

kryptonfly said:


> How do you know it's 4x gen 3.0 ? I didn't see anything showing that. Do you have a screen ? HWinfo64 is normal, I have same. Test with 3dmark pci express feature test, you should have ~12-13 Gb/s.


Sorry, I thought I had posted the GPU-Z image where it shows x4. I'm at work now, so I don't have access to the image. I can try the 3DMark PCIe test later, but the performance in Port Royal clearly shows the card is not running at x16 3.0: I get a 10-12% lower score than the corresponding overclock on another 3090.


----------



## kryptonfly

@Carillo : OK... you can try the AIDA64 GPU bench too. Same idea: you should get 12+ GB/s in reads. For some unknown reason 3DMark shows PCIe x16 4.0 there; it's a bit buggy, info coming from the 3090 I guess...


----------



## J7SC

ENTERPRISE said:


> Hey guys,
> 
> So I am playing around with the KP1000Watt BIOS. So far as increasing the core voltage, it was said that 1.1 is the highest/safest for a non KPe card. So with respects to setting a max of 1.1v in afterburner what figure would I put in the Core Voltage value. As that field looks like it is designed to add additional voltage on top, one would assume you do not put it 1100Mv ?


For a decent bench such as 3DM TimeSpy Extreme or Port Royal, what is your max power draw, including PCIe slot, with the KPE 1KW vBios via HWInfo ?


----------



## ENTERPRISE

Falkentyne said:


> That isn't voltage. It's VID. VID is safe to use and has nothing to do with voltage. Setting the MSI Burner slider to 100% doesn't force the voltage to 1.10v. It allows VID (NVVDD requested voltage) range to change (with some NDA strange formulas based on the boost table and temps) from 1.044v-1.081v (not sure if its 1.044 or 1.056v at the bottom limit), to 1.069v-1.10v, with a corresponding overclock of +15 mhz as you are using a higher tier on the V/F graph (which you can view in MSI Afterburner with Control F). I believe the same thing happens for SRAM VID (MSVDD VID). Then the card constantly readjusts its V/F graph and seems to cycle through the V/F individual points between 1.069v-1.10v whenever it wants. I've also noticed that SRAM VID often isn't the same as NVVDD VID either. I've seen NVVDD VID at 1.10v and MSVDD VID at 1.069v at the same time. I've also seen "effective" GPU clocks (provided you are not at an internal power limit related to MSVDD or NVVDD itself) be anywhere from 26 mhz below requested clocks to 41 mhz below requested clocks, and I think this discrepancy may have to do with how much requested NVVDD differs from internal NVVDD (which again you can't see on Afterburner). you can see both these VIDS in hwinfo64 (MSI Afterburner will only show NVVDD VID). You can't see the internal NVVDD or MSVDD in any program outside of the Kingpin cards' OSD or Elmor's tools.
> 
> It appears that any sort of power throttling throttles NVVDD VID (and thus requested core clocks) first, while MSVDD VID remains normal. MSVDD (SRAM) is strange. Someone with a Kingpin needs to explain its relationship with NVVDD. E.g. on a normal 400W TDP card, NVVDD VID can throttle down to 0.925v while SRAM voltage is still happily at 1.081v.
> 
> The % slider in MSI Afterburner seems to be some sort of threshold where this "last" voltage tier gets unlocked (1.069v-1.10v). This seems to have something to do with load and temps. I have absolutely no idea how it works. I've seen the next tier get unlocked with the slider at 34% in some cases, and other times it required it to be at 80%. I don't think anyone on these forums can explain this. Maybe @SoldierRBT , who has an actual KPE card, can tell you.
> 
> Actual internal NVVDD default is 1.15v and you have no access to this voltage without a Kingpin card (Dip switches or Classified tool) or Elmor's EVC2X tool soldered to the card.


Thanks for the information and the general lesson. I have to say, I have not delved this much into GPU overclocking for a good while.




J7SC said:


> For a decent bench such as 3DM TimeSpy Extreme or Port Royal, what is your max power draw, including PCIe slot, with the KPE 1KW vBios via HWInfo ?


I just ran Port Royal, here is what HWinfo has to say about power values (see attached)


----------



## Simkin

Anyone know what the differences between the 3090 Founder's Edition bioses are?

94.02.27.00.0A
94.02.32.00.02
94.02.4B.00.0B


----------



## MangoMunchaa

Hey guys, I am using the EK reference block and have roughly a 20°C delta under load (around 500 W). I was just wondering what methods I could use to reduce it? From what I've read, using double washers around the core helps. I have also ordered Gelid Ultimate thermal pads to replace the current stock EK ones, hoping to get more mounting pressure. I have Arctic MX-4 applied at the moment, but I now have Kingpin KPx, which I hope will help as it's a lot thicker too. Any suggestions would be great!!


----------



## ManniX-ITA

MangoMunchaa said:


> gelid ultimate thermal pads


I think that would do the opposite; they are stiff.
You need the Extreme thermal pads, which are very soft.


----------



## Carillo

kryptonfly said:


> @Carillo : Ok... you can try Aida64 GPU bench too. Same, you should do +12 Gb/s in reading but for unknown reason in 3dmark it shows PCIe x16 4.0, a bit buggy, info from 3090 I guess...












Pretty clear it's not running 16X


----------



## kx11

Carillo said:


> View attachment 2524581
> 
> 
> Pretty clear it's not running 16X


Mine looks like this


----------



## SoldierRBT

Not sure if mine is okay. Daily settings.
10900K 5.2GHz
32GB 4600 CL17
Z490 Apex
3090


----------



## kryptonfly

Carillo said:


> View attachment 2524581
> 
> 
> Pretty clear it's not running 16X


Indeed. If you RMA the card, I don't know if you'll get a Turbo back: they stopped making it around March/April and removed it from the Gigabyte site as if it never existed! It's a rare, golden card for collectors, lol...

@kx11 : you have a 3900XT with PCIe 4.0 lanes; your result is normal.

@SoldierRBT : you have a 10900K with PCIe 3.0 lanes; don't worry about 3DMark reporting PCIe 4.0, I think it's a bug from the 3090. Your result is normal.


----------



## J7SC

...Typically, vendors keep a given percentage of a model in reserve for RMAs, even if the model is no longer available. That said, if there was a higher-than-average RMA rate, he might have to make do with another 3090 of 'equal or greater' standing.


----------



## kryptonfly

Gigabyte Rains On Partners' Parade By Cancelling GeForce RTX 3090 Turbo

I hope so. It's a strong card with only POSCAP capacitors (76 in total, front and back) and 20 MLCCs behind the GPU, better than the Gaming OC. I have the same card and chose it for its build quality.


----------



## MangoMunchaa

ManniX-ITA said:


> I think that would do the opposite, they are stiff.
> You need the Extreme thermal pads which are very soft.


oh oops, I figured they were just better than the Extremes!! I'll see how they go, and if I don't get an improvement I'll try the Extremes. Thanks for the heads up


----------



## kryptonfly

Yesterday I finally shunted with 10 mΩ, but I think it's a little worse than 15 mΩ. Now the PL triggers around 305 W in GPU-Z (x1.5 = ~460 W true); it happens in Port Royal, and I can't hold 2145 MHz all the way like the 15 mΩ did. The +5% power slider doesn't help. At stock settings with the 15 mΩ shunts, the PL triggered around 345 W in GPU-Z. All the solder joints are good, and all the power readings have dropped drastically in GPU-Z. I think it's hitting internal limits; I've seen Vrel once (just a pixel line), the first time since I've had the card. For games it works great, none can pull more than 430 W, but in benchmarks it's not the case. I will put true 15 mΩ back on (not 5+5+5 in series); that should be good. OR maybe I can't do anything more (except a volt mod)... 305 x 1.5 = 457 W... 345 x 1.33 = 458 W. I'll test again, but I feel 15 mΩ was better 🤔
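For reference, the x1.5 and x1.33 correction factors above fall straight out of the parallel-resistance math for stacked shunts; a minimal sketch (Python), assuming a 5 mΩ stock shunt as on these cards:

```python
# Stacking a resistor on top of a stock shunt puts the two in parallel.
# The controller still assumes the stock value, so reported power scales
# by R_parallel / R_stock; true power is reported power times the inverse.
# Illustrative sketch for the 10 mOhm and 15 mOhm stacks discussed here.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def report_ratio(stock_mohm: float, stacked_mohm: float) -> float:
    """Fraction of true power that the card reports after stacking."""
    return parallel(stock_mohm, stacked_mohm) / stock_mohm

for stacked in (10, 15):
    ratio = report_ratio(5, stacked)
    print(f"5 mOhm + {stacked} mOhm stacked: reported = {ratio:.3f}x true, "
          f"true = {1 / ratio:.2f}x reported")
```

The 10 mΩ stack works out to a x1.5 correction and the 15 mΩ stack to x1.33, matching the multipliers used in this thread.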


----------



## Falkentyne

kryptonfly said:


> Yesterday I finally shunted with 10m ohms but I think it's a little worse than 15m ohms. Now PL triggered around 305W in GPU-Z (x1.5) around 460W, it happens in Port Royal, I can't do 2145mhz all the way like 15m ohms did. +5% power slider doesn't help. Normally at stock and 15m ohms shunts it's PL around 345W true power. However all solders are good, all powers have drastically decreased in GPU-Z. I think it reaches internal limitations, I've seen Vrel once (just a pixel line), first time since I have it. For games it works great, none can pull more than 430W but in bench it's not the case. I will put again true 15m ohms (not 5+5+5 in serie), it should be good. OR maybe I can't do anything (except volt mod)... 305x1,5 = 457W... 345x1,33 = 458W. I will test again but I feel 15m ohms was better 🤔


What card is this again? Gigabyte?
What vbios are you using?

Please post a HWiNFO64 screenshot with ALL of the power (wattage) rails fully expanded and shown, as well as TDP Normalized % and TDP%, and we'll see how it is. Do your Port Royal run first so the maximum values from the run are shown.


----------



## kryptonfly

Falkentyne said:


> What card is this again? Gigabyte?
> What vbios are you using?
> 
> Please post a hwinfo64 screenshot with ALL of the power (Wattage) rails fully expanded and shown, as well as TDP Normalized % and TDP% and we'll see how it is. Do your port royal run so the maximum values are shown from the run.


Yes, it's a Gigabyte 3090 Turbo, 2x 8-pin. I'm using the Gigabyte 3090 Gaming OC vBIOS (370 W at 100%, 390 W at 105%) + 10 mΩ shunts, x1.5 offset:



All power readings are x1.5 except the Outputs. It seems there's something wrong between Misc 0 & 2 again; maybe there are one or two bad solder joints, I'm not fully convinced...

I've also captured the readings at full stock (no shunt) at the 105% power slider to help you (in game, full load):


----------



## Falkentyne

kryptonfly said:


> Yes it's Gigabyte 3090 Turbo 2x8 pin. I'm using Gigabyte 3090 Gaming OC vbios 370W at 100% and 390W at 105% + shunts 10mΩ offset x1.5 :
> 
> 
> 
> All powers are x1.5 except Outputs. It seems there's something wrong between Misc 0 & 2 again, maybe there's one or 2 bad solders I'm not fully convinced...
> 
> I put again powers at full stock (no shunt) at 105% power slider to help you (in game full load) :


Thank you for your hard work. It is helpful and I appreciate it.

Yes, you are completely correct.
In the shunt picture (460 W / x1.5 multiplier), you are indeed throttling on "TDP Normalized". It throttles slowly once it gets very close to the maximum value (105%), since it tries to avoid reaching the maximum. But I am unsure which rail is reaching 105%; I do not know the default values for all sub-rails. I do know the Kingpin 1 kW BIOS has these limits all removed.
Your main TDP is only 84%, so it's TDP Normalized that is throttling you, not TDP% itself (8-pin #1 + 8-pin #2 + PCIe slot power).

TDP Normalized has a 'base' limit at 100% TDP, which is a certain BIOS value for all rails (it cannot throttle on any rail value below 100% of default), and a maximum value at 105% TDP (the maximum of your TDP% slider). Some rails have both a 'default' and a 'maximum' direct value, and some only have a default, which is normalized past 100% (I think the VCORE power rails NVVDD and MSVDD are part of this; they're not shown in HWiNFO, but they are the power of the actual voltage rails). To increase that power allowance, you need to add direct MSVDD or NVVDD voltage, which is impossible without soldering or tools like the Elmor EVC2X, or the Kingpin's DIP switches.

But yes, you are also correct that there is a problem with Misc0. It's too high, and the balance between Misc2 and NVVDD1 (sum) is wrong.
Either Misc2 is too low or Misc0 is too high; it should not be 120 W --> 72 W.

In the unshunted picture, Misc2 is just less than 4x NVVDD1 (3.85x), but in the shunted picture, Misc2 is 2.7x NVVDD1.
So something is wrong.

This is important because GPU Chip Power (GPU-Z), or the very top GPU Core NVVDD Input Power (sum) in HWiNFO64, is very close to:

GPU Chip Power = Misc0 + Misc2 + GPU Core NVVDD1 Input Power (sum)
(same thing as: GPU Core NVVDD Input Power (sum) = Misc0 + Misc2 + NVVDD1 (sum))

The balance is wrong in the shunted one vs the unshunted one (did you forget a 1.5x in any of them?).

And the total value is strange. It is always a little 'off' unshunted, but it's close:
87 + 80 + 15 = 182 W, GPU Chip Power = 178 W; close enough (good).

Shunted values: 120 + 72 + 30 = 222 W (should be close to the GPU Core NVVDD Input Power (sum) at the very top, the first one).
GPU Chip Power = 244.461 W? Something is wrong. And the 120/72 balance (with 30 W too) is very wrong (87 and 80 are fine; e.g. 87 x 1.5 and 80 x 1.5 are 130 and 120, still close enough).

Maybe Misc0 and NVVDD1 are correct (120 = 4 x 30) and Misc2 is too low (too low relative to Misc0), or maybe Misc0 and NVVDD1 are too high (not properly reporting the reduced power from the shunt) and Misc2 is correct; I don't know. But it's a power balance problem, because the "SRC" will always try to balance and correct something: if one power is too high, it may raise a second power or lower a third to create balance between the sub-rails. As you can see, "GPU Chip Power" is a sum of several rails...

Anyway, I hope you can fix it.

This wrong balance can be caused by a bad solder joint on the GPU Chip Power shunt.
It can also be caused by a bad joint on the SRC shunt or the PCIe slot shunt.

But I think it's the GPU Chip Power shunt (weak solder) or the SRC, because I do know there is some relationship between them: worse solder on Chip Power can make SRC read lower, and better solder can make it read higher. Yes, that's weird, but that's what I noticed in my own tests. One time when I was using MG842AR silver paint, it was so bad that both "SRC" and "MVDDC (FBVDD)" would read 0 W at full load... haha... anyway.

That's where I would start looking.

I would check your soldering quality. Always use rosin flux: first on top of the original shunt before applying solder, then again on top of the solder "bridge" (the solder between the original shunt and the shunt you are stacking), then melt the bridge with the new shunt on top for the bonding.

Remember to use 3M polyimide tape on the PCB around the shunt, to protect the board from accidental solder drops/mistakes.
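The sum check described above can be automated in a few lines; a minimal sketch (Python), plugging in the wattages quoted in this exchange (the ~20 W threshold for "suspect" is my own framing):

```python
# Consistency check for the rail-sum relation discussed above:
# GPU Chip Power should roughly equal Misc0 + Misc2 + NVVDD1 (sum).
# A residual around 10 W is normal; a much larger one hints at a bad
# shunt solder joint skewing one of the sub-rails.

def rail_residual(chip_power: float, misc0: float,
                  misc2: float, nvvdd1: float) -> float:
    """Difference between reported chip power and the sum of its sub-rails."""
    return chip_power - (misc0 + misc2 + nvvdd1)

# Unshunted readings quoted above: residual is small (good).
print(rail_residual(178, 87, 80, 15))       # about -4 W
# Shunted readings: residual is large, so something is off.
print(rail_residual(244.461, 120, 72, 30))  # about +22 W
```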


----------



## kryptonfly

Falkentyne said:


> Thank you for your hard work. It is helpful and I appreciate it.
> 
> Yes, you are completely correct.
> In the shunt picture (460W / x1.5 multiplier), you are indeed throttling from "TDP Normalized". It will throttle slowly once it gets very close to the maximum value (105%), since it tries to avoid reaching the maximum. But I am unsure which rail is reaching 105%. I do not know the default values for all sub rails. I know Kingpin 1kw bios has these limits all removed.
> Your main TDP is only 84% so its normalized that is throttling you, not the TDP% itself (8 pin#1 + 8 pin #2 + PCIE Slot Power)
> 
> TDP Normalized has a 'base' limit at 100% TDP, which is a certain bios value for all rails (it cannot use any rail value below 100% of default for throttle), and a maximum value at 105% TDP (which is the maximum slider for your TDP%). Some rails have a 'default' and 'maximum' direct value, and some only have a default, which is normalized past 100% (I think VCORE power NVVDD and MSVDD--not shown in hwinfo but the power of the actual voltage, part of this. And to increase this power allowance, you need to add direct MSVDD or NVVDD voltage, which is impossible without soldering tools like Elmor EVC2X, or Kingpin dip switches.
> 
> But yes you are also correct there is a problem with MISC0. It's too high. And the balance between Misc2 and NVVDD1 (sum) is wrong.
> Either misc2 is too low or Misc0 is too high. Should not be 120W -->72W.
> 
> In unshunted value picture, misc2 is just less than 4x NVVDD1 (3.85x), but in the shunted picture, Misc2 is 2.7x NVVDD1
> So something is wrong.
> 
> This is important because GPU Chip Power (GPUZ), or very top GPU Core NVVDD Input Power (sum) HWinfo64, is very close to :
> 
> GPU Chip Power = Misc0 + Misc2 + GPU Core NVVDD1 input power(sum)
> (same thing as: GPU Core NVVDD Input Power(sum) = Misc0 + Misc2 + NVVDD1(sum)
> 
> The balance is wrong in the shunted one vs the unshunted one. (did you forget a 1.5x in any of them?)
> 
> And the total value is strange. It is always a little 'off' in unshunted but its close:
> 87 + 80 + 15 = 182W, GPU Chip Power=178W = close enough (good)..
> 
> Shunted value: 120 + 72 + 30 = 222, (should be close to GPU Core NVVDD inputpower (sum) at the very top (the first one).
> GPU Chip Power= 244.461 W? Something is wrong. And 120/72 balance (with 30W too) is very wrong (87 and 80 is fine, e.g. 87 * 1.5, 80 * 1.5 is 130 and 120, still close enough).
> 
> Maybe Misc0 and NVVDD1 is correct (120= 4 * 30), and Misc2 is too low (too low from misc1). Or maybe Misc0 and NVVDD1 are too high and not reporting reduced power from shunt properly but Misc2 is correct--I don't know. But its a power balance problem. Because the "SRC" will always try to power balance and correct something. If one power is too high, it may raise a 2nd power or lower a third power to create balance between sub-rails. As you can see, "GPU Chip Power" is a sum of several rails....
> 
> Anyway hope you can fix it.
> 
> This wrong balance can be caused by bad soldering of GPU Chip Power shunt.
> It can also be caused by bad soldering of SRC shunt or PCIE Slot shunt.
> 
> But I think its GPU Chip Power shunt with weak solder or SRC because I do know there is some relationship between them (worse solder on Chip Power can make SRC lower, better solder can make SRC higher--yes that's weird but that's what I noticed in my own tests). One time when I was using MG842AR silver paint, it was so bad that both "SRC" and "MVDDC (FBVDD) would be 0W at full load.....haha).....anyway...
> 
> That's where I would start looking.
> 
> I would look at your soldering quality. Make sure you always use rosin flux, first on top of original shunt, before applying solder, then apply again on top of solder "bridge" (solder between original shunt and shunt you are putting on top), then melt the bridge with the new shunt on top, for the bonding.
> 
> Remember to use 3M Polyimide tape on the PCB around the shunt, to protect the board from accidental solder drops/mistakes.


Thank you very much for your support, you're really awesome 
I was replying when you edited your message. Yep, I tried to figure it out on my side, and I think Misc2 is too low compared to Misc0 = 120 W / 1.5 = 80 W (still possible).
I think 8-pin #1 + 8-pin #2 + PCIe slot power is correct: this morning I could pull 508 W = 339 W in GPU-Z plus my OC'd CPU (~200 W), and my UPS beeped at ~700 W intake, so I think it's correct (my assumption). Besides, the numbers seem close and realistic.
There's also Input PP, which is too low = 127 W (/1.5 = 84 W), almost the same as unshunted = 128 W!

"_*In unshunted value picture, misc2 is just less than 4x NVVDD1 (3.85x), but in the shunted picture, Misc2 is 2.7x NVVDD1*_"
- We don't get the same result... Misc2 (80.281) / NVVDD1 (21.820) = 3.68x, and in the shunted picture 72.792 / 30.120 = 2.42x... maybe I'm wrong! Anyway, it points the same way...

"_*And the total value is strange. It is always a little 'off' in unshunted but its close:*_
*87 + 80 + 15 = 182W, GPU Chip Power=178W = close enough (good)..

Shunted value: 120 + 72 + 30 = 222, (should be close to GPU Core NVVDD inputpower (sum) at the very top (the first one).*
_*GPU Chip Power= 244.461 W? Something is wrong. And 120/72 balance (with 30W too) is very wrong (87 and 80 is fine, e.g. 87 * 1.5, 80 * 1.5 is 130 and 120, still close enough).*_"
- Same here: in the unshunted picture you do Misc0 + Misc2 + Misc3, and in the shunted one Misc0 + Misc2 + NVVDD1... What is the correct formula? Maybe that was meant to enlighten me lol.
Let's assume something is wrong: Misc 0 + 2 + 3 = 120.817 + 72.792 + 16.729 = 210.338 W, even worse than 222 lol. I don't know whether Misc2 would be too low for GPU power; it's still possible...
Or maybe both: Misc0 is too high and Misc2 is too low. Misc3 (16.729 W) is almost the same as the unshunted picture's 15.568 W, weird.

What I'm convinced of: Misc2 is definitely too low: 72.792 / 1.5 = 48.528 W, really lower than unshunted!
Input PP is definitely too low: 127.033 / 1.5 = 84.69 W, really lower than unshunted! But Input PP + FBVDD is almost the same as unshunted, no clue...

Something I noticed in the unshunted picture: GPU Rail Powers (263.896 W) = NVVDD2... BUT in the shunted one, GPU Rail Powers (296.924 W) is not equal to NVVDD2 (343.748 W) but to NVVDD Output Power (sum). NVVDD2 too high? Yes, same as you, I think it's caused by the GPU Chip Power shunt, at least! Phew! Thank you again for your help, I see things more clearly now 

Oh... I checked again: every input power is indeed x1.5, no doubt.


----------



## Falkentyne

kryptonfly said:


> Thank you very much for your support, you're really awesome
> I was replying when you edited your message, yep I tried to figure out on my side and I think Misc 2 is too low compared to Misc 0 = 120W /1.5 = 80W (still possible).
> I think 8 pin#1 + 8 pin #2 + PCIE Slot Power is correct, this morning I could pull 508W = 339W in GPU-Z + my CPU OC'ed (~200W), my inverter did beep ~700W intake, so I think it's correct. (my assumption). And besides, numbers seem close & realistic.
> There's also Input PP too low = 127W (/1.5 = 84W) almost same as unshunted = 128W !
> 
> "_*In unshunted value picture, misc2 is just less than 4x NVVDD1 (3.85x), but in the shunted picture, Misc2 is 2.7x NVVDD1*_"
> - We don't have same result... Misc 2 (80.281) / NVVDD1 (21.820) = 3.68x and in the shunted picture 72.792 / 30.120 = 2.42x... maybe I'm wrong ! Anyway it's on the same way...
> 
> "_*And the total value is strange. It is always a little 'off' in unshunted but its close:*_
> *87 + 80 + 15 = 182W, GPU Chip Power=178W = close enough (good)..
> 
> Shunted value: 120 + 72 + 30 = 222, (should be close to GPU Core NVVDD inputpower (sum) at the very top (the first one).*
> _*GPU Chip Power= 244.461 W? Something is wrong. And 120/72 balance (with 30W too) is very wrong (87 and 80 is fine, e.g. 87 * 1.5, 80 * 1.5 is 130 and 120, still close enough).*_"
> - Same here, in unshunted picture you do : Misc 0 + Misc 2 + Misc 3 and in shunted one : Misc 0 + Misc 2 + NVVDD1... What is the correct formula ? Maybe it was the purpose to enlighten me lol.
> Let's say something wrong : Misc 0 + 2 + 3 = 120.817 + 72.792 + 16.729 = 210.338W even worse than 222 lol. I don't know for GPU power if Misc 2 would be too low, it's still possible...
> Or maybe both, Misc 0 is too high and Misc 2 is too low. Misc 3 (16.729W) is almost same than unshunted picture 15.568W, weird.
> 
> From what I'm convinced : Misc 2 is definitely too low : 72.792 / 1.5 = 48.528W really lower than unshunted !
> Input PP is definitely too low : 127.033 / 1.5 = 84.69W really lower than unshunted ! But Input PP + FBVDD is almost same than unshunted, no clue...
> 
> Something that I notice in unshunted picture : GPU Rail Powers (263.896W) = NVVDD2... BUT in shunted one : GPU Rail Powers (296.924W) is not equal to NVVDD2 (343.748W) but equal to NVVDD Output Power (sum). NVVDD2 too high ? Yes same as you, I think it's caused by GPU Chip Power shunt, at least ! Phew ! Thank you again for your help, I see clearer now
> 
> Oh... I checked again, every input power is well x1.5, no doubt.


Ignore "GPU Rail Powers". This is just the highest value reported anywhere (except Total Board Power); it's like a "GPU hot spot" for wattage. It's silly, it means nothing.

In your shunted picture, it's "GPU Core NVVDD Output Power (sum)". You have the field truncated (not wide enough), but that's what it is.

In the unshunted one, it's "GPU Core NVVDD Input Power (sum)", because that's the highest value being reported.

Note: output power rails are not affected by shunt reporting (they are always accurate anyway), so the higher the "real" board power, the higher the output rails will read.
463 W is higher than 380 W, so at 463 W the output rail is going to be higher than any of the input rails.

And the Misc thing...

On mine I just need Misc0 + Misc2 + NVVDD1, nothing more:
167 W chip power (uncorrected) at 525 W board power (corrected, x1.98) = 67.6 + 67.2 + 33.5 (Misc0 + Misc2 + NVVDD1, all uncorrected) = 168.3 W total; GPU Core = 167.2 W. Close enough already, no need for Misc3.

Anyway, that sum is never going to be perfect; it always seems to be a little off (usually about 10 W or so).

Misc3 isn't important to me. When I pull 520 W at the board (GPU Core = 167 W raw), Misc3 is only 7 W; when I pull 400 W (GPU Core = 107 W raw), Misc3 is 6.3 W. So I just ignore it; I don't know what it's for.


----------



## kryptonfly

Ok... I just extrapolated stock values (no shunt, PL 105%, full load) against my previous 10 mΩ shunt x1.5 result. I used the "ShuntMod Calculator" with the stock data to calculate the true power from the slot and pins #1 & #2. There's always a difference between pin #1 & #2 even at stock, so I wanted to check whether my shunts were reliable, and they are, at least for the slot and pins #1 & #2.

Of course it's only theory, and the PL is about 84.6% in reality vs 84% in software, but it already matches: 469 W real vs 466 W in software at the same PL.

True slot power = 71.4/68.8 = 3.8% difference, then 78.047 x 3.8% = 2.97 W of margin, 78.047 + 2.97 = ~81 W. We get closer to the 85.7 W in software!
True pin #1 power = 167.2/158.5 = 5.5% difference (the most), then 205.323 x 5.5% = 11.29 W of margin, 205.323 - 11.29 = ~194 W in theory.
True pin #2 power = 158.5/156.9 = 1% difference, then 187.506 x 1% = 1.875 W of margin, 187.506 + 1.875 = ~189 W.

So I think the slot and pin #1 & #2 shunts are good. Pin #1 is always higher on this card, even at stock (+10 W), so around 190 W it would be more... The GPU shunt needs to be redone.
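The same kind of cross-check can be sketched as code: compare each rail's share of total board power between a stock run and a shunted run, and flag any rail whose share drifts. A minimal, illustrative sketch (Python); the rail names are mine and the numbers are placeholders in the same ballpark as this post, not its exact arithmetic:

```python
# Per-rail sanity check for a shunt mod (illustrative sketch).
# If the slot/pin shunt joints are good, each rail's fraction of total
# power should stay roughly the same between a stock run and a shunted
# run (true watts = reported x1.5 for a 10 mOhm-on-5 mOhm stack).

def rail_shares(rails: dict) -> dict:
    """Each rail's fraction of the summed power."""
    total = sum(rails.values())
    return {name: w / total for name, w in rails.items()}

stock = {"slot": 71.4, "pin1": 167.2, "pin2": 158.5}         # watts, unshunted
shunted_true = {"slot": 81.0, "pin1": 194.0, "pin2": 189.0}  # reported x1.5

for rail in stock:
    drift = rail_shares(shunted_true)[rail] - rail_shares(stock)[rail]
    print(f"{rail}: share drift {drift:+.1%}")
```

Drifts of a percent or so are normal; a rail that suddenly carries a much bigger or smaller share points at a bad joint on that shunt.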


----------



## nyk20z3

Does anyone actually game, or just bench?


----------



## Falkentyne

kryptonfly said:


> Ok... I just extrapolated stock values, no shunt, PL 105% full load with my previous result 10mΩ shunt x1.5. I used "ShuntMod Calculator" with stock data to calculate true power from slot, pin #1 & #2. There's always a difference from pin #1 & #2 even at stock, so I wanted to check if my shunts were reliable and they are, at least slot, pin #1 & #2.
> 
> Of course it's only theory and PL is about 84.6% in real and 84% in software but it already matched between 469W in real and 466W in software for same PL.
> 
> True power slot = 71.4/68.8 = 3.8% difference, then 78.047x3.8% = 2.97W of margin, 78.047+2.97 = ~81W. We get closer of 85.7W in software !
> True power pin #1 = 167.2/158.5 = 5.5% difference (the most), then 205.323x5.5% = 11.29W of margin, 205.323-11.29 = ~194W in theory.
> True power pin #2 = 158.5/156.9 = 1% difference, then 187.506x1% = 1.875W of margin, 187.506+1.875 = ~189W.
> 
> So, I think slot, pin #1 & 2 shunts are good. Pin #1 is always higher on this card, even at stock +10W so around 190W it would be more... GPU shunt needs to be redone.


Yes, I knew your 8-pin #1 and #2 were good.
And the PCIe slot is good too; the memory I think is okay.
So the Chip Power and/or SRC shunt is suspect.
But I don't know which shunt is Chip Power and which is SRC.
If it follows the EVGA layout, it could be something like this:



https://raw.githubusercontent.com/bmgjet/ShutMod-Calculator/main/ShuntMap%20EVGA%20XC3.jpg



MSI: https://raw.githubusercontent.com/bmgjet/ShutMod-Calculator/main/MSIShunts.jpg (some unknown MSI; why is there a small 1206 shunt for SRC?)
It looks like a 3x 8-pin MSI 3080 card. The MSI Suprim X 3090 looks more normal, so SRC is "probably" in the same location as the EVGA one.



https://www.techpowerup.com/review/msi-geforce-rtx-3090-suprim-x/images/front_full.jpg



Gigabyte I don't know.

The Founders Edition looks nothing like this whatsoever: the 8-pins and GDDR6X/VMEM shunts (by the V cutout) are on the front side (the 3080 FE has VMEM and 8-pin #2 next to each other), the PCIe slot shunt is on the back side (close to the PCIe slot), Chip Power is by the GPU core on the back side, and SRC is near the middle of the V cutout on the back side.


----------



## cletus-cassidy

SoldierRBT said:


> Not sure if mine is okay. Daily settings.
> 10900K 5.2GHz
> 32GB 4600 CL17
> Z490 Apex
> 3090
> 
> View attachment 2524616


Aside: I have a very similar setup and have only managed 32GB 4266 C16. Assuming you have dual-rank B-die, would you mind sharing your high-level timings and voltages? I can send via DM if this question clogs the thread... Thanks in advance.


----------



## kryptonfly

nyk20z3 said:


> Does any one actually game or just bench


LOL. As I keep telling myself: I bought a 3090, not a 3080. Every dual 8-pin 3090 is restricted by the PL, 390 W at best; the FE is even better at 400 W, so this GPU needs at least 400 W according to NVIDIA, not 350 nor 390 W... And besides, with bad shunts the behavior is really tricky; finishing a Port Royal run is hell! Not just throttling, it bugs a lot even though I raise the curve voltage.

@Falkentyne : thanks 👍 For these Gigabyte cards there are 4 little R005 (1206) shunts on the back, located near the VRAM VRM (not shown on pictures unfortunately, I'm on my phone, but you can guess I think):


----------



## Falkentyne

kryptonfly said:


> LOL. Like I'm saying on myself : I bought a 3090, not a 3080. Every dual pin 3090 is restricted by PL, 390W at best, FE is even better 400W so this GPU needs at least 400W according to nVidia, not 350 nor 390W... And besides with bad shunts, the behavior is really tricky to achieve a Port Royal, what a hell ! Not just throttling, it bugs a lot even though I raise curve voltage.
> 
> @Falkentyne : thanks 👍 for these Gigabyte there are 4 little R005 (1206) on the back, located near of vram VRM (not shown on pictures unfortunately, I'm on my phone but you can guess I think) :


I know what the 1206 shunts do, but I don't know if I have permission to post the schematic.
Then again, Elmor's Discord has been filled with people posting stuff for modding, so...

But anyway,

One says "0.003 ohm (3 mΩ) 1206 res, 4x NVVDD PH" (I do not know what PH stands for), hooked up to "RS24" to common, labeled RSENSE4.

The second says "0.003 ohm 1206 res, 3x MSVDD PH", hooked up to "RS26" to common, labeled RSENSE5.

The third says "RS26 to common", RSENSE6, with no label about what it is connected to.

What's interesting is that the schematic says 3 mΩ,
but the cards all seem to have 5 mΩ.

Also, the Asus Strix does NOT have these at all. I think they are 'built in' (3 mΩ) to the circuit, while Gigabyte, MSI, EVGA and others use physical 5 mΩ parts.
This explains why a shunt-modded Asus ROG Strix with the stock BIOS does not throttle on "TDP Normalized" in Time Spy Extreme at 520 W or in Port Royal, while all other cards seem to. You can look many, many pages back, maybe around last February: someone posted GPU-Z and HWiNFO readouts of a shunt-modded Strix running Superposition and TSE. No throttling, and TDP Normalized much lower than any other shunt-modded card (except cards using the KPE 1 kW BIOS; that BIOS has all NVVDD/MSVDD rail limits removed).

I am wondering if modding those will change the TDP Normalized power rail limits on MSVDD (1.081 V internal) and NVVDD (1.15 V internal). (This is not the Afterburner VID; Kingpin cards will show this voltage.)

(You can stack 5 mΩ shunts on top of these safely. However, they are very small and very hard to work on.)

(If there's any problem, just use desoldering wick and remove the stacked shunts.)

There are two "small" 5 mΩ shunts (next to the large 5 mΩ PCIe slot shunt) and the GPU Chip Power 5 mΩ shunt on the 3090 Founders Edition. Both Dante`afk and I modded those (although the size was not standard; the NVIDIA ones are a little larger than 1206, almost like 1210(?) or something, and I could not find any information about this, very strange, very weird), but it changed nothing on the card.
We checked for continuity too... I think the NVIDIA ones do something else.

Here, you can go ahead and mod them. The 4090 is coming out next year anyway. Hope I won't get in trouble.


----------



## gfunkernaught

Delete


----------



## kryptonfly

Falkentyne said:


> I know what the 1206 shunts do but I don't know if I have permission to post the schematic.
> Then again Elmor's discord has been filled with people posting stuff for modding so...
> 
> But anyway,
> 
> one says "0.003 ohm (3 mohm) 1206 res, 4x NVVDD PH
> (i do not know what PH stands for), hooked up to "RS24" to common, labeled RSENSE4
> 
> The second says 0.003 ohm 1206 res, 3x MSVDD PH, hooked up to "RS26" to common, labeled RSENSE5.
> 
> the third says "RS26 to common", RSENSE6, with no label about what it is connected to.
> 
> What's interesting is that the schematic says 3 mohm.
> But the cards seem to all have "5 mohm".
> 
> Also--the Asus Strix does NOT have these at all. I think these are 'built in" (3 mohm) into the circuit, while gigabyte and MSI and eVGA and others are using 5 mohm physical.
> This explains why the "Shunt modded " Asus ROG Strix with stock BIOS does not throttle on "TDP Normalized" in Timespy Extreme at 520W or Port Royal, while all other cards seem to throttle. You can look many many pages back. Maybe around last February. Someone posted GPU-Z and HWinfo readout of a shunt modded Strix running Superposition and TSE. No throttling, TDP Normalized much lower than any other shunt modded card (Except cards using KPE 1kw bios==this bios has all NVVDD, MSVDD rail limits removed).
> 
> I am wondering if modding those will change the power rail limits for TDP NORMALIZED limits on MSVDD (voltage 1.081v internal) and NVVDD (voltage 1.15v internal). (this is not afterburner VID, Kingpin cards will show this voltage).
> 
> (You can stack 5 mohm shunts on top of these safely. However they are very small and very hard to do).
> 
> (if any problem just use 'desoldering wick' and remove the shunts).
> 
> There are two "small" 5 mOhm shunts (next to PCIE slot large 5 mohm shunt) and GPU Chip Power 5 mOhm shunt on 3090 Founder's Edition, but both me and Dante`afk modded those (although the size was not correct---the Nvidia ones are a little larger than 1206...almost like 1210(?) or something I dont know of...and I could not find any information about this...very strange, very weird), we modded those two but it changed nothing on the card.
> we checked for continuity also...I think the Nvidia ones do something else.
> 
> Here you can go ahead and mod them. 4090 is coming out next year anyway. Hope I wont get in trouble.


Really, really interesting! I was thinking of shunting these little R005s the same as the others, to somehow balance the powers a little better. I thought they were related to the VRAM, but apparently not... Are you sure a 5 mΩ stacked shunt is "safe"? Not too much? I planned to use 10 mΩ instead to balance everything.


----------



## Falkentyne

kryptonfly said:


> Really really interesting ! I was thinking to shunt these little R005 same as others to somehow balance a little better powers. I thought they were related with vram but no apparently... Are you sure 5 mohm shunt is "safe" ? Not too much ? I planned to use 10 mohm instead to balance all.


You're on your own when you do this stuff.
The shunts on these cards are not voltages; you aren't doing a volt mod.
One of the 1206 shunts has something to do with an RGB connector, though.
It will either work or it won't; if it doesn't, you just remove the stacked shunt (with desoldering wick).
The only hard part is getting such small shunts modded because of the low working space. Remember to use polyimide tape and be safe. You need to do the mod properly (first you should get the big shunts right; maybe your soldering iron is not good enough? You need at least a 65 W iron, and you should probably use a 400°C tip temperature to melt and bond the solder). And you must use rosin flux at every step, always.


----------



## gfunkernaught

Have any of you experienced mounting issues after removing and reinstalling your GPU with the water block still mounted? I recently moved my system to a new case, and since then my GPU temps have risen quite a bit. I now have more radiators, yet last night I saw the core temp rise as high as 44°C on the 520 W BIOS while my water temp was 31°C. I'm guessing it's a contact issue. Thoughts?

Edit: I should probably share the case and cooling system info:
1000D
Dual D5 pumps (serial) > CPU > 360 > 360 > GPU > 480 > 480 > res.
I do see that the water temperature has dropped considerably, to just about 5°C above ambient air temperature, which is pretty good; but it's probably that low because the GPU isn't dumping heat into the water. The more I think about it, the more I think I must have knocked something loose in the GPU block and the mount isn't good anymore.


----------



## Falkentyne

gfunkernaught said:


> Have any of you experienced mount issues when removing and reinstalling your gpu with water block mounted? I recently moved my system to a new case and since then, my GPU temps have risen quite a bit. I now have more radiators and last night I saw the core temp rise to as high as 44c in the 520w bios and my water temp was 31c. I'm guessing it's a contact issue. Thoughts?
> 
> Edit: I should probably share the case and cooling system info:
> 1000D
> Dual d5 pump (serial)>cpu>360>360>gpu>480>480>res.
> I do see how the water temperature has dropped considerably, just about 5c above ambient air temperature, which is pretty good, but it's probably that low because the GPU isn't dumping heat into the water. The more I think about, the more I think that I must have knocked something loose in the GPU block and the mount isn't good anymore.


Did you replace the thermal pads and thermal paste? Once compressed, you usually can't re-use them, and the thinner and softer they are, the less likely they can be re-used safely. Don't treat thermal pads like gold; just replace them. Anyone who has money for a block and an out-of-stock video card has money for thermal pads. 

Can you remove a blocked GPU from a case to a new case without draining the loop and removing the block and backplate, etc? I have no experience with that, so I don't know if it's the same as an AIO system like a Kingpin (which remains intact).


----------



## gfunkernaught

Falkentyne said:


> Did you replace the thermal pads and thermal paste? Once compressed, you usually can't re-use them, and the thinner and softer they are, the less likely they can be re-used safely. Don't treat thermal pads like gold--just replace them. Anyone who has money for a block and an out of stock video card has money for thermal pads.
> 
> Can you remove a blocked GPU from a case to a new case without draining the loop and removing the block and backplate, etc? I have no experience with that, so I don't know if it's the same as an AIO system like a Kingpin (which remains intact).


I removed the card whole, I didn't remove the block. I drained all the liquid though. But yeah didn't take the block off the GPU.


----------



## yzonker

gfunkernaught said:


> I removed the card whole, I didn't remove the block. I drained all the liquid though. But yeah didn't take the block off the GPU.


Seems very unlikely you degraded the mount if you didn't take the block apart. If your delta is worse, it's probably reduced flow. I've lost track: how does this new loop compare to your old one in terms of restriction? This is why you need a flow meter if you don't have one. I don't either, but I should have added one some time ago.
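Lacking a meter, you can at least reason about where the operating point moves when restriction goes up: the loop runs where the pump curve crosses the loop's resistance curve. A toy sketch of that intersection (every coefficient below is a made-up illustration, not real D5 or block data):

```python
import math

# Pump curve approximated as H = H0 - a*Q^2, loop restriction as H = k*Q^2.
# Setting them equal gives the operating flow Q = sqrt(H0 / (a + k)).
H0 = 3.9          # m of head at zero flow (roughly a single D5-class pump)
a = H0 / 1500**2  # from an assumed ~1500 L/h maximum free flow

def operating_flow(k):
    """Flow (L/h) where pump head equals the loop's pressure drop."""
    return math.sqrt(H0 / (a + k))

low_restriction = 2.0e-5   # fewer blocks/rads (illustrative coefficient)
high_restriction = 6.0e-5  # added rads and a restrictive block

print(round(operating_flow(low_restriction)))   # 424 L/h
print(round(operating_flow(high_restriction)))  # 251 L/h
```

Same pump, more restriction, noticeably less flow — which is exactly the case a flow meter would catch immediately.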


----------



## yzonker

gfunkernaught said:


> Have any of you experienced mount issues when removing and reinstalling your gpu with water block mounted? I recently moved my system to a new case and since then, my GPU temps have risen quite a bit. I now have more radiators and last night I saw the core temp rise to as high as 44c in the 520w bios and my water temp was 31c. I'm guessing it's a contact issue. Thoughts?
> 
> Edit: I should probably share the case and cooling system info:
> 1000D
> Dual d5 pump (serial)>cpu>360>360>gpu>480>480>res.
> I do see how the water temperature has dropped considerably, just about 5c above ambient air temperature, which is pretty good, but it's probably that low because the GPU isn't dumping heat into the water. The more I think about, the more I think that I must have knocked something loose in the GPU block and the mount isn't good anymore.


Oh, and you're always dumping the same amount of heat. It has to go somewhere. A higher block delta just indicates the block is working less efficiently, either due to higher thermal resistance between the core and block or less flow, so the delta has to increase to achieve the same heat transfer. It's analogous to increasing voltage to maintain the same current as resistance increases in a circuit.
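To put rough numbers on that analogy (all values below are illustrative, not measured): heat flow plays the role of current, the core-to-water delta plays the role of voltage, and the block's thermal resistance plays the role of electrical resistance.

```python
# Thermal analogue of Ohm's law: delta_T = Q * R_th  (like V = I * R).
# All numbers below are illustrative, not measured values.

def core_water_delta(q_watts: float, r_th: float) -> float:
    """Core-to-water temperature delta for heat flow q_watts through
    a block with total thermal resistance r_th (degC per watt)."""
    return q_watts * r_th

q = 520.0          # GPU heat load in watts (the same either way)
good_mount = 0.02  # degC/W, clean block and good contact (assumed)
bad_mount = 0.035  # degC/W, gunked fins or poor contact (assumed)

print(core_water_delta(q, good_mount))  # 10.4 degC delta
print(core_water_delta(q, bad_mount))   # 18.2 degC delta
```

Same 520 W leaves the core in both cases; only the delta needed to push it through changes.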


----------



## gfunkernaught

@Falkentyne @yzonker 
Idk how restrictive the TT CL360s are, given the triple-row design. I probably should get a flow meter. Still, if the block is making good contact with the GPU but the flow is poor, I should see an increase in water temp, right?


----------



## ManniX-ITA

gfunkernaught said:


> Idk how restrictive the TT CL360s are, given the triple-row design. I probably should get a flow meter. Still, if the block is making good contact with the GPU but the flow is poor, I should see an increase in water temp, right?


Did you change the CPU as well?
It should be possible to use it as a reference considering you say the water temperature improved from the previous setup.
If the loop is the issue you should have the same bad delta on the CPU.


----------



## yzonker

gfunkernaught said:


> @Falkentyne @yzonker
> Idk how restrictive the TT CL360s are, given the triple-row design. I probably should get a flow meter. Still, if the block is making good contact with the GPU but the flow is poor, I should see an increase in water temp, right?


No, not necessarily.






Where is your temp sensor?


----------



## yzonker

ManniX-ITA said:


> Did you change the CPU as well?
> It should be possible to use it as a reference considering you say the water temperature improved from the previous setup.
> If the loop is the issue you should have the same bad delta on the CPU.


No, not necessarily either.









Volume flow, pressure loss and cooling performance using the example of a CPU and GPU water block | Practical knowledge | Page 3 | igor'sLAB






www.igorslab.de


----------



## gfunkernaught

ManniX-ITA said:


> Did you change the CPU as well?
> It should be possible to use it as a reference considering you say the water temperature improved from the previous setup.
> If the loop is the issue you should have the same bad delta on the CPU.


I moved the cpu with the rest of the system. I also delidded it in between, so I can't use the cpu as a reference. A quick run of Cinebench at stock showed a max temp of 60c on one of the cores; the rest were lower.


----------



## gfunkernaught

yzonker said:


> No not necessarily,
> 
> 
> 
> 
> 
> 
> Where is your temp sensor?


It is between the "last" radiator and the res.

I noticed today some white filmy stuff inside the res at the top where there isn't coolant. Could the coolant have degraded and become more viscous? I'm tempted to drain it, put in some DW, and test again.


----------



## gfunkernaught

Drained my loop and refilled with a Utopia SysPrep/DW mix I had. Immediately noticed much better flow than before at the res. I have the EK Revo Res 150 Lite, which has a tube that protrudes into the res. Before, it was a small steady stream coming out of that tube; now it's a thicker, stronger blast of water into the res. I'll test it out; hopefully the heat load won't mess up the sysprep. And no, I won't keep the sysprep'd water in there: I'll let it run for 24hrs, then drain it and put in DW + Utopia liquid protection, as I already have some.


----------



## gfunkernaught

After about 30 min of Quake 2 RTX time on the 520w bios with a mild OC, the GPU core temp equalized to an average of 42.5c, water temp averaged 31c, ambient room temp 26c, fans at 70%, all 28 of them lol. Looks like there might be some gunk stuck in the fins of the gpu block. The rads and fans are doing their job keeping the water temp low I guess, but the gpu core temp is still too high for the amount of cooling I have now. I'll keep playing with the fan curves.


----------



## kryptonfly

Falkentyne said:


> You're on your own when you do this stuff.
> Shunts on these cards are not voltages. You aren't doing a volt mod.
> One of the 1206 shunts has something to do with a RGB connector, though.
> It will either work or it won't work. Then you just remove the stacked shunt (with desoldering wick).
> The only hard part is getting such small shunts modded because of the low working space. Remember to use polyimide tape and be safe. You need to do the mod properly (first get the big shunts done correctly; maybe your soldering iron is not good enough? You need at least a 65W iron, minimum, and you should use around 400C to melt and bond the solder). And you must use rosin flux at every step, always.


Yeah, you're right. I don't have the polyimide tape yet, which is why I haven't used enough solder to bond the shunts, but I ordered some yesterday along with desoldering wick. I use 183°C solder paste, so my soldering iron is enough; it melts quickly, but without enough solder it's like nothing. I have good green Relife flux, and once the polyimide arrives I should solder like a pro lol...


gfunkernaught said:


> I moved the cpu with the rest of the system. I also delidded it between so I can't use the cpu as a reference. Although a quick run of cinebench at stock showed a max temp of 60c on one of the cores, the rest were lower.


If you delidded your cpu in between, note that the better transfer and efficiency means more of its heat ends up in the water than before. It depends how much you pull from the cpu, but at idle and OC'd it still dumps more heat into the loop than before. If you didn't touch the gpu block itself, there's something with the loop; don't you have some air inside?

EDIT: 60°C on one of the cores at stock in Cinebench is a lot I think, but I'm not familiar with your cpu, so I'm not so sure. Anyway, if you have unbalanced core temps, check your delid and respread the liquid metal.

I have a separate gpu loop: ambient 26°C, 460W+ in benchmarks with 5x360mm rads (15 fans) and one Phobya pump, and it's hard to reach 40°C. The good thing is I can unplug the gpu and block separately, without draining, as quickly as with an air cooler, which is great when soldering many times lol.


----------



## gfunkernaught

@kryptonfly Right, that is expected, but my max cpu temps are actually lower than in the old build, which is great! After playing with the fan and pump curves, I managed to get the gpu core temp under control. The cooling system in general is doing a good job keeping the water temp moderate, 29-31c under load with ambient air temp at 23c. I just put the 1kw bios back on for some real testing, and even at an unlocked 500-520w during a Bright Memory bench loop, 2130-2155mhz eff @ 1063mv, the core temp stayed at 40c with some 41c peaks. The system works well and I seem to have built it right.

The coolant I had apparently went bad, EK Cryofuel Clear; I also just bought a 1L bottle last month that I was saving for this build, and it's a 2.5L system total. Now I'm running DW + SysPrep until tomorrow night, when I will put in just DW or DW + Utopia. I read about "dead water" additive, for, I'm assuming, algae growth, but Idk if that handles corrosion as well. One day I may muster up the patience and courage to repaste/remount the card; a 10.5c gpu/water delta is good, but I want to do better now that I have all this cooling capacity.


----------



## Falkentyne

gfunkernaught said:


> @kryptonfly Right that is expected, but my max cpu temps are actually lower than in the old build, which is great! After playing with the fan and pump curves, I managed to get the gpu core temp under control. The cooling system in general is doing a good job keeping the water temp moderate, 29-31c under load, with ambient air temp 23c. I just put the 1kw bios back on for some real testing and even at an unlocked 500-520w during Bright Memory bench loop, 2130-2155mhz eff @1063mv, core temp stayed at 40c with some 41c peaks. The system works well and I seem to have built it right. The coolant I had went bad apparently, ek cryofuel clear, also just bought a 1L bottle last month that I was saving for this build, its 2.5L system total. Now I'm running DW+SysPrep until tomorrow night when I will put just DW or DW+Utopia. I read about "dead water" additive for I'm assuming algae growth, but Idk if that handles corrosion as well. One day I may muster up the patience and courage to repaste/remount the card, 10.5c gpu/water delta is good, but I want to do better now that I have all this cooling capacity.


Maybe try this?






Koolance 705 Liquid Coolant, Electrically Insulative, Colorless, 5000ml (169 fl oz)


Koolance 705 series coolant is used by many industries requiring a low conductivity (3 microSiemens per centimeter), low-toxicity, reliable coolant with corr



koolance.com













Amazon.com: Red line Performance Coolant with Water Wetter 1/2 US gallon : Automotive


Buy Red line Performance Coolant with Water Wetter 1/2 US gallon: Antifreezes & Coolants - Amazon.com ✓ FREE DELIVERY possible on eligible purchases



www.amazon.com





Keep in mind I have absolutely no idea what I'm talking about.
Got it from here.






Car radiator coolant instead of water?


I'm upgrading my watercooling setup and I'm wondering about using car radiator coolant instaed of water. The idea is to avoid copper corrosion. There are different engine coolants, like: IAT (Inorganic Additive Technology) OAT (Organic Acid Technology) HOAT (Hybrid OAT) Which one would be more...




hardforum.com


----------



## kryptonfly

gfunkernaught said:


> @kryptonfly Right that is expected, but my max cpu temps are actually lower than in the old build, which is great! After playing with the fan and pump curves, I managed to get the gpu core temp under control. The cooling system in general is doing a good job keeping the water temp moderate, 29-31c under load, with ambient air temp 23c. I just put the 1kw bios back on for some real testing and even at an unlocked 500-520w during Bright Memory bench loop, 2130-2155mhz eff @1063mv, core temp stayed at 40c with some 41c peaks. The system works well and I seem to have built it right. The coolant I had went bad apparently, ek cryofuel clear, also just bought a 1L bottle last month that I was saving for this build, its 2.5L system total. Now I'm running DW+SysPrep until tomorrow night when I will put just DW or DW+Utopia. I read about "dead water" additive for I'm assuming algae growth, but Idk if that handles corrosion as well. One day I may muster up the patience and courage to repaste/remount the card, 10.5c gpu/water delta is good, but I want to do better now that I have all this cooling capacity.


Good job 
I also had trouble with EK Cryofuel Clear, some white residue at the top of the tank, but that was my fault for reusing the same old water. Is it common to achieve Port Royal at 2145-2160 MHz effective at 968 mV max on the curve? Because 2155 MHz at 1063 mV seems like a lot to me, and I only have my own gpu as a reference; I haven't seen others, which is why I'm asking.


----------



## gfunkernaught

Falkentyne said:


> Maybe try this?
> 
> 
> 
> 
> 
> 
> Koolance 705 Liquid Coolant, Electrically Insulative, Colorless, 5000ml (169 fl oz)
> 
> 
> Koolance 705 series coolant is used by many industries requiring a low conductivity (3 microSiemens per centimeter), low-toxicity, reliable coolant with corr
> 
> 
> 
> koolance.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Amazon.com: Red line Performance Coolant with Water Wetter 1/2 US gallon : Automotive
> 
> 
> Buy Red line Performance Coolant with Water Wetter 1/2 US gallon: Antifreezes & Coolants - Amazon.com ✓ FREE DELIVERY possible on eligible purchases
> 
> 
> 
> www.amazon.com
> 
> 
> 
> 
> 
> Keep in mind I have absolutely no idea what I'm talking about.
> Got it from here.
> 
> 
> 
> 
> 
> 
> Car radiator coolant instead of water?
> 
> 
> I'm upgrading my watercooling setup and I'm wondering about using car radiator coolant instaed of water. The idea is to avoid copper corrosion. There are different engine coolants, like: IAT (Inorganic Additive Technology) OAT (Organic Acid Technology) HOAT (Hybrid OAT) Which one would be more...
> 
> 
> 
> 
> hardforum.com


I've used car coolant way back in the day, and while it helped a tiny bit with heat absorption, it was too viscous for the tiny TT pump I had. I wouldn't want to put that stuff in my loop. Plus, based on my experiments with coolant vs water, straight DW has the best heat absorption.


----------



## gfunkernaught

kryptonfly said:


> Good job
> I also had troubles with EK cryofuel clear, some white residues at the top of the tank but my bad cause I reused same old water. Is it common to achieve Port Royal at 2145-2160 mhz effective at 968 mV max curve at best ? Because when I see 2155 mhz at 1063 mV it's a lot for me and I just have my gpu as reference, I didn't see others that's why I'm asking.


Just did another Quake 2 RTX session, PL capped at 70%, eff clock avg [email protected], mem offset +1200, no issues, and the avg core temp was 42c with 43c peaks when the watts reached 680w! Ayayay!

And the water temp stayed at 30-31c. I have the pumps responding to water temp now instead of gpu temp, the 28x fan array responding to gpu temp, plus the 120mm Noctua on the back of the card responding to the gpu memory junction sensor. The two rear case fans are at 100% because they're fairly quiet, and I have the A/C running so I can't hear them anyway.

So now there is less of that residue in the reservoir, but still some left. I'm thinking of taking the res apart after draining to clean it; I really don't want to touch the gpu block. I could add more sysprep. It seems to be moving the gunk into the top/dry part of the res, which is what I want. I top off the res at the beginning so that later on the bubbles grab the gunk and carry it into the res, where it ends up on the side walls.

PR at 2145-2160 eff 968mv??? Idk man that's way better than me. I crashed PR when I tried 2170mhz at 1063mv. Which 3090 do you have?


----------



## kryptonfly

gfunkernaught said:


> Just did another Quake 2 RTX session, PL capped at 70%, eff clock avg [email protected], mem offset +1200, no issues, and the avg core temp was 42c with 43c peaks when the watts reached 680w! Ayayay!
> 
> And the water temp stayed at 30-31c. I have the pumps corresponding to water temp now instead of gpu temp. The 28xfan array responds to gpu temp. Plus the 120mm noctua on the back of the card responding to the gpu memory junction sensor. Two rear case fans at 100% because they're fairly quiet and I have the A/C running so I can't hear them anyway.
> 
> So now there is less of that residue in the reservoir, but still some left. I'm thinking take the res apart after draining to clean it. Really dont want to touch the gpu block. Could add more sysprep. It seems to be moving the gunk into the top/dry part of the res which is what I want. I top off the res in the beginning so that later on all the bubbles grab the gunk and end up in the rest, later remaining on the side walls of the res.
> 
> PR at 2145-2160 eff 968mv??? Idk man that's way better than me. I crashed PR when I tried 2170mhz at 1063mv. Which 3090 do you have?


For now I'm still power limited starting at 962 mV, so it's hard to say, but my curve is 2175 MHz at 968 mV, around 2145 MHz effective. Max gpu temp 39°C with ambient 24°C, using a Bykski block, 5x360mm rads, and handmade "metal liquid" from a local overclocker. I will redo the shunts (bad solder joints) plus the 4 little shunts on the back to see... I have a Gigabyte Turbo 2x8-pin, which has full capacitor coverage front and back (76 POSCAPs) with 20x MLCC behind the gpu; it was made partly for servers, so tough enough for stability I guess, but NVIDIA asked Gigabyte to stop making the Turbo because it was much cheaper than a Quadro! It hasn't existed since March.


----------



## Antsu

kryptonfly said:


> For now I'm still PL starting 962 mV so hard to say but my curve is 2175 mhz at 968 mV, around 2145 mhz effective. Max gpu temp 39°C and ambiant 24°C with Bykski block, 5x360mm and handmade metal liquid from a local overclocker, I will do again shunts (bad solders) + 4 little shunts on the back to see... I have a Gigabyte turbo 2x8 pin which has full capacitors cover front and back (76 poscap) with 20x mlcc behind the gpu, made a little for servers so tough enough for stability I guess but nVidia asked to Gigabyte to stop to make Turbo cause really cheaper than Quatro ! Doesn't exist since march anymore.


That's a nice sample... My current card is better than my first, and it's miles away from that. GPU temp was 31-32C and best I could do was 2115Mhz @ 0.968V, around 2108Mhz effective. How was your score on that run?


----------



## Antsu

gfunkernaught said:


> Just did another Quake 2 RTX session, PL capped at 70%, eff clock avg [email protected], mem offset +1200, no issues, and the avg core temp was 42c with 43c peaks when the watts reached 680w! Ayayay!
> 
> And the water temp stayed at 30-31c.


That's solid contact I'd say! For me, in that same scenario, I can keep my water temp under control, but even with 23.0C water I peak at 39.0C... I've wanted to remount my block for a while now but haven't gotten around to doing it... Now I just might have to.


----------



## gfunkernaught

kryptonfly said:


> For now I'm still PL starting 962 mV so hard to say but my curve is 2175 mhz at 968 mV, around 2145 mhz effective. Max gpu temp 39°C and ambiant 24°C with Bykski block, 5x360mm and handmade metal liquid from a local overclocker, I will do again shunts (bad solders) + 4 little shunts on the back to see... I have a Gigabyte turbo 2x8 pin which has full capacitors cover front and back (76 poscap) with 20x mlcc behind the gpu, made a little for servers so tough enough for stability I guess but nVidia asked to Gigabyte to stop to make Turbo cause really cheaper than Quatro ! Doesn't exist since march anymore.


Thats really good! What was your water temp at?


----------



## gfunkernaught

Antsu said:


> That's a solid contact I'd say! For me, in that same scenario I can keep my water temp in control, but even with 23.0C water I peak to 39.0C... I've wanted to remount my block for a while now, but haven't gotten around to doing it... Now I just might have to.


Tbh I've thought about getting either the bykski or alphacool block for my Trio. Idk how they compare to the ek trio block in terms of contact and heat transfer.


----------



## kryptonfly

Antsu said:


> That's a nice sample... My current card is better than my first, and it's miles away from that. GPU temp was 31-32C and best I could do was 2115Mhz @ 0.968V, around 2108Mhz effective. How was your score on that run?


My best score is 15225, but unfortunately my vram doesn't want to go much higher; +1168, but I don't know if that's the max, maybe less, because I get crashes in games. I don't have ReBAR because it's incompatible with X99, so my score can be lower than others'.


gfunkernaught said:


> Thats really good! What was your water temp at?


I don't check water temp, just the delta to ambient, around 15°C. Thanks to the low gpu voltage I need less power, but in benchmarks it reaches 460W+. I could put a probe in the tank just to check, but now I have good gpu contact and the delta to ambient is enough for me. Anyway, I need to redo my shunts because they're not perfect yet. Waiting on the polyimide tape to mod properly.


----------



## gfunkernaught

Would I get better flow to the gpu block if I did pump>cpu>gpu vs pump>cpu>rad>gpu? I'm thinking I would but can someone chime in?


----------



## kryptonfly

gfunkernaught said:


> Would I get better flow to the gpu block if I did pump>cpu>gpu vs pump>cpu>rad>gpu? I'm thinking I would but can someone chime in?


To my mind, technically it would be exactly the same flow on a flow meter, and a little worse thermally with cpu heat going directly to the gpu. The best option, if you can, is a separate loop just for the gpu: full efficiency, control, and so on. I've always built loops separately for max efficiency, but especially because if you want to change the cpu or do maintenance you don't have to drain all the water, and I don't like the coolant absorbing the cpu's heat as well. If you want the best temps, that's the way to go. Even though the cpu runs way cooler than the gpu, the problem is these gpus are too power hungry; they need the maximum cooling we can give them.


----------



## gfunkernaught

kryptonfly said:


> To my mind and technically it would be the exact same flow with a flow meter and it would be a little worse with cpu heat going directly to gpu. The best and if you can it's to make another loop just for gpu, you will have full efficiency, control and so on... I always made loops separately for max efficiency but especially if you wanna change cpu or do maintenance you don't have to empty all water and I don't like that my cpu absorbs heat as well. If you want the best temps it's the way to go, even though the cpu is way colder than gpu, the problem is these gpus are too power hungry, they need the max we can do to cool them.


Yeah, I'm looking at my loop now and thought the same; I always wanted to avoid the CPU dumping heat right into the GPU. I know loop order doesn't affect water temp, but it will affect component temp. I use flexible tubing, so if I need to service the CPU I can just remove the block. I always leave a bit of slack on the tubes for service.


----------



## kryptonfly

gfunkernaught said:


> Yeah, I'm looking at my loop now and thought the same; I always wanted to avoid the CPU dumping heat right into the GPU. I know loop order doesn't affect water temp, but it will affect component temp. I use flexible tubing, so if I need to service the CPU I can just remove the block. I always leave a bit of slack on the tubes for service.


Maybe you can share a picture of your full system (if it's already been posted and I missed it, excuse me). Do you have space limits? I've put my 5x360mm rads outside the case with quick-disconnects, so I can unplug the gpu and block as fast as an air heatsink; there's just one more bracket to remove. There's a pic a few posts above. I'm really happy with this feature, really handy.


----------



## yzonker

gfunkernaught said:


> Yeah, I'm looking at my loop now and thought the same; I always wanted to avoid the CPU dumping heat right into the GPU. I know loop order doesn't affect water temp, but it will affect component temp. I use flexible tubing, so if I need to service the CPU I can just remove the block. I always leave a bit of slack on the tubes for service.


My CPU is directly before my GPU in my loop. I just tested it out of curiosity. Fired up Kombustor in 4k to minimize cpu usage with the gpu running at ~500w. Let the gpu temp stabilize and then hit the cpu with Prime95 SFFT. At 60% pump speed (where it happened to be running) I saw no measurable change in gpu core temp. At most 0.2C as it was bouncing around about that much. This was with a 5800x, so not a really high wattage cpu.


----------



## gfunkernaught

yzonker said:


> My CPU is directly before my GPU in my loop. I just tested it out of curiosity. Fired up Kombustor in 4k to minimize cpu usage with the gpu running at ~500w. Let the gpu temp stabilize and then hit the cpu with Prime95 SFFT. At 60% pump speed (where it happened to be running) I saw no measurable change in gpu core temp. At most 0.2C as it was bouncing around about that much. This was with a 5800x, so not a really high wattage cpu.


I was referring to flow and pressure in terms of loop order. I thought maybe the rad would impede the flow to the gpu, but I should let gravity help the flow.


----------



## mirkendargen

gfunkernaught said:


> I was referring to flow and pressure in terms of loop order. I thought maybe the rad would impede the flow to the gpu, but I should let gravity help the flow.


Gravity won't affect flow in a closed loop once the system is primed. What goes up must come down and vice versa. I have a 2.5m height difference in my loop, and yeah, it's a problem if I don't have my res at the top and the pump at the bottom when initially starting with an empty loop, but once it's primed, head pressure isn't impacted by the height difference.
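A quick way to see why the static heads cancel once the loop is primed: sum the elevation changes around the circuit and they come to zero, so the pump only fights friction. The segment heights below are made up for illustration.

```python
# In a primed closed loop the net elevation change around the circuit is
# zero, so gravity adds no net head; only friction losses remain.
# Illustrative segment heights in meters (positive = coolant goes up),
# for a loop with a 2.5 m rise to external rads and a return drop.
segments = [2.5, -1.0, -1.5]  # up to top rads, down to GPU, down to pump

net_rise = sum(segments)
print(net_rise)  # 0.0 -> gravity contributes no net head once primed
```

The height only matters when filling, when the down-leg is full of air instead of water, which is exactly the res-at-top, pump-at-bottom situation described above.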


----------



## gfunkernaught

mirkendargen said:


> Gravity won't effect flow in a closed loop once the system is primed. What goes up must come down and vice versa. I have a 2.5m height difference in my loop and yeah, it's a problem if I don't have my res at the top and the pump at the bottom when initially starting it with an empty loop, but once it's primed head pressure isn't impacted by the height difference.


So would the following make a difference in flow or pressure?
Pump>cpu>GPU>rad, or pump>cpu>rad>GPU? The rad is top mounted.


----------



## mirkendargen

gfunkernaught said:


> So would the following make a difference in flow or pressure?
> Pump cpu GPU rad or pump cpu rad GPU? This rad is top mounted.


In terms of flow, no. In terms of temperature the second option would have the coolant at the GPU a tiny bit cooler and the coolant at the CPU a tiny bit warmer, but almost certainly less than 1C difference.
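That "less than 1C" figure is easy to sanity-check: the coolant's temperature rise across a component is dT = P / (m_dot * c_p). The flow rate and power numbers below are assumptions for illustration, not measurements from either loop.

```python
# Coolant temperature rise across a heat source: dT = P / (m_dot * c_p).
# Flow rate and power figures below are illustrative assumptions.

CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def coolant_rise(power_w: float, flow_lpm: float) -> float:
    """Temp rise (degC) of coolant absorbing power_w at flow_lpm L/min."""
    m_dot = flow_lpm / 60.0  # kg/s, taking 1 L of water as ~1 kg
    return power_w / (m_dot * CP_WATER)

# At roughly 1 GPM (~3.8 L/min) of loop flow:
print(round(coolant_rise(150.0, 3.8), 2))  # CPU pre-heat: 0.57 degC
print(round(coolant_rise(500.0, 3.8), 2))  # GPU's own rise: 1.89 degC
```

So even with a hot CPU directly upstream, the coolant reaching the GPU is only about half a degree warmer, which is why loop order barely shows up in component temps.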


----------



## gfunkernaught

mirkendargen said:


> In terms of flow, no. In terms of temperature the second option would have the coolant at the GPU a tiny bit cooler and the coolant at the CPU a tiny bit warmer, but almost certainly less than 1C difference.


Yep I figured that. It helps to hash out ideas and get perspective BEFORE dismantling a loop 😅


----------



## AllGamer

Still can't find any 3090 in stock.
Gave up waiting and picked up an RTX 3080 Ti at MSRP.


----------



## gfunkernaught

AllGamer said:


> Still can't find any 3090 in stock.
> Gave up waiting, and picked up a RTX 3080 Ti at MSRP


Ah man...





ZOTAC ZT-A30900J-10P NVIDIA GeForce RTX 3090 24 GB GDDR6X Graphic Card | SabrePC


ZOTAC ZT-A30900J-10P NVIDIA GeForce RTX 3090 24 GB GDDR6X Graphic Card




www.sabrepc.com


----------



## GRABibus

mirkendargen said:


> Gravity won't effect flow in a closed loop once the system is primed. What goes up must come down and vice versa. I have a 2.5m height difference in my loop and yeah, it's a problem if I don't have my res at the top and the pump at the bottom when initially starting it with an empty loop, but once it's primed head pressure isn't impacted by the height difference.





AllGamer said:


> Still can't find any 3090 in stock.
> Gave up waiting, and picked up a RTX 3080 Ti at MSRP


Stock at resellers in France


----------



## Falkentyne

BTW, I'm sure most people know this by now, but there is no "ECC" in GDDR6X. There never was. It's the controller retrying failed transactions.



https://media-www.micron.com/-/media/client/global/documents/products/data-sheet/dram/gddr/gddr6/gddr6x_sgram_8gb_brief.pdf?rev=161547726f0b45239d3da37ef29b09bf


----------



## Antsu

Falkentyne said:


> BTW I'm sure most people know this now but there is no "ECC" in GDDR6X. There never was. it's the controller retrying failed transactions.
> 
> 
> 
> https://media-www.micron.com/-/media/client/global/documents/products/data-sheet/dram/gddr/gddr6/gddr6x_sgram_8gb_brief.pdf?rev=161547726f0b45239d3da37ef29b09bf


Thank you for confirming what my gut feeling has been telling me. Am I totally wrong to assume this allows performance to stay almost identical right up until crashing? Except, of course, if you hit the "middle zone" where errors happen but can still be recovered. So it basically functions like ECC, but the "middle zone" is a lot smaller?


----------



## ManniX-ITA

Antsu said:


> Thank you for confirming what my gut feeling has been telling me. Am I totally in the wrong if I assume this allows for the performance to stay identical almost all the way before crashing? Except of course if you manage to hit the "middle zone" where errors happen but they can still be corrected. So it basicly functions like ECC but the "middle zone" is a lot smaller?


Performance will degrade the moment retries start happening; it's not going to stay identical.
When retries start failing, you'll get visual glitches and then crashes.
ECC error checking is strong; a CRC check is weak.
Corrupted data can get through and will show up as visual glitches.
Correction is also fast, while retries have a much bigger performance impact.
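The strong-vs-weak distinction can be shown with toy codes (purely illustrative; these are not the actual schemes GDDR6X link protection or ECC DRAM use): a weak check only knows *that* a transfer is bad and must be retried, while a Hamming SECDED-style code can locate and fix a single flipped bit in place.

```python
# Toy comparison: a weak check forces a retry; a Hamming(7,4) code
# pinpoints and corrects a single flipped bit with no retry needed.

def hamming74_encode(d):                 # d = list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def hamming74_correct(c):                # returns (corrected codeword, pos)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3           # syndrome = 1-based error position
    if pos:
        c = c[:]
        c[pos - 1] ^= 1                  # flip the bad bit back
    return c, pos

word = hamming74_encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[4] ^= 1                        # one bit flipped in transit

fixed, where = hamming74_correct(corrupted)
print(fixed == word, where)  # True 5 -> error located and corrected in place

# A weak check (a single parity bit here stands in for the link CRC)
# only knows *that* the transfer is bad:
detected = sum(corrupted) % 2 != sum(word) % 2
print(detected)  # True -> controller must re-send the whole transfer
```

The retry path throws away and re-transmits the whole burst, which is where the extra latency comes from as errors pile up.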


----------



## jura11

Well, yesterday wasn't my best day hahaha

My plan was to replace all the thermal pads on my RTX 3090s and the TIM on the GPUs. The TIM was replaced with ZF-EX on both GPUs and the pads with Gelid thermal pads, which improved VRAM temperatures by a good 20°C. GPU core temperatures rose by 2-4°C though, so I think there is a bad mount, and I will probably redo the loop when the other GPU is back home 😞

The old Kryonaut was literally dry, which is what I expected; on the other GPU I had used Arctic MX4, which was still in good condition and not dry at all.

Now the bad news: my better-OCing GPU failed and I'm not sure what happened. I'm getting a strange error on it, and it is not recognised in the BIOS, in Device Manager, or even in NvFlash, etc.

I tried the stock cooler and nothing. I've never had issues with that GPU; it was working perfectly prior to swapping the thermal pads and paste.

I checked for any sign of damage, etc. Nothing. I'm literally at a loss now 😞

For people who have Bykski waterblocks and want to improve VRAM temperatures, the Gelid 1.5mm pads are worth using; they improved VRAM temperatures by a good 20°C on one of my worse GPUs (previously I would see 76°C, now I'm seeing 56°C on VRAM). Just a bit of advice: pre-squish them a bit with glass or something like that.

Hope this helps

Thanks, Jura


----------



## Falkentyne

jura11 said:


> Well yesterday hasn't been my best day hahaha
> 
> My plan was to replace all the thermal pads on my RTX 3090s and the TIM on both GPUs. I replaced the TIM with ZF-EX on both GPUs and swapped the pads for Gelid thermal pads, which improved VRAM temperatures by a good 20°C. GPU core temperatures, however, rose by 2-4°C, so I think there is a bad mount and I will probably redo the loop when the other GPU is home 😞
> 
> The old Kryonaut was literally dry, which is what I expected. On the other GPU I had used Arctic MX-4 thermal paste, which was still in good condition and not dry at all
> 
> Now to the bad news: my better-overclocking GPU failed and I'm not sure what happened. I'm getting a strange error on that GPU and it's not recognised in the BIOS, Device Manager, or even NvFlash etc
> 
> I tried the stock cooler and nothing. I never had issues with that GPU; it was working perfectly prior to swapping the thermal pads and paste
> 
> I checked for any sign of damage etc and found nothing; I'm literally at a loss now 😞
> 
> For people who have Bykski waterblocks and want to improve VRAM temperatures, the Gelid 1.5mm pads are worth using; they improved VRAM temperatures by a good 20°C on one of my worse GPUs (previously I would see 76°C, now I'm seeing 56°C on the VRAM). Just a bit of advice: pre-squish them a bit with a piece of glass or something like that
> 
> Hope this helps
> 
> Thanks, Jura


Check the PCIe slot pins carefully on both sides of the card, and inspect the full PCB very carefully.
Maybe try removing the card, stripping everything from it, and soaking it in pure (99%) isopropyl alcohol (you may need a quart or two of this) for 15 minutes. Let it dry, then carefully look over the whole card (all traces, all SMDs, all caps), then reassemble everything and see if it boots. You should be able to re-use the pads (at least to get the card booting again).


----------



## AllGamer

gfunkernaught said:


> Ah man...
> 
> ZOTAC ZT-A30900J-10P NVIDIA GeForce RTX 3090 24 GB GDDR6X Graphic Card | SabrePC (www.sabrepc.com)


Thanks but no thanks, I've seen plenty of scalper prices everywhere.
At those prices I can pick up 2 at MSRP.


----------



## jura11

And I wouldn't use it again


Falkentyne said:


> Check carefully the PCIE Slot pins on both sides of the card and check the full PCB very carefully.
> Maybe try removing the card, removing everything from it, soak the card in pure (99%) isopropyl alcohol (you may need a quart or two of this) for 15 minutes, then let it dry, then carefully look at the card fully, all traces, all SMD's, all caps, then reassemble everything and see if it boots. You should be able to re-use the pads (at least to get the card booting again).


Hi there 

Thanks for the reply mate. That was the first thing I did yesterday: soaked the GPU in isopropyl alcohol, checked the PCIe slot, and checked everything else, and everything seems okay, no damage or ripped SMDs etc

Hope this helps 

Thanks, Jura


----------



## KCDC

So I THINK the Discord 3090 stock bot actually worked for me finally, or at least got me into the Zotac queue... The only 3090 they had available is the AMP Extreme Holo. Looks like the order went through; we will see. 

This is going into a multi-GPU render station, so I am trying to find a waterblock for it. Bykski seems to be the only one that has one, but I am confused by the model numbers and compatibility. AliExpress is selling this as compatible with the one I just ordered:








Bykski Water Block for ZOTAC Gaming RTX 3090 AMP Core Holo/Extreme Holo GPU Card (N-ST3090PGF) - AliExpress (www.aliexpress.com)












Bykski Full Coverage GPU Water Block w/ Integrated Active Backplate for Zotac RTX 3090 PGF OC (N-ST3090PGF-TC) - www.bykski.us





The PCB layout seems to match, but I'm wondering if anyone here has the card I am supposedly getting, and whether they're watercooling it. Thanks!


----------



## gfunkernaught

What thermal paste has the best consistency for good contact between the GPU die and the block?


----------



## KedarWolf

gfunkernaught said:


> What thermal paste has the best compression for good contact between gpu die and block?


This is really good, like TFX but much thinner and easier to apply. It's cheap on Amazon too; I didn't link it directly because I'm on amazon.ca.

*Thermal Paste, SYY 8 Grams CPU Paste Thermal Compound Paste Heatsink for IC/Processor/CPU/All Coolers, 15.7W/m.k Carbon Based High Performance, Thermal Interface Material, CPU Thermal Paste*

Edit: But good contact with the GPU die is obtained by using soft, compressible pads like GELID Extreme.

Second edit: Don't buy cheap GELID pads from AliExpress; they are an older version that GELID support says has quality problems. 

I can give you a link to 15% off GELID pads from the official GELID store. Let me know.


----------



## hengunl25

hi, can someone help me pls

My Zotac Trinity 3090 is weird; almost all the time it runs at 0.725V and 780MHz. The only change I made was setting the power limit to 70% in Afterburner, that's all, and I like this behavior because it can run most games at 4K 60 with temps between 60-65°C on the core and 75-80°C on the VRAM

but..

Sometimes it also runs at full speed BY ITSELF; by full speed I mean up to 2GHz and more than 1V. I don't want this behavior because the temps can ramp up to 75°C core and 90°C VRAM

How can I prevent that from happening? I want the first behavior of my card because it runs cool

pls someone help


----------



## Falkentyne

hengunl25 said:


> hi, can someone help me pls
> 
> My Zotac Trinity 3090 is weird; almost all the time it runs at 0.725V and 780MHz. The only change I made was setting the power limit to 70% in Afterburner, that's all, and I like this behavior because it can run most games at 4K 60 with temps between 60-65°C on the core and 75-80°C on the VRAM
> 
> but..
> 
> Sometimes it also runs at full speed BY ITSELF; by full speed I mean up to 2GHz and more than 1V. I don't want this behavior because the temps can ramp up to 75°C core and 90°C VRAM
> 
> How can I prevent that from happening? I want the first behavior of my card because it runs cool
> 
> pls someone help


Manually lock 0.725V in MSI Afterburner with "L" and "Apply" while using the "default" (reset-to-stock) curve.
You can force any voltage (VID) along the curve this way, as long as you are NOT power limited, but the card will also follow the frequency for that VID.
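For anyone who wants to reason about what that lock does before touching the curve, here is a tiny sketch; the voltage/frequency pairs are made-up illustration values, not read from any real card:

```python
# Toy model of Afterburner's "L" lock: every V/F point above the locked
# voltage gets clamped to the locked point's frequency, so the card never
# requests a higher VID. Illustrative numbers only, not from a real card.

def flatten_curve(curve, lock_mv):
    """Return the curve with all points above lock_mv flattened to the
    frequency of the highest point at or below lock_mv."""
    lock_freq = max(f for mv, f in curve if mv <= lock_mv)
    return [(mv, f if mv <= lock_mv else lock_freq) for mv, f in curve]

# (millivolts, MHz) pairs, roughly shaped like an Ampere V/F curve
stock = [(725, 1500), (850, 1740), (1000, 1980), (1012, 2070), (1100, 2175)]
print(flatten_curve(stock, 725))   # everything clamps to 1500 MHz
print(flatten_curve(stock, 1012))  # points past 1012 mV clamp to 2070 MHz
```

As noted above, this only holds while the card isn't power limited; under a power cap the card can still drop below the locked point.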


----------



## hengunl25

Edit: nvm, turns out the 3090 is so powerful it even drops to idle clocks when playing some medium-heavy graphically intense games

I turned on AC Valhalla and the card does boost to its normal clocks


----------



## Dreams-Visions

I just noticed the most recent beta version of Afterburner (a few months old now) allows the memory slider to go all the way up to +2000, up from +1500 before, IIRC. 

Is it safe to drag it to max and see what happens? My card was fine with +1500 before. Open loop, if that matters.


----------



## des2k...

Dreams-Visions said:


> I just noticed the most recent beta version of Afterburner (a few months old now) allows for the memory slider to drag all the way up to +2000, up from +1500 before IIRC.
> 
> Is it safe to drag it to max and see what happens? My card was fine with +1500 before. Open loop if that matters.


Sure, check PR to see if the score gets better.

My Zotac 3090 tops out around +1575 on memory.
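For a sense of what those slider values mean on paper: taking the 3090's 384-bit bus and ~19.5 Gbps stock GDDR6X, and assuming the Afterburner offset adds one-to-one onto the effective data rate (a common reading among owners, not a datasheet fact), the theoretical bandwidth works out as:

```python
# Back-of-envelope GDDR6X bandwidth for a given Afterburner memory offset.
# Assumption: the offset (MHz) adds one-to-one to the effective data rate
# in MT/s; the stock figures below come from the 3090 spec sheet.

BUS_WIDTH_BITS = 384     # RTX 3090 memory bus width
STOCK_RATE_MTS = 19500   # ~19.5 Gbps stock GDDR6X

def bandwidth_gbs(offset_mhz):
    """Theoretical memory bandwidth in GB/s at a given offset."""
    return (STOCK_RATE_MTS + offset_mhz) * BUS_WIDTH_BITS / 8 / 1000

print(bandwidth_gbs(0))     # 936.0 GB/s, matching the stock spec
print(bandwidth_gbs(1500))  # 1008.0 GB/s
print(bandwidth_gbs(2000))  # 1032.0 GB/s
```

Of course GDDR6X error correction means real throughput can fall even as the clock rises, which is why checking the Port Royal score matters more than the math.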


----------



## ManniX-ITA

I have tested my OC for daily use with Metro Exodus EE, Cyberpunk 2077, Horizon Zero Dawn, etc., just to name a few.
Plus synthetic benchmarks in a loop, 3DMark & Co.
G-Sync enabled and frame rate capped at 141fps, so no "New World" kind of issues.

What is the game that crashes in minutes? Valheim... LoL.


----------



## GRABibus

ManniX-ITA said:


> I have tested my OC for daily use with Metro Exodus EE, Cyberpunk 2077, Horizon Zero Dawn, etc., just to name a few.
> Plus synthetic benchmarks in a loop, 3DMark & Co.
> G-Sync enabled and frame rate capped at 141fps, so no "New World" kind of issues.
> 
> What is the game that crashes in minutes? Valheim... LoL.


For me, BFV is fully stable at +120MHz on core with the 1.1V point of the V/F curve set at 2175MHz (GPU and BIOS in signature).

If I set the same OC in Modern Warfare 2019 multiplayer or Cold War multiplayer => crash in the menu!
Modern Warfare is only stable with +75MHz max on core.
Cold War is only stable with +105MHz max on core.

I think we can't define any one game as the reference game for testing our OC.
A lot of people say "if you pass Metro or BFV with your OC, then you are stable in all other games with the same OC"... That's simply not true.

*"One game, one OC".*


----------



## ManniX-ITA

GRABibus said:


> I think we can't define a game as the reference game which could be considered as the one to test our OC.


Yes, in general it's hard with the RTX 3090; it was much easier with my old GTX 1070 
You can almost always find something harsher and more graphically intensive than what you thought was the most challenging.

But my point is... Valheim?
Gosh, it's pixel art with some sophisticated effects.
I wouldn't have expected it to be the toughest to run.
Something like Modern Warfare, yes, but Valheim surprised me...


----------



## satinghostrider

GRABibus said:


> For me, BFV is fully stable at +120MHz on core with the 1.1V point of the V/F curve set at 2175MHz (GPU and BIOS in signature).
> 
> If I set the same OC in Modern Warfare 2019 multiplayer or Cold War multiplayer => crash in the menu!
> Modern Warfare is only stable with +75MHz max on core.
> Cold War is only stable with +105MHz max on core.
> 
> I think we can't define any one game as the reference game for testing our OC.
> A lot of people say "if you pass Metro or BFV with your OC, then you are stable in all other games with the same OC"... That's simply not true.
> 
> *"One game, one OC".*


Very true. I myself had to downclock my 3090 for Cold War. It would seem stable at first, but you either have to keep upping the voltage slightly on the V/F curve or downclock, as it might randomly crash after a while. Temps are all fine.

I chose to downclock slightly for it to be stable. My other games work fine with higher overclocks, but I prefer to just leave it at a lower value as Cold War is a game I play daily.


----------



## GRABibus

satinghostrider said:


> Very true. I myself had to downclock my 3090 for Cold War. It would seem stable at first, but you either have to keep upping the voltage slightly on the V/F curve or downclock, as it might randomly crash after a while. Temps are all fine.
> 
> I chose to downclock slightly for it to be stable. My other games work fine with higher overclocks, but I prefer to just leave it at a lower value as Cold War is a game I play daily.


Did you try Modern Warfare multiplayer?
It is worse than Cold War 😊

And for those two games, I can't set the 1.1V point of the V/F curve to a fixed frequency like I do for BFV (explained in my post just above) => immediate crash.

What's your V/F curve for Cold War?


----------



## satinghostrider

GRABibus said:


> Did you try Modern Warfare multiplayer?
> It is worse than Cold War 😊


No I did not, I'm quite focused on Cold War at the moment. 😎

But every game engine differs in what one would deem stable, so I just base it on the game I play the most. 🙂


----------



## GRABibus

satinghostrider said:


> No I did not, I'm quite focused on Cold War at the moment. 😎
> 
> But every game engine differs in what one would deem stable, so I just base it on the game I play the most. 🙂


What's your V/F curve for Cold War?


----------



## satinghostrider

GRABibus said:


> What's your V/F curve for Cold War?


+120MHz at 1.012V, which brings me to around 2070MHz.


----------



## GRABibus

satinghostrider said:


> +120MHz at 1.012V, which brings me to around 2070MHz.


Can you please show a screenshot of MSI AB?


----------



## satinghostrider

GRABibus said:


> Can you please show a screenshot of MSI AB?


I'll post it here tomorrow. Just ended a 5-hour game and gonna tuck in. 😊


----------



## GRABibus

satinghostrider said:


> I'll post here tomorrow. Just ended a 5 hour game and gonna tuck in. 😊


+120MHz is crashing for me in Cold War 😊


----------



## satinghostrider

GRABibus said:


> +120MHz is crashing for me in Cold War 😊


What's your core clocks at that frequency/voltage?


----------



## GRABibus

satinghostrider said:


> What's your core clocks at that frequency/voltage?


With this curve I crash in Cold War:


----------



## satinghostrider

GRABibus said:


> With this curve I crash in Cold War ;
> 
> View attachment 2525319


My power limit is maxed out and my core voltage offset is set to 0. It's not really useful to compare, as every card and BIOS is different. I'm on the 460W Suprim X BIOS on my 3090 Gaming X Trio.

Also, I've locked my voltage at 1.012V to boost around 2070MHz.


----------



## GRABibus

satinghostrider said:


> My power limit is maxed out and my core voltage is set to 0. It's not really useful to compare as every card and bios is different. I'm on 460W SuprimX bios on my 3090 Gaming X Trio.
> 
> Also, I've locked my voltage at 1.012v to boost around 2,070Mhz.


It is a 1000W BIOS, so I lock it at PL=60%.

You lock the 1.012V point at 2070MHz and then flatten the curve beyond 1.012V?


----------



## satinghostrider

GRABibus said:


> It is a 1000W BIOS, so I lock it at PL=60%.
> 
> You lock the 1.012V point at 2070MHz and then flatten the curve beyond 1.012V?


Yes, that's what I did. No throttling in GPU-Z during Cold War, at least.


----------



## GRABibus

satinghostrider said:


> Yes that's what I did. No throttle in GPU-Z during Coldwar at least.


Water or air ?


----------



## satinghostrider

GRABibus said:


> Water or air ?


Water. Max temps of 41.3 degrees for the core and 52 degrees for the memory. No active backplate.


----------



## Falkentyne

GRABibus said:


> With this curve I crash in Cold War ;
> 
> View attachment 2525319


Lower your memory overclock to +250 (not a typo) and then try +120/135/150 on the core, please. Awaiting your reply.


----------



## satinghostrider

Oh, forgot to mention I'm only at +750MHz on memory.


----------



## GRABibus

Falkentyne said:


> Lower your memory overclock to +250 (not a typo) and then try +120/135/150 on the core, please. Awaiting your reply.







+250MHz on memory and +120MHz on core => crash


----------



## GRABibus

Here are my error codes:

[screenshots attached]

I read that Activision is working on PC crashes with these error numbers.
But in my case, this is due to GPU overclock instability.


----------



## KedarWolf

What do you peeps get on your 24/7 gaming settings with Nvidia and Afterburner?

Here are mine below; my screen resolution doesn't matter because 3DMark uses its own.

I used Time Spy Extreme because G-Sync and frame limiter settings don't affect it; in both tests the frame rate never gets high enough. My screen refresh rate is 144Hz.

I'm using the MSI Suprim X Rebar BIOS and have Rebar enabled globally with Nvidia Inspector.

It says I'm using Windows 10, but it's Windows 11 Enterprise Insider Preview.


----------



## gfunkernaught

I lost 300 pts in PR going to a new driver. Check it out:

Result - www.3dmark.com

Let me know if the link doesn't work. It's a result comparison.


----------



## GRABibus

gfunkernaught said:


> I lost 300 pts in PR going to a new driver. Check it out:
> 
> Result - www.3dmark.com
> 
> Let me know if the link doesn't work. It's a result comparison.


I had the same results for both drivers.

Was Rebar forced in NVIDIA Inspector for Port Royal in both runs?


----------



## gfunkernaught

GRABibus said:


> I had the same results for both drivers.
> 
> Was Rebar forced in NVIDIA Inspector for Port Royal in both runs?


Yup. The first time I ran it with the latest drivers and without rebar, it was 4pts lower.


----------



## Lord of meat

jura11 said:


> Well yesterday hasn't been my best day hahaha
> 
> My plan was to replace all the thermal pads on my RTX 3090s and the TIM on both GPUs. I replaced the TIM with ZF-EX on both GPUs and swapped the pads for Gelid thermal pads, which improved VRAM temperatures by a good 20°C. GPU core temperatures, however, rose by 2-4°C, so I think there is a bad mount and I will probably redo the loop when the other GPU is home 😞
> 
> The old Kryonaut was literally dry, which is what I expected. On the other GPU I had used Arctic MX-4 thermal paste, which was still in good condition and not dry at all
> 
> Now to the bad news: my better-overclocking GPU failed and I'm not sure what happened. I'm getting a strange error on that GPU and it's not recognised in the BIOS, Device Manager, or even NvFlash etc
> 
> I tried the stock cooler and nothing. I never had issues with that GPU; it was working perfectly prior to swapping the thermal pads and paste
> 
> I checked for any sign of damage etc and found nothing; I'm literally at a loss now 😞
> 
> For people who have Bykski waterblocks and want to improve VRAM temperatures, the Gelid 1.5mm pads are worth using; they improved VRAM temperatures by a good 20°C on one of my worse GPUs (previously I would see 76°C, now I'm seeing 56°C on the VRAM). Just a bit of advice: pre-squish them a bit with a piece of glass or something like that
> 
> Hope this helps
> 
> Thanks, Jura


Thank you! About to swap the pads on mine from Thermalright to Gelid. Useful info.


----------



## SuperMumrik

I will do some changes to my loop and was planning to change the thermal pads on my Strix at the same time. Does anyone remember what pad size Bykski used on the block without the active backplate? 😀


----------



## kryptonfly

SuperMumrik said:


> Will do some changes to my loop and was planning to change the thermal pads on my Strix at the same time. Anyone remember what pad size bykski used on the block without active back plate? 😀


I have a Bykski GMOC-X for Gigabyte with the normal backplate; it's 1mm on the GPU block and 1.5mm on the backplate, but I don't know if Bykski uses the same spacing for Asus as well.


----------



## SuperMumrik

kryptonfly said:


> I have a Bykski GMOC-X for Gigabyte with normal backplate, it's 1mm on GPU block and 1.5mm on the backplate but I don't know if Bykski uses same spacing for Asus as well.


Tnx! I vaguely remembered that it could be 1mm on the GPU block. Probably 1.5mm on the backplate then 👌


----------



## GRABibus

KedarWolf said:


> What do you peeps get on your 24/7 gaming settings with Nvidia and Afterburner.
> 
> Here is mine below, my screen resolution doesn't matter because 3DMark uses its own.
> 
> I used Time Spy Extreme because GSync and frame limiter settings don't affect it, on both tests the frame rate never gets high enough. My screen refresh rate is 144Hz.
> 
> I'm using the MSI Suprim X Rebar BIOS and have Rebar enabled globally with Nvidia Inspector.
> 
> It says I'm using Windows 10 but it's Windows 11 Enterprise Insider Preview.
> 
> [screenshots attached]


I don't really get what your MSI AB settings are for the PR score 15210...?


----------



## sepia21

Has anyone used these thermal pads from alphacool? Alphacool Eisschicht Wärmeleitpad - 14W/mK 120x20x1,5mm - 2 Stück
Thanks in advance


----------



## ManniX-ITA

sepia21 said:


> Has anyone used these thermal pads from alphacool? Alphacool Eisschicht Wärmeleitpad - 14W/mK 120x20x1,5mm - 2 Stück
> Thanks in advance


No, but I've been told the new "Soft" line is re-branded Fujipoly.
Not sure about these ones.

Alphacool Apex Soft Wärmeleitpad 14W/mK 2x 120x20x1mm - www.alphacool.com


----------



## Falkentyne

ManniX-ITA said:


> No, but I've been told the new "Soft" line is re-branded Fujipoly.
> Not sure about these ones.
> 
> Alphacool Apex Soft Wärmeleitpad 14W/mK 2x 120x20x1mm - www.alphacool.com


Igor's Lab said the Alphacool pads are made in Japan, and of course the small sizes are priced at outlandish highway-robbery levels, so they are undoubtedly Fujipoly.
I still wonder if EC360 pads are made in Germany or Japan...


----------



## ManniX-ITA

Falkentyne said:


> I still wonder if EC360 pads are made in Germany or Japan...


They say made in Germany, Igor as well...


----------



## GAN77

ManniX-ITA said:


> Igor as well...


Igor talks a lot :) And more and more often he makes mistakes :)


----------



## yzonker

Dang, just went through the same bizarre problem with the Nvidia drivers that I had a few months ago. All of a sudden I had lost significant stability, in this case trying to run PR. I finally gave up and DDU'd again. Poof, fixed. I hadn't touched anything, to my knowledge. The 3 runs I'm comparing are my old personal best, the best I could do now, and then after DDU (which is my new personal best of 15700).









Result - www.3dmark.com


----------



## sepia21

sepia21 said:


> Has anyone used these thermal pads from alphacool? Alphacool Eisschicht Wärmeleitpad - 14W/mK 120x20x1,5mm - 2 Stück
> Thanks in advance


So they are not good quality? I have some overheating issues with the memory.


----------



## jura11

Lord of meat said:


> Thank you About to swap pads on mine from thermalight to gelid. useful info.


Hi there 

Personally I would use the Gelid pads; you should see a decent decrease in VRAM temperatures, or at least I did. After a few tests, VRAM temperatures now won't break 60°C (usually I see them at 52-56°C), whereas previously I would see 74-76°C on that GPU 

Hope this helps 

Thanks, Jura


----------



## jura11

SuperMumrik said:


> Tnx! I vaguely remembered that it could be 1mm on the gpu block. Probably 1,5mm on the backplate then 👌


From what I remember, Bykski uses 1.2-1.25mm thermal pads on their blocks and backplates

I measured the pads on both of my Bykski waterblocks; I have the Zotac and reference-PCB waterblocks at home and both use the same pads

Hope this helps 

Thanks, Jura


----------



## jura11

SuperMumrik said:


> Will do some changes to my loop and was planning to change the thermal pads on my Strix at the same time. Anyone remember what pad size bykski used on the block without active back plate? 😀


If you have the Bykski waterblock and backplate then you will need 1.5mm thermal pads from Gelid. You can try 1mm pads, but I used the Gelid 1.5mm pads on my Bykski waterblock and backplate 

The stock pads on Bykski are 1.2-1.25mm 

Hope this helps 

Thanks, Jura


----------



## jura11

sepia21 said:


> So they are not good quality? I have some overheating issues with the memory.


Hi there 

What GPU do you have, and are you running under water or still on air? 

I would personally get Gelid pads rather than the Alphacool pads, which are very expensive 

Hope this helps 

Thanks, Jura


----------



## ManniX-ITA

GAN77 said:


> Igor talks a lot) And more and more often he makes mistakes)


He makes mistakes like everyone; we are all human 
But yes, he's definitely talking a lot!


----------



## sepia21

jura11 said:


> Hi there
> 
> What GPU do you have and are you running under water or still on air?
> 
> I would personally get Gelid pads than Alphacool pads which are very expensive
> 
> Hope this helps
> 
> Thanks, Jura


Hi there,
Thanks for the reply. I'm using a Zotac 3090 under water (2x 420mm rads), but today I was playing a bit of Cyberpunk at 4K and the memory temps were at 100°C after like 15-20 minutes. The core temp was OK considering it was running at 2100MHz (around 50°C). But I clearly have to change the pads. Unfortunately, these Gelid pads are not available in Italy and I don't know where to buy them. Also, I need to buy new thermal paste as my previous tube is finished. I was using Thermal Grizzly Kryonaut but I wasn't satisfied with it because after a month of use it gets dry. Do you have any suggestions for that? 

Thanks again


----------



## jura11

sepia21 said:


> Hi there,
> Thanks for the reply. I'm using a Zotac 3090 under water (2x 420mm rads), but today I was playing a bit of Cyberpunk at 4K and the memory temps were at 100°C after like 15-20 minutes. The core temp was OK considering it was running at 2100MHz (around 50°C). But I clearly have to change the pads. Unfortunately, these Gelid pads are not available in Italy and I don't know where to buy them. Also, I need to buy new thermal paste as my previous tube is finished. I was using Thermal Grizzly Kryonaut but I wasn't satisfied with it because after a month of use it gets dry. Do you have any suggestions for that?
> 
> Thanks again


Hi there 

What waterblock are you using: an Alphacool, EKWB, or Bykski one? 

The Gelid pads you can get through their store, which is EU-based, or you can try to get them through Amazon or AliExpress, which is where I got mine, with no issues with these pads

For thermal paste, use something like Noctua NT-H1, Thermalright TFX, ZF-EX or SYY-157; I've used all of these pastes with good success on an RTX 3090, 3080 or 3070 

Kryonaut is not the best for these RTX 3xxx series GPUs; mine was literally dry after removing the waterblock. On the other hand, the Arctic Cooling MX-4 I was using on the other GPU was still fine 

Likewise, I wouldn't use Kryonaut on new CPUs; I found it dry there as well after 2 months or so, which is strange because that TIM was fine on a 5960X or 3900X but not on a 5950X 

Hope this helps 

Thanks, Jura


----------



## Falkentyne

jura11 said:


> Hi there
> 
> What waterblock are you using: an Alphacool, EKWB, or Bykski one?
> 
> The Gelid pads you can get through their store, which is EU-based, or you can try to get them through Amazon or AliExpress, which is where I got mine, with no issues with these pads
> 
> For thermal paste, use something like Noctua NT-H1, Thermalright TFX, ZF-EX or SYY-157; I've used all of these pastes with good success on an RTX 3090, 3080 or 3070
> 
> Kryonaut is not the best for these RTX 3xxx series GPUs; mine was literally dry after removing the waterblock. On the other hand, the Arctic Cooling MX-4 I was using on the other GPU was still fine
> 
> Likewise, I wouldn't use Kryonaut on new CPUs; I found it dry there as well after 2 months or so, which is strange because that TIM was fine on a 5960X or 3900X but not on a 5950X
> 
> Hope this helps
> 
> Thanks, Jura


Aren't people having great results with Kryonaut Extreme?


----------



## GAN77

jura11 said:


> Gelid pads you can get through their store which is EU based


gelidstore.com
*Shipping Schedules*

_Please note that all goods are shipped from Hong Kong._


----------



## yzonker

Falkentyne said:


> Aren't people having great results with Kryonaut Extreme?


Still working for my card. I just rechecked the block delta and hotspot delta. No change so far after 7 weeks.

BTW, I finally got around to testing something you mentioned before. It's interesting how sensitive the effective clock is to the relative spacing of the V/F curve points; I suspect some confusion I've had in the past when benchmarking stems from this. The effective clock continues to creep up as the curve gets flatter near whatever cutoff/lock voltage is chosen. Seems like it might be a good way to fine-tune, though obviously the performance changes are small at that point.
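A crude way to picture that sensitivity is to treat the effective clock as a time-weighted average between the locked point and the next curve point down: the flatter the spacing, the less each dip costs. All numbers below are hypothetical, not measurements:

```python
# Toy model of effective clock vs. V/F point spacing: if the core spends
# some fraction of samples one curve point below the lock, the average
# ("effective") clock depends on how far below that neighbouring point sits.
# Purely illustrative; real boost behaviour is far more complicated.

def effective_clock(lock_mhz, below_mhz, frac_below=0.3):
    """Time-weighted average clock in MHz."""
    return (1 - frac_below) * lock_mhz + frac_below * below_mhz

print(effective_clock(2070, 2055))  # tight spacing: small average penalty
print(effective_clock(2070, 2025))  # wide spacing: bigger average penalty
```

Under this model, flattening the curve near the lock voltage shrinks the gap to the neighbouring point, which matches the slow upward creep in effective clock described above.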


----------



## Falkentyne

yzonker said:


> Still working for my card. I just rechecked the block delta and hotspot delta. No change so far after 7 weeks.
> 
> BTW I finally got around to testing something you mentioned before. It's interesting how sensitive effective clock is to the relative spacing of the VF curve points. I suspect some confusion I've had in the past when benchmarking stems from this. Effective clock continues to move up slowly as the curve gets flatter near whatever cutoff/lock voltage is chosen. Seems like it might be a good way to fine tune. Obviously small changes in performance at that point though.


I wonder if that's why I sometimes had clocks 50 MHz lower than requested and other times 29 MHz below requested (this is in a static, unchanging scene, e.g. the Fortnite main menu, with an 800W TDP via shunts, which the internal core voltage won't allow past 600W because the NVVDD/MSVDD power limits are capped by the VID and SRAM voltages, 1.069-1.10V). I don't even know what the true internal NVVDD is on a Founders Edition.

I also seem to get different results with effective clocks depending on what the MSVDD VID (SRAM output voltage in HWiNFO) is showing. I was looking carefully at reported power draw.
One time I saw power draw change by 5W instantly (495W to 500W) with NO change whatsoever in NVVDD VID or MSVDD VID. Temps also went up by half a degree when that happened.
I suspect something changed related to internal NVVDD, MSVDD, or even loadline/switching frequency (I don't know, maybe that's hardwired) that didn't register on the two VIDs but did register on power draw. And no, Elmor's EVC2X is read-only on an FE card (you can't write to the MP2888B or something), so I have no ability to change anything.

And why hasn't anyone released an XOC vbios for a 3090 FE? One with all the MSVDD and NVVDD limits removed, so TDP Normalized stops throttling when the board (or chip?) power draw is being limited by the MSVDD or NVVDD cap itself (1.087V, 1.15V??).

Someone has it. It got mentioned on Elmor's Discord; a user there knows who has it, but the person who has it absolutely refuses to give it to anyone (and it's not someone on the channel either).


----------



## ManniX-ITA

yzonker said:


> Still working for my card. I just rechecked the block delta and hotspot delta. No change so far after 7 weeks.


I bought the TFX for the GPU die, just to be safe, since someone reported that the Kryonaut Extreme had also gone dry.

There are a couple of interesting videos from Luumi comparing thermal pastes, plus his re-test of the Gelid GC-Extreme.






From my experience, the paste that needed the least curing is Arctic Silver Ceramique 2.
It's also the best for Peltier cells. Not a top performer, but extremely reliable over time and insanely cheap.






Second best is the Gelid GC-Extreme: almost a top performer and usually dry only around the 2-year mark.
But I've only used the green tube seen in Luumi's first video.


----------



## kryptonfly

Falkentyne said:


> I wonder if that's why I sometimes had clocks 50 MHz lower than requested and other times 29 MHz below requested (this is a static, unchanging scene, e.g. the Fortnite main menu, with an 800W TDP via shunts), when the internal core voltage won't allow going past 600W due to the NVVDD/MSVDD power limits being capped by the VID and SRAM voltages (1.069v-1.10v). And I don't even know what the true internal NVVDD is on a Founders Edition.
> 
> I also seem to get different results with effective clocks depending on what the "MSVDD" VID (SRAM output voltage in HWiNFO) is showing. I was looking carefully at reported power draw.
> One time I saw power draw change by 5W instantly (495W to 500W) with NO change whatsoever in "NVVDD VID" or "MSVDD VID". Temps also went up by half a degree C when that happened.
> I suspect something changed related to internal NVVDD, MSVDD, or even loadline/switching frequency (I don't know, maybe that's hardwired) that didn't register on the two VIDs but did register on power draw. And no, Elmor's EVC2X is read-only on an FE card (you can't write to the MP2888B or whatever), so I have no ability to change anything.
> 
> And why hasn't anyone released an XOC vBIOS for the 3090 FE? One that would have all MSVDD and NVVDD limits removed, so TDP Normalized stops throttling when board (or chip?) power draw is being capped by the MSVDD or NVVDD limit itself (1.087v, 1.15v??).
> 
> Someone has it. It got mentioned on Elmor's discord, a user there knows who has it, but the person who has it absolutely refuses to give it to anyone (and it's not someone on the channel either).


I redid all my shunts with 5 mOhm (10 mOhm before) and I have the same TDP Normalized limit: 100% at ~460W in Port Royal, but my %TDP (shunt) drastically decreased to around 62%, almost 40% lower! With 10 mOhm my %TDP was around 83% for the same OC in Port Royal; with 15 mOhm I unfortunately didn't check at the time, but it would surely be really close to TDP Normalized, as 460W seems to be hitting an internal limit. So I'm wondering if 15 mOhm shunts would be better to "balance" %TDP and TDP Normalized? I think so. Would it be better for stability? Because it's a pain to pass Port Royal. I also shunted the 4 tiny R005s on the back (with fan wires), and the R015, but it did nothing, so I removed them.

I noticed the wires have some resistance, because I first tried 5 mOhm plus wires on the 6 shunts, and magically Misc 0 & 2 were the same (even better than the 7W difference at stock) and pins #1 & #2 were just 5W apart (not 20W), but the PCIe shunt read too low because its wires were tiny compared to the others. %TDP was around 7% higher than TDP Normalized, but I removed the wires + 5 mOhm shunts because I had the same power draw as with 10 mOhm shunts soldered directly (except for the better balance with 5 mOhm and wires). So maybe the contacts are not good enough without wires? I could use wires again, but it's impossible to cut them all to the same length because of the components around, so the resistance would vary a bit.

I checked in "A Plague Tale" (power hungry) the max power draw before hitting TDP Normalized (throttling), and it was around 480W; even with 5 mOhm and +105% PL, the max I saw was 496W (curve 2100MHz @ 937mV, because I'm definitely limited without a voltage curve). So there's no way I burn anything; it's a kind of safety. I also tried +180 manually in Port Royal and it draws around 488W (throttling, obviously). I'm not even able to reach more than 500W with 5 mOhm shunts; I did it before with 10 mOhm, but just for a few seconds.

At stock 390W, %TDP is 106% (105% in Afterburner) and TDP Normalized is 115.8%. I think it's NVVDD that prevents going beyond that. There's a trick to "blind" the voltage regulators, but it's a little hard to do because of the many wires:
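For anyone following the resistor arithmetic in posts like this one: stacking a shunt on top of the stock one puts the two in parallel, and the smaller sensed resistance is what scales the reported %TDP down. A quick sketch of that relationship (my own illustration, assuming the common 5 mOhm stock shunts; the function names are mine, not from any tool):

```python
# Stacking a resistor on a stock shunt puts the two in parallel, so the
# card senses a smaller resistance and under-reports power by that ratio.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two shunts stacked in parallel (mOhm)."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def reported_power(actual_w: float, stock_mohm: float, added_mohm: float) -> float:
    """Power the card reports after the mod; sensing scales with resistance."""
    return actual_w * parallel(stock_mohm, added_mohm) / stock_mohm

STOCK = 5.0  # assumed stock 2512 shunt value, in mOhm

# A real 460 W draw, as reported with different stacked shunt values:
for added in (15.0, 10.0, 5.0):
    print(f"+{added:>4} mOhm stacked -> card sees "
          f"{reported_power(460, STOCK, added):.0f} W")
```

This is why a stacked 5 mOhm halves the reported figure (2.5 mOhm effective), while 15 mOhm only trims it to 75%, keeping %TDP and TDP Normalized closer together.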


----------



## SuperMumrik

jura11 said:


> If you have a Bykski waterblock and backplate, then you will need 1.5mm thermal pads from Gelid; you can try 1mm pads, but I have used the Gelid 1.5mm pads on my Bykski waterblock and backplate
> 
> The stock pads on the Bykski are 1.2-1.25mm
> 
> Hope this helps
> 
> Thanks, Jura


Tnx a lot man! 
Unusual size on the stock pads though 😛


----------



## sepia21

jura11 said:


> Hi there
> 
> What waterblock are you using, are you using Alphacool or EKWB waterblock or Bykski one?
> 
> The Gelid pads you can get through their store, which is EU based, or you can try getting them through Amazon or AliExpress, which is where I got mine, with no issues
> 
> For thermal paste, use something like Noctua NT-H1, Thermalright TFX, ZF-EX or SYY-157; I've used all of these with good success on RTX 3090, 3080 and 3070 cards
> 
> Kryonaut is not the best for these RTX 3xxx series GPUs; mine was literally dry after removing the waterblock. On the other hand, the Arctic Cooling MX-4 I was using on another GPU was still nice
> 
> Likewise, I wouldn't use Kryonaut on new CPUs; I found it had gone dry after 2 months or so there as well, which is strange because that TIM was good on the 5960X and 3900X but not on the 5950X
> 
> Hope this helps
> 
> Thanks, Jura


Thanks again for your help. I'm using an Alphacool block. I just checked Amazon and saw that the Gelid Ultimate is available, but the Extreme is only available in 0.5mm, which isn't the size I should use. Is the Ultimate good for these memory chips? 
Regards, Sep


----------



## Lord of meat

jura11 said:


> Hi there
> 
> Personally I would use Gelid pads, and you should see a decent decrease in VRAM temperatures - at least in my case I did. I've done a few tests, and VRAM temperatures now won't break 60°C (usually I see them at 52-56°C), whereas previously I would see 74-76°C on that GPU
> 
> Hope this helps
> 
> Thanks, Jura


Just finished redoing the pads and water loop. Core is 45.1°C after 15 min of looping Port Royal, memory was 56°C, hot spot 59.7°C.
Two slim 360 rads (one EK, the other a Nemesis), EK block on the 3090 and EK block on the 3950X, D5 res/pump combo.
Need to run it longer to really see what it does, but not bad so far.


----------



## Falkentyne

kryptonfly said:


> I redid all my shunts with 5 mOhm (10 mOhm before) and I have the same TDP Normalized limit: 100% at ~460W in Port Royal, but my %TDP (shunt) drastically decreased to around 62%, almost 40% lower! With 10 mOhm my %TDP was around 83% for the same OC in Port Royal; with 15 mOhm I unfortunately didn't check at the time, but it would surely be really close to TDP Normalized, as 460W seems to be hitting an internal limit. So I'm wondering if 15 mOhm shunts would be better to "balance" %TDP and TDP Normalized? I think so. Would it be better for stability? Because it's a pain to pass Port Royal. I also shunted the 4 tiny R005s on the back (with fan wires), and the R015, but it did nothing, so I removed them.
> 
> I noticed the wires have some resistance, because I first tried 5 mOhm plus wires on the 6 shunts, and magically Misc 0 & 2 were the same (even better than the 7W difference at stock) and pins #1 & #2 were just 5W apart (not 20W), but the PCIe shunt read too low because its wires were tiny compared to the others. %TDP was around 7% higher than TDP Normalized, but I removed the wires + 5 mOhm shunts because I had the same power draw as with 10 mOhm shunts soldered directly (except for the better balance with 5 mOhm and wires). So maybe the contacts are not good enough without wires? I could use wires again, but it's impossible to cut them all to the same length because of the components around, so the resistance would vary a bit.
> 
> I checked in "A Plague Tale" (power hungry) the max power draw before hitting TDP Normalized (throttling), and it was around 480W; even with 5 mOhm and +105% PL, the max I saw was 496W (curve 2100MHz @ 937mV, because I'm definitely limited without a voltage curve). So there's no way I burn anything; it's a kind of safety. I also tried +180 manually in Port Royal and it draws around 488W (throttling, obviously). I'm not even able to reach more than 500W with 5 mOhm shunts; I did it before with 10 mOhm, but just for a few seconds.
> 
> At stock 390W, %TDP is 106% (105% in Afterburner) and TDP Normalized is 115.8%. I think it's NVVDD that prevents going beyond that. There's a trick to "blind" the voltage regulators, but it's a little hard to do because of the many wires:


Normalized won't change unless you use a KP 1kW XOC BIOS, or increase the NVVDD voltage (and MSVDD) with a hardware mod.
You can do that with Elmor's EVC2SX device if your voltage controller supports write access, assuming you know how to find the points to solder the connectors to (the FE cards don't work; the controller seems to be locked to read-only).

There is only a certain amount of power that can be pulled at 1.1v NVVDD (this may be 1.15v internally; you need hardware tools to find out) and 1.081v (internal) MSVDD.
This may be some sort of amps limit, because if you play Quake 2 RTX (this old game has almost no rasterization use), the board will happily pull 700W! Normalized% responds to these two internal voltages, but the exact value isn't shown in any of the HWiNFO rails. The Kingpin cards (with the normal 520W BIOS) have onboard jumpers which will increase the NVVDD and MSVDD voltages (I think it goes up to 1.175v NVVDD and 1.15v MSVDD? I don't know), and that will raise the TDP Normalized limit for those rails. But obviously the board will draw more power and run hotter when you do that.

Do you have any "tiny" 5 mOhm shunts on your board? They look identical to the big shunts, just smaller; those are 1206 package. Some cards have 3 of them, plus one by an RGB connector. You can try soldering a "1206" package 5 mOhm shunt onto the ones not next to the RGB connector and see if that helps. Two of them seem to be "inline" with something (one says Rsense4, 4x NVVDD PH and the other Rsense5, 3x MSVDD), but I can't understand the schematics. This appears to be from a Strix, and the Strix doesn't seem to have the actual 3 mOhm 1206 shunts, yet a shunt-modded Strix (7x 2512 shunts, 5 mOhm) can pull 600W in Time Spy Extreme without hitting the TDP Normalized 100% limit, while all other shunt-modded cards seem to hit it around 520W (except cards using the KP XOC BIOS).


----------



## J7SC

ManniX-ITA said:


> I bought the TFX for the GPU die, just to be sure, since someone reported the Kryonaut Extreme had also gone dry.
> 
> There are a couple of interesting videos from Luumi about thermal paste comparison:
> 
> 
> 
> 
> 
> 
> And re-test of the Gelid GC-Extreme:
> 
> 
> 
> 
> 
> 
> From my experience, the one that needed the least curing is Arctic Silver Ceramique 2.
> It's also the best for Peltier cells; not a top performer, but extremely reliable over time and insanely cheap.
> 
> 
> 
> 
> 
> 
> Second best is the Gelid GC-Extreme: almost a top performer, and it usually goes dry only around the 2-year mark.
> But I've only used the green tube seen in Luumi's first video.


...timely, as I'm building up some more systems this week and I have many of the pastes mentioned ready to go. Re: that 'Gelid jar' outlier, I wonder whether it was a bad batch, bad storage - or even a fake product. It is an unfortunate (and well-reported) fact of life that there are cheap fakes out there on various online marketplaces. With a complex custom water loop and the time it takes to assemble/disassemble, realizing you got dealt a fake after the fact can be a real pain.


----------



## ManniX-ITA

J7SC said:


> It is an unfortunate (and well-reported) fact of life that there are cheap fakes out there on various online marketplaces. With a complex custom water loop and the time it takes to assemble/disassemble, realizing you got dealt a fake after the fact can be a real pain.


True. A lot of bad Gelid product was, and still is, circulating: fakes, plus bad output from the previous factory. 
They recently changed all the packaging and product designs, so it's easy to spot whether it's an old batch or not.
But fakes are fakes... either buy directly or from a decent reseller. Or use something else altogether.


----------



## jura11

Falkentyne said:


> Aren't people having great results with Kryonaut Extreme?


Hi there 

I really can't comment on Kryonaut Extreme; I haven't tried it yet

Hope this helps 

Thanks, Jura


----------



## jura11

sepia21 said:


> Thanks again for your help, I'm using an Alphacool block, I Just checked amazon and I saw that Gelid ultimate is available but extreme is only available 0.5mm which isn't the size that I should use. Is ultimate good for these memory chips?
> Regards, Sep


Hi there 

Just check what size thermal pads Alphacool is using and go from there; the Gelid Ultimate should be okay 

Right now I'm not sure which is the more compressible pad, the Extreme or the Ultimate; the guys over here can confirm that

On my GPUs I have used TP-GP02 pads, which I bought via AliExpress 

Hope this helps 

Thanks, Jura


----------



## ManniX-ITA

jura11 said:


> Right now I'm not sure which is the more compressible pad, the Extreme or the Ultimate; the guys over here can confirm that


The Extreme is the softest.
A lot of people had issues with GPU die contact when using the Ultimate.


----------



## sepia21

jura11 said:


> Hi there
> 
> Just check what size thermal pads Alphacool is using and go from there; the Gelid Ultimate should be okay
> 
> Right now I'm not sure which is the more compressible pad, the Extreme or the Ultimate; the guys over here can confirm that
> 
> On my GPUs I have used TP-GP02 pads, which I bought via AliExpress
> 
> Hope this helps
> 
> Thanks, Jura


I ordered the thermal pads; I will report the results tomorrow. Thanks again for your help


----------



## jura11

ManniX-ITA said:


> The Extreme is the softest.
> A lot of people had issues with GPU die contact and the Ultimate.


I think I used the Extreme pads on my RTX 3090s too

Hope this helps 

Thanks, Jura


----------



## jura11

sepia21 said:


> I ordered the thermal pads; I will report the results tomorrow. Thanks again for your help


If you will be using the Ultimate pads, then please try to pre-squeeze them before placing them on the waterblock and backplate 

Hope this helps 

Thanks, Jura


----------



## jura11

Last night I played modded Skyrim Special Edition with ENB, and Cyberpunk 2077; the highest core temperature I saw in gaming was 38-42°C, VRAM temperatures were 56-60°C, and the GPU hotspot delta was 8-10°C 

I suspect I have a bad mount on the GPU, because previously I saw a max of 36-38°C, although VRAM temperatures were higher 

Hope this helps 

Thanks, Jura


----------



## Lord of meat

jura11 said:


> Last night I played modded Skyrim Special Edition with ENB, and Cyberpunk 2077; the highest core temperature I saw in gaming was 38-42°C, VRAM temperatures were 56-60°C, and the GPU hotspot delta was 8-10°C
> 
> I suspect I have a bad mount on the GPU, because previously I saw a max of 36-38°C, although VRAM temperatures were higher
> 
> Hope this helps
> 
> Thanks, Jura


I flipped the washers on my screws when I had that happen; mine tend to warp, so that might be why it's not pressing right, or I'm just crazy.


----------



## marti69

KedarWolf said:


> What do you peeps get on your 24/7 gaming settings with Nvidia and Afterburner.
> 
> Here is mine below, my screen resolution doesn't matter because 3DMark uses its own.
> 
> I used Time Spy Extreme because GSync and frame limiter settings don't affect it, on both tests the frame rate never gets high enough. My screen refresh rate is 144Hz.
> 
> I'm using the MSI Suprim X Rebar BIOS and have Rebar enabled globally with Nvidia Inspector.
> 
> It says I'm using Windows 10 but it's Windows 11 Enterprise Insider Preview.
> 
> View attachment 2525345
> 
> 
> View attachment 2525337
> 
> 
> View attachment 2525343
> 
> 
> View attachment 2525338
> 
> 
> View attachment 2525339
> 
> 
> View attachment 2525340
> 
> 
> View attachment 2525341
> 
> 
> View attachment 2525342


Use only 1 CCD on the 5950X to get a better score


----------



## sepia21

jura11 said:


> Last night I played modded Skyrim Special Edition with ENB, and Cyberpunk 2077; the highest core temperature I saw in gaming was 38-42°C, VRAM temperatures were 56-60°C, and the GPU hotspot delta was 8-10°C
> 
> I suspect I have a bad mount on the GPU, because previously I saw a max of 36-38°C, although VRAM temperatures were higher
> 
> Hope this helps
> 
> Thanks, Jura


I installed the Gelid Ultimate today. Unfortunately, I bought the wrong size for the backplate and have to return those, but I installed the pads on the other side and got a 20-degree drop in memory temp. As you said, they were not super soft, so I had to press them hard, but after pressing they made good contact with the memory modules. I'm satisfied with the result, but I think if I also replace the thermal pads for the backplate I will reduce it by 5-10 degrees more. 
Thanks again for your help


----------



## yzonker

Maybe I'm just not finding the 3090 in the list, but I can't seem to find the unverified 3090 BIOSes on TechPowerUp now. I had it bookmarked, and that doesn't work either.


----------



## GRABibus

yzonker said:


> Maybe I'm just not finding the 3090 in the list, but I can't seem to find the unverified 3090 BIOSes on TechPowerUp now. I had it bookmarked, and that doesn't work either.


Confirmed.
It's been gone for weeks.


----------



## outofmyheadyo

Does anyone know what thickness of thermal pads the Barrow BS-MSG3090M-PA uses for memory and VRM? I'm not sure if the original pads that come with the card are any good, or whether I should replace them with something better; the VRM and memory pads are the same thickness. There's nothing in the user manual or on the website about the thickness, or anything else about the pads really.


----------



## jura11

outofmyheadyo said:


> Does anyone know what thickness of thermal pads the Barrow BS-MSG3090M-PA uses for memory and VRM? I'm not sure if the original pads that come with the card are any good, or whether I should replace them with something better; the VRM and memory pads are the same thickness. There's nothing in the user manual or on the website about the thickness, or anything else about the pads really.


From what I remember, Barrow uses two types of thermal pads; the grey one is 0.3-0.5mm and the blue one is 1.3-1.5mm, which is what I measured last time. If you have a caliper at home, please measure them and confirm that 

You can always try contacting Barrow; their technical support is quite good and fast 

Whether it's worth replacing them depends on the temperatures. On my Bykski waterblock it was worth it; VRAM temperatures dropped by a good 15-20°C with exactly the same settings, from 76-80°C to 54-60°C, and that's just with the normal Bykski backplate. With an active backplate I think VRAM temperatures wouldn't break 50°C in my PC

Hope this helps 

Thanks, Jura


----------



## geriatricpollywog

Is it possible to use a 3090 with Windows XP as a generic display adapter? Just doing CPU benching.


----------



## yzonker

I finally got around to putting washers under my block screws on my Corsair block. Gained about 1°C; measurable, but nothing significant. I am getting a high-14°C to low-15°C delta, down from high-15 to low-16°C, at 500W. 

My water temp only reads integer values, so the delta varies by about 1°C as the water heats up.


----------



## geriatricpollywog

Hydrocopper + MO-RA3 420 + stock 520w Kingpin bios









I scored 15 976 in Port Royal


Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## outofmyheadyo

yzonker said:


> I finally got around to putting washers under my block screws on my Corsair block. Gained about 1°C; measurable, but nothing significant. I am getting a high-14°C to low-15°C delta, down from high-15 to low-16°C, at 500W.
> 
> My water temp only reads integer values, so the delta varies by about 1°C as the water heats up.


Holy smokes, how many rads u runnin and what about ambient?


----------



## J7SC

0451 said:


> Hydrocopper + MO-RA3 420 + stock 520w Kingpin bios
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 976 in Port Royal
> 
> 
> Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


...grats! Only 24 more points to 16k... cold-boot into Win 10, wait 2 min for Windows to do its housekeeping, and then a fresh 'banzai' run


----------



## geriatricpollywog

J7SC said:


> ...grats! Only 24 more points to 16k... cold-boot into Win 10, wait 2 min for Windows to do its housekeeping, and then a fresh 'banzai' run


It wasn’t a cold boot. I just had the MO-RA on the patio.


----------



## J7SC

0451 said:


> It wasn’t a cold boot. I just had the MO-RA on the patio.
> 
> View attachment 2525995


...I meant 'try a cold boot' for your next (16k) run


----------



## geriatricpollywog

J7SC said:


> ...I meant 'try a cold boot' for your next (16k) run


I noticed that I can't bench back-to-back at +180 core on the Kingpin; it needs to wait a few minutes. There must be some part of the card heating up that doesn't have a temp sensor, because all of my reported temps are uniformly good.


----------



## Lobstar

Just chiming in that Kryonaut Extreme is working great on my 3090. It's hard to work with and messy as hell, but at least it's not liquid metal. In my experience it's best after a few heat cycles.


----------



## outofmyheadyo

jura11 said:


> What I remember, Barrow using two types of thermal pads, grey one is 0.3-0.5mm and blue one is 1.3-1.5mm what I measured last time, if you have home caliper for measuring then use it and please confirm that
> 
> You can always try contact Barrow and their technical support is quite good and fast
> 
> If its worth it to replace it, depending on the temperatures, on my Bykski waterblock was worth it, VRAM temperatures dropped by good 15-20°C with exactly same settings from 76-80°C to 54-60°C that's just with normal Bykski backplate, with active backplate I think VRAM temperatures wouldn't broke 50°C on my system or in my PC
> 
> Hope this helps
> 
> Thanks, Jura


Thanks, I don't have a digital caliper, but my old-school one showed something like that: around 1.5 and 0.5. I sent them an email, but it might be a while before an answer, if any!


----------



## gfunkernaught

Lobstar said:


> Just chiming in that Kryonaut Extreme is working great on my 3090. It's hard to work with and messy as hell, but at least it's not liquid metal. In my experience it's best after a few heat cycles.
> 
> View attachment 2525997


Is the Kryonaut Extreme good for daily use or just good for short-term high heat benchmarks?


----------



## GRABibus

gfunkernaught said:


> Is the Kryonaut Extreme good for daily use or just good for short-term high heat benchmarks?


Good question.
I would like to know whether I'd get temperature benefits by replacing my Conductonaut liquid metal with Kryonaut Extreme for 24/7 use.
I am on the stock cooler.


----------



## Vld

To all looking to buy Gelid pads in the EU: angela.pl. They are based in Poland; I have bought parts from them before, good seller.


----------



## shadow85

Anyone here have a O11 XL Dynamic case and EK FTW3?

I was told by EK that the FTW3 3090 with their waterblock won't fit in that case in horizontal position.

Kind of gutted. Had my build all planned out, ready to buy all the parts.


----------



## Lobstar

gfunkernaught said:


> Is the Kryonaut Extreme good for daily use or just good for short-term high heat benchmarks?


I daily drive it on all my devices. I got that little tub of it so I'm just going nuts with it. I've had no issues.


----------



## yzonker

I swapped to some B-die. There was a discussion over in the 3080 Ti thread about the CPU/memory dependence of the Endwalker benchmark. This gives an idea of the difference.









[Official] NVIDIA RTX 3080 Ti Owner's Club






www.overclock.net


----------



## gfunkernaught

Lobstar said:


> I daily drive it on all my devices. I got that little tub of it so I'm just going nuts with it. I've had no issues.


Have you tried liquid metal on your 3090 before? Just trying to get an idea of the temp differences in your experience. I'm beginning to suspect that LM doesn't compress as well or as evenly as paste.


----------



## ManniX-ITA

0451 said:


> Hydrocopper + MO-RA3 420 + stock 520w Kingpin bios


Can you run it with HT disabled and fewer active cores?
You should probably break 16K easily.


----------



## ManniX-ITA

GRABibus said:


> I would like to know if I will get temperature benefits by replacing my Conductonaut liquid metal with Kryonaut extreme, for 24/7 use.


Someone reported it goes dry after 6 months, like the non-Extreme.
I got a tube of TFX, but on second thought I'll probably buy a tube of KPx for the die.


----------



## geriatricpollywog

ManniX-ITA said:


> Can you run it with HT disabled and fewer active cores?
> You should probably break 16K easily.


Thanks for the tip, but HT off with 6 cores enabled did not make a difference. Should I turn up the frequency? I can bench as high as 5.6GHz on 8 cores with HT off.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)


----------



## ManniX-ITA

0451 said:


> Thanks for the tip, but HT off with 6 cores enabled did not make a difference. Should I turn up the frequency? I can bench as high as 5.6GHz on 8 cores with HT off.


Yes, try it.
At least on AMD, HT off with 4 cores is a bit better.


----------



## J7SC

0451 said:


> Thanks for the tip, but HT off with 6 cores enabled did not make a difference. Should I turn up the frequency? I can bench as high as 5.6GHz on 8 cores with HT off.
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)


...in addition to frequencies, also look at any further optimization of your system memory, as PR is sensitive to system RAM settings.


----------



## Lobstar

gfunkernaught said:


> Have you tried liquid metal on your 3090 before? Just trying to get an idea of the temp differences in your experience. I'm beginning to suspect that LM doesn't compress as well or as evenly as paste.


I used it in a previous build with my FE 1080 Ti and my delidded 6700K. I chose not to use it this time around, since replacement hardware is so hard to come by. I find a lot of value in reduced maintenance nowadays, so that was another reason to avoid it; having to replace it every year or sooner is just a pain in the ass. Also, there was a bunch of surface-mount stuff around the die that I was worried about getting LM on.


----------



## GRABibus

Lobstar said:


> I used it in a previous build with my FE 1080 Ti and my delidded 6700K. I chose not to use it this time around, since replacement hardware is so hard to come by. I find a lot of value in reduced maintenance nowadays, so that was another reason to avoid it; having to replace it every year or sooner is just a pain in the ass. Also, there was a bunch of surface-mount stuff around the die that I was worried about getting LM on.


You don't need to replace LM every year.
The more you change it, the more layers you add and the less it performs.






At 8:44 there's an important comment about LM hardening.


----------



## geriatricpollywog

GRABibus said:


> You don't need to replace LM every year.
> The more you change it, the more layers you add and the less it performs.
> 
> 
> 
> 
> 
> 
> At 8:44 there's an important comment about LM hardening.


Layers? Does that mean you can’t completely clean off LM from the silicon and cold plate?


----------



## GRABibus

0451 said:


> Layers? Does that mean you can’t completely clean off LM from the silicon and cold plate?


In the video it concerns layers on copper heatsinks.
No worry about LM hardening in this case.


----------



## gfunkernaught

Right, I'm using a nickel-plated block. I'm concerned about compression with LM; I'm thinking it doesn't compress or spread as well as paste. I had to put in an RMA for the block's RGB, and I want to replace it, but that requires removing the block from the GPU, so that will be an opportunity to repaste/remount. I want to try paste again - not because of the risk of LM getting on components, but because of the compression.


----------



## nrpeyton

*A question for all of you GPU experts... any electronics people (printed circuit boards)? Or BIOS experts?*

I have three EVGA RTX 3060s which are all exhibiting behaviour that is baffling me - and I need to know whether RTX 3090s would behave the same way if I followed the same method but used 3090s instead:



The script/method (with background):

Normally when you under-volt a computer, you are using stability headroom to reduce voltage and thereby decrease current & power. (This is achieved by keeping the *SAME* clock speed while reducing voltage,) meaning *NO* loss of performance or speed is incurred.

Obviously, you can't keep reducing the voltage forever; once you run out of stability headroom, any further voltage reduction requires a simultaneous clock speed reduction as well. Ignore that, and the CPU or GPU will be unstable or even crash.

So that's how it normally works. My question is this (because I am baffled):
Why, when the core clock is *LOCKED* and I lower the voltage within the stability threshold (for that clock), does performance*/*throughput decrease?

What I am seeing is not physically possible - unless the driver _is_ secretly reducing the clock in a way that GPU-Z and MSI Afterburner can't detect?

Lastly, the application is very memory dependent (a lot more so than core dependent). But the memory clock speed is also locked. So is there any reason a 3090 would behave differently from the [normal] way overclockers have long been familiar with? Is this behaviour true for the entire 30-series lineup? (Technical/scientific reasons only please) - thank you.
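The undervolting premise described above is just first-order CMOS power scaling (dynamic power goes roughly with V² × f), which is why dropping voltage at a fixed clock should cut power with no nominal speed loss. A quick sketch of that arithmetic (illustrative numbers, not measurements from these cards):

```python
# First-order dynamic power model: P scales with V^2 * f.
# Undervolting at a locked clock should therefore reduce power while
# keeping the nominal speed - the premise being questioned above.

def dynamic_power(p_ref_w: float, v_ref: float, f_ref_mhz: float,
                  v: float, f_mhz: float) -> float:
    """Scale a reference power figure to a new voltage/clock operating point."""
    return p_ref_w * (v / v_ref) ** 2 * (f_mhz / f_ref_mhz)

# Hypothetical card: 200 W at 1.05 V / 1900 MHz, undervolted to 0.90 V
# at the SAME locked clock:
p_uv = dynamic_power(200, 1.05, 1900, 0.90, 1900)
print(f"{p_uv:.0f} W at 0.90 V, same 1900 MHz")  # ~147 W
```

If throughput still drops under those conditions, the clock the silicon is actually delivering must be lower than the clock being requested, which is the puzzle posed here.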


----------



## Falkentyne

nrpeyton said:


> *A question for all of you GPU experts... any electronics people (printed circuit boards)? Or BIOS experts?*
> 
> I have three EVGA RTX 3060s which are all exhibiting behaviour that is baffling me - and I need to know whether RTX 3090s would behave the same way if I followed the same method but used 3090s instead:
> 
> 
> 
> The script/method (with background):
> 
> Normally when you under-volt a computer, you are using stability headroom to reduce voltage and thereby decrease current & power. (This is achieved by keeping the *SAME* clock speed while reducing voltage,) meaning *NO* loss of performance or speed is incurred.
> 
> Obviously, you can't keep reducing the voltage forever; once you run out of stability headroom, any further voltage reduction requires a simultaneous clock speed reduction as well. Ignore that, and the CPU or GPU will be unstable or even crash.
> 
> So that's how it normally works. My question is this (because I am baffled):
> Why, when the core clock is *LOCKED* and I lower the voltage within the stability threshold (for that clock), does performance*/*throughput decrease?
> 
> What I am seeing is not physically possible - unless the driver _is_ secretly reducing the clock in a way that GPU-Z and MSI Afterburner can't detect?
> 
> Lastly, the application is very memory dependent (a lot more so than core dependent). But the memory clock speed is also locked. So is there any reason a 3090 would behave differently from the [normal] way overclockers have long been familiar with? Is this behaviour true for the entire 30-series lineup? (Technical/scientific reasons only please) - thank you.


Look at effective clocks in HWiNFO64. MSI AB can be set to show effective (PLL) clocks by editing a cfg file.
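For context on why the effective clock is the number to watch: when voltage gets marginal, modern GPUs stretch clock cycles instead of crashing, so the requested clock stays put while the average delivered clock, and with it throughput, quietly drops. A toy model of that effect (my own illustration; the stretch fraction and ratio are hypothetical):

```python
# Toy clock-stretching model: a fraction of cycles run at a slower,
# "stretched" rate, dragging the average (effective) clock below the
# requested clock even though monitoring shows the requested value.

def effective_clock(requested_mhz: float, stretched_fraction: float,
                    stretch_ratio: float = 0.5) -> float:
    """Average clock when a fraction of cycles run at a stretched (slower) rate."""
    return requested_mhz * ((1 - stretched_fraction)
                            + stretched_fraction * stretch_ratio)

req = 1900.0
for frac in (0.0, 0.05, 0.15):
    eff = effective_clock(req, frac)
    print(f"{frac:>4.0%} stretched cycles -> {eff:.0f} MHz effective "
          f"({eff / req:.1%} of requested)")
```

Even a few percent of stretched cycles lowers the effective clock measurably, which would explain losing throughput while GPU-Z and Afterburner still report the locked, requested clock.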


----------



## lolhaxz

gfunkernaught said:


> Right, I'm using a nickel-plated block. I'm concerned about compression with LM; I'm thinking it doesn't compress or spread as well as paste. I had to put in an RMA for the block's RGB, and I want to replace it, but that requires removing the block from the GPU, so that will be an opportunity to repaste/remount. I want to try paste again - not because of the risk of LM getting on components, but because of the compression.


I did this (LM + lapping) back in March and have not taken the card apart since... zero change in performance from when I first applied it... this is my usual daily system.

Don't waste your time - the difference is 2-3 degrees at 500-600W... Even without the lapping, the warranty is now void with LM (not that I personally care)... but I would recommend you simply use the best paste you can (i.e. Kryonaut Extreme, the pink stuff)... In terms of application there's nothing special; I didn't even nail-polish the contacts... it's only an issue if you use way too much.









Original paste imprint bottom right, from Hydronaut (IIRC)... I later changed from Hydronaut to Kryonaut Extreme, and this had a 2-3C impact in and of itself.
Test fitting copper shims (0.6mm) cut with aviation snips, bottom left...
Lapping the die across the top.
Copper RAM shims are worth it tho... 65C memory max - two choices here: either the extreme pads with no shims, or copper plate + EK pads = both about the same from looking at others' results... it's just that the EK pads are crap I guess, and the copper shim helps spread the heat across the pad.
Did not lap the EK waterblock because it was cumbersome - but it would probably help, considering the sloppiness of EK these days in terms of machining marks.
And no, the shims won't go anywhere: A) the pad is compressing them and B) the paste has significant surface tension on the copper shims; it's practically impossible to slide them using sensible force unless it's a watery paste.
The backplate is the standard EK backplate with a low-profile CPU block bolted on (not a proper active backplate).
CPUs, on the other hand... although as we know the coolers tend to be matched, it's only really an "issue" when you DD.









Third picture: fully lapped, with a fluorescent tube reflection.
Direct die @ 320W Prime95, 21C water, 65C-ish max at 1.32v (5.4GHz) - good luck doing that without DD.
Also no issues with LM on copper after 6 months (yet, at least).


----------



## des2k...

lolhaxz said:


> I did this (LM + Lap) back in March and have not taken the card apart since... zero change in performance from when I first applied... my usual daily system.
> 
> Don't waste your time - the difference is 2-3 degrees at 500-600W... Even without the lapping, warranty is now void with LM (not that I personally care) .. but I would recommend you simply use the best paste you can (ie kryonaut extreme, the pink stuff)... but in terms of application, there's nothing special.. didn't even nail polish contacts.. only an issue if you use way too much.
> View attachment 2526230
> 
> 
> Original paste imprint bottom right from hydronaut (IRRC) .. I later changed from Hydronaut to Kryonaut extreme, this had 2-3C impact in of itself.
> Test fitting copper shims (0.6mm) cut with aviation snips, bottom left...
> Lapping die across top.
> Copper RAM shims are worth it tho... 65C memory max - two choices here, either the extreme pads with no shims or copper plate + EK pads = both about the same from looking at others results... it's just that the EK pads are crap I guess and the copper shim helps spread the heat across the pad.
> Did not lap the EK waterblock because it was cumbersome - but it would probably help considering the sloppyness of EK these days in terms of machine marks.
> And no, the shims won't go anywhere, A) the pad is compressing them and B) the paste has significant surface tension on the copper shims; it's practically impossible to slide them using sensible force unless it's a watery paste
> CPU's on the other hand.... although as we know the coolers tend to be matched, only really an "issue" when you DD.
> View attachment 2526231
> 
> 
> Third picture fully lapped with a fluro tube reflection.
> Direct Die @ 320W Prime95, 21C water, 65C-ish max at 1.32v (5.4GHz) - good luck doing it without DD.
> Also no issues with LM on copper after 6 months (yet atleast)


EK GPU blocks are terrible. I had to lap mine, with the 3090 die being so big. My EK CPU block was also not flat.

While not perfect, I get a very good delta now. I would say lapping the die + finishing the block could save another 2C on the delta with normal paste.


----------



## gfunkernaught

2-3C at 500-600W can be a meaningful difference in terms of stability, like when your voltage is good but temps are too high for a given clock. I'm trying to keep the GPU temp as close to 40C as I can, even at 600W+. As @Falkentyne said, the EK blocks aren't that good. I asked EK why their blocks lose efficiency past 500W and they said 500W is already really high. I do believe they aren't designing the Trio blocks, for example, with high wattage in mind, so I can't blame them for that.


----------



## i_max2k2

Hi all,

I installed an EK Quantum Vector water block + active backplate on my Strix 3090, and I realized after that insane install that I can't access the bios toggle switch. Is this a known issue, and is there any way to resolve it? The temps are pretty good though; I used Thermalright's pads for the memory and the rest were the EK-supplied ones. With mining the temps don't go over 62C.

Anyway, I really wanted to try the KP 1kW bios to see how far the card would go, but I'm a little scared that if something goes wrong I would have to take it all apart again to flip the switch.


----------



## ManniX-ITA

i_max2k2 said:


> and I realized after that insane install, I can’t access the bios toggle switch,


This is something I was wondering about myself as well... thanks for sharing.
I don't have much choice here in Europe other than EK.
But I'm considering waiting for the Optimus to be available again and importing it from the USA.
I'm only reading terrible things about EK blocks, again and again.


----------



## jura11

ManniX-ITA said:


> This is something I was wondering myself as well... thanks for sharing.
> I don't have much choice here in Europe other than EK.
> But I'm considering waiting for the Optimus being available again and import it from USA.
> I'm only reading terrible things about EK blocks, again and again.


For the money, a Bykski waterblock is one of the best-value blocks you can get; performance-wise it's okay, and delivery from China or AliExpress is okay too (you should get it delivered within 10 days).

Optimus I would say makes one of the best waterblocks; it's just that the price is a bit high for my liking.

Hope this helps.

Thanks, Jura


----------



## lolhaxz

gfunkernaught said:


> 2-3C at 500-600W can be a meaningful difference in terms of stability, like when your voltage is good but temps are too high for a given clock. I'm trying to keep the GPU temp as close to 40C as I can, even at 600W+. As @Falkentyne said, the EK blocks aren't that good. I asked EK why their blocks lose efficiency past 500W and they said 500W is already really high. I do believe they aren't designing the Trio blocks, for example, with high wattage in mind, so I can't blame them for that.


It still all ultimately comes down to a test of your cooling... unless you have a couple of grids of 1080 worth of radiator space, you're pissing in the wind trying to keep the temperatures low anyway.

Ambient water being about 19-20C

Normal radiator profile which keeps the fans around 1000rpm (almost inaudible) - 29C water (9C rise, after 10+ mins) - 46C GPU









100% fans on radiators - 25C water (5C rise, after 10+ mins) - 41C GPU









Hard mode (a mind-boggling 900W+):









And at the end of the day - even 5C is only going to get you 15MHz closer to the bleeding edge... which will have zero impact.

In my particular case, with an approx 16C delta (the delta drops as wattage goes down), you need to keep the water at or below 24C to maintain 40C, and 3x360mm is insufficient. Let's not forget the room must also be in the same ballpark (i.e. 20C).

What I have been meaning to do for some time now is build a MODERN small phase-change system with an on-demand/realtime smart feedback loop, i.e. one that uses a variable-rate compressor instead of on/off, with no real intention of going particularly far below ambient... this, IMO, is the real solution - do away entirely with radiators, assuming the power draw of these devices stays high. On my list of things to do.
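The loop arithmetic above is simple enough to sanity-check in a few lines. A minimal sketch (function names are mine; the block delta and water-rise figures are the ones quoted in this post, and real deltas vary with wattage):

```python
def max_water_temp(target_gpu_c, block_delta_c):
    """Highest coolant temperature that still holds the GPU at target."""
    return target_gpu_c - block_delta_c

def gpu_temp(ambient_c, water_rise_c, block_delta_c):
    """GPU core temp ~= ambient + (water over ambient) + (GPU over water)."""
    return ambient_c + water_rise_c + block_delta_c

# ~16C GPU-over-water delta at high wattage, 20C room:
print(max_water_temp(40, 16))  # -> 24 (water must stay at/below 24C for a 40C core)
print(gpu_temp(20, 9, 16))     # -> 45 (quiet fans, 9C water rise; post measured 46C)
print(gpu_temp(20, 5, 16))     # -> 41 (100% fans, 5C water rise; matches the post)
```

The point falls out of the addition: with the block delta and room temperature fixed, fan speed only moves the middle term, so there is a hard floor on GPU temperature no matter how much radiator you add.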


----------



## gfunkernaught

lolhaxz said:


> It still all ultimately comes down to a test of your cooling... unless you have a couple of grids of 1080 worth of radiator space, you're pissing in the wind trying to keep the temperatures low anyway.
> 
> Ambient water being about 19-20C
> 
> Normal radiator profile which keeps the fans around 1000rpm (almost inaudible) - 29C water (9C rise, after 10+ mins) - 46C GPU
> View attachment 2526299
> 
> 
> 100% fan's on radiators - 25C water (5C rise, after 10+ mins) - 41C GPU
> View attachment 2526300
> 
> 
> Hard mode: (a mind boggling 900W+)
> View attachment 2526301
> 
> 
> And at the end of the day - even 5C is only going to get you 15MHz closer to the bleeding edge... which.. will have zero impact


I think the last pic should be "Nightmare mode". I haven't tested with fans at 100% yet, only about 60-75%, so 1300-1500rpm; not silent, but that doesn't matter when I game with 5.1 speakers or headphones. But I try not to go anywhere near 46C at 600W, even if things are stable. At 46C the back of the card gets really hot, and I don't want the VRMs to suffer from that heat. I'd like to keep this 3090 for as long as possible. My daily OC now is in the range of [email protected] The clock will fluctuate based on temps and load, but never below 2100MHz, which I'm fine with. That thirst for the bleeding edge, at least for me, is keeping that 2100MHz. If I see 2095MHz it kind of bothers me.


----------



## KedarWolf

Does anyone else notice with the latest drivers the Time Spy CPU test has tanked?

I get well under 14000 now.


----------



## Falkentyne

KedarWolf said:


> Does anyone else notice with the latest drivers the Time Spy CPU test has tanked?
> 
> I get well under 14000 now.


Scores mean nothing anyway, unless you're competing for world records.


----------



## KedarWolf

KedarWolf said:


> Does anyone else notice with the latest drivers the Time Spy CPU test has tanked?
> 
> I get well under 14000 now.


It was my VBIOS. Was using the Suprim X BIOS. Went back to the 100W Rebar, everything is back to normal.


----------



## i_max2k2

I'm going to try a few different bios on my Strix. I wanted to see which one would work best with a watercooled card (front & back); do any Strix owners have experience to share? I noticed that the KP 1000W bios on TPU had two versions, one from 11/20 and one from 03/21. The 520W bios from EVGA also has similar timestamps for its two versions; does anyone know what the differences between the two are?

Apparently there is another 1000W bios called XOC or something. I've been trying to read through this thread, but at almost 1000 pages I'm not able to determine who had what success with these bios on a Strix. My goal is to get a decent OC for everyday gaming. I'm going to do some mining the rest of the time to recover some costs, and those clocks will be controlled from the miner itself; with the stock bios the card draws more power than I think some other 3090s do to reach the same hash rate. I was able to get the memory junction temps down to 62C on a ~1550ish memory OC @ 1200 core (37C); I believe in gaming it hasn't gone beyond 56C, and the core doesn't go past 42C. The stock bios goes to 480W, so a 600W pull shouldn't get temps overly high either, I believe.

I don't think you can do a ReBAR update on an X470? I have a 2700X for now but will be upgrading soon to a 5800X; I am not sure if on the CH7 we can get the ReBAR update with a 5000-series CPU. Anyway, the rest of the system is in my sig.

Thanks again.


----------



## hayame

Finally felt brave enough to try the Kingpin 1000W bios on my TUF OC. With 70% on the power slider, I managed to hold a good "2GHz" at .925v and crack a 10k GPU score in Time Spy Extreme.

I'd like to see what max clocks this card can achieve on the stock air cooler/paste/pads, but I was concerned with how HWiNFO64 was reporting the power draw between the 8-pins, as shown here:










I'm not sure how accurately it's reporting these power consumption figures, and I was wondering if anyone could give any insight into how the card decides to pull power from the two 8-pins available on my card - whether it's something the Kingpin 1000W vbios does to 2x8-pin cards, or a limitation of this card in particular. I don't want to try to push more wattage unless it would pull power evenly through both 8-pins (RM850x with two separate CableMod Pro series 8-pin PCI-e cables).

That being said though, I did notice two things: 
1. In the second gpu test in time spy extreme, the gpu clock speeds would dip as low as 1900mhz and that's where the "Performance Limit - Power" changes to "Yes" with a 70% power limit in msi afterburner
2. "GPU Effective Clock" peaks at 1977mhz 😔

thanks in advance


----------



## des2k...

hayame said:


> Finally felt brave enough to try the kingpin 1000w bios on my TUF OC. With 70% on the power slider, I manage to crack a good "2ghz" at .925v to crack 10k gpu score on time spy extreme.
> 
> I'd try to see what the max clocks this card can achieve on stock air cooler/paste/pads but I was concerned with how hwinfo64 was reporting the power draw between the 8-pins as shown here:
> 
> View attachment 2526346
> 
> 
> I'm not sure how accurate it's reporting these power consumption figures and was wondering if anyone could give any insight on how the card decides to pull power from the two 8-pins that are available on my card, whether it's a vbios thing that the kingpin 1000w does to 2x8pin cards, or if it's a limitation of this card in particular. I don't want to try and push more wattage unless it would evenly pull power through both 8-pins (rm850x with two separate cablemod pro series 8-pin pci-e cables)
> 
> That being said though, I did notice two things:
> 1. In the second gpu test in time spy extreme, the gpu clock speeds would dip as low as 1900mhz and that's where the "Performance Limit - Power" changes to "Yes" with a 70% power limit in msi afterburner
> 2. "GPU Effective Clock" peaks at 1977mhz 😔
> 
> thanks in advance


70% tdp, that's just 460w on 2x8pin
100% tdp, should be 660w max


----------



## yzonker

des2k... said:


> 70% tdp, that's just 460w on 2x8pin
> 100% tdp, should be 660w max


I'm not sure going more is a good idea with the cooling that card has though. Looks like it hit 74C core and 90C mem.


----------



## Falkentyne

hayame said:


> Finally felt brave enough to try the kingpin 1000w bios on my TUF OC. With 70% on the power slider, I manage to crack a good "2ghz" at .925v to crack 10k gpu score on time spy extreme.
> 
> I'd try to see what the max clocks this card can achieve on stock air cooler/paste/pads but I was concerned with how hwinfo64 was reporting the power draw between the 8-pins as shown here:
> 
> View attachment 2526346
> 
> 
> I'm not sure how accurate it's reporting these power consumption figures and was wondering if anyone could give any insight on how the card decides to pull power from the two 8-pins that are available on my card, whether it's a vbios thing that the kingpin 1000w does to 2x8pin cards, or if it's a limitation of this card in particular. I don't want to try and push more wattage unless it would evenly pull power through both 8-pins (rm850x with two separate cablemod pro series 8-pin pci-e cables)
> 
> That being said though, I did notice two things:
> 1. In the second gpu test in time spy extreme, the gpu clock speeds would dip as low as 1900mhz and that's where the "Performance Limit - Power" changes to "Yes" with a 70% power limit in msi afterburner
> 2. "GPU Effective Clock" peaks at 1977mhz 😔
> 
> thanks in advance


If you're using 16 AWG PCIE pin cables, you may be able to get away with 100% PL (660W) for one single run but watch the temps carefully. Maybe try 90% first and check carefully. You seem to be on air cooling.
I would not go above 75% PL if you are on 18 AWG PCIE cables.

Also, 8-pin #3 is duplicated from 8-pin #1 because your card doesn't have a third 8-pin, and that vbios has three SRC (SRC 1/2/3) power limits (I believe set at 400W each) which are linked to the corresponding 8-pins, so it has to get the missing SRC rail from somewhere. The "missing" rail gets added to your total TDP, so you lose 33% of the real power of what you set on the slider.

The NVVDD "Output" power rails can be trusted, though.
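As a back-of-the-envelope check of that 33% phantom-power effect, assuming the three SRC budgets are equal and the mirrored rail is pure double-counting (the function name and the flat 1000W figure are illustrative, not from EVGA documentation):

```python
def kp1000_2x8pin_limit(slider_percent, bios_tdp_w=1000.0):
    """Approximate real power ceiling for a 3x8-pin vbios flashed onto a
    2x8-pin card: the vbios budgets three roughly equal SRC rails, and
    the 'missing' third 8-pin is mirrored from pin #1, so about one
    third of the slider setting is phantom power.
    """
    return slider_percent / 100.0 * bios_tdp_w * (2.0 / 3.0)

print(round(kp1000_2x8pin_limit(70)))   # -> 467, close to the ~460W quoted above
print(round(kp1000_2x8pin_limit(100)))  # -> 667, the oft-quoted ~660W ceiling
```

This lines up with des2k's numbers (70% -> ~460W, 100% -> ~660W); treat it as a rule of thumb, since the real SRC split and PCIe-slot contribution vary per card.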


----------



## SoldierRBT

16109 PR 600W 50C avg








I scored 16 109 in Port Royal


Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## GRABibus

SoldierRBT said:


> 16109 PR 600W 50C avg
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 109 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


We're missing some GPU-Z or HWiNFO screenshots


----------



## SoldierRBT

GRABibus said:


> We miss some GPU-Z or HWInfo screenshots


Sorry, I didn't have GPU-Z open this time. I did check MSI Afterburner and it was 609W peak, 580-590W through the run and 51C max.


----------



## hayame

des2k... said:


> 70% tdp, that's just 460w on 2x8pin
> 100% tdp, should be 660w max





yzonker said:


> I'm not sure going more is a good idea with the cooling that card has though. Looks like it hit 74C core and 90C mem.


I've put off changing out the thermal pads and paste, as I was always worried about breaking this card, given how hard they are to find and the markup over the original MSRP.



Falkentyne said:


> If you're using 16 AWG PCIE pin cables, you may be able to get away with 100% PL (660W) for one single run but watch the temps carefully. Maybe try 90% first and check carefully. You seem to be on air cooling.
> I would not go above 75% PL if you are on 18 AWG PCIE cables.
> 
> Also 8 pin #3 is duplicated from 8 pin#1 because your card doesn't have a third 8 pin, and that vbios has three SRC (SRC 1/2/3) powerlimits (I believe set at 400W each) which are linked to the corresponding 8 pins, so it has to get the missing SRC rail from somewhere. So the "missing" rail gets added to your total TDP, so you lose 33% real power of what you set on the slider.
> 
> The NVVDD "Output" power rails can be trusted, though.


From the different information about CableMod Pro series cabling that I've read: it's most likely 18AWG, and definitely not 20AWG or 16AWG (sadly). One of their reps also says the Pro series uses thicker insulation compared to their regular lineup.

Y'all have provided such wonderful information. Knowing that 100% PL is 660W, I'm more tempted to change out the paste/pads and bust out my old RM1000x and its stock cables just for one run at 100% PL, to see the limits of air cooling. Thanks


----------



## Falkentyne

hayame said:


> I've put off changing out the thermal pads and pastes as I was always worried to break this card with how hard they are to find, and the markup from original msrp it used to be.
> 
> 
> 
> With different information about CableMod Pro series cabling that I've read: it's at most likely 18awg, and definitely not 20awg or 16awg (sadly). One of their reps also say their Pro series use thicker insulation compared to their regular lineup.
> 
> Y'all have provided such wonderful information. Knowing that 100% PL is 660W, I'm more tempted to change out the paste/pads and bust bust out my old rm1000x and it's stock cables just for one run at 100% PL just to see the limits of air cooling. Thanks


I've done this on my shunt-modded 3090 FE. I can't use the Kingpin XOC vbios, which removes current/power limitations from the NVVDD and MSVDD rails directly (this is not related to shunt mods at all, which only reduce/remove the individual 8-pin, PCIE slot, MVDDC, power-plane SRC chip, and GPU chip power-rail limits by making the card report it's consuming less power on those rails), since I can't even flash it (and there's no SOP8 bios chip on the card for hardware flashing, it's UDFN-8, so recovery would be impossible unless a blind flash to recover from a brick would work with an iGPU). So even though I have a "theoretical" 800W TDP, I can't get anywhere close to it, because I end up hitting the MSVDD and NVVDD limits (these report to TDP Normalized %) and throttle anywhere between 510W and 580W, depending on what I'm running (Quake 2 RTX can exceed 650W). Path of Exile can sometimes reach 590W but hits a limit on the voltage rails.

At 600W, I can barely keep the card under 84C. And that's with Gelid Extreme thermal pads on the front side, Gelid Ultimate on the backside VRAM + heatsink+60mm Noctua fan on the backplate, and Thermalright TFX paste on the core. I have the Seasonic 16 AWG Micro-fit 3.0 cable (Seasonic PX-1000 PSU).


----------



## i_max2k2

Falkentyne said:


> I've done this on my shunt modded 3090 FE. I can't use the Kingpin XOC vbios, which removes current/power limitations from the NVVDD and MSVDD Rails directly (this is not related to shunt mods at all, which just reduce/remove individual 8 pins, PCIE Slot, MVDDC, Power Plane SRC chip, and GPU Chip Power rail power limits only by making the card report it's consuming less power on all of these rails), since I can't even flash it (and there's no SOP8 bios chip on the card for hardware flashing, it's UDFN-8, so recovery would be impossible unless blind flash to recover from a brick would work with an iGPU). So even though I have a "theoretical" 800W TDP, I can't get anywhere close to there because I end up hitting the MSVDD and NVVDD limits (these report to TDP Normalized %) and throttle anywhere between 510W to 580W, depending on what I'm running. (Quake 2 RTX can exceed 650W). Path of Exile can sometimes reach 590W but hits a rail limit on the voltage rails.
> 
> At 600W, I can barely keep the card under 84C. And that's with Gelid Extreme thermal pads on the front side, Gelid Ultimate on the backside VRAM + heatsink+60mm Noctua fan on the backplate, and Thermalright TFX paste on the core. I have the Seasonic 16 AWG Micro-fit 3.0 cable (Seasonic PX-1000 PSU).


After flashing a non-KP/EVGA card with the KP 1k bios, does the Classified tool work with the card? If so, can core/memory be put into a lower p-state for reduced power use? Just wondering if the overall card power would go up using the KP bios.


----------



## gfunkernaught

SoldierRBT said:


> 16109 PR 600W 50C avg
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 109 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Which 3090 did you use? Also what voltage?


----------



## yzonker

i_max2k2 said:


> After flashing a non KP/EVGA card with the KP 1k bios, does the classified tool work with the card? if so can core/memory be put into p-state for reduced power use? just wondering if the overall card power would go up using the KP bios.


Classified tool doesn't work, but you can set the mem offset to -251 or less and the card will idle down including the mem.


----------



## i_max2k2

yzonker said:


> Classified tool doesn't work, but you can set the mem offset to -251 or less and the card will idle down including the mem.


Thank you, I was happy to see it load, but it won't take any settings. I was wondering how do we change the power state for memory from nv inspector?

Edit: nvm found it, in global profile common.


----------



## SoldierRBT

gfunkernaught said:


> Which 3090 did you use? Also what voltage?


 3090 KPE HC


----------



## nrpeyton

lolhaxz said:


> View attachment 2526230
> 
> 
> 
> Test fitting copper shims (0.6mm) cut with aviation snips, bottom left...
> Copper RAM shims are worth it tho... 65C memory max - two choices here, either the extreme pads with no shims or copper plate + EK pads = both about the same from looking at others results... it's just that the EK pads are crap I guess and the copper shim helps spread the heat across the pad.


-----------------------------------------------------------

Good post - very interesting. I was into all this kind of stuff that you're doing a few years ago, and like you, was addicted to getting those temps as low as possible and seeing how many extra drops of performance I could squeeze out (sometimes it was like trying to draw blood from a stone) lol.

Also tried the copper shims in an effort to lower memory temps (before memory temps were even seen as a problem - just for the extra OC headroom). This was all on the 10 series.

Anyway, after spending days trying hundreds of different combinations of copper shims (and making lots of comparisons), I ultimately realised that it was always impossible to get an ABSOLUTELY PERFECT balance/fit with the block with so many points of metal contact (basically going from 1 [the GPU core] to 4 or more with the core plus each memory row - or even 12 if you did each GDDRx memory chip individually). Temps would get better at certain points, yes, but always worse at others (including the GPU). No matter how perfect (or even how CLOSE to perfect) you think you have got it, it is still never going to be flat enough. I suppose you're now thinking, "well, if I can get it close enough it might still be possible to get it slightly better than pads - or maybe someone else might even be able to get slightly better than me/you". But in truth that's a bit too much wishful thinking.

The BEST thing which I found worked better than ANY other method was the opposite of what you'd expect... it is easy to think "the thinnest pads that fit will give the best performance" - but that's entirely NOT the case. What I found is that the best way to improve temps under those pads is to go with THICKER pads. I'll explain why: when the block makes contact with the GPU core, the distance between the top of the memory chip and the block is set (locked), i.e. it is decided by the size/shape of the block relative to the GPU core, not relative to the memory chips. The distance between the memory chip and block can't/won't change. It's impossible. Whether a pad is thick or thin doesn't matter; that spacing stays exactly the same. (Think about it: if it didn't, then it would affect your core GPU temp.)

So, the spacing (or distance) between memory and block is the same regardless of what you do. All you CAN do is make the heat transfer over that distance as effective as possible, and you do that by concentrating on DENSITY. The pads will compress to fit any space. That's also how you find good pads, by the way... it's not the make or manufacturer which determines how good or bad they are - it's a simple measure of (1) how thick they are, and (2) their maximum compression ratio. So look for the THICKEST possible pads with the BEST possible compression ratio. (It is not usually an issue, but just do a quick final check that the thickness at full compression isn't affecting your core GPU temp - to do that you need something which can measure the temp behind the GPU core (on the card's naked back) the moment you power the system on. If it checks out okay, power off and finish up.)

I was able to improve my memory temps by 20C just by using overly thick pads with really good compression ratios. They were also NOT necessarily the most expensive pads. You probably already know how easy it is to source all the weird and wonderful tools and materials for this stuff off eBay; the ones I used back in 2017 were less than 10 USD. Anyway, good luck.
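The physics behind the fixed-gap argument above is plain conduction: the temperature rise across a pad is dT = P·t/(k·A), and since the gap t is locked by the block and die, the only lever left is the pad's effective conductivity k, which compression (density) improves. A sketch with hypothetical numbers (the function, the 3 W per chip, and both k values are illustrative assumptions, not measured pad specs):

```python
def pad_delta_t(power_w, gap_mm, k_w_mk, area_mm2):
    """Temperature rise across a thermal pad squeezed into a fixed gap.

    One-dimensional conduction: dT = P * t / (k * A). The gap t is set
    by the block and die, so only k (density/compression) can change.
    """
    t_m = gap_mm / 1000.0        # mm -> m
    area_m2 = area_mm2 / 1e6     # mm^2 -> m^2
    return power_w * t_m / (k_w_mk * area_m2)

# Hypothetical GDDR chip: 3 W through a 1.0 mm gap over a 14x12 mm footprint.
print(round(pad_delta_t(3, 1.0, 3.0, 168), 1))  # -> 6.0 C (lightly compressed pad, k ~ 3 W/mK)
print(round(pad_delta_t(3, 1.0, 6.0, 168), 1))  # -> 3.0 C (heavily compressed pad, k ~ 6 W/mK)
```

Doubling the effective conductivity halves the rise across the pad, which is the mechanism behind the "thicker pad, compressed harder, same gap" result described in the post.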


----------



## chibi

@nrpeyton - I remember your posts from then with the 1080 Ti, you went overkill trying to min/max and I loved reading every bit of it. IIRC, you did some sub ambient too right? I remember a lot of OCCT testing and checking for error corrections. I took a bunch of notes, but lost them all lol.


----------



## prelude514

Looking for a little confirmation that I'm on the right track before I flash my BIOS. 

So, my 3090 is the Aorus Extreme Waterforce AIO with 2x 8-pins. Stock power limits = 370/390W according to GPU-Z. The best I can achieve is 1905MHz @ 850mv.

I'd like extra power headroom to lock in at least 2000MHz. I'm thinking somewhere in the 450-500W limit range would let me achieve that, so I want to flash the KP 1000W bios onto this card. My understanding is that it will work despite this being a 2x 8-pin card, but I'd be limited to ~667W? (Or is it ~742W, with 75W coming from the PCIe slot?)

So, after flashing that bios and setting a power limit of 75%, I should be sitting right at 500W. Correct?

Thanks in advance!


----------



## gfunkernaught

prelude514 said:


> Looking for a little confirmation that I'm on the right track before I flash my BIOS.
> 
> So, my 3090 is the Aorus Extreme Waterforce AIO with 2x 8 pins. Stock power limits = 370/390 according to GPUZ. Best I can achieve is 1905Mhz @ 850mv.
> 
> I'd like extra power headroom to lock in at least 2000Mhz. I'm thinking somewhere between 450-500w limit would let me achieve that, so I want to flash the KP 1000w BIOS onto this card. My understanding is that it will work despite being a 2x 8 pin card, but I'd be limited to ~667w? (or is it ~742w with 75w coming from the pcie slot?)
> 
> So, after flashing that bios and setting a power limit of 75% I should be sitting right at 500w. Correct?
> 
> Thanks in advance!


Try the 520w rebar bios first and see how that performs before jumping to the 1kw bios.


----------



## bmgjet

gfunkernaught said:


> Try the 520w rebar bios first and see how that performs before jumping to the 1kw bios.


He has a 2x-plug card; that would give less power than his stock bios.


----------



## propa

Hi there. I also got a 3090 TUF non-OC. With the TUF bios I can reach 1975MHz stable at 1.0v, but the PWR limit kicks in really hard. With the KP 1000W bios I can clock my TUF at 0.975v @ 2010MHz @ 67%, which is round about 440-460 watts at the wattmeter, and I'm loving it. With the TUF bios my card performed very poorly, [email protected] and [email protected] stable, but with the KP 1000W bios [email protected] and [email protected] oO? I should try the TUF OC bios...


----------



## yzonker

propa said:


> Hi there i got also a 3090 Tuf non oc with the TUF bios i can reach 1.0v 1975Mhz stable but the PWR Limit kicks in realy hard with the KP1000W bios i can clock my TUF 0,975 @ 2010Mhz @67% is round about 440-460 Watt @ Wattmeter and i loving it. With the TUF bios my GFX performed very poor [email protected] and [email protected] stable but with the KP1000W bios [email protected] and [email protected] oO ? i should try the TUF OC bios ...


I've seen that comment before in regards to stability with the KP bios. One thing you need to do is verify whether the effective clocks shown in HWiNFO are actually lower, and/or whether benchmark scores are lower.


----------



## propa

yzonker said:


> I've seen that comment before in regards to stability with the KP bios. One thing you need to verify is your effective clocks shown in HWINFO are actually lower and/or benchmark scores are lower.


I only use HWiNFO and RTSS, and yup, with the TUF OC bios it's the same poor perf... I am now at [email protected] against the TUF bios' 1885MHz... but for 2.0GHz I need 975 on the KP1000W and 1.012v on the TUF.


----------



## Falkentyne

propa said:


> I am only use the HWinfo and RTS, and jup TUF OC bios same poor perf ... i am now at [email protected] against tuf bios 1885Mhz


Please use the newest version of HWiNFO, and look at "effective" clocks (not requested clocks). You keep posting requested clocks even though he asked you for effective clocks.
Also check your benchmark scores. Make sure they are NOT lower on the Kingpin bios.


----------



## propa

Falkentyne said:


> Please use the newest version of hwinfo. And look at "effective' clocks (not requested clocks). You keep posting requested clocks even though he asked you for effective clocks.
> Also check your benchmark scores. Make sure they are NOT lower on the Kingpin bios.


Nope, I am only looking at HWiNFO effective clocks, and 3DMark Extreme GT1 is 59.5 against 65.7 fps <- KP1000W, or 6984pt against 7662 in Heavy Extreme. I think 190-210W per 8-pin rail is OK for me with an RX850W Corsair PSU.


----------



## Falkentyne

propa said:


> Nop i am only looking on HWinfo eff. clock and 3Dmark extr GT1 59,5 againts 65,7 fps <- KP1000w or 6984pt against 7662 Heavy extr. i think 190-210W per 8pin rail is ok for me with an RX850W Corsair PSU.


Good work, then.
The reason I said this is because you posted clocks that are in 15 MHz steps. Those are only requested clocks. Effective clocks can be any number (like 2033.4 MHz, etc.), so I thought you were only looking at requested clocks.
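The 15 MHz-step observation above makes for a quick sanity check: requested clocks on Ampere snap to 15 MHz bins, while effective clocks (averaged over the monitoring tool's polling window) can land anywhere. A small sketch; the helper name is made up for illustration:

```python
# Requested clocks move in 15 MHz steps; effective clocks can be any
# value. If a reading sits exactly on a 15 MHz bin, it is probably a
# requested clock rather than an effective one.

def on_15mhz_bin(clock_mhz: float, step: float = 15.0) -> bool:
    """True if the reported clock sits exactly on a 15 MHz bin."""
    remainder = clock_mhz % step
    return min(remainder, step - remainder) < 0.01

print(on_15mhz_bin(2010.0))  # on-bin: reads like a requested clock
print(on_15mhz_bin(2033.4))  # off-bin: reads like an effective clock
```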


----------



## KCDC

Finally got this 3090 installed. Not my first choice, card-wise, but at least I got it at MSRP: a Zotac AMP Extreme Holo. Haven't done much gaming or any benchmarks, just been GPU rendering so far, and it's pretty snappy! Along with a 2080 Ti and a 10980XE in my loop, the card stays around 38-40C while rendering, and the clock sticks around 1995 stock. The little OCing I did while gaming had it at 2100. Looking to do some benching and see how much I can push it later on once projects slow down. Hoping to get a second soon. I remember reading a ways back that finding 3-slot NVLinks for these cards is pretty tough; is that still the case? Looking to utilize it for work, as pooling the VRAM on both cards would be quite beneficial.


----------



## chispy

@Falkentyne , amigo, thanks a lot for the feedback and testing you did using thermal pads, and also for your recommendation to use Gelid Extreme and not Ultimate.

I did some maintenance on my 24/7 gaming rig because I was getting very hot memory temperatures, and the hot spot was getting worse by the day, around 25C+ over the core temperature, using an EK FC water block on an Asus TUF OC 3090. What a big mess when I took it apart :/ , the Thermal Grizzly Kryonaut paste on the core was completely bone dry, like powder, and the cheap thermal pads that EK included with the FC H2O block were brittle and dry as a bone too ...

I replaced all the thermal pads with Gelid Extreme, 1.0mm on the front and 2.0mm on the backplate, plus re-pasted with a fresh, thick application of Kryonaut, and the results were very good. Now it runs so much cooler; the hot spot is only 10C over core temps (e.g. core 43C, hot spot 53C), and even with high clocks, high PL and volts it won't go higher than 10C over the core temps. The new thermal pads from Gelid (Gelid Extreme, not Ultimate; the soft thermal pads) also did amazingly, as they dropped my memory temps by around 20C.

Guys, beware of bad performance due to dry thermal paste and dry thermal pads. I was running the same thermal paste application and stock EK thermal pads for over 9 months on the EK FC H2O block, and the massive heat this thing produces did dry them up. I recommend you change thermal paste every 6~8 months, apply a fresh coat, and check the thermal pads as well to make sure they are not bone dry and brittle.


----------



## ManniX-ITA

chispy said:


> I was running the same thermal paste application and stock EK thermal pads for over 9 months on the EK FC H2O block, and the massive heat this thing produces did dry them up.


I'm still not sure what I'm going to use 

Kryonaut, Kryonaut Extreme and Kingpin KPx all dry up fast.
Good temperatures, especially the KPx, but none is long lasting.

Those confirmed long lasting are the carbon-based Thermalright TFX (extremely hard to spread) and Thermalright TF8/TF7 (easier to spread, less performant).

There's also the Phobya NanoGrease Extreme, the only silicone-based paste that is confirmed long lasting.
It's the winner for gaming notebook repasting; small toasters running almost 24/7 with uneven heatsinks, annoying to dismantle every 6 months.
A very similar use case to ours.
There's also a proper datasheet: https://www.aquatuning.de/download/...a-NanoGrease-Extreme-16Wmk-Paste-MSDS-eng.pdf
But the performance, at least on paper, is below any Thermalright paste (thermal impedance is <0.05 vs <0.01 and lower).


----------



## chispy

ManniX-ITA said:


> I'm still not sure what I'm going to use
> 
> Kryonaut, Kryonaut Extreme and Kingpin KPx all dry up fast.
> Good temperatures, especially the KPx, but none is long lasting.
> 
> Those confirmed long lasting are the carbon-based Thermalright TFX (extremely hard to spread) and Thermalright TF8/TF7 (easier to spread, less performant).
> 
> There's also the Phobya NanoGrease Extreme, the only silicone-based paste that is confirmed long lasting.
> It's the winner for gaming notebook repasting; small toasters running almost 24/7 with uneven heatsinks, annoying to dismantle every 6 months.
> A very similar use case to ours.
> There's also a proper datasheet: https://www.aquatuning.de/download/...a-NanoGrease-Extreme-16Wmk-Paste-MSDS-eng.pdf
> But the performance, at least on paper, is below any Thermalright paste (thermal impedance is <0.05 vs <0.01 and lower).


You are correct, most of the good thermal pastes with high heat transfer rates dry up very fast, in 6~8 months. Especially Kryonaut (grey), Kryonaut Extreme (pink), Kingpin KPx (blue), and Gelid GC Extreme (grey). The worst is Gelid GC Extreme, as it won't last 6 months. Maybe next time I need to re-paste I will try that Phobya NanoGrease Extreme, as it is a pain to remove the water block every 6~8 months just to re-paste.


----------



## KCDC

Interesting, I will check one of the 2080 Tis I pulled out to see if the KPx dried; it's been about 8 months. So, if all the top-shelf pastes dry, then what's the solution aside from reapplying every 8 months? I'm not going conductive or graphite pads. If that Phobya NanoGrease you mention lasts longer, then I'd give it a shot; who cares if it's tough to apply if you're only doing it once. I'd rather apply something great that lasts longer than re-apply every 8 months.


----------



## ManniX-ITA

KCDC said:


> Interesting, I will check one of the 2080tis I pulled out to see if the kpX dried. Been about 8 months.


Would really like to know, please share it once you're done.



KCDC said:


> So, if all the top shelf pastes dry, then what's the solution aside from reapplying every 8 months?


Seems there's none, other than using a paste that lasts longer and losing some thermal performance.



KCDC said:


> I'm not going conductive or graphite pads.


Agree, not only dangerous; liquid metal needs to be re-applied as well, just a bit less often.



KCDC said:


> If that Phobya Nano Grease you mention lasts longer, then I'd give that a shot, who cares if it's tough to apply if you're only doing it once.


It's never only once, but if it's 12-18 months vs 6-9, that's still quite a difference.

The hard-to-apply ones are the carbon-based pastes, especially the TFX.
The Phobya is silicone-based, similar to Kryonaut but a bit stiffer, I've read.

The NanoGrease has a very high thermal conductivity but also a high thermal impedance.
That probably means that, due to the material composition, it struggles to fill the micro-gaps between the surfaces.

The carbon-based Thermalright pastes all have very low thermal impedance.
That means their thermal conductivity (which is a performance number related only to the thermal material itself) is used more effectively, probably because they can better adhere to the surfaces and fill the micro-gaps.
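To put rough numbers on the conductivity-vs-impedance distinction, here is a back-of-envelope sketch. The power figure and the two impedance values are assumptions chosen to resemble the numbers discussed above, not datasheet data; the die area is from the GA102 spec (628 mm²).

```python
# Thermal impedance (K*cm^2/W) bundles the bulk resistance of the paste
# with the two contact resistances at the surfaces; conductivity (W/mK)
# only describes the bulk material. Values below are illustrative.

def tim_delta_t(power_w: float, impedance_k_cm2_w: float, area_cm2: float) -> float:
    """Temperature rise across the TIM layer for a given heat load."""
    return power_w * impedance_k_cm2_w / area_cm2

DIE_AREA_CM2 = 6.28  # GA102 is 628 mm^2

print(tim_delta_t(350, 0.05, DIE_AREA_CM2))  # higher-impedance paste
print(tim_delta_t(350, 0.01, DIE_AREA_CM2))  # lower-impedance paste
```

Even a factor-of-five impedance difference only moves the core a couple of degrees at 350 W, which is why two good pastes are hard to tell apart in a noisy test.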


----------



## Falkentyne

ManniX-ITA said:


> I'm still not sure what I'm going to use
> 
> Kryonaut, Kryonaut Extreme and Kingpin KPx all dry up fast.
> Good temperatures, especially the KPx, but none is long lasting.
> 
> Those confirmed long lasting are the carbon-based Thermalright TFX (extremely hard to spread) and Thermalright TF8/TF7 (easier to spread, less performant).
> 
> There's also the Phobya NanoGrease Extreme, the only silicone-based paste that is confirmed long lasting.
> It's the winner for gaming notebook repasting; small toasters running almost 24/7 with uneven heatsinks, annoying to dismantle every 6 months.
> A very similar use case to ours.
> There's also a proper datasheet: https://www.aquatuning.de/download/...a-NanoGrease-Extreme-16Wmk-Paste-MSDS-eng.pdf
> But the performance, at least on paper, is below any Thermalright paste (thermal impedance is <0.05 vs <0.01 and lower).


Can you explain the difference between these "carbon" based pastes and Kryonaut Extreme and Kingpin KPX, etc?

I noticed there is a huge family of pastes, possibly all coming from the same factories in Shenzhen. I can't find out where Thermalright TFX is made, but Thermagic ZF-EX seems to be a direct clone of it (identical in thickness, even), if it isn't the same stuff. TFX and ZF-EX are like god tier levels of thickness. They can be easier to spread if you put the syringe in very hot water (almost boiling) for 10 minutes, or just use a hairdryer on the paste after you squeeze it. 

And there's a large collection of pastes that are not quite as thick as TFX but are very close, and they all have about the same performance, only differing in viscosity and wetting ability. SYY-157, Maxtor CTG9, Zezzio (14.3 W/mK) and I think one other paste all seem to be resold by different suppliers.
FuzeIce Plus and Alseye T9+ Platinum are both wetter and not as thick as the SYY-157 family of pastes, and seem to be 100% identical. All the pastes seem to look the same, with the exception of TFX, which leaves an even thicker coat on your fingers when you try to rub it (almost like a lacquer-type coat, while SYY leaves a thinner coat, and FuzeIce Plus (T9+ Platinum) is even thinner than that).


----------



## KCDC

ManniX-ITA said:


> Would really like to know, please share it once you're done.
> 
> 
> 
> Seems there's none, other than using a paste that lasts longer and losing some thermal performance.
> 
> 
> 
> Agree, not only dangerous; liquid metal needs to be re-applied as well, just a bit less often.
> 
> 
> 
> It's never only once, but if it's 12-18 months vs 6-9, that's still quite a difference.
> 
> The hard-to-apply ones are the carbon-based pastes, especially the TFX.
> The Phobya is silicone-based, similar to Kryonaut but a bit stiffer, I've read.
> 
> The NanoGrease has a very high thermal conductivity but also a high thermal impedance.
> That probably means that, due to the material composition, it struggles to fill the micro-gaps between the surfaces.
> 
> The carbon-based Thermalright pastes all have very low thermal impedance.
> That means their thermal conductivity (which is a performance number related only to the thermal material itself) is used more effectively, probably because they can better adhere to the surfaces and fill the micro-gaps.


I would be interested in it if it means less downtime re-applying. I did a quick search, and I guess the lesser HeGrease performs better even though it's rated lower; maybe it's not as thick...









Test of 27 thermal compounds, part 2 - Page 2 of 3 - HWCooling.net

Results and conclusion: The second round will be especially interesting for people who want to find the best solution for standard use. The winner is a big surprise. Gelid, Noctua, Phobya, Reeven, SilentiumPC, Thermal Grizzly or Zalman? We can give away that it is not an expected favourite...

www.hwcooling.net

Whether or not it also dries out I don't know; I didn't dive too deep into it. But if it's on par with the Nano or KPx and has the same lifetime as the Nano, then I'll give that a shot first.


----------



## ManniX-ITA

KCDC said:


> Whether or not that also dries I don't know, didn't dive too deep into it, but if it's on par with nano or kpx and has the same lifetime of nano, then I'll give that a shot first


I don't know... often what manufacturers declare as thermal conductivity is BS, but these two figures are from the same manufacturer.
That's half the NanoGrease's conductivity with the same viscosity.
Probably the thermal compound is much worse but the silicone base is the same.
I wouldn't take that test on hwcooling.net too seriously; almost all pastes land at 74-75C, a spread within the margin of error.
That setup is not appropriate to show the difference between an 8 and a 16 W/mK paste.
I consider the professionals' experience with gaming notebooks much more valuable.
They say it lasts long and provides the best thermal performance. If the HeGrease were enough they'd use that one instead; it's cheaper 



Falkentyne said:


> Can you explain the difference between these "carbon" based pastes and Kryonaut Extreme and Kingpin KPX, etc?


Not really, as I have no idea what the "carbon" material used as the base is 

The silicone-based ones are made with a kind of uncured rubber.
It's like inflatable balloons; when they get old, they crack.
Heat is also a way to cure and harden the rubber.
Heat plus time inevitably dry up any silicone-based paste.

I have no idea about these carbon-based ones, but it seems confirmed they need less curing.
The drawback is that they are less manageable.
But in the end there's no TIM that doesn't cure when exposed to time and heat.
It's more a matter of when; drying out in 6 months is really too short for a GPU waterblock...



Falkentyne said:


> I noticed there is a huge family of pastes, possibly all coming from the same factories in Shenzhen. I can't find out where Thermalright TFX is made, but Thermagic ZF-EX seems to be a direct clone of it (identical in thickness, even), if it isn't the same stuff. TFX and ZF-EX are like god tier levels of thickness. They can be easier to spread if you put the syringe in very hot water (almost boiling) for 10 minutes, or just use a hairdryer on the paste after you squeeze it.


Like everything today 
There's a ghost brand of Thermalright, I don't remember the name, which is selling exactly the same products under different names and model numbers.
I think they also license their IP to other brands, which produce in the same fabs with slightly different recipes.

Spreading is indeed going to be tough; I usually prefer the hair dryer method as it's quicker and less messy.



Falkentyne said:


> FuzeIce Plus and Alseye T9+ Platinum are both wetter and not as thick as SYY-157 family of pastes and seem to be 100% identical. All the pastes seem to look the same, with the exception of TFX, which seems to leave an even thicker coat on your fingers, when you try to rub it (it leaves almost like a lacquer type coat, while SYY leaves a thinner coat, and FuzeIce Plus (T9+ Platinum) is even thinner than that.


The problem with similar or identical pastes is that what looks the same is the base compound; the thermal filler mixed in makes the difference.
Usually these 2nd-tier brands are cheaper, and the reason is that they use a lower-quality or lower-quantity thermal filler mix.
So the only way to really know is to test it yourself or have someone else test it beforehand.
But if you test it yourself it can turn into a huge waste of money and effort pretty quickly... 
That's why I prefer to stick to 1st-tier brands. At least there's data, and hopefully you don't get a bad batch...


----------



## jura11

@chispy 

That's what I experienced with Thermal Grizzly Kryonaut as well. On both GPUs I was seeing higher temperatures; the delta between the GPU core and GPU hot-spot was 8-10°C in the best case, or around 14-15°C in the worst case at higher power limits or when the power draw was around 500W.

I replaced the thermal pads with Gelid pads, used ZF-EX for the thermal paste, and right now the delta between the GPU core and GPU hot-spot is 8-10°C at most, and VRAM temperatures dropped by 15-20°C. They usually sit in the 50s and I haven't seen them higher than 62°C; previously, on that particular GPU, I would see 76-84°C in the best case.

I think my thermal paste was about 6-8 months old as well, and it was dry too. Let's see how the ZF-EX performs. I also used SYY-157 on a Ryzen 5950X where I previously used Kryonaut, which was dry after 6 months, maybe not even that.

Hope this helps 

Thanks, Jura


----------



## homestyle

What kind of memory undervolts are people running with the 3090 kingpin and classified tool?

Is 1.2 volts possible with no overclocks?


----------



## Arizor

Apologies if I'm misunderstanding, @homestyle , but the 3090 conventionally maxes out at 1.1V, so 1.2V (though yes, possible on the 3090 KP with the Classified tool) is not an undervolt?

For undervolts a lot of folks here have achieved great performance on their KPs and can advise I'm sure.


----------



## homestyle

Arizor said:


> Apologies if I'm misunderstanding @homestyle , but the 3090 conventionally maxes at 1.1V, so 1.2V (though yes possible with 3090 KP classified) is not an undervolt?
> 
> For undervolts a lot of folks here have achieved great performance on their KPs and can advise I'm sure.


Perhaps you are talking about GPU core voltage?

In the Classified tool, FBVDD (the memory rail) defaults to 1.375 with a minimum of 1.2 volts.


----------



## ArcticZero

chispy said:


> You are correct, most of the good thermal pastes with high heat transfer rates dry up very fast, in 6~8 months. Especially Kryonaut (grey), Kryonaut Extreme (pink), Kingpin KPx (blue), and Gelid GC Extreme (grey). The worst is Gelid GC Extreme, as it won't last 6 months. Maybe next time I need to re-paste I will try that Phobya NanoGrease Extreme, as it is a pain to remove the water block every 6~8 months just to re-paste.


Strange, I've had great results with Gelid GC Extreme paste. Before I went custom loop, I had my 8700K at 5GHz on it for years without taking it apart, and temps didn't really degrade much, if at all.

Can't say I've tried it on my 3090 for a long period of time, but it is what I used on my current block, which has been up for a couple of months, and so far so good: a 12C max hotspot delta under full load.

I used to use TFX but found it hard to spread despite the great temps, and didn't think of the hair dryer method at the time. Not to mention it costs quite a lot for the volume you get.


----------



## ManniX-ITA

ArcticZero said:


> Strange, I've had great results with Gelid GC Extreme paste. Before I went custom loop, I had my 8700K at 5GHz on it for years without taking it apart, and temps didn't really degrade much, if at all.


Like the Extreme pads, the Extreme paste also suffered from bad batches from the previous factory.
The new batches have blue branding instead of the old green.

I used it a lot before Kryonaut and it was a decent paste, lasting quite long on the CPU; still fine after more than a year.
But of course temps were 1-3C worse than with Kryonaut.


----------



## WHeelbeer

Hi all, I was wondering something. As the 3090 and 3080 Ti have the same chip, has anyone considered whether flashing a 3090 with a 3080 Ti vBIOS is possible? Why, you ask? To shut down some VRAM chips and gain more wattage dedicated to the GPU clock (VRAM uses 2W per chip according to igor'sLAB).
Or is there a way to shut down some VRAM chips through tweaking?
Maybe a silly question, but hey... why not.
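A back-of-envelope estimate of what that would buy, using the ~2 W/chip figure quoted above. The chip counts are the standard board layouts, stated here as assumptions for illustration:

```python
# Rough VRAM power budget using the ~2 W per GDDR6X chip figure cited
# above (igor'sLAB). Chip counts are assumptions for illustration.

W_PER_CHIP = 2.0
CHIPS_3090 = 24     # 24 x 1 GB, populated on both sides of the PCB
CHIPS_3080TI = 12   # 12 x 1 GB, front side only

freed_w = (CHIPS_3090 - CHIPS_3080TI) * W_PER_CHIP
print(f"~{freed_w:.0f} W freed inside the power limit")
```

So even in the best case it's on the order of 24 W out of a 350 W budget, before considering whether a cross-SKU flash would even boot.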


----------



## J7SC

...can't quite remember the exact sequence of posts re. the Thermalright 12.8 W/mK pads not conforming well / imprinting, but that hasn't been my experience, IMO... here are some pics of those Thermalright pads on my 3090 Strix after 5 months or so.


----------



## des2k...

Thermalright 12.8 W/mK


J7SC said:


> ...can't quite remember the exact sequence of posts re. Thermalright 12.8 W/mK pads not conforming well / imprinting, but not so, IMO...here are some pics of those Thermalright pads on my 3090 Strix after 5 months or so.
> 
> View attachment 2527108
> 
> 
> View attachment 2527109


Why are your Thermalright 12.8 W/mK pads blue?

I got a lot of them from Amazon/AliExpress and they are grey.


----------



## J7SC

des2k... said:


> Thermalright 12.8 W/mK
> 
> Why are your Thermalright 12.8 W/mK pads blue?
> 
> I got a lot of them from Amazon/AliExpress and they are grey.


 ...just lighting and an older camera - they're grey


----------



## Falkentyne

J7SC said:


> ...just lighting and an older camera - they're grey


This is on a different heatsink.
Isn't this a water block?

You have good mounting pressure because you have more screws, so even Fujipoly 17 W/mK pads will work.
The mounting pressure issue with the Odyssey pads is on the Founders Edition heatsinks, where the mounting pressure is absolutely ATROCIOUS because the only thing that screws the PCB into the heatsink is the four leaf-spring screws! I AM NOT KIDDING. 

The rest of the screws go into the OUTER SHROUD, which isn't attached to the heatsink directly! So Odyssey pads end up slowly causing degrading hotspot issues. It's VERY noticeable if you use Kryonaut / Kryonaut Extreme. Even Thermalright TFX can't save that mess.


----------



## J7SC

...a potential (updated) 'fyi' re. New World, 3090s and other RTX 3k


----------



## des2k...

J7SC said:


> ...a potential (updated) 'fyi' re. New World, 3090s and other RTX 3k


Another video on this, and just as worthless as his first video:

a bunch of speculation with no real data and stupid recommendations: limit power, limit fps...


----------



## newls1

Quick question regarding GPU core temp vs. GPU hotspot temp... My EVGA RTX 3090 FTW3 Ultra is full-cover water-blocked with an active backplate. Is it normal for there to be around an 11-12C difference between GPU core and hotspot temps?


----------



## des2k...

newls1 said:


> Quick question regarding GPU core temp vs. GPU hotspot temp... My EVGA RTX 3090 FTW3 Ultra is full-cover water-blocked with an active backplate. Is it normal for there to be around an 11-12C difference between GPU core and hotspot temps?


That's what I get with my Zotac 3090 / EK waterblock, about a 13C GPU/hotspot delta.

My block mount is OK, a 10C water / GPU core delta at a 500W load.

Even with a perfect block mount, it's not going below a 10C GPU/hotspot delta.


----------



## Falkentyne

newls1 said:


> Quick question regarding GPU core temp vs. GPU hotspot temp... My EVGA RTX 3090 FTW3 Ultra is full-cover water-blocked with an active backplate. Is it normal for there to be around an 11-12C difference between GPU core and hotspot temps?


The hotspot delta can't go below 7-10C (the base delta depends on the SKU). It's fixed by firmware to not go below this (if it does, the hotspot reading will follow the core temp at +7 to +10C).
11-12C is normal.
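The floor described above behaves like a simple clamp on the reported value. A sketch of the idea; the 7 C offset is one plausible value inside the 7-10 C per-SKU range, chosen as an assumption:

```python
# Sketch of a firmware-style floor on the reported hotspot temperature.
# The minimum delta is per-SKU; 7 C here is an illustrative assumption.

def reported_hotspot(core_c: float, sensor_hotspot_c: float,
                     min_delta_c: float = 7.0) -> float:
    """The reported hotspot never drops below core + min_delta."""
    return max(sensor_hotspot_c, core_c + min_delta_c)

print(reported_hotspot(43.0, 55.0))  # real delta above the floor: 55.0
print(reported_hotspot(43.0, 46.0))  # clamped up to 43 + 7 = 50.0
```

This is why a sub-7C delta never shows up in monitoring tools even on a perfect liquid-metal mount.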


----------



## des2k...

Falkentyne said:


> The hotspot delta can't go below 7-10C (the base delta depends on the SKU). It's fixed by firmware to not go below this (if it does, the hotspot reading will follow the core temp at +7 to +10C).
> 11-12C is normal.


Is there any tool available to read what the hardcoded offset is? Would it be in the BIOS or somewhere else?


----------



## bmgjet

Playing Space Engineers.
Lapped EKWB with liquid metal, since their die contact is shocking from the factory and gives a +15C hot spot difference.
With the liquid metal it's a 0.2-0.8C difference at idle.
At medium load, which Space Engineers mostly is, it's as in the screenshot above.
Then at something full load like PR at 500W, it's a 5-6C difference.


----------



## yzonker

des2k... said:


> That's what I get with my Zotac 3090 / EK waterblock, about a 13C GPU/hotspot delta.
> 
> My block mount is OK, a 10C water / GPU core delta at a 500W load.
> 
> Even with a perfect block mount, it's not going below a 10C GPU/hotspot delta.


Mine is 11C in CP2077 at 400-500W. It can go as high as 13C in other stuff though, particularly some of the stress-test apps. 

I have seen other card models (not Zotac) showing sub-10C deltas, as @Falkentyne is saying. Kinda scratched my head at that. It's probably a different minimum, as they were not even particularly well-cooled cards IIRC. Certainly nothing to explain it being so much lower.


----------



## des2k...

bmgjet said:


> View attachment 2527282
> 
> Playing Space Engineers.
> Lapped EKWB with liquid metal, since their die contact is shocking from the factory and gives a +15C hot spot difference.
> With the liquid metal it's a 0.2-0.8C difference at idle.
> At medium load, which Space Engineers mostly is, it's as in the screenshot above.
> Then at something full load like PR at 500W, it's a 5-6C difference.


impressive 😀

My 3090 EK block was not very flat. I also measured my old EK block on an FTW3 1080 Ti, and that one was really bad.

I'm actually surprised I was able to game at 450W+ with the Asus XOC BIOS at a 2170 core and ~40C for 2+ years.

After spending the weekend lapping that 3090 block, I got really lazy/bored. It wasn't so much lapping as forcing, trying to grind down the high spots.

Your hotspot deltas are kinda motivating me to finish lapping + go LM🙄


----------



## Arizor

Yeah I just finished taking off my EKWB and fitting an Optimus to my ROG Strix. Hotspot now 10 (EKWB was 15 as others have experienced), plus I really like the look of it, which is just a bonus.

Also replaced my EKWB 360mm (I'll send it to anyone who needs a rad super cheap if they want it) with Nemesis GTR 360mm, and overall if I run my push/pull at 65% I max at around 37C after an hour or so of constant intense use.


----------



## ArcticZero

I really wish Optimus would make a block for reference cards. As it is, my only option with an active backplate, apart from the EKWB block I have (which already has some of its nickel plating faded after just a few months), is Barrow, which I don't mind. But the Optimus ones look so good, and the upcoming Cerakote ones look interesting as well.


----------



## des2k...

ArcticZero said:


> I really wish Optimus would make a block for reference cards. As it is, my only option with an active backplate, apart from the EKWB block I have (which already has some of its nickel plating faded after just a few months), is Barrow, which I don't mind. But the Optimus ones look so good, and the upcoming Cerakote ones look interesting as well.


They do have one. They said it would release after the Strix blocks. I'm guessing soon, unless they are behind on orders for the Strix block.

In their testing on reference boards at 600W, it shows a ~8C delta. So very good.


----------



## EarlZ

des2k... said:


> Another video on this, and just as worthless as his first video:
> 
> a bunch of speculation with no real data and stupid recommendations: limit power, limit fps...


What would be your recommendation instead ?


----------



## des2k...

EarlZ said:


> What would be your recommendation instead ?


I wasn't serious with my comment 😀
Jay likes to make premature videos based on what's popular / talked about on Reddit/forums/Twitter. But in the end it's pure speculation on what exactly is happening, with no real data.

I don't have any recommendations. After reading a lot on this issue I didn't learn much.


----------



## ArcticZero

des2k... said:


> They do have one. They said it would release after the Strix blocks. I'm guessing soon, unless they are behind on orders for the Strix block.
> 
> In their testing on reference boards at 600W, it shows a ~8C delta. So very good.


Woah, that's interesting! Last I asked on the Optimus thread, the response was basically "not happening". Definitely doing the swap when this happens!



EarlZ said:


> What would be your recommendation instead ?


To me it's basically: if there is even a remote risk, and you can't afford to lose your GPU, then why play the game? I say this as someone who jumped in and shunt-modded my unobtanium 3090 with minimal soldering experience, so take my opinion with a giant grain of salt.


----------



## sem115

I have a 3090 Zotac AMP Extreme Holo. I've heard I need to load the Kingpin BIOS to unlock the card's full potential. Does anybody have experience with this? Thanks


----------



## EarlZ

des2k... said:


> I wasn't serious with my comment 😀
> Jay likes to make premature videos based on what's popular / talked about on Reddit/forums/Twitter. But in the end it's pure speculation on what exactly is happening, with no real data.
> 
> I don't have any recommendations. After reading a lot on this issue I didn't learn much.


I think his speculation is based on EVGA's explanation that there are weak solder joints. It kinda makes sense to limit the FPS/power draw to make sure those joints don't weaken, but I think it's only a matter of time before those solder joints fail; New World just exposed them sooner!


----------



## EarlZ

duplicate post


----------



## yzonker

sem115 said:


> I have a 3090 Zotac AMP Extreme Holo. I've heard I need to load the Kingpin BIOS to unlock the card's full potential. Does anybody have experience with this? Thanks


If it's a 3x8-pin card then there are other choices, like the Kingpin 520W BIOS. Look at TechPowerUp's BIOS database.


----------



## ManniX-ITA

It's going to cost me like a 3070 MSRP price with customs fees and VAT but I couldn't resist; ordered the Optimus block for my Strix.


----------



## Arizor

Name and description checks out @ManniX-ITA 

You won't regret it, even just aesthetically it's a beautiful block, let alone the quality.


----------



## Arizor

Question folks, though I’m quite sure I know the answer and simply trying to avoid the inevitable, slightly depressing answer.

All my temps are down, following installation of the optimus block and nemesis gtr rad.

I still can’t hold a stable overclock above 2040mhz on my Strix, regardless of BIOS.

Memory overclocks very well, +1250 rock solid stable, but my gpu core is letting the side down.

So, is it a case of losing the GPU chip lottery, or is there perhaps a less obvious issue? Is my EVGA G5 1000W PSU limiting it? Something else?

edit: if it's any use at all, the CTD is always "GPU device removed" or similar, though again I imagine that's standard for most crashes.


----------



## ManniX-ITA

Arizor said:


> You won't regret it, even just aesthetically it's a beautiful block, let alone the quality.


Pretty sure I'm not going to regret it, looks awesome under all aspects.
But my wallet is complaining, wants to emigrate to the neighbor


----------



## ManniX-ITA

Arizor said:


> I still can’t hold a stable overclock above 2040mhz on my Strix, regardless of BIOS.
> 
> Memory overclocks very well, +1250 rock solid stable, but my gpu core is letting the side down.
> 
> so, is it a case of losing the GPU chip lottery, or is there perhaps a less obvious issue? Is my EVGA G5 1000watt PSU limiting it? Something else?


Well, I'm probably in a similar situation, as my binning seems to be quite bad considering how it performs on air.

So I'm interested:

Is it the OC or non-OC version?
Which BIOS are you running?
Limited to 2040 for general usage, or when running a particular benchmark?
How much do you score in Time Spy Extreme and Port Royal?
Can you get an HWiNFO screenshot with PR running to see max power/voltages/TDP?


----------



## Arizor

ManniX-ITA said:


> Well, I'm probably in a similar situation as my binning seems to be quite bad considering how it performs on air.
> 
> So I'm interested:
> 
> Is it the OC or non-OC version?
> Which BIOS are you running?
> Limited to 2040 for general usage, or when running a particular benchmark?
> How much do you score in Time Spy Extreme and Port Royal?
> Can you get an HWiNFO screenshot with PR running to see max power/voltages/TDP?


Good Q's Mannix, should've included all this.

1. OC version.
2. Ran various BIOSes; currently back on the ASUS Strix V4 standard (480W PL). The EVGA FTW is maybe a _bit_ better, but honestly it makes so little difference for my particular GPU that I thought I'd rather have the wattage displayed accurately.
3. Limited for general use; in Port Royal I can hit 2085 without crashing, but it _will_ crash at that clock in Time Spy, for example.
4. PR: pushing it to its max (2085/2100ish, +1500 mem) I can hit 15k; in Time Spy, from memory, I hit about 20k. On my "every day" stable setting it's about 14.8k PR / 19ish-k Time Spy.
5. Pic (running my everyday stable clocks):


----------



## ManniX-ITA

Arizor said:


> 2. Ran various BIOSes; currently back on the ASUS Strix V4 standard (480W PL). The EVGA FTW is maybe a _bit_ better, but honestly it makes so little difference for my particular GPU that I thought I'd rather have the wattage displayed accurately.


I would check the KP520W with ReBar. I'm also back on the Strix V4, but I had the impression it was better without the temp constraint.



Arizor said:


> Pic (running my everyday stable clocks):


Looks all good. Did you try with an AB profile with fans set to zero?


----------



## Arizor

Yep, already used the KP520W with ReBar; still hit the 2040MHz GPU core limit 

Sorry, should've included a pic of my AB; I have the fans at 0:


----------



## ManniX-ITA

Arizor said:


> PR - Pushing to its max (2085/2100ish, +1500 mem) I can hit 15k / TimeSpy from memory I hit about 20k. On my "every day" stable setting it's about 14.8K PR / 19ishK TimeSpy.


Forgot to ask, what is the water temperature under load?
The hotspot at +10°C seems in line, but from recent posts it seems possible to go down to +5°C with the Optimus.
The memory is a bit on the high side.
Did you try a lower mem OC? From what I've read it starts degrading performance above +1250.
Memory is +18°C over the core at idle and +22°C under load. 
I've seen better, but I don't know what is expected with the Optimus.


----------



## Arizor

Don't have water temps @ManniX-ITA as this board (MSI Unify X570) doesn't have a native temp connection and I couldn't be arsed getting all the hardware for an external one 

Yeah tried with lower MEM clocks, doesn't seem to have any impact on the GPU core stability...


----------



## ManniX-ITA

What do you prioritize, temperature or power?
I'm not sure that skin even has it.

I'd use the ampere skin for AB.


Arizor said:


> Sorry should've included pic of my AB, have the fans at 0:


Weird, I see fans at 53% in HWInfo; maybe it's not being read properly.

I wouldn't set the Core Voltage to 100%; did you try 25%-50%?
Mine tries too hard to go up and ends up in a lower frequency strap with a higher setting.



Arizor said:


> Yeah tried with lower MEM clocks, doesn't seem to have any impact on the GPU core stability...


Pity 



Arizor said:


> Don't have water temps @ManniX-ITA as this board (MSI Unify X570) doesn't have a native temp connection and I couldn't be arsed getting all the hardware for an external one


Pity again 
I'd consider buying an Aquaero.


----------



## Arizor

Thanks for the advice @ManniX-ITA . Core voltage doesn't seem to have any impact but I'll play with it and see what happens!


----------



## yzonker

Arizor said:


> Good Q's Mannix, should've included this all.
> 
> 1. OC Version.
> 2. Ran various BIOS, currently back on the ASUS Strix V4 standard (480W PL), the EVGA FTW is maybe a _bit_ better but honestly it makes so little difference for my particular GPU I thought I'd rather have the wattage displaying accurately.
> 3. Limited for general use - in Port Royal I can hit 2085 without crashing, but it _will_ crash at that clock in TimeSpy, for example.
> 4. PR - pushing to its max (2085/2100ish, +1500 mem) I can hit 15K; in TimeSpy, from memory, I hit about 20K. On my "every day" stable setting it's about 14.8K PR / 19ishK TimeSpy.
> 5. Pic (running my everyday stable clocks):
> View attachment 2527481


That looks fairly typical to me for a game stable OC limited to 480w. Does that PR run have reBar forced on?


----------



## Arizor

yzonker said:


> That looks fairly typical to me for a game stable OC limited to 480w. Does that PR run have reBar forced on?


Sadly, it's what I thought; just wish I had one of those 2100+ chips, but then again I got good memory, all things considered.

Yep rebar on @yzonker.


----------



## yzonker

Arizor said:


> Sadly what I thought, just wish I had one of those 2100+ chips but then again I got good memory, all things considered.
> 
> Yep rebar on @yzonker.


Keep in mind that yes, some of those chips exist, but I would bet a significant percentage of those you see claimed are really not 100% game stable. I've run VF curves that ran for multiple hours before finally getting a random crash. Or it only crashes in one game out of 3 or 4. 

And it's understandable. You play for 10 hours and finally get a random crash. It can be unclear whether it's actually your hardware or a game bug. Although I've found I get very, very few crashes when running a more conservative OC.


----------



## Arizor

Very true @yzonker


----------



## DBCooper1

Can anyone give me some pointers here? I cannot get to 15K on Port Royal on my 3090 FTW3 Ultra Hybrid. This is the latest Hybrid bios that is supposedly 500W, but I can't pull more than 465ish watts from it. On this latest run I'm at my limits with +167MHz GPU and +1250 memory, fans at 100%. Thanks in advance.


----------



## Falkentyne

DBCooper1 said:


> Can anyone give me some pointers here? I cannot get to 15K on Port Royal on my 3090 FTW3 Ultra Hybrid. This is the latest Hybrid bios that is supposedly 500W, but I can't pull more than 465ish watts from it. On this latest run I'm at my limits with +167MHz GPU and +1250 memory, fans at 100%. Thanks in advance.
> 
> View attachment 2527525


TDP Normalized is throttling you.
Might be coming from that 300W GPU Chip Power reading. (ignore the "rail powers" reading at the top, that's nothing more than the highest reading of any sensor, regardless of what it does).

Try the Kingpin 1kW bios. You have a 3x8-pin card, so just set it to 55% TDP and test it. You won't destroy your card. Your TDP and Normalized TDP should both read about 55% together, as all internal sub-rail limits on that card's vbios are set to sky-high values.
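The slider math here is simple enough to sketch. A minimal example, assuming the KPE "1kW" bios really uses a 1000W power target at 100% (an assumption taken from its name, not a dumped value):

```python
# Sketch of the TDP-slider math described above. ASSUMPTION: the
# Kingpin "1kW" BIOS uses a 1000 W board-power target at 100%; this
# is inferred from the BIOS's name, not from a dumped value.
KP_1KW_TARGET_W = 1000

def slider_to_watts(slider_percent: float, target_w: float = KP_1KW_TARGET_W) -> float:
    """Convert a TDP slider percentage to an approximate board power cap."""
    return target_w * slider_percent / 100

# On a 3x8-pin card, 55% of a 1 kW target is a ~550 W cap, in the
# same ballpark as the card's stock high-power BIOSes.
print(slider_to_watts(55))   # 550.0
print(slider_to_watts(100))  # 1000.0
```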


----------



## GRABibus

Arizor said:


> Yep rebar on @yzonker.


Meaning, did you enable ReBar for the "Port Royal DLSS" profile in NVIDIA Inspector? (Not only in the BIOS, just to be sure.)









Here's How You Can Enable Resizable BAR Support in Any Game via NVIDIA Inspector (wccftech.com)
While only sixteen games have been officially whitelisted for Resizable BAR support by NVIDIA, there's a procedure to manually enable it in any game.


----------



## DBCooper1

Falkentyne said:


> TDP Normalized is throttling you.
> Might be coming from that 300W GPU Chip Power reading. (ignore the "rail powers" reading at the top, that's nothing more than the highest reading of any sensor, regardless of what it does).
> 
> Try the Kingpin 1kw bios. You have a 3x8 pin card so just set it to 55% TDP and test it. You won't destroy your card. Your TDP and Normalized TDP should both appear at about 55% together as all internal sub-rail limits on that card are set to sky high values.


Much appreciated for the info!


----------



## Sheyster

chispy said:


> thermal grissly kryonaut thermal paste on the core was completely bone dry , like powder


I'm done with Kryonaut. Currently using Thermalright TF8. It's easy to work with, cheaper and so far is not drying out.

Link:









TF8 Thermal Compound Paste 13.8 W/mK, Carbon Based High Performance, Heatsink Paste, 2 Grams with Tool (www.amazon.com)


----------



## Falkentyne

Sheyster said:


> I'm done with Kryonaut. Currently using Thermalright TF8. It's easy to work with, cheaper and so far is not drying out.
> 
> Link:
> TF8 Thermal Compound Paste 13.8 W/mK, Carbon Based High Performance, Heatsink Paste, 2 Grams with Tool (www.amazon.com)


There's 12.8 grams of TF8 for $35 now. But TFX is still 6.2g max, and still highway robbery.
I wonder if TF8 is the same paste as SYY-157 (SYY is the exact same paste as Zezzio's 14.3 W/mK paste; despite Zezzio having the same labeled specs as TFX, side by side it's absolutely not TFX, it's SYY-157)...


----------



## Falkentyne

You guys seen these?
Wonder how these compare to Gelid Extremes in softness and performance?









Zezzio 14.8 W/mK Silicone Thermal Pad for Heatsink GPU CPU RAM SSD LED Cooler IC Chipset Cooling (120x120x1.5mm) (www.amazon.com)





Sorry guys but I'm not going to be the guinea pig here. I'll let one of you richer guys do it this time.



> 12.8w update to 14.8w
> Previous thermal pad,a lot of buyers told me that they hope to reduce the hardness and make it softer.
> 
> So,this 14.8w thermal pad get 2 updates:
> 
> 1. Increase 2w, heat dissipation effect is better and faster.
> 
> 2. The hardness has been changed from 40 to 20-40, which is softer and easier to squeeze and install.


----------



## gfunkernaught

Has anyone here tried this yet? Results vs passive backplate cooling?


https://www.bykski.us/products/bykski-gpu-backside-water-block-for-nvidia-rtx-3090-video-memory-b-3090tc-x


----------



## homestyle

how many watts of heat does the backside memory chips produce?


----------



## des2k...

homestyle said:


> how many watts of heat does the backside memory chips produce?


50w or so


----------



## Falkentyne

homestyle said:


> how many watts of heat does the backside memory chips produce?


Enough to fry an egg with.


----------



## gfunkernaught

I saw someone on reddit put that block on the back on top of the backplate. If that block can absorb heat not only from the core and vram but the backplate itself, it should help out a decent amount right?


----------



## geriatricpollywog

I put a slab of dry ice directly on my backplate and it did diddly squat for memory OC. Temps were great though.


----------



## geriatricpollywog

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)


----------



## ManniX-ITA

Could someone please confirm these are all the shunts for the ASUS 3090 Strix?

Thanks!


----------



## Falkentyne

ManniX-ITA said:


> Could someone please confirm these are all the shunts for the ASUS 3090 Strix?
> 
> Thanks!
> 
> View attachment 2527651
> 
> 
> View attachment 2527652


Yes those are all 7 main shunts you need to mod.
2x8 pin cards have 6 shunts.


----------



## krizby

anyone tried copper shim on 3090 yet? these would last forever vs thermal pads.


----------



## J7SC

Thermal Putty > WOW !

After heavily modding the 6900XT for the work-play Ravens' Nest build...now it was the 3090 Strix' turn for block build-up for Ravens' Nest 'A'...with superb results: The HWInfo temps shown were for a 450W 3DM Port Royal run, using a weak temporary loop w/ a single 360 for both 5950X and 3090 Strix w/ fans at 800 rpm  ! And It can only get better w/ the huge cooling system destined for Ravens Nest 'A'. That conforming thermal putty (10 W/mK) sure is nice, as long as it does not deteriorate, which it is not known to do - time will tell though.

I used the thermal putty for the VRAM (both sides) as well _as the back_ of the GPU, plus added the custom cooler on the back...I have an extra backplate and some universal GPU block for w-cooling the back, but so far it is clear that I don't need it. I used Gelid GC Extreme paste for the die and Thermalright thermal pads for the power stages...strategic locations also got a sprinkling of MX5 for good measure. IMO, the Phanteks block performs better than the EKWB I had on there, even before it developed problems with runaway Hotspot temps due to a machining ridge of the EK block

I've done custom w-cooling on GPUs for a decade or so, but I still get nervous for the first boot-up after taking such hard-to-come-by custom PCB cards apart and modding them. Getting rewarded with such benchmark temps on a temp loop made it all worth it. Last pic is the 6900XT and between the two GPUs, I used a full 50 grams of the thermal putty


----------



## yzonker

J7SC said:


> Thermal Putty > WOW !
> 
> After heavily modding the 6900XT for the work-play Ravens' Nest build...now it was the 3090 Strix' turn for block build-up for Ravens' Nest 'A'...with superb results: The HWInfo temps shown were for a 450W 3DM Port Royal run, using a weak temporary loop w/ a single 360 for both 5950X and 3090 Strix w/ fans at 800 rpm  ! And It can only get better w/ the huge cooling system destined for Ravens Nest 'A'. That conforming thermal putty (10 W/mK) sure is nice, as long as it does not deteriorate, which it is not known to do - time will tell though.
> 
> I used the thermal putty for the VRAM (both sides) as well _as the back_ of the GPU, plus added the custom cooler on the back...I have an extra backplate and some universal GPU block for w-cooling the back, but so far it is clear that I don't need it. I used Gelid GC Extreme paste for the die and Thermalright thermal pads for the power stages...strategic locations also got a sprinkling of MX5 for good measure. IMO, the Phanteks block performs better than the EKWB I had on there, even before it developed problems with runaway Hotspot temps due to a machining ridge of the EK block
> 
> I've done custom w-cooling on GPUs for a decade or so, but I still get nervous for the first boot-up after taking such hard-to-come-by custom PCB cards apart and modding them. Getting rewarded with such benchmark temps on a temp loop made it all worth it. Last pic is the 6900XT and between the two GPUs, I used a full 50 grams of the thermal putty
> 
> View attachment 2527728
> 
> 
> View attachment 2527729
> 
> 
> View attachment 2527730
> 
> 
> View attachment 2527731
> 
> 
> View attachment 2527732


I see one pic of little balls of putty. How did you actually form it for the VRAM? Been wondering what the best way is to get good coverage without using way too much.


----------



## KedarWolf

This is a bit off-topic but helps PC performance, so may be interesting to some.

I use a stripped-down bloatware removed Windows 10 with a ton of services not needed disabled.

See here how I do it.









Optimize-Offline Guide - Windows Debloating Tool, Windows 1803, 1903, 19H2, 1909, 20H1 and LTSC 2019 (forums.mydigitallife.net)
All credit goes to GodHand, who wrote and maintains this script, and to @gdeliana, who created the fork of GodHand's script we are using.





This is my O/S after running four hours with MSI Afterburner running and VMWare installed. Some extra services with VMWare, it would be leaner if I uninstalled it.

*Edit: The second pic is with VMWare uninstalled.*


----------



## J7SC

yzonker said:


> I see one pic of little balls of putty. How did you actually form it for the VRAM? Been wondering what the best way is to get good coverage without using way too much.


...your answer is in your question, sort of: Use (a bit) too much putty, seriously !

I did the 6900XT 3x8 pin first as its block needed to be notched, cut etc to fit. As a result, I did several test-fits with it and noticed that any 'excess' putty was just filling all the neighboring voids and had a much bigger surface area where it conformed to the block - so I settled on using those balls of putty I showed; three of those for four VRAM chips seemed to work best for the 3090 Strix as well.

The 6900XT also had a soft 2mm pad on the back of the GPU (though nothing else, no pads etc). So I made a bigger ball of putty and gently worked it into the rear cut-out of the Strix back-plate on top of the back of the GPU until a bit was protruding...then I mounted the additional back cooler on top of the back-plate to make contact.

Just ran another PR with even slightly better temps (due to slightly lower ambient). In any event, I never dreamed I could hit those VRAM and Hotspot temps. Final step will be to mount both setups into the Raven work-play 'case' and hook up the extended cooling system.

*EDIT:* Just ran PR again to test temps after heat cycling and cool-down/shut-down...for now on Asus V3 bios instead of KPE520 and not pushing max clocks yet though full PL...KPE 520 or perhaps even 1kw should be ok if these temps hold. Ambient for this was 24 C


----------



## KCDC

This Bykski block meant for the Zotac PGF OC works perfectly for the AMP Extreme Holo; however, due to the bulkiness of the active backplate, it won't fit in the top x16 slot as it hits the RAM, so it's currently in the lower x8 slot in the current config. Just got my second one, but this one's getting the passive backplate due to said issue, so I'm hoping I can fit the active-backplate card in slot 2 (x16) and the passive one in slot 3 (x8). It's gonna be tight though; not sure I'll be able to make the ports line up or even fit heatsinks in the backplate with the spacing and Bykski's bulkiness on both sides. Really hate when motherboards make their PCIe offsets different. 

The only other option is getting rid of the NVMe RAID at x16, which has been a huge help with the 9K read speed for work: putting that back to 2 drives (6000ish read at x8) and populating both x16 slots with the Zotacs.

That said, at x8 the card stayed around 2100-2130 in RDR2 if I kept the RAM OC at 1000-1100, which kept the temps around 43°C. Bumping the RAM OC above 1100 put the temps closer to 47°C, which dropped the bin. Allowed decent fps in 4K HDR, around 70-80 constant depending on where I am, DLSS on Quality. Potato pic attached

Rendering's been great on stock, 1995 so far, staying around 40°C

Really wish there was another option for this card in waterblocks that's thinner on both sides; this Bykski is the bulkiest I've ever had. Even the PCIe plate could be made single-slot, since all the ports are on one slot.


----------



## Zogge

Which one is the asus v3 bios ?


----------



## Arizor

Zogge said:


> Which one is the asus v3 bios ?


AFAIK it’s the same as the v4 bios on the ASUS website in everything but the name (the v4 is v3 for people the original v3 didn’t work for).


----------



## J7SC

Zogge said:


> Which one is the asus v3 bios ?





Arizor said:


> AFAIK it’s the same as the v4 bios on the ASUS website in everything but the name (the v4 is v3 for people the original v3 didn’t work for).


Asus V3 was the 1st r_bar enabled bios for the Strix. I never bothered with V4 which presumably is very similar, not least as KPE 520 r_bar is on the second bios of my card.


----------



## Lobstar

Jay is really milking that New World fear. I've been running the game for over 80 hours according to Steam on my EVGA 3090 FTW3U with the KPE 520W bios and a +90/+1448/+100 OC the entire time.


----------



## Falkentyne

Lobstar said:


> Jay is really milking that New World fear. I've been running the game for over 80 hours according to steam on my EVGA 3090 FTW3U with the KPE 520w bios and a +90/+1448/+100 OC the entire time.


Jay is a freaking clown.
We told him that his bios on his evga card is BUGGED and causing MSI Afterburner to report TDP Normalized% as TDP, we told him this in the first NW video he ever posted but unless you pay him on patreon, he doesn't care what normies say, so he repeats the same mistake AGAIN.

TDP Normalized spiking to 112% means it's an internal rail being hammered to death. Probably similar to what you get when you run Superposition 4K Custom extreme shaders preset, or Timespy Extreme... (these also report sky-high TDP Normalized %, especially on shunt-modded or 480W+ cards).


----------



## J7SC

...independent of Jay2c and all the other more sensationalized reporting, there really do seem to be some issues with New World that some folks are having, like this 2080 Ti OCN post here.

What is not clear is whether it is just one issue (i.e. PSU related) or just a byproduct of modern GPUs and their variable speed and power subject to the underlying algorithms; overshoot most definitely can happen much more easily with that.

Nor is this just an issue with New World... On my 3090, I have seen menus in FS2020 shoot up well past 600 fps, and per spoiler, my 6900XT has hit at least 1000 fps.



Spoiler


----------



## yzonker

Lobstar said:


> Jay is really milking that New World fear. I've been running the game for over 80 hours according to steam on my EVGA 3090 FTW3U with the KPE 520w bios and a +90/+1448/+100 OC the entire time.


Anything out of normal with the various rail powers, etc... that might explain why it pops some cards? Seems like it's doing something different than most other games.

Not really a game I'm interested in or I'd test myself. Been kinda curious.


----------



## gfunkernaught

Don't forget, Jay's true passion is film production, not tech. Get those likes Jay! 🙄😒


----------



## EarlZ

The only takeaway I have from his video is to set a profile for New World. Does MSI AB support auto profile switching? I'd love to limit the power usage in some games that I have.


----------



## yzonker

EarlZ said:


> The only takeaway I have from his video is to set a profile for New World. Does MSI AB support auto profile switching? I'd love to limit the power usage in some games that I have.


No it doesn't, but I think you could do that in a batch file with nvidia-smi commands. I haven't used it myself, but I know others on here have. Maybe someone else can confirm.
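As a sketch of that idea, something like the following could wrap a game launch with nvidia-smi power-limit calls. This is untested; the game path, GPU index, and wattages are placeholders. `nvidia-smi -pl` is a real flag, but it requires admin rights and a driver/card that supports setting the limit:

```python
# Hedged sketch: cap board power with nvidia-smi before launching a
# game, then restore the default limit afterwards. The game path and
# wattage values below are PLACEHOLDERS, not tested settings.
import subprocess

def power_limit_cmd(gpu_index: int, watts: int) -> list[str]:
    """Build the nvidia-smi command that caps board power at `watts` W."""
    return ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)]

def run_with_power_limit(game_exe: str, capped_w: int, default_w: int, gpu: int = 0) -> None:
    """Apply the cap, run the game to completion, then restore the default."""
    subprocess.run(power_limit_cmd(gpu, capped_w), check=True)
    try:
        subprocess.run([game_exe], check=False)  # blocks until the game exits
    finally:
        subprocess.run(power_limit_cmd(gpu, default_w), check=True)

# Usage (placeholder path and wattages):
# run_with_power_limit(r"C:\Games\NewWorld\NewWorld.exe", capped_w=300, default_w=350)
```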


----------



## Falkentyne

Need to see HWinfo64 caps with all power rails expanded and TDP Normalized visible.
You can calculate the number of watts MSVDD and NVVDD are using, and then get a close approximation of the amps (per phase or total?) on each rail, by comparing the input rails' true (corrected) power draw (they respond to shunt mods, so you need to correct them) against the output rails (which ignore shunt mods).


----------



## Lobstar

Falkentyne said:


> Need to see HWinfo64 caps with all power rails expanded and TDP Normalized visible.


Just a few minutes, but I reset everything after I had the game loaded. Please note that it appears they have fixed something, as I'm rarely going over 400W now, whereas when there was a 62fps limit on the queue menu it was using over 400W as recently as last night. [Found the scene posted below to take me over 400W] 3440x1440, everything maxed. 520W KPE bios.


----------



## Falkentyne

Lobstar said:


> Just a few minutes, but I reset everything after I had the game loaded. Please note that it appears they have fixed something, as I'm rarely going over 400W now, whereas when there was a 62fps limit on the queue menu it was using over 400W as recently as last night. [Found the scene posted below to take me over 400W] 3440x1440, everything maxed. 520W KPE bios.
> 
> View attachment 2527877


118% of TDP Normalized. There's your problem right there. That 118% normalized (some rail running at 118% of its 100% value) is NOT coming from either the 8-pins or PCIe slot power.

all the aux rails (Misc 0/1/2/3) are set to 400W default and 500W max

Doesn't seem to be coming from GPU Chip Power either since default chip power (100% value) is 292W and max is 312W.
GPU Chip Power is 295W at only 462W total board power. That's pretty high.

That high normalized HAS to be coming from the internal MSVDD / NVVDD voltage rails (current draw).
Probably why the cards are blowing up...

I've only seen Normalized that high on Superposition 4k extreme shaders and Timespy Extreme.
Even the 600W massive 82°C torture test (my VRMs get to 95°C, almost get black screen + 100% fans from that!) from Path of Exile with GI Shadows Quality=Ultra only reaches 110% of Normalized on my shunt-modded 3090 FE (normalized follows the TDP slider % past 100% for the internal rails, so it starts throttling slowly at about 107%, like one or two +15 MHz clock bins down) and starts hitting the brakes at 110% normalized to stay below 114% (max on the FE's TDP slider).

This game is putting as much load on the card as Timespy Extreme/Superposition 4k + Extreme shaders custom preset


----------



## Lobstar

Falkentyne said:


> 118% of TDP Normalized. There's your problem right there.


I have the slider at 120% though.


----------



## yzonker

Lobstar said:


> I have the slider at 120% though.


Well what I was planning to do is compare your max values to what I see in CP2077 or other games just to see if there are any standout differences. Probably not, but curious. I'll report back if I come up with anything. Might not get to it tonight though. Thanks.


----------



## Falkentyne

Lobstar said:


> I have the slider at 120% though.


Gotcha. But there are cards that can't handle 120%. Those cards are the ones blowing up 
The internal MSVDD and NVVDD power limits are not shown either on Hwinfo64 or on the ABE editor. They're SOMEWHERE in the bios though, because if you use the Kingpin 1kw bios, even those limits are removed, and then your TDP Normalized matches your TDP almost exactly (600W=60% on both normalized and main TDP).

You can get an idea of what those rails are pulling by subtracting Total Board Power - GPU Rail Power (NVVDD Output Power (sum)).
That gives you _MSVDD_ power.

In your case, its 467W-354W=113W MSVDD.
This is decent. 
That means NVVDD is hammering the rest. (I'm going to assume that's the exact summed value of output but I probably should ask him first).

The second NVVDD Output Power is also really suspicious. That's showing 112W on yours.

Just to put that in perspective, this is Path of Exile on GI Shadows Ultra hammering my card at 600W.
Note that my input rails are wrong. Only Total Board Power has the 1.98x shunt multiplier applied to it in HWinfo because I'm lazy. All other input rails are lower by almost 50%, affected by the shunt mod so multiply them by 1.98x if you want.

The output rails completely ignore the shunts.



You can see my MSVDD power is 220W.
(595W board power -376 W NVVDD Output power (sum)). NVVDD output Power (sum) is the rail power.

But look at the second NVVDD Output Power (the non-summed one).
It's only 96W. Maybe this has to do with me being at 1080p; I don't have a 1440p monitor to find out.
Yours is 112W at just 460W board power...

So something is being hammered on the card.

For NVVDD amps, if I'm thinking correctly, 354W divided by 1.15v = 307 amps 

Compare my TDP Normalized% to yours.

Even on your card, the internal rail limits for MSVDD and NVVDD voltage itself are based on the actual core voltage (which I think defaults to 1.15v NVVDD and 1.087v MSVDD, well that's what someone with a Kingpin card said). Increasing core voltage itself also extends these limits.


----------



## Lobstar

Falkentyne said:


> They're SOMEWHERE in the bios though, because if you use the Kingpin 1kw bios, your TDP Normalized matches your TDP almost exactly (600W=60% on both normalized and main TDP).


Yeah, not quite brave enough to try that with this game in particular. I reran the game for about 10 mins with the card reset to stock settings for this bios.


Edit: Oh yeah, I'm on the C0.0C bios btw.


----------



## EarlZ

I'd like to know if the MSI Suprim X 3090 can handle this game without blowing up; I do not have the game yet. What test can I run that should be a good indicator of whether it can handle it?


----------



## Falkentyne

EarlZ said:


> I'd like to know if the MSI Suprim X 3090 can handle this game without blowing up; I do not have the game yet. What test can I run that should be a good indicator of whether it can handle it?


Can you loop Timespy Extreme GT2 for 30 minutes without the VRM's popping?
What about Superposition 4k with extreme shaders preset (not just the regular 4k setting, it's a custom preset)?

Those absolutely hammer the VRM's. If you can run those and the magic smoke doesn't come out, you should be able to run New World.


----------



## EarlZ

Falkentyne said:


> Can you loop Timespy Extreme GT2 for 30 minutes without the VRM's popping?
> What about Superposition 4k with extreme shaders preset (not just the regular 4k setting, it's a custom preset)?


I haven't tried! Can the vast majority of 3090s handle that, assuming they do not have a weak solder joint?
Have you tried it on your 3090?


----------



## Falkentyne

EarlZ said:


> I haven't tried! Can the vast majority of 3090s handle that, assuming they do not have a weak solder joint?
> Have you tried it on your 3090?


Yes but it gets too hot and just throttles. I reached 120% of TDP Normalized on those tests and >75C.
Nvidia 3090 Founder's Edition has 10 70 amp smart power stages for Vcore, so that's 700 amps theoretical...(good luck cooling that).

12 minutes: (this is the 3080 version)


----------



## Lobstar

Falkentyne said:


> Yes but it gets too hot and just throttles.


I was going to give you guys a baseline with my machine but looping TSE GT2 test caused it to crash after three minutes. I haven't had any crashes in New World in 80 hours of game play with those settings. /shrug


----------



## Falkentyne

Lobstar said:


> I was going to give you guys a baseline with my machine but looping TSE GT2 test caused it to crash after three minutes. I haven't had any crashes in New World in 80 hours of game play with those settings. /shrug


TSE GT2 is hellish to run. That and Superposition Extreme shaders 4k custom are the only tests that massively power limit (with a huge thick green bar) my shunt modded card from the sub power rails. Despite not getting anywhere close to TDP% (nicely below 100%), I reached 120% normalized TDP on both of them before. Throttles so much vcore drops down to something like 0.95v...

I saw someone with a shunt modded Strix run TSE GT2 and not even one blip of throttle. Higher internal rail limits on that card.


----------



## Arizor

Pardon my ignorance fellas - to run TSE GT2, is it as simple as custom run > select only GT2 and let it go? Or are there other settings to turn on to make it particularly brutal?


----------



## EarlZ

Falkentyne said:


> Yes but it gets too hot and just throttles. I reached 120% of TDP Normalized on those tests and >75C.
> Nvidia 3090 Founder's Edition has 10 70 amp smart power stages for Vcore, so that's 700 amps theoretical...(good luck cooling that).
> 
> 12 minutes: (this is the 3080 version)


This is not a 30-min test, but I had to stop it since I was a bit worried about the GPU hotspot hitting 90°C! I am not sure what all of the numbers mean. My OC setting is 0.950V @ 1950MHz with a 15MHz step per voltage point and a power limit of 107%, stock MSI Suprim X 3090 bios with the updated cache thing.



Arizor said:


> Pardon my ignorance fellas - to run TSE GT2 is it simple as custom run > select only GT2 and let it go? Or are there other settings to turn on and make it particularly brutal?


Include loop so it does more than 1 run.


----------



## Falkentyne

EarlZ said:


> This is not a 30min test but I had to stop it since I was a bit worried about the GPU hotspot hitting 90C! I am not sure what all of the numbers mean. My OC settings is at 0.950V @ 1950Mhz with a 15Mhz step per voltage point and a power limit of 107%, Stock MSI Suprim X 3090 bios with the updated cache thing.
> 
> View attachment 2527908
> 
> 
> 
> 
> 
> Include loop so it does more than 1 run.


Yikes, that's hot for just 0.950v. Very hot.
Look at how high the TDP normalized is even with an undervolt


----------



## Arizor

Ah simple, thanks.


----------



## EarlZ

Falkentyne said:


> Yikes, that's hot for just 0.950v. Very hot.
> Look at how high the TDP normalized is even with an undervolt


I would agree it's too hot, but it was my understanding that the hotspot is 10-13°C above GPU temps.




Falkentyne said:


> Look at how high the TDP normalized is even with an undervolt


Does that mean MSI also has bad limits on the card?
What should the expected TDP Normalized be? I was guessing it should be the same with or without an undervolt, as the card will attempt to pull as much power as it can. My reasoning is that at 0.950V or 1.089V the card will reach the same TDP, just at different clock speeds.


----------



## Arizor

Here's mine after 35 mins of straight loopin'. Strix under an Optimus, warm day in Australia with the study door closed so probably around 2-3C hotter in the room than otherwise.


----------



## geriatricpollywog

Arizor said:


> Here's mine after 35 mins of straight loopin'. Strix under an Optimus, warm day in Australia with the study door closed so probably around 2-3C hotter in the room than otherwise.
> View attachment 2527912


That looks really good. Do you know what your ambient temperature is?


----------



## Arizor

0451 said:


> That looks really good. Do you know what your ambient temperature is?


Sadly not mate, the Unify X570 doesn't have pins for a temp reader, the buggers, and I'm not getting all that clunky external stuff. I can say that leaving it idling now for 5 mins (aside from doing stuff such as browsing), the GPU is 24C, so I'd imagine it's near that point.


----------



## geriatricpollywog

Arizor said:


> Sadly not mate, Unify X570 doesn't have pins for a temp reader, the buggers, and I'm not getting all that clunky external stuff . I can say leaving it idling now for 5 mins (asides from doing stuff such as browsing) the GPU is 24C, so I'd imagine near that point.


Do you have a thermometer in your computer room?


----------



## Arizor

Nope @0451 , but I would say 23ish C feels about right.


----------



## Falkentyne

EarlZ said:


> I would agree its too hot but it was my understanding that the hotspot is 10-13c above gpu temps
> 
> 
> 
> 
> Does that mean MSI also has bad limits on the card?
> What should be the expected TDP normalized, I was guessing that it should be the same with or with out an undervolt as the card will attempt to pull as much power as it can, My reasoning behind this guess is that if the card is 0.950v or 1.089v they will both reach the same TDP but a difference of clock speeds.


I meant the core temp.
Hotspot is always going to be a certain minimum above the core.
I don't know how bmgjet managed to get 5C delta but on Founder's edition 3090 it's 10C minimum no matter what you try to do. 3080 Ti is 7C minimum.


----------



## EarlZ

Arizor said:


> Here's mine after 35 mins of straight loopin'. Strix under an Optimus, warm day in Australia with the study door closed so probably around 2-3C hotter in the room than otherwise.
> View attachment 2527912


Water cooled?




Falkentyne said:


> I meant the core temp.
> Hotspot is always going to be a certain minimum above the core.
> I don't know how bmgjet managed to get 5C delta but on Founder's edition 3090 it's 10C minimum no matter what you try to do. 3080 Ti is 7C minimum.


I think my core temp of 77C is fine at a 31C ambient with fan speed limited to 85%. I've not seen a game push this past 74C though!


----------



## Arizor

Yep sorry @EarlZ - Optimus is a waterblock.


----------



## EarlZ

The YT algorithm recommended this. It's probably NOT from New World, but damn!

Beta New World RTX EVGA 3090! Fuego! PIDO AYUDA PARA DIFUNDIR! - YouTube


----------



## Lobstar

Arizor said:


> Here's mine after 35 mins of straight loopin'. Strix under an Optimus, warm day in Australia with the study door closed so probably around 2-3C hotter in the room than otherwise.
> View attachment 2527912


My room temp was 23.6C in this run, which was about 20 mins. Also under an Optimus, but mine is an FTW3U.









And then here is the card with a +95 OC but curve limited to 1v.









And then this crashed after 3.5 mins but it was a stupid attempt anyway.


----------



## J7SC

...just spent close to two hours with the test bench (hopefully the last time before the case mount with full water cooling instead of a single 360 rad for both the 5950X and the 3090), ambient at 24C. As mentioned before, I'm still on the stock BIOS, so 450W +/-, but clocks definitely gained after mounting the new Phanteks block, thermal putty, and a big but passive heatsink on the backplate. GPU VRM temps are also great, about 12-15C lower than before.


----------



## Arizor

Wow @J7SC, is that your Strix? You really won the GPU core silicon lottery with that frequency!


----------



## geriatricpollywog

Arizor said:


> Wow @J7SC is that your Strix? You really win the gpu core silicon lottery with that frequency!


Port Royal is the great equalizer for 3090s.


----------



## Arizor

This is true @0451 . I can hit 15k with my dud Strix chip, but no way I’m getting close to anything higher.


----------



## Falkentyne

J7SC said:


> ...just spent close to two hours with the test bench (hopefully the last time before case mount w/ full w-cooling instead of single 360 rad for both 5950X and 3090), ambient at 24 C. As mentioned before, I'm still on the stock bios so 450W+- but clocks definitely gained after mounting the new Phanteks block, thermal putty and big but passive heatsink on the back-plate. GPU VRM temps are also great, about 12 C - 15 C lower than before
> 
> View attachment 2527935


Please check your 8 pin#2 power connector on both the cable and PSU end. It's dropping way too low and affecting misc2 input voltage.


----------



## MangoMunchaa

Hey guys, I'm getting really fed up with my EK ref 3090 waterblock (roughly 17C delta @ 390W after repadding/repasting). Just wondering if anyone knows what's currently the best waterblock for the reference 3090?


----------



## yzonker

MangoMunchaa said:


> Hey guys, I'm getting really fed up with my EK ref 3090 waterblock (roughly 17C delta @ 390W after repadding/repasting). Just wondering if anyone knows what's currently the best waterblock for the reference 3090?


Bykski might be the best choice unless Optimus finally releases a block. @jura11 has had good luck with them if I remember correctly.









[Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)





I have the Corsair. With a remount I'm definitely lower than what you are seeing, about 11C @ 390W. That is with dual D5s to increase flow (~2C improvement). Most reviews I've seen put the Corsair and EK at approximately the same delta, though.
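To put the two block deltas in the same units: treating the block as a single lumped thermal resistance (delta = power × R) gives a crude way to compare blocks and extrapolate to other power levels. The 17C and 11C @ 390W figures are the ones reported in-thread; the 500W extrapolation is naive and ignores flow rate and coolant temperature rise:

```python
# Lumped model: core-to-water delta = power * block thermal resistance.
# Input deltas are the ones reported in-thread at 390 W.

def thermal_resistance(delta_c, power_w):
    """Effective core-to-water resistance in C per watt."""
    return delta_c / power_w

r_ek = thermal_resistance(17, 390)       # EK reference block, as reported
r_corsair = thermal_resistance(11, 390)  # Corsair block after a remount

print(f"EK:      {r_ek:.3f} C/W -> {r_ek * 500:.1f} C delta at 500 W")
print(f"Corsair: {r_corsair:.3f} C/W -> {r_corsair * 500:.1f} C delta at 500 W")
```

Under this model, the roughly 6C gap at 390W widens to almost 8C at a 500W XOC load, which is why block choice matters more the harder the card is pushed.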


----------



## ArcticZero

yzonker said:


> Bykski might be the best choice unless Optimus finally releases a block. @jura11 has had good luck with them if I remember correctly.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)
> 
> 
> 
> 
> 
> I have the Corsair. With a remount I'm definitely lower than what you are seeing. About 11C @ 390w. That is with dual D5s to increase flow (~2C improvement). Most reviews I've seen put the Corsair and EK at approximately the same delta though.


I too am waiting for an Optimus reference block to replace this EKWB one, whose nickel plating is already fading in some spots. I have yet to hear anything from them, though perhaps I missed an earlier announcement.


----------



## Lobstar

ArcticZero said:


> I too am waiting for an Optimus reference block to replace this EKWB one whose nickel plating is already fading in some spots. I have yet to hear anything from them, but I suppose if anything I missed any earlier announcements.


I just had the acrylic crack on my Asus X570 Hero EKWB monoblock. Every single product I've purchased from them, with the exception of the absolutely useless Monarch RAM cooling setup, has been a letdown. There is zero precision to their Torque fittings. The nickel on the monoblock was already wearing after only 1.5 years of use. The gaskets are cheap. ZMT is worse and more expensive than just getting Zygon. I guess my two Dual XTOP pumps are OK too.

To the contrary, my Optimus stuff has been absolutely amazing. The Foundation and Absolute are both worth the difference in price if you have compatible hardware.


----------



## Lobstar

Arizor said:


> This is true @0451 . I can hit 15k with my dud Strix chip, but no way I’m getting close to anything higher.


Port Royal is really weird with clock speeds.

This run I managed to get 2265, but it dropped to 2250 after a short while.








I scored 15 298 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) via www.3dmark.com





But this is still my best run ever at just 2190.








I scored 15 595 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) via www.3dmark.com


----------



## Falkentyne

Buildzoid just did a PCB multimeter probe of a failed Gigabyte 3090 Vision, from New World blood sacrifice, and said "Vcore failed."


----------



## J7SC

Arizor said:


> Wow @J7SC is that your Strix? You really win the gpu core silicon lottery with that frequency!


Thanks! To tell you the truth though, with over 30 GPUs here right now (I work in the computer-related field), I'm a bit allergic to the expression 'silicon lottery' - win some, lose some...also, the 'duds' probably taught me more than the out-of-the-box good ones.

For gaming, all I really want (and got) is a solid 2140 MHz powering a C1 OLED / HDR, with great temps to boot. While I had used thermal putty on other occasions, this was the first time I applied it to a 450W-520W card and highly recommend it, especially after multiple heat cycles now > still WOW, including a hotspot delta consistently in the 11.5 C to 12.5 C range. I also spent a lot of time modding the backplate with putty on the GPU back, MX5 and a big heatsink... 



Falkentyne said:


> Please check your 8 pin#2 power connector on both the cable and PSU end. It's dropping way too low and affecting misc2 input voltage.


...good catch - I know about it though as the PSU on the test bench is a decade old and won't be part of the final assembly. 

EDIT: The test bench has an old BeQuiet Dark Power Pro 1200W I (ab)used a lot in the olden days for HWBot...on the Strix, 2 of 3 PCIe connector LEDs had been blinking away on the card even when the unit was powered down, just as was the case when I first got the card. Said PSU is switchable between single and multi-rail and I really should switch it over...that said, a trusty 1300W Antec HPC Platinum will again be the final power source in the full build - love that PSU, and its PCIe cables, which are so thick they're hard to bend and then also stay in place without bending back...


----------



## gfunkernaught

Lobstar said:


> Port Royal is really weird with clock speeds.
> 
> This run I managed to get 2265 but it dropped to 2250 after short while.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 298 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) via www.3dmark.com
> 
> 
> 
> 
> 
> But this is still my best run ever at just 2190.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 595 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) via www.3dmark.com


That one-bin drop is normal GPU Boost behavior. Your core temp probably hit and/or exceeded 40C at some point and stayed there. It looks like Boost averages out the temps, and if it sees an average of 40C over a period of time, it knocks the bin down one.
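A toy model of that behavior; the 40C threshold, the averaging window, and the 15 MHz bin are assumptions taken from the post above, not documented NVIDIA values:

```python
from collections import deque

# Toy GPU Boost thermal-bin model: if the rolling average core temp
# crosses a threshold, drop the clock by one 15 MHz bin.
# Threshold/window/bin size are assumptions, not NVIDIA-documented values.

BIN_MHZ = 15
THRESHOLD_C = 40.0
WINDOW = 10  # samples in the rolling average

def boost_clock(temps, base_mhz=2265):
    window = deque(maxlen=WINDOW)
    clock = base_mhz
    for t in temps:
        window.append(t)
        avg = sum(window) / len(window)
        if avg >= THRESHOLD_C and clock == base_mhz:
            clock -= BIN_MHZ  # knock one bin off, e.g. 2265 -> 2250
    return clock

# Warms up past 40C partway through the run:
temps = [36, 37, 38, 39, 40, 41, 41, 42, 42, 42, 43, 43]
print(boost_clock(temps))  # prints 2250
```

Note the drop lags the actual temperature rise because it is keyed off the rolling average, which matches the observation that the bin comes off "after a short while" rather than the instant the core touches 40C.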


----------



## yzonker

Lobstar said:


> Port Royal is really weird with clock speeds.
> 
> This run I managed to get 2265 but it dropped to 2250 after short while.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 298 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) via www.3dmark.com
> 
> 
> 
> 
> 
> But this is still my best run ever at just 2190.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 595 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) via www.3dmark.com


Yeah, that's a big difference. Feels like there was something else going on with your system. What do you get now? Those runs look fairly old. I've seen some inconsistency like that, but I think it was from me shaping the V/F curve differently.


----------



## yzonker

Falkentyne said:


> Buildzoid just did a PCB multimeter probe of a failed Gigabyte 3090 Vision, from New World blood sacrifice, and said "Vcore failed."


Yeah, that's pretty interesting. I just watched it. Lines up with your analysis on Reddit. I wonder if what he said might be correct too, in that they did something unconventional with the VRM that caused it somehow. He said that card has 10x 60A stages, but I think the true reference cards are maybe just 8x 50A? Still a little unclear from his video of the Zotac analysis.


----------



## Lobstar

yzonker said:


> Yea that's a big difference. Feels like there was something else going on with your system. What do you get now? Looks like those runs are fairly old. I've seen some inconsistency like that, but I think it was from me shaping the VF curve differently.


I actually haven't really tried to get a max clock out of it in a while. I ran a couple times and got this result.








I scored 15 652 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) via www.3dmark.com


----------



## J7SC

@Falkentyne - ...switched PSUs to the final (Antec) config...the next few days (weeks?) will be taken up with the final custom loop to replace the single 360. It all seemed like a good idea at the time...


----------



## Arizor

It blows my mind how so many clocks in this thread hit over 2100, whilst my measly Strix craps itself above 2040


----------



## yzonker

Lobstar said:


> I actually haven't really tried to get a max clock out of it in a while. I ran a couple times and got this result.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 652 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10) via www.3dmark.com
> 
> 
> 
> 
> View attachment 2528003


As has been discussed, that steep curve will drop your effective clock a bunch compared to a very shallow slope leading up to your fixed voltage point. If you happened to run a much less steep curve on that other run, that would explain at least some of the difference in the 3DMark average clock.
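A quick sketch of the effect. Both hypothetical curves below end at the same fixed 1.000V / 2130MHz point; the only difference is how much clock is conceded per 25mV step below it (all numbers invented for illustration):

```python
# Two hypothetical V/F curves ending at the same 1.000 V / 2130 MHz point.
# When GPU Boost sheds voltage points under load, the steep curve loses
# far more effective clock than the shallow one. Numbers are illustrative.

steep   = {1.000: 2130, 0.975: 2085, 0.950: 2040}  # 45 MHz per step
shallow = {1.000: 2130, 0.975: 2115, 0.950: 2100}  # 15 MHz per step

def avg_clock(curve, dwell):
    """Average clock given the fraction of time spent at each voltage point."""
    return sum(curve[v] * frac for v, frac in dwell.items())

# Say the card spends 40% of a run one or two points below peak voltage:
dwell = {1.000: 0.6, 0.975: 0.2, 0.950: 0.2}

print(f"steep:   {avg_clock(steep, dwell):.0f} MHz average")    # 2103
print(f"shallow: {avg_clock(shallow, dwell):.0f} MHz average")  # 2121
```

Same peak clock in both cases, but the shallow curve keeps the 3DMark-reported average 18 MHz higher in this made-up scenario; the steeper the slope below your fixed point, the more each shed voltage bin costs.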


----------



## gfunkernaught

Arizor said:


> It blows my mind how so many clocks in this thread hit over 2100, whilst my measly Strix craps itself above 2040


Hitting and sustaining are different. Is your Strix air cooled?


----------



## Falkentyne

J7SC said:


> @Falkentyne - ...switched PSUs to final (Antec) config...next few days (weeks? ) will be taken up with the final custom loop to replace single 360. It all seemed like a good idea at the time...
> 
> View attachment 2528004


Well that card is sure happier about that.
Did you gain any extra stability from that?


----------



## J7SC

Falkentyne said:


> Well that card is sure happier about that.
> Did you gain any extra stability from that?


...the Antec HPC is the regular PSU for the Strix anyhow. I had just moved the card over to the temp test bench for new block / putty / heatsink leak testing. It didn't crash with the other BQ PSU, but I also haven't pushed it yet via the KPE 520 etc. due to the single rad on the test bench and an ongoing update to the custom loop I still have to finish.


----------



## Arizor

gfunkernaught said:


> Hitting and sustaining are different. Is your strix air cooled?


Yep, Optimus block; maxes at 38C under 100% load, no dice above 2040 regardless. Wondering if it could be my EVGA 1000W supply, but that's probably a stretch.


----------



## J7SC

Arizor said:


> It blows my mind how so many clocks in this thread hit over 2100, whilst my measly Strix craps itself above 2040


I think your card is pretty special re. clocks. Per your earlier post here, your max effective clock _exceeds_ your max nominal clock by 25 MHz! Talk about phase-shifting...


----------



## Arizor

Hahaha I hadn’t noticed that…!


----------



## gfunkernaught

Arizor said:


> Yep Optimus block, maxes at 38C under 100% load, no dice above 2040 regardless. Wondering if it could be my EVGA 1000W supply, but that's probably a stretch


What offset are you setting? 38C at 500W is awesome. I have to crank my fans up to get that.


----------



## Arizor

gfunkernaught said:


> What offset are you setting? 38c at 500w is awesome. I have to crank my fans up to get that.


Yeah it's doing well. PL, Voltage maxed out, custom curve up to 2040 (then flat), +1250 on the mem (11k).

edit: fans are push/pull Noctua, 65%


----------



## gfunkernaught

Arizor said:


> Yeah it's doing well. PL, Voltage maxed out, custom curve up to 2040 (then flat), +1250 on the mem (11k).
> 
> edit: fans are push/pull Noctua, 65%


Is 2040mhz your effective clock or requested/advertised? Have you tried just using the offset slider instead of the v/f curve?


----------



## MangoMunchaa

yzonker said:


> Bykski might be the best choice unless Optimus finally releases a block. @jura11 has had good luck with them if I remember correctly.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)
> 
> 
> 
> 
> 
> I have the Corsair. With a remount I'm definitely lower than what you are seeing. About 11C @ 390w. That is with dual D5s to increase flow (~2C improvement). Most reviews I've seen put the Corsair and EK at approximately the same delta though.


Hmm, alright, I'll wait a bit for the Optimus block, otherwise I might try the Bykski. Thanks for your help, I appreciate it.


----------



## jura11

MangoMunchaa said:


> Hey guys, I'm getting really fed up with my EK ref 3090 waterblock (roughly 17C delta @ 390W after repadding/repasting). Just wondering if anyone knows what's currently the best waterblock for the reference 3090?


Hi there

What delta do you mean, core to hotspot or core to water?

As @yzonker said, I'm using Bykski waterblocks on my RTX 3090 and they perform quite well; the difference between GPU core and GPU hotspot is 8-10°C under load at 500-550W.

To get a better (lower) core-to-water delta you really want more radiator space, and even then it depends on a few other factors, like the die and the GPU block itself.

These RTX 3xxx cards are different beasts from the RTX 2xxx, which were much easier to cool than the 3090 for sure. I remember running a 1000W BIOS on my RTX 2080 Ti Strix as a daily and temperatures wouldn't break 40-42°C in the hottest weather. Running the RTX 3090 with the XOC 1000W BIOS uncapped, I think I would probably hit 40-42°C at normal ambient temperatures (23-25°C); I usually run my RTX 3090 capped at 75-85% max and temperatures are okay at 36-38°C.

Hope this helps

Thanks, Jura


----------



## Sulek83

error


----------



## Sulek83

Toopy said:


> Can anyone help me out here as to why this zotac 3090 amp holo is power limiting at 280w?
> 
> To me it's probably because 8pin #3 is at 150W, yet there is plenty of headroom available on 1,2 and the pci-e bus.
> Maximum in the bios is 407w
> View attachment 2511563


Hi Toopy,

I have a Zotac 3090 ArcticStorm. I think it is the same card as the 3090 AMP Holo but with a water block.
I have the same issue when mining ETH: it maxes out at 280W and mines poorly, circa 116 MH/s.

Have you managed to find a solution?


----------



## KCDC

Decided to get my second one in mid-project last night, which was a gamble, and man, I thought I'd screwed myself once I finally got the second one in and saw how close the ports were. I went through my old drawer of fittings I never ended up using from the old days and found one of those mini "SLI extenders". Guess I didn't waste my money after all, it only took 4 years.

The passive gets about 10-12C hotter on the memory hotspot than the active, around 67C while rendering (the real test will be an overnight render). Haven't added fans to it yet, and I actually might swap the 10mm-tall heatsinks for these 40mm-tall versions, since they seem to be pretty effective with thermal glue, unless anyone's tested them already and found no difference. Add an 80mm Noctua and see how it goes. Also noticing the Bykski passive backplate has no cutout for the NVLink, so that'll be a Dremel hack if I can ever find a 3-slot that's not 300 bucks.

Not the prettiest ghetto job, but it'll do until I have downtime and my new cables come in. Happy to be on the non-scalped 3090 team!


----------



## J7SC

KCDC said:


> Decided to get my second one in mid-project last night which was a gamble, and man, I thought I screwed myself once I finally got the second one in and saw how close the ports were. I went through my old drawer of fittings I never ended up using from the old days and found one of those mini "sli extenders". Guess I didn't waste my money afterall, only took 4 years
> 
> The passive gets about 10-12 C hotter on the memory hotspot than the active, around 67C while rendering (real test will be on an overnight render). Haven't added fans to it yet, and I actually might swap the 10mm tall heatsinks with these 40mm tall versions since they seem to be pretty effective with thermal glue, unless anyone's tested already and found no difference. (...)


As posted a few times now, I've got my RTX 3090 VRAM temps down to the low-to-mid 50C range with the passive heatsinks shown below...I have taller ones that are smaller in total area, and these here were the tallest that still allowed some contorted access to the PCIe release clip (something to keep in mind, btw). FYI, I also used a lot of MX5 between the backplate and the heatsink, with thermal compound glue only at select 'perimeter' spots. The temps are so good that I probably won't bother with the active backplate design and parts I have ready to go. Low-to-mid 50C at full bench tilt at 24C ambient for VRAM temp is good enough for me.

The heatsinks below have both horizontal and vertical cuts which seems to add to their effectiveness via bigger surface area and air-flow capture.










6900XT and RTX 3090 w/ heatsinks attached


----------



## KCDC

Cool, so basically what I have on there already, which covers 160x80 at 10mm height. Still might try the taller finned version and see if there's a difference. Thanks!


----------



## J7SC

KCDC said:


> Cool, so basically what I have on there already which is covering 160x80 10mm height. Still might try the taller finned version and see if there's a difference. Thanks!


...just make sure that you can still easily unhook the GPUs from their PCIe mobo slots, as most heatsinks make it a fairly tight affair to get to the release clip...I ended up using a modded paint-can-lid opener after test-fitting and leak testing both cards on the test bench.


----------



## Arizor

gfunkernaught said:


> Is 2040mhz your effective clock or requested/advertised? Have you tried just using the offset slider instead of the v/f curve?


2040 requested, effective is around 2018. Yeah, I tried using the slider and get worse results (CTD above 2010ish requested).


----------



## ArcticZero

Buildzoid posted an update of the Gigabyte 3090 killed by New World. Looks like the core is fine.


----------



## jura11

KCDC said:


> Decided to get my second one in mid-project last night which was a gamble, and man, I thought I screwed myself once I finally got the second one in and saw how close the ports were. I went through my old drawer of fittings I never ended up using from the old days and found one of those mini "sli extenders". Guess I didn't waste my money afterall, only took 4 years
> 
> The passive gets about 10-12 C hotter on the memory hotspot than the active, around 67C while rendering (real test will be on an overnight render). Haven't added fans to it yet, and I actually might swap the 10mm tall heatsinks with these 40mm tall versions since they seem to be pretty effective with thermal glue, unless anyone's tested already and found no difference. Add a 80mm noctua and see how it goes. Also noticing the bykski passive backplate has no cutout for the NVLink, so that'll be a dremel hack if I can ever find a 3 slot that's not 300 bucks.
> 
> Not the prettiest ghetto job but it'll do until I have downtime and my new cables come in. Happy to be on the non-scalped 3090 team!



Hi there 

I wouldn't call it a ghetto job, it looks okay, and if you are happy then I would be happy too; that's what counts more than anything else.

What I have done: I use both of my RTX 3090s for rendering, both running the XOC 1000W BIOS capped at 75%, and both overclocked +120MHz on the core, with VRAM at +1295MHz on the top card and +1000MHz on the bottom. Temperatures won't break 36-38°C in rendering, and VRAM temperatures won't break 62°C on the top card, while on the bottom one I would see 72-76°C max; that's with the Bykski thermal pads, and with Gelid thermal pads VRAM temperatures are in the 50s. That's also with a passive backplate on both GPUs, the same heatsink as @J7SC on one and a proper copper heatsink of the same size on the other, plus a 120mm fan pointed at the backplate.

Regarding the NVLink cutout, I didn't realise that, as I still can't find a 2-slot NVLink for reasonable money.

Hope this helps 

Thanks, Jura


----------



## Falkentyne

jura11 said:


> Hi there
> 
> I wouldn't call it ghetto job, looks okay there and if you are happy then I would be happy too, that's what counts more than anything else
> 
> What I have done and used both of my RTX 3090s for rendering with both running XOC 1000W BIOS capped at 75% and with both OC at +120MHz on Core and VRAM with 1295MHz on top and bottom 1000MHz, temperatures won't break 36-38°C in rendering and VRAM temperatures won't break 62°C on top one and on bottom one I would see 72-76°C as max that's with Bykski thermal pads, with Gelid thermal pads VRAM temperatures are in 50's that's with passive backplate on both GPUs and used as @J7SC same heatsink on one and on other I have used proper copper heatsink same size as above mentioned plus 120mm fan was pointed on backplate
> 
> Regarding the NvLink cutout, didn't realise that as I still can't find 2 slot NvLink for reasonable money
> 
> Hope this helps
> 
> Thanks, Jura


Did you do better with Gelid Extreme or Gelid Ultimate thermal pads?


----------



## KCDC

jura11 said:


> Hi there
> 
> I wouldn't call it ghetto job, looks okay there and if you are happy then I would be happy too, that's what counts more than anything else
> 
> What I have done and used both of my RTX 3090s for rendering with both running XOC 1000W BIOS capped at 75% and with both OC at +120MHz on Core and VRAM with 1295MHz on top and bottom 1000MHz, temperatures won't break 36-38°C in rendering and VRAM temperatures won't break 62°C on top one and on bottom one I would see 72-76°C as max that's with Bykski thermal pads, with Gelid thermal pads VRAM temperatures are in 50's that's with passive backplate on both GPUs and used as @J7SC same heatsink on one and on other I have used proper copper heatsink same size as above mentioned plus 120mm fan was pointed on backplate
> 
> Regarding the NvLink cutout, didn't realise that as I still can't find 2 slot NvLink for reasonable money
> 
> Hope this helps
> 
> Thanks, Jura


Haha, thanks, the temps aren't in the red zone, so I'm happy.
I will look into undervolting to get the temps down; water cooling the memory on one is definitely kicking the water temp up higher than before, but it's still manageable. The CPU is in the same loop too, so I may investigate giving it its own loop.

I will probably stay away from any custom BIOS stuff and stick with stock (I think 460) just for the sake of reliability. Redshift also hates any sort of overclocking, so that's out the door, at least for rendering. Also, I think using other BIOSes still disables some of the ports, correct? I do use all 4.
If I end up having to create some crazy VDB volume scenes at crazy resolutions, or the same with geometry, then I might have to fit an NVLink into the project budget to double the VRAM, but so far it hasn't complained on this current 5640x2120 scene. Stage visuals have some crazy resolutions.

EDIT: I also didn't use the pads that came with the blocks; I used 1.5mm Gelid Extremes, which fit perfectly on these.


----------



## kx11

That is a nice little thing to see when I used HDMI 2.1 instead of DP on my LG 27GP950.

Glad my 3090 actually got it, since it was advertised and not really tested by anyone.


----------



## J7SC

...thought I'd run 'one more for the road' before disassembling the setup from the test bench, and while I was at it, also confirmed the LinkUp riser PCIe 4.0 x16 functionality. The BIOS is still the lower-PL stock one for this, and cooling is via a single 360/60 for CPU and GPU (final will be 1680 x 60+).

...the mods I did for cooling allow for a higher 'efficient' VRAM setting; I haven't even maxed it yet, but I'll do that once the final config is up and running and the KPE 520 is loaded. What's weird though is that the GPU fans show zero rpm but '53%' as max below, and zero the rest of the time. There are no GPU fans plugged in at all (water block), and when I tried to use MSI AB to manually override, it defaults to 30% as the 'min'.

*Q:* does anyone know how to keep the phantom fans from potentially stealing a bit of PL?


----------



## MangoMunchaa

jura11 said:


> Hi there
> 
> What delta you mean, core to core hotspot or core to water?
> 
> As @yzonker said I'm using on my RTX 3090 Bykski waterblocks which performs quite well, difference between GPU core and GPU hot-spot is 8-10°C under load at 500-550W
> 
> Getting better or lower core to water delta you really want have more radiator space and still think that's a lot depending on few other factors like die and GPU block itself
> 
> These RTX 3xxx are different beasts from RTX 2xxx which was much easier to cool than 3090 for sure, I remember I have run on my RTX 2080Ti Strix 1000W BIOS as daily and temperatures wouldn't break 40-42°C in hottest weather, running RTX 3090 with XOC 1000W BIOS uncapped I think I would probably hit 40-42°C in normal ambient temperatures (23-25°C), usually running my RTX 3090 capped at 75-85% as max and temperatures are okay at 36-38°C
> 
> Hope this helps
> 
> Thanks, Jura


I meant the core-to-water delta. I have two 360x30mm rads and one 360x54mm rad; depending on my fan curves, my water usually sits at 35C with a quiet profile and 28C with a louder profile.

My core-to-hotspot delta is around 13C, but it was 10C before I changed my thermal pads from the EK ones to Gelid Ultimate (I later found out these aren't great because they don't compress well), so I imagine that's probably why it increased. However, I don't understand why the core runs at 52C with 35C water at 390W (it was about the same, actually a little worse, before changing pads).

Thank you Jura!


----------



## Arizor

I'm reading up on PSUs and it seems my EVGA G5 1000W might be limiting me/causing my OC crashes due to not handling spikes very well. What PSUs do you folks use/recommend? How is the Be Quiet range? - https://www.pccasegear.com/products...er-pro-12-titanium-modular-1200w-power-supply


----------



## Zogge

Arizor said:


> It blows my mind how so many clocks in this thread hit over 2100, whilst my measly Strix craps itself above 2040


My 3090 Strix can reach 2220 as well, and memory +1400, in benchmarks (Bykski + mp5).
For gaming it is 2160/+750 mem, and this is also my 24/7 setup. 520W Kingpin BIOS.


----------



## Arizor

Zogge said:


> My 3090 strix can reach 2220 as well and memory +1400 in benchmarks. (Bykski+mp5)
> On gaming it is 2160/+750 mem and this is also my 24/7 setup. 520w kingpin bios.


Yeah I think I'm going to grab this Be Quiet PSU and see if it does anything for me...


----------



## Zogge

Arizor said:


> Yeah I think I'm going to grab this Be Quiet PSU and see if it does anything for me...


I have the AX1600i, works great. Then a 3kW UPS with voltage regulation and pure sine output feeding it. (Found one with very silent fans and intelligent fan control, hence quiet in normal operation.)


----------



## Arizor

Whoa, that's a beefy PSU...!


----------



## bmgjet

Don't worry about only hitting 2040MHz.
The difference between 2100 and 2040 is nothing performance-wise in games.
I can run mine up to 2180MHz (550W) but I stick to 2050MHz since that's the point where power draw just gets out of control.


----------



## geriatricpollywog

Zogge said:


> My 3090 strix can reach 2220 as well and memory +1400 in benchmarks. (Bykski+mp5)
> On gaming it is 2160/+750 mem and this is also my 24/7 setup. 520w kingpin bios.


Your overclock seems a bit low. Have you tried 2175/1300?


----------



## KedarWolf




----------



## jura11

Falkentyne said:


> Did you do better with Gelid Extreme or Gelid Ultimate thermal pads?


Hi there 

I have used Gelid Extreme pads on both GPUs and recently replaced the pads on a friend's RTX 3090 with Gelid pads as well; his GPU improved VRAM temperatures by a good 18-22°C. Previously he would see 86°C, now he sees 64-66°C max, and that's with a passive backplate and a fairly restrictive-airflow case

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

@Arizor 
It could be your PSU. I had an HX1200 that kept shutting off, probably the OCP getting tripped prematurely, while I was playing GTA 5 at 8K, even at stock settings and stock BIOS on my 3090. I replaced it with my older RM750 and it was fine. I RMA'd the PSU and the replacement works perfectly.
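Transient spikes that trip a PSU's OCP are much faster than what most monitoring overlays show. As a rough sanity check you can poll the driver's power readout from a script; a minimal Python sketch (the nvidia-smi query flags are the standard ones, but note the reading is averaged over a sampling window, so the millisecond transients that actually trip OCP will be understated):

```python
import subprocess

def parse_power(text: str) -> float:
    # nvidia-smi prints e.g. "352.47 W"; strip the unit and convert
    return float(text.strip().split()[0])

def read_board_power() -> float:
    """Query the driver's reported board power draw, in watts."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader"],
        text=True)
    return parse_power(out)

# On a machine with an NVIDIA GPU you could poll this in a loop, e.g.:
#   peak = max(read_board_power() for _ in range(120))
```

If sustained draw already sits near the PSU's rated limit, the unreported spikes on top of it are a plausible culprit for shutdowns.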


----------



## J7SC

Arizor said:


> Yeah I think I'm going to grab this Be Quiet PSU and see if it does anything for me...


...remember that 'slippery slope' and bobsled ?  

Anyhoo, with a bit of research, you can find out which brand actually uses what 'innards'. I tend to go with 'Delta' PSU products for my top-end stuff, like the Antec HCP 1.3kW Platinum. I purchased several of those in 2014-2016 for heavy quad-GPU EVbot lifting, not least because they have a little OClink which allows you to connect two together. I don't think Antec offers that particular model anymore, but there might be some updated versions...whatever your choice, just make sure to figure out who made/assembled the actual components, like Delta or Super Flower. I would also pay attention to the gauge of the PCIe and other modular cables.

Or just go even faster in your bobsled and pair that Optimus block with > this (subject to your home's circuitry) 

btw, as others have stated, the difference between 2040 and 2140 on a 3090 really only matters if benchies are very important to you - for gaming and other every-day stuff, I couldn't tell the difference unless I have a monitoring overlay running...


----------



## Arizor

I did think of that @J7SC ! That PSU is insane, hopefully this 1200 is enough, it’s got really great reviews (unlike my evga g5 which has received middling reviews and lots of complaints about quality!). Yep @gfunkernaught your posting was what initially aroused my suspicion, I’ll let you all know when it arrives and I can test.


----------



## jura11

MangoMunchaa said:


> I meant core-to-water delta. I have two 360x30mm rads and one 360x54mm rad; depending on my fan curves my water usually sits at 35C with a quiet profile and 28C with a louder profile.
> 
> My core-to-hotspot delta is around 13C but was 10C before I changed my thermal pads from the EK ones to Gelid Ultimate (I later found out these aren't great because they don't compress well), so I imagine that's probably why it increased. However, I don't understand why I run at 52C with 35C water at 390W (it was about the same, actually a little worse, before changing pads).
> 
> Thank you Jura!


Hi there 

Not sure why everyone is using core-to-water delta as the main measurement of waterblock performance; I wouldn't pay as much attention to that. A better measure of a waterblock's performance, I would say, is the delta between idle and load temperature

On my waterblocks the core-to-water delta is 13-15°C max, and the normal water delta-T is 2-3°C during gaming or rendering; in gaming my GPU usually pulls in the 480-500W region

What ambient temperatures are you seeing during gaming or on a normal day? I assume it's in the high 20s to 30s

On my loop I see something like 22-24°C water temperature. I have three water temperature sensors: one is IN, another is OUT, and the last is fitted at the coldest place before the pumps; that one always reads about 1.5°C lower than the other two. The difference between the IN and OUT sensors is usually 0.5-1.5°C max. During hotter weather the highest water temperature I have seen was in the 29s, and the water delta-T stayed the same at 2-3°C with fans running at 750-850RPM
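The deltas being compared here are simple arithmetic on sensor readings; a quick Python sketch with hypothetical numbers, just to make the three different deltas explicit:

```python
# Hypothetical sensor readings (°C) from a loop like the one described
ambient   = 23.0
water_in  = 24.5   # sensor before the GPU block
water_out = 25.6   # sensor after the GPU block
gpu_core  = 38.0
hotspot   = 47.0

core_to_water   = gpu_core - water_in    # block performance proxy
loop_delta_t    = water_out - water_in   # flow / radiator loading proxy
core_to_hotspot = hotspot - gpu_core     # mount / paste / pad proxy

print(f"core-to-water delta:   {core_to_water:.1f} °C")
print(f"loop delta-T:          {loop_delta_t:.1f} °C")
print(f"core-to-hotspot delta: {core_to_hotspot:.1f} °C")
```

Each delta points at a different part of the loop, which is why a single "core minus water" number doesn't tell the whole story.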

Not sure if Bykski would make much difference; you may see lower overall GPU core temperatures, probably a 5-6°C gain on the core, and on VRAM it's hard to say. But most people running Bykski waterblocks have reasonable VRAM temperatures in the 60s to 70s region. I would probably go with the active backplate from Bykski along with their block

Optimus waterblocks, I would say, are among the best if not the best; I would love to test their blocks in my loop and see if they are really as good as they're claiming

But for now I'm good with Bykski; they outperform the expectations I had of them, and for the money you will hardly find a better waterblock

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

I could see a difference between 2040mhz and 2140mhz while gaming without a frame counter. But only in intensive games like cyberpunk or when I play GTA 5 in 8k (I keep mentioning that but damn that game looks good in 8k 😁).


----------



## Lobstar

jura11 said:


> Optimus waterblocks I would say they are one of the best if not the best, would love to test their blocks on my loop and see if they are really as good as they're claiming


I'm super happy with mine. I posted some data over the past few pages, but on my latest PR run I had a 7.8C core-to-water delta with my GPU pulling over 500W. When idle, the GPU core is the same as the water temp.


----------



## Arizor

Yeah, I can also say I'm very happy with my Optimus over my EKWB. Nothing wrong with the latter, but the Optimus has completely masked any coil whine, runs cooler, and is actually a much simpler block to put together, if that matters to anyone.


----------



## iionas

Thought I'd finally join in rather than just reading.
G'day fellas!
Here's my rig (photo attached)
Lian Li O11D / 5900X / 1000W / 32GB CL14 / 3090 TUF OC
I've been thinking about watercooling the TUF for a bit; I'm not sure whether I should or not.
Unsure if I want to dump the extra 535 euro on it for an EKWB block / pump etc. Are there any MAJOR daily-use benefits apart from total silence?
The machine is used for engineering / gaming.
I currently have it undervolted per the Igor's Lab write-up: NVIDIA GeForce RTX 3090 Undervolting Update - This is how (a little) common sense works! | igor´sLAB
Great card mind you; I've also been thinking about repasting and re-padding it.


----------



## J7SC

iionas said:


> Thought I'd finally join in rather than just reading.
> G'day fellas!
> Here's my rig (photo attached)
> Lian Li O11D / 5900X / 1000W / 32GB CL14 / 3090 TUF OC
> I've been thinking about watercooling the TUF for a bit; I'm not sure whether I should or not.
> Unsure if I want to dump the extra 535 euro on it for an EKWB block / pump etc. Are there any MAJOR daily-use benefits apart from total silence?
> The machine is used for engineering / gaming.
> I currently have it undervolted per the Igor's Lab write-up: NVIDIA GeForce RTX 3090 Undervolting Update - This is how (a little) common sense works! | igor´sLAB
> Great card mind you; I've also been thinking about repasting and re-padding it.


Nice setup ! Generally speaking with 'heaters' such as 3090s, the adage that lower temps are better holds especially true. While I personally prefer w-cooling, undervolting will contribute to lower temps as well, as will re-pasting and using top-notch thermal pads (or thermal putty !). In addition, adding a heatsink with good contact on the back plate will also help, due to the double-sided VRAM heat there. Here are examples of heat sinks from Amazon (two in a package for the black ones)...quite cheap but effective...


----------



## iionas

J7SC said:


> Nice setup ! Generally speaking with 'heaters' such as 3090s, the adage that lower temps are better holds especially true. While I personally prefer w-cooling, undervolting will contribute to lower temps as well, as will re-pasting and using top-notch thermal pads (or thermal putty !). In addition, adding a heatsink w/good contact on the back plate will also help, due to the double-sided VRAM heat there. Here's are examples of heat sinks from Amazon (two in a package for the black ones)...quite cheap but effective...
> 
> View attachment 2528364



Hey Canadian Buddy

Yeah look, tbh it's not that hot, like 68-71C in the winter with it at full tilt gaming or calculating rain sims, which is fine for me.

I'd also like to re-paste it given the shocking paste they normally apply at the factory, and I was thinking of replacing the pads with Fujipoly (I think they're called).

In saying all this, I was thinking about cooling only the GPU, which may look average in my system; I have an AIO (Arctic Liquid Freezer II rev 4) and the thing is fine for the 5900X.

I'm so unsure what to do; it's another 1k AUD I have to throw at it, basically.


----------



## Arizor

Hey VIC buddy @iionas , I have 2 EKWB radiators (360mm and 240mm) sitting in my closet, and am happy to send them to you free of charge to get you started. If you want them, just shoot me a direct message.

I also had an EKWB waterblock but chucked it in the bin thinking no one would want it, sad to say!


----------



## iionas

Arizor said:


> Hey VIC buddy @iionas , I have 2 EKWB radiators (360mm and 240mm) sitting my closet, and am happy to send them to you free of charge to get you started, if you want them, just shoot me a direct message.
> 
> I also had an EKWB waterblock but chucked it in the bin thinking no one would want it, sad to say!



heya dude! You threw out the tuf 3090 block? You mad mad man! haha!

I'd take the rads, however I'm currently waiting to come home (I'm international but Melb is my home), stuck here due to corona.

Hold onto them, as I'll take them off you and maybe pay for the international shipping? Let me know your thoughts. I'll drop you a PM anyway.

I'm 50/50 on the water; I'm not sure if it would be stupid just doing the GPU


----------



## Arizor

Threw out the Strix block, which I’m not sure is compatible with TUF now I think about it anyway…!

no worries mate I’ve responded to your pm.


----------



## iionas

Arizor said:


> Threw out the Strix block, which I’m not sure is compatible with TUF now I think about it anyway…!
> 
> no worries mate I’ve responded to your pm.


Genius thank you!

The optimus stuff looks amazing also, what was wrong with that block?


----------



## Arizor

Nothing wrong per se, just got a better block (Optimus) and couldn’t think what else to do with it 😂


----------



## iionas

Those blocks are so damn nice man


----------



## Arizor

Yeah it’s my favourite block by a mile, such good performance and looks great vertically mounted.


----------



## iionas

Arizor said:


> Yeah it’s my favourite block by a mile, such good performance and looks great vertically mounted.



Got some photos?


----------



## Arizor

Not great ones, but here you go!


----------



## ArcticZero

That looks so good. Still crossing my fingers on a reference Optimus block!


----------



## iionas

ArcticZero said:


> That looks so good. Still crossing my fingers on a reference Optimus block!


think they said new stock is imminent


----------



## gfunkernaught

... it'd be nice if there was an Optimus block for the Trio...😒


----------



## J7SC

Arizor said:


> Nothing wrong per se, just got a better block (Optimus) and couldn’t think what else to do with it 😂


...I know what you mean  - I'm now sitting on an unused EKWB Strix OC block and never-mounted (new via RMA) back plate after switching to the Phanteks Glacier and modded back plate / heat sink. Results were worth it though 

...an acquaintance owns a precision machining shop; maybe I'll have the block machined for a future application (3090 Ti, 4090 ??), i.e. as an active back plate


----------



## iionas

Speaking of that, there are no active backplates apart from Optimus, yeah?


----------



## jura11

iionas said:


> Speaking of that, there's no active blocks apart from optimus yeah?


There are active backplates from several different manufacturers like EKWB, Bykski, Barrow and Aqua Computer, and I think Heatkiller is making one

Hope this helps 

Thanks, Jura


----------



## Vld

Hi all !

Got a weird issue with an Asus 3090 Strix.
If I use the Asus original 480W BIOS, the card works normally: all three 8-pin power connectors work as they're supposed to, max power draw 480W.
If I flash the EVGA 520W BIOS, the card seems to be limited to using only two 8-pin connectors, 8-Pin #1 and 8-Pin #2. If I put some load on the card, 8-Pin #3 stays at 4W and the red LED on the card flashes.

Any ideas ?


----------



## Falkentyne

Vld said:


> Hi all !
> 
> Got weird issue with Asus 3090 Strix.
> If i use Asus original 480W bios then card works normal, all 3 8pin power connectors working as supposed to, max power draw 480W.
> If I flash EVGA 520W bios - card seems to be "limited" to use only 2 8 pin connectors, 8-Pin #1 and 8-Pin#2. If i put some load on card then 8-Pin #3 will stay at 4W, red led on card will flash.
> 
> Any ideas ?


Don't use the EVGA 520W BIOS. Use the Kingpin 1kW BIOS and just set your power limit to 55%.
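For reference, Afterburner's power slider is a percentage of the vBIOS default TDP, so the same percentage means very different watts on different BIOSes. A tiny sketch of that arithmetic (the 480W and 1000W defaults are the figures discussed in this thread; the percent-of-default-TDP semantics is the usual Afterburner behaviour, stated here as an assumption):

```python
def slider_to_watts(bios_default_w: float, slider_pct: float) -> float:
    """Afterburner's power slider is a percentage of the vBIOS default TDP."""
    return bios_default_w * slider_pct / 100.0

# Hypothetical comparison: the same card under two different BIOSes
print(slider_to_watts(480, 100))   # stock Strix 480W BIOS at 100%
print(slider_to_watts(1000, 55))   # KP 1kW BIOS capped at 55%
```

So "55% on the KP BIOS" is still about 550W at the card, more than the stock BIOS fully maxed out.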


----------



## GRABibus

Vld said:


> Hi all !
> 
> Got weird issue with Asus 3090 Strix.
> If i use Asus original 480W bios then card works normal, all 3 8pin power connectors working as supposed to, max power draw 480W.
> If I flash EVGA 520W bios - card seems to be "limited" to use only 2 8 pin connectors, 8-Pin #1 and 8-Pin#2. If i put some load on card then 8-Pin #3 will stay at 4W, red led on card will flash.
> 
> Any ideas ?


You can use these ones:

EVGA RTX 3090 VBIOS – 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

MSI RTX 3090 VBIOS – 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)

With those BIOSes you will also have one pin stuck at 4W, but you will get higher fan speeds and therefore a better gaming boost.

The KP 1000W BIOS, even capped at 55%, gives me crazy temps on my Strix on air in Port Royal, for example.
If you are on air, don't use the KP 1000W BIOS. That's my advice, as applied to a Strix on air.


----------



## J7SC

GRABibus said:


> You can use these ones:
> 
> EVGA RTX 3090 VBIOS – 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> MSI RTX 3090 VBIOS – 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> With those Bios, you will also have one pin stuck at 4W, but you will have higher fan speed and then better gaming boost.
> 
> KP 1000W even capped at 55% gives me crazy temps on my strix on air in Port Royal for example.
> If you are on air, don't use the KP 1000W Bios. This is my advice applied to Strix on air.


_"*KP 1000W *even capped at 55% gives me crazy temps on my strix *on air* in Port Royal..."_


----------



## Arizor

Went a bit, well, "us" and grabbed a Corsair AX1600i to replace the current PSU and see if I can get better, stable overclocks. Picking it up soon, we'll see...


----------



## Falkentyne

GRABibus said:


> You can use these ones:
> 
> EVGA RTX 3090 VBIOS – 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> MSI RTX 3090 VBIOS – 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> With those Bios, you will also have one pin stuck at 4W, but you will have higher fan speed and then better gaming boost.
> 
> KP 1000W even capped at 55% gives me crazy temps on my strix on air in Port Royal for example.
> If you are on air, don't use the KP 1000W Bios. This is my advice applied to Strix on air.


The KP 1kW BIOS doesn't just remove TDP limits.
It also removes the MSVDD and NVVDD rail limits. Without that, your card would throttle on TDP Normalized once that comes within 5% of the TDP slider (this is ignored if the TDP slider is lower than 100%; only the 100% values are used for TDP Normalized).


----------



## Vld

The KP 1000W BIOS works, but in a strange way: higher temps, lower clocks. I've got a pretty OK cooling setup - an EKWB Vector with active backplate, a Mora3 420 with 8 Noctua 200mm fans plus an EKWB PE360 with 5 Noctua 120mm fans, and 2 D5s + a DDC. Before, temps in Superposition 1080p Extreme were around 41-42C on the GPU core; now 48-49C.

Winter is coming, maybe this BIOS is good for heating ...


----------



## des2k...

Vld said:


> KP 1000 W bios works, but does that in strange way, higher temps, lower clocks. I’v got pretty ok cooling setup - EKWB Vector with active backplate, mora3 420 with 8 noctua 200 mm fans plus EKWB pe360 with 5 noctua 120 mm fans,2 D5 + DDC, before temps in superposition 1080 extreme around 41-42 gpu core, now 48-49 c.
> 
> Winter is coming, maybe this bios is good for heating ...


I need to reinstall Superposition; what's the wattage for 1080p Extreme?

I have 2x8-pin and was limited to 390W; the 1000W vBIOS set to 400W had the same GPU temps


----------



## gfunkernaught

Arizor said:


> Went a bit, well, "us" and grabbed a Corsair AX1600i to replace the current PSU and see if I can get better, stable overclocks. Picking it up soon, we'll see...


I was considering that a while ago, then landed on the HX1200 I found for $260 (I think). Now that the AX1600i is back in stock _teeth grind_ I probably have no use for it _teeth grind_ 😂


----------



## Arizor

gfunkernaught said:


> I was considering that a while ago, then landed on the HX1200 I found for $260 (I think). Now that the AX1600i is back in stock _teeth grind_ I probably have no use for it _teeth grind_ 😂


We've got a disease mate


----------



## Vld

des2k... said:


> need to reinstall superposition, what's the wattage for 1080p extreme ?
> 
> i have 2x8pin, was limited to 390w, 1000w vbios set to 400w had the same gpu temps


With the TDP limit set at 55%, the max was 520-525W, hitting VRel and PWR limits.


----------



## des2k...

Vld said:


> TDP limit set at 55%, max was at 520-525w, hit VRel and PWR limits.


Oh...if it's around 48C at 500W with those rads, that's probably around a 20C delta, which is not great.

It could be around 38C with a 10C delta if that EK block made good contact with the die.


----------



## dansi

gfunkernaught said:


> I was considering that a while ago, then landed on the HX1200 I found for $260 (I think). Now that the AX1600i is back in stock _teeth grind_ I probably have no use for it _teeth grind_ 😂


Aren't we going to have new PSU standards for PCIe 5.0 GPUs soonish?

The AX1600i seems overboard?


----------



## J7SC

dansi said:


> arent we going to have new PSU standards for pcie5 GPU soonish?
> 
> Ax1600i seems overboard?


The rumored January (>salt) refresh of the RTX 3K line (Super ? Ti ? Ti Super ?) will reportedly have some models that feature the new PCIe 5.0 GPU power cable design, delivering up to 600+ W. Likely, adapters for current GPU / PCIe will be available for existing PSUs though, given what I've read. Besides, the installed base of current PCIe cards is huge so the switchover will take some time. Top-end PSU manufacturers might have to include both modular sets for a while...so, surprise, more $$$


----------



## gfunkernaught

@J7SC its the 3090 Ti Super Ultra Jensen's Edition iirc


----------



## Arizor

Powergate update folks: just finished installing the AX1600i. Clock frequencies are now solid as a rock (fluctuation was the norm on the old PSU), but so far no real improvement in overall clock speed. Watch this space as I test other BIOSes...


----------



## Arizor

Ok, so after testing, interesting findings.

My max clock speed is up slightly (2055), but the big difference is _it does not budge whatsoever_ in terms of clock speed or voltage. On the previous PSU, same settings in Afterburner, clock frequency and voltage would bounce around, I assumed because I couldn't find the sweet spot for the card.

Turns out it must have been the PSU, since now the same settings just sit stable at clock and voltage, no movement whatsoever. Probably normal for you folk, but this is new to me. Overall pretty happy with the outcome, and I'm obscenely future-proof for at least 5 years.


----------



## gfunkernaught

Arizor said:


> Ok, so after testing, interesting findings.
> 
> My max clock speed is up slightly (2055), but the big difference is _it does not budge whatsoever_ in terms of clock speed or voltage. On the previous PSU, same settings in Afterburner, clock frequency and voltage would bounce around, I assumed because I couldn't find the sweet spot for the card.
> 
> Turns out it must have been the PSU, since now same settings just sit stable at clock and voltage, no movement whatsoever. Probably normal for you folk but this is new to me. Overall pretty happy with the outcome, and at least I'm obscenely future proof for at least 5 years.


I've never experienced that before. PSUs can be real finicky. Did you install the software to monitor all the deetz of your new PSU? (I'm jelly btw.)


----------



## Arizor

Haha! Yeah strange isn’t it. 

Nah, sadly it needs a USB connection to the motherboard and I couldn't quite be bothered, as my USB header is behind my GPU (vertical mount), meaning I'll need to mess around a bit. Will do soon though!


----------



## J7SC

gfunkernaught said:


> @J7SC its the 3090 Ti Super Ultra Jensen's Edition iirc


...sneak preview of next gen (Ti at the end):


Spoiler


----------



## KedarWolf

dansi said:


> arent we going to have new PSU standards for pcie5 GPU soonish?
> 
> Ax1600i seems overboard?


AX1600i is still being reviewed as the best PSU you can buy.


----------



## Arizor

Installed the USB connection to the AX1600i for temp readings; the only horrid issue is it depends on iCUE, an abominable piece of software. It was immediately incompatible with HWInfo and caused a crash, then caused another crash as I tried to uninstall it, then I had to reinstall it to manually uninstall. Then it still hung around running some random "headset driver" in Task Manager. Yeesh, horrific. Aside from that, great PSU!


----------



## ManniX-ITA

Arizor said:


> Installed USB to AX1600i for temp readings, only horrid issue is it depends on iCUE, an abominable piece of software. Immediately incompatible with HWInfo and caused crash, then caused another crash as I tried to uninstall, then had to reinstall to manually uninstall. Then it still hung around running some random "headset driver" in task manager. Yeesh, horrific. Asides from that, great PSU!


Doesn't the reading work in HWInfo without iCUE?


----------



## Arizor

ManniX-ITA said:


> Doesn't work reading in HWInfo without iCUE?


It works for sure with the ASUS BIOS, but with any other BIOS, iCUE gives accurate readings whilst HWInfo doesn't give accurate info for the 3rd PCIe input. Not too big a deal, just a bit of a shame iCUE is up there with MSI Dragon Center and ASUS Armoury Crate as godawful. Do these companies really think they're improving their rep with this poor software....?!


----------



## ManniX-ITA

Arizor said:


> For sure using the ASUS BIOS, but for any other the iCUE gives accurate readings whilst HWInfo doesn’t give accurate info for the 3rd PCIe. Not too big a deal just a bit of a shame iCUE is up there with MSI dragon centre and ASUS armoury crate as godawful. Do these companies really think they’re improving their rep with this poor software….?!


Unfortunately they are hardware companies... they either use an external company or a ridiculously thin internal team.
They spend their effort on flashy stuff for marketing instead of QA...


----------



## gfunkernaught

J7SC said:


> ...sneak preview of next gen (Ti at the end):
> 
> 
> Spoiler


Na man that's the regular stock one. The 4090 Ti Super Ultra Jensen's Edition will have 1TB of VRAM but you can only use 16GB unless you subscribe to Jensen's Patreon.


----------



## OC-NightHawk

Arizor said:


> Installed USB to AX1600i for temp readings, only horrid issue is it depends on iCUE, an abominable piece of software. Immediately incompatible with HWInfo and caused crash, then caused another crash as I tried to uninstall, then had to reinstall to manually uninstall. Then it still hung around running some random "headset driver" in task manager. Yeesh, horrific. Asides from that, great PSU!


If you can find it the Corsair Link software will work with the AX1600i. If that is your only Corsair product you might be better off with it.


----------



## gfunkernaught

I also don't like iCue, but I had to install it to get my Commander Pro working. I can control fan speed with Argus but it doesn't report the correct fan speed.


----------



## ManniX-ITA

gfunkernaught said:


> I also don't like iCue. But I had to install it to get my commander pro working. I can control fan speed with Argus but it doesn't report correct fan speed.


Not sure if it can help but I had this issue in the past (now I don't use it anymore).
I've installed the latest firmware with Corsair Link, uninstalled it afterwards, and then Argus was working fine.


----------



## gfunkernaught

ManniX-ITA said:


> Not sure if it can help but I had this issue in the past (now I don't use it anymore).
> I've installed the latest firmware with Corsair Link, uninstalled it afterwards, and then Argus was working fine.


You uninstalled iCue completely? And Argus still recognized the Commander Pro as a sensor/controller?


----------



## ManniX-ITA

gfunkernaught said:


> You uninstalled iCue completely? And argus still recognized commander pro as a sensor/controller?


Sure, you don't need iCue or Link installed at all.
They will only conflict and mess with Argus.


----------



## J7SC

gfunkernaught said:


> Na man that's the regular stock one. The 4090 Ti Super Ultra Jensen's Edition will have 1TB of VRAM but you can only use 16GB unless you subscribe to Jensen's Patreon.


...I think I shuffled enough $s Jensen's way over the years already


----------



## gfunkernaught

J7SC said:


> ...I think I shuffled enough $s Jensen's way over the years already


Shuffled or shoveled? I'm right there with ya. We all are. I remember the excitement of buying a $500 GPU. Now I get disappointed with a $2k card not doing what I think is worth $2k.🙄


----------



## J7SC

gfunkernaught said:


> Shuffled or shoveled? I'm right there with ya. We all are. I remember the excitement of buying a $500 GPU. Now I get disappointed with a $2k card not doing what I think is worth $2k.🙄


..._shoveled_ reminds me of stinky manure, besides, I shuffled the company credit cards for NVidia more than once  ! Also, sometime soon, you may rue the day when you could still get GPUs for $2k.


----------



## Arizor

If anyone needs a non-invasive program to control their fans, the best I've found is the freely available FanControl (GitHub: Rem0o/FanControl.Releases, "a highly customizable fan controlling software for Windows").


----------



## gfunkernaught

J7SC said:


> ..._shoveled_ reminds me of stinky manure, besides, I shuffled the company credit cards for NVidia more than once  ! Also, sometime soon, you may rue the day when you could still get GPUs for $2k.


To someone who doesn't make money with these cards, it's shoveled. As for one day missing $2k GPUs, not a chance. 😎


----------



## KCDC

I definitely wouldn't have upgraded so soon if it wasn't put in a project's budget. Prices are insane, scalped or not..


Testing a couple 80mm fans on the heatsinks has proven to help quite a bit, been working/rendering with them and has dropped temps by 10+C (GPU #1 being the passive backplate). Gives me a bit more confidence letting it render overnight.


----------



## J7SC

KCDC said:


> I definitely wouldn't have upgraded so soon if it wasn't put in a project's budget. Prices are insane, scalped or not..
> 
> 
> Testing a couple 80mm fans on the heatsinks has proven to help quite a bit, been working/rendering with them and has dropped temps by 10+C (GPU #1 being the passive backplate). Gives me a bit more confidence letting it render overnight.


...helper fans aimed at the backplate / extra heatsink are a good idea; I'm trying to decide whether to use 2x 80mm Arctic or 1x 120mm Arctic as I finalize my update-build this week. I had 2x 120mm before in the old config but the case and mounting orientation changed...


----------



## iionas

Have any of you guys cooled a 5900X? What were your results, and with which block?

Sorry, a bit off topic, but thought I'd ask here anyway.

Also good morning from across Europe


----------



## J7SC

iionas said:


> Have any of you guys cooled a 5900x? What were your results with which block?
> 
> Sorry bit off topic but thought id ask here anyway
> 
> Also good morning from across europe


I'm cooling both a 3950X and 5950X w/ this Phanteks model with superb results. There could be / are better blocks out there, but I find price / performance / appearance hard to beat. Pic below, with a 3090 Strix in the back, so thread-related


----------



## iionas

J7SC said:


> I'm cooling both a 3950X and 5950X w/ this Phanteks model with superb results. There could be / are better blocks out there, but I find price / performance / appearance hard to beat. Pic below, with a 3090 Strix in the back, so thread-related
> 
> View attachment 2528538


Heya! Thanks man. What kind of temps have you got happening?

DM me if you wish!


----------



## J7SC

...


Spoiler: next to a 3090 :)



note: 1320x60 rad space, older screenie w/ older RAM setup


----------



## iionas

J7SC said:


> ...
> 
> 
> Spoiler: next to a 3090 :)
> 
> 
> 
> note: 1320x60 rad space, older screenie w/ older RAM setup
> View attachment 2528541


Yeah wow, that's VERY impressive.

I get 80C peaks with my Liquid Freezer II.

I was asking because one of my friends hits the 80s with his 5900X on water with an Optimus block.

I think the Lian Li O11 needs 3 rads; I don't think 1x thick EK 360 and a slim EK 360 would be enough.


Thoughts?


----------



## Lobstar

iionas said:


> Have any of you guys cooled a 5900x? What were your results with which block?


I've got the Optimus Foundation block. It's $109 for the copper/acetal version. It's definitely among the top performers.

Here is my 5950X running Cinebench R23 with 22C ambient air and 26C water. This was after 15 minutes (though a shorter window of data capture, the last 3 mins or so), which was after playing New World, so the water is as hot as it's going to get.








It's pulling 250W with just a CO all-core of -20 and a vcore offset of -0.025. I have also sanded both the IHS and the cold plate to 2000 grit to help with temps between the CCDs.


----------



## J7SC

iionas said:


> Yeah wow, that's VERY impressive.
> 
> I get 80c peaks with my Liquid Freezer II.
> 
> I was asking because one of my friends gets into 80c with his 5900x on water with an Optimus block.
> 
> I think the Lian Li O11 needs 3 rads; I don't think 1x thick EK 360 and a slim EK 360 would be enough.
> 
> 
> Thoughts?


Not sure this helps, but for my custom CPU / GPU loops, I always use:
a.) thick(er) copper / brass rads with two or even three cores (typically 60mm plus),
b.) 1/2 x 3/4 tubing,
c.) at minimum two D5 pumps in series at 75% or so speed.

Even on a temp testbench for a recent update with just one XSPC RX 360 and 3x Arctic P12 fans for both the 5950X and the 3090 Strix w/dual D5s, max temps were no higher than 71 C or so for the 5950X after prolonged GPU testing followed by Cinebench R23 runs (210W +- CPU package power). I really do like the Phanteks CPU blocks...with that experience under my belt, I got the Phanteks block for the 3090 Strix with great temps as well. Generally, I would prefer two thick 360s over three thin ones (30mm or less) if you can make the thick ones fit (Dremel !  )


----------



## iionas

J7SC said:


> Not sure this helps, but for my custom CPU / GPU loops, I always use:
> a.) thick(er) copper / brass rads with two or even three cores (typically 60mm plus),
> b.) 1/2 x 3/4 tubing,
> c.) at minimum two D5 pumps in series at 75% or so speed.
> 
> Even on a temp testbench for a recent update with just one XSPC RX 360 and 3x Arctic P12 fans for both the 5950X and the 3090 Strix w/dual D5s, max temps were no higher than 71 C or so for the 5950X after prolonged GPU testing followed by Cinebench R23 runs (210W +- CPU package power). I really do like the Phanteks CPU blocks...with that experience under my belt, I got the Phanteks block for the 3090 Strix with great temps as well. Generally, I would prefer two thick 360s over three thin ones (30mm or less) if you can make the thick ones fit (Dremel !  )


I would think 2x 60mm would be great in this system (my Lian Li O11D). I'm just digging around to make sure I'm not doing the build twice; research 900 times and build once! 

Question for you: why dual D5s? 

And you run them in series, right?


----------



## geriatricpollywog

iionas said:


> I would think 2x 60mm would be great in this system (my Lian Li O11D). I'm just digging around to make sure I'm not doing the build twice; research 900 times and build once!
> 
> Question for you: why dual D5s?
> 
> And you run them in series, right?


Pumps help with flow rate. Flow rate keeps the temperature of the loop uniform. You should be OK with 1 pump if you just have 2 x 60mm wide 360 rads.


----------



## ManniX-ITA

0451 said:


> Pumps help with flow rate. Flow rate keeps the temperature of the loop uniform. You should be OK with 1 pump if you just have 2 x 60mm wide 360 rads.


There are also other benefits, like redundancy (if the top is non-blocking like the EK Revo) and noise (two pumps at low speed are almost inaudible vs one at high rpm).


----------



## geriatricpollywog

ManniX-ITA said:


> There are also other benefits like redundancy (if the top is non blocking like the EK Revo) and noise (two pumps at low speed are almost inaudible vs one at high rpm).


I don't buy the redundancy thing. Your CPU and GPU will protect themselves if they reach tj max.

But yeah, you can buy more pumps and turn down the speeds to make them quieter. With my setup, 4 pumps at 100% fill up a gallon jug in 40 seconds. I run them at 75%.
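For context, that jug-fill figure works out to a healthy flow rate. A quick back-of-the-envelope conversion (assuming a US gallon; an imperial gallon would give slightly higher numbers):

```python
# Convert "4 pumps at 100% fill a gallon jug in 40 seconds" into common flow-rate units.
US_GALLON_L = 3.785  # liters per US gallon (assumed)

liters_per_second = US_GALLON_L / 40        # total for all 4 pumps
liters_per_minute = liters_per_second * 60  # ~5.7 L/min
liters_per_hour = liters_per_minute * 60    # ~341 L/h

print(f"{liters_per_minute:.2f} L/min, {liters_per_hour:.0f} L/h")
```

That is comfortably above the ~1 GPM (~3.8 L/min) figure often cited as plenty for a loop, which is consistent with running the pumps at 75% instead of 100%.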


----------



## iionas

0451 said:


> Pumps help with flow rate. Flow rate keeps the temperature of the loop uniform. You should be OK with 1 pump if you just have 2 x 60mm wide 360 rads.


Thanks for that man, my bad about the noob questions!


----------



## Arizor

Yeah just 1 solid D5 pump is fine @iionas if you’re running a 2 rad setup. Don’t worry about the noob questions, everyone has to start at the same place.


----------



## iionas

0451 said:


> Pumps help with flow rate. Flow rate keeps the temperature of the loop uniform. You should be OK with 1 pump if you just have 2 x 60mm wide 360 rads.



Right, and in serial you wouldn't separate loops, would you?


----------



## iionas

Arizor said:


> Yeah just 1 solid D5 pump is fine @iionas if you’re running a 2 rad setup. Don’t worry about the noob questions, everyone has to start at the same place.


Legit trying to do all my homework here.

I used the EKWB configurator and it came out cheaper than I expected (for both GPU and CPU); however, I'd be tempted to put 2 thick rads in the Lian Li for heat soak. I think max load would peak at 600-680W.

Now another question: how much does the water temp affect the pump / tubing? I'm wanting to do soft tube; I don't think I want to get complicated with hardline yet.


----------



## Arizor

Soft tube is absolutely fine. 2 thick rads are solid as long as you can do push/pull fans, otherwise you won't get the full benefit of the thickness.


----------



## ManniX-ITA

0451 said:


> I don't buy the redundancy thing. Your CPU and GPU will protect themselves if they reach tj max.


It's not about protection; it's about being able to run the system while waiting for the D5 replacement to be delivered.


----------



## yzonker

ManniX-ITA said:


> It's not about protection; it's about being able to run the system while waiting for the D5 replacement to be delivered.


I've seen hard tubing on a CPU block sag and get ruined by a failed pump.


----------



## Nizzen

Imagine not having Quick Disconnects on every component in 2021 😅

Changing cpu and gpu is done in minutes, without needing to drain all the water. Worth 😎


----------



## ManniX-ITA

yzonker said:


> I've seen hard tubing on a CPU block sagged and ruined by a failed pump.


Yeah, I've heard a few horror stories as well... but D5s are pretty reliable, and when they die they usually just stop spinning without imploding into a huge cloud of small debris.
Having the luxury to keep working and wait till next weekend to replace it is a killer feature for me!


----------



## geriatricpollywog

ManniX-ITA said:


> Yeah, I've heard a few horror stories as well... but D5s are pretty reliable, and when they die they usually just stop spinning without imploding into a huge cloud of small debris.
> Having the luxury to keep working and wait till next weekend to replace it is a killer feature for me!


I’ve never had a D5 fail in 5 years of watercooling using 2 pumps. My latest build has the same 2 pumps, plus another 2 (4 total) on the sidecar of my MO-RA. The only components that have outright failed have been GPUs. I suppose this 747 could fly on 3 engines, but I would re-evaluate my loop rather than continue using the computer. I don’t use it for anything important like work.


----------



## yzonker

0451 said:


> I’ve never had a D5 fail in 5 years of watercooling using 2 pumps. My latest build has the same 2 pumps, plus another 2 (4 total) on the sidecar of my MO-RA. The only components that have outright failed have been GPUs. I suppose this 747 could fly on 3 engines, but I would re-evaluate my loop rather than continue using the computer. I don’t use it for anything important like work.


I had a bearing start to fail on one recently that was less than a year old. Never been run dry. In that case though it just created a vibration, so I had plenty of warning.


----------



## geriatricpollywog

yzonker said:


> I had a bearing start to fail on one recently that was less than a year old. Never been run dry. In that case though it just created a vibration, so I had plenty of warning.


The only moving part is the impeller. Which bearing are you referring to?

Again, I have 4 pumps but I won't pretend you need more than one, even for highly restrictive loops. And I definitely wouldn't recommend more than one to a budget-conscious newcomer. That would give the impression that the cost of entry is higher than it really is. Even with 6-8 pairs of QDCs, one pump is capable of filling my loop and keeping the water flowing fast enough to maintain normal temperatures. If all I had were 2 medium-sized radiators, I wouldn't even run a single D5 at 100%.


----------



## yzonker

@0451, there is a single bearing (a bushing in the center that the impeller rides on).

And I checked the impeller for debris. Looked clean. Couldn't blow anything out of it.


----------



## Lobstar

Nizzen said:


> Imagine not having Quick Disconnects on every component in 2021 😅
> 
> Changing cpu and gpu is done in minutes, without needing to drain all the water. Worth 😎


It's expensive! I use 7 sets of QD3s on my system.


----------



## yzonker

Lobstar said:


> It's expensive! I use 7 sets of QD3s on my system.


They're bulky and heavy too. I have them on my external rad as well as an extra pair to let me easily drain the loop through the external lines. Although it would be really convenient to put them between components. Certainly understand that.


----------



## Luggage

Lobstar said:


> It's expensive! I use 7 sets of QD3s on my system.


My cheap solution: Slangtång (hose pinch-off pliers)

Just need enough slack to get away from the expensive parts 

(Well I have 1 pair of QDCs to the external supernova)


----------



## Nizzen

Lobstar said:


> It's expensive! I use 7 sets of QD3s on my system.


Compared to how long I've used them, and all the hardware I change every year, they're almost free. Time is money too, so I'm actually (almost) getting paid to use them LOL

The first 8 pairs of Koolance QDs I bought are still in use now. They were bought at the same time as my 780 Classified SLI 😅


----------



## J7SC

I seem to have 'started something'  with my earlier comment re. a minimum of two D5s per loop. There are multiple benefits, some already covered above.

- Redundancy...it's not only about gaming machines but workstations and even light servers. While modern CPUs and GPUs have temp protection built in, it's the downtime for a business which matters. I only ever lost one D5 (incl. for workstations) in over a decade, and that was my own fault: I accidentally ran it dry after confusing the D5 Molex with those from multiple GT fans during wire-harness testing.

- "Noise"...I usually mount D5s on a rubber pad and they tend to be near-silent. Still, with dual D5s (or more, depending on loop), you can dial down the speed a bit.

- "Actual flow"...this is something near and dear to my heart, given many w-cooled multi-GPU system work-and-play builds. On paper, a single D5 can out-do a DDC (the latter dumps a bit more heat in your case, btw), but in practice, that's not always the case...

...D5s have a larger internal diameter and are more susceptible to sudden drops in flow if there's any air hiding in the system, as can easily be the case when using multiple GPU and CPU blocks and multiple rads. And who hasn't chased that bubble in the GPU block, especially when mounted vertically. However, also per DerBauer's test a few years back for casekingtiv, dual D5s put out enough pressure and consistent flow for that not to be an issue. Dual D5s almost always outperform dual DDCs and rarely are susceptible to bubble-induced drops. Somewhat related to this is also the fact that I use the larger-diameter 1/2 - 3/4 tubing and multiple Koolance QD 4s.


----------



## iionas

0451 said:


> I’ve never had a D5 fail in 5 years of watercooling using 2 pumps. My latest build has the same 2 pumps, plus another 2 (4 total) on the sidecar of my MO-RA. The only components that have outright failed has been GPUs. I suppose this 747 could fly on 3 engines but I would re-evaluate my loop rather than continue using the computer. I don’t use it for anything important like work.


so that’s one thing I’d take into account. I work on this thing daily; it gets a thrashing every day for 8 hours and then some gaming also. The redundant pump is a good idea, at least in my book


----------



## geriatricpollywog

J7SC said:


> I seemed to have 'started something'  with my comment re. the minimum of two D5s per loop comment earlier. There are multiple benefits, some already covered above.
> 
> - Redundancy...it's not only about gaming machines but workstations and even light servers. While modern CPUs and GPUs have temp protection built in, it's the downtime for a business which is important. I only ever lost one D5 (incl. for workstations) in over a decade and that was my own fault by accidentally running it dry as I had confused the D5 Molex with those from multiple GT fans for wire-harness testing.
> 
> - '''Noise'''...I usually mount D5s on a rubber pad and they tend to be near-silent. Still, with dual D5s (or more, depending on loop), you can dial down the speed a bit.
> 
> - "Actual flow"...this is something near and dear to my heart, given many w-cooled multi-GPU system work-and-play builds. On paper, a single D5 can out-do a DDC (the latter dumps a bit more heat in your case, btw), but in practice, that's not always the case...
> 
> ...D5s have a larger internal diameter and are more susceptible to sudden drops in flow if there's any air hiding in the system, as can easily be the case when using multiple GPU and CPU blocks and multiple rads. And who hasn't chased that bubble in the GPU block, especially when mounted vertically. However, also per DerBauer's test a few years back for casekingtiv, dual D5s put out enough pressure and consistent flow for that not to be an issue. Dual D5s almost always outperform dual DDCs and rarely are susceptible to bubble-induced drops. Somewhat related to this is also the fact that I use the larger-diameter 1/2 - 3/4 tubing and multiple Koolance QD 4s.


If you are going for “redundancy,” why stop at a pump, one of the parts that’s least likely to fail?

If you have a second pump, you should also have a second power supply hooked up in parallel.

Next you must run a second boot drive in RAID 1.

Graphics cards fail quite often. Gotta have 2 of those hooked up at all times.

Second motherboard and CPU. Those fail too.

Maybe I’m hinting at some of the design goals of the 3090/6800XT Raven’s nest.


----------



## J7SC

0451 said:


> (...)
> Maybe I’m hitting at some of the design goals of 3090/6800XT Raven’s nest.


...maybe you are  ...a few D5s awaiting deployment for one of the two Ravens' nests


----------



## Lobstar

Luggage said:


> My cheap solution Slangtång
> 
> Just need enough slack to get away from the expensive parts
> 
> (Well I have 1 pair of QDCs to the external supernova)


I have two of those by Gearwrench 


Nizzen said:


> Compared to how long I've used them, and all the hardware I change every year, they're almost free.


This is the reason I got into water cooling. It's one of those hidden costs when you first get into it but worth the cost the longer you are into it.


----------



## KCDC

I have a backup D5 with all the bits needed to put it in a series setup, but I've been torn between doing that or separating my CPU from the loop: put it on its own SR2 420 and keep the GPUs on the other 420, XE 360 and slim 360. Whichever gets me better temps on the GPUs, and I'm guessing that will be two loops vs a serial single loop with two pumps.


----------



## yzonker

The GPU will probably get the best temps from having all the rads in a single loop, as the rad on the CPU will usually not be well utilized unless you're also running the CPU at 100% at the same time.


----------



## J7SC

KCDC said:


> I have a backup d5 with all the bits needed to put it in a series setup, but I've been torn between doing that or separating my cpu from the loop, put it on it's own sr2 420 and keep the gpus on the other 420, XE 360 and slim 360. Whichever gets me better temps on the gpus and I am guessing it will be doing two loops vs serial single loop with two pumps.


I've run completely separate loops for CPU and GPU (ie. spoiler below), and even the dual GPU-only loop had the 2x D5s and rads 'separated' (so res > pump 1 > GPU 1 > rad 1 > rad 2 > pump 2 > GPU 2 > rad 3 > back to res). Temps were great, and that system was in 3DM overall PortRoyal HoF for 18+ months, as high as 15th.

However, I recently combined the TR CPU loop of that system with the GPU loop (which I kept mostly 'as is'), and net differences in temp are 1.5C to 2C at most - and that is with 3x D5s instead of 4x, and 3x RX 360s instead of 5x, and defanged 120mm fans (Arctic P12s instead of GT3k rpm). With good fast flow and multi-rads, it doesn't really matter. Also, if you distribute the D5s and rads as described in series, you get the best of both worlds, including stronger flow _and_ redundancy and peace of mind.



Spoiler


----------



## iionas

fudge me, you guys run what would seem to a normal person like overkill, but it makes 100% sense why you're doing so

loving the info this is great


----------



## KCDC

J7SC said:


> I've run completely separate loops for CPU and GPU (ie. spoiler below), and even the dual GPU-only loop had the 2x D5s and rads 'separated' (so res > pump 1 > GPU 1 > rad 1 > rad 2 > pump 2 > GPU 2 > rad 3 > back to res). Temps were great, and that system was in 3DM overall PortRoyal HoF for 18+ months, as high as 15th.
> 
> However, I recently combined the TR CPU loop of that system with the GPU loop (which I kept mostly 'as is'), and net differences in temp are 1.5C to 2C at most - and that is with 3x D5s instead of 5, and 3x RX 360s instead of 4, and defanged 120mm fans (Arctic P12s instead of GT3k rpm). With good fast flow and multi-rads, it doesn't really matter. Also, if you distribute the D5s and rads as described in series, you get the best of both worlds, including stronger flow _and_ redundancy and peace of mind.
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2528650


Cool, it'll be a lot easier just adding to the current loop. Thanks!


----------



## KCDC

Delete


----------



## Lord of meat

Hey guys, just noticed a different 520W BIOS:








EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com





subsystem ends with 3999 instead of 3998

Edit 1:
YEP it works, gonna see if there is any difference. ReBAR works too.
Edit 2:
Did a quick dummy test; I can overclock it higher now, very odd considering it should be the same as the old one.
I was capped in Port Royal at +60 (2085-2115); now I can do +105 (2100-2145). At +120 I shut it down, it causes artifacts.
On the old one I used the V/F curve to reach 2130 but it was annoying to do.
It pulls 522.4W according to GPU-Z; the old one did 502W.
Memory OC +1000: its temp is 54C, hotspot is 58-60C. No difference.


----------



## ManniX-ITA

0451 said:


> If you are going for “redundancy,” why stop at a pump, one of the parts that’s least likely to fail?
> 
> If you have a second pump, you should also have a second power supply hooked up in parallel.
> 
> Next you must run a second boot drive in RAID 1.
> 
> Graphics cards fail quite often. Gotta have 2 of those hooked up at all times.
> 
> Second motherboard and CPU. Those fail too.


Well, the ultimate hack would be a ready-to-go clone of myself in a cryo pod in case of myself-failure 

Pumps are mechanical, so their failure rate, against anything which is not, is something like 10000:1 higher.
There are dependencies: the pump fails and it's like the CPU or the mobo; nothing runs without it.
Availability is key; D5 pumps, or water-cooling pumps for PCs at least, are not available everywhere.
Chances are at any PC store you can buy a PSU, GPU, CPU, or mobo in an emergency.
Less likely a pump.

But yes, it's not really required; the D5 pumps are reliable, more so than the average of the other components.
Together with the other benefits, it's a smart way to go if you can invest the money.
It is definitely not something that should stop anyone from going custom loop.
Even with one D5, the difference against an AIO is massive.


----------



## J7SC

iionas said:


> fudge me, you guys run what would seem to a normal person like overkill, but it makes 100% sense why you're doing so
> 
> loving the info this is great


....it can be a lot of fun, just be aware that it can also be a slippery slope, just ask our friend @Arizor 



ManniX-ITA said:


> Well, the ultimate hack would be a ready-to-go clone of myself in a cryo pod in case of myself-failure
> 
> Pumps are mechanical so the failure rate, against anything which is not, is something like 10000:1 higher.
> There are dependencies, the pump fails and it's like the CPU or the Mobo; nothing runs without it.
> Availability is key, D5 pumps or water-cooling pumps (for PC at least) are not available everywhere.
> Chances are at any PC store you can buy a PSU, GPU, CPU, Mobo for emergencies.
> Less likely a pump.
> 
> But yes, it's not really required, the D5 pumps are reliable. More than the average of the other components.
> Together with the other benefits it's a smart way to go if you can invest the money.
> It is definitely not something that should block anyone to go custom loop.
> Even with one D5 the difference against an AIO is massive.


...good points there. Getting select water-cooling peripherals (including full D5 setups w/housing) seems more difficult these days compared to 5-6 years ago... With pandemics, ships getting stuck in the Suez Canal or piling up outside LA Harbour, I'd rather have fail-over and some spares. And to each his/her own, but with modern GPUs and CPUs placing ever-more emphasis on boost algorithms that include temps as a major parameter, good cooling makes a lot of sense rather than just bashing your head into the hotspot wall, so to speak...


----------



## Arizor

I don’t know what you could possibly mean @J7SC . Now if you don’t mind I need to get back to researching how to install an extra D5 pump and 360 rad into my case… now do I need to buy redundancy backups too…


----------



## geriatricpollywog

I messed up the thermal pads for my Kingpin Hydrocopper and need new ones (2mm). Is $44.95 a terrible price for (100 × 100 × 2,0 mm) Thermal Grizzly Minus 8?
@tps3443 

Amazon.com: Thermopad Thermal Grizzly Minus Pad 8 - Silicone, Self-Adhesive, Thermally Conductive Thermal Pad - Conducts Heat and Cools The Heating Elements of The Computer or Console (30 × 30 × 0,5 mm) : Industrial & Scientific



Arizor said:


> I don’t know what you could possibly mean @J7SC . Now if you don’t mind I need to get back to researching how to install an extra D5 pump and 360 rad into my case… now do I need to buy redundancy backups too…
> View attachment 2528679


Make sure you have an extra redundant ethernet port (dual LAN) and ISP in case your gaming gets interrupted. 15 MB/s should be okay. Also make sure you have a redundant power source coming into your home; solar or a hospital generator are both acceptable. Stock up on food in case the global supply chain shortage gets bad enough that you have to stand in line for food or go to a store further away.


----------



## iionas

@Arizor do tell your adventures sir hahahaha


----------



## CZonin

Hey everyone. Looking to take another stab at my 3090 TUF. I've been running the 390W Gigabyte Gaming OC BIOS for awhile that's been working great. I deshrouded and replaced the fans with 3x Noctua A9's with a fairly quiet fan curve. My case is also essentially open air so my temps peak around 59-61C. Do you think I could flash the XOC BIOS and cap the power limit even though my card is only 2x8pin since I feel like I have a bit of thermal headroom I could take advantage of?


----------



## J7SC

0451 said:


> (...)
> Make sure you have an extra redundant ethernet port (dual LAN) and ISP in case your gaming gets interrupted. 15 MB/s should be okay. Also make sure you have a redundant power source coming into your home; solar or a hospital generator are both acceptable. Stock up on food in case the global supply chain shortage gets bad enough that you have to stand in line for food or go to a store further away.


Almost good advice, but you forgot to add several packs of spare quality thermal pads to the redundancy list 


Spoiler



_"I messed up the thermal pads for my Kingpin Hydrocopper and need new ones (2mm). Is $44.95 a terrible price for (100 × 100 × 2,0 mm) Thermal Grizzly Minus 8?"...
_
BTW, you might want to consider TG 10 w/mk _thermal putty_ instead


----------



## ManniX-ITA

0451 said:


> I messed up the thermal pads for my Kingpin Hydrocopper and need new ones (2mm). Is $44.95 a terrible price for (100 × 100 × 2,0 mm) Thermal Grizzly Minus 8?


I really wouldn't use the Minus 8 on a Kingpin... it does have the advantage of being very solid and soft at the same time; it doesn't crumble into pieces.
But the thermal performance is between a good 6 W/mK and a below-average 8 W/mK.
I used it on my old GTX 1070 and wasn't impressed at all.


----------



## mattxx88

KCDC said:


> I definitely wouldn't have upgraded so soon if it wasn't put in a project's budget. Prices are insane, scalped or not..
> 
> 
> Testing a couple 80mm fans on the heatsinks has proven to help quite a bit, been working/rendering with them and has dropped temps by 10+C (GPU #1 being the passive backplate). Gives me a bit more confidence letting it render overnight.


This is my (ghetto)mod instead

maybe not beautiful, but functional


----------



## CZonin

CZonin said:


> Hey everyone. Looking to take another stab at my 3090 TUF. I've been running the 390W Gigabyte Gaming OC BIOS for awhile that's been working great. I deshrouded and replaced the fans with 3x Noctua A9's with a fairly quiet fan curve. My case is also essentially open air so my temps peak around 59-61C. Do you think I could flash the XOC BIOS and cap the power limit even though my card is only 2x8pin since I feel like I have a bit of thermal headroom I could take advantage of?


Just a follow up. I tried both the 1000W and 500W Kingpin BIOS, but was getting very low clocks with both. Power limit increase worked perfectly though.


----------



## jura11

CZonin said:


> Just a follow up. I tried both the 1000W and 500W Kingpin BIOS, but was getting very low clocks with both. Power limit increase worked perfectly though.


If your temperatures allow it, the XOC 1000W BIOS is worth it. Just cap it at 65%, which gives you 425-430W, or you can try capping it at 75%, which gives you 495W. Personally I would cap it at 75-80%, which will give you good headroom for most games, and your scores should improve 

Don't use the Kingpin 520W or any 3x8-pin GPU BIOS on a 2x8-pin GPU unless it is the XOC BIOS from EVGA/Kingpin; not sure about the Asus XOC 

Hope this helps 

Thanks, Jura


----------



## Arizor

iionas said:


> @Arizor do tell your adventures sir hahahaha


Basically, I began with a literal EKWB starter kit, and before I knew it I'd replaced every aspect of that, and the waterblock and, of course recently, the PSU. Learned lots though, mostly that I've got this strange itch to keep building and modding, like a crack fiend looking for his next fix.


----------



## KCDC

Well undervolting stock clock has helped, glad I finally did it. Both cards @ 1995 .925v. 24/7 rendering Redshift is now 5-9c cooler and water temps dropped enough that I can keep the case closed. Look forward to doing it for a gaming OC when I get the time.


----------



## CZonin

jura11 said:


> If your temperatures allow it, the XOC 1000W BIOS is worth it. Just cap it at 65%, which gives you 425-430W, or you can try capping it at 75%, which gives you 495W. Personally I would cap it at 75-80%, which will give you good headroom for most games, and your scores should improve
> 
> Don't use the Kingpin 520W or any 3x8-pin GPU BIOS on a 2x8-pin GPU unless it is the XOC BIOS from EVGA/Kingpin; not sure about the Asus XOC
> 
> Hope this helps
> 
> Thanks, Jura


Got it, I'll give the 1000W another try. I had it at around 65% when I tried it which is probably where I'd leave it. Any idea why I was seeing low clocks with it though? I was seeing in the 1600's running Port Royal.


----------



## yzonker

jura11 said:


> If your temperatures allow it, the XOC 1000W BIOS is worth it. Just cap it at 65%, which gives you 425-430W, or you can try capping it at 75%, which gives you 495W. Personally I would cap it at 75-80%, which will give you good headroom for most games, and your scores should improve
> 
> Don't use the Kingpin 520W or any 3x8-pin GPU BIOS on a 2x8-pin GPU unless it is the XOC BIOS from EVGA/Kingpin; not sure about the Asus XOC
> 
> Hope this helps
> 
> Thanks, Jura


No, neither the Asus nor the Galax 1kW will work on a 2x8pin; both hit limits early. Not sure what limits the Galax, but I think the Asus hits the 3rd 8-pin limit. 

We're lucky Vince opens the KP BIOS up so much. The Galax 3080 Ti 1kW BIOS has the same problem, so the 3080 Ti 2x8pin is stuck below 400W.


----------



## Falkentyne

yzonker said:


> No neither Asus or Galax 1kw will work on a 2x8pin. Both hit limits early. Not sure what limits the Galax, but I think the Asus hits the 3rd 8pin limit.
> 
> We're lucky Vince opens the KP bios up so much. The Galax 3080ti 1kw bios has the same problem, so 3080ti 2x8pin is stuck below 400w.


SRC limits on the Asus 1kW BIOS are messed up on 8-pin #3; someone didn't change it from stock. The 175W SRC3 limit caps 8-pin #3, while the SRC1 and SRC2 limits (these link to 8-pin #1 and 8-pin #2) are all 400W+. Since 8-pin #3 is duplicated from 8-pin #1 on a 2x8-pin card, you get hard capped right away. The Kingpin has all three SRC limits set sky high.
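The early-cap behavior described above boils down to a min() over the per-rail (SRC) limits. The sketch below is purely illustrative, not actual firmware logic; the 175W and 400W figures are the ones from this post, and the Kingpin values are placeholders standing in for "sky high":

```python
# Illustrative sketch of the per-rail cap logic described above (not firmware code).
# On a 2x8-pin card, 8-pin #3 mirrors 8-pin #1, so the lowest limit linked to
# either rail is what actually caps that physical connector.

def effective_8pin1_cap(src_limits: dict) -> int:
    """Cap (W) on physical 8-pin #1 when rail 3 duplicates it (2x8-pin card)."""
    return min(src_limits["src1"], src_limits["src3"])

asus_1kw = {"src1": 400, "src2": 400, "src3": 175}  # SRC3 left at stock per the post
kingpin  = {"src1": 500, "src2": 500, "src3": 500}  # placeholder "sky high" values

print(effective_8pin1_cap(asus_1kw))  # 175 -> hard-capped right away
print(effective_8pin1_cap(kingpin))   # 500 -> no early cap
```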


----------



## gfunkernaught

Arizor said:


> basically, I began with a literal EKWB starter kit, and before I knew it I’d replaced every aspect of that, and the waterblock and, of course recently, the psu. Learned lots though, mostly that I’ve got this strange itch to keep building and modding, like a crack fiend looking for his next fix.


Ah I remember that itch... the 1000D build was the last time I scratched it, and now it's a laceration that's just scarring over. Totally should not have bought that case. It's all gimmick. I completely ignored the fact that two 140mm fans cannot keep up with the heat output of 2x360 and 2x480 rads. That, and the fact that adding more rad space means nothing if your block can't absorb enough heat to make use of said rad space. An expensive lesson indeed. 😂


----------



## Arizor

Anyone tried the KUDAN BIOS? Bit surprised the default boost clock is so low for the most expensive card on the market: Colorful RTX 3090 VBIOS


----------



## J7SC

Arizor said:


> Anyone tried the KUDAN bios? Bit surprised the default boost clock is so low for the most expensive card on the market Colorful RTX 3090 VBIOS


$4,999 MSRP ?!  That's bobsledding continuously downhill w/o any brakes whatsoever.


----------



## kx11

Arizor said:


> Anyone tried the KUDAN bios? Bit surprised the default boost clock is so low for the most expensive card on the market Colorful RTX 3090 VBIOS


Nope, $5000 price tag and no backplate cooling


----------



## newls1

If someone wouldn't mind, can someone link me to this 1000W ReBAR-enabled BIOS you are all using, for my FTW3 3090 3x8-pin card? There seem to be several, and I'd just feel better knowing I got the right one the first time. I'll limit my card to 75-80% like mentioned in prior posts, but I'm really wanting to see if this BIOS will help me. A million thanks!


----------



## yzonker

@newls1









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## yzonker

gfunkernaught said:


> Ah I remember that itch... the 1000D build was the last time I scratched it, and now it's a laceration that's just scarring over. Totally should not have bought that case. It's all gimmick. I completely ignored the fact that two 140mm fans cannot keep up with the heat output of 2x360 and 2x480 rads. That, and the fact that adding more rad space means nothing if your block can't absorb enough heat to make use of said rad space. An expensive lesson indeed. 😂


Well a block always absorbs the same amount of heat, it's just what temp delta it takes to do it. Only caveat to this is slightly more heat is probably transferred through the backplate as core/mem temps rise.


----------



## CZonin

Just tried the 1000W XOC BIOS again. Capped power at 43% which allowed the card to draw 430W but clocks were maxing out at 1740. Temps were still really low at 53C. Really not sure why my clocks are so low with it.


----------



## newls1

yzonker said:


> @newls1
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)


Thanks


----------



## yzonker

CZonin said:


> Just tried the 1000W XOC BIOS again. Capped power at 43% which allowed the card to draw 430W but clocks were maxing out at 1740. Temps were still really low at 53C. Really not sure why my clocks are so low with it.


2x8pin? If so, needs to be 43/.66 for the correct PL.
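For reference, the 2x8-pin math above can be sketched in a few lines. A minimal sketch, assuming (as discussed in this thread) that the 1000W XOC BIOS budgets for 3x8-pin, so a 2x8-pin card only realizes roughly 0.66 of the nominal limit; the factor is an approximation, not a calibrated constant:

```python
# Rough power-limit math for the 1000W XOC BIOS on a 2x8-pin card.
# Assumption: the BIOS budgets for 3x8-pin, so a 2x8-pin card draws
# about 0.66x the nominal limit (the factor cited in this thread).

NOMINAL_W = 1000      # BIOS power limit at 100%
FACTOR = 0.66         # effective fraction on a 2x8-pin card

def actual_watts(pl_percent: float) -> float:
    """Approximate real draw for a given Afterburner PL%."""
    return NOMINAL_W * (pl_percent / 100) * FACTOR

def pl_for_target(target_w: float) -> float:
    """PL% to set in Afterburner to hit a target wattage."""
    return target_w / (NOMINAL_W * FACTOR) * 100

print(round(actual_watts(65)))      # ~429 W
print(round(pl_for_target(430)))    # ~65 %, i.e. 43/.66
```

So a 43% setpoint really lands near 284W with these assumptions, which would explain clocks topping out early.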


----------



## CZonin

yzonker said:


> 2x8pin? If so, needs to be 43/.66 for the correct PL.


Ya 2x8pin. At 43% HWInfo64 was reporting it pulling 430W. Is there something else I should be looking at?


----------



## Nizzen

CZonin said:


> Ya 2x8pin. At 43% HWInfo64 was reporting it pulling 430W. Is there something else I should be looking at?


Total power draw at the wall is what I look at: the total draw of the whole computer.


----------



## yzonker

Nizzen said:


> Total power draw from the wall is what I look at. Total draw from the computer


But you set the PL to 43% in Afterburner? The 3rd 8 pin is a duplicate and gets counted in the total. If you ran 43%, set it to 65% and re-test.


----------



## CZonin

Nizzen said:


> Total power draw from the wall is what I look at. Total draw from the computer


Ya I'll plug my meter back in to check what the total from the system is.



yzonker said:


> But you set the PL to 43% in Afterburner? The 3rd 8 pin is a duplicate and gets counted in the total. If you ran 43%, set it to 65% and re-test.


That was a comment from someone else. I'm going to plug my meter back in to check total system draw, but I really wanted to get a reading from just the GPU to make sure I wasn't pulling too much from just 2x8pin. I guess HWInfo shows incorrect values with the 1000W BIOS on 2x8pin cards.


----------



## yzonker

The 1st and 2nd 8 pin readings are ballpark, but not correct (on my card anyway). My Zotac is very well balanced between the two when I check with a clamp meter. Keep in mind that balance does not change with the bios. So it'll be the same even if HWINFO shows something different.


----------



## jura11

CZonin said:


> Ya 2x8pin. At 43% HWInfo64 was reporting it pulling 430W. Is there something else I should be looking at?


Hi there,

A 43% power limit pulls something like 280W in reality, not 430W.

If you want to pull 430W you need to set the power limit to at least 65%.

I would ignore the HWiNFO power readings with the XOC 1000W BIOS.

I'm running the same BIOS capped at 85% and I've hit the power limit I think only a few times at ultrawide 1440p; for normal gaming I'd say 75-80% will be more than enough.

Hope this helps,

Thanks, Jura


----------



## CZonin

yzonker said:


> The 1st and 2nd 8 pin readings are ballpark, but not correct (on my card anyway). My Zotac is very well balanced between the two when I check with a clamp meter. Keep in mind that balance does not change with the bios. So it'll be the same even if HWINFO shows something different.


Got it, ty for the info!



jura11 said:


> Hi there
> 
> 43% power limit pulls something like 280W in reality not 430W
> 
> If you want to pull 430W you need to set power limit to 65% at least
> 
> I would ignore HWiNFO power readings for XOC 1000W BIOS
> 
> Running same BIOS capped at 85% and I have hit power limit I think only few times at Ultrawide 1440p, for normal gaming I would say 75-80% will be more than enough
> 
> Hope this helps
> 
> Thanks, Jura


Are you also running it with a 2x8pin card?


----------



## yzonker

@CZonin Both of us are I think. I have a Zotac 3090 Trinity. I would have said something sooner but didn't catch initially you were 2x8pin also. Somehow was thinking you had a Strix rather than a TUF.

Oops, sorry @Nizzen I replied to the wrong post.


----------



## CZonin

@yzonker It's all good, ty for the help and info! How long have you been running it with that BIOS?

Testing it out again now at 71% with +110 core and +750 mem which is only slightly higher than I had it before but seeing a solid improvement on scores.


----------



## yzonker

CZonin said:


> @yzonker It's all good, ty for the help and info! How long have you been running it with that BIOS?
> 
> Testing it out again now at 71% with +110 core and +750 mem which is only slightly higher than I had it before but seeing a solid improvement on scores.


Since February or so. Card hasn't blown up yet. You do need good cooling though and keep an eye on the 8-pin cables and plugs for overheating.


----------



## jura11

CZonin said:


> Got it, ty for the info!
> 
> 
> 
> Are you also running it with a 2x8pin card?


Yes, like @yzonker we're both running a 2*8-pin GPU with the XOC 1000W BIOS as the daily BIOS, and no issues to date. Personally I'm running +105MHz on core and +1000MHz on VRAM; with my other Palit RTX 3090 GamingPro I can run +1295MHz on VRAM stable for rendering and gaming, and for benching that GPU can do +1495-1505MHz on VRAM without issue, but that's just not stable for gaming or rendering.

I've been running that BIOS since the date it was released; prior to that I ran the normal XOC 1000W BIOS, but thanks to @Nizzen we now have the ReBAR XOC 1000W BIOS.

Hope this helps

Thanks, Jura


----------



## Arizor

Not sure what goes on with my Strix when I run the KINGPIN rebar BIOS, but regardless of my PL it won't go above approx 500 as measured via my killawatt.


----------



## Arizor

Also everyone can take a breath, I've stalled my itch by now purchasing an Optimus CPU block for my 5900x, and of course that needs its own rad, noctua fans, and fittings.


----------



## J7SC

Arizor said:


> Also everyone can take a breath, I've stalled my itch by now purchasing an Optimus CPU block for my 5900x, and of course that needs its own rad, noctua fans, and fittings.
> View attachment 2528927





Spoiler



1+41 = redundancy


----------



## kryptonfly

What about TDP normalized with XOC 1000W bios ? I tried it before but my power supply turned off in the Endwalker bench at 45% PL ~1800mhz, that's why I shunted the card doing 2145mhz 975mV in same bench and no shut down anymore. Still PL around 460W in Port Royal but amply enough for gaming. I always use curve. Doing 2055mhz @906mV in TSE GT2 25 loops, my "normal" OC but some RTX games need way more juice... 

To everyone : what is your max OC in Metro Exodus Enhanced Edition ? I'm PL around 480W and can't stabilize it even at 2055mhz 931mV around 40°C.


----------



## yzonker

Coincidentally I replaced my Seasonic GX1000 PSU with a Corsair Rm1000x. No more OCP in Endwalker. I had also discovered the Seasonic would hit OCP in Firestrike Extreme also. Runs with no issue now too. Maybe I'll try to RMA the Seasonic.

I also noticed at high load (500W+) the 8-pin voltages are pinned right at 12V (like 11.97-11.99). The Seasonic would drop to 11.8V. The Seasonic I think only has 18 gauge wire though; Tom's Hardware's review says 16 for the Corsair. So that may be part of it.


----------



## Arizor

Interesting @yzonker. Is there a good resource you'd recommend to read up on how gauge and voltage function within PSUs, and how to discern quality from the dross?


----------



## ManniX-ITA

yzonker said:


> I also noticed at high load (500W+) the 8-pin voltages are pinned right at 12V (like 11.97-11.99). The Seasonic would drop to 11.8V. The Seasonic I think only has 18 gauge wire though; Tom's Hardware's review says 16 for the Corsair. So that may be part of it.


Indeed, it's something to keep in mind, especially if adding extension cables for cosmetic or practical reasons.
Every 8-pin has 3x 12V+ conductors, so AWG 18 can only carry 15A (180W) with 2% or less loss over 75cm, while for AWG 16 the limit for the same loss is up to 1.15m.
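Those gauge figures can be sanity-checked with a rough resistive-loss estimate. A sketch only: copper resistance per metre uses standard AWG values, the load is split evenly across the three 12V conductors, and the return path is assumed to match the supply path (connector resistance ignored):

```python
# Estimate resistive loss in the 12V conductors of an 8-pin PCIe cable.
# Standard copper resistance: AWG 18 ~ 20.95 mOhm/m, AWG 16 ~ 13.17 mOhm/m.

OHMS_PER_M = {18: 0.02095, 16: 0.01317}

def loss_fraction(awg: int, length_m: float, watts: float,
                  volts: float = 12.0, conductors: int = 3) -> float:
    """Fraction of power lost in the cable (supply + matching return path)."""
    amps_per_wire = watts / volts / conductors
    r = OHMS_PER_M[awg] * length_m * 2   # out and back
    v_drop = amps_per_wire * r
    return v_drop / volts

# 180 W over 75 cm of AWG 18, and the same load over 1.15 m of AWG 16
print(f"{loss_fraction(18, 0.75, 180):.1%}")
print(f"{loss_fraction(16, 1.15, 180):.1%}")
```

With these assumptions, 180W over 75cm of AWG 18 loses about 1.3%, consistent with the 2% budget above.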


----------



## J7SC

yzonker said:


> Coincidentally I replaced my Seasonic GX1000 PSU with a Corsair Rm1000x. No more OCP in Endwalker. I had also discovered the Seasonic would hit OCP in Firestrike Extreme also. Runs with no issue now too. Maybe I'll try to RMA the Seasonic.
> 
> I also noticed at high load (500W+) the 8-pin voltages are pinned right at 12V (like 11.97-11.99). The Seasonic would drop to 11.8V. The Seasonic I think only has 18 gauge wire though; Tom's Hardware's review says 16 for the Corsair. So that may be part of it.


PSU ailments don't always immediately show up as crashes, but they do influence performance when Watts get up there...per recent post, I have an old BeQuiet DarkPro 1200 which had been tortured for a while in sub-zero / quad-SLI years back but otherwise still works, and is part of my basic test-bench. However, its 12v performance in PCIe was pitiful (2 of 3 red LEDs on the 3090 Strix were blinking...) though the system / GPU never crashed. I substituted my fav PSU, the Antec 1300W HPC Platinum (Delta innards), and the results below speak for themselves...scores incidentally also improved. The Antec has the thickest wires of any PSU here...










I also have a few other old (8 years+) PSUs that still appear to work but have some issues, such as a Corsair AX1200 with a 3.3v output that usually only makes it to 2.6v - 2.8v. Again, not immediately obvious unless you have monitoring software open, but that one led to intermittent crashes.


----------



## yzonker

When I have time I'll see if stability improved any. I'm guessing it did not but who knows.


----------



## kryptonfly

@yzonker: indeed, my PSU is a little limited (Prime TX 750W); almost all minimum voltages fall below 12V. I have a new Cooler Master 1250W GWE Gold from my previous V1200 RMA (waited 5 months!) but I've put it on eBay because I prefer a Titanium spec unit (at least Platinum). I will take a look at Antec...


----------



## Turisas

Hey,

I have a question. Can I flash a 3090 FE with a bios with a higher power limit?

If so, you can tell me one pls?

LG


----------



## chibi

Got my new card, wolf in sheep's disguise! I asked for a bag and Memory Express gave me a nicer one than expected


----------



## yzonker

Turisas said:


> Hey,
> 
> I have a question. Can I flash a 3090 FE with a bios with a higher power limit?
> 
> If so, you can tell me one pls?
> 
> LG


No, the FE is incompatible with other manufacturers' BIOSes.


----------



## Turisas

yzonker said:


> No, the FE is incompatible with other manufacturers' BIOSes.


Ok thanks for your answer


----------



## yzonker

Stability seems the same with this PSU. All I could do is match my previous personal best in PR.









Result (www.3dmark.com)


----------



## J7SC

yzonker said:


> Stability seems the same with this PSU. All I could do is match my previous personal best in PR.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result (www.3dmark.com)


With those clocks and scores (1Kw bios?) already, you need some LN2 to go with a fresh PSU


----------



## yzonker

J7SC said:


> With those clocks and scores (1Kw bios?) already, you need some LN2 to go with a fresh PSU


I need better mem. I think junction temp only hits low to mid 50s during that run, but +850 is the limit. Even +875 crashes.


----------



## KedarWolf

In the Crysis Remastered games, don't enable antialiasing. I had SMAA 1TX enabled and no DLSS, and the game would crash often. If I run DLSS Quality, which disables the antialiasing, zero crashes since. 🐺


----------



## J7SC

yzonker said:


> I need better mem. I think junction temp only hits low to mid 50s during that run, but +850 is the limit. Even +875 crashes.


FYI, I never had serious VRAM temp issues even when only air-cooled, but since I added the new Phanteks block and thermal putty on the VRAM along w/ extra back-plate cooling, I was able to raise 'max efficient' memory speed by 40 MHz+


----------



## CZonin

The 1000W rebar BIOS on my 3090 TUF is working out great so far. Originally I had it at 71% PL / +100 core / +900 mem, but now I've settled on 71% PL / 2010 @ .968 / +900 mem. It doesn't seem to be hitting the new PL, barely cracks 60 C, and the clock is staying perfectly steady at 2010. I tested a lot of higher clocks but the voltage requirement to get it stable was getting too high. Pretty happy with this setup and it's performing a good amount better than with the Gigabyte Gaming OC BIOS I was running previously.


----------



## yzonker

J7SC said:


> FYI, I never had serious VRAM temp issues even when only air-cooled, but since I added the new Phanteks block and thermal putty on the VRAM along w/ extra back-plate cooling, I was able to raise 'max efficient' memory speed by 40 MHz+


I saw your previous post about the putty. It does sound like a good option. This is what I get by the end of a PR run. Water was 28C at the end (25C ambient).


----------



## yzonker

CZonin said:


> The 1000W rebar BIOS on my 3090 TUF is working out great so far. Originally I had it at 71% PL / +100 core / +900 mem, but now I've settled on 71% PL / 2010 @ .968 / +900 mem. It doesn't seem to be hitting the new PL, barely cracks 60 C, and the clock is staying perfectly steady at 2010. I tested a lot of higher clocks but the voltage requirement to get it stable was getting too high. Pretty happy with this setup and it's performing a good amount better than with the Gigabyte Gaming OC BIOS I was running previously.


Yea I find 450w or a little more is a good compromise for gaming. Not much more performance beyond that and it just makes the room hotter.


----------



## CZonin

yzonker said:


> Yea I find 450w or a little more is a good compromise for gaming. Not much more performance beyond that and it just makes the room hotter.


With my settings now it looks like it's peaking at around 400. Pretty happy with it so not sure if I'll push it anymore.


----------



## J7SC

yzonker said:


> I saw your previous post about the putty. It does sound like a good option. This is what I get by the end of a PR run. Water was 28C at the end (25C ambient).
> 
> View attachment 2529116


...instead of system-to-system, I can really only compare the before-and-after results of my own setup, never mind that it was on a single 360-60 rad w/ 3 default speed fans for both the 5950X and 3090 Strix and longish stress testing at around 24 C ambient. 

That said, I gained over 20 C on VRAM compared to the previous w-cooling setup w/ the same back-plate, but obviously it's not just the thermal putty but also the extra heat sink and a few other mods I made. Bottom line is the obvious conclusion that 'less hot GDDR6X' allows for a higher 'max efficient' VRAM setting...not really an earth shattering observation...


----------



## DirDir

CZonin said:


> The 1000W rebar BIOS on my 3090 TUF is working out great so far. Originally I had it at 71% PL / +100 core / +900 mem, but now I've settled on 71% PL / 2010 @ .968 / +900 mem. It doesn't seem to be hitting the new PL, barely cracks 60 C, and the clock is staying perfectly steady at 2010. I tested a lot of higher clocks but the voltage requirement to get it stable was getting too high. Pretty happy with this setup and it's performing a good amount better than with the Gigabyte Gaming OC BIOS I was running previously.


Sorry, I'm new, but I need to ask why people are using the 1k BIOS. If you hit 60C on that BIOS, isn't that temp bad? Why not go for a 500W+ BIOS and keep the safety features of that BIOS on?
Sorry if I asked a stupid question.


----------



## KedarWolf

Can you peeps test the Time Spy CPU test on the latest 3DMark.

With the 1000W Rebar BIOS I'd get around 16000 and it would finish at the lowest 54 FPS.

Now I'm getting 13000 and it's finishing with the lowest 41 FPS.


----------



## jura11

DirDir said:


> Sorry, I'm new, but I need to ask why people are using the 1k BIOS. If you hit 60C on that BIOS, isn't that temp bad? Why not go for a 500W+ BIOS and keep the safety features of that BIOS on?
> Sorry if I asked a stupid question.


Hi there,

There is no 400-500W BIOS for 2*8-pin GPUs. There are 400-500W BIOSes, but only for 3*8-pin GPUs, and using such a BIOS on a 2*8-pin GPU or reference PCB ends up with lower performance, because you will be hitting something like 350W, not the 400-500W you should.

The best BIOSes for a 2*8-pin GPU are the Gigabyte or KFA2 390W BIOS; if you want to push it more, the XOC 1000W BIOS is the best BIOS for us with 2*8-pin GPUs. Cap it at 65-75% and you're good to go.

Hope this helps 

Thanks, Jura


----------



## KedarWolf

KedarWolf said:


> Can you peeps test the Time Spy CPU test on the latest 3DMark.
> 
> With the 1000W Rebar BIOS I'd get around 16000 and it would finish at the lowest 54 FPS.
> 
> Now I'm getting 13000 and it's finishing with the lowest 41 FPS.


My CPU-Z and Cinebench R23 scores are normal though.


----------



## DirDir

jura11 said:


> Hi there
> 
> There is no 400-500W BIOS for 2*8-pin GPUs. There are 400-500W BIOSes, but only for 3*8-pin GPUs, and using such a BIOS on a 2*8-pin GPU or reference PCB ends up with lower performance, because you will be hitting something like 350W, not the 400-500W you should.
> 
> The best BIOSes for a 2*8-pin GPU are the Gigabyte or KFA2 390W BIOS; if you want to push it more, the XOC 1000W BIOS is the best BIOS for us with 2*8-pin GPUs. Cap it at 65-75% and you're good to go.
> 
> Hope this helps
> 
> Thanks, Jura


OK, so the XOC 1k BIOS is a 2x8-pin BIOS. What are you pulling in watts per pin?


----------



## yzonker

KedarWolf said:


> My CPU-Z and Cinebench R23 scores are normal though.


I just ran it on my 5800x earlier today. 13.2k. About as high as it goes without really pushing CO.


----------



## DirDir

KedarWolf said:


> Can you peeps test the Time Spy CPU test on the latest 3DMark.
> 
> With the 1000W Rebar BIOS I'd get around 16000 and it would finish at the lowest 54 FPS.
> 
> Now I'm getting 13000 and it's finishing with the lowest 41 FPS.


Hi

If it helps 13816 but i use a 10900kf and 0 oc atm.


----------



## J7SC

5950X on all-c 4.65 (Asus Dark & dynamicOC) with KPE 520 r_BAR VBios run from a few months back. Daily w/o CCD or curve, SMT on


----------



## Falkentyne

KedarWolf said:


> Can you peeps test the Time Spy CPU test on the latest 3DMark.
> 
> With the 1000W Rebar BIOS I'd get around 16000 and it would finish at the lowest 54 FPS.
> 
> Now I'm getting 13000 and it's finishing with the lowest 41 FPS.


13676 on 11900k @ 4.7
45.95 fps
3090 FE shunted


----------



## jura11

DirDir said:


> OK, so the XOC 1k BIOS is a 2x8-pin BIOS. What are you pulling in watts per pin?


Hi there,

The XOC 1000W BIOS from Kingpin/KPE is in reality a 3*8-pin BIOS, but it works quite well with 2*8-pin GPUs. I wouldn't use it on Founders Edition cards, as you can't use an AIB BIOS on FE GPUs.

At a 65% power limit you will be pulling 430W, at 75% around 480-490W, and at a 100% power limit you will be pulling 660W I think, or something around that figure.

With this BIOS just ignore the 3rd 8-pin reading in HWiNFO.


Hope this helps 

Thanks, Jura


----------



## DirDir

jura11 said:


> Hi there
> 
> XOC 1000W BIOS from Kingpin/KPE is in reality 3*8-pin but works quite well with 2*8-pin GPU, I wouldn't use it on Founders Edition GPUs as you can't use AIB BIOS on FE GPUs
> 
> At 65-75% you will be pulling 430W with 65% power limit and 480-490W with 75% power limit and at 100% power limit you will be pulling 660W I think or something around that figure
> 
> With this BIOS just ignore 3rd 8-pin reading in HWiNFO
> 
> 
> Hope this helps
> 
> Thanks, Jura


Hi

Ooh, I didn't know that it worked on 2x8-pin cards. Hmm, I don't think I'll even try that BIOS; I'm pulling close to 520W atm and can't keep my temps down. Is the KPE 1kW BIOS any different than the HOF 1kW BIOS?

Ty Jura


----------



## CZonin

A quick note for anyone else running the 1000W BIOS on a 2x8pin card that monitors with HWInfo64. I found out that you can right click any item > Customize values > Multiply. I went into GPU Power and changed the multiple from 1 to .66. It's probably not completely accurate, but it gives a more realistic value compared to it thinking you have 3x8pin.


----------



## yzonker

So what does everyone run for a filter in the loop? Are they very restrictive? 

Just got done cleaning some crap out of my CPU block. Cleaned the rads originally but still got stuff in it.


----------



## Nizzen

yzonker said:


> So what does everyone run for a filter in the loop? Are they very restrictive?
> 
> Just got done cleaning some crap out of my CPU block. Cleaned the rads originally but still got stuff in it.


No filter, just pure distilled water in the loop + a bit of anti-algae. Maybe I'll clean the block next year, if I don't just buy new blocks


----------



## Falkentyne

Nizzen said:


> No filter, just pure distilled water in the loop + a bit of anti-algae. Maybe I'll clean the block next year, if I don't just buy new blocks


What's the best anti-algae to mix in distilled water?


----------



## Nizzen

Falkentyne said:


> What's the best anti-algae to mix in distilled water?


I use the same stuff they use in aquariums for fish 😅 Looks like it's working just fine.

In my bench loop I used about 3 dL of some Aquacomputer clear premix together with distilled water. Cheaper is better. More money for hardware 😆


----------



## jura11

Falkentyne said:


> What's the best anti-algae to mix in distilled water?


Personally I would get Mayhems Biocide+ with Inhibitor+; I used that on my friend's loop. On my own loop I use Mayhems X1 and no issues to date, and my PC runs 24/7; my last coolant change was, I think, 2 years ago.

Hope this helps 

Thanks, Jura


----------



## yzonker

Got the machine up and running again. Despite the fins being quite dirty, there seems to be no change in performance. GPU block delta didn't change either, so flow probably didn't change significantly either. Guess it's fairly forgiving. Good to know for future reference.


----------



## J7SC

yzonker said:


> Got the machine up and running again. Despite the fins being quite dirty, there seems to be no change in performance. GPU block delta didn't change either, so flow probably didn't change significantly either. Guess it's fairly forgiving. Good to know for future reference.


...I'm still beavering away at Raven A (2 main mobos on the front of a TT Core P8), but at least the rads are all mounted (2 circuits) and the fans have all been tested out... just need to hide the wiring harnesses for 2 hubs and a total of 42 fans mounted on that mobile table w/ wheels. Daily setup is:

1320x60+ rads and 3x D5 for the 5950X / 3090 Strix combo,
1200x60+ and 2x D5 for the 3950X / 6900XT.
With the ODs, I can add to / change that quickly, and the wiring is such that one SATA connection will disable all the fans per system / bank.

sorry for the pic quality


----------



## Arizor

Spoiler alert: We've been living in a simulation all along folks, and that simulation runs on @J7SC 's home setup.


----------



## yzonker

And what water cooling component needs a big pair of channel locks? Lol.


----------



## J7SC

yzonker said:


> And what water cooling component needs a big pair of channel locks? Lol.


...ideal to lock things into place when you have only two hands but a gazillion connections to make


----------



## yzonker

Also as a data point, it's been about 3 months since I put Kryonaut Extreme on my CPU and GPU. Paste wasn't dried up at all on my CPU when I pulled the block today. GPU block delta is still the same too.


----------



## kryptonfly

KedarWolf said:


> Can you peeps test the Time Spy CPU test on the latest 3DMark.
> 
> With the 1000W Rebar BIOS I'd get around 16000 and it would finish at the lowest 54 FPS.
> 
> Now I'm getting 13000 and it's finishing with the lowest 41 FPS.


Gigabyte GamingOC bios + shunt :

 


jura11 said:


> Hi there
> 
> XOC 1000W BIOS from Kingpin/KPE is in reality 3*8-pin but works quite well with 2*8-pin GPU, I wouldn't use it on Founders Edition GPUs as you can't use AIB BIOS on FE GPUs
> 
> At 65-75% you will be pulling 430W with 65% power limit and 480-490W with 75% power limit and at 100% power limit you will be pulling 660W I think or something around that figure
> 
> With this BIOS just ignore 3rd 8-pin reading in HWiNFO
> 
> 
> Hope this helps
> 
> Thanks, Jura


Well... maybe I'm crazy, but I was still power-limited by TDP normalized (inner limits) despite the shunts, so I flashed the XOC 1000W BIOS. Fortunately I know this card well (Gigabyte Turbo 2*8-pin), its power draw and behavior, so I'm aware of the risks. Now I've definitively removed that wicked normalized PL. Despite the shunts, "normal" TDP & normalized are close enough:

The power figures are all individually "handmade": I ran a consistent game at stock (no shunt, same OC, 2055 @ 887mV) to reach 390W (Gigabyte BIOS), then did the same again with this XOC 1000W + shunt and applied the "correct" multiple. I know for sure the total power is right, but I don't think the others are, though they're near the truth. I have trouble with 8-pin #1 & #2 (bad shunt, I knew it). I can do 2175MHz @ 993mV and +1200 mem in Port Royal, but my PSU is too weak (Prime TX-750); OCPs occur in the Endwalker bench, and if I OC the CPU, Port Royal scores & voltages (11.77V) drop, so the CPU stays at stock. I set the PL at 42% in Afterburner for safety, around 540W max intake (Endwalker, CPU @ stock). I ordered an Antec Signature 1000W Titanium (Seasonic OEM), hoping there's no more OCP.

Question 1: To XOC 1000W BIOS users: how much is your TDP normalized and "normal" TDP during Port Royal?
Question 2: Can I remove the shunts and keep the same headroom without ever hitting the PL? I don't want to hit the normalized PL at all.

Thank you all
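The "handmade" calibration described above boils down to a simple ratio. A minimal sketch with hypothetical numbers (the 590W shunted reading is invented for illustration; only the 390W stock figure comes from the post): run the same fixed workload once on the stock BIOS, where the reading is trusted, and once shunted, then scale future readings by the ratio:

```python
# Derive a correction multiplier for shunt-modded power sensors.
# Run the same fixed workload twice: once stock (trusted reading),
# once shunted (inflated reading); the ratio corrects future readings.

def calibration_factor(true_w: float, reported_w: float) -> float:
    """Multiplier that maps inflated shunted readings back to real watts."""
    return true_w / reported_w

# Hypothetical example: a workload known to draw 390 W on the stock
# BIOS reads 590 W after the shunt mod.
k = calibration_factor(390, 590)
print(round(k, 3))            # ~0.661
print(round(540 * k))         # a later 540 W reading is really ~357 W
```

The same factor can be entered in HWiNFO's "Customize values > Multiply" dialog so the corrected figure is shown directly.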


----------



## rix2

Please, can anyone send a link to the KP 520W ReBAR-enabled BIOS? It was not enabled on 94.02.42.C0.0C, thanks!


----------



## yzonker

rix2 said:


> Please anyone send the link KP520w rebar enabled bios, this was not enabled 94.02.42.C0.0C, thanks!


That version should have reBar enabled. You may have another issue. Just to verify, this copy should have it enabled, 









EVGA RTX 3090 VBIOS: 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





I'm basing that off of the 3-30 add date and 3-13 build date. That was when reBAR launched.


----------



## yzonker

Double post


----------



## chibi

For my 3090 Strix, if I wanted to get some more power limit once it's under custom loop, would the evga 500w bios be the choice? I can use the kingpin 1000w if I wanted to go for personal records on benchmarks, but I assume it would not be the bios to go to for 24/7 gaming, right?


----------



## ManniX-ITA

chibi said:


> I can use the kingpin 1000w if I wanted to go for personal records on benchmarks, but I assume it would not be the bios to go to for 24/7 gaming, right?


There are many options...
HOF OC Lab 500W/1000W, the Kingpin 520W/1000W, the ASUS XOC/XOC2.
I'm testing the ASUS XOC right now and it seems OK, but it doesn't have ReBAR.
A little bit lower than the stock Strix V4 with ReBAR.
But it does raise the power limit to 520W, unlike the HOF and KingPin 520W, which were topping out at 500W.
You probably need to test them once it's under the custom loop.

I guess the KP 1000W could also be used for daily; just setting decent limits should work.
The main issue is the memory is always at max speed, but you can set a 2D profile in Afterburner with -250 and it should clock down.

@Falkentyne 
Did you maybe get ASUS XOC bioses with ReBar?


----------



## chibi

ManniX-ITA said:


> There are many options...
> HOF OC Lab 500W/1000W, the Kingpin 520W/1000W, the ASUS XOC/XOC2.
> I'm testing the ASUS XOC right now and seems ok but it doesn't have ReBar.
> A little bit lower than the stock Strix V4 with ReBar.
> But it does raise the power limit to 520W unlike the HOF and KingPin 520W which were topping at 500W.
> You probably need to test them once it's under custom loop.
> 
> I guess the KP 1000W also could be used for daily. Just setting decent limits should work.
> The main issue is the memory is always at max speed but you can set a 2D profile with Afterburner with -250 and should clock down.
> 
> @Falkentyne
> Did you maybe get ASUS XOC bioses with ReBar?


Cool, I didn't know ASUS has its own XOC BIOS. The first page of this thread only mentioned the EVGA 1000W BIOS. Do you know where I can read more on the ASUS XOC and where to download it?


----------



## J7SC

chibi said:


> For my 3090 Strix, if I wanted to get some more power limit once it's under custom loop, would the evga 500w bios be the choice? I can use the kingpin 1000w if I wanted to go for personal records on benchmarks, but I assume it would not be the bios to go to for 24/7 gaming, right?


On my w-cooled Strix, I use the stock VBIOS (v3) and the KPE 520, both w/ r_BAR... I may try the KPE 1K later just for fun, as I have the cooling for it. Keep in mind though that the Strix won't show correct PCIe power readings on all three PCIe power inputs with an EVGA VBIOS.

The Asus 1kW XOC seems broken / doesn't do much... maybe there's an updated (and fully functional) Asus 1kW now?


----------



## ManniX-ITA

chibi said:


> Cool I didn't know ASUS has it's own XOC bios. The first page of this thread only mentioned the evga 1000W bios. Do you know where I can read more on the ASUS XOC and where to download?


You need to read about it; they are theoretically 999W but limited (seems like a mistake):









[Official] NVIDIA RTX 3090 Owner's Club: "@Arizor do tell your adventures sir hahahaha basically, I began with a literal EKWB starter kit, and before I knew it I'd replaced every aspect of that, and the waterblock and, of course recently, the psu. Learned lots though, mostly that I've got this strange itch to keep building and..." (www.overclock.net)

[Official] NVIDIA RTX 3090 Owner's Club: "I need some feedback from you folks on Strix 3090 OC mvddc. As established before, it is on 'v2' stock bios and air-cooled, but does have helper fans for the back-VRAM and temps are ok. Mvddc is NOT bouncing off a 90 W limit, but now I'm wondering about the other direction - what the max safe..." (www.overclock.net)





Which maybe makes them preferable to the totally uncapped KP 1000W if you want to tread lightly.


----------



## ManniX-ITA

J7SC said:


> The Asus 1kw XOC seems broken / doesn't do much...may be there's an updated (and fully functional) Asus 1kw now ?


I have started to play with the Asus XOC BIOS because of the memory settings:










The PL is even higher than the KP 1000W's, which seems to help my limited memory OC. I was crashing in a matter of minutes in Valheim above +900; now it seems to run fine with +1100 (played half an hour).
I'll check with more stuff.

Anyone knows if there's a newer ABE version than 0.06?


----------



## Nizzen

The Asus "XOC" 3090 ReBAR BIOS is about 550-555W, but has protections etc. disabled... so it's perfect for XOC IF you shunt-mod it.
Higher voltage for vmem is a plus.
You'll find it in my personal collection here:






NVIDIA GeForce RTX 30xx BIOSes for flashing: How to back up your GPU BIOS (CMD, open as administrator): nvflash64 --protectoff, then nvflash64 --save 30xxModel.rom. How to flash a BIOS: nvflash64 --protectoff, then nvflash64 -6 biosnavn.rom. You'll find that here: FLASH | GUIDE. Nvflash for 30xx graphics cards: nvflash64.rar. Nvflash for FE cards: This... (www.diskusjon.no)


----------



## GRABibus

Nizzen said:


> The asus "xoc" 3090 rebar bios is about 550-555w, but has disabled protection etc... so it's perfect for xoc IF you shuntmod it.
> Higher voltage for vmem is a plus
> You find it in my personal collection here:
> 
> NVIDIA GeForce RTX 30xx BIOSes for flashing
> www.diskusjon.no


Add the MSI Suprim X BAR to your collection


----------



## GRABibus

KedarWolf said:


> Can you peeps test the Time Spy CPU test on the latest 3DMark.
> 
> With the 1000W Rebar BIOS I'd get around 16000 and it would finish at the lowest 54 FPS.
> 
> Now I'm getting 13000 and it's finishing with the lowest 41 FPS.


15796 
5900X (PBO/CO), settings in sig
Strix 3090 with MSI Suprim X BAR Bios.


----------



## J7SC

GRABibus said:


> 15796
> 5900X (PBO/CO), settings in sig
> Strix 3090 with MSI Suprim X BAR Bios.
> 
> View attachment 2529217


...nice ! 1 kw XOC on air or MSI ?


----------



## Nizzen

GRABibus said:


> 15796
> 5900X (PBO/CO), settings in sig
> Strix 3090 with MSI Suprim X BAR Bios.
> 
> View attachment 2529217


Where is the gpu result?

Want cpu benchmark?
Here you go 😁


----------



## GRABibus

Nizzen said:


> Where is the gpu result?
> 
> Want cpu benchmark?
> Here you go 😁


I only did the CPU bench in Time Spy


----------



## chibi

Nizzen said:


> The asus "xoc" 3090 rebar bios is about *550-555w*, but has disabled protection etc... so it's perfect for xoc IF you shuntmod it.
> Higher voltage for vmem is a plus
> You find it in my personal collection here:
> 
> NVIDIA GeForce RTX 30xx BIOSes for flashing
> www.diskusjon.no



@Nizzen - from your thread, I see there are two ASUS 3090 Strix BIOSes (520 + 1000 ReBar). Do I need the 1000 ReBar to get 550~555W? Or will the 520 get me there? Also, will these have any issues reading the GPU-Z/HWiNFO sensors/power readings?


----------



## Nizzen

chibi said:


> @Nizzen - from your thread, I see there is two ASUS 3090 Strix bios (520 + 1000 Rebar). Do I need the 1000 rebar to get 550~555W? Or will the 520 get me there? Also, will these have any issues reading the GPU-Z/HWInfo sensors/power readings?
> 
> View attachment 2529219


Tip of the day: Try them both, then report back


----------



## chibi

At work right now, can't test for another 7 hours. First time flashing a BIOS too, so I need to do some reading beforehand


----------



## GRABibus

J7SC said:


> ...nice ! 1 kw XOC on air or MSI ?


Here is total Time Spy score :









I scored 21 166 in Time Spy


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com














Strix on stock air cooler with MSI Suprim X Bar bios
23°C ambient
5900X with PBO/CO (See in sig)

I can make a 16k+ CPU score with my 5900X with only an H115 RGB Platinum as cooler... Not too bad


----------



## des2k...

kryptonfly said:


> Question 1 : To XOC 1000W bios users : How much is your TDP normalized and "normal" TDP during Port Royal ?
> Question 2 : May I remove shunt and keep same "headroom" without (never) hitting PL ? I don't want at all to hit normalized PL.


1. TDP Normalized is exactly the PCIe power on the 2x8-pin cards, so you can be at 94% TDP, but if for some reason the PCIe slot pulls 100W then it will read 100% normalized.
2. It's 660W max on the XOC for 2x8-pins.


----------



## Falkentyne

des2k... said:


> 1, TDP normalized is exactly PCIE power on 2x8pins
> so you can be at 94% TDP but for some reason if PCIE will pull 100w then it will be 100% normalized
> 2, it's 660w max on xoc for 2x8pins


TDP Normalized can even exceed PCIe power, because MVDDC also reports to it if the MVDDC BIOS power limit is set too low.
The NVVDD and MSVDD current limits on their voltages also report to it, but that only becomes a problem above a 480W power limit...
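As a rough mental model of what the two posts above describe, TDP Normalized behaves like the utilization of the most-constrained power rail, so a single rail (the PCIe slot, an 8-pin, or MVDDC) can peg it at 100% while the plain TDP% is still lower. A minimal sketch, with hypothetical rail names and limits (not read from any real BIOS):

```python
# Illustrative only: rail names and wattages below are made up.
# "TDP Normalized" is modeled as the highest per-rail utilization,
# while plain TDP% is total draw vs. total limit.
def tdp_percentages(draw_w, limit_w):
    total_pct = 100.0 * sum(draw_w.values()) / sum(limit_w.values())
    normalized_pct = 100.0 * max(draw_w[r] / limit_w[r] for r in draw_w)
    return round(total_pct, 1), round(normalized_pct, 1)

draw  = {"pcie_slot": 66, "8pin_1": 150, "8pin_2": 140, "mvddc": 85}
limit = {"pcie_slot": 66, "8pin_1": 175, "8pin_2": 175, "mvddc": 90}

total, normalized = tdp_percentages(draw, limit)
# The slot is at its cap, so Normalized reads 100% even though total TDP% is lower.
```

This matches the behavior described above: the card can sit well under its total power limit while one rail's own cap is already the binding constraint.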


----------



## King G

I'm sorry for the runaway question, but I'm sort of new here. I've read a lot of the comments for months, but I decided to join the fun. I currently own a Strix 3090, and long story short, I returned an old item to Newegg and they honored it, but only with store credit, so I said screw it, I'll order thermal pads on their mistake.

I have repasted my GPU before with Thermal Grizzly but never done pads. Anything I should know? I thought I'd ask the experts here for advice. I also wanted to know if I ordered enough, because these pads are tiny! And they are so expensive, but I had to use the $300 credit and I didn't really need anything else.

Thanks in advance, and if this isn't allowed here please ignore me, and again I'm sorry.


----------



## Falkentyne

King G said:


> Im sorry for the runaway question but I'm sort of new here. I read a lot of the comments for months but I decided to join the fun. I currently own a Strix 3090 and long story short I returned an old item to newegg and they honored it but only store credit, I said screw it i'll order thermal pads on their mistake.
> 
> I have repasted my GPU before with Thermal Grizzly but never done pads. Anything I should know? I thought i'd ask the experts here for advice. I also wanted to know if I ordered enough because these pads are tiny! and they are so expensive but I had to use the $300 credit and I didn't really need anything else.
> 
> Thanks in advance and if this isn't allowed her please ignore me and again im sorry.
> View attachment 2529267


Should have ordered Gelid Extremes.
A lot cheaper than that and you get a lot more pad.









GP-EXTREME
gelidstore.com

GP-ULTIMATE 120mm
gelidstore.com


----------



## King G

Falkentyne said:


> Should have ordered Gelid Extremes.
> A lot cheaper than that and you get a lot more pad.



Funny thing is I did, but they sent a bad sample so I'm done with them. I literally got crumbling pads, all breaking apart.


----------



## geriatricpollywog

Falkentyne said:


> Should have ordered Gelid Extremes.
> A lot cheaper than that and you get a lot more pad.


I just ordered a 120x120x2mm Thermalright pad for $30 shipped. It was the most pad I could find for the money.









Amazon.com: Thermalright Thermal Pad 12.8 W/mK Non Conductive Heat Dissipation Silicone Pad Silicone Thermal Pads for PC Laptop Heatsink/CPU/GPU/LED Graphics Card Motherboard Silicone Grease Pad Multi-Size(2.0mm) : Electronics


www.amazon.com


----------



## King G

0451 said:


> I just ordered a 120x120x2mm Thermalright pad for $30 shipped. It was the most pad I could find for the money.
> 


I do really like Thermalright as a company. I just think I'm going to stick with Fujipoly, since they claim to have the best W/mK available.


----------



## J7SC

King G said:


> Funny thing is I did but they sent a bad sample so im done with them. I literally got crumbles of pads all breaking apart.


A lot of folks really like the Fujipoly you listed...but it's also a real crapshoot these days whether something has been stored right, or for that matter is actually fake rather than from the brand name you wanted.

I use and recommend _thermal putty_ these days (GigiKey's TG-PP10) for VRAM and power phases rather than pads, including on my 3090 Strix. But the putty is all sold out (again). 

For pads, I have done just fine with Thermalright's 12.8 W/mK pads, though even there you can get fake / bad samples. If the Fujipoly arrives in good condition, you've got a very nice product indeed. Do make sure you read up about mounting pressures etc. Finally, when I used pads, even good ones, I always added a big line of MX4/5 on top. Not everyone agrees with that though...


----------



## kryptonfly

des2k... said:


> 1, TDP normalized is exactly PCIE power on 2x8pins
> so you can be at 94% TDP but for some reason if PCIE will pull 100w then it will be 100% normalized
> 2, it's 660w max on xoc for 2x8pins





Falkentyne said:


> TDP normalized can even exceed PCIE power because MVDDC also reports to it, if MVDDC Bios power limit is set too low.
> NVVDD and MSVDD current limit on their voltages also report to it but that only becomes a problem if over 480W power limit...


Thanks guys 👍 
Hmm... So if I understand correctly, we can hit normalized PL with the XOC 1000W bios in Port Royal? Feel free to correct me if I'm wrong. Because I kept my shunts + flashed the XOC 1000W bios, and I'm around 45% normalized PL and 39% PCIe power in Port Royal for ~510W (see my previous post). That means there's no PL at all anymore; I could even go beyond a true 1000W on just 2x8-pin! I just want to know what % normalized PL and PCIe power look like in Port Royal with the XOC 1000W bios. A HWiNFO screenshot would be perfect 👌


----------



## ManniX-ITA

chibi said:


> @Nizzen - from your thread, I see there is two ASUS 3090 Strix bios (520 + 1000 Rebar). Do I need the 1000 rebar to get 550~555W? Or will the 520 get me there? Also, will these have any issues reading the GPU-Z/HWInfo sensors/power readings?


Thanks @Nizzen really appreciated!
The 520W ReBar is the same as 1000W, same hash.

Done some quick & dirty testing 

The KP 1000W is unusable as a daily; the memory underclock trick doesn't work. Power usage at idle is always 120-140W.
Best scores, highest temperatures of course. Tried with 52/55% TDP and it beats everything else. Memory OC limited to +900 MHz stable.
It also has a different DP port configuration, so I've lost the 2nd monitor.
(If I recall, the HOF OC Lab bios has the same config; I'll have to test that one again too.)

The ASUS XOC 1000W indeed has ReBar, but it's a bit weird.
Unlike the 999W it doesn't have temp limits, and memory OC is limited to +900 MHz stable.
Same idle power consumption as the KP1000, even a bit more. 
The power on the 8-pins is limited like the previous non-ReBar one: 150/175/150W.
Weird, because it's not stable with TDP at 52/55% or 100%, but it is at 66%.
Tried that just because I've read it's a magic number for the 2x8-pin cards.
Since it's not usable as a daily, the KP1000 is better.

The best daily so far, probably doing even better with WC, is the Asus XOC 999W without ReBar.
Best TimeSpy score after the KP1000, higher than with all the other BIOSes with ReBar.
I could also pass the Metro Exodus EE torture test with 15 MHz more on the GPU clock.



King G said:


> I currently own a Strix 3090


Do you have the stock cooler?
Because then those sizes are not the right ones to buy.



Spoiler: Strix thermal pads


----------



## King G

ManniX-ITA said:


> Do you have the stock cooler?
> Cause those sizes are not the right ones to buy then.
> 
> Spoiler: Strix thermal pads
> 
> View attachment 2529275


I ordered 2 of each size?! I figured that, for example, the 2.5mm would be two pads stacked on top of each other. I have the same image for reference. Am I supposed to purchase the exact same sizes?!


----------



## King G

J7SC said:


> A lot of folks really like the Fujipoly you listed...but it's also a real crap shoot these days if something hasn't been stored right or for that matter is actually fake rather than from the brand name you wanted.
> 
> I use and recommend _thermal putty_ these days (GigiKey's TG-PP10) for VRAM and power phases rather than pads, including on my 3090 Strix. But the putty is all sold out (again).
> 
> For pads, I have done just fine with Thermalright's 12.8w/mk pads, though even there, you can get fake / bad samples. If the Fujipoly arrives in good condition, you got a very nice product indeed. Do make sure you read up about mounting pressures etc. Finally, when I used pads, even good ones, I always added a big line of MX4/5 on top. Not everyone agrees with that though...



Mounting pressures on the pad product itself? I'm not sure where to find that. I also really like the idea of adding more thermal paste on top, since it crossed my mind; why would this be recommended?


----------



## ManniX-ITA

King G said:


> I ordered 2 of each size?! I figured that for example the 2.5mm would be two pads stacked on each other. I have the same image for reference. Am I supposed to purchase the exact same size?!


Well yes, I didn't consider this option 
I suppose you can stack them.
I don't know the Fujipoly, never had one.
They'll come with the Optimus block and it'll be a first for me.
I've heard they are very stiff; in that case you can have them "glue" with a very, very thin layer of thermal paste in between.
And since they are stiff, as already noted, you need to be very careful about the mounting pressure.

My advice: if you don't have one, buy a torque-controlled screwdriver.
I've remounted my Strix cooler without one, because I realized I don't have a hex-shank PH00 bit, and the temps are higher than before.
I did my best without it, but it's not the same.


----------



## King G

ManniX-ITA said:


> Well yes, didn't considered this option
> I suppose you can stack them.
> Don't know the Fujipoly, never had one.
> They'll come with the Optimus block and it'll be a first for me.
> Heard they are very stiff, in case you can have them "glue" with a very very thin layer of thermal paste in between.
> And since thy are stiff, as already noted, you need to be very careful about the mounting pressure.
> 
> My advice, if you don't have one buy a torque controlled screwdriver.
> I've remounted my Strix cooler without it, cause I've realized I don't have an hex PH00, and the temps are higher than before.
> Done my best without but it's not the same.



I'm not going to lie, I'm a bit new to this mounting pressure thing, I've never heard of it. What exactly do I need to get started? Where can I find the right pressure for the PCB?


----------



## ManniX-ITA

King G said:


> mounting pressures on the pad product itself? Im not sure where to find it. I also really like the idea of adding more thermal paste on top since it crossed my mind, why would this be recommended?





King G said:


> Im not going to lie im a bit new to this mounting pressure thing i've never heard of it. What exactly do I need to get started? Where can I find the right pressure for the pcb


The problem is die contact on the GPU.
The stiffer the pads, the easier it is for the GPU die to make poor contact, and then you get sky-high temps.
The Gelid Extreme (good batches, of course) are the best because they are very soft.
Almost no issue with GPU die contact.
The others, especially the Fujipoly, since they are stiff, can easily alter the distance between the cooler and the die.
Optimus is the only one providing them by default, because their block is super refined and without any imperfection.
Not the same with stock coolers and Bykski, EKWB, etc.


----------



## J7SC

ManniX-ITA said:


> Thanks @Nizzen really appreciated!
> The 520W ReBar is the same as 1000W, same hash.
> 
> Done some quick & dirty testing
> 
> The KP 1000W is unusable as daily; the memory underclock trick doesn't work. Power usage in idle is always 120W-140W.
> Best scores, highest temperatures of course. Tried with 52/55 % TDP and beats everything else. Memory OC limited to 900 MHz stable.
> Also has a different DP ports configuration so I've lost the 2nd monitor.
> (If I recall the HOF OC Lab bios has the same config, I'll have to test again that one too).
> 
> The ASUS XOC 1000W indeed has ReBar but it's a bit weird.
> Unlike the 999W doesn't have temp limits and memory OC is limited to 900 MHz stable.
> Same power consumption in idle as the KP1000, even a bit more.
> The power on 8p is limited as the previous non ReBar, 150/175/150W.
> Weird cause it's not stable with TDP 52/55% or 100%. But it is at 66%.
> Tried just cause I've read is a magic number for the 2x8p cards.
> Since it's not usable as daily, the KP1000 is better.
> 
> The best daily so far, probably doing even better with WC, is the Asus XOC 999W without ReBar.
> Best TimeSpy score after the KP1000, higher than with all the other BIOS with ReBar.
> I could also pass the Metro Exodus EE torture with 15 MHz more on the GPU clock.
> (...)


...that's helpful ! I'm currently on the Asus V3 (stock, r_BAR) and the KPE 520 r_BAR (orig) on my Strix OC. As I'm about to hook up a 2nd monitor, the '999' sounds like the way to go, though r_BAR would be nice.

FYI, the Strix has 2x HDMI and 3x DisplayPort while the Galax has 1x HDMI and 3x DisplayPort...have you or @Nizzen tested the Galax XOC r_BAR re. performance, daily usability and importantly, whether at least one of the DisplayPort connections works when running it on the Strix ?


----------



## King G

ManniX-ITA said:


> The problem is die contact on the GPU.
> More stiff are the pads and easier is the GPU die will not make good contact and you get sky high temps.
> The Gelid Extreme (good batches of course) are the best cause they are very soft.
> Almost no issue with GPU die contact.
> The others, especially the Fujipoly, since they are stiff they can easily alter the distance between the cooler and the die.
> Optimus is the only one providing them by default cause the block is super refined and without any imperfection.
> Not the same with stock coolers and Byksky, EKWB, etc


I see what you're saying. Theoretically, even if I tighten the radiator all the way, because of the pad thickness, if they are too stiff I could get worse core thermals. Well, I really hope they come in super squishy, or else I'll return these too, unfortunately.


----------



## kryptonfly

ManniX-ITA said:


> The KP 1000W is unusable as daily; the memory underclock trick doesn't work.
> (...)
> The best daily so far, probably doing even better with WC, is the Asus XOC 999W without ReBar.


What is the difference between the EVGA XOC 1000W bios and the Asus XOC 999W? Link to the Asus XOC 999W no-ReBar bios?


----------



## ManniX-ITA

J7SC said:


> FYI, the Strix has 2x HDMI and 3x DisplayPort while the Galax has 1x HDMI and 3x DisplayPort...have you or @Nizzen tested that one re. performance, daily usability and importantly, whether at least one of the DisplayPort connections work when running the Galax XOC r_BAR on the Strix ?


For the Galax XOC you mean the HOF OC Lab, right?
I'm not sure I tried the 1000W, but the 520W for sure. A good one, but not really bringing anything more than the Strix original.
Not sure about the DP/HDMI ports, but one DP is working for sure, I think even 2. I remember I considered the 520W as a daily.



kryptonfly said:


> What is the difference between the EVGA XOC 1000W bios and Asus XOC 999W ? Link to the Asus XOC 999W no-ReBar bios ?


You can download it here:








[Official] NVIDIA RTX 3090 Owner's Club


Anyone know why my MSI RTX 3090 MIS Trio (on water with watercooled backplate) will not take the 520w Bios. I can check in GPUZ and it shows 520w but when in RTX Quake it limits power to around 490-500w. I just loaded the 1000w bios and limited it to 550w (same GPU/Mem overlock as before) and...




www.overclock.net





The ASUS XOC 999W does have temp limits and dynamic memory clock (50-60W in idle, not 120-140W).
The power limit is 1000W but artificially capped, with the 8-pin max PL at 150/175/150.
It does better than all the other 520W BIOSes out there, with 518W max board power draw.
I get +200 MHz more memory OC (higher power limits on the VRAM seem to help) and +15 MHz on the GPU clock (have to verify).
Despite the lack of ReBar it seems the best performer after the KP1000.
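As a back-of-the-envelope check of those connector caps (assuming the PCIe slot's nominal 5.5 A at 12 V limit from the spec; the 3.3 V rail is ignored here), the per-rail limits comfortably bound the ~518W figure above:

```python
# Sanity check of the 999W XOC per-connector caps described above.
# 8-pin caps from the post: 150/175/150 W; the slot cap is an assumption
# based on the PCIe spec's 5.5 A @ 12 V allowance.
eight_pin_caps_w = [150, 175, 150]
slot_12v_cap_w = 5.5 * 12  # 66.0 W

connector_total = sum(eight_pin_caps_w)           # 475 W from the 8-pins
board_ceiling = connector_total + slot_12v_cap_w  # 541 W theoretical ceiling
# The ~518 W max board draw reported above sits under this ceiling,
# which is consistent with the 8-pin caps being the effective limit.
```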


----------



## ManniX-ITA

King G said:


> I see what your saying. theoretically even if I tighten the radiator all the way because of the pad thickness if they are too stiff I could get worse core thermals. Well I really hope they come in super squishy or else i'll return these too unfortunately.


Buy a torque-controlled screwdriver and a PH00 bit.
Otherwise I see it as almost mission impossible


----------



## King G

ManniX-ITA said:


> Buy a torque controlled screw driver and a PH00 head.
> Otherwise I see it as almost a mission impossible


Would you mind sharing a link for both recommendations? I see so many options. I don't mind picking one up; who knows, I might need it again down the road!


----------



## Nizzen

J7SC said:


> ...that's helpful ! I'm currently on the Asus V3 (stock, r_BAR) and the KPE 520 r_BAR (orig) on my Strix OC. As I'm about to hook up a 2nd monitor, the '999' sounds like the way to go, though r_BAR would be nice.
> 
> FYI, the Strix has 2x HDMI and 3x DisplayPort while the Galax has 1x HDMI and 3x DisplayPort...have you or @Nizzen tested the Galax XOC r_BAR re. performance, daily usability and importantly, whether at least one of the DisplayPort connections works when running it on the Strix ?


I have a 3090 HOF, but only the 1000W no-ReBar bios. Is there any 1000W ReBar HOF?


----------



## J7SC

ManniX-ITA said:


> For the Galax XOC you mean the HOF OC Lab right?
> I'm not sure I tried the 1000W, for sure the 520W. Good one but not really bringing anything more than the Strix original.
> Not sure about the DP/HDMI ports but one DP for sure working. I think even 2. I remember I considered the 520W as daily.
> 
> 
> 
> You can download it here:
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club
> 
> 
> Anyone know why my MSI RTX 3090 MIS Trio (on water with watercooled backplate) will not take the 520w Bios. I can check in GPUZ and it shows 520w but when in RTX Quake it limits power to around 490-500w. I just loaded the 1000w bios and limited it to 550w (same GPU/Mem overlock as before) and...
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> The ASUS XOC 999W does have temp limits and dynamic memory clock (50-60W in idle, not 120-140W).
> Power limit is 1000W but artificially capped with 8pin max PL at 150/175/150.
> It does better than all other 520W bios out there with 518W max board power draw.
> I get +200 MHz of memory OC (higher Power Limits on VRAM seems to help) and +15 MHz on GPU clock (have to verify).
> Despite the lack of ReBar seems best performer after KP1000.


...yes - the Galax HoF OC Labs 1kw. I do need one DisplayPort for the Philips panel and definitely one HDMI 2.1 for the LG C1.

@King G - obviously, you should first take a look at the Fujipoly they send you, and even do a test mount with paste on the die. But if you for some reason return it, and thermal putty is available by then, I would give that some serious thought. Mounting with thermal putty is mostly idiot proof (if I do say so myself ) as it 'conforms' to whatever space is available, and it is so soft that it does not impact the die mount in any way. I recently did my Strix as well as a 6900XT (both w-cooled) that way and temps are excellent.

@Nizzen - not sure but I thought I saw a reference to the Galax HoF OCL r_BAR 1kw in a HWBot forum comment...can't remember exactly where though


----------



## King G

J7SC said:


> ...yes - the Galax HoF OC Labs 1kw. I do need one DisplayPort for the Philips panel and definitely one HDMI 2.1 for the LG C1.
> 
> @King G - obviously, you should first take a look at the Fujipoly they send you, and even do a test mount with paste on the die. But if you for some reason return it and thermal putty is available then, I would give that some serious thought. Mounting with thermal putty is mostly idiot proof (if I do say so myself ) as it 'conforms' to whatever available space there and it is so soft that it does not impact the die mount in any way. I recently did my Strix as well as a 6900XT (both w-cooled) that way and temps are excellent.
> 
> @Nizzen - not sure but I thought I saw a reference to the Galax HoF OCL r_BAR 1kw in a HWBot forum comment...can't remember exactly where though


I think I really like that approach! Essentially, use thermal paste with the pads if the pads are stiff. I will post before and after temps of my results here, as well as the teardown of my GPU.


----------



## ManniX-ITA

J7SC said:


> ...yes - the Galax HoF OC Labs 1kw. I do need one DisplayPort for the Philips panel and definitely one HDMI 2.1 for the LG C1.


I have to test it again, but I seem to recall the port configuration was better than the Kingpin's on the Strix.



Nizzen said:


> I have 3090 HOF, but have 1000w no rebar. Is there any 1000w rebar HOF?


I don't think I've seen it; mine doesn't have ReBar in the file name, so probably it's not (if a bios has it, I usually add that to the file name myself).



King G said:


> Would you mind sharing a link for both recommended? I see so many options now I don't mind picking it up who knows I might need it again down the road!


Something like this would be perfect, 40-200cNm:









CDI 151NSM Micro Adjustable Torque Screwdriver, Torque Range 40 to 200 Centinewton Meters - - Amazon.com


www.amazon.com





But it's expensive; you can get away with something much cheaper.
I'm not sure what else to recommend, as you have different products in the US, mostly with torque specified in lbs...

These screwdrivers have a 1/4-inch hex head. 
You need a PH00 for the Strix, and it's usually a 1/8-inch micro bit that needs an adapter to fit the 1/4.
They do exist in 1/4 too, but I couldn't find them on Amazon DE, while in the US they seem common:






Amazon.com: Rocaris 12Pcs Hex Shank Magnetic Phillips Cross Screwdriver Bits 50mm 1/4 Inch : Tools & Home Improvement


www.amazon.com
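For comparing against US listings that spec torque in inch-pounds rather than cNm, the conversion is straightforward (1 N·m ≈ 8.8507 in·lbf, a standard constant):

```python
# Convert the 40-200 cN·m range recommended above into inch-pounds,
# for shopping US-market torque screwdrivers.
IN_LBF_PER_NM = 8.8507  # standard N·m -> in·lbf conversion factor

def cnm_to_inlbf(cnm):
    return cnm / 100.0 * IN_LBF_PER_NM  # cN·m -> N·m -> in·lbf

low, high = cnm_to_inlbf(40), cnm_to_inlbf(200)
# 40-200 cN·m works out to roughly 3.5 to 17.7 in·lbf
```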





In the meantime I have tested a bit with New World, 521.8W power draw, with the 999W XOC bios.
It did crash after 45 minutes with that extra 15 MHz on the clock 
I'll test more later without it.
I did notice that while the temperature limit slider is enabled, it doesn't seem to work.
I usually limit the temp to 85°C and it doesn't hold it; the hot spot goes up to 91.9°C as if it wasn't set.


----------



## Falkentyne

ManniX-ITA said:


> Have to test it again but I seems to recall the ports configuration was better than the Kingpin on the Strix.
> 
> 
> 
> I don't think I've seen it, mine doesn't have a ReBar in the file name so probably it's not (if it's not in already, I usually add it myself).
> 
> 
> 
> Something like this would be perfect, 40-200cNm:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> CDI 151NSM Micro Adjustable Torque Screwdriver, Torque Range 40 to 200 Centinewton Meters - - Amazon.com
> 
> 
> CDI 151NSM Micro Adjustable Torque Screwdriver, Torque Range 40 to 200 Centinewton Meters - - Amazon.com
> 
> 
> 
> www.amazon.com
> 
> 
> 
> 
> 
> But it's expensive, you can get away with something much cheaper.
> I'm not sure what else to recommend as you have different products in the US and mostly with torque specified in lbs...
> 
> These screwdrivers have a 1/4 inch hex head.
> You need a PH00 for the Strix and it's usually a microbit 1/8 inch, needs an adapter to fit the 1/4.
> They do exist also 1/4 but I couldn't find them on Amazon DE, while on US seems common:
> 
> 
> 
> 
> 
> 
> Amazon.com: Rocaris 12Pcs Hex Shank Magnetic Phillips Cross Screwdriver Bits 50mm 1/4 Inch : Tools & Home Improvement (www.amazon.com)
> 
> 
> 
> 
> 
> In the meantime I have tested a bit with New World: 521.8W power draw with the 999W XOC bios.
> It did crash after 45 minutes with that extra 15 MHz on the clock
> I'll test more later without it.
> I did notice that while the temperature limit slider is enabled, it doesn't seem to work.
> I usually limit the temp to 85°C and it doesn't hold; the hot spot goes up to 91.9°C as if it wasn't set.


Hotspot has nothing to do with the temperature slider.
Temp slider is for normal core temp.


----------



## ManniX-ITA

Falkentyne said:


> Hotspot has nothing to do with the temperature slider.
> Temp slider is for normal core temp.


I'm pretty sure that with the normal Strix bios, limiting with the temp slider works on the hot spot.
But right now I have no will to go back and test again 


----------



## ManniX-ITA

Falkentyne said:


> Temp slider is for normal core temp.


Well, I have tested with the XOC bios and indeed it is capping the GPU core nicely.
Very likely I'm just confused and it didn't cap the hot spot as well on the original Strix 
I set it a couple of months ago, bit fuzzy already.


----------



## J7SC

ManniX-ITA said:


> Well, I have test with the XOC bios and indeed is capping the GPU core nicely.
> Very likely I'm just confused and it didn't cap the hot spot as well on the original Strix
> Have set it a couple of months ago,* bit fuzzy already.*


bit fuzzy already = known side effect of too much 'New World'


----------



## ManniX-ITA

J7SC said:


> bit fuzzy already = known side effect of too much 'New World'


ahahah no, it's more the fast-approaching 50-year mark 
I'm not really fond of New World; some stuff is nice but in general not very exciting
Just trying to keep up with my niece so I can play with her when they finally deliver this character-transfer uber feature.
And also proving my Strix isn't melting down even with a 999W bios eheh


----------



## kryptonfly

Nizzen said:


> Where is the gpu result?
> 
> Want cpu benchmark?
> Here you go 😁


SOTTR got an update with the new DLSS 2.3.2.0, and my score improved as well (1080p lowest, DLSS disabled):

 

HZD 1080p ultra, 2175 MHz @ 993 mV:


----------



## des2k...

kryptonfly said:


> Thanks guys 👍
> Hum... So if I understand well we can hit PL normalized with the XOC 1000W bios in Port Royal ? Feel free to correct me if I'm wrong. Because I kept my shunts + flashed with XOC 1000W bios and I'm around 45% PL normalized and 39% PCIE power in Port Royal for ~510W (see my previous post). That means there's no PL at all anymore, I even can go beyond true 1000W but just 2*8-pin ! I just wanna know what is %PL normalized and PCIE power in Port Royal with XOC 1000W bios. A HWinfo screen would be perfect 👌


It's the XOC vbios; there are no other limits in it (like Falkentyne said). The only way you'd hit 100% normalized TDP is if total draw reaches around 660W, or the PCIe slot peaks at 100W first.

PR 2200+ core at max 1.1v doesn't reach this 660w limit.


----------



## yzonker

The only other limit is your card blowing up or melting. Lol.


----------



## Nizzen

yzonker said:


> The only other limit is your card blowing up or melting. Lol.


Impossible


----------



## des2k...

PR on Windows11, so close but not 16k lol








I scored 15 946 in Port Royal


AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com


----------



## CZonin

Need everyone's opinion. I'm running a 3090 TUF with 3x Noctua A9's and the 1000W XOC rebar BIOS. I just got a 3080ti FTW3 Ultra through the EVGA queue. I'm going to end up selling either my current card or the 3080ti to a friend. I'm wondering if I could end up getting more performance out of the 3080ti as it's a 3x8pin compared to 2x8pin on my 3090. I'll likely end up flashing a 1000W rebar BIOS, deshrouding with the Noctua A9's like I did on my current card, and likely upgrading to a 1000W PSU compared to my 800W. What do you think?


----------



## HowYesNo

well just wanted to post this.
have a GTX 1070, which runs fine; temps in gaming go to 65°C. However, the backside of the PCB where the VRMs are gets quite toasty during gaming, can't hold a finger on it for more than 3 seconds.
so I bought a large aluminum heatsink and a cheap thermal pad. For now I'm just hoping the weight of the thing will provide enough pressure. Also, a GPU support bracket is a must with this thing.
a few photos of the contraption; might help you guys with RTX 3080/3090 backside memory overheating.
heatsink https://www.amazon.de/-/en/gp/product/B08J82664X/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1

















just enough clearance to the CPU cooler.










post a link if you know of a clip or something to press it firmly to the PCB


----------



## yzonker

CZonin said:


> Need everyone's opinion. I'm running a 3090 TUF with 3x Noctua A9's and the 1000W XOC rebar BIOS. I just got a 3080ti FTW3 Ultra through the EVGA queue. I'm going to end up selling either my current card or the 3080ti to a friend. I'm wondering if I could end up getting more performance out of the 3080ti as it's a 3x8pin compared to 2x8pin on my 3090. I'll likely end up flashing a 1000W rebar BIOS, deshrouding with the Noctua A9's like I did on my current card, and likely upgrading to a 1000W PSU compared to my 800W. What do you think?


3090 will likely still be marginally faster depending on how well you do in the silicon lottery on each. My Zotac 3090 is still faster in benches and games than my 3080ti FTW3 and I lost the mem lottery on my 3090 with a max of +850 (700-800 for game stable). Both running 1kw bios. The FTW3 has the limitation of 20 amp fuses on the 8 pins too. 2nd 8 pin runs hot and will hit 20 amps in the 550-600w range.

Upside though is being able to game at 450w on a bone stock card (3080ti) rather than running the KP XOC 24/7. Dual bios is nice too for swapping easily to the Galax 1kw bios.


----------



## yzonker

des2k... said:


> PR on Windows11, so close but not 16k lol
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 946 in Port Royal
> 
> 
> AMD Ryzen 9 3900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com


Nice. Kinda thought before that you'd be able to get close to, if not over, 16k.


----------



## J7SC

Nizzen said:


> Impossible


...not impossible 'in the right capable hands' and unique circumstances. Not that long ago, I had an expensive GPU blow up (flash of light, cloud of obnoxious smoke). Turned out that some moisture had gotten onto the cable connection of a Club3D DisplayPort to HDMI adapter...


----------



## kryptonfly

J7SC said:


> ...not impossible '''in capable right hands''' and unique circumstances. Not that long ago, I had an expensive GPU blow up (flash of light, cloud of obnoxious smoke). Turned out that some moisture had gotten onto the cable connection of a Club3D DisplayPort to HDMI...


Not impossible indeed. With shunts + the XOC 1000W bios I'm at 45% PL normalized and 39% PCIe for ~510W, so there's no limit left. It's really possible to burn it if I set PL around 53% (600W) in Afterburner + high GPU voltage in Endwalker, Metro Exodus EE, Quake RTX... Even though Afterburner caps the PL, normalized TDP can go higher; I've seen +540W in Endwalker before I stopped it... (fuses are 20A for the 8-pins and 10A for the PCIe slot). I could bypass the fuses with 0 mΩ, but the VRMs wouldn't like it! I will keep the shunts in case I want to re-flash the Gigabyte 390W bios.


----------



## Nizzen

kryptonfly said:


> Not impossible indeed. With shunts + the XOC 1000W bios I'm at 45% PL normalized and 39% PCIe for ~510W, so there's no limit left. It's really possible to burn it if I set PL around 53% (600W) in Afterburner + high GPU voltage in Endwalker, Metro Exodus EE, Quake RTX... Even though Afterburner caps the PL, normalized TDP can go higher; I've seen +540W in Endwalker before I stopped it... (fuses are 20A for the 8-pins and 10A for the PCIe slot). I could bypass the fuses with 0 mΩ, but the VRMs wouldn't like it! I will keep the shunts in case I want to re-flash the Gigabyte 390W bios.


You can't set over 1.1v, so..... There is no unlimited power to the gpu, unless you use Elmor tool on Asus Strix/TUF, Galax/kfa 2 with "LCD monitor" or Kingpin. As far as I know.


----------



## GRABibus

Nizzen said:


> I have 3090 HOF, but have 1000w no rebar. Is there any 1000w rebar HOF?


No.

did you put the MSI SUPRIM X bar Bios in your collection ? 😉


----------



## Nizzen

GRABibus said:


> No.
> 
> did you put the MSI SUPRIM X bar Bios in your collection ? 😉


Don't need it in the collection 








Already have it


----------



## kryptonfly

Nizzen said:


> You can't set over 1.1v, so..... There is no unlimited power to the gpu, unless you use Elmor tool on Asus Strix/TUF, Galax/kfa 2 with "LCD monitor" or Kingpin. As far as I know.


In the Endwalker bench I fixed 2145 MHz at 962 mV and it went up to +540W at the end; I stopped it quickly. I had the PL set at 42% through Afterburner (39% PCIe in PR, around 510W with shunts + the XOC 1000W bios), but normalized TDP kept rising beyond the Afterburner PL. From what I've seen there's no limit except what you set manually in Afterburner, and I won't take the risk of going beyond that. So I'm convinced I could blow fuses in some games and cases, especially with 2x8-pins.


----------



## yzonker

kryptonfly said:


> In the Endwalker bench I fixed 2145 MHz at 962 mV and it went up to +540W at the end; I stopped it quickly. I had the PL set at 42% through Afterburner (39% PCIe in PR, around 510W with shunts + the XOC 1000W bios), but normalized TDP kept rising beyond the Afterburner PL. From what I've seen there's no limit except what you set manually in Afterburner, and I won't take the risk of going beyond that. So I'm convinced I could blow fuses in some games and cases, especially with 2x8-pins.


Well, that's probably close to 20 amps, and that's assuming the 8-pins are fairly balanced. I have wondered how easily those fuses actually blow, since I've flirted with 20 amps on my 3080ti FTW3 also. Looking at similar fuses sold online, it looked like a lot of them take a bit more before letting go. Although it might depend on the fuse lottery. Lol. 

Since you've already shunt modded really doesn't matter other than the hassle of fixing it if one blew.
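The back-of-the-envelope math behind that 20 amp figure is easy to sanity-check. A minimal sketch, assuming a 12V rail, perfectly balanced 8-pins, and ~75W drawn through the PCIe slot (nominal values, not measurements from this thread):

```python
# Estimate per-8-pin current for a given board power.
# Assumptions (nominal, not measured here): 12 V rail, equal sharing
# between connectors, ~75 W drawn through the PCIe slot.

def eight_pin_current(board_power_w: float, n_connectors: int = 2,
                      slot_w: float = 75.0, rail_v: float = 12.0) -> float:
    """Amps per 8-pin connector, assuming the slot takes slot_w first."""
    return (board_power_w - slot_w) / n_connectors / rail_v

# 540 W on a 2x8-pin card lands right around a 20 A fuse:
print(round(eight_pin_current(540), 2))   # ~19.38 A per connector
```

By the same arithmetic, two 8-pins at exactly 20A plus the slot works out to roughly 555W total, which lines up with the 550-600W range mentioned above.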


----------



## J7SC

HowYesNo said:


> well just wanted to post this.
> have a GTX 1070, which runs fine; temps in gaming go to 65°C. However, the backside of the PCB where the VRMs are gets quite toasty during gaming, can't hold a finger on it for more than 3 seconds.
> so I bought a large aluminum heatsink and a cheap thermal pad. For now I'm just hoping the weight of the thing will provide enough pressure. Also, a GPU support bracket is a must with this thing.
> a few photos of the contraption; might help you guys with RTX 3080/3090 backside memory overheating.
> heatsink https://www.amazon.de/-/en/gp/product/B08J82664X/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1
> 
> View attachment 2529313
> View attachment 2529314
> 
> 
> just enough clearance to the CPU cooler.
> 
> View attachment 2529315
> 
> 
> post a link if you know of a clip or something to press it firmly to the PCB


....those sinks are great...have been using them on two of my cards, including the 3090 Strix. I thought of leaving the stock back-plate off and going for direct VRAM contact, but as you discovered, keeping a firm mount is not really possible w/o other mods or clamps on the (sensitive) pcb.

The 3090 Strix back plate is pretty good re. heat dispersion anyway, especially with thermal putty on the VRAM. FYI, I used thermal adhesive from DigiKey on select spots around the perimeter of the heatsink while the rest got a good blob of MX-5. Temps are superb.


----------



## yzonker

J7SC said:


> ....those sinks are great...have been using them on two of my cards, including the 3090 Strix. I thought of leaving the stock back-plate off and going for direct VRAM contact, but as you discovered, keeping a firm mount is not really possible w/o other mods or clamps on the (sensitive) pcb.
> 
> The 3090 Strix back plate is pretty good re. heat dispersion anyway, especially with thermal putty on the VRAM. FYI, I used thermal adhesive from DigiKey on select spots around the perimeter of the heatsink while the rest got a good blob of MX-5. Temps are superb.
> 
> View attachment 2529351


I've got the same. It's just located by the fan being wedged in along with a piece of black foam tape on each end.


----------



## J7SC

yzonker said:


> I've got the same. It's just located by the fan being wedged in along with a piece of black foam tape on each end.
> 
> View attachment 2529352


Nice ! Even w/o fans, I got the VRAM temps to low-mid 50s max with this but I'm currently trying to decide what fans to add - 1x Arctic P12 pwm pst or 2x Arctic P8 v2 pwm. The problem with the (otherwise preferred) latter option is that the hottest part of the GPU back would be a bit in the 'wind shadow' where the fans are adjoined.

FYI, the stuff below / bottom right ("Dahl"), also used for HVAC hangers among other things, is great for making custom mounts, i.e. for free-floating fans etc.


----------



## kryptonfly

yzonker said:


> Well that's probably close to 20 amps and that's assuming the 8 pins are fairly balanced. I have wondered how easily those fuses actually blow since I've flirted with 20 amps on my 3080ti FTW3 also. In looking at similar fuses sold online it looked like a lot of them have a bit more before letting go. Although might depend on the fuse lottery. Lol.
> 
> Since you've already shunt modded really doesn't matter other than the hassle of fixing it if one blew.


Yep, but since I flashed the XOC 1000W bios, 8-pin #1 reads well higher than #2, around 50W apart at 510W, so I'm wondering if it's caused by the XOC bios, which is for a 3x8-pin card with #2 a little lower than #1 from what I saw. But I think I have a bad shunt: bad, really thin 1W resistors. I have original 2W R005s, same as on the card, so I will replace the shunts. And yes, in any case I could replace the fuses with 0 Ω to fully unlock PCIe power and avoid the fuse "lottery", but I'm scared for the VRMs. When I see a Gigabyte 2x8-pin that had a fuse + power stage blown up just by a game (New World), it makes you think twice. Bad process? Bad components? I feel golden with 2175 MHz @ 993 mV in PR, maybe thanks to the full POSCAP capacitors front & back on the card.
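For context on why the shunt's wattage rating matters: dissipation in the shunt grows with the square of the current, and a parallel shunt mod halves what the controller senses. A rough sketch (the 5 mΩ / R005 value is from the post above; the rest is just Ohm's law):

```python
# Power dissipated in a current-sense shunt, and the effective resistance
# after soldering a second shunt in parallel (the usual shunt mod).

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def shunt_dissipation_w(current_a: float, r_ohm: float) -> float:
    """P = I^2 * R."""
    return current_a ** 2 * r_ohm

R_STOCK = 0.005                          # R005 = 5 milliohm

# At 20 A, a single 5 mOhm shunt dissipates 2 W -- right at the limit
# of a 2 W part, and well past a thin 1 W resistor:
print(shunt_dissipation_w(20, R_STOCK))  # 2.0

# Stacking an identical shunt in parallel halves the sensed resistance,
# so the controller under-reads current (and thus power) by about half:
print(parallel(R_STOCK, R_STOCK))        # 0.0025
```

That factor-of-two under-read is also why a shunted card can show ~45% PL normalized while actually pulling ~510W.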


----------



## yzonker

kryptonfly said:


> Yep, but since I flashed the XOC 1000W bios, 8-pin #1 reads well higher than #2, around 50W apart at 510W, so I'm wondering if it's caused by the XOC bios, which is for a 3x8-pin card with #2 a little lower than #1 from what I saw. But I think I have a bad shunt: bad, really thin 1W resistors. I have original 2W R005s, same as on the card, so I will replace the shunts. And yes, in any case I could replace the fuses with 0 Ω to fully unlock PCIe power and avoid the fuse "lottery", but I'm scared for the VRMs. When I see a Gigabyte 2x8-pin that had a fuse + power stage blown up just by a game (New World), it makes you think twice. Bad process? Bad components? I feel golden with 2175 MHz @ 993 mV in PR, maybe thanks to the full POSCAP capacitors front & back on the card.


The bios doesn't change the balance. I see the same thing on my Zotac, but when actually measuring the 8-pin power draw with a clamp meter, power is still fairly well balanced just like when running a 390w bios.


----------



## SloJedi

Question for the pros, thanks in advance: I bought an EK Quantum Vector Strix RTX 3080/3090 D-RGB - Nickel Acetal. Which active backplate should I buy to match it? I see a couple of choices to mate them together to make a loop 🙏 thanks. 3090 Strix card


----------



## GRABibus

yzonker said:


> I've got the same. It's just located by the fan being wedged in along with a piece of black foam tape on each end.
> 
> View attachment 2529352


I have better temps (2° less) by using the fan on the heatsink in pulling mode instead of pushing mode


----------



## yzonker

GRABibus said:


> I have better temps (2° less) by using the fan on the heatsink in pulling mode than in pushing mode


I found that to be quite a lot louder though, I assume because the fan blades are a lot closer to the heatsink.


----------



## ManniX-ITA

SloJedi said:


> Question for the pro’s thanks in advance, I bought a EK Quantum Vector Strix RTX 3080/3090 D-RGB - Nickel Acetal. Which active backplate should I buy to match it? I see a couple choices to mate them together to make loop 🙏 thanks. 3090 Strix card


There are only two options:










One is Acetal and the other Plexi, both have D-RGB.
Considering you already got the Acetal version, maybe it's more appropriate to also get the active backplate in Acetal.


----------



## newls1

*Quick question... Does anyone here use the ReBar 1000W EVGA bios on the FTW3 Ultra 3090, and if so, what % are you setting your power limit at? I don't want to pop the onboard fuses. Also, what are you using for AB settings? Thanks*


----------



## KedarWolf

newls1 said:


> *Quick question... Has anyone or does anyone here use the rebar 1000w evga bios on the FTW3 Ultra 3090 and if so, what % are you setting your power limit at? I dont want to pop the onboard fuses. Also, what are you using for AB settings? Thanks*


Does the Strix OC 3090 have onboard fuses?

I can run the Power Limit at 100% all day at 1.1v with no issues gaming.


----------



## iionas

it's slowly coming up to Xmas and I think that's when I'm going to pull the trigger on the cooling for the TUF, boys. 

I'm thinking 1 x 30mm x 360mm rad and 1 x 45mm x 360mm ONLY for the card.

I'm not sure if I want to do the CPU or not; I have a Liquid Freezer II on it and it's doing a great job.

Thoughts?

The rad choice is based on my Lian Li O11D; my CPU AIO is up top, hence the choice of a slim 30mm and a 45mm. 

Or do I yolo and do the CPU also?


----------



## chibi

iionas said:


> The rad choice is based on my Lian Li O11D; my CPU AIO is up top, hence the choice of a slim 30mm and a 45mm.


Regarding the O11, I had one previously, and stuffed full of rads it was starved for fresh air. The water cooling wasn't as good as I expected for the amount of rads. The glass is nice and all, but do not expect miracles.


----------



## geriatricpollywog

iionas said:


> it's slowly coming up to Xmas and I think that's when I'm going to pull the trigger on the cooling for the TUF, boys.
> 
> I'm thinking 1 x 30mm x 360mm rad and 1 x 45mm x 360mm ONLY for the card.
> 
> I'm not sure if I want to do the CPU or not; I have a Liquid Freezer II on it and it's doing a great job.
> 
> Thoughts?
> 
> The rad choice is based on my Lian Li O11D; my CPU AIO is up top, hence the choice of a slim 30mm and a 45mm.
> 
> Or do I yolo and do the CPU also?


MO-RA


----------



## ManniX-ITA

KedarWolf said:


> Does the Strix OC 3090 have onboard fuses?


From my understanding, no, it doesn't have them.



iionas said:


> Or do i yolo and do the cpu also?


A good block in a custom loop will blow away the Liquid Freezer.
I would replace it as well.


----------



## StreaMRoLLeR

KedarWolf said:


> Does the Strix OC 3090 have onboard fuses?
> 
> I can run the Power Limit at 100% all day at 1.1v with no issues gaming.


IMHO it doesn't need them. Fuses, as Buildzoid explained, don't save your card; they just make it easier for the manufacturer to repair. The Strix is built like the KPE: same MP2888A controllers, 22 phases (23 on the KPE). I have seen someone here with an EVC2SX on a Strix 3090 and the KPE bios pulling 700-ish watts and breaking the 15,500 barrier.


----------



## ManniX-ITA

My Optimus Strix block has been shipped


----------



## chibi

ManniX-ITA said:


> My Optimus Strix block has been shipped
> 
> View attachment 2529518


Nice! Did you get the cerakote or the previous nickel finish? I'm waiting for my Opt Strix block as well.


----------



## ManniX-ITA

chibi said:


> Nice! Did you get the cerakote or the previous nickel finish? I'm waiting for my Opt Strix block as well.


Got the nickel finish, the cerakote wasn't an option when I ordered it:


----------



## kx11

ManniX-ITA said:


> Got the nickel finish, the cerakote wasn't an option when I ordered it:
> 
> View attachment 2529521


You're about to try the best GPU block ever  I have one already

But the wait nearly made me cancel the order, 2 full months.


----------



## ManniX-ITA

kx11 said:


> You're about to try the best GPU block ever  i have one already


I have great expectations indeed 

My order took much less than the 4-5 weeks anticipated, only 2 and a half.
Hope they can deliver it by the 26th as planned, cause on the 28th I'll travel for 10 days...
I'm worried about customs, which are painful here in Germany; clearance can sometimes take days.

Also I'm nowhere near having my WC build finished, so I'll have to just stare at it for a while.
But it's a good reason to hurry up!


----------



## J7SC

ManniX-ITA said:


> I have great expectations indeed
> 
> My order took much less than the 4-5 weeks anticipated, only 2 and a half.
> Hope they can deliver it for the 26th as planned cause the 28th I'll travel for 10 days...
> I'm worried about customs which are painful here in Germany, clearance sometimes can get days.
> 
> Also I'm no where near having my wc build finished so I'll have to just stare at it for a while.
> But it's a good reason to hurry up!


I'm looking forward to your results (maybe keep some HWiNFO 'before' shots as well). I recently switched from an EK block for the Strix to the Phanteks with significant improvements, though there were also other cooling mods involved. I'd like to see what the Optimus can do on a Strix.


----------



## kx11

ManniX-ITA said:


> I have great expectations indeed
> 
> My order took much less than the 4-5 weeks anticipated, only 2 and a half.
> Hope they can deliver it for the 26th as planned cause the 28th I'll travel for 10 days...
> I'm worried about customs which are painful here in Germany, clearance sometimes can get days.
> 
> Also I'm no where near having my wc build finished so I'll have to just stare at it for a while.
> But it's a good reason to hurry up!


Oh good luck with your trip

this is my latest benchmark with Crysis 3 Remastered; used HWiNFO64 to get the VRM temps, also running on Win11


----------



## chispy

RTX 3090 Ti / Super to be released in Q1 2022, 450W TDP out of the box, full fat chip, a lot faster GDDR6X memory confirmed, to fight the Radeon RX 6900 XTXH. Google it guys. Imagine the Asus Strix RTX 3090 Ti / Super, shunt modded and with an EVC2 for voltage control, 700W in no time  , it will be a very hot GPU. Me wants it anyways 😂






__





NVIDIA GeForce RTX 3090 Ti: Q1 2022 release, uses 450W power


NVIDIA's new monstrous GeForce RTX 3090 Ti teased again: Q1 2022 release, and does use 450W+ of power which is ridiculous.




www.tweaktown.com


----------



## King G

ManniX-ITA said:


> Have to test it again but I seems to recall the ports configuration was better than the Kingpin on the Strix.
> 
> 
> 
> I don't think I've seen it, mine doesn't have a ReBar in the file name so probably it's not (if it's not in already, I usually add it myself).
> 
> 
> 
> Something like this would be perfect, 40-200cNm:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> CDI 151NSM Micro Adjustable Torque Screwdriver, Torque Range 40 to 200 Centinewton Meters - Amazon.com (www.amazon.com)
> 
> 
> 
> 
> 
> But it's expensive, you can get away with something much cheaper.
> I'm not sure what else to recommend as you have different products in the US and mostly with torque specified in lbs...
> 
> These screwdrivers have a 1/4 inch hex head.
> You need a PH00 for the Strix and it's usually a microbit 1/8 inch, needs an adapter to fit the 1/4.
> They do exist also 1/4 but I couldn't find them on Amazon DE, while on US seems common:
> 
> 
> 
> 
> 
> 
> Amazon.com: Rocaris 12Pcs Hex Shank Magnetic Phillips Cross Screwdriver Bits 50mm 1/4 Inch : Tools & Home Improvement (www.amazon.com)
> 
> 
> 
> 
> 
> In the meantime I have tested a bit with New World: 521.8W power draw with the 999W XOC bios.
> It did crash after 45 minutes with that extra 15 MHz on the clock
> I'll test more later without it.
> I did notice that while the temperature limit slider is enabled, it doesn't seem to work.
> I usually limit the temp to 85°C and it doesn't hold; the hot spot goes up to 91.9°C as if it wasn't set.



I finally have all I need, it just came in today. What torque should the screws be maxed at? I read on Reddit it was 25 in-lb, but is there somewhere I can get the exact spec before I ruin my card lol


----------



## King G

Does anyone know the max torque in in-lb of the Strix 3090 screws? I have a torque screwdriver and I can't seem to find the spec online anywhere.


----------



## ManniX-ITA

King G said:


> I finally have all I need just came in today. what pressure should the screws be maxxed at? I read on reddit it was 25inlb but is there somewhere I can get the exact before I ruin my card lol


just go all the way in
what you need the torque control for is to balance all the screws so you get the cooler perfectly lined up with the PCB/die
start screwing with a low torque, doing the cross pattern with the 4 screws around the die
then raise the torque a bit and start again
when you are at the max torque, give a last round with a normal screwdriver
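The stepped cross pattern described above can be written out explicitly. A sketch, assuming the 4 die screws are numbered 1-4 going around the die (so the cross order is 1, 3, 2, 4) and using illustrative torque steps, not an ASUS spec:

```python
# Generate the tightening sequence: every screw at each torque step,
# in a cross (star) pattern, before moving up to the next step.

CROSS_ORDER = [1, 3, 2, 4]   # opposite corners first

def tightening_passes(torque_steps):
    """Yield (torque, screw) pairs, lowest torque first."""
    for torque in torque_steps:
        for screw in CROSS_ORDER:
            yield torque, screw

for torque, screw in tightening_passes([40, 80, 120]):   # cNm, illustrative
    print(f"screw {screw} -> {torque} cNm")
# ...then a final snug round with a normal screwdriver, as described above.
```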


----------



## ManniX-ITA

J7SC said:


> I'm looking forward to your results (maybe keep some HWiNFO 'before' shots as well). I recently switched from an EK block for the Strix to the Phanteks with significant improvements, though there were also other cooling mods involved. I'd like to see what the Optimus can do on a Strix.


I will for sure!
Hopefully this weekend I'll have time to re-attempt the shunting and better secure the stock cooler


----------



## King G

ManniX-ITA said:


> just go all the way in
> what you need the torque control for is to balance all the screws so you get the cooler perfectly lined up with the PCB/die
> start screwing with a low torque, doing the cross pattern with the 4 screws around the die
> then raise the torque a bit and start again
> when you are at the max torque, give a last round with a normal screwdriver


This is probably a silly question, but if I only want cooler VRM temps should I replace all of the pads or just the ones on the VRM chips? I don't think I have enough pads since I ordered Fujipoly and they're absurdly small. I have another brand that isn't as good that can work on other components, but ideally, do all of these pads shown affect VRM temps?


----------



## GRABibus

chispy said:


> RTX 3090 Ti / Super to be released in Q1 2022, 450W TDP out of the box, full fat chip, a lot faster GDDR6X memory confirmed, to fight the Radeon RX 6900 XTXH. Google it guys. Imagine the Asus Strix RTX 3090 Ti / Super, shunt modded and with an EVC2 for voltage control, 700W in no time  , it will be a very hot GPU. Me wants it anyways 😂
> 
> 
> 
> 
> 
> 
> __
> 
> 
> 
> 
> 
> NVIDIA GeForce RTX 3090 Ti: Q1 2022 release, uses 450W power
> 
> 
> NVIDIA's new monstrous GeForce RTX 3090 Ti teased again: Q1 2022 release, and does use 450W+ of power which is ridiculous.
> 
> 
> 
> 
> www.tweaktown.com


Completely useless and nonsensical card


----------



## GRABibus

Just for info, if some of you are interested.

Only by PM of course, and not in this thread.









[Sold] : GALAX RTX 3090 Hall of Fame


Hello, Bought the 09th of September 2021 at an official french reseller The card is perfectly working.  Shipment : West Europe (France, Italy, Spain, Portugal, Germany, Switzerland, Belgium, Luxembourg, Netherland). If you are from another country, please let me know. Price : 3000€ (Euros) +...




www.overclock.net


----------



## ManniX-ITA

King G said:


> This is probably a silly question, but if I only want cooler VRM temps should I replace all of the pads or just the ones on the VRM chips? I don't think I have enough pads since I ordered Fujipoly and they're absurdly small. I have another brand that isn't as good that can work on other components, but ideally, do all of these pads shown affect VRM temps?


The red ones are for the memory.
All the others, I think, are on the power stage components.


----------



## J7SC

ManniX-ITA said:


> I will for sure!
> Hopefully this weekend I'll have time to re-attempt the shunting and better secure the stock cooler


...fix stock cooler ? How about some bigger fans for the stock cooler ?


----------



## King G

My results from applying high quality thermal pads on my strix 3090

My previous stress test, running Borderlands 3 and Cyberpunk at 4K with settings maxed out, 123% power limit and +175 core: I had max temps steadily around 76°C on the core and 95-98°C on the VRAM. The average just looks lower because I closed the game about 2 minutes before I took this screenshot.​









Miraculously I was able to drop core and VRM temps drastically: a core decrease of 4°C and a VRM decrease of 11°C! 









This was how it all started, pretty nervous










This is me halfway when I lost track of what I was doing because I answered a phone call.









Order total (they included sour patch kids which I thought was funny af)











Ultimately I highly suggest replacing the pads. I do not recommend buying two of each because Fujipoly is expensive; with just 2x of the 1.5mm and 2x of the 0.5mm you will have more than enough, with plenty to spare. Overall, at about $70-$100 I think it's a pretty damn good deal for such low temps. I also recommend a torque screwdriver if you don't have one; just tighten the screws to the cooler diagonally, little by little, like you were putting on a tire like a pro. I hope these results and images are clear; I apologize in advance for the potato quality, but I was in a hurry because I'm going to watch the release of DUNE with family tonight! Anyways, thanks again everybody!


----------



## ManniX-ITA

King G said:


> My results from applying high quality thermal pads on my strix 3090


So what did you end up replacing, everything except the RAM?
And did you use the Fujipoly, something else, or both?


----------



## ManniX-ITA

J7SC said:


> ...fix stock cooler ? How about some bigger fans for the stock cooler ?


...just need to find out how to fit 3 of them on the Strix heatsink, easy peasy


----------



## J7SC

ManniX-ITA said:


> ...just need to find out how to fit 3 of them on the Strix heatsink, easy peasy


....power saw, crowbar and sledgehammer will come in handy for that mod  

Quick question for those running the KPE 520 r_BAR: Recently, someone posted here that there was a newer version of that vBios... worth flashing to (i.e. on a Strix 3090 OC)?


----------



## Nizzen

J7SC said:


> ....power saw, crowbar and sledgehammer will come in handy for that mod
> 
> Quick question for those running the KPE 520 r_BAR: Recently, someone posted here that there was a newer version of that vBios...worth flashing to (ie. on a Strix 3090 OC) ?


Try Asus XOC rebar?
https://www.diskusjon.no/applications/core/interface/file/attachment.php?id=667910


----------



## J7SC

Nizzen said:


> Try Asus XOC rebar?
> https://www.diskusjon.no/applications/core/interface/file/attachment.php?id=667910


...would like to, but the link doesn't work ('attachment removed or no permission') :-( ...can you attach it as .txt, please?


----------



## Nizzen

J7SC said:


> ...would like to, but the link doesn't work ('attachment removed or no permission') :-( ...can you attach it as .txt, please?


----------



## J7SC

Thanks ! 

follow-up:
@Nizzen - have you run this bios? I can't seem to open/inspect it with ABE 0.08, unlike all the other custom bioses; instead I get an error / exception-handling message


----------



## King G

ManniX-ITA said:


> So what did you end up replacing, everything except RAM?
> And did you use the Fujipoly or something else/both?


so I just replaced the VRAM pads which are just the pads outlined in red. Everything else I left alone because the pads were still soft and sticky. I highly recommend it to anyone wanting to lower temps


----------



## Arizor

@Nizzen is like the Willy Wonka of 3090 bioses, got all the secret ones us kids want!


----------



## ManniX-ITA

J7SC said:


> Thanks !
> 
> follow-up:
> @Nizzen - have you run this bios ? I can't seem to open/inspect it with ABE008 unlike all the other custom bios, but instead get an error / exception handling message


Do you have ABE 0.08? Can you share it?
I only have 0.06 and it does give exception with all ReBar bioses.


----------



## J7SC

ManniX-ITA said:


> Do you have ABE 0.08? Can you share it?
> I only have 0.06 and it does give exception with all ReBar bioses.


...here you go, got it from a link here a while back (rename txt to zip)


----------



## ManniX-ITA

J7SC said:


> ...here you go, got it from a link here a while back (rename txt to zip)


Thanks, I know it's somewhere there but I couldn't find it via the search.


----------



## J7SC

ManniX-ITA said:


> Thanks, I know it's somewhere there but I couldn't find it via the search.


...no probs, it can be hard to find.

Generally speaking, bios editors are one area where BigNavi has an advantage over Ampere, given the way AMD has set up the GPU in the Windows registry...the MPT (MorePowerTool) is really easy to use, including for pushing max PL wattage, max amperage, etc.


----------



## yzonker

Interesting. So if AMD made a truly competitive GPU, being able to actually have proper control over power, etc... might be a big enough carrot to switch from Nvidia.


----------



## KedarWolf

So, I want to get accurate power readings from my Strix OC card for benchmarking purposes, I go to flash the 1000W Strix Rebar BIOS, my PC BSODs in the middle of the flash, reboots, video card is bricked. 

But I know from my 1080 Ti days you can pop in a second video card to get a display, then flash the bricked one, problem solved. 🤓

Skeery though.


----------



## J7SC

yzonker said:


> Interesting. So if AMD made a truly competitive GPU, being able to actually have proper control over power, etc... might be a big enough carrot to switch from Nvidia.


...I run both Ampere and BigNavi for work-and-play, and yeah, the MPT is easy...too easy perhaps. My 6900XT is a 3x8-pin (dual bios) and max stock PL with the slider is just around 330W, with HotSpot temps around 58 C (fans on full blast). With MPT PL at 380W, HotSpot went to around 95 C. Then I w-cooled the card and now I can run well past 450W...some folks with sub-ambient cooling get close to 600W. The only (minor) fly in the ointment is that you have to spend 20 seconds reloading your custom MPT profile after a driver update.


----------



## Sheyster

@Nizzen - Are there any benefits of this new ASUS XOC BIOS over the re-bar KPE 1000W BIOS for us Strix owners? I don't care about memory running at full speed all the time with the KPE BIOS.


----------



## yzonker

KedarWolf said:


> So, I want to get accurate power readings from my Strix OC card for benchmarking purposes, I go to flash the 1000W Strix Rebar BIOS, my PC BSODs in the middle of the flash, reboots, video card is bricked.
> 
> But I know from my 1080 Ti days you can pop in a second video card to get a display, then flash the bricked one, problem solved. 🤓
> 
> Skeery though.


Yikes, haven't had that happen. I don't think I have room for a 2nd card right now. Doesn't your card have dual bios?


----------



## Falkentyne

KedarWolf said:


> So, I want to get accurate power readings from my Strix OC card for benchmarking purposes, I go to flash the 1000W Strix Rebar BIOS, my PC BSODs in the middle of the flash, reboots, video card is bricked.
> 
> But I know from my 1080 Ti days you can pop in a second video card to get a display, then flash the bricked one, problem solved. 🤓
> 
> Skeery though.


Don't flash bioses while overclocked and don't flash bioses with stuff running in windows either (clean boot first).


----------



## yzonker

J7SC said:


> ...I run both Ampere and BigNavi for work-and-play, and yeah, the MPT is easy...too easy perhaps. My 6900XT is a 3x8 pin (dual bios) and max stock PL with slider is just around 330W, and HotSpot temps around 58 C (fans on full blasts). With MPT PL at 380 W, Hotspot went to around 95 C. Then I w-cooled the card and now I can run well past 450W...some folks with sub-ambient cooling get close to 600W . The only (minor) fly in the ointment is that you have to spend 20 seconds reloading your custom MPT profile after a driver update.


The 6900XT looks a little weak even in non-RTX:

AMD Radeon RX 6900 XT review (www.guru3d.com)

Does that gap close any with the power bumps, or does it just stay similar if you increase the 3090 PL also?


----------



## Sheyster

Falkentyne said:


> Don't flash bioses while overclocked and don't flash bioses with stuff running in windows either (clean boot first).


I always USB boot when flashing a mobo or video card BIOS, FWIW.


----------



## J7SC

yzonker said:


> The 6900XT looks a little weak even in non-RTX:
> 
> AMD Radeon RX 6900 XT review (www.guru3d.com)
> 
> Does that gap close any with the power bumps, or does it just stay similar if you increase the 3090 PL also?


...basically, in non-RTX, custom PCB 6900XT w/PL is slightly ahead of a custom 3090 in some benches (ie. Time Spy) up to but not including 4K. Even w/InfinityCache, the narrower VRAM bus is an issue at 4K, though it certainly is not a total wash-out.

...with RTX, my 6900XT performs like a decent / mid-range 3080 (ie. in Port Royal) but is well behind the 3090s. Jury is still out on AMD FSR vs DLSS; for now DLSS is clearly better but AMD's take is improving with updates.



Spoiler: 6900XT new / pre-MPT PL

Falkentyne said:


> Don't flash bioses while overclocked and don't flash bioses with stuff running in windows either (clean boot first).


I would add that before flashing, check any Ampere vBios candidate with ABE(008)...if you cannot open a vBios w/ ABE but get exception handling errors even outside the actual flashing environment, I avoid it. It doesn't necessarily mean it is 'bad', it's just a safety check it should pass in my book. I have flashed / used modded vbios since 2012 (and never bricked a card, so far at least).


----------



## newls1

So I've found that my EVGA 3090 FTW3 needs 1.100v @ 2130MHz when gaming... am I hurting it by allowing max vGPU? It is FCWB'd front and back.


----------



## GRABibus

Sheyster said:


> @Nizzen - Are there any benefits of this new ASUS XOC BIOS over the re-bar KPE 1000W BIOS for us Strix owners? I don't care about memory running at full speed all the time with the KPE BIOS.


I tested it several weeks ago on my Strix.
Power draw is capped at 520W, so forget it for benching.
For gaming it's OK.


----------



## newls1

I need you smart people's help please. I feel I'm going about the OC on my FTW3 Ultra 3090 the wrong way; I'm just seeking advice on whether what I'm doing is incorrect, and if someone can put me "more on track" it would be much appreciated.

I'll start with the basics... The card is FCWB'd with an active backplate (blocks are Bykski) and the GPU is on its own dedicated loop (420mm x 60mm EK rad and a 240mm x 45mm EK rad). Max GPU temp has never gone over 34-36c in demanding games, and 40-42c in Time Spy (so, with that said, cooling is not a concern here).

Using MSI AB, I set the following, which currently allows stable gameplay in the only game I'm playing right now, FC6:
*Core +85*
*Volt +100*
*Mem +1100*
*Power Limit 120%*
FWIW, I flashed this EVGA RTX 3090 FTW3 Ultra with EVGA's own 520W KP BIOS with Re-BAR enabled (flashed months back, zero issues).
In FC6, using my prior OC settings in MSI AB with +135 core (everything else same as above), the game would insta-crash after about 5-10 minutes once the GPU core hit 2175MHz. To troubleshoot, I lowered the GPU offset in -5 increments, and after a long and boring journey to find stability I landed on +85 core (listed above), which starts the game out at 2145 @ 1.087v and settles to 2130 @ 1.08x-1.1v, completely stable.

Here is my question: I'm using a 520W BIOS. Watching my RivaTuner statistics overlay in game, wattage never exceeds 320W (the game is frame-capped to 142 FPS) and cooling is completely under control, so why do I need such high voltage just to maintain 2130MHz? Or is this about the normal voltage needed for this GPU? I really don't know; I've spent so much time trying to dial in this card, but I feel like I'm doing something wrong and don't want to ruin a $3500 card.

I've been wanting to flash this GPU to the EVGA 1000W Re-BAR BIOS to completely unlock her, but will this help me? I don't know. I'm no pro when it comes to tweaking GPUs and would really appreciate you smart people's advice!

Thank you very much for your time.


----------



## KCDC

Think I might try the KPE 1000W bios on one of these Zotac AMP Extreme HOLO cards. The stock BIOS for this is a 460W max; I see it hit about 438W max in games at 4K. The power limit keeps it at 2000ish MHz at 4K, but I can get 2170+ at 1440p.

Both cards have the same ports; however, mine is HDMI/DP/DP/DP vs the Kingpin's DP/DP/DP/HDMI layout. Curious to see if all ports will be recognized, since I use all of them.

With my card's max BIOS being 460W, and since I'd like to not kill this card, would the safe power slider setting be 75%? I remember @Jura stating that, but I want to be sure of this important factor before even attempting this.

It doesn't seem like a 500W bios would give me any benefits since my stock one is so close to it; however, I am open to opinions on that.


----------



## TheDailyDagger

Hey Y'all,

Was curious if there is a target OC for watercooled 3090s and what people's results generally are; online results (reddit etc.) are all over the place.

I have a 3090 FE with the EKWB block, two 360mm rads, and ambient around 24C. The card is hitting 55C with +200 core / +200 memory and power+temp cranked to max; stable clocks around 2050 MHz in-game, stable for hours on end. What would be considered good? Unfortunately, even when I crank it up for benchmarks I can't get past 2070 for Time Spy etc.

Please let me know!


----------



## yzonker

newls1 said:


> I need you smart people's help please. I feel I'm going about the OC on my FTW3 Ultra 3090 the wrong way; I'm just seeking advice on whether what I'm doing is incorrect, and if someone can put me "more on track" it would be much appreciated.
> 
> I'll start with the basics... The card is FCWB'd with an active backplate (blocks are Bykski) and the GPU is on its own dedicated loop (420mm x 60mm EK rad and a 240mm x 45mm EK rad). Max GPU temp has never gone over 34-36c in demanding games, and 40-42c in Time Spy (so, with that said, cooling is not a concern here).
> 
> Using MSI AB, I set the following, which currently allows stable gameplay in the only game I'm playing right now, FC6:
> *Core +85*
> *Volt +100*
> *Mem +1100*
> *Power Limit 120%*
> FWIW, I flashed this EVGA RTX 3090 FTW3 Ultra with EVGA's own 520W KP BIOS with Re-BAR enabled (flashed months back, zero issues).
> In FC6, using my prior OC settings in MSI AB with +135 core (everything else same as above), the game would insta-crash after about 5-10 minutes once the GPU core hit 2175MHz. To troubleshoot, I lowered the GPU offset in -5 increments, and after a long and boring journey to find stability I landed on +85 core (listed above), which starts the game out at 2145 @ 1.087v and settles to 2130 @ 1.08x-1.1v, completely stable.
> 
> Here is my question: I'm using a 520W BIOS. Watching my RivaTuner statistics overlay in game, wattage never exceeds 320W (the game is frame-capped to 142 FPS) and cooling is completely under control, so why do I need such high voltage just to maintain 2130MHz? Or is this about the normal voltage needed for this GPU? I really don't know; I've spent so much time trying to dial in this card, but I feel like I'm doing something wrong and don't want to ruin a $3500 card.
> 
> I've been wanting to flash this GPU to the EVGA 1000W Re-BAR BIOS to completely unlock her, but will this help me? I don't know. I'm no pro when it comes to tweaking GPUs and would really appreciate you smart people's advice!
> 
> Thank you very much for your time.


Well some games will not be stable at the same settings as others. The KP 520w bios has some core offset built in I think (+45 or +60 possibly). So if you compare to what others run then your OC may look low. The KP 1kw bios uses the baseline Nvidia VF curve though. 

No need to do +5 increments; the curve works in 15 MHz increments, so +85 is actually +75. That's why it became stable at that point: you dropped from +90 to +75.

You could try running DDU if you haven't just to be sure you have a good clean install of the drivers.
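The 15 MHz stepping described above can be sketched in a few lines; `effective_offset` is a hypothetical helper for illustration, not anything Afterburner actually exposes:

```python
# Ampere's V/F curve moves in 15 MHz steps, so a requested core offset is
# effectively floored to the step below. Sketch only, under the assumption
# that the offset snaps down to the 15 MHz grid as yzonker describes.
STEP_MHZ = 15

def effective_offset(requested_mhz: int) -> int:
    """Snap a requested core offset down to the 15 MHz grid."""
    return (requested_mhz // STEP_MHZ) * STEP_MHZ

print(effective_offset(85))   # 75 -- a "+85" offset really runs at +75
print(effective_offset(90))   # 90 -- already on the grid
```

This is why lowering the offset in -5 steps only changes anything every third step.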


----------



## yzonker

TheDailyDagger said:


> Hey Y'all,
> 
> Was curious if there is a target OC for watercooled 3090s and what people's results generally are; online results (reddit etc.) are all over the place.
> 
> I have a 3090 FE with the EKWB block, two 360mm rads, and ambient around 24C. The card is hitting 55C with +200 core / +200 memory and power+temp cranked to max; stable clocks around 2050 MHz in-game, stable for hours on end. What would be considered good? Unfortunately, even when I crank it up for benchmarks I can't get past 2070 for Time Spy etc.
> 
> Please let me know!


+200 mem is very low. I'd try pushing that up. Even my mem lottery loser 3090 can do +700-800 game stable.


----------



## yzonker

KCDC said:


> Think I might try the KPE 1000W bios on one of these Zotac AMP Extreme HOLO cards. The stock BIOS for this is a 460W max; I see it hit about 438W max in games at 4K. The power limit keeps it at 2000ish MHz at 4K, but I can get 2170+ at 1440p.
> 
> Both cards have the same ports; however, mine is HDMI/DP/DP/DP vs the Kingpin's DP/DP/DP/HDMI layout. Curious to see if all ports will be recognized, since I use all of them.
> 
> With my card's max BIOS being 460W, and since I'd like to not kill this card, would the safe power slider setting be 75%? I remember @Jura stating that, but I want to be sure of this important factor before even attempting this.
> 
> It doesn't seem like a 500W bios would give me any benefits since my stock one is so close to it; however, I am open to opinions on that.


75% won't really be different than 100% as pretty much no game needs 750w to maintain max voltage. 60% is as far as I would go for gaming. That'll hold the voltage limit in nearly all games except maybe a few exceptions like Quake 2 RTX.
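Since the slider percentage applies to the flashed BIOS's maximum, the arithmetic behind these numbers is simple enough to sketch (`effective_limit_w` is a hypothetical helper name, not a real tool):

```python
def effective_limit_w(bios_max_w: float, slider_pct: float) -> float:
    """Power limit actually enforced: slider percentage of the BIOS maximum."""
    return bios_max_w * slider_pct / 100

# On the 1000 W KPE BIOS:
print(effective_limit_w(1000, 75))  # 750.0 -- still far above what games draw
print(effective_limit_w(1000, 60))  # 600.0 -- the suggested gaming ceiling
print(effective_limit_w(1000, 46))  # 460.0 -- matches the Zotac's stock cap
```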


----------



## Arizor

Yeah just to add to @yzonker 's sage advice, memory is generally very good this generation. I can hold a very stable +1250 on my mem for gaming, it can go to +1500 for benching, so you should be able to do at least 700-800 without worry. 

The GPU cores are much more of a lottery, 2050 is my max stable for games too (and even then took a loooooot of tweaking with BIOSes and VF curves).

Your 55C temp is a bit odd for 2 x 360mm rads. Hot room maybe? How's the fan set up?


----------



## KCDC

delete


----------



## ManniX-ITA

newls1 said:


> Here is my question: I'm using a 520W BIOS. Watching my RivaTuner statistics overlay in game, wattage never exceeds 320W (the game is frame-capped to 142 FPS) and cooling is completely under control, so why do I need such high voltage just to maintain 2130MHz? Or is this about the normal voltage needed for this GPU?


Flashing a 1000W bios will definitely not help.
I've seen the benchmarks for FC6; it's really a terrible engine.
Full usage, low workload. It's going to push the frequency/voltage very high because of the low power/temperature.

I think you have two options.
The first would be to try a lower core voltage percentage. Trying 25/50/75 could be enough to keep the GPU clock around 2125/2150 MHz instead of 2175 MHz.
You can probably bring the core clock back up a bit from +85.
Core +135 is pretty high for a crash-free profile. I can bench with a core clock offset up to +180/200 in some benchmarks;
others only +146 or +130. But for a crash-free daily profile my max is +110 (I'm still on air).
So even with amazing cooling you may need to scale down from +135 MHz.

The 2nd option, the best one, is to make a curve.
You know at which frequency and voltage it's going to crash.
Using the curve editor is not trivial, but even without too much fuss you should get a stable profile that's better than just +85 MHz.

Open it with Ctrl+F or with the icon after you have set +135:

View attachment 2529749

Now you have a beautiful curve:

View attachment 2529751
In my case I have 2175 MHz at 1087mV for +180 MHz; your strap will be different for voltage and frequency point.

Now let's say I want to keep decent clocks at lower voltages but cap at 2130 MHz, because anything higher crashes.

Hold Shift and select on the curve background from the voltage point after 2130 MHz (2145 MHz in this case) to the end:

View attachment 2529757

Then select the last rising point and drag it down to 2145 MHz; the whole selection will go down:

View attachment 2529758

When you press Apply you'll have a nice curve that keeps the max clock boost until 1050mV and then goes flat (vcore is limited to 1.1V):

View attachment 2529759
This way you should be able to enjoy +135 (if it's really stable) without the clock skyrocketing when the load is low.
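Numerically, the edit described above amounts to "apply the offset, then clamp everything above the last stable frequency". A small sketch with made-up curve points — the mV/MHz values are illustrative, not real Afterburner data:

```python
# Illustrative V/F points: reported mV -> stock boost MHz.
stock_curve = {950: 1935, 1000: 1980, 1050: 2025, 1081: 2040, 1093: 2055}

def offset_and_flatten(curve, offset_mhz, max_stable_mhz):
    """Raise every point by the offset, then cap it at the last stable clock."""
    return {mv: min(mhz + offset_mhz, max_stable_mhz) for mv, mhz in curve.items()}

tuned = offset_and_flatten(stock_curve, offset_mhz=135, max_stable_mhz=2130)
# Lower-voltage points keep the full +135; the top of the curve goes flat.
print(tuned)  # {950: 2070, 1000: 2115, 1050: 2130, 1081: 2130, 1093: 2130}
```

The flat section is what stops a light-load game like FC6 from boosting into the crash zone.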


----------



## newls1

ManniX-ITA said:


> Flashing a 1000W bios will definitely not help.
> I've seen the benchmarks for FC6; it's really a terrible engine.
> Full usage, low workload. It's going to push the frequency/voltage very high because of the low power/temperature.
> 
> I think you have two options.
> The first would be to try a lower core voltage percentage. Trying 25/50/75 could be enough to keep the GPU clock around 2125/2150 MHz instead of 2175 MHz.
> You can probably bring the core clock back up a bit from +85.
> Core +135 is pretty high for a crash-free profile. I can bench with a core clock offset up to +180/200 in some benchmarks;
> others only +146 or +130. But for a crash-free daily profile my max is +110 (I'm still on air).
> So even with amazing cooling you may need to scale down from +135 MHz.
> 
> The 2nd option, the best one, is to make a curve.
> You know at which frequency and voltage it's going to crash.
> Using the curve editor is not trivial, but even without too much fuss you should get a stable profile that's better than just +85 MHz.
> 
> Open it with Ctrl+F or with the icon after you have set +135:
> 
> View attachment 2529749
> 
> 
> Now you have a beautiful curve:
> 
> View attachment 2529751
> 
> 
> 
> In my case I have 2175 MHz at 1087mV for +180 MHz; your strap will be different for voltage and frequency point.
> 
> Now let's say I want to keep decent clocks at lower voltages but cap at 2130 MHz, because anything higher crashes.
> 
> Hold Shift and select on the curve background from the voltage point after 2130 MHz (2145 MHz in this case) to the end:
> 
> View attachment 2529757
> 
> Then select the last rising point and drag it down to 2145 MHz; the whole selection will go down:
> 
> View attachment 2529758
> 
> When you press Apply you'll have a nice curve that keeps the max clock boost until 1050mV and then goes flat (vcore is limited to 1.1V):
> 
> View attachment 2529759
> 
> This way you should be able to enjoy +135 (if it's really stable) without the clock skyrocketing when the load is low.


thank you so much for taking the time to put that reply together, very much appreciate you. I will certainly take this advise and try this in just a bit


----------



## newls1

Is 1.100V too much voltage?


----------



## ManniX-ITA

newls1 said:


> Is 1.100V too much voltage?


It's high and it's the limit, you need external hardware to go over it.


----------



## Arizor

@newls1 it’s the limit but I think many folks here run at 1.1 without issue. As long as your temps are ok it should be fine. Folks like @Falkentyne have a much deeper understanding here though so hopefully they’ll chime in.


----------



## newls1

ManniX-ITA said:


> It's high and it's the limit, you need external hardware to go over it.


I'm certainly not wanting to go over it, I just see people running like 2100 @ less than 1v and I'm jealous!


----------



## newls1

Arizor said:


> @newls1 it’s the limit but I think many folks here run at 1.1 without issue. As long as your temps are ok it should be fine. Folks like @Falkentyne have a much deeper understanding here though so hopefully they’ll chime in.


Falkentyne knows damn near everything!


----------



## ManniX-ITA

newls1 said:


> I'm certainly not wanting to go over it, I just see people running like 2100 @ less than 1v and I'm jealous!


Yeah, they probably won the binning lottery; I'm jealous too 
You can't go over 1.1V because Nvidia limited it.
Otherwise, with your royal cooling, you could perhaps hold higher frequencies by going up in voltage.
But anyway, such a low voltage at 2100 MHz probably only works in specific benchmarks.


----------



## KCDC

Tried that KP 520W bios for fun and to test stability in Redshift rendering, and as I feared it didn't like it. Octane didn't mind it, but since I use both, that's null. It also didn't give much in terms of higher clocks, mostly just more heat. I don't really think the card liked it either; it would sometimes get total system hangs when putting the PL to 120%. All the ports worked though! Either way, it seems sticking to the stock 460W max is where I will stay. I never flashed my 2080 Tis, but I remember more benefits when I flashed my 1080 Tis back in the day.

I did have a crash-free curve at 2175 @ 1050mV (I think that was the voltage) for 1440p gaming that worked well, and then I accidentally deleted it. I forgot about that Shift function in the curve editor, so that oughta make it easier to find the curve again; thanks for that write-up @*ManniX-ITA*. 4K is a bit more difficult as I hit the power limit at about 2050-2080 in heavy games like RDR2 or Cyberpunk. The curve I had for 1440p crashes at 4K, which confuses me, or just means it wasn't actually a stable curve. Shouldn't one curve work for all? Overall it's a lot of fiddling for little gains, but whatever, it's still fun to tweak.

Undervolting to 1995 @ 925mV has given me the most benefit. In rendering it keeps temps under 40, which is nice for two GPUs and an active backplate along with the 10980XE. That's with PL at 100%. It's also always nice to see less power usage when you're constantly rendering. Electricity ain't cheap around here.

Also threw in my backup pump as a second in serial. Wasn't expecting much, but it has helped keep load temps down a couple degrees. It was a fun project figuring out that placement in an already filled case. Pretty sure this case is officially filled to the brim now, haha. Looking forward to my sleeved cables when they arrive.

EDIT: redid the curve; looks like 2160 @ 1087mV is where I am.


----------



## Falkentyne

newls1 said:


> Falkentyne knows damn near everything!


silicon lottery.
And the voltage slider in msi afterburner doesn't increase the voltage directly. Just allows one higher tier via "VID" on the v/f chart to be used corresponding to +15 mhz and 1.069v-1.10v. What you think you see at "1.10v" is actually 1.125v NVVDD.

And clock steps are in 15 mhz.
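One way to picture this: the slider doesn't scale voltage, it just exposes one more V/F tier (one 15 MHz step at the top VID). A rough sketch with illustrative numbers — the tier values are made up for illustration, and treating everything below 100% as the base tier is a simplification:

```python
# With the slider at 0% the curve tops out at its normal last tier; at 100%
# one extra tier becomes usable: +15 MHz at the "1.10 V" VID (which is
# actually ~1.125 V NVVDD at the rail). Values are illustrative, not measured.
BASE_TOP_TIER = (1093, 2055)        # (reported mV, MHz) without the slider
UNLOCKED_TIER = (1100, 2055 + 15)   # one step higher, not a real voltage scale-up

def top_tier(slider_pct: int):
    """Highest usable V/F tier for a given core-voltage slider setting."""
    return UNLOCKED_TIER if slider_pct >= 100 else BASE_TOP_TIER

print(top_tier(0))    # (1093, 2055)
print(top_tier(100))  # (1100, 2070)
```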


----------



## GRABibus

Falkentyne said:


> silicon lottery.
> And the voltage slider in msi afterburner doesn't increase the voltage. Just allows one higher tier on the v/f chart to be used corresponding to +15 mhz and 1.069v-1.10v
> And clock steps are in 15 mhz.


Question: why do great overclockers and great PC enthusiasts always win the silicon lottery?


----------



## Falkentyne

GRABibus said:


> Question: why do great overclockers and great PC enthusiasts always win the silicon lottery?


They don't. The ones who win are the ones who post and they don't tell you how many times they lost.


----------



## West.

My 3090 HOF has been running without any problem since the last time I replaced the thermal pads (stock cooler re-pad).
Recently I installed a new memory kit, and after that the GPU temp rises like 20c in game. I was getting around 60-75c but now it's hitting mid 80c+.
I didn't even touch the GPU at all, but the core temp rises 10-20c for no reason under the same load. Same bios, same settings, same everything.

Has anyone encountered the same problem?


----------



## ManniX-ITA

Falkentyne said:


> 1.069v-1.10v


Why should it work only in this range?
I tried with Kombustor burn-in at 1080p.
It goes from 1920 MHz @ 967mV with the slider at 0% to 1935 MHz @ 981mV with the slider at 100%.
Pretty much the same when the speed is around 2000 MHz and the voltage around 1000mV.
That holds until the temperature goes up; after 65c the slider seems to become ineffective, or the card even drops to a lower strap at 100%.


----------



## pantsoftime

Has anyone ever tried the Strix 500W stock (rebar) BIOS on a Gigabyte Aorus Xtreme? Does it work normally and do you get at least 2 functional HDMI ports? I'd like to try to get >450W but I need at least 2 or the 3 HDMI ports to work.


----------



## Sheyster

newls1 said:


> I'm certainly not wanting to go over it, I just see people running like 2100 @ less than 1v and I'm jealous!


That's not typical at all for stable gaming, most will need over 1v for stable Warzone and BFV gaming sessions.


----------



## GRABibus

Sheyster said:


> That's not typical at all for stable gaming, most will need over 1v for stable Warzone and BFV gaming sessions.


2100MHz with less than 1V? I'd like to see this in Warzone or Cold War with ray tracing enabled. Curious to see who is stable 24/7 in those games with those settings.


----------



## GRABibus

pantsoftime said:


> Has anyone ever tried the Strix 500W stock (rebar) BIOS on a Gigabyte Aorus Xtreme? Does it work normally and do you get at least 2 functional HDMI ports? I'd like to try to get >450W but I need at least 2 or the 3 HDMI ports to work.


You mean 480W Strix bios ?


----------



## newls1

ManniX-ITA said:


> Flashing a 1000W bios will definitely not help.
> I've seen the benchmarks for FC6; it's really a terrible engine.
> Full usage, low workload. It's going to push the frequency/voltage very high because of the low power/temperature.
> 
> I think you have two options.
> The first would be to try a lower core voltage percentage. Trying 25/50/75 could be enough to keep the GPU clock around 2125/2150 MHz instead of 2175 MHz.
> You can probably bring the core clock back up a bit from +85.
> Core +135 is pretty high for a crash-free profile. I can bench with a core clock offset up to +180/200 in some benchmarks;
> others only +146 or +130. But for a crash-free daily profile my max is +110 (I'm still on air).
> So even with amazing cooling you may need to scale down from +135 MHz.
> 
> The 2nd option, the best one, is to make a curve.
> You know at which frequency and voltage it's going to crash.
> Using the curve editor is not trivial, but even without too much fuss you should get a stable profile that's better than just +85 MHz.
> 
> Open it with Ctrl+F or with the icon after you have set +135:
> 
> View attachment 2529749
> 
> 
> Now you have a beautiful curve:
> 
> View attachment 2529751
> 
> 
> 
> In my case I have 2175 MHz at 1087mV for +180 MHz; your strap will be different for voltage and frequency point.
> 
> Now let's say I want to keep decent clocks at lower voltages but cap at 2130 MHz, because anything higher crashes.
> 
> Hold Shift and select on the curve background from the voltage point after 2130 MHz (2145 MHz in this case) to the end:
> 
> View attachment 2529757
> 
> Then select the last rising point and drag it down to 2145 MHz; the whole selection will go down:
> 
> View attachment 2529758
> 
> When you press Apply you'll have a nice curve that keeps the max clock boost until 1050mV and then goes flat (vcore is limited to 1.1V):
> 
> View attachment 2529759
> 
> This way you should be able to enjoy +135 (if it's really stable) without the clock skyrocketing when the load is low.


So I finally had time last night to try this out, and I hope this is OK, but I locked in 2175 @ 1.1v and gamed (FC6) for about an hour and it didn't CTD... YET. With that said, given the card is sandwich-wrapped with waterblocks, the GPU temp didn't get hotter than 34/35c and the hottest MEM temp was 50c... Hope 1.1v won't degrade this *****!


----------



## GRABibus

newls1 said:


> So I finally had time last night to try this out, and I hope this is OK, but I locked in 2175 @ 1.1v and gamed (FC6) for about an hour and it didn't CTD... YET. With that said, given the card is sandwich-wrapped with waterblocks, the GPU temp didn't get hotter than 34/35c and the hottest MEM temp was 50c... Hope 1.1v won't degrade this ***!


Try Cold War multiplayer if you have it, with everything maxed out (ray tracing on and DLSS on quality).


----------



## yzonker

newls1 said:


> So I finally had time last night to try this out, and I hope this is OK, but I locked in 2175 @ 1.1v and gamed (FC6) for about an hour and it didn't CTD... YET. With that said, given the card is sandwich-wrapped with waterblocks, the GPU temp didn't get hotter than 34/35c and the hottest MEM temp was 50c... Hope 1.1v won't degrade this ***!


Seems very unlikely it will hurt the card. That voltage is allowed by Nvidia.


----------



## J7SC

yzonker said:


> Seems very unlikely it will hurt the card. That voltage is allowed by Nvidia.


^^...yeah, 1.1v for Ampere shouldn't be an issue with decent cooling, though I've never seen my Strix go above 1.087v, and rarely even at that (not yet anyways ). Coincidentally, over at the 6900XT thread today, someone shared how to unlock voltage with MPT, so now it ups the ante to 1.25v+ for some (hopefully those with great cooling)...burn baby, burn


----------



## geriatricpollywog

Does anybody else have a crossed-out die?


----------



## maustrap

Hi, first time posting here. Through absolute sheer luck and circumstance, I ended up getting a SUPRIM X last week and have been reading through this thread pretty intensely. I had a question on the thermal re-padding and hope someone in the same situation could shed some light on it for me.

Is there any downside to only re-padding the memory under the backplate? I'm admittedly a little nervous about disassembling the fan to also re-pad the front chips, since I can keep it under 60c with the fan curve I set up, but I wasn't sure if only doing the back side will throw the temps of the whole card off balance or something. Has anyone else tested or just done the backplate side only?

Also, in regards to the pads, I did some research and found the PCB layout for the SUPRIM X and the pad thicknesses. I wasn't sure if I needed to match those EXACTLY or if sticking with 1.5mm or 2mm all around is better. I was looking at the Thermalright Odyssey pads since I found a video of someone doing it step by step on YouTube.

Appreciate any feedback on this!


----------



## yzonker

maustrap said:


> Hi, first time posting here. Through absolute sheer luck and circumstance, I ended up getting a SUPRIM X last week and have been reading through this thread pretty intensely. I had a question on the thermal re-padding and hope someone in the same situation could shed some light on it for me.
> 
> Is there any downside to only re-padding the memory under the backplate? I'm admittedly a little nervous about disassembling the fan to also re-pad the front chips, since I can keep it under 60c with the fan curve I set up, but I wasn't sure if only doing the back side will throw the temps of the whole card off balance or something. Has anyone else tested or just done the backplate side only?
> 
> Also, in regards to the pads, I did some research and found the PCB layout for the SUPRIM X and the pad thicknesses. I wasn't sure if I needed to match those EXACTLY or if sticking with 1.5mm or 2mm all around is better. I was looking at the Thermalright Odyssey pads since I found a video of someone doing it step by step on YouTube.
> 
> Appreciate any feedback on this!


Re-padding only the backplate is a small improvement at best from what I can recall. Particularly if you don't actively cool the backplate with a heatsink/fan.


----------



## J7SC

0451 said:


> Does anybody else have a crossed-out die?
> 
> View attachment 2529961


...it's been known to happen, i.e. per the hardwareluxx discovery (March '21) and secondary stories > here


----------



## newls1

wth does a crossed-out die mean?


----------



## ManniX-ITA

newls1 said:


> wth is a crossed out die mean?


Oh it's just a die that needed a little bit of exorcism to get all the shaders unlocked


----------



## geriatricpollywog

ManniX-ITA said:


> Oh it's just a die that needed a little bit of exorcism to get all the shaders unlocked


I mean I’m not complaining 

I scored 16 195 in Port Royal


Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## KedarWolf

0451 said:


> I mean I’m not complaining
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 195 in Port Royal
> 
> 
> Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


I think that might be the best Port Royal score I've seen here. 

Did you try pushing your memory even higher, maybe?


----------



## chibi

Only Splave has him beat, and the rest of the 11900K + 3090 scores are for the most part all his too. That's cool and funny at the same time lol.


----------



## geriatricpollywog

KedarWolf said:


> I think that might be the best Port Royal score I've seen here.
> 
> You try pushing your memory even higher maybe?


When I push the memory past 1400, the score starts to drop and I see artifacts. I need to try adding memory voltage using the Kingpin dip switches.



chibi said:


> Only Splave has him beat, the rest of the 11900K + 3090 scores are all his for the most part. That's cool and funny at the same time lol.


11900K guys are too obsessed with memory overclocking to care about their GPU.

I’m #50 on the PR HOF. The first 35-37 scores are on some kind of extreme cooling. I am running ice water directly into my loop. I can probably get to the 35-37 position and have the best non-LN2 score.


----------



## Arizor

0451 said:


> I mean I’m not complaining
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 195 in Port Royal
> 
> 
> Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


 What a heartwarming underdog story. Back in the chip factory all the other chips laughed and pointed, they said he'd never be GA102-300 material, but now look at him, taking on the big guns and winning!

It feels like a Rudolph the Rednosed Reindeer song.


----------



## ManniX-ITA

I'm thrilled 
Surprisingly I didn't have to pay any customs fee for clearance!



Spoiler: Unboxing

----------



## geriatricpollywog

ManniX-ITA said:


> I'm thrilled
> Surprisingly I didn't have to pay any customs fee for clearance!
> 
> 
> 
> Spoiler: Unboxing
> 
> 
> 
> 
> View attachment 2530072
> 
> 
> View attachment 2530073
> 
> 
> View attachment 2530074
> 
> 
> *
> View attachment 2530075
> 
> 
> View attachment 2530076
> *


Woohoo! ‘Murica

It’s like a Cadillac in a box


----------



## ManniX-ITA

0451 said:


> It’s like a Cadillac in a box


Yeah, it's screaming awesomeness 

Haven't peeled the sticker off yet to see the mirror finish; I'll do it at the last minute when I mount it.
I'll take a shot then!


----------



## DirDir

0451 said:


> Does anybody else have a crossed-out die?
> 
> View attachment 2529961



Hi beautiful


----------



## geriatricpollywog

DirDir said:


> View attachment 2530091
> Hi beautiful


Is that a HOF? Mine is a KPE. So these crossed-out dies must be passing the binning.


----------



## DirDir

0451 said:


> Is that a HOF? Mine is a KPE. So these crossed-out dies must be passing the binning.


Ya, it's a HOF OC Lab Edition. My guess is that the Kingpin cards and the HOF Lab editions are hand-picked.
I will ask a person that should know more about that.
Atm I only have 2 runs on the card, on a fully bloated Windows and 0 cooling; 3x 3090 cards suck to keep cool in a case lol
Left side is the HOF card, mid is you, and right side is a different card in my small computer (MSI card) that has nicer cooling, but the RAM/CPU sucks.
Hoping to go over 16k soon, I only need a new water pump and to finish my own backplate.


----------



## DirDir

0451 said:


> Is that a HOF? Mine is a KPE. So these crossed-out dies must be passing the binning.


Btw any recommendations on a good water pump for going cold?


----------



## PLATOON TEKK

What’s been good. Hope you’ve all been blessed, been a second. Anybody hear anything about the KP Optimus blocks? Last I heard was the ceramic switch. I also noticed they had the full pads in stock (for those waiting on the Fujipoly .5mm).

Just got back home from my travels and had to reclaim top spot on water. I scored a single point higher ha

31,367 pts

edit: that was on 1.1V. Have yet to use the classy tool with stability oddly. Also those with OC labs (and maybe normal hofs), there’s apparently a new goc version that works with the 3090s, will test if I get it and post here if it works.



Spoiler: Pic


----------



## geriatricpollywog

DirDir said:


> Btw any recommendations on a good water pump for going cold?


The water pump doesn’t matter. Just get the water as cold as possible. What block do you have on the HOF?


----------



## J7SC

DirDir said:


> Btw any recommendations on a good water pump for going cold?


special series 









...as to water pumps for (relative) cold, I've used real D5s for as low as -12C before w/o issue. Re. liquids, Mayhems' X1 UV blue is rated for -15C to +100C (depending on pure / mix).


----------



## DirDir

0451 said:


> The water pump doesn’t matter. Just get the water as cold as possible. What block do you have on the HOF?


Bykski full cover, but only 6.2 W/mK thermal pads atm. Hard to find 1.8mm thermal pads. But going to look now if the imprint is good on the 1.8mm, and see if I can't use the 1.5mm Gelid GP Ultimate.



> ...as to water pumps for (relative) cold, I've used real D5s for as low as - 12C before w/o issue. Re. liquids, Mayhems' X1 UV blue is rated for -15 C to +100 C (depending on pure / mix.


Looks like a "food" glycol mix only, nothing special. Well, this one is blue I guess lol. Where did you buy the nvidia card J7SC? At AMD?


----------



## Falkentyne

DirDir said:


> Bykski full cover, but only 6.2W/mk thermal pads atm. Hard to find 1.8mm thermal pads. But going to look now if the inprint is good on the 1.8mm and see if i cant ues the 1.5mm gelid gp ultimate.
> 
> looks like a "food" glycol mix only nothing special. Well this is blue i guess lol. Wear did you buy the nvidia card J7SC? at AMD?


Gelid Ultimates can be compressed from 2.0mm to 1.8mm if you have a flat sheet of glass and a digital micrometer caliper (which you can get for $10) so you can measure your work. 
Or you can get the 1.778mm Laird Tflex pads if you want to donate a kidney.






A17752-07 Laird Technologies - Thermal Materials | Fans, Thermal Management | DigiKey


Order today, ships today. A17752-07 – Thermal Pad Gray 228.60mm x 228.60mm Square Tacky - Both Sides from Laird Technologies - Thermal Materials. Pricing and Availability on millions of electronic components from Digi-Key Electronics.




www.digikey.com
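For anyone wanting to sanity-check a pad squeeze before buying, the 2.0mm-to-1.8mm case above works out to a 10% compression. A quick calculator sketch — the 30% max-compression default is a made-up placeholder, not a Gelid or Laird spec, so check the actual pad datasheet:

```python
def compression_pct(nominal_mm, target_mm):
    """Percent a pad must compress to go from nominal to target thickness."""
    return (nominal_mm - target_mm) / nominal_mm * 100

def fits(nominal_mm, target_mm, max_compression_pct=30):
    """True if the required squeeze is within the assumed rated limit."""
    return 0 <= compression_pct(nominal_mm, target_mm) <= max_compression_pct

# The 2.0mm -> 1.8mm case discussed above: a 10% squeeze
print(round(compression_pct(2.0, 1.8)))
# 2.0mm -> 1.7mm (leaving room for a bit of paste): a 15% squeeze
print(round(compression_pct(2.0, 1.7)))
```

Soft pads handle a 10-15% squeeze easily; the flat glass and caliper are just there so you can measure that you actually hit the target thickness.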


----------



## Shawnb99

PLATOON TEKK said:


> Anybody hear anything about the KP Optimus blocks?




__ https://twitter.com/i/web/status/1451601221056286722
Hopefully out this week


----------



## KedarWolf

DirDir said:


> Bykski full cover, but only 6.2W/mk thermal pads atm. Hard to find 1.8mm thermal pads. But going to look now if the inprint is good on the 1.8mm and see if i cant ues the 1.5mm gelid gp ultimate.
> 
> looks like a "food" glycol mix only nothing special. Well this is blue i guess lol. Wear did you buy the nvidia card J7SC? at AMD?


Gelid Extreme 2.0mm are extremely soft and compressible and would compress more than enough to use as 1.8mm 

Most people go Extreme over Ultimate for that reason and get better results with them than even with Thermalright or Fujipoly.


----------



## DirDir

Falkentyne said:


> Gelid Ultimates can be compressed from 2.0mm to 1.8mm if you have a flat sheet of glass and a digital micrometer caliper (which you can get for $10) so you can measure your work.
> Or you can get the 1.778mm Laird Tflex pads if you want to donate a kidney.
> 
> 
> 
> 
> 
> 
> A17752-07 Laird Technologies - Thermal Materials | Fans, Thermal Management | DigiKey
> 
> 
> Order today, ships today. A17752-07 – Thermal Pad Gray 228.60mm x 228.60mm Square Tacky - Both Sides from Laird Technologies - Thermal Materials. Pricing and Availability on millions of electronic components from Digi-Key Electronics.
> 
> 
> 
> 
> www.digikey.com


Sorry, only have one kidney, want to keep that one  But compressing a 2mm to 1.8mm sounds easy, or even 1.7 so I can use a bit of thermal paste.

After 6 min of slamming the card on 30-32c water, what temps are good on core and mem?
Atm I am hitting 50c on core and 80c on mem on the HOF.
On the MSI card I hit 40c on core and 74c on mem.


----------



## DirDir

KedarWolf said:


> Gelid Extreme 2.0mm are extremely soft and compressible and would compress more than enough to use as 1.8mm
> 
> Most people go Extreme over Ultimate for that reason and get better results with them than even with Thermalright or Fujipoly.


Will try Falkentyne's suggestion first; if the temps are off I will try yours.


----------



## J7SC

DirDir said:


> (...) Wear did you buy the nvidia card J7SC? at AMD?


? 🎃 ? ...I bought the Ampere at the same place as the BigNavi, and they live in the same case. Harmony !


----------



## geriatricpollywog

DirDir said:


> Sorry only have one kidney, want to keep that one  But Compressing a 2mm to 1,8mm sounds easy, or even 1,7 so i can ues a bit of thermal paste.
> 
> After 6min off slamming the card on 30c-32c water, what celsius are good on core and mem temp?
> Atm i am hitting 50c on core and 80c mem on ht Hof
> The Msi card i hit 40c on core and 74c mem on the msi


You should see about 14C-17C difference between water temperature and core temperature under load unless you have an active backplate.
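That rule of thumb can be jotted down as a quick estimator. The 14-17C band is just the figure quoted above (a ballpark for a water-cooled 3090 under load, no active backplate), and the 3C slack in the "looks off" check is an arbitrary margin I picked:

```python
def expected_core_c(water_c, delta_lo=14, delta_hi=17):
    """Ballpark core-temp window given water temp and the quoted 14-17C delta."""
    return water_c + delta_lo, water_c + delta_hi

def looks_off(water_c, core_c, delta_hi=17, slack_c=3):
    """Flag a possible mount/pad problem if core runs well past the band."""
    return core_c - water_c > delta_hi + slack_c

lo, hi = expected_core_c(31)   # midpoint of the 30-32c water mentioned above
print(lo, hi)                  # 45 48
print(looks_off(31, 50))       # False - a 50c core is only just past the band
```

So 50c core on 30-32c water is on the warm side but not alarming; 80c mem is where the re-pad pays off.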


----------



## KCDC

DirDir said:


> Bykski full cover, but only 6.2W/mk thermal pads atm. Hard to find 1.8mm thermal pads. But going to look now if the inprint is good on the 1.8mm and see if i cant ues the 1.5mm gelid gp ultimate.
> 
> looks like a "food" glycol mix only nothing special. Well this is blue i guess lol. Wear did you buy the nvidia card J7SC? at AMD?


I have the same 1.7 or 1.8mm pads that came with my Bykski blocks. I tried 1.5mm Gelid Extremes and everything makes contact, good thermals and so on.


----------



## PLATOON TEKK

Shawnb99 said:


> __ https://twitter.com/i/web/status/1451601221056286722
> Hopefully out this week


Thanks a million, wonder how much better the Optimus blocks will perform over stock HC.


----------



## J7SC

PLATOON TEKK said:


> Thanks a million, wonder how much better the Optimus blocks will perform over stock HC.


...would be great if you could post before-and-after comparative results !


----------



## geriatricpollywog

PLATOON TEKK said:


> Thanks a million, wonder how much better the Optimus blocks will perform over stock HC.


Here is my best Hydrocopper. GL!


I scored 16 195 in Port Royal


Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## PLATOON TEKK

J7SC said:


> ...would be great if you could post before-and-after comparative results !


Will do for sure brother. I’ll also be using that thermal putty ManniX recommended and just slap it on wherever I can. I’ll set the chillers and cards to all the same values and will be able to pretty much get a 1:1 comparison to post.



0451 said:


> Here is my best Hydrocopper. GL!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 195 in Port Royal
> 
> 
> Intel Core i9-11900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Boom! that’s a beautiful score, I have yet to run single card Port. But this is why I’m so curious as to how much better the Optimus will potentially be. I’m assuming to create a substantial performance difference, they went for the Cerakote ceramic coating.

The two HC I used to land sli Port 1st on water the other day perform really really well for a “stock” wb. But to be fair those bastards were still $299 a pop.


----------



## geriatricpollywog

PLATOON TEKK said:


> Will do for sure brother. I’ll also be using that thermal putty ManniX recommended and just slap it on wherever I can. I’ll set the chillers and cards to all the same values and will be able to pretty much get a 1:1 comparison to post.
> 
> 
> 
> Boom! that’s a beautiful score, I have yet to run single card Port. But this is why I’m so curious as to how much better the Optimus will potentially be. I’m assuming to create a substantial performance difference, they went for the Cerakote ceramic coating.
> 
> The two HC I used to land sli Port 1st on water the other day perform really really well for a “stock” wb. But to be fair those bastards were still $299 a pop.


I don’t think a $299 waterblock will necessarily perform better than a $149 waterblock if there is good contact. Copper is copper. The Optimus block may keep the core 5-10 degrees cooler with the active backplate cooling.


----------



## PLATOON TEKK

0451 said:


> I don’t think a $299 waterblock will necessarily perform better than a $149 waterblock if there is good contact. Copper is copper. The Optimus block may keep the core 5-10 degrees cooler with the active backplate cooling.


Quality/choice of materials, flow and design definitely play a part. That normally improves as you go up in price. But the Barrow HOF block was going for $299 on their site and performed on par with (actually slightly worse than) the cheaper block for the Strix. I'm just highlighting that I'd expect good performance from a $299 block sold by EVGA, which is the case here.

Am most curious about the role the ceramic will play for thermals.


----------



## Arizor

Finally installed my optimus CPU block to match the gpu.


----------



## PLATOON TEKK

Arizor said:


> Finally installed my optimus CPU block to match the gpu.
> View attachment 2530274


That's badass, how's the CPU temp difference?


----------



## Arizor

Thanks @PLATOON TEKK . It’s tricky since we’ve been at 18C here in Melbourne for ages, and the day I install it we get a 27C day, and my study has no air con outside of a small fan.

My NZXT Kraken AIO was solid but under load it would go high, around 85 if not more. I’ve applied a minimal overclock to the 5900x (5ghz max boost) and am getting about a 10% improvement on my Cinebench scores (r20 is 9k, previously 8ish k), and it’s holding stable (used to crash) with temps not moving above 72C under sustained load.

It basically seems the Optimus keeps the temps under much better control in terms of the ceiling the 5900x reaches, helping stability a lot. Found the same with my Strix block.


----------



## DirDir

J7SC said:


> ? 🎃 ? ...I bought the Ampere at the same place as the BigNavi, and they live in the same case. Harmony !


 Just you wait, one day one of the cards will be dead! And it's all your fault


----------



## DirDir

Just preordered an i9 12900K and ASUS ROG MAXIMUS Z690 APEX, but but but but I can't see if there is SLI support  Plz tell me I am wrong


----------



## Nizzen

DirDir said:


> Did just preorder i9 12900K and ASUS ROG MAXIMUS Z690 APEX but but but but i can't see if ther is a sli support  Plz tell me i am wrong


The MB doesn't need SLI support... It will work with SLI/NVLink on the Apex Z690. Looks like 4-slot spacing too?
I'm going to test 2x 3090 NVLink on the Apex Z690 for sure if it's 4-slot spacing.


----------



## DirDir

Nizzen said:


> MB don't need sli support... It will work with sli/nvlink on apex z690. Looks like 4 slot spacing too?
> I'm going to test 2x3090 nvlink on Apex z690 for sure if it's 4 slot spacing.


Yes, that's good news. Ya, it looks like it is 4-slot spacing; if not I'll just pick up the ASUS ROG MAXIMUS Z690 HERO.


----------



## J7SC

Nizzen said:


> MB don't need sli support... It will work with sli/nvlink on apex z690. Looks like 4 slot spacing too?
> I'm going to test 2x3090 nvlink on Apex z690 for sure if it's 4 slot spacing.


...fyi re. 3-slot bridge > here


----------



## KedarWolf

I freed up nearly 15GB of my Google Drive in the last few days. I'm going to make completely stripped benching Windows 10 O/S's with most services removed and stripped of anything not needed for benching.

Both 32 bit (I had requests for it) and 64 bit.

There will be no printing, Wi-Fi, Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.

Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.

Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.

Some might break their existing scores for Port Royal, Cinebench and CPU-Z etc.

I'll start on it and test them in VMWare tonight when I'm home from work.


----------



## newls1

KedarWolf said:


> I freed up nearly 15GB of my Google Drive in the last few days. I'm going to make completely stripped benching Windows 10 O/S's with most services removed and stripped of anything not needed for benching.
> 
> Both 32 bit (I had requests for it) and 64 bit.
> 
> There will be no printing, Wi-Fi, Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.
> 
> Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.
> 
> Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.
> 
> Some might break their existing scores for Port Royal, Cinebench and CPU-Z etc.
> 
> I'll start on it and test them in VMWare tonight when I'm home from work.


I wants!!!!


----------



## KedarWolf

Win10BenchISO.zip







drive.google.com





New and improved benching ISO, stripped, services disabled.

Windows 10 LTSC 2021, lots of services disabled and all bloatware removed.

Tested as working in VMWare.

There will be no printing, no audio no Wi-Fi, no Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.

Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.

Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.

READ the README.txt in the .zip file, there are some more things to disable with Autoruns and you need to install your chipset drivers and use the included NVCleanstall_1.12.0.exe to install Nvidia drivers.

Burn the ISO with the included Rufus; it can only be used for a clean Windows install.


----------



## DirDir

KedarWolf said:


> I freed up nearly 15GB of my Google Drive in the last few days. I'm going to make completely stripped benching Windows 10 O/S's with most services removed and stripped of anything not needed for benching.
> 
> Both 32 bit (I had requests for it) and 64 bit.
> 
> There will be no printing, Wi-Fi, Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.
> 
> Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.
> 
> Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.
> 
> Some might break their existing scores for Port Royal, Cinebench and CPU-Z etc.
> 
> I'll start on it and test them in VMWare tonight when I'm home from work.


Did you use NTLite to strip it?


----------



## KedarWolf

DirDir said:


> Did you ues NTlite to strip it?


No, I used Optimize Offline.









Optimize-Offline Guide - Windows Debloating Tool, Windows 1803, 1903, 19H2, 1909, 20H1 and LTSC 2019


All credit goes to GodHand and who wrote and maintains this script. And to @gdeliana who created the fork of Godhand' s Script we are using for...




forums.mydigitallife.net


----------



## GRABibus

newls1 said:


> Hope 1.1v wont degrade this ***!


No issue so far with 1.1V !

Look ! 






My Strix on air with MSI Suprim X Bios

Curve :


----------



## DirDir

Nizzen said:


> MB don't need sli support... It will work with sli/nvlink on apex z690. Looks like 4 slot spacing too?
> I'm going to test 2x3090 nvlink on Apex z690 for sure if it's 4 slot spacing.


The Apex is 4-slot and the Hero 3-slot.


----------



## newls1

GRABibus said:


> No issue so far with 1.1V !
> 
> Look !
> 
> 
> 
> 
> 
> 
> My Strix on air with MSI Suprim X Bios
> 
> Curve :
> 
> View attachment 2530403


Thank you for the update sir. We are @ same voltage and exact same boost speeds.... Mine will eventually drop to 1.093v and 2130MHz but then right back to 2145. Thanks for the reply, makes me feel a little better


----------



## zkareemz

I'm new here,
is this considered good performance?









I scored 19 561 in Time Spy


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com





MSI Suprim x 3090 - ASUS 1000w bios
i9 9900k - 5.1GHz
Windows 11


I'd appreciate it if you could share a bios that I can use with my Suprim X.


----------



## GRABibus

zkareemz said:


> I'm new here,
> does this considered a good performance?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 19 561 in Time Spy
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> MSI Suprim x 3090 - ASUS 1000w bios
> i9 9900k - 5.1GHz
> Windows 11
> 
> 
> I'd appreciate if you share with me a bios that I can use with my Suprim x .


Try those :

*EVGA 500w with Rebar :*








EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com





*EVGA Kingpin 520W with Rebar :*


----------



## DirDir

KedarWolf said:


> Win10BenchISO.zip
> 
> 
> 
> 
> 
> 
> 
> drive.google.com
> 
> 
> 
> 
> 
> Stripped Windows 10, lots of services disabled and all bloatware removed.
> 
> Tested as working in VMWare.
> 
> There will be no printing, no audio no Wi-Fi, no Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.
> 
> Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.
> 
> Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.
> 
> READ the README.txt in the .zip file, a few more things to disable with Autoruns and you need to install your chipset drivers and use the included NVCleanstall_1.10.0.exe to install Nvidia drivers.
> 
> Burn the ISO with the included RUFUS and can only be used as a clean Windows install.



Downloaded it and would love to try it tonight, any recommendations?


----------



## DirDir

PLATOON TEKK said:


> edit: that was on 1.1V. Have yet to use the classy tool with stability oddly. Also those with OC labs (and maybe normal hofs), there’s apparently a new goc version that works with the 3090s, will test if I get it and post here if it works.


What classy tool are you talking about? And what's a goc?

Did look at your Instagram, f***ing nice setups. 2.9c, what are you using to get the water so low? Do you have a pic of your OC setup?


----------



## Arizor

If anyone has Guardians of the Galaxy (very good game overall, but perhaps wait for patches to fix quite a few bugs if you're on the fence), they have a solid benchmark. Interestingly, testing at 4k, DLSS off, and graphics settings set to highest ("ULTRA"), my Strix runs better on the 480W Strix BIOS than the 520 KINGPIN Bios.

My Kingpin scores 111ish, whilst Strix goes to 115 (this screenshot 114 though).


----------



## kryptonfly

@Arizor : You're just at 2010mhz ?



Usually I play at 2040 mhz 906mV.


----------



## GRABibus

kryptonfly said:


> @Arizor : You're just at 2010mhz ?
> 
> 
> 
> Usually I play at 2040 mhz 906mV.


Woo, that’s very good.
From my side, I can play [email protected],950V.
At 2055MHz with the same voltage => crash in BFV, for example.


----------



## Arizor

Yeah @kryptonfly , I lost the chip lottery, it can go to 2040 but needs a lot of voltage and the kingpin BIOS, not really worth it.


----------



## yzonker

Arizor said:


> Yeah @kryptonfly , I lost the chip lottery, it can go to 2040 but needs a lot of voltage and the kingpin BIOS, not really worth it.


That's definitely the exception though. A lot of cards probably can't even complete a PR run with that offset. Looks like it's +270 core to get to [email protected] Although I did make it through PR. Didn't bother doing any of the usual tweaks so score is kinda low (switching resolution, forcing on reBar, etc...).



www.3dmark.com





I feel certain that isn't game stable on my card though.


----------



## Arizor

Yeah for sure @yzonker 2010 and +1250 mem is my “game all day, rock solid” setting. I can go quite a bit higher but as you say, stability in games really suffers.


----------



## GRABibus

newls1 said:


> Thank you for the update sir. We are @ same voltage and exact same boost speeds.... Mine will eventually drop to 1.093v and 2130MHz but then right back to 2145. Thanks for the reply, makes me feel a little better


[email protected] 
21°C ambient
Still with my strix on stock air cooler.


----------



## Falkentyne

GRABibus said:


> [email protected]
> 21°C ambient
> Still with my strix on stock air cooler.
> 
> 
> 
> 
> 
> 
> View attachment 2530868


You can do +192 because your effective clocks drop.
Also the steps are +150, +165, +180, +195...

try +180 or +195 without touching the V/F curve at all. You will crash, right?
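Those steps exist because Ampere boost clocks move in 15 MHz bins, so a requested offset just gets snapped to a bin. Whether the driver floors or rounds is an assumption on my part; this little sketch floors:

```python
BIN_MHZ = 15  # Ampere boost clocks step in 15 MHz increments

def snap_offset(requested_mhz):
    """Floor a requested core offset down to the nearest usable 15 MHz bin."""
    return (requested_mhz // BIN_MHZ) * BIN_MHZ

print(snap_offset(192))   # 180 - a +192 request can't land between bins
print([snap_offset(x) for x in (150, 165, 180, 195)])  # on-bin values pass through
```

Which is why the "+192" offset Falkentyne mentions effectively behaves like one of the neighbouring bins rather than a clock of its own.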


----------



## GRABibus

Falkentyne said:


> You can do +192 because your effective clocks drop.
> Also the steps are +150, +165, +180, +195...
> 
> try +180 or +195 without touching the V/F curve at all. You will crash, right?


Of course I will crash.
+120MHz is my max stable gaming offset.

On the MSI AB screenshot and in the video I used a +120MHz offset and then 2220MHz at 1.1V.


----------



## Falkentyne

GRABibus said:


> of course I will crash.
> +120MHz is my max stable gaming offset.
> 
> on the MSI AB screenshot and on the video I made +120MHz offset and then 2220Mhz at 1,1V.


Does your FPS actually increase from using +120 to manually changing the curve like this?
If it does, that's great. But if it does not, there's no point to doing it. Because to fix your effective clocks, you must raise NVVDD voltage directly.
Then to increase stability so you don't crash, you must raise MSVDD voltage directly...


----------



## Arizor

Yeah as @Falkentyne is saying (I think, I completely defer to this expertise), I can raise my "clock" by manually adjusting the V/F curve, and AB/Rivatuner can show me running at like 2100mhz, but the Effective Clock is substantially lower.


----------



## kryptonfly

GRABibus said:


> Woo, that’s very good.
> From my side, I can play [email protected],950V.
> As of 2055MHz with same voltage => crash in BFV for example.


Depends a lot on the temperature and the game; starting at 2175mhz there's a wall and it crashes above 40°C. Metro Exodus EE is really hard: I can play at 2100mhz 975mV but it crashes as temperature climbs, needs 981mV+, but power draw is insane! (~500W at 4K ultra RTX DLSS quality). I always use the curve, it's the only way to reach an accurate and high OC. Mem on the other hand is +1098mhz, and for benching +1200mhz max.
In 3dmark Sampler Feedback I do 2220mhz 981mV, I didn't test more; it's really light compared to RTX games. I think there are at least 3 levels of OC:
RTX games, really heavy (Metro Exodus EE, Cold War... and Timespy)
DX12 games, mid power draw (SOTTR for example, Guardians too...)
DX11 games or light power draw (Firestrike, Sampler Feedback and many more), but every game has its own power draw.


----------



## Arizor

Yeah, RTX + DLSS is a brutal combo for testing overclocks. It's interesting you say you need the curve to get an accurate OC @kryptonfly ; I can get a very high OC with a curve, but I find accuracy suffers (i.e. in terms of Effective Clock).


----------



## yzonker

Arizor said:


> Yeah as @Falkentyne is saying (I think, I completely defer to this expertise), I can raise my "clock" by manually adjusting the V/F curve, and AB/Rivatuner can show me running at like 2100mhz, but the Effective Clock is substantially lower.


Yup, effective clock is what matters and can be manipulated by changing the VF curve points to the left of the set point. Steeper the curve, generally the lower the effective clock. Probably some limit to this. Haven't tested that. I cheated a little on that PR run previously using this bit of info. I probably could have made it crash by making the slope very shallow since +270 is definitely ragged edge for my card. I chose something I think is typical though of what most may do.
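The slope manipulation described above is easiest to picture as reshaping the list of (voltage, frequency) points: flattening everything at or above your set point gives the shallow curve, while leaving the stock slope to the left gives the steep one. A sketch on made-up points — this is not Afterburner's actual API, just the shape of the edit:

```python
def flatten_curve(points, set_voltage_mv, set_freq_mhz):
    """Clamp every V/F point at/above the chosen voltage to one frequency.

    points: list of (voltage_mv, freq_mhz) tuples, ascending voltage.
    Points below the set voltage keep their stock frequency, so the card
    still downclocks cleanly under power/thermal limits.
    """
    return [(v, set_freq_mhz if v >= set_voltage_mv else f) for v, f in points]

# Hypothetical stock curve, flattened at 1050mV/2145MHz
stock = [(900, 1950), (950, 2010), (1000, 2070), (1050, 2100), (1100, 2130)]
print(flatten_curve(stock, 1050, 2145))
```

Making the stock portion shallower (smaller frequency drops per voltage step) is what pulls the effective clock up toward the requested one.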


----------



## J7SC

Arizor said:


> Yeah RTX + DLSS is a brutal combo to test overclocks. It's interesting you say you need to curve to get the accurate OC @kryptonfly ; I can get a very high OC with a curve, but I find accuracy suffers (i.e. in terms of Effective Clock).


I've yet to use VF curve on my 3090...These days, I tend to tune for max effective clock and throw a lot of cooling at the GPU, along with one custom vbios for the dual vbios settings. The way I function, if I started using VF curves, I would be back to this in no time... 🎃


----------



## kryptonfly

Arizor said:


> Yeah RTX + DLSS is a brutal combo to test overclocks. It's interesting you say you need to curve to get the accurate OC @kryptonfly ; I can get a very high OC with a curve, but I find accuracy suffers (i.e. in terms of Effective Clock).


Yep, there are little tricks with the curve:
Once again it depends on the temperature; if you apply the curve with a low GPU temperature then you will throttle early, so you need to warm the GPU up a little and apply the curve again to improve GPU boost.
If you are PL, it's better to set 2 more points to the left and take this point as the true OC: ex [email protected], [email protected] and the one [email protected] would be the true OC; tune just 3 points if PL and 1 point if not PL, then apply, and the curve will fix itself. Also there's often a bin (15mhz) lower than the curve, so you need to apply 2025mhz to get 2010mhz. Not easy because of temperature and load, but the effective clock should not move (or will sit at your desired OC if PL).


----------



## GRABibus

Falkentyne said:


> Does your FPS actually increase from using +120 to manually changing the curve like this?
> If it does, that's great. But if it does not, there's no point to doing it. Because to fix your effective clocks, you must raise NVVDD voltage directly.
> Then to increase stability so you dont crash, you must raise MSVDD Voltage directly...


My fps don't increase, and I knew they wouldn't.
My aim was just to show @newls1 that 1.1V is safe, as he was worrying about this.


----------



## joshpdemesa

Just want to ask if I did a bad mount just by referring to this paste job? It seems the block is actually touching a small portion of the center of the die. I'm getting 13°C core delta over water temp. hot spot delta over core is like 15-17°C I think. Will check once I've remounted.

GPU side:









Block side:









EDIT: Resized the images
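One way to judge a mount beyond eyeballing the paste print is to normalize the delta by power: core-to-water delta divided by board power gives an effective thermal resistance you can compare between remounts. The 400 W figure below is an assumption for a loaded 3090, not a number from the post:

```python
# Effective thermal resistance of the mount: temperature delta over power.
power_w = 400        # assumed board power under load (not measured)
core_delta_c = 13    # core temp minus water temp, from the post above
resistance_c_per_w = core_delta_c / power_w
print(f"~{resistance_c_per_w:.4f} degC/W")
```

Tracking that number across remounts tells you whether a new mount is genuinely better, rather than just running at a lower power draw.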


----------



## Panchovix

KedarWolf said:


> Win10BenchISO.zip
> 
> 
> 
> 
> 
> 
> 
> drive.google.com
> 
> 
> 
> 
> 
> Stripped Windows 10, lots of services disabled and all bloatware removed.
> 
> Tested as working in VMWare.
> 
> There will be no printing, no audio no Wi-Fi, no Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.
> 
> Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.
> 
> Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.
> 
> READ the README.txt in the .zip file, a few more things to disable with Autoruns and you need to install your chipset drivers and use the included NVCleanstall_1.10.0.exe to install Nvidia drivers.
> 
> Burn the ISO with the included RUFUS and can only be used as a clean Windows install.


Just passed by with my 3080 to say thanks! Just installed the ISO and will try to run some benchs tonight, in the day the ambient temp is too high lol


----------



## geriatricpollywog

joshpdemesa said:


> Just want to ask if I did a bad mount just by referring to this paste job? It seems the block is actually touching a small portion of the center of the die. I'm getting 13°C core delta over water temp. hot spot delta over core is like 15-17°C I think. Will check once I've remounted.
> 
> GPU side:
> View attachment 2531104
> 
> 
> Block side:
> View attachment 2531105


RMA card paste job with stock AIO. It looks like the paste dried before the AIO was installed. There was virtually no contact.









My paste job:


----------



## Falkentyne

joshpdemesa said:


> Just want to ask if I did a bad mount just by referring to this paste job? It seems the block is actually touching a small portion of the center of the die. I'm getting 13°C core delta over water temp. hot spot delta over core is like 15-17°C I think. Will check once I've remounted.
> 
> GPU side:
> View attachment 2531104
> 
> 
> Block side:
> View attachment 2531105
> 
> 
> EDIT: Resized the images



Yeah, you have a convex something there.
Either the block is convex (the very middle sticks out a few fractions of a mm more than the edges), or the block is partially warped (leading to the same outcome), or the GPU core itself is convex.

You may need to do a VERY minor sanding of the block and see if that helps.


----------



## joshpdemesa

Falkentyne said:


> Yeah you have a convex something there.
> Either the block is convex (very middle sticks out a few fractions of a mm more than the edges), or (the block is partially warped, leading to the same outcome), or the GPU core is convex itself.
> 
> You may need to do a VERY minor sanding of the block and see if that helps.


I tried with another identical block and it seems to have reduced the delta. But then I also replaced the pump with a dual d5 so I cannot conclude since 2 variables have changed. I’ll try to turn the other pump off and let you know.

How do I make sure I lap the ‘peak’ (if that’s the correct term) of the convex plate/die first? If you know any guides please let me know.

If I lap the block, should I also lap the mounting hole standoffs since I reduced the plate’s height?

And if I lap the GPU die, which is the last thing I’m gonna do, what grit of sandpaper is recommended? 2000 grit?


----------



## DirDir

joshpdemesa said:


> I tried with another identical block and it seems to have reduced the delta. But then I also replaced the pump with a dual d5 so I cannot conclude since 2 variables have changed. I’ll try to turn the other pump off and let you know.
> 
> How do I make sure I lap the ‘peak’ (if that’s the correct term) of the convex plate/die first? If you know any guides please let me know.
> 
> If I lap the block, should I also lap the mounting hole standoffs since I reduced the plate’s height?
> 
> And if I lap the GPU die, which is the last thing I’m gonna do, what grit of sandpaper is recommended? 2000 grit?


----------



## joshpdemesa

0451 said:


> RMA card paste job with stock AIO. It looks like the paste dried before the AIO was installed. There was virtually no contact.
> View attachment 2531226
> 
> 
> My paste job:
> View attachment 2531227


Your paste job looks like there was more contact, which is great. Do you have a comparison of temps from before?


----------



## joshpdemesa

DirDir said:


>


Thank you for this. Yes I always watch Luumi’s videos. Really helpful. Guess I’ll need to actually lap the die then.


----------



## DirDir

joshpdemesa said:


> Thank you for this. Yes I always watch Luumi’s videos. Really helpful. Guess I’ll need to actually lap the die then.


Hmm, well, like you said, do it last. I just lapped my CPU waterblock; it took about 1-2 hours.
I started at P1500... wanted to kill myself after 20 min, so I went down to P400. After about 15 min I had a 5mm x 5mm patch left and started moving back up to P800, then P1200.
If I ever do a GPU, starting at P800 and moving up from there sounds good.


----------



## Sheyster

GRABibus said:


> my fps don’t increase, and I knew they wouldn’t.
> My aim was just to show to @newls1 that 1,1V is safe, as he was worrying about this.


My approach to gaming with the Strix (air cooled) has been to lock 1.0V at 2100 MHz. Sure, it drops a few bins as it heats up. The upside: not a single game crashes! One click and done for all games!


----------



## GRABibus

Sheyster said:


> My approach with gaming with the Strix (air cooled) has been to lock 1.0v at 2100 MHz. Sure, it drops a few bins as it heats up. The upside: Not a single game crashes! One click and done for all game!


[email protected] is crashing sometimes for me.
[email protected] is "ok"


----------



## CZonin

Been thinking about repasting my TUF 3090 with some Kryonaut to see if I can drop a few degrees and figured I might as well replace the pads while I'm at it. What's the latest recommendation for this card in terms of brand and sizing for the pads? Also, from what I've read, in the US my warranty shouldn't be void as long as there's no damage to the card. Is that right?


----------



## DirDir

KedarWolf said:


> I freed up nearly 15GB of my Google Drive in the last few days. I'm going to make completely stripped benching Windows 10 O/S's with most services removed and stripped of anything not needed for benching.
> 
> Both 32 bit (I had requests for it) and 64 bit.
> 
> There will be no printing, Wi-Fi, Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.
> 
> Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.
> 
> Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.
> 
> Some might break their existing scores for Port Royal, Cinebench and CPU-Z etc.
> 
> I'll start on it and test them in VMWare tonight when I'm home from work.


Going to make some runs later today! Any tips and tricks, mate?


----------



## Panchovix

DirDir said:


> Going to make runs later today! Any tips and triks mate?


I'm not OP, but I did install that clean ISO for benchmarks (on my 3080 + 5800X) and so far it has been amazing; I've been getting better scores in all of the 3DMark tests than on my other OS.

For example, I set my best Port Royal score so far with my 3080 using that OS: I scored 13 150 in Port Royal.

KedarWolf added some instructions inside the file that you have to follow after installing the OS. (I used a Windows To Go installation on an HDD to have a portable benching OS, though it takes ages to load lmao.) I still have to test Unigine, etc.


----------



## Lobstar

Don't mind the HDR screenshot. Forza Horizon 5 built-in benchmark.


----------



## kx11

MSAA x8? damn that's a lot of pressure


----------



## DirDir

Tested KedarWolf's OS and broke 4 of my PBs! I was about 150 points higher in Port Royal on KedarWolf's OS with my 3090, so I want to say a big ty KedarWolf 

The cards I ran were my MSI 3090 Gaming Trio X (1000W BIOS) and MSI 3080 Ti Gaming Trio X (1000W BIOS).
Didn't have time to run the 3090 HOF Lab Edition or the SLI 3090s. Coming soon, I hope.

Links to all the scores and screenshots









I scored 16 113 in Port Royal


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com













I scored 22 162 in Time Spy


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com













I scored 15 514 in Port Royal


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3080 Ti x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com













I scored 21 684 in Time Spy


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3080 Ti x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## DirDir

Panchovix said:


> I'm not OP but I did install that clean ISO for benchmarks (to do on my 3080 + 5800X) and so far it has been amazing, I've getting better scores in any of the 3DMark tests than on other SO.
> 
> For example I did my "best" Port Royal Score atm with my3080 (13150 points) using that OS I scored 13 150 in Port Royal
> 
> KedarWolf added some instructions inside of the file that you have to do after installing the OS. (Though I used Windows To Go installation on a HDD, to have a portable SO for benchs basically, though it takes age to load lmao), I still have to test Unigine, etc


It was great; I'll save this OS, and I'd love to see an updated version of it!


----------



## des2k...

kx11 said:


> MSAA x8? damn that's a lot of pressure


I don't think the game is that demanding; I've been playing 4K max with 8x MSAA, vsync at 66fps. It's very low GPU usage; the most I've seen is 90% (busy forest area).


----------



## Bukkster

Falkentyne said:


> You guys seen these?
> Wonder how these compare to Gelid Extremes in softness and performance?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Amazon.com: Zezzio 14.8 W/mK Silicone Thermal Pad for Heatsink GPU CPU RAM SSD LED Cooler IC Chipset Cooling (120x120x1.5mm) : Electronics
> 
> 
> Buy Zezzio 14.8 W/mK Silicone Thermal Pad for Heatsink GPU CPU RAM SSD LED Cooler IC Chipset Cooling (120x120x1.5mm): Heatsinks - Amazon.com ✓ FREE DELIVERY possible on eligible purchases
> 
> 
> 
> www.amazon.com
> 
> 
> 
> 
> 
> Sorry guys but I'm not going to be the guinea pig here. I'll let one of you richer guys do it this time.



Any news on these thermal pads? There aren't many reviews on Amazon for the new version, but a lot of positive ones on AliExpress. Amazon won't be getting the 120x120mm ones in stock until next year, which means people have been buying them.


----------



## Falkentyne

Bukkster said:


> Any news on these thermal pads, not that many reviews on amazon for the new version but a lot of positive ones from AliExpress? Amazon won't be getting the 120x120mm's in stock until next year, which means people have been buying them.


They perform about the same as Gelid Extremes. A complete steal when they were only $20 for a 100mm x 100mm x 1.5mm pad, but then the price went up after a few weeks. At the current price, just buy the 120mm x 120mm Gelid Ultimates instead.


----------



## Herald

I repadded my 3090 GameRock OC with EC Silver on the front (the soft ones) and EC Platinum on the backplate. Although Palit suggests 0.75mm for the front, I used 1mm Silver. The best results I've seen aren't on the memory temps but on the die: I'm now at a constant 10.1-10.3°C delta between GPU and hotspot. The pads Palit was using were probably decent already, so I only got around an 8°C decrease, which is fine but nothing to write home about. I was maxing around 86°C in Cyberpunk at 4K + RT (highest temperatures recorded, besides mining); now I'm constantly below 80.

I have some Gelid Extremes lying around, but they look lower quality than the EC Platinum, so I didn't use them.

Now, coupled with my new Fractal Torrent, I can constantly stay below 65°C on the die, although that gets quite noisy (25°C ambient). For virtually dead-silent operation it hits around 71-73°C with the 500W BIOS maxed out.


----------



## Arizor

Any ideas, gents, on how to get a good thermal sensor when stuck in Australia? The only one available here seems to be the Thermaltake TF2, which gets horrid reviews. My motherboard (MSI Unify X570) doesn't have a temp header, so a simple wire plug is out.

Could some kind of water tank thermometer in the pump reservoir do the job?


----------



## jura11

@Arizor 

I would recommend getting Aquacomputer temperature sensors. If you are looking for budget options, then I would recommend Bykski or Barrow temperature sensors; Alphacool or Phobya are another option.

For a flow sensor I only recommend Aquacomputer, nothing else.

Hope this helps 

Thanks, Jura


----------



## Arizor

Thanks @jura11 , I agree that Aquacomputer is superb. Sadly no one within 10,000 miles sells them


----------



## ManniX-ITA

Arizor said:


> Thanks @jura11 , I agree that Aquacomputer is superb. Sadly no one within 10,000 miles sells them


If you don't have access to Aqua stuff, there's an alternative, even if it's a bit risky since the failure rate seems to be very high:









Corsair CL-9011110-WW Digital Fan and RGB Lighting Controller - Newegg.com


Buy CORSAIR Commander PRO, CL-9011110-WW, Digital Fan and RGB Lighting Controller with fast shipping and top-rated customer service. Once you know, you Newegg!




www.newegg.com





I used one for a couple of years without issues, but many had it fail after a year.
You need to install the Corsair LINK software to update the firmware.
Then uninstall it, because it's garbage 

Then you have 4 temperature inputs and a few fan outputs (which I wouldn't trust at all; sometimes they are dead at boot, which happened a few times to me as well).
It should work with HWiNFO for monitoring, without any software installed.
Once the firmware is updated, you can optionally buy Argus Monitor to control the temperatures and fans.


----------



## Arizor

Thanks @ManniX-ITA ! I found stock of the aqua computer NEXT sensor in the uk and a friend is going to ship it across to me.


----------



## kx11




----------



## KedarWolf

kx11 said:


>


What benchmark is that?


----------



## Arizor

It's the Forza 5 benchmark @KedarWolf . Here's my results, following the same custom settings as @kx11


----------



## kx11

KedarWolf said:


> What benchmark is that?


Forza horizon 5


----------



## Arizor

Apologies, just noticed my SSAO was only set to high first time round - proper result with all things maxxed, same exact FPS regardless it seems...!


----------



## KedarWolf

Arizor said:


> Apologies, just noticed my SSAO was only set to high first time round - proper result with all things maxxed, same exact FPS regardless it seems...!
> 
> View attachment 2531879


How do I run the benchmark? I bought Game Pass for $1.


----------



## Arizor

Yeah that’s a great deal @KedarWolf . In the Graphics menu, top option “benchmark mode”.


----------



## tps3443

I haven't posted here in a little while; this is my work/school/play PC. Still loving my EVGA RTX 3090 Kingpin Hydrocopper very much!! I've had it for about 5 months now. Great GPU! When properly set up, the 3090 Kingpin block is good enough as well. My direct-die 11900K pushes this thing properly at 2560x1440.

I'm running that custom OptimusPC 3090 Kingpin full-coverage Fujipoly thermal pad, which covers the entire back PCB of the card, and two D5 pumps to a large 1080x45mm radiator. When using the XOC 1000 Kingpin BIOS, it runs extremely well.


----------



## Arizor

Awesome @tps3443 , what kind of temps do you see when gaming with that monster setup?


----------



## DirDir

tps3443 said:


> I haven’t posted here in a little while, this is my work/school/play PC. Still loving my Evga RTX 3090 Kingpin Hydrocopper very much!! I’ve had it for about 5 months now. Great GPU! When properly setup, the 3090 Kingpin block is good enough as well. My direct die 11900K pushes this thing properly at 2560x1440P.
> 
> I’m running that custom OptimusPC 3090 Kingpin full coverage Fujipoly thermal pad, it covers the entire back PCB of this card, and two D5 pumps to a large 1,080x45MM radiator. When using the XOC 1000 Kingpin bios, it runs extremely well.


Looks great!


----------



## KedarWolf

Arizor said:


> Apologies, just noticed my SSAO was only set to high first time round - proper result with all things maxxed, same exact FPS regardless it seems...!
> 
> View attachment 2531879


I can't run 4K; my screen is an ultrawide 3840x1080. I can do 1080p or 3840x1080, which is actually slightly more total pixels than 1440p:

3840x1080 = 4,147,200 and 2560x1440 = 3,686,400
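The pixel arithmetic works out as follows, and also gives the ratio directly:

```python
# Pixel counts for the two resolutions being compared.
ultrawide = 3840 * 1080
qhd = 2560 * 1440
print(ultrawide, qhd, ultrawide / qhd)  # the ultrawide is 12.5% more pixels than 1440p
```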


----------



## Arizor

If anyone is looking for the next ray tracing extravaganza, Bright Memory Infinite is gorgeous and pretty fun too. You get it free if you buy the prequel for ten bucks Bright Memory: Infinite on Steam


----------



## kx11

Arizor said:


> If anyone is looking for the next ray tracing extravaganza, Bright Memory Infinite is gorgeous and pretty fun too. You get it free if you buy the prequel for ten bucks Bright Memory: Infinite on Steam


Just finished it, looks stunning and runs pretty good maxed out

short game though like 1hr 30min, it's free if you bought the original game


----------



## ArcticZero

BF2042 early access, no updated drivers yet. Getting anywhere from 40-60 FPS on 3440x1440 Ultra, Ray Traced AO off since I'm not really a fan of Ambient Occlusion, ~5ish more FPS on lowest. DLSS appears to do nothing to help performance mainly because it's CPU bottlenecked. Only have it maxing 8 logical cores on my 5950x. GPU usage hovers at around 50-60%.

GPU is set to 2100mhz, 1.075v, +800 memory.

Seems the 2042 maps are a lot heavier than the Portal BF3/BC2 maps though, where I'm pegged at 100+fps constantly and it's butter smooth.


----------



## GRABibus

ArcticZero said:


> BF2042 early access, no updated drivers yet. Getting anywhere from 40-60 FPS on 3440x1440 Ultra, Ray Traced AO off since I'm not really a fan of Ambient Occlusion, ~5ish more FPS on lowest. DLSS appears to do nothing to help performance mainly because it's CPU bottlenecked. Only have it maxing 8 logical cores on my 5950x. GPU usage hovers at around 50-60%.
> 
> GPU is set to 2100mhz, 1.075v, +800 memory.
> 
> Seems the 2042 maps are a lot heavier than the Portal BF3/BC2 maps though, where I'm pegged at 100+fps constantly and it's butter smooth.


Is there any config.ini file in the game folder where you could set the number of threads used by the game ?


----------



## J7SC

Arizor said:


> If anyone is looking for the next ray tracing extravaganza, Bright Memory Infinite is gorgeous and pretty fun too. You get it free if you buy the prequel for ten bucks Bright Memory: Infinite on Steam





kx11 said:


> Just finished it, looks stunning and runs pretty good maxed out
> 
> short game though like 1hr 30min, it's free if you bought the original game


...gonna have to try that out !


----------



## tps3443

What do you guys think of Assassins creed Valhalla? I wasn’t that impressed with the graphics. They seemed weak in some areas. The game was also very easy to run. I felt like I was playing a game from 2013 at times. It has those moments where the graphics do look great though.


----------



## Lobstar

ArcticZero said:


> BF2042 early access, no updated drivers yet. Getting anywhere from 40-60 FPS on 3440x1440 Ultra, Ray Traced AO off since I'm not really a fan of Ambient Occlusion, ~5ish more FPS on lowest. DLSS appears to do nothing to help performance mainly because it's CPU bottlenecked. Only have it maxing 8 logical cores on my 5950x. GPU usage hovers at around 50-60%.
> 
> GPU is set to 2100mhz, 1.075v, +800 memory.
> 
> Seems the 2042 maps are a lot heavier than the Portal BF3/BC2 maps though, where I'm pegged at 100+fps constantly and it's butter smooth.


Weird, I'm not having this issue. 75 FPS with +0 core +1448 mem. 3440x1440, everything maxed, no DLSS.


----------



## des2k...

anybody tried the RTX mod on Forza 5 ?


----------



## Arizor

Yep just tried it @des2k... and came here to post about it  - looks great.


----------



## joshpdemesa

KedarWolf said:


> Win10BenchISO.zip
> 
> 
> 
> 
> 
> 
> 
> drive.google.com
> 
> 
> 
> 
> 
> Stripped Windows 10, lots of services disabled and all bloatware removed.
> 
> Tested as working in VMWare.
> 
> There will be no printing, no audio no Wi-Fi, no Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.
> 
> Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.
> 
> Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.
> 
> READ the README.txt in the .zip file, a few more things to disable with Autoruns and you need to install your chipset drivers and use the included NVCleanstall_1.10.0.exe to install Nvidia drivers.
> 
> Burn the ISO with the included RUFUS and can only be used as a clean Windows install.


Thanks a lot man! Reached 16151 on Port Royal.


----------



## ArcticZero

Lobstar said:


> Weird, I'm not having this issue. 75 FPS with +0 core +1448 mem. 3440x1440, everything maxed, no DLSS.


 Strangely enough it seems to have resolved itself as well. I didn't touch anything and a few games later I'm getting exactly the same performance now.


----------



## gfunkernaught

joshpdemesa said:


> Thanks a lot man! Reached 16151 on Port Royal.
> View attachment 2532231


Can that ISO be run in PE? Maybe preload/slipstream the drivers? That'd be cool.


----------



## gfunkernaught

ATTN Trio owners with EK Blocks:








EK-Quantum Vector TRIO RTX 3080/3090 Active Backplate D-RGB - Plexi


EK-Quantum Vector Trio RTX 3080/3090 Active Backplate D-RGB - Plexi is a cutting-edge addition to the EK® Quantum Line. It is made to complement the existing EK-Quantum Vector Trio RTX 3080/3090 water blocks and actively cool the backside of all MSI® NVIDIA® GeForce RTX™ 3080 and 3090 Trio GPUs...




www.ekwb.com




Active backplate is coming! Finally.


----------



## DirDir

Lobstar said:


> Weird, I'm not having this issue. 75 FPS with +0 core +1448 mem. 3440x1440, everything maxed, no DLSS.





joshpdemesa said:


> Thanks a lot man! Reached 16151 on Port Royal.
> View attachment 2532231


MF..... Naaa gj mate nice score and nice setup!


----------



## Roacoe717

I am getting the ASUS ROG Strix OC on Tuesday. What kind of score in Time Spy and Port Royal is good?


----------



## geriatricpollywog

Roacoe717 said:


> I am getting the asus rog strix oc on tuesday what kind of score in timespy and port royal is good.


15,000 in Port Royal for an air-cooled overclocked card with no ReBar enabled. If the memory can do +1200 or more, you have a good card.


----------



## joshpdemesa

DirDir said:


> MF..... Naaa gj mate nice score and nice setup!


Thank you! I will bench Superposition next 😊


----------



## Arizor

Gents, I've just installed the Aqua Computer NEXT flow sensor, nice solid piece of hardware.

However, the flow is showing 115 l/h for my EK D5 pump; I'm running at 100%. Is that normal? Seems very low.

I'm running 2 x 360mm rads (HWLabs GTR / EKWB PE), GPU block (Strix Optimus) and CPU block (AMD optimus).


----------



## Arizor

Quick update to the above: After a few hours of running, flow rate is up to 128 l/h (I assume air bubbles slowly moving out and water heating up helps).

Also, just for folks' information if they're looking at an Optimus block: after an hour of solid Forza 5, water temp is maxxing at 32 and GPU core is maxxing at 39/40C. Happy with that.
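For context on whether ~128 l/h is "enough": the coolant's temperature rise across the loop is dT = P / (mass flow × specific heat). A rough sketch, assuming 500 W of combined CPU+GPU heat (a guess for illustration, not a measurement from the post):

```python
# Coolant temperature rise across the loop at a given flow rate.
heat_w = 500                        # assumed combined CPU+GPU heat load
flow_lph = 128                      # measured flow from the post above
c_water = 4186                      # J/(kg*K), specific heat of water
mass_flow_kgs = flow_lph / 3600.0   # l/h -> kg/s (~1 kg per litre of water)
delta_t = heat_w / (mass_flow_kgs * c_water)
print(f"~{delta_t:.1f} C rise across the loop")
```

A rise of only a few degrees like this is why a flow rate in this range is generally fine for a dual-rad loop.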


----------



## ManniX-ITA

Arizor said:


> Quick update to the above: After a few hours of running, flow rate is up to 128 l/h (I assume air bubbles slowly moving out and water heating up helps).


Considering the 2 rads it's pretty good.


----------



## Arizor

Thanks @ManniX-ITA , good to know!


----------



## joshpdemesa

gfunkernaught said:


> Can that ISO be run in PE? Maybe preload/slipstream the drivers? That'd be cool.


I have no idea about this, sorry; I'm a total noob. I just installed it on my 3rd SSD. I guess you'll have to ask @KedarWolf.


----------



## yzonker

Taking advantage of a little cool morning air. Water was still 21-22C though, so nothing sub-zero by any means. Still personal best, but mem still wouldn't go past +850. 









Result not found







www.3dmark.com













I scored 20 544 in Time Spy


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## DirDir

yzonker said:


> Taking advantage of a little cool morning air. Water was still 21-22C though, so nothing sub-zero by any means. Still personal best, but mem still wouldn't go past +850.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result not found
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 544 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


Nice! Don't worry about your mem; the core clock is where the score is!


----------



## DirDir

Need a bit of help.
Did a lot of benchmarks on my Galax card yesterday for the first time and ran into a lot of problems.

This is my test bench setup:
Intel Core i9-10900KF, 5.5GHz all cores
MEG Z590 ACE
32GB G.Skill DDR4-4133 14-14-14-28
Kingston 500GB M.2
be quiet! 1500W Dark Power Pro, OC mode on
Custom water loop set to 8°C, but it can go lower
Galax 3090, all thermal pads 14.8 W/mK++, liquid metal, 1000W Galax BIOS

This is my first run, cc+0 mc+0, just installed the 472.12 drivers. SCORE 14 279








I scored 14 279 in Port Royal


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com




All good in my eyes so far, a decent starting score. At this point I think I fixed all the **** in the Nvidia Control Panel.

2nd completed run, score 15381, cc+163 mc+728. Now I start to figure out that something is wrong; the score is really low. The GPU is boosting to 2175 MHz and stays there..
I scored 15 381 in Port Royal

3rd completed run, score 15612, cc+170 mc+1283. Okay, now I know something is wrong; I am close to the card's limit now. The coil whine is, ya, holy f...








I scored 15 612 in Port Royal


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





4th completed run, score 15 691, cc+180 mc+1337... Ya, the score is **** even though it boosts to an average clock of 2,206 MHz and stays there... Well, at this point I know the card is working okay; not a great card, but still okay for what it is. So why is the score so low? From my experience I'm expecting just over 16k. Am I wrong?

Well, I start looking around in Windows to see if there is something I've done that's not right. All looks good until I open GPU-Z and see that Resizable BAR is disabled! Great, that's a simple fix; I start looking around for a BIOS that is 1000W with Resizable BAR... Well, I can't find any. Hmm, but there's a patch for it! Great, download it and try to install it, and it tells me I have incompatible components... WAIT WHAT?? Are you ****ing kidding me, I have an official 1000W BIOS and the official patch from the Galax home page... incompatible my ass... I of course try to reinstall a new BIOS and download the patch again, and of course the same **** over and over and over... I guess I did this for about 2 hours until I gave up. Then a new idea pops up: why not just install the KP 1000W BIOS! Boom bang, and now I have a Galax card using a KP BIOS! Aka the 3090 Romeo and Juliet version.

Great, reinstall the Nvidia drivers! This time the latest, 496.49. Double-checked that ReBar was enabled and did all the tweaks in the Nvidia Control Panel, yada yada yada.

1st completed run, score 15432, cc+170 mc+1283
Clock frequency 2,220 MHz (1,395 MHz)
Average clock frequency 2,176 MHz
Memory clock frequency 1,379 MHz (1,219 MHz)
Average memory clock frequency 1,379 MHz
I scored 15 432 in Port Royal

2nd completed run, score 15569, cc+210 mc+1283. This was the top cc on the KP BIOS. Still a bloody low score..
Clock frequency 2,235 MHz (1,395 MHz)
Average clock frequency 2,215 MHz
Memory clock frequency 1,379 MHz (1,219 MHz)
Average memory clock frequency 1,344 MHz








I scored 15 569 in Port Royal


Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com




Even lower than when I had Resizable BAR disabled, *** is happening! Okay, my conclusion was that it's all the Nvidia 496.49 driver's fault, so reinstall 472.12 and go again I guess....


1st completed run, KP 1000W BIOS, Resizable BAR enabled, Nvidia 472.12 drivers, cc+210 mc+1337 BOOM


Spoiler



15640 I scored 15 640 in Port Royal


Ooo lord plz kill me, *** is happening........
Oooo, I forgot to do the Nvidia Control Panel ****..... Did all that **** yada yada yada, ran Port Royal again..... LORD PLZ PLZ PLZ GIVE ME A FUNKING NICE SCORE


Spoiler



15683 I scored 15 683 in Port Royal


at this point I want to jump from a bridge....

It's 03:00 and I can't figure out the questions I have, so I give them to you all and hope you can help me!!
First question: is there no Galax 1000W BIOS that has Resizable BAR enabled?
Second question: why doesn't the score change when I go from quality to performance in the Nvidia Control Panel, or with driver version?
Third and final question: why is my score ****?



And of course the water loop hit, I guess, about -5°C at the end, and this happened


Spoiler














 Nothing like a great day overclocking!


----------



## ManniX-ITA

DirDir said:


> And ofc the water loop did hit i guess about -5c at the end and this happened


Ouch what a bad day... which block is that?
Acrylic or plexi?

I think the scores are not great, but not that bad either.
Did you force ReBar on with nVidia Profile Inspector?
Otherwise I think it's normal that you'll miss the 16K mark.


----------



## DirDir

ManniX-ITA said:


> Ouch what a bad day... which block is that?
> Acrylic or plexi?
> 
> I think the scores are not great but not even that bad.
> Did you force ReBar enabled with the nVidia Profile Inspector?
> Otherwise I think it's normal you'll miss the 16K mark.


It's a bad score, just look at this and compare. Result 

It's a Bykski Full Coverage "Rauf" Edition! And it should be acrylic.
I might be lucky and it's only the plate inside that has cracked; that one felt like it was plexi the last time I pulled it apart. Hmmmm!

Hmm, no, I have never forced ReBar on... is that something you always need to do?
And where do I find it in nvidiaProfileInspector? I'm looking now; I might be blind, but I can't find it!

Ty ManniX-ITA


----------



## joshpdemesa

DirDir said:


> Need a bit of help.
> I did a lot of benchmarks on my Galax card yesterday, for the first time, and ran into a lot of problems.
> 
> This is my test bench setup:
> Intel Core i9-10900KF, 5.5 GHz all cores
> MEG Z590 ACE
> 32 GB G.Skill DDR4-4133 14-14-14-28
> Kingston 500GB M.2
> be quiet! 1500W Dark Power Pro, OC mode on
> Custom water loop set to 8°C, but it can go lower
> Galax 3090, all thermal pads 14.8 W/mK+, liquid metal, 1000W Galax BIOS
> 
> This is my first run, cc+0 mc+0, just installed drivers 472.12. SCORE: 14 279
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 279 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> All good in my eyes so far, a good starting score. At this point I think I'd fixed all the settings in the NVIDIA Control Panel.
> 
> 2nd completed run, score 15381, cc+163 mc+728. Now I start to figure out that something is wrong; the score is really low. The GPU is boosting to 2175 MHz and stays there..
> I scored 15 381 in Port Royal
> 
> 3rd completed run, score 15612, cc+170 mc+1283. Okay, now I know something is wrong; I'm close to the limit on the card now. And the coil whine is... holy f...
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 612 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 4th completed run, score 15 691, cc+180+ mc+1337+... Yeah, the score is bad even though it boosts to an average clock frequency of 2,206 MHz and stays there... Well, at this point I know the card is working okay; not a great card, but still okay for what it is. So why is the score so low? From experience I'm expecting just over 16K, am I wrong? Well, I started looking around Windows to see if there was something I'd done that wasn't right. All looked good until I opened GPU-Z and saw Resizable BAR disabled! Great, that's a simple fix. I started looking around for a 1000W BIOS with Resizable BAR.... Well, I couldn't find any. Hmm, but there's a patch for it! Great, download it, try to install it, and it tells me I have incompatible components... WAIT, WHAT?? Are you kidding me, I have the official 1000W BIOS and the official patch from the Galax home page... incompatible my ass.... I of course tried reinstalling a fresh BIOS and re-downloading the patch, and of course got the same thing over and over and over... I guess I fought with this for about 2 hours until I gave up. Then a new idea popped up: why not just install the KP 1000W BIOS! Boom, bang, and now I have a Galax card running a KP BIOS! Aka the 3090 Romeo and Juliet edition.
> 
> Great, reinstall the NVIDIA drivers! This time to the latest, 496.49. Double-checked that ReBar was enabled and did all the tweaks in the NVIDIA Control Panel, yada yada yada.
> 
> 1st completed run, score 15432, cc+170 mc+1283
> Clock frequency 2,220 MHz (1,395 MHz)
> Average clock frequency 2,176 MHz
> Memory clock frequency 1,379 MHz (1,219 MHz)
> Average memory clock frequency 1,379 MHz
> I scored 15 432 in Port Royal
> 
> 2nd completed run, score 15569, cc+210 mc+1283. This was the top core clock on the KP BIOS. Still a bloody low score..
> Clock frequency 2,235 MHz (1,395 MHz)
> Average clock frequency 2,215 MHz
> Memory clock frequency 1,379 MHz (1,219 MHz)
> Average memory clock frequency 1,344 MHz
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 569 in Port Royal
> 
> 
> Intel Core i9-10900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> Even lower than when I had Resizable BAR disabled, what is happening! Okay, my conclusion was that it's all driver 496.49's fault, so reinstall 472.12 and go again, I guess....
> 
> 
> 1st completed run: KP 1000W BIOS, Resizable BAR enabled, NVIDIA 472.12 drivers, cc+210 mc+1337. BOOM
> 
> 
> Spoiler
> 
> 
> 
> I scored 15 640 in Port Royal
> 
> 
> Oh lord, please kill me, what is happening........
> Oooh, I forgot to do the NVIDIA Control Panel stuff..... Did all that, yada yada yada, ran Port Royal again..... LORD PLEASE PLEASE PLEASE GIVE ME A NICE SCORE
> 
> 
> Spoiler
> 
> 
> 
> I scored 15 683 in Port Royal
> 
> 
> At this point I want to jump off a bridge....
> 
> It's 03:00 and I can't figure out the questions I have, so I'm giving them to you all and hoping you can help me!!
> First question: is there really no Galax 1000W BIOS that has Resizable BAR enabled?
> Second question: why doesn't the score change when I go from quality to performance in the NVIDIA Control Panel, or between driver versions?
> Third and final question: why is my score so bad?
> 
> 
> 
> And of course the water loop hit, I guess, about -5°C at the end, and this happened
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2532463
> 
> 
> 
> Nothing like a great day overclocking!


Ouch! How did that happen? Sudden temp change?

I was able to reach 16443 just now @ 6.4°C water; max GPU temp was 30°C under heavy load. I can't lower the temperature any further without condensation, so I think I'm done benching Port Royal on water. My whole system was consuming a lot of power with a single card.
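For anyone chasing sub-ambient water temps, the condensation limit is set by the dew point of the room air. A minimal sketch using the Magnus approximation (the coefficients are the standard published constants; the room temperature and humidity below are made-up example values):

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in degC via the Magnus formula."""
    b, c = 17.62, 243.12  # Magnus coefficients, valid roughly -45..60 degC
    gamma = math.log(rel_humidity_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

# Example: a 25 degC room at 50% RH condenses on any surface below ~13.9 degC,
# so a 6.4 degC loop is already well past the safe limit without insulation.
print(round(dew_point_c(25.0, 50.0), 1))  # -> 13.9
```

Higher humidity pushes the dew point up toward the air temperature, which is why benching sub-ambient in winter (dry air) is so much easier.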


Spoiler


----------



## ManniX-ITA

DirDir said:


> Hmm no force the rebar enabled i have never done....is that something you always need to do?
> And wear do i find it in nvidiaProfileInspector looking now i might be blind but can't find it!


I don't have enough experience, but my understanding is that even with low temps and an XOC BIOS you'll still run into power limitations.
So to get better results than this, you need either a Kingpin card, which is fully unlocked, or an ASUS Strix, which is less locked.
Plus shunt-modding the resistors.

Those 200-300 points you're missing are probably just because you need to force ReBar:









Here's How You Can Enable Resizable BAR Support in Any Game via NVIDIA Inspector


While only sixteen games have been officially whitelisted for Resizable BAR support by NVIDIA, there's a procedure to manually enable any.




wccftech.com


----------



## DirDir

joshpdemesa said:


> Ouch! How did that happen? sudden temp change?
> 
> I was able to reach 16443 just now @ 6.4° C water. max GPU temp was 30°C under heavy load. I can't lower the temperature without condensation. I think I'm done benching Port Royal on water. My whole system was consuming a lot of power with a single card .
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2532485


Don't really know! Well, stuff happens. Now the CNC boys at work have something to do.
Wow, grats, that's a great score mate!
Well, I think Port Royal is easy on the power; Time Spy pulls more, doesn't it?


----------



## joshpdemesa

DirDir said:


> Don't really know! Well, stuff happens. Now the CNC boys at work have something to do.
> Wow, grats, that's a great score mate!
> Well, I think Port Royal is easy on the power; Time Spy pulls more, doesn't it?


I actually don't know. I haven't bothered trying Time Spy yet; I only have a 10700K. 😅 Still waiting for my 12900K before I bench TS.


----------



## DirDir

ManniX-ITA said:


> I don't have enough experience but my understanding is that even with low temp and XOC bios you'll still incur in power limitations.
> So to get better results than this you need either a Kingping card, which is full unlocked or an ASUS Strix, which is less locked.
> Plus shunt modding the resistors.
> 
> Those 200-300 points you miss are probably just cause you need to force ReBar:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Here's How You Can Enable Resizable BAR Support in Any Game via NVIDIA Inspector
> 
> 
> While only sixteen games have been officially whitelisted for Resizable BAR support by NVIDIA, there's a procedure to manually enable any.
> 
> 
> 
> 
> wccftech.com


99.99% this card is not limited at all. It was one of Rauf's OC cards, and I don't think Galax would send him anything with restrictions on it. 
But sure, installing an EVC2SX on it might be a good idea; I have one, but zero clue how it works.

Even reading that wonderful link/page, I can't find it 🤯


----------



## DirDir

joshpdemesa said:


> I actually don't know. I didn't bother trying Time Spy yet. I only have a 10700K. 😅 still waiting for my 12900K before I bench TS.


Why? There's still a ranking for the 10700K in Time Spy with a 3090. It's a bit different when you need to OC the memory and CPU as well.
This is my old rig's score: I scored 20 182 in Time Spy


----------



## ManniX-ITA

DirDir said:


> 99.99% this card is not limited at all. This card was one of Rauf oc cards and i don't think Galax will send him any thing that have restrictions on it.
> But sure installing a EVC2SX to it might be a good idea and i have one but have 0 clues how it work.


I thought it was a normal Galax... then it's quite disappointing.
I don't think using the EVC2SX is very complex; I've read about it and it looked quite simple. I was thinking about ordering one...
But you need to solder it to the 3090, and that's not really straightforward and easy.



DirDir said:


> Even reading that wonderful link/page i can't find it 🤯


You can't find what? profile inspector?


----------



## DirDir

ManniX-ITA said:


> Thought it was a normal Galax... then it's quite disappointing.
> I don't think using the EVC2SX is very complex, I've read about it and looked quite simple. I was thinking about ordering one...
> But you need to solder it to the 3090 and it's not really straightforward and easy.
> 
> 
> 
> You can't find what? profile inspector?


Okay, well I have one and it looks complex as f***... Just looking at it now I'm sweating... cut the red wire or the green... me peon, electricity is magic, ugh


Spoiler













Well, soldering is the easy part for me. Oh well, I'll just use a glue gun! I've done it for years on shunt mods or whatever; it 100% works and you get to keep your "warranty".

The "codes" These are 0x000F00BA and 0x000F00BB and, a bit further down, 0x000F00FF. In the dropdown menu, just activate them by switching to the flag which mentions the officially supported titles, then apply the changes.


----------



## ManniX-ITA

DirDir said:


> The "codes" These are 0x000F00BA and 0x000F00BB and, a bit further down, 0x000F00FF. In the dropdown menu, just activate them by switching to the flag which mentions the officially supported titles, then apply the changes.


Did you click show unknown settings on top bar?


----------



## DirDir

ManniX-ITA said:


> Did you click show unknown settings on top bar?
> 
> View attachment 2532509


No, I did not. Me peon! I've got it now; found an easy guide to it, made 2 days ago in the 3080 Ti Owner's Club.


----------



## ManniX-ITA

DirDir said:


> I got it now found a easy guide to it! Made 2 days ago on 3080ti owners club


Oh nice, that's what I wanted to re-post to you but couldn't find it


----------



## DirDir

ManniX-ITA said:


> Oh nice, that's what I wanted to re-post to you but couldn't find it


Thanks, thanks man! Well, now that that's done, what else might I have missed to gain higher points? You're an old crazy guy that's been around for a while; what wisdom might you bestow on my peon brain?
😇


----------



## joshpdemesa

DirDir said:


> No i did not. ME PEON! I got it now found a easy guide to it! Made 2 days ago on 3080ti owners club


With ReBar enabled you might be able to reach more than 16151! That's a nice card you have there.


----------



## DirDir

joshpdemesa said:


> With ReBar enabled you might be able to reach more than 16151! That's a nice card you have there.


Well, the 16151 was set on my MSI Gaming X 3090; now I need to redo that too, to see what I can get... That card is just a normal 3090 Gaming X that I guess I got lucky on. I have a Suprim X card as well, but it can't even go over 2000 MHz, and the memory dies close to a 900 MHz OC.. After that card I told myself I'm changing brands: no more MSI parts.


----------



## ManniX-ITA

DirDir said:


> You are a Old crazy guy that have be around for a wile what wisdom might you bestow my peon brain?


How much did you score with ReBar on? Were you able to bench again?
Considering the type of card and the cooling... either the card is a bit fried or something else is off.
Are you using a clean install for benching?
Kedar shared a clean image a while ago.


----------



## J7SC

...finally got all the components for the TT Core P8 dual work-play system for the home office hooked up (3950X, 6900XT + 5950X, 3090 Strix)....very complex project, what with two of everything, ie. start / reset, USB etc. Quick peek below, before final RGB clean-up and better pics...


----------



## Arizor

GOOD LORD.


----------



## J7SC

Arizor said:


> GOOD LORD.


...that was the _second of two dual_ mobo builds; streamlining operations and footprint was the goal... when I conceived of these builds, I forgot to conceive of all the tricky bits I was going to run into


----------



## Arizor

Devs was a documentary about @J7SC , it seems.


----------



## J7SC

Arizor said:


> Devs was a documentary about @J7SC , it seems.


...felt more like this


----------



## DirDir

ManniX-ITA said:


> How much did you score with ReBar on? Were you able to bench again?
> Considering the type of card and the cooling... either the card is a bit fried or something else is off.
> Are you using a clean install for benching?
> Kedar shared a while ago a clean image.


Haven't tested it yet; going to later today, I think. 
Well, I hope it's not fried. I need to have a look at the EVC2SX and the PMD (Power Measurement Device) to see what's happening. I need a good tutorial on where to solder the EVC2SX, and then I'll do it.
Yes, I'm using the Windows image that Kedar made. It works great! 

Any idea why the NVIDIA Control Panel behaved like it did? Going from quality to performance did not change anything.




J7SC said:


> ...finally got all the components for the TT Core P8 dual work-play system for the home office hooked up (3950X, 6900XT + 5950X, 3090 Strix)....very complex project, what with two of everything, ie. start / reset, USB etc. Quick peek below, before final RGB clean-up and better pics...
> 
> View attachment 2532559


Looks super nice! Got any project pics?


----------



## ManniX-ITA

DirDir said:


> Any idea why the NVIDIA Control Panel behaved like it did? Going from quality to performance did not change anything.


It doesn't matter for 3DMark.
I guess it's enforcing its own settings to keep a level playing field.



DirDir said:


> Well hope it not fried, we I need to have a look on the EVC2SX and PMD Power Measurement Device to see whats happening. Need a good tutorial on wear to "soder" the evc2sx and i will do it.


A while ago I saw the video from der8auer:






I didn't get the EVC2SX because it's not going to fit with the Optimus block.
I think maybe there's a chance with a flat connector, but I have no idea if/how it can be used.


----------



## J7SC

DirDir said:


> (...)
> Looks super nice have any project pic?


...Thanks ...tons of 'raw' pics I took for the build log; I will try to update that starting next week...

In the meantime, some GPU-prep and mounting detail...Gelid OC for both dies, TGP thermal putty for VRAM, Thermalright pads w/MX5 for the rest. On the back, the biggest heatsinks I could fit, along with an extra fan though there's good downdraft from 4x InWin 120s up top. Temps are superb, also per earlier posts. LinkUp PCIe 4.0 16x Risers work extremely well ('transparent' on performance), 3DM before-and-after tests show no loss on either card (one actually showed a gain, but within margin of error).


----------



## long2905

Update on my water-cooled Inno3D iChill 3090. Previously I swapped from an Alphacool block to a Bykski full active-backplate cooling block, which resulted in a decent temp drop under mining workloads: 49°C core, 84°C mem on the stock Bykski 1.8mm pads. Without the Gelid pads yet, I first tried putting some MX-4 paste in between the mem-pad-block contact, and that dropped temps a bit further, to 49°C core, 80°C mem. That looks okay, and the max stable memory OC I can get is +1150.

Yesterday I finally got the remaining components for another go at this: Mayhems clear coolant, 1.5mm Gelid Ultimate pads for the front and 2mm for the back (since I also used the 1.5mm pads on another card and ran out for the back). It's a bit harder to screw down the active backplate, but everything still fits and the results are amazing. I took the chance to try a different paste as well, Thermalright TF8 (it's a bit cheaper and performs very similarly to TFX). 

After 1 day my temps are 41°C core and 62°C mem during mining at a +1300 mem OC (any more and the system crashes or artifacts).

So yeah, if you use a Bykski block like I do, get Gelid Ultimate/Extreme 1.5mm pads for your memory. It works wonders!


----------



## Roacoe717

0451 said:


> 15,000 PR for an air-cooled overclocked card with no RE-Bar enabled. If memory can do +1200 or more you have a good card.


Dang, I only got 14250 in Port Royal. I only had 2 hours to mess with it, though. In MSI Afterburner I could only do +98 GPU and +600 memory. Any suggestions?


----------



## marti69

DirDir said:


> Need a bit of help.
> (...)
> All good in my eyes so far a good start score at this time i think i fixt all the **** in nvida controll panel
> (...)


What NVIDIA Control Panel tweaks do you use to get a better score?


----------



## ManniX-ITA

Roacoe717 said:


> Dang, I only got 14250 in Port Royal. I only had 2 hours to mess with it, though. In MSI Afterburner I could only do +98 GPU and +600 memory. Any suggestions?



Force ReBar enabled for 3DMark.
Use a custom curve in Afterburner with a voltage lock.
Re-paste the GPU die and replace the thermal pads (or use thermal putty); you should easily gain +25 on the GPU and +250/500 on the memory.


----------



## KedarWolf

marti69 said:


> whats the nvidia controle panel tweaks do you use to have better score?


And for Port Royal, try driver 466.63 installed with NVCleanstall.

After much testing I get the best results with the settings below; click the Spoiler.


Spoiler: Nvidia Settings

*Turn off HDMI Audio.*

*Select only your DP monitor.*


----------



## J7SC

I'm tidying up the final look for the dual mobo 'Ravens' nest' build, but everything else is complete, including the final config cooling setups...for the 5950X/ Strix 3090, that's 1320x63 mm rad space and multi D5s, with Phanteks blocks for the CPU and GPU. 

After letting it warm up to normal temps (23+ ambient) and running Superposition 4K, the temps are as per inset pic below. Mind you, that was with the stock r_bar vbios, not the KPE520 r_bar which would add 70 Watts of heat energy. As outlined before, I used thermal putty on the VRAM (front and back), Gelid OC for the dies, Thermalright pads for the power delivery and a big heatsink on the backplate, which also has an Arctic p8 v2 fan, in addition to four InWin 120mm downdraft fans on the topside of the case. 

I credit the thermal putty and the heatsink for a fair bit of the temp improvements, in particular for the VRAM, and I highly recommend thermal putty if you're thinking of re-padding your GPU. I don't even know where max clocks are yet with this new cooling setup, but I look forward to finding out


----------



## DirDir

KedarWolf said:


> And for Port Royal try driver 466.63 installed with NVCleaninstall.
> 
> 
> View attachment 2532944
> 
> 
> 
> 
> View attachment 2532941
> 
> 
> View attachment 2532942
> 
> 
> After much testing I get the best with the below, click on the Spoiler.
> 
> 
> Spoiler: Nvidia Settings
> 
> 
> 
> 
> View attachment 2532943
> 
> 
> View attachment 2532924
> 
> 
> View attachment 2532925
> 
> 
> View attachment 2532926
> 
> 
> View attachment 2532927
> 
> 
> View attachment 2532928
> 
> 
> *Turn off HDMI Audio.*
> 
> View attachment 2532929
> 
> 
> *Select only your DP monitor.*
> 
> View attachment 2532938
> 
> 
> 
> 
> View attachment 2532932
> 
> 
> View attachment 2532933
> 
> 
> View attachment 2532935
> 
> 
> 
> View attachment 2532936
> 
> 
> View attachment 2532937


Damn, you are nice! Thanks again @KedarWolf, you are super helpful to us all.


----------



## GRABibus

J7SC said:


> I'm tidying up the final look for the dual mobo 'Ravens' nest' build, but everything else is complete, including the final config cooling setups...for the 5950X/ Strix 3090, that's 1320x63 mm rad space and multi D5s, with Phanteks blocks for the CPU and GPU.
> 
> After letting it warm up to normal temps (23+ ambient) and running Superposition 4K, the temps are as per inset pic below. Mind you, that was with the stock r_bar vbios, not the KPE520 r_bar which would add 70 Watts of heat energy. As outlined before, I used thermal putty on the VRAM (front and back), Gelid OC for the dies, Thermalright pads for the power delivery and a big heatsink on the backplate, which also has an Arctic p8 v2 fan, in addition to four InWin 120mm downdraft fans on the topside of the case.
> 
> I credit the thermal putty and the heatsink for a fair bit of the temp improvements, in particular for the VRAM and I highly recommend thermal putty if you're thinking of re-padding your GPU. I don't even know where max clocks are yet w/ this new cooling setup, but I look forward to find out
> 
> View attachment 2532948


You are insane 😆


----------



## J7SC

GRABibus said:


> You are insane 😆


Thanks (I think) ...maybe it's the 'Cherenkov radiation blue' that's affecting my brain waves 😵 ? 

But there's a method to my madness: I need to have physically separate, unconnected-to-each-other work and play setups in my home office, and this way the dual mobo footprint is just like that of one system (mind you, one with a Sasquatch-sized footprint). The extra work I did prepping the GPUs for cooling is paying off; the run above actually had the push-pull rad fans on default (around 1200 rpm) rather than the full-tilt setting for the Arctic P12s (around 1800-1900 rpm).

Btw, I picked up a lot from this thread about custom-cooling a 3090!


----------



## Roacoe717

Best I could do for now: ASUS Strix 3090, ReBar enabled, 67°F in the house right now, MSI Afterburner +110 GPU, +791 memory.


----------



## GRABibus

Delete


----------



## GRABibus

Roacoe717 said:


> Best I could do for now, Asus Strix 3090, rebar enabled. 67f in the house right now. msi afterburner +110 gpu, +791 memory.
> 
> View attachment 2533087


Don't enable ReBar with NVIDIA Inspector in Time Spy.
It will increase the GPU score, but it will decrease the CPU score too much.
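The reason the CPU score matters so much is that the overall Time Spy score combines the two subscores as a weighted harmonic mean, so a big CPU drop drags the total down even with an unchanged GPU. A sketch, assuming the commonly cited 0.85/0.15 graphics/CPU weighting from UL's technical guide (treat the exact weights as an assumption):

```python
def timespy_overall(graphics: float, cpu: float,
                    wg: float = 0.85, wc: float = 0.15) -> float:
    """Weighted harmonic mean of the Time Spy graphics and CPU subscores."""
    return (wg + wc) / (wg / graphics + wc / cpu)

# With a 20000 graphics score, dropping the CPU score from 14000 to 12000
# costs around 600 overall points even though the GPU didn't change.
print(round(timespy_overall(20000, 14000)))  # -> 18792
print(round(timespy_overall(20000, 12000)))  # -> 18182
```

A harmonic mean punishes the weaker subscore, which is why Port Royal (GPU-only) doesn't care about the ReBar CPU penalty but Time Spy does.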


----------



## Roacoe717

GRABibus said:


> Don’t enable rebar with nvidia inspector in time spy.
> It will Increase GPU score, but will decrease CPU score too much.


Alright. I'm gonna toss the PC outside this weekend; hopefully this nice chilly PA weather will help bump those numbers up.


----------



## ManniX-ITA

Thermal putty is back in stock at Digi-Key.
Purchased another 2 cans.


----------



## HeadlessKnight

RT is brutal, man. I'd been gaming on my 3090 @ 1950 MHz, 0.9V for almost a year now and never had any crashes; all my games were stable, and none of them have RT. I recently purchased the Crysis 2/3 remasters, and Crysis 2 crashed within an hour of gaming. I had to increase the voltage to 0.918V to stop the crashing; that's almost 0.02V more than rasterized-only games need. 
I've heard Metro Exodus EE is brutal too. I have it, but I never got into it or played it seriously yet.


----------



## GRABibus

HeadlessKnight said:


> RT is brutal man. I was gaming on my 3090 @ 1950 MHz 0.9v for almost a year now, never had any crashes, all my games were stable and non of them have RT. I recently purchased Crysis 2/3 remastered and in C2M it crashed within an hour of gaming. I had to increase the voltage to 0.918v to stop the crashing, that's almost 0.02v compared to rastorized only games.
> Heared Metro Exodus EE is brutal too, I have it but I never got into it or played seriously yet.


Cold War, BFV are brutal also.


----------



## ManniX-ITA

HeadlessKnight said:


> Heared Metro Exodus EE is brutal too, I have it but I never got into it or played seriously yet.


It does have a nice benchmark tool that can be used to test stability.
RT is also brutal on memory; often an otherwise stable OC doesn't hold.


----------



## J7SC

ManniX-ITA said:


> Thermal putty is back in stock at Digi-key.
> Purchased another 2 cans


Time to replenish my 'putty pantry'...I've only got 1.5 cans left after 'the Ravens ate a lot'


----------



## yzonker

J7SC said:


> Time to replenish my 'putty pantry'...I've only got 1.5 cans left after 'the Ravens ate a lot'


How much is needed for just the mem on a card like a 3090? Is one of those 50g tubs enough?


----------



## arvinz

yzonker said:


> How much is needed for just the mem on a card like a 3090? Is one of those 50g tubs enough?


How do you guys apply this stuff? I got a can that just came in...what's the best/cleanest way? Just use your hands or is there some applicator we can use?


----------



## J7SC

yzonker said:


> How much is needed for just the mem on a card like a 3090? Is one of those 50g tubs enough?


...yes, it's plenty. I changed water blocks and did the Strix 3090 twice, and still had plenty of the 50g left over.



arvinz said:


> How do you guys apply this stuff? I got a can that just came in...what's the best/cleanest way? Just use your hands or is there some applicator we can use?


I make little balls out of it with palm rolling (after Isopropyl hand cleaning). The good part about thermal putty is that it 'conforms' to whatever space there is (within reason...).
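As a sanity check on why one 50g tub covers a card, here's a rough back-of-the-envelope estimate; the chip footprint, layer thickness, and putty density below are all my assumptions for illustration, not datasheet values:

```python
# Rough putty estimate for a double-sided-VRAM 3090.
# Assumed values: 24 GDDR6X packages of ~14x12 mm each, a ~1.5 mm putty
# layer, and a putty density of ~2.5 g/cm^3.
chips = 24
area_cm2 = 1.4 * 1.2     # assumed per-chip footprint in cm^2
thickness_cm = 0.15      # assumed putty layer in cm
density_g_cm3 = 2.5      # assumed putty density

grams = chips * area_cm2 * thickness_cm * density_g_cm3
print(f"~{grams:.0f} g for all VRAM")  # roughly 15 g, so 50 g is plenty
```

Even doubling the layer thickness for gap tolerance keeps the total well under 50g, which lines up with the two-applications-per-tub experience above.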


----------



## DirDir

ManniX-ITA said:


> Thermal putty is back in stock at Digi-key.
> Purchased another 2 cans


Thanks for the info. This is my first order of this product; I really wanted to test it out to see if it's any good or just pure BS.


----------



## yzonker

Hopefully nobody minds me posting this here too (I posted it in the 3080ti thread also). Relevant to either.

I noticed the last time I ran TS on my 3090 system with ReBar forced on that it didn't appear to hurt my CPU score very much. Just confirmed:









Result (www.3dmark.com)





I'm not 100% sure what changed. I think it may have improved when I went to reasonably well-tuned B-die RAM. My 3900X system, which has my old C-die RAM, still takes a big hit.









Result (www.3dmark.com)





It could be something else, but I'm not sure what. I don't think it's the NVIDIA driver, as I used an older driver for this recent run:









I scored 20 544 in Time Spy

AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com





Slightly lower, but not by much.


----------



## newls1

ManniX-ITA said:


> It does have a nice benchmark tool that can be used to test stability.
> RT is also overkill for memory, often a stable OC doesn't hold.


this is the damn truth!


----------



## dansi

J7SC said:


> ...yes, it's plenty. I changed w-blocks and did the Strix 3090 twice and still had plenty left over of the 50g.
> 
> 
> 
> I make little balls out of it with palm rolling (after Isopropyl hand cleaning). The good part about thermal putty is that it 'conforms' to whatever space there is (within reason...).
> View attachment 2533555


What is the difference between silicone and non-silicone putty? This TG-PP10 is silicone-based; does it really only have a 6-month shelf life?


----------



## ManniX-ITA

dansi said:


> what is difference between silicone and non-silicone putty? this TG-PP10 is silicone based, has only 6 month shelf life?


It's the base component: rubber.
There are other base components, like carbon as used in Thermalright TFX/TF8 thermal pastes, but they have different pros/cons.
The rubber/silicone is what makes it easy to mold and apply, like clay.
The 6-month shelf life is very conservative.


----------



## dansi

ManniX-ITA said:


> It's the base component: rubber.
> There are other base components, like carbon as used in Thermalright TFX/TF8 thermal pastes, but they have different pros/cons.
> The rubber/silicone is what makes it easy to mold and apply, like clay.
> The 6-month shelf life is very conservative.


do you think 50g is more than enough for 3080ti vrms chokes?
can i use it on vram even?


----------



## ManniX-ITA

dansi said:


> do you think 50g is more than enough for 3080ti vrms chokes?
> can i use it on vram even?


Quoting:



J7SC said:


> ...yes, it's plenty. I changed w-blocks and did the Strix 3090 twice and still had plenty left over of the 50g.


Yes there's plenty for both VRM and VRAM.
Before deciding to buy the Optimus block I was thinking of using it over the whole PCB with the EK block.


----------



## mickyc357

Is everyone still running 450/500w bioses on the MSI trio (air cooled) without any problems?


----------



## J7SC

ManniX-ITA said:


> (...)
> Yes there's plenty for both VRM and VRAM.
> Before deciding to buy the Optimus block I was thinking of using it over the whole PCB with the EK block.


I started out w/ 4x 50g jars and have 1 & 3/4 jars left after doing a total of 3 GPU applications (2 of which were for the double-sided VRAM 3090).

Thermal putty is a bit trickier to work with for actual application, but the rolling-into-a-ball method seems to work best, also per other long-term users. Speaking of long-term users, those who have posted on it at OCN report no degradation etc. My applications are simply still too young to make any pronouncements on that, but so far, so good.


----------



## dansi

J7SC said:


> I started out w/ 4x 50g jars and have 1 & 3/4 jars left after doing a total of 3 GPU applications (2 of which were for the double-sided VRAM 3090).
> 
> Thermal putty is a bit trickier to work with for actual application, but the rolling-into-a-ball method seems to work best, also per other long-term users. Speaking of long-term users, those who have posted on it at OCN report no degradation etc. My applications are simply still too young to make any pronouncements on that, but so far, so good.


i dont think it would degrade once put to use... that would make application unfeasible?
but for the unused portion that is exposed to air, will the silicone dry out and the putty become snow flakeys?

how big of a ball for each component? if it is too small, we can't get contact?


----------



## J7SC

dansi said:


> i dont think it would degrade once put to use... that would make application unfeasible?
> but for the unused portion that is exposed to air, will the silicone dry out and the putty become snow flakeys?
> 
> how big of a ball for each component? if it is too small, we can't get contact?


The jar comes with an inner airtight lid which makes a satisfying 'pop' when taking it out (and putting it back after use, of course).

Re. the little putty ball size (again, after cleaning hands first with isopropyl alcohol), I settled on a 1 cm diameter +/-; it might to some extent depend on the prescribed 'pad' thickness for the specific card in question, but the 1 cm worked well on both GPUs in my case. It is probably better to have just a bit too much (within reason) than too little; it will conform anyway and squeeze into nooks and crannies, almost giving a 3-D cooling contact via the sides of the VRAM chips.


----------



## geriatricpollywog

How much MSVDD and FBVDD voltage do you guys use on the Classified tool for Kingpin cards when playing with the 1000w bios on AIO/water?


----------



## SoldierRBT

0451 said:


> How much MSVDD and FBVDD voltage do you guys use on the Classified tool for Kingpin cards when playing with the 1000w bios on AIO/water?


I believe in this run I used 1.225v NVDD LLC6, 1.16v MSVDD LLC4, 1.40v FBVDD, 600W, 23C ambient. Going above these voltages was worse since I was already at 50C.









I scored 16 109 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## geriatricpollywog

SoldierRBT said:


> I believe in this run I used 1.225v NVDD LLC6, 1.16v MSVDD LLC4, 1.40v FBVDD, 600W, 23C ambient. Going above these voltages was worse since I was already at 50C.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 109 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


Thanks, my MSVDD was way low. I think 0.85 is what the tool defaults to. I was also using default LL, which is 0 or 1.


----------



## Arizor

Folks any idea what this greasy liquid is on my gpu? I took my Optimus block apart to apply Gelid extreme pads and new paste, after putting back together I noticed this “grease” as I looked through the optimus acrylic.

I’ve tried wiping with buds, using a hair dryer, doesn’t seem to matter. It seems almost like it’s coming through from the Fuji poly back pad that covers the entirety of the back, like some kind of thermal paste discharge on a hot day.

Any ideas? In the pic you can see the shiny/dark areas where the “grease” is.


----------



## KedarWolf

Win10BenchISO.zip — drive.google.com





*New and improved benching ISO, stripped, services disabled.*

Windows 10 LTSC 2021, lots of services disabled and all bloatware removed.

Tested as working in VMWare.

There will be no printing, no audio no Wi-Fi, no Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.

Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.

Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.

READ the README.txt in the .zip file, there are some more things to disable with Autoruns and you need to install your chipset drivers and use the included NVCleanstall_1.12.0.exe to install Nvidia drivers.

Burn the ISO with the included RUFUS and can only be used as a clean Windows install.


----------



## ManniX-ITA

Arizor said:


> Folks any idea what this greasy liquid is on my gpu? I took my Optimus block apart to apply Gelid extreme pads and new paste, after putting back together I noticed this “grease” as I looked through the optimus acrylic.


Is there a reason why you are replacing the Fujipoly pads with Gelid Extreme?



Arizor said:


> I’ve tried wiping with buds, using a hair dryer, doesn’t seem to matter. It seems almost like it’s coming through from the Fuji poly back pad that covers the entirety of the back, like some kind of thermal paste discharge on a hot day.


Yes, that stuff usually comes from the pads.
It means part of the chemical mix is separating.
It usually isn't an indicator of poor quality; I've seen it as well from many Laird thermal pads mounted on mainboards.
But I'm very surprised about the Fujipoly...


----------



## Arizor

Thanks @ManniX-ITA , did some research and saw it was probably that too, so it's put back together and running again now with no issues.

I wanted to replace the fujipoly with gelid just because I wasn't happy with memory temps and knew gelid extreme were a really solid brand. Got my backup fujipoly from Optimus ready if not! But so far doing intensive mem testing i've dropped temps around 8C, so happy with that.


----------



## J7SC

Arizor said:


> Thanks @ManniX-ITA , did some research and saw it was probably that too, so put together and running again now no issues.
> 
> I wanted to replace the fujipoly with gelid just because I wasn't happy with memory temps and knew gelid extreme were a really solid brand. Got my backup fujipoly from Optimus ready if not! But so far doing intensive mem testing i've dropped temps around 8C, so happy with that.


...What VRAM temps are you running, i.e. HWInfo for Superposition 4/8K or 3DM PR?


----------



## ManniX-ITA

Arizor said:


> I wanted to replace the fujipoly with gelid just because I wasn't happy with memory temps and knew gelid extreme were a really solid brand. Got my backup fujipoly from Optimus ready if not! But so far doing intensive mem testing i've dropped temps around 8C, so happy with that.


Wow... the Gelid pads are good, but in theory the Fujipoly pads are a class above.
I'm really surprised you could do better.


----------



## Arizor

@J7SC hot day here in Melbourne, water temps 38.5C here in my boiling study, memory temps max at 74C running the Superposition 8K loop.

@ManniX-ITA yeah they’re very good but I just wanted to try gelid, so far, just for comparison, running this loop on a similar day would hit 80C.

I also did the trick this time, think @J7SC tipped me to it, adding a bit of noctua thermal paste on top of the pads.


----------



## ManniX-ITA

Arizor said:


> @ManniX-ITA yeah they’re very good but I just wanted to try gelid, so far, just for comparison, running this loop on a similar day would hit 80C.


I would have expected similar if not worse performance from Gelid anyway...
Maybe I'll skip the pads entirely when I mount the Optimus block and go directly for thermal putty.


----------



## Arizor

definitely @ManniX-ITA , not really too much in it but enough for a slight temp advantage.


----------



## J7SC

Arizor said:


> @J7SC hot day here in Melbourne, water temps 38.5 here in my boiling study, memory temps max at 74C running superposition 8k loop.
> 
> @ManniX-ITA yeah they’re very good but I just wanted to try gelid, so far, just for comparison, running this loop on a similar day would hit 80C.
> 
> I also did the trick this time, think @J7SC tipped me to it, adding a bit of noctua thermal paste on top of the pads.


...my condolences, I've lived through a never-seen-before-here 39 C this summer :-( ...temps below are w/ ambient around 23C



ManniX-ITA said:


> I would have expected similar if not worse performance from Gelid anyway...
> Maybe I'll skip the pads entirely when I mount the Optimus block and go directly for thermal putty.


...I really like the thermal putty I applied on both GPUs. Apart from a big heatsink and thermal putty on the VRAM in the back as well, I'm just utilizing the stock Strix backplate (not actively w-cooled).


----------



## kryptonfly

https://www.digikey.fr/product-detail/fr/t-global-technology/TG-PP10-50/1168-TG-PP10-50-ND/6204863

Is it the exact same thermal putty reference you're talking about on the previous page? My VRAM temp is around 64°C max so far during gaming with Gelid Extreme pads, so I don't think it's useful for me, right? Surely for big chokes instead.

I don't know if it's normal, but with the XOC 1000W bios, when I try to record games the memory falls to 400 MHz and it's unplayable. With the Gigabyte stock bios it works great. Did anyone experience this trouble? Thx


----------



## J7SC

kryptonfly said:


> TG-PP10-50 t-Global Technology | Ventilateurs, gestion thermique | DigiKey
> 
> Is it the exact same ref of thermal putty you're talking about in the previous page ? (...)


yes - that's the one.


----------



## geriatricpollywog

SoldierRBT said:


> I believe in this run I used 1.225v NVDD LLC6, 1.16v MSVDD LLC4, 1.40v FBVDD, 600W, 23C ambient. Going above these voltages was worse since I was already at 50C.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 109 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


I managed this with cool water and using your settings as a starting point.


----------



## SoldierRBT

0451 said:


> I managed this with cool water and using your settings as a starting point.
> 
> View attachment 2533972
> 
> 
> View attachment 2533973


Very nice. Now try NVDD LLC4 and MSVDD LLC2 with 800KHz on both. Another trick I use is lock frequency with nvidia-smi. I see you running +225. Try 240 and if it crashes, check at what frequency and lock it 15MHz lower. If you do it right, you can gain 15MHz more in the avg coreclocks. After you find the max clocks for your silicon/temp, tweak MSVDD from 1.14 to 1.18v. Sometimes too high decreases score. 

You can also try using msi afterburner v/f curve like 1.10v +240 or +255. Your internal clocks won't be affected since you're manually applying NVVDD/MSVVDD


----------



## Lobstar

Arizor said:


> Folks any idea what this greasy liquid is on my gpu? I took my Optimus block apart ...
> 
> Any ideas? In the pic you can see the shiny/dark areas where the “grease” is.


It's from the backplate pad. I run mine vertical in my test bench case so it pools on my mobo. I also have an optimus block on my 3090 (FTW3U).


----------



## Arizor

Thanks @Lobstar , yeah @ManniX-ITA confirmed it too, all installed again with no issue. Memory temps are a solid amount lower and using grizzly extreme has also lowered gpu core by 1-2 C.


----------



## kryptonfly

SoldierRBT said:


> Very nice. Now try NVDD LLC4 and MSVDD LLC2 with 800KHz on both. *Another trick I use is lock frequency with nvidia-smi*. I see you running +225. Try 240 and if it crashes, check at what frequency and lock it 15MHz lower. If you do it right, you can gain 15MHz more in the avg coreclocks. After you find the max clocks for your silicon/temp, tweak MSVDD from 1.14 to 1.18v. Sometimes too high decreases score.
> 
> You can also try using msi afterburner v/f curve like 1.10v +240 or +255. Your internal clocks won't be affected since you're manually applying NVVDD/MSVVDD


Sorry for my stupid question, I didn't know NVSMI before you talked about it but how do you lock frequency with NVSMI ? Is it even possible to lock effective clock ? Thx


----------



## SoldierRBT

kryptonfly said:


> Sorry for my stupid question, I didn't know NVSMI before you talked about it but how do you lock frequency with NVSMI ? Is it even possible to lock effective clock ? Thx


You can't lock effective clocks. Only way to increase effective clocks is by increasing NVDD/MSVDD voltages. The idea of nvidia-smi is to lock the frequency so it doesn't crash in PR. For example, 1.10v - 2265MHz v/f curve would be 2200 avg in PR in my case. Every time I tried 1.10v 2280MHz it would boost to 2235MHz and crash when it reaches 50C. Using nvidia-smi I can lock frequency to 2220MHz but still using 1.10v - 2280MHz voltage point giving me 2212MHz avg in PR which is almost 15MHz higher avg clocks.

You need to create a bat file with nvidia-smi -lgc 2220 (you can change this number), and a reset bat file with nvidia-smi -rgc.
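For anyone following along, the two .bat files described would look roughly like this — a minimal sketch, and the 2220 value is just an example; pick whatever lock clock is stable for your own card:

```shell
:: lock_clocks.bat - lock the GPU core clock (run from an elevated prompt)
:: nvidia-smi ships with the NVIDIA driver; 2220 MHz is only an example value
nvidia-smi -lgc 2220

:: reset_clocks.bat - restore default clock management afterwards
nvidia-smi -rgc
```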


----------



## ManniX-ITA

There's also documentation for nvidia-smi:



https://developer.download.nvidia.com/compute/DCGM/docs/nvidia-smi-367.38.pdf


----------



## J7SC

First run w/ the new w-cooling system on the Strix w/ KPE 520 rb vbios (stock Strix rb vbios on the left for comps). Ambient was 1 C higher (24 C) in the new run, and temp comps are interesting w/ the higher power limit. HWInfo won't pick up max Watts correctly with the KPE on the Strix, but rail values seemed to change by the correct amount. VRAM isn't quite maxed yet, and ignore the '2295 MHz' GPU in the new run (got greedy, crashed/spiked and forgot to reset HWInfo). Overall, I'm happy w/ the cooling, including the Phanteks block and air-cooled heatsink on the back.

Question: I recall someone posting a while back that there was a newer 'v2' KPE 520 W r_BAR vbios (mine is the earlier r_BAR one). What are the differences ? Tx.









PS/Ed.: The bubble trapped in the Strix block is why I'm not cranking VRAM yet; it probably doesn't hurt too much but might as well fix it. I had gotten rid of it before via moving the GPU just-a-few-degrees over horizontal but then changed something else in the loop (which is big, 1320x63 rads, 3x D5s, lots of up/down tubing) so the bubble is back (but not for long)


----------



## gfunkernaught

KedarWolf said:


> Win10BenchISO.zip — drive.google.com
> 
> 
> 
> 
> 
> *New and improved benching ISO, stripped, services disabled.*
> 
> Windows 10 LTSC 2021, lots of services disabled and all bloatware removed.
> 
> Tested as working in VMWare.
> 
> There will be no printing, no audio no Wi-Fi, no Bluetooth etc. just a bare-bones Win 10 operating system strictly for benching.
> 
> Before installing the O/S, you might want to back up your existing O/S with a Macrium Reflect Free boot USB you make in 'Other Tasks' in the software.
> 
> Or just install the Windows 10 to a spare M.2 or SSD to dual boot with your existing O/S.
> 
> READ the README.txt in the .zip file, there are some more things to disable with Autoruns and you need to install your chipset drivers and use the included NVCleanstall_1.12.0.exe to install Nvidia drivers.
> 
> Burn the ISO with the included RUFUS and can only be used as a clean Windows install.


Did you build this iso yourself? I'm wondering if we can use DISM to customize the .wim file, like injecting all the drivers for one's system. Then we can burn the ISO to a USB drive, boot to it, and run the benchmarks. This way you don't have to format your primary drive. This would only be useful for people like me that don't have their system on a bench. I don't want to have to swap M.2 drives.


----------



## KedarWolf

gfunkernaught said:


> Did you build this iso yourself? I'm wondering if we can use dism to customize the win file like injecting all drivers for one's system let's say. Then we can burn the iso to a USB drive and boot to it and run the benchmarks. This way you don't have to format your primary drive. This would only be useful for people like me that don't have their system on a bench. I don't want to have to swap m.2 drives.


I built it using the Optimize Offline Script.

You can extract the ISO, use DISM to add the drivers, then rebuild the ISO.

Install 2004 ADK.

You can put the drivers, .inf and required files in the D:\Drivers folder, or use the below command in Windows 10 to save the drivers from your running Windows PC.

Make a folder D:\Drivers

Run in Admin PowerShell.



Code:


Export-WindowsDriver -Online -Destination "D:\Drivers"

Extract the benching ISO with WinRAR. I extracted it to the D:\2 folder in my example.

Make a directory D:\Mount

Use the below commands pointing to your sources/install.wim location.



Code:


DISM /Mount-Wim /WimFile:D:\2\sources\install.wim /Index:1 /MountDir:D:\Mount




Code:


DISM /Image:D:\Mount /Add-Driver /Driver:D:\Drivers /recurse /ForceUnsigned




Code:


DISM /Unmount-Wim /MountDir:D:\Mount /Commit

Open an Admin Command Prompt in the ADK Tools folder where the Oscdimg.exe file is and run the below to rebuild the ISO as UEFI.

C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg\ is the location of the Oscdimg.exe file.




Code:


oscdimg -m -o -u2 -udfver102 -bootdata:2#p0,e,bd:\2\boot\etfsboot.com#pEF,e,bd:\2\efi\microsoft\boot\efisys.bin d:\2\ d:\Windows10.iso

Edit: If any driver files are needed to boot your PC, follow the add-drivers steps above but with:



Code:


DISM /Mount-Wim /WimFile:D:\2\sources\boot.wim /Index:2 /MountDir:D:\Mount




Code:


DISM /Image:D:\Mount /Add-Driver /Driver:D:\Drivers /recurse /ForceUnsigned




Code:


DISM /Unmount-Wim /MountDir:D:\Mount /Commit
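Condensed into one place, the install.wim sequence above looks roughly like the following batch file — a sketch only, assuming the thread's example paths (ISO extracted to D:\2, drivers in D:\Drivers, mount point D:\Mount) and a default ADK install; these are Windows-only admin commands:

```shell
:: inject_drivers.bat - add this PC's drivers to the benching ISO's install.wim,
:: then rebuild a UEFI-bootable ISO. Run from an elevated prompt.
powershell -Command "Export-WindowsDriver -Online -Destination 'D:\Drivers'"

mkdir D:\Mount
DISM /Mount-Wim /WimFile:D:\2\sources\install.wim /Index:1 /MountDir:D:\Mount
DISM /Image:D:\Mount /Add-Driver /Driver:D:\Drivers /recurse /ForceUnsigned
DISM /Unmount-Wim /MountDir:D:\Mount /Commit

:: oscdimg comes from the ADK Deployment Tools
cd /d "C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg"
oscdimg -m -o -u2 -udfver102 -bootdata:2#p0,e,bd:\2\boot\etfsboot.com#pEF,e,bd:\2\efi\microsoft\boot\efisys.bin d:\2\ d:\Windows10.iso
```

The boot.wim edit (Index:2) from the "Edit:" note is the same pattern, just with a different /WimFile and /Index.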


----------



## gfunkernaught

KedarWolf said:


> I built it using the Optimize Offline Script.
> 
> You can extract the ISO, use DISM to add the drivers, then rebuild the ISO.
> 
> Install 2004 ADK.
> 
> You can put the drivers, .inf and required files in the D:\Drivers folder, or use the below command in Windows 10 to save the drivers from your running Windows PC.
> 
> Make a folder D:\Drivers
> 
> Run in Admin PowerShell.
> 
> 
> 
> Code:
> 
> 
> Export-WindowsDriver -Online -Destination "D:\Drivers"
> 
> Extract the benching ISO with WinRar. I extracted it to the D:\2 folder in my example
> 
> Make a directory D:\Mount
> 
> Use the below commands pointing to your sources/install.wim location.
> 
> 
> 
> Code:
> 
> 
> DISM /Mount-Wim /WimFile:D:\2\sources\install.wim /Index:1 /MountDir:D:\Mount
> 
> 
> 
> 
> Code:
> 
> 
> DISM /Image:D:\Mount /Add-Driver /Driver:D:\Drivers /recurse /ForceUnsigned
> 
> 
> 
> 
> Code:
> 
> 
> DISM /Unmount-Wim /MountDir:D:\Mount /Commit
> 
> Open an Admin Command prompt in the ADK Tools folder where Oscdimg.exe file is and run the below to rebuild the ISO as UEFI.
> 
> C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg\ is the location Oscdimg.exe file.
> 
> 
> 
> 
> Code:
> 
> 
> oscdimg -m -o -u2 -udfver102 -bootdata:2#p0,e,bd:\2\boot\etfsboot.com#pEF,e,bd:\2\efi\microsoft\boot\efisys.bin d:\2\ d:\Windows10.iso
> 
> Edit: If any driver files are needed to boot your PC, follow the add-drivers steps above but with:
> 
> 
> 
> Code:
> 
> 
> DISM /Mount-Wim /WimFile:D:\2\sources\boot.wim /Index:2 /MountDir:D:\Mount
> 
> 
> 
> 
> Code:
> 
> 
> DISM /Image:D:\Mount /Add-Driver /Driver:D:\Drivers /recurse /ForceUnsigned
> 
> 
> 
> 
> Code:
> 
> 
> DISM /Unmount-Wim /MountDir:D:\Mount /Commit


Thanks!


----------



## yzonker

All I managed to do was match my previous PR run with @KedarWolf OS, but I easily beat my previous TS score, 









Result — www.3dmark.com


----------



## KedarWolf

yzonker said:


> All I managed to do was match my previous PR run with @KedarWolf OS, but I easily beat my previous TS score,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result — www.3dmark.com


See that I updated the benching ISO recently. Really important to do the AutoRuns things though, lots more services can be disabled.


----------



## yzonker

KedarWolf said:


> See that I updated the benching ISO recently. Really important to do the AutoRuns things though, lots more services can be disabled.


I saw that but I had installed the older one a few days ago already. I'll update when I get a chance. Thanks for all your effort on this and sharing it.


----------



## Panchovix

KedarWolf said:


> See that I updated the benching ISO recently. Really important to do the AutoRuns things though, lots more services can be disabled.


I tried using the USER-PC.arn file in Autoruns64 but it said the file was corrupted. I don't know if someone else faced that issue.
Didn't happen with the past version of the ISO, for me at least.


----------



## KedarWolf

Panchovix said:


> I tried using the USER-PC.arn file in Autoruns64 but it said the file was corrupted. I don't know if someone else faced that issue.
> Didn't happen with the past version of the ISO, for me at least.


I'll check into it when I get home from work.


----------



## yzonker

KedarWolf said:


> I'll check into it when I get home from work.


I tried it just a bit ago with the same issue. Older one works fine though.


----------



## yzonker

I'm sure this is old news to some, but this costs less than the clamp meter I bought to measure 8-pin power.



https://www.elmorlabs.com/product/pmd-power-measurement-device/


----------



## inedenimadam

Hey guys. Has there been an update to the 1000W bios to include rebar? if you can't post it here, DM me a link? Trying to get things together for some much needed time off in december to run SLI benchies on z690.


----------



## KedarWolf

yzonker said:


> I tried it just a bit ago with the same issue. Older one works fine though.


Here is Autoruns and a better .arn file to use. You need these new versions of them.

Rename the attachment and remove the .txt from it. I'm reuploading the BenchingISO.zip as well.


----------



## KedarWolf

Win10BenchISO.zip — drive.google.com


----------



## yzonker

inedenimadam said:


> Hey guys. Has there been an update to the 1000W bios to include rebar? if you can't post it here, DM me a link? Trying to get things together for some much needed time off in december to run SLI benchies on z690.











EVGA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
www.techpowerup.com


----------



## geriatricpollywog

SoldierRBT said:


> Very nice. Now try NVDD LLC4 and MSVDD LLC2 with 800KHz on both. Another trick I use is lock frequency with nvidia-smi. I see you running +225. Try 240 and if it crashes, check at what frequency and lock it 15MHz lower. If you do it right, you can gain 15MHz more in the avg coreclocks. After you find the max clocks for your silicon/temp, tweak MSVDD from 1.14 to 1.18v. Sometimes too high decreases score.
> 
> You can also try using msi afterburner v/f curve like 1.10v +240 or +255. Your internal clocks won't be affected since you're manually applying NVVDD/MSVVDD


My card didn't like those loadline settings but here's a new PR.


----------



## inedenimadam

yzonker said:


> EVGA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> www.techpowerup.com


Thank You. I guess the cat is out of the bag, so to speak. I have been out of the loop for a few months.


----------



## KedarWolf

I was using CableMod Pro PSU cables on my AX1600i.

Someone in another thread noticed my CPU and GPU 12v rails were dipping down to as low as 11.6v while running Port Royal and that is not good for stability.

I put back in the stock Corsair cables, and now my 12v rails don't go below 12.05v.


----------



## KCDC

KedarWolf said:


> I was using CableMod Pro PSU cables on my AX1600i.
> 
> Someone in another thread noticed my CPU and GPU 12v rails were dipping down to as low as 11.6v while running Port Royal and that is not good for stability.
> 
> I put back in the stock Corsair cables, and now my 12v rails don't go below 12.05v.


I had Cablemods a long time ago for a 1300i, pretty sure they use thinner gauge wires. Didn't take long for them to get all bent out of shape.

I can't stand the chunky corsair cables from all those capacitors or whatever they put in-line. Absolute pain to work with.

Just recently got my custom cables for my 1600i from Mainframe Customs. Been flawless, no voltage issues; been running on them nonstop 24/7 rendering on dual 3090s at a constant 900+ watts, especially when the CPU is also calculating sims. Biggest downside is he takes forever, but if you don't mind waiting 6-ish weeks, I can't speak highly enough of the quality.

Still deciding on how I want to eventually route them but this layout will do for now


----------



## gfunkernaught

KCDC said:


> I had Cablemods a long time ago for a 1300i, pretty sure they use lower gauge wires. Didnt take long for them to get all bent out of shape.
> 
> I can't stand the chunky corsair cables from all those capacitors or whatever they put in-line. Absolute pain to work with.
> 
> Just recently got my custom cables for my 1600i from Mainframe Customs. Been flawless, no voltage issues, been running on them nonstop 24/7 rendering on dual 3090s constant 900+watts especially when the cpu is also calculating sims. Biggest downside is he takes forever, but if you don't mind waiting 6ish weeks, I can't speak highly enough over the quality.
> 
> Still deciding on how I want to eventually route them but this layout will do for now


Does Cablemods let you choose the wire gauge for those cables?


----------



## yzonker

Has anyone installed @KedarWolf 's newest Win ISO? I installed it on a machine with an Asus X570 Pro mobo and can't get the ethernet adapter working. Tried various methods of installing the driver but it just errors out saying the device can't be started.


----------



## Roacoe717

Question: does anyone know what temps affect the boost clock? I put my PC outside in 40F weather and maybe got 150 extra points in Port Royal and Time Spy. Highest clock I saw was 2085, inside it clocked at 2060.


----------



## gfunkernaught

yzonker said:


> Has anyone installed @KedarWolf 's newest Win ISO? I installed it on a machine with an Asus X570 Pro mobo and can't get the ethernet adapter working. Tried various methods of installing the driver but just errors out saying the device can't be started.


Go back a few pages. @KedarWolf posted instructions to build an ISO with all of your drivers preinstalled. Then you burn that ISO to a USB drive and boot from it.

Nevermind here is the link:








[Official] NVIDIA RTX 3090 Owner's Club — www.overclock.net


----------



## gfunkernaught

Roacoe717 said:


> Question: does anyone know what temps affect the boost clock? I put my PC outside in 40F weather and maybe got 150 extra points in Port Royal and Time Spy. Highest clock I saw was 2085, inside it clocked at 2060.


From what I've seen, 40c is where the Boost clock throttling starts. It seems to average out the temps. So you can be at 40c for a while but if it starts to jump to 41c and 42c, those numbers get averaged in and you get stepped down one bin. I believe this algorithm is in effect all the way up to the danger-thermal throttling levels.


----------



## ManniX-ITA

gfunkernaught said:


> From what I've seen, 40c is where the Boost clock throttling starts.


40 Fahrenheit is around 4-5 Celsius
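(Quick check of the arithmetic, using the standard F-to-C formula (F − 32) × 5/9:)

```shell
# Fahrenheit to Celsius: subtract 32, then multiply by 5/9
awk 'BEGIN { printf "%.1f\n", (40 - 32) * 5 / 9 }'   # prints 4.4
```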


----------



## J7SC

From the FYI file: 
The local outlet of the national retailer where I got both my Strix 3090 and 6900XT (at MSRP & no tariff here ) hasn't had any 3090s by any board partner in for a while...but they did get 7x Strix 3080 Ti in yesterday (at a higher MSRP than what I paid for the Strix 3090 in late January...).

While they also have a big online store, for top-end GPUs you have to show up in person to buy, given what is going on...all 7x Strix 3080 Ti were gone within a few hours, in spite of really bad weather, traffic gridlock etc. Checking retailers in the EU also still paints a grim picture for 3090 availability and pricing. I guess the much-touted recovery in the GPU market from the late summer got Omicron?


----------



## BigMack70

Anyone here play Halo Infinite and trying to get 240fps in the multiplayer?

I've got my 3090 paired with a 9900ks @ 5.2 GHz, and playing at 5120*1440 resolution, I can't get 220-240fps reliably even with 50% resolution scale and all settings on low.

Wondering if anyone has a better CPU and is getting closer to that magic 240fps mark?


----------



## Roacoe717



gfunkernaught said:


> From what I've seen, 40c is where the Boost clock throttling starts. It seems to average out the temps. So you can be at 40c for a while but if it starts to jump to 41c and 42c, those numbers get averaged in and you get stepped down one bin. I believe this algorithm is in effect all the way up to the danger-thermal throttling levels.


Thank you.


----------



## yzonker

gfunkernaught said:


> Go back a few pages. @KedarWolf posted instructions to build an ISO with all of your drivers preinstalled. Then you burn that ISO to a USB drive and boot from it.
> 
> Nevermind here is the link:
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club — www.overclock.net


Thanks. I saw that before but I guess I didn't realize I'd need to use that method if I just installed it on a separate drive. That's all I did with the original ISO. 

BTW I did install the original this afternoon on that machine and the ethernet worked fine. Win10 does pick up an ethernet driver for that mobo. I canceled that step this morning when trying to install the new ISO because it wanted to install Armory Crate also. This afternoon I let it grab the driver and then canceled the AC install. Maybe that was the difference.


----------



## KedarWolf

yzonker said:


> Thanks. I saw that before but I guess I didn't realize I'd need to use that method if I just installed it on a separate drive. That's all I did with the original ISO.
> 
> BTW I did install the original this afternoon on that machine and the ethernet worked fine. Win10 does pick up a ethernet driver from that mobo. I canceled that step this morning when trying to install the new ISO because it wanted to install Armory Crate also. This afternoon I let it grab the driver and then canceled the AC install. Maybe that was the difference.


You can use the new Autoruns file with the original ISO and it'll work just as well.

Here is Autoruns and a better .arn file to use. You need these new versions of them.

Rename the attachment and remove the .txt from it. I'm reuploading the BenchingISO.zip as well.


----------



## KCDC

gfunkernaught said:


> Does Cablemods let you choose the wire gauge for those cables?


Haven't used them since that last time, but I think you can.
The other thing I didn't like is the fabric they used; it attracts and holds dirt/dust/crap from your fingers, so if you got any lighter colors they'd end up dingy after a while. Dunno what materials they use now, hopefully not that anymore.

Edited for more info


----------



## ManniX-ITA

yzonker said:


> install Armory Crate also


May I ask why you're installing AC?
It's definitely the last thing you want to have on a benching install.


----------



## J7SC

For Microsoft FlightSim 2020 fans, how's your experience with the latest 'mandatory patch'? ...At least we finally have the DX12 beta of FS2020. Overall, frame rates and visuals were great at DX12 4K / Ultra on my 3090 Strix, but 10% and 1% lows were a bit 'jerky'...same on my other FS2020 machine (dual 2080 Ti). Then again, it's a beta...


----------



## gfunkernaught

J7SC said:


> For Microsoft FlightSim 2020 fans, how's your experience with the latest 'mandatory  patch' ? ...At least we finally have the DX12 beta of FS2020. Overall, frame rates and visuals were great at DX12 4K / Ultra on my 3090 Strix, but 10% and 1% lows were a bit 'jerky'...same on my other FS2020 machine (dual 2080 Ti). Then again, it's a beta...


I don't have FS2020 but I'm curious about those jerky lows. Are you referring to frametimes?

I just bought RDR2 and saw jumpy frametimes on Vulkan with everything maxed out at 4K, without DLSS. They were bouncing around from 15.9-17.2ms, all with vsync and triple buffering on. Switched to DX12 and now it's a consistent 16ms. I can't help it and it may be placebo, but I feel like Vulkan looked better than DX12. DX12's shadows and volumetric lighting have a pasty, waxy look to them. I see it in other games as well. I'll have to do an actual comparison between the two to be sure.
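For anyone mapping those frametime numbers to fps targets, the conversion is just 1000 divided by the other value. A throwaway sketch; the specific readings are the ones discussed above, not new measurements:

```python
# fps <-> frametime conversion: 16.7 ms is ~60 fps, and a 240 fps
# target means sustaining ~4.2 ms frames.

def frametime_ms(fps: float) -> float:
    """Milliseconds per frame at a given frame rate."""
    return 1000.0 / fps

def fps_from(ms: float) -> float:
    """Frame rate implied by a given frametime in milliseconds."""
    return 1000.0 / ms

print(f"{fps_from(16.0):.1f} fps at a steady 16.0 ms")   # the DX12 case
print(f"{fps_from(17.2):.1f} fps at 17.2 ms")            # worst of the Vulkan swings
print(f"{frametime_ms(240):.2f} ms needed for 240 fps")
```

So a 15.9-17.2ms swing is roughly a 58-63fps band, which is why it reads as jitter even though the average looks fine.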


----------



## J7SC

gfunkernaught said:


> I don't have FS2020 but I'm curious about those jerky lows. Are you referring to frametimes?
> 
> I just bought RDR2 and saw jumpy frametimes on vulkan with everything maxed out at 4k, without DLSS. They were bouncing around from 15.9-17.2ms, all with vsync and t-buffering on. Switched to DX12 and now its a consistent 16ms. I can't help it and it may be a placebo, but I feel like vulkan looked better than DX12. DX12's shadows and volumetric lighting have a pasty-waxy look to it. I see it in other games as well. I have to do an actual comparison between the two to see for sure.


Yes, I was referring to 10% and 1% low frame times. I reiterate that FS2020 (a Microsoft title, after all) is finally getting DX12, albeit in beta for now. I don't have the jerkiness in other DX12 titles such as CP2077.


----------



## geriatricpollywog

J7SC said:


> Yes, I was referring to 10% and 1% low frame times. I reiterate that FS2020 (a Microsoft title, after all) is finally getting DX12, albeit in beta for now. I don't have the jerkiness in other DX12 titles such as CP2077.


It looks really good with DX12 enabled, but I'm seeing the same stutters I saw at launch. The engine is only using 5-6 GB of RAM and VRAM, which is a shame when I have 32 and 24 GB respectively.


----------



## EarlZ

I am a little worried about the voltage readings on my GPU, as the minimums in HWiNFO64 and GPU-Z show between 11.64V and 11.86V. I haven't gotten any instability issues yet, and this is with a Cooler Master V1000 bought in 2014. I am also using a comb extension to my GPU and to my mobo. I included the mobo sensor values for 12V.

Shouldn't 11.64V be at the point of instability?


----------



## ManniX-ITA

EarlZ said:


> Shouldnt 11.64v at the point of instability?


Technically, it's out of spec below 11.4V.
But due to short, heavy bursts of load, NVIDIA GPUs sometimes get unstable even just below 12V.
The issue is likely due to the extension cables. The board sensor seems fine.
I wouldn't be too worried if it's not unstable.
But you should really fix it even without experiencing issues.

The 8-pin #1 is dropping 0.5V, which is very high, even for an old PSU.
If you get these readings running a benchmark like Port Royal, with only the GPU loaded and below about 500W, they are bad.
Close to max load it could become an issue, as it would probably drop even more.


----------



## EarlZ

ManniX-ITA said:


> Technically is below 11.4V.
> But due to the short and heavy bursts of load sometimes nVidia GPUs gets unstable when just below 12V.
> The issue is likely due to the extension cables. The board sensor seems fine.
> I wouldn't be too much worried if it's not unstable.
> But you should really fix it even without experiencing issues.
> 
> The 8-pin #1 is dropping 0.5V and it's very high, even for an old PSU.
> If you get these readings running a benchmark like Port Royal, only GPU and below about 500W load, these are bad.
> Close to the max load could become an issue as it would probably drop even more.


I'll try it w/o any extensions and see if there is a difference. I believe the extensions I have use 16AWG cable, so even with a PSU change this won't improve?


----------



## yzonker

EarlZ said:


> I'll try it w/o any extensions and see if there is a difference. I believe the extensions I have use a 16AWG cable so even with a PSU change this will not improve?


Seems like the answer to that question depends on whether the voltage stays at 12V without the extensions. My new RM1000x will hold 12V even at 600W on my 2x8pin Zotac. My old Seasonic would drop to 11.8V. The Seasonic had 18 gauge cables whereas the Corsair has 16 gauge.


----------



## ManniX-ITA

yzonker said:


> My old Seasonic would drop to 11.8v. Seasonic had 18 gage cables whereas the Corsair has 16 gage.


My assumption is a faulty PSU rather than the wire gauge.
There are 3 positive wires on an 8-pin PCIe connector, so that's around 5-6 amps each.
The voltage drop is about 1% at 4 feet for AWG 18 and at 6 feet for AWG 16.
For the much shorter lengths these cables usually have, the sizing doesn't matter; both can sustain well over 15A without any issue.
If the system/GPU is crashing, it's more likely the PSU's OCP protection kicking in.
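A rough back-of-the-envelope sketch of that point, using standard copper AWG resistance figures; the 2 ft cable length and the per-connector loads are illustrative assumptions, not measurements from anyone's rig:

```python
# Estimate the voltage drop across one PCIe 8-pin cable run.
# Standard copper resistance: AWG 18 is ~6.385 ohm per 1000 ft,
# AWG 16 is ~4.016 ohm per 1000 ft (solid copper, 20 C).

OHMS_PER_1000FT = {18: 6.385, 16: 4.016}

def cable_drop(awg: int, length_ft: float, watts: float,
               wires: int = 3, volts: float = 12.0) -> float:
    """Round-trip drop (supply + return, conservatively same gauge/length)."""
    amps_per_wire = (watts / volts) / wires
    resistance = OHMS_PER_1000FT[awg] / 1000 * length_ft * 2  # out and back
    return amps_per_wire * resistance

for awg in (18, 16):
    for watts in (150, 300):
        d = cable_drop(awg, length_ft=2.0, watts=watts)
        print(f"AWG{awg}, {watts} W on one connector: {d*1000:.0f} mV drop")
```

Even at an abusive 300W through a single connector, the gauge difference is worth tens of millivolts, nowhere near the 0.5V being observed, which is why the contacts or the PSU itself are the better suspects.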


----------



## yzonker

ManniX-ITA said:


> My assumption is a faulty PSU more than the wires section.
> There are 3 positive wires on a 8-pin PCIe connector, that's around 5-6 Ampere for each one.
> The voltage drop is about 1% at 4 feet for an AWG 18 and 6 feet for an AWG 16.
> For a much smaller length as they usually have it doesn't matter the sizing; they can both sustain without any issue even much more than 15A.
> If the system/GPU is crashing then more likely is the PSU OCP protection kicking in.


Yeah, may be the PSU. I don't know that the wire gauge difference explains what I saw either. My bigger concern with extensions is adding more plugs. You never see a burned wire, only burned plugs.

I certainly won't be surprised if a new PSU resolves the issue.


----------



## Carillo

In total, about 10 cards were sold in Norway in March 2021 (as far as I have heard), and at the time I unfortunately did not get the opportunity to buy one, but today I was lucky enough to pick up a second-hand 3090 HOF. Better late than never.


----------



## J7SC

ManniX-ITA said:


> My assumption is a faulty PSU more than the wires section.
> There are 3 positive wires on a 8-pin PCIe connector, that's around 5-6 Ampere for each one.
> The voltage drop is about 1% at 4 feet for an AWG 18 and 6 feet for an AWG 16.
> For a much smaller length as they usually have it doesn't matter the sizing; they can both sustain without any issue even much more than 15A.
> If the system/GPU is crashing then more likely is the PSU OCP protection kicking in.





yzonker said:


> Yea may be the PSU. I don't know that the wire gage difference explains what I saw either. My bigger concern with extensions is adding more plugs. You never see a burned wire, only burned plugs.
> 
> I certainly won't be surprised if a new PSU resolves the issue.


PSUs do get 'tired' after heavy use for years, even if they don't blow up with a bang. By coincidence, until yesterday I had not bought a new PSU since '14 or so, as I got a pile of heavy-duty ones back then for XOC HWBot. However, lately one has its 3.3V rail consistently at 2.8V or less, while two others started to drop 12V to the mid-11s (or lower).

The 3090 system has a fairly fresh Antec Platinum 1300W (Delta innards) which is excellent in its rail values, but I replaced the other one for the BigNavi with a Seasonic Prime Platinum 1300W (on sale for ~ US$ 289). 


Spoiler


----------



## gfunkernaught

yzonker said:


> Yea may be the PSU. I don't know that the wire gage difference explains what I saw either. My bigger concern with extensions is adding more plugs. You never see a burned wire, only burned plugs.
> 
> I certainly won't be surprised if a new PSU resolves the issue.


I second this. I had an HX1200 that seemed fine but it couldn't power my 3090 properly. Eventually it started shutting down with heavy load, even on stock bios and clocks. RMA replacement has been going strong even with the xoc bios. I asked Corsair what gauge wires they use on their PSUs and they told me that's not public info.


----------



## ManniX-ITA

J7SC said:


> PSUs do get 'tired' after heavy use for years, even if they don't blow up with a bang.


Yes, there's normal decay due to time and usage, but a high-end PSU can work perfectly fine for 20-30 years.
My Supernova is from 2013 and still works like the first day, luckily.

Guess most know about it, but the younger crowd probably hasn't even heard of it.
Unfortunately, until 2010-2015 we were still paying the consequences of the capacitor plague:









Capacitor plague - Wikipedia (en.wikipedia.org)





Here's an article about it:









How a stolen capacitor formula ended up costing Dell $300m (www.theguardian.com)

Though the American company had nothing to do with the industrial espionage in China in 2002 that led to faulty components, it paid the price with millions of faulty PCs.





Despite the problem being known for years, many manufacturers still bought fake Japanese capacitors, knowingly (prices too low to be true, but hey, cheap) or unknowingly.
I've thrown many ASUS and Gigabyte boards with leaking caps in the bin myself...
It's the same for PSUs; many were manufactured with them.
Especially the good ones, as the fakes were sold as high-quality Japanese caps.

That's why in every board or PSU marketing material, the first flashy thing they advertise is high-quality Japanese caps (hopefully genuine ones).


----------



## yzonker

gfunkernaught said:


> I second this. I had an HX1200 that seemed fine but it couldn't power my 3090 properly. Eventually it started shutting down with heavy load, even on stock bios and clocks. RMA replacement has been going strong even with the xoc bios. I asked Corsair what gauge wires they use on their PSUs and they told me that's not public info.


Well, Tom's Hardware reports them as 16/18 gauge for primary/pigtail, and you can see they are different sizes. Hopefully they're not 18/20. Lol


----------



## yzonker

J7SC said:


> For Microsoft FlightSim 2020 fans, how's your experience with the latest 'mandatory  patch' ? ...At least we finally have the DX12 beta of FS2020. Overall, frame rates and visuals were great at DX12 4K / Ultra on my 3090 Strix, but 10% and 1% lows were a bit 'jerky'...same on my other FS2020 machine (dual 2080 Ti). Then again, it's a beta...


I haven't spent a lot of time with this new update, but when I tried it the other day overall performance was good but it definitely jerks and hitches here and there, but that really isn't new. It is particularly bad when you look side to side. I assume it's shuffling stuff in/out of VRAM or something. Particularly annoying in VR.


----------



## EarlZ

ManniX-ITA said:


> Technically is below 11.4V.
> But due to the short and heavy bursts of load sometimes nVidia GPUs gets unstable when just below 12V.
> The issue is likely due to the extension cables. The board sensor seems fine.
> I wouldn't be too much worried if it's not unstable.
> But you should really fix it even without experiencing issues.
> 
> The 8-pin #1 is dropping 0.5V and it's very high, even for an old PSU.
> If you get these readings running a benchmark like Port Royal, only GPU and below about 500W load, these are bad.
> Close to the max load could become an issue as it would probably drop even more.





yzonker said:


> Seems like the answer to that question depends on whether the voltage stays at 12v without the extensions. My new Rm1000x will hold 12v even at 600w on my 2x8pin Zotac. My old Seasonic would drop to 11.8v. Seasonic had 18 gage cables whereas the Corsair has 16 gage.


Removed the extension and my lowest value now sits at 12.11V, which is a lot better than 11.6V! It sucks that PSU extension cables degrade in less than a year of usage. Must be due to poor wire quality.


----------



## yzonker

EarlZ said:


> Removed the extension and my lowest value now sits at 12.11v which is a a lot better than 11.6v! It sucks that PSU extension cables degrade in less than a year of usage. Must be due to poor wire quality.
> 
> View attachment 2535021


If they didn't have this problem originally, then it's the plugs.


----------



## EarlZ

yzonker said:


> If they didn't have this problem originally, then it's the plugs.


I'm almost certain this was not an issue when I first got them, because that was one of the things I checked: how much voltage drop happens with an extension. I measured all pins and I am reading 0.4-0.5 ohm resistance. Not sure if that's high or expected.


----------



## yzonker

EarlZ said:


> I'm almost certain this was not an issue when I first got them. Because that was one of the things I checked on how much voltage drop will happen with an extension. I measured all pins and I am reading 0.4~0.5omh resistance. Not sure if thats high or expeced.


The issue is usually the quality of the contact between the plugs, not the cable itself. Try shutting down the machine and checking the entire thing (PSU cable plus extension), assuming that's not what you just did?


----------



## EarlZ

yzonker said:


> The issue is usually the quality of the contact between the plugs, not the cable itself. Trying shutting down the machine and check the entire thing (psu cable plus extension), assuming that's not what you did just now?


I made sure that the extensions were as snug as possible but am still getting that 11.6V. Very bummed out! Now back to using stock PSU cables.


----------



## yzonker

EarlZ said:


> I made sure that the extensions were as snug as possible but still getting that 11.6v very bummbed out! now back to using stock PSU cables.


I think 0.4-0.5 is high actually. I have some extensions in a box. Just checked a couple of pins (only the extension) and got 0.0 ohms, which makes sense to me as it's really just a short wire. So then I dug out one of the Seasonic 8-pin cables and connected it. Still 0.0 ohms.

You did zero-check your meter? (I know, dumb question, but you never know)


----------



## EarlZ

yzonker said:


> I think 0.4-0.5 is high actually. I have some extensions in a box. Just checked a couple of pins (only the extension) and got 0.0 omhs. Which makes sense to me as it's just a short wire really. So then I dug out one of the Seasonic 8pin cables and connected it. Still 0.0 omhs.
> 
> You did check zero on your meter? (I know, dumb question but you never know)


Do you mean if my multimeter is reading zero when not touching anything?


----------



## yzonker

EarlZ said:


> Do you mean if my multimeter is reading zero when not touching anything?


Should read zero when you touch the leads together. Probably does but I always do that test just to be sure.


----------



## gfunkernaught

Just hooked up my active backplate cooler and, uh, yeah: in Quake 2 RTX the VRAM tops out at 48C with a +1500MHz offset, and the core went up to 46C. No PL, 650W max, 2135MHz effective core, 1093mV. I definitely got lazy doing this and did not repaste anything. Now with that confession out of the way, I think it could also be an issue that the backplate takes the incoming water first, then sends the warmed water to the front of the card. I didn't see the water temp change from 30C max at all. So there is definitely a contact issue, but the in-series config EK chose makes me curious. Probably after the holidays, if I can resist my OCD to repaste the damn thing.


----------



## J7SC

ManniX-ITA said:


> Yes, there's a normal decay due to time and usage but an high-end PSU can work perfectly fine for 20-30 years.
> My Supernova is from 2013 and still works like the 1st day luckily
> 
> Guess most known about it but probably who's young doesn't even heard about it.
> Unfortunately till 2010-2015 we were still paying the consequences of the capacitor plague:
> 
> Capacitor plague - Wikipedia (en.wikipedia.org)
> 
> Here's an article about it:
> 
> How a stolen capacitor formula ended up costing Dell $300m (www.theguardian.com)
> 
> Despite being known for years many manufacturers still bought, knowingly (price too low to be true, but hey cheap) or unknowingly, fake Japanese capacitors.
> I've thrown myself in the bin many ASUS and Gigabyte boards with leaking caps...
> For the PSUs is the same, many have been manufactured with them.
> Especially the good ones as they were sold as high quality Japanese caps.
> 
> That's why in every board or PSU marketing material the first flashy thing they advertise are the high quality Japanese caps (hopefully).


...About eight-plus years ago, I picked up three Corsair TX850 as well as two Corsair AX1200. The three TX850 have been continuously running (mostly on work-related machines) without any problems whatsoever. I checked their 12V, 5V and 3.3V again this week, and all of them are dead on, even under load. One of the two AX1200 is still working great as well, but the other has developed a 3.3V rail problem (~2.8V) which leads to system crashes. Around the same time as those purchases, I also added a Lepa G1200W (Lepa = Enermax)...it's still working great, but so it should, as I used it only rarely. It's basically back-up.

On the heavy-duty use side, seven years ago I also added a BeQuiet DarkPro 1200W and a couple of Antec 1300W HPC for XOC fun. The BeQuiet DarkPro now has its 12V rail below 11.5V (under load only), but it did a lot of heavy lifting, up to and including OCP while XOCing well beyond PSU spec. The two Antecs are still working perfectly, though one of them did go in for an early RMA for an unrelated reason.

PSUs are like a box of chocolates: you never know what Japanese caps you're gonna get.


----------



## EarlZ

yzonker said:


> Should read zero when you touch the leads together. Probably does but I always do that test just to be sure.


At 200 ohms on the multimeter dial I get 0.3 ohms when I touch the leads together; at 2k ohms I get 0.001. Did you check at 200 ohms?
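Worth noting: cheap meters can't null out their own lead resistance, so on the 200 ohm range the shorted-leads reading has to be subtracted from the measurement. A quick sketch with the readings from this exchange; the ~5 A per-wire load current is an illustrative assumption:

```python
# Correct a low-ohms measurement for the meter's lead resistance,
# then sanity-check what that resistance would do at load (V = I*R).

def true_resistance(measured_ohms: float, leads_ohms: float) -> float:
    """Contact/wire resistance with the meter's lead offset removed."""
    return max(measured_ohms - leads_ohms, 0.0)

r = true_resistance(measured_ohms=0.45, leads_ohms=0.3)  # ~0.15 ohm remains
drop = r * 5.0  # per-wire drop at roughly 5 A on a 12 V pin
print(f"corrected resistance: {r:.2f} ohm, drop at 5 A: {drop:.2f} V")
```

Even after subtracting the 0.3 ohm lead offset, ~0.15 ohm per pin would drop around 0.75 V at 5 A, which is in the same ballpark as the 0.5 V sag being seen, so a high-resistance contact is a plausible culprit.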


----------



## gfunkernaught

So for those using thermal putty, do you use it to replace thermal pads? How do you apply it? On the PCB or the block or both? I've seen many posts about it, but which putty performs the best for heat transfer?


----------



## J7SC

gfunkernaught said:


> So for those using thermal putty, do you use it to replace thermal pads? How do you apply it? On the PCB or the block or both? I've seen many posts about it, but which putty performs the best for heat transfer?


I use TG 10 (DigiKey) and it seems to work great, at least for the three-plus months I've had it in use on two GPUs, including the 3090. I just made little balls out of it (after cleaning my hands w/ isopropyl alcohol), approx 1 cm in diameter - it will squish and conform anyway, as long as you don't go overboard on the amount and hinder die contact.

On the 3090, I used it on the VRAM front and back; on the 6900XT, I also use it on some of the power phases in addition to its VRAM... w/ the 'little ball method', you just place them on the VRAM on one side, then mount the PCB to the block.
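As a sanity check on the 'little ball' sizing, it's just conserved volume: a squished ball covers an area of volume divided by the gap thickness. Pure geometry sketch; the 1.0 mm gap is an assumption taken from the pad thicknesses mentioned elsewhere in the thread:

```python
# How much area does a 1 cm putty ball cover once squished flat
# to a given block-to-component gap?
import math

def covered_area_mm2(ball_diameter_mm: float, gap_mm: float) -> float:
    """Area a sphere of putty covers when flattened to a uniform thickness."""
    volume = math.pi / 6 * ball_diameter_mm ** 3  # sphere volume, mm^3
    return volume / gap_mm

area = covered_area_mm2(ball_diameter_mm=10.0, gap_mm=1.0)
print(f"~{area:.0f} mm^2, i.e. roughly a {math.sqrt(area):.0f} mm square patch")
```

That works out to roughly a 23 mm square patch per ball at a 1 mm gap, comfortably more than one GDDR6X package, which is why one small ball per chip tends to be plenty.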


----------



## gfunkernaught

J7SC said:


> I use TG 10 (DigiKey) and it seems to work great, at least for the three+ months I have it in use on two GPUs, including the 3090. I just made little balls out of it (after cleaning hands w/ isopropyl alcohol) w/ approx 1 cm diameter - it will squish and conform anyways, as long as you don't go overboard re. the amount and hinder die contact.
> 
> On the 3090, I used it on the VRAM front and back, on the 6900XT, I also use it on some of the power phases in addition to its VRAM...w/ the 'little ball method', you just place them on the VRAM on on side, then mount the PCB to the block.


You even used it on the die, huh? How do you figure out the amount to use in places where there are specific thickness requirements? For my Trio block (front), EK uses 1mm pads, and the back uses 1mm, 1.5mm, and 2mm.


----------



## J7SC

gfunkernaught said:


> You even used it on the die huh? How do you figure the amount to use in places where there are specific thickness requirements? For my Trio block (front), ek uses 1mm pads, and the back uses a 1mm, 1.5mm, and 2mm.


No, not on the die!! There, I used Gelid GC Extreme as it's a bit thicker.


----------



## ManniX-ITA

gfunkernaught said:


> How do you figure the amount to use in places where there are specific thickness requirements?


The putty will just expand and overflow into the adjacent area; just be sure there's enough space in the surroundings.
If it can't overflow, it could add a gap between the block and the PCB, which will cause poor die contact.
Since it's very soft, unless you really put in too much in a tight spot the risk is very low.


----------



## gfunkernaught

J7SC said:


> No, not on the die !! There, I used Gelid GC Extreme as it a bit thicker.


Oh ok idk why I thought that.


----------



## gfunkernaught

@J7SC 
Is this what you use?





TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey (www.digikey.com)




Also when applying to the back of the card, how do you apply the putty to where the vrms are? Like instead of a long pad strip, do you apply like several dots of putty in a line?


----------



## J7SC

gfunkernaught said:


> @J7SC
> Is this what you use?
> 
> 
> 
> 
> 
> TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey (www.digikey.com)
> 
> 
> 
> 
> Also when applying to the back of the card, how do you apply the putty to where the vrms are? Like instead of a long pad strip, do you apply like several dots of putty in a line?


...When I did the VRM area on the back of the 6900XT, I used the ball shape again (same diameter, per above) - seemed to work great. For the Strix 3090, I used Thermalright pads w/ MX5 on top for the VRM front and back, putty on the VRAM, and Gelid GC Extreme for the die.


----------



## gfunkernaught

J7SC said:


> ...When I did the VRM area on the back of the 6900XT, I used the ball shape again (same diameter, per above) - seemed to work great . For the Strix 3090, I used Thermalright pads w/MX5 on top for the VRM front and back, putty on the VRAM and Gelid OC Extreme for the die.


Mind just doing a markup on this pic so I can get a better idea of where exactly to put the putty? For the VRMs I mean. The VRAM and anything with a flat/square/limited surface is easy. I put a few circles of what I think you're talking about.


----------



## J7SC

gfunkernaught said:


> Mind just doing a markup on this pic so I can get a better idea of where exactly to put the putty? For the VRMs I mean. The VRAM and anything with a flat/square/limited surface is easy. I put a few circles of what I think you're talking about.
> View attachment 2535148


...sorry, sporadic connection today as home office network w/ fixed and dynamic IPs is getting upgraded. As mentioned, I only did the VRM backside of the 6900XT, but it's equivalent to the pic below. Thermal putty balls will squish anyways and cover more than a single row.


----------



## gfunkernaught

J7SC said:


> ...sorry, sporadic connection today as home office network w/ fixed and dynamic IPs is getting upgraded. As mentioned, I only did the VRM backside of the 6900XT, but it's equivalent to the pic below. Thermal putty balls will squish anyways and cover more than a single row.
> 
> View attachment 2535151


All good homie. I see. So by putting dots of putty next to each other, the compression will cause them to mesh together and form a strip like that? I also thought about the backs of the 8-pin terminals where the pins are soldered. That gets real warm, esp at 550W+. I could use putty there too, but I think a standard strip of tape will do. This way some of that heat will get absorbed into the backplate.


----------



## Carillo

Carillo said:


> In total, about 10 cards were sold in Norway March 2021, (as far as I have heard) and at the time I unfortunately did not get the opportunity to buy one, but today I was so lucky to pick up a second hand 3090 HOF  Better late and never
> View attachment 2534960
> View attachment 2534961


So, anyone here have the same card and would like to share/link OC results? Any voltage control hardware and/or software for this card?


----------



## J7SC

gfunkernaught said:


> All good homie. I see. So by putting dots of putty next to each other, the compression will cause them to mesh together and form a strip like that?  I also thought about the 8-pin terminal backs where the pins are soldered. That gets real warm esp at 550w+. I could use putty there too but I think a standard strip of tape will do. This way some of that heat will get absorbed into the backplate.


...'merged oval dots / semi-string' is what resulted on the 6900XT back VRM area. I suppose you could also just roll the thermal putty up into a sausage shape. The key is to take one of the little balls and test-fit, as a) block-to-PCB distances vary by manufacturer and b) components such as VRAM and VRM elements on a given PCB have different heights, though the putty will make up for it anyhow.

Also don't be surprised if you see a bit of oil sweat out around the putty after a few weeks' use; that is normal (and non-conductive) as it sets.

FYI, I did apply putty to the area below the three PCIe 8-pin connectors on both the 6900XT and the Strix 3090...


----------



## KedarWolf

J7SC said:


> I use TG 10 (DigiKey) and it seems to work great, at least for the three+ months I have it in use on two GPUs, including the 3090. I just made little balls out of it (after cleaning hands w/ isopropyl alcohol) w/ approx 1 cm diameter - it will squish and conform anyways, as long as you don't go overboard re. the amount and hinder die contact.
> 
> On the 3090, I used it on the VRAM front and back, on the 6900XT, I also use it on some of the power phases in addition to its VRAM...w/ the 'little ball method', you just place them on the VRAM on on side, then mount the PCB to the block.


Would putty be better on the Strix 3090 VRMs and memory than say Gelid Extreme pads?

Edit: And on the card you did putty on the memory, what are your memory temps?


----------



## J7SC

KedarWolf said:


> Would putty be better on the Strix 3090 VRMs and memory than say Gelid Extreme pads?
> 
> Edit: And on the card you did putty on the memory, what are your memory temps?


For pads, I still use the Thermalright Extreme as I have good experience w/ those and also still have packs ranging from 0.5mm to 2.0mm. Then again, VRAM and some VRM is now putty. Re. putty VRAM temps, I posted this a few days ago. Ambient was 24C. The GPU die has Gelid GC Extreme thermal paste.

@gfunkernaught - GPU VRM temps were 43 C max after those runs


----------



## gfunkernaught

J7SC said:


> ...'merged oval dots / semi string' is what resulted on the 6900XT / back VRM area. I suppose you could also just roll up the thermal putty into a sausage type. Key is to take one of the little balls and test-fit as a.) block-to-pcb component vary my manufacturer and b.) components such as VRAM and VRM elements on a given pcb have different height, though the putty will make up for it anyhow.
> 
> Also don't be surprised if you see a bit of oil sweat out around the putty after a few weeks use, that is normal (and non-conductive) as it sets.
> 
> FYI, I did apply putty to the area below the three PCIe 8-pin connectors on both the 6900XT and the Strix 3090...
> View attachment 2535206


Ah gotcha. Ok cool, now I have a better understanding of how to apply this stuff. I ordered a 50g tub. Regarding the 8-pin terminals, since the EK block doesn't make contact with the front, I can't put putty there. The backplate, however, does sit within 1-2mm of those back solder points. Can't wait to try. Still sticking with LM for the core tho, it has proven to be excellent. I've been having fun draining my system: first I drain with gravity, then hook up a strong electric blower to pump most of the water out, just enough to disco the GPU so I can take it out.


----------



## gfunkernaught

J7SC said:


> For pads, I still use the Thermalright Extreme as I have good experience w/th those and also still have packages ranging from 0.5mm to 2.0 mm. Then again, VRAM and some VRM is now putty. Re. putty VRAM temps, posted this a few days ago. Ambient was 24 C. GPU die has Gelid GC Extreme thermal paste.
> 
> @gfunkernaught - GPU VRM temps were 43 C max after those runs
> View attachment 2535217


With my half-a$$ed mount I got 48c for the vram.


----------



## J7SC

gfunkernaught said:


> With my half-a$$ed mount I got 48c for the vram.


'grats, that's excellent! Adjusted for ambient (24C per above), mine is in the same ballpark. In any event, the drastic drop on my setup meant I decided to skip the active backplate altogether.


----------



## gfunkernaught

J7SC said:


> 'grats  That's excellent ! Adjusted for ambient (24 C per above), mine is in the ballpark. In any event, the drastic drop on my setup meant that I decided to skip active backplate altogether.


Your backplate is passive? Damn dude! My ambient at the time of measurement was 22c. My water temp on full load over time is still 30c. I'm hoping with a better mount and this putty I can dump more heat into the loop, it looks like I have room!


----------



## J7SC

gfunkernaught said:


> Your backplate is passive? Damn dude! My ambient at the time of measurement was 22c. My water temp on full load over time is still 30c. I'm hoping with a better mount and this putty I can dump more heat into the loop, it looks like I have room!


...yup, passive, but w/a big heatsink on the back. Either way, that putty works great on VRAM, doesn't it ?!


----------



## gfunkernaught

J7SC said:


> ...yup, passive, but w/a big heatsink on the back. Either way, that putty works great on VRAM, doesn't it ?!


Word!


----------



## EarlZ

J7SC said:


> ...yup, passive, but w/a big heatsink on the back. Either way, that putty works great on VRAM, doesn't it ?!


Pics?


----------



## J7SC

EarlZ said:


> Pics?


top 6900XT, bottom Strix 3090 - same putty / heatsink treatment for both









3090 at bottom right


----------



## gfunkernaught

J7SC said:


> top 6900XT, bottom Strix 3090 - same putty / heatsink treatment for both
> View attachment 2535239
> 
> 
> 3090 at bottom right
> View attachment 2535240


I don't think there are enough GPUs in there 😂


----------



## J7SC

gfunkernaught said:


> I don't think there are enough GPUs in there 😂


I thought the same...


Spoiler



...this is 'Raven B', another work-plus-play, single-'case' build...it has three GPUs - 2x 2080 Ti upfront and a 980 Classified on the back


















Then again, you can order this...


Spoiler


----------



## EarlZ

J7SC said:


> top 6900XT, bottom Strix 3090 - same putty / heatsink treatment for both
> View attachment 2535239
> 
> 
> 3090 at bottom right
> View attachment 2535240


Those are some okay-looking heatsinks! I guess it's okay to cover the rear GPU cut-outs? I'm gonna look for some and add them to my 3090 too!


----------



## yzonker

EarlZ said:


> Those are some okay-looking heatsinks! I guess it's okay to cover the rear GPU cut-outs? I'm gonna look for some and add them to my 3090 too!


Yea those work well (heatsink). I have one on my 3090 also. Sorry no fancy RGB.


----------



## J7SC

EarlZ said:


> Those are some okay-looking heatsinks! I guess it's okay to cover the rear GPU cut-outs? I'm gonna look for some and add them to my 3090 too!





yzonker said:


> Yea those work well (heatsink). I have one on my 3090 also. Sorry no fancy RGB.
> 
> View attachment 2535325


As mentioned before, the rear cut-out on the stock Strix back-plate actually has thermal putty in it now, transferring heat from the SMTs on the rear of the die / PCB to the heatsink.

I chose that route for the 3090 Strix after I noticed the 6900XT's stock config: the die area of its PCB connects to its solid (no cut-out) stock back-plate via a 2mm or so thick, white and very soft thermal pad (Fuji?). Thus I ended up skipping the EKWB 3090 custom back-plate...which btw is actually solid all the way through (no cut-out over the rear of the 3090 die).

@yzonker ...there's no RGB on the back-plate or heatsink..but I might add some there later


----------



## gfunkernaught

yzonker said:


> Yea those work well (heatsink). I have one on my 3090 also. Sorry no fancy RGB.
> 
> View attachment 2535325


When I saw how cheap those heatsinks are, I looked at my expensive m.2 heatsinks like 😡


----------



## J7SC

gfunkernaught said:


> When I saw how cheap those heatsinks are, I looked at my expensive m.2 heatsinks like 😡


...and you get two per pack from Amazon


----------



## gfunkernaught

J7SC said:


> ...and you get two per pack from Amazon


Ok ow sir ow 😂


----------



## J7SC

gfunkernaught said:


> Ok ow sir ow 😂


FYI, I had ordered 12 (now unused) M.2 heatsinks myself, which cost way more...


----------



## gfunkernaught

J7SC said:


> FYI, I had ordered 12 (now unused) M.2 heatsinks myself, which cost way more...


It's like "why didn't I think to buy the better cheaper heatsinks before buying the m.2 sinks??"


----------



## yzonker

gfunkernaught said:


> It's like "why didn't I think to buy the better cheaper heatsinks before buying the m.2 sinks??"


I think we all went through a research and discovery process in regards to 3090 mem cooling. I've got 2 previous heatsinks in a drawer.


----------



## gfunkernaught

yzonker said:


> I think we all went through a research and discovery process in regards to 3090 mem cooling. I've got 2 previous heatsinks in a drawer.


tbh when I was seeing all these legit heatsink posts, I felt like the only one that bought the m.2 sinks. Feels good to be part of the club lol. The two m.2 drives I have in my PC get warm, so I'll use the heatsinks for those drives.


----------



## J7SC

geriatricpollywog said:


> It looks really good with DX12 enabled, but I'm seeing the same stutters I saw on launch. The engine is only using 5-6 gb of RAM and VRAM, which is a shame when I have 32 and 24 gb respectively.


FYI, DX12 stuttering is still there but noticeably reduced, even when flying in a new area that isn't cached (Shanghai), after we finally got a 1 Gb/s symmetric connection. I don't think Microsoft's servers upload at that speed, but FS2020 is definitely faster / smoother than before. Looking sideways / backwards / downwards has no delay / stutters at all now.

EDIT: ...here are some 'pics' of FS2020 DX12 on OLED C1 at 4K HDR...Because it is HDR and also w/LG screen overlay to show 'Game Mode', those won't be captured correctly (HDR) or at all (overlay) by Win 10...so these are crappy angled cellphone-type camera shots, instead of screen grabs. Strix 3090 was running at between 2205 to 2190 (effective) on KPE 520W vbios.


----------



## Carillo

Hi! What is the normal Port Royal score these days with 450w bios on air? Is there anyone here who would like to share their result? With and without Rebar


----------



## gfunkernaught

@J7SC 
My putty just came in. 50g is a lot smaller than I thought. Is 50g enough for a single application? VRAM, vrm, front and back?


----------



## J7SC

gfunkernaught said:


> @J7SC
> My putty just came in. 50g is a lot smaller than I thought. Is 50g enough for a single application? VRAM, vrm, front and back?


...yes, but do VRAM front and back first; VRM is less critical (though still important). A good set of thermal pads plus some MX4/5 or so will also work well


----------



## mrpeters

Hi all! I have an RTX 3090 AORUS Xtreme 24G which has had the pads replaced and been repasted; it runs at around 60C core, 84C VRAM mining ETH at 125 MH/s, but when trying to dial in a solid gaming overclock my performance cap is always voltage. Never power or thermals.

Are there any other compatible BIOS for this card that I can try to gain some gaming performance?


----------



## yzonker

mrpeters said:


> Hi all! I have an RTX 3090 AORUS Xtreme 24G which has had the pads replaced and been repasted; it runs at around 60C core, 84C VRAM mining ETH at 125 MH/s, but when trying to dial in a solid gaming overclock my performance cap is always voltage. Never power or thermals.
> 
> Are there any other compatible BIOS for this card that I can try to gain some gaming performance?


None that will increase voltage.


----------



## gfunkernaught

J7SC said:


> ...yes, but do VRAM front and back first; VRM is less critical (though still important). A good set of thermal pads plus some MX4/5 or so will also work well


I managed to have some putty left over. I did VRAM, VRM on the front, and VRAM at the back. For the back VRM I used the Thermalright pads without paste. 15min Quake 2 RTX run and look at the max readings:








Looks like I either did it wrong or EK's blocks aren't meant for extreme performance which we kind of already knew. Water temp maxed out as usual at 30c, ambient 21-22c. Welp these results aren't terrible and at least I can run my fans at 50%.


----------



## des2k...

gfunkernaught said:


> I managed to have some putty left over. I did VRAM, VRM on the front, and VRAM at the back. For the back VRM I used the Thermalright pads without paste. 15min Quake 2 RTX run and look at the max readings:
> View attachment 2535647
> 
> Looks like I either did it wrong or EK's blocks aren't meant for extreme performance which we kind of already knew. Water temp maxed out as usual at 30c, ambient 21-22c. Welp these results aren't terrible and at least I can run my fans at 50%.


Looks fine to me, water 30c, gpu 45c for 600w+ for quake, that's 15c delta.
I haven't tested mine for a while, with my EK I was ~13c delta.

VRAM temp, is that external sensor ? Because I know internal sensor (memory junction temp) is ~65c+ for games.
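The deltas being traded here compare more cleanly as thermal resistance (delta-T per watt), since the raw delta scales with load. A rough sketch using the numbers quoted above (15c at ~600w vs. the ~13c lapped-block figure):

```python
# Compare water blocks by thermal resistance (delta-T per watt) instead of raw
# delta, since the delta grows with load. Numbers are the ones quoted in-thread.
def c_per_watt(die_c, water_c, power_w):
    """Die-to-water temperature rise per watt of heat dumped into the block."""
    return (die_c - water_c) / power_w

ek_stock  = c_per_watt(45, 30, 600)   # 15 C delta at ~600 W -> 0.025 C/W
ek_lapped = 13 / 600                  # ~13 C delta quoted for the lapped block
print(round(ek_stock, 4), round(ek_lapped, 4))
```

At a 400w game load, a 0.025 C/W block predicts only a ~10c delta, which is why the gap looks smaller at lower power.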


----------



## gfunkernaught

des2k... said:


> Looks fine to me, water 30c, gpu 45c for 600w+ for quake, that's 15c delta.
> I haven't tested mine for a while, with my EK I was ~13c delta.
> 
> VRAM temp, is that external sensor ? Because I know internal sensor (memory junction temp) is ~65c+ for games.


I renamed it so it would fit better on the OSD. It is the mem junction temp sensor.

Didn't you lap your ek block?


----------



## des2k...

gfunkernaught said:


> I renamed it so it would fit better on the OSD. It is the mem junction temp sensor.
> 
> Didn't you lap your ek block?


Yep, did a quick lap on the core part of the block & lowered the standoffs (went from 1mm pads to 0.5mm pads for the front)

Didn't even know you could get 48c for memory. Is that because of that thermal putty?

I didn't really work much on it; my Zotac core OC / mem OC is already at max.


----------



## gfunkernaught

How did you lower the standoffs? Replace or grind? 

Yeah, I was getting 48c with the pads as well as the putty. But I think more heat is being absorbed, because the air coming out of the rads is warmer now that I've used the putty. But the 1000D only has two 140mm exhausts, and I have two 120s at the top-front as intake. Those 140s can't keep up with the 28 rad fans.
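For sizing case exhaust against a heat load like this, the standard back-of-envelope is airflow = power / (air density × specific heat × allowed air temperature rise). A sketch with the ~600w figure from the runs above and an assumed 5 C air rise (both illustrative):

```python
# Back-of-envelope: airflow needed to carry a heat load out of a case/room at
# an allowed air temperature rise. Q = P / (rho * cp * dT).
RHO_AIR = 1.2      # kg/m^3 at ~20 C
CP_AIR  = 1005.0   # J/(kg*K)

def airflow_cfm(power_w, delta_t_c):
    m3_per_s = power_w / (RHO_AIR * CP_AIR * delta_t_c)
    return m3_per_s * 2118.88   # convert m^3/s -> CFM

# ~600 W GPU load, exhaust air allowed to run 5 C warmer than intake:
print(round(airflow_cfm(600, 5)))   # -> 211 (CFM)
```

Two 140mm exhausts at low static pressure deliver well under that, which matches the "140s can't keep up" observation.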


----------



## des2k...

gfunkernaught said:


> How did you lower the standoffs? Replace or grind?


grind, used weight of the block with flat surface / sand paper


----------



## mrpeters

yzonker said:


> None that will increase voltage.


Thank you. Am I just running at the max capability of the card then, or is something like my PSU (CORSAIR RM1000x 1000 Watt) a bottleneck now?


----------



## J7SC

gfunkernaught said:


> I managed to have some putty left over. I did VRAM, VRM on the front, and VRAM at the back. For the back VRM I used the Thermalright pads without paste. 15min Quake 2 RTX run and look at the max readings:
> View attachment 2535647
> 
> Looks like I either did it wrong or EK's blocks aren't meant for extreme performance which we kind of already knew. Water temp maxed out as usual at 30c, ambient 21-22c. Welp these results aren't terrible and at least I can run my fans at 50%.


I switched from the EK block for the Strix 3090 to the Phanteks one which clearly has a much larger micro fin area over the die. My best guess is a gain of about 4 C to 5 C 'net'...net means also accounting for location and ambient changes, along with the thermal putty application and the extra heatsink. As an aside, I really like Phanteks these days and use their CPU blocks as well (for example the 5950X at just under 200 W > max 69 C with 24 C ambient).

On the GPU side, are you running a single big loop, or separate loops for CPU and GPU? It won't make much difference, maybe 3 C +/-, but it might allow you to shuffle rads and tubing a bit. As shown before, I use a TT Core P8 / dual mobo with a separate, mobile cooling table for both systems, connected via Koolance QD4s. All fans there are Arctic P12 PST in push/pull, allowing for very quiet operation.


----------



## des2k...

mrpeters said:


> Thank you. Am I just running at the max capability of the card then, or is something like my PSU (CORSAIR RM1000x 1000 Watt) a bottleneck now?


There are a few games out there that will push an RTX 3090 to its limits.

Have you tried Quake 2 RTX from Steam? If you can hold 2115+ core in that game, you're most likely at the max of the card; power usage goes to 600w+


----------



## yzonker

mrpeters said:


> Thank you. Am I just running at the max capability of the card then, or is something like my PSU (CORSAIR RM1000x 1000 Watt) a bottleneck now?


Power supply isn't a problem. Same one I'm running with my 3090 and KP 1kw bios.


----------



## gfunkernaught

J7SC said:


> I switched from the EK block for the Strix 3090 to the Phanteks one which clearly has a much larger micro fin area over the die. My best guess is a gain of about 4 C to 5 C 'net'...net means also accounting for location and ambient changes, along with the thermal putty application and the extra heatsink. As an aside, I really like Phanteks these days and use their CPU blocks as well (for example the 5950X at just under 200 W > max 69 C with 24 C ambient).
> 
> On the GPU side, are you running a single big loop, or separate loops for CPU and GPU ? It won't make much difference, may be 3 C + -, but it might allow you to shuffle rads and tubing a bit. As shown before, I use a TT Core P8 / dual mobo with a separate, mobile cooling table for both systems, connected via Koolance QD4s. All fans there are ArcticP12 pst in push/pull allowing for very quiet operation.
> View attachment 2535679


I have my cpu and gpu on the same loop. Can the putty be reused? I want to repaste the core.

Also does anyone have a link to an applicator/gun to apply the putty?


----------



## J7SC

gfunkernaught said:


> I have my cpu and gpu on the same loop. Can the putty be reused? I want to repaste the core.
> 
> Also does anyone have a link to an applicator/gun to apply the putty?


I've re-used some putty after about a week of use w/o ill effects. I haven't seen a putty applicator/gun, but then again, I haven't looked for one either...


----------



## mrpeters

des2k... said:


> There's a few games out there that will push RTX 3090 to the limits.
> 
> Have you tried Quake 2 RTX from steam ? If you can hold 2115+ core for that game, most likely you're at max of the card, power usage goes to 600w+


I haven't. 3DMark benchmarks, F1 2021 with DLSS, Forza Horizon 5 and iRacing. I'll see if there is a demo perhaps for Quake 2 RTX.

I know my power limit is 450w though because I'm on the latest Gigabyte BIOS for the Xtreme, so I certainly wouldn't hit 600W. That's why I wasn't sure if I might benefit from another BIOS. Not sure the 1000W Kingpin is compatible.

Here's my best Time Spy Extreme benchmark so far. Looks like I am only averaging a 2000MHz clock.









I scored 9 870 in Time Spy Extreme (Intel Core i9-11900K, NVIDIA GeForce RTX 3090 x 1, 32638 MB, 64-bit Windows 10): www.3dmark.com





Appreciate the feedback on Quake, and a target clock to shoot for! 

Look forward to any thoughts on a BIOS that might boost power limit and also help!

Other components are Z590 AORUS Master board, 11900K CPU (360mm EVGA AIO cooled), and Corsair Dominator Platinum RGB 32GB 3200MHz C16 memory. 

Based on comparable 3DMark configurations, I seem to be underperforming, so I'd love to find my bottleneck.


----------



## gfunkernaught

J7SC said:


> I've re-used some putty after about a week of use w/o ill effects. I haven't seen a putty applicator/gun, but then again, I haven't looked for one either...


Ok. I asked because of how paste-like it is, vs the rigid pads. I saw some pics of a proper application of the putty, and yeah I may have put too much. They were like chocolate chip drops, mine were blobs.


----------



## Carillo

So I got my 3090 HOF, but there is something strange going on with this card. The first thing I noticed is rather low Port Royal scores. I've been trying the 1000w Galax (no rebar), the Kingpin 1000w rebar, and the stock bios. The air in Norway is cold, so the card was hitting only 54C average with 1025mV. The scores I'm getting are around 14700 with and without rebar @ 2160mhz... And here's the kicker: the score stays at around 14700 no matter what memory OC I set; only when pushing beyond a +1000mhz OC does the score get lower. Memory junction temp is only 55 degrees, and that's also suspiciously low... So I downloaded NiceHash (the only GPU memory performance test I could think of), and ETH daggerhashimoto is only 100MH/s... the same behavior as Port Royal. No matter what I do with the memory slider, the rate stays between 100-103mh/s. I have taken the card apart, and there are no pads missing... Strange thing: I've been gaming Forza Horizon 5 for hours without issues, but at 20% lower FPS than my friend's 3090... Getting pretty frustrated, anyone have an idea? Could one or more of the memory chips be damaged without creating artifacts or other issues when gaming?
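For context on the memory numbers: Ethash is memory-bandwidth-bound (each hash touches 64 DAG pages of 128 bytes), so a rough hashrate ceiling falls straight out of memory bandwidth. A sketch with the 3090's stock figures; real cards land below this ceiling, so a rate near 100 MH/s that ignores the memory slider suggests the memory clock offset may not actually be applying:

```python
# Ethash is memory-bound: each hash reads 64 DAG pages of 128 bytes. A rough
# hashrate ceiling is memory bandwidth divided by bytes touched per hash.
def ethash_ceiling_mhs(mem_mtps, bus_bits):
    bandwidth_bps = mem_mtps * 1e6 * bus_bits / 8   # bytes per second
    bytes_per_hash = 64 * 128                       # 64 DAG accesses x 128 B
    return bandwidth_bps / bytes_per_hash / 1e6     # MH/s

# Stock 3090: 19504 MT/s effective on a 384-bit bus (~936 GB/s)
print(round(ethash_ceiling_mhs(19504, 384)))   # -> 114 (MH/s, theoretical)
```

If the +1000 offset were really applying, the ceiling (and the measured rate) should move with it instead of sitting flat at 100-103.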


----------



## mrpeters

Carillo said:


> So I got my 3090 HOF, but there is something strange going on with this card. The first thing I noticed is rather low Port Royal scores. I've been trying the 1000w Galax (no rebar), the Kingpin 1000w rebar, and the stock bios. The air in Norway is cold, so the card was hitting only 54C average with 1025mV. The scores I'm getting are around 14700 with and without rebar @ 2160mhz... And here's the kicker: the score stays at around 14700 no matter what memory OC I set; only when pushing beyond a +1000mhz OC does the score get lower. Memory junction temp is only 55 degrees, and that's also suspiciously low...
> So I downloaded NiceHash (the only GPU memory performance test I could think of), and ETH daggerhashimoto is only 100MH/s... the same behavior as Port Royal. No matter what I do with the memory slider, the rate stays between 100-103mh/s. I have taken the card apart, and there are no pads missing... Strange thing: I've been gaming Forza Horizon 5 for hours without issues, but at 20% lower FPS than my friend's 3090...
> Getting pretty frustrated, anyone have an idea? Could one or more of the memory chips be damaged without creating artifacts or other issues when gaming?


Are you using NiceHash Quick miner? If not, try that and use the mobile app to set "Extreme" mode. 

Will push memory to the max.


----------



## yzonker

Carillo said:


> So I got my 3090 HOF, but there is something strange going on with this card. The first thing I noticed is rather low Port Royal scores. I've been trying the 1000w Galax (no rebar), the Kingpin 1000w rebar, and the stock bios. The air in Norway is cold, so the card was hitting only 54C average with 1025mV. The scores I'm getting are around 14700 with and without rebar @ 2160mhz... And here's the kicker: the score stays at around 14700 no matter what memory OC I set; only when pushing beyond a +1000mhz OC does the score get lower. Memory junction temp is only 55 degrees, and that's also suspiciously low...
> So I downloaded NiceHash (the only GPU memory performance test I could think of), and ETH daggerhashimoto is only 100MH/s... the same behavior as Port Royal. No matter what I do with the memory slider, the rate stays between 100-103mh/s. I have taken the card apart, and there are no pads missing... Strange thing: I've been gaming Forza Horizon 5 for hours without issues, but at 20% lower FPS than my friend's 3090...
> Getting pretty frustrated, anyone have an idea? Could one or more of the memory chips be damaged without creating artifacts or other issues when gaming?


Do the monitoring apps show the mem speed is actually changing when you change the offset?

Edit: I guess you did say the score eventually falls off though.


----------



## J7SC

gfunkernaught said:


> Ok. I asked because of how paste-like it is, vs the rigid pads. I saw some pics of a proper application of the putty, and yeah I may have put too much. They were like chocolate chip drops, mine were blobs.


...I blobbed the putty myself; temps are great though


----------



## gfunkernaught

I reapplied the LM and left the putty as is, with some minor adjustments, like reusing some spilled-over putty that got squashed out of place. While I was reassembling the GPU and blocks, I found that one screw attaching the acrylic to the metal plate of the block was loose; it rattled and felt very loose when I touched it. So I tightened all the screws on the block. I think I may have improved my gpu<>water delta temp. Here's another 10-15min Quake 2 RTX run. Once again, water temp peaked at 30c.


----------



## J7SC

gfunkernaught said:


> I reapplied the LM and left the putty as is, with some minor adjustments, like reusing some spilled over putty that got squashed out of place. While I was reassembling the gpu and blocks, I found one screw that attaches the acrylic to the metal plate of the block was loose, like made a noise and felt very loose when I touched it. So I tightened all the screws on the block. I think I may have improved my gpu<>water delta temp. Here's another 10-15min quake 2 rtx run. Once again, water temp peaked at 30c.
> View attachment 2535836


Grrrreat temps for 600+ W ! 

Btw, I think I posted before (somewhere) that I never ever had a new GPU block where all the screws were tightened correctly...these days, checking those is the first thing I do after opening the box and the initial visual inspection.


----------



## WilliamLeGod

Carillo said:


> So I got my 3090 HOF, but there is something strange going on with this card. The first thing I noticed is rather low Port Royal scores. I've been trying the 1000w Galax (no rebar), the Kingpin 1000w rebar, and the stock bios. The air in Norway is cold, so the card was hitting only 54C average with 1025mV. The scores I'm getting are around 14700 with and without rebar @ 2160mhz... And here's the kicker: the score stays at around 14700 no matter what memory OC I set; only when pushing beyond a +1000mhz OC does the score get lower. Memory junction temp is only 55 degrees, and that's also suspiciously low...
> So I downloaded NiceHash (the only GPU memory performance test I could think of), and ETH daggerhashimoto is only 100MH/s... the same behavior as Port Royal. No matter what I do with the memory slider, the rate stays between 100-103mh/s. I have taken the card apart, and there are no pads missing... Strange thing: I've been gaming Forza Horizon 5 for hours without issues, but at 20% lower FPS than my friend's 3090...
> Getting pretty frustrated, anyone have an idea? Could one or more of the memory chips be damaged without creating artifacts or other issues when gaming?


what is the power limit in stock bios?


----------



## gfunkernaught

J7SC said:


> Grrrreat temps for 600+ W !
> 
> Btw, I think I posted before (somewhere) that I never ever had a new GPU block where all the screws were tightened correctly...these days, checking those is the first thing I do after opening the box and the initial visual inspection.


Ain't that something? I guess all the in-and-out-of-the-case action must have made it loose. But that is an excellent precaution for new blocks.

I was playing RDR2 just now, avg 500w, water temp got to 32c and core 42c and change. So there is definitely more heat being dumped into the loop which is good. VRAM never went above 44c.


----------



## mrpeters

WilliamLeGod said:


> what is the power limit in stock bios?


Looks like stock is 420W.

Could get to 630W with the 1000W BIOS.









GALAX GeForce RTX 3090 Hall Of Fame (HOF) Edition GPU Benched with Custom 1000 W vBIOS (www.techpowerup.com): designed for extreme overclocking, the card is built with a 12-layer PCB, a 26-phase VRM power delivery configuration, and three 8-pin power connectors.


----------



## des2k...

gfunkernaught said:


> I reapplied the LM and left the putty as is, with some minor adjustments, like reusing some spilled over putty that got squashed out of place. While I was reassembling the gpu and blocks, I found one screw that attaches the acrylic to the metal plate of the block was loose, like made a noise and felt very loose when I touched it. So I tightened all the screws on the block. I think I may have improved my gpu<>water delta temp. Here's another 10-15min quake 2 rtx run. Once again, water temp peaked at 30c.
> View attachment 2535836


Very nice, that's pretty much what I get for delta with my lapped EK block & normal paste. A 13c delta is pretty good for 600w; especially if you play at 400w-500w, that's around a 7c+ delta

The only negative thing about a good delta is that heat transfer is so efficient, you need good air circulation in your room.


----------



## gfunkernaught

des2k... said:


> The only negative thing about a good delta is that heat transfer is so efficient, you need good air circulation in your room.


That's exactly what's happening now. The water is getting warmer, and thus the core temp goes up with it, since the delta doesn't change much. I think the back of the card may be feeding warm water to the front. Unfortunately I can't run the front and back blocks in parallel, but I could separate them. I could also just feed the front before the back.


----------



## geriatricpollywog

I managed to get 2nd place for 3090 (behind OGS) in Fire Strike with my EVGA Kingpin Hydro Copper block, helped along by a MO-RA and some cold weather. I look forward to seeing what Optimus KPE blocks can do as Santa makes his rounds.

3DMark Fire Strike Hall Of Fame


----------



## J7SC

...quick update after a couple of months of new Strix 3090 block, thermal putty and passive back-heatsink. This was with 23 C ambient and the regular Asus vbios instead of the KPE 520 on the other vbios chip.

...temps and deltas seem steady, and note the nominal vs effective clocks. With KPE 520, I can get past 2265. @gfunkernaught, see the VRM temps at the bottom, helped by thermal putty in that area as well.


----------



## gfunkernaught

J7SC said:


> ...quick update after a couple of months of new Strix 3090 block, thermal putty and passive back-heatsink. This was with 23 C ambient and the regular Asus vbios instead of the KPE 520 on the other vbios chip.
> 
> ...temps and deltas seem steady, and note the nominal vs effective clocks. With KPE 520, I can get past 2265. @gfunkernaught, see the VRM temps at the bottom, helped by thermal putty in that area as well.
> View attachment 2536543


Nice! Was that during a bench run?


----------



## J7SC

gfunkernaught said:


> Nice! Was that during a bench run?


Yup, Superposition 4K Optimized


----------



## jura11

J7SC said:


> ...quick update after a couple of months of new Strix 3090 block, thermal putty and passive back-heatsink. This was with 23 C ambient and the regular Asus vbios instead of the KPE 520 on the other vbios chip.
> 
> ...temps and deltas seem steady, and note the nominal vs effective clocks. With KPE 520, I can get past 2265. @gfunkernaught, see the VRM temps at the bottom, helped by thermal putty in that area as well.
> View attachment 2536543


Great results @J7SC

A friend has the same waterblock on his Strix, and his temperatures are a lot worse: 80-90s on the VRAM and core temperatures in the high 40s to low 50s

Do you remember what size thermal pads are on this Phanteks waterblock? I didn't have time to measure the pads, but I'll probably get him thermal putty

Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> Great results @J7SC
> 
> A friend has the same waterblock on his Strix, and his temperatures are a lot worse: 80-90s on the VRAM and core temperatures in the high 40s to low 50s
> 
> Do you remember what size thermal pads are on this Phanteks waterblock? I didn't have time to measure the pads, but I'll probably get him thermal putty
> 
> Hope this helps
> 
> Thanks, Jura


Thanks...the rest of the cooling system (3x D5, 1320x60+ multi-core rad space) also helps for sure. As to the pads for the Phanteks Strix block, there are definitely two different sizes involved; one looks like 2mm, but I'm not sure about the other, either 0.5mm or 1mm. When in doubt, putty  !


----------



## jura11

J7SC said:


> Thanks...the rest of the cooling system (3x D5, 1320x60+ multi-core rad space) also helps for sure. As to the pads for the Phanteks Strix block, there are definitely two different sizes involved, one of which looks like 2mm, but not sure about the other one, either 0.5mm or 1mm. When in doubt, putty  !


Hi @J7SC

My friend's loop is pretty similar to mine: 4x 360mm radiators plus a MO-RA3 360mm, and he is running 4x D5 pumps in that loop

We literally went through everything and checked everything twice; removing the side, top or front panel didn't change temperatures much, 3-4°C at most

In a cold ambient like we are seeing right now, his temperatures are still in the mid to high 40s

After New year I will redo his loop and see if we can improve it, if not I will drop there my Bykski waterblock and compare both 

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

J7SC said:


> Yup, Superposition 4K Optimized


Superposition only used 449w? Ah wait, I misread: 4k. I'm used to running it at 8k. I'm curious to see how high I can clock now. Probably not any higher than 2190mhz; I know this because I tested it in my garage last winter. Not bad for a mid-range 3090. That Strix is a beast though.


----------



## J7SC

gfunkernaught said:


> Superposition only used 449w? Ah wait I misread, 4k, I'm used to running it at 8k. I'm curious to see how high can I clock now. Probably not any higher than 2190mhz, I know this because I tested it in my garage last winter. Not bad for a mid-range 3090. That strix is a beast though.


...that Strix really likes the KPE 520W bios and the voltage slider all the way to the right (typically = 1.083v / max on HWInfo). But for regular daily duties (including FS2020), I stay on the normal Strix bios / default voltage slider.

Speaking of FS2020, apart from the still-stuttery DX12 beta, they also added that electric German Volocopter and an F18 (no weapons firing though)


----------



## kryptonfly

J7SC said:


> ...quick update after a couple of months of new Strix 3090 block, thermal putty and passive back-heatsink. This was with 23 C ambient and the regular Asus vbios instead of the KPE 520 on the other vbios chip.
> 
> ...temps and deltas seem steady, and *note the nominal vs effective clocks*. With KPE 520, I can get past 2265. @gfunkernaught, see the VRM temps at the bottom, helped by thermal putty in that area as well.
> View attachment 2536543


That screenshot is at idle, so the effective clock can't really be judged there; it's normal for the effective clock to reach the nominal clock once load starts. Here's my 4k optimized:



Bykski WB, passive backplate, 5x360mm, just 1 Phobya pump (link), 2220mhz locked with SMI @ 1v (big thanks to @SoldierRBT who taught me this feature! We can mitigate (cancel) GPU boost!), true effective clock ~2175mhz all along, power draw around ~520W, max 537W with the 1000W XOC bios + 4 mΩ shunts. The power readings were all fine-tuned with the stock bios without shunts in a particular 3D scene, and I scaled the reported watts to match with the 1000W bios + shunts, but you seem to consume less...

Do you think my total power is accurate in this 4k optimized run => 2175mhz effective @ 1v, ~520W? This GPU could do way more, but it's a 2x 8-pin
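On the reported-vs-true power question: a parallel shunt lowers the resistance the controller assumes, so raw readouts understate power by the resistance ratio. A purely illustrative sketch; the 5 mΩ stock value is an assumption (verify on the actual card), and since the readouts above were re-tuned, this shows only the uncorrected scaling:

```python
# Shunt-mod math: the controller computes current from the voltage drop across a
# shunt of assumed resistance. An extra shunt soldered in parallel lowers the real
# resistance, so an uncorrected readout understates true power by R_stock / R_real.
# ASSUMPTION: 5 mOhm stock shunts (common, but verify on the specific card).
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def true_power(reported_w, r_stock=0.005, r_added=0.004):
    r_real = parallel(r_stock, r_added)        # 5 || 4 mOhm = ~2.22 mOhm
    return reported_w * (r_stock / r_real)     # scale factor 2.25x here

print(round(true_power(520)))   # -> 1170 (W) if the readout were left uncorrected
```

A recalibrated readout (as described above) already folds this factor back in, so the two numbers shouldn't be combined.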



Spoiler


----------



## gfunkernaught

...4k Optimized Superposition challenge accepted 

I will post results when I get home from work
😁


----------



## gfunkernaught

J7SC said:


> ...that Strix really likes the KPE 520W bios and the voltage slider all the way to the right (typically = 1.083v / max on HWInfo). But for regular daily duties (including FS2020), I stay on the normal Strix bios / default voltage slider.
> 
> Speaking of FS2020, apart from still stuttery DX12 beta, they also added that electric German Volocopter and a F18 (no weapons firing though)
> View attachment 2536605
> 
> View attachment 2536606


I've noticed that the EVGA 1kW bios actually has some type of efficiency to it. Normally at full load with a +135 MHz core offset and the voltage slider at 0%, my effective clock will eventually settle at 2099-2115 MHz depending on max core temp, and the VID will remain at 1085mv; if load drops below 99%, it drops a bit to 1075mv. If the load is less than about 85%, VID drops to 1068mv and the core clock goes up to around 2120 MHz. That's pretty neat for a b*lls-out extreme OC bios.


----------



## J7SC

kryptonfly said:


> You're at idle, so the effective clock can't be judged there; it's normal for the effective clock to catch up to the nominal clock once a load starts. Here's my 4K Optimized run:
> 
> 
> 
> Bykski WB, passive backplate, 5x360mm rads, just one Phobya pump (link), 2220 MHz locked with SMI @ 1v (big thanks to @SoldierRBT who taught me this feature; it lets you mitigate (cancel) GPU Boost!), true effective clock ~2175 MHz all along, power draw around ~520W, max 537W with the 1000W XOC bios + 4 mΩ shunts. The power readings were all fine-tuned with the stock bios, without shunts, in a particular 3D scene, and I adjusted the reported watts to match with the 1000W bios + shunts, but you seem to consume less...
> 
> Do you think my total power is accurate for this 4K Optimized run => 2175 MHz effective at 1v, ~520W? This GPU could do way more, but it's a 2x8-pin card.
> 
> 
> 
> Spoiler


On effective clocks, the usual delta to max nominal clocks has changed (narrowed) significantly compared to before the recent cooling updates.


----------



## des2k...

Godfall is free on Epic,

It's an AMD-sponsored title; swinging the sword adds an instant 100W (150W without FidelityFX) to power usage

I'm convinced AMD is trying to kill RTX cards; I've never seen a sword, some dust, and light effects jump power by 150W every swing in any other game!


----------



## J7SC

des2k... said:


> Godfall is free on Epic,
> 
> It's an AMD-sponsored title; swinging the sword adds an instant 100W (150W without FidelityFX) to power usage
> 
> I'm convinced AMD is trying to kill RTX cards; I've never seen a sword, some dust, and light effects jump power by 150W every swing in any other game!
> 
> View attachment 2536710


I haven't tried this release yet, but other threads at OCN and on other sites seem to suggest 'NVidia favoritism' instead, including an initial media release whose minimum-requirements-by-resolution table omitted AMD GPUs altogether. Likely, there will be lots of patches coming for PC to fix some of the issues. I'll wait until then to get this game.

Anyway, I've set a 250 fps frame-rate cap in the 3090 driver, as this runs on an LG C1 48 OLED / 120Hz / HDR...some games like FS2020 had reached 600-700+ fps in the load screens and menus, even on the stock Strix vbios with no OC.

In other news, I finally chased all the air bubbles out; they seemed to prefer the vertical GPU blocks... With two loops for a combined 2520x62+ of multi-core rads, a combined 20+ feet of tubing, and up-and-down elevation changes of more than 2.5 feet, there were lots of places for air bubbles to park themselves. It took four or five tries to get it done completely for both systems. Now I can finally finish the 'visuals' re. beautifying the tube runs, cable management, etc...


----------



## des2k...

It doesn't take much to get bubbles stuck everywhere. I remember having to do Prime95 + GPU load and tilt the system for hours with a single pump.

Takes 30 mins now: no heat load, no tilt, with 4 pumps

I do need to flush the thing, though. My loop takes 3 liters


----------



## gfunkernaught

I get bubbles in my front rads. I jiggle them loose by just moving my case side to side rapidly (it's on wheels). They end up with the rest of the air at the top of my res.


----------



## J7SC

des2k... said:


> It doesn't take much to get bubbles stuck everywhere. I remember having to do Prime95 + GPU load and tilt the system for hours with a single pump.
> 
> Takes 30 mins now: no heat load, no tilt, with 4 pumps😂


...yeah, the dual-mobo build has 2x D5 pumps for the 6900 XT and 3x D5 pumps for the 3090 Strix system. That definitely helped; the main issue was the unique 'up-and-down' tubing in the loop...tilt and shake plus pumps on full tilt got much of the air out, but not all...I had to dismount the 3090 in particular and turn it just past horizontal a few times....all's well that ends well.


----------



## gfunkernaught

Here we go..._ReBAR not globally enabled_
4k Superposition, DOF Enabled









8k Superposition, DOF Enabled

















Wishing I'd gotten the acetal front block instead of acrylic. The part at the front where the 8-pin power array sits heats up the acrylic, and I can't use a pad/sink since it won't absorb the heat from the acrylic; I tried already. But even at 550W+, the thing is pretty quiet. I tuned the fan profiles to respond to water temp. This setup is definitely overkill for my particular card, since it's only a mid-ranger. For a Strix/KP/HOF, then yeah, but no more 3090s for me.


----------



## kryptonfly

@gfunkernaught : good results 👌.
562W for 1.10v; I'm at 537W for 1v with 2x8 pins. My GPU seems to eat more, because I tried 1.006v and it's 546W max. Maybe my power readings in HWiNFO64 don't scale well beyond 390W, which is what I used to fine-tune them. But my 1400VA inverter (UPS) beeps around 700W all along in 4K Optimized, so I think I'm not far from the truth: around 250W for the overclocked CPU + system and ~530W for the GPU. I don't think I will try more because of the 2x8 pins, but this GPU has lots of potential! A "simple" Gigabyte Turbo lol


----------



## gfunkernaught

kryptonfly said:


> locked with SMI @1


SMI, that's command line, right? Is there a tutorial?
Btw, which 3090 do you have?


----------



## J7SC

gfunkernaught said:


> Here we go..._ReBAR not globally enabled_
> 4k Superposition, DOF Enabled
> View attachment 2536793
> 
> 
> 8k Superposition, DOF Enabled
> View attachment 2536796
> 
> 
> View attachment 2536798
> 
> Wishing I'd gotten the acetal front block instead of acrylic. The part at the front where the 8-pin power array sits heats up the acrylic, and I can't use a pad/sink since it won't absorb the heat from the acrylic; I tried already. But even at 550W+, the thing is pretty quiet. I tuned the fan profiles to respond to water temp. This setup is definitely overkill for my particular card, since it's only a mid-ranger. For a Strix/KP/HOF, then yeah, but no more 3090s for me.


Nice run, actually ! Keep in mind that Superposition (4K and 8K at least) is sensitive to system memory. Below is a run from February 6 this year (bottom right) when I had the Strix for just a few days and plugged it into my Intel I7-5960X system w/ 32GB of DDR4 3200 / tight RAM. The I7-5960X only ran at 4.45GHz or so. The Strix itself was bone stock and air-cooled, no r_BAR back then.


----------



## gfunkernaught

J7SC said:


> Nice run, actually ! Keep in mind that Superposition (4K and 8K at least) is sensitive to system memory. Below is a run from February 6 this year (bottom right) when I had the Strix for just a few days and plugged it into my Intel I7-5960X system w/ 32GB of DDR4 3200 / tight RAM. The I7-5960X only ran at 4.45GHz or so. The Strix itself was bone stock and air-cooled, no r_BAR back then.
> View attachment 2536844


Ha, your card barely broke a sweat there; meanwhile, mine needs 4 rads and a bios flash to get 19596 
I use the XMP SPD profile defaults. That was also many drivers ago. I think in July or August, the drivers I was using got me to 15960 in PR; I haven't been able to get that score since.

EDIT: Could've just done this lol

I scored 15 960 in Port Royal
Intel Core i9-9900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com

471.11


----------



## J7SC

gfunkernaught said:


> Ha, your card barely broke a sweat there; meanwhile, mine needs 4 rads and a bios flash to get 19596
> I use the XMP SPD profile defaults. That was also many drivers ago. I think in July or August, the drivers I was using got me to 15960 in PR; I haven't been able to get that score since.


HEDT system-memory bandwidth really does make a big difference w/ this...case in point, my Threadripper HEDT 2950X with 2x w-cooled 2080 Ti (380W each) scored 11998 in Superposition _8K_ in '19, after I changed the TR 2950 node setting from UMA to NUMA...that still would be in the top 10 overall for Superposition 8K at HWBot today...

Edit: In Port Royal, that TR 2950 memory change got me an extra 300 or so points back then with everything else equal, boosting my HoF standing. Later drivers got that 2080 Ti duo past 21K in Port Royal...but then the 3090s came along


----------



## kryptonfly

gfunkernaught said:


> SMI that's command line right? Is there a tutorial?
> Btw, which 3090 do you have?


You need to create a .bat file with nvidia-smi -lgc 2220 (or whatever you want), but first you have to locate where nvidia-smi is => https://stackoverflow.com/questions/57100015/how-do-i-run-nvidia-smi-on-windows/57100016#57100016
Then open Notepad, paste the path, and add:
nvidia-smi -lgc 2220 (to lock at 2220 MHz). Save the file as .bat
Make another file to reset: nvidia-smi -rgc
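For reference, the two batch files described above would look something like this (the nvidia-smi path is an assumption: on recent drivers it sits in C:\Windows\System32, on older ones under C:\Program Files\NVIDIA Corporation\NVSMI; locate yours first and run the files as administrator):

```shell
:: lock_clock.bat -- lock the GPU core clock at 2220 MHz
:: (path is an assumption; adjust to wherever nvidia-smi.exe lives)
"C:\Windows\System32\nvidia-smi.exe" -lgc 2220

:: reset_clock.bat -- restore default clock management
:: "C:\Windows\System32\nvidia-smi.exe" -rgc
```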

I have a Gigabyte Turbo, 2x8 pins
Yep, surely my quad-channel 3400 MHz 13-12-12-24 helps a lot too. I just got a 12900K yesterday; I will test and compare whether it's worth the shot.


----------



## J7SC

kryptonfly said:


> You need to create a .bat file with nvidia-smi -lgc 2220 (or whatever you want), but first you have to locate where nvidia-smi is => https://stackoverflow.com/questions/57100015/how-do-i-run-nvidia-smi-on-windows/57100016#57100016
> Then open Notepad, paste the path, and add:
> nvidia-smi -lgc 2220 (to lock at 2220 MHz). Save the file as .bat
> Make another file to reset: nvidia-smi -rgc
> 
> I have a Gigabyte Turbo, 2x8 pins
> Yep, surely my quad-channel 3400 MHz 13-12-12-24 helps a lot too. *I just got a 12900K yesterday; I will test and compare whether it's worth the shot.*


That should be interesting to see w/ the 12900K comparative results...the 12900K seems to use two DDR5 sticks as 'quad channel'....and who knows where it will all go; apparently AMD's upcoming Epyc server chip (and possibly the corresponding-gen Threadripper Pro HEDT) will use _12-channel_ DDR5 🆒


----------



## gfunkernaught

J7SC said:


> System memory HEDT bandwidth really does make a big difference w/ this...case in point, my Threadripper HEDT 2950X with 2x w-cooled 2080 Ti (380W per) scored 11998 in Superposition _8_K in '19, after I changed the TR 2950 node setting from UMA to NUMA...that still would be in the top 10 overall for Superposition 8K at HWBot today...


4x1080 Ti's #1 8k on hwbot...wow


----------



## kryptonfly

J7SC said:


> That should be interesting to see w/ the 12900K comparative results...the 12900K seems to use two DDR5 sticks as 'quad channel'....and who knows where it will all go; apparently AMD's upcoming Epyc server chip (and possibly the corresponding-gen Threadripper Pro HEDT) will use _12-channel_ DDR5 🆒


I will use DDR4 because in games it's a little better; DDR5 is too expensive (shortage and new) and not mature. Waiting for my Tomahawk DDR4 today; I will push my G.Skill to the max (1.55-1.6v) and see. Right now I have a 6950X delidded @ 4.6 GHz 1.4v, cache 3800 at 1.24v, stable in Cinebench R20 at ~5700, max 70°C on just one core, others mid-low 60°C, 3400 13-12-12-24 1T quad channel (doesn't pass memtest). I'll lose a little bandwidth for around the same latency (right now 45.7ns), but IPC will really improve!


----------



## yzonker

Ran this a while ago with my Zotac. Looks like my temps were pretty high. Probably can do better than that now, at least in regards to core temp. Might give it a try when I get a chance.


----------



## J7SC

kryptonfly said:


> I will use DDR4 because in games it's a little better; DDR5 is too expensive (shortage and new) and not mature. Waiting for my Tomahawk DDR4 today; I will push my G.Skill to the max (1.55-1.6v) and see. Right now I have a 6950X delidded @ 4.6 GHz 1.4v, cache 3800 at 1.24v, stable in Cinebench R20 at ~5700, max 70°C on just one core, others mid-low 60°C, 3400 13-12-12-24 1T quad channel (doesn't pass memtest). I'll lose a little bandwidth for around the same latency (right now 45.7ns), but IPC will really improve!


It should be even more interesting to see the Alder Lake / DDR4 combo results compared to the 6950X HEDT; hopefully, you can post some results with the same GPU settings. It does make sense, though, to wait for a) better DDR5 (i.e. 6800 low latency, even 8000 per some leaks, within 12 months) and b) better overall DDR5 availability and pricing.

I have a 4x8 DDR4 4000 CL15 kit which is one heck of an oc'er...looking forward to get a 'DDR5 equivalent' when the time comes.


----------



## yzonker

A bit better. Would be nice to break 20k, but I'd have to get temps down more. BTW, reBar has no effect on this benchmark. Scores within 20pts on/off.


----------



## EarlZ

What's a good brand of cable extensions to use for the 3090, something that doesn't degrade within 3-4 years?


----------



## ManniX-ITA

EarlZ said:


> What's a good brand of cable extensions to use for the 3090, something that doesn't degrade within 3-4 years?











[Official] NVIDIA RTX 3090 Owner's Club
I was using CableMod Pro PSU cables on my AX1600i. Someone in another thread noticed my CPU and GPU 12v rails were dipping down to as low as 11.6v while running Port Royal and that is not good for stability. I put back in the stock Corsair cables, and now my 12v rails don't go below 12.05v. :)...
www.overclock.net


----------



## 7empe

yzonker said:


> A bit better. Would be nice to break 20k, but I'd have to get temps down more. BTW, reBar has no effect on this benchmark. Scores within 20pts on/off.


Even closer to 20k: +1350 MHz on memory and +180 MHz on core with the 520W EVGA BIOS on an ASUS 3090 OC card. Custom water cooling on both sides of the card.


----------



## wesley8

EVGA 1KW vbios with +180 core and +1200 mem.....


----------



## KedarWolf

Result #21 in Superposition 4K Optimized (1x GPU)

Score 20098

You've got to scroll to 21.

UNIGINE Benchmarks


----------



## gfunkernaught

I'm going to start a 4k Boundary benchmark challenge.


----------



## J7SC

gfunkernaught said:


> I'm going to start a 4k Boundary benchmark challenge.


4K Boundary benchmark ? Does it have any electric blue in it ?


----------



## kryptonfly

J7SC said:


> It should be even more interesting to see the Alder Lake / DDR4 combo results compared to the 6950X HEDT; hopefully, you can post some results with the same GPU settings. It does make sense, though, to wait for a) better DDR5 (i.e. 6800 low latency, even 8000 per some leaks, within 12 months) and b) better overall DDR5 availability and pricing.
> 
> I have a 4x8 DDR4 4000 CL15 kit which is one heck of an oc'er...looking forward to get a 'DDR5 equivalent' when the time comes.


Sure, don't worry, I'm on my way with plenty of benches. This CPU is insane; my 3090 crashes at devilish framerates lol. New VRAM limit of +1068 MHz before artifacts in Tomb Raider 2013 (~700 fps in the caves at 1080p ultimate). I get errors in memtest with my 4x8 3000C14Q-32GTZR from my X99 at 4000 14-14-14-34 with tight secondary timings; it's worse beyond 1.57v, and below 1.53v it doesn't boot. I tried 4133 15-15-15-35, but got dozens of errors in a few seconds. I will try with 3600C14D-32GTZNA; I've seen some good results with it.


----------



## KedarWolf

ManniX-ITA said:


> [Official] NVIDIA RTX 3090 Owner's Club
> I was using CableMod Pro PSU cables on my AX1600i. Someone in another thread noticed my CPU and GPU 12v rails were dipping down to as low as 11.6v while running Port Royal and that is not good for stability. I put back in the stock Corsair cables, and now my 12v rails don't go below 12.05v. :)...
> www.overclock.net


Bought these.

Home
Best selection of PC cable sleeving supplies at the lowest price. All of our PC Cable Sleeving materials have been thoroughly tested to provide the best possible quality and satisfaction. The best custom PC cables are also found here.
mainframecustom.com

One black 20" 24-pin power PSU cable with two cable combs.
Two black 20" 8-pin EPS PSU cables with two cable combs.
Three black 26" 8-pin PCI-E PSU cables with three cable combs.
Three 32" SATA PSU cables with three cable combs.
Two 14" SATA PSU cables with two cable combs.

A steal at only $415.87 USD including shipping to Canada. :/


----------



## ManniX-ITA

KedarWolf said:


> A steal at only $415.87 USD including shipping to Canada. :/


----------



## J7SC

ManniX-ITA said:


>


I had two older (2011-ish) non-modular PSUs for office use that didn't want to do any more work...but before e-recycling them, I cut off the 16-gauge wiring with the PCIe plugs and also took the two fans out....might come in handy one day.


----------



## gfunkernaught

J7SC said:


> 4K Boundary benchmark ? Does it have any electric blue in it ?
> View attachment 2537984


No but it is in SPACE!


----------



## J7SC

gfunkernaught said:


> No but it is in SPACE!


Nice, I love space apps...any URL links, or just > this ?


----------



## des2k...

J7SC said:


> Nice, I love space apps...any URL links, or just > this ?











Boundary: Benchmark on Steam
This is a benchmark for raytracing tech. It will perform a real-time rendering CG scene on your computer. Let's measure the performance of your PC!
store.steampowered.com


----------



## gfunkernaught

If you love real space apps or sims and want to test the limits of your CPU and RAM, try Universe Sandbox and run the Saturn ring sim with 2,500 objects. You can add more material to the ring, crushing your system.


----------



## yzonker

gfunkernaught said:


> No but it is in SPACE!


I installed it from Steam the other day when you mentioned it but haven't run it yet. Waiting for you to kick this thing off.


----------



## KedarWolf

yzonker said:


> I installed it from Steam the other day when you mentioned it but haven't run it yet. Waiting for you to kick this thing off.


I got 143.8 FPS and saved a screenshot; then, because I had some hard reboots from having my GPU overclock settings too high, I immediately restored my Macrium Reflect image and lost the screenshot I had saved in my Pictures folder. :/

Tell you what, I'll try my benching OS Macrium image and see if I can do better.


----------



## gfunkernaught

yzonker said:


> I installed it from Steam the other day when you mentioned it but haven't run it yet. Waiting for you to kick this thing off.


I did a run, but I wanted to run it again because I've noticed that, in all apps except PR, enabling ReBAR globally hits performance negatively. I'll do two runs, one with ReBAR and one without, but I'll enable ReBAR just for Boundary.


----------



## gfunkernaught

KedarWolf said:


> I got 143.8 FPS and saved a screenshot; then, because I had some hard reboots from having my GPU overclock settings too high, I immediately restored my Macrium Reflect image and lost the screenshot I had saved in my Pictures folder. :/
> 
> Tell you what, I'll try my benching OS Macrium image and see if I can do better.


Whoa what DLSS setting?

I set my VRAM offset too high last night, thanks to the new AB allowing more than +1500, and I was constantly getting the green/red screen of death at login. Flashed to the stock bios in safe mode, tested a 3D load OK, then put the 1kW bios back on. Whew!


----------



## J7SC

gfunkernaught said:


> Whoa what DLSS setting?
> 
> I set my VRAM offset too high last night, thanks to the new AB allowing more than +1500, and I was constantly getting the green/red screen of death at login. Flashed to the stock bios in safe mode, tested a 3D load OK, then put the 1kW bios back on. *Whew!*


...shades of 2080 Ti good ol' days ?! Glad you got to the 'whew' stage... 

I'm setting up all-new Steam installs on my new systems over the next few days and will download the Boundary benchmark, along with a few new games. It feels good to have a fresh Windows 10 Pro install (no Win 11 here for a while yet) on new NVME drives etc. BTW, is the latest NVidia driver the best all-rounder, or are there some issues with it ? I'm currently on 471.xx


----------



## gfunkernaught

4K, Quality DLSS; ReBAR not enabled for this bench.


----------



## gfunkernaught

J7SC said:


> ...shades of 2080 Ti good ol' days ?! Glad you got to the 'whew' stage...


Word!


----------



## gfunkernaught

'nother 4k Quality run


----------



## gfunkernaught

4k dlss off...


----------



## KedarWolf

gfunkernaught said:


> Whoa what DLSS setting?
> 
> I set my VRAM offset too high last night, thanks to the new AB allowing more than +1500, and I was constantly getting the green/red screen of death at login. Flashed to the stock bios in safe mode, tested a 3D load OK, then put the 1kW bios back on. Whew!


Default settings, 4K, DLSS on Quality. I never changed a thing after installing the benchmark.

Oh wait, I have a 3840x1080 screen; it ran at that, since it won't let me run 4K.


----------



## tefla

Figured I'd throw an update out there for anyone searching in the future. Shunt modded my 3090 FE about 9 months ago; still chugging along smoothly! The card is mining 24/7 when not gaming, with 0 issues, and has already ROI'd by a significant margin. Still waiting to put this card underwater to really see what it can do; one day Optimus will release a 3090 FE block... but it doesn't look like anytime soon😅

Weather was quite cold this morning, so I shoved the case next to the window and got a new best in Port Royal! Definitely have some more headroom. Normal OS, and very conservative CPU/mem OC. The GPU core/mem clocks had some room too, but the GF didn't want the window open longer while it was -15C out haha. 15.4k on a stock FE cooler would be pretty cool. 

I scored 15 295 in Port Royal


----------



## J7SC

@Arizor @gfunkernaught - finished it !  


Spoiler


----------



## GRABibus

gfunkernaught said:


> I've noticed that, in all apps except PR, enabling ReBAR globally hits performance negatively.


In Vanguard, ReBAR enabled increases my fps by 10% to 15%.
Same for Cold War.


----------



## yzonker

Some cool morning air,

Still not quite 16k in PR but better, 









I scored 21 010 in Time Spy
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Nice improvement here,










Don't know how @gfunkernaught got 51.9 here though,


----------



## gfunkernaught

@yzonker I'm not sure either. Those two 4k runs were back to back. I noticed the eff clock was slightly higher in the 51.9 run.


----------



## yzonker

gfunkernaught said:


> @yzonker I'm not sure either. Those two 4k runs were back to back. I noticed the eff clock was slightly higher in the 51.9 run.


I'd have to gain almost 5% to get to 51.9, though. My 3080 Ti scored 48.6, so it doesn't seem like just a problem with my 3090 machine. It must be something else it's sensitive to. Tried ReBAR on/off; ReBAR on scored slightly higher.


----------



## gfunkernaught

yzonker said:


> I'd have to gain almost 5% to get to 51.9 though. My 3080ti scored 48.6, so doesn't seem like just a problem with my 3090 machine. Must be something else it's sensitive to. Tried reBar on/off. ReBar on scored slightly higher.


When you enable rebar globally, do you manually set the flag as 0x0000001 or just click the drop down and select the preset with the list of games?


----------



## yzonker

gfunkernaught said:


> When you enable rebar globally, do you manually set the flag as 0x0000001 or just click the drop down and select the preset with the list of games?


Hmm, drop down list. Is that not the correct way? Seems to work in 3dmark.


----------



## gfunkernaught

yzonker said:


> Hmm, drop down list. Is that not the correct way? Seems to work in 3dmark.


I followed a tutorial that said to manually enter a value, and that value seems to match the drop-down selection. But I noticed that for one of the hex values, 0xF000AA, if I blank out the value and then apply, it automatically goes to the Far Cry 6 selection. Weird.


----------



## yzonker

gfunkernaught said:


> I followed a tutorial that said to manually enter a value, and that value seems to match the drop down selection. But I noticed that for one of the hex values 0xF000AA if I blank out the value then apply, it automatically goes to the Far Cry 6 selection. Weird.


Didn't seem to make a difference. The pulldowns have those values listed there anyway; probably the same thing. Is your run repeatable?


----------



## des2k...

I'm surprised we don't have a ReBAR utility already. The nvinspector source code is on GitHub, and all the functions are there. I loaded Visual Studio / nvinspector last week, but I'm very lazy :-(


----------



## yzonker

des2k... said:


> I'm surprised we don't have a rebar utility already. Nvinspector source code is on github.
> All the functions are there. Loaded visual studio / nvinspector last week, but I'm very lazy :-(
> 
> View attachment 2538395


That would be handy. Although I'm using @KedarWolf 's OS image, which is handy in that I just have it all configured for benching, and most benchmarks score the same or better with ReBAR forced. I like not having to mess with my daily OS (or crash it over and over LOL).

BTW, my mem is still holding me back the most. Even at those temps from my runs above, I can still only go +900 in PR. I know you copper-shimmed your mem; what did you use to hold the shims in place? Some kind of thermal adhesive?


----------



## geriatricpollywog

yzonker said:


> That would be handy. Although I'm using @KedarWolf 's OS image, which is handy in that I just have it all configured for benching and most of them score the same or better with reBar forced. I like not having to mess with my daily OS (or crash it over and over LOL).
> 
> BTW, my mem is still holding me back the most. Even at those temps from my runs above, I can still only go +900 in PR. I know you copper shimmed your mem, what did you use to hold the shims in place? Some kind of thermal adhesive?


I wonder if there is anything you can do about memory overclocking on the 3090. Mine likes +1350 whether I let it run hot or have it outside in the freezing cold with a fan on the backplate.


----------



## kryptonfly

J7SC said:


> It should be even more interesting to see the Alder Lake / DDR4 combo results compared to the 6950X HEDT; hopefully, you can post some results with the same GPU settings. It does make sense, though, to wait for a) better DDR5 (i.e. 6800 low latency, even 8000 per some leaks, within 12 months) and b) better overall DDR5 availability and pricing.
> 
> I have a 4x8 DDR4 4000 CL15 kit which is one heck of an oc'er...looking forward to get a 'DDR5 equivalent' when the time comes.


@J7SC : Sorry for the delay, it took time to test this new stuff. My 12900K is pretty median, I didn't win the silicon lottery this time :/ but I will probably change to a 12900KS next year.
Anyway, here we go: some benches from my 6950X to the 12900K with the same bench settings, same RAM (4x8GB 3000C14D-32GTZR), OC as far as possible, HT disabled, E-cores disabled, only true cores; 12900K = 8 cores/threads, 6950X = 10 cores/threads:

*AC Odyssey 1080p ultra* :

 

Gain = +9.7%

*Endwalker 1080p and 4K* :

 
 

Gain 1080p : +30.5%
Gain 4K : +6.5% (I was even limited with the 6950X)

*FFXIV benchmark* :

 

Gain : +40%

*HZD 1080p ultra and low* :

 
 

Gain ultra : +8.4%
Gain low : +15.5%

*FarCry 5 1080p ultra and low* :

 
 

Gain ultra : +51.5% !
Gain low : +51.7% !

*SOTTR 1080p very high and very low* :

 
 

Gain very high: 354 fps (CPU) / 242 fps (CPU) = +46.3%
Gain very low: 349/257 = +35.8% (my CPU average is lower, maybe because of HT disabled)

*Shadow Of War 1080p ultra* :



I don't have the screenshot from my 6950X, but I remember that in this particular scene it reached 87 fps in the worst-case scenario (the worst in the game, from my understanding). This game is poorly optimized on 5 cores. Gain +51.7%! It's about what we see with FarCry 5. Insane for old DX11/9 games; clearly what my 6950X didn't have with its poor IPC.

In other games the numbers were already high thanks to quad-channel DDR4 (89 GB/s read), but even in HZD, which is really bandwidth-dependent, there's a tiny improvement. These games/benches are the most CPU-intensive that I have. Oops, I just forgot to test RDR2; never mind...
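For anyone double-checking the percentages: the "gain" figures above are just the ratio of the two CPUs' fps results, e.g. for the SOTTR runs:

```python
# The gain percentages above are plain fps ratios between the
# 12900K and 6950X results (numbers taken from the post).

def gain(new_fps, old_fps):
    """Percentage improvement of new_fps over old_fps."""
    return (new_fps / old_fps - 1.0) * 100.0

sottr_very_high = gain(354, 242)  # ~46.3%, as quoted
sottr_very_low = gain(349, 257)   # ~35.8%, as quoted
```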


----------



## J7SC

kryptonfly said:


> @J7SC : Sorry for the delay, it took time to test this new stuff. My 12900K is pretty median, I didn't win the silicon lottery this time :/ but I will probably change to a 12900KS next year.
> Anyway, here we go: some benches from my 6950X to the 12900K with the same bench settings, same RAM (4x8GB 3000C14D-32GTZR), OC as far as possible, HT disabled, E-cores disabled, only true cores; 12900K = 8 cores/threads, 6950X = 10 cores/threads:
> 
> *AC Odyssey 1080p ultra* :
> 
> 
> 
> Gain = +9.7%
> 
> *Endwalker 1080p and 4K* :
> 
> 
> 
> 
> Gain 1080p : +30.5%
> Gain 4K : +6.5% (I was even limited with the 6950X)
> 
> *FFXIV benchmark* :
> 
> 
> 
> Gain : +40%
> 
> *HZD 1080p ultra and low* :
> 
> 
> 
> 
> Gain ultra : +8.4%
> Gain low : +15.5%
> 
> *FarCry 5 1080p ultra and low* :
> 
> 
> 
> 
> Gain ultra : +51.5% !
> Gain low : +51.7% !
> 
> *SOTTR 1080p very high and very low* :
> 
> 
> 
> 
> Gain very high : 354 fps cpu/242 fps cpu = 46.3%
> Gain very low : 349/257 = 35.8% (my cpu avg is lower, maybe because of HT disabled)
> 
> *Shadow Of War 1080p ultra* :
> 
> 
> 
> I don't have the screen from my 6950X but I remember in this particular scene, it reached 87 fps in worst case scenario (the worst in game from my understanding). This game is poorly optimized on 5 cores. Gain +51.7% ! It's about what we see with FarCry 5. Insane for old DX11/9 games, clearly what my 6950X didn't have with poor ipc.
> 
> In other games it was already high thanks to quad DDR4 (89 Gb/s reading) but even in HZD really bandwidth dependent, there's a tiny improvement. These games/bench are the most cpu intensive that I have. Oops, I just forgot to test RDR2, nevermind...


Thanks - very thorough and helpful re. delineating what's Alder Lake IPC vs the older architecture, w/o DDR5 getting in the way.


----------



## des2k...

yzonker said:


> That would be handy. Although I'm using @KedarWolf 's OS image, which is handy in that I just have it all configured for benching and most of them score the same or better with reBar forced. I like not having to mess with my daily OS (or crash it over and over LOL).
> 
> BTW, my mem is still holding me back the most. Even at those temps from my runs above, I can still only go +900 in PR. I know you copper shimmed your mem, what did you use to hold the shims in place? Some kind of thermal adhesive?


The plan was to finish lapping my block and add copper shims so I could use a 0.5mm pad at the back; didn't happen 😂

Mem temps are still OK: 50C in games, 60C+ for heavy RTX / mining at +1500. For PR runs / games, my limit is +1570 on mem.


----------



## des2k...

geriatricpollywog said:


> I wonder if there is anything you can do about memory overclocking on the 3090. Mine likes +1350 whether I allow it to run hot, or when I have it outside in the freezing cold with a fan on the backplate.


Since you can run +1350 hot, it's most likely not a mem IC issue.

You want excellent contact between your cooler and the edge (mem controller) area of the die to push +1500.


----------



## geriatricpollywog

des2k... said:


> Since you can run +1350 hot, it's most likely not a mem IC issue.
> 
> You want excellent contact between your cooler and the edge (mem controller) area of the die to push +1500.


Ah. I’ll just let it be then. I’m not able to lap the die.


----------



## Roacoe717

tefla said:


> Figured I'd throw an update out there for anyone searching in the future. Shunt modded my 3090 FE about 9 months ago; still chugging along smoothly! The card is mining 24/7 when not gaming, with 0 issues, and has already ROI'd by a significant margin. Still waiting to put this card underwater to really see what it can do; one day Optimus will release a 3090 FE block... but it doesn't look like anytime soon😅
> 
> Weather was quite cold this morning, so I shoved the case next to the window and got a new best in Port Royal! Definitely have some more headroom. Normal OS, and very conservative CPU/mem OC. The GPU core/mem clocks had some room too, but the GF didn't want the window open longer while it was -15C out haha. 15.4k on a stock FE cooler would be pretty cool.
> 
> I scored 15 295 in Port Royal


Jeeze! Sell me your card or trade a strix 3090 plus cash lol.


----------



## PLATOON TEKK

Hope everyone is good. Was going to update here with the Optimus KP blocks but I ended up cancelling my order, they’ve barely shipped any blacks, let alone a single white.

I love Optimus and their blocks are absolutely impressive but the 3090ti in January makes $1.2k for blocks for two “old” cards silly imo.


----------



## J7SC

PLATOON TEKK said:


> Hope everyone is good. Was going to update here with the Optimus KP blocks but I ended up cancelling my order, they’ve barely shipped any blacks, let alone a single white.
> 
> I love Optimus and their blocks are absolutely impressive but the 3090ti in January makes $1.2k for blocks for two “old” cards silly imo.


I would have cancelled also, though for other reasons. I do wonder though if the new 3090 Ti would not perhaps fit 3090 blocks (w/ different passive back-plate). I realize that the 3090 Ti may (or may not) use single-sided 2 GB VRAM chips, but with plain-vanilla GDDR6 cards I have with 1GB vs 2 GB chips, the size difference is negligible, if it exists at all. Never say never, but I wonder if NVidia would actually require vendors to retool so close to 4090 release.


----------



## PLATOON TEKK

J7SC said:


> I would have cancelled also, though for other reasons. I do wonder though if the new 3090 Ti would not perhaps fit 3090 blocks (w/ different passive back-plate). I realize that the 3090 Ti may (or may not) use single-sided 2 GB VRAM chips, but with plain-vanilla GDDR6 cards I have with 1GB vs 2 GB chips, the size difference is negligible, if it exists at all. Never say never, but I wonder if NVidia would actually require vendors to retool so close to 4090 release.


You make very good points. If they do drop it, it will be a slight variant at best for sure. Judging by how slow EVGA has been at releases (z690 dark pushed till feb potentially), I doubt there even will be a KP3090ti drop. Making those blocks potentially somewhat useless.

I’m very confused at the idea of a 3090ti myself and am still not sure about its legitimacy. But Nvidia has dropped strange “Ti” and “Super” variants before, so what do I know lol

Supposed 3090ti leak (Videocardz.com) _may be absolute bs!_


----------



## J7SC

PLATOON TEKK said:


> View attachment 2538693
> 
> 
> 
> You make very good points. If they do drop it, it will be a slight variant at best for sure. Judging by how slow EVGA has been at releases (z690 dark pushed till feb potentially), I doubt there even will be a KP3090ti drop. Making those blocks potentially somewhat useless.
> 
> I’m very confused at the idea of a 3090ti myself and am still not sure about it’s legitimacy. But Nvidia have dropped strange “ti” and “super” drops, so what do I know lol
> 
> Supposed 3090ti leak (Videocardz.com) _may be absolute bs!_


I remember the exact same speculation (along with 'photos') concerning the 2080 Ti _Super_, which never did officially show up, though I think there probably were some engineering samples floating about. Either way, I'm unlikely to go for a 3090 Ti if it actually becomes a reality...what I really want are some hi-po mGPUs with lots of tiles


----------



## PLATOON TEKK

J7SC said:


> I remember the exact same speculation (along with '''photos''') concerning the 2080 Ti _Super _which never did officially show up though I think there probably were some engineering samples floating about. Either way, I'm unlikely to go for a 3090 Ti if it actually becomes a reality...what I really want are some hi-po mGPUs with lots of tiles


Haha this is true, I remember those super Ti leaks too. There even was one “for sale” at one point lol. Hopefully this is the case here. Regarding the Optimus blocks. I feel the few degree difference at such a late stage isn’t worth it for me.

However, those who haven’t had a chance to land the hydro copper kit for their KP should absolutely go for the Optimus blocks.


----------



## Roacoe717

Has anyone heard of anyone using a evga hybrid kit on a 3090 strix?


----------



## yzonker

Roacoe717 said:


> Has anyone heard of anyone using a evga hybrid kit on a 3090 strix?


Likely won't work due to them being custom PCBs. Even if the block itself fit, the VRM heatsinks probably would not. And the hybrid sucks balls at cooling the VRAM anyway, so not really a great choice. 

I have one on my 3080ti and it took every trick I know to just get my mem temps BACK to the air cooler temps. Too bad Asus doesn't sell their hybrid separately as I think it's a bit better. 

If you really don't want to get in to a custom loop, might check and see if Alphacool makes their AIO for your card. It's a full block. Ah here we go, 









Alphacool Eiswolf 2 AIO - 360mm RTX 3080/3090 ROG Strix with Backplate (www.aquatuning.us)

The Alphacool Eiswolf 2 is the first full cover GPU AIO waterblock from Alphacool. It is based on the Alphacool GPX Eisblock Aurora GPX water block, a pump unit and a 360mm NexXxoS ST30 full copper radiator.


----------



## inedenimadam

Ugh. 3090 Kingpin hydrocopper doesn't fit in 011D.


----------



## J7SC

PLATOON TEKK said:


> Haha this is true, I remember those super Ti leaks too. There even was one “for sale” at one point lol. Hopefully this is the case here. Regarding the Optimus blocks. I feel the few degree difference at such a late stage isn’t worth it for me.
> 
> However, those who haven’t had a chance to land the hydro copper kit for their KP should absolutely go for the Optimus blocks.


Given how steeply the boost algorithm on 3090s scales inversely with temps, it makes good sense to invest in a decent block. For a KingPin, I would also go for at least the KP Hydro Copper.

As discussed before, I initially ran the EK Vector block on my Strix, but there were 'some issues' with that block and back-plate, so I replaced it with the Phanteks - and couldn't be happier. Granted, I also added a (passive) heatsink to the stock backplate. Then I used gobs of MX5, Gelid GC Extreme (die) and thermal putty for the VRAM, and the whole thing sits in an extensive cooling loop, but the results speak for themselves, at 23 C ambient, btw:









This is with the regular Asus Strix bios; the KPE 520 bios clocks a bit higher but also adds 1 - 2 C, though there's some temperature headroom.

I am really impressed with the Phanteks products these days, apart maybe from some screws that could have been tighter (I check all of them on every brand's CPU and GPU blocks - there are almost always some which need tightening).

I run two Phanteks CPU blocks for X570 16 core CPUs (max w/ PBO etc is around 68 C, 23 C ambient) and just tonight, I replaced the Watercool Heatkiller IV Pro nickel-copper block on my Threadripper ...some nickel plating was coming off the micro fin plate of the Heatkiller (!), and their customer service left a lot to be desired (!!). Watercool _used to be_ my fav w-cooling supplier I would recommend to others, but not anymore. Stress-testing the new TR CPU block w/CB R23 loops resulted in the same temps as the much more expensive Heatkiller block.

Anyhow, back to the 3090 Strix...








...I really like that card and have had no problems with it. It sits in a dual mobo TT Core P8 case and it is a joy to use in FlightSim 2020, CP2077 and anything else I throw at it, usually at 4K Ultra, on a LG C1. I am adding a few more games for Christmas, like this one, now that all the new builds and updates are finished (until the next time) :


----------



## Nizzen

inedenimadam said:


> Ugh. 3090 Kingpin hydrocopper doesn't fit in 011D.


Not 3090 strix with Bykski block either 

Imagine having a 3090 Kingpin in an $80 case 😆


----------



## inedenimadam

Nizzen said:


> Not 3090 strix with Bykski block either
> 
> Imagine having a 3090 Kingpin in a 80$ case 😆


I went ahead and ordered the vertical mount. The case was easy enough to build in, but it's going to make playing with SLI a pain.


----------



## des2k...

J7SC said:


> Given the fairly steep boost algorithm inverse to temps with 3090s, it makes good sense to invest in a decent block. For KingPin, I would also go for at least the KP Hydro Copper.
> 
> As discussed before, I initially ran the EK Vector block on my Strix, but there were 'some issues' with that block and back-plate, so I replaced it with the Phanteks - and couldn't be happier. Granted, I also added a (passive) heatsink to the stock backplate. Then I used gobs of MX5, GC Gelid Extreme (die) and thermal putty for the VRAM and the whole thing sits in an extensive cooling loop, but the results speak for themselves, at 23 C ambient, btw:
> View attachment 2538864


lol EK blocks...

I didn't get rid of mine, I just dropped the standoffs / lapped the thing🙄

My water doesn't go past 27c (800-900rpm fans) & usually I keep it 400w-500w (7c-10c delta) for gaming. Depends on the game type, but usually it's below 38c for GPU temps.

Recently finished Guardians of the Galaxy. Very good-looking game, everything maxed at 4K.


----------



## ArcticZero

Speaking of EK blocks...










Despite how it looks, the loop is actually clean. My Optimus CPU block is pristine, and the coolant is as clear as ever, with no film on the res/tubes. Just the EK front GPU block looking horrendous. The active backplate block is perfectly fine though for whatever reason.

Only reason I'm not switching is it performs fine despite its looks. And I don't really want to give up the active backplate either. Really wish Optimus made a reference sandwich too.


----------



## des2k...

ArcticZero said:


> Speaking of EK blocks...
> 
> View attachment 2538951
> 
> 
> Despite how it looks, the loop is actually clean. My Optimus CPU block is pristine, and the coolant is as clear as ever, with no film on the res/tubes. Just the EK front GPU block looking horrendous. The active backplate block is perfectly fine though for whatever reason.
> 
> Only reason I'm not switching is it performs fine despite its looks. And I don't really want to give up the active backplate either. Really wish Optimus made a reference sandwich too.


that's a nice looking EK block....

No idea what happened with mine on EK's side of things, but I should have opened mine (black acetal version) before installing.

After about 2-3 weeks it had green stuff, some hair & some machining oil.
Of course, the nickel just faded as if it was painted on top of an oily copper surface😄


----------



## outofmyheadyo

Usual EK quality for you, and still people keep buying and praising their crap like it's worth something. They have been raising their prices and lowering quality for years now, and still they get away with it somehow...


----------



## Ironcobra

This being my first water block and water-cooling loop, I imagine this patina is normal on copper? I actually think the blue looks pretty cool. Using Mayhems Ultra Pure H2O with biocide and inhibitor.


----------



## gfunkernaught

outofmyheadyo said:


> Usual EK quality for you, and still people keep buying and praising there crap like it`s worth somehting, they have been raising their prices and lowering quality for years now, and still they get away with it somehow...


Some of us don't have many options. I think only EK and Bykski make a block for the Trio. EK has the better block IMO, especially since they have the full cover active backplate. I personally never had an issue with their blocks.


----------



## J7SC

gfunkernaught said:


> Some of us don't have many options. I think only EK and Bykski make a block for the Trio. EK has the better block IMO, especially since they have the full cover active backplate. I personally never had an issue with their blocks.


Yeah, sometimes you have little or no choice re. a custom block, and then you have to make the best of it. For the record, I have only one Bykski product (6900XT custom PCB GPU block), and it seems to be very good quality re. actual materials and workmanship. However, it had other issues, ie. wrong labeling of the SKU, and the 'instruction manual' was just plain wrong in some spots, ie. about using thermal paste instead of the (included) pads on the VRAM. Re. the SKU, I just cut and ground the block to make it fit, and used thermal putty anyway instead of the included pads...

As mentioned before, I have some serious reservations about EK now, especially when comparing the 3090 Strix block to the Phanteks version. The latter just has a much bigger micro-fin area and superior cooling. Then there was the factory cutting tool imprint on my EK block with a ridge sitting right at the corner of the die contact area which didn't help 'hotspot' temps one bit - as I did have a choice, I switched. I also already posted about the big chunks of nickel flaking off the EK back-plate (handled through RMA) before it was ever even mounted, per pic below.

Finally, I noticed that when I used the flash on my camera, the nickel coating on the EK block 'seemed to all but disappear', as it is so thin. The nickel-coated copper Phanteks CPU block in the same pic underscores this. The same camera / flash in the right-hand pic depicts the 3090 Strix Phanteks and 6900XT Bykski blocks...quite a difference, wouldn't you say?

All that said, you got yourself a superb w-cooling setup with active front-and-back plate (and putty, lots of putty !), so enjoy it. I would just do regular maintenance checks re. the plating of the EK - you'll be upgrading to a 4090 soon enough, anyway ;-)

All this also actually goes beyond just EK quality...the last few years have brought me a lot more RMAs and the like with other vendors and products, perhaps due to pandemic and supply-chain issues, who knows. I even had the micro-fin plate of a Watercool Heatkiller IV Pro block show some Nickel plating coming off recently...


----------



## yzonker

Update on Kryonaut Extreme used on my GPU block. It's been 5 months since the last mount. I do think the block delta has degraded by ~1C. Nothing huge, obviously. I use CP2077 each time with the same settings for consistency, since the block delta can vary somewhat with how the card is loaded. I don't see anything in the block, etc... that would be hurting it, so most likely some bit of degradation of the paste.


----------



## gfunkernaught

@J7SC I agree with the pandemic and overall "rent" that has been only increasing for just about everyone on Earth contributing to quality decrease. It's been happening to the bigger names in almost every industry. I was at Microcenter once buying Corsair XR7s and the kid at the register was like "Dude Corsair? Na man EK all the way!" So I get the sentiment of the name being seemingly more important than the product's quality itself. Look at Picasso! 

Back to 3090s, yes it is superb for what it is, and it took a lot of work to make it superb. 

I'm not upgrading to a 4090 man come on. A 10090 or 11090, sure. I want to play Cyberpunk or RDR2 fully path traced at 4k 60fps.


----------



## yzonker

Still trying to make it to 16k in PR. Found 30pts, but still not quite there. Maybe try one more time tomorrow.

15956









Result (www.3dmark.com)


----------



## J7SC

yzonker said:


> Still trying to make it to 16k in PR. Found 30pts, but still not quite there. Maybe try one more time tomorrow.
> 
> 15956
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com


Have you tried reboot + 2 minutes idle before running Port Royal...that can add some 30 - 50 points. Else, look for Arctic Outflow, like we have right now


----------



## yzonker

@J7SC , thanks. I have had some success with the restart trick although that score didn't come right after a restart. And yes I was going to try again tomorrow when it's supposed to be stupid cold here. 🥶


----------



## yzonker

Also, note that the run above was in Win11. I tested that Win11 install against the stripped-down Win10 @KedarWolf shared and they scored identically in PR. That might not be true for other benchmarks like TS, though, if they are more memory/CPU dependent.

I also ran Aida latency in all 3 installs (5800x b-die), 

Win10 (bloated daily) : 56.xx 
Win11 (clean) : 55.xx
Win10 (stripped) : 53.5

So maybe Win11 isn't too bad at this point. Seemed snappy while I was using it.


----------



## yzonker

Ok, this is when less is more. Unhooked my external rad and stuck the box in front of the window. A lot more comfortable than making the room really cold too. LOL









Result not found (www.3dmark.com)


----------



## JKurz

Just ordered the Quantum EK water block and active backplate for my 3090. Cant wait!


----------



## AvengedRobix

My little PNY: NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,Gigabyte Technology Co., Ltd. Z690 AORUS MASTER (3dmark.com)


----------



## kryptonfly

My little Gigabyte Turbo :


----------



## truehighroller1

kryptonfly said:


> My little Gigabyte Turbo :



Uh, settings please? I bet it's because of Windows 10. I'm not going back to that. I'm on 11. Nice score man!


----------



## kryptonfly

truehighroller1 said:


> Uh, settings please? I bet it's because of windows 10. I'm not going back to that. I'm on 11. Nice score man!


I'm on W11. Re-BAR disabled, hardware-accelerated GPU scheduling disabled, G-Sync disabled, perf max and low latency "ultra" in the Nvidia panel. I locked 2175MHz with SMI @1.012v. (peak 603W around the GT2 beginning.)
CPU 5.2 p-cores / 4.1 e-cores (bad chip, SP 81), ring 4.5 GHz, 4100CL14-14-14-28-280-62464-2N with secondary & other timings lowered. (3600CL14D-32GTZNA)


----------



## yzonker

This is interesting. Generally it's believed a card won't work when flashed with a bios from a different family. In this case it's being claimed at least some 3080ti's can be flashed with a 3090 bios. 


__
https://www.reddit.com/r/gpumining/comments/ruflz2

I guess this is really a 3080Ti topic but it's an interesting find if true. And a lot of you that hang out here seem to be more knowledgeable so I thought I'd see if anyone knew anything about it.


----------



## AvengedRobix

kryptonfly said:


> I'm on W11. Re-bar disabled, hardware accelerated disabled, g-sync disabled, perf max and low latency "ultra" in Nvidia panel. I locked 2175mhz with SMI @1.012v. (peak 603W around the GT2 beginning.)
> CPU 5.2 p-cores / 4.1 e-cores (bad chip SP 81), ring 4.5 Ghz, 4100CL14-14-14-28-280-62464-2N with second & other timings lowered. (3600CL14D-32GTZNA)


Interesting... I don't know SMI. What is it?


----------



## kryptonfly

AvengedRobix said:


> interesting.. i don't know SMI.. what is?


It's integrated in the Nvidia drivers; you can lock the gpu clock, the mem clock, and control other stuff.

You need to find where it lives, generally in "system32/DriverStore/FileRepository/nvdm*" (that's where nvidia-smi is), then copy the path into Notepad and add: nvidia-smi -lgc 2175
Save it as a .bat file (if you want to lock at 2175, for example).
You need to create another one to reset: nvidia-smi -rgc
Right-click and run as Admin! Check with Afterburner.
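For anyone following along, the two .bat files described above would look roughly like this. A sketch only: the `nvdm*` FileRepository folder name varies with every driver install, so the path below is a placeholder you need to replace with your actual folder, and 2175 is just the example clock from this post.

```shell
:: lock-core.bat -- right-click and "Run as administrator"
:: PLACEHOLDER PATH: substitute your driver's actual FileRepository
:: folder (the nvdm* one that contains nvidia-smi.exe).
cd /d "C:\Windows\System32\DriverStore\FileRepository\nvdm*"
:: Lock the GPU core clock to 2175 MHz
nvidia-smi -lgc 2175

:: reset-core.bat -- a second file to undo the lock
cd /d "C:\Windows\System32\DriverStore\FileRepository\nvdm*"
:: Remove the GPU core clock lock
nvidia-smi -rgc
```

As the post says, check in Afterburner afterward that the lock actually took.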


----------



## geriatricpollywog

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)










NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)


----------



## kryptonfly

geriatricpollywog said:


> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)
> View attachment 2540736
> 
> 
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)
> View attachment 2540735


Great! What is your RAM kit and timings?


----------



## geriatricpollywog

kryptonfly said:


> Great ! What is your ram kit and timings ?


Binned Trident Z Royal dual rank 4000c14 kit running at 4190 14-15-15-28


----------



## AvengedRobix

kryptonfly said:


> It's integrated in Nvidia drivers, you can lock gpu, mem and control other stuff.
> 
> You need to find where it is, generally in "system32/DriverStore/FileRepository/nvdm* (where there's nvidia-smi), then copy the path to notepad and add : nvidia-smi - lgc 2175
> And save as a .bat file. (if you want to lock at 2175 for example).
> You need to create a new one to reset : nvidia-smi -rgc
> Right click and run as Admin ! Check with Afterburner.


Looks very interesting.. have you tried using the -pl command to set the power limit? For example 600 for a 2x8 with the KPE BIOS... And can you set the freq of the memory?

Sent from my ONEPLUS A6013 using Tapatalk


----------



## Thanh Nguyen

Anyone here get the 90ti? What kinda price range u expect it will be ?


----------



## bmgjet

Thanh Nguyen said:


> Anyone here get the 90ti? What kinda price range u expect it will be ?


Got one on order with a local computer shop here in NZ, $4,399 NZD deposit. I get refunded the difference if it turns out to be cheaper, or pay more if it costs more, depending on customs fees and currency exchange at the time the store gets them. Price isn't really too much of a worry - well, I guess if it turns out to be 5K I'll be a little pissed.
For reference, when I got my 3090 it was $3,299, and I have a buyer lined up for it as soon as the Ti shows up that's willing to pay $3,600.


----------



## J7SC

...on the topic of the 3090 Ti, here's a snippet from TweakTown on the KPE. Points 2 and 6 are interesting (especially if that holds true for other brands). Also, those power connectors potentially afford a lot of wattskies ...never say never, but I'm more than thrilled w/ my 3090 Strix and don't do HWBot anymore, so I'll probably wait until the 4090s


----------



## kryptonfly

AvengedRobix said:


> Look Very interesting.. have you try ti use the -pl command for limit the Power limit? For example 600 for a 2x8 with kpe BIOS... You can set the freq of the memory?
> 
> Inviato dal mio ONEPLUS A6013 utilizzando Tapatalk


Would be really handy, but with shunts + the XOC bios, the real power is lower than what GPU-Z is showing, so it would not be 600w but much less. Anyway, it's a good idea and I'll check if it works.
Yes, you can set the vram freq, but if you lock the gpu freq it won't move in 3dmark, for example. No need, but you can use: nvidia-smi -lmc
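If anyone wants to experiment, the related commands would look something like this. A sketch under assumptions: the wattage and memory clock numbers are placeholders, not recommendations, and -pl is clamped to whatever min/max range the installed vBIOS reports.

```shell
:: All of these need an elevated (Administrator) prompt.

:: Set the board power limit in watts (clamped to the vBIOS range)
nvidia-smi -pl 450

:: Lock the memory clock in MHz (placeholder value), then reset it
nvidia-smi -lmc 9751
nvidia-smi -rmc
```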


----------



## PLATOON TEKK

Seems the 3090ti is more possible than most of us expected. If those rumors about the KP are true, am even happier I cancelled those Optimus blocks.


----------



## sew333

Hi my pc: 10850K stock 4800mhz

2x16 GB DDR4 GSKILL 3000mhz XMP

Seasonic Tx-850 Ultra Titanium first digits s/n R2011 

Gigabyte Rtx 3090 Gaming ( 2x8pin,2 separate pcie cables )

Asus Z490 Maximus

1 TB SSD


4 days ago I launched Metro Exodus, and during the advertisement part my PC just shut down. I rebooted and it was fine again.

It happened right after launching the game, during the intro advertisements.

My question is: was it a PSU issue, or maybe other hardware?

I remember that when the PC went off, the PC case lights flickered for a second and then turned off completely. That flicker doesn't happen when I shut down the PC manually.

Also, the monitor flickered 10 seconds after the shutdown.

I am running the game now and there are no shutdowns. It happened once, 4 days ago. I've also had this PC since March 2021 and, until today, that had never happened.

Is the PSU just going bad? Should I RMA it, or what do you recommend? Thank you.

It's not overcurrent protection getting triggered, is it?
Doesn't that cause a complete shutdown of the PSU that always needs a power cycle to clear?
I didn't have to flip the switch.


----------



## J7SC

PLATOON TEKK said:


> Seems the 3090ti is more possible than most of us expected. If those rumors about the KP are true, am even happier I cancelled those Optimus blocks.


I guess NVidia's motto is why offer only one unobtanium model when you can do two  ? BTW, I haven't read anything about LHR for 3090s/Tis, even though at some point last year, NVidia suggested that they were going to phase it in for all of their Ampere models...I guess they know their target market very well...


----------



## KedarWolf

Little Afterburner trick.

I find my benchmarks are better with Afterburner set to what I want and actually running, not exited.

But go to Options, Monitoring and uncheck everything in that list. Also, put the polling rate to the max at 60000ms.

It added 100 points to my last benchmark score. I think having all the sensors running and polling at the default 1000ms slows everything down.

Don't worry if your Afterburner only shows your idle clock speeds when the benchmark starts, it'll only update to your actual clock speeds after 60 seconds.


----------



## gfunkernaught

sew333 said:


> Hi my pc: 10850K stock 4800mhz
> 
> 2x16 GB DDR4 GSKILL 3000mhz XMP
> 
> Seasonic Tx-850 Ultra Titanium first digits s/n R2011
> 
> Gigabyte Rtx 3090 Gaming ( 2x8pin,2 separate pcie cables )
> 
> Asus Z490 Maximus
> 
> 1 TB SSD
> 
> 
> 4 days ago…i launched Metro Exodus and during advertisement part,my pc just shutdown. I rebooted again and its fine again.
> 
> Happened after launching game on intro advertisements part.
> 
> My question is. It was psu issue or maybe other hardware?
> 
> I remember that when pc goes off ,pc case lights flickered for a second and then turned off completely. That flicker not happens when i shutting down pc manually.
> 
> Also monitor flickered 10 seconds after shutdown.
> 
> 
> 
> Ok i am running now game and no shutdowns. Happened once 4 days ago. Also i have pc until march 2021 and thats like today never happened.
> 
> Is psu going just bad,rma or what you recommend for me. Thank you
> 
> It's not overcurrent protection getting triggered.
> That causes complete shutdown of PSU needing always power cycling to clear it?
> I don’t have to flip switch.


Overcurrent protection will shut the system off completely; it's by design. What have you changed regarding software and/or hardware since the incident?


----------



## gfunkernaught

J7SC said:


> I guess NVidia's motto is why offer only one unobtanium model when you can do two  ? BTW, I haven't read anything about LHR for 3090s/Tis, even though at some point last year, NVidia suggested that they were going to phase it in for all of their Ampere models...I guess they know their target market very well...


Did you hear about the NvVRRXTRX ∞ ∞ 90? It will run Crysis, in real life.


----------



## J7SC

gfunkernaught said:


> Did you hear about the NvVRRXTRX ∞ ∞ 90? *It will run Crysis, in real life.*


...budget _cry_sis, that is  ?


----------



## gfunkernaught

J7SC said:


> ...budget _cry_sis, that is  ?


It will only cost one life.


----------



## PLATOON TEKK

J7SC said:


> I guess NVidia's motto is why offer only one unobtanium model when you can do two  ? BTW, I haven't read anything about LHR for 3090s/Tis, even though at some point last year, NVidia suggested that they were going to phase it in for all of their Ampere models...I guess they know their target market very well...


I feel you 100. I feel like I just got done stressing about all the 3090s and now it’s the same race all over again (as always) lol. As you said though, I think they silently reversed their “fight against mining”.

I wish I would actually play Crysis instead of just watching my frametime and stats. Am genuinely debating sitting out the Ti this time, but my soul hurts.


----------



## tps3443

Thanh Nguyen said:


> Anyone here get the 90ti? What kinda price range u expect it will be ?


I‘m gonna just keep my 3090 kp hc. I don’t think it’s going to offer any improvement.


Any thoughts on this?


----------



## J7SC

PLATOON TEKK said:


> I feel you 100. I feel like I just got done stressing about all the 3090s and now it’s the same race all over again (as always) lol. As you said though, I think they silently reversed their “fight against mining”.
> 
> I wish I would actually play Crysis instead of just playing my frametime and stats instead. Am genuinely debating on sitting out the Ti this time, but my soul hurts.


The step from the 3090 to the 3090 Ti seems marginal (say 6% fps on average); I'd rather wait for the 4090s. Besides, what with the new PCBs and power connectors, there's no way I'm modding the wiring and the GPU's front-and-back in this complex build I finished only recently:









When I was doing HWBot years back, it was always about adding to / defending your records and cups, and the latest _'stuff'_ couldn't get here fast enough. In my case, though, I entered a new realm some time ago - I like building unusual systems more than chasing benches all the time (but to each his own) - and _really using_ those systems.

I still run benchmarks of course to see if I can improve on my previous efforts, but it is not the priority anymore. I spent the 'TI money' on the latest-gen peripherals (ie. latest PCIe Gen4 NVMEs, C1 OLED) to make sure that the overall systems performance is both very fast and a joy to use w/o a single hiccup - be it for work, or for play. With up to 10 hours / day in front of the 'puters, it made the most sense to me. But again, I respect other folks' approaches to this; it just comes down to one's priorities re. time and money spent for a given return.


----------



## yzonker

Yea I've considered trying to replace my Zotac with a 3090 Ti, but I would definitely need to sell the Zotac to justify it. Probably not worth the hassle. 

BTW speaking of things that probably aren't worth it, anyone think the reference heatkiller block might be better than my Corsair block? Techpowerup didn't show much of a difference but I'm not sure I'm really confident in their testing.


----------



## SPL Tech

Thanh Nguyen said:


> Anyone here get the 90ti? What kinda price range u expect it will be ?


$5000 scalped online which is the only place you'll be able to buy one.


----------



## BigMack70

3090 Ti not worth it over my 3090 founders. There's no way it's going to be more than 10% faster, and it will probably have a net cost of $500-1500 after re-selling my 3090. 

We know from the 3090 that this architecture doesn't scale very well at these power levels. My guess is that you have to get at least 800W through a 3090 Ti before it makes a noticeable performance difference in games over a stock 3090.

Will wait for 4000 series, and honestly may just wait for the market to settle back to a point where you can buy cards at MSRP.


----------



## adl505

kryptonfly said:


> I'm on W11. Re-bar disabled, hardware accelerated disabled, g-sync disabled, perf max and low latency "ultra" in Nvidia panel. I locked 2175mhz with SMI @1.012v. (peak 603W around the GT2 beginning.)
> CPU 5.2 p-cores / 4.1 e-cores (bad chip SP 81), ring 4.5 Ghz, 4100CL14-14-14-28-280-62464-2N with second & other timings lowered. (3600CL14D-32GTZNA)


Why re-bar disabled? Sorry I'm new to overclocking and have a 3090 myself but I thought re-bar was good for performance? (I currently have it enabled).


----------



## tps3443

On paper the 3090 K|NGP|N is a 3090Ti, or a smidge faster actually. I’ve had my memory at 22.5Gbps for 7 months now daily. So, after careful consideration. I think I will pass as well.

OG 3090KP HC stays!!


----------



## managerman

A little backstory before the question.....Rebuilding my rig with some new components!! 

Re-Build Log ----> HERE

Intel 12900k
32GB G.Skill DDR5-6000 C36
Asus Z690 Maximus Extreme Glacial 
(2) EVGA RTX 3090 FTW3 Ultras with EK waterblocks and active backplates...(Original cards with latest Rebar 500W XOC Bios)

Still waiting for ASUS to release an SLI/NVLINK certified BIOS.....hopefully soon...

Question: When I ran Precision X1 for the first time it wanted to upgrade the firmware??? I exited out immediately....Is it trying to "upgrade" the vBIOS on the cards or just fan and RGB stuff? Should I just forget about PX1 since I actually like afterburner better? Trying not to brick my cards...

Thanks.....glad to be back in the game again...

-M


----------



## yzonker

PX1 will update either the firmware or the vbios, and it's annoying in that it won't let you skip the update, so I never use it. It's definitely known to brick cards; there were a bunch of bricked ones when reBar was released.


----------



## GRABibus

tps3443 said:


> I‘m gonna just keep my 3090 kp hc. I don’t think it’s going to offer any improvement.
> 
> 
> Any thoughts on this?


Everything will be bios related.
I am curious to see which bioses get released to use the full potential.


----------



## yzonker

GRABibus said:


> Everything will be bios related.
> I am curious to see which bioses will be released to use all potential.


Yep. Without a high PL, it probably won't be any faster than our 3090s running the KP 1kw bios. If I don't buy a KP, then I'll be waiting for some kind soul to leak the new 1kw (1.2kw?) bios.


----------



## tps3443

yzonker said:


> PX1 will update either the firmware or vbios. And it's annoying in that it won't let you skip the update. So I never use it. It's definitely known to brick cards. There were a bunch when reBar was released.


I use PX1 all the time to update my 3090 KP firmware. It works fine. I swap between bioses all the time; some of the bioses I use are KP Hybrid, and one is KP Hydro Copper.

PX1 only updates my Kingpin 3090 if I go from the KP Hybrid to the KP Hydro Copper bios. It just updates some firmware on the card.

If I flash from KP Hybrid to KP Hybrid, it does nothing; same for KP HC to KP HC.


----------



## J7SC

GRABibus said:


> Everything will be bios related.
> I am curious to see which bioses will be released to use all potential.


...or (enjoy with plenty of salt sprinkles):


----------



## tps3443

J7SC said:


> ...or (enjoy with plenty of salt sprinkles):
> View attachment 2541134


I’m torn between keeping my 3090 KP HC or selling now and queuing up for the 3090 Ti KP, which offers a kibble-sized improvement. What are you gonna do @J7SC?


----------



## J7SC

tps3443 said:


> I’m torn if I may keep my 3090KP HC, or sale now and queue up for the 3090Ti KP, which offers a kibble sized improvement. What are you gonna do @J7SC?


...likely do nothing but enjoy my current configs...took me long enough to complete them


----------



## tps3443

J7SC said:


> ...likely do nothing but enjoy my current configs...took me long enough to complete them


Absolutely. I finally built up a great watercooling setup. I always strived for your temps; they were always great! My 1HP Aquafarm chiller will be here Friday. After getting into custom watercooling, I got addicted.


----------



## gfunkernaught

Has anyone figured out a way to bypass the GPU Boost single-bin temperature step-down? I just locked in a frequency using CTRL+L on the AB curve, at [email protected], which gave me 2205mhz (2190mhz-ish effective), but as soon as I loaded up Bright Memory Benchmark it instantly dropped to 2190mhz (2179mhz effective). I prewarmed the card before trying this.

Can the nvidia SMI utility lock a freq no matter what the conditions are?


----------



## kryptonfly

adl505 said:


> Why re-bar disabled? Sorry I'm new to overclocking and have a 3090 myself but I thought re-bar was good for performance? (I currently have it enabled).


Depends on the game/bench. I don't see an improvement with re-bar enabled except in Port Royal, but you have to test.


gfunkernaught said:


> Has anyone figured out a way to bypass the GPU Boost temp single bin step down? I just locked in a freq using CTRL+L on the AB curve, at [email protected], which gave me 2205mhz (2190mhz-ish eff), but as soon as I loaded up Bright Memory Benchmark, it instantly drops to 2190mhz (2179mhz eff). I prewarmed the card before trying this.
> 
> Can the nvidia SMI utility lock a freq no matter what the conditions are?


Yes


----------



## yzonker

kryptonfly said:


> Depends of games/bench, I don't see an improvement with re-bar enabled except for Port Royal but you have to test.
> Yes


Time Spy also sees a boost in graphics score by forcing reBar, but that also hurts the cpu score for some reason, at least for AMD cpus. Depends on your config as to which yields the highest overall score. My 5800x/3090 machine scores highest with reBar forced on, 3900x/3080ti scores highest with it off.


----------



## tps3443

yzonker said:


> Time Spy also sees a boost in graphics score by forcing reBar, but that also hurts the cpu score for some reason, at least for AMD cpus. Depends on your config as to which yields the highest overall score. My 5800x/3090 machine scores highest with reBar forced on, 3900x/3080ti scores highest with it off.


Cyberpunk 2077, RDR2, Death Stranding, usually just the AAA titles that support DLSS in their game settings, demonstrate improvements with Resizable BAR. Anything else without DLSS just isn't gonna support the feature. Nvidia usually adds a few titles every driver launch, though. Improvements need to be checked against a frame-rate graph in MSI AB.

It's easy to miss 1-10% when you're already getting 100-150FPS+.

I leave it enabled in my z590 Dark bios, and I run a Rebar bios on my 3090 as well.

Rebar seems to provide a placebo effect in a lot of games too, though, lol.


----------



## yzonker

tps3443 said:


> Cyberpunk 2077, RDR2, Death Stranding, usually just the AAA Title games that support DLSS in game settings, demonstrate improvements with Resizeable Bar. Anything else without DLSS just isn’t gonna support that that feature. Nvidia usually adds a few titles every driver launch though. Improvements need to be checked with a graph with MSI AB on your frame rate.
> 
> it’s easy to miss 1-10% when you‘re already getting 100-150FPS+ already
> 
> I leave it enabled in my z590 Dark bios. And run Rebar bios on my 3090 as well.
> 
> Rebar seems to provide the placebo effect in a lot of games to though Lol.


I'm not sure if we're talking about 2 different things or not. I'm referring to using Nvidia Profile Inspector to force reBar on for benchmarks that are not supported by Nvidia.


----------



## kryptonfly

sew333 said:


> Hi my pc: 10850K stock 4800mhz
> 
> 2x16 GB DDR4 GSKILL 3000mhz XMP
> 
> Seasonic Tx-850 Ultra Titanium first digits s/n R2011
> 
> Gigabyte Rtx 3090 Gaming ( 2x8pin,2 separate pcie cables )
> 
> Asus Z490 Maximus
> 
> 1 TB SSD
> 
> 
> 4 days ago…i launched Metro Exodus and during advertisement part,my pc just shutdown. I rebooted again and its fine again.
> 
> Happened after launching game on intro advertisements part.
> 
> My question is. It was psu issue or maybe other hardware?
> 
> I remember that when pc goes off ,pc case lights flickered for a second and then turned off completely. That flicker not happens when i shutting down pc manually.
> 
> Also monitor flickered 10 seconds after shutdown.
> 
> 
> 
> Ok i am running now game and no shutdowns. Happened once 4 days ago. Also i have pc until march 2021 and thats like today never happened.
> 
> Is psu going just bad,rma or what you recommend for me. Thank you
> 
> It's not overcurrent protection getting triggered.
> That causes complete shutdown of PSU needing always power cycling to clear it?
> I don’t have to flip switch.


Seasonic units are known to trigger OCP faster than others. It happens in the Endwalker benchmark, especially with wildly fluctuating framerates; you can test Endwalker with the CPU+GPU overclocked. My Prime TX 750W did the same; I changed to an Antec Signature 1000W (same build quality) and got no more OCP. Maybe there are high framerates during the advertisement, I don't remember. You can remove the intro with a mod; look on Nexus Mods.


KedarWolf said:


> Little Afterburner trick.
> 
> I find my benchmarks are better with Afterburner set at what I want it and actually running, not exited.
> 
> But go to Options, Monitoring and uncheck everything in that list. Also, put the polling rate to the max at 60000ms.
> 
> It added 100 points to my last benchmark score. I think having all the sensors running and polling at the default 1000ms slows everything down.
> 
> Don't worry if your Afterburner only shows your idle clock speeds when the benchmark starts, it'll only update to your actual clock speeds after 60 seconds.


Or you can close Afterburner during the bench and leave HWiNFO in the background; no cost at all in scores, it even seems to run better that way (placebo lol).
Honestly, for stability there's no trouble running Afterburner during a bench, but once you've seen the behavior and limits, there's no reason to keep it running. Moreover, you can lock the frequency with SMI, and I guess you know your GPU temps well. Just my personal thought.

EDIT: the only game (that I know of) which benefits from Afterburner monitoring is Horizon Zero Dawn; no placebo, there's really an improvement, I don't know why. I've always used Afterburner for my HZD bench since launch. Seems it could be useful for all games 🤔 Needs some investigating.


----------



## KedarWolf

kryptonfly said:


> Seasonic are known to trigger OCP faster than others, it happens in Endwalker benchmark especially with high fluctuate framerates, you can test Endwalker with cpu+gpu OC'ed, my Prime TX 750W did same, I changed for an Antec Signature 1000W (same build quality) and no more OCP. Maybe there's high framerates during advertisement, I don't remember, you can remove intro with mod, look at nexus mod.
> Or you can close Afterburner during bench and let HWinfo in the background, no cost at all in scores, seems even to run better with it. (placebo lol).
> Honestly for stability, no trouble to run Afterburner during bench but once you've seen behavior & limits, no reason to keep Afterburner during bench. Moreover, you can lock freq with SMI and I guess you know well your gpu temp. Just my personal thought.
> 
> EDIT : the only game (that I know) which benefits from Afterburner monitoring is Horizon Zero Dawn, no placebo, there's really an improvement, I don't know why. I always use Afterburner for my HZD bench from the launch. Seems it can be useful for all games 🤔 Needs some investigating.


I find that if I close Afterburner, my curve changes and gets messed up; when I open it again, some points on my curve have changed. Hence me leaving it open while I fine-tune my curve.


----------



## KedarWolf

Welp, I messed up; no gaming for a week or two. I thought I had the right thermal pads to redo my backplate, but I had put 1.5mm pads in a 2mm cardboard container and never realized it until too late. Without 2mm pads my backplate install isn't right, so to be safe, no gaming or benching until I get the pads I ordered from China in a week or two. Hope Omicron won't delay it, but I think China is on lockdown, so it might.


----------



## gfunkernaught

kryptonfly said:


> Yes


If I do nvidia-smi.exe -lgc 2205, I see the clock go to only 2025mhz.


----------



## kryptonfly

KedarWolf said:


> I find if I close Afterburner, my curve changes and gets messed up. This reflects when I open it again and some points on my curve have changed. Hence me leaving it open as I fine-tune my curve.


I set 2220 @1.018v and lock with SMI @2190, then I close it; it doesn't move from 25°C to 40°C, no matter the load.


gfunkernaught said:


> if I do nvidia-smi.exe -lgc 2205, I see the clock go to only 2025mhz.


You need at least +30mhz on the point you set in Afterburner to make it work properly. Are you sure you see the lock in real time in Afterburner? Did you right-click and run it as admin? You don't need the .exe, just the path: nvidia-smi -lgc 2205
You'd need to set the point in Afterburner at 2235mhz.
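Since this keeps coming up, the basic lock/verify/reset sequence with nvidia-smi looks roughly like the below, run from an elevated prompt. This is a sketch using the standard flags, with 2205 as the example target from this exchange; it's not a guarantee the lock sticks on every driver:

```shell
# Lock the graphics clock with nvidia-smi (run from an elevated prompt).
# 2205 is just the example target from this thread; use your own stable bin.
nvidia-smi -lgc 2205      # lock the graphics clock to 2205 MHz
nvidia-smi -q -d CLOCK    # query clock state to verify the lock took effect
nvidia-smi -rgc           # reset graphics clocks to default behavior when done
```

On Windows the nvidia-smi binary often lives under C:\Program Files\NVIDIA Corporation\NVSMI (newer drivers also put it on the PATH), so either cd there first or give the full path, as noted above.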


----------



## J7SC

I've got to try that 'nvidia-smi -lgc' just for the fun of it! But first, I want to push VRAM speed a bit more. Per below, temps overall were already good, but the 3090 loop always had a lot of little bubbles in the liquid which would build up into one big one inside the block, even after extensive bubble bleeding etc. (not my first w-cooling rodeo), and unlike the neighboring system with its 2x D5s on 'default'. I think the culprit was too-high pump speeds with the 3x D5s (set to 5:5 on the back) in the loop; essentially, there was cavitation happening. I set the pumps to 3:5 tonight with no loss of temps on either the stock Strix bios or the KPE520, and the cavitation seems to have disappeared.


----------



## wtf_apples

yzonker said:


> EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


Sorry to bump an older post, but I'm having problems getting this to load.
I've loaded the 520w rebar with no problems, but this 1000w rebar throws an error.
I even had nvflash lock up the PC once while trying.
Any ideas?
Thanks


----------



## PLATOON TEKK

J7SC said:


> The step from 3090 to 3090 Ti seems marginal (say 6% fps on average); I rather wait for the 4090s. Besides, what with the new PCBs and Power Connectors, there's no way I'm modding the wiring and GPU's front-and-back in this complex build I finished only recently.:
> View attachment 2541046
> 
> 
> When I was doing HWBot years back, it was always about adding to / defending your records and cups, and the latest _'stuff' _couldn't get here fast enough. In my case, though, I entered a new realm some time ago - I like to build unusual systems more than chase benches all the time (but to each his own) - and_ really use_ those systems.
> 
> I still run benchmarks of course to see if I can improve on my previous efforts, but it is not the priority anymore. I spent the 'TI money' on the latest-gen peripherals (ie. latest PCIe Gen4 NVMEs, C1 OLED) to make sure that the overall systems performance is both very fast and a joy to use w/o a single hiccup - be it for work, or for play. With up to 10 hours / day in front of the 'puters, it made the most sense to me. But again, I respect other folks' approaches to this; it just comes down to one's priorities re. time and money spent for a given return.


Very well said, I think you are absolutely right. This time around I think I will lay back a bit and skip the middle Ti step. The idea of stressing about the cards, then the blocks, only for the 4090 to be announced once I finally run them in my rig, seems redundant. I agree with actually using the builds; that's why I much prefer chillers over LN2.

It's dope to know I can hit the top of the charts when I have the time and money, but making it a constant is just unnecessarily stressful lol. I don't even know anyone in person who understands or appreciates any of this haha.

I might even return this 12900k and just stick to the z590 Dark build.


----------



## des2k...

gfunkernaught said:


> if I do nvidia-smi.exe -lgc 2205, I see the clock go to only 2025mhz.


That's a pretty high frequency, and usually that needs 1.1v; the effective frequency will not be the best due to lack of voltage on normal 3090s.

*This MSI curve type will work best for locking that high a frequency*

1.1v @ 2235 (2205 + 30mhz, or +50mhz) & do the nvidia-smi.exe -lgc 2205
-1 step from 1.1v @ 2190 (2205 - 15mhz, 1 step)
-2 steps from 1.1v @ 2190 (2205 - 15mhz, 1 step)
-3 steps from 1.1v @ 2190 (2205 - 15mhz, 1 step)

*you're locking 2190 (1 step below 2205) on 3 VF points to force the next VF point
*you're setting the requested freq to 2205 + 30mhz and the limit to 2205 with SMI
*the effective frequency can't drop too low, or it will step down or hit power limits

The effective freq ends up lower than the requested freq, but it does pull 16k in Port Royal on my 2x8pin Zotac.

Example of how the curve is supposed to look:
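Since GPU Boost on these cards moves in fixed 15 MHz bins, the recipe above is just bin arithmetic. A minimal sketch of the numbers (the variable names and the 2-bin headroom are purely illustrative):

```shell
# Sketch of the 15 MHz clock-bin arithmetic behind the curve recipe above.
target=2205                       # desired lock; should sit on a 15 MHz bin
bin=15                            # GPU Boost step size on Ampere
set_point=$((target + 2 * bin))   # Afterburner point to raise: 2235
smi_lock=$target                  # value for nvidia-smi -lgc: 2205
one_bin_down=$((target - bin))    # where a single temp-bin drop lands: 2190
echo "$set_point $smi_lock $one_bin_down"   # prints: 2235 2205 2190
```

Plugging in 2205 reproduces the recipe: set the top point at 2235, lock 2205 with SMI, and a single temp-bin drop lands you at 2190.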


----------



## des2k...

wtf_apples said:


> Sorry to bump an older post but im having problems getting this to load.
> Ive loaded 520w rebar now with no problems but this 1000w rebar throws an error.
> Even had nvflash lock up the pc once trying.
> Any ideas?
> Thanks


1000w rebar original file is on page 832


----------



## yzonker

des2k... said:


> 1000w rebar original file is on page 832


I assume you are talking about your original post in this thread ([Official] NVIDIA RTX 3090 Owner's Club, www.overclock.net).

I wonder if the one on TPU is corrupted. It really should work. I think I may have uploaded it originally after I flashed the one in your post.


----------



## sew333

kryptonfly said:


> Seasonic are known to trigger OCP faster than others, it happens in Endwalker benchmark especially with high fluctuate framerates, you can test Endwalker with cpu+gpu OC'ed, my Prime TX 750W did same, I changed for an Antec Signature 1000W (same build quality) and no more OCP. Maybe there's high framerates during advertisement, I don't remember, you can remove intro with mod, look at nexus mod.
> Or you can close Afterburner during bench and let HWinfo in the background, no cost at all in scores, seems even to run better with it. (placebo lol).
> Honestly for stability, no trouble to run Afterburner during bench but once you've seen behavior & limits, no reason to keep Afterburner during bench. Moreover, you can lock freq with SMI and I guess you know well your gpu temp. Just my personal thought.
> 
> EDIT : the only game (that I know) which benefits from Afterburner monitoring is Horizon Zero Dawn, no placebo, there's really an improvement, I don't know why. I always use Afterburner for my HZD bench from the launch. Seems it can be useful for all games 🤔 Needs some investigating.






Also, it's not overcurrent protection getting triggered; that causes a complete shutdown of the PSU that always needs a power cycle to clear, and I don't have to flip the switch.


----------



## kryptonfly

@sew333: How are you so sure? I had a V1200 which triggered OCP despite its 1200W and really good voltages (+12v min), in Cyberpunk 2077, always around the same area. I RMA'd it and got a V1250 V2 (on eBay). If the PC shuts down wildly and the lights flicker, it's like pulling the plug from the wall; the capacitors empty (hence the light flicker), so a protection was triggered. Seasonic especially are really known to have this issue; the voltages are all good and you can't see it by just monitoring in Windows. Try the Endwalker bench @1080p max and 4k max with your CPU+GPU OC, it's a good test to check your PSU; if you get a shutdown, just RMA it.


----------



## sew333

kryptonfly said:


> @sew333 : How are you so sure ? I had a V1200 which triggered OCP despite its 1200W and really good voltages +12v min in Cyberpunk 77 always around the same area, I RMA it and got a V1250 V2 (on eBay). If pc shuts down wildly and lights flicker, it's like unplug from the wall, capacitors get empty (lights flicker) so a protection is triggered. Especially Seasonic are really known to have this issue, voltages are all good and you can't see it by just monitoring in windows, try Endwalker bench @1080p max and 4k max with your cpu+gpu OC, it's a good test to check your PSU, if you have a shut down just RMA it.


Thanks. But I didn't have to flip the power switch on the PSU OFF/ON after the shutdown; if it was OCP, I would have had to do that.

I tried Cyberpunk 2077, no shutdowns.

Is it this benchmark?








FINAL FANTASY XIV: Endwalker Official Benchmark (na.finalfantasyxiv.com)


----------



## yzonker

My Seasonic GX1000 would hit OCP during the Endwalker benchmark while running the KP 1kw bios. Machine would just click off. All I had to do was press the power button on the machine to turn it back on. Didn't have to flip the switch on the back. 

Swapping it for a Rm1000x fixed the problem.


----------



## sew333

So if I pass Endwalker, it's fine? All the games I tested had no issues.

Is this the benchmark?








FINAL FANTASY XIV: Endwalker Official Benchmark (na.finalfantasyxiv.com)


----------



## yzonker

sew333 said:


> So if Endwalker i pass its fine? All games i tested no issues
> 
> 
> this is this benchmark?
> 
> 
> 
> 
> 
> 
> 
> 
> FINAL FANTASY XIV: Endwalker Official Benchmark (na.finalfantasyxiv.com)


Maybe, hard to know for sure. All you can do is keep using it and see if it happens again. 

Yea that's the benchmark. I kinda think it was the combination of a high power limit and high frame rates. Other games/benches pull more power but never caused the issue (PR, TSE, Cyberpunk, etc...). The only other time I had the issue was running Firestrike which also runs at fairly high fps.


----------



## sew333

yzonker said:


> Maybe, hard to know for sure. All you can do is keep using it and see if it happens again.
> 
> Yea that's the benchmark. I kinda think it was the combination of a high power limit and high frame rates. Other games/benches pull more power but never caused the issue (PR, TSE, Cyberpunk, etc...). The only other time I had the issue was running Firestrike which also runs at fairly high fps.


Also, I tested 3DMark Wild Life, which runs at 500-900 FPS, and no shutdowns there either.


----------



## sew333

kryptonfly said:


> @sew333 : How are you so sure ? I had a V1200 which triggered OCP despite its 1200W and really good voltages +12v min in Cyberpunk 77 always around the same area, I RMA it and got a V1250 V2 (on eBay). If pc shuts down wildly and lights flicker, it's like unplug from the wall, capacitors get empty (lights flicker) so a protection is triggered. Especially Seasonic are really known to have this issue, voltages are all good and you can't see it by just monitoring in windows, try Endwalker bench @1080p max and 4k max with your cpu+gpu OC, it's a good test to check your PSU, if you have a shut down just RMA it.


Yeap. The PC shut down, something like going into standby mode. During the shutdown the lights at the bottom of the PC case briefly flickered, and I think some LED inside the case stayed on, from what I remember. Then I pressed the power button and the PC booted.


----------



## wtf_apples

des2k... said:


> 1000w rebar original file is on page 832


Thanks for the page.
I must be doing something wrong with this bios. I've tried every other bios on TPU and never had a problem.
Multiple KP 1kw ROMs don't work for me; they all give the same error.

nvflash64 --protectoff
nvflash64 -6 3090KP_X_bar.rom

I'm at a loss


----------



## yzonker

wtf_apples said:


> Thanks for the page.
> I must be doing something wrong with this bios. Ive tried every other bios on tpu and never had a problem.
> Multiple kp 1kw dont work for me. Gives me the same error.
> 
> nvflash64 --protectoff
> nvflash64 -6 3090KP_X_bar.rom
> 
> Im at a loss


Are you using the latest NVFlash from TPU?


----------



## sew333

lol ok thx


----------



## wtf_apples

yzonker said:


> Are you using the latest NVFlash from TPU?


Dang... I was using an older version of nvflash that came out after the bios, not the latest.
The latest one from TPU worked. Thank you!
I guess there's something in that 1kw.
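For anyone else who hits this: the sequence that ended up working here is the standard one, just with a current nvflash64 build from TPU. A sketch with a backup step added; the filenames are placeholders:

```shell
# Standard 3090 flash sequence with an up-to-date nvflash64 (run elevated).
# Filenames are placeholders; substitute your own backup name and target ROM.
nvflash64 --save backup_original.rom   # save the current vbios before touching anything
nvflash64 --protectoff                 # disable the EEPROM write protection
nvflash64 -6 3090KP_X_bar.rom          # flash; -6 overrides the board ID mismatch check
# Reboot afterwards so the new bios actually takes effect.
```

The takeaway from this exchange: if the flash errors out or locks up, grab the newest nvflash build before assuming the ROM itself is bad.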


----------



## wtf_apples

I scored 14,912 in Port Royal.

+178 core and +1500 mem, without the Classified tool, rebar off and SMT off; I didn't close anything in Task Manager yet, and I'm using the Steam version of 3DMark.
The board pulls just under 500w with the 1kw bios.
Good start, I guess.


----------



## KedarWolf

Deleted


----------



## AvengedRobix

des2k... said:


> That's pretty high freq and usually that needs 1.1v & eff freq will not be the best due to lack of voltage for normal 3090s
> 
> *This MSI curve type will work best for locking that high freq*
> 
> 1.1v @ 2235 (2205 + 30mhz) or +50mhz & do the nvidia-smi.exe -lgc 2205
> -1 step from 1.1v @ 2190 (2205 -15mhz (1 step)
> -2 step from 1.1v @ 2190 (2205 -15mhz (1 step)
> -3 step from 1.1v @ 2190 (2205 -15mhz (1 step)
> 
> *you're locking 2190 (1step bellow 2205 to 3VF points to force next VF point)
> *you're setting request freq to 2205 + 30mhz and limit to 2205 with SMI
> *eff frequency can't drop too low else it will step down or power limits
> 
> eff freq ends up lower vs requested freq but it does pull 16k for port royal on my 2x8pin zotac
> 
> Example of how the curve suppose to look
> View attachment 2541398


Interesting... for me my max is 15.2k. Is that only with the 1000w BIOS, or a shunt mod?

Sent from my ONEPLUS A6013 using Tapatalk


----------



## Dreams-Visions

So my queue from who knows when last year just popped for a KPE. Any reason to pick it up and swap out the current card if I can find a block for it? Buy and sell to someone who needs it for their build? Ignore and let it roll to the next person?

Currently running a 3090 strix in an open loop. 

What would you all recommend?


----------



## gfunkernaught

des2k... said:


> That's pretty high freq and usually that needs 1.1v & eff freq will not be the best due to lack of voltage for normal 3090s
> 
> *This MSI curve type will work best for locking that high freq*
> 
> 1.1v @ 2235 (2205 + 30mhz) or +50mhz & do the nvidia-smi.exe -lgc 2205
> -1 step from 1.1v @ 2190 (2205 -15mhz (1 step)
> -2 step from 1.1v @ 2190 (2205 -15mhz (1 step)
> -3 step from 1.1v @ 2190 (2205 -15mhz (1 step)
> 
> *you're locking 2190 (1step bellow 2205 to 3VF points to force next VF point)
> *you're setting request freq to 2205 + 30mhz and limit to 2205 with SMI
> *eff frequency can't drop too low else it will step down or power limits
> 
> eff freq ends up lower vs requested freq but it does pull 16k for port royal on my 2x8pin zotac
> 
> Example of how the curve suppose to look
> View attachment 2541398


Right, that's what I've been doing with the curve, but the thermal bin drop still happens; I guess it's unavoidable. I haven't been able to get past 15.5k in PR since a few drivers ago.


----------



## GRABibus

KedarWolf said:


> Welp, I messed up, no gaming for a week or two. I thought I had the right thermal pads to redo my backplate, but I had put 1.5mm pads in a 2mm cardboard container, and never realized it until late. Not having 2mm pads, my backplate install is not right, so to be safe, no gaming or benching until I get the pads I ordered from China in a week or two. Hope Omicron won't delay it, but I think China is on lockdown, so might.


Chinese New Year.


----------



## bearsdidit

Dreams-Visions said:


> So my queue from who knows when last year just popped for a KPE. Any reason to pick it up and swap out the current card if I can find a block for it? Buy and sell to someone who needs it for their build? Ignore and let it roll to the next person?
> 
> Currently running a 3090 strix in an open loop.
> 
> What would you all recommend?


PM me if you end up getting it!


----------



## Zogge

By the way, did the Asus 1kW bios ever get fixed to work properly, with the ability to draw over 480W?


----------



## Nizzen

Zogge said:


> Did the asus 1kW bios ever get fixed by the way to work properly with possibility to draw over 480W ?


I've seen 570w with the "asus xoc rebar bios".

Good enough for gaming.

I think it's this:








Asus RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


----------



## Zogge

Thanks !! Will try it today


----------



## Zogge

Works like a charm in some games, but sometimes it stays at 420mhz core / 400mhz memory even in games? I have power saving turned off and prefer maximum performance. Has anyone else experienced this, and how do you fix it? It came with the bios above. I will also try reflashing it.


----------



## sew333

Anyone have a clue? I've had this PC for 6 months with not a single shutdown in the games I tested:
Control, Quake II RTX, Battlefield 5, Call of Duty Vanguard, Crysis 1 and 3, Metro Exodus Enhanced, Cyberpunk 2077, Watch Dogs Legion, and the 3DMarks.

A week ago I had a single system shut-off when launching Metro Exodus Enhanced, in the advertisement part. It has not happened again since. The PC just shut off and I pressed the power button to boot it back up.

Should I be worried?
Seasonic TX-850 Prime Titanium, Gigabyte RTX 3090 Gaming at stock


----------



## J7SC

Zogge said:


> Works like a charm in some games, but sometimes it stays on 420mhz core, 400mhz memory even in games ? I have the power saving turned off and prefer maximum performance. Anyone else experienced this and how to fix it ? It came with the bios above. I will try to reflash it also.


I've also had some issues with that 1kw Strix r_bar, as much as I would like to _actually run it_. Every 3090 bios I have downloaded via links here or at Techpowerup can be opened with ABE 008, but this specific one continues to throw 'exception handling' errors when I try to open and review it, even via downloads on different systems.


----------



## Zogge

Okay maybe I need to try it more on a clean windows install. That I save for later then, but good to know others also had issues and not just me.


----------



## DannyTC

Hi all!
I have a 3090 FE with BIOS version 94.02.4b.00.0b. Could anyone tell me which BIOS version is the latest, the most recommended and if I should update it? 

Thanks!


----------



## yzonker

J7SC said:


> I've also had some issues with that 1kw Strix r_bar, as much as I would like to _actually run it_. Every 3090 bios I have downloaded via links here or at Techpowerup can be opened with ABE 008, but this specific one continues to throw 'exception handling' errors when I try to open and review it, even via downloads on different systems.


ABE008 doesn't work on the reBar KP 1kw bios either or any 3080ti bios. But it does work on other newer 3090 reBar bios I've tried. No clue what the difference is.


----------



## kryptonfly

sew333 said:


> Yeap. Pc shutdown like something like with standby mode . During shutdown pc case bottom lights briefly flicker and i think some led was on ,inside case. From what i remember. Then i pressed power button and pc booted.





sew333 said:


> Anyone have a clue? I have 6 months pc no single shutdown in games i tested:
> Control,Quake 2 rtx, Battlefield 5, Call off dutyVanguard,Crysis 1,3,Metro Exodus Enhanced,Cyberpunk 2077,Watch Dogs Legion,3dmarks.
> 
> Week ago had single system shut off when launching Metro Exodus Enhanced. IN advertisement part. Not happened again since. Pc just shut off and i pressed power button to boot up pc.
> 
> Should i be worry?
> Seasonic TX-850 Prime Titanium, Gigabyte Rtx 3090 Gaming stock


Sorry for the late reply. As of now it's up to you and your expectations: if you can live with one shutdown every so often, in rare special conditions (Endwalker, games with high framerates + high power usage), you can keep it. But if it happens more and more, you should RMA your Seasonic, with no guarantee the new one will pass Endwalker.
You need at least 1000W. I changed to an Antec Signature 1000W mid-2021 (same build quality) and Endwalker passes. I've had 2 OCPs since, but only under really heavy OC pulling +700W in 1 second (CPU+GPU OC in Rise of the Tomb Raider 4k at PL 100%; I forgot to apply the curve, with 4 mOhm shunts + the 1000W bios); fortunately there's OCP for cases like that! Seasonic is safer, let's say. If your Gigabyte shuts down on its stock 390W bios, there's something wrong. My Prime TX-750W managed Endwalker with a 390W bios (Gigabyte Turbo); it only started to trigger OCP when I tried the 1000W XOC bios.


----------



## J7SC

yzonker said:


> ABE008 doesn't work on the reBar KP 1kw bios either or any 3080ti bios. But it does work on other newer 3090 reBar bios I've tried. No clue what the difference is.


Oddly enough, the version I saved of the KPE 1kW r_bar does open with ABE008, just not the Strix 1kW r_bar. Then again, at least the latter is clearly marked as 'unofficial'.


----------



## yzonker

J7SC said:


> Oddly enough, the version I saved of the KPE 1kW r_bar does open with ABE008, just not the Strix 1kW r_bar. Then again, at least the latter is clearly marked as 'unofficial'.


Well, that's really strange, as it should be the same file. I just tried the copy on TPU and the one @des2k... originally posted in this thread. Neither loads for me; both throw the same error.


----------



## sew333

kryptonfly said:


> Sorry for the late reply. As of now it's up to you and your expectations, if you can manage 1x shut down every... in rare special conditions (Endwalker, games with high framerates + high power usage), you can live like that. But if it happens more and more, you should RMA your Seasonic with no guarantee the new one will pass Endwalker
> You need at least 1000W, I changed for an Antec Signature 1000W mid-2021 (same building quality) and Endwalker passes. I got 2 OCPs since but in really heavy OC +700W in 1 second (cpu+gpu oc in Rise of the TR 4k, PL 100% I forgot to apply the curve with shunts 4 mohms + bios 1000W), fortunately there's OCP in these cases ! Seasonic is safer let's say. If you have your Gigabyte at bios stock 390W, there's something wrong. My prime TX-750W managed Endwalker with 390W bios (Gigabyte Turbo). It started to trigger OCP when I tried 1000W XOC bios.


I wrote to Seasonic, and they told me that if it happened only once in 7 months (not OCP, just a simple shutdown where I pressed the power button to reboot), the PSU is fine.


----------



## ValentinZ

Sulek83 said:


> Hi Toopy,
> 
> I have the Zotac 3090 ArcticStorm. I think it's the same card as the 3090 HOLO but with a water block.
> I have the same issue when mining ETH: it maxes out at 280W and mines poorly, circa 116 MH/s.
> 
> Have you managed to find a solution?


Hi,
I have the same problem. Did you find any solution?


----------



## yzonker

ValentinZ said:


> Hi,
> I have the same problem. Did You find any solution?


Somebody popped up on Reddit with this problem and the same card. I suggested flashing a different BIOS.


https://www.reddit.com/r/gpumining/comments/ryl2hi/_/hrqb4jc


----------



## Dreams-Visions

Just catching up on the 1kW Strix w/reBAR BIOS. So that I'm clear: is this BIOS okay to use on an actual Strix card? I couldn't tell whether there's a general issue with the BIOS, or only an issue for people attempting to use it on non-Strix cards.

I have a 3090 Strix and would be interested in giving it a try if it's not going to be problematic for me. Thanks!


----------



## Nizzen

Dreams-Visions said:


> Just catching up on the 1k strix w/rebar bios. Just so that I'm clear: is this BIOS okay to use on an actual Strix card? I wasn't clear if there's just a general issue with the bios, or an issue with it for people who are attempting to use it on non-Strix cards.
> 
> I have a 3090 Strix and would be interested in giving it a try if it's not going to be problematic for me. Thanks!


5 seconds to flash, 5 seconds to reflash. Test it, then report back.


----------



## Dreams-Visions

Nizzen said:


> 5 seconds to flash, 5 seconds to reflash. Test it, then report back.


_sweats_

roger that, sir!


----------



## wtf_apples

kryptonfly said:


> You need to create a bat file. nvidia-smi -lgc 2220 (or whatever you want) but first you have to locate where is SMI => https://stackoverflow.com/questions/57100015/how-do-i-run-nvidia-smi-on-windows/57100016#57100016
> Then open notepad, paste the path and add :
> nvidia-smi -lgc 2220 (to lock at 2220). Save the file as .bat
> Make another file to reset : nvidia-smi -rgc
> 
> I have a Gigabyte Turbo 2*8 pins
> Yep surely my quad 3400mhz 13-12-12-24 helps a lot too. I just got a 12900k yesterday I will test and compare if it worths the shot.


Hello, well, I've spent a while now trying to get this working.
EDIT:
Fixed the error; I had an extra character in there.
C:\windows\system32 nvidia-smi -lgc 2100 gives no error.
Sorry for the dumb questions.








Curious what I'm doing wrong?
Also, will MSI AB, PX1, and HWiNFO64 show the new effective clock after using this bat file?

Thanks


----------



## yzonker

wtf_apples said:


> Hello, Well ive spent awhile now trying to get this working.
> 
> I created a .bat, C:\windows\system32>nvidia-smi -lgc 2100. It opens and closes quickly, so I guess its working?
> Then I created a file "C:\Windows\System32\nvidia-smi -rgc" and I get this error.
> View attachment 2542275
> 
> Curious what im doing wrong?
> Also will msi ab, px1 and hwinfo64 show the new effective clock after using this bat file?
> 
> Thanks


Is that a typo in the first line, '>' instead of a backslash? It may not be doing anything if that's what you have in the bat file. Not sure about the error, but it does have to run as admin.


----------



## wtf_apples

yzonker said:


> Is that a typo in the first line? ">" instead of a backslash. It may not be doing anything if that's what you have in the bat file. Not sure about the error though. It does have to run as admin though.


Thank you. I should have seen that extra character in there. Thanks again


----------



## tps3443

I got me one of those cool water chillerzzzz. They really do provide the best results.

With the fans at 0 RPM, it drops down to 2°C idle in a matter of minutes. If I run my radiator fans at 30% it stays warmer, around 9°C idle, with very low load temps. Higher fan RPM gives higher temps, so you can fine-tune the chiller to run at whatever temp you want.

Water chillers are just incredible though!

Heavy load in games is just ridiculously cool, like 28°C with a heavy, heavy OC. That's why I run 30% fans: at 0 RPM it does get cooler, but it gets too cold, and I want to avoid condensation.


----------



## KedarWolf

I couldn't run a chiller. If I have my A/C or space heater and my flat-screen TV on while playing games, I blow a breaker. Those chillers use a ton of power, and I could never have my heater or A/C on with the PC running.

The drawbacks of living in a larger bachelor apartment.


----------



## J7SC

@Falkentyne et al ...don't know if you have seen this DerBauer vid re. adjustable shunt mods by Elmor


----------



## yzonker

@J7SC Have you pulled a card back out of a block yet where you used the thermal putty? I'm wondering how stuck the card is and how hard it is to clean up? 

I've got a Heatkiller block on the way and toying with using the putty since I have a tub anyway.


----------



## J7SC

yzonker said:


> @J7SC Have you pulled a card back out of a block yet where you used the thermal putty? I'm wondering how stuck the card is and how hard it is to clean up?
> 
> I've got a Heatkiller block on the way and toying with using the putty since I have a tub anyway.


I pulled the card off the block within the first week of thermal putty application to make a mod in the late summer, and clean-up was somewhat easier than brittle pads or old paste. But since then I haven't pulled it apart - temps are steady as she goes, though.


----------



## kryptonfly

wtf_apples said:


> Hello, Well ive spent awhile now trying to get this working.
> EDIT______________
> Fixed the error, had an extra character in there.
> C:\windows\system32 nvidia-smi -lgc 2100 gives no error.
> Sorry for the dumb questions..
> View attachment 2542275
> 
> Curious what im doing wrong?
> Also will msi ab, px1 and hwinfo64 show the new effective clock after using this bat file?
> 
> Thanks


I don't know if you definitely fixed your trouble, but generally SMI is located at C:\Windows\System32\DriverStore\FileRepository\nv_*** (the exact folder depends on the driver). Enter that folder (nvidia-smi should be inside), click on the path in the address bar, and copy/paste it into Notepad. Just append \nvidia-smi -lgc 2100, save as a .bat file, and run as Admin.

"No error" doesn't mean it works; if the path points to nowhere, Windows simply has nothing to report.

Yes, you can directly see the clock changing in your monitoring; if not, you did something wrong.
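To spell out the recipe above, here's a small helper that writes the two .bat files (a sketch only; the DriverStore folder name below is a placeholder — substitute whatever nv_dispi.inf_amd64_* folder your driver actually installed, and run the resulting .bat files as Administrator):

```python
from pathlib import Path

# Placeholder path: replace with the real nv_dispi.inf_amd64_* folder
# found under C:\Windows\System32\DriverStore\FileRepository on your system.
SMI_DIR = r"C:\Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_XXXX"

def lock_bat(mhz: int) -> str:
    """One-line batch file pinning the GPU core clock via nvidia-smi -lgc."""
    return f'"{SMI_DIR}\\nvidia-smi" -lgc {mhz}\r\n'

def reset_bat() -> str:
    """One-line batch file restoring default clock behaviour via nvidia-smi -rgc."""
    return f'"{SMI_DIR}\\nvidia-smi" -rgc\r\n'

# Write the two helper files next to this script.
Path("lock_gpu.bat").write_text(lock_bat(2220))
Path("reset_gpu.bat").write_text(reset_bat())
```

Quoting the full path avoids the "extra character" pitfalls discussed earlier in the thread.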



tps3443 said:


> I got me one of those cool water chillerzzzz. They really do provide the best results.
> 
> With the fans at 0 RPM, temps drop it down to 2C idle in a matter of minutes. If I run my radiator fans at 30% it keeps it warmer and I can stay at around 9C idle. And very low load temps. Higher rpm fans provide higher temps, thus allowing you to fine tune the chiller to run at whatever temp you want.
> 
> Waterchillers are just incredible though!
> 
> Heavy load in games is just ridiculously cool. It’s like 28C with a heavy heavy OC. That’s why 30% fans. If I run 0 RPM, it does get cooler but it gets too cold. And I want to avoid condensation.
> 
> View attachment 2542338
> 
> View attachment 2542337
> 
> View attachment 2542336
> 
> View attachment 2542335


Awesome 
What is the brand and model number ?


----------



## tps3443

kryptonfly said:


> I don't know if you definitely fixed your trouble but generally SMI is located at : C/windows/system32/driverstore/Filerepository/nv_***(something, depends of the drivers). Enter in this folder (nvidia-smi should be inside), click on the path in address bar and copy/paste into notepad. Just add : \nvidia-smi -lgc 2100. Save as a .bat files and run as Admin.
> 
> "no error" doesn't mean it works, just windows didn't find something to say if it pointing to nowhere.
> 
> Yes, you can directly see the clock changing in monitoring, if not you did something wrong.
> 
> Awesome
> What is the brand and model number ?


I bought it on eBay for $250.00 BIN with $27 shipping, lol. I could not pass it up. It was listed as a 1/2 HP chiller in the ad, but the photo showed a 1 HP unit. The seller said I would receive what was pictured, so I would either get the AACH50 (1/2 HP) or get super lucky and receive an AACH100 (1 HP). Obviously I didn't care either way which model I received.

I actually received the AACH50 (1/2 HP unit); it's the popular and very common Hailea HC-500.

You can run it 24/7 very reliably too. This thing gets water cold!!! Super cold!! And it does it fast. It's probably the coolest thing I have ever owned. I'm really barely using it; I'm not after the lowest possible temps, but it can get temps down to 39°F/2°C super quick.










This unit is super clean inside and runs incredibly well. I'd recommend a chiller to anyone who is always chasing low temps; it's literally a dream come true for any overclocker.


----------



## kryptonfly

@tps3443 : great 👍 With this I could simplify my watercooling and merge CPU + GPU into the same loop, because right now the CPU has 1x360mm + 1x280mm + 1x240mm *45 + 3x120mm *45.
The GPU has 5x360mm external with 15 Arctic F12s at 5V (2x 5/12V hubs), around 40-41°C at 500W, 23°C ambient.


----------



## dpoverlord

Anyone have issues with the EVGA 3090 locking up when overclocking in games?


----------



## EarlZ

In GPU-intensive games, roughly how much FPS do we gain with something like a +100~150 MHz OC over, say, an 1800 MHz core (3440x1440)?


----------



## tps3443

dpoverlord said:


> Anyone have issues with the EVGA 3090 locking up when overclocking in games?


It could be numerous things: you're hitting a power limit, or maybe it's boosting to a point whose voltage is too low for the GPU to handle. Reduce your OC, or run the GPU cooler if possible.



kryptonfly said:


> @tps3443 : great 👍 with this I could simplify my WC and mix cpu+gpu in the same loop because right now CPU has : 1x360mm + 1x280mm + 1x240mm *45 + 3x120mm *45
> GPU has 5x360mm external with 15 Arctic F12 at 5v (2x hub 5/12v) around 40-41°C at 500W, 23°C ambient.


That's a lot going on there. Yeah, I had a similar setup, several smaller radiators combined.

Honestly, I'd just get a single large Alphacool NexXxoS 1,080x45mm radiator like I did. It's like a poor man's Mora3 at about $125-$140 USD and offers the same performance. I recommend two D5s with it.

I have two 1,080x45 Alphacool NexXxoS rads, but I only use one of them; that's all that's needed without a chiller. It could keep my 3090 Kingpin at around 38°C in most cases, and that's with fans that are barely turning.


----------



## wtf_apples

kryptonfly said:


> I don't know if you definitely fixed your trouble but generally SMI is located at : C/windows/system32/driverstore/Filerepository/nv_***(something, depends of the drivers). Enter in this folder (nvidia-smi should be inside), click on the path in address bar and copy/paste into notepad. Just add : \nvidia-smi -lgc 2100. Save as a .bat files and run as Admin.
> 
> "no error" doesn't mean it works, just windows didn't find something to say if it pointing to nowhere.
> 
> Yes, you can directly see the clock changing in monitoring, if not you did something wrong.
> 
> Awesome
> What is the brand and model number ?


I really appreciate you checking in. I had located the wrong nvidia-smi.exe.
There are two: one in Windows\System32, and another in Windows\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_0bc9105c62ca22fb (driver 497.29).
When applied, I now see MSI AB report 2025 when I set 2200. I'll try adding OC in MSI AB now. I'm excited, time to tinker. Thanks!

Now to my next challenge: waiting for TG-PP10 to come back in stock, since my KP HC needs new pads big time.


----------



## wtf_apples

Well, after playing with Port Royal for about 1.5 hrs with the window open:
Set nvidia-smi to 2190, core +178, and mem +1425. Using PX1 with the 1kW BIOS.
Is it normal for my GPU to only pull around 450W max during the bench? It seems very low from what other people have told me.
I'll try DDU-ing my drivers again and see if it changes. This is what I could get before PR started to crash out a lot.

Here's my best run. Haven't used Classified yet.
NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. MEG X570 GODLIKE (MS-7C34) (3dmark.com)


----------



## Nizzen

wtf_apples said:


> Well after was playin with port r for about 1.5 hrs with the window open.
> Set nvidia smi to 2190, core 178 and mem 1425 mem. Using px1 with 1kw bios.
> Is it normal for my gpu to only pull around 450w max during the bench? Seems very low from that other ppl have told me.
> Ill try dduing my drivers again and see if it changes. This is what I could get before pr started to crash out a lot.
> 
> Heres my best run. Havent used classified yet.
> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. MEG X570 GODLIKE (MS-7C34) (3dmark.com)


Look at total power draw from the wall; CPU-Z readings are often way off.
With my 5950X and 3090, total power draw from the wall is up to 800W in some games and in the Time Spy benchmark.


----------



## kryptonfly

wtf_apples said:


> Well after was playin with port r for about 1.5 hrs with the window open.
> Set nvidia smi to 2190, core 178 and mem 1425 mem. Using px1 with 1kw bios.
> Is it normal for my gpu to only pull around 450w max during the bench? Seems very low from that other ppl have told me.
> Ill try dduing my drivers again and see if it changes. This is what I could get before pr started to crash out a lot.
> 
> Heres my best run. Havent used classified yet.
> NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5950X,Micro-Star International Co., Ltd. MEG X570 GODLIKE (MS-7C34) (3dmark.com)


Hmm, something's going wrong. I suggest you use a manual curve, because you need a point "above" your lock. I recommend setting it +30 MHz (2 bins) higher: as temperature rises, GPU Boost will try to lower your point but won't succeed, and you'll stay locked between two points on the curve (impossible without SMI). For 2190, set just one point at 2220 at your required voltage (maybe 1.081V, it depends on your GPU), that's it. Apply the curve and then lock with SMI. That way you won't have a single MHz moving in 3DMark. Here's my PR locked at 2205 MHz (1.018V); you can see the frequency doesn't move at all. You should see the same behavior.

I scored 15 737 in Port Royal
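The "2 bins above the lock" rule above can be written down directly (a sketch, assuming the usual 15 MHz-per-bin Ampere clock granularity):

```python
BIN_MHZ = 15  # one GPU Boost "bin" on Ampere is 15 MHz

def curve_point(target_lock_mhz: int, headroom_bins: int = 2) -> int:
    """Curve point to set in Afterburner/PX1 before locking with nvidia-smi -lgc:
    two bins above the lock, so thermal down-binning can't pull the clock
    below the locked value and the frequency stays pinned."""
    return target_lock_mhz + headroom_bins * BIN_MHZ

print(curve_point(2190))  # → 2220, matching the example above
```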


----------



## wtf_apples

Nizzen said:


> Look at total powerdraw from the wall. Cpu-z readings is often way off.
> With my 5950x and 3090, total powerdraw from the wall is up to 800w in some games, and timespy benchmark.


Yeah, I picked up one of those power meters and plugged my PC into it; it says total draw was around 640W. Don't think I've ever seen it over 680W, tbh.
I'll try Time Spy tomorrow and report back.


----------



## wtf_apples

kryptonfly said:


> Hmm, something goes wrong, I suggest you to use a manual curve because you must have a point "above" your personal locking. So, I recommend to set +30mhz (2 bins) higher because with temp, GPU boost will try to lower your point but it will not succeed and you will stay locked between 2 points in the curve (impossible without SMI). For 2190 => set just 1 point at 2220 based on your required voltage (maybe 1.081v depends of your GPU), that's it. Apply the curve and then lock with SMI. That way you won't have a single mhz moving in 3dmark. Here's my PR locked at 2205 mhz (1.018v), you can see frequencies don't move at all. You have to have the same behavior.
> 
> I scored 15 737 in Port Royal


Interesting. I'm just setting everything up here, and glad I found out about the SMI trick; never heard of it before, tbh. I'll follow these instructions tomorrow and see what I can do. I've saved that bench ISO from several pages back as well; I'll get that loaded shortly.

Very impressive score. Pretty crazy to hold such a high clock!


----------



## Nizzen

wtf_apples said:


> Interesting. Im just setting up everything here and glad I found out about the smi trick. Never heard of it before tbh. Ill follow these instructions tomorrow and see what I can do. Ive saved that bench iso several pages back as well, ill get that loaded shortly.
> 
> Very impressive score. Pretty crazy to hold such a high clock!


Average temperature 37 °C

Low temperature on gpu is the key


----------



## GRABibus

I am currently testing a 3090 Kingpin.

Port Royal with DLSS, *520W* BIOS with BAR => max power draw 480W. Seems low, no?
Power seems well balanced across the 3 connectors during Port Royal.

Max stable VRAM OC => offset +1300 MHz
Delta between GPU temp and hot spot temp => between 12°C and 17°C

Any comments on these first results?


----------



## yzonker

GRABibus said:


> I am currently testing a 3090 Kingpin.
> 
> Port Royal with DLSS, BIOS *520W* with BAR => Max power draw => 480W. Seems low, no ?
> Power seems well balanced on the 3 connector during port royal.
> 
> Max stable VRAM OC => Offset + 1300MHz
> Delta between GPU temp and hot spot temp => between 12°c and 17°C
> 
> Any comments on these first things...?


The 480w you are seeing is typical I think. I've seen others report it did not quite make it to 520w.


----------



## GRABibus

yzonker said:


> The 480w you are seeing is typical I think. I've seen others report it did not quite make it to 520w.


I'll have to play with the Classified tool on the 520W reBAR BIOS; hopefully I can reach 520W.


----------



## yzonker

GRABibus said:


> I have to play with classified tool on 520W rebar bios and I can reach 520W hopefully


Or just flash the 1kw bios.


----------



## GRABibus

yzonker said:


> Or just flash the 1kw bios.


I'm on the stock cooler 😁


----------



## GRABibus

Kingpin 3090.

Best PR run @ 21.5°C ambient
Drivers set to performance
ReBAR forced with NVIDIA Profile Inspector
Playing with Classified
BIOS 520W with ReBAR










NVIDIA GeForce RTX 3090 video card benchmark result - AMD Ryzen 9 5900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VIII HERO (3dmark.com)


----------



## GRABibus

Kingpin 3090, stock cooler.

Best PR run @ 21°C ambient
Drivers set to performance
ReBAR forced with NVIDIA Profile Inspector
Playing with Classified
BIOS KP 1000W XOC with ReBAR


















I scored 15 681 in Port Royal


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





I need the Classified tool for those scores.
Otherwise, the card draws 480W max in PR, even with the 520W BIOS... sad.


----------



## kryptonfly

@GRABibus : You'll gain so much under water cooling; heat is "almost" incompatible with high power draw 🔥. The hotter it gets, the more you need to increase voltage... and the heat rises again, more and more. It's an endless circle. I can pass PR at 2205 MHz locked at only 1.018V (Time Spy 2175 MHz locked, especially in GT2, same voltage), thanks to WC indeed. Unfortunately mine is a 2x8-pin, so I'm limited by melting fuses and the power stage, but you're not... You could lower voltages under WC.


----------



## GRABibus

kryptonfly said:


> @GRABibus : You will so much gain under WC, it's "almost" incompatible the heat🔥 with high power draw. More it's hot, more you need to increase voltage... And the heat rises again more and more. It's an endless circle. I can pass PR at 2205mhz locked - at only 1.018v (TimeSpy 2175mhz locked especially in the GT2, same voltage) thanks to WC indeed. Unfortunately it's a 2*8 pins so I'm limited by melting fuses & power stage but you're not... You could lessen voltages under WC.


Yes, for sure.
Would you do this WC for me?
I'm not an expert.


----------



## kryptonfly

GRABibus said:


> Yes for sure.
> Would you do this WC for me ?
> I am Not an expert


lol, I don't think I'm an expert either. It depends on your will, expectations, imagination (tastes...), money, PC case, room... I just used a Bykski waterblock, 5x360mm Bykski rads (€40 each at the time) outside the PC case for best results, a 240mm reservoir, and a Phobya DC-400 pump, which is amply enough, with 2 meters of tubing.


----------



## truehighroller1

kryptonfly said:


> lol I don't think I'm an expert too. Depends of your will, expectations, imagination (tastes...), money, pc case, room... I just used a Bykski WB, 5x360mm Bykski (40€ each at that time) outside the pc case for best results, a 240mm water tank and a Phobya DC-400 amply enough with 2 meters of tube.


That's how I set up my radiators! Four in tandem outside the case, hoses run in, two D5s.


----------



## kryptonfly

truehighroller1 said:


> That's how I setup my radiators! Four in tandem outside case ran the hoses in, two d5s.


Do you have some pics? They could enlighten our friend @GRABibus


----------



## truehighroller1

kryptonfly said:


> Do you have some pics ? It could enlighten our friend @GRABibus



The pictures are in my profile, I believe under showcase or whatever. It hasn't changed for a while, aside from me cleaning them.


----------



## J7SC

kryptonfly said:


> Do you have some pics ? It could enlighten our friend @GRABibus


Nope, I gently reminded @GRABibus about that before, when he benched his 3090 Strix on air with the 1kW KPE BIOS, but to no avail... I might have even shared some pics.


Spoiler



for two systems, total = 2520x60+

But GRABibus does know what he's doing


----------



## GRABibus

Do you guys know the XOC BIOS mentioned in this link below?

3090 []

*Download:* XOC unofficial BIOS, Version 90.02.17.40.88, XOC overclocking version for the RTX 3090 KINGPIN card??


----------



## yzonker

GRABibus said:


> Do you know guys the XOC Bios mentionned in this link below ? :
> 
> 3090 []
> 
> *Download:*XOC unofficial BIOS, Version 90.02.17.40.88 XOC overclocking version for RTX 3090 KINGPIN card ??


It's the old non-reBar KP bios. Not sure why the page shows a different version #.


----------



## GRABibus

kryptonfly said:


> @GRABibus : You will so much gain under WC, it's "almost" incompatible the heat🔥 with high power draw. More it's hot, more you need to increase voltage... And the heat rises again more and more. It's an endless circle. I can pass PR at 2205mhz locked - at only 1.018v (TimeSpy 2175mhz locked especially in the GT2, same voltage) thanks to WC indeed. Unfortunately it's a 2*8 pins so I'm limited by melting fuses & power stage but you're not... You could lessen voltages under WC.


2205 MHz locked in PR at 1.018V...
Even with very low temps, I'm not sure I'd reach such a frequency at so low a voltage.

You won the lottery!


----------



## Panchovix

yzonker said:


> It's the old non-reBar KP bios. Not sure why the page shows a different version #.
> 
> View attachment 2542803


Sorry for bothering, but where did you find ABE 0.08? I can't find it anywhere, lol.


----------



## wtf_apples

Panchovix said:


> Sorry for bothering, but where did you found ABE 0.08? I can't find it anywhere lol


Just rename .txt to .zip.
Just found it like an hour ago, reading through countless pages.


----------



## kryptonfly

GRABibus said:


> 2205MHz locked in PR at 1.018V...
> Even wIth very low temps, not sure to reach such frequency at so low voltage.
> 
> you won the lottery !


Probably, but my VRAM can only hold max +1048 MHz in game. Actually, high framerates are harder to hold than heavy 4K in my experience; they need more juice and lower temps. Time Spy is tougher than PR: at the same voltage the GPU max is 2175 MHz. With lower temps you'll easily be able to reduce voltages; below 40°C is the aim.

I scored 23 167 in Time Spy

Re-bar enabled but not forced


----------



## wtf_apples

Just saw how to force reBAR in 3DMark using NV Inspector; I'll be testing that this weekend.

Also, is anyone here using the WHEA suppression tools?
WHEAService, WHEA errors suppressor - unleash Ryzen processor high FCLK without performance penalties | Overclock.net
and
Windows 11 - Block WHEA from Event Log (Information) · Issue #2 · mann1x/WHEAService · GitHub

Going to try it out tomorrow


----------



## GRABibus

kryptonfly said:


> Probably but my vram can hold max +1048mhz in game. Actually high framerates are harder to hold than heavy 4K from my experience, need more juice and more lower temps, TimeSpy is stronger than PR, for same voltage the gpu max is 2175mhz. With lower temps you will be easily able to lessen voltages, below 40°C is the aim.
> 
> I scored 23 167 in Time Spy
> 
> Re-bar enabled but not forced


Mine can handle a max +1200 MHz offset in game, and there's a 14°C to 16°C delta between GPU temp and hot spot.
For a Kingpin card, which is supposed to be the king (along with the HOF), that's not so great 😊


----------



## Zogge

4001 working fine on my Formula + 5950X + 4x8GB 4400 CL18 G.Skill running at 3600 CL14 for benching / CL16 24/7.
I still get WHEAs at 3733+, so I decided to stay at 3600.


----------



## GRABibus

Zogge said:


> 4001 working fine on my Formula + 5950 + 4x8gb 4400CL18g-skill running at 3600 CL 14 bench / CL 16 24/7.
> I still get WHEAs 3733+ so I decided to stay on 3600.


Is this the right thread? 😂


----------



## GRABibus

wtf_apples said:


> Ya I picked up one of those voltage monitors and plugged my pc into it, says total draw was around 640w. Dont think ive ever seen it over 680w tbh.
> Ill try timespy tomorrow and report back.


Which GPU do you have ?
Kingpin ?


----------



## Roacoe717

Got another Strix 3090, stock BIOS. Might venture into watercooling in the future. Someone said this card could do 15k, but I don't see how; my old one topped out at 14.5k. I'd love to reach Legendary on air, but oh well.


----------



## GRABibus

Roacoe717 said:


> got another strix 3090, stock bios. might venture to watercool in the future. someone said this card could do 15k but i dont see how. my old one topped out at 14.5k, id love to reach legendary on air but owell.


Did you force reBAR in NVIDIA Profile Inspector for the « 3DMark Port Royal DLSS » profile?
Which BIOS did you use?


----------



## Roacoe717

GRABibus said:


> Did you force rebar in nvidia profile Inspector for « 3DMark Port Royal DLSS » profile ?
> Which bios did you use ?


No reBAR enabled, and just the Strix 480W BIOS. I'll try again later with reBAR enabled. I assume I follow these instructions for reBAR?


'1 - Download Nvidia Inspector. Probably at least version 2.3 ideal.
2 - In Nvidia Inspector, look for the icon that kind of looks like a hammer 2nd from the right, if you hover over it, it says "Show unknown setting from Nvidia Predefined Profiles", Activate it.
3 - Now if you browse any game in Nvidia Inspector in the Unknown flags at bottom, look for the following line.
0x000F00BA (17 profiles)
and
0x000F00BB (17 profiles)

For games that are whitelisted these flags will probably be already enabled, for other games the value will be 0x00000000. You can flip it to 0x00000001.

Update - It has been brought to my attention there is a 3rd flag as detailed below to enable.

0x000F00FF - 0x0000000040000000

I checked a few games and the ones I checked with them enabled by Nvidia have both flipped on, so my suggestion if you trying this out is to flip both, click apply, then when you launch app/game REBAR should be on. Obviously this is use at your own risk type of thing.'


----------



## GRABibus

Roacoe717 said:


> no rebar enabled and just the strix 480w bios. i'll try again later with rebar enabled. I assume I follow these instructions for rebar?
> 
> 
> '1 - Download Nvidia Inspector. Probably at least version 2.3 ideal.
> 2 - In Nvidia Inspector, look for the icon that kind of looks like a hammer 2nd from the right, if you hover over it, it says "Show unknown setting from Nvidia Predefined Profiles", Activate it.
> 3 - Now if you browse any game in Nvidia Inspector in the Unknown flags at bottom, look for the following line.
> 0x000F00BA (17 profiles)
> and
> 0x000F00BB (17 profiles)
> 
> For games that are whitelisted these flags will probably be already enabled, for other games the value will be 0x00000000. You can flip it to 0x00000001.
> 
> Update - It has been brought to my attention there is a 3rd flag as detailed below to enable.
> 
> 0x000F00FF - 0x0000000040000000
> 
> I checked a few games and the ones I checked with them enabled by Nvidia have both flipped on, so my suggestion if you trying this out is to flip both, click apply, then when you launch app/game REBAR should be on. Obviously this is use at your own risk type of thing.'
> View attachment 2542877


Yes, you'll have to follow that process in NVIDIA Profile Inspector for the « 3DMark Port Royal DLSS » profile.

You'll increase your score by 100-200 points.

The best BIOS, in my opinion, for the Strix on air is the MSI Suprim X reBAR:








MSI RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com





This is the one that gives the best score in PR, apart from the KP 1000W BIOS (but that one is too dangerous).


----------



## JKurz

Damn it. Got a new EVGA FTW3 and got my EK water block on perfectly. Just went simple with a nickel backplate while waiting for the new active backplates to come out.

And I stripped a damn screw on the black backplate. First screw I've ever stripped on a computer or computer component. It's nice and tight though, and all the others are perfect, so no worries there. I just need to get the backplate off to change a thermal pad that's too thin! Ugh. Any helpful suggestions? I should probably give the gym a rest for a week or two!

😭😭😭


----------



## yzonker

Either cut a slot in it with a dremel if you can or drill it out. Use a vacuum to suck up the chips as you go.


----------



## JKurz

yzonker said:


> Either cut a slot in it with a dremel if you can or drill it out. Use a vacuum to suck up the chips as you go.



Yep, that's on my list to do. Damn tiny ass screws tho.


----------



## Roacoe717

Thanks, you guys; I got a score I like. I took my PC outside in this chilly PA weather. I wish I'd had HWiNFO64 up front, but you get the idea. Air-cooled Strix 3090, 480W BIOS, with reBAR.


----------



## wtf_apples

GRABibus said:


> Which GPU do you have ?
> Kingpin ?


Yeah, the KP with the Hydro Copper block.


----------



## tps3443

You guys seen eBay? All of the 3090 Kingpin owners are LITERALLY panic selling THEIR CARDS on eBay LOL.


Last week, there were 3 auctions on eBay for a 3090 Kingpin. Now we’ve got like 200, probably.


----------



## geriatricpollywog

tps3443 said:


> You guys seen eBay? All of the 3090 Kingpin owners are LITERALLY panic selling THEIR CARDS on eBay LOL.
> 
> 
> Last week, there was 3 auctions on eBay for a 3090 Kingpin. Now we’ve got like 200 probably.


Why sell on Ebay when you can RMA it in 10 years for a free upgrade?


----------



## yzonker

tps3443 said:


> You guys seen eBay? All of the 3090 Kingpin owners are LITERALLY panic selling THEIR CARDS on eBay LOL.
> 
> 
> Last week, there was 3 auctions on eBay for a 3090 Kingpin. Now we’ve got like 200 probably.


Is that in anticipation of the 3090 Ti KP that has allegedly been delayed now? EVGA still has their mystery video up for the 27th.


----------



## tps3443

geriatricpollywog said:


> Why sell on Ebay when you can RMA it in 10 years for a free upgrade?
> 
> View attachment 2542983


That’s awesome. My warranty isn’t nearly as long, lol.


----------



## sew333

Hey guys. A week ago I had a weird shutdown when I started Metro Exodus Enhanced. Right in the middle of the intro, the PC shut off. I pressed the power button and the PC booted again; no power-cycling of the PSU needed, just a normal restart. It happened once and I am not able to reproduce it.

I don't think it's overcurrent protection getting triggered? That causes a complete shutdown of the PSU needing a power cycle to clear it, and I don’t have to flip the switch.

I have an RTX 3090 Gigabyte Gaming at stock, a 10850K at stock, and a Seasonic TX-850 Prime Titanium.

I built the PC in March 2021 and never had a single shutdown before. Should I worry?

I've also tested multiple games without issues.


----------



## yzonker

Got my Zotac mounted in the Heatkiller block complete with thermal putty. 

Mining mem temps are down about 6C from the Corsair. Running at 62C @ 120 MH/s right now, 23C ambient. Not sure if the putty is actually better, or if it's the block. The Heatkiller block only uses 0.5mm pads for the front VRAM and 1.0mm pads on the backplate.

Still not stellar in this crowd, but the block delta at 500W is 13-14C. My Corsair block running the same test was 16-17C. (Running Kombustor for a steady load.)
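Those deltas divide out to a rough block thermal resistance, which makes the two blocks easier to compare across different loads. A quick sketch with the numbers above (the 500W load is as stated; using the midpoints 13.5C and 16.5C of the quoted ranges is my simplification):

```python
# Rough coolant-to-die thermal resistance from the block deltas above:
# resistance (C/W) = temperature delta across the block / heat load.

def block_resistance(delta_c: float, load_w: float) -> float:
    return delta_c / load_w

heatkiller = block_resistance(13.5, 500)  # midpoint of 13-14C at 500W
corsair = block_resistance(16.5, 500)     # midpoint of 16-17C at 500W
print(f"Heatkiller: {heatkiller:.3f} C/W, Corsair: {corsair:.3f} C/W")
# Heatkiller: 0.027 C/W, Corsair: 0.033 C/W
```

A lower C/W figure means the block sheds the same wattage with a smaller temperature rise.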

All in all I'm happy with it. Probably not really worth swapping, but I was just bored and wanting to try something new and they were actually in stock for once.

This is what my Corsair block looked like when I pulled the card out. Kryonaut Extreme was still soft and not dried out. (about 6 months) Notice though the half circles in the pads. Those are screw holes directly under the mem chips. I had forgotten it had those. Not ideal.










No need to double-washer the Heatkiller, as the screws can go all the way to the bottom of the standoff. Not sure why they used plastic on those though. Also don't know why they needed to tack on that copper shim for the single mem chip.


----------



## tps3443

yzonker said:


> Is that in anticipation of the 3090 Ti KP that has allegedly been delayed now? EVGA still has their mystery video up for the 27th.


January 27th is just the Nvidia 3090 Ti announcement. The 3090 Ti Kingpin comes out in March.


----------



## J7SC

...😠


----------



## wtf_apples

Dang, I forgot to buy the extended warranty.


----------



## yzonker

Who uses a video card for 10 years though? Nobody on this forum I bet.


----------



## Lobstar

yzonker said:


> Who uses a video card for 10 years though? Nobody on this forum I bet.


EVGA lets you transfer the warranty once I believe. It's also like $60 so why not for $6/yr?


----------



## geriatricpollywog

Lobstar said:


> EVGA lets you transfer the warranty once I believe. It's also like $60 so why not for $6/yr?


They do not transfer the extended warranty.

You can’t buy the 10 year anymore. The best is 7 yr for $240.


----------



## tps3443

J7SC said:


> ...😠


A stock 3090 Kingpin is already faster than a stock 3090 Ti, on paper.

Now, the 3090 Ti Kingpin will probably only be slightly faster than a regular 3090 Ti. I just don’t see them squeezing much more out of Samsung's 8nm process.

Overclock them both to the 2.1-2.2 GHz range with 22-23 Gbps memory, and I just don’t see much difference between them.

I’ve tossed around the idea of potentially snagging a 3090 Ti Kingpin, and I keep coming back to meh... do I really want to deal with the mild stress and worry of finding one, buying one, and installing one?

Plus, I don’t think the 3090 Ti KP releases until March 2022.


----------



## wtf_apples

So I decided to repad and repaste my card tonight. I've seen it mentioned that people have used 2mm pads on the die side, so I gave it a shot. KPx on the die.
I'm confused, because the 2mm on the MOSFETs seems to be making contact. Even people at EVGA have quoted different numbers than this.


----------



## wtf_apples

One run of Port Royal. 21C ambient, pumps at 30% and fans at 700 RPM. I think I need something better than these EK pads...


----------



## wtf_apples

I'll try putting thermal paste on the pads.


----------



## J7SC

wtf_apples said:


> Ill try putting thermal paste on the pads


Your temps are not bad other than the delta, but hard to say w/o full-bore run and ambient C info...I started putting thermal paste on the pads on all my GPU blocks a while back, and I find it really does help. A 'perfect mount' also depends not only on the thickness of your pads, but also their firmness rating.

Below is a 3090 Strix after stressing, including w/ Port Royal runs. This is with the stock bios, but the 520 KPE on the other bios is only a degree +- or so warmer.

Gelid GC on the die, Thermalright 12.8w/mk pads with MX5 paste on the phases, thermal putty on the vram (front & back) plus extra rear heatsink.


----------



## tps3443

I run an Optimus Fujipoly full-coverage rear pad on my 3090 KP HC. I have the memory at +1,850 so far. Totally stable! It's amazing how much further it runs once it runs cooler.

My card has some nasty memory on it, and it's still climbing. I can run NiceHash at +2000.
Cyberpunk 2077 has always been demanding in my past experience, so I'm slowly going up on memory clocks. Slowly but surely working towards that +2000.

Temps are looking fantastic though.


----------



## J7SC

tps3443 said:


> I run a Optimus Fujipoly full coverage rear pad on my 3090KP HC. I have the memory at +1,850 so far. Totally stable! Amazing how far it runs once it runs cooler.
> 
> My card has some nasty memory on it. It’s still climbing. I can run NiceHash at +2000.
> Cyberpunk 2077 has always been demanding with my past experiences, so I’m slowing going up on Memory clocks. Slowly but surely working towards that +2000.
> 
> Temps are looking fantastic though.
> View attachment 2543025


Nice! Is that with the chiller you depicted the other day, or prior? I'm thinking about a chiller but am a bit concerned about a) the additional power draw on an already busy circuit, and b) noise... how loud is it?


----------



## tps3443

J7SC said:


> Nice ! Is that w/ the chiller you depicted the other day, or prior ? I'm thinking about a chiller but am concerned a bit about a.) the additional power draw on an already busy circuit, and b.) 'noise...how loud is it ?



Yes, this is with the chiller at an easy 63F-65F water temp. Also, all dip switches are turned on. I can go much lower, but there's really no need.

Power draw is not too crazy with a 1/2HP unit: just under 500 watts max for a brief moment when it first kicks on, then it immediately settles into the 300W range, then even less than that. And it only runs for about 15-30 seconds at a time, every 3-5 minutes, or even much longer, depending on how low or high you've got its thermostat set.



Noise isn’t too bad either. It sounds like one of those really small A/C window units running.

Seeing my memory OC like this is fantastic, and achieving game stability at that.


----------



## wtf_apples

> Your temps are not bad other than the delta, but hard to say w/o full-bore run and ambient C info...I started putting thermal paste on the pads on all my GPU blocks a while back, and I find it really does help. A 'perfect mount' also depends not only on the thickness of your pads, but also their firmness rating.
> 
> Below is a 3090 Strix after stressing, including w/ Port Royal runs. This is with the stock bios, but the 520 KPE on the other bios is only a degree +- or so warmer.
> 
> Gelid GC on the die, Thermalright 12.8w/mk pads with MX5 paste on the phases, thermal putty on the vram (front & back) plus extra rear heatsink.


Very impressive temps. I will use your paste method.
Yeah, the biggest problem is pad firmness. I'm hearing different reviews from people about pads; some say the Gelid Extreme and the Thermalright ones are pretty hard. I've heard good things about the Gelid Ultimate, but the price adds up fast for those. Would be nice to find some 2.5mm pads that are super soft.
I've added some MX5 and KPx to my current pads now, front and back.

Previously my fans were set at 50% and pumps at 30%. It's 21C in here, so it's pretty warm. Just took my card apart 3 times. Been a long night, so I'll do more testing tomorrow!


----------



## J7SC

wtf_apples said:


> Very impressive temps. I will use your paste method.
> Ya the biggest problem is pad firmness. Im hearing different reviews from ppl about pads. Some say gelid extreme and the thermalright are pretty hard. Ive heard good things about gelid ultimate's, price adds up fast for those.. Would be nice to find some 2.5 that are super super soft.
> Ive added some mx5 and kpx to my current pads now, front and back.
> 
> Previously my fans were set at 50% and pumps @ 30%. 21c in here so its pretty warm. Just took my card apart 3 times. Been a long night so ill do more testing tomorrow!


For future reference, you might want to consider getting some thermal putty? It conforms so easily that it's always a perfect fit... I also used it on the back of the GPU die for heat transfer to the backplate, and on select VRAM components where I didn't use the Thermalright pads + MX5.

Time to go and count some sheep...


----------



## Lobstar

geriatricpollywog said:


> They do not transfer the extended warranty.
> 
> You can’t buy the 10 year anymore. The best is 7 yr for $240.


That's strange. I checked with my buddy, and the 970 I sold him still lists its 10-year warranty that we transferred a few years back. Also, on my 3090 I was able to purchase the 10-year for $60 at the end of 2020. It must be a very recent change.


----------



## J7SC

tps3443 said:


> Yes, this is with the chiller at an easy 63F-65F water temp. Also, all dip switches are turned on. I can go much lower, but really no need.
> 
> Power draw is not to crazy with a 1/2HP unit. Just under 500 watts max for a brief moment when it first kicks on, and then immediately settles to the 300’s watt range, then even less than that. Now, it only runs for about 15-30 second at a time, every 3-5 or even much longer depending how low or high you’ve got it’s thermostat set.
> 
> 
> 
> Noise isn’t too bad either. It sounds like one of those really small A/C window units running.
> 
> Seeing my memory OC like this is fantastic, and achieving game stability at that.


Interesting! What are the inner and outer tube diameters of the chiller? What kind of fittings?


----------



## GRABibus

Kingpin Hybrid
*Stock cooler*

BIOS: KP 1000W XOC ReBAR
17°C ambient
Drivers set to performance mode
Crazy voltages with Classified


PR (Rebar forced in NVIDIA Profile Inspector) :
















I scored 15 772 in Port Royal (AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10, www.3dmark.com)






TS :
















I scored 21 737 in Time Spy (AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10, www.3dmark.com)


----------



## yzonker

Looks like you will need to slide your home mortgage through the card reader to get a 3090Ti. LOL.









NVIDIA GeForce RTX 3090 Ti Custom Models Listed Online, Over $3500 US & Up To An Insane $4500 US Price
The NVIDIA GeForce RTX 3090 Ti is going to be the most expensive graphics card ever, with its preliminary pricing reaching over $4000 US. (wccftech.com)


----------



## tps3443

yzonker said:


> Looks like you will need to slide your home mortgage through the card reader to get a 3090Ti. LOL.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA GeForce RTX 3090 Ti Custom Models Listed Online, Over $3500 US & Up To An Insane $4500 US Price (wccftech.com)


What a joke. “Full fat GA102?” Lol.

It has less than 3% more CUDA cores, so the “full fat” model is within margin of error.

The 3090 Ti is a weak disaster. All it did was put a “newness coat” on the already existing 3090 model, so prices can’t go down. Nvidia sees the numbers dropping, people start to get their hands on them, and boom! The 3090 Ti: same performance as a 3090, only much more expensive.


----------



## GRABibus

Nvidia has delayed RTX 3090 Ti production, pushing back the launch of their 40 TFLOPS monster GPU
We might not be hearing more about Nvidia's RTX 3090 Ti this month. (www.overclock3d.net)


----------



## tps3443

GRABibus said:


> Nvidia has delayed RTX 3090 Ti production, pushing back the launch of their 40 TFLOPS monster GPU (www.overclock3d.net)


Oh no! How unfortunate. I can't believe Nvidia has delayed the 40 TFLOPS monster GPU.

I guess I'll just have to use my 40.3 TFLOPS 3090 KP instead.


----------



## J7SC

GRABibus said:


> Nvidia has delayed RTX 3090 Ti production, pushing back the launch of their 40 TFLOPS monster GPU (www.overclock3d.net)


...just wait a while for the truly next gen models


----------



## GRABibus

J7SC said:


> ...just wait a while for the truly next gen models


5000W XOC Bios ?


----------



## J7SC

GRABibus said:


> 5000W XOC Bios ?


11500W XOC


----------



## GreatestChase

Hey everyone, first-time poster here. I finally got my hands on a 3090 Kingpin and HC block recently and have been jumping into benchmarking seriously for the first time. I've made it up to 15,301 so far, and I'm pretty sure my limiting factor right now is temps. I live in central AL, so the temps outside aren't the greatest, but I'm able to get my loop down to around 15C at idle by leaving my office window open. It's supposed to get colder overnight, so I'm hoping I'll be able to get a run in the morning before I have to go to clinic. I'm able to run a +200 core offset and a +1700 mem offset, so hopefully I'll see some improvements as I get some colder air into the room.


----------



## GRABibus

GreatestChase said:


> Hey everyone, first time poster here. Finally got my hands on a 3090 Kingpin and HC block recently and have been jumping into benchmarking for my first time in which I'm actually trying. I've made it up to 15,301 so far and I'm pretty sure my limiting factor right now is temps. I live in central AL so my temps outside aren't the greatest, but I'm able to get my loop down to around 15 C at idle leaving my window to my office open. It's supposed to get colder over the night so I'm hoping I'll be able to get a run in the morning before I have to go to clinic. Able to run at a +200 core offset and +1700 mem offset. I think my limiting factor at the moment is my temps so hopefully I'll see some improvements as I get some colder air in the room.


Those offsets are nice.
Which BIOS did you use?
Can you please post a GPU-Z screenshot of your PR run showing max power draw and max temperatures?


----------



## GreatestChase

GRABibus said:


> Those offsets are nice.
> Which bios did you use ?
> Can you please post a GPUz screenshot of your PR run showing max power draw and max temperatures ?


I'm using the 1 kW XOC BIOS and the Classified tool. NVVDD @ 1.24325, MSVDD @ 1.1, 900 and 800 for the switching frequencies, and LLCs at levels 6 and 5. I didn't think to take a screenshot, but the highest power draw I've seen so far was 613W; for my PR run I believe it was in the 580s, and max temp was 43C if I remember correctly. I'm set on chasing geriatricpollywog's scores, as he's the only person above me on a 3090 and 12700K. I'm not sure how far I'll make it without using ice water though; it looks like his average temps are in the mid-20s C during his runs.


----------



## GRABibus

GreatestChase said:


> I'm using the 1 kW XOC bios and the classified tool. NVVDD @ 1.24325, MSVDD @ 1.1, 900 and 800 for the switching frequencies, and LLCs at level 6 and 5. And I didn't think to take a screenshot, but the highest power draw I've seen so far was 613 W, but for my PR run I believe it was in the 580s. And for max temp it was 43 C if I remember correctly. I'm set on chasing GeriatricPollywogs scores as he's the only person above me on a 3090 and 12700k. I'm not sure how far I'll make it without using ice water though, looks like his average temps are in the mid 20s C during his runs.


Here's my best PR run today with the XOC 1000W BIOS, with ReBAR forced via NVIDIA Profile Inspector (in the screenshot you can see my Classified settings). I am still on the stock cooler. Max power draw is 640W.









[Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)





Did you enable ReBAR in Profile Inspector?
If not, do it. You will increase your score by 200 points.

Also try reducing NVVDD to 1.2V and increasing MSVDD to 1.2V.

Why such high switching frequencies?
Do you see any improvement versus 400 kHz?


----------



## sew333

sec


----------



## GreatestChase

GRABibus said:


> Here s my best PR run today with xoc 1000W bios with rebar forced with nvidia profile inspector (in the screenshot you can see my classified settings). I Am still on stock cooler. Max power draw is 640W
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club (www.overclock.net)
> 
> 
> 
> 
> 
> did you enable rebar in profile inspector ?
> If no, do it. You will increase score by 200 points.
> 
> try also to reduce NVCDD to 1,2V and increase MSVDD to 1,2V also.
> 
> why so high switching frequencies ?
> Do you see any improvement versus 400Khz?


I have not forced ReBAR, so it may not be active; I'll have to do that. And I'll definitely try those voltages. As far as the switching frequencies, I was following the recommendations that Luumi shared in his KP OC video. I'm extremely new to XOC, so I've been scouring for information, and his was one of the few guides I found that used the Classified tool. Definitely still learning, so I appreciate the info.

One thing I have noticed, though, is that temp for temp, my core doesn't quite produce the clocks that others do. Looking at your run, your average clock speed is 2185 MHz at an average of 53C, whereas mine is 2123 MHz at 41C. I'm assuming that's just silicon lottery?


----------



## sew333

Also: is it overcurrent protection getting triggered? That causes a complete shutdown of the PSU needing a power cycle to clear it, but I don't have to flip the switch.

I had one shutdown like that while loading Metro Exodus Enhanced, just before the main menu.

So is it OCP or not? I don't have to flip the switch.

RTX 3090 Gaming
10850K stock
Seasonic TX-850 Titanium Ultra Prime


----------



## GRABibus

GreatestChase said:


> I have not forced rebar so it may not be active then, so I'll have to do that. And I'll def try those voltages. As far as the switching frequencies, I was following recommendations on settings that Luumi shared in his KP OC video. I'm extremely new to XOC so I've been scouring looking for information and his was one of the few I found with the classified tool. Definitely still learning so I appreciate the info.
> 
> One thing I have noticed thought is that temp for temp, my core doesn't quite produce the clocks that others do. Looking at your run, your average clock speed is 2185 MHz @ an average of 53 C where as mine is 2123 MHz @ 41 C. I'm assuming that's just silicon lottery?


It could be silicon lottery, or you need to play with the voltages in Classified: increasing MSVDD increases power draw, so it's all upside with the XOC BIOS.


----------



## GreatestChase

GRABibus said:


> Silicon lottery it could be.
> or you must play with voltages in classified :
> Increasing MSVDD increases power draw. So it is all benefit with XOC bios.


I may give it another run tonight, but I definitely want to get another run in the morning. Temps tomorrow morning and the next morning are supposed to be the coldest in my area for the foreseeable future so I want to try to get a run with the best temps I can. Otherwise, I guess I'll have to try setting up an ice water loop.


----------



## tps3443

I can’t get good numbers at all in Port Royal anymore. I haven't run it in a while, and I guess I’ve lost my touch for benching PR. I can get maybe 15,500-15,600, lol, which is really laughable, and with 24C max GPU temps at that.

I used to run and get around 15,900 with 57-60C GPU temps.

I am running a much newer Windows build (1941), and I’ve also got a faster CPU. I just can’t get any worthwhile numbers at all from Port Royal.

I can get my 3090 KP to 2C idle temps at the touch of a button. I can also get some really good numbers out of Time Spy and Fire Strike, just not Port Royal for some weird reason.


----------



## GRABibus

tps3443 said:


> I can’t get good numbers at all in Port Royal anymore. I have not ran it in a while, and I’ve lost my touch for benching PR I guess.. I can get maybe 15,500-15,600 lol. Which is really laughable. With 24C max GPU temps at that.
> 
> 
> I use to run and get around 15,900 with (57-60C) GPU temps
> 
> I am running a much newer Windows 1941 build, Ive also got a faster CPU. I just can’t get any worth while numbers at all from Port Royal.
> 
> I can get my 3090KP to 2C idle temps at the touch of a button. I can also get some really good numbers out of Timespy, and Firestrike. Just not Port Royal for some weird reason.


ReBAR forced?
Do you use Classified? (Stupid question, maybe.)


----------



## sew333

tps3443 said:


> I can’t get good numbers at all in Port Royal anymore. I have not ran it in a while, and I’ve lost my touch for benching PR I guess.. I can get maybe 15,500-15,600 lol. Which is really laughable. With 24C max GPU temps at that.
> 
> 
> I use to run and get around 15,900 with (57-60C) GPU temps
> 
> I am running a much newer Windows 1941 build, Ive also got a faster CPU. I just can’t get any worth while numbers at all from Port Royal.
> 
> I can get my 3090KP to 2C idle temps at the touch of a button. I can also get some really good numbers out of Timespy, and Firestrike. Just not Port Royal for some weird reason.


You must force MAX PERFORMANCE power mode in Windows to get a stable score. That worked for me.


----------



## GRABibus

tps3443 said:


> I use to run and get around 15,900 with (57-60C) GPU temps


Do you have the settings and a 3DMark link for that kind of run?


----------



## wtf_apples

Would be nice to find a bulk supply of this stuff for cheap. https://www.evga.com/products/product.aspx?pn=M047-10-000003


----------



## geriatricpollywog

wtf_apples said:


> Would be nice to find a bulk supply of this stuff for cheap. https://www.evga.com/products/product.aspx?pn=M047-10-000003


If I'm cleaning the card, I scrape as much putty as I can from the chokes, then I re-apply the putty after cleaning. If I'm just re-pasting, I leave the putty on.


----------



## yzonker

wtf_apples said:


> Would be nice to find a bulk supply of this stuff for cheap. https://www.evga.com/products/product.aspx?pn=M047-10-000003


Actual EVGA putty, or just putty in general?






TG-PP10-50 t-Global Technology: Thermal Silicone Putty, 50 gram container (www.digikey.com)





Of course out of stock again.


----------



## wtf_apples

*geriatricpollywog*
Yeah, I should have been more careful and saved that stuff. I could easily have put it back in the syringe and reapplied it...

*yzonker*
Just something with the same consistency as the putty EVGA provides with the HC block, minus the expensive price.
I ordered some TG-PP10 a few days ago; it said out of stock, but hopefully it ships soon.


----------



## J7SC

wtf_apples said:


> *geriatricpollywog*
> Ya I should have been more careful and saved that stuff. Could easily put it back in the syringe and reapply...
> 
> *yzonker*
> Just something of the same consistency as the one evga provides with the hc block, minus the expensive price.
> *I ordered some tg pp-10 a few days ago, said oos but hopefully ships soon*


...once it is in stock, get double (+ one extra)...coz 'puttified'


Spoiler



...puttified front and back


----------



## GreatestChase

Hey again everyone, I've got a troubleshooting question, so I hope this isn't the wrong place, but it did occur while trying to bench with my 3090, so that counts, right? Anyway, I was trying to get another PR run in before I went to bed and had a hard crash after trying to push my CPU a little further. Since the hard crash, I now get stuck at the BIOS splash screen 4/5 times when I try to boot the PC. When I do make it through, it seems as if there is nothing wrong with the system. When it hangs, I do not get any Dr. Debug codes or lights. Also, I am able to get into the BIOS before the splash screen without any issue.

System info: MSI Z690 Carbon WiFi, i7-12700K, Corsair DDR5-5200 memory, and a 3090 KP.

What I've tried troubleshooting-wise so far: I cleared the CMOS, flashed the latest BIOS for my motherboard, removed the GPU and booted off the iGPU, and once I was able to get into Windows I did a fresh driver install and reflashed the vBIOS on the GPU. So far none of this has changed anything; I'm still able to get in if I attempt to boot enough times. The one thing I haven't tried yet is to nuke the OS. Before I do that, I just wanted a sanity check that a hard crash like that could bork the OS in the first place, and that this would be the next step you all would take in the troubleshooting process.

At the moment I've got it booted into Windows, and I wanted to wait until tomorrow to reinstall Windows when I have a little more time, as for right now the PC is functional and I can use it for my regular daily-driver activities. Thanks in advance for the advice. I'm about to go to sleep for the night, but I'll follow up on any recommendations I get tomorrow after I have a chance to sit down with the PC again.


----------



## geriatricpollywog

It would help to have a picture of the splash screen where you are stuck. On Asus, there is a ROG logo with “press delete to enter bios” below it. After a few seconds, the message disappears and a spinning wheel appears below the logo. If I were to get stuck at the spinning wheel, I would know it was an OS issue.


----------



## J7SC

GreatestChase said:


> Hey again everyone, I've got a trouble shooting question so I hope this isn't the wrong place, but it did occur while trying to bench with my 3090 so that counts right? Anyways, I was trying to get another PR run in before I went to bed and I had a hard crash after trying to push my cpu a little farther. After the hard crash, I now get stuck at the bios splash screen 4/5 times when I try to boot the PC. When I do make it through, it seems as if there is nothing wrong with the system. When it does hang, I do not get any drdebug codes or lights. Also, I am able to get into the bios prior to the splash screen without any issue.
> 
> System Info: MSI Z690 Carbon Wifi, i7-12700k, Corsair ddr5 5200 mem, and a 3090 KP
> 
> So what I've tried troubleshooting wise so far: I cleared the CMOS, I flashed the lasted bios for my motherboard, I removed the gpu and booted off the igpu, and when I was able to get in to windows I did a fresh driver install, and reflashed the vbios on the gpu. So far none of this has changed anything. I'm still able to get in if I attempt to boot enough times. The one thing I haven't tried yet is to nuke the OS. before I did this I just wanted a sanity check that a hard crash like that could bork the OS to begin with, and that this would be the next step that you all would take in the trouble shooting process.
> 
> At the moment, I've got it booted into windows, and I wanted to wait to reinstall windows until tomorrow when I had a little more time as for right now the PC is functional and I can use it for my regular daily driving activites. Thanks in advance for the advice. I'm about to go to sleep for the night, but I'll follow up with any recommendations that I get tomorrow after I have a chance to sit down with the PC agian.


I would check out the DDR5 thread - it seems rebooting a DDR5 system isn't yet free of the kind of thing you described, depending also if fastboot is enabled, and bios updates are coming out fast and furious.

Anyway, I would power the system down for the night (PSU off), then see what happens when you cold-boot it tomorrow. If everything works the way it is supposed to once you are in Windows, I would not even bother re-installing it.


----------



## kryptonfly

sew333 said:


> Also
> It's not overcurrent protection getting triggered.
> That causes complete shutdown of PSU needing power cycling to clear it.
> I don’t have to flip switch.
> 
> I had one shutdown like that during loading Metro Exodus Enhanced just before main menu.
> 
> So it is OCP or not?I don’t have to flip switch.
> 
> Rtx 3090 Gaming
> 10850K stock
> Seasonic TX-850 Titanium Ultra Prime


Excuse me, but I already answered a few pages ago; I told you to try the Endwalker bench. If the PC shuts down abruptly, that means OCP is triggered. You can reproduce it every time, and every time it will trigger OCP. It's "common" with Seasonic PSUs being too sensitive; my Prime 750TX did exactly the same thing when I tried the 1000W XOC bios on my Gigabyte Turbo 2x8-pin (same board as yours). Someone else here had the same trouble, and someone on the official TechPowerUp thread too.
BUT you have an 850W unit and the CPU at stock; it should not trigger OCP in these conditions, so maybe the PSU is too sensitive (RMA). You need at least 1000W to be worry free. Me too, I don't have to "flip the switch": the PC restarts by itself, showing a "Power supply surge protection" warning at boot (when enabled in the BIOS) just after the logo (Asus X99; my new Asus Z690 doesn't have "surge protection" in the BIOS settings, so no warning, but I experienced 2 OCPs with my Antec Signature 1000W and it reproduced exactly the same behavior as my TX-750W, so power surge protection is applied by default in the BIOS, just "invisibly").

You need to understand, it's the motherboard which first triggers OCP, for good reasons, through the ATX 24-pin (mainly caused by 12V CPU and 12V PCIe at the same time) plus the 8-pins from the GPU, so no "flip switch" needed; it's the motherboard which "flips the switch". But when the PSU itself really triggers OCP, then you do need to flip the switch; for example, when you start 15 fans at 5V and a pump at 12V, the PSU triggers OCP even with my 1000W PSU.
You should try a bigger PSU (1000W minimum and not Seasonic), through Amazon for example, and return it in case it does the same thing, but I'm convinced it's OCP through your motherboard caused by a fast CPU+GPU load; a stronger PSU fixes this.
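The headroom argument above can be sketched numerically. A toy power-budget check (all wattage figures below are illustrative assumptions, not measurements from this thread; Ampere cards are often quoted as spiking to roughly 2x their rated board power for milliseconds):

```python
# Toy power-budget check for PSU headroom (all numbers are illustrative
# assumptions, not measurements from this thread).
def psu_headroom(psu_watts, cpu_watts, gpu_watts, transient_factor=2.0, rest=75):
    """Return (sustained draw, worst-case transient draw, fits-in-PSU).

    transient_factor models the short current spikes Ampere cards are
    known for; rest covers drives, fans, motherboard, etc.
    """
    sustained = cpu_watts + gpu_watts + rest
    transient = cpu_watts + gpu_watts * transient_factor + rest
    return sustained, transient, transient <= psu_watts

# A stock 10850K (~200 W peak) plus a 350 W RTX 3090 on an 850 W unit:
print(psu_headroom(850, 200, 350))   # sustained 625 W, spike ~975 W -> may trip

# The same load on a 1000 W unit clears the modeled spike:
print(psu_headroom(1000, 200, 350))
```

This is only a back-of-the-envelope model; real trip behavior depends on the PSU's actual OCP/UVP implementation and how long the spike lasts.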


----------



## sew333

kryptonfly said:


> Excuse me, but I already answered a few pages ago; I told you to try the Endwalker bench. If the PC shuts down abruptly, that means OCP is triggered. You can reproduce it every time, and every time it will trigger OCP. It's "common" with Seasonic PSUs being too sensitive; my Prime 750TX did exactly the same thing when I tried the 1000W XOC bios on my Gigabyte Turbo 2x8-pin (same board as yours). Someone else here had the same trouble, and someone on the official TechPowerUp thread too.
> BUT you have an 850W unit and the CPU at stock; it should not trigger OCP in these conditions, so maybe the PSU is too sensitive (RMA). You need at least 1000W to be worry free. Me too, I don't have to "flip the switch": the PC restarts by itself, showing a "Power supply surge protection" warning at boot (when enabled in the BIOS) just after the logo (Asus X99; my new Asus Z690 doesn't have "surge protection" in the BIOS settings, so no warning, but I experienced 2 OCPs with my Antec Signature 1000W and it reproduced exactly the same behavior as my TX-750W, so power surge protection is applied by default in the BIOS, just "invisibly").
> 
> You need to understand, it's the motherboard which first triggers OCP, for good reasons, through the ATX 24-pin (mainly caused by 12V CPU and 12V PCIe at the same time) plus the 8-pins from the GPU, so no "flip switch" needed; it's the motherboard which "flips the switch". But when the PSU itself really triggers OCP, then you do need to flip the switch; for example, when you start 15 fans at 5V and a pump at 12V, the PSU triggers OCP even with my 1000W PSU.
> You should try a bigger PSU (1000W minimum and not Seasonic), through Amazon for example, and return it in case it does the same thing, but I'm convinced it's OCP through your motherboard caused by a fast CPU+GPU load; a stronger PSU fixes this.


Ok, so no need to flip the switch on the back of the PSU if OCP triggers? Ok then. My motherboard is the Aorus 490 Pro AX.
I ran 3DMark Wild Life at 1080p; fps were 800-900 and no shutdowns. Is this enough? I think it's a better test than Endwalker.

Also played Cyberpunk 2077 and Watch Dogs Legion with no shutdowns. It just happened once in the Metro Exodus Enhanced intro part. That was simply a shutdown. To reboot I just pressed the power button, nothing more.


----------



## ManniX-ITA

sew333 said:


> I ran 3DMark Wild Life at 1080p; fps were 800-900 and no shutdowns. Is this enough? I think it's a better test than Endwalker.


I think it's better to run both; they put a different kind of strain on the GPU.
A bit like testing a CPU with both AVX and SSE.
3DMark Wild Life is better for stressing the VRM/PCB soldering/PSU OCP.
The very high fps with higher, steady power consumption is quite peculiar.



kryptonfly said:


> Excuse me but I already answered few pages ago, I told you to try Endwalker bench, if the pc shuts down wildly that means OCP is triggered. You can reproduce it everytime, and everytime it will trigger OCP. It's "common" with Seasonic PSU too sensitive, my Prime 750TX did exactly the same thing when I tried 1000W XOC bios on my Gigabyte Turbo 2*8 pins (same board than yours). Someone else here got same trouble and someone on the official techpowerup thread too.


There's not only OCP (Over Current Protection) but also UVP (Under Voltage Protection).
Very often new PSUs have a very high OCP ceiling to avoid shutdowns with Nvidia GPUs, but then their 12V rails drop below 11.4V under load.
Or they burst into flames like the famous Gigabyte units.
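The OCP/UVP distinction can be written down as a simple classifier over rail readings. The thresholds below are illustrative: the ATX spec allows roughly +/-5% on the 12V rail (11.4-12.6V), while actual OCP trip points vary widely by PSU model.

```python
# Classify a 12 V rail sample against typical protection thresholds.
# Thresholds are illustrative defaults, not values from any specific PSU.
def rail_status(volts, amps, uvp_volts=11.4, ocp_amps=40.0):
    if volts < uvp_volts:
        return "UVP"   # under-voltage: the rail sagged below spec
    if amps > ocp_amps:
        return "OCP"   # over-current: too many amps drawn on the rail
    return "OK"

print(rail_status(11.9, 29))  # healthy reading
print(rail_status(11.2, 29))  # sagging rail -> UVP trip
print(rail_status(12.0, 45))  # voltage fine, current too high -> OCP trip
```

The point of the sketch: a motherboard sensor only sees the voltage column, so it can flag the first case but is blind to the second.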



kryptonfly said:


> so power surge protection is well applied by default in bios but "invisible").


There's no "control plane" between the MB and PSU; no data exchange, no communication protocol.
The ASUS Anti Surge protection was botched and thankfully dropped on the newer boards.
The shutdowns/reboots are due to the PSU protections, which are invisible to the MB.



kryptonfly said:


> You should try with a higher PSU (1000W minimum and not Seasonic) through Amazon for example and return it in case it did the same thing but I'm convinced it's a OCP through your motherboard caused by cpu+gpu fast load, a stronger PSU fix this matter.


It could also be that the culprit isn't the PSU.
Extensions or custom power cables and riser cables are other common root causes.


----------



## GreatestChase

geriatricpollywog said:


> It would help to have a picture of the splash screen where you are stuck. On Asus, there is a ROG logo with “press delete to enter bios” below it. After a few seconds, the message disappears and a spinning wheel appears below the logo. If I were to get stuck at the spinning wheel, I would know it was an OS issue.


Sorry, I could've been a little more descriptive, but the MSI boot up is the same. I get past the screen with "press delete to enter bios", and I can enter the BIOS fine. It's the next screen with the spinning wheel that I get stuck at. Also, side question: in your Port Royal runs, were you using chilled water to get your scores? I'm new to benchmarking and you're above me in scores with a 12700K and 3090, so I've been working on chasing your scores.



J7SC said:


> I would check out the DDR5 thread - it seems rebooting a DDR5 system isn't yet free of the kind of thing you described, depending also on whether fast boot is enabled, and BIOS updates are coming out fast and furious.
> 
> Anyway, I would power the system down for the night (PSU off), then see what happens when you cold-boot it tomorrow. If everything works the way it is supposed to once you are in Windows, I would not even bother re-installing it.


Thanks for the info, I'll definitely have to check this thread out. The system has been running fine with a FTW3 3090 for months now and had no trouble rebooting with that card. Since I only have the two sticks of DDR5, I can't really throw another set in the system at the moment, so I may reinstall the OS, and if that doesn't work then I guess I'm down to the DDR5 causing the problem.

Thanks for the help guys, I'll hopefully update later today.


----------



## sew333

ManniX-ITA said:


> I think it's better to run both; they put a different kind of strain on the GPU.
> A bit like testing a CPU with both AVX and SSE.
> 3DMark Wild Life is better for stressing the VRM/PCB soldering/PSU OCP.
> The very high fps with higher, steady power consumption is quite peculiar.
> 
> 
> 
> There's not only OCP (Over Current Protection) but also UVP (Under Voltage Protection).
> Very often new PSUs have a very high OCP ceiling to avoid shutdowns with Nvidia GPUs, but then their 12V rails drop below 11.4V under load.
> Or they burst into flames like the famous Gigabyte units.
> 
> 
> 
> There's no "control plane" between the MB and PSU; no data exchange, no communication protocol.
> The ASUS Anti Surge protection was botched and thankfully dropped on the newer boards.
> The shutdowns/reboots are due to the PSU protections, which are invisible to the MB.
> 
> 
> 
> It could also be that the culprit isn't the PSU.
> Extensions or custom power cables and riser cables are other common root causes.


Can you explain why the monitor flickered once, about 10 seconds after the shutdown (if that was OCP)?
The house still had power, because the router didn't shut down and the laptop was on mains power, not battery. Also the ceiling lights didn't go off. Thx


----------



## sew333

kryptonfly said:


> Excuse me, but I already answered a few pages ago; I told you to try the Endwalker bench. If the PC shuts down abruptly, that means OCP is triggered. You can reproduce it every time, and every time it will trigger OCP. It's "common" with Seasonic PSUs being too sensitive; my Prime 750TX did exactly the same thing when I tried the 1000W XOC bios on my Gigabyte Turbo 2x8-pin (same board as yours). Someone else here had the same trouble, and someone on the official TechPowerUp thread too.
> BUT you have an 850W unit and the CPU at stock; it should not trigger OCP in these conditions, so maybe the PSU is too sensitive (RMA). You need at least 1000W to be worry free. Me too, I don't have to "flip the switch": the PC restarts by itself, showing a "Power supply surge protection" warning at boot (when enabled in the BIOS) just after the logo (Asus X99; my new Asus Z690 doesn't have "surge protection" in the BIOS settings, so no warning, but I experienced 2 OCPs with my Antec Signature 1000W and it reproduced exactly the same behavior as my TX-750W, so power surge protection is applied by default in the BIOS, just "invisibly").
> 
> You need to understand, it's the motherboard which first triggers OCP, for good reasons, through the ATX 24-pin (mainly caused by 12V CPU and 12V PCIe at the same time) plus the 8-pins from the GPU, so no "flip switch" needed; it's the motherboard which "flips the switch". But when the PSU itself really triggers OCP, then you do need to flip the switch; for example, when you start 15 fans at 5V and a pump at 12V, the PSU triggers OCP even with my 1000W PSU.
> You should try a bigger PSU (1000W minimum and not Seasonic), through Amazon for example, and return it in case it does the same thing, but I'm convinced it's OCP through your motherboard caused by a fast CPU+GPU load; a stronger PSU fixes this.


Can you explain why the monitor flickered once, about 10 seconds after the shutdown (if that was OCP)?
The house still had power, because the router didn't shut down and the laptop was on mains power, not battery. Also the ceiling lights didn't go off. Thx


----------



## ManniX-ITA

sew333 said:


> Can you explain why the monitor flickered once, about 10 seconds after the shutdown (if that was OCP)?
> The house still had power, because the router didn't shut down and the laptop was on mains power, not battery. Also the ceiling lights didn't go off. Thx


No idea about the monitor flickering, there could be dozens of reasons...
Were you running with a GPU OC?
If the OC is unstable you can get the same "effect" as OCP.
It happened a few times with my 3090 Strix, and it was one of the reasons I sent back the 3080 Ti Suprim X.
One notch of instability and the whole PC was shut down brutally by the GPU, unacceptable...


----------



## sew333



ManniX-ITA said:


> No idea about the monitor flickering, there could be dozens of reasons...
> Were you running with a GPU OC?
> If the OC is unstable you can get the same "effect" as OCP.
> It happened a few times with my 3090 Strix, and it was one of the reasons I sent back the 3080 Ti Suprim X.
> One notch of instability and the whole PC was shut down brutally by the GPU, unacceptable...


The graphics card is at stock. Also, I tested all the games for many hours and not a single shutdown. So I think the GPU is fine.
I tested: Cyberpunk 2077, Watch Dogs Legion, Quake 2 RTX, 3DMark Port Royal, Fire Strike, Battlefield 5, CoD Vanguard, Metro Exodus Enhanced, Crysis 3 Remastered, Control.


I just can't understand why the monitor flickered once, 10 seconds after the shutdown. When the PC shut off I immediately turned on the ceiling lights. They were on. Then I checked that the laptop and WiFi were on. Then I checked whether the laptop was on battery or mains power, but it was on mains. So the house had power.


----------



## ManniX-ITA

GreatestChase said:


> Thanks for the help guys, I'll hopefully update later today.


Also look for nvidia-smi in this thread to learn how to lock the clock frequency.



sew333 said:


> Just cant understand why monitor flickered once after 10 seconds of shutdown.


I wouldn't focus too much on it, it can happen.
Just once, I wouldn't worry.
A few days ago one of my monitors, after shutdown, started switching on from standby every 5 minutes for no reason.
At the next start & shutdown it stopped doing it...


----------



## sew333

ManniX-ITA said:


> Also look for nvidia-smi in this thread to learn how to lock the clock frequency.
> 
> 
> 
> I wouldn't focus too much on it, it can happen.
> Just once, I wouldn't worry.
> A few days ago one of my monitors, after shutdown, started switching on from standby every 5 minutes for no reason.
> At the next start & shutdown it stopped doing it...


Ah oki


----------



## J7SC

ManniX-ITA said:


> (...)
> A few days ago one of my monitor after shutdown started switching on from standby every 5 mins without any reason.
> At next start & shutdown stopped doing it...


The ghost in the machine, obviously!


----------



## CoraDaelu

Hi, I'm new to this. I wanted to see if anyone can help me a bit, as I'm a bit confused. I have a Zotac 3090 Trinity with 2x8-pin, limited to 385W (no shunt mod). If I flash a KFA2 BIOS with 1000W support, what results could I expect? Could I at least get 450W, and is it even compatible?

Thanks.

Thanks.


----------



## kryptonfly

sew333 said:


> Yeah. The PC shut down, something like standby mode. During the shutdown the PC case bottom lights briefly flickered, and I think some LED was on inside the case, from what I remember. Then I pressed the power button and the PC booted.


You told me that when you tried Endwalker your PC shut down; you can retry, but if it shuts down, it's your PSU. It's not normal that at stock with 850W it triggers any protection. The capacitors empty out, so the lights flicker.



sew333 said:


> Ok, so no need to flip the switch on the back of the PSU if OCP triggers? Ok then. My motherboard is the Aorus 490 Pro AX.
> I ran 3DMark Wild Life at 1080p; fps were 800-900 and no shutdowns. Is this enough? I think it's a better test than Endwalker.
> 
> Also played Cyberpunk 2077 and Watch Dogs Legion with no shutdowns. It just happened once in the Metro Exodus Enhanced intro part. That was simply a shutdown. To reboot I just pressed the power button, nothing more.


Wild Life is not enough; it's not the same load as CPU+GPU together, as I told you. If the PSU itself triggers OCP, you need to flip the switch, but in this case it's the motherboard which acts as a switch first; that's why you don't need to unplug the PSU from the wall. It's the same as a good true OCP from the PSU: just unplug the ATX 24-pin and it's like flipping the switch (which is what the motherboard does in this case).


ManniX-ITA said:


> I think it's better to run both; they put a different kind of strain on the GPU.
> A bit like testing a CPU with both AVX and SSE.
> 3DMark Wild Life is better for stressing the VRM/PCB soldering/PSU OCP.
> The very high fps with higher, steady power consumption is quite peculiar.
> 
> There's not only OCP (Over Current Protection) but also UVP (Under Voltage Protection).
> Very often new PSUs have a very high OCP ceiling to avoid shutdowns with Nvidia GPUs, but then their 12V rails drop below 11.4V under load.
> Or they burst into flames like the famous Gigabyte units.
> 
> There's no "control plane" between the MB and PSU; no data exchange, no communication protocol.
> The ASUS Anti Surge protection was botched and thankfully dropped on the newer boards.
> The shutdowns/reboots are due to the PSU protections, which are invisible to the MB.
> 
> It could also be that the culprit isn't the PSU.
> Extensions or custom power cables and riser cables are other common root causes.


I had a V1200 which had +12V and perfect voltage under load, and it triggered OCP in Cyberpunk when I entered a busy street (CPU OC + GPU OC), always at exactly the same area. Since the new PSU, no trouble anymore.
My new Asus Z690 acts like my Asus X99 did when I experienced an OCP in Endwalker (too much CPU OC at 1.52V + GPU OC on 2x8-pin): it shut down, but in this case I didn't see the "power surge protection" warning when it restarted, because the setting is no longer present on this motherboard, but it uses the same mechanism as my X99.


sew333 said:


> Can you explain me why monitor flickered once after 10 seconds after shutdown? ( if that was OCP ).
> Power was in house because router no shutdown and laptop was in power mode<no battery>.Also ceiling lights was not off. Thx


No link to this matter; it's "normal" screen stuff. I would not get worried about it.

Overall these Gigabyte 2x8-pin boards are not well balanced; some burn because of "New World", some can handle 600W like mine...
1st: try with at least a 1000W non-Seasonic PSU (it will fix the problem anyway).
2nd: your GPU board is not well made, but I really don't think so.
It's cheaper and wiser to check the PSU first. It's up to you to fix your trouble with the elements given here; it seems like you're waiting for a comment that confirms your point of view...


----------



## ManniX-ITA

kryptonfly said:


> I had a V1200 which had +12v and perfect voltage in load and it triggers OCP in Cyberpunk when I entered on a load street (cpu OC + gpu OC) exactly always at the same area. Since the new PSU no trouble anymore.
> My new Asus Z690 acts like my Asus X99 when I experienced an OCP in Endwalker (too much cpu OC 1,52v + gpu OC on 2*8 pins), it shutted down but in this case I didn't see "power surge protection" warning when it restarted because the setting is not present anymore in this motherboard , but it uses the same mechanism than my X99.


I don't think it had perfect voltage under load; you probably couldn't see it because it was dropping very fast, and the protection acted before you had a chance to see it.

If you did get the ASUS message, there was a voltage drop; the MB can't know the current load (amperage) on the rail.
It can only read the voltage level, and thus if the Anti Surge kicked in, at some point it went below 11.4V (or whatever ASUS decided was too low).
The MB voltage reading sensor is usually not very reliable... it was a bad idea to use it.

If the Asus Z690 was doing the same without the message, then it was the PSU protection kicking in.
Probably still UVP and not OCP.
The OCP trigger point for a 1200W unit is massive, even if it was multi-rail.
I find it hard to believe a 2x8-pin GPU can even come close.
More likely the faulty Antec had sudden voltage drops.


----------



## sew333

I'm thinking that maybe I had a power outage, but no. I'm thinking that because it happened once and never again. That's why.


Especially such a coincidence right when starting Metro? Nah. And the laptop and router wouldn't notice? That's too much.


----------



## ManniX-ITA

sew333 said:


> And the laptop and router wouldn't notice? That's too much.


Maybe the laptop was on battery?
If it was a voltage drop, it could have been enough to trip the PSU but not the router; lower loads can ride through without a hitch.


----------



## sew333

ManniX-ITA said:


> Maybe the laptop was on battery?
> If it was a voltage drop, it could have been enough to trip the PSU but not the router; lower loads can ride through without a hitch.


The laptop was on mains power, because I checked the event logs. No battery mode. So the power cord was in use. Also, I don't understand why the monitor flickered 10 seconds after the shutdown, if that was a voltage drop.


----------



## geriatricpollywog

GreatestChase said:


> Sorry, I could've been a little more descriptive, but the MSI boot up is the same. I get past the screen with "press delete to enter bios", and I can enter the BIOS fine. It's the next screen with the spinning wheel that I get stuck at. Also, side question: in your Port Royal runs, were you using chilled water to get your scores? I'm new to benchmarking and you're above me in scores with a 12700K and 3090, so I've been working on chasing your scores.
> 
> 
> Thanks for the info, I'll definitely have to check this thread out. The system has been running fine with a FTW3 3090 for months now and had no trouble rebooting with that card. Since I only have the two sticks of DDR5, I can't really throw another set in the system at the moment, so I may reinstall the OS, and if that doesn't work then I guess I'm down to the DDR5 causing the problem.
> 
> Thanks for the help guys, I'll hopefully update later today.


Here are the classified tool settings I used for my PR runs. CPU and memory overclock don't matter for PR. I scored 16,157 in safe boot on my 11900K.
NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)










Make sure you force enable Re-Bar in Nvidia Profile Inspector. That is the only trick I use. I haven't adjusted any other settings or lowered the quality in any way.

My computer and external radiator are both outside in the cold when I run PR. I am using a 30% glycol coolant that can withstand -17C before freezing. I can run it for hours without condensation because the whole system is cold.


----------



## tps3443

geriatricpollywog said:


> Here are the classified tool settings I used for my PR runs. CPU and memory overclock don't matter for PR. I scored 16,157 in safe boot on my 11900K.
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)
> 
> View attachment 2543295
> 
> 
> Make sure you force enable Re-Bar in Nvidia Profile Inspector. That is the only trick I use. I haven't adjusted any other settings or lowered the quality in any way.
> 
> My computer and external radiator are both outside in the cold when I run PR. I am using a 30% glycol coolant that can withstand -17C before freezing. I can run it for hours without condensation because the whole system is cold.
> 
> View attachment 2543296



Your dew point is probably like 0F anyway. So even if the PC was inside you wouldn't get condensation at all. I would just leave the rad outside and bring the PC inside.


My dew point right now is 31F, so honestly the coldest water I can get on my chiller is 39F. So I couldn't get condensation even if I wanted to.


----------



## geriatricpollywog

tps3443 said:


> Your dew point is probably like 0F anyways. So, even if the PC was inside you wouldn’t get condensation at all. I would just leave the rad outside, and bring the PC inside.
> 
> 
> My dew point right now is 31F, so honestly the coldest water I can get on my chiller is 39f. So, I couldn’t get condensation even if I wanted to.


Humidity was 86% during my last benching session so condensation would have been a problem. Nature's chiller trumps everything.


----------



## GreatestChase

geriatricpollywog said:


> Here are the classified tool settings I used for my PR runs. CPU and memory overclock don't matter for PR. I scored 16,157 in safe boot on my 11900K.
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)
> 
> View attachment 2543295
> 
> 
> Make sure you force enable Re-Bar in Nvidia Profile Inspector. That is the only trick I use. I haven't adjusted any other settings or lowered the quality in any way.
> 
> My computer and external radiator are both outside in the cold when I run PR. I am using a 30% glycol coolant that can withstand -17C before freezing. I can run it for hours without condensation because the whole system is cold.
> 
> View attachment 2543296


Very nice. I've reinstalled my OS and all seems to be going along perfectly at the moment. Thanks for the insight on your settings. Living in the southern US, I don't think I'll get temps low enough lol. Just looking at this week, I might be able to try this coming weekend. Lowest lows of the season so far at around 25 F. Guess I'll have to wait and see.


----------



## GRABibus

geriatricpollywog said:


> Here are the classified tool settings I used for my PR runs. CPU and memory overclock don't matter for PR. I scored 16,157 in safe boot on my 11900K.
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-11900K Processor,ASRock Z590 OC Formula (3dmark.com)
> 
> View attachment 2543295
> 
> 
> Make sure you force enable Re-Bar in Nvidia Profile Inspector. That is the only trick I use. I haven't adjusted any other settings or lowered the quality in any way.
> 
> My computer and external radiator are both outside in the cold when I run PR. I am using a 30% glycol coolant that can withstand -17C before freezing. I can run it for hours without condensation because the whole system is cold.
> 
> View attachment 2543296


Nice.

I have reconsidered my voltages based on yours for my KP Hybrid, which I've owned only since last Friday.

So, *stock cooler and 54°C GPU average* during this PR run: I passed 15,800!!


















I scored 15 816 in Port Royal


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





I'm not far from considering that I have a very, very good sample.


----------



## geriatricpollywog

GreatestChase said:


> Very nice. I've reinstalled my OS and all seems to be going along perfectly at the moment. Thanks for the insight on your settings. Living in the southern US, I don't think I'll get temps low enough lol. Just looking at this week, I might be able to try this coming weekend. Lowest lows of the season so far at around 25 F. Guess I'll have to wait and see.


My best run was around 25F ambient outside. You’ll need glycol coolant to keep the water from freezing.


GRABibus said:


> Nice.
> 
> I have reconsidered my voltages based on yours for my KP Hybrid, which I've owned only since last Friday.
> 
> So, *stock cooler and 54°C GPU average* during this PR run: I passed 15,800!!
> 
> View attachment 2543313
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 816 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> I'm not far from considering that I have a very, very good sample.


I have an average sample for a Kingpin. My memory tops out around 1394. You should be able to exceed my score if you can get your card on a Hydrocopper block and get your water temp low enough. I wouldn’t suggest my voltages at your high temps though.


----------



## tps3443

geriatricpollywog said:


> Humidity was 86% during my last benching session so condensation would have been a problem. Nature's chiller trumps everything.


Just stay above the dew point. I live in a very humid environment as well.

I would think a water chiller would trump natural cooling, but maybe I'm wrong. You can sustain that temp daily with little to no variables involved.

I used to overthink condensation, though. I can get down to a 3C water temp without condensation because my dew point is far lower.


If you want temps to go lower, then you must allow the compressor to keep running and it will go below 0. I'm not using antifreeze though, just distilled water.
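The dew-point reasoning above is easy to check yourself. A standard Magnus-formula approximation (my own sketch, not from the thread; coefficients are the commonly quoted ones for water vapor over roughly -45 to 60 °C):

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Approximate dew point in °C via the Magnus formula.

    Coolant chilled below this temperature will start condensing
    moisture on exposed blocks, fittings, and tubing.
    """
    a, b = 17.62, 243.12  # Magnus coefficients for water vapor
    gamma = math.log(rel_humidity / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# At 25 °C room temperature and 86% humidity (the session mentioned above),
# the dew point is ~22.5 °C -- chilled water would condense almost instantly.
print(round(dew_point_c(25, 86), 1))

# In dry winter air (25 °C, 20% RH) the dew point drops near 0 °C,
# which is why low ambient humidity allows very cold loops.
print(round(dew_point_c(25, 20), 1))
```

This matches the rule of thumb in the posts: it's humidity, not air temperature alone, that sets how cold you can safely run the loop.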


----------



## kryptonfly

ManniX-ITA said:


> I don't think it had perfect voltage under load; you probably couldn't see it cause it was going down very fast and the protection acting before you could have a chance to see it.
> 
> If you did get the ASUS message there was a voltage drop; the MB can't know the current load (Amperage) on the rail.
> Can only read the voltage level and thus if the Anti Surge kicked in at some point it went down below 11.4V (or whatever ASUS decided was too low).
> The MB voltage reading sensor is usually not very reliable... it was a bad idea to use it.
> 
> If the Asus Z690 was doing the same without message then it was the PSU protection kicking in.
> Probably still the UVP and not OCP.
> The OCP trigger point for a 1200W is massive, even if it was a multirail.
> I find it hard a 2x8p GPU can even come closer.
> More likely the faulty Antec had sudden voltage drops.


Well, in the end it's a PSU problem; current and voltage are both equally important, and the way the motherboard acts first makes me think of UVP indeed. I'm using 4 mOhm shunts + the 1000W XOC bios at 100% PL, and I'm forced to use a manual curve, otherwise it triggers something. I don't mind, because I always use a curve. In December on my X99 I forgot to apply the curve; I ran RotTR at 4K ultra with the CPU OC (~200W), and with the curve I was already close to 500W (1.006V), but without the curve it climbs to 1.075-1.081V (from my experience with this GPU), so maybe 600W + CPU OC. I didn't have time to see the GPU power; it triggered something within 1 second, probably at more than 750W. The PC shut down and rebooted. In the end it's the PSU, always. Usually everything holds up, except when I forget the curve and combine it with a high CPU OC. My Antec works perfectly in these uncommon conditions.


----------



## GreatestChase

Not my best run thermally, but it is a new personal best after enabling resizable bar. 15,413


----------



## tps3443

GreatestChase said:


> Not my best run thermally, but it is a new personal best after enabling resizable bar. 15,413
> View attachment 2543326


You are doing better than me. I think my OS is burnt or something. My max GPU temp was 24C, and using the same settings as you I can get like 15,300. I did well in this test once upon a time. I need a benching OS.


The Port Royal optimization and tuning is not fun.


----------



## GreatestChase

tps3443 said:


> You are doing better than me. I think my OS is burnt or something. My max GPU temp was 24C, and using the same settings as you I can get like 15,300. I did well in this test once upon a time. I need a benching OS.


I just had to do a completely clean install of the OS after yesterday. I had a hard crash after trying to push my CPU to 5.3 and kept getting stuck at the BIOS splash screen. Unfortunately, I don't think the binning on my GPU is the greatest. I'm only getting clocks of 2130 at 50C with a +200 core offset. The memory does pretty well though, at +1700.


----------



## tps3443

GreatestChase said:


> I just had to do a completely clean install of the OS after yesterday. I had a hard crash after trying to push my CPU to 5.3 and kept getting stuck at the BIOS splash screen. Unfortunately, I don't think the binning on my GPU is the greatest. I'm only getting clocks of 2130 at 50C with a +200 core offset. The memory does pretty well though, at +1700.


As far as I know, EVGA selects (10) random GA102s from a batch of (100) GA102s, then tests for the single best GA102 out of those (10).


Your bin is probably fine; 50C is just warm, honestly, and it comes down to app tuning, setting a proper curve in MSI AB, etc. As for the memory, yeah, that's pretty good. Mine has really great memory too. I can run the full +2,000 with both FBVDD dip switches enabled during games. But I will say this: it didn't always do that. I remember my card struggling in the +1350 to +1400 range with the Hybrid cooler installed. The Hydro Copper KP block opened that up to the +1,600-1,700 range, and the full-coverage thermal pad provided even more room, 1,700-1,800ish. Now, the water chiller allows +2,000 with just a locked 68F water temp. When I refer to memory numbers I mean "long-term game stable".

You can force the cards to run slightly higher core clocks etc. with MSI AB. My personal best score with normal ambient water cooling is around 15,900 or so, and that was on the regular 520 KP RBar bios.


----------



## GreatestChase

tps3443 said:


> As far as I know, EVGA selects (10) random GA102s from a batch of (100) GA102s, then tests for the single best GA102 out of those (10).
> 
> 
> Your bin is probably fine; 50C is just warm, honestly, and it comes down to app tuning, setting a proper curve in MSI AB, etc. As for the memory, yeah, that's pretty good. Mine has really great memory too. I can run the full +2,000 with both FBVDD dip switches enabled during games.
> 
> You can force the cards to run slightly higher clocks etc. with MSI AB. My personal best score with normal ambient water cooling is around 15,900 or so, and that was on the regular 520 KP RBar bios.


In regards to the voltage curve, do you mean just bringing the points on the curve above the voltage you're running up to the specific clock you're looking for? I've just been using a straight offset, but that may be something I should try. I'm still pretty new to this type of OCing. I'm used to finding stable OCs for gaming, but I'm still learning when it comes to pushing the limits.


----------



## tps3443

GreatestChase said:


> In regards to the voltage curve, do you mean just bringing up the points on the curve above the voltage you're running, to the specific clock you're looking for? I've just been using a straight offset, but that may be something I should try. Still pretty new to this type of OCing. I'm used to finding stable OCs for gaming, but I'm still learning when it comes to pushing the limits.


Yea, I always obtained better numbers with MSI Afterburner vs. the Precision X1 software.

And it's because you can lock 1.1V to the GPU core.

Also, keep in mind you have an external clock and an internal clock on your GPU.

It could say that it's running 2,130 MHz, but the effective MHz may only be 2,070 or 2,085 MHz. This is why you see massive score fluctuation in Port Royal. The NVVDD voltage directly controls your internal clock. Force the 3D clock to run, and use HWiNFO to see the effective GPU clock. Bring up the NVVDD to bring the internal clock up to match the external clock.

Also, flipping the dip switches NVVD1 and PLL1 will make your internal clock match the external clock all the time.


Using MSI Afterburner you can set +130, then open the curve editor, select the desired voltage point, bring it up exactly +15 or +30, hit SHIFT+L,
then apply. You must also bring the MSI AB voltage to 100% as well. It'll force that clock at that voltage all the time.


----------



## GreatestChase

tps3443 said:


> Yea, I always obtained better numbers with MSI Afterburner VS. Precision X1 software.
> 
> And it’s because you can lock 1.1V to the GPU core.
> 
> Also, keep in mind you have a external clock and internal clock on your GPU.
> 
> It could say that it’s running 2,130Mhz, but the effective Mhz may only be 2,070Mhz or 2,085Mhz. This is why you see massive score fluctuation in Port a Royal. The NVVD voltage directly controls your internal clock. Force the 3D clock to run, and use HWinfo to see the effective GPU clock. Bring up the NVVD to bring up the internal clock to match the external clock.
> 
> Also, flipping the dip switches NVVD1 and PLL1 will make your internal clock match the external clock all the time.
> 
> 
> Using MSI Atterburner you can set +130, then open the curve editor, select the desired voltage point and bring it up exactly +15 or +30 then hit SHIFT+L
> then apply. You must also bring the MSI AB voltage to 100% too as well. It’ll force that clock at that voltage all the time.


Yeah, during the run it was only at 2130 for a short period at the beginning and then dropped to 2115 or 2100 for the rest of the run. So what I'm understanding from what you're saying is that by increasing NVVDD I can help it stay at 2130 for longer? I'm not sure how much further I can safely push voltage-wise. What's generally considered the "safe" limit?


----------



## tps3443

GreatestChase said:


> Yeah, during the run it was only at 2130 for a short period at the beginning and then dropped to 2115 or 2100 for the rest of the run. So what I’m understanding from what you’re saying is that by increasing NVVDD I can help it stay at 2130 for longer? I’m not sure how much further I can safely push voltage wise. What’s generally considered the “safe” limit?


1.300 NVVDD would be my limit. 1.215 MSVDD and 1.515 FBVDD. I'd stay in this range with the Classified tool. Sometimes the card can go over these values per the OLED readout.


Also, try the 520 watt LN2 bios. I always achieved better results with that for some reason.

I can do much better on the 520KP LN2 bios though, usually 15,600 easily without a water chiller. Before this newer OS requirement just to play BF2042, I was benching much higher than that.

I have the cooling and ability to get right past 16K. The motivation to do so is another matter lol.


----------



## GreatestChase

tps3443 said:


> 1.300 NVVDD would be my limit. 1.215 MSVDD and 1.515 FBVDD. I'd stay in this range with the Classified tool. Sometimes the card can go over these values per the OLED readout.
> 
> 
> Also, try the 520 watt LN2 bios. I always achieved better results with that for some reason.
> 
> I can do much better on the 520KP LN2 bios though. Usually 15,600 easily without a waterchiller. Before this newer OS requirement just to play BF2042, I was benching much higher than that.
> 
> I have the cooling and ability to get right past 16K. The motivation to do so is another matter lol.


Will power not become my limiting factor using the 520W bios? I would say during a PR run I'm averaging at least 570-580W using the XOC bios, and it peaks up to around 613W.


----------



## GreatestChase

Whelp, got another BSOD with an *nvlddmkm.sys* error. Now I'm back to being stuck in the bios splash loop again. Going straight to reinstalling Windows this time. I did manage to get another personal best in the process, though, at least. I'm gonna figure this stuff out one way or another lol. I tried using the LN2 bios as well as locking the GPU voltage in Afterburner, but was having trouble with the clocks immediately dropping significantly, so I'm not sure if I'm doing something wrong there or what.








I scored 15 573 in Port Royal


Intel Core i7-12700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com


----------



## geriatricpollywog

GreatestChase said:


> Whelp, got another BSOD with *Nvlddmkm Sys* error. Now I'm back to being stuck in the bios splash loop again. Going straight to reinstalling windows this time. I did manage to get another personal best in the process though at least. I'm gonna figure this stuff out one way or another lol. I tried using the LN2 bios as well as locking the gpu voltage in afterburner, but was having trouble with the clocks immediately dropping significantly, so I'm not sure if I'm doing something wrong there or what.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 573 in Port Royal
> 
> 
> Intel Core i7-12700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com


I’ve never crashed Windows, let alone corrupted it, with an unstable GPU overclock. If you are just benching Port Royal, you don’t need to overclock your CPU or RAM.


----------



## GreatestChase

geriatricpollywog said:


> I’ve never crashed Windows, let alone corrupted it, with an unstable GPU overclock. If you are just benching Port Royal, you don’t need to overclock your CPU or RAM.


I just have XMP enabled and the CPU at [email protected] V, both of which have been stable for months now. I think it’s an OS issue still. This morning I opted to repair the OS rather than do a fresh install. Hopefully this reinstall will resolve it. If not I may have to dig further, but the CPU and RAM haven’t given me any issues until this point. I’ve tried searching for DDR5 issues like another user mentioned, but wasn’t able to find anything in my very brief search.


----------



## tps3443

GreatestChase said:


> Whelp, got another BSOD with *Nvlddmkm Sys* error. Now I'm back to being stuck in the bios splash loop again. Going straight to reinstalling windows this time. I did manage to get another personal best in the process though at least. I'm gonna figure this stuff out one way or another lol. I tried using the LN2 bios as well as locking the gpu voltage in afterburner, but was having trouble with the clocks immediately dropping significantly, so I'm not sure if I'm doing something wrong there or what.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 573 in Port Royal
> 
> 
> Intel Core i7-12700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com


You can break 16K on the 520 watt LN2 bios.


----------



## tps3443

Started playing CB2077 again. Amazing how fast a 3090 KP chews through this title.

Running a 2,175 MHz internal clock sustained with a 23 Gbps memory overclock. Power usage is between 420-460 watts, with GPU temps of 27C.

Absolutely chewing this game up.

I've got my water temp set to 58F.
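For anyone curious what a 23 Gbps memory overclock means in bandwidth terms, the arithmetic is simple: GDDR6X bandwidth is the per-pin data rate times the bus width in bytes, and the 3090 has a 384-bit bus. A quick sketch (spec-sheet numbers, not a measurement from this card):

```python
# GDDR6X bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8
BUS_WIDTH_BITS = 384  # RTX 3090 memory bus width

def bandwidth_gbs(data_rate_gbps: float) -> float:
    """Memory bandwidth in GB/s for a given per-pin data rate in Gbps."""
    return data_rate_gbps * BUS_WIDTH_BITS / 8

print(bandwidth_gbs(19.5))  # stock ~19.5 Gbps -> 936.0 GB/s
print(bandwidth_gbs(23.0))  # the 23 Gbps overclock above -> 1104.0 GB/s
```

So a 23 Gbps overclock is roughly an 18% bandwidth bump over the stock 936 GB/s.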


----------



## sew333

tps3443 said:


> Started playing CB2077 again. Amazing how fast a 3090 KP chews through this title.
> 
> Running a 2,175 MHz internal clock sustained with a 23 Gbps memory overclock. Power usage is between 420-460 watts, with GPU temps of 27C.
> 
> Absolutely chewing this game up.
> 
> I've got my water temp set to 58F.


CB2077 can drop to 60 fps when using Ultra RT and 1440P DLSS Quality.


----------



## GreatestChase

Well, my plight continues. So after reinstalling my OS, I decided to play it safe and go ahead and remove my OC on my cpu and disable XMP on my memory. When I'm using the LN2 bios I'm generally able to boot into windows with no problem. However, if I swap over to the XOC bios, I seem to have much more trouble. For example, this morning I shut down my computer, swapped to the XOC bios, and attempted to boot up. Then I once again got stuck at the bios splash screen with the circle spinning at the bottom. The problem persisted after multiple hard resets, and when the automatic repair thing showed up on the bios splash screen it would hang as well. So I swapped it back to the LN2 bios, and after a couple attempts it booted into windows, and would continue to do so when I initiated a restart. I've reflashed the XOC bios thinking maybe that somehow it had become bricked. The problem persists after that. So now I'm even more confused. I was ready to get some runs in the morning because it was quite cold, had managed to get my liquid temp to sub 10 C. But no dice I guess.

Edit: Forgot to mention that no matter what the circumstances are, I have been able to get into the motherboard bios every time that I have tried.

On a separate note, does the classified tool not work properly with the 520W bios? I tried to give it a couple runs on it, and the NVVDD reading from the gpu seemed correct, but I was getting large fluctuations in my gpu clocks during runs when trying to use it from 1800 MHz-2100 MHz. If I don't use the classified tool, the clocks are much more stable, I just can't get it to run at the same clocks using only the dip switches.

Sorry to keep bombarding you guys with questions. I'm just kind of at a loss as to where to go from here. I'm going to let it be over the rest of the week, and maybe try again this weekend when the temps get back down and I have a little more time to deal with it. Thanks to those who have been helping out.


----------



## kryptonfly

I just realized that when I raise "core voltage %" it increases "SRAM output voltage". Usually it's at 1.081v max or even less, but at +50% core voltage it's at 1.087v, and when I set +100% it's at 1.10v, but I got artifacts in PR. Anyway, here's my max PR (till now). I hit the internal voltage limit; if I set the GPU at 1.075v, the effective clock falls to 2160 MHz = lower score. I think I can improve the score with lower temps (= lower GPU voltage), because ambient temp is 23°C and it warmed up my WC after many tries:

I scored 15 998 in Port Royal 
(2192 mhz effective clock at 1.068v, can do 1.062v with lower temps) 




Re-Bar is disabled, but it looks like it is always enabled, because I don't see any improvement ON/OFF... I ran with just 1x P-Core at 4.2 GHz and 8x E-Cores at 3.7 GHz / vCore 1v with C-States and TVB voltage optimisations enabled / SA 0.95v / VDDQ 1.15v / RAM XMP fixed at 1.4v / G-Sync disabled / hardware acceleration disabled / SMI lock 2235 MHz, nothing more. My +12v is slightly better with this "ECO" profile, but mainly my UPS (power inverter) doesn't beep anymore



Seems like I can't burn up this Gigabyte Turbo (2x8-pin) in PR anyhow, even with 4 mΩ shunts + the 1000W XOC bios; the internal voltage limits prevent me from going beyond.



GreatestChase said:


> Well, my plight continues. So after reinstalling my OS, I decided to play it safe and go ahead and remove my OC on my cpu and disable XMP on my memory. When I'm using the LN2 bios I'm generally able to boot into windows with no problem. However, if I swap over to the XOC bios, I seem to have much more trouble. For example, this morning I shut down my computer, swapped to the XOC bios, and attempted to boot up. Then I once again got stuck at the bios splash screen with the circle spinning at the bottom. The problem persisted after multiple hard resets, and when the automatic repair thing showed up on the bios splash screen it would hang as well. So I swapped it back to the LN2 bios, and after a couple attempts it booted into windows, and would continue to do so when I initiated a restart. I've reflashed the XOC bios thinking maybe that somehow it had become bricked. The problem persists after that. So now I'm even more confused. I was ready to get some runs in the morning because it was quite cold, had managed to get my liquid temp to sub 10 C. But no dice I guess.
> 
> Edit: Forgot to mention that no matter what the circumstances are, I have been able to get into the motherboard bios every time that I have tried.
> 
> On a separate note, does the classified tool not work properly with the 520W bios? I tried to give it a couple runs on it, and the NVVDD reading from the gpu seemed correct, but I was getting large fluctuations in my gpu clocks during runs when trying to use it from 1800 MHz-2100 MHz. If I don't use the classified tool, the clocks are much more stable, I just can't get it to run at the same clocks using only the dip switches.
> 
> Sorry to keep bombarding you guys with questions. I'm just kind of at a loss as to where to go from here. I'm going to let it be over the rest of the week, and maybe try again this weekend when the temps get back down and I have a little more time to deal with it. Thanks to those who have been helping out.


It could be many things: storage, GPU, bad settings in the bios... Did you try loading the "optimized defaults" setting in the bios (or whatever it's called) and booting directly? Did you try another GPU? Did you modify SSDs or anything? Maybe it will take time to find what's wrong; it could be something beyond the GPU.


----------



## GreatestChase

kryptonfly said:


> I just realized when I raise "core voltage %" it increases "SRAM output voltage", usually it's at 1.081v max or even less but at +50% core voltage it's at 1.087v and when I set +100% it's at 1.10v but I got artifacts in PR. Anyway, here's my max PR (till now), I hit internal voltages limit, if I set GPU at 1.075v, effective clock falls at 2160 mhz = lower score. I think I can improve score with lower temps (= lower GPU voltage) because T° ambient is 23°C and it warmed up my WC after many tries :
> 
> I scored 15 998 in Port Royal
> (2192 mhz effective clock at 1.068v, can do 1.062v with lower temps)
> 
> 
> 
> 
> Re-Bar is disabled, it looks like it is always enabled because I don't see any improvement ON/OFF... I ran with just 1x P-Core at 4.2 Ghz and 8x E-Core at 3.7 Ghz / vCore 1v with C-States and TVB voltage optimisations enabled / SA 0.95v / VDDQ 1.15v / ram XMP fixed at 1.4v / G-Sync disabled / Hardware-accelerated disabled / SMI lock 2235 mhz, nothing more. My +12v is sligthly better with this "ECO" profile but mainly my UPS (power inverter) doesn't beep anymore
> 
> 
> 
> Seems like I can't burn this Gigabyte Turbo 2*8 pins in PR anyhow with 4 mΩ shunts + 1000W XOC bios, internal voltages prevent me to go beyond.
> 
> It could be many things, storage, GPU, bad settings in bios... Did you try to load "optimized default" setting in bios (or whatever the name) and boot directly ? Did you try another GPU ? Did you modify SSDs or anything ? Maybe it will take time to find what's wrong, it could be beyond of GPU.


So far the things I have done are: flash the newest mobo bios, remove any OC on the memory and CPU, reinstall the OS, attempt to boot without the GPU using the iGPU, and reflash the bios on the GPU. I think that covers everything. Through each of these the problem has persisted. I just noticed the XOC bios thing this morning. The only thing I guess I haven't tried is swapping out the storage drive that the OS is on. I may have to try that next. My OS is currently on a Samsung 970 NVMe drive.


----------



## GRABibus

geriatricpollywog said:


> My best run was around 25F ambient outside. You’ll need glycol coolant to keep the water from freezing.
> 
> I have an average sample for a Kingpin. My memory tops out around 1394. You should be able to exceed my score if you can get your card on a Hydrocopper block and get your water temp low enough. I wouldn’t suggest my voltages at your high temps though.


The memory OC seems to be a mess on this card.
I mean, during some runs and games I can set +1600 MHz without touching Classified (on default) and have no issue.
And sometimes, with Classified set to high voltages, as soon as I launch my OC through MSI AB with +1600 MHz on the memory, my computer shuts down with a black screen.
Even with +1400 MHz I have this.
+1200 MHz seems to be the max without these issues.

I have never seen such erratic stability (or instability) on the VRAM of a graphics card…


----------



## J7SC

GRABibus said:


> The memory OC seems to be a mess on this card.
> I mean, during some runs and games I can set +1600 MHz without touching Classified (on default) and have no issue.
> And sometimes, with Classified set to high voltages, as soon as I launch my OC through MSI AB with +1600 MHz on the memory, my computer shuts down with a black screen.
> Even with +1400 MHz I have this.
> +1200 MHz seems to be the max without these issues.
> 
> I have never seen such erratic stability (or instability) on the VRAM of a graphics card…


What are the VRAM temps under sustained heavy load and with HWInfo fast-polling ?


----------



## GRABibus

J7SC said:


> What are the VRAM temps under sustained heavy load and with HWInfo fast-polling ?


I will check, but the issue occurs right after reboot, when I first set the OC with MSI AB.
I mean, I set my OC with MSI AB with +1600 MHz, I click « apply » in MSI AB and Booomm!, black screen and the computer shuts down. In that case the memory chips are cold.


----------



## yzonker

GreatestChase said:


> Well, my plight continues. So after reinstalling my OS, I decided to play it safe and go ahead and remove my OC on my cpu and disable XMP on my memory. When I'm using the LN2 bios I'm generally able to boot into windows with no problem. However, if I swap over to the XOC bios, I seem to have much more trouble. For example, this morning I shut down my computer, swapped to the XOC bios, and attempted to boot up. Then I once again got stuck at the bios splash screen with the circle spinning at the bottom. The problem persisted after multiple hard resets, and when the automatic repair thing showed up on the bios splash screen it would hang as well. So I swapped it back to the LN2 bios, and after a couple attempts it booted into windows, and would continue to do so when I initiated a restart. I've reflashed the XOC bios thinking maybe that somehow it had become bricked. The problem persists after that. So now I'm even more confused. I was ready to get some runs in the morning because it was quite cold, had managed to get my liquid temp to sub 10 C. But no dice I guess.
> 
> Edit: Forgot to mention that no matter what the circumstances are, I have been able to get into the motherboard bios every time that I have tried.
> 
> On a separate note, does the classified tool not work properly with the 520W bios? I tried to give it a couple runs on it, and the NVVDD reading from the gpu seemed correct, but I was getting large fluctuations in my gpu clocks during runs when trying to use it from 1800 MHz-2100 MHz. If I don't use the classified tool, the clocks are much more stable, I just can't get it to run at the same clocks using only the dip switches.
> 
> Sorry to keep bombarding you guys with questions. I'm just kind of at a loss as to where to go from here. I'm going to let it be over the rest of the week, and maybe try again this weekend when the temps get back down and I have a little more time to deal with it. Thanks to those who have been helping out.


You've probably already tried this, but what if you DDU while on the LN2, then swap to the XOC bios and boot up with no NVIDIA drivers installed. Then install drivers (assuming you make it back to the desktop).


----------



## yzonker

GRABibus said:


> I will check but the issue occurs just after reboot and when I just set the OC with MSI AB.
> I mean, I set my OC with MSI AB with +1600MHz, I click « apply » in MSI AB and Booomm !, black screen and computer shuts down. In this case memory chips are cold.


My Zotac 3090 will do that if I set the mem OC way too high.


----------



## GRABibus

yzonker said:


> My Zotac 3090 will do that if I set the mem OC way too high.


Yes.
And sometimes I can play 2 hours of games with the same settings….
Crazy


----------



## KedarWolf

I got these to redo my backplate on my 3090. I'd get an Optimus Watercooling Strix OC RTX 3090 block instead of my EK block, but I'd be running it on an EKWB Phoenix 360 AIO with the block added, and unless I went custom water cooling for the block, it would really be a waste.

I used Gelid Extreme on the waterblock; being softer, I find it gives better die contact.


----------



## GRABibus

kryptonfly said:


> I just realized when I raise "core voltage %" it increases "SRAM output voltage", usually it's at 1.081v max or even less but at +50% core voltage it's at 1.087v and when I set +100% it's at 1.10v but I got artifacts in PR. Anyway, here's my max PR (till now), I hit internal voltages limit, if I set GPU at 1.075v, effective clock falls at 2160 mhz = lower score. I think I can improve score with lower temps (= lower GPU voltage) because T° ambient is 23°C and it warmed up my WC after many tries :
> 
> I scored 15 998 in Port Royal
> (2192 mhz effective clock at 1.068v, can do 1.062v with lower temps)
> 
> 
> 
> 
> Re-Bar is disabled, it looks like it is always enabled because I don't see any improvement ON/OFF... I ran with just 1x P-Core at 4.2 Ghz and 8x E-Core at 3.7 Ghz / vCore 1v with C-States and TVB voltage optimisations enabled / SA 0.95v / VDDQ 1.15v / ram XMP fixed at 1.4v / G-Sync disabled / Hardware-accelerated disabled / SMI lock 2235 mhz, nothing more. My +12v is sligthly better with this "ECO" profile but mainly my UPS (power inverter) doesn't beep anymore
> 
> 
> 
> Seems like I can't burn this Gigabyte Turbo 2*8 pins in PR anyhow with 4 mΩ shunts + 1000W XOC bios, internal voltages prevent me to go beyond.
> 
> It could be many things, storage, GPU, bad settings in bios... Did you try to load "optimized default" setting in bios (or whatever the name) and boot directly ? Did you try another GPU ? Did you modify SSDs or anything ? Maybe it will take time to find what's wrong, it could be beyond of GPU.


What a shame 😅
Come back with a screenshot with 2 more points in PR! 😊


----------



## sew333

kryptonfly said:


> You told me that when you tried Endwalker your PC shut down; you can retry, but if it shuts down, it's your PSU. It's not normal that at stock with 850W it triggers any protection. The capacitors empty, so the lights flicker.
> 
> Wildlife is not enough; it's not the same load as CPU+GPU, as I told you. If the PSU triggers OCP you need to flip the switch, but in this case it's the motherboard which acts as a switch first; that's why you don't need to unplug the PSU from the wall. Same as a good true OCP from the PSU: just unplug the ATX 24-pin and it's like flipping the switch (which is what the motherboard does in this case).
> I had a V1200 which had good +12v and perfect voltage under load, and it triggered OCP in Cyberpunk when I entered a busy street (CPU OC + GPU OC), always at exactly the same area. Since the new PSU, no trouble anymore.
> My new Asus Z690 acts like my Asus X99 did when I experienced an OCP in Endwalker (too much CPU OC at 1.52v + GPU OC on 2x8-pin): it shut down, but in this case I didn't see the "power surge protection" warning when it restarted, because the setting is not present anymore in this motherboard, but it uses the same mechanism as my X99.
> No link in this matter. It's "normal" screen stuff. I would not get worried about it.
> 
> Overall these Gigabyte 2x8-pin boards are not well balanced; some burn because of "New World", some can handle 600W like mine...
> 1st: try with at least a 1000W non-Seasonic PSU (it will fix the problem, but anyway).
> 2nd: your GPU board is not well made, but I really don't think so.
> It's cheaper and wiser to check the PSU first. It's up to you to fix your trouble with the elements here; it seems like you're waiting for a comment confirming your point of view...


Hi kryptonfly. My motherboard is an Aorus Z490 Pro AX. So if I had that shutdown and could just press the restart button to boot, without flipping the switch on the back of the PSU, was it still OCP?


Or maybe it was a power fluctuation in my house? But it happened during the Metro Exodus advertisement part, coincidence?

It happened once; I tried to reproduce it, launching the game 100 times, and no shutdown again.


----------



## tps3443

sew333 said:


> CB2077 can drop to 60 fps when using Ultra RT and 1440P DLSS Quality.


I run Psycho RT with Psycho reflections on as well. Everything is maxed out, with "Quality DLSS", and my game never dips below 73 fps.

If I run Ultra RT the frame rate never dips below 87 fps, with an average of around 100 fps.


----------



## sew333

tps3443 said:


> I run Psycho RT with Psycho reflections on as well. Everything is maxed out, with “Quality DLSS”, my game never dips below 73fps.
> 
> If I run Ultra RT that frame rate never dips below 87 FPS. With an average of around 100FPS.


Not possible, especially in the most crowded areas. Just enter a crowded area and you will dip to 60 fps. I have an RTX 3090 with a 10850K, all Ultra, RT Ultra, DLSS Quality, and I have dips to 60 fps in some areas. Even on YouTube, the same fps as me.
Don't get me wrong, I respect you. But can you test in daylight in the most crowded area of the city and measure fps? Thanks


Psycho RT 



 6:23


51:40 Ultra RT dlss quality


----------



## kryptonfly

sew333 said:


> Hi krytponfly. My motherboard is Aorus Z490 Pro Ax. So if i had that shutdown and just pressed restart button to boot,without flipping switch on back psu , it was still OCP?
> 
> 
> Or maybe that was power fluctuation in my house? But during Metro Exodus advertisement part ,coincedence?
> 
> It happened once i try reproduce this , launching game 100 times and no shutdown again.


Yep, high FPS don't mean it will always trigger OCP. I get 2000-4000 fps in Outriders during the advertising, +1400 fps in HZD after the bench, 4000 fps in the FFXIV benchmark (2010), and some others... so high FPS are not always involved in these shutdowns.


It's caused by high load + a fast transient (= high FPS, +300W in a few ms) + CPU load. I suggest you try another PSU, at least 1000W and non-Seasonic. Test the Endwalker bench again; if it shuts down, it's the PSU. I don't know what more I can tell you... Nobody except you will fix your problem if you don't want to listen to advice.


----------



## tps3443

@sew333 I made a video just for you!

Keep in mind, the 3090 KP bone stock is 40.3 TFLOPS, slightly more than a stock 3090 Ti; add in additional overclocking and it's a BEAST.

This video is at 2560x1440P with Quality DLSS, and Psycho ray tracing and Ultra ray tracing. I tried both options for you.

I hope this helps.


----------



## sew333

tps3443 said:


> @sew333 I made a video just for you!
> 
> Keep in mind, the 3090KP bone stock is 40.3 Tflops. Slightly more than that of a stock 3090Ti, add in additional overclocking and it’s a BEAST.
> 
> This video is with 2560x1440P Quality DLSS, and Psycho Raytracing and Ultra Raytracing. I try both options for you.
> 
> I hope this helps.


You must walk around many people in the center of the city, in daylight. Test pls


----------



## tps3443

sew333 said:


> You must walk around many people in the center of the city, in daylight. Test pls


Tell me where to go in the game and I'll make another video. I have played this game a lot in the past on numerous GPUs and CPUs, and my lowest FPS is 68-69 even with Psycho RT on.


----------



## sew333

tps3443 said:


> Tell me where to go in the game and I'll make another video. I have played this game a lot in the past on numerous GPUs and CPUs, and my lowest FPS is 68-69 even with Psycho RT on.


The places at the beginning of the game when you go out from the hotel. There are many people on the street. Thanks


----------



## tps3443

sew333 said:


> The places at the beginning of the game when you go out from the hotel. There are many people on the street.


@sew333 

Never dips below 80FPS. Crowds of people, and day time.


ULTRA RT and Quality DLSS
2560x1440P.


----------



## sew333

tps3443 said:


> @sew333
> 
> Never dips below 80FPS. Crowds of people, and day time.
> 
> 
> ULTRA RT and Quality DLSS
> 2560x1440P.


Thx. So why did he get 60 fps here on the same settings with a 3090 and 10900K?
I have the same fps there. 51:40, Ultra RT, DLSS Quality







Maybe because you overclocked the 3090 and the CPU?
Or drivers? I have old drivers, 466.27 from May 2021. What are your drivers, bro, tps?

Also, what's your NPC density setting? High, Medium? It's in the gameplay options


----------



## tps3443

sew333 said:


> Thx. So why he got here 60fps on the same settings with 3090 and 10900K?
> I have the same fps there. 51:40 Ultra RT dlss quality
> 
> 
> 
> 
> 
> 
> 
> Maybe because you overclocked 3090 and cpu oc?


Honestly, it’s all about setup and tuning. He could be running Resizable BAR off, or he could have a poorly optimized setup. My GDDR6X memory alone is overclocked by 20%, while the stock 3090 KP is already 15% higher in TFLOPS than a 3090 FE.

As for my CPU, it is overclocked by +700 MHz on all cores over stock. Instead of 4.8 GHz all-core, it runs 5.5 GHz on all cores. Cyberpunk 2077 is pretty GPU limited, though.


But you can never directly compare setups. I'm gonna guess and say it's the overclocking that is providing the performance difference.

Though I don't gain a huge amount over the already fast stock 3090 Kingpin.


----------



## sew333

tps3443 said:


> Honestly, it’s all about setup and tuning. He could be running resizable bar off, he could have a poorly optimized setup too. My GDDR6X memory alone is overclocked by 20%. While the stock 3090KP is already 15% higher Tflops than a 3090FE.
> 
> As for my CPU, it is overclocked by +700Mhz on all cores over stock. Instead of 4.8Ghz all cores, it runs 5.5Ghz on all cores. Cyberpunk 2077 is pretty GPU limited though.
> 
> 
> But you can never compare setups. I’m gonna guess and say, it’s the overclocking that is providing the performance difference.
> 
> Though, I don’t gain a huge amount over the already fast stock 3090 Kingpin.


ok


----------



## yzonker

tps3443 said:


> Honestly, it’s all about setup and tuning. He could be running resizable bar off, he could have a poorly optimized setup too. My GDDR6X memory alone is overclocked by 20%. While the stock 3090KP is already 15% higher Tflops than a 3090FE.
> 
> As for my CPU, it is overclocked by +700Mhz on all cores over stock. Instead of 4.8Ghz all cores, it runs 5.5Ghz on all cores. Cyberpunk 2077 is pretty GPU limited though.
> 
> 
> But you can never compare setups. I’m gonna guess and say, it’s the overclocking that is providing the performance difference.
> 
> Though, I don’t gain a huge amount over the already fast stock 3090 Kingpin.


No way overclocking a 3090 gets you from 60 to 80 fps. Even taking my Zotac 3090 Trinity from bone stock to a custom loop and the KP 1kW bios didn't gain much.

Obviously something else that's different. Not sure what.


----------



## KedarWolf

sew333 said:


> Not possible, especially in most crowded areas. Just enter a crowded area and you will dip to 60 fps. I have an RTX 3090 with a 10850K, all Ultra, RT Ultra, DLSS Quality, and have dips to 60 fps in some areas. Even on YouTube, the same fps as me.
> Don't get me wrong, I respect you. But can you test in daylight in the most crowded city area and measure fps? Thanks
> 
> 
> Psycho RT
> 
> 
> 
> 6:23
> 
> 
> 51:40 Ultra RT DLSS Quality


3840x1080, which is 4,147,200 pixels in total (1440p is 3,686,400 pixels), everything maxed out in Cyberpunk, DLSS on Quality, HDR on, resizable BAR enabled globally with Nvidia Inspector, 144Hz.

5950X CPU with a really aggressive but stable CPU cores curve. Memory CL14 3800 MHz.

I average 80 FPS even in dense areas on my Strix OC with an average overclock, nothing high at all.
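Since the comparison above leans on those pixel counts, here's a quick sanity check of the math (trivial Python, numbers straight from the post):

```python
# Pixel counts of the two resolutions being compared.
ultrawide = 3840 * 1080      # double-wide 1080p panel
qhd = 2560 * 1440            # standard 1440p

print(ultrawide)             # 4147200
print(qhd)                   # 3686400
print(ultrawide / qhd)       # 1.125 -> only 12.5% more pixels to render
```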


----------



## ManniX-ITA

sew333 said:


> Or drivers? I have old drivers, 466.27, from May 2021. What are your drivers, bro, tps?


Different drivers and/or game/DLSS library version.
Both those videos are very old.

With latest drivers and game version I can do 80 fps with the Strix OC (+110/800 MHz) outside the hotel.

DLSS Off goes down to 45-50 fps. Balanced around 90 fps and Performance 100-110 fps.

With RT Psycho drops around 60-70 fps with DLSS Quality and 40 fps with Off.


----------



## kryptonfly

GRABibus said:


> What a shame 😅
> Come Back with a screenshot with 2 points more in PR ! 😊


 
I scored 16 023 in Port Royal



I noticed that when I raise "core voltage %" I can push the internal limit away just a bit and hold 2200 MHz effective at 1.075 V. It's not the temp, because I tried "+50% core voltage" and the effective clock fell to 2173 MHz. For a 2x8-pin card it's well enough. I'm not sure I'm really at 621 W though: I calibrated custom power values in HWiNFO64 against my stock 390 W BIOS during a game at 99% usage, back when the board was free of any shunts. But now there are 4 mΩ shunts soldered on (too lazy to de-solder; I've soldered plenty like that already: 15, 10, 5, 8 and 4 mΩ!), so I have to "trust" these approximate powers.
EDIT: 4 mΩ shunts + 1000W XOC BIOS.

Your turn now
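Since shunt-modded power readings came up: a rough sketch of why shunted cards under-report power. It assumes the controller still believes the stock shunt value, so a resistor soldered in parallel scales the reading down; the 5 mΩ stock / 4 mΩ added values below are only illustrative, not a claim about any specific rail on this card.

```python
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def true_power(reported_w, stock_shunt_mohm, added_shunt_mohm):
    """Scale a reported power figure back up for a shunt-modded rail.

    The controller computes current from the sense voltage assuming the
    stock shunt, so the reading shrinks by R_eff / R_stock; the real
    draw is the reading divided by that ratio.
    """
    r_eff = parallel(stock_shunt_mohm, added_shunt_mohm)
    return reported_w * stock_shunt_mohm / r_eff

# Illustrative only: a 4 mOhm resistor across a hypothetical 5 mOhm
# stock shunt makes a "390 W" reading correspond to ~877 W real draw.
print(true_power(390, 5, 4))
```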


----------



## GRABibus

kryptonfly said:


> I scored 16 023 in Port Royal
> 
> 
> 
> I noticed that when I raise "core voltage %" I can push the internal limit away just a bit and hold 2200 MHz effective at 1.075 V. It's not the temp, because I tried "+50% core voltage" and the effective clock fell to 2173 MHz. For a 2x8-pin card it's well enough. I'm not sure I'm really at 621 W though: I calibrated custom power values in HWiNFO64 against my stock 390 W BIOS during a game at 99% usage, back when the board was free of any shunts. But now there are 4 mΩ shunts soldered on (too lazy to de-solder; I've soldered plenty like that already: 15, 10, 5, 8 and 4 mΩ!), so I have to "trust" these approximate powers.
> EDIT: 4 mΩ shunts + 1000W XOC BIOS.
> 
> Your turn now


France is proud of you 🇫🇷 ☺


----------



## geriatricpollywog

GRABibus said:


> France is proud of you 🇫🇷 ☺


Not surprised considering how patriotic you guys are.


----------



## GRABibus

geriatricpollywog said:


> Not surprised considering how patriotic you guys are.


Ahhahahah


----------



## geriatricpollywog

GRABibus said:


> Ahhahahah


Don’t you mean “honhonhon?”


----------



## GRABibus

geriatricpollywog said:


> Don’t you mean “honhonhon?”


Hihihih


----------



## des2k...

The RTX 30 series has a GPU co-processor, not yet unlocked for desktop cards. Very interesting — could it help a lower-end CPU with an fps boost?









Nvidia Driver Unlocks Performance Boosting GPU System Processor
It will be a feature exclusive to 2022 Max-Q laptops and Enterprise GPUs for now
www.tomshardware.com





*Not active on desktop rtx30 series*










*From nvidia documentation*
Some GPUs include a GPU System Processor (GSP) which can be used to offload GPU initialization and management tasks. This processor is driven by the firmware file /lib/firmware/nvidia/510.39.01/gsp.bin. A few select products currently use GSP by default, and more products will take advantage of GSP in future driver releases.


Offloading tasks which were traditionally performed by the driver on the CPU can improve performance due to lower latency access to GPU hardware internals.


----------



## GRABibus

The kingpin Hybrid is coming with 3 bios.
=> Normal bios (450W)
=> OC Bios (480W)
=> LN2 Bios (520W).

The LN2 bios has the same version 94.02.42.C0.0C as the 520W Kingpin Bios with Rebar we all know.

BUT, by using it, I could play some games stable with +30 to +45 MHz more on the core than with the usual 520W Kingpin BIOS with ReBar.
Very interesting.

If you want to try it (at your own risk): maybe you already know it.

Question: is it a real 520W BIOS? Are the protections removed?

here is the LN2 Bios :


----------



## yzonker

GRABibus said:


> The kingpin Hybrid is coming with 3 bios.
> => Normal bios (450W)
> => OC Bios (480W)
> => LN2 Bios (520W).
> 
> The LN2 bios has the same version 94.02.42.C0.0C as the 520W Kingpin Bios with Rebar we all know.
> 
> BUT, by using it, I could play some games stable with +30 to +45 MHz more on the core than with the usual 520W Kingpin BIOS with ReBar.
> Very interesting.
> 
> If you want to try it (at your own risk): maybe you already know it.
> 
> Question: is it a real 520W BIOS? Are the protections removed?
> 
> here is the LN2 Bios :


Highly likely it's the exact same bios if the version number is the same. Higher overclock must be due to some other factor if that is the case.


----------



## GRABibus

yzonker said:


> Highly likely it's the exact same bios if the version number is the same. Higher overclock must be due to some other factor if that is the case.







Behavior is fully different on core, as I said.
And also on memory: with this BIOS, my Classified settings that were stable at +1600 MHz with the 520W ReBar don't work anymore with the LN2.
Maybe it is the same BIOS... but not the same behavior?

Try it.


----------



## ManniX-ITA

GRABibus said:


> Question : is it a real 520W Bios ? Are the protections removed ?


It can be opened by ABE:










It doesn't look so special...


----------



## GRABibus

OK.
But different behavior for me.


----------



## KedarWolf

Roacoe717 said:


> Got another Strix 3090, stock BIOS. Might venture to watercool in the future. Someone said this card could do 15k, but I don't see how. My old one topped out at 14.5k. I'd love to reach Legendary on air, but oh well.


I scored 15 772 in Port Royal Strix OC EK block, 1x 360 RAD, GPU only.


----------



## truehighroller1

What's the best BIOS now? I have the Suprim X 3090. I believe I'm running the 1kW KP BIOS. I don't know if it's the most recent ReBar BIOS or not, but it says ReBar enabled, I believe.


----------



## KedarWolf

So, they released a new Vector2 waterblock and active backplate combo from EKWB for $400 USD.

I contacted their tech support and they say the thermal pads they provide with it are 3.5 W/mK.

W T F??

I need replacement pads as well if I buy it, but they need to be Hardness: 5 - Shore A to work properly with the block and active backplate.

Anyone suggest which pads would work? If they are too hard or too soft, when you're trying to screw the connector into the active backplate and waterblock, the screws won't line up properly.


----------



## GRABibus

truehighroller1 said:


> What's the best bios now? I have the suprim x 3090. I believe in running the 1k watt kp bios. I don't know if it's the most recent rebar bios or not but it says rebar enabled I believe.


Best BIOS depends on cooling, on card model, on usage (benchmarks, games, etc.).

The best BIOSes are the ones you already know for sure:
KP 1KW with ReBar
KP 520W with ReBar
SuprimX 450W with ReBar
EVGA 500W with ReBar

What is missing, in my opinion, is a BIOS around 550W-600W with ReBar and protections, so that it can be used 24/7 on good water cooling, or even an AIO like the great Kingpin one.


----------



## yzonker

KedarWolf said:


> So, they released a new Vector2 waterblock and active backplate combo from EKWB for $400 USD.
> 
> I contacted their tech support and they say the thermal pads they provide with it are 3.5 W/mK.
> 
> W T F??
> 
> I need replacement pads as well if I buy it, but they need to be Hardness: 5 - Shore A to work properly with the block and active backplate.
> 
> Anyone suggest which pads would work? If they are too hard or too soft, when you're trying to screw the connector into the active backplate and waterblock, the screws won't line up properly.


First step is this hardness scale, in case you haven't dug into it:



https://www.apstpe.com/media/pdf/Shore-Hardness-Scales.pdf



Companies like Gelid report Shore 00 rather than Shore A like EK does. The Gelid Extremes appear a bit softer, possibly at 35 Shore 00. I saw someone on Reddit a while ago working this same issue out, but I don't recall what pad they chose. It looks like you need somewhere in the 45-50 Shore 00 range.

I think most blocks other than Optimus come with 3.5 W/mK pads or something close to that. That's what my new Heatkiller V came with as well.


----------



## GreatestChase

I personally have had good results with thermalright extreme odyssey pads for both my ftw3 cards and now my KP card. On my ftw3 cards with an ekwb block and backplate I would have temps in the mid-60s on the mem junction temp while eth mining. On my KP card I get mid 70s on the mem junction and it currently has the stock backplate and HC waterblock while I wait on one from optimus.


----------



## truehighroller1

GRABibus said:


> Best bios depends on cooling, on card model, on usage (benchmarks, games, etc,…).
> 
> best bioses are the ones you already know for sure :
> KP 1KW with rebar
> KP 520W with rebar
> SuprimX 450W with Rebar
> EVGA 500W with rebar
> 
> what is missing in my opinion is a bios like 550W-600W with rebar and protections, so that it can be used 24/7 on good water cooling, even an AIO like the great kingpin one.



I'm using water cooling. I have BIOS version 94.02.42.80.27. Looks like I have the FTW 3 500 watt. 

VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp 

Am I missing something with the newer ones? If so, where's a download link so I can flash this puppy?


----------



## GRABibus

Deleted


----------



## GRABibus

truehighroller1 said:


> I'm using water cooling. I have BIOS version 94.02.42.80.27. Looks like I have the FTW 3 500 watt.
> 
> VGA Bios Collection: EVGA RTX 3090 24 GB | TechPowerUp
> 
> Am I missing something with the newer ones? If so, where's a download link so I can flash this puppy?


Which bios do you want to try ?


----------



## truehighroller1

GRABibus said:


> Which bios do you want to try ?



Well, I am pretty good at modding, first of all, so I don't think I'll blow anything up; I used to mod the 1080 Ti BIOS files for people on here. I just did some more reading, and people were saying that the Asus 520 ReBar BIOS had a higher memory voltage called out in it and that it seemed to work best for them, but is that still the case? I score 23000 GPU right now in Time Spy. I remember having issues with the Galax BIOS, I remember that much. I have pretty good cooling here, so heat won't be an issue for me. I just want to go right at what should be best performance-wise.

I noticed that you have a higher GPU score than me. Which one are you running?

Edit:

Well, I found the 1k rebar Asus xoc and I'm trying that to see what I get score wise.

Update:

Phew, this thing's a beast! I'm almost at my high score with the GPU and I'm still tweaking. 535 watts; I'm limiting it on purpose at this point. Holy smokes. I'm at +1450 memory right now.

Update 2:

Wow, beat my score and I'm not fully tweaked yet. I had a funny feeling things had gotten better. Now I just need to force ReBar, I guess, or whatever you guys were talking about; locking clocks too?

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)

550 watts peak.


----------



## tps3443

yzonker said:


> No way overclocking a 3090 gets you from 60 to 80fps. Even my Zotac 3090 trinity from bone stock to a custom loop and the KP 1kw bios didn't gain much.
> 
> Obviously something else that's different. Not sure what.



It’s almost all from overclocking. Think of a 350W, 35-TFLOP RTX 3090. It runs about 1,775 MHz boost stock with 19.5 Gbps memory.

Now overclock the core by 20%, then overclock the GDDR6X by 20%, all while giving it unlimited power to run free.

You’re going from 35 TFLOPs to about 44. Add in driver optimization, resizable BAR (announced later, and supported in Cyberpunk 2077), and a game that has been updated numerous times, and this can easily net you these gains.


My 2080Ti FE was 30% faster than a stock 2080Ti FE. Overclocking and setup is everything.
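The TFLOPs figures being traded in this back-and-forth are easy to reproduce: FP32 throughput is CUDA cores × 2 ops per clock (fused multiply-add) × clock speed. A quick sketch using the 3090's 10496 cores; the 2160 MHz figure is just the sustained clock claimed above, not a guaranteed overclock:

```python
def fp32_tflops(cuda_cores, clock_mhz):
    # 2 FP32 ops per core per clock (one fused multiply-add)
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

print(fp32_tflops(10496, 1695))   # ~35.6 at stock boost
print(fp32_tflops(10496, 2160))   # ~45.3 at a 2160 MHz sustained clock
# Ratio of the two clocks: ~27% more raw compute
print(fp32_tflops(10496, 2160) / fp32_tflops(10496, 1695) - 1)
```

This matches the "35 TFLOPs stock, ~45 overclocked, around 28% higher" numbers quoted in the thread.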


----------



## GRABibus

truehighroller1 said:


> Well, I am pretty good at modding first of all so don't think I'll blow anything up I used to mod the 1080ti BIOS files for people on here. I just did some more reading and people were saying that the asus 520 rebar had a higher memory voltage called out in the BIOS and that it seemed to work best for them but is that still the case? I score 23000 gpu right now in Timespy. I remember having issues with the galax BIOS. I remember that much. I have pretty good cooling here so heat won't be an issue for me. I just want to go right at what should be best performance wise.
> 
> I noticed that you have a higher GPU score than me. Which one are you running?
> 
> Edit:
> 
> Well, I found the 1k rebar Asus xoc and I'm trying that to see what I get score wise.
> 
> Update:
> 
> Phew this things a beast! I'm almost at my high score with the gpu and I'm still tweaking. 535 watts, I'm limiting it on purpose at this point. Holy smokes. I'm at +1450 memory right now.
> 
> Update 2:
> 
> Wow, beat my score and I'm not fully tweaked yet. I had a funny feeling things had gotten better. Now I just need to force ReBar, I guess, or whatever you guys were talking about; locking clocks too?
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)
> 
> 550 watts peak.



I run the one in my signature 24/7 :
Bios EVGA RTX 3090 K|NGP|N 520W with ReBar version 94.02.42.C0.0C


In Time Spy, playing with the Classified Tool and with the 1KW ReBar BIOS, I hit 653W power draw:
+195MHz on Core (Over 1695MHz)
+1600MHz on memory









I scored 21 777 in Time Spy.


----------



## tps3443

The OEM 3090 Kingpin thermal pads are amazing. Not the Hybrid thermal pads, but the Kingpin Hydro Copper thermal pads. They are firm, but not too firm, and they conduct extremely well.

They are 13 W/mK with a Shore hardness of 50.


----------



## truehighroller1

GRABibus said:


> I run the one in my signature 24/7 :
> Bios EVGA RTX 3090 K|NGP|N 520W with ReBar version 94.02.42.C0.0C
> 
> 
> In Time spy, playing with Classified Tool and with 1KW Rebar Bios I hit 653W Power draw :
> +195MHz on Core (Over 1695MHz)
> +1600MHz on memory
> View attachment 2543732
> 
> 
> I scored 21 777 in Time Spy.


Can I use the classified tool if I don't have the kp?


----------



## GreatestChase

tps3443 said:


> The OEM 3090 Kingpin thermal pads are amazing. Not the Hybrid thermal pads, but the Kingpin Hydro Copper thermal pads. They are firm, but not too firm. And they conduct extremely well.
> 
> They are 13 W/mK with a Shore hardness of 50.


They're extremely similar in appearance and feel to the ones that came on the HC block, and they have similar specs per their website: ODYSSEY THERMAL PAD 85x45x2.0mm – Thermalright


----------



## yzonker

tps3443 said:


> It’s almost all from overclocking. Think of a 350w 35tflop RTX3090. It runs about 1,775Mhz boost stock with 19.5GBPS memory.
> 
> Now, overclock the core by 20% then overclock the GDDR6X by 20% all while giving it unlimited power to run free.
> 
> You’re going from 35tflops to about 44tflops. Add in driver optimization, resizable bar that was later announced which is supported in Cyberpunk 2077, and a game that has been updated numerous times. and this can easily net you these gains.
> 
> 
> My 2080Ti FE was 30% faster than a stock 2080Ti FE. Overclocking and setup is everything.


But 60 to 80 fps is 33%. If you add 20% core and mem, that won't be 33%.


----------



## truehighroller1

OMG.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)

Smashing it!

I can't take it, it's too much. lol. Beat it again.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)


----------



## tps3443

yzonker said:


> But 60 to 80 fps is 33%. If you add 20% core and mem, that won't be 33%.


I said it comes from a multitude of things.

The 3090 FE stock is 35 TFLOPs; my 3090 KP, with the core boosting internally to 2,160 MHz, would be 45 TFLOPs. That’s around 28% higher compute power than a standard 3090 (with the memory stock). Add in an additional 20% memory overclock, the resizable BAR BIOS, driver updates, and Cyberpunk 2077 game updates.

Cyberpunk 2077 saw a tremendous boost with the resizable BAR update. It saved the experience for a lot of people playing the game.


----------



## yzonker

Taking another run at PR with the new Heatkiller block. Absolutely helped. Even though water temp was 3-5C higher than last time I did some cool weather benching, core temp was 1C lower. Also was able to go an extra +125 on the mem. Still not great at +1125, but at least better.









Result not found







www.3dmark.com





Effective core clock in the 2220-2225 range. 2220 minimum below.

Also ran TS and TSE while I had everything cooled down well.

23 898 Graphics, reBar was still forced. I think it would have made 24k+ if I had been using @KedarWolf 's stripped down Win10 image. 









I scored 21 169 in Time Spy


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11}




www.3dmark.com





12422 Graphics TSE,









I scored 10 677 in Time Spy Extreme


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11}




www.3dmark.com


----------



## tps3443

yzonker said:


> Taking another run at PR with the new Heatkiller block. Absolutely helped. Even though water temp was 3-5C higher than last time I did some cool weather benching, core temp was 1C lower. Also was able to go an extra +125 on the mem. Still not great at +1125, but at least better.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result not found
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Effective core clock in the 2220-2225 range. 2220 minimum below.
> 
> Also ran TS and TSE while I had everything cooled down well.
> 
> 22898 Graphics, reBar was still forced. I think it would have made 24k+ if I had been using @KedarWolf 's stripped down Win10 image.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 169 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 12422 Graphics TSE,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 677 in Time Spy Extreme
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2543743


Wow your GPU power consumption is insane! I can’t get anywhere near that type of power usage.

Can you share your MSI AB settings? I’d like to give this a go.


----------



## yzonker

tps3443 said:


> Wow your GPU power consumption is insane! I can’t get anywhere near that type of power usage.
> 
> Can you share your MSI AB settings? I’d like to give this a go.


2x8pin. Only 550w or so.


----------



## J7SC

...I was just doing a bit of reading and came across this article at Igor's Lab re. potential/speculative/add-salt reasoning for the 3090 Ti's delay... wondering about the implications for the 3090 on 520W, though since I switched to that vbios on my Strix with its sturdy VRM, I haven't had any problems with it (so far?)


----------



## tps3443

truehighroller1 said:


> Can I use the classified tool if I don't have the kp?


Unfortunately no. The Classified tool is very picky; I’ve had it block even my 3090 KP Hydro Copper. Fortunately, Vince gave me a newer Classified tool that supports the KP 520 LN2 HC BIOS.


----------



## ManniX-ITA

KedarWolf said:


> So, they released a new Vector2 waterblock and active backplate combo from EKWB for $400 USD.


Why are you interested in the EKWB when you can easily order the Optimus Absolute?
Not only does it come with Fujipoly pads and is much better in all aspects, but it's also cheaper.


----------



## geriatricpollywog

J7SC said:


> ...I was just doing a bit of reading and came across > this article at Igor's Lab re. potential/speculative/add salt reasoning for the 3090 Ti's delay...wondering about the implications for the 3090 on 520W, though whenever I switched to that vbios on my Strix with its sturdy VRM, I haven't had any problems with it (so far?)


Stumbling through the machine-generated translations on Igor’s Lab makes me respect Der8auer for how much effort he puts towards English content.


----------



## J7SC

geriatricpollywog said:


> Stumbling through the machine-generated translations on Igor’s Lab makes me respect Der8auer for how much effort he puts towards English content.


...fortunately, I can read the original in German. Also, Der8auer has had the odd vid not translated, but yeah, usually he does everything twice... dedication!


----------



## kryptonfly

tps3443 said:


> It’s almost all from overclocking. Think of a 350w 35tflop RTX3090. It runs about 1,775Mhz boost stock with 19.5GBPS memory.
> 
> Now, overclock the core by 20% then overclock the GDDR6X by 20% all while giving it unlimited power to run free.
> 
> You’re going from 35tflops to about 44tflops. Add in driver optimization, resizable bar that was later announced which is supported in Cyberpunk 2077, and a game that has been updated numerous times. and this can easily net you these gains.
> 
> 
> My 2080Ti FE was 30% faster than a stock 2080Ti FE. Overclocking and setup is everything.


I tried at 1440p, DLSS Quality v2.3.4, RT Psycho, everything maxed out; I got 66 FPS at the lowest so far on Prophet Garry's street, GPU locked at 2175 MHz (2120-2130 effective), vram +1168 for now. It crashes; my CPU could be involved, as yesterday was the first time I tested my 12900K with this game.


tps3443 said:


> I said it comes from a multitude of things.
> 
> The 3090FE stock is 35Tflops, well my 3090KP with the core boosting an internal of 2,160Mhz would be 45tflops. That’s around 28% higher computer power than a standard 3090 (With the memory stock) Add in an additional 20% memory overclocking. Resizable bar bios, and driver updates, and cyberpunk 2077 game updates.
> 
> Cyberpunk 2077 saw a tremendous boost with resizable bar upgrade. It saved the experience for a lot of people playing the game.


I can confirm, it's day and night from an X99 (Gen 3.0, no ReBar) to a Z690; I get the same FPS at Psycho maxed out as I got at High with RT shadows disabled on X99.
Did you tweak the Nvidia panel? Anything? Or just run at default settings?


----------



## tps3443

Anyone struggling to maintain a consistent Time Spy physics score? I can get a 16,200 then a 16,025, then 15,900 lol… Ughhh I swear it’s annoying.

Any ideas?

I push for a good graphics score, and my physics score is tanked. I think I had the same issue on my 7980XE, extremely inconsistent scoring.


----------



## MrTOOSHORT

tps3443 said:


> Anyone struggling to maintain a consistent Time Spy physics score? I can get a 16,200 then a 16,025, then 15,900 lol… Ughhh I swear it’s annoying.
> 
> Any ideas?
> 
> I push for a good graphics score, and my physics score is tanked. I think I had the same issue on my 7980XE, extremely inconsistent scoring.


Supposed to run 3dmark right after firing up or restarting your pc to get your best Physics score. Running 3dmark right after a run without doing so drops the Physics score a bit.


----------



## J7SC

MrTOOSHORT said:


> Supposed to run 3dmark right after firing up or restarting your pc to get your best Physics score. Running 3dmark right after a run without doing so drops the Physics score a bit.


...slight variation on that good advice: I usually wait for about 1-2 min w/o opening anything but CoreTemps (which has a small footprint), just to let Windows finish its busybody ways after starting. Put differently, I wait after a (re)start until the CoreTemps % core load is stable / near zero.


----------



## GRABibus

truehighroller1 said:


> Can I use the classified tool if I don't have the kp?


No


----------



## tps3443

kryptonfly said:


> I tried at 1440p DLSS quality v2.3.4, RT psycho, everything max out, I got 66 FPS at the lowest so far on the Prophet Garry's street, GPU locked at 2175mhz (2120-2130 effective), vram +1168 for now. It crashes, my cpu could be involved, it was the first time yesterday I tested my 12900k with this game.
> I confirm, it's day and night from a X99 (Gen 3.0, no re-bar) to a Z690, I have same FPS at psycho max out than at high with RT shadows disabled before on X99.
> Did you tweak Nvidia panel ? Anything ? Or just run as default settings ?



No, I don’t have any driver tweaks. And yes, a newer platform runs Cyberpunk 2077 best.

I think Z390/9900K or even newer would be optimal.


----------



## truehighroller1

Some tweaking this morning.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)

How are you guys forcing ReBar and locking clocks? I keep googling it and am not seeing a straight "here it is" answer, and I know it's probably just me missing it somehow.


----------



## GreatestChase

truehighroller1 said:


> Some tweaking this morning.
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)
> 
> How are you guys forcing rebar and locking clocks? I keep googling it and am not seeing a straight here it is answer and I know it's probably just me missing it somehow.


Here is the article I followed. It's not the clearest description, though. Once you have the Nvidia Inspector tool, use the drop-down box to select the test you're trying to run; I'm showing it for Time Spy. Then make sure you have the magnifying glass on the right side of the top selected. Once you've done that, scroll down to Unknown and modify the three settings that I have highlighted to the options that I have selected.


----------



## truehighroller1

GreatestChase said:


> Here is the article I followed. It's not completely the clearest description thought. Once you have the nvidia inspector tool, use the drop down box to select the test you're trying to run, so I'm showing it for Timespy. Then make sure you have the magnifying glass on the R side of the top selected. Once you've done that, scroll down to unknown and modify the three setting that I have highlighted to the options that I have selected.
> 
> View attachment 2543805
> 
> 
> View attachment 2543806
> 
> 
> View attachment 2543807



Hell yeah!

Time Spy 

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)


Port Royal

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)


----------



## ManniX-ITA

truehighroller1 said:


> and locking clocks? I keep googling it and am not seeing a straight here it is answer and I know it's probably just me missing it somehow.


Use the search tool at the top of this page, look for "nvidia-smi", and select "in this thread only" in the drop-down.
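For the clock-locking half of the question, the usual route discussed in the thread is nvidia-smi's `--lock-gpu-clocks` (with `--reset-gpu-clocks` to undo it), run from an elevated prompt. A small sketch that only assembles the command line so it can be checked without touching the hardware; the 2100 MHz value is just an example, not a recommendation for any particular card:

```python
def lock_gpu_clocks_cmd(mhz, gpu_index=0):
    """Build the nvidia-smi argument list that pins core clocks at `mhz`."""
    return ["nvidia-smi", "-i", str(gpu_index),
            "--lock-gpu-clocks", f"{mhz},{mhz}"]

def reset_gpu_clocks_cmd(gpu_index=0):
    """Build the argument list that restores default clock management."""
    return ["nvidia-smi", "-i", str(gpu_index), "--reset-gpu-clocks"]

# Print the commands instead of running them; on a machine with the
# NVIDIA driver installed, pass the list to subprocess.run() as admin.
print(" ".join(lock_gpu_clocks_cmd(2100)))
print(" ".join(reset_gpu_clocks_cmd()))
```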


----------



## GRABibus

yzonker said:


> Taking another run at PR with the new Heatkiller block. Absolutely helped. Even though water temp was 3-5C higher than last time I did some cool weather benching, core temp was 1C lower. Also was able to go an extra +125 on the mem. Still not great at +1125, but at least better.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result not found
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Effective core clock in the 2220-2225 range. 2220 minimum below.
> 
> Also ran TS and TSE while I had everything cooled down well.
> 
> 22898 Graphics, reBar was still forced. I think it would have made 24k+ if I had been using @KedarWolf 's stripped down Win10 image.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 169 in Time Spy
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 12422 Graphics TSE,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 677 in Time Spy Extreme
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2543743


Why do you force ReBar in TS?


truehighroller1 said:


> Hell yeah!
> 
> Time Spy
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)
> 
> 
> Port Royal
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)


Is ReBar forced for PR?


----------



## yzonker

ManniX-ITA said:


> Use the search tool on top of this page and look for "nvidia-smi" and select in the drop down "in this thread only"


I had this post bookmarked. 









[Official] NVIDIA RTX 3090 Owner's Club


The step from 3090 to 3090 Ti seems marginal (say 6% fps on average); I rather wait for the 4090s. Besides, what with the new PCBs and Power Connectors, there's no way I'm modding the wiring and GPU's front-and-back in this complex build I finished only recently.: When I was doing HWBot years...




www.overclock.net


----------



## truehighroller1

GRABibus said:


> Why do you force ReBar in TS?
> 
> Is ReBar forced for PR?


I saw it in the drop-down, enabled it, and beat my high score.


----------



## GRABibus

truehighroller1 said:


> I see it in the drop down and enabled it and beat my high score.


For PR ?


----------



## truehighroller1

GRABibus said:


> Why do you force ReBar in TS?


It helped me beat my high score fairly easily, I guess.



GRABibus said:


> For PR ?


----------



## GRABibus

View attachment 2543825


Keep up the effort; you have to beat my 15,816 @ 54 degrees average 😉









I scored 15 816 in Port Royal


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}




www.3dmark.com


----------



## truehighroller1

GRABibus said:


> View attachment 2543825
> 
> 
> 
> Keep effort, you have to beat me with my
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 816 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com



But I beat my high score. I'll try beating it more soon. I need to get my CPU score back up to the 22000 range again, which I can, then run these tricks and I should be able to beat it, I would think.

I have you on my radar now lol.

15727

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)


----------



## GRABibus

truehighroller1 said:


> But, I beat my, high score. I'll try beating more soon. I need to get my cpu score back up to the 22000 range again which I can, then run these tricks and I should be able to beat it I would think.


Be aware also that I will leave my room window open all night and beat your TS graphics score very soon 😃


----------



## ManniX-ITA

truehighroller1 said:


> It helped me beat my high score fairly easily, I guess.


I will re-test; last time I checked, the Time Spy score was decreasing with ReBar forced on


----------



## truehighroller1

ManniX-ITA said:


> I will re-test; last time I checked, the Time Spy score was decreasing with ReBar forced on


It definitely boosted mine by 200; you can see where it jumped. It's the only thing I did differently. From what I was reading, if you're CPU limited, it decreases it.


----------



## ManniX-ITA

truehighroller1 said:


> It definitely boosted mine by 200; you can see where it jumped. It's the only thing I did differently. From what I was reading, if you're CPU limited, it decreases it.


Which is what was happening with PR while TS was dropping 200-400 points.
Wasn't for sure linked to the CPU.
Maybe a change in drivers/3dmark...
Time to re-test


----------



## GRABibus

ManniX-ITA said:


> Which is what was happening with PR while TS was dropping 200-400 points.
> Wasn't for sure linked to the CPU.
> Maybe a change in drivers/3dmark...
> Time to re-test


When I tested TS with forced ReBar some months ago, I had 200 points more in graphics score but 2000 less in CPU score!
I will re-test


----------



## ManniX-ITA

GRABibus said:


> When I tested TS with forced ReBar some months ago, I had 200 points more in graphics score but 2000 less in CPU score!
> I will re-test


Oh yes you are right, now I recall.
The overall score was much lower but not due to the GPU score.


----------



## sew333

yzonker said:


> No way does overclocking a 3090 get you from 60 to 80 fps. Even taking my Zotac 3090 Trinity from bone stock to a custom loop and the KP 1kW BIOS didn't gain much.
> 
> Obviously something else is different. Not sure what.


Maybe the DLSS version was manually updated? I have it at default. Or maybe it's progress in the game? I am at the beginning, on mission 1.

Only that game has low fps; in other titles my fps are adequate and 3DMark scores are within range.


----------



## yzonker

ManniX-ITA said:


> Oh yes you are right, now I recall.
> The overall score was much lower but not due to the GPU score.


Yeah, it hurts the CPU score. The percentage loss seems to be a lot smaller on my 5800X than on my older 3900X system. 

Not sure if it's due to the processor or the much better b-die RAM in the 5800X system. It hurts the 5800X so little that I still get a higher combined score despite the lower CPU score.


----------



## yzonker

Not huge gains, but I was looking back at my previous high PR score with the Corsair block. Looks like the Heatkiller gained 1 bin or so on the core (I think I added +15 at the 1093 and 1100 mV points) and +125 mem. 

Reran this morning and got very slightly more, just run-to-run variance, as water temp was 6C, same as last night. About a 15C block delta at ~550W.
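For context, a 15C block delta at ~550W works out to roughly 0.027 °C/W of block thermal resistance, by these numbers alone (a back-of-the-envelope sketch, not a proper measurement):

```python
# Block thermal resistance implied by the figures above: delta-T / power.
delta_t_c = 15.0   # GPU temp minus water temp, degrees C
power_w = 550.0    # approximate card power, W
print(f"~{delta_t_c / power_w:.3f} C/W")  # ~0.027 C/W
```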









I scored 16 142 in Port Royal (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11)
www.3dmark.com





Comparing to the older Corsair run (water was 2-3C then), 









Result: www.3dmark.com


----------



## GRABibus

TS with ReBar disabled (*14°C*): I scored 21 777 in Time Spy
21777pts (max power draw 653W)
GX score = 23233
CPU score = 16072

TS with forced ReBar (*18°C*): I scored 21 379 in Time Spy
21379pts (max power draw 679W)
GX score = 23589 (*+356pts vs no ReBar*)
CPU score = 13966 (*-2106pts vs no ReBar*)
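Side note on how those totals come together: the Time Spy overall isn't a sum, it's a weighted harmonic mean of the two sub-scores (0.85 graphics / 0.15 CPU, going by UL's 3DMark technical guide; truncating the result matches both runs above). Quick sketch:

```python
# Time Spy overall = weighted harmonic mean of graphics and CPU sub-scores.
# Weights (0.85 / 0.15) per UL's 3DMark technical guide; int() truncation
# reproduces the posted totals.
def time_spy_overall(gx_score: float, cpu_score: float) -> int:
    return int(1 / (0.85 / gx_score + 0.15 / cpu_score))

print(time_spy_overall(23233, 16072))  # ReBar off    -> 21777
print(time_spy_overall(23589, 13966))  # ReBar forced -> 21379
```

This is also why the +356 graphics points can't offset the -2106 CPU points: the CPU term drags the harmonic mean down hard.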


----------



## ManniX-ITA

sew333 said:


> Maybe the DLSS version was manually updated? I have it at default.


I've just updated the DLSS library manually to 2.3.5.0 and it works.
Seems there's a small improvement mostly visible in performance mode.
Around 90-100 fps with RT Psycho, dropping to 87 fps while running.
RT Ultra is always over 100 fps, peaking at 110-120 fps. Before, it was dropping to 90 fps while running.









GitHub - beeradmoore/dlss-swapper: Contribute to beeradmoore/dlss-swapper development by creating an account on GitHub.
github.com
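For anyone doing the swap by hand instead of via DLSS Swapper, the whole trick is just backing up and replacing `nvngx_dlss.dll` in the game's binary folder. A minimal sketch (paths are examples; a temp directory stands in for the real install so the steps are safe to dry-run):

```python
import shutil
import tempfile
from pathlib import Path

def swap_dlss(game_bin: Path, new_dll: Path) -> None:
    """Back up the shipped nvngx_dlss.dll, then drop in the newer build."""
    target = game_bin / "nvngx_dlss.dll"
    backup = game_bin / "nvngx_dlss.dll.bak"
    if not backup.exists():          # keep the original for rollback
        shutil.copy2(target, backup)
    shutil.copy2(new_dll, target)

# Dry run against a fake install (instead of e.g. .../Cyberpunk 2077/bin/x64):
root = Path(tempfile.mkdtemp())
game_bin = root / "bin" / "x64"
game_bin.mkdir(parents=True)
(game_bin / "nvngx_dlss.dll").write_text("2.1.x (shipped)")
new_dll = root / "nvngx_dlss_2.3.5.0.dll"
new_dll.write_text("2.3.5.0")

swap_dlss(game_bin, new_dll)
print((game_bin / "nvngx_dlss.dll").read_text())      # now the 2.3.5.0 build
print((game_bin / "nvngx_dlss.dll.bak").read_text())  # original kept for rollback
```

Verify the swap worked in-game; some titles restore the shipped DLL on update or file verification, so keep the backup around.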


----------



## sew333

ManniX-ITA said:


> I've just updated the DLSS library manually to 2.3.5.0 and it works.
> Seems there's a small improvement mostly visible in performance mode.
> Around 90-100 fps with RT Psycho, dropping to 87 fps while running.
> RT Ultra is always over 100 fps, peaking 110-120 fps. Before it was dropping to 90fps while running.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GitHub - beeradmoore/dlss-swapper: Contribute to beeradmoore/dlss-swapper development by creating an account on GitHub.
> github.com


My fps is 60-70 on RT Ultra, DLSS Quality, 1440p while walking in the city. All are updated. And I have no performance issues in other games; 3DMark scores are also OK. I have the game on Epic, so it should be updated.


----------



## ManniX-ITA

sew333 said:


> My fps is 60-70 on RT Ultra, DLSS Quality, 1440p while walking in the city.


I'm testing walking in front of the hotel, in daylight.
Unfortunately it's hard to do A-B testing without an in-game benchmark...
The game ships with a very old DLSS library, 2.1-something.
I have to say that while the fps improvement is not substantial, the quality of the rendering is clearly better.
In Performance mode it's quite horribly blurry, but with the new DLL version it's pretty good, not so much difference from the Quality setting.


----------



## sew333

ManniX-ITA said:


> I'm testing walking in front of the hotel, in daylight.
> Unfortunately it's hard to do A-B testing without an in-game benchmark...
> The game ships with a very old DLSS library, 2.1-something.
> I have to say that while the fps improvement is not substantial, the quality of the rendering is clearly better.
> In Performance mode it's quite horribly blurry, but with the new DLL version it's pretty good, not so much difference from the Quality setting.


Nice thx


----------



## yzonker

GRABibus said:


> TS with ReBar disabled (*14°C*): I scored 21 777 in Time Spy
> 21777pts (max power draw 653W)
> GX score = 23233
> CPU score = 16072
> 
> TS with forced ReBar (*18°C*): I scored 21 379 in Time Spy
> 21379pts (max power draw 679W)
> GX score = 23589 (*+356pts vs no ReBar*)
> CPU score = 13966 (*-2106pts vs no ReBar*)


Interesting. I take a much smaller percentage hit.

Here's an older run with reBar disabled, 

CPU: 13142









I scored 20 930 in Time Spy (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
www.3dmark.com





Forced, 
CPU: 12992









I scored 21 010 in Time Spy (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
www.3dmark.com





And the one from last night that was a little lower probably due to using Win11 rather than the stripped down image of Win10, 

CPU: 12853








I scored 21 169 in Time Spy (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11)
www.3dmark.com
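Putting numbers on "much smaller percentage hit", using the CPU scores GRABibus posted above and the reBar-off/forced pair here:

```python
# CPU-score hit from forcing ReBar, as a percentage of the ReBar-off score.
def pct_hit(off: int, forced: int) -> float:
    return (off - forced) / off * 100

print(f"5900X system: {pct_hit(16072, 13966):.1f}%")  # ~13.1%
print(f"5800X system: {pct_hit(13142, 12992):.1f}%")  # ~1.1%
```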


----------



## GRABibus

yzonker said:


> Interesting. I take a much smaller percentage hit.
> 
> Here's an older run with reBar disabled,
> 
> CPU: 13142
> 
> I scored 20 930 in Time Spy (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
> www.3dmark.com
> 
> Forced,
> CPU: 12992
> 
> I scored 21 010 in Time Spy (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
> www.3dmark.com
> 
> And the one from last night that was a little lower, probably due to using Win11 rather than the stripped-down image of Win10,
> 
> CPU: 12853
> 
> I scored 21 169 in Time Spy (AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11)
> www.3dmark.com

Are you running PBO?


----------



## J7SC

ManniX-ITA said:


> I'm testing walking in front of the hotel, daylight.
> Unfortunately it's hard to do A-B testing without an in game benchmark...
> The game is using a very old DLSS library, 2.1 something.
> Have to say while the fps improvement is not substantial the quality of the rendering is clearly better.
> In Performance mode is quite horribly blurry but with the new dll version is pretty good, not so much difference from the Quality setting.


I'll have to try that newer DLSS version! As for benching CP2077, yeah, an in-game benchmark would be nice. However, one can still bench reasonably accurately by loading a saved scene and then hitting 'Esc' at the same subsequent landmark.



yzonker said:


> Interesting. I take a much smaller percentage hit.
> Here's an older run with reBar disabled,
> (...)


It is worth keeping in mind that Port Royal is very sensitive to system memory latency (in addition to bandwidth). Well before r_BAR was implemented, I did a lot of Port Royal on my 2950X Threadripper / 2x 2080 Ti. While that CPU wasn't an ideal 'bencher', it had a NUMA/UMA switch acting on memory that made a huge difference, and allowed me to lay down some good scores in the 3D HoF back in the day, per below, against much faster CPUs. 

With r_BAR relating to system memory, I'm not surprised at the gains r_BAR delivers in Port Royal.


----------



## truehighroller1

J7SC said:


> I'll have to try that newer DLSS version ! As to benching CP2077, yeah, an in-game benchmark would be nice. However, one can still bench reasonably accurately when loading up a saved scene and then hit 'Esc' at the same subsequent landmark.
> 
> 
> 
> It is worth keeping in mind that Port Royal is very sensitive to system memory latency (in addition to bandwidth). Well before r_BAR was implemented, I did a lot of PortRoyal on my 2950X Threadripper / 2x 2080Ti. While that CPU wasn't an ideal 'bencher', it had a numa - uma switch acting on memory that made a huge difference - allowed me to lay down some good scores in 3D HoF back in the day per below against much faster CPUs.
> 
> With r_BAR relating to system memory, I'm not surprised at the gains r_BAR delivers in Port Royal.
> View attachment 2543846



My system memory is pretty tweaked to say the least, so this makes sense with my system pulling a higher score with it enabled.


----------



## yzonker

GRABibus said:


> You are with PBO ?


Yes and stable daily driver per core CO.


----------



## yzonker

J7SC said:


> I'll have to try that newer DLSS version ! As to benching CP2077, yeah, an in-game benchmark would be nice. However, one can still bench reasonably accurately when loading up a saved scene and then hit 'Esc' at the same subsequent landmark.
> 
> 
> 
> It is worth keeping in mind that Port Royal is very sensitive to system memory latency (in addition to bandwidth). Well before r_BAR was implemented, I did a lot of PortRoyal on my 2950X Threadripper / 2x 2080Ti. While that CPU wasn't an ideal 'bencher', it had a numa - uma switch acting on memory that made a huge difference - allowed me to lay down some good scores in 3D HoF back in the day per below against much faster CPUs.
> 
> With r_BAR relating to system memory, I'm not surprised at the gains r_BAR delivers in Port Royal.
> View attachment 2543846


It doesn't seem very sensitive to latency now. I didn't see any significant improvement going from C-die to optimized b-die. Tried it in both my 5800X and 3900X systems. TS gains a little in the graphics score and a bunch in the CPU score, of course.


----------



## J7SC

yzonker said:


> It doesn't seem very sensitive to latency now. I didn't see any significant improvement going from C-die to optimized b-die. Tried it in both my 5800x and 3900x systems. TS gains a little in the graphics score and a bunch in the cpu score of course.


That's the beauty of the TR 2950X (HEDT/quad-channel): one could vary latency from the low 70ns to the mid 50ns depending on settings, with a big gain in the score as the change in latency was so pronounced. That said, with my 5950X running native RAM at 4000 CL15 (daily 3800 CL14), r_BAR is the next-best thing


----------



## truehighroller1

yzonker said:


> It doesn't seem very sensitive to latency now. I didn't see any significant improvement going from C-die to optimized b-die. Tried it in both my 5800x and 3900x systems. TS gains a little in the graphics score and a bunch in the cpu score of course.


You can't ignore my bump in score.



J7SC said:


> That's the beauty about the TR 2950X (HEDT/quad-channel) - one could vary latency from low 70ns to mid 50ns depending on setting, with a big gain in the score as the change in latency was so pronounced. That said, with my 5950X running native RAM 4000 CL15 (daily 3800 CL14), r_BAR is the next-best thing


This. I'm at about 46ns latency with my memory timings, and it shows in the bump in my score with ReBar enabled.


----------



## GreatestChase

Quick question, I've noticed many people are using different graphics drivers than I am. I'm using 496.76. Is there a particular driver set that is the best for PR?


----------



## kryptonfly

J7SC said:


> It is worth keeping in mind that *Port Royal is very sensitive to system memory latency (in addition to bandwidth)*. Well before r_BAR was implemented, I did a lot of PortRoyal on my 2950X Threadripper / 2x 2080Ti. While that CPU wasn't an ideal 'bencher', it had a numa - uma switch acting on memory that made a huge difference - allowed me to lay down some good scores in 3D HoF back in the day per below against much faster CPUs.
> 
> With r_BAR relating to system memory, I'm not surprised at the gains r_BAR delivers in Port Royal.
> View attachment 2543846


Are you sure? My last 16K+ PR was at "stock": just 1 P-core at 4.2 GHz and 8 E-cores at 3.7 GHz (9 threads) @ 1V, ring at 3.5 GHz and RAM at XMP 3600 CL14. I tried with the RAM OC'd to 4100-14-14-14-28-240 and saw no difference


----------



## J7SC

kryptonfly said:


> Are you sure? My last 16K+ PR was at "stock": just 1 P-core at 4.2 GHz and 8 E-cores at 3.7 GHz (9 threads) @ 1V, ring at 3.5 GHz and RAM at XMP 3600 CL14. I tried with the RAM OC'd to 4100-14-14-14-28-240 and saw no difference


...yup (see the earlier post pic, and check the CPUs in the right column), though it may be that these days Alder Lake's IPC strength has moved the bar so that bottlenecks are more pronounced elsewhere. The 3950X was (is) also responsive to lower latency. 

...on that whole IPC thing, I'm looking forward to Zen 4 vs Raptor Lake, with DDR5 8000 CL32 and PCIe 5.0 4090s / 7900XTs. I probably won't upgrade until then, but never say never


----------



## ManniX-ITA

truehighroller1 said:


> You can't ignore my bump in score.


Could be, as suggested by J7SC, that it's a peculiarity of Alder Lake.
It's indeed the first time I've seen an improvement in the score with ReBar enabled.



J7SC said:


> However, one can still bench reasonably accurately when loading up a saved scene and then hit 'Esc' at the same subsequent landmark.


Yes that's what I do but I meant comparing scores with someone else.
Maybe we could share a save to start from the same exact point?
Not sure how saves are working with CP2077.


----------



## J7SC

ManniX-ITA said:


> Could be as suggested by J7SC it's a peculiar feature with Alder Lake.
> It's indeed the first time I see an improvement on the score with ReBar enabled.
> 
> 
> 
> *Yes that's what I do but I meant comparing scores with someone else*.
> Maybe we could share a save to start from the same exact point?
> Not sure how saves are working with CP2077.


CP2077 is such a weird, if graphically gorgeous, beast anyway. I usually just bench against my own previous best scores in CP2077 to see if a new driver or setting etc. works. I even had CP2077 'run' in SLI-CFR with 2x 2080 Ti in the early days for short bits, but only until I hit escape to make some changes - hard reboots  ...then that 3090 Strix OC fell into my lap...


----------



## sew333

kryptonfly said:


> You told me when you tried Endwalker that your PC shut down; you can retry, but if it shuts down, it's your PSU. It's not normal that at stock with 850W it triggers any protection. The capacitors get empty, so the light flickers.
> 
> Wildlife is not enough; it's not the same load as CPU+GPU, as I told you. If the PSU triggers OCP you need to flip the switch, but in this case it's the motherboard which acts as a switch first; that's why you don't need to unplug the PSU from the wall. Same as a good true OCP from the PSU: just unplug the ATX 24-pin and it's like flipping the switch (which is what the motherboard does in this case).
> I had a V1200 which had perfect +12V voltage under load, and it triggered OCP in Cyberpunk when I entered a loaded street (CPU OC + GPU OC), always at exactly the same area. Since the new PSU, no trouble anymore.
> My new Asus Z690 acts like my Asus X99 did when I experienced an OCP in Endwalker (too much CPU OC at 1.52V + GPU OC on 2x 8-pins); it shut down, but in this case I didn't see the "power surge protection" warning when it restarted, because the setting is not present anymore on this motherboard, but it uses the same mechanism as my X99.
> No link in this matter. It's a "normal" screen thing. I would not get worried about it.
> 
> Overall these Gigabyte 2x 8-pin cards are not well balanced; some burn because of "New World", some can handle 600W like mine...
> 1st: try with at least a 1000W non-Seasonic PSU (it will fix the problem anyway).
> 2nd: your GPU board is not well made, but I really don't think so.
> It's cheaper and wiser to check the PSU first. It's up to you to fix your trouble with the elements here; it seems like you're waiting for a comment that matches your point of view...


Is there any difference? I saw 30 fps in the advertisement intro, if I remember, when the shutdown occurred. Also, I don't flip the switch on the back of the PSU after a shutdown, just press the reset button. It happened once during 7 months of use.


----------



## kryptonfly

J7SC said:


> ...yup (see earlier post pic (and check CPUs in the right column), though it may be that these days, *Alder Lake's IPC strength moved the bar so that bottlenecks are more pronounced elsewhere now*. 3950X was (is) also responsive to lower latency.
> 
> ...on that whole IPC thing, I'm looking forward to Zen-4 vs Raptor Lake, with DDR5 8000 CL32 and PCIe 5.0 4090s / 7900XTs, probably won't upgrade until then but never say never


I don't agree; I ran PR with this perf:

It's plain to see it's not because of the IPC or the memory, even with just 1 P-core enabled at 4.2 GHz and 8 E-cores at 3.7 GHz.



sew333 said:


> Is there any difference? I saw 30 fps in the advertisement intro, if I remember, when the shutdown occurred. Also, I don't flip the switch on the back of the PSU after a shutdown, just press the reset button. It happened once during 7 months of use.


Not impossible; a bad PSU could generate instabilities, and I'm not surprised if that's the case. A bad PSU will create crashes, BSODs, OCP trips... Did you try Endwalker again?


----------



## KedarWolf

J7SC said:


> I'll have to try that newer DLSS version ! As to benching CP2077, yeah, an in-game benchmark would be nice. However, one can still bench reasonably accurately when loading up a saved scene and then hit 'Esc' at the same subsequent landmark.
> 
> 
> 
> It is worth keeping in mind that Port Royal is very sensitive to system memory latency (in addition to bandwidth). Well before r_BAR was implemented, I did a lot of PortRoyal on my 2950X Threadripper / 2x 2080Ti. While that CPU wasn't an ideal 'bencher', it had a numa - uma switch acting on memory that made a huge difference - allowed me to lay down some good scores in 3D HoF back in the day per below against much faster CPUs.
> 
> With r_BAR relating to system memory, I'm not surprised at the gains r_BAR delivers in Port Royal.
> View attachment 2543846


I'm going to manually download the DLSS files and copy them over. I've known of DLSS Swapper for a long time now but my Cyberpunk is on GoG and I don't think it supports GoG yet.


----------



## sew333

So I ran CP 2077 again, went to the moment when you walk out of the hotel at the beginning, and looked at the street. I get 70-73 fps now. I think it's fine now. I am on RT Ultra, all maxed, DLSS Quality, 1440p.

Boost on the 3090 is 1830-1845 MHz, 63C. CPU clock on the 10850K: 4800 MHz, 65-72C


----------



## GreatestChase

I made some progress today with PR. I think I'll be able to break 16k this weekend if I can keep the temps low enough.

I also think I've noticed a pattern regarding my computer getting stuck at the BIOS splash screen: it primarily seems to happen when the system is cold. I don't have a good way to test that hypothesis, but while benching today I noticed that I have a lot more trouble getting the system to boot when I've got the temps low. I don't know if it could be my storage, but I did install another M.2 drive that I had lying around to set up a dual-boot system, so I can have just benching stuff on one and my regular stuff on the other. I was having the same problem when trying to boot into the Win 10 drive as well when the system was cold. Maybe the DDR5 doesn't like the cold?

I'm still at a loss as to the source, but I think I may have at least figured out a factor. I sat my system in front of my window with a fan blowing at it and 38 F temps outside; when I started I was not having trouble booting, but as I let the system get colder, I began to have more frequent issues.









I scored 15 874 in Port Royal (Intel Core i7-12700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10)
www.3dmark.com


----------



## tps3443

GRABibus said:


> Be aware also that I will leave my room window open all night and beat your TS graphics score very soon 😃


You should consider a water chiller. Imagine the best temps possible, only now you have that as a daily 24/7 option.

You can lock in a water temp of 39F and it'll hold it with just the compressor kicking on for a few seconds every few minutes. It's really not that loud either.


I'm surprised more people have not gone this route.


It's about making the extreme a daily reality. And a water chiller is the coolest piece of tech I have owned in my 18 years of PC overclocking.


----------



## GreatestChase

tps3443 said:


> You should consider a waterchiller. Imagine the best temps possible, only you now have that as a daily 24/7 option.


Which chiller do you use? You piqued my interest with your talk of yours. Also, any pics of your setup? Particularly how you have the chiller plumbed into the rest of your system.


----------



## tps3443

GreatestChase said:


> Which chiller do you use? You piqued my interest with your talk of yours. Also, any pics of your setup? Particularly how you have the chiller plumbed into the rest of your system.


You can take these benchmark numbers and put them down in real-world applications like games.

I use an AACH50. It's identical to the Hailea HC500. These units have a massive coil designed to shed a lot of heat, and they can sustain around 800 watts of heat output.


The chiller does nothing more than chill water: no pumps, nothing. So if I unplug the chiller, my water loop works as normal, and water just flows freely through it either way. But I do leave it on 24/7, set to a specific temp, and it kicks on when needed.


----------



## The Pook

My card is a 3080 Ti, but I'm posting here because it seems to be the most active 30XX thread... has anyone had issues with a Seasonic Focus Plus PSU with their 3090/3080? 

Got my 3080 Ti, and stock, without an OC, it insta-crashes on any heavy GPU load despite not going over 600W (measured with a Kill-A-Watt) on a 750W unit. 

Found this post on reddit; just curious if anyone here has had a similar problem.


https://www.reddit.com/r/nvidia/comments/k6axtc


----------



## GreatestChase

tps3443 said:


> You can take these benchmark numbers and put them down in real-world applications like games.
> 
> I use an AACH50. It's identical to the Hailea HC500. These units have a massive coil designed to shed a lot of heat, and they can sustain around 800 watts of heat output.
> 
> 
> The chiller does nothing more than chill water: no pumps, nothing. So if I unplug the chiller, my water loop works as normal, and water just flows freely through it either way. But I do leave it on 24/7, set to a specific temp, and it kicks on when needed.


Thanks, I appreciate the pics. Was your second D5 necessary for the chiller, or would one D5 suffice? I currently have a flow rate of around 220 l/h, but it looks like the chiller is spec'd for 3000-6000 l/h.


----------



## Nizzen

The Pook said:


> My card is a 3080 Ti, but I'm posting here because it seems to be the most active 30XX thread... has anyone had issues with a Seasonic Focus Plus PSU with their 3090/3080?
> 
> Got my 3080 Ti, and stock, without an OC, it insta-crashes on any heavy GPU load despite not going over 600W (measured with a Kill-A-Watt) on a 750W unit.
> 
> Found this post on reddit; just curious if anyone here has had a similar problem.
> 
> https://www.reddit.com/r/nvidia/comments/k6axtc


Imagine buying a midrange PSU for overclocking high-end hardware 

It's like putting $40 tires on a Ferrari....
When I played Battlefield 1 on a 10900K and 3090, total draw from the wall peaked at 850W. I used a 1300W PSU.

ALWAYS have an overkill PSU. Rule no. 1.


----------



## The Pook

Nizzen said:


> Imagine buying a midrange PSU for overclocking high-end hardware


Imagine entering a pissing contest over PSU quality and power draw. 

I'm drawing less than 600W and asking about issues with a specific model of PSU, though, so that's not super useful information, unfortunately.


----------



## ManniX-ITA

The Pook said:


> I'm drawing less than 600W and asking about issues with a specific model PSU, though, so not super useful information unfortunately.


Most of the problematic PSUs tripping with 30x0 cards are Seasonic.
They have very sensitive and conservative OCP protection.
I would double-check your AC meter... 600W with the config in your sig is suspiciously low.
The problem with the NVIDIA cards is the transients: short bursts of 600W for the GPU alone.
You can't see them on the AC meter.
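A toy illustration (all numbers made up) of why a wall meter that averages over its roughly one-second refresh can look fine while the PSU's OCP still sees 600W spikes:

```python
# Hypothetical 1-second window: steady 400 W draw with 100 ms worth of
# 600 W transient spikes. The averaging meter reports ~420 W, while the
# OCP comparator reacts to the 600 W peak it actually sees.
steady_w, spike_w = 400.0, 600.0
spike_ms = 100  # assumed spike time per 1000 ms window

avg_w = (steady_w * (1000 - spike_ms) + spike_w * spike_ms) / 1000
print(f"meter average ~{avg_w:.0f} W, transient peak {spike_w:.0f} W")
```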


----------



## The Pook

ManniX-ITA said:


> Most of the problematic PSUs tripping with 30x0 cards are Seasonic.
> They have very sensitive and conservative OCP protection.
> I would double-check your AC meter... 600W with the config in your sig is suspiciously low.
> The problem with the NVIDIA cards is the transients: short bursts of 600W for the GPU alone.
> You can't see them on the AC meter.


I wasn't at 100% load on the CPU; the only tests I really did were RDR2 (crashed on the loading screen) and Unigine Heaven. The CPU was only at _maybe_ 20% load in either case. I've got another Kill-A-Watt hooked up to my HTPC I could sanity-check with, but I doubt it'll show anything different. 

I attempted to run Cinebench alongside Unigine, but unless I set a really low power limit it shut off before I could get any useful numbers.

I just hadn't heard anyone else complain about Seasonic PSUs with the 3000 series, but I guess it's a well-known issue 😢


----------



## sew333

ManniX-ITA said:


> Most of the problematic PSUs tripping with 30x0 cards are Seasonic.
> They have very sensitive and conservative OCP protection.
> I would double-check your AC meter... 600W with the config in your sig is suspiciously low.
> The problem with the NVIDIA cards is the transients: short bursts of 600W for the GPU alone.
> You can't see them on the AC meter.


I had 1 shutdown during 7 months, and never another. It happened when the system had been idle for 3-4 days, and then during the advertisement intro part, at 30 fps, in Metro Exodus.
Or maybe it was a brownout; that's why it happened once. Because I saw the monitor flicker once about 10 seconds after the shutdown, heh. But other devices like the laptop and router didn't. Strange, huh?
If you ask me why I don't change to 1000W now: I disagree, because I can live with that single shutdown. With a new PSU I could have new and more problems, hehe.

Dunno, maybe some fluke with the power in my house at that moment. Hard to say.


But overall, 0 shutdowns in all games.

I have a Seasonic TX-850 Ultra Titanium with an RTX 3090 Gaming from Gigabyte. No issues in any games. But yes, I agree, many people complain about Seasonic and RTX 3xxx on Reddit.


PS:
Sorry for my bad English, I am from Poland


----------



## pewpewlazer

Seasonic OCP has been an issue since the Vega 56/64 era. I had issues with my 2080 Ti shutting off a 1000W Seasonic PSU. With Ampere, the problems are more widespread than ever.

Get yourself a nice Super Flower based PSU and don't look back.


----------



## tps3443

I figured out the weird anomaly with the Time Spy physics test: as I slowly reduced the cache OC, the physics score slowly increased.

For some weird reason my 11900K at 5.5 GHz all-core with 4.4 GHz cache scores the best.


----------



## Panchovix

GreatestChase said:


> Quick question, I've noticed many people are using different graphics drivers than I am. I'm using 496.76. Is there a particular driver set that is the best for PR?


472.12 and 466.13, I think? They give slightly better scores in Time Spy and Port Royal, while 496.76 is pretty good for Fire Strike (in my experience at least, with the RTX 3080)


----------



## tps3443

GreatestChase said:


> Thanks, I appreciate the pics. Was your second D5 necessary for the chiller or would one D5 suffice? I currently have a flow rate of around 220 l/h, but it looks like the chiller is spec'd for 3000-6000 l/h.


Honestly, I think you could get by with just 1 really good D5 pump at 4,800 RPM. The chiller is not restrictive at all; it's like blowing air through a toilet-paper roll, absolutely zero restriction. But I do think you'd be better off with two pumps. I mostly run two because of my CPU block, an Optimus Signature V2. This CPU block is so restrictive it acts like a filter; if your coolant is not always perfect, it will clog up inside the cold-plate fins.

Also, I'd recommend setting the chiller further away, so the longer tubing would benefit from a 2nd or even a 3rd D5, depending on how far it is from your setup.

With a water chiller, your radiators and fans are useless, so you can leave the fans unplugged all the time.


----------



## GreatestChase

tps3443 said:


> Honestly, I think you could get by with just 1 really good D5 pump at 4,800 RPM. The chiller is not restrictive at all; it's like blowing air through a toilet-paper roll, absolutely zero restriction. But I do think you'd be better off with two pumps. I mostly run two because of my CPU block, an Optimus Signature V2. This CPU block is so restrictive it acts like a filter; if your coolant is not always perfect, it will clog up inside the cold-plate fins.
> 
> Also, I'd recommend setting the chiller further away, so the longer tubing would benefit from a 2nd or even a 3rd D5, depending on how far it is from your setup.
> 
> With a water chiller, your radiators and fans are useless, so you can leave the fans unplugged all the time.


Gotcha. I have a Quadro controlling my loop now, so if I were to get a chiller I could just turn the fans off while it's running. Do you have any comments on Penguin Chillers? From what I can find, the Active Aqua chillers are either marked up or on sale on websites that I don't know that I would trust lol. They offer a 1/2 hp water chiller capable of 5000 BTU. https://www.penguinchillers.com/product/1-2-hp-water-chiller-by-penguin-chillers/


----------



## kryptonfly

sew333 said:


> So i run again CB 2077, go to moment when you go out from hotel at beggining and look at street. Get now 70-73fps.I think its fine now. I am on *RT ULTRA*, ALL maxed,DLSS QUALITY ,1440P
> 
> *Boost on 3090 is 1830-1845mhz*, 63C. Cpu clock 10850K, 4800mhz. 65-72C


It's what I got with my Turbo blower at 70°C 1 year ago. That could explain a lot why your fps are low, your effective clock is probably below 1800mhz, guys here have 2130mhz effective clock, +330mhz on the GPU. Cyberpunk 2077 is one of the most painful games for the GPU. (actually the most from my experience)


----------



## yzonker

Yep, my Seasonic GX1000 had the same problem while hitting it with the KP 1kw bios, and not at particularly high power limits. It would trip with the power limit set at just 500w. Nowhere near 1000w total power draw. Replaced it with a Rm1000x and no shutdowns since.


----------



## yzonker

Looks like I'm tapped out on benchmark gains. Tried the stripped down Win10 but didn't gain anything significant in PR or TS. That fresh Win11 image worked fairly impressively. 

Final PR, 









I scored 16 145 in Port Royal


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}




www.3dmark.com


----------



## yzonker

That chiller does look tempting though. The furnace room in my house is only about 10 feet from my machine. Could put it in there. Just need enough pump to handle 20 feet or so of line.


----------



## truehighroller1

GRABibus said:


> View attachment 2543825
> 
> 
> 
> Keep effort, you have to beat me with my 15816 @ 54 degrees average 😉
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 816 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com



Got you finally lol.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)


----------



## tps3443

GreatestChase said:


> Gotcha. I have a quadro controlling my loop now, so if I were to get a chiller I could just turn them off while it would be on. Do you have any comments on penguin chillers? From what I can find, the activeaqua chillers are either marked up or on sale on websites that I don't know that I would trust lol. They offer a 1/2 hp water chiller capable of 5000 btu. https://www.penguinchillers.com/product/1-2-hp-water-chiller-by-penguin-chillers/



That brand is just fine. You should be able to find a Active Aqua or Hydro farm 1/2HP for $590-$700 ish range if you shop properly. 


There is a pretty banged up and well used 1/2HP Penguin chiller on eBay for $500 OBO. So, that’s also an option too.


----------



## truehighroller1

New record for me in time spy.

NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com) 

I'm figuring out how to push my cpu more and it's bumping my score up now that I have my gpu set to the sweet spot.


----------



## ManniX-ITA

The Pook said:


> I wasn't at 100% load on the CPU, the only tests I really did were RDR2 (crashed on the loading screen) and Unigine Heaven. CPU was only at _maybe_ 20% load in either case. I've got another Kill-A-Watt hooked up to my HTPC I could sanity check with but I doubt it'll show anything different.
> 
> I attempted to run Cinbench alongside Unigine but unless I set a really low power limit it shut off before I could get any useful numbers.
> 
> I just haven't heard anyone else complain about Seagate PSUs with the 3000 series, but I guess it's a well known issue 😢


Thought it was under full load. 
Your CPU is power hungry; you need something more than 750W. 
And not Seasonic...

With both CPU and GPU OC with new World my AC draw is 750W and more.
Just 3DM Wildlife Stress draws 770W.

Even if it's not the best PSU around, the Corsair RM1000 does the job and it's cheap.
Didn't see anyone complaining about it.


----------



## truehighroller1

ManniX-ITA said:


> Thought it was under full load.
> Your CPU is power hungry; you need something more than 750W.
> And not Seasonic...
> 
> With both CPU and GPU OC with new World my AC draw is 750W and more.
> Just 3DM Wildlife Stress draws 770W.
> 
> Even if it's not the best PSU around, the Corsair RM1000 does the job and it's cheap.
> Didn't see anyone complaining about it.



I have the Corsair 1600i and I need it for what I'm doing benching tonight. I'm probably pulling 950 watts to 1125 is my guess. I have a meter plug thingy somewhere.


----------



## ManniX-ITA

truehighroller1 said:


> I have the Corsair 1600i and I need it for what I'm doing benching tonight. I'm probably pulling 950 watts to 1125 is my guess. I have a meter plug thingy somewhere.


It's not only about total power draw.
The quick transients peaks are usually the culprits tripping OCP protections.










If your GPU OC is drawing 500W you better consider the PSU has to handle shorts peaks of 700-800W. Just for the GPU.
With shunt-mods drawing 600W and more could be easily 1000W.
That's why if you really want to push OC better to go for a PSU 1200W and above...


----------



## sew333

kryptonfly said:


> It's what I got with my Turbo blower at 70°C 1 year ago. That could explain a lot why your fps are low, your effective clock is probably below 1800mhz, guys here have 2130mhz effective clock, +330mhz on the GPU. Cyberpunk 2077 is one of the most painful games for the GPU. (actually the most from my experience)


For me 70fps its enough.


----------



## ManniX-ITA

sew333 said:


> For me 70fps its enough.


If you are not OCing the GPU that's perfectly fine; same for my Strix which is still on Air.
With GPU OC it's 80 fps and clocks are between 1935-1965 MHz.
Just to use it as a reference to check if everything is properly set.


----------



## J7SC

ManniX-ITA said:


> It's not only about total power draw.
> The quick transients peaks are usually the culprits tripping OCP protections.
> 
> View attachment 2543998
> )
> 
> If your GPU OC is drawing 500W you better consider the PSU has to handle shorts peaks of 700-800W. Just for the GPU.
> With shunt-mods drawing 600W and more could be easily 1000W.
> That's why if you really want to push OC better to go for a PSU 1200W and above...


With CPUs and GPUs that have a pronounced boost algorithm, the transient spiking is virtually guaranteed (in other words, stock up on good strong PSUs given what's coming down the pipe in CPUs and GPUs).

I typically run 1300W Plat. PSUs...a quick look at a PSU's efficiency curve also suggests that it is a good thing to have plenty of headroom; they also tend to last longer.


----------



## ManniX-ITA

J7SC said:


> stock up on good strong PSUs


Holy words 

Still remember that the 180€ I paid for the 1300W Supernova were a bit too much...
Today a good one costs almost double.
Heck the AX1600i is around 600€, unbelievable.
If you see a good offer, catch it.


----------



## truehighroller1

ManniX-ITA said:


> Holy words
> 
> Still remember that the 180€ I paid for the 1300W Supernova were a bit too much...
> Today a good one costs almost double.
> Heck the AX1600i is around 600€, unbelievable.
> If you see a good offer, catch it.



I bought mine before everything sky rocketed. I think I paid $450 at the time for it from Microcenter. Hell I got my GPU for $1275, god only knows what the suprim x 3090 cost now.


----------



## Panchovix

yzonker said:


> Looks like I'm tapped out on benchmark gains. Tried the stripped down Win10 but didn't gain anything significant in PR or TS. That fresh Win11 image worked fairly impressively.
> 
> Final PR,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 145 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


That's pretty nice, to know that W11 can give good results on Ryzen 5000; did you just install W11 and ran the benchmarks?


----------



## yzonker

Panchovix said:


> That's pretty nice, to know that W11 can give good results on Ryzen 5000; did you just install W11 and ran the benchmarks?


Yes, I just installed the Win11 Pro image from Microsoft along with 3DMark, Afterburner, drivers. 

I struggled to get back to that number too with the Win10 image. I was really starting to think Win11 was faster, but I think temps had dropped just enough to increase effective clocks slightly. I ended up moving to a couple of points down one bin to get the score you quoted. 

And same story for Time Spy which is the more sensitive to system performance (graphics score). 









I scored 21 146 in Time Spy


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}




www.3dmark.com





Small difference, but Win11 technically won, 









Result







www.3dmark.com


----------



## yzonker

tps3443 said:


> That brand is just fine. You should be able to find a Active Aqua or Hydro farm 1/2HP for $590-$700 ish range if you shop properly.
> 
> 
> There is a pretty banged up and well used 1/2HP Penguin chiller on eBay for $500 OBO. So, that’s also an option too.


The model you have appears to be 1/2 HP too. Has is that for capacity? More than enough, just enough? What minimum coolant temp can it reach while pumping 600-700w in to the loop?


----------



## The Pook

ManniX-ITA said:


> Your CPU is power hungry; you need something more than 750W.
> And not Seasonic...
> 
> With both CPU and GPU OC with new World my AC draw is 750W and more.
> Just 3DM Wildlife Stress draws 770W.


I believe you guys that's a PSU issue but not convinced it's a "750W isn't enough" issue, lol. I think you guys are REALLY overestimating 10900K CPU power draw when *gaming.*

at a higher voltage than me (5.2 @ 1.33v vs 5.1 @ 1.26v) on a 100% CPU load test that uses AVX and AVX2 = 316.8W










bought a Super Flower Leadex Titanium 1000W in any case ¯\_(ツ)_/¯


----------



## ManniX-ITA

The Pook said:


> I believe you guys that's a PSU issue but not convinced it's a "750W isn't enough" issue, lol. I think you guys are REALLY overestimating 10900K CPU power draw when *gaming.*


You need to consider the quick transients 
But even on power draw with a 750W you are really borderline on the maximum amperage.
When I say it's power hungry I mean compared to my 5950X.
On average a 10900K while gaming it's 50-100W more.

This is a typical peak when I'm gaming:










And this smart plug is averaging the draw every 10 seconds.
So in those minutes at 750W it was probably peaking at 800W.

If you consider the AC-DC loss that's a 750W DC draw and you would need more than me to get both the CPU and GPU OCed.



The Pook said:


> bought a Super Flower Leadex Titanium 1000W in any case


Good choice 💪


----------



## tps3443

Anyone after a really high quality PSU should take a look at the Evga 1300G+ this unit uses the same big C19 power cable as the EVGA 1600 watt units. And it is in a lineup of three new Evga PSU’s the 1300/1600/2000. They are all the same size and weight, so I have no doubt this unit can pull far above its rating reliably. (In the last picture: Look at the picture of the power cable compared to my prior Seasonic 1200 Prime Platinum unit)


No power shutdown issues at all here! This unit is a BEAST!! Especially for only $200 USD brand new.


----------



## GreatestChase

tps3443 said:


> Anyone after a really high quality PSU should take a look at the Evga 1300G+ this unit uses the same big C19 power cable as the EVGA 1600 watt units. And it is in a lineup of three new Evga PSU’s the 1300/1600/2000. They are all the same size and weight, so I have no doubt this unit can pull far above its rating reliably. (In the last picture: Look at the picture of the power cable compared to my prior Seasonic 1200 Prime Platinum unit)


Completely agree. I have one of the G+ units, one of the P+ units, and one of the 1600 W G+ units.


----------



## tps3443

GreatestChase said:


> Completely agree. I have one of the G+ units, one of the P+ units, and one of the 1600 W G+ units.


Yeah, it’s crazy overbuilt. I pull a lot of power through my setup. Honestly, I should have grabbed an Evga psu earlier on. No more power shutdowns for me!


----------



## ManniX-ITA

tps3443 said:


> Yeah, it’s crazy overbuilt. I pull a lot of power through my setup. Honestly, I should have grabbed an Evga psu earlier on. No more power shutdowns for me!


Got my G2 almost nine years ago and it's still just perfect like the first day.
Have only a couple of complaints.
First that any of the custom cable EVGA has in its catalogue has ever been available AFAIK.
Second that I can't find a reason to upgrade to anything, it's definitive


----------



## J7SC

ManniX-ITA said:


> Got my G2 almost nine years ago and it's still just perfect like the first day.
> Have only a couple of complaints.
> First that any of the custom cable EVGA has in its catalogue has ever been available AFAIK.
> *Second that I can't find a reason to upgrade to anything*, it's definitive


...how's this for a reason > EU 230V version


----------



## GreatestChase

J7SC said:


> ...how's this for a reason > EU 230V version


I was planning on buying a 2000 W unit when I bought my 1600 W, but a lot of reviews I found complained of them randomly dying after a year or two. Not this one specifically though.


----------



## J7SC

GreatestChase said:


> I was planning on buying a 2000 W unit when I bought my 1600 W, but a lot of reviews I found complained of them randomly dying after a year or two. Not this one specifically though.


...2000W on 110V is a bit too odd anyway; then again, there's this 3450W dual plug model (no comment on quality rating)


----------



## GreatestChase

J7SC said:


> ...2000W on 110V is a bit too odd anyway; then again, there's this 3450W dual plug model (no comment on quality rating)


It was going to be run on 240 V. Idk how the m word is handled around here, but I do that as well. I have all my PSUs for that running on 240V.


----------



## ManniX-ITA

J7SC said:


> ...how's this for a reason > EU 230V version


Oh it's so lovely and gorgeous...
But the reason for not upgrading is that I don't really need more than 1300W.
Even if the 4090 will draw 150-200W more.
My 5950X barely consumes more than the previous i4770K.
Plus for my next rig with the TECs I'm building a dedicated Core case just for the PSUs.
I have another 750W ATX and a 2200W 30V inside there 
Guess 4000W are more than enough...


----------



## J7SC

ManniX-ITA said:


> Oh it's so lovely and gorgeous...
> But the reason for not upgrading is that I don't really need more than 1300W.
> Even if the 4090 will draw 150-200W more.
> My 5950X barely consumes more than the previous i4770K.
> Plus for my next rig with the TECs I'm building a dedicated Core case just for the PSUs.
> I have another 750W ATX and a 2200W 30V inside there
> Guess 4000W are more than enough...


...need vs. want  ...in the bad old days of HWBot, I had three 1300W hooked together for multi-GPU benches, and sometimes, that still was not enough.

Re. 5950X, on my typical daily settings, it will hit about 230W CPU Package Power, plus 3x D5 and ~ 25 x 120mm fans, it all adds up, never mind spikes even before custom-bios 3090 but it doesn't add up to 2000 W, fortunately

FYI, Super Flower Leadex 2000 powering dual Xeon - quad 3090s w/ MoRa etc


----------



## Rulandu

Hi,
which is the best bios 2x8 with resbar? gigabyte gaming oc?
Ty


----------



## GRABibus

truehighroller1 said:


> Got you finally lol.
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)


I will get you also...Not yet, but... 









I scored 15 841 in Port Royal


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}




www.3dmark.com


----------



## tps3443

J7SC said:


> ...2000W on 110V is a bit too odd anyway; then again, there's this 3450W dual plug model (no comment on quality rating)
> View attachment 2544113


When I see something like this , I think
of small house fires haha lol. Maybe grab a fire extinguisher on Banggood too!

Maybe that’s not the case anymore with these no
name brand cheaper PSU’s. But, I remember from like 2003 to 2016 at least cheaper PSU’s were a no go. Not that the above PSU in questioned is cheap, but maybe compared to a more legitimate 3.5KW unit it’s cheap. “If there even is a legitimate brand 3.5KW PSU available”


----------



## ManniX-ITA

tps3443 said:


> Maybe that’s not the case anymore with these no
> name brand cheaper PSU’s. But, I remember from like 2003 to 2016 at least cheaper PSU’s were a no go.


There was never and never will be a time when a cheap no brand PSU will be good...
That period was worse cause of the capacitor plague.

Prices rose up first for waste & recycling normative, like car tires.
Then for soaring prices for raw materials, shortages, supply chains, etc etc etc

But there are still half decent branded PSU at good prices.
I bought an Enermax 750W as 2nd PSU and paid it around 60 €.
Still trillion times better than no brand 30-40 € units I see on Amazon.
That's dangerous stuff that can burst in sparkles a whole PC in a matter of seconds.
Worst thing you can do is to go cheap on the PSU.


----------



## kryptonfly

truehighroller1 said:


> Got you finally lol.
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)





truehighroller1 said:


> New record for me in time spy.
> 
> NVIDIA GeForce RTX 3090 video card benchmark result - Intel Core i9-12900K Processor,ASUSTeK COMPUTER INC. ROG STRIX Z690-A GAMING WIFI D4 (3dmark.com)
> 
> I'm figuring out how to push my cpu more and it's bumping my score up now that I have my gpu set to the sweet spot.


It's weird because PR is easier and less power hungry than TS. I'm at +650W approximately in TS and I don't want to go higher with 2*8 pins, PR : 2235Mhz locked @1.075v ~620W.
TS : 2205MHz locked @1.068v ~654W (I can pass TS at 2175Mhz locked at 1.018V and PR at 2205MHz locked at 1.018V too).
I scored 23 305 in Time Spy
I scored 16 023 in Port Royal

(something really weird I perf ~15900 when system is OC P5.3/E4.1/R4.5 4100-14-14-14-28-240 than just 1xP4.2/8xE3.7/R3.5 XMP 3600, I tried 4100C14 to XMP, ring 4.5 to 3.5 still less perf when OC, I just boost the same profil at 1xP5.4/8xE4.2/R3.5 still less perf ~100pts when OC... No clue why, remains voltages & powers...)
I scored 15 901 in Port Royal


----------



## J7SC

tps3443 said:


> When I see something like this , I think
> of small house fires haha lol. Maybe grab a fire extinguisher on Banggood too!
> 
> Maybe that’s not the case anymore with these no
> name brand cheaper PSU’s. But, I remember from like 2003 to 2016 at least cheaper PSU’s were a no go. Not that the above PSU in questioned is cheap, but maybe compared to a more legitimate 3.5KW unit it’s cheap. “If there even is a legitimate brand 3.5KW PSU available”





ManniX-ITA said:


> There was never and never will be a time when a cheap no brand PSU will be good...
> That period was worse cause of the capacitor plague.
> 
> Prices rose up first for waste & recycling normative, like car tires.
> Then for soaring prices for raw materials, shortages, supply chains, etc etc etc
> 
> But there are still half decent branded PSU at good prices.
> I bought an Enermax 750W as 2nd PSU and paid it around 60 €.
> Still trillion times better than no brand 30-40 € units I see on Amazon.
> That's dangerous stuff that can burst in sparkles a whole PC in a matter of seconds.
> Worst thing you can do is to go cheap on the PSU.


...just to be sure, that 3450W 'miner PSU' was meant as a humorous contribution...btw, they also sell $5 GPU aluminum w-blocks...

Per earlier posts, for really serious applications, I usually use PSUs with Super Flower or Delta innards


----------



## ManniX-ITA

J7SC said:


> ...just to be sure, that 3450W 'miner PSU' was meant as a humorous contribution...btw, they also sell $5 GPU aluminum w-blocks...


Obviously indeed, I enjoyed it


----------



## GreatestChase

Got a new PB in PR this morning. I think I'm going to have to get my liquid temps sub-zero to get any higher. The lowest I could get them this morning was about 4C at idle. During a run, my water to gpu delta is ~23-24 C. This run was at 2205 MHz locked at 1100 mV, but I would drop to 2190 MHz as soon as I began the run, and then around 27 C I would drop another bin down to 2175 MHz. Forgot to include it in the screenshot, but NVVDD was 1.2375, FBVDD stock, MSVDD 1.2, frequencies 800 and 700, and LLC at 6 and 5. I tried pushing it to 2220 MHz, but it would insta-crash when PR launched. 








I scored 15 983 in Port Royal


Intel Core i7-12700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}




www.3dmark.com















I possibly could've gotten my water temps a little lower if I had a better start to my benching session, but I was having boot issues again. I do believe that I have identified my problem there though. I believe that my issues are my boot drives. I have two m.2 drives for a dual boot system so that I can bench in one and dd the other. I noticed this morning though that when the drives got cold physically they would refuse to boot. Additionally, if I got booted and let the drive I was booted into get physically cold, the system would start to freeze up. During my session I used my heat gun to keep the drive warm and I did not have any issues, and when I kept the drive warm during a shut down it would boot up without an issue. Unfortunately this led to dumping heat into the system, and I eventually lost the cold ambient temps that I had. Have you guys heard of behavior like this before? It's a new one to me.


----------



## J7SC

GreatestChase said:


> Got a new PB in PR this morning. I think I'm going to have to get my liquid temps sub-zero to get any higher. The lowest I could get them this morning was about 4C at idle. During a run, my water to gpu delta is ~23-24 C. This run was at 2205 MHz locked at 1100 mV, but I would drop to 2190 MHz as soon as I began the run, and then around 27 C I would drop another bin down to 2175 MHz. Forgot to include it in the screenshot, but NVVDD was 1.2375, FBVDD stock, MSVDD 1.2, frequencies 800 and 700, and LLC at 6 and 5. I tried pushing it to 2220 MHz, but it would insta-crash when PR launched.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 983 in Port Royal
> 
> 
> Intel Core i7-12700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2544195
> 
> 
> 
> I possibly could've gotten my water temps a little lower if I had a better start to my benching session, but I was having boot issues again. I do believe that I have identified my problem there though. I believe that my issues are my boot drives. I have two m.2 drives for a dual boot system so that I can bench in one and dd the other. I noticed this morning though that when the drives got cold physically they would refuse to boot. Additionally, if I got booted and let the drive I was booted into get physically cold, the system would start to freeze up. During my session I used my heat gun to keep the drive warm and I did not have any issues, and when I kept the drive warm during a shut down it would boot up without an issue. Unfortunately this led to dumping heat into the system, and I eventually lost the cold ambient temps that I had. Have you guys heard of behavior like this before? It's a new one to me.


First of all: Congrats on your PR ! 

Also, in sub-zero benching, there's the dreaded 'cold bug' issue for CPU etc, but not sure if this is the reason for what you're running into w/ your m.2. Generally, for really cold (sub zero or near it) benching, you would want to use a Sata drive, not m.2. In addition, in the lower extreme temp ranges, your USB will get wonky as well which is why up to this day, apart from new server boards, many new top-line oc boards (Apex Tachyon etc) still come with a PS2 mouse / keyboard port


----------



## GreatestChase

J7SC said:


> First of all: Congrats on your PR !
> 
> Also, in sub-zero benching, there's the dreaded 'cold bug' issue for CPU etc, but not sure if this is the reason for what you're running into w/ your m.2. Generally, for really cold (sub zero or near it) benching, you would want to use a Sata drive, not m.2. In addition, in the lower extreme temp ranges, your USB will get wonky as well which is why up to this day, apart from new server boards, many new top-line oc boards (Apex Tachyon etc) still come with a PS2 mouse / keyboard port


I knew that USB interface can get wonky when cold, but I didn't realize that m.2 could as well. I haven't had any issues though at my current temps with my usb ports. Guess I need to grab another SATA drive for my benching OS. Only problem is that so far when I'm using dual boot, it still boots into my primary OS and then asks me which OS I want to boot into. And if that's too cold, it'll hang before it lets me pick. So I guess I would have to completely disconnect that drive when benching to avoid the cold issues. Also, I thought that my issue might be my DDR5 getting too cold, but it would boot fine when it was cold and my drive was warm.


----------



## geriatricpollywog

J7SC said:


> First of all: Congrats on your PR !
> 
> Also, in sub-zero benching, there's the dreaded 'cold bug' issue for CPU etc, but not sure if this is the reason for what you're running into w/ your m.2. Generally, for really cold (sub zero or near it) benching, you would want to use a Sata drive, not m.2. In addition, in the lower extreme temp ranges, your USB will get wonky as well which is why up to this day, apart from new server boards, many new top-line oc boards (Apex Tachyon etc) still come with a PS2 mouse / keyboard port


I had no cold bug issues with my Strix D4 with the whole computer outside at -14C. I have 3 m.2 drives and no SATA drives. The mouse and keyboard are both wireless USB. The Strix D4 has a PS/2 but I haven’t needed it. CPU idle temperature was -5C but load temperature was double digit positive. Maybe newer motherboards automatically adjust for cold bug or maybe it wasn’t cold enough.


----------



## tps3443

GreatestChase said:


> Got a new PB in PR this morning. I think I'm going to have to get my liquid temps sub-zero to get any higher. The lowest I could get them this morning was about 4C at idle. During a run, my water to gpu delta is ~23-24 C. This run was at 2205 MHz locked at 1100 mV, but I would drop to 2190 MHz as soon as I began the run, and then around 27 C I would drop another bin down to 2175 MHz. Forgot to include it in the screenshot, but NVVDD was 1.2375, FBVDD stock, MSVDD 1.2, frequencies 800 and 700, and LLC at 6 and 5. I tried pushing it to 2220 MHz, but it would insta-crash when PR launched.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 983 in Port Royal
> 
> 
> Intel Core i7-12700K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2544195
> 
> 
> 
> I possibly could've gotten my water temps a little lower if I had a better start to my benching session, but I was having boot issues again. I do believe that I have identified my problem there though. I believe that my issues are my boot drives. I have two m.2 drives for a dual boot system so that I can bench in one and dd the other. I noticed this morning though that when the drives got cold physically they would refuse to boot. Additionally, if I got booted and let the drive I was booted into get physically cold, the system would start to freeze up. During my session I used my heat gun to keep the drive warm and I did not have any issues, and when I kept the drive warm during a shut down it would boot up without an issue. Unfortunately this led to dumping heat into the system, and I eventually lost the cold ambient temps that I had. Have you guys heard of behavior like this before? It's a new one to me.


Water chiller all the way. 2C idle. Great score by the way!! I may get in to benching PR. It is 72F ambient.


----------



## GreatestChase

tps3443 said:


> Water chiller all the way. 2C idle. Great score by the way!! I may get in to benching PR. It is 72F ambient.
> 
> View attachment 2544200


Thanks, I'm taking it one component at a time lol. Also, disregarding condensation, what is the lowest that your chiller can go?


----------



## J7SC

GreatestChase said:


> I knew that USB interface can get wonky when cold, but I didn't realize that m.2 could as well. I haven't had any issues though at my current temps with my usb ports. Guess I need to grab another SATA drive for my benching OS. Only problem is that so far when I'm using dual boot, it still boots into my primary OS and then asks me which OS I want to boot into. And if that's too cold, it'll hang before it lets me pick. So I guess I would have to completely disconnect that drive when benching to avoid the cold issues. Also, I thought that my issue might be my DDR5 getting too cold, but it would boot fine when it was cold and my drive was warm.


...couldn't you just disable the m.2 in bios, and or change boot sequence ? I don't do sub-zero anymore so my m.2s never act up, but a nice WD Blue 1TB Sata 'bench drive' with a slimmed / stripped OS might be worth your while


----------



## GreatestChase

J7SC said:


> ...couldn't you just disable the m.2 in bios, and or change boot sequence ? I don't do sub-zero anymore so my m.2s never act up, but a nice WD Blue 1TB Sata 'bench drive' with a slimmed / stripped OS might be worth your while


Possibly, currently though with both boot drives being m.2, I only allows me to select my 970 drive as the "windows boot manager." It doesn't even show my Kingston drive in the BIOS. And yeah, I was thinking about grabbing another WD Blue. I currently have one that is purely for storage purposes, would just have to put it in the hot swap cage and it'd be good to go. Just don't know about if it would allow me to boot directly to it, or if I would still have to go through the m.2 first. Also, any good recommendations for slimming up the OS or where to get a slimmed up OS?


----------



## tps3443

GreatestChase said:


> Thanks, I'm taking it one component at a time lol. Also, disregarding condensation, what is the lowest that your chiller can go?


Right out of the gate they can get water down to 39F Before shutting off. However, you can very quickly set it to keep running regardless. So, that temp
will continue on dropping. You would need to mix up some special fluid though because 39F isn’t far from freezing already. I’m good with 39F, I don’t need anymore than that.

Condensation never been an issue either. Because the dew point is like 15F right now. Stay above the dew point and your good.


----------



## GreatestChase

tps3443 said:


> Right out of the gate they can get water down to 39F Before shutting off. However, you can very quickly set it to keep running regardless. So, that temp
> will continue on dropping. You would need to mix up some special fluid though because 39F isn’t far from freezing already.


Gotcha, and yeah, I figure that I would mix a 50/50 with glycol. From everything I've read that should be fine with acrylic.


----------



## J7SC

GreatestChase said:


> Possibly, currently though with both boot drives being m.2, I only allows me to select my 970 drive as the "windows boot manager." It doesn't even show my Kingston drive in the BIOS. And yeah, I was thinking about grabbing another WD Blue. I currently have one that is purely for storage purposes, would just have to put it in the hot swap cage and it'd be good to go. Just don't know about if it would allow me to boot directly to it, or if I would still have to go through the m.2 first. Also, any good recommendations for slimming up the OS or where to get a slimmed up OS?


There are a few Win 10 bench configs floating around in this very thread over the last 3 months or so...maybe use Google to find them (I didn't install them) with keywords that include this site and thread name plus "windows 10 bench"?

A quick comment on cooling liquid mixes for chillers...do make sure to check the compatibility of your loop - including O-rings etc. - with glycol and the like. It's been a while since I saw it, but apparently there's one type of RainX (orangey-brown rust colour?) that is OK to use.


----------



## yzonker

GreatestChase said:


> Possibly; currently though, with both boot drives being M.2, it only allows me to select my 970 drive as the "Windows Boot Manager." It doesn't even show my Kingston drive in the BIOS. And yeah, I was thinking about grabbing another WD Blue. I currently have one that is purely for storage purposes; I would just have to put it in the hot-swap cage and it'd be good to go. I just don't know whether it would let me boot directly to it, or if I would still have to go through the M.2 first. Also, any good recommendations for slimming up the OS, or where to get a slimmed-up OS?


You can change the default OS on that same screen where you choose one as it boots. Then it will load the default on startup.


----------



## yzonker

J7SC said:


> There are a few Win 10 bench configs floating around in this very thread over the last 3 months or so...maybe use Google to find them (I didn't install them) with keywords that include this site and thread name plus "windows 10 bench"?
> 
> A quick comment on cooling liquid mixes for chillers...do make sure to check the compatibility of your loop - including O-rings etc. - with glycol and the like. It's been a while since I saw it, but apparently there's one type of RainX (orangey-brown rust colour?) that is OK to use.


What about dissimilar metals?


----------



## GreatestChase

yzonker said:


> You can change the default OS on that same screen where you choose one as it boots. Then it will load the default on startup.


It'll load the one I choose as the default yes, but it is still using the 970 as the boot manager regardless for some reason.


----------



## yzonker

GreatestChase said:


> It'll load the one I choose as the default yes, but it is still using the 970 as the boot manager regardless for some reason.


Yea it would. You need to disable the 970 like I think @J7SC suggested, but do it before you install your bench OS; that way it won't find your daily OS install. Then swap the boot drive in the BIOS. I think that will work, anyway.


----------



## yzonker

And just FYI, I haven't had any issues with my machine with -5C air ducted to it. There's still some room-air mixing though, as the lowest my water temp got during the runs I've posted lately was about +5C.


----------



## J7SC

yzonker said:


> What about dissimilar metals?


...I try to avoid things like aluminum in a copper/nickel/brass setup anyhow at normal temps, but the fine print of the liquids (i.e. RainX) on the back of the bottle (or a link to their web site) should have a compatibility section anyway, including re: metals.

@GreatestChase ...this oldie might be worth watching (incl. re RainX)


----------



## yzonker

J7SC said:


> ...I try to avoid things like aluminum in a copper/nickel/brass setup anyhow at normal temps, but the fine-print of the liquids (ie. RainX) on the back of the bottle (or link to their web site) should have a compatibility section anyways, including re. metals.


Well I'm wondering how you can know what metals are in the flow path of the chiller and how to deal with that. I know some fluids are supposed to stop corrosion in the case of mixed metals, but as you say, I'd rather avoid that if possible.


----------



## GreatestChase

J7SC said:


> ...I try to avoid things like aluminum in a copper/nickel/brass setup anyhow at normal temps, but the fine-print of the liquids (ie. RainX) on the back of the bottle (or link to their web site) should have a compatibility section anyways, including re. metals.
> 
> @GreatestChase ...this oldie might be worth watching (incl. re RainX)


Was just watching that the other day lol.


----------



## yzonker

@tps3443 what fluid are you running with your chiller?


----------



## ManniX-ITA

GreatestChase said:


> It doesn't even show my Kingston drive in the BIOS.


You need to select the Kingston drive without the "Windows Boot Manager" label.
Once it has booted successfully, it will be added to the list with the label as well.


----------



## tps3443

yzonker said:


> @tps3443 what fluid are you running with your chiller?


Distilled water with just a biocide.


----------



## dansi

J7SC said:


> ...I try to avoid things like aluminum in a copper/nickel/brass setup anyhow at normal temps, but the fine-print of the liquids (ie. RainX) on the back of the bottle (or link to their web site) should have a compatibility section anyways, including re. metals.
> 
> @GreatestChase ...this oldie might be worth watching (incl. re RainX)


I can see EKWB doing a mini chiller?


----------



## tps3443

Water chillers are incredible. I wish I had gotten one sooner. You can use your existing custom loop and fans to bleed off any condensation.

I will admit, I just don't really see condensation at all. Now, it is winter time; I imagine summer will affect that some.


It's crazy how much condensation they have in that Linus Tech Tips video. The unit I have is much more powerful at 1/2 HP, and with the water temp locked at 39°F, I have zero condensation on my CPU block.


----------



## tps3443

Hey! I’m #1 with 11900K & 3090!!! That’s so cool.

This is not even my best either.


----------



## The Pook

GreatestChase said:


> Also, any good recommendations for slimming up the OS or where to get a slimmed up OS?


for what? general use or benchmarking? 

if the latter, look into either Windows 10 Ameliorated or Windows 8.1 Embedded Industry Pro. 

if the former, nothing _really_ that's going to do anything significant on high-end hardware.


----------



## GreatestChase

The Pook said:


> for what? general use or benchmarking?
> 
> if the latter, look into either Windows 10 Ameliorated or Windows 8.1 Embedded Industry Pro.
> 
> if the former, nothing _really_ that's going to do anything significant on high-end hardware.


It would be specifically for a benchmarking OS. I'll look into those. Thanks.


----------



## The Pook

GreatestChase said:


> It would be specifically for a benchmarking OS. I'll look into those. Thanks.


I use 8.1 Embedded Industry Pro as my HWBOT OS, but getting an ISO/key is iffy without an Azure account. "Windows 9" is just Windows 8.1 Embedded Industry Pro skinned to look/act like Windows 7; it makes the ISO part easier, but you'd still need a key. I think you can use it for 30/60/90 days before it shuts you out, though.


----------



## EarlZ

I've noticed a lot of YT vids that show undervolting by applying a -200 to -300 MHz core offset, then raising one of the points to a desired clock speed, then hitting apply. Is there any drawback to this method? None of them mentions cooling down the GPU before applying these curves. Is this no longer a thing with the RTX 3000 series, where changing the curve at high temperatures will yield an overshoot?


----------



## yzonker

EarlZ said:


> I've noticed a lot of YT vids that show undervolting by applying a -200 to -300 MHz core offset, then raising one of the points to a desired clock speed, then hitting apply. Is there any drawback to this method? None of them mentions cooling down the GPU before applying these curves. Is this no longer a thing with the RTX 3000 series, where changing the curve at high temperatures will yield an overshoot?


It's still best to adjust while the gpu is cooled down. A better method is to do the opposite though. Raise the offset to where you want it and then flatten the curve above your target voltage point. This preserves the original curve below that point and keeps effective clocks closer to requested.
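For what it's worth, the flatten-above-target method described above can be sketched in a few lines. This is only a conceptual model of an Afterburner-style voltage/frequency curve (the point values below are made up for illustration), not any real tool's API; the 15 MHz bin size is the usual clock granularity people cite for the 30-series.

```python
BIN = 15  # MHz per boost bin, the usual RTX 30-series granularity

def snap(mhz):
    """Round a clock down to the nearest 15 MHz bin."""
    return (mhz // BIN) * BIN

def undervolt_curve(curve, offset_mhz, target_mv):
    """Apply a positive offset to every (millivolt, MHz) point,
    then flatten the curve above target_mv so the card never
    requests more voltage than the target point."""
    shifted = [(mv, snap(mhz + offset_mhz)) for mv, mhz in curve]
    cap = max(mhz for mv, mhz in shifted if mv <= target_mv)
    return [(mv, min(mhz, cap)) for mv, mhz in shifted]

# Hypothetical stock points, for illustration only
stock = [(800, 1695), (850, 1770), (900, 1845), (950, 1920), (1000, 1995)]
flat = undervolt_curve(stock, 120, 850)
```

The key property is the one yzonker describes: everything below the target voltage keeps its (offset) stock shape, and everything above it is clamped to the target point's clock.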


----------



## yzonker

So I've always noticed quite a bit of lag in my water temp reading with the sensor in a spare port in my reservoir (when it was cooling, I would see a negative block delta). So I decided to move the sensor to the input side of my GPU block to get it in more direct flow and get what seems like the best reading for my block delta. It made a 1-2C difference. Now I'm showing [email protected] with the Heatkiller V. A little surprised it made that much difference, although it seemed like that part of the res was kind of cut off from the main flow, so maybe it's always slightly cooler? Not sure really.


----------



## EarlZ

yzonker said:


> It's still best to adjust while the gpu is cooled down. A better method is to do the opposite though. Raise the offset to where you want it and then flatten the curve above your target voltage point. This preserves the original curve below that point and keeps effective clocks closer to requested.


No need to touch the other voltage points or space them 15 MHz apart?


----------



## yzonker

EarlZ said:


> No need to touch the other voltage points or make them 15mhz apart?


Not sure what you mean. This is what I was referring to.






Rtx 3000 series undervolt discussion (hardforum.com):
"It's part of the guide in the OP. Once you set your curve, you'll want to set a power limit. I suggest between 80 and 90, but you should try it and see where you are happy with performance and temps. At 90 you should basically be as fast as stock while running cooler. If you want even less power..."


----------



## Falkentyne

The Pook said:


> I wasn't at 100% load on the CPU, the only tests I really did were RDR2 (crashed on the loading screen) and Unigine Heaven. CPU was only at _maybe_ 20% load in either case. I've got another Kill-A-Watt hooked up to my HTPC I could sanity check with but I doubt it'll show anything different.
> 
> I attempted to run Cinbench alongside Unigine but unless I set a really low power limit it shut off before I could get any useful numbers.
> 
> I just haven't heard anyone else complain about Seagate PSUs with the 3000 series, but I guess it's a well known issue 😢


Seasonic Focus Plus uses the old OCP circuit that was apparently used on the old SS-1050XP platform. This can trip on transient loads even though you are nowhere near the PSU limit.
RMA it for a Seasonic Focus GX or PX unit, or even better, a Prime TX / PX. (You should contact Seasonic or the retailer about this.)






OneSeasonic (seasonic.com)





They apparently have a "Prime PX-1300" unit out now but I don't know where it's being sold. The one shown on Newegg is labeled "PX-1300" but the pictures are of the OLDER non-PX (older Prime platform, i.e. the one with the sensitive OCP); the new PX-1300 does NOT say "Prime Platinum" on the PSU or the box, it says PX-1300.


----------



## PLATOON TEKK

As promised (my bad, it took ages): those with HOF 3090s should be able to use this to modify voltage.

HOF OC GOC v2021-3004 (easyupload)

This is the 2021 version of the Galax GOC voltage exe.









edit: use at own risk etc


----------



## J7SC

Falkentyne said:


> Seasonic Focus Plus uses the old OCP circuit that was apparently used on the old SS-1050XP platform. This can trip on transient loads even though you are nowhere near the PSU limit.
> RMA it for a Seasonic Focus GX or PX unit, or even better, a Prime TX / PX. (You should contact Seasonic or the retailer about this.)
> 
> OneSeasonic (seasonic.com)
> 
> They apparently have a "Prime PX-1300" unit out now but I don't know where it's being sold. The one shown on Newegg is labeled "PX-1300" but the pictures are of the OLDER non-PX (older Prime platform, i.e. the one with the sensitive OCP); the new PX-1300 does NOT say "Prime Platinum" on the PSU or the box, it says PX-1300.


...you mean this one ?


----------



## kryptonfly

geriatricpollywog said:


> I had no cold bug issues with my Strix D4 with the whole computer outside at -14C. I have 3 m.2 drives and no SATA drives. The mouse and keyboard are both wireless USB. *The Strix D4 has a PS/2 but I haven’t needed it.* CPU idle temperature was -5C but load temperature was double digit positive. Maybe newer motherboards automatically adjust for cold bug or maybe it wasn’t cold enough.


Maybe I'm wrong, but I don't see any PS/2 port. Are we talking about the same board?
https://rog.asus.com/fr/motherboards/rog-strix/rog-strix-z690-a-gaming-wifi-d4-model/gallery


----------



## GreatestChase

Managed to crawl a little further. Couldn't get it to run at 2190 without crashing this go round. So close to 16k but it just doesn't want to let me past lol.


----------



## EarlZ

yzonker said:


> Not sure what you mean. This is what I was referring to.
> 
> Rtx 3000 series undervolt discussion (hardforum.com):
> "It's part of the guide in the OP. Once you set your curve, you'll want to set a power limit. I suggest between 80 and 90..."


Thanks for the link, it answers my question. Though I notice my GPU will go up to 1875 MHz even though I have it at 1860 @ 0.850v, and the curve was set at 33C. I must have missed something; it's boosting past what I set it to.


----------



## J7SC

GreatestChase said:


> View attachment 2544316
> 
> Managed to crawl a little further. Couldn't get it to run at 2190 without crashing this go round. So close to 16k but it just doesn't want to let me past lol.


Have you tried a fresh boot/reboot, plus about 2 min for Windows start-up to settle, then doing a Port Royal run? Can be worth some extra fps. Obviously GPU temps should be nice and low when starting.


----------



## GreatestChase

J7SC said:


> Have you tried w/fresh boot / reboot plus about 2 min for Win start-up to settle, then do a PortRoyal ? Can be worth some extra fps. Obviously GPU temps should be nice and low when starting


I didn’t tonight, but this morning I did. I was able to run at 2205 this morning, but lost a couple boost bins to temps. With my system being so finicky when cold, I try not to reboot when cold if I don’t have to.


----------



## dansi

EarlZ said:


> I've noticed a lot of YT vids that show undervolting by applying a -200 to -300 MHz core offset, then raising one of the points to a desired clock speed, then hitting apply. Is there any drawback to this method? None of them mentions cooling down the GPU before applying these curves. Is this no longer a thing with the RTX 3000 series, where changing the curve at high temperatures will yield an overshoot?


From my own trials, it does not matter if the GPU is cooled or not.

When you apply offsets, you just need to note at what voltages you are applying them; you can ignore the frequency y-axis.

Say at 0.95v you apply +180: depending on your GPU core quality, it will try to boost up to +180 at 0.95v, up from whatever was set at the factory according to your GPU's ASIC quality.

Better cooling will only prevent you from losing clock bins.

I would not raise one point, but multiple points along the curve. First you need to trial different games and see what kind of load they take. Some games will throttle your GPU at full wattage, where voltage can only hover around 0.9v while power limited; so 0.9v is your GPU's lowest point, and you may want to add some offset there.

Other games can run your GPU at its highest clock speeds, so same thing: you can limit the high clock speeds at another offset point at 1v, to save some power and prevent high-clock instability.


----------



## geriatricpollywog

kryptonfly said:


> Maybe I'm wrong but I don't see any PS/2 port. Are we talking about the same board ?
> ROG STRIX Z690-A GAMING WIFI D4 | ROG Strix | Gaming Cartes mères｜ROG - Republic of Gamers｜ROG France


No you’re right. I was confusing it with my AsRock Z590 OC Formula.


----------



## J7SC

GreatestChase said:


> I didn’t tonight, but this morning I did. I was able to run at 2205 this morning, but lost a couple boost bins to temps. With my system being so finicky when cold, I try not to reboot when cold if I don’t have to.


...how about HWInfo - was it open during the actual run? If so, that costs just a bit - maybe try a run without it and any other monitoring software open.


----------



## GreatestChase

J7SC said:


> ...how about HWInfo - open during the actual run ? If so, that costs just a bit - may be try a run w/o it and any other monitoring software open.


I did have that and Afterburner open on my second monitor that's running off my iGPU, as well as cmd and Classified. I'll have to try closing those next time.


----------



## gfunkernaught

How do you guys handle PCIe connector heat? My backplate doesn't make contact with the solder points. Not that they're melting, I'm keeping the PL at 65%. Just wondering.


----------



## andrew149

Anyone running 2 3090s under water: what kind of temps are you getting mining ETH? I also have an i9.


----------



## dansi

gfunkernaught said:


> How do you guys handle PCIe connector heat? My backplate doesn't make contact with the solder points. Not that they're melting, I'm keeping the PL at 65%. Just wondering.


Someone added TG thermal putty on the connectors, IIRC.


----------



## andrew149

Guys, I have a Founders Edition (12-pin), watercooled, and I heard there's a new 390-watt BIOS I can flash? Where can I get this file?


----------



## J7SC

gfunkernaught said:


> How do you guys handle PCIe connector heat? My backplate doesn't make contact with the solder points. Not that they're melting, I'm keeping the PL at 65%. Just wondering.





dansi said:


> some one added TG thermal putty on connectors IIRC.


...I added thermal putty on the backside of the 3x8-pin connectors, as well as the front...on the back, the thermal putty makes contact with the metal backplate and its extra heatsink.


----------



## Falkentyne

J7SC said:


> ...you mean this one ?
> 
> View attachment 2544315


Yes, that's the good one.
The "old" version says "Prime Platinum" on the box instead of "Prime PX-(wattage)."

Also, a few people had the "OneSeasonic" GX, PX, and TX models and had them OCP-trip, and Seasonic determined those were defective. They got replaced under warranty and the new ones were flawless. I think it was one user on this thread months ago, and another on HardOCP (hardforum), who had to RMA theirs like that. (I remember one was a TX-850.)


----------



## Falkentyne

PLATOON TEKK said:


> As promised (my bad took ages); those with HOF 3090's should be able to use this to modify voltage.
> 
> HOF OC GOC v2021-3004 (easyupload)
> 
> This is the 2021 version of the galax GOC voltage exe.
> View attachment 2544304
> 
> 
> edit: use at own risk etc


Does the switching frequency/loadline work on any non Galax card?


----------



## ManniX-ITA

If someone didn't notice, NVIDIA released CUDA 11.6 with the 511.23 drivers:









CUDA Toolkit 11.7 Downloads (developer.nvidia.com): "Get the latest feature updates to NVIDIA's proprietary compute stack."





It seems GPGPU performance went up considerably with this driver.


----------



## ManniX-ITA

Falkentyne said:


> Does the switching frequency/loadline work on any non Galax card?


Nope, it doesn't.
I went through the effort of re-flashing an HOF BIOS on my Strix, even though I knew it wouldn't work.


----------



## J7SC

Falkentyne said:


> Yes that's the good one.
> The "Old" version says "Prime Platinum" on the box instead of "Prime PX-(wattage)."
> 
> Also, a few people had the "OneSeasonic" GX, PX, TX" models and had them OCP trip and Seasonic determined those were defective. They got replaced under warranty and the new ones were flawless. I think it was one user on this thread MONTHS ago, and another on HardOCP (hardforum) who had to RMA theirs like that. (I remember one was a TX-850).


Good, thanks. I just got this one a short while ago at Memory Express (Canada), on sale no less. It does duty in the 3950X / 6900XT-450W daily work slugger...the 5950X/3090-520W Strix gets the Antec HPC 1300 (Delta innards).


----------



## Nizzen

PLATOON TEKK said:


> As promised (my bad took ages); those with HOF 3090's should be able to use this to modify voltage.
> 
> HOF OC GOC v2021-3004 (easyupload)
> 
> This is the 2021 version of the galax GOC voltage exe.
> View attachment 2544304
> 
> 
> edit: use at own risk etc


Luckily I still have a 3090 HOF 😅


----------



## J7SC

ManniX-ITA said:


> Nope it doesn't.
> Went through the effort of re-flashing an HOF BIOS on my Strix, even if I knew it


On the topic of Strix custom bios, did you ever get the Strix XOC r_BAR (1kw nominal?) working ?


----------



## ManniX-ITA

J7SC said:


> On the topic of Strix custom bios, did you ever get the Strix XOC r_BAR (1kw nominal?) working ?


There's no ReBar version, sadly.
What was circulating as ReBar is the same 1000W BIOS without it.


----------



## J7SC

ManniX-ITA said:


> There's no ReBar version sadly.
> What was circulating as ReBar is the same 1000W without it.


Thanks - I never loaded it because I couldn't first open it with ABE008 (a minimum requirement for me before flashing). Have you tried any of the HoF ones (ie. ~ 500W) on your Strix ?


----------



## kryptonfly

Here are my last PR and Superposition runs:

I scored 16,077 in Port Royal (2250 MHz locked at 1.081v; I just gained 10 MHz effective clock over my previous PR at 2235 MHz)


Superposition 1080p extreme 2280 Mhz locked at 1.081v :



Superposition 4K optimized 2280 Mhz locked at 1.081v :


----------



## ManniX-ITA

J7SC said:


> Thanks - I never loaded it because I couldn't first open it with ABE008 (a minimum requirement for me before flashing). Have you tried any of the HoF ones (ie. ~ 500W) on your Strix ?


Tried it, but it's probably not the best with air cooling. The HOF 520W was not bad at all.
The problem is that with all the HOF BIOSes most DP ports don't work.
Since I have 2 monitors and 2 VR headsets, they are useless for me.


----------



## J7SC

ManniX-ITA said:


> Tried but probably not the best with air cooling. The HOF 520W was not bad at all.
> Problem is that with all HOF BIOS most DP ports doesn't work.
> Since I have 2 monitors and 2 VR headsets they are not useless for me.


...ok thanks  Did all the power readings work at least with the HoF bios on Strix ? I like the KPE 520W r_BAR (on my bios slot 2) but the lack of more or less accurate power readings is a bit of a downer.


----------



## ManniX-ITA

J7SC said:


> Did all the power readings work at least with the HoF bios on Strix


This I don't recall for sure but I think it was working fine.


----------



## tps3443

J7SC said:


> ...ok thanks  Did all the power readings work at least with the HoF bios on Strix ? I like the KPE 520W r_BAR (on my bios slot 2) but the lack of more or less accurate power readings is a bit of a downer.


The KP520 rbar bios is probably the best.


----------



## GreatestChase

tps3443 said:


> The KP520 rbar bios is probably the best.


It’s probably user error, but every time I try to use that one, as soon as it goes over 520 W it drops clocks down to like 1800 MHz. Also has wild fluctuations when it’s below the power limit if I use the classified tool or the dip switches.


----------



## truehighroller1

J7SC said:


> ...ok thanks  Did all the power readings work at least with the HoF bios on Strix ? I like the KPE 520W r_BAR (on my bios slot 2) but the lack of more or less accurate power readings is a bit of a downer.



So far the best one for me, performance-wise, has been the Asus Strix XOC 1kW BIOS, and I've tried all of them so far. Even though mine seems to limit itself to 550 watts with it, it's still the best performer for me, as I don't want to shunt mod.


----------



## tps3443

GreatestChase said:


> It’s probably user error, but every time I try to use that one, as soon as it goes over 520 W it drops clocks down to like 1800 MHz. Also has wild fluctuations when it’s below the power limit if I use the classified tool or the dip switches.


When you swap bios, always use DDU and then reinstall Nvidia drivers. 

Also, try the 520 watt Kingpin LN2 Hydro Copper Rebar bios. It works well.

When I flash a bios with NVflash, I usually DDU, then reboot.


----------



## tps3443

truehighroller1 said:


> So far the best one for me has been the asus striz xoc 1k bios performance wise and I've tried all of them so far.


The 3090 KP default BIOSes are all really good. Honestly, the card works great bone stock. As for overclocking and gaming with an overclock, the 3090 KP has a few good options.

BIOSes behave differently on different AIB cards, though.


----------



## GreatestChase

tps3443 said:


> When you swap bios, always use DDU and then reinstall Nvidia drivers.
> 
> Also, try the 520 watt Kingpin LN2 Hydro Copper Rebar bios. It works well.
> 
> When I flash a bios with NVflash, I usually DDU, then reboot.


Gotcha. I’ve also tried to flash the HC BIOS, but it gives me an error saying that it can’t flash it onto my card since it’s not the correct card, or something to that extent. And do you just DDU when you flash a BIOS, not when you just change between already-flashed BIOSes?


----------



## tps3443

GreatestChase said:


> Gotcha. I’ve also tried to flash the HC bios, but it gives me an error saying that it can’t flash it onto my card since it’s not the correct card or something to that extent. And do you just ddu when you flash a bios? Not when you just change between already flashed bios?


Flash the 3090KP HC bios, reboot your system, then after windows recognizes the new device, launch Evga PX1 and it will update your on board firmware on your 3090KP from Hybrid to Hydro Copper.

After this is done run DDU, then reinstall new Nvidia drivers.


Yes I run DDU and install new drivers anytime I change my bios.


----------



## GreatestChase

tps3443 said:


> Flash the 3090KP HC bios, reboot your system, then after windows recognizes the new device, launch Evga PX1 and it will update your on board firmware on your 3090KP from Hybrid to Hydro Copper.
> 
> After this is done run DDU, then reinstall new Nvidia drivers.
> 
> 
> Yes I run DDU and install new drivers anytime I change my bios.


When I try to flash any HC bios onto the card it gives me an error. I’m not at my pc at the moment so I can’t remember exactly what it says, but it’s to the extent that it won’t flash because the bios doesn’t match the card. Is there a way to force the flash? I’ve been using the method of dragging the .rom file onto the nvflash executable and that doesn’t work for any of the HC versions.


----------



## sew333

kryptonfly said:


> Excuse me but I already answered few pages ago, I told you to try Endwalker bench, if the pc shuts down wildly that means OCP is triggered. You can reproduce it everytime, and everytime it will trigger OCP. It's "common" with Seasonic PSU too sensitive, my Prime 750TX did exactly the same thing when I tried 1000W XOC bios on my Gigabyte Turbo 2*8 pins (same board than yours). Someone else here got same trouble and someone on the official techpowerup thread too.
> BUT you have a 850W and cpu stock, it should not trigger OCP in these conditions, so maybe PSU is too sensitive (RMA). You need at least 1000W to be mind free. Me too, I don't have to "flip switch", pc restarts by itself showing a "Power supply surge protection" advertising on booting (when activated in bios) just after logo (Asus X99, my new Asus Z690 doesn't have "surge protection" in bios settings so no advertising but I experienced 2 OCPs with my Antec Signature 1000W it reproduced exactly same behavior than my TX-750W so power surge protection is well applied by default in bios but "invisible").
> 
> You need to understand, it's the motherboard which firstly triggers OCP for good reasons through ATX 24-pins (mainly caused by 12V CPU and 12V pcie in same time) + 8-pins from GPU, so no "flip switch" needed, it's the motherboard which "flip switch". But when PSU really triggers OCP indeed you need to flip switch, when for example you start 15 fans at 5v and a pump at 12v PSU triggers OCP even with my 1000W PSU.
> You should try with a higher PSU (1000W minimum and not Seasonic) through Amazon for example and return it in case it did the same thing but I'm convinced it's a OCP through your motherboard caused by cpu+gpu fast load, a stronger PSU fix this matter.


Hey kryptonfly. Regarding my shutdown a week ago: no, I don't flip the switch on the back of the PSU.
But I found a topic here where a person with the same PSU had the OCP issue, and he said: "and I have to reset the PSU at the switch to get my system back."





Need replacement for Seasonic TX-850 with better OCP for 3090 (hardforum.com):
"Bottom line: this PSU (Seasonic Prime TX-850 Titanium) just can't handle the 3090. OCP keeps tripping randomly regardless of load and I have to reset the PSU at the switch to get my system back. This is a known issue with certain PSUs and the 3090. Should I go to a Seasonic Prime 1000 watt..."





So it depends on the motherboard, then, or what?


----------



## tps3443

GreatestChase said:


> When I try to flash any HC bios onto the card it gives me an error. I’m not at my pc at the moment so I can’t remember exactly what it says, but it’s to the extent that it won’t flash because the bios doesn’t match the card. Is there a way to force the flash? I’ve been using the method of dragging the .rom file onto the nvflash executable and that doesn’t work for any of the HC versions.


Drop the 3090 KP HC BIOS into the NVflash directory.

Then launch NVflash via command prompt and run the usual:

nvflash64.exe --protectoff
nvflash64.exe -6 xxxx.rom ("xxxx.rom" being whatever your BIOS file is named)


----------



## gfunkernaught

J7SC said:


> ...I added thermal putty on the backside of the 3x8 Pin connectors, as well as the front...on the back, the thermal putty makes contact with the metal back-plate and its extra heatsink


Problem is my backplate and front plate don't touch the power connector area at all. Even putty wouldn't fill that much of a gap. I tried 2mm pads but that was actually too much and caused uneven contact. It's not the PCB that's getting warm, but the pcie power cable connectors. Those are radiating more than the PCB area.


----------



## J7SC

gfunkernaught said:


> Problem is my backplate and front plate don't touch the power connector area at all. Even putty wouldn't fill that much of a gap. I tried 2mm pads but that was actually too much and caused uneven contact. It's not the PCB that's getting warm, but the pcie power cable connectors. Those are radiating more than the PCB area.


ah, ok...I have 16-gauge 3x8 cables connecting to the Strix; they barely get warm. Can you mount a 40mm or 80mm fan near the cables without screwing up the rest of the case airflow?


----------



## Roacoe717

Another cold morning, still trying to reach Legendary. ReBAR enabled, Strix 3090 XOC BIOS, +133 GPU / +900 mem. It seems no matter what voltage I set in MSI Afterburner, it won't go past 490W. Oh well. Also, for newcomers: this is air cooled.


----------



## rtgoad

Longtime lurker & first time poster here.

Started benching my KP HC again after barely breaking into the PR HoF last July. Incredibly pleased with the results, generally. I pushed voltages a little higher than I prefer, sans exotic cooling, with only a custom loop.

PR: 16102 (ambient 22c)


----------



## sew333

rtgoad said:


> Longtime lurker & first time poster here.
> 
> Started benching my KP HC again after barely breaking into PR HoF last July. Incredibly pleased w/ the results, generally. I pushed voltages a littler higher than I prefer, sans exotic cooling, w/ only a custom loop.
> 
> PR: 16102 (ambient 22c)
> 
> View attachment 2544540


and bum shutdown bro


----------



## dante`afk

does anyone know how to use shadowplay to record on a non-primary monitor?


----------



## gfunkernaught

J7SC said:


> ah, ok...I have 16 gauge 3x8 cables connecting to the Strix; barely get warm. Can you mount a 40mm or 80mm fan near the cables w/o screwing up the rest of the case airflow ?


I have no idea what gauge wires the corsair hx1200 has, and corsair won't tell me. Barely get warm at how many watts? I probably could mount a small fan right above the connectors, but it wouldn't be mounted, more like just sit there. The airflow from the front rad fans isn't powerful enough to really remove the heat from the power connectors, so the heat just radiates passively.


----------



## GRABibus

Today I launched 3DMark and got an update.

On my Kingpin Hybrid, I ran a test today with exactly the same settings as one week ago and I lost 500pts.

Nobody else?


----------



## yzonker

I haven't run it in a while. Too warm here now. Lol. Need to order that chiller...


----------



## kryptonfly

GRABibus said:


> Today I launched 3DMark and got an update.
> 
> On my Kingpin Hybrid, I ran a test today with exactly the same settings as one week ago and I lost 500pts.
> 
> Nobody else?


Weird, normally updates don't change scores. These are the last updates from Steam:
12/20/21
"_3DMark Windows 2.22.7334
This is a minor update. Benchmark scores are not affected.
*Improved*_

_The Home screen now recommends the best 3DMark benchmark tests for your graphics card, processor and storage device._
_The Benchmarks screen now shows the filter panel by default making it easier to find and select tests for your hardware_."
01/25/22
"_SystemInfo 5.46_

_Updated GPU detection module to improve compatibility with latest hardware._
_Updated CPUID module to improve compatibility with latest hardware and to fix incorrect memory clock speed indication on Intel Tiger Lake-based systems._
_Fixed an issue that could cause results from systems with AMD GPUs to be incorrectly flagged as invalid due to modified tessellation settings_."
3DMark on Steam

SystemInfo seems to be your trouble. My last run was before 25 January so I don't know, but you can try the original version from guru3d.com, maybe it would help; paste your key from 3DMark:

3DMark Download v2.22.7334 + Time Spy

I will test later and add feedback.


----------



## yzonker

Looks like PR scores about the same for me, certainly not anywhere near a 500pt loss. Just ran a quick test run that I know will always complete which I've done before with the same settings (+180/+850).











----------



## J7SC

...just noticed that NVIDIA released 511.23 WHQL drivers...any positive/neutral/negative feedback on those ?


----------



## GRABibus

Seems that I roughly recovered my scores this morning....
Yesterday, in fact, my KP pulled 50W to 60W less in PR than this morning for the same settings

I will check again when back home this evening.


----------



## GRABibus

J7SC said:


> ...just noticed that NVIDIA released 511.23 WHQL drivers...any positive/neutral/negative feedback on those ?


For PR, it didn't change my scores between the two releases.
511.23 runs fine in Vanguard and BFV for me.


----------



## elbramso

kryptonfly said:


> Here's my last PR and Superposition :
> 
> I scored 16 077 in Port Royal (2250 Mhz locked at 1.081v, I just gained 10 mhz effective clock from my previous PR at 2235 Mhz)
> 
> 
> Superposition 1080p extreme 2280 Mhz locked at 1.081v :
> 
> 
> 
> Superposition 4K optimized 2280 Mhz locked at 1.081v :


Good results and a really good card you got there!
Did your score go up with the latest driver?

_EDIT_ just read your statement above^^


----------



## kryptonfly

J7SC said:


> ...just noticed that NVIDIA released 511.23 WHQL drivers...any positive/neutral/negative feedback on those ?


The most impressive new feature is DLDSR, which is DLSS integrated into DSR. I tried it in some games and it's really awesome, but it doesn't seem to work in Cyberpunk 2077: I don't get the benefit of DLSS when I choose 2.25x in game, it's like normal scaling, maybe because of my 1080p monitor.










@elbramso : my scores were made with 511.23


----------



## kryptonfly

GRABibus said:


> Seems that I recovered my scores roughly this morning....
> Yesterday, in fact, my KP pulled 50W to 60W less in PR than this morning for the same settings
> 
> I will check again when back home this evening.


My score is normal. You can still use the original 3DMark version if you have problems, or re-download from Steam; I've already had trouble with the Steam version after so many crashes lol.


----------



## GRABibus

kryptonfly said:


> My score is normal, you can still use the original 3dmark version if you have problems or re-download from Steam, I already got troubles with Steam version after so many crashes lol.


If you use the former version, isn't it automatically updated?


----------



## kryptonfly

GRABibus said:


> If you use former version, aren’t they automatically updated ?


I don't think so; you need to update manually (maybe there's an option to update automatically, I'm not 100% sure, it was a few months ago...). 3DMark from Guru3D is the latest version, but SystemInfo was updated on 25 January, so you need to download it from the 3DMark site, or update 3DMark from Steam, which updates SystemInfo automatically (you already have it installed, so no problem). For now I'm using the Steam version.

SystemInfo in case you uninstall Steam version : SystemInfo support


----------



## KedarWolf

kryptonfly said:


> I don't think so, you need to update manually (maybe there's an option to update automatically, I'm not 100% sure, it was few months ago...), 3dmark from guru3d is the latest version but SystemInfo have been updated on 25 January so you need to download it from 3dmark site, or update 3dmark from Steam and you will have automatically SystemInfo updated (you already have it installed so no problem). For now I'm using Steam version.
> 
> SystemInfo in case you uninstall Steam version : SystemInfo support


I've found I get a bit higher scores with the standalone version rather than the Steam version. You can get your key from the Steam version and it'll work in the standalone. Not your Steam key; the key is there in the app settings somewhere. Google is your friend, I'm on my phone heading to work.


----------



## elbramso

kryptonfly said:


> The most impressive new stuff is DLDSR, it's DLSS integrated in DSR, I tried in some games and it's really awesome but it seems not working in Cyberpunk 2077, I don't have benefit of DLSS when I choose 2.25x in game, it's like normal scaling, maybe because of my 1080p monitor.
> 
> View attachment 2545021
> 
> 
> @elbramso : my scores have been made with 511.23


I should really learn to read


----------



## GRABibus

kryptonfly said:


> The most impressive new stuff is DLDSR, it's DLSS integrated in DSR, I tried in some games and it's really awesome but it seems not working in Cyberpunk 2077, I don't have benefit of DLSS when I choose 2.25x in game, it's like normal scaling, maybe because of my 1080p monitor.
> 
> View attachment 2545021
> 
> 
> @elbramso : my scores have been made with 511.23


Do you know if we need to enable DLSS in game in addition to this feature?


----------



## kryptonfly

elbramso said:


> I should really learn to read


Yep, you're not the only one here 


GRABibus said:


> Do you know if we need to enable DLSS in game in addition to this feature ?


No, it brings DLSS-style scaling to old games! But the best thing is that you can enable DLDSR + DLSS together in games too! Take a look: 





Unfortunately I can't enable DSR in God of War with my 1080p monitor, a bug. I have to increase the resolution in Windows to work around it. Same with Outriders and Kena: Bridge of Spirits; I'm limited to 1080p, but in other games DSR works as intended.


----------



## GRABibus

kryptonfly said:


> Yep, you're not the only one here
> No you can bring DLSS in old games ! But the best thing is you can enable DLDSR + DLSS in games too ! Take a look :
> 
> 
> 
> 
> 
> Unfortunately I can't enable DSR in God of War with my 1080p monitor, a bug. I have to increase resolution in windows to bypass this matter. Same with Outriders, Kena Bridge, I'm limited at 1080p but in others games DSR works as intended.


Tried it in Vanguard multiplayer:
=> DLSS Quality in game
=> DL scaling at 2.25x to play in "4K" on my 2560x1440 monitor
=> ReBAR forced for Vanguard (+5% to +10% fps)
=> All settings Ultra

Between 130 and 170fps, with a much nicer image than 2560x1440 with DLSS Quality alone, of course.

I had to play with the Classified tool to increase the power draw on my KP Hybrid in order not to be power limited with this new feature, which simulates 4K.
What is funny is that the CPU load decreases, then CPU power decreases, and my 5900X boosts much higher in game on all cores.

This feature is very nice !
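For reference, DSR/DLDSR factors are area multipliers, so the 2.25x setting above is 1.5x per axis; a quick sketch of the arithmetic (plain Python, nothing game-specific, just checking the "4K" claim):

```python
import math

# DSR/DLDSR factors multiply the pixel *count*, so the per-axis scale
# is the square root of the factor: 2.25x -> 1.5x per axis.
def dldsr_resolution(width: int, height: int, factor: float) -> tuple:
    scale = math.sqrt(factor)
    return round(width * scale), round(height * scale)

# 2.25x on a 2560x1440 monitor renders at 3840x2160 ("4K"):
print(dldsr_resolution(2560, 1440, 2.25))  # -> (3840, 2160)
```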


----------



## KedarWolf

I'm in Canada. Found Gelid Extreme quite cheap here. You need quite a few packages to do a waterblock and backplate, but at this price it won't break the bank. 





Gelid GP Extreme Thermal Pads, 80x40 (www.quietpc.com)





I think it's even cheaper than the Gelid official store, even with a discount code.

Does anyone have a Gelid discount code for the official store?


----------



## GreatestChase

It's supposed to be about -6 C here on Saturday morning, so I think I'm going to be moving my PC into my garage Friday night in prep for a benchmarking session Saturday morning. Added about 100 mL of Zerex G05 to my loop in prep which with my guesstimations should put it at around a 10 % mix (was only using DI water and primochill utopia prior). That should keep anything from freezing, and hopefully will allow me to get colder than I've been able to get previously with just opening my office window. Hopefully that'll let me break into the 16k mark. Thoughts or any considerations that I may be missing?
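The ~10% guesstimate above checks out if the loop holds roughly a litre; a quick back-of-the-envelope in Python (the ~1 L total loop volume is my assumption, not stated in the post):

```python
# Antifreeze concentration as a percentage of total loop volume.
def mix_percent(antifreeze_ml: float, total_loop_ml: float) -> float:
    return 100.0 * antifreeze_ml / total_loop_ml

# 100 mL of Zerex G05 in a loop holding ~1 L total:
print(round(mix_percent(100, 1000), 1))  # -> 10.0
```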


----------



## KedarWolf

KedarWolf said:


> I'm in Canada. Found Gelid Extreme quite cheap here. You need quite a few packages to do a waterblock and backplate, but at this price won't break the bank.
> 
> 
> 
> 
> 
> Gelid GP Extreme Thermal Pads, 80x40 (www.quietpc.com)
> 
> 
> 
> 
> 
> I think it's even cheaper than the Gelid official store, even with a discount code.
> 
> Does anyone have a Gelid discount code for the official store?


Welp, the Gelid Solutions official store is much cheaper. They have a sale, and shipping from the QuietPC store is like $70.


----------



## yzonker

GreatestChase said:


> It's supposed to be about -6 C here on Saturday morning, so I think I'm going to be moving my PC into my garage Friday night in prep for a benchmarking session Saturday morning. Added about 100 mL of Zerex G05 to my loop in prep which with my guesstimations should put it at around a 10 % mix (was only using DI water and primochill utopia prior). That should keep anything from freezing, and hopefully will allow me to get colder than I've been able to get previously with just opening my office window. Hopefully that'll let me break into the 16k mark. Thoughts or any considerations that I may be missing?


Wear warm clothes or buy some long cables. 🥶 

Rather than putting my machine outside, I just built a duct out of a cardboard box to get the outside air to the machine on my desk. Had a box fan at the window running at low speed to move air. This only got water temp down to +5 to 6C though. Incoming air was about -4C. I had a temp probe mounted in the duct.


----------



## GreatestChase

yzonker said:


> Wear warm clothes or buy some long cables. 🥶
> 
> Rather than putting my machine outside, I just built a duct out of a cardboard box to get the outside air to the machine on my desk. Had a box fan at the window running at low speed to move air. This only got water temp down to +5 to 6C though. Incoming air was about -4C. I had a temp probe mounted in the duct.


That’s about as low as I could get it with a similar method and temps. I wish I had a place where I could make use of long cables, but outside my office window are bushes, so I can’t really set it out there unfortunately. And they’d have to be really long cables to make use of them from the garage. Warm clothes it is I guess.


----------



## kryptonfly

GreatestChase said:


> It's supposed to be about -6 C here on Saturday morning, so I think I'm going to be moving my PC into my garage Friday night in prep for a benchmarking session Saturday morning. Added about 100 mL of Zerex G05 to my loop in prep which with my guesstimations should put it at around a 10 % mix (was only using DI water and primochill utopia prior). That should keep anything from freezing, and hopefully will allow me to get colder than I've been able to get previously with just opening my office window. Hopefully that'll let me break into the 16k mark. Thoughts or any considerations that I may be missing?


Hope you finally manage to break this 16k barrier 
As mentioned earlier on this page, you can test with the original version from guru3d.com, but you have to update SystemInfo from 3DMark, or through Steam updates if you already have the Steam version (automatic). You can copy/paste your key directly from the 3DMark website; it will help your score a little. However, I did bench with Steam version 2.22.7334 s64 (s for Steam) : Result

Try to optimize the NVIDIA settings: ShadowPlay disabled, ReBAR forced and enabled, hardware acceleration disabled, G-Sync disabled, nvidia-smi... a few tips.


----------



## GreatestChase

kryptonfly said:


> Hope you will succeed to finally break this 16k barrier
> As mentioned earlier on this page, you can test with original version from guru3d.com but you have to update SystemInfo from 3dmark or through Steam updates if you already have Steam version (automatically). You can copy/paste your key directly from 3dmark website, it will help a little your score. However, I did bench with Steam version 2.22.7334 s64 (s for Steam) : Result
> 
> Try to optimize Nvidia, shadowplay disabled, re-bar forced and enabled, hardware-accelerated disabled, g-sync disabled, nvidia-smi... Few tips.


Thanks for the tips. I think my barrier at this point has been temps. I can’t run the clock speeds that you guys above me on the leaderboard are running. My best run so far was 15,998 and that was at 2175 MHz. I was able to get one run in previously set at 2190, but it was only at 2190 for a couple seconds at the beginning of the run and then eventually dropped down to 2160 toward the end. I’m surprised that I’ve gotten as high on the leaderboard as I have because all the people around me are above 2200 MHz, and most above 2250 MHz.


----------



## yzonker

GreatestChase said:


> Thanks for the tips. I think my barrier at this point has been temps. I can’t run the clock speeds that you guys above me on the leaderboard are running. My best run so far was 15,998 and that was at 2175 MHz. I was able to get one run in previously set at 2190, but it was only at 2190 for a couple seconds at the beginning of the run and then eventually dropped down to 2160 toward the end. I’m surprised that I’ve gotten as high on the leaderboard as I have because all the people around me are above 2200 MHz, and most above 2250 MHz.


The average clock speed shown in the 3dmark result is misleading depending on how the VF curve was manipulated. Unfortunately to really compare, you have to have effective clocks from HWINFO. I was in the 2220-2225 range on the 16k+ runs I posted. My card is held back to some extent by the poor quality VRAM it has. My highest score came from only +1125 mem. 

Post a pic of the VF curve you are running.


----------



## kryptonfly

@GreatestChase : Yep, you already got great results, thanks to the memory I think; my VRAM is picky lol. One of the rare benches where VRAM matters (I think).


----------



## GreatestChase

yzonker said:


> The average clock speed shown in the 3dmark result is misleading depending on how the VF curve was manipulated. Unfortunately to really compare, you have to have effective clocks from HWINFO. I was in the 2220-2225 range on the 16k+ runs I posted. My card is held back to some extent by the poor quality VRAM it has. My highest score came from only +1125 mem.
> 
> Post a pic of the VF curve you are running.


I don’t have a pic at the moment, but I typically set the 1100 mV point +30 from my target clock speed. So for 2175 I set 1100 mV at 2205 MHz, and then for the 3 points below 1100 mV I set them 15 MHz below my target. Then I use nvidia-smi to request 2175 MHz. That way it locks my GPU core voltage at 1100 mV, and temps permitting that locks my core speed. Typically my effective clock is within 1-2 MHz of my requested clock speed.
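The recipe above can be sketched as plain arithmetic (the point names below are illustrative; the actual edits happen in Afterburner's curve editor, and the lock is requested with something like `nvidia-smi -lgc 2175,2175`):

```python
# Arithmetic behind the curve setup described above: the 1100 mV point
# sits 30 MHz above the target, the three points below it sit 15 MHz
# under the target, and nvidia-smi locks the core at the target itself.
def curve_points(target_mhz: int) -> dict:
    return {
        "point_at_1100mV": target_mhz + 30,
        "three_points_below": target_mhz - 15,
        "smi_lock": target_mhz,
    }

print(curve_points(2175))
# -> {'point_at_1100mV': 2205, 'three_points_below': 2160, 'smi_lock': 2175}
```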


----------



## yzonker

GreatestChase said:


> I don’t have a pic at the moment, but I typically set 1100 mV at +30 from my target clock speed. So for 2175 I set 1100 mV at 2205 MHz, and the for the 3 points below 1100 mV I set the -15 below my target. Then I use nvidia-smi to request 2175 MHz. That way it locks my gpu core voltage at 1100 mV and pending temps will lock my core speed. And typically my effective is within 1-2 MHz of my external clock speed.


I don't use the lock. Otherwise similar. For some reason, moving those last 4 points up above the rest of the curve helps. Not sure why. So I'll set the initial offset at +165 to +180, then move all 4 points up another +30, then the 1100mV point up to +45 over the initial offset (so it would be +225 if I started at +180). 

Then once I find the limit (crashes), I start moving each point down one bin, starting on the left (lowest voltage). This will reduce the effective clock some, but not as much as moving all 4 down. Sometimes I can find a bit more.


----------



## ManniX-ITA

J7SC said:


> ...just noticed that NVIDIA released 511.23 WHQL drivers...any positive/neutral/negative feedback on those ?


I saw quite a nice jump in GPGPU scores, rest seems neutral.


----------



## J7SC

ManniX-ITA said:


> I saw quite a nice jump in GPGPU scores, rest seems neutral.


Thanks. I installed it earlier today and haven't benched with it yet, just playing around with DLDSR...trouble is, most things look awesome on a big OLED, so it is harder to make a critical judgement


----------



## yzonker

kryptonfly said:


> Hope you will succeed to finally break this 16k barrier
> As mentioned earlier on this page, you can test with original version from guru3d.com but you have to update SystemInfo from 3dmark or through Steam updates if you already have Steam version (automatically). You can copy/paste your key directly from 3dmark website, it will help a little your score. However, I did bench with Steam version 2.22.7334 s64 (s for Steam) : Result
> 
> Try to optimize Nvidia, shadowplay disabled, re-bar forced and enabled, hardware-accelerated disabled, g-sync disabled, nvidia-smi... Few tips.


I wasn't able to replicate that result. Did you do anything other than close Steam when you made the standalone run?


----------



## ManniX-ITA

yzonker said:


> I wasn't able to replicate that result. Did you do anything other than close Steam when you made the standalone run?


Also close Afterburner after applying the profile, don't let it run in background.


----------



## yzonker

ManniX-ITA said:


> Also close Afterburner after applying the profile, don't let it run in background.


I did, but all I was trying to do is make 2 consistent runs. One with the Steam version of 3dmark, and the 2nd from the standalone version with Steam closed. I got the same score within 10pts. Ran both twice to be sure.


----------



## yzonker

One thing I discovered that was slightly interesting. With the newest NVIDIA driver, I don't seem to need to disable P2 state in order to mine ETH (with the KP 1kw bios). Appeared to be running in P2 like any other bios.


----------



## kryptonfly

yzonker said:


> I wasn't able to replicate that result. Did you do anything other than close Steam when you made the standalone run?


It's not magic; it (should) help just a little. I'm using the Steam version for convenience. My result above was just to show the Steam version; both runs are with the Steam version.


----------



## GreatestChase

Now we wait until morning when it's nice and cold.


----------



## Thanh Nguyen

Does anyone here have two 3090s on a Z690 Apex? I don't know why there is no SLI option in the panel, but Windows detects both cards.


----------



## SoldierRBT

Thanh Nguyen said:


> Anyone here has 2 3090 on z690 apex? I dont know why there is no sli option in the panel but window detects 2 cards.


You probably need a BIOS with SLI support. BIOS 1101 should work. 









[OFFICIAL] Asus Strix/Maximus Z690 Owners Thread (www.overclock.net)


----------



## Thanh Nguyen

SoldierRBT said:


> You probably need bios with SLI support. Bios 1101 should work.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [OFFICIAL] Asus Strix/Maximus Z690 Owners Thread (www.overclock.net)


Thanks man. How do you set RTLs? I can't boot when I change the RTL. I saw we need to enable the RTL algo. Where is it? Thanks.


----------



## SoldierRBT

Thanh Nguyen said:


> Thanks man. How do u set rtl? I cant boot when change the rtl. I saw we need to enable algo rtl. Where is it? Thanks.


Leave RTLs on auto. Disable SenseAmp offset, set the RTL max value to what you want, and RTL offset to 0. At 6400C30 1T I use 61/56/61/56.


----------



## GreatestChase

After about 4 hours in my freezing garage I finally broke 16k. Unfortunately, most of that time was spent fighting cold boot issues. I think my problem is that when my PC is physically cold, the CPU does not like to boot. It wouldn't even detect my Win 10 drive for most of my session. Tried for a while on Win 11 with no luck, so I took my heat gun, got the CPU up to about 30 C, and it finally let me into my Win 10 drive. With the settings I used for this run, Win 11 produced about 200 pts lower than what I got on Win 10. 

This was set at 2220 MHz, but it pretty quickly dropped to 2190 for most of the run. I didn't want to push the voltages much further without really knowing where the "safe" limit is, so I couldn't get it to run at 2235 MHz. Also, I couldn't quite get temps to sub-zero. It was about -5 C outside, but in my garage the lowest it got was just above 1 C.


----------



## GRABibus

kryptonfly said:


> Yep, you're not the one here
> No you can bring DLSS in old games ! But the best thing is you can enable DLDSR + DLSS in games too ! Take a look :
> 
> 
> 
> 
> 
> Unfortunately I can't enable DSR in God of War with my 1080p monitor, a bug. I have to increase resolution in windows to bypass this matter. Same with Outriders, Kena Bridge, I'm limited at 1080p but in others games DSR works as intended.


DLDSR 2.25x applied to a 2560x1440 144Hz monitor, DLSS Quality in game, Ultra settings, ReBAR forced: 150 fps average, crazy


----------



## Thanh Nguyen

Has anyone used CableMod cables with the 3090? Should I use 16 or 18 AWG? Thanks.


----------



## elbramso

GreatestChase said:


> View attachment 2545495
> 
> 
> After about 4 hours in my freezing garage I finally broke 16k. Unfortunately most of that time was spent fighting cold boot issues. I think my problem is that when my pc is physically cold, the cpu does not like to boot. It wouldn't even detect my win 10 drive for most of my session. Tried for a while on Win 11 with no luck, so I took my heat gun and got the cpu up to about 30 C and it finally let me into my Win 10 drive. Compared to Win 11, the settings I used for this run produced about 200 pts lower than I got with this run on Win 10.
> 
> This was at 2220 MHz, but it pretty quickly dropped to 2190 for most of the run. I didn't want to push the voltages much further without really knowing where the "safe" limit is so I couldn't get it to run at 2235 MHz. Also, I couldn't quite get temps to sub-zero. It was about -5 C outside, but in my garage the lowest it got was just above 1 C.


First of all congratulations to 16k!
Imo you were already flying relatively close to the sun with your Classified settings. Did you see any "nvvdd too high" messages on the OLED? 
I was never able to hit a power draw past 650w 😉
Is nvidia-smi better than just using Precision X1 or AB?


----------



## GreatestChase

elbramso said:


> First of all congratulations to 16k!
> Imo you did already fly relatively close to the sun with your classified settings. Did you see any "nvvdd too high" messages on the oled?
> I was never able to hit a power draw past 650w 😉
> Is smi better than just using precision x1 or AB?


Thanks, and that's kinda what I figured with the Classified settings. I never saw any warnings though. And I still set up my curve in AB, but the Ctrl + L lock doesn't always hold it at 1100 mV for me, so I use nvidia-smi and it stays at 1100 mV throughout the run.


----------



## Agenesis

My 3090 FTW3 keeps tripping the OCP on my AX1600i. Is this a known issue, and is there an updated BIOS to fix it? 

Funny enough, it doesn't trip when connected to an AX860i.


----------



## geriatricpollywog

GreatestChase said:


> View attachment 2545495
> 
> 
> After about 4 hours in my freezing garage I finally broke 16k. Unfortunately most of that time was spent fighting cold boot issues. I think my problem is that when my pc is physically cold, the cpu does not like to boot. It wouldn't even detect my win 10 drive for most of my session. Tried for a while on Win 11 with no luck, so I took my heat gun and got the cpu up to about 30 C and it finally let me into my Win 10 drive. Compared to Win 11, the settings I used for this run produced about 200 pts lower than I got with this run on Win 10.
> 
> This was at 2220 MHz, but it pretty quickly dropped to 2190 for most of the run. I didn't want to push the voltages much further without really knowing where the "safe" limit is so I couldn't get it to run at 2235 MHz. Also, I couldn't quite get temps to sub-zero. It was about -5 C outside, but in my garage the lowest it got was just above 1 C.


Are you saying your score was higher on Win 10 or Win 11?

The 3090 Kingpin doesn’t have cold boot issues. It might be your motherboard. Do you have an LN2 mode switch?



Thanh Nguyen said:


> Has anyone use cablemod with the 3090? I should use 16 or 18 awg? Thanks.


Always use 16 AWG. I melted a CableMod 18 AWG cable on a power-modded Vega 64. I switched back to the 16 AWG cables that came with my PSU.


----------



## GreatestChase

geriatricpollywog said:


> Are you saying your score was higher on Win 10 or Win 11?
> 
> The 3090 Kingpin doesn’t have cold boot issues. It might be your motherboard. Do you have an LN2 mode switch?
> 
> 
> Always use 16 awg. I melted a Cable Mod 18 awg on a power modded Vega64. I switched back to the 16 awg cables that came with my psu.


My score was higher on Win 10. It could also have to do with my Win 10 being slimmed down with a debloater tool though. 

And I think the cold boot issues are due to my 12700k. Once it would get up to around 30 C it would detect and boot into my SSD that has Win 10 on it. Interestingly, it would boot cold into my Samsung 970 drive without an issue this time around. Last benching session I did, I had the opposite problem, where it wouldn't detect my M.2 drives, but it would my 2.5" SSDs. It could be the motherboard as well though. I'm using the MSI Z690 Carbon, which to my knowledge does not have an LN2 switch anywhere on it.


----------



## MrTOOSHORT

Agenesis said:


> My 3090 ftw3 keeps tripping my Ax1600i OCP. Is this a known issue and is there an updated bios to fix this?
> 
> Funny enough it doesn't trip when connected to an ax860i.


Set your 1600i to single rail if it's not already. I think it's done through the iCUE app.


----------



## geriatricpollywog

GreatestChase said:


> My score was higher on Win 10. It could also have to do with my Win 10 being slimmed down with a debloater tool though.
> 
> And I think the cold boot issues are due to my 12700k. Once it would get up to around 30 C it would detect and boot into my SSD that has Win 10 on it. Interestingly, it would boot cold into my Samsung 970 drive without an issue this time around. Last benching session I did, I had the opposite problem, where it wouldn't detect my M.2 drives, but it would my 2.5" SSDs. It could be the motherboard as well though. I'm using the MSI Z690 Carbon, which to my knowledge does not have an LN2 switch anywhere on it.


It’s not your 12700k. My 12900k boots fine at -4C. It sounds like an issue with your SSD. Is it SATA? I have 3 NVME drives and they boot fine at cold temps.


----------



## GreatestChase

geriatricpollywog said:


> It’s not your 12700k. My 12900k boots fine at -4C. It sounds like an issue with your SSD. Is it SATA? I have 3 NVME drives and they boot fine at cold temps.


I have 4 different drives: 1 NVMe, one M.2 SATA, and then 2 SATA SSDs. I've had the problem with all of them at different times. The more I look into it, I think it may be a cold-boot bug. If I'm going to continue doing this, I guess it may be worthwhile to invest in something like the Unify-X, which does have slow boot and LN2 headers.


----------



## geriatricpollywog

GreatestChase said:


> I have 4 different drives. 1 NVME, one M.2 SATA, and then 2 SATA SSD. I've had the problem with all of them at different times. The more I'm looking into it, I think it may be a cold-boot bug issue. If I'm going to continue doing this I guess it may be worth while to invest into something like the Unify-X which does have slow boot and LN2 headers.


I’m using a Strix D4 which doesn’t have slow boot or LN2 mode and it boots fine down to -13C water temp, -4C cpu temp.


----------



## GreatestChase

geriatricpollywog said:


> I’m using a Strix D4 which doesn’t have slow boot or LN2 mode and it boots fine down to -13C water temp, -4C cpu temp.


I'm at a loss then lol. I don't feel like it's the storage drives because sometimes the motherboard just doesn't detect them when the PC is cold. And it will continue to not detect them even if I warm the drives up. The only thing I've been able to do so far that will reliably let me boot into the drive I want as of this morning is bring the cooling loop up to temps where the cpu is in the 20-30 C range. This morning I tried to heat everything as individually as I could to try and determine what could be the source of the problem, and the cpu was the only thing that would reliably let me boot when I brought it to temp.


----------



## truehighroller1

GreatestChase said:


> I'm at a loss then lol. I don't feel like it's the storage drives because sometimes the motherboard just doesn't detect them when the PC is cold. And it will continue to not detect them even if I warm the drives up. The only thing I've been able to do so far that will reliably let me boot into the drive I want as of this morning is bring the cooling loop up to temps where the cpu is in the 20-30 C range. This morning I tried to heat everything as individually as I could to try and determine what could be the source of the problem, and the cpu was the only thing that would reliably let me boot when I brought it to temp.


Could just be the CPU itself is indeed testy.


----------



## Falkentyne

Agenesis said:


> My 3090 ftw3 keeps tripping my Ax1600i OCP. Is this a known issue and is there an updated bios to fix this?
> 
> Funny enough it doesn't trip when connected to an ax860i.


That shouldn't happen. Did you check the rail config of that PSU? IIRC that has programmable rail configuration, right?
It's either defective, or your rail configuration is set improperly.


----------



## KedarWolf

Agenesis said:


> My 3090 ftw3 keeps tripping my Ax1600i OCP. Is this a known issue and is there an updated bios to fix this?
> 
> Funny enough it doesn't trip when connected to an ax860i.


I can run 1.1V, 100% power limit on a power-hungry 5950X on my AX1600i at the default settings with the EVGA XOC 1000W BIOS, and have never tripped it. I HAVE had an unstable memory overclock reboot my PC though, but I get artefacts just before it does.

I think your AX1600i might be defective.


----------



## KedarWolf

Thanh Nguyen said:


> Has anyone use cablemod with the 3090? I should use 16 or 18 awg? Thanks.


I used the CableMod Pro cable kit until someone pointed out my CPU and GPU 12v rails were really low when benching, like 11.6v.
I switched back to the AX1600i stock cables and now it never goes below 12.2v.

I ordered MainFrame cables when someone suggested them, but they are very pricey and take up to 8 weeks or more to deliver.

Haven't received them yet.


----------



## elbramso

GreatestChase said:


> I have 4 different drives: one NVMe, one M.2 SATA, and then 2 SATA SSDs. I've had the problem with all of them at different times. The more I look into it, the more I think it may be a cold-boot bug issue. If I'm going to continue doing this, I guess it may be worthwhile to invest in something like the Unify-X, which does have slow-boot and LN2 headers.


If you want better or more performance out of your graphics card, you need to do something with your block.
Your water was 3C at the beginning and your card ended up at 31C, right? Reducing the temperature by another 10C would gain you an extra 200-300 points in PR.


----------



## GreatestChase

elbramso said:


> If you want better or more performance out of your graphics card you need to do something with your block.
> Your water was 3c at the beginning and your card ended up with 31c, right? Reducing the temperature by another 10c would grant 200-300 extra pts in PR.


Yeah, there is a decent GPU-to-water delta when I'm doing a run. I'm using KPX for my paste on the HC block. At idle, the water and GPU are usually within 1 C of each other, but at the end of a run that delta typically becomes around 23-24 C. Idk how I can lower that delta without going liquid metal.

I did manage to beat my PR last night though. I was able to get my water temps at idle to around -1 C, which let me get in a 2235 MHz run. I had to push the voltage a hair further, but still no warnings that the voltage is too high. It does however start telling me that the mem is overheating when it goes below 0 C in iCX. Is that normal?


----------



## yzonker

GreatestChase said:


> Yeah, there is decent gpu to water delta when I'm doing a run. I'm using KPX for my paste on the HC block. At idle though, the water and gpu are usually within 1 C of each other. But at the end of a run that delta typically becomes around 23-24 C. Idk how I can lower that delta without going liquid metal.
> 
> I did manage to beat my PR last night though. Was able to get my water temps at idle to around -1 C. That let me get in a 2235 MHz run. Had to push the voltage a hair further, but still no warnings that the voltage is too high. It does however start telling me that the mem is overheating when it goes below 0 C in icx. Is that normal?
> View attachment 2545749


Nice. That puts you ahead of me now. The forecast here shows the coldest weather of the year coming this week though. I can only gain about 5-10C at best since I'm just running distilled. Not changing it.

Optimus block is probably the only way to get much lower. You might be able to mod the block a bit or lap it to gain some. 

More flow helps too. What are you running for pumps? You've probably said before but I can't remember.


----------



## GreatestChase

yzonker said:


> Nice. That puts you ahead of me now. The forecast here shows the coldest weather of the year coming this week though. I can only gain about 5-10C at best since I'm just running distilled. Not changing it.
> 
> Optimus block is probably the only way to get much lower. You might be able to mod the block a bit or lap it to gain some.
> 
> More flow helps too. What are you running for pumps? You've probably said before but I can't remember.


I have a single D5. I wasn't paying attention last night at lower temps, but yesterday morning when the lowest I could get it was around 1C it was flowing at just below 200 lph.


----------



## yzonker

GreatestChase said:


> I have a single D5. I wasn't paying attention last night at lower temps, but yesterday morning when the lowest I could get it was around 1C it was flowing at just below 200 lph.


I see the difference now. HWINFO is showing 690w max!! Holy crap. I only hit 550w on those 16k+ runs. Looks like core voltage was still 1100mv though. You must be driving a bunch of power with the other voltages? 

Anyway, I'd be 20C+ delta at nearly 700w too. Have you tried dialing that down?


----------



## GreatestChase

yzonker said:


> I see the difference now. HWINFO is showing 690w max!! Holy crap. I only hit 550w on those 16k+ runs. Looks like core voltage was still 1100mv though. You must be driving a bunch of power with the other voltages?
> 
> Anyway, I'd be 20C+ delta at nearly 700w too. Have you tried dialing that down?


Yeah, unfortunately I can't get over like 2145 MHz at 550 W.


----------



## elbramso

GreatestChase said:


> Yeah, there is decent gpu to water delta when I'm doing a run. I'm using KPX for my paste on the HC block. At idle though, the water and gpu are usually within 1 C of each other. But at the end of a run that delta typically becomes around 23-24 C. Idk how I can lower that delta without going liquid metal.
> 
> I did manage to beat my PR last night though. Was able to get my water temps at idle to around -1 C. That let me get in a 2235 MHz run. Had to push the voltage a hair further, but still no warnings that the voltage is too high. It does however start telling me that the mem is overheating when it goes below 0 C in icx. Is that normal?
> View attachment 2545749


Wow that is a lot of power 😳
I should maybe start a new benchmark session but unfortunately it is too warm atm 😞


----------



## yzonker

GreatestChase said:


> Yeah, unfortunately I can't get over like 2145 MHz at 550 W.


And you're using a curve like this, from the curve screenshot linked earlier in this thread (www.overclock.net):

Including the steep jump before the last 4 points. If locking to 1100mv, there should be a steep jump before the 1081mv point. Like I said before, I'm not sure why, but I seem to be able to run a significantly higher effective clock this way.


----------



## GreatestChase

yzonker said:


> And you're using a curve like this, from the curve screenshot linked earlier in this thread (www.overclock.net):
> 
> Including the steep jump before the last 4 points. If locking to 1100mv, there should be a steep jump before the 1081mv point. Like I said before, I'm not sure why, but I seem to be able to run a significantly higher effective clock this way.


Yep, that's the same shape curve that I use. Just with the peak being at 1100 mV.


----------



## GreatestChase

elbramso said:


> Wow that is a lot of power 😳
> I should maybe start a new benchmark session but unfortunately it is too warm atm 😞


I wish I could get the same clocks that everyone else does without using that much haha. Makes it a lot more difficult to keep it cool.


----------



## yzonker

GreatestChase said:


> I wish I could get the same clocks that everyone else does without using that much haha. Makes it a lot more difficult to keep it cool.


Hmm, maybe it's the silicon lottery. It just feels like something else is off, which is the only reason I was asking about the curve. I thought EVGA did some amount of binning for the KP.


----------



## GreatestChase

yzonker said:


> Hmm, maybe it's silicon lottery. Just feels like something else is off is the only reason I was asking about the curve. I thought evga did some amount of binning for the KP.


From what someone else said, they pick 10 random chips, and then the best 2 of those go into KPs. If that is the case, then I could still just have gotten a chip that doesn't clock as well. However, if you look at it from a performance-per-clock standpoint, I'm scoring in a range where most people have a much higher GPU clock. Most of the scores around me have clock speeds of at least 2265 MHz, whereas that run was only 2235 MHz and the average was closer to 2205 MHz. So I guess there are two ways to think about it.


----------



## yzonker

GreatestChase said:


> From what someone else said, they pick 10 random chips, and then the best 2 of those go into KPs. If that is the case, then I could still just have gotten a chip that doesn't clock as well. However, if you look at it from a performance-per-clock standpoint, I'm scoring in a range where most people have a much higher GPU clock. Most of the scores around me have clock speeds of at least 2265 MHz, whereas that run was only 2235 MHz and the average was closer to 2205 MHz. So I guess there are two ways to think about it.


Well, the clock speed shown on the 3dmark page is not the effective clock, and the difference between the clock 3dmark shows and the effective clock can be influenced a lot by the curve shape.

For example, this run shows 2265 average, but effective clocks were in the 2220-2230 range.









I scored 16 145 in Port Royal
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## GreatestChase

yzonker said:


> Well, the clock speed shown on the 3dmark page is not the effective clock, and the difference between the clock 3dmark shows and the effective clock can be influenced a lot by the curve shape.
> 
> For example, this run shows 2265 average, but effective clocks were in the 2220-2230 range.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 145 in Port Royal
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com


True, my effective and displayed clock stay within 1-2 MHz of each other so I guess I didn't really think about that.


----------



## Panchovix

yzonker said:


> And you're using a curve like this, from the curve screenshot linked earlier in this thread (www.overclock.net):
> 
> Including the steep jump before the last 4 points. If locking to 1100mv, there should be a steep jump before the 1081mv point. Like I said before, I'm not sure why, but I seem to be able to run a significantly higher effective clock this way.


Not OP here, sorry for bothering, but I couldn't really understand that curve lol.

So for example, I set 2205 MHz at 1.1V on my curve (so 2175 + 30 MHz), with nvidia-smi.exe -lgc 2175; then on the rest of the curve, do I have to use 2160 MHz at 1.093V, 1.087V and 1.081V?


----------



## GRABibus

EVGA GeForce RTX 3090 Ti Kingpin to require dual 12-pin power connectors - VideoCardz.com

EVGA RTX 3090 Ti Kingpin might cost more than original Kingpin. New details on the next-gen EVGA flagship graphics card. A forum post by a QuasarZone member alleged that EVGA RTX 3090 Ti graphics card development is at the very final stage. It should be noted that previous RTX 3090...

videocardz.com


----------



## kryptonfly

Panchovix said:


> Not OP here, sorry for bothering but couldn't understand much that curve lol.
> 
> So for example, I set 2205 Mhz at 1.1V on my curve (so 2175 + 30Mhz), with nvidia-smi.exe -lgc 2175, then on the rest of the curve, I have to use 2160Mhz at 1.093V, 1.087V and 1.081V?


It's simple: just set a point +30 or even up to +45 MHz above your desired frequency (2205 or 2220 for 2175) at your desired voltage and apply.
SMI is not affected by the curve shape; it depends on internal voltages & limits. For example, I see around -40 MHz on the effective clock no matter the curve when using SMI.
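The arithmetic above can be sketched as a tiny helper. The +30/+45 MHz pad and the `nvidia-smi -lgc` lock come from this exchange; the function name, and treating the pad as a simple addition, are my own illustration:

```python
# Sketch of the "lock + padded curve point" trick described above.
# nvidia-smi locks the core clock; the Afterburner curve point is
# placed 30-45 MHz ABOVE the lock so the lock, not the curve, wins.

def curve_point_for_lock(lock_mhz: int, pad_mhz: int = 30) -> dict:
    """Return the lock command and the curve point to set.

    lock_mhz: clock passed to `nvidia-smi -lgc` (e.g. 2175)
    pad_mhz:  +30 to +45 MHz pad above the lock, per the post
    """
    if not 30 <= pad_mhz <= 45:
        raise ValueError("post suggests a +30 to +45 MHz pad")
    return {
        "lock_command": f"nvidia-smi -lgc {lock_mhz}",
        "curve_point_mhz": lock_mhz + pad_mhz,
        "reset_command": "nvidia-smi -rgc",  # release the lock afterwards
    }

cfg = curve_point_for_lock(2175, pad_mhz=30)
print(cfg["lock_command"])     # nvidia-smi -lgc 2175
print(cfg["curve_point_mhz"])  # 2205
```

Per the post, the final effective clock still lands below the lock (around -40 MHz in kryptonfly's case) because of internal voltage and limit behavior.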


----------



## elbramso

Today I was able to hit 16k pts in Port Royal using the 520w bios. 
Now I'm waiting for colder weather 😂


----------



## GRABibus

elbramso said:


> Today I was able to hit 16k pts in Port Royal using the 520w bios.
> Now I'm waiting for colder weather 😂


Which card?
And no link or screenshot?


----------



## elbramso

GRABibus said:


> Which card ?
> and no link or screenshot ?


It's a 3090 kingpin. 
Didn't submit because it wasn't my best overall but you can have a link when I'm back from work 😊


----------



## GRABibus

elbramso said:


> It's a 3090 kingpin.
> Didn't submit because it wasn't my best overall but you can have a link when I'm back from work 😊


Nice 😜
I also have a Kingpin and I think I can break 15850 on the stock cooler.
I could already do 15841 (link in signature) 😉


----------



## GRABibus

elbramso said:


> Today I was able to hit 16k pts in Port Royal using the 520w bios.
> Now I'm waiting for colder weather 😂


Ahh!!!!
It was on the 520W bios! Impressive!


----------



## elbramso

GRABibus said:


> Ahh!!!!
> It was on 520W bios ! Impressive !


This was my best run with the 1000w bios so far:








I scored 16 569 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





I was curious how far I could take the card with the 520w bios.


----------



## yzonker

I thought this seemed appropriate. 


https://www.reddit.com/r/overclocking/comments/sh5xr3


----------



## yzonker

elbramso said:


> This was my best run with the 1000w bios so far:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 569 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> 
> 
> 
> 
> I was curious how far I could take the card with the 520w bios.


Nice. Do you know what effective clock that was? Curious what it takes to get to 16.5k?


----------



## Panchovix

kryptonfly said:


> It's simple, just set a point at +30 or even till +45 mhz from your desired freq (2205 or 2220 for 2175) at your desired voltage and apply.
> SMI is not affected by the curve shape, it depends about internal voltages & limits. For example I have around -40mhz for effective clock no matter the curve with SMI.


Really thanks man, this worked! At least on my 3080


----------



## elbramso

yzonker said:


> Nice. Do you know what effective clock that was? Curious what it takes to get to 16.5k?


I haven't really monitored my effective clocks tbh.

I made a couple more runs yesterday, but unfortunately it wasn't cold enough...
This run could have been huge, but it was 3C warmer than on my personal best:








I scored 16 545 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





Here is the 16k run with 520w bios:








I scored 16 033 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





All in all I'm super happy with the results, but I still feel like there is a little room for improvement. I've never tried locking 1100mv while benching, for example.


----------



## GreatestChase

elbramso said:


> I haven't really monitored my effective clocks tbh
> 
> I made a couple more runs yesterday - but unfortunately it wasn't cold enough....
> This run could have been huge, but it was 3C warmer than on my personal best:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 545 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> 
> 
> 
> 
> Here is the 16k run with 520w bios:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 033 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> 
> 
> 
> 
> all in all I'm super happy with the results but still feel like there is a little room for improvement. I've never tried to lock 1100mv while benching for example


Very nice. What Classified settings are you using?


----------



## GRABibus

GreatestChase said:


> Very nice. What classified settings are you using?


None.
Everything default 😂


----------



## GreatestChase

GRABibus said:


> None.
> Everything default 😂


Lol, I don't think I'll be able to get anywhere near a 16,545 w/o exotic cooling. I think I have pushed my safety limits already to get a 16,165. NVVDD @ 1.3125, FBVDD @ stock, MSVDD @ 1.225. Pulling almost 700 W per HWiNFO. Probably why I also have a 23-24 C water-to-GPU delta. Couldn't get it to run at 2235 MHz without those voltage settings though, and even then, during the run I was at 2205 MHz pretty much the whole time.
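As a side note, the delta-to-power relationship here can be sanity-checked with a one-line model: die-to-water delta divided by dissipated power gives the block's effective thermal resistance. A minimal sketch using the rough figures from this exchange (~690 W and a ~23.5 C delta); the function name is mine, and the model ignores flow-rate effects:

```python
def block_delta(power_w: float, theta_c_per_w: float) -> float:
    """Predicted die-to-water delta (C) for a given power draw and
    effective block thermal resistance (C per watt)."""
    return power_w * theta_c_per_w

# Effective resistance implied by the numbers in this exchange:
# ~23.5 C delta at ~690 W.
theta = 23.5 / 690
print(round(theta, 3))                     # 0.034 C/W

# The same block at a 550 W run would sit near a 19 C delta,
# so dropping power is the cheapest way to shrink the delta.
print(round(block_delta(550, theta), 1))   # 18.7
```

Lowering theta itself is the other lever, which is where liquid metal, lapping, or a different block come in.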


----------



## elbramso

GreatestChase said:


> Lol, I don't think I'll be able to get anywhere near a 16,545 w/o exotic cooling. I think I have pushed my safety limits already to get a 16,165. NVVDD @ 1.3125, FBVDD @ stock, MSVDD @ 1.225. Pulling almost 700 W per HWiNFO. Probably why I also have a 23-24 C water-to-GPU delta. Couldn't get it to run at 2235 MHz without those voltage settings though, and even then, during the run I was at 2205 MHz pretty much the whole time.


Settings were:









Power Draw of 650w was the highest I saw.


----------



## GreatestChase

elbramso said:


> Settings were:
> View attachment 2546260
> 
> 
> Power Draw of 650w was the highest I saw.


Any particular reason that you use LLC at level 15? I typically use 6 and 5 because that is what I have seen from others. I have a vague understanding of LLC, but certainly not the best.


----------



## elbramso

GreatestChase said:


> Any particular reason that you use LLC at level 15? I typically use 6 and 5 because that is what I have seen from others. I have a vague understanding of LLC, but certainly not the best.


I wanted to have the most control over the voltage that I put to the card. 
Pretty much wanted to avoid stupid mistakes that let the voltage overshoot too much ;-)


----------



## GRABibus

elbramso said:


> I wanted to have the most control over the voltage that I put to the card.
> Pretty much wanted to avoid stupid mistakes that let the voltage overshoot too much ;-)


What is the safest LLC level?
At the default setting, it is "1".

You mean 15 is safer than 1??


----------



## GreatestChase

GRABibus said:


> What is the safest LLC level?
> At the default setting, it is "1".
> 
> You mean 15 is safer than 1??


The higher the number (so 15 > 1), the more vdroop you have, but also the less overshoot. That's my understanding of it.
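That trade-off can be pictured with a toy load-line model. The linear droop law is standard VRM behavior, but every number here, including the per-level resistances, is a made-up illustration, not measured KPE behavior:

```python
# Toy load-line (LLC) model: V_load = V_set - I_load * R_ll.
# A higher LLC level acts like a larger effective load-line
# resistance: more droop under load, but transient spikes start
# from a lower point, so peaks past V_set are smaller.

def v_under_load(v_set: float, i_load_a: float, r_ll_mohm: float) -> float:
    """Steady-state voltage under load for a given load-line resistance."""
    return v_set - i_load_a * r_ll_mohm / 1000.0

V_SET = 1.100                 # requested volts
I_LOAD = 500.0                # amps; plausible for a ~550 W core rail
R_LL = {1: 0.05, 15: 0.25}    # milliohms; illustrative per-level values

for level, r_mohm in R_LL.items():
    print(level, round(v_under_load(V_SET, I_LOAD, r_mohm), 3))
# LLC1 droops only ~25 mV, so transients overshoot further past V_SET;
# LLC15 droops ~125 mV: lower voltage under load, but safer peaks.
```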


----------



## elbramso

GreatestChase said:


> The higher the number (so 15 > 1), the more vdroop you have, but also the less overshoot. That's my understanding of it.


And this is what I've tested too.
LL1 @ 1.26... NVVDD shoots higher than 1.31 NVVDD @ LL15 (if I remember correctly^^)


----------



## GRABibus

GreatestChase said:


> The higher the number (so 15 > 1), the more vdroop you have, but also the less overshoot. That's my understanding of it.


Sorry, I don’t get it.


----------



## GRABibus

elbramso said:


> And this is what I've tested too.
> LL1 @ 1.26... NVVDD shoots higher than 1.31 NVVDD @ LL15 (if I remember correctly^^)


So LLC1 is better for stability and LLC15 is better for safety?


----------



## elbramso

GRABibus said:


> So LLC1 is better for stability and LLC15 is better for safety?


Pretty much, yeah.
Once I know what my card is capable of, I'm going to lower the NVVDD and switch to LL5-6.


----------



## elbramso

BTW, Luumi has some good insights on KPE overclocking in general on his YouTube channel.


----------



## GreatestChase

elbramso said:


> BTW, Luumi has some good insights on KPE overclocking in general on his YouTube channel.


That's where I got started with some of my initial settings. They were really useful.


----------



## 8472

Okay, so I finally got around to water cooling my 3090, but now my temps get high and I can't figure out what's wrong. 

I have the Suprim X with EK waterblock and active backplate on a GPU only loop. I'm using a 360mm radiator (corsair XR5) inside the 011 dynamic XL. 

At idle my temps are fine, ~29C for the GPU, fluid temp is low, and ambient is ~75F. However, when I run Unigine Valley the temps gradually rise. After two loops (~6 minutes), the GPU is over 60C and goes higher if I let the benchmark continue to run. My fluid temp is also high; it'll get up to 57C when the GPU reaches 67C. Way too hot.

I've tried putting the pump at 100%, shaking the case to get possible bubbles out, removing the dust filter to allow unimpeded air flow, adding 3 extra fans for push/pull, as well as tightening the screws on the backplate, and none of it has solved the problem. I'm hesitant to reapply the TIM and reseat the waterblock due to having to drain the whole loop, and EK includes soft screws that are easy to round out.

If the radiator is getting hot, wouldn't that suggest that heat is being pulled away from the card? So could it still be a contact issue?

I'm lost at this point so any suggestions would be welcome.


----------



## bearsdidit

8472 said:


> Okay, so I finally got around to water cooling my 3090, but now my temps get high and I can't figure out what's wrong.
> 
> I have the Suprim X with EK waterblock and active backplate on a GPU only loop. I'm using a 360mm radiator (corsair XR5) inside the 011 dynamic XL.
> 
> At idle my temps are fine, ~29C for the GPU, fluid temp is low, and ambient is ~75F. However, when I run Unigine Valley the temps gradually rise. After two loops (~6 minutes), the GPU is over 60C and goes higher if I let the benchmark continue to run. My fluid temp is also high; it'll get up to 57C when the GPU reaches 67C. Way too hot.
> 
> I've tried putting the pump at 100%, shaking the case to get possible bubbles out, removing the dust filter to allow unimpeded air flow, adding 3 extra fans for push/pull, as well as tightening the screws on the backplate and none of it has solved the problem. I'm hesitant to reapply the TIM and reseat the waterblock due to having to drain the whole loop and EK including soft screws that are easy to round out.
> 
> If the radiator is getting hot, wouldn't that suggest that heat is being pulled away from the card? So could it still be a contact issue?
> 
> I'm lost at this point so any suggestions would be welcome.


A single 360 rad isn’t nearly enough for a 3090. I run three radiators in total in my O11 and get agreeable temps for my 3090.

I would also check your mounting, because 60C on the die is way too high for a properly installed block. My peak GPU temp was between 45-50C with a pretty crappy Corsair block. After changing cards/block, my current max is 45C after a few hours of gaming with an ambient temp of 20-24C.
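For a rough sense of scale, a common community rule of thumb is ~100-150 W of heat per 120 mm radiator section at moderate fan speeds and a ~10 C water-to-air delta. A quick sketch; the rule of thumb and the function are mine, not a measured spec:

```python
def rad_capacity_w(sections_120mm: int, w_per_section: float = 125.0) -> float:
    """Rough loop dissipation capacity, using a community rule of
    thumb of ~100-150 W per 120 mm radiator section at moderate
    fan speeds (a 360 mm rad counts as 3 sections)."""
    return sections_120mm * w_per_section

print(rad_capacity_w(3))   # 375.0 W: a single 360 is marginal for a 400 W+ 3090
print(rad_capacity_w(9))   # 1125.0 W: three 360s leave comfortable headroom
```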


----------



## ManniX-ITA

8472 said:


> If the radiator is getting hot, wouldn't that suggest that heat is being pulled away from the card?


Maybe a single 360mm isn't enough, but I would suspect the pump.
If the radiator is getting hot and 6 fans aren't enough to pull the heat away, it's very likely the pump is broken and running slow.
What else do you have in the loop?


----------



## Baasha

When is the RTX 3090 Ti supposed to be released? Thought it was supposed to be Jan. 27th?


----------



## ManniX-ITA

Baasha said:


> When is the RTX 3090 Ti supposed to be released? Thought it was supposed to be Jan. 27th?


New info today but today passed and no new info


----------



## J7SC

@8472 I've run a single 360 with my 3090 Strix on a testbench, and it was 'ok', though not all 360s are created equal... rads can have a single core ('thin', on the left below), a double core, or a triple core ('thick', on the right below), all within the same length. Here's a comparison of a single core vs a triple core; no prizes for guessing which one cools better:










I agree though with @ManniX-ITA that there might be something else going on: some sort of restriction, a low flow rate, or a sick pump.

I've posted this one before re. decent cooling, clocks etc. for a 3090 Strix w/ stock BIOS after multiple bench runs, on a loop with 1320x64 double and triple core rads, push-pull Arctic P12 PSTs, and 3x D5 pumps at 60%











ManniX-ITA said:


> New info today but today passed and no new info


FYI...I heard (jay2C, YT) that the 3090 Ti is delayed until late February or even March, not that it matters to me


----------



## GreatestChase

8472 said:


> Okay, so I finally got around to water cooling my 3090, but now my temps get high and I can't figure out what's wrong.
> 
> I have the Suprim X with EK waterblock and active backplate on a GPU only loop. I'm using a 360mm radiator (corsair XR5) inside the 011 dynamic XL.
> 
> At idle my temps are fine, ~29C for the GPU, fluid temp is low, and ambient is ~75F. However, when I run Unigine Valley the temps gradually rise. After two loops (~6 minutes), the GPU is over 60C and goes higher if I let the benchmark continue to run. My fluid temp is also high; it'll get up to 57C when the GPU reaches 67C. Way too hot.
> 
> I've tried putting the pump at 100%, shaking the case to get possible bubbles out, removing the dust filter to allow unimpeded air flow, adding 3 extra fans for push/pull, as well as tightening the screws on the backplate and none of it has solved the problem. I'm hesitant to reapply the TIM and reseat the waterblock due to having to drain the whole loop and EK including soft screws that are easy to round out.
> 
> If the radiator is getting hot, wouldn't that suggest that heat is being pulled away from the card? So could it still be a contact issue?
> 
> I'm lost at this point so any suggestions would be welcome.


I would say that your water-to-GPU delta indicates there is good heat transfer, and you shouldn't need to remount your blocks. One thing not mentioned is which fans you are using; even with 6 fans, if they're not the right choice your temps could be poor. With a single 360 I would say those temps are close to what I would expect, especially looping for 6 mins. People see temps into the 50s C pretty quickly during a PR run on the stock 3090 KP Hybrid, which is a 360 rad. Personally, I have 3x 360 rads for my 3090 and 12700K, with 3 T30 fans on each rad. During gaming I usually see GPU temps about 15-20 C above ambient.


----------



## GRABibus

elbramso said:


> Pretty much, yeah.
> Once I know what my card is capable of I'm going to lower the nvvdd and switch LL5-6


I have tested the tool in gaming to check stability.

LLC seems to be the key to using it in game.
Of course, voltages must be checked.

I tested LLC1 and LLC15 in Vanguard, and yes, LLC1 makes the NVVDD shown on the OLED go higher than with LLC15.

What is also promising (must be tested longer) is that I get better stability with LLC15 at much lower voltages than with LLC1.
Also, I draw 40W less on the GPU with LLC15 than with LLC1 with the same voltage settings for both LLCs.

Example:
+105MHz on Core
=> NVVDD=1.1150V
=> MSVDD=1.1V
=> LLC1 both
=> FBVDD=1.35V

Conclusion:
NVVDD hits up to 1.25V => and this is unstable.
Power draw => 450W average


+105MHz on Core
=> NVVDD=1V
=> MSVDD=0.8V
=> LLC15 both
=> FBVDD=1.35V

Conclusion:
NVVDD hits 1.1V max
I could play 30 mins with no crash yet (with these settings I would crash after 3 minutes with LLC1)
Power draw => 350W average

So it seems that LLC15 helps in having lower overshoots, but also that I can stabilize my OC better (to be confirmed after hours of gaming).


----------



## GRABibus

Baasha said:


> When is the RTX 3090 Ti supposed to be released? Thought it was supposed to be Jan. 27th?


Delayed due to issues


----------



## 8472

bearsdidit said:


> A single 360 rad isn’t nearly enough for a 3090. I run three radiators in total for my 011 with agreeable temps for my 3090.
> 
> I would also check your mounting because 60c on the die is way too high for a properly installed block. My peak GPU temp is between 45-50 with a pretty crappy Corsair block. After changing cards/block, my current max is 45c after a few hours of gaming with an ambient temp of 20-24c.


One reason why I thought that a single 360 would be enough is that YouTubers including Linus and Ali from Optimum Tech had no problem cooling a 3090 with a single 360. Linus was getting ~44C in FurMark, and Ali had 51C after playing F1 with a 10900K in the loop as well.



ManniX-ITA said:


> Maybe a single 360mm isn't enough, but I would suspect the pump.
> If the radiator is getting hot and 6 fans aren't enough to pull the heat away, it's very likely the pump is broken and running slow.
> What else do you have in the loop?


Nothing else, just the card, rad, and pump/res combo. It sounds like I need to use a flow meter to test the pump. I have the EK D5 with PWM control. I hooked the 4-pin up to my Corsair Commander Pro, and it's telling me the pump is able to spin up to ~4800 rpm at 100%.



J7SC said:


> @8472 I've run a single 360 with my 3090 Strix on a testbench, and it was 'ok', though not all 360s are created equal...rads can have a single core ('thin', on the left below) a double core or a triple core ('thick', on the right below) all within the same length. Here's a comparison of a single core vs a triple core; no prizes for guessing which one cools better:
> 
> View attachment 2546309
> 
> 
> I agree though with @ManniX-ITA that there might be something else going on, some sort of restriction, low flow rate or a sick pump.
> 
> I've posted this one before re. decent cooling, clocks etc for a 3090 Strix w/ stock bios after multiple bench runs on a loop with 1320x64 double and triple core rads, push-pull Arctic P12 PSTs and 3x D5 pumps at 60%
> View attachment 2546310
> 
> 
> 
> 
> FYI...I heard (jay2C, YT) that the 3090 Ti is delayed until late February or even March, not that it matters to me


My rad is thin (~30mm). I should have gotten a clear block; since my block and backplate are the black acetal and my tubing is the black EK ZMT, I can't see any restrictions outside of the reservoir.



GreatestChase said:


> I would say that your water to gpu delta indicates that there is good heat transfer and you shouldn't need to remount your blocks. One thing not mentioned is the fans that you are using. Even if you have 6 fans, if they're not the right choice, then your temps could be poor. With a single 360 I would say that those temps are close to what I would expect, especially looping for 6 mins. People see into the 50s C pretty quickly during a PR run on the stock 3090 KP Hybrid which is a 360 rad. Personally, I have 3x360 rads for my 3090 and 12700k and 3 T30 fans for each rad. During gaming I usually see about 15-20 C above ambient temps on the gpu.


I have 3 Corsair QL120s running ~1100 rpm. I turned them up to max just to see if that made any difference and it didn't. I added 3 of the Lian Li Uni fans to try push pull but that didn't do much either.


----------



## GreatestChase

8472 said:


> One reason why I thought that a single 360 would be enough is that YouTubers including Linus and Ali from Optimum Tech had no problem cooling a 3090 with a single 360. Linus was getting ~44 in furmark, and Ali had 51 after playing F1 with a 10900k in the loop as well.
> 
> 
> 
> Nothing else, just the card, rad, and pump/res combo. It sounds like I need to use a flow meter to test the pump. I have the EK D5 with PWM control. I hooked up the 4 pin to my Corsair Commander Pro. It's telling me that the pump is able to spin up ~4800 rpm at 100%.
> 
> 
> 
> My rad is thin (~30mm). I should have gotten a clear block. Since my block and backplate are the black acetal and my tubing is the black EK ZMT, I can't see any restrictions outside of the reservoir.
> 
> 
> 
> I have 3 Corsair QL120s running ~1100 rpm. I turned them up to max just to see if that made any difference and it didn't. I added 3 of the Lian Li Uni fans to try push pull but that didn't do much either.


Have you done any gaming to see what your temps are there? Your setup doesn't sound problematic. You could probably use a thicker rad, but that shouldn't cost you 10+ degrees of cooling performance. Running a benchmark and gaming are not equal to one another: a benchmark keeps the GPU at close to 100% load the entire time, whereas in gaming you won't be at 100% unless you're in a graphically intensive scene. I would test what temps you're getting in game and report back. It's likely they'll be lower than what you see when benching.


----------



## 8472

GreatestChase said:


> Have you done any gaming to see what your temps are there? Your setup doesn't sound problematic. You could probably use a thicker rad, but that shouldn't cost you 10+ degrees of cooling performance. Running a benchmark and gaming are not equal to one another: a benchmark keeps the GPU at close to 100% load the entire time, whereas in gaming you won't be at 100% unless you're in a graphically intensive scene. I would test what temps you're getting in game and report back. It's likely they'll be lower than what you see when benching.


I'll play God of War later on and let everyone know. One of the reasons I like Valley is historically for me it's done a good job of representing what kind of temperatures I see in a game, at least the worst case scenario for games. 

I'll probably try adding another 120 rad in the rear before going with a thicker 360. My AIO for my CPU is on the side to draw in fresh air and the top will be tricky to cleanly route tubing.


----------



## J7SC

8472 said:


> (...)
> My rad is thin (~30mm). I should have gotten a clear block. Since my block and backplate are the black acetal and my tubing is the black EK ZMT, I can't see any restrictions outside of the reservoir.
> (...)


Can you see the fluid moving inside the reservoir, maybe with a flashlight?


----------



## 8472

J7SC said:


> Can you see the fluid moving inside the reservoir, maybe with a flashlight?


When the pump is at 100%, I can see water moving, though it looks like it's reacting more to the pump's vibrations than anything. At 40% I can't see any coolant moving. The res is an EK FLT 80 with a D5, FWIW.


----------



## yzonker

8472 said:


> When the pump is at 100%, I can see water moving, though it looks like it's reacting more to the pump's vibrations than anything. At 40% I can't see any coolant moving. The res is an EK FLT 80 with a D5, FWIW.


Ok, so it's definitely not the pump or flow IMO. Reading your original post, you have a ~10C block delta (57C water, 67C core temp). You can't get a 10C delta at 420W (default for the Suprim X) with low flow.

So the issue has to be poor cooling with the 360 rad. I actually started out with the same setup for my 3090. It performed better than yours though. I had a thin EK 360 which is one of the worst rads I think. IIRC, at 390w it would hit 55-60C, maybe 10C lower than yours. This was with 3 Noctua F12's at full speed (1500 rpm).

Did you try pulling the case covers and re-running the stress test? And you have your CPU AIO as intake and the GPU rad as exhaust? If so, you should make them both either intake or exhaust so one doesn't pick up warm air from the other.

The real solution of course is more rads. A 120 will help some, but not a lot. When I added an additional 280, temps dropped about 5C. In all honesty, if you have a small case, then either get a bigger case or just add an external rad (which is what I finally did to get decent water temps).
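The back-of-the-envelope numbers in this post (core minus water temperature, divided by board power) can be sketched as a quick Python check. The 67C / 57C / 420W figures are just the values quoted above, and the resulting C/W number is only a rough indicator, not a calibrated measurement:

```python
# Rough sanity check of block performance from the numbers above:
# core-to-water delta and an approximate block thermal resistance.

def block_delta(core_c: float, water_c: float) -> float:
    """GPU core temperature minus coolant temperature, in C."""
    return core_c - water_c

def thermal_resistance(delta_c: float, power_w: float) -> float:
    """Approximate thermal resistance in C per watt (delta / power)."""
    return delta_c / power_w

delta = block_delta(67.0, 57.0)        # 10.0 C, as quoted above
r = thermal_resistance(delta, 420.0)   # ~0.024 C/W
print(f"{delta:.1f} C, {r:.3f} C/W")
```

A delta-to-power ratio in this ballpark suggests the mount and flow are transferring heat effectively, which matches the point above that the problem is more likely radiator capacity than the block.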


----------



## gfunkernaught

Has anyone noticed stability issues with a previously stable OC going to the latest drivers?


----------



## kryptonfly

@8472 : Before adding a rad, touch your rad with the fans off and check whether it's (a little) warm or ice-cold. If it's cold, you don't need more rad; it means you have bad waterblock contact, which is generally problem no. 1 in watercooling. I've been using liquid metal (a must-have with this kind of heat) since last September and the temps haven't changed an inch; 10°C delta GPU <=> hot spot. It depends on your expectations, but in my experience this GPU is hard to cool down; even 3x360 is not enough (for me). I have 5x 360x30mm with 15 fans at 5V, and the max I have seen is 41°C at 640W in Port Royal after many consecutive tries, ambient 20°C. If I turn the fans off, the 5x360s get warm really quickly. I also prefer thin rads over thick ones because they're easier to dissipate through and don't need powerful, loud fans. My pump (Phobya DC12-400) is louder than the 15 fans at 5V, and I can see bubbles going down at about 3/4 of the 240mm tank.


----------



## GreatestChase

kryptonfly said:


> @8472 : Before adding a rad, touch your rad with the fans off and check whether it's (a little) warm or ice-cold. If it's cold, you don't need more rad; it means you have bad waterblock contact, which is generally problem no. 1 in watercooling. I've been using liquid metal (a must-have with this kind of heat) since last September and the temps haven't changed an inch; 10°C delta GPU <=> hot spot. It depends on your expectations, but in my experience this GPU is hard to cool down; even 3x360 is not enough (for me). I have 5x 360x30mm with 15 fans at 5V, and the max I have seen is 41°C at 640W in Port Royal after many consecutive tries, ambient 20°C. If I turn the fans off, the 5x360s get warm really quickly. I also prefer thin rads over thick ones because they're easier to dissipate through and don't need powerful, loud fans. My pump (Phobya DC12-400) is louder than the 15 fans at 5V, and I can see bubbles going down at about 3/4 of the 240mm tank.


How large of a temp difference did you notice when you changed to liquid metal compared to thermal paste? I’m currently using KPX and I’m curious what sort of drop I could expect.


----------



## yzonker

Cold front coming through. Setting up. Still kinda hot outside right now... lol Makeshift duct works pretty good though. Water is showing 7C right now, desk thermometer shows 18C, so fairly comfortable to sit in here.

BTW, Superposition is much more tolerant of high overclocks. 2240-2245 effective for that run.


----------



## kryptonfly

GreatestChase said:


> How large of a temp difference did you notice when you changed to liquid metal compared to thermal paste? I’m currently using KPX and I’m curious what sort of drop I could expect.


Hmm, hard to say, because at the time I was testing thermal pads of the wrong height; maybe I gained 5-6°C at +400W. It's a handmade LM from a local reseller (liquid above 25°C and slightly solid below).



yzonker said:


> Cold front coming through. Setting up. Still kinda hot outside right now... lol Makeshift duct works pretty good though. Water is showing 7C right now, desk thermometer shows 18C, so fairly comfortable to sit in here.
> 
> BTW, Superposition is much more tolerant of high overclocks. 2240-2245 effective for that run.
> 
> View attachment 2546358


Ambient 23°C :


----------



## GreatestChase

kryptonfly said:


> Hmm, hard to say, because at the time I was testing thermal pads of the wrong height; maybe I gained 5-6°C at +400W. It's a handmade LM from a local reseller (liquid above 25°C and slightly solid below).
> 
> Ambient 23°C :


Gotcha. Did you use anything around the die like nail polish or kapton tape to protect it from shorts?


----------



## kryptonfly

GreatestChase said:


> Gotcha. Did you use anything around the die like nail polish or kapton tape to protect it from shorts?


Yep, I'm using Kapton tape; always use tape or something to avoid shorts! I've always erred on the side of less LM rather than more, but the die is big, so don't hesitate to apply enough.


----------



## yzonker

kryptonfly said:


> Hmm, hard to say, because at the time I was testing thermal pads of the wrong height; maybe I gained 5-6°C at +400W. It's a handmade LM from a local reseller (liquid above 25°C and slightly solid below).
> 
> Ambient 23°C :


Does the 12900K help any compared to the previous gens or AMD in Superposition?

Kinda curious. Looks like about the same mem clock and lower effective. At least the min is lower than what I saw on my run. I did have my 2nd monitor on; not sure if that hurts it or not. I'll make a better run tomorrow night when it's colder.


----------



## 8472

yzonker said:


> Ok, so it's definitely not the pump or flow IMO. Reading your original post, you have ~10C block delta (57C water, 67C core temp). Can't get a 10C delta at 420w (default for the Suprim X) with low flow.
> 
> So the issue has to be poor cooling with the 360 rad. I actually started out with the same setup for my 3090. It performed better than yours though. I had a thin EK 360 which is one of the worst rads I think. IIRC, at 390w it would hit 55-60C, maybe 10C lower than yours. This was with 3 Noctua F12's at full speed (1500 rpm).
> 
> Did you try pulling the case covers and re-running the stress test? And you have your CPU AIO as intake and the GPU rad as exhaust? If so, you should make them both either intake or exhaust so one doesn't pick up warm air from the other.
> 
> The real solution of course is more rads. A 120 will help some, but not a lot. When I added an additional 280 temps dropped about 5C. In all honesty, if you have a small case, then either get a bigger case or just add an external rad (which is what I finally did to get decent water temps).


I have the GPU rad as intake on the bottom and the CPU AIO as intake on the side. So they're both breathing fresh air. I'm wondering how much heat the VRAM from the active backplate is adding to the loop. I have a spare 360 rad that I could add, but I'll have to pick up some quick disconnects so I can easily remove it if it doesn't work out.
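On the question of how much heat the active backplate is adding to the loop: the water temperature rise *across* the loop per pass can be estimated from power and flow rate using water's heat capacity. A minimal sketch, where the 420W load and the flow rates are illustrative assumptions, not measurements of this system:

```python
# Coolant temperature rise across the heat sources: dT = P / (m_dot * c_p).
# The 420W load and the flow rates below are illustrative assumptions.

C_P_WATER = 4186.0  # J/(kg*K), specific heat of water

def loop_delta_t(power_w: float, flow_l_per_min: float) -> float:
    """Temperature rise of the coolant across the loop, in C."""
    mass_flow = flow_l_per_min / 60.0  # kg/s (1 L of water is ~1 kg)
    return power_w / (mass_flow * C_P_WATER)

print(round(loop_delta_t(420.0, 1.0), 1))  # ~6.0 C at a sluggish 1 L/min
print(round(loop_delta_t(420.0, 4.0), 1))  # ~1.5 C at a healthier flow
```

This is why low flow mostly shows up as a larger spread between water-in and water-out (and a hotter block), rather than as a dramatically different average water temperature; the average is set by how much the radiators can reject.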


----------



## kryptonfly

yzonker said:


> Does the 12900k help any compared to the previous gens or amd with Superposition?
> 
> Kinda curious. Looks like about the same mem clock and lower effective. At least the min is lower than what I saw on my run. I did have my 2nd monitor on. Not sure if that hurts it or not. Make a better run tomorrow night when it's colder.


Your GPU is at 98%, not 99%; I don't know why. It helps with min FPS for sure; ReBAR was enabled, G-Sync disabled, and max performance was set in the NVIDIA panel. I sent my 12900K back to Amazon ahead of a future 12900KS, so I've returned to my X99 for now. I could test again, but I'm pretty sure the score would be lower; I'm CPU-limited even at 4K with my 6950X at 4.65 GHz.


----------



## J7SC

8472 said:


> I have the GPU rad as intake on the bottom and the CPU AIO as intake on the side. So they're both breathing fresh air. I'm wondering how much heat the VRAM from the active backplate is adding to the loop. I have a spare 360 rad that I could add, but I'll have to pick up some quick disconnects so I can easily remove it if it doesn't work out.


The more cooling you can give a GPU (or for that matter CPU) with an aggressive boost algorithm that has temps as one of the major constraints, the better.


----------



## ManniX-ITA

8472 said:


> One reason why I thought that a single 360 would be enough is that YouTubers including Linus and Ali from Optimum Tech had no problem cooling a 3090 with a single 360. Linus was getting ~44 in furmark, and Ali had 51 after playing F1 with a 10900k in the loop as well.


Are you sure it's a 1:1 comparison?
Do you know how the GPU was set (stock or AB profile)?
How are you testing? Stock or OC profile?

Better if you get an average of board power draw with GPU-Z or HWiNFO, resetting the stats once you start Heaven.
Also record the HWiNFO stats to a CSV file and share it here (first button at the bottom right of the stats window).



8472 said:


> Nothing else, just the card, rad, and pump/res combo. It sounds like I need to use a flow meter to test the pump. I have the EK D5 with PWM control. I hooked up the 4 pin to my Corsair Commander Pro. It's telling me that the pump is able to spin up ~4800 rpm at 100%.


I'm still suspecting a flow issue... a D5, even with a 30mm rad, should do better. It doesn't add up.

Yes, a flow meter would be a good thing to add.
But don't get cheap stuff... better nothing than something that lies.

Really hard to find, but look for the high flow NEXT:

Flow sensor high flow NEXT, G1/4 (shop.aquacomputer.de): fully integrated sensor for coolant flow, temperature and quality, equipped with USB interface, RGBpx lighting and OLED display. Coolant flowing through the sensor drives a rotor/impeller; a contactless magnetic sensor system detects the rotation.





Or the high flow 2:

Flow sensor high flow 2, G1/4 (shop.aquacomputer.de): easily integrated into your loop for continuous flow monitoring, also equipped with a temperature sensor for coolant temperature monitoring.





A broken D5 is a rare event.
But if the pump is faulty, the reported rpm could be fake.
I once had a cheap pump that reported the right rpm while the flow was only a third of what it should have been.

I still think flow could be the culprit...
The next thing to check is whether there's a kink somewhere you didn't notice.
Very, very easy to miss without a flow meter:
everything looks normal, but the flow rate is cut in half.
Did you use plenty of anti-kinking springs?









Alphacool 17186 anti-kinking spring with 11mm tubing, 320mm length, matte black (www.amazon.com)





Check every inch of the tubing thoroughly to see if it's bending inward somewhere.

Also add a pressure equalizer plug on the FLT's top fill port; almost every brand makes one.
It'll save a lot of problems with air bubbles and high pressure.









Thermaltake CL-W086-CU00BL-A Pacific G1/4 Pressure Equalizer Stop Plug with O-Ring (www.amazon.com)


----------



## kryptonfly

yzonker said:


> Does the 12900k help any compared to the previous gens or amd with Superposition?
> 
> Kinda curious. Looks like about the same mem clock and lower effective. At least the min is lower than what I saw on my run. I did have my 2nd monitor on. Not sure if that hurts it or not. Make a better run tomorrow night when it's colder.


Here's my score with my 6950X, we're both (a little) cpu limited (GPU 98%, not 99% like my previous 12900K). I only swapped my CPU+MB+RAM, I kept same windows, drivers, everything...



12900K :


----------



## GreatestChase

kryptonfly said:


> Hum, hard to say because at that time I tested thermal pads with bad height, maybe I improved 5-6°C at +400W. It's a LM handmade from a local reseller (liquid at +25°C and a little solid below).
> 
> Ambient 23°C :


Meant to ask last night, but what height thermal pads are you using? I'm using 2mm thick thermalright extreme odyssey pads and I feel like I have decent die contact.


----------



## GreatestChase

ManniX-ITA said:


> Are you sure it's a 1:1 comparison?
> Do you know how the GPU was set (stock or AB profile)?
> How are you testing? Stock or OC profile?
> (...)
> Yes a flow meter would be a good thing to add.
> But don't get cheap stuff... better nothing than something that lies.
> 
> Really hard to find but look for the Flow Next... Or the high flow 2... (shop.aquacomputer.de)
> (...)


I also recommend aquacomputer parts. I have their previous gen flow sensor in my build and it works great with the quadro that I have.


----------



## GRABibus

kryptonfly said:


> Here's my score with my 6950X, we're both (a little) cpu limited (GPU 98%, not 99% like my previous 12900K). I only swapped my CPU+MB+RAM, I kept same windows, drivers, everything...
> 
> 
> 
> 12900K :


Do you enable graphics hardware acceleration?
On my side, it increases GPU usage by 2%, at least in games.


----------



## kryptonfly

GreatestChase said:


> Meant to ask last night, but what height thermal pads are you using? I'm using 2mm thick thermalright extreme odyssey pads and I feel like I have decent die contact.


I have a Bykski WB. I first tried 1.5mm on the GPU side but the contact wasn't optimal, so I'm using 1mm on the GPU side and 1.5mm on the backplate side; almost perfect. EDIT: Gelid Extreme



GRABibus said:


> Do you enable graphics hardware acceleration ?
> On my side, it increases GPU usage by 2%, at least in games.


Hmm, you mean hardware-accelerated GPU scheduling in the Windows settings? It's weird, because I have lower GPU usage in games (at least combined with the X99; usage was almost the same on/off with the 12900K), and I'm using exactly the same settings and Windows install as with my previous 12900K. I always disable this feature anyway.


----------



## GRABibus

kryptonfly said:


> I have a Bykski WB. I first tried 1.5mm on the GPU side but the contact wasn't optimal, so I'm using 1mm on the GPU side and 1.5mm on the backplate side; almost perfect. EDIT: Gelid Extreme
> 
> Hmm, you mean hardware-accelerated GPU scheduling in the Windows settings? It's weird, because I have lower GPU usage in games (at least combined with the X99; usage was almost the same on/off with the 12900K), and I'm using exactly the same settings and Windows install as with my previous 12900K. I always disable this feature anyway.


Yes, hardware accelerated.
1% to 2% more GPU usage in games…


----------



## ManniX-ITA

kryptonfly said:


> Hmm, you mean hardware-accelerated GPU scheduling in the Windows settings? It's weird, because I have lower GPU usage in games (at least combined with the X99; usage was almost the same on/off with the 12900K), and I'm using exactly the same settings and Windows install as with my previous 12900K. I always disable this feature anyway.


The HW-accelerated GPU scheduling?
I always disable it as well.
Some games run marginally better, but some run much worse.
It depends on the platform, the drivers... too many factors.

There are a few benchmarks around:









Windows 10 Hardware-Accelerated GPU Scheduling Benchmarks (Frametimes, FPS) (www.gamersnexus.net): enabling hardware-accelerated GPU scheduling requires Windows 10 2004, a supported GPU, and the latest drivers for that GPU (NVIDIA version 451.48, AMD version 20.5.1 Beta).













Windows 10 Hardware Accelerated GPU Scheduling Performance Analysis (babeltechreviews.com): performance analysis with an RTX 3080 MASTER benching 11 PC games using GeForce driver 456.71.


----------



## kryptonfly

GRABibus said:


> Yes, hardware accelerated.
> 1% to 2% GPU usage more in games …


Honestly it's hard to say, because I was mainly at 98-99% GPU usage even at 1080p with the 12900K and HAGS off. With the X99 I could see 96% instead of 98% some time ago with HAGS on; maybe that has changed since, I'll take a look later. But yes, I always disable it.


----------



## GRABibus

kryptonfly said:


> Honestly it's hard to say because I was mainly at 98-99% GPU usage even at 1080p with the 12900K and HAGS off. With the X99 I could see 96% instead of 98% some time ago with HAGS on, maybe that changed since, I'll take a look later. But yes I always disable it.


I will check also, because I have higher GPU usage but maybe less fps 😊


----------



## GreatestChase

kryptonfly said:


> I have a Bykski WB, I tried first with 1.5mm on the GPU side but the contact was not optimal, so I'm using 1mm on the GPU side and 1.5mm on the backplate side, almost perfect. EDIT : Gelid Extreme
> 
> Hum, you mean hardware-accelerated in windows settings ? It's weird because I have lower GPU usage in games (at least combined with the X99, but almost same usage on/off with the 12900k) and I'm using exactly same settings and windows than with my previous 12900k. I always disable this feature anyway.


Gotcha, I thought that you were running a KP card. I know that my thermal pads for my 3090 ftw3 using the ekwb block are thinner than the ones used with the KP HC block.


----------



## elbramso

Regarding LM vs. Thermal Paste:
I was wondering how much performance I would gain if I swapped my KPx for LM. I have the Optimus block on my GPU, and with KPx I'm seeing deltas of 11.2C @ 520W. As other users seem to have better results with LM and this block, I'm tempted to give it a shot...


----------



## kryptonfly

GreatestChase said:


> Gotcha, I thought that you were running a KP card. I know that my thermal pads for my 3090 ftw3 using the ekwb block are thinner than the ones used with the KP HC block.


I have a Gigabyte Turbo with its "light" 2x 8-pin connectors:



That was a year ago, in February 2021, before I shunt-modded it.


elbramso said:


> Regarding LM vs. Thermal Paste:
> I was wondering how much performance I would gain when I swap my KPx for LM. I have the optimus block on my gpu and with KPx I'm seeing deltas of 11,2c @520w. As other user seem to have better results with LM and this block I'm tempted to give it a shot....


You can try; it won't get worse.


----------



## GreatestChase

Is LM safe to use on pure copper surfaces like the KP HC block? I've read some conflicting information in that regard. I know you can't use it on aluminum, and that it's inert to nickel-plated copper, but I've read both that it can corrode pure copper and that it isn't a problem for pure copper. Any insight there?


----------



## 8472

kryptonfly said:


> @8472 : Before to add rad, you have to check and touch if your rad is (a little) warm or iced without fan. If it's iced, you don't need to add more rad but it means you have a bad waterblock contact. It's almost generally the problem n°1 in WC. I'm using liquid metal (a must have with this kind of heat) since last September and temp didn't change an inch. 10°C delta GPU <=> hot spot. Depends of your expectation but from my experience this GPU is hard to cool down, let's say even 3x360 is not enough (for me), I have 5x360x30mm with 15 fans at 5v, the max that I have seen is 41°C at 640W PR after many consecutive tries, ambient 20°C. If I turn off fans, the 5x360s get warm really quickly. Also I prefer thin rads against thick because it's easier to dissipate and we don't need powerful loud fans. My pump (Phobya DC12-400) is louder than 15 fans at 5v and I can see bubbles going down at about 3/4 of the 240mm tank.


After a while the rad is hot to the touch with the fans on, which makes sense given the coolant temp. I'm not sure how high the temps will peak at since I don't want to have the coolant above 50C for too long. 



ManniX-ITA said:


> Are you sure it's a 1:1 comparison?
> Do you know how the GPU was set (stock or AB profile)?
> How are you testing? Stock or OC profile?
> 
> Better if you get an average of board power draw with GPU-z or HWInfo, resetting the stats once you start Heaven.
> (...)
> I'm still suspecting a flow issue... a D5 even with a 30mm rad should do better. Doesn't add up.
> 
> Yes a flow meter would be a good thing to add.
> (...)


My Afterburner settings are the stock, out-of-the-box settings. Using my OC profile would have probably burned down the neighborhood. Lol.

I'll search for the flow meter and check out the other hardware you mentioned. Thanks!


----------



## KedarWolf

kryptonfly said:


> Here's my score with my 6950X, we're both (a little) cpu limited (GPU 98%, not 99% like my previous 12900K). I only swapped my CPU+MB+RAM, I kept same windows, drivers, everything...
> 
> 
> 
> 12900K :


I see 6950X and think, "Wait, not a 5950X, is this an engineering sample or something?" then see Intel Core I7. :/


----------



## kryptonfly

GreatestChase said:


> Is LM safe to use on pure copper surfaces like the KP HC block? I've read some conflicting information in that regard. I know that you can't use it on aluminum, and that it is inert to nickel plated copper, but I've read that it can corrode pure copper, and I've read that it isn't a problem for pure copper. Any insight there?


From what I've seen over many years, it's normally safe on copper. It can change color, oxidize and turn black, but nothing more, and you can clean it with hydrochloric acid diluted with water (1/3 acid, 2/3 water; be cautious!). But I prefer to use a car polish, the kind used to restore cars: it's more effective than any alcohol and safer. It takes time and a lot of rubbing, but the result is great; not as-new, but almost.



8472 said:


> After a while the rad is hot to the touch with the fans on, which makes sense given the coolant temp. I'm not sure how high the temps will peak at since I don't want to have the coolant above 50C for too long.
> 
> My afterburner settings are stock out of the box settings. Using my OC profile would have probably burned down the neighborhood. Lol.
> 
> I'll search for the flow meter and check out the other hardware you mentioned. Thanks!


Ok, so you need to add more rads; your loop is saturated and can't dissipate more heat. A single 360x30mm is not enough; you need at least 3x 360mm if you OC your GPU.
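This sizing advice can be turned into a rough estimate. A widely repeated community rule of thumb is on the order of 100W of heat per 120mm of radiator at moderate fan speeds; that figure is an assumption for illustration (thickness, fin density and fan speed all move it), not a spec:

```python
import math

# Rule-of-thumb radiator sizing. WATTS_PER_120MM is an assumed community
# heuristic for ~30mm rads at moderate fan speeds, not a measured spec.
WATTS_PER_120MM = 100.0

def sections_needed(total_watts: float) -> int:
    """Estimate how many 120mm radiator sections a heat load calls for."""
    return math.ceil(total_watts / WATTS_PER_120MM)

# A shunt-modded 3090 pulling 640W, as discussed above:
print(sections_needed(640.0))  # 7 sections, i.e. more than 2x 360mm
```

By this estimate a single 360 (3 sections) tops out around 300W at quiet fan speeds, which is consistent with a single thin 360 struggling with a 420W card.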

@KedarWolf : I sent back my 12900K and I'm waiting for a 12900KS, so I've come back to my X99 for now.


----------



## J7SC

Core i7 5960X chips were no slouches, in part thanks to the HEDT quad-channel RAM... my first day / first Port Royal run a year ago w/ stock 3090 Strix on air (plus back-plate fans)



















Re. the earlier posts on CPU utilization: 98% vs 99% won't rob me of my sleep. Below is a typical Superposition 4K run with my Strix and 5950X... by typical I mean 23 C ambient (after all, it is snowing outside, so cozy is needed  ), the 520 KPE bios with r_bar_forced on the Strix, and no curves, no nvidia-smi etc., just MSI AB sliders. BTW, the max wattage reading is inaccurate due to the KPE bios on the Strix. Custom cooling is extensive, though, which helps.

I've got some bookmarks on -smi, but is there a summary link for -smi newbies? Also, I take it I can run it without having implemented MSI AB curves?


----------



## KedarWolf

kryptonfly said:


> Normally, from what I've seen over many years, it's safe on copper. It can change color, oxidize and turn black, but nothing more, and you can clean it with hydrochloric acid diluted in water (1/3 acid, 2/3 water, be cautious!). But I prefer to use a car polish, the kind used to restore cars; it's more effective than any alcohol and safer. It takes time and lots of rubbing, but the result is great, not as new but almost.
> 
> Ok, so you need to add more rads; your loop is overloaded and can't dissipate more heat. One 360x30mm rad is not enough, you need at least 3x360mm if you OC your GPU.
> 
> @KedarWolf : I sent back my 12900K and I'm waiting for a 12900KS, so I'm back on my X99 for now.


I run my Strix OC RTX 3090 on an EKWB Phoenix 360 AIO core unit with two QDCs and an EKWB waterblock. It idles at 27C and games at 55C, but the memory hits 70C running Port Royal and about 65C while gaming. I'm still waiting on the thermal pads to install my active backplate, though.


----------



## kryptonfly

J7SC said:


> The Core i7 5960X was no slouch, in part due to the HEDT quad-channel RAM... my first day / first Port Royal run a year ago with the stock 3090 Strix on air (plus backplate fans)
> 
> View attachment 2546445
> 
> View attachment 2546446
> 
> 
> 
> Re. the earlier posts on CPU utilization: 98% vs 99% won't rob me of my sleep. Below is a typical Superposition 4K run with my Strix and 5950X... by typical I mean 23 C ambient (after all, it is snowing outside, so cozy is needed  ), the 520 KPE bios with r_bar_forced on the Strix, and no curves, no nvidia-smi etc., just MSI AB sliders. BTW, the max wattage reading is inaccurate due to the KPE bios on the Strix. Custom cooling is extensive, though, which helps.
> 
> I've got some bookmarks on -smi, but is there a summary link for -smi newbies? Also, I take it I can run it without having implemented MSI AB curves?
> View attachment 2546448


I still have my 5960X on my desk, which does 4.8 GHz in games, and my 6950X at 4.65 GHz with HT off. X99 is still great; I've seen +51% in some games with the 12900K (SOTTR, FC5...), but mostly X99 does the job 
PR and Superposition are not CPU-intensive anyway (Superposition a little more so).

About nvidia-smi: locate it in C:\Windows\System32\DriverStore\FileRepository\nv_dispig.inf***** then copy the path into Notepad, append \nvidia-smi -lgc 2265 (if you want to lock the GPU at 2265 MHz), and save it as a .bat file. You also need another file to reset the GPU with \nvidia-smi -rgc.
-lgc : lock GPU clock
-rgc : reset GPU clock

*=> run the .bat file as admin !*

You have to use Afterburner to set a custom voltage, otherwise it can be unstable or run at the wrong frequency. Set a curve point at your required voltage 30-45 MHz above your desired frequency, e.g. a point at 2310 MHz/1.10 V if you want to lock at 2265 MHz/1.10 V. If you don't add at least +30 MHz, the lock will shift to the point on its left and you'll lose one voltage bin (1.093 V). You can close Afterburner after checking the lock.
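The locate-and-lock recipe above can also be wrapped in a small script. This is a sketch under assumptions: the driver-store folder suffix varies per install (hence the glob; the elided part of the path is not filled in), and the helper names are mine; only nvidia-smi and its -lgc/-rgc flags come from the post. Run it from an elevated prompt.

```python
import glob
import subprocess

def find_nvidia_smi():
    """Locate nvidia-smi in the driver store; the folder suffix varies per driver."""
    hits = glob.glob(r"C:\Windows\System32\DriverStore\FileRepository"
                     r"\nv_dispig.inf*\nvidia-smi.exe")
    return hits[0] if hits else "nvidia-smi"  # fall back to PATH

def smi_command(mhz=None, smi="nvidia-smi"):
    """Build the clock command: -lgc <MHz> locks the core clock, -rgc resets it."""
    return [smi, "-lgc", str(mhz)] if mhz is not None else [smi, "-rgc"]

# From an elevated prompt, e.g.:
#   subprocess.run(smi_command(2265, find_nvidia_smi()), check=True)  # lock
#   subprocess.run(smi_command(smi=find_nvidia_smi()), check=True)    # reset
print(smi_command(2265))  # ['nvidia-smi', '-lgc', '2265']
```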


----------



## J7SC

kryptonfly said:


> I still have my 5960X on my desk, which does 4.8 GHz in games, and my 6950X at 4.65 GHz with HT off. X99 is still great; I've seen +51% in some games with the 12900K (SOTTR, FC5...), but mostly X99 does the job
> PR and Superposition are not CPU-intensive anyway (Superposition a little more so).
> 
> About nvidia-smi: locate it in C:\Windows\System32\DriverStore\FileRepository\nv_dispig.inf***** then copy the path into Notepad, append \nvidia-smi -lgc 2265 (if you want to lock the GPU at 2265 MHz), and save it as a .bat file. You also need another file to reset the GPU with \nvidia-smi -rgc.
> -lgc : lock GPU clock
> -rgc : reset GPU clock
> 
> *=> run the .bat file as admin !*
> 
> You have to use Afterburner to set a custom voltage, otherwise it can be unstable or run at the wrong frequency. Set a curve point at your required voltage 30-45 MHz above your desired frequency, e.g. a point at 2310 MHz/1.10 V if you want to lock at 2265 MHz/1.10 V. If you don't add at least +30 MHz, the lock will shift to the point on its left and you'll lose one voltage bin (1.093 V). You can close Afterburner after checking the lock.


Thanks ! So I do need to set a curve in MSI AB before doing the -smi. I like to use -smi to keep the card in a range I know it is comfortable in...sometimes when ambient temp is much lower, it can spike way too high, too often.


----------



## yzonker

J7SC said:


> Thanks ! So I do need to set a curve in MSI AB before doing the -smi. I like to use -smi to keep the card in a range I know it is comfortable in...sometimes when ambient temp is much lower, it can spike way too high, too often.


I thought this was a good explanation. Bookmarked it. 









[Official] NVIDIA RTX 3090 Owner's Club


The step from 3090 to 3090 Ti seems marginal (say 6% fps on average); I rather wait for the 4090s. Besides, what with the new PCBs and Power Connectors, there's no way I'm modding the wiring and GPU's front-and-back in this complex build I finished only recently.: When I was doing HWBot years...




www.overclock.net


----------



## kryptonfly

J7SC said:


> Thanks ! So I do need to set a curve in MSI AB before doing the -smi. I like to use -smi to keep the card in a range I know it is comfortable in...sometimes when ambient temp is much lower, it can spike way too high, too often.


Yep, because you can't lock a freq higher than the curve, so you need Afterburner... it can lower the freq if the temp increases too much from the start; that's why the point needs to be +30 to +45 MHz, to prevent this "GPU Boost" behavior. More than +45 MHz or less than +30 MHz will move the lock to the point on the left and lose one bin. I completely flattened "GPU Boost" from 25°C to 40°C; my GPU doesn't exceed 41-42°C, so I don't know if the lock would hold at +20 or +30°C above the temperature it was applied at.

It's possible to do a curve as @yzonker says, but I find it easier to just set one point. It doesn't change my effective clock, which depends on internal voltages and limits first.


----------



## J7SC

yzonker said:


> I thought this was a good explanation. Bookmarked it.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club
> 
> 
> The step from 3090 to 3090 Ti seems marginal (say 6% fps on average); I rather wait for the 4090s. Besides, what with the new PCBs and Power Connectors, there's no way I'm modding the wiring and GPU's front-and-back in this complex build I finished only recently.: When I was doing HWBot years...
> 
> 
> 
> 
> www.overclock.net





kryptonfly said:


> Yep, because you can't lock a freq higher than the curve, so you need Afterburner... it can lower the freq if the temp increases too much from the start; that's why the point needs to be +30 to +45 MHz, to prevent this "GPU Boost" behavior. More than +45 MHz or less than +30 MHz will move the lock to the point on the left and lose one bin. I completely flattened "GPU Boost" from 25°C to 40°C; my GPU doesn't exceed 41-42°C, so I don't know if the lock would hold at +20 or +30°C above the temperature it was applied at.
> 
> It's possible to do a curve as @yzonker says, but I find it easier to just set one point. It doesn't change my effective clock, which depends on internal voltages and limits first.


Thanks folks. I'll dive into it on the weekend.


----------



## yzonker

Don't think I've got much left in 3DMark; didn't get past my previous PR run. Better for Superposition, although it still looks low given the temps and @kryptonfly's scores, but better. It gained about 100 pts by running with everything off. Effective was 2250+.


----------



## geriatricpollywog

What are you guys' settings for Superposition 4K? The most I've seen is 20.1K and I'm repeatedly hitting 19.8K. It doesn't seem to matter whether HT and E-cores are enabled. I see a slight decrease with ReBAR forced on.


----------



## yzonker

geriatricpollywog said:


> What are you guys' settings for Superposition 4K? The most I've seen is 20.1K and I'm repeatedly hitting 19.8K. It doesn't seem to matter whether HT and E-cores are enabled. I see a slight decrease with ReBAR forced on.


The run above was using a curve like I linked to a couple posts back, only set at 1100mv with the 1100mv offset +270. Mem was +1250.


----------



## J7SC

yzonker said:


> The run above was using a curve like I linked to a couple posts back, only set at 1100mv with the 1100mv offset +270. Mem was +1250.


Noice ! Any idea of max power draw for that run ? And was that with -smi ?


----------



## yzonker

J7SC said:


> Noice ! Any idea of max power draw for that run ? And was that with -smi ?


Before I shut off the 2nd monitor, I was seeing 550-575W. No, I didn't use smi; I've tried it before and didn't find any performance benefit.


----------



## geriatricpollywog

yzonker said:


> The run above was using a curve like I linked to a couple posts back, only set at 1100mv with the 1100mv offset +270. Mem was +1250.


Was it with the 1000w bios? I was testing with the stock 520w Kingpin bios and was seeing 480-520w.


----------



## yzonker

geriatricpollywog said:


> Was it with the 1000w bios? I was testing with the stock 520w Kingpin bios and was seeing 480-520w.


Yes, that's the only bios I run (KP 1kW). 2x8-pin, so that's my only good option.


----------



## GRABibus

yzonker said:


> The run above was using a curve like I linked to a couple posts back, only set at 1100mv with the 1100mv offset +270. Mem was +1250.


+270MHz offset on core ??


----------



## yzonker

GRABibus said:


> +270MHz offset on core ??


Yes


----------



## elbramso

yzonker said:


> Yes


That's massive 😳

Btw, I'm kinda happy that the 3090 Ti is just a paper-launch card 😇😜


----------



## GreatestChase

elbramso said:


> That's massive 😳
> 
> Btw, I'm kinda happy that the 3090 Ti is just a paper-launch card 😇😜


Agreed, means I get to save up a little longer until the next actually available KPE card. Maybe I won't have to wait a year like with mine now lol.


----------



## J7SC

GreatestChase said:


> Agreed, means I get to save up a little longer until the next actually available KPE card. Maybe I won't have to wait a year like with mine now lol.


Waiting for the NVidia 'Lovelace' (RTX4k) might not be such a bad idea. Per tomsguide.com (and others)_ "the latest leaks claim that the flagship card in a Lovelace-powered GeForce 4000-series will have 144 streaming multiprocessors (the GeForce RTX 3090 has 82) and a maximum of 18,432 CUDA cores"_. Fair enough, leaks are not always to be believed w/o the salt shaker, but given the lead time for vendors, Lovelace is already out there in basic form for PCB partners.

After Lovelace comes 'Hopper' which apparently has a huge monolithic die instead of the mGPU tiles that were expected, but that is much further out and not only needs a salt shaker but a pepper one as well....and it could refer to an enterprise / cloud version only.

For now, I'm more than happy w/ my current setup (3090 Strix on the right, 6900XT on the left). I wish though it would be as easy to mod NVidia bios as it is with the 6900XT...PL-W / -A, volts and various other parameters are just a few clicks away in Windows


----------



## elbramso

I'm pretty much skipping 4090 series cards. It was kinda hard to get a KPE card and even harder to get an appropriate block for it 🤣


----------



## GreatestChase

elbramso said:


> I'm pretty much skipping 4090 series cards. It was kinda hard to get a KPE card and even harder to get an appropriate block for it 🤣


If Optimus could get their blocks out a little quicker I would have one of those, but I opted to go with the HC because I wouldn't have signed up until recently, and I know that people who pre-ordered are still waiting on theirs. I'll at least try to get the 4090 KP, or whatever the equivalent name will be. That'll probably be the only one I go for, and it'll be pretty much just for benchmarking. The 3090 KP certainly meets all my needs for gaming at the moment.


----------



## yzonker

J7SC said:


> Waiting for the NVidia 'Lovelace' (RTX4k) might not be such a bad idea. Per tomsguide.com (and others)_ "the latest leaks claim that the flagship card in a Lovelace-powered GeForce 4000-series will have 144 streaming multiprocessors (the GeForce RTX 3090 has 82) and a maximum of 18,432 CUDA cores"_. Fair enough, leaks are not always to be believed w/o the salt shaker, but given the lead time for vendors, Lovelace is already out there in basic form for PCB partners.
> 
> After Lovelace comes 'Hopper' which apparently has a huge monolithic die instead of the mGPU tiles that were expected, but that is much further out and not only needs a salt shaker but a pepper one as well....and it could refer to an enterprise / cloud version only.
> 
> For now, I'm more than happy w/ my current setup (3090 Strix on the right, 6900XT on the left). I wish though it would be as easy to mod NVidia bios as it is with the 6900XT...PL-W / -A, volts and various other parameters are just a few clicks away in Windows
> View attachment 2546572


It'll be interesting to see what AMD comes up with too. We could all be over in the AMD thread next winter. 









AMD confirms RDNA3 GPUs are indeed coming in 2022 - VideoCardz.com


AMD reiterates its RDNA3 launch date claims It has been a while since AMD made any public references to RDNA3, the next-gen GPU architecture for Radeon 7000 series. AMD Radeon RX 7000 series, a successor to the never-available RX 6000 series should launch by the end of this year. During the...




videocardz.com


----------



## yzonker

elbramso said:


> That's massive 😳
> 
> Btw, I'm kinda happy that the 3090 Ti is just a paper-launch card 😇😜


Keep in mind my best scores seem to come from a curve that results in a somewhat bigger gap between requested and effective. That run was 2250-2260 effective, but requested was around 2300. 

Still, that's as high as I've seen. Superposition is more tolerant of high overclocks on both core and mem. I can't complete PR at that level; the maximum effective I've seen for PR was 2225-2230.


----------



## J7SC

yzonker said:


> It'll be interesting to see what AMD comes up with too. We could all be over in the AMD thread next winter.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AMD confirms RDNA3 GPUs are indeed coming in 2022 - VideoCardz.com
> 
> 
> AMD reiterates its RDNA3 launch date claims It has been a while since AMD made any public references to RDNA3, the next-gen GPU architecture for Radeon 7000 series. AMD Radeon RX 7000 series, a successor to the never-available RX 6000 series should launch by the end of this year. During the...
> 
> 
> 
> 
> videocardz.com


Yup, isn't competition grand? I've read several times (which doesn't make it any more true or false) that the 3090 Ti is just a stop-gap response to some of the records going to the 6900XT, mostly at sub-4K resolutions, such as the 'regular flavour' TimeSpy. AMD has been quite busy with driver improvements. 

Next-gen RDNA3 (-4?) and Lovelace should be an interesting showdown.


----------



## yzonker

J7SC said:


> Yup, isn't competition grand? I've read several times (which doesn't make it any more true or false) that the 3090 Ti is just a stop-gap response to some of the records going to the 6900XT, mostly at sub-4K resolutions, such as the 'regular flavour' TimeSpy. AMD has been quite busy with driver improvements.
> 
> Next-gen RDNA3 (-4?) and Lovelace should be an interesting showdown.


Yea the 6900xt seems to really kick butt in raster. Just lacking in more advanced features like RT.


----------



## J7SC

yzonker said:


> Yea the 6900xt seems to really kick butt in raster. Just lacking in more advanced features like RT.


...my 6900XT scores RT benchmarks such as Port Royal well below the 3090, but it is already faster in RT than well-tuned 2080 Tis and roughly 'equivalent' within the Ampere gen... it can only get better, but then, NVIDIA isn't standing still either. At the end of the day, the top GPU spot is not a one-horse race anymore, which is great for us.


----------



## gfunkernaught

Have you guys tried running Doom Eternal at 8K? Please do. My PC was screaming at me, but the fps averaged like 45-50, with about 20GB of VRAM used. I'm trying to find stuff that will really tax the system, other than Quake 2 RTX and Cyberpunk. I also played Quake 1 Remastered at 8K, pushing 500W and 120fps. Just finished an Arkham Knight run at 8K, 500W average with peaks at 580W. I have to go through some older but "enhanced" titles like Metro Redux.


----------



## kryptonfly

yzonker said:


> Keep in mind my best scores seem to come from a curve that results in a somewhat bigger gap between requested and effective. That run was 2250-2260 effective, but requested was around 2300.
> 
> Still, that's as high as I've seen. Superposition is more tolerant of high overclocks on both core and mem. I can't complete PR at that level; the maximum effective I've seen for PR was 2225-2230.


Same here, I request 2310 for 2240 effective minimum (Superposition 1080p Extreme, 1.081v). My problem now is that I can't go above a 2280 SMI lock (2240 effective minimum in Superposition, 2210 in PR) because at idle in Windows the effective clock does not lock at 2295 MHz, with the voltage sitting above 1.081v (1.087v...); I get a lower effective clock, and under load it's lower than the 2280 SMI lock. I can't tweak internal voltages, but "core voltage %" in AB helps a little to push back the limits. But overall this GPU is golden, 2210 effective minimum in PR at 1.081v


----------



## GreatestChase

Does anyone have any thoughts on a MO-RA3 360 Pro vs Alphacool's UT60 1080? I'm playing with the idea of setting up an external loop for my GPU and letting the CPU run on the rads inside the case. They're the same physical size, but I'm uncertain which would provide more cooling capacity given their different water-tube designs: the Alphacool uses the traditional flat tubes seen in most watercooling rads, while the MO-RA3 uses larger round tubes. Also, I'd like to stick with the 360/1080 size so that I can use 120mm fans rather than 140mm; I have no interest in running larger fans. If anyone has any insight, I'd appreciate it.


----------



## yzonker

@GreatestChase , I don't know which rad would really be better, but I think the better approach is simply adding the external rad to your existing loop. It allows you to disconnect the external rad and still run the machine on the internal rads.


----------



## GreatestChase

yzonker said:


> @GreatestChase , I don't know which rad would really be better, but I think the better approach is simply adding the external rad to your existing loop. It allows you to disconnect the external rad and still run the machine on the internal rads.


My plan would be to use quick connects so that I can isolate the GPU, but also plumb the whole loop as one if I wanted to. So I would have QCs set up on the GPU, CPU, and the external rad, plus a dual pump-and-res setup for the external rad alone, in combination with the D5 and 3x360 that make up my current loop. The upside is that I could isolate the GPU and CPU for benching purposes, and for daily driving I would just have some added rad space.


----------



## ManniX-ITA

GreatestChase said:


> Does anyone have any thoughts on a MO-RA3 360 Pro vs Alphacool's UT60 1080?


I've read posts with owners' feedback.
What I took from them is that the cooling capacity is more or less the same.
But the MoRA is definitely much better looking and, most importantly, full of options.
You can attach a res with a dual D5 pump to it.
These kinds of pluses are just not available on the Alphacool.
If those two points are not relevant for you, then price is king.
Alphacool is cheaper.


----------



## Roacoe717

I would only use 2 sets of disconnects and that would be for the external radiator only. If you add any more you may hurt the flow.


----------



## GreatestChase

ManniX-ITA said:


> I've read posts with owners' feedback.
> What I took from them is that the cooling capacity is more or less the same.
> But the MoRA is definitely much better looking and, most importantly, full of options.
> You can attach a res with a dual D5 pump to it.
> These kinds of pluses are just not available on the Alphacool.
> If those two points are not relevant for you, then price is king.
> Alphacool is cheaper.


I don't think the price difference is that large, but you are correct: the MoRA definitely has more modularity. One downside to going with the 360, though, is that I can't use the attachment for the res and dual pump, as it is designed for the 420 version. It does have a side attachment for a res, and I could just mount the pumps over one of the fans. In the end, the MoRA would be around $20 more.



Roacoe717 said:


> I would only use 2 sets of disconnects and that would be for the external radiator only. If you add any more you may hurt the flow.


There would essentially be one QC per pump, so I wouldn't expect it to impact the flow too much. I already have a flow meter in the loop, so I'll be able to watch for any major restrictions.


----------



## J7SC

Roacoe717 said:


> I would only use 2 sets of disconnects and that would be for the external radiator only. If you add any more you may hurt the flow.


That depends in part on the QDs. I use the larger-diameter Koolance QD4s and restriction is minimal, at least in systems with dual and triple D5s respectively. The four QD4s below are actually for a dual system but can be switched to different CPU and GPU configs


----------



## GreatestChase

J7SC said:


> That depends in part on the QDs. I use the larger-diameter Koolance QD4s and restriction is minimal, at least in systems with dual and triple D5s respectively. The four QD4s below are actually for a dual system but can be switched to different CPU and GPU configs
> View attachment 2546762
> 
> 
> View attachment 2546764


I'm planning on using Alphacool's high-flow QCs, which look pretty similar to the QD4s: Alphacool Eiszapfen HF quick release connector kit G3/8 inner thread with reducing nipple G1/4 - deep black
This is all still speculative at this point lol. In all honesty, my cooling is sufficient for my daily driving as it is, but I just can't help wanting maximum performance.


----------



## 8472

kryptonfly said:


> Normally, from what I've seen over many years, it's safe on copper. It can change color, oxidize and turn black, but nothing more, and you can clean it with hydrochloric acid diluted in water (1/3 acid, 2/3 water, be cautious!). But I prefer to use a car polish, the kind used to restore cars; it's more effective than any alcohol and safer. It takes time and lots of rubbing, but the result is great, not as new but almost.
> 
> Ok, so you need to add more rads; your loop is overloaded and can't dissipate more heat. One 360x30mm rad is not enough, you need at least 3x360mm if you OC your GPU.
> 
> @KedarWolf : I sent back my 12900K and I'm waiting for a 12900KS, so I'm back on my X99 for now.


I'm hoping two 360s will be enough. I'm waiting on Amazon to deliver the package they were supposed to deliver yesterday to add a second 360. Fingers crossed.


----------



## elbramso

8472 said:


> I'm hoping two 360s will be enough. I'm waiting on Amazon to deliver the package they were supposed to deliver yesterday to add a second 360. Fingers crossed.


What CPU and GPU did you have?
I started watercooling a year ago with an EKWB block on a 3090 FTW3. My CPU was an 8700K. I had two 360mm rads, one EKWB PE360 and the other an EKWB XE360, with a D5 pump...
The system was OK, but while gaming under heavy load my fans and pump needed to be at 80% to handle 550W.
I simply didn't like it 😅

In the end I sold it all, bought a new case and spent triple the money to have a dead-silent system 😂🤣

Most people here will agree when I say "you're never done with your loop" 😜


----------



## bearsdidit

elbramso said:


> What CPU and GPU did you have?
> I started watercooling a year ago with an EKWB block on a 3090 FTW3. My CPU was an 8700K. I had two 360mm rads, one EKWB PE360 and the other an EKWB XE360, with a D5 pump...
> The system was OK, but while gaming under heavy load my fans and pump needed to be at 80% to handle 550W.
> I simply didn't like it 😅
> 
> In the end I sold it all, bought a new case and spent triple the money to have a dead-silent system 😂🤣
> 
> Most people here will agree when I say "you're never done with your loop" 😜


Agreed. I just finished a build in December and am now spending a lot more money on a new case and a MoRa 420. The performance gains will probably be minimal, but the process of "building" a different rig will be fun.


----------



## kryptonfly

GreatestChase said:


> My plan would be to use quick connects so that I can isolate the gpu, but also plumb the whole loop as one if I wanted to. So I would have QCs setup on the gpu, cpu, and the external rad. Would have a dual pump and res setup for the external rad alone in combination with the D5 and 3x360 that comprises my current loop. The up side to this would be that I would be able to isolate the gpu and cpu for benching purposes, and for daily driving I would just have some added rad space.


You're right, external rads are a "must have" with our GPUs, and even more so with future GPUs at 600W+ stock. I'm using 5x Bykski full-copper 360x30mm rads, but it's up to you to choose whatever rad meets your expectations. I'm using quick connects and honestly they saved my life! I have just one Phobya DC-400 and my fans run at 5V, and I never saw more than 43°C in Metro Exodus at 500W+ this last summer (26°C ambient) after long sessions. I prefer thin rads with flat tubes to spread the heat as much as possible. I don't like large rads; they are not efficient, act like tanks, and usually require stronger, louder fans. I cut two pieces of wood and painted them black to support the rads. As for flow, when there are bubbles (with heat I open the tank to release pressure, which brings bubbles out of the rads), I can see them being pulled down about 3/4 of the way into the 240mm reservoir.


----------



## GreatestChase

kryptonfly said:


> You're right, external rads are a "must have" with our GPUs, and even more so with future GPUs at 600W+ stock. I'm using 5x Bykski full-copper 360x30mm rads, but it's up to you to choose whatever rad meets your expectations. I'm using quick connects and honestly they saved my life! I have just one Phobya DC-400 and my fans run at 5V, and I never saw more than 43°C in Metro Exodus at 500W+ this last summer (26°C ambient) after long sessions. I prefer thin rads with flat tubes to spread the heat as much as possible. I don't like large rads; they are not efficient, act like tanks, and usually require stronger, louder fans. I cut two pieces of wood and painted them black to support the rads. As for flow, when there are bubbles (with heat I open the tank to release pressure, which brings bubbles out of the rads), I can see them being pulled down about 3/4 of the way into the 240mm reservoir.


I currently have 3x EKWB PE 360mm rads (which I believe are 45mm thick) with 9x T30 fans. I don't really mind the noise, as I have tinnitus and the fan noise beats the ringing I hear all the time. So I don't mind having a thicker rad like the MoRA.

This is before I got my KP card with my FTW3 card.









This is now that I have the KP card. I think I'm done with the acrylic tubing lol. It looks nice, but as many iterations as this thing has gone through, I'm getting tired of redoing bends lol.


----------



## geriatricpollywog

GreatestChase said:


> My plan would be to use quick connects so that I can isolate the GPU, but also plumb the whole loop as one if I wanted to. So I would have QCs set up on the GPU, CPU, and the external rad, plus a dual pump-and-res setup for the external rad alone, in combination with the D5 and 3x360 that make up my current loop. The upside is that I could isolate the GPU and CPU for benching purposes, and for daily driving I would just have some added rad space.



My MO-RA gets all the compliments when the ladies are over. A radiator array would scare them away.











In other news, my Firestrike score is now the top 3090 score in the 3DMark HOF, since OGS beat their own 3090 score with a 6900XT.










3DMark Fire Strike Hall Of Fame


----------



## StAndrew

Is there a bios/software that allows you to adjust volts on the Asus STRIX?


----------



## GreatestChase

StAndrew said:


> Is there a bios/software that allows you to adjust volts on the Asus STRIX?


To my knowledge, only the Kingpin and Hall of Fame cards have the ability to modify voltages. You would have to shunt mod a Strix card.


----------



## andrew149

What bios is everyone using with a water-cooled NVIDIA Founders Edition?


----------



## yzonker

andrew149 said:


> What bios is everyone using with a water-cooled NVIDIA Founders Edition?


Unfortunately the FE cards aren't compatible with other manufacturers' BIOSes. AFAIK, it's shunt mod or live with the stock bios.


----------



## andrew149

yzonker said:


> Unfortunately the FE cards aren't compatible with other manufacturers' BIOSes. AFAIK, it's shunt mod or live with the stock bios.


That sucks. I'm glad I went with a Gigabyte 3080 in my other rig, then.


----------



## elbramso

geriatricpollywog said:


> My MO-RA gets all the compliments when the ladies are over. A radiator array would scare them away.
> 
> View attachment 2546984
> 
> 
> 
> In other news, my Firestrike score is now the top 3090 on the 3DMark HOF now that OGS beat their own 3090 score with a 6900XT.
> 
> View attachment 2546987
> 
> 
> 3DMark Fire Strike Hall Of Fame


Subzero water on this run? A 13C GPU is just insane 😳


----------



## geriatricpollywog

elbramso said:


> Subzero water on this run? A 13C GPU is just insane 😳


My PC was outside in that run. The water temp was below 0.


----------



## Nizzen

StAndrew said:


> Is there a bios/software that allows you to adjust volts on the Asus STRIX?


You need to connect "Elmor tool" on the "hotwire" spots on the Strix pcb, to adjust voltages on strix 

No bios can help you adjust voltages.


----------



## StAndrew

Nizzen said:


> You need to connect "Elmor tool" on the "hotwire" spots on the Strix pcb, to adjust voltages on strix
> 
> No bios can help you adjust voltages.


Thanks! So with the 1000W bios and the Elmor EVC I can volt mod and no need to shunt mod, correct?


----------



## Nizzen

StAndrew said:


> Thanks! So with the 1000W bios and the Elmor EVC I can volt mod and no need to shunt mod, correct?


yes


----------



## Luggage

kryptonfly said:


> You're right, external rads are a "must have" with our GPUs, and even more so with future GPUs at 600W+ stock. I'm using 5x Bykski full-copper 360x30mm rads, but it's up to you to choose whatever rad meets your expectations. I'm using quick connects and honestly they saved my life! I have just one Phobya DC-400 and my fans run at 5V, and I never saw more than 43°C in Metro Exodus at 500W+ this last summer (26°C ambient) after long sessions. I prefer thin rads with flat tubes to spread the heat as much as possible. I don't like large rads; they are not efficient, act like tanks, and usually require stronger, louder fans. I cut two pieces of wood and painted them black to support the rads. As for flow, when there are bubbles (with heat I open the tank to release pressure, which brings bubbles out of the rads), I can see them being pulled down about 3/4 of the way into the 240mm reservoir.


If you feel like experimenting - run your rads in parallel instead of in series.
In theory you should get lower resistance and higher flow, a lower relative flow speed in the rads vs the blocks, and the same delta across all the rads.
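The parallel-vs-series intuition above can be sketched with a toy hydraulic model. Everything here is illustrative: the quadratic pressure-drop assumption and all coefficients are made up for the example, not measured values for any real pump, block, or radiator.

```python
import math

def loop_flow(p0, a, k_block, k_rad, n_rads, parallel):
    """Solve the loop operating point where the pump curve
    dP = p0 - a*Q^2 meets the loop losses dP = k_total * Q^2
    (assuming purely quadratic pressure drops)."""
    if parallel:
        # n identical rads in parallel: each carries Q/n, so the
        # effective loss coefficient falls by n^2
        k_rads = k_rad / n_rads**2
    else:
        # series: loss coefficients simply add
        k_rads = k_rad * n_rads
    k_total = k_block + k_rads
    return math.sqrt(p0 / (a + k_total))

# made-up coefficients; 5 rads as in the quoted setup
q_series = loop_flow(p0=4.0, a=0.5, k_block=1.0, k_rad=0.6, n_rads=5, parallel=False)
q_parallel = loop_flow(p0=4.0, a=0.5, k_block=1.0, k_rad=0.6, n_rads=5, parallel=True)
print(f"series: {q_series:.2f}  parallel: {q_parallel:.2f}")  # parallel comes out higher
```

With the rads in parallel, total loop flow rises while each rad only sees a fraction of it, which is exactly the "higher flow, lower relative flow speed in the rads" trade described above.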


----------



## StAndrew

Nizzen said:


> yes


Awesome, thanks!

Stock volts should be 1.1V on the GPU and 1.35V on the memory, correct? I don't know the uncore voltage (is it even necessary to overvolt that?). Watercooling at ambient temps, I should be safe at around 1.2V core and 1.4-1.43V memory, correct?

Thanks!


----------



## GreatestChase

StAndrew said:


> Awesome, thanks!
> 
> Stock volts should be 1.1V on the GPU and 1.35V on the memory correct? I don't know the uncore volts (is that even necessary to overvolt). Watercooling under ambient temps, I should be safe at around 1.2V core and 1.4-1.43V memory, correct?
> 
> Thanks!


I believe those are the correct stock voltages. And are you talking about daily driving at those voltages? I would think as long as your temps are fine you should be good. Now if you're talking about OCing for benching, I've gone as far as 1.3125 V on NVVDD, which is not the same readout that you would see in Afterburner, but it is a core voltage. I don't typically mess with the mem voltage on my card, as I can bench with +1700 on the mem without adjustment. The mem also doesn't scale as well with voltage as the core does, to my understanding. For uncore, I have pushed to 1.225 V while benching. From my understanding, it helps keep your internal and external clock speeds the same.


----------



## StAndrew

GreatestChase said:


> I believe those are correct stock voltages. And are you talking about daily driving at those voltage? I would think as long as your temps are fine you should be good. Now if you're talking about OCing for benching, I've gone as far as 1.32 V on NVVDD which is not the same readout that you would see in afterburner, but it is a core voltage. I don't typically mess with the mem voltage on my card as I can bench with +1700 on the mem without adjustment. The mem also doesn't scale as well with voltage as the core does to my understanding. For uncore, I have pushed to 1.225 V while benching. From my understanding, it helps keep your internal and external clock speeds the same.


It'll be for gaming clocks. Not too interested in benchmark score chasing. Yet.


----------



## Agenesis

Anyone here dual mining ETH+TON with the Zotac 3090? There's something wonky going on with Zotac cards. It overclocks like a champ and mines Ethereum like nothing at 128MH/s.

But when I start dual mining, this thing glitches out and sometimes mines at 124MH/s ETH + 200~400MH/s TON, or it drops down to 107MH/s ETH + 1.7GH/s TON. It's a lottery which speed it wants to mine at. I've flashed KF2/Gigabyte/EVGA BIOSes onto this thing with power limits all the way up to 460W and have seen it consume as much as 375W, but the hashes are still inconsistent even with the EVGA. So that rules out the power limit. The temps are fine.

Really, it's literally a 30-cent USD a day revenue reduction at this point, but I'm just curious as to what could be causing it.


----------



## andrew149

Agenesis said:


> Anyone here dual mining eth+ton with the Zotac 3090? There's something wonky going on with Zotac cards. It overclocks like a champ and mines ethereum like nothing at 128MH.
> 
> But when I start dual mining this thing glitches out and sometimes mines at 124mh Eth + 200~400mh TON or it drops down to 107mh Eth +1.7GH TON. It's a lottery which speed it wants to mine at. I've flashed KF2/Gigabyte/EVGA bios onto this thing with power limits all the way to 460w and have seen it consume as much as 375w, but the hashes are still inconsistent with the EVGA. So that rules out power limit. The temps are fine.
> 
> Really it's literally 30 cents USD a day revenue reduction at this point but I'm just curious as what could be causing it.
> 
> View attachment 2547171


It’s not your card. I can tell you that when I dual mine with my 3080, the program stops on lolMiner. I went back to NBMiner because it’s more reliable. Also, it’s impossible to keep the fans from going crazy and the RAM cool while dual mining.


----------



## GRABibus

GreatestChase said:


> I don't typically mess with the mem voltage on my card as I can bench with +1700 on the mem without adjustment.


Insane !
With my Kingpin Hybrid, as of +1200MHz, I am obliged to increase FBVDD to 1.375V at least, even for gaming.
[email protected]=1.375V crashes in roughly all my games.

So, for gaming => +1200MHz max and FBVDD=1.375V
.


----------



## geriatricpollywog

GRABibus said:


> Insane!
> With my Kingpin Hybrid, from +1200MHz onward I am obliged to increase FBVDD to at least 1.375V, even for gaming. Anything beyond +1200MHz crashes in roughly all my games, even at FBVDD=1.375V.
> 
> So, for gaming => +1200MHz max and FBVDD=1.375V


Mine is stable at +1350 but I can run +1400 with both FBVDD dip switches on.


----------



## geriatricpollywog

Does anybody have a link to the 3090 Kingpin technical guide? I believe it has a link to the newest EVBOT firmware which I'm looking for.


----------



## andrew149

GRABibus said:


> Insane!
> With my Kingpin Hybrid, from +1200MHz onward I am obliged to increase FBVDD to at least 1.375V, even for gaming. Anything beyond +1200MHz crashes in roughly all my games, even at FBVDD=1.375V.
> 
> So, for gaming => +1200MHz max and FBVDD=1.375V


I bet if you change your pads out and change the fans you can bump that mem clock up some more


----------



## GreatestChase

Well gentlemen/gentleladies, it was nice knowing you. However this will likely be my last message. My wife will probably kill me before long as I just placed an order for a MoRA 360 and the necessary hardware to add it to my system. Nice meeting you all.


----------



## yzonker

Well just point out how much cheaper the Mo-ra3 is than the video card. 

Uh, maybe not the best advice....

I was just looking at a couple of those actually in stock at Performance PCs. Managed to resist. I'm saving for a chiller. 🥶


----------



## GreatestChase

yzonker said:


> Well just point out how much cheaper the Mo-ra3 is than the video card.
> 
> Uh, maybe not the best advice....
> 
> I was just looking at a couple of those actually in stock at Performance PCs. Managed to resist. I'm saving for a chiller. 🥶


I want a chiller too. I just can't justify it yet. And yeah, probably best I don't tell her how much I spent on the GPU lol.


----------



## ManniX-ITA

GreatestChase said:


> I want a chiller too. I just can't justify it yet. And yeah, probably best I don't tell her how much I spent on the GPU lol.


I can imagine the first time you fire it up and the compressor starts its loud thump-thump-thump...: What did you buy again???


----------



## J7SC

...had some fun with Superposition 4K today and broke 20k for the first time; ambient temp was 15°C...no MSI AB curve or nvidia-smi (that comes later) for the run below, just sliders  ...Strix 3090 w/ KPE 520 vbios


----------



## GRABibus

andrew149 said:


> I bet if you change your pads out and change the fans you can bump that mem clock up some more


Not sure of this.
On my former Strix, I couldn't go beyond +1000MHz.
I changed to Thermalright Extreme and reduced Memory temps by 10°C at least. I didn't win one MHz of memory OC.

It can be simply Silicon lottery and sometimes cold doesn't help.


----------



## J7SC

GRABibus said:


> Not sure of this.
> On my former Strix, I couldn't go beyond +1000MHz.
> I changed to Thermalright Extreme and reduced Memory temps by 10°C at least. I didn't win one MHz of memory OC.
> 
> It can be simply Silicon lottery and sometimes cold doesn't help.


...yeah, cold only helps if the GDDR6X VRAM runs too hot in the first place...still, pulling extra heat off the VRAM can help overall PCB temps and is thus beneficial. Thermal putty front and back + w-block + extra heatsink on the back in my setup means that VRAM temps won't spoil the party.


----------



## yzonker

I disagree. I have seen a steady progression of improved mem clocks as I've lowered my VRAM temps. Card started out bone stock at no better than +500 mem. My original mount in the Corsair block got that to 700-800. Remount with good pads resulted in +850. Heatkiller block with putty +900. Then the sub ambient runs got it all the way to +1125. Can't even think about completing a PR run at +1125 at normal ambient.


----------



## QSS-5

Can a reference 3090 with 2x 8-pin (non-Founders) run the 1000W 3-pin BIOS? Will there be any problems? I assume the max will be 300W per connector, so the card will top out at 600W + PCI Express slot (75W) = 675W?


----------



## yzonker

QSS-5 said:


> Can a reference 3090 2xPin (non-founders) run the 1000W 3pin bios, will there be any problems I assume the max Watt will be 300w per pin so the card will top out at 600W + PCI Express (75W) = 675W?


Yea, about that. I think the slot power goes significantly above 75W at 600W+, though. The only issues are inaccurate power readings and, of course, the lack of thermal limits.


----------



## QSS-5

yzonker said:


> Yea about that. I think the slot power goes significantly above 75w though at 600w+. Only issue is innacurate power readings and of course the lack of thermal limits.


Thanks for the quick reply. What do you mean by thermal limits? Are you saying it won't downclock if it hits above, let's say, 90C? In other words, it doesn't care that it's toast?


----------



## yzonker

QSS-5 said:


> Thanks for the quick reply. What do you mean by thermal limits? are you saying it won't downclock if it his above let's say 90C? In other words, it does not care that it toast?


Yes, that's exactly what it means.


----------



## kryptonfly

QSS-5 said:


> Can a reference 3090 2xPin (non-founders) run the 1000W 3pin bios, will there be any problems I assume the max Watt will be 300w per pin so the card will top out at 600W + PCI Express (75W) = 675W?


I would say "in theory" 240W per pin and 120W for the PCIe slot. I have fuses of 20A for the 8-pins and 10A for the slot => 240x2+120=600W, but the max that I've seen is 640W in HWiNFO64 with my custom settings; beyond this, the internal voltage limit stops me first. In reality I think it's around 280W per pin and 80W for the slot. I'm curious to check the 8-pins with my current clamp to see...
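The connector arithmetic in this exchange is easy to parameterize. A minimal sketch; the watts-per-connector figures are the assumptions being debated above (the PCIe spec nominally rates an 8-pin at 150W and the slot at 75W, and real cards often draw beyond the connector spec), so treat the inputs as estimates, not guarantees.

```python
def card_power_budget(n_8pins, watts_per_8pin=150.0, slot_watts=75.0):
    """Total board power budget: auxiliary 8-pin connectors plus
    the PCIe slot. Defaults are the nominal PCIe spec ratings."""
    return n_8pins * watts_per_8pin + slot_watts

print(card_power_budget(2))                                          # spec: 375.0 W
print(card_power_budget(2, watts_per_8pin=300.0))                    # "300W per pin": 675.0 W
print(card_power_budget(2, watts_per_8pin=240.0, slot_watts=120.0))  # fuse-based estimate: 600.0 W
```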


----------



## yzonker

kryptonfly said:


> I would say "in theory" 240W per pin and 120W pcie slot. I have fuses of 20A for 8-pins and 10A for slot => 240x2+120=600W but the max that I've seen is 640W in hwinfo64 with my custom settings, beyond this is internal voltage limit which prevents me first. In real I think it's around 280W per pin and 80W for slot. I'm curious to check 8-pins with my current clamp to see...


I bet you see a bit more than 20 amps per 8-pin. Those fuses are somewhat forgiving of at least short excursions over their rating. And it's temp dependent: the warmer the fuse, the less time/amps it takes to pop it.


----------



## GRABibus

yzonker said:


> I disagree. I have seen a steady progression of improved mem clocks as I've lowered my VRAM temps. Card started out bone stock at no better than +500 mem. My original mount in the Corsair block got that to 700-800. Remount with good pads resulted in +850. Heatkiller block with putty +900. Then the sub ambient runs got it all the way to +1125. Can't even think about completing a PR run at +1125 at normal ambient.


It depends on a lot of parameters...

You had silicon margin that you gained by cooling better (because you were more temperature-bottlenecked, in fact).

From my side, I was fully silicon-bottlenecked.


----------



## J7SC

GRABibus said:


> it depends of a lot of parameters...
> 
> You had Silicon margin that you gained by cooling better (Because you were more temperature bottlenecked in fact).
> 
> From my side, I was full Silicon bottlenecked


Yeah, as stated before, it won't magically cure bad-performing VRAM, but extra VRAM cooling can take care of a heat problem that is otherwise keeping your VRAM from reaching its silicon limits. Also, extra cooling for the PCB via VRAM cooling helps in other ways.


----------



## EarlZ

Recently got a Seasonic PX 1300W and I was a bit surprised to see that my HWiNFO64/GPU-Z readings go as low as 11.86V. I am using the stock cables that came with the PSU, one cable per power connector. Seasonic included one PCIe power cable that has two 8-pin connectors, so I used that to take a voltage reading and I get 11.94V, and this value is reflected on one of the sensors on my GPU.

I am not sure if this is normal, since my Cooler Master V1000 (8-year-old unit) with its stock cables can maintain 12.05V, and I am wondering if I got a defective Seasonic PSU or if this is fully within normal expectations?


----------



## elbramso

I'm afraid Winter is officially over for my part of Germany... Benching-Season is over


----------



## geriatricpollywog

elbramso said:


> I'm afraid Winter is officially over for my part of Germany... Benching-Season is over


Same. The snow is starting to melt here in Kansas.


----------



## GRABibus

elbramso said:


> I'm afraid Winter is officially over for my part of Germany... Benching-Season is over


Poor guy 😂


----------



## elbramso

GRABibus said:


> Poor guy 😂


Most fun I've had with a GPU in years 🤣
But my system pulling over 850W during benchmarks might be the reason why it got so warm already. My KPE is the reason for global warming 😜


----------



## GreatestChase

The coldest it has been here so far has been about -4 C.


----------



## yzonker

geriatricpollywog said:


> Same. The snow is starting to melt here in Kansas.


Hey, I'm in Wichita. Small world.


----------



## geriatricpollywog

yzonker said:


> Hey, I'm in Wichita. Small world.


Overland Park here. Next to the Microcenter  

Let me know if you need anything there


----------



## GreatestChase

geriatricpollywog said:


> Overland Park here. Next to the Microcenter


I wish there was a microcenter within less than 4 hours from me. I think the nearest to me is in Atlanta. And I'm in western Alabama.


----------



## J7SC

geriatricpollywog said:


> Overland Park here. Next to the Microcenter
> (...)


...condolences


----------



## 8472

Okay, so Amazon finally delivered the parts I needed. I added a second 360 to my loop. The way I did it was more proof of concept than practical but it got the job done. 

Adding the second 360 solved the problem. My GPU peaked at 55C while hovering at 53-54 most of the time. The coolant temp only got up to 38C. The max VRAM temp was 70C although it stayed mainly in the 60s. 

I expect performance to be even better once I get the loop properly set up. My flow rate was probably terrible since I used the extra inlet and outlet ports on my res. At 40% pump speed there was no flow to the second rad; I had to kick it up to 100% for it to be able to fight gravity.

Thanks to everyone for helping me figure this out! 



elbramso said:


> What CPU and gpu did you have?
> I started watercooling one year ago with an ekwb block on an 3090 ftw3. My CPU was a 8700k. I had 2 360mm rads, one ekwb pe360 and the other was an ekwb xe360. D5 pump...
> The system was OK but while gaming under heavy load my fans and pump needed to be at 80% to handle 550w.
> I simple didn't like it 😅
> 
> At the end I sold it all, bought a new case and spent triple the money to have a dead silent system 😂🤣
> 
> Most people here will agree when I say "you're never done with your loop" 😜


My loop is GPU only. I have a 7940X. I would love to upgrade to a 12900K, but since the most stressful thing I do with my CPU nowadays is game at 4K, I can't justify it; it really wouldn't significantly improve performance.



kryptonfly said:


> You're right, external rads is a "must have" with our GPUs and even more with future GPUs at +600W stock. I'm using 5x Bykski full copper 360x30mm but it's up to you to choose whatever rad regarding your expectation. I'm using quick connects and honestly it saved my life ! I have just 1 Phobya DC-400 and my fans are at 5v, I've never seen +43°C in Metro Exodus +500W this last summer (ambient 26°C) after long session. I prefer to use thin rads and flat tubes to spread as most as possible the heat. I don't like large rad, they are not efficient, acts like tanks and that requires strong fans usually louder. I cut 2 pieces of wood and painted them in black to support rads. About the flow, when there are bubbles (with heat I open the tank to release pressure and it brings bubbles from rads), I can see bubbles going down at 3/4 of the 240mm tank.


Which quick disconnects are those? I've been trying to decide if I should go with quick disconnects or a set of ball valves connected with a small extender. I keep reading people complaining about their QDCs getting stuck open or stuck closed or having the black paint from their black Koolance QD3s chip off.


----------



## CptSpig

8472 said:


> Which quick disconnects are those? I've been trying to decide if I should go with quick disconnects or a set of ball valves connected with a small extender. I keep reading people complaining about their QDCs getting stuck open or stuck closed or having the black paint from their black Koolance QD3s chip off.


I have three black Koolance QDC3s I have used for several years with no issues.


----------



## J7SC

8472 said:


> Okay, so Amazon finally delivered the parts I needed. I added a second 360 to my loop. The way I did it was more proof of concept than practical but it got the job done.
> 
> Adding the second 360 solved the problem. My GPU peaked at 55C while hovering at 53-54 most of the time. The coolant temp only got up to 38C. The max VRAM temp was 70C although it stayed mainly in the 60s.
> 
> I expect performance to be even better once I get the loop properly set up. My flow rate was probably terrible since I used the extra inlet and outlet ports on my res. At 40% for the pump there was no flow to the second rad, I had to kick it up to 100% for it to be able to fight gravity.
> 
> Thanks to everyone for helping me figure this out!
> 
> 
> 
> My loop is GPU only. I have a 7940X. I would love to upgrade to a 12900k but since the most stressful thing I do nowadays with my CPU is game at 4k, I can't justify the upgrade since it really wouldn't significantly improve performance.
> 
> 
> 
> Which quick disconnects are those? I've been trying to decide if I should go with quick disconnects or a set of ball valves connected with a small extender. I keep reading people complaining about their QDCs getting stuck open or stuck closed or having the black paint from their black Koolance QD3s chip off.


No problems with my black Koolance QD4s...I did have a nickel Koolance QD4 get stuck open, but that happened because the O-ring wasn't seated properly. 

QD3 vs QD4 depends on the tubing size in your loop. I also have Swiftech QDs which work OK, but they always drip a bit when opening, unlike the Koolance QD4s


----------



## StAndrew

Two more quick questions on the Elmors EVC2 tool.

The Strix 3090 has a number of IIC headers. Does it matter which one I connect to?

When I apply a voltage offset, do I need to keep the EVC2 tool connected to maintain the voltage or is it constant until changed?


----------



## elbramso

GreatestChase said:


> I wish there was a microcenter within less than 4 hours from me. I think the nearest to me is in Atlanta. And I'm in western Alabama.


Micro...What? No such thing here in Germany 😂🤣


----------



## 8472

CptSpig said:


> I have three black Koolance QDC3's I have used for several year's with no issues.


Nice! I'm assuming my D5 will be able to handle the restrictiveness of 3 QDCs (3 male and 3 female). The only other things in my loop are two rads, the gpu block, and active backplate. 



J7SC said:


> No problems with my black Koolance QD4s...I did have a Koolance Nickel QD4 get stuck open, but that happens when the O-ring wasn't seated properly.
> 
> QD3 vs QD4 depends on your tubing size in your loop. I also have Swiftech QDs which work ok, but always drip a bit when opening, unlike the Koolance QD4s
> 
> View attachment 2547575
> 
> 
> View attachment 2547576
> 
> 
> View attachment 2547577
> View attachment 2547578


I'd love to get the QD4s, but their availability isn't good. Especially in the color black that'll work with 10/16mm tubing.


----------



## CptSpig

8472 said:


> Nice! I'm assuming my D5 will be able to handle the restrictiveness of 3 QDCs (3 male and 3 female). The only other things in my loop are two rads, the gpu block, and active backplate.


I run two DDC 4.2 PWM pumps, one before the CPU and one after the GPU. I used to run one DDC and never noticed any restriction. I added a second just in case one stops working.


----------



## geriatricpollywog

elbramso said:


> Micro...What? No such thing here in Germany 😂🤣


You mean you don’t have computer stores the size of OBI in rural areas?


----------



## GreatestChase

geriatricpollywog said:


> You mean you don’t have computer stores the size of OBI in rural areas?


Isn't Aquacomputer based in Germany though? And Aquatuning?


----------



## kryptonfly

8472 said:


> Which quick disconnects are those? I've been trying to decide if I should go with quick disconnects or a set of ball valves connected with a small extender. I keep reading people complaining about their QDCs getting stuck open or stuck closed or having the black paint from their black Koolance QD3s chip off.


These are Bykski V1 (they made other versions), no problem, they work great 👍 I disconnected them many many times when I tested shunt mod and thermal pads, not even a drop. They are heavy & strong.


----------



## geriatricpollywog

GreatestChase said:


> Isn't Aquacomputer based in Germany though? And Aquatuning?


Also Alphacool and Watercool. I’m referring to computer part stores, not manufacturers.


----------



## GreatestChase

geriatricpollywog said:


> Also Alphacool and Watercool. I’m referring to computer part stores, not manufacturers.


Lol, whoops, replied to the wrong person, but my point was that it's more than what's local to me. Not a lot of love for techies here in Alabama.


----------



## EarlZ

EarlZ said:


> Recently got a Seasonic PX 1300W and I was a bit surprised to see that my HWiNFO64/GPU-Z readings go as low as 11.86V. I am using the stock cables that came with the PSU, one cable per power connector. Seasonic included one PCIe power cable that has two 8-pin connectors, so I used that to take a voltage reading and I get 11.94V, and this value is reflected on one of the sensors on my GPU.
> 
> I am not sure if this is normal, since my Cooler Master V1000 (8-year-old unit) with its stock cables can maintain 12.05V, and I am wondering if I got a defective Seasonic PSU or if this is fully within normal expectations?


I quoted my own question for more visibility and I hope someone could chime in and help me.


----------



## andrew149

EarlZ said:


> I quoted my own question for more visibility and I hope someone could chime in and help me.


If your power supply was defective your system wouldn’t be stable.


----------



## J7SC

EarlZ said:


> I quoted my own question for more visibility and I hope someone could chime in and help me.


I don't think that an 11.86V minimum is anything to worry about...anyhow, here's my new Seasonic PX 1300 in HWiNFO via the Asus sensors


----------



## elbramso

geriatricpollywog said:


> You mean you don’t have computer stores the size of OBI in rural areas?


It's even worse. We don't even have dedicated computer stores bigger than my living room nearby 🙄 I have to order everything online


----------



## EarlZ

J7SC said:


> I don't think that 11.86 v min is anything to worry about...anyhow, here's my new Seasonic's PX 1300 HWInfo via Asus sensors
> View attachment 2547631


That is the mobo sensor, right, not the GPU? Under the mobo sensor I am getting something like 12.10V as a minimum.


----------



## ManniX-ITA

EarlZ said:


> I quoted my own question for more visibility and I hope someone could chime in and help me.


Is this under load or at idle?
If it's idle, that doesn't look right.

Under load, Seasonic claims its ultra-stable voltage regulation will keep the 12V rail within +/-2%,
which means it could go down as low as 11.76V.

In general, if it's not tripping under load with an OC, there's likely no issue.

The old PSUs were more generous, but also less efficient in AC/DC conversion and overhead, dropping more under load.
My EVGA 1300 G2 keeps 12.4V at idle and 12.1V under load.
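The tolerance math above in one place: a quick sketch of the 12V window for a claimed regulation tolerance, alongside the looser ±5% that the ATX spec permits on the 12V rail.

```python
def rail_window(nominal=12.0, tolerance=0.02):
    """Lower/upper bound for a rail regulated to +/- tolerance."""
    return nominal * (1 - tolerance), nominal * (1 + tolerance)

lo, hi = rail_window()                  # Seasonic's claimed +/-2%
print(f"{lo:.2f} .. {hi:.2f} V")        # 11.76 .. 12.24 V
lo5, hi5 = rail_window(tolerance=0.05)  # ATX spec allows +/-5% on 12V
print(f"{lo5:.2f} .. {hi5:.2f} V")      # 11.40 .. 12.60 V
```

An 11.86V reading under load sits comfortably inside both windows.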


----------



## EarlZ

ManniX-ITA said:


> Is this under load or in idle?
> If it's idle doesn't look right.
> 
> Under load Seasonic claims its ultra stable regulation voltage will keep the 12V in +/- 2%.
> Which means it could go down as low as 11.75V.
> 
> In general if it's not tripping under load with OC is likely there's no issue.
> 
> The old PSUs were more generous but also less efficient in AC/DC conversion and overhead, dropping more under load.
> My EVGA 1300 G2 keeps 12.4V in idle and 12.1V under load.


11.86V is under load in Time Spy Extreme, or in any game that hits 400W+; at idle it sits at around 12.10V or 12.20V.


----------



## ManniX-ITA

EarlZ said:


> 11.86V is under load with time spy extreme or even any game that hits 400watts+ while Idle it sits at at around 12.10V or 12.20V


It's perfectly fine then, no worries.


----------



## yzonker

Yea, my Seasonic would drop to 11.8V on the 8-pins under high load (500W). The RM1000x I have now stays right at 12V, but there is no difference in stability.


----------



## GreatestChase

Well I know what my plans for the weekend are now.


----------



## yzonker

GreatestChase said:


> Well I know what my plans for the weekend are now.


Must be quite a rad to need a floor jack. 

I see a brand new track tire there too I think. Can't quite decide which one it is? I just mounted a set of 660s to test against my A052s on my Z06.


----------



## GreatestChase

yzonker said:


> Must be quite a rad to need a floor jack.
> 
> I see a brand new track tire there too I think. Can't quite decide which one it is? I just mounted a set of 660s to test against my A052s on my Z06.


Good eye, lol. They’re NT01s. Picked up a new set of Apex wheels to go with them. Unfortunately I can’t use them yet because there is a massive back order on the hubs I need that have longer ARP studs.


----------



## J7SC

GreatestChase said:


> Good eye, lol. They’re NT01s. Picked up a new set of Apex wheels to go with them. Unfortunately I can’t use them yet because there is a massive back order on the hubs I need that have longer apr studs.
> 
> View attachment 2547877
> 
> View attachment 2547878


...they look just perfect for Canadian winter driving


----------



## EarlZ

yzonker said:


> Yea my Seasonic would drop to 11.8v on the 8pins under high load (500w). The Rm1000x I have now stays right at 12v but there is no difference in stability.


No difference in maximum overclocks for your 3090?


----------



## yzonker

EarlZ said:


> No difference in maximum overclocks for your 3090?


Nope.


----------



## GreatestChase

Hey @geriatricpollywog, how do you power your pumps and fans? Do you just run a molex cable out of your case to the external rad? Was trying to find a small external power supply that just did molex, but I'm finding them with like a max of 2A, and I know my fans alone would exceed that at full tilt. I have an extra 1000W psu that I could use, but I feel like that is a little extra lol.


----------



## GreatestChase

Well, I’m impatient and couldn’t wait until the weekend. 2 of the 4 Alphacool QDCs were defective and leaked, but thankfully I tested the new equipment before integrating it into the build. I also only have 6 of my 9 fans installed, as I need M3x35 screws and the fans come with M4x35 while the radiator comes with M3x30. Flow is good though: I’m getting 245 L/h with 2 D5 pumps. Still have to cable manage and whatnot, but it is up and running and not leaking.


----------



## andrew149

GreatestChase said:


> Well, I’m impatient and couldn’t wait until the weekend. 2/4 of the alphacool QDCs were defective and leaked, but thankfully I tested the new equipment before integrating it into the build. Also only have 6/9 of my fans installed as I need m3x35 screws and the fans come with m4x35 and the radiator comes with m3x30. Flow is good though. I’m getting 245 lph w/ 2 D5 pumps. Still have to cable manage and what not, but it is up and running and not leaking.
> View attachment 2547929


Link to the rad you have!


----------



## elbramso

Ah, this forum is pure poison for my wallet  
It looks like it'll be -1°C tomorrow night - you all know what that means 🤣 I'll try to push for 16.7k in PR


----------



## geriatricpollywog

GreatestChase said:


> Hey @geriatricpollywog, how do you power your pumps and fans? Do you just run a molex cable out of your case to the external rad? Was trying to find a small external power supply that just did molex, but I'm finding them with like a max of 2A, and I know my fans alone would exceed that at full tilt. I have an extra 1000W psu that I could use, but I feel like that is a little extra lol.


I used a 3x SATA cable that came with my EVGA 1200P2 and a PWM hub for 9 fans. I can dig up more info if you are srs.

EVGA - Products - EVGA 3x SATA Cable (Single) - W001-00-000148


----------



## yzonker

GreatestChase said:


> Well, I’m impatient and couldn’t wait until the weekend. 2/4 of the alphacool QDCs were defective and leaked, but thankfully I tested the new equipment before integrating it into the build. Also only have 6/9 of my fans installed as I need m3x35 screws and the fans come with m4x35 and the radiator comes with m3x30. Flow is good though. I’m getting 245 lph w/ 2 D5 pumps. Still have to cable manage and what not, but it is up and running and not leaking.
> View attachment 2547929


That's funny. I did the same thing when I got the external rad I have. I ran it for a couple of days with just half the fans because I didn't have enough screws. 

Looks good though. I guess I need to step up and get that chiller!


----------



## GreatestChase

andrew149 said:


> Link to the rad you have!


This is what I went with. Watercool MO-RA3 360 PRO - White



geriatricpollywog said:


> I used a 3x SATA cable that came with my EVGA 1200P2 and a PWM hub for 9 fans. I can dig up more info if you are srs.
> 
> EVGA - Products - EVGA 3x SATA Cable (Single) - W001-00-000148
> 
> 
> 
> View attachment 2547941
> 
> 
> View attachment 2547942
> 
> 
> View attachment 2547943


Thanks. I ended up just using the extra psu I have lying around though lol. And I'm using an Aquacomputer Quadro as my fan hub. So with my pumps and the Quadro I needed to have 3x molex. I could've run it out of the case like you did, but want to be able to move it a little further away from the case if I want to. Thanks for the pics though.


----------



## GreatestChase

yzonker said:


> That's funny. I did the same thing when I got the external rad I have. I ran it for a couple of days with just half the fans because I didn't have enough screws.
> 
> Looks good though. I guess I need to step up and get that chiller!


While I understand that I'm using non-standard-sized fans with the T30s, I wish that manufacturers would settle on a standard screw size. It seems some companies like Alphacool and Watercool use M3 while others like EK use M4. Or maybe Phanteks should include both M3 and M4 screws, since their fan isn't standard sized. Oh well, I ordered a pack of M3x35s off Amazon and they'll be here tomorrow. The crazy thing is, even with 6 of 9 fans, I'm already about 4C lower on water temps while mining at about 310W continuous load. Currently I have about a 3C delta between ambient and water temps.


----------



## yzonker

GreatestChase said:


> While I understand that I'm using non-standard sized fans with the T30s, I wish that manufacturers would use a standard sized screw. It seems that some companies like Alphacool and Watercool like to use M3 and others like EK like to use M4. Or maybe Phanteks should include both M3 and M4 screws since their fan isn't standard sized. Oh well, I ordered a pack of M3x35 off amazon and they'll be here tomorrow. The crazy thing is, even with 6/9 of my fans, I'm already like 4C lower on water temps while mining at about 310 W continuous load. Currently have about a 3 C delta between ambient and water temps.


Actually EK is 6-32. I have 3 rads, EK, Corsair, and Thermaltake. All 3 use different screws. And yes it's annoying. 

That should be plenty of fan for the Mo-ra3. Even with my little CL480, I saw a large drop in temps when I added it to the other 2. Cut my delta in half (12C to 6C with GPU in the 400-500w range).
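Deltas like the before/after above compare most cleanly as an effective water-to-ambient thermal resistance (delta divided by heat load). A minimal sketch, assuming a 450 W midpoint for the "400-500w" range mentioned; the numbers are illustrative, not measurements:

```python
# Effective thermal resistance of a radiator setup: water-to-ambient
# delta (°C) divided by heat load (W). Lower is better.
def loop_resistance_c_per_w(delta_c: float, load_w: float) -> float:
    return delta_c / load_w

# 450 W is an assumed midpoint of the "400-500w" GPU range above.
before = loop_resistance_c_per_w(12, 450)  # 12C delta before adding the rad
after = loop_resistance_c_per_w(6, 450)    # 6C delta after
print(round(before, 4), round(after, 4))   # → 0.0267 0.0133
```

Halving the delta at the same load means the rad setup is shedding heat twice as effectively, which is why adding even a small rad like a CL480 shows up so clearly.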


----------



## GreatestChase

yzonker said:


> Actually EK is 6-32. I have 3 rads, EK, Corsair, and Thermaltake. All 3 use different screws. And yes it's annoying.
> 
> That should be plenty of fan for the Mo-ra3. Even with my little CL480, I saw a large drop in temps when I added it to the other 2. Cut my delta in half (12C to 6C with GPU in the 400-500w range).


Gotcha, I just assumed they were 4mm since they were slightly larger than the M3s. Good to know. I haven't tested at higher loads yet, but so far I'm happy with the performance.


----------



## des2k...




----------



## truehighroller1

des2k... said:


>


LET'S GO CANADA! FREEDOM TRUCKS FTW.


----------



## ArcticZero

God, this video makes me feel so good about my mediocre shunt-mod soldering work. And no, I didn't have the means to replace a 3090 if I had killed it then. And I almost did kill it, too, with the silver paint I had used initially.


----------



## dansi

des2k... said:


>


Looks messy and totally amateurish?
Should have invited some OCN member to guest overclock their 3090.


----------



## elbramso

dansi said:


> Looks messy and totally amateurish?
> Should have invited some OCN member to guest overclock their 3090.


The amateur approach is what I like most in this video; it gives the rest of us semi-pros a good feeling 🤣


----------



## GRABibus

elbramso said:


> The amateur approach is what I like most in this video; it gives the rest of us semi-pros a good feeling 🤣


I am an amateur and I hate amateurish videos 😊


----------



## Luggage

GRABibus said:


> I am an amateur and I hate amateurish videos 😊


Still, I think it's intentional, to give viewers that "I could do that - and better!" feeling.

At least that's why I went through with my janky outdoors radiator setup


----------



## J7SC

dansi said:


> looks messy and amatuerish total?
> should have invited some OCN member to guest overclock their 3090


Never mind their 'OC' in this installment; in an earlier vid, they seemed to suggest that thermal putty was a revolutionary new discovery...apart from the fact that they used this newer, weaker putty 'just invented'. All the while folks have been using PG 10 etc...

But to each his own, Linus Tech Tips is slipping more and more into the infomercial entertainment space, on purpose, IMO.


----------



## dansi

J7SC said:


> Never mind their 'OC' in this installment; in an earlier vid, they seemed to suggest that thermal putty was a revolutionary new discovery...apart from the fact that they used this newer, weaker putty 'just invented'. All the while folks have been using PG 10 etc...
> 
> But to each his own, Linus Tech Tips is slipping more and more into the infomercial entertainment space, on purpose, IMO.


I posted a comment on that video that PG10 is better and cheaper.
It got deleted!

No surprise, given that the video was sponsored by ....


----------



## elbramso

Well I did it:








I scored 16 696 in Port Royal
Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com





That's pretty much an end of season result as the temps rise daily and I don't have a chiller 
Classified settings used:


----------



## GreatestChase

elbramso said:


> Well I did it:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 696 in Port Royal
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> www.3dmark.com
> 
> 
> 
> 
> 
> That's pretty much an end of season result as the temps rise daily and I don't have a chiller
> Classified settings used:
> View attachment 2548156


Very nice. Sunday morning it's supposed to be about -3 C here. I'll probably try to get a session in and test out the Mora. Maybe it'll help with my cold boot issue since I have the gpu isolated now.


----------



## elbramso

GreatestChase said:


> Very nice. Sunday morning it's supposed to be about -3 C here. I'll probably try to get a session in and test out the Mora. Maybe it'll help with my cold boot issue since I have the gpu isolated now.


At the end of the day it's all about the temperature.
Yesterday I had a hard time getting my water below 4°C, when the last time I tried it went down to 2.7°C. Still, the LM was able to compensate for the difference and I still gained a lower end-of-run temperature (around 1°C).
If my rig wasn't this heavy I'd place it outside for the runs 😂
Anyways, good luck for your session 👍


----------



## GreatestChase

elbramso said:


> At the end of the day it's all about the temperature.
> Yesterday I had a hard time getting my water below 4°C, when the last time I tried it went down to 2.7°C. Still, the LM was able to compensate for the difference and I still gained a lower end-of-run temperature (around 1°C).
> If my rig wasn't this heavy I'd place it outside for the runs 😂
> Anyways, good luck for your session 👍


You guys are making me want to put LM on my card more and more lol. Which brand are you using? Is the go to typically conductonaut?


----------



## yzonker

I slightly beat my PB TS score. Both graphics and combined. Still only matched my PR PB though. 









I scored 21 341 in Time Spy
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com





In other news, Amazon just shipped my chiller. Probably be a week or 2 before I get it set up though. Quite a bit of work and some additional parts needed.


----------



## yzonker

Also, in stock again. Grab it while you can if you want it. 






TG-PP10-50 – Thermal Silicone Putty 50 gram Container, t-Global Technology
www.digikey.com


----------



## kx11

Testing out my Z690 Xtreme Waterforce in Port Royal.










I scored 15 083 in Port Royal
Intel Core i9-12900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com





i guess my rusty ROG is still going strong


----------



## GreatestChase

yzonker said:


> I slightly beat my PB TS score. Both graphics and combined. Still only matched my PR PB though.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 21 341 in Time Spy
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com
> 
> 
> 
> 
> 
> In other news, Amazon just shipped my chiller. Probably be a week or 2 before I get it set up though. Quite a bit of work and some additional parts needed.


Which chiller did you go with?


----------



## yzonker

GreatestChase said:


> Which chiller did you go with?


The 1/4 HP model. I don't intend to use it for sustained loads, so I'm hoping that will be enough.

Active Aqua AACH25HP Hydroponic Water Chiller Cooling System, 1/4 HP, Rated BTU per hour: 3,010
www.amazon.com
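For a quick sanity check on sizing: the listing's 3,010 BTU/hr rating converts to roughly 880 W of heat-removal capacity, which is why a 1/4 HP unit is plausible for a hot-running 3090 loop. A sketch of the conversion; the ~600 W load figure is an assumed example, not a number from the post:

```python
# Convert a chiller's BTU/hr rating into watts of heat-removal capacity.
BTU_PER_HR_TO_WATTS = 0.29307107  # definition-based conversion factor

rated_btu_hr = 3010  # from the Active Aqua 1/4 HP listing above
capacity_w = rated_btu_hr * BTU_PER_HR_TO_WATTS
print(round(capacity_w))  # → 882

# Headroom over an assumed ~600 W GPU-plus-pump load. Note that real
# capacity drops as the water setpoint falls below ambient.
headroom_w = capacity_w - 600
```

That headroom shrinks fast at low setpoints, which matches the later observation that the chiller struggles below ~7C during sustained runs.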


----------



## geriatricpollywog

kx11 said:


> Testing out my Z690 Xtreme Waterforce in Port Royal.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 083 in Port Royal
> Intel Core i9-12900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> www.3dmark.com
> 
> 
> 
> 
> 
> i guess my rusty ROG is still going strong


Your motherboard and CPU should have no effect on PR score.


----------



## yzonker

des2k... said:


>


Finally got around to watching this. I'm hoping that was terrible on purpose (assume so). Reminded me of the car show "Roadkill".


----------



## des2k...

yzonker said:


> Finally got around to watching this. I'm hoping that was terrible on purpose (assume so). Reminded me of the car show "Roadkill".


I think all their mod and extreme-OC videos are jank on purpose to get views. Does that also imply they lack some skills and common sense? I think yes 😂

It's a 3-pin card with waterblocks available, and he had vcore mod support. To me, the 1000w vbios and a big chiller (not custom made) would have worked better and looked very nice.

Of course, in true LTT fashion, let's ask somebody on payroll in the office to solder some shunts on a $2000 GPU 😬

You know your mod is going downhill when you need to cook your card in the oven because your boss buys you a 99-cent soldering tool on AliExpress 😂


----------



## yzonker

des2k... said:


> I think all their mod and extreme-OC videos are jank on purpose to get views. Does that also imply they lack some skills and common sense? I think yes 😂
> 
> It's a 3-pin card with waterblocks available, and he had vcore mod support. To me, the 1000w vbios and a big chiller (not custom made) would have worked better and looked very nice.
> 
> Of course, in true LTT fashion, let's ask somebody on payroll in the office to solder some shunts on a $2000 GPU 😬
> 
> You know your mod is going downhill when you need to cook your card in the oven because your boss buys you a 99-cent soldering tool on AliExpress 😂


Just a 3090 with the 1kw bios and a decent loop is as good or better. I got this with just sliders (+195/+1100) at 22C room temp.











----------



## kx11

geriatricpollywog said:


> Your motherboard and CPU should have no effect on PR score.


My old Ryzen 3900xt couldn't get close to these numbers


----------



## J7SC

yzonker said:


> Just a 3090 with the 1kw bios and a decent loop is as good or better. I got this with just sliders (+195/+1100) at 22C room temp.


Agree - this is from last April '21, 5950X w/ 520 KPE on 3090 Strix; MSI AB sliders, room temp...should run it again


----------



## yzonker

J7SC said:


> Agree - this is from last April '21, 5950X w/ 520 KPE on 3090 Strix; MSI AB sliders, room temp...should run it again
> View attachment 2548311


Definitely a better card than mine. I can't make 11.7k at 520w. Sure would be nice if supply improves for 40 series. I wanted a Strix but they have been nearly impossible to get for me. Also, somewhere along the way TSE got a free score bump too so you might even go higher than that.


----------



## yzonker

Well, there's less difference than I thought, although I think I'm getting help from that score boost I mentioned.

~520w: 11692
~600w: 11779 









www.3dmark.com


----------



## KedarWolf

Thermal putty in stock in Canada at Digikey. Likely in the USA too. 






TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey


Order today, ships today. TG-PP10-50 – Thermal Silicone Putty 50 gram Container from t-Global Technology. Pricing and Availability on millions of electronic components from Digi-Key Electronics.




www.digikey.ca


----------



## long2905

The price increased from $28 to $41 too, holy smoke. Don't think I can justify this in addition to shipping since I'm not in the States.

@KedarWolf dohhh! Thanks man, I missed the currency. Will proceed to order 2 now haha


----------



## KedarWolf

long2905 said:


> the price increased from $28 to $41 too holy smoke. dont think i can justify this in addition to shipping since im not in the states


That's CAD, which because of the exchange rate is more. It'll be less in USD on the USA website.

TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey USD


----------



## yzonker

Yea it was $36 shipped US for me.


----------



## yzonker

And friggin' Amazon and their fast shipping. I've got other work to do today. Of course it doesn't hurt that the actual seller is only 300 miles from my house.


----------



## yzonker

Works. Not loud at all. The cheap aquarium pump I bought just for testing and flushing is more annoying. Lol. 

Seeing very slight condensation at 38F (it overshot slightly). The dew point calculates to 39F, so that seems about right.
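That 39F figure is consistent with the standard Magnus approximation for dew point. A quick sketch; the 72°F / 30% RH inputs are assumed example room conditions (they happen to land right around 39F), not values from the post:

```python
import math

def dew_point_f(temp_f: float, rh_percent: float) -> float:
    """Dew point (°F) from air temperature (°F) and relative humidity (%),
    via the Magnus approximation."""
    t_c = (temp_f - 32) * 5 / 9
    a, b = 17.62, 243.12  # Magnus coefficients for water, roughly -45..60 °C
    gamma = math.log(rh_percent / 100) + (a * t_c) / (b + t_c)
    dp_c = (b * gamma) / (a - gamma)
    return dp_c * 9 / 5 + 32

print(round(dew_point_f(72.0, 30.0), 1))  # → 38.9
```

Handy for chiller setpoints: keep the water above the dew point for your room and you avoid condensation entirely.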


----------



## J7SC

yzonker said:


> Definitely a better card than mine. I can't make 11.7k at 520w. Sure would be nice if supply improves for 40 series. I wanted a Strix but they have been nearly impossible to get for me. Also, somewhere along the way TSE got a free score bump too so you might even go higher than that.


...I checked the download folder re. KPE - didn't actually install KPE 520 until June so this actually was on stock Asus vbios (which hits 500W on 115% PL per GPUz). 



yzonker said:


> Works. Not loud at all. The cheap aquarium pump I bought just for testing and flushing is more annoying. Lol.
> 
> Seeing very slight condensation at 38F (it overshot slightly). Dew point calculates to be 39F so that seems about right.
> 
> View attachment 2548373


Nice! What are the in/out barb fitting sizes, in case I want to get a chiller? I ask because I run 1/2" ID / 3/4" OD tubing and fittings with QD4s.


----------



## yzonker

J7SC said:


> ...I checked the download folder re. KPE - didn't actually install KPE 520 until June so this actually was on stock Asus vbios (which hits 500W on 115% PL per GPUz).
> 
> 
> 
> Nice ! What are the in / out barb fitting sizes, in case I want to get a chiller ? I ask because I run 1/2 id / 3/4 od tubing, fittings with QD4s


Oh just rub it in with your fancy Strix. 

It comes with 1/2" and 3/4" barbs. I have the smaller 10mm line in my loop, so I'll have to adapt it down unless those Performance PC adapters that take standard watercooling fittings work. I sent them an email asking about it.


----------



## J7SC

yzonker said:


> Oh just rub it in with your fancy Strix.
> 
> It comes with 1/2" and 3/4" barbs. I have the smaller 10mm line in my loop, so I'll have to adapt it down unless those Performance PC adapters that take standard watercooling fittings work. I sent them an email asking about it.


...wasn't trying to bug you...instead, making a point re. Strix on stock bios w/o shunts since that moronic LT vid also used a Strix 3090 - but perhaps the extra-cost white Strix are binned for slowness ? 

Good to hear about the chiller barb sizes, tx. Do you have a link to your chiller's model ?


----------



## StAndrew

Quick question, sorry I'm still behind the learning curve. I flashed the 1000W bios to a ROG Strix, but now max power draw is around 350-360W in GPUz. Is this just the software not reading it? PerfCap says VRel, VOp (running Unigine Heaven) and sometimes it just says Idle (despite the Unigine engine running). My GPU clocks are pretty good, around 2145-2160.


----------



## yzonker

J7SC said:


> ...wasn't trying to bug you...instead, making a point re. Strix on stock bios w/o shunts since that moronic LT vid also used a Strix 3090 - but perhaps the extra-cost white Strix are binned for slowness ?
> 
> Good to hear about the chiller barb sizes, tx. Do you have a link to your chiller's model ?


Quite a bit cheaper off Amazon though, and the seller is close enough I could drive there if needed.

Active Aqua Chiller with Power Boost, 1/4 HP
www.hydrofarm.com
Amazon.com


----------



## StAndrew

StAndrew said:


> Quick question, sorry I'm still behind the learning curve. I flashed the 1000W bios to a ROG Strix, but now max power draw is around 350-360W in GPUz. Is this just the software not reading it? PerfCap says VRel, VOp (running Unigine Heaven) and sometimes it just says Idle (despite the Unigine engine running). My GPU clocks are pretty good, around 2145-2160.


Tried the EVGA 520W bios and now power is stuck at 350w and GPU clocks are much worse. Am I missing a step here?


----------



## J7SC

yzonker said:


> Quite a bit cheaper off Amazon though and the seller is close enough I could drive there if needed,
> 
> 
> 
> 
> 
> 
> Active Aqua Chiller with Power Boost, 1/4 HP
> www.hydrofarm.com
> Amazon.com


Thanks !



StAndrew said:


> Tried the EVGA 520W bios and now power is stuck at 350w and GPU clocks are much worse. Am I missing a step here?


Asus Strix w/ KPE 520 or 1kw bios will NOT read accurately in GPU-Z or HWiNFO...power draw is much higher than indicated.


----------



## StAndrew

J7SC said:


> Thanks !
> 
> 
> 
> Asus Strix w/ KPE 520 or 1kw bios will NOT read accurately in GPU-Z or HWiNFO...power draw is much higher than indicated.


Thanks for the clarification!


----------



## EarlZ

ManniX-ITA said:


> It's perfectly fine then, no worries.


Just wanted to circle back to this and now that the PSU is powering the entire system the 0.5% regulation from Seasonic has kicked in and it is showing 11.97V as the minimum. Previously It was only powering the GPU alone while I was waiting for other parts to arrive and my old V1000 was powering the mobo via 24p and 8p.


----------



## J7SC

StAndrew said:


> Thanks for the clarification!


...check out the dark blue underline parameter below...not sure how accurate it ultimately is, but even w/ the KPE 520 bios, the increase in rail power gives you an indication of actual peak power...just add the difference in rail power to your original Strix bios peak Watt number (on my card between 430W and 500W, depending on app) and you get 'an indication' of peak power with Strix while on KPE bios. Again, an approximation only. Actually measuring w/ clips and a meter on the 3x8 pin (plus adding PCIe slot power) is probably the only true way to figure it all out.
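The estimation described above boils down to one addition. A sketch under the post's stated method; every number below is a hypothetical placeholder, not a measurement from the thread:

```python
# Rough peak-power estimate for a Strix on the KPE bios, per the method
# above: software under-reads on the cross-flashed bios, so add the
# observed rise in "GPU Rail Power" to the peak draw seen on the stock
# Asus bios. An approximation only, as the post itself cautions.
def estimated_peak_w(stock_bios_peak_w: float,
                     rail_power_stock_w: float,
                     rail_power_kpe_w: float) -> float:
    return stock_bios_peak_w + (rail_power_kpe_w - rail_power_stock_w)

# Hypothetical example numbers only:
print(estimated_peak_w(500, 150, 220))  # → 570
```

Clamp-meter readings on the 3x8-pin plus PCIe slot power remain the only ground truth, as noted above.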


----------



## yzonker

Well, I didn't work this too hard, so I could probably get closer. For one thing, that older driver I run on the bench OS is good for 50pts or a little more over these new drivers. But good temps, only 5C higher than my old man winter run. It was struggling to get below 7C, but I've got nothing insulated, with 20 ft total of line going back and forth to it, and of course it's going through all of my loop (with the fans turned off). Ultimately I want to add a dedicated pump for the chiller and then just run it on my internal loop, which should help some too, rather than have it go through the big external rad.









www.3dmark.com


----------



## yzonker

J7SC said:


> ...check out the dark blue underline parameter below...not sure how accurate it ultimately is, but even w/ the KPE 520 bios, the increase in rail power gives you an indication of actual peak power...just add the difference in rail power to your original Strix bios peak Watt number (on my card between 430W and 500W, depending on app) and you get 'an indication' of peak power with Strix while on KPE bios. Again, an approximation only. Actually measuring w/ clips and a meter on the 3x8 pin (plus adding PCIe slot power) is probably the only true way to figure it all out.
> View attachment 2548419


I'm trying to get somebody to try it. This thing would solve that problem (assuming the PCIe slot reading is reasonably accurate).



https://www.elmorlabs.com/product/elmorlabs-pmd-power-measurement-device/



BTW, back on the chiller (don't want to make yet another post), my GPU block delta appeared unchanged with the 20 ft of tubing and the chiller in the loop. I don't have a flow meter, but obviously it didn't hurt it much. Biggest problem is trying to get the air out of all of that stuff.


----------



## StAndrew

J7SC said:


> ...check out the dark blue underline parameter below...not sure how accurate it ultimately is, but even w/ the KPE 520 bios, the increase in rail power gives you an indication of actual peak power...just add the difference in rail power to your original Strix bios peak Watt number (on my card between 430W and 500W, depending on app) and you get 'an indication' of peak power with Strix while on KPE bios. Again, an approximation only. Actually measuring w/ clips and a meter on the 3x8 pin (plus adding PCIe slot power) is probably the only true way to figure it all out.
> View attachment 2548419


Well I was running the Heaven demo and with 1000w XOC bios, temps got into the 60's and power draw at the wall was about 900w. I think if I want to continue (overvolting), I'll need a bigger than 1000w PS.


----------



## satinghostrider

Just curious, do you guys apply a drop of thermal paste on your pads for the GDDR6X and VRMs to improve temps on your cards? Any issues so far?
Thanks in advance.


----------



## J7SC

satinghostrider said:


> Just curious, do you guys apply a drop of thermal paste on your pads for the GDDR6X and VRMs to improve temps on your cards? Any issues so far?
> Thanks in advance.


I use a small amount of MX 5 on both thermal pads and thermal putty (GDDR6X...)


----------



## yzonker

StAndrew said:


> Well I was running the Heaven demo and with 1000w XOC bios, temps got into the 60's and power draw at the wall was about 900w. I think if I want to continue (overvolting), I'll need a bigger than 1000w PS.


I doubt it. I've been running that bios a long time and 1000w PSU seems to be enough (RM1000x). I only have a RM850x in my 3080ti machine, but it has never hit OCP running the Galax 1kw bios.


----------



## J7SC

StAndrew said:


> Well I was running the Heaven demo and with 1000w XOC bios, temps got into the 60's and power draw at the wall was about 900w. I think if I want to continue (overvolting), I'll need a bigger than 1000w PS.


Depends on the quality of the PSU, but I like to have a few hundred watts in hand, so the smallest I run these days is 1300W Platinum.


----------



## des2k...

StAndrew said:


> Tried the EVGA 520W bios and now power is stuck at 350w and GPU clocks are much worse. Am I missing a step here?


I think the Strix reports 0 watts on pin 3 with a non-Asus bios.

If the Quake 2 RTX demo (Steam) holds 2115 on the core, that's 600w of power usage; 2175+ will be 700w+. So you can test with that, if your cooling is good.


----------



## J7SC

...what's better than one 1300W PSU? Two 1300W PSUs, of course


----------



## GreatestChase

Well, I didn't end up testing out the Mora yesterday. Temps didn't get as low as originally forecasted. But this morning when I have to get up for clinic it is of course 26 F outside lol.


----------



## StAndrew

yzonker said:


> I doubt it. I've been running that bios a long time and 1000w PSU seems to be enough (RM1000x). I only have a RM850x in my 3080ti machine, but it has never hit OCP running the Galax 1kw bios.


It's a good PSU, and the wall outlet is shared, so the computer was most likely closer to an 850-900w load, but that's about the max I want to push the PSU. I was planning on overvolting (watercooled), but I'm guessing I'll need a new PSU.


----------



## heatdotnet

Another for the "casually dunking on LTT's shunt-mod results" pile:

on a paltry 2-pin 3090 TUF w/ 1kw bios running at PCIe 3.0 and a fairly average custom loop, +150 core / +1000 vram set in Afterburner


----------



## Falkentyne

J7SC said:


> ...check out the dark blue underline parameter below...not sure how accurate it ultimately is, but even w/ the KPE 520 bios, the increase in rail power gives you an indication of actual peak power...just add the difference in rail power to your original Strix bios peak Watt number (on my card between 430W and 500W, depending on app) and you get 'an indication' of peak power with Strix while on KPE bios. Again, an approximation only. Actually measuring w/ clips and a meter on the 3x8 pin (plus adding PCIe slot power) is probably the only true way to figure it all out.
> View attachment 2548419


Um....
"GPU Rail Power" is simply the highest power reported on any individual rail in HWiNFO64. This applies to any collapsed rail that reports a value and expands into a group of reported rails, ever since HWiNFO added collapsible fields (it also applies even to voltages and temps).

What you want to look at is the bottom several GPU Output power sections, as that seems to come from a different section of the controller and is not linked to the shunts themselves like the input rails are.


----------



## yzonker

Closer. Benchmark OS FTW. Didn't change much from the 15.8k run I posted yesterday (which I deleted so that link probably doesn't work now).









I scored 16 022 in Port Royal
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com





Only 120pts shy of old man winter.





I will say that if you want to go even lower than that (the chiller was showing 7C, the water temp sensor 9C at the beginning of the run), you either need to not have rads in the loop or get a bigger chiller. It's still going down slowly at this level, but it takes a long time.

Another option would be to add a bigger res and cool it down with the machine off first using a separate pump.

Although I primarily wanted this for the warmer weather. The dew point will likely be too high to go much below 10C anyway.

Suggestions, ideas?


----------



## J7SC

yzonker said:


> Closer. Benchmark OS FTW. Didn't change much from the 15.8k run I posted yesterday (which I deleted so that link probably doesn't work now).
> 
> I scored 16 022 in Port Royal
> 
> Only 120pts shy of old man winter.
> 
> I will say that if you want to go even lower than that (the chiller was showing 7C, the water temp sensor 9C at the beginning of the run), you either need to not have rads in the loop or get a bigger chiller. It's still going down slowly at this level, but it takes a long time.
> 
> Another option would be to add a bigger res and cool it down with the machine off first using a separate pump.
> 
> Although I primarily wanted this for the warmer weather. The dew point will likely be too high to go much below 10C anyway.
> 
> Suggestions, ideas?


...Obvious one would be to disconnect the rad fans if you haven't already done so, and consider covering the rads temporarily with plastic


----------



## yzonker

J7SC said:


> ...Obvious one would be to disconnect the rad fans if you haven't already done so, and consider covering the rads temporarily with plastic


Oh you wouldn't get even close if the fans were running. I disabled all of them in the bios. Even the case fan in the back and the one on the backplate. I realized the heatsink was cooler than the room temp and it was adding heat to the backplate. Lol.

I did get down 3C lower by covering the machine and external rad with a blanket. Really weird to see decent mobo temps with no fans running and blanket over it. Lol.

But then the water temp jumped up pretty quick (2-3C) during a PR run. Goes back to needing that larger volume of water to keep it more stable. The chiller can be set to 3C but it obviously is very inefficient at that low temp.


----------



## J7SC

yzonker said:


> Oh you wouldn't get even close if the fans were running. I disabled all of them in the bios. Even the case fan in the back and the one on the backplate. I realized the heatsink was cooler than the room temp and it was adding heat to the backplate. Lol.
> 
> I did get down 3C lower by covering the machine and external rad with a blanket. Really weird to see decent mobo temps with no fans running and blanket over it. Lol.
> 
> But then the water temp jumped up pretty quick (2-3C) during a PR run. Goes back to needing that larger volume of water to keep it more stable. The chiller can be set to 3C but it obviously is very inefficient at that low temp.


If you have QDs and an older rad, expanding the loop volume would help. Sort of like an extra-big metal base on an LN2 pot.

I used to do quad-SLI w/ EVBot, and it is amazing how quickly a bucket of ice in a basin w/ a big rad can melt...a watt is a measure of heat energy, after all.


----------



## GRABibus

J7SC said:


> ...what's better than one 1300W PSU ? Two 1300W PSUs, of course
> 
> View attachment 2548459


Crazy man 😂


----------



## KedarWolf

I saw last night you can buy 120x120mm GELID Extreme and Ultimate pads from the official GELID site now, and right now they automatically apply 15% off.


----------



## andrew149

KedarWolf said:


> I saw last night you can buy 120x120mm GELID Extreme and Ultimate pads from the official GELID site now, and right now they automatically apply 15% off.


Man I will never understand why this crap is so expensive and you get so little. 









GP-EXTREME THERMAL PAD 120×120 SINGLE 2mm [TP-GP01-S-D]
gelidstore.com


----------



## StAndrew

KedarWolf said:


> I saw last night you can buy 120x120mm GELID Extreme and Ultimate pads from the official GELID site now, and right now they automatically apply 15% off.





andrew149 said:


> Man I will never understand why this crap is so expensive and you get so little.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GP-EXTREME THERMAL PAD 120×120 SINGLE 2mm [TP-GP01-S-D]
> gelidstore.com


I'm sticking with the thermal putty. I haven't had good results with thermal pads; they always raise the GPU temperature at least 2-3°C due to lower contact pressure. My memory stays under 50°C with the putty, which is plenty good for me (Phanteks block and Bykski universal backside block).









TG-PP-10 Silicone Thermal Putty - T-Global Technology
www.tglobaltechnology.com


----------



## J7SC

GRABibus said:


> Crazy man 😂


...Spacesaver !


----------



## andrew149

StAndrew said:


> I'm sticking with the thermal putty. I haven't had good results with thermal pads; they always raise the GPU temperature at least 2-3*C due to lower contact pressure. My memory stays under 50*C with the putty which is plenty good for me (Phanteks Block and Bykski universal backside block).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> TG-PP-10 Silicone Thermal Putty - T-Global Technology
> www.tglobaltechnology.com


Man, where do you buy the putty? I've never used it before and I don't see a buy-it-now button.


I was planning on getting the Bykski block for myself. $105 is just cheap; I'm used to seeing EK prices at $180+ from 10 years ago when I started doing water cooling.


----------



## J7SC

andrew149 said:


> Man, where do you buy the putty? I've never used it before and I don't see a buy-it-now button.
> 
> 
> I was planning on getting the Bykski block for myself. $105 is just cheap; I'm used to seeing EK prices at $180+ from 10 years ago when I started doing water cooling.


...I dumped the EK block for my 3090 Strix and am now using the Phanteks one - better performance and quality; you might want to check that one out as well. And of course, thermal putty (almost) everywhere, front and back.


----------



## andrew149

J7SC said:


> ...I dumped the EK block for my 3090 Strix and am now using the Phanteks one - better performance and quality; you might want to check that one out as well....and of course, thermal putty (almost) everywhere front and back


So my dad has 3 3090s and he's using Corsair water blocks with great success; the memory is at like 60°C on those blocks.

My Gigabyte 3080 Gaming OC needs a water block, and I have limited choices between these brands:

Barrow - in stock
Bykski - in stock
Bitspower - out of stock
Alphacool - out of stock


----------



## StAndrew

andrew149 said:


> Man, where do you buy the putty? I've never used it before and I don't see a buy-it-now button.
> 
> 
> I was planning on getting the Bykski block for myself. $105 is just cheap; I'm used to seeing EK prices at $180+ from 10 years ago when I started doing water cooling.


50g here ($28): Fans, Thermal Management | Electronic Components Distributor DigiKey 

More than enough for a GPU, and yes, it's reusable, but kind of a PITA to apply.

I take a large clump, place it on the waterblock surface, and spread it out by tapping it down with my fingertip (I wear nitrile gloves to not contaminate the putty, and it's pretty messy... but easy to clean).

I then use the blade side of this tool: Halberd Spudger - iFixit to cut off the extra, trimming down to the block's raised edges. I can then eyeball the thickness of the putty and pat it down further as needed. I try to make sure the middle of the putty is thicker than the edges so it squishes outwards and eliminates potential air bubbles.

If you're concerned about wasting / using too much, just do a test fit: squish the block on, use the above iFixit tool to cut away and remove the extra putty that squishes out the sides, then scrape off the putty left behind and re-apply it to the block (I add just a small bit extra) - and you know you have the right amount.



J7SC said:


> ...I dumped the EK block for my 3090 Strix and am now using the Phanteks one - better performance and quality; you might want to check that one out as well....and of course, thermal putty (almost) everywhere front and back


YES. I love the Phanteks block; except for Optimus (overpriced), it's the only block I know of that uses 0.5mm thermal pads, which is a lot better than most blocks (usually 2mm pads). My memory temps on the RTX 3090 FE with a Bykski block were 68-70°C when mining. My Strix with the Phanteks block is 20° better at 48-50°C (both setups used the same Bykski universal back block).


----------



## StAndrew

Anyone have the specs on the RTX 3090 FE shunts, or a link? Trying to fix my water-damaged card: Dead RTX 3090 FE - Help Needed | Overclock.net


----------



## ArcticZero

StAndrew said:


> Anyone have the specs on the RTX 3090 FE shunts, or a link? Trying to fix my water-damaged card: Dead RTX 3090 FE - Help Needed | Overclock.net


If FE cards use the same ones as reference, then those would be standard 5mOhm 2512 resistors.


----------



## Falkentyne

ArcticZero said:


> If FE cards use the same ones as reference, then those would be standard 5mOhm 2512 resistors.


FE cards use 2W shunts, while "most" reference cards use 1W shunts. 1W shunts are flush from edge to middle; 2W have depressed metal edges. A few of the reference cards use 2W shunts, and the Kingpin uses weird non-2512 ones.
There is no penalty or functional difference between 1W and 2W, except that 2W shunts are a real pain to stack for shunt mods.
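For anyone wondering why stacking matters: soldering a second shunt on top of the original puts the two in parallel, halving the resistance the controller senses, so the reported power (and therefore the effective power limit) scales by the same factor. A rough sketch, assuming ideal resistors and a controller that infers current from the shunt's voltage drop:

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

stock = 0.005                      # 5 mOhm 2512 shunt (reference cards)
stacked = parallel(stock, 0.005)   # second 5 mOhm shunt soldered on top

# The controller still assumes 5 mOhm, so reported power shrinks by
# stacked/stock, and the effective power limit grows by stock/stacked.
print(stacked * 1000)   # 2.5 (mOhm sensed)
print(stock / stacked)  # 2.0 (x effective power limit)
```

Real shunts have tolerance and solder adds its own resistance, so treat the 2x as a ballpark, not a calibration.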


----------



## yzonker

@PLATOON TEKK are you still out there? You were building the massive chiller rig - curious how that went?


----------



## Caffinator

I got my step-up email today. I'm worried only about VRAM - maybe a game gets released next Christmas and I can't play at 4K on maximum settings because of these new "low VRAM" messages. Somehow, for 20 years, games managed not to complain about VRAM.

I'm also hesitant about SLI support in the future - I figure the 24GB of VRAM was really intended to be shared between two 3090s in SLI. 180% compute power needs double the VRAM, I suppose. And now you see the new 3080 arrive with 12GB VRAM.

So right now my 10GB 3080 works great, but I don't want to get bamboozled by VRAM issues - is it worth doubling the cost of ownership ($920 for the 3080, $1800 for the 3090)?


----------



## elbramso

Caffinator said:


> I got my step-up email today. I'm worried only about VRAM - maybe a game gets released next Christmas and I can't play at 4K on maximum settings because of these new "low VRAM" messages. Somehow, for 20 years, games managed not to complain about VRAM.
> 
> I'm also hesitant about SLI support in the future - I figure the 24GB of VRAM was really intended to be shared between two 3090s in SLI. 180% compute power needs double the VRAM, I suppose. And now you see the new 3080 arrive with 12GB VRAM.
> 
> So right now my 10GB 3080 works great, but I don't want to get bamboozled by VRAM issues - is it worth doubling the cost of ownership ($920 for the 3080, $1800 for the 3090)?


As of now, my 3090 Kingpin can't even play Cyberpunk (last year's title) at 4K with all settings set to ultra. I wouldn't be too worried about next year's titles, tbh...
I'd rather think about the fact that next-gen graphics cards will outperform 30-series cards by a lot. A 4070 might beat a 3090.


----------



## geriatricpollywog

elbramso said:


> As of now, my 3090 Kingpin can't even play Cyberpunk (last year's title) at 4K with all settings set to ultra. I wouldn't be too worried about next year's titles, tbh...
> I'd rather think about the fact that next-gen graphics cards will outperform 30-series cards by a lot. A 4070 might beat a 3090.


I think I agree with @Section31 that now is the time to sell your 3090 and take a profit. The opportunity cost is $5.50 per day in mining revenue.


----------



## Section31

elbramso said:


> As of now, my 3090 Kingpin can't even play Cyberpunk (last year's title) at 4K with all settings set to ultra. I wouldn't be too worried about next year's titles, tbh...
> I'd rather think about the fact that next-gen graphics cards will outperform 30-series cards by a lot. A 4070 might beat a 3090.


I think so, but I'd also hold off on upgrading to the 2022/2023 GPUs considering their costs. Better off going every other generation at this point.


----------



## elbramso

Pretty simple for me. If I'm able to buy a 4090 Kingpin I will upgrade and if not I'm just sticking to my 3090 kingpin.


----------



## GreatestChase

elbramso said:


> Pretty simple for me. If I'm able to buy a 4090 Kingpin I will upgrade and if not I'm just sticking to my 3090 kingpin.


Hopefully the wait isn't over a year next time around. It took about a year from the time I signed up for the notify queue until I got the email for my KP.


----------



## elbramso

GreatestChase said:


> Hopefully the wait isn't over a year next time around. Took about a year from the time I signed up for the notify queue until I got my email for my KP.


100% agree.
Last time I had a 1080 Ti, so I'm in a better position this time 😅
What really bothers me is that you need to be insanely lucky to get your hands on kingpin products here in Germany. You can't even queue up for these products...


----------



## Section31

elbramso said:


> Pretty simple for me. If I'm able to buy a 4090 Kingpin I will upgrade and if not I'm just sticking to my 3090 kingpin.


As long as you have the means, that's all that matters. I'm sticking with reference cards going forward myself.


----------



## Section31

elbramso said:


> 100% agree.
> Last time I had a 1080ti so I'm in a better position this time 😅
> What really bothers me is that you need to be insanely lucky to get your hands on kingpin products here in Germany. You can't even queue up for these products...


Hopefully next gen is a lot less crazy than the last. Last gen should have covered enough of the 10xx-and-older group that needed GPU upgrades. There aren't that many 4K gamers out there.


----------



## yzonker

Section31 said:


> Hopefully next gen is a lot less crazy than the last. Last gen should have covered enough of the 10xx-and-older group that needed GPU upgrades. There aren't that many 4K gamers out there.


Right, that's why there are so many cards available now.  

Best hope is ETH 2.0 hitting before the 40-series launch. Used prices come way down and people grab those rather than the sky-high priced new cards. Not sure how much hope there is for that either, though.


----------



## Section31

yzonker said:


> Right, that's why there are so many cards available now.
> 
> Best hope is ETH 2.0 hitting before the 40-series launch. Used prices come way down and people grab those rather than the sky-high priced new cards. Not sure how much hope there is for that either, though.


Lol at mining. I don't bother accounting for it at this point. Most gamers are doing it with their single GPU (I'm OK with it) and it's going to be a fad in the end.

As for pure mining farms, I don't know if they'll continue to invest in such operations. A mining-farm collapse is a bad thing, since companies will get tons of GPU returns and delay launches to clear out those GPUs.

I'd rather companies make products with big performance gains (40-50%) without the marketing mumbo-jumbo than focus their time on miners, etc. Miners got more than us; they have access to NVIDIA A100 GPUs now.


----------



## GRABibus

elbramso said:


> 100% agree.
> Last time I had a 1080ti so I'm in a better position this time 😅
> What really bothers me is that you need to be insanely lucky to get your hands on kingpin products here in Germany. You can't even queue up for these products...


Neither in France.
There was a drop at a reseller some weeks ago (only 3 cards!) and they sold out in 5 minutes. I was lucky to catch one 😜. 2550€.
The Kingpin Hybrid hadn't been available in France for 4 months before that.


----------



## J7SC

I usually (not always) upgrade only every second gen re. the 'game' half of my work-play setup with a 6900XT and Strix 3090 (spoiler), and never go for reference models, for a variety of reasons. Another system still has 2x 2080 Tis, and while I likely don't play as much as many others here, Cyberpunk 2077 at 4K Ultra works fine on both the 3090 and the 2080 Tis (not loaded on the 6900XT workhorse). As to upgrading for 4K re. VRAM, I've seen some apps at 4K / Ultra that get past 10 GB VRAM 'usage' (per MSI AB's 'definition', also re. VRAM allocation). For new purchases for work or play, my minimum VRAM is now 16 GB - mostly as I keep all my cards active for a while (never sell them) since I run multiple systems. 4K is my standard daily resolution now.

If you have a well-running 3090, I see absolutely no reason to sell / upgrade now... when the 4090s come out, it's worth taking a look, but until I see some real-world results, I'll wait - I don't have to be the first on the block with the latest stuff. I just checked major retailers both here in Canada and in Europe, and while some seem to have gotten a handful of 3090s in, they are still few and far between - and pricey! Of the custom-PCB 3090s I would actually want, I only found one. I'll leave the mining topic alone, other than to say that I wouldn't buy a 3090 that was part of a mining setup.


Spoiler


----------



## Caffinator

I will keep the 3080 then, and plan on next gen when 4000 arrives.


----------



## ArcticZero

Section31 said:


> As long as you have the means, that's all that matters. I'm sticking with reference cards going forward myself.


Likewise. After all the issues people have had with random SKUs failing (e.g. first-gen FTW2s, some Gigabyte SKUs), I'm more than happy I got a reference card that most people simply looked down on (PNY), when it has turned out to be outstanding all around, even after shunt modding and daily-driving games at ~460-500W.

I don't really want to push it further, since it's more than enough frames for me in any game, and I find the sweet spot is staying over 2100MHz consistently.


----------



## J7SC

...not sure anybody really 'looks down' on PNY... in my regional market, they simply aren't available. Also, there have been folks with FE and reference cards that were shunt-modded and have developed problems. Be that as it may, I usually go for 3x 8-pin custom PCBs (Asus among my first choices) since I know there will be some nice XOC BIOS that 'fits' w/o too much hassle.


----------



## Ironcobra

Section31 said:


> Lol at mining. I don't bother accounting for it at this point. Most gamers are doing it with their single GPU (I'm OK with it) and it's going to be a fad in the end.
> 
> As for pure mining farms, I don't know if they'll continue to invest in such operations. A mining-farm collapse is a bad thing, since companies will get tons of GPU returns and delay launches to clear out those GPUs.
> 
> I'd rather companies make products with big performance gains (40-50%) without the marketing mumbo-jumbo than focus their time on miners, etc. Miners got more than us; they have access to NVIDIA A100 GPUs now.


Say what you want about mining, but I have paid for my 3090 in less than 9 months, and ETH 2.0 solving the mining thing is a myth - I'll believe it when I see it.


----------



## KedarWolf

A Samsung Odyssey Neo G9 monitor was on sale for $800 off at a local store; yesterday was the last day of the sale. It's a 5120x1440 49" ultrawide, HDR 2000, G-Sync/FreeSync, 240Hz screen.

However, I don't get my tax refund for another week or so, and they wouldn't hold it that long at the sale price. My sis, who often helps me, couldn't lend me the $2000 CAD to get it.

Another store had it on backorder and would lock it in at the sale price for a $200 deposit. So I contacted Memory Express and told them this; the guy talked to the manager and agreed to hold it at the sale price for $200 down. They had one left in stock.

I'm sooooo happy - I'd never be able to get it at the full price, which was close to $3000. 

I have a water-cooled Strix OC RTX 3090 to drive it. That resolution works out to about 89% of the total pixels of 4K.

Putting $200 down to lock in the sale price at $800 CAD off, then paying it in full in about a week.

Nearly $2000 CAD with tax, $1699.99 before tax.
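For anyone checking that 89% figure, it's just the ratio of total pixel counts between 5120x1440 and 3840x2160:

```python
# Pixel counts: Odyssey Neo G9 ultrawide vs standard 4K UHD
ultrawide = 5120 * 1440   # 7,372,800 px
uhd = 3840 * 2160         # 8,294,400 px

ratio = ultrawide / uhd
print(ultrawide, uhd, round(ratio, 2))  # 7372800 8294400 0.89
```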


----------



## andrew149

Ironcobra said:


> Say what you want about mining, but I have paid for my 3090 in less than 9 months, and ETH 2.0 solving the mining thing is a myth - I'll believe it when I see it.


Agreed! Based on their historical data, they're not gonna be ready this year; they make about 10% progress a year, and they're at like 60% or so completed.


----------



## Section31

Ironcobra said:


> Say what you want about mining, but I have paid for my 3090 in less than 9 months, and ETH 2.0 solving the mining thing is a myth - I'll believe it when I see it.


I talked to a couple of miners/flippers. No issues at all with you guys. I probably could have done the same had I started mining when I got my 3090, but I chose a different direction. All the power to you guys. The miner I spoke with has actually been downscaling full mining rigs - he still mines, but is in a different boat than you guys. The guy made a bundle selling mining machines to new miners for like 15-20K USD (think 6-7 GPUs plus the mining setup) after fully recovering his mining costs.

I even had people at the height of the hype asking if they could use my PC to mine for them, lol. They know my PC sits there idle 99% of the time, and it's watercooled too, so they can push it. However, we could never come to an agreement on the actual price. Even some of my friends who are into mining asked the same, though the discussion never went deep, as we didn't want to ruin a friendship over something like that.


----------



## ArcticZero

J7SC said:


> ...not sure anybody really 'looks down' on PNY...in my regional market, they simply aren't available. Also, there have been folks with FE and reference cards that were shunt-modded and that have developed problems. Be that as it may, I usually go for 3x 8 pin custom PCBs (Asus among my first choices) since I know that there will be some nice XOC bios that 'fit' w/o too much hassle


At the time the 3090 was released, I do recall quite a few people referring to PNY as a "low-end" or "not the first choice" brand, and only taking it because there were no other available SKUs. Honestly, I took it since it was the first one available as well, despite originally wanting a 3x8-pin card.

Then you had others reminding people that PNY was one of only a few AIBs who made Quadros - the other one being Leadtek. No brand preference or anything, just zero regrets on my own purchase, hindsight being 20/20 and all.


----------



## andrew149

ArcticZero said:


> At the time the 3090 was released, I do recall quite a few people referring to PNY as a "low-end" or "not the first choice" brand, and only taking it because there were no other available SKUs. Honestly, I took it since it was the first one available as well, despite originally wanting a 3x8-pin card.
> 
> Then you had others reminding people that PNY was one of only a few AIBs who made Quadros - the other one being Leadtek. No brand preference or anything, just zero regrets on my own purchase, hindsight being 20/20 and all.


Man, I have 2 LHR 3060 cards from PNY and those things run all day at 36 MH/s without issues. Idk why people have an issue with PNY.


----------



## PowerK

Hi folks. 
I've had a 3090 since its launch but have been out of the loop for several months.
The other day, I flashed my 3090 with the 1000W XOC vBIOS and tried to see how much further it could go. The result was quite disappointing (but expected, to be honest). I've never had fun overclocking GPUs, be it under water or air, since they introduced what they call 'GPU Boost' with the GTX 680 (Kepler) generation.
Currently, with the stock BIOS, my 3090 clocks at a stable 2130-2145MHz at 993mV (around 330-380W power consumption) no matter what I throw at it (CP2077, 3DM PR, Metro Exodus EE, Control, DL2, etc.).
With the XOC BIOS, I could go further: 2205-2220MHz at 1037mV, for example. But power consumption goes over 520W (!!). And idle power consumption (with Wallpaper Engine running) seems to be doubled (to around 80W) compared to the stock vBIOS (30-35W). Not worth it for a daily gaming rig, from my experience.


----------



## andrew149

PowerK said:


> Hi folks.
> I've had a 3090 since its launch but have been out of the loop for several months.
> The other day, I flashed my 3090 with the 1000W XOC vBIOS and tried to see how much further it could go. The result was quite disappointing (but expected, to be honest). I've never had fun overclocking GPUs, be it under water or air, since they introduced what they call 'GPU Boost' with the GTX 680 (Kepler) generation.
> Currently, with the stock BIOS, my 3090 clocks at a stable 2130-2145MHz at 993mV (around 330-380W power consumption) no matter what I throw at it (CP2077, 3DM PR, Metro Exodus EE, Control, DL2, etc.).
> With the XOC BIOS, I could go further: 2205-2220MHz at 1037mV, for example. But power consumption goes over 520W (!!). And idle power consumption (with Wallpaper Engine running) seems to be doubled (to around 80W) compared to the stock vBIOS (30-35W). Not worth it for a daily gaming rig, from my experience.


GPU-Z is no longer accurate with a card once you switch BIOSes; I doubt you're actually doubling.


----------



## PowerK

andrew149 said:


> GPU-Z is no longer accurate with a card once you switch BIOSes; I doubt you're actually doubling.


Actually, I was using AIDA64 to monitor power consumption.


----------



## andrew149

PowerK said:


> Actually, I was using AIDA64 when monitoring PWR consumption.


Software isn’t going to be accurate at all you need to measure from the wall the difference


----------



## PowerK

andrew149 said:


> Software isn’t going to be accurate at all you need to measure from the wall the difference


Thanks for stating the obvious. I never said software was accurate, but it can serve as a good approximation. (Isn't this why we're all using various software monitoring tools?)
Now, with regards to GPU-Z not reporting proper values after flashing a vBIOS - can it be off by as much as a 100% delta? And is this the case with AIDA64 as well?


----------



## geriatricpollywog

Section31 said:


> I talked to a couple of miners/flippers. No issues at all with you guys. I probably could have done the same had I started mining when I got my 3090, but I chose a different direction. All the power to you guys. The miner I spoke with has actually been downscaling full mining rigs - he still mines, but is in a different boat than you guys. The guy made a bundle selling mining machines to new miners for like 15-20K USD (think 6-7 GPUs plus the mining setup) after fully recovering his mining costs.
> 
> I even had people at the height of the hype asking if they could use my PC to mine for them, lol. They know my PC sits there idle 99% of the time, and it's watercooled too, so they can push it. However, we could never come to an agreement on the actual price. Even some of my friends who are into mining asked the same, though the discussion never went deep, as we didn't want to ruin a friendship over something like that.


It's not too late to start mining. NiceHash is very easy to set up. I have good insulation, so my computer keeps my living room about 5°C warmer than ambient. The electricity cost of mining is subtracted from what my central heater would use. In the summer, I'll put the MO-RA on the patio.


----------



## J7SC

PowerK said:


> Thanks for stating the obvious. I never said software was accurate, but it can serve as a good approximation. (Isn't this why we're all using various software monitoring tools?)
> Now, with regards to GPU-Z not reporting proper values after flashing a vBIOS - can it be off by as much as a 100% delta? And is this the case with AIDA64 as well?


...I would use HWiNFO as monitoring software, but for the record, 520W is entirely believable. When I first got my Strix 3090, it pulled over 500W on its stock vBIOS with the PL slider maxed (oldie GPU-Z screenshot below). 

Also, I think folks might be confusing another issue here: if you use the EVGA KPE 520W or 1kW XOC BIOS on cards like the Strix, HWiNFO and GPU-Z will not properly report the power consumption. That is the case with my now water-cooled Strix when I switch the vBIOS over to the KPE 520W. *However*, if you're talking about your sig rig's Galax HOF OC Lab, it might be accurate, especially if you loaded the Galax XOC 1kW vBIOS. If you have a voltmeter handy, the only accurate way is to measure with clips on your PCIe cables (+ PCIe slot from monitoring software).


----------



## Section31

geriatricpollywog said:


> It’s not too late to start mining. Nicehash is very easy to set up. I have good insulation so my computer keeps my living room about 5C warmer than ambient. The electricity cost of mining is subtracted from what my central heater uses. In the summer I’ll put the MO-RA on the patio.


Thanks but not doing. Just not my thing.


----------



## GreatestChase

Well the notify page for the Z690 Dark has finally gone live. Looks like with tax and shipping it'll be around $1000 USD.


----------



## J7SC

GreatestChase said:


> Well the notify page for the Z690 Dark has finally gone live. Looks like with tax and shipping it'll be around $1000 USD.


...what a steal 🥴 - the ASUS ROG Maximus Z690 Extreme Glacial costs twice as much


----------



## GreatestChase

J7SC said:


> ...what a steal 🥴 - the ASUS ROG Maximus Z690 Extreme Glacial costs twice as much


Looks to be about the same MSRP as the Maximus Apex board, which I guess is in the same target range. However, MSI's Unify-X comes in at around $300 less than either of those. The Apex does have 24 power phases, while the Unify-X and Dark boards both have 21.


----------



## elbramso

Crypto mining is an absurd waste of energy, imo. I know it might not be a popular opinion, but having my rig up and running 24/7 for a $60 USD profit per month is kinda stupid.
At the current rates and energy costs, I can't justify crypto mining.
That being said, Germany has insanely high energy costs atm.


----------



## yzonker

elbramso said:


> Crypto mining is an absurd waste of energy, imo. I know it might not be a popular opinion, but having my rig up and running 24/7 for a $60 USD profit per month is kinda stupid.
> At the current rates and energy costs, I can't justify crypto mining.
> That being said, Germany has insanely high energy costs atm.


It's incredibly bad for the environment. I've been mining for over a year, but it only works because my electric rates are relatively low. Right now it's heating my house, so it's not such a waste this time of year.


----------



## GreatestChase

Well ****. I may or may not have just purchased a Z690 Dark board....

With that said, anyone have any recommendations for an open air test bench?


----------



## PLATOON TEKK

yzonker said:


> @PLATOON TEKK are you still out there? You were building the massive chiller rig - curious how that went?


What’s happening man. Hope you’ve been well brother.

The crazy ass chiller setup is still going well. My main issue at the moment is my dew point. Despite what my signature deceptively states, -12°C was the dew point at the last location. At the moment I hover at 0°C at the lowest.

I'm running 5 chillers in total: 4x 900EXC and 1x 1.5HP chiller. The setup is capable of -20°C (including the coolant); my only issue at the moment is controlling the humidity and temperature of the room (dew point). Also, the power consumption is madness:

i) 5x chillers (1 needing a full fuse)
ii) PC using up to 1600/1700W (needed a 2000W UPS)
iii) 3x dehumidifiers (one navy-grade desiccant)
iv) central AC
v) 2x supplemental server AC

I've gone so far as using the garden's fuse, lol.

I have also routed tubing under the house to my main gaming PC on the 2nd floor (using 9x PMP500 pumps). This allows me to use the chillers to cool two setups, depending on what I'm using (gaming or bench, switched with QDCs). I got highest-on-water on Port Royal when I tested the setup a while back.

My next step is creating a sealed, "dew point controlled" room to house all the chillers and the PC itself. This will allow me to control humidity and temperature to achieve a dew point of -20°C. I've been in talks with industrial manufacturers, and that **** costs a fortune, lol. Once the Dark Z690 arrives on Monday, I'll probably give the 2 HC KPs another run on Port Royal. If the weather is right I can hit a -5°C dew point, fingers crossed.

I have a further 4 Hailea chillers under the house for other systems, so if anyone needs advice about long loops or pumps, I'm here, ha.

Always peeping this thread; the information on here is invaluable. Really appreciate y'all.

Will update more regularly on Instagram soon too once a project drops.



Spoiler: Setup


----------



## J7SC

...fyi, I've been giving my Strix a workout with Forza H5, as well as this: Highly recommend it 







---

And in other news, eye-candy, including 3x Kudan 3090


----------



## yzonker

PLATOON TEKK said:


> What’s happening man. Hope you’ve been well brother.
> 
> The crazy ass chiller setup is still going well. My main issue at the moment is my dew point. Despite what my signature deceptively states, -12°C was the dew point at the last location. At the moment I hover at 0°C at the lowest.
> 
> I'm running 5 chillers in total: 4x 900EXC and 1x 1.5HP chiller. The setup is capable of -20°C (including the coolant); my only issue at the moment is controlling the humidity and temperature of the room (dew point). Also, the power consumption is madness:
> 
> i) 5x chillers (1 needing a full fuse)
> ii) PC using up to 1600/1700W (needed a 2000W UPS)
> iii) 3x dehumidifiers (one navy-grade desiccant)
> iv) central AC
> v) 2x supplemental server AC
> 
> I've gone so far as using the garden's fuse, lol.
> 
> I have also routed tubing under the house to my main gaming PC on the 2nd floor (using 9x PMP500 pumps). This allows me to use the chillers to cool two setups, depending on what I'm using (gaming or bench, switched with QDCs). I got highest-on-water on Port Royal when I tested the setup a while back.
> 
> My next step is creating a sealed, "dew point controlled" room to house all the chillers and the PC itself. This will allow me to control humidity and temperature to achieve a dew point of -20°C. I've been in talks with industrial manufacturers, and that **** costs a fortune, lol. Once the Dark Z690 arrives on Monday, I'll probably give the 2 HC KPs another run on Port Royal. If the weather is right I can hit a -5°C dew point, fingers crossed.
> 
> I have a further 4 Hailea chillers under the house for other systems, so if anyone needs advice about long loops or pumps, I'm here, ha.
> 
> Always peeping this thread; the information on here is invaluable. Really appreciate y'all.
> 
> Will update more regularly on Instagram soon too once a project drops.
> 
> 
> 
> Spoiler: Setup
> 
> 
> 
> 


That's taking it to another level for sure. Really impressive. Glad you're still hanging around. 

That's what I've been thinking - the biggest limitation I'll have is dew point. Although on a smaller scale, my plan is the same. I can pull some moisture out with the HVAC along with a decent-size portable dehumidifier. Still, I'm guessing I'll be limited to 10°C or so in the summer, depending on the outside conditions.

How do you measure temp/humidity/dew point? Is there an affordable hygrometer that is known to be fairly accurate, or maybe one that can plug into USB so I could get readings in Windows?
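On the dew point side of the question: once you have temperature and relative humidity from any sensor, the standard Magnus approximation gets you a dew point that's plenty accurate for chiller planning. A quick sketch (b and c are the commonly used Magnus constants; treat it as an approximation, not an exact law):

```python
import math

def dew_point(temp_c: float, rh_pct: float) -> float:
    """Approximate dew point (°C) from air temperature (°C) and RH (%).

    Magnus formula with constants b=17.62, c=243.12 °C; good to roughly
    ±0.5 °C across ordinary indoor conditions.
    """
    b, c = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

# e.g. a 25 °C room at 50% RH condenses on anything below ~13.9 °C,
# so a 10 °C loop needs the humidity pulled down further first.
print(round(dew_point(25.0, 50.0), 1))  # 13.9
```

Handy for logging: feed it readings from whatever USB hygrometer you end up with and alarm when coolant temperature approaches the computed dew point.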


----------



## PLATOON TEKK

yzonker said:


> That's taking it to another level for sure. Really impressive. Glad you're still hanging around.
> 
> That's what I've been thinking - the biggest limitation I'll have is dew point. Although on a smaller scale, my plan is the same. I can pull some moisture out with the HVAC along with a decent-size portable dehumidifier. Still, I'm guessing I'll be limited to 10°C or so in the summer, depending on the outside conditions.
> 
> How do you measure temp/humidity/dew point? Is there an affordable hygrometer that is known to be fairly accurate, or maybe one that can plug into USB so I could get readings in Windows?


SensorPush has Bluetooth sensors you can connect to a gateway to make their data trackable off-location.


----------



## yzonker

J7SC said:


> ...fyi, I've been giving a workout to my Strix with Forza H5, as well as this: Highly recommend it
> 
> 
> 
> 
> 
> 
> 
> ---
> 
> And in other news, eye-candy, including 3x Kudan 3090


I do need to try Ride 4 sometime. Always looks so impressive.


----------



## yzonker

PLATOON TEKK said:


> SensorPush has Bluetooth sensors you can connect to a gateway to make their data trackable off location.


That could work. Thanks.


----------



## J7SC

yzonker said:


> I do need to try Ride 4 sometime. Always looks so impressive.


...it's a blast once you've calibrated and fine-tuned your controller


----------



## Luggage

yzonker said:


> That's taking it to another level for sure. Really impressive. Glad you're still hanging around.
> 
> That's what I've been thinking, that the biggest limitation I'll have is dew point. Although a smaller scale, my plan is the same. I can pull some moisture out with the HVAC along with a decent size portable dehumidifier. Still I'm guessing I'll be limited to 10C or so in the summer depending on the outside conditions.
> 
> How do you measure temp/humidity/dew point? Is there an affordable hygrometer that is known to be fairly accurate, or maybe one that can plug in to USB so I could get readings in Windows?


TEMPerHUM variants work, but the software is meh; I've given up on automating it with Aquasuite/HWiNFO.


















Amazon.com: TEMPerHUM PC Sensor PC USB Hygrometer and Thermometer (www.amazon.com)


----------



## kryptonfly

PowerK said:


> Hi folks.
> I've had a 3090 since its launch but have been out of the loop for several months.
> The other day, I flashed my 3090 with the 1000W XOC vBIOS and tried to see how much further it could go. The result was quite disappointing (but expected, to be honest). I've never had fun overclocking GPUs, be it under water or air, since they introduced what they call 'GPU Boost' with the GTX 680 (Kepler) generation.
> Currently, with the stock BIOS, my 3090 clocks at a stable 2130-2145MHz at 993mV (around 330-380W power consumption) no matter what I throw at it (CP2077, 3DM PR, Metro Exodus EE, Control, DL2, etc.)
> With XOC BIOS, I could go further: 2205-2220MHz with 1037mV for example. But power consumption goes over 520W (!!). And idle power consumption (with Wallpaper Engine running) seems to be doubled (to around 80W) compared to the stock vBIOS (30-35W). Not worth it for daily gaming rig from my experience.


You first need to monitor with your stock vBIOS in HWiNFO64, in a bench or something with stable power draw (Port Royal, etc...) and a stable OC freq/voltage. Then, once you've flashed the vBIOS, calibrate the power readings in HWiNFO64 with the same bench and OC (freq/voltage). It's not perfect, but it's what I'm using with my shunted 2x8-pin card + 1000W XOC BIOS; maybe ~30-50W of error with high usage (the max I've seen is 640W, limited by internal voltages).

But the temp difference between 380W and 520W is plain to see. Your GPU should run hotter by a few degrees, probably +5°C with a good custom WC loop. You can also check the output powers to see how much they increase.
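The calibration described above amounts to a simple scaling: a shunt soldered in parallel lowers the sense resistance, so the controller under-reports current. A hedged sketch of the math (resistor values are illustrative, not measurements from any specific card):

```python
def true_power(reported_w: float, r_stock_mohm: float, r_added_mohm: float) -> float:
    """Scale a shunt-modded card's reported power back to real watts.

    The controller computes current from the voltage across the sense
    shunt; a parallel shunt lowers the effective resistance, so the
    reported figure must be multiplied by R_stock / R_effective.
    """
    r_eff = (r_stock_mohm * r_added_mohm) / (r_stock_mohm + r_added_mohm)
    return reported_w * (r_stock_mohm / r_eff)

# Typical mod: 5 mOhm stock shunt with an equal 5 mOhm soldered in
# parallel -> effective 2.5 mOhm, so reported power is half the truth.
print(true_power(260.0, 5.0, 5.0))  # 520.0
```

The same factor can be entered as a correction multiplier in HWiNFO64's sensor settings so the logged power lines up with reality.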


----------



## yzonker

Spent a little time optimizing the chiller. I added a 3rd pump at the chiller as well as a larger DIY res that holds about 3/4 of a gallon in order to stabilize temps better. The practical limit right now seems to be about 5C at the chiller. Water temp sensor at my GPU reads 7C though. It does go through the 2 internal rads and the CPU block before getting to the water temp sensor, so possibly that is contributing to the difference. All fans off of course. I set up a bios profile that shuts them all off, pretty easy to swap back and forth.

I saw as low as 22C average in PR, although this run was the highest score at 23C.









I scored 16 095 in Port Royal


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com





Compared to cold air run,









Result (www.3dmark.com)





Some of the difference is the VRAM runs warmer since the backplate doesn't have sub zero air blowing across it. I lose 50-100 Mhz because of this. (+1050 vs +1125)

Same story with TS although the cold air run probably had even colder water in this case,









Result (www.3dmark.com)





I did beat my old TSE run by a small amount despite the higher temps. One thing that changed was that I had an easy opportunity to tune my CPU more for these colder temps, which I had not done previously.









I scored 10 758 in Time Spy Extreme


AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com





Compare to cold air run,









Result (www.3dmark.com)





All in all I'm pretty happy with it. A bigger chiller would be even better, but I just couldn't justify paying 2x as much to get the 1/2HP unit; it's only rated at 33% more capacity anyway, and according to the specs you have to step up to the 1HP to see a big gain.

So much more convenient than opening a window and dumping cold air into the room though. LOL. Thanks to the dry air, there's no condensation even when the chiller is all the way down at its 3C minimum setting.


----------



## J7SC

...just a matter of time before one of you chaps goes for a cascading phase setup (you'll need a big garage and plenty of ear plugs)  

This one via Hexus.net:


----------



## yzonker

J7SC said:


> ...just a matter of time before one of you chaps goes for a cascading phase (you'll need big garage, plenty of ear plugs)
> 
> This one via Hexus.net:


Interesting to know what they would use for coolant. 





ABIT chase world records... and use some SERIOUS cooling to get there: "Anyone fancy a -100⁰C CPU core?" (hexus.net)


----------



## kryptonfly

yzonker said:


> Spent a little time optimizing the chiller. I added a 3rd pump at the chiller as well as a larger DIY res that holds about 3/4 of a gallon in order to stabilize temps better. The practical limit right now seems to be about 5C at the chiller. Water temp sensor at my GPU reads 7C though. It does go through the 2 internal rads and the CPU block before getting to the water temp sensor, so possibly that is contributing to the difference. All fans off of course. I set up a bios profile that shuts them all off, pretty easy to swap back and forth.
> 
> I saw as low as 22C average in PR, although this run was the highest score at 23C.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 095 in Port Royal
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Compared to cold air run,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result (www.3dmark.com)
> 
> 
> 
> 
> 
> Some of the difference is the VRAM runs warmer since the backplate doesn't have sub zero air blowing across it. I lose 50-100 Mhz because of this. (+1050 vs +1125)
> 
> Same story with TS although the cold air run probably had even colder water in this case,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result (www.3dmark.com)
> 
> 
> 
> 
> 
> I did beat my old TSE run by a small amount despite the higher temps. One thing that changed was that I had an easy opportunity to tune my CPU more for these colder temps, which I had not done previously.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 758 in Time Spy Extreme
> 
> 
> AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Compare to cold air run,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result (www.3dmark.com)
> 
> 
> 
> 
> 
> All in all I'm pretty happy with it. A bigger chiller would be even better, but I just couldn't justify paying 2x as much to get the 1/2HP unit; it's only rated at 33% more capacity anyway, and according to the specs you have to step up to the 1HP to see a big gain.
> 
> So much more convenient than opening a window and dumping cold air into the room though. LOL. Thanks to the dry air, there's no condensation even when the chiller is all the way down at its 3C minimum setting.


Interesting to see the temp impact, especially on the VRAM; maybe someday I'll try something crazy like this just for fun. For now my heavy WC performs pretty well at "normal" temps, but I think it prevents me from going beyond 1.081V on the GPU: my effective clock decreases a lot if I raise the GPU voltage above 1.081V. Surely I could gain some benefit and reduce power if the temp were below 40°C, but let's say it's academic for my needs.


----------



## yzonker

kryptonfly said:


> Interesting to see the temp impact, especially on the VRAM; maybe someday I'll try something crazy like this just for fun. For now my heavy WC performs pretty well at "normal" temps, but I think it prevents me from going beyond 1.081V on the GPU: my effective clock decreases a lot if I raise the GPU voltage above 1.081V. Surely I could gain some benefit and reduce power if the temp were below 40°C, but let's say it's academic for my needs.


Best analogy I can think of is Nitrous on cars. Fun to play with but not very usable on your daily commute. Every 10C seems to be around 200pts or a little more in PR. Not too hard to figure out where you would be if you dropped water temp 20C.
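That rule of thumb can be turned into a quick back-of-the-envelope calculation (a sketch; the ~200 pts per 10C slope is from the post above, and the example numbers are illustrative):

```python
def estimated_pr_score(base_score: float, base_temp_c: float,
                       new_temp_c: float, pts_per_10c: float = 200.0) -> float:
    """Extrapolate a Port Royal score from a change in water temp,
    assuming a roughly linear ~pts_per_10c points per 10 C scaling."""
    return base_score + (base_temp_c - new_temp_c) / 10.0 * pts_per_10c

# Illustrative: a 16077 run at 38C water, extrapolated to 20C water.
print(estimated_pr_score(16077, 38, 20))  # 16437.0
```

With "a little more" than 200 pts per 10C, that lands in the mid-16,4xx range, which matches the estimates traded later in the thread.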


----------



## kryptonfly

yzonker said:


> Best analogy I can think of is Nitrous on cars. Fun to play with but not very usable on your daily commute. Every 10C seems to be around 200pts or a little more in PR. Not too hard to figure out where you would be if you dropped water temp 20C.


You mean I would do around 16477 in PR at 20°C for the same freq? Independent of voltage, freq, and vram, let's say the same OC?
I scored 16 077 in Port Royal


----------



## elbramso

kryptonfly said:


> You mean I would do around 16477 in PR at 20°C for the same freq? Independent of voltage, freq, and vram, let's say the same OC?
> I scored 16 077 in Port Royal


Maybe even higher. For 38°C your card clocks pretty high 👍


----------



## yzonker

kryptonfly said:


> You mean I would do around 16477 in PR at 20°C for the same freq ? Independently of voltage, freq, vram, let's say same OC ?
> I scored 16 077 in Port Royal


Most likely you could get in that range, yes. You would get the most by tweaking the VF curve also. Usually I can increase the core offset some. Maybe 15-30mhz. Then the card boosts higher as well. As far as I can tell, the boost bins keep coming even at these low temps. Then I also gain 100-200mhz offset on the VRAM as well. So it's a combination of those 3 changes. My card only scores around 15.6-15.7k at ambient temps.


----------



## PLATOON TEKK

J7SC said:


> ...just a matter of time before one of you chaps goes for a cascading phase (you'll need big garage, plenty of ear plugs)
> 
> This one via Hexus.net:


Oh man, don’t tempt me. I’ve already gone way overboard.

The Dark Z690 came in yesterday. It's a clean ass board. Will throw the KPs in there and try to push the whole setup once more. This one supports NVLink, unlike that damn ASRock Aqua OC. 



Spoiler: Darkz690


----------



## jura11

PowerK said:


> Hi folks.
> Have had 3090 since its launch but been out of loop for several months.
> The other day, I flashed my 3090 with 1000W XOC vBIOS and tried see how much further it can go. And result was quite disappointing. (but was expected to be honest) I've never had fun overclocking GPU, be it under water or air from the time they introduced what they call 'GPU Boost' with GTX680 (Kepler) generation.
> Currently, with stock BIOS, my 3090 clocks at stable 2130-2145MHz with 993mV (around 330-380W power consumption) no matter what I throw at it (CP2007, 3DM PR, Metro Exodus EE, Control, DL2 etc)
> With XOC BIOS, I could go further: 2205-2220MHz with 1037mV for example. But power consumption goes over 520W (!!). And idle power consumption (with Wallpaper Engine running) seems to be doubled (to around 80W) compared to the stock vBIOS (30-35W). Not worth it for daily gaming rig from my experience.


Hi there 

Depending on the GPU you have right now: if you have a 3*8-pin GPU, I'm not sure I would run the XOC 1000W BIOS except for benchmarks, and that's it. As a 2*8-pin GPU owner, the XOC 1000W BIOS is the only option for me to get a higher power limit; yes, there is the option to shunt mod, but that's just not for me.... For a 3*8-pin GPU I would probably run the EVGA 520W BIOS as a daily BIOS 

If your RTX 3090 can do 2130-2145MHz in most games without running the XOC BIOS, then I don't think the XOC BIOS will be beneficial to you; the XOC BIOS is more intended for benchmarks or plebs with 2*8-pin GPUs hahaha 

My old RTX 3090 GamingPro wouldn't be able to run such clocks without the XOC BIOS, and 2205MHz or 2220MHz is impossible for my RTX 3090 hahaha. Such clocks I could achieve on my old Asus RTX 2080Ti Strix OC without question, but I have been quite unlucky with the RTX 3090 hahaha 

Yup, the XOC BIOS can pull up to 600-660W quite easily; in some games I could see 520-550W easily. The max I have seen is 580W, but that's in the past, I have sold both RTX 3090 GamingPros 


Hope this helps and best of luck there

Thanks, Jura


----------



## jura11

ArcticZero said:


> At the time the 3090 was released, I do recall quite a few people referring to PNY as a "low end" or "not the first choice" brand, and only taking it because there were no other available SKUs. Honestly I took it since it was the first one available as well, despite originally wanting a 3x8-pin card.
> 
> Then you had others reminding people PNY were one of only a few AIBs who made Quadros, the other being Leadtek. No brand preference or anything, just zero regrets on my own purchase, hindsight being 20/20 and all.


Hi there 

The same applies to me as well. I bought a Palit RTX 3090 GamingPro at the time, and if I were buying again, I wouldn't touch Palit, just due to the RMA process I went through 

They didn't want to approve my RMA because I had taken off the heatsink/cooler and replaced the thermal pads or used a waterblock. This shouldn't be a problem for you in the USA, but in the EU or UK I would personally get EVGA, which hands down has the best RMA 

Hope this helps and best of luck there

Thanks, Jura


----------



## andrew149

Has anyone ever had issues getting three 3090s to show up on an MSI Z690 motherboard?


----------



## PowerK

jura11 said:


> Hi there
> 
> Depending on the GPU which you have right now, if you have 3*8-pin GPU then not sure if I would run XOC 1000W BIOS maybe only for benchmarks and that's it, as 2*8-pin GPU owner XOC 1000W BIOS is only option for me to have higher power limit, yes there is option to shunt mod but that's just not for me.... For 3*8-pin GPU I would probably run EVGA 520W BIOS as daily BIOS
> 
> If yours RTX 3090 would do 2130-2145MHz in most of the games without the running XOC BIOS then I don't think XOC BIOS will be beneficial to you, XOC BIOS is more intended for benchmarks or plebs with 2*8-pin GPUs hahaha
> 
> My old RTX 3090 GamingPro wouldn't be able to run such clocks without the XOC BIOS, and 2205MHz or 2220MHz is impossible for my RTX 3090 hahaha. Such clocks I could achieve on my old Asus RTX 2080Ti Strix OC without question, but I have been quite unlucky with the RTX 3090 hahaha
> 
> Yup XOC BIOS can pull up to 600-660W quite easily, in some games I could see 520-550W easily, max what I have seen is 580W, but that's past, I have sold both RTX 3090 GamingPro's
> 
> 
> Hope this helps and best of luck there
> 
> Thanks, Jura


Hi Jura,
What happened (while I was out of the loop)? :-D If you've sold them, what do you game on these days?


----------



## yzonker

Well not very practical, but kinda cool. That's after about an hour of playing. Only at 400w on the GPU, but still a lot better than I can do with rads considering the ambient is up to 26C thanks to the chiller and the 3090 pumping out heat. lol


----------



## EB_Z590

I have a chance to purchase a EVGA 3090 FTW air cooled. ( 24G-P5-3987-KR )

I am curious if I can SLI this card with an existing 3090 FTW hybrid. (24G-P5-3988-KR )

Anyone have any experience with doing this? I would probably grab a hybrid kit and water cool this one too, but want to make sure that I can SLI them beforehand.

Thanks!


----------



## bearsdidit

EB_Z590 said:


> I have a chance to purchase a EVGA 3090 FTW air cooled.
> 
> I am curious if I can SLI this card with an existing 3090 FTW hybrid. (24G-P5-3988-KR )
> 
> Anyone have any experience with doing this? I would probably grab a hybrid kit and water cool this one too, but want to make sure that I can SLI them beforehand.
> 
> Thanks!


FYI, the link you posted is your purchasing link through evga. I got the same email today and ended up buying it for a friend. 

Good luck!


----------



## EB_Z590

bearsdidit said:


> FYI, the link you posted is your purchasing link through evga. I got the same email today and ended up buying it for a friend.
> 
> Good luck!


Thx, fixed that. Mind removing my quote?
Not sure if someone can use it other than me, but can never be too safe.

I'm really only interested in whether I can SLI them, so I guess we'll see.


----------



## yzonker

EB_Z590 said:


> Thx, fixed that. Mind removing my quote?
> Not sure if someone can use it other than me, but can never be too safe.
> 
> Im really only interested if i can SLI them, so I guess we'll see.


I'm sure you can. The cards are identical. There is a hybrid bios but it only changes the fan curves AFAIK. Worst case you flash the same bios on both and use a custom fan curve.


----------



## dr/owned

PLATOON TEKK said:


> What’s happening man. Hope you’ve been well brother.
> 
> The crazy ass chiller setup is still going well. My main issue at the moment is my dewpoint. Unlike my signature deceptively states, -12C was the dewpoint at the last location. At the moment I hover at lowest 0c.
> 
> [/SPOILER]


I tried and abandoned chillers several years ago. Just not worth the energy cost, and it didn't really make a difference towards max overclocks. I don't think the 3000 series pulls clocks until it gets to 45C or so, and I can hold that with just watercooling. The end game would have been a purge box to set my desktop in to remove all humidity, but there just wasn't the ROI to bother with it.


----------



## yzonker

dr/owned said:


> I tried and abandoned chillers several years ago. Just not worth the energy cost, and it didn't really make a difference towards max overclocks. I don't think the 3000 series pulls clocks until it gets to 45C or so, and I can hold that with just watercooling. The end game would have been a purge box to set my desktop in to remove all humidity, but there just wasn't the ROI to bother with it.


No it keeps boosting higher all the way down to 20C at least which is as low as I've gotten mine. 

But the chiller isn't really for daily driving because of the various limitations. Still a lot of fun to mess with though.


----------



## dr/owned

yzonker said:


> No it keeps boosting higher all the way down to 20C at least which is as low as I've gotten mine.
> 
> But the chiller isn't really for daily driving because of the various limitations. Still a lot of fun to mess with though.


But you can force it with the VF curve by over-overclocking? It's been 18 months since I've really messed with it but I thought it was only when voltage got pulled with temperature that you started getting screwed.

(Didn't help that I have a mediocre chip that doesn't clock higher even with more voltage given over I2C directly).


----------



## yzonker

Yea clocks at the same voltage keep increasing. Benchmark scores will increase without touching the VF curve. So much so that I generally can't add much more manually. 15-30mhz at most.


----------



## sew333

ManniX-ITA said:


> Look also for nvidia-smi on this thread to learn how to lock the clock frequency
> 
> 
> 
> I wouldn't focus too much on it, can happen.
> Just once, wouldn't worry.
> A few days ago one of my monitor after shutdown started switching on from standby every 5 mins without any reason.
> At next start & shutdown stopped doing it...


Yes, the monitor just turned black and then its OSD showed the "no signal" message.

But why did the OSD show the "no signal" message 10 seconds after shutdown? When I shut down the PC normally, the monitor stays in standby mode without a no-signal message.


----------



## J7SC

yzonker said:


> Yea clocks at the same voltage keep increasing. Benchmark scores will increase without touching the VF curve. So much so that I generally can't add much more manually. 15-30mhz at most.


...yes, boost reduction starts early with temps. 

In other news, loving Forza H5 on w-cooled Strix with everything at or beyond extreme and 4096 x 2160 / LG C1 at over 100 fps +-


----------



## yzonker

J7SC said:


> ...yes, boost reduction starts early with temps.
> 
> In other news, loving Forza H5 on w-cooled Strix with everything at or beyond extreme and 4096 x 2160 / LG C1 at over 100 fps +-


I'm not sure if there is a bottom to the auto boost or not. I was thinking I'd try getting it as cold as possible and then let the loop warm up while logging just to see what it does from 20C or less. If I use a relatively low PL I can probably start from around 10-15C core. At least see if I can put that debate to bed. 

I have a C1 also now, probably mentioned it before. Need to try FH5. Are you using the ray tracing mod?
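For the warm-up logging described above, `nvidia-smi` (which ships with the driver) can dump clocks versus temperature to a CSV with no extra software; a minimal sketch (the output filename is mine):

```shell
# Sample core clock, GPU temperature and board power once per second
# until interrupted; these are standard nvidia-smi query fields.
nvidia-smi --query-gpu=timestamp,clocks.sm,temperature.gpu,power.draw \
           --format=csv -l 1 > boost_vs_temp.csv
```

Plotting clocks against temperature from that log would show directly whether the auto boost bins keep coming below 20C.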


----------



## PLATOON TEKK

dr/owned said:


> Did and abandoned chillers several years ago. Just not worth the energy cost and it didn't really make a difference towards max overclocks. Like I don't think the 3000 series pulls clocks until it gets to 45C or so and I can hold that with just watercooling. The end game would have been a purge box to set my desktop in to remove all humidity but there just wasn't the ROI to bother with it.


I agree there is quite a bit of hassle involved, and chillers present risks that aren't present normally.

However, in my case it allows all systems across the entire house to be 100% fan free and silent. They are all cooled by chillers under the house that run 24/7, so my PCs act more like terminals and all look like showcases/benches. When I want to run 3DMark I use the QDCs to combine all the chillers into one loop.

In terms of temp increasing score: the only time I've ever run my setup at 60% capacity, I got the highest score on water in Port Royal. Going from 5th on water to 1st was done ONLY by adjusting the chillers, not any clocks.

When I spoke to the people at xdevs a while back, they mentioned peaks of 35C and under at the least, before even adding voltage, if you want to go proper "xoc".

This is an excerpt about the 2080ti KP and temps. I can also absolutely 1000% confirm that temps AND pump speed (with increasing benefits up to 5 l/min) help push clocks

















Once I have the time and figure out how to tweak humidity a bit more, I'd be genuinely curious to see the stats at -20C coolant.

Overall, is this all worth it? Probably not lol.


----------



## J7SC

yzonker said:


> I'm not sure if there is a bottom to the auto boost or not. I was thinking I'd try getting it as cold as possible and then let the loop warm up while logging just to see what it does from 20C or less. If I use a relatively low PL I can probably start from around 10-15C core. At least see if I can put that debate to bed.
> 
> I have a C1 also now, probably mentioned it before. Need to try FH5. Are you using the ray tracing mod?


...just using the 'built-in' max Ray Tracing preset...link to the mod ?


----------



## yzonker

J7SC said:


> ...just using the 'built-in' max Ray Tracing preset...link to the mod ?


Link in this article, but I obviously haven't tried it since I don't have the game (yet). I only knew about it because @des2k... mentioned it shortly after the game released. 









Forza Horizon 5 PC modded to add ray tracing in-game (www.eurogamer.net)


----------



## J7SC

yzonker said:


> Link in this article, but I obviously haven't tried it since I don't have the game (yet). I only knew about it because @des2k... mentioned it shortly after the game released.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Forza Horizon 5 PC modded to add ray tracing in-game (www.eurogamer.net)


Thanks. Do you know what the difference is versus the regular-menu ray tracing?


----------



## yzonker

PLATOON TEKK said:


> I agree there is quite a bit of hassle involved, and chillers present risks that aren't present normally.
> 
> However, in my case it allows all systems across the entire house to be 100% fan free and silent. They are all cooled by chillers under the house that run 24/7, so my PCs act more like terminals and all look like showcases/benches. When I want to run 3DMark I use the QDCs to combine all the chillers into one loop.
> 
> In terms of temp increasing score: the only time I've ever run my setup at 60% capacity, I got the highest score on water in Port Royal. Going from 5th on water to 1st was done ONLY by adjusting the chillers, not any clocks.
> 
> When I spoke to the people at xdevs a while back, they mentioned peaks of 35C and under at the least, before even adding voltage, if you want to go proper "xoc".
> 
> This is an excerpt about the 2080ti KP and temps. I can also absolutely 1000% confirm that temps AND pump speed (with increasing benefits up to 5 l/min) help push clocks
> View attachment 2549818
> 
> 
> View attachment 2549820
> 
> Once I have the time and figure out how to tweak humidity a bit more, I'd be genuinely curious to see the stats at -20C coolant.
> 
> Overall, is this all worth it, probably not lol.


Well, maybe I don't need to run that test I mentioned above. Lol. 

I have to admit, after running that little gaming test the other day, I was immediately considering doing the same thing. My machine is adjacent to the furnace room in my house; I would only need a 10-12 foot run to do it. Dump all the heat into another room with no/minimal fans running in the machine. I could also put a MO-RA3 in there so I wouldn't need the chiller all the time.


----------



## yzonker

J7SC said:


> Thanks . Do you know what the difference is to the regular-menu ray tracing ?


Unless they changed the game, originally it was released with ray tracing active only in the garage/livery (whatever they call it).


----------



## J7SC

yzonker said:


> Unless they changed the game, originally it was released with only ray tracing active in the garage/livery (whatever they call it).


...this is how I've set it (just got the game a few days ago, so latest version/patch).


----------



## kryptonfly

I'm asking myself lots of questions about the RTX 4000 series. According to leaks, the 4060 should perform almost the same as a 3090, with just 8 GB! No way. My worries:


Should I sell my 3090 before the 4000-series launch? The 3000s will lose a lot of value at that point, I think.
We don't know the 4000s' cost at all (close to MSRP or not), because of mining and stock at that time.
Will the performance be like the leaks?


Did anyone plan to upgrade? I've seen on eBay the same 3090 Turbo as mine at 2159€ with 2 bidders on it, so I could easily sell it in this price range.


----------



## GreatestChase

kryptonfly said:


> I'm asking myself lots of questions about the RTX 4000 series. According to leaks, the 4060 should perform almost the same as a 3090, with just 8 GB! No way. My worries:
> 
> 
> Should I sell my 3090 before the 4000-series launch? The 3000s will lose a lot of value at that point, I think.
> We don't know the 4000s' cost at all (close to MSRP or not), because of mining and stock at that time.
> Will the performance be like the leaks?
> 
> View attachment 2549965
> 
> Did anyone plan to upgrade? I've seen on eBay the same 3090 Turbo as mine at 2159€ with 2 bidders on it, so I could easily sell it in this price range.


If there's anything I learned from watching others during this past release, it's that you should definitely not get rid of your card until you have the card you want in your hands. It took me about a year to get my KP card from when it was released. I was lucky and was able to get my hands on other cards before then, but many people weren't and still don't have the cards they would like to upgrade to. I'll definitely try to get a 4090 or whatever the flagship card is when the time comes, but I think I'll probably hang on to my 3090 KP. It's been my first real venture into XOC and I'll probably just keep it as a collector's item once I get my hands on the next generation.


----------



## yzonker

If the next gen cards have anywhere near the rumored performance, I'll get one. The question is, which one will it be, AMD or Nvidia?


----------



## kryptonfly

GreatestChase said:


> If there's anything I learned from watching others during this past release, it's that you should definitely not get rid of your card until you have the card you want in your hands. It took me about a year to get my KP card from when it was released. I was lucky and was able to get my hands on other cards before then, but many people weren't and still don't have the cards they would like to upgrade to. I'll definitely try to get a 4090 or whatever the flagship card is when the time comes, but I think I'll probably hang on to my 3090 KP. It's been my first real venture into XOC and I'll probably just keep it as a collector's item once I get my hands on the next generation.


I understand your point of view, thank you for sharing your thoughts 
I agree, this 3000 gen is still too rare for common human beings. I was doubly lucky because I got my 3090 Turbo in January 2021 and it runs pretty well with good WC and shunts + XOC BIOS. Better to wait a little longer until we have official news; if the RTX 4080 and beyond cost 3000€+ at launch, I won't upgrade, and I'll have been right to keep my 3090 even longer.


----------



## geriatricpollywog

kryptonfly said:


> I understand your point of view, thank you for sharing your thoughts
> I agree, this 3000 gen is still too rare for common human beings. I was doubly lucky because I got my 3090 Turbo in January 2021 and it runs pretty well with good WC and shunts + XOC BIOS. Better to wait a little longer until we have official news; if the RTX 4080 and beyond cost 3000€+ at launch, I won't upgrade, and I'll have been right to keep my 3090 even longer.


I don’t see the 3090 street price falling below $1500 after the 4000 series launch since it has a whopping 24gb vram and is non-LHR. Good chance the 4060/4070 will be LHR and have 8-12gb vram.


----------



## dr/owned

PLATOON TEKK said:


> I agree there is quiet a bit of hassle involved and there are risks chillers present that aren’t present normally.
> 
> However, in my case it allows all systems across the entire house to be 100 fan free and silent. They are all being cooled by chillers under the house that run 24/7. So it allows my PCs to act more like terminals and all look like showcases/benches. When I want to do 3dMark I use the QDCs to combine all chillers into one loop.
> 
> Once I have the time and figure out to tweak humidity a bit more, I’d be genuinely curious to see the stats at -20c coolant.
> 
> Overall, is this all worth it, probably not lol.


When I was just messing around with it on a bench-type setup to see how crazy I could get, I put a 5-gallon bucket in an insulated box and bypassed the thermostats in the chillers. I was slamming 1.7V into a CPU, direct-die cooled, pushing for 5.5GHz when 5.2GHz was considered a great result (7700K). I found that I could race condensation and get some benchmarks off before it started dripping. 

The end game for me, though, was always (10 years now) a whole-house loop; just the cooling of the loop changed several times. Chillers, then a giant radiator, then now tap water (about +$60 a month in water for me). Rock-solid temperature control that I can hold forever until the earth warms (which it does in the summer by a couple of degrees). If I had a swimming pool in the backyard I'd have used that. (A geothermal loop isn't really feasible with the soil I've got.) Anyway, the wisdom I'm getting at: as you showed in the table, the gains from going to -20C are pretty minor compared to LN2, so whatever landing you want to make, assume it ends with ambient water again and build for that.

I gotta give you props though. Far as I knew until I read your posts I was the only other psycho that went down this road trying to make 24/7 subambient work.


----------



## GreatestChase

This monster was delivered yesterday. Hopefully going to get it set up this weekend. Quick question that maybe you guys are able to answer; I’ve looked through the manual but wasn’t able to find the answer I was looking for. Since the Dark doesn’t have HDMI or DisplayPort outputs, am I able to use the USB-C on the motherboard to run the iGPU? I typically run my secondary monitor off the iGPU. Just wanted to check and see if anyone knew before I went and bought a USB-C to HDMI cable.


----------



## J7SC

geriatricpollywog said:


> I don’t see the 3090 street price falling below $1500 after the 4000 series launch, since it has a whopping 24 GB of VRAM and is non-LHR. There's a good chance the 4060/4070 will be LHR and have 8-12 GB of VRAM.


...then there's also the question of 'competing leaks'. Sometimes leaks turn out to be accurate, but they should still be taken with lots of salt; after all, the more 'dramatic' the post, the more eyeball$. Anyhow, I've seen and read everything from '4090 is more than twice as fast as 3090' to a more recent article suggesting that the basic design is much closer to current Ampere than folks have suggested - though with a die shrink and some tweaking will come higher clocks and thus better performance. How much better? We'll just have to wait. For now, my well-cooled 3090 clocks superbly well and does everything I want it to at 4096x2160. I may / may not upgrade, have to see a 4090 in action first...

...finally, world politics looks set for a lot more uncertainty for a while yet, which can spill over into Asia and supply chains. From that perspective alone, it probably makes sense to hold on to a 3090 for now.


----------



## geriatricpollywog

J7SC said:


> ...then there's also the question of 'competing leaks'. Sometimes leaks turn out to be accurate, but they should still be taken with lots of salt; after all, the more 'dramatic' the post, the more eyeball$. Anyhow, I've seen and read everything from '4090 is more than twice as fast as 3090' to a more recent article suggesting that the basic design is much closer to current Ampere than folks have suggested - though with a die shrink and some tweaking will come higher clocks and thus better performance. How much better? We'll just have to wait. For now, my well-cooled 3090 clocks superbly well and does everything I want it to at 4096x2160. I may / may not upgrade, have to see a 4090 in action first...
> 
> ...finally, world politics looks set for a lot more uncertainty for a while yet, which can spill over into Asia and supply chains. From that perspective alone, it probably makes sense to hold on to a 3090 for now.


For leaks I turn to Moore’s Law Is Dead. I haven’t watched any 4090 content recently. I think the extra performance will come at the expense of TDP; after all, the 3080 is only about 5% more efficient than the 2080 Ti when you look at frames per watt. And if the 4090 is LHR like every new consumer-oriented Nvidia product, the 3090 will still command high prices.

And if things go south, I could see Taiwanese chip production allocated solely for the Chinese defense industry and western chip fabs being sequestered for Raytheon products.


----------



## yzonker

I doubt the 4090 will be LHR for the same reasoning the 3090 isn't. 

And the architecture being the same doesn't necessarily mean they won't pile in the cores to increase performance given the extra real estate they will have at 5nm.


----------



## davidm71

Hi guys,

Picked up a 3090 Asus Strix OC at Microcenter, but I'm having doubts about whether I should hold onto it, as I have not broken the plastic seal yet. I mostly game at 2560x1440. I'm currently selling four of my old 980 Tis and 1080s on eBay and have raised almost half the cost of the Strix, but the price feels crazy expensive and I'm worried that in six months the 4090 will blow this out of the water. Should it stay or should it go?
Thanks


----------



## andrew149

davidm71 said:


> Hi guys,
> 
> Picked up a 3090 Asus Strix OC at Microcenter but having doubts if I should hold onto it as have not broken the plastic seal yet. Mostly game at 2560x1440. Currently selling 4 of my old 980 Tis and 1080s on eBay and have raised almost half the cost of the Strix but feel the price is crazy expensive and worried in 6 mos the 4090 will blow this out of the water. Should it stay or should it go?
> Thanks


That 3090 is gonna last you a long time man.


----------



## yzonker

davidm71 said:


> Hi guys,
> 
> Picked up a 3090 Asus Strix OC at Microcenter but having doubts if I should hold onto it as have not broken the plastic seal yet. Mostly game at 2560x1440. Currently selling 4 of my old 980 Tis and 1080s on eBay and have raised almost half the cost of the Strix but feel the price is crazy expensive and worried in 6 mos the 4090 will blow this out of the water. Should it stay or should it go?
> Thanks


Yea, if you can make up quite a bit of the cost, out with the old and in with the new. You'll be able to sell the Strix for good money too most likely.


----------



## J7SC

davidm71 said:


> Hi guys,
> 
> Picked up a 3090 Asus Strix OC at Microcenter but having doubts if I should hold onto it as have not broken the plastic seal yet. Mostly game at 2560x1440. Currently selling 4 of my old 980 Tis and 1080s on eBay and have raised almost half the cost of the Strix but feel the price is crazy expensive and worried in 6 mos the 4090 will blow this out of the water. Should it stay or should it go?
> Thanks


I've enjoyed my (custom water-cooled) Strix OC for over a year now. Even at 4K+ / Ultra gaming, I've yet to wish for a faster card. Also, per my earlier comment, who knows what the supply chains will look like six months from now.


----------



## MrTOOSHORT

I almost bought a 3080ti Strix today, but for that money, I'd rather a 3090 Strix which wasn't in stock. If it was, I'd be benching as we type!


----------



## J7SC

MrTOOSHORT said:


> I almost bought a 3080ti Strix today, but for that money, I'd rather a 3090 Strix which wasn't in stock. If it was, I'd be benching as we type!


 ...I benched it the 1st day I got it - put it into an older X99 and broke Port Royal 15K on stock bios / air right away...now it's not stock and not on air  ...mostly though, I'm using it to drive a C1 OLED with everything set to 4K and full eye-candy.


----------



## Zogge

J7SC said:


> ...I benched it the 1st day I got it - put it into an older X99 and broke Port Royal 15K on stock bios / air right away...now it's not stock and not on air  ...mostly though, I'm using it to drive a C1 OLED with everything set to 4K and full eye-candy.


Same here! 2160-2175 MHz gaming in 4K with all the eye candy on. 3090 Strix OC under water, on a 48-inch CX OLED.


----------



## davidm71

MrTOOSHORT said:


> I almost bought a 3080ti Strix today, but for that money, I'd rather a 3090 Strix which wasn't in stock. If it was, I'd be benching as we type!


Yeah, that's why I got a 3090. I really wanted an EVGA 3080 12GB or a 3080 Ti (non-LHR if available), but the price difference was so small it didn't make sense.


----------



## yzonker

I'm still learning all the intricacies of adjusting the VF curve. Can anyone explain why the first curve below seems to always offer considerably better stability than the second one? Is it because it locks the frequency better - less bounce - even though it doesn't really appear to bounce with either? There's no doubt it consistently works better, though. If I try to set the bottom curve with a +195 core offset it's a guaranteed crash within a few minutes, but the top curve seems stable (I don't have a huge number of hours on it yet, but it's still very obviously better).


----------



## kryptonfly

@yzonker : With the curve above, you're at 956 mV / ~2055 MHz, right? In the second curve, the points are too close to each other, and GPU Boost tends to decrease frequency by moving to the point on the left; you want a clear step, up to the 60 MHz maximum, between the target point and the one to its left (the curve can't step more than 60 MHz between points anyway). If you use SMI, you just set a point higher and apply it (e.g. 2085 MHz at 956 mV), ideally +30/+45 MHz above your target, because under load you lose about 15 MHz from the target. Do you have instabilities with SMI?
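For anyone who wants to script the SMI route rather than reshape the whole curve, here's a minimal sketch of pinning the core clock with `nvidia-smi`. The 2085 MHz / 956 mV figures are just the example values from the post; locked clocks need admin rights and a reasonably recent driver, and the helper names here are my own:

```python
import subprocess

def lock_gpu_clocks_cmd(min_mhz, max_mhz, gpu=0):
    """Build the nvidia-smi command that pins the core clock range.

    nvidia-smi --lock-gpu-clocks takes a "min,max" pair in MHz;
    setting min == max holds the card at one frequency.
    """
    return ["nvidia-smi", "-i", str(gpu), "--lock-gpu-clocks", f"{min_mhz},{max_mhz}"]

def reset_gpu_clocks_cmd(gpu=0):
    """Build the command that releases the lock again."""
    return ["nvidia-smi", "-i", str(gpu), "--reset-gpu-clocks"]

# To apply (needs admin/root and an NVIDIA driver installed):
#   subprocess.run(lock_gpu_clocks_cmd(2085, 2085), check=True)
```

With the VF curve otherwise left alone, locking min = max at the target frequency effectively holds the card at whatever voltage point the curve maps to that frequency, which is the behavior being discussed here.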


----------



## yzonker

kryptonfly said:


> @yzonker : With the curve above, you're at 956 mV / ~2055 MHz, right? In the second curve, the points are too close to each other, and GPU Boost tends to decrease frequency by moving to the point on the left; you want a clear step, up to the 60 MHz maximum, between the target point and the one to its left (the curve can't step more than 60 MHz between points anyway). If you use SMI, you just set a point higher and apply it (e.g. 2085 MHz at 956 mV), ideally +30/+45 MHz above your target, because under load you lose about 15 MHz from the target. Do you have instabilities with SMI?


I haven't been using SMI. I've just been experimenting with different curve shapes. I should have pointed out that the bottom curve only has a +150 offset; it's what I've normally been gaming with. But if I moved that curve to +195 it would not be stable at all.

I've been using the top curve for benchmarking all along (but with the rightmost point at 1100 mV) and found it more stable; I just hadn't had time to test it for gaming.


----------



## kryptonfly

@yzonker : For gaming it's pointless to run your highest OC; it's better to lock FPS so you have headroom when needed. I play at 80 fps with DLDSR (plus DLSS in-game when available) and it's extremely efficient (DLDSR alone is already 2x more efficient according to Nvidia). I don't want to play at 450W+ for sometimes 5+ hours straight. It's just my way to play; I'm even testing a curve at 750 mV to see what the max frequency is.


----------



## yzonker

kryptonfly said:


> @yzonker : For gaming it's pointless to run your highest OC; it's better to lock FPS so you have headroom when needed. I play at 80 fps with DLDSR (plus DLSS in-game when available) and it's extremely efficient (DLDSR alone is already 2x more efficient according to Nvidia). I don't want to play at 450W+ for sometimes 5+ hours straight. It's just my way to play; I'm even testing a curve at 750 mV to see what the max frequency is.


Yea, I've been working my way down in power over time. I get tired of the room getting hot from the machine dumping out so much heat. But if a curve like the one we're talking about can gain 2 or 3 more bins and be 100% stable, that would be a good combination with a lower voltage point.

I did get my chiller set up in the furnace room and ran it for a couple of hours today. Not something I'm going to use all the time, but it's an interesting setup: dump all the heat into another room with zero fans running on the machine. The chiller is just barely audible through the wall. It settled in at 13-14C using that curve I posted. Mostly played RDR2, around 450w.


----------



## kryptonfly

yzonker said:


> Yea I've been working my way down in power over time. I get tired of the room getting hot from the machine dumping out so much heat. But if a curve like the one we're talking about can gain 2 or 3 more bins and be 100% stable, then that would be a good combination with a lower voltage point.
> 
> I did get my chiller set up in the furnace room. Ran it for a couple of hours today. Not something I'm going to use all the time, but it's an interesting setup. Dump all the heat in to another room with zero fans running on the machine. Chiller is just barely audible through the wall. It settled in at 13-14C using that curve I posted. Mostly played RDR2, around 450w.


Yep, I agree with your point of view; every MHz we gain can be converted into better efficiency on the other side. But I'm playing with zero fans running on the GPU too: just my 5x 360mm rads and one Phobya DC12-400 at 5v, with the GPU around 185W/35°C at 80 fps with DLDSR (+DLSS when available) and 20°C ambient. The load is so light in some games that the clock sits around 1100 MHz, below my curve. In games which need more power I usually set 12v for the pump plus my 15 fans at 5v; my PC case is louder than my 3090. Honestly, I'm surprised by my water cooling - thanks to the ML fans, maybe.

Btw, I would like to improve my VRAM temperature: when I turn on my PC, VRAM is at 48°C even at 14°C ambient or lower, but usually 50-52°C at 20°C ambient, and it reaches just 62°C after many hours of gaming - only 10°C more under load. I've never seen it below 48°C. Maybe thermal putty would help. Is it easy to remove? Does it become solid? Doesn't the backplate touch components without pads? I guess we can put it behind the GPU and on the power stages? Thanks


----------



## yzonker

kryptonfly said:


> Yep, I agree with your point of view; every MHz we gain can be converted into better efficiency on the other side. But I'm playing with zero fans running on the GPU too: just my 5x 360mm rads and one Phobya DC12-400 at 5v, with the GPU around 185W/35°C at 80 fps with DLDSR (+DLSS when available) and 20°C ambient. The load is so light in some games that the clock sits around 1100 MHz, below my curve. In games which need more power I usually set 12v for the pump plus my 15 fans at 5v; my PC case is louder than my 3090. Honestly, I'm surprised by my water cooling - thanks to the ML fans, maybe.
> 
> Btw, I would like to improve my VRAM temperature: when I turn on my PC, VRAM is at 48°C even at 14°C ambient or lower, but usually 50-52°C at 20°C ambient, and it reaches just 62°C after many hours of gaming - only 10°C more under load. I've never seen it below 48°C. Maybe thermal putty would help. Is it easy to remove? Does it become solid? Doesn't the backplate touch components without pads? I guess we can put it behind the GPU and on the power stages? Thanks


I'm not sure if the putty really beats good pads or not; I'm thinking it doesn't. I dropped about 6C when I switched to the HKV block (and switched from Gelid Extremes to putty), but the HKV only uses a 0.5mm pad on the front and 1mm on the back, so it has an advantage there (Corsair was 1mm/2mm).

When I asked @J7SC, the answer was that the putty isn't all that difficult to clean up. I don't think it dries out, at least not in the time frame we'd probably care about (a year at most, since we can't leave stuff alone. Lol).

You can put the putty anywhere pretty much. Not conductive.


----------



## kryptonfly

yzonker said:


> I'm not sure if the putty really beats good pads or not. I'm thinking it doesn't. I dropped about 6C when I switched to the HKV block (and switched from Gelid Extremes to putty), but the HKV only uses a 0.5mm pad on the front and 1mm on the back. So it has an advantage there (Corsair was 1mm/2mm).
> 
> When I asked @J7SC, the answer was that the putty isn't all that difficult to clean up. I don't think it dries out, at least not in the time frame we'd probably care (a year at most since we can't leave stuff alone. Lol).
> 
> You can put the putty anywhere pretty much. Not conductive.


I think I have to test; it's better to see for ourselves with this kind of experimenting. I have no choice but to keep my Bykski WB because there's nothing better available for my Gigabyte. What about the backplate touching components? That's already the case with 0.5mm pads.


----------



## andrew149

kryptonfly said:


> I think I have to test; it's better to see for ourselves with this kind of experimenting. I have no choice but to keep my Bykski WB because there's nothing better available for my Gigabyte. What about the backplate touching components? That's already the case with 0.5mm pads.


I've had great luck with these pads; they're custom cut to the correct size and thickness.





kriticalpads.com


----------



## PLATOON TEKK

I’ve done it - gone subzero, and subsequently "overheated" the KP's memory, apparently, ha. ⚰⚰ I’m assuming that when liquid temp is at 0C and below, the LED reads 100C instead. I will be unplugging the LEDs over the next few days anyway for the spare watts, but it was interesting to see. They also flash green when “overheating”. The screens boot up normally at around 2-3C.

Some interesting observations: how swiftly the flow decreases below 0 - it went from 244 l/h to as low as 122 l/h (makes sense). Also, the liquid arrives around 2 degrees warmer at the bench from the friction and heat off all the pumps needed. I will be adding a further 3 PMP-500s; I'm seeing benefits up to around 5 l/min.

I had to enable LN2 mode for a stable boot on the Z690 Dark at subzero, which I was somewhat surprised at. Lastly, regardless of all the cooling and dehumidifying efforts in place, living on the coast plus the wattage makes the dew point rise within the hour.

When I have a bit more time in the next few days I’ll tweak the new system and voltages and run a proper bench at these temps.

Was peaking at around 1300w with a 12900k. Also, the fans are absolutely not for cooling; they just help keep dew at bay up to a few degrees past the dew point (dangerous territory).

----------



## GreatestChase

PLATOON TEKK said:


> I’ve done it, gone subzero and subsequently overheated the KPs memory apparently, ha. ⚰⚰ I’m assuming when liquid temp is at 0c and below the LED assumes 100c instead. I will be unplugging the LEDs over the next few days anyways for the spare watts, but was interesting to see. They also flash green when “overheating”. The screens boot up normally at around 2-3C.
> 
> Some interesting observations I’ve had is how swiftly the flow decreases below 0, went from 244 l/h to as low as 122 l/h (makes sense). Also, the liquid arrives around 2 degrees warmer at the bench through the friction and heat off all the pumps needed. I will be adding a further 3 PMP 500s. Am seeing benefits up to around 5 l/m.
> 
> Had to enabled ln2 mode for stable boot on darkz690 at subzero which I was somewhat surprised at. Lastly, regardless of all the cooling and dehumidifying efforts in place, living on the coast and the wattage makes the dewpoint rise within the hour.
> 
> When I have a bit more time in the next few days I’ll tweak the new system, voltages and run proper bench at these temps.
> 
> Was peaking at around 1300w with a 12900k.


I noticed when benching as well that when I went subzero I started getting the mem overheat warning. I also had a ton of issues with cold boots and system freezes when I was using my MSI Z690 Carbon board: if my CPU went below around 20C during boot I would often get stuck at the motherboard splash screen, and if it got close to 0C while the system was running it would freeze. It hasn't been cold enough since I got the Z690 Dark board for me to do any benching, so I haven't had an opportunity to test that out yet, unfortunately.


----------



## yzonker

kryptonfly said:


> I think I have to test; it's better to see for ourselves with this kind of experimenting. I have no choice but to keep my Bykski WB because there's nothing better available for my Gigabyte. What about the backplate touching components? That's already the case with 0.5mm pads.


Like you say, I think the only way to know which is best is to test them. Those Kritical pads @andrew149 mentioned have been getting a lot of good feedback too. I was tempted to try them with the HK block, but I already had the putty and wanted to try it.

I did see a +50 to +100 increase in stable mem speed. That seems like a lot for the 6C decrease in mining temp; possibly the putty covers the chip better than a pad - @J7SC has mentioned that more than once.

And I used the putty for front and back mem, although I used Gelid Ultimates on the VRM.


----------



## yzonker

PLATOON TEKK said:


> I’ve done it, gone subzero and subsequently overheated the KPs memory apparently, ha. ⚰⚰ I’m assuming when liquid temp is at 0c and below the LED assumes 100c instead. I will be unplugging the LEDs over the next few days anyways for the spare watts, but was interesting to see. They also flash green when “overheating”. The screens boot up normally at around 2-3C.
> 
> Some interesting observations I’ve had is how swiftly the flow decreases below 0, went from 244 l/h to as low as 122 l/h (makes sense). Also, the liquid arrives around 2 degrees warmer at the bench through the friction and heat off all the pumps needed. I will be adding a further 3 PMP 500s. Am seeing benefits up to around 5 l/m.
> 
> Had to enabled ln2 mode for stable boot on darkz690 at subzero which I was somewhat surprised at. Lastly, regardless of all the cooling and dehumidifying efforts in place, living on the coast and the wattage makes the dewpoint rise within the hour.
> 
> When I have a bit more time in the next few days I’ll tweak the new system, voltages and run proper bench at these temps.
> 
> Was peaking at around 1300w with a 12900k. Also the fans are absolutely not for cooling, they do help keep dew at bay up to a few degrees past dewpoint (dangerous territory).


That's amazing and scary at the same time. I keep thinking you probably just bought a single chiller originally, like I recently did, and then went down the rabbit hole.

BTW, what are you using for pumps and water line? The normal water-cooling flexible line only comes in short 3m pieces. Lol. I've just got a janky setup right now for testing with cheap stuff, but I need to get the better stuff for the long term.

Even with my much simpler setup, I see a 1-2C increase in temp from the chiller to the water temp sensor on the inlet of my GPU block.


----------



## yzonker

GreatestChase said:


> I noticed when benching as well that when I went subzero I started getting the mem overheat warning. I also have had a ton of issues with cold booting and system freezing when I was using my MSI Z690 Carbon board. If my cpu went below around 20 C during boot I would often get stuck at the motherboard splash screen, and if I got close to 0 C while the system was running it would freeze. It hasn't been cold enough since I got the Z690 dark board for me to do any benching so I haven't had an opportunity to test that out yet unfortunately.


Time to get that chiller. 😁


----------



## Bobbylee

Hi guys, sorry if this has been answered previously. I did try searching, but 1000 pages is a lot.

Anyway, I have a PNY 3090 which has 2x 8-pin. I have been running the 390w Gaming OC ReBAR vBIOS. I have used the 1000w KP BIOS in the past for gaming and benchmarks; however, I mine on this card when not gaming, so I reverted to a normal vBIOS as I don't have the know-how to be confident I won't damage my card. So I wondered what the risks are when using the 1000w KP vBIOS for mining if the power limit is set to 300w (front and back waterblock; currently VRAM temps are 60c when mining at 300w with this vBIOS). Has anyone here locked core voltage, mem voltage, etc. using nvidia-smi to mine on this BIOS, and then kept the benefit of the higher power limits for gaming? I'm not worried about temps - I did try mining for a quick stint and was actually getting a better hash rate at lower power draw - but I wonder if memory voltage is too high or something?

Or is this just a bad idea? Any tips or advice appreciated.


----------



## yzonker

I've been mining on the KP 1kW BIOS for a year now without issue. Even with the PL set to 100%, it will only pull ~400w, so that's not an issue. The primary risk is damaging the card from temps going out of control if something fails in your cooling; I have dual D5 pumps for this reason. I also have HWINFO set up to shut down the machine if temps go higher than normal.

Edit: I lock the lowest voltage point with Afterburner and then max the mem out as high as possible.
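The HWINFO-style shutdown failsafe can also be approximated with a small script. A hedged sketch, assuming `nvidia-smi` is on PATH; the 70C trip point, function names, and polling interval are my own choices, and the shutdown command is OS-specific:

```python
import subprocess
import time

TRIP_C = 70  # example trip point: set it a few degrees above your normal max

def parse_gpu_temp(raw):
    """Parse output of: nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader"""
    return int(raw.strip().splitlines()[0])

def over_limit(temp_c, trip_c=TRIP_C):
    """True once the core temperature reaches the trip point."""
    return temp_c >= trip_c

def watchdog(poll_s=10):
    """Poll the GPU temperature and shut the machine down if it trips."""
    while True:
        raw = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
            text=True,
        )
        if over_limit(parse_gpu_temp(raw)):
            # Windows syntax; use ["shutdown", "-h", "now"] on Linux
            subprocess.run(["shutdown", "/s", "/t", "0"])
            return
        time.sleep(poll_s)

# watchdog()  # run alongside the miner
```

This only watches the core sensor; VRAM junction temperature isn't exposed by `nvidia-smi`, which is why tools like HWINFO are still the better fit for that particular failure mode.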


----------



## Bobbylee

yzonker said:


> I've been mining on the KP 1kw bios for a year now without issue. Even with the PL set to 100%, it will only pull ~400w, so that's not an issue. Primary risk is damaging the card from temps going out of control if something fails in your cooling. I have dual D5 pumps for this reason. I also have HWINFO set up to shut down the machine if temps go higher than normal.


Thank you for the swift response. I actually have two D5s and a DDC in the loop, with similar failsafes in place already. Have you noticed any memory degradation at all since running?


----------



## yzonker

Bobbylee said:


> Thank you for the swift response. I actually have two D5s and a DDC in the loop, with similar failsafes in place already. Have you noticed any memory degradation at all since running?


Nope. My max mem overclock has just increased over time as I've improved cooling. I've had 5 cards total running for over a year with no sign of degradation.


----------



## Bobbylee

yzonker said:


> Nope. My max mem overclock has just increased over time as I've improved cooling. I've had 5 cards total running for over a year with no sign of degradation.


That's great, thank you for the peace of mind!


----------



## elbramso

PLATOON TEKK said:


> I’ve done it, gone subzero and subsequently overheated the KPs memory apparently, ha. ⚰⚰ I’m assuming when liquid temp is at 0c and below the LED assumes 100c instead. I will be unplugging the LEDs over the next few days anyways for the spare watts, but was interesting to see. They also flash green when “overheating”. The screens boot up normally at around 2-3C.
> 
> Some interesting observations I’ve had is how swiftly the flow decreases below 0, went from 244 l/h to as low as 122 l/h (makes sense). Also, the liquid arrives around 2 degrees warmer at the bench through the friction and heat off all the pumps needed. I will be adding a further 3 PMP 500s. Am seeing benefits up to around 5 l/m.
> 
> Had to enabled ln2 mode for stable boot on darkz690 at subzero which I was somewhat surprised at. Lastly, regardless of all the cooling and dehumidifying efforts in place, living on the coast and the wattage makes the dewpoint rise within the hour.
> 
> When I have a bit more time in the next few days I’ll tweak the new system, voltages and run proper bench at these temps.
> 
> Was peaking at around 1300w with a 12900k. Also the fans are absolutely not for cooling, they do help keep dew at bay up to a few degrees past dewpoint (dangerous territory).


I wouldn't unplug the OLED to save 5w in total, tbh, but maybe I'm just too scared of missing an NVVDD TOO HIGH message 🤣😂.

Btw, does anyone know if it's possible to turn off specific warnings on the Kingpin card, for example the mem overheat?


----------



## Bobbylee

yzonker said:


> Nope. My max mem overclock has just increased over time as I've improved cooling. I've had 5 cards total running for over a year with no sign of degradation.


Sorry to be a pain - do you recall which 8-pin power reading in HWINFO we ignore when using a 2x 8-pin card on the KP BIOS?


----------



## yzonker

Bobbylee said:


> Sorry to be a pain, do you recall which 8-pin power reading on hwinfo we ignore if using a 2x8 pin card on kp bios?


The 3rd is a duplicate of the first.


----------



## andrew149

yzonker said:


> Like you say, I think the only way to be to know which is best is to test them. Those Kritical pads @andrew149 mentioned have been getting a lot of good feedback too. I was tempted to try them with the HK block, but I already had the putty and was wanting to try it.
> 
> I did see a +50 to +100 increase in stable mem speed. Seems like a lot for the 6C decrease in mining temp. Possibly the putty covers the chip better than a pad. @J7SC has mentioned that more than once.
> 
> And I used the putty for front and back mem. Although I used Gelid Ultimates on the VRM.


At the end of the day it’s all about getting the correct size pad. I’ve used both Gelid and Kritical pads on different cards and both had great results; I just love the precuts and knowing you have the correct size from Kritical.


Here’s my Gigabyte 3080 mining ETH. Fans are on auto and everything stays nice and cool; before, my memory was at 104C all the time on the stock pads. I’m also using the Corsair paste on the GPU.


----------



## GreatestChase

yzonker said:


> Time to get that chiller. 😁


Lol, I've honestly thought about it, and I'm not even sure I would be able to run one on the same circuit in my office. At full tilt while benching, my PC is probably drawing something close to 800-900W. If I account for all the other miscellaneous things in my office, I don't know that I have the electrical overhead to run a 1/2 hp chiller. I'm also happy with my current cooling system for daily-driving purposes, so I doubt I'd use it there. Plus it's always so humid here in central Alabama that I know I wouldn't be able to get close to 0C without having to worry about condensation.
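As a rough sanity check on the circuit-capacity worry: a 15 A / 120 V branch circuit is usually derated to 80% for continuous loads, i.e. about 1440 W. A sketch of that arithmetic - the 900 W PC figure is from the post, but the ~700 W chiller draw is an illustrative guess, not a measured number:

```python
def circuit_budget_w(breaker_a, volts=120.0, derate=0.8):
    """Continuous-load budget for a branch circuit (NEC-style 80% rule)."""
    return breaker_a * volts * derate

def fits(loads_w, breaker_a=15.0):
    """True if the combined continuous loads fit within the circuit's budget."""
    return sum(loads_w) <= circuit_budget_w(breaker_a)

# A 15 A / 120 V circuit budgets 1440 W continuous:
# a ~900 W bench PC alone fits, but adding a ~700 W chiller does not.
```

Which is roughly the conclusion above: the PC alone is fine, but PC plus a 1/2 hp chiller on one circuit is marginal at best.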


----------



## yzonker

GreatestChase said:


> Lol, I've honestly thought about it, and I'm not even sure that I would be able to run one on the same circuit in my office. At full tilt while benching, my PC is probably drawing something close to 800-900W. If I account for all the other miscellaneous things in my office, I don't know that I have the electrical overhead to run a 1/2 hp chiller. I'm also happy with my current cooling system for daily driving purposes so I doubt I'd use it there. Plus it's always so humid here in central Alabama that I know I wouldn't be able to get close to 0 C without having to worry about condensation.


Yea, I thought about that since you had previously indicated you lived in that part of the country. It's much more arid here in southern KS; I expect to be able to do 10C or less most of the year.

And I have the same problem on power - partly why I bought the 1/4 hp. I definitely couldn't do the 1 hp. The 1/2 hp would probably work, but that's because it really isn't that much more powerful according to the specs.

The 1/4 hp can hold 13C at 450w GPU and 50-100w CPU. And it can hit 4-5C for benching. Most of the time, even now, the dew point in the house is around the same (4-5C). So sufficient, I think.
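Since everything in these chiller builds hinges on coolant temperature versus dew point, here's the Magnus approximation that most dew-point calculators use; a sketch, with the commonly cited Magnus coefficients (valid roughly -45 to 60 °C over water), and the example numbers in the comment are illustrative rather than anyone's measured room conditions:

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Magnus-formula dew point: coolant below this temperature will condense."""
    a, b = 17.62, 243.12  # Magnus coefficients over water
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# e.g. a 20 C room at 50% RH has a dew point near 9 C,
# so 13-14 C coolant is safe there, but 4-5 C coolant is not.
```

This is why a 4-5C house dew point matters: it puts 4-5C coolant right at the condensation line, with no margin.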


----------



## yzonker

Here ya go, just toss your $800 mobo and $2k vid card in a bucket of fluid.


https://www.reddit.com/r/overclocking/comments/t4fl00


----------



## GreatestChase

yzonker said:


> Here ya go, just toss you $800 mobo and $2k vid card in a bucket of fluid.
> 
> 
> https://www.reddit.com/r/overclocking/comments/t4fl00


You can't have condensation if you are the condensation.


----------



## elbramso

yzonker said:


> Here ya go, just toss you $800 mobo and $2k vid card in a bucket of fluid.
> 
> 
> https://www.reddit.com/r/overclocking/comments/t4fl00


That's so smart, but it would be even smarter if he used the fluid itself to cool the components - he'd save a lot of money on waterblocks 🤣
Yeah, I know the price of the fluid is a downer...
But wouldn't it be possible to run this fluid through the chiller directly? You could strip your CPU + GPU and have the best possible direct-die cooling.


----------



## GreatestChase

elbramso said:


> That's so smart but it would be even smarter if you used the fluid itself to cool the components and he would save a lot of money on waterblocks 🤣
> Yeah I know the price for the fluid is a downer...
> But wouldn't it be possible to run this fluid through the chiller directly? You could strip your cpu + gpu and have best possible direct DIE


From what I've read when looking into immersion cooling, I think you still have to have a cooler mounted to the card. I believe the thermal transfer from the bare die directly to the coolant is not efficient enough, but with a cooler attached, the transfer into the cooler is as efficient as normal and you get enough surface area for the coolant to carry the heat away.


----------



## J7SC

GreatestChase said:


> From what I've read when looking into immersion cooling, I think you still have to have a cooler mounted to the card. I believe that the thermal transfer from the die directly to the coolant is not efficient enough, but if you have a cooler, the transfer to the cooler is as efficient as normal, then you have enough surface area for the coolant to cool the cooler on the card.


...or, go all in with this...


----------



## J7SC

...get your salt shakers out...

The onboard cache re. 'RTX4K' also makes a lot of sense. 

1.) Probably wise to skip the 3090 Ti (if it ever materializes) and 2.) Grab some more hi-po PSUs...


----------



## yzonker

J7SC said:


> ...get your salt shakers out...
> 
> The onboard cache re. 'RTX4K' also makes a lot of sense.
> 
> 1.) Probably wise to skip the 3090 Ti (if it ever materializes) and 2.) Grab some more hi-po PSUs...


If you are able to actually buy one when they release. If performance is anywhere near these insane leaks, demand will also be insane. Need to pump up my EVGA score over the summer. Lol


----------



## J7SC

yzonker said:


> If you are able to actually buy one when they release. If performance is anywhere near these insane leaks, demand will also be insane. Need to pump up my EVGA score over the summer. Lol


As mentioned before, I'll wait until the Lovelace / RTX4k series (if that's what they'll call it) is out with custom PCB models, and thoroughly tested, before I make up my mind re. an upgrade. While I like benching for the fun of it, it is really the performance in the few but demanding games I play (4K Ultra) which determines my upgrade path. 

It also makes sense to view the Lovelace / RTX4k leaks in light of what AMD might be up to with next-gen RDNA...apparently, mGPUs are a possibility, at least with the range toppers. AMD has gathered a lot of experience with chiplet CPUs, Infinity Fabric and Infinity Cache, even 3D V-Cache now (Epyc, and soon the 5800X3D)...the key will be to have the mGPUs appear as a single homogeneous GPU to the OS and drivers. 

Either way, later in 2022 (Q3,4) things could get really interesting in the range-topping GPU (and CPU) space...of course, who knows what supply-chains and pricing will look like then.


----------



## GreatestChase

Have you guys had any bad experiences with Aquatuning.us's customer support? I got my order from them about a month ago now and have attempted to contact them multiple times about the leaking qdcs that I received with no response. Since I couldn't get in touch with them I disputed the charge for the amount of the defective qdcs and that has now been denied saying that I modified the fittings. Guess I'm out $64 and won't be using Aquatuning.us anymore.


----------



## yzonker

GreatestChase said:


> Have you guys had any bad experiences with Aquatuning.us's customer support? I got my order from them about a month ago now and have attempted to contact them multiple times about the leaking qdcs that I received with no response. Since I couldn't get in touch with them I disputed the charge for the amount of the defective qdcs and that has now been denied saying that I modified the fittings. Guess I'm out $64 and won't be using Aquatuning.us anymore.


I understand your frustration, but I'd probably just get Koolance QDCs and move on. Maybe keep pinging Alphacool. I've never tried to work with their support, so no idea what people's experience is.


----------



## GreatestChase

yzonker said:


> I understand your frustration, but I'd probably just get Koolance QDCs and move on. Maybe keep pinging Alphacool. I've never tried to work with their support,so no idea what people's experience is.


Yeah, I'll definitely be going to Koolance QDC's from now on. The only reason I didn't go with them in the first place is because I couldn't find enough of the black ones in stock anywhere at the time. I didn't see many negative reviews of the Alphacool ones beforehand so I thought I'd give them a try, but you live and you learn I guess. I was just hoping that I would have received better customer support from Aquatuning as I thought they were a pretty reputable supplier.


----------



## ManniX-ITA

GreatestChase said:


> Aquatuning as I thought they were a pretty reputable supplier


They claim not to be connected with Alphacool, but it's a white-label subsidiary selling Alphacool plus all the other brands.
I found out when I complained that I couldn't register with my mail address.
Turns out it's the same user database as Alphacool, where I was already registered with my email...


----------



## yzonker

GreatestChase said:


> Yeah, I'll definitely be going to Koolance QDC's from now on. The only reason I didn't go with them in the first place is because I couldn't find enough of the black ones in stock anywhere at the time. I didn't see many negative reviews of the Alphacool ones beforehand so I thought I'd give them a try, but you live and you learn I guess. I was just hoping that I would have received better customer support from Aquatuning as I thought they were a pretty reputable supplier.


I have 2 silver QDCs for that exact reason. Just behind my desk going to the external rad, so who cares.


----------



## J7SC

GreatestChase said:


> Yeah, I'll definitely be going to Koolance QDC's from now on. The only reason I didn't go with them in the first place is because I couldn't find enough of the black ones in stock anywhere at the time. I didn't see many negative reviews of the Alphacool ones beforehand so I thought I'd give them a try, but you live and you learn I guess. I was just hoping that I would have received better customer support from Aquatuning as I thought they were a pretty reputable supplier.


My overwhelming preference is for the Koolance QD4s, but I've also collected something like 16 Swiftech QDs (1/2" ID) over the years. Those are much smaller and lighter and do 'work', as in they won't leak during locked operation, but they always leak a fair bit when opening and closing, due in part to their twist lock (which can bring other complications). In addition, they also seem to have a noticeably higher flow restriction. Thermaltake also makes very similar ones (perhaps Swiftech is the OEM, or vice versa).









...+ direct size comp


----------



## Lobstar

EB_Z590 said:


> I have a chance to purchase a EVGA 3090 FTW air cooled. ( 24G-P5-3987-KR )
> 
> I am curious if I can SLI this card with an existing 3090 FTW hybrid. (24G-P5-3988-KR )


It should be 'fine'. One thing to note is if the hybrid is a very early Rev0.1 and the other is a newer Rev1.0 with a revised power delivery you might have to balance the performance down to the PCIE power limit of the Rev0.1 card to keep it under 75w and maintain stability without forcing both cards to run at a slower speed. EVGA ****ed up those super early cards and I ended up with a bunch of RMAs. 

Be prepared for a life of pain; SLI has almost no compatibility anymore.


----------



## elbramso

Guys, I'm lost... 
I need some sort of expert Port Royal troubleshooting. My recent benchmark runs all crashed in the very last second. It happened 4 times today... 
Of course I'm at my overclocking limit, but why do the runs crash 0.5 seconds before I get the results? I haven't updated 3DMark recently; have there been bugs? 

Very frustrating


----------



## Nizzen

elbramso said:


> Guy's I'm lost...
> I need some sort of expert Port Royal troubleshooting. My recent benchmarks all crashed at the very last second of the run. Happened 4 times today...
> Ofc I'm at my overclocking limit but why do the runs crash 0.5 seconds before I get the results? I have not updated 3dmark recently, have there been bugs.
> 
> Very frustrating


Try Format C:\


----------



## autoshot

Hello everyone!
May I ask: is it a good idea to flash my ZOTAC GeForce RTX 3090 Trinity with the BIOS of a ZOTAC GeForce RTX 3090 AMP Extreme Holo to enable more headroom in terms of power consumption - especially considering the Trinity only has two 8-Pin power connectors?
Cheers and happy weekend


----------



## CptSpig

elbramso said:


> Guy's I'm lost...
> I need some sort of expert Port Royal troubleshooting. My recent benchmarks all crashed at the very last second of the run. Happened 4 times today...
> Ofc I'm at my overclocking limit but why do the runs crash 0.5 seconds before I get the results? I have not updated 3dmark recently, have there been bugs.
> 
> Very frustrating


Thermals?


----------



## elbramso

CptSpig said:


> Thermals?


Water at start was 1.5°C and 2.6°C after 1 min 47 sec.
Card peaks at 15°C.


----------



## yzonker

elbramso said:


> Water at start 1.5c and 2.6c after 1min47sec.
> Card peaks at 15c.


I've seen this too with mine at least once. I think it warms up just enough to crash. What does your VF curve look like?


----------



## elbramso

yzonker said:


> I've seen this too with mine at least once. I think it warms up just enough to crash. What does your VF curve look like?


Pretty much a +240MHz offset with some minor tweaks. Still trying to get a 2310MHz start-to-finish run. 
My guess now is that my memory crashed the run.


----------



## yzonker

elbramso said:


> Pretty much a +240mhz offset with some minor tweaks. Still trying to get an 2310mhz start to finish run.
> My guess now, is that my memory crashed the run


Yea that's a good point. The mem doesn't adjust frequency with temp, so if it's right on the edge it might cause a crash at/near the end. Does your mem temp vary much during the run?


----------



## yzonker

BTW, I swapped to one of these PMP-500's for my chiller setup that @PLATOON TEKK mentioned a while back. It's not quiet at all, but it must flow a lot. Even with all the line and the chiller in the loop, it easily beats my external D5 that's normally in the loop. The GPU block delta is 1-2°C lower using the PMP-500 than the D5.

Not something you want in your case, at least not running full speed, but works great in a separate room for the chiller. 

Considering putting 2 of them in there. Is there any significant risk of causing a leak with them? They have a lot more head pressure than a D5.



Amazon.com


----------



## Coldmud

Hey guys, I've been playing with a 3090 STRIX OC for a few days and am seriously contemplating returning it. 
I'm running it on an open test bench; out of the box it immediately reached around 80°C when I started up CP2077 at stock settings. Any sort of power limit or clock increase results in 90°C+ temps... 

Basically I'm trying to undervolt now. I've reached 1830-1845MHz @ 837mV stable; the card draws about 100W less at a loss of 2-4fps, but it still reaches the mid 70s?
After watching a few reviews that talked about temps in the 65-70°C range, I'm wondering: did these reviewers all get cherry-picked cards or what? 

Anyone running a Strix on air? What temps are you getting? Is it even possible to run these on air without a good undervolt? Seems I have to do some research on waterblocks again, or are these temps just too high and I'm better off returning the card?
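For context, a rough upper bound on what an undervolt like that can save comes from first-order CMOS dynamic power scaling, P ∝ f·V². A minimal sketch; the ~1905 MHz / 1.081 V "stock" figures are assumptions for a typical Strix boost state, and only the dynamic portion of board power scales this way, so real savings come out smaller:

```python
def dynamic_power_ratio(f_new: float, f_old: float, v_new: float, v_old: float) -> float:
    """First-order CMOS dynamic power scaling: P is proportional to f * V^2."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Undervolt from an assumed ~1905 MHz @ 1.081 V stock boost to 1830 MHz @ 0.837 V
ratio = dynamic_power_ratio(1830, 1905, 0.837, 1.081)
print(f"dynamic power falls to ~{ratio:.0%} of stock")
```

The frequency term barely matters here; almost all of the saving comes from the squared voltage term, which is why undervolting is so much more effective than underclocking.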


----------



## andrew149

Coldmud said:


> Hey guys, been playing with a 3090 STRIX OC for a few days and seriously contemplating returning it.
> Im running it on an open testbench, out of the box it immedietly reached around 80°C when I started up cp2077 at stock settings. Any sort of power limit or clock increase results in 90°+ temps..
> 
> Basically i'm trying to undervolt now. I have reached 1830-1845mhz @ 837mV stable, the card draws about 100W less at a loss of 2-4fps, but it still reaches mid 70's?
> After watching a few reviews that talked about temps in the 65°- 70°C range, I'm wondering, did these reviewers all get cherry-picked cards or what?
> 
> Anyone running a strix on air? What temps are you getting? Is it even possible to run these on air without a good undervolt? Seems I have to do some research on waterblocks again, or are these temps just too high and I'm better off returning the card?


Replacing the pads is pretty much a must on the 3070, 3080, and 3090 cards, and putting some Arctic Silver on the GPU also helps.



EVGA


----------



## PLATOON TEKK

yzonker said:


> That's amazing and scary at the same time. Keep thinking about you probably just bought a single chiller originally like I recently did and then went down the rabbit hole.
> 
> BTW, what are you using for pumps and water line? The normal water cooling flexible line only comes in short 3m pieces. Lol. I've just got a janky setup right now for testing with cheap stuff, but need to get the better stuff for long term.
> 
> Even with my much simpler setup, I see a 1-2C increase in temp from the chiller to the water temp sensor on the inlet of my GPU block.


My bad it's taken me a bit to respond, been all over. I use the PrimoChill clear tubing, 5/8". It's nice and rigid and can take a big temp range (including -20°C) according to them. You can buy it in 10ft lengths and it comes with SysPrep and Utopia.

Haha, you are absolutely right about the rabbit hole. You’re on your way to the dark chilled side already lol. Before you know it you will be monitoring dew points and promising the feds you don’t grow weed (tripled my power consumption).



yzonker said:


> BTW, I swapped to one of these PMP-500's for my chiller setup that @PLATOON TEKK mentioned a while back. It's not quiet at all, but must flow a lot. Even with all the line and chiller in the loop, it easily beats my external D5 that's normally in the loop. GPU block delta is 1-2C lower using that PMP-500 than the D5.
> 
> Not something you want in your case, at least not running full speed, but works great in a separate room for the chiller.
> 
> Considering putting 2 of them in there. Is there any significant risk of causing a leak with them? They have a lot more head pressure than a D5.
> 
> 
> 
> Amazon.com


The PMP500 is a great pump; it's near the top in performance and matches the head pressure and flow of Koolance chillers (they run a single PMP500). That helps in case you add one to the loop later. It is, however, considered an "extreme" pump, so it prioritizes flow over dB. Also, matching speed and pressure is NOT required but highly recommended. My old opaque loop is still running Swiftech dual pumps and the PMP500s, and all has been fine for years now.

To give you peace of mind, at one point I was running 11x PMP500 in a single loop (was a long-ass tube run). I think running two should be fine; however, make sure to check all fittings are tight, test with power off, and start them at low speed. The only thing that's ever given me issues with 4+ pumps was a distro plate in my old loop; since that is basically just an o-ring between two plates, the pressure would literally lift the plates apart and cause a slight leak.

You’re on your way to silent greatness, dope someone else here is following this wild ass path. Let me know if you have any other questions. I haven’t benched since last post, dewpoint has been crazy here.




GreatestChase said:


> Have you guys had any bad experiences with Aquatuning.us's customer support? I got my order from them about a month ago now and have attempted to contact them multiple times about the leaking qdcs that I received with no response. Since I couldn't get in touch with them I disputed the charge for the amount of the defective qdcs and that has now been denied saying that I modified the fittings. Guess I'm out $64 and won't be using Aquatuning.us anymore.


They have terrible correspondence and, as @ManniX-ITA mentioned, questionable business practices. They also rebrand and mark up regular old Hailea aquarium chillers.

In regards to QDCs, I have over 60 Koolance QD3s and QD4s and have never had issues. Highly recommended, just make sure to turn the pumps off first.


----------



## Coldmud

andrew149 said:


> Replacing the pads is pretty much a must on the 3070 3080 and 3090 cards and putting some artic silver on the gpu also helps
> 
> 
> 
> EVGA


Thanks! Figured as much... I will look into it. Temps just seemed high compared to the few reviews I've seen online.
I'll probably replace the cooler with a waterblock, just a little hesitant at these ridiculous prices to tamper with it. 
And I still have a return window, no questions asked.


----------



## yzonker

autoshot said:


> Hello everyone!
> May I ask: is it a good idea to flash my ZOTAC GeForce RTX 3090 Trinity with the BIOS of a ZOTAC GeForce RTX 3090 AMP Extreme Holo to enable more headroom in terms of power consumption - especially considering the Trinity only has two 8-Pin power connectors?
> Cheers and happy weekend


If that's a 3x8pin bios, it will actually give a lower PL due to the 1st 8pin being duplicated as the 3rd. The only 3x8pin bios that increases the PL is the KP 1kw bios. 

Only other option is one of the 390w bios from Gigabyte, Galax, etc...


----------



## yzonker

Coldmud said:


> Thanks! Figured as much.. I will look into it. Temps just seemed high compared to the few reviews I've seen online.
> Probably will replace it for a waterblock, just a little hesitant at these rediculous prices to tamper with it.
> And I still have a return window, no questions asked.


You're seeing those temps at the default 100% PL? Or the max PL? If 100% PL, then that is hotter than it should be unless your ambient temp is high. New pads won't help that, but a re-paste might. Although you'll probably need to buy new pads anyway to re-assemble since they tend to get torn up when you pull it apart unless you are very careful and have some luck.


----------



## yzonker

PLATOON TEKK said:


> My bad it’s taken me a bit to respond, been all over. I use the primo chill clear tubing 5/8. It’s nice and rigid and can take a big temp range (including -20c) according to them. You can buy it in 10ft and it comes with sysprpep and utopia.
> 
> Haha, you are absolutely right about the rabbit hole. You’re on your way to the dark chilled side already lol. Before you know it you will be monitoring dew points and promising the feds you don’t grow weed (tripled my power consumption).
> 
> 
> 
> The PMP500 is a great pump, is near top of performance and matches head pressure and flow of koolance chillers (they run a single pmp500). That helps in case of adding one into the loop later. It is however considered an “extreme” pump so it priorities flow over db. Also, matching speed and pressure is NOT required but highly recommended. My old opaque loop is still running swiftech dual pumps and the pmp500s and all has been fine for years now.
> 
> To give you a peace of mind, at one point I was running 11x pmp500 in a single loop (was long ass tube run). I think running two should be fine, however, make sure to check all fittings are tight, test with power off and start them low speed. The only thing that’s ever given me issues with 4+ pumps was a distro plate in my old loop, since that is basically just an o ring between two plates, the pressure would literally lift the plates apart and cause a slight leak.
> 
> You’re on your way to silent greatness, dope someone else here is following this wild ass path. Let me know if you have any other questions. I haven’t benched since last post, dewpoint has been crazy here.
> 
> 
> 
> 
> They have terrible correspondence and as @ManniX-ITA mentioned, they have questionable business practices. They also rebrand and upmark regular old hailea aquarium chillers.
> 
> In regards to qdcs, have over 60 koolance qd3 and 4 and never had issues. Highly recommended, just make sure to turn the pumps off first.


I had actually already ordered the 2nd pump and installed it just a bit ago. And that's what I did to test. Pulled power on the machine and ran the pumps for a while to check for leaks. Wasn't an issue. It blew all the air out of the loop in like 5 minutes though. LOL Didn't drop my GPU block delta much more. Must be hitting the limit of what more flow can do. If I could ever catch one of the good flow meters in stock I'd get one, but so far I'm still running without one. Although the block delta is really all that matters anyway, so that's a good test I think.

How are you varying the speed of the PMP-500's? I've been looking into it, but haven't found much that looks good for varying the voltage.

I also did some dew point testing while it's a bit more humid here this weekend. I just put an old GPU block in the chiller loop by itself and dropped the temp slowly until I started to see some condensation. I haven't gotten one of those meters you recommended yet, but my Nest thermostat seemed to give me good numbers. Just dropping 1C or so below the calculated dew point started to show some condensation.
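The "calculated dew point" mentioned above is usually the Magnus approximation; a minimal sketch, using the common Magnus coefficients for saturation over water:

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Magnus approximation of the dew point in °C (decent from ~0 to 60 °C)."""
    a, b = 17.62, 243.12  # Magnus coefficients, saturation over water
    alpha = math.log(rel_humidity_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * alpha) / (a - alpha)

# e.g. a 25 °C room at 50% RH: coolant below ~13.9 °C will start to condense
print(round(dew_point_c(25.0, 50.0), 1))
```

That matches the observation above: condensation appearing about 1°C below the calculated value is within the tolerance of a consumer humidity sensor like the Nest's.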


----------



## PLATOON TEKK

yzonker said:


> I had actually already ordered the 2nd pump and installed it just a bit ago. And that's what I did to test. Pulled power on the machine and ran the pumps for a while to check for leaks. Wasn't an issue. It blew all the air out of the loop in like 5 minutes though. LOL Didn't drop my GPU block delta much more. Must be hitting the limit of what more flow can do. If I could ever catch one of the good flow meters in stock I'd get one, but so far I'm still running without one. Although the block delta is really all that matters anyway, so that's a good test I think.
> 
> How are you varying the speed of the PMP-500's? I've been looking in to it, but haven't found too much that looks good for varying the voltage.
> 
> I also did some dew point testing while it's a bit more humid here this weekend. I just put an old GPU block in the chiller loop by itself and dropped the temp slowly until I started to see some condensation. I haven't gotten one of those meters you recommended yet, but my Nest thermostat seemed to give me good numbers. Just dropping 1C or so below the calculated dew point started to show some condensation.


Boom, sounds like you got it set. I recommend this for pump control: Noctua NA-FC1 (Amazon)


----------



## Coldmud

yzonker said:


> You're seeing those temps at the default 100% PL? Or the max PL? If 100% PL, then that is hotter than it should be unless your ambient temp is high. New pads won't help that, but a re-paste might. Although you'll probably need to buy new pads anyway to re-assemble since they tend to get torn up when you pull it apart unless you are very careful and have some luck.


Goes to 80°C instantly at 100% PL... Ambient temp is low, it's still winter here. I will probably return it and order from another supplier; seeing good stock now.


----------



## ManniX-ITA

yzonker said:


> I also did some dew point testing while it's a bit more humid here this weekend. I just put an old GPU block in the chiller loop by itself and dropped the temp slowly until I started to see some condensation. I haven't gotten one of those meters you recommended yet, but my Nest thermostat seemed to give me good numbers. Just dropping 1C or so below the calculated dew point started to show some condensation.


I have this one for easy reading, it's extremely precise and you can put it close to the cold points:

SHT31 SMART GADGET – Sensirion AG SHT31-series humidity/temperature evaluation board
www.digikey.de

And a couple to put close to the TECs; they are even more precise (SHT35):

101020592 – Seeed Technology SHT35 humidity/temperature Grove evaluation board
www.digikey.de


But you need a Seeduino and do some programming to use them.


----------



## elbramso

yzonker said:


> Yea that's a good point. The mem doesn't adjust frequency with temp, so if it's right on the edge it might cause a crash at/near the end. Does your mem temp vary much during the run?


I have a different theory now: the run crashes at the end because the load drops from something like 650W to 160W in that last second at the very end... Don't know how to avoid it, though.


----------



## the bag

Hello Folks!

Just got my hands on a Strix 3090 OC. I'm stuck at +1700 mem OC in Afterburner; the card doesn't crash but also doesn't react to more OC. Fans @100%, mem temp stable at 90°C, hot spot 54°C, GPU temp 40°C. Everything stock on air, nothing changed, only used Afterburner.
Has anyone here seen the same behavior on their Strix 3090 or EVGA 3090?

Greetings!


----------



## Nizzen

the bag said:


> Hello Folks!
> 
> Just get my hands on a Strix 3090 OC, i'm stucked at +1700 Mem OC at Afterburner, the card doesn't crash but also doesn't react on more OC, Fans @100% Mem Temp Stable 90'C Hot Spot 54'C GPU Temp 40'C. Everything Stock on Air nothing Changed only used Afterburner.
> Has anyone here same same behavior on his Strix 3090 or EVGA 3090?
> 
> Greetings!


Memory OC is random on every 3090 GPU. Temperature makes a difference too; colder VRAM temps are always better. As long as it scales with memory speed, it's OK. Above 90°C, the memory tends to throttle a bit.



----------



## the bag

Nizzen said:


> Memory OC is random on every 3090 gpu. Temperature make a difference too. Colder v-ram temp is allways better. As long as it scales with memory speed, it's ok. 90c+, and the memory tend to throttle a bit.


Hello,

Thanks for the information. I think I can get to 84°C with copper heatsinks on the backplate, like on my TUF Gaming, but this Strix performs amazingly without additional fans or copper heatsinks on the backplate. My TUF Gaming 3090 holds 90°C with the heatsink-and-fan mod; the Strix holds that at stock.
The TUF 3090 crashed immediately with mem above +1425MHz.

Greetings!


----------



## yzonker

More confirmation on 40 series.

From:
NVIDIA Employee Data Leaked, Hackers Threaten Trade Secrets, RTX 40 Series Allegedly Exposed


It has not been a great week for NVIDIA, due to the many threats and potential exposure of trade secrets as a result of a recent hack.




hothardware.com


----------



## ManniX-ITA

the bag said:


> Just get my hands on a Strix 3090 OC, i'm stucked at +1700 Mem OC at Afterburner, the card doesn't crash but also doesn't react on more OC, Fans @100% Mem Temp Stable 90'C Hot Spot 54'C GPU Temp 40'C. Everything Stock on Air nothing Changed only used Afterburner.


What do you mean "stuck"?
My Strix with the default BIOS usually crashes benchmarking at +1300 and over.
Anyway, above +1200 the scores are worse.
Running Metro EE in a loop above +1000 will crash between the 15th and 30th loop.
With Valheim, which is the game that stresses the memory the most, it will crash between 15 and 45 minutes above +700.


----------



## the bag

ManniX-ITA said:


> What do you mean "stuck"?
> My Strix with default BIOS usually crashes benchmarking at 1300 and over.
> Anyway above 1200 the scores are worse.
> Running Metro EE in loop above +1000 will crash between the 15th and 30th loop.
> With Valheim, which is the game that stresses most the memory, it will crash between 15 and 45 minutes above +700.


Hi,

It's like the card could do more on the mem, but something slows it down; I don't know what. Like I said before, my TUF crashes immediately above +1425MHz on the mem, but the Strix goes up to something like +1625MHz and above that doesn't react to the settings anymore. I don't think it's a thermal issue; I opened some windows and the mem temp now sits stable at 84°C with +1675MHz on the mem. Just wondering if other people see the same behaviour on their cards.

Greetings!


----------



## ManniX-ITA

the bag said:


> But the Strix goes something like 1625Mhz on the Mem and above it doesn't react to the settings anymore.


Except in specific use cases, going higher than +1200/1300 MHz is counterproductive.
First, the memory will start eating into the power budget of the GPU die, giving slower overall performance.
Then, past a certain frequency/temperature, it will start re-transmissions.
There's no ECC, but it will detect errors and attempt re-transmission of corrupted data,
which leads to the same or worse performance.
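The bandwidth side of that trade-off is easy to quantify: peak GDDR6X bandwidth is just the effective per-pin data rate times the bus width. A quick sketch; note that how an Afterburner +MHz offset maps onto the effective Gbps rate varies by tool, so any offset-to-Gbps conversion here would be an assumption:

```python
def gddr6x_bandwidth_gbs(effective_gbps: float, bus_width_bits: int = 384) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate (Gbps) x bus width / 8 bits per byte."""
    return effective_gbps * bus_width_bits / 8

stock = gddr6x_bandwidth_gbs(19.504)   # RTX 3090 stock: 19.504 Gbps effective
print(f"{stock:.0f} GB/s")             # spec sheet lists 936 GB/s
```

This is a theoretical peak; once error detection kicks in and re-transmissions start, delivered bandwidth drops even though the clock reads higher, which is exactly why scores fall past a certain offset.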


----------



## the bag

ManniX-ITA said:


> Except in specific uses cases going higher than 1200/1300 MHz is counter productive.
> First the memory will start eating into the power budget of the GPU die, providing slower overall performances.
> Then after a specific frequency/temperature it'll start re-transmissions.
> There's no ECC but it will detect errors and will attempt re-transmissions of corrupted data.
> Which will lead to same or worse performances.


Hi,

So we need to figure out the perfect value between +1200MHz and +1350MHz so the memory doesn't eat into the GPU die's power budget. I think that's somewhere around +1300MHz on this Strix. Like you said, the core clock started to throttle when I went over +1325MHz in Heaven Benchmark and the power limit was kicking in. With the mem clock set to +1275MHz, the core clock was immediately stable again, with no power limiting.

Thanks!


----------



## ManniX-ITA

the bag said:


> Hi,
> 
> So we need to figure out the perfect value between 1200Mhz to 1350Mhz so the memory don't distract the GPU die with his power consumption I think that's something arround 1300Mhz on this Strix like you said the core clock starting to throttle when i go over 1325Mhz in Heaven Benchmark and the pre cap power limit was kicking in. When the Mem Clock setting is 1275Mhz, the core clock was more stable again immediately and no power limiting.
> 
> Thanks!


Yes, the limit I found for mine is +1265 MHz.
But it still depends on which benchmark you run; for some, the score will decrease after +1000 MHz.
This, of course, is valid for the stock BIOS.
Using the ASUS XOC 999W, which unfortunately doesn't support ReBAR, I can get +800 MHz fully stable and up to +1200 stable in benchmarks.
The KP 1000W has very few limits, so it allows running the memory at the highest clock before degradation without eating into the GPU die's power budget.


----------



## KedarWolf

ManniX-ITA said:


> Yes, the limit I found for mine is 1265 MHz.
> But still depends on which benchmark you run, for some the score will decrease after 1000 MHz.
> This of course is valid for the stock BIOS.
> Using the ASUS XOC 999W, which is unfortunately not ReBar, I can get +800 MHz full stable and up to +1200 stable in benchmarks.
> The KP 1000W has very few limits so it can allow to run the memory to the highest clock before degradation without eating in the GPU die power budget.


On my Strix I can only run Port Royal at around +1310 on memory, other benchmarks at +1220, and for everyday gaming I only run +742. My core will only do 2190 in benches, 2130 gaming. My card is not a really great card, and it's water-cooled.


----------



## yzonker

Here is my chiller setup so far. Work in progress but functional. Doesn't seem like it needs to be really pretty with it in the furnace room next to the sump pump. 

It held 10°C while gaming for a couple of hours yesterday. Need to test more, but I wonder if efficiency improved by increasing flow. PC watercooling flow rates are well below what the manufacturer recommends for the chiller; I'm not sure how that affects efficiency. 

This will give me a way to easily refill my loop as well as test new rads/blocks outside my loop too since it can run independently.


----------



## elbramso

As winter kinda made a comeback here in northern Germany, I decided to torture my KPE once more.
After freezing for 2 hours straight, I found a sweet spot of voltage and offset/VF curve. Water temp was down to 0.2°C, but runs crashed until it got to 0.8°C at the start of the run.
The curve + Classified settings were stable three runs in a row, so there still might be a little room for improvement^^ 
Anyway, I guess I'm done for now. It was a hell of a lot of fun, but very frustrating at times.

Here is the run:








I scored 16 751 in Port Royal

Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com





Still on water, no chiller!


----------



## andrew149

When you spend too much money because 3090s are impossible to keep cool


----------



## geriatricpollywog

elbramso said:


> As winter kinda made a comeback here in northern Germany, I decided to torture my KPE once more.
> After freezing for 2 hours straight I found a sweet spot of voltage and offset/VF curve. Water temp was down to 0.2°C, but it crashed until it warmed to 0.8°C at the start of the run.
> The curve + Classified settings were stable three runs in a row - so there still might be a little room for improvement^^
> Anyway, I guess I'm done for now - it was a hell of a lot of fun but very frustrating at times.
> 
> Here is the run:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 751 in Port Royal
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Still on water, no chiller!


Very impressive! Would you share your classified settings? It’s supposed to be -15C here on Saturday morning. My coolant is good down to -20C.


----------



## CptSpig

andrew149 said:


> When you spend too much money because 3090s are impossible to keep cool


Impossible you say? You just need one of these gadgets to keep it cold! The thing behind the rad.


----------



## andrew149

CptSpig said:


> Impossible you say? You just need one of these gadgets to keep it cold! The thing behind the rad.
> 
> View attachment 2551379


Yeah, I need one of those.


----------



## KedarWolf

CptSpig said:


> Impossible you say? You just need one of these gadgets to keep it cold! The thing behind the rad.
> 
> View attachment 2551379


I blow a breaker if I have my A/C on in summer, TV on, and play a game on my PC.

Plus I don't pay for my electricity and when I tried to mine bitcoin years ago, my landlord told me I was using 6x the electricity of anyone else in the apartment complex and I had to stop.

I'd love a chiller but no way I can do it.


----------



## yzonker

Wow, Newegg has had the 3090 Strix in stock the last couple of days (or more, hadn't been looking). For a rock bottom $2300 US.


----------



## J7SC

yzonker said:


> Wow, Newegg has had the 3090 Strix in stock the last couple of days (or more, hadn't been looking). For a rock bottom $2300 US.


While Newegg.ca is sold out of the Strix card again, the supply situation is improving, including in the online markets I check in Canada and Europe. Mind you, US$2300 is still over $430 more than I paid for my Strix just over a year ago. If they hadn't put so many roadblocks in the way of SLI/NVLink with DX12, I would even consider adding a second one (after the current price has fallen some more) - up until this gen, I only built SLI systems.

Then again, with the RTX 4090 on the horizon, I'm probably done re. GPU upgrades for now. BTW, that huge jump in Cuda cores per AnandTech's table posted earlier for the AD102 might reflect that NVidia could be expecting AMD to offer a true mGPU top card...AMD filed for some interesting GPU Infinity Fabric patents last year. Unlike NVLink/SLI or Crossfire, next-gen mGPUs are expected to appear transparent / as a single GPU to the system / driver. Intel has announced that it is travelling a similar route with some upcoming Xe models.


----------



## elbramso

geriatricpollywog said:


> Very impressive! Would you share your classified settings? It’s supposed to be -15C here on Saturday morning. My coolant is good down to -20C.


I will share the settings once I'm back at my PC. 
Please promise not to push me away from my position 😜


----------



## yzonker

J7SC said:


> While Newegg.ca is sold out of the Strix card again, the supply situation is improving, including in the online markets I check in Canada and Europe. Mind you, US$2300 is still over $430 more than I paid for my Strix just over a year ago. If they hadn't put so many roadblocks in the way of SLI/NVLink with DX12, I would even consider adding a second one (after the current price has fallen some more) - up until this gen, I only built SLI systems.
> 
> Then again, with the RTX 4090 on the horizon, I'm probably done re. GPU upgrades for now. BTW, that huge jump in Cuda cores per AnandTech's table posted earlier for the AD102 might reflect that NVidia could be expecting AMD to offer a true mGPU top card...AMD filed for some interesting GPU Infinity Fabric patents last year. Unlike NVLink/SLI or Crossfire, next-gen mGPUs are expected to appear transparent / as a single GPU to the system / driver. Intel has announced that it is travelling a similar route with some upcoming Xe models.


I agree. Too close to the 40 series now, and it is really overpriced. I'm not sure if that's all tariff, or markup by ASUS as well. Makes my $1500 Zotac look cheap. Lol

Yes I'm really hoping we see some monster mGPU from AMD.


----------



## ManniX-ITA

yzonker said:


> Yes I'm really hoping we see some monster mGPU from AMD.


Well, maybe now that Apple has brutally shamed both of them by unleashing an MCM GPU with its UltraFusion interconnect at 2.5 TB/s, they'll stop milking us with old recycled products.


----------



## yzonker

Grrr... 

Exclusive: Russia's attack on Ukraine halts half of world's neon output for chips

Ukraine's two leading suppliers of neon, which produce about half the world's supply of the key ingredient for making chips, have halted their operations as Moscow has sharpened its attack on the country, threatening to raise prices and aggravate the semiconductor shortage.

www.reuters.com


----------



## J7SC

yzonker said:


> Grrr...
> 
> Exclusive: Russia's attack on Ukraine halts half of world's neon output for chips
> 
> Ukraine's two leading suppliers of neon, which produce about half the world's supply of the key ingredient for making chips, have halted their operations as Moscow has sharpened its attack on the country, threatening to raise prices and aggravate the semiconductor shortage.
> 
> www.reuters.com


----------



## yzonker

I don't want to start a political discussion of course, but I immediately thought of this.


----------



## GRABibus

yzonker said:


> I don't want to start a political discussion of course, but I immediately thought of this.


This is a French word


----------



## z390e

elbramso said:


> Here is the run:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 751 in Port Royal
> 
> 
> Intel Core i9-10900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


top 40 score very impressive with no chiller, very impressive indeed


----------



## z390e

yzonker said:


> Wow, Newegg has had the 3090 Strix in stock the last couple of days (or more, hadn't been looking). For a rock bottom $2300 US.



With the Ukraine neon issue and an expected $4k retail for the 3090 Ti, this seems like a great deal right now.


----------



## CptSpig

z390e said:


> top 40 score very impressive with no chiller, very impressive indeed


No chiller? His water temperature is 15°C. He either has a chiller or his rad or computer is outside in very cold weather.


----------



## z390e

CptSpig said:


> No chiller? His water temperature is 15°C. He either has a chiller or his rad or computer is outside in very cold weather.


Read the post I linked in my comment; it will answer both of those questions.


----------



## geriatricpollywog

elbramso said:


> I will share the settings once I'm back at my PC.
> Please promise not to push me away from my position 😜


I don’t think that’s possible. I tried the last few times you posted your Classified settings but unfortunately my card doesn’t have the same stuff.


----------



## elbramso

geriatricpollywog said:


> I don’t think that’s possible. I tried the last few times you posted your Classified settings but unfortunately my card doesn’t have the same stuff.


Sorry almost forgot you^^
here you go:









At 0.9c water I still haven't seen any NVVDD warnings


----------



## yzonker

Just one more chiller update. The added pump/flow must have helped the efficiency. Unlike previous runs, the chiller had no problem getting down to its minimum 3C setting and shutting off (the computer was running but idle in each case). So for best performance, you definitely want a lot of pump. This was one D5 and two PMP-500s running at full speed.


----------



## PLATOON TEKK

yzonker said:


> Just one more chiller update. The added pump/flow must have helped the efficiency. Unlike previous runs, the chiller had no problem getting down to its minimum 3C setting and shutting off (the computer was running but idle in each case). So for best performance, you definitely want a lot of pump. This was one D5 and two PMP-500s running at full speed.


Boom, happy to hear it. Exactly as I mentioned earlier, pumps and flow play a significant role in a chiller setup. You will keep seeing good benefits up to 5-6 lpm. I've gone as far as 17.5 lpm lol, but no real benefit.


----------



## yzonker

PLATOON TEKK said:


> Boom, happy to hear it. Exactly as I mentioned earlier, pumps and flow play a significant role in a chiller setup. You will keep seeing good benefits up to 5-6 lpm. I've gone as far as 17.5 lpm lol, but no real benefit.


Well, I thought you were just referring to the block deltas, not the chiller performance. I thought I had overkilled the pumps since my GPU block delta didn't improve much, but now I'm glad I bought two.


----------



## arvinz

Flashed the KPE 520 rebar bios onto my Strix 3090 last night successfully but I'm not getting anywhere near the 520W output. GPU-Z Board Power Draw is reporting around 380W or so. Has anyone had this before? Anything I'm doing wrong or need to do to get closer to that?

I had the Strix V4 rebar bios on before and I was hitting close to the 480W max on that with Port Royal.

Thoughts?


----------



## yzonker

arvinz said:


> Flashed the KPE 520 rebar bios onto my Strix 3090 last night successfully but I'm not getting anywhere near the 520W output. GPU-Z Board Power Draw is reporting around 380W or so. Has anyone had this before? Anything I'm doing wrong or need to do to get closer to that?
> 
> I had the Strix V4 rebar bios on before and I was hitting close to the 480W max on that with Port Royal.
> 
> Thoughts?


Readings are inaccurate because of compatibility issues between the different boards. Check clocks or benchmark scores between the two BIOSes to see if the 520W one is actually boosting higher or not. It's likely pulling close to the 520W limit.


----------



## arvinz

yzonker said:


> Readings are inaccurate because of compatibility issues between the different boards. Check clocks or benchmark scores between the two BIOSes to see if the 520W one is actually boosting higher or not. It's likely pulling close to the 520W limit.


Yeah, clocks and scores were definitely lower even with the PL to the max. I'll switch to the quiet bios and flip that to the KPE 520 rebar so I can switch back and forth between the two and test further.

I suppose HWinfo wouldn't report the correct board draw either if it's a compatibility issue between different boards, right? I might be wrong but I've seen quite a few Strix owners reporting proper power wattage through GPU-Z so not sure.


----------



## bearsdidit

arvinz said:


> Yeah, clocks and scores were definitely lower even with the PL to the max. I'll switch to the quiet bios and flip that to the KPE 520 rebar so I can switch back and forth between the two and test further.
> 
> I suppose HWinfo wouldn't report the correct board draw either if it's a compatibility issue between different boards, right? I might be wrong but I've seen quite a few Strix owners reporting proper power wattage through GPU-Z so not sure.


You could also get a watt meter to measure the draw from the wall.


----------



## arvinz

bearsdidit said:


> You could also get a watt meter to measure the draw from the wall.


Good call! Is there any you recommend? Amazon is littered with them...


----------



## bearsdidit

I used the following for my mining rig but it's definitely overkill:

I don't have a specific recommendation, but the Kill A Watt versions seem to be the gold standard among the mining community.


----------



## ManniX-ITA

arvinz said:


> Good call! Is there any you recommend? Amazon is littered with them...


I'm not 100% sure but I think that the power draw reported in HWInfo was fine with the KP520 BIOS and also the HoF.
When it's wrong, it's both on HWInfo and GPU-z.

For the power draw, you can also buy a simple clamp meter that supports DC:









Digital Clamp Meter T-RMS 1999 Counts NCV Multimeter Volt Amp Ohm Tester Auto-ranging Measures Current Voltage Resistance Diodes Continuity (AC Clamp Meter) for Electricians from Plusivo: Amazon.com: Industrial & Scientific

www.amazon.com





There are dozens all very similar.

Then you can measure each 12V rail and sum it & compare it with what is reported.

But you need separate wires on your 8-pin cables.
Mine are sleeved, so I bought an extension cable without sleeved wires:









CableMod Classic ModFlex Sleeved 8-pin PCI-e Extension (White, 45cm) : Electronics

www.amazon.com





Then I identified the positive wires and separated them.
When I want to measure, I put the extension cables in and remove them when I'm done.
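Totaling the rails from clamp readings is just a sum of current times voltage per 12V rail; here's a minimal sketch (the current readings below are made-up example values, not measurements from the post):

```python
# Rough per-rail power tally from clamp-meter readings (hypothetical values).
# Each 8-pin cable and the PCIe slot supply 12 V, so P = V * I per rail.
RAIL_VOLTAGE = 12.0

def total_board_power(rail_currents_amps):
    """Sum 12V rail currents (amps) into total board power in watts."""
    return sum(RAIL_VOLTAGE * i for i in rail_currents_amps)

# Example: two 8-pin cables plus slot power, currents in amps (made up)
readings = [14.2, 13.8, 4.5]
watts = total_board_power(readings)
print(f"Measured board power: {watts:.1f} W")  # prints: Measured board power: 390.0 W
```

You'd then compare that number against what GPU-Z or HWiNFO reports to see how far off the BIOS's power telemetry is.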


----------



## ALSTER868

arvinz said:


> Thoughts?


It's a lot simpler than you might think. With this BIOS the 3rd pin's draw is not reported correctly, hence the incorrect overall power draw monitoring. You can see it yourself in GPU-Z during any 3D load.
Check my Strix monitoring during mining on this KPE 520 BIOS. To have a rough idea of what is going on with my power draw, I just added a +130W correction in MSI AB, so I'm more or less aware of what's happening. Cheers.


----------



## yzonker

ALSTER868 said:


> It's a lot simpler than you might think. With this BIOS the 3rd pin's draw is not reported correctly, hence the incorrect overall power draw monitoring. You can see it yourself in GPU-Z during any 3D load.
> Check my Strix monitoring during mining on this KPE 520 BIOS. To have a rough idea of what is going on with my power draw, I just added a +130W correction in MSI AB, so I'm more or less aware of what's happening. Cheers.
> View attachment 2552102


8 pin #2 looks wrong as well based on the 8 pin #1 reading. I don't think the Strix has that poor of load balancing. 

A UPS with a power readout can give an approximate power draw also.


----------



## yzonker

arvinz said:


> Yeah, clocks and scores were definitely lower even with the PL to the max. I'll switch to the quiet bios and flip that to the KPE 520 rebar so I can switch back and forth between the two and test further.
> 
> I suppose HWinfo wouldn't report the correct board draw either if it's a compatibility issue between different boards, right? I might be wrong but I've seen quite a few Strix owners reporting proper power wattage through GPU-Z so not sure.


Well I thought that was covered previously in this thread. You might look at posts by @J7SC . 

No, none of the monitoring apps will report correct power draw.


----------



## Nizzen

I'm always looking at total power draw from the wall, using a Kill A Watt meter.


----------



## GreatestChase

I decided to take the liquid metal (LM) plunge after thinking about it, and I just figured that I'd share my results with you all. Overall it was a pretty easy process. I just took my time with the application of the kapton tape around the die to protect the SMDs as well as with applying the LM. At the same OC settings and load, I saw about a 3 C drop in my GPU to water delta. This resulted in it maintaining a higher boost bin for a longer duration, but at identical water temps the boost bins were the same. This was done with Thermal Grizzly Conductonaut.


----------



## z390e

Nizzen said:


> I'm always looking at total power draw from the wall, using a Kill A Watt meter.


When you are running high-power-draw GPU or CPU test software like 3DMark, do you connect through a UPS or surge protector and then the meter, or just plug your PC directly into the meter and then the wall?

I use an APC J25B, which is rated to 825W (and it's not alerting that the draw is too much), but I don't see how I can connect one of those meters to my APC and still read it easily, or whether I would just hook my main PSU cable to it.


----------



## Nizzen

z390e said:


> When you are running high-power-draw GPU or CPU test software like 3DMark, do you connect through a UPS or surge protector and then the meter, or just plug your PC directly into the meter and then the wall?
> 
> I use an APC J25B, which is rated to 825W (and it's not alerting that the draw is too much), but I don't see how I can connect one of those meters to my APC and still read it easily, or whether I would just hook my main PSU cable to it.


Cable from PSU directly into wattmeter on the wall.


----------



## elbramso

CptSpig said:


> No chiller? His water temperature is 15°C. He either has a chiller or his rad or computer is outside in very cold weather.


No chiller! Door open and my PC standing in the doorway. The temperature outside was -2°C.
Can I repeat this run without damn cold temperatures outside? Absolutely not!


----------



## arvinz

Ended up getting one of these. This is the stock Strix V4 BIOS running Port Royal at max PL, no overclock on GPU/CPU or RAM:


----------



## tcclaviger

So... power draw. I wasn't aware it's generally reported incorrectly on these cards. Just checked what I was pulling out of curiosity.

7 fans at 1000 rpm, 2x D5s at full power, 3 NVMes, and 2x16GB RAM at 1.58V make up the non-CPU/GPU draw.

The Thor 1200 says 770W input power during TS-E GT2.
System idle is 150W: 38 CPU, 22 GPU, so 90 for everything that isn't CPU or GPU. Seems believable, since the pumps alone are around 60 watts.

So if we take 770 * 0.905 = 697.
697 - 90 system power - 80 CPU gives 526 watts for the GPU. 526/499 = 5% deviation.

Seems about right for a shunted card on the stock 370W-limited BIOS with the 2x-pin slider at 100%? HWiNFO says 499 after applying a 1.625 multiplier for the 8 mOhm shunts.

Edit: Just tested at 60% power. (370 * 0.6) * 1.625 = 360 watts at the Afterburner limit. HWiNFO indicated 360 max for board power. The Thor showed 590 max; 590 * 0.91 = 543.

543 - 80 CPU - 90 system = 373. HWiNFO showed 360.54. 373/360 = 3.6% deviation.

Readings seem fairly accurate tbh, skewing slightly lower than actual. The Thor 1200 shows about 89.5% efficiency at full power in reviews and 91% at half power, and the HWiNFO readouts are about 4% low. Probably caused by my shunt-work correction factor being slightly off from the ideal 1.625 multiplier.

I know some cards absolutely do misreport; it seems the GB Gaming OC is not significantly out of range.
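For anyone who wants to replay the arithmetic above, here is the same wall-power back-calculation as a small script. The PSU efficiency, idle overhead and CPU draw are the figures quoted in the post, so treat them as this one system's assumptions, not universal constants:

```python
# Back-calculating GPU draw from wall power, following the method in the post.
# All constants below are this specific system's figures (assumptions).
PSU_EFFICIENCY = 0.905    # Thor 1200 at this load, per review data
SYSTEM_IDLE_W = 90.0      # fans, pumps, drives, RAM (non-CPU/GPU)
CPU_LOAD_W = 80.0         # CPU draw during TS-E GT2

def gpu_power_from_wall(wall_watts):
    """Estimate GPU power: remove PSU loss, then system and CPU draw."""
    dc_power = wall_watts * PSU_EFFICIENCY
    return dc_power - SYSTEM_IDLE_W - CPU_LOAD_W

gpu_est = gpu_power_from_wall(770)   # 770 W at the wall during GT2
reported = 499                       # HWiNFO after the 1.625 shunt correction
deviation = (gpu_est - reported) / reported * 100
print(f"Estimated GPU: {gpu_est:.0f} W, deviation vs HWiNFO: {deviation:.1f}%")
# prints: Estimated GPU: 527 W, deviation vs HWiNFO: 5.6%
```

The same function reproduces the 60% power-limit check by passing 590 W in and subtracting the same overheads.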


----------



## J7SC

...and for the next trick...


----------



## FarmerJo

Is there any BIOS that works with the Founders card that increases the power limit?


----------



## Nizzen

FarmerJo said:


> Is there any BIOS that works with the Founders card that increases the power limit?


No.
Shuntmod is the only option


----------



## dual109

Hi team,

What's the go-to BIOS with ReBAR support for the MSI X Trio? ASUS just released a new BIOS with Vermeer support for the X370 platform, so I need to update my video card BIOS, as the current one has no ReBAR support (there was no point with the 3700X).

Cheers


----------



## Nizzen

HoooooooF.....

3090 *ti* HOF with 2x16pin


----------



## GRABibus

Deleted


----------



## J7SC

...3090 Ti test by GN - bonus: comes with octopus cables


----------



## Thanh Nguyen

Anyone like me? Hit the order button, then watch a review and cancel the order afterwards. I don't think the 3090 Ti is much better than the 3090 at the same power draw.


----------



## GreatestChase

Thanh Nguyen said:


> Anyone like me? Hit the order button, then watch a review and cancel the order afterwards. I don't think the 3090 Ti is much better than the 3090 at the same power draw.


For me, the only way I would go to a 3090 ti would be with the KingPin edition, and even then I'm not sure that I will at this point. XOC season is pretty much over for me with warmer weather moving in, and I just can't justify getting a chiller yet. So I'll just continue to be content with my 3090 and wait to see what the 40 series brings.


----------



## J7SC

Thanh Nguyen said:


> Anyone like me? Hit the order button, then watch a review and cancel the order afterwards. I don't think the 3090 Ti is much better than the 3090 at the same power draw.


What with the leaks already out there etc., I figured the 3090 Ti was a bit of a tempest in a teapot... If I didn't have a well-running 3090 Strix and needed a new top card for work and play _right now_, I might be tempted, especially as the 24 GB VRAM config is better / easier to cool. But really, we are so close to the RTX 40 series (posts somewhere above) that it doesn't make much sense to upgrade from a 3090, unless you're into the 3DM HoF / HWBot and want to defend your top scores. After seeing the early tests and reviews now, I do think the 3090 Ti w/ custom BIOS / extreme cooling should be a benchmark monster, until next gen...

3090 Strix, w-cooled, with the 520 KPE vbios and just MSI AB sliders (ambient, no curve / no -smi) in Port Royal. I know there are far better results out there, but w/ minimum effort, and all safeties on, this will do for me until maybe the 4090 / Ti.


----------



## tcclaviger

Shame the HOF 3090 Ti will be very difficult to get hold of. I'd love to have one to play around with; no real need at all, but it would be quite an interesting toy.

My measly 3080 Ti doing 15150 in PR in a safe daily config will have to do for now.


----------



## Falkentyne

I mentioned this before, but everyone ignored me.

Has no one noticed that the 3090 Ti's max power draw is the same as a shunt-modded 3090 (excluding the shunted Strix, KPE (dip switches) and HOF versions, which have higher internal rail limits), which gets limited by TDP Normalized% despite the shunt mod (between 480-560W)?


----------



## elbramso

GreatestChase said:


> For me, the only way I would go to a 3090 ti would be with the KingPin edition, and even then I'm not sure that I will at this point. XOC season is pretty much over for me with warmer weather moving in, and I just can't justify getting a chiller yet. So I'll just continue to be content with my 3090 and wait to see what the 40 series brings.


I couldn't agree more!


----------



## yzonker

J7SC said:


> What with the leaks already out there etc., I figured the 3090 Ti was a bit of a tempest in a teapot... If I didn't have a well-running 3090 Strix and needed a new top card for work and play _right now_, I might be tempted, especially as the 24 GB VRAM config is better / easier to cool. But really, we are so close to the RTX 40 series (posts somewhere above) that it doesn't make much sense to upgrade from a 3090, unless you're into the 3DM HoF / HWBot and want to defend your top scores. After seeing the early tests and reviews now, I do think the 3090 Ti w/ custom BIOS / extreme cooling should be a benchmark monster, until next gen...
> 
> 3090 Strix, w-cooled, with the 520 KPE vbios and just MSI AB sliders (ambient, no curve / no -smi) in Port Royal. I know there are far better results out there, but w/ minimum effort, and all safeties on, this will do for me until maybe the 4090 / Ti.
> View attachment 2553680
> View attachment 2553680


Although I agree with the point you are making, it does appear to match that with a light overclock.

Stolen from Gamers Nexus.

Not that I put a lot of faith in most of these YouTubers, but at least he did show some real numbers and not just the benchmark regurgitation.


----------



## tcclaviger

It matches because it's already equally power limited. Just like a 3080 Ti pulling 500 watts vs a 3090 at 500 watts: almost no difference in performance.

3090 Ti vs a power-equalized 3090 will be a near wash.

There's just over a 2% core-count difference between the 3080 Ti and 3090, and the same gap between the 3090 and 3090 Ti. Whichever card is running a higher OC in a more optimized hardware and Windows environment will perform better, that simple.

The price gap is absurd given a 2.5% potential performance increase once the 3090 is tuned. Essentially a $500 shunt mod lmao.

EDIT: Something is wrong with GN's numbers. 15631 on a 3080 Ti @ 2197 core / +1340 mem, power limited at 500 watts, vs 15549 on a 3090 Ti at 2175 / +1580 mem shouldn't happen.

Look what I found:








I scored 14 094 in Port Royal

AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 Ti x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com





@ average temp ***** just a wee bit toasty.


----------



## J7SC

tcclaviger said:


> It matches because it's already equally power limited. Just like a 3080 Ti pulling 500 watts vs a 3090 at 500 watts: almost no difference in performance.
> 
> 3090 Ti vs a power-equalized 3090 will be a near wash.
> 
> There's just over a 2% core-count difference between the 3080 Ti and 3090, and the same gap between the 3090 and 3090 Ti. Whichever card is running a higher OC in a more optimized hardware and Windows environment will perform better, that simple.
> 
> The price gap is absurd given a 2.5% potential performance increase once the 3090 is tuned. Essentially a $500 shunt mod lmao.
> 
> EDIT: Something is wrong with GN's numbers. 15631 on a 3080 Ti @ 2197 core / +1340 mem, power limited at 500 watts, vs 15549 on a 3090 Ti at 2175 / +1580 mem shouldn't happen.
> 
> Look what I found:
> 
> Look what I found:
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 14 094 in Port Royal
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 Ti x 1, 32768 MB, 64-bit Windows 10
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> @ average temp *** just a wee bit toasty.


...yeah, 89°C is a bit of a boost killer.

I found > this link w/ partial pic below of the Galax 3090 Ti HOF/OCL...and it seems to be all about power management. I have also read elsewhere on a couple of occasions that the upcoming big RTX 40 Ada Lovelace chips are supposed to be pin-compatible with the 3090 Ti, so the 3090 Ti will teach vendors a lot about their 500W+(+) PCB durability with customers before the next big boy hits, with even more wattskies...

Btw, no shunt mod jokes re. below


----------



## changboy

This afternoon I got a notification from EVGA to buy the 3090 Ti FTW3 Ultra, and I don't think I will buy it.

I already have 4 x 3090, lol.


----------



## D-EJ915

changboy said:


> This afternoon I got a notification from EVGA to buy the 3090 Ti FTW3 Ultra, and I don't think I will buy it.
> 
> I already have 4 x 3090, lol.


I signed up and got one too, but honestly, no waterblock, no care at this point. If I were to get anything it'd prob be the hybrid Giga or ASUS.


----------



## elbramso

D-EJ915 said:


> I signed up and got one too, but honestly, no waterblock, no care at this point. If I were to get anything it'd prob be the hybrid Giga or ASUS.


I laughed so hard at the hybrid version from ASUS - who at ASUS thought it might be a good idea to give this card a 240mm AIO?? Raised TDP, but sure, a 240mm has to be enough...


----------



## changboy

So the waterblock for the 3090 doesn't fit this new 3090 Ti?


----------



## GreatestChase

changboy said:


> So the waterblock for the 3090 doesn't fit this new 3090 Ti?


At least for the FTW3, based on Steve's teardown video, I would say no. The power management/voltage regulation solution is quite different.

Edit: Just for comparison's sake, here is a screen cap from his vid and a pic I took of an FTW3 3090. Putting them side by side, it also looks like the mem and the GPU die itself are shifted more towards the IO on the 3090 Ti.


----------



## changboy

Ya, I have the EK waterblock for my FTW3 Ultra, so not a great idea then.


----------



## yzonker

GreatestChase said:


> At least for the FTW3, based on Steve's teardown video, I would say no. The power management/voltage regulation solution is quite different.
> 
> Edit: Just for comparison's sake, here is a screen cap from his vid and a pic I took of an FTW3 3090. Putting them side by side, it also looks like the mem and the GPU die itself are shifted more towards the IO on the 3090 Ti.
> View attachment 2553955
> 
> View attachment 2553956


I looked through the cards Techpowerup tore down and didn't see any that were close enough for the old blocks to work. EK has said they are working on blocks but who knows when those will be available.


----------



## jura11

In the current situation there's just no point getting an RTX 3090 Ti because of the incoming release of the 40xx series GPUs.

A few months back getting one would have made a lot of sense because of the scarce availability of the RTX 3090.

From the reviews it seems they're easily hitting 2160-2175 MHz, which is awesome, and I assume with a waterblock they should easily do 2205-2220 MHz; I would be very happy with that. I couldn't touch such clocks on my old RTX 3090 GamingPro, not even with the XOC 1000W BIOS.

If you have a poorly binned RTX 3090, then I would probably get one.

Hope this helps.

Thanks, Jura


----------



## GosuPl

RTX 3090 Ti SUPRIM X with PT 480 and core clock +120, mem clock +1400, on AIR, from a mate's benchtable rig

vs

RTX 3090 STRIX with PT 480 and core clock +140, mem clock +1500, on water, from my personal rig









Result
www.3dmark.com

Result
www.3dmark.com





Personally, I will skip the RTX 3090 Ti: no waterblock exists at the moment, there are problems with decent cable management and aesthetics, and the RTX 4090 is coming soon ;-)

Too much effort for too little gain when we already have the RTX 3090 with a high PT.


----------



## elbramso

Delete


----------



## changboy

You can put your name down for the EK waterblock now:









Can RTX 3090 Ti Get Even Cooler? Yes - With EK Water Blocks - ekwb.com

EK, the leading computer liquid cooling solutions provider, is working on a plethora of new Quantum Vector² water blocks for NVIDIA GeForce RTX 3090 Ti Series graphics cards. RTX 3090 GPUs are one of the most wanted cards on the market and are among the components that can benefit greatly when...

www.ekwb.com


----------



## zkareemz

Is it a good idea to flash a 3090 Ti BIOS onto a 3090?


----------



## Panchovix

zkareemz said:


> Is it a good idea to flash a 3090 Ti BIOS onto a 3090?


Different device IDs (10DE 2204 on the 3090 vs 10DE 2203 on the 3090 Ti), so you won't be able to flash it via software even if you wanted to.

I guess you could force it with a BIOS programmer, but I wouldn't recommend it; high chance of bricking the card.


----------



## kx11

I'm not buying a 3090 ti but this is so good looking 










AORUS GeForce RTX™ 3090 Ti XTREME WATERFORCE 24G Key Features | Graphics Card - GIGABYTE Global

Discover AORUS premium graphics cards, ft. WINDFORCE cooling, RGB lighting, PCB protection, and VR friendly features for the best gaming and VR experience!

www.gigabyte.com


----------



## Guts_

Hi! I finally got my 3090 recently and have been doing some research. I'm doing an undervolt with MSI Afterburner, but I'm having a problem... When I set the voltage curve, if for example I set it to 2100, sometimes it goes up to 2115 (or 2130) and sometimes it goes down to 2085... Is there any way to avoid this? I've read about making a .bat with nvidia-smi, but I don't know if there's another way.

Thank you all very much, I've been quite informed reading the forum, and in particular, this thread


----------



## viper16341

Hello guys, please allow me a noob question: is there a 1000W BIOS that clocks down the VRAM at idle? The Kingpin BIOS works fine, but the VRAM is always clocked at full speed. And with just an 'ordinary' Kingpin BIOS, my ArcticStorm 3090 only draws about 450 watts, according to AIDA64.


----------



## yzonker

Guts_ said:


> Hi! I finally got my 3090 recently and have been doing some research. I'm undervolting with MSI Afterburner, but I'm having a problem... When I set the voltage curve, if for example I set it to 2100, sometimes it goes up to 2115 (or 2130) and sometimes it goes down to 2085... Is there any way to avoid this? I've read about making a .bat with nvidia-smi, but I don't know if there's another way.
> 
> Thank you all very much; I've learned a lot reading the forum, and in particular this thread


You can keep it from going higher with nvidia-smi, but not lower. AFAIK, there's no way to prevent it from downclocking (due to heat/load).
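Since the .bat approach keeps coming up: a minimal sketch of what such a script boils down to, assuming the driver's `nvidia-smi` tool is on PATH. `-lgc`/`-rgc` are the real lock/reset flags; the 210 MHz floor and the wrapper function are just illustration.

```python
import subprocess  # only needed if you actually run the command

def lock_gpu_clocks_cmd(max_mhz: int, min_mhz: int = 210) -> list[str]:
    # nvidia-smi -lgc <min>,<max> (--lock-gpu-clocks) caps the boost
    # ceiling; it cannot stop the card from downclocking under heat/load.
    return ["nvidia-smi", "-lgc", f"{min_mhz},{max_mhz}"]

cmd = lock_gpu_clocks_cmd(2100)
print(" ".join(cmd))  # → nvidia-smi -lgc 210,2100
# To apply it for real (needs an NVIDIA driver and an elevated shell):
# subprocess.run(cmd, check=True)
# nvidia-smi -rgc resets the lock afterwards.
```

Dropping those two lines into a .bat (or a scheduled task) is all the nvidia-smi workaround amounts to.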


----------



## yzonker

viper16341 said:


> Hello guys, please allow me a noob question: is there a 1000W BIOS that clocks down the VRAM at idle? The Kingpin BIOS works fine, but the VRAM is always clocked at full speed. And with just an 'ordinary' Kingpin BIOS, my ArcticStorm 3090 only draws about 450 watts, according to AIDA64.


Not AFAIK. They all keep the mem at full speed for LN2 OC'ing. IIRC, the mem doesn't like being that cold, so they intentionally keep it at full speed. The only workaround I know of is to set the mem offset to -251 or lower. Obviously you would need to flip it back and forth for gaming.


----------



## viper16341

So I am going to stay with one of the original BIOSes. It would just be cool if there were a 600W BIOS or something... Even with the stock Kingpin BIOS my ArcticStorm maxes out at around 440W.


----------



## andrew149

viper16341 said:


> So I am going to stay with one of the original BIOSes. It would just be cool if there were a 600W BIOS or something... Even with the stock Kingpin BIOS my ArcticStorm maxes out at around 440W.


You get the most accurate power numbers and your fans work best on the stock BIOS anyway. Unless you're a miner, does it really matter?


----------



## GRABibus

My Kingpin Hybrid on *stock cooler*:











I scored 15 926 in Port Royal


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





Not so far from 16k on a stock cooler!
And I entered the HOF: 3DMark Port Royal Hall of Fame
I won't stay there long


----------



## viper16341

There are no fans here, Andrew. But to be honest, at 1440p my 3090 at stock will be okay for the next few years... My 2080 Ti is in my 2nd PC at 1080p. I just don't wanna buy a lot of overly expensive hardware every year.


----------



## elbramso

GRABibus said:


> My Kingpin Hybrid on *stock cooler*:
> 
> *
> View attachment 2554458
> *
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 926 in Port Royal
> 
> 
> AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Not so far from 16k on a stock cooler!
> And I entered the HOF: 3DMark Port Royal Hall of Fame
> I won't stay there long


Be careful with your FBVDD on the stock cooler. 1.45V is a bit out of spec 😅 but should still be fine in benchmarks.


----------



## GRABibus

elbramso said:


> Be careful with your FBVDD on the stock cooler. 1.45V is a bit out of spec 😅 but should still be fine in benchmarks.


Yes.
For 24/7 gaming I use 1.4V. Is this OK?


----------



## elbramso

GRABibus said:


> Yes.
> For 24/7 gaming I use 1.4V. Is this OK?


Absolutely if your mem temp is good.


----------



## GRABibus

elbramso said:


> Absolutely if your mem temp is good.


Yes, max 70 degrees (4K gaming)


----------



## viper16341

Which BIOS are you guys using 24/7 for a 3x 8-pin card?


----------



## KedarWolf

viper16341 said:


> Which BIOS are you guys using 24/7 for a 3x 8-pin card?


On my Strix OC I use the 1000W XOC BIOS even for gaming but I'm water-cooled and keep the Power Limit at 50%.

Edit: If you're not water-cooled maybe the EVGA 450W BIOS would work well. Or if your cooling is good, the 500W or 520W EVGA BIOS. I'd keep memory and core temps under 80C for sure whichever way you go.


----------



## GRABibus

viper16341 said:


> Which BIOS are you guys using 24/7 for a 3x 8-pin card?


I use 1000W XOC KP Bios on my Kingpin hybrid card with stock cooler.
This is for gaming only.
I cap PL at 58%.


----------



## PLATOON TEKK

After deciding against the 3090 Ti, I figured I'd actually game a bit. While playing "It Takes Two" with my girl, I randomly stressed a single card much further than I thought was possible.

I'm using the KP BIOS on two KPs, so to my knowledge the readings are pretty accurate. Afterburner reported up to 829W! HWiNFO capped at 802W.

I'm assuming a mix of circumstances and a pretty high frame rate might have led to it.


Spoiler: Hw-info


----------



## yzonker

PLATOON TEKK said:


> After deciding against the 3090 Ti, I figured I'd actually game a bit. While playing "It Takes Two" with my girl, I randomly stressed a single card much further than I thought was possible.
> 
> I'm using the KP BIOS on two KPs, so to my knowledge the readings are pretty accurate. Afterburner reported up to 829W! HWiNFO capped at 802W.
> 
> I'm assuming a mix of circumstances and a pretty high frame rate might have led to it.
> 
> 
> Spoiler: Hw-info
> 
> 
> 
> 
> View attachment 2554624


I assume that was not at default voltages? I didn't think power could go that high with defaults.

The 3090 Ti really seems underwhelming, with basically only air-cooled cards available except for the hybrids from Asus, etc... but no water blocks for any of them. You'd probably get your 3090 Ti blocked right before the 40-series launches.


----------



## GreatestChase

yzonker said:


> I assume that was not at default voltages? I didn't think power could go that high with defaults.
> 
> The 3090 Ti really seems underwhelming, with basically only air-cooled cards available except for the hybrids from Asus, etc... but no water blocks for any of them. You'd probably get your 3090 Ti blocked right before the 40-series launches.


And if you try to get a block for a 3090 Ti from Optimus, if they even offer them, it'll be the end of life of the 40-series cards before you get one.


----------



## viper16341

Well, my problem with the 1000W XOC BIOS is that the RAM doesn't clock down at idle. Normally I would like to use it as well... Yes, I am on a pretty good water cooler.


----------



## Nizzen

This is why we are buying HOF and Kingpin 
Someone thought only Kingpin had the fun tools


----------



## GRABibus

viper16341 said:


> Well, my problem with the 1000W XOC BIOS is that the RAM doesn't clock down at idle. Normally I would like to use it as well... Yes, I am on a pretty good water cooler.


Just set a profile in MSI AB with -250MHz on Memory and launch it.


----------



## GRABibus

Nizzen said:


> This is why we are buying HOF and Kingpin
> Someone thought only Kingpin had the fun tools
> 
> View attachment 2554822


Nice.
It works with the standard HOF ?


----------



## PLATOON TEKK

yzonker said:


> I assume that was not at default voltages? I didn't think power could go that high with defaults.
> 
> The 3090 Ti really seems underwhelming, with basically only air-cooled cards available except for the hybrids from Asus, etc... but no water blocks for any of them. You'd probably get your 3090 Ti blocked right before the 40-series launches.


I was in disbelief myself, so I just re-ran it to test; all values are default. I have included my readings when playing COD and my readings when playing It Takes Two. You have to load the "Clockwork" level.
The highest I clocked off a single card was 833W.

Edit: all on stock voltage, without the Classified tool



Spoiler: It takes 2

vs



Spoiler: COD 2019


GreatestChase said:


> And if you try to get a block for a 3090 Ti from Optimus, if they even offer them, it'll be the end of life of the 40-series cards before you get one.


I agree, the Ti is also just a stepping stone to the 4000 series. By the time the Optimus blocks are here and installed, we will all have a 4090 already. 1000% skipping.


----------



## Nizzen

GRABibus said:


> Nice.
> It works with the standard HOF ?


Yes


----------



## PLATOON TEKK

GRABibus said:


> Nice.
> It works with the standard HOF ?


Correct, but not with non-HOF cards running a flashed BIOS. I can re-upload the tool if you like. Got it from the cancelled GOC.


----------



## Nizzen

PLATOON TEKK said:


> Correct, but not with non-HOF cards running a flashed BIOS. I can re-upload the tool if you like. Got it from the cancelled GOC.


cancelled GOC?

What version do you have?


----------



## PLATOON TEKK

Nizzen said:


> cancelled GOC?
> 
> What version do you have?


The GALAX overclocking competition, which was cancelled due to Covid (garbage cards this gen anyway, imo). I uploaded version 3.0.0.4 here a few months ago.


----------



## Nizzen

PLATOON TEKK said:


> The GALAX overclocking competition, which was cancelled due to Covid (garbage cards this gen anyway, imo). I uploaded version 3.0.0.4 here a few months ago.


Same 3004 here too 

I miss the 780 and 780 Ti Classified times, with OC from ~875MHz to 1500MHz++ on water


----------



## PLATOON TEKK

Nizzen said:


> Same 3004 here too
> 
> I miss the 780 and 780ti classy times with OC from ~875mhz to 1500mhz++ on water


So do I, my brother, so do I! Those were the days. All these thermal boost mechanisms and power caps take a lot of the fun out of it. Hopefully this Nvidia hack will at least allow BIOS mods down the line lol


----------



## GRABibus

PLATOON TEKK said:


> The GALAX overclocking competition, which was cancelled due to Covid (garbage cards this gen anyway, imo). I uploaded version 3.0.0.4 here a few months ago.


Does it work with KP hybrid ?


----------



## Veii

PLATOON TEKK said:


> So do I my brother, so do I! Those were the days. All these thermal boost mechanisms and power caps take a lot of the fun out of it. Hopefully this Nvidia hack will at least allow bios mods down the line lol


You and anybody here
Do you want BIOS mods?

It takes me ~2h for foreign layouts, 10min for normal ones (ABE is not used)
In the near future there will be an OCN post with a tutorial for everyone to replicate
Sadly it's impossible to flash them the normal way unless the nvflash cert-bypass gets updated ~ but SPI works

On higher demand (motivation) and nice asking, the 3090 Ti can be done too
3000 series only, so far~

The offer only accepts raw exports, not TechPowerUp dumps or already covered cards
The offer is limited-time only; the goal is to extend my layout collection so everyone can do it for themselves later


----------



## J7SC

Nizzen said:


> Same 3004 here too
> 
> I miss the 780 and 780ti classy times with OC from ~875mhz to 1500mhz++ on water





PLATOON TEKK said:


> So do I my brother, so do I! Those were the days. All these thermal boost mechanisms and power caps take a lot of the fun out of it. Hopefully this Nvidia hack will at least allow bios mods down the line lol


...those were the days! 👻 ...they're currently grazing on a nice pasture here with red EVBot flowers, but maybe, one day, they'll get another shot at tripping the fuse box!


Spoiler


----------



## Nizzen

Veii said:


> You and anybody here
> Do you want BIOS mods?
> 
> It takes me ~2h for foreign layouts, 10min for normal ones (ABE is not used)
> In the near future there will be an OCN post with a tutorial for everyone to replicate
> Sadly it's impossible to flash them the normal way unless the nvflash cert-bypass gets updated ~ but SPI works
> 
> On higher demand (motivation) and nice asking, the 3090 Ti can be done too
> 3000 series only, so far~
> 
> The offer only accepts raw exports, not TechPowerUp dumps or already covered cards
> The offer is limited-time only; the goal is to extend my layout collection so everyone can do it for themselves later


Soon everyone will be buying an EVC? 🤟


----------



## Veii

Nizzen said:


> Soon everyone is buying EVC ? 🤟


Sadly an EVC and shunt modding cannot bypass the chip power-limit controller (SRC control for each 8-pin)
There is no way around BIOS modding, as there is a FW limit on-chip

Key values are all plain values in little endian (converted)
Checksums: there are 3 on later BIOSes, 2 on pre-94.02.59.XX.XX
The chip powerlimit is offset-based, sadly

The signature of blobs like the BAR patch or LHR patch does not break if you don't write into them
But file integrity breaks on editing, so nvflash moans about it being an invalid image
Which means SPI flash only~
I would also advise against using this teasing tool. Nvidia bought up the tool, for OEMs and themselves, for the 4000 series

Anywho, short offer 
Export me what you want modded and I'll make some * (bored)
* be realistic about the limits, 3090s can easily pull 900W 
Later anybody will be able to do it by hand


----------



## PLATOON TEKK

Veii said:


> Sadly an EVC and shunt modding cannot bypass the chip power-limit controller (SRC control for each 8-pin)
> There is no way around BIOS modding, as there is a FW limit on-chip
> 
> Key values are all plain values in little endian (converted)
> Checksums: there are 3 on later BIOSes, 2 on pre-94.02.59.XX.XX
> The chip powerlimit is offset-based, sadly
> 
> The signature of blobs like the BAR patch or LHR patch does not break if you don't write into them
> But file integrity breaks on editing, so nvflash moans about it being an invalid image
> Which means SPI flash only~
> I would also advise against using this teasing tool. Nvidia bought up the tool, for OEMs and themselves, for the 4000 series
> 
> Anywho, short offer
> Export me what you want modded and I'll make some * (bored)
> * be realistic about the limits, 3090s can easily pull 900W
> Later anybody will be able to do it by hand


Thanks so much for your detailed insight and offer, man. Some of the lads here are far more versed in BIOS stats and limits than I am. Would it be possible to push all limits to the utmost capacity the connectors will deliver?

I think the most "powerful" BIOS we all use is the KP 1000W with ReBAR. Pardon my ignorance in the matter; I would absolutely LOVE a tutorial and appreciate any time you spend spreading the word.

Thanks a million either way


----------



## PLATOON TEKK

J7SC said:


> ...those were the days! 👻 ...they're currently grazing on a nice pasture here with red EVBot flowers, but maybe, one day, they'll get another shot at tripping the fuse box!
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2554827
> 
> View attachment 2554828


Haha, that brings a tear to my eye, knowing they are living their best retirement lives. That EVBot is still useful though!




GRABibus said:


> Does it work with KP hybrid ?


Just tested it here for you; it doesn't seem to work. However, the Classified tool will do the same for the KP Hybrid


----------



## J7SC

Veii said:


> Sadly an EVC and shunt modding cannot bypass the chip power-limit controller (SRC control for each 8-pin)
> There is no way around BIOS modding, as there is a FW limit on-chip
> 
> Key values are all plain values in little endian (converted)
> Checksums: there are 3 on later BIOSes, 2 on pre-94.02.59.XX.XX
> The chip powerlimit is offset-based, sadly
> 
> The signature of blobs like the BAR patch or LHR patch does not break if you don't write into them
> But file integrity breaks on editing, so nvflash moans about it being an invalid image
> Which means SPI flash only~
> I would also advise against using this teasing tool. Nvidia bought up the tool, for OEMs and themselves, for the 4000 series
> 
> Anywho, short offer
> Export me what you want modded and I'll make some * (bored)
> * be realistic about the limits, 3090s can easily pull 900W
> Later anybody will be able to do it by hand


PM sent !


----------



## Veii

PLATOON TEKK said:


> Would it be possible to push all limits to the utmost capacity the connectors will deliver?


A lot of vendors reuse the same BIOS and cheat with a slot power-limit increase (they don't hold to the 75W spec) to look better in benchmarks
ASUS from GALAX, EVGA from ASUS, and so on

The KPx BIOS is great, kudos where it's due (if not the only BIOS that's correctly modded)
But, for example, a 2x 8-pin card, although able to use 3x 8-pin BIOSes ~ will have odd bugs, a different V/F curve and fan issues

Hence the short offer about realistic wishes & only currently-running ROM exports
I don't want to hand out a BIOS which might first melt a user's PCIe slot (like some of the 90W options out there with unaware users)
and I sadly haven't figured out the offset math to fully unlock the chips for LN2 (the chip powerlimit still exists)
So there will be a v2, or the future thread will have it covered

What I want to do, though, is help stop this teasing about "unmoddable" RTX cards that's been going on since 2020.
Let's call them High-OC BIOSes only for now, not LN2-ready

Connectors change, and so does their quality. I really don't want to be responsible for dead cards
It's not my last post on the matter, so later you can make one for yourself if you want
There are 1000W GALAX BIOSes out there, but these, like most, have not lifted the actual controller limits & the cards simply shut down or throttle-crash 
More extreme versions at a later date. Right now I'm just feeling helpful, so hence the short offer~


----------



## PLATOON TEKK

Veii said:


> A lot of vendors reuse the same BIOS and cheat with a slot power-limit increase (they don't hold to the 75W spec) to look better in benchmarks
> ASUS from GALAX, EVGA from ASUS, and so on
> 
> The KPx BIOS is great, kudos where it's due (if not the only BIOS that's correctly modded)
> But, for example, a 2x 8-pin card, although able to use 3x 8-pin BIOSes ~ will have odd bugs, a different V/F curve and fan issues
> 
> Hence the short offer about realistic wishes
> I don't want to hand out a BIOS which might first melt a user's PCIe slot (like some of the 90W options out there with unaware users)
> and I sadly haven't figured out the offset math to fully unlock the chips for LN2 (the chip powerlimit still exists)
> So there will be a v2, or the future thread will have it covered
> 
> What I want to do, though, is help stop this teasing about "unmoddable" RTX cards that's been going on since 2020.
> Let's call them High-OC BIOSes only for now, not LN2-ready
> 
> Connectors change, and so does their quality. I really don't want to be responsible for dead cards
> It's not my last post on the matter, so later you can make one for yourself if you want
> There are 1000W GALAX BIOSes out there, but these, like most, have not lifted the actual controller limits & the cards simply shut down or throttle-crash
> More extreme versions at a later date. Right now I'm just feeling helpful, so hence the short offer~


Got you, and that makes absolute sense. You wouldn't want to put the wrong tools into the wrong hands, especially seeing how many variables are involved.
I will be closely following your posts and am down to test whatever. If I were more knowledgeable on the matter I'd also be of more help.
Thanks again for your thorough explanation and know-how; it's good to know BIOS modding isn't "dead" per se!


----------



## GRABibus

Veii said:


> A lot of vendors reuse the same BIOS and cheat with a slot power-limit increase (they don't hold to the 75W spec) to look better in benchmarks
> ASUS from GALAX, EVGA from ASUS, and so on
> 
> The KPx BIOS is great, kudos where it's due (if not the only BIOS that's correctly modded)
> But, for example, a 2x 8-pin card, although able to use 3x 8-pin BIOSes ~ will have odd bugs, a different V/F curve and fan issues
> 
> Hence the short offer about realistic wishes
> I don't want to hand out a BIOS which might first melt a user's PCIe slot (like some of the 90W options out there with unaware users)
> and I sadly haven't figured out the offset math to fully unlock the chips for LN2 (the chip powerlimit still exists)
> So there will be a v2, or the future thread will have it covered
> 
> What I want to do, though, is help stop this teasing about "unmoddable" RTX cards that's been going on since 2020.
> Let's call them High-OC BIOSes only for now, not LN2-ready
> 
> Connectors change, and so does their quality. I really don't want to be responsible for dead cards
> It's not my last post on the matter, so later you can make one for yourself if you want
> There are 1000W GALAX BIOSes out there, but these, like most, have not lifted the actual controller limits & the cards simply shut down or throttle-crash
> More extreme versions at a later date. Right now I'm just feeling helpful, so hence the short offer~


Make us a 600W Bios 😊


----------



## Veii

GRABibus said:


> Make us a 600W Bios 😊


Export yours & I'll see how long I need
The goal is to cover more cards 
As mentioned, every card has a different boosting curve, as the Boost 3.0 system is unique (+signed) & has a different fan curve
There are more 2-pin cards out there too


----------



## GRABibus

.


----------



## GRABibus

Veii said:


> Export yours & i'll see how long i need
> Goal is to cover more cards
> As mentioned, every card has a different boosting curve, as Boost 3.0 system is unique (+signed) & has a different Fan-Curve
> There are more 2pin cards out there too


PM sent 👍


----------



## Veii

J7SC said:


> PM sent !


*ASUS 3090 Strix OC [3x 8-pin 450-600W]
VBIOS 94.02.42.00.A9*

OC BIOS - comparable to a 3090 Ti
Half-familiar layout ~ 50min needed

Please flash via SPI if the official NVFlash or nvflash64_RTX-Rebrand.7z refuses







Changes & Location
Make an SPI-ROM backup please


----------



## PLATOON TEKK

Veii said:


> *ASUS 3090 Strix OC [3x 8pin 450-600W]
> VBIOS 94.02.42.00.A9*
> 
> OC Bios - Comparable to a 3090Ti
> Half-Familiar layout ~ 50min needed
> 
> Please flash via SPI if the official NVFlash or nvflash64_RTX-Rebrand.7z refuses
> View attachment 2554842
> 
> Changes & Location
> Make an SPI-ROM backup please


you’re a beast! 50 mins is madness. Sent you a pm.


----------



## GRABibus

I have never flashed via SPI.
Are you guys familiar with this ?
Any Tutorial available ? 😊


----------



## changboy

Do you have the EVGA 3090 FTW3 Ultra?


----------



## GRABibus

changboy said:


> Do you have the EVGA 3090 FTW3 Ultra?


Who ?


----------



## J7SC

Veii said:


> *ASUS 3090 Strix OC [3x 8pin 450-600W]
> VBIOS 94.02.42.00.A9*
> 
> OC Bios - Comparable to a 3090Ti
> Half-Familiar layout ~ 50min needed
> 
> Please flash via SPI if the official NVFlash or nvflash64_RTX-Rebrand.7z refuses
> View attachment 2554842
> 
> Changes & Location
> Make an SPI-ROM backup please


...Thanks so much, Veii! I just came back from a meeting (I'm on Canadian Pacific coast time) and will try it out in a bit. Should the normal nvflash version & commands for this gen work?


----------



## yzonker

GRABibus said:


> I have never flashed via SPI.
> Are you guys familiar with this ?
> Any Tutorial available ? 😊


Isn't that the hardware flash method? If so, you need the stuff @Falkentyne always talks about. Unless that NVFlash just posted will work.


----------



## Veii

GRABibus said:


> PM sent 👍


Finally...
10min of patching - nearly 48h of digging for this

Now I can make actual XOC BIOSes
It was hidden inside the Boost 3.0 table, which is a signed, packaged & obfuscated value ~ so annoying

*KINGPIN 3090 Hydro 
450-600W ~ Watermarked*
_"safe version"_

Changes can be seen in the viewer
Fixed:

68-78W variable PCIe powerlimit
228W chip hardlock
higher MemOC powerlimit
same 450-600W range

@PLATOON TEKK your request is VBIOS 94.02.59!
I have nothing for it. Friend-Dev is not ready with the tool, and I'm blind on this one
It can take some days, but thank you~
I'll come back to you, or remember to ping you in the right place when the time comes

Also thanks to you guys,
the stupid chip powerlimit lock got backtraced


----------



## Veii

J7SC said:


> ...Thanks so much, Veii ! I just came back from a meeting (I'm on Canadian Pacific Coast time) and will try out in a bit. Normal nvflash version & commands for this gen should work ?


Yours maybe, but it's not clear if it passes ~ it might spit out an "invalid file" report. The cert shouldn't be broken, yet it can need a cert-bypass (file-integrity bypass) nvflash.
So the answer is "no" ~ try, but "likely no"


GRABibus said:


> I have never flashed via SPI.
> Are you guys familiar with this ?
> Any Tutorial available ? 😊


It's the little ROM chip on the card; it needs an SPI hardware flasher like an EVC or a CH341A
Dot on the chip = VCC // and it needs a SOIC-8 clip ~ red cable = VCC, match the orientation
FEs check certs differently at the BIOS level and fail. They also have a different ROM which needs to be desoldered to flash AIB BIOSes onto it = modding impossible without a custom, updated nvflash.


changboy said:


> Do you have the EVGA 3090 FTW3 Ultra?


Upload it; ask and you shall receive
Tired though, might make one more.
Found the thing I was desperately looking for for days ~ ty guys
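For anyone planning the hardware route described above: the usual open-source tool for a CH341A is flashrom, whose `ch341a_spi` programmer driver talks to the clip. A sketch of the backup-then-write sequence (the file names are placeholders; check that your card's flash chip is supported first):

```python
# flashrom invocations for an external CH341A SPI programmer:
# -p selects the programmer driver, -r reads a backup of the chip,
# -w writes the new image (flashrom verifies after writing by default).
def spi_flash_plan(modded_rom: str, backup: str = "backup.rom"):
    read_cmd = ["flashrom", "-p", "ch341a_spi", "-r", backup]
    write_cmd = ["flashrom", "-p", "ch341a_spi", "-w", modded_rom]
    return read_cmd, write_cmd

for step in spi_flash_plan("kp_600w_mod.rom"):
    print(" ".join(step))
```

Always run the read step first, exactly as Veii says: the backup is the only way back if the modded image doesn't boot.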


----------



## GRABibus

Veii said:


> Finally...
> 10min patching - nearly 48h digging about this
> View attachment 2554857
> 
> Now i can make actual XOC BIOSes
> It was hidden inside the Boost 3.0 table, which is a signed, packaged & obfuscated value ~ so annoying
> 
> *KINGPIN 3090 Hydro
> 450-600W ~ Watermarked*
> _"safe version"_
> 
> Changes can be seen in the viewer
> Fixed:
> 
> 68-78W variable PCIe powerlimit
> 228W chip hardlock
> higher MemOC powerlimit
> same 450-600W range
> @PLATOON TEKK your request is VBIOS 94.02.59 !
> I have nothing for it. Friend-Dev is not ready with the tool, and I'm blind on this one
> Can take some days, but thank you~
> I'll come back to you, or remember to ping you in the right place when the time comes
> 
> Also thanks to you guys,
> Stupid chip powerlimit lock got backtraced


Thank you, man!
Now I have to learn how to flash it


----------



## GRABibus

Veii said:


> Yours maybe, but it's not clear if it passes ~ it might spit out an "invalid file" report. The cert shouldn't be broken, yet it can need a cert-bypass (file-integrity bypass) nvflash.
> So the answer is "no" ~ try, but "likely no"
> 
> It's the little ROM chip on the card; it needs an SPI hardware flasher like an EVC or a CH341A
> Dot on the chip = VCC // and it needs a SOIC-8 clip ~ red cable = VCC, match the orientation
> FEs check certs differently at the BIOS level and fail. They also have a different ROM which needs to be desoldered to flash AIB BIOSes onto it = modding impossible without a custom, updated nvflash.
> 
> Upload it; ask and you shall receive
> Tired though, might make one more.
> Found the thing I was desperately looking for for days ~ ty guys


Ok, so not simple to flash 😆


----------



## GRABibus

Veii said:


> Finally...
> 10min patching - nearly 48h digging about this
> View attachment 2554857
> 
> Now i can make actual XOC BIOSes
> It was hidden inside Boost 3.0 table, which is signed, packaged & obfuscated value ~ soo annoying
> 
> *KINGPIN 3090 Hydro
> 450-600W ~ Watermarked*
> _"safe version"_
> 
> Changes can be seen in the viewer
> Fixed:
> 
> 68-78W variable PCIe powerlimit
> 228W chip hardlock
> higher MemOC powerlimit
> same 450-600W range
> @PLATOON TEKK your request is VBIOS 94.02.59 !
> I have zero for it. Friend-Dev is not ready with the tool, and i blind on this one
> Can take some days, but thank you~
> I'll come back to you or remember and ping you in the correct place when it's time comes
> 
> Also thanks to you guys,
> Stupid chip powerlimit lock got backtraced


Is it worth trying nvflash64 with this BIOS?


----------



## Veii

GRABibus said:


> Ok, so not simple to flash 😆


Like every custom BIOS.
The ROM chip is this one:










GRABibus said:


> Is it worth trying nvflash64 with this BIOS?


If you can flash it, wonderful,
but since 6 years ago NVIDIA still uses the "enthusiast key" card-unlock nonsense  These days just with more checksums
Same on Navi; it needs an on-chip override to stop the file-integrity checks ~ or potentially a BIOS date reverse and then file creation

BIOSes are signed by bigben.nvidia.com ~ an old HP collaboration // same algorithm since Maxwell days
Nobody has broken it until now, as far as I'm aware

















The answer is pretty much
"It's impossible the easy way, in the current state"
But that's the case with all XOC BIOSes ~ they need specific signing or have to be flashed via SPI


----------



## Veii

@ManniX-ITA As promised
3090 Strix-LC
VBIOS 94.02.42 BAR ~ 3x 8-pin
450W-600W, 75W PCIe, more VRAM OC headroom

Let me know, guys, if all of these flash & boot up (they have to, haha)
And how the software reacts to the 34% slider bump
How the V/F curve is, and such
G'night for now~


----------



## J7SC

Veii said:


> @ManniX-ITA As promised
> 3090 Strix-LC
> VBIOS 94.02.42 BAR ~ 3x 8pin
> 450W-600W, 75W PCie, more VRAM OC headroom
> 
> Let me guys know if all these flash & boot-up (they have to, haha)
> And how software reacts on 34% slider bump
> How V/F curve is and such
> G'Night for now~


...thanks again, now it's time for you to...


----------



## yzonker

I can almost hear Nvidia folks flailing around right now trying to change the bios signing/encryption for 40 series. Lol.


----------



## Falkentyne

yzonker said:


> I can almost hear Nvidia folks flailing around right now trying to change the bios signing/encryption for 40 series. Lol.


NVflash still can't flash the BIOS; it didn't bypass the Cert 3.0 and BIOS integrity problem. The card can probably still boot if you force-flash it with a 1.8V adapter and a programmer, as long as the checksum passes.
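On the "as long as the checksum passes" point: many firmware images use a simple byte-sum check, where one byte is chosen so the whole image sums to zero mod 256. The sketch below assumes that convention; the actual Ampere VBIOS checksums (two or three of them, depending on version, per earlier posts) may well differ.

```python
def fix_sum8_checksum(image: bytearray, csum_offset: int) -> None:
    # Zero the checksum byte first, then set it so sum(image) % 256 == 0.
    image[csum_offset] = 0
    image[csum_offset] = (-sum(image)) % 256

img = bytearray(b"\x10\x20\x30\x00")  # toy 4-byte "image", checksum last
fix_sum8_checksum(img, 3)
print(img[3], sum(img) % 256)  # → 160 0
```

A programmer will happily write an image whose checksum is wrong; it's the card's boot-time check that then decides whether it comes up.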


----------



## PLATOON TEKK

Veii said:


> Finally...
> 10min of patching - nearly 48h of digging for this
> View attachment 2554857
> 
> Now I can make actual XOC BIOSes
> It was hidden inside the Boost 3.0 table, which is a signed, packaged & obfuscated value ~ so annoying
> 
> *KINGPIN 3090 Hydro
> 450-600W ~ Watermarked*
> _"safe version"_
> 
> Changes can be seen in the viewer
> Fixed:
> 
> 68-78W variable PCIe powerlimit
> 228W chip hardlock
> higher MemOC powerlimit
> same 450-600W range
> @PLATOON TEKK your request is VBIOS 94.02.59!
> I have nothing for it. Friend-Dev is not ready with the tool, and I'm blind on this one
> It can take some days, but thank you~
> I'll come back to you, or remember to ping you in the right place when the time comes
> 
> Also thanks to you guys,
> the stupid chip powerlimit lock got backtraced


Damn, you've put in work already! Thanks so much for belting these out. Sounds good; just keep me posted and hit me up if you need anything in the meantime.


----------



## bman4455576r3

Anyone ever seen this Xerox-branded RTX 3090?!


----------



## GRABibus

Thank you, Veii, for your great work.
Now we need an nvflash that can flash your modded BIOSes 😂


----------



## Veii

GRABibus said:


> Thank you Veii for your great job.
> Now we need the nvflash to flash your modded bioses 😂


Just open the card, back up your ROM and override it
It was always normal like this.

FEs have a changed cert check in the BIOS ~ which will not resolve until you flash AIB BIOSes, which check differently
The files do not break certs and the checksum is correct ~ the BIOS itself is solid now. nvflash is an issue, but that is on Nvidia with their enthusiast-key business
SPI flash it and the issue resolves

In the future I should (hopefully) find out what it is, so once people flash mods they won't have integrity checks anymore (I know the location, but not the algo)
For now, if you want to mod something ~ you've got to mod something  Like is also common with mainboard BIOS mods
Sadly, but it is what it is. Can't break it for now 
======================================
Also good morning 
Waiting for anybody to try it, to see whether the sliders in Afterburner and the rest don't bug out at +33%
Might need to force the stock 500W limit, but I hope for a different answer


----------



## GRABibus

Veii said:


> Just open the card, backup your rom and override it
> It always was normal like this.
> 
> FE's have a changed cert check in bios ~ which will not resolve till you flash AIB bioses that check different
> Files do not break certs and checksum is correct ~ bios itself is solid now. nvflash is an issue, but that is on nvidia with their enthusiast-key business
> SPI flash it and the issue resolves
> 
> In the future i should (hopefully) find out what it is, soo once people flash mods, they won't have integrity checks anymore (know the location, but not the algo)
> For now, if you want to mod something ~ you got to mod something  Like also common on mainboard bios-mods
> Sadly, but it is what it is. Can't break it for now


Thank you for the explanation.
Currently, I don’t want to open my card, especially since I might resell it.

As I have poor knowledge of these flash processes for modded BIOSes, I stupidly didn’t know that we have to flash via SPI.

Of course I’ll keep an eye here to see how more skilled people flash, and maybe, in the near future, I will try 😊.

Thank you again, you are amazing.


----------



## Veii

GRABibus said:


> Currently, I don’t want to open my card, especially if I would resale it.
> As I have poor knowledge on these flash processes for modded bios, I stupidly didn’t know that we have to flash via SPI.


Another, easier option:
"success on nvflash cert-bypass modding"
(somebody could figure out who made that mod and get me in contact with them)

Or ask HP/Nvidia (bigben.nvidia), or EVGA, to sign the BIOSes with an XOC certificate
Then everyone can flash it 
Even just "one", as they don't have unreasonable limits. All within spec

Forgot to upload
*ASUS 3090 TUF OC 2x8pin
400-600W , VBIOS 94.02.26 // No Bar*
~taken request from elmors DC~

If any of these gets signed by the XOC cert
People can already rebrand between vendors, we just don't have a powerful enough nvflash ~ to be more lazy 
Yet BIOSes personalized for every card are still better 🤭

Technically, if some nice soul leaks me the XOC cert, i can sign everything without having to share the algo
But making a new nvflash is more likely to happen than ever getting these BIOSes signed


----------



## GRABibus

Veii said:


> Other easier option
> success on nvflash cert-bypass modding (somebody could figure out the modder of it and contact)
> 
> Or ask HP/Nvidia (bigben.nvidia), or EVGA to sign the bioses with an XOC certificate
> Then everyone can flash it
> Be it just "one", as they don't have unreasonable limits. All within spec
> 
> Forgot to upload
> *ASUS 3090 TUF OC 2x8pin
> 400-600W , VBIOS 94.02.26 // No Bar*
> ~taken request from elmors DC~
> 
> If any of these get's signed by the XOC cert
> People can already rebrand cards, just don't have powerful enough nvflash ~ to be more lazy


You mean to try this ?









NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download


This is a patched version of NVIDIA's NVFlash. On Turing cards, NVFlash no longer allows overriding of the "board ID mismatch" message through comm




www.techpowerup.com


----------



## Veii

GRABibus said:


> You mean to try this ?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download
> 
> 
> This is a patched version of NVIDIA's NVFlash. On Turing cards, NVFlash no longer allows overriding of the "board ID mismatch" message through comm
> 
> 
> 
> 
> www.techpowerup.com


That, and/or update it
Cards like the 3090 and 3060 Ti are "too new"
Same for LHR-based 3090s ~ so likely not covered by this version

This one should flash it ~ if support for Ampere exists
NVFlash just fails on detecting the file, as its file integrity breaks
It should read it, if it has Ampere support

Any change you make atm, even just resaving on a later date than when the file was signed,
breaks its integrity, and it gets detected as an "invalid" ROM. In reality the file is fine, nvflash is just picky


----------



## GRABibus

Veii said:


> That and/or update it
> cards like the 3090 and 3060ti are "too new"
> Same for LHR based 3090's ~ soo likely not covered by this version
> 
> This one should flash it ~ if support for Ampere exists
> NVFlash just fails on detecting the file as file-integrity breaks
> This should read it, if it has Ampere support
> 
> Any change you do atm, be it just resave on a later date than the file was signed
> breaks integrity of it and is detected as "invalid" rom. In reality the file is fine but nvflash is just picky


I am pretty sure I saw a version of nvflash bypass for 3xxx series here on this thread.

Does someone have it ?


----------



## yzonker

GRABibus said:


> I am pretty sure I saw a version of nvflash bypass for 3xxx series here on this thread.
> 
> Does someone have it ?


This one?









[Official] NVIDIA RTX 3090 Owner's Club


I currently have an i9 7900x and a 1080TI FE, both are overclocked, CPU is at 4.6Ghz. All of this is running in a single custom waterloop with x2 Alphacool NexXxoS ST30 Full Copper 420mm radiators (Push configuration) and a D5 Pump. The case is a Dark Base Pro 900, one of the radiators is...




www.overclock.net


----------



## GRABibus

yzonker said:


> This one?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club
> 
> 
> I currently have an i9 7900x and a 1080TI FE, both are overclocked, CPU is at 4.6Ghz. All of this is running in a single custom waterloop with x2 Alphacool NexXxoS ST30 Full Copper 420mm radiators (Push configuration) and a D5 Pump. The case is a Dark Base Pro 900, one of the radiators is...
> 
> 
> 
> 
> www.overclock.net


Yes maybe 😊


----------



## Arizor

Who's first to shove their hand in the wolf's mouth?


----------



## Veii

Little updates
Backtracing nvflash mods. 5.670.00, the one shared by Falkentyne, "appears" to be an official mid-release version also findable on techpowerup
It came straight from nvidia towards vendor X ~ no modification was done.
The one i shared was 5.660, which is the first one whose deviceID checks cover Ampere
DeviceID check disabled = rebrand possibility. It's not a certificate bypass like the old one. Tho 5.670 has both the ID bypass and the cert bypass, according to the HEX

"NVFlash with Board Id Mismatch Disabled v5.590.0" ~ that file hence can not support the 3000 series, but it's possible to apply the same patches on 5.660/670
Working atm on 5.735 myself

2nd update,
got a pack of 20ish BIOSes for RTX 3090s, including 3060 Tis
All of these are signed 1024KB vendor ROMs, compared to our 976KB dumps








While the uploaded ROMs are fine, for an SPI ROM which is 1024KB ~ you need to add "49152 bytes" of padding (FF FF FF FF)
1 byte = FF (2 digits)
From the usual 00 0F 40 00 to 00 10 00 00 
^ if you want to SPI flash ~ but i think that's known once you use a CH341a or EVC SPI programmer (you will get a size-mismatch padding warning)
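A minimal sketch of that padding step (the helper and the synthetic dump are illustrative, not any official tool): 0x100000 − 0x0F4000 is exactly the 49152 bytes of FF mentioned above.

```python
# Pad a 976 KB (0x0F4000-byte) VBIOS dump up to the 1024 KB
# (0x100000-byte) size of the SPI flash, filling with 0xFF
# (the erased-flash value).
SPI_SIZE = 0x100000  # 1024 KB flash part

def pad_rom(data: bytes, target: int = SPI_SIZE) -> bytes:
    if len(data) > target:
        raise ValueError("dump is larger than the flash size")
    return data + b"\xFF" * (target - len(data))

# Synthetic 976 KB dump; 0x100000 - 0x0F4000 = 49152 bytes of padding
dump = bytes(0x0F4000)
padded = pad_rom(dump)
assert len(padded) == 0x100000
assert padded[0x0F4000:] == b"\xFF" * 49152
```

For a real dump you would read the 976KB file, pad it, and hand the resulting 1MB image to the SPI programmer.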

Working on figuring out if it's the date, a checksum, or something else ~ but the change is small. Might be able to sign them soon
Work on the checksum helper tool moves along too; once that is done, more than 3090s will be covered, including Tis 
Little break for now, more updates to follow~

EDIT:
I forgot,
If anybody is tech-savvy and bored
Could you try to decompile and export me the BIOSes from:
http://us.download.nvidia.com/Windows/rebar/1.0/NVIDIA_ResizableBAR_Firmware_Updater_1.3-x64.exe
to look more into FE's signatures
Files like







So looking deeper into GA102 ~ would be nice


----------



## GRABibus

yzonker said:


> This one?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> [Official] NVIDIA RTX 3090 Owner's Club
> 
> 
> I currently have an i9 7900x and a 1080TI FE, both are overclocked, CPU is at 4.6Ghz. All of this is running in a single custom waterloop with x2 Alphacool NexXxoS ST30 Full Copper 420mm radiators (Push configuration) and a D5 Pump. The case is a Dark Base Pro 900, one of the radiators is...
> 
> 
> 
> 
> www.overclock.net





Arizor said:


> Who's first to shove their hand in the wolf's mouth?


As mentioned by Veii, it doesn’t work. I tried it.


----------



## Falkentyne

GRABibus said:


> I am pretty sure I saw a version of nvflash bypass for 3xxx series here on this thread.
> 
> Does someone have it ?


5.590 won't flash ampere. It's too old. Ampere came out September 2020.
I have the ID check bypass (well, one of them).

I have no idea which one will work for you.

Doesn't seem to work on FE cards.
The one guy who tried the 5670 one to flash a 3090 FE Bios on his ROG Strix 3090 got a "Check VGA" post LED code error when he tried to reboot, and had to boot from the backup bios.

_edit_ looks like veii beat me to it.


----------



## PLATOON TEKK

I just copped a CH341A and CTYRZCH in preparation for flashing SPI. I have no idea what I’m doing but will figure it out and test on a 680 before I destroy a 3090 lol. Reminds me of flashing Xboxes back in the day. Shout out to @Veii for really digging in to this and the rest of you helping. Am curious as to what we can achieve.


----------



## Veii

PLATOON TEKK said:


> I just copped a CH341A and CTYRZCH in preparation for flashing spi. I have no idea what I’m doing but will figure it out and test on a 680 before I destroy a 3090 lol. Reminds me of flashing Xbox’s back in the day. Shout out to @Veii for really diggin in to this and the rest of you helping. Am curious to what we can achieve.


You just need a SOIC clip; either it flashes or it doesn't read it
You can back up the boardID and everything with it, then clean-flash whatever you feel like

I have a feeling the 3090 Ti is not a fully new die - but i can't say, people barely dig into it
Anywho, you can just put something on it and hope it posts 
And there won't be restrictions, as the PCBs do not have hardware-level differences. 

Flip chips are on "semi custom" designs, but the "socket" of it is the same between vendors
So a chip is a chip, and if you give it the same brain it will post  
The question is on custom modified stuff, but we'll get there.

Yes, don't forget a SOIC clip ~ tho the CH341A is a bit complicated, eh, bearable 
Supporting the EVC development would also be great, especially since you can take over the PWM controller and change memory or core voltage
But that's for another day


----------



## GRABibus

Falkentyne said:


> 5.590 won't flash ampere. It's too old. Ampere came out September 2020.
> I have the ID check bypass (well, one of them).
> 
> I have no idea which one will work for you.
> 
> Doesn't seem to work on FE cards.
> The one guy who tried the 5670 one to flash a 3090 FE Bios on his ROG Strix 3090 got a "Check VGA" post LED code error when he tried to reboot, and had to boot from the backup bios.
> 
> _edit_ looks like veii beat me to it.


Thanks !
none of them work for me.


----------



## PLATOON TEKK

Veii said:


> You just need a SOIC clip; either it flashes or it doesn't read it
> You can back up the boardID and everything with it, then clean-flash whatever you feel like
> 
> I have a feeling the 3090 Ti is not a fully new die - but i can't say, people barely dig into it
> Anywho, you can just put something on it and hope it posts
> And there won't be restrictions, as the PCBs do not have hardware-level differences.
> 
> Flip chips are on "semi custom" designs, but the "socket" of it is the same between vendors
> So a chip is a chip, and if you give it the same brain it will post
> The question is on custom modified stuff, but we'll get there.
> 
> Yes, don't forget a SOIC clip ~ tho the CH341A is a bit complicated, eh, bearable
> Supporting the EVC development would also be great, especially since you can take over the PWM controller and change memory or core voltage
> But that's for another day


got you, can always refund from Amazon. What equipment would you recommend for the most “straight forward” flash?

am down to figure this out proper


----------



## yzonker

PLATOON TEKK said:


> got you, can always refund from Amazon. What equipment would you recommend for the most “straight forward” flash?
> 
> am down to figure this out proper


May have been mentioned in all of these posts, but anyway don't forget you need to use a 1.8v adapter.


----------



## PLATOON TEKK

yzonker said:


> May have been mentioned in all of these posts, but anyway don't forget you need to use a 1.8v adapter.


got you, might have missed it. Will comb through it and get started.


----------



## J7SC

...as expected, my standard nvflash for RTX3K didn't quite work (but I've only had limited time to try so far). Which of the 'special' nvflash ones re. date changes, signing etc. would you folks recommend for the Asus 3090 Strix OC? Problem is I've now linked and downloaded way too many of them, and can't see 'the tree because of the forest', so to speak 🥴


----------



## Veii

PLATOON TEKK said:


> got you, can always refund from Amazon. What equipment would you recommend for the most “straight forward” flash?
> 
> am down to figure this out proper


Amazon is cheaper, but you need a SOIC-8 clip like this








The EVC is "a bit premium for an SPI flasher", but its target was more I²C & PWM controller takeover. So while i have one of the first EVC2 batches (before 2 more revisions)
~ i still only use 1/4th of the features. Just a tiny trusty flasher for me, without voltage problems
Speaking about problems and the CH341A




There are legitimate reasons (which i forgot, as v1.6 was too new) why it's more work investing in a cheap programmer vs buying into an ecosystem (sounds like a sales pitch)







Currently the shipping system is a bit odd to my eyes
It's indeed been some time

If we figure this out, great. Bruteforce nvflash mod or actual signing
But in general, there are many people who now get into I²C voltage control over their cards
It's a good little swiss-army knife for repairing things. Unless you can find the CH341A v1.6 (green PCB with IR remote module, released 2022) somewhere on Amazon or AliExpress

At any point you'll need a clip for the chip, and the flasher, if you want to bruteforce things 
BIOSes won't really change


PLATOON TEKK said:


> What equipment would you recommend for the most “straight forward” flash?


The most straightforward and easy option is exactly that
No soldering, a single clip and a little tool
These days they don't include the rubber feet anymore, but those gave it its charm. It just can do much more than that, and SPI was a "tertiary purpose" next to modding or XOC
SPI will be needed unless research shows significant progress


----------



## mFuSE

Veii said:


> *ASUS 3090 Strix OC [3x 8pin 450-600W]
> VBIOS 94.02.42.00.A9*
> 
> OC Bios - Comparable to a 3090Ti
> Half-Familiar layout ~ 50min needed
> 
> Pls flash via SPI
> if official NVFlash
> or nvflash64_RTX-Rebrand.7z
> refuse
> ...


Hello,

sadly the BIOS can't be installed, not even with the provided nvflash:








I do not know SPI; I will need more time to get into how this tool works.



I also tried this BIOS, which I hadn't noticed earlier:








Asus RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com




But it maxes out at ~500 Watt :/








A year or so ago I tried the Kingpin 3090 BIOS - that works great with a 530 Watt power limit. But HDMI monitors do not work and some other quirks happened.
So I'm currently back on the official V4 BIOS:








Asus RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com






Is it possible to use this bios:








Asus RTX 3090 Ti VBIOS


24 GB GDDR6X, 1560 MHz GPU, 1313 MHz Memory




www.techpowerup.com





on the 3090?
Has anyone tried this out already?


best regards!


----------



## yzonker

mFuSE said:


> Hello,
> 
> sadly the bios can't be installed, also not with provided nvflash:
> View attachment 2555058
> 
> 
> I do not know SPI, I will need more time to get into this tool how it works.
> 
> 
> 
> i tried also this bios i didn't noticed earlier:
> 
> 
> 
> 
> 
> 
> 
> 
> Asus RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> But it max out at ~500 Watt :/
> View attachment 2555059
> 
> 
> I had a year ago or so tried the Kingpin 3090 bios - this works great with powerlimit 530Watt. But HDMI Monitors does not work and some other quirks happend.
> So im currently back to official V4 bios:
> 
> 
> 
> 
> 
> 
> 
> 
> Asus RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> 
> Is it possible to use this bios:
> 
> 
> 
> 
> 
> 
> 
> 
> Asus RTX 3090 Ti VBIOS
> 
> 
> 24 GB GDDR6X, 1560 MHz GPU, 1313 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> on the 3090?
> Have anyone tried this out already?
> 
> 
> best regards!


From comments I've seen in the past, the Asus 1kW BIOS still has a 150W limit set on the 3rd 8-pin IIRC, causing it to limit early. 

No, NVFlash won't flash a BIOS with a different device ID. Good chance it wouldn't work correctly anyway, if at all.


----------



## PLATOON TEKK

Veii said:


> Amazon is cheaper, but you need such SOIC-8 clip
> 
> 
> 
> 
> 
> 
> 
> 
> EVC is "a bit premium for an SPI flasher", but it's target was more I²C & PWM controller takeover. Soo while i have one of the first EVC2 batches (before 2 more revisions)
> ~ i still only use 1/4th of the features. Just a tiny trusty flasher for me without voltage problems
> Speaking about problems and CH341A
> 
> 
> 
> 
> There are legitimate reasons (which i forgot, as v1.6 was too new) why it's more work to invest to a cheap programmer vs buying into an ecosystem (sound like a sales pitch)
> View attachment 2555054
> 
> Currently the shipping system is a bit odd to my eyes
> It's indeed been some time
> 
> If we figure this out, great. Bruteforce nvflash mod or actual sign'ing
> But in general, there are many people who now get into i²c voltage control over their cards
> It's a good little swiss-knife for repairing things. Unless you can find the CH341A v1.6 (green PCB with IR remote module, release 2022) somewhere on amazon or aliexpress
> 
> At any point you'll need a clip for the chip and the flasher, if you want to bruteforce things
> Bioses won't really change
> 
> Most straightforward and easy is exactly that
> No soldering, a single clip and a little tool
> These days they don't include rubber feets anymore, but it gave it it's charm. Just can do much more than that and SPI was a "tertiary purpose" on modding or XOC
> , SPI will be needed unless research shows significant progress


Boom, thanks once more for being so elaborate and schooling me on all of this. I actually have an EVC2SX from when I added voltage to one of the Strixes here. Just ordered the SOIC8 test clip and will look into how the whole process goes with the EVC2SX


----------



## Falkentyne

PLATOON TEKK said:


> Boom, thanks once more for being so elaborate and schooling me on all of this. I actually have a evc2sx from when I added voltage to one of the Strix’s here. Just ordered the SOIC8 test clip and will look into how the whole process goes with a evc2x
> 
> View attachment 2555071


Make sure you have the 1.8v adapter unless the EVC2SX supports direct 1.8v setting for flashing! Flashing with 3.3v can destroy the chip (people have done this on the Pascal laptop thread that was on the old notebookreview, and the chip stopped working).


----------



## elbramso

@PLATOON TEKK 
I've just checked your signature - very impressive 👍
I guess I'm 1st in single-card Port Royal under water (no chiller) 😎


----------



## Veii

mFuSE said:


> Hello,
> sadly the bios can't be installed, also not with provided nvflash:


Ty for the test
This looks like "i messed up the cert" ~ which should not happen on those files

The others online are different. Some have a chip limit of 228W, some have a PCIe (SRC) port limit of 150W (450W + 75W PCIe) and cap out
It's just the "half done bios" work, re-branded for other vendors

Yes, SPI only as it seems ~ but i want to ask:
What VBIOS version were/are you running atm ~ what's the name of the BIOS you're trying to update from?








^ global offset (CTRL+G on HxD) 9268
Tried anything newer than 94.02.42 ever ?

Do both 5.660 and 5.670 not pass ?
With 5.670 i guess you tried the "rebrand" one i shared ~ what about Falkentyne's 5.660?
Both should be the same thing (DeviceID check, disabled)

If you feel very brave, we can try one more "different" thing
Else, till i figure this out, SPI is the only way
The ROMs are fine; the tools and the on-board BIOS ~ limit flashing
Once the ROMs pass the cert check the problem fades away
A cert failure alone will not limit functionality, it only bothers flashing
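As an aside, the "global offset (CTRL+G on HxD)" check mentioned above is easy to script; this is only a hedged sketch with a synthetic buffer (whether the 9268 in the post is decimal or hex, and where a given BIOS actually stores its version string, is not asserted here).

```python
# HxD-style "go to offset" peek: print a hex + ASCII view of a dump
# at a given offset, e.g. to eyeball a VBIOS version string.
def peek(data: bytes, offset: int, length: int = 16) -> str:
    chunk = data[offset:offset + length]
    hexpart = " ".join(f"{b:02X}" for b in chunk)
    asc = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    return f"{offset:08X}  {hexpart}  {asc}"

# Synthetic ROM with a version-like string planted at offset 0x4000
rom = bytes(0x4000) + b"94.02.42.00.A9" + bytes(0x1000)
assert "94.02.42.00.A9" in peek(rom, 0x4000)
print(peek(rom, 0x4000))
```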


Falkentyne said:


> Make sure you have the 1.8v adapter unless the EVC2SX supports direct 1.8v setting for flashing! Flashing with 3.3v can destroy the chip


None of the EVCs need it ~ it's not "cheaply" designed, but the EVC itself costs a bit for simply being an SPI flasher


----------



## PLATOON TEKK

Falkentyne said:


> Make sure you have the 1.8v adapter unless the EVC2SX supports direct 1.8v setting for flashing! Flashing with 3.3v can destroy the chip (people have done this on the Pascal laptop thread that was on the old notebookreview, and the chip stopped working).


Thanks for the heads up! That could have ended messy, ha. I actually ordered one of the 1.8 clips along with the soic8 order from Elmor. I also added some shunts. I will absolutely be using that, thanks a million for pointing that out!




elbramso said:


> @PLATOON TEKK
> Ive just checked your signature - very impressive 👍
> I guess I'm 1st in single card Port Royal under water (no chiller) 😎


Thanks man. Boom, that’s badass. Water, whether with a chiller or not, is the way forward for me, since I actually use the systems I push. You’d probably kill it with two cards!


----------



## Veii

PLATOON TEKK said:


> Thanks for the heads up! That could have ended messy, ha. I actually ordered one of the 1.8 clips along with the soic8 order from Elmor. I also added some shunts. I will absolutely be using that, thanks a million for pointing that out!


If this works out (it has to)
Shunts will be irrelevant

Shunts, in the current state, mess up the reading and the card draws more - but i do not think that this matters much, as the SRC control chip has a too-small shunt and can not be modified except by firmware
Then, once you can modify it by firmware, shunts are rendered useless
I'm currently a bit worried about people killing their cards with shunts + mods

The substrate doesn't seem to have any sort of management engine (gladly), and the 4000 series appears to do it on the PSU side
So mods work out, but mods can also be the reason some cards die when they actually pull more than these 680W chip limits
Stock "normal" cards cap at 228W, the rest likely is turned into heat ~ unsure fully where all this power goes that's pulled but never pushed to the die. Odd 


PLATOON TEKK said:


> Since I actually use the systems I push.


Same 
Still uncertain about a mora for me ~ it's too bulky and i usually run ITX systems
A chiller would be loud having it in the same room, no ?


----------



## PLATOON TEKK

Veii said:


> If this works out (has to)
> Shunts will be irrelevant
> 
> Shunts in the current state, mess up the reading and card draws more - but i do not think that this matters much, as SRC control chip has a too small shunt and can not be modified unless by Firmware
> Then once you can modify by firmware , Shunts are rendered useless
> I'm currently a bit worried for people killing their cards with shunts + mods
> 
> Substrate doesn't seem like has any sort of management engine (gladly), and 4000 series appears to be on the PSU side
> Soo mods work out, but mods an also be the reason some cards die when they actually pull more than these 680W chiplimit's
> Stock "normal" cards cap at 228W , rest likely is turned into heat ~ unsure fully where all this power goes that's pulled but never pushed to the die. Odd
> 
> Same
> Still uncertain on a mora for me ~ it's too bulky and i run ITX systems usually
> A chiller would be loud having it in the same room, no ?


I haven’t shunt modded since the Pascal X; I feel it’s just good to have them around lol. But good to note that they might mess with the mods and readings.

In terms of a chiller I am an absolute supporter of it; you can easily use QDCs and a long tube run to place it in a separate room or floor.

The whole house here is tubed up to chillers, so all setups are 100% quiet and fanless. You can count those tubing boxes (10ft) in the pic and get an idea of how much tubing is under the house lol.

My mortal enemy has been dew point, being right on the west coast. Let me know if you need any tips when it comes to that, I went a bit wild.



Spoiler: Chiller setups


----------



## Veii

Absolutely astonishing ! @PLATOON TEKK
A tiny bit insane, but in large part "simply astonishing" 

I will come back to you, just currently buying toys to make content and research
Usually used to lower-tier gear, as my applications aren't graphics-demanding
Not a developer either, i just like to fix grudges when it's unfair. So also not an XOC guy, although it's fun 

It makes me wonder two things
"How are you not familiar with modding, and so also firmware flashing"
"How haven't you researched a bit into hydrogen fun and/or nitrogen"
Can totally imagine you running hydrogen generators and making your own nitrogen for XOC ~ in your own house
(although the legal side per country is questionable) 
============================================================
mm, if you're into elmorlabs's discord, let me know ~ once you got an SPI flasher.
I'll see if i get lucky again with the findings and it makes progress.
Key Certificates sit:








as a check and location anchor
And








As an encryption code with stamps of something
(the key thing for file-integrity checks, and file recognition by nvflash)
That's what i hang on atm. What's the meaning of it?
Is it the saving date, is it a checksum, is it anchors, or the usual two "checksums" which rotate from FE, FF to 00, 01 by the distance of the changes
(the checksum algo has been the same since Kepler days, just with "more checksums" added that influence each other)
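That rotating behaviour is consistent with a simple sum-to-zero byte checksum (the classic PCI option-ROM scheme), where a single fixup byte shifts by exactly the distance of any edit, mod 256. Whether NVIDIA's VBIOS uses precisely this scheme, and where its fixup bytes live, is an assumption; the offsets below are made up for illustration.

```python
# Sum-to-zero byte checksum: pick one byte so that all bytes of the
# image sum to 0 mod 256 (as in PCI option ROMs). Any edit elsewhere
# moves this fixup byte by the same distance, mod 256.
def checksum_fixup(image: bytearray, fixup_offset: int) -> None:
    image[fixup_offset] = 0
    total = sum(image) & 0xFF
    image[fixup_offset] = (-total) & 0xFF  # byte that zeroes the sum

img = bytearray(b"\x55\xAA\x10" + bytes(13))
checksum_fixup(img, len(img) - 1)
assert sum(img) & 0xFF == 0
# A one-byte edit rotates the fixup byte correspondingly:
img[2] += 1
checksum_fixup(img, len(img) - 1)
assert sum(img) & 0xFF == 0
```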

Signatures:








are not generate-able, for sure
Mnemonics like:
NPDS, NPDE, PCIR, ISBN, JFFS, KGI & PCGI
are all meaningless on their own ~ but can be used as anchors for lookup

It's no wonder that the ABE tool (by nvidia now) is obfuscated and the dev tells nonsense, or at best/atm doesn't comment on it anymore
He sits deep under NDA ~ and since the whole nvidia BIOS has been designed on these since at least 8?, well, many many years ~ the rabbit hole looks to be very deep.
Simple, yet quite deep. No wonder nobody dug into this, or nvidia bought him out/away. Get the tool together, and all cards are compatible with it. Also no wonder every update has obfuscated new mess

Sharing a bit more now, just to spread awareness.
If anybody has NDA papers, or saw any github project out there to learn the meaning of these mnemonics from ~ any PPR or technical paper for any nvidia card ~ let me know
I am sure there is no way to generate the actual signatures that EVGA requests from NVIDIA, or that Jensen himself hands out. So there's also no way to request this enthusiast-key "nonsense" from them to "unlock" cards for flashing

What can be done tho, is fixing nvflash
Getting used to SPI flashes, and hopefully figuring out the meaning of these tiny changes, so i/we can design vendor-based official updates
Just in general, FE's check it differently and deeper. AIB cards are easier. FE unlocking might not work out at all without forcing an AIB BIOS into them via SPI (clean erased ROM)


----------



## yzonker

PLATOON TEKK said:


> i haven’t shunt modded since the pascal x, I feel it’s just good to have them around lol. But good to note that it might mess with the mods and reading.
> 
> In terms of chiller I am an absolute supporter of it, you can easily use qdcs and a long tube run to place it in a separate room or floor.
> 
> the whole house here is tubed up to chillers, so all setups are 100 quiet and fanless. You can count those tubing boxes (10ft) in the pic and get an idea of how much tubing is under the house lol.
> 
> my mortal enemy has been DewPoint being right on the west coast. Let me know if you need any tips when it comes to that, I went a bit wild.


That is an insane number of Primochill boxes. LOL. I just re-worked my chiller loop with that same line, but I only used 3 of them. 

Still, I'm pretty happy with it. I've got it more permanently connected to my loop now, so I went down to one PMP-500 since I have 2 D5's also in my loop. Dual PMP-500's still offer the best performance, but it's fairly small. I may eventually go back to dual pumps, but wanted to test this for a while.

The other thing I did was add a smart plug that you can see in the pic. Since the chiller has static memory, I can power it on with a phone app and it just picks up where it left off previously.

And that reservoir is a sealed food container I bought at Target. I just added fittings to it. It holds about a gallon and helps stabilize temps quite a bit.


----------



## yzonker

Falkentyne said:


> 5.590 won't flash ampere. It's too old. Ampere came out September 2020.
> I have the ID check bypass (well, one of them).
> 
> I have no idea which one will work for you.
> 
> Doesn't seem to work on FE cards.
> The one guy who tried the 5670 one to flash a 3090 FE Bios on his ROG Strix 3090 got a "Check VGA" post LED code error when he tried to reboot, and had to boot from the backup bios.
> 
> _edit_ looks like veii beat me to it.


Did the two versions you linked in this post come from here?









Modified NVFlash v5.667.0


You can thank @Lost_N_BIOS for this one I just went to him with the idea. Theres some more things planned to be done but for now this version overrides the board ID mismatch. Passworded for now so nobody else tries to take the credit for it when that belongs to Lost, just shoot me a PM for the...




www.win-raid.com





I wonder if the originator of those files is still around?


----------



## Falkentyne

yzonker said:


> Did the two versions you linked in this post come from here?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Modified NVFlash v5.667.0
> 
> 
> You can thank @Lost_N_BIOS for this one I just went to him with the idea. Theres some more things planned to be done but for now this version overrides the board ID mismatch. Passworded for now so nobody else tries to take the credit for it when that belongs to Lost, just shoot me a PM for the...
> 
> 
> 
> 
> www.win-raid.com
> 
> 
> 
> 
> 
> I wonder if the originator of those files is still around?


The first one was some nvflash someone had posted on techpowerup when someone else was asking for an NV ID bypass to cross-flash an RTX ampere laptop BIOS (I believe a RTX 3070). And it apparently worked on some desktop cards too (if you didn't use the -6 parameter, which seemed to make it not work).
A strix user used it to flash a 3090 FE bios onto his card but he got a "check VGA Bios" post code display error when he rebooted and had to restore from the backup then hotflash the main with the bios switch. Which means the 3090 FE bios is drastically different enough from a regular 2x8 pin AIB Bios in some way, as the Strix would be castrated by a 2x8 pin AIB Bios in power draw, but would at least boot with it.


----------



## Veii

yzonker said:


> I wonder if the originator of those files is still around?


Ket is an old BIOS modder for gigabyte boards, but has been busy recently
Some vendors did copy his work, and nobody ever donated to him (he's a friend of Lost_N_Bios)
He moved to win-raid to have it public, but because of bad history all files are always password locked


Falkentyne said:


> Which means the 3090 FE bios is drastically different enough from a regular 2x8 pin AIB Bios in some way, as the Strix would be castrated by a 2x8 pin AIB Bios in power draw, but would at least boot with it.


The problem is, they are not.
The 12-pin plug has no sense pins and functions on the same "routing" as normal (3x) 8-pins ~ with the same SRC limits set by the controller.
The user likely messed up their card with nvflash, as nvflash does not wipe everything.
They should have used an SPI flash for a full rebrand, as the FE card has more cert checks than AIB cards.

EDIT:
There could still potentially be a deviceID (chip ID, rather) check ~ see if the flipchip is incompatible.
But from my comparisons of the limited uploads, only the cert check is done differently on FEs versus AIB cards.
There isn't a "module" for the card's 12-pin or anything very special; the ROM checks are just more annoying and want unlock keys.
SPI flash it away (clean ROM, clean brain) and it might even boot up.
For all other AIB cards, half of the issue belongs to me in that I cannot sign them correctly (modification date) ~ the other half is nvflash being annoying.
SPI flash and it will work.


----------



## Veii

Veii said:


> Most straightforward and easy is exactly that
> No soldering, a single clip and a little tool


I think I have to retract my recommendation and nice words for the EVC.
The ecosystem is still good, but
compare:













Current vs Old

The moment I cannot differentiate it from a flawed CH341A ~
where the downgrade is not only on the visual side of things, but also literally gifts sales to the rival ~
I just see no reason anymore to recommend it for SPI purposes.
With this change ElmorLabs can no longer stand out compared to rival companies; there was a cut in both visuals and functionality ~ beyond a couple of negatives I can't see a reason to update to the S or SX version.

The most common SPI ROMs are 1.8v and 3.3v.
It's unfortunate but "ok" that it doesn't cover 2.5v ROMs, and 5v ones are rare too.
But the changes made (reason unknown, the engineer sleeps) to downgrade it so much while increasing the price give me zero reason to recommend it.
For "basic functionality" the price increase of split parts is too much (the modular approach makes it look bad ~ an octopus full of adapters).

Likely still a good ecosystem, but a flawed downgrade. Sadly I can't see much of a positive outcome from it.
I didn't know or expect he'd ruin it himself, but the shop page doesn't even warn that 1.8v is not supported anymore.

@Falkentyne is right,
you need a 1.8v adapter @PLATOON TEKK
You couldn't have known; the site doesn't explicitly warn that the "successor" cuts basic functionality.
Still a good tool, but a lesser unit than the original EVC2 (the changes outweigh the gains).

Eh, it just got more expensive and less fashionable or compact with all these adapters.
Hard to justify recommending it anymore, though it's still an "easier" fix than working around CH341A flaws that are now EVC flaws too. How irritating.

EDIT:
Well, this happened today







It's still bothersome with extensions, but at least ~ well, what to say.
It's surely not the old EVC2 that it was, but for whoever wants to buy it ~ I guess 20% off sounds ok (coupon "antitux_is_live")


----------



## z390e

PLATOON TEKK said:


> "Chiller setups "


Just showed this to my wife and she said don't even think about it 😆 What an absolutely insane commitment to keeping the rig performing well. A+ sir.

You should leave the heat of SoCal and come up to Humboldt, much cooler up here and less dewpoint management


----------



## PLATOON TEKK

z390e said:


> Just showed this to my wife and she said don't even think about it 😆 What an absolutely insane commitment to keeping the rig performing well. A+ sir.
> 
> You should leave the heat of SoCal and come up to Humboldt, much cooler up here and less dewpoint management


Haha, you should have seen my girl's face when I purchased 5 baby monitors at once, out of the blue. She was oddly relieved when she found out it was to watch the chillers lol.

It's definitely been a journey. Humboldt sounds amazing; never knew dewpoint would become a real factor when choosing homes ha.



Veii said:


> I think I have to retract my recommendation and nice words for the EVC.
> The ecosystem is still good, but
> compare:
> View attachment 2555335
> View attachment 2555336
> 
> Current vs Old
> 
> The moment I cannot differentiate it from a flawed CH341A ~
> where the downgrade is not only on the visual side of things, but also literally gifts sales to the rival ~
> I just see no reason anymore to recommend it for SPI purposes.
> With this change ElmorLabs can no longer stand out compared to rival companies; there was a cut in both visuals and functionality ~ beyond a couple of negatives I can't see a reason to update to the S or SX version.
> 
> The most common SPI ROMs are 1.8v and 3.3v.
> It's unfortunate but "ok" that it doesn't cover 2.5v ROMs, and 5v ones are rare too.
> But the changes made (reason unknown, the engineer sleeps) to downgrade it so much while increasing the price give me zero reason to recommend it.
> For "basic functionality" the price increase of split parts is too much (the modular approach makes it look bad ~ an octopus full of adapters).
> 
> Likely still a good ecosystem, but a flawed downgrade. Sadly I can't see much of a positive outcome from it.
> I didn't know or expect he'd ruin it himself, but the shop page doesn't even warn that 1.8v is not supported anymore.
> 
> @Falkentyne is right,
> you need a 1.8v adapter @PLATOON TEKK
> You couldn't have known; the site doesn't explicitly warn that the "successor" cuts basic functionality.
> Still a good tool, but a lesser unit than the original EVC2 (the changes outweigh the gains).
> 
> Eh, it just got more expensive and less fashionable or compact with all these adapters.
> Hard to justify recommending it anymore, though it's still an "easier" fix than working around CH341A flaws that are now EVC flaws too. How irritating.
> 
> EDIT:
> Well, this happened today
> View attachment 2555383
> 
> It's still bothersome with extensions, but at least ~ well, what to say.
> It's surely not the old EVC2 that it was, but for whoever wants to buy it ~ I guess 20% off sounds ok (coupon "antitux_is_live")


Damn, I just missed that coupon for my order ~ all good, thanks for pointing it out. My shipment is currently "on hold" though; DHL says it will be here on the 14th. Then I should be able to test the SPI flash for us (carefully lol).


----------



## PLATOON TEKK

Veii said:


> Absolutely astonishing ! @PLATOON TEKK
> Tiny bit insane, but an a big part "simply astonishing"
> 
> I will come back to you, just currently buying toys to make content and research
> Usually used to lower tier gear, as my applications aren't graphic demanding
> Neither a developer, just like to fix grudge's when it's unfair. Soo also not a XOC guy, although it's fun
> 
> It makes me wonder on two things
> "How are you not familiar with modding, so also firmware flashing"
> "How haven't you researched a bit into hydrogen fun and/or nitrogen"
> Can totally imagine you running hydrogen generators and make your own nitrogen for XOC ~ in your own house
> (although country legal side is questionable)
> ============================================================
> mm, if you're into elmorlabs's discord, let me know ~ once you got an SPI flasher.
> I'll see if i get lucky again with the findings and it makes progress.
> Key Certificates sit:
> 
> 
> 
> 
> 
> 
> 
> 
> as a check and location anchor
> And
> 
> 
> 
> 
> 
> 
> 
> 
> As an encryption code with stamps of something
> (key thing of file-integrity checks, and file recognition by nvflash)
> That's on what i hang atm. What's the meaning of it.
> Is it saving-date, is it checksum, is it anchors or like the usual two "checksums which rotate from FE, FF, to 00, 01 up to how much the distance of changes it
> (checksum algo is the same since Kepler days, just "more checksums" added, that influence each other)
> 
> Signatures:
> 
> 
> 
> 
> 
> 
> 
> 
> Are not generate-able for sure
> Mnemonics like:
> NPDS, NPDE, PCIR, ISBN, JFFS, KGI & PCGI
> are all meaningless ~ which can be used as anchors for lookup
> 
> It's no wonder that ABE tool (by nvidia now) is obfuscated and dev tells nonsense or at best/atm doesn't comment anymore on it
> He sits deep under NDA ~ but hence whole nvidia bios is designed on these since at least 8 ? , well many many years ~ the rabbit hole looks to be very deep.
> Simple, yet quite deep. No wonder nobody digged into this or nvidia bought him out/away. Get the tool together, and all cards are compatible with it. Also no wonder every update has obfuscated new mess
> 
> Sharing bit more now, just to spread awareness.
> If anybody has NDA papers or saw any github project out there to learn the meaning of these mnemonics on ~ any PPR or technical paper for any nvidia card, let me know
> I am sure , there is no way to generate the actual signatures that EVGA requests from NVIDIA, or Jensen himself handles out. Soo also no way to request this enthusiast key "nonsense" from them to "unlock" cards for flashing
> 
> What can be done tho, is fixing nvflash
> Getting used to SPI flashes, and hopefully figure the meaning of these tiny changes out, soo i/we can design Vendor based official updates
> Just in general, FE's check it different and deeper. AIB cards are easier. FE unlocking might not work out at all without forcing an AIB bios into them via SPI (clean erased rom)


My bad just saw this, to answer your questions:
1) the reason I lack modding experience is most likely that I like focusing on the hardware aspect of things. With all this tubing I’m more like a plumber than anything lol. That’s why I’m extremely grateful to have someone like you show up who knows their ****. However, I’ve been reading your responses (which have been incredibly informative) and am slowly starting to get an idea. Closest I’ve gotten was hacking Xboxes (calibrating laser, flashing chips etc).

2) I think the main reasons I haven’t jumped to ln2 is due to time and the fact that I want to push this chiller setup so hard that I stand between water and ln2. I’ve always wanted to have the “most powerful setup” that’s actually usable. Ln2 just gives results and numbers, which is amazing in itself, but I like closing 3dmark and just using the **** instead of having to Vaseline every inch of everything.

Before the 4000 series, my aim is to build an insulated chamber to house the 6 chillers and PC to finally let me hit the -20c coolant temp the loop is capable of. To my knowledge (and Koolance's) no other chiller setup exists with such a high wattage. I'm running 5x 900 and 1x 1.5hp. Building the chamber is incredibly elaborate since it needs to cool a large amount of wattage in a fully enclosed space. Server rooms pale in comparison (for both temp and humidity requirements).

Will def be one last adventure, but when that is done, any and every piece of hardware that gets thrown in there will perform the closest to LN2 possible.
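On the rotating checksums mentioned in the quoted post (FE, FF → 00, 01): if the VBIOS follows the classic PCI option-ROM rule that all bytes sum to 0 mod 256 (an assumption here, not confirmed for Ampere images), that rotation falls out naturally ~ bump a data byte up by one and the checksum byte rotates down by one. A minimal sketch:

```python
def fix_checksum(rom: bytearray, csum_offset: int) -> None:
    """Recompute an 8-bit complement checksum so all bytes sum to 0 mod 256."""
    rom[csum_offset] = 0
    rom[csum_offset] = (-sum(rom)) & 0xFF

rom = bytearray([0x55, 0xAA, 0x12, 0x00])  # toy image; last byte is the checksum slot
fix_checksum(rom, 3)
assert sum(rom) % 256 == 0  # image now passes the sum-to-zero check

# Bumping a data byte by +1 rotates the checksum byte down by 1 (0x00 -> 0xFF, etc.)
rom[2] += 1
fix_checksum(rom, 3)
```

This only covers the simple additive checksum; the signature blocks Veii describes are a separate, non-generatable layer.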


----------



## Benni231990

This week I got my 1000D incl. the new radiator and I made a few benchmarks. With a little UV the card boosts to 2040MHz, but my ****ing little 470W power limit is the biggest crap


----------



## elbramso

This is with my daily UV setup (CPU and GPU) -> total system power draw max 500w:








I scored 14 178 in Port Royal


Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com




https://www.3dmark.com/3dm/74138557
https://www.3dmark.com/3dm/74138688

All runs started with ~27°C water

GPU: [email protected]

At 150lph and fans at 35% my water maxed out at 30.2c (loop tested) with an ambient temp of 24.7c.
I guess I'm good for the summer^^

EDIT:
Loop tested with 35 mins of Time Spy Extreme ~ in Port Royal, max system power draw is more like 460w.


----------



## Skinnered

Hi guys.

In summary, what would be the best BIOS for a 3090 Strix OC with the best perf (highest power limit) and ReBAR, although ReBAR still seems disabled in the driver for many games (see Inspector)?
Is the NVFlash mentioned in the topic start suitable for flashing in Windows?


----------



## Nizzen

Skinnered said:


> Hi guys.
> 
> In summary, what would be the best BIOS for a 3090 Strix OC with the best perf (highest power limit) and ReBAR, although ReBAR still seems disabled in the driver for many games (see Inspector)?
> Is the NVFlash mentioned in the topic start suitable for flashing in Windows?


Try the Strix 520W ReBAR BIOS and the Kingpin 1000W ReBAR one. Good cooling is almost more important than the BIOS...

I found 520W enough for almost every game, except a few ~ like Quake 2 with ray tracing.


----------



## GRABibus

Nizzen said:


> Try strix 520w rebar bios and kingpin 1000w rebar. Good cooling is almost more important than bios...


The 1000W KP BIOS was a "no go" on my former Strix with the stock cooler. Temperatures were too high during Port Royal (more than 100 degrees on the hot spot!!!).

It didn't happen on the SUPRIM X I tested or on other 3080s with stock coolers. I don't know why the Strix showed such high temps.

So, no 1000W XOC BIOS on a Strix with the stock cooler!


----------



## Skinnered

^Thanks. I have a feeling thermal pads/paste are an issue with many(?) 3090 Strixes, causing high hotspot temps.
Will try the 520W first.

I noticed that when injecting ENB or ReShade the GPU load increases a lot and my GPU freq. drops to sub-1900 core, so it looks like a power issue.


----------



## Nizzen

GRABibus said:


> The 1000W KP BIOS was a "no go" on my former Strix with the stock cooler. Temperatures were too high during Port Royal (more than 100 degrees on the hot spot!!!).
> 
> It didn't happen on the SUPRIM X I tested or on other 3080s with stock coolers. I don't know why the Strix showed such high temps.
> 
> So, no 1000W XOC BIOS on a Strix with the stock cooler!


The Strix doesn't have the best cooler, so maybe the SUPRIM X ReBAR BIOS is best on air. That BIOS tends to perform well.

Cooling is everything with air cooling.

Is it legal with a 3090 Strix OC and air cooling?


----------



## Skinnered

This 520W BIOS is from EVGA? Or is there a Strix 520W BIOS? I only looked at the TechPowerUp database.


----------



## Nizzen

Skinnered said:


> This 520W bios is from Evga? Or is there a strix 520 bios? I only looked at tech'spowerup database.


Both EVGA and Strix have 520W versions. Both are on TPU.


----------



## Benni231990

Do you have a link for the 520W EVGA ReBAR BIOS and the Strix 520W ReBAR BIOS?

I can't find the two BIOSes.


----------



## GRABibus

At least the EVGA 520W with rebar :








EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## yzonker

Benni231990 said:


> have you a link for the 520 EVGA ReBar bios and the strix 520watt rebar bios?
> 
> i cant find the 2 bios


Kingpin 520w bios, 









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com





I'm not aware of a 520w Asus bios though.


----------



## GRABibus

yzonker said:


> Kingpin 520w bios,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS
> 
> 
> 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> I'm not aware of a 520w Asus bios though.


The 520W Asus BIOS is in fact the XOC 1000W one.

----------



## Benni231990

Super, thank you  

And the ASUS BIOS? I can't find the 520W; I only found the XOC 1000W and the normal 480W.


----------



## GRABibus

Benni231990 said:


> super thank you
> 
> and the ASUS bios? i cant find the 520watt i only found the XOC 1000 watt and the normal 480watt


The Asus 1000W XOC with rebar is in fact 520W


----------



## Benni231990

OK, the XOC I don't like because the RAM never clocks down.

Now I'm using the EVGA and it works pretty well


----------



## Nizzen

Asus "520w" rebar


https://www.diskusjon.no/applications/core/interface/file/attachment.php?id=661381


----------



## Benni231990

I can't see the file because of an error.

But it's OK, the EVGA runs pretty good  

Or is the ASUS BIOS "better"?


----------



## GRABibus

Benni231990 said:


> i cant see the file because of an error
> 
> but its ok the evga runs pretty good
> 
> or ist the asus bios "better"



The ASUS 1000W (in fact 520W) with ReBAR is good for gaming, not for bench.
The EVGA 520W BIOS with ReBAR is good for gaming and bench.

But the best BIOS I tested for bench + games on my former Strix on air was the MSI SUPRIM X ReBAR BIOS:








MSI RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com





Enclosed is the 1000W (in fact 520W) ReBAR ASUS BIOS (rename it with a .rom extension).


----------



## Benni231990

Also, I have now played Warzone and the EVGA BIOS is awesome ~ I can constantly boost to 2070 with 1.018V core, absolutely perfect  

I don't like the XOC because the RAM clock doesn't clock down at idle; that isn't perfect, but the EVGA BIOS I love


----------



## NapsterAU

Any higher power limit BIOS I can chuck on my Zotac 3090 2x8pin?


----------



## yzonker

NapsterAU said:


> Any higher power limit bios i can chuck on my Zotac 3090 2x8pin?


The only choices are one of the 390W BIOSes (Galax, Gigabyte) or this KP XOC, but you need good cooling for it.









EVGA RTX 3090 VBIOS


24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory




www.techpowerup.com


----------



## GRABibus

Benni231990 said:


> Also, I have now played Warzone and the EVGA BIOS is awesome ~ I can constantly boost to 2070 with 1.018V core, absolutely perfect
> 
> I don't like the XOC because the RAM clock doesn't clock down at idle; that isn't perfect, but the EVGA BIOS I love


You can set a profile in MSI AB with an offset of -250MHz on memory and launch it when you are in Windows. That's it.


----------



## the bag

NapsterAU said:


> Any higher power limit bios i can chuck on my Zotac 3090 2x8pin?


Hi,

I flashed my 2x8-pin Asus TUF Gaming 3090 with the Gigabyte 390W maxed-TDP BIOS. It has worked for a year without issues. Higher stable clocks -> the TUF could handle the additional 20 watts easily, and they help a lot. I don't know your cooling, but you could give the Gigabyte BIOS a try.

Greetings!


----------



## PiotrMKG

Veii said:


> *ASUS 3090 Strix OC [3x 8pin 450-600W]
> VBIOS 94.02.42.00.A9*
> 
> OC Bios - Comparable to a 3090Ti
> Half-Familiar layout ~ 50min needed
> 
> Pls flash via SPI
> if official NVFlash
> or nvflash64_RTX-Rebrand.7z
> refuse
> View attachment 2554842
> 
> Changes & Location
> Make an SPI-ROM backup please


Hello, 
Should I change extension from TXT to ROM for NVflash? Thanks


----------



## GRABibus

PiotrMKG said:


> Hello,
> Should I change extension from TXT to ROM for NVflash? Thanks


nvflash doesn't work with these modded bios.


----------



## PiotrMKG

GRABibus said:


> nvflash doesn't work with these modded bios.


Even with ver. 5.735.0? So for a no-hassle BIOS swap, is the ASUS XOC 1000W an option for a custom watercooled Strix OC?


----------



## GRABibus

PiotrMKG said:


> Even with ver.5.735.0? So for no hassle bios swap, ASUS XOC 1000W is an option for custom watercooled STRIX OC?


Which Bios do you want to flash and on which card ?


----------



## PiotrMKG

GRABibus said:


> Which Bios do you want to flash and on which card ?


I want to flash the BIOS from the quoted reply onto my Strix OC (same card). Both BIOSes are ver 94.0.2.00.A09.


----------



## Benni231990

I have a little question.

Today I was playing Division 2 after a long time, ran the benchmark, and saw I have a constant 480-520W power draw 😅 (by the way, UWQHD and everything maxed out in the settings).

After a little gameplay I got 68°C (constant 480-520W). Is this normal? I think it's a bit high. I have an Obsidian 1000D with 2x 480x30mm in the front and a 420x45 on the top, and the water temp was around 36°C.

Or is this normal when you have a consistent 490-520W power draw?


----------



## GRABibus

PiotrMKG said:


> I want to flash BIOS from quoted reply into my STRIX OC (Same card). Both BIOSes are ver 94.0.2.00.A09 s


Definitely not possible with nvflash.
See the SPI method described by Veii in posts from some days ago.


----------



## PLATOON TEKK

Getting the EVC2X ready for the SPI flash and waiting on the voltage clip. Just had a chance to check out the ElmorLabs PMD. The readings fluctuate heavily because of the fast and accurate polling.

This is definitely useful for that ****ty OC Labs I have with the faulty readings, or for anyone with shunts. You can log the wattage with the EVC


----------



## yzonker

PLATOON TEKK said:


> Getting the evc2x ready for the spi flash and waiting on voltage clip. Just had a chance to check out the Elmorlabs PMD. The readings fluctuate heavy because of the fast and accurate polling.
> 
> this is definitely useful for that ****ty OC Labs I have with the faulty readings, or anyone with shunts. You can log the wattage with the EVC
> 
> View attachment 2555884
> 
> View attachment 2555885


What card are you going to test that on?


----------



## PLATOON TEKK

yzonker said:


> What card are you going to test that on?


I have a couple of older gpus here from a 580 upward so I will find the earliest that works with all of this and just flash it to a standard bios first and evolve from there hopefully.


----------



## yzonker

Benni231990 said:


> I have a little question.
> 
> Today I was playing Division 2 after a long time, ran the benchmark, and saw I have a constant 480-520W power draw 😅 (by the way, UWQHD and everything maxed out in the settings).
> 
> After a little gameplay I got 68°C (constant 480-520W). Is this normal? I think it's a bit high. I have an Obsidian 1000D with 2x 480x30mm in the front and a 420x45 on the top, and the water temp was around 36°C.
> 
> Or is this normal when you have a consistent 490-520W power draw?


That's likely a GPU block mount issue. Pretty much any block can at least maintain a 20°C water-to-core delta. The only other possibility is very low flow, but I suspect that's not the problem.
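The rule of thumb above can be turned into a quick sanity check (the 20°C ceiling is yzonker's figure for a decent block, not a universal spec):

```python
def mount_delta_ok(core_c: float, water_c: float, max_delta_c: float = 20.0) -> bool:
    """Rough rule of thumb: a properly mounted block holds <=20C water-to-core delta."""
    return (core_c - water_c) <= max_delta_c

# Benni's numbers: 68C core at ~36C water -> 32C delta, well past the rule of thumb
print(mount_delta_ok(68, 36))  # False -> points at mount/paste (or very low flow)
```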


----------



## dr/owned

Scored a 3090 FTW3 under MSRP, so I'm going to do a shuffle: TUF 3090 -> guest desktop (and hope the 600W SFX PSU can take it....) and put the FTW3 in my desktop. I've wanted to play around with a 3-connector card for a while, and also I'll (supposedly?) be able to load the 1000W BIOS w/ rebar and not have to **** with shunts. But I will have to solder over the fuses because 20A isn't going to do it.

I tested it out with the air cooler just to make sure everything works before ordering a waterblock: god, the air cooler experience SUCKS (and blows). Sounds like a jet engine under full load, and 106°C memory temperature vs. 56°C with my TUF.

1500W from the wall though if I load both 3090s and the CPU, but that's with the FTW3 running in stock 430W TDP mode.
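The "20A isn't going to do it" point checks out with a rough even-split estimate (the even split across connectors and the 66 W 12V slot share are simplifying assumptions for illustration):

```python
def amps_per_8pin(total_w: float, n_8pin: int = 3,
                  slot_w: float = 66.0, rail_v: float = 12.0) -> float:
    """Rough per-8-pin current, assuming the draw beyond the slot's 12V share
    splits evenly across the connectors (real cards don't split exactly evenly)."""
    return (total_w - slot_w) / n_8pin / rail_v

# At a 1000W cap on a 3x 8-pin card, each connector sees roughly:
print(round(amps_per_8pin(1000.0), 1))  # ~25.9 A -> past a 20 A fuse
```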


----------



## Veii

dr/owned said:


> Scored a 3090 FTW3 under MSRP so I'm going to do a shuffle of TUF 3090 -> guest desktop (and hope the 600W SFX psu can take it....) and put the FTW3 in my desktop. I've wanted to play around with a 3 connector card for a while and also I'll (supposedly?) be able to load the 1000W bios w/ rebar and not have to **** with shunts. But I will have to solder over the fuses cause 20A isn't going to do it.


I'm confident in my work.
Shunting will not be needed anymore.
But the current XOC BIOSes out there, with the few examples available, all cap at 500-600W draw ~ 450W SRC+PCIe power plus VRAM power.
EDIT: I wouldn't know SRC exists without @Falkentyne
Mine will cap at 680W, but it actually draws 600W, not 450W  I don't want people to kill their cards instantly ~ that would reputation-damage the big project. So no LN2 BIOS.

SPI flash for now,


Spoiler: Sneaky WIP




































Work moves along, but I'm embarrassing at assembly & C#.
Not a coder after all  so I'm learning it at the same time as digging.
We'll get there with signing, soon™~

SPI flash it is for now, but that will break nvflash ability, as the cert will be messed up.
Keep that in mind, and always make a full ROM backup first via a HW flasher.








Funnily, even the radare2 welcome screen is mocking the dev (the ABE tool dev) 
Sad to see him take that position and betray the community. Eh, past is past ~ his tool is irrelevant now, as it doesn't cover other RTX cards 🤭


----------



## dr/owned

Is the XOC the 500W bios? I already have the 1000W rebar one (LN2?) loaded up (94.02.59.00.82) but I think I'll probably shunt the PCIE slot so I don't hit a limit there. The card is already disassembled and now waiting on the Bykski block from China. I just couldn't tolerate the jet engine noise from the fans enough to leave it running in my desktop.


This is what I had to work with on the TUF. I got it up to a shunted 650W or so before it hit some other secondary power limit, I think (maybe AUX or something), that isn't measured with a big shunt:










Feels like 2020 all over again. Gotta do the usual: take measurements of the PCB stack heights, bypass fuses, tie the 8-pins together.


----------



## Benni231990

yzonker said:


> That's likely a gpu block mount issue. Pretty any block can at least maintain a 20C water to core delta. Only other possibility is very low flow, but I suspect that's not the problem.


You got it damn right. I repasted my GPU with Kryonaut Extreme and got much, much better temps ~ an improvement of 7-9°C. Now I'm happy


----------



## KedarWolf

Benni231990 said:


> You got it damn right. I repasted my GPU with Kryonaut Extreme and got much, much better temps ~ an improvement of 7-9°C. Now I'm happy


I'm not sure I'd want liquid metal as my GPU is a vertical mount, even with the surrounding area protected.


----------



## Nizzen

KedarWolf said:


> I'm not sure I'd want liquid metal as my GPU is a vertical mount, even with the surrounding area protected.


Kryonaut Extreme isn't liquid metal.


----------



## Falkentyne

dr/owned said:


> Is the XOC the 500W bios? I already have the 1000W rebar one (LN2?) loaded up (94.02.59.00.82) but I think I'll probably shunt the PCIE slot so I don't hit a limit there. The card is already disassembled and now waiting on the Bykski block from China. I just couldn't tolerate the jet engine noise from the fans enough to leave it running in my desktop.
> 
> 
> This is what I had to work with on the TUF and I got it up to shunted 650W or so before it hits some other secondary power limit I think (maybe AUX or something) that isn't measured with a big shunt:
> 
> View attachment 2556073
> 
> 
> Feels like 2020 all over. Gotta do the usual taking measurements of the PCB stack heights, bypass fuses, tie the 8 pins together.


I'm not sure what limit the card is reaching but I don't think it's a limit shown here.
The AUX limits, as far as I know, are the "GPU Misc 0/1/2/3" input powers shown in HWinfo64, and they themselves seem to "sum" into other main rails.
I know that on 2x8 pin cards, GPU Core NVVDD Input Power (sum), or "GPU Chip Power" (which has its own shunt), is the sum of GPU Misc0+GPU Misc2+NVVDD1 Input Power(sum).

I don't know what misc1 and misc3 go into at all, and I'm too bored to figure it out.
It's already known that on Kingpin cards, when you flip the dip switches, this increases the NVVDD and MSVDD voltages (hardware, not VID) directly, which also allows more power to be drawn.
Apparently, the Asus ROG Strix 3090 cards already have higher internal voltages for NVVDD and MSVDD--someone on hardwareluxx mentioned that the Strix's bios limits are the same as the Kingpin limits with the dip switches enabled. Months ago, someone verified this by running Timespy Extreme with a shunt modded Strix (stock BIOS), and not hitting a "TDP Normalized" power limit or power limit flag shown in GPU-Z at all during the run, with the TDP slider maxed out in MSI Afterburner, while all the other shunt modded 3x8 pin cards would trigger momentary TDP Normalized power limit throttling in Timespy Extreme, as well as Superposition 4k Custom+Extreme shaders.

This can be seen in HWinfo64 (Not in GPU-Z however) by looking at TDP Normalized% and comparing it to TDP% right below it.

The Kingpin XOC Bios bypasses all of these limits completely, which is why some users had their 1KW PSUs trip with this bios, while not tripping with a regular 520W Bios, even when both were drawing the same power. (Flash a kingpin 1kw bios on a 3x8 pin card and you will notice that TDP Normalized and TDP% will be close to the same value, while on most other cards, TDP Normalized% will reach 100% much sooner than TDP%).
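Falkentyne's distinction between TDP% and TDP Normalized% can be sketched numerically: total board power against the total limit, versus the single worst rail against its own limit. The rail names and numbers below are made up for illustration and are not actual HWiNFO64 readings or real BIOS limits:

```python
# Hypothetical (reading_w, limit_w) pairs per power rail -- illustrative only.
rails = {
    "NVVDD1 Input Power": (260.0, 275.0),
    "GPU Misc0":          (90.0, 95.0),
    "8-pin #1":           (140.0, 150.0),
    "PCIe slot":          (60.0, 66.0),
}

# TDP%: total draw as a fraction of the summed limits.
tdp_pct = 100.0 * sum(w for w, _ in rails.values()) / sum(lim for _, lim in rails.values())

# TDP Normalized%: the single worst rail relative to its own limit,
# so it can reach 100% (and throttle) before total TDP% does.
tdp_norm_pct = 100.0 * max(w / lim for w, lim in rails.values())

print(round(tdp_pct, 1), round(tdp_norm_pct, 1))  # normalized runs ahead of total
```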


----------



## KedarWolf

Nizzen said:


> Kryonaut Extreme isn't liquid metal.


Is it thin and easy to apply?


----------



## Nizzen

KedarWolf said:


> Is it thin and easy to apply?


Never tried it, but it looks like normal Kryonaut.


----------



## dr/owned

Falkentyne said:


> The Kingpin XOC Bios bypasses all of these limits completely, which is why some users had their 1KW PSUs trip with this bios, while not tripping with a regular 520W Bios, even when both were drawing the same power. (Flash a kingpin 1kw bios on a 3x8 pin card and you will notice that TDP Normalized and TDP% will be close to the same value, while on most other cards, TDP Normalized% will reach 100% much sooner than TDP%).


Anyone found a reason not to daily drive it? So far I've shunted the PCIe slot a conservative amount +50% / 10mOhm and soldered an extra wire to the 10A fuse since that really seems to be the only "limit" that may happen based on the bios @ 90W. With the TUF I don't think I ever went over about 80W even with a 4mOhm shunt, so it's probably only connected to a minor supply anyways that never draws a crazy amount. I'm going to stack a power jumper (I learned that regular zero ohm resistors aren't rated for more than like 1A) on all the 20A fuses and leave the shunts alone.
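For anyone following along, the "+50% / 10mOhm" slot mod is plain parallel-resistor math: stacking a shunt on top of the stock one lowers the resistance the controller measures, so it under-reads power by the same ratio. Assuming a 5 mΩ stock slot shunt (an assumption here ~ check your own PCB), a sketch:

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock = 0.005    # assumed 5 mOhm stock slot shunt
stacked = 0.010  # the 10 mOhm part soldered on top
r_eff = parallel(stock, stacked)   # ~3.33 mOhm seen by the controller
scale = stock / r_eff              # real power = reported power x scale
print(round(scale, 2))             # 1.5 -> the "+50%" headroom
```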


----------



## Falkentyne

dr/owned said:


> Anyone found a reason not to daily drive it? So far I've shunted the PCIe slot a conservative amount +50% / 10mOhm and soldered an extra wire to the 10A fuse since that really seems to be the only "limit" that may happen based on the bios @ 90W. With the TUF I don't think I ever went over about 80W even with a 4mOhm shunt, so it's probably only connected to a minor supply anyways that never draws a crazy amount. I'm going to stack a power jumper (I learned that regular zero ohm resistors aren't rated for more than like 1A) on all the 20A fuses and leave the shunts alone.


If the card properly supports correct power readings on the Kingpin 1kW XOC BIOS and is a 3x8-pin card (or a 2x8-pin card limited to 667W), there's no reason not to daily drive it at 55% (3x8-pin) or 80% (2x8-pin), if your cooling and thermal pad job is up to spec. Besides the atrocious performance per watt you get when you exceed 450W....
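Assuming the Afterburner power slider is a simple percentage of the BIOS limit (a simplification of how the limit tables actually work), those suggested settings land near a stock-class ceiling:

```python
def target_watts(bios_limit_w: float, slider_pct: float) -> float:
    """Simple proportional model of the power-limit slider."""
    return bios_limit_w * slider_pct / 100.0

print(target_watts(1000.0, 55.0))        # 550.0 W on a 3x 8-pin KP XOC flash
print(round(target_watts(667.0, 80.0)))  # 534 W on the 667W-capped 2x 8-pin case
```

Both come out close to the 520W class BIOSes discussed earlier in the thread.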


----------



## KedarWolf

Benni231990 said:


> You got it damn right. I repasted my GPU with Kryonaut Extreme and got much, much better temps ~ an improvement of 7-9°C. Now I'm happy


Is it thin and easy to apply?


----------



## yzonker

KedarWolf said:


> Is it thin and easy to apply?


No, it's thick. But with a little patience you can easily apply it with one of the little plastic spreaders. I've applied it twice now: once on my old Corsair block and then on my new HKV block.


----------



## yzonker

dr/owned said:


> Anyone found a reason not to daily drive it? So far I've shunted the PCIe slot a conservative amount +50% / 10mOhm and soldered an extra wire to the 10A fuse since that really seems to be the only "limit" that may happen based on the bios @ 90W. With the TUF I don't think I ever went over about 80W even with a 4mOhm shunt, so it's probably only connected to a minor supply anyways that never draws a crazy amount. I'm going to stack a power jumper (I learned that regular zero ohm resistors aren't rated for more than like 1A) on all the 20A fuses and leave the shunts alone.


A lot of us, including me, have been daily driving it on 2x8-pin cards for over a year now. No issues at all other than @Falkentyne's comment about my Seasonic GX1000 getting tripped on a couple of benchmarks. The RM1000x that I have now has no issues though.


----------



## Benni231990

KedarWolf said:


> Is it thin and easy to apply?


Yes, it's super easy. I put a fine layer on the GPU and put it back together.


----------



## Panchovix

KedarWolf said:


> Win10BenchISO.zip
> 
> 
> 
> 
> 
> 
> 
> drive.google.com


Sorry for quoting this old comment lol, but this is the latest version of your bench OS, right? Gonna try some benches and wanna make sure I'm using the latest version lol


----------



## KedarWolf

Panchovix said:


> Sorry for quoting this old comment lol, but this is the latest version of your bench OS, right? Gonna try some benches and wanna make sure I'm using the latest version lol


Yes, it is. I need to update the Autoruns in it though, so I can disable even more services.

Edit: I'll probably just add a .cmd file you run as admin that disables them all.


----------



## KedarWolf

Panchovix said:


> Sorry for quoting this old comment lol, but this is the latest version of your bench OS, right? Gonna try some benches and wanna make sure I'm using the latest version lol


Run this .cmd file after the bench O/S is installed. Install all your drivers, benchmarks, chipset drivers, Nvidia or AMD graphics drivers, etc. first before running this .cmd. If you need to install anything after running it, just re-enable the AppXSVC and AxInstSV services.

I haven't tested it as of yet, so after running it, reboot and make sure Windows still runs. If it doesn't, let me know.

Rename the file to remove the .txt at the end.


----------



## Panchovix

KedarWolf said:


> Run this .cmd file after the bench O/S is installed. Install all your drivers, benchmarks, chipset drivers, Nvidia or AMD graphics drivers etc. first before running this .cmd. If you need to install anything after running it, just enable the AppXSVC and the AxInstSV services.
> 
> I haven't tested it as of yet, so after installing it, reboot, and make sure Windows still runs. If it doesn't let me know.
> 
> Rename, remove the .txt at the end.


Many thanks! Gonna try in some hours


----------



## KedarWolf

New Windows 10 Bench ISO link, much improved with new instructions.






Win10BenchISO.zip







drive.google.com


----------



## Panchovix

KedarWolf said:


> New Windows 10 Bench ISO link, much improved with new instructions.
> 
> 
> 
> 
> 
> 
> Win10BenchISO.zip
> 
> 
> 
> 
> 
> 
> 
> drive.google.com


Just wanted to say that all went well. I used it for some benches on my 2080Ti/3070, and the 2080Ti did surprisingly well.

Also, I really like this BenchOS, since I get more points in Cinebench and CPU-related benchmarks when the services are disabled.

Many thanks!


----------



## KedarWolf

Panchovix said:


> Just wanted to say that all went well. I used it for some benches on my 2080Ti/3070, and the 2080Ti did surprisingly well.
> 
> Also, I really like this BenchOS, since I get more points in Cinebench and CPU-related benchmarks when the services are disabled.
> 
> Many thanks!


----------



## yzonker

Panchovix said:


> Just wanted to say that all went well. I used it for some benches on my 2080Ti/3070, and the 2080Ti did surprisingly well.
> 
> Also, I really like this BenchOS, since I get more points in Cinebench and CPU-related benchmarks when the services are disabled.
> 
> Many thanks!


It will even bump your graphics score a little in higher framerate benches like TS.


----------



## Panchovix

yzonker said:


> It will even bump your graphics score a little in higher framerate benches like TS.


Yup, on the 3080 and 2080Ti, the benchOS helped a lot. (The 2080Ti overclocks surprisingly well though, like 25% more performance, vs my 3080 15% shunt modded and all :/)


----------



## dr/owned

Amazon has the 3090 FTW3 for the absolute lowest I've seen since launch:










The 4000 series may be upon us if they're blowing out inventory like this. Looks like a 3080 Ti price drop too.


----------



## GreatestChase

Hey everyone, I have a quick question for those of you with a KPE 3090. I noticed today while working at my PC that the OLED display and LEDs on the water block have started to flicker. None of the other LEDs in the system flicker. The only change that I have made recently is that I swapped in my 1600W EVGA G+ PSU to use my 1300W PSU elsewhere. I've made sure that all the connections are seated well, I've updated the card's firmware in Precision X1, and I've tested that the card itself is still functioning fine. Has anyone else experienced anything similar? And if so, did you find a way to make it stop?

Edit: Well, in true troubleshooting fashion, I decided to just go ahead and swap the PSUs back out with the 1300W G+ and it is no longer producing the flickering. Anyone have any idea why the 1600W would cause that? It otherwise worked fine.


----------



## viper16341

Which bios (with the RAM clocking down) would you guys suggest for a card with 3x 8-pin power connectors? The card is cooled by water... (Zotac Arctic Storm 3090)


----------



## AvengedRobix

Falkentyne said:


> If the card properly supports correct power readings on the Kingpin 1kw XOC Bios and is a 3x8 pin card (or a 2x8 pin card limited to 667W), there's no reason not to daily drive it at 55% (3x8 pin) or 80% (2x8 pin), if your cooling and thermal pad job is up to spec. Besides the atrocious performance per watt you get when you exceed 450W....


Is 80% good for daily use on a watercooled card?


----------



## dr/owned

It's not particularly interesting nowadays doing a build log, but here's some proper FTW3 fuse bypassing and a 10mOhm shunt on the PCIe slot (putting the 1000W bios limit at ~150W there, which is effectively unlimited with the power balancing):
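The shunt-stacking math behind figures like "+50% / 10mOhm" can be sketched in a few lines. Note the 5 mOhm stock shunt value below is my assumption of a typical value, not something stated in the post:

```python
# Sketch of shunt-stack arithmetic: stacking a resistor in parallel with the
# stock current-sense shunt lowers the effective resistance, so the card
# under-reads power and the bios limit corresponds to more real wattage.
def parallel(r1_mohm, r2_mohm):
    """Equivalent resistance of two resistors in parallel."""
    return r1_mohm * r2_mohm / (r1_mohm + r2_mohm)

STOCK = 5.0      # assumed stock PCIe-slot shunt, milliohms
STACKED = 10.0   # stacked shunt from the post, milliohms

effective = parallel(STOCK, STACKED)   # ~3.33 mOhm
scale = STOCK / effective              # real watts per reported watt

print(round(scale, 3))        # -> 1.5, i.e. +50% real power per reported watt
print(round(100 * scale))     # -> 150, a 100W reported limit becomes ~150W actual
```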










These are the jumpers that I used. They're just a piece of tinned copper meant for actual power transmission:

TLRZ2BTTD
RES 0 OHM JUMPER 1206

And kapton taped up the socket so liquid metal is WAY easier to clean up later:









Now I wait for China to ship my Bykski block. I'm planning on using 3 different sets of thermal pads based on the measurements I've taken. Overall the FTW3's critical measurements (like mem and vrm) are the same as the TUF so the mem will be 1.25mm (20w/mK, firm), the vrm will be 1.5mm (13w/mK, average), the back of the socket will be 2.5mm (12w/mK, soft). I don't think I'll bother with the inductors but they were the only major difference (TUF = 4.63mm, FTW3 = 3.75mm). It'll be a whole procedure of clamping and heating to ensure good contact. The backplate doesn't have a ton of screws for mounting so it's going to need external heat and pressure to make the sandwich.


----------



## gamerMwM

Falkentyne said:


> If the card properly supports correct power readings on the Kingpin 1kw XOC Bios and is a 3x8 pin card (or a 2x8 pin card limited to 667W), *there's no reason not to daily drive it at 55%* (3x8 pin) or 80% (2x8 pin), if your cooling and thermal pad job is up to spec. Besides the atrocious performance per watt you get when you exceed 450W....


So you would need to do both a 55% power limit _AND_ -250 on memory for a daily driver (watercooled Strix OC 3090)?
I might try that, as I've been daily using the KP 520 bios for months and kinda want to try something else for a while. 

I did use the 1000-watt bios for benchmarking and had more OC headroom with it. I was happy with the results, so I stopped benchmarking and have been using more conservative OCs for gaming ever since. But as I said, I am thinking of playing with another daily driver just for the heck of it.









I scored 15 544 in Port Royal


AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## kx11

random benchmark on default ROG bios









I scored 15 049 in Port Royal


Intel Core i9-12900KF Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com


----------



## yzonker

Got a spiffy (sorta, keep reading) new 5800x3D. It does kick butt in very high frame rate games/benches (see below), but I'm pretty sure it's 50-100 pts slower in Port Royal. It does take a clock speed and latency hit, so that may explain it unfortunately.

5800x









5800x3D


----------



## long2905

put some bclk oc on it?


----------



## yzonker

long2905 said:


> put some bclk oc on it?


Even 101 bclk didn't post. I'll have to dig in to it to see if there's any way to improve that.


----------



## yzonker

I realize this isn't exactly 3090 related (except that's what I'm running), but there may be some high fps gamers floating around in here. 

Ok, 102 bclk did work (the 3090 won't show video at 103), but I had to drop to 3633 fclk to get it to post. Still slightly faster despite the drop in fclk.

PR is still a little short though, but I think it's really ~50 pts, as I couldn't get back to the higher score when I put my 5800x back in, for whatever reason.









Result







www.3dmark.com


----------



## KedarWolf

yzonker said:


> I realize this isn't exactly 3090 related (except that's what I'm running), but there may be some high fps gamers floating around in here.
> 
> Ok, 102 bclk did work (3090 won't show video at 103), but had to drop to 3633 fclk to get it to post. Still slightly faster despite the drop in fclk.
> 
> PR is still a little short though, but I think it's really ~50pts as I couldn't get back to the higher score when I put my 5800x back in for whatever reason,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Result
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2557822


CO Tuner for the 5800X3D is in the post below, but it needs to be applied on every boot. Peeps are getting -30 on all cores, CoreCycler stable.









CoreCycler - tool for testing Curve Optimizer settings


@ArchStanton I'm gonna to find the baseline first then test them all one by one then :D I always thought that if it passed all the extreme stress tests then the light ones should be considered done. Better 1 thread with 2 the boost clock will go down So if it's passed then should I retest...




www.overclock.net










Debug.7z







drive.google.com


----------



## yzonker

KedarWolf said:


> CO Tuner for the 5800X3D in the below post. But it needs to be applied on every boot. Peeps are getting -30 on all cores Core Cycler stable.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> CoreCycler - tool for testing Curve Optimizer settings
> 
> 
> @ArchStanton I'm gonna to find the baseline first then test them all one by one then :D I always thought that if it passed all the extreme stress tests then the light ones should be considered done. Better 1 thread with 2 the boost clock will go down So if it's passed then should I retest...
> 
> 
> 
> 
> www.overclock.net
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Debug.7z
> 
> 
> 
> 
> 
> 
> 
> drive.google.com


Yea, all of the results I have shown here and in the 5800X3D thread are with CO set to -20. Going lower only appeared to help in all-core benches like Cinebench. It passed CoreCycler at -20, but I haven't tested lower.


----------



## Mad1137

Guys, what is this PWR src voltage? On my card it shows 12.7. Should I worry? And how can I fix it (PSU is an Asus ROG Thor 1200)? Please help.


----------



## yzonker

Mad1137 said:


> Guys, what is this PWR src voltage? On my card it shows 12.7. Should I worry? And how can I fix it (PSU is an Asus ROG Thor 1200)? Please help.


That is above the high side of the spec, I think (±5%), which would be 11.4V to 12.6V. Might be worth shooting an email to Asus at least. My RM1000x shows 12.0V exactly (not all are that close though).


----------



## J7SC

To Asus Strix 3090 OC owners --- Like many of you, I run the stock vBios on one bios and the KPE520 on the other, both with r_BAR. The updated Asus Strix OC vBios with r_BAR was 'version 3' on the ROG site back then, but there's also a slightly newer 'version 4'. Does anybody know how v4 behaves compared to v3, and what might be different?


----------



## KedarWolf

J7SC said:


> To Asus Strix 3090 OC owners --- Like many of you, I run the stock vBios on one bios and the KPE520 on the other, both with r_BAR. The updated Asus Strix OC vBios with r_BAR was 'version 3' on the ROG site back then, but there's also a slightly younger 'version 4'. Does anybody know how v4 behaves compared to v3, and what might be different ?







I tried V4; it sucked. For gaming, the MSI SUPRIM bios is the way to go. If you have water cooling, the EVGA XOC 1000W with the power limit at 50-60% is okay. 

But the SUPRIM in the Endwalker benchmark did just as well as the XOC.


----------



## J7SC

KedarWolf said:


> I tried V4; it sucked. For gaming, the MSI SUPRIM bios is the way to go. If you have water cooling, the EVGA XOC 1000W with the power limit at 50-60% is okay.
> 
> But the SUPRIM in the Endwalker benchmark did just as well as the XOC.


Thanks! Does the SuprimX bios on an Asus Strix 3090 OC read out power correctly, i.e. in HWiNFO or GPU-Z?


----------



## KedarWolf

J7SC said:


> Thanks ! Does the SuprimX on a Asus Strix 3090 OC read out power correctly, ie. in HWInfo or GPUz ?


No, the third 8-pin doesn't register on the Strix, but you can estimate your power draw by averaging the readings on pins 1 and 2 and adding that average to their sum. For example, if you get 130W on pin 1 and 120W on pin 2, the average is 125W, so the total is roughly 130W + 120W + 125W = 375W.
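That estimate boils down to a one-line formula; here it is as a tiny sketch (the function name is mine, not anything from a monitoring tool):

```python
# Estimate total 8-pin draw on a Strix where the third connector doesn't
# report: approximate pin 3 as the average of the two pins that do report.
def strix_power_estimate(pin1_w, pin2_w):
    pin3_est = (pin1_w + pin2_w) / 2
    return pin1_w + pin2_w + pin3_est

print(strix_power_estimate(130, 120))   # -> 375.0, matching the example above
```

Note this covers the 8-pin connectors only; PCIe-slot draw would be on top of it.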


----------



## J7SC

KedarWolf said:


> No, third 8-pin doesn't register on the Strix, but you can figure out your power draw by averaging what you get on pins 1 and 2 and adding it. like if you get 130 on pin 1, 120 on pin 2, just add them together and add 125 to it, it'll be close. Like it would be 130w+120w+125w=375w


Thanks, so it's like the power calc when using the 520 KPE bios.


----------



## GRABibus

KedarWolf said:


> I tried V4; it sucked. For gaming, the MSI SUPRIM bios is the way to go. If you have water cooling, the EVGA XOC 1000W with the power limit at 50-60% is okay.
> 
> But the SUPRIM in the Endwalker benchmark did just as well as the XOC.


Yep, even with my former Strix on air, the MSI Suprim X bios was the best one for benching and gaming, despite having a lower PL than the others.


----------



## J7SC

...I wonder if that may indicate tighter VRM and/or PLL ?


----------



## yzonker

I posted this over in the 5800X3D thread. Posting here to see if anyone has any comments/suggestions regarding not being able to overclock the VRAM as high with the 5800X3D. I've swapped between my 5800x and 5800X3D several times now to confirm.









5800X3D Owners


1967 tested again overshoots on FCLK effective to 2100+ And logically WHEA's Its mindered if you boot up with higher VDD18 on old agesa LN2 mode on asus forces 2.1v on VDD18 I wish they where not from PGS fab Tried also to run 6250Mhz ^^' Bios shows it, but its fake ~ OC mode is in disabled...




www.overclock.net


----------



## darkeclypse

Wow.. huge thread.

Anyways, I've got a newer EVGA 3090 FTW3 Ultra, Rev 1.0. Even using the XOC bios I seem to max out at 450-460 watts. The three 8-pin connectors are unbalanced, at like 150 watts each for 1 and 2, 90 watts on the 3rd, and around 60+ on the PCIe connector.

I seem to hit the power limit way early, after 420 watts.

So I've been messing around with undervolting just to try to keep me out of the power/performance limit, so like around 2160MHz at around 1.031V.

I'm fully water cooled using EK front and back water blocks. Temps usually hover around 32-35C under load for benches and games.

Mem temps are low, in the 20-30s.

I'm using a wall of rads (2x EK XE 480, 2x 360 rads with push/pull fans at full blast, 2x EK dual D5 serial pumps). 21-22C water temps. Room is usually 66-70F.

What bios should I use? Can I use the Kingpin 1000-watt bios? If so, what settings for safe usage? I mainly use Afterburner.

I scored 14,982 in Port Royal and man, does it burn being so close to 15k.


----------



## darkeclypse

I guess No one is using the kp 1000watt bios.


----------



## J7SC

darkeclypse said:


> Wow.. huge thread.
> 
> Anyways, I've got a newer EVGA 3090 FTW3 Ultra, Rev 1.0. Even using the XOC bios I seem to max out at 450-460 watts. The three 8-pin connectors are unbalanced, at like 150 watts each for 1 and 2, 90 watts on the 3rd, and around 60+ on the PCIe connector.
> 
> I seem to hit the power limit way early, after 420 watts.
> 
> So I've been messing around with undervolting just to try to keep me out of the power/performance limit, so like around 2160MHz at around 1.031V.
> 
> I'm fully water cooled using EK front and back water blocks. Temps usually hover around 32-35C under load for benches and games.
> 
> Mem temps are low, in the 20-30s.
> 
> I'm using a wall of rads (2x EK XE 480, 2x 360 rads with push/pull fans at full blast, 2x EK dual D5 serial pumps). 21-22C water temps. Room is usually 66-70F.
> 
> What bios should I use? Can I use the Kingpin 1000-watt bios? If so, what settings for safe usage? I mainly use Afterburner.
> 
> I scored 14,982 in Port Royal and man, does it burn being so close to 15k.


...I don't think you need the KPE 1000W to break 15k in Port Royal; below is a w-cooled 3090 Strix w/ KPE 520W and using MSI AB sliders only


----------



## Nizzen

darkeclypse said:


> I guess No one is using the kp 1000watt bios.


My daughter has used it since day 1: the KP 1000W rebar bios on a watercooled 3090 Strix OC. It ain't dead yet.


----------



## darkeclypse

J7SC said:


> ...I don't think you need the KPE 1000W to break 15k in Port Royal; below is a w-cooled 3090 Strix w/ KPE 520W and using MSI AB sliders only
> 
> View attachment 2559318


No, but it's gonna need a shunt mod at least!

The EVGA 30xx series has a bad design flaw I guess, and a bad power limit.

Prob won't matter what bios I use.. no more power to be had.

Yeah, I've heard about other brands benefiting from the EVGA XOC bios haha.


----------



## yzonker

darkeclypse said:


> No but it's gona be a shunt mod atleast!
> 
> Evga 30xx series have a bad design flaw I guess and bad power limit.
> 
> Prob won't matter what bios I use.. no more power to be had.
> 
> Yeah I've heard about other brands benefiting from evga xoc bios haha.


The KP 1kw bios has all the limits raised. I don't think I've ever seen anyone report it not increasing the power limit. I've been daily driving that bios on my Zotac 3090 for over a year now. Pretty much never hit the power limit (~660w for 2x8pin).


----------



## GreatestChase

darkeclypse said:


> No but it's gona be a shunt mod atleast!
> 
> Evga 30xx series have a bad design flaw I guess and bad power limit.
> 
> Prob won't matter what bios I use.. no more power to be had.
> 
> Yeah I've heard about other brands benefiting from evga xoc bios haha.


You're probably running into voltage limits. I know that with my KP card, if I don't make any voltage adjustments I'll typically be in the 420-460 W range.


----------



## darkeclypse

GreatestChase said:


> You're probably running into voltage limits. I know that with my KP card, if I don't make any voltage adjustments I'll typically be in the 420-460 W range.


No, I can lock a voltage in the curve in Afterburner.

I get the power limit warning basically all the time after 420 watts.

It's rough because my card is using the dual EK blocks and runs around 32-35C tops, with mem temps the same or lower.

I've been needing to undervolt the card and run it around 2160MHz just to get a good score in 3DMark benches.

8-pins 1 and 2 can hit 150 watts each, while the 3rd 8-pin will top out at under 100 watts and the PCIe will hover around 60 watts.

I could do 1.1V at 2100MHz and it will just say pwr limit and downclock the card to around 1950MHz.

So I kind of have doubts just slapping the Kingpin 1000-watt bios on will help.

Though if I go for it, what precautions do I need to take? Set the power limit to 50-60% to make sure I don't trip a fuse on the card with too much wattage hitting the card?

I hear the EVGA cards only have a 10-amp fuse while others have a 20-amp fuse.


----------



## GreatestChase

darkeclypse said:


> No I can lock a voltage in the curve for afterburner.
> 
> I get thet power limit warning basically all the time after 420watts.
> 
> It's ruff cause my card is using the dual ek blocks and my card runs around 32-35c tops and mem temp the same or lower.
> 
> I've been needing to under volt the card and run it around 2160mhz just to get a good score in 3dmark benches.
> 
> 8pin 1 and 2 can hit 150watts ea while the 3rd 8pin will top out at under 100watts and the pcie will hover around 60watts.
> 
> I could do 1.1V at 2100MHz and it will just say pwr limit and downclock the card to around 1950MHz.
> 
> So I kind of have doubts just slaping the kingpin 1000watt bios will help.
> 
> Though if I go for it, what precautions do I need to take? Set the power limit to 50-60% to make sure I don't trip a fuse on the card with too much wattage hitting the card?
> 
> I hear the evga cards only have a 10amp fuse while others have a 20amp fuse.


Use at your own risk, but you should run into voltage limits long before you run into power issues with the 1 kw bios since you're limited to the factory voltages. Also, my KP card won't run 2160 MHz at stock voltages. Best I can do without changing voltages, on room air, and 24-25 C water temps is 2130 MHz. Using the 520 W bios I'm drawing at max 460 W during that run.


----------



## darkeclypse

GreatestChase said:


> Use at your own risk, but you should run into voltage limits long before you run into power issues with the 1 kw bios since you're limited to the factory voltages. Also, my KP card won't run 2160 MHz at stock voltages. Best I can do without changing voltages, on room air, and 24-25 C water temps is 2130 MHz. Using the 520 W bios I'm drawing at max 460 W during that run.


Hmmm.. I'll have to grab my notes and share what mine is doing.

Timespy extreme

2160MHz at 1.043V, 1.031V, 1.025V, and even as low as 1.018V.

Managed 2175MHz at 1.043V.

Can do 2100MHz at 0.981V.

Also ran 2190MHz at 1.056V.
2205MHz at 1.075V. Also at 1.068V.

Port Royal run at 2160MHz at 1.031V.

Best score at 2145MHz at 1.018V.. got 14922.

Another run at 2100MHz and +1000 mem at 0.993V got 14850.

The same but with +1300 mem got 14925.


----------



## darkeclypse

Just tried 1.093V at 2205MHz on TSE and it instantly hits the power limit and drops to 1830MHz.

Max power is at 457 watts.

PCIe at 66 watts.
8-pin 1 at 153 watts.
Pin 2 at 148 watts.
Pin 3 at 102 watts.

Scored a massive 10540 haha. GS of 10659. :/


----------



## GreatestChase

darkeclypse said:


> Just tried 1.093V at 2205MHz on TSE and it instantly hits the power limit and drops to 1830MHz.
> 
> Max power is at 457 watts.
> 
> PCIe at 66 watts.
> 8-pin 1 at 153 watts.
> Pin 2 at 148 watts.
> Pin 3 at 102 watts.


Yeah, I looked at mine when I was doing my run and they were all within 5-10 W of each other. I'll have to defer to more experienced individuals, but there may be a problem with the card at the hardware level. Seems a little out of my depth at this point.


----------



## yzonker

darkeclypse said:


> Just tried 1.093V at 2205MHz on TSE and it instantly hits the power limit and drops to 1830MHz.
> 
> Max power is at 457watts.
> 
> Pcie at 66watts
> 8pin 1 at 153watts.
> Pin 2 at 148watts.
> Pin 3 at 102watts.
> 
> Scored a massive 10540 haha. Gs of 10659. :/


I may have missed it, but which card do you have?


----------



## darkeclypse

yzonker said:


> I may have missed it, but which card do you have?


EVGA 3090 FTW3 Ultra, Rev 1.0.


----------



## yzonker

darkeclypse said:


> EVGA 3090 FTW3 Ultra, Rev 1.0.


I think that card has the same fuses as my 3080 Ti FTW3: 20 amps on the 8-pins and 8 amps on PCIe. 60% is probably ok, but it could be problematic if the power readings are incorrect (can't remember if they are accurate or not).
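For a rough sense of what those fuse ratings mean in watts on the 12V rail, this is just Ohm's-law arithmetic (ignoring any fuse derating or tolerance):

```python
# Watt headroom implied by a fuse's current rating on the 12V rail:
# watts = amps x volts.
RAIL_V = 12.0

def fuse_limit_w(rated_amps):
    return rated_amps * RAIL_V

print(fuse_limit_w(20))   # 20A 8-pin fuse  -> 240.0 W per connector
print(fuse_limit_w(8))    # 8A PCIe-slot fuse -> 96.0 W
```

That 240W per 8-pin is why a heavily raised bios limit split across the connectors can still stay under the fuse ratings on a 3x8-pin card.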


----------



## J7SC

darkeclypse said:


> EVGA 3090 FTW3 Ultra, Rev 1.0.


...it's all a bit nebulous / far back, but I recall there was either a bios update and/or an exchange program offered by EVGA for certain serial runs of the FTW3 v1 (should be in this thread, early last year). That said, below is my 3090 Strix on the stock bios / on air when I first got it, showing some variance between the three 8-pin power inputs as well (still hit over 15k PR on air, though). AFAIK, the true KingPin and Galax HoF 3090s have much more evenly distributed power between the 3x 8-pin, but owners of those could confirm.


----------



## Falkentyne

darkeclypse said:


> EVGA 3090 FTW3 Ultra, Rev 1.0.


You need to use the KP 1kW bios to avoid this problem with your card.
The EVGA 3090 Ti had the same problem upon release, with 8-pin #3 not scaling properly like 8-pin #1 and #2, causing a premature power limit (when 8-pin #1 or #2 reaches 150W), and this was fixed recently in a new EVGA bios for the 3090 Ti.

I don't know if their older 3090 rev 1.0 FTW3s have such a bios or not.


----------



## J7SC

Falkentyne said:


> You need to use the KP 1kw bios to avoid this problem with your card.
> The eVGA 3090 Ti had the same problems upon release with the 8 pin #3 not scaling properly like 8 pin #1 and #2, causing a premature power limit (when 8 pin #1 or 8 pin #2 reach 150W), and this was fixed recently on a new eVGA bios for the 3090 Ti.
> 
> I don't know if their older 3090 rev 1.0 FTW3's have such a bios or not.


...wondering about your comment on the 3090 Ti ...I haven't done any reading-up about the 3090 Ti at all, but thought they all came with that new 'single' power connector (even if fed by a 3x 8-pin dongle upstream). Does its internal bios breakout still split it into 3x power channels downstream (plus the PCIe slot)? Secondly, and this may have already been addressed, would a 3090 Ti bios work on a 3090? I kind of doubt it given design differences, but stranger things have happened.


----------



## dr/owned

You kids wanna see a dead body?










Exercising the 1000W BIOS (had to run Furmark at 4K just to get 100% GPU usage):










The PCIe slot is shunted 150% so it was really closer to 125W there. Total power draw from the wall: *1331W*


----------



## darkeclypse

I have an Extreme Z690 board with the extra 6-pin connector for the PCIe slot, so no worries there.

Haha.. I don't want it to attempt 1000 watts though. It would put a smile on my face if I could use the card normally and not hit the power limit the whole time while only pulling 420-430 watts.. it makes me feel like I'm using a lesser card like a 3080.

I'll be more careful next round.. might have to jump from EVGA after this.. been using them for over a decade really.


----------



## Falkentyne

J7SC said:


> ...wondering about your comment on the 3090 Ti ...I haven't done any reading-up about the 3090 Ti at all, but thought they all came with that new 'single' power connector (even if fed by a 3x 8-pin dongle upstream). Does its internal bios breakout still split it into 3x power channels downstream (plus the PCIe slot)? Secondly, and this may have already been addressed, would a 3090 Ti bios work on a 3090? I kind of doubt it given design differences, but stranger things have happened.


Yes and no.
The 3090 Ti FE is still a Microfit 3.0 connector. The other four sense pins are disabled as they are not used on Ampere, and it still feeds from 2x 8-pins, but I believe the adapter should be 16 AWG.
The official Seasonic Microfit 3.0 cable (16 AWG) would also work instead of the Nvidia adapter. No, you can't use a 3090 Ti bios on a 3090 FTW3, unless you want a brick.

Seasonic said that their Microfit 3.0 cable will NOT work on 4090 cards (I'm not sure if that's due to pin layout issues or the sense pins missing from the cable limiting the card to 150W).


----------



## Falkentyne

dr/owned said:


> You kids wanna see a dead body?
> 
> View attachment 2559478
> 
> 
> Exercising the 1000W BIOS (had to run Furmark at 4K just to get 100% GPU usage):
> 
> View attachment 2559480
> 
> 
> The PCIe slot is shunted 150% so it was really closer to 125W there. Total power draw from the wall: *1331W*


What's up with the second dead body (GPU Hotspot temperature delta?)


----------



## dr/owned

Falkentyne said:


> What's up with the second dead body (GPU Hotspot temperature delta?)


I'm guessing just from the scaling of the extreme power draw. At Port Royal ~520W it's about 20C delta.


----------



## Jpmboy

dr/owned said:


> I'm guessing just from the scaling of the extreme power draw. At Port Royal ~520W it's about 20C delta.


you been running that card for any length of time with the hotspot in the upper 90s?


----------



## Falkentyne

dr/owned said:


> I'm guessing just from the scaling of the extreme power draw. At Port Royal ~520W it's about 20C delta.


Check the contact pressure of the core to the heatsink, or the contact pressure and integrity of the VRM (mosfet) thermal pads. Both the GPU core (various sections) and the mosfets (in undocumented sensors) report the highest of their individual readings as the hotspot temp, which has a certain minimum delta floor (anywhere between 6C and 10C over the actual average core temp). VRAM itself only reports the memory junction (this is the "VRAM hotspot temp").
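As a rough illustration of that reporting logic: the hotspot is the max over many per-section sensors, clamped to a minimum delta above the average core temp. The function below is an illustrative sketch of that behavior, not the actual firmware:

```python
# Illustrative model of hotspot reporting: the highest per-section sensor
# reading wins, but never less than core temp plus a minimum delta floor.
def hotspot(core_temp_c, section_temps_c, min_delta_c=6.0):
    return max(max(section_temps_c), core_temp_c + min_delta_c)

print(hotspot(40.0, [47.0, 52.0, 49.0]))   # -> 52.0 (hottest section dominates)
print(hotspot(40.0, [41.0, 42.0]))         # -> 46.0 (the floor kicks in)
```

A large hotspot delta, then, usually means one section sensor is running far hotter than the rest, which is why poor contact pressure in one spot shows up there first.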


----------



## long2905

yzonker said:


> The KP 1kw bios has all the limits raised. I don't think I've ever seen anyone report it not increasing the power limit. I've been daily driving that bios on my Zotac 3090 for over a year now. Pretty much never hit the power limit (~660w for 2x8pin).


Do you know what's the sustained power running through the two 8-pin ports on the card? Have you unplugged them to check for any possible damage over the long term?


----------



## dr/owned

Falkentyne said:


> Check the contact pressure of the core to the heatsink, or the contact pressure and integrity of the VRM (mosfet) thermal pads. Both the GPU Core (various sections) and the mosfets (in undocumented sensors) report the highest of any of their individual readings as the hotspot temp, which has a certain minimum delta floor (can be anywhere between 6C to 10C over the actual average core temp). VRAM itself only reports to memory junction (This is the "VRAM hotspot temp").


No doubt it would be the core. Look at the rate the curve changes when jumping from 980W -> 100W. The VRAM is what I would expect out of a thermal pad... or the VRM... where it's a much more gradual warm-up and decline. The hotspot temperature tracking the GPU temperature indicates it's somewhere in the core:










It's Conductonaut though, so it would be a hassle messing with it, and I'm going to leave it alone. A 15C delta in realistic workloads is good enough for me.


In other testing, it looks like I'm probably very close to maxing out the VRM stages on the FTW3. It's 50A per stage and 22 stages total, so the theoretical limit is ~1150W. I've seen 984W reported, about 1025W actual, and 1375W from the wall. Even though I have the VRM heatsinked to both the frontside and backside waterblocks, the backplate near the VRM still gets disco-inferno hot to the touch. I already knew EVGA cheaped out this gen and made the Kingpin what the FTW3 should have been. The Strix is what the Kingpin should be, but the FTW3 is significantly cheaper.
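The back-of-envelope VRM ceiling from those figures works out as below, assuming the stages share current evenly and a core voltage near 1.05V (the voltage is my assumption; it is what makes the ~1150W figure come out):

```python
# Theoretical VRM output ceiling: total current = stages x per-stage rating,
# power = current x core voltage.
STAGES = 22
AMPS_PER_STAGE = 50
VCORE_V = 1.05   # assumed core voltage

max_current_a = STAGES * AMPS_PER_STAGE    # 1100 A
max_power_w = max_current_a * VCORE_V      # ~1150 W

print(max_current_a, round(max_power_w))   # -> 1100 1155
```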


Here's a build log:

Amount of Conductonaut applied on both die and waterblock. I'm not particularly impressed by this Bykski block, which deviates from my previous experience. They didn't seem to bother using many of the possible mounting holes (in total they used 5 around the die and 6 around the periphery through the backplate), and there were surface machining blemishes over the die area. They also didn't bother counterboring the backplate where the screws for the front waterblock sit, to give more clearance.















Thermal pads everywhere. 1.25mm Kritical 20W pads on the mem (1mm void being filled), 1.5mm Thermalright 12W pads on the VRM (1.25mm void being filled), 2.5mm Gelid Ultimate on the back of the VRM (2mm void being filled). I depth-gauged the heights of every component and the waterblock, so when I say "void being filled" that's the gap based on the Z-height of the component and of the waterblock. I took this picture before I realized I also needed to use lower-profile screws and remove the white RGB block on the end of the card... 2mm of void is not much to work with, and the RGB thing was like 2.2mm thick. I used my own Torx screws from McMaster because Phillips sucks.





















Hydrochloric acid cleaning of the waterblock (about 15% concentration). The camera isn't out of focus... the fog is all the random **** (probably coolant) on the block from the factory. I erred slightly in not using phosphoric acid, which doesn't affect nickel at all. HCl has only a mild effect though, and I didn't leave it in long enough to attack it. HCl doesn't affect copper at all, but it would murder brass (zinc).









I baked the card in the oven at 75C to relax the thermal pads and then tightened the screws more. All the screws were torqued to about 4.4 in-lbs, which is about the limit for M2-thread screws.








Oh, and on the TUF block that I've been running for 18 months, it looks like I wore through the nickel. I don't think I acid-cleaned this one, and interestingly it wore through in places where there were fingerprints from the dude in China (among other places).










This is Mayhem X1 Clear, but I don't ever do complete drain-and-fills, so who knows if it's a nickel issue or the fluid is just acidic or something after running this long.


----------



## yzonker

long2905 said:


> do you know whats the sustained power running through the 2 8 pin ports on the card. have you unplugged them to check for any possible damage over long term?


No, no issues with the connectors, although I don't generally sustain 600W+ with the card. I limit it to around 500W or less for gaming, which puts the 8-pins around 200-225W. The only time I let it hit 600W or more is for benchmarks.


----------



## J7SC

dr/owned said:


> No doubt it would be the core. Look at the rate the curve changes when jumping from 980W -> 100W. The VRAM is what I would expect out of a thermal pad... or the VRM... where there's a much more gradual warm-up and decline. The hotspot temperature tracking the GPU temperature indicates it's somewhere in the core:
> 
> View attachment 2559515
> 
> 
> It's Conductonaut though so it would be a hassle messing with it so I'm going to leave it alone. 15C delta in realistic workloads is good enough for me.
> 
> 
> In other testing, it looks like I'm probably very close to maxing out the VRM stages on the FTW3. It's 50A per stage and 22 stages total, so the theoretical limit is about 1150W. I've seen 984W reported, about 1025W actual, and 1375W from the wall. Even though I have the VRM heatsinked to both the frontside and backside waterblocks, the backplate near the VRM still gets disco-inferno hot to the touch. I already knew EVGA cheaped out this gen and made the Kingpin what the FTW3 should have been. The Strix is what the Kingpin should be, but the FTW3 is significantly cheaper.
> 
> 
> Here's a build log:
> 
> The amount of Conductonaut applied on both die and waterblock. I'm not particularly impressed by this Bykski block, which deviates from my previous experience. They didn't seem to bother using many of the possible mounting holes (in total they used 5 around the die, and 6 around the periphery through the backplate) and there were surface machining blemishes on the die. They also didn't bother counterboring the backplate where the screws for the front waterblock are, to give more clearance.
> 
> View attachment 2559516
> View attachment 2559517
> 
> 
> Thermal pads everywhere. 1.25mm Kritical 20W pads on the mem (1mm void being filled), 1.5mm Thermalright 12W pads on the VRM (1.25mm void being filled), 2.5mm Gelid Ultimate on the back of the VRM (2mm void being filled). I depth-gauged the heights of every component and the waterblock, so when I say "void being filled" that's the gap based on the Z-height of the component and of the waterblock. I took this picture before I realized I also needed to use lower-profile screws and remove the white RGB block on the end of the card... 2mm of void is not much to work with, and the RGB thing was like 2.2mm thick. I used my own Torx screws from McMaster because Phillips sucks.
> 
> View attachment 2559518
> View attachment 2559518
> View attachment 2559519
> 
> 
> Hydrochloric acid cleaning of the waterblock (about 15% concentration). The camera isn't out of focus... the fog is all the random **** (probably coolant) on the block from the factory. I erred slightly in not using phosphoric acid, which doesn't affect nickel at all. HCl has only a mild effect though, and I didn't leave it in long enough to attack it. HCl doesn't affect copper at all, but it would murder brass (zinc).
> 
> View attachment 2559522
> 
> 
> I baked the card in the oven at 75C to relax the thermal pads and then tightened the screws more. All the screws were torqued to about 4.4 in-lbs, which is about the limit for M2-thread screws.
> View attachment 2559521
> 
> 
> Oh, and on the TUF block that I've been running for 18 months, it looks like I wore through the nickel. I don't think I acid-cleaned this one, and interestingly it wore through in places where there were fingerprints from the dude in China (among other places).
> 
> View attachment 2559524
> 
> 
> This is Mayhem X1 Clear, but I don't ever do complete drain-and-fills, so who knows if it's a nickel issue or the fluid is just acidic or something after running this long.


...fingerprints from manufacturing can really be annoying... the Strix 3090 block at lower right, which is otherwise a great performer re. temps (below w/ the stock Strix bios rather than the KPE 520 setting), had fingerprints _on the inside of the acrylic cover_... a magnet for little air bubbles.


----------



## darkeclypse

What's the process to flash an EVGA 3090 FTW3 Ultra to the KP 520 or 1000-watt bios??

Guessing it takes a special nvflash.exe to flash a totally different card's bios?


----------



## yzonker

darkeclypse said:


> What's the process to flash an EVGA 3090 FTW3 Ultra to the KP 520 or 1000-watt bios??
> 
> Guessing it takes a special nvflash.exe to flash a totally different card's bios?


Yea, just grab it from Techpowerup.


----------



## GRABibus

darkeclypse said:


> I guess no one is using the KP 1000-watt bios.


I use it daily on my Kingpin hybrid with the stock cooler, PL capped at 60%.


----------



## GRABibus

J7SC said:


> ...I don't think you need the KPE 1000W to break 15k in Port Royal; below is a w-cooled 3090 Strix w/ KPE 520W and using MSI AB sliders only
> 
> View attachment 2559318


Your Strix is a very good sample.
You could break 15k on the stock Asus bios and on air…
Who else could do that? 😂


----------



## KingUniverse

I just got two EVGA RTX 3090 FTW3 Ultra Gaming models, one is REV 1.0, the other is REV 1.2.1

I've heard conflicting stories about shunt mods vs. flashing the 1000w KingPin bios. I've heard the shunt mods don't work, I've heard the BIOS doesn't work, I've also heard the power load levelling is off between the PCIe and 8pins. I've tried to research as much as I could about the topic, but still at a loss for what to do with my cards. They will be in SLI and on a 1600w PSU as well as being under water (EK-Quantum Vector2 FTW3 RTX 3080/90 ABP Set - Nickel + Plexi – EK Webshop (ekwb.com) )

Can I have suggestions on whether to shunt mod or flash before I put these water blocks on (They should be here by May 20th)

Thanks in advance!


----------



## Panchovix

KingUniverse said:


> I just got two EVGA RTX 3090 Ultra Gaming models, one is REV 1.0, the other is REV 1.2.1
> 
> I've heard conflicting stories about shunt mods vs. flashing the 1000w KingPin bios. I've heard the shunt mods don't work, I've heard the BIOS doesn't work, I've also heard the power load levelling is off between the PCIe and 8pins. I've tried to research as much as I could about the topic, but still at a loss for what to do with my cards. They will be in SLI and on a 1600w PSU as well as being under water (EK-Quantum Vector2 FTW3 RTX 3080/90 ABP Set - Nickel + Plexi – EK Webshop (ekwb.com) )
> 
> Can I have suggestions on whether to shunt mod or flash before I put these water blocks on (They should be here by May 20th)
> 
> Thanks in advance!


It's a bummer how the FTW3 cards are kinda hard to shunt mod because the fuses are rated pretty low, and there are balance issues because of the hardware, so XOC VBIOSes are also kinda complicated. (It seems EVGA uses a unique power delivery controller, unlike all other cards.)
I hope someone who successfully managed to shunt mod the card can help you man, 2x 3090 in SLI sounds amazing!


----------



## KingUniverse

If nothing else, the active backplate will help cool the VRAM while mining (and to some extent during gaming), but I would really love to stretch these cards' legs, even if it's just a bios flash 🤷


----------



## darkeclypse

Finally got my 500 watts with my 2-month-old EVGA 3090 FTW3 Ultra!!

The XOC bios that claims 500 watts never did more than 460 watts for me. The power limit kicked in early, after 424 watts, downclocking the card.

Grew a pair, took baby steps, and tried the Kingpin 3090 520-watt bios..

Now I get more than 500 watts haha.. 516 watts max so far just playing around. So awesome that a bios can fix it, as I was thinking I needed to hard mod the card to fix the issue.

So far my power rails look like this:

PCIe = 70 watts
Pin 1 = 170 watts
Pin 2 = 166 watts
Pin 3 = 107 watts

Finally I can have some fun overclocking and benching.


----------



## dr/owned

KingUniverse said:


> I just got two EVGA RTX 3090 FTW3 Ultra Gaming models, one is REV 1.0, the other is REV 1.2.1
> 
> I've heard conflicting stories about shunt mods vs. flashing the 1000w KingPin bios. I've heard the shunt mods don't work, I've heard the BIOS doesn't work, I've also heard the power load levelling is off between the PCIe and 8pins. I've tried to research as much as I could about the topic, but still at a loss for what to do with my cards. They will be in SLI and on a 1600w PSU as well as being under water (EK-Quantum Vector2 FTW3 RTX 3080/90 ABP Set - Nickel + Plexi – EK Webshop (ekwb.com) )
> 
> Can I have suggestions on whether to shunt mod or flash before I put these water blocks on (They should be here by May 20th)
> 
> Thanks in advance!


1000W BIOS. Shunting is only for 2 connector cards where flashing the 1000W bios disables outputs.

The card realistically doesn't go over about 550W unless you're actively trying to hammer it with Furmark or whatever. Just set the TDP limit to 55% with the 1000W bios and you shouldn't blow the fuses. You can also bypass the fuses like I did but that's soldering.


----------



## KingUniverse

dr/owned said:


> 1000W BIOS. Shunting is only for 2 connector cards where flashing the 1000W bios disables outputs.
> 
> The card realistically doesn't go over about 550W unless you're actively trying to hammer it with Furmark or whatever. Just set the TDP limit to 55% with the 1000W bios and you shouldn't blow the fuses. You can also bypass the fuses like I did but that's soldering.


I'm great with a soldering iron, and with them being under such great waterblocks (Full front and active back) along with a 1600W PSU, I'm not shy about bypassing a fuse. What are the limitations if I bypass the fuse (Thermal? VRM?) And what are the dangers? If thermals are under control, and I'm only using one card, can I safely push past 550W? Any info would be greatly appreciated.


----------



## des2k...

KingUniverse said:


> I'm great with a soldering iron, and with them being under such great waterblocks (Full front and active back) along with a 1600W PSU, I'm not shy about bypassing a fuse. What are the limitations if I bypass the fuse (Thermal? VRM?) And what are the dangers? If thermals are under control, and I'm only using one card, can I safely push past 550W? Any info would be greatly appreciated.


You can calculate the VRM usage based on the number of phases and their max rating. You can use HWiNFO to find the amperage of your 3 VRM sections.
Wattage = voltage x amperage

There are 3 VRM groups: mem, cache & mem controller (uncore), and core.

The core is the one working the hardest, then uncore, and last is mem.

On the 2x 8-pin reference Nvidia VRM, around 660w you are using 90% of the core VRM current, about 75% of the uncore VRM, and maybe 55% of the mem VRM with a +1500 mem offset.

I would not go past 90% usage on the core VRM, so if each stage is 50A max, don't use more than 40A.

Not sure what card / VRM stages you have, but 40A per stage on my Zotac is crazy in terms of VRM heat.
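The sizing rule above can be turned into a small calculator (the phase count and amperage reading below are placeholders; read your actual per-rail amperage from HWiNFO):

```python
# Sketch of the VRM headroom check described above: watts = volts x amps,
# and stay under ~90% of a rail's total stage rating.
# Phase count, stage rating, and the measured value are illustrative only.

def vrm_headroom(measured_amps, phases, amps_per_stage=50, derate=0.90):
    """Return (usage fraction, safe max amps) for one VRM rail."""
    capacity = phases * amps_per_stage
    return measured_amps / capacity, capacity * derate

usage, safe_max = vrm_headroom(measured_amps=450, phases=10)  # hypothetical core rail
print(f"usage {usage:.0%}, keep under {safe_max:.0f} A")
```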


----------



## KingUniverse

des2k... said:


> You can calculate the VRM usage based on the number of phases and their max rating. You can use HWiNFO to find the amperage of your 3 VRM sections.
> Wattage = voltage x amperage
> 
> There are 3 VRM groups: mem, cache & mem controller (uncore), and core.
> 
> The core is the one working the hardest, then uncore, and last is mem.
> 
> On the 2x 8-pin reference Nvidia VRM, around 660w you are using 90% of the core VRM current, about 75% of the uncore VRM, and maybe 55% of the mem VRM with a +1500 mem offset.
> 
> I would not go past 90% usage on the core VRM, so if each stage is 50A max, don't use more than 40A.
> 
> Not sure what card / VRM stages you have, but 40A per stage on my Zotac is crazy in terms of VRM heat.


I have 3x8pin EVGA FTW3 Ultras
Anybody have any pics for the fuse I need to bypass? With these cards being under full waterblocks, the VRM will be water cooled as well as the backside from the active backplate. I don't think thermals will be an issue _crosses fingers_


----------



## yzonker

KingUniverse said:


> I have 3x8pin EVGA FTW3 Ultras
> Anybody have any pics for the fuse I need to bypass? With these cards being under full waterblocks, the VRM will be water cooled as well as the backside from the active backplate. I don't think thermals will be an issue _crosses fingers_


There are pics on Techpowerup. The fuses are white and close to the shunts. Some have the amp rating on them, but some appear not to.

I suspect you don't need to bypass them, though. I've hit my 3080 Ti FTW3 with 600w+ running TSE. According to my clamp meter (GPU-Z/HWiNFO readings are incorrect on that card when running the Galax 1kw bios), the highest it hit was 20-21 amps on 8-pin #2. Obviously the ragged edge, but that card runs hot on the 2nd 8-pin. If you just limit to 60% PL it will probably be OK, unless the power balancing is even worse than on my 3080 Ti.


----------



## dr/owned

KingUniverse said:


> I have 3x8pin EVGA FTW3 Ultras
> Anybody have any pics for the fuse I need to bypass? With these cards being under full waterblocks, the VRM will be water cooled as well as the backside from the active backplate. I don't think thermals will be an issue _crosses fingers_


I made a post here where I show the PCIe slot fuse bypassed: [Official] NVIDIA RTX 3090 Owner's Club . Not my cleanest work, but it's so small that I was struggling with the jumper sticking to my iron when I tried adding in more solder.

They're 1206-size fuses and you need to use power-rated jumpers; regular jumpers are only rated for a couple of amps and aren't helpful. There are 3 of them (one per 8-pin) around the shunts, plus the one by the PCIe slot.


----------



## KingUniverse

dr/owned said:


> I made a post here where I show the PCIe slot fuse bypassed: [Official] NVIDIA RTX 3090 Owner's Club . Not my cleanest work, but it's so small that I was struggling with the jumper sticking to my iron when I tried adding in more solder.
> 
> They're 1206-size fuses and you need to use power-rated jumpers; regular jumpers are only rated for a couple of amps and aren't helpful. There are 3 of them (one per 8-pin) around the shunts, plus the one by the PCIe slot.


Bypassing the fuses with the 1000w bios, are thermals the only limiting factor at that point? Is there a danger to the PCIe connector? What are the potential gains vs. NOT bypassing the fuses and just flashing the 1000w bios? If I flash the 1000w bios and set it above 55%, can it potentially blow the fuse(s)?

Thanks in advance for any info


----------



## Mad1137

Hey guys, I got the same issue or bug with my card.




Is it the card or the PSU??? Someone help


----------



## dr/owned

KingUniverse said:


> Bypassing the fuses with the 1000w bios, are thermals the only limiting factor at that point? Is there a danger to the PCIe connector? What are the potential gains vs. NOT bypassing the fuses and just flashing the 1000w bios? If I flash the 1000w bios and set it above 55%, can it potentially blow the fuse(s)?
> 
> Thanks in advance for any info


The connectors all have 20A fuses on them, but it's a 2:2:1 split on the power. So 20A + 20A + 10A + 6A (PCIe) = 56A = 672W before you really start risking it. I knew that I was going to be maxing out (and I did, at >1000W), so I had to bypass the fuses.

You could also just risk it and if you blow the fuse then bypass it.
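The fuse arithmetic above, written out (assuming the 12 V rail for the conversion; per the 2:2:1 split, the third 8-pin only carries about 10 A before the first two hit their fuses):

```python
# The fuse-limited ceiling worked out above: sum the current each input can
# carry before a fuse is at risk, then convert to watts at 12 V.
fuse_limited_amps = [20, 20, 10, 6]   # three 8-pins (2:2:1 split) + PCIe slot
total_amps = sum(fuse_limited_amps)   # 56 A
total_watts = total_amps * 12         # 672 W, matching the figure above
print(total_amps, total_watts)
```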


----------



## darkeclypse

dr/owned said:


> The connectors all have 20A fuses on them, but it's a 2:2:1 split on the power. So 20A + 20A + 10A + 6A (PCIe) = 56A = 672W before you really start risking it. I knew that I was going to be maxing out (and I did, at >1000W), so I had to bypass the fuses.
> 
> You could also just risk it and if you blow the fuse then bypass it.


Bypass it by just soldering a wire across the fuse? If not, how does one do it?


----------



## PLATOON TEKK

Hope all of you have been blessed; been gone for a sec, but work has been a bit wild.

Finally had a chance to spend some more time with the EVC2X and 2 PMDs (1 per GPU, via I2C1 & I2C2), hopefully in prep for some SPI flashing. What's been dope about the PMDs: since they measure at ~30 microseconds and don't perform any averaging, you get some pretty interesting info.

One of the things that stood out to me is that, when running the 1000w bios (rB KP) in games and benches (in this case COD), the max reported wattage in HWinfo (and all other sysinfo tools, GPU-Z etc.) can be up to 200-300w different from the PMD reading. In the screenshot, for example, HWinfo maxed out at 494w while the PMD hit 738.6w (small red square, left GPU reading); it's quite often in that range, actually.

However, when it does hit that range, it is extremely brief and returns to a reading somewhat similar to the system's. I'm assuming similar peaks (within their parameters, obv.) caused a lot of those "burnt card and psu" issues we've seen on the 3000 series. The game doesn't have an SLI profile, so only the 1st GPU is active (left portion of screenshot).

When I have a chance tonight or tomorrow, I will reinstall "It Takes Two" and run that crazy-ass clockwork level that had HWinfo hitting 800w+ on a single card. I'm curious how close to 1000w we actually are, even just briefly.

Also, seeing a few GPU-Z readouts of the 3090 Ti 890w bios (I don't own one; I mean readouts online), it's wild how much more it actually "hits" that range as opposed to the 3090s on a 1000w bios. Maybe it's all of the improved power delivery and new connectors.
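The gap between polled-software maximums and the PMD's microsecond samples comes down to averaging; a toy illustration (all numbers invented):

```python
# Toy illustration of why slow/averaged polling under-reports transient peaks.
# A brief spike barely moves a window average, so software that averages over
# its polling interval never "sees" the true peak that a fast PMD-style
# sampler catches. All numbers below are invented for illustration.
samples = [480.0] * 990 + [950.0] * 10    # 1000 fast samples: brief 950 W spike

true_peak = max(samples)                  # what a fast, non-averaging reading catches
window_avg = sum(samples) / len(samples)  # what an averaging poll reports

print(true_peak, round(window_avg, 1))    # 950.0 vs 484.7
```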



Spoiler: Screenshot & pmds























Got all the clamps, dip, 1.8v etc. from Elmor too, so I will get into that next and will let y'all know how I fare with the SPI flashes, ha


----------



## PLATOON TEKK

I stand corrected; the insane wattage is in the chapter after "Clockworks" in "It Takes Two".
Didn't hit the 840w in HWinfo like last time (am assuming due to a lower OC), however even at 774w reported in HWinfo I get a 943.1w peak on the PMD. Wonder if it's the split screen that pushes it so hard. Pretty impressive; a shame we only see those wattages briefly, ha. Back to the flashing for me.



Spoiler: 943.1w


----------



## J7SC

PLATOON TEKK said:


> I stand corrected; the insane wattage is in the chapter after "Clockworks" in "It Takes Two".
> Didn't hit the 840w in HWinfo like last time (am assuming due to a lower OC), however even at 774w reported in HWinfo I get a 943.1w peak on the PMD. Wonder if it's the split screen that pushes it so hard. Pretty impressive; a shame we only see those wattages briefly, ha. Back to the flashing for me.
> 
> 
> 
> Spoiler: 943.1w
> 
> 
> 
> 
> View attachment 2560573


I can see you will be in the market for the top-end RTX 4k cards that come with *two* 600W connectors per GPU


----------



## PLATOON TEKK

J7SC said:


> I can see you will be in the market for the top-end RTX 4k cards that come with *two* 600W connectors per GPU


haha I burned so much money on chillers, this **** better get hot! lol.

Realistically though, I was thinking last night, I don't think NVLINK will be around next gen. 2000w is only possible on 220v for a single PSU (to my knowledge), unless running two PSUs becomes more common.


----------



## J7SC

PLATOON TEKK said:


> haha I burned so much money on chillers, this **** better get hot! lol.
> 
> Realistically though, I was thinking last night, I don't think NVLINK will be around next gen. 2000w is only possible on 220v for a single PSU (to my knowledge), unless running two PSUs becomes more common.


...Realistically, they killed NVLink though I still use it and love it on an older w-cooled 2x 2080 Ti SLI-"CFR" setup. In the bad old HWBot days, I used to run multi PSUs for HEDT CPU w/ multi-GPU...no choice but to use different circuits w/ ~ 110...


----------



## PLATOON TEKK

J7SC said:


> ...Realistically, they killed NVLink though I still use it and love it on an older w-cooled 2x 2080 Ti SLI-"CFR" setup. In the bad old HWBot days, I used to run multi PSUs for HEDT CPU w/ multi-GPU...no choice but to use different circuits w/ ~ 110...
> View attachment 2560576


Truth, I’m just holding on to a corpse at this point, ha. Those PSUs are badass for being designed to be linked though, didn’t know they ever made them specifically. Might be the only way, IF, the NVLINK connector still shows up on the 4(5)090.


----------



## J7SC

PLATOON TEKK said:


> Truth, I’m just holding on to a corpse at this point, ha. Those PSUs are badass for being designed to be linked though, didn’t know they ever made them specifically. Might be the only way, IF, the NVLINK connector still shows up on the 4(5)090.


....or get some of these (in PCIe 5.0 version of course)  😁


----------



## KingUniverse

I was cleaning off thermal paste for my waterblock installation and this little component (a capacitor, maybe?) tore off with the thermal paste. I can't find the piece now; is the card borked?


----------



## darkeclypse

KingUniverse said:


> I was cleaning off thermal paste for my waterblock installation and this little component (a capacitor, maybe?) tore off with the thermal paste. I can't find the piece now; is the card borked?
> View attachment 2560797


Maybe.. did you use something other than Q-tips for that area?

Yeah, I'd try to find it and resolder it on. Find out what the part is, and maybe find the same part on some other old component and solder it onto the card.


----------



## KingUniverse

darkeclypse said:


> Maybe.. did you use something other than Q-tips for that area?
> 
> Yeah, I'd try to find it and resolder it on. Find out what the part is, and maybe find the same part on some other old component and solder it onto the card.


It was a Qtip that broke it off 🤦 I wasn't even using too much force either. It was stuck in the thermal paste and when I lifted the piece of thermal paste, that came up with it. It's going to be the "slave" card in an SLI/NVLink setup, so I have my fingers crossed.


----------



## gfunkernaught

For those running MO-RAs with 15+ fans, what fan hubs do you use? I have 28 fans in my rig on 3x 10-port PWM Thermaltake Commander hubs. I'd been running them powered via a daisy-chained SATA power connector for almost a year, then all of a sudden some fans on each hub stopped working. The fans are fine, as I can power them via a motherboard header. I tested each hub with a different power supply one at a time, and put a voltmeter on each port: 11.92v on every port except for one on each, which read .020v. Thermaltake says to power each hub with separate SATA power cables. I call BS, since they'd been working. The PSU is an HX1200, btw. I think I want to try a different brand. Thoughts?


----------



## J7SC

KingUniverse said:


> It was a Qtip that broke it off 🤦 I wasn't even using too much force either. It was stuck in the thermal paste and when I lifted the piece of thermal paste, that came up with it. It's going to be the "slave" card in an SLI/NVLink setup, so I have my fingers crossed.


...worth trying out to see if it works before_ really_ panicking...I recall some posts re. little SMTs broken off w/o any detriment, though I think it was concerning a CPU, not GPU. Still, I would try the card out first, even in the primary slot by itself, to see what's what.


----------



## Nizzen

gfunkernaught said:


> For those running MO-RAs with 15+ fans, what fan hubs do you use? I have 28 fans in my rig on 3x 10-port PWM Thermaltake Commander hubs. I'd been running them powered via a daisy-chained SATA power connector for almost a year, then all of a sudden some fans on each hub stopped working. The fans are fine, as I can power them via a motherboard header. I tested each hub with a different power supply one at a time, and put a voltmeter on each port: 11.92v on every port except for one on each, which read .020v. Thermaltake says to power each hub with separate SATA power cables. I call BS, since they'd been working. The PSU is an HX1200, btw. I think I want to try a different brand. Thoughts?


I use a MO-RA with 4x 200mm fans, NF-A20 PWM chromax black. Why make it more complicated than necessary?

I use a small aquasuite hub which is connected to the other aquasuite fan controller.


----------



## gfunkernaught

Nizzen said:


> I use a MO-RA with 4x 200mm fans, NF-A20 PWM chromax black. Why make it more complicated than necessary?
> 
> I use a small aquasuite hub which is connected to the other aquasuite fan controller.


I don't use a mo-ra, but typically mora users have more than a few fans. I have 2x480s and 2x360s, push pull 😏.

I'll check out that aquasuite hub.👍


----------



## Nizzen

Edit: LOL Aqua Computer is the name


----------



## J7SC

gfunkernaught said:


> I don't use a mo-ra, but typically mora users have more than a few fans. I have 2x480s and 2x360s, push pull 😏.
> 
> I'll check out that aquasuite hub.👍


...Raven_A and its 1350x63 rads use three TT Commander Sata-powered fan hubs, each for 10 pwm fans...not the 'Rolls Royce' of fan hubs, but these - as well as three more in another build from 2018 - have been working flawlessly...


----------



## darkeclypse

Yeah, I've used similar to the above for close to 4 years now.. no problems. 4 total controllers shoved in between the rads on each side, front and back, as I have full push/pull fans, 28 total. The fans run full tilt as they're all 3-pin, but it's fine. You could control them with a main 4-pin fan cable going to the mobo and control them in the bios.


----------



## KingUniverse

J7SC said:


> ...worth trying out to see if it works before_ really_ panicking...I recall some posts re. little SMTs broken off w/o any detriment, though I think it was concerning a CPU, not GPU. Still, I would try the card out first, even in the primary slot by itself, to see what's what.


She's alive! So far, no adverse effects (fingers crossed 🤞); both 2D and 3D clocks work, and the drivers installed successfully. I'll update if any funny business starts occurring.


----------



## darkeclypse

KingUniverse said:


> She's alive! So far, no adverse effects (fingers crossed 🤞); both 2D and 3D clocks work, and the drivers installed successfully. I'll update if any funny business starts occurring.


Nice!


----------



## gfunkernaught

J7SC said:


> ...Raven_A and its 1350x63 rads use three TT Commander Sata-powered fan hubs, each for 10 pwm fans...not the 'Rolls Royce' of fan hubs, but these - as well as three more in another build from 2018 - have been working flawlessly...


That's what I have currently and am having issues with. I redid all the fan cables and routes in my case. Now all but one fan is spinning, and that fan is officially dead. I plugged the fan into a mobo header: dead. I suspect the TT Commander killed it. I also lost fan speed control of all three hubs. I have the control cables plugged into my Corsair Commander Pro; the controller sees them and shows an RPM reading, but no control whatsoever.

TT in an email suggested that I use a dedicated SATA power cable for each hub. Let's see.
The HX1200 has a 100A max on the +12v rail, and the SATA connection uses that +12v rail. The NF-P12 uses 1.09w / 0.09A at 12v. 28 x 0.09A = 2.52A, 30.24W total power consumption. Really? A single SATA daisy-chain cable isn't enough to power all those fans, when it was designed to power four HDDs at ~25w per drive at max load?
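The arithmetic above, checked in a couple of lines (using the fan current figure as quoted):

```python
# Sanity check of the fan-hub power math quoted above: 28 fans at the
# NF-P12's quoted 0.09 A draw on the 12 V rail.
fans = 28
amps_per_fan = 0.09                     # A at 12 V, as quoted for the fan
total_amps = fans * amps_per_fan        # 2.52 A
total_watts = total_amps * 12           # 30.24 W across all hubs
print(round(total_amps, 2), round(total_watts, 2))
```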

Alas, they did offer to RMA the units. Noctua offers a 6 year warranty on their fans. Fantastic.

EDIT: Correction, I had to run iCUE to set the ports manually to PWM to regain control. Noctua fan is still dead


----------



## J7SC

gfunkernaught said:


> That's what I have currently and am having issues with. I redid all the fan cables and routes in my case. Now all but one fan is spinning, and that fan is officially dead. I plugged the fan into a mobo header: dead. I suspect the TT Commander killed it. I also lost fan speed control of all three hubs. I have the control cables plugged into my Corsair Commander Pro; the controller sees them and shows an RPM reading, but no control whatsoever.
> 
> TT in an email suggested that I use a dedicated SATA power cable for each hub. Let's see.
> The HX1200 has a 100A max on the +12v rail, and the SATA connection uses that +12v rail. The NF-P12 uses 1.09w / 0.09A at 12v. 28 x 0.09A = 2.52A, 30.24W total power consumption. Really? A single SATA daisy-chain cable isn't enough to power all those fans, when it was designed to power four HDDs at ~25w per drive at max load?
> 
> Alas, they did offer to RMA the units. Noctua offers a 6 year warranty on their fans. Fantastic.
> 
> EDIT: Correction, I had to run iCUE to set the ports manually to PWM to regain control. Noctua fan is still dead


...I hope you get it figured out. The six TT Commanders I have across three machines all have been working flawlessly, w/ 2 hubs per Sata cable each (on a total of 3x 1300 W PSUs) with Arctic, Noctua and Corsair fans. I don't use fan curve software in OS, but either fixed rpm, or fan curve in the mobo bios.


----------



## gfunkernaught

J7SC said:


> ...I hope you get it figured out. The six TT Commanders I have across three machines all have been working flawlessly, w/ 2 hubs per Sata cable each (on a total of 3x 1300 W PSUs) with Arctic, Noctua and Corsair fans. I don't use fan curve software in OS, but either fixed rpm, or fan curve in the mobo bios.


Noctua is going to send me another fan! All I have to do is break the dead fan, take a picture of it with a note showing the ticket number and the lot number, and it seems like they'll just ship the fan! Hot damn, that is awesome if it actually is that simple. But running all three hubs on one SATA power channel shouldn't be an issue, right? I don't want to cause damage by underpowering, even if the numbers are right.


----------



## J7SC

gfunkernaught said:


> Noctua is going to send me another fan! All I have to do is break the dead fan, take a picture of it with a note showing the ticket number and the lot number, and it seems like they'll just ship the fan! Hot damn, that is awesome if it actually is that simple. But running all three hubs on one SATA power channel shouldn't be an issue, right? I don't want to cause damage by underpowering, even if the numbers are right.


...don't know about 3x, as I only run 2x hubs per SATA cable. BTW, re. pictures of stuff smashed for warranty purposes, that is something I ran into before - with a PSU. Rather than have me send the old (heavy, bulky) one back, which is even more expensive these days re. fuel, delivery labour etc., they asked me to take a hammer to the PSU and really, really smash it, and then send pics


----------



## ArcticZero

KingUniverse said:


> She's alive! So far, no adverse effects (fingers crossed 🤞); both 2D and 3D clocks work, and drivers installed successfully. I'll update if any funny business starts occurring.


Probably just a power filter cap. They usually run in parallel and there's usually more of them than necessary for the card to operate normally.

My 3090 also had a cap fall off near the PCIe slot. While I could solder it back or get a replacement, the pad on one side has been ripped and I can't be bothered to repair it from the trace as it works just fine anyway. That was a year ago and zero issues since.


----------



## domdtxdissar

domdtxdissar said:


> So i did a full benchrun last night with my new gpu waterblock, to compare to my old 450w max air cooled runs:
> 
> 
> Spoiler: Stock cooling and stock 450w bios
> 
> 
> 
> *Port Royal = 14789* *@ *I scored 14 789 in Port Royal
> 
> Graphics Score = 14789
> *TIME SPY =21861 *@ I scored 21 861 in Time Spy
> 
> Graphics Score = 22575
> CPU Score = 18540
> *TIME SPY EXTREME = 11557* @ I scored 11 557 in Time Spy Extreme
> 
> Graphics Score = 11577
> CPU Score = 11450
> *FIRE STRIKE = 41342* @ I scored 41 342 in Fire Strike
> 
> Graphics Score = 45888
> Physics Score = 44869
> Combined Score = 22218
> *FIRE STRIKE extreme = 25679 @* I scored 25 679 in Fire Strike Extreme
> 
> Graphics Score = 26227
> Physics Score = 44859
> Combined Score = 14282
> *FIRE STRIKE ULTRA = 14329* @ I scored 14 329 in Fire Strike Ultra
> 
> Graphics Score = 13996
> Physics Score = 44727
> Combined Score = 7785
> *WILD LIFE = 116589* @ I scored 116 589 in Wild Life
> 
> 
> 
> It looks as if the MSI Suprim X 3090 has too few / too weak VRMs for it to make any sense to use the 1000W XOC bios instead of, let's say, the EVGA 520W for daily usage.. In most of the benches the card only used around 475W to 550W.. It was only in Time Spy that I saw a peak of 616W for a moment.
> 
> These are my 3dmark results:
> 
> *Port Royal = 16 050* *@ *I scored 16 050 in Port Royal
> 
> Graphics Score = 16050
> *TIME SPY = 22 666 *@ I scored 22 666 in Time Spy
> 
> Graphics Score = 23 449
> CPU Score = 19 062
> *TIME SPY EXTREME = 12 180* @ I scored 12 180 in Time Spy Extreme
> 
> Graphics Score = 12 258
> CPU Score = 11 760
> *FIRE STRIKE = 43 513* @ I scored 43 513 in Fire Strike
> 
> Graphics Score = 49 613
> Physics Score = 44 674
> Combined Score = 22 188
> *FIRE STRIKE extreme = 27 138 @* I scored 27 138 in Fire Strike Extreme
> 
> Graphics Score = 27 940
> Physics Score = 44 267
> Combined Score = 15 113
> *FIRE STRIKE ULTRA = 15 072 * @ I scored 15 072 in Fire Strike Ultra
> 
> Graphics Score = 14 796
> Physics Score = 44 514
> Combined Score = 8 140
> Did also run some other benches
> 
> Unigine Heaven @ 1080p
> View attachment 2522022
> 
> 
> Unigine Heaven @ 1440p
> View attachment 2522023
> 
> 
> Superposition 1080p extreme
> View attachment 2522024
> 
> 
> And some Final Fantasy runs to end it
> Endwalker @ 1440p Maximum
> View attachment 2522025
> 
> 
> Final Fantasy XV benchmark @ 1920p standard
> View attachment 2522026
> 
> 
> Now i have a baseline that can be used to compare against both Alder lake and Zen3 3d cache when they are launched..
> (think i will go for Zen v-cache since i already have the platform)


... 9 months later 
Here is a comparison of the 5800X3D against my 5950X, same 3090 graphics card: 
(sadly CPU limited to 4460MHz max)

Night Raid score = 72331 points 

Graphics Score 171771
CPU Score 16898
Wild Life score = 124195 points 

Wild life Extreme = 52837 points 

Fire Strike = 41882 points

Graphics Score 55490
Physics Score 30370
Combined Score 18446
Fire Strike Extreme = 26505 points 

Graphics Score 28705
Physics Score 30422
Combined Score 14995
Fire Strike Ultra = 14864 points 

Graphics Score 14992
Physics Score 30318
Combined Score 8130
Time Spy = 20891 points 

Graphics Score 23182
CPU Score 13393
Time Spy Extreme = 10322 points 

Graphics Score 12056
CPU Score 5687
Port Royal = 15879 points


----------



## gfunkernaught

J7SC said:


> don't know about 3x as I only run 2x hubs per SATA


Good point. I also have the Commander Pro on the SATA with 3 hubs. Do you have anything else on those SATA cables or just the hubs?


----------



## J7SC

gfunkernaught said:


> Good point. I also have the Commander Pro on the SATA with 3 hubs. *Do you have anything else on those SATA cables or just the hubs?*


...nope


----------



## gfunkernaught

J7SC said:


> ...nope


Added another SATA cable now so they're split up. Two hubs on one cable, a hub and the Corsair Commander on the other.

Btw, when you move your water-blocked GPU from one system to another, have you had to repaste it or noticed a difference in delta T between water and core temp? I was doing maintenance and had to remove the GPU, block and all, and now my GPU core/water temp delta went from 10C to 13C at ~500W. At one point while playing Cyberpunk with a 62% power limit, the core temp maxed at 47.8C and the water temp never exceeded 30C.
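The delta-T change described here can be put in numbers as the block's effective thermal resistance, (T_core − T_water) / P. A minimal sketch, using the ~500 W load and ~30 C water reported above; the absolute core temps are hypothetical values chosen just to reproduce the 10 C and 13 C deltas:

```python
# Effective block thermal resistance: R = (T_core - T_water) / P
# Core temps below are placeholders matching the 10 C / 13 C deltas at ~500 W.

def block_resistance(t_core: float, t_water: float, power_w: float) -> float:
    """Core-to-water thermal resistance in deg C per watt."""
    return (t_core - t_water) / power_w

before = block_resistance(40.0, 30.0, 500.0)  # 10 C delta -> 0.020 C/W
after = block_resistance(43.0, 30.0, 500.0)   # 13 C delta -> 0.026 C/W

# Going from 0.020 to 0.026 C/W is a ~30% worse thermal path at the
# same wattage, which points at the block mount / TIM interface rather
# than at anything the loop routing could plausibly cause.
print(before, after)
```

This is only a comparison metric, not a diagnosis; it just makes the size of the change easier to reason about than raw temperatures.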


----------



## yzonker

gfunkernaught said:


> Added another SATA cable now so they're split up. Two hubs on one cable, a hub and the Corsair commander on the other.
> 
> Btw when you move your water blocked GPU out of one system to another, have you had to repaste it or noticed a difference in delta T between water and core temp? I was doing maintenance and had to remove the GPU with the block out, and now my water gpu core temp delta went from 10c to 13c at ~500w. At one point while playing cyberpunk with 62% power limit, the core temp maxed at 47.8c and the water temp never exceeded 30c.


That shouldn't happen. Did you change anything that would affect flow?


----------



## J7SC

gfunkernaught said:


> Added another SATA cable now so they're split up. Two hubs on one cable, a hub and the Corsair commander on the other.
> 
> Btw when you move your water blocked GPU out of one system to another, have you had to repaste it or noticed a difference in delta T between water and core temp? I was doing maintenance and had to remove the GPU with the block out, and now my water gpu core temp delta went from 10c to 13c at ~500w. At one point while playing cyberpunk with 62% power limit, the core temp maxed at 47.8c and the water temp never exceeded 30c.


...not really 'normal' if all else stayed the same, or did you make some other changes, ie. different routing ?


----------



## gfunkernaught

yzonker said:


> That shouldn't happen. Did you change anything that would affect flow?


The only thing I changed was loop order from 
Pump > cpu > 2x rad > GPU> 2x rad > pump
Now it's pump > cpu > GPU > all rads > pump.

I'll have to check the fins on the GPU block for obstructions.


----------



## gfunkernaught

J7SC said:


> ...not really 'normal' if all else stayed the same, or did you make some other changes, ie. different routing ?


Routing would/could cause changes in GPU temp but not water temp?


----------



## J7SC

gfunkernaught said:


> Routing would/could cause changes in GPU temp but not water temp?


...not that likely, but potentially yes if there's more of a restriction right ahead of the GPU while a big rad / fan system has an easier time.


----------



## yzonker

gfunkernaught said:


> The only thing I changed was loop order from
> Pump > cpu > 2x rad > GPU> 2x rad > pump
> Now it's pump > cpu > GPU > all rads > pump.
> 
> I'll have to check the fins on the GPU block for obstructions.


Changing order might change the water temp at the GPU slightly and also slightly change the water temp sensor reading (1C, maybe 2C maximum). 

Moving my temp sensor from my res to the inlet of the GPU block dropped my calculated block delta 1C for example. But nothing really changed obviously.


----------



## gfunkernaught

yzonker said:


> Changing order might change the water temp at the GPU slightly and also slightly change the water temp sensor reading (1C, maybe 2C maximum).
> 
> Moving my temp sensor from my res to the inlet of the GPU block dropped my calculated block delta 1C for example. But nothing really changed obviously.


Now that I'm thinking about it, yeah, totally. I'm feeding the GPU water warmed by the CPU. D'oh! Gotta redo it. I have to find a sterile bottle to drain the water into because I'd like to reuse it.


----------



## gfunkernaught

@J7SC @yzonker 
I forgot to mention that my water temp sensor is located just before the pump/res. I'm wondering if I should do CPU>2x360>GPU>2x480 or CPU>2x480>GPU>2x360 or just put the GPU right after the pump so pump>GPU>CPU>rads>pump...🤔


----------



## J7SC

gfunkernaught said:


> @J7SC @yzonker
> I forgot to mention that my water temp sensor is located just before the pump/res. I'm wondering if I should do CPU>2x360>GPU>2x480 or CPU>2x480>GPU>2x360 or just put the GPU right after the pump so pump>GPU>CPU>rads>pump...🤔


...probably won't make that much difference in a hi-flow system, but I would use the 2x 480 rads right after the 3090 GPU as it puts out the most heat/watts.
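As a rough cross-check of why placement matters so little in a high-flow loop, the coolant's temperature rise across each component can be estimated as dT = P / (m_dot * c_p). A minimal sketch; the ~1 GPM flow rate is an assumption, not a measured value from either system:

```python
# Coolant temperature rise across a heat source: dT = P / (m_dot * c_p)
# Assumption: ~1 GPM of water (~0.063 kg/s); c_p is water's specific heat.

C_P_WATER = 4186.0   # J/(kg*K)
FLOW_KG_S = 0.063    # ~1 GPM, an assumed typical loop flow rate

def delta_t(power_w: float, flow_kg_s: float = FLOW_KG_S) -> float:
    """Temperature rise (deg C) of the coolant across one component."""
    return power_w / (flow_kg_s * C_P_WATER)

# A ~500 W GPU warms the water passing through its block by only ~1.9 C,
# and a ~150 W CPU by well under 1 C -- which is why reordering blocks
# and rads typically shifts component temps by a degree or two at most.
print(round(delta_t(500), 1))
print(round(delta_t(150), 1))
```

The takeaway matches the advice above: put the biggest rad area after the biggest heat source if you like, but at this flow rate the whole loop sits within a couple of degrees of one water temperature.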


----------



## gfunkernaught

J7SC said:


> ...probably won't make that much difference in a hi-flow system, but I would use the 2x 480 rads right after the 3090 GPU as it puts out the most heat/watts.


Right, that's how I had it initially. But the massive increase in water<>GPU delta is bothering me, and crashing my previously stable OC. My 3090 in the 1000D system never went higher than 45c and that was with a very moderate and quiet fan profile, and at 580w.


----------



## yzonker

gfunkernaught said:


> Right, that's how I had it initially. But the massive increase in water<>GPU delta is bothering me, and crashing my previously stable OC. My 3090 in the 1000D system never went higher than 45c and that was with a very moderate and quiet fan profile, and at 580w.


BTW, how did the 1000D work out? Happy with it? I've been toying with getting one.


----------



## gfunkernaught

yzonker said:


> BTW, how did the 1000D work out? Happy with it? I've been toying with getting one.


So here's my quick review:
Its big, spacious, fits plenty of rads and looks pretty.

Some issues due to a lack of foresight on Corsair's part when it comes to cases. I have two 140mm rear exhaust fans. I have both front and top rad fans pulling air in; that air gets warmed by the rads, then that warm air fills the case, only to be pulled out by two measly 140s.

I also had to install a slot cooler fan on the vertical mounts at the bottom-rear of the case to pull some additional air out, because the area above the PSU is a hot pocket.

The PSU shroud only insulates the PSU so I removeded it, but my PSU's fan still ramps up and gets quite loud. Corsair says the fan is controlled by load, but I watched the GPU watts as the fan turned on at around 550W, then came down to 450W (went from one scene to another and waited in Cyberpunk) and the fan was still on.


My previous case, the Graphite 760T which I heavily modified, had that external rad mounted at the back plus two more inside the case. I also had two 200mm fans that I attached on the side panel door to remove all that heat from inside the case which worked great. The problem with that is now the ambient air heats up, and thus warmer air gets fed back into the case, and that system was louder.

So bottom line is this: If you don't want all that heat from your case to heat up your room but still want to keep your system cool and quiet, this case is great for that. But remember that heat will stay inside the case for the most part. I doubt serious water cooling folk would want to feed a rad warm air, so using the top rads as exhaust is probably a bad idea.


----------



## yzonker

Wow, thanks. That's a lot of good info. I remembered you making a comment not too long after buying the 1000D suggesting you weren't 100% happy with it. 

I'm not sure how to balance the airflow better either. I usually pull all of the slot covers off to help move air in that area. Doesn't show much since it's on the back. But you are already addressing that to some extent with the slot cooler.

Making the top exhaust might not perform worse, since the fans will probably move more air with the better balance, but probably no better either.

What fan are you using at the rear exhaust? Obviously need something there that flows a lot of air but isn't too loud. 

I need to get something different anyway. My old Corsair 450d just isn't big enough for some of the larger cards. Just for fun mostly, I swapped my 3080ti FTW3 into my case in place of the Zotac 3090. But it takes up so much room I don't even have a good path for the water line to go from the front rad to the top and had to move my reservoir to the back of the case (outside the case). Kinda janky right now, but working at least.

To get some 3090 content, I will say I really like the 3080ti FTW3 in place of my Zotac. Having a bit better (450W) stock bios is nice, and the dual-bios option lets me keep the Galax 1kW bios on hand without needing to flash. The 3080ti seems to punch above its rated power limit to some extent compared to my 3090. It holds a higher core clock/voltage than the 3090 at approximately the same power level (30-60MHz).

I ran this in PR the other day with the 3080ti to compare to my best 3090 chiller run. 

Edit: and I should add that the 3090 is benefiting some from the older Nvidia driver as well that can't be used with the 3080ti.

3080ti

I scored 15 937 in Port Royal
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3080 Ti x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com
3090

I scored 16 107 in Port Royal
AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11
www.3dmark.com

----------



## gfunkernaught

@yzonker I used the 140mm fans that came with my Graphite 760T case. They're airflow fans, and even at 100% they're fairly quiet. I've experimented with the top rad as exhaust and overall water temp went up. I think my current issue with warmer temps is having the CPU right before the GPU, as opposed to CPU > rad > GPU. I also think that something happened while moving the GPU. I must have handled it in a way that made the block contact worse, which I can see happening since I'm using liquid metal. The slightest movement can affect the LM contact, unlike paste, which almost acts like an adhesive.

BTW I was playing Cyberpunk last night for at least 2hrs, checked the temps and the gpu core maxed at 44c water was at 30c, average 530w.


----------



## gfunkernaught

I wanted to run this idea I had by you guys before I start dismantling my loop. What if I made the loop like so:
pump>gpu>2x480 rad>cpu>2x360 rad>res or
pump>gpu>cpu>2x360>2x480>res

BTW I know that loop order doesn't affect overall water temp since it will equalize, but I'm after controlling the water temp just before it gets to the gpu block.


----------



## gfunkernaught

So it looks like I fixed my issue. Put the tube routing back to the way I had it, pump>cpu>2x360>gpu>2x480>res, and now my 530W GPU/water delta T is maybe a bit over 10C? I must be hallucinating. Someone smack me. I should probably plug my water temp sensor into the Corsair Commander to see if it can read decimals. Also, the only major difference in this loop is the coolant, a custom mix I made: PrimoChill Utopia with distilled water, and a touch of dish soap left over from when I was cleaning it. There is a little foam in the res, but the loop is clear and clean.


----------



## yzonker

gfunkernaught said:


> So it looks like I fixed my issue. Put the tube routing back to the way I had it pump>cpu>2x360>gpu>2x480>res, and now my 530w gpu/water delta T is maybe a bit over 10c? I must be hallucinating. Someone smack me. I should probably plug my water temp sensor into the Corsair commander to see if it can read decimals. Also the only major difference in this loop is the coolant which I made a custom mix. I used primo chill utopia with distilled water, and a touch of dish soap which was left over from when I was cleaning it. There is a little foam in the rest, but the loop is clear and clean.
> View attachment 2561418


That is interesting. If I'm following correctly, you are seeing a 4C reduction in actual GPU temp by just changing the loop order. That I wouldn't expect. I wasn't sure how to answer your question about what order to use as it really shouldn't make that big of a difference.


----------



## gfunkernaught

yzonker said:


> That is interesting. If I'm following correctly, you are seeing a 4C reduction in actual GPU temp by just changing the loop order. That I wouldn't expect. I wasn't sure how to answer your question about what order to use as it really shouldn't make that big of a difference.


Right, and I feel kinda dumb asking the loop order question cuz we all know it doesn't matter in terms of water temp. But as we can see, loop order does affect component temp. Yeah, I'm still shocked at the temps now. I just ran the Bright Memory benchmark on loop for a bit, same thing. Ambient temp is 22C, water 30C, GPU core 40C @ 500-520W. Rad fans at 55%, still quiet unless you're right next to the case. I think the loop order change + adding the blower exhaust to remove that warm air from the bottom of the case really helped. I was worried that I'd lose the chemistry of my coolant mix if I drained it back into the same bottle of DW that I used to make it, because I didn't use the whole gallon. But so far so good.


----------



## gfunkernaught

Haven't run one of these in a while


----------



## heptilion

Hi All,

Can someone help me with what I could do with my Aorus Extreme 3090 3xpin card? Which bios should i play around with?


----------



## 7empe

heptilion said:


> Hi All,
> 
> Can someone help me with what I could do with my Aorus Extreme 3090 3xpin card? Which bios should i play around with?


EVGA KP 520W bios with BAR enabled (94.02.42.C0.0C). After switching the bios try +1000 MEM and +105 on core and see what happens. Don't increase voltage slider for now.


----------



## Luggage

gfunkernaught said:


> Right and I feel kinda dumb asking the loop order question cuz we all know it doesn't matter in terms of water temp. But as we can see loop order does affect component temp. Yeah I'm still shocked at the temps now. I just ran the Bright Memory benchmark on loop for a bit, same thing. Ambient temp is 22c, water 30c , gpu core 40c @ 500-520w. Rad fans at 55% still quiet unless you right next to the case. I think the loop order change + adding the blower exhaust to remove that warm air from the bottom of the case really helped. I was worried that I'd lose the chemistry of my coolant mix if I drained it back into the same bottle of DW that I used to make it, because I didn't use the whole gallon. But so far so good.


"Loop order doesn't matter" was always a simplification for "normal" loops in a case with one or two rads, a GPU and a CPU for people not pushing it. If 1-2C doesn't matter then loop order doesn't matter...


----------



## dream3

Guys is there a consensus on thermal pad thickness and softness for the STRIX? I want to replace mine but been seeing conflicting info.


----------



## gfunkernaught

dream3 said:


> Guys is there a consensus on thermal pad thickness and softness for the STRIX? I want to replace mine but been seeing conflicting info.


Have you looked into thermal putty?


----------



## ArcticZero

Actually what is the source now for TG-PP10 considering Digikey has had it as "Obsolete" for quite some time? Had been meaning to grab a batch but never got around to doing so. Temps are perfectly fine but wanted to prepare for my inevitable block reseat down the road.


----------



## J7SC

dream3 said:


> Guys is there a consensus on thermal pad thickness and softness for the STRIX? I want to replace mine but been seeing conflicting info.


I actually measured the 3090 Strix OC stock pad sizes and posted them (along w/ others as well)* in this thread *- around *mid-March '21*, but can't find it right now. In any case, you can order various thicknesses of Thermalright pads (ie. at Amazon) and get fairly close, if not an exact match.










All that said, I changed to the Phanteks w-block for the 3090 a few months later and ended up using TG-10 thermal putty for the VRAM (both sides) and stuck with the Thermalright pads for the phases. Almost a year later, temps are still exactly the same...



ArcticZero said:


> Actually what is the source now for TG-PP10 considering Digikey has had it as "Obsolete" for quite some time? Had been meaning to grab a batch but never got around to doing so. Temps are perfectly fine but wanted to prepare for my inevitable block reseat down the road.


Per last pic below, T-Global, the supplier to Digikey *still makes *thermal putty, but at lower W/mK (max 6.3 instead of 10 ?). There are also other thermal putty makers in EU and Asia, but anything over 7 W/mK is hard to find.

I ended up ordering a care package from Digikey (3x30 TG 10-30g) recently before they were out. I also still have 2x TG-10-50g jars from last year...all unopened and vacuum-sealed, in another sealed bag in the fridge w/a warning label ('not food / icing').

As far as I can gather, once applied, shelf life / 'best before' date is not really the issue; it is more about the oils separating from the medium while standing on a shelf somewhere for a long time. As per above, my 3090 Strix in my gamer has had that putty on for a year now w/o any decline in performance, ditto for the 6900XT in my work system. But the area on the PCBs around the putty does show some of the (harmless, non-conductive) oils having 'exited'.


----------



## kryptonfly

Well... I sold my beloved Gigabyte 3090 Turbo for 1270€ with waterblock. Does anybody here plan to change to a 4090 or 4090 Ti? Honestly I'd prefer a 4090 Ti if it could be available soon, but I think I'll go for a 4090. For now I'm waiting it out on the UHD 770, what a slap! Only desktop tasks for months


----------



## wtf_apples

I was kinda shocked when I got the email saying TG PP10 is EOL.
I've been MIA from here due to some hardware failures. Kingpin block was leaking from the terminal; can't even RMA because EVGA doesn't have any. Ended up taking it apart and fixing it.
Then the card itself died. VGA error. 
Trying again once I get all my drives up and running. Lost my files when the USB drive failed.. 
Been looking for the post explaining nvidia-smi again; I was having problems with it before but lost the instructions text file I had.
Lastly, is the EVGA OC 520 ReBAR the best bios to use now? Going to avoid the 1kW since these cards keep dying
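Since the nvidia-smi notes went missing, here is a hedged cheat-sheet of the query/control switches commonly used with these cards. The wattage and clock values below are placeholders only; what is actually settable depends on the flashed vbios, the driver version, and running from an elevated prompt:

```shell
# Query current, default, min and max enforceable power limits
nvidia-smi -q -d POWER

# Set a software power limit in watts (placeholder value; must fall
# inside the vbios min/max range reported by the query above)
nvidia-smi -pl 350

# Lock GPU core clocks to a min,max range in MHz (placeholder values;
# handy for taming transient spikes during benching)
nvidia-smi -lgc 1695,1695

# Reset locked clocks back to default boost behaviour
nvidia-smi -rgc

# Log power draw and core clock once per second while testing
nvidia-smi --query-gpu=power.draw,clocks.gr --format=csv -l 1
```

This is a sketch from the standard tool's documented options, not a reconstruction of the lost post, so double-check against `nvidia-smi --help` on your driver.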


----------



## yzonker

kryptonfly said:


> Well... I sold my beloved Gigabyte 3090 Turbo for 1270€ with waterblock. Does anybody here plan to change for a 4090 or 4090 Ti ? Honestly I'd prefer a 4090 Ti if it could be available soon but I think I'll go for a 4090. For now I'm waiting with the UHD 770, what a slap ! Only desktop tasks for months


Yea I'm planning to get a 4090 if the rumors turn out to be relatively close and the price isn't too insane. I think I'd have to draw the line around $2k USD. Although reality is that I don't really need it for gaming unless something new is released that pushes the card harder. It'll be interesting to see what demand is really like this time around. I suspect quite a few people will stick with 30 series or pick one up used relatively cheap as mining continues to decline (assuming it does).


----------



## kryptonfly

yzonker said:


> Yea I'm planning to get a 4090 if the rumors turn out to be relatively close and the price isn't too insane. I think I'd have to draw the line around $2k USD. Although reality is that I don't really need it for gaming unless something new is released that pushes the card harder. It'll be interesting to see what demand is really like this time around. I suspect quite a few people will stick with 30 series or pick one up used relatively cheap as mining continues to decline (assuming it does).


Honestly, I was not overjoyed with this 3000 series. For example, Cyberpunk 2077, just 3 months after release, put the 3090 on its knees! Without DLSS, lots of games at 4K can't run well with ray tracing. I'm not even talking about Virtual Reality, nor 8K, one of the 3090's selling arguments... Unreal Engine 5 seems to need lots of power too. I look forward to getting my hands on a 4090 minimum.


----------



## yzonker

kryptonfly said:


> Honestly, I was not overjoyed with this 3000 serie, For example, Cyberpunk 2077 just 3 months after release put the 3090 at knees ! Without DLSS lots of games at 4K can't run well with ray trace. I'm not talking about Virtual Reality neither the 8K as one of the 3090's selling argument... Unreal Engine 5 seems to need lots of power too. I look forward to put my hand on the 4090 minimum.


CP2077 is the exception for me, but I played it through twice. Probably not playing it much again unless some real DLC comes along. 

As far as DLSS, I really can't tell it's on in most games unless I make careful observations while toggling it back and forth (at least set to Quality). And in some cases it removes aliasing really well. For reference, I game on a 55" LG C1 sitting on my desk about 2-3 feet away. That also limits me to 120Hz.


----------



## Nizzen

kryptonfly said:


> Honestly, I was not overjoyed with this 3000 serie, For example, Cyberpunk 2077 just 3 months after release put the 3090 at knees ! Without DLSS lots of games at 4K can't run well with ray trace. I'm not talking about Virtual Reality neither the 8K as one of the 3090's selling argument... Unreal Engine 5 seems to need lots of power too. I look forward to put my hand on the 4090 minimum.


Does Unreal Engine 5 scale well with cpu?
Frostbite scales pretty well with CPUs. Faster CPU and faster memory = more fps (when CPU bound, ofc).


----------



## kryptonfly

Nizzen said:


> Does Unreal Engine 5 scale well with cpu?
> Frostbite is scaling pretty good with cpu's. Faster cpu and faster memory= more fps. (when cpu bound ofc)


Honestly I can't tell for sure because I didn't test before selling my 3090, but there are benches on YouTube of the UE5 demo; CPU usage is really low, I think the GPU matters more: Unreal Engine 5 Lumens Lighting Demo Epic Settings 4K | RTX 3090 Ti | i9 12900K 5.3GHz - YouTube


----------



## Koozwad

Had my ASUS ROG STRIX 3090 OC die on me a couple days ago. Bought it new from a regular webshop on the 24th of December 2020, and had it die on the 30th of May 2022. Been building PCs for ~22 years now and never had a part die before. Have had at least 4 issues with this card as well. First it would randomly restart my PC during gaming, so after a lot of testing and researching it turned out my PSU couldn't handle the spiking. New PSU, problem solved. Also had occasional fan rattling at certain (high) fan speeds during gaming (default curve) and crazy high coil whine (often). Both of those weren't really dealbreakers though. The card dying? Yeah, I'd say that counts as one. Turned my PC on one morning and... no signal. LEDs were going fine. Replaced GPU with an old 660 Ti and voila, 0 issues. Time for RMA...

Has anyone else here experienced some of the above with their STRIX 3090?


----------



## 7empe

Koozwad said:


> Had my ASUS ROG STRIX 3090 OC die on me a couple days ago. Bought it new from a regular webshop on the 24th of December 2020, and had it die on the 30th of May 2022. Been building PCs for ~22 years now and never had a part die before. Have had at least 4 issues with this card as well. First it would randomly restart my PC during gaming, so after a lot of testing and researching it turned out my PSU couldn't handle the spiking. New PSU, problem solved. Also had occasional fan rattling at certain (high)fan speeds during gaming(default curve) and crazy high coil whine(often). Both of those weren't really dealbreakers though. The card dying? Yeah I'd say that counts as one. Turned my PC on one morning and... no signal. LEDs were going fine. Replaced GPU with an old 660 Ti and voila, 0 issues. Time for RMA...
> 
> Has anyone else here experienced some of the above with their STRIX 3090?


Did you try vbios switch button?


----------



## J7SC

yzonker said:


> Yea I'm planning to get a 4090 if the rumors turn out to be relatively close and the price isn't too insane. I think I'd have to draw the line around $2k USD. *Although reality is that I don't really need it for gaming unless something new is released that pushes the card harder. * It'll be interesting to see what demand is really like this time around. I suspect quite a few people will stick with 30 series or pick one up used relatively cheap as mining continues to decline (assuming it does).


...ultimately, exceeding requirements for maximum settings for demanding games on my C1 is the determining factor, though I don't mind the odd benching. For now, the w-cooled 3090 Strix is more than capable of handling everything I throw at it, though I don't have as much time to spend on gaming as I want to. Once the RTX4K / Lovelace products are released, I will wait to find out more about the 4090 vs 4090 Ti, to look for a *real and actually noticeable* 'jump' from my current setup.

On another note, ever since I got my 3090 Strix, I noticed that I tend to get slightly better performance (stock vbios, and 520W max vbios) w/ NVidia settings on 'normal' power rather than 'prefer maximum performance', and that doesn't seem to relate to GPU / Hotspot / VRAM temp - it was the same on air vs much lower w-cooled temps, though temp-related boost algorithms will obviously affect both cooling scenarios. This is w/o curves or overvolting etc, just 'regular' MSI AB slider oc'ing. *Are other 3090 owners experiencing the same?*

Finally, I confirmed that my most efficient / best fps (across several different apps) re. VRAM is at +1175 MHz to +1202 MHz w/ stock voltages, noting thermal putty on the double-sided VRAM plus an extra big heatsink on the back. From what I've read, this is sort of upper-middle performance..?


----------



## wtf_apples

Well, having problems with my RMA'd KP so I'm sending it back. Maybe third time's the charm. I'm on the fence about whether I should list it for sale once the new one arrives. Worried prices might be even higher, what supply will be like, etc. Same as always


----------



## yzonker

J7SC said:


> ...ultimately, exceeding requirements for maximum settings for demanding games on my C1 is the determining factor, though I don't mind the odd benching. For now, the w-cooled 3090 Strix is more than capable of handling everything I throw at it, though I don't have the time to spend as much time on gaming as I want to. Once the RTX4K / Lovelace products are released, I will wait to find out more about the 4090 vs 4090 Ti, to look for a *real and actually noticeable* 'jump' from my current setup.
> 
> On another note, ever since I got my 3090 Strix, I noticed that I tend to get slightly better performance (stock vbios, and 520W max vbios) w/ NVidia settings on 'normal' power rather than 'prefer maximum performance', and that doesn't seem to relate to GPU / Hotspot / VRAM temp - it was he same on air vs much lower w-cooled temps, though temp-related boost algorithms will obviously affect both cooling scenarios. This is w/o curves or overvolting etc, just 'regular' MSI AB slider oc'ing. *Are other 3090 owners experiencing the same ?*
> 
> Finally, I confirmed that my most efficient / best fps (across several different apps) re. VRAM is at +1175 MHz to +1202 MHz w/ stock voltages, noting thermal putty on the double-sided VRAM plus an extra big heatsink on the back. From what I've read , this is sort of upper-middle performance..?
> View attachment 2562649


In regards to mem OC, that's probably average to a little above average, is my guess. It's hard to know for sure, as I'm sure some people are actually running higher than what actually adds performance and then claiming that number. My 3090 is solidly below average at only around +800-900 stable gaming. My 3080ti can do +1200-1300, but that's about the same since the base clock is lower than on a 3090.


----------



## J7SC

yzonker said:


> In regards to mem OC, that's probably average to a little above average, is my guess. It's hard to know for sure, as I'm sure some people are actually running higher than what actually adds performance and then claiming that number. My 3090 is solidly below average at only around +800-900 stable for gaming. My 3080 Ti can do +1200-1300, but that's about the same effectively, since its base memory clock is lower than a 3090's.


Thanks for that info. I was using TS GT1, Superposition 4K/8K, etc. to determine the highest repeatable fps (over multiple runs, same conditions) for the VRAM.


----------



## Arizor

J7SC said:


> Thanks for that info. I was using TS GT1, Superposition 4K/8K, etc. to determine the highest repeatable fps (over multiple runs, same conditions) for the VRAM.


Yep @J7SC, my mem OC is always around +1200 whilst gaming (+1250 I think in AB); anything more and I don't see any benefit. Same ROG Strix OC card.

I'm in the same boat vis-à-vis the 4090: it will have to show a noticeable uplift to encourage a purchase, since I've just installed my new monitor, a 42-inch LG C2, so I'm limited to 120Hz anyway.


----------



## GRABibus

kryptonfly said:


> Well... I sold my beloved Gigabyte 3090 Turbo for 1270€ with waterblock. Does anybody here plan to change for a 4090 or 4090 Ti ? Honestly I'd prefer a 4090 Ti if it could be available soon but I think I'll go for a 4090. For now I'm waiting with the UHD 770, what a slap ! Only desktop tasks for months


I am also going to move to a 4090.
This means I will have to sell my 3090 Kingpin Hybrid.
But before any 4090 purchase, I would wait for an XOC BIOS release (if there is one).


----------



## GRABibus

J7SC said:


> ...ultimately, exceeding requirements for maximum settings for demanding games on my C1 is the determining factor, though I don't mind the odd benching. For now, the w-cooled 3090 Strix is more than capable of handling everything I throw at it, though I don't have the time to spend as much time on gaming as I want to. Once the RTX4K / Lovelace products are released, I will wait to find out more about the 4090 vs 4090 Ti, to look for a *real and actually noticeable* 'jump' from my current setup.
> 
> On another note, ever since I got my 3090 Strix, I noticed that I tend to get slightly better performance (stock vbios, and 520W max vbios) w/ NVidia settings on 'normal' power rather than 'prefer maximum performance', and that doesn't seem to relate to GPU / Hotspot / VRAM temp - it was the same on air vs much lower w-cooled temps, though temp-related boost algorithms will obviously affect both cooling scenarios. This is w/o curves or overvolting etc, just 'regular' MSI AB slider oc'ing. *Are other 3090 owners experiencing the same ?*
> 
> Finally, I confirmed that my most efficient / best fps (across several different apps) re. VRAM is at +1175 MHz to +1202 MHz w/ stock voltages, noting thermal putty on the double-sided VRAM plus an extra big heatsink on the back. From what I've read , this is sort of upper-middle performance..?
> View attachment 2562649


My 3090 Kingpin Hybrid can do +1200MHz at stock voltage.
Beyond +1200MHz, I have to increase the VRAM voltage with the Classified tool for stability.

I noticed that performance keeps scaling in Port Royal up to +1600MHz.


----------



## yzonker

GRABibus said:


> I am also going to move to a 4090.
> This means I will have to sell my 3090 Kingpin Hybrid.
> But before any 4090 purchase, I would wait for an XOC BIOS release (if there is one).


One issue this time may be 1 vs 2 16-pin connectors. Someone over in the owners' thread tried the Galax HOF 3090 Ti XOC BIOS on a single-connector card and didn't see any increase in power draw.

We'll just have to see how it goes. Waiting will work if availability is better this time. I think it should be, but who knows. I'll probably grab a 4090 FTW3 at launch from the EVGA queue. Can always sell it later if it turns out to be a turd.


----------



## J7SC

FYI, below is the Kingpin 3090 Ti PCB w/ _2x_ 600W connectors ...As usual, a very nice PCB... The 4090 / Ti PCB is supposed to be 'similar' but not identical (shorter?).

For now, the w-cooled 3090 Strix OC w/ 520W vbios is it for me, as it delivers more than the C1 OLED at 4K 120Hz can usually eat, even w/ full-ultra eye-candy. When the 4090 releases, rumors re. the 4090 Ti should be a bit closer to the truth, given that one can usually infer from the 'full die' technical descriptions. That's when I'll try to figure out which one to get next... probably a new build project, complete w/ next-gen AMD?/Intel? CPU, PCIe 5 NVMe etc... early next year, maybe?


----------



## yzonker

Plug in your twelve 8 pins for SLI and off you go. 😁 

We need something to stir things up. This whole sub forum is pretty dead now.


----------



## 7empe

J7SC said:


> FYI, below is Kingpin 3090 Ti PCB w/ _2x _600W connectors ...As usual, a very nice PCB... 4090 / Ti PCB is supposed to be 'similar' but not identical (shorter?).
> 
> For now, the w-cooled 3090 Strix OC w/ 520W vbios is it for me as it delivers more than C1 OLED 4K 120Hz can usually eat, even w/ full-ultra eye-candy. When the 4090 releases, rumors re. the 4090 Ti should be a bit closer to the truth, given that one can usually infer from the 'full die' technical descriptions. That's when I try to figure out which one to get next...probably will be a new build project, complete w/ next-gen AMD?/Intel? CPU, PCIe 5 NVMe etc...early next year, may be ?


I'm with 12th gen Intel and a Strix 3090 OC with the 520W BIOS too, at 3440x1440 200Hz. It gets quite close to feeding 200 fps in most games on high/ultra. BFV is a full 200 fps at ultra, but some titles like Halo Infinite sit around 150 fps at high settings. Should I skip the 4090 and wait for a 5090 to combine with 14th gen Intel? I think so...


----------



## GRABibus

If you wait for a 5090 (?), it will arrive at a time when games will have new features…


7empe said:


> I'm with 12th gen intel and Strix 3090 OC and 520W bios too with 3440x1440 200 Hz. It is quite close to feed the 200 fps in most of the games on high/ultra. The BFV is fully 200 fps at ultra, but there are some titles like Halo Infinite that sits around 150 fps at high settings. Should I skip 4090 and wait for 5090 to combine it with 14th gen intel? I think so...


« Halo Infinite 5 » will also give you 150fps with the 6090 in 2 years 😊
Take the pleasure now, the future doesn't exist!
Go for a 4090 👍


----------



## J7SC

I finally got around to trying nvidia-smi.exe and established some base values for different apps: 2175, 2190, and 2205 MHz. My w-cooled Strix (on the 520W KPE BIOS) seems to love it, not least because it is a crazy spiker: it runs just fine at reasonable OC clocks and then, all of a sudden, decides to spike to 2265 MHz or even higher for several seconds, which can lead to crashes. With nvidia-smi, that behavior is easily controlled / capped. I am now experimenting with combining nvidia-smi with MSI AB curves / fixed voltage points. Advice is always welcome!
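That per-app capping workflow can be scripted: lock the core clock, launch the program, and restore driver defaults when it exits. A minimal Python sketch, assuming nvidia-smi is on the PATH and the driver supports `-lgc`/`-rgc`; the 210 MHz floor and the example app path are illustrative placeholders, while the 2205 MHz cap echoes the base values discussed above:

```python
import subprocess

def lock_cmd(max_mhz: int, min_mhz: int = 210) -> list[str]:
    """Build the nvidia-smi call that pins core clocks to a min,max range."""
    return ["nvidia-smi", "-lgc", f"{min_mhz},{max_mhz}"]

def run_capped(app_cmd: list[str], max_mhz: int) -> None:
    """Lock clocks, run the app, then always restore driver defaults."""
    subprocess.run(lock_cmd(max_mhz), check=True)
    try:
        subprocess.run(app_cmd)
    finally:
        subprocess.run(["nvidia-smi", "-rgc"], check=True)  # reset gpu clocks

# e.g. run_capped(["C:\\apps\\superposition.exe"], 2205)  # hypothetical path
```

The point of the `finally` block is that the clock lock is released even if the app crashes, so a stray spike cap never outlives the session.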


----------



## 7empe

J7SC said:


> I finally got around to trying nvidia-smi.exe and established some base values for different apps at 2175,2190, and 2205. My w-cooled Strix (on 520W KPE) seems to love it, not least as it is a crazy spiker: It runs just fine at reasonable oc clocks and then all of a sudden, it decides to spike to 2265 -or even higher- for several seconds which can lead to crashes - with nvidia-smi, that behavior is easily controlled / capped . I am now experimenting with combining nvidia-smi and MSI AB curves / fixed voltage points. Advice is always welcome !


What's the voltage point that corresponds to 2265 MHz? It should be impossible to get there without having that frequency accessible below 1100 mV. The curve does change as temperature fluctuates: it goes down if the temp rises, and vice versa. Maybe you chilled the water or something?


----------



## gfunkernaught

yzonker said:


> Plug in your twelve 8 pins for SLI and off you go. 😁
> 
> We need something to stir things up. This whole sub forum is pretty dead now.


Benchmarking can get pretty boring tbh. Now I just game here and there. Capping the PL at 600w, I was thinking about power consumption and my electric bill. There are times when I would play Cyberpunk for over four hours, figure an average of 530w. So say I played for six hours, that is a total of 3kw! That is just the gpu alone, not counting the cpu which consumed an average of 90-120w, then everything else. Now multiply that to about 4-5 times a month, 15kw/month to play games...I'm reaching the age of reason and I gotta tell ya, it takes the fun out of PC gaming when you start thinking about the electric bill and rightly so!

Anyways, game on!


----------



## GRABibus

gfunkernaught said:


> Benchmarking can get pretty boring tbh. Now I just game here and there. Capping the PL at 600w, I was thinking about power consumption and my electric bill. There are times when I would play Cyberpunk for over four hours, figure an average of 530w. So say I played for six hours, that is a total of 3kw! That is just the gpu alone, not counting the cpu which consumed an average of 90-120w, then everything else. Now multiply that to about 4-5 times a month, 15kw/month to play games...I'm reaching the age of reason and I gotta tell ya, it takes the fun out of PC gaming when you start thinking about the electric bill and rightly so!
> 
> Anyways, game on!


Drawing 500W for 6 hours is still 500W, not 3kW; the energy used is 3 kWh.
You're mixing up power and energy consumption 😊


----------



## kryptonfly

gfunkernaught said:


> Benchmarking can get pretty boring tbh. Now I just game here and there. Capping the PL at 600w, I was thinking about power consumption and my electric bill. There are times when I would play Cyberpunk for over four hours, figure an average of 530w. So say I played for six hours, that is a total of 3kw! That is just the gpu alone, not counting the cpu which consumed an average of 90-120w, then everything else. Now multiply that to about 4-5 times a month, 15kw/month to play games...I'm reaching the age of reason and I gotta tell ya, it takes the fun out of PC gaming when you start thinking about the electric bill and rightly so!
> 
> Anyways, game on!


You can use solar panels. I'm using 3x 385W panels, which can handle ~1300W in the real world in good weather (no clouds; 1080W full system consumption in Cyberpunk 2077) and ~200W in really sad, dark, cloudy weather (fine for internet, music... no gaming; ~150W for the PC + screen + speakers with amplifier).

- 500W for 6 hours is 3 kWh
Example: 2000W for 15 min = 0.5 kWh

EDIT: Per my experience above, if I play Cyberpunk for 6 hours at 1080W of consumption, that is 6.48 kWh for the entire PC (3090 around 500W + 12900KS at 5.4/4.2 around 115W). But I sold the 3090 to partly pay back the solar panels; I'll add three more 385W panels, because when it's fully dark and cloudy it's not enough, and then my water heater (2600W) could run fully on solar.
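The watt-hour arithmetic in this exchange boils down to one conversion, sketched here in a few lines of Python (the helper name is just an illustration):

```python
def energy_kwh(watts: float, hours: float) -> float:
    """Energy in kilowatt-hours: power (W) times time (h), divided by 1000."""
    return watts * hours / 1000.0

print(energy_kwh(500, 6))      # 3.0  (500W for 6 hours)
print(energy_kwh(2000, 0.25))  # 0.5  (2000W for 15 minutes)
print(energy_kwh(1080, 6))     # 6.48 (the whole rig for a 6-hour session)
```

At a hypothetical 0.20-per-kWh tariff, that 6.48 kWh session costs about 1.30, which is the kind of number the electric-bill discussion above is circling.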


----------



## yzonker

kryptonfly said:


> You can use solar panels, I'm using 3x385W which can handle ~1300W in the real world with good weather (no cloud, 1080W full consumption in game Cyberpunk 2077) and ~200W in really sad cloudy darkest weather (suitable for internet, music... no game, ~150W pc + screen + speakers with amplifier.)
> 
> - 500W during 6 hours is 3 kW/h
> Example : 2000W for 15 min = 0.5 kW/h


No, 

2000W * 15 min * (1 hr / 60 min) = 0.5 kWh


----------



## GRABibus

kWh and not kW/h


----------



## kryptonfly

Old bad habits, I edited. Thanks 👍


----------



## J7SC

7empe said:


> What's the voltage point that refers to 2265 MHz? It is impossible to get there without having this frequency accessible below 1100 mV. Curve does change while temperature fluctuates - it is going down if temp raises and vice versa. Maybe you chilled the water or sth?


...well, obviously it is possible, per the quick screenie from just now. It spikes that high, even if only briefly, at normal temps and voltages... I've worked with more than one 3090 Strix, and this one is the only one that does it.

My point was simply that with nvidia-smi.exe I can OC to exploit the chip's natural OC range w/o getting hampered by the spikes to 2265 and beyond...


----------



## yzonker

I did order up another CL480 to add to the one I already have. Now I'll have 2x CL480 external and a 280/360 internal, along with the chiller integrated into the loop. Figured I should add more cooling for my 600W 4090, lol (or whatever the PL ends up being; not convinced it will be quite that high, but some cards might max out at that).


----------



## J7SC

yzonker said:


> I did order up another CL480 to add to the one I already have. Now I'll have 2 CL480 external and 280/360 internal along with the chiller integrated in to the loop. Figured I should add more cooling for my 600w 4090. lol (or whatever the PL ends up being, not convinced it will be quite that high, but some cards might max out at that).


How do you like your CL480 ? I'm running 2x TT CL480s (triple core) and one XSPC RX360 (dual core) along w/3x D5s for a year now, so even 600W GPUs should hopefully be taken care of.


----------



## yzonker

J7SC said:


> How do you like your CL480 ? I'm running 2x TT CL480s (triple core) and one XSPC RX360 (dual core) along w/3x D5s for a year now, so even 600W GPUs should hopefully be taken care of.


Love the one I have now. Been running it for almost a year as an external. Right now I've got 3xD5 and one PMP-500. (when the extended chiller loop is QDC'd in, 2xD5 otherwise)


----------



## J7SC

yzonker said:


> Love the one I have now. Been running it for almost a year as an external. Right now I've got 3xD5 and one PMP-500. (when the extended chiller loop is QDC'd in, 2xD5 otherwise)


...yeah, I started with one CL480 just to test it out and was really impressed with it, so I got two more (the third one sits in another system). I'll probably add more 'in the near future' for other build projects I'm thinking about, since next-gen top CPUs and GPUs don't look like 'low-watt exercises'.


----------



## gfunkernaught

Alright, since math isn't my strong point, I asked Google to help me out:








Five 6-hour Cyberpunk sessions per month at 700W, assuming my PSU is 100% efficient: 21 kWh/month to play a video game 😉
I do have solar panels; however, I still get all my power from the utility. The panels generate energy and send it back to the grid, and I get a credit.


----------



## yzonker

Got the 2nd CL480 installed. As expected, not a big gain since my air to water delta was only 8C before. Now it's 5-6C.


----------



## J7SC

yzonker said:


> Got the 2nd CL480 installed. As expected, not a big gain since my air to water delta was only 8C before. Now it's 5-6C.
> 
> View attachment 2563624


Nice ! The second rad will help in the summer re. heat-soaking etc. 

Bottom two CL480s in pic below (sorry for the pic quality, they're hidden underneath a table) serve the 5950X/3090 setup and then connect to a RX360 (not visible here) as well. The top CL480 combines with 2x RX 360s to cool the 3950X/6900XT work-setup. With push-pull 1800 rpm fans throughout, noise is not an issue - I just hear the whoosh of the air.


----------



## yzonker

J7SC said:


> Nice ! The second rad will help in the summer re. heat-soaking etc.
> 
> Bottom two CL480s in pic below (sorry for the pic quality, they're hidden underneath a table) serve the 5950X/3090 setup and then connect to a RX360 (not visible here) as well. The top CL480 combines with 2x RX 360s to cool the 3950X/6900XT work-setup. With push-pull 1800 rpm fans throughout, noise is not an issue - I just hear the whoosh of the air.
> View attachment 2563626


Good thing those P12s are cheap. That's a lot of fans. LOL.


----------



## J7SC

yzonker said:


> Good thing those P12s are cheap. That's a lot of fans. LOL.


Yup! I ended up getting 13 five-packs... for other builds as well, of course. I love my GentleTyphoon 3Ks and similar, but push-pull Arctic P12s are the beans re. price, performance and (lack of) noise.


----------



## yzonker

Yea, I'm up to 23 rad fans now: 16 P12s, 4 P14s, and 3 Noctua A12s.


----------



## KedarWolf

Install your 3DMark to a RAM disk. I gained 600 points in the Time Spy graphics score by doing this.

OSFMount is the best free one. Add the RAM disk as an HDD, formatted NTFS with GPT.



OSFMount - Mount Disk Images & Create RAM Drives


----------



## yzonker

KedarWolf said:


> Install your 3DMark to a RAM disk. I gained 600 points in the Time Spy graphics score by doing this.
> 
> The best free one. Add the RAM disk as an HDD NTFS GPT.
> 
> 
> 
> OSFMount - Mount Disk Images & Create RAM Drives


Whaaaaaat... No? Really? I don't really doubt you, just would have never guessed that would work. So it's reading from the drive during the run?


----------



## J7SC

KedarWolf said:


> Install your 3DMark to a RAM disk. I gained 600 points in the Time Spy graphics score by doing this.
> 
> The best free one. Add the RAM disk as an HDD NTFS GPT.
> 
> 
> 
> OSFMount - Mount Disk Images & Create RAM Drives


...yup, that does work... I have been using that trick on and off since the days of 3DMark 11. Caution, though, re. installing on temporary RAM disks w/o a fixed-drive backup, re. licenses etc.


----------



## yzonker

Great, now I'll have to go fire up the chiller and the dehumidifier. Lol.


----------



## tps3443

I'm still rocking my 3090 Kingpin Hydro Copper: a 1,080mm x 45mm radiator, (2) D5 pumps, and a 1/2HP Hailea HC-500 water chiller.

I saw the 6950XT rolling past the 3090 Ti in the Time Spy graphics score leaks. Surprisingly, my ole 3090 KP is still faster than the 6950XT, though.

Not sure how an overclocked 6950XT fares, though.


----------



## yzonker

KedarWolf said:


> Install your 3DMark to a RAM disk. I gained 600 points in the Time Spy graphics score by doing this.
> 
> The best free one. Add the RAM disk as an HDD NTFS GPT.
> 
> 
> 
> OSFMount - Mount Disk Images & Create RAM Drives


Didn't seem to help in my case.

From RAM disk,









I scored 20 181 in Time Spy


AMD Ryzen 7 5800X3D, NVIDIA GeForce RTX 3080 Ti x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com





From NVME drive (Samsung 980 Pro),









I scored 20 201 in Time Spy


AMD Ryzen 7 5800X3D, NVIDIA GeForce RTX 3080 Ti x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## J7SC

yzonker said:


> Didn't seem to help in my case.
> 
> From RAM disk,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 181 in Time Spy
> 
> 
> AMD Ryzen 7 5800X3D, NVIDIA GeForce RTX 3080 Ti x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> From NVME drive (Samsung 980 Pro),
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 201 in Time Spy
> 
> 
> AMD Ryzen 7 5800X3D, NVIDIA GeForce RTX 3080 Ti x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


...'back in the day' with slowish spinning hard drives it was worth more, but now with 7-giggles/s NVMes it's probably not worth starting up ye ol' chiller...


----------



## KedarWolf

With a RAM disk and my new Optimus Water Cooling Strix OC 3090 block on one 360 rad, broke 16000 by a decent amount.

I saw my CPU was running hot. My board has a weird bug where, if I move my RAM fans to centre them over the RAM, it somehow messes up the pump cable attached to the fan header and my CPU temps go out of whack. Reattaching the pump cable does NOT fix it.

The fix: after moving my RAM fans, save the BIOS settings, shut down, hit the BIOS reset button, boot into the BIOS, reload the saved settings. Fixed.

Makes me wonder what I would have gotten if my CPU temps weren't 80C. But I spent hours beating 16000 and I'm done.









I scored 16 055 in Port Royal


AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11




www.3dmark.com


----------



## Arizor

You got a god sample there, @KedarWolf. My Strix OC under its Optimus block will stay below 40C but will still **** itself if it goes above 2085 in Port Royal.


----------



## J7SC

Arizor said:


> You got a god sample there @KedarWolf . My Strix OC in Optimus will stay below 40C but still **** itself if it goes above 2085 in Port Royale


...it's also a question of vBIOS (i.e. the 1kW one), so your Strix could still get up there before you take it out back and....


----------



## deedeeDMT

How does the Suprim X stack up against the other top-end 3090s? I heard it was really quiet.


----------



## Nizzen

deedeeDMT said:


> How does the Suprim X stack up against the other top-end 3090s? I heard it was really quiet.


One of the best air coolers, with a heatpipe connected to the backplate for better VRAM cooling. Way better cooling than the Strix 3090. I've tested both.


----------



## yzonker

J7SC said:


> ...'back in he day' with slowish spinning hard drives, it was worth more, but now with 7 giggles/s NVMEs, it's probably not worth starting up ye ol' chiller...


Yea, I made several runs and even round-tripped back to the physical drive. Just run-to-run variance is all I saw. Kinda disappointed; I was hoping for a little bump in score.


----------



## yzonker

KedarWolf said:


> With a RAM disk and my new Optimus Water Cooling Strix OC 3090 block on one 360 rad, broke 16000 by a decent amount.
> 
> I saw my CPU was running hot. My board has a weird bug where if I move my RAM fans to centre them over the RAM, somehow it messes up the pump cable attached to the fan header and my CPU temps go out of whack. Reattaching the pump cable does NOT fix it.
> 
> The fix is after moving my RAM fans, save BIOS settings, shutdown, hit the reset BIOS button, boot into BIOS, reload the BIOS settings, fixed.
> 
> Makes me wonder what I would have gotten if my CPU temps weren't 80C. But I spent hours beating 16000 and I'm done.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 055 in Port Royal
> 
> 
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11}
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> View attachment 2563786
> 
> 
> View attachment 2563787


Did you try disabling SMT and/or one of the CCXs? I usually see a small bump (20-30 pts) by disabling SMT.

And yes I'm familiar with the hours of fiddling and just hitting a point of calling it good enough. 16k at those temps is good. My Zotac 3090 can't get close to that. I can only break 16k with the chiller.


----------



## deedeeDMT

Nizzen said:


> One of the best aircoolers with heatpipe connected to the backplate for better vram cooling. Way better cooling than strix 3090. Tested both


Nice. I have a 10G 3080 Aorus Master Rev 2.0 (it has good VRAM padding), and I'm planning to make the jump to 4K, but with the 4080 being delayed, I'm not sure how much VRAM it will have until more info is released.


----------



## ssgwright

Hey guys, I just got my hands on a Zotac Trinity OC 3090. On the stock BIOS I can hit 14.1k in Port Royal; with the Kingpin 1kW BIOS I can break 15k, but I don't like the amount of watts needed for a measly 900-point increase. Do you have any recommendations for the best BIOS for a 2x 8-pin card? I'm watercooled, so even with the Kingpin BIOS temps weren't an issue; I'm just nervous about the amount of power it's pulling. I'm looking for a good 500W BIOS. I tried the Kingpin 500W, but it didn't perform as well as my stock BIOS.


----------



## KedarWolf

ssgwright said:


> hey guys, so I just got my hands on a zotac trinity oc 3090. Stock bios I can hit 14.1k on port, with the kingpin 1k bios I can break 15k on port but I don't like the amount of watts needed for a measly 900 point increase. Do you guys have any recommendations for the best bios for a 2 x power pin card? I'm watercooled so even with the kingpin bios temps weren't an issue I'm just nervous about the amount of power it's pulling. I'm looking for a good 500w bios. I tried the kingpin 500w but it didn't perform as good as my stock bios.


The best BIOS for just gaming is the Suprim X: you get near-Kingpin framerates with decent power draw and temps.


----------



## ssgwright

KedarWolf said:


> The best BIOS for just gaming is the Suprim X. Get near to Kingpin framerates and decent power draw and temps.


Is the Suprim X a 2x 8-pin or a 3x 8-pin card?


----------



## KedarWolf

ssgwright said:


> is the Suprim X a 2 pin power or 3?


3x 8-pin


----------



## KedarWolf

ssgwright said:


> is the Suprim X a 2 pin power or 3?


Oh, the Zotac is 2x 8-pin. I dunno then.


----------



## ssgwright

KedarWolf said:


> oh, zotac is 2 pin, I dunno then.


no worries, thanks for trying to help


----------



## wtf_apples

Hello.
Apologies for asking the same questions about nvidia-smi; my memory isn't good...

C:\Windows\System32\DriverStore\FileRepository\nv_dispig.inf_amd64_647b4244e991951b\nvidia-smi -lgc 2100

The problem is that when I try 2100 or 2200, HWiNFO reports 2010MHz after running the .bat. Always the same result... 1920MHz is stock.
I'm using instructions I found here about a year ago. Not sure what I'm doing wrong.

Win 11 Pro
520W KP HC BIOS on an FTW3 3090

Thanks


----------



## KedarWolf

wtf_apples said:


> Hello.
> Apologies for the same questions about nvidia-smi, memory isnt good...
> 
> C:\Windows\System32\DriverStore\FileRepository\nv_dispig.inf_amd64_647b4244e991951b\nvidia-smi -lgc 2100
> 
> Problem is that when I try 2100 or 2200, hwinfo reports 2010mhz after running the bat. Always the same result... 1920mhz is stock.
> Im using instructions I found here about a year ago now. Not sure what im doing wrong.
> 
> Win 11pro
> 520 kp hc bios on ftw3 3090
> 
> Thanks


Try:

C:\Windows\System32\nvidia-smi -lgc 2100

Edit: And this.

You need at least +30MHz of headroom to make it work properly when you set a point in Afterburner. Are you sure you see the lock in real time in Afterburner? Did you run it as Admin? No need for the .exe extension, just the path followed by nvidia-smi -lgc 2205.
Then set the point in Afterburner at 2235MHz.


----------



## PLATOON TEKK

Hope everybody’s been golden. 
Yo @KedarWolf, thanks a million for the BenchOS; about to run that for the next few days. Will also finally SPI-flash a 980 and see what happens, lol.


----------



## KedarWolf

PLATOON TEKK said:


> Hope everybody’s been golden.
> Yo @KedarWolf thanks a million for the benchOS about to run that for the next few days. Will also finally spi flash a 980 and see what happens lol.




Don't forget to read the README!!!Installnpp.8.3.3ToRead.txt and run the BenchOSServicesDisable.cmd file to disable more services.


----------



## PLATOON TEKK

KedarWolf said:


> Don't forget to read the README!!!Installnpp.8.3.3ToRead.txt and run the BenchOSServicesDisable.cmd file to disable more services.


No doubt, man, thanks again. I had a similar OS when I went DPC-latency crazy a few years back, so it's dope to have an updated version of it. Good look!

I’ll let you know how it works out for sure


----------



## J7SC

PLATOON TEKK said:


> Hope everybody’s been golden.
> Yo @KedarWolf thanks a million for the benchOS about to run that for the next few days. Will also finally spi *flash a 980 and see what happens* lol.


...killing time until the 4090 / Ti releases  ?
BTW, I still have a few 980 Classies running in a retro system - a ton of fun w/ the EVBot


----------



## KedarWolf

A picture of an actual 4090 in a system has leaked!!


----------



## J7SC

^^ I love that chap's April 1st YouTube vids (including not only the 4090 clip but also the 32-way SLI one), but sadly I don't think he did one this year.


----------



## wtf_apples

KedarWolf said:


> Try:
> 
> C:\Windows\System32\nvidia-smi -lgc 2100
> 
> Edit: And this.
> 
> You need at least +30mhz to make it work properly when you set a point in Afterburner. Are you sure you see the lock in real time in Afterburner ? Right click as Admin ? No need for .exe just : the path /nvidia-smi -lgc 2205
> You need to set the point in Afterburner at 2235 mhz.


Thanks, I got it working!


----------



## ssgwright

Is the Kingpin 1000W BIOS safe to use on a 2x 8-pin card? I get the best results from it (15k+ in Port Royal), but the power it's pulling is crazy and I don't want to fry anything. I'm running an EK block, so temps weren't an issue; I'm just wondering if anyone has heard of cards dying because of the power they pull with this BIOS?


----------



## KedarWolf

If I have the TV on to watch sports, my portable A/C on, and I game on my PC, I trip a breaker.

The fix is to undervolt to 0.875V with Afterburner; my 3090 then pulls about 320W tops on the Suprim X BIOS.

I'm only playing Diablo 3 right now, so undervolting isn't a problem.


----------



## yzonker

ssgwright said:


> Is the kingpin 1000w bios safe to use on a 2xpin card? I get the best results from it (+15k in port) but the power it's pulling is crazy and I don't want to fry anything. I'm running an EK block so temps weren't an issue but I'm just wondering if anyone heard of any cards dying because of the power it's pulling with this bios?


I haven't seen anyone report killing a 2x 8-pin card running that BIOS. I've run it for over a year; I usually limit it to 450-500W max for gaming, but I have run dozens of benchmarks with it at 100% PL. (2x 8-pin Zotac)

I think the bigger risk is just having something fail in the loop with no thermal safeties. Although, I just set up HWiNFO to shut down my machine if the core/mem temp goes above normal (not very high, just a little above where it can normally reach).
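For anyone without HWiNFO, the same overtemp safety can be approximated by polling nvidia-smi. A minimal sketch, assuming nvidia-smi is on the PATH; the 95C limit and the Windows `shutdown` call are placeholder choices, not recommended values:

```python
import subprocess
import time

LIMIT_C = 95  # hypothetical trip point: set it a little above your normal max

def parse_temp(raw: bytes) -> int:
    """Parse the first line of nvidia-smi's CSV temperature output."""
    return int(raw.decode().strip().splitlines()[0])

def over_limit(temp_c: int, limit_c: int = LIMIT_C) -> bool:
    """True once the core temperature exceeds the trip point."""
    return temp_c > limit_c

def read_gpu_temp() -> int:
    """Core temperature in Celsius via nvidia-smi's query interface."""
    return parse_temp(subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"]))

def watchdog(poll_s: float = 2.0) -> None:
    """Poll the GPU and force a shutdown on overtemp, like the HWiNFO alarm."""
    while True:
        if over_limit(read_gpu_temp()):
            subprocess.run(["shutdown", "/s", "/t", "0"])  # Windows shutdown
            return
        time.sleep(poll_s)
```

Like the HWiNFO alarm, this only helps against slow failures (a dead pump, a kinked tube); it will not react fast enough for an instantaneous fault.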


----------



## ssgwright

yzonker said:


> I haven't seen anyone report killing a 2x8 pin card that was running that bios. I've run it for over a year. I usually limit it to 450-500w for gaming max. But I have run dozens of benchmarks with it at 100% PL. (2x8 pin Zotac)
> 
> I think the bigger risk is just having something fail in the loop and no thermal safeties. Although I just set up HWINFO to shut down my machine if the core/mem temp went above normal (not very high, just a little above where it can normally reach).


just what I wanted to hear, that bios really increases performance, thanks!


----------



## Spiriva

yzonker said:


> I haven't seen anyone report killing a 2x8 pin card that was running that bios. I've run it for over a year. I usually limit it to 450-500w for gaming max. But I have run dozens of benchmarks with it at 100% PL. (2x8 pin Zotac)
> 
> I think the bigger risk is just having something fail in the loop and no thermal safeties. Although I just set up HWINFO to shut down my machine if the core/mem temp went above normal (not very high, just a little above where it can normally reach).


Same here; I've been using it for over a year on my 2x 8-pin too, also set to ~450-500W (EK waterblock).
No problems so far, and the 4000 series is coming soon anyhow.


----------



## PLATOON TEKK

J7SC said:


> ...killing time until the 4090 / Ti releases  ?
> BTW, I still have a few 980 Classies running in a retro system - ton of fun w/ EVBot


Haha, absolutely, man. Hopefully time will be killed and not this card, lol. But it will be great knowledge to have.

I will sell my firstborn for that EVBot, btw, lol. So let me know if it ever gets too bulky for you to store. 980s were class.




KedarWolf said:


> A picture of an actual 4090 in a system has leaked!!
> 
> View attachment 2564348


This made me snort my coffee, haha. The OS is great! It feels even more responsive and stripped-down than my older version. Truly appreciate the time you put into it and for sharing it, g ****.


----------



## gfunkernaught

Has anyone gained/lost performance with the latest 516.40 drivers? In benchmarks, gaming, or both? I've been getting worse and worse performance in benchmarks with newer drivers, but gaming seems to be fine. Just wondering.

Also @KedarWolf have you updated your BenchOS since April?


----------



## KedarWolf

gfunkernaught said:


> Has anyone gained/lost performance with the latest 516.40 drivers? Bench, gaming, both? I've been getting worse and worse performance in benchmarks but gaming seems to be fine with newer drivers. Just wondering.
> 
> Also @KedarWolf have you updated your BenchOS since April?


The last link I posted, the one with the DisableServices.bat, is the latest. Yeah, I uploaded it April 15th.


----------



## gfunkernaught

KedarWolf said:


> The last link I posted, the one with the DisableServices.bat, is the latest. Yeah, I uploaded it April 15th.


Cool thanks. I'm gonna try injecting my drivers with dism then repack the iso. Is this image a WinPE or a stock and stripped Windows 10 image?
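For anyone following along, the DISM driver-injection workflow mentioned above generally looks like the sketch below. The paths (`C:\iso`, `C:\mount`, `D:\drivers`) are hypothetical placeholders, and the actual commands need an elevated Windows prompt:

```python
import subprocess

WIM     = r"C:\iso\sources\install.wim"  # hypothetical path inside the unpacked ISO
MOUNT   = r"C:\mount"                    # hypothetical empty working folder
DRIVERS = r"D:\drivers"                  # hypothetical folder of .inf driver packages

def dism_commands(wim=WIM, mount=MOUNT, drivers=DRIVERS):
    """Build the three DISM calls: mount the image, inject drivers, commit."""
    return [
        ["dism", "/Mount-Wim", f"/WimFile:{wim}", "/Index:1", f"/MountDir:{mount}"],
        ["dism", f"/Image:{mount}", "/Add-Driver", f"/Driver:{drivers}", "/Recurse"],
        ["dism", "/Unmount-Wim", f"/MountDir:{mount}", "/Commit"],
    ]

# for cmd in dism_commands():
#     subprocess.run(cmd, check=True)  # run from an elevated prompt; left commented here
```

After committing, repack the ISO with your usual tool and the injected drivers will be present on install.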


----------



## KedarWolf

gfunkernaught said:


> Cool thanks. I'm gonna try injecting my drivers with dism then repack the iso. Is this image a WinPE or a stock and stripped Windows 10 image?






Stock stripped Windows 10 image, with April 15th updates integrated, but you can use W10UI from the MyDigitalLife forums to update to the latest updates. It'll run on the ISO.

W10UI: Windows 10 Hotfix Repository (forums.mydigitallife.net)


Get the updates from the Win 10 Hotfix thread.

[DISCUSSION] Windows 10 Final Build 19041/2/3/4 (PC) [20H1/2 / 21H1/2 vb_release] Build 19044


----------



## ArcticZero

Sharing my experience as well. While I haven't been using a custom BIOS for my PNY 3090 (2x8-pin), I do have it shunt modded with identical 5mOhm resistors, so I get around 700+W of headroom disregarding internal limits.

Of course I only ever set it to 500W max since I feel like it's diminishing returns past that, and it's been incredibly stable for over a year now, be it for games, benchmarks, or mining. Loop keeps it at 64C absolute max hotspot, 62C memory (mining) with 30-35C ambients.
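For anyone curious where numbers like that come from: the card computes power from the voltage drop across its shunt resistors, so soldering an identical-value resistor on top of each stock shunt halves the effective resistance, and the card then reports roughly half the real draw. A rough sketch of that arithmetic, assuming 5 mOhm stock shunts with identical 5 mOhm resistors stacked in parallel:

```python
STOCK_SHUNT = 5e-3   # ohms, assumed stock shunt value
STACKED     = 5e-3   # identical resistor soldered on top, electrically in parallel

def effective_shunt(r_stock, r_stacked):
    """Parallel combination of the stock shunt and the stacked resistor."""
    return (r_stock * r_stacked) / (r_stock + r_stacked)

def actual_power(reported_w, r_stock=STOCK_SHUNT, r_stacked=STACKED):
    """The card reads voltage across the (now smaller) shunt, so it
    under-reports by the resistance ratio; scale back up to real draw."""
    return reported_w * r_stock / effective_shunt(r_stock, r_stacked)
```

So a card that reports 350W at its limiter is actually pulling around 700W, which is roughly where a "700+W of headroom" figure comes from.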


----------



## yzonker

gfunkernaught said:


> Has anyone gained/lost performance with the latest 516.40 drivers? Bench, gaming, both? I've been getting worse and worse performance in benchmarks but gaming seems to be fine with newer drivers. Just wondering.
> 
> Also @KedarWolf have you updated your BenchOS since April?


Yes, the older drivers are better in PR for sure. The best drivers are still the releases from just after reBar support came out. Pretty sure the run KedarWolf linked previously was also on one of those old drivers.









I scored 16 107 in Port Royal, AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)


----------



## KedarWolf

Kingpins, no queue. Retail price.



https://www.evga.com/products/product.aspx?pn=24G-P5-3998-KR


----------



## KedarWolf

EVGA GeForce RTX 3090 FTW3 ULTRA GAMING, 24G-P5-3987-KR, 24GB GDDR6X, iCX3 Technology, ARGB LED, Metal Backplate, $1699.99 (www.evga.com)


----------



## J7SC

KedarWolf said:


> Kingpins, no queue. Retail price.
> 
> 
> 
> https://www.evga.com/products/product.aspx?pn=24G-P5-3998-KR





KedarWolf said:


> EVGA GeForce RTX 3090 FTW3 ULTRA GAMING, 24G-P5-3987-KR, 24GB GDDR6X, iCX3 Technology, ARGB LED, Metal Backplate, $1699.99 (www.evga.com)


...getting into famine-to-feast territory ? I haven't bought an EVGA product since 2015, but got three EVGA 'GPU specials' emails in four days now...


----------



## gfunkernaught

J7SC said:


> ...getting into famine-to-feast territory ? I haven't bought an EVGA product since 2015, but got three EVGA 'GPU specials' emails in four days now...











Signature GPU Block - Kingpin 3090 (optimuspc.com)




Don't forget to bring a towel.


----------



## Arizor

How are Optimus doing these days? They don't seem very present on their social media and a lot of their stuff is sold out for ages. Are they winding down?


----------



## gfunkernaught

Arizor said:


> How are Optimus doing these days? They don't seem very present on their social media and a lot of their stuff is sold out for ages. Are they winding down?


I haven't been paying much attention since they don't make a block for the Trio, but I remember when I was researching block performance they came out on top of everyone else.


----------



## Arizor

gfunkernaught said:


> I haven't been paying much attention since they don't make a block for the Trio, but I remember when I was researching block performance they came out on top of everyone else.


Yeah I have their Strix block and Ryzen one too, both excellent pieces. Hope they're not going under.


----------



## KedarWolf

gfunkernaught said:


> I haven't been paying much attention since they don't make a block for the Trio, but I remember when I was researching block performance they came out on top of everyone else.


I have their Strix Block and their Ryzen Foundation block too.

I had GPU-Z open and ran Diablo 3, 34C core, 44C hotspot and 56C memory.

My GPU is only pulling 320W though, as I undervolt to stop tripping a power breaker when my A/C and TV are running.

I turned my A/C off, Suprim X BIOS pulling 450W.

38C core, 50C hotspot, 60C memory, on one 360 rad.


----------



## Arizor

KedarWolf said:


> I have their Strix Block and their Ryzen Foundation block too.
> 
> I had GPU-Z open and ran Diablo 3, 34C core, 44C hotspot and 56C memory.
> 
> My GPU is only pulling 320W though, as I undervolt to stop tripping a power breaker when my A/C and TV are running.
> 
> I turned my A/C off, Suprim X BIOS pulling 450W.
> 
> 38C core, 50C hotspot, 60C memory, on one 360 rad.


That's awesome mate. 

I use the latest STRIX bios (no other seems to make much difference for me) and I flatten at 1.31V, 2010 GPU, 11k mem (+1250). That's my rock solid stable setup. I can go further for benching but honestly not much. Temps are very similar to yours but as I'm pulling 420-460W, I'd like to do less.

What setup is that on AB for your undervolt?


----------



## J7SC

Arizor said:


> That's awesome mate.
> 
> I use the latest STRIX bios (no other seems to do much difference for me) and I flatten at 1.31V, 2010 GPU, 11k mem (+1250). That's my rock solid stable set up. I can go further for benching but honestly not much. Temps are very similar to yours but as I'm pulling 420-60W, I'd like to do less.
> 
> What setup is that on AB for your undervolt?


...1.31V w/ stock Strix vBios ? 

...and speaking of 'juice', this has some interesting tables in it re. 3090 (Igor's Lab had done s.th. similar recently)


----------



## Arizor

Haha sorry I of course meant 1.031...!


----------



## gfunkernaught

Came across this


----------



## gfunkernaught

KedarWolf said:


> I have their Strix Block and their Ryzen Foundation block too.
> 
> I had GPU-Z open and ran Diablo 3, 34C core, 44C hotspot and 56C memory.
> 
> My GPU is only pulling 320W though, as I undervolt to stop tripping a power breaker when my A/C and TV are running.
> 
> I turned my A/C off, Suprim X BIOS pulling 450W.
> 
> 38C core, 50C hotspot, 60C memory, on one 360 rad.


I remember when I lived in an apt of a really old building playing Crysis 3 with SLI GTX 580s, and the lights would dim and flicker whenever there was a gpu heavy scene😆


----------



## RetroWave78

Can you flash 3090 Ti FE vbios to 3090 FE? I apologize if this has been asked already, I searched the web and came back with nothing.


----------



## yzonker

RetroWave78 said:


> Can you flash 3090 Ti FE vbios to 3090 FE? I apologize if this has been asked already, I searched the web and came back with nothing.


Nope. It probably wouldn't work anyway; even the version of NVFlash on TPU won't let you flash a bios with a different device ID.


----------



## yzonker

BTW, I was inspecting stuff on my loop and discovered my original CL480 has a leak in the bottom tank. Maybe I'll have to retract my glowing review. Doh...


----------



## J7SC

yzonker said:


> BTW, I was inspecting stuff on my loop and discovered my original CL480 has a leak in the bottom tank. Maybe I'll have to retract my glowing review. Doh...


pics re. the exact spot ? So far, my 3x CL480s don't know a thing about leaks.


----------



## ssgwright

anyone tried a shunt modded 2x8-pin card with the kingpin 1000w bios?


----------



## KedarWolf

Win10BenchISO.zip (drive.google.com)
New Improved Bench ISO. READ the instructions, or the Bench O/S will not run as clean as it should.

Windows 10 19044 with all updates preinstalled, stripped, only O/S-essential appx's etc. installed; no printing, no audio; has internet and Internet Explorer just for getting drivers.

If for any reason you install another browser, use Autoruns to disable all the browser services. It'll still work just fine.


----------



## ssgwright

@*Falkentyne*

with the 1000w bios allowing 2x8-pin cards to pull more juice, is there an advantage to shunt modding them? I'm thinking of shunt modding and going back to my stock bios, but is there a point, since there's a bios out there to "unlock" it already?

Basically what I'm asking is, is shunt modding and using my stock bios any different than just running the kingpin 1000w bios?


----------



## yzonker

J7SC said:


> pics re. the exact spot ? So far, my 3x CL480s don't know a thing about leaks.


I didn't take a pic, but the rad was vertically mounted and water was standing in and around the tubes where they seal to the tank and dripping down a fitting. At first I thought it was just the fitting until I pulled the fan off on that end.


----------



## PLATOON TEKK

@yzonker sorry bout that leak man hope you get it sorted asap.

@KedarWolf when using the first rebar drivers my NVLINK/SLI doesn’t seem to show up at all oddly, will try new OS tonight, thanks again.


----------



## J7SC

yzonker said:


> I didn't take a pic, but the rad was vertically mounted and water was standing in and around the tubes where they seal to the tank and dripping down a fitting. At first I thought it was just the fitting until I pulled the fan off on that end.


...ok. The reason why I asked is that I had that happen to my other 'fav rad' type, the XSPC RX360, on recent build. It looked like the 'neck' where the fitting screws onto the rad was leaking on the rad-side. It turned out that the previously-used rad had just a sliver of broken-off threads from a previous screw-in fitting that was in the way of a new fitting screwing in tightly (very hard to see unless using a flash light and looking for it)...


----------



## gfunkernaught

J7SC said:


> ...1.31V w/ stock Strix vBios ?
> 
> ...and speaking of 'juice', this has some interesting tables in it re. 3090 (Igor's Lab had done s.th. similar recently)


This further confirms my theory/rule of thumb for choosing a PSU: assume every component in the system is running at 100%, take that number in watts, then double it, and that's the PSU you should get.
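That rule of thumb is easy to sanity-check in a few lines; the component wattages below are made-up worst-case figures, not measurements:

```python
def psu_target(loads_w, headroom=2.0):
    """The rule of thumb above: total 100%-load draw times two."""
    return sum(loads_w.values()) * headroom

build = {                       # hypothetical worst-case figures
    "gpu": 450,
    "cpu": 250,
    "board_ram_storage": 80,
    "fans_pumps": 40,
}
# psu_target(build) -> 1640.0, so shop for a ~1600W unit
```

Doubling is deliberately conservative: it keeps the PSU near the efficient middle of its load curve and absorbs transient spikes, which these 3090s are notorious for.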


----------



## des2k...

ssgwright said:


> anyone tried a shunt modded card with a 2xpin card and the kingpin 1000w bios?


2x8-pin cards use the reference NVIDIA VRM, so it's really ~660W with the 1000W bios. With no shunts you'll use about 90% of the VRM amperage for core and uncore, and that last 10% is worthless since you're limited to 1.2V on the core at ~2.2GHz.

You'll also go over 100W on the PCIe slot with shunts, since 2x8-pin cards balance load against the PCIe slot power.

The 660W limit is just that: the point where the PCIe slot reaches 100W with the 1000W vbios.


----------



## yzonker

J7SC said:


> ...ok. The reason why I asked is that I had that happen to my other 'fav rad' type, the XSPC RX360, on recent build. It looked like the 'neck' where the fitting screws onto the rad was leaking on the rad-side. It turned out that the previously-used rad had just a sliver of broken-off threads from a previous screw-in fitting that was in the way of a new fitting screwing in tightly (very hard to see unless using a flash light and looking for it)...


I thought it was something like that too until I pulled the fan off. Definitely not unfortunately. Not easily repaired either. I'll just order up another one unless I can RMA this one. Any idea what the warranty is? Nothing in the box that tells me and the website seemed unclear.


----------



## J7SC

yzonker said:


> I thought it was something like that too until I pulled the fan off. Definitely not unfortunately. Not easily repaired either. I'll just order up another one unless I can RMA this one. Any idea what the warranty is? Nothing in the box that tells me and the website seemed unclear.


...seems it is 1 year for water-cooling products


----------



## yzonker

J7SC said:


> ...seems it is 1 year for water-cooling products
> View attachment 2564783


Thanks. I passed that last month of course. I may just use the new one I bought and just run one of them. It didn't make much difference anyway.


----------



## KedarWolf

Deleted, for the Optimus Water Cooling thread.


----------



## J7SC

yzonker said:


> Thanks. I passed that last month of course. I may just use the new one I bought and just run one of them. It didn't make much difference anyway.


Once taken out of the loop and drained, you might try to find the exact spot where it leaks. As the CL 480 is copper and copper/brass, an automotive radiator shop might be able to fix it. Alternatively, the stuff below is great - I used it decades ago to fix a leak on a fuel line into an 850 cfm Rochester carb on my car back then...never leaked again, even with gasoline and fuel pressure...


----------



## Aftermath2006

Has anyone tried this Bitspower Enhance VRAM Water Block for GeForce RTX 3090 Backplate (Nickel/Plexi) on the Kingpin backplate, or know if it will fit?


----------



## yzonker

J7SC said:


> Once taken out of the loop and drained, you might try to find the exact spot where it leaks. As the CL 480 is copper and copper/brass, an automotive radiator shop might be able to fix it. Alternatively, the stuff below is great - I used it decades ago to fix a leak on a fuel line into an 850 cfm Rochester carb on my car back then...never leaked again, even with gasoline and fuel pressure...
> View attachment 2564788


Yea I might give that a try when I have time. I pulled the rad out for now. It really was only making a 2C difference at most. Wasn't sure it was even that though when I tested the system again with just the single CL480. Looked like 7-8C delta vs 6-7C with 2 CL480s. Really not worth it.


----------



## ssgwright

anyone have liquid metal react to a waterblock where it dissolves the metal in less than a minute? I just put liquid metal on my block and it dissolved the metal on the block in less than a minute, a huge hole where the liquid metal was. I caught it in time before any damage was done but wow....


----------



## yzonker

ssgwright said:


> anyone have liquid metal react to a waterblock where it dissolves the metal in less than a minute? I just put liquid metal on my block and it dissolved the metal on the block in less than a minute, a huge hole where the liquid metal was. I caught it in time before any damage was done but wow....


My understanding is aluminum is primarily what it will eat up. What block?


----------



## ssgwright

crap that's exactly it... thought it was nickel plated copper... nope aluminum, welp, I'm a dummy


----------



## Nizzen

ssgwright said:


> crap that's exactly it... thought it was nickel plated copper... nope aluminum, welp, I'm a dummy


What cpu block do you have?


----------



## ssgwright

EK velocity 2 nickel plated copper


----------



## ssgwright

still not sure if the 3090 is jacked or not... saw the water start dripping from between the board and card, panicked and powered down. Fully dried the board and threw my 3080 in and the board is fine. I have a proper nickel plated block coming for my 3090 so we'll see


----------



## Nizzen

ssgwright said:


> still not sure if the 3090 is jacked or not... saw the water start dripping from between the board and card, panicked and powered down. Fully dried the board and threw my 3080 in and the board is fine. I have a proper nickel plated block coming for my 3090 so we'll see


I spilled water on one of my 3090 Strix cards. It wouldn't start at first, but after drying it for a day, it worked flawlessly.

Using fluid other than distilled water isn't very good. Distilled water is easy to dry, but "special" fluids not so much; then you need to clean the whole PCB with products like isopropyl alcohol.


----------



## Nizzen

ssgwright said:


> EK velocity 2 nickel plated copper


I have this on one of my computers. Guess I won't be sanding it any time soon 

Nice info!


----------



## ssgwright

ya if you haven't seen liquid metal react with aluminum it basically dissolved into sand and went all over the board and card. I spent hours cleaning both the board, the card, and having to flush the loop over and over to ensure I got all the "sand" out.


----------



## KedarWolf

Check to make sure you have your inlet and outlet hoses in right.

I had them reversed and there was a lot of air in the block. 

I fixed it and now memory temps are 10C less while gaming.


----------



## mellhen

Has anyone tried to flash a 3090 Ti Bios on the 3090 non-Ti version?
I have read that flashing the 3090 Bios on the 3080 works to increase the hash rate for mining.

I do not intend to mine, however, I am curious if someone tried it. It would be interesting to see, whether the reported PCIe power draw would be correct when flashing a 3090Ti ASUS bios on a 3090 non-Ti ASUS card.

Currently, my water-cooled Strix 3090 with "KingPin 1000W rBAR" Bios capped at 600W and undervolted runs just fine; however, power is misreported (something most of you experienced).

I am aware of one ASUS XOC 1000W bios for the 3090. However, it does not support rBAR. Could this ASUS XOC bios for the TUF 3090 Ti be a solution? Despite the new 12+4-pin connector on the TUF 3090 Ti, it reports three PCIe (8-pin) connectors.

Thanks for your advice & have a nice day!


----------



## Arizor

mellhen said:


> Has anyone tried to flash a 3090 Ti Bios on the 3090 non-Ti version?
> I have read that flashing the 3090 Bios on the 3080 works to increase the hash rate for mining.
> 
> I do not intend to mine, however, I am curious if someone tried it. It would be interesting to see, whether the reported PCIe power draw would be correct when flashing a 3090Ti ASUS bios on a 3090 non-Ti ASUS card.
> 
> Currently, my water-cooled Strix 3090 with "KingPin 1000W rBAR" Bios capped at 600W and undervolted runs just fine; however, power is misreported (something most of you experienced).
> 
> I am aware of one ASUS XOC 1000W bios for the 3090. However, it does not support rBAR. Could this ASUS XOC Bios for the Tuff 3090 Ti be a solution? Despite the new 12+4-pin connector on the Tuf 3090Ti, it reports three PCIe (8-pin) connectors.
> 
> Thanks for your advice & have a nice day!


not possible sorry bud.


----------



## jeiselramos

mellhen said:


> Has anyone tried to flash a 3090 Ti Bios on the 3090 non-Ti version?
> I have read that flashing the 3090 Bios on the 3080 works to increase the hash rate for mining.
> 
> I do not intend to mine, however, I am curious if someone tried it. It would be interesting to see, whether the reported PCIe power draw would be correct when flashing a 3090Ti ASUS bios on a 3090 non-Ti ASUS card.
> 
> Currently, my water-cooled Strix 3090 with "KingPin 1000W rBAR" Bios capped at 600W and undervolted runs just fine; however, power is misreported (something most of you experienced).
> 
> I am aware of one ASUS XOC 1000W bios for the 3090. However, it does not support rBAR. Could this ASUS XOC Bios for the Tuff 3090 Ti be a solution? Despite the new 12+4-pin connector on the Tuf 3090Ti, it reports three PCIe (8-pin) connectors.
> 
> Thanks for your advice & have a nice day!


3090 strix 1000w + rebar: 3090strix1000wrebar.rom (drive.google.com)


----------



## ssgwright

im running the kingpin 1000w bios on my 2x8-pin 3090 and it's working great, but is there a 1000w bios with rebar enabled?

-edit nm i found it on techpowerup lol


----------



## mellhen

jeiselramos said:


> 3090 strix 1000w + rebar: 3090strix1000wrebar.rom (drive.google.com)


Thank you!


----------



## ssgwright

how is the strix 1000w compared to the kingpin 1000w bios? (specifically when used on a 2x8-pin card)


----------



## KedarWolf

EVGA NVIDIA GeForce RTX 3090 FTW3 Ultra Triple-Fan 24GB GDDR6X PCIe 4.0 Graphics Card, $1349.99 at Micro Center (www.microcenter.com)


----------



## yzonker

That would have been nice a year or more ago.


----------



## J7SC

yzonker said:


> That would have been nice a year or more ago.


...


----------



## yzonker

It's hard to believe they couldn't see this coming. Maybe they could shift some of the chip production to the automotive industry. New car lots are still pretty bare here in the US. Maybe different companies/fabs though.


----------



## Benni231990

Hello,

I have a little problem: I changed my DDC310 to a D5 and now temps are 10°C higher than before. How is that possible? I have no air in the circuit and the D5 is running at 100%.

My case is a Corsair 1000D with 2x 480 rads in the front and 1x 420 in the top.


----------



## Nizzen

Benni231990 said:


> Hello,
> 
> I have a little problem: I changed my DDC310 to a D5 and now temps are 10°C higher than before. How is that possible? I have no air in the circuit and the D5 is running at 100%.
> 
> My case is a Corsair 1000D with 2x 480 rads in the front and 1x 420 in the top.


Maybe you got some stuff in the cpu block fins?


----------



## Benni231990

No, it is all clear and I use only clear fluid.

I use Aqua Computer DP Ultra Clear, and the GPU block is from Alphacool.


----------



## Nizzen

Benni231990 said:


> no it is all clear and is use only clear fluid


Did you check inside the blocks?
10°C is a lot. That's like 80% less flow.


----------



## Benni231990

Yes, I checked for bubbles and such; that's why I don't understand it. I only changed the pump, nothing more.

My idle temps are absolutely good, only 1°C over water temp.


----------



## mellhen

jeiselramos said:


> 3090 strix 1000w + rebar: 3090strix1000wrebar.rom (drive.google.com)


I tested this BIOS: reBar is enabled, all three PCIe lanes are monitored, and it shows 1000W bios in GPU-z. However, it never pulls more than about 480W @Board Power Draw. So stock values. Am I missing something?


----------



## Nizzen

mellhen said:


> I tested this BIOS: reBar is enabled, all three PCIe lanes are monitored, and it shows 1000W bios in GPU-z. However, it never pulls more than about 480W @Board Power Draw. So stock values. Am I missing something?


Try Quake 2 RTX or 3DMark Port Royal. On most 3x8-pin 3090s with a cold enough card and "max" OC, you should see at least 550W.


----------



## yzonker

mellhen said:


> I tested this BIOS: reBar is enabled, all three PCIe lanes are monitored, and it shows 1000W bios in GPU-z. However, it never pulls more than about 480W @Board Power Draw. So stock values. Am I missing something?


What is 8pin #3 showing at 480w? My guess is 150w. IIRC that bios is partially crippled due to the 3rd 8pin limit being set to 150w. This is why pretty much everyone uses the KP 1kw bios.


----------



## J7SC

Benni231990 said:


> Hello,
> 
> I have a little problem: I changed my DDC310 to a D5 and now temps are 10°C higher than before. How is that possible? I have no air in the circuit and the D5 is running at 100%.
> 
> My case is a Corsair 1000D with 2x 480 rads in the front and 1x 420 in the top.


...D5s have a physically larger internal diameter than DDC and are a bit more susceptible to sudden pressure drop (ie. air pockets which may be hiding in the rads). I never use fewer than 2xD5s in a single loop, not only for fail-over but also because it alleviates that sudden pressure drop issue, according to DerBauer (check his special on that on YT/ Casekingtv).

...I use 2x 480 (3-core) and 1x 360 (2-core) rads with clear windows over the 5950X CPU and 3090 Strix GPU micro-fins, and even when it all seems bubble-free and properly bled, it can still actually take 2+ days to clear it all out (even w/ shaking, depending how your rads are mounted re. the in- and outlets / vertical plane). One way to tell is if you leave the system off for a couple of days, then re-start it - a lot of gurgling sounds are tell-tale of air trapped, ie. in the rads...


----------



## yzonker

@J7SC, Looks like I may have managed to seal my leaking CL480 with the sealer you mentioned before. Really odd. The leak was actually out in the middle in one of the tubes. Fix isn't very pretty as I bent the fins out some so I could get the sealer in there better, but the fans will hide it anyway since I run it with push/pull. I taped up the back side so the sealer would pool up better with the hope that it would run in to whatever cracks/holes that were present.

Anyway it survived a few hours running in the kitchen sink with a temp loop and now it's been running a while longer in my loop with no apparent leak. Just have it QDC'ed in right now until I see if it's really going to hold up.


----------



## J7SC

yzonker said:


> @J7SC, Looks like I may have managed to seal my leaking CL480 with the sealer you mentioned before. Really odd. The leak was actually out in the middle in one of the tubes. Fix isn't very pretty as I bent the fins out some so I could get the sealer in there better, but the fans will hide it anyway since I run it with push/pull. I taped up the back side so the sealer would pool up better with the hope that it would run in to whatever cracks/holes that were present.
> 
> Anyway it survived a few hours running in the kitchen sink with a temp loop and now it's been running a while longer in my loop with no apparent leak. Just have it QDC'ed in right now until I see if it's really going to hold up.
> 
> View attachment 2565916


----------



## yzonker

J7SC said:


>


But it didn't hold up to the chiller loop with the PMP-500. Lol. It worked long enough to retest and I decided it was worth ordering another one.


----------



## J7SC

yzonker said:


> But it didn't hold up to the chiller loop with the PMP-500. Lol. It worked long enough to retest and I decided it was worth ordering another one.


...did you follow the instruction re. roughing the surface etc ? That always made a difference when I used it. Also as mentioned, I fixed a high-pressure fuel line on a car with that and drove the car with that repair for many more years. Also as posted, another option would be to bring it to an automotive radiator shop.


----------



## yzonker

J7SC said:


> ...did you follow the instruction re. roughing the surface etc ? That always made a difference when I used it. Also as mentioned, I fixed a high-pressure fuel line on a car with that and drove the car with that repair for many more years. Also as posted, another option would be to bring it to an automotive radiator shop.


As best I could, but it's one of the middle tubes so access isn't very good at all. Funny thing is, I've got it running again with just my dual D5's and no indication of a leak. I already ordered another one from Amazon, so I'll just swap it in. Thanks for the ideas though.


----------



## dante`afk

I got notify for the 3090TI KP hybrid + PSU.

if someone wants it lmk, I'm not interested any longer.


----------



## GRABibus

dante`afk said:


> I got notify for the 3090TI KP hybrid + PSU.
> 
> if someone wants it lmk, I'm not interested any longer.


Free of charge ? 🤓


----------



## yzonker

Yea I got a notify yesterday too, but no interest since there's no (good) way to integrate it in to my loop.


----------



## J7SC

I would rather wait for the (inevitable) 4090 Ti, judging by the latest 'leak info'...Release timing might have been pushed back because of all the RTX 3k still in the sales channels.


----------



## GRABibus

Do you plan to sell your 3k?
If yes, when?

The best would be to do it now, as second-hand prices will keep decreasing.


----------



## yzonker

GRABibus said:


> Do you plan to sale your 3k ?
> If yes, when ?
> 
> The best should be to do it now, as the after sales prices will keep on decreasing


Too late, Ebay, etc... is already flooded with miner cards. 3090's are down to $1k or so.


----------



## J7SC

GRABibus said:


> Do you plan to sale your 3k ?
> If yes, when ?
> 
> The best should be to do it now, as the after sales prices will keep on decreasing


...nope, it's a keeper. When the time comes, I'll add a 4090/Ti in a new (AMD?/Intel?) system.


----------



## yzonker

I've actually been more tempted to buy a 6950xt just to see how the other half lives.


----------



## J7SC

yzonker said:


> I've actually been more tempted to buy a 6950xt just to see how the other half lives.


...very well, thank you  ...my other machine has a custom 3x8pin 6900XT w/6950XT vBios on one of the vBios slots...


----------



## yzonker

Well I'm back up and running with yet another brand new CL480. Hopefully no more leaks. Tired of swapping them out. Somebody needs to invent fans that don't use screws. Lol. 

Got nothing else. A little jealous of @J7SC 's red and green rigs.


----------



## J7SC

yzonker said:


> Well I'm back up and running with yet another brand new CL480. Hopefully no more leaks. Tired of swapping them out. *Somebody needs to invent fans that don't use screws*. Lol.
> 
> Got nothing else. A little jealous of @J7SC 's red and green rigs.


...yeah, I can't complain re. both setups... 

... was it a fan screw that did the deed on the prior CL480 ?

In other news, I finally unsubscribed from EVGA's email list...I was on there re. product registrations, but the last time I bought an EVGA product was back in 2015. Unlike Asus, Gigabyte and MSI, where I'm also registered and get the odd notification from, EVGA has gone absolutely mad _since mid-April of this year_...now I was getting them sometimes less than 24 hrs apart (like today). That's bad corporate behaviour in my book...


----------



## yzonker

J7SC said:


> ...yeah, I can't complain re. both setups...
> 
> ... was it a fan screw that did the deed on the prior CL480 ?
> 
> In other news, I finally unsubscribed from EVGA's email list...I was on there re. product registrations, but the last time I bought an EVGA product was back in 2015. Unlike Asus, Gigabyte and MSI, where I'm also registered and get the odd notification from, EVGA has gone absolutely mad _since mid-April of this year_...now I was getting them sometimes less than 24 hrs apart (like today). That's bad corporate behaviour in my book...
> View attachment 2566226


No, if you look at the attempted fix location, it's not under a fan screw location. About an inch away. Screws aren't long enough to hit anything anyway. Really weird. 

That's odd too. I don't get any emails from EVGA and am even an Elite member so they obviously have my address. Not sure if I've ever opted out of notifications though. Don't recall anything.


----------



## gfunkernaught

@KedarWolf
Couple of suggestions for your BenchOS. I think the stock Windows 10 background for unactivated Windows is just a file. You should be able to edit the file and just black out all the pixels. Also you might want to include the latest version of Rufus, since it supports Windows To Go to install the image directly to a drive instead of running through the Windows installation process.

I haven't been on here in a while and I don't come here often so if you already did all this then don't mind me.

EDIT: To anyone who wants to darken the BenchOS background: open the file in Windows\Web\4k\Windows\Wallpaper that matches your monitor's resolution OFFLINE, meaning not while in BenchOS but from your normal Windows or other OS. If you do it in Windows, take ownership of the file and give yourself full rights to it, open mspaint as Administrator (elevated), then edit the file by deleting the image, filling the background with black, and saving.
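For anyone who'd rather generate a black wallpaper than hand-edit the stock one, here's a minimal stdlib-only Python sketch (the output file name and the 4K resolution are just examples, not part of BenchOS) that writes a solid-black 24-bit BMP, which Windows accepts as a wallpaper:

```python
import struct

def write_black_bmp(path, width, height):
    """Write a solid-black, uncompressed 24-bit BMP using only the stdlib."""
    row = width * 3
    pad = (4 - row % 4) % 4          # each pixel row is padded to 4 bytes
    img_size = (row + pad) * height
    with open(path, "wb") as f:
        # 14-byte file header: magic, file size, two reserved fields, pixel offset
        f.write(struct.pack("<2sIHHI", b"BM", 14 + 40 + img_size, 0, 0, 54))
        # 40-byte BITMAPINFOHEADER: size, dims, 1 plane, 24 bpp, no compression
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                            0, img_size, 2835, 2835, 0, 0))
        f.write(b"\x00" * img_size)  # all-zero pixels = pure black

write_black_bmp("black_wallpaper.bmp", 3840, 2160)  # example 4K output
```

Point Windows at the generated file (or copy it over the matching wallpaper file after taking ownership, as described above) and the background goes fully black.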


----------



## gfunkernaught

I guess benching just isn't my thing, truly.








Result (www.3dmark.com)





That link should work for the comparison between BenchOS (left) and my older run from a while ago on my normal bloated unoptimized system (right). Very weird.


----------



## J7SC

...still having a grand old time with the 3090 Strix, even on the stock bios w/o full PL (as opposed to the KPE 520 secondary bios). My favs remain CP2077 - which on a 48 inch OLED with all the ray tracing maxed at 4K is simply _surreal_...mostly, I just walk / ride / drive and roam around, just for the eye candy . Then there is FS 2020 (below) which on this setup has become a daily joy.

I got the 3090 at the end of January '21 (before the price bulge, coincidentally same price as now) and finished the 5950X CH8 DarkH and custom loop setup a few months later....glad to report that temp deltas are exactly the same now as they were when I completed the loop (I used TG10 thermal putty on VRAM and VRMs, plus extra heatsink w/fan on the back of the 3090). Even at 25 C ambient and hours of playing the above, no issues


----------



## yzonker

You know this guy does some things that are interesting, but then he tears it down with some noobish thing. He's gone on and on in other vids about how his 3090 Ti FTW3 clocks super high (2200+), but then he shows how he's modifying the VF curve in this vid (it's the single point pulled waaaay up, destroying effective clock).






I just tested this with my card (single point vs a smooth curve) at the 1100mv point. Indicated frequency was 2145 both times (+150), but effective clock was 2120-2125 with the smoother curve and 2050-2055 with the steep curve. 

We've obviously covered this before in this thread, but since things are kinda dead in here thought I'd rant a bit. LOL

This also shows how bored I am to be watching those vids. I need an intervention to not buy a 3090ti FTW3 for $1500 right now....
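The two ways of hitting the same top point can be sketched with hypothetical numbers (the voltages and frequencies below are made up for illustration, not from any particular card):

```python
# Hypothetical Afterburner-style V/F points: mV -> MHz (illustrative only).
base = {1050: 1995, 1062: 2010, 1075: 2025, 1087: 2040, 1093: 2055, 1100: 2070}

# Smooth curve: the same offset applied to every point (+75 here).
smooth = {mv: mhz + 75 for mv, mhz in base.items()}

# Single-point curve: only the 1100 mV point dragged up to 2145 MHz,
# leaving a steep jump from the point just below it.
single = dict(base)
single[1100] = 2145

# Both curves indicate the same 2145 MHz at 1100 mV, but the step into that
# last point differs hugely -- the steep jump is what tanks effective clock.
print(smooth[1100] - smooth[1093], single[1100] - single[1093])  # 15 90
```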


----------



## yzonker

J7SC said:


> ...still having a grand old time with the 3090 Strix, even on the stock bios w/o full PL (as opposed to the KPE 520 secondary bios). My favs remain CP2077 - which on a 48 inch OLED with all the ray tracing maxed at 4K is simply _surreal_...mostly, I just walk / ride / drive and roam around, just for the eye candy . Then there is FS 2020 (below) which on this setup has become a daily joy.
> 
> I got the 3090 at the end of January '21 (before the price bulge, coincidentally same price as now) and finished the 5950X CH8 DarkH and custom loop setup a few months later....glad to report that temp deltas are exactly the same now as they were when I completed the loop (I used TG10 thermal putty on VRAM and VRMs, plus extra heatsink w/fan on the back of the 3090). Even at 25 C ambient and hours of playing the above, no issues
> View attachment 2566458


What core offset are you using? +180? Looks like something in that range from the max 2205 reported unless the Strix bios starts with some offset above reference. Do you play Metro EE? I think both of my cards are fairly stable at +180 except in that game. Both of them have to fall back to +150 (maybe +135 for my 3080ti) for Metro.


----------



## J7SC

yzonker said:


> What core offset are you using? +180? Looks like something in that range from the max 2205 reported unless the Strix bios starts with some offset above reference. Do you play Metro EE? I think both of my cards are fairly stable at +180 except in that game. Both of them have to fall back to +150 (maybe +135 for my 3080ti) for Metro.


...yeah, ~+180 or so. I've run up to ~+210 on this stock vbios for Superposition, but for gaming, ~+180 is max. Sometimes I forget when I've switched to the 520 KPE vbios and its base of 1920 MHz instead of 1860 MHz, and wonder what's wrong ...

...below is a Superposition oldie from when the card was still air-cooled (stock vbios) ...it always tended to spike really high, then settle 40 MHz lower (though these days I use nvidia-smi to stop the spiking in the first place, at least when benching).


----------



## Gandyman

So strange thing happening (probably just my lack of understanding) hoping someone here can explain. 

I have 3090 Strix OC on water. I run it 24/7 with 100% power limit and +75 on core. This gets me over 2k clock most the time in games, and doesn't use that extra 120+ watts to get another 25 or so mhz, saving my power bill. 

So today I was playing Tiny Tinas Wonderland (just happens to be what I'm playing at the moment) And I noticed I was getting about 110 fps. Out of curiosity I put the power slider to max, to see if that extra power would give me the extra 10 fps to max out my 120hz monitor. What happened was ... the GPU became unstable. Multiple nvlddmkm crashes. With the same +75 on core. I thought surely this was a coincidence so put the power limit back to 100% and replayed the same chapter that had crashed half dozen times .. and its fully stable. 

TL;DR why does increasing my power limit (and leaving the same +75 on core) make my card unstable? I thought adding more power adds stability.


----------



## GRABibus

Gandyman said:


> So strange thing happening (probably just my lack of understanding) hoping someone here can explain.
> 
> I have 3090 Strix OC on water. I run it 24/7 with 100% power limit and +75 on core. This gets me over 2k clock most the time in games, and doesn't use that extra 120+ watts to get another 25 or so mhz, saving my power bill.
> 
> So today I was playing Tiny Tinas Wonderland (just happens to be what I'm playing at the moment) And I noticed I was getting about 110 fps. Out of curiosity I put the power slider to max, to see if that extra power would give me the extra 10 fps to max out my 120hz monitor. What happened was ... the GPU became unstable. Multiple nvlddmkm crashes. With the same +75 on core. I thought surely this was a coincidence so put the power limit back to 100% and replayed the same chapter that had crashed half dozen times .. and its fully stable.
> 
> TL;DR why does increasing my power limit (and leaving the same +75 on core) make my card unstable? I thought adding more power adds stability.


Adding more power results in higher boost and then potential instability.
Did you measure your core boost frequency before and after sliding the PL ?


----------



## Gandyman

GRABibus said:


> Adding more power results in higher boost and then potential instability.
> Did you measure your core boost frequency before and after sliding the PL ?


Yeah, the core clock didn't change. It has enough power to hit the ~2025 MHz at 100% power limit. Increasing power just makes it crash; doesn't add more core clock. So odd...


----------



## J7SC

Gandyman said:


> So strange thing happening (probably just my lack of understanding) hoping someone here can explain.
> 
> I have 3090 Strix OC on water. I run it 24/7 with 100% power limit and +75 on core. This gets me over 2k clock most the time in games, and doesn't use that extra 120+ watts to get another 25 or so mhz, saving my power bill.
> 
> So today I was playing Tiny Tinas Wonderland (just happens to be what I'm playing at the moment) And I noticed I was getting about 110 fps. Out of curiosity I put the power slider to max, to see if that extra power would give me the extra 10 fps to max out my 120hz monitor. What happened was ... the GPU became unstable. Multiple nvlddmkm crashes. With the same +75 on core. I thought surely this was a coincidence so put the power limit back to 100% and replayed the same chapter that had crashed half dozen times .. and its fully stable.
> 
> TL;DR why does increasing my power limit (and leaving the same +75 on core) make my card unstable? I thought adding more power adds stability.





Gandyman said:


> Yeah the core clock didn't change. It has enough power to hit the ~2025mhz at 100% power limit. Increasing power just makes it crash, doesnt add more core clock. so odd...


...assuming that your PSU is not the cause of the stability issue (?), you might try running the 520 KPE vbios since your dual-vbios Strix is on water (I run both the stock Strix OC and the 520 KPE on my w-cooled Strix OC) to see if that aids your stability at the same _effective_ clocks, or in fact makes it worse. Keep in mind that the stock bios is at 1860 nominal boost base, the KPE at 1920.


----------



## KedarWolf

J7SC said:


> ...yeah, ~+180 or so. I've run up to ~+210 on this stock vbios for Superposition, but for gaming, ~+180 is max. Sometimes I forget when I've switched to the 520KPE vbios and it's base of 1920 MHz instead of 1860 MHz, and wonder what's wrong ...
> 
> ...below is an Superposition oldie when the card was still air-cooled (stock vbios) ...it always tended to spike really high than settle 40 MHz lower (though these days I use nvidia-smi to stop the spiking in the first place, at least when benching).
> View attachment 2566602


I've had games that are unstable with the stock ASUS BIOS but pass with, say, the EVGA FTW3 BIOS at similar settings.


----------



## joyzao

Hello people. 

I recently got a 3090 TUF OC; which bios could I use on this 3090? Because in 4K on The Witcher 3 I only get a 1850 MHz clock, even with maximum power limit and +150 MHz on the GPU. Could it improve?


----------



## ssgwright

joyzao said:


> Hello people.
> 
> I recently got a 3090 tuf oc, which bios could I use on this 3090? Because in 4k on witcher 3 I have 1850mhz clock, even with maximum energy + 150mhz gpu, could it improve?


Yes, but at a cost: you could run the Kingpin 1000W bios. However, if you run it at max power you're probably pulling 600W, and even with my custom loop my computer starts melting after about 20 min of playing. So what I do is run the 1000W bios and set the power limit to about 80%; this puts my average core MHz in games at 2050 to 2100.


----------



## joyzao

ssgwright said:


> yes but at a cost, you could run the kingpin 1000w bios. However, if you run it at max power your probably pulling 600w and even with my custom loop my computer starts melting after about 20min of playing. So what I do is run the 1000w bios and set the power limit to about 80% this puts my average core mhz in games from 2050 to 2100.


Thanks for the answer.

I did the test with the XOC bios, but it was worse than the TUF OC bios; here is the result. On the TUF OC bios I get 1860 MHz at 350 to 360W... What can I do? I don't understand the shunt mod.


----------



## Arizor

joyzao said:


> Thanks for the answer.
> 
> I did the test with the xoc bios, but it was worse than the tuf oc bios, here is the result, in the tuf oc bios I get 1860mhz with 350 to 360w... What can I do? I don't understand shunt mod


That voltage is extremely low. Are you running a custom curve in Afterburner? You probably want to allow at least a 1.03 curve to get the highest clocks on a TUF.


----------



## joyzao

Arizor said:


> That voltage is extremely low. Are you running a custom curve in Afterburner? You probably want to allow at least a 1.03 curve to get the highest clocks on a TUF.


Yes, I noticed, but it happens on this XOC bios, or any other 3-pin bios. On the default bios it averages 0.90V; I can't get above 0.92V with any configuration, even on the default bios. I'm testing games like The Witcher and Control with ray tracing, in 4K.

I believe it's some limitation of the board; why can't I get above 0.90/0.92V? If you can give me some guidance on the Afterburner curve: I've already tried some settings but without success, it always stays at the maximum with the voltage above.


----------



## Nizzen

joyzao said:


> Hello people.
> 
> I recently got a 3090 tuf oc, which bios could I use on this 3090? Because in 4k on witcher 3 I have 1850mhz clock, even with maximum energy + 150mhz gpu, could it improve?


Always buy a 3-pin 3090, or shunt mod.


----------



## yzonker

Arizor said:


> That voltage is extremely low. Are you running a custom curve in Afterburner? You probably want to allow at least a 1.03 curve to get the highest clocks on a TUF.


Yea either that or the power limit was set to 50% or so. 

@joyzao Keep in mind a 2x8pin card will only pull about 66% of what the PL is set to when using the Kingpin 1kw bios (50% PL will be around 310w). Also keep in mind there are no thermal limits set in that bios. So if you are air cooled be careful with how high temps go.


----------



## joyzao

yzonker said:


> Yea either that or the power limit was set to 50% or so.
> 
> @joyzao Keep in mind a 2x8pin card will only pull about 66% of what the PL is set to when using the Kingpin 1kw bios (50% PL will be around 310w). Also keep in mind there are no thermal limits set in that bios. So if you are air cooled be careful with how high temps go.


I did these tests but it also came out worse than the default bios.
With the Gigabyte OC 3090 bios I got a little more performance; what can I do to improve?

What I notice is that the voltage is really low, it doesn't increase.

I bought this model because it was a very good opportunity, if I could I would get a better model, but the price I paid was very good.


----------



## yzonker

joyzao said:


> yes I noticed, but it happens in this xoc bios, or any other 3-pin bios, in the default bios it is on average 0.90v, I can't get above 0.92 with no configuration even in the default bios, I'm testing games like the witcher, control raytracing , in 4k.
> 
> I believe it's some limitation of the board, why can't I get above 0.90/0.92v? If you can give me some curve orientation in the afterburner, I've already tried some settings but without success, it always stays at the maximum with this voltage above.


Just to clarify, you are using this bios?









EVGA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





If so, set the PL to 70-75% and report back with the result. This should work and pull around 450w.


----------



## joyzao

yzonker said:


> Just to clarify, you are using this bios?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS — 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> 
> 
> 
> 
> If so, set the PL to 70-75% and report back with the result. This should work and pull around 450w.



yes, i just took the test.

50% pl









70% pl










Is the power it shows real? Because theoretically if I consumed 700W my PC wouldn't be supported, lol; my PSU is 850W Gold.

Interesting that at 70% PL my voltage went above 1.0, the first time I've seen that, but if this is really the consumption, it's insane.

At 50% PL the consumption is very high and even so the result is worse than the standard bios.

What can I do? Can I use this bios at 70% PL 24/7 for gaming?

On the default bios, Asus TUF OC or Gigabyte OC, I can only get on average 1860 to 1905 MHz maximum, with maximum power and +150 MHz on the GPU.


----------



## yzonker

joyzao said:


> yes, i just took the test.
> 
> 50% pl
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 70% pl
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Is the energy it shows real? Because theoretically if I consume 700w it wouldn't support my pc lol, my source is 850w gold.
> 
> Interesting that at 70% pl my voltage went from 1.0, the first time I see it, but if this is really the consumption is insane.
> 
> At 50% pl the consumption is very high and even so the result is worse than the standard bios.
> 
> What can I do? Can I use this bios at 70% pl 24/7 for gaming?
> 
> In the default bios, asus tuf oc or gigabyte oc, I can only get on average 1860 to 1905 mhz maximum, with maximum power, +150 mhz on the gpu.


No, you need to multiply the power it shows by around 0.66 to get the true power draw. If you look at GPUZ, it will show 3x8pin even though you only have 2. The 3rd 8pin is a copy of the 1st 8pin and gets counted as part of the total power. So for example the screenshot you show above,

power = 681 * 0.66 = 449w

This is an approximation, but close enough to get a good idea of the true power draw.
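The correction above can be wrapped in a throwaway helper (the 0.66 factor is this thread's rule of thumb for a 2x8-pin card on the KPE 1000W bios, not an official figure):

```python
def true_power(reported_w, factor=0.66):
    """Approximate real draw of a 2x8-pin card on the KPE 1000W bios: the bios
    counts a phantom third 8-pin (a mirror of the first), inflating the total."""
    return reported_w * factor

print(round(true_power(681)))  # the 681 W reading above -> ~449 W actual
```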


----------



## joyzao

yzonker said:


> No, you need to multiply the power it shows by around 0.66 to get the true power draw. If you look at GPUZ, it will show 3x8pin even though you only have 2. The 3rd 8pin is a copy of the 1st 8pin and gets counted as part of the total power. So for example the screenshot you show above,
> 
> power = 681 * 0.66 = 449w
> 
> This is an approximation, but close enough to get a good idea of the true power draw.


I have a metering power socket; I will use it to check the consumption. Is it reliable to rely on this type of device?

But is it safe to use like this? In the 70% PL image I didn't put any OC on memory or GPU; can I?

At most I will measure with the device to see the real consumption. Thanks for the answers.
About the temperature: I played a little to see, and from what I noticed it reached about 64 degrees in Control, full settings at 4K. Staying below 70 degrees is safe, correct?


----------



## yzonker

joyzao said:


> I have a metering power socket, I will use it to check the consumption. Is it reliable to rely on this type of device?
> 
> But is it safe to use like this? In this 70% pl image, I didn't put any oc in memory or gpu, can I put it?
> 
> In the most I will measure by the device to see the real consumption. Thanks for the answers.
> About the temperature, I played a little to see, but from what I noticed it reached about 64 degrees in the control full 4k game. If staying below 70 degrees is safe correct?


The power meter will get you total system power including the PSU loss. The 0.66 value is pretty close and probably better than you can do by measuring total power. The only way I know of to get any closer is to use a clamp meter on each 8pin and write some equations based on the readings to correct the power value shown in a program like HWINFO. This is a bit more complicated and really not necessary though.

Yea, under 70C should be ok. Keep an eye on the hotspot and memory temps as well though.
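To go the other way from a wall meter, subtract the PSU's conversion loss; a rough sketch assuming ~90% efficiency for an 80 Plus Gold unit at typical load (the real figure varies with load and model):

```python
def dc_power_from_wall(wall_w, psu_efficiency=0.90):
    """Rough DC-side system draw given an AC wall-meter reading.
    psu_efficiency=0.90 is an assumed 80 Plus Gold mid-load figure."""
    return wall_w * psu_efficiency

print(round(dc_power_from_wall(650)))  # e.g. a 650 W wall reading -> ~585 W DC
```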


----------



## joyzao

yzonker said:


> The power meter will get you total system power including the PSU loss. The 0.66 value is pretty close and probably better than you can do by measuring total power. The only way I know of to get any closer is to use a clamp meter on each 8pin and write some equations based on the readings to correct the power value shown in a program like HWINFO. This is a bit more complicated and really not necessary though.
> 
> Yea, under 70C should be ok. Keep on eye on the hotspot and memory temps as well though.



With the meter it gave 650W total consumption, with an i9 10900K at 5.1; is that as expected?

But the most interesting thing is that the performance compared to the Gigabyte OC bios (2x8-pin) was very close. The temperature was around 64/65 degrees, memory below 90-92.
With 80% PL it improves the performance well, but it adds 70W compared to the previous setting, and a significant rise in temperature.

Is there any way to extract the maximum with a 2x8-pin bios? What I noticed on the TUF/Gigabyte bios is that the performance is good; however, as in the image I showed, the voltage is very low, so the clock does not go up.

Finally, is there any XOC bios like this that keeps both HDMI ports working? Here I use 2 OLED TVs, and I need the 2 HDMI.

What would you advise me to do to get good performance? Thanks


----------



## yzonker

joyzao said:


> With the meter it gave 650w total consumption, with i9 10900k at 5.1 , is it as expected?
> 
> But the most interesting thing is that the performance compared to gigabyte oc bios (2 pin) was very close. The temperature was around 64/65 grams, memories below 92-90.
> With 80% pl it improves the performance well, but it increases by 70w compared to the previous one, and a significant gain in temperature.
> 
> Or is there any way to extract the maximum with a 2 pin bios? What I noticed in the bios tuf/gigabyte the performance is good, however according to the image I showed the voltage is very low, so the clock does not go up.
> 
> Finally, is there any xoc bios like this that can take advantage of the 2 HDMI ports? Here I use 2 tv's oled, and I need the 2 hdmi.
> 
> What would you advise me to do to have a good performance? Thanks


That bios is your only option for a 2x8-pin card. The only other thing you can do is improve cooling.


----------



## Arizor

Let's take it with a large grain of salt, fellas, but this is an interesting result for the 4090 on Time Spy Extreme, if remotely true.


----------



## J7SC

Arizor said:


> Let's take it with a large swab of salt fellas, but this is an interesting result for the 4090 on Time Spy Extreme, if remotely true.
> 
> View attachment 2566943


..makes you wonder about the 4090 _Ti  _


----------



## Arizor

Haha, I'm holding out an extra 18 months for that 4% increase! 4090 be damned!


----------



## GosuPl

Arizor said:


> Haha, I'm holding out an extra 18 months for that 4% increase! 4090 be damned!


RTX 3090 (82 SM: 10496 CUDA / 328 TMU / 328 Tensor / 82 RT / 112 ROP) vs RTX 3090 Ti (84 SM: 10752 CUDA / 336 TMU / 336 Tensor / 84 RT / 112 ROP).

Clock vs clock at the same PT, indeed 2-4% more perf on the RTX 3090 Ti. But if rumors and "leaks" are true, the 4090 will be +/- 2000 CUDA cores short of the 4090 Ti / TITAN.

128 SM on the 4090 vs 144 SM on the 4090 Ti: 10-15% more perf clock vs clock if that's true. Personally I will wait for the full AD102; the 4090 seems too weak for the top of the top. Not great


----------



## Arizor

GosuPl said:


> RTX 3090 82 SM vs RTX 3090 Ti 84 SM 10496 CUDA / 328 TMU / 328 Tensor / 82 RT / 112 ROP vs 10752 CUDA / 336 TMU / 336 Tensor / 84 RT / 112 ROP
> 
> Clock vs Clock at same PT, indead 2-4% more perf on RTX 3090 Ti but if rumors and "leaks" are true, 4090 will be +/- 2000 Cuda less than 4090 Ti / TITAN.
> 
> 128 SM on 4090 vs 144 SM on 4090 Ti. 10-15% more perf clock vs clock if that true. Personally i will wait for full AD102, 4090 seems to weak for top of the top. Not great


Wow that's interesting if the rumours are true, maybe I will hold out...


----------



## GosuPl

Arizor said:


> Wow that's interesting if the rumours are true, maybe I will hold out...



We will find out the truth at the release or just before it. However, there are many indications that the RTX 4090 will get the spec that the RTX 4080 Ti was supposed to have earlier (also according to rumors). The card will certainly be powerful, but from the perspective of an enthusiast looking for top performance, such a hard-cut GPU does not bring joy. At least for me, the top of the top means a full core, and the acceptable cut is 2-4 SM less than full. Otherwise there is no joy or motivation to buy such a GPU.


----------



## GosuPl

Full NVIDIA "Ada" AD102 GPU reportedly twice as fast as RTX 3090 in game Control at 4K — videocardz.com

Alleged full AD102 GPU with 160+ FPS in Control at 4K. Just a day after first performance leaks on GeForce RTX 4090, we now have rumors on the full AD102 GPU. The full AD102 GPU should not be confused with RTX 4090 non-Ti. The full GPU features as many as 18432 CUDA cores which is […]





Hmm, the full AD102 in another rumor. But a rumor is just a rumor  

However, if such a 4090 Ti / TITAN appeared alongside the RTX 4090, or soon after the release, I'd take it right away.


----------



## ArcticZero

Well, my PSU finally gave. It was an old Seasonic Prime Gold 1000w. Was playing a game of 2042 when everything died, and I feared the worst thinking it was the 3090. I was glad when I traced the smell to the PSU, after which I tested it and confirmed it was dead. I don't expect anything else is dead at this point.

Time to get a Prime PX 1300w to replace it. I just hope the cables are reusable since it's going to be a major hassle to tear down the loop. 
Well, as far as I'm aware Seasonic says they are reusable between those series anyway.

EDIT: New PSU in, everything survived. All good!


----------



## Ironcobra

ArcticZero said:


> Well, my PSU finally gave. It was an old Seasonic Prime Gold 1000w. Was playing a game of 2042 when everything died, and I feared the worst thinking it was the 3090. I was glad when I traced the smell to the PSU, after which I tested it and confirmed it was dead. I don't expect anything else is dead at this point.
> 
> Time to get a Prime PX 1300w to replace it. I just hope the cables are reusable since it's going to be a major hassle to tear down the loop.
> Well, as far as I'm aware Seasonic says they are reusable between those series anyway.


If i was a psu and you were playing 2042 I would kill myself too🤣


----------



## John117-

AD102 (TITAN) will have 48 GB GDDR6X in my opinion


----------



## yzonker

For the question of whether Intel or AMD cpus can generate the highest PR scores, I have a direct comparison now with a new 12900k system. I think the Intel system is very slightly better. Score was my highest ever and I couldn't get PR to complete at the same [email protected] that I'm pretty sure I did previously. Not sure why. 

The AB curve was acting weird though by putting the last 4 points all at the same level even though I had the 3 points to the left of 1100mv set with a lower offset. I've seen that before, but not to this extent. At one point I had the 1093 point at +195 and it was still at the same level as the 1100mv point at +225. 

I've seen the same loss of 1 bin in both my 3090 and 3080ti when swapping to my 5800X3D as well, so maybe it's slightly cpu/system dependent.

Then there's the issue with getting some condensation with the chiller this time of year, so it's possible that was causing issues as well with the 3090.

Best I can do for now anyway.

Just to refresh memories, my AMD system is mostly optimized with tuned b-die and CO. The Intel rig is a 12900k with tuned DDR5 running at 7000. CPU clocked at 5.5/4.3/4.5.

AMD








I scored 16 107 in Port Royal — AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)





Intel








I scored 16 127 in Port Royal — Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





Compare: Result (www.3dmark.com)





This was also my highest ever TS graphics score (the reBar on score). (even slightly higher than what I scored using cold winter air a few months ago)

reBar on:









I scored 23 946 in Time Spy — Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





reBar off:









I scored 23 880 in Time Spy — Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


----------



## gfunkernaught

Has there ever been a direct upgrade in gpus in history where the new card was 180% faster than its predecessor?


----------



## ssgwright

gfunkernaught said:


> Has there ever been a direct upgrade in gpus in history where the new card was 180% faster than its predecessor?


hell no


----------



## kryptonfly

yzonker said:


> For the question of whether Intel or AMD cpus can generate the highest PR scores, I have a direct comparison now with a new 12900k system. I think the Intel system is very slightly better. Score was my highest ever and I couldn't get PR to complete at the same [email protected] that I'm pretty sure I did previously. Not sure why.
> 
> The AB curve was acting weird though by putting the last 4 points all at the same level even though I had the 3 points to the left of 1100mv set with a lower offset. I've seen that before, but not to this extent. At one point I had the 1093 point at +195 and it was still at the same level as the 1100mv point at +225.
> 
> Although I've seen the same loss in 1 bin in both my 3090 and 3080ti when swapping to my 5800x3D as well, so maybe it's slightly cpu/system dependent as well.
> 
> Then there's the issue with getting some condensation with the chiller this time of year, so it's possible that was causing issues as well with the 3090.
> 
> Best I can do for now anyway.
> 
> Just to refresh memories, my AMD system is mostly optimized with tuned b-die and CO. The Intel rig is a 12900k with tuned DDR5 running at 7000. CPU clocked at 5.5/4.3/4.5.
> 
> AMD
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 107 in Port Royal — AMD Ryzen 7 5800X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)
> 
> 
> 
> 
> 
> Intel
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 127 in Port Royal — Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> 
> Compare: Result (www.3dmark.com)
> 
> 
> 
> 
> 
> This was also my highest ever TS graphics score (the reBar on score). (even slightly higher than what I scored using cold winter air a few months ago)
> 
> reBar on:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 23 946 in Time Spy — Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> 
> reBar off:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 23 880 in Time Spy — Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


Really interesting  Is it possible to test with your 12900K at 5.4/4.2/4.5? Because mine can't go higher, and I'm on DDR4. I wanna see the memory impact and maybe plan a DDR5 upgrade. Thx

EDIT : with TimeSpy bench


----------



## yzonker

kryptonfly said:


> Really interesting  is it possible to test with your 12900k at 5.4/4.2/4.5 ? Because mine can't and I'm in DDR4. I wanna see the memory impact and maybe plan a DDR5 upgrade. Thx
> 
> EDIT : with TimeSpy bench


It won't change much. That run above is 5.5/4.3/4.5. So 5.4/4.2/4.5 is probably around 23900-24000 with reBar off.


----------



## kryptonfly

yzonker said:


> It won't change much. That run above is 5.5/4.3/4.5. So 5.4/4.2/4.5 is probably around 23900-24000 with reBar off.


I hit ~22600 with re-bar on at 5.4/4.2/4.5, and I think DDR4 matters a little. I was curious especially because of the DDR5. I don't have my 3090 anymore, but I will try with re-bar off and see.


----------



## yzonker

kryptonfly said:


> I hit ~22600 with re-bar on at 5.4/4.2/4.5 and I think DDR4 could matters a little. I was curious especially because of the DDR5. I don't have my 3090 anymore but I will try with re-bar off and see.


That would end up in the 23-23.5k range. DDR5 definitely scores higher in that benchmark. That's one reason I went that direction. If I get a chance I'll re-run it.


----------



## yzonker

Came up a little short from what I thought at 22,888.









www.3dmark.com





My DDR5 tune kinda sucks though. I lost the lottery somewhere in that chain. This was the best I could manage after several hours over several days of testing. Not sure if it's the DDR5 sticks or the IMC in the CPU.


----------



## kryptonfly

yzonker said:


> Came up a little short from what I thought at 22,888.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> My DDR5 tune kinda sucks though. I lost the lottery somewhere in that chain. This was the best I could manage after several hours over several days of testing. Not sure if it's the DDR5 sticks or the IMC in the CPU.
> 
> View attachment 2568695


Is it with re-bar on? IPC is so high with this gen that just 100 MHz changes the result.

You have a good 12900K. I don't think it's your IMC, because you are in gear 2; my 12900KS can only do 5.4/4.2/4.5 at 1.36v under Cinebench R23. My DDR4 sticks are at 1.67v for 4100-14-14-14-26-240 with other timings tuned. It seems pretty close between DDR4 and DDR5 heavily OC'ed. I'm waiting for a 13700K or 13900K + the 4090 to finalize my rig. I have a direct die frame for gen 12 that I'm saving; I hope it will fit gen 13, it seems to be exactly the same layout.


----------



## yzonker

kryptonfly said:


> Is it with re-bar On ? IPC is so high with this gen than just 100mhz changes the result.
> 
> You have a good 12900K, I don't think it's your IMC because you are in gear 2, my 12900KS can only do 5.4/4.2/4.5 at 1.36v under cinebench R23. My DDR4 sticks are at 1.67v for 4100-14-14-14-26-240 and others timings tuned. It seems pretty close between DDR4 and DDR5 heavily oc'ed. I'm waiting a 13700K or 13900K + the 4090 to finalize my rig. I have a direct die for gen 12 that I'm saving, I hope it will fit with gen 13, seems exactly the same layout.


Yes, reBar forced and 5.4/4.2/4.5. I thought it took too large of a hit compared to the other runs I made, but couldn't get a higher result.


----------



## gfunkernaught

ssgwright said:


> hell no


Exactly. New RTX 4090 Ti+ Premium Ultra Mega Hype edition!


----------



## yzonker

Yo, @kryptonfly. I figured out the disconnect with my reBar 5.4/4.2/4.5 run. I had upped the voltage enough that it thermal throttled at the end! I didn't run into that with my 5.5 runs because I was using the chiller @3C water. I checked the runs I posted originally, and the CPU only hit 70C using the chiller. LOL

Here's 5.3/4.2/4.0 with reBar forced, which is about the best I can do without the chiller. I'm not delidded/lapped/etc... Just the Thermaltake contact frame.









www.3dmark.com


----------



## yzonker

gfunkernaught said:


> Exactly. New RTX 4090 Ti+ Premium Ultra Mega Hype edition!


Based on the current rumored specs and using a comparison of the 2080ti and 3090 to estimate how much is gained by adding cores, etc... I'd guess 50-75% increase in performance. A big part of that is the rumored clock speed increase. 

And it depends on how bottlenecked the core may be given memory speeds aren't supposed to increase much. Given that, my best guess is closer to 50% which coincidentally is in line with the gain we saw with the 3090 vs 2080ti. How about that...
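For what it's worth, that kind of estimate can be sketched in a few lines. The 4090 figures are the rumored specs from the post above, and the core-efficiency factors are assumptions picked to bracket the 50-75% guess, not measurements:

```python
# Back-of-the-envelope Ada-vs-Ampere scaling sketch (rumored specs, not data).
cores_3090, clock_3090 = 10496, 1.695   # CUDA cores, boost GHz (3090)
cores_4090, clock_4090 = 16384, 2.520   # rumored 4090 figures

clock_ratio = clock_4090 / clock_3090   # clocks tend to scale ~linearly
core_ratio = cores_4090 / cores_3090    # extra cores never scale linearly

# eff = the fraction of the extra cores that turns into real frames once the
# nearly-flat memory bandwidth bottlenecks the bigger core. The 0.0-0.3 range
# is a guess chosen to mirror the 2080 Ti -> 3090 experience.
for eff in (0.0, 0.3):
    gain = (1 + eff * (core_ratio - 1)) * clock_ratio - 1
    print(f"eff={eff:.1f}: +{gain:.0%}")   # -> +49% and +74%
```

The clock bump alone is ~+49%, which is why a memory-starved core still lands near the low end of the range.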


----------



## gfunkernaught

yzonker said:


> Based on the current rumored specs and using a comparison of the 2080ti and 3090 to estimate how much is gained by adding cores, etc... I'd guess 50-75% increase in performance. A big part of that is the rumored clock speed increase.
> 
> And it depends on how bottlenecked the core may be given memory speeds aren't supposed to increase much. Given that, my best guess is closer to 50% which coincidentally is in line with the gain we saw with the 3090 vs 2080ti. How about that...


Direct upgrades, as in 3090 to 4090 or 2080 Ti to 3080 Ti, have almost always been a 25-30% (sometimes 50%) increase in overall performance. Now if we saw a 50% increase in performance with 50% less power usage, that would be something to get hyped about. But based on the rumors, power consumption is only going up (for now).


----------



## GRABibus

yzonker said:


> Yo, @kryptonfly. I figured out the disconnect with my reBar 5.4/4.2/4.5 run. I had upped the voltage enough that it thermal throttled at the end! I didn't run in to that with my 5.5 runs because I was using the chiller @3C water. I checked my runs I posted originally, the CPU only hit 70C using the chiller. LOL
> 
> Here's 5.3/4.2/4.0 with reBar forced which is about the best I can do without the chiller. I'm not delided/lapped/etc... Just the Thermaltake contact frame.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com


That’s 5000pts more than with my 5950X 🤓


----------



## Warocia

Does anyone have the newest 3090 TUF bios? The ASUS website seems to have the bios bundled with an exe.


----------



## Arizor

Warocia said:


> Does anyone have the newest 3090 TUF bios? The ASUS website seems have bios bundled with exe.


This one mate? Asus RTX 3090 VBIOS


----------



## Warocia

Arizor said:


> This one mate? Asus RTX 3090 VBIOS


Thanks.


----------



## ArcticZero

ArcticZero said:


> Well, my PSU finally gave. It was an old Seasonic Prime Gold 1000w. Was playing a game of 2042 when everything died, and I feared the worst thinking it was the 3090. I was glad when I traced the smell to the PSU, after which I tested it and confirmed it was dead. I don't expect anything else is dead at this point.
> 
> Time to get a Prime PX 1300w to replace it. I just hope the cables are reusable since it's going to be a major hassle to tear down the loop.
> Well, as far as I'm aware Seasonic says they are reusable between those series anyway.
> 
> EDIT: New PSU in, everything survived. All good!


Kinda off topic, but as an update I got my PSU back from Seasonic after I sent it for RMA. They upgraded it to a new GX-1000. Granted it came in a nondescript brown box with only the unit (I only sent back the unit anyway), but the smell and stickers told me it was definitely brand new. 

I'd already gotten a PX-1300 in the time since, but it's always nice to have a decent spare that can power the main rig with a 3090 if needed.


----------



## BenchAndGames

I'm a bit surprised that many models have chosen not to include thermal pads on the VRM chokes.

The most surprising is the ROG Strix, which does not include a thermal pad on the front row of chokes, only on the rear row.

Also, some EVGA cards, most ZOTAC cards, and all FE versions don't include pads on the chokes at all.

I thought that was necessary, but looking at this it seems that chokes don't really need to be cooled...


----------



## olegdjus

Hello friends!
I have a question: my MSI RTX 3090 Suprim X has the EVGA XOC BIOS installed (94.02.59.00.82). I'm trying to pair it with the Classified Tool, but nothing works: version 3.1.4.0 doesn't load anything, and 3.1.7.0 just hangs and doesn't load anything either. Am I doing something wrong, or is the controller in the Suprim X not the right one? Yes, I tried another 3.1.4.0 mod; it doesn't work either.


----------



## GRABibus

olegdjus said:


> Hello friends!
> I have a question: MSI RTX 3090 Suprim X with installed BIOS XOC from EVGA (94.02.59.00.82). I'm trying to pair it with Classified Tools, but nothing works: version 3.1.4.0 doesn't load anything, 3.1.7.0 just hangs and doesn't load anything either. Am I doing something wrong, or is the controller in Suprim X not the right one? Yes, tried another mod 3.1.4.0 - does not work too


Classified tool only works with EVGA Kingpin Hybrid card and Galax HOF card.


----------



## Nizzen

GRABibus said:


> Classified tool only works with EVGA Kingpin Hybrid card and Galax HOF card.


Does classified tool work on HOF? Never tested on mine, but I use "3090 HOF_XOC_OC Tool" on Galax Hof 3090


----------



## Nizzen

olegdjus said:


> Hello friends!
> I have a question: MSI RTX 3090 Suprim X with installed BIOS XOC from EVGA (94.02.59.00.82). I'm trying to pair it with Classified Tools, but nothing works: version 3.1.4.0 doesn't load anything, 3.1.7.0 just hangs and doesn't load anything either. Am I doing something wrong, or is the controller in Suprim X not the right one? Yes, tried another mod 3.1.4.0 - does not work too


You bought the wrong GPU


----------



## GRABibus

Nizzen said:


> Does classified tool work on HOF? Never tested on mine, but I use "3090 HOF_XOC_OC Tool" on Galax Hof 3090


For HOF, I was talking about the tool you mention which is a kind of Classified tool also 😜


----------



## J7SC

"Let the games begin" (subject to pinches of salt, of course)... Also, combined with power-hungry next-gen CPUs, big PSUs (1300W and up) will be a useful investment.


----------



## Nizzen

J7SC said:


> "let the games begin" (subject to pinches of salet, of course)...also, combined w/ power-hungry next-gen CPUs, big PSUs (1300W and up) will be a useful investment.
> View attachment 2570263


We had 2x 1200W+ PSUs from the Classified 780/780 Ti days, so we are always ready 

I hope AMD's 7 series isn't like the 6 series, with many games that have performance problems and driver issues. One or two games that perform VERY well isn't good enough. At least for me....

It's fun to "play" the old 3DMark games with the AMD 6 series sometimes, but every day is a bit boring


----------



## J7SC

Nizzen said:


> We had 2x1200w+ psu's from Classified 780/780ti days, so we are allways ready
> 
> I hope 7 series AMD isn't like 6 series with many games that have performance problems and driver issues. One or 2 games that perform VERY good, isn't good enough. Atleast for me....
> 
> I'ts fun to "play" the old 3dmark games with AMD 6 series sometimes, but every day is a bit boring


...yes, multiple 780Ti Classies on sub-ambient w/EVBot were a great teaching-tool for PSU planning - they never met a power plug they didn't like (I still have several older 1200W in storage from those days). 

As to AMD / NVidia, I currently run both a w-cooled 6900XT and a w-cooled 3090 Strix. I tend to use the 6900XT setup more for work and older DX10/11 games, while the 3090 runs all the latest-gen DX12, RTX etc. That said, AMD drivers have gotten a lot better and I haven't had any real issues. I look forward to seeing what shakes out after the fall / winter, with the new GPU and CPU gens out and field-tested for a while. But even with exclusive gaming at 4K, my current setups are not screaming 'upgrade now', so I can decide in due course.


----------



## yzonker

Yea it seems like GPUs have gotten significantly ahead of the games. DLSS helps that a lot too. 

I've just about got my new system finished with the 12900k, KP mobo, and Zotac 3090. The benchmark runs I posted before were just the system set up as a test bed more or less. 

I put everything in a Phanteks Enthoo Pro 2. With the external dual CL480s, I'll have 3x480x60 and a token 240x45 I tossed in the bottom. And dual D5s of course. 

I went with Alphacool internal rads, but mostly because tests I've seen show them as very low restriction (the 240 is even a cross flow). All in an effort to keep flow as high as possible when using the chiller. 

I'd post a pic but my fan cables are a bit of a mess thanks to this idiot shorting out his Octo. Maybe if I publicly shame myself I won't do anything that stupid again. Lol. Fortunately they're in stock these days so I have a replacement already on the way.


----------



## Arizor

So what kind of performance uplift are we looking for to consider an upgrade folks? I think if the 4090 is 60% more powerful than the 3090 I’m interested.

I just want to guarantee a 4K 120hz experience with all the bells and whistles, as much as one can.


----------



## Nizzen

Arizor said:


> So what kind of performance uplift are we looking for to consider an upgrade folks? I think if the 4090 is 60% more powerful than the 3090 I’m interested.
> 
> I just want to guarantee a 4K 120hz experience with all the bells and whistles, as much as one can.


Wait for the 5 series if you want 120fps in 4K "ultra" for today's games


----------



## Arizor

Nizzen said:


> Wait for 5 series, if you want 120fps in 4k "ultra" for todays games


_starts sweating_


----------



## J7SC

5090 Ti might have to wait ...

I put my 5950X / 3090 Strix combo on the same mild OC / PL for my fav FS2020, and the MHz/temp values are just what they were a year ago. No degradation, and just pure joy at 4K ultra max detail  . The 3090 will bench at over 2220 on the clock (inset); for benching, I crank the VRAM and PL up a lot more, but the main daily 24/7 setup is rock steady and gets by with low temps


----------



## GRABibus

J7SC said:


> 5090 Ti might have to wait ...
> 
> I put my 5950X /3090 Strix combo on the same mild oc / PL for my fav FS2020 and MHz/temp values are just what they were a year ago. No degradation, and just pure joy at 4K ultra max detail  . The 3090 will bench at over 2220 for the clock (inset) and for benching, I crank the VRAM and PL up a lot more, but the main daily 24/7 setup is rock steady and gets by with low temps
> View attachment 2570411


The problem will be the required power supply connectors.
Not sure I want to change my Corsair HX1200 to upgrade to a RTX 4xxx


----------



## J7SC

GRABibus said:


> The problem will be the required power supply connectors.
> Not sure I want to change my Corsair HX1200 to upgrade to a RTX 4xxx


Most higher-tier GPUs will apparently include the 3-into-1 dongle, as seen on the 3090 Ti...and many PSU vendors have announced those dongles as a separate accessory. Still, buying a new hi-po PSU next year should include a check on whether it comes with the new connectors w/o dongles.


----------



## gfunkernaught

We are quite far from path-traced Cyberpunk 2077 at 4K native and at least 60fps. I can't even imagine how much power that would need...
Although, this seems interesting...


----------



## gfunkernaught

Has kp1kw bios been updated at all to allow idle memory clock?


----------



## Nizzen

gfunkernaught said:


> Has kp1kw bios been updated at all to allow idle memory clock?


Memory at 100% is by design, to keep the memory "hot" when overclocking with LN2


----------



## gfunkernaught

Nizzen said:


> Memory at 100% is by design, to hold the memory "hot" when overclocking with ln2


That I knew, but was wondering if someone maybe hacked it to allow for better auto clock management. I use the keyboard shortcuts to change profiles, but sometimes I forget to leave it "on". Also have a low power profile for use with Photoshop and other lite gpu tasks.


----------



## spin5000

Is it safe to put a FTW3 BIOS on an XC3 (Ultra) EVGA 3090? I'm sick of my card always sitting in the 1700 MHz core region, with some spikes into the 1800s, even though it's only running at 60-65 degrees (memory temps are fine too). I would like to see the core consistently in the low to high 1900s...


----------



## yzonker

spin5000 said:


> Is it safe to put a FTW3 BIOS on an XC3 (Ultra) EVGA 3090? I'm sick of my card always in the 1700 MHz core region and with some spikes in the 1800s even though it's only running at 60 - 65 degrees (memory temps are fine too). I would like to see the core consistently in the low to high 1900s...


Safe yes, effective no. You'll end up with a lower power limit most likely due to the difference in number of 8pin connections. The only bios that will give you a higher PL is the KP 1kw bios. But that's a bad idea for an air cooled card, assuming that's what you have (60-65C is high for water anyway).


----------



## spin5000

yzonker said:


> Safe yes, effective no. You'll end up with a lower power limit most likely due to the difference in number of 8pin connections. The only bios that will give you a higher PL is the KP 1kw bios. But that's a bad idea for an air cooled card, assuming that's what you have (60-65C is high for water anyway).


What about using the 1kW BIOS and then setting the power limit to whatever percentage equates to around 450 or 500 W? Is that a good idea?

Man, it's incredible that no one has still figured out a way to edit the BIOS. Up until and including the GTX 1000 series if I remember correctly, we could just edit the BIOS rather than having to download other cards' stock BIOS. It would be so simple to just extract our own BIOS, edit it (simply raise the power limit), save it, flash, done. I can't believe it's been 2 full generations of cards where no one in the world (at least known to us) has come out with a way to do this like we always could before.


----------



## yzonker

spin5000 said:


> What about using the 1kW BIOS and then setting the power limit to whatever percentage equates to around 450 or 500 W? Is that a good idea?
> 
> Man, it's incredible that no one has still figured out a way to edit the BIOS. Up until and including the GTX 1000 series if I remember correctly, we could just edit the BIOS rather than having to download other cards' stock BIOS. It would be so simple to just extract our own BIOS, edit it (simply raise the power limit), save it, flash, done. I can't believe it's been 2 full generations of cards where no one in the world (at least known to us) has come out with a way to do this like we always could before.


You can do that. Just have to be careful since the bios has no thermal limits. There's also the fact that gains will be small since temps can go up fast as you add power.

Actually it is possible to edit the bios, but it can't be flashed even with the modded NVFlash versions floating around. You have to use a hardware flasher.
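The slider math itself is trivial but easy to get backwards, so here's a sketch. It assumes the slider's 100% corresponds to the bios's 1000W rating (which matches the KP 1kW bios discussed here; verify what your own card reports in GPU-Z):

```python
def pl_percent(target_watts: float, bios_max_watts: float = 1000.0) -> int:
    """Power-limit slider percentage for a given wattage target.

    Assumes the slider's 100% equals the bios's rated maximum
    (1000 W for the KP 1 kW bios); check your card's reported limit.
    """
    return round(100 * target_watts / bios_max_watts)

print(pl_percent(450), pl_percent(500))  # -> 45 50
```

So 45-50% on the slider lands in the 450-500W range mentioned above; with the thermal limits gone, you still need to watch temps yourself.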


----------



## Falkentyne

gfunkernaught said:


> Has there ever been a direct upgrade in gpus in history where the new card was 180% faster than its predecessor?


Yes, Voodoo Graphics-->Voodoo 2 SLI.


----------



## Arizor

Falkentyne said:


> Yes, Voodoo Graphics-->Voodoo 2 SLI.


My first GPU(s)! Those were the days...


----------



## gfunkernaught

Glaaaaazed over that one 😆


----------



## ttnuagmada

Just dropping in, I'm probably a year behind in this thread. Was curious what the current best vbios is. Still the KPE 520W? Did anyone ever release a 1000W one that lets the vram downclock?

edit: looks like my 2nd question is answered right above me.


----------



## des2k...

ttnuagmada said:


> Just dropping in, im probably a year behind in this thread. Was curious what the current best vbios is. Still the KPE 520w? Anyone ever release a 1000w one that let the vram downclock?
> 
> edit: looks like my 2nd question is answered right above me.


You could use Afterburner to apply a negative offset to the memory, which will break the mem clock and/or core clock; they'll be stuck very low, like 200 MHz.

Apparently it's still usable for desktop / YouTube; I've never tried it since I have a water block on the card.


----------



## GRABibus

ttnuagmada said:


> Just dropping in, im probably a year behind in this thread. Was curious what the current best vbios is. Still the KPE 520w? Anyone ever release a 1000w one that let the vram downclock?
> 
> edit: looks like my 2nd question is answered right above me.


I set -252MHz on memory in MSI AB and that’s it.


----------



## gfunkernaught

ttnuagmada said:


> Just dropping in, im probably a year behind in this thread. Was curious what the current best vbios is. Still the KPE 520w? Anyone ever release a 1000w one that let the vram downclock?
> 
> edit: looks like my 2nd question is answered right above me.


Best bios depends on what you want + your system. 

Yup, set up profiles in MSI Afterburner. The negative offset does work fine for desktop and YT. But once you need something actually accelerated by the GPU you need the clocks to go back up. Also I've found that Photoshop and Topaz DeNoise want at least 120-130w from the GPU. So my "editing" profile has the power limit at 12%, and the offsets at 0. Warms the GPU up a bit from 25c to 30c.


----------



## ttnuagmada

gfunkernaught said:


> Best bios depends on what you want + your system.
> 
> Yup, set up profiles in MSI Afterburner. The negative offset does work fine for desktop and YT. But once you need something actually accelerated by the GPU you need the clocks to go back up. Also I've found that Photoshop and Topaz DeNoise want at least 120-130w from the GPU. So my "editing" profile has the power limit at 12%, and the offsets at 0. Warms the GPU up a bit from 25c to 30c.


This particular system is absolute max performance voltage/power be damned, WHEN I'm gaming at least. It's a waterblocked Strix with an active backplate.


----------



## KedarWolf

I need to undervolt my Strix 3090 because in my apartment my 13500 BTU portable A/C unit, my TV and my PC, modem etc. have to be on the same electrical circuit and when playing a game, I'll trigger a power breaker.

What works for me is the MSI Suprim BIOS at 850mV in Afterburner, 1830 core, +579 memory, and power limit at 50%.

My clocks never go below 1830 while gaming, if I put the power limit any lower though, they start to drop.

The max power draw is around 300W.

Winter, I'm fine though.

Right now I'm only playing Diablo 3 which will run on a potato attached to an ethernet cord so no big deal.


----------



## ttnuagmada

ttnuagmada said:


> Finally got around to installing the EK active backplate for my Strix. took the opportunity to replace all of the thermal pads with Thermalright 12.8 W/mk ones. Here are the before and after screenshots of GPU-Z after 35 loops of TSE GT2. +100 core +500 mem and max power limt. I will say that it looked like one of the memory rows wasn't making good contact on the backside of the card prior to installing the active backplate, so at least some of the improvement is probably due to that. Either way, I'm pretty pleased with the results.
> 
> Before:
> View attachment 2518989
> 
> 
> After:
> 
> View attachment 2518990



Thought I'd update on this, since I had seen people suggest that the Thermalright pads would degrade over time. Here are temps from doing the same thing over a year later. Looks like I might need to re-do the liquid metal, but my memory temps haven't even budged.


----------



## Martin778

Hmm, I know the ambient is very hot at 26°C at the moment, but I can't remember my 3090 Strix running this hot and loud when unlocked? 78°C is already its Tjmax at 100% fan in the TSE stress test? It also seems to really dislike my new Corsair sleeved cables on the 1600i... never before have I had this rig just shut down. Strangely enough it will pass Superposition 4K and TSE, but sometimes it will just shut off when starting TS. I'm clueless.


----------



## yzonker

Martin778 said:


> Hmm, I know the ambient is very hot at 26*C at the moment but I can't remember my 3090 Strix running this hot and loud when unlocked? 78*C is already it's Tjmax at 100% fan in TSE stress test? It also seem to really dislike my new Corsair sleeved cables on the 1600i...never before have I had this rig just shut down, strangely enough it will pass Superposition 4K, TSE but sometimes it will just shut off when starting TS, I'm clueless.
> 
> View attachment 2571123


Looks like it needs a re-paste. It's thermal throttling because of the very high hotspot temp.


----------



## Arizor

Martin778 said:


> Hmm, I know the ambient is very hot at 26*C at the moment but I can't remember my 3090 Strix running this hot and loud when unlocked? 78*C is already it's Tjmax at 100% fan in TSE stress test? It also seem to really dislike my new Corsair sleeved cables on the 1600i...never before have I had this rig just shut down, strangely enough it will pass Superposition 4K, TSE but sometimes it will just shut off when starting TS, I'm clueless.


That hotspot temp is reaaaaaal high, which indicates the GPU core needs investigating; probably need to whip out some fresh paste.

edit: beat by @yzonker


----------



## Martin778

Bummer, the thermal pads will rip when opening...


----------



## Martin778

Repasted, still not great hotspot.


Spoiler


----------



## J7SC

Martin778 said:


> Repasted, still not great hotspot.
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2571176


...could very well be a still-misaligned mount, or even a slightly warped cooler assembly - happens with those big, long cards. For comparison, below is my 3090 Strix when it was still on air at about 24C ambient, running Superposition 4K/8K. Since then I water-cooled it, which dropped max hotspot and VRAM temps by ~30C under full load. FYI, I used Gelid GC Extreme for the die, thermal putty for the VRAM, and Thermalright pads (below) for the power stages. 

FYI, I did have another NVidia GPU where the hotspot was way off - after disassembly, the top-leftish area showed some lack of coverage... I then added the Gelid (which is thicker), spread it across the whole surface of the die, and made sure to have even mounting pressure on the air cooler via those 4 screws right around the die before tightening the rest. That fixed it.


----------



## Martin778

I used GC-Extreme as well. I tried thermal putty once in a notebook years ago (forgot the name, but it was white stuff similar to Play-Doh). PITA to remove; I had to wash the PCB with alcohol/acetone to flush it from all around the chips.


----------



## gfunkernaught

So I'm on driver 516.94 and I just built a new system. 12900KS on an MSI MEG Z690 Unity, Corsair Dominator 16GBx2 DDR5 5600Mhz. I've noticed that over the past year or so of drivers, TimeSpy Extreme GPU Test 1 fails almost immediately with a previously stable core overclock. I used to be able to do at least [email protected] and now I can't. Port Royal runs but scores lower than my previous build. ~15500 vs 15300 points. I know that PR scored higher with older drivers. But the Timespy test crashing so easily bugs me. Anyone else seeing newer drivers not liking previously stable overclocks?


----------



## J7SC

gfunkernaught said:


> So I'm on driver 516.94 and I just built a new system. 12900KS on an MSI MEG Z690 Unity, Corsair Dominator 16GBx2 DDR5 5600Mhz. I've noticed that over the past year or so of drivers, TimeSpy Extreme GPU Test 1 fails almost immediately with a previously stable core overclock. I used to be able to do at least [email protected] and now I can't. Port Royal runs but scores lower than my previous build. ~15500 vs 15300 points. I know that PR scored higher with older drivers. But the Timespy test crashing so easily bugs me. Anyone else seeing newer drivers not liking previously stable overclocks?


I haven't noticed any decline in 'max boost' oc settings and am using the same settings with 516.94 as before with older drivers for TS/E, PR and per below, Superpos8K (stock vbios instead of KPE520). BTW, still trying to figure out why the fan speeds say 0 rpm (correct as there are no GPU fans) but 53%


----------



## gfunkernaught

J7SC said:


> I haven't noticed any decline in 'max boost' oc settings and am using the same settings with 516.94 as before with older drivers for TS/E, PR and per below, Superpos8K (stock vbios instead of KPE520). BTW, still trying to figure out why the fan speeds say 0 rpm (correct as there are no GPU fans) but 53%
> View attachment 2571206


Bios doesn't know there isn't a fan hooked up, just that it thinks the fan isn't moving, 53% not enough give more 

Yeah Idk, it's strange. Timespy in particular is the issue. I was just playing Fallen Order at 8K with +180 on the core, which yielded about 2145-2160 MHz at 1063 mV, and +1400 on the mem. Eventually crashed lol. I still don't understand the internal logic of GPU Boost. Sometimes setting the core voltage slider to 100% does something, other times it doesn't. Setting the curve too high (to gain a higher or closer-to-requested core clock) crashes anything I throw at the card. The GPU temp never really goes over 45c, maybe some peaks at 46c, but mostly 45c, 25c ambient.









www.3dmark.com





Yeah the score is higher, new system, but the gpu scored lower because I couldn't get that higher OC. Weird mang.
EDIT: Hold up a second...The 9900k scored higher in the CPU test than the 12900ks? Exsqueeze me? Uh yea ok no. Something is up.


----------



## yzonker

gfunkernaught said:


> Bios doesn't know there isn't a fan hooked up, just that it thinks the fan isn't moving, 53% not enough give more
> 
> Yeah Idk it's strange. Timespy in particular is the issue. I was just playing Fallen Order at 8k with +180 on the core so yielded about 2145-2160mhz 1063mv and +1400 on the mem. Eventually crashed lol. I still don't understand the internal logic of gpu boost. Sometimes setting the core voltage slider to 100% does something, other times it doesn't. Setting the curve too high ( to gain a higher or closer to req core clock) crashes anything I throw at the card. The gpu temp never really goes over 45c, maybe some peaks in 46c, but mostly 45c, 25c ambient.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> Yeah the score is higher, new system, but the gpu scored lower because I couldn't get that higher OC. Weird mang.
> EDIT: Hold up a second...The 9900k scored higher in the CPU test than the 12900ks? Exsqueeze me? Uh yea ok no. Something is up.
> View attachment 2571208


If you're looking at the last score, that's the millisecond value it gives, so lower is better. The graph is misleading. It does look like your CPU score is very low though. Here's mine in comparison, just running my daily 5200/4100/4300 OC.









www.3dmark.com





I will say that I had a similar experience that I posted about a while ago in this thread. I seemed to be stuck at +210 core (at 1100mv) with my 3090 in the 12900k system, where I had been running +225. I still beat some of my old graphics scores anyway though. I haven't been closely tracking effective clocks, so I don't know if I really had lower effective in this new system or not. The curve I ended up running was very flat, so it's possible effective clocks were the same or very close.

When I have time, I plan to take another run at PR/TS/TSE with this new system. We'll see if I have any better luck. I did swap out to a much better PSU as the one I used before was on the ragged edge of hitting OCP I think. That 12th gen CPU seems to suck more power even when not loaded too heavily. Definitely need more PSU for it than my AMD system.


----------



## gfunkernaught

@yzonker
Not seeing the millisecond value/score anywhere. The CPU score is measured in frames per second, higher is better. Same goes for the other scores. Btw did you tweak your 12900k in the bios at all? I'm all at auto. The only thing I enabled in my bios was XMP.

I think the PSU could be the issue. Loading 600W from the GPU alone, then maybe 100-160W (300W max) from the CPU, plus everything else in this large system, I'm already breaking past 50% on my 1200W PSU. I'm probably gonna get the AX1600i to make sure I'm not wasting power, or try not to 😉
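As a sanity check on those numbers (all figures are the rough ones from this post, not measurements, and this only covers sustained load, not Ampere's millisecond transient spikes, which run far higher):

```python
# Rough sustained-load check for the PSU, using the figures quoted above.
gpu_w, cpu_w, rest_w = 600, 300, 150   # GPU worst case, CPU max, fans/pumps/drives (guess)
psu_w = 1200

load = gpu_w + cpu_w + rest_w
print(f"{load} W of {psu_w} W = {load / psu_w:.0%} sustained load")  # -> 88% load
```

Staying nearer 50-60% sustained leaves headroom for the transient spikes that trip OCP on some units, which is the usual argument for stepping up to something like the AX1600i with a 3090.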


----------



## gfunkernaught

Btw I've been out of the BIOS hunting game for a while. Have there been any updates to the 520W rBar bios, or any newer 1kW rBar bios, that auto-downclocks when idle, or is just better/newer in general? I like having no limits, but not having to switch Afterburner profiles every time I want to game or process photos would be nice.


----------



## yzonker

gfunkernaught said:


> @yzonker
> Not seeing the millisecond value/score anywhere. The CPU score is measured in frames per second, higher is better. Same goes for the other scores. Btw did you tweak your 12900k in the bios at all? I'm all at auto. The only thing I enabled in my bios was XMP.
> 
> I think the PSU could be the issue. Loading 600w from the GPU alone, then maybe 100-160w (300w max) from the CPU, then everything else in the large system, I'm already breaking past 50% on my 1200w PSU. I'm probably gonna get the ax1600i to make sure I'm not wasting power, or try not to 😉


Average simulation time per frame is what I was referring to. My 12900k is just running an all core OC of 5200/4100/4300 with DDR5 7000 in the result I posted.
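Since the 3DMark CPU result is reported in frames per second, the "average simulation time per frame" yzonker refers to is just its reciprocal. A one-liner makes the conversion explicit (the 40 fps figure below is only an example, not anyone's actual score):

```python
def ms_per_frame(fps: float) -> float:
    """Average simulation time per frame (ms) from a frames-per-second score."""
    return 1000.0 / fps

print(ms_per_frame(40.0))  # 25.0 ms per frame
```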


----------



## J7SC

gfunkernaught said:


> Btw I've been out of the BIOS hunting game for a while. Have there been any updates to the 520w rBar bios, or any newer versions of the 1kw rBar bios that auto-downclock when idle, or that are just better/newer in general? I like having no limits, but not having to switch Afterburner profiles every time I want to game or process photos would be nice.


I'm using both the stock Strix and the KPE520 r_bar vbios on my 3090 (usually the former for gaming). I think there was an updated version (per vbios date) of the KPE520 r_bar, but from what I've read, it didn't seem to make any difference in performance.

I would also check the _system memory_ timings of your new build - the tightest timings that pass stable RAM testing don't always translate into the highest GPU or even CPU scores in tests such as 3DM TimeSpy Extreme. It just takes a lot of testing to dial in a new system. When I first ran my 5950X, 3DM CPU scores were about 1,300 points lower than now - just differences in tuning the same hardware...


----------



## gfunkernaught

J7SC said:


> I'm using both the stock Strix and the KPE520 r_bar vbios on my 3090 (usually the former for gaming). I think there was an updated version (per vbios date) of the KPE520 r_bar, but from what I've read, it didn't seem to make any difference in performance.
> 
> I would also check your _system memory_ timings of your new build - tightest possible re. stable RAM testing doesn't always translate into highest GPU or even CPU scores in tests such as 3DM TimeSpy Extreme. It just takes a lot of testing to dial in a new system. When I first ran my 5950X, 3DM CPU scores were about 1,300 points lower than now - just differences in tuning the same hardware...
> View attachment 2571395


Using the kpe520 vs the 1kw with the power limit at 60, any performance difference in games? I bought this ram specifically because of its lower latency. I haven't even begun to go through the bios, as this is my first high end motherboard and there are tons of settings. I also just upgraded to Windows 11, and Cyberpunk 1.6 definitely is a bit more sluggish than I remember.


----------



## J7SC

gfunkernaught said:


> Using the kpe520 vs the 1kw with a power limit of 60, performance difference in games? I bought this ram specifically because of its lower latency. I haven't even begun to go through the bios as this is my first high end motherboard and there are tons of settings. I also just upgraded to Windows 11 and Cyberpunk 1.6 definitely is a bit more sluggish than I remember.


...This DDR5 thread has lots of useful info (and lots of pages; work backwards ?...)


----------



## gfunkernaught

J7SC said:


> ...This DDR5 thread has lots of useful info (and lots of pages; work backwards ?...)


Cool I'll check it out. Did you see any difference in performance using the 520w bios vs the 1kw bios with the power limited to 520w or so? Have you ever tried that? Maybe I'll give that a shot just to see.


----------



## J7SC

gfunkernaught said:


> Cool I'll check it out. Did you see any difference in performance using the 520w bios vs the 1kw bios with the power limited to 520w or so? Have you ever tried that? Maybe I'll give that a shot just to see.


...I never installed the 1kw vbios (though downloaded four of them)...the stock 450W r_bar is good for 90% of what I do and the installed KPE 520W r_bar is for 'special occasions' ie. benching in the winter time.

...on DDR5, make sure to review the segments on PMIC and cooling, especially if you plan to oc the DDR5.


----------



## gfunkernaught

J7SC said:


> ...I never installed the 1kw vbios (though downloaded four of them)...the stock 450W r_bar is good for 90% of what I do and the installed KPE 520W r_bar is for 'special occasions' ie. benching in the winter time.
> 
> ...on DDR5, make sure to review the segments on PMIC and cooling, especially if you plan to oc the DDR5.


You were able to go above 2190mhz on the kpe 520w for benchmarks?

Which four? I only know of the kingpin and Asus.


----------



## J7SC

gfunkernaught said:


> You were able to go above 2190mhz on the kpe 520w for benchmarks?
> 
> Which four? I only know of the kingpin and Asus.


...Galax and the pre-r_Bar Kingpin. As posted before, my card can do >2200 in benches on the stock bios, but the extra PL of the KPE520W r_Bar works great...


----------



## gfunkernaught

J7SC said:


> ...Galax and pre r_Bar Kingpin. As posted before, my card can > 2200 in benches on stock bios, but extra PL of KPE520W r_Bar works great...


So when you bench SuperPos8k or TS/PR with the KPE520w r_Bar, the _effective_ clock stays at ~2200mhz? All three of those benchmarks pull more than 520w so why doesn't GPU Boost knock you down a few bins?


----------



## J7SC

gfunkernaught said:


> So when you bench SuperPos8k or TS/PR with the KPE520w r_Bar, the _effective_ clock stays at ~2200mhz? All three of those benchmarks pull more than 520w so why doesn't GPU Boost knock you down a few bins?


...as recently posted, it _does_ knock me down a few bins. Also, max vcore is low, so more headroom in a given PL budget, especially with the controlled temps
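GPU Boost moves Ampere clocks in roughly 15 MHz bins, so "knocked down a few bins" can be pictured as stepping the clock down until the draw fits the power limit - and a lower vcore means fewer watts per bin, hence more headroom in the same budget. A toy model (the watts-per-bin figure and the 560 W starting point are invented for illustration, not measured):

```python
BIN_MHZ = 15  # GPU Boost adjusts Ampere clocks in ~15 MHz steps

def sustained_clock(target_mhz, power_at_target_w, power_limit_w, watts_per_bin=10.0):
    """Step down one bin at a time until the modelled draw fits the limit."""
    clock, power = target_mhz, power_at_target_w
    while power > power_limit_w:
        clock -= BIN_MHZ
        power -= watts_per_bin
    return clock

# e.g. a 2205 MHz target drawing 560 W against a 520 W limit drops 4 bins:
print(sustained_clock(2205, 560.0, 520.0))  # 2145
```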


----------



## gfunkernaught

J7SC said:


> ...as recently posted, it _does_ knock me down a few bins. Also, max vcore is low, so more headroom in a given PL budget, especially with the controlled temps
> View attachment 2571511


I do remember your posts, but I thought you were using an unlimited/unlocked bios like the 1kw when I saw them.


----------



## gfunkernaught

Personal TSE best. I forgot to enable rbar globally. 








I scored 11 469 in Time Spy Extreme - Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)





And my PR score seems to be back to "normal". 








I scored 15 634 in Port Royal - Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)





Also beat all my previous SP8k scores.









Was playing Cyberpunk for about an hour and performance was great, actually a lot better than on my previous system; the core peaked at 48c though, water was 35c with 25c ambient. Once winter rolls in I want to take another shot at TSE at 2205mhz. TSE is def more sensitive to the OC than PR on my system.


----------



## J7SC

...meanwhile, some more leaked AD102+ TSE Graphics scores😄 ...our 3090's score is likely to be around that of a 4080...


----------



## gfunkernaught

J7SC said:


> ...meanwhile, some more leaked AD102+ TSE Graphics scores😄 ...our 3090s score is likely to be around that of a 4080...
> View attachment 2571800


Wild. 4090 Ti is >2x a 3090?


----------



## Martin778

Finally some good progress! With the 3090 I already didn't have to turn on the heating during the winter months.


----------



## noterbel

Hello,
I've owned a 3090 FE since they first came to market. (lucky early owner)

Recently I managed to push my 3090 FE a bit further, as the RTX 4000s are coming.
I stepped the power limit up to 113% with EVGA X1, overclocked the GPU (+95) and RAM (+100), and locked the boost clock to always stay at max.
The card was heating up too much and always throttled from 2GHz down to 1.8GHz~1.9GHz, so I bought a watercooling solution with an active backplate.
It's nice with the lower temps, but I'm stuck at the power limits. After some reading I found a few articles about flashing possibilities for non-Founders cards (they are confusing, by the way).
Some say nothing is possible; some say it's possible to flash a 2x8-pin-compatible bios over Founders cards, for example the Galax 2x8-pin with its 1000w limit.

So I was trying to find a way to edit Founders bioses before attempting to flash my Founders card with a non-Founders bios.
Then I saw in this thread, #7,648 · Dec 20, 2020 by @bmgjet, that ABE004 had the editing bit disabled.
I found your github repo and v0.06 of ABE, but I'm still not able to modify anything.
I tried with DisableControls() in Form1.cs, but after compilation it still failed to enable editing.

Can you tell me how to enable editing of bios settings with ABE (source code editing, or a special release not available on your github repo)?

By the way, there is another thing I haven't solved: is there a way to hard shunt-mod the power stages on Founders boards, like on custom cards, to make the card think it's using less power than it really is, to override the power limitation?

Best regards


----------



## yzonker

noterbel said:


> Hello,
> I've owned a 3090 FE since they first came to market. (lucky early owner)
> 
> Recently I managed to push my 3090 FE a bit further, as the RTX 4000s are coming.
> I stepped the power limit up to 113% with EVGA X1, overclocked the GPU (+95) and RAM (+100), and locked the boost clock to always stay at max.
> The card was heating up too much and always throttled from 2GHz down to 1.8GHz~1.9GHz, so I bought a watercooling solution with an active backplate.
> It's nice with the lower temps, but I'm stuck at the power limits. After some reading I found a few articles about flashing possibilities for non-Founders cards (they are confusing, by the way).
> Some say nothing is possible; some say it's possible to flash a 2x8-pin-compatible bios over Founders cards, for example the Galax 2x8-pin with its 1000w limit.
> 
> So I was trying to find a way to edit Founders bioses before attempting to flash my Founders card with a non-Founders bios.
> Then I saw in this thread, #7,648 · Dec 20, 2020 by @bmgjet, that ABE004 had the editing bit disabled.
> I found your github repo and v0.06 of ABE, but I'm still not able to modify anything.
> I tried with DisableControls() in Form1.cs, but after compilation it still failed to enable editing.
> 
> Can you tell me how to enable editing of bios settings with ABE (source code editing, or a special release not available on your github repo)?
> 
> By the way, there is another thing I haven't solved: is there a way to hard shunt-mod the power stages on Founders boards, like on custom cards, to make the card think it's using less power than it really is, to override the power limitation?
> 
> Best regards


The 3090 FE can't be flashed with a bios from another vendor. Unfortunately it's the only 3090 that can't AFAIK.

Even if you manage to edit the bios, you won't be able to flash it with any version of NVFlash I've ever found. You would have to hardware flash it.

Shunt modding can be done. I've never tried it myself. This thread might be useful, although the best method is to actually solder the shunts rather than the method shown in the first post, I think.









How To: Easy mode Shut Modding. - basic instructions for an easy to do and easy to remove shunt mod method using conductive paint (www.overclock.net)
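The electrical idea behind a shunt mod is simple: the card infers current from the voltage drop across a known sense resistance, so lowering the real resistance (conductive paint or a resistor soldered in parallel) makes it under-report its draw. A sketch with assumed values - the 5 mOhm shunt and the 350 W reading are illustrative, not taken from any specific card:

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005  # assume a 5 mOhm current-sense shunt
R_ADDED = 0.005  # identical resistor soldered (or painted) across it

r_modded = parallel(R_STOCK, R_ADDED)   # halves the sense resistance
report_factor = r_modded / R_STOCK      # card now reads ~0.5x the real current
real_draw_w = 350 / report_factor       # a reported 350 W is really ~700 W
print(round(real_draw_w))
```

This is also why shunt mods are risky: the power limiter still "works", it just enforces its limit against a number that is a fixed fraction of reality.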


----------



## gfunkernaught

I ran another test this time enabling per P-Core HT. Interesting results...
TSE P-Core HT OFF
I scored 11 469 in Time Spy Extreme - Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)

TSE P-Core HT ON
I scored 11 537 in Time Spy Extreme - Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)

PR P-Core HT OFF
I scored 15 634 in Port Royal - Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)

PR P-Core HT ON
I scored 15 567 in Port Royal - Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)


----------



## yzonker

More leaks, 









ZOTAC GeForce RTX 4090 graphics card has been leaked - VideoCardz.com: ZOTAC GeForce RTX 4090 AMP Extreme AIRO pictured; the leak confirms the upcoming high-end GPU and a new logo design (videocardz.com)


----------



## yzonker

gfunkernaught said:


> I ran another test this time enabling per P-Core HT. Interesting results...
> TSE P-Core HT OFF: I scored 11 469 in Time Spy Extreme
> TSE P-Core HT ON: I scored 11 537 in Time Spy Extreme
> PR P-Core HT OFF: I scored 15 634 in Port Royal
> PR P-Core HT ON: I scored 15 567 in Port Royal


Interesting, I had forgotten to test HT off for PR. Looks like the same small gain (if any) as Ryzen. If I get time I might give it another go this weekend as a last showing for my 3090. Feels like it's going to be old news pretty soon.


----------



## yzonker

Sorry for multiple posts, but another semi random thought. Isn't it amazing timing that ETH mining is dying this week? 

This is supposed to be a live countdown by tracking TTD. 






- Ethereum Merge Countdown - Big Gamba: counting the days to the Ethereum PoS merge based on the TTD (biggamba.com)


----------



## gfunkernaught

Has anyone here tried Fallen Order at 8k? I was running it at 8k, locked 30fps, with dynamic resolution enabled. Sometimes the framerate dips to 28-29fps, but overall a good high-fidelity experience.


----------



## gfunkernaught

yzonker said:


> Interesting, I had forgotten to test HT off for PR. Looks like the same small gain (if any) as Ryzen. If I get time I might give it another go this weekend as a last showing for my 3090. Feels like it's going to be old news pretty soon.


Indeed. I learned that if I'm buying new hardware, buy the best money can buy...goodbye midrange anything! 
4090 ti Kingpin!😁


----------



## yzonker

Gotta make some room.


----------



## gfunkernaught

yzonker said:


> Gotta make some room.
> 
> View attachment 2571939


You're gonna need a bigger boat  

Btw, what's the lowest voltage you got on your 12900k? So far I got mine (KS) down to 1.315v. Running Cinebench each time just to see if it will crash - it hasn't yet, and the score keeps rising. The max all-P-core freq is mostly 5.2ghz, e-cores 4ghz, max 270w. Again, I haven't touched the ratios or anything except lowering the vcore, and the MSI cooling profile setting is set to water cooling (4096w).


----------



## J7SC

yzonker said:


> Gotta make some room.
> 
> View attachment 2571939


...


----------



## gfunkernaught

J7SC said:


> ...


bash hypemachine --spice


----------



## yzonker

Oh baby, isn't this the same one we've been buying?! 






TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey: Thermal Silicone Putty, 50 gram container, from t-Global Technology (www.digikey.com)


----------



## J7SC

yzonker said:


> Oh baby, isn't this the same one we've been buying?!
> 
> 
> 
> 
> 
> 
> TG-PP10-50 t-Global Technology | Fans, Thermal Management | DigiKey: Thermal Silicone Putty, 50 gram container, from t-Global Technology (www.digikey.com)


...looks like it...prior supply chain issues ?


----------



## yzonker

J7SC said:


> ...looks like it...prior supply chain issues ?


Says it's obsolete and will not be restocked. I bought 3. 😁


----------



## J7SC

yzonker said:


> Says it's obsolete and will not be restocked. I bought 3. 😁


...same here! They're at the bottom of the fridge, waiting... BTW, my 6900XT and 3090 Strix, which have both had extensive TG thermal putty for well over a year, show exactly the same load temps when adjusted for ambient...


----------



## yzonker

J7SC said:


> ...same here ! They're at the bottom of the fridge, waiting... BTW, both my 6900XT and 3090 Strix which both have extensive TG thermal putty since well over a year ago show exactly the same load temps when adjusted for ambient...


Yea same. Neither of my card mounts have degraded any either. Both the putty and Kryonaut Extreme have held up really well.


----------



## gfunkernaught

@J7SC @yzonker Thermal putty should be refrigerated? I'm about to get a bunch as well.

EDIT: Just read it has a 6 month shelf life, so yeah.


----------



## yzonker

gfunkernaught said:


> @J7SC @yzonker Thermal putty should be refrigerated? I'm about to get a bunch as well.
> 
> EDIT: Just read 6 month shelf life so yeah.


Certainly doesn't hurt, particularly if you have opened it.


----------



## gfunkernaught

yzonker said:


> Certainly doesn't hurt, particularly if you have opened it.


Wondering if I should freeze it for long term storage, then when/if I need to use it, let it thaw. You think freezing the putty could cause separation?


----------



## yzonker

gfunkernaught said:


> Wondering if I should freeze it for long term storage, then when/if I need to use it, let it thaw. You think freezing the putty could cause separation?


I'm going to guess that it would be fine, but I don't know that for certain. Haven't tried it.


----------



## ssgwright

thermal putty? this is a first for me, how is it used? on the memory?


----------



## J7SC

ssgwright said:


> thermal putty? this is a first for me, how is it used? on the memory?


Thermal putty is great for VRAM (and also parts of the VRM)...it conforms exactly to the available space as it is soft and 'squishes'. The 3090 Strix got the same treatment as the 6900XT putty VRAM below (other than of course putty on both front and back VRAM). In addition, both GPUs have a big wallop of thermal putty right on the back of the GPU die to transfer directly to the extra heatsink. 

Post-putty-application temps >here


----------



## ssgwright

J7SC said:


> Thermal putty is great for VRAM (and also parts of the VRM)...it conforms exactly to the available space as it is soft and 'squishes'. The 3090 Strix got the same treatment as the 6900XT putty VRAM below (other than of course putty on both front and back VRAM). In addition, both GPUs have a big wallop of thermal putty right on the back of the GPU die to transfer directly to the extra heatsink.
> 
> Post-putty-application temps >here
> View attachment 2572096


hmm i might have to give this a try! thanks!


----------



## gfunkernaught

ssgwright said:


> hmm i might have to give this a try! thanks!


I had to use a flat metal tool, like the one that comes with the ifixit toolkit, to apply the putty. It kept sticking to my fingers even after constantly wiping them with 99% isopropyl. Keep that in mind. I don't think latex gloves would be good either. Maybe use some flat plastics, like guitar picks, to handle the putty.


----------



## ArcticZero

Yikes. I may have to stock up on this. Just as concerned about the shelf life - if refrigeration will prolong it, then I can keep it alongside my solder paste.


----------



## yzonker

BTW, the specs show storage down to -50C, so I would assume freezing the thermal putty is fine.


----------



## gfunkernaught

I think I'm buying a kg of putty....🤔


----------



## yzonker

Digikey was fast. My order should be here tomorrow. I wonder if that 6 month shelf life is worst case given the very high temp they give for storage?


----------



## gfunkernaught

yzonker said:


> Digikey was fast. My order should be here tomorrow. I wonder if that 6 month shelf life is worst case given the very high temp they give for storage?


Room temp maybe? Shelf life in a warehouse?


----------



## yzonker

gfunkernaught said:


> Room temp maybe? Shelf life in a warehouse?


Maybe more. The spec shows -50C to 200C! I just can't believe it's 6 months under good storage since what we buy is quite possibly already that old (or older). Seems way too short except possibly under extreme conditions.


----------



## gfunkernaught

yzonker said:


> Maybe more. The spec shows -50C to 200C! I just can't believe it's 6 months under good storage since what we buy is quite possibly already that old (or older). Seems way too short except possibly under extreme conditions.


Like you said, air exposure. Maybe it dries out? I wouldn't even know how to moisturize thermal putty. Putty goes into retirement after a while, collects a pension 😆


----------



## stahlhart

yzonker said:


> Says it's obsolete and will not be restocked. I bought 3. 😁


Shelf life 6 months.  (Hopefully only after the container is opened?)

I got this stuff. Strange how one can be stated as only six months but another as "practically infinite".


----------



## gfunkernaught

stahlhart said:


> Shelf life 6 months.  (Hopefully only after the container is opened?)
> 
> I got this stuff. Strange how one can be stated as only six months but another as "practically infinite".


That stuff isn't as conductive as the TG. Thermal putty in general was a game changer tho. I don't use pads on my GPU anymore, only paste. Like others here have stated, thermal performance has been consistent and hasn't degraded since the first application. Actually, I do have pads on the outside of the active backplate with a heatsink, but that's to dissipate heat from the PCIe power area of the GPU. I also added a couple of tiny fans just to remove heat from that area.


----------



## stahlhart

Just grabbed a couple of pots myself.


----------



## max883

Got a cheap Asus RTX 3090 24GB TUF OC. Think I got a golden one. Undervolted at 0.900v, GPU +250 = 2,020 MHz.

Max temp 62c, max fan speed 1430 rpm, so it's silent.


----------



## yzonker

That should be enough for a 4090.


----------



## J7SC

stahlhart said:


> Shelf life 6 months.  (Hopefully only after the container is opened?)
> 
> I got this stuff. Strange how one can be stated as only six months but another as "practically infinite".


As stated before, TG 10 is actually vacuum-sealed, and once applied it lasts far beyond that.


yzonker said:


> *That should be enough for a 4090.*
> 
> View attachment 2572327


...but is it enough for a 4090 _Ti_?


----------



## gfunkernaught

Or a 409090 Ti Super Ultra 128GB Edition?


----------



## yzonker

gfunkernaught said:


> Or a 409090 Ti Super Ultra 128GB Edition?


You forgot SLI.  Assuming that's a thing anymore with 40 series...


----------



## gfunkernaught

yzonker said:


> You forgot SLI.  Assuming that's a thing anymore with 40 series...


409090 Premium Plus Platinum SLI


----------



## KedarWolf

4190 Ti, dual GPU in one video card.


----------



## gfunkernaught




----------



## 7empe

gfunkernaught said:


>


No more kingpin. Neither the gpu nor bios for other vendors :/


----------



## gfunkernaught

7empe said:


> No more kingpin. Neither the gpu nor bios for other vendors :/


Vince is too good not to be hired by another vendor. But knowing tech people, you can't stop them from releasing unlocked and unlimited bioses.


----------



## yzonker

Well that just sucks. Although I have been planning to try to grab a Strix, and possibly get a FTW3 if I couldn't score one. This will probably put a lot more sales pressure on the Strix too.


----------



## J7SC

yzonker said:


> Well that just sucks. Although I have been planning to try to grab a Strix and possibly get a FTW3 if I couldn't score one. This will put a lot more sales pressure on the Strix too probably.


...given my very positive experience with the 3090 Strix OC, I'll probably get the Strix 4090 Ti version if / when...


----------



## GRABibus

J7SC said:


> ...given my very positive experience with the 3090 Strix OC, I probably get the Strix 4090 Ti version if / when...


Galax HOF 😜


----------



## yzonker

J7SC said:


> ...given my very positive experience with the 3090 Strix OC, I probably get the Strix 4090 Ti version if / when...


Yea my previous card was a 2080S Strix. Always liked that card.


----------



## gfunkernaught

Do you think NVIDIA would ever make tiered versions of their GPUs - an NVIDIA equivalent to a Strix or Kingpin?


----------



## Nizzen

7empe said:


> No more kingpin. Neither the gpu nor bios for other vendors :/


Strix Kingpin edition next?

At least Galax/KFA2 Hall Of Fame is still alive.

I like my 3090 HOF with the software voltage tool


----------



## J7SC

GRABibus said:


> Galax HOF 😜


...I contacted Galax / KFA a while back about a 2080 Ti, but they don't sell here in the Great White North, nor do they honor warranties if imported. Asus Strix it will have to be, whether a 4090 / Ti or that 'rumoured' AMD 7900 XTX...


----------



## Pauliesss

I have a Gainward RTX3090 Phoenix "Golden Sample" and want to try how far I can push it since I only score about 9600 points in TSE.
What would be the best BIOS to use?
I have a pretty good airflow case (Fractal Torrent) and a 1600W PSU.


----------



## 7empe

Pauliesss said:


> I have a Gainward RTX3090 Phoenix "Golden Sample" and want to try how far I can push it since I only score about 9600 points in TSE.
> What would be the best BIOS to use?
> I have a pretty good airflow case (Fractal Torrent) and a 1600W PSU.


EVGA 520W (1920 MHz base boost clock).


----------



## J7SC

7empe said:


> EVGA 520W (1920 MHz base boost clock).


...the EVGA 520W is a vbios for 3x8-pin, while the Gainward RTX3090 Phoenix "Golden Sample" is a 2x8...it will still work though, as long as this is taken into account w/ the power slider etc.


----------



## yzonker

J7SC said:


> ...the EVGA 520W / is a vbios for 3x8 pin, while the Gainward RTX3090 Phoenix "Golden Sample" is a 2x8...will still work though, as long as this is taken into account w/ power slider etc.


No, flashing anything other than the KP 1kw bios (i.e. any 3x8-pin bios) will result in a lower PL. I've tried the KP 520w bios on my Zotac and ended up with just over a 300w PL, IIRC.
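The PL drop yzonker describes is easier to picture if you model a vbios power limit as a sum of per-rail budgets, where a connector the board doesn't have contributes nothing. The split below is invented for illustration - real vbios power tables vary per card and don't divide this evenly:

```python
def effective_limit_w(vbios_rails, board_rails):
    """Sum the budgets only for input rails the board actually has."""
    return sum(w for rail, w in vbios_rails.items() if rail in board_rails)

# hypothetical split of a 520 W, 3x8-pin vbios
kp520 = {"pcie_slot": 70, "8pin_1": 150, "8pin_2": 150, "8pin_3": 150}
two_by_eight_board = {"pcie_slot", "8pin_1", "8pin_2"}

# the missing third rail's budget is simply lost on a 2x8-pin board
print(effective_limit_w(kp520, two_by_eight_board))  # 370
```

Under this (rough) model, a 2x8-pin card running a 3x8-pin vbios lands well under the headline limit, which is in the same ballpark as the ~300w yzonker observed.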


----------



## yzonker

Well, I took another run at 3DMark with my new rig. I did better my CPU scores and got a decent TSE run in, but no progress on the graphics scores. The TS run below is actually 50pts lower than what I did last time. I think the conclusion is that a well-tuned Ryzen 5000 rig will score just as well in the graphics tests, so the notion that Intel scores better is just internet folklore and not correct. Still like my new rig though - much nicer with the bigger case, bigger rads, 12900k, and OCTO controller (really like the OCTO).









I scored 23 986 in Time Spy - Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)

I scored 12 057 in Time Spy Extreme - Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)







Spoiler: Pics


----------



## Xdrqgol

Hi everyone,

Just got a *GALAX RTX 3090 24 GB EX Gamer Black* - which bios should I look for if I want to pull a bit more W?

Currently I am on the default 370W / maximum 390W.
The card is a 2x8-pin.

If anyone has one and would recommend a certain bios, I would super appreciate it.

Thanks in advance!


----------



## gfunkernaught

yzonker said:


> Well I took another run at 3DMark with my new rig. I did better my CPU scores and got a decent TSE run in, but no progress on graphics scores. The TS run below is actually 50pts lower than what I did the last time. I think the conclusion is that a good tuned Ryzen 5000 rig will score just as well in the graphics tests. So the notion that Intel scores better is just internet folklore and not correct. Still like my new rig though. Much nicer with the bigger case, bigger rads, 12900k, and OCTO controller (really like the OCTO).
> 
> I scored 23 986 in Time Spy (www.3dmark.com)
> 
> I scored 12 057 in Time Spy Extreme (www.3dmark.com)

What vcore did you set for that 5.5ghz oc?


----------



## yzonker

gfunkernaught said:


> What vcore did you set for that 5.5ghz oc?


1.45v with minimum droop. It was only pulling about 300w for the CPU portion of the benches. I might be able to go a little lower, but temps were fine since I was using the chiller. I know it will crash at 1.38v though, so the minimum is somewhere between the two. 5.6ghz seems to be a no go even with more voltage (tried 1.5v, still crashed, didn't try higher).
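Narrowing "crashes at 1.38v, fine at 1.45v" down to a minimum is a textbook bisection. A sketch, where `is_stable` is a hypothetical stand-in for an actual stress run at each voltage (in practice each call is a BIOS change plus a Ycruncher/Cinebench session):

```python
def min_stable_voltage(lo, hi, is_stable, step=0.005):
    """Bisect between a known-bad (lo) and known-good (hi) vcore.

    is_stable(v) is a placeholder for a real stress test at voltage v.
    """
    while hi - lo > step:
        mid = (lo + hi) / 2
        if is_stable(mid):
            hi = mid   # mid passed: the minimum is at or below mid
        else:
            lo = mid   # mid crashed: the minimum is above mid
    return hi

# e.g. if everything at or above 1.42v happened to be stable:
print(round(min_stable_voltage(1.38, 1.45, lambda v: v >= 1.42), 3))  # ~1.424
```

With a 70 mv window and a 5 mv resolution, that converges in about four or five stress runs instead of sweeping every step.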

Runs were done with 5.5/4.4/4.6.

Also I forgot to mention the TS run is with reBar forced so the cpu score is a bit nerfed because of it. I don't think that's true for TSE though. As I recall TSE isn't significantly affected by reBar for whatever reason.


----------



## yzonker

Xdrqgol said:


> Hi everyone,
> 
> Just got a *GALAX RTX 3090 24 GB BIOS EX Gamer Black *, which bios should I look for if I want to pull a bit more W?
> 
> Currenlty I am on Defaul 370W /Maximum 390W
> The card is a 2x8pin
> 
> If anyone has one and would recommend a certain bios, i would supper appreciate.
> 
> 
> thanks in advance!


Go up 2 posts above yours and you will have the answer.


----------



## gfunkernaught

yzonker said:


> 1.45v with minimum droop. It was only pulling about 300w for the CPU portion of the benches. I might be able to go a little lower but temps were fine since I was using the chiller. I know it will crash at 1.38v though, so somewhere between the two is the minimum. 5.6ghz seems to be a no go even with more voltage though (tried 1.5v, still crashed, didn't try higher).
> 
> Runs were done with 5.5/4.4/4.6.
> 
> Also I forgot to mention the TS run is with reBar forced so the cpu score is a bit nerfed because of it. I don't think that's true for TSE though. As I recall TSE isn't significantly affected by reBar for whatever reason.


Ah, the chiller - that explains it. I'm at 5.3/4.1/ring on auto, so I don't know how to check what the actual freq is; vcore is set to 1.275v, boots at 1.265v, and droops to 1.263v. I was testing it with Prime95, and since I can't keep the chip cool it kept smacking 100c and throttling. I did see that EK made a Peltier block, but not for LGA1700.


----------



## Xdrqgol

yzonker said:


> Go up 2 posts above yours and you will have the answer.


This one, you mean? EVGA 520W (1920 MHz base boost clock)?


----------



## yzonker

gfunkernaught said:


> Ah the chiller it explains it. I'm at 5.3/4.1/ring is set to auto so I don't know how to check what the actual freq is, vcore is set to 1.275v, boots to 1.265v and droops to 1.263v. I was testing it with prime95 and since I can't keep the chip cool, it kept smacking 100c and throttling. I did see the ek made a peltier block but not for lga1700.


That's pretty much identical to mine. My daily is 5.2/4.1/4.3 at about 1.28v full load. That's as high as I can go without thermal throttling during the Ycruncher stress tests. Same as you, I can run 5.3 just fine, but I can't confirm 100% stability since it will thermal throttle during testing. So I just settled on 5.2. I don't like crashes, and nothing I do daily on the machine would benefit from another 100mhz. I could probably even game at 5.4, but no benefit at 4k at least.


----------



## yzonker

Xdrqgol said:


> This one, you mean? The EVGA 520W (1920 MHz base boost clock)?


No my post in regards to the Kingpin 1kw bios.


----------



## gfunkernaught

yzonker said:


> That's pretty much identical to mine. My daily is 5.2/4.1/4.3 at about 1.28v full load. That's as high as I can go without thermal throttling during the Ycruncher stress tests. Same as you, I can run 5.3 just fine, but I can't confirm 100% stability since it will thermal throttle during testing. So I just settled on 5.2. I don't like crashes, and nothing I do daily on the machine would benefit from another 100mhz. I could probably even game at 5.4, but no benefit at 4k at least.


I honestly don't see a performance gain from 100mhz. I actually watched cinebench score higher as I dropped the vcore keeping ratios on auto. I wonder what the S means in KS? (sarcasm) 😆

EDIT: Ok well maybe the S isn't marketing nonsense. I just tried 5.2/4.1/4.3 with 1.25v set in the bios, boots to 1.24v, load 1.235v and ran cinebench for a little bit. The max temp was 86c.


----------



## GRABibus

Xdrqgol said:


> Hi everyone,
> 
> Just got a *GALAX RTX 3090 24 GB BIOS EX Gamer Black *, which bios should I look for if I want to pull a bit more W?
> 
> Currently I am on Default 370W / Maximum 390W
> The card is a 2x8pin
> 
> If anyone has one and would recommend a certain BIOS, I would super appreciate it.
> 
> 
> thanks in advance!


Bios EVGA Kingpin 1000W with ReBar 









EVGA RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)





At your own risk.


----------



## J7SC

yzonker said:


> No, anything other than the KP 1kw bios will result in a lower PL (any 3x8pin). I've tried the KP 520w bios on my Zotac and ended up with just over 300w PL IIRC.


...technically it should be around 346 W, but either way, the 1kW vbios would give more oomph, as long as he's cautious and has the cooling for it.


----------



## yzonker

GRABibus said:


> Bios EVGA Kingpin 1000W with ReBar
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA RTX 3090 VBIOS, 24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory (www.techpowerup.com)
> 
> 
> 
> 
> 
> At your own risk.


We need a sticky link to that bios. LOL. I was on my phone and didn't want to try to dig it up. Thanks for helping out.


----------



## yzonker

gfunkernaught said:


> I honestly don't see a performance gain from 100mhz. I actually watched cinebench score higher as I dropped the vcore keeping ratios on auto. I wonder what the S means in KS? (sarcasm) 😆
> 
> EDIT: Ok well maybe the S isn't marketing nonsense. I just tried 5.2/4.1/4.3 with 1.25v set in the bios, boots to 1.24v, load 1.235v and ran cinebench for a little bit. The max temp was 86c.


The "S" is Special edition. They are binned to the point of guaranteeing the stock clocks will be stable. Since they are raised above the 12900k, you will generally not get a lottery loser. There have been people that binned a bunch of these (maybe Der8auer covered this, or maybe Buildzoid). The Asus SP scores generally are in the 90-100 range for the KS. There are definitely K's that go below 90 which are the lottery losers. Golden samples go well above 100 though (higher #'s obviously better). 

I'd guess mine is around 90 from what I've seen. Before declaring victory on your [email protected], run the yCruncher stress tests (menu items 1, 7, 0) and get back to us.


----------



## gfunkernaught

yzonker said:


> The "S" is Special edition. They are binned to the point of guaranteeing the stock clocks will be stable. Since they are raised above the 12900k, you will generally not get a lottery loser. There have been people that binned a bunch of these (maybe Der8auer covered this, or maybe Buildzoid). The Asus SP scores generally are in the 90-100 range for the KS. There are definitely K's that go below 90 which are the lottery losers. Golden samples go well above 100 though (higher #'s obviously better).
> 
> I'd guess mine is around 90 from what I've seen. Before declaring victory on your [email protected], run the yCruncher stress tests (menu items 1, 7, 0) and get back to us.


Been running it for about 20min now, halfway through the 2nd iteration. I think the 2nd or 3rd test hits 100c, which I don't like. I'm starting to shop around for a better block. Heatkillers are coming up a lot.


----------



## yzonker

gfunkernaught said:


> Been running it for about 20min now, halfway through the 2nd iteration. I think the 2nd or 3rd test hits 100c, which I don't like. I'm starting to shop around for a better block. Heatkillers are coming up a lot.


That's good. I'm surprised you are throttling though at that voltage. Here's mine running test #12. I highlighted the core voltage I'm looking at in the EVGA section. I'm not sure how to see that voltage in the other HWINFO sections. Seems like it should be there but I've never found it. I have the Corsair XC7 Pro block. Don't think it's anything amazing.

We should take this over to the 12900k OC'ing thread though and not bore the 3090 guys with our CPU talk.


----------



## freddy85

Has anyone tried to flash the GALAX RTX 3090 24 GB HOF Limited Edition 500W BIOS to a Zotac RTX 3090 AMP Extreme Holo (460W) card?


----------



## TheDidgeridooMan

Hi! Does anyone here know how to fix the checksum of a BIOS file? I edited the BIOS for my 3090 Ti but it isn't supported by ABE, so I have no way of fixing the BIOS checksum. Is it possible to do by hand?
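For context, this is roughly what "fixing by hand" means for legacy (unsigned) images: a classic PCI option ROM is valid when all of its bytes sum to zero mod 256, so you patch one pad byte to rebalance the sum. A minimal sketch of that generic rule (note: the 3090 Ti vBIOS is cryptographically signed, so a checksum fix alone won't make an edited image flashable; this only illustrates the arithmetic):

```python
def fix_rom_checksum(image: bytes) -> bytearray:
    """Patch the last byte so the 8-bit sum of the whole image is zero
    (the classic legacy PCI option-ROM checksum rule)."""
    buf = bytearray(image)
    # New last byte = old last byte minus current total, mod 256,
    # which makes the overall sum congruent to 0 mod 256.
    buf[-1] = (buf[-1] - sum(buf)) & 0xFF
    return buf
```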


----------



## KedarWolf

TheDidgeridooMan said:


> Hi! Does anyone here know how to fix the checksum of a BIOS file? I edited the BIOS for my 3090 Ti but it isn't supported by ABE, so I have no way of fixing the BIOS checksum. Is it possible to do by hand?


Try the WinRaid forums. They have peeps who can do stuff like that.

Start as a thread in the BIOS mod forum.


----------



## TheDidgeridooMan

KedarWolf said:


> Try the WinRaid forums. They have peeps who can do stuff like that.
> 
> Start as a thread in the BIOS mod forum.


Thanks I'll try that!


----------



## Lurifaks

Edit: NVM , figured it out!


----------



## gfunkernaught

I've asked this before but didn't get a response. Really curious though. Has anyone tried running any games at 8k? Even with DSR? I'm curious to know how other 3090s are performing with that kind of load.


----------



## GosuPl

gfunkernaught said:


> I've asked this before but didn't get a response. Really curious though. Has anyone tried running any games at 8k? Even with DSR? I'm curious to know how other 3090s are performing with that kind of load.


RTX 3080 Ti STRIX vs RTX 3090 STRIX - 3440x1440 / 5160x2160 / 6880x2880. Not 8k, but not far from it ;-) DSR OFF / DSR ON with DLSS OFF / DSR ON with DLSS Q.
Cards at max PT (default BIOSes), with both core and mem OC'd.

My own channel, test starts from 0:33


----------



## gfunkernaught

GosuPl said:


> RTX 3080 Ti STRIX vs RTX 3090 STRIX - 3440x1440 / 5160x2160 / 6880x2880. Not 8k but not far from ;-) DSR OFF / DSR ON with DLSS OFF and DSR ON with DLSS Q
> Cards with max PT (def bioses) and OC both core and mem
> 
> My own channel, test starts from 0:33


Cool! So while I haven't even tried 8k in Cyberpunk, I ditched DLSS and started using Dynamic Resolution at 4k native, with 60% min res, 100% max res, and a 60fps target. Both image quality and performance are much better than with DLSS, even Quality DLSS. I can tell the resolution rarely drops to 60%, because anything less than 1440p is noticeably jagged and poor-looking. I also noticed lower overall power usage. Your results may vary; maybe it's just a good match for my TV.

Crazy how the 3080 Ti is just a bit slower than a 3090.

Btw what language is that?


----------



## emc02

Hi, I have upgraded my eGPU from a 1080 to a 3090 (see signature) and am a bit disappointed in the idle power consumption.
I am using the system many hours daily for office apps, so I don't need much power. I had an Afterburner profile for the 1080 with the lowest clock speed/voltage (600mV) settings and about 16W power draw.
With the 3090 the lowest voltage is 737mV, the lowest core clock is 225MHz (that's nice) and memory is at 405MHz. Power draw is about 26W.
But the memory clock sometimes jumps up to 9501MHz and then the card uses nearly 100W.
How can that be locked?
Or how could a lower voltage be achieved?
Are there any tools to mod the BIOS?

Thanks!


----------



## gfunkernaught

emc02 said:


> Hi, I have upgraded my eGPU from a 1080 to a 3090 (see signature) and am a bit disappointed in the idle power consumption.
> I am using the system many hours daily for office apps, so I don't need much power. I had an Afterburner profile for the 1080 with the lowest clock speed/voltage (600mV) settings and about 16W power draw.
> With the 3090 the lowest voltage is 737mV, the lowest core clock is 225MHz (that's nice) and memory is at 405MHz. Power draw is about 26W.
> But the memory clock sometimes jumps up to 9501MHz and then the card uses nearly 100W.
> How can that be locked?
> Or how could a lower voltage be achieved?
> Are there any tools to mod the BIOS?
> 
> Thanks!


Use profile shortcuts in Afterburner. First create a profile. Slide the power, core and mem sliders all the way to the left, hit apply, and save the profile. Then click the defaults button and save that to a different profile.
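If switching the two saved profiles by hand gets tedious, Afterburner also accepts a profile slot on its command line (-Profile1 through -Profile5), so the toggle can be scripted or put behind a desktop shortcut. A rough sketch, assuming the default install path (adjust to your system):

```python
import subprocess

# Assumed default install location; change if Afterburner lives elsewhere.
AFTERBURNER = r"C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe"

def profile_cmd(slot: int) -> list[str]:
    """Build the command line that loads Afterburner profile slot 1-5."""
    if not 1 <= slot <= 5:
        raise ValueError("Afterburner exposes five profile slots")
    return [AFTERBURNER, f"-Profile{slot}"]

def switch_profile(slot: int, dry_run: bool = True) -> None:
    """Load the given profile; dry_run just previews the command."""
    cmd = profile_cmd(slot)
    if dry_run:
        print(" ".join(cmd))
    else:
        subprocess.run(cmd, check=True)
```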


----------



## emc02

gfunkernaught said:


> Use profile shortcuts in afterburner. First create a profile. Slide the power, core and mem sliders all the way to the left, hit apply, save the profile. Then click the defaults button, save that to a different profile.


Thanks, but I know how to deal with Afterburner; the core clock is locked to the value I want, but the memory clock, and with it the power consumption, is "jumping" up on load. Is there a way to prevent this?


----------



## 7empe

emc02 said:


> Thanks, but I know how to deal with Afterburner; the core clock is locked to the value I want, but the memory clock, and with it the power consumption, is "jumping" up on load. Is there a way to prevent this?


Yes, there is.

In Windows PowerShell you can use the following commands to lock the GPU and memory clocks, respectively:

nvidia-smi -lgc 225,225
nvidia-smi -lmc 405,405

Clocks won't "jump" anymore. You can also write a script that executes these commands at Windows startup.

To remove the locks, use the following commands:

nvidia-smi -rgc
nvidia-smi -rmc
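Those four commands can be wrapped in a small startup script so the locks are reapplied on every boot. One possible sketch (assumes nvidia-smi is on PATH and the script runs with admin rights; dry-run by default):

```python
import subprocess

def lock_cmds(gpu_mhz: int = 225, mem_mhz: int = 405) -> list[list[str]]:
    """nvidia-smi invocations that pin GPU and memory clocks to fixed values."""
    return [
        ["nvidia-smi", "-lgc", f"{gpu_mhz},{gpu_mhz}"],
        ["nvidia-smi", "-lmc", f"{mem_mhz},{mem_mhz}"],
    ]

def unlock_cmds() -> list[list[str]]:
    """nvidia-smi invocations that release both clock locks."""
    return [["nvidia-smi", "-rgc"], ["nvidia-smi", "-rmc"]]

def run_all(cmds: list[list[str]], dry_run: bool = True) -> None:
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))  # preview only
        else:
            subprocess.run(cmd, check=True)  # requires admin rights on Windows
```

Registering the script with Task Scheduler (run at logon, highest privileges) would reapply the locks automatically.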


----------



## emc02

7empe said:


> Yes, there is.
> 
> In Windows PowerShell you can use the following commands to lock the GPU and memory clocks, respectively:
> 
> nvidia-smi -lgc 225,225
> nvidia-smi -lmc 405,405
> 
> Clocks won't "jump" anymore. You can also write a script that executes these commands at Windows startup.
> 
> To remove the locks, use the following commands:
> 
> nvidia-smi -rgc
> nvidia-smi -rmc


OMG, that's exactly what I needed and it works perfectly. So I don't need Afterburner anymore and can just run some scripts to lock frequencies as needed.
Thanks for your help!


----------



## emc02

But I guess a solution to go below 737mV is not possible?
I've read 737mV is the voltage needed for the GDDR6X, but maybe at 405MHz it could be less 🙃
I know it's strange to ask for power savings with a 3090, but for daily office use I don't need more power, and in gaming it's perfectly efficient with undervolting.


----------



## gfunkernaught

emc02 said:


> But I guess a solution to go below 737mV is not possible?
> I've read 737mV is the voltage needed for the GDDR6X, but maybe at 405MHz it could be less 🙃
> I know it's strange to ask for power savings with a 3090, but for daily office use I don't need more power, and in gaming it's perfectly efficient with undervolting.


I wouldn't worry about those 100w jumps though. Unless whatever you're doing is causing the clocks to stay in "3d mode" with sustained usage, the card will power down as it should. If you want to get fancy: does your mobo have onboard video? Hook up a second monitor to that, use it for regular stuff, then game on the monitor that's hooked up to the 3090 😆


----------



## emc02

gfunkernaught said:


> I wouldn't worry about those 100w jumps though. Unless whatever you're doing is causing the clocks to stay in "3d mode" with sustained usage, the card will power down as it should. If you want to get fancy: does your mobo have onboard video? Hook up a second monitor to that, use it for regular stuff, then game on the monitor thats hooked up to the 3090 😆


I have a Laptop with an eGPU (see Signature) and need it for my 3 Monitor Setup (3x4K on DisplayPort)


----------



## gfunkernaught

emc02 said:


> I have a Laptop with an eGPU (see Signature) and need it for my 3 Monitor Setup (3x4K on DisplayPort)


Does your laptop have a video output? Try it out. You may have to enable multi-GPU support in your laptop's BIOS/firmware to allow video output from multiple GPUs.

Make the laptop display the primary, the egpu secondary. Put game shortcuts on the secondary desktop so games launch on that monitor.


----------



## emc02

gfunkernaught said:


> Does your laptop have a video output? Try it out. You may have to enable multigpu support in your laptops bios/fw to allow video output from multiple GPUs.


It has only one TB3 connector, and you can't add three 4K displays to it. Even if you could, the performance would be really bad with the Intel iGPU (UHD 620).
I had that setup with 2 displays and it was not good, and the VRAM is shared with the RAM...
The eGPU has a lot of advantages!


----------



## gfunkernaught

emc02 said:


> it has only one TB3 connector, but you can't add three 4K Displays and even if you could, the performance would be really bad with the Intel iGPU (UHD 620)
> I had that setup with 2 Displays and it was not good and the VRAM is shared with the RAM...
> eGPU has a lot of advantages!


The laptop itself, not the eGPU. I looked up that laptop; it should have a DP port that, with an adapter, gives you VGA and HDMI. So you'd use that DP for the main/primary display and the TB for the eGPU. I'm surprised the Intel GPU can't handle a 4k desktop, just for 2D.

As for selective GPU acceleration, that might be something to look into. I'm not 100% sure, but if the Intel GPU is your primary, all basic acceleration will occur on that, while other programs can use the 3090 as long as you can select it, such as Adobe AE or PS.


----------



## emc02

gfunkernaught said:


> The laptop itself not the egpu. I looked up that laptop, it should have a DP that with the adapter gives you VGA and HDMI. So you'd use that DP for main/primary display and the TB for the egpu. I'm surprised Intel GPU can't handle a 4k desktop, just for 2D.


I have and need three 4K Displays ;-)


----------



## gfunkernaught

emc02 said:


> I have and need three 4K Displays ;-)


Two on Intel, one on 3090? 🤔


----------



## KedarWolf

gfunkernaught said:


> Two on Intel, one on 3090? 🤔


I don't know why everyone is stoked about an RTX 4090, I have a GTX 5900, the number is 69.3% bigger!!


----------



## emc02

gfunkernaught said:


> Two on Intel, one on 3090? 🤔


3 on 3090?
you are kidding me, right?


----------



## gfunkernaught

KedarWolf said:


> I don't know why everyone is stoked about an RTX 4090, I have a GTX 5900, the number is 69.3% bigger!!


I needed that! Humour is lacking nowadays! 😆


----------



## gfunkernaught

emc02 said:


> 3 on 3090?
> you are kidding me, right?


Two on Intel, one on 3090...🤔

Two monitors off of the DP breakout adapter, which is the Intel GPU, 3rd monitor on the 3090. Or the other way around, depending upon how you use the three monitors.


----------



## domdtxdissar

Took the 7950x out for a spin in the different 3dmarks 
Pretty sure it's the 3090 that's holding me back at this point lol...

I scored 46 874 in Fire Strike
I scored 27 130 in Fire Strike Extreme
I scored 15 116 in Fire Strike Ultra
I scored 92 983 in Night Raid

(AMD Ryzen 9 7950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11; results on www.3dmark.com)

I think these are good CPU scores at least (?)


----------



## Nizzen

domdtxdissar said:


> Took the 7950x out for a spin in the different 3dmarks
> Pretty sure it's the 3090 that's holding me back at this point lol...
> 
> I scored 46 874 in Fire Strike
> I scored 27 130 in Fire Strike Extreme
> I scored 15 116 in Fire Strike Ultra
> I scored 92 983 in Night Raid
> 
> (AMD Ryzen 9 7950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11; results on www.3dmark.com)
> 
> I think these are good CPU scores at least (?)


CPU score in Time Spy is 1000 points less than a 12900K at 5.4 GHz. I think memory plays a BIG role here...


----------



## J7SC

domdtxdissar said:


> Took the 7950x out for a spin in the different 3dmarks
> Pretty sure it's the 3090 that's holding me back at this point lol...
> 
> I scored 46 874 in Fire Strike
> I scored 27 130 in Fire Strike Extreme
> I scored 15 116 in Fire Strike Ultra
> I scored 92 983 in Night Raid
> 
> (AMD Ryzen 9 7950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11; results on www.3dmark.com)
> 
> I think these are good CPU scores at least (?)





Nizzen said:


> Cpu score in timespy is 1000 points less than 12900k at 5.4 ghz.  I think memory plays a BIG role here...


...for Time Spy, it is probably a good idea to disable SMT and/or a few cores...unlike TS Extreme.

3950X w/ all cores and all threads - and a 6900XT (old run)


Spoiler


----------



## yzonker

Speaking of Firestrike. I asked this over in the 12900k thread but maybe someone here has an idea.

Using the Win10 bench OS KedarWolf shared, all of the 3dmark benches I've run work fine on my 12900k except Firestrike. It seems to be a thread director problem as it scores much higher on my daily Win11 install and the graphics score increases quite a bit on Win10 if I disable e-cores. 

So it's not just the physics score that is tanked, but graphics too. There are very high scores on Hwbot with 12900k and Win10 combinations so apparently it can work. 

Any thoughts? Things to try? Obviously one option is to strip down Win11 which I've been intending to do for a while so that may happen if I can't fix this. 

BTW @domdtxdissar, I've got a 13900k on pre-order. We'll see how that stacks up. Although I'm sure you do too. Lol. (or maybe waiting for the 13900KS)


----------



## yzonker

Ok, false alarm. It's either the old driver I've been using for benching or because I did a clean install. Have to test the old one again when I get time, but this is even better than the Win11 score now.

Win10: I scored 46 067 in Fire Strike (Intel Core i9-12900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10; www.3dmark.com)

Win11: I scored 44 741 in Fire Strike (Intel Core i9-12900K, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11; www.3dmark.com)


----------



## 67091

Hi guys 
I just bought a second-hand 3090 MSI Suprim X with a block on it. I was wondering, is it worth overclocking the beast, and what's the average OC that most are getting on their 3090s?


----------



## domdtxdissar

yzonker said:


> BTW @domdtxdissar ,I've got a 13900k on pre-order. We'll see how that stacks up. Although I'm sure you do too. Lol. (or maybe waiting for the 13900KS)


Nah, I'm getting another 7950x delivered later today; I'll keep the best one.
After that I'm waiting for the 7950X3D.


----------



## J7SC

...I wish we would know a bit more about the 7950X3D, including release date if/when...I'm probably updating the GPU first when the time comes, but same story - 4090 Ti when ? Or will AMD come up with the rumoured monster (beyond 7900XT) first...



angushades said:


> Hi guys
> I just bought a second-hand 3090 MSI Suprim X with a block on it. I was wondering, is it worth overclocking the beast, and what's the average OC that most are getting on their 3090s?


...really depends on a lot of factors, including w-cooling capacity, vbios etc. Typically, boost peak should be somewhere between 2100 MHz and 2200 MHz...higher wattage PL vbios will mean higher clocks can be maintained (again, subject to other factors, such as cooling)


----------



## gfunkernaught

angushades said:


> Hi guys
> I just bought a second-hand 3090 MSI Suprim X with a block on it. I was wondering, is it worth overclocking the beast, and what's the average OC that most are getting on their 3090s?


What kind of block is it? You could remove the block, repaste the core, and use thermal putty for the VRMs and VRAM. I used liquid metal on my core, and also have the active backplate. I have the Trio btw, and use the Kingpin 1kW ReBAR bios; I average 2115-2135mhz now, but in the summer the warm climate would push that down to 2100mhz. 1075mv for all those clocks. Power limit set to 620w.


----------



## 67091

gfunkernaught said:


> What kind of block is it? You could remove the block, repaste the core, and use thermal putty for the VRMs and VRAM. I used liquid metal on my core, and also have the active backplate. I have the Trio btw, and use the Kingpin 1kW ReBAR bios; I average 2115-2135mhz now, but in the summer the warm climate would push that down to 2100mhz. 1075mv for all those clocks. Power limit set to 620w.


It's an EK Vector Trio D-RGB. Is it worth getting the rear block? Memory temperature is 70C when benching and gaming. Could you link the BIOS please?


----------



## GRABibus

angushades said:


> It’s a EK Vector Trio D-RGB , is it worth getting the rear block? Memory temperature is 70C when benching and gaming. Could you link the bios please ?


It is in my signature


----------



## gfunkernaught

angushades said:


> It’s a EK Vector Trio D-RGB , is it worth getting the rear block? Memory temperature is 70C when benching and gaming. Could you link the bios please ?


I have the same block. Yes, the active backplate works great. Memory maxes at like 48c, +1250mhz offset.

Make sure you put thermal putty on the front and back of the card. Anywhere EK says in their instructions to place a thermal pad, use putty instead. Roll the putty up into a tiny ball, and place on the components. The pressure from the blocks will spread it evenly. Use washers on the four main screws around the core that hold the block to the PCB to increase pressure and improve contact.

Here's the link to the putty, this is the 1000G bottle, but the TG-PP10 is the stuff! https://www.digikey.com/en/products/detail/t-global-technology/TG-PP10-1000/6204865


----------



## ssgwright

I'll have my 3090 with ek block and backplate for sale when the 4090 hits. I'm thinking what $700 plus shipping maybe?


----------



## gfunkernaught

ssgwright said:


> I'll have my 3090 with ek block and backplate for sale when the 4090 hits. I'm thinking what $700 plus shipping maybe?


You're already jumping ship?


----------



## ssgwright

gfunkernaught said:


> You're already jumping ship?


upgrade every gen


----------



## KedarWolf

Do you think the 4090 Ti will be out by March?

I get my tax refund then.


----------



## dream3

Guys, for the ASUS TUF 3090 copper plate mod, does anyone know the correct thickness of the copper for the front memory? I can't believe I haven't found a single piece of information on this mod yet.

Hard to think no one has modded the TUF 3090 with copper yet.


----------



## yzonker

The newer driver definitely was the fix for my low FS score for whatever reason.









I scored 47 841 in Fire Strike (Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10; www.3dmark.com)


----------



## RiffageGalore

I'm going to get a 3090 in the next couple of months, but I have to make a choice here because I only want to use a Heatkiller V water block for it. 

Problem is, I have to choose between getting a reference PCB 3090 for Heatkiller V regular version, or I can go for the EVGA FTW3. The EVGA would require I make ample room in my case, while a reference PCB Heatkiller block would be fine...no moving necessary (it is small enough).

If I get a 3090, it seems like all reference style versions, no matter the AIB, are all limited to about 350w power. The EVGA can go up to 450w/500w and ALSO has dual bios. Given the higher power limit, I know the EVGA FTW3 Ultra 3090 would perform better than the cards that are limited to 350w. 

Is there any reference layout 3090 that you can get a higher power limit for, without screwing up the ports and all that when flashing a BIOS? Seems to me that I will have to get the EVGA if I really want the higher power option and overclocking ability...


----------



## yzonker

RiffageGalore said:


> I'm going to get a 3090 in the next couple of months, but I have to make a choice here because I only want to use a Heatkiller V water block for it.
> 
> Problem is, I have to choose between getting a reference PCB 3090 for Heatkiller V regular version, or I can go for the EVGA FTW3. The EVGA would require I make ample room in my case, while a reference PCB Heatkiller block would be fine...no moving necessary (it is small enough).
> 
> If I get a 3090, it seems like all reference style versions, no matter the AIB, are all limited to about 350w power. The EVGA can go up to 450w/500w and ALSO has dual bios. Given the higher power limit, I know the EVGA FTW3 Ultra 3090 would perform better than the cards that are limited to 350w.
> 
> Is there any reference layout 3090 that you can get a higher power limit for, without screwing up the ports and all that when flashing a BIOS? Seems to me that I will have to get the EVGA if I really want the higher power option and overclocking ability...


I think the ports work using the KP 1kw bios if the card has the standard 1xHDMI, 3xDP layout. I just tried all 3 DP on my Zotac 3090 Trinity. All work. That's the only bios you could run though and get much of a power bump. Otherwise the best you can do is one of the 390w bios. There are several.


----------



## gfunkernaught

So I've been entertaining the idea of going SLI lately. Been looking at used Trios, considering costs of buying another block+active backplate, nvlink. I'm wondering what kind of gains I'd see, especially at 8k. Has anyone here tried 3090 SLI yet? I'm also thinking I should lower the power limit on both. Instead of 620w per card, I could do 550w per card, lower the clocks a bit.


----------



## J7SC

gfunkernaught said:


> So I've been entertaining the idea of going SLI lately. Been looking at used Trios, considering costs of buying another block+active backplate, nvlink. I'm wondering what kind of gains I'd see, especially at 8k. Has anyone here tried 3090 SLI yet? I'm also thinking I should lower the power limit on both. Instead of 620w per card, I could do 550w per card, lower the clocks a bit.


I have been toying with the same idea, but too many apps simply don't support it anymore...good for benchmarking though. My current build is the first since 2012 which hasn't had SLI/NVL/Crossfire. Besides, for benchmark amusement, I still have the 2x 2080 Ti which have hit 21.1k in Port Royal.


----------



## gfunkernaught

J7SC said:


> I have been toying with the same idea but too many apps simply don't support it anymore...good for benchmarking though. My current build is the first since 2012 which hasn't SLI/NVL/Crossfire. Besides, for benchmark amusement, I still have the 2x 2080 Ti which have hit 21.1k in PortRoyal.


Have you had any success forcing AFR or other modes on games that don't "support" SLI? I've seen vids on YT showcasing 3090 SLI performance in several modern-ish games; Cyberpunk was one that said no to SLI. But there has to be a way to force games to use both GPUs with Profile Inspector, no?


----------



## 67091

gfunkernaught said:


> I have the same block. Yes, the active backplate works great. Memory maxes at like 48c, +1250mhz offset.
> 
> Make sure you put thermal putty on the front and back of the card. Anywhere EK says in their instructions to place a thermal pad, use putty instead. Roll the putty up into a tiny ball, and place on the components. The pressure from the blocks will spread it evenly. Use washers on the four main screws around the core that hold the block to the PCB to increase pressure and improve contact.
> 
> Here's the link to the putty, this is the 1000G bottle, but the TG-PP10 is the stuff! https://www.digikey.com/en/products/detail/t-global-technology/TG-PP10-1000/6204865


So the best I've gotten on the 3090 I bought is 2160 to 2145 on the core @ 1093mv and 1000mhz on the memory. I'm going to try and reduce the voltage needed for the core. Thanks for that bios by the way.


----------



## J7SC

gfunkernaught said:


> Have you had any success forcing AFR or other modes on games that don't "support" SLI? I've seen vids on yt showcasing 3090 SLI performance on several mondern-ish games, Cyberpunk was one that said no to SLI. But there has to be a way to force games to use both gpus with profileinspector no?


...usually it is hit and miss. Crytek-engined games seem to be the most inclined to try forcing some sort of SLI. For RTX 2080 Ti, they even supported SLI/NVL CFR (instead of AFR), but not RTX3K series, afaik...CFR was an undocumented feature in NVidia drivers for about half a year. CFR does little to no microstutter. Below is a sample, but the drivers are old now (though still work - I use them for FS2020 on my 2ndary machine and even had it going for Cyberpunk '77 before a patch update got in the way).

...generally speaking, though, SLI/NVL profiles are getting more difficult to force with DLSS and other niceties these days...


----------



## gfunkernaught

J7SC said:


> ...usually it is hit and miss. Crytek-engined games seem to be the most inclined to try forcing some sort of SLI. For RTX 2080 Ti, they even supported SLI/NVL CFR (instead of AFR), but not RTX3K series, afaik...CFR was an undocumented feature in NVidia drivers for about half a year. CFR does little to no microstutter. Below is a sample, but the drivers are old now (though still work - I use them for FS2020 on my 2ndary machine and even had it going for Cyberpunk '77 before a patch update got in the way).
> 
> ...generally speaking, though, SLI/NVL profiles are getting more difficult to force with DLSS and other niceties these days...


The non-CFR on the right had a slightly lower core clock than the SLI on the left; does that have something to do with SLI being enabled vs disabled? So is DX12 mGPU no longer really a thing, or is it? I didn't do my homework on the difference between mGPU and SLI modes. I wonder why CDPR wouldn't support dual GPU... I miss the days when SLI was booming. I had 8800 GTXs SLI'd for a while, then the GTX 580s, which were my favorite SLI setup. That rig lasted forever and was my introduction to "4k gaming" with DSR on my 1080p TV. Crysis 3 would make the lights flicker with that PC. So you would use nvprofileinspector to enable CFR on Cyberpunk or any game?


----------



## gfunkernaught

angushades said:


> So the best I've gotten on the 3090 I bought is 2160 to 2145 on the core @ 1093mv and 1000mhz on the memory. I'm going to try to reduce the voltage needed for the core. Thanks for that bios by the way


That is with the offset slider, right? What temp does your core max out at? I tuned my curve to [email protected], so when cold or under low load the core clock shoots up to 2160mhz; as the core warms to about 38-40c it drops to 2130mhz, and at 45c and up it drops to a minimum of 2100mhz. That's when the ambient temp is 25c, like the summer months, and power usage is a constant 600w.


----------



## SPL Tech

SLI is deeead guys. come on.


----------



## gfunkernaught

SPL Tech said:


> SLI is deeead guys. come on.


It's not dead, devs and fans are just losing interest. Take to the streets with SLI picket signs. I want a room of SLI'd 3090s, maybe about 100 of them, submerged in cooling oil, with my own power station connected to a Peloton.

Edit: I stand corrected, the 40 series will not support SLI apparently. Boo-hoo. Oh well, maybe if used 3090s that weren't abused get cheaper I'll grab another one and play around with it.


----------



## kx11

See ya in 4090 thread


----------



## gfunkernaught

kx11 said:


> See ya in 4090 thread


I was just about to say the same thing after watching this...2.8ghz 1050mv stock, 58c...





I wonder if AIBs send their best binned cards to reviewers to increase hype so people go out and buy their cards🤔


----------



## kx11

gfunkernaught said:


> I was just about to say the same thing after watching this...2.8ghz 1050mv stock, 58c...
> 
> 
> 
> 
> 
> I wonder if AIBs send their best binned cards to reviewers to increase hype so people go out and buy their cards🤔


my 1st benchmark









I scored 10 541 in Speed Way
Intel Core i9-12900KF Processor, NVIDIA GeForce RTX 4090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)





holy crap i didn't think 3ghz is that easy


----------



## gfunkernaught

kx11 said:


> my 1st benchmark
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 541 in Speed Way
> Intel Core i9-12900KF Processor, NVIDIA GeForce RTX 4090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)
> 
> 
> 
> 
> 
> holy crap i didn't think 3ghz is that easy


Did you have to tinker to get 3ghz on your card?


----------



## gfunkernaught

Crap, just skimming through reviews, the 4090 is a massive improvement over the 3090 while using less or the same amount of power. Like GN said, this is very much like the 1080 series release in terms of performance increase. 3 options: keep my 3090 until the 5090s come out, sell it now and get a 4090, or wait for the 4090 Ti. Crap.


----------



## yzonker

New driver is a nice bump in performance. Just running my daily driver settings.

516.94 vs 522.25

CP2077: 53.77 vs 59.13 fps (10%)

516.94








I scored 5 959 in Speed Way
Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)





522.25








I scored 6 035 in Speed Way
Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)







----------



## gfunkernaught

yzonker said:


> New driver is a nice bump in performance. Just running my daily driver settings.
> 
> 516.94 vs 522.25
> 
> CP2077: 53.77 vs 59.13 fps (10%)
> 
> 516.94
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 5 959 in Speed Way
> Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)
> 
> 
> 
> 
> 
> 522.25
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 6 035 in Speed Way
> Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)
> 
> 
> 
> 
> 


What are your cp2077 graphics settings? Also what resolution do you run it at?

Sorry I should be more specific. Do you have DLSS enabled? Which mode? Have you tried dynamic resolution?


----------



## yzonker

gfunkernaught said:


> What are your cp2077 graphics settings? Also what resolution do you run it at?
> 
> Sorry I should be more specific. Do you have DLSS enabled? Which mode? Have you tried dynamic resolution?


4k, DLSS Quality, max settings except RT on Ultra.

No I haven't tried dynamic resolution. I haven't played CP in a long time. I played it through twice, just haven't had any interest since.


----------



## KedarWolf

I can't find the link to buy Speed Way. 

Never mind. Google is my friend.


----------



## yzonker

KedarWolf said:


> I can't find the link to buy Speed Way.
> 
> Never mind. Google is my friend.


Just showed up in 3dmark when I loaded it. Clicking on it took me back to Steam to purchase.


----------



## gfunkernaught

yzonker said:


> 4k, DLSS Quality, max settings except RT on Ultra.
> 
> No I haven't tried dynamic resolution. I haven't played CP in a long time. I played it through twice, just haven't had any interest since.


I think it was added in Patch 1.6. Give it a shot: set the target fps to 60, max res 100%, min res 65%. I've noticed better IQ than Quality DLSS, a noticeable improvement in performance overall, and also lower average power usage.


----------



## yzonker

Might take another cut at this, but this is about all I've got I think other than maybe playing with some control panel settings, etc...









I scored 6 333 in Speed Way
Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)


----------



## KedarWolf

yzonker said:


> Might take another cut at this, but this is about all I've got I think other than maybe playing with some control panel settings, etc...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 6 333 in Speed Way
> Intel Core i9-12900K Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)


My 5950x can't compete with a 12900k. But we'll see when I have the DDR5 and motherboard for my 7950x.









I scored 6 235 in Speed Way
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)


----------



## yzonker

KedarWolf said:


> My 5950x can't compete with a 12900k. But we'll see when I have the DDR5 and motherboard for my 7950x.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 6 235 in Speed Way
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)


Whoa. That sounds like a challenge. 😁. I do have a 13900k on pre-order....


----------



## KedarWolf

yzonker said:


> Whoa. That sounds like a challenge. 😁. I do have a 13900k on pre-order....


Now I'm skeered.


----------



## gfunkernaught

I scored 11 801 in Time Spy Extreme
Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 11 (www.3dmark.com)





522.25 drivers. Highest TSE yet. I think these drivers are hindering my previous "max oc", but the same OC I just created crashed on regular Timespy. Maybe these drivers are so optimized that I can get better scores with lower clock/voltage?


----------



## kaspar737

Is OC Scanner+reducing power limit an effective way to reduce power consumption with minimal performance loss? Also has anyone tested how much reducing memory speed decreases power consumption?


----------



## gfunkernaught

kaspar737 said:


> Is OC Scanner+reducing power limit an effective way to reduce power consumption with minimal performance loss? Also has anyone tested how much reducing memory speed decreases power consumption?


The most effective way to reduce power consumption is to reduce GPU load. I've used OC Scanner in the past and it would either set the OC too low or too high. Reducing VRAM speed does reduce power consumption slightly based on my experience. You can also find a stable OC frequency, then see how low you can set the voltage with the curve editor.
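The undervolt-first approach described above can be ballparked with the usual CMOS rule of thumb that dynamic power scales roughly with frequency times voltage squared. A rough sketch only; the 350 W / clock / voltage figures below are made-up example numbers, not measurements from any particular card:

```python
def scaled_power(base_watts: float, f_new: float, f_base: float,
                 v_new: float, v_base: float) -> float:
    """Rough dynamic-power estimate: P scales ~ f * V^2 (CMOS rule of thumb).

    Ignores static leakage and memory power, so treat the result as a
    ballpark, not a measurement.
    """
    return base_watts * (f_new / f_base) * (v_new / v_base) ** 2

# Hypothetical example: a card drawing 350 W at 1905 MHz / 1.050 V,
# undervolted to 1860 MHz / 0.900 V:
print(f"~{scaled_power(350, 1860, 1905, 0.900, 1.050):.0f} W")  # ~251 W
```

Voltage cuts power much faster than clocks do, which is why a curve-editor undervolt loses so little performance for the watts it saves.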


----------



## GRABibus

Drivers 522.25 are very good.




















I scored 15 961 in Port Royal
AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)





This is my best PR ever with my EVGA KP Hybrid on *stock cooler (58°C average during the test)*

22°C ambient
Bios EVGA 1000W Rebar
Rebar forced in PR
Drivers on high performance

For sure now, I can break 16k on my stock cooler !


----------



## gfunkernaught

Anyone notice how the global Nvidia control panel settings are overriding the 3dmark profile settings? I have vsync disabled in the 3dmark profile, but when launching I see the warning that vsync is enabled, and the benchmarks are capped at 60fps, so vsync is definitely enabled. I have vsync enabled + low latency set to Ultra globally. For now I just disable and change the global settings to suit 3dmark.


----------



## KedarWolf

gfunkernaught said:


> Anyone notice how the global Nvidia control panel settings are overriding the 3dmark profile settings? I have vsync disabled in the 3dmark profile, but when launching I see the warning that vsync is enabled, and the benchmarks are capped at 60fps, so vsync is definitely enabled. I have vsync enabled + low latency set to Ultra globally. For now I just disable and change the global settings to suit 3dmark.


I found Latency Off did better in benchmarks than it on Ultra.


----------



## gfunkernaught

KedarWolf said:


> I found Latency Off did better in benchmarks than it on Ultra.


Well right, but I'm wondering if these new drivers + 3dmark have a bug where the profile settings don't apply and/or get overridden by the global settings. I read on a steam discussion one of the UL devs said that happens with gsync, where the global setting overrides the profile setting.


----------



## gfunkernaught

I scored 15 811 in Port Royal
Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32628 MB, 64-bit Windows 11 (www.3dmark.com)





Nvidia making old hardware run better while releasing new hardware?


----------



## J7SC

gfunkernaught said:


> I scored 15 811 in Port Royal
> Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32628 MB, 64-bit Windows 11 (www.3dmark.com)
> 
> 
> 
> 
> 
> *Nvidia making old hardware run better while releasing new hardware?*


Now that they have sold a few RTX 4090s, they can turn back to the task of selling all those extra 3090 / Tis; a nice boost in RTX3K benchmarks might help 🥴


----------



## gfunkernaught

J7SC said:


> Now that they have sold a few RTX 4090s, they can turn back to the task of selling all those extra 3090 / Tis; a nice boost in RTX3K benchmarks might help 🥴


_Derp_ of course...I guess a few drivers down the line they'll nerf the 3090s once stock is depleted.🙄


----------



## yzonker

Well I road tripped to Microcenter this afternoon and got a 4090 TUF OC. But I'm not going to install it until I can at least re-run TS, TSE, and PR on my 3090 this weekend with the new driver. Doesn't feel right to pull it out until I at least do that.


----------



## gfunkernaught

I scored 15 851 in Port Royal
Intel Core i9-12900KS Processor, NVIDIA GeForce RTX 3090 x 1, 32628 MB, 64-bit Windows 11 (www.3dmark.com)





Upped the core and mem clocks, better. Not the 15,960 pts from a while ago though.


----------



## GRABibus

yzonker said:


> Well I road tripped to Microcenter this afternoon and got a 4090 TUF OC. But I'm not going to install it until I can at least re-run TS, TSE, and PR on my 3090 this weekend with the new driver. Doesn't feel right to pull it out until I at least do that.


In France, no chance to get a 4090. Complete shortage.


----------



## gfunkernaught

If anyone runs into the 3dmark profile settings issue where the global settings override them: it's because the benches now have their own executables, so create profiles for each of them.


----------



## 7empe

GRABibus said:


> In France, no chance to get a 4090. Complete shortage.


Same in Poland. 0 availability.


----------



## GRABibus

7empe said:


> Same in Poland. 0 availability.


Today the Gigabyte Gaming OC is available.
But I am not interested in it.


----------



## kairi_zeroblade

Is there potential in a 2x8-pin RTX 3090??


----------



## GRABibus

GRABibus said:


> Drivers 522.25 are very good.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 15 961 in Port Royal
> AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090 x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> 
> 
> 
> 
> This is my best PR ever with my EVGA KP Hybrid on *stock cooler (58°C average during the test)*
> 
> 22°C ambient
> Bios EVGA 1000W Rebar
> Rebar forced in PR
> Drivers on high performance
> 
> For sure now, I can break 16k on my stock cooler !


Those drivers also give me a big gain in the games I currently play: Overwatch 2, COD Vanguard, COD Cold War, Spider-Man Remastered.
That's between 10% and 20% more fps! That's incredible.

If you combine drivers 522.25 + ReBAR forced in the games where it's beneficial + GPU and GDDR6X overclocking + driver tweaks to high performance for the game .exes you play... is it necessary to switch to an RTX 4090 now? 😁


----------



## gfunkernaught

GRABibus said:


> Those drivers also give me a big gain in the games I currently play: Overwatch 2, COD Vanguard, COD Cold War, Spider-Man Remastered.
> That's between 10% and 20% more fps! That's incredible.
> 
> If you combine drivers 522.25 + ReBAR forced in the games where it's beneficial + GPU and GDDR6X overclocking + driver tweaks to high performance for the game .exes you play... is it necessary to switch to an RTX 4090 now? 😁


Like @J7SC and I were talking about earlier, this performance boost might be short-lived. Speculating: once 3090s are completely out of stock, a new game comes out, a new driver comes out to optimize that new DLSS 3.0 game to showcase the 4090, and they nerf the 3090s to make everyone want to upgrade. They did something similar during the 2080 launch: ray tracing could only be done on Turing, then a couple months later they allowed it to run on 1080s, poorly. I got bit by that bug.😆


----------



## GRABibus

gfunkernaught said:


> Like @J7SC and I were talking about earlier, this performance boost might be short-lived. Speculating: once 3090s are completely out of stock, a new game comes out, a new driver comes out to optimize that new DLSS 3.0 game to showcase the 4090, and they nerf the 3090s to make everyone want to upgrade. They did something similar during the 2080 launch: ray tracing could only be done on Turing, then a couple months later they allowed it to run on 1080s, poorly. I got bit by that bug.😆


I agree.


----------



## changboy

Driver 522.25 is bad; there's a glitch in-game, see the video of the Cyberpunk map.
Me, I re-installed the older driver


----------



## Nizzen

changboy said:


> Driver 522.25 is bad; there's a glitch in-game, see the video of the Cyberpunk map.
> Me, I re-installed the older driver


Luckily I don't play that game


----------



## GRABibus

Nizzen said:


> Luckily I don't play that game


Me neither 

I'll keep my +5% to +20% fps with these new drivers, then


----------



## Nizzen

changboy said:


> Driver 522.25 is bad; there's a glitch in-game, see the video of the Cyberpunk map.
> Me, I re-installed the older driver


Fix?






Cyberpunk 2077 may display artifacts when bringing up the in-game map after updating to Game Ready Driver 522.25 | NVIDIA (nvidia.custhelp.com)





😘


----------



## kairi_zeroblade

DLSS 3.0 hacked and bypassed..lol..rtx 2000 series ripping with double performance..lol..


----------



## Nizzen

kairi_zeroblade said:


> DLSS 3.0 hacked and bypassed..lol..rtx 2000 series ripping with double performance..lol..


Link?


----------



## kairi_zeroblade

Nizzen said:


> Link?











NVIDIA DLSS 3 "Frame Generation" Lock Reportedly Bypassed, RTX 2070 Gets Double The Frames In Cyberpunk 2077
NVIDIA recently introduced its new DLSS 3 technology that adds a new feature known as Frame Generation on supported GPUs. (www.google.com)


----------



## GRABibus

kairi_zeroblade said:


> NVIDIA DLSS 3 "Frame Generation" Lock Reportedly Bypassed, RTX 2070 Gets Double The Frames In Cyberpunk 2077
> NVIDIA recently introduced its new DLSS 3 technology that adds a new feature known as Frame Generation on supported GPUs. (www.google.com)


We unfortunately don't see how he does this


----------



## GRABibus

Bye bye 3090 









3DMark Port Royal Hall of Fame
The 3DMark.com Overclocking Hall of Fame is the official home of 3DMark world record scores. (www.3dmark.com)


----------



## kairi_zeroblade

GRABibus said:


> We unfortunately don't see how he does this


Lets wait a couple more days..


----------



## 7empe

kairi_zeroblade said:


> NVIDIA DLSS 3 "Frame Generation" Lock Reportedly Bypassed, RTX 2070 Gets Double The Frames In Cyberpunk 2077
> NVIDIA recently introduced its new DLSS 3 technology that adds a new feature known as Frame Generation on supported GPUs. (www.google.com)


News like this makes me glad that I did not get a chance to hunt for the rtx 4090 yet 😁


----------



## gfunkernaught

kairi_zeroblade said:


> NVIDIA DLSS 3 "Frame Generation" Lock Reportedly Bypassed, RTX 2070 Gets Double The Frames In Cyberpunk 2077
> NVIDIA recently introduced its new DLSS 3 technology that adds a new feature known as Frame Generation on supported GPUs. (www.google.com)


Nvidia will surely scramble and shut this down😆


----------



## RetroWave78

gfunkernaught said:


> Nvidia will surely scramble and shut this down😆


From unlaunching the 12GB "4080" to the 4090 paper-launch shenanigans, Nvidia are quickly ruining what remained of their reputation, already tarnished by the crypto boom-and-bust fiasco. This post didn't stand a chance of appearing on the Nvidia subreddit; the mods there are clearly employed by Nvidia and censor anything critical of NV without explanation or response to inquiries about why a post was pulled, such as this one:

"Reduce sell in to let channel inventory correct"






Paper launch, artificial scarcity 100% confirmed.

Think about it, there are maybe 10 FE's on ebay all going for $3k and no-one in this thread managed to get one, so maybe Nvidia sold 100-500 on launch day globally?

This is disgusting.










Interesting conversation here:


https://www.reddit.com/r/pcmasterrace/comments/y4vp60


----------



## gfunkernaught

RetroWave78 said:


> From unlaunching the 12GB "4080" to the 4090 paper-launch shenanigans, Nvidia are quickly ruining what remained of their reputation, already tarnished by the crypto boom-and-bust fiasco. This post didn't stand a chance of appearing on the Nvidia subreddit; the mods there are clearly employed by Nvidia and censor anything critical of NV without explanation or response to inquiries about why a post was pulled, such as this one:
> 
> "Reduce sell in to let channel inventory correct"
> 
> 
> 
> 
> 
> 
> Paper launch, artificial scarcity 100% confirmed.
> 
> Think about it, there are maybe 10 FE's on ebay all going for $3k and no-one in this thread managed to get one, so maybe Nvidia sold 100-500 on launch day globally?
> 
> This is disgusting.
> 
> 
> 
> Interesting conversation here:
> 
> 
> https://www.reddit.com/r/pcmasterrace/comments/y4vp60


The more I listen to these business people talk about this stuff, the more I learn. I just listen for common phrases and words and replace things like "nvidia" and "gpu" with "company name" and "product". They're all the same. They all graduated with the business degree and were hired as consultants to boost sales and share price. That's their job: not making quality, long-lasting products, but increasing profits and maintaining market security. It's almost militant.


----------



## gfunkernaught

About DLSS 3.0 Frame Generation, after watching the Hardware Unboxed video about it, it seems a lot like Motion Flow on Sony TVs. Takes a 30fps feed and doubles it to 60fps. So it appears smooth, but latency is increased. I miss the days of old gpu benchmarks where you had native resolutions tested with MSAA 2x, 4x, 8x and seeing the raw performance per frame of every card.
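The latency cost follows directly from the timeline: the in-between frame can only be generated once the next real frame exists, so the display runs at least one real-frame interval behind. A back-of-envelope sketch of that, using the simplest possible model (real pipelines add more overhead on top):

```python
def interp_added_latency_ms(real_fps: float) -> float:
    """Minimum extra display latency from frame interpolation.

    The interpolated frame is built from real frames N and N+1, so
    nothing can be shown until N+1 has been rendered: at least one
    real-frame interval of added lag.
    """
    return 1000.0 / real_fps

# A 30 fps feed doubled to 60 fps looks smooth, but picks up at
# least one 30 fps frame time of lag:
print(f"{interp_added_latency_ms(30):.1f} ms")  # 33.3 ms
```

That one-frame floor is why TV motion smoothing feels fine for movies but awful for controller input, and why game-side frame generation leans on Reflex to claw some of it back.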


----------



## RetroWave78

gfunkernaught said:


> About DLSS 3.0 Frame Generation, after watching the Hardware Unboxed video about it, it seems a lot like Motion Flow on Sony TVs. Takes a 30fps feed and doubles it to 60fps. So it appears smooth, but latency is increased. I miss the days of old gpu benchmarks where you had native resolutions tested with MSAA 2x, 4x, 8x and seeing the raw performance per frame of every card.


Smooth Video Player, that was what I likened it to when I first heard about it after the reveal.


----------



## yzonker

gfunkernaught said:


> About DLSS 3.0 Frame Generation, after watching the Hardware Unboxed video about it, it seems a lot like Motion Flow on Sony TVs. Takes a 30fps feed and doubles it to 60fps. So it appears smooth, but latency is increased. I miss the days of old gpu benchmarks where you had native resolutions tested with MSAA 2x, 4x, 8x and seeing the raw performance per frame of every card.


Yea, most TVs/projectors have this now. Nothing revolutionary about it, just hadn't been done in gaming.


----------



## gfunkernaught

I gotta say though, pretty impressive and crafty that they'd do something like that, and use Reflex to compensate for the latency. I already use Reflex (Ultra-Low Latency) and vsync globally with the TV in Game Mode for most of the games I play, with in-game vsync disabled. Works well.


----------



## kairi_zeroblade

I'd like to ask if somebody has already been able to shunt mod an RTX 3090 Gaming OC from Gigabutt... what are the shunt resistor values you used?? The GPU has fuses, which is why I'm a bit wary about what to do moving forward..


----------



## gfunkernaught

Latest driver seems to have improved DLSS 2.0 image quality overall. So far tested in Control and Cyberpunk. Cyberpunk DLSS Quality at 4k has better image quality and slightly less overall power usage. I looked at the AB monitor log and found fewer power limit hits than before at 62% PL; could just be my curve, though. Control is the opposite: more GPU and power usage, higher quality, performance seems the same. Had some trouble with screenshots, but 4K DLSS vs 4K native look just about the same, except while the scene is moving DLSS is still susceptible to that moving-scene pixel regen.


----------



## J7SC

gfunkernaught said:


> Latest driver seems to have improved DLSS 2.0 image quality overall. So far tested in Control and Cyberpunk. Cyberpunk DLSS Quality at 4k has better image quality and slightly less overall power usage. I looked at the AB monitor log and found fewer power limit hits than before at 62% PL; could just be my curve, though. Control is the opposite: more GPU and power usage, higher quality, performance seems the same. Had some trouble with screenshots, but 4K DLSS vs 4K native look just about the same, except while the scene is moving DLSS is still susceptible to that moving-scene pixel regen.


Cyberpunk '77 is one of my favs, and even before DLSS3, my 4090 blows the 3090 Strix out of the water (more than +100%) at the same settings; in other apps, it is closer.


----------



## Alexshunter

Guys, how do I find out when my 3090 Founders Edition was made? Purchased from eBay and I am curious how long it was mining.


----------



## gfunkernaught

Alexshunter said:


> Guys, how do I find out when my 3090 Founders Edition was made? Purchased from eBay and I am curious how long it was mining.


I don't think graphics cards have odometers, which is why I try to avoid used cards, especially nowadays. 

"Did you mine on this card?"
"No it's good really trust me" 🙄


----------



## gfunkernaught

J7SC said:


> Cyberpunk '77 is one of my favs, and even before DLSS3, my 4090 blows the 3090 Strix out of the water (more than +100%) at the same settings; in other apps, it is closer.


I'm curious to know native 8k performance on the 4090s. Haven't seen anything 8k that isn't using DLSS.


----------



## ttnuagmada

gfunkernaught said:


> About DLSS 3.0 Frame Generation, after watching the Hardware Unboxed video about it, it seems a lot like Motion Flow on Sony TVs. Takes a 30fps feed and doubles it to 60fps. So it appears smooth, but latency is increased. I miss the days of old gpu benchmarks where you had native resolutions tested with MSAA 2x, 4x, 8x and seeing the raw performance per frame of every card.


Frame interpolation has been around forever. Even dirt-cheap Vizio TVs from over a decade ago had it.


----------



## J7SC

gfunkernaught said:


> I'm curious to know native 8k performance on the 4090s. Haven't seen anything 8k that isn't using DLSS.


..don't have a native 8K monitor, so I haven't tested too much other than Superposition 8K and 50% 8K for 4K 120 Hz.


----------



## gfunkernaught

J7SC said:


> ..don't have a native 8K monitor, so I haven't tested too much other than Superposition 8K and 50% 8K for 4K 120 Hz.


Use 4x Dynamic Super Resolution on your 4K TV/monitor. 4x renders at four times the number of pixels of your display's native res. I've been using it for years; that's how I can play just about any game at 8K on my 4K TV. Some games really need it, because their implementation of anti-aliasing actually decreases fidelity and makes a soft image.
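To be precise about the "4x" number: DSR factors multiply the pixel count, not each axis, so each axis scales by the square root of the factor. A quick sketch of the arithmetic:

```python
def dsr_resolution(width: int, height: int, factor: float = 4.0) -> tuple:
    """DSR factor multiplies total pixel count, so each axis scales
    by sqrt(factor)."""
    scale = factor ** 0.5
    return round(width * scale), round(height * scale)

# 4x DSR on a 4K display renders internally at 8K:
print(dsr_resolution(3840, 2160))        # (7680, 4320)
# 2.25x DSR on 1440p lands exactly on 4K:
print(dsr_resolution(2560, 1440, 2.25))  # (3840, 2160)
```

Integer-ratio factors like 4x (a clean 2:1 downscale per axis) tend to look sharpest, which is part of why 4x is the go-to setting.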


----------



## J7SC

gfunkernaught said:


> Use 4x Dynamic Super Resolution on your 4K TV/monitor. 4x renders at four times the number of pixels of your display's native res. I've been using it for years; that's how I can play just about any game at 8K on my 4K TV. Some games really need it, because their implementation of anti-aliasing actually decreases fidelity and makes a soft image.


...I know that approach from my 3090, but right now I'm just finding out the basic limits of this thing, and will really put it through its paces once I've water-cooled it. Still, even on air: Port Royal at ~28.3k, CP '77 4K Psycho ray tracing DLSS Quality ~80 fps (w/o overclocking).


----------



## 67091

Question 
How do you know your vram clocks are stable? Is gaming all you do to check for stability?


----------



## J7SC

angushades said:


> Question
> How do you know your vram clocks are stable? Is gaming all you do to check for stability


3DM Timespy Extreme & Port Royal are a good start.


----------



## GRABibus

angushades said:


> Question
> How do you know your vram clocks are stable? Is gaming all you do to check for stability


You never know as no overclocks are stable 😜


----------



## ttnuagmada

angushades said:


> Question
> How do you know your vram clocks are stable? Is gaming all you do to check for stability


Warzone will sniff it out even if stress tests don't.


----------



## gfunkernaught

ttnuagmada said:


> Warzone will sniff it out even if stress tests don't.


Best description of finding a stable OC I've ever read.🤣


----------



## 67091

I've been playing with my OC and I can play Metro Enhanced Edition @ 2145 on the core and 1250 memory. I'm just waiting on the rear EK block so I can reduce the temperature on the memory.


----------



## yzonker

angushades said:


> I've been playing with my OC and I can play Metro Enhanced Edition @ 2145 on the core and 1250 memory. I'm just waiting on the rear EK block so I can reduce the temperature on the memory.


That's a good card. ME EE loves to crush overclocks.


----------



## gfunkernaught

yzonker said:


> That's a good card. ME EE loves to crush overclocks.


Cyberpunk also "sniffs" out unstable overclocks. 😆


----------



## 67091

So I'd like to ask: is there always a 12mhz difference between the normal clock and the effective clock?


----------



## gfunkernaught

angushades said:


> So I’d like to ask , is there always a 12mhz difference between normal clock and effective clock?


Yes. Usually one bin, which is 15mhz, more or less.
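A toy sketch of that bin math, assuming the commonly cited 15 MHz step (the exact step can vary slightly from card to card, which is why a ~12 MHz gap still reads as "one bin"):

```python
BIN_MHZ = 15  # commonly cited clock step on recent NVIDIA cards

def snap_to_bin(clock_mhz: int, bin_mhz: int = BIN_MHZ) -> int:
    """Requested clocks only take effect in whole bins; the
    remainder is dropped."""
    return (clock_mhz // bin_mhz) * bin_mhz

print(snap_to_bin(100))   # a +100 MHz offset really applies +90
print(2145 - BIN_MHZ)     # effective clock ~one bin below: 2130
```

The monitored "effective" clock is also a time average that includes tiny dips, so it usually lands just under one bin below the requested value rather than exactly on it.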


----------



## Audioboxer

3090 on water, what would the performance uplift be like compared to a 3080? I'm running my current 3080 around 450w+ when it needs it and maintaining 2115~2160mhz at 1.075v when under heavy load. So, I'd be planning on letting a 3090 eat 500w if it wants.

Considering the sideways upgrade due to 3090s being dumped onto the market. I've got a front and back waterblock for my FTW 3080 (back waterblock came stupid cheap through an eBay bidding auction so just ended up using it for aesthetics). This is why I'm trying to stay FTW3 if I pick one up, waterblocks are transferable and having a back block is ideal for a 3090.

Raytracing seems like it'll get a reasonable boost going 3080 10GB to 3090. I'd be happy enough with that. Looking at the stats on paper, it seems like the 3090 should trade blows with a 4080, maybe losing out a bit with Nvidia locking DLSS3 to 4xxx.

I know the answer is often "wait for 4xxx", but given I'm good to go with my waterblocks, so no overhead expense there, is grabbing a 3090 FTW3 right now a reasonable purchase? Thankfully EVGA also do transferable warranties, but goodness knows what that company's warranty process will be in a year or so, or if they'll even be around by then
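On paper the 3080-to-3090 uplift is easy to bound. A sketch using Nvidia's reference specs (8704 cores / 1710 MHz boost for the 3080 10GB, 10496 / 1695 MHz for the 3090); paper numbers only, and water-cooled boost clocks like those quoted above will differ:

```python
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    """Peak FP32 throughput: cores x clock x 2 ops per cycle (FMA)."""
    return cuda_cores * boost_mhz * 2 / 1e6

tf_3080 = fp32_tflops(8704, 1710)   # ~29.8 TFLOPS
tf_3090 = fp32_tflops(10496, 1695)  # ~35.6 TFLOPS
print(f"{tf_3090 / tf_3080 - 1:.0%} on paper")  # 20% on paper
```

In games the measured gap is usually smaller than the paper number at the same power, so the 3090's real draw over a 3080 is the 24GB and the back-side memory cooling, not raw speed.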


----------



## GRABibus

Audioboxer said:


> 3090 on water, what would the performance uplift be like compared to a 3080? I'm running my current 3080 around 450w+ when it needs it and maintaining 2115~2160mhz at 1.075v when under heavy load. So, I'd be planning on letting a 3090 eat 500w if it wants.
> 
> Considering the sideways upgrade due to 3090s being dumped onto the market. I've got a front and back waterblock for my FTW 3080 (back waterblock came stupid cheap through an eBay bidding auction so just ended up using it for aesthetics). This is why I'm trying to stay FTW3 if I pick one up, waterblocks are transferable and having a back block is ideal for a 3090.
> 
> Raytracing seems like it'll get a reasonable boost going 3080 10GB to 3090. I'd be happy enough with that. Looking at the stats on paper, it seems like the 3090 should trade blows with a 4080, maybe losing out a bit with Nvidia locking DLSS3 to 4xxx.
> 
> I know the answer is often wait for 4xxx, but given I'm good to go with my waterblocks (so no overhead expense there), is a 3090 FTW3 a reasonable purchase right now? Thankfully EVGA also do transferable warranties, but goodness knows what that company's warranty process will be in a year or so, or if they'll even be around by then.


If this is only for gaming, keep the 3080.


----------



## Audioboxer

GRABibus said:


> If this is only for gaming, keep the 3080.


Only for gaming. The out-of-pocket price once I sell the 3080 won't be too much, maybe £200~250. If I didn't have the waterblocks already I wouldn't even be considering this. Already sitting on an active backplate is decent for the 3090 memory chips.

1440p UW, so 4K isn't an issue, although UW is slightly more taxing than regular 1440p. It's really just raytracing would see a bump.

Will probably talk myself out of it, or hold off a little longer to see if 2nd-hand 3090s drop more. There isn't much price difference with the 3080 Ti; it's only selling for marginally less. I guess for those on air cooling, supply and demand means preferring a 3080 Ti over a 3090.


----------



## gfunkernaught

Audioboxer said:


> Only for gaming. The out-of-pocket price once I sell the 3080 won't be too much, maybe £200~250. If I didn't have the waterblocks already I wouldn't even be considering this. Already sitting on an active backplate is decent for the 3090 memory chips.
> 
> 1440p UW, so 4K isn't an issue, although UW is slightly more taxing than regular 1440p. It's really just raytracing would see a bump.
> 
> Will probably talk myself out of it, or hold off a little longer to see if 2nd-hand 3090s drop more. There isn't much price difference with the 3080 Ti; it's only selling for marginally less. I guess for those on air cooling, supply and demand means preferring a 3080 Ti over a 3090.


Remember, the 4090 will chew through current games. A year from now, that might not be the case. I played The First Descendant on Very High at 4K with DLSS off and it crushed my 3090; I had to either drop to High or enable DLSS Balanced to get good framerates.


----------



## Audioboxer

gfunkernaught said:


> Remember, the 4090 will chew through current games. A year from now, that might not be the case. I played The First Descendant on Very High at 4K with DLSS off and it crushed my 3090; I had to either drop to High or enable DLSS Balanced to get good framerates.


I'll be gaming at 3440x1440 for the foreseeable future, I love UW. My next monitor will just be OLED with the same resolution.

Made up my mind just to marginally upgrade with a 3090; I already have the waterblocks for it, and the 10~15% boost over my 3080 10GB isn't bad. RT on max in something like Cyberpunk 2077 also moves into the high 60s/low 70s, instead of where it is just now, where some scenes can cause a small dip below 60. The 4080, let alone the 4090, has meh prices in the UK, and stock issues with the 4080 will likely lead to scalping. Then there is the whole early-adopter waterblock situation on these cards; pricing for them can be dodgy, let alone the stock issues.

Will make sure to get a revision 1.0+ for the EVGA variant; I heard that is when they changed the power-delivery chip. 0.1s seem to be the cards being destroyed by power spiking. I would say EVGA would still replace 0.1 cards, but I'm not taking any chances with them at the moment, since they decided to bail on graphics cards.


----------



## gfunkernaught

Audioboxer said:


> I'll be gaming at 3440x1440 for the foreseeable future, I love UW. My next monitor will just be OLED with the same resolution.
> 
> Made up my mind just to marginally upgrade with a 3090; I already have the waterblocks for it, and the 10~15% boost over my 3080 10GB isn't bad. RT on max in something like Cyberpunk 2077 also moves into the high 60s/low 70s, instead of where it is just now, where some scenes can cause a small dip below 60. The 4080, let alone the 4090, has meh prices in the UK, and stock issues with the 4080 will likely lead to scalping. Then there is the whole early-adopter waterblock situation on these cards; pricing for them can be dodgy, let alone the stock issues.
> 
> Will make sure to get a revision 1.0+ for the EVGA variant; I heard that is when they changed the power-delivery chip. 0.1s seem to be the cards being destroyed by power spiking. I would say EVGA would still replace 0.1 cards, but I'm not taking any chances with them at the moment, since they decided to bail on graphics cards.


Dasssit! Don't fall prey to those super high (inflated) numbers of 4090 benchmarks. I'm rocking my 3090 for as long as I can. I played with the idea of a mini upgrade by buying a used Kingpin 3090, but how much would I gain really? Not much. 3090s are very capable cards. Even my low-end Trio does exceptionally well compared to higher-tier 3090s.


----------



## Audioboxer

gfunkernaught said:


> Dasssit! Don't fall prey to those super high (inflated) numbers of 4090 benchmarks. I'm rocking my 3090 for as long as I can. I played with the idea of a mini upgrade by buying a used Kingpin 3090, but how much would I gain really? Not much. 3090s are very capable cards. Even my low-end Trio does exceptionally well compared to higher-tier 3090s.


It's more a matter of circumstance, such as already having the waterblocks that can transfer from a 3080, being on the 10GB variant which is a bit worse than even the 12GB 3080 (so more performance _will_ help me) and 3090 prices tumbling a bit right now. But yeah, I broadly agree. Early adopter tax is always painful! EU stock of the 3xxx range was also terrible compared to USA stock, but the crypto crash should ease that a bit for the incoming 4080.

I'll slap the 1000W BIOS on it and see what it can do at 550~600W. At 1440p, it should be good for some years to come. No doubt parts of NVIDIA's "DLSS 3" will continually get modded to work on 3xxx cards lol. I see modders have already broken the artificial lock a bit.


----------



## MietiOC

Hi all, I'd like your honest opinion on the "problem" I'm dealing with.
First of all, I'm obsessed with overclocking the hell out of cards while maintaining an understated look, so I bench in a case with the panels closed (I know, I know) to see the maximum I can get while maintaining a "daily form". Recently I got two 3090 Strix O24G and put them on water with two Eiswolf 360s.

I know this solution is subpar compared to a custom loop, and it also doesn't support NVLink due to the sheer size of the pump block; however, after deshrouding the NVLink connector and sanding 1mm off, it does fit lol.

I'm now testing each card individually to see what they can give me without BIOS modding.

I found that +180 on the core at 1.0V and +1275 on the memory barely fits into the 480W power budget of the Strix, netting me 15k points in Port Royal on this run.

Do you think I should flash a 1000W BIOS to the cards?
Considering that there are two of them in so little space, and that I ultimately aim at an NVLink score, I think expanding the power budget could increase the score for a single card but introduce too much heat into the system (even though they each have their own 360 rad loop).

I'm aiming at 29.5k+ in PR with NVLink.

Please help me


----------



## des2k...

MietiOC said:


> Hi all, I'd like your honest opinion on the "problem" I'm dealing with.
> First of all, I'm obsessed with overclocking the hell out of cards while maintaining an understated look, so I bench in a case with the panels closed (I know, I know) to see the maximum I can get while maintaining a "daily form". Recently I got two 3090 Strix O24G and put them on water with two Eiswolf 360s.
> 
> I know this solution is subpar compared to a custom loop, and it also doesn't support NVLink due to the sheer size of the pump block; however, after deshrouding the NVLink connector and sanding 1mm off, it does fit lol.
> 
> I'm now testing each card individually to see what they can give me without BIOS modding.
> 
> I found that +180 on the core at 1.0V and +1275 on the memory barely fits into the 480W power budget of the Strix, netting me 15k points in Port Royal on this run.
> 
> Do you think I should flash a 1000W BIOS to the cards?
> Considering that there are two of them in so little space, and that I ultimately aim at an NVLink score, I think expanding the power budget could increase the score for a single card but introduce too much heat into the system (even though they each have their own 360 rad loop).
> 
> I'm aiming at 29.5k+ in PR with NVLink.
> 
> Please help me


On my Zotac 2x8pin ~15.9k is about the max PR with 1000w rebar vbios. That's over 2200 core +1600mem with 17c water temps. Canada winter mornings with heating vents closed 🤣

That was a while ago; I think I was 80 points short of 16k with my Zen 2 3900X. Didn't rerun the bench with my 5900X, tweaked 4000MHz B-die, or new drivers.

I think it climbs to 560w+ and needs 35c on gpu or lower for the run.


----------



## kairi_zeroblade

des2k... said:


> On my Zotac 2x8pin ~15.9k is about the max PR with 1000w rebar vbios. That's over 2200 core +1600mem with 17c water temps. Canada winter mornings with heating vents closed 🤣
> 
> That was a while ago; I think I was 80 points short of 16k with my Zen 2 3900X. Didn't rerun the bench with my 5900X, tweaked 4000MHz B-die, or new drivers.
> 
> I think it climbs to 560w+ and needs 35c on gpu or lower for the run.


Can you link me to the 1000W rebar BIOS you used? I also have a 2x8-pin 3090 and feel adventurous enough to flash it with a potent BIOS..


----------



## GRABibus

kairi_zeroblade said:


> Can you link me to the 1000W rebar BIOS you used? I also have a 2x8-pin 3090 and feel adventurous enough to flash it with a potent BIOS..


Probably the EVGA one (See in my signature)


----------



## kairi_zeroblade

Hmm.. so I got plenty of time on my hands and decided to tinker with the KP 1kW vBIOS.. and I am still power limited at 600W.. lol.. both cables (2x8-pin model here) are pulling over 220-ish watts on 16AWG wires.. damn.. this card is hungry as hell.

My max temps at 2095MHz are around 43C on the core, 46C on memory, and 54C on the hotspot.. I still have more headroom, I think.. though my GPU has fuses.. lol..

But it still feels kinda behind my 6800XT (PL unlocked) in the games I play..


----------



## gfunkernaught

Been tweaking the v/f curve and seem to have found a sweet spot. I set the v/f point to 1063mV@2145MHz, which gives me 2145MHz and settles to 2130MHz once the core hits 40C, netting an effective 2115MHz@1063mV. I raised the power limit from 62% to 70%.

The first test was Control at 4K native to really push the card; the average core temp was about 42C at around 650W. Next, Quake 2 RTX at 4K native, no dynamic res, temporal AA: 680W, average 43C core temp. Confident, I tested Cyberpunk; the core temp never exceeded 41C with an average of 530W. The whole time playing Cyberpunk the water temp remained at 30C, with 22C ambient.

I never really kept notes, but I believe lowering the vcore from 1075mV to 1063mV has really made a difference in thermals and overall rendering performance, and ambient temperature hasn't dropped that much either. Could also be that I'm allowing more power to be used at the same clock and lower voltage. Maybe I'm running more efficiently and gaining a bit? Not sure, but interesting, thought I'd share.
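For what it's worth, the "running more efficiently" hunch lines up with the usual first-order CMOS model, where dynamic power scales roughly with clock times voltage squared. A quick sketch of the arithmetic; the proportionality model and the round 650W figure are assumptions for illustration, not telemetry from the card:

```python
# Back-of-envelope CMOS power scaling: dynamic power ~ f * V^2.
# The numbers below are the ones from the post (vcore dropped from
# 1075 mV to 1063 mV at the same requested clock); the f*V^2 model is
# a rough approximation, not a measurement.

def relative_dynamic_power(f_new, f_old, v_new, v_old):
    """Ratio of dynamic power after a clock/voltage change."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Same clock, vcore dropped from 1075 mV to 1063 mV:
ratio = relative_dynamic_power(2130, 2130, 1.063, 1.075)
print(f"power ratio: {ratio:.4f}")                     # ~0.978, roughly 2% less
print(f"savings at 650 W: {650 * (1 - ratio):.0f} W")  # ~14 W of headroom
```

A couple of percent is small, but against a fixed power limit it goes straight into clock headroom, which is consistent with the card holding its boost a little better.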


----------



## nexxusty

Anyone have any luck flashing a 3090 FE bios to a 3090 TUF? Both 2x8 pin, should work fine no?

Appreciate any feedback.


----------



## yzonker

nexxusty said:


> Anyone have any luck flashing a 3090 FE bios to a 3090 TUF? Both 2x8 pin, should work fine no?
> 
> Appreciate any feedback.


Nope, FE uses a different connector and has a different device ID. Not possible to flash.


----------



## nexxusty

yzonker said:


> Nope, FE uses a different connector and has a different device ID. Not possible to flash.


Thank you for the detailed reply. Galax BIOS it is then.


----------



## GRABibus

I invite you to test PR with the latest hotfix, 526.61.
It is a pity it is not considered a valid driver 😊


----------



## KedarWolf

GRABibus said:


> I invite you to test PR with the latest hotfix, 526.61.
> It is a pity it is not considered a valid driver 😊


I'll wait for the next full release, hopefully it's as good.


----------



## steverebo

I've just flashed my RTX 3090 TUF OC to the Galax 390W BIOS. How do I change the RGB LED colour now?


----------



## Lorenzitto

Hello all, I have a question about OCP. I'm having issues with an MSI Ventus 3X OC RTX 3090 at stock, no overclock: my system keeps turning off and rebooting while gaming at 4K 240Hz, but not while benchmarking with 3DMark or Basemark (web or desktop version). For now I'm using a Thermaltake Toughpower DPS G RGB 850W Titanium as the PSU. Which one should I replace it with?
I'd prefer one that is at least 1000W, Titanium certified, ATX 3.0 compatible, and with an RGB fan.


----------



## gfunkernaught

Lorenzitto said:


> Hello all, I have a question about OCP. I'm having issues with an MSI Ventus 3X OC RTX 3090 at stock, no overclock: my system keeps turning off and rebooting while gaming at 4K 240Hz, but not while benchmarking with 3DMark or Basemark (web or desktop version). For now I'm using a Thermaltake Toughpower DPS G RGB 850W Titanium as the PSU. Which one should I replace it with?
> I'd prefer one that is at least 1000W, Titanium certified, ATX 3.0 compatible, and with an RGB fan.


I had a Corsair HX1200 that kept doing that; even with my Trio and CPU at stock, OCP kept getting tripped while gaming. Turns out the PSU was bad. Corsair replaced it for me, no charge. Never had that problem again on that PSU.


----------



## somedudez

Hi, I got the Trinity OC from Zotac. I will do a deshroud mod soon, as the fans are noisy and ineffective; I also plan to go SFF at some point.
My PSU is good and nice. The card uses 385W: 158W on the PCIe 8-pins and 69.5W from the PCIe slot. Those cables can deliver a little more, I guess.

I will do the copper VRAM backplate mod, and I ordered 3D graphite thermal pads for the core-side VRAM, as the temps soar as high as 106C while gaming.

So I believe the card will get cooler eventually. Do you recommend a BIOS for a card with 2x8-pin power input and 3 DP + 1 HDMI? HDMI is very important for me, as I use it for my capture card.

I already did undervolting, so I run 2020MHz at around 1.012V, but I'm usually power limited.

I heard there is a compatible 1000W BIOS, but it removes all protections; as I won't go full water on this card, I am not sure about it.

Also, do you know if any other brand's air cooler works on the Trinity OC? Like the ASUS Strix 3090 cooler? From what I understand, the Zotac Trinity OC and some other cards use the same dimensions as the reference PCB design. Does anyone have this card and a different 3090 to test swapping air coolers? I believe it could work well; even the VRM locations look similar. Plus, the Trinity OC uses a separate extra heatsink for the VRMs, which makes it even more compatible tbh.

I also updated my BIOS through the official Zotac tool to enable Resizable BAR.
Thanks!


----------



## long2905

somedudez said:


> Hi, I got the Trinity OC from Zotac. I will do a deshroud mod soon, as the fans are noisy and ineffective; I also plan to go SFF at some point.
> My PSU is good and nice. The card uses 385W: 158W on the PCIe 8-pins and 69.5W from the PCIe slot. Those cables can deliver a little more, I guess.
> 
> I will do the copper VRAM backplate mod, and I ordered 3D graphite thermal pads for the core-side VRAM, as the temps soar as high as 106C while gaming.
> 
> So I believe the card will get cooler eventually. Do you recommend a BIOS for a card with 2x8-pin power input and 3 DP + 1 HDMI? HDMI is very important for me, as I use it for my capture card.
> 
> I already did undervolting, so I run 2020MHz at around 1.012V, but I'm usually power limited.
> 
> I heard there is a compatible 1000W BIOS, but it removes all protections; as I won't go full water on this card, I am not sure about it.
> 
> Also, do you know if any other brand's air cooler works on the Trinity OC? Like the ASUS Strix 3090 cooler? From what I understand, the Zotac Trinity OC and some other cards use the same dimensions as the reference PCB design. Does anyone have this card and a different 3090 to test swapping air coolers? I believe it could work well; even the VRM locations look similar. Plus, the Trinity OC uses a separate extra heatsink for the VRMs, which makes it even more compatible tbh.
> 
> I also updated my BIOS through the official Zotac tool to enable Resizable BAR.
> Thanks!


You need either the copper plate for the VRAM or Gelid Extreme pads (2mm); it might be 3mm for the backside, I'm not sure. Use decent paste on the core. That's about it unless you go custom water cooling.


----------



## somedudez

long2905 said:


> You need either the copper plate for the VRAM or Gelid Extreme pads (2mm); it might be 3mm for the backside, I'm not sure. Use decent paste on the core. That's about it unless you go custom water cooling.


Yes, I ordered the copper VRAM shim mod (the one-piece re-machined ones). I will do it on the backside VRAM, since this card's backplate isn't good enough, and it's much easier to Kapton-tape safely there. I ordered the 3D graphite 2.2mm thermal pads for the core side (they are 40W/mK, way better than Gelid Ultimate, but you can't cut them or they conduct electricity). I will actually do a video on my mods.
Do you recommend any BIOS for this card? I will even do the deshroud mod. I want to hit like 400W under load for better performance. I don't think 15W+ would melt my PCIe 8-pin connectors lol.
Would the KFA2 BIOS let me push 400W?
I undervolt the GPU and do +728MHz on memory.

Thanks


----------



## somedudez

steverebo said:


> I've just flashed my RTX 3090 TUF OC to the Galax 390W BIOS. How do I change the RGB LED colour now?


You don't lol. RGB looks bad on GPUs, unless it's a GameRock; those are stunning. Jokes aside, by turning off the RGB LEDs you allow more power budget for the card; RGB could easily waste 10 watts.


----------



## long2905

somedudez said:


> Yes, I ordered the copper VRAM shim mod (the one-piece re-machined ones). I will do it on the backside VRAM, since this card's backplate isn't good enough, and it's much easier to Kapton-tape safely there. I ordered the 3D graphite 2.2mm thermal pads for the core side (they are 40W/mK, way better than Gelid Ultimate, but you can't cut them or they conduct electricity). I will actually do a video on my mods.
> Do you recommend any BIOS for this card? I will even do the deshroud mod. I want to hit like 400W under load for better performance. I don't think 15W+ would melt my PCIe 8-pin connectors lol.
> Would the KFA2 BIOS let me push 400W?
> I undervolt the GPU and do +728MHz on memory.
> 
> Thanks


I would say it's a bit overkill, but you do you. The Galax vBIOS will get you close to 390W and is your best bet. Other than that, you can try the XOC vBIOS, which gets you to 600W, but it has its own problems. Putting a copper plate on the backside is tricky, as there are lots of tiny resistors there. Unless you're mining (which I did), a thermal pad is more than fine.


----------



## somedudez

long2905 said:


> I would say it's a bit overkill, but you do you. The Galax vBIOS will get you close to 390W and is your best bet. Other than that, you can try the XOC vBIOS, which gets you to 600W, but it has its own problems. Putting a copper plate on the backside is tricky, as there are lots of tiny resistors there. Unless you're mining (which I did), a thermal pad is more than fine.


I see. But the Trinity OC looks like it has so many more components around the core-side VRAM? And my backplate is REALLY bad, friend. Look it up. By far the worst backplate; probably no backplate would be better, because it's that bad.

106C is crazy. On the backside I'm using Thermalright Odyssey right now. I will do lots of testing, and I will use Kapton tape. I heard using paste is bad because it creeps underneath the VRAM and causes cold solder joints down the line, so I will use putty. The backplate basically has dented openings on the VRAM side; wish it was flat.

Gelid pads are good but very expensive too. I won't use random shims; it's a pre-cut one, which they say is the least risky way to do it. I hope I won't burn the card lol..

*Should I try the XOC vBIOS? Does it also support 2x8-pin cards?* I won't push the power slider past 420W. 600W is impossible with this bad stock cooler, and I doubt there is any benefit to going 600W lol.


----------



## long2905

somedudez said:


> I see. But the Trinity OC looks like it has so many more components around the core-side VRAM? And my backplate is REALLY bad, friend. Look it up. By far the worst backplate; probably no backplate would be better, because it's that bad.
> 
> 106C is crazy. On the backside I'm using Thermalright Odyssey right now. I will do lots of testing, and I will use Kapton tape. I heard using paste is bad because it creeps underneath the VRAM and causes cold solder joints down the line, so I will use putty. The backplate basically has dented openings on the VRAM side; wish it was flat.
> 
> Gelid pads are good but very expensive too. I won't use random shims; it's a pre-cut one, which they say is the least risky way to do it. I hope I won't burn the card lol..
> 
> *Should I try the XOC vBIOS? Does it also support 2x8-pin cards?* I won't push the power slider past 420W. 600W is impossible with this bad stock cooler, and I doubt there is any benefit to going 600W lol.


You can put an additional heatsink and fan on top of the backplate; there are plenty of examples both here and on Reddit.
If you use shims with anything but paste, you are gonna have a bad time.
You can try the XOC vBIOS for fun, but in time you will find it's best to stay with the Galax one.


----------



## yzonker

long2905 said:


> You can put an additional heatsink and fan on top of the backplate; there are plenty of examples both here and on Reddit.
> If you use shims with anything but paste, you are gonna have a bad time.
> You can try the XOC vBIOS for fun, but in time you will find it's best to stay with the Galax one.


Yea, the stock air cooler won't handle the extra power very well at all. Before going to a waterblock, I tested it on my Trinity and barely gained any performance due to the much higher temps.


----------



## yzonker

long2905 said:


> You can put an additional heatsink and fan on top of the backplate; there are plenty of examples both here and on Reddit.
> If you use shims with anything but paste, you are gonna have a bad time.
> You can try the XOC vBIOS for fun, but in time you will find it's best to stay with the Galax one.


This. Without active cooling, better pads, shims, etc. won't help much, since the backplate just heat-soaks. An HSF combo will drop temps by about 10C. Better pads on the front help too, as they pull heat through the PCB and help cool the rear chips. But the biggest gain is probably from the HSF.


----------



## somedudez

long2905 said:


> You can put an additional heatsink and fan on top of the backplate; there are plenty of examples both here and on Reddit.
> If you use shims with anything but paste, you are gonna have a bad time.
> You can try the XOC vBIOS for fun, but in time you will find it's best to stay with the Galax one.


Hi, yeah, I was backplate-cooling my Aorus RX 580; it had an amazing backplate for that (145W BIOS to 185W BIOS without artifacts, thanks to the mod).

On my current GPU mod: I received the copper shim but not the putty yet. I will make a video with a fixed fan curve and long stress tests, showing what worked and how: first copper, then copper + 3D graphite + backplate cooling. I actually rotated the tower cooler to align with the exhaust fan.


----------



## gfunkernaught

I get worried when I see "1000w bios" and "air cooled" together in a conversation.


----------



## cletus-cassidy

Hey all, could use some advice. I have a 3090 Strix that I had under water for the past year (never used the stock cooler, still has the plastic on it). I flashed it with a 3090 kingpin 520w bios. I just replaced this card with a 4090 and I took the water block off the Strix and put on the stock cooler. Everything went back together well and when I boot the card gets power (fans spinning, RGB), but it won't post and I'm getting a 97 post code on my z690 Apex. Of course now I am remembering that I failed to flash back to the original bios, although when I flip the bios switch nothing changes.

I've done this (swapped card with stock cooler) a bunch of times with no issues. Wondering if anyone has had something like this happen? Or am I sending this back into Asus (with the EVGA bios on the alternate bios)? Thanks in advance.


----------



## gfunkernaught

@cletus-cassidy CARNAAGE! ✊
Flash the stock bios back into your strix. If you can, boot into windows with two cards, your 3090 and onboard video or your 4090 as the main display. You should be able to select the adapter with nvflash.
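In case the nvflash part trips anyone up, the usual flow for picking the right adapter in a multi-GPU system looks roughly like this. Run it from an elevated prompt; the `--index` value and the `.rom` filenames here are placeholders I made up, so check the `--list` output first. A sketch, not a paste-ready recipe:

```shell
# List detected NVIDIA adapters with their index numbers.
nvflash64 --list

# Save a backup of the BIOS currently on the target card
# (index 1 is a placeholder; use whatever --list reported for the 3090).
nvflash64 --index=1 --save strix_backup.rom

# Temporarily disable the flash write protection, then write the stock ROM.
nvflash64 --index=1 --protectoff
nvflash64 --index=1 stock_strix.rom
```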


----------



## cletus-cassidy

gfunkernaught said:


> @cletus-cassidy CARNAAGE! ✊
> Flash the stock bios back into your strix. If you can, boot into windows with two cards, your 3090 and onboard video or your 4090 as the main display. You should be able to select the adapter with nvflash.


Thanks - just tried this. GPU-Z couldn't "see" the Strix. I will try to put the waterblock back on just to see if it's a short somewhere with the stock cooler or if the card is dead.


----------



## gfunkernaught

cletus-cassidy said:


> Thanks - just tried this. GPU-Z couldn't "see" the Strix. I will try to put the waterblock back on just to see if it's a short somewhere with the stock cooler or if the card is dead.


Interesting. You ran GPUz as elevated admin right? See if nvflash can see both adapters.


----------



## cletus-cassidy

cletus-cassidy said:


> Thanks - just tried this. GPU-Z couldn't "see" the Strix. I will try to put the waterblock back on just to see if it's a short somewhere with the stock cooler or if the card is dead.


Found the issue - wrong screw was too long and touching PCB. Whew - appreciate your help @gfunkernaught


----------



## long2905

cletus-cassidy said:


> Found the issue - wrong screw was too long and touching PCB. Whew - appreciate your help @gfunkernaught


thats quite some luck as nothing fried yet. wrong screw or...?


----------



## gfunkernaught

cletus-cassidy said:


> Found the issue - wrong screw was too long and touching PCB. Whew - appreciate your help @gfunkernaught


Reminds me of my 2080 Ti debacle. I returned my 2080 Ti because _I thought_ it was dead, but the backplate was touching my shunt-mod resistor, and I realized it after the fact 🙄😆


----------



## steverebo

Can you install the 500W XOC BIOS on the TUF OC 3090?


----------



## 7empe

Hey guys, I'm wondering about the binning quality of my 3090 Strix OC chip. I've been searching for the lowest voltage that can hold a steady 2130 MHz clock (locked by nvidia-smi), and it is 987 mV. How does that compare to other 3090s? It is stable in games like BF2042 (RT on), CoD MWII, Spider-Man Remastered (RT on), and Cyberpunk (RT on).

Some more details:

+1200 MHz offset on memory
EKWB waterblock + active backplate with MORA3
burn-out temps do not exceed 36-37C and are in the range of 28-34C while gaming
520W EVGA vbios with re-bar enabled
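For anyone wanting to reproduce this kind of test: locking the core clock via nvidia-smi, as described above, is the `--lock-gpu-clocks` switch. A rough sketch of the workflow (run as admin/root; GPU index 0 is an assumption, so pick yours from `nvidia-smi -L` first):

```shell
# Lock the core clock range to a fixed 2130 MHz (min,max in MHz).
nvidia-smi -i 0 --lock-gpu-clocks=2130,2130

# Check what the driver actually applied.
nvidia-smi -i 0 --query-gpu=clocks.gr --format=csv

# Release the lock after testing.
nvidia-smi -i 0 --reset-gpu-clocks
```

The voltage itself still comes from the card's v/f curve (e.g. an Afterburner curve offset); the lock only pins the frequency so you can hunt for the lowest stable point at that clock.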


----------



## gfunkernaught

7empe said:


> steady 2130 MHz clock (locked by nvidia-smi) and it is 987 mV


Is that the effective clock? My 3090 at 2130MHz requested / 2115MHz effective is stable at 1063mV at full load, with an average core temp of 40C.


----------



## cletus-cassidy

gfunkernaught said:


> Reminds me of my 2080 Ti debacle. I returned my 2080 Ti because _I thought_ it was dead, but the backplate was touching my shunt-mod resistor, and I realized it after the fact 🙄😆


That's it exactly. I've done this like 5x times, but for whatever reason this time I had more trouble reinstalling the stock cooler. Thanks again for the encouragement to look again vs. lazily just sending back in for RMA.


----------



## cletus-cassidy

long2905 said:


> thats quite some luck as nothing fried yet. wrong screw or...?


Hard to describe, but on the top of the Strix there is a shroud that layers on top of the cooler (it says "GEFORCE RTX"). Anyway I forgot to add that shroud as it was sitting in the bottom of the box. Without it, the screw went in too deep and dug into the PCB ever so slightly.


----------



## 7empe

gfunkernaught said:


> Is that the effective clock? My 3090 at 2130MHz requested / 2115MHz effective is stable at 1063mV at full load, with an average core temp of 40C.


It’s effective.


----------



## gfunkernaught

7empe said:


> It’s effective.


That's really good.


----------



## J7SC

7empe said:


> Hey guys, I'm wondering about the binning quality of my 3090 Strix OC chip. I've been searching for the lowest voltage that can hold a steady 2130 MHz clock (locked by nvidia-smi), and it is 987 mV. How does that compare to other 3090s? It is stable in games like BF2042 (RT on), CoD MWII, Spider-Man Remastered (RT on), and Cyberpunk (RT on).
> 
> Some more details:
> 
> +1200 MHz offset on memory
> EKWB waterblock + active backplate with MORA3
> burn-out temps do not exceed 36-37C and are in the range of 28-34C while gaming
> 520W EVGA vbios with re-bar enabled


...2130 MHz at 0.987 V sounds quite good. I know my Strix OC is similar, ~2145 MHz (both on stock vBIOS; KPE 520 as well), but that 3090 is out of the system right now for the 4090 loop insertion, so I can't test it until I've finished moving it to another work+play setup. Full-tilt-on-water (Phanteks block) run per below.


----------



## somedudez

My 3090 exhibits this flashing/flickering.
It began like yesterday. It's a used card.
What is going on? :/

Are the VRAMs dead? It is really frustrating; this card cost me like my whole salary.

(Video: "rtx 3090 issue?" on youtube.com)


----------



## gfunkernaught

somedudez said:


> My 3090 exhibits this flashing/flickering.
> It began like yesterday. It's a used card.
> What is going on? :/
> 
> Are the VRAMs dead? It is really frustrating; this card cost me like my whole salary.
> 
> (Video: "rtx 3090 issue?" on youtube.com)


Just that game?


----------



## somedudez

gfunkernaught said:


> Just that game?


Happens at idle too.. also in Valorant.


----------



## Falkentyne

somedudez said:


> My 3090 exhibits this flashing/flickering.
> It began like yesterday. It's a used card.
> What is going on? :/
> 
> Are the VRAMs dead? It is really frustrating; this card cost me like my whole salary.
> 
> (Video: "rtx 3090 issue?" on youtube.com)


Either the VRAM/GPU is shot, or your DisplayPort output or cable is giving out.
Try both a different DP cable (higher quality) AND a different display output first, before panicking.


----------



## somedudez

Falkentyne said:


> Either the VRAM/GPU is shot, or your DisplayPort output or cable is giving out.
> Try both a different DP cable (higher quality) AND a different display output first, before panicking.


What does it mean that the GPU or VRAM is shot?

After setting PCI Express to Gen 3 (from Auto) in the BIOS (B450) and uninstalling Zotac FireStorm, this issue didn't happen in 40 minutes of PUBG.

I am waiting for my putty to arrive. I will do the backplate copper mod with lots of Kapton tape + 3D graphite thermal pads on the core-side VRAM (40W/mK, 2.2mm pads). I will do benchmarks for a YouTube video, in case anyone is wondering.

Then I will finally do the deshroud mod with 2 Arctic P12 fans for better temps + less noise.

I hope my card is fine. Because it's a used card, I'm afraid it will explode in my hands.


----------



## gfunkernaught

Tried lowering the vcore again. Was able to loop PR, run the PR bench, then ran TSE, then played Quake 2 RTX for the remainder of this session. I don't remember trying this v/f combo before with any success. The clock starts out at 2145mhz, eventually settles to 2115mhz with some heat, low of 2099mhz. Very interesting that I'm able to finish benches and quake 2 rtx at this v/f.









I scored 11 615 in Time Spy Extreme (Intel Core i9-12900KS, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com

I scored 15 579 in Port Royal (Intel Core i9-12900KS, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com












Played Cyberpunk for 50min.


----------



## GRABibus

First time I break 16K in PR with my EVGA Kingpin Hybrid with *STOCK COOLER* 


















I scored 16 187 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com





Max Power draw = 630W

Average GPU temperature = 52°C


----------



## long2905

GRABibus said:


> First time I break 16K in PR with my EVGA Kingpin Hybrid with *STOCK COOLER*
> 
> I scored 16 187 in Port Royal (AMD Ryzen 9 5950X, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com
> 
> Max Power draw = 630W
> Average GPU temperature = 52°C


nice! got any boost from the latest driver?


----------



## GRABibus

long2905 said:


> got any boost from the latest driver?


Yep


----------



## Shraf2k

Need some help gents. Bought a KP 2nd hand and it was artifacting like crazy but managed to get it RMA'd. I cannot for the life of me run any port royal runs with a memory OC over +900. Should I RMA it again? It's barely stable in games in anything besides stock settings. Port royal looks like this at +900/+100 on OC bios.


----------



## 7empe

Shraf2k said:


> Need some help gents. Bought a KP 2nd hand and it was artifacting like crazy but managed to get it RMA'd. I cannot for the life of me run any port royal runs with a memory OC over +900. Should I RMA it again? It's barely stable in games in anything besides stock settings. Port royal looks like this at +900/+100 on OC bios.
> View attachment 2583518


IMO, that does not look like artifacting caused by the OC. What kind of instabilities have you observed in games? Is it a crash to desktop or a hard reboot? Did you check the cable and the PCIe slot?


----------



## andypc2013

Hello all, I have a 3090 Zotac Trinity and would love to get a bit more FPS out of it while gaming.
What is the best BIOS I can get for the 3090 Zotac Trinity? I have a waterblock on it and the GPU is on its own loop
with 2x 560 rads with push-pull fans, in a Corsair 1000D case; the CPU is on the outer 2x 560 rads.
It would be nice to keep ReBAR as well on a BIOS.
Any help is most welcome.


----------



## kairi_zeroblade

andypc2013 said:


> Hello all, I have a 3090 Zotac Trinity and would love to get a bit more FPS out of it while gaming.
> What is the best BIOS I can get for the 3090 Zotac Trinity? I have a waterblock on it and the GPU is on its own loop
> with 2x 560 rads with push-pull fans, in a Corsair 1000D case; the CPU is on the outer 2x 560 rads.
> It would be nice to keep ReBAR as well on a BIOS.
> Any help is most welcome.


A BIOS won't squeeze anything more out of that GPU; you really need to shunt mod it. My 3090 was underperforming on any 1 kW VBIOS I flashed, and even if I fed it 600 W it still wouldn't perform as expected.

Also, on a note, anything above 40°C will automatically downclock your bins below the frequency you set.


----------



## yzonker

kairi_zeroblade said:


> A BIOS won't squeeze anything more out of that GPU; you really need to shunt mod it. My 3090 was underperforming on any 1 kW VBIOS I flashed, and even if I fed it 600 W it still wouldn't perform as expected.
> 
> Also, on a note, anything above 40°C will automatically downclock your bins below the frequency you set.


There must have been some other issue, as the KP 1 kW BIOS works really well on 2x 8-pin. My Zotac 3090 Trinity always performed well with that BIOS. This is one of the last PR runs I made.









I scored 16 127 in Port Royal (Intel Core i9-12900K, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10) - www.3dmark.com





@andypc2013 , either one of the 390 W BIOSes on TPU or the KP 1 kW BIOS are your options. Look for versions on TPU that are newer than about March 2021 to get one that has ReBAR. They're on there.


----------



## kairi_zeroblade

yzonker said:


> There must have been some other issue as the KP 1kw bios works really well on 2x8pin. My Zotac 3090 trinity always performed well with that bios. This is one of the last PR runs I made.


I only get around 13k with the KP 1 kW VBIOS; on the original it does 14k. The strange behavior on the KP VBIOS is that it doesn't go up to 0.95 V for my Gigabutt Gaming OC GPU sandwiched between waterblocks, no matter how much I feed it (normally I set it to 520-550 W). Don't ask me if I know how to use the Curve Editor in MSI AB; I do, since I have a laptop with a 3060 that's overclocked using the Curve Editor too.

I was about to shunt mod it, but fortunately it's going to its new owner already. Slapped my 6900XT back in and all my FPS woes are gone.

I will probably just buy a 4090 when things clear up with the connector issue and when prices are sane (or never buy one, as I am confident and happy with the performance of the 6900XT).


----------



## andypc2013

yzonker said:


> There must have been some other issue as the KP 1kw bios works really well on 2x8pin. My Zotac 3090 trinity always performed well with that bios. This is one of the last PR runs I made.
> 
> I scored 16 127 in Port Royal (Intel Core i9-12900K, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 10) - www.3dmark.com
> 
> @andypc2013 , either one of the 390w bios on TPU or the KP 1kw bios are your options. Look for versions on TPU that are newer than about March 2021 to get one that has ReBAR. They're on there.


Have you got a link to the BIOS with ReBAR on it? I'm new to flashing the GPU BIOS, so I would not know where to look.
Thanks buddy


----------



## GRABibus

andypc2013 said:


> Have you got a link to the BIOS with ReBAR on it? I'm new to flashing the GPU BIOS, so I would not know where to look.
> Thanks buddy











EVGA RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


----------



## gfunkernaught

As mentioned many pages ago, most 3090s, especially low- to mid-range cards, won't benefit much from going over 500 W. I've seen this myself. I have my Trio limited to 620 W only to keep the clock from bouncing around. But when I was testing 8K in games like Battlefield 1 and Fallen Order, allowing the card to go over 620 W showed zero benefit.
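One way to see why the gains dry up: dynamic power scales roughly with f·V², and since voltage has to rise roughly with frequency at the top of the curve, power grows close to the cube of the clock. A rough sketch; the cube-law exponent is an approximation, and real cards also burn static/leakage power:

```python
# Back-of-envelope: dynamic power ~ f * V^2, and V rises roughly
# linearly with f near the top of the v/f curve, so P ~ f^3.
# The exponent is an approximation, not a measured fit.
def clock_ratio_from_power(p1_w, p2_w, exponent=3.0):
    """Expected clock ratio f2/f1 given a power budget change p1 -> p2."""
    return (p2_w / p1_w) ** (1.0 / exponent)

gain = clock_ratio_from_power(500, 620) - 1.0
print(f"{gain:.1%}")  # -> 7.4%: ~24% more power buys only ~7% more clock
```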


----------



## J7SC

andypc2013 said:


> have you got a link to the bios with the rebair on it im new to flashing the gpu bios, so would not know where to look
> thanks buddy


This > 520W KingPin vbios (March '21) is what I use as secondary on the 3090 Strix - the XOC 1000W version can obviously go a bit higher, but the 520 keeps its safeties engaged.



kairi_zeroblade said:


> I only get around 13k with the KP 1kw VBios; on the original it does 14k. The strange behavior on the KP VBios is it doesn't go up to 0.95v for my Gigabutt Gaming OC GPU sandwiched between waterblocks no matter how much I feed it (normally I set it to 520-550W; don't ask me if I know how to use the Curve Editor in MSI AB, I do, since I have a laptop with a 3060 and it's overclocked using the Curve Editor too). I was about to shunt mod it, but fortunately it's going to its new owner already; slapped my 6900XT back in and all my FPS woes are gone.
> 
> will just probably buy a 4090 when things clear up with the connector issue and when prices are sane (or will never buy as I am confident and happy with the performance of the 6900XT)


...obviously there must be another issue at play, either with your card or your setup; @yzonker 's Zotac performance has been speaking volumes. As to the 6900XT, I run one in literally the same case as a 4090 (bought at entry-level pricing); they're not even in the same zip code...


----------



## kairi_zeroblade

J7SC said:


> ...obviously there must be another issue at play, either with your card or your setup; @yzonker 's Zotac performance has been speaking volumes. As to the 6900XT, I run one in literally the same case as a 4090 (bought at entry-level pricing); they're not even in the same zip code...


I don't enable RT; I don't feel like it. I get nausea watching that shimmering crap all over, so I am just happy with the brute raster of the 6900XT. There's nothing wrong with my system; it has been fully tweaked and stable ever since, and my scores are back to normal with the 6900XT after removing the 3090. I just tried the 3090 for fun. Like you said, and as I have read here as well, the 2x 8-pin cards are knockers; that is why I went with that for my 3090 choice. Well, it's Gigabutt, what do I expect? Anyways, it has a new owner tomorrow (sold it for scalper price so I won't bicker over it; also the 7900 XTX is just around the corner).


_Edit:drunken grammar_


----------



## andypc2013

Am I not better off using the Gigabyte OC BIOS on my 3090 Zotac Trinity?
Does anyone have a link for that BIOS with ReBAR for my Zotac Trinity?
Will it definitely work?


----------



## yzonker

andypc2013 said:


> Am I not better off using the Gigabyte OC BIOS on my 3090 Zotac Trinity?
> Does anyone have a link for that BIOS with ReBAR for my Zotac Trinity?
> Will it definitely work?


Just depends on how much you want to push your card. The 1 kW BIOS doesn't have thermal limits either, so there's some risk there if your loop has an issue.









Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com

I think this Galax one has better display-output compatibility than the Gigabyte.

GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


----------



## andypc2013

yzonker said:


> Just depends on how much you want to push your card. The 1kw bios doesn't have thermal limits either. So some risk there if your loop has an issue.
> 
> Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> This Galax I think has better port compatibility than the Gigabyte.
> 
> GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


Do they have the ReBAR? Sorry if I sound dumb, lol. I use mine for gaming, that's all.


----------



## GRABibus

andypc2013 said:


> Do they have the ReBAR? Sorry if I sound dumb, lol. I use mine for gaming, that's all.


Yes, they all have ReBAR.


----------



## nievz

Hey guys, is anyone using the Bykski n-ms3090trio-x block for the MSI Gaming X Trio? What's your experience with it? Is it truly full coverage, as it states on their website?


----------



## Shraf2k

7empe said:


> IMO, that does not look like artifacting caused by the OC. What kind of instabilities have you observed in games? Is it a crash to desktop or a hard reboot? Did you check the cable and the PCIe slot?


I've had crashes as well as reboots when this happens. I'll try reseating it in the PCIe slot if you think it's that. It's definitely not the cable, since I swapped to my side monitor, which is on DP (main is HDMI), and I get the same thing, as well as crashing. What do you think it is?


----------



## andypc2013

yzonker said:


> Just depends on how much you want to push your card. The 1kw bios doesn't have thermal limits either. So some risk there if your loop has an issue.
> 
> Gigabyte RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com
> 
> This Galax I think has better port compatibility than the Gigabyte.
> 
> GALAX RTX 3090 VBIOS (24 GB GDDR6X, 1395 MHz GPU, 1219 MHz Memory) - www.techpowerup.com


Thank you, I went for the Galax and all works well. Thanks again!


----------



## Shraf2k

Reseated it and reinstalled drivers yet again. It still crashes, or half the screen looks torn three different ways, plus blackouts before completely crashing my system. I set the CPU and RAM to stock settings with no change. I've never had an Ampere GPU act like this. So weird.


----------



## yzonker

Shraf2k said:


> Reseated it and reinstalled drivers yet again. Still crashes or has what looks like half the screen is torn 3 different ways as well as blackouts before completely crashing my system. Set the CPU and RAM to stock settings and no change. I've never had an ampere GPU act like this. So weird.


If it does that at stock clocks, it might be time for a second RMA, unfortunately.

You might try, just as a test, downclocking the memory below default. Usually it's the memory that causes corruption and blackouts.


----------



## Imprezzion

Going to pick up two Inno3D iChill X4s tomorrow, which I got for 700 bucks apiece. They are ex-mining, but meh, I'll take a chance at that price.

Does anyone run an Inno3D iChill X3/X4 with a full-cover block? EK's configurator doesn't list them at all, but Corsair's and Watercool.de's configurators do list them as reference-compatible. I plan to use an EK RE full-nickel 3090 block with the nickel backplate.


----------



## Shraf2k

yzonker said:


> If it does that at stock clocks, it might be time for a second RMA, unfortunately.
> 
> You might try, just as a test, downclocking the memory below default. Usually it's the memory that causes corruption and blackouts.


I appreciate the reply. I think there's just a lack of "good" Kingpins out there, and they're being held onto by their owners. I remember hearing some early gripes about cards that won't OC, etc. I called EVGA and they were willing to take back the RMA card, and the guy I bought the first card from is willing to take his card back. I got so frustrated that I popped my 3080 Ti back in and broke my old Port Royal record, lol: 15,467! I couldn't get either Kingpin to score over 14,800-ish. Maybe I'll keep looking, but for a 3090 Ti KP this time.


----------



## gfunkernaught

Shraf2k said:


> I appreciate the reply. I think there's just a lack of "GOOD" kingpins out there and they're being held onto by their owners. I remember hearing some early gripes about cards that wont OC etc. I called EVGA and they were willing to take back the RMA card and the guy I bought the 1st card from is willing to take his card back. I got so frustrated that I popped my 3080ti back in and broke my old port royal record lol. 15,467! i couldn't get either kingpin to score over 14,800ish. maybe I'll keep looking but for a 3090ti KP this time.


Sometimes it's not just good/bad silicon. I thought my Trio wasn't a good OC'er until I dialed in the cooling and got a top-tier PSU. I used to get crashes playing Cyberpunk or other heavy 3D apps with [email protected]; now it's stable. Drivers are also a factor.


----------



## Shraf2k

gfunkernaught said:


> Sometimes its not just good/bad silicon. I thought my Trio wasn't a good OC'er until I dialed in the cooling and got a top-tier PSU. I used to get crashes playing Cyberpunk or other heavy 3D apps with [email protected], now its stable. Drivers are also a factor.


3080ti

I scored 15 467 in Port Royal (Intel Core i7-12700K, NVIDIA GeForce RTX 3080 Ti, 32768 MB, 64-bit Windows 11) - www.3dmark.com

KP 3090

I scored 14 554 in Port Royal (Intel Core i7-12700K, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com

slapped into the same system.
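A quick sanity check of the gap between those two runs:

```python
# Relative difference between the two Port Royal scores posted above.
ti_3080, kp_3090 = 15467, 14554
diff = (ti_3080 - kp_3090) / kp_3090
print(f"{diff:.1%}")  # -> 6.3%
```

So the 3080 Ti run came out about 6% ahead of the Kingpin in the same system, which is well outside normal run-to-run variance for Port Royal.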


----------



## long2905

Imprezzion said:


> Going to pick up 2 Inno3D iChill x4's tomorrow which I got for 700 bucks a piece. They are ex mining but meh. I'll take a chance for that price.
> 
> Does anyone run a Inno3D iChill x3/x4 with a full cover block? EK configurator doesn't list them at all but Corsair and Watercool.de configurator does list them as reference compatible. I plan to use a EK RE full nickel 3090 block with nickel backplate.


It's the reference PCB, but a tiny bit longer. Bykski has a full-cover active backplate for it. Alphacool has one without an active backplate. Both are fine.


----------



## Imprezzion

long2905 said:


> its ref pcb but tiny bit longer. bykski has a full cover active backplate for it. alphacool has one without active backplate. both are fine.


I found a BNIB EK-Quantum Vector RE RTX 3080/3090 nickel with passive plate for 100 bucks, but I just wanna know for sure if it fits. EK doesn't list it, but other manufacturers like Corsair or Watercool do list it as compatible with reference blocks. I use a Bykski block on my current card and they are perfectly fine, but that EK is so cheap lol.


----------



## long2905

Imprezzion said:


> I found a BNIB EK-Quantum Vector RE RTX 3080/3090 nickel with passive plate for 100 bucks but I just wanna know for sure if it fits. EK doesn't list it but other manufacturers like Corsair or watercool do list it as compatible with reference blocks. I use a Bykski block on my current card and they are perfectly fine but that EK is so cheap lol.


You can check and compare PCB shots of the Inno3D card against the EK block to see if they match up. EK is not that great though...


----------



## J7SC

long2905 said:


> its ref pcb but tiny bit longer. bykski has a full cover active backplate for it. alphacool has one without active backplate. both are fine.


I really like the Phanteks Glacier block for my 3090 Strix (lower right in dual-build artsy pic). The Phanteks replaced an EK block I wasn't happy with, and with an extra heatsink on the back and plenty of thermal putty everywhere, temps were/are great.


----------



## Imprezzion

long2905 said:


> you can check and compare the pcb shots of the inno3d card over the ek block see if they match up. ek is not that great though...


The EK block does fit, but I found a Reddit thread saying I would need to cut the 3rd fan header off the PCB on the iChill X4, as the EK block has no cutout for it. That's why they don't list it as compatible. The Alphacool GPX-N, however, does have this cutout. I am ordering a GPX-N with the passive BP from Aquatuning.nl / .de later today. It's in stock locally and about €132 with backplate, so a good price as well. Thanks for the help! If the memory temps are still cringe I can always opt for the matching Alphacool active BP later on.


----------



## long2905

Imprezzion said:


> EK block does fit but I found a reddit thread saying I would need to cut the 3rd fan header off the PCB on the iChill x4 as the EK block has no cutout for that. That's why they don't list it as compatible. The Alphacool GPX-N however does have this cutout. I am ordering a GPX-N with passive BP from Aquatuning.nl / .de later today. It's in stock locally and about €132 with backplate so good price as well. Thanks for the help! If the memory temps are still cringe I can always opt for the matching Alphacool active BP later on.


Alphacool is an Inno3D partner; they have a pre-blocked version, the iChill Frostbite, straight from Alphacool. Same with the 4090, for which I'm waiting for things to normalize.


----------



## gfunkernaught

Shraf2k said:


> 3080ti
> 
> I scored 15 467 in Port Royal (Intel Core i7-12700K, NVIDIA GeForce RTX 3080 Ti, 32768 MB, 64-bit Windows 11) - www.3dmark.com
> 
> KP 3090
> 
> I scored 14 554 in Port Royal (Intel Core i7-12700K, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com
> 
> slapped into the same system.


Huh, I thought kingpins were EVGA's best-binned chips. Did you buy that KP new or used?


----------



## Imprezzion

Got both of the Inno3Ds. They work perfectly fine so far. Only one weird issue: on Auto they run "100% power" at around 360-365 W, but when I use a curve OC/UV the power also goes down, while the percentage stays at 100 in AB but not in GPU-Z, and it hits the PWR perfcap at around 330 W. The lower the curve, the worse the power draw gets... weird... maybe it's my old drivers, or Windows, or MSI AB...


----------



## WayWayUp

My 4090 is here today.
Happy to have the beast, but sad to let go of my 3090.
It's a super golden sample ;'(
I know I'll never get that lucky again in the silicon lottery.
I ran Fire Strike Ultra one last time and the card did 2,295 MHz and maintained it for most of the benchmark. Also set the record for the 3090 and 10900KF combo.

But alas, my 4090 is here.

I think I'll sell after Thanksgiving weekend.


----------



## Imprezzion

OK, so the power issue with the MSI AB curve is fixed. More than fixed. I flashed a Zotac Trinity OC BIOS onto my Inno3D iChill X4, as they share almost the full PCB, and it works miracles. I can draw 380 W now on the 110% power limit slider, stable. That is even above what 2x 8-pin should be able to do. The PCIe slot is running 68-70 W, and both 8-pins 155-156 W each. It is not a misreading, because the temperatures on air are very, very high. Even at 100% fan speed the core gets up to 67°C and the hotspot kind of runs away, over 93°C. They need a repaste badly. But it will hold something like 1935 @ 0.875 without throttling so far, much better than the stock BIOS, which struggles to hold 1850 at 0.837.

So yeah, tl;dr: the Zotac Trinity OC BIOS is amazing. 380 W, no throttling, and way higher boost clocks.
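Those rail readings can be checked against the connector specs (75 W for the slot, 150 W per 8-pin by the PCIe CEM spec) in a few lines:

```python
# Sanity-check reported power rail readings against connector specs:
# the PCIe slot is specced for 75 W and each 8-pin for 150 W.
SPEC = {"slot": 75, "8pin_1": 150, "8pin_2": 150}
readings = {"slot": 70, "8pin_1": 156, "8pin_2": 155}  # readings from the post

total = sum(readings.values())
print(total)  # -> 381, matching the ~380 W figure quoted above
for rail, watts in readings.items():
    over = watts - SPEC[rail]
    print(f"{rail}: {watts} W ({'+' if over > 0 else ''}{over} W vs spec)")
```

Each 8-pin running a few watts over its 150 W rating is mild; quality cables and connectors have plenty of margin there.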


----------



## long2905

Imprezzion said:


> K so the power issue with MSI AB curve is fixed. More then fixed. I flashed a Zotac Trinity OC BIOS on my Inno3D iChill x4 as they share almost the full PCB and it works miracles. I can draw 380w now on 110% power limit slider stable. That is even above what 2x8 pin should be able to do. PCI-E slot is running 68-70w, both 8 pins 155-156w. It is not misreading cause the temperatures on air are very very high. Even on 100% fanspeeds the core gets up to 67c and hotspot kinda runs away over 93c. They need a repaste bad. But it will hold something like 1935 @ 0.875 without throttling so far. Much better then stock BIOS which struggles to hold 1850 at 0.837..
> 
> So yeah, tl;dr zotac trinity oc bios is amazing. 380w no throttling and way higher boost clocks.


Try the Galax VBIOS. It's the best one for 2x 8-pin cards.


----------



## Celcius

I got bored and decided to tinker with my PC. My CPU is already at its OC limit (10700K @ 4.9 GHz all-core) with my cooler, so I decided to focus on my video card (EVGA RTX 3090 FTW3 Ultra).

First I ran Unigine Heaven:
stock clocks= 77c, 77.5 fps
core undervolted to 900mv= 70c, 77.1 fps
core undervolted and oc mem +500 = 72c, 78.2 fps
core undervolted and oc mem +750 (3090Ti mem speed) = 72c, 78.7 fps

Then I decided to run my other usual benchmarks:
Stock:
3dmark firestrike = 33,197
3dmark timespy = 18,396
ff14 endwalker benchmark = 17,193
ff15 benchmark = 8,975
superposition 1080p extreme = 12,856 @ 78c
Borderlands 3 = 86 fps

Undervolted gpu and overclocked mem:
3dmark firestrike = 33,201
3dmark timespy = 18,532
ff14 endwalker benchmark = 17,316
ff15 benchmark = 9,225
superposition 1080p extreme = 12,259 @ 74c
Borderlands 3 = 88 fps

The core is running a few degrees cooler and the mem temps are the same (88c under full load), but fans are a little quieter than stock due to lower gpu temps. The only benchmark where the score didn't go up was Superposition (went down a little) but I guess it relies on the core more than memory.

Edit: Updated to add Borderlands 3 built-in benchmark @ 4K max settings as well
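The percentage changes for the runs above, computed in one place:

```python
# Percent change, stock -> undervolted core + overclocked memory,
# for the benchmark numbers listed in the post above.
stock = {"Fire Strike": 33197, "Time Spy": 18396, "FF14": 17193,
         "FF15": 8975, "Superposition": 12856, "Borderlands 3 fps": 86}
tuned = {"Fire Strike": 33201, "Time Spy": 18532, "FF14": 17316,
         "FF15": 9225, "Superposition": 12259, "Borderlands 3 fps": 88}

for name in stock:
    change = (tuned[name] - stock[name]) / stock[name]
    print(f"{name}: {change:+.1%}")
```

The gains are all within a few percent, with Superposition the only regression, consistent with the observation that it leans on the core more than the memory.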


----------



## gfunkernaught

Celcius said:


> I got bored and decided to tinker with my PC. My CPU is already at it's OC limit (10700K @ 4.9ghz all core) with my cooler so I decided to focus on my videocard (evga rtx 3090 ftw3 ultra).
> 
> First I ran Unigine Heaven:
> stock clocks= 77c, 77.5 fps
> core undervolted to 900mv= 70c, 77.1 fps
> core undervolted and oc mem +500 = 72c, 78.2 fps
> core undervolted and oc mem +750 (3090Ti mem speed) = 72c, 78.7 fps
> 
> Then I decided to run my other usual benchmarks:
> Stock:
> 3dmark firestrike = 33,197
> 3dmark timespy = 18,396
> ff14 endwalker benchmark = 17,193
> ff15 benchmark = 8,975
> supersposition 1080p extreme = 12,856 @ 78c
> 
> Undervolted gpu and overclocked mem:
> 3dmark firestrike = 33,201
> 3dmark timespy = 18,532
> ff14 endwalker benchmark = 17,316
> ff15 benchmark = 9,225
> supersposition 1080p extreme = 12,259 @ 74c
> 
> The core is running a few degrees cooler and the mem temps are the same (88c under full load), but fans are a little quieter than stock due to lower gpu temps. The only benchmark where the score didn't go up was Superposition (went down a little) but I guess it relies on the core more than memory.


Try the 4k benchmarks 😉


----------



## Celcius

gfunkernaught said:


> Try the 4k benchmarks 😉


I forgot to mention that the Heaven, FF14, and FF15 benchmarks were run at 4K. For 3DMark I just run the free demo. For Superposition, the 1080p Extreme benchmark is more stressful than the 4K Optimized bench (free options).


----------



## Imprezzion

At what kind of memory temperatures should I be worried, lol? I just put my Alphacool block on the 3090, but with the stock Alphacool pads, which are... not great... and I like the stock backplate so I kept that as well. Core temps at a constant 385 W simulated load are totally fine, but memory is still very high. I stress tested with Kombustor and Division 2 (sitting at a static spot that maxes power draw at really high FPS to stress memory) for over 40 minutes, so temps have reached equilibrium at the points in the second column. 90°C really doesn't look good for watercooling, but then again it's probably the back modules getting this hot with the stock backplate.

I ordered some Gelid Extreme pads in the correct thickness, but I'm not that happy about having to strip the entire loop down again after one evening...


----------



## long2905

Imprezzion said:


> At what kind of memory temperatures should I be worried lol. I just put my Alphacool block on the 3090 but with the stock Alphacool pads which are.. not great.. and I like the stock backplate so I kept that as well. Core temps at a constant 385w simulated load are totally fine but memory is very high still.. I stress tested with Kombustor and Division 2 (sitting at a static spot that maxes power draw at really high FPS to stress memory) for over 40 minutes so temps have reached equilibrium at the points in the second column. 90 really doesn't look good for watercooling but then again it's probably the back modules getting this hot with the stock backplate.
> 
> I ordered some Gelid Extreme pads in the correct thickness but I'm not that happy having to strip the entire loop down again after 1 evening..
> 
> View attachment 2585022


Yeah, try with the Gelid Extreme and let us know. On the backplate you still need to put an additional heatsink with a fan on top for worthwhile heat dissipation.


----------



## Imprezzion

long2905 said:


> yeah try with gelid extreme and let us know. on the backplate you still need to put some additional heatsink and a fan on top for worthwhile heat dissipation


Using the Gelid pads with the stock Inno3D backplate (not the thicker Alphacool one) and zero active airflow, it dropped from 90°C @ +250 memory to 84°C @ +1250 memory.

Still not great, but I mean, it's not "too" high, I assume?


----------



## Imprezzion

Sorry for the double post, but it has been a while.

I am quite surprised how high my VRAM will OC without dropping frames or crashing. It doesn't seem to hit ECC up to +1500 at all. At +1600 the frametimes aren't as stable anymore, and at +1700 it really starts to get hard drops; +1750 crashes. I ran multiple different games and benchmarks for several hours at +1250, comparing it to 0, +250 and +750, and it scaled beautifully with zero issues. Even at 85°C.

Also, 1845 core @ 0.837 V with +250 VRAM is my card's incredibly power-efficient sweet spot. It hasn't hit 320 W in any 1080p game yet and usually sits around 285-290 W, which is way lower than my 3080, and it's still much faster.

I have two profiles now: 1845 @ 0.837 with +250 VRAM for efficiency, and 1980 @ 0.912 with +1250 VRAM for full-power brute force. The latter does run around 390-395 W though, and with the Zotac Trinity OC BIOS I run, it does sometimes slightly throttle to 1950 @ 0.900 / 1935 @ 0.893. With the 400 W Galax BIOS it can sustain full clocks, but one of my display outputs won't work with it, so yeah...
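For reference, GDDR6X bandwidth is just the effective data rate times the bus width over eight. The sketch below assumes the Afterburner offset adds directly to the reported memory clock (9751 MHz stock on a 3090), which is half the effective rate; that convention varies between tools, so treat it as illustrative:

```python
# GDDR6X bandwidth = effective data rate (MT/s) * bus width (bits) / 8.
# Assumption: the Afterburner memory offset is added to the reported
# clock (9751 MHz stock on a 3090), and the effective rate is twice it.
def bandwidth_gbs(reported_mhz, bus_bits=384):
    """Theoretical memory bandwidth in GB/s."""
    effective_mts = reported_mhz * 2
    return effective_mts * bus_bits / 8 / 1000

print(round(bandwidth_gbs(9751), 1))         # stock: ~936 GB/s, matching spec
print(round(bandwidth_gbs(9751 + 1250), 1))  # +1250 offset: ~1056 GB/s
```

Under that assumption, +1250 is roughly a 13% bandwidth bump, which lines up with the kind of scaling seen in memory-bound benchmarks.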


----------



## Zetsubou

darkeclypse said:


> Finally got my 500 watts with my 2-month-old EVGA 3090 FTW3 Ultra!!
> 
> The XOC BIOS that claims 500 watts never did more than 460 watts for me. Early power limit after 424 watts, downclocking the card.
> 
> Grew a pair, took baby steps, and tried the Kingpin 3090 520-watt BIOS..
> 
> Now I get more than 500 watts haha.. 516 watts max so far just playing around. So awesome that a BIOS can fix it, as I was thinking I needed to hard mod the card to fix the issue.
> 
> Now so far my power lines look like this:
> 
> PCIe = 70 watts
> Pin 1 = 170 watts
> Pin 2 = 166 watts
> Pin 3 = 107 watts
> 
> Finally I can have some fun overclocking and benching.


Are you still running the Kingpin BIOS with the 520W output?

I have the same 3090 FTW3 Ultra, converted with the Hybrid kit, but I'm still seeing the 426W limit you mentioned and it's driving me crazy.

Just wanted to see how your experience has aged 😄


----------



## Zetsubou

_ignore. Double post fam._


----------



## gfunkernaught

Latest drivers...








I scored 15 966 in Port Royal (Intel Core i9-12900KS, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com




Haven't seen this in a while.


----------



## man from atlantis

Can't run the Ampere tools on Windows 11 Home 22H2.

GitHub - l0lhax/AmpereOC (github.com)


----------



## gfunkernaught

I scored 16 007 in Port Royal (Intel Core i9-12900KS, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com


----------



## KedarWolf

gfunkernaught said:


> I scored 16 007 in Port Royal (Intel Core i9-12900KS, NVIDIA GeForce RTX 3090, 32768 MB, 64-bit Windows 11) - www.3dmark.com


My ASUS Strix OC RTX 4090 on the GALAX 666w BIOS if anyone is thinking of upgrading.









I scored 29 748 in Port Royal (AMD Ryzen 9 7950X, NVIDIA GeForce RTX 4090, 32768 MB, 64-bit Windows 11) - www.3dmark.com





I upgraded from my Strix 3090, which I still haven't sold yet.

I have a $600 CAD Optimus water block on it, and I'm letting it go for relatively cheap.


----------



## KedarWolf

I scored 16 055 in Port Royal. Best 3090 run.


----------



## yzonker

KedarWolf said:


> My ASUS Strix OC RTX 4090 on the GALAX 666w BIOS if anyone is thinking of upgrading.
> 
> I scored 29 748 in Port Royal (AMD Ryzen 9 7950X, NVIDIA GeForce RTX 4090, 32768 MB, 64-bit Windows 11) - www.3dmark.com
> 
> I upgraded from my Strix 3090, which I still haven't sold yet.
> 
> I have a $600 CAD Optimus water block on it. And letting it go for relatively cheap.


Are you still using a RAM drive to run 3DMark from? I remember you posting about seeing gains quite a while ago. IIRC, I tested it in PR and didn't see any benefit, though. Wondering if I should revisit this.

You have a better 4090 than me, but it still seems like you're finding a little better efficiency than I am.


----------



## gfunkernaught

KedarWolf said:


> I scored 16 055 in Port Royal. Best 3090 run.


My low-end Trio scored 48pts less than your Strix


----------



## KedarWolf

yzonker said:


> Are you still using a RAM drive to run 3DMark from? I remember you posting about seeing gains quite a while ago. IIRC, I tested it in PR and didn't see any benefit though. Wondering if I should re-visit this.
> 
> You have a better 4090 than me, but still seems like you're finding a little better efficiency than me.


No, I never ran it from a RAM drive, just off a 990 Pro 2TB M.2.


----------



## yzonker

KedarWolf said:


> No, never ran it from a RAM drive, a 990 Pro 2TB M.2.


Uh, really? Although you mentioned TS, not PR, now that I read it again. But I think you answered my question anyway.









kWh and not kW/h (linked post on www.overclock.net)
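Since the linked correction is about units: energy is billed in kWh (power multiplied by time), not "kW/h" (which would be power per time). A minimal sketch of the arithmetic, with an assumed electricity price:

```python
# Energy (kWh) = power (kW) x time (h). "kW/h" would be a rate of change
# of power, which is not what the utility bills for.
def energy_kwh(power_watts: float, hours: float) -> float:
    """Energy consumed in kilowatt-hours."""
    return power_watts * hours / 1000.0

def electricity_cost(power_watts: float, hours: float, price_per_kwh: float) -> float:
    """Cost of running a load for some hours; price_per_kwh is an assumption."""
    return energy_kwh(power_watts, hours) * price_per_kwh

# A 390 W card gaming 4 hours/day for 30 days:
print(energy_kwh(390, 4 * 30), "kWh")
```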


----------



## Sheyster

KedarWolf said:


> I upgraded from my Strix 3090, which I still haven't sold yet.


I also upgraded to a GB-G-OC 4090 from a Strix 3090. The Strix 4090 is impossible to find where I live and scalpers want 3000+ USD for it. 

If anyone is interested in a nice air-cooled Strix with no mining time on it, PM me. It's listed in the marketplace here on OCN.


----------



## nexxusty

Deleted.


----------



## Imprezzion

Would you guys recommend ReBAR and HAGS on or off? I currently run both on and everything seems perfectly smooth, but maybe I'm leaving some performance on the table? Running a 5800X3D, Windows 11 Pro 22H2, second-to-latest drivers.


----------



## dmecer3

Hoping to get some insight as a first-time poster here :O
I downloaded GPU-Z and I worry it's saying my GPU is running at half speed on the PCIe slot. The bus interface says PCIe x16 4.0 @ x8 1.1.
I have an Asus Z690-E and I am running my NVMe on the bottom M.2 slot to avoid any chance of that reducing the main PCIe lane speed... anyone else see this happening?

I also just ran a scan with EVGA Precision and it tripped my breakers despite being on a 1500x backup PSU...

Starting to worry... I am just trying to benchmark this thing since it's used.

If it matters, it never got above 70°C without an OC while running the 3DMark bench.

i7 13700K, 64GB 5600MHz RAM, Asus Z690-E, Windows 11 64-bit


----------



## Imprezzion

dmecer3 said:


> Hoping to get some insight as a first time poster here :O
> I downloaded GPU-Z and I worry its saying my GPU is running at half speed on the PCIe slot. The bus interface says PCIe- x16 4.0 @x8 1.1
> I have an Asus z690-e and I am running my NVME on the bottom m2 slot to avoid any chance of that reducing the main PCIe lane speed... anyone else see this happening?
> 
> I also just ran a scan with EVGA Precision and it threw my breakers despite being on a 1500x backup PSU...
> 
> Starting to worry.. I am just trying to benchmark this thing as its used.
> 
> If it matters it never got above 70c without OC running 3dmark bench.
> 
> i7 13700k 64GB 5600mhz ram Asus z690-e Windows 11 64bit


Is it just idling? x8 1.1 at idle is normal power saving. Check the link speed in GPU-Z while the card is under a 3D load.
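For anyone who wants to check this from a terminal instead, `nvidia-smi` exposes the current and maximum link speed (`pcie.link.gen.current`, `pcie.link.gen.max`, `pcie.link.width.current`, `pcie.link.width.max` are real query fields); the little parsing helper below is a hypothetical sketch around its CSV output:

```python
# Sketch: interpret one line of
#   nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current,pcie.link.width.max --format=csv,noheader
# to tell a normal idle power-saving downshift from a genuinely degraded link.
def link_status(csv_line: str) -> str:
    gen_cur, gen_max, width_cur, width_max = [f.strip() for f in csv_line.split(",")]
    if gen_cur == gen_max and width_cur == width_max:
        return "full speed"
    return (f"downshifted (gen {gen_cur}/{gen_max}, x{width_cur}/x{width_max}) -- "
            "re-check under a 3D load before worrying")

print(link_status("1, 4, 8, 16"))   # typical at idle
print(link_status("4, 4, 16, 16"))  # typical under load
```

If the link still reports x8 under load, then it's worth looking at slot sharing or riser issues.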


----------



## i_max2k2

Been getting back to doing some benching thanks to the chilly weather, and I'm using the KP 1K ReBAR BIOS (what's the newest version of that?) on my Asus Strix. I was wondering what kind of scores people are getting. I was able to get 19,800 in Time Spy, with PR around 15,400ish, while trying 2190 MHz and stabilizing around 2145/2160 MHz. I'm trying to see if I can get 2175 MHz stable. Btw, this is watercooled, with core temps not exceeding 45°C and, with an active backplate, memory junction not going over 60°C.

What is a good way to overclock? I'm trying to undervolt and then overclock, i.e. lower the curve to -250/300, then pick a voltage and get it to 2205/2190 etc., keeping the memory at about +1250.
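For reference, the curve method described above (global negative offset, then pin one voltage point to the target clock) can be modeled like this. This is an illustrative sketch of what an Afterburner-style curve editor does, not a real driver API, and the curve points are made-up numbers:

```python
# Illustrative model of the undervolt-then-overclock curve method:
# shift the whole V/F curve down by a global offset, then pin a chosen
# voltage point to the target clock and flatten everything above it,
# so the card never requests more voltage than the pinned point.
def undervolt_curve(curve, global_offset_mhz, pin_mv, target_mhz):
    """curve: list of (millivolts, mhz) points in ascending voltage order."""
    out = []
    for mv, mhz in curve:
        if mv < pin_mv:
            out.append((mv, mhz + global_offset_mhz))  # lowered tail
        else:
            out.append((mv, target_mhz))               # flat ceiling at the pin
    return out

stock = [(800, 1900), (850, 1980), (900, 2040), (950, 2100), (1000, 2160)]
tuned = undervolt_curve(stock, global_offset_mhz=-250, pin_mv=900, target_mhz=2190)
# tuned now tops out at 2190 MHz at ~900 mV instead of scaling to 1000 mV.
```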


----------



## Imprezzion

i_max2k2 said:


> Been getting back to do some benching, thanks to the chilli weather, and I'm using the KP 1K rebar bios (whats the newest version for that?) on my Asus Strix. I was wondering what kind of scores are people getting. I was able to get a 19,800 on time spy w/ PR around 15400ish, while trying a 2190Mhz stabilizing around 2145/2160Mhz. I'm trying to see if I can get 2175Mhz stable. Btw this is watercooled with temps on core not exceeding 45c and with an active backplate memory junction not going over 60C.
> 
> What is a good way to overclock?, I'm trying to undervolt and then overclock i.e. lower the curve to -250/300 and then pick a voltage and getting it to 2205/2190 etc. Keeping the memory to about +1250.


I am way more power limited than you are with a 2x8-pin card using the 390w Zotac BIOS, but still. I have it set up with two different profiles: one at max power with +1250 memory and just +165 core, letting it regulate the core speed itself. In light games it can hold 2190 at 1.087-1.093v fine, but in heavier games it throttles back to around 1980 @ 0.925v.

I also have a low power profile with 1860 @ 0.837v locked and +250 VRAM. It stays around 290-310w with that.


----------



## GosuPl

RTX 3090 STRIX vs RTX 3090 Ti vs RTX 4090 STRIX
FP32 / RT ON / DLSS ON
1440p / 4K

Test platform :

i9-13900K 5.7 GHz P/ EOFF / 5.0 Ring EVGA Z690 CLASSIFIED 
G.Skill Trident Z5 6400 MHz CL32.38.38.32 T2 G2 + II/III 
RTX 3090 STRIX PT 480W / OC + 80 / + 1100 Fan ~ 2200 RPM 
RTX 3090 Ti FTW3 PT 480W / OC + 90 / + 800 Fan ~ 2200 RPM 
RTX 4090 STRIX PT 600W / OC + 100/ + 800 Fan ~ 2200 RPM 
be quiet! Dark Pro 12 1500 (bq! 12VHPWR for RTX 3090 Ti / RTX 4090) 
LC 360/45 + 240/45 + 1x D5 for CPU 
Win 10 Pro 
Nvidia Drivers 526.47

Pure testing, with full OSD information on each GPU, runs from 15:09 to 50:58.

Before and after the tests I discuss the topic in Polish. I recommend jumping straight to the tests, unless you know Polish, in which case you're welcome to watch the whole thing.


----------



## Imprezzion

GosuPl said:


> RTX 3090 STRIX vs RTX 3090 Ti vs RTX 4090 STRIX
> FP32 / RT ON / DLSS ON
> 1440p / 4K
> 
> Test platform :
> 
> i9-13900K 5.7 GHz P/ EOFF / 5.0 Ring EVGA Z690 CLASSIFIED
> G.Skill Trident Z5 6400 MHz CL32.38.38.32 T2 G2 + II/III
> RTX 3090 STRIX PT 480W / OC + 80 / + 1100 Fan ~ 2200 RPM
> RTX 3090 Ti FTW3 PT 480W / OC + 90 / + 800 Fan ~ 2200 RPM
> RTX 4090 STRIX PT 600W / OC + 100/ + 800 Fan ~ 2200 RPM
> be quiet! Dark Pro 12 1500 (bq! 12VHPWR for RTX 3090 Ti / RTX 4090)
> LC 360/45 + 240/45 + 1x D5 for CPU
> Win 10 Pro
> Nvidia Drivers 526.47
> 
> Pure testing with full OSD information on each GPU starts at 15:09 and last for 50:58
> 
> Before and after the tests, discussing the topic in Polish. I a recommend to jump to the tests right away, unless someone knows Polish, then I invite you


Great video! I unfortunately don't speak or read Polish, but it seems very well set up and informative! You put a lot of time and effort into this one.

I have been tweaking my OC up to 350w max and settled at 1935 @ 0.881v for now. With the power limit set to 390w it stays nicely around 340-345w in the most demanding games I play and remains stable. I am a little worried about my VRAM temps tho. I have an Alphacool GPX-N full cover block on it, but I'm just running the Inno3D stock backplate and pads. I ordered and received the Alphacool passive plate and some Gelid Extreme pads but haven't put them on yet. At +1250 the VRAM is perfectly stable, but it runs 84°C in games with peaks of 88°C. Hopefully the much thicker Alphacool plate and much better pads will bring it down into at least the 70s. That would give me more peace of mind for the summer.


----------



## cfranko

Hello! I have a 2x8-pin Inno3D iChill X4 3090. Can I flash the XOC 1000W BIOS for daily use on this card?


----------



## Imprezzion

cfranko said:


> Hello! I have a 2 8 pin inno3D iChill X4 3090, can I flash the XOC 1000W bios for daily use on this card?


Same card I have. I use the Zotac Trinity OC BIOS (390w), and the Galax 400w BIOS also works. Mine is water-cooled tho. Can you run the XOC BIOS? Yes, it will limit to 667w, but still. Is it in any way safe or recommended to pull 600w+ out of just the PCIe slot and 2x8-pin? No. Nowhere near.


----------



## yzonker

Imprezzion said:


> Same card I have. I use the Zotac Trinity OC BIOS (390w) and the Galax 400w BIOS also works. It's water-cooled tho. Can you run the XOC BIOS? Yes. It will limit to 667w but still. Is it in any way safe or recommended to pull 600w+ out of just PCI-E slot and 2x8 pin? No. Nowhere near.


I think slot power hits around 100w on my Zotac 3090 with the KP 1kw bios. I ran that for almost 2 years 24/7 without blowing it up, although I didn't let it pull the full 600w+ all the time. I did play all the way through CP2077 twice at 500w along with a couple of other games as well as running many many benchmark runs at full power (550-620w).


----------



## Imprezzion

yzonker said:


> I think slot power hits around 100w on my Zotac 3090 with the KP 1kw bios. I ran that for almost 2 years 24/7 without blowing it up, although I didn't let it pull the full 600w+ all the time. I did play all the way through CP2077 twice at 500w along with a couple of other games as well as running many many benchmark runs at full power (550-620w).


Agreed, and I have done similar on, for example, a 2080 Ti with an XOC BIOS; that thing had no issues drawing 560w in 3DMark. But we don't know what motherboard he has or whether the PCIe slot's power delivery is good enough, nor do we know what wire gauge his 8-pins are. On 14 or 16 gauge it should be fine. On 18 or a split 2x 6+2 connector? Nope.
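To put rough numbers on the wire-gauge argument, here is a back-of-envelope sketch. The per-conductor ampacity figures are assumptions for illustration, not spec values; check your PSU's documentation for real ratings:

```python
# Back-of-envelope check: current per 12 V conductor in each 8-pin PCIe
# cable at a given total board power. Assumes the slot supplies its spec
# maximum and the rest splits evenly across the 8-pin cables.
SLOT_POWER_W = 75.0       # PCIe slot contribution (spec maximum)
CONDUCTORS_PER_8PIN = 3   # an 8-pin PCIe connector has three 12 V wires

# Rough per-conductor limits -- assumed figures, not from any spec.
AMPACITY_A = {16: 10.0, 18: 7.0}

def amps_per_wire(board_power_w: float, n_8pin: int = 2) -> float:
    cable_w = max(board_power_w - SLOT_POWER_W, 0.0)
    return cable_w / n_8pin / CONDUCTORS_PER_8PIN / 12.0

for total in (390, 450, 600):
    a = amps_per_wire(total)
    print(f"{total} W -> {a:.1f} A per wire "
          f"(16 AWG ok: {a <= AMPACITY_A[16]}, 18 AWG ok: {a <= AMPACITY_A[18]})")
```

Under these assumptions, a 390w BIOS sits comfortably on either gauge, while 600w+ starts crowding the limit of thinner 18 AWG harnesses, which is the point being made above.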


----------



## cfranko

Imprezzion said:


> Agreed, and I have done similar on for example a 2080 Ti with an XOC BIOS, that thing had no issues drawing 560w in 3DMark, but we don't know what motherboard he has and if the PCI-E slot and power delivery is good enough not do we know what wire gauge his 8 pins are. On 14 or 16 gauge it should be fine. On 18 or a split 2x 6+2 connector? Nope.


I have a Corsair RM750 and use its own cables, no extension or sleeved cables. I also have an MSI Z690 Pro. I could use the XOC BIOS, lower the power limit slider all the way down, and set it to around 450w to stay safe, I guess. Why use the Gigabyte or Zotac BIOS and hard limit yourself to 390w when you can lower the PL on the XOC BIOS? Asking because I genuinely don't know.


----------



## Imprezzion

cfranko said:


> I have a corsair rm750 and use its own cables, no extra sleeve cables. I also have a MSI Z690 Pro. I could use the XOC bios and lower the power limit slider all the way down and set it to around 450w to stay safe I guess, why use gigabyte or zotac bios and hard limit yourself to 390w when you can lower the PL on the XOC bios? Asking because I genuinely don’t know.


Because power balancing is probably better with a native 390-400w BIOS, and I don't really want more power. VRAM already gets up to 86c even under water due to the bad backplate, and power is expensive. I bought this card (a 3090 in general) because it is by far the most power-efficient card at ~350w. It draws the same power as my 3080 10GB did, but it's way, way faster. I don't want it to run 450-500w, because at that power draw I would've been better off with a 4080/4090.

Besides, I have my doubts an RM750 can run that BIOS, even at minimum limits, without triggering OCP due to transient spikes. I'm impressed it even runs at stock without random shutdowns, tbh, on a Z690 platform, so probably a 12700K or higher, which has a really high power draw on its own.


----------



## cfranko

Imprezzion said:


> Because power balancing is probably better with a native 390-400w BIOS and I don't really want more power. VRAM already gets up to 86c even under water due to the bad backplate and power is expensive. I bought this card (a 3090 in general) because it is by far the most power efficient card at ~350w. It draws the same power as my 3080 10GB did but it's way way faster. I don't want it to run 450-500w because at that power draw I would've been better off with a 4080/4090.
> 
> Besides, I have my doubts a RM750 can run that BIOS, even at minimum limits, without triggering OCP due to transient spikes. I'm impressed it even runs it at stock without random shutdowns tbh with a Z690 platform so probably 12700K or higher which has a really high power draw on it's own..


Previously I had a 6900 XT; that card's power limit could be adjusted super easily, without flashing a BIOS, just by using the MPT software. The PSU was fine even when the card was pulling 550w, so I think the PSU would be fine, and I have a 12900K. Anyway, which 390/400 watt BIOS is best for the iChill X4? I was thinking of the 400w Galax BIOS, if it lets me use all the DisplayPort outputs and not lose any. Thanks for the explanation btw.


----------



## Imprezzion

cfranko said:


> Previously I had a 6900 xt, that card’s power limit could be adjusted super easily without flashing a bios just by using the MPT software. The PSU was fine even when the card was pulling 550w so I think the PSU would be fine and I have a 12900K. Anyway, which 390/400 watt bios is best for iChill X4? I was thinking of the 400w Galax bios if it allows me to use all displayport ports and not lose any. Thanks for the explanation btw


I use the Zotac Trinity OC 390w BIOS, the newest ReBAR one, straight from the TechPowerUp BIOS database. All ports work with that BIOS. The Galax one probably would too, but I haven't tested all the ports with that one. I use 2x DP, 1x HDMI.


----------



## cfranko

Imprezzion said:


> I use the Zotac Trinity OC 390w newest ReBAR one straight from techpowerup bios database. All ports work with that BIOS. Galax one probably would work but haven't tested all the ports with that one. I use 2x DP 1x HDMI.


Also, what is the safest method of flashing a BIOS? I am really concerned about the card being bricked in the process. I thought of going into HiveOS and using its built-in BIOS flasher, because it does everything automatically.


----------



## Imprezzion

cfranko said:


> Also what is the safest method of flashing a bios? I am really concerned about the card being bricked in the process. I thought of going into HiveOS and using its built in bios flasher because it does everything automatically.


Just nvflash in Windows. You have an iGPU, so what could go wrong even if the flash fails?
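For the nervous, the usual nvflash sequence can be wrapped in a dry-run script like the sketch below. The flags (`--save`, `--protectoff`, `-6`) are the commonly used nvflash options and the ROM filename is just a placeholder; verify the flags against your nvflash version's help output before a real flash:

```python
# Sketch of the usual nvflash sequence. Dry run by default: it only prints
# the commands it would run. The ROM filename below is a placeholder.
import subprocess

def flash_bios(new_rom: str, backup: str = "backup.rom", dry_run: bool = True):
    steps = [
        ["nvflash64", "--save", backup],   # 1. back up the current BIOS first
        ["nvflash64", "--protectoff"],     # 2. disable the EEPROM write protect
        ["nvflash64", "-6", new_rom],      # 3. flash, overriding the subsystem ID mismatch prompt
    ]
    for cmd in steps:
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)  # real flash: run from an elevated prompt

flash_bios("zotac_trinity_oc_390w.rom")  # placeholder filename
```

Keeping the backup ROM somewhere safe means even a bad flash is recoverable by booting off the iGPU and flashing the backup back.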


----------



## Yukss




----------



## LaBestiaHumana

Celebrating its 1 year anniversary lol


----------



## J7SC

...haven't posted here for a while, but my 3090 Strix (displaced by an RTX 4090; on the right in the bottom pic) is now running great in its new home, which it shares with 2x 2080 Ti (the latter in NVLink). It actually works very smoothly given the Windows 10 options for which card to run as primary. The old Threadripper system is being wrangled into some work tasks, and the 3090 seems to love it. The pic below is from leak testing (1700W creates a lot of heat!). More pics once it is all fully reassembled.


----------



## gfunkernaught

I put the 520w ReBAR BIOS on my 3090 to experiment with a truly limited power budget, to see how much performance I'd lose coming from 620w and ~100MHz higher on the 1kw BIOS. So far I don't see any real difference. I've been playing GoW at 5760x2340 using the DL DSR option, and it looks a lot better than standard 3840x2160 does on my 4K TV. DSR is a very useful feature, as some games show more aliasing than others, especially when using FSR or DLSS.


----------

